2015-03-27  MIPS: KVM: Drop pr_info messages on init/exit  (James Hogan)
The information messages when the KVM module is loaded and unloaded are a bit pointless and out of line with other architectures, so let's drop them. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Gleb Natapov <gleb@kernel.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2015-03-27  MIPS: KVM: Sort kvm_mips_get_reg() registers  (James Hogan)
Sort the registers in the kvm_mips_get_reg() switch by register number, which puts ERROREPC after the CONFIG registers. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Gleb Natapov <gleb@kernel.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2015-03-27  MIPS: KVM: Implement PRid CP0 register access  (James Hogan)
Implement access to the guest Processor Identification CP0 register using the KVM_GET_ONE_REG and KVM_SET_ONE_REG ioctls. This allows the owning process to modify and read back the value that is exposed to the guest in this register. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Gleb Natapov <gleb@kernel.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2015-03-27  MIPS: KVM: Handle TRAP exceptions from guest kernel  (James Hogan)
Trap instructions are used by Linux to implement BUG_ON(), however KVM doesn't pass trap exceptions on to the guest if they occur in guest kernel mode, instead triggering an internal error "Exception Code: 13, not yet handled". The guest kernel then doesn't get a chance to print the usual BUG message and stack trace. Implement handling of the trap exception so that it gets passed to the guest and the user is left with a more useful log message. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Gleb Natapov <gleb@kernel.org> Cc: kvm@vger.kernel.org Cc: linux-mips@linux-mips.org
2015-03-27  MIPS: Clear [MSA]FPE CSR.Cause after notify_die()  (James Hogan)
When handling floating point exceptions (FPEs) and MSA FPEs the Cause bits of the appropriate control and status register (FCSR for FPEs and MSACSR for MSA FPEs) are read and cleared before enabling interrupts, presumably so that it doesn't have to go through the pain of restoring those bits if the process is pre-empted, since writing those bits would cause another immediate exception while still in the kernel. The bits aren't normally ever restored again, since userland never expects to see them set. However for virtualisation it is necessary for the kernel to be able to restore these Cause bits, as the guest may have been interrupted in an FP exception handler but before it could read the Cause bits. This can be done by registering a die notifier, to get notified of the exception when such a value is restored, and if the PC was at the instruction which is used to restore the guest state, the handler can step over it and continue execution. The Cause bits can then remain set without causing further exceptions. For this to work safely a few changes are made: - __build_clear_fpe and __build_clear_msa_fpe no longer clear the Cause bits, and now return from exception level with interrupts disabled instead of enabled. - do_fpe() now clears the Cause bits and enables interrupts after notify_die() is called, so that the notifier can chose to return from exception without this happening. - do_msa_fpe() acts similarly, but now actually makes use of the second argument (msacsr) and calls notify_die() with the new DIE_MSAFP, allowing die notifiers to be informed of MSA FPEs too. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Gleb Natapov <gleb@kernel.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
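The registration side of such a die notifier follows the standard kernel notifier pattern. The sketch below is illustrative only: the marker symbol and the exact checks are assumptions for this example, not the actual KVM MIPS code.

	#include <linux/kdebug.h>
	#include <linux/notifier.h>

	/* Hypothetical label placed on the instruction that restores the
	 * guest FCSR/MSACSR; not a real kernel symbol. */
	extern char __restore_guest_fcsr_insn[];

	static int fpe_restore_notify(struct notifier_block *nb,
				      unsigned long action, void *data)
	{
		struct die_args *args = data;
		struct pt_regs *regs = args->regs;

		if (action != DIE_FP && action != DIE_MSAFP)
			return NOTIFY_DONE;

		/* Only intercept the exception raised by our own restore. */
		if (regs->cp0_epc != (unsigned long)__restore_guest_fcsr_insn)
			return NOTIFY_DONE;

		regs->cp0_epc += 4;	/* step over the restoring instruction */
		return NOTIFY_STOP;	/* Cause bits stay set, no signal is sent */
	}

	static struct notifier_block fpe_restore_nb = {
		.notifier_call = fpe_restore_notify,
	};

	/* register_die_notifier(&fpe_restore_nb); at init time */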
2015-03-27  MIPS: KVM: Handle MSA Disabled exceptions from guest  (James Hogan)
Guest user mode can generate a guest MSA Disabled exception on an MSA capable core by simply trying to execute an MSA instruction. Since this exception is unknown to KVM it will be passed on to the guest kernel. However guest Linux kernels prior to v3.15 do not set up an exception handler for the MSA Disabled exception as they don't support any MSA capable cores. This results in a guest OS panic. Since an older processor ID may be being emulated, and MSA support is not advertised to the guest, the correct behaviour is to generate a Reserved Instruction exception in the guest kernel so it can send the guest process an illegal instruction signal (SIGILL), as would happen with a non-MSA-capable core. Fix this as minimally as reasonably possible by preventing kvm_mips_check_privilege() from relaying MSA Disabled exceptions from guest user mode to the guest kernel, and handling the MSA Disabled exception by emulating a Reserved Instruction exception in the guest, via a new handle_msa_disabled() KVM callback. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Gleb Natapov <gleb@kernel.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: <stable@vger.kernel.org> # v3.15+
2015-03-27  Merge branch '4.1-fp' of git://git.linux-mips.org/pub/scm/ralf/upstream-sfr into kvm_mips_queue  (James Hogan)
MIPS FP/MSA fixes from the MIPS tree. Includes a fix to ensure that the FPU is properly disabled by lose_fpu() when MSA is in use, and Paul Burton's "FP/MSA fixes" patchset which is required for FP/MSA support in KVM: > This series fixes a bunch of bugs, both build & runtime, with FP & MSA > support. Most of them only affect systems with the new FP modes & MSA > support enabled but patch 6 in particular is more general, fixing > problems for mips64 systems.
2015-03-27  libata: Blacklist queued TRIM on Samsung SSD 850 Pro  (Martin K. Petersen)
Blacklist queued TRIM on this drive for now. Reported-by: Stefan Keller <linux-list@zahlenfresser.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> CC: stable@vger.kernel.org Signed-off-by: Tejun Heo <tj@kernel.org>
2015-03-27  libata: Update Crucial/Micron blacklist  (Martin K. Petersen)
Micron has released an updated firmware (MU02) for M510/M550/MX100 drives to fix the issues with queued TRIM. Queued TRIM remains broken on M500 but is working fine on later drives such as M600 and MX200. Tweak our blacklist to reflect the above. Link: https://bugzilla.kernel.org/show_bug.cgi?id=71371 Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Cc: stable@vger.kernel.org Signed-off-by: Tejun Heo <tj@kernel.org>
2015-03-27  MIPS: MSA: Fix big-endian FPR_IDX implementation  (James Hogan)
The maximum word size is 64 bits, since MSA state is saved using st.d, which stores two 64-bit words; therefore reimplement FPR_IDX using xor, and only within each 64-bit word. Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9169/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
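A minimal sketch of the resulting index calculation (the exact macro in the tree may differ in detail): for big-endian, sub-word FP register indices are flipped only within a 64-bit word, so a 32-bit access XORs the index with 1 while a 64-bit access is left unchanged.

	/* assumed simplified form, for illustration only */
	#ifdef CONFIG_CPU_LITTLE_ENDIAN
	# define FPR_IDX(width, idx)	(idx)
	#else
	/* flip the index within each 64-bit word only */
	# define FPR_IDX(width, idx)	((idx) ^ ((64 / (width)) - 1))
	#endif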
2015-03-27  Revert "MIPS: Don't assume 64-bit FP registers for context switch"  (James Hogan)
This reverts commit 02987633df7ba2f62967791dda816eb191d1add3. The basic premise of the patch was incorrect since MSA context (including FP state) is saved using st.d which stores two consecutive 64-bit words in memory rather than a single 128-bit word. This means that even with big endian MSA, the FP state is still in the first 64-bit word. Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9168/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  MIPS: disable FPU if the mode is unsupported  (Paul Burton)
The expected semantics of __enable_fpu are for the FPU to be enabled in the given mode if possible, otherwise for the FPU to be left disabled and SIGFPE returned. The FPU was incorrectly being left enabled in cases where the desired value for FR was unavailable. Without ensuring the FPU is disabled in this case, it would be possible for userland to go on to execute further FP instructions natively in the incorrect mode, rather than those instructions being trapped & emulated as they need to be. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9167/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  MIPS: prevent FP context set via ptrace being discarded  (Paul Burton)
If a ptracee has not used the FPU and the ptracer sets its FP context using PTRACE_POKEUSR, PTRACE_SETFPREGS or PTRACE_SETREGSET then that context will be discarded upon either the ptracee using the FPU or a further write to the context via ptrace. Prevent this loss by recording that the task has "used" math once its FP context has been written to. The context initialisation code that was present for the PTRACE_POKEUSR case is reused for the other 2 cases to provide consistent behaviour for the different ptrace requests. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9166/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  MIPS: Ensure FCSR cause bits are clear after invoking FPU emulator  (Paul Burton)
When running the emulator to handle an instruction that raised an FP unimplemented operation exception, the FCSR cause bits were being cleared. This is done to ensure that the kernel does not take an FP exception when later restoring FP context to registers. However, this was not being done when the emulator is invoked in response to a coprocessor unusable exception. This happens in 2 cases: - There is no FPU present in the system. In this case things were OK, since the FP context is never restored to hardware registers and thus no FP exception may be raised when restoring FCSR. - The FPU could not be configured to the mode required by the task. In this case it would be possible for the emulator to set cause bits which are later restored to hardware if the task migrates to a CPU whose associated FPU does support its mode requirements, or if the task's FP mode requirements change. Consistently clear the cause bits after invoking the emulator, by moving the clearing to process_fpemu_return and ensuring this is always called before the task's FP context is restored. This will make it easier to catch further paths invoking the emulator in future, as will be introduced in further patches. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9165/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
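The clearing itself is a single mask operation; a hedged sketch of what process_fpemu_return() needs to do before the task's FP context can reach the hardware (FPU_CSR_ALL_X is the existing mask covering all FCSR cause bits):

	/* Never let emulator-set cause bits survive into a context restore,
	 * otherwise writing FCSR back to the FPU would immediately raise an
	 * FP exception while still in the kernel. */
	current->thread.fpu.fcr31 &= ~FPU_CSR_ALL_X;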
2015-03-27  MIPS: clear MSACSR cause bits when handling MSA FP exception  (Paul Burton)
Much like for traditional scalar FP exceptions, the cause bits in the MSACSR register need to be cleared following an MSA FP exception. Without doing so the exception will simply be raised again whenever the kernel restores MSACSR from a task's saved context, leading to undesirable spurious exceptions. Clear the cause bits from the handle_msa_fpe function, mirroring the way handle_fpe clears the cause bits in FCSR. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9164/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  MIPS: wrap cfcmsa & ctcmsa accesses for toolchains with MSA support  (Paul Burton)
Uses of the cfcmsa & ctcmsa instructions were not being wrapped by a macro in the case where the toolchain supports MSA, since the arguments exactly match a typical use of the instructions. However, using current toolchains this leads to errors such as: arch/mips/kernel/genex.S:437: Error: opcode not supported on this processor: mips32r2 (mips32r2) `cfcmsa $5,1' Thus uses of the instructions must be in the context of a ".set msa" directive; however, doing that from the users of the instructions would be messy due to the possibility that the toolchain does not support MSA. Fix this by renaming the macros (prepending an underscore) in order to avoid recursion when attempting to emit the instructions, and provide implementations for the TOOLCHAIN_SUPPORTS_MSA case which apply ".set msa" as appropriate. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9163/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
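The shape of the wrapper for the TOOLCHAIN_SUPPORTS_MSA case is roughly the following; this is a sketch of the approach, not the exact asmmacro.h content:

	/* Underscore-prefixed to avoid clashing with the raw instruction name */
	.macro	_cfcmsa	rd, cs
	.set	push
	.set	msa			# allow the MSA opcode locally
	cfcmsa	\rd, $\cs
	.set	pop
	.endm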
2015-03-27  MIPS: remove MSA macro recursion  (Paul Burton)
Recursive macros made the code more concise & worked great for the case where the toolchain doesn't support MSA. However, with toolchains which do support MSA they lead to build failures such as: arch/mips/kernel/r4k_switch.S: Assembler messages: arch/mips/kernel/r4k_switch.S:148: Error: invalid operands `insert.w $w(0+1)[2],$1' arch/mips/kernel/r4k_switch.S:148: Error: invalid operands `insert.w $w(0+1)[3],$1' arch/mips/kernel/r4k_switch.S:148: Error: invalid operands `insert.w $w((0+1)+1)[2],$1' arch/mips/kernel/r4k_switch.S:148: Error: invalid operands `insert.w $w((0+1)+1)[3],$1' ... Drop the recursion from msa_init_all_upper invoking the msa_init_upper macro explicitly for each vector register. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9162/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  MIPS: assume at as source/dest of MSA copy/insert instructions  (Paul Burton)
Assuming at ($1) as the source or destination register of copy or insert instructions: - Simplifies the macros providing those instructions for toolchains without MSA support. - Avoids an unnecessary move instruction when at is used as the source or destination register anyway. - Is sufficient for the uses to be introduced in the kernel by a subsequent patch. Note that due to a patch ordering snafu on my part this also fixes the currently broken build with MSA support enabled. The build has been broken since commit c9017757c532 "MIPS: init upper 64b of vector registers when MSA is first used", which this patch should have preceded. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9161/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  MIPS: Push .set mips64r* into the functions needing it  (Paul Burton)
The {save,restore}_fp_context{,32} functions require that the assembler allows the use of sdc instructions on any FP register, and this is accomplished by setting the arch to mips64r2 or mips64r6 (using MIPS_ISA_ARCH_LEVEL_RAW). However this has the effect of enabling the assembler to use mips64 instructions in the expansion of pseudo-instructions. This was done in the (now-reverted) commit eec43a224cf1 "MIPS: Save/restore MSA context around signals" which led to my mistakenly believing that there was an assembler bug, when in reality the assembler was just emitting mips64 instructions. Avoid the issue for future commits which will add code to r4k_fpu.S by pushing the .set MIPS_ISA_ARCH_LEVEL_RAW directives into the functions that require it, and remove the spurious assertion declaring the assembler bug. Signed-off-by: Paul Burton <paul.burton@imgtec.com> [james.hogan@imgtec.com: Rebase on v4.0-rc1 and reword commit message to reflect use of MIPS_ISA_ARCH_LEVEL_RAW] Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9612/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  MIPS: lose_fpu(): Disable FPU when MSA enabled  (James Hogan)
The lose_fpu() function only disables the FPU in CP0_Status.CU1 if the FPU is in use and MSA isn't enabled. This isn't necessarily a problem because KSTK_STATUS(current), the version of CP0_Status stored on the kernel stack on entry from user mode, does always get updated and gets restored when returning to user mode, but I don't think it was intended, and it is inconsistent with the case of only the FPU being in use. Sometimes leaving the FPU enabled may also mask kernel bugs where FPU operations are executed when the FPU might not be enabled. So let's disable the FPU in the MSA case too. Fixes: 33c771ba5c5d ("MIPS: save/disable MSA in lose_fpu") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9323/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-03-27  drm/radeon: program the VCE fw BAR as well  (Christian König)
Otherwise the VCE firmware needs to be in the first 256MB of VRAM. Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2015-03-27  drm/radeon: always dump the ring content if it's available  (Christian König)
Dumping is still possible if a ring isn't ready; only when it isn't allocated at all do we need to abort here. Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2015-03-27  radeon: Do not directly dereference pointers to BIOS area.  (David Miller)
Use readb() and memcpy_fromio() accessors instead. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
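In concrete terms, direct dereferences of the ioremap'd BIOS area are replaced with the I/O accessors. The fragment below is illustrative only; the variable and helper names are assumptions, not the driver's actual code:

	static u8 peek_bios_byte(void __iomem *bios, unsigned int offset)
	{
		/* instead of:  return ((u8 *)bios)[offset];  */
		return readb(bios + offset);
	}

	static void copy_bios_block(void *dst, void __iomem *bios,
				    unsigned int offset, size_t len)
	{
		/* instead of:  memcpy(dst, bios + offset, len);  */
		memcpy_fromio(dst, bios + offset, len);
	}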
2015-03-27  drm/radeon/dpm: fix 120hz handling harder  (Alex Deucher)
Need to expand the check to handle short circuiting if the selected state is the same as the current state. Bug: https://bugs.freedesktop.org/show_bug.cgi?id=87796 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2015-03-27  x86/asm/entry/32: Make register zero-extension more prominent  (Denys Vlasenko)
There are a couple of syscall argument zero-extension instructions in the 32-bit compat entry code, and it was mentioned that people keep trying to optimize them out, introducing bugs. Make them more visible, and add a "do not remove" comment. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427452582-21624-3-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
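The instructions in question look deceptively redundant, which is exactly why they keep getting "optimized" away. As an illustration (the register choice here is arbitrary, not a quote of the actual entry-code lines):

	movl	%ebx, %ebx	/* zero-extend %ebx - do NOT remove:
				   the upper 32 bits of the 64-bit register
				   are not guaranteed to be clear on entry
				   from 32-bit user space */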
2015-03-27  x86/asm/entry/32: Update "interrupt off" comments  (Denys Vlasenko)
The existing comment has proven to be not very clear. Replace it with a comment similar to the one we now have in the 64-bit syscall entry point. (Three instances, one per 32-bit syscall entry). In the INT80 entry point's CFI annotations, replace mysterious expressions with numeric constants. In this case, raw numbers look more understandable. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427452582-21624-2-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  x86/asm/entry/64: Add missing CFI annotation  (Denys Vlasenko)
This is a missing bit of the recent MOV-to-PUSH conversion. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427452582-21624-1-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  drm/edid: set ELD for firmware and debugfs override EDIDs  (Jani Nikula)
If the user supplies EDID through firmware or debugfs override, the driver callbacks are bypassed and the connector ELD does not get updated, and audio fails. Set ELD for firmware and debugfs EDIDs too. There should be no harm in gratuitously doing this for non HDMI/DP connectors, as it's still up to the driver to use the ELD, if any. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82349 Reference: https://bugs.freedesktop.org/show_bug.cgi?id=80691 Reported-by: Emil <emilsvennesson@gmail.com> Reported-by: Rob Engle <grenoble@gmail.com> Tested-by: Jolan Luff <jolan@gormsby.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2015-03-27  x86/asm/entry/64: Fix comment about SYSENTER MSRs  (Denys Vlasenko)
The comment is ancient; it dates to the time when only AMD's x86_64 implementation existed. AMD wasn't (and still isn't) supporting SYSENTER, so these writes were "just in case" back then. This has changed: Intel's x86_64 appeared, and Intel does support SYSENTER in long mode. "Some future 64-bit CPU" is here already. The code may appear "buggy" for AMD as it stands, since MSR_IA32_SYSENTER_EIP is only 32-bit for AMD CPUs. Writing a kernel function's address to it would drop high bits. Subsequent use of this MSR for a branch via SYSENTER would seem to allow the user to transition to CPL0 while executing their code. Scary, eh? Explain why that is not a bug: because the SYSENTER insn would not work on AMD CPUs. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427453956-21931-1-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  locks: fix file_lock deletion inside loop  (Yan, Zheng)
locks_delete_lock_ctx() is called inside the loop, so we should use list_for_each_entry_safe. Fixes: 8634b51f6ca2 (locks: convert lease handling to file_lock_context) Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
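The general pattern being applied: when the loop body may unlink the current entry, the _safe iterator variant keeps a lookahead cursor so the traversal survives the removal. The sketch below is schematic; the predicate is hypothetical and the deletion call is shown only to indicate where the entry is removed:

	static void drop_stale_leases(struct file_lock_context *ctx,
				      struct list_head *dispose)
	{
		struct file_lock *fl, *tmp;

		/* _safe variant: fl may be unlinked inside the loop body */
		list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
			if (lease_should_go(fl))	/* hypothetical predicate */
				locks_delete_lock_ctx(fl, dispose);
		}
	}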
2015-03-27  firmware: dmi_scan: Prevent dmi_num integer overflow  (Jean Delvare)
dmi_num is a u16, dmi_len is a u32, so this construct: dmi_num = dmi_len / 4; would result in an integer overflow for a DMI table larger than 256 kB. I've never seen such a large table so far, but SMBIOS 3.0 makes it possible, so maybe we'll see such tables in the future. So instead of faking a structure count when the entry point does not provide it, adjust the loop condition in dmi_table() to properly deal with the case where dmi_num is not set. This bug was introduced with the initial SMBIOS 3.0 support in commit fc43026278b2 ("dmi: add support for SMBIOS 3.0 64-bit entry point"). Signed-off-by: Jean Delvare <jdelvare@suse.de> Cc: Matt Fleming <matt.fleming@intel.com> Cc: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Cc: <stable@vger.kernel.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
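A small illustration of the truncation (pure arithmetic, not kernel code):

	u32 dmi_len = 300 * 1024;	/* a >256 kB SMBIOS 3.0 table */
	u16 dmi_num = dmi_len / 4;	/* 76800 truncates to 11264 in a u16 */

With the fix described above, the structure count is simply left at 0 when the entry point does not provide one, and the walk in dmi_table() is bounded by dmi_len alone in that case.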
2015-03-27  gpio: syscon: reduce message level when direction reg offset not in dt  (Grygorii Strashko)
The GPIO syscon driver currently produces a bunch of warnings during the boot of Keystone 2 SoCs: gpio-syscon soc:keystone_dsp_gpio@02620240: can't read the dir register offset! gpio-syscon soc:keystone_dsp_gpio@2620244: can't read the dir register offset! This message was unintentionally added using dev_err(), but its actual log level is debug, because the third cell of "ti,syscon-dev" is optional. Hence change it to dev_dbg() as it should be. This patch fixes commit: 5a3e3f8 ("gpio: syscon: retriave syscon node and regs offsets from dt") Reported-by: Russell King <linux@arm.linux.org.uk> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
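The change itself is a one-line log-level adjustment, shown here as a schematic diff (the device pointer and exact call-site formatting are assumed for illustration):

	-	dev_err(dev, "can't read the dir register offset!\n");
	+	dev_dbg(dev, "can't read the dir register offset!\n");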
2015-03-27  Merge branch 'upstream' of git://git.infradead.org/users/pcmoore/selinux into for-linus  (James Morris)
2015-03-27  clockevents: Don't validate dev->mode against CLOCK_EVT_MODE_UNUSED for new interface  (Viresh Kumar)
It was a requirement in the legacy interface that drivers must initialize the ->mode field to 'CLOCK_EVT_MODE_UNUSED'. This field isn't used anymore by the new interface and so should only be checked for the legacy interface. Probably it can be dropped as well, as the core doesn't rely on it anymore, but let's keep it to support the legacy interface. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Kevin Hilman <khilman@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com> Cc: linaro-kernel@lists.linaro.org Cc: linaro-networking@linaro.org Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/c6604fa1a77fe1fc8dcab87769857228fb1dadd5.1425037853.git.viresh.kumar@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  clockevents: Manage device's state separately for the core  (Viresh Kumar)
'enum clock_event_mode' is used for two purposes today: - to pass the mode to the driver via clockevent device::set_mode(). - for managing the state of the device for the clockevents core. For supporting new modes/states we have moved away from the legacy set_mode() callback to new per-mode/state callbacks. New modes/states shouldn't be exposed to the legacy (now OBSOLETE) callbacks and so we shouldn't add new states to 'enum clock_event_mode'. Let's have separate enums for the two use cases mentioned above. Keep using the earlier enum for the legacy set_mode() callback and mark it OBSOLETE. And add another enum to clearly specify the possible states of a clockevent device. This also renames the newly added per-mode callbacks to reflect state changes. We haven't got rid of the 'mode' member of 'struct clock_event_device' as it is used by some of the clockevent drivers and it would automatically die down once we migrate those drivers to the new interface. It ('mode') is now only updated for the drivers using the legacy interface. Suggested-by: Peter Zijlstra <peterz@infradead.org> Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Kevin Hilman <khilman@linaro.org> Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com> Cc: linaro-kernel@lists.linaro.org Cc: linaro-networking@linaro.org Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/b6b0143a8a57bd58352ad35e08c25424c879c0cb.1425037853.git.viresh.kumar@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
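Roughly, the split leaves the old enum for set_mode() and introduces a state enum owned by the core, along these lines (a sketch; the names as eventually merged may differ in detail):

	/* legacy values, passed to ->set_mode(), now obsolete */
	enum clock_event_mode {
		CLOCK_EVT_MODE_UNUSED,
		CLOCK_EVT_MODE_SHUTDOWN,
		CLOCK_EVT_MODE_PERIODIC,
		CLOCK_EVT_MODE_ONESHOT,
		CLOCK_EVT_MODE_RESUME,
	};

	/* device state as tracked by the clockevents core */
	enum clock_event_state {
		CLOCK_EVT_STATE_DETACHED,
		CLOCK_EVT_STATE_SHUTDOWN,
		CLOCK_EVT_STATE_PERIODIC,
		CLOCK_EVT_STATE_ONESHOT,
	};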
2015-03-27  clockevents: Handle tick device's resume separately  (Viresh Kumar)
An upcoming patch will redefine the possible states of a clockevent device. The RESUME mode is a special case only for the tick's clockevent devices. In future it can be replaced by the ->resume() callback already available for clockevent devices. Let's handle it separately so that clockevents_set_mode() only handles states valid across all devices. This also renames set_mode_resume() to tick_resume() to make it more explicit. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Kevin Hilman <khilman@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com> Cc: linaro-kernel@lists.linaro.org Cc: linaro-networking@linaro.org Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/c1b0112410870f49e7bf06958e1483eac6c15e20.1425037853.git.viresh.kumar@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  x86/irq/tracing: Do not save callee-preserved registers around lockdep_sys_exit_thunk  (Denys Vlasenko)
Internally, lockdep_sys_exit_thunk saves callee-clobbered registers and calls a C function, lockdep_sys_exit. Thus, callee-preserved registers won't be mangled, and there is no need to save them. Patch was run-tested. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427314468-12763-4-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  x86/irq/tracing: Fold ARCH_LOCKDEP_SYS_EXIT defines into their users  (Denys Vlasenko)
There is no need to have an extra level of macro indirection here. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427314468-12763-3-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  x86/irq/tracing: Move ARCH_LOCKDEP_SYS_EXIT defines closer to their users  (Denys Vlasenko)
This change simply moves defines around (even if it's not obvious in a patch form). Nothing is changed. This is a preparation for folding ARCH_LOCKDEP_SYS_EXIT defines into their users. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427314468-12763-2-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  x86/asm/entry/64: Use smaller instructions  (Denys Vlasenko)
The $AUDIT_ARCH_X86_64 parameter to syscall_trace_enter_phase1/2 is a 32-bit constant, loading it with 32-bit MOV produces 5-byte insn instead of 10-byte MOVABS one. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427303896-24023-3-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
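As an illustration of the size difference (the exact register differs per call site; this is not a quote of the entry code):

	movl	$AUDIT_ARCH_X86_64, %esi	/* 5 bytes; zero-extends into %rsi */
	movq	$AUDIT_ARCH_X86_64, %rsi	/* assembles to a 10-byte MOVABS   */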
2015-03-27  x86/asm/entry/64: Use better label name, fix comments  (Denys Vlasenko)
A named label "ret_from_sys_call" implies that there are jumps to this location from elsewhere, as happens with many other labels in this file. But this label is used only by the JMP a few insns above. To make that obvious, use local numeric label instead. Improve comments: "and return regs->ax" isn't too informative. We always return regs->ax. The comment suggesting that it'd be cool to use rip relative addressing for CALL is deleted. It's unclear why that would be an improvement - we aren't striving to use position-independent code here. PIC code here would require something like LEA sys_call_table(%rip),reg + CALL *(reg,%rax*8)... "iret frame is also incomplete" is no longer true, fix that too. Also fix typo in comment. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1427303896-24023-1-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  time: Add ktime_get_tai_ns()  (Peter Zijlstra)
Because it was the only clock for which we didn't have a _ns() accessor yet. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  time: Introduce tk_fast_raw  (Peter Zijlstra)
Add the NMI-safe CLOCK_MONOTONIC_RAW accessor. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: John Stultz <john.stultz@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20150319093400.562746929@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  time: Parametrize all tk_fast_mono users  (Peter Zijlstra)
In preparation for more tk_fast instances, remove all hard-coded tk_fast_mono references. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: John Stultz <john.stultz@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20150319093400.484279927@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  time: Add timekeeper::tkr_raw  (Peter Zijlstra)
Introduce tkr_raw and make use of it. base_raw -> tkr_raw.base clock->{mult,shift} -> tkr_raw.{mult,shift} Kill timekeeping_get_ns_raw() in favour of timekeeping_get_ns(&tkr_raw); this removes all mono_raw special casing. Duplicate the updates to tkr_mono.cycle_last into tkr_raw.cycle_last; both need the same value. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: John Stultz <john.stultz@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20150319093400.422589590@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  time: Rename timekeeper::tkr to timekeeper::tkr_mono  (Peter Zijlstra)
In preparation of adding another tkr field, rename this one to tkr_mono. Also rename tk_read_base::base_mono to tk_read_base::base, since the structure is not specific to CLOCK_MONOTONIC and the mono name got added to the tk_read_base instance. Lots of trivial churn. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: John Stultz <john.stultz@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20150319093400.344679419@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  locking: Remove atomicity checks from {READ,WRITE}_ONCE  (Peter Zijlstra)
The fact that volatile allows for atomic load/stores is a special case not a requirement for {READ,WRITE}_ONCE(). Their primary purpose is to force the compiler to emit load/stores _once_. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
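A deliberately simplified model of what these macros are for (the kernel's actual READ_ONCE()/WRITE_ONCE() definitions are more elaborate): the volatile cast forces the compiler to emit exactly one load or store of the given width, but it does not by itself guarantee that the access is atomic.

	/* simplified illustration, not the kernel's real definitions */
	#define __ONCE_READ(x)		(*(volatile typeof(x) *)&(x))
	#define __ONCE_WRITE(x, val)	(*(volatile typeof(x) *)&(x) = (val))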
2015-03-27  sched/deadline: Fix rt runtime corruption when dl fails its global constraints  (Wanpeng Li)
One version of sched_rt_global_constraints() (the !rt-cgroup one) changes state; therefore, if we fail the later sched_dl_global_constraints() call, the state is left inconsistent. Fix this by changing the order of the calls. Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> [ Improved the changelog. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Juri Lelli <juri.lelli@arm.com> Link: http://lkml.kernel.org/r/1426590931-4639-2-git-send-email-wanpeng.li@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  sched/deadline: Avoid a superfluous check  (Wanpeng Li)
Since commit 40767b0dc768 ("sched/deadline: Fix deadline parameter modification handling") we clear the throttled state when switching from a dl task, therefore we should never find it set when switching to a dl task. Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> [ Improved the changelog. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Juri Lelli <juri.lelli@arm.com> Link: http://lkml.kernel.org/r/1426590931-4639-1-git-send-email-wanpeng.li@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27  sched: Improve load balancing in the presence of idle CPUs  (Preeti U Murthy)
When a CPU is kicked to do nohz idle balancing, it wakes up to do load balancing on itself, followed by load balancing on behalf of idle CPUs. But it may end up with load after the load balancing attempt on itself. This aborts nohz idle balancing. As a result several idle CPUs are left without tasks till such a time that an ILB CPU finds it unfavorable to pull tasks upon itself. This delays spreading of load across idle CPUs and worse, clutters only a few CPUs with tasks. The effect of the above problem was observed on an SMT8 POWER server with 2 levels of numa domains. Busy loops equal to number of cores were spawned. Since load balancing on fork/exec is discouraged across numa domains, all busy loops would start on one of the numa domains. However it was expected that eventually one busy loop would run per core across all domains due to nohz idle load balancing. But it was observed that it took as long as 10 seconds to spread the load across numa domains. Further investigation showed that this was a consequence of the following: 1. An ILB CPU was chosen from the first numa domain to trigger nohz idle load balancing [Given the experiment, upto 6 CPUs per core could be potentially idle in this domain.] 2. However the ILB CPU would call load_balance() on itself before initiating nohz idle load balancing. 3. Given cores are SMT8, the ILB CPU had enough opportunities to pull tasks from its sibling cores to even out load. 4. Now that the ILB CPU was no longer idle, it would abort nohz idle load balancing As a result the opportunities to spread load across numa domains were lost until such a time that the cores within the first numa domain had equal number of tasks among themselves. This is a pretty bad scenario, since the cores within the first numa domain would have as many as 4 tasks each, while cores in the neighbouring numa domains would all remain idle. Fix this, by checking if a CPU was woken up to do nohz idle load balancing, before it does load balancing upon itself. This way we allow idle CPUs across the system to do load balancing which results in quicker spread of load, instead of performing load balancing within the local sched domain hierarchy of the ILB CPU alone under circumstances such as above. Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Jason Low <jason.low2@hp.com> Cc: benh@kernel.crashing.org Cc: daniel.lezcano@linaro.org Cc: efault@gmx.de Cc: iamjoonsoo.kim@lge.com Cc: morten.rasmussen@arm.com Cc: pjt@google.com Cc: riel@redhat.com Cc: srikar@linux.vnet.ibm.com Cc: svaidy@linux.vnet.ibm.com Cc: tim.c.chen@linux.intel.com Cc: vincent.guittot@linaro.org Link: http://lkml.kernel.org/r/20150326130014.21532.17158.stgit@preeti.in.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
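A hedged sketch of the resulting ordering in the fair-scheduler softirq handler (function and helper names as in that era's kernel/sched/fair.c; details of the actual change may differ):

	static void run_rebalance_domains(struct softirq_action *h)
	{
		struct rq *this_rq = this_rq();
		enum cpu_idle_type idle = this_rq->idle_balance ?
						CPU_IDLE : CPU_NOT_IDLE;

		/* Balance on behalf of the idle CPUs first, so that pulling
		 * load onto this CPU cannot abort the nohz pass. */
		nohz_idle_balance(this_rq, idle);
		rebalance_domains(this_rq, idle);
	}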