path: root/arch/x86
2014-06-21  x86, acpi: Reorganize code to avoid forward declaration in boot.c  (Jiang Liu)
Reorganize code to avoid a forward declaration in boot.c; no functional changes. Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Grant Likely <grant.likely@linaro.org> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Len Brown <len.brown@intel.com> Cc: Pavel Machek <pavel@ucw.cz> Link: http://lkml.kernel.org/r/1402302011-23642-5-git-send-email-jiang.liu@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-06-21  x86, mpparse: Simplify arch/x86/include/asm/mpspec.h  (Jiang Liu)
Simplify arch/x86/include/asm/mpspec.h by:
 1) Changing max_physical_apicid to static, as it's only used in apic.c.
 2) Killing the declaration of mpc_default_type; it's never defined.
 3) Deleting the duplicate declaration of default_acpi_madt_oem_check(); it has already been declared in apic.h.
 4) Making default_acpi_madt_oem_check() depend on CONFIG_X86_LOCAL_APIC instead of CONFIG_X86_64, to support i386.
 5) Marking mp_override_legacy_irq(), mp_config_acpi_legacy_irqs() and mp_register_gsi() static, because they are only used in acpi/boot.c.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: H. Peter Anvin <hpa@linux.intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Grant Likely <grant.likely@linaro.org> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Len Brown <len.brown@intel.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Seiji Aguchi <seiji.aguchi@hds.com> Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Richard Weinberger <richard@nod.at> Cc: Andi Kleen <ak@linux.intel.com> Link: http://lkml.kernel.org/r/1402302011-23642-4-git-send-email-jiang.liu@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-06-21  x86, mpparse: Use pr_lvl() helper utilities to replace printk(KERN_LVL)  (Jiang Liu)
Use the pr_lvl() helper utilities to replace printk(KERN_LVL) for readability; no functional changes. Also use pr_cont() to avoid multiple newlines in one printk(). Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Grant Likely <grant.likely@linaro.org> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Yinghai Lu <yinghai@kernel.org> Link: http://lkml.kernel.org/r/1402302011-23642-3-git-send-email-jiang.liu@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
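[To illustrate the conversion, a minimal sketch — not code from the patch; the message text and variables are invented:]

  #include <linux/printk.h>

  /* Sketch only: old-style printk vs. the pr_lvl() helpers. */
  static void report_mp(void *mpf, int nprocs)
  {
          printk(KERN_INFO "MP table at %p\n", mpf);      /* before */
          pr_info("MP table at %p\n", mpf);               /* after  */

          /* pr_cont() appends to the previous line instead of starting
           * a new printk record, avoiding stray newlines: */
          pr_info("Processors:");
          pr_cont(" %d\n", nprocs);
  }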
2014-06-21  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull perf fixes from Ingo Molnar: "This is larger than usual: the main reason is the ARM symbol lookup speedups that came in late and were hard to resist. There's also a kprobes fix and various tooling fixes, plus the minimal re-enablement of the mmap2 support interface"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
  x86/kprobes: Fix build errors and blacklist context_track_user
  perf tests: Add test for closing dso objects on EMFILE error
  perf tests: Add test for caching dso file descriptors
  perf tests: Allow reuse of test_file function
  perf tests: Spawn child for each test
  perf tools: Add dso__data_* interface descriptions
  perf tools: Allow to close dso fd in case of open failure
  perf tools: Add file size check and factor dso__data_read_offset
  perf tools: Cache dso data file descriptor
  perf tools: Add global count of opened dso objects
  perf tools: Add global list of opened dso objects
  perf tools: Add data_fd into dso object
  perf tools: Separate dso data related variables
  perf tools: Cache register accesses for unwind processing
  perf record: Fix to honor user freq/interval properly
  perf timechart: Reflow documentation
  perf probe: Improve error messages in --line option
  perf probe: Improve an error message of perf probe --vars mode
  perf probe: Show error code and description in verbose mode
  perf probe: Improve error message for unknown member of data structure
  ...
2014-06-20  x86/vdso: Create .build-id links for unstripped vdso files  (Andy Lutomirski)
With this change, doing 'make vdso_install' and telling gdb:

  set debug-file-directory /lib/modules/KVER/vdso

will enable vdso debugging with symbols. This is useful for testing, but kernel RPM builds will probably want to manually delete these symlinks or otherwise do something sensible when they strip the vdso/*.so files. If ld does not support --build-id, then the symlinks will not be created. Note that kernel packagers that use vdso_install may need to adjust their packaging scripts to accommodate this change. For example, Fedora's scripts create build-id symlinks themselves in a different location, so the spec should probably be updated to remove the symlinks created by make vdso_install. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/a424b189ce3ced85fe1e82d032a20e765e0fe0d3.1403291930.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-20  crypto: aes - AES CTR x86_64 "by8" AVX optimization  (chandramouli narayanan)
This patch introduces a "by8" AES CTR mode AVX optimization inspired by the Intel Optimized IPSEC Cryptographic library. For additional information, please see: http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=22972

The functions aes_ctr_enc_128_avx_by8(), aes_ctr_enc_192_avx_by8() and aes_ctr_enc_256_avx_by8() are adapted from the Intel Optimized IPSEC Cryptographic library. When both the AES and AVX features are enabled on a platform, the glue code in the AESNI module overrides the existing "by4" CTR mode en/decryption with the "by8" AES CTR mode en/decryption.

On a Haswell desktop, with turbo disabled and all cpus running at maximum frequency, the "by8" CTR mode optimization shows better performance results across data & key sizes as measured by tcrypt. The average performance improvement of the "by8" version over the "by4" version: for 128 bit keys and data sizes >= 256 bytes, 10-16%; for 192 bit keys and data sizes >= 256 bytes, 20-22%; for 256 bit keys and data sizes >= 256 bytes, 20-25%.

A typical run of tcrypt with AES CTR mode encryption of the "by4" and "by8" optimizations shows the following results (__ctr-aes-aesni, cycles per operation):

tcrypt with "by4" AES CTR mode optimization on a Haswell Desktop:
-----------------------------------------------------------------
  key/op        16B   64B  256B  1024B  8192B
  128-bit enc   343   336   491   1130   7309
  192-bit enc   346   361   543   1321   9649
  256-bit enc   369   366   595   1531  10522
  128-bit dec   336   350   487   1129   7287
  192-bit dec   350   359   635   1324   9595
  256-bit dec   364   377   604   1527  10549

tcrypt with "by8" AES CTR mode optimization on a Haswell Desktop:
-----------------------------------------------------------------
  key/op        16B   64B  256B  1024B  8192B
  128-bit enc   340   330   450   1043   6597
  192-bit enc   339   352   539   1153   8458
  256-bit enc   353   360   512   1277   8745
  128-bit dec   348   335   451   1030   6611
  192-bit dec   354   346   488   1154   8390
  256-bit dec   357   362   515   1284   8681

crypto: Incorporate feedback on the AES CTR mode optimization patch. Specifically: a) alignment around the main loop in aes_ctrby8_avx_x86_64.S; b) .rodata around data constants used in the assembly code; c) the use of CONFIG_AVX in the glue code; d) fixed-up white space; e) an informational message for the "by8" AES CTR mode optimization; f) the "by8" AES CTR mode optimization can simply be enabled if the platform supports both the AES and AVX features. The optimization works superbly on Sandybridge as well. Testing on Haswell shows no performance change since the last revision. Testing on Sandybridge shows that the "by8" AES CTR mode optimization greatly improves performance.

tcrypt log with "by4" AES CTR mode optimization on Sandybridge:
---------------------------------------------------------------
  key/op        16B   64B  256B  1024B  8192B
  128-bit enc   383   408   707   1864  12813
  192-bit enc   395   432   780   2132  15765
  256-bit enc   416   438   842   2383  16945
  128-bit dec   389   409   704   1865  12783
  192-bit dec   409   434   792   2151  15804
  256-bit dec   421   444   840   2394  16928

tcrypt log with "by8" AES CTR mode optimization on Sandybridge:
---------------------------------------------------------------
  key/op        16B   64B  256B  1024B  8192B
  128-bit enc   383   401   522   1136   7046
  192-bit enc   394   418   559   1263   9072
  256-bit enc   408   428   595   1385   9224
  128-bit dec   390   402   530   1135   7079
  192-bit dec   414   417   572   1312   9073
  256-bit dec   415   454   598   1407   9288

crypto: Fix redundant checks: a) fix the redundant check for cpu_has_aes; b) fix the key length check when invoking the CTR mode "by8" encryptor/decryptor. crypto: fix a typo in the AES ctr mode transform. Signed-off-by: Chandramouli Narayanan <mouli@linux.intel.com> Reviewed-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20  crypto: des_3des - add x86-64 assembly implementation  (Jussi Kivilinna)
This patch adds an x86_64 assembly implementation of the Triple DES EDE cipher algorithm. Two assembly implementations are provided: the first is a regular 'one block at a time' encrypt/decrypt function; the second is a 'three blocks at a time' function that gains a performance increase on out-of-order CPUs.

tcrypt test results, Intel Core i5-4570, des3_ede-asm vs des3_ede-generic:

  size    ecb-enc  ecb-dec  cbc-enc  cbc-dec  ctr-enc  ctr-dec
  16B       1.21x    1.22x    1.27x    1.36x    1.25x    1.25x
  64B       1.98x    1.96x    1.23x    2.04x    2.01x    2.00x
  256B      2.34x    2.37x    1.21x    2.40x    2.38x    2.39x
  1024B     2.50x    2.47x    1.22x    2.51x    2.52x    2.51x
  8192B     2.51x    2.53x    1.21x    2.56x    2.54x    2.55x

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20  crypto: crc32c-pclmul - Shrink K_table to 32-bit words  (George Spelvin)
There's no need for the K_table to be made of 64-bit words. For some reason, the original authors didn't fully reduce the values modulo the CRC32C polynomial, and so had some 33-bit values in there. They can all be reduced to 32 bits. Doing that cuts the table size in half. Since the code depends on both pclmulq and crc32, SSE 4.1 is obviously present, so we can use pmovzxdq to fetch it in the correct format. This adds (measured on Ivy Bridge) 1 cycle per main loop iteration (CRC of up to 3K bytes), less than 0.2%. The hope is that the reduced D-cache footprint will make up the loss in other code. Two other related fixes: * K_table is read-only, so belongs in .rodata, and * There's no need for more than 8-byte alignment Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: George Spelvin <linux@horizon.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-19  x86/tsc: Get rid of custom DIV_ROUND() macro  (Michal Nazarewicz)
When invoked for positive values, the DIV_ROUND macro defined in arch/x86/kernel/tsc.c behaves exactly like DIV_ROUND_CLOSEST from include/linux/kernel.h, so remove the custom macro in favour of the shared one. [ hpa: changed line breaks ] Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Link: http://lkml.kernel.org/r/1403143116-21755-1-git-send-email-mina86@mina86.com Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
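[For reference, a sketch of the shared helper's behavior for positive operands — the example function is invented:]

  #include <linux/kernel.h>

  /* DIV_ROUND_CLOSEST() rounds to the nearest integer for positive
   * operands: DIV_ROUND_CLOSEST(7, 2) == 4, DIV_ROUND_CLOSEST(5, 2) == 3,
   * i.e. the same as a hand-rolled (n + d / 2) / d. */
  static unsigned long hz_to_khz(unsigned long hz)
  {
          return DIV_ROUND_CLOSEST(hz, 1000);
  }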
2014-06-19  Merge tag 'pm+acpi-3.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm  (Linus Torvalds)
Pull ACPI and power management fixes from Rafael Wysocki: "These are fixes mostly (ia64 regression related to the ACPI enumeration of devices, cpufreq regressions, fix for I2C controllers included in Intel SoCs, mvebu cpuidle driver fix related to sysfs) plus additional kernel command line arguments from Kees to make it possible to build kernel images with hibernation and the kernel address space randomization included simultaneously, a new ACPI battery driver quirk for a system with a broken BIOS and a couple of ACPI core cleanups.

Specifics:
 - Fix for an ia64 regression introduced during the 3.11 cycle by a commit that modified the hardware initialization ordering and made device discovery fail on some systems.
 - Fix for a build problem on systems where the cpufreq-cpu0 driver is built-in and the cpu-thermal driver is modular, from Arnd Bergmann.
 - Fix for a recently introduced computational mistake in the intel_pstate driver that leads to excessive rounding errors, from Doug Smythies.
 - Fix for a failure code path in cpufreq_update_policy() that fails to unlock the locks acquired previously, from Aaron Plattner.
 - Fix for the cpuidle mvebu driver to use shorter state names, which will prevent the sysfs interface from returning mangled strings. From Gregory Clement.
 - ACPI LPSS driver fix to make sure that the I2C controllers included in BayTrail SoCs are not held in the reset state while they are being probed, from Mika Westerberg.
 - New kernel command line arguments making it possible to build kernel images with hibernation and kASLR included at the same time and to select which of them will be used via the command line (they are still functionally mutually exclusive, though). From Kees Cook.
 - ACPI battery driver quirk for Acer Aspire V5-573G, which fails to send battery status change notifications timely, from Alexander Mezin.
 - Two ACPI core cleanups from Christoph Jaeger and Fabian Frederick"

* tag 'pm+acpi-3.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  cpuidle: mvebu: Fix the name of the states
  cpufreq: unlock when failing cpufreq_update_policy()
  intel_pstate: Correct rounding in busy calculation
  ACPI: use kstrto*() instead of simple_strto*()
  ACPI / processor replace __attribute__((packed)) by __packed
  ACPI / battery: add quirk for Acer Aspire V5-573G
  ACPI / battery: use callback for setting up quirks
  ACPI / LPSS: Take I2C host controllers out of reset
  x86, kaslr: boot-time selectable with hibernation
  PM / hibernate: introduce "nohibernate" boot parameter
  cpufreq: cpufreq-cpu0: fix CPU_THERMAL dependency
  ACPI / ia64 / sba_iommu: Restore the working initialization ordering
2014-06-19  x86/vdso: Remove some redundant in-memory section headers  (Andy Lutomirski)
.data doesn't need to be separate from .rodata: they're both readonly. .altinstructions and .altinstr_replacement aren't needed by anything except vdso2c; strip them from the final image. While we're at it, rather than aligning the actual executable text, just shove some unused-at-runtime data in between real data and text. My vdso image is still above 4k, but I'm disinclined to try to trim it harder for 3.16. For future trimming, I suspect that these sections could be moved to later in the file and dropped from the in-memory image: .gnu.version and .gnu.version_d (this may lose versions in gdb) .eh_frame (should be harmless) .eh_frame_hdr (I'm not really sure) .hash (AFAIK nothing needs this section header) Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2e96d0c49016ea6d026a614ae645e93edd325961.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-19  x86/vdso: Improve the fake section headers  (Andy Lutomirski)
Fully stripping the vDSO has other unfortunate side effects: - binutils is unable to find ELF notes without a SHT_NOTE section. - Even elfutils has trouble: it can find ELF notes without a section table at all, but if a section table is present, it won't look for PT_NOTE. - gdb wants section names to match between stripped DSOs and their symbols; otherwise it will corrupt symbol addresses. We're also breaking the rules: section 0 is supposed to be SHT_NULL. Fix these problems by building a better fake section table. While we're at it, we might as well let buggy Go versions keep working well by giving the SHT_DYNSYM entry the correct size. This is a bit unfortunate: it adds quite a bit of size to the vdso image. If/when binutils improves and the improved versions become widespread, it would be worth considering dropping most of this. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/0e546a5eeaafdf1840e6ee654a55c1e727c26663.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-19  x86/vdso2c: Use better macros for ELF bitness  (Andy Lutomirski)
Rather than using a separate macro for each replacement, use generic macros. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/d953cd2e70ceee1400985d091188cdd65fba2f05.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-19  x86/vdso: Discard the __bug_table section  (Andy Lutomirski)
It serves no purpose in user code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2a5bebff42defd8a5e81d96f7dc00f21143c80e8.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-19  Merge tag 'stable/for-linus-3.16-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip  (Linus Torvalds)
Pull Xen fixes from David Vrabel: "Xen regression and PVH fixes for 3.16-rc1:
 - fix dom0 PVH memory setup on latest unstable Xen releases
 - fix 64-bit x86 PV guest boot failure on Xen 3.1 and earlier
 - fix resume regression on non-PV (auto-translated physmap) guests"

* tag 'stable/for-linus-3.16-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/grant-table: fix suspend for non-PV guests
  x86/xen: no need to explicitly register an NMI callback
  Revert "xen/pvh: Update E820 to work with PVH (v2)"
  x86/xen: fix memory setup for PVH dom0
2014-06-19  KVM: x86: preserve the high 32-bits of the PAT register  (Paolo Bonzini)
KVM does not really do much with the PAT, so this went unnoticed for a long time. It is exposed however if you try to do rdmsr on the PAT register. Reported-by: Valentine Sinitsyn <valentine.sinitsyn@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  kvm: fix wrong address when writing Hyper-V tsc page  (Xiaoming Gao)
When kvm_write_guest writes the tsc_ref structure to the guest, the low HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT bits of the TSC page address must be cleared; otherwise the guest can see a non-zero sequence number. Windows guests would then not be able to get a correct clocksource (QueryPerformanceCounter will always return 0), which causes serious chaos. Signed-off-by: Xiaoming Gao <newtongao@tencnet.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
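[The shape of the fix, sketched as a fragment — simplified, not the verbatim diff; the shift constant is the one from the Hyper-V headers, the surrounding variables are assumed from context:]

  /* Only the page-aligned part of the MSR value is the guest address;
   * the low bits hold flag bits and must not reach kvm_write_guest(): */
  u64 gfn  = data >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT;
  u64 addr = gfn << HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT;  /* low bits cleared */

  if (kvm_write_guest(kvm, addr, &tsc_ref, sizeof(tsc_ref)))
          return 1;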
2014-06-19  KVM: vmx: vmx instructions handling does not consider cs.l  (Nadav Amit)
VMX instructions use 32-bit operands in 32-bit mode, and 64-bit operands in 64-bit mode. The current implementation is broken since it does not use the register operands correctly, and always uses 64-bit for reads and writes. Moreover, the write to memory in vmwrite considers only long mode, so it ignores cs.l. This patch fixes this behavior. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: vmx: handle_cr ignores 32/64-bit mode  (Nadav Amit)
In 32-bit mode, only bits [31:0] of the CR should be used for setting the CR value. Otherwise, the host may incorrectly assume the value is invalid if bits [63:32] are not zero. Moreover, the CR is currently being read twice when CR8 is used. Last, nested mov-cr exiting is modified to handle the CR value correctly as well. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
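[The masking this implies, sketched as a fragment — helper names assumed from the surrounding KVM code, not the exact diff:]

  u64 val = kvm_register_read(vcpu, reg);

  if (!is_64_bit_mode(vcpu))
          val = (u32)val;   /* bits [63:32] are not part of the CR value */

  err = kvm_set_cr0(vcpu, val);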
2014-06-19  KVM: x86: Hypercall handling does not consider opsize correctly  (Nadav Amit)
Currently, the hypercall handling routine considers only LME as an indication of whether the guest uses 32/64-bit mode. This is inconsistent with the handling of hyperv hypercalls and against the common sense of considering cs.l as well. This patch uses is_64_bit_mode instead of is_long_mode for that matter. In addition, the result is masked with respect to the guest execution mode. Last, it changes kvm_hv_hypercall to use is_64_bit_mode as well, to simplify the code. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
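[A sketch of the mode check and result masking described above — simplified; run_hypercall() is a hypothetical stand-in for the handler body:]

  int longmode = is_64_bit_mode(vcpu);  /* EFER.LMA and cs.l, not just LME */
  u64 ret = run_hypercall(vcpu);        /* hypothetical stand-in */

  if (!longmode)
          ret = (u32)ret;               /* mask the result for 32-bit callers */
  kvm_register_write(vcpu, VCPU_REGS_RAX, ret);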
2014-06-19  KVM: x86: check DR6/7 high-bits are clear only on long-mode  (Nadav Amit)
When the guest sets DR6 and DR7, KVM asserts that the high 32 bits are clear, and otherwise injects a #GP exception. This exception should be injected only if running in long mode. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: nVMX: Fix returned value of MSR_IA32_VMX_VMCS_ENUM  (Jan Kiszka)
Many real CPUs get this wrong as well, but ours is totally off: bits 9:1 define the highest index value. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: nVMX: Allow to disable VM_{ENTRY_LOAD,EXIT_SAVE}_DEBUG_CONTROLS  (Jan Kiszka)
Allow L1 to "leak" its debug controls into L2, i.e. permit cleared VM_{ENTRY_LOAD,EXIT_SAVE}_DEBUG_CONTROLS. This requires manually transferring the state of DR7 and IA32_DEBUGCTLMSR from L1 into L2, as both run on different VMCSs. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: nVMX: Fix returned value of MSR_IA32_VMX_PROCBASED_CTLS  (Jan Kiszka)
SDM says bits 1, 4-6, 8, 13-16, and 26 have to be set. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: nVMX: Allow to disable CR3 access interception  (Jan Kiszka)
We already have this control enabled by exposing a broken MSR_IA32_VMX_PROCBASED_CTLS value. This will properly advertise our capability once the value is fixed by clearing the right bits in MSR_IA32_VMX_TRUE_PROCBASED_CTLS. We also have to ensure that we test the right value on L2 entry. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: nVMX: Advertise support for MSR_IA32_VMX_TRUE_*_CTLS  (Jan Kiszka)
We already implemented them but failed to advertise them. Currently they all return the same values as the capability MSRs they are augmenting, so there is no change in exposed features yet. While at it, drop related comments that are partially incorrect and redundant anyway. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: x86: Fix constant value of VM_{EXIT_SAVE,ENTRY_LOAD}_DEBUG_CONTROLS  (Jan Kiszka)
The spec says those controls are at bit position 2, which makes the value 4. The impact of this mistake is effectively zero, as we only use them to ensure that these features are set at position 2 (or, previously, 1) in MSR_IA32_VMX_{EXIT,ENTRY}_CTLS - which is and always will be true according to the spec. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: x86: NOP emulation clears (incorrectly) the high 32-bits of RAX  (Nadav Amit)
In long mode, the current NOP (0x90) emulation still writes back to RAX. As a result, EAX is zero-extended and the high 32 bits of RAX are cleared. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
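[A standalone illustration of the underlying architectural rule — userspace C, values arbitrary:]

  #include <stdint.h>
  #include <stdio.h>

  /* In 64-bit mode every 32-bit register write zero-extends into bits
   * [63:32].  NOP is an alias of "xchg eax, eax", so emulating it with
   * a real 32-bit writeback clobbers the high half of RAX: */
  static uint64_t write_reg32(uint32_t val)
  {
          return (uint64_t)val;   /* high 32 bits become zero */
  }

  int main(void)
  {
          uint64_t rax = 0xdeadbeefcafef00dULL;

          printf("buggy NOP:   %#llx\n",
                 (unsigned long long)write_reg32((uint32_t)rax));
          printf("correct NOP: %#llx\n", (unsigned long long)rax);
          return 0;
  }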
2014-06-19  KVM: x86: emulation of dword cmov on long-mode should clear [63:32]  (Nadav Amit)
Even if the condition of cmov is not satisfied, bits [63:32] should be cleared. This is clearly stated in Intel's CMOVcc documentation. The solution is to reassign the destination onto itself if the condition is unsatisfied. For that matter, the original destination value needs to be read. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
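[The fix can be sketched like this — simplified, with standalone types rather than the emulator's:]

  #include <stdint.h>

  /* Always perform the 32-bit writeback: if the condition fails, the
   * destination is "moved" onto itself, so bits [63:32] are cleared
   * either way, as the architecture requires. */
  static uint64_t emulate_cmov32(int cond, uint64_t dst, uint32_t src)
  {
          uint32_t result = cond ? src : (uint32_t)dst;

          return (uint64_t)result;
  }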
2014-06-19  KVM: x86: Inter-privilege level ret emulation is not implemented  (Nadav Amit)
Return an unhandlable error on inter-privilege level ret instructions. This is because the current emulation does not check the privilege level correctly when loading the CS, and does not pop RSP/SS as needed. Cc: stable@vger.kernel.org Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: x86: Wrong emulation on 'xadd X, X'  (Nadav Amit)
The emulator does not emulate the xadd instruction correctly if the two operands are the same. In this (unlikely) situation the result should be the sum of X and X (2X), when it is currently X. The solution is to first perform writeback to the source, before writing to the destination. The only instruction which should be affected is xadd, as the other instructions that perform writeback to the source use the extended accumulator (e.g., RAX:RDX). Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
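[The ordering issue, sketched in standalone C:]

  #include <stdint.h>

  /* xadd dst, src: temp = dst + src; src = old dst; dst = temp.
   * The source must be written back first; with dst == src the
   * result is then 2*X, whereas the reverse order leaves X. */
  static void emulate_xadd(uint64_t *dst, uint64_t *src)
  {
          uint64_t old_dst = *dst;
          uint64_t sum     = old_dst + *src;

          *src = old_dst;
          *dst = sum;
  }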
2014-06-19  KVM: x86: bit-ops emulation ignores offset on 64-bit  (Nadav Amit)
The current emulation of bit operations ignores the offset from the destination on 64-bit target memory operands. This patch fixes this behavior. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  arch/x86/kvm/vmx.c: use PAGE_ALIGNED instead of IS_ALIGNED(PAGE_SIZE  (Fabian Frederick)
Use the mm.h definition. Cc: Gleb Natapov <gleb@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: emulate: fix harmless typo in MMX decoding  (Paolo Bonzini)
It was using the wrong member of the union. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  KVM: emulate: simplify BitOp handling  (Paolo Bonzini)
Memory is always the destination for BitOp instructions. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-19  x86/efi: Support initrd loaded above 4G  (Yinghai Lu)
This is for booting an EFI kernel directly, without a bootloader. If the kernel supports XLF_CAN_BE_LOADED_ABOVE_4G, we should not limit the initrd to below hdr->initrd_addr_max. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
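[The intended logic, sketched — the field and flag names are from the x86 boot protocol; the surrounding code is simplified:]

  unsigned long initrd_addr_max;

  if (hdr->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G)
          initrd_addr_max = -1UL;                 /* no 4G limit */
  else
          initrd_addr_max = hdr->initrd_addr_max; /* legacy limit */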
2014-06-19  x86/efi: Use early_memunmap() to squelch sparse errors  (Matt Fleming)
kbuild reports the following sparse errors:

  >> arch/x86/platform/efi/quirks.c:242:23: sparse: incorrect type in argument 1 (different address spaces)
     arch/x86/platform/efi/quirks.c:242:23: expected void [noderef] <asn:2>*addr
     arch/x86/platform/efi/quirks.c:242:23: got void *[assigned] tablep
  >> arch/x86/platform/efi/quirks.c:245:23: sparse: incorrect type in argument 1 (different address spaces)
     arch/x86/platform/efi/quirks.c:245:23: expected void [noderef] <asn:2>*addr
     arch/x86/platform/efi/quirks.c:245:23: got struct efi_setup_data *[assigned] data

Dave Young had made previous attempts to convert the early_iounmap() calls to early_memunmap() but ran into merge conflicts with commit 9e5c33d7aeee ("mm: create generic early_ioremap() support"). Now that we've got that commit in place we can switch to using early_memunmap(), since we're already using early_memremap() in efi_reuse_config(). Cc: Dave Young <dyoung@redhat.com> Cc: Saurabh Tangri <saurabh.tangri@intel.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-06-19  x86/efi: Move all workarounds to a separate file quirks.c  (Saurabh Tangri)
Currently, it's difficult to find all the workarounds that are applied when running on EFI, because they're littered throughout various code paths. This change moves all of them into a separate file, with the hope that it will become the single location for all our well documented quirks. Signed-off-by: Saurabh Tangri <saurabh.tangri@intel.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-06-19  KVM: x86: Increase the number of variable MTRR regs to 10  (Nadav Amit)
Recent Intel CPUs have 10 variable range MTRRs. Since operating systems sometimes make assumptions about CPUs while they ignore capability MSRs, it is better for KVM to be consistent with recent CPUs. Reporting more MTRRs than actually supported has no functional implications. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-18  x86, cpufeature: Convert more "features" to bugs  (Borislav Petkov)
X86_FEATURE_FXSAVE_LEAK, X86_FEATURE_11AP and X86_FEATURE_CLFLUSH_MONITOR are not really features but synthetic bits we use for applying different bug workarounds. Call them what they really are, and make sure they get the proper cross-CPU behavior (OR rather than AND). Signed-off-by: Borislav Petkov <bp@suse.de> Link: http://lkml.kernel.org/r/1403042783-23278-1-git-send-email-bp@alien8.de Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-18  Merge tag 'v3.16-rc1' into x86/cpufeature  (H. Peter Anvin)
Linux 3.16-rc1 Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-18  x86, xsave: Add forgotten inline annotation  (Borislav Petkov)
Add a missing inline annotation on a static function, in order to shut up a bunch of warnings like:

  In file included from arch/x86/crypto/camellia_aesni_avx_glue.c:23:0:
  ./arch/x86/include/asm/xsave.h:73:12: warning: ‘xsave_state_booting’ defined but not used [-Wunused-function]
   static int xsave_state_booting(struct xsave_struct *fx, u64 mask)
              ^
  In file included from arch/x86/crypto/camellia_aesni_avx2_glue.c:23:0:
  ./arch/x86/include/asm/xsave.h:73:12: warning: ‘xsave_state_booting’ defined but not used [-Wunused-function]
   static int xsave_state_booting(struct xsave_struct *fx, u64 mask)
              ^
  ...

Cc: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: http://lkml.kernel.org/r/1403000468-30094-1-git-send-email-bp@alien8.de Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
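[The fix amounts to a one-word change in the header; the signature below is reconstructed from the warning text above, so treat this as a sketch rather than the verbatim diff:]

  -static int xsave_state_booting(struct xsave_struct *fx, u64 mask)
  +static inline int xsave_state_booting(struct xsave_struct *fx, u64 mask)

A plain "static" function defined in a header emits an unused-function warning in every translation unit that includes the header but never calls it; "static inline" does not.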
2014-06-18  x86, locking: Use no more OOSTORE nonsense  (Peter Zijlstra)
Paul reported that X86_OOSTORE is dead, yay! Update a comment and remove a newly added reference. Reported-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Dave Jones <davej@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Waiman Long <Waiman.Long@hp.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20140611090509.GC3588@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-06-18  KVM: emulate: POP SS triggers a MOV SS shadow too  (Paolo Bonzini)
We did not do that when interruptibility was added to the emulator, because at the time pop to segment was not implemented. Now it is, add it. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-18  KVM: x86: smsw emulation is incorrect in 64-bit mode  (Nadav Amit)
In 64-bit mode, when the destination is a register, the assignment is done according to the operand size. Otherwise (memory operand or no 64-bit mode), a 16-bit assignment is performed. Currently, 16-bit assignment is always done to the destination. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
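[A sketch of the rule — standalone types, simplified:]

  #include <stdint.h>
  #include <stdbool.h>

  /* smsw: in 64-bit mode with a register destination the store follows
   * the operand size; a memory destination (or any mode outside 64-bit)
   * always gets 16 bits. */
  static uint64_t smsw_value(uint64_t cr0, int op_bytes, bool reg_dst, bool mode64)
  {
          if (mode64 && reg_dst) {
                  if (op_bytes == 8)
                          return cr0;
                  if (op_bytes == 4)
                          return (uint32_t)cr0;
          }
          return (uint16_t)cr0;
  }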
2014-06-18  KVM: x86: Return error on cmpxchg16b emulation  (Nadav Amit)
cmpxchg16b is currently unimplemented in the emulator. The least we can do is return an error upon the emulation of this instruction. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-18  KVM: x86: rdpmc emulation checks the counter incorrectly  (Nadav Amit)
The rdpmc emulation checks that the counter (ECX) is not higher than 2, without taking into consideration the role of bits 30:31 (e.g., bit 30 marks whether the counter is fixed). The fix uses the pmu information for checking the validity of the pmu counter. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
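[A sketch of the index decoding the fix relies on — simplified; the counter limits are parameters here rather than pmu state:]

  #include <stdint.h>
  #include <stdbool.h>

  /* ECX selects the counter: bit 30 switches between the general-purpose
   * and fixed counter spaces, so a flat "idx > 2" check is wrong. */
  static bool pmc_index_valid(uint32_t ecx, uint32_t nr_gp, uint32_t nr_fixed)
  {
          bool     fixed = ecx & (1u << 30);
          uint32_t idx   = ecx & ~(3u << 30);   /* strip type/fast-read bits */

          return idx < (fixed ? nr_fixed : nr_gp);
  }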
2014-06-18  KVM: x86: movnti minimum op size of 32-bit is not kept  (Nadav Amit)
If the operand-size prefix (0x66) is used in 64-bit mode, the emulator would assume the destination operand is 64-bit, when it should be 32-bit. Reminder: movnti does not support 16-bit operands and its default operand size is 32-bit. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-18  KVM: x86: cmpxchg emulation should compare in reverse order  (Nadav Amit)
The current implementation of cmpxchg does not update the flags correctly, since the accumulator should be compared with the destination and not the other way around. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
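[The intended semantics, sketched in standalone C — only ZF is modeled here:]

  #include <stdint.h>
  #include <stdbool.h>

  /* cmpxchg r/m64, r64: the flags must come from "cmp rax, dst",
   * i.e. accumulator minus destination, not the reverse. */
  static void emulate_cmpxchg64(uint64_t *rax, uint64_t *dst, uint64_t src,
                                bool *zf)
  {
          *zf = (*rax == *dst);   /* flags as if computing *rax - *dst */
          if (*zf)
                  *dst = src;     /* equal: store the source          */
          else
                  *rax = *dst;    /* not equal: load dst into RAX     */
  }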
2014-06-18  KVM: x86: sgdt and sidt are not privileged  (Nadav Amit)
The SGDT and SIDT instructions are not privileged, i.e. they can be executed with CPL>0. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>