2016-07-21arm64: Honor nosmp kernel command line optionSuzuki K Poulose
Passing "nosmp" should boot the kernel with a single processor, without provision to enable secondary CPUs even if they are present. "nosmp" is implemented by setting maxcpus=0. At the moment we still mark the secondary CPUs present even with nosmp, which allows the userspace to bring them up. This patch corrects the smp_prepare_cpus() to honor the maxcpus == 0. Commit 44dbcc93ab67145 ("arm64: Fix behavior of maxcpus=N") fixed the behavior for maxcpus >= 1, but broke maxcpus = 0. Fixes: 44dbcc93ab67 ("arm64: Fix behavior of maxcpus=N") Cc: <stable@vger.kernel.org> # 4.7+ Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> [catalin.marinas@arm.com: updated code comment] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-07-21arm64: Fix incorrect per-cpu usage for boot CPUSuzuki K Poulose
In smp_prepare_boot_cpu(), we invoke cpuinfo_store_boot_cpu to store the cpuinfo in a per-cpu ptr, before initialising the per-cpu offset for the boot CPU. This patch reorders the sequence to make sure we initialise the per-cpu offset before accessing the per-cpu area. Commit 4b998ff1885eec ("arm64: Delay cpuinfo_store_boot_cpu") fixed the issue where we modified the per-cpu area even before the kernel initialises the per-cpu areas, but failed to wait until the boot CPU had updated its offset. Fixes: 4b998ff1885e ("arm64: Delay cpuinfo_store_boot_cpu") Cc: <stable@vger.kernel.org> # 4.4+ Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-07-21pNFS/files: filelayout_write_done_cb must call nfs_writeback_update_inode()Trond Myklebust
All write callbacks are required to call nfs_writeback_update_inode() upon success to ensure that file size changes are recorded, and the attribute cache is invalidated. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-07-21cpufreq: add cpufreq_driver_resolve_freq()Steve Muckle
Cpufreq governors may need to know what a particular target frequency maps to in the driver without necessarily wanting to set the frequency. Support this operation via a new cpufreq API, cpufreq_driver_resolve_freq(). This API returns the lowest driver frequency equal or greater than the target frequency (CPUFREQ_RELATION_L), subject to any policy (min/max) or driver limitations. The mapping is also cached in the policy so that a subsequent fast_switch operation can avoid repeating the same lookup. The API will call a new cpufreq driver callback, resolve_freq(), if it has been registered by the driver. Otherwise the frequency is resolved via cpufreq_frequency_table_target(). Rather than require ->target() style drivers to provide a resolve_freq() callback it is left to the caller to ensure that the driver implements this callback if necessary to use cpufreq_driver_resolve_freq(). Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Steve Muckle <smuckle@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
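A minimal user-space sketch of the CPUFREQ_RELATION_L resolution described above, assuming a simple unsorted frequency table; the table contents, limits and helper name are invented, and the real API additionally caches the result in the policy for a later fast_switch.

```c
#include <stdio.h>

/* Pick the lowest table frequency >= target, clamped to [policy_min, policy_max].
 * Falls back to policy_max if nothing in the table matches. */
static unsigned int resolve_freq(const unsigned int *table, int len,
                                 unsigned int policy_min,
                                 unsigned int policy_max,
                                 unsigned int target)
{
    unsigned int best = 0;

    if (target < policy_min)
        target = policy_min;
    if (target > policy_max)
        target = policy_max;

    for (int i = 0; i < len; i++) {
        unsigned int f = table[i];

        if (f < policy_min || f > policy_max)
            continue;
        if (f >= target && (!best || f < best))
            best = f;               /* lowest frequency >= target */
    }
    return best ? best : policy_max;
}

int main(void)
{
    unsigned int table[] = { 800000, 1200000, 1600000, 2000000 }; /* kHz */

    /* A 1.3 GHz request resolves to 1.6 GHz, the lowest frequency >= target. */
    printf("%u kHz\n", resolve_freq(table, 4, 800000, 2000000, 1300000));
    return 0;
}
```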
2016-07-21perf tools: Add AVX-512 instructions to the new instructions testAdrian Hunter
Previous patches added support for Intel's AVX-512 instructions to the kernel and perf tools instruction decoders. AVX-512 instructions are documented in Intel Architecture Instruction Set Extensions Programming Reference (February 2016). Add a representative set of instructions to perf's "new instructions" test. e.g. perf test "new instructions" Or to view a particular instruction: perf test -v "new instructions" 2>&1 | grep vbroadcasti64x4 Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dan Williams <dan.j.williams@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: X86 ML <x86@kernel.org> Link: http://lkml.kernel.org/r/1469003437-32706-5-git-send-email-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-21perf tools: Add AVX-512 support to the instruction decoder used by Intel PTAdrian Hunter
Add support for Intel's AVX-512 instructions to the perf tools instruction decoder used by Intel PT. The kernel's instruction decoder was updated in a previous patch. AVX-512 instructions are documented in Intel Architecture Instruction Set Extensions Programming Reference (February 2016). AVX-512 instructions are identified by an EVEX prefix which, for the purpose of instruction decoding, can be treated as though it were a 4-byte VEX prefix. Existing instructions which can now accept an EVEX prefix need not be further annotated in the op code map (x86-opcode-map.txt). In the case of new instructions, the op code map is updated accordingly. Also add associated Mask Instructions that are used to manipulate mask registers used in AVX-512 instructions. A representative set of instructions is added to the perf tools new instructions test in a subsequent patch. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dan Williams <dan.j.williams@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: X86 ML <x86@kernel.org> Link: http://lkml.kernel.org/r/1469003437-32706-4-git-send-email-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-21x86/insn: Add AVX-512 support to the instruction decoderAdrian Hunter
Add support for Intel's AVX-512 instructions to the instruction decoder. AVX-512 instructions are documented in Intel Architecture Instruction Set Extensions Programming Reference (February 2016). AVX-512 instructions are identified by an EVEX prefix which, for the purpose of instruction decoding, can be treated as though it were a 4-byte VEX prefix. Existing instructions which can now accept an EVEX prefix need not be further annotated in the op code map (x86-opcode-map.txt). In the case of new instructions, the op code map is updated accordingly. Also add associated Mask Instructions that are used to manipulate mask registers used in AVX-512 instructions. The 'perf tools' instruction decoder is updated in a subsequent patch. And a representative set of instructions is added to the perf tools new instructions test in a subsequent patch. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dan Williams <dan.j.williams@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: X86 ML <x86@kernel.org> Link: http://lkml.kernel.org/r/1469003437-32706-3-git-send-email-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-21cpufreq: intel_pstate: Check cpuid for MSR_HWP_INTERRUPTSrinivas Pandruvada
The MSR_HWP_INTERRUPT MSR is valid only when CPUID.06H:EAX[8] = 1, so check for the feature before accessing this MSR. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
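A user-space sketch of the gate described above, for GCC or Clang on x86: CPUID.06H:EAX[8] advertises the HWP notification interrupt, so MSR_HWP_INTERRUPT should only be touched when the bit is set. (Actually reading the MSR would still require kernel context or the msr driver; this only shows the CPUID check.)

```c
#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* CPUID.06H:EAX[8] = HWP notification support; leaf 6 ignores ECX,
 * so the plain __get_cpuid() helper is sufficient here. */
static bool hwp_notification_supported(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    if (!__get_cpuid(0x6, &eax, &ebx, &ecx, &edx))
        return false;
    return eax & (1u << 8);
}

int main(void)
{
    printf("HWP notification interrupt %ssupported\n",
           hwp_notification_supported() ? "" : "not ");
    return 0;
}
```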
2016-07-21intel_pstate: Update cpu_frequency tracepoint every timeRafael J. Wysocki
Currently, intel_pstate only updates the cpu_frequency tracepoint if the new P-state to set is different from the current one, but that causes powertop to report 100% idle on a 100% loaded system sometimes. Prevent that from happening by updating the cpu_frequency tracepoint every time intel_pstate_update_pstate() is called. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
2016-07-21cpufreq: intel_pstate: clean remnant struct elementCarsten Emde
When I was working with the Intel P state driver I came across a remnant struct element that is no longer needed after the function intel_pstate_calc_freq() was retired. Signed-off-by: Carsten Emde <C.Emde@osadl.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2016-07-21ACPI / DPTF: move int340x_thermal.c to the DPTF folderSrinivas Pandruvada
Since DPTF has its own folder under ACPI, move this file also there. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2016-07-21ACPI / DPTF: Add DPTF power participant driverSrinivas Pandruvada
This driver adds support for the Dynamic Platform and Thermal Framework (DPTF) Platform Power Participant device (INT3407). This participant is responsible for exposing platform telemetry such as: max_platform_power, platform_power_source, adapter_rating, battery_steady_power and charger_type. These attributes are presented via a sysfs interface under the INT3407 platform device: $ ls /sys/bus/platform/devices/INT3407\:00/dptf_power/ adapter_rating_mw battery_steady_power_mw charger_type max_platform_power_mw platform_power_source ACPI methods used in this driver: PMAX: Maximum platform power that can be supported by the battery in mW. PSRC: System charge source, 0x00 = DC, 0x01 = AC, 0x02 = USB, 0x03 = Wireless Charger. ARTG: Adapter rating in mW (maximum adapter power); must be 0 if no AC adapter is plugged in. CTYP: Charger type, Traditional: 0x01, Hybrid: 0x02, NVDC: 0x03. PBSS: Returns the max sustained power for the battery in mW. The INT3407 also contains _BST and _BIX objects, which are compliant with the ACPI 5.0 specification. Those objects are already used by the ACPI battery (PNP0C0A) driver and information about them is exported via Linux power supply class registration. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
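For context, a small user-space reader for one of the attributes listed above. The sysfs path and the PSRC value mapping come from the commit text, while the exact device instance name ("INT3407:00") and the assumption that the attribute prints a decimal integer are illustrative.

```c
#include <stdio.h>

int main(void)
{
    const char *attr =
        "/sys/bus/platform/devices/INT3407:00/dptf_power/platform_power_source";
    static const char *src[] = { "DC", "AC", "USB", "Wireless Charger" };
    FILE *f = fopen(attr, "r");
    int v;

    if (!f) {
        perror(attr);
        return 1;
    }
    if (fscanf(f, "%d", &v) != 1) {
        fprintf(stderr, "unexpected format in %s\n", attr);
        fclose(f);
        return 1;
    }
    fclose(f);

    /* 0x00..0x03 map to the PSRC sources documented above. */
    printf("platform_power_source=%d (%s)\n",
           v, (v >= 0 && v < 4) ? src[v] : "unknown");
    return 0;
}
```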
2016-07-21ASoC: cs53l30: Fix bit shift issue of TDM modeNicolin Chen
The TDM mode using the PCM format currently has a two-bit right shift due to the format configuration in the driver. According to Figure 4-13 in the CS53L30 datasheet, using ASP_SCLK_INV = 0 and SHIFT_LEFT = 1 should be the correct combination to create the one-bit right shift for the DSP type A format. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2016-07-21ASoC: cs53l30: Fix a bug for TDM slot location validationNicolin Chen
The CS53L30 supports a maximum of 4 slots, but it should also handle configurations with fewer than 4 channels based on the rx_mask. So when the driver validates the last slot location, it should check the last active slot instead of always the 4th one. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Signed-off-by: Mark Brown <broonie@kernel.org>
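A sketch of the validation rule described above (plain C, not the codec driver code): derive the last active slot index from rx_mask instead of hard-coding the 4th slot.

```c
#include <stdio.h>

/* Index of the highest set bit in rx_mask, i.e. the last active slot;
 * -1 means no slots are active at all. */
static int last_active_slot(unsigned int rx_mask)
{
    if (!rx_mask)
        return -1;
    return 31 - __builtin_clz(rx_mask);
}

int main(void)
{
    printf("rx_mask=0x3 -> last slot %d\n", last_active_slot(0x3)); /* 2 channels */
    printf("rx_mask=0xF -> last slot %d\n", last_active_slot(0xF)); /* 4 channels */
    return 0;
}
```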
2016-07-21Btrfs: fix delalloc accounting after copy_from_user faultsChris Mason
Commit 56244ef151c3cd11 was almost but not quite enough to fix the reservation math after btrfs_copy_from_user returned partial copies. Some users are still seeing warnings in btrfs_destroy_inode, and with a long enough test run I'm able to trigger them as well. This patch fixes the accounting math again, bringing it much closer to the way it was before the sectorsize conversion Chandan did. The problem is accounting for the offset into the page/sector when we do a partial copy. This one just uses the dirty_sectors variable which should already be updated properly. Signed-off-by: Chris Mason <clm@fb.com> cc: stable@vger.kernel.org # v4.6+
2016-07-21arm64: kprobes: Add KASAN instrumentation around stack accessesCatalin Marinas
This patch disables KASAN around the memcpy from/to the kernel or IRQ stacks to avoid warnings like below: BUG: KASAN: stack-out-of-bounds in setjmp_pre_handler+0xe4/0x170 at addr ffff800935cbbbc0 Read of size 128 by task swapper/0/1 page:ffff7e0024d72ec0 count:0 mapcount:0 mapping: (null) index:0x0 flags: 0x1000000000000000() page dumped because: kasan: bad access detected CPU: 4 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4+ #1 Hardware name: ARM Juno development board (r0) (DT) Call trace: [<ffff20000808ad88>] dump_backtrace+0x0/0x280 [<ffff20000808b01c>] show_stack+0x14/0x20 [<ffff200008563a64>] dump_stack+0xa4/0xc8 [<ffff20000824a1fc>] kasan_report_error+0x4fc/0x528 [<ffff20000824a5e8>] kasan_report+0x40/0x48 [<ffff20000824948c>] check_memory_region+0x144/0x1a0 [<ffff200008249814>] memcpy+0x34/0x68 [<ffff200008c3ee2c>] setjmp_pre_handler+0xe4/0x170 [<ffff200008c3ec5c>] kprobe_breakpoint_handler+0xec/0x1d8 [<ffff2000080853a4>] brk_handler+0x5c/0xa0 [<ffff2000080813f0>] do_debug_exception+0xa0/0x138 Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-07-21arm64: kprobes: Cleanup jprobe_returnMarc Zyngier
jprobe_return seems to have aged badly: comments refer to non-existent behaviours, and there is a dangerous habit of messing with registers without telling the compiler. This patch applies the following remedies: - Fix the comments to describe the actual behaviour - Tidy up the asm sequence to directly assign the stack pointer without clobbering extra registers - Mark the rest of the function as unreachable() so that the compiler knows that there is no need for an epilogue - Stop making jprobe_return_break a global function (you really don't want to call that guy, and it isn't even a function). Tested with tcp_probe. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
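The unreachable() point above can be illustrated outside the kernel: after an asm sequence that never falls through, __builtin_unreachable() tells the compiler no epilogue is needed. The x86-64 exit syscall below is only a stand-in for the real jprobe breakpoint sequence.

```c
#include <stdio.h>

/* Terminate via the raw exit syscall (x86-64 Linux, syscall nr 60) and tell
 * the compiler that control never returns, so no epilogue is generated. */
static void die_now(int code)
{
    __asm__ volatile("syscall"
                     : /* no outputs */
                     : "a"(60L), "D"((long)code)
                     : "rcx", "r11", "memory");
    __builtin_unreachable();
}

int main(void)
{
    puts("exiting via raw syscall");
    die_now(0);
}
```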
2016-07-21crypto: vmx - Convert to CPU feature based module autoloadingAlastair D'Silva
This patch utilises the GENERIC_CPU_AUTOPROBE infrastructure to automatically load the vmx_crypto module if the CPU supports it. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Add module autoloading based on CPU featuresAlastair D'Silva
This patch provides the necessary infrastructure to allow drivers to be automatically loaded via udev. It implements the minimum required to be able to use module_cpu_feature_match() to trigger the GENERIC_CPU_AUTOPROBE mechanisms. The features exposed are a mirror of the cpu_user_features (converted to an offset from a mask). This decision was made to ensure that module loading and userspace see a consistent set of features. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> [mpe: Only define the bits we currently need] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/powernv/ioda: Fix endianness when reading TCEsAlexey Kardashevskiy
The iommu_table_ops::exchange() callback writes a new TCE to the table and returns the old value and permission mask. The old TCE value is correctly converted from BE to CPU endian; however, the permission mask was calculated from the BE value and therefore always returned DMA_NONE, which could cause a memory leak on LE systems using the VFIO SPAPR TCE IOMMU v1 driver. This fixes pnv_tce_xchg() so that @oldtce is returned in CPU endian. Fixes: 05c6cfb9dce0 ("powerpc/iommu/powernv: Release replaced TCE") Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
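The endian pitfall above is easy to reproduce in plain C: a mask computed from the raw big-endian word is wrong on a little-endian host, while converting with be64toh() first gives the expected bits. The bit layout below is invented and is not the real TCE format.

```c
#include <endian.h>
#include <inttypes.h>
#include <stdio.h>

#define PERM_MASK 0x3ULL   /* pretend read/write permissions live in the low two bits */

int main(void)
{
    uint64_t cpu_val = 0x1000ULL | 0x3ULL;   /* "address" bits plus permissions */
    uint64_t be_val  = htobe64(cpu_val);     /* as it would sit in the BE table */

    /* On a little-endian host the first line prints 0 (looks like DMA_NONE),
     * the second prints 3; on big-endian both print 3. */
    printf("mask from raw BE word: %" PRIu64 "\n", be_val & PERM_MASK);
    printf("mask after be64toh() : %" PRIu64 "\n", be64toh(be_val) & PERM_MASK);
    return 0;
}
```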
2016-07-21powerpc/mm: Add memory barrier in __hugepte_alloc()Sukadev Bhattiprolu
__hugepte_alloc() uses kmem_cache_zalloc() to allocate a zeroed PTE and proceeds to use the newly allocated PTE. Add a memory barrier to make sure that the other CPUs see a properly initialized PTE. Based on a fix suggested by James Dykman. Reported-by: James Dykman <jdykman@us.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Tested-by: James Dykman <jdykman@us.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
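A user-space analogue of the ordering requirement above, with C11 atomics standing in for the kernel's barrier: the zero-initialised object must be visible to other CPUs before the pointer that publishes it becomes visible. Names and sizes are invented.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct pte_page { unsigned long slots[16]; };

static _Atomic(struct pte_page *) published;

/* Like the __hugepte_alloc() fix: make sure the zeroed allocation is
 * ordered before the store that makes it reachable by other threads
 * (a release store here, an explicit barrier in the kernel patch). */
static void alloc_and_publish(void)
{
    struct pte_page *p = calloc(1, sizeof(*p));   /* zeroed, like kmem_cache_zalloc() */

    if (!p)
        return;
    atomic_store_explicit(&published, p, memory_order_release);
}

int main(void)
{
    alloc_and_publish();
    printf("published=%p\n", (void *)atomic_load_explicit(&published,
                                                          memory_order_acquire));
    return 0;
}
```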
2016-07-21powerpc/modules: Never restore r2 for a mprofile-kernel style mcount() callMichael Ellerman
In the module loader we process relocations, and for long jumps we generate trampolines (aka stubs). At the call site for one of these trampolines we usually need to generate a load instruction to restore the TOC pointer into r2. There is one exception however, which is calls to mcount() using the mprofile-kernel ABI, they handle the TOC inside the stub, and so for them we do not generate a TOC load. The bug is in how the code in restore_r2() decides if it needs to generate the TOC load. It does so by looking for a nop following the branch, and if it sees a nop, it replaces it with the load. In general the compiler has no reason to generate a nop following the mcount() call and so that check works OK. However if we combine a jump label at the start of a function, with an early return, such that GCC applies the shrink-wrapping optimisation, we can then end up with an mcount call followed immediately by a nop. However the nop is not there for a TOC load, it is for the jump label. That confuses restore_r2() into replacing the jump label nop with a TOC load, which in turn confuses ftrace into replacing the mcount call with a b +8 (fixed in the previous commit). The end result is we jump over the jump label, which if it was supposed to return means we incorrectly run the body of the function. We have seen this in practice with some yet-to-be-merged patches that use jump labels more extensively. The fix is relatively simple, in restore_r2() we check for an mprofile-kernel style mcount() call first, before looking for the presence of a nop. Fixes: 153086644fd1 ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/ftrace: Separate the heuristics for checking call sitesMichael Ellerman
In __ftrace_make_nop() (the 64-bit version), we have code to deal with two ftrace ABIs. There is the original ABI, which looks mostly like a function call, and then the mprofile-kernel ABI which is just a branch. The code tries to handle both cases, by looking for the presence of a load to restore the TOC pointer (PPC_INST_LD_TOC). If we detect the TOC load, we assume the call site is for an mcount() call using the old ABI. That means we patch the mcount() call with a b +8, to branch over the TOC load. However if the kernel was built with mprofile-kernel, then there will never be a call site using the original ftrace ABI. If for some reason we do see a TOC load, then it's there for a good reason, and we should not jump over it. So split the code, using the existing CC_USING_MPROFILE_KERNEL. Kernels built with mprofile-kernel will only look for, and expect, the new ABI, and similarly for the original ABI. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
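A user-space sketch of the split described in the two commits above, using the well-known ppc64 encodings (0x60000000 is nop, i.e. ori 0,0,0, and 0xe8410018 is ld r2,24(r1), the TOC restore the commit calls PPC_INST_LD_TOC). CC_USING_MPROFILE_KERNEL is the kernel build macro mentioned above and is simply assumed undefined when this sketch is built in user space; the logic is an illustration, not the ftrace code.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PPC_INST_NOP     0x60000000u   /* ori 0,0,0 */
#define PPC_INST_LD_TOC  0xe8410018u   /* ld r2,24(r1) */

/* Does the instruction after the call site look like an old-ABI mcount()
 * call that we may skip over with "b +8"?  With -mprofile-kernel there are
 * no such sites, so a TOC load found there must be left alone. */
static bool old_abi_mcount_site(uint32_t insn_after_call)
{
#ifdef CC_USING_MPROFILE_KERNEL
    (void)insn_after_call;
    return false;
#else
    return insn_after_call == PPC_INST_LD_TOC;
#endif
}

int main(void)
{
    printf("ld r2,24(r1) follows call: old ABI site? %d\n",
           old_abi_mcount_site(PPC_INST_LD_TOC));
    printf("nop follows call:          old ABI site? %d\n",
           old_abi_mcount_site(PPC_INST_NOP));
    return 0;
}
```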
2016-07-21powerpc: Merge 32-bit and 64-bit setup_arch()Benjamin Herrenschmidt
There are few enough differences now. mpe: Add a/p/k/setup.h to contain the prototypes and empty versions of functions we need, rather than using weak functions. Add a few other empty versions to avoid as many #ifdefs as possible in the code. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/64: Make a few boot functions __initBenjamin Herrenschmidt
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Re-order setup_panic()Benjamin Herrenschmidt
Do it right after probe_machine() since it's about testing ppc_md, and put the test in the common code. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Re-order the call to smp_setup_cpu_maps()Benjamin Herrenschmidt
It makes more sense to do it before initializing xmon() as xmon might use the info in there. We do want to register the console early though, in case we want some functioning printk's in the CPU map setup. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/32: Move cache info inits to a separate functionBenjamin Herrenschmidt
Matches 64-bit. Also move the call to the same spot as ppc64. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21Merge branch 'for-miklos' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs into for-nextMiklos Szeredi
2016-07-21powerpc/64: Move the content of setup_system() to setup_arch()Benjamin Herrenschmidt
And kill setup_system(). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/64: Move setting of {i,d}cache_bsize to initialize_cache_info()Benjamin Herrenschmidt
Also remove the completely obsolete comment. We *do* look in the device-tree. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/64: Move the boot time info banner to a separate functionBenjamin Herrenschmidt
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Get rid of ppc_md.init_early()Benjamin Herrenschmidt
It is now called right after platform probe, so the probe function can just do the job. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Move 32-bit probe() machine to later in the boot processBenjamin Herrenschmidt
This converts all the 32-bit platforms to use the expanded device-tree, which is a pretty mechanical change. Unlike 64-bit, the 32-bit kernel didn't rely on platform initializations to set up the MMU, since it sets it up entirely before probe_machine(), so the move has comparatively fewer consequences though it's a bigger patch. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/64: Move 64-bit probe_machine() to later in the boot processBenjamin Herrenschmidt
We no longer need the machine type that early, so we can move probe_machine() to after the device-tree has been expanded. This will allow further consolidation. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Ensure that ppc_md is empty before probing for machine typeBenjamin Herrenschmidt
Anything in there will be overwritten, so it helps catch nasty bugs if we check that it's indeed full of NULLs before we do so. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/mm: Move hash table ops to a separate structureBenjamin Herrenschmidt
Moving probe_machine() to after mmu init will cause the ppc_md fields relative to the hash table management to be overwritten. Since we have essentially disconnected the machine type from the hash backend ops, finish the job by moving them to a different structure. The only callback that didn't quite fit is update_partition_table, since it is not specific to hash, so I moved it to a standalone variable for now. We can revisit later if needed. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [mpe: Fix ppc64e build failure in kexec] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/pmac: Remove spurious machine type testBenjamin Herrenschmidt
pmac_declare_of_platform_devices() is already a machine initcall, thus it won't be called on a non-powermac machine. Testing for chrp there is pointless. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/mm/hash64: Don't test for machine type to detect HEA special caseBenjamin Herrenschmidt
Instead, check for FW_FEATURE_SPLPAR. This should be roughly equivalent, as all pseries machines that can have an HEA also support SPLPAR, and no other machine type does. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/mm/hash: Don't use machine_is() early during bootBenjamin Herrenschmidt
Use the device-tree instead, as we'll be moving probe_machine() out of early_setup(). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/pasemi: Remove IOBMAP allocation from platform probe()Benjamin Herrenschmidt
These days, memblock is available later, so we can just allocate it as part of iob_init(). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/64: Move MMU backend selection out of platform codeBenjamin Herrenschmidt
We move it into early_mmu_init() based on firmware features. For PS3, we have to move the setting of these into early_init_devtree(). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/pmac: Remove early allocation of the SMU command bufferBenjamin Herrenschmidt
The SMU command buffer needs to be allocated below 2G using memblock. In the past, this had to be done very early from the arch code as memblock wasn't available past that point. That is no longer the case though, smu_init() is called from setup_arch() when memblock is still functional these days. So move the allocation to the SMU driver itself. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Put exception configuration in a common placeBenjamin Herrenschmidt
The various calls to establish exception endianness and AIL are now done from a single point using already established CPU and FW feature bits to decide what to do. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Move FW feature probing out of pseries probe()Benjamin Herrenschmidt
We move the function itself to pseries/firmware.c and call it along with almost all other flat device-tree parsers from early_init_devtree() Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [mpe: Move #ifdefs into the header by providing pseries_probe_fw_features()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc/dart: Use a cachable DARTBenjamin Herrenschmidt
Instead of punching a hole in the linear mapping, just use normal cachable memory, and apply the flush sequence documented in the CPC625 (aka U3) user manual. This allows us to remove quite a bit of code related to the early allocation of the DART and the hole in the linear mapping. We can also get rid of the copy of the DART for suspend/resume as the original memory can just be saved/restored now, as long as we properly sync the caches. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [mpe: Integrate dart_init() fix to return ENODEV when DART disabled] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Move 64-bit memory reserves to setup_arch()Benjamin Herrenschmidt
There is really no need to do them that early: early_setup() runs before the MMU is on, so we should do the strict minimum there to get the MMU going. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Move 64-bit feature fixup earlierBenjamin Herrenschmidt
Make it part of early_setup() as we really want the feature fixups to be applied before we turn on the MMU since they can have an impact on the various assembly path related to MMU management and interrupts. This makes 64-bit match what 32-bit does. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21powerpc: Factor do_feature_fixup callsBenjamin Herrenschmidt
32-bit and 64-bit do a similar set of calls early on; move it all into a single common function to make the boot code more readable. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-21x86/boot: Reorganize and clean up the BIOS area reservation codeIngo Molnar
So the reserve_ebda_region() code has accumulated a number of problems over the years that make it really difficult to read and understand: - The calculation of 'lowmem' and 'ebda_addr' is an unnecessarily interleaved mess of first lowmem, then ebda_addr, then lowmem tweaks... - 'lowmem' here means 'super low mem' - i.e. 16-bit addressable memory. In other parts of the x86 code 'lowmem' means 32-bit addressable memory... This makes it super confusing to read. - It does not help at all that we have various memory range markers, half of which are 'start of range', half of which are 'end of range' - but this crucial property is not obvious in the naming at all ... gave me a headache trying to understand all this. - Also, the 'ebda_addr' name sucks: it highlights that it's an address (which is obvious, all values here are addresses!), while it does not highlight that it's the _start_ of the EBDA region ... - 'BIOS_LOWMEM_KILOBYTES' says a lot of things, except that this is the only value that is a pointer to a value, not a memory range address! - The function name itself is a misnomer: it says 'reserve_ebda_region()' while its main purpose is to reserve all the firmware ROM typically between 640K and 1MB, while the 'EBDA' part is only a small part of that ... - Likewise, the paravirt quirk flag name 'ebda_search' is misleading as well: this too should be about whether to reserve firmware areas in the paravirt case. - In fact thinking about this as 'end of RAM' is confusing: what this function *really* wants to reserve is firmware data and code areas! Once the thinking is inverted from a mixed 'ram' and 'reserved firmware area' notion to a pure 'reserved area' notion everything becomes a lot clearer. To improve all this rewrite the whole code (without changing the logic): - Firstly invert the naming from 'lowmem end' to 'BIOS reserved area start' and propagate this concept through all the variable names and constants. BIOS_RAM_SIZE_KB_PTR // was: BIOS_LOWMEM_KILOBYTES BIOS_START_MIN // was: INSANE_CUTOFF ebda_start // was: ebda_addr bios_start // was: lowmem BIOS_START_MAX // was: LOWMEM_CAP - Then clean up the name of the function itself by renaming it to reserve_bios_regions() and renaming the ::ebda_search paravirt flag to ::reserve_bios_regions. - Fix up all the comments (fix typos), harmonize and simplify their formulation and remove comments that become unnecessary due to the much better naming all around. Signed-off-by: Ingo Molnar <mingo@kernel.org>
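A stand-alone sketch of the clamping the renamed code performs, using the names from the commit message; the concrete BIOS_START_MIN/BIOS_START_MAX values and the out-of-range fallback choice are assumptions for illustration, not taken from the patch.

```c
#include <stdio.h>

#define BIOS_START_MIN  0x20000u    /* assumed: distrust anything below 128K */
#define BIOS_START_MAX  0x9f000u    /* assumed: conventional memory ceiling */
#define BIOS_END        0x100000u   /* 1MB: end of the legacy firmware area */

/* The 16-bit word the commit calls BIOS_RAM_SIZE_KB_PTR reports usable low
 * RAM in KB; everything from there up to 1MB is treated as BIOS-reserved.
 * Out-of-range values fall back to the conservative minimum in this sketch. */
static unsigned int bios_start_from_kb(unsigned int ram_size_kb)
{
    unsigned int bios_start = ram_size_kb << 10;   /* KB -> bytes */

    if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX)
        bios_start = BIOS_START_MIN;
    return bios_start;
}

int main(void)
{
    unsigned int start = bios_start_from_kb(636);  /* 636K of usable low RAM */

    printf("reserve firmware area [0x%x, 0x%x)\n", start, BIOS_END);
    return 0;
}
```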