Age | Commit message | Author |
|
ACPICA commit fbe67c46830f10c839941f8512cac5bddcb86bd3
Index (XXXX, 2) is now supported by XXXX [2]
This patch doesn't affect the Linux kernel.
Link: https://github.com/acpica/acpica/commit/fbe67c46
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
ACPICA commit eea1f0e561893b6d6417913b2d224082fe3a0a5e
Remove use of ACPI_DEBUGGER and ACPI_DISASSEMBLER where these
defines are used around entire modules.
Note: This type of code also causes problems with IDEs.
Link: https://github.com/acpica/acpica/commit/eea1f0e5
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
into drm-fixes
Just one fix from Ilia to resolve various issues that have resulted from
buffer eviction.
* 'linux-4.3' of git://anongit.freedesktop.org/nouveau/linux-2.6:
drm/nouveau/gem: return only valid domain when there's only one
|
|
On nv50+, we restrict the valid domains to just the one where the buffer
was originally created. However, after the buffer is evicted to system
memory, we might move it back to a different domain that was not
originally valid. When sharing the buffer and retrieving its GEM_INFO
data, we still want the domain that will be valid for this buffer in a
pushbuf, not the one where it currently happens to be.
This resolves fdo#92504 and several others. These are due to suspend
evicting all buffers, making it more likely that they temporarily end up
in the wrong place.
Cc: stable@vger.kernel.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92504
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
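A minimal sketch of the idea, with illustrative names (report_domain() and the
DOMAIN_* flags are placeholders, not the actual nouveau identifiers):

  #define DOMAIN_VRAM 0x1         /* illustrative placement flags */
  #define DOMAIN_GART 0x2

  static unsigned int report_domain(unsigned int valid_domains,
                                    unsigned int current_domain)
  {
          /*
           * With only one valid domain (e.g. VRAM on nv50+), report it even
           * if eviction temporarily placed the buffer elsewhere; otherwise
           * report wherever the buffer currently resides.
           */
          if (valid_domains == DOMAIN_VRAM || valid_domains == DOMAIN_GART)
                  return valid_domains;

          return current_domain;
  }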
|
|
The correct scope for looking up the objects to generate data packages for
data-only subnodes pointed to by another data-only subnode is the scope
of the parent of that subnode and not the scope containing the _DSD object
at the top of the hierarchy (the latter works only if all of the objects
returning data-only subnode packages in a given hierarchy are in the same
scope).
Fix the code to work as expected.
Fixes: 445b0eb058f5 (ACPI / property: Add support for data-only subnodes)
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
|
|
In Linux 4.3-rc5, there is an error case in drm_dp_get_branch_device
that returns without releasing mgr->lock, resulting in a spew of kernel
messages about a kernel work function possibly having leaked a mutex
and presumably more serious adverse consequences later. This patch
changes the error to "goto out" to unlock the mutex before returning.
[airlied: grabbed from drm-next as it fixes something we've seen]
Signed-off-by: Adam J. Richter <adam_richter2004@yahoo.com>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
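The shape of the fix, as a hedged sketch with placeholder names (struct mgr,
struct branch and mgr_lookup() stand in for the real MST manager types, they
are not the drm_dp_mst code):

  struct mgr {
          struct mutex lock;
          /* ... */
  };

  static struct branch *get_branch_device(struct mgr *mgr, bool bad_path)
  {
          struct branch *b = NULL;

          mutex_lock(&mgr->lock);
          if (bad_path)
                  goto out;       /* was "return NULL;", which leaked mgr->lock */

          b = mgr_lookup(mgr);    /* placeholder for the lookup done under the lock */
  out:
          mutex_unlock(&mgr->lock);
          return b;
  }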
|
|
Pull intel-iommu bugfix from David Woodhouse:
"This contains a single fix, for when the IOMMU API is used to overlay
an existing mapping comprised of 4KiB pages, with a mapping that can
use superpages.
For the *first* superpage in the new mapping, we were correctly¹
freeing the old bottom-level page table page and clearing the link to
it, before installing the superpage. For subsequent superpages,
however, we weren't. This causes a memory leak, and a warning about
setting a PTE which is already set.
¹ Well, not *entirely* correctly. We just free the page table pages
right there and then, which is wrong. In fact they should only be
freed *after* the IOTLB is flushed so we know the hardware will no
longer be looking at them.... and in fact I note that the IOTLB
flush is completely missing from the intel_iommu_map() code path,
although it needs to be there if it's permitted to overwrite
existing mappings.
Fixing those is somewhat more intrusive though, and will probably
need to wait for 4.4 at this point"
* tag 'for-linus-20151021' of git://git.infradead.org/intel-iommu:
iommu/vt-d: fix range computation when making room for large pages
|
|
Pull MMC bugfix from Ulf Hansson:
"Here's yet another MMC fix intended for v4.3 rc7. I don't expect to
send any further pull requests for 4.3 rc[n].
MMC core:
- Don't re-tune in the reset sequence to allow re-init of the card"
* tag 'mmc-v4.3-rc5' of git://git.linaro.org/people/ulf.hansson/mmc:
mmc: core: Fix init_card in 52Mhz
|
|
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-q0lde9ajs84oi38nlyjcqbwg@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add a missing field to the perf_event_attr debug output.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1445366797-30894-4-git-send-email-andi@firstfloor.org
[ Print it between config2 and sample_regs_user (peterz)]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
ib_send_cm_sidr_rep could sometimes erase the node from the sidr rb_tree
(depending on errors in the process). Since ib_send_cm_sidr_rep is
called both from cm_sidr_req_handler and cm_destroy_id, cm_id_priv
could be either erased from the rb_tree twice or not erased at all.
Fix that by making sure it's erased only once before freeing
cm_id_priv.
Fixes: a977049dacde ('[PATCH] IB: Add the kernel CM implementation')
Signed-off-by: Doron Tsur <doront@mellanox.com>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
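The changelog does not spell out the mechanism; a common kernel idiom for
"erase exactly once" is sketched below (names are illustrative, and the caller
is assumed to hold the lock protecting the tree):

  static void sidr_erase_once(struct rb_node *node, struct rb_root *tree)
  {
          if (RB_EMPTY_NODE(node))
                  return;                 /* already erased by the other path */

          rb_erase(node, tree);
          RB_CLEAR_NODE(node);            /* make a second call a no-op */
  }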
|
|
This commit addresses some stability problems with the command buffer
submission code recently introduced:
1) Make the vmw_cmdbuf_man_process() function handle reruns internally to
avoid losing interrupts if the caller forgets to rerun on -EAGAIN.
2) Handle default command buffer allocations using inline command buffers.
This avoids rare allocation deadlocks.
3) In case of command buffer errors we might lose fence submissions.
Therefore send a new fence after each command buffer error. This will help
avoid lengthy fence waits.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
|
|
There's only one user of this helper, which can be replaced with a call
to hci_pend_le_action_lookup() and a check for params->explicit_connect.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
There's no need to clear the HCI_CONN_ENCRYPT_PEND flag in
smp_failure. In fact, this may cause the encryption tracking to get
out of sync as this has nothing to do with HCI activity.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
The hci_le_create_connection_cancel() function needs to use the hdev
pointer in many places so add a variable for it to avoid the need to
dereference the hci_conn every time.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Instead of doing all of the LE-specific handling in an else-branch in
unpair_device() create a 'done' label for the BR/EDR branch to jump to
and then remove the else-branch completely.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Use the new hci_conn_hash_lookup_le() API to look up LE connections.
This way we're guaranteed exact matches that also take into account
the address type.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Use the new hci_conn_hash_lookup_le() API to look up LE connections.
This way we're guaranteed exact matches that also take into account
the address type.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Many of the existing LE connection lookups are forced to use
hci_conn_hash_lookup_ba() which doesn't take into account the address
type. What's worse, most of the users don't bother checking that the
returned address type matches what was wanted.
This patch adds a new helper API to look up LE connections based on
their address and address type, paving the way to have the
hci_conn_hash_lookup_ba() users converted to do more precise lookups.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
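A sketch of what such a helper looks like, modelled on the description above
(the exact body in hci_core.h may differ):

  static inline struct hci_conn *hci_conn_hash_lookup_le(struct hci_dev *hdev,
                                                          bdaddr_t *ba,
                                                          __u8 ba_type)
  {
          struct hci_conn_hash *h = &hdev->conn_hash;
          struct hci_conn *c;

          rcu_read_lock();

          list_for_each_entry_rcu(c, &h->list, list) {
                  if (c->type != LE_LINK)
                          continue;

                  if (!bacmp(&c->dst, ba) && ba_type == c->dst_type) {
                          rcu_read_unlock();
                          return c;
                  }
          }

          rcu_read_unlock();

          return NULL;
  }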
|
|
The mgmt code needs to convert from mgmt/L2CAP address types to HCI in
many places. Having a dedicated helper function for this simplifies
code by shortening it and removing unnecessary 'addr_type' variables.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
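A sketch of such a helper (the name le_addr_type() is an assumption; the
mapping follows the existing BDADDR_LE_* and ADDR_LE_DEV_* definitions):

  static u8 le_addr_type(u8 mgmt_addr_type)
  {
          if (mgmt_addr_type == BDADDR_LE_PUBLIC)
                  return ADDR_LE_DEV_PUBLIC;

          return ADDR_LE_DEV_RANDOM;
  }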
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-master
A late round of KVM/ARM fixes for v4.3-rc7, fixing:
- A bug where level-triggered interrupts lowered from userspace
are still routed to the guest
- A memory leak on a failed initialization path
- A build error under certain configurations
- Several timer bugs introduced by moving the timer to active state
handling instead of the masking trick.
|
|
Merge "mvebu fixes for 4.3 (part 2)" from Gregory CLEMENT:
Fix a wrong compatible string for the A385 DB AP that prevented suspend from working
* tag 'mvebu-fixes-4.3-2' of git://git.infradead.org/linux-mvebu:
ARM: mvebu: correct a385-db-ap compatible string
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into fixes
Merge "Samsung 2nd fixes for v4.3" from Kukjin Kim:
- fix SOC detection of exynos thermal on exynos5260
- fix audio card detection on Peach boards
- fix double of_node_put() when parsing child power domains
* tag 'samsung-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
thermal: exynos: Fix register read in TMU
ARM: dts: Fix audio card detection on Peach boards
ARM: EXYNOS: Fix double of_node_put() when parsing child power domains
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes
Merge "Fixes for omaps for v4.3-rc cycle" from Tony Lindgren:
- Fix oops with LPAE and more than 2GB of memory by enabling
ZONE_DMA for LPAE. Probably no need for stable on this one as we
only recently ran into this with the mainline kernel
- Fix imprecise external abort caused by bogus SRAM init. This affects
dm814x recently merged, so no need for stable on this one AFAIK
* tag 'omap-for-v4.3/fixes-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP2+: Fix imprecise external abort caused by bogus SRAM init
ARM: OMAP2+: Fix oops with LPAE and more than 2GB of memory
|
|
Configure the ageing time in the HW for a newly bridged device
CC: Scott Feldman <sfeldma@gmail.com>
CC: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Elad Raz <eladr@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is decrementing the pointer, instead of the value stored in the
pointer. KASan detects it as an out of bounds reference.
Reported-by: "Berry Cheng 程君(成淼)" <chengmiao.cj@alibaba-inc.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
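A standalone illustration of the difference (not the driver code itself):

  #include <stdio.h>

  int main(void)
  {
          int vals[2] = { 5, 7 };
          int *p = &vals[1];

          *p--;           /* post-decrement binds to p: the pointer moves to vals[0] */
          (*p)--;         /* decrements the value pointed to: vals[0] becomes 4 */

          printf("%d %d\n", vals[0], vals[1]);    /* prints "4 7" */
          return 0;
  }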
|
|
Sometimes xennet_create_queues() may fail to create all of the requested
queues; update num_queues to the number actually created to avoid a NULL
pointer dereference.
Signed-off-by: Joe Jin <joe.jin@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: David S. Miller <davem@davemloft.net>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
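A hedged sketch of the idea (init_queue() and create_queues() are placeholders;
the real change is in xennet_create_queues() and its caller):

  /* Create up to *num_queues queues and report back how many really exist. */
  static int create_queues(struct netfront_info *info, unsigned int *num_queues)
  {
          unsigned int i;

          for (i = 0; i < *num_queues; i++) {
                  if (init_queue(info, i) < 0)
                          break;
          }

          *num_queues = i;        /* callers must only touch queues 0..i-1 */
          return i ? 0 : -ENOMEM;
  }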
|
|
Reset the transport and unlock if misc_register() fails.
Signed-off-by: Gao feng <omarapazanadi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Philipp Kirchhofer says:
====================
net: mv643xx_eth: TSO TX data corruption fixes
As previously discussed [1], the mv643xx_eth driver has some
issues with data corruption when using TCP segmentation offload (TSO).
The following patch set improves this situation by fixing two data
corruption bugs in the TSO TX path.
Before applying the patches repeatedly accessing large files located on
a SMB share on my NSA325 NAS with TSO enabled resulted in different
hash sums, which confirmed that data corruption is happening during
file transfer. After applying the patches the hash sums were the same.
As this is my first patch submission please feel free to point out any
issues with the patch set.
[1] http://thread.gmane.org/gmane.linux.network/336530
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To prevent a race between the TX DMA engine and the CPU, the writing of the
first transmit descriptor must be deferred until all following descriptors
have been updated. The network card may otherwise start transmitting before
all packet descriptors are set up correctly, which leads to data corruption
or an aborted transmit operation.
This deferral is already done in the non-TSO TX path; implement it in
the TSO TX path as well.
Signed-off-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
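A sketch of the ordering this describes (structure and names are illustrative,
not the mv643xx_eth descriptor layout):

  #define OWNED_BY_DMA    0x80000000      /* illustrative "owned by hardware" bit */

  struct tx_desc {
          u32 cmd_sts;
          /* buffer pointer, byte count, ... */
  };

  static void submit_tso_descs(struct tx_desc *desc, int count, u32 first_cmd)
  {
          int i;

          /* Fill descriptors 1..count-1 before hardware can see the first one. */
          for (i = 1; i < count; i++)
                  desc[i].cmd_sts |= OWNED_BY_DMA;

          wmb();          /* make those writes visible before releasing the first */

          desc[0].cmd_sts = first_cmd | OWNED_BY_DMA;     /* transmit starts here */
  }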
|
|
The TX DMA engine requires that buffers with a size of 8 bytes or smaller
must be 64 bit aligned. This requirement may be violated when doing TSO,
as in this case larger skb frags can be broken up and transmitted in small
parts that then have inappropriate alignment.
Fix this by checking for proper alignment before handing a buffer to the
DMA engine. If the data is misaligned realign it by copying it into the
TSO header data area.
Signed-off-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
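A sketch of the check this describes (align_small_buf() is an illustrative
helper, not the driver function):

  /* hdr_area points into the 64 bit aligned TSO header space for this packet. */
  static void *align_small_buf(void *data, int len, void *hdr_area)
  {
          if (len <= 8 && !IS_ALIGNED((unsigned long)data, 8)) {
                  memcpy(hdr_area, data, len);    /* realign by copying */
                  return hdr_area;
          }

          return data;
  }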
|
|
The hwcap string arrays used for generating the contents of
/proc/cpuinfo are currently arrays of non-const pointers.
There's no need for these pointers to be mutable, so this patch makes
them const so that they can be moved to .rodata.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
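An abbreviated before/after (the _old/_new suffixes only exist to show both
forms side by side):

  /* before: mutable array of mutable pointers, placed in .data */
  static char *hwcap_str_old[] = { "fp", "asimd", "evtstrm", NULL };

  /* after: const array of const pointers, eligible for .rodata */
  static const char *const hwcap_str_new[] = { "fp", "asimd", "evtstrm", NULL };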
|
|
Use the system wide safe value from the new API for safer
decisions
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Use the system wide value of ID_AA64DFR0 to make safer decisions
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
FP/ASIMD support is detected in fpsimd_init(), which is built in
unconditionally. Let's move the hwcap handling to the central place.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Extend struct arm64_cpu_capabilities to handle the HWCAP detection
and make use of the system wide value of the feature registers for
a reliable set of HWCAPs.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Now that we can reliably read the system wide safe value for a
feature register, use that to compute the system capability.
This patch also replaces the 'feature-register-specific'
methods with a generic routine to check the capability.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
At the moment we run through the arm64_features capability list for
each CPU and set the capability if one of the CPU supports it. This
could be problematic in a heterogeneous system with differing capabilities.
Delay the CPU feature checks until all the enabled CPUs are up (i.e.,
smp_cpus_done()), so that we can make better decisions based on the
overall system capability. Once we decide and advertise the capabilities
the alternatives can be applied. From this state, we cannot roll back
a feature to disabled based on the values from a new hotplugged CPU,
due to the runtime patching and other reasons. So, for all new CPUs,
we need to make sure that they have the established system capabilities.
Failing that, we bring the CPU down, preventing it from coming online.
Once the capabilities are decided, any new CPU booting up goes through
verification to ensure that it has all the enabled capabilities and also
invokes the respective enable() method on the CPU.
The CPU errata checks are not delayed and are still executed per CPU
to detect the respective capabilities. If we ever come across a non-errata
capability that needs to be checked on each CPU, we could introduce it via
a new capability table (or a flag), which can be processed per CPU.
The next patch will make the feature checks use the system wide
safe value of a feature register.
NOTE: The enable() method associated with a capability is scheduled
on all the CPUs (which is the only use case at the moment). If we need
a different type of 'enable()' which only needs to be run once on any CPU,
we should be able to handle that when needed.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: static variable and coding style fixes]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
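A simplified sketch of the verification step described above (names are
illustrative except where the text mentions them, and the real loop in
cpufeature.c is more involved):

  static bool verify_local_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
  {
          for (; caps->matches; caps++) {
                  if (!cpus_have_cap(caps->capability))
                          continue;               /* not advertised system wide */

                  if (!caps->matches(caps)) {
                          pr_crit("CPU%d: missing capability: %s\n",
                                  smp_processor_id(), caps->desc);
                          return false;           /* caller keeps this CPU offline */
                  }

                  if (caps->enable)
                          caps->enable(NULL);     /* run the per-CPU enable hook */
          }

          return true;
  }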
|
|
check_cpu_capabilities runs through a given list of caps and
checks if the system has the cap, updates the system capability
bitmap and also runs any enable() methods associated with them.
All of this is not quite obvious from the name 'check'. This
patch splits check_cpu_capabilities into two parts:
1) update_cpu_capabilities
=> Runs through the given list and updates the system
wide capability map.
2) enable_cpu_capabilities
=> Runs through the given list and invokes enable() (if any)
for the caps enabled on the system.
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
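A simplified sketch of the two halves (the loop structure follows the
description above; the real functions differ in detail):

  static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
  {
          for (; caps->matches; caps++) {
                  if (!caps->matches(caps))
                          continue;

                  pr_info("detected capability: %s\n", caps->desc);
                  cpus_set_cap(caps->capability);         /* system wide map */
          }
  }

  static void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
  {
          for (; caps->matches; caps++)
                  if (caps->enable && cpus_have_cap(caps->capability))
                          caps->enable(NULL);             /* run the enable() hook */
  }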
|
|
Make use of the system wide safe register to decide the support
for mixed endian.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Add an API for reading the safe CPUID value across the
system from the new infrastructure.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
This patch consolidates the CPU Sanity check to the new infrastructure.
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
This patch adds an infrastructure to keep track of the CPU feature
registers on the system. For each register, the infrastructure keeps
track of the system wide safe value of the feature bits. It also tracks
which fields of a register should be matched strictly across all
the CPUs on the system for the SANITY check infrastructure.
The feature bits are classified into the following three types, depending on
the implication of the possible values. This information is used to
decide the safe value for a feature.
LOWER_SAFE - The smaller value is safer
HIGHER_SAFE - The bigger value is safer
EXACT - We can't decide between the two, so a predefined safe_value is used.
This infrastructure will be later used to make better decisions for:
- Kernel features (e.g, KVM, Debug)
- SANITY Check
- CPU capability
- ELF HWCAP
- Exposing CPU Feature register to userspace.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: whitespace fix]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
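A simplified sketch of the classification and of how two CPUs' values could be
folded into a safe value (modelled on the text above; the real cpufeature
structures carry more fields):

  enum ftr_type {
          FTR_EXACT,              /* Use a predefined safe value */
          FTR_LOWER_SAFE,         /* Smaller value is safe */
          FTR_HIGHER_SAFE,        /* Bigger value is safe */
  };

  struct arm64_ftr_bits {
          enum ftr_type   type;
          bool            strict;         /* strict matching for the SANITY check? */
          u8              shift;
          u8              width;
          s64             safe_val;       /* safe value for FTR_EXACT fields */
  };

  static s64 ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur)
  {
          switch (ftrp->type) {
          case FTR_EXACT:
                  return ftrp->safe_val;
          case FTR_LOWER_SAFE:
                  return new < cur ? new : cur;
          case FTR_HIGHER_SAFE:
                  return new > cur ? new : cur;
          }

          return cur;
  }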
|
|
Introduce a helper to extract a cpuid feature field of any given
width.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
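A sketch of such a helper: shift the field up to the top of the 64-bit value
and arithmetic-shift it back down, so signed feature fields get sign-extended
(modelled on the arm64 cpuid_feature_extract_* helpers):

  static inline int cpuid_feature_extract_field_width(u64 features, int field,
                                                      int width)
  {
          return (s64)(features << (64 - width - field)) >> (64 - width);
  }

  /* Example: the 4-bit field starting at bit 16 of an ID register value. */
  /* int fp = cpuid_feature_extract_field_width(reg_val, 16, 4); */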
|
|
This patch moves the /proc/cpuinfo handling code:
arch/arm64/kernel/{setup.c to cpuinfo.c}
No functional changes
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Move the mixed endian support detection code to cpufeature.c
from cpuinfo.c. This also moves the update_cpu_features()
used by mixed endian detection code, which will get more
functionality.
Also moves the ID register field shifts to asm/sysreg.h,
where all the useful definitions will end up in later patches.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
This patch moves the CPU feature detection code from
arch/arm64/kernel/{setup.c to cpufeature.c}
The plan is to consolidate all the CPU feature handling
in cpufeature.c.
Apart from changing pr_fmt from "alternatives" to "cpu features",
there are no functional changes.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
At the moment the boot CPU stores the cpuinfo long before the
PERCPU areas are initialised by the kernel. This could be problematic
as the non-boot CPU data structures might get copied with the data
from the boot CPU, giving us no chance to detect if a particular CPU
updated its cpuinfo. This patch delays the boot cpu store to
smp_prepare_boot_cpu().
It also kills setup_processor(), which no longer does meaningful
work.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Delay the ELF HWCAP initialisation until all the (enabled) CPUs are
up, i.e, smp_cpus_done(). This is in preparation for detecting the
common features across the CPUS and creating a consistent ELF HWCAP
for the system.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
At early boot, we print the CPU version/revision. On a heterogeneous
system, we could have different types of CPUs. Print the CPU info for
all active cpus. Also, secondary CPUs print the message only when
they come online.
Also, remove the redundant 'revision' information which doesn't
make any sense without the 'variant' field.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|