Age | Commit message | Author |
|
Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc: TIF_ABI_PENDING bit removal
powerpc/pseries: Fix xics build without CONFIG_SMP
powerpc/4xx: Add pcix type 1 transactions
powerpc/pci: Add missing call to header fixup
powerpc/pci: Add missing hookup to pci_slot
powerpc/pci: Add calls to set_pcie_port_type() and set_pcie_hotplug_bridge()
powerpc/40x: Update the PowerPC 40x board defconfigs
powerpc/44x: Update PowerPC 44x board defconfigs
|
|
The SH7780 PCI controller supports 3 different ranges of PCI memory in
addition to its PCI I/O window. In the case of 29-bit mode, only 2 memory
windows are supported, while in 32-bit mode all 3 are visible. This
attempts to make the resource handling completely dynamic and to permit
platforms to map in as many apertures as they can handle.
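As a rough illustration (the table name and entries below are assumptions, not
the actual pci-sh7780 code), the apertures end up being described as ordinary
resources that a platform may or may not choose to map:

  /* Illustrative only: one I/O window plus up to three memory windows;
   * the third memory window is only usable in 32-bit mode. */
  static struct resource sh7780_pci_resources[] = {
          { .name = "PCI IO",    .flags = IORESOURCE_IO  },
          { .name = "PCI MEM 0", .flags = IORESOURCE_MEM },
          { .name = "PCI MEM 1", .flags = IORESOURCE_MEM },
          { .name = "PCI MEM 2", .flags = IORESOURCE_MEM },
  };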
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
These were never handled before, so implement some common infrastructure
to support them, then make use of that in the SH7780-specific code. In
practice there is little here that can not be generalized for SH4 parts,
which will be an incremental change as the 7780/7751 code is gradually
unified.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
register_pci_controller() can fail, but presently is a void function.
Change this over to an int so that we can bail early before continuing on
with post-registration initialization (such as throwing the controller in
to 66MHz mode in the case of the SH7780 host controller).
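A minimal sketch of the intended flow (the 66MHz helper name below is made up
for illustration):

  ret = register_pci_controller(chan);
  if (unlikely(ret))
          return ret;     /* bail out before any post-registration setup */

  /* Post-registration initialization, e.g. switching the SH7780 host
   * controller into 66MHz mode (hypothetical helper name). */
  sh7780_66mhz_enable(chan);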
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This adds some helper glue for scanning the bus and determining if all
of the devices are 66MHz capable or not before flipping on 66MHz mode.
This isn't quite to spec, but it's fairly consistent with what other
embedded controllers end up having to do.
Scanning code cribbed from the MIPS txx9 PCI code.
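The scan amounts to something like this (a sketch, not the exact SH or txx9
code): walk the devices on the bus and check the 66MHz-capable bit in each
function's status register.

  static int bus_is_66mhz_capable(struct pci_bus *bus)
  {
          struct pci_dev *dev;
          u16 status;

          list_for_each_entry(dev, &bus->devices, bus_list) {
                  pci_read_config_word(dev, PCI_STATUS, &status);
                  if (!(status & PCI_STATUS_66MHZ))
                          return 0;  /* one slow device keeps the bus at 33MHz */
          }
          return 1;
  }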
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Here are the powerpc bits to remove TIF_ABI_PENDING now that
set_personality() is called at the appropriate place in exec.
Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
For systems that have more than 512MB we need to set up an additional
mapping. This fixes up the rounding to the next power of two and splits
out the mapping accordingly between the two local bus mapping windows.
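Roughly (an illustrative sketch using generic helpers, not the exact SH code):

  /* Round the memory size up to the next power of two and split the
   * result across the two local bus mapping windows. */
  size = roundup_pow_of_two(memory_end - memory_start);
  window0_size = size / 2;                /* assumption: split down the middle */
  window1_size = size - window0_size;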
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
desc->affinity doesn't exist in that case. Let's use a macro for
the UP variant of get_irq_server(); it's the easiest way and avoids
evaluating the arguments.
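Something along these lines (a sketch of the idea, not necessarily the exact
xics code):

  #ifdef CONFIG_SMP
  static int get_irq_server(unsigned int virq, const struct cpumask *cpumask,
                            unsigned int strict_check)
  {
          /* ... uses desc->affinity / cpumask on SMP ... */
  }
  #else
  /* UP: a macro avoids evaluating the cpumask argument entirely. */
  #define get_irq_server(virq, cpumask, strict_check) (default_server)
  #endif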
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Current behaviour is to generate the IT instruction only for Thumb-2
code. However, the kernel helpers in entry-armv.S are compiled to ARM in
a unified syntax file (if THUMB2_KERNEL). Recent compilers warn about
missing IT instructions in unified assembly syntax files. The patch
changes the "-mimplicit-it" gas option to "always".
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Signed-off-by: Colin Tuckley <colin.tuckley@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch fixes the regression in functionality where the
kernel debugger and the perf API do not nicely share hw
breakpoint reservations.
The kernel debugger cannot use any mutex_lock() calls because it
can start the kernel running from an invalid context.
A mutex-free version of the reservation API needed to be
created for the kernel debugger to safely update hw breakpoint
reservations.
The possibility of a breakpoint reservation being concurrently
processed at the time that kgdb interrupts the system is
improbable. Should this corner case occur, the end user is
warned, and the kernel debugger will prohibit updating the
hardware breakpoint reservations.
Any time the kernel debugger reserves a hardware breakpoint it
will be a system wide reservation.
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: kgdb-bugreport@lists.sourceforge.net
Cc: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: torvalds@linux-foundation.org
LKML-Reference: <1264719883-7285-3-git-send-email-jason.wessel@windriver.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
In the 2.6.33 kernel, the hw_breakpoint API is now used for the
performance event counters. The hw_breakpoint_handler() now
consumes the hw breakpoints that were previously set by kgdb
arch specific code. In order for kgdb to work in conjunction
with this core API change, kgdb must use some of the low level
functions of the hw_breakpoint API to install, uninstall, and
deal with hw breakpoint reservations.
The kgdb core required a change to call kgdb_disable_hw_debug
anytime a slave cpu enters kgdb_wait() in order to keep all the
hw breakpoints in sync as well as to prevent hitting a hw
breakpoint while kgdb is active.
During the architecture-specific initialization of kgdb, it will
pre-allocate 4 disabled (struct perf_event **) structures. Kgdb
will use these to manage the capabilities for the 4 hw
breakpoint registers, per cpu. Right now the hw_breakpoint API
does not have a way to ask how many breakpoints are available
on each CPU, so it is possible that the install of a breakpoint
might fail when kgdb restores the system to the run state. The
intent of this patch is to first get the basic functionality of
hw breakpoints working and leave it to the person debugging the
kernel to understand what hw breakpoints are in use and what
restrictions have been imposed as a result. Breakpoint
constraints will be dealt with in a future patch.
While in atomic context, the x86 specific kgdb code will call
arch_uninstall_hw_breakpoint() and arch_install_hw_breakpoint()
to manage the cpu specific hw breakpoints.
The net result of these changes allows kgdb to use the same pool
of hw_breakpoints that are used by the perf event API, but
neither knows about future reservations for the available hw
breakpoint slots.
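The per-slot bookkeeping has roughly this shape (field names are illustrative,
not the exact arch/x86 kgdb code):

  /* One entry per hw breakpoint register; the perf events themselves are
   * pre-allocated disabled, one per cpu, at kgdb init time. */
  static struct {
          unsigned        enabled;
          unsigned long   addr;
          int             len;
          int             type;
          struct perf_event **pev;        /* per-cpu event pointers */
  } breakinfo[4];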
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: kgdb-bugreport@lists.sourceforge.net
Cc: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: torvalds@linux-foundation.org
LKML-Reference: <1264719883-7285-2-git-send-email-jason.wessel@windriver.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Presently headers_check complains about linux/kdebug.h being unexported,
so just bump the __KERNEL__ ifdef up, as per the x86 change.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
The irq_desc needs to be accessed with irq_to_desc(); this fixes up a
build error with irq_desc being undefined.
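The fix boils down to (illustrative):

  /* before (breaks when the irq_desc[] array isn't exposed): */
  desc = irq_desc + irq;
  /* after: */
  desc = irq_to_desc(irq);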
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Commit 6aa542a694dc9ea4344a8a590d2628c33d1b9431 added a quirk for the
Intel DG45ID board due to low memory corruption. The Intel DG45FC
shares the same BIOS (and the same bug) as noted in:
http://bugzilla.kernel.org/show_bug.cgi?id=13736
Signed-off-by: David Härdeman <david@hardeman.nu>
LKML-Reference: <20100128200254.GA9134@hardeman.nu>
Cc: <stable@kernel.org>
Cc: Alexey Fisher <bug-track@fisher-privat.net>
Cc: ykzhao <yakui.zhao@intel.com>
Cc: Tony Bones <aabonesml@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
|
|
x86, irq: Move __setup_vector_irq() before the first irq enable in cpu online path
Lowest priority delivery of logical flat mode is broken on some systems,
such that even when IO-APIC RTE says deliver the interrupt to a particular CPU,
the interrupt subsystem delivers the interrupt to a totally different CPU.
For example, this behavior was observed on a P4-based system with an SiS
chipset, as reported by Li Zefan. We have been handling this kind of behavior
by making sure that, in logical flat mode, we assign the same vector to the
irq mappings on all 8 possible logical CPUs.
But we have been doing this initial assignment (__setup_vector_irq()) a little
too late, after interrupts had already been enabled for a short duration.
Move the __setup_vector_irq() before the first irq enable point in the
cpu online path to avoid the issue of not handling some interrupts that
wrongly hit the cpu which is still coming online.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20100129194330.283696385@sbs-t61.sc.intel.com>
Tested-by: Li Zefan <lizf@cn.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
|
|
With the recent change to stop reserving IRQ0_VECTOR..IRQ15_VECTOR on all
CPUs, we start with irqs 0..15 directed to (and handled on) cpu 0.
In logical flat mode, once the APs are online (and before irqbalance comes
into the picture), the kernel intends to handle these IRQs on any CPU, as
logical flat mode allows specifying multiple CPUs for the irq destination
and the chipset-based routing can deliver the interrupt to any one of
the specified CPUs. This was broken by our recent change, which ended up
using only cpu 0 as the destination, even when the kernel was specifying
all online CPUs for the logical flat mode case.
Fix this by updating vector allocation domain (cfg->domain) for legacy irqs,
when the IO-APIC handles them.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20100129194330.207790269@sbs-t61.sc.intel.com>
Tested-by: Li Zefan <lizf@cn.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
|
|
The only usage of _toggle_gpio_edge_triggering is in
an #ifdef CONFIG_ARCH_OMAP1 block, so only provide it if
CONFIG_ARCH_OMAP1 is defined, too.
This fixes a compiler warning:
arch/arm/plat-omap/gpio.c:758: warning: '_toggle_gpio_edge_triggering' defined but not used
when compiling for ARCH_OMAP2, ARCH_OMAP3 or ARCH_OMAP4.
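I.e., something like (sketch):

  #ifdef CONFIG_ARCH_OMAP1
  /* Only the OMAP1 code path references this helper. */
  static void _toggle_gpio_edge_triggering(struct gpio_bank *bank, int gpio)
  {
          /* ... */
  }
  #endif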
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
|
|
Convert CONFIG_ARCH_OMAP34XX to CONFIG_ARCH_OMAP3, and
CONFIG_ARCH_OMAP24XX to CONFIG_ARCH_OMAP2, in preparation for Tony's
multi-OMAP patches.
While here, update some copyrights, convert instances of "34xx" to
"3xxx" where applicable, and convert preprocessor directives of the
form
#if defined(CONFIG_ARCH_OMAP2) | defined(CONFIG_ARCH_OMAP3)
to
#if defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3)
for standardization.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Tony Lindgren <tony@atomide.com>
|
|
The macros defining the shift bits in registers for various
register bit fields are defined as 1 << n.
Instead define them as n. They can then be used as val << n.
The changes are generated by updating the script which autogenerates
the files modified in the patch.
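For example (the macro name below is hypothetical):

  -#define OMAP24XX_EN_MPU_SHIFT           (1 << 5)
  +#define OMAP24XX_EN_MPU_SHIFT           5    /* used as: val << OMAP24XX_EN_MPU_SHIFT */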
Signed-off-by: Rajendra Nayak <rnayak@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
Rename the omap2_clk_init() in the OMAP2, 3, and 4 clock code to be
omap2xxx_clk_init(), omap3xxx_clk_init(), etc. Remove all traces of
the (commented) old virt_prcm_set code from omap3xxx_clk_init() and
omap4xxx_clk_init(), since this will be handled with the OPP code that
is cooking in the PM branch.
After this patch, there should be very little else in the clock code
that blocks a multi-OMAP 2+3 kernel. (OMAP2420+OMAP2430 still has some
outstanding issues that need to be resolved; this is pending on some
additions to the hwmod data.)
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
Resolve all remaining sparse warnings in the OMAP clock code.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
Move all static functions up to the top of the file to match the
practice in other OMAP clock code. Make omap3_noncore_dpll_program()
static (noted by sparse) and prepend an underscore to the function
name to mark that it is file-local.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
omap2_clk_prepare_for_reboot() is only applicable to OMAP2xxx chips,
so rename it to omap2xxx_clk_prepare_for_reboot() and only call it when
running on OMAP2xxx chips. Remove the old stub in the OMAP3 clock code.
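So the reboot path ends up looking roughly like (sketch):

  if (cpu_is_omap24xx())
          omap2xxx_clk_prepare_for_reboot();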
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
Now that almost all of the code has been removed from clock2xxx.c and
clock34xx.c, many of the includes are now unnecessary and can be removed.
While we're here, standardize the initial comment blocks.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Richard Woodruff <r-woodruff2@ti.com>
Cc: Jouni Högander <jouni.hogander@nokia.com>
|
|
In the OMAP3xxx clock code, remove the #ifdef CONFIG_ARCH_OMAP3 in
clock34xx.c, since this file is only compiled for OMAP3xxx builds. Also,
rename omap2_clk_arch_init in this file to omap3xxx_clk_arch_init() to
pave the way for multi-OMAP kernels. Ensure that it is not executed
on non-OMAP3xxx systems.
In the OMAP2xxx clock code, rename omap2_clk_arch_init in this file to
omap2xxx_clk_arch_init() to pave the way for multi-OMAP kernels.
Ensure that it is not executed on non-OMAP2xxx systems.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
The host controllers only support type 1, so there's not much else to
test for. Some of the older controllers also supported type 2 accesses,
but we've never supported those, and likely never will. Beyond that, the
P1SEG test is meaningless for 32-bit mode, so rather than refactoring it,
just kill the type 1 test off completely.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Here are the sparc bits to remove TIF_ABI_PENDING now that
set_personality() is called at the appropriate place in exec.
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Now that the previous commit made it possible to do the personality
setting at the point of no return, we do just that for ELF binaries.
And suddenly all the reasons for that insane TIF_ABI_PENDING bit go
away, and we can just make SET_PERSONALITY() do the obvious thing
for a 32-bit compat process.
Everything becomes much more straightforward this way.
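In spirit, SET_PERSONALITY() can now simply be something like (an illustrative
sketch, not the exact x86 definition):

  #define SET_PERSONALITY(ex)                                             \
          set_personality(((ex).e_ident[EI_CLASS] == ELFCLASS32) ?        \
                          PER_LINUX32 : PER_LINUX)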
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
'flush_old_exec()' is the point of no return when doing an execve(), and
it is pretty badly misnamed. It doesn't just flush the old executable
environment, it also starts up the new one.
Which is very inconvenient for things like setting up the new
personality, because we want the new personality to affect the starting
of the new environment, but at the same time we do _not_ want the new
personality to take effect if flushing the old one fails.
As a result, the x86-64 '32-bit' personality is actually done using this
insane "I'm going to change the ABI, but I haven't done it yet" bit
(TIF_ABI_PENDING), with SET_PERSONALITY() not actually setting the
personality, but just the "pending" bit, so that "flush_thread()" can do
the actual personality magic.
This patch in no way changes any of that insanity, but it does split the
'flush_old_exec()' function up into a preparatory part that can fail
(still called flush_old_exec()), and a new part that will actually set
up the new exec environment (setup_new_exec()). All callers are changed
to trivially comply with the new world order.
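Callers then follow a pattern along these lines (a sketch; names vary per
binfmt handler):

  retval = flush_old_exec(bprm);          /* may still fail and bail out safely */
  if (retval)
          goto out_free;

  /* Past the point of no return: safe to set the personality now. */
  SET_PERSONALITY(elf_ex);
  setup_new_exec(bprm);                   /* start up the new environment */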
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Newer SH parts are now commonly shipping with multiple controllers, so
we wire up PCI domain support to deal with them. Shamelessly cloned from
the MIPS implementation.
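With multiple controllers, the domain number can simply be the controller
index (sketch; the field name is an assumption):

  int pci_domain_nr(struct pci_bus *bus)
  {
          struct pci_channel *hose = bus->sysdata;

          return hose->index;
  }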
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Presently we just call in to request_resource() for the ioport and iomem
resources without checking for errors. This has already hidden a couple
of bugs, so add some error handling in for good measure.
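I.e., roughly (illustrative; res_io/res_mem stand in for the controller's
window resources):

  ret = request_resource(&ioport_resource, res_io);
  if (unlikely(ret < 0))
          return ret;

  ret = request_resource(&iomem_resource, res_mem);
  if (unlikely(ret < 0)) {
          release_resource(res_io);
          return ret;
  }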
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This consolidates the PCI initialization code for all of the pci-sh7780
users, and sets up the memory window dynamically as opposed to using
hardcoded window positions.
A number of bugs were fixed at the same time, including the PIO handling
and master abort timeout settings being incorrect.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Merge reason: We want to queue up a dependent patch. Also update to
later -rc's.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Reported-by: Tim Sander <tstone@vlsi.informatik.tu-darmstadt.de>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
At enable time the counter might still have a ->idx pointing to
a previously occupied location that might now be taken by
another event. Resetting the counter at that location with data
from this event will destroy the other counter's count.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.261477183@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
The new Intel documentation includes Westmere arch specific
event maps that are significantly different from the Nehalem
ones. Add support for this generation.
Found the CPUID model numbers on Wikipedia.
Also amend some Nehalem constraints; spotted those when looking
for the differences between Nehalem and Westmere.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.151865645@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Put the recursion avoidance code in the generic hook instead of
replicating it in each implementation.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.057507285@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Since constraints are specified on the event number, not on the event
number and unit mask, shorten the constraint masks so that we'll
actually match something.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221121.967610372@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Share the meat of the x86_pmu_disable() code with hw_perf_enable().
Also remove the barrier() from that code, since I could not convince
myself we actually need it.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
- Remove stray debug code
- Improve ugly macros a bit
- Remove some whitespace damage
- (Also fix up some accumulated damage in perf_event.h)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Stephane Eranian <eranian@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
|
|
x86_pmu_disable() removes the event from the cpuc->event_list[]; however,
since an event can only be on that list once, stop looking after we found
it.
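I.e. (sketch):

  for (i = 0; i < cpuc->n_events; i++) {
          if (event == cpuc->event_list[i]) {
                  /* ... unschedule and compact the list ... */
                  break;          /* the event appears at most once */
          }
  }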
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Remove num from the fast path and save a few ops.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155536.056430539@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Add a weight member to the constraint structure and avoid recomputing the
weight at runtime.
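Roughly (an illustrative diff, not the exact structure layout):

   struct event_constraint {
           u64     idxmsk64;
           u64     code;
           u64     cmask;
  +        int     weight;        /* precomputed bit weight of the index mask */
   };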
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.963944926@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Instead of copying bitmasks around, pass pointers to the constraint
structure.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.887853503@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Introduce INTEL_EVENT_CONSTRAINT and FIXED_EVENT_CONSTRAINT to reduce
some line length and typing work.
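The wrappers are roughly (illustrative; EVENT_MASK/FIXED_MASK stand in for the
real mask constants):

  #define INTEL_EVENT_CONSTRAINT(c, n)    EVENT_CONSTRAINT(c, n, EVENT_MASK)
  #define FIXED_EVENT_CONSTRAINT(c, n)    EVENT_CONSTRAINT(c, n, FIXED_MASK)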
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.688730371@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
We need this to be u64 for direct assignment, but the bitmask functions
all work on unsigned long, leading to cast heaven; solve this by using a
union.
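I.e., something like (sketch):

  struct event_constraint {
          union {
                  /* for the bitmap_*() helpers */
                  unsigned long   idxmsk[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
                  /* for direct u64 assignment  */
                  u64             idxmsk64;
          };
          /* ... code, cmask, ... */
  };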
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.595961269@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Constraints get defined as u64 but are used in long quantities and then
cast to long.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.504916780@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
GCC was complaining the stack usage was too large, so dynamically allocate
the structure instead.
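I.e., roughly (a sketch of the validate-path scratch structure):

  struct cpu_hw_events *fake_cpuc;

  fake_cpuc = kzalloc(sizeof(*fake_cpuc), GFP_KERNEL);
  if (!fake_cpuc)
          return -ENOMEM;
  /* ... validate against fake_cpuc instead of a large on-stack copy ... */
  kfree(fake_cpuc);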
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.411197266@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Implement correct fastpath scheduling, i.e., reuse previous assignment.
Signed-off-by: Stephane Eranian <eranian@google.com>
[ split from larger patch]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b588464.1818d00a.4456.383b@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|