path: root/drivers/cpufreq
author     Linus Torvalds <torvalds@linux-foundation.org>   2013-07-03 14:35:40 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2013-07-03 14:35:40 -0700
commit     f991fae5c6d42dfc5029150b05a78cf3f6c18cc9 (patch)
tree       d140deb437bde0631778b4984eeb72c1f4ee0c1d /drivers/cpufreq
parent     d4141531f63a29bb2a980092b6f2828c385e6edd (diff)
parent     2c843bd92ec276ecb68504b3b5ffa7066183f032 (diff)
Merge tag 'pm+acpi-3.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI updates from Rafael Wysocki:
 "This time the total number of ACPI commits is slightly greater than the number of cpufreq commits, but Viresh Kumar (who works on cpufreq) remains the most active patch submitter.

  To me, the most significant change is the addition of offline/online device operations to the driver core (with Greg's blessing) and the related modifications of the ACPI core hotplug code. Next are the freezer updates from Colin Cross that should make the freezing of tasks a bit less heavy-weight.

  We also have a couple of regression fixes, a number of fixes for issues that have not been identified as regressions, two new drivers and a bunch of cleanups all over.

  Highlights:

  - Hotplug changes to support graceful hot-removal failures.

    It sometimes is necessary to fail device hot-removal operations gracefully if they cannot be carried out completely. For example, if memory from a memory module being hot-removed has been allocated for the kernel's own use and cannot be moved elsewhere, it's desirable to fail the hot-removal operation in a graceful way rather than to crash the kernel, but currently a success or a kernel crash are the only possible outcomes of an attempted memory hot-removal. Needless to say, that is not a very attractive alternative and it had to be addressed.

    However, in order to make it work for memory, I first had to make it work for CPUs, and for this purpose I needed to modify the ACPI processor driver. It's been split into two parts, a resident one handling the low-level initialization/cleanup and a modular one playing the actual driver's role (but it binds to the CPU system device objects rather than to the ACPI device objects representing processors). That's been sort of like a live brain surgery on a patient who's riding a bike.

    So this is a little scary, but since we found and fixed a couple of regressions it caused to happen during the early linux-next testing (a month ago), nobody has complained.

    As a bonus we remove some duplicated ACPI hotplug code, because the ACPI-based CPU hotplug is now going to use the common ACPI hotplug code.

  - Lighter-weight freezing of tasks.

    These changes from Colin Cross and Mandeep Singh Baines are targeted at making the freezing of tasks a bit less heavy-weight an operation. They reduce the number of tasks woken up every time during the freezing, by using the observation that the freezer simply doesn't need to wake up some of them and wait for them all to call refrigerator(). The time needed for the freezer to decide to report a failure is reduced too.

    Also reintroduced is the check causing a lockdep warning to trigger when try_to_freeze() is called with locks held (which is generally unsafe and shouldn't happen).

  - cpufreq updates

    First off, a commit from Srivatsa S Bhat fixes a resume regression introduced during the 3.10 cycle causing some cpufreq sysfs attributes to return wrong values to user space after resume. The fix is kind of fresh, but also it's pretty obvious once Srivatsa identified the root cause.

    Second, we have a new freqdomain_cpus sysfs attribute for the acpi-cpufreq driver to provide information previously available via related_cpus. From Lan Tianyu.

    Finally, we fix a number of issues, mostly related to the CPUFREQ_POSTCHANGE notifier and cpufreq Kconfig options, and clean up some code. The majority of changes are from Viresh Kumar, with bits from Jacob Shin, Heiko Stübner, Xiaoguang Chen, Ezequiel Garcia, Arnd Bergmann, and Tang Yuantian.
  - ACPICA update

    The usual bunch of updates from the ACPICA upstream. During the 3.4 cycle we introduced support for ACPI 5 extended sleep registers, but they are only supposed to be used if the HW-reduced mode bit is set in the FADT flags, and the code attempted to use them without checking that bit. That caused suspend/resume regressions to happen on some systems. A fix from Lv Zheng causes those registers to be used only if the HW-reduced mode bit is set. Apart from this, some other ACPICA bugs are fixed and code cleanups are made by Bob Moore, Tomasz Nowicki, Lv Zheng, Chao Guan, and Zhang Rui.

  - cpuidle updates

    A new driver for Xilinx Zynq processors is added by Michal Simek. Multidriver support simplification, the addition of some missing kerneldoc comments, and Kconfig-related fixes come from Daniel Lezcano.

  - ACPI power management updates

    Changes to make suspend/resume work correctly in Xen guests from Konrad Rzeszutek Wilk, a sparse warning fix from Fengguang Wu, and cleanups and fixes of the ACPI device power state selection routine.

  - ACPI documentation updates

    Some previously missing pieces of ACPI documentation are added by Lv Zheng and Aaron Lu (hopefully, that will help people to understand how the ACPI subsystem works) and one outdated doc is updated by Hanjun Guo.

  - Assorted ACPI updates

    We finally nailed down the IA-64 issue that was the reason for reverting commit 9f29ab11ddbf ("ACPI / scan: do not match drivers against objects having scan handlers"), so we can fix it and move the ACPI scan handler check added to the ACPI video driver back to the core.

    A mechanism for adding CMOS RTC address space handlers is introduced by Lan Tianyu to allow some EC-related breakage to be fixed on some systems.

    A spec-compliant implementation of acpi_os_get_timer() is added by Mika Westerberg.

    The evaluation of _STA is added to do_acpi_find_child() to avoid situations in which a pointer to a disabled device object is returned instead of an enabled one with the same _ADR value. From Jeff Wu.

    Intel BayTrail PCH (Platform Controller Hub) support is added to the ACPI driver for Intel Low-Power Subsystems (LPSS) and that driver is modified to work around a couple of known BIOS issues. Changes from Mika Westerberg and Heikki Krogerus.

    The EC driver is fixed by Vasiliy Kulikov to use get_user() and put_user() instead of dereferencing user space pointers blindly.

    Code cleanups are made by Bjorn Helgaas, Nicholas Mazzuca and Toshi Kani.

  - Assorted power management updates

    The "runtime idle" helper routine is changed to take the return values of the callbacks executed by it into account and to call rpm_suspend() if they return 0, which allows us to reduce the overall code bloat a bit (by dropping some code that's not necessary any more after that modification).

    The runtime PM documentation is updated by Alan Stern (to reflect the "runtime idle" behavior change).

    New trace points for PM QoS are added by Sahara (<keun-o.park@windriver.com>). PM QoS documentation is updated by Lan Tianyu.

    Code cleanups are made and minor issues are addressed by Bernie Thompson, Bjorn Helgaas, Julius Werner, and Shuah Khan.

  - devfreq updates

    A new driver for the Exynos5-bus device from Abhilash Kesavan. Minor cleanups, fixes and a MAINTAINERS update from MyungJoo Ham, Abhilash Kesavan, Paul Bolle, Rajagopal Venkat, and Wei Yongjun.

  - OMAP power management updates

    Adaptive Voltage Scaling (AVS) SmartReflex voltage control driver updates from Andrii Tseglytskyi and Nishanth Menon."
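A note on the "runtime idle" change described under "Assorted power management updates" above: with the reworked helper, a driver's ->runtime_idle() callback that returns 0 now asks the PM core to suspend the device itself, so explicit pm_runtime_suspend() calls in the idle path can usually be dropped. Below is a minimal sketch of what that looks like from a driver's perspective; the foo_* names are hypothetical and not part of this series.

	#include <linux/device.h>
	#include <linux/pm.h>
	#include <linux/pm_runtime.h>

	static int foo_runtime_suspend(struct device *dev) { return 0; }
	static int foo_runtime_resume(struct device *dev)  { return 0; }

	static int foo_runtime_idle(struct device *dev)
	{
		/*
		 * Returning 0 tells the reworked PM core to go ahead and call
		 * rpm_suspend() on our behalf; returning e.g. -EBUSY keeps the
		 * device active.
		 */
		return 0;
	}

	static const struct dev_pm_ops foo_pm_ops = {
		SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume,
				   foo_runtime_idle)
	};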
* tag 'pm+acpi-3.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (162 commits)
  cpufreq: Fix cpufreq regression after suspend/resume
  ACPI / PM: Fix possible NULL pointer deref in acpi_pm_device_sleep_state()
  PM / Sleep: Warn about system time after resume with pm_trace
  cpufreq: don't leave stale policy pointer in cdbs->cur_policy
  acpi-cpufreq: Add new sysfs attribute freqdomain_cpus
  cpufreq: make sure frequency transitions are serialized
  ACPI: implement acpi_os_get_timer() according the spec
  ACPI / EC: Add HP Folio 13 to ec_dmi_table in order to skip DSDT scan
  ACPI: Add CMOS RTC Operation Region handler support
  ACPI / processor: Drop unused variable from processor_perflib.c
  cpufreq: tegra: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: s3c64xx: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: omap: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: imx6q: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: exynos: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: dbx500: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: davinci: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: arm-big-little: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: powernow-k8: call CPUFREQ_POSTCHANGE notfier in error cases
  cpufreq: pcc: call CPUFREQ_POSTCHANGE notfier in error cases
  ...
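The recurring "call CPUFREQ_POSTCHANGE notfier in error cases" fixes in the shortlog above, and the corresponding driver hunks further down (arm_big_little, davinci, dbx500, e_powersaver, exynos, ...), all follow the same pattern: once CPUFREQ_PRECHANGE has been announced, the matching CPUFREQ_POSTCHANGE notification must still be sent when the clock change fails, with freqs.new wound back to freqs.old so the core and governors see an unchanged frequency. A condensed sketch of that pattern in a ->target() callback, using placeholder foo_* names rather than any real driver in this series:

	#include <linux/clk.h>
	#include <linux/cpufreq.h>
	#include <linux/err.h>

	static struct clk *foo_clk;	/* hypothetical CPU clock */

	static int foo_target(struct cpufreq_policy *policy,
			      unsigned int target_freq, unsigned int relation)
	{
		struct cpufreq_freqs freqs;
		int ret;

		freqs.old = policy->cur;
		freqs.new = target_freq;

		cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);

		ret = clk_set_rate(foo_clk, freqs.new * 1000);
		if (ret) {
			pr_err("clk_set_rate failed: %d\n", ret);
			/* Roll back so the POSTCHANGE event reports the old rate. */
			freqs.new = freqs.old;
		}

		/* Always pair PRECHANGE with POSTCHANGE, even on failure. */
		cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);

		return ret;
	}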
Diffstat (limited to 'drivers/cpufreq')
-rw-r--r--  drivers/cpufreq/Kconfig.arm            |  17
-rw-r--r--  drivers/cpufreq/Kconfig.powerpc        |  37
-rw-r--r--  drivers/cpufreq/Kconfig.x86            |   1
-rw-r--r--  drivers/cpufreq/Makefile               |   8
-rw-r--r--  drivers/cpufreq/acpi-cpufreq.c         |  46
-rw-r--r--  drivers/cpufreq/arm_big_little.c       |   4
-rw-r--r--  drivers/cpufreq/blackfin-cpufreq.c     |  10
-rw-r--r--  drivers/cpufreq/cpufreq.c              | 223
-rw-r--r--  drivers/cpufreq/cpufreq_governor.c     |  51
-rw-r--r--  drivers/cpufreq/cpufreq_governor.h     |   5
-rw-r--r--  drivers/cpufreq/cpufreq_performance.c  |   4
-rw-r--r--  drivers/cpufreq/cpufreq_powersave.c    |   6
-rw-r--r--  drivers/cpufreq/cpufreq_stats.c        |   5
-rw-r--r--  drivers/cpufreq/cpufreq_userspace.c    | 112
-rw-r--r--  drivers/cpufreq/davinci-cpufreq.c      |   3
-rw-r--r--  drivers/cpufreq/dbx500-cpufreq.c       |   4
-rw-r--r--  drivers/cpufreq/e_powersaver.c         |  11
-rw-r--r--  drivers/cpufreq/exynos-cpufreq.c       |  10
-rw-r--r--  drivers/cpufreq/freq_table.c           |  26
-rw-r--r--  drivers/cpufreq/ia64-acpi-cpufreq.c    |   2
-rw-r--r--  drivers/cpufreq/imx6q-cpufreq.c        |  17
-rw-r--r--  drivers/cpufreq/kirkwood-cpufreq.c     |   2
-rw-r--r--  drivers/cpufreq/longhaul.c             |  16
-rw-r--r--  drivers/cpufreq/loongson2_cpufreq.c    |   2
-rw-r--r--  drivers/cpufreq/omap-cpufreq.c         |   6
-rw-r--r--  drivers/cpufreq/p4-clockmod.c          |   4
-rw-r--r--  drivers/cpufreq/pasemi-cpufreq.c       | 331
-rw-r--r--  drivers/cpufreq/pcc-cpufreq.c          |   2
-rw-r--r--  drivers/cpufreq/pmac32-cpufreq.c       | 721
-rw-r--r--  drivers/cpufreq/pmac64-cpufreq.c       | 746
-rw-r--r--  drivers/cpufreq/powernow-k6.c          |   8
-rw-r--r--  drivers/cpufreq/powernow-k7.c          |  16
-rw-r--r--  drivers/cpufreq/powernow-k8.c          |  24
-rw-r--r--  drivers/cpufreq/ppc-corenet-cpufreq.c  | 380
-rw-r--r--  drivers/cpufreq/ppc_cbe_cpufreq.c      |   4
-rw-r--r--  drivers/cpufreq/pxa2xx-cpufreq.c       |   8
-rw-r--r--  drivers/cpufreq/pxa3xx-cpufreq.c       |   4
-rw-r--r--  drivers/cpufreq/s3c2416-cpufreq.c      |   6
-rw-r--r--  drivers/cpufreq/s3c24xx-cpufreq.c      |   4
-rw-r--r--  drivers/cpufreq/s3c64xx-cpufreq.c      |  10
-rw-r--r--  drivers/cpufreq/sc520_freq.c           |   2
-rw-r--r--  drivers/cpufreq/sparc-us2e-cpufreq.c   |  12
-rw-r--r--  drivers/cpufreq/sparc-us3-cpufreq.c    |   8
-rw-r--r--  drivers/cpufreq/spear-cpufreq.c        |   4
-rw-r--r--  drivers/cpufreq/speedstep-centrino.c   |   8
-rw-r--r--  drivers/cpufreq/tegra-cpufreq.c        |  23
46 files changed, 2605 insertions, 348 deletions
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
index a92440896868..de4d5d93c3fd 100644
--- a/drivers/cpufreq/Kconfig.arm
+++ b/drivers/cpufreq/Kconfig.arm
@@ -5,6 +5,7 @@
config ARM_BIG_LITTLE_CPUFREQ
tristate "Generic ARM big LITTLE CPUfreq driver"
depends on ARM_CPU_TOPOLOGY && PM_OPP && HAVE_CLK
+ select CPU_FREQ_TABLE
help
This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
@@ -18,6 +19,7 @@ config ARM_DT_BL_CPUFREQ
config ARM_EXYNOS_CPUFREQ
bool "SAMSUNG EXYNOS SoCs"
depends on ARCH_EXYNOS
+ select CPU_FREQ_TABLE
default y
help
This adds the CPUFreq driver common part for Samsung
@@ -46,6 +48,7 @@ config ARM_EXYNOS5250_CPUFREQ
config ARM_EXYNOS5440_CPUFREQ
def_bool SOC_EXYNOS5440
depends on HAVE_CLK && PM_OPP && OF
+ select CPU_FREQ_TABLE
help
This adds the CPUFreq driver for Samsung EXYNOS5440
SoC. The nature of exynos5440 clock controller is
@@ -55,7 +58,6 @@ config ARM_EXYNOS5440_CPUFREQ
config ARM_HIGHBANK_CPUFREQ
tristate "Calxeda Highbank-based"
depends on ARCH_HIGHBANK
- select CPU_FREQ_TABLE
select GENERIC_CPUFREQ_CPU0
select PM_OPP
select REGULATOR
@@ -71,6 +73,7 @@ config ARM_IMX6Q_CPUFREQ
tristate "Freescale i.MX6Q cpufreq support"
depends on SOC_IMX6Q
depends on REGULATOR_ANATOP
+ select CPU_FREQ_TABLE
help
This adds cpufreq driver support for Freescale i.MX6Q SOC.
@@ -86,6 +89,7 @@ config ARM_INTEGRATOR
config ARM_KIRKWOOD_CPUFREQ
def_bool ARCH_KIRKWOOD && OF
+ select CPU_FREQ_TABLE
help
This adds the CPUFreq driver for Marvell Kirkwood
SoCs.
@@ -149,6 +153,7 @@ config ARM_S3C2412_CPUFREQ
config ARM_S3C2416_CPUFREQ
bool "S3C2416 CPU Frequency scaling support"
depends on CPU_S3C2416
+ select CPU_FREQ_TABLE
help
This adds the CPUFreq driver for the Samsung S3C2416 and
S3C2450 SoC. The S3C2416 supports changing the rate of the
@@ -179,6 +184,7 @@ config ARM_S3C2440_CPUFREQ
config ARM_S3C64XX_CPUFREQ
bool "Samsung S3C64XX"
depends on CPU_S3C6410
+ select CPU_FREQ_TABLE
default y
help
This adds the CPUFreq driver for Samsung S3C6410 SoC.
@@ -205,6 +211,15 @@ config ARM_SA1110_CPUFREQ
config ARM_SPEAR_CPUFREQ
bool "SPEAr CPUFreq support"
depends on PLAT_SPEAR
+ select CPU_FREQ_TABLE
default y
help
This adds the CPUFreq driver support for SPEAr SOCs.
+
+config ARM_TEGRA_CPUFREQ
+ bool "TEGRA CPUFreq support"
+ depends on ARCH_TEGRA
+ select CPU_FREQ_TABLE
+ default y
+ help
+ This adds the CPUFreq driver support for TEGRA SOCs.
diff --git a/drivers/cpufreq/Kconfig.powerpc b/drivers/cpufreq/Kconfig.powerpc
index 9c926ca0d718..25ca9db62e09 100644
--- a/drivers/cpufreq/Kconfig.powerpc
+++ b/drivers/cpufreq/Kconfig.powerpc
@@ -1,6 +1,7 @@
config CPU_FREQ_CBE
tristate "CBE frequency scaling"
depends on CBE_RAS && PPC_CELL
+ select CPU_FREQ_TABLE
default m
help
This adds the cpufreq driver for Cell BE processors.
@@ -23,3 +24,39 @@ config CPU_FREQ_MAPLE
help
This adds support for frequency switching on Maple 970FX
Evaluation Board and compatible boards (IBM JS2x blades).
+
+config PPC_CORENET_CPUFREQ
+ tristate "CPU frequency scaling driver for Freescale E500MC SoCs"
+ depends on PPC_E500MC && OF && COMMON_CLK
+ select CPU_FREQ_TABLE
+ select CLK_PPC_CORENET
+ help
+ This adds the CPUFreq driver support for Freescale e500mc,
+ e5500 and e6500 series SoCs which are capable of changing
+ the CPU's frequency dynamically.
+
+config CPU_FREQ_PMAC
+ bool "Support for Apple PowerBooks"
+ depends on ADB_PMU && PPC32
+ select CPU_FREQ_TABLE
+ help
+ This adds support for frequency switching on Apple PowerBooks,
+ this currently includes some models of iBook & Titanium
+ PowerBook.
+
+config CPU_FREQ_PMAC64
+ bool "Support for some Apple G5s"
+ depends on PPC_PMAC && PPC64
+ select CPU_FREQ_TABLE
+ help
+ This adds support for frequency switching on Apple iMac G5,
+ and some of the more recent desktop G5 machines as well.
+
+config PPC_PASEMI_CPUFREQ
+ bool "Support for PA Semi PWRficient"
+ depends on PPC_PASEMI
+ select CPU_FREQ_TABLE
+ default y
+ help
+ This adds the support for frequency switching on PA Semi
+ PWRficient processors.
diff --git a/drivers/cpufreq/Kconfig.x86 b/drivers/cpufreq/Kconfig.x86
index 6bd63d63d356..e2b6eabef221 100644
--- a/drivers/cpufreq/Kconfig.x86
+++ b/drivers/cpufreq/Kconfig.x86
@@ -132,6 +132,7 @@ config X86_POWERNOW_K8
config X86_AMD_FREQ_SENSITIVITY
tristate "AMD frequency sensitivity feedback powersave bias"
depends on CPU_FREQ_GOV_ONDEMAND && X86_ACPI_CPUFREQ && CPU_SUP_AMD
+ select CPU_FREQ_TABLE
help
This adds AMD-specific powersave bias function to the ondemand
governor, which allows it to make more power-conscious frequency
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
index 6ad0b913ca17..d345b5a7aa71 100644
--- a/drivers/cpufreq/Makefile
+++ b/drivers/cpufreq/Makefile
@@ -76,7 +76,7 @@ obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o
obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o
obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o
obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o
-obj-$(CONFIG_ARCH_TEGRA) += tegra-cpufreq.o
+obj-$(CONFIG_ARM_TEGRA_CPUFREQ) += tegra-cpufreq.o
##################################################################################
# PowerPC platform drivers
@@ -84,11 +84,15 @@ obj-$(CONFIG_CPU_FREQ_CBE) += ppc-cbe-cpufreq.o
ppc-cbe-cpufreq-y += ppc_cbe_cpufreq_pervasive.o ppc_cbe_cpufreq.o
obj-$(CONFIG_CPU_FREQ_CBE_PMI) += ppc_cbe_cpufreq_pmi.o
obj-$(CONFIG_CPU_FREQ_MAPLE) += maple-cpufreq.o
+obj-$(CONFIG_PPC_CORENET_CPUFREQ) += ppc-corenet-cpufreq.o
+obj-$(CONFIG_CPU_FREQ_PMAC) += pmac32-cpufreq.o
+obj-$(CONFIG_CPU_FREQ_PMAC64) += pmac64-cpufreq.o
+obj-$(CONFIG_PPC_PASEMI_CPUFREQ) += pasemi-cpufreq.o
##################################################################################
# Other platform drivers
obj-$(CONFIG_AVR32_AT32AP_CPUFREQ) += at32ap-cpufreq.o
-obj-$(CONFIG_BLACKFIN) += blackfin-cpufreq.o
+obj-$(CONFIG_BFIN_CPU_FREQ) += blackfin-cpufreq.o
obj-$(CONFIG_CRIS_MACH_ARTPEC3) += cris-artpec3-cpufreq.o
obj-$(CONFIG_ETRAXFS) += cris-etraxfs-cpufreq.o
obj-$(CONFIG_IA64_ACPI_CPUFREQ) += ia64-acpi-cpufreq.o
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
index edc089e9d0c4..39264020b88a 100644
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -70,6 +70,7 @@ struct acpi_cpufreq_data {
struct cpufreq_frequency_table *freq_table;
unsigned int resume;
unsigned int cpu_feature;
+ cpumask_var_t freqdomain_cpus;
};
static DEFINE_PER_CPU(struct acpi_cpufreq_data *, acfreq_data);
@@ -176,6 +177,15 @@ static struct global_attr global_boost = __ATTR(boost, 0644,
show_global_boost,
store_global_boost);
+static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
+{
+ struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
+
+ return cpufreq_show_cpus(data->freqdomain_cpus, buf);
+}
+
+cpufreq_freq_attr_ro(freqdomain_cpus);
+
#ifdef CONFIG_X86_ACPI_CPUFREQ_CPB
static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf,
size_t count)
@@ -232,7 +242,7 @@ static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data)
perf = data->acpi_data;
for (i = 0; data->freq_table[i].frequency != CPUFREQ_TABLE_END; i++) {
- if (msr == perf->states[data->freq_table[i].index].status)
+ if (msr == perf->states[data->freq_table[i].driver_data].status)
return data->freq_table[i].frequency;
}
return data->freq_table[0].frequency;
@@ -442,7 +452,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
goto out;
}
- next_perf_state = data->freq_table[next_state].index;
+ next_perf_state = data->freq_table[next_state].driver_data;
if (perf->state == next_perf_state) {
if (unlikely(data->resume)) {
pr_debug("Called after resume, resetting to P%d\n",
@@ -494,12 +504,14 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
pr_debug("acpi_cpufreq_target failed (%d)\n",
policy->cpu);
result = -EAGAIN;
- goto out;
+ freqs.new = freqs.old;
}
}
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
- perf->state = next_perf_state;
+
+ if (!result)
+ perf->state = next_perf_state;
out:
return result;
@@ -702,6 +714,11 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
if (!data)
return -ENOMEM;
+ if (!zalloc_cpumask_var(&data->freqdomain_cpus, GFP_KERNEL)) {
+ result = -ENOMEM;
+ goto err_free;
+ }
+
data->acpi_data = per_cpu_ptr(acpi_perf_data, cpu);
per_cpu(acfreq_data, cpu) = data;
@@ -710,7 +727,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
result = acpi_processor_register_performance(data->acpi_data, cpu);
if (result)
- goto err_free;
+ goto err_free_mask;
perf = data->acpi_data;
policy->shared_type = perf->shared_type;
@@ -723,6 +740,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
cpumask_copy(policy->cpus, perf->shared_cpu_map);
}
+ cpumask_copy(data->freqdomain_cpus, perf->shared_cpu_map);
#ifdef CONFIG_SMP
dmi_check_system(sw_any_bug_dmi_table);
@@ -734,6 +752,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
if (check_amd_hwpstate_cpu(cpu) && !acpi_pstate_strict) {
cpumask_clear(policy->cpus);
cpumask_set_cpu(cpu, policy->cpus);
+ cpumask_copy(data->freqdomain_cpus, cpu_sibling_mask(cpu));
policy->shared_type = CPUFREQ_SHARED_TYPE_HW;
pr_info_once(PFX "overriding BIOS provided _PSD data\n");
}
@@ -811,7 +830,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
data->freq_table[valid_states-1].frequency / 1000)
continue;
- data->freq_table[valid_states].index = i;
+ data->freq_table[valid_states].driver_data = i;
data->freq_table[valid_states].frequency =
perf->states[i].core_frequency * 1000;
valid_states++;
@@ -868,6 +887,8 @@ err_freqfree:
kfree(data->freq_table);
err_unreg:
acpi_processor_unregister_performance(perf, cpu);
+err_free_mask:
+ free_cpumask_var(data->freqdomain_cpus);
err_free:
kfree(data);
per_cpu(acfreq_data, cpu) = NULL;
@@ -886,6 +907,7 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
per_cpu(acfreq_data, policy->cpu) = NULL;
acpi_processor_unregister_performance(data->acpi_data,
policy->cpu);
+ free_cpumask_var(data->freqdomain_cpus);
kfree(data->freq_table);
kfree(data);
}
@@ -906,6 +928,7 @@ static int acpi_cpufreq_resume(struct cpufreq_policy *policy)
static struct freq_attr *acpi_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
+ &freqdomain_cpus,
NULL, /* this is a placeholder for cpb, do not remove */
NULL,
};
@@ -947,7 +970,7 @@ static void __init acpi_cpufreq_boost_init(void)
/* We create the boost file in any case, though for systems without
* hardware support it will be read-only and hardwired to return 0.
*/
- if (sysfs_create_file(cpufreq_global_kobject, &(global_boost.attr)))
+ if (cpufreq_sysfs_create_file(&(global_boost.attr)))
pr_warn(PFX "could not register global boost sysfs file\n");
else
pr_debug("registered global boost sysfs file\n");
@@ -955,7 +978,7 @@ static void __init acpi_cpufreq_boost_init(void)
static void __exit acpi_cpufreq_boost_exit(void)
{
- sysfs_remove_file(cpufreq_global_kobject, &(global_boost.attr));
+ cpufreq_sysfs_remove_file(&(global_boost.attr));
if (msrs) {
unregister_cpu_notifier(&boost_nb);
@@ -1034,4 +1057,11 @@ static const struct x86_cpu_id acpi_cpufreq_ids[] = {
};
MODULE_DEVICE_TABLE(x86cpu, acpi_cpufreq_ids);
+static const struct acpi_device_id processor_device_ids[] = {
+ {ACPI_PROCESSOR_OBJECT_HID, },
+ {ACPI_PROCESSOR_DEVICE_HID, },
+ {},
+};
+MODULE_DEVICE_TABLE(acpi, processor_device_ids);
+
MODULE_ALIAS("acpi");
diff --git a/drivers/cpufreq/arm_big_little.c b/drivers/cpufreq/arm_big_little.c
index 5d7f53fcd6f5..3549f0784af1 100644
--- a/drivers/cpufreq/arm_big_little.c
+++ b/drivers/cpufreq/arm_big_little.c
@@ -84,11 +84,9 @@ static int bL_cpufreq_set_target(struct cpufreq_policy *policy,
ret = clk_set_rate(clk[cur_cluster], freqs.new * 1000);
if (ret) {
pr_err("clk_set_rate failed: %d\n", ret);
- return ret;
+ freqs.new = freqs.old;
}
- policy->cur = freqs.new;
-
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
return ret;
diff --git a/drivers/cpufreq/blackfin-cpufreq.c b/drivers/cpufreq/blackfin-cpufreq.c
index 995511e80bef..9cdbbd278a80 100644
--- a/drivers/cpufreq/blackfin-cpufreq.c
+++ b/drivers/cpufreq/blackfin-cpufreq.c
@@ -20,23 +20,23 @@
/* this is the table of CCLK frequencies, in Hz */
-/* .index is the entry in the auxiliary dpm_state_table[] */
+/* .driver_data is the entry in the auxiliary dpm_state_table[] */
static struct cpufreq_frequency_table bfin_freq_table[] = {
{
.frequency = CPUFREQ_TABLE_END,
- .index = 0,
+ .driver_data = 0,
},
{
.frequency = CPUFREQ_TABLE_END,
- .index = 1,
+ .driver_data = 1,
},
{
.frequency = CPUFREQ_TABLE_END,
- .index = 2,
+ .driver_data = 2,
},
{
.frequency = CPUFREQ_TABLE_END,
- .index = 0,
+ .driver_data = 0,
},
};
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 2d53f47d1747..6a015ada5285 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -3,6 +3,7 @@
*
* Copyright (C) 2001 Russell King
* (C) 2002 - 2003 Dominik Brodowski <linux@brodo.de>
+ * (C) 2013 Viresh Kumar <viresh.kumar@linaro.org>
*
* Oct 2005 - Ashok Raj <ashok.raj@intel.com>
* Added handling for CPU hotplug
@@ -12,12 +13,13 @@
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
- *
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <asm/cputime.h>
#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/notifier.h>
@@ -25,6 +27,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>
+#include <linux/tick.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/cpu.h>
@@ -41,11 +44,13 @@
*/
static struct cpufreq_driver *cpufreq_driver;
static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
+static DEFINE_RWLOCK(cpufreq_driver_lock);
+static DEFINE_MUTEX(cpufreq_governor_lock);
+
#ifdef CONFIG_HOTPLUG_CPU
/* This one keeps track of the previously set governor of a removed CPU */
static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor);
#endif
-static DEFINE_RWLOCK(cpufreq_driver_lock);
/*
* cpu_policy_rwsem is a per CPU reader-writer semaphore designed to cure
@@ -132,6 +137,51 @@ bool have_governor_per_policy(void)
{
return cpufreq_driver->have_governor_per_policy;
}
+EXPORT_SYMBOL_GPL(have_governor_per_policy);
+
+struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy)
+{
+ if (have_governor_per_policy())
+ return &policy->kobj;
+ else
+ return cpufreq_global_kobject;
+}
+EXPORT_SYMBOL_GPL(get_governor_parent_kobj);
+
+static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall)
+{
+ u64 idle_time;
+ u64 cur_wall_time;
+ u64 busy_time;
+
+ cur_wall_time = jiffies64_to_cputime64(get_jiffies_64());
+
+ busy_time = kcpustat_cpu(cpu).cpustat[CPUTIME_USER];
+ busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_SYSTEM];
+ busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_IRQ];
+ busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_SOFTIRQ];
+ busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_STEAL];
+ busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_NICE];
+
+ idle_time = cur_wall_time - busy_time;
+ if (wall)
+ *wall = cputime_to_usecs(cur_wall_time);
+
+ return cputime_to_usecs(idle_time);
+}
+
+u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy)
+{
+ u64 idle_time = get_cpu_idle_time_us(cpu, io_busy ? wall : NULL);
+
+ if (idle_time == -1ULL)
+ return get_cpu_idle_time_jiffy(cpu, wall);
+ else if (!io_busy)
+ idle_time += get_cpu_iowait_time_us(cpu, wall);
+
+ return idle_time;
+}
+EXPORT_SYMBOL_GPL(get_cpu_idle_time);
static struct cpufreq_policy *__cpufreq_cpu_get(unsigned int cpu, bool sysfs)
{
@@ -150,7 +200,6 @@ static struct cpufreq_policy *__cpufreq_cpu_get(unsigned int cpu, bool sysfs)
if (!try_module_get(cpufreq_driver->owner))
goto err_out_unlock;
-
/* get the CPU */
data = per_cpu(cpufreq_cpu_data, cpu);
@@ -220,7 +269,7 @@ static void cpufreq_cpu_put_sysfs(struct cpufreq_policy *data)
*/
#ifndef CONFIG_SMP
static unsigned long l_p_j_ref;
-static unsigned int l_p_j_ref_freq;
+static unsigned int l_p_j_ref_freq;
static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
{
@@ -233,7 +282,7 @@ static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
pr_debug("saving %lu as reference value for loops_per_jiffy; "
"freq is %u kHz\n", l_p_j_ref, l_p_j_ref_freq);
}
- if ((val == CPUFREQ_POSTCHANGE && ci->old != ci->new) ||
+ if ((val == CPUFREQ_POSTCHANGE && ci->old != ci->new) ||
(val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) {
loops_per_jiffy = cpufreq_scale(l_p_j_ref, l_p_j_ref_freq,
ci->new);
@@ -248,8 +297,7 @@ static inline void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
}
#endif
-
-void __cpufreq_notify_transition(struct cpufreq_policy *policy,
+static void __cpufreq_notify_transition(struct cpufreq_policy *policy,
struct cpufreq_freqs *freqs, unsigned int state)
{
BUG_ON(irqs_disabled());
@@ -264,6 +312,12 @@ void __cpufreq_notify_transition(struct cpufreq_policy *policy,
switch (state) {
case CPUFREQ_PRECHANGE:
+ if (WARN(policy->transition_ongoing,
+ "In middle of another frequency transition\n"))
+ return;
+
+ policy->transition_ongoing = true;
+
/* detect if the driver reported a value as "old frequency"
* which is not equal to what the cpufreq core thinks is
* "old frequency".
@@ -283,6 +337,12 @@ void __cpufreq_notify_transition(struct cpufreq_policy *policy,
break;
case CPUFREQ_POSTCHANGE:
+ if (WARN(!policy->transition_ongoing,
+ "No frequency transition in progress\n"))
+ return;
+
+ policy->transition_ongoing = false;
+
adjust_jiffies(CPUFREQ_POSTCHANGE, freqs);
pr_debug("FREQ: %lu - CPU: %lu", (unsigned long)freqs->new,
(unsigned long)freqs->cpu);
@@ -294,6 +354,7 @@ void __cpufreq_notify_transition(struct cpufreq_policy *policy,
break;
}
}
+
/**
* cpufreq_notify_transition - call notifier chain and adjust_jiffies
* on frequency transition.
@@ -311,7 +372,6 @@ void cpufreq_notify_transition(struct cpufreq_policy *policy,
EXPORT_SYMBOL_GPL(cpufreq_notify_transition);
-
/*********************************************************************
* SYSFS INTERFACE *
*********************************************************************/
@@ -376,7 +436,6 @@ out:
return err;
}
-
/**
* cpufreq_per_cpu_attr_read() / show_##file_name() -
* print out cpufreq information
@@ -441,7 +500,6 @@ static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy,
return sprintf(buf, "%u\n", cur_freq);
}
-
/**
* show_scaling_governor - show the current policy for the specified CPU
*/
@@ -457,7 +515,6 @@ static ssize_t show_scaling_governor(struct cpufreq_policy *policy, char *buf)
return -EINVAL;
}
-
/**
* store_scaling_governor - store policy for the specified CPU
*/
@@ -480,8 +537,10 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
&new_policy.governor))
return -EINVAL;
- /* Do not use cpufreq_set_policy here or the user_policy.max
- will be wrongly overridden */
+ /*
+ * Do not use cpufreq_set_policy here or the user_policy.max
+ * will be wrongly overridden
+ */
ret = __cpufreq_set_policy(policy, &new_policy);
policy->user_policy.policy = policy->policy;
@@ -526,7 +585,7 @@ out:
return i;
}
-static ssize_t show_cpus(const struct cpumask *mask, char *buf)
+ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf)
{
ssize_t i = 0;
unsigned int cpu;
@@ -541,6 +600,7 @@ static ssize_t show_cpus(const struct cpumask *mask, char *buf)
i += sprintf(&buf[i], "\n");
return i;
}
+EXPORT_SYMBOL_GPL(cpufreq_show_cpus);
/**
* show_related_cpus - show the CPUs affected by each transition even if
@@ -548,7 +608,7 @@ static ssize_t show_cpus(const struct cpumask *mask, char *buf)
*/
static ssize_t show_related_cpus(struct cpufreq_policy *policy, char *buf)
{
- return show_cpus(policy->related_cpus, buf);
+ return cpufreq_show_cpus(policy->related_cpus, buf);
}
/**
@@ -556,7 +616,7 @@ static ssize_t show_related_cpus(struct cpufreq_policy *policy, char *buf)
*/
static ssize_t show_affected_cpus(struct cpufreq_policy *policy, char *buf)
{
- return show_cpus(policy->cpus, buf);
+ return cpufreq_show_cpus(policy->cpus, buf);
}
static ssize_t store_scaling_setspeed(struct cpufreq_policy *policy,
@@ -630,9 +690,6 @@ static struct attribute *default_attrs[] = {
NULL
};
-struct kobject *cpufreq_global_kobject;
-EXPORT_SYMBOL(cpufreq_global_kobject);
-
#define to_policy(k) container_of(k, struct cpufreq_policy, kobj)
#define to_attr(a) container_of(a, struct freq_attr, attr)
@@ -703,6 +760,49 @@ static struct kobj_type ktype_cpufreq = {
.release = cpufreq_sysfs_release,
};
+struct kobject *cpufreq_global_kobject;
+EXPORT_SYMBOL(cpufreq_global_kobject);
+
+static int cpufreq_global_kobject_usage;
+
+int cpufreq_get_global_kobject(void)
+{
+ if (!cpufreq_global_kobject_usage++)
+ return kobject_add(cpufreq_global_kobject,
+ &cpu_subsys.dev_root->kobj, "%s", "cpufreq");
+
+ return 0;
+}
+EXPORT_SYMBOL(cpufreq_get_global_kobject);
+
+void cpufreq_put_global_kobject(void)
+{
+ if (!--cpufreq_global_kobject_usage)
+ kobject_del(cpufreq_global_kobject);
+}
+EXPORT_SYMBOL(cpufreq_put_global_kobject);
+
+int cpufreq_sysfs_create_file(const struct attribute *attr)
+{
+ int ret = cpufreq_get_global_kobject();
+
+ if (!ret) {
+ ret = sysfs_create_file(cpufreq_global_kobject, attr);
+ if (ret)
+ cpufreq_put_global_kobject();
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(cpufreq_sysfs_create_file);
+
+void cpufreq_sysfs_remove_file(const struct attribute *attr)
+{
+ sysfs_remove_file(cpufreq_global_kobject, attr);
+ cpufreq_put_global_kobject();
+}
+EXPORT_SYMBOL(cpufreq_sysfs_remove_file);
+
/* symlink affected CPUs */
static int cpufreq_add_dev_symlink(unsigned int cpu,
struct cpufreq_policy *policy)
@@ -1005,7 +1105,8 @@ static void update_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu)
* Caller should already have policy_rwsem in write mode for this CPU.
* This routine frees the rwsem before returning.
*/
-static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
+static int __cpufreq_remove_dev(struct device *dev,
+ struct subsys_interface *sif)
{
unsigned int cpu = dev->id, ret, cpus;
unsigned long flags;
@@ -1112,7 +1213,6 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif
return 0;
}
-
static int cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
{
unsigned int cpu = dev->id;
@@ -1125,7 +1225,6 @@ static int cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
return retval;
}
-
static void handle_update(struct work_struct *work)
{
struct cpufreq_policy *policy =
@@ -1136,7 +1235,8 @@ static void handle_update(struct work_struct *work)
}
/**
- * cpufreq_out_of_sync - If actual and saved CPU frequency differs, we're in deep trouble.
+ * cpufreq_out_of_sync - If actual and saved CPU frequency differs, we're
+ * in deep trouble.
* @cpu: cpu number
* @old_freq: CPU frequency the kernel thinks the CPU runs at
* @new_freq: CPU frequency the CPU actually runs at
@@ -1151,7 +1251,6 @@ static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq,
struct cpufreq_freqs freqs;
unsigned long flags;
-
pr_debug("Warning: CPU frequency out of sync: cpufreq and timing "
"core thinks of %u, is %u kHz.\n", old_freq, new_freq);
@@ -1166,7 +1265,6 @@ static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq,
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
}
-
/**
* cpufreq_quick_get - get the CPU frequency (in kHz) from policy->cur
* @cpu: CPU number
@@ -1212,7 +1310,6 @@ unsigned int cpufreq_quick_get_max(unsigned int cpu)
}
EXPORT_SYMBOL(cpufreq_quick_get_max);
-
static unsigned int __cpufreq_get(unsigned int cpu)
{
struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
@@ -1271,7 +1368,6 @@ static struct subsys_interface cpufreq_interface = {
.remove_dev = cpufreq_remove_dev,
};
-
/**
* cpufreq_bp_suspend - Prepare the boot CPU for system suspend.
*
@@ -1408,11 +1504,10 @@ int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list)
}
EXPORT_SYMBOL(cpufreq_register_notifier);
-
/**
* cpufreq_unregister_notifier - unregister a driver with cpufreq
* @nb: notifier block to be unregistered
- * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER
+ * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER
*
* Remove a driver from the CPU frequency notifier list.
*
@@ -1448,7 +1543,6 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier);
* GOVERNORS *
*********************************************************************/
-
int __cpufreq_driver_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
@@ -1458,6 +1552,8 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
if (cpufreq_disabled())
return -ENODEV;
+ if (policy->transition_ongoing)
+ return -EBUSY;
/* Make sure that target_freq is within supported range */
if (target_freq > policy->max)
@@ -1484,10 +1580,6 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
{
int ret = -EINVAL;
- policy = cpufreq_cpu_get(policy->cpu);
- if (!policy)
- goto no_policy;
-
if (unlikely(lock_policy_rwsem_write(policy->cpu)))
goto fail;
@@ -1496,30 +1588,19 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
unlock_policy_rwsem_write(policy->cpu);
fail:
- cpufreq_cpu_put(policy);
-no_policy:
return ret;
}
EXPORT_SYMBOL_GPL(cpufreq_driver_target);
int __cpufreq_driver_getavg(struct cpufreq_policy *policy, unsigned int cpu)
{
- int ret = 0;
-
if (cpufreq_disabled())
- return ret;
+ return 0;
if (!cpufreq_driver->getavg)
return 0;
- policy = cpufreq_cpu_get(policy->cpu);
- if (!policy)
- return -EINVAL;
-
- ret = cpufreq_driver->getavg(policy, cpu);
-
- cpufreq_cpu_put(policy);
- return ret;
+ return cpufreq_driver->getavg(policy, cpu);
}
EXPORT_SYMBOL_GPL(__cpufreq_driver_getavg);
@@ -1562,6 +1643,21 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
pr_debug("__cpufreq_governor for CPU %u, event %u\n",
policy->cpu, event);
+
+ mutex_lock(&cpufreq_governor_lock);
+ if ((!policy->governor_enabled && (event == CPUFREQ_GOV_STOP)) ||
+ (policy->governor_enabled && (event == CPUFREQ_GOV_START))) {
+ mutex_unlock(&cpufreq_governor_lock);
+ return -EBUSY;
+ }
+
+ if (event == CPUFREQ_GOV_STOP)
+ policy->governor_enabled = false;
+ else if (event == CPUFREQ_GOV_START)
+ policy->governor_enabled = true;
+
+ mutex_unlock(&cpufreq_governor_lock);
+
ret = policy->governor->governor(policy, event);
if (!ret) {
@@ -1569,6 +1665,14 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
policy->governor->initialized++;
else if (event == CPUFREQ_GOV_POLICY_EXIT)
policy->governor->initialized--;
+ } else {
+ /* Restore original values */
+ mutex_lock(&cpufreq_governor_lock);
+ if (event == CPUFREQ_GOV_STOP)
+ policy->governor_enabled = true;
+ else if (event == CPUFREQ_GOV_START)
+ policy->governor_enabled = false;
+ mutex_unlock(&cpufreq_governor_lock);
}
/* we keep one module reference alive for
@@ -1581,7 +1685,6 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
return ret;
}
-
int cpufreq_register_governor(struct cpufreq_governor *governor)
{
int err;
@@ -1606,7 +1709,6 @@ int cpufreq_register_governor(struct cpufreq_governor *governor)
}
EXPORT_SYMBOL_GPL(cpufreq_register_governor);
-
void cpufreq_unregister_governor(struct cpufreq_governor *governor)
{
#ifdef CONFIG_HOTPLUG_CPU
@@ -1636,7 +1738,6 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor)
EXPORT_SYMBOL_GPL(cpufreq_unregister_governor);
-
/*********************************************************************
* POLICY INTERFACE *
*********************************************************************/
@@ -1665,7 +1766,6 @@ int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu)
}
EXPORT_SYMBOL(cpufreq_get_policy);
-
/*
* data : current policy.
* policy : policy to be set.
@@ -1699,8 +1799,10 @@ static int __cpufreq_set_policy(struct cpufreq_policy *data,
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_INCOMPATIBLE, policy);
- /* verify the cpu speed can be set within this limit,
- which might be different to the first one */
+ /*
+ * verify the cpu speed can be set within this limit, which might be
+ * different to the first one
+ */
ret = cpufreq_driver->verify(policy);
if (ret)
goto error_out;
@@ -1802,8 +1904,10 @@ int cpufreq_update_policy(unsigned int cpu)
policy.policy = data->user_policy.policy;
policy.governor = data->user_policy.governor;
- /* BIOS might change freq behind our back
- -> ask driver for current freq and notify governors about a change */
+ /*
+ * BIOS might change freq behind our back
+ * -> ask driver for current freq and notify governors about a change
+ */
if (cpufreq_driver->get) {
policy.cur = cpufreq_driver->get(cpu);
if (!data->cur) {
@@ -1852,7 +1956,7 @@ static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb,
}
static struct notifier_block __refdata cpufreq_cpu_notifier = {
- .notifier_call = cpufreq_cpu_callback,
+ .notifier_call = cpufreq_cpu_callback,
};
/*********************************************************************
@@ -1864,7 +1968,7 @@ static struct notifier_block __refdata cpufreq_cpu_notifier = {
* @driver_data: A struct cpufreq_driver containing the values#
* submitted by the CPU Frequency driver.
*
- * Registers a CPU Frequency driver to this core code. This code
+ * Registers a CPU Frequency driver to this core code. This code
* returns zero on success, -EBUSY when another driver got here first
* (and isn't unregistered in the meantime).
*
@@ -1931,11 +2035,10 @@ err_null_driver:
}
EXPORT_SYMBOL_GPL(cpufreq_register_driver);
-
/**
* cpufreq_unregister_driver - unregister the current CPUFreq driver
*
- * Unregister the current CPUFreq driver. Only call this if you have
+ * Unregister the current CPUFreq driver. Only call this if you have
* the right to do so, i.e. if you have succeeded in initialising before!
* Returns zero if successful, and -EINVAL if the cpufreq_driver is
* currently not initialised.
@@ -1972,7 +2075,7 @@ static int __init cpufreq_core_init(void)
init_rwsem(&per_cpu(cpu_policy_rwsem, cpu));
}
- cpufreq_global_kobject = kobject_create_and_add("cpufreq", &cpu_subsys.dev_root->kobj);
+ cpufreq_global_kobject = kobject_create();
BUG_ON(!cpufreq_global_kobject);
register_syscore_ops(&cpufreq_syscore_ops);
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index dc9b72e25c1a..464587697561 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -23,21 +23,12 @@
#include <linux/kernel_stat.h>
#include <linux/mutex.h>
#include <linux/slab.h>
-#include <linux/tick.h>
#include <linux/types.h>
#include <linux/workqueue.h>
#include <linux/cpu.h>
#include "cpufreq_governor.h"
-static struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy)
-{
- if (have_governor_per_policy())
- return &policy->kobj;
- else
- return cpufreq_global_kobject;
-}
-
static struct attribute_group *get_sysfs_attr(struct dbs_data *dbs_data)
{
if (have_governor_per_policy())
@@ -46,41 +37,6 @@ static struct attribute_group *get_sysfs_attr(struct dbs_data *dbs_data)
return dbs_data->cdata->attr_group_gov_sys;
}
-static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall)
-{
- u64 idle_time;
- u64 cur_wall_time;
- u64 busy_time;
-
- cur_wall_time = jiffies64_to_cputime64(get_jiffies_64());
-
- busy_time = kcpustat_cpu(cpu).cpustat[CPUTIME_USER];
- busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_SYSTEM];
- busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_IRQ];
- busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_SOFTIRQ];
- busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_STEAL];
- busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_NICE];
-
- idle_time = cur_wall_time - busy_time;
- if (wall)
- *wall = cputime_to_usecs(cur_wall_time);
-
- return cputime_to_usecs(idle_time);
-}
-
-u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy)
-{
- u64 idle_time = get_cpu_idle_time_us(cpu, io_busy ? wall : NULL);
-
- if (idle_time == -1ULL)
- return get_cpu_idle_time_jiffy(cpu, wall);
- else if (!io_busy)
- idle_time += get_cpu_iowait_time_us(cpu, wall);
-
- return idle_time;
-}
-EXPORT_SYMBOL_GPL(get_cpu_idle_time);
-
void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
{
struct cpu_dbs_common_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
@@ -278,6 +234,9 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
return rc;
}
+ if (!have_governor_per_policy())
+ WARN_ON(cpufreq_get_global_kobject());
+
rc = sysfs_create_group(get_governor_parent_kobj(policy),
get_sysfs_attr(dbs_data));
if (rc) {
@@ -316,6 +275,9 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
sysfs_remove_group(get_governor_parent_kobj(policy),
get_sysfs_attr(dbs_data));
+ if (!have_governor_per_policy())
+ cpufreq_put_global_kobject();
+
if ((dbs_data->cdata->governor == GOV_CONSERVATIVE) &&
(policy->governor->initialized == 1)) {
struct cs_ops *cs_ops = dbs_data->cdata->gov_ops;
@@ -404,6 +366,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
mutex_lock(&dbs_data->mutex);
mutex_destroy(&cpu_cdbs->timer_mutex);
+ cpu_cdbs->cur_policy = NULL;
mutex_unlock(&dbs_data->mutex);
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index e16a96130cb3..6663ec3b3056 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -81,7 +81,7 @@ static ssize_t show_##file_name##_gov_sys \
return sprintf(buf, "%u\n", tuners->file_name); \
} \
\
-static ssize_t show_##file_name##_gov_pol \
+static ssize_t show_##file_name##_gov_pol \
(struct cpufreq_policy *policy, char *buf) \
{ \
struct dbs_data *dbs_data = policy->governor_data; \
@@ -91,7 +91,7 @@ static ssize_t show_##file_name##_gov_pol \
#define store_one(_gov, file_name) \
static ssize_t store_##file_name##_gov_sys \
-(struct kobject *kobj, struct attribute *attr, const char *buf, size_t count) \
+(struct kobject *kobj, struct attribute *attr, const char *buf, size_t count) \
{ \
struct dbs_data *dbs_data = _gov##_dbs_cdata.gdbs_data; \
return store_##file_name(dbs_data, buf, count); \
@@ -256,7 +256,6 @@ static ssize_t show_sampling_rate_min_gov_pol \
return sprintf(buf, "%u\n", dbs_data->min_sampling_rate); \
}
-u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy);
void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
bool need_load_eval(struct cpu_dbs_common_info *cdbs,
unsigned int sampling_rate);
diff --git a/drivers/cpufreq/cpufreq_performance.c b/drivers/cpufreq/cpufreq_performance.c
index ceee06849b91..9fef7d6e4e6a 100644
--- a/drivers/cpufreq/cpufreq_performance.c
+++ b/drivers/cpufreq/cpufreq_performance.c
@@ -17,7 +17,6 @@
#include <linux/cpufreq.h>
#include <linux/init.h>
-
static int cpufreq_governor_performance(struct cpufreq_policy *policy,
unsigned int event)
{
@@ -44,19 +43,16 @@ struct cpufreq_governor cpufreq_gov_performance = {
.owner = THIS_MODULE,
};
-
static int __init cpufreq_gov_performance_init(void)
{
return cpufreq_register_governor(&cpufreq_gov_performance);
}
-
static void __exit cpufreq_gov_performance_exit(void)
{
cpufreq_unregister_governor(&cpufreq_gov_performance);
}
-
MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>");
MODULE_DESCRIPTION("CPUfreq policy governor 'performance'");
MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/cpufreq_powersave.c b/drivers/cpufreq/cpufreq_powersave.c
index 2d948a171155..32109a14f5dc 100644
--- a/drivers/cpufreq/cpufreq_powersave.c
+++ b/drivers/cpufreq/cpufreq_powersave.c
@@ -1,7 +1,7 @@
/*
- * linux/drivers/cpufreq/cpufreq_powersave.c
+ * linux/drivers/cpufreq/cpufreq_powersave.c
*
- * Copyright (C) 2002 - 2003 Dominik Brodowski <linux@brodo.de>
+ * Copyright (C) 2002 - 2003 Dominik Brodowski <linux@brodo.de>
*
*
* This program is free software; you can redistribute it and/or modify
@@ -48,13 +48,11 @@ static int __init cpufreq_gov_powersave_init(void)
return cpufreq_register_governor(&cpufreq_gov_powersave);
}
-
static void __exit cpufreq_gov_powersave_exit(void)
{
cpufreq_unregister_governor(&cpufreq_gov_powersave);
}
-
MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>");
MODULE_DESCRIPTION("CPUfreq policy governor 'powersave'");
MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
index fb65decffa28..cd9e81713a71 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
@@ -27,7 +27,7 @@ static spinlock_t cpufreq_stats_lock;
struct cpufreq_stats {
unsigned int cpu;
unsigned int total_trans;
- unsigned long long last_time;
+ unsigned long long last_time;
unsigned int max_state;
unsigned int state_num;
unsigned int last_index;
@@ -116,7 +116,7 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ",
stat->freq_table[i]);
- for (j = 0; j < stat->state_num; j++) {
+ for (j = 0; j < stat->state_num; j++) {
if (len >= PAGE_SIZE)
break;
len += snprintf(buf + len, PAGE_SIZE - len, "%9u ",
@@ -349,6 +349,7 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
switch (action) {
case CPU_ONLINE:
+ case CPU_ONLINE_FROZEN:
cpufreq_update_policy(cpu);
break;
case CPU_DOWN_PREPARE:
diff --git a/drivers/cpufreq/cpufreq_userspace.c b/drivers/cpufreq/cpufreq_userspace.c
index bbeb9c0720a6..03078090b5f7 100644
--- a/drivers/cpufreq/cpufreq_userspace.c
+++ b/drivers/cpufreq/cpufreq_userspace.c
@@ -13,55 +13,13 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/smp.h>
-#include <linux/init.h>
-#include <linux/spinlock.h>
-#include <linux/interrupt.h>
#include <linux/cpufreq.h>
-#include <linux/cpu.h>
-#include <linux/types.h>
-#include <linux/fs.h>
-#include <linux/sysfs.h>
+#include <linux/init.h>
+#include <linux/module.h>
#include <linux/mutex.h>
-/**
- * A few values needed by the userspace governor
- */
-static DEFINE_PER_CPU(unsigned int, cpu_max_freq);
-static DEFINE_PER_CPU(unsigned int, cpu_min_freq);
-static DEFINE_PER_CPU(unsigned int, cpu_cur_freq); /* current CPU freq */
-static DEFINE_PER_CPU(unsigned int, cpu_set_freq); /* CPU freq desired by
- userspace */
static DEFINE_PER_CPU(unsigned int, cpu_is_managed);
-
static DEFINE_MUTEX(userspace_mutex);
-static int cpus_using_userspace_governor;
-
-/* keep track of frequency transitions */
-static int
-userspace_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
- void *data)
-{
- struct cpufreq_freqs *freq = data;
-
- if (!per_cpu(cpu_is_managed, freq->cpu))
- return 0;
-
- if (val == CPUFREQ_POSTCHANGE) {
- pr_debug("saving cpu_cur_freq of cpu %u to be %u kHz\n",
- freq->cpu, freq->new);
- per_cpu(cpu_cur_freq, freq->cpu) = freq->new;
- }
-
- return 0;
-}
-
-static struct notifier_block userspace_cpufreq_notifier_block = {
- .notifier_call = userspace_cpufreq_notifier
-};
-
/**
* cpufreq_set - set the CPU frequency
@@ -80,13 +38,6 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
if (!per_cpu(cpu_is_managed, policy->cpu))
goto err;
- per_cpu(cpu_set_freq, policy->cpu) = freq;
-
- if (freq < per_cpu(cpu_min_freq, policy->cpu))
- freq = per_cpu(cpu_min_freq, policy->cpu);
- if (freq > per_cpu(cpu_max_freq, policy->cpu))
- freq = per_cpu(cpu_max_freq, policy->cpu);
-
/*
* We're safe from concurrent calls to ->target() here
* as we hold the userspace_mutex lock. If we were calling
@@ -104,10 +55,9 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
return ret;
}
-
static ssize_t show_speed(struct cpufreq_policy *policy, char *buf)
{
- return sprintf(buf, "%u\n", per_cpu(cpu_cur_freq, policy->cpu));
+ return sprintf(buf, "%u\n", policy->cur);
}
static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
@@ -119,73 +69,37 @@ static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
switch (event) {
case CPUFREQ_GOV_START:
BUG_ON(!policy->cur);
- mutex_lock(&userspace_mutex);
-
- if (cpus_using_userspace_governor == 0) {
- cpufreq_register_notifier(
- &userspace_cpufreq_notifier_block,
- CPUFREQ_TRANSITION_NOTIFIER);
- }
- cpus_using_userspace_governor++;
+ pr_debug("started managing cpu %u\n", cpu);
+ mutex_lock(&userspace_mutex);
per_cpu(cpu_is_managed, cpu) = 1;
- per_cpu(cpu_min_freq, cpu) = policy->min;
- per_cpu(cpu_max_freq, cpu) = policy->max;
- per_cpu(cpu_cur_freq, cpu) = policy->cur;
- per_cpu(cpu_set_freq, cpu) = policy->cur;
- pr_debug("managing cpu %u started "
- "(%u - %u kHz, currently %u kHz)\n",
- cpu,
- per_cpu(cpu_min_freq, cpu),
- per_cpu(cpu_max_freq, cpu),
- per_cpu(cpu_cur_freq, cpu));
-
mutex_unlock(&userspace_mutex);
break;
case CPUFREQ_GOV_STOP:
- mutex_lock(&userspace_mutex);
- cpus_using_userspace_governor--;
- if (cpus_using_userspace_governor == 0) {
- cpufreq_unregister_notifier(
- &userspace_cpufreq_notifier_block,
- CPUFREQ_TRANSITION_NOTIFIER);
- }
+ pr_debug("managing cpu %u stopped\n", cpu);
+ mutex_lock(&userspace_mutex);
per_cpu(cpu_is_managed, cpu) = 0;
- per_cpu(cpu_min_freq, cpu) = 0;
- per_cpu(cpu_max_freq, cpu) = 0;
- per_cpu(cpu_set_freq, cpu) = 0;
- pr_debug("managing cpu %u stopped\n", cpu);
mutex_unlock(&userspace_mutex);
break;
case CPUFREQ_GOV_LIMITS:
mutex_lock(&userspace_mutex);
- pr_debug("limit event for cpu %u: %u - %u kHz, "
- "currently %u kHz, last set to %u kHz\n",
+ pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz\n",
cpu, policy->min, policy->max,
- per_cpu(cpu_cur_freq, cpu),
- per_cpu(cpu_set_freq, cpu));
- if (policy->max < per_cpu(cpu_set_freq, cpu)) {
+ policy->cur);
+
+ if (policy->max < policy->cur)
__cpufreq_driver_target(policy, policy->max,
CPUFREQ_RELATION_H);
- } else if (policy->min > per_cpu(cpu_set_freq, cpu)) {
+ else if (policy->min > policy->cur)
__cpufreq_driver_target(policy, policy->min,
CPUFREQ_RELATION_L);
- } else {
- __cpufreq_driver_target(policy,
- per_cpu(cpu_set_freq, cpu),
- CPUFREQ_RELATION_L);
- }
- per_cpu(cpu_min_freq, cpu) = policy->min;
- per_cpu(cpu_max_freq, cpu) = policy->max;
- per_cpu(cpu_cur_freq, cpu) = policy->cur;
mutex_unlock(&userspace_mutex);
break;
}
return rc;
}
-
#ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE
static
#endif
@@ -202,13 +116,11 @@ static int __init cpufreq_gov_userspace_init(void)
return cpufreq_register_governor(&cpufreq_gov_userspace);
}
-
static void __exit cpufreq_gov_userspace_exit(void)
{
cpufreq_unregister_governor(&cpufreq_gov_userspace);
}
-
MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>, "
"Russell King <rmk@arm.linux.org.uk>");
MODULE_DESCRIPTION("CPUfreq policy governor 'userspace'");
diff --git a/drivers/cpufreq/davinci-cpufreq.c b/drivers/cpufreq/davinci-cpufreq.c
index c33c76c360fa..551dd655c6f2 100644
--- a/drivers/cpufreq/davinci-cpufreq.c
+++ b/drivers/cpufreq/davinci-cpufreq.c
@@ -114,6 +114,9 @@ static int davinci_target(struct cpufreq_policy *policy,
pdata->set_voltage(idx);
out:
+ if (ret)
+ freqs.new = freqs.old;
+
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
return ret;
diff --git a/drivers/cpufreq/dbx500-cpufreq.c b/drivers/cpufreq/dbx500-cpufreq.c
index 6ec6539ae041..1fdb02b9f1ec 100644
--- a/drivers/cpufreq/dbx500-cpufreq.c
+++ b/drivers/cpufreq/dbx500-cpufreq.c
@@ -57,13 +57,13 @@ static int dbx500_cpufreq_target(struct cpufreq_policy *policy,
if (ret) {
pr_err("dbx500-cpufreq: Failed to set armss_clk to %d Hz: error %d\n",
freqs.new * 1000, ret);
- return ret;
+ freqs.new = freqs.old;
}
/* post change notification */
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
- return 0;
+ return ret;
}
static unsigned int dbx500_cpufreq_getspeed(unsigned int cpu)
diff --git a/drivers/cpufreq/e_powersaver.c b/drivers/cpufreq/e_powersaver.c
index 37380fb92621..a60efaeb4cf8 100644
--- a/drivers/cpufreq/e_powersaver.c
+++ b/drivers/cpufreq/e_powersaver.c
@@ -161,6 +161,9 @@ postchange:
current_multiplier);
}
#endif
+ if (err)
+ freqs.new = freqs.old;
+
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
return err;
}
@@ -188,7 +191,7 @@ static int eps_target(struct cpufreq_policy *policy,
}
/* Make frequency transition */
- dest_state = centaur->freq_table[newstate].index & 0xffff;
+ dest_state = centaur->freq_table[newstate].driver_data & 0xffff;
ret = eps_set_state(centaur, policy, dest_state);
if (ret)
printk(KERN_ERR "eps: Timeout!\n");
@@ -380,9 +383,9 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
f_table = &centaur->freq_table[0];
if (brand != EPS_BRAND_C7M) {
f_table[0].frequency = fsb * min_multiplier;
- f_table[0].index = (min_multiplier << 8) | min_voltage;
+ f_table[0].driver_data = (min_multiplier << 8) | min_voltage;
f_table[1].frequency = fsb * max_multiplier;
- f_table[1].index = (max_multiplier << 8) | max_voltage;
+ f_table[1].driver_data = (max_multiplier << 8) | max_voltage;
f_table[2].frequency = CPUFREQ_TABLE_END;
} else {
k = 0;
@@ -391,7 +394,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
for (i = min_multiplier; i <= max_multiplier; i++) {
voltage = (k * step) / 256 + min_voltage;
f_table[k].frequency = fsb * i;
- f_table[k].index = (i << 8) | voltage;
+ f_table[k].driver_data = (i << 8) | voltage;
k++;
}
f_table[k].frequency = CPUFREQ_TABLE_END;
diff --git a/drivers/cpufreq/exynos-cpufreq.c b/drivers/cpufreq/exynos-cpufreq.c
index 475b4f607f0d..0d32f02ef4d6 100644
--- a/drivers/cpufreq/exynos-cpufreq.c
+++ b/drivers/cpufreq/exynos-cpufreq.c
@@ -113,7 +113,8 @@ static int exynos_cpufreq_scale(unsigned int target_freq)
if (ret) {
pr_err("%s: failed to set cpu voltage to %d\n",
__func__, arm_volt);
- goto out;
+ freqs.new = freqs.old;
+ goto post_notify;
}
}
@@ -123,14 +124,19 @@ static int exynos_cpufreq_scale(unsigned int target_freq)
if (ret) {
pr_err("%s: failed to set cpu voltage to %d\n",
__func__, safe_arm_volt);
- goto out;
+ freqs.new = freqs.old;
+ goto post_notify;
}
}
exynos_info->set_freq(old_index, index);
+post_notify:
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+ if (ret)
+ goto out;
+
/* When the new frequency is lower than current frequency */
if ((freqs.new < freqs.old) ||
((freqs.new > freqs.old) && safe_arm_volt)) {
diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c
index d7a79662e24c..f0d87412cc91 100644
--- a/drivers/cpufreq/freq_table.c
+++ b/drivers/cpufreq/freq_table.c
@@ -34,8 +34,8 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
continue;
}
- pr_debug("table entry %u: %u kHz, %u index\n",
- i, freq, table[i].index);
+ pr_debug("table entry %u: %u kHz, %u driver_data\n",
+ i, freq, table[i].driver_data);
if (freq < min_freq)
min_freq = freq;
if (freq > max_freq)
@@ -97,11 +97,11 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
unsigned int *index)
{
struct cpufreq_frequency_table optimal = {
- .index = ~0,
+ .driver_data = ~0,
.frequency = 0,
};
struct cpufreq_frequency_table suboptimal = {
- .index = ~0,
+ .driver_data = ~0,
.frequency = 0,
};
unsigned int i;
@@ -129,12 +129,12 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
if (freq <= target_freq) {
if (freq >= optimal.frequency) {
optimal.frequency = freq;
- optimal.index = i;
+ optimal.driver_data = i;
}
} else {
if (freq <= suboptimal.frequency) {
suboptimal.frequency = freq;
- suboptimal.index = i;
+ suboptimal.driver_data = i;
}
}
break;
@@ -142,26 +142,26 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
if (freq >= target_freq) {
if (freq <= optimal.frequency) {
optimal.frequency = freq;
- optimal.index = i;
+ optimal.driver_data = i;
}
} else {
if (freq >= suboptimal.frequency) {
suboptimal.frequency = freq;
- suboptimal.index = i;
+ suboptimal.driver_data = i;
}
}
break;
}
}
- if (optimal.index > i) {
- if (suboptimal.index > i)
+ if (optimal.driver_data > i) {
+ if (suboptimal.driver_data > i)
return -EINVAL;
- *index = suboptimal.index;
+ *index = suboptimal.driver_data;
} else
- *index = optimal.index;
+ *index = optimal.driver_data;
pr_debug("target is %u (%u kHz, %u)\n", *index, table[*index].frequency,
- table[*index].index);
+ table[*index].driver_data);
return 0;
}
diff --git a/drivers/cpufreq/ia64-acpi-cpufreq.c b/drivers/cpufreq/ia64-acpi-cpufreq.c
index c0075dbaa633..573c14ea802d 100644
--- a/drivers/cpufreq/ia64-acpi-cpufreq.c
+++ b/drivers/cpufreq/ia64-acpi-cpufreq.c
@@ -326,7 +326,7 @@ acpi_cpufreq_cpu_init (
/* table init */
for (i = 0; i <= data->acpi_data.state_count; i++)
{
- data->freq_table[i].index = i;
+ data->freq_table[i].driver_data = i;
if (i < data->acpi_data.state_count) {
data->freq_table[i].frequency =
data->acpi_data.states[i].core_frequency * 1000;
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
index b78bc35973ba..e37cdaedbb5b 100644
--- a/drivers/cpufreq/imx6q-cpufreq.c
+++ b/drivers/cpufreq/imx6q-cpufreq.c
@@ -68,8 +68,6 @@ static int imx6q_set_target(struct cpufreq_policy *policy,
if (freqs.old == freqs.new)
return 0;
- cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
-
rcu_read_lock();
opp = opp_find_freq_ceil(cpu_dev, &freq_hz);
if (IS_ERR(opp)) {
@@ -86,13 +84,16 @@ static int imx6q_set_target(struct cpufreq_policy *policy,
freqs.old / 1000, volt_old / 1000,
freqs.new / 1000, volt / 1000);
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+
/* scaling up? scale voltage before frequency */
if (freqs.new > freqs.old) {
ret = regulator_set_voltage_tol(arm_reg, volt, 0);
if (ret) {
dev_err(cpu_dev,
"failed to scale vddarm up: %d\n", ret);
- return ret;
+ freqs.new = freqs.old;
+ goto post_notify;
}
/*
@@ -145,15 +146,18 @@ static int imx6q_set_target(struct cpufreq_policy *policy,
if (ret) {
dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);
regulator_set_voltage_tol(arm_reg, volt_old, 0);
- return ret;
+ freqs.new = freqs.old;
+ goto post_notify;
}
/* scaling down? scale voltage after frequency */
if (freqs.new < freqs.old) {
ret = regulator_set_voltage_tol(arm_reg, volt, 0);
- if (ret)
+ if (ret) {
dev_warn(cpu_dev,
"failed to scale vddarm down: %d\n", ret);
+ ret = 0;
+ }
if (freqs.old == FREQ_1P2_GHZ / 1000) {
regulator_set_voltage_tol(pu_reg,
@@ -163,9 +167,10 @@ static int imx6q_set_target(struct cpufreq_policy *policy,
}
}
+post_notify:
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
- return 0;
+ return ret;
}
static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
diff --git a/drivers/cpufreq/kirkwood-cpufreq.c b/drivers/cpufreq/kirkwood-cpufreq.c
index b2644af985ec..c233ea617366 100644
--- a/drivers/cpufreq/kirkwood-cpufreq.c
+++ b/drivers/cpufreq/kirkwood-cpufreq.c
@@ -59,7 +59,7 @@ static void kirkwood_cpufreq_set_cpu_state(struct cpufreq_policy *policy,
unsigned int index)
{
struct cpufreq_freqs freqs;
- unsigned int state = kirkwood_freq_table[index].index;
+ unsigned int state = kirkwood_freq_table[index].driver_data;
unsigned long reg;
freqs.old = kirkwood_cpufreq_get_cpu_frequency(0);
diff --git a/drivers/cpufreq/longhaul.c b/drivers/cpufreq/longhaul.c
index b448638e34de..b6a0a7a406b0 100644
--- a/drivers/cpufreq/longhaul.c
+++ b/drivers/cpufreq/longhaul.c
@@ -254,7 +254,7 @@ static void longhaul_setstate(struct cpufreq_policy *policy,
u32 bm_timeout = 1000;
unsigned int dir = 0;
- mults_index = longhaul_table[table_index].index;
+ mults_index = longhaul_table[table_index].driver_data;
/* Safety precautions */
mult = mults[mults_index & 0x1f];
if (mult == -1)
@@ -487,7 +487,7 @@ static int __cpuinit longhaul_get_ranges(void)
if (ratio > maxmult || ratio < minmult)
continue;
longhaul_table[k].frequency = calc_speed(ratio);
- longhaul_table[k].index = j;
+ longhaul_table[k].driver_data = j;
k++;
}
if (k <= 1) {
@@ -508,8 +508,8 @@ static int __cpuinit longhaul_get_ranges(void)
if (min_i != j) {
swap(longhaul_table[j].frequency,
longhaul_table[min_i].frequency);
- swap(longhaul_table[j].index,
- longhaul_table[min_i].index);
+ swap(longhaul_table[j].driver_data,
+ longhaul_table[min_i].driver_data);
}
}
@@ -517,7 +517,7 @@ static int __cpuinit longhaul_get_ranges(void)
/* Find index we are running on */
for (j = 0; j < k; j++) {
- if (mults[longhaul_table[j].index & 0x1f] == mult) {
+ if (mults[longhaul_table[j].driver_data & 0x1f] == mult) {
longhaul_index = j;
break;
}
@@ -613,7 +613,7 @@ static void __cpuinit longhaul_setup_voltagescaling(void)
pos = (speed - min_vid_speed) / kHz_step + minvid.pos;
else
pos = minvid.pos;
- longhaul_table[j].index |= mV_vrm_table[pos] << 8;
+ longhaul_table[j].driver_data |= mV_vrm_table[pos] << 8;
vid = vrm_mV_table[mV_vrm_table[pos]];
printk(KERN_INFO PFX "f: %d kHz, index: %d, vid: %d mV\n",
speed, j, vid.mV);
@@ -656,12 +656,12 @@ static int longhaul_target(struct cpufreq_policy *policy,
* this in hardware, C3 is old and we need to do this
* in software. */
i = longhaul_index;
- current_vid = (longhaul_table[longhaul_index].index >> 8);
+ current_vid = (longhaul_table[longhaul_index].driver_data >> 8);
current_vid &= 0x1f;
if (table_index > longhaul_index)
dir = 1;
while (i != table_index) {
- vid = (longhaul_table[i].index >> 8) & 0x1f;
+ vid = (longhaul_table[i].driver_data >> 8) & 0x1f;
if (vid != current_vid) {
longhaul_setstate(policy, i);
current_vid = vid;
diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
index d53912768946..bb838b985077 100644
--- a/drivers/cpufreq/loongson2_cpufreq.c
+++ b/drivers/cpufreq/loongson2_cpufreq.c
@@ -72,7 +72,7 @@ static int loongson2_cpufreq_target(struct cpufreq_policy *policy,
freq =
((cpu_clock_freq / 1000) *
- loongson2_clockmod_table[newstate].index) / 8;
+ loongson2_clockmod_table[newstate].driver_data) / 8;
if (freq < policy->min || freq > policy->max)
return -EINVAL;
diff --git a/drivers/cpufreq/omap-cpufreq.c b/drivers/cpufreq/omap-cpufreq.c
index 0279d18a57f9..29468a522ee9 100644
--- a/drivers/cpufreq/omap-cpufreq.c
+++ b/drivers/cpufreq/omap-cpufreq.c
@@ -93,9 +93,6 @@ static int omap_target(struct cpufreq_policy *policy,
if (freqs.old == freqs.new && policy->cur == freqs.new)
return ret;
- /* notifiers */
- cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
-
freq = freqs.new * 1000;
ret = clk_round_rate(mpu_clk, freq);
if (IS_ERR_VALUE(ret)) {
@@ -125,6 +122,9 @@ static int omap_target(struct cpufreq_policy *policy,
freqs.old / 1000, volt_old ? volt_old / 1000 : -1,
freqs.new / 1000, volt ? volt / 1000 : -1);
+ /* notifiers */
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+
/* scaling up? scale voltage before frequency */
if (mpu_reg && (freqs.new > freqs.old)) {
r = regulator_set_voltage(mpu_reg, volt - tol, volt + tol);
diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c
index 421ef37d0bb3..9ee78170ff86 100644
--- a/drivers/cpufreq/p4-clockmod.c
+++ b/drivers/cpufreq/p4-clockmod.c
@@ -118,7 +118,7 @@ static int cpufreq_p4_target(struct cpufreq_policy *policy,
return -EINVAL;
freqs.old = cpufreq_p4_get(policy->cpu);
- freqs.new = stock_freq * p4clockmod_table[newstate].index / 8;
+ freqs.new = stock_freq * p4clockmod_table[newstate].driver_data / 8;
if (freqs.new == freqs.old)
return 0;
@@ -131,7 +131,7 @@ static int cpufreq_p4_target(struct cpufreq_policy *policy,
* Developer's Manual, Volume 3
*/
for_each_cpu(i, policy->cpus)
- cpufreq_p4_setdc(i, p4clockmod_table[newstate].index);
+ cpufreq_p4_setdc(i, p4clockmod_table[newstate].driver_data);
/* notifiers */
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
diff --git a/drivers/cpufreq/pasemi-cpufreq.c b/drivers/cpufreq/pasemi-cpufreq.c
new file mode 100644
index 000000000000..b704da404067
--- /dev/null
+++ b/drivers/cpufreq/pasemi-cpufreq.c
@@ -0,0 +1,331 @@
+/*
+ * Copyright (C) 2007 PA Semi, Inc
+ *
+ * Authors: Egor Martovetsky <egor@pasemi.com>
+ * Olof Johansson <olof@lixom.net>
+ *
+ * Maintained by: Olof Johansson <olof@lixom.net>
+ *
+ * Based on arch/powerpc/platforms/cell/cbe_cpufreq.c:
+ * (C) Copyright IBM Deutschland Entwicklung GmbH 2005
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#include <linux/cpufreq.h>
+#include <linux/timer.h>
+#include <linux/module.h>
+
+#include <asm/hw_irq.h>
+#include <asm/io.h>
+#include <asm/prom.h>
+#include <asm/time.h>
+#include <asm/smp.h>
+
+#define SDCASR_REG 0x0100
+#define SDCASR_REG_STRIDE 0x1000
+#define SDCPWR_CFGA0_REG 0x0100
+#define SDCPWR_PWST0_REG 0x0000
+#define SDCPWR_GIZTIME_REG 0x0440
+
+/* SDCPWR_GIZTIME_REG fields */
+#define SDCPWR_GIZTIME_GR 0x80000000
+#define SDCPWR_GIZTIME_LONGLOCK 0x000000ff
+
+/* Offset of ASR registers from SDC base */
+#define SDCASR_OFFSET 0x120000
+
+static void __iomem *sdcpwr_mapbase;
+static void __iomem *sdcasr_mapbase;
+
+static DEFINE_MUTEX(pas_switch_mutex);
+
+/* Current astate, is used when waking up from power savings on
+ * one core, in case the other core has switched states during
+ * the idle time.
+ */
+static int current_astate;
+
+/* We support 5 (A0-A4) power states, excluding turbo (A5-A6) modes */
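+/* .driver_data holds the astate number; .frequency is filled in at init from get_astate_freq() */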
+static struct cpufreq_frequency_table pas_freqs[] = {
+ {0, 0},
+ {1, 0},
+ {2, 0},
+ {3, 0},
+ {4, 0},
+ {0, CPUFREQ_TABLE_END},
+};
+
+static struct freq_attr *pas_cpu_freqs_attr[] = {
+ &cpufreq_freq_attr_scaling_available_freqs,
+ NULL,
+};
+
+/*
+ * hardware specific functions
+ */
+
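+/* Each astate has a config register at a 0x10 stride; the low 6 bits hold the
+ * core frequency in 100 MHz units (see the *100000 conversion to kHz in cpu_init)
+ */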
+static int get_astate_freq(int astate)
+{
+ u32 ret;
+ ret = in_le32(sdcpwr_mapbase + SDCPWR_CFGA0_REG + (astate * 0x10));
+
+ return ret & 0x3f;
+}
+
+static int get_cur_astate(int cpu)
+{
+ u32 ret;
+
+ ret = in_le32(sdcpwr_mapbase + SDCPWR_PWST0_REG);
+ ret = (ret >> (cpu * 4)) & 0x7;
+
+ return ret;
+}
+
+static int get_gizmo_latency(void)
+{
+ u32 giztime, ret;
+
+ giztime = in_le32(sdcpwr_mapbase + SDCPWR_GIZTIME_REG);
+
+ /* just provide the upper bound */
+ if (giztime & SDCPWR_GIZTIME_GR)
+ ret = (giztime & SDCPWR_GIZTIME_LONGLOCK) * 128000;
+ else
+ ret = (giztime & SDCPWR_GIZTIME_LONGLOCK) * 1000;
+
+ return ret;
+}
+
+static void set_astate(int cpu, unsigned int astate)
+{
+ unsigned long flags;
+
+ /* Return if called before init has run */
+ if (unlikely(!sdcasr_mapbase))
+ return;
+
+ local_irq_save(flags);
+
+ out_le32(sdcasr_mapbase + SDCASR_REG + SDCASR_REG_STRIDE*cpu, astate);
+
+ local_irq_restore(flags);
+}
+
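+/* exported (non-static) helpers: check_astate() returns the current hardware
+ * astate and restore_astate() reapplies current_astate after power savings
+ */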
+int check_astate(void)
+{
+ return get_cur_astate(hard_smp_processor_id());
+}
+
+void restore_astate(int cpu)
+{
+ set_astate(cpu, current_astate);
+}
+
+/*
+ * cpufreq functions
+ */
+
+static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+ const u32 *max_freqp;
+ u32 max_freq;
+ int i, cur_astate;
+ struct resource res;
+ struct device_node *cpu, *dn;
+ int err = -ENODEV;
+
+ cpu = of_get_cpu_node(policy->cpu, NULL);
+
+ if (!cpu)
+ goto out;
+
+ dn = of_find_compatible_node(NULL, NULL, "1682m-sdc");
+ if (!dn)
+ dn = of_find_compatible_node(NULL, NULL,
+ "pasemi,pwrficient-sdc");
+ if (!dn)
+ goto out;
+ err = of_address_to_resource(dn, 0, &res);
+ of_node_put(dn);
+ if (err)
+ goto out;
+ sdcasr_mapbase = ioremap(res.start + SDCASR_OFFSET, 0x2000);
+ if (!sdcasr_mapbase) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ dn = of_find_compatible_node(NULL, NULL, "1682m-gizmo");
+ if (!dn)
+ dn = of_find_compatible_node(NULL, NULL,
+ "pasemi,pwrficient-gizmo");
+ if (!dn) {
+ err = -ENODEV;
+ goto out_unmap_sdcasr;
+ }
+ err = of_address_to_resource(dn, 0, &res);
+ of_node_put(dn);
+ if (err)
+ goto out_unmap_sdcasr;
+ sdcpwr_mapbase = ioremap(res.start, 0x1000);
+ if (!sdcpwr_mapbase) {
+ err = -EINVAL;
+ goto out_unmap_sdcasr;
+ }
+
+ pr_debug("init cpufreq on CPU %d\n", policy->cpu);
+
+ max_freqp = of_get_property(cpu, "clock-frequency", NULL);
+ if (!max_freqp) {
+ err = -EINVAL;
+ goto out_unmap_sdcpwr;
+ }
+
+ /* we need the freq in kHz */
+ max_freq = *max_freqp / 1000;
+
+ pr_debug("max clock-frequency is at %u kHz\n", max_freq);
+ pr_debug("initializing frequency table\n");
+
+ /* initialize frequency table */
+ for (i=0; pas_freqs[i].frequency!=CPUFREQ_TABLE_END; i++) {
+ pas_freqs[i].frequency =
+ get_astate_freq(pas_freqs[i].driver_data) * 100000;
+ pr_debug("%d: %d\n", i, pas_freqs[i].frequency);
+ }
+
+ policy->cpuinfo.transition_latency = get_gizmo_latency();
+
+ cur_astate = get_cur_astate(policy->cpu);
+ pr_debug("current astate is at %d\n",cur_astate);
+
+ policy->cur = pas_freqs[cur_astate].frequency;
+ cpumask_copy(policy->cpus, cpu_online_mask);
+
+ ppc_proc_freq = policy->cur * 1000ul;
+
+ cpufreq_frequency_table_get_attr(pas_freqs, policy->cpu);
+
+ /* this ensures that policy->cpuinfo_min and policy->cpuinfo_max
+ * are set correctly
+ */
+ return cpufreq_frequency_table_cpuinfo(policy, pas_freqs);
+
+out_unmap_sdcpwr:
+ iounmap(sdcpwr_mapbase);
+
+out_unmap_sdcasr:
+ iounmap(sdcasr_mapbase);
+out:
+ return err;
+}
+
+static int pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+ /*
+ * We don't support CPU hotplug. Don't unmap after the system
+ * has already made it to a running state.
+ */
+ if (system_state != SYSTEM_BOOTING)
+ return 0;
+
+ if (sdcasr_mapbase)
+ iounmap(sdcasr_mapbase);
+ if (sdcpwr_mapbase)
+ iounmap(sdcpwr_mapbase);
+
+ cpufreq_frequency_table_put_attr(policy->cpu);
+ return 0;
+}
+
+static int pas_cpufreq_verify(struct cpufreq_policy *policy)
+{
+ return cpufreq_frequency_table_verify(policy, pas_freqs);
+}
+
+static int pas_cpufreq_target(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
+{
+ struct cpufreq_freqs freqs;
+ int pas_astate_new;
+ int i;
+
+ cpufreq_frequency_table_target(policy,
+ pas_freqs,
+ target_freq,
+ relation,
+ &pas_astate_new);
+
+ freqs.old = policy->cur;
+ freqs.new = pas_freqs[pas_astate_new].frequency;
+
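+ /* pas_switch_mutex serializes astate transitions across CPUs */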
+ mutex_lock(&pas_switch_mutex);
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+
+ pr_debug("setting frequency for cpu %d to %d kHz, 1/%d of max frequency\n",
+ policy->cpu,
+ pas_freqs[pas_astate_new].frequency,
+ pas_freqs[pas_astate_new].driver_data);
+
+ current_astate = pas_astate_new;
+
+ for_each_online_cpu(i)
+ set_astate(i, pas_astate_new);
+
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+ mutex_unlock(&pas_switch_mutex);
+
+ ppc_proc_freq = freqs.new * 1000ul;
+ return 0;
+}
+
+static struct cpufreq_driver pas_cpufreq_driver = {
+ .name = "pas-cpufreq",
+ .owner = THIS_MODULE,
+ .flags = CPUFREQ_CONST_LOOPS,
+ .init = pas_cpufreq_cpu_init,
+ .exit = pas_cpufreq_cpu_exit,
+ .verify = pas_cpufreq_verify,
+ .target = pas_cpufreq_target,
+ .attr = pas_cpu_freqs_attr,
+};
+
+/*
+ * module init and destroy
+ */
+
+static int __init pas_cpufreq_init(void)
+{
+ if (!of_machine_is_compatible("PA6T-1682M") &&
+ !of_machine_is_compatible("pasemi,pwrficient"))
+ return -ENODEV;
+
+ return cpufreq_register_driver(&pas_cpufreq_driver);
+}
+
+static void __exit pas_cpufreq_exit(void)
+{
+ cpufreq_unregister_driver(&pas_cpufreq_driver);
+}
+
+module_init(pas_cpufreq_init);
+module_exit(pas_cpufreq_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Egor Martovetsky <egor@pasemi.com>, Olof Johansson <olof@lixom.net>");
diff --git a/drivers/cpufreq/pcc-cpufreq.c b/drivers/cpufreq/pcc-cpufreq.c
index 0de00081a81e..1581fcc4cf4a 100644
--- a/drivers/cpufreq/pcc-cpufreq.c
+++ b/drivers/cpufreq/pcc-cpufreq.c
@@ -243,6 +243,8 @@ static int pcc_cpufreq_target(struct cpufreq_policy *policy,
return 0;
cmd_incomplete:
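+ /* the command did not complete: report the unchanged frequency in the POSTCHANGE notification */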
+ freqs.new = freqs.old;
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
iowrite16(0, &pcch_hdr->status);
spin_unlock(&pcc_lock);
return -EINVAL;
diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c
new file mode 100644
index 000000000000..3104fad82480
--- /dev/null
+++ b/drivers/cpufreq/pmac32-cpufreq.c
@@ -0,0 +1,721 @@
+/*
+ * Copyright (C) 2002 - 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
+ * Copyright (C) 2004 John Steele Scott <toojays@toojays.net>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * TODO: Need a big cleanup here. Basically, we need to have different
+ * cpufreq_driver structures for the different type of HW instead of the
+ * current mess. We also need to better deal with the detection of the
+ * type of machine.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/sched.h>
+#include <linux/adb.h>
+#include <linux/pmu.h>
+#include <linux/cpufreq.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/hardirq.h>
+#include <asm/prom.h>
+#include <asm/machdep.h>
+#include <asm/irq.h>
+#include <asm/pmac_feature.h>
+#include <asm/mmu_context.h>
+#include <asm/sections.h>
+#include <asm/cputable.h>
+#include <asm/time.h>
+#include <asm/mpic.h>
+#include <asm/keylargo.h>
+#include <asm/switch_to.h>
+
+/* WARNING !!! This will cause calibrate_delay() to be called,
+ * but this is an __init function ! So you MUST go edit
+ * init/main.c to make it non-init before enabling DEBUG_FREQ
+ */
+#undef DEBUG_FREQ
+
+extern void low_choose_7447a_dfs(int dfs);
+extern void low_choose_750fx_pll(int pll);
+extern void low_sleep_handler(void);
+
+/*
+ * Currently, PowerMac cpufreq supports only high & low frequencies
+ * that are set by the firmware
+ */
+static unsigned int low_freq;
+static unsigned int hi_freq;
+static unsigned int cur_freq;
+static unsigned int sleep_freq;
+static unsigned long transition_latency;
+
+/*
+ * Different models use different mechanisms to switch the frequency
+ */
+static int (*set_speed_proc)(int low_speed);
+static unsigned int (*get_speed_proc)(void);
+
+/*
+ * Some definitions used by the various speedprocs
+ */
+static u32 voltage_gpio;
+static u32 frequency_gpio;
+static u32 slew_done_gpio;
+static int no_schedule;
+static int has_cpu_l2lve;
+static int is_pmu_based;
+
+/* There are only two frequency states for each processor. Values
+ * are in kHz for the time being.
+ */
+#define CPUFREQ_HIGH 0
+#define CPUFREQ_LOW 1
+
+static struct cpufreq_frequency_table pmac_cpu_freqs[] = {
+ {CPUFREQ_HIGH, 0},
+ {CPUFREQ_LOW, 0},
+ {0, CPUFREQ_TABLE_END},
+};
+
+static struct freq_attr* pmac_cpu_freqs_attr[] = {
+ &cpufreq_freq_attr_scaling_available_freqs,
+ NULL,
+};
+
+static inline void local_delay(unsigned long ms)
+{
+ if (no_schedule)
+ mdelay(ms);
+ else
+ msleep(ms);
+}
+
+#ifdef DEBUG_FREQ
+static inline void debug_calc_bogomips(void)
+{
+ /* This will cause a recalc of bogomips and display the
+ * result. We backup/restore the value to avoid affecting the
+ * core cpufreq framework's own calculation.
+ */
+ unsigned long save_lpj = loops_per_jiffy;
+ calibrate_delay();
+ loops_per_jiffy = save_lpj;
+}
+#endif /* DEBUG_FREQ */
+
+/* Switch CPU speed under 750FX CPU control
+ */
+static int cpu_750fx_cpu_speed(int low_speed)
+{
+ u32 hid2;
+
+ if (low_speed == 0) {
+ /* ramping up, set voltage first */
+ pmac_call_feature(PMAC_FTR_WRITE_GPIO, NULL, voltage_gpio, 0x05);
+ /* Make sure we sleep for at least 1ms */
+ local_delay(10);
+
+ /* tweak L2 for high voltage */
+ if (has_cpu_l2lve) {
+ hid2 = mfspr(SPRN_HID2);
+ hid2 &= ~0x2000;
+ mtspr(SPRN_HID2, hid2);
+ }
+ }
+#ifdef CONFIG_6xx
+ low_choose_750fx_pll(low_speed);
+#endif
+ if (low_speed == 1) {
+ /* tweak L2 for low voltage */
+ if (has_cpu_l2lve) {
+ hid2 = mfspr(SPRN_HID2);
+ hid2 |= 0x2000;
+ mtspr(SPRN_HID2, hid2);
+ }
+
+ /* ramping down, set voltage last */
+ pmac_call_feature(PMAC_FTR_WRITE_GPIO, NULL, voltage_gpio, 0x04);
+ local_delay(10);
+ }
+
+ return 0;
+}
+
+static unsigned int cpu_750fx_get_cpu_speed(void)
+{
+ if (mfspr(SPRN_HID1) & HID1_PS)
+ return low_freq;
+ else
+ return hi_freq;
+}
+
+/* Switch CPU speed using DFS */
+static int dfs_set_cpu_speed(int low_speed)
+{
+ if (low_speed == 0) {
+ /* ramping up, set voltage first */
+ pmac_call_feature(PMAC_FTR_WRITE_GPIO, NULL, voltage_gpio, 0x05);
+ /* Make sure we sleep for at least 1ms */
+ local_delay(1);
+ }
+
+ /* set frequency */
+#ifdef CONFIG_6xx
+ low_choose_7447a_dfs(low_speed);
+#endif
+ udelay(100);
+
+ if (low_speed == 1) {
+ /* ramping down, set voltage last */
+ pmac_call_feature(PMAC_FTR_WRITE_GPIO, NULL, voltage_gpio, 0x04);
+ local_delay(1);
+ }
+
+ return 0;
+}
+
+static unsigned int dfs_get_cpu_speed(void)
+{
+ if (mfspr(SPRN_HID1) & HID1_DFS)
+ return low_freq;
+ else
+ return hi_freq;
+}
+
+
+/* Switch CPU speed using slewing GPIOs
+ */
+static int gpios_set_cpu_speed(int low_speed)
+{
+ int gpio, timeout = 0;
+
+ /* If ramping up, set voltage first */
+ if (low_speed == 0) {
+ pmac_call_feature(PMAC_FTR_WRITE_GPIO, NULL, voltage_gpio, 0x05);
+ /* Delay is way too big but it's ok, we schedule */
+ local_delay(10);
+ }
+
+ /* Set frequency */
+ gpio = pmac_call_feature(PMAC_FTR_READ_GPIO, NULL, frequency_gpio, 0);
+ if (low_speed == ((gpio & 0x01) == 0))
+ goto skip;
+
+ pmac_call_feature(PMAC_FTR_WRITE_GPIO, NULL, frequency_gpio,
+ low_speed ? 0x04 : 0x05);
+ udelay(200);
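+ /* poll the slew-done GPIO (bit 0x02) for up to ~100 ms */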
+ do {
+ if (++timeout > 100)
+ break;
+ local_delay(1);
+ gpio = pmac_call_feature(PMAC_FTR_READ_GPIO, NULL, slew_done_gpio, 0);
+ } while((gpio & 0x02) == 0);
+ skip:
+ /* If ramping down, set voltage last */
+ if (low_speed == 1) {
+ pmac_call_feature(PMAC_FTR_WRITE_GPIO, NULL, voltage_gpio, 0x04);
+ /* Delay is way too big but it's ok, we schedule */
+ local_delay(10);
+ }
+
+#ifdef DEBUG_FREQ
+ debug_calc_bogomips();
+#endif
+
+ return 0;
+}
+
+/* Switch CPU speed under PMU control
+ */
+static int pmu_set_cpu_speed(int low_speed)
+{
+ struct adb_request req;
+ unsigned long save_l2cr;
+ unsigned long save_l3cr;
+ unsigned int pic_prio;
+ unsigned long flags;
+
+ preempt_disable();
+
+#ifdef DEBUG_FREQ
+ printk(KERN_DEBUG "HID1, before: %x\n", mfspr(SPRN_HID1));
+#endif
+ pmu_suspend();
+
+ /* Disable all interrupt sources on openpic */
+ pic_prio = mpic_cpu_get_priority();
+ mpic_cpu_set_priority(0xf);
+
+ /* Make sure the decrementer won't interrupt us */
+ asm volatile("mtdec %0" : : "r" (0x7fffffff));
+ /* Make sure any pending DEC interrupt occurring while we did
+ * the above didn't re-enable the DEC */
+ mb();
+ asm volatile("mtdec %0" : : "r" (0x7fffffff));
+
+ /* We can now disable MSR_EE */
+ local_irq_save(flags);
+
+ /* Giveup the FPU & vec */
+ enable_kernel_fp();
+
+#ifdef CONFIG_ALTIVEC
+ if (cpu_has_feature(CPU_FTR_ALTIVEC))
+ enable_kernel_altivec();
+#endif /* CONFIG_ALTIVEC */
+
+ /* Save & disable L2 and L3 caches */
+ save_l3cr = _get_L3CR(); /* (returns -1 if not available) */
+ save_l2cr = _get_L2CR(); /* (returns -1 if not available) */
+
+ /* Send the new speed command. My assumption is that this command
+ * will cause PLL_CFG[0..3] to be changed next time CPU goes to sleep
+ */
+ pmu_request(&req, NULL, 6, PMU_CPU_SPEED, 'W', 'O', 'O', 'F', low_speed);
+ while (!req.complete)
+ pmu_poll();
+
+ /* Prepare the northbridge for the speed transition */
+ pmac_call_feature(PMAC_FTR_SLEEP_STATE,NULL,1,1);
+
+ /* Call low level code to backup CPU state and recover from
+ * hardware reset
+ */
+ low_sleep_handler();
+
+ /* Restore the northbridge */
+ pmac_call_feature(PMAC_FTR_SLEEP_STATE,NULL,1,0);
+
+ /* Restore L2 cache */
+ if (save_l2cr != 0xffffffff && (save_l2cr & L2CR_L2E) != 0)
+ _set_L2CR(save_l2cr);
+ /* Restore L3 cache */
+ if (save_l3cr != 0xffffffff && (save_l3cr & L3CR_L3E) != 0)
+ _set_L3CR(save_l3cr);
+
+ /* Restore userland MMU context */
+ switch_mmu_context(NULL, current->active_mm);
+
+#ifdef DEBUG_FREQ
+ printk(KERN_DEBUG "HID1, after: %x\n", mfspr(SPRN_HID1));
+#endif
+
+ /* Restore low level PMU operations */
+ pmu_unlock();
+
+ /*
+ * Restore decrementer; we'll take a decrementer interrupt
+ * as soon as interrupts are re-enabled and the generic
+ * clockevents code will reprogram it with the right value.
+ */
+ set_dec(1);
+
+ /* Restore interrupts */
+ mpic_cpu_set_priority(pic_prio);
+
+ /* Let interrupts flow again ... */
+ local_irq_restore(flags);
+
+#ifdef DEBUG_FREQ
+ debug_calc_bogomips();
+#endif
+
+ pmu_resume();
+
+ preempt_enable();
+
+ return 0;
+}
+
+static int do_set_cpu_speed(struct cpufreq_policy *policy, int speed_mode,
+ int notify)
+{
+ struct cpufreq_freqs freqs;
+ unsigned long l3cr;
+ static unsigned long prev_l3cr;
+
+ freqs.old = cur_freq;
+ freqs.new = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq;
+
+ if (freqs.old == freqs.new)
+ return 0;
+
+ if (notify)
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
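+ /* disable the L3 cache before dropping to low speed; prev_l3cr remembers
+ * its state so it can be restored when switching back to high speed */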
+ if (speed_mode == CPUFREQ_LOW &&
+ cpu_has_feature(CPU_FTR_L3CR)) {
+ l3cr = _get_L3CR();
+ if (l3cr & L3CR_L3E) {
+ prev_l3cr = l3cr;
+ _set_L3CR(0);
+ }
+ }
+ set_speed_proc(speed_mode == CPUFREQ_LOW);
+ if (speed_mode == CPUFREQ_HIGH &&
+ cpu_has_feature(CPU_FTR_L3CR)) {
+ l3cr = _get_L3CR();
+ if ((prev_l3cr & L3CR_L3E) && l3cr != prev_l3cr)
+ _set_L3CR(prev_l3cr);
+ }
+ if (notify)
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+ cur_freq = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq;
+
+ return 0;
+}
+
+static unsigned int pmac_cpufreq_get_speed(unsigned int cpu)
+{
+ return cur_freq;
+}
+
+static int pmac_cpufreq_verify(struct cpufreq_policy *policy)
+{
+ return cpufreq_frequency_table_verify(policy, pmac_cpu_freqs);
+}
+
+static int pmac_cpufreq_target( struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
+{
+ unsigned int newstate = 0;
+ int rc;
+
+ if (cpufreq_frequency_table_target(policy, pmac_cpu_freqs,
+ target_freq, relation, &newstate))
+ return -EINVAL;
+
+ rc = do_set_cpu_speed(policy, newstate, 1);
+
+ ppc_proc_freq = cur_freq * 1000ul;
+ return rc;
+}
+
+static int pmac_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+ if (policy->cpu != 0)
+ return -ENODEV;
+
+ policy->cpuinfo.transition_latency = transition_latency;
+ policy->cur = cur_freq;
+
+ cpufreq_frequency_table_get_attr(pmac_cpu_freqs, policy->cpu);
+ return cpufreq_frequency_table_cpuinfo(policy, pmac_cpu_freqs);
+}
+
+static u32 read_gpio(struct device_node *np)
+{
+ const u32 *reg = of_get_property(np, "reg", NULL);
+ u32 offset;
+
+ if (reg == NULL)
+ return 0;
+ /* That works for all keylargos but shall be fixed properly
+ * some day... The problem is that it seems we can't rely
+ * on the "reg" property of the GPIO nodes, they are either
+ * relative to the base of KeyLargo or to the base of the
+ * GPIO space, and the device-tree doesn't help.
+ */
+ offset = *reg;
+ if (offset < KEYLARGO_GPIO_LEVELS0)
+ offset += KEYLARGO_GPIO_LEVELS0;
+ return offset;
+}
+
+static int pmac_cpufreq_suspend(struct cpufreq_policy *policy)
+{
+ /* Ok, this could be made a bit smarter, but let's be robust for now. We
+ * always force a speed change to high speed before sleep, to make sure
+ * we have appropriate voltage and/or bus speed for the wakeup process,
+ * and to make sure our loops_per_jiffies are "good enough", that is, will
+ * not cause too short delays if we sleep in low speed and wake in high
+ * speed.
+ */
+ no_schedule = 1;
+ sleep_freq = cur_freq;
+ if (cur_freq == low_freq && !is_pmu_based)
+ do_set_cpu_speed(policy, CPUFREQ_HIGH, 0);
+ return 0;
+}
+
+static int pmac_cpufreq_resume(struct cpufreq_policy *policy)
+{
+ /* If we resume, first check if we have a get() function */
+ if (get_speed_proc)
+ cur_freq = get_speed_proc();
+ else
+ cur_freq = 0;
+
+ /* We don't, hrm... we don't really know our speed here, best
+ * is that we force a switch to whatever it was, which is
+ * probably high speed due to our suspend() routine
+ */
+ do_set_cpu_speed(policy, sleep_freq == low_freq ?
+ CPUFREQ_LOW : CPUFREQ_HIGH, 0);
+
+ ppc_proc_freq = cur_freq * 1000ul;
+
+ no_schedule = 0;
+ return 0;
+}
+
+static struct cpufreq_driver pmac_cpufreq_driver = {
+ .verify = pmac_cpufreq_verify,
+ .target = pmac_cpufreq_target,
+ .get = pmac_cpufreq_get_speed,
+ .init = pmac_cpufreq_cpu_init,
+ .suspend = pmac_cpufreq_suspend,
+ .resume = pmac_cpufreq_resume,
+ .flags = CPUFREQ_PM_NO_WARN,
+ .attr = pmac_cpu_freqs_attr,
+ .name = "powermac",
+ .owner = THIS_MODULE,
+};
+
+
+static int pmac_cpufreq_init_MacRISC3(struct device_node *cpunode)
+{
+ struct device_node *volt_gpio_np = of_find_node_by_name(NULL,
+ "voltage-gpio");
+ struct device_node *freq_gpio_np = of_find_node_by_name(NULL,
+ "frequency-gpio");
+ struct device_node *slew_done_gpio_np = of_find_node_by_name(NULL,
+ "slewing-done");
+ const u32 *value;
+
+ /*
+ * Check to see if it's GPIO driven or PMU only
+ *
+ * The way we extract the GPIO address is slightly hackish, but it
+ * works well enough for now. We need to abstract the whole GPIO
+ * stuff sooner or later anyway
+ */
+
+ if (volt_gpio_np)
+ voltage_gpio = read_gpio(volt_gpio_np);
+ if (freq_gpio_np)
+ frequency_gpio = read_gpio(freq_gpio_np);
+ if (slew_done_gpio_np)
+ slew_done_gpio = read_gpio(slew_done_gpio_np);
+
+ /* If we use the frequency GPIOs, calculate the min/max speeds based
+ * on the bus frequencies
+ */
+ if (frequency_gpio && slew_done_gpio) {
+ int lenp, rc;
+ const u32 *freqs, *ratio;
+
+ freqs = of_get_property(cpunode, "bus-frequencies", &lenp);
+ lenp /= sizeof(u32);
+ if (freqs == NULL || lenp != 2) {
+ printk(KERN_ERR "cpufreq: bus-frequencies incorrect or missing\n");
+ return 1;
+ }
+ ratio = of_get_property(cpunode, "processor-to-bus-ratio*2",
+ NULL);
+ if (ratio == NULL) {
+ printk(KERN_ERR "cpufreq: processor-to-bus-ratio*2 missing\n");
+ return 1;
+ }
+
+ /* Get the min/max bus frequencies */
+ low_freq = min(freqs[0], freqs[1]);
+ hi_freq = max(freqs[0], freqs[1]);
+
+ /* Grrrr.. It _seems_ that the device-tree is lying on the low bus
+ * frequency, it claims it to be around 84Mhz on some models while
+ * it appears to be approx. 101Mhz on all. Let's hack around here...
+ * fortunately, we don't need to be too precise
+ */
+ if (low_freq < 98000000)
+ low_freq = 101000000;
+
+ /* Convert those to CPU core clocks */
+ low_freq = (low_freq * (*ratio)) / 2000;
+ hi_freq = (hi_freq * (*ratio)) / 2000;
+
+ /* Now that we have the frequencies, read the GPIO to see what our
+ * current speed is
+ */
+ rc = pmac_call_feature(PMAC_FTR_READ_GPIO, NULL, frequency_gpio, 0);
+ cur_freq = (rc & 0x01) ? hi_freq : low_freq;
+
+ set_speed_proc = gpios_set_cpu_speed;
+ return 1;
+ }
+
+ /* If we use the PMU, look for the min & max frequencies in the
+ * device-tree
+ */
+ value = of_get_property(cpunode, "min-clock-frequency", NULL);
+ if (!value)
+ return 1;
+ low_freq = (*value) / 1000;
+ /* The PowerBook G4 12" (PowerBook6,1) has an error in the device-tree
+ * here */
+ if (low_freq < 100000)
+ low_freq *= 10;
+
+ value = of_get_property(cpunode, "max-clock-frequency", NULL);
+ if (!value)
+ return 1;
+ hi_freq = (*value) / 1000;
+ set_speed_proc = pmu_set_cpu_speed;
+ is_pmu_based = 1;
+
+ return 0;
+}
+
+static int pmac_cpufreq_init_7447A(struct device_node *cpunode)
+{
+ struct device_node *volt_gpio_np;
+
+ if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
+ return 1;
+
+ volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select");
+ if (volt_gpio_np)
+ voltage_gpio = read_gpio(volt_gpio_np);
+ if (!voltage_gpio){
+ printk(KERN_ERR "cpufreq: missing cpu-vcore-select gpio\n");
+ return 1;
+ }
+
+ /* OF only reports the high frequency */
+ hi_freq = cur_freq;
+ low_freq = cur_freq/2;
+
+ /* Read actual frequency from CPU */
+ cur_freq = dfs_get_cpu_speed();
+ set_speed_proc = dfs_set_cpu_speed;
+ get_speed_proc = dfs_get_cpu_speed;
+
+ return 0;
+}
+
+static int pmac_cpufreq_init_750FX(struct device_node *cpunode)
+{
+ struct device_node *volt_gpio_np;
+ u32 pvr;
+ const u32 *value;
+
+ if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
+ return 1;
+
+ hi_freq = cur_freq;
+ value = of_get_property(cpunode, "reduced-clock-frequency", NULL);
+ if (!value)
+ return 1;
+ low_freq = (*value) / 1000;
+
+ volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select");
+ if (volt_gpio_np)
+ voltage_gpio = read_gpio(volt_gpio_np);
+
+ pvr = mfspr(SPRN_PVR);
+ has_cpu_l2lve = !((pvr & 0xf00) == 0x100);
+
+ set_speed_proc = cpu_750fx_cpu_speed;
+ get_speed_proc = cpu_750fx_get_cpu_speed;
+ cur_freq = cpu_750fx_get_cpu_speed();
+
+ return 0;
+}
+
+/* Currently, we support the following machines:
+ *
+ * - Titanium PowerBook 1Ghz (PMU based, 667Mhz & 1Ghz)
+ * - Titanium PowerBook 800 (PMU based, 667Mhz & 800Mhz)
+ * - Titanium PowerBook 400 (PMU based, 300Mhz & 400Mhz)
+ * - Titanium PowerBook 500 (PMU based, 300Mhz & 500Mhz)
+ * - iBook2 500/600 (PMU based, 400Mhz & 500/600Mhz)
+ * - iBook2 700 (CPU based, 400Mhz & 700Mhz, support low voltage)
+ * - Recent MacRISC3 laptops
+ * - All new machines with 7447A CPUs
+ */
+static int __init pmac_cpufreq_setup(void)
+{
+ struct device_node *cpunode;
+ const u32 *value;
+
+ if (strstr(cmd_line, "nocpufreq"))
+ return 0;
+
+ /* Assume only one CPU */
+ cpunode = of_find_node_by_type(NULL, "cpu");
+ if (!cpunode)
+ goto out;
+
+ /* Get current cpu clock freq */
+ value = of_get_property(cpunode, "clock-frequency", NULL);
+ if (!value)
+ goto out;
+ cur_freq = (*value) / 1000;
+ transition_latency = CPUFREQ_ETERNAL;
+
+ /* Check for 7447A based MacRISC3 */
+ if (of_machine_is_compatible("MacRISC3") &&
+ of_get_property(cpunode, "dynamic-power-step", NULL) &&
+ PVR_VER(mfspr(SPRN_PVR)) == 0x8003) {
+ pmac_cpufreq_init_7447A(cpunode);
+ transition_latency = 8000000;
+ /* Check for other MacRISC3 machines */
+ } else if (of_machine_is_compatible("PowerBook3,4") ||
+ of_machine_is_compatible("PowerBook3,5") ||
+ of_machine_is_compatible("MacRISC3")) {
+ pmac_cpufreq_init_MacRISC3(cpunode);
+ /* Else check for iBook2 500/600 */
+ } else if (of_machine_is_compatible("PowerBook4,1")) {
+ hi_freq = cur_freq;
+ low_freq = 400000;
+ set_speed_proc = pmu_set_cpu_speed;
+ is_pmu_based = 1;
+ }
+ /* Else check for TiPb 550 */
+ else if (of_machine_is_compatible("PowerBook3,3") && cur_freq == 550000) {
+ hi_freq = cur_freq;
+ low_freq = 500000;
+ set_speed_proc = pmu_set_cpu_speed;
+ is_pmu_based = 1;
+ }
+ /* Else check for TiPb 400 & 500 */
+ else if (of_machine_is_compatible("PowerBook3,2")) {
+ /* We only know about the 400 MHz and the 500 MHz models;
+ * they both have 300 MHz as the low frequency
+ */
+ if (cur_freq < 350000 || cur_freq > 550000)
+ goto out;
+ hi_freq = cur_freq;
+ low_freq = 300000;
+ set_speed_proc = pmu_set_cpu_speed;
+ is_pmu_based = 1;
+ }
+ /* Else check for 750FX */
+ else if (PVR_VER(mfspr(SPRN_PVR)) == 0x7000)
+ pmac_cpufreq_init_750FX(cpunode);
+out:
+ of_node_put(cpunode);
+ if (set_speed_proc == NULL)
+ return -ENODEV;
+
+ pmac_cpu_freqs[CPUFREQ_LOW].frequency = low_freq;
+ pmac_cpu_freqs[CPUFREQ_HIGH].frequency = hi_freq;
+ ppc_proc_freq = cur_freq * 1000ul;
+
+ printk(KERN_INFO "Registering PowerMac CPU frequency driver\n");
+ printk(KERN_INFO "Low: %d Mhz, High: %d Mhz, Boot: %d Mhz\n",
+ low_freq/1000, hi_freq/1000, cur_freq/1000);
+
+ return cpufreq_register_driver(&pmac_cpufreq_driver);
+}
+
+module_init(pmac_cpufreq_setup);
+
diff --git a/drivers/cpufreq/pmac64-cpufreq.c b/drivers/cpufreq/pmac64-cpufreq.c
new file mode 100644
index 000000000000..7ba423431cfe
--- /dev/null
+++ b/drivers/cpufreq/pmac64-cpufreq.c
@@ -0,0 +1,746 @@
+/*
+ * Copyright (C) 2002 - 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
+ * and Markus Demleitner <msdemlei@cl.uni-heidelberg.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This driver adds basic cpufreq support for SMU & 970FX based G5 Macs,
+ * that is, the iMac G5 and the latest single-CPU desktops.
+ */
+
+#undef DEBUG
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/sched.h>
+#include <linux/cpufreq.h>
+#include <linux/init.h>
+#include <linux/completion.h>
+#include <linux/mutex.h>
+#include <asm/prom.h>
+#include <asm/machdep.h>
+#include <asm/irq.h>
+#include <asm/sections.h>
+#include <asm/cputable.h>
+#include <asm/time.h>
+#include <asm/smu.h>
+#include <asm/pmac_pfunc.h>
+
+#define DBG(fmt...) pr_debug(fmt)
+
+/* see 970FX user manual */
+
+#define SCOM_PCR 0x0aa001 /* PCR scom addr */
+
+#define PCR_HILO_SELECT 0x80000000U /* 1 = PCR, 0 = PCRH */
+#define PCR_SPEED_FULL 0x00000000U /* 1:1 speed value */
+#define PCR_SPEED_HALF 0x00020000U /* 1:2 speed value */
+#define PCR_SPEED_QUARTER 0x00040000U /* 1:4 speed value */
+#define PCR_SPEED_MASK 0x000e0000U /* speed mask */
+#define PCR_SPEED_SHIFT 17
+#define PCR_FREQ_REQ_VALID 0x00010000U /* freq request valid */
+#define PCR_VOLT_REQ_VALID 0x00008000U /* volt request valid */
+#define PCR_TARGET_TIME_MASK 0x00006000U /* target time */
+#define PCR_STATLAT_MASK 0x00001f00U /* STATLAT value */
+#define PCR_SNOOPLAT_MASK 0x000000f0U /* SNOOPLAT value */
+#define PCR_SNOOPACC_MASK 0x0000000fU /* SNOOPACC value */
+
+#define SCOM_PSR 0x408001 /* PSR scom addr */
+/* warning: PSR is a 64-bit register */
+#define PSR_CMD_RECEIVED 0x2000000000000000U /* command received */
+#define PSR_CMD_COMPLETED 0x1000000000000000U /* command completed */
+#define PSR_CUR_SPEED_MASK 0x0300000000000000U /* current speed */
+#define PSR_CUR_SPEED_SHIFT (56)
+
+/*
+ * The G5 only supports two frequencies (Quarter speed is not supported)
+ */
+#define CPUFREQ_HIGH 0
+#define CPUFREQ_LOW 1
+
+static struct cpufreq_frequency_table g5_cpu_freqs[] = {
+ {CPUFREQ_HIGH, 0},
+ {CPUFREQ_LOW, 0},
+ {0, CPUFREQ_TABLE_END},
+};
+
+static struct freq_attr* g5_cpu_freqs_attr[] = {
+ &cpufreq_freq_attr_scaling_available_freqs,
+ NULL,
+};
+
+/* Power mode data is an array of the 32-bit PCR values to use for
+ * the various frequencies, retrieved from the device-tree
+ */
+static int g5_pmode_cur;
+
+static void (*g5_switch_volt)(int speed_mode);
+static int (*g5_switch_freq)(int speed_mode);
+static int (*g5_query_freq)(void);
+
+static DEFINE_MUTEX(g5_switch_mutex);
+
+static unsigned long transition_latency;
+
+#ifdef CONFIG_PMAC_SMU
+
+static const u32 *g5_pmode_data;
+static int g5_pmode_max;
+
+static struct smu_sdbp_fvt *g5_fvt_table; /* table of op. points */
+static int g5_fvt_count; /* number of op. points */
+static int g5_fvt_cur; /* current op. point */
+
+/*
+ * SMU based voltage switching for Neo2 platforms
+ */
+
+static void g5_smu_switch_volt(int speed_mode)
+{
+ struct smu_simple_cmd cmd;
+
+ DECLARE_COMPLETION_ONSTACK(comp);
+ smu_queue_simple(&cmd, SMU_CMD_POWER_COMMAND, 8, smu_done_complete,
+ &comp, 'V', 'S', 'L', 'E', 'W',
+ 0xff, g5_fvt_cur+1, speed_mode);
+ wait_for_completion(&comp);
+}
+
+/*
+ * Platform function based voltage/vdnap switching for Neo2
+ */
+
+static struct pmf_function *pfunc_set_vdnap0;
+static struct pmf_function *pfunc_vdnap0_complete;
+
+static void g5_vdnap_switch_volt(int speed_mode)
+{
+ struct pmf_args args;
+ u32 slew, done = 0;
+ unsigned long timeout;
+
+ slew = (speed_mode == CPUFREQ_LOW) ? 1 : 0;
+ args.count = 1;
+ args.u[0].p = &slew;
+
+ pmf_call_one(pfunc_set_vdnap0, &args);
+
+ /* It's an irq GPIO so we should be able to just block here,
+ * I'll do that later after I've properly tested the IRQ code for
+ * platform functions
+ */
+ timeout = jiffies + HZ/10;
+ while(!time_after(jiffies, timeout)) {
+ args.count = 1;
+ args.u[0].p = &done;
+ pmf_call_one(pfunc_vdnap0_complete, &args);
+ if (done)
+ break;
+ msleep(1);
+ }
+ if (done == 0)
+ printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n");
+}
+
+
+/*
+ * SCOM based frequency switching for 970FX rev3
+ */
+static int g5_scom_switch_freq(int speed_mode)
+{
+ unsigned long flags;
+ int to;
+
+ /* If frequency is going up, first ramp up the voltage */
+ if (speed_mode < g5_pmode_cur)
+ g5_switch_volt(speed_mode);
+
+ local_irq_save(flags);
+
+ /* Clear PCR high */
+ scom970_write(SCOM_PCR, 0);
+ /* Clear PCR low */
+ scom970_write(SCOM_PCR, PCR_HILO_SELECT | 0);
+ /* Set PCR low */
+ scom970_write(SCOM_PCR, PCR_HILO_SELECT |
+ g5_pmode_data[speed_mode]);
+
+ /* Wait for completion */
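+ /* (done when the PSR shows no pending command and the reported speed
+ * matches the request, or the completed bit is set) */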
+ for (to = 0; to < 10; to++) {
+ unsigned long psr = scom970_read(SCOM_PSR);
+
+ if ((psr & PSR_CMD_RECEIVED) == 0 &&
+ (((psr >> PSR_CUR_SPEED_SHIFT) ^
+ (g5_pmode_data[speed_mode] >> PCR_SPEED_SHIFT)) & 0x3)
+ == 0)
+ break;
+ if (psr & PSR_CMD_COMPLETED)
+ break;
+ udelay(100);
+ }
+
+ local_irq_restore(flags);
+
+ /* If frequency is going down, last ramp the voltage */
+ if (speed_mode > g5_pmode_cur)
+ g5_switch_volt(speed_mode);
+
+ g5_pmode_cur = speed_mode;
+ ppc_proc_freq = g5_cpu_freqs[speed_mode].frequency * 1000ul;
+
+ return 0;
+}
+
+static int g5_scom_query_freq(void)
+{
+ unsigned long psr = scom970_read(SCOM_PSR);
+ int i;
+
+ for (i = 0; i <= g5_pmode_max; i++)
+ if ((((psr >> PSR_CUR_SPEED_SHIFT) ^
+ (g5_pmode_data[i] >> PCR_SPEED_SHIFT)) & 0x3) == 0)
+ break;
+ return i;
+}
+
+/*
+ * Fake voltage switching for platforms with missing support
+ */
+
+static void g5_dummy_switch_volt(int speed_mode)
+{
+}
+
+#endif /* CONFIG_PMAC_SMU */
+
+/*
+ * Platform function based voltage switching for PowerMac7,2 & 7,3
+ */
+
+static struct pmf_function *pfunc_cpu0_volt_high;
+static struct pmf_function *pfunc_cpu0_volt_low;
+static struct pmf_function *pfunc_cpu1_volt_high;
+static struct pmf_function *pfunc_cpu1_volt_low;
+
+static void g5_pfunc_switch_volt(int speed_mode)
+{
+ if (speed_mode == CPUFREQ_HIGH) {
+ if (pfunc_cpu0_volt_high)
+ pmf_call_one(pfunc_cpu0_volt_high, NULL);
+ if (pfunc_cpu1_volt_high)
+ pmf_call_one(pfunc_cpu1_volt_high, NULL);
+ } else {
+ if (pfunc_cpu0_volt_low)
+ pmf_call_one(pfunc_cpu0_volt_low, NULL);
+ if (pfunc_cpu1_volt_low)
+ pmf_call_one(pfunc_cpu1_volt_low, NULL);
+ }
+ msleep(10); /* should be faster, to fix */
+}
+
+/*
+ * Platform function based frequency switching for PowerMac7,2 & 7,3
+ */
+
+static struct pmf_function *pfunc_cpu_setfreq_high;
+static struct pmf_function *pfunc_cpu_setfreq_low;
+static struct pmf_function *pfunc_cpu_getfreq;
+static struct pmf_function *pfunc_slewing_done;
+
+static int g5_pfunc_switch_freq(int speed_mode)
+{
+ struct pmf_args args;
+ u32 done = 0;
+ unsigned long timeout;
+ int rc;
+
+ DBG("g5_pfunc_switch_freq(%d)\n", speed_mode);
+
+ /* If frequency is going up, first ramp up the voltage */
+ if (speed_mode < g5_pmode_cur)
+ g5_switch_volt(speed_mode);
+
+ /* Do it */
+ if (speed_mode == CPUFREQ_HIGH)
+ rc = pmf_call_one(pfunc_cpu_setfreq_high, NULL);
+ else
+ rc = pmf_call_one(pfunc_cpu_setfreq_low, NULL);
+
+ if (rc)
+ printk(KERN_WARNING "cpufreq: pfunc switch error %d\n", rc);
+
+ /* It's an irq GPIO so we should be able to just block here,
+ * I'll do that later after I've properly tested the IRQ code for
+ * platform functions
+ */
+ timeout = jiffies + HZ/10;
+ while(!time_after(jiffies, timeout)) {
+ args.count = 1;
+ args.u[0].p = &done;
+ pmf_call_one(pfunc_slewing_done, &args);
+ if (done)
+ break;
+ msleep(1);
+ }
+ if (done == 0)
+ printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n");
+
+ /* If frequency is going down, last ramp the voltage */
+ if (speed_mode > g5_pmode_cur)
+ g5_switch_volt(speed_mode);
+
+ g5_pmode_cur = speed_mode;
+ ppc_proc_freq = g5_cpu_freqs[speed_mode].frequency * 1000ul;
+
+ return 0;
+}
+
+static int g5_pfunc_query_freq(void)
+{
+ struct pmf_args args;
+ u32 val = 0;
+
+ args.count = 1;
+ args.u[0].p = &val;
+ pmf_call_one(pfunc_cpu_getfreq, &args);
+ return val ? CPUFREQ_HIGH : CPUFREQ_LOW;
+}
+
+
+/*
+ * Common interface to the cpufreq core
+ */
+
+static int g5_cpufreq_verify(struct cpufreq_policy *policy)
+{
+ return cpufreq_frequency_table_verify(policy, g5_cpu_freqs);
+}
+
+static int g5_cpufreq_target(struct cpufreq_policy *policy,
+ unsigned int target_freq, unsigned int relation)
+{
+ unsigned int newstate = 0;
+ struct cpufreq_freqs freqs;
+ int rc;
+
+ if (cpufreq_frequency_table_target(policy, g5_cpu_freqs,
+ target_freq, relation, &newstate))
+ return -EINVAL;
+
+ if (g5_pmode_cur == newstate)
+ return 0;
+
+ mutex_lock(&g5_switch_mutex);
+
+ freqs.old = g5_cpu_freqs[g5_pmode_cur].frequency;
+ freqs.new = g5_cpu_freqs[newstate].frequency;
+
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+ rc = g5_switch_freq(newstate);
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+
+ mutex_unlock(&g5_switch_mutex);
+
+ return rc;
+}
+
+static unsigned int g5_cpufreq_get_speed(unsigned int cpu)
+{
+ return g5_cpu_freqs[g5_pmode_cur].frequency;
+}
+
+static int g5_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+ policy->cpuinfo.transition_latency = transition_latency;
+ policy->cur = g5_cpu_freqs[g5_query_freq()].frequency;
+ /* secondary CPUs are tied to the primary one by the
+ * cpufreq core if in the secondary policy we tell it that
+ * it actually must be one policy together with all others. */
+ cpumask_copy(policy->cpus, cpu_online_mask);
+ cpufreq_frequency_table_get_attr(g5_cpu_freqs, policy->cpu);
+
+ return cpufreq_frequency_table_cpuinfo(policy,
+ g5_cpu_freqs);
+}
+
+
+static struct cpufreq_driver g5_cpufreq_driver = {
+ .name = "powermac",
+ .owner = THIS_MODULE,
+ .flags = CPUFREQ_CONST_LOOPS,
+ .init = g5_cpufreq_cpu_init,
+ .verify = g5_cpufreq_verify,
+ .target = g5_cpufreq_target,
+ .get = g5_cpufreq_get_speed,
+ .attr = g5_cpu_freqs_attr,
+};
+
+
+#ifdef CONFIG_PMAC_SMU
+
+static int __init g5_neo2_cpufreq_init(struct device_node *cpus)
+{
+ struct device_node *cpunode;
+ unsigned int psize, ssize;
+ unsigned long max_freq;
+ char *freq_method, *volt_method;
+ const u32 *valp;
+ u32 pvr_hi;
+ int use_volts_vdnap = 0;
+ int use_volts_smu = 0;
+ int rc = -ENODEV;
+
+ /* Check supported platforms */
+ if (of_machine_is_compatible("PowerMac8,1") ||
+ of_machine_is_compatible("PowerMac8,2") ||
+ of_machine_is_compatible("PowerMac9,1"))
+ use_volts_smu = 1;
+ else if (of_machine_is_compatible("PowerMac11,2"))
+ use_volts_vdnap = 1;
+ else
+ return -ENODEV;
+
+ /* Get first CPU node */
+ for (cpunode = NULL;
+ (cpunode = of_get_next_child(cpus, cpunode)) != NULL;) {
+ const u32 *reg = of_get_property(cpunode, "reg", NULL);
+ if (reg == NULL || (*reg) != 0)
+ continue;
+ if (!strcmp(cpunode->type, "cpu"))
+ break;
+ }
+ if (cpunode == NULL) {
+ printk(KERN_ERR "cpufreq: Can't find any CPU 0 node\n");
+ return -ENODEV;
+ }
+
+ /* Check 970FX for now */
+ valp = of_get_property(cpunode, "cpu-version", NULL);
+ if (!valp) {
+ DBG("No cpu-version property !\n");
+ goto bail_noprops;
+ }
+ pvr_hi = (*valp) >> 16;
+ if (pvr_hi != 0x3c && pvr_hi != 0x44) {
+ printk(KERN_ERR "cpufreq: Unsupported CPU version\n");
+ goto bail_noprops;
+ }
+
+ /* Look for the powertune data in the device-tree */
+ g5_pmode_data = of_get_property(cpunode, "power-mode-data",&psize);
+ if (!g5_pmode_data) {
+ DBG("No power-mode-data !\n");
+ goto bail_noprops;
+ }
+ g5_pmode_max = psize / sizeof(u32) - 1;
+
+ if (use_volts_smu) {
+ const struct smu_sdbp_header *shdr;
+
+ /* Look for the FVT table */
+ shdr = smu_get_sdb_partition(SMU_SDB_FVT_ID, NULL);
+ if (!shdr)
+ goto bail_noprops;
+ g5_fvt_table = (struct smu_sdbp_fvt *)&shdr[1];
+ ssize = (shdr->len * sizeof(u32)) -
+ sizeof(struct smu_sdbp_header);
+ g5_fvt_count = ssize / sizeof(struct smu_sdbp_fvt);
+ g5_fvt_cur = 0;
+
+ /* Sanity checking */
+ if (g5_fvt_count < 1 || g5_pmode_max < 1)
+ goto bail_noprops;
+
+ g5_switch_volt = g5_smu_switch_volt;
+ volt_method = "SMU";
+ } else if (use_volts_vdnap) {
+ struct device_node *root;
+
+ root = of_find_node_by_path("/");
+ if (root == NULL) {
+ printk(KERN_ERR "cpufreq: Can't find root of "
+ "device tree\n");
+ goto bail_noprops;
+ }
+ pfunc_set_vdnap0 = pmf_find_function(root, "set-vdnap0");
+ pfunc_vdnap0_complete =
+ pmf_find_function(root, "slewing-done");
+ if (pfunc_set_vdnap0 == NULL ||
+ pfunc_vdnap0_complete == NULL) {
+ printk(KERN_ERR "cpufreq: Can't find required "
+ "platform function\n");
+ goto bail_noprops;
+ }
+
+ g5_switch_volt = g5_vdnap_switch_volt;
+ volt_method = "GPIO";
+ } else {
+ g5_switch_volt = g5_dummy_switch_volt;
+ volt_method = "none";
+ }
+
+ /*
+ * From what I see, clock-frequency is always the maximal frequency.
+ * The current driver can not slew sysclk yet, so we really only deal
+ * with powertune steps for now. We also only implement full freq and
+ * half freq in this version. So far, I haven't yet seen a machine
+ * supporting anything else.
+ */
+ valp = of_get_property(cpunode, "clock-frequency", NULL);
+ if (!valp)
+ return -ENODEV;
+ max_freq = (*valp)/1000;
+ g5_cpu_freqs[0].frequency = max_freq;
+ g5_cpu_freqs[1].frequency = max_freq/2;
+
+ /* Set callbacks */
+ transition_latency = 12000;
+ g5_switch_freq = g5_scom_switch_freq;
+ g5_query_freq = g5_scom_query_freq;
+ freq_method = "SCOM";
+
+ /* Force apply current frequency to make sure everything is in
+ * sync (voltage is right for example). Firmware may leave us with
+ * a strange setting ...
+ */
+ g5_switch_volt(CPUFREQ_HIGH);
+ msleep(10);
+ g5_pmode_cur = -1;
+ g5_switch_freq(g5_query_freq());
+
+ printk(KERN_INFO "Registering G5 CPU frequency driver\n");
+ printk(KERN_INFO "Frequency method: %s, Voltage method: %s\n",
+ freq_method, volt_method);
+ printk(KERN_INFO "Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
+ g5_cpu_freqs[1].frequency/1000,
+ g5_cpu_freqs[0].frequency/1000,
+ g5_cpu_freqs[g5_pmode_cur].frequency/1000);
+
+ rc = cpufreq_register_driver(&g5_cpufreq_driver);
+
+ /* We keep the CPU node on hold... hopefully, Apple G5 don't have
+ * hotplug CPU with a dynamic device-tree ...
+ */
+ return rc;
+
+ bail_noprops:
+ of_node_put(cpunode);
+
+ return rc;
+}
+
+#endif /* CONFIG_PMAC_SMU */
+
+
+static int __init g5_pm72_cpufreq_init(struct device_node *cpus)
+{
+ struct device_node *cpuid = NULL, *hwclock = NULL, *cpunode = NULL;
+ const u8 *eeprom = NULL;
+ const u32 *valp;
+ u64 max_freq, min_freq, ih, il;
+ int has_volt = 1, rc = 0;
+
+ DBG("cpufreq: Initializing for PowerMac7,2, PowerMac7,3 and"
+ " RackMac3,1...\n");
+
+ /* Get first CPU node */
+ for (cpunode = NULL;
+ (cpunode = of_get_next_child(cpus, cpunode)) != NULL;) {
+ if (!strcmp(cpunode->type, "cpu"))
+ break;
+ }
+ if (cpunode == NULL) {
+ printk(KERN_ERR "cpufreq: Can't find any CPU node\n");
+ return -ENODEV;
+ }
+
+ /* Lookup the cpuid eeprom node */
+ cpuid = of_find_node_by_path("/u3@0,f8000000/i2c@f8001000/cpuid@a0");
+ if (cpuid != NULL)
+ eeprom = of_get_property(cpuid, "cpuid", NULL);
+ if (eeprom == NULL) {
+ printk(KERN_ERR "cpufreq: Can't find cpuid EEPROM !\n");
+ rc = -ENODEV;
+ goto bail;
+ }
+
+ /* Lookup the i2c hwclock */
+ for (hwclock = NULL;
+ (hwclock = of_find_node_by_name(hwclock, "i2c-hwclock")) != NULL;){
+ const char *loc = of_get_property(hwclock,
+ "hwctrl-location", NULL);
+ if (loc == NULL)
+ continue;
+ if (strcmp(loc, "CPU CLOCK"))
+ continue;
+ if (!of_get_property(hwclock, "platform-get-frequency", NULL))
+ continue;
+ break;
+ }
+ if (hwclock == NULL) {
+ printk(KERN_ERR "cpufreq: Can't find i2c clock chip !\n");
+ rc = -ENODEV;
+ goto bail;
+ }
+
+ DBG("cpufreq: i2c clock chip found: %s\n", hwclock->full_name);
+
+ /* Now get all the platform functions */
+ pfunc_cpu_getfreq =
+ pmf_find_function(hwclock, "get-frequency");
+ pfunc_cpu_setfreq_high =
+ pmf_find_function(hwclock, "set-frequency-high");
+ pfunc_cpu_setfreq_low =
+ pmf_find_function(hwclock, "set-frequency-low");
+ pfunc_slewing_done =
+ pmf_find_function(hwclock, "slewing-done");
+ pfunc_cpu0_volt_high =
+ pmf_find_function(hwclock, "set-voltage-high-0");
+ pfunc_cpu0_volt_low =
+ pmf_find_function(hwclock, "set-voltage-low-0");
+ pfunc_cpu1_volt_high =
+ pmf_find_function(hwclock, "set-voltage-high-1");
+ pfunc_cpu1_volt_low =
+ pmf_find_function(hwclock, "set-voltage-low-1");
+
+ /* Check we have minimum requirements */
+ if (pfunc_cpu_getfreq == NULL || pfunc_cpu_setfreq_high == NULL ||
+ pfunc_cpu_setfreq_low == NULL || pfunc_slewing_done == NULL) {
+ printk(KERN_ERR "cpufreq: Can't find platform functions !\n");
+ rc = -ENODEV;
+ goto bail;
+ }
+
+ /* Check that we have complete sets */
+ if (pfunc_cpu0_volt_high == NULL || pfunc_cpu0_volt_low == NULL) {
+ pmf_put_function(pfunc_cpu0_volt_high);
+ pmf_put_function(pfunc_cpu0_volt_low);
+ pfunc_cpu0_volt_high = pfunc_cpu0_volt_low = NULL;
+ has_volt = 0;
+ }
+ if (!has_volt ||
+ pfunc_cpu1_volt_high == NULL || pfunc_cpu1_volt_low == NULL) {
+ pmf_put_function(pfunc_cpu1_volt_high);
+ pmf_put_function(pfunc_cpu1_volt_low);
+ pfunc_cpu1_volt_high = pfunc_cpu1_volt_low = NULL;
+ }
+
+ /* Note: The device tree also contains a "platform-set-values"
+ * function for which I haven't quite figured out the usage. It
+ * might have to be called on init and/or wakeup, I'm not too sure
+ * but things seem to work fine without it so far ...
+ */
+
+ /* Get max frequency from device-tree */
+ valp = of_get_property(cpunode, "clock-frequency", NULL);
+ if (!valp) {
+ printk(KERN_ERR "cpufreq: Can't find CPU frequency !\n");
+ rc = -ENODEV;
+ goto bail;
+ }
+
+ max_freq = (*valp)/1000;
+
+ /* Now calculate reduced frequency by using the cpuid input freq
+ * ratio. This requires 64 bits math unless we are willing to lose
+ * some precision
+ */
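+ /* the high/low ratio words live at offsets 0x10 and 0x20 of the cpuid EEPROM */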
+ ih = *((u32 *)(eeprom + 0x10));
+ il = *((u32 *)(eeprom + 0x20));
+
+ /* Check for machines with no useful settings */
+ if (il == ih) {
+ printk(KERN_WARNING "cpufreq: No low frequency mode available"
+ " on this model !\n");
+ rc = -ENODEV;
+ goto bail;
+ }
+
+ min_freq = 0;
+ if (ih != 0 && il != 0)
+ min_freq = (max_freq * il) / ih;
+
+ /* Sanity check */
+ if (min_freq >= max_freq || min_freq < 1000) {
+ printk(KERN_ERR "cpufreq: Can't calculate low frequency !\n");
+ rc = -ENXIO;
+ goto bail;
+ }
+ g5_cpu_freqs[0].frequency = max_freq;
+ g5_cpu_freqs[1].frequency = min_freq;
+
+ /* Set callbacks */
+ transition_latency = CPUFREQ_ETERNAL;
+ g5_switch_volt = g5_pfunc_switch_volt;
+ g5_switch_freq = g5_pfunc_switch_freq;
+ g5_query_freq = g5_pfunc_query_freq;
+
+ /* Force apply current frequency to make sure everything is in
+ * sync (voltage is right for example). Firmware may leave us with
+ * a strange setting ...
+ */
+ g5_switch_volt(CPUFREQ_HIGH);
+ msleep(10);
+ g5_pmode_cur = -1;
+ g5_switch_freq(g5_query_freq());
+
+ printk(KERN_INFO "Registering G5 CPU frequency driver\n");
+ printk(KERN_INFO "Frequency method: i2c/pfunc, "
+ "Voltage method: %s\n", has_volt ? "i2c/pfunc" : "none");
+ printk(KERN_INFO "Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
+ g5_cpu_freqs[1].frequency/1000,
+ g5_cpu_freqs[0].frequency/1000,
+ g5_cpu_freqs[g5_pmode_cur].frequency/1000);
+
+ rc = cpufreq_register_driver(&g5_cpufreq_driver);
+ bail:
+ if (rc != 0) {
+ pmf_put_function(pfunc_cpu_getfreq);
+ pmf_put_function(pfunc_cpu_setfreq_high);
+ pmf_put_function(pfunc_cpu_setfreq_low);
+ pmf_put_function(pfunc_slewing_done);
+ pmf_put_function(pfunc_cpu0_volt_high);
+ pmf_put_function(pfunc_cpu0_volt_low);
+ pmf_put_function(pfunc_cpu1_volt_high);
+ pmf_put_function(pfunc_cpu1_volt_low);
+ }
+ of_node_put(hwclock);
+ of_node_put(cpuid);
+ of_node_put(cpunode);
+
+ return rc;
+}
+
+static int __init g5_cpufreq_init(void)
+{
+ struct device_node *cpus;
+ int rc = 0;
+
+ cpus = of_find_node_by_path("/cpus");
+ if (cpus == NULL) {
+ DBG("No /cpus node !\n");
+ return -ENODEV;
+ }
+
+ if (of_machine_is_compatible("PowerMac7,2") ||
+ of_machine_is_compatible("PowerMac7,3") ||
+ of_machine_is_compatible("RackMac3,1"))
+ rc = g5_pm72_cpufreq_init(cpus);
+#ifdef CONFIG_PMAC_SMU
+ else
+ rc = g5_neo2_cpufreq_init(cpus);
+#endif /* CONFIG_PMAC_SMU */
+
+ of_node_put(cpus);
+ return rc;
+}
+
+module_init(g5_cpufreq_init);
+
+
+MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/powernow-k6.c b/drivers/cpufreq/powernow-k6.c
index ea0222a45b7b..ea8e10382ec5 100644
--- a/drivers/cpufreq/powernow-k6.c
+++ b/drivers/cpufreq/powernow-k6.c
@@ -58,7 +58,7 @@ static int powernow_k6_get_cpu_multiplier(void)
msrval = POWERNOW_IOPORT + 0x0;
wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */
- return clock_ratio[(invalue >> 5)&7].index;
+ return clock_ratio[(invalue >> 5)&7].driver_data;
}
@@ -75,13 +75,13 @@ static void powernow_k6_set_state(struct cpufreq_policy *policy,
unsigned long msrval;
struct cpufreq_freqs freqs;
- if (clock_ratio[best_i].index > max_multiplier) {
+ if (clock_ratio[best_i].driver_data > max_multiplier) {
printk(KERN_ERR PFX "invalid target frequency\n");
return;
}
freqs.old = busfreq * powernow_k6_get_cpu_multiplier();
- freqs.new = busfreq * clock_ratio[best_i].index;
+ freqs.new = busfreq * clock_ratio[best_i].driver_data;
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
@@ -156,7 +156,7 @@ static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
/* table init */
for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
- f = clock_ratio[i].index;
+ f = clock_ratio[i].driver_data;
if (f > max_multiplier)
clock_ratio[i].frequency = CPUFREQ_ENTRY_INVALID;
else
diff --git a/drivers/cpufreq/powernow-k7.c b/drivers/cpufreq/powernow-k7.c
index 53888dacbe58..b9f80b713fda 100644
--- a/drivers/cpufreq/powernow-k7.c
+++ b/drivers/cpufreq/powernow-k7.c
@@ -186,7 +186,7 @@ static int get_ranges(unsigned char *pst)
fid = *pst++;
powernow_table[j].frequency = (fsb * fid_codes[fid]) / 10;
- powernow_table[j].index = fid; /* lower 8 bits */
+ powernow_table[j].driver_data = fid; /* lower 8 bits */
speed = powernow_table[j].frequency;
@@ -203,7 +203,7 @@ static int get_ranges(unsigned char *pst)
maximum_speed = speed;
vid = *pst++;
- powernow_table[j].index |= (vid << 8); /* upper 8 bits */
+ powernow_table[j].driver_data |= (vid << 8); /* upper 8 bits */
pr_debug(" FID: 0x%x (%d.%dx [%dMHz]) "
"VID: 0x%x (%d.%03dV)\n", fid, fid_codes[fid] / 10,
@@ -212,7 +212,7 @@ static int get_ranges(unsigned char *pst)
mobile_vid_table[vid]%1000);
}
powernow_table[number_scales].frequency = CPUFREQ_TABLE_END;
- powernow_table[number_scales].index = 0;
+ powernow_table[number_scales].driver_data = 0;
return 0;
}
@@ -260,8 +260,8 @@ static void change_speed(struct cpufreq_policy *policy, unsigned int index)
* vid are the upper 8 bits.
*/
- fid = powernow_table[index].index & 0xFF;
- vid = (powernow_table[index].index & 0xFF00) >> 8;
+ fid = powernow_table[index].driver_data & 0xFF;
+ vid = (powernow_table[index].driver_data & 0xFF00) >> 8;
rdmsrl(MSR_K7_FID_VID_STATUS, fidvidstatus.val);
cfid = fidvidstatus.bits.CFID;
@@ -373,8 +373,8 @@ static int powernow_acpi_init(void)
fid = pc.bits.fid;
powernow_table[i].frequency = fsb * fid_codes[fid] / 10;
- powernow_table[i].index = fid; /* lower 8 bits */
- powernow_table[i].index |= (vid << 8); /* upper 8 bits */
+ powernow_table[i].driver_data = fid; /* lower 8 bits */
+ powernow_table[i].driver_data |= (vid << 8); /* upper 8 bits */
speed = powernow_table[i].frequency;
speed_mhz = speed / 1000;
@@ -417,7 +417,7 @@ static int powernow_acpi_init(void)
}
powernow_table[i].frequency = CPUFREQ_TABLE_END;
- powernow_table[i].index = 0;
+ powernow_table[i].driver_data = 0;
/* notify BIOS that we exist */
acpi_processor_notify_smm(THIS_MODULE);
diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c
index b828efe4b2f8..78f018f2a5de 100644
--- a/drivers/cpufreq/powernow-k8.c
+++ b/drivers/cpufreq/powernow-k8.c
@@ -584,9 +584,9 @@ static void print_basics(struct powernow_k8_data *data)
CPUFREQ_ENTRY_INVALID) {
printk(KERN_INFO PFX
"fid 0x%x (%d MHz), vid 0x%x\n",
- data->powernow_table[j].index & 0xff,
+ data->powernow_table[j].driver_data & 0xff,
data->powernow_table[j].frequency/1000,
- data->powernow_table[j].index >> 8);
+ data->powernow_table[j].driver_data >> 8);
}
}
if (data->batps)
@@ -632,13 +632,13 @@ static int fill_powernow_table(struct powernow_k8_data *data,
for (j = 0; j < data->numps; j++) {
int freq;
- powernow_table[j].index = pst[j].fid; /* lower 8 bits */
- powernow_table[j].index |= (pst[j].vid << 8); /* upper 8 bits */
+ powernow_table[j].driver_data = pst[j].fid; /* lower 8 bits */
+ powernow_table[j].driver_data |= (pst[j].vid << 8); /* upper 8 bits */
freq = find_khz_freq_from_fid(pst[j].fid);
powernow_table[j].frequency = freq;
}
powernow_table[data->numps].frequency = CPUFREQ_TABLE_END;
- powernow_table[data->numps].index = 0;
+ powernow_table[data->numps].driver_data = 0;
if (query_current_values_with_pending_wait(data)) {
kfree(powernow_table);
@@ -810,7 +810,7 @@ static int powernow_k8_cpu_init_acpi(struct powernow_k8_data *data)
powernow_table[data->acpi_data.state_count].frequency =
CPUFREQ_TABLE_END;
- powernow_table[data->acpi_data.state_count].index = 0;
+ powernow_table[data->acpi_data.state_count].driver_data = 0;
data->powernow_table = powernow_table;
if (cpumask_first(cpu_core_mask(data->cpu)) == data->cpu)
@@ -865,7 +865,7 @@ static int fill_powernow_table_fidvid(struct powernow_k8_data *data,
pr_debug(" %d : fid 0x%x, vid 0x%x\n", i, fid, vid);
index = fid | (vid<<8);
- powernow_table[i].index = index;
+ powernow_table[i].driver_data = index;
freq = find_khz_freq_from_fid(fid);
powernow_table[i].frequency = freq;
@@ -941,8 +941,8 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data,
* the cpufreq frequency table in find_psb_table, vid
* are the upper 8 bits.
*/
- fid = data->powernow_table[index].index & 0xFF;
- vid = (data->powernow_table[index].index & 0xFF00) >> 8;
+ fid = data->powernow_table[index].driver_data & 0xFF;
+ vid = (data->powernow_table[index].driver_data & 0xFF00) >> 8;
pr_debug("table matched fid 0x%x, giving vid 0x%x\n", fid, vid);
@@ -967,9 +967,9 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data,
res = transition_fid_vid(data, fid, vid);
if (res)
- return res;
-
- freqs.new = find_khz_freq_from_fid(data->currfid);
+ freqs.new = freqs.old;
+ else
+ freqs.new = find_khz_freq_from_fid(data->currfid);
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
return res;
diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c b/drivers/cpufreq/ppc-corenet-cpufreq.c
new file mode 100644
index 000000000000..3cae4529f959
--- /dev/null
+++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
@@ -0,0 +1,380 @@
+/*
+ * Copyright 2013 Freescale Semiconductor, Inc.
+ *
+ * CPU Frequency Scaling driver for Freescale PowerPC corenet SoCs.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/clk.h>
+#include <linux/cpufreq.h>
+#include <linux/errno.h>
+#include <sysdev/fsl_soc.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/slab.h>
+#include <linux/smp.h>
+
+/**
+ * struct cpu_data - per CPU data struct
+ * @clk: the CPU's clock
+ * @parent: the device-tree node providing the CPU clock
+ * @table: frequency table
+ */
+struct cpu_data {
+ struct clk *clk;
+ struct device_node *parent;
+ struct cpufreq_frequency_table *table;
+};
+
+/**
+ * struct soc_data - SoC specific data
+ * @freq_mask: mask of disallowed frequencies
+ * @flag: unique flags
+ */
+struct soc_data {
+ u32 freq_mask[4];
+ u32 flag;
+};
+
+#define FREQ_MASK 1
+/* see hardware specification for the allowed frequencies */
+static const struct soc_data sdata[] = {
+ { /* used by p2041 and p3041 */
+ .freq_mask = {0x8, 0x8, 0x2, 0x2},
+ .flag = FREQ_MASK,
+ },
+ { /* used by p5020 */
+ .freq_mask = {0x8, 0x2},
+ .flag = FREQ_MASK,
+ },
+ { /* used by p4080, p5040 */
+ .freq_mask = {0},
+ .flag = 0,
+ },
+};
+
+/*
+ * the minimum allowed core frequency, in Hz
+ * for chassis v1.0, >= platform frequency
+ * for chassis v2.0, >= platform frequency / 2
+ */
+static u32 min_cpufreq;
+static const u32 *fmask;
+
+/* serialize frequency changes */
+static DEFINE_MUTEX(cpufreq_lock);
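+/* per-CPU driver data; CPUs sharing a cluster clock point at the same cpu_data */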
+static DEFINE_PER_CPU(struct cpu_data *, cpu_data);
+
+/* cpumask in a cluster */
+static DEFINE_PER_CPU(cpumask_var_t, cpu_mask);
+
+#ifndef CONFIG_SMP
+static inline const struct cpumask *cpu_core_mask(int cpu)
+{
+ return cpumask_of(0);
+}
+#endif
+
+static unsigned int corenet_cpufreq_get_speed(unsigned int cpu)
+{
+ struct cpu_data *data = per_cpu(cpu_data, cpu);
+
+ return clk_get_rate(data->clk) / 1000;
+}
+
+/* mark duplicated frequencies in the frequency table as invalid */
+static void freq_table_redup(struct cpufreq_frequency_table *freq_table,
+ int count)
+{
+ int i, j;
+
+ for (i = 1; i < count; i++) {
+ for (j = 0; j < i; j++) {
+ if (freq_table[j].frequency == CPUFREQ_ENTRY_INVALID ||
+ freq_table[j].frequency !=
+ freq_table[i].frequency)
+ continue;
+
+ freq_table[i].frequency = CPUFREQ_ENTRY_INVALID;
+ break;
+ }
+ }
+}
+
+/* sort the frequencies in the frequency table in descending order */
+static void freq_table_sort(struct cpufreq_frequency_table *freq_table,
+ int count)
+{
+ int i, j, ind;
+ unsigned int freq, max_freq;
+ struct cpufreq_frequency_table table;
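+
+ /* selection sort: move the largest remaining valid frequency into slot i */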
+ for (i = 0; i < count - 1; i++) {
+ max_freq = freq_table[i].frequency;
+ ind = i;
+ for (j = i + 1; j < count; j++) {
+ freq = freq_table[j].frequency;
+ if (freq == CPUFREQ_ENTRY_INVALID ||
+ freq <= max_freq)
+ continue;
+ ind = j;
+ max_freq = freq;
+ }
+
+ if (ind != i) {
+ /* exchange the frequencies */
+ table.driver_data = freq_table[i].driver_data;
+ table.frequency = freq_table[i].frequency;
+ freq_table[i].driver_data = freq_table[ind].driver_data;
+ freq_table[i].frequency = freq_table[ind].frequency;
+ freq_table[ind].driver_data = table.driver_data;
+ freq_table[ind].frequency = table.frequency;
+ }
+ }
+}
+
+static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+ struct device_node *np;
+ int i, count, ret;
+ u32 freq, mask;
+ struct clk *clk;
+ struct cpufreq_frequency_table *table;
+ struct cpu_data *data;
+ unsigned int cpu = policy->cpu;
+
+ np = of_get_cpu_node(cpu, NULL);
+ if (!np)
+ return -ENODEV;
+
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ pr_err("%s: no memory\n", __func__);
+ goto err_np;
+ }
+
+ data->clk = of_clk_get(np, 0);
+ if (IS_ERR(data->clk)) {
+ pr_err("%s: no clock information\n", __func__);
+ goto err_nomem2;
+ }
+
+ data->parent = of_parse_phandle(np, "clocks", 0);
+ if (!data->parent) {
+ pr_err("%s: could not get clock information\n", __func__);
+ goto err_nomem2;
+ }
+
+ count = of_property_count_strings(data->parent, "clock-names");
+ table = kcalloc(count + 1, sizeof(*table), GFP_KERNEL);
+ if (!table) {
+ pr_err("%s: no memory\n", __func__);
+ goto err_node;
+ }
+
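+ /* bits set in the SoC freq_mask disable the corresponding parent clock options */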
+ if (fmask)
+ mask = fmask[get_hard_smp_processor_id(cpu)];
+ else
+ mask = 0x0;
+
+ for (i = 0; i < count; i++) {
+ clk = of_clk_get(data->parent, i);
+ freq = clk_get_rate(clk);
+ /*
+ * the clock is valid if its frequency is not masked
+ * and not below the minimum allowed frequency.
+ */
+ if (freq < min_cpufreq || (mask & (1 << i)))
+ table[i].frequency = CPUFREQ_ENTRY_INVALID;
+ else
+ table[i].frequency = freq / 1000;
+ table[i].driver_data = i;
+ }
+ freq_table_redup(table, count);
+ freq_table_sort(table, count);
+ table[i].frequency = CPUFREQ_TABLE_END;
+
+ /* set the min and max frequency properly */
+ ret = cpufreq_frequency_table_cpuinfo(policy, table);
+ if (ret) {
+ pr_err("invalid frequency table: %d\n", ret);
+ goto err_nomem1;
+ }
+
+ data->table = table;
+ per_cpu(cpu_data, cpu) = data;
+
+ /* update ->cpus if we have a cluster, no harm if not */
+ cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
+ for_each_cpu(i, per_cpu(cpu_mask, cpu))
+ per_cpu(cpu_data, i) = data;
+
+ policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+ policy->cur = corenet_cpufreq_get_speed(policy->cpu);
+
+ cpufreq_frequency_table_get_attr(table, cpu);
+ of_node_put(np);
+
+ return 0;
+
+err_nomem1:
+ kfree(table);
+err_node:
+ of_node_put(data->parent);
+err_nomem2:
+ per_cpu(cpu_data, cpu) = NULL;
+ kfree(data);
+err_np:
+ of_node_put(np);
+
+ return -ENODEV;
+}
+
+static int __exit corenet_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+ struct cpu_data *data = per_cpu(cpu_data, policy->cpu);
+ unsigned int cpu;
+
+ cpufreq_frequency_table_put_attr(policy->cpu);
+ of_node_put(data->parent);
+ kfree(data->table);
+ kfree(data);
+
+ for_each_cpu(cpu, per_cpu(cpu_mask, policy->cpu))
+ per_cpu(cpu_data, cpu) = NULL;
+
+ return 0;
+}
+
+static int corenet_cpufreq_verify(struct cpufreq_policy *policy)
+{
+ struct cpufreq_frequency_table *table =
+ per_cpu(cpu_data, policy->cpu)->table;
+
+ return cpufreq_frequency_table_verify(policy, table);
+}
+
+static int corenet_cpufreq_target(struct cpufreq_policy *policy,
+ unsigned int target_freq, unsigned int relation)
+{
+ struct cpufreq_freqs freqs;
+ unsigned int new;
+ struct clk *parent;
+ int ret;
+ struct cpu_data *data = per_cpu(cpu_data, policy->cpu);
+
+ cpufreq_frequency_table_target(policy, data->table,
+ target_freq, relation, &new);
+
+ if (policy->cur == data->table[new].frequency)
+ return 0;
+
+ freqs.old = policy->cur;
+ freqs.new = data->table[new].frequency;
+
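+ /* the switch is performed by reparenting the CPU clock to the selected option */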
+ mutex_lock(&cpufreq_lock);
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+
+ parent = of_clk_get(data->parent, data->table[new].driver_data);
+ ret = clk_set_parent(data->clk, parent);
+ if (ret)
+ freqs.new = freqs.old;
+
+ cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+ mutex_unlock(&cpufreq_lock);
+
+ return ret;
+}
+
+static struct freq_attr *corenet_cpufreq_attr[] = {
+ &cpufreq_freq_attr_scaling_available_freqs,
+ NULL,
+};
+
+static struct cpufreq_driver ppc_corenet_cpufreq_driver = {
+ .name = "ppc_cpufreq",
+ .owner = THIS_MODULE,
+ .flags = CPUFREQ_CONST_LOOPS,
+ .init = corenet_cpufreq_cpu_init,
+ .exit = __exit_p(corenet_cpufreq_cpu_exit),
+ .verify = corenet_cpufreq_verify,
+ .target = corenet_cpufreq_target,
+ .get = corenet_cpufreq_get_speed,
+ .attr = corenet_cpufreq_attr,
+};
+
+static const struct of_device_id node_matches[] __initdata = {
+ { .compatible = "fsl,p2041-clockgen", .data = &sdata[0], },
+ { .compatible = "fsl,p3041-clockgen", .data = &sdata[0], },
+ { .compatible = "fsl,p5020-clockgen", .data = &sdata[1], },
+ { .compatible = "fsl,p4080-clockgen", .data = &sdata[2], },
+ { .compatible = "fsl,p5040-clockgen", .data = &sdata[2], },
+ { .compatible = "fsl,qoriq-clockgen-2.0", },
+ {}
+};
+
+static int __init ppc_corenet_cpufreq_init(void)
+{
+ int ret;
+ struct device_node *np;
+ const struct of_device_id *match;
+ const struct soc_data *data;
+ unsigned int cpu;
+
+ np = of_find_matching_node(NULL, node_matches);
+ if (!np)
+ return -ENODEV;
+
+ for_each_possible_cpu(cpu) {
+ if (!alloc_cpumask_var(&per_cpu(cpu_mask, cpu), GFP_KERNEL))
+ goto err_mask;
+ cpumask_copy(per_cpu(cpu_mask, cpu), cpu_core_mask(cpu));
+ }
+
+ match = of_match_node(node_matches, np);
+ data = match->data;
+ if (data) {
+ if (data->flag)
+ fmask = data->freq_mask;
+ min_cpufreq = fsl_get_sys_freq();
+ } else {
+ min_cpufreq = fsl_get_sys_freq() / 2;
+ }
+
+ of_node_put(np);
+
+ ret = cpufreq_register_driver(&ppc_corenet_cpufreq_driver);
+ if (!ret)
+ pr_info("Freescale PowerPC corenet CPU frequency scaling driver\n");
+
+ return ret;
+
+err_mask:
+ for_each_possible_cpu(cpu)
+ free_cpumask_var(per_cpu(cpu_mask, cpu));
+
+ return -ENOMEM;
+}
+module_init(ppc_corenet_cpufreq_init);
+
+static void __exit ppc_corenet_cpufreq_exit(void)
+{
+ unsigned int cpu;
+
+ for_each_possible_cpu(cpu)
+ free_cpumask_var(per_cpu(cpu_mask, cpu));
+
+ cpufreq_unregister_driver(&ppc_corenet_cpufreq_driver);
+}
+module_exit(ppc_corenet_cpufreq_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Tang Yuantian <Yuantian.Tang@freescale.com>");
+MODULE_DESCRIPTION("cpufreq driver for Freescale e500mc series SoCs");
diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.c b/drivers/cpufreq/ppc_cbe_cpufreq.c
index e577a1dbbfcd..5936f8d6f2cc 100644
--- a/drivers/cpufreq/ppc_cbe_cpufreq.c
+++ b/drivers/cpufreq/ppc_cbe_cpufreq.c
@@ -106,7 +106,7 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
/* initialize frequency table */
for (i=0; cbe_freqs[i].frequency!=CPUFREQ_TABLE_END; i++) {
- cbe_freqs[i].frequency = max_freq / cbe_freqs[i].index;
+ cbe_freqs[i].frequency = max_freq / cbe_freqs[i].driver_data;
pr_debug("%d: %d\n", i, cbe_freqs[i].frequency);
}
@@ -165,7 +165,7 @@ static int cbe_cpufreq_target(struct cpufreq_policy *policy,
"1/%d of max frequency\n",
policy->cpu,
cbe_freqs[cbe_pmode_new].frequency,
- cbe_freqs[cbe_pmode_new].index);
+ cbe_freqs[cbe_pmode_new].driver_data);
rc = set_pmode(policy->cpu, cbe_pmode_new);
diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
index 9e5bc8e388a0..fb3981ac829f 100644
--- a/drivers/cpufreq/pxa2xx-cpufreq.c
+++ b/drivers/cpufreq/pxa2xx-cpufreq.c
@@ -420,7 +420,7 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
/* Generate pxa25x the run cpufreq_frequency_table struct */
for (i = 0; i < NUM_PXA25x_RUN_FREQS; i++) {
pxa255_run_freq_table[i].frequency = pxa255_run_freqs[i].khz;
- pxa255_run_freq_table[i].index = i;
+ pxa255_run_freq_table[i].driver_data = i;
}
pxa255_run_freq_table[i].frequency = CPUFREQ_TABLE_END;
@@ -428,7 +428,7 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
for (i = 0; i < NUM_PXA25x_TURBO_FREQS; i++) {
pxa255_turbo_freq_table[i].frequency =
pxa255_turbo_freqs[i].khz;
- pxa255_turbo_freq_table[i].index = i;
+ pxa255_turbo_freq_table[i].driver_data = i;
}
pxa255_turbo_freq_table[i].frequency = CPUFREQ_TABLE_END;
@@ -440,9 +440,9 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
if (freq > pxa27x_maxfreq)
break;
pxa27x_freq_table[i].frequency = freq;
- pxa27x_freq_table[i].index = i;
+ pxa27x_freq_table[i].driver_data = i;
}
- pxa27x_freq_table[i].index = i;
+ pxa27x_freq_table[i].driver_data = i;
pxa27x_freq_table[i].frequency = CPUFREQ_TABLE_END;
/*
diff --git a/drivers/cpufreq/pxa3xx-cpufreq.c b/drivers/cpufreq/pxa3xx-cpufreq.c
index 15d60f857ad5..9c92ef032a9e 100644
--- a/drivers/cpufreq/pxa3xx-cpufreq.c
+++ b/drivers/cpufreq/pxa3xx-cpufreq.c
@@ -98,10 +98,10 @@ static int setup_freqs_table(struct cpufreq_policy *policy,
return -ENOMEM;
for (i = 0; i < num; i++) {
- table[i].index = i;
+ table[i].driver_data = i;
table[i].frequency = freqs[i].cpufreq_mhz * 1000;
}
- table[num].index = i;
+ table[num].driver_data = i;
table[num].frequency = CPUFREQ_TABLE_END;
pxa3xx_freqs = freqs;
diff --git a/drivers/cpufreq/s3c2416-cpufreq.c b/drivers/cpufreq/s3c2416-cpufreq.c
index 4f1881eee3f1..e35865af96e2 100644
--- a/drivers/cpufreq/s3c2416-cpufreq.c
+++ b/drivers/cpufreq/s3c2416-cpufreq.c
@@ -244,7 +244,7 @@ static int s3c2416_cpufreq_set_target(struct cpufreq_policy *policy,
if (ret != 0)
goto out;
- idx = s3c_freq->freq_table[i].index;
+ idx = s3c_freq->freq_table[i].driver_data;
if (idx == SOURCE_HCLK)
to_dvs = 1;
@@ -312,7 +312,7 @@ static void __init s3c2416_cpufreq_cfg_regulator(struct s3c2416_data *s3c_freq)
if (freq->frequency == CPUFREQ_ENTRY_INVALID)
continue;
- dvfs = &s3c2416_dvfs_table[freq->index];
+ dvfs = &s3c2416_dvfs_table[freq->driver_data];
found = 0;
/* Check only the min-voltage, more is always ok on S3C2416 */
@@ -462,7 +462,7 @@ static int __init s3c2416_cpufreq_driver_init(struct cpufreq_policy *policy)
freq = s3c_freq->freq_table;
while (freq->frequency != CPUFREQ_TABLE_END) {
/* special handling for dvs mode */
- if (freq->index == 0) {
+ if (freq->driver_data == 0) {
if (!s3c_freq->hclk) {
pr_debug("cpufreq: %dkHz unsupported as it would need unavailable dvs mode\n",
freq->frequency);
diff --git a/drivers/cpufreq/s3c24xx-cpufreq.c b/drivers/cpufreq/s3c24xx-cpufreq.c
index 3c0e78ede0da..3513e7477160 100644
--- a/drivers/cpufreq/s3c24xx-cpufreq.c
+++ b/drivers/cpufreq/s3c24xx-cpufreq.c
@@ -70,7 +70,7 @@ static void s3c_cpufreq_getcur(struct s3c_cpufreq_config *cfg)
cfg->freq.pclk = pclk = clk_get_rate(clk_pclk);
cfg->freq.armclk = armclk = clk_get_rate(clk_arm);
- cfg->pll.index = __raw_readl(S3C2410_MPLLCON);
+ cfg->pll.driver_data = __raw_readl(S3C2410_MPLLCON);
cfg->pll.frequency = fclk;
cfg->freq.hclk_tns = 1000000000 / (cfg->freq.hclk / 10);
@@ -431,7 +431,7 @@ static unsigned int suspend_freq;
static int s3c_cpufreq_suspend(struct cpufreq_policy *policy)
{
suspend_pll.frequency = clk_get_rate(_clk_mpll);
- suspend_pll.index = __raw_readl(S3C2410_MPLLCON);
+ suspend_pll.driver_data = __raw_readl(S3C2410_MPLLCON);
suspend_freq = s3c_cpufreq_get(0) * 1000;
return 0;
diff --git a/drivers/cpufreq/s3c64xx-cpufreq.c b/drivers/cpufreq/s3c64xx-cpufreq.c
index 27cacb524796..13bb4bae64ee 100644
--- a/drivers/cpufreq/s3c64xx-cpufreq.c
+++ b/drivers/cpufreq/s3c64xx-cpufreq.c
@@ -87,7 +87,7 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
freqs.old = clk_get_rate(armclk) / 1000;
freqs.new = s3c64xx_freq_table[i].frequency;
freqs.flags = 0;
- dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[i].index];
+ dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[i].driver_data];
if (freqs.old == freqs.new)
return 0;
@@ -104,7 +104,8 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
if (ret != 0) {
pr_err("Failed to set VDDARM for %dkHz: %d\n",
freqs.new, ret);
- goto err;
+ freqs.new = freqs.old;
+ goto post_notify;
}
}
#endif
@@ -113,10 +114,13 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
if (ret < 0) {
pr_err("Failed to set rate %dkHz: %d\n",
freqs.new, ret);
- goto err;
+ freqs.new = freqs.old;
}
+post_notify:
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+ if (ret)
+ goto err;
#ifdef CONFIG_REGULATOR
if (vddarm && freqs.new < freqs.old) {
diff --git a/drivers/cpufreq/sc520_freq.c b/drivers/cpufreq/sc520_freq.c
index f740b134d27b..77a210975fc4 100644
--- a/drivers/cpufreq/sc520_freq.c
+++ b/drivers/cpufreq/sc520_freq.c
@@ -71,7 +71,7 @@ static void sc520_freq_set_cpu_state(struct cpufreq_policy *policy,
local_irq_disable();
clockspeed_reg = *cpuctl & ~0x03;
- *cpuctl = clockspeed_reg | sc520_freq_table[state].index;
+ *cpuctl = clockspeed_reg | sc520_freq_table[state].driver_data;
local_irq_enable();
diff --git a/drivers/cpufreq/sparc-us2e-cpufreq.c b/drivers/cpufreq/sparc-us2e-cpufreq.c
index 306ae462bba6..93061a408773 100644
--- a/drivers/cpufreq/sparc-us2e-cpufreq.c
+++ b/drivers/cpufreq/sparc-us2e-cpufreq.c
@@ -308,17 +308,17 @@ static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy)
struct cpufreq_frequency_table *table =
&us2e_freq_table[cpu].table[0];
- table[0].index = 0;
+ table[0].driver_data = 0;
table[0].frequency = clock_tick / 1;
- table[1].index = 1;
+ table[1].driver_data = 1;
table[1].frequency = clock_tick / 2;
- table[2].index = 2;
+ table[2].driver_data = 2;
table[2].frequency = clock_tick / 4;
- table[2].index = 3;
+ table[2].driver_data = 3;
table[2].frequency = clock_tick / 6;
- table[2].index = 4;
+ table[2].driver_data = 4;
table[2].frequency = clock_tick / 8;
- table[2].index = 5;
+ table[2].driver_data = 5;
table[3].frequency = CPUFREQ_TABLE_END;
policy->cpuinfo.transition_latency = 0;
diff --git a/drivers/cpufreq/sparc-us3-cpufreq.c b/drivers/cpufreq/sparc-us3-cpufreq.c
index c71ee142347a..880ee293d61e 100644
--- a/drivers/cpufreq/sparc-us3-cpufreq.c
+++ b/drivers/cpufreq/sparc-us3-cpufreq.c
@@ -169,13 +169,13 @@ static int __init us3_freq_cpu_init(struct cpufreq_policy *policy)
struct cpufreq_frequency_table *table =
&us3_freq_table[cpu].table[0];
- table[0].index = 0;
+ table[0].driver_data = 0;
table[0].frequency = clock_tick / 1;
- table[1].index = 1;
+ table[1].driver_data = 1;
table[1].frequency = clock_tick / 2;
- table[2].index = 2;
+ table[2].driver_data = 2;
table[2].frequency = clock_tick / 32;
- table[3].index = 0;
+ table[3].driver_data = 0;
table[3].frequency = CPUFREQ_TABLE_END;
policy->cpuinfo.transition_latency = 0;
diff --git a/drivers/cpufreq/spear-cpufreq.c b/drivers/cpufreq/spear-cpufreq.c
index 156829f4576d..c3efa7f2a908 100644
--- a/drivers/cpufreq/spear-cpufreq.c
+++ b/drivers/cpufreq/spear-cpufreq.c
@@ -250,11 +250,11 @@ static int spear_cpufreq_driver_init(void)
}
for (i = 0; i < cnt; i++) {
- freq_tbl[i].index = i;
+ freq_tbl[i].driver_data = i;
freq_tbl[i].frequency = be32_to_cpup(val++);
}
- freq_tbl[i].index = i;
+ freq_tbl[i].driver_data = i;
freq_tbl[i].frequency = CPUFREQ_TABLE_END;
spear_cpufreq.freq_tbl = freq_tbl;
diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c
index 618e6f417b1c..0915e712fbdc 100644
--- a/drivers/cpufreq/speedstep-centrino.c
+++ b/drivers/cpufreq/speedstep-centrino.c
@@ -79,11 +79,11 @@ static struct cpufreq_driver centrino_driver;
/* Computes the correct form for IA32_PERF_CTL MSR for a particular
frequency/voltage operating point; frequency in MHz, volts in mV.
- This is stored as "index" in the structure. */
+ This is stored as "driver_data" in the structure. */
#define OP(mhz, mv) \
{ \
.frequency = (mhz) * 1000, \
- .index = (((mhz)/100) << 8) | ((mv - 700) / 16) \
+ .driver_data = (((mhz)/100) << 8) | ((mv - 700) / 16) \
}
/*
@@ -307,7 +307,7 @@ static unsigned extract_clock(unsigned msr, unsigned int cpu, int failsafe)
per_cpu(centrino_model, cpu)->op_points[i].frequency
!= CPUFREQ_TABLE_END;
i++) {
- if (msr == per_cpu(centrino_model, cpu)->op_points[i].index)
+ if (msr == per_cpu(centrino_model, cpu)->op_points[i].driver_data)
return per_cpu(centrino_model, cpu)->
op_points[i].frequency;
}
@@ -501,7 +501,7 @@ static int centrino_target (struct cpufreq_policy *policy,
break;
}
- msr = per_cpu(centrino_model, cpu)->op_points[newstate].index;
+ msr = per_cpu(centrino_model, cpu)->op_points[newstate].driver_data;
if (first_cpu) {
rdmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, &oldmsr, &h);
diff --git a/drivers/cpufreq/tegra-cpufreq.c b/drivers/cpufreq/tegra-cpufreq.c
index c74c0e130ef4..cd66b85d927c 100644
--- a/drivers/cpufreq/tegra-cpufreq.c
+++ b/drivers/cpufreq/tegra-cpufreq.c
@@ -28,17 +28,16 @@
#include <linux/io.h>
#include <linux/suspend.h>
-/* Frequency table index must be sequential starting at 0 */
static struct cpufreq_frequency_table freq_table[] = {
- { 0, 216000 },
- { 1, 312000 },
- { 2, 456000 },
- { 3, 608000 },
- { 4, 760000 },
- { 5, 816000 },
- { 6, 912000 },
- { 7, 1000000 },
- { 8, CPUFREQ_TABLE_END },
+ { .frequency = 216000 },
+ { .frequency = 312000 },
+ { .frequency = 456000 },
+ { .frequency = 608000 },
+ { .frequency = 760000 },
+ { .frequency = 816000 },
+ { .frequency = 912000 },
+ { .frequency = 1000000 },
+ { .frequency = CPUFREQ_TABLE_END },
};
#define NUM_CPUS 2
@@ -138,12 +137,12 @@ static int tegra_update_cpu_speed(struct cpufreq_policy *policy,
if (ret) {
pr_err("cpu-tegra: Failed to set cpu frequency to %d kHz\n",
freqs.new);
- return ret;
+ freqs.new = freqs.old;
}
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
- return 0;
+ return ret;
}
static unsigned long tegra_cpu_highest_speed(void)