path: root/include
Age  Commit message  (Author)
2011-05-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
  params.c: Use new strtobool function to process boolean inputs
  debugfs: move to new strtobool
  Add a strtobool function matching semantics of existing in kernel equivalents
  modpost: Update 64k section support for binutils 2.18.50
  module: Use binary search in lookup_symbol()
  module: Use the binary search for symbols resolution
  lib: Add generic binary search function to the kernel.
  module: Sort exported symbols
  module: each_symbol_section instead of each_symbol
  module: split unset_section_ro_nx function.
  module: undo module RONX protection correctly.
  module: zero mod->init_ro_size after init is freed.
  minor ANSI prototype sparse fix
  module: reorder kparam_array to remove alignment padding on 64 bit builds
  module: remove 64 bit alignment padding from struct module with CONFIG_TRACE*
  module: do not hide __modver_version_show declaration behind ifdef
  module: deal with alignment issues in built-in module versions
2011-05-19  list: remove prefetching from regular list iterators  (Linus Torvalds)
This removes the use of software prefetching from the regular list iterators. We don't want it. If you do want to prefetch in some iterator of yours, go right ahead. Just don't expect the iterator to do it, since normally the downsides are bigger than the upsides. It also replaces <linux/prefetch.h> with <linux/const.h>, because the use of LIST_POISON ends up needing it. <linux/poison.h> is sadly not self-contained, and including prefetch.h just happened to hide that. Suggested by David Miller (networking has a lot of regular lists that are often empty or a single entry, and prefetching is not going to do anything but add useless instructions). Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: David S. Miller <davem@davemloft.net> Cc: linux-arch@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
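For a caller that still wants prefetching, the suggested pattern is to issue it explicitly inside the loop body rather than rely on the iterator; a minimal sketch, with a made-up struct and function for illustration:

    #include <linux/list.h>
    #include <linux/prefetch.h>

    struct foo {                        /* hypothetical list node */
        struct list_head list;
        long value;
    };

    static long sum_foos(struct list_head *head)
    {
        struct foo *f;
        long sum = 0;

        list_for_each_entry(f, head, list) {
            prefetch(f->list.next);     /* explicit, caller-chosen prefetch */
            sum += f->value;
        }
        return sum;
    }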
2011-05-19  ASoC: Asahi Kasei AK4641 codec driver  (Dmitry Artamonow)
A driver for the AK4641 codec used in iPAQ hx4700 and Glofiish M800 among others. Signed-off-by: Harald Welte <laforge@gnumonks.org> Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com> Signed-off-by: Dmitry Artamonow <mad_soft@inbox.ru> Acked-by: Liam Girdwood <lrg@ti.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2011-05-19  hlist: remove software prefetching in hlist iterators  (Linus Torvalds)
They not only increase the code footprint, they actually make things slower rather than faster. On internationally acclaimed benchmarks ("make -j16" on an already fully built kernel source tree) the hlist prefetching slows down the build by up to 1%. (Almost all of it comes from hlist_for_each_entry_rcu() as used by avc_has_perm_noaudit(), which is very hot due to all the pathname lookups to see if there is anything to do). The cause seems to be two-fold:
  - on at least some Intel cores, prefetch(NULL) ends up with some microarchitectural stall due to the TLB miss that it incurs. The hlist case triggers this very commonly, since the NULL pointer is the last entry in the list.
  - the prefetch appears to cause more D$ activity, probably because it prefetches hash list entries that are never actually used (because we ended the search early due to a hit).
Regardless, the numbers clearly say that the implicit prefetching is simply a bad idea. If some _particular_ user of the hlist iterators wants to prefetch the next list entry, they can do so themselves explicitly, rather than depend on all list iterators doing so implicitly. Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: David S. Miller <davem@davemloft.net> Cc: linux-arch@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-19  ipv6: reduce per device ICMP mib sizes  (Eric Dumazet)
ipv6 has per device ICMP SNMP counters, taking too much space because they use percpu storage. The needed size per device is:
    (512+4)*sizeof(long)*number_of_possible_cpus*2
On a 32bit kernel with 16 possible cpus, this wastes more than 64 kbytes of memory per ipv6 enabled network device, taken from the vmalloc pool. Since ICMP messages are rare, just use shared counters (atomic_long_t). Per-network-namespace ICMP counters are still using percpu memory; we might also convert them to shared counters in a future patch. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> CC: Denys Fedoryshchenko <denys@visp.net.lb> Signed-off-by: David S. Miller <davem@davemloft.net>
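As a quick check of the quoted figure, assuming sizeof(long) == 4 on a 32bit kernel and 16 possible cpus:

    (512 + 4) * sizeof(long) * 16 * 2
      = 516 * 4 * 16 * 2
      = 66048 bytes, i.e. roughly 64.5 KiB per ipv6-enabled device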
2011-05-19  Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/core  (Ingo Molnar)
2011-05-19  clockevents: Provide interface to reconfigure an active clock event device  (Thomas Gleixner)
Some ARM SoCs have clock event devices which have their frequency modified due to frequency scaling. Provide an interface which allows reconfiguring an active device. After reconfiguration, reprogram the current pending event. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: LAK <linux-arm-kernel@lists.infradead.org> Cc: John Stultz <john.stultz@linaro.org> Acked-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Ingo Molnar <mingo@elte.hu> Link: http://lkml.kernel.org/r/%3C20110518210136.437459958%40linutronix.de%3E
2011-05-19  clockevents: Provide combined configure and register function  (Thomas Gleixner)
All clockevent devices have the same open coded initialization functions. Provide an interface which does all necessary initialization in the core code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: John Stultz <john.stultz@linaro.org> Reviewed-by: Ingo Molnar <mingo@elte.hu> Link: http://lkml.kernel.org/r/%3C20110518210136.331975870%40linutronix.de%3E
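A sketch of how an SoC timer driver might use the combined helper; the device, callbacks and the 32 kHz frequency below are invented for illustration, so treat the details as assumptions rather than a real driver:

    #include <linux/clockchips.h>

    static int my_timer_set_next_event(unsigned long delta,
                                       struct clock_event_device *evt)
    {
        /* program the hardware comparator 'delta' ticks ahead (omitted) */
        return 0;
    }

    static void my_timer_set_mode(enum clock_event_mode mode,
                                  struct clock_event_device *evt)
    {
        /* switch the hardware between periodic/oneshot/shutdown (omitted) */
    }

    static struct clock_event_device my_timer_clkevt = {
        .name           = "my-soc-timer",           /* hypothetical */
        .features       = CLOCK_EVT_FEAT_ONESHOT,
        .set_next_event = my_timer_set_next_event,
        .set_mode       = my_timer_set_mode,
    };

    static void __init my_timer_init(void)
    {
        /* one call does the mult/shift math and registers the device:
         * 32768 Hz clock, min delta 2 ticks, max delta 0xffffffff ticks */
        clockevents_config_and_register(&my_timer_clkevt, 32768, 2, 0xffffffff);
    }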
2011-05-19  clockevents: Restructure clock_event_device members  (Thomas Gleixner)
Group the hot path members of struct clock_event_device together so we have a better cache line footprint. Make it cacheline aligned. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: John Stultz <john.stultz@linaro.org> Reviewed-by: Ingo Molnar <mingo@elte.hu> Link: http://lkml.kernel.org/r/%3C20110518210136.223607682%40linutronix.de%3E
2011-05-19  clocksource: Restructure clocksource struct members  (Thomas Gleixner)
Group the hot path members of struct clocksource together so we have a better cache line footprint. Make it cacheline aligned. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: John Stultz <john.stultz@linaro.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Reviewed-by: Ingo Molnar <mingo@elte.hu> Link: http://lkml.kernel.org/r/%3C20110518210136.003081882%40linutronix.de%3E
2011-05-19  SSB: Change fallback sprom to callback mechanism.  (Hauke Mehrtens)
Some embedded devices like the Netgear WNDR3300 have two SSB based cards without their own sprom on the pci bus. We have to provide two different fallback sproms for these, and this was not possible with the old solution. In the bcm47xx architecture the sprom data is stored in the nvram in the main flash storage. The architecture code will be able to fill the sprom with the stored data based on the bus where the device was found. The bcm63xx code should do the same thing as before, just using the new API. Acked-by: Michael Buesch <mb@bu3sch.de> Cc: netdev@vger.kernel.org Cc: linux-wireless@vger.kernel.org Cc: Florian Fainelli <florian@openwrt.org> Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/2362/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2011-05-19  Add a strtobool function matching semantics of existing in kernel equivalents  (Jonathan Cameron)
This is a rename of the usr_strtobool proposal, which was a renamed, relocated and fixed version of the previous kstrtobool RFC. Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
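A minimal usage sketch of the new helper (the wrapper function and the include location are assumptions for illustration); like the in-kernel equivalents it replaces, it treats "y"/"Y"/"1" as true, "n"/"N"/"0" as false and returns -EINVAL otherwise:

    #include <linux/string.h>
    #include <linux/types.h>
    #include <linux/errno.h>

    static int parse_enable(const char *buf, bool *enabled)
    {
        int ret = strtobool(buf, enabled);

        if (ret)
            return ret;     /* -EINVAL: not a recognised boolean string */
        return 0;
    }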
2011-05-19  lib: Add generic binary search function to the kernel.  (Tim Abbott)
There are a large number of hand-coded binary searches in the kernel (run "git grep search | grep binary" to find many of them). Since in my experience, hand-coding binary searches can be error-prone, it seems worth cleaning this up by providing a generic binary search function. This generic binary search implementation comes from Ksplice. It has the same basic API as the C library bsearch() function. Ksplice uses it in half a dozen places with 4 different comparison functions, and I think our code is substantially cleaner because of this. Signed-off-by: Tim Abbott <tabbott@ksplice.com> Extra-bikeshedding-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk> Extra-bikeshedding-by: André Goddard Rosa <andre.goddard@gmail.com> Extra-bikeshedding-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Alessio Igor Bogani <abogani@kernel.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
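A short usage sketch, relying only on the C-library-style calling convention described above; the wrapper function is made up for illustration:

    #include <linux/bsearch.h>
    #include <linux/types.h>

    static int cmp_int(const void *key, const void *elt)
    {
        return *(const int *)key - *(const int *)elt;
    }

    static bool array_contains(const int *sorted, size_t n, int needle)
    {
        /* same basic API as the C library bsearch() */
        return bsearch(&needle, sorted, n, sizeof(*sorted), cmp_int) != NULL;
    }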
2011-05-19  module: Sort exported symbols  (Alessio Igor Bogani)
This patch places every exported symbol in its own section (i.e. "___ksymtab+printk"). Thus the linker will use its SORT() directive to sort and finally merge all symbols into the right and final section (i.e. "__ksymtab"). The symbol prefixed archs use an underscore as prefix for symbols. To avoid collisions we use a different character to create the temporary section names. This work was supported by a hardware donation from the CE Linux Forum. Signed-off-by: Alessio Igor Bogani <abogani@kernel.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (folded in '+' fixup) Tested-by: Dirk Behme <dirk.behme@googlemail.com>
2011-05-19  module: each_symbol_section instead of each_symbol  (Rusty Russell)
Instead of having a callback function for each symbol in the kernel, have a callback for each array of symbols. This eases the logic when we move to sorted symbols and binary search. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
2011-05-19  module: reorder kparam_array to remove alignment padding on 64 bit builds  (Richard Kennedy)
Reorder structure kparam_array to remove 8 bytes of alignment padding on 64 bit builds, dropping its size from 40 to 32 bytes. Also update the macro module_param_array_named to initialise the structure using its member names to allow it to be changed without touching all its call sites. 'git grep' finds module_param_array in 1037 places so this patch will save a small amount of data space across many modules. Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
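The point about initialising by member name can be shown with a generic sketch; the structure and macro below are stand-ins invented for illustration, not the kernel's actual kparam_array or module_param_array_named:

    #include <linux/kernel.h>

    struct array_param {                /* hypothetical stand-in */
        unsigned int max;
        unsigned int elemsize;
        unsigned int *num;
        void *elem;
    };

    /* Because the initialiser names each member, struct array_param can be
     * reordered for better packing without touching any user of the macro. */
    #define DEFINE_ARRAY_PARAM(name, array, nump)       \
        static struct array_param name = {              \
            .max      = ARRAY_SIZE(array),              \
            .elemsize = sizeof((array)[0]),             \
            .num      = (nump),                         \
            .elem     = (array),                        \
        }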
2011-05-19  module: remove 64 bit alignment padding from struct module with CONFIG_TRACE*  (Richard Kennedy)
Reorder struct module to remove 24 bytes of alignment padding on 64 bit builds when the CONFIG_TRACE options are selected. This allows the structure to fit into one fewer cache line, and its size drops from 592 to 568 bytes on x86_64. Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2011-05-19  module: do not hide __modver_version_show declaration behind ifdef  (Dmitry Torokhov)
Doing so prevents the following warning from sparse:
    CHECK   kernel/params.c
    kernel/params.c:817:9: warning: symbol '__modver_version_show' was not declared. Should it be static?
since kernel/params.c is never compiled with MODULE being set. Signed-off-by: Dmitry Torokhov <dtor@vmware.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2011-05-19  module: deal with alignment issues in built-in module versions  (Dmitry Torokhov)
On m68k natural alignment is 2-byte boundary but we are trying to align structures in __modver section on sizeof(void *) boundary. This causes trouble when we try to access elements in this section in array-like fashion when creating "version" attributes for built-in modules. Moreover, as DaveM said, we can't reliably put structures into independent objects, put them into a special section, and then expect array access over them (via the section boundaries) after linking the objects together to just "work" due to variable alignment choices in different situations. The only solution that seems to work reliably is to make an array of plain pointers to the objects in question and put those pointers in the special section. Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Dmitry Torokhov <dtor@vmware.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
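The pointer-array technique can be sketched generically; the section, symbol and type names here are illustrative assumptions, not the kernel's actual __modver machinery:

    #include <linux/kernel.h>

    struct ver_entry {
        const char *name;
        const char *version;
    };

    /* Emit a pointer (always pointer-aligned) into a named section instead of
     * placing the structure itself there. */
    #define ADD_VER_ENTRY(obj)                                          \
        static struct ver_entry *const __ver_ptr_##obj                  \
        __attribute__((used, section("example_vers"))) = &(obj)

    /* The linker provides these bounds for sections whose names are valid
     * C identifiers. */
    extern struct ver_entry *const __start_example_vers[];
    extern struct ver_entry *const __stop_example_vers[];

    static void list_versions(void)
    {
        struct ver_entry *const *p;

        for (p = __start_example_vers; p < __stop_example_vers; p++)
            pr_info("%s: %s\n", (*p)->name, (*p)->version);
    }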
2011-05-19  Merge branch 'merge' into next  (Benjamin Herrenschmidt)
2011-05-18  ftrace: Modify ftrace_set_filter/notrace to take ops  (Steven Rostedt)
Since users of the function tracer can now pick and choose which functions they want to trace agnostically from other users of the function tracer, we need to pass the ops struct to the ftrace_set_filter() functions. The functions ftrace_set_global_filter() and ftrace_set_global_notrace() are added to keep the old filter functions which are used to modify the generic function tracers. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-18  ipv4: Pass explicit destination address to rt_bind_peer().  (David S. Miller)
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-18  ipv4: Pass explicit destination address to rt_get_peer().  (David S. Miller)
This will next trickle down to rt_bind_peer(). Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-18  ipv4: Make caller provide flowi4 key to inet_csk_route_req().  (David S. Miller)
This way the caller can get at the fully resolved fl4->{daddr,saddr} etc. Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-18  ipv4: Kill RT_CACHE_DEBUG  (David S. Miller)
It's way past its usefulness. And this gets rid of a bunch of stray ->rt_{dst,src} references. Even the comment documenting the macro was inaccurate (stated the default was 1 when it's 0). If reintroduced, it should be done properly, with dynamic debug facilities. Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-18  Merge branch 'devicetree/merge' of git://git.secretlab.ca/git/linux-2.6  (Linus Torvalds)
* 'devicetree/merge' of git://git.secretlab.ca/git/linux-2.6:
  drivercore: revert addition of of_match to struct device
  of: fix race when matching drivers
2011-05-18  ftrace: Allow dynamically allocated function tracers  (Steven Rostedt)
Now that functions may be selected individually, it only makes sense that we should allow dynamically allocated trace structures to be traced. This will allow perf to allocate an ftrace_ops structure at runtime and use it to pick and choose which functions that structure will trace. Note, a dynamically allocated ftrace_ops will always be called indirectly instead of being called directly from the mcount in entry.S. This is because there's no safe way to prevent mcount from being preempted before calling the function, unless we modify every entry.S to do so (not likely). Thus, dynamically allocated functions will now be called by the ftrace_ops_list_func() that loops through the ops that are allocated if more than one op is allocated at a time. This loop is protected with a preempt_disable. To determine if an ftrace_ops structure is allocated or not, a new util function was added to kernel/extable.c called core_kernel_data(), which returns 1 if the address is between _sdata and _edata. Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
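The new check amounts to a bounds test against the built-in data segment; a sketch consistent with the description above (the actual code in kernel/extable.c may differ in detail):

    #include <linux/kernel.h>

    extern char _sdata[], _edata[];     /* linker-provided segment bounds */

    int core_kernel_data(unsigned long addr)
    {
        /* 1 if addr lies in the statically allocated kernel data segment,
         * i.e. it cannot point to a dynamically allocated ftrace_ops */
        return addr >= (unsigned long)_sdata && addr < (unsigned long)_edata;
    }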
2011-05-18  ftrace: Implement separate user function filtering  (Steven Rostedt)
ftrace_ops that are registered to trace functions can now be agnostic to each other with respect to what functions they trace. Each ops has its own hash of the functions it wants to trace and a hash of what it does not want to trace. An empty hash for the functions to trace denotes that all functions should be traced that are not in the notrace hash. Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
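The per-ops rule can be paraphrased in code; the types and helpers below are invented to show the logic only, not the actual ftrace internals:

    #include <linux/types.h>

    struct func_hash {                  /* hypothetical, for illustration */
        unsigned long *ips;
        unsigned int count;
    };

    struct my_ops {                     /* hypothetical stand-in for ftrace_ops */
        struct func_hash filter_hash;
        struct func_hash notrace_hash;
    };

    static bool hash_has(const struct func_hash *h, unsigned long ip)
    {
        unsigned int i;

        for (i = 0; i < h->count; i++)
            if (h->ips[i] == ip)
                return true;
        return false;
    }

    /* Trace ip for this ops if its filter hash is empty ("trace everything")
     * or contains ip, and its notrace hash does not contain ip. */
    static bool ops_traces_ip(const struct my_ops *ops, unsigned long ip)
    {
        if (ops->filter_hash.count && !hash_has(&ops->filter_hash, ip))
            return false;
        return !hash_has(&ops->notrace_hash, ip);
    }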
2011-05-18  ftrace: Use counters to enable functions to trace  (Steven Rostedt)
Every function has its own record that stores the instruction pointer and flags for the function to be traced. There are only two flags: enabled and free. The enabled flag states that tracing for the function has been enabled (actively traced), and the free flag states that the record no longer points to a function and can be used by new functions (loaded modules). These flags are now moved to the MSB of the flags (actually just the top 32bits). The rest of the bits (30 bits) are now used as a ref counter. Every time a tracer registers functions to trace, those functions will have their counters incremented. When tracing is enabled, to determine if a function should be traced, the counter is examined, and if it is non-zero it is set to trace. When an ftrace_ops is registered to trace functions, its hashes are examined. If the ftrace_ops filter_hash count is zero, then all functions are set to be traced, otherwise only the functions in the hash are to be traced. The exception to this is if a function is also in the ftrace_ops notrace_hash. Then that function's counter is not incremented for this ftrace_ops. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-18  ftrace: Create a global_ops to hold the filter and notrace hashes  (Steven Rostedt)
Combine the filter and notrace hashes to be accessed by a single entity, the global_ops. The global_ops is an ftrace_ops structure that is passed to different functions that can read or modify the filtering of the function tracer. The ftrace_ops structure was modified to hold a filter and notrace hashes so that later patches may allow each ftrace_ops to have its own set of rules for what functions may be filtered. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-18  ftrace: Use hash instead for FTRACE_FL_FILTER  (Steven Rostedt)
When multiple users are allowed to have their own set of functions to trace, having the FTRACE_FL_FILTER flag will not be enough to handle the accounting of those users. Each user will need their own set of functions. Replace the FTRACE_FL_FILTER with a filter_hash instead. This is temporary until the rest of the function filtering accounting gets in. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-18  ftrace: Replace FTRACE_FL_NOTRACE flag with a hash of ignored functions  (Steven Rostedt)
To prepare for the accounting system that will allow multiple users of the function tracer, having the FTRACE_FL_NOTRACE as a flag in the dyn_trace record does not make sense. All ftrace_ops will soon have a hash of functions they should trace and not trace. Making a global hash of functions not to trace makes this easier for the transition. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-18  drivercore: revert addition of of_match to struct device  (Grant Likely)
Commit b826291c, "drivercore/dt: add a match table pointer to struct device" added an of_match pointer to struct device to cache the of_match_table entry discovered at driver match time. This was unsafe because matching is not an atomic operation with probing a driver. If two or more drivers are matched against a device at the same time, then the cached matching entry pointer could get overwritten. This patch reverts the of_match cache pointer and reworks all users to call of_match_device() directly instead. Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2011-05-18  of: fix race when matching drivers  (Milton Miller)
If two drivers are probing devices at the same time, both will write their match table result to the dev->of_match cache at the same time. Only write the result if the device matches. In a thread titled "SBus devices sometimes detected, sometimes not", Meelis reported his SBus hme was not detected about 50% of the time. From the debug suggested by Grant it was obvious another driver matched some devices between the call to match the hme and the hme discovery failing. Reported-by: Meelis Roos <mroos@linux.ee> Signed-off-by: Milton Miller <miltonm@bga.com> [grant.likely: modified to only call of_match_device() once] Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2011-05-18  Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block  (Linus Torvalds)
* 'for-linus' of git://git.kernel.dk/linux-2.6-block:
  block: don't delay blk_run_queue_async
  scsi: remove performance regression due to async queue run
  blk-throttle: Use task_subsys_state() to determine a task's blkio_cgroup
  block: rescan partitions on invalidated devices on -ENOMEDIA too
  cdrom: always check_disk_change() on open
  block: unexport DISK_EVENT_MEDIA_CHANGE for legacy/fringe drivers
2011-05-18  signal: trivial, fix the "timespec declared inside parameter list" warning  (Oleg Nesterov)
Fix the compile warning: do_sigtimedwait(struct timespec *) in signal.h needs the forward declaration of timespec. Reported-and-acked-by: Mike Frysinger <vapier.adi@gmail.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
2011-05-18  genirq: Uninline and sanity check generic_handle_irq()  (Thomas Gleixner)
generic_handle_irq() is missing a NULL pointer check for the result of irq_to_desc. This was not a big problem, but we want to expose it to drivers, so we better have sanity checks in place. Add a return value as well, which indicates that the irq number was valid and the handler was invoked. Based on the pure code move from Jonathan Cameron. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jonathan Cameron <jic23@cam.ac.uk>
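A simplified sketch of what the uninlined, checked helper boils down to, reconstructed from the description above (the real code lives under kernel/irq/ and may differ in detail):

    #include <linux/irq.h>
    #include <linux/errno.h>

    int generic_handle_irq(unsigned int irq)
    {
        struct irq_desc *desc = irq_to_desc(irq);

        if (!desc)                              /* invalid irq number */
            return -EINVAL;
        generic_handle_irq_desc(irq, desc);     /* run the flow handler */
        return 0;
    }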
2011-05-18  procfs: add stub for proc_mkdir_mode()  (Randy Dunlap)
Provide a stub for proc_mkdir_mode() when CONFIG_PROC_FS is not enabled, just like the stub for proc_mkdir(). Fixes this linux-next build error:
    drivers/net/wireless/airo.c:4504: error: implicit declaration of function 'proc_mkdir_mode'
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: "John W. Linville" <linville@tuxdriver.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
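The stub presumably mirrors the existing proc_mkdir() one in proc_fs.h; a sketch of the !CONFIG_PROC_FS case, with the exact signature treated as an assumption:

    #ifndef CONFIG_PROC_FS
    static inline struct proc_dir_entry *proc_mkdir_mode(const char *name,
                mode_t mode, struct proc_dir_entry *parent)
    {
        return NULL;    /* no procfs: callers silently get no directory */
    }
    #endif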
2011-05-17  Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6  (David S. Miller)
Conflicts:
    drivers/net/vmxnet3/vmxnet3_ethtool.c
    net/core/dev.c
2011-05-17  PM: Introduce generic prepare and complete callbacks for subsystems  (Rafael J. Wysocki)
Introduce generic .prepare() and .complete() power management callbacks, which are currently missing, for use by subsystems and power domains, and export them. Provide NULL definitions of all the generic system sleep callbacks for CONFIG_PM_SLEEP unset. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
2011-05-17  PM: Allow drivers to allocate memory from .prepare() callbacks safely  (Rafael J. Wysocki)
If device drivers allocate substantial amounts of memory (above 1 MB) in their hibernate .freeze() callbacks (or in their legacy suspend callbacks during hibernation), the subsequent creation of the hibernate image may fail due to the lack of memory. This is the case because the drivers' .freeze() callbacks are executed after the hibernate memory preallocation has been carried out and the preallocated amount of memory may be too small to cover the new driver allocations. Unfortunately, the drivers' .prepare() callbacks also are executed after the hibernate memory preallocation has completed, so they are not suitable for allocating additional memory either. Thus the only way a driver can safely allocate memory during hibernation is to use a hibernate/suspend notifier. However, the notifiers are called before the freezing of user space and the drivers wanting to use them for allocating additional memory may not know how much memory needs to be allocated at that point. To let device drivers overcome this difficulty, rework the hibernation sequence so that the memory preallocation is carried out after the drivers' .prepare() callbacks have been executed, so that the .prepare() callbacks can be used for allocating additional memory to be used by the drivers' .freeze() callbacks. Update documentation to match the new behavior of the code. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
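The pattern this reordering enables for a driver, sketched with an invented driver name and buffer size (the callbacks and sizes are assumptions for illustration):

    #include <linux/device.h>
    #include <linux/vmalloc.h>
    #include <linux/errno.h>

    static void *mydrv_snapshot_buf;    /* hypothetical scratch buffer */

    static int mydrv_prepare(struct device *dev)
    {
        /* After this change, .prepare() runs before the hibernate memory
         * preallocation, so a large allocation made here is already taken
         * into account when the image is sized. */
        mydrv_snapshot_buf = vmalloc(2 * 1024 * 1024);
        return mydrv_snapshot_buf ? 0 : -ENOMEM;
    }

    static int mydrv_freeze(struct device *dev)
    {
        /* save device state into mydrv_snapshot_buf ... (omitted) */
        return 0;
    }

    static void mydrv_complete(struct device *dev)
    {
        vfree(mydrv_snapshot_buf);
        mydrv_snapshot_buf = NULL;
    }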
2011-05-17  Merge branch 'power-domains' into for-linus  (Rafael J. Wysocki)
* power-domains:
  PM: Fix build issue in clock_ops.c for CONFIG_PM_RUNTIME unset
  PM: Revert "driver core: platform_bus: allow runtime override of dev_pm_ops"
  OMAP1 / PM: Use generic clock manipulation routines for runtime PM
  PM / Runtime: Generic clock manipulation rountines for runtime PM (v6)
  PM / Runtime: Add subsystem data field to struct dev_pm_info
  OMAP2+ / PM: move runtime PM implementation to use device power domains
  PM / Platform: Use generic runtime PM callbacks directly
  shmobile: Use power domains for platform runtime PM
  PM: Export platform bus type's default PM callbacks
  PM: Make power domain callbacks take precedence over subsystem ones
2011-05-17  Merge branch 'syscore' into for-linus  (Rafael J. Wysocki)
* syscore:
  PM: Remove sysdev suspend, resume and shutdown operations
  PM / PowerPC: Use struct syscore_ops instead of sysdevs for PM
  PM / UNICORE32: Use struct syscore_ops instead of sysdevs for PM
  PM / AVR32: Use struct syscore_ops instead of sysdevs for PM
  PM / Blackfin: Use struct syscore_ops instead of sysdevs for PM
  ARM / Samsung: Use struct syscore_ops for "core" power management
  ARM / PXA: Use struct syscore_ops for "core" power management
  ARM / SA1100: Use struct syscore_ops for "core" power management
  ARM / Integrator: Use struct syscore_ops for core PM
  ARM / OMAP: Use struct syscore_ops for "core" power management
  ARM: Use struct syscore_ops instead of sysdevs for PM in common code
2011-05-17  kmod: always provide usermodehelper_disable()  (Kay Sievers)
We need to prevent kernel-forked processes during system poweroff. Such processes try to access the filesystem whose disks we are trying to shut down at the same time. This causes delays and exceptions in the storage drivers. A follow-up patch will add these calls and needs usermodehelper_disable() also on systems without suspend support. Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
2011-05-17  PM: Print a warning if firmware is requested when tasks are frozen  (Rafael J. Wysocki)
Some drivers erroneously use request_firmware() from their ->resume() (or ->thaw(), or ->restore()) callbacks, which is not going to work unless the firmware has been built in. This causes system resume to stall until the firmware-loading timeout expires, which makes users think that the resume has failed and reboot their machines unnecessarily. For this reason, make _request_firmware() print a warning and return immediately with an error code if it has been called when tasks are frozen and it's impossible to start any new usermode helpers. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Greg Kroah-Hartman <gregkh@suse.de> Reviewed-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
2011-05-17  net: ping: fix build failure  (Vasiliy Kulikov)
If CONFIG_PROC_SYSCTL=n the building process fails:
    ping.c:(.text+0x52af3): undefined reference to `inet_get_ping_group_range_net'
Moved inet_get_ping_group_range_net() to ping.c. Reported-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Vasiliy Kulikov <segoon@openwall.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-17  scsi: remove performance regression due to async queue run  (Jens Axboe)
Commit c21e6beb removed our queue request_fn re-enter protection, and defaulted to always running the queues from kblockd to be safe. This was a known potential slow down, but should be safe. Unfortunately this is causing big performance regressions for some, so we need to improve this logic. Looking into the details of the re-enter, the real issue is on requeue of requests. Requeue of requests upon seeing a BUSY condition from the device ends up re-running the queue, causing traces like this:
    scsi_request_fn()
        scsi_dispatch_cmd()
            scsi_queue_insert()
                __scsi_queue_insert()
                    scsi_run_queue()
                        scsi_request_fn()
                            ...
potentially causing the issue we want to avoid. So special case the requeue re-run of the queue, but improve it to offload the entire run of local queue and starved queue from a single workqueue callback. This is a lot better than potentially kicking off a workqueue run for each device seen. This also fixes the issue of the local device going into recursion, since the above mentioned commit never moved that queue run out of line. Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
2011-05-16  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc  (Linus Torvalds)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc:
  Revert "mmc: fix a race between card-detect rescan and clock-gate work instances"
2011-05-16  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 into for-davem  (John W. Linville)
Conflicts:
    drivers/net/wireless/iwlwifi/iwl-agn-tx.c
    net/mac80211/sta_info.h
2011-05-16  ftrace: Avoid recording mcount on .init sections directly  (Steven Rostedt)
The init and exit sections should not be traced and adding a call to mcount to them is a waste of text and instruction cache. Have the macro section attributes include notrace to ignore these functions for tracing from the build. Link: http://lkml.kernel.org/r/20110421023738.953028219@goodmis.org Signed-off-by: Steven Rostedt <rostedt@goodmis.org>