Age  Commit message  Author
2012-05-21  Merge tag 'for-usb-next-2012-05-21' of git://git.kernel.org/pub/scm/linux/kernel/git/sarah/xhci into usb-next  (Greg Kroah-Hartman)
xhci/usb: Build error fixes for 3.5. Hi Greg, Here are four patches that fix the build errors introduced by the USB 3.0 Link PM patches. Please pull for inclusion in 3.5. Sarah Sharp
2012-05-21  Merge branches 'core', 'cxgb4', 'ipath', 'iser', 'lockdep', 'mlx4', 'nes', 'ocrdma', 'qib' and 'raw-qp' into for-linus  (Roland Dreier)
2012-05-21  xhci: Fix DIV_ROUND_UP compile error.  (Sarah Sharp)
Fengguang reports that the xHCI driver isn't linked properly on his machine:
  ERROR: "__udivdi3" [drivers/usb/host/xhci-hcd.ko] undefined!
  ERROR: "handle_edge_irq" [drivers/gpio/gpio-pch.ko] undefined!
  ERROR: "irq_to_desc" [drivers/gpio/gpio-pch.ko] undefined!
The driver compiles fine on my 64-bit box (gcc version 4.6.1). Fengguang thinks it's because the xHCI driver was using DIV_ROUND_UP() instead of DIV_ROUND_UP_ULL() with arguments that were unsigned long long variables. Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Reported-by: Wu Fengguang <wfg@linux.intel.com>
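For readers unfamiliar with the two macros, a minimal sketch of the difference (illustrative only; the helper and variable names are made up, not the xHCI code):
  #include <linux/kernel.h>   /* DIV_ROUND_UP(), DIV_ROUND_UP_ULL() */

  /* On 32-bit builds a plain 64-bit '/' compiles into a call to the libgcc
   * helper __udivdi3, which the kernel does not provide -- hence the
   * "undefined!" link error above. */
  static u64 round_up_to_frames(u64 total_ns, unsigned int frame_ns)
  {
      /* return DIV_ROUND_UP(total_ns, frame_ns);   links only on 64-bit */
      return DIV_ROUND_UP_ULL(total_ns, frame_ns);  /* goes through do_div() */
  }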
2012-05-21  RDMA/cxgb4: Include vmalloc.h for vmalloc and vfree  (Vipul Pandya)
Signed-off-by: Vipul Pandya <vipul@chelsio.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-05-21  xhci: Fix compile with CONFIG_USB_SUSPEND=n  (Sarah Sharp)
The USB 2.0 Link PM code is conditionally compiled when CONFIG_USB_SUSPEND=y. I believe that's a mistake, since Link PM is not directly related to USB device suspend and Link PM is implemented without relying on any of the suspend code in the USB core. For now, keep the USB 2.0 Link PM code conditionally compiled if CONFIG_USB_SUSPEND=y. This patch does move the code to implement USB 3.0 Link PM out of the xHCI driver #ifdefs for CONFIG_USB_SUSPEND and moves it into a section dependent on CONFIG_PM. The USB core functions for USB 3.0 Link PM are already conditionally compiled when CONFIG_PM=y. Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
2012-05-21  USB: Fix core compile with CONFIG_USB_SUSPEND=n  (Sarah Sharp)
When CONFIG_PM=n, make sure that the usb_[unlocked_][en/dis]able_lpm declarations are visible in include/linux/usb.h, and exported from drivers/usb/core/hub.c. Before this patch, if CONFIG_USB_SUSPEND was turned off, it would cause build errors:
  drivers/usb/core/hub.c: In function 'usb_disable_lpm':
  drivers/usb/core/hub.c:3394:2: error: implicit declaration of function 'usb_enable_lpm' [-Werror=implicit-function-declaration]
  drivers/usb/core/hub.c: At top level:
  drivers/usb/core/hub.c:3424:6: warning: conflicting types for 'usb_enable_lpm' [enabled by default]
  drivers/usb/core/hub.c:3394:2: note: previous implicit declaration of 'usb_enable_lpm' was here
  drivers/usb/core/driver.c: In function 'usb_probe_interface':
  drivers/usb/core/driver.c:339:2: error: implicit declaration of function 'usb_unlocked_disable_lpm' [-Werror=implicit-function-declaration]
  drivers/usb/core/driver.c:364:3: error: implicit declaration of function 'usb_unlocked_enable_lpm' [-Werror=implicit-function-declaration]
  drivers/usb/core/message.c: In function 'usb_set_interface':
  drivers/usb/core/message.c:1314:2: error: implicit declaration of function 'usb_disable_lpm' [-Werror=implicit-function-declaration]
  drivers/usb/core/message.c:1323:3: error: implicit declaration of function 'usb_enable_lpm' [-Werror=implicit-function-declaration]
  drivers/usb/core/message.c:1368:2: error: implicit declaration of function 'usb_unlocked_enable_lpm' [-Werror=implicit-function-declaration]
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Reported-by: Chen Peter-B29397 <B29397@freescale.com>
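The usual header idiom for this kind of fix looks roughly as follows (a sketch built from the function names in the error log above; the exact guards and return types in the real patch may differ):
  /* include/linux/usb.h -- sketch only */
  #ifdef CONFIG_PM
  extern int usb_disable_lpm(struct usb_device *udev);
  extern void usb_enable_lpm(struct usb_device *udev);
  extern int usb_unlocked_disable_lpm(struct usb_device *udev);
  extern void usb_unlocked_enable_lpm(struct usb_device *udev);
  #else
  /* no-op stubs so callers still compile when power management is off */
  static inline int usb_disable_lpm(struct usb_device *udev) { return 0; }
  static inline void usb_enable_lpm(struct usb_device *udev) { }
  static inline int usb_unlocked_disable_lpm(struct usb_device *udev) { return 0; }
  static inline void usb_unlocked_enable_lpm(struct usb_device *udev) { }
  #endif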
2012-05-21  brcm80211: Fix compile error for .disable_hub_initiated_lpm.  (Sarah Sharp)
Fix missing comma. Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Reported-by: Wu Fengguang <wfg@linux.intel.com>
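The shape of the bug: adding a new designated initializer after a member that lacks its trailing comma breaks the build. A generic sketch (the driver name and field values are made up, not the brcm80211 code):
  #include <linux/usb.h>

  static struct usb_driver example_driver = {
      .name                      = "example",
      .supports_autosuspend      = 1,   /* a missing ',' here is the bug pattern */
      .disable_hub_initiated_lpm = 1,
  };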
2012-05-21  Revert "USB: EHCI: work around bug in the Philips ISP1562 controller"  (Greg Kroah-Hartman)
This reverts commit 1996e6c572969a8cf6d7fa97eef621219acd94a9. It turned out to not be needed, now that the real fix has been committed. Reported-by: Alan Stern <stern@rowland.harvard.edu> Cc: stable <stable@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-05-21  Merge branch 'dentry-cleanups' (dcache access cleanups and optimizations)  (Linus Torvalds)
This branch simplifies and clarifies the dcache lookup, and allows us to do certain nice optimizations when comparing dentries. It also cleans up the interface to __d_lookup_rcu(), especially around passing the inode information around.
* dentry-cleanups:
  vfs: make it possible to access the dentry hash/len as one 64-bit entry
  vfs: move dentry name length comparison from dentry_cmp() into callers
  vfs: do the careful dentry name access for all dentry_cmp cases
  vfs: remove unnecessary d_unhashed() check from __d_lookup_rcu
  vfs: clean up __d_lookup_rcu() and dentry_cmp() interfaces
2012-05-21  [media] saa7134-cards: Remove a PCI entry added by mistake  (Mauro Carvalho Chehab)
changeset 75c7dbcab added a wrong PCI ID address by mistake. Remove it. Reported-by: Remi Schwartz <remi.schwartz@gmail.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
2012-05-21  Merge branch 'vfs-cleanups' (random vfs cleanups)  (Linus Torvalds)
This teaches vfs_fstat() to use the appropriate f[get|put]_light functions, allowing it to avoid some unnecessary locking for the common case. More noticeably, it also cleans up and simplifies the "getname_flags()" function, which now relies on the architecture strncpy_from_user() doing all the user access checks properly, instead of hacking around the fact that on x86 it didn't use to do it right (see commit 92ae03f2ef99: "x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up").
* vfs-cleanups:
  VFS: make vfs_fstat() use f[get|put]_light()
  VFS: clean up and simplify getname_flags()
  x86: make word-at-a-time strncpy_from_user clear bytes at the end
2012-05-21  xfs: add trace points for log forces  (Dave Chinner)
To enable easy tracing of the location of log forces and the frequency of them via perf, add a pair of trace points to the log force functions. This will help debug where excessive log forces are being issued from by simple perf commands like:
  # ~/perf/perf top -e xfs:xfs_log_force -G -U
Which gives this sort of output:
  Events: 141  xfs:xfs_log_force
  -  100.00%  [kernel]  [k] xfs_log_force
     - xfs_log_force
          87.04% xfsaild
             kthread
             kernel_thread_helper
        - 12.87% xfs_buf_lock
             _xfs_buf_find
             xfs_buf_get
             xfs_trans_get_buf
             xfs_da_do_buf
             xfs_da_get_buf
             xfs_dir2_data_init
             xfs_dir2_leaf_addname
             xfs_dir_createname
             xfs_create
             xfs_vn_mknod
             xfs_vn_create
             vfs_create
             do_last.isra.41
             path_openat
             do_filp_open
             do_sys_open
             sys_open
             system_call_fastpath
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Mark Tinguely <tinguely@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com>
2012-05-21  xfs: fix memory reclaim deadlock on agi buffer  (Peter Watkins)
Note xfs_iget can be called while holding a locked agi buffer. If it goes into memory reclaim then inode teardown may try to lock the same buffer. Prevent the deadlock by calling radix_tree_preload with GFP_NOFS. Signed-off-by: Peter Watkins <treestem@gmail.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com>
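The general pattern being referred to looks like this (a simplified sketch, not the exact xfs_iget() hunk; error handling trimmed):
  /* Preload outside the lock with GFP_NOFS so the later insert cannot
   * recurse into filesystem reclaim while the AGI buffer is held locked. */
  if (radix_tree_preload(GFP_NOFS))
      return -ENOMEM;

  spin_lock(&pag->pag_ici_lock);
  error = radix_tree_insert(&pag->pag_ici_root, agino, ip);
  spin_unlock(&pag->pag_ici_lock);
  radix_tree_preload_end();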
2012-05-21  xfs: fix delalloc quota accounting on failure  (Dave Chinner)
xfstest 270 was causing quota reservations way beyond what was sane (ten to hundreds of TB) for a 4GB filesystem. There's a sign problem in the error handling path of xfs_bmapi_reserve_delalloc() because xfs_trans_unreserve_quota_nblks() simply negates the value passed - which doesn't work for an unsigned variable. This causes reservations of close to 2^32 blocks instead of removing a reservation of a handful of blocks. Fix the same problem in the other xfs_trans_unreserve_quota_nblks() callers where unsigned integer variables are used, too. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com>
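The underlying C pitfall, in a standalone illustration (not the XFS code itself; the quota helper takes a signed block count):
  unsigned int alen = 8;   /* 32-bit unsigned count of delalloc blocks */
  long nblks;

  nblks = -alen;           /* wraps to 4294967288 first, then widens: huge positive */
  nblks = -(long)alen;     /* widen first, then negate: -8, as intended */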
2012-05-21  Merge branch 'stat-cleanups' (clean up copying of stat info to user space)  (Linus Torvalds)
This makes cp_new_stat() a bit more readable, and avoids having to memset() the whole structure just to fill in a couple of padding fields. This is another result of me looking at code generation of functions that show up high on certain kernel profiles, and just going "Oh, let's just clean that up". Architectures that don't supply the #define to fill just the padding fields will still fall back to memset().
* stat-cleanups:
  vfs: don't force a big memset of stat data just to clear padding fields
  vfs: de-crapify "cp_new_stat()" function
2012-05-21  Merge branch 'vm-cleanups' (unmap_vma() interface cleanup)  (Linus Torvalds)
This series sanitizes the interface to unmap_vma(). The crazy interface annoyed me no end when I was looking at unmap_single_vma(), which we can spend quite a lot of time in (especially with loads that have a lot of small fork/exec's: shell scripts etc). Moving the nr_accounted calculations to where they belong at least clarifies things a little. I hope to come back to look at the performance of this later, but if/when I get back to it I at least don't have to see the crazy interfaces any more.
* vm-cleanups:
  vm: remove 'nr_accounted' calculations from the unmap_vmas() interfaces
  vm: simplify unmap_vmas() calling convention
2012-05-21  hvc_xen: NULL dereference on allocation failure  (Dan Carpenter)
If kzalloc() returns a NULL here, we pass a NULL to xencons_disconnect_backend() which will cause an Oops. Also I removed the __GFP_ZERO while I was at it since kzalloc() implies __GFP_ZERO. CC: stable@kernel.org Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
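The kind of fix being described, sketched generically (not the hvc_xen diff itself; the struct name is taken from the driver, everything else is illustrative):
  struct xencons_info *info;

  info = kzalloc(sizeof(*info), GFP_KERNEL);  /* kzalloc already zeroes */
  if (!info)
      return -ENOMEM;  /* bail out instead of handing NULL to the teardown path */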
2012-05-21  xen: Add selfballoning memory reservation tunable.  (Jana Saout)
Currently, the memory target in the Xen selfballooning driver is mainly driven by the value of "Committed_AS". However, there are cases in which it is desirable to assign additional memory to be available for the kernel, e.g. for local caches (which are not covered by cleancache), e.g. dcache and inode caches. This adds an additional tunable in the selfballooning driver (accessible via sysfs) which allows the user to specify an additional constant amount of memory to be reserved by the selfballooning driver for the local domain. Signed-off-by: Jana Saout <jana@saout.de> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-05-21  KVM: make asm-generic/kvm_para.h have an ifdef __KERNEL__ block  (Paul Gortmaker)
There are two functions in this asm-generic file. Looking at other arches that do not use the generic version, these two functions are within an #ifdef __KERNEL__ block, so make the generic one consistent with those. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-05-21  Merge branch 'v3-removal' into for-linus  (Russell King)
Conflicts: arch/arm/boot/compressed/head.S
2012-05-21  Merge branch 'misc' into for-linus  (Russell King)
Conflicts: arch/arm/kernel/ptrace.c
2012-05-21  Merge branches 'amba', 'devel-stable', 'fixes', 'mach-types', 'mmci', 'pci' and 'versatile' into for-linus  (Russell King)
2012-05-21  Merge branch 'bugfixes' into nfs-for-next  (Trond Myklebust)
2012-05-21  xenbus: Add support for xenbus backend in stub domain  (Daniel De Graaf)
Add an ioctl to the /dev/xen/xenbus_backend device allowing the xenbus backend to be started after the kernel has booted. This allows xenstore to run in a different domain from the dom0. Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-05-21  MAINTAINERS: Add myself as maintainer to the USB PHY Layer  (Felipe Balbi)
I have been looking over those patches for quite a while. Adding myself officially as the maintainer as agreed with Greg KH. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-05-21  HID: wacom: fix build breakage without CONFIG_LEDS_CLASS  (David Rientjes)
CONFIG_HID_WACOM must depend on CONFIG_LEDS_CLASS, otherwise CONFIG_NEW_LEDS may be disabled. Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2012-05-21  xen/smp: unbind irqworkX when unplugging vCPUs.  (Konrad Rzeszutek Wilk)
The git commit 1ff2b0c303698e486f1e0886b4d9876200ef8ca5 "xen: implement IRQ_WORK_VECTOR handler" added the functionality to have a per-cpu "irqworkX" for the IPI APIC functionality. However it missed the unbind when a vCPU is unplugged, resulting in an orphaned per-cpu interrupt line for the unplugged vCPU:
   30:  216  0  xen-dyn-event   hvc_console
   31:  810  4  xen-dyn-event   eth0
   32:   29  0  xen-dyn-event   blkif
  - 36:    0  0  xen-percpu-ipi  irqwork2
  - 37:  287  0  xen-dyn-event   xenbus
  + 36:  287  0  xen-dyn-event   xenbus
  NMI:    0  0  Non-maskable interrupts
  LOC:    0  0  Local timer interrupts
  SPU:    0  0  Spurious interrupts
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-05-21  ARM: dma-mapping: use PMD size for section unmap  (Vitaly Andrianov)
The dma_contiguous_remap() function clears existing section maps using the wrong size (PGDIR_SIZE instead of PMD_SIZE). This is a bug which does not affect non-LPAE systems, where PGDIR_SIZE and PMD_SIZE are the same. On LPAE systems, however, this bug causes the kernel to hang at this point. This fix has been tested on both LPAE and non-LPAE kernel builds. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
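In outline, the section-clearing loop now steps by PMD_SIZE instead of PGDIR_SIZE (a sketch of the described change, not the full function; pmd_off_k() is the arch/arm/mm helper that returns the kernel pmd entry for a virtual address):
  unsigned long addr;

  /* Under LPAE, PGDIR_SIZE (1 GiB) and PMD_SIZE (2 MiB) differ, so stepping
   * by PGDIR_SIZE clears the wrong entries; on non-LPAE they are equal and
   * the bug is invisible. */
  for (addr = start; addr < end; addr += PMD_SIZE)
      pmd_clear(pmd_off_k(addr));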
2012-05-21  cma: fix migration mode  (Minchan Kim)
__alloc_contig_migrate_range calls migrate_pages with the wrong argument for migrate_mode. Fix it. Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Minchan Kim <minchan@kernel.org> Acked-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2012-05-21  ARM: integrate CMA with DMA-mapping subsystem  (Marek Szyprowski)
This patch adds support for CMA to the dma-mapping subsystem for the ARM architecture. By default a global CMA area is used, but specific devices are allowed to have their private memory areas if required (they can be created with the dma_declare_contiguous() function during board initialisation). Contiguous memory areas reserved for DMA are remapped with 2-level page tables on boot. Once a buffer is requested, a low-memory kernel mapping is updated to match the requested memory access type. GFP_ATOMIC allocations are performed from a special pool which is created early during boot. This way, remapping page attributes is not needed at allocation time. CMA has been enabled unconditionally for ARMv6+ systems. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> CC: Michal Nazarewicz <mina86@mina86.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
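For board code, the per-device hook mentioned above looks roughly like this (a sketch using the signature introduced by this series; the device and size are placeholders):
  /* in the machine's ->reserve() callback, before memory goes to the
   * buddy allocator */
  static void __init example_board_reserve(void)
  {
      /* carve out 16 MiB of CMA memory private to one device; base/limit
       * of 0 lets the allocator place the area anywhere */
      dma_declare_contiguous(&example_camera_device.dev, 16 * SZ_1M, 0, 0);
  }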
2012-05-21  X86: integrate CMA with DMA-mapping subsystem  (Marek Szyprowski)
This patch adds support for CMA to the dma-mapping subsystem for the x86 architecture, which uses the common pci-dma/pci-nommu implementation. This allows CMA to be tested on KVM/QEMU and on a lot of common x86 boxes. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> CC: Michal Nazarewicz <mina86@mina86.com> Acked-by: Arnd Bergmann <arnd@arndb.de>
2012-05-21  drivers: add Contiguous Memory Allocator  (Marek Szyprowski)
The Contiguous Memory Allocator is a set of helper functions for the DMA mapping framework that improves allocations of contiguous memory chunks. CMA grabs memory on system boot, marks it with the MIGRATE_CMA migrate type and gives it back to the system. The kernel is allowed to allocate only movable pages within CMA's managed memory, so that it can be used, for example, for page cache when the DMA mapping framework does not use it. On a dma_alloc_from_contiguous() request, such pages are migrated out of the CMA area to free the required contiguous block and fulfill the request. This makes it possible to allocate large contiguous chunks of memory at any time, assuming that there is enough free memory available in the system. This code is heavily based on earlier works by Michal Nazarewicz. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
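The driver-facing pair of helpers described above, in a usage sketch (the count is in pages and the alignment is a page order; dev is whatever device owns the CMA area):
  #include <linux/dma-contiguous.h>

  /* allocate 1 MiB (256 x 4 KiB pages) of physically contiguous memory,
   * aligned to an order-8 boundary */
  struct page *pages = dma_alloc_from_contiguous(dev, 256, 8);
  if (!pages)
      return -ENOMEM;

  /* use the pages, then hand them back: */
  dma_release_from_contiguous(dev, pages, 256);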
2012-05-21  mm: trigger page reclaim in alloc_contig_range() to stabilise watermarks  (Marek Szyprowski)
alloc_contig_range() performs memory allocation, so it should also maintain the correct memory watermark levels. This commit adds a call to *_slowpath-style reclaim to grab enough pages to make sure that the final collection of contiguous pages from the freelists will not starve the system. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> CC: Michal Nazarewicz <mina86@mina86.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: extract reclaim code from __alloc_pages_direct_reclaim()  (Marek Szyprowski)
This patch extracts the common reclaim code from the __alloc_pages_direct_reclaim() function into a separate function, __perform_reclaim(), which can later be used by alloc_contig_range(). Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Cc: Michal Nazarewicz <mina86@mina86.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: Serialize access to min_free_kbytes  (Mel Gorman)
There is a race between the min_free_kbytes sysctl, memory hotplug and transparent hugepage support enablement. Memory hotplug uses a zonelists_mutex to avoid a race when building zonelists. Reuse it to serialise watermark updates. [a.p.zijlstra@chello.nl: Older patch fixed the race with spinlock] Signed-off-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Barry Song <Baohua.Song@csr.com>
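A sketch of the serialisation described above, assuming the hotplug zonelists_mutex is the lock being reused and that the old function body moves into a __setup_per_zone_wmarks() helper (my rendering, not the verbatim patch):
  void setup_per_zone_wmarks(void)
  {
      mutex_lock(&zonelists_mutex);   /* same mutex memory hotplug takes
                                         while rebuilding zonelists */
      __setup_per_zone_wmarks();      /* the previously unlocked body */
      mutex_unlock(&zonelists_mutex);
  }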
2012-05-21  mm: page_isolation: MIGRATE_CMA isolation functions added  (Michal Nazarewicz)
This commit changes various functions that change the migrate type of pages and pageblocks between MIGRATE_ISOLATE and MIGRATE_MOVABLE in such a way that they can also work with the MIGRATE_CMA migrate type. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: mmzone: MIGRATE_CMA migration type added  (Michal Nazarewicz)
The MIGRATE_CMA migration type has two main characteristics: (i) only movable pages can be allocated from MIGRATE_CMA pageblocks and (ii) the page allocator will never change the migration type of MIGRATE_CMA pageblocks. This guarantees (to some degree) that a page in a MIGRATE_CMA pageblock can always be migrated somewhere else (unless there's no memory left in the system). It is designed to be used for allocating big chunks (eg. 10MiB) of physically contiguous memory. Once a driver requests contiguous memory, pages from MIGRATE_CMA pageblocks may be migrated away to create a contiguous block. To minimise the number of migrations, the MIGRATE_CMA migration type is the last type tried when the page allocator falls back to other migration types. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: page_alloc: change fallbacks array handling  (Michal Nazarewicz)
This commit adds a row for the MIGRATE_ISOLATE type to the fallbacks array, which was missing from it. It also changes the array traversal logic a little, making MIGRATE_RESERVE an end marker. The latter change removes the implicit MIGRATE_UNMOVABLE from the end of each row, which was read by the __rmqueue_fallback() function. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
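Roughly, the resulting table and traversal look like this (reconstructed from the description above, before the MIGRATE_CMA entries land; treat the exact rows as a sketch):
  static int fallbacks[MIGRATE_TYPES][3] = {
      [MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_RESERVE },
      [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_RESERVE },
      [MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
      [MIGRATE_RESERVE]     = { MIGRATE_RESERVE },  /* never used */
      [MIGRATE_ISOLATE]     = { MIGRATE_RESERVE },  /* never used */
  };

  /* MIGRATE_RESERVE now terminates each row instead of an implicit
   * trailing MIGRATE_UNMOVABLE entry */
  for (i = 0; fallbacks[migratetype][i] != MIGRATE_RESERVE; i++) {
      int fallback_type = fallbacks[migratetype][i];
      /* try to steal a free pageblock of fallback_type here */
  }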
2012-05-21  mm: page_alloc: introduce alloc_contig_range()  (Michal Nazarewicz)
This commit adds the alloc_contig_range() function, which tries to allocate a given range of pages. It tries to migrate all already-allocated pages that fall in the range, thus freeing them. Once all pages in the range are freed, they are removed from the buddy system and thus allocated for the caller to use. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
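Its interface, in a minimal usage sketch (the PFN values are made up; CMA itself calls it with its own MIGRATE_CMA type):
  /* take PFNs [start_pfn, start_pfn + nr_pages) out of the buddy
   * allocator as one physically contiguous range */
  int ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
                               MIGRATE_MOVABLE);
  if (ret)
      return ret;

  /* the caller now owns the pages; they are returned with: */
  free_contig_range(start_pfn, nr_pages);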
2012-05-21  mm: compaction: export some of the functions  (Michal Nazarewicz)
This commit exports some of the functions from the compaction.c file outside of it, adding their declarations to the internal.h header file so that other mm-related code can use them. This forces compaction.c to always be compiled (as opposed to being compiled only if CONFIG_COMPACTION is defined), but to avoid introducing code that the user did not ask for, part of compaction.c is now wrapped in an #ifdef. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: compaction: introduce isolate_freepages_range()  (Michal Nazarewicz)
This commit introduces the isolate_freepages_range() function, which generalises isolate_freepages_block() so that it can be used on arbitrary PFN ranges. isolate_freepages_block() is left with only minor changes. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: compaction: introduce map_pages()  (Michal Nazarewicz)
This commit creates a map_pages() function which maps pages freed using split_free_pages(). This merely moves some code from isolate_freepages() so that it can be reused in other places. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: compaction: introduce isolate_migratepages_range()  (Michal Nazarewicz)
This commit introduces the isolate_migratepages_range() function, which extracts functionality from isolate_migratepages() so that it can be used on arbitrary PFN ranges. The isolate_migratepages() function is implemented as a simple wrapper around isolate_migratepages_range(). Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21  mm: page_alloc: remove trailing whitespace  (Michal Nazarewicz)
Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Mel Gorman <mel@csn.ul.ie>
2012-05-21  ARM: dma-mapping: add support for IOMMU mapper  (Marek Szyprowski)
This patch adds a complete implementation of the DMA-mapping API for devices which have IOMMU support. This implementation tries to optimize dma address space usage by remapping all possible physical memory chunks into a single dma address space chunk. DMA address space is managed on top of the bitmap stored in the dma_iommu_mapping structure stored in device->archdata. Platform setup code has to initialize the parameters of the dma address space (base address, size, allocation precision order) with the arm_iommu_create_mapping() function. To reduce the size of the bitmap, all allocations are aligned to the specified order of base 4 KiB pages. The dma_alloc_* functions allocate physical memory in chunks, each with the alloc_pages() function, to avoid failing if the physical memory gets fragmented. In the worst case the allocated buffer is composed of 4 KiB page chunks. The dma_map_sg() function minimizes the total number of dma address space chunks by merging physical memory chunks into one larger dma address space chunk. If the requested chunk (scatter list entry) boundaries match physical page boundaries, most dma_map_sg() calls will result in creating only one chunk in dma address space. dma_map_page() simply creates a mapping for the given page(s) in the dma address space. All dma functions also perform the required cache operations like their counterparts from the arm linear physical memory mapping version. This patch contains code and fixes kindly provided by:
  - Krishna Reddy <vdumpa@nvidia.com>
  - Andrzej Pietrasiewicz <andrzej.p@samsung.com>
  - Hiroshi DOYU <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
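The platform-setup calls mentioned above, in outline (the bus, base and size values here are placeholders, not recommendations):
  struct dma_iommu_mapping *mapping;

  /* reserve a 128 MiB IO virtual address window starting at 0x80000000,
   * managed at order-0 (4 KiB) granularity */
  mapping = arm_iommu_create_mapping(&platform_bus_type,
                                     0x80000000, SZ_128M, 0);
  if (IS_ERR(mapping))
      return PTR_ERR(mapping);

  /* route this device's dma operations through the IOMMU mapping */
  arm_iommu_attach_device(dev, mapping);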
2012-05-21  ARM: dma-mapping: use alloc, mmap, free from dma_ops  (Marek Szyprowski)
This patch converts the dma_alloc/free/mmap_{coherent,writecombine} functions to use the generic alloc/free/mmap methods from the dma_map_ops structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been introduced to implement the writecombine methods. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
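With the generic ops in place, a writecombine allocation becomes an attribute on the common entry point rather than a separate call, along these lines (a sketch against the struct dma_attrs API of that era; dev and size are whatever the caller already has):
  struct dma_attrs attrs;
  dma_addr_t dma_handle;
  void *cpu_addr;

  init_dma_attrs(&attrs);
  dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);

  /* was: dma_alloc_writecombine(dev, size, &dma_handle, GFP_KERNEL) */
  cpu_addr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);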
2012-05-21  ARM: dma-mapping: remove redundant code and do the cleanup  (Marek Szyprowski)
This patch just performs a global cleanup in the DMA mapping implementation for the ARM architecture. Some of the tiny helper functions have been moved to the caller code, some have been merged together. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
2012-05-21  ARM: dma-mapping: move all dma bounce code to separate dma ops structure  (Marek Szyprowski)
This patch removes dma bounce hooks from the common dma mapping implementation on ARM architecture and creates a separate set of dma_map_ops for dma bounce devices. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
2012-05-21  ARM: dma-mapping: implement dma sg methods on top of any generic dma ops  (Marek Szyprowski)
This patch converts all dma_sg methods to be generic (independent of the current DMA mapping implementation for the ARM architecture). All dma sg operations are now implemented on top of the respective dma_map_page/dma_sync_single_for* operations from the dma_map_ops structure. Before this patch there were custom methods for all scatter/gather related operations. They iterated over the whole scatter list and called cache-related operations directly (which in turn checked whether we use the dma bounce code or not and called the respective version). This patch changes them not to use such a shortcut. Instead it provides a similar loop over the scatter list and calls methods from the device's dma_map_ops structure. This enables us to use device-dependent implementations of cache-related operations (direct linear or dma bounce) depending on the provided dma_map_ops structure. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
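Conceptually the new sg path is just a loop over the device's own dma_map_ops, something like the following (simplified sketch; the function name is hypothetical, and error handling and segment merging are omitted):
  static int example_map_sg(struct device *dev, struct scatterlist *sg, int nents,
                            enum dma_data_direction dir, struct dma_attrs *attrs)
  {
      struct dma_map_ops *ops = get_dma_ops(dev);
      struct scatterlist *s;
      int i;

      for_each_sg(sg, s, nents, i) {
          /* each entry goes through the per-device map_page method, so linear
           * and dmabounce devices each get their own cache handling */
          s->dma_address = ops->map_page(dev, sg_page(s), s->offset,
                                         s->length, dir, attrs);
      }
      return nents;
  }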
2012-05-21  ARM: dma-mapping: use asm-generic/dma-mapping-common.h  (Marek Szyprowski)
This patch modifies dma-mapping implementation on ARM architecture to use common dma_map_ops structure and asm-generic/dma-mapping-common.h helpers. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>