2010-10-26  AT91: reset routine cleanup, remove not needed icache flush  (Nicolas Ferre)
Generalize the assembler reset routine to allow use on several at91sam9 chips. This patch replaces the duplicated definitions of the SDRAM controller and RSTC registers with the standard header files. As part of this rework, we remove the icache flush, which is not needed as it is already done in the calling function, arm_machine_restart(). Rename at91sam9g20_reset.S to generalize to several chips.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
2010-10-26  AT91: trivial: align comment of at91sam9g20_reset with one more tab  (Nicolas Ferre)
Prepare for the next patch, which introduces longer names.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
2010-10-26  AT91: Fix AT91SAM9G20 reset as per the errata in the data sheet  (Peter Horton)
If the SDRAM is not cleanly shut down before reset it can be left driving the bus, which then stops the bootloader booting from NAND.
Signed-off-by: Peter Horton <phorton@bitbox.co.uk>
[nicolas.ferre@atmel.com: change file header line order]
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
2010-10-26  AT91: add board support for Pcontrol_G20  (Peter Gsellmann)
The board is a carrier board for Stamp9G20, with additional peripherals for a building automation system.
Signed-off-by: Peter Gsellmann <pgsellmann@portner-elektronik.at>
[nicolas.ferre@atmel.com: remove machine_desc.io_pg_offst and .phys_io]
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
2010-10-26  ACPI: install ACPI table handler before any dynamic tables being loaded  (Zhang Rui)
The ACPI table sysfs I/F is broken by commit 78f1699659963fff97975df44db6d5dbe7218e55 ("ACPI: processor: call _PDC early", Alex Chiang <achiang@hp.com>, Sun Dec 20 12:19:09 2009 -0700), because dynamic SSDT tables may be loaded in _PDC, before the ACPI table handler is installed. As a result, the sysfs I/F of these dynamic tables is located at /sys/firmware/acpi/tables instead of /sys/firmware/acpi/tables/dynamic, which is wrong. Invoke acpi_sysfs_init() before acpi_early_processor_set_pdc(), so that the table handler is installed before any dynamic tables are loaded.
https://bugzilla.kernel.org/show_bug.cgi?id=21142
CC: Dennis Jansen <dennis.jansen@web.de>
CC: Alex Chiang <achiang@hp.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2010-10-26  Merge branch 'perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux-2.6 into perf/urgent  (Ingo Molnar)
2010-10-26  sh: Switch dynamic IRQ creation to generic irq allocator.  (Paul Mundt)
Now that the genirq code provides an IRQ bitmap of its own and the necessary API to manipulate it, there's no need to keep our own version around anymore. In the process we kill off some unused IRQ reservation code, with future users now having to tie in to the genirq API as normal.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-26  sh: Tidy up genirq Kconfig bits.  (Paul Mundt)
Now that there's a HAVE_GENERIC_HARDIRQS, switch over.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-26  sh: Sanitize sparse irq  (Thomas Gleixner)
Switch over to the new allocator functions.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-26  sh: Expose physical addressing mode through cpuinfo.  (Paul Mundt)
CPUs can be in either the legacy 29-bit or 32-bit physical addressing modes. This follows the x86 approach of tracking the phys bits in cpuinfo and exposing it to userspace through procfs. This change was requested to permit kexec-tools to detect the physical addressing mode in order to determine the appropriate address mangling.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-26  drm/radeon/kms: properly compute group_size on 6xx/7xx  (Alex Deucher)
Needed for tiled surfaces.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-10-26  drm/radeon/kms: fix 2D tile height alignment in the r600 CS checker  (Alex Deucher)
Macro tile heights are aligned to num channels, not num banks. Noticed by Dave Airlie.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-10-26  drm/radeon/kms/evergreen: set the clear state to the blit state  (Alex Deucher)
The hw stores a default clear state for registers in the context range that can be initialized when the CP is set up. Set the blit state as the default clear state and use the CLEAR_STATE packet to load the blit state rather than loading it from an IB. This reduces overhead when doing bo moves using the 3D engine.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-10-26  drm/radeon/kms: don't poll dac load detect.  (Dave Airlie)
This is slightly destructive, CPU intensive, and can cause lockups.
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-10-25  net/sunrpc: Use static const char arrays  (Joe Perches)
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2010-10-25  nfs4: fix channel attribute sanity-checks  (J. Bruce Fields)
The sanity checks here are incorrect; in the worst case they allow values that crash the client. They're also over-reliant on the preprocessor.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2010-10-25  Merge branch 'for-next' of git://android.git.kernel.org/kernel/tegra  (Linus Torvalds)
* 'for-next' of git://android.git.kernel.org/kernel/tegra:
  spi: tegra: fix error setting on timeout
  spi: add spi_tegra driver
  tegra: harmony: enable PCI Express
  tegra: add PCI Express support
  tegra: add PCI Express clocks
  [ARM] tegra: Add APB DMA support
  [ARM] tegra: Add cpufreq support
  [ARM] tegra: common: Update common clock init table
  [ARM] tegra: clock: Add dvfs support, bug fixes, and cleanups
  [ARM] tegra: Add support for reading fuses
  [ARM] tegra: gpio: Add suspend and wake support
  [ARM] tegra: pinmux: add safe values, move tegra2, add suspend
  [ARM] tegra: add suspend and mirror irqs to legacy controller
  [ARM] tegra: Add legacy irq support
  [ARM] tegra: update iomap
2010-10-25  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/vapier/blackfin  (Linus Torvalds)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/vapier/blackfin:
  Blackfin: fix inverted anomaly 05000481 logic
  Blackfin: drop unused irq_panic()/DEBUG_ICACHE_CHECK
  Blackfin: ppi/spi/twi headers: add missing __BFP undef
  Blackfin: update defconfigs
  Blackfin: bfin_twi.h: start a common TWI header
  netdev: bfin_mac: push settings to platform resources
2010-10-25  split invalidate_inodes()  (Al Viro)
Pull removal of fsnotify marks into generic_shutdown_super(). Split umount-time work into a new function - evict_inodes(). Make sure that invalidate_inodes() will be able to cope with I_FREEING once we change locking in iput().
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: skip I_FREEING inodes in writeback_sb_inodes  (Christoph Hellwig)
Skip I_FREEING inodes just like I_WILL_FREE and I_NEW when walking the writeback lists. Currently this can't happen, but once we move from inode_lock to more fine-grained locking we can have an inode that's still on the writeback lists but has I_FREEING set, and we absolutely need to skip it here, just like we do for all other inode list walks. Based on a patch from Dave Chinner.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
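For context, the skip in the writeback loop is roughly of the following shape (a simplified sketch, not the exact hunk; requeue_io() stands in for however the inode is handed back for a later pass):

	/* fs/fs-writeback.c, inside writeback_sb_inodes() (sketch) */
	if (inode->i_state & (I_NEW | I_WILL_FREE | I_FREEING)) {
		/* never write back inodes that are being set up or torn down */
		requeue_io(inode);
		continue;
	}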
2010-10-25  fs: fold invalidate_list into invalidate_inodes  (Christoph Hellwig)
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: do not drop inode_lock in dispose_list  (Christoph Hellwig)
Despite the comment above it, we cannot safely drop the lock here. invalidate_list is called from many other places than just umount. Also switch to proper list macros now that we never drop the lock.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: inode split IO and LRU lists  (Nick Piggin)
The use of the same inode list structure (inode->i_list) for two different list constructs with different lifecycles and purposes makes it impossible to separate the locking of the different operations. Therefore, to enable the separation of the locking of the writeback and reclaim lists, split the inode->i_list into two separate lists dedicated to their specific tracking functions.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
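Conceptually the struct change replaces the shared list head with two dedicated ones; a sketch (the field names below follow later mainline usage and are otherwise an assumption):

	/* include/linux/fs.h (sketch): instead of a single i_list used for
	 * both writeback and reclaim, struct inode carries two list heads */
	struct list_head	i_wb_list;	/* writeback list of the owning bdi */
	struct list_head	i_lru;		/* inode LRU for reclaim */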
2010-10-25  fs: switch bdev inode bdi's correctly  (Dave Chinner)
bdev inodes can remain dirty even after their last close. Hence the BDI associated with the bdev->inode gets modified during the last close to point to the default BDI. However, the bdev inode still needs to be moved to the dirty lists of the new BDI, otherwise it will corrupt the writeback list it was left on. Add a new function bdev_inode_switch_bdi() to move all the bdi state from the old bdi to the new one safely. This is only a temporary measure until the bdev inode<->bdi lifecycle problems are sorted out.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
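A plausible shape for the new helper, assuming inode_lock still covers the writeback lists at this point (a sketch, not necessarily the exact code):

	/* fs/block_dev.c (sketch) */
	static void bdev_inode_switch_bdi(struct inode *inode,
					  struct backing_dev_info *dst)
	{
		spin_lock(&inode_lock);
		inode->i_data.backing_dev_info = dst;
		/* a dirty bdev inode must follow its data to the new BDI's
		 * dirty list, or writeback will never find it again */
		if (inode->i_state & I_DIRTY)
			list_move(&inode->i_wb_list, &dst->wb.b_dirty);
		spin_unlock(&inode_lock);
	}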
2010-10-25  fs: fix buffer invalidation in invalidate_list  (Christoph Hellwig)
We must not call invalidate_inode_buffers in invalidate_list unless the inode can be reclaimed. If we remove the buffer association of a busy inode, fsync won't find the buffers anymore. As invalidate_inode_buffers is called from various other sources than umount, this actually does matter in practice. While at it, change the loop to a more natural form and remove the WARN_ON for I_NEW, which we already tested a few lines above.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fsnotify: use dget_parent  (Christoph Hellwig)
Use dget_parent instead of opencoding it. This simplifies the code, but more importantly prepares for the more complicated locking for a parent dget in the dcache scale patch series. It means we do grab a reference to the parent now if it needs to be watched, but not with the specified mask. If this turns out to be a problem we'll have to revisit it, but for now let's keep as much of the dcache internals as possible inside dcache.[ch].
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
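The general shape of such a conversion, shown as an illustrative sketch rather than the actual fsnotify hunk:

	/* before: open-coded parent grab under the big dcache lock */
	spin_lock(&dcache_lock);
	parent = dget(dentry->d_parent);
	spin_unlock(&dcache_lock);

	/* after: let the dcache hand out a stable parent reference itself */
	parent = dget_parent(dentry);
	/* ... use parent ... */
	dput(parent);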
2010-10-25  smbfs: use dget_parent  (Christoph Hellwig)
Use dget_parent instead of opencoding it. This simplifies the code, but more importantly prepares for the more complicated locking for a parent dget in the dcache scale patch series. Note that the d_time assignment in smb_renew_times moves out of d_lock, but it's a single atomic 32-bit value, and that's what other sites setting it do already.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  exportfs: use dget_parent  (Christoph Hellwig)
Use dget_parent instead of opencoding it. This simplifies the code, but more importantly prepares for the more complicated locking for a parent dget in the dcache scale patch series.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: use RCU read side protection in d_validate  (Christoph Hellwig)
d_validate does a purely read lookup in the dentry hash, so use RCU read side locking instead of dcache_lock. Split out from a larger patch by Nick Piggin <npiggin@suse.de>.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: clean up dentry lru modification  (Christoph Hellwig)
Always do a list_del_init on the LRU to make sure the list_empty invariant for not being on the LRU always holds true, and fold dentry_lru_del_init into dentry_lru_del. Replace the dentry_lru_add_tail primitive with a dentry_lru_move_tail operation, which is simpler when the dentry is already on the list, which it always is. Move the list_empty check into dentry_lru_add to fit the scheme of the other lru helpers, and to simplify locking once we move to a separate LRU lock.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: split __shrink_dcache_sb  (Christoph Hellwig)
Currently __shrink_dcache_sb has an extremely awkward calling convention because it tries to please very different callers. Split out the main loop into a shrink_dentry_list helper, which gets called directly from shrink_dcache_sb for the cases where all dentries need to be pruned, or from __shrink_dcache_sb for pruning only a certain number of dentries.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: improve DCACHE_REFERENCED usage  (Nick Piggin)
The dentry referenced bit is only set when installing the dentry back onto the LRU. However, with the lazy LRU, the dentry can already be on the LRU list at dput time, thus missing out on setting the referenced bit. Fix this.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
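The fix plausibly boils down to setting the flag unconditionally in dput() (a sketch; it relies on dentry_lru_add() being a no-op when the dentry is already on the LRU, as per the lru cleanup above):

	/* dput() (sketch): mark the dentry referenced even when it is
	 * already sitting on the LRU, instead of only when (re)adding it */
	dentry->d_flags |= DCACHE_REFERENCED;
	dentry_lru_add(dentry);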
2010-10-25  fs: use percpu counter for nr_dentry and nr_dentry_unused  (Christoph Hellwig)
The nr_dentry stat is a globally touched cacheline and atomic operation twice over the lifetime of a dentry. It is used for the benefit of userspace only. Turn it into a per-cpu counter and always decrement it in d_free instead of doing various batching operations to reduce lock hold times in the callers. Based on an earlier patch from Nick Piggin <npiggin@suse.de>.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
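The per-cpu scheme can be sketched as follows (simplified; get_nr_dentry() is the obvious read-side helper and its exact name is an assumption):

	/* fs/dcache.c (sketch) */
	static DEFINE_PER_CPU(unsigned int, nr_dentry);

	static int get_nr_dentry(void)
	{
		int i, sum = 0;

		for_each_possible_cpu(i)
			sum += per_cpu(nr_dentry, i);
		return sum < 0 ? 0 : sum;
	}

	/* d_alloc(): this_cpu_inc(nr_dentry);
	 * d_free():  this_cpu_dec(nr_dentry);
	 * the /proc handler sums the per-cpu values only when userspace reads them */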
2010-10-25  fs: simplify __d_free  (Christoph Hellwig)
Remove d_callback and always call __d_free with an RCU head.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: take dcache_lock inside __d_path  (Christoph Hellwig)
All callers take dcache_lock just around the call to __d_path, so move the locking into it in preparation for getting rid of dcache_lock.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: do not assign default i_ino in new_inode  (Christoph Hellwig)
Instead of always assigning an increasing inode number in new_inode, move the call to assign it into those callers that actually need it. For now, the set of callers that need it is estimated conservatively, that is, the call is added to all filesystems that do not assign an i_ino by themselves. For a few more filesystems we can avoid assigning any inode number given that they aren't user visible, and for others it could be done lazily when an inode number is actually needed, but that's left for later patches.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: introduce a per-cpu last_ino allocator  (Eric Dumazet)
new_inode() dirties a contended cache line to get increasing inode numbers. This limits performance on workloads that cause significant parallel inode allocation. Solve this problem by using a per_cpu variable fed by the shared last_ino in batches of 1024 allocations. This reduces contention on the shared last_ino, and gives the same spread of inode numbers as before (i.e. the same wraparound after 2^32 allocations).
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
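The batching idea, as a simplified sketch (the helper name next_ino() is illustrative; mainline later exposes a very similar function as get_next_ino()):

	#define LAST_INO_BATCH 1024
	static DEFINE_PER_CPU(unsigned int, last_ino);

	static unsigned int next_ino(void)
	{
		unsigned int *p = &get_cpu_var(last_ino);
		unsigned int res = *p;

	#ifdef CONFIG_SMP
		/* refill the per-cpu window from the shared counter once
		 * every LAST_INO_BATCH allocations */
		if (unlikely((res & (LAST_INO_BATCH - 1)) == 0)) {
			static atomic_t shared_last_ino;
			int next = atomic_add_return(LAST_INO_BATCH, &shared_last_ino);

			res = next - LAST_INO_BATCH;
		}
	#endif
		*p = ++res;
		put_cpu_var(last_ino);
		return res;
	}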
2010-10-25  new helper: ihold()  (Al Viro)
Clones an existing reference to inode; caller must already hold one.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
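The helper is essentially a checked reference bump; a sketch of what it amounts to (the WARN_ON guards against "cloning" a reference the caller does not actually hold):

	void ihold(struct inode *inode)
	{
		WARN_ON(atomic_inc_return(&inode->i_count) < 2);
	}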
2010-10-25  fs: remove inode_add_to_list/__inode_add_to_list  (Christoph Hellwig)
Split up inode_add_to_list/__inode_add_to_list. Locking for the two lists will be split soon so these helpers really don't buy us much anymore. The __ prefixes for the sb list helpers will go away soon, but until inode_lock is gone we'll need them to distinguish between the locked and unlocked variants.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: move i_count increments into find_inode/find_inode_fast  (Christoph Hellwig)
Now that iunique is not abusing find_inode anymore we can move the i_ref increment back to where it belongs.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: Stop abusing find_inode_fast in iunique  (Christoph Hellwig)
Stop abusing find_inode_fast for iunique and opencode the inode hash walk. Introduce a new iunique_lock to protect the iunique counters once inode_lock is removed. Based on a patch originally from Nick Piggin.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
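The reworked allocation loop plausibly looks like this (a sketch under the assumption that inode_lock still protects the hash here; test_inode_iunique() names the open-coded bucket walk and is illustrative):

	static DEFINE_SPINLOCK(iunique_lock);
	static unsigned int iunique_counter;

	ino_t iunique(struct super_block *sb, ino_t max_reserved)
	{
		ino_t res;

		spin_lock(&inode_lock);
		spin_lock(&iunique_lock);
		do {
			if (iunique_counter <= max_reserved)
				iunique_counter = max_reserved + 1;
			res = iunique_counter++;
			/* retry while some inode of this sb already uses res */
		} while (!test_inode_iunique(sb, res));
		spin_unlock(&iunique_lock);
		spin_unlock(&inode_lock);

		return res;
	}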
2010-10-25  fs: Factor inode hash operations into functions  (Dave Chinner)
Before replacing the inode hash locking with a more scalable mechanism, factor the removal of the inode from the hashes rather than open coding it in several places. Based on a patch originally from Nick Piggin.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  fs: Implement lazy LRU updates for inodes  (Nick Piggin)
Convert the inode LRU to use lazy updates to reduce lock and cacheline traffic. We avoid moving inodes around in the LRU list during iget/iput operations so these frequent operations don't need to access the LRUs. Instead, we defer the refcount checks to reclaim time and use a per-inode state flag, I_REFERENCED, to tell reclaim that iget has touched the inode in the past. This means that only reclaim should be touching the LRU with any frequency, hence significantly reducing lock acquisitions and the amount of contention on LRU updates. This also removes the inode_in_use list, which means we now only have one list for tracking the inode LRU status. This makes it much simpler to split out the LRU list operations under its own lock.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
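On the reclaim side, the deferred check might look roughly like this (a sketch of the second-chance logic, not the exact prune_icache() hunk):

	/* prune_icache() walks inode_lru from the tail (sketch) */
	if (atomic_read(&inode->i_count) || (inode->i_state & ~I_REFERENCED)) {
		/* inode became busy again: lazily drop it from the LRU now */
		list_del_init(&inode->i_lru);
		continue;
	}
	if (inode->i_state & I_REFERENCED) {
		/* touched since it was queued: give it a second chance */
		inode->i_state &= ~I_REFERENCED;
		list_move(&inode->i_lru, &inode_lru);
		continue;
	}
	/* otherwise the inode is cold and can be evicted */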
2010-10-25  fs: Convert nr_inodes and nr_unused to per-cpu counters  (Dave Chinner)
The number of inodes allocated does not need to be tied to the addition or removal of an inode to/from a list. If we are not tied to a list lock, we could update the counters when inodes are initialised or destroyed, but to do that we need to convert the counters to be per-cpu (i.e. independent of a lock). This means that we have the freedom to change the list/locking implementation without needing to care about the counters. Based on a patch originally from Eric Dumazet.
[AV: cleaned up a bit, fixed build breakage on weird configs]
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  vfs: fix infinite loop caused by clone_mnt race  (Miklos Szeredi)
If clone_mnt() happens while mnt_make_readonly() is running, the cloned mount might have the MNT_WRITE_HOLD flag set, which results in mnt_want_write() spinning forever on this mount. This needs CAP_SYS_ADMIN to trigger deliberately and is unlikely to happen accidentally. But if it does happen it can hang the machine.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
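The fix presumably amounts to masking the transient flag when the mount flags are copied into the clone (a sketch; that it lives exactly in clone_mnt() is an assumption based on the description above):

	/* fs/namespace.c, clone_mnt() (sketch) */
	mnt->mnt_flags = old->mnt_flags & ~MNT_WRITE_HOLD;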
2010-10-25  switch hfs to hlist_add_fake()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  list.h: new helper - hlist_add_fake()  (Al Viro)
Make a node look as if it were on an hlist, with hlist_del() working correctly. Usable without any locking... Convert a couple of places where we want to do that for inode->i_hash.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
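The helper itself is a one-liner: pointing pprev at the node's own next field means a later hlist_del() only writes to the node's own storage (shown as a sketch):

	static inline void hlist_add_fake(struct hlist_node *n)
	{
		n->pprev = &n->next;
	}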
2010-10-25  new helper: inode_unhashed()  (Al Viro)
Note: for race-free uses you need inode_lock held.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  unexport invalidate_inodes  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2010-10-25  smbfs never retains inodes with zero refcount in the first place  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>