2011-07-20  anonfd: fix missing declaration (Tomasz Stanislawski)
The forward declaration of struct file_operations is added to avoid compilation warnings. Signed-off-by: Tomasz Stanislawski <t.stanislaws@samsung.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  xfs: make use of new shrinker callout for the inode cache (Dave Chinner)
Convert the inode reclaim shrinker to use the new per-sb shrinker operations. This allows much bigger reclaim batches to be used, and allows the XFS inode cache to be shrunk in proportion with the VFS dentry and inode caches. This avoids the problem of the VFS caches being shrunk significantly before the XFS inode cache is shrunk, which resulted in imbalances between the caches during reclaim. Signed-off-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  vfs: increase shrinker batch size (Dave Chinner)
Now that the per-sb shrinker is responsible for shrinking 2 or more caches, increase the batch size to keep economies of scale for shrinking each cache. Increase the shrinker batch size to 1024 objects. To allow for a large increase in batch size, add a conditional reschedule to prune_icache_sb() so that we don't hold the LRU spin lock for too long. This mirrors the behaviour of __shrink_dcache_sb(), and allows us to increase the batch size without needing to worry about problems caused by long lock hold times. To ensure that filesystems using the per-sb shrinker callouts don't cause problems, document that the object freeing method must reschedule appropriately inside loops. Signed-off-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
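A minimal sketch of the rescheduling pattern described above, assuming a list-based LRU guarded by a spinlock (the list and lock names here are illustrative, not the exact prune_icache_sb() code):

    /*
     * Illustrative only: scan up to nr_to_scan objects from an LRU
     * under a spinlock, but periodically drop the lock and yield so
     * a 1024-object batch cannot cause long lock hold times.
     * cond_resched_lock() is the stock kernel helper for this: it
     * drops the lock, reschedules if needed, and retakes the lock.
     */
    spin_lock(&lru_lock);
    for (scanned = 0; scanned < nr_to_scan; scanned++) {
            if (list_empty(&lru_list))
                    break;
            /* ... isolate and free one object from the LRU ... */
            cond_resched_lock(&lru_lock);
    }
    spin_unlock(&lru_lock);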
2011-07-20  superblock: add filesystem shrinker operations (Dave Chinner)
Now that we have a per-superblock shrinker implementation, we can add a filesystem-specific callout to it to allow filesystem-internal caches to be shrunk by the superblock shrinker. Rather than perpetuate the multipurpose shrinker callback API (i.e. nr_to_scan == 0 meaning "tell me how many objects are freeable in the cache"), two operations will be added. The first returns the number of objects that are freeable; the second is the actual shrinker call. Signed-off-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
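The split callout pair maps naturally onto two methods on struct super_operations; a sketch of the shape this series adds (signatures reproduced from memory, so treat the exact types as approximate):

    struct super_operations {
            /* ... existing methods ... */

            /* how many fs-internal objects are currently freeable? */
            int  (*nr_cached_objects)(struct super_block *sb);
            /* free up to nr_to_scan of those objects */
            void (*free_cached_objects)(struct super_block *sb, int nr_to_scan);
    };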
2011-07-20  inode: remove iprune_sem (Dave Chinner)
Now that we have per-sb shrinkers with a lifecycle that is a subset of the superblock lifecycle and can reliably detect a filesystem being unmounted, there is no longer any race condition for the iprune_sem to protect against. Hence we can remove it. Signed-off-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  superblock: introduce per-sb cache shrinker infrastructure (Dave Chinner)
With context based shrinkers, we can implement a per-superblock shrinker that shrinks the caches attached to the superblock. We currently have global shrinkers for the inode and dentry caches that split up into per-superblock operations via a coarse proportioning method that does not batch very well. The global shrinkers also have a dependency - dentries pin inodes - so we have to be very careful about how we register the global shrinkers so that the implicit call order is always correct. With a per-sb shrinker callout, we can encode this dependency directly into the per-sb shrinker, hence avoiding the need for strictly ordering shrinker registrations. We also have no need for any proportioning code because the shrinker subsystem already provides this functionality across all shrinkers. Allowing the shrinker to operate on a single superblock at a time means that we do fewer superblock list traversals and less locking, and reclaim should batch more effectively. This should result in less CPU overhead for reclaim and potentially faster reclaim of items from each filesystem. Signed-off-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  xfs: add size update tracepoint to IO completion (Dave Chinner)
For improving insight into IO completion behaviour. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alex Elder <aelder@sgi.com>
2011-07-20  xfs: convert AIL cursors to use struct list_head (Dave Chinner)
The list of active AIL cursors uses a roll-your-own linked list with special casing for the AIL push cursor. Simplify this code by replacing the list with standard struct list_head lists, and use a separate list_head to track the active cursors. This allows us to treat the AIL push cursor as a generic cursor rather than as a special case, further simplifying the code. Further, fix the duplicate push cursor initialisation that the special case handling was hiding, and clean up all the comments around the active cursor list handling. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alex Elder <aelder@sgi.com>
2011-07-20  xfs: remove confusing ail cursor wrapper (Dave Chinner)
xfs_trans_ail_cursor_set() doesn't set the cursor to the current log item, it sets it to the next item. There is already a function for doing this - xfs_trans_ail_cursor_next() - and the _set function is simply a two line wrapper. Remove it and open code the setting of the cursor in the two locations that call it to remove the confusion. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alex Elder <aelder@sgi.com>
2011-07-20  xfs: use a cursor for bulk AIL insertion (Dave Chinner)
Delayed logging can insert tens of thousands of log items into the AIL at the same LSN. When the committing of log commit records occurs, we can get insertions occurring at an LSN that is not at the end of the AIL. If there are thousands of items in the AIL on the tail LSN, each insertion has to walk the AIL to find the correct place to insert the new item into the AIL. This can consume large amounts of CPU time and block other operations from occurring while the traversals are in progress. To avoid this repeated walk, use an AIL cursor to record where we should be inserting the new items into the AIL without having to repeat the walk. The cursor infrastructure already provides this functionality for push walks, so this is a simple extension of existing code. While this will not avoid the initial walk, it will avoid repeating it tens of thousands of times during a single checkpoint commit. This version includes logic improvements from Christoph Hellwig. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alex Elder <aelder@sgi.com>
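The idea generalises to any ordered list insert: remember where the last item landed and resume searching from there. A standalone sketch with hypothetical types (not the XFS code):

    struct item { long lsn; struct item *next; };
    struct cursor { struct item *last; };   /* where the previous insert landed */

    /*
     * Insert in ascending lsn order, starting the search at the cursor
     * instead of the list head. Assuming successive inserts carry
     * non-decreasing LSNs (as during a checkpoint commit), inserting N
     * items costs one walk in total instead of N walks.
     */
    static void insert_with_cursor(struct item **head, struct cursor *cur,
                                   struct item *new)
    {
            struct item **pos = cur->last ? &cur->last->next : head;

            while (*pos && (*pos)->lsn <= new->lsn)
                    pos = &(*pos)->next;
            new->next = *pos;
            *pos = new;
            cur->last = new;
    }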
2011-07-20  xfs: failure mapping nfs fh to inode should return ESTALE (J. Bruce Fields)
On xfs exports, nfsd is incorrectly returning ENOENT instead of ESTALE on attempts to use a filehandle of a deleted file (spotted with pynfs test PUTFH3). The ENOENT was coming from xfs_iget. (It's tempting to wonder whether we should just map all xfs_iget errors to ESTALE, but I don't believe so--xfs_iget can also return ENOMEM at least, which we wouldn't want mapped to ESTALE.) While we're at it, the other return of ENOENT in xfs_nfs_get_inode() also looks wrong. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Alex Elder <aelder@sgi.com>
2011-07-20  xfs: Remove the second parameter to xfs_sb_count() (Chandra Seetharaman)
Remove the second parameter to xfs_sb_count() since all callers of the function set it. Also, fix the header comment regarding the function being called periodically. Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com> Signed-off-by: Alex Elder <aelder@sgi.com>
2011-07-20  Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds)
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  signal: align __lock_task_sighand() irq disabling and RCU
  softirq,rcu: Inform RCU of irq_exit() activity
  sched: Add irq_{enter,exit}() to scheduler_ipi()
  rcu: protect __rcu_read_unlock() against scheduler-using irq handlers
  rcu: Streamline code produced by __rcu_read_unlock()
  rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special
  rcu: decrease rcu_report_exp_rnp coupling with scheduler
2011-07-20  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds)
* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  sched: Avoid creating superfluous NUMA domains on non-NUMA systems
  sched: Allow for overlapping sched_domain spans
  sched: Break out cpu_power from the sched_group structure
2011-07-20  time: Fix stupid KERN_WARN compile issue (John Stultz)
Terribly embarrassing. Don't know how I committed this, but it's KERN_WARNING not KERN_WARN. This fixes the following compile error:
kernel/time/timekeeping.c: In function ‘__timekeeping_inject_sleeptime’:
kernel/time/timekeeping.c:608: error: ‘KERN_WARN’ undeclared (first use in this function)
kernel/time/timekeeping.c:608: error: (Each undeclared identifier is reported only once
kernel/time/timekeeping.c:608: error: for each function it appears in.)
kernel/time/timekeeping.c:608: error: expected ‘)’ before string constant
make[2]: *** [kernel/time/timekeeping.o] Error 1
Reported-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: John Stultz <john.stultz@linaro.org>
2011-07-20  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds)
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86, reboot: Make Dell Latitude E6320 use reboot=pci
  x86, doc only: Correct real-mode kernel header offset for init_size
  x86: Disable AMD_NUMA for 32bit for now
2011-07-20  mmc: omap_hsmmc: Remove unused iclk (Balaji T K)
After the runtime conversion to handle the clocks, the iclk node is no longer used. However, the fclk node is still used to get the clock rate. Signed-off-by: Balaji T K <balajitk@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: omap_hsmmc: add runtime pm support (Balaji T K)
* Add runtime pm support to the HSMMC host controller.
* Use the runtime pm API to enable/disable the HSMMC clock.
* Use the runtime autosuspend APIs to enable an auto suspend delay.
Based on the OMAP HSMMC runtime implementation by Kevin Hilman and Kishore Kadiyala. Signed-off-by: Balaji T K <balajitk@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: omap_hsmmc: Remove lazy_disable (Balaji T K)
The lazy_disable framework in OMAP HSMMC manages multiple low power states, and the card is powered off after an inactivity time of 8 seconds. Based on previous discussion on the list, card power (regulator) handling (when to power OFF/ON) should ideally be handled by the core layer. Remove the usage of lazy disable to allow the core layer _only_ to handle card power. With the removal of the lazy disable framework, MMC regulators are left ON until MMC_POWER_OFF via set_ios. Signed-off-by: Balaji T K <balajitk@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: core: Set non-default Drive Strength via platform hook (Philip Rakity)
A non-default Drive Strength cannot be set automatically. It is a function of the board design and can only be set if there is a specific platform handler, which needs to take the board design into account. Pass the necessary information to the platform code. For example: the card and host controller may indicate that they support HIGH and LOW drive strength. There is no way to know what should be chosen without specific board knowledge; setting HIGH may lead to reflections and setting LOW may not suffice. There is no mechanism (like ethernet duplex or speed pulses) to determine what should be done automatically. If no platform handler is defined, use the default value. Signed-off-by: Philip Rakity <prakity@marvell.com> Reviewed-by: Arindam Nath <arindam.nath@amd.com> Signed-off-by: Chris Ball <cjb@laptop.org>
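A hypothetical shape for such a board hook (all names here are invented for illustration; the actual callback added by the patch may differ):

    enum drive_strength { DRIVE_TYPE_B, DRIVE_TYPE_A, DRIVE_TYPE_C, DRIVE_TYPE_D };

    /*
     * Board code picks a drive strength from what the card and host
     * both claim to support, falling back to the default when it has
     * no board-specific knowledge. Purely illustrative.
     */
    static int board_select_drive_strength(unsigned int card_caps,
                                           unsigned int host_caps,
                                           int default_type)
    {
            /* e.g. short, well-matched traces: a lower drive suffices */
            if ((card_caps & host_caps) & (1 << DRIVE_TYPE_A))
                    return DRIVE_TYPE_A;
            return default_type;
    }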
2011-07-20  mmc: block: add handling for two parallel block requests in issue_rw_rq (Per Forlin)
Change mmc_blk_issue_rw_rq() to become asynchronous. The execution flow looks like this:
* The mmc-queue calls issue_rw_rq(), which sends the request to the host and returns back to the mmc-queue.
* The mmc-queue calls issue_rw_rq() again with a new request.
* This new request is prepared in issue_rw_rq(), then it waits for the active request to complete before pushing it to the host.
* When the mmc-queue is empty it will call issue_rw_rq() with a NULL req to finish off the active request without starting a new request.
Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
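In outline, the two-slot pipeline looks like this (pseudocode sketch with invented helper names, not the driver source):

    /*
     * Sketch: prepare the new request while the previous one is still
     * in flight; wait only when the previous request must complete.
     */
    static void issue_rw_rq(struct mmc_queue *mq, struct request *req)
    {
            if (req)
                    prepare_slot(mq, req);          /* dma_map_sg() etc. */

            if (mq->active)                         /* previous request in flight */
                    wait_for_request(mq->active);

            if (req)
                    mq->active = start_on_host(mq); /* new request becomes active */
            else
                    mq->active = NULL;              /* NULL req: just finish off */
    }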
2011-07-20  mmc: queue: add a second mmc queue request member (Per Forlin)
Add an additional mmc queue request instance to make way for two active block requests. One request may be active while the other request is being prepared. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: block: move error path in issue_rw_rq to a separate function. (Per Forlin)
Break out code without functional changes. This simplifies the code and makes way for handling two parallel requests. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: block: add a block request prepare function (Per Forlin)
Break out code from mmc_blk_issue_rw_rq to create a block request prepare function. This doesn't change any functionality. This helps when handling more than one active block request. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: block: add member in mmc queue struct to hold request data (Per Forlin)
The way the request data is organized in the mmc queue struct only allows processing of one request at a time. This patch adds a new struct to hold mmc queue request data such as the sg list, request, blk request and bounce buffers, and updates any functions depending on the mmc queue struct. This prepares for using multiple active requests in one mmc queue. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: mmc_test: test to measure how sg_len affects performance (Per Forlin)
Add a test that measures how the mmc bandwidth depends on the number of sg elements in the sg list. The transfer size is fixed and the sg length goes from a few up to 512. The purpose is to measure the overhead caused by multiple sg elements. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: mmc_test: add test for non-blocking transfers (Per Forlin)
Add four tests for read and write performance per different transfer size, 4k to 4M:
* Read using blocking mmc request
* Read using non-blocking mmc request
* Write using blocking mmc request
* Write using non-blocking mmc request
The host driver must support pre_req() and post_req() in order to run the non-blocking test cases. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: mmc_test: add debugfs file to list all tests (Per Forlin)
Add a debugfs file "testlist" to print all available tests. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: mmci: implement pre_req() and post_req() (Per Forlin)
pre_req() runs dma_map_sg() and prepares the dma descriptor for the next mmc data transfer. post_req() runs dma_unmap_sg(). If pre_req() is not called before mmci_request(), mmci_request() will prepare the cache and dma just as it did before. It is optional to use pre_req() and post_req() for mmci. Signed-off-by: Per Forlin <per.forlin@linaro.org> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: omap_hsmmc: add support for pre_req and post_req (Per Forlin)
pre_req() runs dma_map_sg(), post_req() runs dma_unmap_sg(). If pre_req() is not called before omap_hsmmc_request(), dma_map_sg will be issued before starting the transfer. It is optional to use pre_req(). If issuing pre_req(), post_req() must be called as well. Signed-off-by: Per Forlin <per.forlin@linaro.org> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: core: add non-blocking mmc request function (Per Forlin)
Previously there has only been one function, mmc_wait_for_req(), to start and wait for a request. This patch adds:
* mmc_start_req() - starts a request without waiting. If there is an ongoing request, wait for completion of that request, then start the new one and return. Does not wait for the new command to complete.
This patch also adds new function members in struct mmc_host_ops, only called from core.c:
* pre_req - asks the host driver to prepare for the next job
* post_req - asks the host driver to clean up after a completed job
The intention is to use pre_req() and post_req() to do cache maintenance while a request is active. pre_req() can be called while a request is active to minimize latency to start the next job. post_req() can be used after the next job is started to clean up the request. This will minimize the host driver request end latency. post_req() is typically used before ending the block request and handing over the buffer to the block layer. Add a host-private member in mmc_data to be used by pre_req to mark the data. The host driver will then check this mark to see if the data is prepared or not. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
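A sketch of the ops additions described (the pre_req/post_req signatures below follow the 3.x-era API as best remembered; verify against include/linux/mmc/host.h):

    struct mmc_host_ops {
            /* ... */
            /* prepare (e.g. dma_map_sg) the next request while another runs */
            void (*pre_req)(struct mmc_host *host, struct mmc_request *req,
                            bool is_first_req);
            /* clean up (e.g. dma_unmap_sg) after the request has completed */
            void (*post_req)(struct mmc_host *host, struct mmc_request *req,
                             int err);
    };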
2011-07-20  mmc: MAINTAINERS: change omap_hsmmc maintenance to orphan (Madhusudhan Chikkature)
Update the OMAP HSMMC entry in the MAINTAINERS file as I will no longer be able to maintain this driver. Signed-off-by: Madhusudhan Chikkature <madhu.cr@ti.com> [khilman@ti.com: change to Orphan rather than complete removal] Signed-off-by: Kevin Hilman <khilman@ti.com> Acked-by: Venkatraman S <svenkatr@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: kconfig: remove EXPERIMENTAL from the DMA selection of atmel-mci (Nicolas Ferre)
This driver has been used for years with this option enabled. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: atmel-mci: add suspend/resume support (Nicolas Ferre)
Take care of slots while going to suspend state. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Reviewed-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: sdhci-pci: allow 8-bit bus width for Intel Medfield eMMCs (Adrian Hunter)
Unless MMC_CAP_8_BIT_DATA is set, the bus width defaults to 4. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: sdhci-pci: add 8-bit bus width support for mrst hc0 (Major Lee)
Hook up platform_8bit_width to support 8-bit bus width. Signed-off-by: Major Lee <major_lee@wistron.com> Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Dirk Brandewie <dirk.brandewie@gmail.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: dw_mmc: reset FIFO after an error (James Hogan)
If an error occurs midway through a transaction (such as a missing CRC status response after the 2nd block written out of 3), then the FIFO may still contain data which will interfere with the next transaction. Therefore, after an error has been detected, reset the FIFO using the CTRL register. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20mmc: dw_mmc: handle "no CRC status" errorJames Hogan
When a data write isn't acknowledged by the card (so no CRC status token is detected after the data), the error -EIO is returned instead of the -ETIMEDOUT expected by mmc_test 15 - "Correct xfer_size at write (start failure)" and 17 "Correct xfer_size at write (midway failure)". In PIO mode the reported number of bytes transferred is also exaggerated since the last block actually failed. Handle the "Write no CRC" error specially, setting the error to -ETIMEDOUT and setting the bytes_xferred to 0. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: dw_mmc: remove unnecessary error messages (James Hogan)
Remove error messages for timeout and CRC failure, since the error code already indicates the problem. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: dw_mmc: fix stop when fallen back to PIO (James Hogan)
There are several situations when dw_mci_submit_data_dma() decides to fall back to PIO mode instead of using DMA, due to a short (to avoid overhead) or "complex" (e.g. with unaligned buffers) transaction, even though host->use_dma is set. However dw_mci_stop_dma() decides whether to stop DMA or set the EVENT_XFER_COMPLETE event based on host->use_dma. When falling back to PIO mode this results in data timeout errors getting missed and the driver locking up. Therefore add host->using_dma to indicate whether the current transaction is using dma or not, and adjust dw_mci_stop_dma() to use that instead. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: sdhci-s3c: Fix return value in sdhci_s3c_suspend/resume() (Wonil Choi)
Signed-off-by: Wonil Choi <wonil22.choi@samsung.com> Signed-off-by: Minho Ban <mhban@samsung.com> Cc: Ben Dooks <ben-linux@fluff.org> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: sdhci: specify maximum discard timeout (Adrian Hunter)
In general, an SDHC hardware timeout cannot be avoided, so the maximum timeout is specified to limit the maximum discard size. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: queue: let host controllers specify maximum discard timeout (Adrian Hunter)
Some host controllers will not operate without a hardware timeout that is limited in value. However large discards require large timeouts, so there needs to be a way to specify the maximum discard size. A host controller driver may now specify the maximum discard timeout possible so that max_discard_sectors can be calculated. However, for eMMC when the High Capacity Erase Group Size is not in use, the timeout calculation depends on clock rate which may change. For that case Preferred Erase Size is used instead. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
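The resulting size limit is roughly the timeout budget divided by the per-erase-group timeout (hypothetical helper; as the message notes, the real code additionally falls back to Preferred Erase Size when the eMMC erase timeout scales with the clock rate):

    /*
     * How many sectors can one discard cover without exceeding the
     * host's maximum hardware timeout? Illustrative names only.
     */
    static unsigned int calc_max_discard_sectors(unsigned int max_timeout_ms,
                                                 unsigned int grp_erase_ms,
                                                 unsigned int sectors_per_grp)
    {
            if (!grp_erase_ms)
                    return 0;           /* 0: discard stays disabled */
            return (max_timeout_ms / grp_erase_ms) * sectors_per_grp;
    }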
2011-07-20  mmc: sdhci-esdhc-imx: remove "WP" from flag ESDHC_FLAG_GPIO_FOR_CD_WP (Shawn Guo)
The use of the flag ESDHC_FLAG_GPIO_FOR_CD_WP is entirely CD related, so there is no need to mention WP in the flag name. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: sdhci-esdhc-imx: SDHCI_CARD_PRESENT does not get cleared (Shawn Guo)
The function esdhc_readl_le intends to clear the bit SDHCI_CARD_PRESENT when the card detect gpio indicates there is no card, but it does not actually clear the bit. This patch fixes that. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Acked-by: Wolfram Sang <w.sang@pengutronix.de> Cc: <stable@kernel.org> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: sdhci: fix interrupt storm from card detection (Shawn Guo)
The issue was initially found by Eric Benard as below. http://permalink.gmane.org/gmane.linux.ports.arm.kernel/108031 Not sure about other SDHCI based controllers, but on Freescale eSDHC the SDHCI_INT_CARD_INSERT bit will immediately be set again when it gets cleared, if a card is inserted. The driver needs to mask the irq to prevent an interrupt storm which will freeze the system. The SDHCI_INT_CARD_REMOVE bit behaves the same way. The patch fixes the problem based on the initial idea from Eric Benard. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Cc: Eric Benard <eric@eukrea.com> Tested-by: Arnaud Patard <arnaud.patard@rtp-net.org> Signed-off-by: Chris Ball <cjb@laptop.org>
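The masking pattern amounts to listening only for the opposite transition. A simplified sketch of the idea (the SDHCI register macros are real; the helper and the int-enable bookkeeping are illustrative, not the actual sdhci.c hunk):

    /*
     * After handling a card-insert interrupt, disable further insert
     * interrupts and enable remove interrupts (and vice versa), so a
     * status bit that re-asserts immediately cannot storm. Sketch only.
     */
    static void rearm_card_detect_irqs(struct sdhci_host *host, bool present)
    {
            u32 want = present ? SDHCI_INT_CARD_REMOVE : SDHCI_INT_CARD_INSERT;
            u32 drop = present ? SDHCI_INT_CARD_INSERT : SDHCI_INT_CARD_REMOVE;
            u32 ier  = sdhci_readl(host, SDHCI_INT_ENABLE);

            ier = (ier | want) & ~drop;
            sdhci_writel(host, ier, SDHCI_INT_ENABLE);
            sdhci_writel(host, ier, SDHCI_SIGNAL_ENABLE);
    }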
2011-07-20  mmc: tmio: Fix race condition resulting in spurious interrupts (Paul Parsons)
There is a race condition in the tmio_mmc_irq() interrupt handler, caused by the presence of a while loop, which results in warnings of spurious interrupts. This was found on an HP iPAQ hx4700 whose HTC ASIC3 reportedly incorporates the Toshiba TC6380AF controller. Towards the end of a multiple read (CMD18) operation the handler clears the final RXRDY status bit in the first loop iteration, sees the DATAEND status bit at the bottom of the loop, and so clears the DATAEND status bit in the second loop iteration. However the DATAEND interrupt is still queued in the system somewhere and can't be delivered until the handler has returned. This second interrupt is then reported as spurious in the next call to the handler. Likewise for single read (CMD17) operations. And something similar occurs for multiple write (CMD25) and single write (CMD24) operations, where CMDRESPEND and TXRQ status bits are cleared in a single call. In these cases the interrupt handler clears two separate interrupts when it should only clear the one interrupt for which it was invoked. The fix is to remove the while loop. Signed-off-by: Paul Parsons <lost.distance@yahoo.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: tmio: Fix build error without CONFIG_MMC_SDHI (Paul Parsons)
Only compile tmio_mmc_dma.o when CONFIG_MMC_SDHI is selected (as y or m). Signed-off-by: Paul Parsons <lost.distance@yahoo.com> Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20  mmc: dw_mmc: handle unaligned buffers and sizes (James Hogan)
Update functions for PIO pushing and pulling data to and from the FIFO so that they can handle unaligned output buffers and unaligned buffer lengths. This makes more of the tests in mmc_test pass. Unaligned lengths in pulls are handled by reading the full FIFO item, and storing the remaining bytes in a small internal buffer (part_buf). The next data pull will copy data out of this buffer first before accessing the FIFO again. Similarly, for pushes the final bytes that don't fill a FIFO item are stored in the part_buf (or sent anyway if it's the last transfer), and then the part_buf is included at the beginning of the next buffer pushed. Unaligned buffers in pulls are handled specially if the architecture cannot do efficient unaligned accesses, by reading FIFO items into an aligned local buffer, and memcpy'ing them into the output buffer, again storing any remaining bytes in the internal buffer. Similarly for pushes the buffer is memcpy'd into an aligned local buffer then written to the FIFO. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
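The carry-buffer idea in isolation (standalone sketch with invented names; the alignment special-casing is omitted):

    #include <string.h>

    #define FIFO_ITEM 4     /* e.g. one 32-bit FIFO access */

    struct pio_state {
            unsigned char part_buf[FIFO_ITEM];  /* leftover bytes between calls */
            int part_len;
    };

    /* Push 'len' bytes: flush the carry first, send whole FIFO items,
     * then stash any unaligned tail for the next call. */
    static void push_data(struct pio_state *s, const unsigned char *buf,
                          int len, void (*write_fifo)(const unsigned char *))
    {
            while (s->part_len && len) {        /* top up previous leftover */
                    s->part_buf[s->part_len++] = *buf++;
                    len--;
                    if (s->part_len == FIFO_ITEM) {
                            write_fifo(s->part_buf);
                            s->part_len = 0;
                    }
            }
            while (len >= FIFO_ITEM) {          /* whole items straight through */
                    write_fifo(buf);
                    buf += FIFO_ITEM;
                    len -= FIFO_ITEM;
            }
            memcpy(s->part_buf + s->part_len, buf, len);  /* stash the tail */
            s->part_len += len;
    }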
2011-07-20  mmc: dw_mmc: don't hard code fifo depth, fix usage (James Hogan)
The FIFO_DEPTH hardware configuration parameter can be found from the power-on value of RX_WMark in the FIFOTH register. This is used to initialise the watermarks, but when calculating the number of free fifo spaces a preprocessor definition is used which is hard coded to 32. Fix reading the value out of FIFOTH (the default value in the RX_WMark field is FIFO_DEPTH-1 not FIFO_DEPTH). Allow the fifo depth to be overridden by platform data (since a bootloader may have changed FIFOTH, making auto-detection unreliable). Store the fifo_depth for later use. Also fix the calculation to find the number of free bytes in the fifo to include the fifo depth in the left shift by the data shift, since the fifo depth is measured in fifo items, not bytes. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
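A sketch of the detection described (RX_WMark lives in FIFOTH bits 27:16 per the DesignWare layout; the host field names follow the driver but treat them as approximate):

    u32 fifoth = mci_readl(host, FIFOTH);
    unsigned int fifo_depth = ((fifoth >> 16) & 0xfff) + 1; /* power-on RX_WMark = depth - 1 */

    if (host->pdata->fifo_depth)        /* bootloader may have rewritten FIFOTH */
            fifo_depth = host->pdata->fifo_depth;
    host->fifo_depth = fifo_depth;

    /* free space in bytes, not items: the depth must be inside the shift */
    free_bytes = (fifo_depth - fifo_count) << host->data_shift;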