2014-06-06net: filter: fix typo in sparc BPF JITAlexei Starovoitov
fix typo in sparc codegen for SKF_AD_IFINDEX and SKF_AD_HATYPE classic BPF extensions Fixes: 2809a2087cc4 ("net: filter: Just In Time compiler for sparc") Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-06drm/tegra: sor - Make debugfs setup consistentThierry Reding
Other output drivers set up debugfs slightly differently. Bring the SOR driver in line with those for consistency. Signed-off-by: Thierry Reding <treding@nvidia.com>
2014-06-06drm/tegra: sor - Recursively remove debugfs treeThierry Reding
Removing only the root directory will fail when there are still files in it. Instead of manually removing all files, remove the whole directory recursively. Signed-off-by: Thierry Reding <treding@nvidia.com>
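A minimal sketch of the change described here, assuming the driver keeps its debugfs root in a field such as sor->debugfs (name illustrative):
	/* Tear down the whole debugfs directory, any remaining files included. */
	debugfs_remove_recursive(sor->debugfs);
	sor->debugfs = NULL;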
2014-06-06xfs: Fix rounding in xfs_alloc_fix_len()Jan Kara
Rounding in xfs_alloc_fix_len() is wrong. As the comment states, the result should be a number of the form (k*prod+mod); however, due to a sign mistake, the result is different. As a result, allocations on raid arrays could be misaligned in some cases. This also seems to fix an occasional assertion failure: XFS_WANT_CORRUPTED_GOTO(rlen <= flen, error0) in xfs_alloc_ag_vextent_size(). Also add an assertion that the result of xfs_alloc_fix_len() is of the expected form. Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
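As a worked example of the intended rounding: with prod = 16 and mod = 4, a candidate length of 37 blocks should be trimmed to 36 = 2*16 + 4. A hedged sketch of that adjustment (illustrative helper, not the kernel code):
	/* Largest value of the form k * prod + mod that does not exceed len. */
	static xfs_extlen_t fix_len_example(xfs_extlen_t len, xfs_extlen_t prod,
					    xfs_extlen_t mod)
	{
		if (prod <= 1 || len < mod)
			return len;
		return ((len - mod) / prod) * prod + mod;
	}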
2014-06-06xfs: tone down writepage/releasepage WARN_ONsChristoph Hellwig
I recently ran into the issue fixed by "xfs: kill buffers over failed write ranges properly" which spams the log with lots of backtraces. Make debugging any issues like that easier by using WARN_ON_ONCE in the writeback code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: small cleanup in xfs_lowbit64()Dan Carpenter
There are two checkpatch.pl complaints here because of the bad indenting and because of the assignment inside the condition. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: kill xfs_buf_geterror()Dave Chinner
Most of the callers are just calling ASSERT(!xfs_buf_geterror()) which means they are checking for bp->b_error == 0. If bp is null in this case, we will assert fail, and hence it's no different in result to oopsing because of a null bp. In some cases, errors have already been checked for or the function returning the buffer can't return a buffer with an error, so it's just a redundant assert. Either way, the assert can simply be removed. The other two non-assert callers can just test for a buffer and error properly. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
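The conversion amounts to something like this (sketch, not the exact diff):
	/* Before: asserts bp->b_error == 0, and oopses anyway if bp is NULL. */
	ASSERT(!xfs_buf_geterror(bp));

	/* After: either drop the redundant assert entirely, or check directly: */
	ASSERT(bp->b_error == 0);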
2014-06-06xfs: xfs_readsb needs to check for magic numbersDave Chinner
Commit daba542 ("xfs: skip verification on initial "guess" superblock read") dropped the use of a verifier for the initial superblock read so we can probe the sector size of the filesystem stored in the superblock. It, however, now fails to validate that what was read initially is actually an XFS superblock and hence will fail the sector size check and return ENOSYS. This causes probe-based mounts to fail because it expects XFS to return EINVAL when it doesn't recognise the superblock format. cc: <stable@vger.kernel.org> Reported-by: Plamen Petrov <plamen.sisi@gmail.com> Tested-by: Plamen Petrov <plamen.sisi@gmail.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: block allocation work needs to be kswapd awareDave Chinner
Upon memory pressure, kswapd calls xfs_vm_writepage() from shrink_page_list(). This can result in delayed allocation occurring and that gets deferred to the allocation workqueue. The allocation then runs outside kswapd context, which means if it needs memory (and it does, to demand-page metadata from disk) it can block in shrink_inactive_list() waiting for IO congestion. These blocking waits are normally avoided in kswapd context, so under memory pressure writeback from kswapd can be arbitrarily delayed by memory reclaim. To avoid this, pass the kswapd context to the allocation being done by the workqueue, so that memory reclaim understands correctly that the work is being done for kswapd and therefore it is not blocked and does not delay memory reclaim. To avoid issues with int->char conversion of flag fields (as noticed in v1 of this patch) convert the flag fields in the struct xfs_bmalloca to bool types. pahole indicates these variables are still single byte variables, so no extra space is consumed by this change. cc: <stable@vger.kernel.org> Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
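A sketch of the workqueue side of this (field names such as args->kswapd are illustrative, not necessarily the exact patch):
	/* Worker: run the deferred allocation, but let reclaim treat us as kswapd. */
	static void xfs_bmapi_allocate_worker(struct work_struct *work)
	{
		struct xfs_bmalloca	*args = container_of(work,
						struct xfs_bmalloca, work);
		unsigned long		pflags = current->flags;

		/* Tell memory reclaim this work is being done on behalf of kswapd. */
		if (args->kswapd)
			current->flags |= PF_KSWAPD;

		args->result = __xfs_bmapi_allocate(args);
		complete(args->done);

		/* Restore the original kswapd bit. */
		current->flags = (current->flags & ~PF_KSWAPD) | (pflags & PF_KSWAPD);
	}
The submitting side would record args->kswapd = current_is_kswapd() before queueing the work.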
2014-06-06x86, locking/rwlocks: Enable qrwlocks on x86Waiman Long
Make x86 use the fair rwlock_t. Implement the custom queue_write_unlock() for best performance. Signed-off-by: Waiman Long <Waiman.Long@hp.com> [peterz: near complete rewrite] Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Dave Jones <davej@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com> Cc: linux-kernel@vger.kernel.org Cc: x86@kernel.org Link: http://lkml.kernel.org/n/tip-r1xuzmdysvuhl3h86n5fbxi7@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-06-06locking/rwlocks: Introduce 'qrwlocks' - fair, queued rwlocksWaiman Long
This rwlock uses the arch_spin_lock_t as a waitqueue, and assuming the arch_spin_lock_t is a fair lock (ticket, MCS, etc.) the resulting rwlock is a fair lock. It fits in the same 8 bytes as the regular rwlock_t by folding the reader and writer count into a single integer, using the remaining 4 bytes for the arch_spinlock_t. Architectures that can single-copy address bytes can optimize queue_write_unlock() with a 0 write to the LSB (the write count). Performance as measured by Davidlohr Bueso (rwlock_t -> qrwlock_t):
+--------------+-------------+---------------+
| Workload     | #users      | delta         |
+--------------+-------------+---------------+
| alltests     | > 1400      | -4.83%        |
| custom       | 0-100,> 100 | +1.43%,-1.57% |
| high_systime | > 1000      | -2.61         |
| shared       | all         | +0.32         |
+--------------+-------------+---------------+
http://www.stgolabs.net/qrwlock-stuff/aim7-results-vs-rwsem_optsin/ Signed-off-by: Waiman Long <Waiman.Long@hp.com> [peterz: near complete rewrite] Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com> Cc: linux-arch@vger.kernel.org Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/n/tip-gac1nnl3wvs2ij87zv2xkdzq@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
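A sketch of the layout and of the byte-store unlock fast path described above (simplified; not the exact implementation):
	typedef struct qrwlock {
		atomic_t		cnts;	/* writer byte (LSB) + reader count */
		arch_spinlock_t		lock;	/* waitqueue for contended lockers */
	} arch_rwlock_t;

	/*
	 * On architectures with single-copy atomic byte stores, releasing the
	 * write lock only needs to clear the writer byte, so no atomic
	 * read-modify-write is required.
	 */
	static inline void queue_write_unlock(struct qrwlock *lock)
	{
		smp_store_release((u8 *)&lock->cnts, 0);
	}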
2014-06-06ALSA: hda - add two new pin tablesHui Wang
These two new pin tables fix headset mic problems on several new Dell machines. Also delete some machines from the old quirk table, since the existing pin tables already cover them. Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2014-06-06perf: Differentiate exec() and non-exec() comm eventsAdrian Hunter
perf tools like 'perf report' can aggregate samples by comm strings, which generally works. However, there are other potential use-cases. For example, to pair up 'calls' with 'returns' accurately (from branch events like Intel BTS) it is necessary to identify whether the process has exec'd. Although a comm event is generated when an 'exec' happens it is also generated whenever the comm string is changed on a whim (e.g. by prctl PR_SET_NAME). This patch adds a flag to the comm event to differentiate one case from the other. In order to determine whether the kernel supports the new flag, a selection bit named 'exec' is added to struct perf_event_attr. The bit does nothing but will cause perf_event_open() to fail if the bit is set on kernels that do not have it defined. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/537D9EBE.7030806@intel.com Cc: Paul Mackerras <paulus@samba.org> Cc: Dave Jones <davej@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-fsdevel@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
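From the tooling side, probing for the new bit could look roughly like this (sketch; the selection bit described above is exposed in later UAPI headers as attr.comm_exec):
	#include <string.h>
	#include <linux/perf_event.h>

	/* Request comm events plus the exec flag; perf_event_open() will fail
	 * with EINVAL on kernels that predate the new attribute bit. */
	static void setup_comm_exec(struct perf_event_attr *attr)
	{
		memset(attr, 0, sizeof(*attr));
		attr->size = sizeof(*attr);
		attr->comm = 1;
		attr->comm_exec = 1;
	}
On the read side, an exec then shows up as a PERF_RECORD_COMM record with PERF_RECORD_MISC_COMM_EXEC set in header.misc, while a plain rename (e.g. prctl PR_SET_NAME) does not carry the flag.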
2014-06-06Merge branch 'perf/urgent' into perf/core, to resolve conflict and to prepare for new patchesIngo Molnar
Conflicts: arch/x86/kernel/traps.c Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-06-06perf: Fix perf_event_comm() vs. exec() assumptionPeter Zijlstra
perf_event_comm() assumes that set_task_comm() is only called on exec(), and in particular that it's only called on current. Neither is true, as Dave reported a WARN triggered by set_task_comm() being called on !current. Separate the exec() hook from the comm hook. Reported-by: Dave Jones <davej@redhat.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-fsdevel@vger.kernel.org Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/20140521153219.GH5226@laptop.programming.kicks-ass.net [ Build fix. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-06-06xfs: remove redundant geometry information from xfs_da_stateDave Chinner
It's carried in state->args->geo, so there's no need to duplicate it and use more stack space than necessary. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: replace attr LBSIZE with xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: pass xfs_da_args to xfs_attr_leaf_newentsizeDave Chinner
It's only ever called from contexts where the xfs_da_args is present, and the args structure contains all the information needed. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: use xfs_da_geometry for block size in attr codeDave Chinner
Rather than using the superblock value obtained through the xfs_mount. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: remove mp->m_dir_geo from directory loggingDave Chinner
We don't pass the xfs_da_args or the geometry all the way down to the directory buffer logging code, hence we have to use mp->m_dir_geo here. Fix this to use the geometry passed via the xfs_da_args, and convert all the directory logging functions for consistency. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: reduce direct usage of mp->m_dir_geoDave Chinner
There are many places in the directory code where we don't pass the args in and so have to extract the geometry directly from the mount structure. Push the args or the geometry into these leaf functions so that we don't need to grab it from the struct xfs_mount. This, in turn, brings us to the point where directory geometry is no longer a property of the struct xfs_mount; it is not a global property anymore, and hence we can start to consider per-directory configuration of physical geometries. Start by converting the xfs_dir_isblock/leaf code - pass in the xfs_da_args and convert the readdir code to use xfs_da_args like the rest of the directory code to pass information around. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: move node entry counts to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: convert dir/attr btree threshold to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: convert m_dirblksize to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: convert m_dirblkfsbs to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: convert directory segment limits to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: convert directory db conversion to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: convert directory dablk conversion to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: convert dir byte/off conversion to xfs_da_geometryDave Chinner
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: kill XFS_DIR2...FIRSTDB macrosDave Chinner
They are just simple wrappers around xfs_dir2_byte_to_db(), and we've already removed one usage earlier in the patch set. Kill the rest before we start removing the xfs_mount from conversion functions. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: move directory block translations to xfs_dir2_priv.hDave Chinner
Because they aren't actually part of the on-disk format, and so shouldn't be in xfs_da_format.h. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-06-06xfs: introduce directory geometry structureDave Chinner
The directory code has a dependency on the struct xfs_mount to supply the directory block geometry. Block size, block log size, and other parameters are pre-calculated in the struct xfs_mount or accessed directly from the superblock embedded in the struct xfs_mount. Extract all of this geometry information out of the struct xfs_mount and superblock and place it into a new struct xfs_da_geometry defined by the directory code. Allocate and initialise it at mount time, and attach it to the struct xfs_mount so it can be passed back into the directory code appropriately rather than using the struct xfs_mount. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
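The rough shape of the new structure (a few representative fields only, treat names as illustrative):
	struct xfs_da_geometry {
		int		blksize;	/* da block size in bytes */
		int		fsbcount;	/* da block size in filesystem blocks */
		uint8_t		blklog;		/* log2 of blksize */
		uint8_t		fsblog;		/* log2 of fs block size */
		int		node_ents;	/* # of entries in a danode */
		int		magicpct;	/* 37% of block size in bytes */
	};
The mount path would then allocate one instance for directories and one for attributes and hang them off the struct xfs_mount (e.g. mp->m_dir_geo, mp->m_attr_geo), which is where the later patches in this series pick them up.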
2014-06-06tracing: Only calculate stats of tracepoint benchmarks for 2^32 timesSteven Rostedt (Red Hat)
When calculating the average and standard deviation, it is required that the count be less than UINT_MAX, otherwise the do_div() will get undefined results. After 2^32 counts of data, the average and standard deviation should pretty much be set anyway. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
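The guard this describes is roughly the following (sketch; variable names illustrative):
	static u64		bm_total, bm_totalsq, bm_avg;
	static unsigned long	bm_cnt;

	/* Stop folding new samples into the stats once the count reaches
	 * UINT_MAX, since do_div() only takes a 32-bit divisor. */
	static void benchmark_update_stats(u64 delta)
	{
		if (bm_cnt >= UINT_MAX)
			return;

		bm_cnt++;
		bm_total += delta;
		bm_totalsq += delta * delta;

		bm_avg = bm_total;
		do_div(bm_avg, (u32)bm_cnt);	/* running average */
	}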
2014-06-06selftests/powerpc: Test the THP bug we fixed in the previous commitMichael Ellerman
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-06-06powerpc/mm: Check paca psize is up to date for huge mappingsMichael Ellerman
We have a bug in our hugepage handling which exhibits as an infinite loop of hash faults. If the fault is being taken in the kernel it will typically trigger the softlockup detector, or the RCU stall detector. The bug is as follows:
 1. mmap(0xa0000000, ..., MAP_FIXED | MAP_HUGETLB | MAP_ANONYMOUS ..)
 2. Slice code converts the slice psize to 16M.
 3. The code on lines 539-540 of slice.c in slice_get_unmapped_area() synchronises the mm->context with the paca->context. So the paca slice mask is updated to include the 16M slice.
 4. Either:
    * mmap() fails because there are no huge pages available.
    * mmap() succeeds and the mapping is then munmapped.
    In both cases the slice psize remains at 16M in both the paca & mm.
 5. mmap(0xa0000000, ..., MAP_FIXED | MAP_ANONYMOUS ..)
 6. The slice psize is converted back to 64K. Because of the check on line 539 of slice.c we DO NOT update the paca->context. The paca slice mask is now out of sync with the mm slice mask.
 7. User/kernel accesses 0xa0000000.
 8. The SLB miss handler slb_allocate_realmode() **uses the paca slice mask** to create an SLB entry and inserts it in the SLB.
 9. With the 16M SLB entry in place the hardware does a hash lookup, no entry is found so a data access exception is generated.
10. The data access handler calls do_page_fault() -> handle_mm_fault().
11. __handle_mm_fault() creates a THP mapping with do_huge_pmd_anonymous_page().
12. The hardware retries the access, there is still nothing in the hash table so once again a data access exception is generated.
13. hash_page() calls into __hash_page_thp() and inserts a mapping in the hash. Although the THP mapping maps 16M the hashing is done using 64K as the segment page size.
14. hash_page() returns immediately after calling __hash_page_thp(), skipping over the code at line 1125. Resulting in the mismatch between the paca->context and mm->context not being detected.
15. The hardware retries the access, the hash it generates using the 16M SLB entry does NOT match the hash we inserted.
16. We take another data access and go into __hash_page_thp().
17. We see a valid entry in the hpte_slot_array and so we call updatepp() which succeeds.
18. Goto 15.
We could fix this in two ways. The first would be to remove or modify the check on line 539 of slice.c. The second option is to cause the check of paca psize in hash_page() on line 1125 to also be done for THP pages. We prefer the latter, because the check & update of the paca psize is not done until we know it's necessary. It's also done only on the current cpu, so we don't need to IPI all other cpus. Without further rearranging the code, the simplest fix is to pull out the code that checks paca psize and call it in two places. Firstly for THP/hugetlb, and secondly for other mappings as before. Thanks to Dave Jones for trinity, which originally found this bug. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> CC: stable@vger.kernel.org [v3.11+]
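The pulled-out helper would look roughly like this for the user-region case (sketch; the real function also handles the vmalloc region):
	/* Resync the paca's cached segment sizes with the mm when they disagree,
	 * then rebuild the bolted SLB entries so later SLB misses use the right
	 * page size. */
	static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
				     int psize, bool user_region)
	{
		if (user_region && psize != get_paca_psize(ea)) {
			get_paca()->context = mm->context;
			slb_flush_and_rebolt();
		}
	}
hash_page() then calls this both for THP/hugetlb mappings before returning and for other mappings as before.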
2014-06-05iscsi-target: Reject mutual authentication with reflected CHAP_CNicholas Bellinger
This patch adds an explicit check in chap_server_compute_md5() to ensure the CHAP_C value received from the initiator during mutual authentication does not match the original CHAP_C provided by the target. This is in line with RFC-3720, section 8.2.1: Originators MUST NOT reuse the CHAP challenge sent by the Responder for the other direction of a bidirectional authentication. Responders MUST check for this condition and close the iSCSI TCP connection if it occurs. Reported-by: Tejas Vaykole <tejas.vaykole@calsoftinc.com> Cc: stable@vger.kernel.org # 3.1+ Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
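Inside chap_server_compute_md5() the check boils down to a comparison like this (sketch; the decoded-challenge buffer name is illustrative):
	/* RFC 3720 8.2.1: reject a CHAP_C that merely reflects our own challenge. */
	if (!memcmp(initiator_challenge, chap->challenge, CHAP_CHALLENGE_LENGTH)) {
		pr_err("initiator CHAP_C matches target CHAP_C, failing login\n");
		goto out;	/* login fails and the connection is closed */
	}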
2014-06-05iscsi-target: Remove no-op from iscsit_tpg_del_portal_groupNicholas Bellinger
This patch removes a no-op iscsit_clear_tpg_np_login_threads() call in iscsit_tpg_del_portal_group(), which is unnecessary because iscsit_tpg_del_portal_group() can only ever be removed from configfs once all of the child network portals have been released. Also, go ahed and make iscsit_clear_tpg_np_login_threads() declared as static. Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2014-06-05iscsi-target: Fix CHAP_A parameter list handlingTejas Vaykole
The target fails to handle a list of CHAP_A key-value pairs from the initiator; it always expects CHAP_A=5. When the initiator instead sends a list, for example CHAP_A=6,5, the target fails the security negotiation, which is incorrect. This patch handles the case (RFC 3720, section 11.1.4) wherein the initiator may send a list of CHAP_A values, and the target replies with an appropriate CHAP_A value in its response. (Drop whitespaces + rename to chap_check_algorithm + save original pointer + add explicit check for CHAP_A key - nab) Signed-off-by: Tejas Vaykole <tejas.vaykole@calsoftinc.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
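Parsing the value list might look roughly like this (sketch of the chap_check_algorithm() idea named above; return constants illustrative, not the exact patch):
	/* Walk a "6,5"-style CHAP_A value list and accept MD5 (algorithm 5). */
	static int chap_check_algorithm(const char *a_str)
	{
		char *tmp, *orig, *token;

		tmp = kstrdup(a_str, GFP_KERNEL);
		if (!tmp)
			return CHAP_DIGEST_UNKNOWN;
		orig = tmp;	/* strsep() advances tmp, keep the original for kfree() */

		while ((token = strsep(&tmp, ",")) != NULL) {
			if (!strcmp(token, "5")) {
				kfree(orig);
				return CHAP_DIGEST_MD5;
			}
		}
		kfree(orig);
		return CHAP_DIGEST_UNKNOWN;
	}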
2014-06-05ip_tunnel: fix possible rtable leakDmitry Popov
ip_rt_put(rt) is always called in "error" branches above, but was missed in the skb_cow_head() branch. As rt is not yet bound to skb here, we have to release it by hand. Signed-off-by: Dmitry Popov <ixaphire@qrator.net> Signed-off-by: David S. Miller <davem@davemloft.net>
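The fix pattern is to drop the route reference in that branch too, roughly:
	if (skb_cow_head(skb, dev->needed_headroom)) {
		ip_rt_put(rt);		/* rt is not attached to skb yet, drop it by hand */
		dev->stats.tx_dropped++;
		kfree_skb(skb);
		return;
	}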
2014-06-06ceph: include time stamp in every MDS requestSage Weil
We recently modified the client/MDS protocol to include a timestamp in the client request. This allows ctime updates to follow the client's clock in most cases, which avoids subtle problems when clocks are out of sync and timestamps are updated sometimes by the MDS clock (for most requests) and sometimes by the client clock (for cap writeback). Signed-off-by: Sage Weil <sage@inktank.com>
2014-06-06rbd: fix ida/idr memory leakIlya Dryomov
ida_destroy() needs to be called on module exit to release ida caches. Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Alex Elder <elder@linaro.org>
2014-06-06rbd: use reference counts for image requestsAlex Elder
Each image request contains a reference count, but to date it has not actually been used. (I think this was just an oversight.) A recent report involving rbd failing an assertion shed light on why and where we need to use these reference counts. Every OSD request associated with an object request uses rbd_osd_req_callback() as its callback function. That function will call a helper function (dependent on the type of OSD request) that will set the object request's "done" flag if appropriate. If that "done" flag is set, the object request is passed to rbd_obj_request_complete(). In rbd_obj_request_complete(), requests are processed in sequential order. So if an object request completes before one of its predecessors in the image request, the completion is deferred. Otherwise, if it's a completing object's "turn" to be completed, it is passed to rbd_img_obj_end_request(), which records the result of the operation, accumulates transferred bytes, and so on. Next, the successor to this request is checked and if it is marked "done", (deferred) completion processing is performed on that request, and so on. If the last object request in an image request is completed, rbd_img_request_complete() is called, which (typically) destroys the image request. There is a race here, however. The instant an object request is marked "done" it can be provided (by a thread handling completion of one of its predecessor operations) to rbd_img_obj_end_request(), which (for the last request) can then lead to the image request getting torn down. And this can happen *before* that object has itself entered rbd_img_obj_end_request(). As a result, once it *does* enter that function, the image request (and even the object request itself) may have been freed and become invalid. All that's necessary to avoid this is to properly count references to the image requests. We tear down an image request's object requests all at once--only when the entire image request has completed. So there's no need for an image request to count references for its object requests. However, we don't want an image request to go away until the last of its object requests has passed through rbd_img_obj_callback(). In other words, we don't want rbd_img_request_complete() to necessarily result in the image request being destroyed, because it may get called before we've finished processing on all of its object requests. So the fix is to add a reference to an image request for each of its object requests. The reference can be viewed as representing an object request that has not yet finished its call to rbd_img_obj_callback(). That is emphasized by getting the reference right after assigning that as the image object's callback function. The corresponding release of that reference is done at the end of rbd_img_obj_callback(), which every image object request passes through exactly once. Cc: stable@vger.kernel.org Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
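The pairing this describes is straightforward (sketch; rbd_img_request_get/put are assumed to wrap a kref on the image request):
	/* When wiring up each object request, take one image-request reference: */
	obj_request->callback = rbd_img_obj_callback;
	rbd_img_request_get(img_request);

	/* ...and drop it at the very end of rbd_img_obj_callback(), which each
	 * object request passes through exactly once: */
	rbd_img_request_put(img_request);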
2014-06-06rbd: fix osd_request memory leak in __rbd_dev_header_watch_sync()Ilya Dryomov
osd_request, along with the r_request and r_reply messages attached to it, is leaked in __rbd_dev_header_watch_sync() if the requested image doesn't exist. This is because lingering requests are special and get an extra ref in the reply path. Fix it by unregistering the linger request on the error path and splitting __rbd_dev_header_watch_sync() into two functions to make it maintainable. Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
2014-06-06rbd: make sure we have latest osdmap on 'rbd map'Ilya Dryomov
Given an existing idle mapping (img1), mapping an image (img2) in a newly created pool (pool2) fails: $ ceph osd pool create pool1 8 8 $ rbd create --size 1000 pool1/img1 $ sudo rbd map pool1/img1 $ ceph osd pool create pool2 8 8 $ rbd create --size 1000 pool2/img2 $ sudo rbd map pool2/img2 rbd: sysfs write failed rbd: map failed: (2) No such file or directory This is because client instances are shared by default and we don't request an osdmap update when bumping a ref on an existing client. The fix is to use the mon_get_version request to see if the osdmap we have is the latest, and block until the requested update is received if it's not. Fixes: http://tracker.ceph.com/issues/8184 Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@inktank.com>
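With the two libceph additions below, the 'rbd map' path can do roughly this (sketch; the get-version helper name and timeout handling are illustrative):
	/* Make sure our osdmap is at least as new as what the monitors have. */
	static int ensure_latest_osdmap(struct ceph_client *client,
					unsigned long timeout)
	{
		u64 newest_epoch;
		int ret;

		/* Ask the monitors for the current osdmap epoch... */
		ret = ceph_monc_do_get_version(&client->monc, "osdmap", &newest_epoch);
		if (ret)
			return ret;

		if (client->osdc.osdmap->epoch >= newest_epoch)
			return 0;

		/* ...and if our map is older, request an update and wait for it. */
		ceph_monc_request_next_osdmap(&client->monc);
		return ceph_monc_wait_osdmap(&client->monc, newest_epoch, timeout);
	}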
2014-06-06libceph: add ceph_monc_wait_osdmap()Ilya Dryomov
Add ceph_monc_wait_osdmap(), which will block until the osdmap with the specified epoch is received or timeout occurs. Export both of these as they are going to be needed by rbd. Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@inktank.com>
2014-06-06libceph: mon_get_version request infrastructureIlya Dryomov
Add support for mon_get_version requests to libceph. This reuses much of the ceph_mon_generic_request infrastructure, with one exception. Older OSDs don't set mon_get_version reply hdr->tid even if the original request had a non-zero tid, which makes it impossible to lookup ceph_mon_generic_request contexts by tid in get_generic_reply() for such replies. As a workaround, we allocate a reply message on the reply path. This can probably interfere with revoke, but I don't see a better way. Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@inktank.com>
2014-06-06libceph: recognize poolop requests in debugfsIlya Dryomov
Recognize poolop requests in the debugfs monc dump, fix printk format specifiers - tid is unsigned. Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@inktank.com>
2014-06-06ceph: refactor readpage_nounlock() to make the logic clearerZhang Zhen
If the return value of ceph_osdc_readpages() is not negative, it is certainly greater than or equal to zero. Remove the useless condition judgment and redundant braces. Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com> Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-06-06mds: check cap ID when handling cap export messageYan, Zheng
Handle the following sequence of events: - mds0 exports an inode to mds1. client receives the cap import message from mds1. caps from mds0 are removed while handling the cap import message. - mds1 exports an inode to mds0. client receives the cap export message from mds1. handle_cap_export() adds placeholder caps for mds0. - client receives the first cap export message (for exporting the inode from mds0 to mds1). Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-06-06ceph: remember subtree root dirfrag's auth MDSYan, Zheng
Remember the dirfrag's auth MDS when it differs from its parent inode's auth MDS. Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>