2015-06-23  drm/i915: Update [vma|object]_move_to_active() to take request structures (John Harrison)
Now that everything above has been converted to use request structures, it is possible to update the lower level move_to_active() functions to be request based as well. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update add_request() to take a request structure (John Harrison)
Now that all callers of i915_add_request() have a request pointer to hand, it is possible to update the add request function to take a request pointer rather than pulling it out of the OLR. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
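As a rough sketch of the pattern this series is driving at (hypothetical, simplified types rather than the real i915 structures), the change is from digging the current request out of per-ring state to having the caller hand it over explicitly:

    /* Simplified sketch, not the actual i915 code. */
    struct request { unsigned int seqno; };

    struct ring {
        /* Old model: the "outstanding lazy request" lives in the ring. */
        struct request *outstanding_lazy_request;
    };

    /* Before: the request is pulled implicitly out of the ring's OLR. */
    static void add_request_old(struct ring *ring)
    {
        struct request *req = ring->outstanding_lazy_request;
        /* ... emit seqno update, move req onto the tracking lists ... */
        (void)req;
        ring->outstanding_lazy_request = NULL;
    }

    /* After: the caller states exactly which request is being completed. */
    static void add_request_new(struct request *req)
    {
        /* ... emit seqno update, move req onto the tracking lists ... */
        (void)req;
    }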
2015-06-23  drm/i915: Update queue_flip() to take a request structure (John Harrison)
Updated the display page flip code to do explicit request creation and submission rather than relying on the OLR and just hoping that the request actually gets submitted at some random point. The sequence is now to create a request, queue the work to the ring, assign the known request to the flip queue work item then actually submit the work and post the request. Note that every single flip function used to finish with '__intel_ring_advance(ring);'. However, immediately after they return there is now an add request call which will do the advance anyway. Thus the many duplicate advance calls have been removed. v2: Updated commit message with comment about advance removal. v3: The request can now be allocated by the _sync() code earlier on. Thus the page flip path does not necessarily need to allocate a new request, it may be able to re-use one. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
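A hypothetical outline of the flip sequence described above (the helper names are illustrative stand-ins, not the driver's real functions):

    struct request { int unused; };
    struct flip_work { struct request *req; };

    static struct request *request_alloc(void)
    {
        static struct request r;
        return &r;                      /* stub allocator for the sketch */
    }
    static void emit_flip_commands(struct request *req) { (void)req; }
    static void add_request(struct request *req) { (void)req; }

    static void queue_flip(struct flip_work *work, struct request *shared_req)
    {
        /* v3: re-use a request already created by the _sync() path, if any. */
        struct request *req = shared_req ? shared_req : request_alloc();

        emit_flip_commands(req);  /* queue the flip work to the ring        */
        work->req = req;          /* tie the known request to the work item */
        add_request(req);         /* submit/post it; the OLR is not used,   */
                                  /* and add_request() does the advance     */
    }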
2015-06-23  drm/i915: Update overlay code to do explicit request management (John Harrison)
Updated the overlay update code path to do explicit request creation and submission rather than relying on the OLR to do the right thing. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update i915_gem_object_sync() to take a request structure (John Harrison)
The plan is to pass requests around as the basic submission tracking structure rather than rings and contexts. This patch updates the i915_gem_object_sync() code path.

v2: Much more complex patch to share a single request between the sync and the page flip. The _sync() function now supports lazy allocation of the request structure. That is, if one is passed in then that will be used. If one is not, then a request will be allocated and passed back out. Note that the _sync() code does not necessarily require a request; thus one will only be created in certain situations. The reason the lazy allocation must be done within the _sync() code itself is that the decision about whether one is needed is not really something that the code above can second guess (except in the case where one is definitely not required because no ring is passed in).

The call chains above _sync() now support passing a request through, with most callers passing in NULL and assuming that no request will be required (because they also pass in NULL for the ring and therefore can't be generating any ring code). The exception is intel_crtc_page_flip(), which now supports having a request returned from _sync(). If one is, then that request is shared by the page flip (if the page flip is of a type to need a request). If _sync() does not generate a request but the page flip does need one, then the page flip path will create its own request.

v3: Updated comment description to be clearer about the 'to_req' parameter (Tomas Elf review request). Rebased onto newer tree that significantly changed the synchronisation code.

v4: Updated comments from review feedback (Tomas Elf)

For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
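The lazy-allocation contract could be sketched like this (simplified stand-ins for the real types; 'to_req' mirrors the parameter name mentioned above):

    struct request { int unused; };
    struct ring { int unused; };
    struct gem_object { int unused; };

    static struct request *request_alloc(struct ring *ring)
    {
        static struct request r;        /* stub allocator for the sketch */
        (void)ring;
        return &r;
    }

    /*
     * If *to_req is non-NULL the caller's request is used. If it is NULL and
     * ring commands turn out to be needed, a request is allocated and passed
     * back out so the caller (e.g. the page flip path) can share it.
     */
    static int object_sync(struct gem_object *obj, struct ring *to_ring,
                           struct request **to_req)
    {
        int needs_ring_commands = 1;    /* decided inside the sync code  */

        if (!to_ring)
            return 0;                   /* no ring: no request required  */

        if (needs_ring_commands && *to_req == NULL)
            *to_req = request_alloc(to_ring);

        /* ... emit wait/semaphore commands against *to_req ... */
        (void)obj;
        return 0;
    }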
2015-06-23  drm/i915: Update render_state_init() to take a request structure (John Harrison)
Updated the two render_state_init() functions to take a request pointer instead of a ring. This removes their reliance on the OLR. v2: Rebased to newer tree. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update init_context() to take a request structure (John Harrison)
Now that everything above has been converted to use requests, it is possible to update init_context() to take a request pointer instead of a ring/context pair. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update deferred context creation to do explicit request management (John Harrison)
In execlist mode, context initialisation is deferred until first use of the given context. This is because execlist mode has per-ring context state and thus many more context storage objects than legacy mode, and many are never actually used. Previously, the initialisation commands were written to the ring and tagged with some random request structure via the OLR. This seemed to be causing a null pointer dereference bug under certain circumstances (BZ:88865).

This patch adds explicit request creation and submission to the deferred initialisation code path, thus removing any reliance on, or randomness caused by, the OLR.

Note that it should be possible to move the deferred context creation until even later - when the context is actually switched to rather than when it is merely validated. This would allow the initialisation to be done within the request of the work that is wanting to use the context. Hence, the extra request that is created, used and retired just for the context init could be removed completely. However, this is left for a follow up patch.

For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update do_switch() to take a request structure (John Harrison)
Updated do_switch() to take a request pointer instead of a ring/context pair. v2: Removed some overzealous req-> dereferencing. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update i915_switch_context() to take a request structure (John Harrison)
Now that the request is guaranteed to specify the context, it is possible to update the context switch code to use requests rather than ring and context pairs. This patch updates i915_switch_context() accordingly. Also removed the warning that the request's context must match the last context switch's context. As the context switch now gets the context object from the request structure, there is no longer any scope for the two to become out of step. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update ppgtt_init_ring() & context_enable() to take requests (John Harrison)
The final step in removing the OLR from i915_gem_init_hw() is to pass the newly allocated request structure in to each step rather than passing a ring structure. This patch updates both i915_ppgtt_init_ring() and i915_gem_context_enable() to take request pointers. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Add explicit request management to i915_gem_init_hw() (John Harrison)
Now that a single per-ring loop is being done for all the different initialisation steps in i915_gem_init_hw(), it is possible to add proper request management as well. The last remaining issue is that the context enable call eventually ends up within *_render_state_init() and this does its own private _i915_add_request() call. This patch adds explicit request creation and submission to the top level loop and removes the add_request() from deep within the sub-functions. v2: Updated for removal of batch_obj from add_request call in previous patch. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
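The resulting shape of the top-level loop might look roughly like this (an illustrative sketch, not the real i915_gem_init_hw(); the cleanup of an abandoned request on error is omitted):

    #define NUM_RINGS 5

    struct ring { int id; };
    struct request { struct ring *ring; };

    static struct request *request_alloc(struct ring *ring)
    {
        static struct request r;        /* stub allocator for the sketch */
        r.ring = ring;
        return &r;
    }
    static int ppgtt_init_ring(struct request *req) { (void)req; return 0; }
    static int context_enable(struct request *req)  { (void)req; return 0; }
    static void add_request(struct request *req)    { (void)req; }

    static int gem_init_hw(struct ring rings[NUM_RINGS])
    {
        int i, ret;

        for (i = 0; i < NUM_RINGS; i++) {
            /* One request tags all of this ring's init work. */
            struct request *req = request_alloc(&rings[i]);

            ret = ppgtt_init_ring(req);         /* per-ring PPGTT setup  */
            if (!ret)
                ret = context_enable(req);      /* L3 remap, context init */
            if (ret)
                return ret;

            add_request(req);                   /* single submission point */
        }
        return 0;
    }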
2015-06-23  drm/i915: Don't tag kernel batches as user batches (John Harrison)
The render state initialisation code does an explicit i915_add_request() call to commit the init commands. It was passing in the initialisation batch buffer to add_request() as the batch object parameter. However, the batch object entry in the request structure (which is all that parameter is used for) is meant for keeping track of user generated batch buffers for blame tagging during GPU hangs. This patch clears the batch object parameter so that kernel generated batch buffers are not tagged as being user generated. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Moved the for_each_ring loop outside of i915_gem_context_enable() (John Harrison)
The start of day context initialisation code in i915_gem_context_enable() loops over each ring and calls the legacy switch context or the execlist init context code as appropriate. This patch moves the ring looping out of that function into the top level caller i915_gem_init_hw(). This means that a single pass can be made over all rings doing the PPGTT, L3 remap and context initialisation of each ring altogether. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Split i915_ppgtt_init_hw() in half - generic and per ring (John Harrison)
The i915_gem_init_hw() function calls a bunch of smaller initialisation functions, several of which have generic sections and per-ring sections. This means multiple passes are done over the rings. Each pass writes data to the ring which floats around in that ring's OLR until some random point in the future when an add_request() is done by some random other piece of code. This patch breaks i915_ppgtt_init_hw() in two, with the per-ring initialisation now being done in i915_ppgtt_init_ring(). The ring looping is now done at the top level in i915_gem_init_hw(). v2: Fix dumb loop variable re-use. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> (v1) Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update i915_gpu_idle() to manage its own request (John Harrison)
Added explicit request creation and submission to the GPU idle code path. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Add flag to i915_add_request() to skip the cache flush (John Harrison)
In order to explicitly track all GPU work (and completely remove the outstanding lazy request), it is necessary to add extra i915_add_request() calls to various places. Some of these do not need the implicit cache flush done as part of the standard batch buffer submission process. This patch adds a flag to _add_request() to specify whether the flush is required or not. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update execbuffer_move_to_active() to take a request structure (John Harrison)
The plan is to pass requests around as the basic submission tracking structure rather than rings and contexts. This patch updates the execbuffer_move_to_active() code path. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update move_to_gpu() to take a request structure (John Harrison)
The plan is to pass requests around as the basic submission tracking structure rather than rings and contexts. This patch updates the move_to_gpu() code paths. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update the dispatch tracepoint to use params->request (John Harrison)
Updated a couple of trace points to use the now cached request pointer rather than extracting it from the ring. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Add request to execbuf params and add explicit cleanup (John Harrison)
Rather than just having a local request variable in the execbuff code, the request pointer is now stored in the execbuff params structure. Also added explicit cleanup of the request (plus wiping the OLR to match) in the error case. This means that the execbuff code is no longer dependent upon the OLR keeping track of the request so as to not leak it when things do go wrong. Note that in the success case, the i915_add_request() at the end of the submission function will tidy up the request and clear the OLR. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Update alloc_request to return the allocated request (John Harrison)
The alloc_request() function does not actually return the newly allocated request. Instead, it must be pulled from ring->outstanding_lazy_request. This patch fixes this so that code can create a request and start using it knowing exactly which request it actually owns. v2: Updated for new i915_gem_request_alloc() scheme. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Simplify i915_gem_execbuffer_retire_commands() parameters (John Harrison)
Shrunk the parameter list of i915_gem_execbuffer_retire_commands() to a single structure as everything it requires is available in the execbuff_params object. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Merged the many do_execbuf() parameters into a structure (John Harrison)
The do_execbuf() function takes quite a few parameters. The actual set of parameters is going to change with the conversion to passing requests around. Further, it is due to grow massively with the arrival of the GPU scheduler. This patch simplifies the prototype by passing a parameter structure instead. Changing the parameter set in the future is then simply a matter of adding/removing items to the structure.

Note that the structure does not contain absolutely everything that is passed in. This is because the intention is to use this structure more extensively later in this patch series and more especially in the GPU scheduler that is coming soon. The latter requires hanging on to the structure as the final hardware submission can be delayed until long after the execbuf IOCTL has returned to user land. Thus it is unsafe to put anything in the structure that is local to the IOCTL call itself - such as the 'args' parameter. All entries must be copies of data or pointers to structures that are reference counted in some way and guaranteed to exist for the duration of the batch buffer's life.

v2: Rebased to newer tree and updated for changes to the command parser. Specifically, a code shuffle has required saving the batch start address in the params structure.

For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
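A sketch of what such a consolidation looks like (the field names are illustrative, not the exact i915 structure):

    #include <stdint.h>

    struct ring;
    struct intel_context;
    struct gem_object;
    struct request;

    struct execbuf_params {
        struct ring          *ring;
        struct intel_context *ctx;
        struct gem_object    *batch_obj;
        uint64_t              batch_start;   /* saved for the command parser */
        uint32_t              dispatch_flags;
        struct request       *request;       /* added later in the series    */
        /* NOTE: nothing IOCTL-local (like 'args') belongs in here. */
    };

    /* Before: int do_execbuf(ring, ctx, batch_obj, start, flags, ...);       */
    /* After: growing or shrinking the parameter set only touches the struct. */
    static int do_execbuf(struct execbuf_params *params)
    {
        (void)params;
        return 0;
    }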
2015-06-23  drm/i915: Set context in request from creation even in legacy mode (John Harrison)
In execlist mode, the context object pointer is written in to the request structure (and reference counted) at the point of request creation. In legacy mode, this only happens inside i915_add_request(). This patch updates the legacy code path to match the execlist version. This allows all the intermediate code between request creation and request submission to get at the context object given only a request structure. Thus negating the need to pass context pointers here, there and everywhere. v2: Moved the context reference so it does not need to be undone if the get_seqno() fails. v3: Fixed execlist mode always hitting a warning about invalid last_contexts (which don't exist in execlist mode). v4: Updated for new i915_gem_request_alloc() scheme. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Early alloc request in execbuff (John Harrison)
Start of explicit request management in the execbuffer code path. This patch adds a call to allocate a request structure before all the actual hardware work is done, thus guaranteeing that all that work is tagged by a known request. At present, nothing further is done with the request; the rest comes later in the series. The only noticeable change is that failure to get a request (e.g. due to lack of memory) will be caught earlier in the sequence. It now occurs right at the start, before any irreversible work has been done. v2: Simplified the error handling path. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: i915_add_request must not fail (John Harrison)
The i915_add_request() function is called to keep track of work that has been written to the ring buffer. It adds epilogue commands to track progress (seqno updates and such), moves the request structure onto the right list and does other such housekeeping tasks. However, the work itself has already been written to the ring and will get executed whether or not the add request call succeeds. So no matter what goes wrong, there isn't a whole lot of point in failing the call.

At the moment, this is fine(ish). If the add request does bail early on and not do the housekeeping, the request will still float around in the ring->outstanding_lazy_request field and be picked up next time. It means multiple pieces of work will be tagged as the same request and the driver can't actually wait for the first piece of work until something else has been submitted. But it all sort of hangs together.

This patch series is all about removing the OLR and guaranteeing that each piece of work gets its own personal request. That means that there is no more 'hoovering up of forgotten requests'. If the request does not get tracked then it will be leaked. Thus the add request call _must_ not fail. The previous patch should have already ensured that it _will_ not fail by removing the potential for running out of ring space. This patch enforces the rule by actually removing the early exit paths and the return code.

Note that if something does manage to fail and the epilogue commands don't get written to the ring, the driver will still hang together. The request will be added to the tracking lists. And as in the old case, any subsequent work will generate a new seqno which will suffice for marking the old one as complete.

v2: Improved WARNings (Tomas Elf review request).

For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
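The shape of that change, sketched with stand-in helpers (a WARN-style message replaces the old error return; this is not the driver's real code):

    #include <stdio.h>

    struct request { int unused; };

    static int emit_epilogue(struct request *req)   { (void)req; return 0; }
    static void track_request(struct request *req)  { (void)req; }

    /* Before: returned an int, and early exits left the request in the OLR. */
    /* After: void - the work is already on the ring, so always track it.    */
    static void add_request(struct request *req)
    {
        if (emit_epilogue(req) != 0) {
            /* The real patch WARNs; a later seqno still marks this done. */
            fprintf(stderr, "add_request: epilogue emission failed\n");
        }

        track_request(req);     /* unconditional: the request must not leak */
    }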
2015-06-23  drm/i915: Reserve ring buffer space for i915_add_request() commands (John Harrison)
It is a bad idea for i915_add_request() to fail. The work will already have been sent to the ring and will be processed, but there will not be any tracking or management of that work. The only way the add request call can fail is if it can't write its epilogue commands to the ring (cache flushing, seqno updates, interrupt signalling). The reasons for that are mostly down to running out of ring buffer space and the problems associated with trying to get some more.

This patch prevents that situation from happening in the first place. When a request is created, it marks sufficient space as reserved for the epilogue commands, thus guaranteeing that by the time the epilogue is written there will be plenty of space for it.

Note that a ring_begin() call is required to actually reserve the space (and do any potential waiting). However, that is not currently done at request creation time. This is because the ring_begin() code can allocate a request; hence calling begin() from the request allocation code would lead to infinite recursion! Later patches in this series remove the need for begin() to do the allocate. At that point, it becomes safe for the allocate to call begin() and really reserve the space.

Until then, there is a potential for insufficient space to be available at the point of calling i915_add_request(). However, that would only be in the case where the request was created and immediately submitted without ever calling ring_begin() and adding any work to that request - which should never happen. And even if it does, and that request happens to fall into the tiny window of opportunity for failing due to being out of ring space, then does it really matter? The request wasn't doing anything in the first place.

v2: Updated the 'reserved space too small' warning to include the offending sizes. Added a 'cancel' operation to clean up when a request is abandoned. Added re-initialisation of tracking state after a buffer wrap to keep the sanity checks accurate.

v3: Incremented the reserved size to accommodate Ironlake (after finally managing to run on an ILK system). Also fixed missing wrap code in LRC mode.

v4: Added extra comment and removed duplicate WARN (feedback from Tomas).

For: VIZ-5115 CC: Tomas Elf <tomas.elf@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
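A toy model of the reservation bookkeeping being described (the size and names are made up for illustration):

    #define EPILOGUE_RESERVE 160U   /* assumed room for flush/seqno/interrupt */

    struct ringbuf {
        unsigned int space;         /* free bytes currently available           */
        unsigned int reserved;      /* earmarked for the add_request() epilogue */
    };

    /* Called at request-creation time: earmark room for the epilogue. */
    static void reserve_epilogue_space(struct ringbuf *rb)
    {
        rb->reserved = EPILOGUE_RESERVE;
    }

    /* Normal emission may only use what is left after the reservation. */
    static int ring_begin(struct ringbuf *rb, unsigned int bytes)
    {
        if (bytes + rb->reserved > rb->space)
            return -1;              /* the real code would wait or wrap here */
        rb->space -= bytes;
        return 0;
    }

    /* The epilogue itself is allowed to dip into the reserved portion. */
    static int ring_begin_epilogue(struct ringbuf *rb, unsigned int bytes)
    {
        rb->reserved = 0;           /* release the reservation for our own use */
        return ring_begin(rb, bytes);
    }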
2015-06-23  Merge remote-tracking branch 'airlied/drm-next' into HEAD (Daniel Vetter)
Backmerge drm-next because the conflicts between Ander's atomic fixes for 4.2 and Maarten's future work are getting too unwieldy to handle.

Conflicts:
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/i915/intel_ringbuffer.h

Just always take ours, same as git merge -X ours, but done by hand because I didn't trust git: it's confusing that it doesn't show any conflicts in the merge diff at all. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2015-06-23  drm/i915/gen8: Add WaFlushCoherentL3CacheLinesAtContextSwitch workaround (Arun Siluvery)
In Indirect context w/a batch buffer, +WaFlushCoherentL3CacheLinesAtContextSwitch:bdw v2: Add LRI commands to set/reset bit that invalidates coherent lines, update WA to include programming restrictions and exclude CHV as it is not required (Ville) v3: Avoid unnecessary read when it can be done by reading register once (Chris). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915/gen8: Add WaDisableCtxRestoreArbitration workaround (Arun Siluvery)
In Indirect and Per context w/a batch buffer, +WaDisableCtxRestoreArbitration Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915/gen8: Re-order init pipe_control in lrc mode (Arun Siluvery)
Some of the WA applied using WA batch buffers perform writes to scratch page. In the current flow WA are initialized before scratch obj is allocated. This patch reorders intel_init_pipe_control() to have a valid scratch obj before we initialize WA. v2: Check for valid scratch page before initializing WA as some of them perform writes to it. Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915/gen8: Add infrastructure to initialize WA batch buffers (Arun Siluvery)
Some of the WA are to be applied during context save but before restore, and some at the end of context save/restore but before executing the instructions in the ring. WA batch buffers are created for this purpose, as these WA cannot be applied using normal means. Each context has two registers to load the offsets of these batch buffers. If they are non-zero, the HW understands that it needs to execute these batches.

v1: In this version two separate ring_buffer objects were used to load WA instructions for indirect and per-context batch buffers and they were part of every context.

v2: Chris suggested including an additional page in the context and using it to load these WA instead of creating separate objects. This simplifies a lot of things as we need not explicitly pin/unpin them. Thomas Daniel further pointed out that the GuC is planning to use a similar setup to share data between the GuC and the driver, and the WA batch buffers could probably share that page. However, after discussions with Dave, who is implementing the GuC changes, he suggested using an independent page for these reasons: the GuC area might grow, and these WA are initialized only once and are not changed afterwards, so we can share them across all contexts. The page is updated with WA during render ring init. This has the advantage of not adding more special cases to default_context.

We don't know upfront the number of WA we will be applying using these batch buffers. For this reason the size was fixed earlier, but that is not a good idea. To fix this, the functions that load instructions are modified to report the number of commands inserted, and the size is now calculated after the batch is updated. A macro is introduced to add commands to these batch buffers which also checks for overflow and returns an error. We have a full page dedicated to these WA, so that should be sufficient for a good number of WA; anything more means we have major issues. The list for Gen8 is small, same for Gen9; maybe a few more get added going forward, but nothing close to filling the entire page. Chris suggested a two-pass approach but we agreed to go with the single page setup as it is a one-off routine and simpler code wins.

One additional option is an offset field, which would be helpful if we would like to have multiple batches at different offsets within the page and select them based on some criteria. This is not a requirement at this point but could help in future (Dave).

Chris provided some helpful macros and suggestions which further simplified the code; they will also help in reducing code duplication when WA for other Gens are added. Add detailed comments explaining restrictions. Use do {} while(0) for the wa_ctx_emit() macro.

(Many thanks to Chris, Dave and Thomas for their reviews and inputs)

Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
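An illustrative reconstruction of the kind of emit macro described (overflow-checked, do {} while(0), builders reporting the number of dwords inserted); the details are assumptions, not the verbatim i915 macro:

    #include <errno.h>
    #include <stdint.h>

    struct wa_ctx {
        uint32_t *batch;    /* the single page shared by all contexts */
        uint32_t  index;    /* dwords written so far                  */
        uint32_t  max;      /* dwords that fit in the page            */
    };

    #define wa_ctx_emit(ctx, cmd) do {                          \
            if ((ctx)->index >= (ctx)->max)                     \
                return -ENOSPC;         /* overflow check */    \
            (ctx)->batch[(ctx)->index++] = (cmd);               \
    } while (0)

    /* Builders return the number of dwords inserted so the total size
     * can be calculated after the batch is updated. */
    static int build_indirect_ctx_wa(struct wa_ctx *ctx)
    {
        wa_ctx_emit(ctx, 0x0);  /* placeholder commands; the real ones are */
        wa_ctx_emit(ctx, 0x0);  /* LRIs, pipe controls, etc. per the WA    */
        return (int)ctx->index;
    }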
2015-06-23  drm/i915: Report an error when i915.reset prevents a reset (Chris Wilson)
If the user disables the GPU reset using the i915.reset parameter and one occurs, report that we failed to reset the GPU. If we return early, as we currently do, then we leave all state intact (with a hung GPU) and clients block forever waiting for their requests to complete. Testcase: igt/gem_eio Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> [danvet: Mark i915.reset as an unsafe module option, as discussed with Chris.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23  drm/i915: Fix up KMS Kconfig removal patch (Daniel Vetter)
The module pciid list got lost, but somehow most distros seem to force-load drm drivers early and no one noticed for a while. Bug introduced in

commit fd930478fb797e4cbaa799d9ddd970e9a1fa1b4a
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Jun 19 20:27:27 2015 +0100

    drm/i915: Remove KMS Kconfig option

Reported-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Damien Lespiau <damien.lespiau@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2015-06-23  xen-netback: fix a BUG() during initialization (Palik, Imre)
Commit edafc132baac ("xen-netback: making the bandwidth limiter runtime settable") introduced the capability to change the bandwidth rate limit at runtime. But it also introduced a possible crashing bug. If netback receives two XenbusStateConnected without getting the hotplug-status watch firing in between, then it will try to register the watches for the rate limiter again. But this triggers a BUG() in the watch registration code. The fix modifies connect() to remove the possibly existing packet-rate watches before trying to install those watches. This behaviour is in line with how connect() deals with the hotplug-status watch. Signed-off-by: Imre Palik <imrep@amazon.de> Cc: Matt Wilson <msw@amazon.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
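The fix follows a simple unregister-before-register pattern; a generic sketch with hypothetical helpers (not the xenbus API) looks like this:

    struct watch { int registered; };

    static void unregister_watch(struct watch *w)
    {
        w->registered = 0;          /* harmless if nothing was registered */
    }

    static int register_watch(struct watch *w)
    {
        if (w->registered)
            return -1;              /* the real code BUG()s on double registration */
        w->registered = 1;
        return 0;
    }

    static int connect(struct watch *rate_watch)
    {
        /* Two XenbusStateConnected events may arrive back to back, so    */
        /* drop any watch left over from the previous call before adding. */
        unregister_watch(rate_watch);
        return register_watch(rate_watch);
    }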
2015-06-23  bridge: multicast: restore router configuration on port link down/up (Satish Ashok)
When a port goes through a link down/up the multicast router configuration is not restored. Signed-off-by: Satish Ashok <sashok@cumulusnetworks.com> Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Fixes: 0909e11758bd ("bridge: Add multicast_router sysfs entries") Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  bridge: multicast: start querier timer when running user-space stp (Nikolay Aleksandrov)
When STP is running in user-space and querier is configured, the querier timer is not started when a port goes to a non-blocking state. This patch unifies the user- and kernel-space stp multicast port enable path and enables it in all states different from blocking. Note that when a port goes in BR_STATE_DISABLED it's not enabled because that is handled in the beginning of the port list loop. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  NET: ROSE: Don't dereference NULL neighbour pointer. (Ralf Baechle)
A ROSE socket doesn't necessarily always have a neighbour pointer so check if the neighbour pointer is valid before dereferencing it. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Tested-by: Bernard Pidoux <f6bvp@free.fr> Cc: stable@vger.kernel.org #2.6.11+ Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  Merge tag 'nfc-next-4.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/sameo/nfc-next (David S. Miller)
NFC 4.2 2nd pull request

This one only contains a one liner fix for a typo that I introduced while cleaning some of the nfcmrvl patches that were part of the 1st 4.2 pull request.
2015-06-23  Merge branch 'for-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next (David S. Miller)
Johan Hedberg says:

====================
pull request: bluetooth-next 2015-06-18

Here's the final bluetooth-next pull request for 4.2.

- Cleanups & fixes to 802.15.4 code and related drivers
- Fix btusb driver memory leak
- New USB IDs for Atheros controllers
- Support for BCM4324B3 UART based Broadcom controller
- Fix for Bluetooth encryption key size handling
- Broadcom controller initialization fixes
- Support for Intel controller DDC parameters
- Support for multiple Bluetooth LE advertising instances
- Fix for HCI user channel cleanup path

Please let me know if there are any issues pulling. Thanks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  bonding: Display LACP info only to CAP_NET_ADMIN capable user (Mahesh Bandewar)
Actor and Partner details can be accessed via proc-fs, sys-fs entries or the netlink interface. These interfaces are world readable at the moment. The earlier patch series made the LACP communication secure to avoid nuisance attacks from within the same L2 domain, but it did not prevent someone unprivileged on the host from looking at that information and performing the same act. This patch simply avoids emitting those entries if the user in question does not have sufficient privileges. Signed-off-by: Mahesh Bandewar <maheshb@google.com> Signed-off-by: Andy Gospodarek <gospo@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
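The gating pattern is the standard capability check; a kernel-style fragment (the function name and output string are placeholders) would look like:

    #include <linux/capability.h>
    #include <linux/seq_file.h>

    static void bond_show_lacp_info(struct seq_file *seq)
    {
        /* Only privileged users get to see actor/partner LACP details. */
        if (!capable(CAP_NET_ADMIN))
            return;                 /* unprivileged readers see nothing */

        seq_puts(seq, "actor/partner details would be printed here\n");
    }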
2015-06-23  ALSA: pcm: Fix pcm_class sysfs output (Takashi Iwai)
The pcm_class sysfs of each PCM substream gives only "none" since the recent code change to embed the struct device. Fix the code to point directly to the embedded device object properly. Fixes: ef46c7af93f9 ('ALSA: pcm: Embed struct device') Cc: <stable@vger.kernel.org> # v4.0+ Signed-off-by: Takashi Iwai <tiwai@suse.de>
2015-06-23  tcp: Do not call tcp_fastopen_reset_cipher from interrupt context (Christoph Paasch)
tcp_fastopen_reset_cipher really cannot be called from interrupt context. It allocates the tcp_fastopen_context with GFP_KERNEL and calls crypto_alloc_cipher, which allocates all kind of stuff with GFP_KERNEL. Thus, we might sleep when the key-generation is triggered by an incoming TFO cookie-request which would then happen in interrupt context, as shown by enabling CONFIG_DEBUG_ATOMIC_SLEEP:

[ 36.001813] BUG: sleeping function called from invalid context at mm/slub.c:1266
[ 36.003624] in_atomic(): 1, irqs_disabled(): 0, pid: 1016, name: packetdrill
[ 36.004859] CPU: 1 PID: 1016 Comm: packetdrill Not tainted 4.1.0-rc7 #14
[ 36.006085] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
[ 36.008250] 00000000000004f2 ffff88007f8838a8 ffffffff8171d53a ffff880075a084a8
[ 36.009630] ffff880075a08000 ffff88007f8838c8 ffffffff810967d3 ffff88007f883928
[ 36.011076] 0000000000000000 ffff88007f8838f8 ffffffff81096892 ffff88007f89be00
[ 36.012494] Call Trace:
[ 36.012953] <IRQ> [<ffffffff8171d53a>] dump_stack+0x4f/0x6d
[ 36.014085] [<ffffffff810967d3>] ___might_sleep+0x103/0x170
[ 36.015117] [<ffffffff81096892>] __might_sleep+0x52/0x90
[ 36.016117] [<ffffffff8118e887>] kmem_cache_alloc_trace+0x47/0x190
[ 36.017266] [<ffffffff81680d82>] ? tcp_fastopen_reset_cipher+0x42/0x130
[ 36.018485] [<ffffffff81680d82>] tcp_fastopen_reset_cipher+0x42/0x130
[ 36.019679] [<ffffffff81680f01>] tcp_fastopen_init_key_once+0x61/0x70
[ 36.020884] [<ffffffff81680f2c>] __tcp_fastopen_cookie_gen+0x1c/0x60
[ 36.022058] [<ffffffff816814ff>] tcp_try_fastopen+0x58f/0x730
[ 36.023118] [<ffffffff81671788>] tcp_conn_request+0x3e8/0x7b0
[ 36.024185] [<ffffffff810e3872>] ? __module_text_address+0x12/0x60
[ 36.025327] [<ffffffff8167b2e1>] tcp_v4_conn_request+0x51/0x60
[ 36.026410] [<ffffffff816727e0>] tcp_rcv_state_process+0x190/0xda0
[ 36.027556] [<ffffffff81661f97>] ? __inet_lookup_established+0x47/0x170
[ 36.028784] [<ffffffff8167c2ad>] tcp_v4_do_rcv+0x16d/0x3d0
[ 36.029832] [<ffffffff812e6806>] ? security_sock_rcv_skb+0x16/0x20
[ 36.030936] [<ffffffff8167cc8a>] tcp_v4_rcv+0x77a/0x7b0
[ 36.031875] [<ffffffff816af8c3>] ? iptable_filter_hook+0x33/0x70
[ 36.032953] [<ffffffff81657d22>] ip_local_deliver_finish+0x92/0x1f0
[ 36.034065] [<ffffffff81657f1a>] ip_local_deliver+0x9a/0xb0
[ 36.035069] [<ffffffff81657c90>] ? ip_rcv+0x3d0/0x3d0
[ 36.035963] [<ffffffff81657569>] ip_rcv_finish+0x119/0x330
[ 36.036950] [<ffffffff81657ba7>] ip_rcv+0x2e7/0x3d0
[ 36.037847] [<ffffffff81610652>] __netif_receive_skb_core+0x552/0x930
[ 36.038994] [<ffffffff81610a57>] __netif_receive_skb+0x27/0x70
[ 36.040033] [<ffffffff81610b72>] process_backlog+0xd2/0x1f0
[ 36.041025] [<ffffffff81611482>] net_rx_action+0x122/0x310
[ 36.042007] [<ffffffff81076743>] __do_softirq+0x103/0x2f0
[ 36.042978] [<ffffffff81723e3c>] do_softirq_own_stack+0x1c/0x30

This patch moves the call to tcp_fastopen_init_key_once to the places where a listener socket creates its TFO-state, which always happens in user-context (either from the setsockopt, or implicitly during the listen()-call). Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org> Fixes: 222e83d2e0ae ("tcp: switch tcp_fastopen key generation to net_get_random_once") Signed-off-by: Christoph Paasch <cpaasch@apple.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
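Schematically, the fix moves the one-time, possibly-sleeping initialisation from the packet receive path (softirq) to the listener setup path (process context). A self-contained sketch with hypothetical names:

    #include <stdbool.h>

    static bool key_ready;

    /* May sleep: the real code allocates with GFP_KERNEL and sets up crypto. */
    static void fastopen_init_key_once(void)
    {
        if (!key_ready)
            key_ready = true;       /* stands in for the allocation work */
    }

    /* After the patch: called where the listener is created - setsockopt or
     * listen() - which is always process context, so sleeping is fine. */
    static int setup_fastopen_listener(void)
    {
        fastopen_init_key_once();
        return 0;
    }

    /* The softirq path no longer allocates anything; the key already exists. */
    static void handle_incoming_tfo_cookie_request(void)
    {
    }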
2015-06-23  mvneta: add forgotten initialization of autonegotiation bits (Stas Sergeev)
The commit 898b2970e2c9 ("mvneta: implement SGMII-based in-band link state signaling") changed mvneta_adjust_link() so that it does not clear the auto-negotiation bits in MVNETA_GMAC_AUTONEG_CONFIG register. This was necessary for auto-negotiation mode to work. Unfortunately I haven't checked if these bits are ever initialized. It appears they are not. This patch adds the missing initialization of the auto-negotiation bits in the MVNETA_GMAC_AUTONEG_CONFIG register. It fixes the following regression: https://www.mail-archive.com/netdev@vger.kernel.org/msg67928.html Since the patch was tested to fix a regression, it should be applied to stable tree. Tested-by: Arnaud Ebalard <arno@natisbad.org> CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> CC: Florian Fainelli <f.fainelli@gmail.com> CC: netdev@vger.kernel.org CC: linux-kernel@vger.kernel.org CC: stable@vger.kernel.org Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  Merge branch 'macb-sama5d2' (David S. Miller)
Nicolas Ferre says:

====================
net/macb: add sama5d2 support

This series is basically the support for another flavor of the GEM IP configuration. It ended up being a series because of some little fixes made to the binding documentation before adding the new compatibility string.

Bye,

v2:
- fix bindings
- add sama5d2 compatibility string to the binding documentation
====================

Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  net/macb: add config for Atmel sama5d2 SoCs (Cyrille Pitchen)
Add the compatible string for Atmel sama5d2 SoC family as the configuration options differ from other instances of the GEM. Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  net/macb: bindings doc: add sama5d2 compatibility string (Nicolas Ferre)
Add sama5d2 to the binding documentation for this use of the GEM IP. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  net/macb: bindings doc/trivial: fix sama5d4 comment (Nicolas Ferre)
On sama5d4, we only have a GEM IP that is configured to do 10/100 Mbits. So the use of "Gigabit" can be confusing. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-23  net/macb: bindings doc: fix compatibility string (Nicolas Ferre)
In the driver and the DT bindings we use the "atmel" prefix. Fix it in the binding documentation. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: David S. Miller <davem@davemloft.net>