|
Andrew Ruder says:
====================
miscellaneous dm9000 driver fixes
This is a collection of changes discovered while bringing a PXA270-based board
(Arcom ZEUS) with a Davicom DM9000A/B up to a more recent kernel (from 2.6.xx).
This addresses all of my earlier issues (August 2013) listed here:
http://marc.info/?l=linux-netdev&m=137598605603324&w=2
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
On the DM9000B, dm9000_msleep() is called during the dm9000_timeout()
routine. Since dm9000_timeout() holds the main spinlock through the
entire routine, mdelay() needs to be used rather than msleep().
Furthermore, the mutex_lock()/mutex_unlock() pair should be avoided so as
not to sleep with spinlocks held.
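A minimal sketch of the constraint (the lock field follows the driver's
board_info; the body is illustrative):

    static void dm9000_timeout_sketch(struct board_info *db)
    {
            unsigned long flags;

            spin_lock_irqsave(&db->lock, flags);
            /* ... link/TX recovery steps ... */
            mdelay(1);              /* busy-waits; safe under the spinlock */
            /* msleep(1); */        /* would sleep with the lock held: a bug */
            spin_unlock_irqrestore(&db->lock, flags);
    }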
Signed-off-by: Andrew Ruder <andrew.ruder@elecsyscorp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
On the DM9000A/DM9000B, force an initial check of the link status. The
DM9000A/B has a link-status-changed interrupt, but this bit isn't always
set out of reset when a cable is plugged in. This results in the driver
not seeing the attached cable's link status until the cable is removed
and plugged in again.
Signed-off-by: Andrew Ruder <andrew.ruder@elecsyscorp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since dm9000_interrupt() is already reading/clearing every set bit in
DM9000_ISR, this additional clear in dm9000_rx() (which is only called
by dm9000_interrupt()) is unnecessary and can be removed.
Signed-off-by: Andrew Ruder <andrew.ruder@elecsyscorp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The DM9000 uses level-triggered interrupts, but some systems (PXA270) only
support edge-triggered interrupts on GPIOs. Some changes are necessary to
ensure that interrupts are not triggered while the GPIO interrupt is
masked, or we will miss the interrupt forever.
* Add helper functions dm9000_mask_interrupts() and
dm9000_unmask_interrupts() for readability.
* dm9000_init_dm9000(): ensure that this function always leaves interrupts
masked regardless of the state when it entered the function. This is
primarily to support the situation in dm9000_open where the logic used
to go:
dm9000_open()
    dm9000_init_dm9000()
        unmask interrupts
    request_irq()
If an interrupt occurred between unmasking the interrupt and
requesting the irq, it would be missed forever, as the edge event would
never be seen by the GPIO hardware in the PXA270. This allows us to
change the logic (sketched in C after this list) to:
dm9000_open()
    dm9000_init_dm9000()
        dm9000_mask_interrupts()
    request_irq()
    dm9000_unmask_interrupts()
* dm9000_timeout(), dm9000_drv_resume(): Add the missing
dm9000_unmask_interrupts() now required by the change above.
* dm9000_shutdown(): Use mask helper function
* dm9000_interrupt(): Use mask/unmask helper functions
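A sketch of the reordered open path (error handling trimmed; irq_flags
stands in for however the trigger flags reach the driver):

    static int dm9000_open(struct net_device *dev)
    {
            struct board_info *db = netdev_priv(dev);
            int ret;

            /* dm9000_init_dm9000() now ends with dm9000_mask_interrupts() */
            dm9000_init_dm9000(dev);

            ret = request_irq(dev->irq, dm9000_interrupt, irq_flags,
                              dev->name, dev);
            if (ret)
                    return ret;

            /* only now can the GPIO hardware latch an edge for us */
            dm9000_unmask_interrupts(db);
            return 0;
    }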
Signed-off-by: Andrew Ruder <andrew.ruder@elecsyscorp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
* Change a hard-coded 0x3 to NCR_RST | NCR_MAC_LBK in dm9000_reset
* Every single place where dm9000_init_dm9000 was run, a dm9000_reset
  was called immediately beforehand. Fold dm9000_reset into
  dm9000_init_dm9000.
* The following commit updated the dm9000_probe reset routine to use NCR_RST
| NCR_MAC_LBK:
6741f40 DM9000B: driver initialization upgrade
and a later commit added a bug-fix to always reset the chip twice:
09ee9f8 dm9000: Implement full reset of DM9000 network device
Unfortunately, since the changes in 6741f40 were made by replacing the
dm9000_reset() call in dm9000_probe with the adjusted iow(), the changes in
09ee9f8 were not incorporated into the dm9000_probe reset.
Furthermore, it bypassed the requisite reset delay, causing some boards
to emit at least one "read wrong id ..." dev_err message during
dm9000_probe.
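Roughly, the combined entry point now looks like this (the delay lengths
are an assumption; register names are the driver's):

    static void dm9000_reset(struct board_info *db)
    {
            /* double reset in MAC-internal loopback, letting the chip
             * settle in between */
            iow(db, DM9000_NCR, NCR_RST | NCR_MAC_LBK);
            udelay(100);
            iow(db, DM9000_NCR, NCR_RST | NCR_MAC_LBK);
            udelay(100);
    }

    static void dm9000_init_dm9000(struct net_device *dev)
    {
            struct board_info *db = netdev_priv(dev);

            dm9000_reset(db);       /* every caller used to do this by hand */
            /* ... rest of the chip initialization ... */
    }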
Signed-off-by: Andrew Ruder <andrew.ruder@elecsyscorp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The DM9000 supports both active-high and active-low interrupts; which is in
use is configured via the attached EEPROM. In the device-tree case, make sure
that the DM9000 driver passes the correct flags to request_irq().
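A sketch of the idea, assuming the platform's IRQ setup has already
configured the trigger type from the device tree:

    static int dm9000_request_irq_sketch(struct net_device *dev)
    {
            unsigned long flags = irq_get_trigger_type(dev->irq);

            /* honour the DT-configured polarity instead of hard-coding one */
            return request_irq(dev->irq, dm9000_interrupt,
                               flags | IRQF_SHARED, dev->name, dev);
    }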
Signed-off-by: Andrew Ruder <andrew.ruder@elecsyscorp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A recent commit (a02eb4 "xen-netback: worse-case estimate in xenvif_rx_action is
underestimating") capped the slot estimation to MAX_SKB_FRAGS, but that triggers
the next BUG_ON a few lines down, as the packet consumes more slots than
estimated.
This patch introduces full_coalesce on the skb callback buffer, which is used
in start_new_rx_buffer() to decide whether netback needs to coalesce more
aggressively. By doing that, no packet should need more than
(XEN_NETIF_MAX_TX_SIZE + 1) / PAGE_SIZE data slots (excluding the optional GSO
slot, which carries no data and is therefore irrelevant here), as the provided
buffers are fully utilized.
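Assuming the usual 4 KiB pages, that bound works out to
(0xFFFF + 1) / 4096 = 16 data slots per packet.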
Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@gmail.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
o Uninitialized fields in the mailbox command structure
  caused commands to time out randomly due to garbage
  values, so initialize the structure to zero.
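The fix boils down to the usual pattern (the struct and field names here
are illustrative, not QLogic's):

    struct mbx_cmd {
            u32 arg[4];
            u32 rsp;
    } cmd;

    memset(&cmd, 0, sizeof(cmd));   /* firmware never sees garbage */
    cmd.arg[0] = 0x1;               /* hypothetical opcode; fill in only
                                     * the fields we actually use */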
Signed-off-by: Rajesh Borundia <rajesh.borundia@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If an MPLS packet requires segmentation, use mpls_features
to determine if the software implementation should be used.
As no driver advertises MPLS GSO segmentation, this will always be
the case.
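A sketch of the selection with the ethertype check written out (the actual
helper in net/core/dev.c may be structured differently):

    static netdev_features_t net_mpls_features_sketch(struct sk_buff *skb,
                                                      netdev_features_t features)
    {
            /* restrict the offered features for MPLS packets */
            if (skb->protocol == htons(ETH_P_MPLS_UC) ||
                skb->protocol == htons(ETH_P_MPLS_MC))
                    features &= skb->dev->mpls_features;

            return features;
    }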
I had not noticed that this was necessary before as software MPLS GSO
segmentation was already being used in my test environment. I believe that
the reason for that is the skbs in question always had fragments and the
driver I used does not advertise NETIF_F_FRAGLIST (which seems to be the
case for most drivers). Thus software segmentation was activated by
skb_gso_ok().
This introduces the overhead of an extra call to skb_network_protocol()
in the case where CONFIG_NET_MPLS_GSO is set and
skb->ip_summed == CHECKSUM_NONE.
Thanks to Jesse Gross for prompting me to investigate this.
Signed-off-by: Simon Horman <horms@verge.net.au>
Acked-by: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Quoting David Miller:
"At the moment you call register_netdev() the device is visible, notifications
are sent to userspace, and userland tools can try to bring the interface up
and see the incorrect link state, before you do the netif_carrier_off().
Said another way, between the register_netdev() and netif_carrier_off() call,
userspace can see the device in an inconsistent state."
So call netif_carrier_off() prior to register_netdev().
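Sketched, with a hypothetical driver prefix:

    static int foo_probe_sketch(struct platform_device *pdev,
                                struct net_device *ndev)
    {
            /* set the link state before the device becomes visible */
            netif_carrier_off(ndev);

            return register_netdev(ndev);
    }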
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Tegra has been switching to an intermediate frequency (pll_p_clk) forever.
The CPUFreq core now has better support for handling notifications for these
frequencies, so we can adapt Tegra's driver to it.
Also do a WARN() if clk_set_parent() fails while moving back to pll_x, as we
should have at least restored the earlier frequency on error.
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Douglas Anderson recently pointed out an interesting problem due to which
udelay() was expiring earlier than it should.
While transitioning between frequencies, a few platforms may temporarily
switch to a stable frequency while waiting for the main PLL to stabilize.
For example: when we transition between very low frequencies on exynos, like
between 200MHz and 300MHz, we may temporarily switch to a PLL running at
800MHz. No CPUFREQ notification is sent for that. That means there's a period
of time when we're running at 800MHz but loops_per_jiffy is calibrated for a
frequency between 200MHz and 300MHz. And so udelay() behaves badly.
To get this fixed in a generic way, introduce another set of callbacks,
get_intermediate() and target_intermediate(), used only by drivers that
implement target_index() and leave CPUFREQ_ASYNC_NOTIFICATION unset.
get_intermediate() should return a stable intermediate frequency the platform
wants to switch to, and target_intermediate() should set the CPU to that
frequency before jumping to the frequency corresponding to 'index'. The core
will take care of sending notifications, and the driver doesn't have to
handle them in target_intermediate() or target_index().
NOTE: ->target_index() should restore to policy->restore_freq in case of
failure, as the core would send notifications for that.
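A sketch of a driver filling in the new hooks (the clocks and the foo_
prefix are illustrative; the callback signatures are as introduced here):

    static struct clk *cpu_clk, *pll_x_clk, *intermediate_clk;

    /* report the stable frequency we pass through, in kHz */
    static unsigned int foo_get_intermediate(struct cpufreq_policy *policy,
                                             unsigned int index)
    {
            return clk_get_rate(intermediate_clk) / 1000;
    }

    /* switch to it; the core sends the notifications around this */
    static int foo_target_intermediate(struct cpufreq_policy *policy,
                                       unsigned int index)
    {
            return clk_set_parent(cpu_clk, intermediate_clk);
    }

    static int foo_target_index(struct cpufreq_policy *policy,
                                unsigned int index)
    {
            /* ... reprogram pll_x for 'index', then switch back ... */
            return clk_set_parent(cpu_clk, pll_x_clk);
    }

    static struct cpufreq_driver foo_cpufreq_driver = {
            .get_intermediate       = foo_get_intermediate,
            .target_intermediate    = foo_target_intermediate,
            .target_index           = foo_target_index,
    };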
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Doing so allows the hotplug events generated by the connector to be
properly handled by the DRM poll helpers.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Calling the drm_helper_hpd_irq_event() helper can sleep, so instead of
invoking it directly from the interrupt handler, schedule a work queue
and run it from there.
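The standard pattern, sketched with illustrative names (the work struct is
set up with INIT_WORK() at init time):

    static void foo_hpd_work(struct work_struct *work)
    {
            struct foo_output *output =
                    container_of(work, struct foo_output, hpd_work);

            /* may sleep: we are in process context here */
            drm_helper_hpd_irq_event(output->drm);
    }

    static irqreturn_t foo_irq(int irq, void *data)
    {
            struct foo_output *output = data;

            schedule_work(&output->hpd_work);  /* defer; can't sleep here */
            return IRQ_HANDLED;
    }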
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Enable hardware cursor support on Tegra124. Earlier generations support
the hardware cursor to some degree as well, but not in a way that can be
generically exposed.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The DRM core can now cope with drivers that don't have an associated
struct drm_bus, so the host1x implementation is no longer useful.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
With the recent addition of the drm_dev_set_unique() function, devices can
now be registered without requiring a drm_bus. Add a brief description
to the DRM docbook to show how that can be achieved.
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Describe how devices are registered using the drm_*_init() functions.
Adding this to docbook requires a largish set of changes to the comments
in drm_{pci,usb,platform}.c since they are doxygen-style rather than
proper kernel-doc and therefore mess with the docbook generation.
While at it, mark usage of drm_put_dev() as discouraged in favour of
calling drm_dev_unregister() and drm_dev_unref() directly.
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Add a helper function that allows drivers to statically set the unique
name of the device. This will allow platform and USB drivers to get rid
of their DRM bus implementations and directly use drm_dev_alloc() and
drm_dev_register().
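Sketched usage for a platform device (driver names and the format string
are hypothetical):

    static int foo_probe(struct platform_device *pdev)
    {
            struct drm_device *drm;
            int err;

            drm = drm_dev_alloc(&foo_drm_driver, &pdev->dev);
            if (!drm)
                    return -ENOMEM;

            err = drm_dev_set_unique(drm, "platform:%s",
                                     dev_name(&pdev->dev));
            if (err < 0)
                    goto unref;

            err = drm_dev_register(drm, 0);
            if (err < 0)
                    goto unref;

            return 0;

    unref:
            drm_dev_unref(drm);
            return err;
    }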
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The internal host1x_{,un}register_client() functions can potentially be
confused with the public host1x_client_{,un}register() functions.
Rename them to host1x_{add,del}_client() to remove some of the possible
confusion.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The function is never used outside of the source file and therefore can
be locally scoped.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Tegra124 is mostly backwards-compatible with Tegra114. However, Tegra124
supports a few more features (e.g. interlacing, ...). Introduce a new
compatible string and TMDS tables to cope with these differences.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Accessing the CRC debugfs file will hang the system if the SOR is not
enabled, so make sure that it stays enabled until the CRC has been
read.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
In some cases the pixel clock used to be incorrect, which is why it had
to be recomputed. It turns out that the reason it wasn't correct is that
it was used wrongly; if used correctly, there's no need for the
recomputation.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The shift clock divider is highly dependent on the type of output, so
push computation of it down into the output drivers. The old code used
to work merely by accident.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Program the shift clock divider in tegra_crtc_setup_clk() since that's
where the divider is computed, so passing it around can be avoided.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Assert the DSI controller's reset when the driver is unloaded to reduce
power consumption and to put the controller into a known state for
subsequent driver reloads.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
When disabling the DSI controller, the code wasn't really doing what it
was supposed to do.
|
|
To prevent the enable and disable operations from potentially running
multiple times, add guards that return early when the output is already
in the targeted state.
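The guard is the usual early return (the enabled flag is an assumed
field; the disable path mirrors it):

    static void foo_output_enable(struct foo_output *output)
    {
            if (output->enabled)
                    return;         /* already on: nothing to do */

            /* ... hardware enable sequence ... */

            output->enabled = true;
    }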
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The packet sequencer needs to be programmed depending on the video mode
of the attached peripheral. Add support for non-burst video modes with
sync events (as opposed to sync pulses) and select either sequence
depending on the video mode.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The DSI controllers are powered by a (typically 1.2V) regulator. Usually
this supply is always on, so there was no need to support enabling or
disabling it thus far. But in order not to consume any power when DSI is
inactive, give the driver a chance to enable or disable the supply as needed.
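Sketch of the two touch points (the supply field and names are assumed;
the regulator itself would be acquired at probe via devm_regulator_get()):

    static int foo_dsi_power_on(struct foo_dsi *dsi)
    {
            return regulator_enable(dsi->supply);
    }

    static void foo_dsi_power_off(struct foo_dsi *dsi)
    {
            regulator_disable(dsi->supply);
    }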
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
A bunch of registers are initialized to 0 during driver probe. It
turns out that none of these are actually needed, so they can simply be
dropped.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The pixel format enumeration values used by the Tegra DSI controller
don't match those defined by the DSI framework. Make sure to convert
them to the internal representation before writing the value to the register.
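A sketch of the conversion; the framework enum values are real, but the
hardware encodings shown are illustrative:

    static int foo_dsi_get_format(enum mipi_dsi_pixel_format format, u32 *out)
    {
            switch (format) {
            case MIPI_DSI_FMT_RGB888:
                    *out = 3;       /* illustrative register value */
                    return 0;
            case MIPI_DSI_FMT_RGB666:
                    *out = 2;
                    return 0;
            case MIPI_DSI_FMT_RGB666_PACKED:
                    *out = 1;
                    return 0;
            case MIPI_DSI_FMT_RGB565:
                    *out = 0;
                    return 0;
            default:
                    return -EINVAL;
            }
    }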
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
For some reason when the PW*_ENABLE and PM*_ENABLE fields are cleared
during disable, the HDMI output stops working properly. Resetting and
initializing doesn't help.
Comment out those accesses for now until it has been determined what to
do about them.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Disable LVDS mode according to register documentation. It seems like
this has no effect on the operation of HDMI, but it's probably a good
idea to do this anyway.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
This reflects the power-up sequence as described in the documentation,
but it doesn't seem to be strictly necessary to get HDMI to work.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Clocks are never enabled or disabled in atomic context, so we can use
the clk_prepare_enable() and clk_disable_unprepare() helpers instead.
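That is, the prepare/enable pairs collapse into the combined helpers:

    static int foo_clk_on(struct foo *priv)
    {
            /* never called from atomic context, so this may sleep */
            return clk_prepare_enable(priv->clk);   /* prepare + enable */
    }

    static void foo_clk_off(struct foo *priv)
    {
            clk_disable_unprepare(priv->clk);       /* disable + unprepare */
    }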
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Schematics indicate that the AVDD_HDMI_PLL supply should be enabled
prior to the AVDD_HDMI supply.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The generic Tegra output code already sets up the clocks properly, so
there's no need to do it again when the HDMI output is enabled.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Revert commit 18ebc0f404d5 "drm/tegra: hdmi: Enable VDD earlier for
hotplug/DDC" and instead add a new supply for the +5V pin on the HDMI
connector.
The vdd-supply property refers to the regulator that supplies the
AVDD_HDMI input on Tegra, rather than the +5V HDMI connector pin. This
was never a problem before, because all boards had that pin hooked up to
a regulator that was always on. Starting with Dalmore and continuing
with Venice2, the +5V pin is controllable via a GPIO. For reasons
unknown, the GPIO ended up as the controlling GPIO of the AVDD_HDMI
supply in the Dalmore and Venice2 DTS files. But that's not correct.
Instead, a separate supply must be introduced so that the +5V pin can be
controlled separately from the supplies that feed the HDMI block within
Tegra.
A new hdmi-supply property is introduced that takes the place of the
vdd-supply, and the vdd-supply is now only enabled when HDMI is enabled
rather than all the time.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
For HDMI compliance both of these values need to be set to 1.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Setting the bits in this register is dependent on the output type driven
by the display controller. All output drivers already set these properly
so there is no need to do it here again.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The tegra_dc_format() and tegra_dc_setup_window() functions are only
used internally by the display controller driver. Move them upwards in
order to make them static and get rid of the function prototypes.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
V_DIRECTION is the name of the field in the documentation, so use that
for consistency. Also add the H_DIRECTION field for completeness.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The SOR allows the computation of a 32-bit CRC of the content that it
transmits. This functionality is exposed via debugfs and is useful to
verify proper operation of the SOR.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
YUYV is UYVY with swapped bytes. Luckily the Tegra DC hardware can swap
bytes during scan-out, so supporting YUYV is simply a matter of writing
the correct value to the byteswap register.
This patch modifies tegra_dc_format() to return the byte swap parameter
via an output parameter in addition to returning the pixel format. Many
other formats can potentially be supported in a similar way.
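A sketch of the modified helper (the DRM fourccs are real; the depth and
swap constants are illustrative):

    static u32 foo_dc_format(u32 fourcc, u32 *swap)
    {
            *swap = BYTE_SWAP_NOSWAP;

            switch (fourcc) {
            case DRM_FORMAT_UYVY:
                    return WIN_COLOR_DEPTH_YCbCr422;
            case DRM_FORMAT_YUYV:
                    *swap = BYTE_SWAP_SWAP2;        /* swap during scan-out */
                    return WIN_COLOR_DEPTH_YCbCr422;
            default:
                    return WIN_COLOR_DEPTH_B8G8R8A8;
            }
    }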
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Remove extern keyword from function prototypes since it isn't needed and
drop an unnecessary forward declaration.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
I've fumbled my own idea and enthusiastically wrapped all the
getconnector code with the connection_mutex. But we only need it to
chase the connector->encoder link. Even there it's not really needed
since races with userspace won't matter, but better to be paranoid and
consistent about this stuff.
If we grab it everywhere, connector probe callbacks can't grab it
themselves, which means they'll deadlock. i915 does that for the load
detect pipe. Furthermore i915 needs to do a ww dance since we also
need to grab the mutex of the load detect crtc.
This is a regression from
commit 6e9f798d91c526982cca0026cd451e8fdbf18aaf
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Thu May 29 23:54:47 2014 +0200
drm: Split connection_mutex out of mode_config.mutex (v3)
Cc: Rob Clark <robdclark@gmail.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Now that we're hoping to have resolved all of the problems with
video.use_native_backlight=1, make that the default at last.
Link: http://marc.info/?l=linux-acpi&m=139716088401106&w=2
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|