In some cases we may receive a mode config that has a different
CRTC<->encoder mapping than the current configuration. In that case, we
need to disable any re-routed encoders before setting the mode,
otherwise they may not pick up the new CRTC (for example, if the output
types are incompatible).
Tested-by: Kristian Høgsberg <krh@bitplanet.net>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
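
The idea can be sketched as an extra pass over the encoders before the new
mode is programmed. The helper below is illustrative only: the name
disable_rerouted_encoders is made up and this is not the actual patch. It
assumes the drm_crtc_helper structures of the time, where each encoder
tracks its current CRTC and exposes a dpms callback.

/* Sketch only: shut down any encoder whose CRTC assignment is about to
 * change, so it cleanly picks up the new CRTC afterwards. */
static void disable_rerouted_encoders(struct drm_device *dev,
                                      struct drm_mode_set *set)
{
        struct drm_encoder *encoder;

        list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
                struct drm_encoder_helper_funcs *funcs =
                        encoder->helper_private;

                /* Still routed to a CRTC other than the requested one? */
                if (!encoder->crtc || encoder->crtc == set->crtc)
                        continue;

                /* Turn it off before the mode set re-routes it. */
                if (funcs && funcs->dpms)
                        funcs->dpms(encoder, DRM_MODE_DPMS_OFF);
                encoder->crtc = NULL;
        }
}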
Check the error paths within intel_pipe_set_base() so that we first
clean up and then report back the error.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
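
The pattern is the usual goto-based unwind: anything acquired before a
failure must be released before the error is returned. A rough sketch
under assumed names: example_pipe_set_base is hypothetical, and
i915_gem_object_pin/unpin and i915_gem_object_set_to_gtt_domain stand in
for the GEM helpers of that era rather than quoting the real
intel_display.c code.

static int example_pipe_set_base(struct drm_crtc *crtc,
                                 struct drm_gem_object *obj)
{
        struct drm_device *dev = crtc->dev;
        int ret;

        mutex_lock(&dev->struct_mutex);

        ret = i915_gem_object_pin(obj, PAGE_SIZE);       /* assumed helper */
        if (ret)
                goto out_unlock;

        ret = i915_gem_object_set_to_gtt_domain(obj, 1); /* assumed helper */
        if (ret)
                goto out_unpin;

        /* ... program the display base address here ... */

        mutex_unlock(&dev->struct_mutex);
        return 0;

out_unpin:
        i915_gem_object_unpin(obj);     /* undo the pin before failing */
out_unlock:
        mutex_unlock(&dev->struct_mutex);
        return ret;                     /* report the error back */
}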
When mode setting is first initialized, the driver will call into
drm_helper_initial_config() to set up an initial output and framebuffer
configuration. This routine is responsible for probing the available
connectors, encoders, and crtcs, looking for modes and putting together
something reasonable (where reasonable is defined as "allows kernel
messages to be visible on as many displays as possible").

However, the code was a bit too aggressive in setting default modes when
none were found on a given connector. Even if some connectors had modes,
any connector found lacking modes would have the default 800x600 mode
added to its mode list, which in some cases could cause problems further
down the line. In my case, the LVDS was perfectly available, but the
initial config code added 800x600 modes to both of the detected but
unavailable HDMI connectors (which are on my non-existent docking
station). This ended up preventing later code from setting a mode on my
LVDS, which is bad.

This patch fixes that behavior by making the initial config code walk
through the connectors first, counting the available modes, before it
decides to add any default modes to a possibly connected output. It also
fixes the logic in drm_target_preferred() that was causing zeroed-out
modes to be set as the preferred mode for a given connector, even if no
modes were available.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
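
The shape of the fix is a two-pass walk over the connector list: count
first, add defaults only if nothing was found anywhere. A minimal sketch
of that idea follows; add_fallback_modes_if_needed is a made-up name and
drm_add_modes_noedid is assumed as the fallback helper, so this is not
the actual drm_crtc_helper.c change.

static void add_fallback_modes_if_needed(struct drm_device *dev)
{
        struct drm_connector *connector;
        int modes_found = 0;

        /* First pass: count the modes the connectors already have. */
        list_for_each_entry(connector, &dev->mode_config.connector_list, head)
                if (!list_empty(&connector->modes))
                        modes_found++;

        if (modes_found)
                return;

        /* Nothing detected anywhere: only now add a safe 800x600 fallback
         * to the connected connectors so kernel messages stay visible. */
        list_for_each_entry(connector, &dev->mode_config.connector_list, head)
                if (connector->status == connector_status_connected)
                        drm_add_modes_noedid(connector, 800, 600);
}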
This removes the requirement for user space to pin a buffer before
setting a mode whose framebuffer is backed by the pixels from that
buffer.
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Add mode setting support to the DRM layer.

This is a fairly big chunk of work that allows DRM drivers to provide
full output control and configuration capabilities to userspace. It was
motivated by several factors:
- the fb layer's APIs aren't suited for anything but simple
  configurations
- coordination between the fb layer, DRM layer, and various userspace
  drivers is poor to non-existent (radeonfb excepted)
- user-level mode setting drivers make displaying panic & oops
  messages more difficult
- suspend/resume of graphics state is possible in many more
  configurations with kernel-level support

This commit just adds the core DRM part of the mode setting APIs.
Driver-specific commits using these new structures and APIs will follow.
Co-authors: Jesse Barnes <jbarnes@virtuousgeek.org>, Jakob Bornecrantz <jakob@tungstengraphics.com>
Contributors: Alan Hourihane <alanh@tungstengraphics.com>, Maarten Maathuis <madman2003@gmail.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
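
For a driver, the new core boils down to registering its CRTCs, encoders
and connectors with the device's mode config. A rough sketch of the
expected driver-side usage, assuming the drm_crtc.h entry points
introduced here; the object names and the elided *_funcs callback tables
are purely illustrative.

static struct drm_crtc_funcs my_crtc_funcs;           /* callbacks elided */
static struct drm_encoder_funcs my_encoder_funcs;     /* callbacks elided */
static struct drm_connector_funcs my_connector_funcs; /* callbacks elided */

static struct drm_crtc my_crtc;
static struct drm_encoder my_encoder;
static struct drm_connector my_connector;

static void example_modeset_init(struct drm_device *dev)
{
        /* Device-wide mode setting state (CRTC/encoder/connector lists). */
        drm_mode_config_init(dev);

        /* Register one of each object with the core. */
        drm_crtc_init(dev, &my_crtc, &my_crtc_funcs);
        drm_encoder_init(dev, &my_encoder, &my_encoder_funcs,
                         DRM_MODE_ENCODER_LVDS);
        drm_connector_init(dev, &my_connector, &my_connector_funcs,
                           DRM_MODE_CONNECTOR_LVDS);

        /* Tell the core which encoder may drive which connector. */
        drm_mode_connector_attach_encoder(&my_connector, &my_encoder);
}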