/* SPDX-License-Identifier: MIT
 *
 * Copyright © 2019 Intel Corporation
 */

#ifndef _INTEL_DSB_H
#define _INTEL_DSB_H

#include <linux/types.h>

#include "i915_reg_defs.h"

struct intel_atomic_state;
struct intel_crtc;
struct intel_crtc_state;
struct intel_display;
struct intel_dsb;
enum pipe;
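
/*
 * A DSB (Display State Buffer) is a per-pipe hardware engine that
 * executes a pre-built batch of display register writes (and waits)
 * without further CPU involvement. Each pipe exposes several engines,
 * selected via an intel_dsb_id.
 */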
enum intel_dsb_id {
	INTEL_DSB_0,
	INTEL_DSB_1,
	INTEL_DSB_2,

	I915_MAX_DSBS,
};

unsigned int intel_dsb_size(struct intel_dsb *dsb);
unsigned int intel_dsb_head(struct intel_dsb *dsb);
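
/*
 * Lifecycle: intel_dsb_prepare() allocates, pins and maps the command
 * buffer, intel_dsb_finish() seals it for execution, and
 * intel_dsb_cleanup() unpins and releases the underlying gem object.
 */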
struct intel_dsb *intel_dsb_prepare(struct intel_atomic_state *state,
				    struct intel_crtc *crtc,
				    enum intel_dsb_id dsb_id,
				    unsigned int max_cmds);
void intel_dsb_finish(struct intel_dsb *dsb);
void intel_dsb_gosub_finish(struct intel_dsb *dsb);
void intel_dsb_cleanup(struct intel_dsb *dsb);
int intel_dsb_exec_time_us(void);
void intel_dsb_reg_write(struct intel_dsb *dsb,
			 i915_reg_t reg, u32 val);
/*
 * Note: the DSB indexed write command has a high initial overhead, so
 * intel_dsb_reg_write_indexed() only beats intel_dsb_reg_write() when
 * the same register is written roughly five or more times in a row
 * (e.g. for LUT updates).
 */
void intel_dsb_reg_write_indexed(struct intel_dsb *dsb,
				 i915_reg_t reg, u32 val);
void intel_dsb_reg_write_masked(struct intel_dsb *dsb,
				i915_reg_t reg, u32 mask, u32 val);
void intel_dsb_noop(struct intel_dsb *dsb, int count);
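/*
 * Register writes emitted between intel_dsb_nonpost_start() and
 * intel_dsb_nonpost_end() are executed as non-posted writes.
 */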
void intel_dsb_nonpost_start(struct intel_dsb *dsb);
void intel_dsb_nonpost_end(struct intel_dsb *dsb);
void intel_dsb_interrupt(struct intel_dsb *dsb);
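/*
 * The wait helpers below emit wait instructions into the batch; the
 * waiting is done by the DSB engine itself, not the CPU.
 */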
void intel_dsb_wait_usec(struct intel_dsb *dsb, int count);
void intel_dsb_wait_vblanks(struct intel_dsb *dsb, int count);
void intel_dsb_wait_vblank_delay(struct intel_atomic_state *state,
				 struct intel_dsb *dsb);
void intel_dsb_wait_scanline_in(struct intel_atomic_state *state,
				struct intel_dsb *dsb,
				int lower, int upper);
void intel_dsb_wait_scanline_out(struct intel_atomic_state *state,
				 struct intel_dsb *dsb,
				 int lower, int upper);
void intel_dsb_vblank_evade(struct intel_atomic_state *state,
			    struct intel_dsb *dsb);
void intel_dsb_poll(struct intel_dsb *dsb,
		    i915_reg_t reg, u32 mask, u32 val,
		    int wait_us, int count);
void intel_dsb_gosub(struct intel_dsb *dsb,
		     struct intel_dsb *sub_dsb);
/*
 * With wait_for_vblank set, the chained DSB is started at the
 * undelayed vblank, using the hardware DEwake mechanism to hide
 * package C-state exit latency.
 */
void intel_dsb_chain(struct intel_atomic_state *state,
		     struct intel_dsb *dsb,
		     struct intel_dsb *chained_dsb,
		     bool wait_for_vblank);
void intel_dsb_commit(struct intel_dsb *dsb);
void intel_dsb_wait(struct intel_dsb *dsb);
void intel_dsb_irq_handler(struct intel_display *display,
			   enum pipe pipe, enum intel_dsb_id dsb_id);
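
/*
 * Typical flow, as a minimal sketch (illustrative only, not a verbatim
 * driver sequence; "reg", "val" and the max_cmds value of 128 are
 * placeholders, and a NULL return from intel_dsb_prepare() means the
 * caller should fall back to plain MMIO register writes):
 *
 *	struct intel_dsb *dsb;
 *
 *	dsb = intel_dsb_prepare(state, crtc, INTEL_DSB_0, 128);
 *	if (dsb) {
 *		intel_dsb_reg_write(dsb, reg, val);
 *		intel_dsb_finish(dsb);
 *		intel_dsb_commit(dsb);
 *		intel_dsb_wait(dsb);
 *		intel_dsb_cleanup(dsb);
 *	}
 */
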
#endif