drm fixes for 6.17-rc4

gpuvm:
 - fix some typos
 
 xe:
 - Fix user-fence race issue
 - Couple xe_vm fixes
 - Don't trigger rebind on initial dma-buf validation
 - Fix a build issue related to basename() POSIX vs GNU discrepancy
 
 amdgpu:
 - pin buffers while vmapping
 - UserQ fixes
 - Revert CSA fix
 - SR-IOV fix
 
 nouveau:
 - fix linear modifier
 - remove some dead code
 
 msm:
 - Core/GPU:
   - fix comment doc warning in gpuvm
   - fix build with KMS disabled
   - fix pgtable setup/teardown race
   - global fault counter fix
   - various error path fixes
   - GPU devcoredump snapshot fixes
   - handle in-place VM_BIND remaps to solve turnip vm update race
   - skip re-emitting IBs for unusable VMs
   - Don't use %pK through printk
   - moved display snapshot init earlier, fixing a crash
 - DPU:
   - Fixed crash in virtual plane checking code
   - Fixed mode comparison in virtual plane checking code
 - DSI:
   - Adjusted width of resolution-related registers
   - Fixed locking issue on 14nm PLLs
 - UBWC (per Bjorn's ack)
   - Added UBWC configuration for several missing platforms (fixing
     regression)
 
 mediatek:
 - Add error handling for old state CRTC in atomic_disable
 - Fix DSI host and panel bridge pre-enable order
 - Fix device/node reference count leaks in mtk_drm_get_all_drm_priv
 - mtk_hdmi: Fix inverted parameters in some regmap_update_bits calls
 
 tegra:
 - revert dma-buf change
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEEKbZHaGwW9KfbeusDHTzWXnEhr4FAmixFPQACgkQDHTzWXnE
 hr4d4Q/+Lf/meDiKrjUHiNwW+kZPeZhwrKEEbG2Opc9uZZdQpx44ZhOKM668dUx1
 YWAC2hcQFKWe0tz4Qsg2drPBGmoobhxmVScwXpY7xz1XN4taINssuZRwERp3bLmx
 l63tcQleGJOYvbCcg0SCfDeR4fd7a7iVf+PelFU04mbn/qorJBdYuy/XPTZlCond
 lf1MtaPH8f2M80WHmwb4g5K3mIdIa0HCt6Z9Z73mLRmF0b6KfcWF8159OLz03tiS
 voHDmFR4VsoZihDC4H+fAAitbQG+5vZN6YtX2CC4OtQLxeGkpkOniGwfjD/beKne
 edGHvWpzBchmK0Z6o64+7rzJa7RYEkpgryvAD/jEljG9kk0RzlGAYvXFV3aeLFLn
 +WJX8ZOw/SqkGX5/DDA0CpDb3Ty8QxEmS/MEHn/kAUNmBp0Uw2ad6b/4b3NqkDh9
 ftgWSobFubZJ8oSf8amPwpW7KjSsccogQzASiuDlV/XCCdlKvuxkn/ZX9XjHq0rr
 O0CX+4vLhl20grRstD4oIVsmrLYQXg97uqpOeaoMrzflKfyEzJ7bxDSBd78ZbfVf
 dlHSULx3cgusNe8HNWRnsnC0XeC1MBTXCbpTjrZYkqu1arLZN+r8LXOGRZrKBCPu
 PtlmU21JJKygKUTWeXzX1P2eBdBoTloOKJGkLWthR9KxFDqfppM=
 =QtCr
 -----END PGP SIGNATURE-----

Merge tag 'drm-fixes-2025-08-29' of https://gitlab.freedesktop.org/drm/kernel

Pull drm fixes from Dave Airlie:
 "Weekly fixes, feels a bit big.

  The major piece is msm fixes, then the usual amdgpu/xe along with some
  mediatek and nouveau fixes and a tegra revert.

  gpuvm:
   - fix some typos

  xe:
   - Fix user-fence race issue
   - Couple xe_vm fixes
   - Don't trigger rebind on initial dma-buf validation
   - Fix a build issue related to basename() POSIX vs GNU discrepancy

  amdgpu:
   - pin buffers while vmapping
   - UserQ fixes
   - Revert CSA fix
   - SR-IOV fix

  nouveau:
   - fix linear modifier
   - remove some dead code

  msm:
   - Core/GPU:
      - fix comment doc warning in gpuvm
      - fix build with KMS disabled
      - fix pgtable setup/teardown race
      - global fault counter fix
      - various error path fixes
      - GPU devcoredump snapshot fixes
      - handle in-place VM_BIND remaps to solve turnip vm update race
      - skip re-emitting IBs for unusable VMs
      - Don't use %pK through printk
      - moved display snapshot init earlier, fixing a crash
   - DPU:
      - Fixed crash in virtual plane checking code
      - Fixed mode comparison in virtual plane checking code
   - DSI:
      - Adjusted width of resolution-related registers
      - Fixed locking issue on 14nm PLLs
   - UBWC (per Bjorn's ack)
      - Added UBWC configuration for several missing platforms (fixing
        regression)

  mediatek:
   - Add error handling for old state CRTC in atomic_disable
   - Fix DSI host and panel bridge pre-enable order
   - Fix device/node reference count leaks in mtk_drm_get_all_drm_priv
   - mtk_hdmi: Fix inverted parameters in some regmap_update_bits calls

  tegra:
   - revert dma-buf change"

* tag 'drm-fixes-2025-08-29' of https://gitlab.freedesktop.org/drm/kernel: (56 commits)
  drm/mediatek: mtk_hdmi: Fix inverted parameters in some regmap_update_bits calls
  drm/amdgpu/userq: fix error handling of invalid doorbell
  drm/amdgpu: update firmware version checks for user queue support
  drm/amd/amdgpu: disable hwmon power1_cap* for gfx 11.0.3 on vf mode
  Revert "drm/amdgpu: fix incorrect vm flags to map bo"
  drm/amdgpu/gfx12: set MQD as appropriate for queue types
  drm/amdgpu/gfx11: set MQD as appropriate for queue types
  drm/xe: switch to local xbasename() helper
  drm/xe: Don't trigger rebind on initial dma-buf validation
  drm/xe/vm: Clear the scratch_pt pointer on error
  drm/xe/vm: Don't pin the vm_resv during validation
  drm/xe/xe_sync: avoid race during ufence signaling
  Revert "drm/tegra: Use dma_buf from GEM object instance"
  soc: qcom: use no-UBWC config for MSM8956/76
  soc: qcom: add configuration for MSM8929
  soc: qcom: ubwc: add more missing platforms
  soc: qcom: ubwc: use no-UBWC config for MSM8917
  drm/msm/dpu: Add a null ptr check for dpu_encoder_needs_modeset
  dt-bindings: display/msm: qcom,mdp5: drop lut clock
  drm/gpuvm: fix various typos in .c and .h gpuvm file
  ...
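
One of the xe fixes above switches to a local xbasename() helper because basename() comes in two incompatible flavors: the POSIX version from <libgen.h> may modify its argument, while the GNU version from <string.h> under _GNU_SOURCE takes a const pointer and does not, so which prototype a file sees depends on headers and libc. A minimal sketch of such a helper, assuming the usual strrchr() approach (not necessarily the exact code that landed):

	#include <string.h>

	static const char *xbasename(const char *path)
	{
		const char *sep = strrchr(path, '/');

		/* no separator: the whole string is the file name */
		return sep ? sep + 1 : path;
	}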
Linus Torvalds 2025-08-28 19:56:32 -07:00
commit 18ee2b9b7b
46 changed files with 467 additions and 309 deletions

@ -60,7 +60,6 @@ properties:
- const: bus
- const: core
- const: vsync
- const: lut
- const: tbu
- const: tbu_rt
# MSM8996 has additional iommu clock

@ -88,8 +88,8 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
}
r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size,
AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE |
AMDGPU_VM_PAGE_EXECUTABLE);
AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
AMDGPU_PTE_EXECUTABLE);
if (r) {
DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);

@ -285,6 +285,36 @@ static int amdgpu_dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
return ret;
}
static int amdgpu_dma_buf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
int ret;
/*
* Pin to keep buffer in place while it's vmap'ed. The actual
* domain is not that important as long as it's mapable. Using
* GTT and VRAM should be compatible with most use cases.
*/
ret = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT | AMDGPU_GEM_DOMAIN_VRAM);
if (ret)
return ret;
ret = drm_gem_dmabuf_vmap(dma_buf, map);
if (ret)
amdgpu_bo_unpin(bo);
return ret;
}
static void amdgpu_dma_buf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
drm_gem_dmabuf_vunmap(dma_buf, map);
amdgpu_bo_unpin(bo);
}
const struct dma_buf_ops amdgpu_dmabuf_ops = {
.attach = amdgpu_dma_buf_attach,
.pin = amdgpu_dma_buf_pin,
@ -294,8 +324,8 @@ const struct dma_buf_ops amdgpu_dmabuf_ops = {
.release = drm_gem_dmabuf_release,
.begin_cpu_access = amdgpu_dma_buf_begin_cpu_access,
.mmap = drm_gem_dmabuf_mmap,
.vmap = drm_gem_dmabuf_vmap,
.vunmap = drm_gem_dmabuf_vunmap,
.vmap = amdgpu_dma_buf_vmap,
.vunmap = amdgpu_dma_buf_vunmap,
};
/**
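
For context on the pinning above, a hedged sketch of the importer-side contract (dmabuf, dst and len are placeholder names, and a system-memory mapping is assumed): dma_buf_vmap() hands out a CPU pointer, so the exporter pins the buffer first and the pointer stays valid until dma_buf_vunmap() drops the pin:

	struct iosys_map map;
	int ret;

	ret = dma_buf_vmap(dmabuf, &map);	/* exporter pins, then maps */
	if (ret)
		return ret;
	memcpy(dst, map.vaddr, len);	/* safe: the pinned BO cannot be evicted */
	dma_buf_vunmap(dmabuf, &map);	/* unmaps, then drops the pin */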

@ -471,6 +471,7 @@ amdgpu_userq_create(struct drm_file *filp, union drm_amdgpu_userq *args)
if (index == (uint64_t)-EINVAL) {
drm_file_err(uq_mgr->file, "Failed to get doorbell for queue\n");
kfree(queue);
r = -EINVAL;
goto unlock;
}

@ -1612,9 +1612,9 @@ static int gfx_v11_0_sw_init(struct amdgpu_ip_block *ip_block)
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
if (!adev->gfx.disable_uq &&
adev->gfx.me_fw_version >= 2390 &&
adev->gfx.pfp_fw_version >= 2530 &&
adev->gfx.mec_fw_version >= 2600 &&
adev->gfx.me_fw_version >= 2420 &&
adev->gfx.pfp_fw_version >= 2580 &&
adev->gfx.mec_fw_version >= 2650 &&
adev->mes.fw_version[0] >= 120) {
adev->userq_funcs[AMDGPU_HW_IP_GFX] = &userq_mes_funcs;
adev->userq_funcs[AMDGPU_HW_IP_COMPUTE] = &userq_mes_funcs;
@ -4129,6 +4129,8 @@ static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
#endif
if (prop->tmz_queue)
tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, TMZ_MATCH, 1);
if (!prop->kernel_queue)
tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_NON_PRIV, 1);
mqd->cp_gfx_hqd_cntl = tmp;
/* set up cp_doorbell_control */
@ -4281,8 +4283,10 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH,
prop->allow_tunneling);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
if (prop->kernel_queue) {
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
}
if (prop->tmz_queue)
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TMZ, 1);
mqd->cp_hqd_pq_control = tmp;

@ -3026,6 +3026,8 @@ static int gfx_v12_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
#endif
if (prop->tmz_queue)
tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, TMZ_MATCH, 1);
if (!prop->kernel_queue)
tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_NON_PRIV, 1);
mqd->cp_gfx_hqd_cntl = tmp;
/* set up cp_doorbell_control */
@ -3175,8 +3177,10 @@ static int gfx_v12_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
(order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1));
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
if (prop->kernel_queue) {
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
}
if (prop->tmz_queue)
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TMZ, 1);
mqd->cp_hqd_pq_control = tmp;

@ -3458,14 +3458,16 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
effective_mode &= ~S_IWUSR;
/* not implemented yet for APUs other than GC 10.3.1 (vangogh) and 9.4.3 */
if (((adev->family == AMDGPU_FAMILY_SI) ||
((adev->flags & AMD_IS_APU) && (gc_ver != IP_VERSION(10, 3, 1)) &&
(gc_ver != IP_VERSION(9, 4, 3) && gc_ver != IP_VERSION(9, 4, 4)))) &&
(attr == &sensor_dev_attr_power1_cap_max.dev_attr.attr ||
attr == &sensor_dev_attr_power1_cap_min.dev_attr.attr ||
attr == &sensor_dev_attr_power1_cap.dev_attr.attr ||
attr == &sensor_dev_attr_power1_cap_default.dev_attr.attr))
return 0;
if (attr == &sensor_dev_attr_power1_cap_max.dev_attr.attr ||
attr == &sensor_dev_attr_power1_cap_min.dev_attr.attr ||
attr == &sensor_dev_attr_power1_cap.dev_attr.attr ||
attr == &sensor_dev_attr_power1_cap_default.dev_attr.attr) {
if (adev->family == AMDGPU_FAMILY_SI ||
((adev->flags & AMD_IS_APU) && gc_ver != IP_VERSION(10, 3, 1) &&
(gc_ver != IP_VERSION(9, 4, 3) && gc_ver != IP_VERSION(9, 4, 4))) ||
(amdgpu_sriov_vf(adev) && gc_ver == IP_VERSION(11, 0, 3)))
return 0;
}
/* not implemented yet for APUs having < GC 9.3.0 (Renoir) */
if (((adev->family == AMDGPU_FAMILY_SI) ||

@ -40,7 +40,7 @@
* mapping's backing &drm_gem_object buffers.
*
* &drm_gem_object buffers maintain a list of &drm_gpuva objects representing
* all existent GPU VA mappings using this &drm_gem_object as backing buffer.
* all existing GPU VA mappings using this &drm_gem_object as backing buffer.
*
* GPU VAs can be flagged as sparse, such that drivers may use GPU VAs to also
* keep track of sparse PTEs in order to support Vulkan 'Sparse Resources'.
@ -72,7 +72,7 @@
* but it can also be a 'dummy' object, which can be allocated with
* drm_gpuvm_resv_object_alloc().
*
* In order to connect a struct drm_gpuva its backing &drm_gem_object each
* In order to connect a struct drm_gpuva to its backing &drm_gem_object each
* &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each
* &drm_gpuvm_bo contains a list of &drm_gpuva structures.
*
@ -81,7 +81,7 @@
* This is ensured by the API through drm_gpuvm_bo_obtain() and
* drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding
* &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
* particular combination. If not existent a new instance is created and linked
* particular combination. If not present, a new instance is created and linked
* to the &drm_gem_object.
*
* &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used
@ -108,7 +108,7 @@
* sequence of operations to satisfy a given map or unmap request.
*
* Therefore the DRM GPU VA manager provides an algorithm implementing splitting
* and merging of existent GPU VA mappings with the ones that are requested to
* and merging of existing GPU VA mappings with the ones that are requested to
* be mapped or unmapped. This feature is required by the Vulkan API to
* implement Vulkan 'Sparse Memory Bindings' - drivers UAPIs often refer to this
* as VM BIND.
@ -119,7 +119,7 @@
* execute in order to integrate the new mapping cleanly into the current state
* of the GPU VA space.
*
* Depending on how the new GPU VA mapping intersects with the existent mappings
* Depending on how the new GPU VA mapping intersects with the existing mappings
* of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary amount
* of unmap operations, a maximum of two remap operations and a single map
* operation. The caller might receive no callback at all if no operation is
@ -139,16 +139,16 @@
* one unmap operation and one or two map operations, such that drivers can
* derive the page table update delta accordingly.
*
* Note that there can't be more than two existent mappings to split up, one at
* Note that there can't be more than two existing mappings to split up, one at
* the beginning and one at the end of the new mapping, hence there is a
* maximum of two remap operations.
*
* Analogous to drm_gpuvm_sm_map() drm_gpuvm_sm_unmap() uses &drm_gpuvm_ops to
* call back into the driver in order to unmap a range of GPU VA space. The
* logic behind this function is way simpler though: For all existent mappings
* logic behind this function is way simpler though: For all existing mappings
* enclosed by the given range unmap operations are created. For mappings which
* are only partically located within the given range, remap operations are
* created such that those mappings are split up and re-mapped partically.
* are only partially located within the given range, remap operations are
* created such that those mappings are split up and re-mapped partially.
*
* As an alternative to drm_gpuvm_sm_map() and drm_gpuvm_sm_unmap(),
* drm_gpuvm_sm_map_ops_create() and drm_gpuvm_sm_unmap_ops_create() can be used
@ -168,7 +168,7 @@
* provided helper functions drm_gpuva_map(), drm_gpuva_remap() and
* drm_gpuva_unmap() instead.
*
* The following diagram depicts the basic relationships of existent GPU VA
* The following diagram depicts the basic relationships of existing GPU VA
* mappings, a newly requested mapping and the resulting mappings as implemented
* by drm_gpuvm_sm_map() - it doesn't cover any arbitrary combinations of these.
*
@ -218,7 +218,7 @@
*
*
* 4) Existent mapping is a left aligned subset of the requested one, hence
* replace the existent one.
* replace the existing one.
*
* ::
*
@ -236,9 +236,9 @@
* and/or non-contiguous BO offset.
*
*
* 5) Requested mapping's range is a left aligned subset of the existent one,
* 5) Requested mapping's range is a left aligned subset of the existing one,
* but backed by a different BO. Hence, map the requested mapping and split
* the existent one adjusting its BO offset.
* the existing one adjusting its BO offset.
*
* ::
*
@ -271,9 +271,9 @@
* new: |-----|-----| (a.bo_offset=n, a'.bo_offset=n+1)
*
*
* 7) Requested mapping's range is a right aligned subset of the existent one,
* 7) Requested mapping's range is a right aligned subset of the existing one,
* but backed by a different BO. Hence, map the requested mapping and split
* the existent one, without adjusting the BO offset.
* the existing one, without adjusting the BO offset.
*
* ::
*
@ -304,7 +304,7 @@
*
* 9) Existent mapping is overlapped at the end by the requested mapping backed
* by a different BO. Hence, map the requested mapping and split up the
* existent one, without adjusting the BO offset.
* existing one, without adjusting the BO offset.
*
* ::
*
@ -334,9 +334,9 @@
* new: |-----|-----------| (a'.bo_offset=n, a.bo_offset=n+1)
*
*
* 11) Requested mapping's range is a centered subset of the existent one
* 11) Requested mapping's range is a centered subset of the existing one
* having a different backing BO. Hence, map the requested mapping and split
* up the existent one in two mappings, adjusting the BO offset of the right
* up the existing one in two mappings, adjusting the BO offset of the right
* one accordingly.
*
* ::
@ -351,7 +351,7 @@
* new: |-----|-----|-----| (a.bo_offset=n,b.bo_offset=m,a'.bo_offset=n+2)
*
*
* 12) Requested mapping is a contiguous subset of the existent one. Split it
* 12) Requested mapping is a contiguous subset of the existing one. Split it
* up, but indicate that the backing PTEs could be kept.
*
* ::
@ -367,7 +367,7 @@
*
*
* 13) Existent mapping is a right aligned subset of the requested one, hence
* replace the existent one.
* replace the existing one.
*
* ::
*
@ -386,7 +386,7 @@
*
*
* 14) Existent mapping is a centered subset of the requested one, hence
* replace the existent one.
* replace the existing one.
*
* ::
*
@ -406,7 +406,7 @@
*
* 15) Existent mappings is overlapped at the beginning by the requested mapping
* backed by a different BO. Hence, map the requested mapping and split up
* the existent one, adjusting its BO offset accordingly.
* the existing one, adjusting its BO offset accordingly.
*
* ::
*
@ -469,8 +469,8 @@
* make use of them.
*
* The below code is strictly limited to illustrate the generic usage pattern.
* To maintain simplicitly, it doesn't make use of any abstractions for common
* code, different (asyncronous) stages with fence signalling critical paths,
* To maintain simplicity, it doesn't make use of any abstractions for common
* code, different (asynchronous) stages with fence signalling critical paths,
* any other helpers or error handling in terms of freeing memory and dropping
* previously taken locks.
*
@ -479,7 +479,7 @@
* // Allocates a new &drm_gpuva.
* struct drm_gpuva * driver_gpuva_alloc(void);
*
* // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
* // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
* // structure in individual driver structures and lock the dma-resv with
* // drm_exec or similar helpers.
* int driver_mapping_create(struct drm_gpuvm *gpuvm,
@ -582,7 +582,7 @@
* .sm_step_unmap = driver_gpuva_unmap,
* };
*
* // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
* // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
* // structure in individual driver structures and lock the dma-resv with
* // drm_exec or similar helpers.
* int driver_mapping_create(struct drm_gpuvm *gpuvm,
@ -680,7 +680,7 @@
*
* This helper is here to provide lockless list iteration. Lockless as in, the
* iterator releases the lock immediately after picking the first element from
* the list, so list insertion deletion can happen concurrently.
* the list, so list insertion and deletion can happen concurrently.
*
* Elements popped from the original list are kept in a local list, so removal
* and is_empty checks can still happen while we're iterating the list.
@ -1160,7 +1160,7 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
}
/**
* drm_gpuvm_prepare_objects() - prepare all assoiciated BOs
* drm_gpuvm_prepare_objects() - prepare all associated BOs
* @gpuvm: the &drm_gpuvm
* @exec: the &drm_exec locking context
* @num_fences: the amount of &dma_fences to reserve
@ -1230,13 +1230,13 @@ drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_range);
/**
* drm_gpuvm_exec_lock() - lock all dma-resv of all assoiciated BOs
* drm_gpuvm_exec_lock() - lock all dma-resv of all associated BOs
* @vm_exec: the &drm_gpuvm_exec wrapper
*
* Acquires all dma-resv locks of all &drm_gem_objects the given
* &drm_gpuvm contains mappings of.
*
* Addionally, when calling this function with struct drm_gpuvm_exec::extra
* Additionally, when calling this function with struct drm_gpuvm_exec::extra
* being set the driver receives the given @fn callback to lock additional
* dma-resv in the context of the &drm_gpuvm_exec instance. Typically, drivers
* would call drm_exec_prepare_obj() from within this callback.
@ -1293,7 +1293,7 @@ fn_lock_array(struct drm_gpuvm_exec *vm_exec)
}
/**
* drm_gpuvm_exec_lock_array() - lock all dma-resv of all assoiciated BOs
* drm_gpuvm_exec_lock_array() - lock all dma-resv of all associated BOs
* @vm_exec: the &drm_gpuvm_exec wrapper
* @objs: additional &drm_gem_objects to lock
* @num_objs: the number of additional &drm_gem_objects to lock
@ -1588,7 +1588,7 @@ drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find);
/**
* drm_gpuvm_bo_obtain() - obtains and instance of the &drm_gpuvm_bo for the
* drm_gpuvm_bo_obtain() - obtains an instance of the &drm_gpuvm_bo for the
* given &drm_gpuvm and &drm_gem_object
* @gpuvm: The &drm_gpuvm the @obj is mapped in.
* @obj: The &drm_gem_object being mapped in the @gpuvm.
@ -1624,7 +1624,7 @@ drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
/**
* drm_gpuvm_bo_obtain_prealloc() - obtains and instance of the &drm_gpuvm_bo
* drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo
* for the given &drm_gpuvm and &drm_gem_object
* @__vm_bo: A pre-allocated struct drm_gpuvm_bo.
*
@ -1688,7 +1688,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_extobj_add);
* @vm_bo: the &drm_gpuvm_bo to add or remove
* @evict: indicates whether the object is evicted
*
* Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvms evicted list.
* Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvm's evicted list.
*/
void
drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)
@ -1790,7 +1790,7 @@ __drm_gpuva_remove(struct drm_gpuva *va)
* drm_gpuva_remove() - remove a &drm_gpuva
* @va: the &drm_gpuva to remove
*
* This removes the given &va from the underlaying tree.
* This removes the given &va from the underlying tree.
*
* It is safe to use this function using the safe versions of iterating the GPU
* VA space, such as drm_gpuvm_for_each_va_safe() and
@ -2358,7 +2358,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
*
* This function iterates the given range of the GPU VA space. It utilizes the
* &drm_gpuvm_ops to call back into the driver providing the operations to
* unmap and, if required, split existent mappings.
* unmap and, if required, split existing mappings.
*
* Drivers may use these callbacks to update the GPU VA space right away within
* the callback. In case the driver decides to copy and store the operations for
@ -2430,7 +2430,7 @@ static const struct drm_gpuvm_ops lock_ops = {
* remapped, and locks+prepares (drm_exec_prepare_object()) objects that
* will be newly mapped.
*
* The expected usage is:
* The expected usage is::
*
* .. code-block:: c
*
@ -2475,7 +2475,7 @@ static const struct drm_gpuvm_ops lock_ops = {
* required without the earlier DRIVER_OP_MAP. This is safe because we've
* already locked the GEM object in the earlier DRIVER_OP_MAP step.
*
* Returns: 0 on success or a negative error codec
* Returns: 0 on success or a negative error code
*/
int
drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
@ -2619,12 +2619,12 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
* @req_offset: the offset within the &drm_gem_object
*
* This function creates a list of operations to perform splitting and merging
* of existent mapping(s) with the newly requested one.
* of existing mapping(s) with the newly requested one.
*
* The list can be iterated with &drm_gpuva_for_each_op and must be processed
* in the given order. It can contain map, unmap and remap operations, but it
* also can be empty if no operation is required, e.g. if the requested mapping
* already exists is the exact same way.
* already exists in the exact same way.
*
* There can be an arbitrary amount of unmap operations, a maximum of two remap
* operations and a single map operation. The latter one represents the original

@ -387,19 +387,19 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
of_id = of_match_node(mtk_drm_of_ids, node);
if (!of_id)
continue;
goto next_put_node;
pdev = of_find_device_by_node(node);
if (!pdev)
continue;
goto next_put_node;
drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match);
if (!drm_dev)
continue;
goto next_put_device_pdev_dev;
temp_drm_priv = dev_get_drvdata(drm_dev);
if (!temp_drm_priv)
continue;
goto next_put_device_drm_dev;
if (temp_drm_priv->data->main_len)
all_drm_priv[CRTC_MAIN] = temp_drm_priv;
@ -411,10 +411,17 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
if (temp_drm_priv->mtk_drm_bound)
cnt++;
if (cnt == MAX_CRTC) {
of_node_put(node);
next_put_device_drm_dev:
put_device(drm_dev);
next_put_device_pdev_dev:
put_device(&pdev->dev);
next_put_node:
of_node_put(node);
if (cnt == MAX_CRTC)
break;
}
}
if (drm_priv->data->mmsys_dev_num == cnt) {
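
Assembled from the hunks above, the fixed loop plausibly takes the classic goto-unwind shape: each acquired reference gets a release label, and both the error paths and the success path fall through the ladder in reverse acquisition order (abbreviated sketch, not the verbatim result):

	for_each_child_of_node(dev->of_node, node) {
		pdev = of_find_device_by_node(node);
		if (!pdev)
			goto next_put_node;

		drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match);
		if (!drm_dev)
			goto next_put_device_pdev_dev;

		temp_drm_priv = dev_get_drvdata(drm_dev);
		if (!temp_drm_priv)
			goto next_put_device_drm_dev;

		/* ... record temp_drm_priv, count bound devices ... */

	next_put_device_drm_dev:
		put_device(drm_dev);
	next_put_device_pdev_dev:
		put_device(&pdev->dev);
	next_put_node:
		of_node_put(node);

		if (cnt == MAX_CRTC)
			break;
	}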

@ -1002,6 +1002,12 @@ static int mtk_dsi_host_attach(struct mipi_dsi_host *host,
return PTR_ERR(dsi->next_bridge);
}
/*
* set flag to request the DSI host bridge be pre-enabled before device bridge
* in the chain, so the DSI host is ready when the device bridge is pre-enabled
*/
dsi->next_bridge->pre_enable_prev_first = true;
drm_bridge_add(&dsi->bridge);
ret = component_add(host->dev, &mtk_dsi_component_ops);

@ -182,8 +182,8 @@ static inline struct mtk_hdmi *hdmi_ctx_from_bridge(struct drm_bridge *b)
static void mtk_hdmi_hw_vid_black(struct mtk_hdmi *hdmi, bool black)
{
regmap_update_bits(hdmi->regs, VIDEO_SOURCE_SEL,
VIDEO_CFG_4, black ? GEN_RGB : NORMAL_PATH);
regmap_update_bits(hdmi->regs, VIDEO_CFG_4,
VIDEO_SOURCE_SEL, black ? GEN_RGB : NORMAL_PATH);
}
static void mtk_hdmi_hw_make_reg_writable(struct mtk_hdmi *hdmi, bool enable)
@ -310,8 +310,8 @@ static void mtk_hdmi_hw_send_info_frame(struct mtk_hdmi *hdmi, u8 *buffer,
static void mtk_hdmi_hw_send_aud_packet(struct mtk_hdmi *hdmi, bool enable)
{
regmap_update_bits(hdmi->regs, AUDIO_PACKET_OFF,
GRL_SHIFT_R2, enable ? 0 : AUDIO_PACKET_OFF);
regmap_update_bits(hdmi->regs, GRL_SHIFT_R2,
AUDIO_PACKET_OFF, enable ? 0 : AUDIO_PACKET_OFF);
}
static void mtk_hdmi_hw_config_sys(struct mtk_hdmi *hdmi)

@ -292,7 +292,8 @@ static void mtk_plane_atomic_disable(struct drm_plane *plane,
wmb(); /* Make sure the above parameter is set before update */
mtk_plane_state->pending.dirty = true;
mtk_crtc_plane_disable(old_state->crtc, plane);
if (old_state && old_state->crtc)
mtk_crtc_plane_disable(old_state->crtc, plane);
}
static void mtk_plane_atomic_update(struct drm_plane *plane,

@ -11,7 +11,7 @@
static const unsigned int *gen7_0_0_external_core_regs[] __always_unused;
static const unsigned int *gen7_2_0_external_core_regs[] __always_unused;
static const unsigned int *gen7_9_0_external_core_regs[] __always_unused;
static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] __always_unused;
static const struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] __always_unused;
static const u32 gen7_9_0_cx_debugbus_blocks[] __always_unused;
#include "adreno_gen7_0_0_snapshot.h"
@ -174,8 +174,15 @@ static int a6xx_crashdumper_run(struct msm_gpu *gpu,
static int debugbus_read(struct msm_gpu *gpu, u32 block, u32 offset,
u32 *data)
{
u32 reg = A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_INDEX(offset) |
A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_BLK_SEL(block);
u32 reg;
if (to_adreno_gpu(gpu)->info->family >= ADRENO_7XX_GEN1) {
reg = A7XX_DBGC_CFG_DBGBUS_SEL_D_PING_INDEX(offset) |
A7XX_DBGC_CFG_DBGBUS_SEL_D_PING_BLK_SEL(block);
} else {
reg = A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_INDEX(offset) |
A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_BLK_SEL(block);
}
gpu_write(gpu, REG_A6XX_DBGC_CFG_DBGBUS_SEL_A, reg);
gpu_write(gpu, REG_A6XX_DBGC_CFG_DBGBUS_SEL_B, reg);
@ -198,11 +205,18 @@ static int debugbus_read(struct msm_gpu *gpu, u32 block, u32 offset,
readl((ptr) + ((offset) << 2))
/* read a value from the CX debug bus */
static int cx_debugbus_read(void __iomem *cxdbg, u32 block, u32 offset,
static int cx_debugbus_read(struct msm_gpu *gpu, void __iomem *cxdbg, u32 block, u32 offset,
u32 *data)
{
u32 reg = A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_INDEX(offset) |
A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_BLK_SEL(block);
u32 reg;
if (to_adreno_gpu(gpu)->info->family >= ADRENO_7XX_GEN1) {
reg = A7XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_INDEX(offset) |
A7XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_BLK_SEL(block);
} else {
reg = A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_INDEX(offset) |
A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_BLK_SEL(block);
}
cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_A, reg);
cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_B, reg);
@ -315,7 +329,8 @@ static void a6xx_get_debugbus_block(struct msm_gpu *gpu,
ptr += debugbus_read(gpu, block->id, i, ptr);
}
static void a6xx_get_cx_debugbus_block(void __iomem *cxdbg,
static void a6xx_get_cx_debugbus_block(struct msm_gpu *gpu,
void __iomem *cxdbg,
struct a6xx_gpu_state *a6xx_state,
const struct a6xx_debugbus_block *block,
struct a6xx_gpu_state_obj *obj)
@ -330,7 +345,7 @@ static void a6xx_get_cx_debugbus_block(void __iomem *cxdbg,
obj->handle = block;
for (ptr = obj->data, i = 0; i < block->count; i++)
ptr += cx_debugbus_read(cxdbg, block->id, i, ptr);
ptr += cx_debugbus_read(gpu, cxdbg, block->id, i, ptr);
}
static void a6xx_get_debugbus_blocks(struct msm_gpu *gpu,
@ -423,8 +438,9 @@ static void a7xx_get_debugbus_blocks(struct msm_gpu *gpu,
a6xx_state, &a7xx_debugbus_blocks[gbif_debugbus_blocks[i]],
&a6xx_state->debugbus[i + debugbus_blocks_count]);
}
}
a6xx_state->nr_debugbus = total_debugbus_blocks;
}
}
static void a6xx_get_debugbus(struct msm_gpu *gpu,
@ -526,7 +542,8 @@ static void a6xx_get_debugbus(struct msm_gpu *gpu,
int i;
for (i = 0; i < nr_cx_debugbus_blocks; i++)
a6xx_get_cx_debugbus_block(cxdbg,
a6xx_get_cx_debugbus_block(gpu,
cxdbg,
a6xx_state,
&cx_debugbus_blocks[i],
&a6xx_state->cx_debugbus[i]);
@ -759,15 +776,15 @@ static void a7xx_get_cluster(struct msm_gpu *gpu,
size_t datasize;
int i, regcount = 0;
/* Some clusters need a selector register to be programmed too */
if (cluster->sel)
in += CRASHDUMP_WRITE(in, cluster->sel->cd_reg, cluster->sel->val);
in += CRASHDUMP_WRITE(in, REG_A7XX_CP_APERTURE_CNTL_CD,
A7XX_CP_APERTURE_CNTL_CD_PIPE(cluster->pipe_id) |
A7XX_CP_APERTURE_CNTL_CD_CLUSTER(cluster->cluster_id) |
A7XX_CP_APERTURE_CNTL_CD_CONTEXT(cluster->context_id));
/* Some clusters need a selector register to be programmed too */
if (cluster->sel)
in += CRASHDUMP_WRITE(in, cluster->sel->cd_reg, cluster->sel->val);
for (i = 0; cluster->regs[i] != UINT_MAX; i += 2) {
int count = RANGE(cluster->regs, i);
@ -1796,6 +1813,7 @@ static void a7xx_show_shader(struct a6xx_gpu_state_obj *obj,
print_name(p, " - type: ", a7xx_statetype_names[block->statetype]);
print_name(p, " - pipe: ", a7xx_pipe_names[block->pipeid]);
drm_printf(p, " - location: %d\n", block->location);
for (i = 0; i < block->num_sps; i++) {
drm_printf(p, " - sp: %d\n", i);
@ -1873,6 +1891,7 @@ static void a7xx_show_dbgahb_cluster(struct a6xx_gpu_state_obj *obj,
print_name(p, " - pipe: ", a7xx_pipe_names[dbgahb->pipe_id]);
print_name(p, " - cluster-name: ", a7xx_cluster_names[dbgahb->cluster_id]);
drm_printf(p, " - context: %d\n", dbgahb->context_id);
drm_printf(p, " - location: %d\n", dbgahb->location_id);
a7xx_show_registers_indented(dbgahb->regs, obj->data, p, 4);
}
}

@ -419,47 +419,47 @@ static const struct a6xx_indexed_registers a6xx_indexed_reglist[] = {
REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL },
{ "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR,
REG_A6XX_CP_DRAW_STATE_DATA, 0x100, NULL },
{ "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
{ "CP_SQE_UCODE_DBG", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x8000, NULL },
{ "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR,
{ "CP_ROQ_DBG", REG_A6XX_CP_ROQ_DBG_ADDR,
REG_A6XX_CP_ROQ_DBG_DATA, 0, a6xx_get_cp_roq_size},
};
static const struct a6xx_indexed_registers a7xx_indexed_reglist[] = {
{ "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR,
REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL },
REG_A6XX_CP_SQE_STAT_DATA, 0x40, NULL },
{ "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR,
REG_A6XX_CP_DRAW_STATE_DATA, 0x100, NULL },
{ "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
{ "CP_SQE_UCODE_DBG", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x8000, NULL },
{ "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR,
REG_A7XX_CP_BV_SQE_STAT_DATA, 0x33, NULL },
{ "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
{ "CP_BV_SQE_STAT", REG_A7XX_CP_BV_SQE_STAT_ADDR,
REG_A7XX_CP_BV_SQE_STAT_DATA, 0x40, NULL },
{ "CP_BV_DRAW_STATE", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x100, NULL },
{ "CP_BV_SQE_UCODE_DBG_ADDR", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR,
{ "CP_BV_SQE_UCODE_DBG", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR,
REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x8000, NULL },
{ "CP_SQE_AC_STAT_ADDR", REG_A7XX_CP_SQE_AC_STAT_ADDR,
REG_A7XX_CP_SQE_AC_STAT_DATA, 0x33, NULL },
{ "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR,
{ "CP_SQE_AC_STAT", REG_A7XX_CP_SQE_AC_STAT_ADDR,
REG_A7XX_CP_SQE_AC_STAT_DATA, 0x40, NULL },
{ "CP_LPAC_DRAW_STATE", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR,
REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x100, NULL },
{ "CP_SQE_AC_UCODE_DBG_ADDR", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR,
{ "CP_SQE_AC_UCODE_DBG", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR,
REG_A7XX_CP_SQE_AC_UCODE_DBG_DATA, 0x8000, NULL },
{ "CP_LPAC_FIFO_DBG_ADDR", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR,
{ "CP_LPAC_FIFO_DBG", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR,
REG_A7XX_CP_LPAC_FIFO_DBG_DATA, 0x40, NULL },
{ "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR,
{ "CP_ROQ_DBG", REG_A6XX_CP_ROQ_DBG_ADDR,
REG_A6XX_CP_ROQ_DBG_DATA, 0, a7xx_get_cp_roq_size },
};
static const struct a6xx_indexed_registers a6xx_cp_mempool_indexed = {
"CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR,
"CP_MEM_POOL_DBG", REG_A6XX_CP_MEM_POOL_DBG_ADDR,
REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2060, NULL,
};
static const struct a6xx_indexed_registers a7xx_cp_bv_mempool_indexed[] = {
{ "CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR,
REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2100, NULL },
{ "CP_BV_MEMPOOL", REG_A7XX_CP_BV_MEM_POOL_DBG_ADDR,
REG_A7XX_CP_BV_MEM_POOL_DBG_DATA, 0x2100, NULL },
{ "CP_MEM_POOL_DBG", REG_A6XX_CP_MEM_POOL_DBG_ADDR,
REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2200, NULL },
{ "CP_BV_MEM_POOL_DBG", REG_A7XX_CP_BV_MEM_POOL_DBG_ADDR,
REG_A7XX_CP_BV_MEM_POOL_DBG_DATA, 0x2200, NULL },
};
#define DEBUGBUS(_id, _count) { .id = _id, .name = #_id, .count = _count }

@ -81,7 +81,7 @@ static const u32 gen7_0_0_debugbus_blocks[] = {
A7XX_DBGBUS_USPTP_7,
};
static struct gen7_shader_block gen7_0_0_shader_blocks[] = {
static const struct gen7_shader_block gen7_0_0_shader_blocks[] = {
{A7XX_TP0_TMO_DATA, 0x200, 4, 2, A7XX_PIPE_BR, A7XX_USPTP},
{A7XX_TP0_SMO_DATA, 0x80, 4, 2, A7XX_PIPE_BR, A7XX_USPTP},
{A7XX_TP0_MIPMAP_BASE_DATA, 0x3c0, 4, 2, A7XX_PIPE_BR, A7XX_USPTP},
@ -668,12 +668,19 @@ static const u32 gen7_0_0_sp_noncontext_pipe_lpac_usptp_registers[] = {
};
static_assert(IS_ALIGNED(sizeof(gen7_0_0_sp_noncontext_pipe_lpac_usptp_registers), 8));
/* Block: TPl1 Cluster: noncontext Pipeline: A7XX_PIPE_BR */
static const u32 gen7_0_0_tpl1_noncontext_pipe_br_registers[] = {
/* Block: TPl1 Cluster: noncontext Pipeline: A7XX_PIPE_NONE */
static const u32 gen7_0_0_tpl1_noncontext_pipe_none_registers[] = {
0x0b600, 0x0b600, 0x0b602, 0x0b602, 0x0b604, 0x0b604, 0x0b608, 0x0b60c,
0x0b60f, 0x0b621, 0x0b630, 0x0b633,
UINT_MAX, UINT_MAX,
};
static_assert(IS_ALIGNED(sizeof(gen7_0_0_tpl1_noncontext_pipe_none_registers), 8));
/* Block: TPl1 Cluster: noncontext Pipeline: A7XX_PIPE_BR */
static const u32 gen7_0_0_tpl1_noncontext_pipe_br_registers[] = {
0x0b600, 0x0b600,
UINT_MAX, UINT_MAX,
};
static_assert(IS_ALIGNED(sizeof(gen7_0_0_tpl1_noncontext_pipe_br_registers), 8));
/* Block: TPl1 Cluster: noncontext Pipeline: A7XX_PIPE_LPAC */
@ -695,7 +702,7 @@ static const struct gen7_sel_reg gen7_0_0_rb_rbp_sel = {
.val = 0x9,
};
static struct gen7_cluster_registers gen7_0_0_clusters[] = {
static const struct gen7_cluster_registers gen7_0_0_clusters[] = {
{ A7XX_CLUSTER_NONE, A7XX_PIPE_BR, STATE_NON_CONTEXT,
gen7_0_0_noncontext_pipe_br_registers, },
{ A7XX_CLUSTER_NONE, A7XX_PIPE_BV, STATE_NON_CONTEXT,
@ -764,7 +771,7 @@ static struct gen7_cluster_registers gen7_0_0_clusters[] = {
gen7_0_0_vpc_cluster_vpc_ps_pipe_bv_registers, },
};
static struct gen7_sptp_cluster_registers gen7_0_0_sptp_clusters[] = {
static const struct gen7_sptp_cluster_registers gen7_0_0_sptp_clusters[] = {
{ A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE,
gen7_0_0_sp_noncontext_pipe_br_hlsq_state_registers, 0xae00 },
{ A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_SP_TOP,
@ -914,7 +921,7 @@ static const u32 gen7_0_0_dpm_registers[] = {
};
static_assert(IS_ALIGNED(sizeof(gen7_0_0_dpm_registers), 8));
static struct gen7_reg_list gen7_0_0_reg_list[] = {
static const struct gen7_reg_list gen7_0_0_reg_list[] = {
{ gen7_0_0_gpu_registers, NULL },
{ gen7_0_0_cx_misc_registers, NULL },
{ gen7_0_0_dpm_registers, NULL },

@ -95,7 +95,7 @@ static const u32 gen7_2_0_debugbus_blocks[] = {
A7XX_DBGBUS_CCHE_2,
};
static struct gen7_shader_block gen7_2_0_shader_blocks[] = {
static const struct gen7_shader_block gen7_2_0_shader_blocks[] = {
{A7XX_TP0_TMO_DATA, 0x200, 6, 2, A7XX_PIPE_BR, A7XX_USPTP},
{A7XX_TP0_SMO_DATA, 0x80, 6, 2, A7XX_PIPE_BR, A7XX_USPTP},
{A7XX_TP0_MIPMAP_BASE_DATA, 0x3c0, 6, 2, A7XX_PIPE_BR, A7XX_USPTP},
@ -489,7 +489,7 @@ static const struct gen7_sel_reg gen7_2_0_rb_rbp_sel = {
.val = 0x9,
};
static struct gen7_cluster_registers gen7_2_0_clusters[] = {
static const struct gen7_cluster_registers gen7_2_0_clusters[] = {
{ A7XX_CLUSTER_NONE, A7XX_PIPE_BR, STATE_NON_CONTEXT,
gen7_2_0_noncontext_pipe_br_registers, },
{ A7XX_CLUSTER_NONE, A7XX_PIPE_BV, STATE_NON_CONTEXT,
@ -558,7 +558,7 @@ static struct gen7_cluster_registers gen7_2_0_clusters[] = {
gen7_0_0_vpc_cluster_vpc_ps_pipe_bv_registers, },
};
static struct gen7_sptp_cluster_registers gen7_2_0_sptp_clusters[] = {
static const struct gen7_sptp_cluster_registers gen7_2_0_sptp_clusters[] = {
{ A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE,
gen7_0_0_sp_noncontext_pipe_br_hlsq_state_registers, 0xae00 },
{ A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_SP_TOP,
@ -573,6 +573,8 @@ static struct gen7_sptp_cluster_registers gen7_2_0_sptp_clusters[] = {
gen7_0_0_sp_noncontext_pipe_lpac_usptp_registers, 0xaf80 },
{ A7XX_CLUSTER_NONE, A7XX_TP0_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_USPTP,
gen7_0_0_tpl1_noncontext_pipe_br_registers, 0xb600 },
{ A7XX_CLUSTER_NONE, A7XX_TP0_NCTX_REG, A7XX_PIPE_NONE, 0, A7XX_USPTP,
gen7_0_0_tpl1_noncontext_pipe_none_registers, 0xb600 },
{ A7XX_CLUSTER_NONE, A7XX_TP0_NCTX_REG, A7XX_PIPE_LPAC, 0, A7XX_USPTP,
gen7_0_0_tpl1_noncontext_pipe_lpac_registers, 0xb780 },
{ A7XX_CLUSTER_SP_PS, A7XX_SP_CTX0_3D_CPS_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE,
@ -737,7 +739,7 @@ static const u32 gen7_2_0_dpm_registers[] = {
};
static_assert(IS_ALIGNED(sizeof(gen7_2_0_dpm_registers), 8));
static struct gen7_reg_list gen7_2_0_reg_list[] = {
static const struct gen7_reg_list gen7_2_0_reg_list[] = {
{ gen7_2_0_gpu_registers, NULL },
{ gen7_2_0_cx_misc_registers, NULL },
{ gen7_2_0_dpm_registers, NULL },

@ -117,7 +117,7 @@ static const u32 gen7_9_0_cx_debugbus_blocks[] = {
A7XX_DBGBUS_GBIF_CX,
};
static struct gen7_shader_block gen7_9_0_shader_blocks[] = {
static const struct gen7_shader_block gen7_9_0_shader_blocks[] = {
{ A7XX_TP0_TMO_DATA, 0x0200, 6, 2, A7XX_PIPE_BR, A7XX_USPTP },
{ A7XX_TP0_SMO_DATA, 0x0080, 6, 2, A7XX_PIPE_BR, A7XX_USPTP },
{ A7XX_TP0_MIPMAP_BASE_DATA, 0x03C0, 6, 2, A7XX_PIPE_BR, A7XX_USPTP },
@ -1116,7 +1116,7 @@ static const struct gen7_sel_reg gen7_9_0_rb_rbp_sel = {
.val = 0x9,
};
static struct gen7_cluster_registers gen7_9_0_clusters[] = {
static const struct gen7_cluster_registers gen7_9_0_clusters[] = {
{ A7XX_CLUSTER_NONE, A7XX_PIPE_BR, STATE_NON_CONTEXT,
gen7_9_0_non_context_pipe_br_registers, },
{ A7XX_CLUSTER_NONE, A7XX_PIPE_BV, STATE_NON_CONTEXT,
@ -1185,7 +1185,7 @@ static struct gen7_cluster_registers gen7_9_0_clusters[] = {
gen7_9_0_vpc_pipe_bv_cluster_vpc_ps_registers, },
};
static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] = {
static const struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] = {
{ A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE,
gen7_9_0_non_context_sp_pipe_br_hlsq_state_registers, 0xae00},
{ A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_SP_TOP,
@ -1294,34 +1294,34 @@ static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] = {
gen7_9_0_tpl1_pipe_br_cluster_sp_ps_usptp_registers, 0xb000},
};
static struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = {
static const struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = {
{ "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR,
REG_A6XX_CP_SQE_STAT_DATA, 0x00040},
{ "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR,
REG_A6XX_CP_DRAW_STATE_DATA, 0x00200},
{ "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR,
{ "CP_ROQ_DBG", REG_A6XX_CP_ROQ_DBG_ADDR,
REG_A6XX_CP_ROQ_DBG_DATA, 0x00800},
{ "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
{ "CP_SQE_UCODE_DBG", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x08000},
{ "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
{ "CP_BV_DRAW_STATE", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x00200},
{ "CP_BV_ROQ_DBG_ADDR", REG_A7XX_CP_BV_ROQ_DBG_ADDR,
{ "CP_BV_ROQ_DBG", REG_A7XX_CP_BV_ROQ_DBG_ADDR,
REG_A7XX_CP_BV_ROQ_DBG_DATA, 0x00800},
{ "CP_BV_SQE_UCODE_DBG_ADDR", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR,
{ "CP_BV_SQE_UCODE_DBG", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR,
REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x08000},
{ "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR,
{ "CP_BV_SQE_STAT", REG_A7XX_CP_BV_SQE_STAT_ADDR,
REG_A7XX_CP_BV_SQE_STAT_DATA, 0x00040},
{ "CP_RESOURCE_TBL", REG_A7XX_CP_RESOURCE_TABLE_DBG_ADDR,
{ "CP_RESOURCE_TABLE_DBG", REG_A7XX_CP_RESOURCE_TABLE_DBG_ADDR,
REG_A7XX_CP_RESOURCE_TABLE_DBG_DATA, 0x04100},
{ "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR,
{ "CP_LPAC_DRAW_STATE", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR,
REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x00200},
{ "CP_LPAC_ROQ", REG_A7XX_CP_LPAC_ROQ_DBG_ADDR,
{ "CP_LPAC_ROQ_DBG", REG_A7XX_CP_LPAC_ROQ_DBG_ADDR,
REG_A7XX_CP_LPAC_ROQ_DBG_DATA, 0x00200},
{ "CP_SQE_AC_UCODE_DBG_ADDR", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR,
{ "CP_SQE_AC_UCODE_DBG", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR,
REG_A7XX_CP_SQE_AC_UCODE_DBG_DATA, 0x08000},
{ "CP_SQE_AC_STAT_ADDR", REG_A7XX_CP_SQE_AC_STAT_ADDR,
{ "CP_SQE_AC_STAT", REG_A7XX_CP_SQE_AC_STAT_ADDR,
REG_A7XX_CP_SQE_AC_STAT_DATA, 0x00040},
{ "CP_LPAC_FIFO_DBG_ADDR", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR,
{ "CP_LPAC_FIFO_DBG", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR,
REG_A7XX_CP_LPAC_FIFO_DBG_DATA, 0x00040},
{ "CP_AQE_ROQ_0", REG_A7XX_CP_AQE_ROQ_DBG_ADDR_0,
REG_A7XX_CP_AQE_ROQ_DBG_DATA_0, 0x00100},
@ -1337,7 +1337,7 @@ static struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = {
REG_A7XX_CP_AQE_STAT_DATA_1, 0x00040},
};
static struct gen7_reg_list gen7_9_0_reg_list[] = {
static const struct gen7_reg_list gen7_9_0_reg_list[] = {
{ gen7_9_0_gpu_registers, NULL},
{ gen7_9_0_cx_misc_registers, NULL},
{ gen7_9_0_cx_dbgc_registers, NULL},

@ -596,7 +596,7 @@ static void _dpu_crtc_complete_flip(struct drm_crtc *crtc)
spin_lock_irqsave(&dev->event_lock, flags);
if (dpu_crtc->event) {
DRM_DEBUG_VBL("%s: send event: %pK\n", dpu_crtc->name,
DRM_DEBUG_VBL("%s: send event: %p\n", dpu_crtc->name,
dpu_crtc->event);
trace_dpu_crtc_complete_flip(DRMID(crtc));
drm_crtc_send_vblank_event(crtc, dpu_crtc->event);

@ -730,6 +730,8 @@ bool dpu_encoder_needs_modeset(struct drm_encoder *drm_enc, struct drm_atomic_st
return false;
conn_state = drm_atomic_get_new_connector_state(state, connector);
if (!conn_state)
return false;
/**
* These checks are duplicated from dpu_encoder_update_topology() since

@ -31,14 +31,14 @@ static void dpu_setup_dspp_pcc(struct dpu_hw_dspp *ctx,
u32 base;
if (!ctx) {
DRM_ERROR("invalid ctx %pK\n", ctx);
DRM_ERROR("invalid ctx %p\n", ctx);
return;
}
base = ctx->cap->sblk->pcc.base;
if (!base) {
DRM_ERROR("invalid ctx %pK pcc base 0x%x\n", ctx, base);
DRM_ERROR("invalid ctx %p pcc base 0x%x\n", ctx, base);
return;
}

@ -1345,7 +1345,7 @@ static int dpu_kms_mmap_mdp5(struct dpu_kms *dpu_kms)
dpu_kms->mmio = NULL;
return ret;
}
DRM_DEBUG("mapped dpu address space @%pK\n", dpu_kms->mmio);
DRM_DEBUG("mapped dpu address space @%p\n", dpu_kms->mmio);
dpu_kms->vbif[VBIF_RT] = msm_ioremap_mdss(mdss_dev,
dpu_kms->pdev,
@ -1380,7 +1380,7 @@ static int dpu_kms_mmap_dpu(struct dpu_kms *dpu_kms)
dpu_kms->mmio = NULL;
return ret;
}
DRM_DEBUG("mapped dpu address space @%pK\n", dpu_kms->mmio);
DRM_DEBUG("mapped dpu address space @%p\n", dpu_kms->mmio);
dpu_kms->vbif[VBIF_RT] = msm_ioremap(pdev, "vbif");
if (IS_ERR(dpu_kms->vbif[VBIF_RT])) {

@ -1129,7 +1129,7 @@ static int dpu_plane_virtual_atomic_check(struct drm_plane *plane,
struct drm_plane_state *old_plane_state =
drm_atomic_get_old_plane_state(state, plane);
struct dpu_plane_state *pstate = to_dpu_plane_state(plane_state);
struct drm_crtc_state *crtc_state;
struct drm_crtc_state *crtc_state = NULL;
int ret;
if (IS_ERR(plane_state))
@ -1162,7 +1162,7 @@ static int dpu_plane_virtual_atomic_check(struct drm_plane *plane,
if (!old_plane_state || !old_plane_state->fb ||
old_plane_state->src_w != plane_state->src_w ||
old_plane_state->src_h != plane_state->src_h ||
old_plane_state->src_w != plane_state->src_w ||
old_plane_state->crtc_w != plane_state->crtc_w ||
old_plane_state->crtc_h != plane_state->crtc_h ||
msm_framebuffer_format(old_plane_state->fb) !=
msm_framebuffer_format(plane_state->fb))

@ -5,6 +5,8 @@
#include <linux/clk-provider.h>
#include <linux/platform_device.h>
#include <linux/pm_clock.h>
#include <linux/pm_runtime.h>
#include <dt-bindings/phy/phy.h>
#include "dsi_phy.h"
@ -511,30 +513,6 @@ int msm_dsi_cphy_timing_calc_v4(struct msm_dsi_dphy_timing *timing,
return 0;
}
static int dsi_phy_enable_resource(struct msm_dsi_phy *phy)
{
struct device *dev = &phy->pdev->dev;
int ret;
ret = pm_runtime_resume_and_get(dev);
if (ret)
return ret;
ret = clk_prepare_enable(phy->ahb_clk);
if (ret) {
DRM_DEV_ERROR(dev, "%s: can't enable ahb clk, %d\n", __func__, ret);
pm_runtime_put_sync(dev);
}
return ret;
}
static void dsi_phy_disable_resource(struct msm_dsi_phy *phy)
{
clk_disable_unprepare(phy->ahb_clk);
pm_runtime_put(&phy->pdev->dev);
}
static const struct of_device_id dsi_phy_dt_match[] = {
#ifdef CONFIG_DRM_MSM_DSI_28NM_PHY
{ .compatible = "qcom,dsi-phy-28nm-hpm",
@ -698,22 +676,20 @@ static int dsi_phy_driver_probe(struct platform_device *pdev)
if (ret)
return ret;
phy->ahb_clk = msm_clk_get(pdev, "iface");
if (IS_ERR(phy->ahb_clk))
return dev_err_probe(dev, PTR_ERR(phy->ahb_clk),
"Unable to get ahb clk\n");
platform_set_drvdata(pdev, phy);
ret = devm_pm_runtime_enable(&pdev->dev);
ret = devm_pm_runtime_enable(dev);
if (ret)
return ret;
/* PLL init will call into clk_register which requires
* register access, so we need to enable power and ahb clock.
*/
ret = dsi_phy_enable_resource(phy);
ret = devm_pm_clk_create(dev);
if (ret)
return ret;
ret = pm_clk_add(dev, "iface");
if (ret < 0)
return dev_err_probe(dev, ret, "Unable to get iface clk\n");
if (phy->cfg->ops.pll_init) {
ret = phy->cfg->ops.pll_init(phy);
if (ret)
@ -727,18 +703,19 @@ static int dsi_phy_driver_probe(struct platform_device *pdev)
return dev_err_probe(dev, ret,
"Failed to register clk provider\n");
dsi_phy_disable_resource(phy);
platform_set_drvdata(pdev, phy);
return 0;
}
static const struct dev_pm_ops dsi_phy_pm_ops = {
SET_RUNTIME_PM_OPS(pm_clk_suspend, pm_clk_resume, NULL)
};
static struct platform_driver dsi_phy_platform_driver = {
.probe = dsi_phy_driver_probe,
.driver = {
.name = "msm_dsi_phy",
.of_match_table = dsi_phy_dt_match,
.pm = &dsi_phy_pm_ops,
},
};
@ -764,9 +741,9 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
dev = &phy->pdev->dev;
ret = dsi_phy_enable_resource(phy);
ret = pm_runtime_resume_and_get(dev);
if (ret) {
DRM_DEV_ERROR(dev, "%s: resource enable failed, %d\n",
DRM_DEV_ERROR(dev, "%s: resume failed, %d\n",
__func__, ret);
goto res_en_fail;
}
@ -810,7 +787,7 @@ pll_restor_fail:
phy_en_fail:
regulator_bulk_disable(phy->cfg->num_regulators, phy->supplies);
reg_en_fail:
dsi_phy_disable_resource(phy);
pm_runtime_put(dev);
res_en_fail:
return ret;
}
@ -823,7 +800,7 @@ void msm_dsi_phy_disable(struct msm_dsi_phy *phy)
phy->cfg->ops.disable(phy);
regulator_bulk_disable(phy->cfg->num_regulators, phy->supplies);
dsi_phy_disable_resource(phy);
pm_runtime_put(&phy->pdev->dev);
}
void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
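
The probe conversion above hands the "iface" clock to the pm_clk framework, so runtime PM toggles it instead of the removed dsi_phy_enable_resource()/dsi_phy_disable_resource() pair. The core of that pattern, as a hedged sketch:

	/* file scope: route runtime PM through pm_clk */
	static const struct dev_pm_ops dsi_phy_pm_ops = {
		SET_RUNTIME_PM_OPS(pm_clk_suspend, pm_clk_resume, NULL)
	};

	/* in probe: */
	ret = devm_pm_runtime_enable(dev);
	if (ret)
		return ret;
	ret = devm_pm_clk_create(dev);	/* per-device clock list, auto-managed */
	if (ret)
		return ret;
	ret = pm_clk_add(dev, "iface");	/* clock now follows runtime PM state */
	if (ret < 0)
		return dev_err_probe(dev, ret, "Unable to get iface clk\n");

With that in place, pm_runtime_resume_and_get() implicitly enables the clock and pm_runtime_put() disables it.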

@ -104,7 +104,6 @@ struct msm_dsi_phy {
phys_addr_t lane_size;
int id;
struct clk *ahb_clk;
struct regulator_bulk_data *supplies;
struct msm_dsi_dphy_timing timing;

@ -325,25 +325,28 @@ static struct drm_info_list msm_debugfs_list[] = {
static int late_init_minor(struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
struct msm_drm_private *priv = dev->dev_private;
struct drm_device *dev;
struct msm_drm_private *priv;
int ret;
if (!minor)
return 0;
dev = minor->dev;
priv = dev->dev_private;
if (!priv->gpu_pdev)
return 0;
ret = msm_rd_debugfs_init(minor);
if (ret) {
DRM_DEV_ERROR(minor->dev->dev, "could not install rd debugfs\n");
DRM_DEV_ERROR(dev->dev, "could not install rd debugfs\n");
return ret;
}
ret = msm_perf_debugfs_init(minor);
if (ret) {
DRM_DEV_ERROR(minor->dev->dev, "could not install perf debugfs\n");
DRM_DEV_ERROR(dev->dev, "could not install perf debugfs\n");
return ret;
}

@ -95,7 +95,6 @@ void msm_gem_vma_get(struct drm_gem_object *obj)
void msm_gem_vma_put(struct drm_gem_object *obj)
{
struct msm_drm_private *priv = obj->dev->dev_private;
struct drm_exec exec;
if (atomic_dec_return(&to_msm_bo(obj)->vma_ref))
return;
@ -103,9 +102,13 @@ void msm_gem_vma_put(struct drm_gem_object *obj)
if (!priv->kms)
return;
#ifdef CONFIG_DRM_MSM_KMS
struct drm_exec exec;
msm_gem_lock_vm_and_obj(&exec, obj, priv->kms->vm);
put_iova_spaces(obj, priv->kms->vm, true, "vma_put");
drm_exec_fini(&exec); /* drop locks */
#endif
}
/*
@ -663,9 +666,13 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
static bool is_kms_vm(struct drm_gpuvm *vm)
{
#ifdef CONFIG_DRM_MSM_KMS
struct msm_drm_private *priv = vm->drm->dev_private;
return priv->kms && (priv->kms->vm == vm);
#else
return false;
#endif
}
/*
@ -1113,10 +1120,12 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
put_pages(obj);
}
if (msm_obj->flags & MSM_BO_NO_SHARE) {
if (obj->resv != &obj->_resv) {
struct drm_gem_object *r_obj =
container_of(obj->resv, struct drm_gem_object, _resv);
WARN_ON(!(msm_obj->flags & MSM_BO_NO_SHARE));
/* Drop reference we hold to shared resv obj: */
drm_gem_object_put(r_obj);
}

@ -100,7 +100,7 @@ struct msm_gem_vm {
*
* Only used for kernel managed VMs, unused for user managed VMs.
*
* Protected by @mm_lock.
* Protected by vm lock. See msm_gem_lock_vm_and_obj(), for ex.
*/
struct drm_mm mm;

@ -271,32 +271,37 @@ out:
return ret;
}
static int submit_lock_objects_vmbind(struct msm_gem_submit *submit)
{
unsigned flags = DRM_EXEC_INTERRUPTIBLE_WAIT | DRM_EXEC_IGNORE_DUPLICATES;
struct drm_exec *exec = &submit->exec;
int ret = 0;
drm_exec_init(&submit->exec, flags, submit->nr_bos);
drm_exec_until_all_locked (&submit->exec) {
ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1);
drm_exec_retry_on_contention(exec);
if (ret)
break;
ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1);
drm_exec_retry_on_contention(exec);
if (ret)
break;
}
return ret;
}
/* This is where we make sure all the bo's are reserved and pin'd: */
static int submit_lock_objects(struct msm_gem_submit *submit)
{
unsigned flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
struct drm_exec *exec = &submit->exec;
int ret;
int ret = 0;
if (msm_context_is_vmbind(submit->queue->ctx)) {
flags |= DRM_EXEC_IGNORE_DUPLICATES;
drm_exec_init(&submit->exec, flags, submit->nr_bos);
drm_exec_until_all_locked (&submit->exec) {
ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1);
drm_exec_retry_on_contention(exec);
if (ret)
return ret;
ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1);
drm_exec_retry_on_contention(exec);
if (ret)
return ret;
}
return 0;
}
if (msm_context_is_vmbind(submit->queue->ctx))
return submit_lock_objects_vmbind(submit);
drm_exec_init(&submit->exec, flags, submit->nr_bos);
@ -305,17 +310,17 @@ static int submit_lock_objects(struct msm_gem_submit *submit)
drm_gpuvm_resv_obj(submit->vm));
drm_exec_retry_on_contention(&submit->exec);
if (ret)
return ret;
break;
for (unsigned i = 0; i < submit->nr_bos; i++) {
struct drm_gem_object *obj = submit->bos[i].obj;
ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
drm_exec_retry_on_contention(&submit->exec);
if (ret)
return ret;
break;
}
}
return 0;
return ret;
}
static int submit_fence_sync(struct msm_gem_submit *submit)
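
Both the vmbind path split out above and the legacy path use the same drm_exec idiom; a minimal hedged sketch of it (exec, obj and nr_bos are placeholders):

	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, nr_bos);
	drm_exec_until_all_locked(&exec) {
		ret = drm_exec_prepare_obj(&exec, obj, 1);
		/* on contention: drop held locks and restart the loop body */
		drm_exec_retry_on_contention(&exec);
		if (ret)
			break;
	}
	/* on success, objects stay locked until drm_exec_fini(&exec) */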
@ -514,14 +519,15 @@ out:
*/
static void submit_cleanup(struct msm_gem_submit *submit, bool error)
{
if (error)
submit_unpin_objects(submit);
if (submit->exec.objects)
drm_exec_fini(&submit->exec);
if (error) {
submit_unpin_objects(submit);
/* job wasn't enqueued to scheduler, so early retirement: */
/* if job wasn't enqueued to scheduler, early retirement: */
if (error)
msm_submit_retire(submit);
}
}
void msm_submit_retire(struct msm_gem_submit *submit)
@ -769,12 +775,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
sync_file = sync_file_create(submit->user_fence);
if (!sync_file) {
if (!sync_file)
ret = -ENOMEM;
} else {
fd_install(out_fence_fd, sync_file->file);
args->fence_fd = out_fence_fd;
}
}
if (ret)
@ -812,10 +814,14 @@ out:
out_unlock:
mutex_unlock(&queue->lock);
out_post_unlock:
if (ret && (out_fence_fd >= 0)) {
put_unused_fd(out_fence_fd);
if (ret) {
if (out_fence_fd >= 0)
put_unused_fd(out_fence_fd);
if (sync_file)
fput(sync_file->file);
} else if (sync_file) {
fd_install(out_fence_fd, sync_file->file);
args->fence_fd = out_fence_fd;
}
if (!IS_ERR_OR_NULL(submit)) {
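
The fd handling change above reduces to making fd_install() the last, success-only step: publishing an fd is irreversible since userspace may use it immediately, whereas a reserved-but-uninstalled fd and an unpublished sync_file can both still be taken back:

	if (ret) {
		if (out_fence_fd >= 0)
			put_unused_fd(out_fence_fd);	/* release the reservation */
		if (sync_file)
			fput(sync_file->file);	/* drop the unpublished sync_file */
	} else if (sync_file) {
		fd_install(out_fence_fd, sync_file->file);	/* point of no return */
		args->fence_fd = out_fence_fd;
	}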

@ -319,13 +319,10 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
mutex_lock(&vm->mmu_lock);
/*
* NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
* NOTE: if not using pgtable preallocation, we cannot hold
* a lock across map/unmap which is also used in the job_run()
* path, as this can cause deadlock in job_run() vs shrinker/
* reclaim.
*
* Revisit this if we can come up with a scheme to pre-alloc pages
* for the pgtable in map/unmap ops.
*/
ret = vm_map_op(vm, &(struct msm_vm_map_op){
.iova = vma->va.addr,
@ -454,6 +451,8 @@ msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
struct op_arg {
unsigned flags;
struct msm_vm_bind_job *job;
const struct msm_vm_bind_op *op;
bool kept;
};
static void
@ -475,14 +474,18 @@ vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op)
}
static int
msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg)
msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *_arg)
{
struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job;
struct op_arg *arg = _arg;
struct msm_vm_bind_job *job = arg->job;
struct drm_gem_object *obj = op->map.gem.obj;
struct drm_gpuva *vma;
struct sg_table *sgt;
unsigned prot;
if (arg->kept)
return 0;
vma = vma_from_op(arg, &op->map);
if (WARN_ON(IS_ERR(vma)))
return PTR_ERR(vma);
@@ -602,15 +605,41 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
 }
 
 static int
-msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg)
+msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *_arg)
 {
-	struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job;
+	struct op_arg *arg = _arg;
+	struct msm_vm_bind_job *job = arg->job;
 	struct drm_gpuva *vma = op->unmap.va;
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 
 	vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj,
 	       vma->va.addr, vma->va.range);
 
+	/*
+	 * Detect in-place remap. Turnip does this to change the vma flags,
+	 * in particular MSM_VMA_DUMP. In this case we want to avoid actually
+	 * touching the page tables, as that would require synchronization
+	 * against SUBMIT jobs running on the GPU.
+	 */
+	if (op->unmap.keep &&
+	    (arg->op->op == MSM_VM_BIND_OP_MAP) &&
+	    (vma->gem.obj == arg->op->obj) &&
+	    (vma->gem.offset == arg->op->obj_offset) &&
+	    (vma->va.addr == arg->op->iova) &&
+	    (vma->va.range == arg->op->range)) {
+		/* We are only expecting a single in-place unmap+map cb pair: */
+		WARN_ON(arg->kept);
+
+		/* Leave the existing VMA in place, but signal that to the map cb: */
+		arg->kept = true;
+
+		/* Only flags are changing, so update that in-place: */
+		unsigned orig_flags = vma->flags & (DRM_GPUVA_USERBITS - 1);
+		vma->flags = orig_flags | arg->flags;
+
+		return 0;
+	}
+
 	if (!msm_vma->mapped)
 		goto out_close;
@@ -1271,6 +1300,7 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
 		const struct msm_vm_bind_op *op = &job->ops[i];
 		struct op_arg arg = {
 			.job = job,
+			.op = op,
 		};
 
 		switch (op->op) {
@@ -1460,12 +1490,8 @@ msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file)
 	if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) {
 		sync_file = sync_file_create(job->fence);
-		if (!sync_file) {
+		if (!sync_file)
 			ret = -ENOMEM;
-		} else {
-			fd_install(out_fence_fd, sync_file->file);
-			args->fence_fd = out_fence_fd;
-		}
 	}
 
 	if (ret)
@@ -1494,10 +1520,14 @@ out:
 out_unlock:
 	mutex_unlock(&queue->lock);
 out_post_unlock:
-	if (ret && (out_fence_fd >= 0)) {
-		put_unused_fd(out_fence_fd);
+	if (ret) {
+		if (out_fence_fd >= 0)
+			put_unused_fd(out_fence_fd);
+		if (sync_file)
+			fput(sync_file->file);
+	} else if (sync_file) {
+		fd_install(out_fence_fd, sync_file->file);
+		args->fence_fd = out_fence_fd;
 	}
 
 	if (!IS_ERR_OR_NULL(job)) {


@@ -465,6 +465,7 @@ static void recover_worker(struct kthread_work *work)
 	struct msm_gem_submit *submit;
 	struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu);
 	char *comm = NULL, *cmd = NULL;
+	struct task_struct *task;
 	int i;
 
 	mutex_lock(&gpu->lock);
@@ -482,16 +483,20 @@ static void recover_worker(struct kthread_work *work)
 	/* Increment the fault counts */
 	submit->queue->faults++;
-	if (submit->vm) {
+
+	task = get_pid_task(submit->pid, PIDTYPE_PID);
+	if (!task)
+		gpu->global_faults++;
+	else {
 		struct msm_gem_vm *vm = to_msm_vm(submit->vm);
 
 		vm->faults++;
 
 		/*
 		 * If userspace has opted-in to VM_BIND (and therefore userspace
-		 * management of the VM), faults mark the VM as unusuable. This
+		 * management of the VM), faults mark the VM as unusable. This
 		 * matches vulkan expectations (vulkan is the main target for
-		 * VM_BIND)
+		 * VM_BIND).
 		 */
 		if (!vm->managed)
 			msm_gem_vm_unusable(submit->vm);
@@ -553,8 +558,15 @@ static void recover_worker(struct kthread_work *work)
 		unsigned long flags;
 
 		spin_lock_irqsave(&ring->submit_lock, flags);
-		list_for_each_entry(submit, &ring->submits, node)
+		list_for_each_entry(submit, &ring->submits, node) {
+			/*
+			 * If the submit uses an unusable vm make sure
+			 * we don't actually run it
+			 */
+			if (to_msm_vm(submit->vm)->unusable)
+				submit->nr_cmds = 0;
 			gpu->funcs->submit(gpu, submit);
+		}
 		spin_unlock_irqrestore(&ring->submit_lock, flags);
 	}
 }


@@ -14,7 +14,9 @@
 struct msm_iommu {
 	struct msm_mmu base;
 	struct iommu_domain *domain;
-	atomic_t pagetables;
+
+	struct mutex init_lock; /* protects pagetables counter and prr_page */
+	int pagetables;
 	struct page *prr_page;
 	struct kmem_cache *pt_cache;
@@ -227,7 +229,8 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
 	 * If this is the last attached pagetable for the parent,
 	 * disable TTBR0 in the arm-smmu driver
 	 */
-	if (atomic_dec_return(&iommu->pagetables) == 0) {
+	mutex_lock(&iommu->init_lock);
+	if (--iommu->pagetables == 0) {
 		adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);
 
 		if (adreno_smmu->set_prr_bit) {
@@ -236,6 +239,7 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
 			iommu->prr_page = NULL;
 		}
 	}
+	mutex_unlock(&iommu->init_lock);
 
 	free_io_pgtable_ops(pagetable->pgtbl_ops);
 	kfree(pagetable);
@@ -568,9 +572,12 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m
 	 * If this is the first pagetable that we've allocated, send it back to
 	 * the arm-smmu driver as a trigger to set up TTBR0
 	 */
-	if (atomic_inc_return(&iommu->pagetables) == 1) {
+	mutex_lock(&iommu->init_lock);
+	if (iommu->pagetables++ == 0) {
 		ret = adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, &ttbr0_cfg);
 		if (ret) {
+			iommu->pagetables--;
+			mutex_unlock(&iommu->init_lock);
 			free_io_pgtable_ops(pagetable->pgtbl_ops);
 			kfree(pagetable);
 			return ERR_PTR(ret);
@@ -595,6 +602,7 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m
 			adreno_smmu->set_prr_bit(adreno_smmu->cookie, true);
 		}
 	}
+	mutex_unlock(&iommu->init_lock);
 
 	/* Needed later for TLB flush */
 	pagetable->parent = parent;
@@ -730,7 +738,7 @@ struct msm_mmu *msm_iommu_new(struct device *dev, unsigned long quirks)
 	iommu->domain = domain;
 	msm_mmu_init(&iommu->base, dev, &funcs, MSM_MMU_IOMMU);
-	atomic_set(&iommu->pagetables, 0);
+	mutex_init(&iommu->init_lock);
 
 	ret = iommu_attach_device(iommu->domain, dev);
 	if (ret) {


@@ -275,6 +275,12 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv)
 	if (ret)
 		return ret;
 
+	ret = msm_disp_snapshot_init(ddev);
+	if (ret) {
+		DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
+		return ret;
+	}
+
 	ret = priv->kms_init(ddev);
 	if (ret) {
 		DRM_DEV_ERROR(dev, "failed to load kms\n");
@@ -327,10 +333,6 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv)
 		goto err_msm_uninit;
 	}
 
-	ret = msm_disp_snapshot_init(ddev);
-	if (ret)
-		DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
-
 	drm_mode_config_reset(ddev);
 
 	return 0;


@@ -423,7 +423,7 @@ static struct msm_mdss *msm_mdss_init(struct platform_device *pdev, bool is_mdp5
 	if (IS_ERR(msm_mdss->mmio))
 		return ERR_CAST(msm_mdss->mmio);
 
-	dev_dbg(&pdev->dev, "mapped mdss address space @%pK\n", msm_mdss->mmio);
+	dev_dbg(&pdev->dev, "mapped mdss address space @%p\n", msm_mdss->mmio);
 
 	ret = msm_mdss_parse_data_bus_icc_path(&pdev->dev, msm_mdss);
 	if (ret)


@@ -594,10 +594,14 @@ by a particular renderpass/blit.
 	<reg32 offset="0x0600" name="DBGC_CFG_DBGBUS_SEL_A"/>
 	<reg32 offset="0x0601" name="DBGC_CFG_DBGBUS_SEL_B"/>
 	<reg32 offset="0x0602" name="DBGC_CFG_DBGBUS_SEL_C"/>
-	<reg32 offset="0x0603" name="DBGC_CFG_DBGBUS_SEL_D">
+	<reg32 offset="0x0603" name="DBGC_CFG_DBGBUS_SEL_D" variants="A6XX">
 		<bitfield high="7" low="0" name="PING_INDEX"/>
 		<bitfield high="15" low="8" name="PING_BLK_SEL"/>
 	</reg32>
+	<reg32 offset="0x0603" name="DBGC_CFG_DBGBUS_SEL_D" variants="A7XX-">
+		<bitfield high="7" low="0" name="PING_INDEX"/>
+		<bitfield high="24" low="16" name="PING_BLK_SEL"/>
+	</reg32>
 	<reg32 offset="0x0604" name="DBGC_CFG_DBGBUS_CNTLT">
 		<bitfield high="5" low="0" name="TRACEEN"/>
 		<bitfield high="14" low="12" name="GRANU"/>
@@ -3796,6 +3800,14 @@ by a particular renderpass/blit.
 	<reg32 offset="0x0030" name="CFG_DBGBUS_TRACE_BUF2"/>
 </domain>
 
+<domain name="A7XX_CX_DBGC" width="32">
+	<!-- Bitfields shifted, but otherwise the same: -->
+	<reg32 offset="0x0000" name="CFG_DBGBUS_SEL_A" variants="A7XX-">
+		<bitfield high="7" low="0" name="PING_INDEX"/>
+		<bitfield high="24" low="16" name="PING_BLK_SEL"/>
+	</reg32>
+</domain>
+
 <domain name="A6XX_CX_MISC" width="32" prefix="variant" varset="chip">
 	<reg32 offset="0x0001" name="SYSTEM_CACHE_CNTL_0"/>
 	<reg32 offset="0x0002" name="SYSTEM_CACHE_CNTL_1"/>


@@ -159,28 +159,28 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
 		<bitfield name="RGB_SWAP" low="12" high="14" type="dsi_rgb_swap"/>
 	</reg32>
 	<reg32 offset="0x00020" name="ACTIVE_H">
-		<bitfield name="START" low="0" high="11" type="uint"/>
-		<bitfield name="END" low="16" high="27" type="uint"/>
+		<bitfield name="START" low="0" high="15" type="uint"/>
+		<bitfield name="END" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x00024" name="ACTIVE_V">
-		<bitfield name="START" low="0" high="11" type="uint"/>
-		<bitfield name="END" low="16" high="27" type="uint"/>
+		<bitfield name="START" low="0" high="15" type="uint"/>
+		<bitfield name="END" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x00028" name="TOTAL">
-		<bitfield name="H_TOTAL" low="0" high="11" type="uint"/>
-		<bitfield name="V_TOTAL" low="16" high="27" type="uint"/>
+		<bitfield name="H_TOTAL" low="0" high="15" type="uint"/>
+		<bitfield name="V_TOTAL" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x0002c" name="ACTIVE_HSYNC">
-		<bitfield name="START" low="0" high="11" type="uint"/>
-		<bitfield name="END" low="16" high="27" type="uint"/>
+		<bitfield name="START" low="0" high="15" type="uint"/>
+		<bitfield name="END" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x00030" name="ACTIVE_VSYNC_HPOS">
-		<bitfield name="START" low="0" high="11" type="uint"/>
-		<bitfield name="END" low="16" high="27" type="uint"/>
+		<bitfield name="START" low="0" high="15" type="uint"/>
+		<bitfield name="END" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x00034" name="ACTIVE_VSYNC_VPOS">
-		<bitfield name="START" low="0" high="11" type="uint"/>
-		<bitfield name="END" low="16" high="27" type="uint"/>
+		<bitfield name="START" low="0" high="15" type="uint"/>
+		<bitfield name="END" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x00038" name="CMD_DMA_CTRL">
@@ -209,8 +209,8 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
 		<bitfield name="WORD_COUNT" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x00058" name="CMD_MDP_STREAM0_TOTAL">
-		<bitfield name="H_TOTAL" low="0" high="11" type="uint"/>
-		<bitfield name="V_TOTAL" low="16" high="27" type="uint"/>
+		<bitfield name="H_TOTAL" low="0" high="15" type="uint"/>
+		<bitfield name="V_TOTAL" low="16" high="31" type="uint"/>
 	</reg32>
 	<reg32 offset="0x0005c" name="CMD_MDP_STREAM1_CTRL">
 		<bitfield name="DATA_TYPE" low="0" high="5" type="uint"/>


@@ -795,6 +795,10 @@ static bool nv50_plane_format_mod_supported(struct drm_plane *plane,
 	struct nouveau_drm *drm = nouveau_drm(plane->dev);
 	uint8_t i;
 
+	/* All chipsets can display all formats in linear layout */
+	if (modifier == DRM_FORMAT_MOD_LINEAR)
+		return true;
+
 	if (drm->client.device.info.chipset < 0xc0) {
 		const struct drm_format_info *info = drm_format_info(format);
 		const uint8_t kind = (modifier >> 12) & 0xff;


@@ -103,7 +103,7 @@ gm200_flcn_pio_imem_wr_init(struct nvkm_falcon *falcon, u8 port, bool sec, u32 i
 static void
 gm200_flcn_pio_imem_wr(struct nvkm_falcon *falcon, u8 port, const u8 *img, int len, u16 tag)
 {
-	nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag++);
+	nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag);
 	while (len >= 4) {
 		nvkm_falcon_wr32(falcon, 0x184 + (port * 0x10), *(u32 *)img);
 		img += 4;
@@ -249,9 +249,11 @@ int
 gm200_flcn_fw_load(struct nvkm_falcon_fw *fw)
 {
 	struct nvkm_falcon *falcon = fw->falcon;
-	int target, ret;
+	int ret;
 
 	if (fw->inst) {
+		int target;
+
 		nvkm_falcon_mask(falcon, 0x048, 0x00000001, 0x00000001);
 
 		switch (nvkm_memory_target(fw->inst)) {
@@ -285,15 +287,6 @@ gm200_flcn_fw_load(struct nvkm_falcon_fw *fw)
 	}
 
 	if (fw->boot) {
-		switch (nvkm_memory_target(&fw->fw.mem.memory)) {
-		case NVKM_MEM_TARGET_VRAM: target = 4; break;
-		case NVKM_MEM_TARGET_HOST: target = 5; break;
-		case NVKM_MEM_TARGET_NCOH: target = 6; break;
-		default:
-			WARN_ON(1);
-			return -EINVAL;
-		}
-
 		ret = nvkm_falcon_pio_wr(falcon, fw->boot, 0, 0,
 					 IMEM, falcon->code.limit - fw->boot_size, fw->boot_size,
 					 fw->boot_addr >> 8, false);


@@ -209,11 +209,12 @@ nvkm_gsp_fwsec_v2(struct nvkm_gsp *gsp, const char *name,
 	fw->boot_addr = bld->start_tag << 8;
 	fw->boot_size = bld->code_size;
 	fw->boot = kmemdup(bl->data + hdr->data_offset + bld->code_off, fw->boot_size, GFP_KERNEL);
-	if (!fw->boot)
-		ret = -ENOMEM;
 
 	nvkm_firmware_put(bl);
 
+	if (!fw->boot)
+		return -ENOMEM;
+
 	/* Patch in interface data. */
 	return nvkm_gsp_fwsec_patch(gsp, fw, desc->InterfaceOffset, init_cmd);
 }


@@ -526,7 +526,7 @@ void tegra_bo_free_object(struct drm_gem_object *gem)
 	if (drm_gem_is_imported(gem)) {
 		dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt,
 						  DMA_TO_DEVICE);
-		dma_buf_detach(gem->dma_buf, gem->import_attach);
+		dma_buf_detach(gem->import_attach->dmabuf, gem->import_attach);
 	}
 }


@@ -812,7 +812,8 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 	}
 
 	if (ttm_bo->type == ttm_bo_type_sg) {
-		ret = xe_bo_move_notify(bo, ctx);
+		if (new_mem->mem_type == XE_PL_SYSTEM)
+			ret = xe_bo_move_notify(bo, ctx);
 		if (!ret)
 			ret = xe_bo_move_dmabuf(ttm_bo, new_mem);
 		return ret;
@@ -2438,7 +2439,6 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
 		.no_wait_gpu = false,
 		.gfp_retry_mayfail = true,
 	};
-	struct pin_cookie cookie;
 	int ret;
 
 	if (vm) {
@@ -2449,10 +2449,10 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
 		ctx.resv = xe_vm_resv(vm);
 	}
 
-	cookie = xe_vm_set_validating(vm, allow_res_evict);
+	xe_vm_set_validating(vm, allow_res_evict);
 	trace_xe_bo_validate(bo);
 	ret = ttm_bo_validate(&bo->ttm, &bo->placement, &ctx);
-	xe_vm_clear_validating(vm, allow_res_evict, cookie);
+	xe_vm_clear_validating(vm, allow_res_evict);
 	return ret;
 }


@@ -123,11 +123,19 @@ static int parse(FILE *input, FILE *csource, FILE *cheader, char *prefix)
 	return 0;
 }
 
+/* Avoid GNU vs POSIX basename() discrepancy, just use our own */
+static const char *xbasename(const char *s)
+{
+	const char *p = strrchr(s, '/');
+
+	return p ? p + 1 : s;
+}
+
 static int fn_to_prefix(const char *fn, char *prefix, size_t size)
 {
 	size_t len;
 
-	fn = basename(fn);
+	fn = xbasename(fn);
 	len = strlen(fn);
 
 	if (len > size - 1)


@@ -77,6 +77,7 @@ static void user_fence_worker(struct work_struct *w)
 {
 	struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker);
 
+	WRITE_ONCE(ufence->signalled, 1);
 	if (mmget_not_zero(ufence->mm)) {
 		kthread_use_mm(ufence->mm);
 		if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value)))
@@ -91,7 +92,6 @@ static void user_fence_worker(struct work_struct *w)
 	 * Wake up waiters only after updating the ufence state, allowing the UMD
 	 * to safely reuse the same ufence without encountering -EBUSY errors.
 	 */
-	WRITE_ONCE(ufence->signalled, 1);
 	wake_up_all(&ufence->xe->ufence_wq);
 	user_fence_put(ufence);
 }


@@ -1610,8 +1610,12 @@ static int xe_vm_create_scratch(struct xe_device *xe, struct xe_tile *tile,
 	for (i = MAX_HUGEPTE_LEVEL; i < vm->pt_root[id]->level; i++) {
 		vm->scratch_pt[id][i] = xe_pt_create(vm, tile, i);
-		if (IS_ERR(vm->scratch_pt[id][i]))
-			return PTR_ERR(vm->scratch_pt[id][i]);
+		if (IS_ERR(vm->scratch_pt[id][i])) {
+			int err = PTR_ERR(vm->scratch_pt[id][i]);
+
+			vm->scratch_pt[id][i] = NULL;
+			return err;
+		}
+
 		xe_pt_populate_empty(tile, vm, vm->scratch_pt[id][i]);
 	}


@@ -315,22 +315,14 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
  * Register this task as currently making bos resident for the vm. Intended
  * to avoid eviction by the same task of shared bos bound to the vm.
  * Call with the vm's resv lock held.
- *
- * Return: A pin cookie that should be used for xe_vm_clear_validating().
  */
-static inline struct pin_cookie xe_vm_set_validating(struct xe_vm *vm,
-						     bool allow_res_evict)
+static inline void xe_vm_set_validating(struct xe_vm *vm, bool allow_res_evict)
 {
-	struct pin_cookie cookie = {};
-
 	if (vm && !allow_res_evict) {
 		xe_vm_assert_held(vm);
-		cookie = lockdep_pin_lock(&xe_vm_resv(vm)->lock.base);
 		/* Pairs with READ_ONCE in xe_vm_is_validating() */
 		WRITE_ONCE(vm->validating, current);
 	}
-
-	return cookie;
 }
 
 /**
@@ -338,17 +330,14 @@ static inline struct pin_cookie xe_vm_set_validating(struct xe_vm *vm,
  * @vm: Pointer to the vm or NULL
  * @allow_res_evict: Eviction from @vm was allowed. Must be set to the same
  * value as for xe_vm_set_validation().
- * @cookie: Cookie obtained from xe_vm_set_validating().
  *
 * Register this task as currently making bos resident for the vm. Intended
 * to avoid eviction by the same task of shared bos bound to the vm.
 * Call with the vm's resv lock held.
 */
-static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict,
-					  struct pin_cookie cookie)
+static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict)
 {
 	if (vm && !allow_res_evict) {
-		lockdep_unpin_lock(&xe_vm_resv(vm)->lock.base, cookie);
 		/* Pairs with READ_ONCE in xe_vm_is_validating() */
 		WRITE_ONCE(vm->validating, NULL);
 	}


@@ -12,6 +12,10 @@
 #include <linux/soc/qcom/ubwc.h>
 
+static const struct qcom_ubwc_cfg_data no_ubwc_data = {
+	/* no UBWC, no HBB */
+};
+
 static const struct qcom_ubwc_cfg_data msm8937_data = {
 	.ubwc_enc_version = UBWC_1_0,
 	.ubwc_dec_version = UBWC_1_0,
@@ -215,12 +219,20 @@ static const struct qcom_ubwc_cfg_data x1e80100_data = {
 };
 
 static const struct of_device_id qcom_ubwc_configs[] __maybe_unused = {
+	{ .compatible = "qcom,apq8016", .data = &no_ubwc_data },
+	{ .compatible = "qcom,apq8026", .data = &no_ubwc_data },
+	{ .compatible = "qcom,apq8074", .data = &no_ubwc_data },
 	{ .compatible = "qcom,apq8096", .data = &msm8998_data },
-	{ .compatible = "qcom,msm8917", .data = &msm8937_data },
+	{ .compatible = "qcom,msm8226", .data = &no_ubwc_data },
+	{ .compatible = "qcom,msm8916", .data = &no_ubwc_data },
+	{ .compatible = "qcom,msm8917", .data = &no_ubwc_data },
 	{ .compatible = "qcom,msm8937", .data = &msm8937_data },
+	{ .compatible = "qcom,msm8929", .data = &no_ubwc_data },
+	{ .compatible = "qcom,msm8939", .data = &no_ubwc_data },
 	{ .compatible = "qcom,msm8953", .data = &msm8937_data },
-	{ .compatible = "qcom,msm8956", .data = &msm8937_data },
-	{ .compatible = "qcom,msm8976", .data = &msm8937_data },
+	{ .compatible = "qcom,msm8956", .data = &no_ubwc_data },
+	{ .compatible = "qcom,msm8974", .data = &no_ubwc_data },
+	{ .compatible = "qcom,msm8976", .data = &no_ubwc_data },
 	{ .compatible = "qcom,msm8996", .data = &msm8998_data },
 	{ .compatible = "qcom,msm8998", .data = &msm8998_data },
 	{ .compatible = "qcom,qcm2290", .data = &qcm2290_data, },
@@ -233,7 +245,10 @@ static const struct of_device_id qcom_ubwc_configs[] __maybe_unused = {
 	{ .compatible = "qcom,sc7280", .data = &sc7280_data, },
 	{ .compatible = "qcom,sc8180x", .data = &sc8180x_data, },
 	{ .compatible = "qcom,sc8280xp", .data = &sc8280xp_data, },
+	{ .compatible = "qcom,sda660", .data = &msm8937_data },
+	{ .compatible = "qcom,sdm450", .data = &msm8937_data },
 	{ .compatible = "qcom,sdm630", .data = &msm8937_data },
+	{ .compatible = "qcom,sdm632", .data = &msm8937_data },
 	{ .compatible = "qcom,sdm636", .data = &msm8937_data },
 	{ .compatible = "qcom,sdm660", .data = &msm8937_data },
 	{ .compatible = "qcom,sdm670", .data = &sdm670_data, },
@@ -246,6 +261,8 @@ static const struct of_device_id qcom_ubwc_configs[] __maybe_unused = {
 	{ .compatible = "qcom,sm6375", .data = &sm6350_data, },
 	{ .compatible = "qcom,sm7125", .data = &sc7180_data },
 	{ .compatible = "qcom,sm7150", .data = &sm7150_data, },
+	{ .compatible = "qcom,sm7225", .data = &sm6350_data, },
+	{ .compatible = "qcom,sm7325", .data = &sc7280_data, },
 	{ .compatible = "qcom,sm8150", .data = &sm8150_data, },
 	{ .compatible = "qcom,sm8250", .data = &sm8250_data, },
 	{ .compatible = "qcom,sm8350", .data = &sm8350_data, },


@@ -103,7 +103,7 @@ struct drm_gpuva {
 	} va;
 
 	/**
-	 * @gem: structure containing the &drm_gem_object and it's offset
+	 * @gem: structure containing the &drm_gem_object and its offset
 	 */
 	struct {
 		/**
@@ -843,7 +843,7 @@ struct drm_gpuva_op_map {
 	} va;
 
 	/**
-	 * @gem: structure containing the &drm_gem_object and it's offset
+	 * @gem: structure containing the &drm_gem_object and its offset
 	 */
 	struct {
 		/**
@@ -1189,11 +1189,11 @@ struct drm_gpuvm_ops {
 	/**
 	 * @sm_step_unmap: called from &drm_gpuvm_sm_map and
-	 * &drm_gpuvm_sm_unmap to unmap an existent mapping
+	 * &drm_gpuvm_sm_unmap to unmap an existing mapping
 	 *
-	 * This callback is called when existent mapping needs to be unmapped.
+	 * This callback is called when existing mapping needs to be unmapped.
 	 * This is the case when either a newly requested mapping encloses an
-	 * existent mapping or an unmap of an existent mapping is requested.
+	 * existing mapping or an unmap of an existing mapping is requested.
 	 *
 	 * The &priv pointer matches the one the driver passed to
 	 * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.