author    Dave Airlie <airlied@redhat.com>  2016-11-30 18:25:58 -0500
committer Dave Airlie <airlied@redhat.com>  2016-11-30 18:25:58 -0500
commit    f5590134365f6f23dba723f140f72effcc71773f (patch)
tree      2094f339374fa0b37b22f33a9f40950de8950def
parent    a90f58311f48f510ea63cd2db2e32f74712c43f3 (diff)
parent    2401a008461481387741bacf7318d13af2c2055f (diff)

Merge branch 'msm-next' of git://people.freedesktop.org/~robclark/linux into drm-next
On the userspace side, all the basics are working, and most of glmark2 is working. I've been working through deqp, and I've got a couple more things to fix (but we've gone from 70% to 80+% pass in the last day, and the current deqp run should pick up another 5-10%). I expect to push the mesa patches today or tomorrow.

There are a couple more a5xx related patches to take the gpu out of secure mode (for the devices that come up in secure mode, like the hw I have), but those depend on an scm patch that would come in through another tree. If that can land in the next day or two, there might be a second late pull request for drm/msm.

In addition to the new-shiny, there have also been a lot of overlay/plane related fixes for issues found using drm-hwc2 (in the process of testing/debugging the atomic/kms fence patches), resulting in rework to assign hwpipes to kms planes dynamically (as part of global atomic state) and also handling SMP (fifo) block allocation atomically as part of the ->atomic_check() step. All those patches should also help out atomic weston (when those patches eventually land).
* 'msm-next' of git://people.freedesktop.org/~robclark/linux: (36 commits)
  drm/msm: gpu: Add support for the GPMU
  drm/msm: gpu: Add A5XX target support
  drm/msm: Disable interrupts during init
  drm/msm: Remove 'src_clk' from adreno configuration
  drm/msm: gpu: Add OUT_TYPE4 and OUT_TYPE7
  drm/msm: Add adreno_gpu_write64()
  drm/msm: gpu Add new gpu register read/write functions
  drm/msm: gpu: Return error on hw_init failure
  drm/msm: gpu: Cut down the list of "generic" registers to the ones we use
  drm/msm: update generated headers
  drm/msm/adreno: move scratch register dumping to per-gen code
  drm/msm/rd: support for 64b iova
  drm/msm: convert iova to 64b
  drm/msm: set dma_mask properly
  drm/msm: Remove bad calls to of_node_put()
  drm/msm/mdp5: move LM bounds check into plane->atomic_check()
  drm/msm/mdp5: dump smp state on errors too
  drm/msm/mdp5: add debugfs to show smp block status
  drm/msm/mdp5: handle SMP block allocations "atomically"
  drm/msm/mdp5: dynamically assign hw pipes to planes
  ...
-rw-r--r--  drivers/gpu/drm/msm/Makefile | 4
-rw-r--r--  drivers/gpu/drm/msm/adreno/a2xx.xml.h | 27
-rw-r--r--  drivers/gpu/drm/msm/adreno/a3xx.xml.h | 38
-rw-r--r--  drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 112
-rw-r--r--  drivers/gpu/drm/msm/adreno/a4xx.xml.h | 111
-rw-r--r--  drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 119
-rw-r--r--  drivers/gpu/drm/msm/adreno/a5xx.xml.h | 3757
-rw-r--r--  drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 888
-rw-r--r--  drivers/gpu/drm/msm/adreno/a5xx_gpu.h | 60
-rw-r--r--  drivers/gpu/drm/msm/adreno/a5xx_power.c | 344
-rw-r--r--  drivers/gpu/drm/msm/adreno/adreno_common.xml.h | 21
-rw-r--r--  drivers/gpu/drm/msm/adreno/adreno_device.c | 29
-rw-r--r--  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 39
-rw-r--r--  drivers/gpu/drm/msm/adreno/adreno_gpu.h | 159
-rw-r--r--  drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h | 300
-rw-r--r--  drivers/gpu/drm/msm/dsi/dsi.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/dsi/dsi_host.c | 4
-rw-r--r--  drivers/gpu/drm/msm/dsi/mmss_cc.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/dsi/sfpb.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/edp/edp.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/hdmi/hdmi.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/hdmi/qfprom.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp4/mdp4.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c | 4
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c | 38
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h | 4
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h | 14
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c | 4
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 40
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c | 2
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c | 267
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 41
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c | 133
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h | 56
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c | 258
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c | 306
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.h | 70
-rw-r--r--  drivers/gpu/drm/msm/mdp/mdp_common.xml.h | 2
-rw-r--r--  drivers/gpu/drm/msm/msm_atomic.c | 31
-rw-r--r--  drivers/gpu/drm/msm/msm_debugfs.c | 16
-rw-r--r--  drivers/gpu/drm/msm/msm_drv.c | 29
-rw-r--r--  drivers/gpu/drm/msm/msm_drv.h | 40
-rw-r--r--  drivers/gpu/drm/msm/msm_fb.c | 4
-rw-r--r--  drivers/gpu/drm/msm/msm_fbdev.c | 2
-rw-r--r--  drivers/gpu/drm/msm/msm_gem.c | 46
-rw-r--r--  drivers/gpu/drm/msm/msm_gem.h | 23
-rw-r--r--  drivers/gpu/drm/msm/msm_gem_submit.c | 9
-rw-r--r--  drivers/gpu/drm/msm/msm_gem_vma.c | 90
-rw-r--r--  drivers/gpu/drm/msm/msm_gpu.c | 66
-rw-r--r--  drivers/gpu/drm/msm/msm_gpu.h | 45
-rw-r--r--  drivers/gpu/drm/msm/msm_iommu.c | 12
-rw-r--r--  drivers/gpu/drm/msm/msm_kms.h | 19
-rw-r--r--  drivers/gpu/drm/msm/msm_mmu.h | 4
-rw-r--r--  drivers/gpu/drm/msm/msm_rd.c | 4
-rw-r--r--  include/uapi/drm/msm_drm.h | 25
55 files changed, 6773 insertions, 957 deletions
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 4e2806cf778c..028c24df2291 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -6,6 +6,8 @@ msm-y := \
 	adreno/adreno_gpu.o \
 	adreno/a3xx_gpu.o \
 	adreno/a4xx_gpu.o \
+	adreno/a5xx_gpu.o \
+	adreno/a5xx_power.o \
 	hdmi/hdmi.o \
 	hdmi/hdmi_audio.o \
 	hdmi/hdmi_bridge.o \
@@ -37,6 +39,7 @@ msm-y := \
 	mdp/mdp5/mdp5_irq.o \
 	mdp/mdp5/mdp5_mdss.o \
 	mdp/mdp5/mdp5_kms.o \
+	mdp/mdp5/mdp5_pipe.o \
 	mdp/mdp5/mdp5_plane.o \
 	mdp/mdp5/mdp5_smp.o \
 	msm_atomic.o \
@@ -48,6 +51,7 @@ msm-y := \
 	msm_gem_prime.o \
 	msm_gem_shrinker.o \
 	msm_gem_submit.o \
+	msm_gem_vma.o \
 	msm_gpu.o \
 	msm_iommu.o \
 	msm_perf.o \
diff --git a/drivers/gpu/drm/msm/adreno/a2xx.xml.h b/drivers/gpu/drm/msm/adreno/a2xx.xml.h
index fee24297fb92..4be092f911f9 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a2xx.xml.h
@@ -8,16 +8,17 @@ http://github.com/freedreno/envytools/
 git clone https://github.com/freedreno/envytools.git
 
 The rules-ng-ng source files this header was generated from are:
-- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 398 bytes, from 2015-09-24 17:25:31)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 431 bytes, from 2016-04-26 17:56:44)
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2015-05-20 20:03:14)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 11518 bytes, from 2016-02-10 21:03:25)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 16166 bytes, from 2016-02-11 21:20:31)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83967 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 109916 bytes, from 2016-02-20 18:44:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32907 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 12025 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 22544 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83840 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 110765 bytes, from 2016-11-26 23:01:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a5xx.xml ( 90321 bytes, from 2016-11-28 16:50:05)
 - /home/robclark/src/freedreno/envytools/rnndb/adreno/ocmem.xml ( 1773 bytes, from 2015-09-24 17:30:00)
 
-Copyright (C) 2013-2015 by the following authors:
+Copyright (C) 2013-2016 by the following authors:
 - Rob Clark <robdclark@gmail.com> (robclark)
 - Ilia Mirkin <imirkin@alum.mit.edu> (imirkin)
 
@@ -206,12 +207,12 @@ enum a2xx_rb_copy_sample_select {
 };
 
 enum a2xx_rb_blend_opcode {
-	BLEND_DST_PLUS_SRC = 0,
-	BLEND_SRC_MINUS_DST = 1,
-	BLEND_MIN_DST_SRC = 2,
-	BLEND_MAX_DST_SRC = 3,
-	BLEND_DST_MINUS_SRC = 4,
-	BLEND_DST_PLUS_SRC_BIAS = 5,
+	BLEND2_DST_PLUS_SRC = 0,
+	BLEND2_SRC_MINUS_DST = 1,
+	BLEND2_MIN_DST_SRC = 2,
+	BLEND2_MAX_DST_SRC = 3,
+	BLEND2_DST_MINUS_SRC = 4,
+	BLEND2_DST_PLUS_SRC_BIAS = 5,
 };
 
 enum adreno_mmu_clnt_beh {
diff --git a/drivers/gpu/drm/msm/adreno/a3xx.xml.h b/drivers/gpu/drm/msm/adreno/a3xx.xml.h
index 27dabd5e57fb..a066c8b9eccd 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a3xx.xml.h
@@ -8,13 +8,14 @@ http://github.com/freedreno/envytools/
 git clone https://github.com/freedreno/envytools.git
 
 The rules-ng-ng source files this header was generated from are:
-- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 398 bytes, from 2015-09-24 17:25:31)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 431 bytes, from 2016-04-26 17:56:44)
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2015-05-20 20:03:14)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 11518 bytes, from 2016-02-10 21:03:25)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 16166 bytes, from 2016-02-11 21:20:31)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83967 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 109916 bytes, from 2016-02-20 18:44:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32907 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 12025 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 22544 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83840 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 110765 bytes, from 2016-11-26 23:01:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a5xx.xml ( 90321 bytes, from 2016-11-28 16:50:05)
 - /home/robclark/src/freedreno/envytools/rnndb/adreno/ocmem.xml ( 1773 bytes, from 2015-09-24 17:30:00)
 
 Copyright (C) 2013-2016 by the following authors:
@@ -129,10 +130,14 @@ enum a3xx_tex_fmt {
 	TFMT_Z16_UNORM = 9,
 	TFMT_X8Z24_UNORM = 10,
 	TFMT_Z32_FLOAT = 11,
-	TFMT_NV12_UV_TILED = 17,
-	TFMT_NV12_Y_TILED = 19,
-	TFMT_NV12_UV = 21,
-	TFMT_NV12_Y = 23,
+	TFMT_UV_64X32 = 16,
+	TFMT_VU_64X32 = 17,
+	TFMT_Y_64X32 = 18,
+	TFMT_NV12_64X32 = 19,
+	TFMT_UV_LINEAR = 20,
+	TFMT_VU_LINEAR = 21,
+	TFMT_Y_LINEAR = 22,
+	TFMT_NV12_LINEAR = 23,
 	TFMT_I420_Y = 24,
 	TFMT_I420_U = 26,
 	TFMT_I420_V = 27,
@@ -525,14 +530,6 @@ enum a3xx_uche_perfcounter_select {
 	UCHE_UCHEPERF_ACTIVE_CYCLES = 20,
 };
 
-enum a3xx_rb_blend_opcode {
-	BLEND_DST_PLUS_SRC = 0,
-	BLEND_SRC_MINUS_DST = 1,
-	BLEND_DST_MINUS_SRC = 2,
-	BLEND_MIN_DST_SRC = 3,
-	BLEND_MAX_DST_SRC = 4,
-};
-
 enum a3xx_intp_mode {
 	SMOOTH = 0,
 	FLAT = 1,
@@ -1393,13 +1390,14 @@ static inline uint32_t A3XX_RB_COPY_CONTROL_MODE(enum adreno_rb_copy_control_mod
 {
 	return ((val) << A3XX_RB_COPY_CONTROL_MODE__SHIFT) & A3XX_RB_COPY_CONTROL_MODE__MASK;
 }
+#define A3XX_RB_COPY_CONTROL_MSAA_SRGB_DOWNSAMPLE 0x00000080
 #define A3XX_RB_COPY_CONTROL_FASTCLEAR__MASK 0x00000f00
 #define A3XX_RB_COPY_CONTROL_FASTCLEAR__SHIFT 8
 static inline uint32_t A3XX_RB_COPY_CONTROL_FASTCLEAR(uint32_t val)
 {
 	return ((val) << A3XX_RB_COPY_CONTROL_FASTCLEAR__SHIFT) & A3XX_RB_COPY_CONTROL_FASTCLEAR__MASK;
 }
-#define A3XX_RB_COPY_CONTROL_UNK12 0x00001000
+#define A3XX_RB_COPY_CONTROL_DEPTH32_RESOLVE 0x00001000
 #define A3XX_RB_COPY_CONTROL_GMEM_BASE__MASK 0xffffc000
 #define A3XX_RB_COPY_CONTROL_GMEM_BASE__SHIFT 14
 static inline uint32_t A3XX_RB_COPY_CONTROL_GMEM_BASE(uint32_t val)
@@ -1472,7 +1470,7 @@ static inline uint32_t A3XX_RB_DEPTH_CONTROL_ZFUNC(enum adreno_compare_func val)
 {
 	return ((val) << A3XX_RB_DEPTH_CONTROL_ZFUNC__SHIFT) & A3XX_RB_DEPTH_CONTROL_ZFUNC__MASK;
 }
-#define A3XX_RB_DEPTH_CONTROL_BF_ENABLE 0x00000080
+#define A3XX_RB_DEPTH_CONTROL_Z_CLAMP_ENABLE 0x00000080
 #define A3XX_RB_DEPTH_CONTROL_Z_TEST_ENABLE 0x80000000
 
 #define REG_A3XX_RB_DEPTH_CLEAR 0x00002101
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index fd266ed963b6..b999349b7d2d 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -41,7 +41,7 @@ extern bool hang_debug;
 
 static void a3xx_dump(struct msm_gpu *gpu);
 
-static void a3xx_me_init(struct msm_gpu *gpu)
+static bool a3xx_me_init(struct msm_gpu *gpu)
 {
 	struct msm_ringbuffer *ring = gpu->rb;
 
@@ -65,7 +65,7 @@ static void a3xx_me_init(struct msm_gpu *gpu)
 	OUT_RING(ring, 0x00000000);
 
 	gpu->funcs->flush(gpu);
-	gpu->funcs->idle(gpu);
+	return gpu->funcs->idle(gpu);
 }
 
 static int a3xx_hw_init(struct msm_gpu *gpu)
@@ -294,15 +294,20 @@ static int a3xx_hw_init(struct msm_gpu *gpu)
 	/* clear ME_HALT to start micro engine */
 	gpu_write(gpu, REG_AXXX_CP_ME_CNTL, 0);
 
-	a3xx_me_init(gpu);
-
-	return 0;
+	return a3xx_me_init(gpu) ? 0 : -EINVAL;
 }
 
 static void a3xx_recover(struct msm_gpu *gpu)
 {
+	int i;
+
 	adreno_dump_info(gpu);
 
+	for (i = 0; i < 8; i++) {
+		printk("CP_SCRATCH_REG%d: %u\n", i,
+			gpu_read(gpu, REG_AXXX_CP_SCRATCH_REG0 + i));
+	}
+
 	/* dump registers before resetting gpu, if enabled: */
 	if (hang_debug)
 		a3xx_dump(gpu);
@@ -330,17 +335,22 @@ static void a3xx_destroy(struct msm_gpu *gpu)
 	kfree(a3xx_gpu);
 }
 
-static void a3xx_idle(struct msm_gpu *gpu)
+static bool a3xx_idle(struct msm_gpu *gpu)
 {
 	/* wait for ringbuffer to drain: */
-	adreno_idle(gpu);
+	if (!adreno_idle(gpu))
+		return false;
 
 	/* then wait for GPU to finish: */
 	if (spin_until(!(gpu_read(gpu, REG_A3XX_RBBM_STATUS) &
-			A3XX_RBBM_STATUS_GPU_BUSY)))
+			A3XX_RBBM_STATUS_GPU_BUSY))) {
 		DRM_ERROR("%s: timeout waiting for GPU to idle!\n", gpu->name);
 
 		/* TODO maybe we need to reset GPU here to recover from hang? */
+		return false;
+	}
+
+	return true;
 }
 
 static irqreturn_t a3xx_irq(struct msm_gpu *gpu)
@@ -419,91 +429,13 @@ static void a3xx_dump(struct msm_gpu *gpu)
 }
 /* Register offset defines for A3XX */
 static const unsigned int a3xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_DEBUG, REG_AXXX_CP_DEBUG),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_RAM_WADDR, REG_AXXX_CP_ME_RAM_WADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_RAM_DATA, REG_AXXX_CP_ME_RAM_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PFP_UCODE_DATA,
-			REG_A3XX_CP_PFP_UCODE_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PFP_UCODE_ADDR,
-			REG_A3XX_CP_PFP_UCODE_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_WFI_PEND_CTR, REG_A3XX_CP_WFI_PEND_CTR),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_AXXX_CP_RB_BASE),
+	REG_ADRENO_SKIP(REG_ADRENO_CP_RB_BASE_HI),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR, REG_AXXX_CP_RB_RPTR_ADDR),
+	REG_ADRENO_SKIP(REG_ADRENO_CP_RB_RPTR_ADDR_HI),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_AXXX_CP_RB_RPTR),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_AXXX_CP_RB_WPTR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PROTECT_CTRL, REG_A3XX_CP_PROTECT_CTRL),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_CNTL, REG_AXXX_CP_ME_CNTL),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_AXXX_CP_RB_CNTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB1_BASE, REG_AXXX_CP_IB1_BASE),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB1_BUFSZ, REG_AXXX_CP_IB1_BUFSZ),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB2_BASE, REG_AXXX_CP_IB2_BASE),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB2_BUFSZ, REG_AXXX_CP_IB2_BUFSZ),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_TIMESTAMP, REG_AXXX_CP_SCRATCH_REG0),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_RAM_RADDR, REG_AXXX_CP_ME_RAM_RADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_SCRATCH_ADDR, REG_AXXX_SCRATCH_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_SCRATCH_UMSK, REG_AXXX_SCRATCH_UMSK),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ROQ_ADDR, REG_A3XX_CP_ROQ_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ROQ_DATA, REG_A3XX_CP_ROQ_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MERCIU_ADDR, REG_A3XX_CP_MERCIU_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MERCIU_DATA, REG_A3XX_CP_MERCIU_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MERCIU_DATA2, REG_A3XX_CP_MERCIU_DATA2),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MEQ_ADDR, REG_A3XX_CP_MEQ_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MEQ_DATA, REG_A3XX_CP_MEQ_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_HW_FAULT, REG_A3XX_CP_HW_FAULT),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PROTECT_STATUS,
-			REG_A3XX_CP_PROTECT_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_STATUS, REG_A3XX_RBBM_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_CTL,
-			REG_A3XX_RBBM_PERFCTR_CTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_CMD0,
-			REG_A3XX_RBBM_PERFCTR_LOAD_CMD0),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_CMD1,
-			REG_A3XX_RBBM_PERFCTR_LOAD_CMD1),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_PWR_1_LO,
-			REG_A3XX_RBBM_PERFCTR_PWR_1_LO),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_INT_0_MASK, REG_A3XX_RBBM_INT_0_MASK),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_INT_0_STATUS,
-			REG_A3XX_RBBM_INT_0_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_AHB_ERROR_STATUS,
-			REG_A3XX_RBBM_AHB_ERROR_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_AHB_CMD, REG_A3XX_RBBM_AHB_CMD),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_INT_CLEAR_CMD,
-			REG_A3XX_RBBM_INT_CLEAR_CMD),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_CLOCK_CTL, REG_A3XX_RBBM_CLOCK_CTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_VPC_DEBUG_RAM_SEL,
-			REG_A3XX_VPC_VPC_DEBUG_RAM_SEL),
-	REG_ADRENO_DEFINE(REG_ADRENO_VPC_DEBUG_RAM_READ,
-			REG_A3XX_VPC_VPC_DEBUG_RAM_READ),
-	REG_ADRENO_DEFINE(REG_ADRENO_VSC_SIZE_ADDRESS,
-			REG_A3XX_VSC_SIZE_ADDRESS),
-	REG_ADRENO_DEFINE(REG_ADRENO_VFD_CONTROL_0, REG_A3XX_VFD_CONTROL_0),
-	REG_ADRENO_DEFINE(REG_ADRENO_VFD_INDEX_MAX, REG_A3XX_VFD_INDEX_MAX),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_VS_PVT_MEM_ADDR_REG,
-			REG_A3XX_SP_VS_PVT_MEM_ADDR_REG),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_FS_PVT_MEM_ADDR_REG,
-			REG_A3XX_SP_FS_PVT_MEM_ADDR_REG),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_VS_OBJ_START_REG,
-			REG_A3XX_SP_VS_OBJ_START_REG),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_FS_OBJ_START_REG,
-			REG_A3XX_SP_FS_OBJ_START_REG),
-	REG_ADRENO_DEFINE(REG_ADRENO_PA_SC_AA_CONFIG, REG_A3XX_PA_SC_AA_CONFIG),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PM_OVERRIDE2,
-			REG_A3XX_RBBM_PM_OVERRIDE2),
-	REG_ADRENO_DEFINE(REG_ADRENO_SCRATCH_REG2, REG_AXXX_CP_SCRATCH_REG2),
-	REG_ADRENO_DEFINE(REG_ADRENO_SQ_GPR_MANAGEMENT,
-			REG_A3XX_SQ_GPR_MANAGEMENT),
-	REG_ADRENO_DEFINE(REG_ADRENO_SQ_INST_STORE_MANAGMENT,
-			REG_A3XX_SQ_INST_STORE_MANAGMENT),
-	REG_ADRENO_DEFINE(REG_ADRENO_TP0_CHICKEN, REG_A3XX_TP0_CHICKEN),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_RBBM_CTL, REG_A3XX_RBBM_RBBM_CTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_SW_RESET_CMD,
-			REG_A3XX_RBBM_SW_RESET_CMD),
-	REG_ADRENO_DEFINE(REG_ADRENO_UCHE_INVALIDATE0,
-			REG_A3XX_UCHE_CACHE_INVALIDATE0_REG),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_VALUE_LO,
-			REG_A3XX_RBBM_PERFCTR_LOAD_VALUE_LO),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_VALUE_HI,
-			REG_A3XX_RBBM_PERFCTR_LOAD_VALUE_HI),
 };
 
 static const struct adreno_gpu_funcs funcs = {
@@ -583,7 +515,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 #endif
 	}
 
-	if (!gpu->mmu) {
+	if (!gpu->aspace) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a4xx.xml.h b/drivers/gpu/drm/msm/adreno/a4xx.xml.h
index 3220b91f559a..4ce21b902779 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a4xx.xml.h
@@ -8,13 +8,14 @@ http://github.com/freedreno/envytools/
 git clone https://github.com/freedreno/envytools.git
 
 The rules-ng-ng source files this header was generated from are:
-- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 398 bytes, from 2015-09-24 17:25:31)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 431 bytes, from 2016-04-26 17:56:44)
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2015-05-20 20:03:14)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 11518 bytes, from 2016-02-10 21:03:25)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 16166 bytes, from 2016-02-11 21:20:31)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83967 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 109916 bytes, from 2016-02-20 18:44:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32907 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 12025 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 22544 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83840 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 110765 bytes, from 2016-11-26 23:01:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a5xx.xml ( 90321 bytes, from 2016-11-28 16:50:05)
 - /home/robclark/src/freedreno/envytools/rnndb/adreno/ocmem.xml ( 1773 bytes, from 2015-09-24 17:30:00)
 
 Copyright (C) 2013-2016 by the following authors:
@@ -46,6 +47,9 @@ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 enum a4xx_color_fmt {
 	RB4_A8_UNORM = 1,
 	RB4_R8_UNORM = 2,
+	RB4_R8_SNORM = 3,
+	RB4_R8_UINT = 4,
+	RB4_R8_SINT = 5,
 	RB4_R4G4B4A4_UNORM = 8,
 	RB4_R5G5B5A1_UNORM = 10,
 	RB4_R5G6B5_UNORM = 14,
@@ -89,17 +93,10 @@ enum a4xx_color_fmt {
 
 enum a4xx_tile_mode {
 	TILE4_LINEAR = 0,
+	TILE4_2 = 2,
 	TILE4_3 = 3,
 };
 
-enum a4xx_rb_blend_opcode {
-	BLEND_DST_PLUS_SRC = 0,
-	BLEND_SRC_MINUS_DST = 1,
-	BLEND_DST_MINUS_SRC = 2,
-	BLEND_MIN_DST_SRC = 3,
-	BLEND_MAX_DST_SRC = 4,
-};
-
 enum a4xx_vtx_fmt {
 	VFMT4_32_FLOAT = 1,
 	VFMT4_32_32_FLOAT = 2,
@@ -940,6 +937,7 @@ static inline uint32_t A4XX_RB_MODE_CONTROL_HEIGHT(uint32_t val)
 {
 	return ((val >> 5) << A4XX_RB_MODE_CONTROL_HEIGHT__SHIFT) & A4XX_RB_MODE_CONTROL_HEIGHT__MASK;
 }
+#define A4XX_RB_MODE_CONTROL_ENABLE_GMEM 0x00010000
 
 #define REG_A4XX_RB_RENDER_CONTROL 0x000020a1
 #define A4XX_RB_RENDER_CONTROL_BINNING_PASS 0x00000001
@@ -1043,7 +1041,7 @@ static inline uint32_t A4XX_RB_MRT_BLEND_CONTROL_RGB_SRC_FACTOR(enum adreno_rb_b
 }
 #define A4XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__MASK 0x000000e0
 #define A4XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__SHIFT 5
-static inline uint32_t A4XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE(enum a4xx_rb_blend_opcode val)
+static inline uint32_t A4XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE(enum a3xx_rb_blend_opcode val)
 {
 	return ((val) << A4XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__SHIFT) & A4XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__MASK;
 }
@@ -1061,7 +1059,7 @@ static inline uint32_t A4XX_RB_MRT_BLEND_CONTROL_ALPHA_SRC_FACTOR(enum adreno_rb
 }
 #define A4XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__MASK 0x00e00000
 #define A4XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__SHIFT 21
-static inline uint32_t A4XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE(enum a4xx_rb_blend_opcode val)
+static inline uint32_t A4XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE(enum a3xx_rb_blend_opcode val)
 {
 	return ((val) << A4XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__SHIFT) & A4XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__MASK;
 }
@@ -1073,12 +1071,18 @@ static inline uint32_t A4XX_RB_MRT_BLEND_CONTROL_ALPHA_DEST_FACTOR(enum adreno_r
 }
 
 #define REG_A4XX_RB_BLEND_RED 0x000020f0
-#define A4XX_RB_BLEND_RED_UINT__MASK 0x0000ffff
+#define A4XX_RB_BLEND_RED_UINT__MASK 0x000000ff
 #define A4XX_RB_BLEND_RED_UINT__SHIFT 0
 static inline uint32_t A4XX_RB_BLEND_RED_UINT(uint32_t val)
 {
 	return ((val) << A4XX_RB_BLEND_RED_UINT__SHIFT) & A4XX_RB_BLEND_RED_UINT__MASK;
 }
+#define A4XX_RB_BLEND_RED_SINT__MASK 0x0000ff00
+#define A4XX_RB_BLEND_RED_SINT__SHIFT 8
+static inline uint32_t A4XX_RB_BLEND_RED_SINT(uint32_t val)
+{
+	return ((val) << A4XX_RB_BLEND_RED_SINT__SHIFT) & A4XX_RB_BLEND_RED_SINT__MASK;
+}
 #define A4XX_RB_BLEND_RED_FLOAT__MASK 0xffff0000
 #define A4XX_RB_BLEND_RED_FLOAT__SHIFT 16
 static inline uint32_t A4XX_RB_BLEND_RED_FLOAT(float val)
@@ -1095,12 +1099,18 @@ static inline uint32_t A4XX_RB_BLEND_RED_F32(float val)
 }
 
 #define REG_A4XX_RB_BLEND_GREEN 0x000020f2
-#define A4XX_RB_BLEND_GREEN_UINT__MASK 0x0000ffff
+#define A4XX_RB_BLEND_GREEN_UINT__MASK 0x000000ff
 #define A4XX_RB_BLEND_GREEN_UINT__SHIFT 0
 static inline uint32_t A4XX_RB_BLEND_GREEN_UINT(uint32_t val)
 {
 	return ((val) << A4XX_RB_BLEND_GREEN_UINT__SHIFT) & A4XX_RB_BLEND_GREEN_UINT__MASK;
 }
+#define A4XX_RB_BLEND_GREEN_SINT__MASK 0x0000ff00
+#define A4XX_RB_BLEND_GREEN_SINT__SHIFT 8
+static inline uint32_t A4XX_RB_BLEND_GREEN_SINT(uint32_t val)
+{
+	return ((val) << A4XX_RB_BLEND_GREEN_SINT__SHIFT) & A4XX_RB_BLEND_GREEN_SINT__MASK;
+}
 #define A4XX_RB_BLEND_GREEN_FLOAT__MASK 0xffff0000
 #define A4XX_RB_BLEND_GREEN_FLOAT__SHIFT 16
 static inline uint32_t A4XX_RB_BLEND_GREEN_FLOAT(float val)
@@ -1117,12 +1127,18 @@ static inline uint32_t A4XX_RB_BLEND_GREEN_F32(float val)
1117} 1127}
1118 1128
1119#define REG_A4XX_RB_BLEND_BLUE 0x000020f4 1129#define REG_A4XX_RB_BLEND_BLUE 0x000020f4
1120#define A4XX_RB_BLEND_BLUE_UINT__MASK 0x0000ffff 1130#define A4XX_RB_BLEND_BLUE_UINT__MASK 0x000000ff
1121#define A4XX_RB_BLEND_BLUE_UINT__SHIFT 0 1131#define A4XX_RB_BLEND_BLUE_UINT__SHIFT 0
1122static inline uint32_t A4XX_RB_BLEND_BLUE_UINT(uint32_t val) 1132static inline uint32_t A4XX_RB_BLEND_BLUE_UINT(uint32_t val)
1123{ 1133{
1124 return ((val) << A4XX_RB_BLEND_BLUE_UINT__SHIFT) & A4XX_RB_BLEND_BLUE_UINT__MASK; 1134 return ((val) << A4XX_RB_BLEND_BLUE_UINT__SHIFT) & A4XX_RB_BLEND_BLUE_UINT__MASK;
1125} 1135}
1136#define A4XX_RB_BLEND_BLUE_SINT__MASK 0x0000ff00
1137#define A4XX_RB_BLEND_BLUE_SINT__SHIFT 8
1138static inline uint32_t A4XX_RB_BLEND_BLUE_SINT(uint32_t val)
1139{
1140 return ((val) << A4XX_RB_BLEND_BLUE_SINT__SHIFT) & A4XX_RB_BLEND_BLUE_SINT__MASK;
1141}
1126#define A4XX_RB_BLEND_BLUE_FLOAT__MASK 0xffff0000 1142#define A4XX_RB_BLEND_BLUE_FLOAT__MASK 0xffff0000
1127#define A4XX_RB_BLEND_BLUE_FLOAT__SHIFT 16 1143#define A4XX_RB_BLEND_BLUE_FLOAT__SHIFT 16
1128static inline uint32_t A4XX_RB_BLEND_BLUE_FLOAT(float val) 1144static inline uint32_t A4XX_RB_BLEND_BLUE_FLOAT(float val)
@@ -1139,12 +1155,18 @@ static inline uint32_t A4XX_RB_BLEND_BLUE_F32(float val)
1139} 1155}
1140 1156
1141#define REG_A4XX_RB_BLEND_ALPHA 0x000020f6 1157#define REG_A4XX_RB_BLEND_ALPHA 0x000020f6
1142#define A4XX_RB_BLEND_ALPHA_UINT__MASK 0x0000ffff 1158#define A4XX_RB_BLEND_ALPHA_UINT__MASK 0x000000ff
1143#define A4XX_RB_BLEND_ALPHA_UINT__SHIFT 0 1159#define A4XX_RB_BLEND_ALPHA_UINT__SHIFT 0
1144static inline uint32_t A4XX_RB_BLEND_ALPHA_UINT(uint32_t val) 1160static inline uint32_t A4XX_RB_BLEND_ALPHA_UINT(uint32_t val)
1145{ 1161{
1146 return ((val) << A4XX_RB_BLEND_ALPHA_UINT__SHIFT) & A4XX_RB_BLEND_ALPHA_UINT__MASK; 1162 return ((val) << A4XX_RB_BLEND_ALPHA_UINT__SHIFT) & A4XX_RB_BLEND_ALPHA_UINT__MASK;
1147} 1163}
1164#define A4XX_RB_BLEND_ALPHA_SINT__MASK 0x0000ff00
1165#define A4XX_RB_BLEND_ALPHA_SINT__SHIFT 8
1166static inline uint32_t A4XX_RB_BLEND_ALPHA_SINT(uint32_t val)
1167{
1168 return ((val) << A4XX_RB_BLEND_ALPHA_SINT__SHIFT) & A4XX_RB_BLEND_ALPHA_SINT__MASK;
1169}
1148#define A4XX_RB_BLEND_ALPHA_FLOAT__MASK 0xffff0000 1170#define A4XX_RB_BLEND_ALPHA_FLOAT__MASK 0xffff0000
1149#define A4XX_RB_BLEND_ALPHA_FLOAT__SHIFT 16 1171#define A4XX_RB_BLEND_ALPHA_FLOAT__SHIFT 16
1150static inline uint32_t A4XX_RB_BLEND_ALPHA_FLOAT(float val) 1172static inline uint32_t A4XX_RB_BLEND_ALPHA_FLOAT(float val)
@@ -1348,7 +1370,7 @@ static inline uint32_t A4XX_RB_DEPTH_CONTROL_ZFUNC(enum adreno_compare_func val)
1348{ 1370{
1349 return ((val) << A4XX_RB_DEPTH_CONTROL_ZFUNC__SHIFT) & A4XX_RB_DEPTH_CONTROL_ZFUNC__MASK; 1371 return ((val) << A4XX_RB_DEPTH_CONTROL_ZFUNC__SHIFT) & A4XX_RB_DEPTH_CONTROL_ZFUNC__MASK;
1350} 1372}
1351#define A4XX_RB_DEPTH_CONTROL_BF_ENABLE 0x00000080 1373#define A4XX_RB_DEPTH_CONTROL_Z_CLAMP_ENABLE 0x00000080
1352#define A4XX_RB_DEPTH_CONTROL_EARLY_Z_DISABLE 0x00010000 1374#define A4XX_RB_DEPTH_CONTROL_EARLY_Z_DISABLE 0x00010000
1353#define A4XX_RB_DEPTH_CONTROL_FORCE_FRAGZ_TO_FS 0x00020000 1375#define A4XX_RB_DEPTH_CONTROL_FORCE_FRAGZ_TO_FS 0x00020000
1354#define A4XX_RB_DEPTH_CONTROL_Z_TEST_ENABLE 0x80000000 1376#define A4XX_RB_DEPTH_CONTROL_Z_TEST_ENABLE 0x80000000
@@ -2177,11 +2199,23 @@ static inline uint32_t REG_A4XX_RBBM_CLOCK_DELAY_RB_MARB_CCU_L1_REG(uint32_t i0)
2177 2199
2178#define REG_A4XX_CP_DRAW_STATE_ADDR 0x00000232 2200#define REG_A4XX_CP_DRAW_STATE_ADDR 0x00000232
2179 2201
2180#define REG_A4XX_CP_PROTECT_REG_0 0x00000240
2181
2182static inline uint32_t REG_A4XX_CP_PROTECT(uint32_t i0) { return 0x00000240 + 0x1*i0; } 2202static inline uint32_t REG_A4XX_CP_PROTECT(uint32_t i0) { return 0x00000240 + 0x1*i0; }
2183 2203
2184static inline uint32_t REG_A4XX_CP_PROTECT_REG(uint32_t i0) { return 0x00000240 + 0x1*i0; } 2204static inline uint32_t REG_A4XX_CP_PROTECT_REG(uint32_t i0) { return 0x00000240 + 0x1*i0; }
2205#define A4XX_CP_PROTECT_REG_BASE_ADDR__MASK 0x0001ffff
2206#define A4XX_CP_PROTECT_REG_BASE_ADDR__SHIFT 0
2207static inline uint32_t A4XX_CP_PROTECT_REG_BASE_ADDR(uint32_t val)
2208{
2209 return ((val) << A4XX_CP_PROTECT_REG_BASE_ADDR__SHIFT) & A4XX_CP_PROTECT_REG_BASE_ADDR__MASK;
2210}
2211#define A4XX_CP_PROTECT_REG_MASK_LEN__MASK 0x1f000000
2212#define A4XX_CP_PROTECT_REG_MASK_LEN__SHIFT 24
2213static inline uint32_t A4XX_CP_PROTECT_REG_MASK_LEN(uint32_t val)
2214{
2215 return ((val) << A4XX_CP_PROTECT_REG_MASK_LEN__SHIFT) & A4XX_CP_PROTECT_REG_MASK_LEN__MASK;
2216}
2217#define A4XX_CP_PROTECT_REG_TRAP_WRITE 0x20000000
2218#define A4XX_CP_PROTECT_REG_TRAP_READ 0x40000000
2185 2219
2186#define REG_A4XX_CP_PROTECT_CTRL 0x00000250 2220#define REG_A4XX_CP_PROTECT_CTRL 0x00000250
2187 2221
@@ -2272,7 +2306,7 @@ static inline uint32_t A4XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT(uint32_t val)
2272{ 2306{
2273 return ((val) << A4XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT) & A4XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__MASK; 2307 return ((val) << A4XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT) & A4XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__MASK;
2274} 2308}
2275#define A4XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__MASK 0x0003fc00 2309#define A4XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__MASK 0x0000fc00
2276#define A4XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT 10 2310#define A4XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT 10
2277static inline uint32_t A4XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT(uint32_t val) 2311static inline uint32_t A4XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT(uint32_t val)
2278{ 2312{
@@ -2420,7 +2454,7 @@ static inline uint32_t A4XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT(uint32_t val)
2420{ 2454{
2421 return ((val) << A4XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT) & A4XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__MASK; 2455 return ((val) << A4XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT) & A4XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__MASK;
2422} 2456}
2423#define A4XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__MASK 0x0003fc00 2457#define A4XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__MASK 0x0000fc00
2424#define A4XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT 10 2458#define A4XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT 10
2425static inline uint32_t A4XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT(uint32_t val) 2459static inline uint32_t A4XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT(uint32_t val)
2426{ 2460{
@@ -3117,6 +3151,8 @@ static inline uint32_t A4XX_TPL1_TP_TEX_COUNT_GS(uint32_t val)
3117 3151
3118#define REG_A4XX_GRAS_CL_CLIP_CNTL 0x00002000 3152#define REG_A4XX_GRAS_CL_CLIP_CNTL 0x00002000
3119#define A4XX_GRAS_CL_CLIP_CNTL_CLIP_DISABLE 0x00008000 3153#define A4XX_GRAS_CL_CLIP_CNTL_CLIP_DISABLE 0x00008000
3154#define A4XX_GRAS_CL_CLIP_CNTL_ZNEAR_CLIP_DISABLE 0x00010000
3155#define A4XX_GRAS_CL_CLIP_CNTL_ZFAR_CLIP_DISABLE 0x00020000
3120#define A4XX_GRAS_CL_CLIP_CNTL_ZERO_GB_SCALE_Z 0x00400000 3156#define A4XX_GRAS_CL_CLIP_CNTL_ZERO_GB_SCALE_Z 0x00400000
3121 3157
3122#define REG_A4XX_GRAS_CLEAR_CNTL 0x00002003 3158#define REG_A4XX_GRAS_CLEAR_CNTL 0x00002003
@@ -3253,6 +3289,7 @@ static inline uint32_t A4XX_GRAS_SU_MODE_CONTROL_LINEHALFWIDTH(float val)
3253 return ((((int32_t)(val * 4.0))) << A4XX_GRAS_SU_MODE_CONTROL_LINEHALFWIDTH__SHIFT) & A4XX_GRAS_SU_MODE_CONTROL_LINEHALFWIDTH__MASK; 3289 return ((((int32_t)(val * 4.0))) << A4XX_GRAS_SU_MODE_CONTROL_LINEHALFWIDTH__SHIFT) & A4XX_GRAS_SU_MODE_CONTROL_LINEHALFWIDTH__MASK;
3254} 3290}
3255#define A4XX_GRAS_SU_MODE_CONTROL_POLY_OFFSET 0x00000800 3291#define A4XX_GRAS_SU_MODE_CONTROL_POLY_OFFSET 0x00000800
3292#define A4XX_GRAS_SU_MODE_CONTROL_MSAA_ENABLE 0x00002000
3256#define A4XX_GRAS_SU_MODE_CONTROL_RENDERING_PASS 0x00100000 3293#define A4XX_GRAS_SU_MODE_CONTROL_RENDERING_PASS 0x00100000
3257 3294
3258#define REG_A4XX_GRAS_SC_CONTROL 0x0000207b 3295#define REG_A4XX_GRAS_SC_CONTROL 0x0000207b
@@ -3670,6 +3707,8 @@ static inline uint32_t A4XX_HLSQ_GS_CONTROL_REG_INSTRLENGTH(uint32_t val)
3670#define REG_A4XX_PC_BINNING_COMMAND 0x00000d00 3707#define REG_A4XX_PC_BINNING_COMMAND 0x00000d00
3671#define A4XX_PC_BINNING_COMMAND_BINNING_ENABLE 0x00000001 3708#define A4XX_PC_BINNING_COMMAND_BINNING_ENABLE 0x00000001
3672 3709
3710#define REG_A4XX_PC_TESSFACTOR_ADDR 0x00000d08
3711
3673#define REG_A4XX_PC_DRAWCALL_SETUP_OVERRIDE 0x00000d0c 3712#define REG_A4XX_PC_DRAWCALL_SETUP_OVERRIDE 0x00000d0c
3674 3713
3675#define REG_A4XX_PC_PERFCTR_PC_SEL_0 0x00000d10 3714#define REG_A4XX_PC_PERFCTR_PC_SEL_0 0x00000d10
@@ -3690,6 +3729,20 @@ static inline uint32_t A4XX_HLSQ_GS_CONTROL_REG_INSTRLENGTH(uint32_t val)
3690 3729
3691#define REG_A4XX_PC_BIN_BASE 0x000021c0 3730#define REG_A4XX_PC_BIN_BASE 0x000021c0
3692 3731
3732#define REG_A4XX_PC_VSTREAM_CONTROL 0x000021c2
3733#define A4XX_PC_VSTREAM_CONTROL_SIZE__MASK 0x003f0000
3734#define A4XX_PC_VSTREAM_CONTROL_SIZE__SHIFT 16
3735static inline uint32_t A4XX_PC_VSTREAM_CONTROL_SIZE(uint32_t val)
3736{
3737 return ((val) << A4XX_PC_VSTREAM_CONTROL_SIZE__SHIFT) & A4XX_PC_VSTREAM_CONTROL_SIZE__MASK;
3738}
3739#define A4XX_PC_VSTREAM_CONTROL_N__MASK 0x07c00000
3740#define A4XX_PC_VSTREAM_CONTROL_N__SHIFT 22
3741static inline uint32_t A4XX_PC_VSTREAM_CONTROL_N(uint32_t val)
3742{
3743 return ((val) << A4XX_PC_VSTREAM_CONTROL_N__SHIFT) & A4XX_PC_VSTREAM_CONTROL_N__MASK;
3744}
3745
3693#define REG_A4XX_PC_PRIM_VTX_CNTL 0x000021c4 3746#define REG_A4XX_PC_PRIM_VTX_CNTL 0x000021c4
3694#define A4XX_PC_PRIM_VTX_CNTL_VAROUT__MASK 0x0000000f 3747#define A4XX_PC_PRIM_VTX_CNTL_VAROUT__MASK 0x0000000f
3695#define A4XX_PC_PRIM_VTX_CNTL_VAROUT__SHIFT 0 3748#define A4XX_PC_PRIM_VTX_CNTL_VAROUT__SHIFT 0
@@ -3752,12 +3805,8 @@ static inline uint32_t A4XX_PC_HS_PARAM_SPACING(enum a4xx_tess_spacing val)
3752{ 3805{
3753 return ((val) << A4XX_PC_HS_PARAM_SPACING__SHIFT) & A4XX_PC_HS_PARAM_SPACING__MASK; 3806 return ((val) << A4XX_PC_HS_PARAM_SPACING__SHIFT) & A4XX_PC_HS_PARAM_SPACING__MASK;
3754} 3807}
3755#define A4XX_PC_HS_PARAM_PRIMTYPE__MASK 0x01800000 3808#define A4XX_PC_HS_PARAM_CW 0x00800000
3756#define A4XX_PC_HS_PARAM_PRIMTYPE__SHIFT 23 3809#define A4XX_PC_HS_PARAM_CONNECTED 0x01000000
3757static inline uint32_t A4XX_PC_HS_PARAM_PRIMTYPE(enum adreno_pa_su_sc_draw val)
3758{
3759 return ((val) << A4XX_PC_HS_PARAM_PRIMTYPE__SHIFT) & A4XX_PC_HS_PARAM_PRIMTYPE__MASK;
3760}
3761 3810
3762#define REG_A4XX_VBIF_VERSION 0x00003000 3811#define REG_A4XX_VBIF_VERSION 0x00003000
3763 3812
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index d0d3c7baa8fe..511bc855cc7f 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -113,7 +113,7 @@ static void a4xx_enable_hwcg(struct msm_gpu *gpu)
 }
 
 
-static void a4xx_me_init(struct msm_gpu *gpu)
+static bool a4xx_me_init(struct msm_gpu *gpu)
 {
 	struct msm_ringbuffer *ring = gpu->rb;
 
@@ -137,7 +137,7 @@ static void a4xx_me_init(struct msm_gpu *gpu)
 	OUT_RING(ring, 0x00000000);
 
 	gpu->funcs->flush(gpu);
-	gpu->funcs->idle(gpu);
+	return gpu->funcs->idle(gpu);
 }
 
 static int a4xx_hw_init(struct msm_gpu *gpu)
@@ -292,15 +292,20 @@ static int a4xx_hw_init(struct msm_gpu *gpu)
 	/* clear ME_HALT to start micro engine */
 	gpu_write(gpu, REG_A4XX_CP_ME_CNTL, 0);
 
-	a4xx_me_init(gpu);
-
-	return 0;
+	return a4xx_me_init(gpu) ? 0 : -EINVAL;
 }
 
 static void a4xx_recover(struct msm_gpu *gpu)
 {
+	int i;
+
 	adreno_dump_info(gpu);
 
+	for (i = 0; i < 8; i++) {
+		printk("CP_SCRATCH_REG%d: %u\n", i,
+			gpu_read(gpu, REG_AXXX_CP_SCRATCH_REG0 + i));
+	}
+
 	/* dump registers before resetting gpu, if enabled: */
 	if (hang_debug)
 		a4xx_dump(gpu);
@@ -328,17 +333,21 @@ static void a4xx_destroy(struct msm_gpu *gpu)
 	kfree(a4xx_gpu);
 }
 
-static void a4xx_idle(struct msm_gpu *gpu)
+static bool a4xx_idle(struct msm_gpu *gpu)
 {
 	/* wait for ringbuffer to drain: */
-	adreno_idle(gpu);
+	if (!adreno_idle(gpu))
+		return false;
 
 	/* then wait for GPU to finish: */
 	if (spin_until(!(gpu_read(gpu, REG_A4XX_RBBM_STATUS) &
-			A4XX_RBBM_STATUS_GPU_BUSY)))
+			A4XX_RBBM_STATUS_GPU_BUSY))) {
 		DRM_ERROR("%s: timeout waiting for GPU to idle!\n", gpu->name);
+		/* TODO maybe we need to reset GPU here to recover from hang? */
+		return false;
+	}
 
-	/* TODO maybe we need to reset GPU here to recover from hang? */
+	return true;
 }
 
 static irqreturn_t a4xx_irq(struct msm_gpu *gpu)
@@ -460,87 +469,13 @@ static void a4xx_show(struct msm_gpu *gpu, struct seq_file *m)
 
 /* Register offset defines for A4XX, in order of enum adreno_regs */
 static const unsigned int a4xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_DEBUG, REG_A4XX_CP_DEBUG),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_RAM_WADDR, REG_A4XX_CP_ME_RAM_WADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_RAM_DATA, REG_A4XX_CP_ME_RAM_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PFP_UCODE_DATA,
-			REG_A4XX_CP_PFP_UCODE_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PFP_UCODE_ADDR,
-			REG_A4XX_CP_PFP_UCODE_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_WFI_PEND_CTR, REG_A4XX_CP_WFI_PEND_CTR),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_A4XX_CP_RB_BASE),
+	REG_ADRENO_SKIP(REG_ADRENO_CP_RB_BASE_HI),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR, REG_A4XX_CP_RB_RPTR_ADDR),
+	REG_ADRENO_SKIP(REG_ADRENO_CP_RB_RPTR_ADDR_HI),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_A4XX_CP_RB_RPTR),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_A4XX_CP_RB_WPTR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PROTECT_CTRL, REG_A4XX_CP_PROTECT_CTRL),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_CNTL, REG_A4XX_CP_ME_CNTL),
 	REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_A4XX_CP_RB_CNTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB1_BASE, REG_A4XX_CP_IB1_BASE),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB1_BUFSZ, REG_A4XX_CP_IB1_BUFSZ),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB2_BASE, REG_A4XX_CP_IB2_BASE),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_IB2_BUFSZ, REG_A4XX_CP_IB2_BUFSZ),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_TIMESTAMP, REG_AXXX_CP_SCRATCH_REG0),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ME_RAM_RADDR, REG_A4XX_CP_ME_RAM_RADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ROQ_ADDR, REG_A4XX_CP_ROQ_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_ROQ_DATA, REG_A4XX_CP_ROQ_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MERCIU_ADDR, REG_A4XX_CP_MERCIU_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MERCIU_DATA, REG_A4XX_CP_MERCIU_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MERCIU_DATA2, REG_A4XX_CP_MERCIU_DATA2),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MEQ_ADDR, REG_A4XX_CP_MEQ_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_MEQ_DATA, REG_A4XX_CP_MEQ_DATA),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_HW_FAULT, REG_A4XX_CP_HW_FAULT),
-	REG_ADRENO_DEFINE(REG_ADRENO_CP_PROTECT_STATUS,
-			REG_A4XX_CP_PROTECT_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_SCRATCH_ADDR, REG_A4XX_CP_SCRATCH_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_SCRATCH_UMSK, REG_A4XX_CP_SCRATCH_UMASK),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_STATUS, REG_A4XX_RBBM_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_CTL,
-			REG_A4XX_RBBM_PERFCTR_CTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_CMD0,
-			REG_A4XX_RBBM_PERFCTR_LOAD_CMD0),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_CMD1,
-			REG_A4XX_RBBM_PERFCTR_LOAD_CMD1),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_CMD2,
-			REG_A4XX_RBBM_PERFCTR_LOAD_CMD2),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_PWR_1_LO,
-			REG_A4XX_RBBM_PERFCTR_PWR_1_LO),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_INT_0_MASK, REG_A4XX_RBBM_INT_0_MASK),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_INT_0_STATUS,
-			REG_A4XX_RBBM_INT_0_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_AHB_ERROR_STATUS,
-			REG_A4XX_RBBM_AHB_ERROR_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_AHB_CMD, REG_A4XX_RBBM_AHB_CMD),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_CLOCK_CTL, REG_A4XX_RBBM_CLOCK_CTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_AHB_ME_SPLIT_STATUS,
-			REG_A4XX_RBBM_AHB_ME_SPLIT_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_AHB_PFP_SPLIT_STATUS,
-			REG_A4XX_RBBM_AHB_PFP_SPLIT_STATUS),
-	REG_ADRENO_DEFINE(REG_ADRENO_VPC_DEBUG_RAM_SEL,
-			REG_A4XX_VPC_DEBUG_RAM_SEL),
-	REG_ADRENO_DEFINE(REG_ADRENO_VPC_DEBUG_RAM_READ,
-			REG_A4XX_VPC_DEBUG_RAM_READ),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_INT_CLEAR_CMD,
-			REG_A4XX_RBBM_INT_CLEAR_CMD),
-	REG_ADRENO_DEFINE(REG_ADRENO_VSC_SIZE_ADDRESS,
-			REG_A4XX_VSC_SIZE_ADDRESS),
-	REG_ADRENO_DEFINE(REG_ADRENO_VFD_CONTROL_0, REG_A4XX_VFD_CONTROL_0),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_VS_PVT_MEM_ADDR_REG,
-			REG_A4XX_SP_VS_PVT_MEM_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_FS_PVT_MEM_ADDR_REG,
-			REG_A4XX_SP_FS_PVT_MEM_ADDR),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_VS_OBJ_START_REG,
-			REG_A4XX_SP_VS_OBJ_START),
-	REG_ADRENO_DEFINE(REG_ADRENO_SP_FS_OBJ_START_REG,
-			REG_A4XX_SP_FS_OBJ_START),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_RBBM_CTL, REG_A4XX_RBBM_RBBM_CTL),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_SW_RESET_CMD,
-			REG_A4XX_RBBM_SW_RESET_CMD),
-	REG_ADRENO_DEFINE(REG_ADRENO_UCHE_INVALIDATE0,
-			REG_A4XX_UCHE_INVALIDATE0),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_VALUE_LO,
-			REG_A4XX_RBBM_PERFCTR_LOAD_VALUE_LO),
-	REG_ADRENO_DEFINE(REG_ADRENO_RBBM_PERFCTR_LOAD_VALUE_HI,
-			REG_A4XX_RBBM_PERFCTR_LOAD_VALUE_HI),
 };
 
 static void a4xx_dump(struct msm_gpu *gpu)
@@ -587,16 +522,8 @@ static int a4xx_pm_suspend(struct msm_gpu *gpu) {
 
 static int a4xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
 {
-	uint32_t hi, lo, tmp;
-
-	tmp = gpu_read(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_HI);
-	do {
-		hi = tmp;
-		lo = gpu_read(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO);
-		tmp = gpu_read(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_HI);
-	} while (tmp != hi);
-
-	*value = (((uint64_t)hi) << 32) | lo;
+	*value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO,
+		REG_A4XX_RBBM_PERFCTR_CP_0_HI);
 
 	return 0;
 }
@@ -672,7 +599,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 #endif
 	}
 
-	if (!gpu->mmu) {
+	if (!gpu->aspace) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a5xx.xml.h b/drivers/gpu/drm/msm/adreno/a5xx.xml.h
new file mode 100644
index 000000000000..b6fe763ddf34
--- /dev/null
+++ b/drivers/gpu/drm/msm/adreno/a5xx.xml.h
@@ -0,0 +1,3757 @@
1#ifndef A5XX_XML
2#define A5XX_XML
3
4/* Autogenerated file, DO NOT EDIT manually!
5
6This file was generated by the rules-ng-ng headergen tool in this git repository:
7http://github.com/freedreno/envytools/
8git clone https://github.com/freedreno/envytools.git
9
10The rules-ng-ng source files this header was generated from are:
11- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 431 bytes, from 2016-04-26 17:56:44)
12- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
13- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32907 bytes, from 2016-11-26 23:01:08)
14- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 12025 bytes, from 2016-11-26 23:01:08)
15- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 22544 bytes, from 2016-11-26 23:01:08)
16- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83840 bytes, from 2016-11-26 23:01:08)
17- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 110765 bytes, from 2016-11-26 23:01:48)
18- /home/robclark/src/freedreno/envytools/rnndb/adreno/a5xx.xml ( 90321 bytes, from 2016-11-28 16:50:05)
19- /home/robclark/src/freedreno/envytools/rnndb/adreno/ocmem.xml ( 1773 bytes, from 2015-09-24 17:30:00)
20
21Copyright (C) 2013-2016 by the following authors:
22- Rob Clark <robdclark@gmail.com> (robclark)
23- Ilia Mirkin <imirkin@alum.mit.edu> (imirkin)
24
25Permission is hereby granted, free of charge, to any person obtaining
26a copy of this software and associated documentation files (the
27"Software"), to deal in the Software without restriction, including
28without limitation the rights to use, copy, modify, merge, publish,
29distribute, sublicense, and/or sell copies of the Software, and to
30permit persons to whom the Software is furnished to do so, subject to
31the following conditions:
32
33The above copyright notice and this permission notice (including the
34next paragraph) shall be included in all copies or substantial
35portions of the Software.
36
37THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
38EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
39MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
40IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE
41LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
42OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
43WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
44*/
45
46
47enum a5xx_color_fmt {
48 RB5_R8_UNORM = 3,
49 RB5_R4G4B4A4_UNORM = 8,
50 RB5_R5G5B5A1_UNORM = 10,
51 RB5_R5G6B5_UNORM = 14,
52 RB5_R16_FLOAT = 23,
53 RB5_R8G8B8A8_UNORM = 48,
54 RB5_R8G8B8_UNORM = 49,
55 RB5_R8G8B8A8_UINT = 51,
56 RB5_R10G10B10A2_UINT = 58,
57 RB5_R16G16_FLOAT = 69,
58 RB5_R32_FLOAT = 74,
59 RB5_R16G16B16A16_FLOAT = 98,
60 RB5_R32G32_FLOAT = 103,
61 RB5_R32G32B32A32_FLOAT = 130,
62};
63
64enum a5xx_tile_mode {
65 TILE5_LINEAR = 0,
66 TILE5_2 = 2,
67 TILE5_3 = 3,
68};
69
70enum a5xx_vtx_fmt {
71 VFMT5_8_UNORM = 3,
72 VFMT5_8_SNORM = 4,
73 VFMT5_8_UINT = 5,
74 VFMT5_8_SINT = 6,
75 VFMT5_8_8_UNORM = 15,
76 VFMT5_8_8_SNORM = 16,
77 VFMT5_8_8_UINT = 17,
78 VFMT5_8_8_SINT = 18,
79 VFMT5_16_UNORM = 21,
80 VFMT5_16_SNORM = 22,
81 VFMT5_16_FLOAT = 23,
82 VFMT5_16_UINT = 24,
83 VFMT5_16_SINT = 25,
84 VFMT5_8_8_8_UNORM = 33,
85 VFMT5_8_8_8_SNORM = 34,
86 VFMT5_8_8_8_UINT = 35,
87 VFMT5_8_8_8_SINT = 36,
88 VFMT5_8_8_8_8_UNORM = 48,
89 VFMT5_8_8_8_8_SNORM = 50,
90 VFMT5_8_8_8_8_UINT = 51,
91 VFMT5_8_8_8_8_SINT = 52,
92 VFMT5_16_16_UNORM = 67,
93 VFMT5_16_16_SNORM = 68,
94 VFMT5_16_16_FLOAT = 69,
95 VFMT5_16_16_UINT = 70,
96 VFMT5_16_16_SINT = 71,
97 VFMT5_32_UNORM = 72,
98 VFMT5_32_SNORM = 73,
99 VFMT5_32_FLOAT = 74,
100 VFMT5_32_UINT = 75,
101 VFMT5_32_SINT = 76,
102 VFMT5_32_FIXED = 77,
103 VFMT5_16_16_16_UNORM = 88,
104 VFMT5_16_16_16_SNORM = 89,
105 VFMT5_16_16_16_FLOAT = 90,
106 VFMT5_16_16_16_UINT = 91,
107 VFMT5_16_16_16_SINT = 92,
108 VFMT5_16_16_16_16_UNORM = 96,
109 VFMT5_16_16_16_16_SNORM = 97,
110 VFMT5_16_16_16_16_FLOAT = 98,
111 VFMT5_16_16_16_16_UINT = 99,
112 VFMT5_16_16_16_16_SINT = 100,
113 VFMT5_32_32_UNORM = 101,
114 VFMT5_32_32_SNORM = 102,
115 VFMT5_32_32_FLOAT = 103,
116 VFMT5_32_32_UINT = 104,
117 VFMT5_32_32_SINT = 105,
118 VFMT5_32_32_FIXED = 106,
119 VFMT5_32_32_32_UNORM = 112,
120 VFMT5_32_32_32_SNORM = 113,
121 VFMT5_32_32_32_UINT = 114,
122 VFMT5_32_32_32_SINT = 115,
123 VFMT5_32_32_32_FLOAT = 116,
124 VFMT5_32_32_32_FIXED = 117,
125 VFMT5_32_32_32_32_UNORM = 128,
126 VFMT5_32_32_32_32_SNORM = 129,
127 VFMT5_32_32_32_32_FLOAT = 130,
128 VFMT5_32_32_32_32_UINT = 131,
129 VFMT5_32_32_32_32_SINT = 132,
130 VFMT5_32_32_32_32_FIXED = 133,
131};
132
133enum a5xx_tex_fmt {
134 TFMT5_A8_UNORM = 2,
135 TFMT5_8_UNORM = 3,
136 TFMT5_4_4_4_4_UNORM = 8,
137 TFMT5_5_5_5_1_UNORM = 10,
138 TFMT5_5_6_5_UNORM = 14,
139 TFMT5_8_8_UNORM = 15,
140 TFMT5_8_8_SNORM = 16,
141 TFMT5_L8_A8_UNORM = 19,
142 TFMT5_16_FLOAT = 23,
143 TFMT5_8_8_8_8_UNORM = 48,
144 TFMT5_8_8_8_UNORM = 49,
145 TFMT5_8_8_8_SNORM = 50,
146 TFMT5_9_9_9_E5_FLOAT = 53,
147 TFMT5_10_10_10_2_UNORM = 54,
148 TFMT5_11_11_10_FLOAT = 66,
149 TFMT5_16_16_FLOAT = 69,
150 TFMT5_32_FLOAT = 74,
151 TFMT5_16_16_16_16_FLOAT = 98,
152 TFMT5_32_32_FLOAT = 103,
153 TFMT5_32_32_32_32_FLOAT = 130,
154 TFMT5_X8Z24_UNORM = 160,
155};
156
157enum a5xx_tex_fetchsize {
158 TFETCH5_1_BYTE = 0,
159 TFETCH5_2_BYTE = 1,
160 TFETCH5_4_BYTE = 2,
161 TFETCH5_8_BYTE = 3,
162 TFETCH5_16_BYTE = 4,
163};
164
165enum a5xx_depth_format {
166 DEPTH5_NONE = 0,
167 DEPTH5_16 = 1,
168 DEPTH5_24_8 = 2,
169 DEPTH5_32 = 4,
170};
171
172enum a5xx_blit_buf {
173 BLIT_MRT0 = 0,
174 BLIT_MRT1 = 1,
175 BLIT_MRT2 = 2,
176 BLIT_MRT3 = 3,
177 BLIT_MRT4 = 4,
178 BLIT_MRT5 = 5,
179 BLIT_MRT6 = 6,
180 BLIT_MRT7 = 7,
181 BLIT_ZS = 8,
182 BLIT_Z32 = 9,
183};
184
185enum a5xx_tex_filter {
186 A5XX_TEX_NEAREST = 0,
187 A5XX_TEX_LINEAR = 1,
188 A5XX_TEX_ANISO = 2,
189};
190
191enum a5xx_tex_clamp {
192 A5XX_TEX_REPEAT = 0,
193 A5XX_TEX_CLAMP_TO_EDGE = 1,
194 A5XX_TEX_MIRROR_REPEAT = 2,
195 A5XX_TEX_CLAMP_TO_BORDER = 3,
196 A5XX_TEX_MIRROR_CLAMP = 4,
197};
198
199enum a5xx_tex_aniso {
200 A5XX_TEX_ANISO_1 = 0,
201 A5XX_TEX_ANISO_2 = 1,
202 A5XX_TEX_ANISO_4 = 2,
203 A5XX_TEX_ANISO_8 = 3,
204 A5XX_TEX_ANISO_16 = 4,
205};
206
207enum a5xx_tex_swiz {
208 A5XX_TEX_X = 0,
209 A5XX_TEX_Y = 1,
210 A5XX_TEX_Z = 2,
211 A5XX_TEX_W = 3,
212 A5XX_TEX_ZERO = 4,
213 A5XX_TEX_ONE = 5,
214};
215
216enum a5xx_tex_type {
217 A5XX_TEX_1D = 0,
218 A5XX_TEX_2D = 1,
219 A5XX_TEX_CUBE = 2,
220 A5XX_TEX_3D = 3,
221};
222
223#define A5XX_INT0_RBBM_GPU_IDLE 0x00000001
224#define A5XX_INT0_RBBM_AHB_ERROR 0x00000002
225#define A5XX_INT0_RBBM_TRANSFER_TIMEOUT 0x00000004
226#define A5XX_INT0_RBBM_ME_MS_TIMEOUT 0x00000008
227#define A5XX_INT0_RBBM_PFP_MS_TIMEOUT 0x00000010
228#define A5XX_INT0_RBBM_ETS_MS_TIMEOUT 0x00000020
229#define A5XX_INT0_RBBM_ATB_ASYNC_OVERFLOW 0x00000040
230#define A5XX_INT0_RBBM_GPC_ERROR 0x00000080
231#define A5XX_INT0_CP_SW 0x00000100
232#define A5XX_INT0_CP_HW_ERROR 0x00000200
233#define A5XX_INT0_CP_CCU_FLUSH_DEPTH_TS 0x00000400
234#define A5XX_INT0_CP_CCU_FLUSH_COLOR_TS 0x00000800
235#define A5XX_INT0_CP_CCU_RESOLVE_TS 0x00001000
236#define A5XX_INT0_CP_IB2 0x00002000
237#define A5XX_INT0_CP_IB1 0x00004000
238#define A5XX_INT0_CP_RB 0x00008000
239#define A5XX_INT0_CP_UNUSED_1 0x00010000
240#define A5XX_INT0_CP_RB_DONE_TS 0x00020000
241#define A5XX_INT0_CP_WT_DONE_TS 0x00040000
242#define A5XX_INT0_UNKNOWN_1 0x00080000
243#define A5XX_INT0_CP_CACHE_FLUSH_TS 0x00100000
244#define A5XX_INT0_UNUSED_2 0x00200000
245#define A5XX_INT0_RBBM_ATB_BUS_OVERFLOW 0x00400000
246#define A5XX_INT0_MISC_HANG_DETECT 0x00800000
247#define A5XX_INT0_UCHE_OOB_ACCESS 0x01000000
248#define A5XX_INT0_UCHE_TRAP_INTR 0x02000000
249#define A5XX_INT0_DEBBUS_INTR_0 0x04000000
250#define A5XX_INT0_DEBBUS_INTR_1 0x08000000
251#define A5XX_INT0_GPMU_VOLTAGE_DROOP 0x10000000
252#define A5XX_INT0_GPMU_FIRMWARE 0x20000000
253#define A5XX_INT0_ISDB_CPU_IRQ 0x40000000
254#define A5XX_INT0_ISDB_UNDER_DEBUG 0x80000000
255#define A5XX_CP_INT_CP_OPCODE_ERROR 0x00000001
256#define A5XX_CP_INT_CP_RESERVED_BIT_ERROR 0x00000002
257#define A5XX_CP_INT_CP_HW_FAULT_ERROR 0x00000004
258#define A5XX_CP_INT_CP_DMA_ERROR 0x00000008
259#define A5XX_CP_INT_CP_REGISTER_PROTECTION_ERROR 0x00000010
260#define A5XX_CP_INT_CP_AHB_ERROR 0x00000020
261#define REG_A5XX_CP_RB_BASE 0x00000800
262
263#define REG_A5XX_CP_RB_BASE_HI 0x00000801
264
265#define REG_A5XX_CP_RB_CNTL 0x00000802
266
267#define REG_A5XX_CP_RB_RPTR_ADDR 0x00000804
268
269#define REG_A5XX_CP_RB_RPTR_ADDR_HI 0x00000805
270
271#define REG_A5XX_CP_RB_RPTR 0x00000806
272
273#define REG_A5XX_CP_RB_WPTR 0x00000807
274
275#define REG_A5XX_CP_PFP_STAT_ADDR 0x00000808
276
277#define REG_A5XX_CP_PFP_STAT_DATA 0x00000809
278
279#define REG_A5XX_CP_DRAW_STATE_ADDR 0x0000080b
280
#define REG_A5XX_CP_DRAW_STATE_DATA 0x0000080c

#define REG_A5XX_CP_CRASH_SCRIPT_BASE_LO 0x00000817

#define REG_A5XX_CP_CRASH_SCRIPT_BASE_HI 0x00000818

#define REG_A5XX_CP_CRASH_DUMP_CNTL 0x00000819

#define REG_A5XX_CP_ME_STAT_ADDR 0x0000081a

#define REG_A5XX_CP_ROQ_THRESHOLDS_1 0x0000081f

#define REG_A5XX_CP_ROQ_THRESHOLDS_2 0x00000820

#define REG_A5XX_CP_ROQ_DBG_ADDR 0x00000821

#define REG_A5XX_CP_ROQ_DBG_DATA 0x00000822

#define REG_A5XX_CP_MEQ_DBG_ADDR 0x00000823

#define REG_A5XX_CP_MEQ_DBG_DATA 0x00000824

#define REG_A5XX_CP_MEQ_THRESHOLDS 0x00000825

#define REG_A5XX_CP_MERCIU_SIZE 0x00000826

#define REG_A5XX_CP_MERCIU_DBG_ADDR 0x00000827

#define REG_A5XX_CP_MERCIU_DBG_DATA_1 0x00000828

#define REG_A5XX_CP_MERCIU_DBG_DATA_2 0x00000829

#define REG_A5XX_CP_PFP_UCODE_DBG_ADDR 0x0000082a

#define REG_A5XX_CP_PFP_UCODE_DBG_DATA 0x0000082b

#define REG_A5XX_CP_ME_UCODE_DBG_ADDR 0x0000082f

#define REG_A5XX_CP_ME_UCODE_DBG_DATA 0x00000830

#define REG_A5XX_CP_CNTL 0x00000831

#define REG_A5XX_CP_PFP_ME_CNTL 0x00000832

#define REG_A5XX_CP_CHICKEN_DBG 0x00000833

#define REG_A5XX_CP_PFP_INSTR_BASE_LO 0x00000835

#define REG_A5XX_CP_PFP_INSTR_BASE_HI 0x00000836

#define REG_A5XX_CP_ME_INSTR_BASE_LO 0x00000838

#define REG_A5XX_CP_ME_INSTR_BASE_HI 0x00000839

#define REG_A5XX_CP_CONTEXT_SWITCH_CNTL 0x0000083b

#define REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO 0x0000083c

#define REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_HI 0x0000083d

#define REG_A5XX_CP_CONTEXT_SWITCH_SAVE_ADDR_LO 0x0000083e

#define REG_A5XX_CP_CONTEXT_SWITCH_SAVE_ADDR_HI 0x0000083f

#define REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_LO 0x00000840

#define REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_HI 0x00000841

#define REG_A5XX_CP_ADDR_MODE_CNTL 0x00000860

#define REG_A5XX_CP_ME_STAT_DATA 0x00000b14

#define REG_A5XX_CP_WFI_PEND_CTR 0x00000b15

#define REG_A5XX_CP_INTERRUPT_STATUS 0x00000b18

#define REG_A5XX_CP_HW_FAULT 0x00000b1a

#define REG_A5XX_CP_PROTECT_STATUS 0x00000b1c

#define REG_A5XX_CP_IB1_BASE 0x00000b1f

#define REG_A5XX_CP_IB1_BASE_HI 0x00000b20

#define REG_A5XX_CP_IB1_BUFSZ 0x00000b21

#define REG_A5XX_CP_IB2_BASE 0x00000b22

#define REG_A5XX_CP_IB2_BASE_HI 0x00000b23

#define REG_A5XX_CP_IB2_BUFSZ 0x00000b24

static inline uint32_t REG_A5XX_CP_SCRATCH(uint32_t i0) { return 0x00000b78 + 0x1*i0; }

static inline uint32_t REG_A5XX_CP_SCRATCH_REG(uint32_t i0) { return 0x00000b78 + 0x1*i0; }

static inline uint32_t REG_A5XX_CP_PROTECT(uint32_t i0) { return 0x00000880 + 0x1*i0; }

static inline uint32_t REG_A5XX_CP_PROTECT_REG(uint32_t i0) { return 0x00000880 + 0x1*i0; }
#define A5XX_CP_PROTECT_REG_BASE_ADDR__MASK 0x0001ffff
#define A5XX_CP_PROTECT_REG_BASE_ADDR__SHIFT 0
static inline uint32_t A5XX_CP_PROTECT_REG_BASE_ADDR(uint32_t val)
{
	return ((val) << A5XX_CP_PROTECT_REG_BASE_ADDR__SHIFT) & A5XX_CP_PROTECT_REG_BASE_ADDR__MASK;
}
#define A5XX_CP_PROTECT_REG_MASK_LEN__MASK 0x1f000000
#define A5XX_CP_PROTECT_REG_MASK_LEN__SHIFT 24
static inline uint32_t A5XX_CP_PROTECT_REG_MASK_LEN(uint32_t val)
{
	return ((val) << A5XX_CP_PROTECT_REG_MASK_LEN__SHIFT) & A5XX_CP_PROTECT_REG_MASK_LEN__MASK;
}
#define A5XX_CP_PROTECT_REG_TRAP_WRITE 0x20000000
#define A5XX_CP_PROTECT_REG_TRAP_READ 0x40000000

#define REG_A5XX_CP_PROTECT_CNTL 0x000008a0

#define REG_A5XX_CP_AHB_FAULT 0x00000b1b

#define REG_A5XX_CP_PERFCTR_CP_SEL_0 0x00000bb0

#define REG_A5XX_CP_PERFCTR_CP_SEL_1 0x00000bb1

#define REG_A5XX_CP_PERFCTR_CP_SEL_2 0x00000bb2

#define REG_A5XX_CP_PERFCTR_CP_SEL_3 0x00000bb3

#define REG_A5XX_CP_PERFCTR_CP_SEL_4 0x00000bb4

#define REG_A5XX_CP_PERFCTR_CP_SEL_5 0x00000bb5

#define REG_A5XX_CP_PERFCTR_CP_SEL_6 0x00000bb6

#define REG_A5XX_CP_PERFCTR_CP_SEL_7 0x00000bb7

#define REG_A5XX_VSC_ADDR_MODE_CNTL 0x00000bc1

#define REG_A5XX_CP_POWERCTR_CP_SEL_0 0x00000bba

#define REG_A5XX_CP_POWERCTR_CP_SEL_1 0x00000bbb

#define REG_A5XX_CP_POWERCTR_CP_SEL_2 0x00000bbc

#define REG_A5XX_CP_POWERCTR_CP_SEL_3 0x00000bbd

#define REG_A5XX_RBBM_CFG_DBGBUS_SEL_A 0x00000004

#define REG_A5XX_RBBM_CFG_DBGBUS_SEL_B 0x00000005

#define REG_A5XX_RBBM_CFG_DBGBUS_SEL_C 0x00000006

#define REG_A5XX_RBBM_CFG_DBGBUS_SEL_D 0x00000007

#define REG_A5XX_RBBM_CFG_DBGBUS_CNTLT 0x00000008

#define REG_A5XX_RBBM_CFG_DBGBUS_CNTLM 0x00000009

#define REG_A5XX_RBBM_CFG_DEBBUS_CTLTM_ENABLE_SHIFT 0x00000018

#define REG_A5XX_RBBM_CFG_DBGBUS_OPL 0x0000000a

#define REG_A5XX_RBBM_CFG_DBGBUS_OPE 0x0000000b

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTL_0 0x0000000c

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTL_1 0x0000000d

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTL_2 0x0000000e

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTL_3 0x0000000f

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKL_0 0x00000010

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKL_1 0x00000011

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKL_2 0x00000012

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKL_3 0x00000013

#define REG_A5XX_RBBM_CFG_DBGBUS_BYTEL_0 0x00000014

#define REG_A5XX_RBBM_CFG_DBGBUS_BYTEL_1 0x00000015

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTE_0 0x00000016

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTE_1 0x00000017

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTE_2 0x00000018

#define REG_A5XX_RBBM_CFG_DBGBUS_IVTE_3 0x00000019

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKE_0 0x0000001a

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKE_1 0x0000001b

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKE_2 0x0000001c

#define REG_A5XX_RBBM_CFG_DBGBUS_MASKE_3 0x0000001d

#define REG_A5XX_RBBM_CFG_DBGBUS_NIBBLEE 0x0000001e

#define REG_A5XX_RBBM_CFG_DBGBUS_PTRC0 0x0000001f

#define REG_A5XX_RBBM_CFG_DBGBUS_PTRC1 0x00000020

#define REG_A5XX_RBBM_CFG_DBGBUS_LOADREG 0x00000021

#define REG_A5XX_RBBM_CFG_DBGBUS_IDX 0x00000022

#define REG_A5XX_RBBM_CFG_DBGBUS_CLRC 0x00000023

#define REG_A5XX_RBBM_CFG_DBGBUS_LOADIVT 0x00000024

#define REG_A5XX_RBBM_INTERFACE_HANG_INT_CNTL 0x0000002f

#define REG_A5XX_RBBM_INT_CLEAR_CMD 0x00000037

#define REG_A5XX_RBBM_INT_0_MASK 0x00000038
#define A5XX_RBBM_INT_0_MASK_RBBM_GPU_IDLE 0x00000001
#define A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR 0x00000002
#define A5XX_RBBM_INT_0_MASK_RBBM_TRANSFER_TIMEOUT 0x00000004
#define A5XX_RBBM_INT_0_MASK_RBBM_ME_MS_TIMEOUT 0x00000008
#define A5XX_RBBM_INT_0_MASK_RBBM_PFP_MS_TIMEOUT 0x00000010
#define A5XX_RBBM_INT_0_MASK_RBBM_ETS_MS_TIMEOUT 0x00000020
#define A5XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNC_OVERFLOW 0x00000040
#define A5XX_RBBM_INT_0_MASK_RBBM_GPC_ERROR 0x00000080
#define A5XX_RBBM_INT_0_MASK_CP_SW 0x00000100
#define A5XX_RBBM_INT_0_MASK_CP_HW_ERROR 0x00000200
#define A5XX_RBBM_INT_0_MASK_CP_CCU_FLUSH_DEPTH_TS 0x00000400
#define A5XX_RBBM_INT_0_MASK_CP_CCU_FLUSH_COLOR_TS 0x00000800
#define A5XX_RBBM_INT_0_MASK_CP_CCU_RESOLVE_TS 0x00001000
#define A5XX_RBBM_INT_0_MASK_CP_IB2 0x00002000
#define A5XX_RBBM_INT_0_MASK_CP_IB1 0x00004000
#define A5XX_RBBM_INT_0_MASK_CP_RB 0x00008000
#define A5XX_RBBM_INT_0_MASK_CP_RB_DONE_TS 0x00020000
#define A5XX_RBBM_INT_0_MASK_CP_WT_DONE_TS 0x00040000
#define A5XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS 0x00100000
#define A5XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW 0x00400000
#define A5XX_RBBM_INT_0_MASK_MISC_HANG_DETECT 0x00800000
#define A5XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS 0x01000000
#define A5XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR 0x02000000
#define A5XX_RBBM_INT_0_MASK_DEBBUS_INTR_0 0x04000000
#define A5XX_RBBM_INT_0_MASK_DEBBUS_INTR_1 0x08000000
#define A5XX_RBBM_INT_0_MASK_GPMU_VOLTAGE_DROOP 0x10000000
#define A5XX_RBBM_INT_0_MASK_GPMU_FIRMWARE 0x20000000
#define A5XX_RBBM_INT_0_MASK_ISDB_CPU_IRQ 0x40000000
#define A5XX_RBBM_INT_0_MASK_ISDB_UNDER_DEBUG 0x80000000

#define REG_A5XX_RBBM_AHB_DBG_CNTL 0x0000003f

#define REG_A5XX_RBBM_EXT_VBIF_DBG_CNTL 0x00000041

#define REG_A5XX_RBBM_SW_RESET_CMD 0x00000043

#define REG_A5XX_RBBM_BLOCK_SW_RESET_CMD 0x00000045

#define REG_A5XX_RBBM_BLOCK_SW_RESET_CMD2 0x00000046

#define REG_A5XX_RBBM_DBG_LO_HI_GPIO 0x00000048

#define REG_A5XX_RBBM_EXT_TRACE_BUS_CNTL 0x00000049

#define REG_A5XX_RBBM_CLOCK_CNTL_TP0 0x0000004a

#define REG_A5XX_RBBM_CLOCK_CNTL_TP1 0x0000004b

#define REG_A5XX_RBBM_CLOCK_CNTL_TP2 0x0000004c

#define REG_A5XX_RBBM_CLOCK_CNTL_TP3 0x0000004d

#define REG_A5XX_RBBM_CLOCK_CNTL2_TP0 0x0000004e

#define REG_A5XX_RBBM_CLOCK_CNTL2_TP1 0x0000004f

#define REG_A5XX_RBBM_CLOCK_CNTL2_TP2 0x00000050

#define REG_A5XX_RBBM_CLOCK_CNTL2_TP3 0x00000051

#define REG_A5XX_RBBM_CLOCK_CNTL3_TP0 0x00000052

#define REG_A5XX_RBBM_CLOCK_CNTL3_TP1 0x00000053

#define REG_A5XX_RBBM_CLOCK_CNTL3_TP2 0x00000054

#define REG_A5XX_RBBM_CLOCK_CNTL3_TP3 0x00000055

#define REG_A5XX_RBBM_READ_AHB_THROUGH_DBG 0x00000059

#define REG_A5XX_RBBM_CLOCK_CNTL_UCHE 0x0000005a

#define REG_A5XX_RBBM_CLOCK_CNTL2_UCHE 0x0000005b

#define REG_A5XX_RBBM_CLOCK_CNTL3_UCHE 0x0000005c

#define REG_A5XX_RBBM_CLOCK_CNTL4_UCHE 0x0000005d

#define REG_A5XX_RBBM_CLOCK_HYST_UCHE 0x0000005e

#define REG_A5XX_RBBM_CLOCK_DELAY_UCHE 0x0000005f

#define REG_A5XX_RBBM_CLOCK_MODE_GPC 0x00000060

#define REG_A5XX_RBBM_CLOCK_DELAY_GPC 0x00000061

#define REG_A5XX_RBBM_CLOCK_HYST_GPC 0x00000062

#define REG_A5XX_RBBM_CLOCK_CNTL_TSE_RAS_RBBM 0x00000063

#define REG_A5XX_RBBM_CLOCK_HYST_TSE_RAS_RBBM 0x00000064

#define REG_A5XX_RBBM_CLOCK_DELAY_TSE_RAS_RBBM 0x00000065

#define REG_A5XX_RBBM_CLOCK_DELAY_HLSQ 0x00000066

#define REG_A5XX_RBBM_CLOCK_CNTL 0x00000067

#define REG_A5XX_RBBM_CLOCK_CNTL_SP0 0x00000068

#define REG_A5XX_RBBM_CLOCK_CNTL_SP1 0x00000069

#define REG_A5XX_RBBM_CLOCK_CNTL_SP2 0x0000006a

#define REG_A5XX_RBBM_CLOCK_CNTL_SP3 0x0000006b

#define REG_A5XX_RBBM_CLOCK_CNTL2_SP0 0x0000006c

#define REG_A5XX_RBBM_CLOCK_CNTL2_SP1 0x0000006d

#define REG_A5XX_RBBM_CLOCK_CNTL2_SP2 0x0000006e

#define REG_A5XX_RBBM_CLOCK_CNTL2_SP3 0x0000006f

#define REG_A5XX_RBBM_CLOCK_HYST_SP0 0x00000070

#define REG_A5XX_RBBM_CLOCK_HYST_SP1 0x00000071

#define REG_A5XX_RBBM_CLOCK_HYST_SP2 0x00000072

#define REG_A5XX_RBBM_CLOCK_HYST_SP3 0x00000073

#define REG_A5XX_RBBM_CLOCK_DELAY_SP0 0x00000074

#define REG_A5XX_RBBM_CLOCK_DELAY_SP1 0x00000075

#define REG_A5XX_RBBM_CLOCK_DELAY_SP2 0x00000076

#define REG_A5XX_RBBM_CLOCK_DELAY_SP3 0x00000077

#define REG_A5XX_RBBM_CLOCK_CNTL_RB0 0x00000078

#define REG_A5XX_RBBM_CLOCK_CNTL_RB1 0x00000079

#define REG_A5XX_RBBM_CLOCK_CNTL_RB2 0x0000007a

#define REG_A5XX_RBBM_CLOCK_CNTL_RB3 0x0000007b

#define REG_A5XX_RBBM_CLOCK_CNTL2_RB0 0x0000007c

#define REG_A5XX_RBBM_CLOCK_CNTL2_RB1 0x0000007d

#define REG_A5XX_RBBM_CLOCK_CNTL2_RB2 0x0000007e

#define REG_A5XX_RBBM_CLOCK_CNTL2_RB3 0x0000007f

#define REG_A5XX_RBBM_CLOCK_HYST_RAC 0x00000080

#define REG_A5XX_RBBM_CLOCK_DELAY_RAC 0x00000081

#define REG_A5XX_RBBM_CLOCK_CNTL_CCU0 0x00000082

#define REG_A5XX_RBBM_CLOCK_CNTL_CCU1 0x00000083

#define REG_A5XX_RBBM_CLOCK_CNTL_CCU2 0x00000084

#define REG_A5XX_RBBM_CLOCK_CNTL_CCU3 0x00000085

#define REG_A5XX_RBBM_CLOCK_HYST_RB_CCU0 0x00000086

#define REG_A5XX_RBBM_CLOCK_HYST_RB_CCU1 0x00000087

#define REG_A5XX_RBBM_CLOCK_HYST_RB_CCU2 0x00000088

#define REG_A5XX_RBBM_CLOCK_HYST_RB_CCU3 0x00000089

#define REG_A5XX_RBBM_CLOCK_CNTL_RAC 0x0000008a

#define REG_A5XX_RBBM_CLOCK_CNTL2_RAC 0x0000008b

#define REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_0 0x0000008c

#define REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_1 0x0000008d

#define REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_2 0x0000008e

#define REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_3 0x0000008f

#define REG_A5XX_RBBM_CLOCK_HYST_VFD 0x00000090

#define REG_A5XX_RBBM_CLOCK_MODE_VFD 0x00000091

#define REG_A5XX_RBBM_CLOCK_DELAY_VFD 0x00000092

#define REG_A5XX_RBBM_AHB_CNTL0 0x00000093

#define REG_A5XX_RBBM_AHB_CNTL1 0x00000094

#define REG_A5XX_RBBM_AHB_CNTL2 0x00000095

#define REG_A5XX_RBBM_AHB_CMD 0x00000096

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL11 0x0000009c

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL12 0x0000009d

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL13 0x0000009e

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL14 0x0000009f

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL15 0x000000a0

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL16 0x000000a1

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL17 0x000000a2

#define REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL18 0x000000a3

#define REG_A5XX_RBBM_CLOCK_DELAY_TP0 0x000000a4

#define REG_A5XX_RBBM_CLOCK_DELAY_TP1 0x000000a5

#define REG_A5XX_RBBM_CLOCK_DELAY_TP2 0x000000a6

#define REG_A5XX_RBBM_CLOCK_DELAY_TP3 0x000000a7

#define REG_A5XX_RBBM_CLOCK_DELAY2_TP0 0x000000a8

#define REG_A5XX_RBBM_CLOCK_DELAY2_TP1 0x000000a9

#define REG_A5XX_RBBM_CLOCK_DELAY2_TP2 0x000000aa

#define REG_A5XX_RBBM_CLOCK_DELAY2_TP3 0x000000ab

#define REG_A5XX_RBBM_CLOCK_DELAY3_TP0 0x000000ac

#define REG_A5XX_RBBM_CLOCK_DELAY3_TP1 0x000000ad

#define REG_A5XX_RBBM_CLOCK_DELAY3_TP2 0x000000ae

#define REG_A5XX_RBBM_CLOCK_DELAY3_TP3 0x000000af

#define REG_A5XX_RBBM_CLOCK_HYST_TP0 0x000000b0

#define REG_A5XX_RBBM_CLOCK_HYST_TP1 0x000000b1

#define REG_A5XX_RBBM_CLOCK_HYST_TP2 0x000000b2

#define REG_A5XX_RBBM_CLOCK_HYST_TP3 0x000000b3

#define REG_A5XX_RBBM_CLOCK_HYST2_TP0 0x000000b4

#define REG_A5XX_RBBM_CLOCK_HYST2_TP1 0x000000b5

#define REG_A5XX_RBBM_CLOCK_HYST2_TP2 0x000000b6

#define REG_A5XX_RBBM_CLOCK_HYST2_TP3 0x000000b7

#define REG_A5XX_RBBM_CLOCK_HYST3_TP0 0x000000b8

#define REG_A5XX_RBBM_CLOCK_HYST3_TP1 0x000000b9

#define REG_A5XX_RBBM_CLOCK_HYST3_TP2 0x000000ba

#define REG_A5XX_RBBM_CLOCK_HYST3_TP3 0x000000bb

#define REG_A5XX_RBBM_CLOCK_CNTL_GPMU 0x000000c8

#define REG_A5XX_RBBM_CLOCK_DELAY_GPMU 0x000000c9

#define REG_A5XX_RBBM_CLOCK_HYST_GPMU 0x000000ca

#define REG_A5XX_RBBM_PERFCTR_CP_0_LO 0x000003a0

#define REG_A5XX_RBBM_PERFCTR_CP_0_HI 0x000003a1

#define REG_A5XX_RBBM_PERFCTR_CP_1_LO 0x000003a2

#define REG_A5XX_RBBM_PERFCTR_CP_1_HI 0x000003a3

#define REG_A5XX_RBBM_PERFCTR_CP_2_LO 0x000003a4

#define REG_A5XX_RBBM_PERFCTR_CP_2_HI 0x000003a5

#define REG_A5XX_RBBM_PERFCTR_CP_3_LO 0x000003a6

#define REG_A5XX_RBBM_PERFCTR_CP_3_HI 0x000003a7

#define REG_A5XX_RBBM_PERFCTR_CP_4_LO 0x000003a8

#define REG_A5XX_RBBM_PERFCTR_CP_4_HI 0x000003a9

#define REG_A5XX_RBBM_PERFCTR_CP_5_LO 0x000003aa

#define REG_A5XX_RBBM_PERFCTR_CP_5_HI 0x000003ab

#define REG_A5XX_RBBM_PERFCTR_CP_6_LO 0x000003ac

#define REG_A5XX_RBBM_PERFCTR_CP_6_HI 0x000003ad

#define REG_A5XX_RBBM_PERFCTR_CP_7_LO 0x000003ae

#define REG_A5XX_RBBM_PERFCTR_CP_7_HI 0x000003af

#define REG_A5XX_RBBM_PERFCTR_RBBM_0_LO 0x000003b0

#define REG_A5XX_RBBM_PERFCTR_RBBM_0_HI 0x000003b1

#define REG_A5XX_RBBM_PERFCTR_RBBM_1_LO 0x000003b2

#define REG_A5XX_RBBM_PERFCTR_RBBM_1_HI 0x000003b3

#define REG_A5XX_RBBM_PERFCTR_RBBM_2_LO 0x000003b4

#define REG_A5XX_RBBM_PERFCTR_RBBM_2_HI 0x000003b5

#define REG_A5XX_RBBM_PERFCTR_RBBM_3_LO 0x000003b6

#define REG_A5XX_RBBM_PERFCTR_RBBM_3_HI 0x000003b7

#define REG_A5XX_RBBM_PERFCTR_PC_0_LO 0x000003b8

#define REG_A5XX_RBBM_PERFCTR_PC_0_HI 0x000003b9

#define REG_A5XX_RBBM_PERFCTR_PC_1_LO 0x000003ba

#define REG_A5XX_RBBM_PERFCTR_PC_1_HI 0x000003bb

#define REG_A5XX_RBBM_PERFCTR_PC_2_LO 0x000003bc

#define REG_A5XX_RBBM_PERFCTR_PC_2_HI 0x000003bd

#define REG_A5XX_RBBM_PERFCTR_PC_3_LO 0x000003be

#define REG_A5XX_RBBM_PERFCTR_PC_3_HI 0x000003bf

#define REG_A5XX_RBBM_PERFCTR_PC_4_LO 0x000003c0

#define REG_A5XX_RBBM_PERFCTR_PC_4_HI 0x000003c1

#define REG_A5XX_RBBM_PERFCTR_PC_5_LO 0x000003c2

#define REG_A5XX_RBBM_PERFCTR_PC_5_HI 0x000003c3

#define REG_A5XX_RBBM_PERFCTR_PC_6_LO 0x000003c4

#define REG_A5XX_RBBM_PERFCTR_PC_6_HI 0x000003c5

#define REG_A5XX_RBBM_PERFCTR_PC_7_LO 0x000003c6

#define REG_A5XX_RBBM_PERFCTR_PC_7_HI 0x000003c7

#define REG_A5XX_RBBM_PERFCTR_VFD_0_LO 0x000003c8

#define REG_A5XX_RBBM_PERFCTR_VFD_0_HI 0x000003c9

#define REG_A5XX_RBBM_PERFCTR_VFD_1_LO 0x000003ca

#define REG_A5XX_RBBM_PERFCTR_VFD_1_HI 0x000003cb

#define REG_A5XX_RBBM_PERFCTR_VFD_2_LO 0x000003cc

#define REG_A5XX_RBBM_PERFCTR_VFD_2_HI 0x000003cd

#define REG_A5XX_RBBM_PERFCTR_VFD_3_LO 0x000003ce

#define REG_A5XX_RBBM_PERFCTR_VFD_3_HI 0x000003cf

#define REG_A5XX_RBBM_PERFCTR_VFD_4_LO 0x000003d0

#define REG_A5XX_RBBM_PERFCTR_VFD_4_HI 0x000003d1

#define REG_A5XX_RBBM_PERFCTR_VFD_5_LO 0x000003d2

#define REG_A5XX_RBBM_PERFCTR_VFD_5_HI 0x000003d3

#define REG_A5XX_RBBM_PERFCTR_VFD_6_LO 0x000003d4

#define REG_A5XX_RBBM_PERFCTR_VFD_6_HI 0x000003d5

#define REG_A5XX_RBBM_PERFCTR_VFD_7_LO 0x000003d6

#define REG_A5XX_RBBM_PERFCTR_VFD_7_HI 0x000003d7

#define REG_A5XX_RBBM_PERFCTR_HLSQ_0_LO 0x000003d8

#define REG_A5XX_RBBM_PERFCTR_HLSQ_0_HI 0x000003d9

#define REG_A5XX_RBBM_PERFCTR_HLSQ_1_LO 0x000003da

#define REG_A5XX_RBBM_PERFCTR_HLSQ_1_HI 0x000003db

#define REG_A5XX_RBBM_PERFCTR_HLSQ_2_LO 0x000003dc

#define REG_A5XX_RBBM_PERFCTR_HLSQ_2_HI 0x000003dd

#define REG_A5XX_RBBM_PERFCTR_HLSQ_3_LO 0x000003de

#define REG_A5XX_RBBM_PERFCTR_HLSQ_3_HI 0x000003df

#define REG_A5XX_RBBM_PERFCTR_HLSQ_4_LO 0x000003e0

#define REG_A5XX_RBBM_PERFCTR_HLSQ_4_HI 0x000003e1

#define REG_A5XX_RBBM_PERFCTR_HLSQ_5_LO 0x000003e2

#define REG_A5XX_RBBM_PERFCTR_HLSQ_5_HI 0x000003e3

#define REG_A5XX_RBBM_PERFCTR_HLSQ_6_LO 0x000003e4

#define REG_A5XX_RBBM_PERFCTR_HLSQ_6_HI 0x000003e5

#define REG_A5XX_RBBM_PERFCTR_HLSQ_7_LO 0x000003e6

#define REG_A5XX_RBBM_PERFCTR_HLSQ_7_HI 0x000003e7

#define REG_A5XX_RBBM_PERFCTR_VPC_0_LO 0x000003e8

#define REG_A5XX_RBBM_PERFCTR_VPC_0_HI 0x000003e9

#define REG_A5XX_RBBM_PERFCTR_VPC_1_LO 0x000003ea

#define REG_A5XX_RBBM_PERFCTR_VPC_1_HI 0x000003eb

#define REG_A5XX_RBBM_PERFCTR_VPC_2_LO 0x000003ec

#define REG_A5XX_RBBM_PERFCTR_VPC_2_HI 0x000003ed

#define REG_A5XX_RBBM_PERFCTR_VPC_3_LO 0x000003ee

#define REG_A5XX_RBBM_PERFCTR_VPC_3_HI 0x000003ef

#define REG_A5XX_RBBM_PERFCTR_CCU_0_LO 0x000003f0

#define REG_A5XX_RBBM_PERFCTR_CCU_0_HI 0x000003f1

#define REG_A5XX_RBBM_PERFCTR_CCU_1_LO 0x000003f2

#define REG_A5XX_RBBM_PERFCTR_CCU_1_HI 0x000003f3

#define REG_A5XX_RBBM_PERFCTR_CCU_2_LO 0x000003f4

#define REG_A5XX_RBBM_PERFCTR_CCU_2_HI 0x000003f5

#define REG_A5XX_RBBM_PERFCTR_CCU_3_LO 0x000003f6

#define REG_A5XX_RBBM_PERFCTR_CCU_3_HI 0x000003f7

#define REG_A5XX_RBBM_PERFCTR_TSE_0_LO 0x000003f8

#define REG_A5XX_RBBM_PERFCTR_TSE_0_HI 0x000003f9

#define REG_A5XX_RBBM_PERFCTR_TSE_1_LO 0x000003fa

#define REG_A5XX_RBBM_PERFCTR_TSE_1_HI 0x000003fb

#define REG_A5XX_RBBM_PERFCTR_TSE_2_LO 0x000003fc

#define REG_A5XX_RBBM_PERFCTR_TSE_2_HI 0x000003fd

#define REG_A5XX_RBBM_PERFCTR_TSE_3_LO 0x000003fe

#define REG_A5XX_RBBM_PERFCTR_TSE_3_HI 0x000003ff

#define REG_A5XX_RBBM_PERFCTR_RAS_0_LO 0x00000400

#define REG_A5XX_RBBM_PERFCTR_RAS_0_HI 0x00000401

#define REG_A5XX_RBBM_PERFCTR_RAS_1_LO 0x00000402

#define REG_A5XX_RBBM_PERFCTR_RAS_1_HI 0x00000403

#define REG_A5XX_RBBM_PERFCTR_RAS_2_LO 0x00000404

#define REG_A5XX_RBBM_PERFCTR_RAS_2_HI 0x00000405

#define REG_A5XX_RBBM_PERFCTR_RAS_3_LO 0x00000406

#define REG_A5XX_RBBM_PERFCTR_RAS_3_HI 0x00000407

#define REG_A5XX_RBBM_PERFCTR_UCHE_0_LO 0x00000408

#define REG_A5XX_RBBM_PERFCTR_UCHE_0_HI 0x00000409

#define REG_A5XX_RBBM_PERFCTR_UCHE_1_LO 0x0000040a

#define REG_A5XX_RBBM_PERFCTR_UCHE_1_HI 0x0000040b

#define REG_A5XX_RBBM_PERFCTR_UCHE_2_LO 0x0000040c

#define REG_A5XX_RBBM_PERFCTR_UCHE_2_HI 0x0000040d

#define REG_A5XX_RBBM_PERFCTR_UCHE_3_LO 0x0000040e

#define REG_A5XX_RBBM_PERFCTR_UCHE_3_HI 0x0000040f

#define REG_A5XX_RBBM_PERFCTR_UCHE_4_LO 0x00000410

#define REG_A5XX_RBBM_PERFCTR_UCHE_4_HI 0x00000411

#define REG_A5XX_RBBM_PERFCTR_UCHE_5_LO 0x00000412

#define REG_A5XX_RBBM_PERFCTR_UCHE_5_HI 0x00000413

#define REG_A5XX_RBBM_PERFCTR_UCHE_6_LO 0x00000414

#define REG_A5XX_RBBM_PERFCTR_UCHE_6_HI 0x00000415

#define REG_A5XX_RBBM_PERFCTR_UCHE_7_LO 0x00000416

#define REG_A5XX_RBBM_PERFCTR_UCHE_7_HI 0x00000417

#define REG_A5XX_RBBM_PERFCTR_TP_0_LO 0x00000418

#define REG_A5XX_RBBM_PERFCTR_TP_0_HI 0x00000419

#define REG_A5XX_RBBM_PERFCTR_TP_1_LO 0x0000041a

#define REG_A5XX_RBBM_PERFCTR_TP_1_HI 0x0000041b

#define REG_A5XX_RBBM_PERFCTR_TP_2_LO 0x0000041c

#define REG_A5XX_RBBM_PERFCTR_TP_2_HI 0x0000041d

#define REG_A5XX_RBBM_PERFCTR_TP_3_LO 0x0000041e

#define REG_A5XX_RBBM_PERFCTR_TP_3_HI 0x0000041f

#define REG_A5XX_RBBM_PERFCTR_TP_4_LO 0x00000420

#define REG_A5XX_RBBM_PERFCTR_TP_4_HI 0x00000421

#define REG_A5XX_RBBM_PERFCTR_TP_5_LO 0x00000422

#define REG_A5XX_RBBM_PERFCTR_TP_5_HI 0x00000423

#define REG_A5XX_RBBM_PERFCTR_TP_6_LO 0x00000424

#define REG_A5XX_RBBM_PERFCTR_TP_6_HI 0x00000425

#define REG_A5XX_RBBM_PERFCTR_TP_7_LO 0x00000426

#define REG_A5XX_RBBM_PERFCTR_TP_7_HI 0x00000427

#define REG_A5XX_RBBM_PERFCTR_SP_0_LO 0x00000428

#define REG_A5XX_RBBM_PERFCTR_SP_0_HI 0x00000429

#define REG_A5XX_RBBM_PERFCTR_SP_1_LO 0x0000042a

#define REG_A5XX_RBBM_PERFCTR_SP_1_HI 0x0000042b

#define REG_A5XX_RBBM_PERFCTR_SP_2_LO 0x0000042c

#define REG_A5XX_RBBM_PERFCTR_SP_2_HI 0x0000042d

#define REG_A5XX_RBBM_PERFCTR_SP_3_LO 0x0000042e

#define REG_A5XX_RBBM_PERFCTR_SP_3_HI 0x0000042f

#define REG_A5XX_RBBM_PERFCTR_SP_4_LO 0x00000430

#define REG_A5XX_RBBM_PERFCTR_SP_4_HI 0x00000431

#define REG_A5XX_RBBM_PERFCTR_SP_5_LO 0x00000432

#define REG_A5XX_RBBM_PERFCTR_SP_5_HI 0x00000433

#define REG_A5XX_RBBM_PERFCTR_SP_6_LO 0x00000434

#define REG_A5XX_RBBM_PERFCTR_SP_6_HI 0x00000435

#define REG_A5XX_RBBM_PERFCTR_SP_7_LO 0x00000436

#define REG_A5XX_RBBM_PERFCTR_SP_7_HI 0x00000437

#define REG_A5XX_RBBM_PERFCTR_SP_8_LO 0x00000438

#define REG_A5XX_RBBM_PERFCTR_SP_8_HI 0x00000439

#define REG_A5XX_RBBM_PERFCTR_SP_9_LO 0x0000043a

#define REG_A5XX_RBBM_PERFCTR_SP_9_HI 0x0000043b

#define REG_A5XX_RBBM_PERFCTR_SP_10_LO 0x0000043c

#define REG_A5XX_RBBM_PERFCTR_SP_10_HI 0x0000043d

#define REG_A5XX_RBBM_PERFCTR_SP_11_LO 0x0000043e

#define REG_A5XX_RBBM_PERFCTR_SP_11_HI 0x0000043f

#define REG_A5XX_RBBM_PERFCTR_RB_0_LO 0x00000440

#define REG_A5XX_RBBM_PERFCTR_RB_0_HI 0x00000441

#define REG_A5XX_RBBM_PERFCTR_RB_1_LO 0x00000442

#define REG_A5XX_RBBM_PERFCTR_RB_1_HI 0x00000443

#define REG_A5XX_RBBM_PERFCTR_RB_2_LO 0x00000444

#define REG_A5XX_RBBM_PERFCTR_RB_2_HI 0x00000445

#define REG_A5XX_RBBM_PERFCTR_RB_3_LO 0x00000446

#define REG_A5XX_RBBM_PERFCTR_RB_3_HI 0x00000447

#define REG_A5XX_RBBM_PERFCTR_RB_4_LO 0x00000448

#define REG_A5XX_RBBM_PERFCTR_RB_4_HI 0x00000449

#define REG_A5XX_RBBM_PERFCTR_RB_5_LO 0x0000044a

#define REG_A5XX_RBBM_PERFCTR_RB_5_HI 0x0000044b

#define REG_A5XX_RBBM_PERFCTR_RB_6_LO 0x0000044c

#define REG_A5XX_RBBM_PERFCTR_RB_6_HI 0x0000044d

#define REG_A5XX_RBBM_PERFCTR_RB_7_LO 0x0000044e

#define REG_A5XX_RBBM_PERFCTR_RB_7_HI 0x0000044f

#define REG_A5XX_RBBM_PERFCTR_VSC_0_LO 0x00000450

#define REG_A5XX_RBBM_PERFCTR_VSC_0_HI 0x00000451

#define REG_A5XX_RBBM_PERFCTR_VSC_1_LO 0x00000452

#define REG_A5XX_RBBM_PERFCTR_VSC_1_HI 0x00000453

#define REG_A5XX_RBBM_PERFCTR_LRZ_0_LO 0x00000454

#define REG_A5XX_RBBM_PERFCTR_LRZ_0_HI 0x00000455

#define REG_A5XX_RBBM_PERFCTR_LRZ_1_LO 0x00000456

#define REG_A5XX_RBBM_PERFCTR_LRZ_1_HI 0x00000457

#define REG_A5XX_RBBM_PERFCTR_LRZ_2_LO 0x00000458

#define REG_A5XX_RBBM_PERFCTR_LRZ_2_HI 0x00000459

#define REG_A5XX_RBBM_PERFCTR_LRZ_3_LO 0x0000045a

#define REG_A5XX_RBBM_PERFCTR_LRZ_3_HI 0x0000045b

#define REG_A5XX_RBBM_PERFCTR_CMP_0_LO 0x0000045c

#define REG_A5XX_RBBM_PERFCTR_CMP_0_HI 0x0000045d

#define REG_A5XX_RBBM_PERFCTR_CMP_1_LO 0x0000045e

#define REG_A5XX_RBBM_PERFCTR_CMP_1_HI 0x0000045f

#define REG_A5XX_RBBM_PERFCTR_CMP_2_LO 0x00000460

#define REG_A5XX_RBBM_PERFCTR_CMP_2_HI 0x00000461

#define REG_A5XX_RBBM_PERFCTR_CMP_3_LO 0x00000462

#define REG_A5XX_RBBM_PERFCTR_CMP_3_HI 0x00000463

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_0 0x0000046b

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_1 0x0000046c

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_2 0x0000046d

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_3 0x0000046e

#define REG_A5XX_RBBM_ALWAYSON_COUNTER_LO 0x000004d2

#define REG_A5XX_RBBM_ALWAYSON_COUNTER_HI 0x000004d3

#define REG_A5XX_RBBM_STATUS 0x000004f5
#define A5XX_RBBM_STATUS_GPU_BUSY_IGN_AHB 0x80000000
#define A5XX_RBBM_STATUS_GPU_BUSY_IGN_AHB_CP 0x40000000
#define A5XX_RBBM_STATUS_HLSQ_BUSY 0x20000000
#define A5XX_RBBM_STATUS_VSC_BUSY 0x10000000
#define A5XX_RBBM_STATUS_TPL1_BUSY 0x08000000
#define A5XX_RBBM_STATUS_SP_BUSY 0x04000000
#define A5XX_RBBM_STATUS_UCHE_BUSY 0x02000000
#define A5XX_RBBM_STATUS_VPC_BUSY 0x01000000
#define A5XX_RBBM_STATUS_VFDP_BUSY 0x00800000
#define A5XX_RBBM_STATUS_VFD_BUSY 0x00400000
#define A5XX_RBBM_STATUS_TESS_BUSY 0x00200000
#define A5XX_RBBM_STATUS_PC_VSD_BUSY 0x00100000
#define A5XX_RBBM_STATUS_PC_DCALL_BUSY 0x00080000
#define A5XX_RBBM_STATUS_GPMU_SLAVE_BUSY 0x00040000
#define A5XX_RBBM_STATUS_DCOM_BUSY 0x00020000
#define A5XX_RBBM_STATUS_COM_BUSY 0x00010000
#define A5XX_RBBM_STATUS_LRZ_BUZY 0x00008000
#define A5XX_RBBM_STATUS_A2D_DSP_BUSY 0x00004000
#define A5XX_RBBM_STATUS_CCUFCHE_BUSY 0x00002000
#define A5XX_RBBM_STATUS_RB_BUSY 0x00001000
#define A5XX_RBBM_STATUS_RAS_BUSY 0x00000800
#define A5XX_RBBM_STATUS_TSE_BUSY 0x00000400
#define A5XX_RBBM_STATUS_VBIF_BUSY 0x00000200
#define A5XX_RBBM_STATUS_GPU_BUSY_IGN_AHB_HYST 0x00000100
#define A5XX_RBBM_STATUS_CP_BUSY_IGN_HYST 0x00000080
#define A5XX_RBBM_STATUS_CP_BUSY 0x00000040
#define A5XX_RBBM_STATUS_GPMU_MASTER_BUSY 0x00000020
#define A5XX_RBBM_STATUS_CP_CRASH_BUSY 0x00000010
#define A5XX_RBBM_STATUS_CP_ETS_BUSY 0x00000008
#define A5XX_RBBM_STATUS_CP_PFP_BUSY 0x00000004
#define A5XX_RBBM_STATUS_CP_ME_BUSY 0x00000002
#define A5XX_RBBM_STATUS_HI_BUSY 0x00000001

#define REG_A5XX_RBBM_STATUS3 0x00000530

#define REG_A5XX_RBBM_INT_0_STATUS 0x000004e1

#define REG_A5XX_RBBM_AHB_ME_SPLIT_STATUS 0x000004f0

#define REG_A5XX_RBBM_AHB_PFP_SPLIT_STATUS 0x000004f1

#define REG_A5XX_RBBM_AHB_ETS_SPLIT_STATUS 0x000004f3

#define REG_A5XX_RBBM_AHB_ERROR_STATUS 0x000004f4

#define REG_A5XX_RBBM_PERFCTR_CNTL 0x00000464

#define REG_A5XX_RBBM_PERFCTR_LOAD_CMD0 0x00000465

#define REG_A5XX_RBBM_PERFCTR_LOAD_CMD1 0x00000466

#define REG_A5XX_RBBM_PERFCTR_LOAD_CMD2 0x00000467

#define REG_A5XX_RBBM_PERFCTR_LOAD_CMD3 0x00000468

#define REG_A5XX_RBBM_PERFCTR_LOAD_VALUE_LO 0x00000469

#define REG_A5XX_RBBM_PERFCTR_LOAD_VALUE_HI 0x0000046a

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_0 0x0000046b

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_1 0x0000046c

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_2 0x0000046d

#define REG_A5XX_RBBM_PERFCTR_RBBM_SEL_3 0x0000046e

#define REG_A5XX_RBBM_PERFCTR_GPU_BUSY_MASKED 0x0000046f

#define REG_A5XX_RBBM_AHB_ERROR 0x000004ed

#define REG_A5XX_RBBM_CFG_DBGBUS_EVENT_LOGIC 0x00000504

#define REG_A5XX_RBBM_CFG_DBGBUS_OVER 0x00000505

#define REG_A5XX_RBBM_CFG_DBGBUS_COUNT0 0x00000506

#define REG_A5XX_RBBM_CFG_DBGBUS_COUNT1 0x00000507

#define REG_A5XX_RBBM_CFG_DBGBUS_COUNT2 0x00000508

#define REG_A5XX_RBBM_CFG_DBGBUS_COUNT3 0x00000509

#define REG_A5XX_RBBM_CFG_DBGBUS_COUNT4 0x0000050a

#define REG_A5XX_RBBM_CFG_DBGBUS_COUNT5 0x0000050b

#define REG_A5XX_RBBM_CFG_DBGBUS_TRACE_ADDR 0x0000050c

#define REG_A5XX_RBBM_CFG_DBGBUS_TRACE_BUF0 0x0000050d

#define REG_A5XX_RBBM_CFG_DBGBUS_TRACE_BUF1 0x0000050e

#define REG_A5XX_RBBM_CFG_DBGBUS_TRACE_BUF2 0x0000050f

#define REG_A5XX_RBBM_CFG_DBGBUS_TRACE_BUF3 0x00000510

#define REG_A5XX_RBBM_CFG_DBGBUS_TRACE_BUF4 0x00000511

#define REG_A5XX_RBBM_CFG_DBGBUS_MISR0 0x00000512

#define REG_A5XX_RBBM_CFG_DBGBUS_MISR1 0x00000513

#define REG_A5XX_RBBM_ISDB_CNT 0x00000533

#define REG_A5XX_RBBM_SECVID_TRUST_CONFIG 0x0000f000

#define REG_A5XX_RBBM_SECVID_TRUST_CNTL 0x0000f400

#define REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO 0x0000f800

#define REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_HI 0x0000f801

#define REG_A5XX_RBBM_SECVID_TSB_TRUSTED_SIZE 0x0000f802

#define REG_A5XX_RBBM_SECVID_TSB_CNTL 0x0000f803

#define REG_A5XX_RBBM_SECVID_TSB_COMP_STATUS_LO 0x0000f804

#define REG_A5XX_RBBM_SECVID_TSB_COMP_STATUS_HI 0x0000f805

#define REG_A5XX_RBBM_SECVID_TSB_UCHE_STATUS_LO 0x0000f806

#define REG_A5XX_RBBM_SECVID_TSB_UCHE_STATUS_HI 0x0000f807

#define REG_A5XX_RBBM_SECVID_TSB_ADDR_MODE_CNTL 0x0000f810

#define REG_A5XX_VSC_PIPE_DATA_LENGTH_0 0x00000c00

#define REG_A5XX_VSC_PERFCTR_VSC_SEL_0 0x00000c60

#define REG_A5XX_VSC_PERFCTR_VSC_SEL_1 0x00000c61

#define REG_A5XX_VSC_BIN_SIZE 0x00000cdd
#define A5XX_VSC_BIN_SIZE_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_VSC_BIN_SIZE_X__MASK 0x00007fff
#define A5XX_VSC_BIN_SIZE_X__SHIFT 0
static inline uint32_t A5XX_VSC_BIN_SIZE_X(uint32_t val)
{
	return ((val) << A5XX_VSC_BIN_SIZE_X__SHIFT) & A5XX_VSC_BIN_SIZE_X__MASK;
}
#define A5XX_VSC_BIN_SIZE_Y__MASK 0x7fff0000
#define A5XX_VSC_BIN_SIZE_Y__SHIFT 16
static inline uint32_t A5XX_VSC_BIN_SIZE_Y(uint32_t val)
{
	return ((val) << A5XX_VSC_BIN_SIZE_Y__SHIFT) & A5XX_VSC_BIN_SIZE_Y__MASK;
}

#define REG_A5XX_GRAS_ADDR_MODE_CNTL 0x00000c81

#define REG_A5XX_GRAS_PERFCTR_TSE_SEL_0 0x00000c90

#define REG_A5XX_GRAS_PERFCTR_TSE_SEL_1 0x00000c91

#define REG_A5XX_GRAS_PERFCTR_TSE_SEL_2 0x00000c92

#define REG_A5XX_GRAS_PERFCTR_TSE_SEL_3 0x00000c93

#define REG_A5XX_GRAS_PERFCTR_RAS_SEL_0 0x00000c94

#define REG_A5XX_GRAS_PERFCTR_RAS_SEL_1 0x00000c95

#define REG_A5XX_GRAS_PERFCTR_RAS_SEL_2 0x00000c96

#define REG_A5XX_GRAS_PERFCTR_RAS_SEL_3 0x00000c97

#define REG_A5XX_GRAS_PERFCTR_LRZ_SEL_0 0x00000c98

#define REG_A5XX_GRAS_PERFCTR_LRZ_SEL_1 0x00000c99

#define REG_A5XX_GRAS_PERFCTR_LRZ_SEL_2 0x00000c9a

#define REG_A5XX_GRAS_PERFCTR_LRZ_SEL_3 0x00000c9b

#define REG_A5XX_RB_DBG_ECO_CNTL 0x00000cc4

#define REG_A5XX_RB_ADDR_MODE_CNTL 0x00000cc5

#define REG_A5XX_RB_MODE_CNTL 0x00000cc6

#define REG_A5XX_RB_CCU_CNTL 0x00000cc7

#define REG_A5XX_RB_PERFCTR_RB_SEL_0 0x00000cd0

#define REG_A5XX_RB_PERFCTR_RB_SEL_1 0x00000cd1

#define REG_A5XX_RB_PERFCTR_RB_SEL_2 0x00000cd2

#define REG_A5XX_RB_PERFCTR_RB_SEL_3 0x00000cd3

#define REG_A5XX_RB_PERFCTR_RB_SEL_4 0x00000cd4

#define REG_A5XX_RB_PERFCTR_RB_SEL_5 0x00000cd5

#define REG_A5XX_RB_PERFCTR_RB_SEL_6 0x00000cd6

#define REG_A5XX_RB_PERFCTR_RB_SEL_7 0x00000cd7

#define REG_A5XX_RB_PERFCTR_CCU_SEL_0 0x00000cd8

#define REG_A5XX_RB_PERFCTR_CCU_SEL_1 0x00000cd9

#define REG_A5XX_RB_PERFCTR_CCU_SEL_2 0x00000cda

#define REG_A5XX_RB_PERFCTR_CCU_SEL_3 0x00000cdb

#define REG_A5XX_RB_POWERCTR_RB_SEL_0 0x00000ce0

#define REG_A5XX_RB_POWERCTR_RB_SEL_1 0x00000ce1

#define REG_A5XX_RB_POWERCTR_RB_SEL_2 0x00000ce2

#define REG_A5XX_RB_POWERCTR_RB_SEL_3 0x00000ce3

#define REG_A5XX_RB_POWERCTR_CCU_SEL_0 0x00000ce4

#define REG_A5XX_RB_POWERCTR_CCU_SEL_1 0x00000ce5

#define REG_A5XX_RB_PERFCTR_CMP_SEL_0 0x00000cec

#define REG_A5XX_RB_PERFCTR_CMP_SEL_1 0x00000ced

#define REG_A5XX_RB_PERFCTR_CMP_SEL_2 0x00000cee

#define REG_A5XX_RB_PERFCTR_CMP_SEL_3 0x00000cef

#define REG_A5XX_PC_DBG_ECO_CNTL 0x00000d00
#define A5XX_PC_DBG_ECO_CNTL_TWOPASSUSEWFI 0x00000100

#define REG_A5XX_PC_ADDR_MODE_CNTL 0x00000d01

#define REG_A5XX_PC_MODE_CNTL 0x00000d02

#define REG_A5XX_UNKNOWN_0D08 0x00000d08

#define REG_A5XX_UNKNOWN_0D09 0x00000d09

#define REG_A5XX_PC_PERFCTR_PC_SEL_0 0x00000d10

#define REG_A5XX_PC_PERFCTR_PC_SEL_1 0x00000d11

#define REG_A5XX_PC_PERFCTR_PC_SEL_2 0x00000d12

#define REG_A5XX_PC_PERFCTR_PC_SEL_3 0x00000d13

#define REG_A5XX_PC_PERFCTR_PC_SEL_4 0x00000d14

#define REG_A5XX_PC_PERFCTR_PC_SEL_5 0x00000d15

#define REG_A5XX_PC_PERFCTR_PC_SEL_6 0x00000d16

#define REG_A5XX_PC_PERFCTR_PC_SEL_7 0x00000d17

#define REG_A5XX_HLSQ_TIMEOUT_THRESHOLD_0 0x00000e00

#define REG_A5XX_HLSQ_TIMEOUT_THRESHOLD_1 0x00000e01

#define REG_A5XX_HLSQ_ADDR_MODE_CNTL 0x00000e05

#define REG_A5XX_HLSQ_MODE_CNTL 0x00000e06

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_0 0x00000e10

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_1 0x00000e11

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_2 0x00000e12

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_3 0x00000e13

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_4 0x00000e14

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_5 0x00000e15

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_6 0x00000e16

#define REG_A5XX_HLSQ_PERFCTR_HLSQ_SEL_7 0x00000e17

#define REG_A5XX_HLSQ_SPTP_RDSEL 0x00000f08

#define REG_A5XX_HLSQ_DBG_READ_SEL 0x0000bc00

#define REG_A5XX_HLSQ_DBG_AHB_READ_APERTURE 0x0000a000

#define REG_A5XX_VFD_ADDR_MODE_CNTL 0x00000e41

#define REG_A5XX_VFD_MODE_CNTL 0x00000e42

#define REG_A5XX_VFD_PERFCTR_VFD_SEL_0 0x00000e50

#define REG_A5XX_VFD_PERFCTR_VFD_SEL_1 0x00000e51
1455
1456#define REG_A5XX_VFD_PERFCTR_VFD_SEL_2 0x00000e52
1457
1458#define REG_A5XX_VFD_PERFCTR_VFD_SEL_3 0x00000e53
1459
1460#define REG_A5XX_VFD_PERFCTR_VFD_SEL_4 0x00000e54
1461
1462#define REG_A5XX_VFD_PERFCTR_VFD_SEL_5 0x00000e55
1463
1464#define REG_A5XX_VFD_PERFCTR_VFD_SEL_6 0x00000e56
1465
1466#define REG_A5XX_VFD_PERFCTR_VFD_SEL_7 0x00000e57
1467
1468#define REG_A5XX_VPC_DBG_ECO_CNTL 0x00000e60
1469
1470#define REG_A5XX_VPC_ADDR_MODE_CNTL 0x00000e61
1471
1472#define REG_A5XX_VPC_MODE_CNTL 0x00000e62
1473
1474#define REG_A5XX_VPC_PERFCTR_VPC_SEL_0 0x00000e64
1475
1476#define REG_A5XX_VPC_PERFCTR_VPC_SEL_1 0x00000e65
1477
1478#define REG_A5XX_VPC_PERFCTR_VPC_SEL_2 0x00000e66
1479
1480#define REG_A5XX_VPC_PERFCTR_VPC_SEL_3 0x00000e67
1481
1482#define REG_A5XX_UCHE_ADDR_MODE_CNTL 0x00000e80
1483
1484#define REG_A5XX_UCHE_SVM_CNTL 0x00000e82
1485
1486#define REG_A5XX_UCHE_WRITE_THRU_BASE_LO 0x00000e87
1487
1488#define REG_A5XX_UCHE_WRITE_THRU_BASE_HI 0x00000e88
1489
1490#define REG_A5XX_UCHE_TRAP_BASE_LO 0x00000e89
1491
1492#define REG_A5XX_UCHE_TRAP_BASE_HI 0x00000e8a
1493
1494#define REG_A5XX_UCHE_GMEM_RANGE_MIN_LO 0x00000e8b
1495
1496#define REG_A5XX_UCHE_GMEM_RANGE_MIN_HI 0x00000e8c
1497
1498#define REG_A5XX_UCHE_GMEM_RANGE_MAX_LO 0x00000e8d
1499
1500#define REG_A5XX_UCHE_GMEM_RANGE_MAX_HI 0x00000e8e
1501
1502#define REG_A5XX_UCHE_DBG_ECO_CNTL_2 0x00000e8f
1503
1504#define REG_A5XX_UCHE_DBG_ECO_CNTL 0x00000e90
1505
1506#define REG_A5XX_UCHE_CACHE_INVALIDATE_MIN_LO 0x00000e91
1507
1508#define REG_A5XX_UCHE_CACHE_INVALIDATE_MIN_HI 0x00000e92
1509
1510#define REG_A5XX_UCHE_CACHE_INVALIDATE_MAX_LO 0x00000e93
1511
1512#define REG_A5XX_UCHE_CACHE_INVALIDATE_MAX_HI 0x00000e94
1513
1514#define REG_A5XX_UCHE_CACHE_INVALIDATE 0x00000e95
1515
1516#define REG_A5XX_UCHE_CACHE_WAYS 0x00000e96
1517
1518#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_0 0x00000ea0
1519
1520#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_1 0x00000ea1
1521
1522#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_2 0x00000ea2
1523
1524#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_3 0x00000ea3
1525
1526#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_4 0x00000ea4
1527
1528#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_5 0x00000ea5
1529
1530#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_6 0x00000ea6
1531
1532#define REG_A5XX_UCHE_PERFCTR_UCHE_SEL_7 0x00000ea7
1533
1534#define REG_A5XX_UCHE_POWERCTR_UCHE_SEL_0 0x00000ea8
1535
1536#define REG_A5XX_UCHE_POWERCTR_UCHE_SEL_1 0x00000ea9
1537
1538#define REG_A5XX_UCHE_POWERCTR_UCHE_SEL_2 0x00000eaa
1539
1540#define REG_A5XX_UCHE_POWERCTR_UCHE_SEL_3 0x00000eab
1541
1542#define REG_A5XX_UCHE_TRAP_LOG_LO 0x00000eb1
1543
1544#define REG_A5XX_UCHE_TRAP_LOG_HI 0x00000eb2
1545
1546#define REG_A5XX_SP_DBG_ECO_CNTL 0x00000ec0
1547
1548#define REG_A5XX_SP_ADDR_MODE_CNTL 0x00000ec1
1549
1550#define REG_A5XX_SP_MODE_CNTL 0x00000ec2
1551
1552#define REG_A5XX_SP_PERFCTR_SP_SEL_0 0x00000ed0
1553
1554#define REG_A5XX_SP_PERFCTR_SP_SEL_1 0x00000ed1
1555
1556#define REG_A5XX_SP_PERFCTR_SP_SEL_2 0x00000ed2
1557
1558#define REG_A5XX_SP_PERFCTR_SP_SEL_3 0x00000ed3
1559
1560#define REG_A5XX_SP_PERFCTR_SP_SEL_4 0x00000ed4
1561
1562#define REG_A5XX_SP_PERFCTR_SP_SEL_5 0x00000ed5
1563
1564#define REG_A5XX_SP_PERFCTR_SP_SEL_6 0x00000ed6
1565
1566#define REG_A5XX_SP_PERFCTR_SP_SEL_7 0x00000ed7
1567
1568#define REG_A5XX_SP_PERFCTR_SP_SEL_8 0x00000ed8
1569
1570#define REG_A5XX_SP_PERFCTR_SP_SEL_9 0x00000ed9
1571
1572#define REG_A5XX_SP_PERFCTR_SP_SEL_10 0x00000eda
1573
1574#define REG_A5XX_SP_PERFCTR_SP_SEL_11 0x00000edb
1575
1576#define REG_A5XX_SP_POWERCTR_SP_SEL_0 0x00000edc
1577
1578#define REG_A5XX_SP_POWERCTR_SP_SEL_1 0x00000edd
1579
1580#define REG_A5XX_SP_POWERCTR_SP_SEL_2 0x00000ede
1581
1582#define REG_A5XX_SP_POWERCTR_SP_SEL_3 0x00000edf
1583
1584#define REG_A5XX_TPL1_ADDR_MODE_CNTL 0x00000f01
1585
1586#define REG_A5XX_TPL1_MODE_CNTL 0x00000f02
1587
1588#define REG_A5XX_TPL1_PERFCTR_TP_SEL_0 0x00000f10
1589
1590#define REG_A5XX_TPL1_PERFCTR_TP_SEL_1 0x00000f11
1591
1592#define REG_A5XX_TPL1_PERFCTR_TP_SEL_2 0x00000f12
1593
1594#define REG_A5XX_TPL1_PERFCTR_TP_SEL_3 0x00000f13
1595
1596#define REG_A5XX_TPL1_PERFCTR_TP_SEL_4 0x00000f14
1597
1598#define REG_A5XX_TPL1_PERFCTR_TP_SEL_5 0x00000f15
1599
1600#define REG_A5XX_TPL1_PERFCTR_TP_SEL_6 0x00000f16
1601
1602#define REG_A5XX_TPL1_PERFCTR_TP_SEL_7 0x00000f17
1603
1604#define REG_A5XX_TPL1_POWERCTR_TP_SEL_0 0x00000f18
1605
1606#define REG_A5XX_TPL1_POWERCTR_TP_SEL_1 0x00000f19
1607
1608#define REG_A5XX_TPL1_POWERCTR_TP_SEL_2 0x00000f1a
1609
1610#define REG_A5XX_TPL1_POWERCTR_TP_SEL_3 0x00000f1b
1611
1612#define REG_A5XX_VBIF_VERSION 0x00003000
1613
1614#define REG_A5XX_VBIF_CLKON 0x00003001
1615
1616#define REG_A5XX_VBIF_ABIT_SORT 0x00003028
1617
1618#define REG_A5XX_VBIF_ABIT_SORT_CONF 0x00003029
1619
1620#define REG_A5XX_VBIF_ROUND_ROBIN_QOS_ARB 0x00003049
1621
1622#define REG_A5XX_VBIF_GATE_OFF_WRREQ_EN 0x0000302a
1623
1624#define REG_A5XX_VBIF_IN_RD_LIM_CONF0 0x0000302c
1625
1626#define REG_A5XX_VBIF_IN_RD_LIM_CONF1 0x0000302d
1627
1628#define REG_A5XX_VBIF_XIN_HALT_CTRL0 0x00003080
1629
1630#define REG_A5XX_VBIF_XIN_HALT_CTRL1 0x00003081
1631
1632#define REG_A5XX_VBIF_TEST_BUS_OUT_CTRL 0x00003084
1633
1634#define REG_A5XX_VBIF_TEST_BUS1_CTRL0 0x00003085
1635
1636#define REG_A5XX_VBIF_TEST_BUS1_CTRL1 0x00003086
1637
1638#define REG_A5XX_VBIF_TEST_BUS2_CTRL0 0x00003087
1639
1640#define REG_A5XX_VBIF_TEST_BUS2_CTRL1 0x00003088
1641
1642#define REG_A5XX_VBIF_TEST_BUS_OUT 0x0000308c
1643
1644#define REG_A5XX_VBIF_PERF_CNT_SEL0 0x000030d0
1645
1646#define REG_A5XX_VBIF_PERF_CNT_SEL1 0x000030d1
1647
1648#define REG_A5XX_VBIF_PERF_CNT_SEL2 0x000030d2
1649
1650#define REG_A5XX_VBIF_PERF_CNT_SEL3 0x000030d3
1651
1652#define REG_A5XX_VBIF_PERF_CNT_LOW0 0x000030d8
1653
1654#define REG_A5XX_VBIF_PERF_CNT_LOW1 0x000030d9
1655
1656#define REG_A5XX_VBIF_PERF_CNT_LOW2 0x000030da
1657
1658#define REG_A5XX_VBIF_PERF_CNT_LOW3 0x000030db
1659
1660#define REG_A5XX_VBIF_PERF_CNT_HIGH0 0x000030e0
1661
1662#define REG_A5XX_VBIF_PERF_CNT_HIGH1 0x000030e1
1663
1664#define REG_A5XX_VBIF_PERF_CNT_HIGH2 0x000030e2
1665
1666#define REG_A5XX_VBIF_PERF_CNT_HIGH3 0x000030e3
1667
1668#define REG_A5XX_VBIF_PERF_PWR_CNT_EN0 0x00003100
1669
1670#define REG_A5XX_VBIF_PERF_PWR_CNT_EN1 0x00003101
1671
1672#define REG_A5XX_VBIF_PERF_PWR_CNT_EN2 0x00003102
1673
1674#define REG_A5XX_VBIF_PERF_PWR_CNT_LOW0 0x00003110
1675
1676#define REG_A5XX_VBIF_PERF_PWR_CNT_LOW1 0x00003111
1677
1678#define REG_A5XX_VBIF_PERF_PWR_CNT_LOW2 0x00003112
1679
1680#define REG_A5XX_VBIF_PERF_PWR_CNT_HIGH0 0x00003118
1681
1682#define REG_A5XX_VBIF_PERF_PWR_CNT_HIGH1 0x00003119
1683
1684#define REG_A5XX_VBIF_PERF_PWR_CNT_HIGH2 0x0000311a
1685
1686#define REG_A5XX_GPMU_INST_RAM_BASE 0x00008800
1687
1688#define REG_A5XX_GPMU_DATA_RAM_BASE 0x00009800
1689
1690#define REG_A5XX_GPMU_SP_POWER_CNTL 0x0000a881
1691
1692#define REG_A5XX_GPMU_RBCCU_CLOCK_CNTL 0x0000a886
1693
1694#define REG_A5XX_GPMU_RBCCU_POWER_CNTL 0x0000a887
1695
1696#define REG_A5XX_GPMU_SP_PWR_CLK_STATUS 0x0000a88b
1697#define A5XX_GPMU_SP_PWR_CLK_STATUS_PWR_ON 0x00100000
1698
1699#define REG_A5XX_GPMU_RBCCU_PWR_CLK_STATUS 0x0000a88d
1700#define A5XX_GPMU_RBCCU_PWR_CLK_STATUS_PWR_ON 0x00100000
1701
1702#define REG_A5XX_GPMU_PWR_COL_STAGGER_DELAY 0x0000a891
1703
1704#define REG_A5XX_GPMU_PWR_COL_INTER_FRAME_CTRL 0x0000a892
1705
1706#define REG_A5XX_GPMU_PWR_COL_INTER_FRAME_HYST 0x0000a893
1707
1708#define REG_A5XX_GPMU_PWR_COL_BINNING_CTRL 0x0000a894
1709
1710#define REG_A5XX_GPMU_CLOCK_THROTTLE_CTRL 0x0000a8a3
1711
1712#define REG_A5XX_GPMU_WFI_CONFIG 0x0000a8c1
1713
1714#define REG_A5XX_GPMU_RBBM_INTR_INFO 0x0000a8d6
1715
1716#define REG_A5XX_GPMU_CM3_SYSRESET 0x0000a8d8
1717
1718#define REG_A5XX_GPMU_GENERAL_0 0x0000a8e0
1719
1720#define REG_A5XX_GPMU_GENERAL_1 0x0000a8e1
1721
1722#define REG_A5XX_SP_POWER_COUNTER_0_LO 0x0000a840
1723
1724#define REG_A5XX_SP_POWER_COUNTER_0_HI 0x0000a841
1725
1726#define REG_A5XX_SP_POWER_COUNTER_1_LO 0x0000a842
1727
1728#define REG_A5XX_SP_POWER_COUNTER_1_HI 0x0000a843
1729
1730#define REG_A5XX_SP_POWER_COUNTER_2_LO 0x0000a844
1731
1732#define REG_A5XX_SP_POWER_COUNTER_2_HI 0x0000a845
1733
1734#define REG_A5XX_SP_POWER_COUNTER_3_LO 0x0000a846
1735
1736#define REG_A5XX_SP_POWER_COUNTER_3_HI 0x0000a847
1737
1738#define REG_A5XX_TP_POWER_COUNTER_0_LO 0x0000a848
1739
1740#define REG_A5XX_TP_POWER_COUNTER_0_HI 0x0000a849
1741
1742#define REG_A5XX_TP_POWER_COUNTER_1_LO 0x0000a84a
1743
1744#define REG_A5XX_TP_POWER_COUNTER_1_HI 0x0000a84b
1745
1746#define REG_A5XX_TP_POWER_COUNTER_2_LO 0x0000a84c
1747
1748#define REG_A5XX_TP_POWER_COUNTER_2_HI 0x0000a84d
1749
1750#define REG_A5XX_TP_POWER_COUNTER_3_LO 0x0000a84e
1751
1752#define REG_A5XX_TP_POWER_COUNTER_3_HI 0x0000a84f
1753
1754#define REG_A5XX_RB_POWER_COUNTER_0_LO 0x0000a850
1755
1756#define REG_A5XX_RB_POWER_COUNTER_0_HI 0x0000a851
1757
1758#define REG_A5XX_RB_POWER_COUNTER_1_LO 0x0000a852
1759
1760#define REG_A5XX_RB_POWER_COUNTER_1_HI 0x0000a853
1761
1762#define REG_A5XX_RB_POWER_COUNTER_2_LO 0x0000a854
1763
1764#define REG_A5XX_RB_POWER_COUNTER_2_HI 0x0000a855
1765
1766#define REG_A5XX_RB_POWER_COUNTER_3_LO 0x0000a856
1767
1768#define REG_A5XX_RB_POWER_COUNTER_3_HI 0x0000a857
1769
1770#define REG_A5XX_CCU_POWER_COUNTER_0_LO 0x0000a858
1771
1772#define REG_A5XX_CCU_POWER_COUNTER_0_HI 0x0000a859
1773
1774#define REG_A5XX_CCU_POWER_COUNTER_1_LO 0x0000a85a
1775
1776#define REG_A5XX_CCU_POWER_COUNTER_1_HI 0x0000a85b
1777
1778#define REG_A5XX_UCHE_POWER_COUNTER_0_LO 0x0000a85c
1779
1780#define REG_A5XX_UCHE_POWER_COUNTER_0_HI 0x0000a85d
1781
1782#define REG_A5XX_UCHE_POWER_COUNTER_1_LO 0x0000a85e
1783
1784#define REG_A5XX_UCHE_POWER_COUNTER_1_HI 0x0000a85f
1785
1786#define REG_A5XX_UCHE_POWER_COUNTER_2_LO 0x0000a860
1787
1788#define REG_A5XX_UCHE_POWER_COUNTER_2_HI 0x0000a861
1789
1790#define REG_A5XX_UCHE_POWER_COUNTER_3_LO 0x0000a862
1791
1792#define REG_A5XX_UCHE_POWER_COUNTER_3_HI 0x0000a863
1793
1794#define REG_A5XX_CP_POWER_COUNTER_0_LO 0x0000a864
1795
1796#define REG_A5XX_CP_POWER_COUNTER_0_HI 0x0000a865
1797
1798#define REG_A5XX_CP_POWER_COUNTER_1_LO 0x0000a866
1799
1800#define REG_A5XX_CP_POWER_COUNTER_1_HI 0x0000a867
1801
1802#define REG_A5XX_CP_POWER_COUNTER_2_LO 0x0000a868
1803
1804#define REG_A5XX_CP_POWER_COUNTER_2_HI 0x0000a869
1805
1806#define REG_A5XX_CP_POWER_COUNTER_3_LO 0x0000a86a
1807
1808#define REG_A5XX_CP_POWER_COUNTER_3_HI 0x0000a86b
1809
1810#define REG_A5XX_GPMU_POWER_COUNTER_0_LO 0x0000a86c
1811
1812#define REG_A5XX_GPMU_POWER_COUNTER_0_HI 0x0000a86d
1813
1814#define REG_A5XX_GPMU_POWER_COUNTER_1_LO 0x0000a86e
1815
1816#define REG_A5XX_GPMU_POWER_COUNTER_1_HI 0x0000a86f
1817
1818#define REG_A5XX_GPMU_POWER_COUNTER_2_LO 0x0000a870
1819
1820#define REG_A5XX_GPMU_POWER_COUNTER_2_HI 0x0000a871
1821
1822#define REG_A5XX_GPMU_POWER_COUNTER_3_LO 0x0000a872
1823
1824#define REG_A5XX_GPMU_POWER_COUNTER_3_HI 0x0000a873
1825
1826#define REG_A5XX_GPMU_POWER_COUNTER_4_LO 0x0000a874
1827
1828#define REG_A5XX_GPMU_POWER_COUNTER_4_HI 0x0000a875
1829
1830#define REG_A5XX_GPMU_POWER_COUNTER_5_LO 0x0000a876
1831
1832#define REG_A5XX_GPMU_POWER_COUNTER_5_HI 0x0000a877
1833
1834#define REG_A5XX_GPMU_POWER_COUNTER_ENABLE 0x0000a878
1835
1836#define REG_A5XX_GPMU_ALWAYS_ON_COUNTER_LO 0x0000a879
1837
1838#define REG_A5XX_GPMU_ALWAYS_ON_COUNTER_HI 0x0000a87a
1839
1840#define REG_A5XX_GPMU_ALWAYS_ON_COUNTER_RESET 0x0000a87b
1841
1842#define REG_A5XX_GPMU_POWER_COUNTER_SELECT_0 0x0000a87c
1843
1844#define REG_A5XX_GPMU_POWER_COUNTER_SELECT_1 0x0000a87d
1845
1846#define REG_A5XX_GPMU_CLOCK_THROTTLE_CTRL 0x0000a8a3
1847
1848#define REG_A5XX_GPMU_THROTTLE_UNMASK_FORCE_CTRL 0x0000a8a8
1849
1850#define REG_A5XX_GPMU_TEMP_SENSOR_ID 0x0000ac00
1851
1852#define REG_A5XX_GPMU_TEMP_SENSOR_CONFIG 0x0000ac01
1853
1854#define REG_A5XX_GPMU_TEMP_VAL 0x0000ac02
1855
1856#define REG_A5XX_GPMU_DELTA_TEMP_THRESHOLD 0x0000ac03
1857
1858#define REG_A5XX_GPMU_TEMP_THRESHOLD_INTR_STATUS 0x0000ac05
1859
1860#define REG_A5XX_GPMU_TEMP_THRESHOLD_INTR_EN_MASK 0x0000ac06
1861
1862#define REG_A5XX_GPMU_LEAKAGE_TEMP_COEFF_0_1 0x0000ac40
1863
1864#define REG_A5XX_GPMU_LEAKAGE_TEMP_COEFF_2_3 0x0000ac41
1865
1866#define REG_A5XX_GPMU_LEAKAGE_VTG_COEFF_0_1 0x0000ac42
1867
1868#define REG_A5XX_GPMU_LEAKAGE_VTG_COEFF_2_3 0x0000ac43
1869
1870#define REG_A5XX_GPMU_BASE_LEAKAGE 0x0000ac46
1871
1872#define REG_A5XX_GPMU_GPMU_VOLTAGE 0x0000ac60
1873
1874#define REG_A5XX_GPMU_GPMU_VOLTAGE_INTR_STATUS 0x0000ac61
1875
1876#define REG_A5XX_GPMU_GPMU_VOLTAGE_INTR_EN_MASK 0x0000ac62
1877
1878#define REG_A5XX_GPMU_GPMU_PWR_THRESHOLD 0x0000ac80
1879
1880#define REG_A5XX_GPMU_GPMU_LLM_GLM_SLEEP_CTRL 0x0000acc4
1881
1882#define REG_A5XX_GPMU_GPMU_LLM_GLM_SLEEP_STATUS 0x0000acc5
1883
1884#define REG_A5XX_GDPM_CONFIG1 0x0000b80c
1885
1886#define REG_A5XX_GDPM_CONFIG2 0x0000b80d
1887
1888#define REG_A5XX_GDPM_INT_EN 0x0000b80f
1889
1890#define REG_A5XX_GDPM_INT_MASK 0x0000b811
1891
1892#define REG_A5XX_GPMU_BEC_ENABLE 0x0000b9a0
1893
1894#define REG_A5XX_GPU_CS_SENSOR_GENERAL_STATUS 0x0000c41a
1895
1896#define REG_A5XX_GPU_CS_AMP_CALIBRATION_STATUS1_0 0x0000c41d
1897
1898#define REG_A5XX_GPU_CS_AMP_CALIBRATION_STATUS1_2 0x0000c41f
1899
1900#define REG_A5XX_GPU_CS_AMP_CALIBRATION_STATUS1_4 0x0000c421
1901
1902#define REG_A5XX_GPU_CS_ENABLE_REG 0x0000c520
1903
1904#define REG_A5XX_GPU_CS_AMP_CALIBRATION_CONTROL1 0x0000c557
1905
1906#define REG_A5XX_GRAS_CL_CNTL 0x0000e000
1907
1908#define REG_A5XX_UNKNOWN_E001 0x0000e001
1909
1910#define REG_A5XX_UNKNOWN_E004 0x0000e004
1911
1912#define REG_A5XX_GRAS_CNTL 0x0000e005
1913#define A5XX_GRAS_CNTL_VARYING 0x00000001
1914
1915#define REG_A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ 0x0000e006
1916#define A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_HORZ__MASK 0x000003ff
1917#define A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_HORZ__SHIFT 0
1918static inline uint32_t A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_HORZ(uint32_t val)
1919{
1920 return ((val) << A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_HORZ__SHIFT) & A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_HORZ__MASK;
1921}
1922#define A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_VERT__MASK 0x000ffc00
1923#define A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_VERT__SHIFT 10
1924static inline uint32_t A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_VERT(uint32_t val)
1925{
1926 return ((val) << A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_VERT__SHIFT) & A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ_VERT__MASK;
1927}
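The `__SHIFT`/`__MASK` pairs above follow the packing convention used throughout this generated header: each helper shifts a field value into position and masks off any overflow, and a full register word is the bitwise OR of its packed fields. A minimal self-contained sketch (the guardband field layout is copied locally and the `gb_*` helper names are invented for the example):

```c
#include <assert.h>
#include <stdint.h>

/* Local copies of the A5XX_GRAS_CL_GUARDBAND_CLIP_ADJ field layout. */
#define GB_HORZ__MASK  0x000003ff
#define GB_HORZ__SHIFT 0
#define GB_VERT__MASK  0x000ffc00
#define GB_VERT__SHIFT 10

/* Pack a field: shift into place, then mask off anything that overflows. */
static inline uint32_t gb_horz(uint32_t val)
{
	return (val << GB_HORZ__SHIFT) & GB_HORZ__MASK;
}

static inline uint32_t gb_vert(uint32_t val)
{
	return (val << GB_VERT__SHIFT) & GB_VERT__MASK;
}
```

A register value is then built by OR-ing the packed fields, e.g. `gb_horz(0x123) | gb_vert(0x45)` yields `0x00011523`; the mask guarantees a too-large field value cannot corrupt its neighbors.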

#define REG_A5XX_GRAS_CL_VPORT_XOFFSET_0 0x0000e010
#define A5XX_GRAS_CL_VPORT_XOFFSET_0__MASK 0xffffffff
#define A5XX_GRAS_CL_VPORT_XOFFSET_0__SHIFT 0
static inline uint32_t A5XX_GRAS_CL_VPORT_XOFFSET_0(float val)
{
	return ((fui(val)) << A5XX_GRAS_CL_VPORT_XOFFSET_0__SHIFT) & A5XX_GRAS_CL_VPORT_XOFFSET_0__MASK;
}

#define REG_A5XX_GRAS_CL_VPORT_XSCALE_0 0x0000e011
#define A5XX_GRAS_CL_VPORT_XSCALE_0__MASK 0xffffffff
#define A5XX_GRAS_CL_VPORT_XSCALE_0__SHIFT 0
static inline uint32_t A5XX_GRAS_CL_VPORT_XSCALE_0(float val)
{
	return ((fui(val)) << A5XX_GRAS_CL_VPORT_XSCALE_0__SHIFT) & A5XX_GRAS_CL_VPORT_XSCALE_0__MASK;
}

#define REG_A5XX_GRAS_CL_VPORT_YOFFSET_0 0x0000e012
#define A5XX_GRAS_CL_VPORT_YOFFSET_0__MASK 0xffffffff
#define A5XX_GRAS_CL_VPORT_YOFFSET_0__SHIFT 0
static inline uint32_t A5XX_GRAS_CL_VPORT_YOFFSET_0(float val)
{
	return ((fui(val)) << A5XX_GRAS_CL_VPORT_YOFFSET_0__SHIFT) & A5XX_GRAS_CL_VPORT_YOFFSET_0__MASK;
}

#define REG_A5XX_GRAS_CL_VPORT_YSCALE_0 0x0000e013
#define A5XX_GRAS_CL_VPORT_YSCALE_0__MASK 0xffffffff
#define A5XX_GRAS_CL_VPORT_YSCALE_0__SHIFT 0
static inline uint32_t A5XX_GRAS_CL_VPORT_YSCALE_0(float val)
{
	return ((fui(val)) << A5XX_GRAS_CL_VPORT_YSCALE_0__SHIFT) & A5XX_GRAS_CL_VPORT_YSCALE_0__MASK;
}

#define REG_A5XX_GRAS_CL_VPORT_ZOFFSET_0 0x0000e014
#define A5XX_GRAS_CL_VPORT_ZOFFSET_0__MASK 0xffffffff
#define A5XX_GRAS_CL_VPORT_ZOFFSET_0__SHIFT 0
static inline uint32_t A5XX_GRAS_CL_VPORT_ZOFFSET_0(float val)
{
	return ((fui(val)) << A5XX_GRAS_CL_VPORT_ZOFFSET_0__SHIFT) & A5XX_GRAS_CL_VPORT_ZOFFSET_0__MASK;
}

#define REG_A5XX_GRAS_CL_VPORT_ZSCALE_0 0x0000e015
#define A5XX_GRAS_CL_VPORT_ZSCALE_0__MASK 0xffffffff
#define A5XX_GRAS_CL_VPORT_ZSCALE_0__SHIFT 0
static inline uint32_t A5XX_GRAS_CL_VPORT_ZSCALE_0(float val)
{
	return ((fui(val)) << A5XX_GRAS_CL_VPORT_ZSCALE_0__SHIFT) & A5XX_GRAS_CL_VPORT_ZSCALE_0__MASK;
}
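The viewport packers above take a `float` and call `fui()`, a helper defined elsewhere in the driver that returns the IEEE-754 bit pattern of a float as a `uint32_t`, since these registers store raw 32-bit float bits. A hedged, self-contained stand-in for the idea (this is not the driver's actual definition):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the fui() helper used by the viewport packers: return
 * the IEEE-754 bit pattern of a float. memcpy sidesteps strict-aliasing
 * problems that a pointer cast would invite. */
static inline uint32_t fui(float f)
{
	uint32_t u;
	memcpy(&u, &f, sizeof(u));
	return u;
}
```

For example `fui(1.0f)` is `0x3f800000`, so `A5XX_GRAS_CL_VPORT_XSCALE_0(1.0f)` programs the register with exactly that bit pattern.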

#define REG_A5XX_GRAS_SU_CNTL 0x0000e090
#define A5XX_GRAS_SU_CNTL_FRONT_CW 0x00000004
#define A5XX_GRAS_SU_CNTL_LINEHALFWIDTH__MASK 0x000007f8
#define A5XX_GRAS_SU_CNTL_LINEHALFWIDTH__SHIFT 3
static inline uint32_t A5XX_GRAS_SU_CNTL_LINEHALFWIDTH(float val)
{
	return ((((int32_t)(val * 4.0))) << A5XX_GRAS_SU_CNTL_LINEHALFWIDTH__SHIFT) & A5XX_GRAS_SU_CNTL_LINEHALFWIDTH__MASK;
}
#define A5XX_GRAS_SU_CNTL_POLY_OFFSET 0x00000800
#define A5XX_GRAS_SU_CNTL_MSAA_ENABLE 0x00002000

#define REG_A5XX_GRAS_SU_POINT_MINMAX 0x0000e091
#define A5XX_GRAS_SU_POINT_MINMAX_MIN__MASK 0x0000ffff
#define A5XX_GRAS_SU_POINT_MINMAX_MIN__SHIFT 0
static inline uint32_t A5XX_GRAS_SU_POINT_MINMAX_MIN(float val)
{
	return ((((uint32_t)(val * 16.0))) << A5XX_GRAS_SU_POINT_MINMAX_MIN__SHIFT) & A5XX_GRAS_SU_POINT_MINMAX_MIN__MASK;
}
#define A5XX_GRAS_SU_POINT_MINMAX_MAX__MASK 0xffff0000
#define A5XX_GRAS_SU_POINT_MINMAX_MAX__SHIFT 16
static inline uint32_t A5XX_GRAS_SU_POINT_MINMAX_MAX(float val)
{
	return ((((uint32_t)(val * 16.0))) << A5XX_GRAS_SU_POINT_MINMAX_MAX__SHIFT) & A5XX_GRAS_SU_POINT_MINMAX_MAX__MASK;
}
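Unlike the `fui()`-based registers, `GRAS_SU_POINT_MINMAX` appears to store its limits as unsigned fixed point with four fractional bits, which is what the `(uint32_t)(val * 16.0)` conversion above encodes. A self-contained sketch of packing both halves at once (the `point_minmax` helper name is invented for the example):

```c
#include <assert.h>
#include <stdint.h>

/* Pack point-size min/max into a 16/16 split register, each half in
 * unsigned fixed point with 4 fractional bits (value * 16). */
static inline uint32_t point_minmax(float min, float max)
{
	uint32_t lo = ((uint32_t)(min * 16.0)) & 0x0000ffff;          /* MIN field */
	uint32_t hi = (((uint32_t)(max * 16.0)) << 16) & 0xffff0000;  /* MAX field */
	return lo | hi;
}
```

So `point_minmax(1.0f, 4.0f)` produces `0x00400010`: 1.0 encodes as `0x10`, 4.0 as `0x40` shifted into the high half.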

#define REG_A5XX_GRAS_SU_POINT_SIZE 0x0000e092
#define A5XX_GRAS_SU_POINT_SIZE__MASK 0xffffffff
#define A5XX_GRAS_SU_POINT_SIZE__SHIFT 0
static inline uint32_t A5XX_GRAS_SU_POINT_SIZE(float val)
{
	return ((((int32_t)(val * 16.0))) << A5XX_GRAS_SU_POINT_SIZE__SHIFT) & A5XX_GRAS_SU_POINT_SIZE__MASK;
}

#define REG_A5XX_UNKNOWN_E093 0x0000e093

#define REG_A5XX_GRAS_SU_DEPTH_PLANE_CNTL 0x0000e094
#define A5XX_GRAS_SU_DEPTH_PLANE_CNTL_ALPHA_TEST_ENABLE 0x00000001

#define REG_A5XX_GRAS_SU_POLY_OFFSET_SCALE 0x0000e095
#define A5XX_GRAS_SU_POLY_OFFSET_SCALE__MASK 0xffffffff
#define A5XX_GRAS_SU_POLY_OFFSET_SCALE__SHIFT 0
static inline uint32_t A5XX_GRAS_SU_POLY_OFFSET_SCALE(float val)
{
	return ((fui(val)) << A5XX_GRAS_SU_POLY_OFFSET_SCALE__SHIFT) & A5XX_GRAS_SU_POLY_OFFSET_SCALE__MASK;
}

#define REG_A5XX_GRAS_SU_POLY_OFFSET_OFFSET 0x0000e096
#define A5XX_GRAS_SU_POLY_OFFSET_OFFSET__MASK 0xffffffff
#define A5XX_GRAS_SU_POLY_OFFSET_OFFSET__SHIFT 0
static inline uint32_t A5XX_GRAS_SU_POLY_OFFSET_OFFSET(float val)
{
	return ((fui(val)) << A5XX_GRAS_SU_POLY_OFFSET_OFFSET__SHIFT) & A5XX_GRAS_SU_POLY_OFFSET_OFFSET__MASK;
}

#define REG_A5XX_GRAS_SU_POLY_OFFSET_OFFSET_CLAMP 0x0000e097
#define A5XX_GRAS_SU_POLY_OFFSET_OFFSET_CLAMP__MASK 0xffffffff
#define A5XX_GRAS_SU_POLY_OFFSET_OFFSET_CLAMP__SHIFT 0
static inline uint32_t A5XX_GRAS_SU_POLY_OFFSET_OFFSET_CLAMP(float val)
{
	return ((fui(val)) << A5XX_GRAS_SU_POLY_OFFSET_OFFSET_CLAMP__SHIFT) & A5XX_GRAS_SU_POLY_OFFSET_OFFSET_CLAMP__MASK;
}

#define REG_A5XX_GRAS_SU_DEPTH_BUFFER_INFO 0x0000e098
#define A5XX_GRAS_SU_DEPTH_BUFFER_INFO_DEPTH_FORMAT__MASK 0x00000007
#define A5XX_GRAS_SU_DEPTH_BUFFER_INFO_DEPTH_FORMAT__SHIFT 0
static inline uint32_t A5XX_GRAS_SU_DEPTH_BUFFER_INFO_DEPTH_FORMAT(enum a5xx_depth_format val)
{
	return ((val) << A5XX_GRAS_SU_DEPTH_BUFFER_INFO_DEPTH_FORMAT__SHIFT) & A5XX_GRAS_SU_DEPTH_BUFFER_INFO_DEPTH_FORMAT__MASK;
}

#define REG_A5XX_GRAS_SU_CONSERVATIVE_RAS_CNTL 0x0000e099

#define REG_A5XX_GRAS_SC_CNTL 0x0000e0a0
#define A5XX_GRAS_SC_CNTL_SAMPLES_PASSED 0x00008000

#define REG_A5XX_GRAS_SC_BIN_CNTL 0x0000e0a1

#define REG_A5XX_GRAS_SC_RAS_MSAA_CNTL 0x0000e0a2
#define A5XX_GRAS_SC_RAS_MSAA_CNTL_SAMPLES__MASK 0x00000003
#define A5XX_GRAS_SC_RAS_MSAA_CNTL_SAMPLES__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_RAS_MSAA_CNTL_SAMPLES(enum a3xx_msaa_samples val)
{
	return ((val) << A5XX_GRAS_SC_RAS_MSAA_CNTL_SAMPLES__SHIFT) & A5XX_GRAS_SC_RAS_MSAA_CNTL_SAMPLES__MASK;
}

#define REG_A5XX_GRAS_SC_DEST_MSAA_CNTL 0x0000e0a3
#define A5XX_GRAS_SC_DEST_MSAA_CNTL_SAMPLES__MASK 0x00000003
#define A5XX_GRAS_SC_DEST_MSAA_CNTL_SAMPLES__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_DEST_MSAA_CNTL_SAMPLES(enum a3xx_msaa_samples val)
{
	return ((val) << A5XX_GRAS_SC_DEST_MSAA_CNTL_SAMPLES__SHIFT) & A5XX_GRAS_SC_DEST_MSAA_CNTL_SAMPLES__MASK;
}
#define A5XX_GRAS_SC_DEST_MSAA_CNTL_MSAA_DISABLE 0x00000004

#define REG_A5XX_GRAS_SC_SCREEN_SCISSOR_CNTL 0x0000e0a4

#define REG_A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0 0x0000e0aa
#define A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_X__MASK 0x00007fff
#define A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_X__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_X(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_X__SHIFT) & A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_X__MASK;
}
#define A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_Y__MASK 0x7fff0000
#define A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_Y__SHIFT 16
static inline uint32_t A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_Y(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_Y__SHIFT) & A5XX_GRAS_SC_SCREEN_SCISSOR_TL_0_Y__MASK;
}

#define REG_A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0 0x0000e0ab
#define A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_X__MASK 0x00007fff
#define A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_X__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_X(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_X__SHIFT) & A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_X__MASK;
}
#define A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_Y__MASK 0x7fff0000
#define A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_Y__SHIFT 16
static inline uint32_t A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_Y(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_Y__SHIFT) & A5XX_GRAS_SC_SCREEN_SCISSOR_BR_0_Y__MASK;
}

#define REG_A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0 0x0000e0ca
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_X__MASK 0x00007fff
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_X__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_X(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_X__SHIFT) & A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_X__MASK;
}
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_Y__MASK 0x7fff0000
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_Y__SHIFT 16
static inline uint32_t A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_Y(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_Y__SHIFT) & A5XX_GRAS_SC_VIEWPORT_SCISSOR_TL_0_Y__MASK;
}

#define REG_A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0 0x0000e0cb
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_X__MASK 0x00007fff
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_X__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_X(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_X__SHIFT) & A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_X__MASK;
}
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_Y__MASK 0x7fff0000
#define A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_Y__SHIFT 16
static inline uint32_t A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_Y(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_Y__SHIFT) & A5XX_GRAS_SC_VIEWPORT_SCISSOR_BR_0_Y__MASK;
}

#define REG_A5XX_GRAS_SC_WINDOW_SCISSOR_TL 0x0000e0ea
#define A5XX_GRAS_SC_WINDOW_SCISSOR_TL_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_GRAS_SC_WINDOW_SCISSOR_TL_X__MASK 0x00007fff
#define A5XX_GRAS_SC_WINDOW_SCISSOR_TL_X__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_WINDOW_SCISSOR_TL_X(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_WINDOW_SCISSOR_TL_X__SHIFT) & A5XX_GRAS_SC_WINDOW_SCISSOR_TL_X__MASK;
}
#define A5XX_GRAS_SC_WINDOW_SCISSOR_TL_Y__MASK 0x7fff0000
#define A5XX_GRAS_SC_WINDOW_SCISSOR_TL_Y__SHIFT 16
static inline uint32_t A5XX_GRAS_SC_WINDOW_SCISSOR_TL_Y(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_WINDOW_SCISSOR_TL_Y__SHIFT) & A5XX_GRAS_SC_WINDOW_SCISSOR_TL_Y__MASK;
}

#define REG_A5XX_GRAS_SC_WINDOW_SCISSOR_BR 0x0000e0eb
#define A5XX_GRAS_SC_WINDOW_SCISSOR_BR_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_GRAS_SC_WINDOW_SCISSOR_BR_X__MASK 0x00007fff
#define A5XX_GRAS_SC_WINDOW_SCISSOR_BR_X__SHIFT 0
static inline uint32_t A5XX_GRAS_SC_WINDOW_SCISSOR_BR_X(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_WINDOW_SCISSOR_BR_X__SHIFT) & A5XX_GRAS_SC_WINDOW_SCISSOR_BR_X__MASK;
}
#define A5XX_GRAS_SC_WINDOW_SCISSOR_BR_Y__MASK 0x7fff0000
#define A5XX_GRAS_SC_WINDOW_SCISSOR_BR_Y__SHIFT 16
static inline uint32_t A5XX_GRAS_SC_WINDOW_SCISSOR_BR_Y(uint32_t val)
{
	return ((val) << A5XX_GRAS_SC_WINDOW_SCISSOR_BR_Y__SHIFT) & A5XX_GRAS_SC_WINDOW_SCISSOR_BR_Y__MASK;
}

#define REG_A5XX_GRAS_LRZ_CNTL 0x0000e100

#define REG_A5XX_GRAS_LRZ_BUFFER_BASE_LO 0x0000e101

#define REG_A5XX_GRAS_LRZ_BUFFER_BASE_HI 0x0000e102

#define REG_A5XX_GRAS_LRZ_BUFFER_PITCH 0x0000e103

#define REG_A5XX_GRAS_LRZ_FAST_CLEAR_BUFFER_BASE_LO 0x0000e104

#define REG_A5XX_GRAS_LRZ_FAST_CLEAR_BUFFER_BASE_HI 0x0000e105

#define REG_A5XX_RB_CNTL 0x0000e140
#define A5XX_RB_CNTL_WIDTH__MASK 0x000000ff
#define A5XX_RB_CNTL_WIDTH__SHIFT 0
static inline uint32_t A5XX_RB_CNTL_WIDTH(uint32_t val)
{
	return ((val >> 5) << A5XX_RB_CNTL_WIDTH__SHIFT) & A5XX_RB_CNTL_WIDTH__MASK;
}
#define A5XX_RB_CNTL_HEIGHT__MASK 0x0001fe00
#define A5XX_RB_CNTL_HEIGHT__SHIFT 9
static inline uint32_t A5XX_RB_CNTL_HEIGHT(uint32_t val)
{
	return ((val >> 5) << A5XX_RB_CNTL_HEIGHT__SHIFT) & A5XX_RB_CNTL_HEIGHT__MASK;
}
#define A5XX_RB_CNTL_BYPASS 0x00020000
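The `val >> 5` in the RB_CNTL packers above suggests the WIDTH and HEIGHT fields are stored in 32-pixel units rather than pixels. A hedged, self-contained sketch of building the size portion of the register (the `rb_cntl_size` helper name is invented for the example):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a render-target size into the RB_CNTL WIDTH/HEIGHT fields,
 * converting pixels to 32-pixel units via the >> 5 from the header. */
static inline uint32_t rb_cntl_size(uint32_t width_px, uint32_t height_px)
{
	uint32_t w = ((width_px >> 5) << 0) & 0x000000ff;   /* WIDTH, bits 0..7 */
	uint32_t h = ((height_px >> 5) << 9) & 0x0001fe00;  /* HEIGHT, bits 9..16 */
	return w | h;
}
```

For a 1920x1080 target this gives `0x0000423c`: 1920/32 = 60 in the WIDTH field and 1080/32 = 33 in the HEIGHT field.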

#define REG_A5XX_RB_RENDER_CNTL 0x0000e141
#define A5XX_RB_RENDER_CNTL_SAMPLES_PASSED 0x00000040
#define A5XX_RB_RENDER_CNTL_FLAG_DEPTH 0x00004000
#define A5XX_RB_RENDER_CNTL_FLAG_DEPTH2 0x00008000
#define A5XX_RB_RENDER_CNTL_FLAG_MRTS__MASK 0x00ff0000
#define A5XX_RB_RENDER_CNTL_FLAG_MRTS__SHIFT 16
static inline uint32_t A5XX_RB_RENDER_CNTL_FLAG_MRTS(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_CNTL_FLAG_MRTS__SHIFT) & A5XX_RB_RENDER_CNTL_FLAG_MRTS__MASK;
}
#define A5XX_RB_RENDER_CNTL_FLAG_MRTS2__MASK 0xff000000
#define A5XX_RB_RENDER_CNTL_FLAG_MRTS2__SHIFT 24
static inline uint32_t A5XX_RB_RENDER_CNTL_FLAG_MRTS2(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_CNTL_FLAG_MRTS2__SHIFT) & A5XX_RB_RENDER_CNTL_FLAG_MRTS2__MASK;
}

#define REG_A5XX_RB_RAS_MSAA_CNTL 0x0000e142
#define A5XX_RB_RAS_MSAA_CNTL_SAMPLES__MASK 0x00000003
#define A5XX_RB_RAS_MSAA_CNTL_SAMPLES__SHIFT 0
static inline uint32_t A5XX_RB_RAS_MSAA_CNTL_SAMPLES(enum a3xx_msaa_samples val)
{
	return ((val) << A5XX_RB_RAS_MSAA_CNTL_SAMPLES__SHIFT) & A5XX_RB_RAS_MSAA_CNTL_SAMPLES__MASK;
}

#define REG_A5XX_RB_DEST_MSAA_CNTL 0x0000e143
#define A5XX_RB_DEST_MSAA_CNTL_SAMPLES__MASK 0x00000003
#define A5XX_RB_DEST_MSAA_CNTL_SAMPLES__SHIFT 0
static inline uint32_t A5XX_RB_DEST_MSAA_CNTL_SAMPLES(enum a3xx_msaa_samples val)
{
	return ((val) << A5XX_RB_DEST_MSAA_CNTL_SAMPLES__SHIFT) & A5XX_RB_DEST_MSAA_CNTL_SAMPLES__MASK;
}
#define A5XX_RB_DEST_MSAA_CNTL_MSAA_DISABLE 0x00000004

#define REG_A5XX_RB_RENDER_CONTROL0 0x0000e144
#define A5XX_RB_RENDER_CONTROL0_VARYING 0x00000001
#define A5XX_RB_RENDER_CONTROL0_XCOORD 0x00000040
#define A5XX_RB_RENDER_CONTROL0_YCOORD 0x00000080
#define A5XX_RB_RENDER_CONTROL0_ZCOORD 0x00000100
#define A5XX_RB_RENDER_CONTROL0_WCOORD 0x00000200

#define REG_A5XX_RB_RENDER_CONTROL1 0x0000e145
#define A5XX_RB_RENDER_CONTROL1_FACENESS 0x00000002

#define REG_A5XX_RB_FS_OUTPUT_CNTL 0x0000e146
#define A5XX_RB_FS_OUTPUT_CNTL_MRT__MASK 0x0000000f
#define A5XX_RB_FS_OUTPUT_CNTL_MRT__SHIFT 0
static inline uint32_t A5XX_RB_FS_OUTPUT_CNTL_MRT(uint32_t val)
{
	return ((val) << A5XX_RB_FS_OUTPUT_CNTL_MRT__SHIFT) & A5XX_RB_FS_OUTPUT_CNTL_MRT__MASK;
}
#define A5XX_RB_FS_OUTPUT_CNTL_FRAG_WRITES_Z 0x00000020

#define REG_A5XX_RB_RENDER_COMPONENTS 0x0000e147
#define A5XX_RB_RENDER_COMPONENTS_RT0__MASK 0x0000000f
#define A5XX_RB_RENDER_COMPONENTS_RT0__SHIFT 0
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT0(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT0__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT0__MASK;
}
#define A5XX_RB_RENDER_COMPONENTS_RT1__MASK 0x000000f0
#define A5XX_RB_RENDER_COMPONENTS_RT1__SHIFT 4
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT1(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT1__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT1__MASK;
}
#define A5XX_RB_RENDER_COMPONENTS_RT2__MASK 0x00000f00
#define A5XX_RB_RENDER_COMPONENTS_RT2__SHIFT 8
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT2(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT2__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT2__MASK;
}
#define A5XX_RB_RENDER_COMPONENTS_RT3__MASK 0x0000f000
#define A5XX_RB_RENDER_COMPONENTS_RT3__SHIFT 12
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT3(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT3__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT3__MASK;
}
#define A5XX_RB_RENDER_COMPONENTS_RT4__MASK 0x000f0000
#define A5XX_RB_RENDER_COMPONENTS_RT4__SHIFT 16
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT4(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT4__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT4__MASK;
}
#define A5XX_RB_RENDER_COMPONENTS_RT5__MASK 0x00f00000
#define A5XX_RB_RENDER_COMPONENTS_RT5__SHIFT 20
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT5(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT5__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT5__MASK;
}
#define A5XX_RB_RENDER_COMPONENTS_RT6__MASK 0x0f000000
#define A5XX_RB_RENDER_COMPONENTS_RT6__SHIFT 24
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT6(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT6__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT6__MASK;
}
#define A5XX_RB_RENDER_COMPONENTS_RT7__MASK 0xf0000000
#define A5XX_RB_RENDER_COMPONENTS_RT7__SHIFT 28
static inline uint32_t A5XX_RB_RENDER_COMPONENTS_RT7(uint32_t val)
{
	return ((val) << A5XX_RB_RENDER_COMPONENTS_RT7__SHIFT) & A5XX_RB_RENDER_COMPONENTS_RT7__MASK;
}

static inline uint32_t REG_A5XX_RB_MRT(uint32_t i0) { return 0x0000e150 + 0x7*i0; }

static inline uint32_t REG_A5XX_RB_MRT_CONTROL(uint32_t i0) { return 0x0000e150 + 0x7*i0; }
#define A5XX_RB_MRT_CONTROL_BLEND 0x00000001
#define A5XX_RB_MRT_CONTROL_BLEND2 0x00000002
#define A5XX_RB_MRT_CONTROL_COMPONENT_ENABLE__MASK 0x00000780
#define A5XX_RB_MRT_CONTROL_COMPONENT_ENABLE__SHIFT 7
static inline uint32_t A5XX_RB_MRT_CONTROL_COMPONENT_ENABLE(uint32_t val)
{
	return ((val) << A5XX_RB_MRT_CONTROL_COMPONENT_ENABLE__SHIFT) & A5XX_RB_MRT_CONTROL_COMPONENT_ENABLE__MASK;
}
2304
2305static inline uint32_t REG_A5XX_RB_MRT_BLEND_CONTROL(uint32_t i0) { return 0x0000e151 + 0x7*i0; }
2306#define A5XX_RB_MRT_BLEND_CONTROL_RGB_SRC_FACTOR__MASK 0x0000001f
2307#define A5XX_RB_MRT_BLEND_CONTROL_RGB_SRC_FACTOR__SHIFT 0
2308static inline uint32_t A5XX_RB_MRT_BLEND_CONTROL_RGB_SRC_FACTOR(enum adreno_rb_blend_factor val)
2309{
2310 return ((val) << A5XX_RB_MRT_BLEND_CONTROL_RGB_SRC_FACTOR__SHIFT) & A5XX_RB_MRT_BLEND_CONTROL_RGB_SRC_FACTOR__MASK;
2311}
2312#define A5XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__MASK 0x000000e0
2313#define A5XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__SHIFT 5
2314static inline uint32_t A5XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE(enum a3xx_rb_blend_opcode val)
2315{
2316 return ((val) << A5XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__SHIFT) & A5XX_RB_MRT_BLEND_CONTROL_RGB_BLEND_OPCODE__MASK;
2317}
2318#define A5XX_RB_MRT_BLEND_CONTROL_RGB_DEST_FACTOR__MASK 0x00001f00
2319#define A5XX_RB_MRT_BLEND_CONTROL_RGB_DEST_FACTOR__SHIFT 8
2320static inline uint32_t A5XX_RB_MRT_BLEND_CONTROL_RGB_DEST_FACTOR(enum adreno_rb_blend_factor val)
2321{
2322 return ((val) << A5XX_RB_MRT_BLEND_CONTROL_RGB_DEST_FACTOR__SHIFT) & A5XX_RB_MRT_BLEND_CONTROL_RGB_DEST_FACTOR__MASK;
2323}
2324#define A5XX_RB_MRT_BLEND_CONTROL_ALPHA_SRC_FACTOR__MASK 0x001f0000
2325#define A5XX_RB_MRT_BLEND_CONTROL_ALPHA_SRC_FACTOR__SHIFT 16
2326static inline uint32_t A5XX_RB_MRT_BLEND_CONTROL_ALPHA_SRC_FACTOR(enum adreno_rb_blend_factor val)
2327{
2328 return ((val) << A5XX_RB_MRT_BLEND_CONTROL_ALPHA_SRC_FACTOR__SHIFT) & A5XX_RB_MRT_BLEND_CONTROL_ALPHA_SRC_FACTOR__MASK;
2329}
2330#define A5XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__MASK 0x00e00000
2331#define A5XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__SHIFT 21
2332static inline uint32_t A5XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE(enum a3xx_rb_blend_opcode val)
2333{
2334 return ((val) << A5XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__SHIFT) & A5XX_RB_MRT_BLEND_CONTROL_ALPHA_BLEND_OPCODE__MASK;
2335}
2336#define A5XX_RB_MRT_BLEND_CONTROL_ALPHA_DEST_FACTOR__MASK 0x1f000000
2337#define A5XX_RB_MRT_BLEND_CONTROL_ALPHA_DEST_FACTOR__SHIFT 24
2338static inline uint32_t A5XX_RB_MRT_BLEND_CONTROL_ALPHA_DEST_FACTOR(enum adreno_rb_blend_factor val)
2339{
2340 return ((val) << A5XX_RB_MRT_BLEND_CONTROL_ALPHA_DEST_FACTOR__SHIFT) & A5XX_RB_MRT_BLEND_CONTROL_ALPHA_DEST_FACTOR__MASK;
2341}
2342
static inline uint32_t REG_A5XX_RB_MRT_BUF_INFO(uint32_t i0) { return 0x0000e152 + 0x7*i0; }
#define A5XX_RB_MRT_BUF_INFO_COLOR_FORMAT__MASK 0x000000ff
#define A5XX_RB_MRT_BUF_INFO_COLOR_FORMAT__SHIFT 0
static inline uint32_t A5XX_RB_MRT_BUF_INFO_COLOR_FORMAT(enum a5xx_color_fmt val)
{
	return ((val) << A5XX_RB_MRT_BUF_INFO_COLOR_FORMAT__SHIFT) & A5XX_RB_MRT_BUF_INFO_COLOR_FORMAT__MASK;
}
#define A5XX_RB_MRT_BUF_INFO_COLOR_TILE_MODE__MASK 0x00000300
#define A5XX_RB_MRT_BUF_INFO_COLOR_TILE_MODE__SHIFT 8
static inline uint32_t A5XX_RB_MRT_BUF_INFO_COLOR_TILE_MODE(enum a5xx_tile_mode val)
{
	return ((val) << A5XX_RB_MRT_BUF_INFO_COLOR_TILE_MODE__SHIFT) & A5XX_RB_MRT_BUF_INFO_COLOR_TILE_MODE__MASK;
}
#define A5XX_RB_MRT_BUF_INFO_COLOR_SWAP__MASK 0x00006000
#define A5XX_RB_MRT_BUF_INFO_COLOR_SWAP__SHIFT 13
static inline uint32_t A5XX_RB_MRT_BUF_INFO_COLOR_SWAP(enum a3xx_color_swap val)
{
	return ((val) << A5XX_RB_MRT_BUF_INFO_COLOR_SWAP__SHIFT) & A5XX_RB_MRT_BUF_INFO_COLOR_SWAP__MASK;
}
#define A5XX_RB_MRT_BUF_INFO_COLOR_SRGB 0x00008000

static inline uint32_t REG_A5XX_RB_MRT_PITCH(uint32_t i0) { return 0x0000e153 + 0x7*i0; }
#define A5XX_RB_MRT_PITCH__MASK 0xffffffff
#define A5XX_RB_MRT_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_MRT_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_MRT_PITCH__SHIFT) & A5XX_RB_MRT_PITCH__MASK;
}

static inline uint32_t REG_A5XX_RB_MRT_ARRAY_PITCH(uint32_t i0) { return 0x0000e154 + 0x7*i0; }
#define A5XX_RB_MRT_ARRAY_PITCH__MASK 0xffffffff
#define A5XX_RB_MRT_ARRAY_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_MRT_ARRAY_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_MRT_ARRAY_PITCH__SHIFT) & A5XX_RB_MRT_ARRAY_PITCH__MASK;
}

static inline uint32_t REG_A5XX_RB_MRT_BASE_LO(uint32_t i0) { return 0x0000e155 + 0x7*i0; }

static inline uint32_t REG_A5XX_RB_MRT_BASE_HI(uint32_t i0) { return 0x0000e156 + 0x7*i0; }

#define REG_A5XX_RB_BLEND_RED 0x0000e1a0
#define A5XX_RB_BLEND_RED_UINT__MASK 0x000000ff
#define A5XX_RB_BLEND_RED_UINT__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_RED_UINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_RED_UINT__SHIFT) & A5XX_RB_BLEND_RED_UINT__MASK;
}
#define A5XX_RB_BLEND_RED_SINT__MASK 0x0000ff00
#define A5XX_RB_BLEND_RED_SINT__SHIFT 8
static inline uint32_t A5XX_RB_BLEND_RED_SINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_RED_SINT__SHIFT) & A5XX_RB_BLEND_RED_SINT__MASK;
}
#define A5XX_RB_BLEND_RED_FLOAT__MASK 0xffff0000
#define A5XX_RB_BLEND_RED_FLOAT__SHIFT 16
static inline uint32_t A5XX_RB_BLEND_RED_FLOAT(float val)
{
	return ((util_float_to_half(val)) << A5XX_RB_BLEND_RED_FLOAT__SHIFT) & A5XX_RB_BLEND_RED_FLOAT__MASK;
}

#define REG_A5XX_RB_BLEND_RED_F32 0x0000e1a1
#define A5XX_RB_BLEND_RED_F32__MASK 0xffffffff
#define A5XX_RB_BLEND_RED_F32__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_RED_F32(float val)
{
	return ((fui(val)) << A5XX_RB_BLEND_RED_F32__SHIFT) & A5XX_RB_BLEND_RED_F32__MASK;
}

#define REG_A5XX_RB_BLEND_GREEN 0x0000e1a2
#define A5XX_RB_BLEND_GREEN_UINT__MASK 0x000000ff
#define A5XX_RB_BLEND_GREEN_UINT__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_GREEN_UINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_GREEN_UINT__SHIFT) & A5XX_RB_BLEND_GREEN_UINT__MASK;
}
#define A5XX_RB_BLEND_GREEN_SINT__MASK 0x0000ff00
#define A5XX_RB_BLEND_GREEN_SINT__SHIFT 8
static inline uint32_t A5XX_RB_BLEND_GREEN_SINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_GREEN_SINT__SHIFT) & A5XX_RB_BLEND_GREEN_SINT__MASK;
}
#define A5XX_RB_BLEND_GREEN_FLOAT__MASK 0xffff0000
#define A5XX_RB_BLEND_GREEN_FLOAT__SHIFT 16
static inline uint32_t A5XX_RB_BLEND_GREEN_FLOAT(float val)
{
	return ((util_float_to_half(val)) << A5XX_RB_BLEND_GREEN_FLOAT__SHIFT) & A5XX_RB_BLEND_GREEN_FLOAT__MASK;
}

#define REG_A5XX_RB_BLEND_GREEN_F32 0x0000e1a3
#define A5XX_RB_BLEND_GREEN_F32__MASK 0xffffffff
#define A5XX_RB_BLEND_GREEN_F32__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_GREEN_F32(float val)
{
	return ((fui(val)) << A5XX_RB_BLEND_GREEN_F32__SHIFT) & A5XX_RB_BLEND_GREEN_F32__MASK;
}

#define REG_A5XX_RB_BLEND_BLUE 0x0000e1a4
#define A5XX_RB_BLEND_BLUE_UINT__MASK 0x000000ff
#define A5XX_RB_BLEND_BLUE_UINT__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_BLUE_UINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_BLUE_UINT__SHIFT) & A5XX_RB_BLEND_BLUE_UINT__MASK;
}
#define A5XX_RB_BLEND_BLUE_SINT__MASK 0x0000ff00
#define A5XX_RB_BLEND_BLUE_SINT__SHIFT 8
static inline uint32_t A5XX_RB_BLEND_BLUE_SINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_BLUE_SINT__SHIFT) & A5XX_RB_BLEND_BLUE_SINT__MASK;
}
#define A5XX_RB_BLEND_BLUE_FLOAT__MASK 0xffff0000
#define A5XX_RB_BLEND_BLUE_FLOAT__SHIFT 16
static inline uint32_t A5XX_RB_BLEND_BLUE_FLOAT(float val)
{
	return ((util_float_to_half(val)) << A5XX_RB_BLEND_BLUE_FLOAT__SHIFT) & A5XX_RB_BLEND_BLUE_FLOAT__MASK;
}

#define REG_A5XX_RB_BLEND_BLUE_F32 0x0000e1a5
#define A5XX_RB_BLEND_BLUE_F32__MASK 0xffffffff
#define A5XX_RB_BLEND_BLUE_F32__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_BLUE_F32(float val)
{
	return ((fui(val)) << A5XX_RB_BLEND_BLUE_F32__SHIFT) & A5XX_RB_BLEND_BLUE_F32__MASK;
}

#define REG_A5XX_RB_BLEND_ALPHA 0x0000e1a6
#define A5XX_RB_BLEND_ALPHA_UINT__MASK 0x000000ff
#define A5XX_RB_BLEND_ALPHA_UINT__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_ALPHA_UINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_ALPHA_UINT__SHIFT) & A5XX_RB_BLEND_ALPHA_UINT__MASK;
}
#define A5XX_RB_BLEND_ALPHA_SINT__MASK 0x0000ff00
#define A5XX_RB_BLEND_ALPHA_SINT__SHIFT 8
static inline uint32_t A5XX_RB_BLEND_ALPHA_SINT(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_ALPHA_SINT__SHIFT) & A5XX_RB_BLEND_ALPHA_SINT__MASK;
}
#define A5XX_RB_BLEND_ALPHA_FLOAT__MASK 0xffff0000
#define A5XX_RB_BLEND_ALPHA_FLOAT__SHIFT 16
static inline uint32_t A5XX_RB_BLEND_ALPHA_FLOAT(float val)
{
	return ((util_float_to_half(val)) << A5XX_RB_BLEND_ALPHA_FLOAT__SHIFT) & A5XX_RB_BLEND_ALPHA_FLOAT__MASK;
}

#define REG_A5XX_RB_BLEND_ALPHA_F32 0x0000e1a7
#define A5XX_RB_BLEND_ALPHA_F32__MASK 0xffffffff
#define A5XX_RB_BLEND_ALPHA_F32__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_ALPHA_F32(float val)
{
	return ((fui(val)) << A5XX_RB_BLEND_ALPHA_F32__SHIFT) & A5XX_RB_BLEND_ALPHA_F32__MASK;
}

#define REG_A5XX_RB_ALPHA_CONTROL 0x0000e1a8
#define A5XX_RB_ALPHA_CONTROL_ALPHA_REF__MASK 0x000000ff
#define A5XX_RB_ALPHA_CONTROL_ALPHA_REF__SHIFT 0
static inline uint32_t A5XX_RB_ALPHA_CONTROL_ALPHA_REF(uint32_t val)
{
	return ((val) << A5XX_RB_ALPHA_CONTROL_ALPHA_REF__SHIFT) & A5XX_RB_ALPHA_CONTROL_ALPHA_REF__MASK;
}
#define A5XX_RB_ALPHA_CONTROL_ALPHA_TEST 0x00000100
#define A5XX_RB_ALPHA_CONTROL_ALPHA_TEST_FUNC__MASK 0x00000e00
#define A5XX_RB_ALPHA_CONTROL_ALPHA_TEST_FUNC__SHIFT 9
static inline uint32_t A5XX_RB_ALPHA_CONTROL_ALPHA_TEST_FUNC(enum adreno_compare_func val)
{
	return ((val) << A5XX_RB_ALPHA_CONTROL_ALPHA_TEST_FUNC__SHIFT) & A5XX_RB_ALPHA_CONTROL_ALPHA_TEST_FUNC__MASK;
}

#define REG_A5XX_RB_BLEND_CNTL 0x0000e1a9
#define A5XX_RB_BLEND_CNTL_ENABLE_BLEND__MASK 0x000000ff
#define A5XX_RB_BLEND_CNTL_ENABLE_BLEND__SHIFT 0
static inline uint32_t A5XX_RB_BLEND_CNTL_ENABLE_BLEND(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_CNTL_ENABLE_BLEND__SHIFT) & A5XX_RB_BLEND_CNTL_ENABLE_BLEND__MASK;
}
#define A5XX_RB_BLEND_CNTL_INDEPENDENT_BLEND 0x00000100
#define A5XX_RB_BLEND_CNTL_SAMPLE_MASK__MASK 0xffff0000
#define A5XX_RB_BLEND_CNTL_SAMPLE_MASK__SHIFT 16
static inline uint32_t A5XX_RB_BLEND_CNTL_SAMPLE_MASK(uint32_t val)
{
	return ((val) << A5XX_RB_BLEND_CNTL_SAMPLE_MASK__SHIFT) & A5XX_RB_BLEND_CNTL_SAMPLE_MASK__MASK;
}

#define REG_A5XX_RB_DEPTH_PLANE_CNTL 0x0000e1b0
#define A5XX_RB_DEPTH_PLANE_CNTL_FRAG_WRITES_Z 0x00000001

#define REG_A5XX_RB_DEPTH_CNTL 0x0000e1b1
#define A5XX_RB_DEPTH_CNTL_Z_ENABLE 0x00000001
#define A5XX_RB_DEPTH_CNTL_Z_WRITE_ENABLE 0x00000002
#define A5XX_RB_DEPTH_CNTL_ZFUNC__MASK 0x0000001c
#define A5XX_RB_DEPTH_CNTL_ZFUNC__SHIFT 2
static inline uint32_t A5XX_RB_DEPTH_CNTL_ZFUNC(enum adreno_compare_func val)
{
	return ((val) << A5XX_RB_DEPTH_CNTL_ZFUNC__SHIFT) & A5XX_RB_DEPTH_CNTL_ZFUNC__MASK;
}
#define A5XX_RB_DEPTH_CNTL_Z_TEST_ENABLE 0x00000040

#define REG_A5XX_RB_DEPTH_BUFFER_INFO 0x0000e1b2
#define A5XX_RB_DEPTH_BUFFER_INFO_DEPTH_FORMAT__MASK 0x00000007
#define A5XX_RB_DEPTH_BUFFER_INFO_DEPTH_FORMAT__SHIFT 0
static inline uint32_t A5XX_RB_DEPTH_BUFFER_INFO_DEPTH_FORMAT(enum a5xx_depth_format val)
{
	return ((val) << A5XX_RB_DEPTH_BUFFER_INFO_DEPTH_FORMAT__SHIFT) & A5XX_RB_DEPTH_BUFFER_INFO_DEPTH_FORMAT__MASK;
}

#define REG_A5XX_RB_DEPTH_BUFFER_BASE_LO 0x0000e1b3

#define REG_A5XX_RB_DEPTH_BUFFER_BASE_HI 0x0000e1b4

#define REG_A5XX_RB_DEPTH_BUFFER_PITCH 0x0000e1b5
#define A5XX_RB_DEPTH_BUFFER_PITCH__MASK 0xffffffff
#define A5XX_RB_DEPTH_BUFFER_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_DEPTH_BUFFER_PITCH(uint32_t val)
{
	return ((val >> 5) << A5XX_RB_DEPTH_BUFFER_PITCH__SHIFT) & A5XX_RB_DEPTH_BUFFER_PITCH__MASK;
}

#define REG_A5XX_RB_DEPTH_BUFFER_ARRAY_PITCH 0x0000e1b6
#define A5XX_RB_DEPTH_BUFFER_ARRAY_PITCH__MASK 0xffffffff
#define A5XX_RB_DEPTH_BUFFER_ARRAY_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_DEPTH_BUFFER_ARRAY_PITCH(uint32_t val)
{
	return ((val >> 5) << A5XX_RB_DEPTH_BUFFER_ARRAY_PITCH__SHIFT) & A5XX_RB_DEPTH_BUFFER_ARRAY_PITCH__MASK;
}

#define REG_A5XX_RB_STENCIL_CONTROL 0x0000e1c0
#define A5XX_RB_STENCIL_CONTROL_STENCIL_ENABLE 0x00000001
#define A5XX_RB_STENCIL_CONTROL_STENCIL_ENABLE_BF 0x00000002
#define A5XX_RB_STENCIL_CONTROL_STENCIL_READ 0x00000004
#define A5XX_RB_STENCIL_CONTROL_FUNC__MASK 0x00000700
#define A5XX_RB_STENCIL_CONTROL_FUNC__SHIFT 8
static inline uint32_t A5XX_RB_STENCIL_CONTROL_FUNC(enum adreno_compare_func val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_FUNC__SHIFT) & A5XX_RB_STENCIL_CONTROL_FUNC__MASK;
}
#define A5XX_RB_STENCIL_CONTROL_FAIL__MASK 0x00003800
#define A5XX_RB_STENCIL_CONTROL_FAIL__SHIFT 11
static inline uint32_t A5XX_RB_STENCIL_CONTROL_FAIL(enum adreno_stencil_op val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_FAIL__SHIFT) & A5XX_RB_STENCIL_CONTROL_FAIL__MASK;
}
#define A5XX_RB_STENCIL_CONTROL_ZPASS__MASK 0x0001c000
#define A5XX_RB_STENCIL_CONTROL_ZPASS__SHIFT 14
static inline uint32_t A5XX_RB_STENCIL_CONTROL_ZPASS(enum adreno_stencil_op val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_ZPASS__SHIFT) & A5XX_RB_STENCIL_CONTROL_ZPASS__MASK;
}
#define A5XX_RB_STENCIL_CONTROL_ZFAIL__MASK 0x000e0000
#define A5XX_RB_STENCIL_CONTROL_ZFAIL__SHIFT 17
static inline uint32_t A5XX_RB_STENCIL_CONTROL_ZFAIL(enum adreno_stencil_op val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_ZFAIL__SHIFT) & A5XX_RB_STENCIL_CONTROL_ZFAIL__MASK;
}
#define A5XX_RB_STENCIL_CONTROL_FUNC_BF__MASK 0x00700000
#define A5XX_RB_STENCIL_CONTROL_FUNC_BF__SHIFT 20
static inline uint32_t A5XX_RB_STENCIL_CONTROL_FUNC_BF(enum adreno_compare_func val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_FUNC_BF__SHIFT) & A5XX_RB_STENCIL_CONTROL_FUNC_BF__MASK;
}
#define A5XX_RB_STENCIL_CONTROL_FAIL_BF__MASK 0x03800000
#define A5XX_RB_STENCIL_CONTROL_FAIL_BF__SHIFT 23
static inline uint32_t A5XX_RB_STENCIL_CONTROL_FAIL_BF(enum adreno_stencil_op val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_FAIL_BF__SHIFT) & A5XX_RB_STENCIL_CONTROL_FAIL_BF__MASK;
}
#define A5XX_RB_STENCIL_CONTROL_ZPASS_BF__MASK 0x1c000000
#define A5XX_RB_STENCIL_CONTROL_ZPASS_BF__SHIFT 26
static inline uint32_t A5XX_RB_STENCIL_CONTROL_ZPASS_BF(enum adreno_stencil_op val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_ZPASS_BF__SHIFT) & A5XX_RB_STENCIL_CONTROL_ZPASS_BF__MASK;
}
#define A5XX_RB_STENCIL_CONTROL_ZFAIL_BF__MASK 0xe0000000
#define A5XX_RB_STENCIL_CONTROL_ZFAIL_BF__SHIFT 29
static inline uint32_t A5XX_RB_STENCIL_CONTROL_ZFAIL_BF(enum adreno_stencil_op val)
{
	return ((val) << A5XX_RB_STENCIL_CONTROL_ZFAIL_BF__SHIFT) & A5XX_RB_STENCIL_CONTROL_ZFAIL_BF__MASK;
}

#define REG_A5XX_RB_STENCIL_INFO 0x0000e1c1
#define A5XX_RB_STENCIL_INFO_SEPARATE_STENCIL 0x00000001

#define REG_A5XX_RB_STENCIL_BASE_LO 0x0000e1c2

#define REG_A5XX_RB_STENCIL_BASE_HI 0x0000e1c3

#define REG_A5XX_RB_STENCIL_PITCH 0x0000e1c4
#define A5XX_RB_STENCIL_PITCH__MASK 0xffffffff
#define A5XX_RB_STENCIL_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_STENCIL_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_STENCIL_PITCH__SHIFT) & A5XX_RB_STENCIL_PITCH__MASK;
}

#define REG_A5XX_RB_STENCIL_ARRAY_PITCH 0x0000e1c5
#define A5XX_RB_STENCIL_ARRAY_PITCH__MASK 0xffffffff
#define A5XX_RB_STENCIL_ARRAY_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_STENCIL_ARRAY_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_STENCIL_ARRAY_PITCH__SHIFT) & A5XX_RB_STENCIL_ARRAY_PITCH__MASK;
}

#define REG_A5XX_RB_STENCILREFMASK 0x0000e1c6
#define A5XX_RB_STENCILREFMASK_STENCILREF__MASK 0x000000ff
#define A5XX_RB_STENCILREFMASK_STENCILREF__SHIFT 0
static inline uint32_t A5XX_RB_STENCILREFMASK_STENCILREF(uint32_t val)
{
	return ((val) << A5XX_RB_STENCILREFMASK_STENCILREF__SHIFT) & A5XX_RB_STENCILREFMASK_STENCILREF__MASK;
}
#define A5XX_RB_STENCILREFMASK_STENCILMASK__MASK 0x0000ff00
#define A5XX_RB_STENCILREFMASK_STENCILMASK__SHIFT 8
static inline uint32_t A5XX_RB_STENCILREFMASK_STENCILMASK(uint32_t val)
{
	return ((val) << A5XX_RB_STENCILREFMASK_STENCILMASK__SHIFT) & A5XX_RB_STENCILREFMASK_STENCILMASK__MASK;
}
#define A5XX_RB_STENCILREFMASK_STENCILWRITEMASK__MASK 0x00ff0000
#define A5XX_RB_STENCILREFMASK_STENCILWRITEMASK__SHIFT 16
static inline uint32_t A5XX_RB_STENCILREFMASK_STENCILWRITEMASK(uint32_t val)
{
	return ((val) << A5XX_RB_STENCILREFMASK_STENCILWRITEMASK__SHIFT) & A5XX_RB_STENCILREFMASK_STENCILWRITEMASK__MASK;
}

#define REG_A5XX_UNKNOWN_E1C7 0x0000e1c7

#define REG_A5XX_RB_WINDOW_OFFSET 0x0000e1d0
#define A5XX_RB_WINDOW_OFFSET_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_RB_WINDOW_OFFSET_X__MASK 0x00007fff
#define A5XX_RB_WINDOW_OFFSET_X__SHIFT 0
static inline uint32_t A5XX_RB_WINDOW_OFFSET_X(uint32_t val)
{
	return ((val) << A5XX_RB_WINDOW_OFFSET_X__SHIFT) & A5XX_RB_WINDOW_OFFSET_X__MASK;
}
#define A5XX_RB_WINDOW_OFFSET_Y__MASK 0x7fff0000
#define A5XX_RB_WINDOW_OFFSET_Y__SHIFT 16
static inline uint32_t A5XX_RB_WINDOW_OFFSET_Y(uint32_t val)
{
	return ((val) << A5XX_RB_WINDOW_OFFSET_Y__SHIFT) & A5XX_RB_WINDOW_OFFSET_Y__MASK;
}

#define REG_A5XX_RB_BLIT_CNTL 0x0000e210
#define A5XX_RB_BLIT_CNTL_BUF__MASK 0x0000003f
#define A5XX_RB_BLIT_CNTL_BUF__SHIFT 0
static inline uint32_t A5XX_RB_BLIT_CNTL_BUF(enum a5xx_blit_buf val)
{
	return ((val) << A5XX_RB_BLIT_CNTL_BUF__SHIFT) & A5XX_RB_BLIT_CNTL_BUF__MASK;
}

#define REG_A5XX_RB_RESOLVE_CNTL_1 0x0000e211
#define A5XX_RB_RESOLVE_CNTL_1_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_RB_RESOLVE_CNTL_1_X__MASK 0x00007fff
#define A5XX_RB_RESOLVE_CNTL_1_X__SHIFT 0
static inline uint32_t A5XX_RB_RESOLVE_CNTL_1_X(uint32_t val)
{
	return ((val) << A5XX_RB_RESOLVE_CNTL_1_X__SHIFT) & A5XX_RB_RESOLVE_CNTL_1_X__MASK;
}
#define A5XX_RB_RESOLVE_CNTL_1_Y__MASK 0x7fff0000
#define A5XX_RB_RESOLVE_CNTL_1_Y__SHIFT 16
static inline uint32_t A5XX_RB_RESOLVE_CNTL_1_Y(uint32_t val)
{
	return ((val) << A5XX_RB_RESOLVE_CNTL_1_Y__SHIFT) & A5XX_RB_RESOLVE_CNTL_1_Y__MASK;
}

#define REG_A5XX_RB_RESOLVE_CNTL_2 0x0000e212
#define A5XX_RB_RESOLVE_CNTL_2_WINDOW_OFFSET_DISABLE 0x80000000
#define A5XX_RB_RESOLVE_CNTL_2_X__MASK 0x00007fff
#define A5XX_RB_RESOLVE_CNTL_2_X__SHIFT 0
static inline uint32_t A5XX_RB_RESOLVE_CNTL_2_X(uint32_t val)
{
	return ((val) << A5XX_RB_RESOLVE_CNTL_2_X__SHIFT) & A5XX_RB_RESOLVE_CNTL_2_X__MASK;
}
#define A5XX_RB_RESOLVE_CNTL_2_Y__MASK 0x7fff0000
#define A5XX_RB_RESOLVE_CNTL_2_Y__SHIFT 16
static inline uint32_t A5XX_RB_RESOLVE_CNTL_2_Y(uint32_t val)
{
	return ((val) << A5XX_RB_RESOLVE_CNTL_2_Y__SHIFT) & A5XX_RB_RESOLVE_CNTL_2_Y__MASK;
}

#define REG_A5XX_RB_RESOLVE_CNTL_3 0x0000e213

#define REG_A5XX_RB_BLIT_DST_LO 0x0000e214

#define REG_A5XX_RB_BLIT_DST_HI 0x0000e215

#define REG_A5XX_RB_BLIT_DST_PITCH 0x0000e216
#define A5XX_RB_BLIT_DST_PITCH__MASK 0xffffffff
#define A5XX_RB_BLIT_DST_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_BLIT_DST_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_BLIT_DST_PITCH__SHIFT) & A5XX_RB_BLIT_DST_PITCH__MASK;
}

#define REG_A5XX_RB_BLIT_DST_ARRAY_PITCH 0x0000e217
#define A5XX_RB_BLIT_DST_ARRAY_PITCH__MASK 0xffffffff
#define A5XX_RB_BLIT_DST_ARRAY_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_BLIT_DST_ARRAY_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_BLIT_DST_ARRAY_PITCH__SHIFT) & A5XX_RB_BLIT_DST_ARRAY_PITCH__MASK;
}

#define REG_A5XX_RB_CLEAR_COLOR_DW0 0x0000e218

#define REG_A5XX_RB_CLEAR_COLOR_DW1 0x0000e219

#define REG_A5XX_RB_CLEAR_COLOR_DW2 0x0000e21a

#define REG_A5XX_RB_CLEAR_COLOR_DW3 0x0000e21b

#define REG_A5XX_RB_CLEAR_CNTL 0x0000e21c
#define A5XX_RB_CLEAR_CNTL_FAST_CLEAR 0x00000002
#define A5XX_RB_CLEAR_CNTL_MASK__MASK 0x000000f0
#define A5XX_RB_CLEAR_CNTL_MASK__SHIFT 4
static inline uint32_t A5XX_RB_CLEAR_CNTL_MASK(uint32_t val)
{
	return ((val) << A5XX_RB_CLEAR_CNTL_MASK__SHIFT) & A5XX_RB_CLEAR_CNTL_MASK__MASK;
}

#define REG_A5XX_RB_DEPTH_FLAG_BUFFER_BASE_LO 0x0000e240

#define REG_A5XX_RB_DEPTH_FLAG_BUFFER_BASE_HI 0x0000e241

#define REG_A5XX_RB_DEPTH_FLAG_BUFFER_PITCH 0x0000e242

static inline uint32_t REG_A5XX_RB_MRT_FLAG_BUFFER(uint32_t i0) { return 0x0000e243 + 0x4*i0; }

static inline uint32_t REG_A5XX_RB_MRT_FLAG_BUFFER_ADDR_LO(uint32_t i0) { return 0x0000e243 + 0x4*i0; }

static inline uint32_t REG_A5XX_RB_MRT_FLAG_BUFFER_ADDR_HI(uint32_t i0) { return 0x0000e244 + 0x4*i0; }

static inline uint32_t REG_A5XX_RB_MRT_FLAG_BUFFER_PITCH(uint32_t i0) { return 0x0000e245 + 0x4*i0; }
#define A5XX_RB_MRT_FLAG_BUFFER_PITCH__MASK 0xffffffff
#define A5XX_RB_MRT_FLAG_BUFFER_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_MRT_FLAG_BUFFER_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_MRT_FLAG_BUFFER_PITCH__SHIFT) & A5XX_RB_MRT_FLAG_BUFFER_PITCH__MASK;
}

static inline uint32_t REG_A5XX_RB_MRT_FLAG_BUFFER_ARRAY_PITCH(uint32_t i0) { return 0x0000e246 + 0x4*i0; }
#define A5XX_RB_MRT_FLAG_BUFFER_ARRAY_PITCH__MASK 0xffffffff
#define A5XX_RB_MRT_FLAG_BUFFER_ARRAY_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_MRT_FLAG_BUFFER_ARRAY_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_MRT_FLAG_BUFFER_ARRAY_PITCH__SHIFT) & A5XX_RB_MRT_FLAG_BUFFER_ARRAY_PITCH__MASK;
}

#define REG_A5XX_RB_BLIT_FLAG_DST_LO 0x0000e263

#define REG_A5XX_RB_BLIT_FLAG_DST_HI 0x0000e264

#define REG_A5XX_RB_BLIT_FLAG_DST_PITCH 0x0000e265
#define A5XX_RB_BLIT_FLAG_DST_PITCH__MASK 0xffffffff
#define A5XX_RB_BLIT_FLAG_DST_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_BLIT_FLAG_DST_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_BLIT_FLAG_DST_PITCH__SHIFT) & A5XX_RB_BLIT_FLAG_DST_PITCH__MASK;
}

#define REG_A5XX_RB_BLIT_FLAG_DST_ARRAY_PITCH 0x0000e266
#define A5XX_RB_BLIT_FLAG_DST_ARRAY_PITCH__MASK 0xffffffff
#define A5XX_RB_BLIT_FLAG_DST_ARRAY_PITCH__SHIFT 0
static inline uint32_t A5XX_RB_BLIT_FLAG_DST_ARRAY_PITCH(uint32_t val)
{
	return ((val >> 6) << A5XX_RB_BLIT_FLAG_DST_ARRAY_PITCH__SHIFT) & A5XX_RB_BLIT_FLAG_DST_ARRAY_PITCH__MASK;
}

#define REG_A5XX_VPC_CNTL_0 0x0000e280
#define A5XX_VPC_CNTL_0_STRIDE_IN_VPC__MASK 0x0000007f
#define A5XX_VPC_CNTL_0_STRIDE_IN_VPC__SHIFT 0
static inline uint32_t A5XX_VPC_CNTL_0_STRIDE_IN_VPC(uint32_t val)
{
	return ((val) << A5XX_VPC_CNTL_0_STRIDE_IN_VPC__SHIFT) & A5XX_VPC_CNTL_0_STRIDE_IN_VPC__MASK;
}
#define A5XX_VPC_CNTL_0_VARYING 0x00000800

static inline uint32_t REG_A5XX_VPC_VARYING_INTERP(uint32_t i0) { return 0x0000e282 + 0x1*i0; }

static inline uint32_t REG_A5XX_VPC_VARYING_INTERP_MODE(uint32_t i0) { return 0x0000e282 + 0x1*i0; }

static inline uint32_t REG_A5XX_VPC_VARYING_PS_REPL(uint32_t i0) { return 0x0000e28a + 0x1*i0; }

static inline uint32_t REG_A5XX_VPC_VARYING_PS_REPL_MODE(uint32_t i0) { return 0x0000e28a + 0x1*i0; }

#define REG_A5XX_UNKNOWN_E292 0x0000e292

#define REG_A5XX_UNKNOWN_E293 0x0000e293

static inline uint32_t REG_A5XX_VPC_VAR(uint32_t i0) { return 0x0000e294 + 0x1*i0; }

static inline uint32_t REG_A5XX_VPC_VAR_DISABLE(uint32_t i0) { return 0x0000e294 + 0x1*i0; }

#define REG_A5XX_VPC_GS_SIV_CNTL 0x0000e298

#define REG_A5XX_UNKNOWN_E29A 0x0000e29a

#define REG_A5XX_VPC_PACK 0x0000e29d
#define A5XX_VPC_PACK_NUMNONPOSVAR__MASK 0x000000ff
#define A5XX_VPC_PACK_NUMNONPOSVAR__SHIFT 0
static inline uint32_t A5XX_VPC_PACK_NUMNONPOSVAR(uint32_t val)
{
	return ((val) << A5XX_VPC_PACK_NUMNONPOSVAR__SHIFT) & A5XX_VPC_PACK_NUMNONPOSVAR__MASK;
}

#define REG_A5XX_VPC_FS_PRIMITIVEID_CNTL 0x0000e2a0

#define REG_A5XX_UNKNOWN_E2A1 0x0000e2a1

#define REG_A5XX_VPC_SO_OVERRIDE 0x0000e2a2

#define REG_A5XX_VPC_SO_BUFFER_BASE_LO_0 0x0000e2a7

#define REG_A5XX_VPC_SO_BUFFER_BASE_HI_0 0x0000e2a8

#define REG_A5XX_VPC_SO_BUFFER_SIZE_0 0x0000e2a9

#define REG_A5XX_UNKNOWN_E2AB 0x0000e2ab

#define REG_A5XX_VPC_SO_FLUSH_BASE_LO_0 0x0000e2ac

#define REG_A5XX_VPC_SO_FLUSH_BASE_HI_0 0x0000e2ad

#define REG_A5XX_UNKNOWN_E2AE 0x0000e2ae

#define REG_A5XX_UNKNOWN_E2B2 0x0000e2b2

#define REG_A5XX_UNKNOWN_E2B9 0x0000e2b9

#define REG_A5XX_UNKNOWN_E2C0 0x0000e2c0

#define REG_A5XX_PC_PRIMITIVE_CNTL 0x0000e384
#define A5XX_PC_PRIMITIVE_CNTL_STRIDE_IN_VPC__MASK 0x0000007f
#define A5XX_PC_PRIMITIVE_CNTL_STRIDE_IN_VPC__SHIFT 0
static inline uint32_t A5XX_PC_PRIMITIVE_CNTL_STRIDE_IN_VPC(uint32_t val)
{
	return ((val) << A5XX_PC_PRIMITIVE_CNTL_STRIDE_IN_VPC__SHIFT) & A5XX_PC_PRIMITIVE_CNTL_STRIDE_IN_VPC__MASK;
}

#define REG_A5XX_PC_PRIM_VTX_CNTL 0x0000e385
#define A5XX_PC_PRIM_VTX_CNTL_PSIZE 0x00000800

#define REG_A5XX_PC_RASTER_CNTL 0x0000e388

#define REG_A5XX_UNKNOWN_E389 0x0000e389

#define REG_A5XX_PC_RESTART_INDEX 0x0000e38c

#define REG_A5XX_UNKNOWN_E38D 0x0000e38d

#define REG_A5XX_PC_GS_PARAM 0x0000e38e

#define REG_A5XX_PC_HS_PARAM 0x0000e38f

#define REG_A5XX_PC_POWER_CNTL 0x0000e3b0

2894#define REG_A5XX_VFD_CONTROL_0 0x0000e400
2895#define A5XX_VFD_CONTROL_0_VTXCNT__MASK 0x0000003f
2896#define A5XX_VFD_CONTROL_0_VTXCNT__SHIFT 0
2897static inline uint32_t A5XX_VFD_CONTROL_0_VTXCNT(uint32_t val)
2898{
2899 return ((val) << A5XX_VFD_CONTROL_0_VTXCNT__SHIFT) & A5XX_VFD_CONTROL_0_VTXCNT__MASK;
2900}
2901
2902#define REG_A5XX_VFD_CONTROL_1 0x0000e401
2903#define A5XX_VFD_CONTROL_1_REGID4INST__MASK 0x0000ff00
2904#define A5XX_VFD_CONTROL_1_REGID4INST__SHIFT 8
2905static inline uint32_t A5XX_VFD_CONTROL_1_REGID4INST(uint32_t val)
2906{
2907 return ((val) << A5XX_VFD_CONTROL_1_REGID4INST__SHIFT) & A5XX_VFD_CONTROL_1_REGID4INST__MASK;
2908}
2909#define A5XX_VFD_CONTROL_1_REGID4VTX__MASK 0x00ff0000
2910#define A5XX_VFD_CONTROL_1_REGID4VTX__SHIFT 16
2911static inline uint32_t A5XX_VFD_CONTROL_1_REGID4VTX(uint32_t val)
2912{
2913 return ((val) << A5XX_VFD_CONTROL_1_REGID4VTX__SHIFT) & A5XX_VFD_CONTROL_1_REGID4VTX__MASK;
2914}
2915
2916#define REG_A5XX_VFD_CONTROL_2 0x0000e402
2917
2918#define REG_A5XX_VFD_CONTROL_3 0x0000e403
2919
2920#define REG_A5XX_VFD_CONTROL_4 0x0000e404
2921
2922#define REG_A5XX_VFD_CONTROL_5 0x0000e405
2923
2924#define REG_A5XX_VFD_INDEX_OFFSET 0x0000e408
2925
2926#define REG_A5XX_VFD_INSTANCE_START_OFFSET 0x0000e409
2927
2928static inline uint32_t REG_A5XX_VFD_FETCH(uint32_t i0) { return 0x0000e40a + 0x4*i0; }
2929
2930static inline uint32_t REG_A5XX_VFD_FETCH_BASE_LO(uint32_t i0) { return 0x0000e40a + 0x4*i0; }
2931
2932static inline uint32_t REG_A5XX_VFD_FETCH_BASE_HI(uint32_t i0) { return 0x0000e40b + 0x4*i0; }
2933
2934static inline uint32_t REG_A5XX_VFD_FETCH_SIZE(uint32_t i0) { return 0x0000e40c + 0x4*i0; }
2935
2936static inline uint32_t REG_A5XX_VFD_FETCH_STRIDE(uint32_t i0) { return 0x0000e40d + 0x4*i0; }
2937
2938static inline uint32_t REG_A5XX_VFD_DECODE(uint32_t i0) { return 0x0000e48a + 0x2*i0; }
2939
2940static inline uint32_t REG_A5XX_VFD_DECODE_INSTR(uint32_t i0) { return 0x0000e48a + 0x2*i0; }
2941#define A5XX_VFD_DECODE_INSTR_IDX__MASK 0x0000001f
2942#define A5XX_VFD_DECODE_INSTR_IDX__SHIFT 0
2943static inline uint32_t A5XX_VFD_DECODE_INSTR_IDX(uint32_t val)
2944{
2945 return ((val) << A5XX_VFD_DECODE_INSTR_IDX__SHIFT) & A5XX_VFD_DECODE_INSTR_IDX__MASK;
2946}
2947#define A5XX_VFD_DECODE_INSTR_FORMAT__MASK 0x3ff00000
2948#define A5XX_VFD_DECODE_INSTR_FORMAT__SHIFT 20
2949static inline uint32_t A5XX_VFD_DECODE_INSTR_FORMAT(enum a5xx_vtx_fmt val)
2950{
2951 return ((val) << A5XX_VFD_DECODE_INSTR_FORMAT__SHIFT) & A5XX_VFD_DECODE_INSTR_FORMAT__MASK;
2952}
2953#define A5XX_VFD_DECODE_INSTR_SWAP__MASK 0xc0000000
2954#define A5XX_VFD_DECODE_INSTR_SWAP__SHIFT 30
2955static inline uint32_t A5XX_VFD_DECODE_INSTR_SWAP(enum a3xx_color_swap val)
2956{
2957 return ((val) << A5XX_VFD_DECODE_INSTR_SWAP__SHIFT) & A5XX_VFD_DECODE_INSTR_SWAP__MASK;
2958}

static inline uint32_t REG_A5XX_VFD_DECODE_STEP_RATE(uint32_t i0) { return 0x0000e48b + 0x2*i0; }

static inline uint32_t REG_A5XX_VFD_DEST_CNTL(uint32_t i0) { return 0x0000e4ca + 0x1*i0; }

static inline uint32_t REG_A5XX_VFD_DEST_CNTL_INSTR(uint32_t i0) { return 0x0000e4ca + 0x1*i0; }
#define A5XX_VFD_DEST_CNTL_INSTR_WRITEMASK__MASK		0x0000000f
#define A5XX_VFD_DEST_CNTL_INSTR_WRITEMASK__SHIFT		0
static inline uint32_t A5XX_VFD_DEST_CNTL_INSTR_WRITEMASK(uint32_t val)
{
	return ((val) << A5XX_VFD_DEST_CNTL_INSTR_WRITEMASK__SHIFT) & A5XX_VFD_DEST_CNTL_INSTR_WRITEMASK__MASK;
}
#define A5XX_VFD_DEST_CNTL_INSTR_REGID__MASK			0x00000ff0
#define A5XX_VFD_DEST_CNTL_INSTR_REGID__SHIFT			4
static inline uint32_t A5XX_VFD_DEST_CNTL_INSTR_REGID(uint32_t val)
{
	return ((val) << A5XX_VFD_DEST_CNTL_INSTR_REGID__SHIFT) & A5XX_VFD_DEST_CNTL_INSTR_REGID__MASK;
}

#define REG_A5XX_VFD_POWER_CNTL					0x0000e4f0

#define REG_A5XX_SP_SP_CNTL					0x0000e580

#define REG_A5XX_SP_VS_CONTROL_REG				0x0000e584
#define A5XX_SP_VS_CONTROL_REG_ENABLED				0x00000001
#define A5XX_SP_VS_CONTROL_REG_CONSTOBJECTOFFSET__MASK		0x000000fe
#define A5XX_SP_VS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT		1
static inline uint32_t A5XX_SP_VS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_VS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_SP_VS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_SP_VS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_SP_VS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_SP_VS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_VS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_SP_VS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_SP_FS_CONTROL_REG				0x0000e585
#define A5XX_SP_FS_CONTROL_REG_ENABLED				0x00000001
#define A5XX_SP_FS_CONTROL_REG_CONSTOBJECTOFFSET__MASK		0x000000fe
#define A5XX_SP_FS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT		1
static inline uint32_t A5XX_SP_FS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_FS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_SP_FS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_SP_FS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_SP_FS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_SP_FS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_FS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_SP_FS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_SP_HS_CONTROL_REG				0x0000e586
#define A5XX_SP_HS_CONTROL_REG_ENABLED				0x00000001
#define A5XX_SP_HS_CONTROL_REG_CONSTOBJECTOFFSET__MASK		0x000000fe
#define A5XX_SP_HS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT		1
static inline uint32_t A5XX_SP_HS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_HS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_SP_HS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_SP_HS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_SP_HS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_SP_HS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_HS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_SP_HS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_SP_DS_CONTROL_REG				0x0000e587
#define A5XX_SP_DS_CONTROL_REG_ENABLED				0x00000001
#define A5XX_SP_DS_CONTROL_REG_CONSTOBJECTOFFSET__MASK		0x000000fe
#define A5XX_SP_DS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT		1
static inline uint32_t A5XX_SP_DS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_DS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_SP_DS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_SP_DS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_SP_DS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_SP_DS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_DS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_SP_DS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_SP_GS_CONTROL_REG				0x0000e588
#define A5XX_SP_GS_CONTROL_REG_ENABLED				0x00000001
#define A5XX_SP_GS_CONTROL_REG_CONSTOBJECTOFFSET__MASK		0x000000fe
#define A5XX_SP_GS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT		1
static inline uint32_t A5XX_SP_GS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_GS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_SP_GS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_SP_GS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_SP_GS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_SP_GS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_SP_GS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_SP_GS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_SP_CS_CONFIG					0x0000e589

#define REG_A5XX_SP_VS_CONFIG_MAX_CONST				0x0000e58a

#define REG_A5XX_SP_FS_CONFIG_MAX_CONST				0x0000e58b

#define REG_A5XX_SP_VS_CTRL_REG0				0x0000e590
#define A5XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__MASK		0x000003f0
#define A5XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT		4
static inline uint32_t A5XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT(uint32_t val)
{
	return ((val) << A5XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT) & A5XX_SP_VS_CTRL_REG0_HALFREGFOOTPRINT__MASK;
}
#define A5XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__MASK		0x0000fc00
#define A5XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT		10
static inline uint32_t A5XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT(uint32_t val)
{
	return ((val) << A5XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT) & A5XX_SP_VS_CTRL_REG0_FULLREGFOOTPRINT__MASK;
}
#define A5XX_SP_VS_CTRL_REG0_VARYING				0x00010000
#define A5XX_SP_VS_CTRL_REG0_PIXLODENABLE			0x00100000

#define REG_A5XX_SP_PRIMITIVE_CNTL				0x0000e592
#define A5XX_SP_PRIMITIVE_CNTL_STRIDE_IN_VPC__MASK		0x0000001f
#define A5XX_SP_PRIMITIVE_CNTL_STRIDE_IN_VPC__SHIFT		0
static inline uint32_t A5XX_SP_PRIMITIVE_CNTL_STRIDE_IN_VPC(uint32_t val)
{
	return ((val >> 2) << A5XX_SP_PRIMITIVE_CNTL_STRIDE_IN_VPC__SHIFT) & A5XX_SP_PRIMITIVE_CNTL_STRIDE_IN_VPC__MASK;
}

static inline uint32_t REG_A5XX_SP_VS_OUT(uint32_t i0) { return 0x0000e593 + 0x1*i0; }

static inline uint32_t REG_A5XX_SP_VS_OUT_REG(uint32_t i0) { return 0x0000e593 + 0x1*i0; }
#define A5XX_SP_VS_OUT_REG_A_REGID__MASK			0x000000ff
#define A5XX_SP_VS_OUT_REG_A_REGID__SHIFT			0
static inline uint32_t A5XX_SP_VS_OUT_REG_A_REGID(uint32_t val)
{
	return ((val) << A5XX_SP_VS_OUT_REG_A_REGID__SHIFT) & A5XX_SP_VS_OUT_REG_A_REGID__MASK;
}
#define A5XX_SP_VS_OUT_REG_A_COMPMASK__MASK			0x00000f00
#define A5XX_SP_VS_OUT_REG_A_COMPMASK__SHIFT			8
static inline uint32_t A5XX_SP_VS_OUT_REG_A_COMPMASK(uint32_t val)
{
	return ((val) << A5XX_SP_VS_OUT_REG_A_COMPMASK__SHIFT) & A5XX_SP_VS_OUT_REG_A_COMPMASK__MASK;
}
#define A5XX_SP_VS_OUT_REG_B_REGID__MASK			0x00ff0000
#define A5XX_SP_VS_OUT_REG_B_REGID__SHIFT			16
static inline uint32_t A5XX_SP_VS_OUT_REG_B_REGID(uint32_t val)
{
	return ((val) << A5XX_SP_VS_OUT_REG_B_REGID__SHIFT) & A5XX_SP_VS_OUT_REG_B_REGID__MASK;
}
#define A5XX_SP_VS_OUT_REG_B_COMPMASK__MASK			0x0f000000
#define A5XX_SP_VS_OUT_REG_B_COMPMASK__SHIFT			24
static inline uint32_t A5XX_SP_VS_OUT_REG_B_COMPMASK(uint32_t val)
{
	return ((val) << A5XX_SP_VS_OUT_REG_B_COMPMASK__SHIFT) & A5XX_SP_VS_OUT_REG_B_COMPMASK__MASK;
}

static inline uint32_t REG_A5XX_SP_VS_VPC_DST(uint32_t i0) { return 0x0000e5a3 + 0x1*i0; }

static inline uint32_t REG_A5XX_SP_VS_VPC_DST_REG(uint32_t i0) { return 0x0000e5a3 + 0x1*i0; }
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC0__MASK			0x000000ff
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC0__SHIFT			0
static inline uint32_t A5XX_SP_VS_VPC_DST_REG_OUTLOC0(uint32_t val)
{
	return ((val) << A5XX_SP_VS_VPC_DST_REG_OUTLOC0__SHIFT) & A5XX_SP_VS_VPC_DST_REG_OUTLOC0__MASK;
}
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC1__MASK			0x0000ff00
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC1__SHIFT			8
static inline uint32_t A5XX_SP_VS_VPC_DST_REG_OUTLOC1(uint32_t val)
{
	return ((val) << A5XX_SP_VS_VPC_DST_REG_OUTLOC1__SHIFT) & A5XX_SP_VS_VPC_DST_REG_OUTLOC1__MASK;
}
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC2__MASK			0x00ff0000
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC2__SHIFT			16
static inline uint32_t A5XX_SP_VS_VPC_DST_REG_OUTLOC2(uint32_t val)
{
	return ((val) << A5XX_SP_VS_VPC_DST_REG_OUTLOC2__SHIFT) & A5XX_SP_VS_VPC_DST_REG_OUTLOC2__MASK;
}
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC3__MASK			0xff000000
#define A5XX_SP_VS_VPC_DST_REG_OUTLOC3__SHIFT			24
static inline uint32_t A5XX_SP_VS_VPC_DST_REG_OUTLOC3(uint32_t val)
{
	return ((val) << A5XX_SP_VS_VPC_DST_REG_OUTLOC3__SHIFT) & A5XX_SP_VS_VPC_DST_REG_OUTLOC3__MASK;
}

#define REG_A5XX_UNKNOWN_E5AB					0x0000e5ab

#define REG_A5XX_SP_VS_OBJ_START_LO				0x0000e5ac

#define REG_A5XX_SP_VS_OBJ_START_HI				0x0000e5ad

#define REG_A5XX_SP_FS_CTRL_REG0				0x0000e5c0
#define A5XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__MASK		0x000003f0
#define A5XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT		4
static inline uint32_t A5XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT(uint32_t val)
{
	return ((val) << A5XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__SHIFT) & A5XX_SP_FS_CTRL_REG0_HALFREGFOOTPRINT__MASK;
}
#define A5XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__MASK		0x0000fc00
#define A5XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT		10
static inline uint32_t A5XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT(uint32_t val)
{
	return ((val) << A5XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__SHIFT) & A5XX_SP_FS_CTRL_REG0_FULLREGFOOTPRINT__MASK;
}
#define A5XX_SP_FS_CTRL_REG0_VARYING				0x00010000
#define A5XX_SP_FS_CTRL_REG0_PIXLODENABLE			0x00100000

#define REG_A5XX_UNKNOWN_E5C2					0x0000e5c2

#define REG_A5XX_SP_FS_OBJ_START_LO				0x0000e5c3

#define REG_A5XX_SP_FS_OBJ_START_HI				0x0000e5c4

#define REG_A5XX_SP_BLEND_CNTL					0x0000e5c9

#define REG_A5XX_SP_FS_OUTPUT_CNTL				0x0000e5ca
#define A5XX_SP_FS_OUTPUT_CNTL_MRT__MASK			0x0000000f
#define A5XX_SP_FS_OUTPUT_CNTL_MRT__SHIFT			0
static inline uint32_t A5XX_SP_FS_OUTPUT_CNTL_MRT(uint32_t val)
{
	return ((val) << A5XX_SP_FS_OUTPUT_CNTL_MRT__SHIFT) & A5XX_SP_FS_OUTPUT_CNTL_MRT__MASK;
}
#define A5XX_SP_FS_OUTPUT_CNTL_DEPTH_REGID__MASK		0x00001fe0
#define A5XX_SP_FS_OUTPUT_CNTL_DEPTH_REGID__SHIFT		5
static inline uint32_t A5XX_SP_FS_OUTPUT_CNTL_DEPTH_REGID(uint32_t val)
{
	return ((val) << A5XX_SP_FS_OUTPUT_CNTL_DEPTH_REGID__SHIFT) & A5XX_SP_FS_OUTPUT_CNTL_DEPTH_REGID__MASK;
}
#define A5XX_SP_FS_OUTPUT_CNTL_SAMPLEMASK_REGID__MASK		0x001fe000
#define A5XX_SP_FS_OUTPUT_CNTL_SAMPLEMASK_REGID__SHIFT		13
static inline uint32_t A5XX_SP_FS_OUTPUT_CNTL_SAMPLEMASK_REGID(uint32_t val)
{
	return ((val) << A5XX_SP_FS_OUTPUT_CNTL_SAMPLEMASK_REGID__SHIFT) & A5XX_SP_FS_OUTPUT_CNTL_SAMPLEMASK_REGID__MASK;
}

static inline uint32_t REG_A5XX_SP_FS_OUTPUT(uint32_t i0) { return 0x0000e5cb + 0x1*i0; }

static inline uint32_t REG_A5XX_SP_FS_OUTPUT_REG(uint32_t i0) { return 0x0000e5cb + 0x1*i0; }
#define A5XX_SP_FS_OUTPUT_REG_REGID__MASK			0x000000ff
#define A5XX_SP_FS_OUTPUT_REG_REGID__SHIFT			0
static inline uint32_t A5XX_SP_FS_OUTPUT_REG_REGID(uint32_t val)
{
	return ((val) << A5XX_SP_FS_OUTPUT_REG_REGID__SHIFT) & A5XX_SP_FS_OUTPUT_REG_REGID__MASK;
}
#define A5XX_SP_FS_OUTPUT_REG_HALF_PRECISION			0x00000100

static inline uint32_t REG_A5XX_SP_FS_MRT(uint32_t i0) { return 0x0000e5d3 + 0x1*i0; }

static inline uint32_t REG_A5XX_SP_FS_MRT_REG(uint32_t i0) { return 0x0000e5d3 + 0x1*i0; }
#define A5XX_SP_FS_MRT_REG_COLOR_FORMAT__MASK			0x000000ff
#define A5XX_SP_FS_MRT_REG_COLOR_FORMAT__SHIFT			0
static inline uint32_t A5XX_SP_FS_MRT_REG_COLOR_FORMAT(enum a5xx_color_fmt val)
{
	return ((val) << A5XX_SP_FS_MRT_REG_COLOR_FORMAT__SHIFT) & A5XX_SP_FS_MRT_REG_COLOR_FORMAT__MASK;
}

#define REG_A5XX_UNKNOWN_E5DB					0x0000e5db

#define REG_A5XX_SP_CS_CNTL_0					0x0000e5f0

#define REG_A5XX_UNKNOWN_E600					0x0000e600

#define REG_A5XX_UNKNOWN_E640					0x0000e640

#define REG_A5XX_TPL1_TP_RAS_MSAA_CNTL				0x0000e704
#define A5XX_TPL1_TP_RAS_MSAA_CNTL_SAMPLES__MASK		0x00000003
#define A5XX_TPL1_TP_RAS_MSAA_CNTL_SAMPLES__SHIFT		0
static inline uint32_t A5XX_TPL1_TP_RAS_MSAA_CNTL_SAMPLES(enum a3xx_msaa_samples val)
{
	return ((val) << A5XX_TPL1_TP_RAS_MSAA_CNTL_SAMPLES__SHIFT) & A5XX_TPL1_TP_RAS_MSAA_CNTL_SAMPLES__MASK;
}

#define REG_A5XX_TPL1_TP_DEST_MSAA_CNTL				0x0000e705
#define A5XX_TPL1_TP_DEST_MSAA_CNTL_SAMPLES__MASK		0x00000003
#define A5XX_TPL1_TP_DEST_MSAA_CNTL_SAMPLES__SHIFT		0
static inline uint32_t A5XX_TPL1_TP_DEST_MSAA_CNTL_SAMPLES(enum a3xx_msaa_samples val)
{
	return ((val) << A5XX_TPL1_TP_DEST_MSAA_CNTL_SAMPLES__SHIFT) & A5XX_TPL1_TP_DEST_MSAA_CNTL_SAMPLES__MASK;
}
#define A5XX_TPL1_TP_DEST_MSAA_CNTL_MSAA_DISABLE		0x00000004

#define REG_A5XX_TPL1_VS_TEX_COUNT				0x0000e700

#define REG_A5XX_TPL1_VS_TEX_SAMP_LO				0x0000e722

#define REG_A5XX_TPL1_VS_TEX_SAMP_HI				0x0000e723

#define REG_A5XX_TPL1_VS_TEX_CONST_LO				0x0000e72a

#define REG_A5XX_TPL1_VS_TEX_CONST_HI				0x0000e72b

#define REG_A5XX_TPL1_FS_TEX_COUNT				0x0000e750

#define REG_A5XX_TPL1_FS_TEX_SAMP_LO				0x0000e75a

#define REG_A5XX_TPL1_FS_TEX_SAMP_HI				0x0000e75b

#define REG_A5XX_TPL1_FS_TEX_CONST_LO				0x0000e75e

#define REG_A5XX_TPL1_FS_TEX_CONST_HI				0x0000e75f

#define REG_A5XX_TPL1_TP_FS_ROTATION_CNTL			0x0000e764

#define REG_A5XX_HLSQ_CONTROL_0_REG				0x0000e784

#define REG_A5XX_HLSQ_CONTROL_1_REG				0x0000e785
#define A5XX_HLSQ_CONTROL_1_REG_PRIMALLOCTHRESHOLD__MASK	0x0000003f
#define A5XX_HLSQ_CONTROL_1_REG_PRIMALLOCTHRESHOLD__SHIFT	0
static inline uint32_t A5XX_HLSQ_CONTROL_1_REG_PRIMALLOCTHRESHOLD(uint32_t val)
{
	return ((val) << A5XX_HLSQ_CONTROL_1_REG_PRIMALLOCTHRESHOLD__SHIFT) & A5XX_HLSQ_CONTROL_1_REG_PRIMALLOCTHRESHOLD__MASK;
}

#define REG_A5XX_HLSQ_CONTROL_2_REG				0x0000e786
#define A5XX_HLSQ_CONTROL_2_REG_FACEREGID__MASK			0x000000ff
#define A5XX_HLSQ_CONTROL_2_REG_FACEREGID__SHIFT		0
static inline uint32_t A5XX_HLSQ_CONTROL_2_REG_FACEREGID(uint32_t val)
{
	return ((val) << A5XX_HLSQ_CONTROL_2_REG_FACEREGID__SHIFT) & A5XX_HLSQ_CONTROL_2_REG_FACEREGID__MASK;
}

#define REG_A5XX_HLSQ_CONTROL_3_REG				0x0000e787
#define A5XX_HLSQ_CONTROL_3_REG_FRAGCOORDXYREGID__MASK		0x000000ff
#define A5XX_HLSQ_CONTROL_3_REG_FRAGCOORDXYREGID__SHIFT		0
static inline uint32_t A5XX_HLSQ_CONTROL_3_REG_FRAGCOORDXYREGID(uint32_t val)
{
	return ((val) << A5XX_HLSQ_CONTROL_3_REG_FRAGCOORDXYREGID__SHIFT) & A5XX_HLSQ_CONTROL_3_REG_FRAGCOORDXYREGID__MASK;
}

#define REG_A5XX_HLSQ_CONTROL_4_REG				0x0000e788
#define A5XX_HLSQ_CONTROL_4_REG_XYCOORDREGID__MASK		0x00ff0000
#define A5XX_HLSQ_CONTROL_4_REG_XYCOORDREGID__SHIFT		16
static inline uint32_t A5XX_HLSQ_CONTROL_4_REG_XYCOORDREGID(uint32_t val)
{
	return ((val) << A5XX_HLSQ_CONTROL_4_REG_XYCOORDREGID__SHIFT) & A5XX_HLSQ_CONTROL_4_REG_XYCOORDREGID__MASK;
}
#define A5XX_HLSQ_CONTROL_4_REG_ZWCOORDREGID__MASK		0xff000000
#define A5XX_HLSQ_CONTROL_4_REG_ZWCOORDREGID__SHIFT		24
static inline uint32_t A5XX_HLSQ_CONTROL_4_REG_ZWCOORDREGID(uint32_t val)
{
	return ((val) << A5XX_HLSQ_CONTROL_4_REG_ZWCOORDREGID__SHIFT) & A5XX_HLSQ_CONTROL_4_REG_ZWCOORDREGID__MASK;
}

#define REG_A5XX_HLSQ_UPDATE_CNTL				0x0000e78a

#define REG_A5XX_HLSQ_VS_CONTROL_REG				0x0000e78b
#define A5XX_HLSQ_VS_CONTROL_REG_ENABLED			0x00000001
#define A5XX_HLSQ_VS_CONTROL_REG_CONSTOBJECTOFFSET__MASK	0x000000fe
#define A5XX_HLSQ_VS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT	1
static inline uint32_t A5XX_HLSQ_VS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_VS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_HLSQ_VS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_HLSQ_VS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_HLSQ_VS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_HLSQ_VS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_VS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_HLSQ_VS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_HLSQ_FS_CONTROL_REG				0x0000e78c
#define A5XX_HLSQ_FS_CONTROL_REG_ENABLED			0x00000001
#define A5XX_HLSQ_FS_CONTROL_REG_CONSTOBJECTOFFSET__MASK	0x000000fe
#define A5XX_HLSQ_FS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT	1
static inline uint32_t A5XX_HLSQ_FS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_FS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_HLSQ_FS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_HLSQ_FS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_HLSQ_FS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_HLSQ_FS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_FS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_HLSQ_FS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_HLSQ_HS_CONTROL_REG				0x0000e78d
#define A5XX_HLSQ_HS_CONTROL_REG_ENABLED			0x00000001
#define A5XX_HLSQ_HS_CONTROL_REG_CONSTOBJECTOFFSET__MASK	0x000000fe
#define A5XX_HLSQ_HS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT	1
static inline uint32_t A5XX_HLSQ_HS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_HS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_HLSQ_HS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_HLSQ_HS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_HLSQ_HS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_HLSQ_HS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_HS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_HLSQ_HS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_HLSQ_DS_CONTROL_REG				0x0000e78e
#define A5XX_HLSQ_DS_CONTROL_REG_ENABLED			0x00000001
#define A5XX_HLSQ_DS_CONTROL_REG_CONSTOBJECTOFFSET__MASK	0x000000fe
#define A5XX_HLSQ_DS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT	1
static inline uint32_t A5XX_HLSQ_DS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_DS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_HLSQ_DS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_HLSQ_DS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_HLSQ_DS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_HLSQ_DS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_DS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_HLSQ_DS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_HLSQ_GS_CONTROL_REG				0x0000e78f
#define A5XX_HLSQ_GS_CONTROL_REG_ENABLED			0x00000001
#define A5XX_HLSQ_GS_CONTROL_REG_CONSTOBJECTOFFSET__MASK	0x000000fe
#define A5XX_HLSQ_GS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT	1
static inline uint32_t A5XX_HLSQ_GS_CONTROL_REG_CONSTOBJECTOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_GS_CONTROL_REG_CONSTOBJECTOFFSET__SHIFT) & A5XX_HLSQ_GS_CONTROL_REG_CONSTOBJECTOFFSET__MASK;
}
#define A5XX_HLSQ_GS_CONTROL_REG_SHADEROBJOFFSET__MASK		0x00007f00
#define A5XX_HLSQ_GS_CONTROL_REG_SHADEROBJOFFSET__SHIFT		8
static inline uint32_t A5XX_HLSQ_GS_CONTROL_REG_SHADEROBJOFFSET(uint32_t val)
{
	return ((val) << A5XX_HLSQ_GS_CONTROL_REG_SHADEROBJOFFSET__SHIFT) & A5XX_HLSQ_GS_CONTROL_REG_SHADEROBJOFFSET__MASK;
}

#define REG_A5XX_HLSQ_CS_CONFIG					0x0000e790

#define REG_A5XX_HLSQ_VS_CNTL					0x0000e791
#define A5XX_HLSQ_VS_CNTL_INSTRLEN__MASK			0xfffffffe
#define A5XX_HLSQ_VS_CNTL_INSTRLEN__SHIFT			1
static inline uint32_t A5XX_HLSQ_VS_CNTL_INSTRLEN(uint32_t val)
{
	return ((val) << A5XX_HLSQ_VS_CNTL_INSTRLEN__SHIFT) & A5XX_HLSQ_VS_CNTL_INSTRLEN__MASK;
}

#define REG_A5XX_HLSQ_FS_CNTL					0x0000e792
#define A5XX_HLSQ_FS_CNTL_INSTRLEN__MASK			0xfffffffe
#define A5XX_HLSQ_FS_CNTL_INSTRLEN__SHIFT			1
static inline uint32_t A5XX_HLSQ_FS_CNTL_INSTRLEN(uint32_t val)
{
	return ((val) << A5XX_HLSQ_FS_CNTL_INSTRLEN__SHIFT) & A5XX_HLSQ_FS_CNTL_INSTRLEN__MASK;
}

#define REG_A5XX_HLSQ_HS_CNTL					0x0000e793
#define A5XX_HLSQ_HS_CNTL_INSTRLEN__MASK			0xfffffffe
#define A5XX_HLSQ_HS_CNTL_INSTRLEN__SHIFT			1
static inline uint32_t A5XX_HLSQ_HS_CNTL_INSTRLEN(uint32_t val)
{
	return ((val) << A5XX_HLSQ_HS_CNTL_INSTRLEN__SHIFT) & A5XX_HLSQ_HS_CNTL_INSTRLEN__MASK;
}

#define REG_A5XX_HLSQ_DS_CNTL					0x0000e794
#define A5XX_HLSQ_DS_CNTL_INSTRLEN__MASK			0xfffffffe
#define A5XX_HLSQ_DS_CNTL_INSTRLEN__SHIFT			1
static inline uint32_t A5XX_HLSQ_DS_CNTL_INSTRLEN(uint32_t val)
{
	return ((val) << A5XX_HLSQ_DS_CNTL_INSTRLEN__SHIFT) & A5XX_HLSQ_DS_CNTL_INSTRLEN__MASK;
}

#define REG_A5XX_HLSQ_GS_CNTL					0x0000e795
#define A5XX_HLSQ_GS_CNTL_INSTRLEN__MASK			0xfffffffe
#define A5XX_HLSQ_GS_CNTL_INSTRLEN__SHIFT			1
static inline uint32_t A5XX_HLSQ_GS_CNTL_INSTRLEN(uint32_t val)
{
	return ((val) << A5XX_HLSQ_GS_CNTL_INSTRLEN__SHIFT) & A5XX_HLSQ_GS_CNTL_INSTRLEN__MASK;
}

#define REG_A5XX_HLSQ_CS_CNTL					0x0000e796
#define A5XX_HLSQ_CS_CNTL_INSTRLEN__MASK			0xfffffffe
#define A5XX_HLSQ_CS_CNTL_INSTRLEN__SHIFT			1
static inline uint32_t A5XX_HLSQ_CS_CNTL_INSTRLEN(uint32_t val)
{
	return ((val) << A5XX_HLSQ_CS_CNTL_INSTRLEN__SHIFT) & A5XX_HLSQ_CS_CNTL_INSTRLEN__MASK;
}

#define REG_A5XX_HLSQ_CS_KERNEL_GROUP_X				0x0000e7b9

#define REG_A5XX_HLSQ_CS_KERNEL_GROUP_Y				0x0000e7ba

#define REG_A5XX_HLSQ_CS_KERNEL_GROUP_Z				0x0000e7bb

#define REG_A5XX_HLSQ_CS_NDRANGE_0				0x0000e7b0

#define REG_A5XX_HLSQ_CS_NDRANGE_1				0x0000e7b1

#define REG_A5XX_HLSQ_CS_NDRANGE_2				0x0000e7b2

#define REG_A5XX_HLSQ_CS_NDRANGE_3				0x0000e7b3

#define REG_A5XX_HLSQ_CS_NDRANGE_4				0x0000e7b4

#define REG_A5XX_HLSQ_CS_NDRANGE_5				0x0000e7b5

#define REG_A5XX_HLSQ_CS_NDRANGE_6				0x0000e7b6

#define REG_A5XX_HLSQ_CS_CNTL_0					0x0000e7b7

#define REG_A5XX_HLSQ_CS_CNTL_1					0x0000e7b8

#define REG_A5XX_UNKNOWN_E7C0					0x0000e7c0

#define REG_A5XX_HLSQ_VS_CONSTLEN				0x0000e7c3

#define REG_A5XX_HLSQ_VS_INSTRLEN				0x0000e7c4

#define REG_A5XX_UNKNOWN_E7C5					0x0000e7c5

#define REG_A5XX_UNKNOWN_E7CA					0x0000e7ca

#define REG_A5XX_HLSQ_FS_CONSTLEN				0x0000e7d7

#define REG_A5XX_HLSQ_FS_INSTRLEN				0x0000e7d8

#define REG_A5XX_HLSQ_HS_CONSTLEN				0x0000e7c8

#define REG_A5XX_HLSQ_HS_INSTRLEN				0x0000e7c9

#define REG_A5XX_HLSQ_DS_CONSTLEN				0x0000e7cd

#define REG_A5XX_HLSQ_DS_INSTRLEN				0x0000e7ce

#define REG_A5XX_UNKNOWN_E7CF					0x0000e7cf

#define REG_A5XX_HLSQ_GS_CONSTLEN				0x0000e7d2

#define REG_A5XX_HLSQ_GS_INSTRLEN				0x0000e7d3

#define REG_A5XX_UNKNOWN_E7D4					0x0000e7d4

#define REG_A5XX_UNKNOWN_E7D9					0x0000e7d9

#define REG_A5XX_HLSQ_CONTEXT_SWITCH_CS_SW_3			0x0000e7dc

#define REG_A5XX_HLSQ_CONTEXT_SWITCH_CS_SW_4			0x0000e7dd

#define REG_A5XX_RB_2D_DST_FILL					0x00002101

#define REG_A5XX_RB_2D_SRC_INFO					0x00002107
#define A5XX_RB_2D_SRC_INFO_COLOR_FORMAT__MASK			0x000000ff
#define A5XX_RB_2D_SRC_INFO_COLOR_FORMAT__SHIFT			0
static inline uint32_t A5XX_RB_2D_SRC_INFO_COLOR_FORMAT(enum a5xx_color_fmt val)
{
	return ((val) << A5XX_RB_2D_SRC_INFO_COLOR_FORMAT__SHIFT) & A5XX_RB_2D_SRC_INFO_COLOR_FORMAT__MASK;
}
#define A5XX_RB_2D_SRC_INFO_COLOR_SWAP__MASK			0x00000c00
#define A5XX_RB_2D_SRC_INFO_COLOR_SWAP__SHIFT			10
static inline uint32_t A5XX_RB_2D_SRC_INFO_COLOR_SWAP(enum a3xx_color_swap val)
{
	return ((val) << A5XX_RB_2D_SRC_INFO_COLOR_SWAP__SHIFT) & A5XX_RB_2D_SRC_INFO_COLOR_SWAP__MASK;
}

#define REG_A5XX_RB_2D_SRC_LO					0x00002108

#define REG_A5XX_RB_2D_SRC_HI					0x00002109

#define REG_A5XX_RB_2D_DST_INFO					0x00002110
#define A5XX_RB_2D_DST_INFO_COLOR_FORMAT__MASK			0x000000ff
#define A5XX_RB_2D_DST_INFO_COLOR_FORMAT__SHIFT			0
static inline uint32_t A5XX_RB_2D_DST_INFO_COLOR_FORMAT(enum a5xx_color_fmt val)
{
	return ((val) << A5XX_RB_2D_DST_INFO_COLOR_FORMAT__SHIFT) & A5XX_RB_2D_DST_INFO_COLOR_FORMAT__MASK;
}
#define A5XX_RB_2D_DST_INFO_COLOR_SWAP__MASK			0x00000c00
#define A5XX_RB_2D_DST_INFO_COLOR_SWAP__SHIFT			10
static inline uint32_t A5XX_RB_2D_DST_INFO_COLOR_SWAP(enum a3xx_color_swap val)
{
	return ((val) << A5XX_RB_2D_DST_INFO_COLOR_SWAP__SHIFT) & A5XX_RB_2D_DST_INFO_COLOR_SWAP__MASK;
}

#define REG_A5XX_RB_2D_SRC_FLAGS_LO				0x00002140

#define REG_A5XX_RB_2D_SRC_FLAGS_HI				0x00002141

#define REG_A5XX_RB_2D_DST_LO					0x00002111

#define REG_A5XX_RB_2D_DST_HI					0x00002112

#define REG_A5XX_RB_2D_DST_FLAGS_LO				0x00002143

#define REG_A5XX_RB_2D_DST_FLAGS_HI				0x00002144

#define REG_A5XX_GRAS_2D_SRC_INFO				0x00002181
#define A5XX_GRAS_2D_SRC_INFO_COLOR_FORMAT__MASK		0x000000ff
#define A5XX_GRAS_2D_SRC_INFO_COLOR_FORMAT__SHIFT		0
static inline uint32_t A5XX_GRAS_2D_SRC_INFO_COLOR_FORMAT(enum a5xx_color_fmt val)
{
	return ((val) << A5XX_GRAS_2D_SRC_INFO_COLOR_FORMAT__SHIFT) & A5XX_GRAS_2D_SRC_INFO_COLOR_FORMAT__MASK;
}
#define A5XX_GRAS_2D_SRC_INFO_COLOR_SWAP__MASK			0x00000c00
#define A5XX_GRAS_2D_SRC_INFO_COLOR_SWAP__SHIFT			10
static inline uint32_t A5XX_GRAS_2D_SRC_INFO_COLOR_SWAP(enum a3xx_color_swap val)
{
	return ((val) << A5XX_GRAS_2D_SRC_INFO_COLOR_SWAP__SHIFT) & A5XX_GRAS_2D_SRC_INFO_COLOR_SWAP__MASK;
}

#define REG_A5XX_GRAS_2D_DST_INFO				0x00002182
#define A5XX_GRAS_2D_DST_INFO_COLOR_FORMAT__MASK		0x000000ff
#define A5XX_GRAS_2D_DST_INFO_COLOR_FORMAT__SHIFT		0
static inline uint32_t A5XX_GRAS_2D_DST_INFO_COLOR_FORMAT(enum a5xx_color_fmt val)
{
	return ((val) << A5XX_GRAS_2D_DST_INFO_COLOR_FORMAT__SHIFT) & A5XX_GRAS_2D_DST_INFO_COLOR_FORMAT__MASK;
}
#define A5XX_GRAS_2D_DST_INFO_COLOR_SWAP__MASK			0x00000c00
#define A5XX_GRAS_2D_DST_INFO_COLOR_SWAP__SHIFT			10
static inline uint32_t A5XX_GRAS_2D_DST_INFO_COLOR_SWAP(enum a3xx_color_swap val)
{
	return ((val) << A5XX_GRAS_2D_DST_INFO_COLOR_SWAP__SHIFT) & A5XX_GRAS_2D_DST_INFO_COLOR_SWAP__MASK;
}

#define REG_A5XX_TEX_SAMP_0					0x00000000
#define A5XX_TEX_SAMP_0_MIPFILTER_LINEAR_NEAR			0x00000001
#define A5XX_TEX_SAMP_0_XY_MAG__MASK				0x00000006
#define A5XX_TEX_SAMP_0_XY_MAG__SHIFT				1
static inline uint32_t A5XX_TEX_SAMP_0_XY_MAG(enum a5xx_tex_filter val)
{
	return ((val) << A5XX_TEX_SAMP_0_XY_MAG__SHIFT) & A5XX_TEX_SAMP_0_XY_MAG__MASK;
}
#define A5XX_TEX_SAMP_0_XY_MIN__MASK				0x00000018
#define A5XX_TEX_SAMP_0_XY_MIN__SHIFT				3
static inline uint32_t A5XX_TEX_SAMP_0_XY_MIN(enum a5xx_tex_filter val)
{
	return ((val) << A5XX_TEX_SAMP_0_XY_MIN__SHIFT) & A5XX_TEX_SAMP_0_XY_MIN__MASK;
}
#define A5XX_TEX_SAMP_0_WRAP_S__MASK				0x000000e0
#define A5XX_TEX_SAMP_0_WRAP_S__SHIFT				5
static inline uint32_t A5XX_TEX_SAMP_0_WRAP_S(enum a5xx_tex_clamp val)
{
	return ((val) << A5XX_TEX_SAMP_0_WRAP_S__SHIFT) & A5XX_TEX_SAMP_0_WRAP_S__MASK;
}
#define A5XX_TEX_SAMP_0_WRAP_T__MASK				0x00000700
#define A5XX_TEX_SAMP_0_WRAP_T__SHIFT				8
static inline uint32_t A5XX_TEX_SAMP_0_WRAP_T(enum a5xx_tex_clamp val)
{
	return ((val) << A5XX_TEX_SAMP_0_WRAP_T__SHIFT) & A5XX_TEX_SAMP_0_WRAP_T__MASK;
}
#define A5XX_TEX_SAMP_0_WRAP_R__MASK				0x00003800
#define A5XX_TEX_SAMP_0_WRAP_R__SHIFT				11
static inline uint32_t A5XX_TEX_SAMP_0_WRAP_R(enum a5xx_tex_clamp val)
{
	return ((val) << A5XX_TEX_SAMP_0_WRAP_R__SHIFT) & A5XX_TEX_SAMP_0_WRAP_R__MASK;
}
#define A5XX_TEX_SAMP_0_ANISO__MASK				0x0001c000
#define A5XX_TEX_SAMP_0_ANISO__SHIFT				14
static inline uint32_t A5XX_TEX_SAMP_0_ANISO(enum a5xx_tex_aniso val)
{
	return ((val) << A5XX_TEX_SAMP_0_ANISO__SHIFT) & A5XX_TEX_SAMP_0_ANISO__MASK;
}
#define A5XX_TEX_SAMP_0_LOD_BIAS__MASK				0xfff80000
#define A5XX_TEX_SAMP_0_LOD_BIAS__SHIFT				19
static inline uint32_t A5XX_TEX_SAMP_0_LOD_BIAS(float val)
{
	return ((((int32_t)(val * 256.0))) << A5XX_TEX_SAMP_0_LOD_BIAS__SHIFT) & A5XX_TEX_SAMP_0_LOD_BIAS__MASK;
}

#define REG_A5XX_TEX_SAMP_1					0x00000001
#define A5XX_TEX_SAMP_1_COMPARE_FUNC__MASK			0x0000000e
#define A5XX_TEX_SAMP_1_COMPARE_FUNC__SHIFT			1
static inline uint32_t A5XX_TEX_SAMP_1_COMPARE_FUNC(enum adreno_compare_func val)
{
	return ((val) << A5XX_TEX_SAMP_1_COMPARE_FUNC__SHIFT) & A5XX_TEX_SAMP_1_COMPARE_FUNC__MASK;
}
#define A5XX_TEX_SAMP_1_CUBEMAPSEAMLESSFILTOFF			0x00000010
#define A5XX_TEX_SAMP_1_UNNORM_COORDS				0x00000020
#define A5XX_TEX_SAMP_1_MIPFILTER_LINEAR_FAR			0x00000040
#define A5XX_TEX_SAMP_1_MAX_LOD__MASK				0x000fff00
#define A5XX_TEX_SAMP_1_MAX_LOD__SHIFT				8
static inline uint32_t A5XX_TEX_SAMP_1_MAX_LOD(float val)
{
	return ((((uint32_t)(val * 256.0))) << A5XX_TEX_SAMP_1_MAX_LOD__SHIFT) & A5XX_TEX_SAMP_1_MAX_LOD__MASK;
}
#define A5XX_TEX_SAMP_1_MIN_LOD__MASK				0xfff00000
#define A5XX_TEX_SAMP_1_MIN_LOD__SHIFT				20
static inline uint32_t A5XX_TEX_SAMP_1_MIN_LOD(float val)
{
	return ((((uint32_t)(val * 256.0))) << A5XX_TEX_SAMP_1_MIN_LOD__SHIFT) & A5XX_TEX_SAMP_1_MIN_LOD__MASK;
}

#define REG_A5XX_TEX_SAMP_2					0x00000002

#define REG_A5XX_TEX_SAMP_3					0x00000003

#define REG_A5XX_TEX_CONST_0					0x00000000
#define A5XX_TEX_CONST_0_TILE_MODE__MASK			0x00000003
#define A5XX_TEX_CONST_0_TILE_MODE__SHIFT			0
static inline uint32_t A5XX_TEX_CONST_0_TILE_MODE(enum a5xx_tile_mode val)
{
	return ((val) << A5XX_TEX_CONST_0_TILE_MODE__SHIFT) & A5XX_TEX_CONST_0_TILE_MODE__MASK;
}
#define A5XX_TEX_CONST_0_SRGB					0x00000004
#define A5XX_TEX_CONST_0_SWIZ_X__MASK				0x00000070
#define A5XX_TEX_CONST_0_SWIZ_X__SHIFT				4
static inline uint32_t A5XX_TEX_CONST_0_SWIZ_X(enum a5xx_tex_swiz val)
{
	return ((val) << A5XX_TEX_CONST_0_SWIZ_X__SHIFT) & A5XX_TEX_CONST_0_SWIZ_X__MASK;
}
#define A5XX_TEX_CONST_0_SWIZ_Y__MASK				0x00000380
#define A5XX_TEX_CONST_0_SWIZ_Y__SHIFT				7
static inline uint32_t A5XX_TEX_CONST_0_SWIZ_Y(enum a5xx_tex_swiz val)
{
	return ((val) << A5XX_TEX_CONST_0_SWIZ_Y__SHIFT) & A5XX_TEX_CONST_0_SWIZ_Y__MASK;
}
#define A5XX_TEX_CONST_0_SWIZ_Z__MASK				0x00001c00
#define A5XX_TEX_CONST_0_SWIZ_Z__SHIFT				10
static inline uint32_t A5XX_TEX_CONST_0_SWIZ_Z(enum a5xx_tex_swiz val)
{
	return ((val) << A5XX_TEX_CONST_0_SWIZ_Z__SHIFT) & A5XX_TEX_CONST_0_SWIZ_Z__MASK;
}
#define A5XX_TEX_CONST_0_SWIZ_W__MASK				0x0000e000
#define A5XX_TEX_CONST_0_SWIZ_W__SHIFT				13
static inline uint32_t A5XX_TEX_CONST_0_SWIZ_W(enum a5xx_tex_swiz val)
{
	return ((val) << A5XX_TEX_CONST_0_SWIZ_W__SHIFT) & A5XX_TEX_CONST_0_SWIZ_W__MASK;
}
#define A5XX_TEX_CONST_0_FMT__MASK				0x3fc00000
#define A5XX_TEX_CONST_0_FMT__SHIFT				22
static inline uint32_t A5XX_TEX_CONST_0_FMT(enum a5xx_tex_fmt val)
{
	return ((val) << A5XX_TEX_CONST_0_FMT__SHIFT) & A5XX_TEX_CONST_0_FMT__MASK;
}
#define A5XX_TEX_CONST_0_SWAP__MASK				0xc0000000
#define A5XX_TEX_CONST_0_SWAP__SHIFT				30
static inline uint32_t A5XX_TEX_CONST_0_SWAP(enum a3xx_color_swap val)
{
	return ((val) << A5XX_TEX_CONST_0_SWAP__SHIFT) & A5XX_TEX_CONST_0_SWAP__MASK;
}

#define REG_A5XX_TEX_CONST_1					0x00000001
#define A5XX_TEX_CONST_1_WIDTH__MASK				0x00007fff
#define A5XX_TEX_CONST_1_WIDTH__SHIFT				0
static inline uint32_t A5XX_TEX_CONST_1_WIDTH(uint32_t val)
{
	return ((val) << A5XX_TEX_CONST_1_WIDTH__SHIFT) & A5XX_TEX_CONST_1_WIDTH__MASK;
}
#define A5XX_TEX_CONST_1_HEIGHT__MASK				0x3fff8000
#define A5XX_TEX_CONST_1_HEIGHT__SHIFT				15
static inline uint32_t A5XX_TEX_CONST_1_HEIGHT(uint32_t val)
{
	return ((val) << A5XX_TEX_CONST_1_HEIGHT__SHIFT) & A5XX_TEX_CONST_1_HEIGHT__MASK;
}

#define REG_A5XX_TEX_CONST_2					0x00000002
#define A5XX_TEX_CONST_2_FETCHSIZE__MASK			0x0000000f
#define A5XX_TEX_CONST_2_FETCHSIZE__SHIFT			0
static inline uint32_t A5XX_TEX_CONST_2_FETCHSIZE(enum a5xx_tex_fetchsize val)
{
	return ((val) << A5XX_TEX_CONST_2_FETCHSIZE__SHIFT) & A5XX_TEX_CONST_2_FETCHSIZE__MASK;
}
#define A5XX_TEX_CONST_2_PITCH__MASK				0x1fffff80
#define A5XX_TEX_CONST_2_PITCH__SHIFT				7
static inline uint32_t A5XX_TEX_CONST_2_PITCH(uint32_t val)
{
	return ((val) << A5XX_TEX_CONST_2_PITCH__SHIFT) & A5XX_TEX_CONST_2_PITCH__MASK;
}
#define A5XX_TEX_CONST_2_TYPE__MASK				0x60000000
#define A5XX_TEX_CONST_2_TYPE__SHIFT				29
static inline uint32_t A5XX_TEX_CONST_2_TYPE(enum a5xx_tex_type val)
{
	return ((val) << A5XX_TEX_CONST_2_TYPE__SHIFT) & A5XX_TEX_CONST_2_TYPE__MASK;
}

#define REG_A5XX_TEX_CONST_3					0x00000003
#define A5XX_TEX_CONST_3_ARRAY_PITCH__MASK			0x00003fff
#define A5XX_TEX_CONST_3_ARRAY_PITCH__SHIFT			0
static inline uint32_t A5XX_TEX_CONST_3_ARRAY_PITCH(uint32_t val)
{
	return ((val >> 12) << A5XX_TEX_CONST_3_ARRAY_PITCH__SHIFT) & A5XX_TEX_CONST_3_ARRAY_PITCH__MASK;
}
#define A5XX_TEX_CONST_3_FLAG					0x10000000

#define REG_A5XX_TEX_CONST_4					0x00000004
#define A5XX_TEX_CONST_4_BASE_LO__MASK				0xffffffe0
#define A5XX_TEX_CONST_4_BASE_LO__SHIFT				5
static inline uint32_t A5XX_TEX_CONST_4_BASE_LO(uint32_t val)
{
	return ((val >> 5) << A5XX_TEX_CONST_4_BASE_LO__SHIFT) & A5XX_TEX_CONST_4_BASE_LO__MASK;
}

3730#define REG_A5XX_TEX_CONST_5 0x00000005
3731#define A5XX_TEX_CONST_5_BASE_HI__MASK 0x0001ffff
3732#define A5XX_TEX_CONST_5_BASE_HI__SHIFT 0
3733static inline uint32_t A5XX_TEX_CONST_5_BASE_HI(uint32_t val)
3734{
3735 return ((val) << A5XX_TEX_CONST_5_BASE_HI__SHIFT) & A5XX_TEX_CONST_5_BASE_HI__MASK;
3736}
3737#define A5XX_TEX_CONST_5_DEPTH__MASK 0x3ffe0000
3738#define A5XX_TEX_CONST_5_DEPTH__SHIFT 17
3739static inline uint32_t A5XX_TEX_CONST_5_DEPTH(uint32_t val)
3740{
3741 return ((val) << A5XX_TEX_CONST_5_DEPTH__SHIFT) & A5XX_TEX_CONST_5_DEPTH__MASK;
3742}
3743
3744#define REG_A5XX_TEX_CONST_6 0x00000006
3745
3746#define REG_A5XX_TEX_CONST_7 0x00000007
3747
3748#define REG_A5XX_TEX_CONST_8 0x00000008
3749
3750#define REG_A5XX_TEX_CONST_9 0x00000009
3751
3752#define REG_A5XX_TEX_CONST_10 0x0000000a
3753
3754#define REG_A5XX_TEX_CONST_11 0x0000000b
3755
3756
3757#endif /* A5XX_XML */
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
new file mode 100644
index 000000000000..b8647198c11c
--- /dev/null
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -0,0 +1,888 @@
1/* Copyright (c) 2016 The Linux Foundation. All rights reserved.
2 *
3 * This program is free software; you can redistribute it and/or modify
4 * it under the terms of the GNU General Public License version 2 and
5 * only version 2 as published by the Free Software Foundation.
6 *
7 * This program is distributed in the hope that it will be useful,
8 * but WITHOUT ANY WARRANTY; without even the implied warranty of
9 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 * GNU General Public License for more details.
11 *
12 */
13
14#include "msm_gem.h"
15#include "a5xx_gpu.h"
16
17extern bool hang_debug;
18static void a5xx_dump(struct msm_gpu *gpu);
19
20static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
21 struct msm_file_private *ctx)
22{
23 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
24 struct msm_drm_private *priv = gpu->dev->dev_private;
25 struct msm_ringbuffer *ring = gpu->rb;
26 unsigned int i, ibs = 0;
27
28 for (i = 0; i < submit->nr_cmds; i++) {
29 switch (submit->cmd[i].type) {
30 case MSM_SUBMIT_CMD_IB_TARGET_BUF:
31 break;
32 case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
33 if (priv->lastctx == ctx)
34 break;
35 case MSM_SUBMIT_CMD_BUF: /* fall-thru from CTX_RESTORE_BUF when ctx changed */
36 OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
37 OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
38 OUT_RING(ring, upper_32_bits(submit->cmd[i].iova));
39 OUT_RING(ring, submit->cmd[i].size);
40 ibs++;
41 break;
42 }
43 }
44
45 OUT_PKT4(ring, REG_A5XX_CP_SCRATCH_REG(2), 1);
46 OUT_RING(ring, submit->fence->seqno);
47
48 OUT_PKT7(ring, CP_EVENT_WRITE, 4);
49 OUT_RING(ring, CACHE_FLUSH_TS | (1 << 31));
50 OUT_RING(ring, lower_32_bits(rbmemptr(adreno_gpu, fence)));
51 OUT_RING(ring, upper_32_bits(rbmemptr(adreno_gpu, fence)));
52 OUT_RING(ring, submit->fence->seqno);
53
54 gpu->funcs->flush(gpu);
55}
56
57struct a5xx_hwcg {
58 u32 offset;
59 u32 value;
60};
61
62static const struct a5xx_hwcg a530_hwcg[] = {
63 {REG_A5XX_RBBM_CLOCK_CNTL_SP0, 0x02222222},
64 {REG_A5XX_RBBM_CLOCK_CNTL_SP1, 0x02222222},
65 {REG_A5XX_RBBM_CLOCK_CNTL_SP2, 0x02222222},
66 {REG_A5XX_RBBM_CLOCK_CNTL_SP3, 0x02222222},
67 {REG_A5XX_RBBM_CLOCK_CNTL2_SP0, 0x02222220},
68 {REG_A5XX_RBBM_CLOCK_CNTL2_SP1, 0x02222220},
69 {REG_A5XX_RBBM_CLOCK_CNTL2_SP2, 0x02222220},
70 {REG_A5XX_RBBM_CLOCK_CNTL2_SP3, 0x02222220},
71 {REG_A5XX_RBBM_CLOCK_HYST_SP0, 0x0000F3CF},
72 {REG_A5XX_RBBM_CLOCK_HYST_SP1, 0x0000F3CF},
73 {REG_A5XX_RBBM_CLOCK_HYST_SP2, 0x0000F3CF},
74 {REG_A5XX_RBBM_CLOCK_HYST_SP3, 0x0000F3CF},
75 {REG_A5XX_RBBM_CLOCK_DELAY_SP0, 0x00000080},
76 {REG_A5XX_RBBM_CLOCK_DELAY_SP1, 0x00000080},
77 {REG_A5XX_RBBM_CLOCK_DELAY_SP2, 0x00000080},
78 {REG_A5XX_RBBM_CLOCK_DELAY_SP3, 0x00000080},
79 {REG_A5XX_RBBM_CLOCK_CNTL_TP0, 0x22222222},
80 {REG_A5XX_RBBM_CLOCK_CNTL_TP1, 0x22222222},
81 {REG_A5XX_RBBM_CLOCK_CNTL_TP2, 0x22222222},
82 {REG_A5XX_RBBM_CLOCK_CNTL_TP3, 0x22222222},
83 {REG_A5XX_RBBM_CLOCK_CNTL2_TP0, 0x22222222},
84 {REG_A5XX_RBBM_CLOCK_CNTL2_TP1, 0x22222222},
85 {REG_A5XX_RBBM_CLOCK_CNTL2_TP2, 0x22222222},
86 {REG_A5XX_RBBM_CLOCK_CNTL2_TP3, 0x22222222},
87 {REG_A5XX_RBBM_CLOCK_CNTL3_TP0, 0x00002222},
88 {REG_A5XX_RBBM_CLOCK_CNTL3_TP1, 0x00002222},
89 {REG_A5XX_RBBM_CLOCK_CNTL3_TP2, 0x00002222},
90 {REG_A5XX_RBBM_CLOCK_CNTL3_TP3, 0x00002222},
91 {REG_A5XX_RBBM_CLOCK_HYST_TP0, 0x77777777},
92 {REG_A5XX_RBBM_CLOCK_HYST_TP1, 0x77777777},
93 {REG_A5XX_RBBM_CLOCK_HYST_TP2, 0x77777777},
94 {REG_A5XX_RBBM_CLOCK_HYST_TP3, 0x77777777},
95 {REG_A5XX_RBBM_CLOCK_HYST2_TP0, 0x77777777},
96 {REG_A5XX_RBBM_CLOCK_HYST2_TP1, 0x77777777},
97 {REG_A5XX_RBBM_CLOCK_HYST2_TP2, 0x77777777},
98 {REG_A5XX_RBBM_CLOCK_HYST2_TP3, 0x77777777},
99 {REG_A5XX_RBBM_CLOCK_HYST3_TP0, 0x00007777},
100 {REG_A5XX_RBBM_CLOCK_HYST3_TP1, 0x00007777},
101 {REG_A5XX_RBBM_CLOCK_HYST3_TP2, 0x00007777},
102 {REG_A5XX_RBBM_CLOCK_HYST3_TP3, 0x00007777},
103 {REG_A5XX_RBBM_CLOCK_DELAY_TP0, 0x11111111},
104 {REG_A5XX_RBBM_CLOCK_DELAY_TP1, 0x11111111},
105 {REG_A5XX_RBBM_CLOCK_DELAY_TP2, 0x11111111},
106 {REG_A5XX_RBBM_CLOCK_DELAY_TP3, 0x11111111},
107 {REG_A5XX_RBBM_CLOCK_DELAY2_TP0, 0x11111111},
108 {REG_A5XX_RBBM_CLOCK_DELAY2_TP1, 0x11111111},
109 {REG_A5XX_RBBM_CLOCK_DELAY2_TP2, 0x11111111},
110 {REG_A5XX_RBBM_CLOCK_DELAY2_TP3, 0x11111111},
111 {REG_A5XX_RBBM_CLOCK_DELAY3_TP0, 0x00001111},
112 {REG_A5XX_RBBM_CLOCK_DELAY3_TP1, 0x00001111},
113 {REG_A5XX_RBBM_CLOCK_DELAY3_TP2, 0x00001111},
114 {REG_A5XX_RBBM_CLOCK_DELAY3_TP3, 0x00001111},
115 {REG_A5XX_RBBM_CLOCK_CNTL_UCHE, 0x22222222},
116 {REG_A5XX_RBBM_CLOCK_CNTL2_UCHE, 0x22222222},
117 {REG_A5XX_RBBM_CLOCK_CNTL3_UCHE, 0x22222222},
118 {REG_A5XX_RBBM_CLOCK_CNTL4_UCHE, 0x00222222},
119 {REG_A5XX_RBBM_CLOCK_HYST_UCHE, 0x00444444},
120 {REG_A5XX_RBBM_CLOCK_DELAY_UCHE, 0x00000002},
121 {REG_A5XX_RBBM_CLOCK_CNTL_RB0, 0x22222222},
122 {REG_A5XX_RBBM_CLOCK_CNTL_RB1, 0x22222222},
123 {REG_A5XX_RBBM_CLOCK_CNTL_RB2, 0x22222222},
124 {REG_A5XX_RBBM_CLOCK_CNTL_RB3, 0x22222222},
125 {REG_A5XX_RBBM_CLOCK_CNTL2_RB0, 0x00222222},
126 {REG_A5XX_RBBM_CLOCK_CNTL2_RB1, 0x00222222},
127 {REG_A5XX_RBBM_CLOCK_CNTL2_RB2, 0x00222222},
128 {REG_A5XX_RBBM_CLOCK_CNTL2_RB3, 0x00222222},
129 {REG_A5XX_RBBM_CLOCK_CNTL_CCU0, 0x00022220},
130 {REG_A5XX_RBBM_CLOCK_CNTL_CCU1, 0x00022220},
131 {REG_A5XX_RBBM_CLOCK_CNTL_CCU2, 0x00022220},
132 {REG_A5XX_RBBM_CLOCK_CNTL_CCU3, 0x00022220},
133 {REG_A5XX_RBBM_CLOCK_CNTL_RAC, 0x05522222},
134 {REG_A5XX_RBBM_CLOCK_CNTL2_RAC, 0x00505555},
135 {REG_A5XX_RBBM_CLOCK_HYST_RB_CCU0, 0x04040404},
136 {REG_A5XX_RBBM_CLOCK_HYST_RB_CCU1, 0x04040404},
137 {REG_A5XX_RBBM_CLOCK_HYST_RB_CCU2, 0x04040404},
138 {REG_A5XX_RBBM_CLOCK_HYST_RB_CCU3, 0x04040404},
139 {REG_A5XX_RBBM_CLOCK_HYST_RAC, 0x07444044},
140 {REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_0, 0x00000002},
141 {REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_1, 0x00000002},
142 {REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_2, 0x00000002},
143 {REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_3, 0x00000002},
144 {REG_A5XX_RBBM_CLOCK_DELAY_RAC, 0x00010011},
145 {REG_A5XX_RBBM_CLOCK_CNTL_TSE_RAS_RBBM, 0x04222222},
146 {REG_A5XX_RBBM_CLOCK_MODE_GPC, 0x02222222},
147 {REG_A5XX_RBBM_CLOCK_MODE_VFD, 0x00002222},
148 {REG_A5XX_RBBM_CLOCK_HYST_TSE_RAS_RBBM, 0x00000000},
149 {REG_A5XX_RBBM_CLOCK_HYST_GPC, 0x04104004},
150 {REG_A5XX_RBBM_CLOCK_HYST_VFD, 0x00000000},
151 {REG_A5XX_RBBM_CLOCK_DELAY_HLSQ, 0x00000000},
152 {REG_A5XX_RBBM_CLOCK_DELAY_TSE_RAS_RBBM, 0x00004000},
153 {REG_A5XX_RBBM_CLOCK_DELAY_GPC, 0x00000200},
154 {REG_A5XX_RBBM_CLOCK_DELAY_VFD, 0x00002222}
155};
156
157static const struct {
158 int (*test)(struct adreno_gpu *gpu);
159 const struct a5xx_hwcg *regs;
160 unsigned int count;
161} a5xx_hwcg_regs[] = {
162 { adreno_is_a530, a530_hwcg, ARRAY_SIZE(a530_hwcg), },
163};
164
165static void _a5xx_enable_hwcg(struct msm_gpu *gpu,
166 const struct a5xx_hwcg *regs, unsigned int count)
167{
168 unsigned int i;
169
170 for (i = 0; i < count; i++)
171 gpu_write(gpu, regs[i].offset, regs[i].value);
172
173 gpu_write(gpu, REG_A5XX_RBBM_CLOCK_CNTL, 0xAAA8AA00);
174 gpu_write(gpu, REG_A5XX_RBBM_ISDB_CNT, 0x182);
175}
176
177static void a5xx_enable_hwcg(struct msm_gpu *gpu)
178{
179 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
180 unsigned int i;
181
182 for (i = 0; i < ARRAY_SIZE(a5xx_hwcg_regs); i++) {
183 if (a5xx_hwcg_regs[i].test(adreno_gpu)) {
184 _a5xx_enable_hwcg(gpu, a5xx_hwcg_regs[i].regs,
185 a5xx_hwcg_regs[i].count);
186 return;
187 }
188 }
189}
190
191static int a5xx_me_init(struct msm_gpu *gpu)
192{
193 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
194 struct msm_ringbuffer *ring = gpu->rb;
195
196 OUT_PKT7(ring, CP_ME_INIT, 8);
197
198 OUT_RING(ring, 0x0000002F);
199
200 /* Enable multiple hardware contexts */
201 OUT_RING(ring, 0x00000003);
202
203 /* Enable error detection */
204 OUT_RING(ring, 0x20000000);
205
206 /* Don't enable header dump */
207 OUT_RING(ring, 0x00000000);
208 OUT_RING(ring, 0x00000000);
209
210 /* Specify workarounds for various microcode issues */
211 if (adreno_is_a530(adreno_gpu)) {
212 /* Workaround for token end syncs
213 * Force a WFI after every direct-render 3D mode draw and every
214 * 2D mode 3 draw
215 */
216 OUT_RING(ring, 0x0000000B);
217 } else {
218 /* No workarounds enabled */
219 OUT_RING(ring, 0x00000000);
220 }
221
222 OUT_RING(ring, 0x00000000);
223 OUT_RING(ring, 0x00000000);
224
225 gpu->funcs->flush(gpu);
226
227 return gpu->funcs->idle(gpu) ? 0 : -EINVAL;
228}
229
230static struct drm_gem_object *a5xx_ucode_load_bo(struct msm_gpu *gpu,
231 const struct firmware *fw, u64 *iova)
232{
233 struct drm_device *drm = gpu->dev;
234 struct drm_gem_object *bo;
235 void *ptr;
236
237 mutex_lock(&drm->struct_mutex);
238 bo = msm_gem_new(drm, fw->size - 4, MSM_BO_UNCACHED);
239 mutex_unlock(&drm->struct_mutex);
240
241 if (IS_ERR(bo))
242 return bo;
243
244 ptr = msm_gem_get_vaddr(bo);
245 if (!ptr) {
246 drm_gem_object_unreference_unlocked(bo);
247 return ERR_PTR(-ENOMEM);
248 }
249
250 if (iova) {
251 int ret = msm_gem_get_iova(bo, gpu->id, iova);
252
253 if (ret) {
254 drm_gem_object_unreference_unlocked(bo);
255 return ERR_PTR(ret);
256 }
257 }
258
259 memcpy(ptr, &fw->data[4], fw->size - 4);
260
261 msm_gem_put_vaddr(bo);
262 return bo;
263}
264
265static int a5xx_ucode_init(struct msm_gpu *gpu)
266{
267 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
268 struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
269 int ret;
270
271 if (!a5xx_gpu->pm4_bo) {
272 a5xx_gpu->pm4_bo = a5xx_ucode_load_bo(gpu, adreno_gpu->pm4,
273 &a5xx_gpu->pm4_iova);
274
275 if (IS_ERR(a5xx_gpu->pm4_bo)) {
276 ret = PTR_ERR(a5xx_gpu->pm4_bo);
277 a5xx_gpu->pm4_bo = NULL;
278 dev_err(gpu->dev->dev, "could not allocate PM4: %d\n",
279 ret);
280 return ret;
281 }
282 }
283
284 if (!a5xx_gpu->pfp_bo) {
285 a5xx_gpu->pfp_bo = a5xx_ucode_load_bo(gpu, adreno_gpu->pfp,
286 &a5xx_gpu->pfp_iova);
287
288 if (IS_ERR(a5xx_gpu->pfp_bo)) {
289 ret = PTR_ERR(a5xx_gpu->pfp_bo);
290 a5xx_gpu->pfp_bo = NULL;
291 dev_err(gpu->dev->dev, "could not allocate PFP: %d\n",
292 ret);
293 return ret;
294 }
295 }
296
297 gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO,
298 REG_A5XX_CP_ME_INSTR_BASE_HI, a5xx_gpu->pm4_iova);
299
300 gpu_write64(gpu, REG_A5XX_CP_PFP_INSTR_BASE_LO,
301 REG_A5XX_CP_PFP_INSTR_BASE_HI, a5xx_gpu->pfp_iova);
302
303 return 0;
304}
305
306#define A5XX_INT_MASK (A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR | \
307 A5XX_RBBM_INT_0_MASK_RBBM_TRANSFER_TIMEOUT | \
308 A5XX_RBBM_INT_0_MASK_RBBM_ME_MS_TIMEOUT | \
309 A5XX_RBBM_INT_0_MASK_RBBM_PFP_MS_TIMEOUT | \
310 A5XX_RBBM_INT_0_MASK_RBBM_ETS_MS_TIMEOUT | \
311 A5XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNC_OVERFLOW | \
312 A5XX_RBBM_INT_0_MASK_CP_HW_ERROR | \
313 A5XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS | \
314 A5XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \
315 A5XX_RBBM_INT_0_MASK_GPMU_VOLTAGE_DROOP)
316
317static int a5xx_hw_init(struct msm_gpu *gpu)
318{
319 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
320 int ret;
321
322 gpu_write(gpu, REG_A5XX_VBIF_ROUND_ROBIN_QOS_ARB, 0x00000003);
323
324 /* Make all blocks contribute to the GPU BUSY perf counter */
325 gpu_write(gpu, REG_A5XX_RBBM_PERFCTR_GPU_BUSY_MASKED, 0xFFFFFFFF);
326
327 /* Enable RBBM error reporting bits */
328 gpu_write(gpu, REG_A5XX_RBBM_AHB_CNTL0, 0x00000001);
329
330 if (adreno_gpu->quirks & ADRENO_QUIRK_FAULT_DETECT_MASK) {
331 /*
332 * Mask out the activity signals from RB1-3 to avoid false
333 * positives
334 */
335
336 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL11,
337 0xF0000000);
338 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL12,
339 0xFFFFFFFF);
340 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL13,
341 0xFFFFFFFF);
342 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL14,
343 0xFFFFFFFF);
344 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL15,
345 0xFFFFFFFF);
346 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL16,
347 0xFFFFFFFF);
348 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL17,
349 0xFFFFFFFF);
350 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_MASK_CNTL18,
351 0xFFFFFFFF);
352 }
353
354 /* Enable fault detection */
355 gpu_write(gpu, REG_A5XX_RBBM_INTERFACE_HANG_INT_CNTL,
356 (1 << 30) | 0xFFFF);
357
358 /* Turn on performance counters */
359 gpu_write(gpu, REG_A5XX_RBBM_PERFCTR_CNTL, 0x01);
360
361 /* Increase VFD cache access so LRZ and other data gets evicted less */
362 gpu_write(gpu, REG_A5XX_UCHE_CACHE_WAYS, 0x02);
363
364 /* Disable L2 bypass in the UCHE */
365 gpu_write(gpu, REG_A5XX_UCHE_TRAP_BASE_LO, 0xFFFF0000);
366 gpu_write(gpu, REG_A5XX_UCHE_TRAP_BASE_HI, 0x0001FFFF);
367 gpu_write(gpu, REG_A5XX_UCHE_WRITE_THRU_BASE_LO, 0xFFFF0000);
368 gpu_write(gpu, REG_A5XX_UCHE_WRITE_THRU_BASE_HI, 0x0001FFFF);
369
370 /* Set the GMEM VA range (0 to gpu->gmem) */
371 gpu_write(gpu, REG_A5XX_UCHE_GMEM_RANGE_MIN_LO, 0x00100000);
372 gpu_write(gpu, REG_A5XX_UCHE_GMEM_RANGE_MIN_HI, 0x00000000);
373 gpu_write(gpu, REG_A5XX_UCHE_GMEM_RANGE_MAX_LO,
374 0x00100000 + adreno_gpu->gmem - 1);
375 gpu_write(gpu, REG_A5XX_UCHE_GMEM_RANGE_MAX_HI, 0x00000000);
376
377 gpu_write(gpu, REG_A5XX_CP_MEQ_THRESHOLDS, 0x40);
378 gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x40);
379 gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_2, 0x80000060);
380 gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x40201B16);
381
382 gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL, (0x400 << 11 | 0x300 << 22));
383
384 if (adreno_gpu->quirks & ADRENO_QUIRK_TWO_PASS_USE_WFI)
385 gpu_rmw(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0, (1 << 8));
386
387 gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0xc0200100);
388
389 /* Enable USE_RETENTION_FLOPS */
390 gpu_write(gpu, REG_A5XX_CP_CHICKEN_DBG, 0x02000000);
391
392 /* Enable ME/PFP split notification */
393 gpu_write(gpu, REG_A5XX_RBBM_AHB_CNTL1, 0xA6FFFFFF);
394
395 /* Enable HWCG */
396 a5xx_enable_hwcg(gpu);
397
398 gpu_write(gpu, REG_A5XX_RBBM_AHB_CNTL2, 0x0000003F);
399
400 /* Set the highest bank bit */
401 gpu_write(gpu, REG_A5XX_TPL1_MODE_CNTL, 2 << 7);
402 gpu_write(gpu, REG_A5XX_RB_MODE_CNTL, 2 << 1);
403
404 /* Protect registers from the CP */
405 gpu_write(gpu, REG_A5XX_CP_PROTECT_CNTL, 0x00000007);
406
407 /* RBBM */
408 gpu_write(gpu, REG_A5XX_CP_PROTECT(0), ADRENO_PROTECT_RW(0x04, 4));
409 gpu_write(gpu, REG_A5XX_CP_PROTECT(1), ADRENO_PROTECT_RW(0x08, 8));
410 gpu_write(gpu, REG_A5XX_CP_PROTECT(2), ADRENO_PROTECT_RW(0x10, 16));
411 gpu_write(gpu, REG_A5XX_CP_PROTECT(3), ADRENO_PROTECT_RW(0x20, 32));
412 gpu_write(gpu, REG_A5XX_CP_PROTECT(4), ADRENO_PROTECT_RW(0x40, 64));
413 gpu_write(gpu, REG_A5XX_CP_PROTECT(5), ADRENO_PROTECT_RW(0x80, 64));
414
415 /* Content protect */
416 gpu_write(gpu, REG_A5XX_CP_PROTECT(6),
417 ADRENO_PROTECT_RW(REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO,
418 16));
419 gpu_write(gpu, REG_A5XX_CP_PROTECT(7),
420 ADRENO_PROTECT_RW(REG_A5XX_RBBM_SECVID_TRUST_CNTL, 2));
421
422 /* CP */
423 gpu_write(gpu, REG_A5XX_CP_PROTECT(8), ADRENO_PROTECT_RW(0x800, 64));
424 gpu_write(gpu, REG_A5XX_CP_PROTECT(9), ADRENO_PROTECT_RW(0x840, 8));
425 gpu_write(gpu, REG_A5XX_CP_PROTECT(10), ADRENO_PROTECT_RW(0x880, 32));
426 gpu_write(gpu, REG_A5XX_CP_PROTECT(11), ADRENO_PROTECT_RW(0xAA0, 1));
427
428 /* RB */
429 gpu_write(gpu, REG_A5XX_CP_PROTECT(12), ADRENO_PROTECT_RW(0xCC0, 1));
430 gpu_write(gpu, REG_A5XX_CP_PROTECT(13), ADRENO_PROTECT_RW(0xCF0, 2));
431
432 /* VPC */
433 gpu_write(gpu, REG_A5XX_CP_PROTECT(14), ADRENO_PROTECT_RW(0xE68, 8));
434 gpu_write(gpu, REG_A5XX_CP_PROTECT(15), ADRENO_PROTECT_RW(0xE70, 4));
435
436 /* UCHE */
437 gpu_write(gpu, REG_A5XX_CP_PROTECT(16), ADRENO_PROTECT_RW(0xE80, 16));
438
439 if (adreno_is_a530(adreno_gpu))
440 gpu_write(gpu, REG_A5XX_CP_PROTECT(17),
441 ADRENO_PROTECT_RW(0x10000, 0x8000));
442
443 gpu_write(gpu, REG_A5XX_RBBM_SECVID_TSB_CNTL, 0);
444 /*
445 * Disable the trusted memory range - we don't actually support secure
446 * memory rendering at this point in time and we don't want to block off
447 * part of the virtual memory space.
448 */
449 gpu_write64(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO,
450 REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_HI, 0x00000000);
451 gpu_write(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_SIZE, 0x00000000);
452
453 /* Load the GPMU firmware before starting the HW init */
454 a5xx_gpmu_ucode_init(gpu);
455
456 ret = adreno_hw_init(gpu);
457 if (ret)
458 return ret;
459
460 ret = a5xx_ucode_init(gpu);
461 if (ret)
462 return ret;
463
464 /* Enable the RBBM interrupts; sources outside A5XX_INT_MASK stay masked */
465 gpu_write(gpu, REG_A5XX_RBBM_INT_0_MASK, A5XX_INT_MASK);
466
467 /* Clear ME_HALT to start the micro engine */
468 gpu_write(gpu, REG_A5XX_CP_PFP_ME_CNTL, 0);
469 ret = a5xx_me_init(gpu);
470 if (ret)
471 return ret;
472
473 ret = a5xx_power_init(gpu);
474 if (ret)
475 return ret;
476
477 /*
478 * Send a pipeline event stat to get misbehaving counters to start
479 * ticking correctly
480 */
481 if (adreno_is_a530(adreno_gpu)) {
482 OUT_PKT7(gpu->rb, CP_EVENT_WRITE, 1);
483 OUT_RING(gpu->rb, 0x0F);
484
485 gpu->funcs->flush(gpu);
486 if (!gpu->funcs->idle(gpu))
487 return -EINVAL;
488 }
489
490 /* Put the GPU into unsecure mode */
491 gpu_write(gpu, REG_A5XX_RBBM_SECVID_TRUST_CNTL, 0x0);
492
493 return 0;
494}
495
496static void a5xx_recover(struct msm_gpu *gpu)
497{
498 int i;
499
500 adreno_dump_info(gpu);
501
502 for (i = 0; i < 8; i++) {
503 printk("CP_SCRATCH_REG%d: %u\n", i,
504 gpu_read(gpu, REG_A5XX_CP_SCRATCH_REG(i)));
505 }
506
507 if (hang_debug)
508 a5xx_dump(gpu);
509
510 gpu_write(gpu, REG_A5XX_RBBM_SW_RESET_CMD, 1);
511 gpu_read(gpu, REG_A5XX_RBBM_SW_RESET_CMD);
512 gpu_write(gpu, REG_A5XX_RBBM_SW_RESET_CMD, 0);
513 adreno_recover(gpu);
514}
515
516static void a5xx_destroy(struct msm_gpu *gpu)
517{
518 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
519 struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
520
521 DBG("%s", gpu->name);
522
523 if (a5xx_gpu->pm4_bo) {
524 if (a5xx_gpu->pm4_iova)
525 msm_gem_put_iova(a5xx_gpu->pm4_bo, gpu->id);
526 drm_gem_object_unreference_unlocked(a5xx_gpu->pm4_bo);
527 }
528
529 if (a5xx_gpu->pfp_bo) {
530 if (a5xx_gpu->pfp_iova)
531 msm_gem_put_iova(a5xx_gpu->pfp_bo, gpu->id);
532 drm_gem_object_unreference_unlocked(a5xx_gpu->pfp_bo);
533 }
534
535 if (a5xx_gpu->gpmu_bo) {
536 if (a5xx_gpu->gpmu_iova)
537 msm_gem_put_iova(a5xx_gpu->gpmu_bo, gpu->id);
538 drm_gem_object_unreference_unlocked(a5xx_gpu->gpmu_bo);
539 }
540
541 adreno_gpu_cleanup(adreno_gpu);
542 kfree(a5xx_gpu);
543}
544
545static inline bool _a5xx_check_idle(struct msm_gpu *gpu)
546{
547 if (gpu_read(gpu, REG_A5XX_RBBM_STATUS) & ~A5XX_RBBM_STATUS_HI_BUSY)
548 return false;
549
550 /*
551 * Nearly every abnormality ends up pausing the GPU and triggering a
552 * fault so we can safely just watch for this one interrupt to fire
553 */
554 return !(gpu_read(gpu, REG_A5XX_RBBM_INT_0_STATUS) &
555 A5XX_RBBM_INT_0_MASK_MISC_HANG_DETECT);
556}
557
558static bool a5xx_idle(struct msm_gpu *gpu)
559{
560 /* wait for CP to drain ringbuffer: */
561 if (!adreno_idle(gpu))
562 return false;
563
564 if (spin_until(_a5xx_check_idle(gpu))) {
565 DRM_ERROR("%s: %ps: timeout waiting for GPU to idle: status %8.8X irq %8.8X\n",
566 gpu->name, __builtin_return_address(0),
567 gpu_read(gpu, REG_A5XX_RBBM_STATUS),
568 gpu_read(gpu, REG_A5XX_RBBM_INT_0_STATUS));
569
570 return false;
571 }
572
573 return true;
574}
575
576static void a5xx_cp_err_irq(struct msm_gpu *gpu)
577{
578 u32 status = gpu_read(gpu, REG_A5XX_CP_INTERRUPT_STATUS);
579
580 if (status & A5XX_CP_INT_CP_OPCODE_ERROR) {
581 u32 val;
582
583 gpu_write(gpu, REG_A5XX_CP_PFP_STAT_ADDR, 0);
584
585 /*
586 * REG_A5XX_CP_PFP_STAT_DATA is indexed, and we want index 1 so
587 * read it twice
588 */
589
590 gpu_read(gpu, REG_A5XX_CP_PFP_STAT_DATA);
591 val = gpu_read(gpu, REG_A5XX_CP_PFP_STAT_DATA);
592
593 dev_err_ratelimited(gpu->dev->dev, "CP | opcode error | possible opcode=0x%8.8X\n",
594 val);
595 }
596
597 if (status & A5XX_CP_INT_CP_HW_FAULT_ERROR)
598 dev_err_ratelimited(gpu->dev->dev, "CP | HW fault | status=0x%8.8X\n",
599 gpu_read(gpu, REG_A5XX_CP_HW_FAULT));
600
601 if (status & A5XX_CP_INT_CP_DMA_ERROR)
602 dev_err_ratelimited(gpu->dev->dev, "CP | DMA error\n");
603
604 if (status & A5XX_CP_INT_CP_REGISTER_PROTECTION_ERROR) {
605 u32 val = gpu_read(gpu, REG_A5XX_CP_PROTECT_STATUS);
606
607 dev_err_ratelimited(gpu->dev->dev,
608 "CP | protected mode error | %s | addr=0x%8.8X | status=0x%8.8X\n",
609 val & (1 << 24) ? "WRITE" : "READ",
610 (val & 0xFFFFF) >> 2, val);
611 }
612
613 if (status & A5XX_CP_INT_CP_AHB_ERROR) {
614 u32 status = gpu_read(gpu, REG_A5XX_CP_AHB_FAULT);
615 const char *access[16] = { "reserved", "reserved",
616 "timestamp lo", "timestamp hi", "pfp read", "pfp write",
617 "", "", "me read", "me write", "", "", "crashdump read",
618 "crashdump write", "", "" };
619
620 dev_err_ratelimited(gpu->dev->dev,
621 "CP | AHB error | addr=%X access=%s error=%d | status=0x%8.8X\n",
622 status & 0xFFFFF, access[(status >> 24) & 0xF],
623 !!(status & (1 << 31)), status);
624 }
625}
626
627static void a5xx_rbbm_err_irq(struct msm_gpu *gpu)
628{
629 u32 status = gpu_read(gpu, REG_A5XX_RBBM_INT_0_STATUS);
630
631 if (status & A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR) {
632 u32 val = gpu_read(gpu, REG_A5XX_RBBM_AHB_ERROR_STATUS);
633
634 dev_err_ratelimited(gpu->dev->dev,
635 "RBBM | AHB bus error | %s | addr=0x%X | ports=0x%X:0x%X\n",
636 val & (1 << 28) ? "WRITE" : "READ",
637 (val & 0xFFFFF) >> 2, (val >> 20) & 0x3,
638 (val >> 24) & 0xF);
639
640 /* Clear the error */
641 gpu_write(gpu, REG_A5XX_RBBM_AHB_CMD, (1 << 4));
642 }
643
644 if (status & A5XX_RBBM_INT_0_MASK_RBBM_TRANSFER_TIMEOUT)
645 dev_err_ratelimited(gpu->dev->dev, "RBBM | AHB transfer timeout\n");
646
647 if (status & A5XX_RBBM_INT_0_MASK_RBBM_ME_MS_TIMEOUT)
648 dev_err_ratelimited(gpu->dev->dev, "RBBM | ME master split | status=0x%X\n",
649 gpu_read(gpu, REG_A5XX_RBBM_AHB_ME_SPLIT_STATUS));
650
651 if (status & A5XX_RBBM_INT_0_MASK_RBBM_PFP_MS_TIMEOUT)
652 dev_err_ratelimited(gpu->dev->dev, "RBBM | PFP master split | status=0x%X\n",
653 gpu_read(gpu, REG_A5XX_RBBM_AHB_PFP_SPLIT_STATUS));
654
655 if (status & A5XX_RBBM_INT_0_MASK_RBBM_ETS_MS_TIMEOUT)
656 dev_err_ratelimited(gpu->dev->dev, "RBBM | ETS master split | status=0x%X\n",
657 gpu_read(gpu, REG_A5XX_RBBM_AHB_ETS_SPLIT_STATUS));
658
659 if (status & A5XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNC_OVERFLOW)
660 dev_err_ratelimited(gpu->dev->dev, "RBBM | ATB ASYNC overflow\n");
661
662 if (status & A5XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW)
663 dev_err_ratelimited(gpu->dev->dev, "RBBM | ATB bus overflow\n");
664}
665
666static void a5xx_uche_err_irq(struct msm_gpu *gpu)
667{
668 uint64_t addr = (uint64_t) gpu_read(gpu, REG_A5XX_UCHE_TRAP_LOG_HI) << 32;
669
670 addr |= gpu_read(gpu, REG_A5XX_UCHE_TRAP_LOG_LO);
671
672 dev_err_ratelimited(gpu->dev->dev, "UCHE | Out of bounds access | addr=0x%llX\n",
673 addr);
674}
675
676static void a5xx_gpmu_err_irq(struct msm_gpu *gpu)
677{
678 dev_err_ratelimited(gpu->dev->dev, "GPMU | voltage droop\n");
679}
680
681#define RBBM_ERROR_MASK \
682 (A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR | \
683 A5XX_RBBM_INT_0_MASK_RBBM_TRANSFER_TIMEOUT | \
684 A5XX_RBBM_INT_0_MASK_RBBM_ME_MS_TIMEOUT | \
685 A5XX_RBBM_INT_0_MASK_RBBM_PFP_MS_TIMEOUT | \
686 A5XX_RBBM_INT_0_MASK_RBBM_ETS_MS_TIMEOUT | \
687 A5XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNC_OVERFLOW)
688
689static irqreturn_t a5xx_irq(struct msm_gpu *gpu)
690{
691 u32 status = gpu_read(gpu, REG_A5XX_RBBM_INT_0_STATUS);
692
693 gpu_write(gpu, REG_A5XX_RBBM_INT_CLEAR_CMD, status);
694
695 if (status & RBBM_ERROR_MASK)
696 a5xx_rbbm_err_irq(gpu);
697
698 if (status & A5XX_RBBM_INT_0_MASK_CP_HW_ERROR)
699 a5xx_cp_err_irq(gpu);
700
701 if (status & A5XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS)
702 a5xx_uche_err_irq(gpu);
703
704 if (status & A5XX_RBBM_INT_0_MASK_GPMU_VOLTAGE_DROOP)
705 a5xx_gpmu_err_irq(gpu);
706
707 if (status & A5XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS)
708 msm_gpu_retire(gpu);
709
710 return IRQ_HANDLED;
711}
712
713static const u32 a5xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
714 REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_A5XX_CP_RB_BASE),
715 REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE_HI, REG_A5XX_CP_RB_BASE_HI),
716 REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR, REG_A5XX_CP_RB_RPTR_ADDR),
717 REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR_HI,
718 REG_A5XX_CP_RB_RPTR_ADDR_HI),
719 REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_A5XX_CP_RB_RPTR),
720 REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_A5XX_CP_RB_WPTR),
721 REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_A5XX_CP_RB_CNTL),
722};
723
724static const u32 a5xx_registers[] = {
725 0x0000, 0x0002, 0x0004, 0x0020, 0x0022, 0x0026, 0x0029, 0x002B,
726 0x002E, 0x0035, 0x0038, 0x0042, 0x0044, 0x0044, 0x0047, 0x0095,
727 0x0097, 0x00BB, 0x03A0, 0x0464, 0x0469, 0x046F, 0x04D2, 0x04D3,
728 0x04E0, 0x0533, 0x0540, 0x0555, 0xF400, 0xF400, 0xF800, 0xF807,
729 0x0800, 0x081A, 0x081F, 0x0841, 0x0860, 0x0860, 0x0880, 0x08A0,
730 0x0B00, 0x0B12, 0x0B15, 0x0B28, 0x0B78, 0x0B7F, 0x0BB0, 0x0BBD,
731 0x0BC0, 0x0BC6, 0x0BD0, 0x0C53, 0x0C60, 0x0C61, 0x0C80, 0x0C82,
732 0x0C84, 0x0C85, 0x0C90, 0x0C98, 0x0CA0, 0x0CA0, 0x0CB0, 0x0CB2,
733 0x2180, 0x2185, 0x2580, 0x2585, 0x0CC1, 0x0CC1, 0x0CC4, 0x0CC7,
734 0x0CCC, 0x0CCC, 0x0CD0, 0x0CD8, 0x0CE0, 0x0CE5, 0x0CE8, 0x0CE8,
735 0x0CEC, 0x0CF1, 0x0CFB, 0x0D0E, 0x2100, 0x211E, 0x2140, 0x2145,
736 0x2500, 0x251E, 0x2540, 0x2545, 0x0D10, 0x0D17, 0x0D20, 0x0D23,
737 0x0D30, 0x0D30, 0x20C0, 0x20C0, 0x24C0, 0x24C0, 0x0E40, 0x0E43,
738 0x0E4A, 0x0E4A, 0x0E50, 0x0E57, 0x0E60, 0x0E7C, 0x0E80, 0x0E8E,
739 0x0E90, 0x0E96, 0x0EA0, 0x0EA8, 0x0EB0, 0x0EB2, 0xE140, 0xE147,
740 0xE150, 0xE187, 0xE1A0, 0xE1A9, 0xE1B0, 0xE1B6, 0xE1C0, 0xE1C7,
741 0xE1D0, 0xE1D1, 0xE200, 0xE201, 0xE210, 0xE21C, 0xE240, 0xE268,
742 0xE000, 0xE006, 0xE010, 0xE09A, 0xE0A0, 0xE0A4, 0xE0AA, 0xE0EB,
743 0xE100, 0xE105, 0xE380, 0xE38F, 0xE3B0, 0xE3B0, 0xE400, 0xE405,
744 0xE408, 0xE4E9, 0xE4F0, 0xE4F0, 0xE280, 0xE280, 0xE282, 0xE2A3,
745 0xE2A5, 0xE2C2, 0xE940, 0xE947, 0xE950, 0xE987, 0xE9A0, 0xE9A9,
746 0xE9B0, 0xE9B6, 0xE9C0, 0xE9C7, 0xE9D0, 0xE9D1, 0xEA00, 0xEA01,
747 0xEA10, 0xEA1C, 0xEA40, 0xEA68, 0xE800, 0xE806, 0xE810, 0xE89A,
748 0xE8A0, 0xE8A4, 0xE8AA, 0xE8EB, 0xE900, 0xE905, 0xEB80, 0xEB8F,
749 0xEBB0, 0xEBB0, 0xEC00, 0xEC05, 0xEC08, 0xECE9, 0xECF0, 0xECF0,
750 0xEA80, 0xEA80, 0xEA82, 0xEAA3, 0xEAA5, 0xEAC2, 0xA800, 0xA8FF,
751 0xAC60, 0xAC60, 0xB000, 0xB97F, 0xB9A0, 0xB9BF,
752 ~0
753};
754
755static void a5xx_dump(struct msm_gpu *gpu)
756{
757 dev_info(gpu->dev->dev, "status: %08x\n",
758 gpu_read(gpu, REG_A5XX_RBBM_STATUS));
759 adreno_dump(gpu);
760}
761
762static int a5xx_pm_resume(struct msm_gpu *gpu)
763{
764 int ret;
765
766 /* Turn on the core power */
767 ret = msm_gpu_pm_resume(gpu);
768 if (ret)
769 return ret;
770
771 /* Turn on the RBCCU domain first to limit the chances of voltage droop */
772 gpu_write(gpu, REG_A5XX_GPMU_RBCCU_POWER_CNTL, 0x778000);
773
774 /* Wait 3 usecs before polling */
775 udelay(3);
776
777 ret = spin_usecs(gpu, 20, REG_A5XX_GPMU_RBCCU_PWR_CLK_STATUS,
778 (1 << 20), (1 << 20));
779 if (ret) {
780 DRM_ERROR("%s: timeout waiting for RBCCU GDSC enable: %X\n",
781 gpu->name,
782 gpu_read(gpu, REG_A5XX_GPMU_RBCCU_PWR_CLK_STATUS));
783 return ret;
784 }
785
786 /* Turn on the SP domain */
787 gpu_write(gpu, REG_A5XX_GPMU_SP_POWER_CNTL, 0x778000);
788 ret = spin_usecs(gpu, 20, REG_A5XX_GPMU_SP_PWR_CLK_STATUS,
789 (1 << 20), (1 << 20));
790 if (ret)
791 DRM_ERROR("%s: timeout waiting for SP GDSC enable\n",
792 gpu->name);
793
794 return ret;
795}
796
797static int a5xx_pm_suspend(struct msm_gpu *gpu)
798{
799 /* Clear the VBIF pipe before shutting down */
800 gpu_write(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL0, 0xF);
801 spin_until((gpu_read(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL1) & 0xF) == 0xF);
802
803 gpu_write(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL0, 0);
804
805 /*
806 * Reset the VBIF before power collapse to avoid issue with FIFO
807 * entries
808 */
809 gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x003C0000);
810 gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x00000000);
811
812 return msm_gpu_pm_suspend(gpu);
813}
814
815static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
816{
817 *value = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_CP_0_LO,
818 REG_A5XX_RBBM_PERFCTR_CP_0_HI);
819
820 return 0;
821}
822
823#ifdef CONFIG_DEBUG_FS
824static void a5xx_show(struct msm_gpu *gpu, struct seq_file *m)
825{
826 gpu->funcs->pm_resume(gpu);
827
828 seq_printf(m, "status: %08x\n",
829 gpu_read(gpu, REG_A5XX_RBBM_STATUS));
830 gpu->funcs->pm_suspend(gpu);
831
832 adreno_show(gpu, m);
833}
834#endif
835
836static const struct adreno_gpu_funcs funcs = {
837 .base = {
838 .get_param = adreno_get_param,
839 .hw_init = a5xx_hw_init,
840 .pm_suspend = a5xx_pm_suspend,
841 .pm_resume = a5xx_pm_resume,
842 .recover = a5xx_recover,
843 .last_fence = adreno_last_fence,
844 .submit = a5xx_submit,
845 .flush = adreno_flush,
846 .idle = a5xx_idle,
847 .irq = a5xx_irq,
848 .destroy = a5xx_destroy,
849 .show = a5xx_show,
850 },
851 .get_timestamp = a5xx_get_timestamp,
852};
853
854struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
855{
856 struct msm_drm_private *priv = dev->dev_private;
857 struct platform_device *pdev = priv->gpu_pdev;
858 struct a5xx_gpu *a5xx_gpu = NULL;
859 struct adreno_gpu *adreno_gpu;
860 struct msm_gpu *gpu;
861 int ret;
862
863 if (!pdev) {
864 dev_err(dev->dev, "No A5XX device is defined\n");
865 return ERR_PTR(-ENXIO);
866 }
867
868 a5xx_gpu = kzalloc(sizeof(*a5xx_gpu), GFP_KERNEL);
869 if (!a5xx_gpu)
870 return ERR_PTR(-ENOMEM);
871
872 adreno_gpu = &a5xx_gpu->base;
873 gpu = &adreno_gpu->base;
874
875 a5xx_gpu->pdev = pdev;
876 adreno_gpu->registers = a5xx_registers;
877 adreno_gpu->reg_offsets = a5xx_register_offsets;
878
879 a5xx_gpu->lm_leakage = 0x4E001A;
880
881 ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs);
882 if (ret) {
883 a5xx_destroy(&(a5xx_gpu->base.base));
884 return ERR_PTR(ret);
885 }
886
887 return gpu;
888}
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
new file mode 100644
index 000000000000..1590f845d554
--- /dev/null
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
@@ -0,0 +1,60 @@
1/* Copyright (c) 2016 The Linux Foundation. All rights reserved.
2 *
3 * This program is free software; you can redistribute it and/or modify
4 * it under the terms of the GNU General Public License version 2 and
5 * only version 2 as published by the Free Software Foundation.
6 *
7 * This program is distributed in the hope that it will be useful,
8 * but WITHOUT ANY WARRANTY; without even the implied warranty of
9 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 * GNU General Public License for more details.
11 *
12 */
13#ifndef __A5XX_GPU_H__
14#define __A5XX_GPU_H__
15
16#include "adreno_gpu.h"
17
18/* Bringing over the hack from the previous targets */
19#undef ROP_COPY
20#undef ROP_XOR
21
22#include "a5xx.xml.h"
23
24struct a5xx_gpu {
25 struct adreno_gpu base;
26 struct platform_device *pdev;
27
28 struct drm_gem_object *pm4_bo;
29 uint64_t pm4_iova;
30
31 struct drm_gem_object *pfp_bo;
32 uint64_t pfp_iova;
33
34 struct drm_gem_object *gpmu_bo;
35 uint64_t gpmu_iova;
36 uint32_t gpmu_dwords;
37
38 uint32_t lm_leakage;
39};
40
41#define to_a5xx_gpu(x) container_of(x, struct a5xx_gpu, base)
42
43int a5xx_power_init(struct msm_gpu *gpu);
44void a5xx_gpmu_ucode_init(struct msm_gpu *gpu);
45
46static inline int spin_usecs(struct msm_gpu *gpu, uint32_t usecs,
47 uint32_t reg, uint32_t mask, uint32_t value)
48{
49 while (usecs--) {
50 udelay(1);
51 if ((gpu_read(gpu, reg) & mask) == value)
52 return 0;
53 cpu_relax();
54 }
55
56 return -ETIMEDOUT;
57}
58
59
60#endif /* __A5XX_GPU_H__ */
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
new file mode 100644
index 000000000000..72d52c71f769
--- /dev/null
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -0,0 +1,344 @@
1/* Copyright (c) 2016 The Linux Foundation. All rights reserved.
2 *
3 * This program is free software; you can redistribute it and/or modify
4 * it under the terms of the GNU General Public License version 2 and
5 * only version 2 as published by the Free Software Foundation.
6 *
7 * This program is distributed in the hope that it will be useful,
8 * but WITHOUT ANY WARRANTY; without even the implied warranty of
9 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 * GNU General Public License for more details.
11 *
12 */
13
14#include <linux/pm_opp.h>
15#include "a5xx_gpu.h"
16
17/*
18 * The GPMU data block is a block of shared registers that can be used to
 19 * communicate back and forth. These "registers" are defined by convention
 20 * with the GPMU firmware and are not bound to any specific hardware design
21 */
22
23#define AGC_INIT_BASE REG_A5XX_GPMU_DATA_RAM_BASE
24#define AGC_INIT_MSG_MAGIC (AGC_INIT_BASE + 5)
25#define AGC_MSG_BASE (AGC_INIT_BASE + 7)
26
27#define AGC_MSG_STATE (AGC_MSG_BASE + 0)
28#define AGC_MSG_COMMAND (AGC_MSG_BASE + 1)
29#define AGC_MSG_PAYLOAD_SIZE (AGC_MSG_BASE + 3)
30#define AGC_MSG_PAYLOAD(_o) ((AGC_MSG_BASE + 5) + (_o))
31
32#define AGC_POWER_CONFIG_PRODUCTION_ID 1
33#define AGC_INIT_MSG_VALUE 0xBABEFACE
34
35static struct {
36 uint32_t reg;
37 uint32_t value;
38} a5xx_sequence_regs[] = {
39 { 0xB9A1, 0x00010303 },
40 { 0xB9A2, 0x13000000 },
41 { 0xB9A3, 0x00460020 },
42 { 0xB9A4, 0x10000000 },
43 { 0xB9A5, 0x040A1707 },
44 { 0xB9A6, 0x00010000 },
45 { 0xB9A7, 0x0E000904 },
46 { 0xB9A8, 0x10000000 },
47 { 0xB9A9, 0x01165000 },
48 { 0xB9AA, 0x000E0002 },
49 { 0xB9AB, 0x03884141 },
50 { 0xB9AC, 0x10000840 },
51 { 0xB9AD, 0x572A5000 },
52 { 0xB9AE, 0x00000003 },
53 { 0xB9AF, 0x00000000 },
54 { 0xB9B0, 0x10000000 },
55 { 0xB828, 0x6C204010 },
56 { 0xB829, 0x6C204011 },
57 { 0xB82A, 0x6C204012 },
58 { 0xB82B, 0x6C204013 },
59 { 0xB82C, 0x6C204014 },
60 { 0xB90F, 0x00000004 },
61 { 0xB910, 0x00000002 },
62 { 0xB911, 0x00000002 },
63 { 0xB912, 0x00000002 },
64 { 0xB913, 0x00000002 },
65 { 0xB92F, 0x00000004 },
66 { 0xB930, 0x00000005 },
67 { 0xB931, 0x00000005 },
68 { 0xB932, 0x00000005 },
69 { 0xB933, 0x00000005 },
70 { 0xB96F, 0x00000001 },
71 { 0xB970, 0x00000003 },
72 { 0xB94F, 0x00000004 },
73 { 0xB950, 0x0000000B },
74 { 0xB951, 0x0000000B },
75 { 0xB952, 0x0000000B },
76 { 0xB953, 0x0000000B },
77 { 0xB907, 0x00000019 },
78 { 0xB927, 0x00000019 },
79 { 0xB947, 0x00000019 },
80 { 0xB967, 0x00000019 },
81 { 0xB987, 0x00000019 },
82 { 0xB906, 0x00220001 },
83 { 0xB926, 0x00220001 },
84 { 0xB946, 0x00220001 },
85 { 0xB966, 0x00220001 },
86 { 0xB986, 0x00300000 },
87 { 0xAC40, 0x0340FF41 },
88 { 0xAC41, 0x03BEFED0 },
89 { 0xAC42, 0x00331FED },
90 { 0xAC43, 0x021FFDD3 },
91 { 0xAC44, 0x5555AAAA },
92 { 0xAC45, 0x5555AAAA },
93 { 0xB9BA, 0x00000008 },
94};
95
96/*
97 * Get the actual voltage value for the operating point at the specified
98 * frequency
99 */
100static inline uint32_t _get_mvolts(struct msm_gpu *gpu, uint32_t freq)
101{
102 struct drm_device *dev = gpu->dev;
103 struct msm_drm_private *priv = dev->dev_private;
104 struct platform_device *pdev = priv->gpu_pdev;
105 struct dev_pm_opp *opp;
106
107 opp = dev_pm_opp_find_freq_exact(&pdev->dev, freq, true);
108
109 return (!IS_ERR(opp)) ? dev_pm_opp_get_voltage(opp) / 1000 : 0;
110}
111
112/* Setup thermal limit management */
113static void a5xx_lm_setup(struct msm_gpu *gpu)
114{
115 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
116 struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
117 unsigned int i;
118
119 /* Write the block of sequence registers */
120 for (i = 0; i < ARRAY_SIZE(a5xx_sequence_regs); i++)
121 gpu_write(gpu, a5xx_sequence_regs[i].reg,
122 a5xx_sequence_regs[i].value);
123
124 /* Hard code the A530 GPU thermal sensor ID for the GPMU */
125 gpu_write(gpu, REG_A5XX_GPMU_TEMP_SENSOR_ID, 0x60007);
126 gpu_write(gpu, REG_A5XX_GPMU_DELTA_TEMP_THRESHOLD, 0x01);
127 gpu_write(gpu, REG_A5XX_GPMU_TEMP_SENSOR_CONFIG, 0x01);
128
 129 /* Until we get clock scaling, 0 is always the active power level */
130 gpu_write(gpu, REG_A5XX_GPMU_GPMU_VOLTAGE, 0x80000000 | 0);
131
132 gpu_write(gpu, REG_A5XX_GPMU_BASE_LEAKAGE, a5xx_gpu->lm_leakage);
133
134 /* The threshold is fixed at 6000 for A530 */
135 gpu_write(gpu, REG_A5XX_GPMU_GPMU_PWR_THRESHOLD, 0x80000000 | 6000);
136
137 gpu_write(gpu, REG_A5XX_GPMU_BEC_ENABLE, 0x10001FFF);
138 gpu_write(gpu, REG_A5XX_GDPM_CONFIG1, 0x00201FF1);
139
140 /* Write the voltage table */
141 gpu_write(gpu, REG_A5XX_GPMU_BEC_ENABLE, 0x10001FFF);
142 gpu_write(gpu, REG_A5XX_GDPM_CONFIG1, 0x201FF1);
143
144 gpu_write(gpu, AGC_MSG_STATE, 1);
145 gpu_write(gpu, AGC_MSG_COMMAND, AGC_POWER_CONFIG_PRODUCTION_ID);
146
147 /* Write the max power - hard coded to 5448 for A530 */
148 gpu_write(gpu, AGC_MSG_PAYLOAD(0), 5448);
149 gpu_write(gpu, AGC_MSG_PAYLOAD(1), 1);
150
151 /*
152 * For now just write the one voltage level - we will do more when we
153 * can do scaling
154 */
155 gpu_write(gpu, AGC_MSG_PAYLOAD(2), _get_mvolts(gpu, gpu->fast_rate));
156 gpu_write(gpu, AGC_MSG_PAYLOAD(3), gpu->fast_rate / 1000000);
157
158 gpu_write(gpu, AGC_MSG_PAYLOAD_SIZE, 4 * sizeof(uint32_t));
159 gpu_write(gpu, AGC_INIT_MSG_MAGIC, AGC_INIT_MSG_VALUE);
160}
161
 162/* Enable SP/TP power collapse */
163static void a5xx_pc_init(struct msm_gpu *gpu)
164{
165 gpu_write(gpu, REG_A5XX_GPMU_PWR_COL_INTER_FRAME_CTRL, 0x7F);
166 gpu_write(gpu, REG_A5XX_GPMU_PWR_COL_BINNING_CTRL, 0);
167 gpu_write(gpu, REG_A5XX_GPMU_PWR_COL_INTER_FRAME_HYST, 0xA0080);
168 gpu_write(gpu, REG_A5XX_GPMU_PWR_COL_STAGGER_DELAY, 0x600040);
169}
170
171/* Enable the GPMU microcontroller */
172static int a5xx_gpmu_init(struct msm_gpu *gpu)
173{
174 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
175 struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
176 struct msm_ringbuffer *ring = gpu->rb;
177
178 if (!a5xx_gpu->gpmu_dwords)
179 return 0;
180
181 /* Turn off protected mode for this operation */
182 OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
183 OUT_RING(ring, 0);
184
185 /* Kick off the IB to load the GPMU microcode */
186 OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
187 OUT_RING(ring, lower_32_bits(a5xx_gpu->gpmu_iova));
188 OUT_RING(ring, upper_32_bits(a5xx_gpu->gpmu_iova));
189 OUT_RING(ring, a5xx_gpu->gpmu_dwords);
190
191 /* Turn back on protected mode */
192 OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
193 OUT_RING(ring, 1);
194
195 gpu->funcs->flush(gpu);
196
197 if (!gpu->funcs->idle(gpu)) {
198 DRM_ERROR("%s: Unable to load GPMU firmware. GPMU will not be active\n",
199 gpu->name);
200 return -EINVAL;
201 }
202
203 gpu_write(gpu, REG_A5XX_GPMU_WFI_CONFIG, 0x4014);
204
205 /* Kick off the GPMU */
206 gpu_write(gpu, REG_A5XX_GPMU_CM3_SYSRESET, 0x0);
207
208 /*
209 * Wait for the GPMU to respond. It isn't fatal if it doesn't, we just
210 * won't have advanced power collapse.
211 */
212 if (spin_usecs(gpu, 25, REG_A5XX_GPMU_GENERAL_0, 0xFFFFFFFF,
213 0xBABEFACE))
214 DRM_ERROR("%s: GPMU firmware initialization timed out\n",
215 gpu->name);
216
217 return 0;
218}
219
220/* Enable limits management */
221static void a5xx_lm_enable(struct msm_gpu *gpu)
222{
223 gpu_write(gpu, REG_A5XX_GDPM_INT_MASK, 0x0);
224 gpu_write(gpu, REG_A5XX_GDPM_INT_EN, 0x0A);
225 gpu_write(gpu, REG_A5XX_GPMU_GPMU_VOLTAGE_INTR_EN_MASK, 0x01);
226 gpu_write(gpu, REG_A5XX_GPMU_TEMP_THRESHOLD_INTR_EN_MASK, 0x50000);
227 gpu_write(gpu, REG_A5XX_GPMU_THROTTLE_UNMASK_FORCE_CTRL, 0x30000);
228
229 gpu_write(gpu, REG_A5XX_GPMU_CLOCK_THROTTLE_CTRL, 0x011);
230}
231
232int a5xx_power_init(struct msm_gpu *gpu)
233{
234 int ret;
235
236 /* Set up the limits management */
237 a5xx_lm_setup(gpu);
238
 239 /* Set up SP/TP power collapse */
240 a5xx_pc_init(gpu);
241
242 /* Start the GPMU */
243 ret = a5xx_gpmu_init(gpu);
244 if (ret)
245 return ret;
246
247 /* Start the limits management */
248 a5xx_lm_enable(gpu);
249
250 return 0;
251}
252
253void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
254{
255 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
256 struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
257 struct drm_device *drm = gpu->dev;
258 const struct firmware *fw;
259 uint32_t dwords = 0, offset = 0, bosize;
260 unsigned int *data, *ptr, *cmds;
261 unsigned int cmds_size;
262
263 if (a5xx_gpu->gpmu_bo)
264 return;
265
266 /* Get the firmware */
267 if (request_firmware(&fw, adreno_gpu->info->gpmufw, drm->dev)) {
268 DRM_ERROR("%s: Could not get GPMU firmware. GPMU will not be active\n",
269 gpu->name);
270 return;
271 }
272
273 data = (unsigned int *) fw->data;
274
275 /*
276 * The first dword is the size of the remaining data in dwords. Use it
277 * as a checksum of sorts and make sure it matches the actual size of
278 * the firmware that we read
279 */
280
281 if (fw->size < 8 || (data[0] < 2) || (data[0] >= (fw->size >> 2)))
282 goto out;
283
284 /* The second dword is an ID - look for 2 (GPMU_FIRMWARE_ID) */
285 if (data[1] != 2)
286 goto out;
287
288 cmds = data + data[2] + 3;
289 cmds_size = data[0] - data[2] - 2;
290
291 /*
 292 * A single type4 opcode can only have so many values attached, so
 293 * add enough opcodes to load all the commands
294 */
295 bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
296
297 mutex_lock(&drm->struct_mutex);
298 a5xx_gpu->gpmu_bo = msm_gem_new(drm, bosize, MSM_BO_UNCACHED);
299 mutex_unlock(&drm->struct_mutex);
300
301 if (IS_ERR(a5xx_gpu->gpmu_bo))
302 goto err;
303
304 if (msm_gem_get_iova(a5xx_gpu->gpmu_bo, gpu->id, &a5xx_gpu->gpmu_iova))
305 goto err;
306
307 ptr = msm_gem_get_vaddr(a5xx_gpu->gpmu_bo);
308 if (!ptr)
309 goto err;
310
311 while (cmds_size > 0) {
312 int i;
313 uint32_t _size = cmds_size > TYPE4_MAX_PAYLOAD ?
314 TYPE4_MAX_PAYLOAD : cmds_size;
315
316 ptr[dwords++] = PKT4(REG_A5XX_GPMU_INST_RAM_BASE + offset,
317 _size);
318
319 for (i = 0; i < _size; i++)
320 ptr[dwords++] = *cmds++;
321
322 offset += _size;
323 cmds_size -= _size;
324 }
325
326 msm_gem_put_vaddr(a5xx_gpu->gpmu_bo);
327 a5xx_gpu->gpmu_dwords = dwords;
328
329 goto out;
330
331err:
332 if (a5xx_gpu->gpmu_iova)
333 msm_gem_put_iova(a5xx_gpu->gpmu_bo, gpu->id);
334 if (a5xx_gpu->gpmu_bo)
335 drm_gem_object_unreference_unlocked(a5xx_gpu->gpmu_bo);
336
337 a5xx_gpu->gpmu_bo = NULL;
338 a5xx_gpu->gpmu_iova = 0;
339 a5xx_gpu->gpmu_dwords = 0;
340
341out:
 342 /* No need to keep that firmware lying around anymore */
343 release_firmware(fw);
344}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_common.xml.h b/drivers/gpu/drm/msm/adreno/adreno_common.xml.h
index e81481d1b7df..4a33ba6f1244 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_common.xml.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_common.xml.h
@@ -8,13 +8,14 @@ http://github.com/freedreno/envytools/
8git clone https://github.com/freedreno/envytools.git 8git clone https://github.com/freedreno/envytools.git
9 9
10The rules-ng-ng source files this header was generated from are: 10The rules-ng-ng source files this header was generated from are:
11- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 398 bytes, from 2015-09-24 17:25:31) 11- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 431 bytes, from 2016-04-26 17:56:44)
12- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21) 12- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
13- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2015-05-20 20:03:14) 13- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32907 bytes, from 2016-11-26 23:01:08)
14- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 11518 bytes, from 2016-02-10 21:03:25) 14- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 12025 bytes, from 2016-11-26 23:01:08)
15- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 16166 bytes, from 2016-02-11 21:20:31) 15- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 22544 bytes, from 2016-11-26 23:01:08)
16- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83967 bytes, from 2016-02-10 17:07:21) 16- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83840 bytes, from 2016-11-26 23:01:08)
17- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 109916 bytes, from 2016-02-20 18:44:48) 17- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 110765 bytes, from 2016-11-26 23:01:48)
18- /home/robclark/src/freedreno/envytools/rnndb/adreno/a5xx.xml ( 90321 bytes, from 2016-11-28 16:50:05)
18- /home/robclark/src/freedreno/envytools/rnndb/adreno/ocmem.xml ( 1773 bytes, from 2015-09-24 17:30:00) 19- /home/robclark/src/freedreno/envytools/rnndb/adreno/ocmem.xml ( 1773 bytes, from 2015-09-24 17:30:00)
19 20
20Copyright (C) 2013-2016 by the following authors: 21Copyright (C) 2013-2016 by the following authors:
@@ -172,6 +173,14 @@ enum a3xx_color_swap {
172 XYZW = 3, 173 XYZW = 3,
173}; 174};
174 175
176enum a3xx_rb_blend_opcode {
177 BLEND_DST_PLUS_SRC = 0,
178 BLEND_SRC_MINUS_DST = 1,
179 BLEND_DST_MINUS_SRC = 2,
180 BLEND_MIN_DST_SRC = 3,
181 BLEND_MAX_DST_SRC = 4,
182};
183
175#define REG_AXXX_CP_RB_BASE 0x000001c0 184#define REG_AXXX_CP_RB_BASE 0x000001c0
176 185
177#define REG_AXXX_CP_RB_CNTL 0x000001c1 186#define REG_AXXX_CP_RB_CNTL 0x000001c1
diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index 7250ffc6322f..893eb2b2531b 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -74,6 +74,15 @@ static const struct adreno_info gpulist[] = {
74 .pfpfw = "a420_pfp.fw", 74 .pfpfw = "a420_pfp.fw",
75 .gmem = (SZ_1M + SZ_512K), 75 .gmem = (SZ_1M + SZ_512K),
76 .init = a4xx_gpu_init, 76 .init = a4xx_gpu_init,
77 }, {
78 .rev = ADRENO_REV(5, 3, 0, ANY_ID),
79 .revn = 530,
80 .name = "A530",
81 .pm4fw = "a530_pm4.fw",
82 .pfpfw = "a530_pfp.fw",
83 .gmem = SZ_1M,
84 .init = a5xx_gpu_init,
85 .gpmufw = "a530v3_gpmu.fw2",
77 }, 86 },
78}; 87};
79 88
@@ -83,6 +92,8 @@ MODULE_FIRMWARE("a330_pm4.fw");
83MODULE_FIRMWARE("a330_pfp.fw"); 92MODULE_FIRMWARE("a330_pfp.fw");
84MODULE_FIRMWARE("a420_pm4.fw"); 93MODULE_FIRMWARE("a420_pm4.fw");
85MODULE_FIRMWARE("a420_pfp.fw"); 94MODULE_FIRMWARE("a420_pfp.fw");
95MODULE_FIRMWARE("a530_pm4.fw");
96MODULE_FIRMWARE("a530_pfp.fw");
86 97
87static inline bool _rev_match(uint8_t entry, uint8_t id) 98static inline bool _rev_match(uint8_t entry, uint8_t id)
88{ 99{
@@ -145,12 +156,16 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
145 mutex_lock(&dev->struct_mutex); 156 mutex_lock(&dev->struct_mutex);
146 gpu->funcs->pm_resume(gpu); 157 gpu->funcs->pm_resume(gpu);
147 mutex_unlock(&dev->struct_mutex); 158 mutex_unlock(&dev->struct_mutex);
159
160 disable_irq(gpu->irq);
161
148 ret = gpu->funcs->hw_init(gpu); 162 ret = gpu->funcs->hw_init(gpu);
149 if (ret) { 163 if (ret) {
150 dev_err(dev->dev, "gpu hw init failed: %d\n", ret); 164 dev_err(dev->dev, "gpu hw init failed: %d\n", ret);
151 gpu->funcs->destroy(gpu); 165 gpu->funcs->destroy(gpu);
152 gpu = NULL; 166 gpu = NULL;
153 } else { 167 } else {
168 enable_irq(gpu->irq);
154 /* give inactive pm a chance to kick in: */ 169 /* give inactive pm a chance to kick in: */
155 msm_gpu_retire(gpu); 170 msm_gpu_retire(gpu);
156 } 171 }
@@ -166,12 +181,20 @@ static void set_gpu_pdev(struct drm_device *dev,
166 priv->gpu_pdev = pdev; 181 priv->gpu_pdev = pdev;
167} 182}
168 183
184static const struct {
185 const char *str;
186 uint32_t flag;
187} quirks[] = {
188 { "qcom,gpu-quirk-two-pass-use-wfi", ADRENO_QUIRK_TWO_PASS_USE_WFI },
189 { "qcom,gpu-quirk-fault-detect-mask", ADRENO_QUIRK_FAULT_DETECT_MASK },
190};
191
169static int adreno_bind(struct device *dev, struct device *master, void *data) 192static int adreno_bind(struct device *dev, struct device *master, void *data)
170{ 193{
171 static struct adreno_platform_config config = {}; 194 static struct adreno_platform_config config = {};
172 struct device_node *child, *node = dev->of_node; 195 struct device_node *child, *node = dev->of_node;
173 u32 val; 196 u32 val;
174 int ret; 197 int ret, i;
175 198
176 ret = of_property_read_u32(node, "qcom,chipid", &val); 199 ret = of_property_read_u32(node, "qcom,chipid", &val);
177 if (ret) { 200 if (ret) {
@@ -205,6 +228,10 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
205 return -ENXIO; 228 return -ENXIO;
206 } 229 }
207 230
231 for (i = 0; i < ARRAY_SIZE(quirks); i++)
232 if (of_property_read_bool(node, quirks[i].str))
233 config.quirks |= quirks[i].flag;
234
208 dev->platform_data = &config; 235 dev->platform_data = &config;
209 set_gpu_pdev(dev_get_drvdata(master), to_platform_device(dev)); 236 set_gpu_pdev(dev_get_drvdata(master), to_platform_device(dev));
210 return 0; 237 return 0;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index f386f463278d..a18126150e11 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -22,7 +22,7 @@
22#include "msm_mmu.h" 22#include "msm_mmu.h"
23 23
24#define RB_SIZE SZ_32K 24#define RB_SIZE SZ_32K
25#define RB_BLKSIZE 16 25#define RB_BLKSIZE 32
26 26
27int adreno_get_param(struct msm_gpu *gpu, uint32_t param, uint64_t *value) 27int adreno_get_param(struct msm_gpu *gpu, uint32_t param, uint64_t *value)
28{ 28{
@@ -54,9 +54,6 @@ int adreno_get_param(struct msm_gpu *gpu, uint32_t param, uint64_t *value)
54 } 54 }
55} 55}
56 56
57#define rbmemptr(adreno_gpu, member) \
58 ((adreno_gpu)->memptrs_iova + offsetof(struct adreno_rbmemptrs, member))
59
60int adreno_hw_init(struct msm_gpu *gpu) 57int adreno_hw_init(struct msm_gpu *gpu)
61{ 58{
62 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 59 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -79,11 +76,14 @@ int adreno_hw_init(struct msm_gpu *gpu)
79 (adreno_is_a430(adreno_gpu) ? AXXX_CP_RB_CNTL_NO_UPDATE : 0)); 76 (adreno_is_a430(adreno_gpu) ? AXXX_CP_RB_CNTL_NO_UPDATE : 0));
80 77
81 /* Setup ringbuffer address: */ 78 /* Setup ringbuffer address: */
82 adreno_gpu_write(adreno_gpu, REG_ADRENO_CP_RB_BASE, gpu->rb_iova); 79 adreno_gpu_write64(adreno_gpu, REG_ADRENO_CP_RB_BASE,
80 REG_ADRENO_CP_RB_BASE_HI, gpu->rb_iova);
83 81
84 if (!adreno_is_a430(adreno_gpu)) 82 if (!adreno_is_a430(adreno_gpu)) {
85 adreno_gpu_write(adreno_gpu, REG_ADRENO_CP_RB_RPTR_ADDR, 83 adreno_gpu_write64(adreno_gpu, REG_ADRENO_CP_RB_RPTR_ADDR,
86 rbmemptr(adreno_gpu, rptr)); 84 REG_ADRENO_CP_RB_RPTR_ADDR_HI,
85 rbmemptr(adreno_gpu, rptr));
86 }
87 87
88 return 0; 88 return 0;
89} 89}
@@ -126,11 +126,14 @@ void adreno_recover(struct msm_gpu *gpu)
126 adreno_gpu->memptrs->wptr = 0; 126 adreno_gpu->memptrs->wptr = 0;
127 127
128 gpu->funcs->pm_resume(gpu); 128 gpu->funcs->pm_resume(gpu);
129
130 disable_irq(gpu->irq);
129 ret = gpu->funcs->hw_init(gpu); 131 ret = gpu->funcs->hw_init(gpu);
130 if (ret) { 132 if (ret) {
131 dev_err(dev->dev, "gpu hw init failed: %d\n", ret); 133 dev_err(dev->dev, "gpu hw init failed: %d\n", ret);
132 /* hmm, oh well? */ 134 /* hmm, oh well? */
133 } 135 }
136 enable_irq(gpu->irq);
134} 137}
135 138
136void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, 139void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
@@ -218,19 +221,18 @@ void adreno_flush(struct msm_gpu *gpu)
218 adreno_gpu_write(adreno_gpu, REG_ADRENO_CP_RB_WPTR, wptr); 221 adreno_gpu_write(adreno_gpu, REG_ADRENO_CP_RB_WPTR, wptr);
219} 222}
220 223
221void adreno_idle(struct msm_gpu *gpu) 224bool adreno_idle(struct msm_gpu *gpu)
222{ 225{
223 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 226 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
224 uint32_t wptr = get_wptr(gpu->rb); 227 uint32_t wptr = get_wptr(gpu->rb);
225 int ret;
226 228
227 /* wait for CP to drain ringbuffer: */ 229 /* wait for CP to drain ringbuffer: */
228 ret = spin_until(get_rptr(adreno_gpu) == wptr); 230 if (!spin_until(get_rptr(adreno_gpu) == wptr))
229 231 return true;
230 if (ret)
231 DRM_ERROR("%s: timeout waiting to drain ringbuffer!\n", gpu->name);
232 232
233 /* TODO maybe we need to reset GPU here to recover from hang? */ 233 /* TODO maybe we need to reset GPU here to recover from hang? */
234 DRM_ERROR("%s: timeout waiting to drain ringbuffer!\n", gpu->name);
235 return false;
234} 236}
235 237
236#ifdef CONFIG_DEBUG_FS 238#ifdef CONFIG_DEBUG_FS
@@ -278,7 +280,6 @@ void adreno_show(struct msm_gpu *gpu, struct seq_file *m)
278void adreno_dump_info(struct msm_gpu *gpu) 280void adreno_dump_info(struct msm_gpu *gpu)
279{ 281{
280 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 282 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
281 int i;
282 283
283 printk("revision: %d (%d.%d.%d.%d)\n", 284 printk("revision: %d (%d.%d.%d.%d)\n",
284 adreno_gpu->info->revn, adreno_gpu->rev.core, 285 adreno_gpu->info->revn, adreno_gpu->rev.core,
@@ -290,11 +291,6 @@ void adreno_dump_info(struct msm_gpu *gpu)
290 printk("rptr: %d\n", get_rptr(adreno_gpu)); 291 printk("rptr: %d\n", get_rptr(adreno_gpu));
291 printk("wptr: %d\n", adreno_gpu->memptrs->wptr); 292 printk("wptr: %d\n", adreno_gpu->memptrs->wptr);
292 printk("rb wptr: %d\n", get_wptr(gpu->rb)); 293 printk("rb wptr: %d\n", get_wptr(gpu->rb));
293
294 for (i = 0; i < 8; i++) {
295 printk("CP_SCRATCH_REG%d: %u\n", i,
296 gpu_read(gpu, REG_AXXX_CP_SCRATCH_REG0 + i));
297 }
298} 294}
299 295
300/* would be nice to not have to duplicate the _show() stuff with printk(): */ 296/* would be nice to not have to duplicate the _show() stuff with printk(): */
@@ -350,6 +346,7 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
350 adreno_gpu->gmem = adreno_gpu->info->gmem; 346 adreno_gpu->gmem = adreno_gpu->info->gmem;
351 adreno_gpu->revn = adreno_gpu->info->revn; 347 adreno_gpu->revn = adreno_gpu->info->revn;
352 adreno_gpu->rev = config->rev; 348 adreno_gpu->rev = config->rev;
349 adreno_gpu->quirks = config->quirks;
353 350
354 gpu->fast_rate = config->fast_rate; 351 gpu->fast_rate = config->fast_rate;
355 gpu->slow_rate = config->slow_rate; 352 gpu->slow_rate = config->slow_rate;
@@ -381,7 +378,7 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
381 return ret; 378 return ret;
382 } 379 }
383 380
384 mmu = gpu->mmu; 381 mmu = gpu->aspace->mmu;
385 if (mmu) { 382 if (mmu) {
386 ret = mmu->funcs->attach(mmu, iommu_ports, 383 ret = mmu->funcs->attach(mmu, iommu_ports,
387 ARRAY_SIZE(iommu_ports)); 384 ARRAY_SIZE(iommu_ports));
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 07d99bdf7c99..e8d55b0306ed 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -28,6 +28,9 @@
28#include "adreno_pm4.xml.h" 28#include "adreno_pm4.xml.h"
29 29
30#define REG_ADRENO_DEFINE(_offset, _reg) [_offset] = (_reg) + 1 30#define REG_ADRENO_DEFINE(_offset, _reg) [_offset] = (_reg) + 1
31#define REG_SKIP ~0
32#define REG_ADRENO_SKIP(_offset) [_offset] = REG_SKIP
33
31/** 34/**
32 * adreno_regs: List of registers that are used in across all 35 * adreno_regs: List of registers that are used in across all
33 * 3D devices. Each device type has different offset value for the same 36 * 3D devices. Each device type has different offset value for the same
33 * 3D devices. Each device type has different offset value for the same 36 * 3D devices. Each device type has different offset value for the same
@@ -35,73 +38,21 @@
35 * and are indexed by the enumeration values defined in this enum 38 * and are indexed by the enumeration values defined in this enum
36 */ 39 */
37enum adreno_regs { 40enum adreno_regs {
38 REG_ADRENO_CP_DEBUG,
39 REG_ADRENO_CP_ME_RAM_WADDR,
40 REG_ADRENO_CP_ME_RAM_DATA,
41 REG_ADRENO_CP_PFP_UCODE_DATA,
42 REG_ADRENO_CP_PFP_UCODE_ADDR,
43 REG_ADRENO_CP_WFI_PEND_CTR,
44 REG_ADRENO_CP_RB_BASE, 41 REG_ADRENO_CP_RB_BASE,
42 REG_ADRENO_CP_RB_BASE_HI,
45 REG_ADRENO_CP_RB_RPTR_ADDR, 43 REG_ADRENO_CP_RB_RPTR_ADDR,
44 REG_ADRENO_CP_RB_RPTR_ADDR_HI,
46 REG_ADRENO_CP_RB_RPTR, 45 REG_ADRENO_CP_RB_RPTR,
47 REG_ADRENO_CP_RB_WPTR, 46 REG_ADRENO_CP_RB_WPTR,
48 REG_ADRENO_CP_PROTECT_CTRL,
49 REG_ADRENO_CP_ME_CNTL,
50 REG_ADRENO_CP_RB_CNTL, 47 REG_ADRENO_CP_RB_CNTL,
51 REG_ADRENO_CP_IB1_BASE,
52 REG_ADRENO_CP_IB1_BUFSZ,
53 REG_ADRENO_CP_IB2_BASE,
54 REG_ADRENO_CP_IB2_BUFSZ,
55 REG_ADRENO_CP_TIMESTAMP,
56 REG_ADRENO_CP_ME_RAM_RADDR,
57 REG_ADRENO_CP_ROQ_ADDR,
58 REG_ADRENO_CP_ROQ_DATA,
59 REG_ADRENO_CP_MERCIU_ADDR,
60 REG_ADRENO_CP_MERCIU_DATA,
61 REG_ADRENO_CP_MERCIU_DATA2,
62 REG_ADRENO_CP_MEQ_ADDR,
63 REG_ADRENO_CP_MEQ_DATA,
64 REG_ADRENO_CP_HW_FAULT,
65 REG_ADRENO_CP_PROTECT_STATUS,
66 REG_ADRENO_SCRATCH_ADDR,
67 REG_ADRENO_SCRATCH_UMSK,
68 REG_ADRENO_SCRATCH_REG2,
69 REG_ADRENO_RBBM_STATUS,
70 REG_ADRENO_RBBM_PERFCTR_CTL,
71 REG_ADRENO_RBBM_PERFCTR_LOAD_CMD0,
72 REG_ADRENO_RBBM_PERFCTR_LOAD_CMD1,
73 REG_ADRENO_RBBM_PERFCTR_LOAD_CMD2,
74 REG_ADRENO_RBBM_PERFCTR_PWR_1_LO,
75 REG_ADRENO_RBBM_INT_0_MASK,
76 REG_ADRENO_RBBM_INT_0_STATUS,
77 REG_ADRENO_RBBM_AHB_ERROR_STATUS,
78 REG_ADRENO_RBBM_PM_OVERRIDE2,
79 REG_ADRENO_RBBM_AHB_CMD,
80 REG_ADRENO_RBBM_INT_CLEAR_CMD,
81 REG_ADRENO_RBBM_SW_RESET_CMD,
82 REG_ADRENO_RBBM_CLOCK_CTL,
83 REG_ADRENO_RBBM_AHB_ME_SPLIT_STATUS,
84 REG_ADRENO_RBBM_AHB_PFP_SPLIT_STATUS,
85 REG_ADRENO_VPC_DEBUG_RAM_SEL,
86 REG_ADRENO_VPC_DEBUG_RAM_READ,
87 REG_ADRENO_VSC_SIZE_ADDRESS,
88 REG_ADRENO_VFD_CONTROL_0,
89 REG_ADRENO_VFD_INDEX_MAX,
90 REG_ADRENO_SP_VS_PVT_MEM_ADDR_REG,
91 REG_ADRENO_SP_FS_PVT_MEM_ADDR_REG,
92 REG_ADRENO_SP_VS_OBJ_START_REG,
93 REG_ADRENO_SP_FS_OBJ_START_REG,
94 REG_ADRENO_PA_SC_AA_CONFIG,
95 REG_ADRENO_SQ_GPR_MANAGEMENT,
96 REG_ADRENO_SQ_INST_STORE_MANAGMENT,
97 REG_ADRENO_TP0_CHICKEN,
98 REG_ADRENO_RBBM_RBBM_CTL,
99 REG_ADRENO_UCHE_INVALIDATE0,
100 REG_ADRENO_RBBM_PERFCTR_LOAD_VALUE_LO,
101 REG_ADRENO_RBBM_PERFCTR_LOAD_VALUE_HI,
102 REG_ADRENO_REGISTER_MAX, 48 REG_ADRENO_REGISTER_MAX,
103}; 49};
104 50
51enum adreno_quirks {
52 ADRENO_QUIRK_TWO_PASS_USE_WFI = 1,
53 ADRENO_QUIRK_FAULT_DETECT_MASK = 2,
54};
55
105struct adreno_rev { 56struct adreno_rev {
106 uint8_t core; 57 uint8_t core;
107 uint8_t major; 58 uint8_t major;
@@ -122,12 +73,16 @@ struct adreno_info {
122 uint32_t revn; 73 uint32_t revn;
123 const char *name; 74 const char *name;
124 const char *pm4fw, *pfpfw; 75 const char *pm4fw, *pfpfw;
76 const char *gpmufw;
125 uint32_t gmem; 77 uint32_t gmem;
126 struct msm_gpu *(*init)(struct drm_device *dev); 78 struct msm_gpu *(*init)(struct drm_device *dev);
127}; 79};
128 80
129const struct adreno_info *adreno_info(struct adreno_rev rev); 81const struct adreno_info *adreno_info(struct adreno_rev rev);
130 82
83#define rbmemptr(adreno_gpu, member) \
84 ((adreno_gpu)->memptrs_iova + offsetof(struct adreno_rbmemptrs, member))
85
131struct adreno_rbmemptrs { 86struct adreno_rbmemptrs {
132 volatile uint32_t rptr; 87 volatile uint32_t rptr;
133 volatile uint32_t wptr; 88 volatile uint32_t wptr;
@@ -153,7 +108,7 @@ struct adreno_gpu {
153 // different for z180.. 108 // different for z180..
154 struct adreno_rbmemptrs *memptrs; 109 struct adreno_rbmemptrs *memptrs;
155 struct drm_gem_object *memptrs_bo; 110 struct drm_gem_object *memptrs_bo;
156 uint32_t memptrs_iova; 111 uint64_t memptrs_iova;
157 112
158 /* 113 /*
159 * Register offsets are different between some GPUs. 114 * Register offsets are different between some GPUs.
@@ -161,6 +116,8 @@ struct adreno_gpu {
161 * code (a3xx_gpu.c) and stored in this common location. 116 * code (a3xx_gpu.c) and stored in this common location.
162 */ 117 */
163 const unsigned int *reg_offsets; 118 const unsigned int *reg_offsets;
119
120 uint32_t quirks;
164}; 121};
165#define to_adreno_gpu(x) container_of(x, struct adreno_gpu, base) 122#define to_adreno_gpu(x) container_of(x, struct adreno_gpu, base)
166 123
@@ -171,6 +128,7 @@ struct adreno_platform_config {
 #ifdef DOWNSTREAM_CONFIG_MSM_BUS_SCALING
 	struct msm_bus_scale_pdata *bus_scale_table;
 #endif
+	uint32_t quirks;
 };

 #define ADRENO_IDLE_TIMEOUT msecs_to_jiffies(1000)
@@ -234,6 +192,11 @@ static inline int adreno_is_a430(struct adreno_gpu *gpu)
 	return gpu->revn == 430;
 }

+static inline int adreno_is_a530(struct adreno_gpu *gpu)
+{
+	return gpu->revn == 530;
+}
+
 int adreno_get_param(struct msm_gpu *gpu, uint32_t param, uint64_t *value);
 int adreno_hw_init(struct msm_gpu *gpu);
 uint32_t adreno_last_fence(struct msm_gpu *gpu);
@@ -241,7 +204,7 @@ void adreno_recover(struct msm_gpu *gpu);
 void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
 		struct msm_file_private *ctx);
 void adreno_flush(struct msm_gpu *gpu);
-void adreno_idle(struct msm_gpu *gpu);
+bool adreno_idle(struct msm_gpu *gpu);
 #ifdef CONFIG_DEBUG_FS
 void adreno_show(struct msm_gpu *gpu, struct seq_file *m);
 #endif
@@ -278,8 +241,38 @@ OUT_PKT3(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
 	OUT_RING(ring, CP_TYPE3_PKT | ((cnt-1) << 16) | ((opcode & 0xFF) << 8));
 }

+static inline u32 PM4_PARITY(u32 val)
+{
+	return (0x9669 >> (0xF & (val ^
+		(val >> 4) ^ (val >> 8) ^ (val >> 12) ^
+		(val >> 16) ^ (val >> 20) ^ (val >> 24) ^
+		(val >> 28)))) & 1;
+}
+
+/* Maximum number of values that can be executed for one opcode */
+#define TYPE4_MAX_PAYLOAD 127
+
+#define PKT4(_reg, _cnt) \
+	(CP_TYPE4_PKT | ((_cnt) << 0) | (PM4_PARITY((_cnt)) << 7) | \
+	(((_reg) & 0x3FFFF) << 8) | (PM4_PARITY((_reg)) << 27))
+
+static inline void
+OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
+{
+	adreno_wait_ring(ring->gpu, cnt + 1);
+	OUT_RING(ring, PKT4(regindx, cnt));
+}
+
+static inline void
+OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
+{
+	adreno_wait_ring(ring->gpu, cnt + 1);
+	OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
+		((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
+}
+
 /*
- * adreno_checkreg_off() - Checks the validity of a register enum
+ * adreno_reg_check() - Checks the validity of a register enum
  * @gpu: Pointer to struct adreno_gpu
  * @offset_name: The register enum that is checked
  */
@@ -290,6 +283,16 @@ static inline bool adreno_reg_check(struct adreno_gpu *gpu,
 			!gpu->reg_offsets[offset_name]) {
 		BUG();
 	}
+
+	/*
+	 * REG_SKIP is a special value that tells us that the register in
+	 * question isn't implemented on the target, but shouldn't trigger a
+	 * BUG(). This is used to cleanly implement adreno_gpu_write64() and
+	 * adreno_gpu_read64() in a generic fashion.
+	 */
+	if (gpu->reg_offsets[offset_name] == REG_SKIP)
+		return false;
+
 	return true;
 }

@@ -313,5 +316,37 @@ static inline void adreno_gpu_write(struct adreno_gpu *gpu,

 struct msm_gpu *a3xx_gpu_init(struct drm_device *dev);
 struct msm_gpu *a4xx_gpu_init(struct drm_device *dev);
+struct msm_gpu *a5xx_gpu_init(struct drm_device *dev);
+
+static inline void adreno_gpu_write64(struct adreno_gpu *gpu,
+		enum adreno_regs lo, enum adreno_regs hi, u64 data)
+{
+	adreno_gpu_write(gpu, lo, lower_32_bits(data));
+	adreno_gpu_write(gpu, hi, upper_32_bits(data));
+}
+
+/*
+ * Given a register and a count, return a value to program into
+ * REG_CP_PROTECT_REG(n) - this will block both reads and writes for _len
+ * registers starting at _reg.
+ *
+ * The register base needs to be a multiple of the length. If it is not, the
+ * hardware will quietly mask off the bits for you and shift the size. For
+ * example, if you intend the protection to start at 0x07 for a length of 4
+ * (0x07-0x0A), the hardware will actually protect (0x04-0x07), which might
+ * expose registers you intended to protect!
+ */
+#define ADRENO_PROTECT_RW(_reg, _len) \
+	((1 << 30) | (1 << 29) | \
+	((ilog2((_len)) & 0x1F) << 24) | (((_reg) << 2) & 0xFFFFF))
+
+/*
+ * Same as above, but allow reads over the range. For areas of mixed use (such
+ * as performance counters) this allows us to protect a much larger range with
+ * a single register.
+ */
+#define ADRENO_PROTECT_RDONLY(_reg, _len) \
+	((1 << 29) | \
+	((ilog2((_len)) & 0x1F) << 24) | (((_reg) << 2) & 0xFFFFF))

 #endif /* __ADRENO_GPU_H__ */
diff --git a/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h b/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h
index d7477ff867c9..6a2930e75503 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h
@@ -8,13 +8,14 @@ http://github.com/freedreno/envytools/
 git clone https://github.com/freedreno/envytools.git

 The rules-ng-ng source files this header was generated from are:
-- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 398 bytes, from 2015-09-24 17:25:31)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno.xml ( 431 bytes, from 2016-04-26 17:56:44)
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2015-05-20 20:03:14)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 11518 bytes, from 2016-02-10 21:03:25)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 16166 bytes, from 2016-02-11 21:20:31)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83967 bytes, from 2016-02-10 17:07:21)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 109916 bytes, from 2016-02-20 18:44:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32907 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 12025 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 22544 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 83840 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 110765 bytes, from 2016-11-26 23:01:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a5xx.xml ( 90321 bytes, from 2016-11-28 16:50:05)
 - /home/robclark/src/freedreno/envytools/rnndb/adreno/ocmem.xml ( 1773 bytes, from 2015-09-24 17:30:00)

 Copyright (C) 2013-2016 by the following authors:
@@ -58,6 +59,7 @@ enum vgt_event_type {
 	RST_PIX_CNT = 13,
 	RST_VTX_CNT = 14,
 	TILE_FLUSH = 15,
+	STAT_EVENT = 16,
 	CACHE_FLUSH_AND_INV_TS_EVENT = 20,
 	ZPASS_DONE = 21,
 	CACHE_FLUSH_AND_INV_EVENT = 22,
@@ -65,6 +67,10 @@ enum vgt_event_type {
 	PERFCOUNTER_STOP = 24,
 	VS_FETCH_DONE = 27,
 	FACENESS_FLUSH = 28,
+	UNK_1C = 28,
+	UNK_1D = 29,
+	BLIT = 30,
+	UNK_26 = 38,
 };

 enum pc_di_primtype {
@@ -82,7 +88,6 @@ enum pc_di_primtype {
 	DI_PT_LINESTRIP_ADJ = 11,
 	DI_PT_TRI_ADJ = 12,
 	DI_PT_TRISTRIP_ADJ = 13,
-	DI_PT_PATCHES = 34,
 };

 enum pc_di_src_sel {
@@ -110,11 +115,15 @@ enum adreno_pm4_packet_type {
 	CP_TYPE1_PKT = 0x40000000,
 	CP_TYPE2_PKT = 0x80000000,
 	CP_TYPE3_PKT = 0xc0000000,
+	CP_TYPE4_PKT = 0x40000000,
+	CP_TYPE7_PKT = 0x70000000,
 };

 enum adreno_pm4_type3_packets {
 	CP_ME_INIT = 72,
 	CP_NOP = 16,
+	CP_PREEMPT_ENABLE = 28,
+	CP_PREEMPT_TOKEN = 30,
 	CP_INDIRECT_BUFFER = 63,
 	CP_INDIRECT_BUFFER_PFD = 55,
 	CP_WAIT_FOR_IDLE = 38,
@@ -163,6 +172,7 @@ enum adreno_pm4_type3_packets {
 	CP_TEST_TWO_MEMS = 113,
 	CP_REG_WR_NO_CTXT = 120,
 	CP_RECORD_PFP_TIMESTAMP = 17,
+	CP_SET_SECURE_MODE = 102,
 	CP_WAIT_FOR_ME = 19,
 	CP_SET_DRAW_STATE = 67,
 	CP_DRAW_INDX_OFFSET = 56,
@@ -178,6 +188,22 @@ enum adreno_pm4_type3_packets {
 	CP_WAIT_MEM_WRITES = 18,
 	CP_COND_REG_EXEC = 71,
 	CP_MEM_TO_REG = 66,
+	CP_EXEC_CS = 51,
+	CP_PERFCOUNTER_ACTION = 80,
+	CP_SMMU_TABLE_UPDATE = 83,
+	CP_CONTEXT_REG_BUNCH = 92,
+	CP_YIELD_ENABLE = 28,
+	CP_SKIP_IB2_ENABLE_GLOBAL = 29,
+	CP_SKIP_IB2_ENABLE_LOCAL = 35,
+	CP_SET_SUBDRAW_SIZE = 53,
+	CP_SET_VISIBILITY_OVERRIDE = 100,
+	CP_PREEMPT_ENABLE_GLOBAL = 105,
+	CP_PREEMPT_ENABLE_LOCAL = 106,
+	CP_CONTEXT_SWITCH_YIELD = 107,
+	CP_SET_RENDER_MODE = 108,
+	CP_COMPUTE_CHECKPOINT = 110,
+	CP_MEM_TO_MEM = 115,
+	CP_BLIT = 44,
 	IN_IB_PREFETCH_END = 23,
 	IN_SUBBLK_PREFETCH = 31,
 	IN_INSTR_PREFETCH = 32,
@@ -196,6 +222,7 @@ enum adreno_state_block {
 	SB_VERT_SHADER = 4,
 	SB_GEOM_SHADER = 5,
 	SB_FRAG_SHADER = 6,
+	SB_COMPUTE_SHADER = 7,
 };

 enum adreno_state_type {
@@ -218,6 +245,17 @@ enum a4xx_index_size {
 	INDEX4_SIZE_32_BIT = 2,
 };

+enum render_mode_cmd {
+	BYPASS = 1,
+	GMEM = 3,
+	BLIT2D = 5,
+};
+
+enum cp_blit_cmd {
+	BLIT_OP_FILL = 0,
+	BLIT_OP_BLIT = 1,
+};
+
 #define REG_CP_LOAD_STATE_0 0x00000000
 #define CP_LOAD_STATE_0_DST_OFF__MASK 0x0000ffff
 #define CP_LOAD_STATE_0_DST_OFF__SHIFT 0
@@ -258,6 +296,14 @@ static inline uint32_t CP_LOAD_STATE_1_EXT_SRC_ADDR(uint32_t val)
 	return ((val >> 2) << CP_LOAD_STATE_1_EXT_SRC_ADDR__SHIFT) & CP_LOAD_STATE_1_EXT_SRC_ADDR__MASK;
 }

+#define REG_CP_LOAD_STATE_2 0x00000002
+#define CP_LOAD_STATE_2_EXT_SRC_ADDR_HI__MASK 0xffffffff
+#define CP_LOAD_STATE_2_EXT_SRC_ADDR_HI__SHIFT 0
+static inline uint32_t CP_LOAD_STATE_2_EXT_SRC_ADDR_HI(uint32_t val)
+{
+	return ((val) << CP_LOAD_STATE_2_EXT_SRC_ADDR_HI__SHIFT) & CP_LOAD_STATE_2_EXT_SRC_ADDR_HI__MASK;
+}
+
 #define REG_CP_DRAW_INDX_0 0x00000000
 #define CP_DRAW_INDX_0_VIZ_QUERY__MASK 0xffffffff
 #define CP_DRAW_INDX_0_VIZ_QUERY__SHIFT 0
@@ -389,7 +435,12 @@ static inline uint32_t CP_DRAW_INDX_OFFSET_0_SOURCE_SELECT(enum pc_di_src_sel va
 {
 	return ((val) << CP_DRAW_INDX_OFFSET_0_SOURCE_SELECT__SHIFT) & CP_DRAW_INDX_OFFSET_0_SOURCE_SELECT__MASK;
 }
-#define CP_DRAW_INDX_OFFSET_0_TESSELLATE 0x00000100
+#define CP_DRAW_INDX_OFFSET_0_VIS_CULL__MASK 0x00000300
+#define CP_DRAW_INDX_OFFSET_0_VIS_CULL__SHIFT 8
+static inline uint32_t CP_DRAW_INDX_OFFSET_0_VIS_CULL(enum pc_di_vis_cull_mode val)
+{
+	return ((val) << CP_DRAW_INDX_OFFSET_0_VIS_CULL__SHIFT) & CP_DRAW_INDX_OFFSET_0_VIS_CULL__MASK;
+}
 #define CP_DRAW_INDX_OFFSET_0_INDEX_SIZE__MASK 0x00000c00
 #define CP_DRAW_INDX_OFFSET_0_INDEX_SIZE__SHIFT 10
 static inline uint32_t CP_DRAW_INDX_OFFSET_0_INDEX_SIZE(enum a4xx_index_size val)
@@ -437,30 +488,40 @@ static inline uint32_t CP_DRAW_INDX_OFFSET_5_INDX_SIZE(uint32_t val)
 	return ((val) << CP_DRAW_INDX_OFFSET_5_INDX_SIZE__SHIFT) & CP_DRAW_INDX_OFFSET_5_INDX_SIZE__MASK;
 }

-#define REG_CP_SET_DRAW_STATE_0 0x00000000
-#define CP_SET_DRAW_STATE_0_COUNT__MASK 0x0000ffff
-#define CP_SET_DRAW_STATE_0_COUNT__SHIFT 0
-static inline uint32_t CP_SET_DRAW_STATE_0_COUNT(uint32_t val)
+static inline uint32_t REG_CP_SET_DRAW_STATE_(uint32_t i0) { return 0x00000000 + 0x3*i0; }
+
+static inline uint32_t REG_CP_SET_DRAW_STATE__0(uint32_t i0) { return 0x00000000 + 0x3*i0; }
+#define CP_SET_DRAW_STATE__0_COUNT__MASK 0x0000ffff
+#define CP_SET_DRAW_STATE__0_COUNT__SHIFT 0
+static inline uint32_t CP_SET_DRAW_STATE__0_COUNT(uint32_t val)
 {
-	return ((val) << CP_SET_DRAW_STATE_0_COUNT__SHIFT) & CP_SET_DRAW_STATE_0_COUNT__MASK;
+	return ((val) << CP_SET_DRAW_STATE__0_COUNT__SHIFT) & CP_SET_DRAW_STATE__0_COUNT__MASK;
 }
-#define CP_SET_DRAW_STATE_0_DIRTY 0x00010000
-#define CP_SET_DRAW_STATE_0_DISABLE 0x00020000
-#define CP_SET_DRAW_STATE_0_DISABLE_ALL_GROUPS 0x00040000
-#define CP_SET_DRAW_STATE_0_LOAD_IMMED 0x00080000
-#define CP_SET_DRAW_STATE_0_GROUP_ID__MASK 0x1f000000
-#define CP_SET_DRAW_STATE_0_GROUP_ID__SHIFT 24
-static inline uint32_t CP_SET_DRAW_STATE_0_GROUP_ID(uint32_t val)
+#define CP_SET_DRAW_STATE__0_DIRTY 0x00010000
+#define CP_SET_DRAW_STATE__0_DISABLE 0x00020000
+#define CP_SET_DRAW_STATE__0_DISABLE_ALL_GROUPS 0x00040000
+#define CP_SET_DRAW_STATE__0_LOAD_IMMED 0x00080000
+#define CP_SET_DRAW_STATE__0_GROUP_ID__MASK 0x1f000000
+#define CP_SET_DRAW_STATE__0_GROUP_ID__SHIFT 24
+static inline uint32_t CP_SET_DRAW_STATE__0_GROUP_ID(uint32_t val)
 {
-	return ((val) << CP_SET_DRAW_STATE_0_GROUP_ID__SHIFT) & CP_SET_DRAW_STATE_0_GROUP_ID__MASK;
+	return ((val) << CP_SET_DRAW_STATE__0_GROUP_ID__SHIFT) & CP_SET_DRAW_STATE__0_GROUP_ID__MASK;
 }

-#define REG_CP_SET_DRAW_STATE_1 0x00000001
-#define CP_SET_DRAW_STATE_1_ADDR__MASK 0xffffffff
-#define CP_SET_DRAW_STATE_1_ADDR__SHIFT 0
-static inline uint32_t CP_SET_DRAW_STATE_1_ADDR(uint32_t val)
+static inline uint32_t REG_CP_SET_DRAW_STATE__1(uint32_t i0) { return 0x00000001 + 0x3*i0; }
+#define CP_SET_DRAW_STATE__1_ADDR_LO__MASK 0xffffffff
+#define CP_SET_DRAW_STATE__1_ADDR_LO__SHIFT 0
+static inline uint32_t CP_SET_DRAW_STATE__1_ADDR_LO(uint32_t val)
 {
-	return ((val) << CP_SET_DRAW_STATE_1_ADDR__SHIFT) & CP_SET_DRAW_STATE_1_ADDR__MASK;
+	return ((val) << CP_SET_DRAW_STATE__1_ADDR_LO__SHIFT) & CP_SET_DRAW_STATE__1_ADDR_LO__MASK;
+}
+
+static inline uint32_t REG_CP_SET_DRAW_STATE__2(uint32_t i0) { return 0x00000002 + 0x3*i0; }
+#define CP_SET_DRAW_STATE__2_ADDR_HI__MASK 0xffffffff
+#define CP_SET_DRAW_STATE__2_ADDR_HI__SHIFT 0
+static inline uint32_t CP_SET_DRAW_STATE__2_ADDR_HI(uint32_t val)
+{
+	return ((val) << CP_SET_DRAW_STATE__2_ADDR_HI__SHIFT) & CP_SET_DRAW_STATE__2_ADDR_HI__MASK;
 }

 #define REG_CP_SET_BIN_0 0x00000000
@@ -533,5 +594,192 @@ static inline uint32_t CP_REG_TO_MEM_1_DEST(uint32_t val)
 	return ((val) << CP_REG_TO_MEM_1_DEST__SHIFT) & CP_REG_TO_MEM_1_DEST__MASK;
 }

+#define REG_CP_DISPATCH_COMPUTE_0 0x00000000
+
+#define REG_CP_DISPATCH_COMPUTE_1 0x00000001
+#define CP_DISPATCH_COMPUTE_1_X__MASK 0xffffffff
+#define CP_DISPATCH_COMPUTE_1_X__SHIFT 0
+static inline uint32_t CP_DISPATCH_COMPUTE_1_X(uint32_t val)
+{
+	return ((val) << CP_DISPATCH_COMPUTE_1_X__SHIFT) & CP_DISPATCH_COMPUTE_1_X__MASK;
+}
+
+#define REG_CP_DISPATCH_COMPUTE_2 0x00000002
+#define CP_DISPATCH_COMPUTE_2_Y__MASK 0xffffffff
+#define CP_DISPATCH_COMPUTE_2_Y__SHIFT 0
+static inline uint32_t CP_DISPATCH_COMPUTE_2_Y(uint32_t val)
+{
+	return ((val) << CP_DISPATCH_COMPUTE_2_Y__SHIFT) & CP_DISPATCH_COMPUTE_2_Y__MASK;
+}
+
+#define REG_CP_DISPATCH_COMPUTE_3 0x00000003
+#define CP_DISPATCH_COMPUTE_3_Z__MASK 0xffffffff
+#define CP_DISPATCH_COMPUTE_3_Z__SHIFT 0
+static inline uint32_t CP_DISPATCH_COMPUTE_3_Z(uint32_t val)
+{
+	return ((val) << CP_DISPATCH_COMPUTE_3_Z__SHIFT) & CP_DISPATCH_COMPUTE_3_Z__MASK;
+}
+
+#define REG_CP_SET_RENDER_MODE_0 0x00000000
+#define CP_SET_RENDER_MODE_0_MODE__MASK 0x000001ff
+#define CP_SET_RENDER_MODE_0_MODE__SHIFT 0
+static inline uint32_t CP_SET_RENDER_MODE_0_MODE(enum render_mode_cmd val)
+{
+	return ((val) << CP_SET_RENDER_MODE_0_MODE__SHIFT) & CP_SET_RENDER_MODE_0_MODE__MASK;
+}
+
+#define REG_CP_SET_RENDER_MODE_1 0x00000001
+#define CP_SET_RENDER_MODE_1_ADDR_0_LO__MASK 0xffffffff
+#define CP_SET_RENDER_MODE_1_ADDR_0_LO__SHIFT 0
+static inline uint32_t CP_SET_RENDER_MODE_1_ADDR_0_LO(uint32_t val)
+{
+	return ((val) << CP_SET_RENDER_MODE_1_ADDR_0_LO__SHIFT) & CP_SET_RENDER_MODE_1_ADDR_0_LO__MASK;
+}
+
+#define REG_CP_SET_RENDER_MODE_2 0x00000002
+#define CP_SET_RENDER_MODE_2_ADDR_0_HI__MASK 0xffffffff
+#define CP_SET_RENDER_MODE_2_ADDR_0_HI__SHIFT 0
+static inline uint32_t CP_SET_RENDER_MODE_2_ADDR_0_HI(uint32_t val)
+{
+	return ((val) << CP_SET_RENDER_MODE_2_ADDR_0_HI__SHIFT) & CP_SET_RENDER_MODE_2_ADDR_0_HI__MASK;
+}
+
+#define REG_CP_SET_RENDER_MODE_3 0x00000003
+#define CP_SET_RENDER_MODE_3_GMEM_ENABLE 0x00000010
+
+#define REG_CP_SET_RENDER_MODE_4 0x00000004
+
+#define REG_CP_SET_RENDER_MODE_5 0x00000005
+#define CP_SET_RENDER_MODE_5_ADDR_1_LEN__MASK 0xffffffff
+#define CP_SET_RENDER_MODE_5_ADDR_1_LEN__SHIFT 0
+static inline uint32_t CP_SET_RENDER_MODE_5_ADDR_1_LEN(uint32_t val)
+{
+	return ((val) << CP_SET_RENDER_MODE_5_ADDR_1_LEN__SHIFT) & CP_SET_RENDER_MODE_5_ADDR_1_LEN__MASK;
+}
+
+#define REG_CP_SET_RENDER_MODE_6 0x00000006
+#define CP_SET_RENDER_MODE_6_ADDR_1_LO__MASK 0xffffffff
+#define CP_SET_RENDER_MODE_6_ADDR_1_LO__SHIFT 0
+static inline uint32_t CP_SET_RENDER_MODE_6_ADDR_1_LO(uint32_t val)
+{
+	return ((val) << CP_SET_RENDER_MODE_6_ADDR_1_LO__SHIFT) & CP_SET_RENDER_MODE_6_ADDR_1_LO__MASK;
+}
+
+#define REG_CP_SET_RENDER_MODE_7 0x00000007
+#define CP_SET_RENDER_MODE_7_ADDR_1_HI__MASK 0xffffffff
+#define CP_SET_RENDER_MODE_7_ADDR_1_HI__SHIFT 0
+static inline uint32_t CP_SET_RENDER_MODE_7_ADDR_1_HI(uint32_t val)
+{
+	return ((val) << CP_SET_RENDER_MODE_7_ADDR_1_HI__SHIFT) & CP_SET_RENDER_MODE_7_ADDR_1_HI__MASK;
+}
+
+#define REG_CP_PERFCOUNTER_ACTION_0 0x00000000
+
+#define REG_CP_PERFCOUNTER_ACTION_1 0x00000001
+#define CP_PERFCOUNTER_ACTION_1_ADDR_0_LO__MASK 0xffffffff
+#define CP_PERFCOUNTER_ACTION_1_ADDR_0_LO__SHIFT 0
+static inline uint32_t CP_PERFCOUNTER_ACTION_1_ADDR_0_LO(uint32_t val)
+{
+	return ((val) << CP_PERFCOUNTER_ACTION_1_ADDR_0_LO__SHIFT) & CP_PERFCOUNTER_ACTION_1_ADDR_0_LO__MASK;
+}
+
+#define REG_CP_PERFCOUNTER_ACTION_2 0x00000002
+#define CP_PERFCOUNTER_ACTION_2_ADDR_0_HI__MASK 0xffffffff
+#define CP_PERFCOUNTER_ACTION_2_ADDR_0_HI__SHIFT 0
+static inline uint32_t CP_PERFCOUNTER_ACTION_2_ADDR_0_HI(uint32_t val)
+{
+	return ((val) << CP_PERFCOUNTER_ACTION_2_ADDR_0_HI__SHIFT) & CP_PERFCOUNTER_ACTION_2_ADDR_0_HI__MASK;
+}
+
+#define REG_CP_EVENT_WRITE_0 0x00000000
+#define CP_EVENT_WRITE_0_EVENT__MASK 0x000000ff
+#define CP_EVENT_WRITE_0_EVENT__SHIFT 0
+static inline uint32_t CP_EVENT_WRITE_0_EVENT(enum vgt_event_type val)
+{
+	return ((val) << CP_EVENT_WRITE_0_EVENT__SHIFT) & CP_EVENT_WRITE_0_EVENT__MASK;
+}
+
+#define REG_CP_EVENT_WRITE_1 0x00000001
+#define CP_EVENT_WRITE_1_ADDR_0_LO__MASK 0xffffffff
+#define CP_EVENT_WRITE_1_ADDR_0_LO__SHIFT 0
+static inline uint32_t CP_EVENT_WRITE_1_ADDR_0_LO(uint32_t val)
+{
+	return ((val) << CP_EVENT_WRITE_1_ADDR_0_LO__SHIFT) & CP_EVENT_WRITE_1_ADDR_0_LO__MASK;
+}
+
+#define REG_CP_EVENT_WRITE_2 0x00000002
+#define CP_EVENT_WRITE_2_ADDR_0_HI__MASK 0xffffffff
+#define CP_EVENT_WRITE_2_ADDR_0_HI__SHIFT 0
+static inline uint32_t CP_EVENT_WRITE_2_ADDR_0_HI(uint32_t val)
+{
+	return ((val) << CP_EVENT_WRITE_2_ADDR_0_HI__SHIFT) & CP_EVENT_WRITE_2_ADDR_0_HI__MASK;
+}
+
+#define REG_CP_EVENT_WRITE_3 0x00000003
+
+#define REG_CP_BLIT_0 0x00000000
+#define CP_BLIT_0_OP__MASK 0x0000000f
+#define CP_BLIT_0_OP__SHIFT 0
+static inline uint32_t CP_BLIT_0_OP(enum cp_blit_cmd val)
+{
+	return ((val) << CP_BLIT_0_OP__SHIFT) & CP_BLIT_0_OP__MASK;
+}
+
+#define REG_CP_BLIT_1 0x00000001
+#define CP_BLIT_1_SRC_X1__MASK 0x0000ffff
+#define CP_BLIT_1_SRC_X1__SHIFT 0
+static inline uint32_t CP_BLIT_1_SRC_X1(uint32_t val)
+{
+	return ((val) << CP_BLIT_1_SRC_X1__SHIFT) & CP_BLIT_1_SRC_X1__MASK;
+}
+#define CP_BLIT_1_SRC_Y1__MASK 0xffff0000
+#define CP_BLIT_1_SRC_Y1__SHIFT 16
+static inline uint32_t CP_BLIT_1_SRC_Y1(uint32_t val)
+{
+	return ((val) << CP_BLIT_1_SRC_Y1__SHIFT) & CP_BLIT_1_SRC_Y1__MASK;
+}
+
+#define REG_CP_BLIT_2 0x00000002
+#define CP_BLIT_2_SRC_X2__MASK 0x0000ffff
+#define CP_BLIT_2_SRC_X2__SHIFT 0
+static inline uint32_t CP_BLIT_2_SRC_X2(uint32_t val)
+{
+	return ((val) << CP_BLIT_2_SRC_X2__SHIFT) & CP_BLIT_2_SRC_X2__MASK;
+}
+#define CP_BLIT_2_SRC_Y2__MASK 0xffff0000
+#define CP_BLIT_2_SRC_Y2__SHIFT 16
+static inline uint32_t CP_BLIT_2_SRC_Y2(uint32_t val)
+{
+	return ((val) << CP_BLIT_2_SRC_Y2__SHIFT) & CP_BLIT_2_SRC_Y2__MASK;
+}
+
+#define REG_CP_BLIT_3 0x00000003
+#define CP_BLIT_3_DST_X1__MASK 0x0000ffff
+#define CP_BLIT_3_DST_X1__SHIFT 0
+static inline uint32_t CP_BLIT_3_DST_X1(uint32_t val)
+{
+	return ((val) << CP_BLIT_3_DST_X1__SHIFT) & CP_BLIT_3_DST_X1__MASK;
+}
+#define CP_BLIT_3_DST_Y1__MASK 0xffff0000
+#define CP_BLIT_3_DST_Y1__SHIFT 16
+static inline uint32_t CP_BLIT_3_DST_Y1(uint32_t val)
+{
+	return ((val) << CP_BLIT_3_DST_Y1__SHIFT) & CP_BLIT_3_DST_Y1__MASK;
+}
+
+#define REG_CP_BLIT_4 0x00000004
+#define CP_BLIT_4_DST_X2__MASK 0x0000ffff
+#define CP_BLIT_4_DST_X2__SHIFT 0
+static inline uint32_t CP_BLIT_4_DST_X2(uint32_t val)
+{
+	return ((val) << CP_BLIT_4_DST_X2__SHIFT) & CP_BLIT_4_DST_X2__MASK;
+}
+#define CP_BLIT_4_DST_Y2__MASK 0xffff0000
+#define CP_BLIT_4_DST_Y2__SHIFT 16
+static inline uint32_t CP_BLIT_4_DST_Y2(uint32_t val)
+{
+	return ((val) << CP_BLIT_4_DST_Y2__SHIFT) & CP_BLIT_4_DST_Y2__MASK;
+}
+

 #endif /* ADRENO_PM4_XML */
diff --git a/drivers/gpu/drm/msm/dsi/dsi.xml.h b/drivers/gpu/drm/msm/dsi/dsi.xml.h
index 4958594d5266..39dff7d5e89b 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.xml.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 6f240021705b..3819fdefcae2 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -982,7 +982,7 @@ static int dsi_tx_buf_alloc(struct msm_dsi_host *msm_host, int size)
 	struct drm_device *dev = msm_host->dev;
 	const struct msm_dsi_cfg_handler *cfg_hnd = msm_host->cfg_hnd;
 	int ret;
-	u32 iova;
+	uint64_t iova;

 	if (cfg_hnd->major == MSM_DSI_VER_MAJOR_6G) {
 		mutex_lock(&dev->struct_mutex);
@@ -1147,7 +1147,7 @@ static int dsi_cmd_dma_tx(struct msm_dsi_host *msm_host, int len)
 {
 	const struct msm_dsi_cfg_handler *cfg_hnd = msm_host->cfg_hnd;
 	int ret;
-	u32 dma_base;
+	uint64_t dma_base;
 	bool triggered;

 	if (cfg_hnd->major == MSM_DSI_VER_MAJOR_6G) {
diff --git a/drivers/gpu/drm/msm/dsi/mmss_cc.xml.h b/drivers/gpu/drm/msm/dsi/mmss_cc.xml.h
index 2d999494cdea..8b9f3ebaeba7 100644
--- a/drivers/gpu/drm/msm/dsi/mmss_cc.xml.h
+++ b/drivers/gpu/drm/msm/dsi/mmss_cc.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/dsi/sfpb.xml.h b/drivers/gpu/drm/msm/dsi/sfpb.xml.h
index 506434fac993..3fcbb30dc241 100644
--- a/drivers/gpu/drm/msm/dsi/sfpb.xml.h
+++ b/drivers/gpu/drm/msm/dsi/sfpb.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/edp/edp.xml.h b/drivers/gpu/drm/msm/edp/edp.xml.h
index f1072c18c81e..d7bf3232dc88 100644
--- a/drivers/gpu/drm/msm/edp/edp.xml.h
+++ b/drivers/gpu/drm/msm/edp/edp.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.xml.h b/drivers/gpu/drm/msm/hdmi/hdmi.xml.h
index 34c7df6549c1..0a97ff75ed6f 100644
--- a/drivers/gpu/drm/msm/hdmi/hdmi.xml.h
+++ b/drivers/gpu/drm/msm/hdmi/hdmi.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/hdmi/qfprom.xml.h b/drivers/gpu/drm/msm/hdmi/qfprom.xml.h
index 6eab7d0cf6b5..1b996ede7a65 100644
--- a/drivers/gpu/drm/msm/hdmi/qfprom.xml.h
+++ b/drivers/gpu/drm/msm/hdmi/qfprom.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/mdp/mdp4/mdp4.xml.h b/drivers/gpu/drm/msm/mdp/mdp4/mdp4.xml.h
index 6688e79cc88e..88037889589b 100644
--- a/drivers/gpu/drm/msm/mdp/mdp4/mdp4.xml.h
+++ b/drivers/gpu/drm/msm/mdp/mdp4/mdp4.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
index 9527dafc3e69..1c29618f4ddb 100644
--- a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
+++ b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
@@ -373,7 +373,7 @@ static void update_cursor(struct drm_crtc *crtc)
 	if (mdp4_crtc->cursor.stale) {
 		struct drm_gem_object *next_bo = mdp4_crtc->cursor.next_bo;
 		struct drm_gem_object *prev_bo = mdp4_crtc->cursor.scanout_bo;
-		uint32_t iova = mdp4_crtc->cursor.next_iova;
+		uint64_t iova = mdp4_crtc->cursor.next_iova;
 
 		if (next_bo) {
 			/* take a obj ref + iova ref when we start scanning out: */
@@ -418,7 +418,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc,
 	struct drm_device *dev = crtc->dev;
 	struct drm_gem_object *cursor_bo, *old_bo;
 	unsigned long flags;
-	uint32_t iova;
+	uint64_t iova;
 	int ret;
 
 	if ((width > CURSOR_WIDTH) || (height > CURSOR_HEIGHT)) {
diff --git a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
index 571a91ee9607..b782efd4b95f 100644
--- a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
@@ -17,6 +17,7 @@
 
 
 #include "msm_drv.h"
+#include "msm_gem.h"
 #include "msm_mmu.h"
 #include "mdp4_kms.h"
 
@@ -159,17 +160,18 @@ static void mdp4_destroy(struct msm_kms *kms)
 {
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
 	struct device *dev = mdp4_kms->dev->dev;
-	struct msm_mmu *mmu = mdp4_kms->mmu;
-
-	if (mmu) {
-		mmu->funcs->detach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
-		mmu->funcs->destroy(mmu);
-	}
+	struct msm_gem_address_space *aspace = mdp4_kms->aspace;
 
 	if (mdp4_kms->blank_cursor_iova)
 		msm_gem_put_iova(mdp4_kms->blank_cursor_bo, mdp4_kms->id);
 	drm_gem_object_unreference_unlocked(mdp4_kms->blank_cursor_bo);
 
+	if (aspace) {
+		aspace->mmu->funcs->detach(aspace->mmu,
+				iommu_ports, ARRAY_SIZE(iommu_ports));
+		msm_gem_address_space_destroy(aspace);
+	}
+
 	if (mdp4_kms->rpm_enabled)
 		pm_runtime_disable(dev);
 
@@ -440,7 +442,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
 	struct mdp4_platform_config *config = mdp4_get_config(pdev);
 	struct mdp4_kms *mdp4_kms;
 	struct msm_kms *kms = NULL;
-	struct msm_mmu *mmu;
+	struct msm_gem_address_space *aspace;
 	int irq, ret;
 
 	mdp4_kms = kzalloc(sizeof(*mdp4_kms), GFP_KERNEL);
@@ -531,24 +533,26 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
 	mdelay(16);
 
 	if (config->iommu) {
-		mmu = msm_iommu_new(&pdev->dev, config->iommu);
-		if (IS_ERR(mmu)) {
-			ret = PTR_ERR(mmu);
+		aspace = msm_gem_address_space_create(&pdev->dev,
+				config->iommu, "mdp4");
+		if (IS_ERR(aspace)) {
+			ret = PTR_ERR(aspace);
 			goto fail;
 		}
-		ret = mmu->funcs->attach(mmu, iommu_ports,
+
+		mdp4_kms->aspace = aspace;
+
+		ret = aspace->mmu->funcs->attach(aspace->mmu, iommu_ports,
 				ARRAY_SIZE(iommu_ports));
 		if (ret)
 			goto fail;
-
-		mdp4_kms->mmu = mmu;
 	} else {
 		dev_info(dev->dev, "no iommu, fallback to phys "
 				"contig buffers for scanout\n");
-		mmu = NULL;
+		aspace = NULL;
 	}
 
-	mdp4_kms->id = msm_register_mmu(dev, mmu);
+	mdp4_kms->id = msm_register_address_space(dev, aspace);
 	if (mdp4_kms->id < 0) {
 		ret = mdp4_kms->id;
 		dev_err(dev->dev, "failed to register mdp4 iommu: %d\n", ret);
@@ -598,6 +602,10 @@ static struct mdp4_platform_config *mdp4_get_config(struct platform_device *dev)
 	/* TODO: Chips that aren't apq8064 have a 200 Mhz max_clk */
 	config.max_clk = 266667000;
 	config.iommu = iommu_domain_alloc(&platform_bus_type);
+	if (config.iommu) {
+		config.iommu->geometry.aperture_start = 0x1000;
+		config.iommu->geometry.aperture_end = 0xffffffff;
+	}
 
 	return &config;
 }
diff --git a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h
index 25fb83997119..62712ca164ee 100644
--- a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h
+++ b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h
@@ -43,7 +43,7 @@ struct mdp4_kms {
 	struct clk *pclk;
 	struct clk *lut_clk;
 	struct clk *axi_clk;
-	struct msm_mmu *mmu;
+	struct msm_gem_address_space *aspace;
 
 	struct mdp_irq error_handler;
 
@@ -51,7 +51,7 @@ struct mdp4_kms {
 
 	/* empty/blank cursor bo to use when cursor is "disabled" */
 	struct drm_gem_object *blank_cursor_bo;
-	uint32_t blank_cursor_iova;
+	uint64_t blank_cursor_iova;
 };
 #define to_mdp4_kms(x) container_of(x, struct mdp4_kms, base)
 
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
index ca6ca30650a0..27d5371acee0 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
@@ -8,9 +8,17 @@ http://github.com/freedreno/envytools/
 git clone https://github.com/freedreno/envytools.git
 
 The rules-ng-ng source files this header was generated from are:
-- /local/mnt/workspace/source_trees/envytools/rnndb/../rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-05-10 05:06:30)
-- /local/mnt/workspace/source_trees/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-05-09 06:32:54)
-- /local/mnt/workspace/source_trees/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2016-01-07 08:45:55)
+- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 676 bytes, from 2015-05-20 20:03:14)
+- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2015-05-20 20:03:07)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 41472 bytes, from 2016-01-22 18:18:18)
+- /home/robclark/src/freedreno/envytools/rnndb/edp/edp.xml ( 10416 bytes, from 2015-05-20 20:03:14)
 
 Copyright (C) 2013-2016 by the following authors:
 - Rob Clark <robdclark@gmail.com> (robclark)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
index 8b4e3004f451..618b2ffed9b4 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
@@ -550,6 +550,10 @@ static struct mdp5_cfg_platform *mdp5_get_config(struct platform_device *dev)
 	static struct mdp5_cfg_platform config = {};
 
 	config.iommu = iommu_domain_alloc(&platform_bus_type);
+	if (config.iommu) {
+		config.iommu->geometry.aperture_start = 0x1000;
+		config.iommu->geometry.aperture_end = 0xffffffff;
+	}
 
 	return &config;
 }
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
index c205c360e16d..1ce8a01a5a28 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
@@ -27,11 +27,8 @@
 #define CURSOR_WIDTH	64
 #define CURSOR_HEIGHT	64
 
-#define SSPP_MAX	(SSPP_RGB3 + 1) /* TODO: Add SSPP_MAX in mdp5.xml.h */
-
 struct mdp5_crtc {
 	struct drm_crtc base;
-	char name[8];
 	int id;
 	bool enabled;
 
@@ -102,7 +99,7 @@ static u32 crtc_flush(struct drm_crtc *crtc, u32 flush_mask)
 {
 	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
 
-	DBG("%s: flush=%08x", mdp5_crtc->name, flush_mask);
+	DBG("%s: flush=%08x", crtc->name, flush_mask);
 	return mdp5_ctl_commit(mdp5_crtc->ctl, flush_mask);
 }
 
@@ -136,7 +133,6 @@ static void complete_flip(struct drm_crtc *crtc, struct drm_file *file)
 	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
 	struct drm_device *dev = crtc->dev;
 	struct drm_pending_vblank_event *event;
-	struct drm_plane *plane;
 	unsigned long flags;
 
 	spin_lock_irqsave(&dev->event_lock, flags);
@@ -148,16 +144,12 @@ static void complete_flip(struct drm_crtc *crtc, struct drm_file *file)
 		 */
 		if (!file || (event->base.file_priv == file)) {
 			mdp5_crtc->event = NULL;
-			DBG("%s: send event: %p", mdp5_crtc->name, event);
+			DBG("%s: send event: %p", crtc->name, event);
 			drm_crtc_send_vblank_event(crtc, event);
 		}
 	}
 	spin_unlock_irqrestore(&dev->event_lock, flags);
 
-	drm_atomic_crtc_for_each_plane(plane, crtc) {
-		mdp5_plane_complete_flip(plane);
-	}
-
 	if (mdp5_crtc->ctl && !crtc->state->enable) {
 		/* set STAGE_UNUSED for all layers */
 		mdp5_ctl_blend(mdp5_crtc->ctl, NULL, 0, 0);
@@ -295,7 +287,7 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc)
 	mode = &crtc->state->adjusted_mode;
 
 	DBG("%s: set mode: %d:\"%s\" %d %d %d %d %d %d %d %d %d %d 0x%x 0x%x",
-			mdp5_crtc->name, mode->base.id, mode->name,
+			crtc->name, mode->base.id, mode->name,
 			mode->vrefresh, mode->clock,
 			mode->hdisplay, mode->hsync_start,
 			mode->hsync_end, mode->htotal,
@@ -315,7 +307,7 @@ static void mdp5_crtc_disable(struct drm_crtc *crtc)
 	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
 	struct mdp5_kms *mdp5_kms = get_kms(crtc);
 
-	DBG("%s", mdp5_crtc->name);
+	DBG("%s", crtc->name);
 
 	if (WARN_ON(!mdp5_crtc->enabled))
 		return;
@@ -334,7 +326,7 @@ static void mdp5_crtc_enable(struct drm_crtc *crtc)
 	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
 	struct mdp5_kms *mdp5_kms = get_kms(crtc);
 
-	DBG("%s", mdp5_crtc->name);
+	DBG("%s", crtc->name);
 
 	if (WARN_ON(mdp5_crtc->enabled))
 		return;
@@ -372,7 +364,6 @@ static bool is_fullscreen(struct drm_crtc_state *cstate,
 static int mdp5_crtc_atomic_check(struct drm_crtc *crtc,
 		struct drm_crtc_state *state)
 {
-	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
 	struct mdp5_kms *mdp5_kms = get_kms(crtc);
 	struct drm_plane *plane;
 	struct drm_device *dev = crtc->dev;
@@ -381,7 +372,7 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc,
 	const struct drm_plane_state *pstate;
 	int cnt = 0, base = 0, i;
 
-	DBG("%s: check", mdp5_crtc->name);
+	DBG("%s: check", crtc->name);
 
 	drm_atomic_crtc_state_for_each_plane_state(plane, pstate, state) {
 		pstates[cnt].plane = plane;
@@ -405,14 +396,14 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc,
 	hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg);
 
 	if ((cnt + base) >= hw_cfg->lm.nb_stages) {
-		dev_err(dev->dev, "too many planes!\n");
+		dev_err(dev->dev, "too many planes! cnt=%d, base=%d\n", cnt, base);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < cnt; i++) {
 		pstates[i].state->stage = STAGE_BASE + i + base;
-		DBG("%s: assign pipe %s on stage=%d", mdp5_crtc->name,
-				pipe2name(mdp5_plane_pipe(pstates[i].plane)),
+		DBG("%s: assign pipe %s on stage=%d", crtc->name,
+				pstates[i].plane->name,
 				pstates[i].state->stage);
 	}
 
@@ -422,8 +413,7 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc,
 static void mdp5_crtc_atomic_begin(struct drm_crtc *crtc,
 		struct drm_crtc_state *old_crtc_state)
 {
-	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
-	DBG("%s: begin", mdp5_crtc->name);
+	DBG("%s: begin", crtc->name);
 }
 
429static void mdp5_crtc_atomic_flush(struct drm_crtc *crtc, 419static void mdp5_crtc_atomic_flush(struct drm_crtc *crtc,
@@ -433,7 +423,7 @@ static void mdp5_crtc_atomic_flush(struct drm_crtc *crtc,
 	struct drm_device *dev = crtc->dev;
 	unsigned long flags;
 
-	DBG("%s: event: %p", mdp5_crtc->name, crtc->state->event);
+	DBG("%s: event: %p", crtc->name, crtc->state->event);
 
 	WARN_ON(mdp5_crtc->event);
 
@@ -499,7 +489,8 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
 	struct drm_device *dev = crtc->dev;
 	struct mdp5_kms *mdp5_kms = get_kms(crtc);
 	struct drm_gem_object *cursor_bo, *old_bo = NULL;
-	uint32_t blendcfg, cursor_addr, stride;
+	uint32_t blendcfg, stride;
+	uint64_t cursor_addr;
 	int ret, lm;
 	enum mdp5_cursor_alpha cur_alpha = CURSOR_ALPHA_PER_PIXEL;
 	uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0);
@@ -653,7 +644,7 @@ static void mdp5_crtc_err_irq(struct mdp_irq *irq, uint32_t irqstatus)
 {
 	struct mdp5_crtc *mdp5_crtc = container_of(irq, struct mdp5_crtc, err);
 
-	DBG("%s: error: %08x", mdp5_crtc->name, irqstatus);
+	DBG("%s: error: %08x", mdp5_crtc->base.name, irqstatus);
 }
 
 static void mdp5_crtc_pp_done_irq(struct mdp_irq *irq, uint32_t irqstatus)
@@ -775,9 +766,6 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
 	mdp5_crtc->vblank.irq = mdp5_crtc_vblank_irq;
 	mdp5_crtc->err.irq = mdp5_crtc_err_irq;
 
-	snprintf(mdp5_crtc->name, sizeof(mdp5_crtc->name), "%s:%d",
-			pipe2name(mdp5_plane_pipe(plane)), id);
-
 	drm_crtc_init_with_planes(dev, crtc, plane, NULL, &mdp5_crtc_funcs,
 			NULL);
 
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
index 5c5940db898e..3ce8b9dec9c1 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
@@ -41,6 +41,8 @@ static void mdp5_irq_error_handler(struct mdp_irq *irq, uint32_t irqstatus)
 	if (dumpstate && __ratelimit(&rs)) {
 		struct drm_printer p = drm_info_printer(mdp5_kms->dev->dev);
 		drm_state_dump(mdp5_kms->dev, &p);
+		if (mdp5_kms->smp)
+			mdp5_smp_dump(mdp5_kms->smp, &p);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
index ed7143d35b25..5f6cd8745dbc 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
@@ -19,6 +19,7 @@
 #include <linux/of_irq.h>
 
 #include "msm_drv.h"
+#include "msm_gem.h"
 #include "msm_mmu.h"
 #include "mdp5_kms.h"
 
@@ -71,10 +72,49 @@ static int mdp5_hw_init(struct msm_kms *kms)
 	return 0;
 }
 
+struct mdp5_state *mdp5_get_state(struct drm_atomic_state *s)
+{
+	struct msm_drm_private *priv = s->dev->dev_private;
+	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+	struct msm_kms_state *state = to_kms_state(s);
+	struct mdp5_state *new_state;
+	int ret;
+
+	if (state->state)
+		return state->state;
+
+	ret = drm_modeset_lock(&mdp5_kms->state_lock, s->acquire_ctx);
+	if (ret)
+		return ERR_PTR(ret);
+
+	new_state = kmalloc(sizeof(*mdp5_kms->state), GFP_KERNEL);
+	if (!new_state)
+		return ERR_PTR(-ENOMEM);
+
+	/* Copy state: */
+	new_state->hwpipe = mdp5_kms->state->hwpipe;
+	if (mdp5_kms->smp)
+		new_state->smp = mdp5_kms->state->smp;
+
+	state->state = new_state;
+
+	return new_state;
+}
+
+static void mdp5_swap_state(struct msm_kms *kms, struct drm_atomic_state *state)
+{
+	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
+	swap(to_kms_state(state)->state, mdp5_kms->state);
+}
+
 static void mdp5_prepare_commit(struct msm_kms *kms, struct drm_atomic_state *state)
 {
 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
+
 	mdp5_enable(mdp5_kms);
+
+	if (mdp5_kms->smp)
+		mdp5_smp_prepare_commit(mdp5_kms->smp, &mdp5_kms->state->smp);
 }
 
 static void mdp5_complete_commit(struct msm_kms *kms, struct drm_atomic_state *state)
@@ -87,6 +127,9 @@ static void mdp5_complete_commit(struct msm_kms *kms, struct drm_atomic_state *s
 	for_each_plane_in_state(state, plane, plane_state, i)
 		mdp5_plane_complete_commit(plane, plane_state);
 
+	if (mdp5_kms->smp)
+		mdp5_smp_complete_commit(mdp5_kms->smp, &mdp5_kms->state->smp);
+
 	mdp5_disable(mdp5_kms);
 }
 
@@ -117,14 +160,66 @@ static int mdp5_set_split_display(struct msm_kms *kms,
 static void mdp5_kms_destroy(struct msm_kms *kms)
 {
 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
-	struct msm_mmu *mmu = mdp5_kms->mmu;
+	struct msm_gem_address_space *aspace = mdp5_kms->aspace;
+	int i;
+
+	for (i = 0; i < mdp5_kms->num_hwpipes; i++)
+		mdp5_pipe_destroy(mdp5_kms->hwpipes[i]);
 
-	if (mmu) {
-		mmu->funcs->detach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
-		mmu->funcs->destroy(mmu);
+	if (aspace) {
+		aspace->mmu->funcs->detach(aspace->mmu,
+				iommu_ports, ARRAY_SIZE(iommu_ports));
+		msm_gem_address_space_destroy(aspace);
 	}
 }
 
+#ifdef CONFIG_DEBUG_FS
+static int smp_show(struct seq_file *m, void *arg)
+{
+	struct drm_info_node *node = (struct drm_info_node *) m->private;
+	struct drm_device *dev = node->minor->dev;
+	struct msm_drm_private *priv = dev->dev_private;
+	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+	struct drm_printer p = drm_seq_file_printer(m);
+
+	if (!mdp5_kms->smp) {
+		drm_printf(&p, "no SMP pool\n");
+		return 0;
+	}
+
+	mdp5_smp_dump(mdp5_kms->smp, &p);
+
+	return 0;
+}
+
+static struct drm_info_list mdp5_debugfs_list[] = {
+		{"smp", smp_show },
+};
+
+static int mdp5_kms_debugfs_init(struct msm_kms *kms, struct drm_minor *minor)
+{
+	struct drm_device *dev = minor->dev;
+	int ret;
+
+	ret = drm_debugfs_create_files(mdp5_debugfs_list,
+			ARRAY_SIZE(mdp5_debugfs_list),
+			minor->debugfs_root, minor);
+
+	if (ret) {
+		dev_err(dev->dev, "could not install mdp5_debugfs_list\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mdp5_kms_debugfs_cleanup(struct msm_kms *kms, struct drm_minor *minor)
+{
+	drm_debugfs_remove_files(mdp5_debugfs_list,
+			ARRAY_SIZE(mdp5_debugfs_list), minor);
+}
+#endif
+
 static const struct mdp_kms_funcs kms_funcs = {
 	.base = {
 		.hw_init = mdp5_hw_init,
@@ -134,6 +229,7 @@ static const struct mdp_kms_funcs kms_funcs = {
 		.irq = mdp5_irq,
 		.enable_vblank = mdp5_enable_vblank,
 		.disable_vblank = mdp5_disable_vblank,
+		.swap_state = mdp5_swap_state,
 		.prepare_commit = mdp5_prepare_commit,
 		.complete_commit = mdp5_complete_commit,
 		.wait_for_crtc_commit_done = mdp5_wait_for_crtc_commit_done,
@@ -141,6 +237,10 @@ static const struct mdp_kms_funcs kms_funcs = {
 		.round_pixclk = mdp5_round_pixclk,
 		.set_split_display = mdp5_set_split_display,
 		.destroy = mdp5_kms_destroy,
+#ifdef CONFIG_DEBUG_FS
+		.debugfs_init = mdp5_kms_debugfs_init,
+		.debugfs_cleanup = mdp5_kms_debugfs_cleanup,
+#endif
 	},
 	.set_irqmask = mdp5_set_irqmask,
 };
@@ -321,15 +421,6 @@ static int modeset_init_intf(struct mdp5_kms *mdp5_kms, int intf_num)
 
 static int modeset_init(struct mdp5_kms *mdp5_kms)
 {
-	static const enum mdp5_pipe crtcs[] = {
-			SSPP_RGB0, SSPP_RGB1, SSPP_RGB2, SSPP_RGB3,
-	};
-	static const enum mdp5_pipe vig_planes[] = {
-			SSPP_VIG0, SSPP_VIG1, SSPP_VIG2, SSPP_VIG3,
-	};
-	static const enum mdp5_pipe dma_planes[] = {
-			SSPP_DMA0, SSPP_DMA1,
-	};
 	struct drm_device *dev = mdp5_kms->dev;
 	struct msm_drm_private *priv = dev->dev_private;
 	const struct mdp5_cfg_hw *hw_cfg;
@@ -337,58 +428,35 @@ static int modeset_init(struct mdp5_kms *mdp5_kms)
 
 	hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg);
 
-	/* construct CRTCs and their private planes: */
-	for (i = 0; i < hw_cfg->pipe_rgb.count; i++) {
+	/* Construct planes equaling the number of hw pipes, and CRTCs
+	 * for the N layer-mixers (LM).  The first N planes become primary
+	 * planes for the CRTCs, with the remainder as overlay planes:
+	 */
+	for (i = 0; i < mdp5_kms->num_hwpipes; i++) {
+		bool primary = i < mdp5_cfg->lm.count;
 		struct drm_plane *plane;
 		struct drm_crtc *crtc;
 
-		plane = mdp5_plane_init(dev, crtcs[i], true,
-			hw_cfg->pipe_rgb.base[i], hw_cfg->pipe_rgb.caps);
+		plane = mdp5_plane_init(dev, primary);
 		if (IS_ERR(plane)) {
 			ret = PTR_ERR(plane);
-			dev_err(dev->dev, "failed to construct plane for %s (%d)\n",
-					pipe2name(crtcs[i]), ret);
+			dev_err(dev->dev, "failed to construct plane %d (%d)\n", i, ret);
 			goto fail;
 		}
+		priv->planes[priv->num_planes++] = plane;
+
+		if (!primary)
+			continue;
 
 		crtc = mdp5_crtc_init(dev, plane, i);
 		if (IS_ERR(crtc)) {
 			ret = PTR_ERR(crtc);
-			dev_err(dev->dev, "failed to construct crtc for %s (%d)\n",
-					pipe2name(crtcs[i]), ret);
+			dev_err(dev->dev, "failed to construct crtc %d (%d)\n", i, ret);
 			goto fail;
 		}
 		priv->crtcs[priv->num_crtcs++] = crtc;
 	}
 
-	/* Construct video planes: */
-	for (i = 0; i < hw_cfg->pipe_vig.count; i++) {
-		struct drm_plane *plane;
-
-		plane = mdp5_plane_init(dev, vig_planes[i], false,
-			hw_cfg->pipe_vig.base[i], hw_cfg->pipe_vig.caps);
-		if (IS_ERR(plane)) {
-			ret = PTR_ERR(plane);
-			dev_err(dev->dev, "failed to construct %s plane: %d\n",
-					pipe2name(vig_planes[i]), ret);
-			goto fail;
-		}
-	}
-
-	/* DMA planes */
-	for (i = 0; i < hw_cfg->pipe_dma.count; i++) {
-		struct drm_plane *plane;
-
-		plane = mdp5_plane_init(dev, dma_planes[i], false,
-			hw_cfg->pipe_dma.base[i], hw_cfg->pipe_dma.caps);
-		if (IS_ERR(plane)) {
-			ret = PTR_ERR(plane);
-			dev_err(dev->dev, "failed to construct %s plane: %d\n",
-					pipe2name(dma_planes[i]), ret);
-			goto fail;
-		}
-	}
-
 	/* Construct encoders and modeset initialize connector devices
 	 * for each external display interface.
 	 */
@@ -564,7 +632,7 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev)
 	struct mdp5_kms *mdp5_kms;
 	struct mdp5_cfg *config;
 	struct msm_kms *kms;
-	struct msm_mmu *mmu;
+	struct msm_gem_address_space *aspace;
 	int irq, i, ret;
 
 	/* priv->kms would have been populated by the MDP5 driver */
@@ -606,30 +674,29 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev)
 	mdelay(16);
 
 	if (config->platform.iommu) {
-		mmu = msm_iommu_new(&pdev->dev, config->platform.iommu);
-		if (IS_ERR(mmu)) {
-			ret = PTR_ERR(mmu);
-			dev_err(&pdev->dev, "failed to init iommu: %d\n", ret);
-			iommu_domain_free(config->platform.iommu);
+		aspace = msm_gem_address_space_create(&pdev->dev,
+				config->platform.iommu, "mdp5");
+		if (IS_ERR(aspace)) {
+			ret = PTR_ERR(aspace);
 			goto fail;
 		}
 
-		ret = mmu->funcs->attach(mmu, iommu_ports,
+		mdp5_kms->aspace = aspace;
+
+		ret = aspace->mmu->funcs->attach(aspace->mmu, iommu_ports,
 				ARRAY_SIZE(iommu_ports));
 		if (ret) {
 			dev_err(&pdev->dev, "failed to attach iommu: %d\n",
 				ret);
-			mmu->funcs->destroy(mmu);
 			goto fail;
 		}
 	} else {
 		dev_info(&pdev->dev,
 			 "no iommu, fallback to phys contig buffers for scanout\n");
-		mmu = NULL;
+		aspace = NULL;
 	}
-	mdp5_kms->mmu = mmu;
 
-	mdp5_kms->id = msm_register_mmu(dev, mmu);
+	mdp5_kms->id = msm_register_address_space(dev, aspace);
 	if (mdp5_kms->id < 0) {
 		ret = mdp5_kms->id;
 		dev_err(&pdev->dev, "failed to register mdp5 iommu: %d\n", ret);
@@ -644,8 +711,8 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev)
 
 	dev->mode_config.min_width = 0;
 	dev->mode_config.min_height = 0;
-	dev->mode_config.max_width = config->hw->lm.max_width;
-	dev->mode_config.max_height = config->hw->lm.max_height;
+	dev->mode_config.max_width = 0xffff;
+	dev->mode_config.max_height = 0xffff;
 
 	dev->driver->get_vblank_timestamp = mdp5_get_vblank_timestamp;
 	dev->driver->get_scanout_position = mdp5_get_scanoutpos;
@@ -673,6 +740,69 @@ static void mdp5_destroy(struct platform_device *pdev)
 
 	if (mdp5_kms->rpm_enabled)
 		pm_runtime_disable(&pdev->dev);
+
+	kfree(mdp5_kms->state);
+}
+
+static int construct_pipes(struct mdp5_kms *mdp5_kms, int cnt,
+		const enum mdp5_pipe *pipes, const uint32_t *offsets,
+		uint32_t caps)
+{
+	struct drm_device *dev = mdp5_kms->dev;
+	int i, ret;
+
+	for (i = 0; i < cnt; i++) {
+		struct mdp5_hw_pipe *hwpipe;
+
+		hwpipe = mdp5_pipe_init(pipes[i], offsets[i], caps);
+		if (IS_ERR(hwpipe)) {
+			ret = PTR_ERR(hwpipe);
+			dev_err(dev->dev, "failed to construct pipe for %s (%d)\n",
+					pipe2name(pipes[i]), ret);
+			return ret;
+		}
+		hwpipe->idx = mdp5_kms->num_hwpipes;
+		mdp5_kms->hwpipes[mdp5_kms->num_hwpipes++] = hwpipe;
+	}
+
+	return 0;
+}
+
+static int hwpipe_init(struct mdp5_kms *mdp5_kms)
+{
+	static const enum mdp5_pipe rgb_planes[] = {
+			SSPP_RGB0, SSPP_RGB1, SSPP_RGB2, SSPP_RGB3,
+	};
+	static const enum mdp5_pipe vig_planes[] = {
+			SSPP_VIG0, SSPP_VIG1, SSPP_VIG2, SSPP_VIG3,
+	};
+	static const enum mdp5_pipe dma_planes[] = {
+			SSPP_DMA0, SSPP_DMA1,
+	};
+	const struct mdp5_cfg_hw *hw_cfg;
+	int ret;
+
+	hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg);
+
+	/* Construct RGB pipes: */
+	ret = construct_pipes(mdp5_kms, hw_cfg->pipe_rgb.count, rgb_planes,
+			hw_cfg->pipe_rgb.base, hw_cfg->pipe_rgb.caps);
+	if (ret)
+		return ret;
+
+	/* Construct video (VIG) pipes: */
+	ret = construct_pipes(mdp5_kms, hw_cfg->pipe_vig.count, vig_planes,
+			hw_cfg->pipe_vig.base, hw_cfg->pipe_vig.caps);
+	if (ret)
+		return ret;
+
+	/* Construct DMA pipes: */
+	ret = construct_pipes(mdp5_kms, hw_cfg->pipe_dma.count, dma_planes,
+			hw_cfg->pipe_dma.base, hw_cfg->pipe_dma.caps);
+	if (ret)
+		return ret;
+
+	return 0;
 }
 
 static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
@@ -696,6 +826,13 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
 	mdp5_kms->dev = dev;
 	mdp5_kms->pdev = pdev;
 
+	drm_modeset_lock_init(&mdp5_kms->state_lock);
+	mdp5_kms->state = kzalloc(sizeof(*mdp5_kms->state), GFP_KERNEL);
+	if (!mdp5_kms->state) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
 	mdp5_kms->mmio = msm_ioremap(pdev, "mdp_phys", "MDP5");
 	if (IS_ERR(mdp5_kms->mmio)) {
 		ret = PTR_ERR(mdp5_kms->mmio);
@@ -749,7 +886,7 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
 	 * this section initializes the SMP:
 	 */
 	if (mdp5_kms->caps & MDP_CAP_SMP) {
-		mdp5_kms->smp = mdp5_smp_init(mdp5_kms->dev, &config->hw->smp);
+		mdp5_kms->smp = mdp5_smp_init(mdp5_kms, &config->hw->smp);
 		if (IS_ERR(mdp5_kms->smp)) {
 			ret = PTR_ERR(mdp5_kms->smp);
 			mdp5_kms->smp = NULL;
@@ -764,6 +901,10 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
 		goto fail;
 	}
 
+	ret = hwpipe_init(mdp5_kms);
+	if (ret)
+		goto fail;
+
 	/* set uninit-ed kms */
 	priv->kms = &mdp5_kms->base.base;
 
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
index c6fbcfad2d59..17b0cc101171 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
@@ -24,8 +24,11 @@
 #include "mdp5_cfg.h"	/* must be included before mdp5.xml.h */
 #include "mdp5.xml.h"
 #include "mdp5_ctl.h"
+#include "mdp5_pipe.h"
 #include "mdp5_smp.h"
 
+struct mdp5_state;
+
 struct mdp5_kms {
 	struct mdp_kms base;
 
@@ -33,13 +36,21 @@ struct mdp5_kms {
 
 	struct platform_device *pdev;
 
+	unsigned num_hwpipes;
+	struct mdp5_hw_pipe *hwpipes[SSPP_MAX];
+
 	struct mdp5_cfg_handler *cfg;
 	uint32_t caps;	/* MDP capabilities (MDP_CAP_XXX bits) */
 
+	/**
+	 * Global atomic state.  Do not access directly, use mdp5_get_state()
+	 */
+	struct mdp5_state *state;
+	struct drm_modeset_lock state_lock;
 
 	/* mapper-id used to request GEM buffer mapped for scanout: */
 	int id;
-	struct msm_mmu *mmu;
+	struct msm_gem_address_space *aspace;
 
 	struct mdp5_smp *smp;
 	struct mdp5_ctl_manager *ctlm;
@@ -65,9 +76,27 @@ struct mdp5_kms {
 };
 #define to_mdp5_kms(x) container_of(x, struct mdp5_kms, base)
 
+/* Global atomic state for tracking resources that are shared across
+ * multiple kms objects (planes/crtcs/etc).
+ *
+ * For atomic updates which require modifying global state, use mdp5_get_state().
+ */
+struct mdp5_state {
+	struct mdp5_hw_pipe_state hwpipe;
+	struct mdp5_smp_state smp;
+};
+
+struct mdp5_state *__must_check
+mdp5_get_state(struct drm_atomic_state *s);
+
+/* Atomic plane state.  Subclasses the base drm_plane_state in order to
+ * track assigned hwpipe and hw specific state.
+ */
 struct mdp5_plane_state {
 	struct drm_plane_state base;
 
+	struct mdp5_hw_pipe *hwpipe;
+
 	/* aligned with property */
 	uint8_t premultiplied;
 	uint8_t zpos;
@@ -76,11 +105,6 @@ struct mdp5_plane_state {
 	/* assigned by crtc blender */
 	enum mdp_mixer_stage_id stage;
 
-	/* some additional transactional status to help us know in the
-	 * apply path whether we need to update SMP allocation, and
-	 * whether current update is still pending:
-	 */
-	bool mode_changed : 1;
 	bool pending : 1;
 };
 #define to_mdp5_plane_state(x) \
@@ -208,13 +232,10 @@ int mdp5_irq_domain_init(struct mdp5_kms *mdp5_kms);
 void mdp5_irq_domain_fini(struct mdp5_kms *mdp5_kms);
 
 uint32_t mdp5_plane_get_flush(struct drm_plane *plane);
-void mdp5_plane_complete_flip(struct drm_plane *plane);
 void mdp5_plane_complete_commit(struct drm_plane *plane,
 	struct drm_plane_state *state);
 enum mdp5_pipe mdp5_plane_pipe(struct drm_plane *plane);
-struct drm_plane *mdp5_plane_init(struct drm_device *dev,
-		enum mdp5_pipe pipe, bool private_plane,
-		uint32_t reg_offset, uint32_t caps);
+struct drm_plane *mdp5_plane_init(struct drm_device *dev, bool primary);
 
 uint32_t mdp5_crtc_vblank(struct drm_crtc *crtc);
 
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c
new file mode 100644
index 000000000000..1ae9dc8d260d
--- /dev/null
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c
@@ -0,0 +1,133 @@
+/*
+ * Copyright (C) 2016 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "mdp5_kms.h"
+
+struct mdp5_hw_pipe *mdp5_pipe_assign(struct drm_atomic_state *s,
+		struct drm_plane *plane, uint32_t caps, uint32_t blkcfg)
+{
+	struct msm_drm_private *priv = s->dev->dev_private;
+	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+	struct mdp5_state *state;
+	struct mdp5_hw_pipe_state *old_state, *new_state;
+	struct mdp5_hw_pipe *hwpipe = NULL;
+	int i;
+
+	state = mdp5_get_state(s);
+	if (IS_ERR(state))
+		return ERR_CAST(state);
+
+	/* grab old_state after mdp5_get_state(), since now we hold lock: */
+	old_state = &mdp5_kms->state->hwpipe;
+	new_state = &state->hwpipe;
+
+	for (i = 0; i < mdp5_kms->num_hwpipes; i++) {
+		struct mdp5_hw_pipe *cur = mdp5_kms->hwpipes[i];
+
+		/* skip if already in-use.. check both new and old state,
+		 * since we cannot immediately re-use a pipe that is
+		 * released in the current update in some cases:
+		 *  (1) mdp5 can have SMP (non-double-buffered)
+		 *  (2) hw pipe previously assigned to different CRTC
+		 *      (vblanks might not be aligned)
+		 */
+		if (new_state->hwpipe_to_plane[cur->idx] ||
+				old_state->hwpipe_to_plane[cur->idx])
+			continue;
+
+		/* skip if doesn't support some required caps: */
+		if (caps & ~cur->caps)
+			continue;
+
+		/* possible candidate, take the one with the
+		 * fewest unneeded caps bits set:
+		 */
+		if (!hwpipe || (hweight_long(cur->caps & ~caps) <
+				hweight_long(hwpipe->caps & ~caps)))
+			hwpipe = cur;
+	}
+
+	if (!hwpipe)
+		return ERR_PTR(-ENOMEM);
+
+	if (mdp5_kms->smp) {
+		int ret;
+
+		DBG("%s: alloc SMP blocks", hwpipe->name);
+		ret = mdp5_smp_assign(mdp5_kms->smp, &state->smp,
+				hwpipe->pipe, blkcfg);
+		if (ret)
+			return ERR_PTR(-ENOMEM);
+
+		hwpipe->blkcfg = blkcfg;
+	}
+
+	DBG("%s: assign to plane %s for caps %x",
+			hwpipe->name, plane->name, caps);
+	new_state->hwpipe_to_plane[hwpipe->idx] = plane;
+
+	return hwpipe;
+}
+
+void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+{
+	struct msm_drm_private *priv = s->dev->dev_private;
+	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+	struct mdp5_state *state = mdp5_get_state(s);
+	struct mdp5_hw_pipe_state *new_state = &state->hwpipe;
+
+	if (!hwpipe)
+		return;
+
+	if (WARN_ON(!new_state->hwpipe_to_plane[hwpipe->idx]))
+		return;
+
+	DBG("%s: release from plane %s", hwpipe->name,
+		new_state->hwpipe_to_plane[hwpipe->idx]->name);
+
+	if (mdp5_kms->smp) {
+		DBG("%s: free SMP blocks", hwpipe->name);
+		mdp5_smp_release(mdp5_kms->smp, &state->smp, hwpipe->pipe);
+	}
+
+	new_state->hwpipe_to_plane[hwpipe->idx] = NULL;
+}
+
+void mdp5_pipe_destroy(struct mdp5_hw_pipe *hwpipe)
+{
+	kfree(hwpipe);
+}
+
+struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe,
+		uint32_t reg_offset, uint32_t caps)
+{
+	struct mdp5_hw_pipe *hwpipe;
+
+	hwpipe = kzalloc(sizeof(*hwpipe), GFP_KERNEL);
+	if (!hwpipe)
+		return ERR_PTR(-ENOMEM);
+
+	hwpipe->name = pipe2name(pipe);
+	hwpipe->pipe = pipe;
+	hwpipe->reg_offset = reg_offset;
+	hwpipe->caps = caps;
+	hwpipe->flush_mask = mdp_ctl_flush_mask_pipe(pipe);
+
+	spin_lock_init(&hwpipe->pipe_lock);
+
+	return hwpipe;
+}
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h
new file mode 100644
index 000000000000..611da7a660c9
--- /dev/null
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h
@@ -0,0 +1,56 @@
+/*
+ * Copyright (C) 2016 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __MDP5_PIPE_H__
+#define __MDP5_PIPE_H__
+
+#define SSPP_MAX	(SSPP_RGB3 + 1) /* TODO: Add SSPP_MAX in mdp5.xml.h */
+
+/* represents a hw pipe, which is dynamically assigned to a plane */
+struct mdp5_hw_pipe {
+	int idx;
+
+	const char *name;
+	enum mdp5_pipe pipe;
+
+	spinlock_t pipe_lock;	/* protect REG_MDP5_PIPE_* registers */
+	uint32_t reg_offset;
+	uint32_t caps;
+
+	uint32_t flush_mask;	/* used to commit pipe registers */
+
+	/* number of smp blocks per plane, ie:
+	 *   nblks_y | (nblks_u << 8) | (nblks_v << 16)
+	 */
+	uint32_t blkcfg;
+};
+
+/* global atomic state of assignment between pipes and planes: */
+struct mdp5_hw_pipe_state {
+	struct drm_plane *hwpipe_to_plane[SSPP_MAX];
+};
+
+struct mdp5_hw_pipe *__must_check
+mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+		uint32_t caps, uint32_t blkcfg);
+void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe);
+
+struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe,
+		uint32_t reg_offset, uint32_t caps);
+void mdp5_pipe_destroy(struct mdp5_hw_pipe *hwpipe);
+
+#endif /* __MDP5_PIPE_H__ */
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
index 81c0562ab489..c099da7bc212 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
@@ -21,15 +21,6 @@
 
 struct mdp5_plane {
 	struct drm_plane base;
-	const char *name;
-
-	enum mdp5_pipe pipe;
-
-	spinlock_t pipe_lock;	/* protect REG_MDP5_PIPE_* registers */
-	uint32_t reg_offset;
-	uint32_t caps;
-
-	uint32_t flush_mask;	/* used to commit pipe registers */
 
 	uint32_t nformats;
 	uint32_t formats[32];
@@ -70,12 +61,6 @@ static void mdp5_plane_destroy(struct drm_plane *plane)
 static void mdp5_plane_install_rotation_property(struct drm_device *dev,
 		struct drm_plane *plane)
 {
-	struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
-
-	if (!(mdp5_plane->caps & MDP_PIPE_CAP_HFLIP) &&
-		!(mdp5_plane->caps & MDP_PIPE_CAP_VFLIP))
-		return;
-
 	drm_plane_create_rotation_property(plane,
 			DRM_ROTATE_0,
 			DRM_ROTATE_0 |
@@ -188,11 +173,12 @@ mdp5_plane_atomic_print_state(struct drm_printer *p,
 {
 	struct mdp5_plane_state *pstate = to_mdp5_plane_state(state);
 
+	drm_printf(p, "\thwpipe=%s\n", pstate->hwpipe ?
+			pstate->hwpipe->name : "(null)");
 	drm_printf(p, "\tpremultiplied=%u\n", pstate->premultiplied);
 	drm_printf(p, "\tzpos=%u\n", pstate->zpos);
 	drm_printf(p, "\talpha=%u\n", pstate->alpha);
 	drm_printf(p, "\tstage=%s\n", stage2name(pstate->stage));
-	drm_printf(p, "\tmode_changed=%u\n", pstate->mode_changed);
 	drm_printf(p, "\tpending=%u\n", pstate->pending);
 }
 
198 184
@@ -234,7 +220,6 @@ mdp5_plane_duplicate_state(struct drm_plane *plane)
234 if (mdp5_state && mdp5_state->base.fb) 220 if (mdp5_state && mdp5_state->base.fb)
235 drm_framebuffer_reference(mdp5_state->base.fb); 221 drm_framebuffer_reference(mdp5_state->base.fb);
236 222
237 mdp5_state->mode_changed = false;
238 mdp5_state->pending = false; 223 mdp5_state->pending = false;
239 224
240 return &mdp5_state->base; 225 return &mdp5_state->base;
@@ -243,10 +228,12 @@ mdp5_plane_duplicate_state(struct drm_plane *plane)
 static void mdp5_plane_destroy_state(struct drm_plane *plane,
 		struct drm_plane_state *state)
 {
+	struct mdp5_plane_state *pstate = to_mdp5_plane_state(state);
+
 	if (state->fb)
 		drm_framebuffer_unreference(state->fb);
 
-	kfree(to_mdp5_plane_state(state));
+	kfree(pstate);
 }
 
 static const struct drm_plane_funcs mdp5_plane_funcs = {
@@ -265,101 +252,115 @@ static const struct drm_plane_funcs mdp5_plane_funcs = {
 static int mdp5_plane_prepare_fb(struct drm_plane *plane,
 		struct drm_plane_state *new_state)
 {
-	struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
 	struct mdp5_kms *mdp5_kms = get_kms(plane);
 	struct drm_framebuffer *fb = new_state->fb;
 
 	if (!new_state->fb)
 		return 0;
 
-	DBG("%s: prepare: FB[%u]", mdp5_plane->name, fb->base.id);
+	DBG("%s: prepare: FB[%u]", plane->name, fb->base.id);
 	return msm_framebuffer_prepare(fb, mdp5_kms->id);
 }
 
 static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
 		struct drm_plane_state *old_state)
 {
-	struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
 	struct mdp5_kms *mdp5_kms = get_kms(plane);
 	struct drm_framebuffer *fb = old_state->fb;
 
 	if (!fb)
 		return;
 
-	DBG("%s: cleanup: FB[%u]", mdp5_plane->name, fb->base.id);
+	DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id);
 	msm_framebuffer_cleanup(fb, mdp5_kms->id);
 }
 
 static int mdp5_plane_atomic_check(struct drm_plane *plane,
 		struct drm_plane_state *state)
 {
-	struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
+	struct mdp5_plane_state *mdp5_state = to_mdp5_plane_state(state);
 	struct drm_plane_state *old_state = plane->state;
-	const struct mdp_format *format;
-	bool vflip, hflip;
+	struct mdp5_cfg *config = mdp5_cfg_get_config(get_kms(plane)->cfg);
+	bool new_hwpipe = false;
+	uint32_t max_width, max_height;
+	uint32_t caps = 0;
 
-	DBG("%s: check (%d -> %d)", mdp5_plane->name,
+	DBG("%s: check (%d -> %d)", plane->name,
 			plane_enabled(old_state), plane_enabled(state));
 
+	/* We don't allow faster-than-vblank updates.. if we did add this
+	 * some day, we would need to disallow in cases where hwpipe
+	 * changes
+	 */
+	if (WARN_ON(to_mdp5_plane_state(old_state)->pending))
+		return -EBUSY;
+
+	max_width = config->hw->lm.max_width << 16;
+	max_height = config->hw->lm.max_height << 16;
+
+	/* Make sure source dimensions are within bounds. */
+	if ((state->src_w > max_width) || (state->src_h > max_height)) {
+		struct drm_rect src = drm_plane_state_src(state);
+		DBG("Invalid source size "DRM_RECT_FP_FMT,
+				DRM_RECT_FP_ARG(&src));
+		return -ERANGE;
+	}
+
 	if (plane_enabled(state)) {
 		unsigned int rotation;
+		const struct mdp_format *format;
+		struct mdp5_kms *mdp5_kms = get_kms(plane);
+		uint32_t blkcfg = 0;
 
 		format = to_mdp_format(msm_framebuffer_format(state->fb));
-		if (MDP_FORMAT_IS_YUV(format) &&
-			!pipe_supports_yuv(mdp5_plane->caps)) {
-			DBG("Pipe doesn't support YUV\n");
-
-			return -EINVAL;
-		}
-
-		if (!(mdp5_plane->caps & MDP_PIPE_CAP_SCALE) &&
-			(((state->src_w >> 16) != state->crtc_w) ||
-			((state->src_h >> 16) != state->crtc_h))) {
-			DBG("Pipe doesn't support scaling (%dx%d -> %dx%d)\n",
-				state->src_w >> 16, state->src_h >> 16,
-				state->crtc_w, state->crtc_h);
-
-			return -EINVAL;
-		}
+		if (MDP_FORMAT_IS_YUV(format))
+			caps |= MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_CSC;
+
+		if (((state->src_w >> 16) != state->crtc_w) ||
+				((state->src_h >> 16) != state->crtc_h))
+			caps |= MDP_PIPE_CAP_SCALE;
 
 		rotation = drm_rotation_simplify(state->rotation,
 						 DRM_ROTATE_0 |
 						 DRM_REFLECT_X |
 						 DRM_REFLECT_Y);
-		hflip = !!(rotation & DRM_REFLECT_X);
-		vflip = !!(rotation & DRM_REFLECT_Y);
-
-		if ((vflip && !(mdp5_plane->caps & MDP_PIPE_CAP_VFLIP)) ||
-			(hflip && !(mdp5_plane->caps & MDP_PIPE_CAP_HFLIP))) {
-			DBG("Pipe doesn't support flip\n");
-
-			return -EINVAL;
-		}
-	}
-
-	if (plane_enabled(state) && plane_enabled(old_state)) {
-		/* we cannot change SMP block configuration during scanout: */
-		bool full_modeset = false;
-		if (state->fb->pixel_format != old_state->fb->pixel_format) {
-			DBG("%s: pixel_format change!", mdp5_plane->name);
-			full_modeset = true;
-		}
-		if (state->src_w != old_state->src_w) {
-			DBG("%s: src_w change!", mdp5_plane->name);
-			full_modeset = true;
-		}
-		if (to_mdp5_plane_state(old_state)->pending) {
-			DBG("%s: still pending!", mdp5_plane->name);
-			full_modeset = true;
-		}
-		if (full_modeset) {
-			struct drm_crtc_state *crtc_state =
-					drm_atomic_get_crtc_state(state->state, state->crtc);
-			crtc_state->mode_changed = true;
-			to_mdp5_plane_state(state)->mode_changed = true;
-		}
-	} else {
-		to_mdp5_plane_state(state)->mode_changed = true;
+
+		if (rotation & DRM_REFLECT_X)
+			caps |= MDP_PIPE_CAP_HFLIP;
+
+		if (rotation & DRM_REFLECT_Y)
+			caps |= MDP_PIPE_CAP_VFLIP;
+
+		/* (re)allocate hw pipe if we don't have one or caps-mismatch: */
+		if (!mdp5_state->hwpipe || (caps & ~mdp5_state->hwpipe->caps))
+			new_hwpipe = true;
+
+		if (mdp5_kms->smp) {
+			const struct mdp_format *format =
+				to_mdp_format(msm_framebuffer_format(state->fb));
+
+			blkcfg = mdp5_smp_calculate(mdp5_kms->smp, format,
+					state->src_w >> 16, false);
+
+			if (mdp5_state->hwpipe && (mdp5_state->hwpipe->blkcfg != blkcfg))
+				new_hwpipe = true;
+		}
+
+		/* (re)assign hwpipe if needed, otherwise keep old one: */
+		if (new_hwpipe) {
+			/* TODO maybe we want to re-assign hwpipe sometimes
+			 * in cases when we no-longer need some caps to make
+			 * it available for other planes?
+			 */
+			struct mdp5_hw_pipe *old_hwpipe = mdp5_state->hwpipe;
+			mdp5_state->hwpipe = mdp5_pipe_assign(state->state,
+					plane, caps, blkcfg);
+			if (IS_ERR(mdp5_state->hwpipe)) {
+				DBG("%s: failed to assign hwpipe!", plane->name);
+				return PTR_ERR(mdp5_state->hwpipe);
+			}
+			mdp5_pipe_release(state->state, old_hwpipe);
+		}
 	}
 
 	return 0;
@@ -368,16 +369,16 @@ static int mdp5_plane_atomic_check(struct drm_plane *plane,
 static void mdp5_plane_atomic_update(struct drm_plane *plane,
 				     struct drm_plane_state *old_state)
 {
-	struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
 	struct drm_plane_state *state = plane->state;
+	struct mdp5_plane_state *mdp5_state = to_mdp5_plane_state(state);
+
+	DBG("%s: update", plane->name);
 
-	DBG("%s: update", mdp5_plane->name);
+	mdp5_state->pending = true;
 
-	if (!plane_enabled(state)) {
-		to_mdp5_plane_state(state)->pending = true;
-	} else if (to_mdp5_plane_state(state)->mode_changed) {
+	if (plane_enabled(state)) {
 		int ret;
-		to_mdp5_plane_state(state)->pending = true;
+
 		ret = mdp5_plane_mode_set(plane,
 				state->crtc, state->fb,
 				state->crtc_x, state->crtc_y,
@@ -386,11 +387,6 @@ static void mdp5_plane_atomic_update(struct drm_plane *plane,
 				state->src_w, state->src_h);
 		/* atomic_check should have ensured that this doesn't fail */
 		WARN_ON(ret < 0);
-	} else {
-		unsigned long flags;
-		spin_lock_irqsave(&mdp5_plane->pipe_lock, flags);
-		set_scanout_locked(plane, state->fb);
-		spin_unlock_irqrestore(&mdp5_plane->pipe_lock, flags);
 	}
 }
 
@@ -404,9 +400,9 @@ static const struct drm_plane_helper_funcs mdp5_plane_helper_funcs = {
 static void set_scanout_locked(struct drm_plane *plane,
 		struct drm_framebuffer *fb)
 {
-	struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
 	struct mdp5_kms *mdp5_kms = get_kms(plane);
-	enum mdp5_pipe pipe = mdp5_plane->pipe;
+	struct mdp5_hw_pipe *hwpipe = to_mdp5_plane_state(plane->state)->hwpipe;
+	enum mdp5_pipe pipe = hwpipe->pipe;
 
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_STRIDE_A(pipe),
 		MDP5_PIPE_SRC_STRIDE_A_P0(fb->pitches[0]) |
@@ -686,14 +682,14 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
 		uint32_t src_x, uint32_t src_y,
 		uint32_t src_w, uint32_t src_h)
 {
-	struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
 	struct drm_plane_state *pstate = plane->state;
+	struct mdp5_hw_pipe *hwpipe = to_mdp5_plane_state(pstate)->hwpipe;
 	struct mdp5_kms *mdp5_kms = get_kms(plane);
-	enum mdp5_pipe pipe = mdp5_plane->pipe;
+	enum mdp5_pipe pipe = hwpipe->pipe;
 	const struct mdp_format *format;
 	uint32_t nplanes, config = 0;
 	uint32_t phasex_step[COMP_MAX] = {0,}, phasey_step[COMP_MAX] = {0,};
-	bool pe = mdp5_plane->caps & MDP_PIPE_CAP_SW_PIX_EXT;
+	bool pe = hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT;
 	int pe_left[COMP_MAX], pe_right[COMP_MAX];
 	int pe_top[COMP_MAX], pe_bottom[COMP_MAX];
 	uint32_t hdecm = 0, vdecm = 0;
@@ -718,27 +714,10 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
 	src_w = src_w >> 16;
 	src_h = src_h >> 16;
 
-	DBG("%s: FB[%u] %u,%u,%u,%u -> CRTC[%u] %d,%d,%u,%u", mdp5_plane->name,
+	DBG("%s: FB[%u] %u,%u,%u,%u -> CRTC[%u] %d,%d,%u,%u", plane->name,
 			fb->base.id, src_x, src_y, src_w, src_h,
 			crtc->base.id, crtc_x, crtc_y, crtc_w, crtc_h);
 
-	/* Request some memory from the SMP: */
-	if (mdp5_kms->smp) {
-		ret = mdp5_smp_request(mdp5_kms->smp,
-				mdp5_plane->pipe, format, src_w, false);
-		if (ret)
-			return ret;
-	}
-
-	/*
-	 * Currently we update the hw for allocations/requests immediately,
-	 * but once atomic modeset/pageflip is in place, the allocation
-	 * would move into atomic->check_plane_state(), while updating the
-	 * hw would remain here:
-	 */
-	if (mdp5_kms->smp)
-		mdp5_smp_configure(mdp5_kms->smp, pipe);
-
 	ret = calc_scalex_steps(plane, pix_format, src_w, crtc_w, phasex_step);
 	if (ret)
 		return ret;
@@ -747,7 +726,7 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
747 if (ret) 726 if (ret)
748 return ret; 727 return ret;
749 728
750 if (mdp5_plane->caps & MDP_PIPE_CAP_SW_PIX_EXT) { 729 if (hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT) {
751 calc_pixel_ext(format, src_w, crtc_w, phasex_step, 730 calc_pixel_ext(format, src_w, crtc_w, phasex_step,
752 pe_left, pe_right, true); 731 pe_left, pe_right, true);
753 calc_pixel_ext(format, src_h, crtc_h, phasey_step, 732 calc_pixel_ext(format, src_h, crtc_h, phasey_step,
@@ -768,11 +747,11 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
768 hflip = !!(rotation & DRM_REFLECT_X); 747 hflip = !!(rotation & DRM_REFLECT_X);
769 vflip = !!(rotation & DRM_REFLECT_Y); 748 vflip = !!(rotation & DRM_REFLECT_Y);
770 749
771 spin_lock_irqsave(&mdp5_plane->pipe_lock, flags); 750 spin_lock_irqsave(&hwpipe->pipe_lock, flags);
772 751
773 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_IMG_SIZE(pipe), 752 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_IMG_SIZE(pipe),
774 MDP5_PIPE_SRC_IMG_SIZE_WIDTH(fb->width) | 753 MDP5_PIPE_SRC_IMG_SIZE_WIDTH(min(fb->width, src_w)) |
775 MDP5_PIPE_SRC_IMG_SIZE_HEIGHT(fb->height)); 754 MDP5_PIPE_SRC_IMG_SIZE_HEIGHT(min(fb->height, src_h)));
776 755
777 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_SIZE(pipe), 756 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_SIZE(pipe),
778 MDP5_PIPE_SRC_SIZE_WIDTH(src_w) | 757 MDP5_PIPE_SRC_SIZE_WIDTH(src_w) |
@@ -817,12 +796,12 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
817 /* not using secure mode: */ 796 /* not using secure mode: */
818 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_ADDR_SW_STATUS(pipe), 0); 797 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_ADDR_SW_STATUS(pipe), 0);
819 798
820 if (mdp5_plane->caps & MDP_PIPE_CAP_SW_PIX_EXT) 799 if (hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT)
821 mdp5_write_pixel_ext(mdp5_kms, pipe, format, 800 mdp5_write_pixel_ext(mdp5_kms, pipe, format,
822 src_w, pe_left, pe_right, 801 src_w, pe_left, pe_right,
823 src_h, pe_top, pe_bottom); 802 src_h, pe_top, pe_bottom);
824 803
825 if (mdp5_plane->caps & MDP_PIPE_CAP_SCALE) { 804 if (hwpipe->caps & MDP_PIPE_CAP_SCALE) {
826 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_X(pipe), 805 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_X(pipe),
827 phasex_step[COMP_0]); 806 phasex_step[COMP_0]);
828 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_Y(pipe), 807 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_Y(pipe),
@@ -837,7 +816,7 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
837 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CONFIG(pipe), config); 816 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CONFIG(pipe), config);
838 } 817 }
839 818
840 if (mdp5_plane->caps & MDP_PIPE_CAP_CSC) { 819 if (hwpipe->caps & MDP_PIPE_CAP_CSC) {
841 if (MDP_FORMAT_IS_YUV(format)) 820 if (MDP_FORMAT_IS_YUV(format))
842 csc_enable(mdp5_kms, pipe, 821 csc_enable(mdp5_kms, pipe,
843 mdp_get_default_csc_cfg(CSC_YUV2RGB)); 822 mdp_get_default_csc_cfg(CSC_YUV2RGB));
@@ -847,56 +826,42 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
847 826
848 set_scanout_locked(plane, fb); 827 set_scanout_locked(plane, fb);
849 828
850 spin_unlock_irqrestore(&mdp5_plane->pipe_lock, flags); 829 spin_unlock_irqrestore(&hwpipe->pipe_lock, flags);
851 830
852 return ret; 831 return ret;
853} 832}
854 833
855void mdp5_plane_complete_flip(struct drm_plane *plane) 834enum mdp5_pipe mdp5_plane_pipe(struct drm_plane *plane)
856{ 835{
857 struct mdp5_kms *mdp5_kms = get_kms(plane); 836 struct mdp5_plane_state *pstate = to_mdp5_plane_state(plane->state);
858 struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
859 enum mdp5_pipe pipe = mdp5_plane->pipe;
860
861 DBG("%s: complete flip", mdp5_plane->name);
862 837
863 if (mdp5_kms->smp) 838 if (WARN_ON(!pstate->hwpipe))
864 mdp5_smp_commit(mdp5_kms->smp, pipe); 839 return 0;
865 840
866 to_mdp5_plane_state(plane->state)->pending = false; 841 return pstate->hwpipe->pipe;
867}
868
869enum mdp5_pipe mdp5_plane_pipe(struct drm_plane *plane)
870{
871 struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
872 return mdp5_plane->pipe;
873} 842}
874 843
875uint32_t mdp5_plane_get_flush(struct drm_plane *plane) 844uint32_t mdp5_plane_get_flush(struct drm_plane *plane)
876{ 845{
877 struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane); 846 struct mdp5_plane_state *pstate = to_mdp5_plane_state(plane->state);
878 847
879 return mdp5_plane->flush_mask; 848 if (WARN_ON(!pstate->hwpipe))
849 return 0;
850
851 return pstate->hwpipe->flush_mask;
880} 852}
881 853
882/* called after vsync in thread context */ 854/* called after vsync in thread context */
883void mdp5_plane_complete_commit(struct drm_plane *plane, 855void mdp5_plane_complete_commit(struct drm_plane *plane,
884 struct drm_plane_state *state) 856 struct drm_plane_state *state)
885{ 857{
886 struct mdp5_kms *mdp5_kms = get_kms(plane); 858 struct mdp5_plane_state *pstate = to_mdp5_plane_state(plane->state);
887 struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
888 enum mdp5_pipe pipe = mdp5_plane->pipe;
889 859
890 if (!plane_enabled(plane->state) && mdp5_kms->smp) { 860 pstate->pending = false;
891 DBG("%s: free SMP", mdp5_plane->name);
892 mdp5_smp_release(mdp5_kms->smp, pipe);
893 }
894} 861}
895 862
896/* initialize plane */ 863/* initialize plane */
897struct drm_plane *mdp5_plane_init(struct drm_device *dev, 864struct drm_plane *mdp5_plane_init(struct drm_device *dev, bool primary)
898 enum mdp5_pipe pipe, bool private_plane, uint32_t reg_offset,
899 uint32_t caps)
900{ 865{
901 struct drm_plane *plane = NULL; 866 struct drm_plane *plane = NULL;
902 struct mdp5_plane *mdp5_plane; 867 struct mdp5_plane *mdp5_plane;
@@ -911,22 +876,13 @@ struct drm_plane *mdp5_plane_init(struct drm_device *dev,
911 876
912 plane = &mdp5_plane->base; 877 plane = &mdp5_plane->base;
913 878
914 mdp5_plane->pipe = pipe;
915 mdp5_plane->name = pipe2name(pipe);
916 mdp5_plane->caps = caps;
917
918 mdp5_plane->nformats = mdp_get_formats(mdp5_plane->formats, 879 mdp5_plane->nformats = mdp_get_formats(mdp5_plane->formats,
919 ARRAY_SIZE(mdp5_plane->formats), 880 ARRAY_SIZE(mdp5_plane->formats), false);
920 !pipe_supports_yuv(mdp5_plane->caps));
921
922 mdp5_plane->flush_mask = mdp_ctl_flush_mask_pipe(pipe);
923 mdp5_plane->reg_offset = reg_offset;
924 spin_lock_init(&mdp5_plane->pipe_lock);
925 881
926 type = private_plane ? DRM_PLANE_TYPE_PRIMARY : DRM_PLANE_TYPE_OVERLAY; 882 type = primary ? DRM_PLANE_TYPE_PRIMARY : DRM_PLANE_TYPE_OVERLAY;
927 ret = drm_universal_plane_init(dev, plane, 0xff, &mdp5_plane_funcs, 883 ret = drm_universal_plane_init(dev, plane, 0xff, &mdp5_plane_funcs,
928 mdp5_plane->formats, mdp5_plane->nformats, 884 mdp5_plane->formats, mdp5_plane->nformats,
929 type, "%s", mdp5_plane->name); 885 type, NULL);
930 if (ret) 886 if (ret)
931 goto fail; 887 goto fail;
932 888
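The plane changes above replace a statically owned `mdp5_plane->pipe` with a hwpipe pointer stored in the subclassed plane state, retrieved via `to_mdp5_plane_state()`. A minimal sketch of that embed-the-base-and-downcast pattern, with illustrative struct layouts (not the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the subclassed-plane-state pattern: the driver state embeds
 * the base state as its first member, so a base pointer can be downcast
 * with container_of(). Field names here are stand-ins. */
struct drm_plane_state_sketch { int fb_id; };

struct hw_pipe_sketch { int pipe; unsigned caps; };

struct mdp5_plane_state_sketch {
	struct drm_plane_state_sketch base; /* embedded base state */
	struct hw_pipe_sketch *hwpipe;      /* assigned at atomic_check time */
};

#define container_of_sketch(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct mdp5_plane_state_sketch *
to_mdp5_plane_state_sketch(struct drm_plane_state_sketch *state)
{
	/* recover the containing driver state from the base pointer */
	return container_of_sketch(state, struct mdp5_plane_state_sketch, base);
}
```

Because the hw pipe now lives in duplicated atomic state rather than the plane object, a test-only or failed atomic commit simply discards the candidate assignment.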
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c
index 27d7b55b52c9..58f712d37e7f 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c
@@ -21,72 +21,6 @@
 #include "mdp5_smp.h"
 
 
-/* SMP - Shared Memory Pool
- *
- * These are shared between all the clients, where each plane in a
- * scanout buffer is a SMP client. Ie. scanout of 3 plane I420 on
- * pipe VIG0 => 3 clients: VIG0_Y, VIG0_CB, VIG0_CR.
- *
- * Based on the size of the attached scanout buffer, a certain # of
- * blocks must be allocated to that client out of the shared pool.
- *
- * In some hw, some blocks are statically allocated for certain pipes
- * and CANNOT be re-allocated (eg: MMB0 and MMB1 both tied to RGB0).
- *
- * For each block that can be dynamically allocated, it can be either
- * free:
- *     The block is free.
- *
- * pending:
- *     The block is allocated to some client and not free.
- *
- * configured:
- *     The block is allocated to some client, and assigned to that
- *     client in MDP5_SMP_ALLOC registers.
- *
- * inuse:
- *     The block is being actively used by a client.
- *
- * The updates happen in the following steps:
- *
- *  1) mdp5_smp_request():
- *     When plane scanout is setup, calculate required number of
- *     blocks needed per client, and request. Blocks neither inuse nor
- *     configured nor pending by any other client are added to client's
- *     pending set.
- *     For shrinking, blocks in pending but not in configured can be freed
- *     directly, but those already in configured will be freed later by
- *     mdp5_smp_commit.
- *
- *  2) mdp5_smp_configure():
- *     As hw is programmed, before FLUSH, MDP5_SMP_ALLOC registers
- *     are configured for the union(pending, inuse)
- *     Current pending is copied to configured.
- *     It is assumed that mdp5_smp_request and mdp5_smp_configure not run
- *     concurrently for the same pipe.
- *
- *  3) mdp5_smp_commit():
- *     After next vblank, copy configured -> inuse. Optionally update
- *     MDP5_SMP_ALLOC registers if there are newly unused blocks
- *
- *  4) mdp5_smp_release():
- *     Must be called after the pipe is disabled and no longer uses any SMB
- *
- * On the next vblank after changes have been committed to hw, the
- * client's pending blocks become it's in-use blocks (and no-longer
- * in-use blocks become available to other clients).
- *
- * btw, hurray for confusing overloaded acronyms! :-/
- *
- * NOTE: for atomic modeset/pageflip NONBLOCK operations, step #1
- * should happen at (or before)? atomic->check(). And we'd need
- * an API to discard previous requests if update is aborted or
- * (test-only).
- *
- * TODO would perhaps be nice to have debugfs to dump out kernel
- * inuse and pending state of all clients..
- */
-
 struct mdp5_smp {
 	struct drm_device *dev;
 
@@ -94,16 +28,8 @@ struct mdp5_smp {
 
 	int blk_cnt;
 	int blk_size;
-
-	spinlock_t state_lock;
-	mdp5_smp_state_t state; /* to track smp allocation amongst pipes: */
-
-	struct mdp5_client_smp_state client_state[MAX_CLIENTS];
 };
 
-static void update_smp_state(struct mdp5_smp *smp,
-		u32 cid, mdp5_smp_state_t *assigned);
-
 static inline
 struct mdp5_kms *get_kms(struct mdp5_smp *smp)
 {
@@ -134,57 +60,38 @@ static inline u32 pipe2client(enum mdp5_pipe pipe, int plane)
 	return mdp5_cfg->smp.clients[pipe] + plane;
 }
 
-/* step #1: update # of blocks pending for the client: */
+/* allocate blocks for the specified request: */
 static int smp_request_block(struct mdp5_smp *smp,
+		struct mdp5_smp_state *state,
 		u32 cid, int nblks)
 {
-	struct mdp5_kms *mdp5_kms = get_kms(smp);
-	struct mdp5_client_smp_state *ps = &smp->client_state[cid];
-	int i, ret, avail, cur_nblks, cnt = smp->blk_cnt;
+	void *cs = state->client_state[cid];
+	int i, avail, cnt = smp->blk_cnt;
 	uint8_t reserved;
-	unsigned long flags;
 
-	reserved = smp->reserved[cid];
+	/* we shouldn't be requesting blocks for an in-use client: */
+	WARN_ON(bitmap_weight(cs, cnt) > 0);
 
-	spin_lock_irqsave(&smp->state_lock, flags);
+	reserved = smp->reserved[cid];
 
 	if (reserved) {
 		nblks = max(0, nblks - reserved);
 		DBG("%d MMBs allocated (%d reserved)", nblks, reserved);
 	}
 
-	avail = cnt - bitmap_weight(smp->state, cnt);
+	avail = cnt - bitmap_weight(state->state, cnt);
 	if (nblks > avail) {
-		dev_err(mdp5_kms->dev->dev, "out of blks (req=%d > avail=%d)\n",
+		dev_err(smp->dev->dev, "out of blks (req=%d > avail=%d)\n",
 				nblks, avail);
-		ret = -ENOSPC;
-		goto fail;
+		return -ENOSPC;
 	}
 
-	cur_nblks = bitmap_weight(ps->pending, cnt);
-	if (nblks > cur_nblks) {
-		/* grow the existing pending reservation: */
-		for (i = cur_nblks; i < nblks; i++) {
-			int blk = find_first_zero_bit(smp->state, cnt);
-			set_bit(blk, ps->pending);
-			set_bit(blk, smp->state);
-		}
-	} else {
-		/* shrink the existing pending reservation: */
-		for (i = cur_nblks; i > nblks; i--) {
-			int blk = find_first_bit(ps->pending, cnt);
-			clear_bit(blk, ps->pending);
-
-			/* clear in global smp_state if not in configured
-			 * otherwise until _commit()
-			 */
-			if (!test_bit(blk, ps->configured))
-				clear_bit(blk, smp->state);
-		}
+	for (i = 0; i < nblks; i++) {
+		int blk = find_first_zero_bit(state->state, cnt);
+		set_bit(blk, cs);
+		set_bit(blk, state->state);
 	}
 
-fail:
-	spin_unlock_irqrestore(&smp->state_lock, flags);
 	return 0;
 }
 
@@ -209,14 +116,15 @@ static void set_fifo_thresholds(struct mdp5_smp *smp,
 * decimated width. Ie. SMP buffering sits downstream of decimation (which
 * presumably happens during the dma from scanout buffer).
 */
-int mdp5_smp_request(struct mdp5_smp *smp, enum mdp5_pipe pipe,
-		const struct mdp_format *format, u32 width, bool hdecim)
+uint32_t mdp5_smp_calculate(struct mdp5_smp *smp,
+		const struct mdp_format *format,
+		u32 width, bool hdecim)
 {
 	struct mdp5_kms *mdp5_kms = get_kms(smp);
-	struct drm_device *dev = mdp5_kms->dev;
 	int rev = mdp5_cfg_get_hw_rev(mdp5_kms->cfg);
-	int i, hsub, nplanes, nlines, nblks, ret;
+	int i, hsub, nplanes, nlines;
 	u32 fmt = format->base.pixel_format;
+	uint32_t blkcfg = 0;
 
 	nplanes = drm_format_num_planes(fmt);
 	hsub = drm_format_horz_chroma_subsampling(fmt);
@@ -239,7 +147,7 @@ int mdp5_smp_request(struct mdp5_smp *smp, enum mdp5_pipe pipe,
 		hsub = 1;
 	}
 
-	for (i = 0, nblks = 0; i < nplanes; i++) {
+	for (i = 0; i < nplanes; i++) {
 		int n, fetch_stride, cpp;
 
 		cpp = drm_format_plane_cpp(fmt, i);
@@ -251,60 +159,72 @@ int mdp5_smp_request(struct mdp5_smp *smp, enum mdp5_pipe pipe,
 		if (rev == 0)
 			n = roundup_pow_of_two(n);
 
+		blkcfg |= (n << (8 * i));
+	}
+
+	return blkcfg;
+}
+
+int mdp5_smp_assign(struct mdp5_smp *smp, struct mdp5_smp_state *state,
+		enum mdp5_pipe pipe, uint32_t blkcfg)
+{
+	struct mdp5_kms *mdp5_kms = get_kms(smp);
+	struct drm_device *dev = mdp5_kms->dev;
+	int i, ret;
+
+	for (i = 0; i < pipe2nclients(pipe); i++) {
+		u32 cid = pipe2client(pipe, i);
+		int n = blkcfg & 0xff;
+
+		if (!n)
+			continue;
+
 		DBG("%s[%d]: request %d SMP blocks", pipe2name(pipe), i, n);
-		ret = smp_request_block(smp, pipe2client(pipe, i), n);
+		ret = smp_request_block(smp, state, cid, n);
 		if (ret) {
 			dev_err(dev->dev, "Cannot allocate %d SMP blocks: %d\n",
 					n, ret);
 			return ret;
 		}
 
-		nblks += n;
+		blkcfg >>= 8;
 	}
 
-	set_fifo_thresholds(smp, pipe, nblks);
+	state->assigned |= (1 << pipe);
 
 	return 0;
 }
 
 /* Release SMP blocks for all clients of the pipe */
-void mdp5_smp_release(struct mdp5_smp *smp, enum mdp5_pipe pipe)
+void mdp5_smp_release(struct mdp5_smp *smp, struct mdp5_smp_state *state,
+		enum mdp5_pipe pipe)
 {
 	int i;
-	unsigned long flags;
 	int cnt = smp->blk_cnt;
 
 	for (i = 0; i < pipe2nclients(pipe); i++) {
-		mdp5_smp_state_t assigned;
 		u32 cid = pipe2client(pipe, i);
-		struct mdp5_client_smp_state *ps = &smp->client_state[cid];
-
-		spin_lock_irqsave(&smp->state_lock, flags);
-
-		/* clear hw assignment */
-		bitmap_or(assigned, ps->inuse, ps->configured, cnt);
-		update_smp_state(smp, CID_UNUSED, &assigned);
-
-		/* free to global pool */
-		bitmap_andnot(smp->state, smp->state, ps->pending, cnt);
-		bitmap_andnot(smp->state, smp->state, assigned, cnt);
+		void *cs = state->client_state[cid];
 
-		/* clear client's infor */
-		bitmap_zero(ps->pending, cnt);
-		bitmap_zero(ps->configured, cnt);
-		bitmap_zero(ps->inuse, cnt);
+		/* update global state: */
+		bitmap_andnot(state->state, state->state, cs, cnt);
 
-		spin_unlock_irqrestore(&smp->state_lock, flags);
+		/* clear client's state */
+		bitmap_zero(cs, cnt);
 	}
 
-	set_fifo_thresholds(smp, pipe, 0);
+	state->released |= (1 << pipe);
 }
 
-static void update_smp_state(struct mdp5_smp *smp,
+/* NOTE: SMP_ALLOC_* regs are *not* double buffered, so release has to
+ * happen after scanout completes.
+ */
+static unsigned update_smp_state(struct mdp5_smp *smp,
 		u32 cid, mdp5_smp_state_t *assigned)
 {
 	struct mdp5_kms *mdp5_kms = get_kms(smp);
 	int cnt = smp->blk_cnt;
+	unsigned nblks = 0;
 	u32 blk, val;
 
 	for_each_set_bit(blk, *assigned, cnt) {
@@ -330,62 +250,88 @@ static void update_smp_state(struct mdp5_smp *smp,
 
 		mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(idx), val);
 		mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_R_REG(idx), val);
+
+		nblks++;
 	}
+
+	return nblks;
 }
 
-/* step #2: configure hw for union(pending, inuse): */
-void mdp5_smp_configure(struct mdp5_smp *smp, enum mdp5_pipe pipe)
+void mdp5_smp_prepare_commit(struct mdp5_smp *smp, struct mdp5_smp_state *state)
 {
-	int cnt = smp->blk_cnt;
-	mdp5_smp_state_t assigned;
-	int i;
+	enum mdp5_pipe pipe;
 
-	for (i = 0; i < pipe2nclients(pipe); i++) {
-		u32 cid = pipe2client(pipe, i);
-		struct mdp5_client_smp_state *ps = &smp->client_state[cid];
+	for_each_set_bit(pipe, &state->assigned, sizeof(state->assigned) * 8) {
+		unsigned i, nblks = 0;
 
-		/*
-		 * if vblank has not happened since last smp_configure
-		 * skip the configure for now
-		 */
-		if (!bitmap_equal(ps->inuse, ps->configured, cnt))
-			continue;
+		for (i = 0; i < pipe2nclients(pipe); i++) {
+			u32 cid = pipe2client(pipe, i);
+			void *cs = state->client_state[cid];
 
-		bitmap_copy(ps->configured, ps->pending, cnt);
-		bitmap_or(assigned, ps->inuse, ps->configured, cnt);
-		update_smp_state(smp, cid, &assigned);
+			nblks += update_smp_state(smp, cid, cs);
+
+			DBG("assign %s:%u, %u blks",
+					pipe2name(pipe), i, nblks);
+		}
+
+		set_fifo_thresholds(smp, pipe, nblks);
 	}
+
+	state->assigned = 0;
 }
 
-/* step #3: after vblank, copy configured -> inuse: */
-void mdp5_smp_commit(struct mdp5_smp *smp, enum mdp5_pipe pipe)
+void mdp5_smp_complete_commit(struct mdp5_smp *smp, struct mdp5_smp_state *state)
 {
-	int cnt = smp->blk_cnt;
-	mdp5_smp_state_t released;
-	int i;
-
-	for (i = 0; i < pipe2nclients(pipe); i++) {
-		u32 cid = pipe2client(pipe, i);
-		struct mdp5_client_smp_state *ps = &smp->client_state[cid];
+	enum mdp5_pipe pipe;
 
-		/*
-		 * Figure out if there are any blocks we where previously
-		 * using, which can be released and made available to other
-		 * clients:
-		 */
-		if (bitmap_andnot(released, ps->inuse, ps->configured, cnt)) {
-			unsigned long flags;
+	for_each_set_bit(pipe, &state->released, sizeof(state->released) * 8) {
+		DBG("release %s", pipe2name(pipe));
+		set_fifo_thresholds(smp, pipe, 0);
+	}
 
-			spin_lock_irqsave(&smp->state_lock, flags);
-			/* clear released blocks: */
-			bitmap_andnot(smp->state, smp->state, released, cnt);
-			spin_unlock_irqrestore(&smp->state_lock, flags);
+	state->released = 0;
+}
 
-			update_smp_state(smp, CID_UNUSED, &released);
+void mdp5_smp_dump(struct mdp5_smp *smp, struct drm_printer *p)
+{
+	struct mdp5_kms *mdp5_kms = get_kms(smp);
+	struct mdp5_hw_pipe_state *hwpstate;
+	struct mdp5_smp_state *state;
+	int total = 0, i, j;
+
+	drm_printf(p, "name\tinuse\tplane\n");
+	drm_printf(p, "----\t-----\t-----\n");
+
+	if (drm_can_sleep())
+		drm_modeset_lock(&mdp5_kms->state_lock, NULL);
+
+	/* grab these *after* we hold the state_lock */
+	hwpstate = &mdp5_kms->state->hwpipe;
+	state = &mdp5_kms->state->smp;
+
+	for (i = 0; i < mdp5_kms->num_hwpipes; i++) {
+		struct mdp5_hw_pipe *hwpipe = mdp5_kms->hwpipes[i];
+		struct drm_plane *plane = hwpstate->hwpipe_to_plane[hwpipe->idx];
+		enum mdp5_pipe pipe = hwpipe->pipe;
+		for (j = 0; j < pipe2nclients(pipe); j++) {
+			u32 cid = pipe2client(pipe, j);
+			void *cs = state->client_state[cid];
+			int inuse = bitmap_weight(cs, smp->blk_cnt);
+
+			drm_printf(p, "%s:%d\t%d\t%s\n",
+				pipe2name(pipe), j, inuse,
+				plane ? plane->name : NULL);
+
+			total += inuse;
 		}
-
-		bitmap_copy(ps->inuse, ps->configured, cnt);
 	}
+
+	drm_printf(p, "TOTAL:\t%d\t(of %d)\n", total, smp->blk_cnt);
+	drm_printf(p, "AVAIL:\t%d\n", smp->blk_cnt -
+			bitmap_weight(state->state, smp->blk_cnt));
+
+	if (drm_can_sleep())
+		drm_modeset_unlock(&mdp5_kms->state_lock);
 }
 
 void mdp5_smp_destroy(struct mdp5_smp *smp)
@@ -393,8 +339,9 @@ void mdp5_smp_destroy(struct mdp5_smp *smp)
 	kfree(smp);
 }
 
-struct mdp5_smp *mdp5_smp_init(struct drm_device *dev, const struct mdp5_smp_block *cfg)
+struct mdp5_smp *mdp5_smp_init(struct mdp5_kms *mdp5_kms, const struct mdp5_smp_block *cfg)
 {
+	struct mdp5_smp_state *state = &mdp5_kms->state->smp;
 	struct mdp5_smp *smp = NULL;
 	int ret;
 
@@ -404,14 +351,13 @@ struct mdp5_smp *mdp5_smp_init(struct drm_device *dev, const struct mdp5_smp_blo
 		goto fail;
 	}
 
-	smp->dev = dev;
+	smp->dev = mdp5_kms->dev;
 	smp->blk_cnt = cfg->mmb_count;
 	smp->blk_size = cfg->mmb_size;
 
 	/* statically tied MMBs cannot be re-allocated: */
-	bitmap_copy(smp->state, cfg->reserved_state, smp->blk_cnt);
+	bitmap_copy(state->state, cfg->reserved_state, smp->blk_cnt);
 	memcpy(smp->reserved, cfg->reserved, sizeof(smp->reserved));
-	spin_lock_init(&smp->state_lock);
 
 	return smp;
 fail:
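The new `smp_request_block()` reduces allocation to "claim the first free bits in the shared pool and mark them in both the global and per-client masks". A toy model of that loop, using a single machine word instead of the kernel's `mdp5_smp_state_t` bitmaps (block count and reserved bits are made up for the sketch):

```c
#include <assert.h>

/* Toy model of the smp_request_block() allocation loop: grab the first
 * free blocks from a shared pool and record them in both the global
 * and per-client masks. Error handling is reduced to a capacity check
 * standing in for -ENOSPC. */
enum { BLK_CNT = 22 };

static int find_first_zero(unsigned long map, int cnt)
{
	int i;

	for (i = 0; i < cnt; i++)
		if (!(map & (1ul << i)))
			return i;
	return -1;
}

static int request_blocks(unsigned long *global, unsigned long *client,
			  int nblks)
{
	int i, avail = 0;

	/* count free blocks in the shared pool */
	for (i = 0; i < BLK_CNT; i++)
		if (!(*global & (1ul << i)))
			avail++;

	if (nblks > avail)
		return -1; /* stand-in for -ENOSPC */

	for (i = 0; i < nblks; i++) {
		int blk = find_first_zero(*global, BLK_CNT);

		*client |= 1ul << blk; /* client's view */
		*global |= 1ul << blk; /* shared pool view */
	}
	return 0;
}
```

Because the masks now live in a duplicated `mdp5_smp_state`, a failed or test-only atomic check can just drop the copy, which is why the old grow/shrink and spinlock logic disappears.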
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.h
index 20b87e800ea3..b41d0448fbe8 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.h
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.h
@@ -19,12 +19,53 @@
 #ifndef __MDP5_SMP_H__
 #define __MDP5_SMP_H__
 
+#include <drm/drm_print.h>
+
 #include "msm_drv.h"
 
-struct mdp5_client_smp_state {
-	mdp5_smp_state_t inuse;
-	mdp5_smp_state_t configured;
-	mdp5_smp_state_t pending;
+/*
+ * SMP - Shared Memory Pool:
+ *
+ * SMP blocks are shared between all the clients, where each plane in
+ * a scanout buffer is a SMP client. Ie. scanout of 3 plane I420 on
+ * pipe VIG0 => 3 clients: VIG0_Y, VIG0_CB, VIG0_CR.
+ *
+ * Based on the size of the attached scanout buffer, a certain # of
+ * blocks must be allocated to that client out of the shared pool.
+ *
+ * In some hw, some blocks are statically allocated for certain pipes
+ * and CANNOT be re-allocated (eg: MMB0 and MMB1 both tied to RGB0).
+ *
+ *
+ * Atomic SMP State:
+ *
+ * On atomic updates that modify SMP configuration, the state is cloned
+ * (copied) and modified. For test-only, or in cases where atomic
+ * update fails (or if we hit ww_mutex deadlock/backoff condition) the
+ * new state is simply thrown away.
+ *
+ * Because the SMP registers are not double buffered, updates are a
+ * two step process:
+ *
+ * 1) in _prepare_commit() we configure things (via read-modify-write)
+ *    for the newly assigned pipes, so we don't take away blocks
+ *    assigned to pipes that are still scanning out
+ * 2) in _complete_commit(), after vblank/etc, we clear things for the
+ *    released clients, since at that point old pipes are no longer
+ *    scanning out.
+ */
+struct mdp5_smp_state {
+	/* global state of what blocks are in use: */
+	mdp5_smp_state_t state;
+
+	/* per client state of what blocks they are using: */
+	mdp5_smp_state_t client_state[MAX_CLIENTS];
+
+	/* assigned pipes (hw updated at _prepare_commit()): */
+	unsigned long assigned;
+
+	/* released pipes (hw updated at _complete_commit()): */
+	unsigned long released;
 };
 
 struct mdp5_kms;
@@ -36,13 +77,22 @@ struct mdp5_smp;
 * which is then used to call the other mdp5_smp_*(handler, ...) functions.
 */
 
-struct mdp5_smp *mdp5_smp_init(struct drm_device *dev, const struct mdp5_smp_block *cfg);
+struct mdp5_smp *mdp5_smp_init(struct mdp5_kms *mdp5_kms,
+		const struct mdp5_smp_block *cfg);
 void mdp5_smp_destroy(struct mdp5_smp *smp);
 
-int mdp5_smp_request(struct mdp5_smp *smp, enum mdp5_pipe pipe,
-		const struct mdp_format *format, u32 width, bool hdecim);
-void mdp5_smp_configure(struct mdp5_smp *smp, enum mdp5_pipe pipe);
-void mdp5_smp_commit(struct mdp5_smp *smp, enum mdp5_pipe pipe);
-void mdp5_smp_release(struct mdp5_smp *smp, enum mdp5_pipe pipe);
+void mdp5_smp_dump(struct mdp5_smp *smp, struct drm_printer *p);
+
+uint32_t mdp5_smp_calculate(struct mdp5_smp *smp,
+		const struct mdp_format *format,
+		u32 width, bool hdecim);
+
+int mdp5_smp_assign(struct mdp5_smp *smp, struct mdp5_smp_state *state,
+		enum mdp5_pipe pipe, uint32_t blkcfg);
+void mdp5_smp_release(struct mdp5_smp *smp, struct mdp5_smp_state *state,
+		enum mdp5_pipe pipe);
+
+void mdp5_smp_prepare_commit(struct mdp5_smp *smp, struct mdp5_smp_state *state);
+void mdp5_smp_complete_commit(struct mdp5_smp *smp, struct mdp5_smp_state *state);
 
 #endif /* __MDP5_SMP_H__ */
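The new header comment describes the two-step flow forced by the non-double-buffered SMP registers: program newly assigned pipes in `_prepare_commit()`, tear down released pipes in `_complete_commit()` after vblank. A rough model of that sequencing, with register writes replaced by an array and a made-up pipe count:

```c
#include <assert.h>

/* Rough model of the two-step commit described above. fifo[] stands in
 * for the fifo-threshold registers; NPIPES and the per-pipe block
 * counts are invented for the sketch. */
enum { NPIPES = 8 };

struct smp_state_sketch {
	unsigned long assigned;  /* pipes to program at prepare_commit */
	unsigned long released;  /* pipes to tear down at complete_commit */
	int nblks[NPIPES];       /* blocks wanted per pipe */
};

static int fifo[NPIPES];

static void prepare_commit(struct smp_state_sketch *state)
{
	int pipe;

	/* step 1: program only the newly assigned pipes, leaving pipes
	 * that are still scanning out untouched */
	for (pipe = 0; pipe < NPIPES; pipe++)
		if (state->assigned & (1ul << pipe))
			fifo[pipe] = state->nblks[pipe];
	state->assigned = 0;
}

static void complete_commit(struct smp_state_sketch *state)
{
	int pipe;

	/* step 2: after vblank the old pipes no longer scan out, so the
	 * released ones can safely be cleared */
	for (pipe = 0; pipe < NPIPES; pipe++)
		if (state->released & (1ul << pipe))
			fifo[pipe] = 0;
	state->released = 0;
}
```

The same split shows up in the .c hunks above as `mdp5_smp_prepare_commit()` and `mdp5_smp_complete_commit()` iterating the `assigned` and `released` pipe masks.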
diff --git a/drivers/gpu/drm/msm/mdp/mdp_common.xml.h b/drivers/gpu/drm/msm/mdp/mdp_common.xml.h
index 452e3518f98b..8994c365e218 100644
--- a/drivers/gpu/drm/msm/mdp/mdp_common.xml.h
+++ b/drivers/gpu/drm/msm/mdp/mdp_common.xml.h
@@ -12,7 +12,7 @@ The rules-ng-ng source files this header was generated from are:
 - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14)
 - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-11-26 23:01:08)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02)
 - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14)
diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
index 4e21e1d72378..30b5d23e53b4 100644
--- a/drivers/gpu/drm/msm/msm_atomic.c
+++ b/drivers/gpu/drm/msm/msm_atomic.c
@@ -241,6 +241,10 @@ int msm_atomic_commit(struct drm_device *dev,
 
 	drm_atomic_helper_swap_state(state, true);
 
+	/* swap driver private state while still holding state_lock */
+	if (to_kms_state(state)->state)
+		priv->kms->funcs->swap_state(priv->kms, state);
+
 	/*
 	 * Everything below can be run asynchronously without the need to grab
 	 * any modeset locks at all under one conditions: It must be guaranteed
@@ -271,3 +275,30 @@ error:
 	drm_atomic_helper_cleanup_planes(dev, state);
 	return ret;
 }
+
+struct drm_atomic_state *msm_atomic_state_alloc(struct drm_device *dev)
+{
+	struct msm_kms_state *state = kzalloc(sizeof(*state), GFP_KERNEL);
+
+	if (!state || drm_atomic_state_init(dev, &state->base) < 0) {
+		kfree(state);
+		return NULL;
+	}
+
+	return &state->base;
+}
+
+void msm_atomic_state_clear(struct drm_atomic_state *s)
+{
+	struct msm_kms_state *state = to_kms_state(s);
+	drm_atomic_state_default_clear(&state->base);
+	kfree(state->state);
+	state->state = NULL;
+}
+
+void msm_atomic_state_free(struct drm_atomic_state *state)
+{
+	kfree(to_kms_state(state)->state);
+	drm_atomic_state_default_release(state);
+	kfree(state);
+}
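The msm_atomic.c additions subclass `drm_atomic_state` so a driver-private state blob rides along with each atomic update, and must be freed on both clear and free. A simplified sketch of that alloc/clear/free lifecycle, with the DRM base type and its init/release stubbed out as stand-ins:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified model of the msm_atomic_state_alloc/clear/free lifecycle:
 * the driver wraps the base atomic state in its own struct carrying an
 * extra private-state pointer. The base type and init call are stubs,
 * not the real DRM definitions. */
struct atomic_state_base { int initialized; };

struct kms_state_sketch {
	struct atomic_state_base base;
	void *state; /* driver-private duplicated state */
};

static struct atomic_state_base *state_alloc(void)
{
	struct kms_state_sketch *state = calloc(1, sizeof(*state));

	if (!state)
		return NULL;
	state->base.initialized = 1; /* stand-in for drm_atomic_state_init() */
	return &state->base;
}

static void state_clear(struct atomic_state_base *s)
{
	struct kms_state_sketch *state = (struct kms_state_sketch *)s;

	/* drop the private copy whenever the state is cleared for reuse */
	free(state->state);
	state->state = NULL;
}

static void state_free(struct atomic_state_base *s)
{
	struct kms_state_sketch *state = (struct kms_state_sketch *)s;

	free(state->state); /* private copy may still be attached */
	free(state);
}
```

Freeing the private blob in both `clear` and `free` mirrors the patch: a cleared-for-reuse state and a destroyed state must both release the duplicated SMP/hwpipe data.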
diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
index 3c853733c99a..c1b40f5adb60 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -18,6 +18,7 @@
 #ifdef CONFIG_DEBUG_FS
 #include "msm_drv.h"
 #include "msm_gpu.h"
+#include "msm_kms.h"
 #include "msm_debugfs.h"
 
 static int msm_gpu_show(struct drm_device *dev, struct seq_file *m)
@@ -142,6 +143,7 @@ int msm_debugfs_late_init(struct drm_device *dev)
 int msm_debugfs_init(struct drm_minor *minor)
 {
 	struct drm_device *dev = minor->dev;
+	struct msm_drm_private *priv = dev->dev_private;
 	int ret;
 
 	ret = drm_debugfs_create_files(msm_debugfs_list,
@@ -153,15 +155,25 @@ int msm_debugfs_init(struct drm_minor *minor)
 		return ret;
 	}
 
-	return 0;
+	if (priv->kms->funcs->debugfs_init)
+		ret = priv->kms->funcs->debugfs_init(priv->kms, minor);
+
+	return ret;
 }
 
 void msm_debugfs_cleanup(struct drm_minor *minor)
 {
+	struct drm_device *dev = minor->dev;
+	struct msm_drm_private *priv = dev->dev_private;
+
 	drm_debugfs_remove_files(msm_debugfs_list,
 			ARRAY_SIZE(msm_debugfs_list), minor);
-	if (!minor->dev->dev_private)
+	if (!priv)
 		return;
+
+	if (priv->kms->funcs->debugfs_cleanup)
+		priv->kms->funcs->debugfs_cleanup(priv->kms, minor);
+
 	msm_rd_debugfs_cleanup(minor);
 	msm_perf_debugfs_cleanup(minor);
 }
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 440c00ff8409..e29bb66f55b1 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -46,17 +46,21 @@ static const struct drm_mode_config_funcs mode_config_funcs = {
 	.output_poll_changed = msm_fb_output_poll_changed,
 	.atomic_check = msm_atomic_check,
 	.atomic_commit = msm_atomic_commit,
+	.atomic_state_alloc = msm_atomic_state_alloc,
+	.atomic_state_clear = msm_atomic_state_clear,
+	.atomic_state_free = msm_atomic_state_free,
 };
 
-int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu)
+int msm_register_address_space(struct drm_device *dev,
+		struct msm_gem_address_space *aspace)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	int idx = priv->num_mmus++;
+	int idx = priv->num_aspaces++;
 
-	if (WARN_ON(idx >= ARRAY_SIZE(priv->mmus)))
+	if (WARN_ON(idx >= ARRAY_SIZE(priv->aspace)))
 		return -EINVAL;
 
-	priv->mmus[idx] = mmu;
+	priv->aspace[idx] = aspace;
 
 	return idx;
 }
@@ -907,10 +911,8 @@ static int add_components_mdp(struct device *mdp_dev,
 		 * remote-endpoint isn't a component that we need to add
 		 */
 		if (of_device_is_compatible(np, "qcom,mdp4") &&
-		    ep.port == 0) {
-			of_node_put(ep_node);
+		    ep.port == 0)
 			continue;
-		}
 
 		/*
 		 * It's okay if some of the ports don't have a remote endpoint
@@ -918,15 +920,12 @@ static int add_components_mdp(struct device *mdp_dev,
 		 * any external interface.
 		 */
 		intf = of_graph_get_remote_port_parent(ep_node);
-		if (!intf) {
-			of_node_put(ep_node);
+		if (!intf)
 			continue;
-		}
 
 		drm_of_component_match_add(master_dev, matchptr, compare_of,
 					   intf);
 		of_node_put(intf);
-		of_node_put(ep_node);
 	}
 
 	return 0;
@@ -1039,7 +1038,13 @@ static int msm_pdev_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
-	pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+	/* on all devices that I am aware of, iommu's which can map
+	 * any address the cpu can see are used:
+	 */
+	ret = dma_set_mask_and_coherent(&pdev->dev, ~0);
+	if (ret)
+		return ret;
+
 	return component_master_add_with_match(&pdev->dev, &msm_drm_ops, match);
 }
 
1045 1050
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 940bf4992fe2..ed4dad3ca133 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -52,6 +52,8 @@ struct msm_perf_state;
 struct msm_gem_submit;
 struct msm_fence_context;
 struct msm_fence_cb;
+struct msm_gem_address_space;
+struct msm_gem_vma;
 
 #define NUM_DOMAINS 2    /* one for KMS, then one per gpu core (?) */
 
@@ -121,12 +123,16 @@ struct msm_drm_private {
 	uint32_t pending_crtcs;
 	wait_queue_head_t pending_crtcs_event;
 
-	/* registered MMUs: */
-	unsigned int num_mmus;
-	struct msm_mmu *mmus[NUM_DOMAINS];
+	/* Registered address spaces.. currently this is fixed per # of
+	 * iommu's.  Ie. one for display block and one for gpu block.
+	 * Eventually, to do per-process gpu pagetables, we'll want one
+	 * of these per-process.
+	 */
+	unsigned int num_aspaces;
+	struct msm_gem_address_space *aspace[NUM_DOMAINS];
 
 	unsigned int num_planes;
-	struct drm_plane *planes[8];
+	struct drm_plane *planes[16];
 
 	unsigned int num_crtcs;
 	struct drm_crtc *crtcs[8];
@@ -173,8 +179,22 @@ int msm_atomic_check(struct drm_device *dev,
 		struct drm_atomic_state *state);
 int msm_atomic_commit(struct drm_device *dev,
 		struct drm_atomic_state *state, bool nonblock);
+struct drm_atomic_state *msm_atomic_state_alloc(struct drm_device *dev);
+void msm_atomic_state_clear(struct drm_atomic_state *state);
+void msm_atomic_state_free(struct drm_atomic_state *state);
+
+int msm_register_address_space(struct drm_device *dev,
+		struct msm_gem_address_space *aspace);
+
+void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
+		struct msm_gem_vma *vma, struct sg_table *sgt);
+int msm_gem_map_vma(struct msm_gem_address_space *aspace,
+		struct msm_gem_vma *vma, struct sg_table *sgt, int npages);
 
-int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu);
+void msm_gem_address_space_destroy(struct msm_gem_address_space *aspace);
+struct msm_gem_address_space *
+msm_gem_address_space_create(struct device *dev, struct iommu_domain *domain,
+		const char *name);
 
 void msm_gem_submit_free(struct msm_gem_submit *submit);
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
@@ -189,9 +209,9 @@ int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int msm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
 int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id,
-		uint32_t *iova);
-int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint32_t *iova);
-uint32_t msm_gem_iova(struct drm_gem_object *obj, int id);
+		uint64_t *iova);
+int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint64_t *iova);
+uint64_t msm_gem_iova(struct drm_gem_object *obj, int id);
 struct page **msm_gem_get_pages(struct drm_gem_object *obj);
 void msm_gem_put_pages(struct drm_gem_object *obj);
 void msm_gem_put_iova(struct drm_gem_object *obj, int id);
@@ -303,8 +323,8 @@ void __iomem *msm_ioremap(struct platform_device *pdev, const char *name,
 void msm_writel(u32 data, void __iomem *addr);
 u32 msm_readl(const void __iomem *addr);
 
-#define DBG(fmt, ...) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
-#define VERB(fmt, ...) if (0) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
+#define DBG(fmt, ...) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__)
+#define VERB(fmt, ...) if (0) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__)
 
 static inline int align_pitch(int width, int bpp)
 {
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 95cf8fe72ee5..9acf544e7a8f 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -88,11 +88,11 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, int id)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int ret, i, n = drm_format_num_planes(fb->pixel_format);
-	uint32_t iova;
+	uint64_t iova;
 
 	for (i = 0; i < n; i++) {
 		ret = msm_gem_get_iova(msm_fb->planes[i], id, &iova);
-		DBG("FB[%u]: iova[%d]: %08x (%d)", fb->base.id, i, iova, ret);
+		DBG("FB[%u]: iova[%d]: %08llx (%d)", fb->base.id, i, iova, ret);
 		if (ret)
 			return ret;
 	}
diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
index d29f5e82a410..bffe93498512 100644
--- a/drivers/gpu/drm/msm/msm_fbdev.c
+++ b/drivers/gpu/drm/msm/msm_fbdev.c
@@ -76,7 +76,7 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
 	struct drm_framebuffer *fb = NULL;
 	struct fb_info *fbi = NULL;
 	struct drm_mode_fb_cmd2 mode_cmd = {0};
-	uint32_t paddr;
+	uint64_t paddr;
 	int ret, size;
 
 	DBG("create fbdev: %dx%d@%d (%dx%d)", sizes->surface_width,
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 57db7dbbb618..cd06cfd94687 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -296,12 +296,8 @@ put_iova(struct drm_gem_object *obj)
 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
 
 	for (id = 0; id < ARRAY_SIZE(msm_obj->domain); id++) {
-		struct msm_mmu *mmu = priv->mmus[id];
-		if (mmu && msm_obj->domain[id].iova) {
-			uint32_t offset = msm_obj->domain[id].iova;
-			mmu->funcs->unmap(mmu, offset, msm_obj->sgt, obj->size);
-			msm_obj->domain[id].iova = 0;
-		}
+		msm_gem_unmap_vma(priv->aspace[id],
+				&msm_obj->domain[id], msm_obj->sgt);
 	}
 }
 
307 303
@@ -313,7 +309,7 @@ put_iova(struct drm_gem_object *obj)
  * the refcnt counter needs to be atomic_t.
  */
 int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id,
-		uint32_t *iova)
+		uint64_t *iova)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int ret = 0;
@@ -326,16 +322,8 @@ int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id,
 			return PTR_ERR(pages);
 
 		if (iommu_present(&platform_bus_type)) {
-			struct msm_mmu *mmu = priv->mmus[id];
-			uint32_t offset;
-
-			if (WARN_ON(!mmu))
-				return -EINVAL;
-
-			offset = (uint32_t)mmap_offset(obj);
-			ret = mmu->funcs->map(mmu, offset, msm_obj->sgt,
-					obj->size, IOMMU_READ | IOMMU_WRITE);
-			msm_obj->domain[id].iova = offset;
+			ret = msm_gem_map_vma(priv->aspace[id], &msm_obj->domain[id],
+					msm_obj->sgt, obj->size >> PAGE_SHIFT);
 		} else {
 			msm_obj->domain[id].iova = physaddr(obj);
 		}
@@ -348,7 +336,7 @@ int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id,
 }
 
 /* get iova, taking a reference.  Should have a matching put */
-int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint32_t *iova)
+int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint64_t *iova)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int ret;
@@ -370,7 +358,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint32_t *iova)
 /* get iova without taking a reference, used in places where you have
  * already done a 'msm_gem_get_iova()'.
  */
-uint32_t msm_gem_iova(struct drm_gem_object *obj, int id)
+uint64_t msm_gem_iova(struct drm_gem_object *obj, int id)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	WARN_ON(!msm_obj->domain[id].iova);
@@ -631,9 +619,11 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct reservation_object *robj = msm_obj->resv;
 	struct reservation_object_list *fobj;
+	struct msm_drm_private *priv = obj->dev->dev_private;
 	struct dma_fence *fence;
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;
+	unsigned id;
 
 	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
 
@@ -650,10 +640,15 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 		break;
 	}
 
-	seq_printf(m, "%08x: %c %2d (%2d) %08llx %p %zu%s\n",
+	seq_printf(m, "%08x: %c %2d (%2d) %08llx %p\t",
 			msm_obj->flags, is_active(msm_obj) ? 'A' : 'I',
 			obj->name, obj->refcount.refcount.counter,
-			off, msm_obj->vaddr, obj->size, madv);
+			off, msm_obj->vaddr);
+
+	for (id = 0; id < priv->num_aspaces; id++)
+		seq_printf(m, " %08llx", msm_obj->domain[id].iova);
+
+	seq_printf(m, " %zu%s\n", obj->size, madv);
 
 	rcu_read_lock();
 	fobj = rcu_dereference(robj->fence);
@@ -761,7 +756,6 @@ static int msm_gem_new_impl(struct drm_device *dev,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_gem_object *msm_obj;
-	unsigned sz;
 	bool use_vram = false;
 
 	switch (flags & MSM_BO_CACHE_MASK) {
@@ -783,16 +777,12 @@ static int msm_gem_new_impl(struct drm_device *dev,
 	if (WARN_ON(use_vram && !priv->vram.size))
 		return -EINVAL;
 
-	sz = sizeof(*msm_obj);
-	if (use_vram)
-		sz += sizeof(struct drm_mm_node);
-
-	msm_obj = kzalloc(sz, GFP_KERNEL);
+	msm_obj = kzalloc(sizeof(*msm_obj), GFP_KERNEL);
 	if (!msm_obj)
 		return -ENOMEM;
 
 	if (use_vram)
-		msm_obj->vram_node = (void *)&msm_obj[1];
+		msm_obj->vram_node = &msm_obj->domain[0].node;
 
 	msm_obj->flags = flags;
 	msm_obj->madv = MSM_MADV_WILLNEED;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 2cb8551fda70..7d529516b332 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -24,6 +24,20 @@
 /* Additional internal-use only BO flags: */
 #define MSM_BO_STOLEN        0x10000000    /* try to use stolen/splash memory */
 
+struct msm_gem_address_space {
+	const char *name;
+	/* NOTE: mm managed at the page level, size is in # of pages
+	 * and position mm_node->start is in # of pages:
+	 */
+	struct drm_mm mm;
+	struct msm_mmu *mmu;
+};
+
+struct msm_gem_vma {
+	struct drm_mm_node node;
+	uint64_t iova;
+};
+
 struct msm_gem_object {
 	struct drm_gem_object base;
 
@@ -61,10 +75,7 @@ struct msm_gem_object {
 	struct sg_table *sgt;
 	void *vaddr;
 
-	struct {
-		// XXX
-		uint32_t iova;
-	} domain[NUM_DOMAINS];
+	struct msm_gem_vma domain[NUM_DOMAINS];
 
 	/* normally (resv == &_resv) except for imported bo's */
 	struct reservation_object *resv;
@@ -112,13 +123,13 @@ struct msm_gem_submit {
 	struct {
 		uint32_t type;
 		uint32_t size;  /* in dwords */
-		uint32_t iova;
+		uint64_t iova;
 		uint32_t idx;   /* cmdstream buffer idx in bos[] */
 	} *cmd;  /* array of size nr_cmds */
 	struct {
 		uint32_t flags;
 		struct msm_gem_object *obj;
-		uint32_t iova;
+		uint64_t iova;
 	} bos[0];
 };
 
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 25e8786fa4ca..166e84e4f0d4 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -241,7 +241,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 
 	for (i = 0; i < submit->nr_bos; i++) {
 		struct msm_gem_object *msm_obj = submit->bos[i].obj;
-		uint32_t iova;
+		uint64_t iova;
 
 		/* if locking succeeded, pin bo: */
 		ret = msm_gem_get_iova_locked(&msm_obj->base,
@@ -266,7 +266,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 }
 
 static int submit_bo(struct msm_gem_submit *submit, uint32_t idx,
-		struct msm_gem_object **obj, uint32_t *iova, bool *valid)
+		struct msm_gem_object **obj, uint64_t *iova, bool *valid)
 {
 	if (idx >= submit->nr_bos) {
 		DRM_ERROR("invalid buffer index: %u (out of %u)\n",
@@ -312,7 +312,8 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
 		struct drm_msm_gem_submit_reloc submit_reloc;
 		void __user *userptr =
 			u64_to_user_ptr(relocs + (i * sizeof(submit_reloc)));
-		uint32_t iova, off;
+		uint32_t off;
+		uint64_t iova;
 		bool valid;
 
 		ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc));
@@ -461,7 +462,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		void __user *userptr =
 			u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd)));
 		struct msm_gem_object *msm_obj;
-		uint32_t iova;
+		uint64_t iova;
 
 		ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd));
 		if (ret) {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
new file mode 100644
index 000000000000..a311d26ccb21
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2016 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "msm_drv.h"
+#include "msm_gem.h"
+#include "msm_mmu.h"
+
+void
+msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
+		struct msm_gem_vma *vma, struct sg_table *sgt)
+{
+	if (!vma->iova)
+		return;
+
+	if (aspace->mmu) {
+		unsigned size = vma->node.size << PAGE_SHIFT;
+		aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, sgt, size);
+	}
+
+	drm_mm_remove_node(&vma->node);
+
+	vma->iova = 0;
+}
+
+int
+msm_gem_map_vma(struct msm_gem_address_space *aspace,
+		struct msm_gem_vma *vma, struct sg_table *sgt, int npages)
+{
+	int ret;
+
+	if (WARN_ON(drm_mm_node_allocated(&vma->node)))
+		return 0;
+
+	ret = drm_mm_insert_node(&aspace->mm, &vma->node, npages,
+			0, DRM_MM_SEARCH_DEFAULT);
+	if (ret)
+		return ret;
+
+	vma->iova = vma->node.start << PAGE_SHIFT;
+
+	if (aspace->mmu) {
+		unsigned size = npages << PAGE_SHIFT;
+		ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt,
+				size, IOMMU_READ | IOMMU_WRITE);
+	}
+
+	return ret;
+}
+
+void
+msm_gem_address_space_destroy(struct msm_gem_address_space *aspace)
+{
+	drm_mm_takedown(&aspace->mm);
+	if (aspace->mmu)
+		aspace->mmu->funcs->destroy(aspace->mmu);
+	kfree(aspace);
+}
+
+struct msm_gem_address_space *
+msm_gem_address_space_create(struct device *dev, struct iommu_domain *domain,
+		const char *name)
+{
+	struct msm_gem_address_space *aspace;
+
+	aspace = kzalloc(sizeof(*aspace), GFP_KERNEL);
+	if (!aspace)
+		return ERR_PTR(-ENOMEM);
+
+	aspace->name = name;
+	aspace->mmu = msm_iommu_new(dev, domain);
+
+	drm_mm_init(&aspace->mm, (domain->geometry.aperture_start >> PAGE_SHIFT),
+			(domain->geometry.aperture_end >> PAGE_SHIFT) - 1);
+
+	return aspace;
+}
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 3249707e6834..b28527a65d09 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -91,21 +91,20 @@ static int disable_pwrrail(struct msm_gpu *gpu)
 
 static int enable_clk(struct msm_gpu *gpu)
 {
-	struct clk *rate_clk = NULL;
 	int i;
 
-	/* NOTE: kgsl_pwrctrl_clk() ignores grp_clks[0].. */
-	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i > 0; i--) {
-		if (gpu->grp_clks[i]) {
-			clk_prepare(gpu->grp_clks[i]);
-			rate_clk = gpu->grp_clks[i];
-		}
-	}
+	if (gpu->grp_clks[0] && gpu->fast_rate)
+		clk_set_rate(gpu->grp_clks[0], gpu->fast_rate);
 
-	if (rate_clk && gpu->fast_rate)
-		clk_set_rate(rate_clk, gpu->fast_rate);
+	/* Set the RBBM timer rate to 19.2Mhz */
+	if (gpu->grp_clks[2])
+		clk_set_rate(gpu->grp_clks[2], 19200000);
 
-	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i > 0; i--)
+	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i >= 0; i--)
+		if (gpu->grp_clks[i])
+			clk_prepare(gpu->grp_clks[i]);
+
+	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i >= 0; i--)
 		if (gpu->grp_clks[i])
 			clk_enable(gpu->grp_clks[i]);
 
@@ -114,24 +113,22 @@ static int enable_clk(struct msm_gpu *gpu)
 
 static int disable_clk(struct msm_gpu *gpu)
 {
-	struct clk *rate_clk = NULL;
 	int i;
 
-	/* NOTE: kgsl_pwrctrl_clk() ignores grp_clks[0].. */
-	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i > 0; i--) {
-		if (gpu->grp_clks[i]) {
-			clk_disable(gpu->grp_clks[i]);
-			rate_clk = gpu->grp_clks[i];
-		}
-	}
-
-	if (rate_clk && gpu->slow_rate)
-		clk_set_rate(rate_clk, gpu->slow_rate);
+	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i >= 0; i--)
+		if (gpu->grp_clks[i])
+			clk_disable(gpu->grp_clks[i]);
 
-	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i > 0; i--)
+	for (i = ARRAY_SIZE(gpu->grp_clks) - 1; i >= 0; i--)
 		if (gpu->grp_clks[i])
 			clk_unprepare(gpu->grp_clks[i]);
 
+	if (gpu->grp_clks[0] && gpu->slow_rate)
+		clk_set_rate(gpu->grp_clks[0], gpu->slow_rate);
+
+	if (gpu->grp_clks[2])
+		clk_set_rate(gpu->grp_clks[2], 0);
+
 	return 0;
 }
 
@@ -528,7 +525,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
 
 	for (i = 0; i < submit->nr_bos; i++) {
 		struct msm_gem_object *msm_obj = submit->bos[i].obj;
-		uint32_t iova;
+		uint64_t iova;
 
 		/* can't happen yet.. but when we add 2d support we'll have
 		 * to deal w/ cross-ring synchronization:
@@ -563,8 +560,8 @@ static irqreturn_t irq_handler(int irq, void *data)
 }
 
 static const char *clk_names[] = {
-	"src_clk", "core_clk", "iface_clk", "mem_clk", "mem_iface_clk",
-	"alt_mem_iface_clk",
+	"core_clk", "iface_clk", "rbbmtimer_clk", "mem_clk",
+	"mem_iface_clk", "alt_mem_iface_clk",
 };
 
 int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
@@ -656,12 +653,17 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 	 */
 	iommu = iommu_domain_alloc(&platform_bus_type);
 	if (iommu) {
+		/* TODO 32b vs 64b address space.. */
+		iommu->geometry.aperture_start = SZ_16M;
+		iommu->geometry.aperture_end = 0xffffffff;
+
 		dev_info(drm->dev, "%s: using IOMMU\n", name);
-		gpu->mmu = msm_iommu_new(&pdev->dev, iommu);
-		if (IS_ERR(gpu->mmu)) {
-			ret = PTR_ERR(gpu->mmu);
+		gpu->aspace = msm_gem_address_space_create(&pdev->dev,
+				iommu, "gpu");
+		if (IS_ERR(gpu->aspace)) {
+			ret = PTR_ERR(gpu->aspace);
 			dev_err(drm->dev, "failed to init iommu: %d\n", ret);
-			gpu->mmu = NULL;
+			gpu->aspace = NULL;
 			iommu_domain_free(iommu);
 			goto fail;
 		}
@@ -669,7 +671,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 	} else {
 		dev_info(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name);
 	}
-	gpu->id = msm_register_mmu(drm, gpu->mmu);
+	gpu->id = msm_register_address_space(drm, gpu->aspace);
 
 
 	/* Create ringbuffer: */
@@ -705,8 +707,8 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
 		msm_ringbuffer_destroy(gpu->rb);
 	}
 
-	if (gpu->mmu)
-		gpu->mmu->funcs->destroy(gpu->mmu);
+	if (gpu->aspace)
+		msm_gem_address_space_destroy(gpu->aspace);
 
 	if (gpu->fctx)
 		msm_fence_context_free(gpu->fctx);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d61d98a6e047..c4c39d3272c7 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -50,7 +50,7 @@ struct msm_gpu_funcs {
 	void (*submit)(struct msm_gpu *gpu, struct msm_gem_submit *submit,
 			struct msm_file_private *ctx);
 	void (*flush)(struct msm_gpu *gpu);
-	void (*idle)(struct msm_gpu *gpu);
+	bool (*idle)(struct msm_gpu *gpu);
 	irqreturn_t (*irq)(struct msm_gpu *irq);
 	uint32_t (*last_fence)(struct msm_gpu *gpu);
 	void (*recover)(struct msm_gpu *gpu);
@@ -80,7 +80,7 @@ struct msm_gpu {
 
 	/* ringbuffer: */
 	struct msm_ringbuffer *rb;
-	uint32_t rb_iova;
+	uint64_t rb_iova;
 
 	/* list of GEM active objects: */
 	struct list_head active_list;
@@ -98,7 +98,7 @@ struct msm_gpu {
 	void __iomem *mmio;
 	int irq;
 
-	struct msm_mmu *mmu;
+	struct msm_gem_address_space *aspace;
 	int id;
 
 	/* Power Control: */
@@ -154,6 +154,45 @@ static inline u32 gpu_read(struct msm_gpu *gpu, u32 reg)
 	return msm_readl(gpu->mmio + (reg << 2));
 }
 
+static inline void gpu_rmw(struct msm_gpu *gpu, u32 reg, u32 mask, u32 or)
+{
+	uint32_t val = gpu_read(gpu, reg);
+
+	val &= ~mask;
+	gpu_write(gpu, reg, val | or);
+}
+
+static inline u64 gpu_read64(struct msm_gpu *gpu, u32 lo, u32 hi)
+{
+	u64 val;
+
+	/*
+	 * Why not a readq here? Two reasons: 1) many of the LO registers are
+	 * not quad word aligned and 2) the GPU hardware designers have a bit
+	 * of a history of putting registers where they fit, especially in
+	 * spins. The longer a GPU family goes the higher the chance that
+	 * we'll get burned.  We could do a series of validity checks if we
+	 * wanted to, but really is a readq() that much better? Nah.
+	 */
+
+	/*
+	 * For some lo/hi registers (like perfcounters), the hi value is latched
+	 * when the lo is read, so make sure to read the lo first to trigger
+	 * that
+	 */
+	val = (u64) msm_readl(gpu->mmio + (lo << 2));
+	val |= ((u64) msm_readl(gpu->mmio + (hi << 2)) << 32);
+
+	return val;
+}
+
+static inline void gpu_write64(struct msm_gpu *gpu, u32 lo, u32 hi, u64 val)
+{
+	/* Why not a writeq here? Read the screed above */
+	msm_writel(lower_32_bits(val), gpu->mmio + (lo << 2));
+	msm_writel(upper_32_bits(val), gpu->mmio + (hi << 2));
+}
+
 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
 int msm_gpu_pm_resume(struct msm_gpu *gpu);
 
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 3a294d0da3a0..61aaaa1de6eb 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -45,13 +45,13 @@ static void msm_iommu_detach(struct msm_mmu *mmu, const char * const *names,
45 iommu_detach_device(iommu->domain, mmu->dev); 45 iommu_detach_device(iommu->domain, mmu->dev);
46} 46}
47 47
48static int msm_iommu_map(struct msm_mmu *mmu, uint32_t iova, 48static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
49 struct sg_table *sgt, unsigned len, int prot) 49 struct sg_table *sgt, unsigned len, int prot)
50{ 50{
51 struct msm_iommu *iommu = to_msm_iommu(mmu); 51 struct msm_iommu *iommu = to_msm_iommu(mmu);
52 struct iommu_domain *domain = iommu->domain; 52 struct iommu_domain *domain = iommu->domain;
53 struct scatterlist *sg; 53 struct scatterlist *sg;
54 unsigned int da = iova; 54 unsigned long da = iova;
55 unsigned int i, j; 55 unsigned int i, j;
56 int ret; 56 int ret;
57 57
@@ -62,7 +62,7 @@ static int msm_iommu_map(struct msm_mmu *mmu, uint32_t iova,
62 dma_addr_t pa = sg_phys(sg) - sg->offset; 62 dma_addr_t pa = sg_phys(sg) - sg->offset;
63 size_t bytes = sg->length + sg->offset; 63 size_t bytes = sg->length + sg->offset;
64 64
65 VERB("map[%d]: %08x %08lx(%zx)", i, da, (unsigned long)pa, bytes); 65 VERB("map[%d]: %08lx %08lx(%zx)", i, da, (unsigned long)pa, bytes);
66 66
67 ret = iommu_map(domain, da, pa, bytes, prot); 67 ret = iommu_map(domain, da, pa, bytes, prot);
68 if (ret) 68 if (ret)
@@ -84,13 +84,13 @@ fail:
84 return ret; 84 return ret;
85} 85}
86 86
87static int msm_iommu_unmap(struct msm_mmu *mmu, uint32_t iova, 87static int msm_iommu_unmap(struct msm_mmu *mmu, uint64_t iova,
88 struct sg_table *sgt, unsigned len) 88 struct sg_table *sgt, unsigned len)
89{ 89{
90 struct msm_iommu *iommu = to_msm_iommu(mmu); 90 struct msm_iommu *iommu = to_msm_iommu(mmu);
91 struct iommu_domain *domain = iommu->domain; 91 struct iommu_domain *domain = iommu->domain;
92 struct scatterlist *sg; 92 struct scatterlist *sg;
93 unsigned int da = iova; 93 unsigned long da = iova;
94 int i; 94 int i;
95 95
96 for_each_sg(sgt->sgl, sg, sgt->nents, i) { 96 for_each_sg(sgt->sgl, sg, sgt->nents, i) {
@@ -101,7 +101,7 @@ static int msm_iommu_unmap(struct msm_mmu *mmu, uint32_t iova,
101 if (unmapped < bytes) 101 if (unmapped < bytes)
102 return unmapped; 102 return unmapped;
103 103
104 VERB("unmap[%d]: %08x(%zx)", i, da, bytes); 104 VERB("unmap[%d]: %08lx(%zx)", i, da, bytes);
105 105
106 BUG_ON(!PAGE_ALIGNED(bytes)); 106 BUG_ON(!PAGE_ALIGNED(bytes));
107 107
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index 40e41e5cdbc6..e470f4cf8f76 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -40,6 +40,8 @@ struct msm_kms_funcs {
40 irqreturn_t (*irq)(struct msm_kms *kms); 40 irqreturn_t (*irq)(struct msm_kms *kms);
41 int (*enable_vblank)(struct msm_kms *kms, struct drm_crtc *crtc); 41 int (*enable_vblank)(struct msm_kms *kms, struct drm_crtc *crtc);
42 void (*disable_vblank)(struct msm_kms *kms, struct drm_crtc *crtc); 42 void (*disable_vblank)(struct msm_kms *kms, struct drm_crtc *crtc);
43 /* swap global atomic state: */
44 void (*swap_state)(struct msm_kms *kms, struct drm_atomic_state *state);
43 /* modeset, bracketing atomic_commit(): */ 45 /* modeset, bracketing atomic_commit(): */
44 void (*prepare_commit)(struct msm_kms *kms, struct drm_atomic_state *state); 46 void (*prepare_commit)(struct msm_kms *kms, struct drm_atomic_state *state);
45 void (*complete_commit)(struct msm_kms *kms, struct drm_atomic_state *state); 47 void (*complete_commit)(struct msm_kms *kms, struct drm_atomic_state *state);
@@ -56,6 +58,11 @@ struct msm_kms_funcs {
56 bool is_cmd_mode); 58 bool is_cmd_mode);
57 /* cleanup: */ 59 /* cleanup: */
58 void (*destroy)(struct msm_kms *kms); 60 void (*destroy)(struct msm_kms *kms);
61#ifdef CONFIG_DEBUG_FS
62 /* debugfs: */
63 int (*debugfs_init)(struct msm_kms *kms, struct drm_minor *minor);
64 void (*debugfs_cleanup)(struct msm_kms *kms, struct drm_minor *minor);
65#endif
59}; 66};
60 67
61struct msm_kms { 68struct msm_kms {
@@ -65,6 +72,18 @@ struct msm_kms {
65 int irq; 72 int irq;
66}; 73};
67 74
75/**
76 * Subclass of drm_atomic_state, to allow kms backend to have driver
77 * private global state. The kms backend can do whatever it wants
78 * with the ->state ptr. On ->atomic_state_clear() the ->state ptr
79 * is kfree'd and set back to NULL.
80 */
81struct msm_kms_state {
82 struct drm_atomic_state base;
83 void *state;
84};
85#define to_kms_state(x) container_of(x, struct msm_kms_state, base)
86
68static inline void msm_kms_init(struct msm_kms *kms, 87static inline void msm_kms_init(struct msm_kms *kms,
69 const struct msm_kms_funcs *funcs) 88 const struct msm_kms_funcs *funcs)
70{ 89{
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index b8ca9a0e9170..f85c879e68d2 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -23,9 +23,9 @@
23struct msm_mmu_funcs { 23struct msm_mmu_funcs {
24 int (*attach)(struct msm_mmu *mmu, const char * const *names, int cnt); 24 int (*attach)(struct msm_mmu *mmu, const char * const *names, int cnt);
25 void (*detach)(struct msm_mmu *mmu, const char * const *names, int cnt); 25 void (*detach)(struct msm_mmu *mmu, const char * const *names, int cnt);
26 int (*map)(struct msm_mmu *mmu, uint32_t iova, struct sg_table *sgt, 26 int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
27 unsigned len, int prot); 27 unsigned len, int prot);
28 int (*unmap)(struct msm_mmu *mmu, uint32_t iova, struct sg_table *sgt, 28 int (*unmap)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
29 unsigned len); 29 unsigned len);
30 void (*destroy)(struct msm_mmu *mmu); 30 void (*destroy)(struct msm_mmu *mmu);
31}; 31};
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index 8487f461f05f..6607456dc626 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -289,7 +289,7 @@ void msm_rd_debugfs_cleanup(struct drm_minor *minor)
289 289
290static void snapshot_buf(struct msm_rd_state *rd, 290static void snapshot_buf(struct msm_rd_state *rd,
291 struct msm_gem_submit *submit, int idx, 291 struct msm_gem_submit *submit, int idx,
292 uint32_t iova, uint32_t size) 292 uint64_t iova, uint32_t size)
293{ 293{
294 struct msm_gem_object *obj = submit->bos[idx].obj; 294 struct msm_gem_object *obj = submit->bos[idx].obj;
295 const char *buf; 295 const char *buf;
@@ -306,7 +306,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
306 } 306 }
307 307
308 rd_write_section(rd, RD_GPUADDR, 308 rd_write_section(rd, RD_GPUADDR,
309 (uint32_t[2]){ iova, size }, 8); 309 (uint32_t[3]){ iova, size, iova >> 32 }, 12);
310 rd_write_section(rd, RD_BUFFER_CONTENTS, buf, size); 310 rd_write_section(rd, RD_BUFFER_CONTENTS, buf, size);
311 311
312 msm_gem_put_vaddr_locked(&obj->base); 312 msm_gem_put_vaddr_locked(&obj->base);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 8c51e8a0df89..4d5d6a2bc59e 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -2,17 +2,24 @@
2 * Copyright (C) 2013 Red Hat 2 * Copyright (C) 2013 Red Hat
3 * Author: Rob Clark <robdclark@gmail.com> 3 * Author: Rob Clark <robdclark@gmail.com>
4 * 4 *
5 * This program is free software; you can redistribute it and/or modify it 5 * Permission is hereby granted, free of charge, to any person obtaining a
6 * under the terms of the GNU General Public License version 2 as published by 6 * copy of this software and associated documentation files (the "Software"),
7 * the Free Software Foundation. 7 * to deal in the Software without restriction, including without limitation
8 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
9 * and/or sell copies of the Software, and to permit persons to whom the
10 * Software is furnished to do so, subject to the following conditions:
8 * 11 *
9 * This program is distributed in the hope that it will be useful, but WITHOUT 12 * The above copyright notice and this permission notice (including the next
10 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 * paragraph) shall be included in all copies or substantial portions of the
11 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 * Software.
12 * more details.
13 * 15 *
14 * You should have received a copy of the GNU General Public License along with 16 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 * this program. If not, see <http://www.gnu.org/licenses/>. 17 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
19 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
22 * SOFTWARE.
16 */ 23 */
17 24
18#ifndef __MSM_DRM_H__ 25#ifndef __MSM_DRM_H__