author		Linus Torvalds <torvalds@linux-foundation.org>	2018-04-06 00:29:35 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2018-04-06 00:29:35 -0400
commit		38c23685b273cfb4ccf31a199feccce3bdcb5d83 (patch)
tree		6b693a36c6ea6c64aaaf34112c57e89f1b5c4b0f
parent		167569343fac74ec6825a3ab982f795b5880e63e (diff)
parent		7df3f0bb5f90e3470de2798452000e221420059c (diff)
Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC driver updates from Arnd Bergmann:
"The main addition this time around is the new ARM "SCMI" framework,
which is the latest in a series of standards coming from ARM to do
power management in a platform-independent way.
This has been through many review cycles, and it relies on a rather
interesting way of using the mailbox subsystem, but in the end I
agreed that Sudeep's version was the best we could do after all.
Other changes include:
- the ARM CCN driver is moved out of drivers/bus into drivers/perf,
which makes more sense. Similarly, the performance monitoring
portion of the CCI driver is moved the same way and cleaned up a
little more.
- a series of updates to the SCPI framework
- support for the Mediatek mt7623a SoC in drivers/soc
- support for additional NVIDIA Tegra hardware in drivers/soc
- a new reset driver for Socionext Uniphier
- smaller bug fixes in drivers/soc, drivers/tee, drivers/memory,
drivers/firmware and drivers/reset across platforms"
* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (87 commits)
reset: uniphier: add ethernet reset control support for PXs3
reset: stm32mp1: Enable stm32mp1 reset driver
dt-bindings: reset: add STM32MP1 resets
reset: uniphier: add Pro4/Pro5/PXs2 audio systems reset control
reset: imx7: add 'depends on HAS_IOMEM' to fix unmet dependency
reset: modify the way reset lookup works for board files
reset: add support for non-DT systems
clk: scmi: use devm_of_clk_add_hw_provider() API and drop scmi_clocks_remove
firmware: arm_scmi: prevent accessing rate_discrete uninitialized
hwmon: (scmi) return -EINVAL when sensor information is unavailable
amlogic: meson-gx-socinfo: Update soc ids
soc/tegra: pmc: Use the new reset APIs to manage reset controllers
soc: mediatek: update power domain data of MT2712
dt-bindings: soc: update MT2712 power dt-bindings
cpufreq: scmi: add thermal dependency
soc: mediatek: fix the mistaken pointer accessed when subdomains are added
soc: mediatek: add SCPSYS power domain driver for MediaTek MT7623A SoC
soc: mediatek: avoid hardcoded value with bus_prot_mask
dt-bindings: soc: add header files required for MT7623A SCPSYS dt-binding
dt-bindings: soc: add SCPSYS binding for MT7623 and MT7623A SoC
...
79 files changed, 6863 insertions, 2191 deletions
diff --git a/Documentation/devicetree/bindings/arm/arm,scmi.txt b/Documentation/devicetree/bindings/arm/arm,scmi.txt
new file mode 100644
index 000000000000..5f3719ab7075
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/arm,scmi.txt
@@ -0,0 +1,179 @@
System Control and Management Interface (SCMI) Message Protocol
---------------------------------------------------------------

The SCMI is intended to allow agents such as OSPM to manage various functions
that are provided by the hardware platform it is running on, including power
and performance functions.

This binding is intended to define the interface that firmware implementing
SCMI, as described in ARM document number ARM DEN 0056A ("ARM System Control
and Management Interface Platform Design Document")[0], provides for OSPM in
the device tree.

Required properties:

The scmi node with the following properties shall be under the /firmware/ node.

- compatible : shall be "arm,scmi"
- mboxes: List of phandle and mailbox channel specifiers. It should contain
	  exactly one or two mailboxes: one for transmitting messages ("tx"),
	  and an optional second one for receiving notifications ("rx") if
	  supported.
- shmem : List of phandles pointing to the shared memory (SHM) areas, as per
	  the generic mailbox client binding.
- #address-cells : should be '1' if the device has sub-nodes, maps to
	  protocol identifier for a given sub-node.
- #size-cells : should be '0' as the 'reg' property doesn't have any size
	  associated with it.

Optional properties:

- mbox-names: shall be "tx" or "rx" depending on mboxes entries.

See Documentation/devicetree/bindings/mailbox/mailbox.txt for more details
about the generic mailbox controller and client driver bindings.

The mailbox is the only permitted method of calling the SCMI firmware. A
mailbox doorbell is used as the mechanism to signal the presence of a
message and/or notification.

Each supported protocol shall have a sub-node with a corresponding compatible
as described in the following sections. If the platform supports a dedicated
communication channel for a particular protocol, the three properties mboxes,
mbox-names and shmem shall be present in the sub-node corresponding to that
protocol, as sketched below.

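For instance, a performance protocol sub-node with its own dedicated channel
might look like this sketch (the mailbox channel number and the cpu_scp_perf
shmem label are purely illustrative, not taken from a real platform):

	scmi_dvfs: protocol@13 {
		reg = <0x13>;
		mboxes = <&mailbox 2>;
		mbox-names = "tx";
		shmem = <&cpu_scp_perf>;
		#clock-cells = <1>;
	};
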
Clock/Performance bindings for the clocks/OPPs based on SCMI Message Protocol
-----------------------------------------------------------------------------

This binding uses the common clock binding[1].

Required properties:
- #clock-cells : Should be 1. Contains the Clock ID value used by SCMI commands.

Power domain bindings for the power domains based on SCMI Message Protocol
---------------------------------------------------------------------------

This binding for the SCMI power domain providers uses the generic power
domain binding[2].

Required properties:
- #power-domain-cells : Should be 1. Contains the device or the power
			domain ID value used by SCMI commands.

Sensor bindings for the sensors based on SCMI Message Protocol
--------------------------------------------------------------
SCMI provides an API to access the various sensors on the SoC.

Required properties:
- #thermal-sensor-cells: should be set to 1. This property follows the
			 thermal device tree bindings[3].

			 Valid cell values are the raw identifiers (Sensor IDs)
			 used by the firmware; refer to your platform's
			 documentation for the IDs to use.

SRAM and Shared Memory for SCMI
-------------------------------

A small area of SRAM is reserved for SCMI communication between application
processors and SCP.

The properties should follow the generic mmio-sram description found in [4].

Each sub-node represents the reserved area for SCMI.

Required sub-node properties:
- reg : The base offset and size of the reserved area within the SRAM
- compatible : should be "arm,scmi-shmem" for Non-secure SRAM based
	       shared memory

[0] http://infocenter.arm.com/help/topic/com.arm.doc.den0056a/index.html
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/power/power_domain.txt
[3] Documentation/devicetree/bindings/thermal/thermal.txt
[4] Documentation/devicetree/bindings/sram/sram.txt

Example:

sram@50000000 {
	compatible = "mmio-sram";
	reg = <0x0 0x50000000 0x0 0x10000>;

	#address-cells = <1>;
	#size-cells = <1>;
	ranges = <0 0x0 0x50000000 0x10000>;

	cpu_scp_lpri: scp-shmem@0 {
		compatible = "arm,scmi-shmem";
		reg = <0x0 0x200>;
	};

	cpu_scp_hpri: scp-shmem@200 {
		compatible = "arm,scmi-shmem";
		reg = <0x200 0x200>;
	};
};

mailbox@40000000 {
	...
	#mbox-cells = <1>;
	reg = <0x0 0x40000000 0x0 0x10000>;
};

firmware {

	...

	scmi {
		compatible = "arm,scmi";
		mboxes = <&mailbox 0 &mailbox 1>;
		mbox-names = "tx", "rx";
		shmem = <&cpu_scp_lpri &cpu_scp_hpri>;
		#address-cells = <1>;
		#size-cells = <0>;

		scmi_devpd: protocol@11 {
			reg = <0x11>;
			#power-domain-cells = <1>;
		};

		scmi_dvfs: protocol@13 {
			reg = <0x13>;
			#clock-cells = <1>;
		};

		scmi_clk: protocol@14 {
			reg = <0x14>;
			#clock-cells = <1>;
		};

		scmi_sensors0: protocol@15 {
			reg = <0x15>;
			#thermal-sensor-cells = <1>;
		};
	};
};

cpu@0 {
	...
	reg = <0 0>;
	clocks = <&scmi_dvfs 0>;
};

hdlcd@7ff60000 {
	...
	reg = <0 0x7ff60000 0 0x1000>;
	clocks = <&scmi_clk 4>;
	power-domains = <&scmi_devpd 1>;
};

thermal-zones {
	soc_thermal {
		polling-delay-passive = <100>;
		polling-delay = <1000>;
		/* sensor ID */
		thermal-sensors = <&scmi_sensors0 3>;
		...
	};
};
diff --git a/Documentation/devicetree/bindings/arm/samsung/pmu.txt b/Documentation/devicetree/bindings/arm/samsung/pmu.txt
index 779f5614bcee..16685787d2bd 100644
--- a/Documentation/devicetree/bindings/arm/samsung/pmu.txt
+++ b/Documentation/devicetree/bindings/arm/samsung/pmu.txt
@@ -43,6 +43,12 @@ following properties:
 - interrupt-parent: a phandle indicating which interrupt controller
   this PMU signals interrupts to.
 
+
+Optional nodes:
+
+- nodes defining the restart and poweroff syscon children
+
+
 Example :
 pmu_system_controller: system-controller@10040000 {
 	compatible = "samsung,exynos5250-pmu", "syscon";
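As a rough illustration of such children, here is a sketch using the generic
syscon-reboot and syscon-poweroff bindings (the offset and mask values below
are placeholders, not real Exynos register values):

pmu_system_controller: system-controller@10040000 {
	compatible = "samsung,exynos5250-pmu", "syscon";
	reg = <0x10040000 0x5000>;

	poweroff: syscon-poweroff {
		compatible = "syscon-poweroff";
		regmap = <&pmu_system_controller>;
		offset = <0x330c>;	/* placeholder power-hold register */
		mask = <0x5200>;	/* placeholder magic value */
	};

	restart: syscon-reboot {
		compatible = "syscon-reboot";
		regmap = <&pmu_system_controller>;
		offset = <0x0400>;	/* placeholder software-reset register */
		mask = <0x1>;		/* placeholder reset trigger */
	};
};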
diff --git a/Documentation/devicetree/bindings/mailbox/mailbox.txt b/Documentation/devicetree/bindings/mailbox/mailbox.txt
index be05b9746c69..af8ecee2ac68 100644
--- a/Documentation/devicetree/bindings/mailbox/mailbox.txt
+++ b/Documentation/devicetree/bindings/mailbox/mailbox.txt
@@ -23,6 +23,11 @@ Required property:
 
 Optional property:
 - mbox-names: List of identifier strings for each mailbox channel.
+- shmem : List of phandles pointing to the shared memory (SHM) areas between
+	  the users of these mailboxes for IPC, one for each mailbox. This
+	  shared memory can be part of any memory reserved for the purpose of
+	  this communication between the mailbox client and the remote.
+
 
 Example:
 pwr_cntrl: power {
@@ -30,3 +35,26 @@ Example:
 	mbox-names = "pwr-ctrl", "rpc";
 	mboxes = <&mailbox 0 &mailbox 1>;
 };
+
+Example with shared memory (shmem):
+
+	sram: sram@50000000 {
+		compatible = "mmio-sram";
+		reg = <0x50000000 0x10000>;
+
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0 0x50000000 0x10000>;
+
+		cl_shmem: shmem@0 {
+			compatible = "client-shmem";
+			reg = <0x0 0x200>;
+		};
+	};
+
+	client@2e000000 {
+		...
+		mboxes = <&mailbox 0>;
+		shmem = <&cl_shmem>;
+		...
+	};
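For reference, a mailbox client driver would typically resolve such a shmem
phandle along these lines (a minimal sketch with trimmed error handling; the
function name is illustrative):

#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>

static void __iomem *client_map_shmem(struct platform_device *pdev)
{
	struct device_node *shmem_np;
	struct resource res;
	int ret;

	/* Resolve the "shmem" phandle from the client's DT node */
	shmem_np = of_parse_phandle(pdev->dev.of_node, "shmem", 0);
	if (!shmem_np)
		return NULL;

	/* Translate the shmem node's reg entry into a CPU-addressable resource */
	ret = of_address_to_resource(shmem_np, 0, &res);
	of_node_put(shmem_np);
	if (ret)
		return NULL;

	/* Map the shared area used for IPC with the remote */
	return devm_ioremap(&pdev->dev, res.start, resource_size(&res));
}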
diff --git a/Documentation/devicetree/bindings/mfd/aspeed-lpc.txt b/Documentation/devicetree/bindings/mfd/aspeed-lpc.txt
index 69aadee00d5f..34dd89087cff 100644
--- a/Documentation/devicetree/bindings/mfd/aspeed-lpc.txt
+++ b/Documentation/devicetree/bindings/mfd/aspeed-lpc.txt
@@ -176,3 +176,24 @@ lhc: lhc@20 {
 	compatible = "aspeed,ast2500-lhc";
 	reg = <0x20 0x24 0x48 0x8>;
 };
+
+LPC reset control
+-----------------
+
+The UARTs present in the ASPEED SoC can have their resets tied to the reset
+state of the LPC bus. Some systems may choose to modify this configuration.
+
+Required properties:
+
+- compatible:	"aspeed,ast2500-lpc-reset" or
+		"aspeed,ast2400-lpc-reset"
+- reg:		offset and length of the IP in the LHC memory region
+- #reset-cells:	indicates the number of reset cells expected
+
+Example:
+
+lpc_reset: reset-controller@18 {
+	compatible = "aspeed,ast2500-lpc-reset";
+	reg = <0x18 0x4>;
+	#reset-cells = <1>;
+};
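A consumer then references the controller with a single reset specifier cell,
roughly like this (the UART node, its compatible and the reset index are
illustrative only):

uart2: serial@1e78d000 {
	compatible = "ns16550a";	/* illustrative */
	reg = <0x1e78d000 0x20>;
	resets = <&lpc_reset 4>;	/* illustrative reset index */
};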
diff --git a/Documentation/devicetree/bindings/arm/ccn.txt b/Documentation/devicetree/bindings/perf/arm-ccn.txt
index 43b5a71a5a9d..43b5a71a5a9d 100644
--- a/Documentation/devicetree/bindings/arm/ccn.txt
+++ b/Documentation/devicetree/bindings/perf/arm-ccn.txt
diff --git a/Documentation/devicetree/bindings/reset/st,stm32mp1-rcc.txt b/Documentation/devicetree/bindings/reset/st,stm32mp1-rcc.txt
new file mode 100644
index 000000000000..b4edaf7c7ff3
--- /dev/null
+++ b/Documentation/devicetree/bindings/reset/st,stm32mp1-rcc.txt
@@ -0,0 +1,6 @@
STMicroelectronics STM32MP1 Peripheral Reset Controller
=======================================================

The RCC IP is both a reset and a clock controller.

Please see Documentation/devicetree/bindings/clock/st,stm32mp1-rcc.txt
diff --git a/Documentation/devicetree/bindings/soc/mediatek/scpsys.txt b/Documentation/devicetree/bindings/soc/mediatek/scpsys.txt
index 76bf45b893fa..d6fe16f094af 100644
--- a/Documentation/devicetree/bindings/soc/mediatek/scpsys.txt
+++ b/Documentation/devicetree/bindings/soc/mediatek/scpsys.txt
@@ -21,6 +21,8 @@ Required properties:
 	- "mediatek,mt2712-scpsys"
 	- "mediatek,mt6797-scpsys"
 	- "mediatek,mt7622-scpsys"
+	- "mediatek,mt7623-scpsys", "mediatek,mt2701-scpsys": For MT7623 SoC
+	- "mediatek,mt7623a-scpsys": For MT7623A SoC
 	- "mediatek,mt8173-scpsys"
 - #power-domain-cells: Must be 1
 - reg: Address range of the SCPSYS unit
@@ -28,10 +30,11 @@ Required properties:
 - clock, clock-names: clocks according to the common clock binding.
 	These are clocks which hardware needs to be
 	enabled before enabling certain power domains.
-	Required clocks for MT2701: "mm", "mfg", "ethif"
+	Required clocks for MT2701 or MT7623: "mm", "mfg", "ethif"
 	Required clocks for MT2712: "mm", "mfg", "venc", "jpgdec", "audio", "vdec"
 	Required clocks for MT6797: "mm", "mfg", "vdec"
 	Required clocks for MT7622: "hif_sel"
+	Required clocks for MT7623A: "ethif"
 	Required clocks for MT8173: "mm", "mfg", "venc", "venc_lt"
 
 Optional properties:
diff --git a/Documentation/arm/CCN.txt b/Documentation/perf/arm-ccn.txt
index 15cdb7bc57c3..15cdb7bc57c3 100644
--- a/Documentation/arm/CCN.txt
+++ b/Documentation/perf/arm-ccn.txt
diff --git a/MAINTAINERS b/MAINTAINERS
index 2f983928d74c..c6973001bf3c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13491,15 +13491,16 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd.git
 S:	Supported
 F:	drivers/mfd/syscon.c
 
-SYSTEM CONTROL & POWER INTERFACE (SCPI) Message Protocol drivers
+SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
 M:	Sudeep Holla <sudeep.holla@arm.com>
 L:	linux-arm-kernel@lists.infradead.org
 S:	Maintained
-F:	Documentation/devicetree/bindings/arm/arm,scpi.txt
-F:	drivers/clk/clk-scpi.c
-F:	drivers/cpufreq/scpi-cpufreq.c
+F:	Documentation/devicetree/bindings/arm/arm,sc[mp]i.txt
+F:	drivers/clk/clk-sc[mp]i.c
+F:	drivers/cpufreq/sc[mp]i-cpufreq.c
 F:	drivers/firmware/arm_scpi.c
-F:	include/linux/scpi_protocol.h
+F:	drivers/firmware/arm_scmi/
+F:	include/linux/sc[mp]i_protocol.h
 
 SYSTEM RESET/SHUTDOWN DRIVERS
 M:	Sebastian Reichel <sre@kernel.org>
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index 769599bc1bab..ff70850031c5 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -8,25 +8,10 @@ menu "Bus devices"
 config ARM_CCI
 	bool
 
-config ARM_CCI_PMU
-	bool
-	select ARM_CCI
-
 config ARM_CCI400_COMMON
 	bool
 	select ARM_CCI
 
-config ARM_CCI400_PMU
-	bool "ARM CCI400 PMU support"
-	depends on (ARM && CPU_V7) || ARM64
-	depends on PERF_EVENTS
-	select ARM_CCI400_COMMON
-	select ARM_CCI_PMU
-	help
-	  Support for PMU events monitoring on the ARM CCI-400 (cache coherent
-	  interconnect). CCI-400 supports counting events related to the
-	  connected slave/master interfaces.
-
 config ARM_CCI400_PORT_CTRL
 	bool
 	depends on ARM && OF && CPU_V7
@@ -35,27 +20,6 @@ config ARM_CCI400_PORT_CTRL
 	  Low level power management driver for CCI400 cache coherent
 	  interconnect for ARM platforms.
 
-config ARM_CCI5xx_PMU
-	bool "ARM CCI-500/CCI-550 PMU support"
-	depends on (ARM && CPU_V7) || ARM64
-	depends on PERF_EVENTS
-	select ARM_CCI_PMU
-	help
-	  Support for PMU events monitoring on the ARM CCI-500/CCI-550 cache
-	  coherent interconnects. Both of them provide 8 independent event counters,
-	  which can count events pertaining to the slave/master interfaces as well
-	  as the internal events to the CCI.
-
-	  If unsure, say Y
-
-config ARM_CCN
-	tristate "ARM CCN driver support"
-	depends on ARM || ARM64
-	depends on PERF_EVENTS
-	help
-	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
-	  interconnect.
-
 config BRCMSTB_GISB_ARB
 	bool "Broadcom STB GISB bus arbiter"
 	depends on ARM || ARM64 || MIPS
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index b666c49f249e..3d473b8adeac 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -5,8 +5,6 @@
 
 # Interconnect bus drivers for ARM platforms
 obj-$(CONFIG_ARM_CCI)		+= arm-cci.o
-obj-$(CONFIG_ARM_CCN)		+= arm-ccn.o
-
 obj-$(CONFIG_BRCMSTB_GISB_ARB)	+= brcmstb_gisb.o
 
 # DPAA2 fsl-mc bus
diff --git a/drivers/bus/arm-cci.c b/drivers/bus/arm-cci.c
index c4c0c8560cce..443e4c3fd357 100644
--- a/drivers/bus/arm-cci.c
+++ b/drivers/bus/arm-cci.c
@@ -16,21 +16,17 @@
 
 #include <linux/arm-cci.h>
 #include <linux/io.h>
-#include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
-#include <linux/of_irq.h>
 #include <linux/of_platform.h>
-#include <linux/perf_event.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
-#include <linux/spinlock.h>
 
 #include <asm/cacheflush.h>
 #include <asm/smp_plat.h>
 
-static void __iomem *cci_ctrl_base;
-static unsigned long cci_ctrl_phys;
+static void __iomem *cci_ctrl_base __ro_after_init;
+static unsigned long cci_ctrl_phys __ro_after_init;
 
 #ifdef CONFIG_ARM_CCI400_PORT_CTRL
 struct cci_nb_ports {
@@ -59,1733 +55,26 @@ static const struct of_device_id arm_cci_matches[] = {
 	{},
 };
 
+static const struct of_dev_auxdata arm_cci_auxdata[] = {
+	OF_DEV_AUXDATA("arm,cci-400-pmu", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-400-pmu,r0", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-400-pmu,r1", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-500-pmu,r0", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-550-pmu,r0", 0, NULL, &cci_ctrl_base),
+	{}
-#ifdef CONFIG_ARM_CCI_PMU
-
-#define DRIVER_NAME		"ARM-CCI"
-#define DRIVER_NAME_PMU		DRIVER_NAME " PMU"
-
-#define CCI_PMCR		0x0100
-#define CCI_PID2		0x0fe8
-
-#define CCI_PMCR_CEN		0x00000001
-#define CCI_PMCR_NCNT_MASK	0x0000f800
-#define CCI_PMCR_NCNT_SHIFT	11
-
-#define CCI_PID2_REV_MASK	0xf0
-#define CCI_PID2_REV_SHIFT	4
-
-#define CCI_PMU_EVT_SEL		0x000
-#define CCI_PMU_CNTR		0x004
-#define CCI_PMU_CNTR_CTRL	0x008
-#define CCI_PMU_OVRFLW		0x00c
-
-#define CCI_PMU_OVRFLW_FLAG	1
-
-#define CCI_PMU_CNTR_SIZE(model)	((model)->cntr_size)
-#define CCI_PMU_CNTR_BASE(model, idx)	((idx) * CCI_PMU_CNTR_SIZE(model))
-#define CCI_PMU_CNTR_MASK		((1ULL << 32) -1)
-#define CCI_PMU_CNTR_LAST(cci_pmu)	(cci_pmu->num_cntrs - 1)
-
-#define CCI_PMU_MAX_HW_CNTRS(model) \
-	((model)->num_hw_cntrs + (model)->fixed_hw_cntrs)
-
-/* Types of interfaces that can generate events */
-enum {
-	CCI_IF_SLAVE,
-	CCI_IF_MASTER,
-#ifdef CONFIG_ARM_CCI5xx_PMU
-	CCI_IF_GLOBAL,
-#endif
-	CCI_IF_MAX,
-};
-
-struct event_range {
-	u32 min;
-	u32 max;
-};
-
-struct cci_pmu_hw_events {
-	struct perf_event **events;
-	unsigned long *used_mask;
-	raw_spinlock_t pmu_lock;
-};
-
-struct cci_pmu;
-/*
- * struct cci_pmu_model:
- * @fixed_hw_cntrs - Number of fixed event counters
- * @num_hw_cntrs - Maximum number of programmable event counters
- * @cntr_size - Size of an event counter mapping
- */
-struct cci_pmu_model {
-	char *name;
-	u32 fixed_hw_cntrs;
-	u32 num_hw_cntrs;
-	u32 cntr_size;
-	struct attribute **format_attrs;
-	struct attribute **event_attrs;
-	struct event_range event_ranges[CCI_IF_MAX];
-	int (*validate_hw_event)(struct cci_pmu *, unsigned long);
-	int (*get_event_idx)(struct cci_pmu *, struct cci_pmu_hw_events *, unsigned long);
-	void (*write_counters)(struct cci_pmu *, unsigned long *);
-};
-
-static struct cci_pmu_model cci_pmu_models[];
-
-struct cci_pmu {
-	void __iomem *base;
-	struct pmu pmu;
-	int nr_irqs;
-	int *irqs;
-	unsigned long active_irqs;
-	const struct cci_pmu_model *model;
-	struct cci_pmu_hw_events hw_events;
-	struct platform_device *plat_device;
-	int num_cntrs;
-	atomic_t active_events;
-	struct mutex reserve_mutex;
-	struct hlist_node node;
-	cpumask_t cpus;
-};
-
-#define to_cci_pmu(c)	(container_of(c, struct cci_pmu, pmu))
-
-enum cci_models {
-#ifdef CONFIG_ARM_CCI400_PMU
-	CCI400_R0,
-	CCI400_R1,
-#endif
-#ifdef CONFIG_ARM_CCI5xx_PMU
-	CCI500_R0,
-	CCI550_R0,
-#endif
-	CCI_MODEL_MAX
-};
-
-static void pmu_write_counters(struct cci_pmu *cci_pmu,
-				unsigned long *mask);
-static ssize_t cci_pmu_format_show(struct device *dev,
-			struct device_attribute *attr, char *buf);
-static ssize_t cci_pmu_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf);
-
-#define CCI_EXT_ATTR_ENTRY(_name, _func, _config)			\
-	&((struct dev_ext_attribute[]) {				\
-		{ __ATTR(_name, S_IRUGO, _func, NULL), (void *)_config }\
-	})[0].attr.attr
-
-#define CCI_FORMAT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci_pmu_format_show, (char *)_config)
-#define CCI_EVENT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci_pmu_event_show, (unsigned long)_config)
-
-/* CCI400 PMU Specific definitions */
-
-#ifdef CONFIG_ARM_CCI400_PMU
-
-/* Port ids */
-#define CCI400_PORT_S0		0
-#define CCI400_PORT_S1		1
-#define CCI400_PORT_S2		2
-#define CCI400_PORT_S3		3
-#define CCI400_PORT_S4		4
-#define CCI400_PORT_M0		5
-#define CCI400_PORT_M1		6
-#define CCI400_PORT_M2		7
-
-#define CCI400_R1_PX		5
-
-/*
- * Instead of an event id to monitor CCI cycles, a dedicated counter is
- * provided. Use 0xff to represent CCI cycles and hope that no future revisions
- * make use of this event in hardware.
- */
-enum cci400_perf_events {
-	CCI400_PMU_CYCLES = 0xff
-};
-
-#define CCI400_PMU_CYCLE_CNTR_IDX	0
-#define CCI400_PMU_CNTR0_IDX		1
-
-/*
- * CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8
- * ports and bits 4:0 are event codes. There are different event codes
- * associated with each port type.
- *
- * Additionally, the range of events associated with the port types changed
- * between Rev0 and Rev1.
- *
- * The constants below define the range of valid codes for each port type for
- * the different revisions and are used to validate the event to be monitored.
- */
-
-#define CCI400_PMU_EVENT_MASK		0xffUL
-#define CCI400_PMU_EVENT_SOURCE_SHIFT	5
-#define CCI400_PMU_EVENT_SOURCE_MASK	0x7
-#define CCI400_PMU_EVENT_CODE_SHIFT	0
-#define CCI400_PMU_EVENT_CODE_MASK	0x1f
-#define CCI400_PMU_EVENT_SOURCE(event) \
-	((event >> CCI400_PMU_EVENT_SOURCE_SHIFT) & \
-			CCI400_PMU_EVENT_SOURCE_MASK)
-#define CCI400_PMU_EVENT_CODE(event) \
-	((event >> CCI400_PMU_EVENT_CODE_SHIFT) & CCI400_PMU_EVENT_CODE_MASK)
-
-#define CCI400_R0_SLAVE_PORT_MIN_EV	0x00
-#define CCI400_R0_SLAVE_PORT_MAX_EV	0x13
-#define CCI400_R0_MASTER_PORT_MIN_EV	0x14
-#define CCI400_R0_MASTER_PORT_MAX_EV	0x1a
-
-#define CCI400_R1_SLAVE_PORT_MIN_EV	0x00
-#define CCI400_R1_SLAVE_PORT_MAX_EV	0x14
-#define CCI400_R1_MASTER_PORT_MIN_EV	0x00
-#define CCI400_R1_MASTER_PORT_MAX_EV	0x11
-
-#define CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci400_pmu_cycle_event_show, \
-					(unsigned long)_config)
-
-static ssize_t cci400_pmu_cycle_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf);
-
-static struct attribute *cci400_pmu_format_attrs[] = {
-	CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"),
-	CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-7"),
-	NULL
-};
-
-static struct attribute *cci400_r0_pmu_event_attrs[] = {
-	/* Slave events */
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13),
-	/* Master events */
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x14),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_addr_hazard, 0x15),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_id_hazard, 0x16),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_tt_full, 0x17),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x18),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x19),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_tt_full, 0x1A),
-	/* Special event for cycles counter */
-	CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff),
-	NULL
-};
-
-static struct attribute *cci400_r1_pmu_event_attrs[] = {
-	/* Slave events */
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_slave_id_hazard, 0x14),
-	/* Master events */
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_stall_cycle_addr_hazard, 0x1),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_master_id_hazard, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_hi_prio_rtq_full, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_wtq_full, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_low_prio_rtq_full, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_mid_prio_rtq_full, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn0, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn1, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn2, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn3, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn0, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn1, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn2, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn3, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_unique_or_line_unique_addr_hazard, 0x11),
-	/* Special event for cycles counter */
-	CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff),
-	NULL
-};
-
-static ssize_t cci400_pmu_cycle_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-				struct dev_ext_attribute, attr);
-	return snprintf(buf, PAGE_SIZE, "config=0x%lx\n", (unsigned long)eattr->var);
-}
-
-static int cci400_get_event_idx(struct cci_pmu *cci_pmu,
-				struct cci_pmu_hw_events *hw,
-				unsigned long cci_event)
-{
-	int idx;
-
-	/* cycles event idx is fixed */
-	if (cci_event == CCI400_PMU_CYCLES) {
-		if (test_and_set_bit(CCI400_PMU_CYCLE_CNTR_IDX, hw->used_mask))
-			return -EAGAIN;
-
-		return CCI400_PMU_CYCLE_CNTR_IDX;
-	}
-
-	for (idx = CCI400_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx)
-		if (!test_and_set_bit(idx, hw->used_mask))
-			return idx;
-
-	/* No counters available */
-	return -EAGAIN;
-}
-
-static int cci400_validate_hw_event(struct cci_pmu *cci_pmu, unsigned long hw_event)
-{
-	u8 ev_source = CCI400_PMU_EVENT_SOURCE(hw_event);
-	u8 ev_code = CCI400_PMU_EVENT_CODE(hw_event);
-	int if_type;
-
-	if (hw_event & ~CCI400_PMU_EVENT_MASK)
-		return -ENOENT;
-
-	if (hw_event == CCI400_PMU_CYCLES)
-		return hw_event;
-
-	switch (ev_source) {
-	case CCI400_PORT_S0:
-	case CCI400_PORT_S1:
-	case CCI400_PORT_S2:
-	case CCI400_PORT_S3:
-	case CCI400_PORT_S4:
-		/* Slave Interface */
-		if_type = CCI_IF_SLAVE;
-		break;
-	case CCI400_PORT_M0:
-	case CCI400_PORT_M1:
-	case CCI400_PORT_M2:
-		/* Master Interface */
-		if_type = CCI_IF_MASTER;
-		break;
-	default:
-		return -ENOENT;
-	}
-
-	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
-		ev_code <= cci_pmu->model->event_ranges[if_type].max)
-		return hw_event;
-
-	return -ENOENT;
-}
-
-static int probe_cci400_revision(void)
-{
-	int rev;
-	rev = readl_relaxed(cci_ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK;
-	rev >>= CCI_PID2_REV_SHIFT;
-
-	if (rev < CCI400_R1_PX)
-		return CCI400_R0;
-	else
-		return CCI400_R1;
-}
-
-static const struct cci_pmu_model *probe_cci_model(struct platform_device *pdev)
-{
-	if (platform_has_secure_cci_access())
-		return &cci_pmu_models[probe_cci400_revision()];
-	return NULL;
-}
-#else	/* !CONFIG_ARM_CCI400_PMU */
-static inline struct cci_pmu_model *probe_cci_model(struct platform_device *pdev)
-{
-	return NULL;
-}
-#endif	/* CONFIG_ARM_CCI400_PMU */
-
-#ifdef CONFIG_ARM_CCI5xx_PMU
-
-/*
- * CCI5xx PMU event id is an 9-bit value made of two parts.
- *	 bits [8:5] - Source for the event
- *	 bits [4:0] - Event code (specific to type of interface)
- *
- *
- */
-
-/* Port ids */
-#define CCI5xx_PORT_S0			0x0
-#define CCI5xx_PORT_S1			0x1
-#define CCI5xx_PORT_S2			0x2
-#define CCI5xx_PORT_S3			0x3
-#define CCI5xx_PORT_S4			0x4
-#define CCI5xx_PORT_S5			0x5
-#define CCI5xx_PORT_S6			0x6
-
-#define CCI5xx_PORT_M0			0x8
-#define CCI5xx_PORT_M1			0x9
-#define CCI5xx_PORT_M2			0xa
-#define CCI5xx_PORT_M3			0xb
-#define CCI5xx_PORT_M4			0xc
-#define CCI5xx_PORT_M5			0xd
-#define CCI5xx_PORT_M6			0xe
-
-#define CCI5xx_PORT_GLOBAL		0xf
-
-#define CCI5xx_PMU_EVENT_MASK		0x1ffUL
-#define CCI5xx_PMU_EVENT_SOURCE_SHIFT	0x5
-#define CCI5xx_PMU_EVENT_SOURCE_MASK	0xf
-#define CCI5xx_PMU_EVENT_CODE_SHIFT	0x0
-#define CCI5xx_PMU_EVENT_CODE_MASK	0x1f
-
-#define CCI5xx_PMU_EVENT_SOURCE(event) \
-	((event >> CCI5xx_PMU_EVENT_SOURCE_SHIFT) & CCI5xx_PMU_EVENT_SOURCE_MASK)
-#define CCI5xx_PMU_EVENT_CODE(event) \
-	((event >> CCI5xx_PMU_EVENT_CODE_SHIFT) & CCI5xx_PMU_EVENT_CODE_MASK)
-
-#define CCI5xx_SLAVE_PORT_MIN_EV	0x00
-#define CCI5xx_SLAVE_PORT_MAX_EV	0x1f
-#define CCI5xx_MASTER_PORT_MIN_EV	0x00
-#define CCI5xx_MASTER_PORT_MAX_EV	0x06
-#define CCI5xx_GLOBAL_PORT_MIN_EV	0x00
-#define CCI5xx_GLOBAL_PORT_MAX_EV	0x0f
-
-
-#define CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci5xx_pmu_global_event_show, \
-					(unsigned long) _config)
-
-static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
-				struct device_attribute *attr, char *buf);
-
-static struct attribute *cci5xx_pmu_format_attrs[] = {
-	CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"),
-	CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-8"),
-	NULL,
-};
-
-static struct attribute *cci5xx_pmu_event_attrs[] = {
-	/* Slave events */
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_arvalid, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_dev, 0x1),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_nonshareable, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_non_alloc, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_alloc, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_invalidate, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maint, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rval, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rlast_snoop, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_awalid, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_dev, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_non_shareable, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wb, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wlu, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wunique, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_evict, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_wrevict, 0x11),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_beat, 0x12),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_acvalid, 0x13),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_read, 0x14),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_clean, 0x15),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_data_transfer_low, 0x16),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_arvalid, 0x17),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall, 0x18),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall, 0x19),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_stall, 0x1A),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_w_resp_stall, 0x1B),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_stall, 0x1C),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_s_data_stall, 0x1D),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rq_stall_ot_limit, 0x1E),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_stall_arbit, 0x1F),
-
-	/* Master events */
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_beat_any, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_beat_any, 0x1),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_stall, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_stall, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_resp_stall, 0x6),
-
-	/* Global events */
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_0_1, 0x0),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_2_3, 0x1),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_4_5, 0x2),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_6_7, 0x3),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_0_1, 0x4),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_2_3, 0x5),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_4_5, 0x6),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_6_7, 0x7),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_back_invalidation, 0x8),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_alloc_busy, 0x9),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_tt_full, 0xA),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF),
-	NULL
-};
-
-static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-					struct dev_ext_attribute, attr);
-	/* Global events have single fixed source code */
-	return snprintf(buf, PAGE_SIZE, "event=0x%lx,source=0x%x\n",
-				(unsigned long)eattr->var, CCI5xx_PORT_GLOBAL);
-}
-
-/*
- * CCI500 provides 8 independent event counters that can count
- * any of the events available.
- * CCI500 PMU event source ids
- *	0x0-0x6 - Slave interfaces
- *	0x8-0xD - Master interfaces
- *	0xf     - Global Events
- *	0x7,0xe - Reserved
- */
-static int cci500_validate_hw_event(struct cci_pmu *cci_pmu,
-					unsigned long hw_event)
-{
-	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
-	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
-	int if_type;
-
-	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
-		return -ENOENT;
-
-	switch (ev_source) {
-	case CCI5xx_PORT_S0:
-	case CCI5xx_PORT_S1:
-	case CCI5xx_PORT_S2:
-	case CCI5xx_PORT_S3:
-	case CCI5xx_PORT_S4:
-	case CCI5xx_PORT_S5:
-	case CCI5xx_PORT_S6:
-		if_type = CCI_IF_SLAVE;
-		break;
-	case CCI5xx_PORT_M0:
-	case CCI5xx_PORT_M1:
-	case CCI5xx_PORT_M2:
-	case CCI5xx_PORT_M3:
-	case CCI5xx_PORT_M4:
-	case CCI5xx_PORT_M5:
-		if_type = CCI_IF_MASTER;
-		break;
-	case CCI5xx_PORT_GLOBAL:
-		if_type = CCI_IF_GLOBAL;
-		break;
-	default:
-		return -ENOENT;
-	}
-
-	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
-		ev_code <= cci_pmu->model->event_ranges[if_type].max)
-		return hw_event;
-
-	return -ENOENT;
-}
-
-/*
- * CCI550 provides 8 independent event counters that can count
- * any of the events available.
- * CCI550 PMU event source ids
- *	0x0-0x6 - Slave interfaces
- *	0x8-0xe - Master interfaces
- *	0xf     - Global Events
- *	0x7     - Reserved
- */
-static int cci550_validate_hw_event(struct cci_pmu *cci_pmu,
-					unsigned long hw_event)
-{
-	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
-	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
-	int if_type;
-
-	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
-		return -ENOENT;
-
-	switch (ev_source) {
-	case CCI5xx_PORT_S0:
-	case CCI5xx_PORT_S1:
-	case CCI5xx_PORT_S2:
-	case CCI5xx_PORT_S3:
-	case CCI5xx_PORT_S4:
-	case CCI5xx_PORT_S5:
-	case CCI5xx_PORT_S6:
-		if_type = CCI_IF_SLAVE;
-		break;
-	case CCI5xx_PORT_M0:
-	case CCI5xx_PORT_M1:
-	case CCI5xx_PORT_M2:
-	case CCI5xx_PORT_M3:
-	case CCI5xx_PORT_M4:
-	case CCI5xx_PORT_M5:
-	case CCI5xx_PORT_M6:
-		if_type = CCI_IF_MASTER;
-		break;
-	case CCI5xx_PORT_GLOBAL:
-		if_type = CCI_IF_GLOBAL;
-		break;
-	default:
-		return -ENOENT;
-	}
-
-	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
-		ev_code <= cci_pmu->model->event_ranges[if_type].max)
-		return hw_event;
-
-	return -ENOENT;
-}
-
-#endif	/* CONFIG_ARM_CCI5xx_PMU */
-
-/*
- * Program the CCI PMU counters which have PERF_HES_ARCH set
- * with the event period and mark them ready before we enable
- * PMU.
- */
-static void cci_pmu_sync_counters(struct cci_pmu *cci_pmu)
-{
-	int i;
-	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;
-
-	DECLARE_BITMAP(mask, cci_pmu->num_cntrs);
-
-	bitmap_zero(mask, cci_pmu->num_cntrs);
-	for_each_set_bit(i, cci_pmu->hw_events.used_mask, cci_pmu->num_cntrs) {
-		struct perf_event *event = cci_hw->events[i];
-
-		if (WARN_ON(!event))
-			continue;
-
-		/* Leave the events which are not counting */
-		if (event->hw.state & PERF_HES_STOPPED)
-			continue;
-		if (event->hw.state & PERF_HES_ARCH) {
-			set_bit(i, mask);
-			event->hw.state &= ~PERF_HES_ARCH;
-		}
-	}
-
-	pmu_write_counters(cci_pmu, mask);
-}
-
-/* Should be called with cci_pmu->hw_events->pmu_lock held */
-static void __cci_pmu_enable_nosync(struct cci_pmu *cci_pmu)
-{
-	u32 val;
-
-	/* Enable all the PMU counters. */
-	val = readl_relaxed(cci_ctrl_base + CCI_PMCR) | CCI_PMCR_CEN;
-	writel(val, cci_ctrl_base + CCI_PMCR);
-}
-
-/* Should be called with cci_pmu->hw_events->pmu_lock held */
-static void __cci_pmu_enable_sync(struct cci_pmu *cci_pmu)
-{
-	cci_pmu_sync_counters(cci_pmu);
-	__cci_pmu_enable_nosync(cci_pmu);
-}
-
-/* Should be called with cci_pmu->hw_events->pmu_lock held */
-static void __cci_pmu_disable(void)
-{
-	u32 val;
-
-	/* Disable all the PMU counters. */
-	val = readl_relaxed(cci_ctrl_base + CCI_PMCR) & ~CCI_PMCR_CEN;
-	writel(val, cci_ctrl_base + CCI_PMCR);
-}
-
-static ssize_t cci_pmu_format_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-				struct dev_ext_attribute, attr);
-	return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var);
-}
-
-static ssize_t cci_pmu_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-				struct dev_ext_attribute, attr);
-	/* source parameter is mandatory for normal PMU events */
-	return snprintf(buf, PAGE_SIZE, "source=?,event=0x%lx\n",
-				(unsigned long)eattr->var);
-}
-
-static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx)
-{
-	return 0 <= idx && idx <= CCI_PMU_CNTR_LAST(cci_pmu);
-}
-
-static u32 pmu_read_register(struct cci_pmu *cci_pmu, int idx, unsigned int offset)
-{
-	return readl_relaxed(cci_pmu->base +
-			     CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
-}
-
-static void pmu_write_register(struct cci_pmu *cci_pmu, u32 value,
-			       int idx, unsigned int offset)
-{
-	writel_relaxed(value, cci_pmu->base +
-		       CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
-}
-
-static void pmu_disable_counter(struct cci_pmu *cci_pmu, int idx)
-{
-	pmu_write_register(cci_pmu, 0, idx, CCI_PMU_CNTR_CTRL);
-}
-
-static void pmu_enable_counter(struct cci_pmu *cci_pmu, int idx)
-{
-	pmu_write_register(cci_pmu, 1, idx, CCI_PMU_CNTR_CTRL);
-}
-
-static bool __maybe_unused
-pmu_counter_is_enabled(struct cci_pmu *cci_pmu, int idx)
-{
-	return (pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR_CTRL) & 0x1) != 0;
-}
-
-static void pmu_set_event(struct cci_pmu *cci_pmu, int idx, unsigned long event)
-{
-	pmu_write_register(cci_pmu, event, idx, CCI_PMU_EVT_SEL);
-}
-
-/*
- * For all counters on the CCI-PMU, disable any 'enabled' counters,
- * saving the changed counters in the mask, so that we can restore
- * it later using pmu_restore_counters. The mask is private to the
- * caller. We cannot rely on the used_mask maintained by the CCI_PMU
- * as it only tells us if the counter is assigned to perf_event or not.
- * The state of the perf_event cannot be locked by the PMU layer, hence
- * we check the individual counter status (which can be locked by
- * cci_pm->hw_events->pmu_lock).
- *
- * @mask should be initialised to empty by the caller.
- */
-static void __maybe_unused
-pmu_save_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
-{
-	int i;
-
-	for (i = 0; i < cci_pmu->num_cntrs; i++) {
-		if (pmu_counter_is_enabled(cci_pmu, i)) {
-			set_bit(i, mask);
-			pmu_disable_counter(cci_pmu, i);
-		}
-	}
-}
-
-/*
- * Restore the status of the counters. Reversal of the pmu_save_counters().
- * For each counter set in the mask, enable the counter back.
- */
-static void __maybe_unused
-pmu_restore_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
-{
-	int i;
-
-	for_each_set_bit(i, mask, cci_pmu->num_cntrs)
-		pmu_enable_counter(cci_pmu, i);
-}
-
-/*
- * Returns the number of programmable counters actually implemented
- * by the cci
- */
-static u32 pmu_get_max_counters(void)
-{
-	return (readl_relaxed(cci_ctrl_base + CCI_PMCR) &
-		CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT;
-}
-
-static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *event)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	unsigned long cci_event = event->hw.config_base;
-	int idx;
-
-	if (cci_pmu->model->get_event_idx)
-		return cci_pmu->model->get_event_idx(cci_pmu, hw, cci_event);
-
-	/* Generic code to find an unused idx from the mask */
-	for(idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++)
-		if (!test_and_set_bit(idx, hw->used_mask))
-			return idx;
-
-	/* No counters available */
-	return -EAGAIN;
-}
-
-static int pmu_map_event(struct perf_event *event)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-
-	if (event->attr.type < PERF_TYPE_MAX ||
-			!cci_pmu->model->validate_hw_event)
-		return -ENOENT;
-
-	return cci_pmu->model->validate_hw_event(cci_pmu, event->attr.config);
-}
-
863 | static int pmu_request_irq(struct cci_pmu *cci_pmu, irq_handler_t handler) | ||
864 | { | ||
865 | int i; | ||
866 | struct platform_device *pmu_device = cci_pmu->plat_device; | ||
867 | |||
868 | if (unlikely(!pmu_device)) | ||
869 | return -ENODEV; | ||
870 | |||
871 | if (cci_pmu->nr_irqs < 1) { | ||
872 | dev_err(&pmu_device->dev, "no irqs for CCI PMUs defined\n"); | ||
873 | return -ENODEV; | ||
874 | } | ||
875 | |||
876 | /* | ||
877 | * Register all available CCI PMU interrupts. In the interrupt handler | ||
878 | * we iterate over the counters checking for interrupt source (the | ||
879 | * overflowing counter) and clear it. | ||
880 | * | ||
881 | * This should allow handling of non-unique interrupt for the counters. | ||
882 | */ | ||
883 | for (i = 0; i < cci_pmu->nr_irqs; i++) { | ||
884 | int err = request_irq(cci_pmu->irqs[i], handler, IRQF_SHARED, | ||
885 | "arm-cci-pmu", cci_pmu); | ||
886 | if (err) { | ||
887 | dev_err(&pmu_device->dev, "unable to request IRQ%d for ARM CCI PMU counters\n", | ||
888 | cci_pmu->irqs[i]); | ||
889 | return err; | ||
890 | } | ||
891 | |||
892 | set_bit(i, &cci_pmu->active_irqs); | ||
893 | } | ||
894 | |||
895 | return 0; | ||
896 | } | ||
897 | |||
898 | static void pmu_free_irq(struct cci_pmu *cci_pmu) | ||
899 | { | ||
900 | int i; | ||
901 | |||
902 | for (i = 0; i < cci_pmu->nr_irqs; i++) { | ||
903 | if (!test_and_clear_bit(i, &cci_pmu->active_irqs)) | ||
904 | continue; | ||
905 | |||
906 | free_irq(cci_pmu->irqs[i], cci_pmu); | ||
907 | } | ||
908 | } | ||
909 | |||
910 | static u32 pmu_read_counter(struct perf_event *event) | ||
911 | { | ||
912 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
913 | struct hw_perf_event *hw_counter = &event->hw; | ||
914 | int idx = hw_counter->idx; | ||
915 | u32 value; | ||
916 | |||
917 | if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) { | ||
918 | dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); | ||
919 | return 0; | ||
920 | } | ||
921 | value = pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR); | ||
922 | |||
923 | return value; | ||
924 | } | ||
925 | |||
926 | static void pmu_write_counter(struct cci_pmu *cci_pmu, u32 value, int idx) | ||
927 | { | ||
928 | pmu_write_register(cci_pmu, value, idx, CCI_PMU_CNTR); | ||
929 | } | ||
930 | |||
931 | static void __pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
932 | { | ||
933 | int i; | ||
934 | struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events; | ||
935 | |||
936 | for_each_set_bit(i, mask, cci_pmu->num_cntrs) { | ||
937 | struct perf_event *event = cci_hw->events[i]; | ||
938 | |||
939 | if (WARN_ON(!event)) | ||
940 | continue; | ||
941 | pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i); | ||
942 | } | ||
943 | } | ||
944 | |||
945 | static void pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
946 | { | ||
947 | if (cci_pmu->model->write_counters) | ||
948 | cci_pmu->model->write_counters(cci_pmu, mask); | ||
949 | else | ||
950 | __pmu_write_counters(cci_pmu, mask); | ||
951 | } | ||
952 | |||
953 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
954 | |||
955 | /* | ||
956 | * CCI-500/CCI-550 have advanced power-saving policies that can gate the | ||
957 | * clocks to the PMU counters, making writes to them ineffective. | ||
958 | * The only way to write to those counters is when the global counters | ||
959 | * are enabled and the particular counter is enabled. | ||
960 | * | ||
961 | * So we do the following: | ||
962 | * | ||
963 | * 1) Disable all the PMU counters, saving their current state | ||
964 | * 2) Enable the global PMU profiling, now that all counters are | ||
965 | * disabled. | ||
966 | * | ||
967 | * For each counter to be programmed, repeat steps 3-7: | ||
968 | * | ||
969 | * 3) Write an invalid event code to the event control register for the | ||
970 | * counter, so that the counters are not modified. | ||
971 | * 4) Enable the counter control for the counter. | ||
972 | * 5) Set the counter value | ||
973 | * 6) Disable the counter | ||
974 | * 7) Restore the event in the target counter | ||
975 | * | ||
976 | * 8) Disable the global PMU. | ||
977 | * 9) Restore the status of the rest of the counters. | ||
978 | * | ||
979 | * We choose an event which for CCI-5xx is guaranteed not to count. | ||
980 | * We use the highest possible event code (0x1f) for the master interface 0. | ||
981 | */ | ||
982 | #define CCI5xx_INVALID_EVENT ((CCI5xx_PORT_M0 << CCI5xx_PMU_EVENT_SOURCE_SHIFT) | \ | ||
983 | (CCI5xx_PMU_EVENT_CODE_MASK << CCI5xx_PMU_EVENT_CODE_SHIFT)) | ||
984 | static void cci5xx_pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
985 | { | ||
986 | int i; | ||
987 | DECLARE_BITMAP(saved_mask, cci_pmu->num_cntrs); | ||
988 | |||
989 | bitmap_zero(saved_mask, cci_pmu->num_cntrs); | ||
990 | pmu_save_counters(cci_pmu, saved_mask); | ||
991 | |||
992 | /* | ||
993 | * Now that all the counters are disabled, we can safely turn the PMU on, | ||
994 | * without syncing the status of the counters | ||
995 | */ | ||
996 | __cci_pmu_enable_nosync(cci_pmu); | ||
997 | |||
998 | for_each_set_bit(i, mask, cci_pmu->num_cntrs) { | ||
999 | struct perf_event *event = cci_pmu->hw_events.events[i]; | ||
1000 | |||
1001 | if (WARN_ON(!event)) | ||
1002 | continue; | ||
1003 | |||
1004 | pmu_set_event(cci_pmu, i, CCI5xx_INVALID_EVENT); | ||
1005 | pmu_enable_counter(cci_pmu, i); | ||
1006 | pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i); | ||
1007 | pmu_disable_counter(cci_pmu, i); | ||
1008 | pmu_set_event(cci_pmu, i, event->hw.config_base); | ||
1009 | } | ||
1010 | |||
1011 | __cci_pmu_disable(); | ||
1012 | |||
1013 | pmu_restore_counters(cci_pmu, saved_mask); | ||
1014 | } | ||
1015 | |||
1016 | #endif /* CONFIG_ARM_CCI5xx_PMU */ | ||
1017 | |||
1018 | static u64 pmu_event_update(struct perf_event *event) | ||
1019 | { | ||
1020 | struct hw_perf_event *hwc = &event->hw; | ||
1021 | u64 delta, prev_raw_count, new_raw_count; | ||
1022 | |||
1023 | do { | ||
1024 | prev_raw_count = local64_read(&hwc->prev_count); | ||
1025 | new_raw_count = pmu_read_counter(event); | ||
1026 | } while (local64_cmpxchg(&hwc->prev_count, prev_raw_count, | ||
1027 | new_raw_count) != prev_raw_count); | ||
1028 | |||
1029 | delta = (new_raw_count - prev_raw_count) & CCI_PMU_CNTR_MASK; | ||
1030 | |||
1031 | local64_add(delta, &event->count); | ||
1032 | |||
1033 | return new_raw_count; | ||
1034 | } | ||
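
As a quick check of the masked subtraction above, a standalone userspace sketch with hypothetical values shows the delta staying correct across a 32-bit counter wrap:

#include <stdint.h>
#include <stdio.h>

#define CNTR_MASK 0xffffffffULL		/* assumed 32-bit counter width */

int main(void)
{
	uint64_t prev = 0xfffffff0;	/* snapshot just before the wrap */
	uint64_t new_raw = 0x00000010;	/* value read after the wrap */
	uint64_t delta = (new_raw - prev) & CNTR_MASK;

	printf("delta = %llu\n", (unsigned long long)delta);	/* prints 32 */
	return 0;
}

The same arithmetic is why pmu_event_set_period() below only has to start counters at half their range: as long as the overflow interrupt is serviced before another 2^31 events occur, new_raw_count cannot overtake prev_count.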
1035 | |||
1036 | static void pmu_read(struct perf_event *event) | ||
1037 | { | ||
1038 | pmu_event_update(event); | ||
1039 | } | ||
1040 | |||
1041 | static void pmu_event_set_period(struct perf_event *event) | ||
1042 | { | ||
1043 | struct hw_perf_event *hwc = &event->hw; | ||
1044 | /* | ||
1045 | * The CCI PMU counters have a period of 2^32. To account for the | ||
1046 | * possibility of extreme interrupt latency we program for a period of | ||
1047 | * half that. Hopefully we can handle the interrupt before another 2^31 | ||
1048 | * events occur and the counter overtakes its previous value. | ||
1049 | */ | ||
1050 | u64 val = 1ULL << 31; | ||
1051 | local64_set(&hwc->prev_count, val); | ||
1052 | |||
1053 | /* | ||
1054 | * The CCI PMU uses PERF_HES_ARCH to keep track of the counters whose | ||
1055 | * values need to be synced with the software state before the PMU is | ||
1056 | * enabled. | ||
1057 | * Mark this counter for sync. | ||
1058 | */ | ||
1059 | hwc->state |= PERF_HES_ARCH; | ||
1060 | } | ||
1061 | |||
1062 | static irqreturn_t pmu_handle_irq(int irq_num, void *dev) | ||
1063 | { | ||
1064 | unsigned long flags; | ||
1065 | struct cci_pmu *cci_pmu = dev; | ||
1066 | struct cci_pmu_hw_events *events = &cci_pmu->hw_events; | ||
1067 | int idx, handled = IRQ_NONE; | ||
1068 | |||
1069 | raw_spin_lock_irqsave(&events->pmu_lock, flags); | ||
1070 | |||
1071 | /* Disable the PMU while we walk through the counters */ | ||
1072 | __cci_pmu_disable(); | ||
1073 | /* | ||
1074 | * Iterate over counters and update the corresponding perf events. | ||
1075 | * This should work regardless of whether we have per-counter overflow | ||
1076 | * interrupt or a combined overflow interrupt. | ||
1077 | */ | ||
1078 | for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) { | ||
1079 | struct perf_event *event = events->events[idx]; | ||
1080 | |||
1081 | if (!event) | ||
1082 | continue; | ||
1083 | |||
1084 | /* Did this counter overflow? */ | ||
1085 | if (!(pmu_read_register(cci_pmu, idx, CCI_PMU_OVRFLW) & | ||
1086 | CCI_PMU_OVRFLW_FLAG)) | ||
1087 | continue; | ||
1088 | |||
1089 | pmu_write_register(cci_pmu, CCI_PMU_OVRFLW_FLAG, idx, | ||
1090 | CCI_PMU_OVRFLW); | ||
1091 | |||
1092 | pmu_event_update(event); | ||
1093 | pmu_event_set_period(event); | ||
1094 | handled = IRQ_HANDLED; | ||
1095 | } | ||
1096 | |||
1097 | /* Enable the PMU and sync possibly overflowed counters */ | ||
1098 | __cci_pmu_enable_sync(cci_pmu); | ||
1099 | raw_spin_unlock_irqrestore(&events->pmu_lock, flags); | ||
1100 | |||
1101 | return IRQ_RETVAL(handled); | ||
1102 | } | ||
1103 | |||
1104 | static int cci_pmu_get_hw(struct cci_pmu *cci_pmu) | ||
1105 | { | ||
1106 | int ret = pmu_request_irq(cci_pmu, pmu_handle_irq); | ||
1107 | if (ret) { | ||
1108 | pmu_free_irq(cci_pmu); | ||
1109 | return ret; | ||
1110 | } | ||
1111 | return 0; | ||
1112 | } | ||
1113 | |||
1114 | static void cci_pmu_put_hw(struct cci_pmu *cci_pmu) | ||
1115 | { | ||
1116 | pmu_free_irq(cci_pmu); | ||
1117 | } | ||
1118 | |||
1119 | static void hw_perf_event_destroy(struct perf_event *event) | ||
1120 | { | ||
1121 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1122 | atomic_t *active_events = &cci_pmu->active_events; | ||
1123 | struct mutex *reserve_mutex = &cci_pmu->reserve_mutex; | ||
1124 | |||
1125 | if (atomic_dec_and_mutex_lock(active_events, reserve_mutex)) { | ||
1126 | cci_pmu_put_hw(cci_pmu); | ||
1127 | mutex_unlock(reserve_mutex); | ||
1128 | } | ||
1129 | } | ||
1130 | |||
1131 | static void cci_pmu_enable(struct pmu *pmu) | ||
1132 | { | ||
1133 | struct cci_pmu *cci_pmu = to_cci_pmu(pmu); | ||
1134 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1135 | int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_cntrs); | ||
1136 | unsigned long flags; | ||
1137 | |||
1138 | if (!enabled) | ||
1139 | return; | ||
1140 | |||
1141 | raw_spin_lock_irqsave(&hw_events->pmu_lock, flags); | ||
1142 | __cci_pmu_enable_sync(cci_pmu); | ||
1143 | raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); | ||
1144 | |||
1145 | } | ||
1146 | |||
1147 | static void cci_pmu_disable(struct pmu *pmu) | ||
1148 | { | ||
1149 | struct cci_pmu *cci_pmu = to_cci_pmu(pmu); | ||
1150 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1151 | unsigned long flags; | ||
1152 | |||
1153 | raw_spin_lock_irqsave(&hw_events->pmu_lock, flags); | ||
1154 | __cci_pmu_disable(); | ||
1155 | raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); | ||
1156 | } | ||
1157 | |||
1158 | /* | ||
1159 | * Check if the idx represents a non-programmable counter. | ||
1160 | * All the fixed event counters are mapped before the programmable | ||
1161 | * counters. | ||
1162 | */ | ||
1163 | static bool pmu_fixed_hw_idx(struct cci_pmu *cci_pmu, int idx) | ||
1164 | { | ||
1165 | return (idx >= 0) && (idx < cci_pmu->model->fixed_hw_cntrs); | ||
1166 | } | ||
1167 | |||
1168 | static void cci_pmu_start(struct perf_event *event, int pmu_flags) | ||
1169 | { | ||
1170 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1171 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1172 | struct hw_perf_event *hwc = &event->hw; | ||
1173 | int idx = hwc->idx; | ||
1174 | unsigned long flags; | ||
1175 | |||
1176 | /* | ||
1177 | * To handle interrupt latency, we always reprogram the period | ||
1178 | * regardless of PERF_EF_RELOAD. | ||
1179 | */ | ||
1180 | if (pmu_flags & PERF_EF_RELOAD) | ||
1181 | WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE)); | ||
1182 | |||
1183 | hwc->state = 0; | ||
1184 | |||
1185 | if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) { | ||
1186 | dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); | ||
1187 | return; | ||
1188 | } | ||
1189 | |||
1190 | raw_spin_lock_irqsave(&hw_events->pmu_lock, flags); | ||
1191 | |||
1192 | /* Configure the counter unless you are counting a fixed event */ | ||
1193 | if (!pmu_fixed_hw_idx(cci_pmu, idx)) | ||
1194 | pmu_set_event(cci_pmu, idx, hwc->config_base); | ||
1195 | |||
1196 | pmu_event_set_period(event); | ||
1197 | pmu_enable_counter(cci_pmu, idx); | ||
1198 | |||
1199 | raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); | ||
1200 | } | ||
1201 | |||
1202 | static void cci_pmu_stop(struct perf_event *event, int pmu_flags) | ||
1203 | { | ||
1204 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1205 | struct hw_perf_event *hwc = &event->hw; | ||
1206 | int idx = hwc->idx; | ||
1207 | |||
1208 | if (hwc->state & PERF_HES_STOPPED) | ||
1209 | return; | ||
1210 | |||
1211 | if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) { | ||
1212 | dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); | ||
1213 | return; | ||
1214 | } | ||
1215 | |||
1216 | /* | ||
1217 | * We always reprogram the counter, so ignore PERF_EF_UPDATE. See | ||
1218 | * cci_pmu_start() | ||
1219 | */ | ||
1220 | pmu_disable_counter(cci_pmu, idx); | ||
1221 | pmu_event_update(event); | ||
1222 | hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; | ||
1223 | } | ||
1224 | |||
1225 | static int cci_pmu_add(struct perf_event *event, int flags) | ||
1226 | { | ||
1227 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1228 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1229 | struct hw_perf_event *hwc = &event->hw; | ||
1230 | int idx; | ||
1231 | int err = 0; | ||
1232 | |||
1233 | perf_pmu_disable(event->pmu); | ||
1234 | |||
1235 | /* If we don't have space for the counter, finish early. */ | ||
1236 | idx = pmu_get_event_idx(hw_events, event); | ||
1237 | if (idx < 0) { | ||
1238 | err = idx; | ||
1239 | goto out; | ||
1240 | } | ||
1241 | |||
1242 | event->hw.idx = idx; | ||
1243 | hw_events->events[idx] = event; | ||
1244 | |||
1245 | hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; | ||
1246 | if (flags & PERF_EF_START) | ||
1247 | cci_pmu_start(event, PERF_EF_RELOAD); | ||
1248 | |||
1249 | /* Propagate our changes to the userspace mapping. */ | ||
1250 | perf_event_update_userpage(event); | ||
1251 | |||
1252 | out: | ||
1253 | perf_pmu_enable(event->pmu); | ||
1254 | return err; | ||
1255 | } | ||
1256 | |||
1257 | static void cci_pmu_del(struct perf_event *event, int flags) | ||
1258 | { | ||
1259 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1260 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1261 | struct hw_perf_event *hwc = &event->hw; | ||
1262 | int idx = hwc->idx; | ||
1263 | |||
1264 | cci_pmu_stop(event, PERF_EF_UPDATE); | ||
1265 | hw_events->events[idx] = NULL; | ||
1266 | clear_bit(idx, hw_events->used_mask); | ||
1267 | |||
1268 | perf_event_update_userpage(event); | ||
1269 | } | ||
1270 | |||
1271 | static int | ||
1272 | validate_event(struct pmu *cci_pmu, | ||
1273 | struct cci_pmu_hw_events *hw_events, | ||
1274 | struct perf_event *event) | ||
1275 | { | ||
1276 | if (is_software_event(event)) | ||
1277 | return 1; | ||
1278 | |||
1279 | /* | ||
1280 | * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The | ||
1281 | * core perf code won't check that the pmu->ctx == leader->ctx | ||
1282 | * until after pmu->event_init(event). | ||
1283 | */ | ||
1284 | if (event->pmu != cci_pmu) | ||
1285 | return 0; | ||
1286 | |||
1287 | if (event->state < PERF_EVENT_STATE_OFF) | ||
1288 | return 1; | ||
1289 | |||
1290 | if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec) | ||
1291 | return 1; | ||
1292 | |||
1293 | return pmu_get_event_idx(hw_events, event) >= 0; | ||
1294 | } | ||
1295 | |||
1296 | static int | ||
1297 | validate_group(struct perf_event *event) | ||
1298 | { | ||
1299 | struct perf_event *sibling, *leader = event->group_leader; | ||
1300 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1301 | unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)]; | ||
1302 | struct cci_pmu_hw_events fake_pmu = { | ||
1303 | /* | ||
1304 | * Initialise the fake PMU. We only need to populate the | ||
1305 | * used_mask for the purposes of validation. | ||
1306 | */ | ||
1307 | .used_mask = mask, | ||
1308 | }; | ||
1309 | memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long)); | ||
1310 | |||
1311 | if (!validate_event(event->pmu, &fake_pmu, leader)) | ||
1312 | return -EINVAL; | ||
1313 | |||
1314 | for_each_sibling_event(sibling, leader) { | ||
1315 | if (!validate_event(event->pmu, &fake_pmu, sibling)) | ||
1316 | return -EINVAL; | ||
1317 | } | ||
1318 | |||
1319 | if (!validate_event(event->pmu, &fake_pmu, event)) | ||
1320 | return -EINVAL; | ||
1321 | |||
1322 | return 0; | ||
1323 | } | ||
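
The fake-PMU trick can be reduced to a standalone sketch (hypothetical, with four counters assumed): validation is a dry run of counter allocation against a scratch bitmap, so a group demanding more counters than the hardware has is rejected before any register is touched:

#include <stdbool.h>

#define EXAMPLE_NR_CNTRS 4	/* assumed hardware counter count */

/* Dry-run allocator over a scratch mask, mimicking pmu_get_event_idx() */
static bool example_claim_counter(unsigned long *scratch_mask)
{
	int idx;

	for (idx = 0; idx < EXAMPLE_NR_CNTRS; idx++) {
		if (!(*scratch_mask & (1UL << idx))) {
			*scratch_mask |= 1UL << idx;
			return true;	/* this group member still fits */
		}
	}
	return false;			/* group over-subscribes the PMU */
}

Calling example_claim_counter() once per group member on a zeroed mask reproduces the accept/reject decision without any hardware access.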
1324 | |||
1325 | static int | ||
1326 | __hw_perf_event_init(struct perf_event *event) | ||
1327 | { | ||
1328 | struct hw_perf_event *hwc = &event->hw; | ||
1329 | int mapping; | ||
1330 | |||
1331 | mapping = pmu_map_event(event); | ||
1332 | |||
1333 | if (mapping < 0) { | ||
1334 | pr_debug("event %x:%llx not supported\n", event->attr.type, | ||
1335 | event->attr.config); | ||
1336 | return mapping; | ||
1337 | } | ||
1338 | |||
1339 | /* | ||
1340 | * We don't assign an index until we actually place the event onto | ||
1341 | * hardware. Use -1 to signify that we haven't decided where to put it | ||
1342 | * yet. | ||
1343 | */ | ||
1344 | hwc->idx = -1; | ||
1345 | hwc->config_base = 0; | ||
1346 | hwc->config = 0; | ||
1347 | hwc->event_base = 0; | ||
1348 | |||
1349 | /* | ||
1350 | * Store the event encoding into the config_base field. | ||
1351 | */ | ||
1352 | hwc->config_base |= (unsigned long)mapping; | ||
1353 | |||
1354 | /* | ||
1355 | * Limit the sample_period to half of the counter width. That way, the | ||
1356 | * new counter value is far less likely to overtake the previous one | ||
1357 | * unless you have some serious IRQ latency issues. | ||
1358 | */ | ||
1359 | hwc->sample_period = CCI_PMU_CNTR_MASK >> 1; | ||
1360 | hwc->last_period = hwc->sample_period; | ||
1361 | local64_set(&hwc->period_left, hwc->sample_period); | ||
1362 | |||
1363 | if (event->group_leader != event) { | ||
1364 | if (validate_group(event) != 0) | ||
1365 | return -EINVAL; | ||
1366 | } | ||
1367 | |||
1368 | return 0; | ||
1369 | } | ||
1370 | |||
1371 | static int cci_pmu_event_init(struct perf_event *event) | ||
1372 | { | ||
1373 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1374 | atomic_t *active_events = &cci_pmu->active_events; | ||
1375 | int err = 0; | ||
1376 | int cpu; | ||
1377 | |||
1378 | if (event->attr.type != event->pmu->type) | ||
1379 | return -ENOENT; | ||
1380 | |||
1381 | /* Shared by all CPUs, no meaningful state to sample */ | ||
1382 | if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK) | ||
1383 | return -EOPNOTSUPP; | ||
1384 | |||
1385 | /* We have no filtering of any kind */ | ||
1386 | if (event->attr.exclude_user || | ||
1387 | event->attr.exclude_kernel || | ||
1388 | event->attr.exclude_hv || | ||
1389 | event->attr.exclude_idle || | ||
1390 | event->attr.exclude_host || | ||
1391 | event->attr.exclude_guest) | ||
1392 | return -EINVAL; | ||
1393 | |||
1394 | /* | ||
1395 | * Following the example set by other "uncore" PMUs, we accept any CPU | ||
1396 | * and rewrite its affinity dynamically rather than having perf core | ||
1397 | * handle cpu == -1 and pid == -1 for this case. | ||
1398 | * | ||
1399 | * The perf core will pin online CPUs for the duration of this call and | ||
1400 | * the event being installed into its context, so the PMU's CPU can't | ||
1401 | * change under our feet. | ||
1402 | */ | ||
1403 | cpu = cpumask_first(&cci_pmu->cpus); | ||
1404 | if (event->cpu < 0 || cpu < 0) | ||
1405 | return -EINVAL; | ||
1406 | event->cpu = cpu; | ||
1407 | |||
1408 | event->destroy = hw_perf_event_destroy; | ||
1409 | if (!atomic_inc_not_zero(active_events)) { | ||
1410 | mutex_lock(&cci_pmu->reserve_mutex); | ||
1411 | if (atomic_read(active_events) == 0) | ||
1412 | err = cci_pmu_get_hw(cci_pmu); | ||
1413 | if (!err) | ||
1414 | atomic_inc(active_events); | ||
1415 | mutex_unlock(&cci_pmu->reserve_mutex); | ||
1416 | } | ||
1417 | if (err) | ||
1418 | return err; | ||
1419 | |||
1420 | err = __hw_perf_event_init(event); | ||
1421 | if (err) | ||
1422 | hw_perf_event_destroy(event); | ||
1423 | |||
1424 | return err; | ||
1425 | } | ||
1426 | |||
1427 | static ssize_t pmu_cpumask_attr_show(struct device *dev, | ||
1428 | struct device_attribute *attr, char *buf) | ||
1429 | { | ||
1430 | struct pmu *pmu = dev_get_drvdata(dev); | ||
1431 | struct cci_pmu *cci_pmu = to_cci_pmu(pmu); | ||
1432 | |||
1433 | int n = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", | ||
1434 | cpumask_pr_args(&cci_pmu->cpus)); | ||
1435 | buf[n++] = '\n'; | ||
1436 | buf[n] = '\0'; | ||
1437 | return n; | ||
1438 | } | ||
1439 | |||
1440 | static struct device_attribute pmu_cpumask_attr = | ||
1441 | __ATTR(cpumask, S_IRUGO, pmu_cpumask_attr_show, NULL); | ||
1442 | |||
1443 | static struct attribute *pmu_attrs[] = { | ||
1444 | &pmu_cpumask_attr.attr, | ||
1445 | NULL, | ||
1446 | }; | ||
1447 | |||
1448 | static struct attribute_group pmu_attr_group = { | ||
1449 | .attrs = pmu_attrs, | ||
1450 | }; | ||
1451 | |||
1452 | static struct attribute_group pmu_format_attr_group = { | ||
1453 | .name = "format", | ||
1454 | .attrs = NULL, /* Filled in cci_pmu_init_attrs */ | ||
1455 | }; | ||
1456 | |||
1457 | static struct attribute_group pmu_event_attr_group = { | ||
1458 | .name = "events", | ||
1459 | .attrs = NULL, /* Filled in cci_pmu_init_attrs */ | ||
1460 | }; | ||
1461 | |||
1462 | static const struct attribute_group *pmu_attr_groups[] = { | ||
1463 | &pmu_attr_group, | ||
1464 | &pmu_format_attr_group, | ||
1465 | &pmu_event_attr_group, | ||
1466 | NULL | ||
1467 | }; | ||
1468 | |||
1469 | static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev) | ||
1470 | { | ||
1471 | const struct cci_pmu_model *model = cci_pmu->model; | ||
1472 | char *name = model->name; | ||
1473 | u32 num_cntrs; | ||
1474 | |||
1475 | pmu_event_attr_group.attrs = model->event_attrs; | ||
1476 | pmu_format_attr_group.attrs = model->format_attrs; | ||
1477 | |||
1478 | cci_pmu->pmu = (struct pmu) { | ||
1479 | .name = cci_pmu->model->name, | ||
1480 | .task_ctx_nr = perf_invalid_context, | ||
1481 | .pmu_enable = cci_pmu_enable, | ||
1482 | .pmu_disable = cci_pmu_disable, | ||
1483 | .event_init = cci_pmu_event_init, | ||
1484 | .add = cci_pmu_add, | ||
1485 | .del = cci_pmu_del, | ||
1486 | .start = cci_pmu_start, | ||
1487 | .stop = cci_pmu_stop, | ||
1488 | .read = pmu_read, | ||
1489 | .attr_groups = pmu_attr_groups, | ||
1490 | }; | ||
1491 | |||
1492 | cci_pmu->plat_device = pdev; | ||
1493 | num_cntrs = pmu_get_max_counters(); | ||
1494 | if (num_cntrs > cci_pmu->model->num_hw_cntrs) { | ||
1495 | dev_warn(&pdev->dev, | ||
1496 | "PMU implements more counters(%d) than supported by" | ||
1497 | " the model(%d), truncated.", | ||
1498 | num_cntrs, cci_pmu->model->num_hw_cntrs); | ||
1499 | num_cntrs = cci_pmu->model->num_hw_cntrs; | ||
1500 | } | ||
1501 | cci_pmu->num_cntrs = num_cntrs + cci_pmu->model->fixed_hw_cntrs; | ||
1502 | |||
1503 | return perf_pmu_register(&cci_pmu->pmu, name, -1); | ||
1504 | } | ||
1505 | |||
1506 | static int cci_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) | ||
1507 | { | ||
1508 | struct cci_pmu *cci_pmu = hlist_entry_safe(node, struct cci_pmu, node); | ||
1509 | unsigned int target; | ||
1510 | |||
1511 | if (!cpumask_test_and_clear_cpu(cpu, &cci_pmu->cpus)) | ||
1512 | return 0; | ||
1513 | target = cpumask_any_but(cpu_online_mask, cpu); | ||
1514 | if (target >= nr_cpu_ids) | ||
1515 | return 0; | ||
1516 | /* | ||
1517 | * TODO: migrate context once core races on event->ctx have | ||
1518 | * been fixed. | ||
1519 | */ | ||
1520 | cpumask_set_cpu(target, &cci_pmu->cpus); | ||
1521 | return 0; | ||
1522 | } | ||
1523 | |||
1524 | static struct cci_pmu_model cci_pmu_models[] = { | ||
1525 | #ifdef CONFIG_ARM_CCI400_PMU | ||
1526 | [CCI400_R0] = { | ||
1527 | .name = "CCI_400", | ||
1528 | .fixed_hw_cntrs = 1, /* Cycle counter */ | ||
1529 | .num_hw_cntrs = 4, | ||
1530 | .cntr_size = SZ_4K, | ||
1531 | .format_attrs = cci400_pmu_format_attrs, | ||
1532 | .event_attrs = cci400_r0_pmu_event_attrs, | ||
1533 | .event_ranges = { | ||
1534 | [CCI_IF_SLAVE] = { | ||
1535 | CCI400_R0_SLAVE_PORT_MIN_EV, | ||
1536 | CCI400_R0_SLAVE_PORT_MAX_EV, | ||
1537 | }, | ||
1538 | [CCI_IF_MASTER] = { | ||
1539 | CCI400_R0_MASTER_PORT_MIN_EV, | ||
1540 | CCI400_R0_MASTER_PORT_MAX_EV, | ||
1541 | }, | ||
1542 | }, | ||
1543 | .validate_hw_event = cci400_validate_hw_event, | ||
1544 | .get_event_idx = cci400_get_event_idx, | ||
1545 | }, | ||
1546 | [CCI400_R1] = { | ||
1547 | .name = "CCI_400_r1", | ||
1548 | .fixed_hw_cntrs = 1, /* Cycle counter */ | ||
1549 | .num_hw_cntrs = 4, | ||
1550 | .cntr_size = SZ_4K, | ||
1551 | .format_attrs = cci400_pmu_format_attrs, | ||
1552 | .event_attrs = cci400_r1_pmu_event_attrs, | ||
1553 | .event_ranges = { | ||
1554 | [CCI_IF_SLAVE] = { | ||
1555 | CCI400_R1_SLAVE_PORT_MIN_EV, | ||
1556 | CCI400_R1_SLAVE_PORT_MAX_EV, | ||
1557 | }, | ||
1558 | [CCI_IF_MASTER] = { | ||
1559 | CCI400_R1_MASTER_PORT_MIN_EV, | ||
1560 | CCI400_R1_MASTER_PORT_MAX_EV, | ||
1561 | }, | ||
1562 | }, | ||
1563 | .validate_hw_event = cci400_validate_hw_event, | ||
1564 | .get_event_idx = cci400_get_event_idx, | ||
1565 | }, | ||
1566 | #endif | ||
1567 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
1568 | [CCI500_R0] = { | ||
1569 | .name = "CCI_500", | ||
1570 | .fixed_hw_cntrs = 0, | ||
1571 | .num_hw_cntrs = 8, | ||
1572 | .cntr_size = SZ_64K, | ||
1573 | .format_attrs = cci5xx_pmu_format_attrs, | ||
1574 | .event_attrs = cci5xx_pmu_event_attrs, | ||
1575 | .event_ranges = { | ||
1576 | [CCI_IF_SLAVE] = { | ||
1577 | CCI5xx_SLAVE_PORT_MIN_EV, | ||
1578 | CCI5xx_SLAVE_PORT_MAX_EV, | ||
1579 | }, | ||
1580 | [CCI_IF_MASTER] = { | ||
1581 | CCI5xx_MASTER_PORT_MIN_EV, | ||
1582 | CCI5xx_MASTER_PORT_MAX_EV, | ||
1583 | }, | ||
1584 | [CCI_IF_GLOBAL] = { | ||
1585 | CCI5xx_GLOBAL_PORT_MIN_EV, | ||
1586 | CCI5xx_GLOBAL_PORT_MAX_EV, | ||
1587 | }, | ||
1588 | }, | ||
1589 | .validate_hw_event = cci500_validate_hw_event, | ||
1590 | .write_counters = cci5xx_pmu_write_counters, | ||
1591 | }, | ||
1592 | [CCI550_R0] = { | ||
1593 | .name = "CCI_550", | ||
1594 | .fixed_hw_cntrs = 0, | ||
1595 | .num_hw_cntrs = 8, | ||
1596 | .cntr_size = SZ_64K, | ||
1597 | .format_attrs = cci5xx_pmu_format_attrs, | ||
1598 | .event_attrs = cci5xx_pmu_event_attrs, | ||
1599 | .event_ranges = { | ||
1600 | [CCI_IF_SLAVE] = { | ||
1601 | CCI5xx_SLAVE_PORT_MIN_EV, | ||
1602 | CCI5xx_SLAVE_PORT_MAX_EV, | ||
1603 | }, | ||
1604 | [CCI_IF_MASTER] = { | ||
1605 | CCI5xx_MASTER_PORT_MIN_EV, | ||
1606 | CCI5xx_MASTER_PORT_MAX_EV, | ||
1607 | }, | ||
1608 | [CCI_IF_GLOBAL] = { | ||
1609 | CCI5xx_GLOBAL_PORT_MIN_EV, | ||
1610 | CCI5xx_GLOBAL_PORT_MAX_EV, | ||
1611 | }, | ||
1612 | }, | ||
1613 | .validate_hw_event = cci550_validate_hw_event, | ||
1614 | .write_counters = cci5xx_pmu_write_counters, | ||
1615 | }, | ||
1616 | #endif | ||
1617 | }; | ||
1618 | |||
1619 | static const struct of_device_id arm_cci_pmu_matches[] = { | ||
1620 | #ifdef CONFIG_ARM_CCI400_PMU | ||
1621 | { | ||
1622 | .compatible = "arm,cci-400-pmu", | ||
1623 | .data = NULL, | ||
1624 | }, | ||
1625 | { | ||
1626 | .compatible = "arm,cci-400-pmu,r0", | ||
1627 | .data = &cci_pmu_models[CCI400_R0], | ||
1628 | }, | ||
1629 | { | ||
1630 | .compatible = "arm,cci-400-pmu,r1", | ||
1631 | .data = &cci_pmu_models[CCI400_R1], | ||
1632 | }, | ||
1633 | #endif | ||
1634 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
1635 | { | ||
1636 | .compatible = "arm,cci-500-pmu,r0", | ||
1637 | .data = &cci_pmu_models[CCI500_R0], | ||
1638 | }, | ||
1639 | { | ||
1640 | .compatible = "arm,cci-550-pmu,r0", | ||
1641 | .data = &cci_pmu_models[CCI550_R0], | ||
1642 | }, | ||
1643 | #endif | ||
1644 | {}, | ||
1645 | }; | 65 | }; |
1646 | 66 | ||
1647 | static inline const struct cci_pmu_model *get_cci_model(struct platform_device *pdev) | 67 | #define DRIVER_NAME "ARM-CCI" |
1648 | { | ||
1649 | const struct of_device_id *match = of_match_node(arm_cci_pmu_matches, | ||
1650 | pdev->dev.of_node); | ||
1651 | if (!match) | ||
1652 | return NULL; | ||
1653 | if (match->data) | ||
1654 | return match->data; | ||
1655 | |||
1656 | dev_warn(&pdev->dev, "DEPRECATED compatible property," | ||
1657 | "requires secure access to CCI registers"); | ||
1658 | return probe_cci_model(pdev); | ||
1659 | } | ||
1660 | |||
1661 | static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs) | ||
1662 | { | ||
1663 | int i; | ||
1664 | |||
1665 | for (i = 0; i < nr_irqs; i++) | ||
1666 | if (irq == irqs[i]) | ||
1667 | return true; | ||
1668 | |||
1669 | return false; | ||
1670 | } | ||
1671 | |||
1672 | static struct cci_pmu *cci_pmu_alloc(struct platform_device *pdev) | ||
1673 | { | ||
1674 | struct cci_pmu *cci_pmu; | ||
1675 | const struct cci_pmu_model *model; | ||
1676 | |||
1677 | /* | ||
1678 | * All allocations are devm_*, hence we don't have to free | ||
1679 | * them explicitly on error; they are released automatically | ||
1680 | * on driver detach. | ||
1681 | */ | ||
1682 | model = get_cci_model(pdev); | ||
1683 | if (!model) { | ||
1684 | dev_warn(&pdev->dev, "CCI PMU version not supported\n"); | ||
1685 | return ERR_PTR(-ENODEV); | ||
1686 | } | ||
1687 | |||
1688 | cci_pmu = devm_kzalloc(&pdev->dev, sizeof(*cci_pmu), GFP_KERNEL); | ||
1689 | if (!cci_pmu) | ||
1690 | return ERR_PTR(-ENOMEM); | ||
1691 | |||
1692 | cci_pmu->model = model; | ||
1693 | cci_pmu->irqs = devm_kcalloc(&pdev->dev, CCI_PMU_MAX_HW_CNTRS(model), | ||
1694 | sizeof(*cci_pmu->irqs), GFP_KERNEL); | ||
1695 | if (!cci_pmu->irqs) | ||
1696 | return ERR_PTR(-ENOMEM); | ||
1697 | cci_pmu->hw_events.events = devm_kcalloc(&pdev->dev, | ||
1698 | CCI_PMU_MAX_HW_CNTRS(model), | ||
1699 | sizeof(*cci_pmu->hw_events.events), | ||
1700 | GFP_KERNEL); | ||
1701 | if (!cci_pmu->hw_events.events) | ||
1702 | return ERR_PTR(-ENOMEM); | ||
1703 | cci_pmu->hw_events.used_mask = devm_kcalloc(&pdev->dev, | ||
1704 | BITS_TO_LONGS(CCI_PMU_MAX_HW_CNTRS(model)), | ||
1705 | sizeof(*cci_pmu->hw_events.used_mask), | ||
1706 | GFP_KERNEL); | ||
1707 | if (!cci_pmu->hw_events.used_mask) | ||
1708 | return ERR_PTR(-ENOMEM); | ||
1709 | |||
1710 | return cci_pmu; | ||
1711 | } | ||
1712 | |||
1713 | |||
1714 | static int cci_pmu_probe(struct platform_device *pdev) | ||
1715 | { | ||
1716 | struct resource *res; | ||
1717 | struct cci_pmu *cci_pmu; | ||
1718 | int i, ret, irq; | ||
1719 | |||
1720 | cci_pmu = cci_pmu_alloc(pdev); | ||
1721 | if (IS_ERR(cci_pmu)) | ||
1722 | return PTR_ERR(cci_pmu); | ||
1723 | |||
1724 | res = platform_get_resource(pdev, IORESOURCE_MEM, 0); | ||
1725 | cci_pmu->base = devm_ioremap_resource(&pdev->dev, res); | ||
1726 | if (IS_ERR(cci_pmu->base)) | ||
1727 | return PTR_ERR(cci_pmu->base); | ||
1728 | |||
1729 | /* | ||
1730 | * CCI PMU has one overflow interrupt per counter, but some may be tied | ||
1731 | * together to a common interrupt. | ||
1732 | */ | ||
1733 | cci_pmu->nr_irqs = 0; | ||
1734 | for (i = 0; i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model); i++) { | ||
1735 | irq = platform_get_irq(pdev, i); | ||
1736 | if (irq < 0) | ||
1737 | break; | ||
1738 | |||
1739 | if (is_duplicate_irq(irq, cci_pmu->irqs, cci_pmu->nr_irqs)) | ||
1740 | continue; | ||
1741 | |||
1742 | cci_pmu->irqs[cci_pmu->nr_irqs++] = irq; | ||
1743 | } | ||
1744 | |||
1745 | /* | ||
1746 | * Ensure that the device tree has as many interrupts as the number | ||
1747 | * of counters. | ||
1748 | */ | ||
1749 | if (i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)) { | ||
1750 | dev_warn(&pdev->dev, "In-correct number of interrupts: %d, should be %d\n", | ||
1751 | i, CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)); | ||
1752 | return -EINVAL; | ||
1753 | } | ||
1754 | |||
1755 | raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock); | ||
1756 | mutex_init(&cci_pmu->reserve_mutex); | ||
1757 | atomic_set(&cci_pmu->active_events, 0); | ||
1758 | cpumask_set_cpu(get_cpu(), &cci_pmu->cpus); | ||
1759 | |||
1760 | ret = cci_pmu_init(cci_pmu, pdev); | ||
1761 | if (ret) { | ||
1762 | put_cpu(); | ||
1763 | return ret; | ||
1764 | } | ||
1765 | |||
1766 | cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE, | ||
1767 | &cci_pmu->node); | ||
1768 | put_cpu(); | ||
1769 | pr_info("ARM %s PMU driver probed\n", cci_pmu->model->name); | ||
1770 | return 0; | ||
1771 | } | ||
1772 | 68 | ||
1773 | static int cci_platform_probe(struct platform_device *pdev) | 69 | static int cci_platform_probe(struct platform_device *pdev) |
1774 | { | 70 | { |
1775 | if (!cci_probed()) | 71 | if (!cci_probed()) |
1776 | return -ENODEV; | 72 | return -ENODEV; |
1777 | 73 | ||
1778 | return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); | 74 | return of_platform_populate(pdev->dev.of_node, NULL, |
75 | arm_cci_auxdata, &pdev->dev); | ||
1779 | } | 76 | } |
1780 | 77 | ||
1781 | static struct platform_driver cci_pmu_driver = { | ||
1782 | .driver = { | ||
1783 | .name = DRIVER_NAME_PMU, | ||
1784 | .of_match_table = arm_cci_pmu_matches, | ||
1785 | }, | ||
1786 | .probe = cci_pmu_probe, | ||
1787 | }; | ||
1788 | |||
1789 | static struct platform_driver cci_platform_driver = { | 78 | static struct platform_driver cci_platform_driver = { |
1790 | .driver = { | 79 | .driver = { |
1791 | .name = DRIVER_NAME, | 80 | .name = DRIVER_NAME, |
@@ -1796,30 +85,9 @@ static struct platform_driver cci_platform_driver = { | |||
1796 | 85 | ||
1797 | static int __init cci_platform_init(void) | 86 | static int __init cci_platform_init(void) |
1798 | { | 87 | { |
1799 | int ret; | ||
1800 | |||
1801 | ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_ARM_CCI_ONLINE, | ||
1802 | "perf/arm/cci:online", NULL, | ||
1803 | cci_pmu_offline_cpu); | ||
1804 | if (ret) | ||
1805 | return ret; | ||
1806 | |||
1807 | ret = platform_driver_register(&cci_pmu_driver); | ||
1808 | if (ret) | ||
1809 | return ret; | ||
1810 | |||
1811 | return platform_driver_register(&cci_platform_driver); | 88 | return platform_driver_register(&cci_platform_driver); |
1812 | } | 89 | } |
1813 | 90 | ||
1814 | #else /* !CONFIG_ARM_CCI_PMU */ | ||
1815 | |||
1816 | static int __init cci_platform_init(void) | ||
1817 | { | ||
1818 | return 0; | ||
1819 | } | ||
1820 | |||
1821 | #endif /* CONFIG_ARM_CCI_PMU */ | ||
1822 | |||
1823 | #ifdef CONFIG_ARM_CCI400_PORT_CTRL | 91 | #ifdef CONFIG_ARM_CCI400_PORT_CTRL |
1824 | 92 | ||
1825 | #define CCI_PORT_CTRL 0x0 | 93 | #define CCI_PORT_CTRL 0x0 |
@@ -2189,13 +457,10 @@ static int cci_probe_ports(struct device_node *np) | |||
2189 | if (!ports) | 457 | if (!ports) |
2190 | return -ENOMEM; | 458 | return -ENOMEM; |
2191 | 459 | ||
2192 | for_each_child_of_node(np, cp) { | 460 | for_each_available_child_of_node(np, cp) { |
2193 | if (!of_match_node(arm_cci_ctrl_if_matches, cp)) | 461 | if (!of_match_node(arm_cci_ctrl_if_matches, cp)) |
2194 | continue; | 462 | continue; |
2195 | 463 | ||
2196 | if (!of_device_is_available(cp)) | ||
2197 | continue; | ||
2198 | |||
2199 | i = nb_ace + nb_ace_lite; | 464 | i = nb_ace + nb_ace_lite; |
2200 | 465 | ||
2201 | if (i >= nb_cci_ports) | 466 | if (i >= nb_cci_ports) |
@@ -2275,7 +540,7 @@ static int cci_probe(void) | |||
2275 | struct resource res; | 540 | struct resource res; |
2276 | 541 | ||
2277 | np = of_find_matching_node(NULL, arm_cci_matches); | 542 | np = of_find_matching_node(NULL, arm_cci_matches); |
2278 | if(!np || !of_device_is_available(np)) | 543 | if (!of_device_is_available(np)) |
2279 | return -ENODEV; | 544 | return -ENODEV; |
2280 | 545 | ||
2281 | ret = of_address_to_resource(np, 0, &res); | 546 | ret = of_address_to_resource(np, 0, &res); |
diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig index 98ce9fc6e6c0..7ae23b25b406 100644 --- a/drivers/clk/Kconfig +++ b/drivers/clk/Kconfig | |||
@@ -62,6 +62,16 @@ config COMMON_CLK_HI655X | |||
62 | multi-function device has one fixed-rate oscillator, clocked | 62 | multi-function device has one fixed-rate oscillator, clocked |
63 | at 32KHz. | 63 | at 32KHz. |
64 | 64 | ||
65 | config COMMON_CLK_SCMI | ||
66 | tristate "Clock driver controlled via SCMI interface" | ||
67 | depends on ARM_SCMI_PROTOCOL || COMPILE_TEST | ||
68 | ---help--- | ||
69 | This driver provides support for clocks that are controlled | ||
70 | by firmware that implements the SCMI interface. | ||
71 | |||
72 | This driver uses SCMI Message Protocol to interact with the | ||
73 | firmware providing all the clock controls. | ||
74 | |||
65 | config COMMON_CLK_SCPI | 75 | config COMMON_CLK_SCPI |
66 | tristate "Clock driver controlled via SCPI interface" | 76 | tristate "Clock driver controlled via SCPI interface" |
67 | depends on ARM_SCPI_PROTOCOL || COMPILE_TEST | 77 | depends on ARM_SCPI_PROTOCOL || COMPILE_TEST |
diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile index 71ec41e6364f..6605513eaa94 100644 --- a/drivers/clk/Makefile +++ b/drivers/clk/Makefile | |||
@@ -41,6 +41,7 @@ obj-$(CONFIG_CLK_QORIQ) += clk-qoriq.o | |||
41 | obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o | 41 | obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o |
42 | obj-$(CONFIG_COMMON_CLK_HI655X) += clk-hi655x.o | 42 | obj-$(CONFIG_COMMON_CLK_HI655X) += clk-hi655x.o |
43 | obj-$(CONFIG_COMMON_CLK_S2MPS11) += clk-s2mps11.o | 43 | obj-$(CONFIG_COMMON_CLK_S2MPS11) += clk-s2mps11.o |
44 | obj-$(CONFIG_COMMON_CLK_SCMI) += clk-scmi.o | ||
44 | obj-$(CONFIG_COMMON_CLK_SCPI) += clk-scpi.o | 45 | obj-$(CONFIG_COMMON_CLK_SCPI) += clk-scpi.o |
45 | obj-$(CONFIG_COMMON_CLK_SI5351) += clk-si5351.o | 46 | obj-$(CONFIG_COMMON_CLK_SI5351) += clk-si5351.o |
46 | obj-$(CONFIG_COMMON_CLK_SI514) += clk-si514.o | 47 | obj-$(CONFIG_COMMON_CLK_SI514) += clk-si514.o |
diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c new file mode 100644 index 000000000000..488c21376b55 --- /dev/null +++ b/drivers/clk/clk-scmi.c | |||
@@ -0,0 +1,194 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Power Interface (SCMI) Protocol based clock driver | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #include <linux/clk-provider.h> | ||
9 | #include <linux/device.h> | ||
10 | #include <linux/err.h> | ||
11 | #include <linux/of.h> | ||
12 | #include <linux/module.h> | ||
13 | #include <linux/scmi_protocol.h> | ||
14 | #include <asm/div64.h> | ||
15 | |||
16 | struct scmi_clk { | ||
17 | u32 id; | ||
18 | struct clk_hw hw; | ||
19 | const struct scmi_clock_info *info; | ||
20 | const struct scmi_handle *handle; | ||
21 | }; | ||
22 | |||
23 | #define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw) | ||
24 | |||
25 | static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw, | ||
26 | unsigned long parent_rate) | ||
27 | { | ||
28 | int ret; | ||
29 | u64 rate; | ||
30 | struct scmi_clk *clk = to_scmi_clk(hw); | ||
31 | |||
32 | ret = clk->handle->clk_ops->rate_get(clk->handle, clk->id, &rate); | ||
33 | if (ret) | ||
34 | return 0; | ||
35 | return rate; | ||
36 | } | ||
37 | |||
38 | static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate, | ||
39 | unsigned long *parent_rate) | ||
40 | { | ||
42 | u64 fmin, fmax, ftmp; | ||
43 | struct scmi_clk *clk = to_scmi_clk(hw); | ||
44 | |||
45 | /* | ||
46 | * We can't figure out what rate it will be, so just return the | ||
47 | * rate back to the caller. scmi_clk_recalc_rate() will be called | ||
48 | * after the rate is set and we'll know what rate the clock is | ||
49 | * running at then. | ||
50 | */ | ||
51 | if (clk->info->rate_discrete) | ||
52 | return rate; | ||
53 | |||
54 | fmin = clk->info->range.min_rate; | ||
55 | fmax = clk->info->range.max_rate; | ||
56 | if (rate <= fmin) | ||
57 | return fmin; | ||
58 | else if (rate >= fmax) | ||
59 | return fmax; | ||
60 | |||
61 | ftmp = rate - fmin; | ||
62 | ftmp += clk->info->range.step_size - 1; /* to round up */ | ||
63 | do_div(ftmp, clk->info->range.step_size); /* quotient is left in ftmp */ | ||
64 | |||
65 | return ftmp * clk->info->range.step_size + fmin; | ||
66 | } | ||
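
A worked example of the round-up arithmetic above, with hypothetical values not taken from any real platform: fmin = 100 MHz, step_size = 25 MHz, requested rate = 160 MHz.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t fmin = 100000000;	/* assumed minimum rate: 100 MHz */
	uint64_t step = 25000000;	/* assumed step size: 25 MHz */
	uint64_t rate = 160000000;	/* requested rate: 160 MHz */
	uint64_t ftmp = rate - fmin + step - 1;	/* biased so division rounds up */

	ftmp /= step;			/* quotient = 3, as do_div() leaves in ftmp */
	/* prints 175000000: the lowest supported rate not below the request */
	printf("%llu\n", (unsigned long long)(ftmp * step + fmin));
	return 0;
}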
67 | |||
68 | static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate, | ||
69 | unsigned long parent_rate) | ||
70 | { | ||
71 | struct scmi_clk *clk = to_scmi_clk(hw); | ||
72 | |||
73 | return clk->handle->clk_ops->rate_set(clk->handle, clk->id, 0, rate); | ||
74 | } | ||
75 | |||
76 | static int scmi_clk_enable(struct clk_hw *hw) | ||
77 | { | ||
78 | struct scmi_clk *clk = to_scmi_clk(hw); | ||
79 | |||
80 | return clk->handle->clk_ops->enable(clk->handle, clk->id); | ||
81 | } | ||
82 | |||
83 | static void scmi_clk_disable(struct clk_hw *hw) | ||
84 | { | ||
85 | struct scmi_clk *clk = to_scmi_clk(hw); | ||
86 | |||
87 | clk->handle->clk_ops->disable(clk->handle, clk->id); | ||
88 | } | ||
89 | |||
90 | static const struct clk_ops scmi_clk_ops = { | ||
91 | .recalc_rate = scmi_clk_recalc_rate, | ||
92 | .round_rate = scmi_clk_round_rate, | ||
93 | .set_rate = scmi_clk_set_rate, | ||
94 | /* | ||
95 | * We can't provide enable/disable callbacks, as these operations can't | ||
96 | * be performed in atomic context. Since the clock framework provides the | ||
97 | * standard clk_prepare_enable() API for callers that enable clocks from | ||
98 | * non-atomic context, providing prepare/unprepare should be fine. | ||
99 | */ | ||
100 | .prepare = scmi_clk_enable, | ||
101 | .unprepare = scmi_clk_disable, | ||
102 | }; | ||
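
Because the enable path is wired to prepare/unprepare, consumers must drive SCMI clocks from sleepable context. A minimal usage sketch follows; the consumer device is hypothetical, while devm_clk_get() and clk_prepare_enable() are the standard clk consumer APIs:

#include <linux/clk.h>
#include <linux/err.h>

static int example_start_scmi_clock(struct device *dev)
{
	struct clk *clk = devm_clk_get(dev, NULL);	/* first clock of dev */

	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/* clk_prepare_enable() may sleep; never call it in atomic context */
	return clk_prepare_enable(clk);
}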
103 | |||
104 | static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk) | ||
105 | { | ||
106 | int ret; | ||
107 | struct clk_init_data init = { | ||
108 | .flags = CLK_GET_RATE_NOCACHE, | ||
109 | .num_parents = 0, | ||
110 | .ops = &scmi_clk_ops, | ||
111 | .name = sclk->info->name, | ||
112 | }; | ||
113 | |||
114 | sclk->hw.init = &init; | ||
115 | ret = devm_clk_hw_register(dev, &sclk->hw); | ||
116 | if (!ret) | ||
117 | clk_hw_set_rate_range(&sclk->hw, sclk->info->range.min_rate, | ||
118 | sclk->info->range.max_rate); | ||
119 | return ret; | ||
120 | } | ||
121 | |||
122 | static int scmi_clocks_probe(struct scmi_device *sdev) | ||
123 | { | ||
124 | int idx, count, err; | ||
125 | struct clk_hw **hws; | ||
126 | struct clk_hw_onecell_data *clk_data; | ||
127 | struct device *dev = &sdev->dev; | ||
128 | struct device_node *np = dev->of_node; | ||
129 | const struct scmi_handle *handle = sdev->handle; | ||
130 | |||
131 | if (!handle || !handle->clk_ops) | ||
132 | return -ENODEV; | ||
133 | |||
134 | count = handle->clk_ops->count_get(handle); | ||
135 | if (count < 0) { | ||
136 | dev_err(dev, "%s: invalid clock output count\n", np->name); | ||
137 | return -EINVAL; | ||
138 | } | ||
139 | |||
140 | clk_data = devm_kzalloc(dev, sizeof(*clk_data) + | ||
141 | sizeof(*clk_data->hws) * count, GFP_KERNEL); | ||
142 | if (!clk_data) | ||
143 | return -ENOMEM; | ||
144 | |||
145 | clk_data->num = count; | ||
146 | hws = clk_data->hws; | ||
147 | |||
148 | for (idx = 0; idx < count; idx++) { | ||
149 | struct scmi_clk *sclk; | ||
150 | |||
151 | sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL); | ||
152 | if (!sclk) | ||
153 | return -ENOMEM; | ||
154 | |||
155 | sclk->info = handle->clk_ops->info_get(handle, idx); | ||
156 | if (!sclk->info) { | ||
157 | dev_dbg(dev, "invalid clock info for idx %d\n", idx); | ||
158 | continue; | ||
159 | } | ||
160 | |||
161 | sclk->id = idx; | ||
162 | sclk->handle = handle; | ||
163 | |||
164 | err = scmi_clk_ops_init(dev, sclk); | ||
165 | if (err) { | ||
166 | dev_err(dev, "failed to register clock %d\n", idx); | ||
167 | devm_kfree(dev, sclk); | ||
168 | hws[idx] = NULL; | ||
169 | } else { | ||
170 | dev_dbg(dev, "Registered clock:%s\n", sclk->info->name); | ||
171 | hws[idx] = &sclk->hw; | ||
172 | } | ||
173 | } | ||
174 | |||
175 | return devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, | ||
176 | clk_data); | ||
177 | } | ||
178 | |||
179 | static const struct scmi_device_id scmi_id_table[] = { | ||
180 | { SCMI_PROTOCOL_CLOCK }, | ||
181 | { }, | ||
182 | }; | ||
183 | MODULE_DEVICE_TABLE(scmi, scmi_id_table); | ||
184 | |||
185 | static struct scmi_driver scmi_clocks_driver = { | ||
186 | .name = "scmi-clocks", | ||
187 | .probe = scmi_clocks_probe, | ||
188 | .id_table = scmi_id_table, | ||
189 | }; | ||
190 | module_scmi_driver(scmi_clocks_driver); | ||
191 | |||
192 | MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); | ||
193 | MODULE_DESCRIPTION("ARM SCMI clock driver"); | ||
194 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm index 833b5f41f596..7f56fe5183f2 100644 --- a/drivers/cpufreq/Kconfig.arm +++ b/drivers/cpufreq/Kconfig.arm | |||
@@ -239,6 +239,18 @@ config ARM_SA1100_CPUFREQ | |||
239 | config ARM_SA1110_CPUFREQ | 239 | config ARM_SA1110_CPUFREQ |
240 | bool | 240 | bool |
241 | 241 | ||
242 | config ARM_SCMI_CPUFREQ | ||
243 | tristate "SCMI based CPUfreq driver" | ||
244 | depends on ARM_SCMI_PROTOCOL || COMPILE_TEST | ||
245 | depends on !CPU_THERMAL || THERMAL | ||
246 | select PM_OPP | ||
247 | help | ||
248 | This adds the CPUfreq driver support for ARM platforms using SCMI | ||
249 | protocol for CPU power management. | ||
250 | |||
251 | This driver uses SCMI Message Protocol driver to interact with the | ||
252 | firmware providing the CPU DVFS functionality. | ||
253 | |||
242 | config ARM_SPEAR_CPUFREQ | 254 | config ARM_SPEAR_CPUFREQ |
243 | bool "SPEAr CPUFreq support" | 255 | bool "SPEAr CPUFreq support" |
244 | depends on PLAT_SPEAR | 256 | depends on PLAT_SPEAR |
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile index b2cbdc016c77..8d24ade3bd02 100644 --- a/drivers/cpufreq/Makefile +++ b/drivers/cpufreq/Makefile | |||
@@ -75,6 +75,7 @@ obj-$(CONFIG_ARM_S3C24XX_CPUFREQ_DEBUGFS) += s3c24xx-cpufreq-debugfs.o | |||
75 | obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o | 75 | obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o |
76 | obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o | 76 | obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o |
77 | obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o | 77 | obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o |
78 | obj-$(CONFIG_ARM_SCMI_CPUFREQ) += scmi-cpufreq.o | ||
78 | obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o | 79 | obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o |
79 | obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o | 80 | obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o |
80 | obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o | 81 | obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o |
diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c new file mode 100644 index 000000000000..959a1dbe3835 --- /dev/null +++ b/drivers/cpufreq/scmi-cpufreq.c | |||
@@ -0,0 +1,264 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Power Interface (SCMI) based CPUFreq Interface driver | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | * Sudeep Holla <sudeep.holla@arm.com> | ||
7 | */ | ||
8 | |||
9 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt | ||
10 | |||
11 | #include <linux/cpu.h> | ||
12 | #include <linux/cpufreq.h> | ||
13 | #include <linux/cpumask.h> | ||
14 | #include <linux/cpu_cooling.h> | ||
15 | #include <linux/export.h> | ||
16 | #include <linux/module.h> | ||
17 | #include <linux/pm_opp.h> | ||
18 | #include <linux/slab.h> | ||
19 | #include <linux/scmi_protocol.h> | ||
20 | #include <linux/types.h> | ||
21 | |||
22 | struct scmi_data { | ||
23 | int domain_id; | ||
24 | struct device *cpu_dev; | ||
25 | struct thermal_cooling_device *cdev; | ||
26 | }; | ||
27 | |||
28 | static const struct scmi_handle *handle; | ||
29 | |||
30 | static unsigned int scmi_cpufreq_get_rate(unsigned int cpu) | ||
31 | { | ||
32 | struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); | ||
33 | struct scmi_perf_ops *perf_ops = handle->perf_ops; | ||
34 | struct scmi_data *priv = policy->driver_data; | ||
35 | unsigned long rate; | ||
36 | int ret; | ||
37 | |||
38 | ret = perf_ops->freq_get(handle, priv->domain_id, &rate, false); | ||
39 | if (ret) | ||
40 | return 0; | ||
41 | return rate / 1000; | ||
42 | } | ||
43 | |||
44 | /* | ||
45 | * perf_ops->freq_set is not synchronous: the actual OPP change happens | ||
46 | * asynchronously, and callers can be notified of it if the corresponding | ||
47 | * events are subscribed to with the SCMI firmware. | ||
48 | */ | ||
49 | static int | ||
50 | scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index) | ||
51 | { | ||
52 | int ret; | ||
53 | struct scmi_data *priv = policy->driver_data; | ||
54 | struct scmi_perf_ops *perf_ops = handle->perf_ops; | ||
55 | u64 freq = policy->freq_table[index].frequency * 1000; | ||
56 | |||
57 | ret = perf_ops->freq_set(handle, priv->domain_id, freq, false); | ||
58 | if (!ret) | ||
59 | arch_set_freq_scale(policy->related_cpus, freq, | ||
60 | policy->cpuinfo.max_freq); | ||
61 | return ret; | ||
62 | } | ||
63 | |||
64 | static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy, | ||
65 | unsigned int target_freq) | ||
66 | { | ||
67 | struct scmi_data *priv = policy->driver_data; | ||
68 | struct scmi_perf_ops *perf_ops = handle->perf_ops; | ||
69 | |||
70 | if (!perf_ops->freq_set(handle, priv->domain_id, | ||
71 | target_freq * 1000, true)) { | ||
72 | arch_set_freq_scale(policy->related_cpus, target_freq, | ||
73 | policy->cpuinfo.max_freq); | ||
74 | return target_freq; | ||
75 | } | ||
76 | |||
77 | return 0; | ||
78 | } | ||
79 | |||
80 | static int | ||
81 | scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask) | ||
82 | { | ||
83 | int cpu, domain, tdomain; | ||
84 | struct device *tcpu_dev; | ||
85 | |||
86 | domain = handle->perf_ops->device_domain_id(cpu_dev); | ||
87 | if (domain < 0) | ||
88 | return domain; | ||
89 | |||
90 | for_each_possible_cpu(cpu) { | ||
91 | if (cpu == cpu_dev->id) | ||
92 | continue; | ||
93 | |||
94 | tcpu_dev = get_cpu_device(cpu); | ||
95 | if (!tcpu_dev) | ||
96 | continue; | ||
97 | |||
98 | tdomain = handle->perf_ops->device_domain_id(tcpu_dev); | ||
99 | if (tdomain == domain) | ||
100 | cpumask_set_cpu(cpu, cpumask); | ||
101 | } | ||
102 | |||
103 | return 0; | ||
104 | } | ||
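
To make the domain-to-policy mapping concrete, a standalone sketch with a hypothetical per-CPU domain table (in the driver this comes from device_domain_id(), not a static array): with domains {0, 0, 1, 1} for cpu0..cpu3, building cpu0's mask sets only cpu1, so cpu0 and cpu1 end up sharing one cpufreq policy.

#include <stdio.h>

int main(void)
{
	int domain[4] = { 0, 0, 1, 1 };	/* assumed per-CPU perf domains */
	unsigned int mask = 0;
	int cpu, self = 0;		/* building the sharing mask for cpu0 */

	for (cpu = 0; cpu < 4; cpu++)
		if (cpu != self && domain[cpu] == domain[self])
			mask |= 1u << cpu;

	printf("cpu%d shares its policy with mask 0x%x\n", self, mask); /* 0x2 */
	return 0;
}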
105 | |||
106 | static int scmi_cpufreq_init(struct cpufreq_policy *policy) | ||
107 | { | ||
108 | int ret; | ||
109 | unsigned int latency; | ||
110 | struct device *cpu_dev; | ||
111 | struct scmi_data *priv; | ||
112 | struct cpufreq_frequency_table *freq_table; | ||
113 | |||
114 | cpu_dev = get_cpu_device(policy->cpu); | ||
115 | if (!cpu_dev) { | ||
116 | pr_err("failed to get cpu%d device\n", policy->cpu); | ||
117 | return -ENODEV; | ||
118 | } | ||
119 | |||
120 | ret = handle->perf_ops->add_opps_to_device(handle, cpu_dev); | ||
121 | if (ret) { | ||
122 | dev_warn(cpu_dev, "failed to add opps to the device\n"); | ||
123 | return ret; | ||
124 | } | ||
125 | |||
126 | ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus); | ||
127 | if (ret) { | ||
128 | dev_warn(cpu_dev, "failed to get sharing cpumask\n"); | ||
129 | return ret; | ||
130 | } | ||
131 | |||
132 | ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus); | ||
133 | if (ret) { | ||
134 | dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n", | ||
135 | __func__, ret); | ||
136 | return ret; | ||
137 | } | ||
138 | |||
139 | ret = dev_pm_opp_get_opp_count(cpu_dev); | ||
140 | if (ret <= 0) { | ||
141 | dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n"); | ||
142 | ret = -EPROBE_DEFER; | ||
143 | goto out_free_opp; | ||
144 | } | ||
145 | |||
146 | priv = kzalloc(sizeof(*priv), GFP_KERNEL); | ||
147 | if (!priv) { | ||
148 | ret = -ENOMEM; | ||
149 | goto out_free_opp; | ||
150 | } | ||
151 | |||
152 | ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); | ||
153 | if (ret) { | ||
154 | dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret); | ||
155 | goto out_free_priv; | ||
156 | } | ||
157 | |||
158 | priv->cpu_dev = cpu_dev; | ||
159 | priv->domain_id = handle->perf_ops->device_domain_id(cpu_dev); | ||
160 | |||
161 | policy->driver_data = priv; | ||
162 | |||
163 | ret = cpufreq_table_validate_and_show(policy, freq_table); | ||
164 | if (ret) { | ||
165 | dev_err(cpu_dev, "%s: invalid frequency table: %d\n", __func__, | ||
166 | ret); | ||
167 | goto out_free_cpufreq_table; | ||
168 | } | ||
169 | |||
170 | /* SCMI allows DVFS request for any domain from any CPU */ | ||
171 | policy->dvfs_possible_from_any_cpu = true; | ||
172 | |||
173 | latency = handle->perf_ops->get_transition_latency(handle, cpu_dev); | ||
174 | if (!latency) | ||
175 | latency = CPUFREQ_ETERNAL; | ||
176 | |||
177 | policy->cpuinfo.transition_latency = latency; | ||
178 | |||
179 | policy->fast_switch_possible = true; | ||
180 | return 0; | ||
181 | |||
182 | out_free_cpufreq_table: | ||
183 | dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); | ||
184 | out_free_priv: | ||
185 | kfree(priv); | ||
186 | out_free_opp: | ||
187 | dev_pm_opp_cpumask_remove_table(policy->cpus); | ||
188 | |||
189 | return ret; | ||
190 | } | ||
191 | |||
192 | static int scmi_cpufreq_exit(struct cpufreq_policy *policy) | ||
193 | { | ||
194 | struct scmi_data *priv = policy->driver_data; | ||
195 | |||
196 | cpufreq_cooling_unregister(priv->cdev); | ||
197 | dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); | ||
198 | kfree(priv); | ||
199 | dev_pm_opp_cpumask_remove_table(policy->related_cpus); | ||
200 | |||
201 | return 0; | ||
202 | } | ||
203 | |||
204 | static void scmi_cpufreq_ready(struct cpufreq_policy *policy) | ||
205 | { | ||
206 | struct scmi_data *priv = policy->driver_data; | ||
207 | |||
208 | priv->cdev = of_cpufreq_cooling_register(policy); | ||
209 | } | ||
210 | |||
211 | static struct cpufreq_driver scmi_cpufreq_driver = { | ||
212 | .name = "scmi", | ||
213 | .flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY | | ||
214 | CPUFREQ_NEED_INITIAL_FREQ_CHECK, | ||
215 | .verify = cpufreq_generic_frequency_table_verify, | ||
216 | .attr = cpufreq_generic_attr, | ||
217 | .target_index = scmi_cpufreq_set_target, | ||
218 | .fast_switch = scmi_cpufreq_fast_switch, | ||
219 | .get = scmi_cpufreq_get_rate, | ||
220 | .init = scmi_cpufreq_init, | ||
221 | .exit = scmi_cpufreq_exit, | ||
222 | .ready = scmi_cpufreq_ready, | ||
223 | }; | ||
224 | |||
225 | static int scmi_cpufreq_probe(struct scmi_device *sdev) | ||
226 | { | ||
227 | int ret; | ||
228 | |||
229 | handle = sdev->handle; | ||
230 | |||
231 | if (!handle || !handle->perf_ops) | ||
232 | return -ENODEV; | ||
233 | |||
234 | ret = cpufreq_register_driver(&scmi_cpufreq_driver); | ||
235 | if (ret) { | ||
236 | dev_err(&sdev->dev, "%s: registering cpufreq failed, err: %d\n", | ||
237 | __func__, ret); | ||
238 | } | ||
239 | |||
240 | return ret; | ||
241 | } | ||
242 | |||
243 | static void scmi_cpufreq_remove(struct scmi_device *sdev) | ||
244 | { | ||
245 | cpufreq_unregister_driver(&scmi_cpufreq_driver); | ||
246 | } | ||
247 | |||
248 | static const struct scmi_device_id scmi_id_table[] = { | ||
249 | { SCMI_PROTOCOL_PERF }, | ||
250 | { }, | ||
251 | }; | ||
252 | MODULE_DEVICE_TABLE(scmi, scmi_id_table); | ||
253 | |||
254 | static struct scmi_driver scmi_cpufreq_drv = { | ||
255 | .name = "scmi-cpufreq", | ||
256 | .probe = scmi_cpufreq_probe, | ||
257 | .remove = scmi_cpufreq_remove, | ||
258 | .id_table = scmi_id_table, | ||
259 | }; | ||
260 | module_scmi_driver(scmi_cpufreq_drv); | ||
261 | |||
262 | MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); | ||
263 | MODULE_DESCRIPTION("ARM SCMI CPUFreq interface driver"); | ||
264 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig index b7c748248e53..6e83880046d7 100644 --- a/drivers/firmware/Kconfig +++ b/drivers/firmware/Kconfig | |||
@@ -19,6 +19,40 @@ config ARM_PSCI_CHECKER | |||
19 | on and off through hotplug, so for now torture tests and PSCI checker | 19 | on and off through hotplug, so for now torture tests and PSCI checker |
20 | are mutually exclusive. | 20 | are mutually exclusive. |
21 | 21 | ||
22 | config ARM_SCMI_PROTOCOL | ||
23 | bool "ARM System Control and Management Interface (SCMI) Message Protocol" | ||
24 | depends on ARM || ARM64 || COMPILE_TEST | ||
25 | depends on MAILBOX | ||
26 | help | ||
27 | ARM System Control and Management Interface (SCMI) protocol is a | ||
28 | set of operating system-independent software interfaces that are | ||
29 | used in system management. SCMI is extensible and currently provides | ||
30 | interfaces for: discovery and self-description of the interfaces | ||
31 | it supports; power domain management, the ability to place a given | ||
32 | device or domain into the various power-saving states it supports; | ||
33 | performance management, the ability to control the performance of a | ||
34 | domain composed of compute engines such as application processors | ||
35 | and other accelerators; clock management, the ability to set and | ||
36 | inquire rates on platform-managed clocks; and sensor management, | ||
37 | the ability to read sensor data and be notified of sensor value | ||
38 | changes. | ||
39 | |||
40 | This protocol library provides an interface for all the client | ||
41 | drivers making use of the features offered by SCMI. | ||
42 | |||
43 | config ARM_SCMI_POWER_DOMAIN | ||
44 | tristate "SCMI power domain driver" | ||
45 | depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF) | ||
46 | default y | ||
47 | select PM_GENERIC_DOMAINS if PM | ||
48 | help | ||
49 | This enables support for the SCMI power domains which can be | ||
50 | enabled or disabled via the SCP firmware. | ||
51 | |||
52 | This driver can also be built as a module. If so, the module | ||
53 | will be called scmi_pm_domain. Note this may be needed early in | ||
54 | boot before the rootfs is available. | ||
55 | |||
22 | config ARM_SCPI_PROTOCOL | 56 | config ARM_SCPI_PROTOCOL |
23 | tristate "ARM System Control and Power Interface (SCPI) Message Protocol" | 57 | tristate "ARM System Control and Power Interface (SCPI) Message Protocol" |
24 | depends on ARM || ARM64 || COMPILE_TEST | 58 | depends on ARM || ARM64 || COMPILE_TEST |
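A configuration fragment pulling in the new stack might look like this (illustrative values; ARM_SCMI_PROTOCOL is a bool and requires the mailbox framework):

	CONFIG_MAILBOX=y
	CONFIG_ARM_SCMI_PROTOCOL=y
	# built-in rather than =m, since power domains may be needed early in boot
	CONFIG_ARM_SCMI_POWER_DOMAIN=y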
diff --git a/drivers/firmware/Makefile b/drivers/firmware/Makefile index b248238ddc6a..e18a041cfc53 100644 --- a/drivers/firmware/Makefile +++ b/drivers/firmware/Makefile | |||
@@ -25,6 +25,7 @@ obj-$(CONFIG_QCOM_SCM_32) += qcom_scm-32.o | |||
25 | CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a | 25 | CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a |
26 | obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o | 26 | obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o |
27 | 27 | ||
28 | obj-$(CONFIG_ARM_SCMI_PROTOCOL) += arm_scmi/ | ||
28 | obj-y += broadcom/ | 29 | obj-y += broadcom/ |
29 | obj-y += meson/ | 30 | obj-y += meson/ |
30 | obj-$(CONFIG_GOOGLE_FIRMWARE) += google/ | 31 | obj-$(CONFIG_GOOGLE_FIRMWARE) += google/ |
diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile new file mode 100644 index 000000000000..99e36c580fbc --- /dev/null +++ b/drivers/firmware/arm_scmi/Makefile | |||
@@ -0,0 +1,5 @@ | |||
1 | obj-y = scmi-bus.o scmi-driver.o scmi-protocols.o | ||
2 | scmi-bus-y = bus.o | ||
3 | scmi-driver-y = driver.o | ||
4 | scmi-protocols-y = base.o clock.o perf.o power.o sensors.o | ||
5 | obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o | ||
diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c new file mode 100644 index 000000000000..0d3806c0d432 --- /dev/null +++ b/drivers/firmware/arm_scmi/base.c | |||
@@ -0,0 +1,253 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Base Protocol | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #include "common.h" | ||
9 | |||
10 | enum scmi_base_protocol_cmd { | ||
11 | BASE_DISCOVER_VENDOR = 0x3, | ||
12 | BASE_DISCOVER_SUB_VENDOR = 0x4, | ||
13 | BASE_DISCOVER_IMPLEMENT_VERSION = 0x5, | ||
14 | BASE_DISCOVER_LIST_PROTOCOLS = 0x6, | ||
15 | BASE_DISCOVER_AGENT = 0x7, | ||
16 | BASE_NOTIFY_ERRORS = 0x8, | ||
17 | }; | ||
18 | |||
19 | struct scmi_msg_resp_base_attributes { | ||
20 | u8 num_protocols; | ||
21 | u8 num_agents; | ||
22 | __le16 reserved; | ||
23 | }; | ||
24 | |||
25 | /** | ||
26 | * scmi_base_attributes_get() - gets the implementation details | ||
27 | * that are associated with the base protocol. | ||
28 | * | ||
29 | * @handle - SCMI entity handle | ||
30 | * | ||
31 | * Return: 0 on success, else appropriate SCMI error. | ||
32 | */ | ||
33 | static int scmi_base_attributes_get(const struct scmi_handle *handle) | ||
34 | { | ||
35 | int ret; | ||
36 | struct scmi_xfer *t; | ||
37 | struct scmi_msg_resp_base_attributes *attr_info; | ||
38 | struct scmi_revision_info *rev = handle->version; | ||
39 | |||
40 | ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES, | ||
41 | SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t); | ||
42 | if (ret) | ||
43 | return ret; | ||
44 | |||
45 | ret = scmi_do_xfer(handle, t); | ||
46 | if (!ret) { | ||
47 | attr_info = t->rx.buf; | ||
48 | rev->num_protocols = attr_info->num_protocols; | ||
49 | rev->num_agents = attr_info->num_agents; | ||
50 | } | ||
51 | |||
52 | scmi_one_xfer_put(handle, t); | ||
53 | return ret; | ||
54 | } | ||
55 | |||
56 | /** | ||
57 | * scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string. | ||
58 | * | ||
59 | * @handle - SCMI entity handle | ||
60 | * @sub_vendor - specify true if sub-vendor ID is needed | ||
61 | * | ||
62 | * Return: 0 on success, else appropriate SCMI error. | ||
63 | */ | ||
64 | static int | ||
65 | scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor) | ||
66 | { | ||
67 | u8 cmd; | ||
68 | int ret, size; | ||
69 | char *vendor_id; | ||
70 | struct scmi_xfer *t; | ||
71 | struct scmi_revision_info *rev = handle->version; | ||
72 | |||
73 | if (sub_vendor) { | ||
74 | cmd = BASE_DISCOVER_SUB_VENDOR; | ||
75 | vendor_id = rev->sub_vendor_id; | ||
76 | size = ARRAY_SIZE(rev->sub_vendor_id); | ||
77 | } else { | ||
78 | cmd = BASE_DISCOVER_VENDOR; | ||
79 | vendor_id = rev->vendor_id; | ||
80 | size = ARRAY_SIZE(rev->vendor_id); | ||
81 | } | ||
82 | |||
83 | ret = scmi_one_xfer_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t); | ||
84 | if (ret) | ||
85 | return ret; | ||
86 | |||
87 | ret = scmi_do_xfer(handle, t); | ||
88 | if (!ret) | ||
89 | memcpy(vendor_id, t->rx.buf, size); | ||
90 | |||
91 | scmi_one_xfer_put(handle, t); | ||
92 | return ret; | ||
93 | } | ||
94 | |||
95 | /** | ||
96 | * scmi_base_implementation_version_get() - gets a vendor-specific | ||
97 | * implementation 32-bit version. The format of the version number is | ||
98 | * vendor-specific. | ||
99 | * | ||
100 | * @handle - SCMI entity handle | ||
101 | * | ||
102 | * Return: 0 on success, else appropriate SCMI error. | ||
103 | */ | ||
104 | static int | ||
105 | scmi_base_implementation_version_get(const struct scmi_handle *handle) | ||
106 | { | ||
107 | int ret; | ||
108 | __le32 *impl_ver; | ||
109 | struct scmi_xfer *t; | ||
110 | struct scmi_revision_info *rev = handle->version; | ||
111 | |||
112 | ret = scmi_one_xfer_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION, | ||
113 | SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t); | ||
114 | if (ret) | ||
115 | return ret; | ||
116 | |||
117 | ret = scmi_do_xfer(handle, t); | ||
118 | if (!ret) { | ||
119 | impl_ver = t->rx.buf; | ||
120 | rev->impl_ver = le32_to_cpu(*impl_ver); | ||
121 | } | ||
122 | |||
123 | scmi_one_xfer_put(handle, t); | ||
124 | return ret; | ||
125 | } | ||
126 | |||
127 | /** | ||
128 | * scmi_base_implementation_list_get() - gets the list of protocols that | ||
129 | * OSPM is allowed to access | ||
130 | * | ||
131 | * @handle - SCMI entity handle | ||
132 | * @protocols_imp - pointer to hold the list of protocol identifiers | ||
133 | * | ||
134 | * Return: 0 on success, else appropriate SCMI error. | ||
135 | */ | ||
136 | static int scmi_base_implementation_list_get(const struct scmi_handle *handle, | ||
137 | u8 *protocols_imp) | ||
138 | { | ||
139 | u8 *list; | ||
140 | int ret, loop; | ||
141 | struct scmi_xfer *t; | ||
142 | __le32 *num_skip, *num_ret; | ||
143 | u32 tot_num_ret = 0, loop_num_ret; | ||
144 | struct device *dev = handle->dev; | ||
145 | |||
146 | ret = scmi_one_xfer_init(handle, BASE_DISCOVER_LIST_PROTOCOLS, | ||
147 | SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t); | ||
148 | if (ret) | ||
149 | return ret; | ||
150 | |||
151 | num_skip = t->tx.buf; | ||
152 | num_ret = t->rx.buf; | ||
153 | list = t->rx.buf + sizeof(*num_ret); | ||
154 | |||
155 | do { | ||
156 | /* Set the number of protocols to be skipped/already read */ | ||
157 | *num_skip = cpu_to_le32(tot_num_ret); | ||
158 | |||
159 | ret = scmi_do_xfer(handle, t); | ||
160 | if (ret) | ||
161 | break; | ||
162 | |||
163 | loop_num_ret = le32_to_cpu(*num_ret); | ||
164 | if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) { | ||
165 | dev_err(dev, "No. of protocols > MAX_PROTOCOLS_IMP\n"); | ||
166 | break; | ||
167 | } | ||
168 | |||
169 | for (loop = 0; loop < loop_num_ret; loop++) | ||
170 | protocols_imp[tot_num_ret + loop] = *(list + loop); | ||
171 | |||
172 | tot_num_ret += loop_num_ret; | ||
173 | } while (loop_num_ret); | ||
174 | |||
175 | scmi_one_xfer_put(handle, t); | ||
176 | return ret; | ||
177 | } | ||
178 | |||
179 | /** | ||
180 | * scmi_base_discover_agent_get() - discover the name of an agent | ||
181 | * | ||
182 | * @handle - SCMI entity handle | ||
183 | * @id - Agent identifier | ||
184 | * @name - Agent identifier ASCII string | ||
185 | * | ||
186 | * An agent id of 0 is reserved to identify the platform itself. | ||
187 | * Generally the operating system is represented as "OSPM". | ||
188 | * | ||
189 | * Return: 0 on success, else appropriate SCMI error. | ||
190 | */ | ||
191 | static int scmi_base_discover_agent_get(const struct scmi_handle *handle, | ||
192 | int id, char *name) | ||
193 | { | ||
194 | int ret; | ||
195 | struct scmi_xfer *t; | ||
196 | |||
197 | ret = scmi_one_xfer_init(handle, BASE_DISCOVER_AGENT, | ||
198 | SCMI_PROTOCOL_BASE, sizeof(__le32), | ||
199 | SCMI_MAX_STR_SIZE, &t); | ||
200 | if (ret) | ||
201 | return ret; | ||
202 | |||
203 | *(__le32 *)t->tx.buf = cpu_to_le32(id); | ||
204 | |||
205 | ret = scmi_do_xfer(handle, t); | ||
206 | if (!ret) | ||
207 | memcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE); | ||
208 | |||
209 | scmi_one_xfer_put(handle, t); | ||
210 | return ret; | ||
211 | } | ||
212 | |||
213 | int scmi_base_protocol_init(struct scmi_handle *h) | ||
214 | { | ||
215 | int id, ret; | ||
216 | u8 *prot_imp; | ||
217 | u32 version; | ||
218 | char name[SCMI_MAX_STR_SIZE]; | ||
219 | const struct scmi_handle *handle = h; | ||
220 | struct device *dev = handle->dev; | ||
221 | struct scmi_revision_info *rev = handle->version; | ||
222 | |||
223 | ret = scmi_version_get(handle, SCMI_PROTOCOL_BASE, &version); | ||
224 | if (ret) | ||
225 | return ret; | ||
226 | |||
227 | prot_imp = devm_kcalloc(dev, MAX_PROTOCOLS_IMP, sizeof(u8), GFP_KERNEL); | ||
228 | if (!prot_imp) | ||
229 | return -ENOMEM; | ||
230 | |||
231 | rev->major_ver = PROTOCOL_REV_MAJOR(version); | ||
232 | rev->minor_ver = PROTOCOL_REV_MINOR(version); | ||
233 | |||
234 | scmi_base_attributes_get(handle); | ||
235 | scmi_base_vendor_id_get(handle, false); | ||
236 | scmi_base_vendor_id_get(handle, true); | ||
237 | scmi_base_implementation_version_get(handle); | ||
238 | scmi_base_implementation_list_get(handle, prot_imp); | ||
239 | scmi_setup_protocol_implemented(handle, prot_imp); | ||
240 | |||
241 | dev_info(dev, "SCMI Protocol v%d.%d '%s:%s' Firmware version 0x%x\n", | ||
242 | rev->major_ver, rev->minor_ver, rev->vendor_id, | ||
243 | rev->sub_vendor_id, rev->impl_ver); | ||
244 | dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols, | ||
245 | rev->num_agents); | ||
246 | |||
247 | for (id = 0; id < rev->num_agents; id++) { | ||
248 | scmi_base_discover_agent_get(handle, id, name); | ||
249 | dev_dbg(dev, "Agent %d: %s\n", id, name); | ||
250 | } | ||
251 | |||
252 | return 0; | ||
253 | } | ||
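As a worked example of the BASE_DISCOVER_LIST_PROTOCOLS loop above, assume a hypothetical platform that implements five protocols and returns at most four identifiers per message: the first iteration sends *num_skip = 0 and receives num_returned = 4 with num_remaining = 1; the second sends *num_skip = 4 and receives num_returned = 1; a final pass then returns num_returned = 0, which terminates the while (loop_num_ret) loop with all five identifiers stored in protocols_imp.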
diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c new file mode 100644 index 000000000000..f2760a596c28 --- /dev/null +++ b/drivers/firmware/arm_scmi/bus.c | |||
@@ -0,0 +1,221 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Message Protocol bus layer | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt | ||
9 | |||
10 | #include <linux/types.h> | ||
11 | #include <linux/module.h> | ||
12 | #include <linux/kernel.h> | ||
13 | #include <linux/slab.h> | ||
14 | #include <linux/device.h> | ||
15 | |||
16 | #include "common.h" | ||
17 | |||
18 | static DEFINE_IDA(scmi_bus_id); | ||
19 | static DEFINE_IDR(scmi_protocols); | ||
20 | static DEFINE_SPINLOCK(protocol_lock); | ||
21 | |||
22 | static const struct scmi_device_id * | ||
23 | scmi_dev_match_id(struct scmi_device *scmi_dev, struct scmi_driver *scmi_drv) | ||
24 | { | ||
25 | const struct scmi_device_id *id = scmi_drv->id_table; | ||
26 | |||
27 | if (!id) | ||
28 | return NULL; | ||
29 | |||
30 | for (; id->protocol_id; id++) | ||
31 | if (id->protocol_id == scmi_dev->protocol_id) | ||
32 | return id; | ||
33 | |||
34 | return NULL; | ||
35 | } | ||
36 | |||
37 | static int scmi_dev_match(struct device *dev, struct device_driver *drv) | ||
38 | { | ||
39 | struct scmi_driver *scmi_drv = to_scmi_driver(drv); | ||
40 | struct scmi_device *scmi_dev = to_scmi_dev(dev); | ||
41 | const struct scmi_device_id *id; | ||
42 | |||
43 | id = scmi_dev_match_id(scmi_dev, scmi_drv); | ||
44 | if (id) | ||
45 | return 1; | ||
46 | |||
47 | return 0; | ||
48 | } | ||
49 | |||
50 | static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle) | ||
51 | { | ||
52 | scmi_prot_init_fn_t fn = idr_find(&scmi_protocols, protocol_id); | ||
53 | |||
54 | if (unlikely(!fn)) | ||
55 | return -EINVAL; | ||
56 | return fn(handle); | ||
57 | } | ||
58 | |||
59 | static int scmi_dev_probe(struct device *dev) | ||
60 | { | ||
61 | struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver); | ||
62 | struct scmi_device *scmi_dev = to_scmi_dev(dev); | ||
63 | const struct scmi_device_id *id; | ||
64 | int ret; | ||
65 | |||
66 | id = scmi_dev_match_id(scmi_dev, scmi_drv); | ||
67 | if (!id) | ||
68 | return -ENODEV; | ||
69 | |||
70 | if (!scmi_dev->handle) | ||
71 | return -EPROBE_DEFER; | ||
72 | |||
73 | ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle); | ||
74 | if (ret) | ||
75 | return ret; | ||
76 | |||
77 | return scmi_drv->probe(scmi_dev); | ||
78 | } | ||
79 | |||
80 | static int scmi_dev_remove(struct device *dev) | ||
81 | { | ||
82 | struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver); | ||
83 | struct scmi_device *scmi_dev = to_scmi_dev(dev); | ||
84 | |||
85 | if (scmi_drv->remove) | ||
86 | scmi_drv->remove(scmi_dev); | ||
87 | |||
88 | return 0; | ||
89 | } | ||
90 | |||
91 | static struct bus_type scmi_bus_type = { | ||
92 | .name = "scmi_protocol", | ||
93 | .match = scmi_dev_match, | ||
94 | .probe = scmi_dev_probe, | ||
95 | .remove = scmi_dev_remove, | ||
96 | }; | ||
97 | |||
98 | int scmi_driver_register(struct scmi_driver *driver, struct module *owner, | ||
99 | const char *mod_name) | ||
100 | { | ||
101 | int retval; | ||
102 | |||
103 | driver->driver.bus = &scmi_bus_type; | ||
104 | driver->driver.name = driver->name; | ||
105 | driver->driver.owner = owner; | ||
106 | driver->driver.mod_name = mod_name; | ||
107 | |||
108 | retval = driver_register(&driver->driver); | ||
109 | if (!retval) | ||
110 | pr_debug("registered new scmi driver %s\n", driver->name); | ||
111 | |||
112 | return retval; | ||
113 | } | ||
114 | EXPORT_SYMBOL_GPL(scmi_driver_register); | ||
115 | |||
116 | void scmi_driver_unregister(struct scmi_driver *driver) | ||
117 | { | ||
118 | driver_unregister(&driver->driver); | ||
119 | } | ||
120 | EXPORT_SYMBOL_GPL(scmi_driver_unregister); | ||
121 | |||
122 | struct scmi_device * | ||
123 | scmi_device_create(struct device_node *np, struct device *parent, int protocol) | ||
124 | { | ||
125 | int id, retval; | ||
126 | struct scmi_device *scmi_dev; | ||
127 | |||
128 | id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL); | ||
129 | if (id < 0) | ||
130 | return NULL; | ||
131 | |||
132 | scmi_dev = kzalloc(sizeof(*scmi_dev), GFP_KERNEL); | ||
133 | if (!scmi_dev) | ||
134 | goto no_mem; | ||
135 | |||
136 | scmi_dev->id = id; | ||
137 | scmi_dev->protocol_id = protocol; | ||
138 | scmi_dev->dev.parent = parent; | ||
139 | scmi_dev->dev.of_node = np; | ||
140 | scmi_dev->dev.bus = &scmi_bus_type; | ||
141 | dev_set_name(&scmi_dev->dev, "scmi_dev.%d", id); | ||
142 | |||
143 | retval = device_register(&scmi_dev->dev); | ||
144 | if (!retval) | ||
145 | return scmi_dev; | ||
146 | |||
147 | put_device(&scmi_dev->dev); | ||
148 | kfree(scmi_dev); | ||
149 | no_mem: | ||
150 | ida_simple_remove(&scmi_bus_id, id); | ||
151 | return NULL; | ||
152 | } | ||
153 | |||
154 | void scmi_device_destroy(struct scmi_device *scmi_dev) | ||
155 | { | ||
156 | scmi_handle_put(scmi_dev->handle); | ||
157 | device_unregister(&scmi_dev->dev); | ||
158 | ida_simple_remove(&scmi_bus_id, scmi_dev->id); | ||
159 | kfree(scmi_dev); | ||
160 | } | ||
161 | |||
162 | void scmi_set_handle(struct scmi_device *scmi_dev) | ||
163 | { | ||
164 | scmi_dev->handle = scmi_handle_get(&scmi_dev->dev); | ||
165 | } | ||
166 | |||
167 | int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn) | ||
168 | { | ||
169 | int ret; | ||
170 | |||
171 | spin_lock(&protocol_lock); | ||
172 | ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1, | ||
173 | GFP_ATOMIC); | ||
174 | if (ret != protocol_id) | ||
175 | pr_err("unable to allocate SCMI idr slot, err %d\n", ret); | ||
176 | spin_unlock(&protocol_lock); | ||
177 | |||
178 | return ret; | ||
179 | } | ||
180 | EXPORT_SYMBOL_GPL(scmi_protocol_register); | ||
181 | |||
182 | void scmi_protocol_unregister(int protocol_id) | ||
183 | { | ||
184 | spin_lock(&protocol_lock); | ||
185 | idr_remove(&scmi_protocols, protocol_id); | ||
186 | spin_unlock(&protocol_lock); | ||
187 | } | ||
188 | EXPORT_SYMBOL_GPL(scmi_protocol_unregister); | ||
189 | |||
190 | static int __scmi_devices_unregister(struct device *dev, void *data) | ||
191 | { | ||
192 | struct scmi_device *scmi_dev = to_scmi_dev(dev); | ||
193 | |||
194 | scmi_device_destroy(scmi_dev); | ||
195 | return 0; | ||
196 | } | ||
197 | |||
198 | static void scmi_devices_unregister(void) | ||
199 | { | ||
200 | bus_for_each_dev(&scmi_bus_type, NULL, NULL, __scmi_devices_unregister); | ||
201 | } | ||
202 | |||
203 | static int __init scmi_bus_init(void) | ||
204 | { | ||
205 | int retval; | ||
206 | |||
207 | retval = bus_register(&scmi_bus_type); | ||
208 | if (retval) | ||
209 | pr_err("scmi protocol bus register failed (%d)\n", retval); | ||
210 | |||
211 | return retval; | ||
212 | } | ||
213 | subsys_initcall(scmi_bus_init); | ||
214 | |||
215 | static void __exit scmi_bus_exit(void) | ||
216 | { | ||
217 | scmi_devices_unregister(); | ||
218 | bus_unregister(&scmi_bus_type); | ||
219 | ida_destroy(&scmi_bus_id); | ||
220 | } | ||
221 | module_exit(scmi_bus_exit); | ||
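The creation half of this bus API is exercised by the core in driver.c; the call sites are not in this hunk, but a hypothetical core-side sequence for one protocol child node np would look like:

	/* sketch only: real call sites and error handling live in driver.c */
	struct scmi_device *sdev;

	sdev = scmi_device_create(np, parent, protocol_id);
	if (!sdev)
		return -ENOMEM;

	/* scmi_dev_probe() returns -EPROBE_DEFER until this handle is set */
	scmi_set_handle(sdev);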
diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c new file mode 100644 index 000000000000..e6f17825db79 --- /dev/null +++ b/drivers/firmware/arm_scmi/clock.c | |||
@@ -0,0 +1,343 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Clock Protocol | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #include "common.h" | ||
9 | |||
10 | enum scmi_clock_protocol_cmd { | ||
11 | CLOCK_ATTRIBUTES = 0x3, | ||
12 | CLOCK_DESCRIBE_RATES = 0x4, | ||
13 | CLOCK_RATE_SET = 0x5, | ||
14 | CLOCK_RATE_GET = 0x6, | ||
15 | CLOCK_CONFIG_SET = 0x7, | ||
16 | }; | ||
17 | |||
18 | struct scmi_msg_resp_clock_protocol_attributes { | ||
19 | __le16 num_clocks; | ||
20 | u8 max_async_req; | ||
21 | u8 reserved; | ||
22 | }; | ||
23 | |||
24 | struct scmi_msg_resp_clock_attributes { | ||
25 | __le32 attributes; | ||
26 | #define CLOCK_ENABLE BIT(0) | ||
27 | u8 name[SCMI_MAX_STR_SIZE]; | ||
28 | }; | ||
29 | |||
30 | struct scmi_clock_set_config { | ||
31 | __le32 id; | ||
32 | __le32 attributes; | ||
33 | }; | ||
34 | |||
35 | struct scmi_msg_clock_describe_rates { | ||
36 | __le32 id; | ||
37 | __le32 rate_index; | ||
38 | }; | ||
39 | |||
40 | struct scmi_msg_resp_clock_describe_rates { | ||
41 | __le32 num_rates_flags; | ||
42 | #define NUM_RETURNED(x) ((x) & 0xfff) | ||
43 | #define RATE_DISCRETE(x) !((x) & BIT(12)) | ||
44 | #define NUM_REMAINING(x) ((x) >> 16) | ||
45 | struct { | ||
46 | __le32 value_low; | ||
47 | __le32 value_high; | ||
48 | } rate[0]; | ||
49 | #define RATE_TO_U64(X) \ | ||
50 | ({ \ | ||
51 | typeof(X) x = (X); \ | ||
52 | le32_to_cpu((x).value_low) | (u64)le32_to_cpu((x).value_high) << 32; \ | ||
53 | }) | ||
54 | }; | ||
55 | |||
56 | struct scmi_clock_set_rate { | ||
57 | __le32 flags; | ||
58 | #define CLOCK_SET_ASYNC BIT(0) | ||
59 | #define CLOCK_SET_DELAYED BIT(1) | ||
60 | #define CLOCK_SET_ROUND_UP BIT(2) | ||
61 | #define CLOCK_SET_ROUND_AUTO BIT(3) | ||
62 | __le32 id; | ||
63 | __le32 value_low; | ||
64 | __le32 value_high; | ||
65 | }; | ||
66 | |||
67 | struct clock_info { | ||
68 | int num_clocks; | ||
69 | int max_async_req; | ||
70 | struct scmi_clock_info *clk; | ||
71 | }; | ||
72 | |||
73 | static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle, | ||
74 | struct clock_info *ci) | ||
75 | { | ||
76 | int ret; | ||
77 | struct scmi_xfer *t; | ||
78 | struct scmi_msg_resp_clock_protocol_attributes *attr; | ||
79 | |||
80 | ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES, | ||
81 | SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t); | ||
82 | if (ret) | ||
83 | return ret; | ||
84 | |||
85 | attr = t->rx.buf; | ||
86 | |||
87 | ret = scmi_do_xfer(handle, t); | ||
88 | if (!ret) { | ||
89 | ci->num_clocks = le16_to_cpu(attr->num_clocks); | ||
90 | ci->max_async_req = attr->max_async_req; | ||
91 | } | ||
92 | |||
93 | scmi_one_xfer_put(handle, t); | ||
94 | return ret; | ||
95 | } | ||
96 | |||
97 | static int scmi_clock_attributes_get(const struct scmi_handle *handle, | ||
98 | u32 clk_id, struct scmi_clock_info *clk) | ||
99 | { | ||
100 | int ret; | ||
101 | struct scmi_xfer *t; | ||
102 | struct scmi_msg_resp_clock_attributes *attr; | ||
103 | |||
104 | ret = scmi_one_xfer_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK, | ||
105 | sizeof(clk_id), sizeof(*attr), &t); | ||
106 | if (ret) | ||
107 | return ret; | ||
108 | |||
109 | *(__le32 *)t->tx.buf = cpu_to_le32(clk_id); | ||
110 | attr = t->rx.buf; | ||
111 | |||
112 | ret = scmi_do_xfer(handle, t); | ||
113 | if (!ret) | ||
114 | memcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE); | ||
115 | else | ||
116 | clk->name[0] = '\0'; | ||
117 | |||
118 | scmi_one_xfer_put(handle, t); | ||
119 | return ret; | ||
120 | } | ||
121 | |||
122 | static int | ||
123 | scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id, | ||
124 | struct scmi_clock_info *clk) | ||
125 | { | ||
126 | u64 *rate; | ||
127 | int ret, cnt; | ||
128 | bool rate_discrete = false; | ||
129 | u32 tot_rate_cnt = 0, rates_flag; | ||
130 | u16 num_returned, num_remaining; | ||
131 | struct scmi_xfer *t; | ||
132 | struct scmi_msg_clock_describe_rates *clk_desc; | ||
133 | struct scmi_msg_resp_clock_describe_rates *rlist; | ||
134 | |||
135 | ret = scmi_one_xfer_init(handle, CLOCK_DESCRIBE_RATES, | ||
136 | SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t); | ||
137 | if (ret) | ||
138 | return ret; | ||
139 | |||
140 | clk_desc = t->tx.buf; | ||
141 | rlist = t->rx.buf; | ||
142 | |||
143 | do { | ||
144 | clk_desc->id = cpu_to_le32(clk_id); | ||
145 | /* Set the number of rates to be skipped/already read */ | ||
146 | clk_desc->rate_index = cpu_to_le32(tot_rate_cnt); | ||
147 | |||
148 | ret = scmi_do_xfer(handle, t); | ||
149 | if (ret) | ||
150 | goto err; | ||
151 | |||
152 | rates_flag = le32_to_cpu(rlist->num_rates_flags); | ||
153 | num_remaining = NUM_REMAINING(rates_flag); | ||
154 | rate_discrete = RATE_DISCRETE(rates_flag); | ||
155 | num_returned = NUM_RETURNED(rates_flag); | ||
156 | |||
157 | if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) { | ||
158 | dev_err(handle->dev, "No. of rates > MAX_NUM_RATES\n"); | ||
159 | break; | ||
160 | } | ||
161 | |||
162 | if (!rate_discrete) { | ||
163 | clk->range.min_rate = RATE_TO_U64(rlist->rate[0]); | ||
164 | clk->range.max_rate = RATE_TO_U64(rlist->rate[1]); | ||
165 | clk->range.step_size = RATE_TO_U64(rlist->rate[2]); | ||
166 | dev_dbg(handle->dev, "Min %llu Max %llu Step %llu Hz\n", | ||
167 | clk->range.min_rate, clk->range.max_rate, | ||
168 | clk->range.step_size); | ||
169 | break; | ||
170 | } | ||
171 | |||
172 | rate = &clk->list.rates[tot_rate_cnt]; | ||
173 | for (cnt = 0; cnt < num_returned; cnt++, rate++) { | ||
174 | *rate = RATE_TO_U64(rlist->rate[cnt]); | ||
175 | dev_dbg(handle->dev, "Rate %llu Hz\n", *rate); | ||
176 | } | ||
177 | |||
178 | tot_rate_cnt += num_returned; | ||
179 | /* | ||
180 | * check for both returned and remaining to avoid infinite | ||
181 | * loop due to buggy firmware | ||
182 | */ | ||
183 | } while (num_returned && num_remaining); | ||
184 | |||
185 | if (rate_discrete) | ||
186 | clk->list.num_rates = tot_rate_cnt; | ||
187 | |||
188 | err: | ||
189 | scmi_one_xfer_put(handle, t); | ||
190 | return ret; | ||
191 | } | ||
192 | |||
193 | static int | ||
194 | scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value) | ||
195 | { | ||
196 | int ret; | ||
197 | struct scmi_xfer *t; | ||
198 | |||
199 | ret = scmi_one_xfer_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK, | ||
200 | sizeof(__le32), sizeof(u64), &t); | ||
201 | if (ret) | ||
202 | return ret; | ||
203 | |||
204 | *(__le32 *)t->tx.buf = cpu_to_le32(clk_id); | ||
205 | |||
206 | ret = scmi_do_xfer(handle, t); | ||
207 | if (!ret) { | ||
208 | __le32 *pval = t->rx.buf; | ||
209 | |||
210 | *value = le32_to_cpu(*pval); | ||
211 | *value |= (u64)le32_to_cpu(*(pval + 1)) << 32; | ||
212 | } | ||
213 | |||
214 | scmi_one_xfer_put(handle, t); | ||
215 | return ret; | ||
216 | } | ||
217 | |||
218 | static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id, | ||
219 | u32 config, u64 rate) | ||
220 | { | ||
221 | int ret; | ||
222 | struct scmi_xfer *t; | ||
223 | struct scmi_clock_set_rate *cfg; | ||
224 | |||
225 | ret = scmi_one_xfer_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK, | ||
226 | sizeof(*cfg), 0, &t); | ||
227 | if (ret) | ||
228 | return ret; | ||
229 | |||
230 | cfg = t->tx.buf; | ||
231 | cfg->flags = cpu_to_le32(config); | ||
232 | cfg->id = cpu_to_le32(clk_id); | ||
233 | cfg->value_low = cpu_to_le32(rate & 0xffffffff); | ||
234 | cfg->value_high = cpu_to_le32(rate >> 32); | ||
235 | |||
236 | ret = scmi_do_xfer(handle, t); | ||
237 | |||
238 | scmi_one_xfer_put(handle, t); | ||
239 | return ret; | ||
240 | } | ||
241 | |||
242 | static int | ||
243 | scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config) | ||
244 | { | ||
245 | int ret; | ||
246 | struct scmi_xfer *t; | ||
247 | struct scmi_clock_set_config *cfg; | ||
248 | |||
249 | ret = scmi_one_xfer_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK, | ||
250 | sizeof(*cfg), 0, &t); | ||
251 | if (ret) | ||
252 | return ret; | ||
253 | |||
254 | cfg = t->tx.buf; | ||
255 | cfg->id = cpu_to_le32(clk_id); | ||
256 | cfg->attributes = cpu_to_le32(config); | ||
257 | |||
258 | ret = scmi_do_xfer(handle, t); | ||
259 | |||
260 | scmi_one_xfer_put(handle, t); | ||
261 | return ret; | ||
262 | } | ||
263 | |||
264 | static int scmi_clock_enable(const struct scmi_handle *handle, u32 clk_id) | ||
265 | { | ||
266 | return scmi_clock_config_set(handle, clk_id, CLOCK_ENABLE); | ||
267 | } | ||
268 | |||
269 | static int scmi_clock_disable(const struct scmi_handle *handle, u32 clk_id) | ||
270 | { | ||
271 | return scmi_clock_config_set(handle, clk_id, 0); | ||
272 | } | ||
273 | |||
274 | static int scmi_clock_count_get(const struct scmi_handle *handle) | ||
275 | { | ||
276 | struct clock_info *ci = handle->clk_priv; | ||
277 | |||
278 | return ci->num_clocks; | ||
279 | } | ||
280 | |||
281 | static const struct scmi_clock_info * | ||
282 | scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id) | ||
283 | { | ||
284 | struct clock_info *ci = handle->clk_priv; | ||
285 | struct scmi_clock_info *clk = ci->clk + clk_id; | ||
286 | |||
287 | if (!clk->name[0]) | ||
288 | return NULL; | ||
289 | |||
290 | return clk; | ||
291 | } | ||
292 | |||
293 | static struct scmi_clk_ops clk_ops = { | ||
294 | .count_get = scmi_clock_count_get, | ||
295 | .info_get = scmi_clock_info_get, | ||
296 | .rate_get = scmi_clock_rate_get, | ||
297 | .rate_set = scmi_clock_rate_set, | ||
298 | .enable = scmi_clock_enable, | ||
299 | .disable = scmi_clock_disable, | ||
300 | }; | ||
301 | |||
302 | static int scmi_clock_protocol_init(struct scmi_handle *handle) | ||
303 | { | ||
304 | u32 version; | ||
305 | int clkid, ret; | ||
306 | struct clock_info *cinfo; | ||
307 | |||
308 | scmi_version_get(handle, SCMI_PROTOCOL_CLOCK, &version); | ||
309 | |||
310 | dev_dbg(handle->dev, "Clock Version %d.%d\n", | ||
311 | PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); | ||
312 | |||
313 | cinfo = devm_kzalloc(handle->dev, sizeof(*cinfo), GFP_KERNEL); | ||
314 | if (!cinfo) | ||
315 | return -ENOMEM; | ||
316 | |||
317 | scmi_clock_protocol_attributes_get(handle, cinfo); | ||
318 | |||
319 | cinfo->clk = devm_kcalloc(handle->dev, cinfo->num_clocks, | ||
320 | sizeof(*cinfo->clk), GFP_KERNEL); | ||
321 | if (!cinfo->clk) | ||
322 | return -ENOMEM; | ||
323 | |||
324 | for (clkid = 0; clkid < cinfo->num_clocks; clkid++) { | ||
325 | struct scmi_clock_info *clk = cinfo->clk + clkid; | ||
326 | |||
327 | ret = scmi_clock_attributes_get(handle, clkid, clk); | ||
328 | if (!ret) | ||
329 | scmi_clock_describe_rates_get(handle, clkid, clk); | ||
330 | } | ||
331 | |||
332 | handle->clk_ops = &clk_ops; | ||
333 | handle->clk_priv = cinfo; | ||
334 | |||
335 | return 0; | ||
336 | } | ||
337 | |||
338 | static int __init scmi_clock_init(void) | ||
339 | { | ||
340 | return scmi_protocol_register(SCMI_PROTOCOL_CLOCK, | ||
341 | &scmi_clock_protocol_init); | ||
342 | } | ||
343 | subsys_initcall(scmi_clock_init); | ||
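Once scmi_clock_protocol_init() has run, any client holding the handle can reach the firmware clocks through clk_ops; a hypothetical consumer sketch:

	/* sketch: dump every discovered clock and its current rate */
	static void scmi_dump_clocks(const struct scmi_handle *handle)
	{
		int i, count = handle->clk_ops->count_get(handle);

		for (i = 0; i < count; i++) {
			u64 rate;
			const struct scmi_clock_info *clk;

			clk = handle->clk_ops->info_get(handle, i);
			if (!clk)	/* attributes query failed earlier */
				continue;
			if (!handle->clk_ops->rate_get(handle, i, &rate))
				pr_info("clk %s: %llu Hz\n", clk->name, rate);
		}
	}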
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h new file mode 100644 index 000000000000..0c30234f9098 --- /dev/null +++ b/drivers/firmware/arm_scmi/common.h | |||
@@ -0,0 +1,105 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Message Protocol | ||
4 | * driver common header file containing some definitions, structures | ||
5 | * and function prototypes used in all the different SCMI protocols. | ||
6 | * | ||
7 | * Copyright (C) 2018 ARM Ltd. | ||
8 | */ | ||
9 | |||
10 | #include <linux/completion.h> | ||
11 | #include <linux/device.h> | ||
12 | #include <linux/errno.h> | ||
13 | #include <linux/kernel.h> | ||
14 | #include <linux/scmi_protocol.h> | ||
15 | #include <linux/types.h> | ||
16 | |||
17 | #define PROTOCOL_REV_MINOR_BITS 16 | ||
18 | #define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1) | ||
19 | #define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS) | ||
20 | #define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK) | ||
21 | #define MAX_PROTOCOLS_IMP 16 | ||
22 | #define MAX_OPPS 16 | ||
23 | |||
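For example, a (hypothetical) version word of 0x00020001 reported by firmware decodes with the macros above as major 2, minor 1:

	u32 version = 0x00020001;	/* hypothetical firmware reply */

	PROTOCOL_REV_MAJOR(version);	/* version >> 16    == 2 */
	PROTOCOL_REV_MINOR(version);	/* version & 0xffff == 1 */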
24 | enum scmi_common_cmd { | ||
25 | PROTOCOL_VERSION = 0x0, | ||
26 | PROTOCOL_ATTRIBUTES = 0x1, | ||
27 | PROTOCOL_MESSAGE_ATTRIBUTES = 0x2, | ||
28 | }; | ||
29 | |||
30 | /** | ||
31 | * struct scmi_msg_resp_prot_version - Response for a message | ||
32 | * | ||
33 | * @major_version: Major version of the ABI that firmware supports | ||
34 | * @minor_version: Minor version of the ABI that firmware supports | ||
35 | * | ||
36 | * In general, ABI version changes follow the rule that minor version increments | ||
37 | * are backward compatible. Major revision changes in ABI may not be | ||
38 | * backward compatible. | ||
39 | * | ||
40 | * Response to a generic message with message type PROTOCOL_VERSION | ||
41 | */ | ||
42 | struct scmi_msg_resp_prot_version { | ||
43 | __le16 minor_version; | ||
44 | __le16 major_version; | ||
45 | }; | ||
46 | |||
47 | /** | ||
48 | * struct scmi_msg_hdr - Message(Tx/Rx) header | ||
49 | * | ||
50 | * @id: The identifier of the command being sent | ||
51 | * @protocol_id: The identifier of the protocol used to send @id command | ||
52 | * @seq: The token to identify the message. When a message/command returns, | ||
53 | * the platform returns the whole message header unmodified including | ||
54 | * the token. | ||
55 | */ | ||
56 | struct scmi_msg_hdr { | ||
57 | u8 id; | ||
58 | u8 protocol_id; | ||
59 | u16 seq; | ||
60 | u32 status; | ||
61 | bool poll_completion; | ||
62 | }; | ||
63 | |||
64 | /** | ||
65 | * struct scmi_msg - Message(Tx/Rx) structure | ||
66 | * | ||
67 | * @buf: Buffer pointer | ||
68 | * @len: Length of data in the Buffer | ||
69 | */ | ||
70 | struct scmi_msg { | ||
71 | void *buf; | ||
72 | size_t len; | ||
73 | }; | ||
74 | |||
75 | /** | ||
76 | * struct scmi_xfer - Structure representing a message flow | ||
77 | * | ||
78 | * @hdr: Transmit message header | ||
79 | * @tx: Transmit message | ||
80 | * @rx: Receive message, the buffer should be pre-allocated to store | ||
81 | * message. If request-ACK protocol is used, we can reuse the same | ||
82 | * buffer for the rx path as we use for the tx path. | ||
83 | * @done: completion event | ||
84 | * @con_priv: Optional transport/connection private data | ||
85 | */ | ||
86 | struct scmi_xfer { | ||
87 | void *con_priv; | ||
88 | struct scmi_msg_hdr hdr; | ||
89 | struct scmi_msg tx; | ||
90 | struct scmi_msg rx; | ||
91 | struct completion done; | ||
92 | }; | ||
93 | |||
94 | void scmi_one_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer); | ||
95 | int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer); | ||
96 | int scmi_one_xfer_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id, | ||
97 | size_t tx_size, size_t rx_size, struct scmi_xfer **p); | ||
98 | int scmi_handle_put(const struct scmi_handle *handle); | ||
99 | struct scmi_handle *scmi_handle_get(struct device *dev); | ||
100 | void scmi_set_handle(struct scmi_device *scmi_dev); | ||
101 | int scmi_version_get(const struct scmi_handle *h, u8 protocol, u32 *version); | ||
102 | void scmi_setup_protocol_implemented(const struct scmi_handle *handle, | ||
103 | u8 *prot_imp); | ||
104 | |||
105 | int scmi_base_protocol_init(struct scmi_handle *h); | ||
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c new file mode 100644 index 000000000000..14b147135a0c --- /dev/null +++ b/drivers/firmware/arm_scmi/driver.c | |||
@@ -0,0 +1,871 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Message Protocol driver | ||
4 | * | ||
5 | * SCMI Message Protocol is used between the System Control Processor (SCP) | ||
6 | * and the Application Processors (AP). The Message Handling Unit (MHU) | ||
7 | * provides a mechanism for inter-processor communication between SCP's | ||
8 | * Cortex M3 and AP. | ||
9 | * | ||
10 | * SCP offers control and management of the core/cluster power states, | ||
11 | * various power domain DVFS including the core/cluster, certain system | ||
12 | * clocks configuration, thermal sensors and many others. | ||
13 | * | ||
14 | * Copyright (C) 2018 ARM Ltd. | ||
15 | */ | ||
16 | |||
17 | #include <linux/bitmap.h> | ||
18 | #include <linux/export.h> | ||
19 | #include <linux/io.h> | ||
20 | #include <linux/kernel.h> | ||
21 | #include <linux/ktime.h> | ||
22 | #include <linux/mailbox_client.h> | ||
23 | #include <linux/module.h> | ||
24 | #include <linux/of_address.h> | ||
25 | #include <linux/of_device.h> | ||
26 | #include <linux/processor.h> | ||
27 | #include <linux/semaphore.h> | ||
28 | #include <linux/slab.h> | ||
29 | |||
30 | #include "common.h" | ||
31 | |||
32 | #define MSG_ID_SHIFT 0 | ||
33 | #define MSG_ID_MASK 0xff | ||
34 | #define MSG_TYPE_SHIFT 8 | ||
35 | #define MSG_TYPE_MASK 0x3 | ||
36 | #define MSG_PROTOCOL_ID_SHIFT 10 | ||
37 | #define MSG_PROTOCOL_ID_MASK 0xff | ||
38 | #define MSG_TOKEN_ID_SHIFT 18 | ||
39 | #define MSG_TOKEN_ID_MASK 0x3ff | ||
40 | #define MSG_XTRACT_TOKEN(header) \ | ||
41 | (((header) >> MSG_TOKEN_ID_SHIFT) & MSG_TOKEN_ID_MASK) | ||
42 | |||
43 | enum scmi_error_codes { | ||
44 | SCMI_SUCCESS = 0, /* Success */ | ||
45 | SCMI_ERR_SUPPORT = -1, /* Not supported */ | ||
46 | SCMI_ERR_PARAMS = -2, /* Invalid Parameters */ | ||
47 | SCMI_ERR_ACCESS = -3, /* Invalid access/permission denied */ | ||
48 | SCMI_ERR_ENTRY = -4, /* Not found */ | ||
49 | SCMI_ERR_RANGE = -5, /* Value out of range */ | ||
50 | SCMI_ERR_BUSY = -6, /* Device busy */ | ||
51 | SCMI_ERR_COMMS = -7, /* Communication Error */ | ||
52 | SCMI_ERR_GENERIC = -8, /* Generic Error */ | ||
53 | SCMI_ERR_HARDWARE = -9, /* Hardware Error */ | ||
54 | SCMI_ERR_PROTOCOL = -10,/* Protocol Error */ | ||
55 | SCMI_ERR_MAX | ||
56 | }; | ||
57 | |||
58 | /* List of all SCMI devices active in system */ | ||
59 | static LIST_HEAD(scmi_list); | ||
60 | /* Protection for the entire list */ | ||
61 | static DEFINE_MUTEX(scmi_list_mutex); | ||
62 | |||
63 | /** | ||
64 | * struct scmi_xfers_info - Structure to manage transfer information | ||
65 | * | ||
66 | * @xfer_block: Preallocated Message array | ||
67 | * @xfer_alloc_table: Bitmap table for allocated messages. | ||
68 | * Index of this bitmap table is also used for message | ||
69 | * sequence identifier. | ||
70 | * @xfer_lock: Protection for message allocation | ||
71 | */ | ||
72 | struct scmi_xfers_info { | ||
73 | struct scmi_xfer *xfer_block; | ||
74 | unsigned long *xfer_alloc_table; | ||
75 | /* protect transfer allocation */ | ||
76 | spinlock_t xfer_lock; | ||
77 | }; | ||
78 | |||
79 | /** | ||
80 | * struct scmi_desc - Description of SoC integration | ||
81 | * | ||
82 | * @max_rx_timeout_ms: Timeout for communication with SoC (in Milliseconds) | ||
83 | * @max_msg: Maximum number of messages that can be pending | ||
84 | * simultaneously in the system | ||
85 | * @max_msg_size: Maximum size of data per message that can be handled. | ||
86 | */ | ||
87 | struct scmi_desc { | ||
88 | int max_rx_timeout_ms; | ||
89 | int max_msg; | ||
90 | int max_msg_size; | ||
91 | }; | ||
92 | |||
93 | /** | ||
94 | * struct scmi_chan_info - Structure representing an SCMI channel's information | ||
95 | * | ||
96 | * @cl: Mailbox Client | ||
97 | * @chan: Transmit/Receive mailbox channel | ||
98 | * @payload: Transmit/Receive mailbox channel payload area | ||
99 | * @dev: Reference to device in the SCMI hierarchy corresponding to this | ||
100 | * channel | ||
101 | */ | ||
102 | struct scmi_chan_info { | ||
103 | struct mbox_client cl; | ||
104 | struct mbox_chan *chan; | ||
105 | void __iomem *payload; | ||
106 | struct device *dev; | ||
107 | struct scmi_handle *handle; | ||
108 | }; | ||
109 | |||
110 | /** | ||
111 | * struct scmi_info - Structure representing a SCMI instance | ||
112 | * | ||
113 | * @dev: Device pointer | ||
114 | * @desc: SoC description for this instance | ||
115 | * @handle: Instance of SCMI handle to send to clients | ||
116 | * @version: SCMI revision information containing protocol version, | ||
117 | * implementation version and (sub-)vendor identification. | ||
118 | * @minfo: Message info | ||
119 | * @tx_idr: IDR object to map protocol id to channel info pointer | ||
120 | * @protocols_imp: list of protocols implemented, currently maximum of | ||
121 | * MAX_PROTOCOLS_IMP elements allocated by the base protocol | ||
122 | * @node: list head | ||
123 | * @users: Number of users of this instance | ||
124 | */ | ||
125 | struct scmi_info { | ||
126 | struct device *dev; | ||
127 | const struct scmi_desc *desc; | ||
128 | struct scmi_revision_info version; | ||
129 | struct scmi_handle handle; | ||
130 | struct scmi_xfers_info minfo; | ||
131 | struct idr tx_idr; | ||
132 | u8 *protocols_imp; | ||
133 | struct list_head node; | ||
134 | int users; | ||
135 | }; | ||
136 | |||
137 | #define client_to_scmi_chan_info(c) container_of(c, struct scmi_chan_info, cl) | ||
138 | #define handle_to_scmi_info(h) container_of(h, struct scmi_info, handle) | ||
139 | |||
140 | /* | ||
141 | * SCMI specification requires all parameters, message headers, return | ||
142 | * arguments or any protocol data to be expressed in little endian | ||
143 | * format only. | ||
144 | */ | ||
145 | struct scmi_shared_mem { | ||
146 | __le32 reserved; | ||
147 | __le32 channel_status; | ||
148 | #define SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR BIT(1) | ||
149 | #define SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE BIT(0) | ||
150 | __le32 reserved1[2]; | ||
151 | __le32 flags; | ||
152 | #define SCMI_SHMEM_FLAG_INTR_ENABLED BIT(0) | ||
153 | __le32 length; | ||
154 | __le32 msg_header; | ||
155 | u8 msg_payload[0]; | ||
156 | }; | ||
157 | |||
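Spelled out, the structure above implies the following byte offsets within the shared memory area:

	/*
	 *  0  reserved
	 *  4  channel_status
	 *  8  reserved1[2]
	 * 16  flags
	 * 20  length       (sizeof(msg_header) + payload bytes)
	 * 24  msg_header
	 * 28  msg_payload  (status word first, then any return values)
	 */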
158 | static const int scmi_linux_errmap[] = { | ||
159 | /* better than a switch case as long as the error values are contiguous */ | ||
160 | 0, /* SCMI_SUCCESS */ | ||
161 | -EOPNOTSUPP, /* SCMI_ERR_SUPPORT */ | ||
162 | -EINVAL, /* SCMI_ERR_PARAMS */ | ||
163 | -EACCES, /* SCMI_ERR_ACCESS */ | ||
164 | -ENOENT, /* SCMI_ERR_ENTRY */ | ||
165 | -ERANGE, /* SCMI_ERR_RANGE */ | ||
166 | -EBUSY, /* SCMI_ERR_BUSY */ | ||
167 | -ECOMM, /* SCMI_ERR_COMMS */ | ||
168 | -EIO, /* SCMI_ERR_GENERIC */ | ||
169 | -EREMOTEIO, /* SCMI_ERR_HARDWARE */ | ||
170 | -EPROTO, /* SCMI_ERR_PROTOCOL */ | ||
171 | }; | ||
172 | |||
173 | static inline int scmi_to_linux_errno(int errno) | ||
174 | { | ||
175 | if (errno < SCMI_SUCCESS && errno >= SCMI_ERR_PROTOCOL) | ||
176 | return scmi_linux_errmap[-errno]; | ||
177 | return -EIO; | ||
178 | } | ||
179 | |||
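For instance, a platform status of SCMI_ERR_SUPPORT (-1) indexes slot 1 of the map, while anything outside the known range collapses to -EIO:

	scmi_to_linux_errno(SCMI_ERR_SUPPORT);	/* -> -EOPNOTSUPP */
	scmi_to_linux_errno(-42);		/* out of range -> -EIO */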
180 | /** | ||
181 | * scmi_dump_header_dbg() - Helper to dump a message header. | ||
182 | * | ||
183 | * @dev: Device pointer corresponding to the SCMI entity | ||
184 | * @hdr: pointer to header. | ||
185 | */ | ||
186 | static inline void scmi_dump_header_dbg(struct device *dev, | ||
187 | struct scmi_msg_hdr *hdr) | ||
188 | { | ||
189 | dev_dbg(dev, "Command ID: %x Sequence ID: %x Protocol: %x\n", | ||
190 | hdr->id, hdr->seq, hdr->protocol_id); | ||
191 | } | ||
192 | |||
193 | static void scmi_fetch_response(struct scmi_xfer *xfer, | ||
194 | struct scmi_shared_mem __iomem *mem) | ||
195 | { | ||
196 | xfer->hdr.status = ioread32(mem->msg_payload); | ||
197 | /* Skip the length of header and status in payload area i.e. 8 bytes */ | ||
198 | xfer->rx.len = min_t(size_t, xfer->rx.len, ioread32(&mem->length) - 8); | ||
199 | |||
200 | /* Take a copy to the rx buffer. */ | ||
201 | memcpy_fromio(xfer->rx.buf, mem->msg_payload + 4, xfer->rx.len); | ||
202 | } | ||
203 | |||
204 | /** | ||
205 | * scmi_rx_callback() - mailbox client callback for receive messages | ||
206 | * | ||
207 | * @cl: client pointer | ||
208 | * @m: mailbox message | ||
209 | * | ||
210 | * Processes one received message to appropriate transfer information and | ||
211 | * signals completion of the transfer. | ||
212 | * | ||
213 | * NOTE: This function will be invoked in IRQ context, hence should be | ||
214 | * as optimal as possible. | ||
215 | */ | ||
216 | static void scmi_rx_callback(struct mbox_client *cl, void *m) | ||
217 | { | ||
218 | u16 xfer_id; | ||
219 | struct scmi_xfer *xfer; | ||
220 | struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl); | ||
221 | struct device *dev = cinfo->dev; | ||
222 | struct scmi_info *info = handle_to_scmi_info(cinfo->handle); | ||
223 | struct scmi_xfers_info *minfo = &info->minfo; | ||
224 | struct scmi_shared_mem __iomem *mem = cinfo->payload; | ||
225 | |||
226 | xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header)); | ||
227 | |||
228 | /* | ||
229 | * Are we even expecting this? | ||
230 | */ | ||
231 | if (!test_bit(xfer_id, minfo->xfer_alloc_table)) { | ||
232 | dev_err(dev, "message for %d is not expected!\n", xfer_id); | ||
233 | return; | ||
234 | } | ||
235 | |||
236 | xfer = &minfo->xfer_block[xfer_id]; | ||
237 | |||
238 | scmi_dump_header_dbg(dev, &xfer->hdr); | ||
239 | /* Is the message of valid length? */ | ||
240 | if (xfer->rx.len > info->desc->max_msg_size) { | ||
241 | dev_err(dev, "unable to handle %zu xfer(max %d)\n", | ||
242 | xfer->rx.len, info->desc->max_msg_size); | ||
243 | return; | ||
244 | } | ||
245 | |||
246 | scmi_fetch_response(xfer, mem); | ||
247 | complete(&xfer->done); | ||
248 | } | ||
249 | |||
250 | /** | ||
251 | * pack_scmi_header() - packs and returns 32-bit header | ||
252 | * | ||
253 | * @hdr: pointer to header containing all the information on message id, | ||
254 | * protocol id and sequence id. | ||
255 | */ | ||
256 | static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr) | ||
257 | { | ||
258 | return ((hdr->id & MSG_ID_MASK) << MSG_ID_SHIFT) | | ||
259 | ((hdr->seq & MSG_TOKEN_ID_MASK) << MSG_TOKEN_ID_SHIFT) | | ||
260 | ((hdr->protocol_id & MSG_PROTOCOL_ID_MASK) << MSG_PROTOCOL_ID_SHIFT); | ||
261 | } | ||
262 | |||
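As a worked example, assuming SCMI_PROTOCOL_BASE is 0x10 (its value in the SCMI specification), a BASE_DISCOVER_LIST_PROTOCOLS command (0x6) with token 3 packs as:

	struct scmi_msg_hdr hdr = {
		.id = 0x6,		/* BASE_DISCOVER_LIST_PROTOCOLS */
		.protocol_id = 0x10,	/* SCMI_PROTOCOL_BASE */
		.seq = 3,
	};

	pack_scmi_header(&hdr);	/* 0x6 | 0x10 << 10 | 3 << 18 == 0x000c4006 */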
263 | /** | ||
264 | * scmi_tx_prepare() - mailbox client callback to prepare for the transfer | ||
265 | * | ||
266 | * @cl: client pointer | ||
267 | * @m: mailbox message | ||
268 | * | ||
269 | * This function prepares the shared memory which contains the header and the | ||
270 | * payload. | ||
271 | */ | ||
272 | static void scmi_tx_prepare(struct mbox_client *cl, void *m) | ||
273 | { | ||
274 | struct scmi_xfer *t = m; | ||
275 | struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl); | ||
276 | struct scmi_shared_mem __iomem *mem = cinfo->payload; | ||
277 | |||
278 | /* Mark channel busy + clear error */ | ||
279 | iowrite32(0x0, &mem->channel_status); | ||
280 | iowrite32(t->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED, | ||
281 | &mem->flags); | ||
282 | iowrite32(sizeof(mem->msg_header) + t->tx.len, &mem->length); | ||
283 | iowrite32(pack_scmi_header(&t->hdr), &mem->msg_header); | ||
284 | if (t->tx.buf) | ||
285 | memcpy_toio(mem->msg_payload, t->tx.buf, t->tx.len); | ||
286 | } | ||
287 | |||
288 | /** | ||
289 | * scmi_one_xfer_get() - Allocate one message | ||
290 | * | ||
291 | * @handle: SCMI entity handle | ||
292 | * | ||
293 | * Helper function which is used by various command functions that are | ||
294 | * exposed to clients of this driver for allocating a message traffic event. | ||
295 | * | ||
296 | * This function does not sleep; it holds a spinlock for a short period | ||
297 | * to maintain the integrity of the internal data structures while a | ||
298 | * free message slot is claimed. | ||
299 | * | ||
300 | * Return: pointer to the allocated xfer on success, else ERR_PTR(-ENOMEM). | ||
301 | */ | ||
302 | static struct scmi_xfer *scmi_one_xfer_get(const struct scmi_handle *handle) | ||
303 | { | ||
304 | u16 xfer_id; | ||
305 | struct scmi_xfer *xfer; | ||
306 | unsigned long flags, bit_pos; | ||
307 | struct scmi_info *info = handle_to_scmi_info(handle); | ||
308 | struct scmi_xfers_info *minfo = &info->minfo; | ||
309 | |||
310 | /* Keep the locked section as small as possible */ | ||
311 | spin_lock_irqsave(&minfo->xfer_lock, flags); | ||
312 | bit_pos = find_first_zero_bit(minfo->xfer_alloc_table, | ||
313 | info->desc->max_msg); | ||
314 | if (bit_pos == info->desc->max_msg) { | ||
315 | spin_unlock_irqrestore(&minfo->xfer_lock, flags); | ||
316 | return ERR_PTR(-ENOMEM); | ||
317 | } | ||
318 | set_bit(bit_pos, minfo->xfer_alloc_table); | ||
319 | spin_unlock_irqrestore(&minfo->xfer_lock, flags); | ||
320 | |||
321 | xfer_id = bit_pos; | ||
322 | |||
323 | xfer = &minfo->xfer_block[xfer_id]; | ||
324 | xfer->hdr.seq = xfer_id; | ||
325 | reinit_completion(&xfer->done); | ||
326 | |||
327 | return xfer; | ||
328 | } | ||
329 | |||
330 | /** | ||
331 | * scmi_one_xfer_put() - Release a message | ||
332 | * | ||
333 | * @handle: SCMI entity handle | ||
334 | * @xfer: message that was reserved by scmi_one_xfer_get | ||
335 | * | ||
336 | * This holds a spinlock to maintain integrity of internal data structures. | ||
337 | */ | ||
338 | void scmi_one_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer) | ||
339 | { | ||
340 | unsigned long flags; | ||
341 | struct scmi_info *info = handle_to_scmi_info(handle); | ||
342 | struct scmi_xfers_info *minfo = &info->minfo; | ||
343 | |||
344 | /* | ||
345 | * Keep the locked section as small as possible | ||
346 | * NOTE: we might escape with smp_mb and no lock here.. | ||
347 | * but just be conservative and symmetric. | ||
348 | */ | ||
349 | spin_lock_irqsave(&minfo->xfer_lock, flags); | ||
350 | clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table); | ||
351 | spin_unlock_irqrestore(&minfo->xfer_lock, flags); | ||
352 | } | ||
353 | |||
354 | static bool | ||
355 | scmi_xfer_poll_done(const struct scmi_chan_info *cinfo, struct scmi_xfer *xfer) | ||
356 | { | ||
357 | struct scmi_shared_mem __iomem *mem = cinfo->payload; | ||
358 | u16 xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header)); | ||
359 | |||
360 | if (xfer->hdr.seq != xfer_id) | ||
361 | return false; | ||
362 | |||
363 | return ioread32(&mem->channel_status) & | ||
364 | (SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR | | ||
365 | SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE); | ||
366 | } | ||
367 | |||
368 | #define SCMI_MAX_POLL_TO_NS (100 * NSEC_PER_USEC) | ||
369 | |||
370 | static bool scmi_xfer_done_no_timeout(const struct scmi_chan_info *cinfo, | ||
371 | struct scmi_xfer *xfer, ktime_t stop) | ||
372 | { | ||
373 | ktime_t __cur = ktime_get(); | ||
374 | |||
375 | return scmi_xfer_poll_done(cinfo, xfer) || ktime_after(__cur, stop); | ||
376 | } | ||
377 | |||
378 | /** | ||
379 | * scmi_do_xfer() - Do one transfer | ||
380 | * | ||
381 | * @handle: SCMI entity handle | ||
382 | * @xfer: Transfer to initiate and wait for response | ||
383 | * | ||
384 | * Return: 0 on success, -ETIMEDOUT if no response arrives in time, or | ||
385 | * the corresponding error if the transmit fails or the platform | ||
386 | * returns an error status. | ||
387 | */ | ||
388 | int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer) | ||
389 | { | ||
390 | int ret; | ||
391 | int timeout; | ||
392 | struct scmi_info *info = handle_to_scmi_info(handle); | ||
393 | struct device *dev = info->dev; | ||
394 | struct scmi_chan_info *cinfo; | ||
395 | |||
396 | cinfo = idr_find(&info->tx_idr, xfer->hdr.protocol_id); | ||
397 | if (unlikely(!cinfo)) | ||
398 | return -EINVAL; | ||
399 | |||
400 | ret = mbox_send_message(cinfo->chan, xfer); | ||
401 | if (ret < 0) { | ||
402 | dev_dbg(dev, "mbox send fail %d\n", ret); | ||
403 | return ret; | ||
404 | } | ||
405 | |||
406 | /* mbox_send_message returns non-negative value on success, so reset */ | ||
407 | ret = 0; | ||
408 | |||
409 | if (xfer->hdr.poll_completion) { | ||
410 | ktime_t stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLL_TO_NS); | ||
411 | |||
412 | spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, stop)); | ||
413 | |||
414 | if (ktime_before(ktime_get(), stop)) | ||
415 | scmi_fetch_response(xfer, cinfo->payload); | ||
416 | else | ||
417 | ret = -ETIMEDOUT; | ||
418 | } else { | ||
419 | /* And we wait for the response. */ | ||
420 | timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms); | ||
421 | if (!wait_for_completion_timeout(&xfer->done, timeout)) { | ||
422 | dev_err(dev, "mbox timed out in resp(caller: %pS)\n", | ||
423 | (void *)_RET_IP_); | ||
424 | ret = -ETIMEDOUT; | ||
425 | } | ||
426 | } | ||
427 | |||
428 | if (!ret && xfer->hdr.status) | ||
429 | ret = scmi_to_linux_errno(xfer->hdr.status); | ||
430 | |||
431 | /* | ||
432 | * NOTE: we might prefer not to need the mailbox ticker to manage the | ||
433 | * transfer queueing since the protocol layer queues things by itself. | ||
434 | * Unfortunately, we have to kick the mailbox framework after we have | ||
435 | * received our message. | ||
436 | */ | ||
437 | mbox_client_txdone(cinfo->chan, ret); | ||
438 | |||
439 | return ret; | ||
440 | } | ||
441 | |||
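Every protocol command in this series follows the same four-step pattern around scmi_do_xfer(); a minimal sketch with hypothetical message/protocol identifiers and a single 32-bit argument and reply:

	/* sketch: msg_id/prot_id below are placeholders, not real protocol values */
	static int scmi_example_cmd(const struct scmi_handle *handle,
				    u32 arg, u32 *out)
	{
		int ret;
		struct scmi_xfer *t;

		ret = scmi_one_xfer_init(handle, 0x3 /* msg_id */,
					 0x10 /* prot_id */,
					 sizeof(__le32), sizeof(__le32), &t);
		if (ret)
			return ret;

		*(__le32 *)t->tx.buf = cpu_to_le32(arg);

		ret = scmi_do_xfer(handle, t);
		if (!ret)
			*out = le32_to_cpu(*(__le32 *)t->rx.buf);

		scmi_one_xfer_put(handle, t);
		return ret;
	}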
442 | /** | ||
443 | * scmi_one_xfer_init() - Allocate and initialise one message | ||
444 | * | ||
445 | * @handle: SCMI entity handle | ||
446 | * @msg_id: Message identifier | ||
447 | * @msg_prot_id: Protocol identifier for the message | ||
448 | * @tx_size: transmit message size | ||
449 | * @rx_size: receive message size | ||
450 | * @p: pointer to the allocated and initialised message | ||
451 | * | ||
452 | * This function allocates the message using @scmi_one_xfer_get and | ||
453 | * initialise the header. | ||
454 | * | ||
455 | * Return: 0 if all went fine with @p pointing to message, else | ||
456 | * corresponding error. | ||
457 | */ | ||
458 | int scmi_one_xfer_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id, | ||
459 | size_t tx_size, size_t rx_size, struct scmi_xfer **p) | ||
460 | { | ||
461 | int ret; | ||
462 | struct scmi_xfer *xfer; | ||
463 | struct scmi_info *info = handle_to_scmi_info(handle); | ||
464 | struct device *dev = info->dev; | ||
465 | |||
466 | /* Ensure we have sane transfer sizes */ | ||
467 | if (rx_size > info->desc->max_msg_size || | ||
468 | tx_size > info->desc->max_msg_size) | ||
469 | return -ERANGE; | ||
470 | |||
471 | xfer = scmi_one_xfer_get(handle); | ||
472 | if (IS_ERR(xfer)) { | ||
473 | ret = PTR_ERR(xfer); | ||
474 | dev_err(dev, "failed to get free message slot(%d)\n", ret); | ||
475 | return ret; | ||
476 | } | ||
477 | |||
478 | xfer->tx.len = tx_size; | ||
479 | xfer->rx.len = rx_size ? : info->desc->max_msg_size; | ||
480 | xfer->hdr.id = msg_id; | ||
481 | xfer->hdr.protocol_id = prot_id; | ||
482 | xfer->hdr.poll_completion = false; | ||
483 | |||
484 | *p = xfer; | ||
485 | return 0; | ||
486 | } | ||
487 | |||
488 | /** | ||
489 | * scmi_version_get() - command to get the revision of the SCMI entity | ||
490 | * | ||
491 | * @handle: SCMI entity handle | ||
492 | * @protocol: Protocol identifier for the version request | ||
493 | * @version: Holds the returned version of the protocol. | ||
494 | * | ||
495 | * Return: 0 if all went fine, else return appropriate error. | ||
496 | */ | ||
497 | int scmi_version_get(const struct scmi_handle *handle, u8 protocol, | ||
498 | u32 *version) | ||
499 | { | ||
500 | int ret; | ||
501 | __le32 *rev_info; | ||
502 | struct scmi_xfer *t; | ||
503 | |||
504 | ret = scmi_one_xfer_init(handle, PROTOCOL_VERSION, protocol, 0, | ||
505 | sizeof(*version), &t); | ||
506 | if (ret) | ||
507 | return ret; | ||
508 | |||
509 | ret = scmi_do_xfer(handle, t); | ||
510 | if (!ret) { | ||
511 | rev_info = t->rx.buf; | ||
512 | *version = le32_to_cpu(*rev_info); | ||
513 | } | ||
514 | |||
515 | scmi_one_xfer_put(handle, t); | ||
516 | return ret; | ||
517 | } | ||
518 | |||
519 | void scmi_setup_protocol_implemented(const struct scmi_handle *handle, | ||
520 | u8 *prot_imp) | ||
521 | { | ||
522 | struct scmi_info *info = handle_to_scmi_info(handle); | ||
523 | |||
524 | info->protocols_imp = prot_imp; | ||
525 | } | ||
526 | |||
527 | static bool | ||
528 | scmi_is_protocol_implemented(const struct scmi_handle *handle, u8 prot_id) | ||
529 | { | ||
530 | int i; | ||
531 | struct scmi_info *info = handle_to_scmi_info(handle); | ||
532 | |||
533 | if (!info->protocols_imp) | ||
534 | return false; | ||
535 | |||
536 | for (i = 0; i < MAX_PROTOCOLS_IMP; i++) | ||
537 | if (info->protocols_imp[i] == prot_id) | ||
538 | return true; | ||
539 | return false; | ||
540 | } | ||
541 | |||
542 | /** | ||
543 | * scmi_handle_get() - Get the SCMI handle for a device | ||
544 | * | ||
545 | * @dev: pointer to device for which we want SCMI handle | ||
546 | * | ||
547 | * NOTE: The function does not track individual clients of the framework | ||
548 | * and is expected to be maintained by the caller of the SCMI protocol library. | ||
549 | * scmi_handle_put must be balanced with successful scmi_handle_get | ||
550 | * | ||
551 | * Return: pointer to handle if successful, NULL on error | ||
552 | */ | ||
553 | struct scmi_handle *scmi_handle_get(struct device *dev) | ||
554 | { | ||
555 | struct list_head *p; | ||
556 | struct scmi_info *info; | ||
557 | struct scmi_handle *handle = NULL; | ||
558 | |||
559 | mutex_lock(&scmi_list_mutex); | ||
560 | list_for_each(p, &scmi_list) { | ||
561 | info = list_entry(p, struct scmi_info, node); | ||
562 | if (dev->parent == info->dev) { | ||
563 | handle = &info->handle; | ||
564 | info->users++; | ||
565 | break; | ||
566 | } | ||
567 | } | ||
568 | mutex_unlock(&scmi_list_mutex); | ||
569 | |||
570 | return handle; | ||
571 | } | ||
572 | |||
573 | /** | ||
574 | * scmi_handle_put() - Release the handle acquired by scmi_handle_get | ||
575 | * | ||
576 | * @handle: handle acquired by scmi_handle_get | ||
577 | * | ||
578 | * NOTE: The function does not track individual clients of the framework | ||
579 | * and is expected to be maintained by the caller of the SCMI protocol library. | ||
580 | * scmi_handle_put must be balanced with successful scmi_handle_get | ||
581 | * | ||
582 | * Return: 0 if successfully released; if a NULL handle was passed, | ||
583 | * it returns -EINVAL. | ||
584 | */ | ||
585 | int scmi_handle_put(const struct scmi_handle *handle) | ||
586 | { | ||
587 | struct scmi_info *info; | ||
588 | |||
589 | if (!handle) | ||
590 | return -EINVAL; | ||
591 | |||
592 | info = handle_to_scmi_info(handle); | ||
593 | mutex_lock(&scmi_list_mutex); | ||
594 | if (!WARN_ON(!info->users)) | ||
595 | info->users--; | ||
596 | mutex_unlock(&scmi_list_mutex); | ||
597 | |||
598 | return 0; | ||
599 | } | ||
600 | |||
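/*
 * Editor's sketch (hypothetical driver, not code from this patch):
 * pairing the two refcounting helpers above from a device that is a
 * child of the SCMI instance, so the dev->parent match in
 * scmi_handle_get() succeeds. The in-tree path is scmi_set_handle(),
 * which presumably does the same internally.
 */
static int example_probe(struct platform_device *pdev)
{
	const struct scmi_handle *handle = scmi_handle_get(&pdev->dev);

	if (!handle)
		return -EPROBE_DEFER;	/* SCMI core may not be ready yet */

	/* ... use handle->perf_ops / power_ops / sensor_ops ... */

	return scmi_handle_put(handle);
}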
601 | static const struct scmi_desc scmi_generic_desc = { | ||
602 | .max_rx_timeout_ms = 30, /* we may increase this if required */ | ||
603 | .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */ | ||
604 | .max_msg_size = 128, | ||
605 | }; | ||
606 | |||
607 | /* Each compatible listed below must have a descriptor associated with it */ | ||
608 | static const struct of_device_id scmi_of_match[] = { | ||
609 | { .compatible = "arm,scmi", .data = &scmi_generic_desc }, | ||
610 | { /* Sentinel */ }, | ||
611 | }; | ||
612 | |||
613 | MODULE_DEVICE_TABLE(of, scmi_of_match); | ||
614 | |||
615 | static int scmi_xfer_info_init(struct scmi_info *sinfo) | ||
616 | { | ||
617 | int i; | ||
618 | struct scmi_xfer *xfer; | ||
619 | struct device *dev = sinfo->dev; | ||
620 | const struct scmi_desc *desc = sinfo->desc; | ||
621 | struct scmi_xfers_info *info = &sinfo->minfo; | ||
622 | |||
623 | /* Pre-allocated messages, no more than what hdr.seq can support */ | ||
624 | if (WARN_ON(desc->max_msg >= (MSG_TOKEN_ID_MASK + 1))) { | ||
625 | dev_err(dev, "Maximum message of %d exceeds supported %d\n", | ||
626 | desc->max_msg, MSG_TOKEN_ID_MASK + 1); | ||
627 | return -EINVAL; | ||
628 | } | ||
629 | |||
630 | info->xfer_block = devm_kcalloc(dev, desc->max_msg, | ||
631 | sizeof(*info->xfer_block), GFP_KERNEL); | ||
632 | if (!info->xfer_block) | ||
633 | return -ENOMEM; | ||
634 | |||
635 | info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(desc->max_msg), | ||
636 | sizeof(long), GFP_KERNEL); | ||
637 | if (!info->xfer_alloc_table) | ||
638 | return -ENOMEM; | ||
639 | |||
640 | bitmap_zero(info->xfer_alloc_table, desc->max_msg); | ||
641 | |||
642 | /* Pre-initialize the buffer pointer to pre-allocated buffers */ | ||
643 | for (i = 0, xfer = info->xfer_block; i < desc->max_msg; i++, xfer++) { | ||
644 | xfer->rx.buf = devm_kcalloc(dev, desc->max_msg_size, sizeof(u8), | ||
645 | GFP_KERNEL); | ||
646 | if (!xfer->rx.buf) | ||
647 | return -ENOMEM; | ||
648 | |||
649 | xfer->tx.buf = xfer->rx.buf; | ||
650 | init_completion(&xfer->done); | ||
651 | } | ||
652 | |||
653 | spin_lock_init(&info->xfer_lock); | ||
654 | |||
655 | return 0; | ||
656 | } | ||
657 | |||
658 | static int scmi_mailbox_check(struct device_node *np) | ||
659 | { | ||
660 | struct of_phandle_args arg; | ||
661 | |||
662 | return of_parse_phandle_with_args(np, "mboxes", "#mbox-cells", 0, &arg); | ||
663 | } | ||
664 | |||
665 | static int scmi_mbox_free_channel(int id, void *p, void *data) | ||
666 | { | ||
667 | struct scmi_chan_info *cinfo = p; | ||
668 | struct idr *idr = data; | ||
669 | |||
670 | if (!IS_ERR_OR_NULL(cinfo->chan)) { | ||
671 | mbox_free_channel(cinfo->chan); | ||
672 | cinfo->chan = NULL; | ||
673 | } | ||
674 | |||
675 | idr_remove(idr, id); | ||
676 | |||
677 | return 0; | ||
678 | } | ||
679 | |||
680 | static int scmi_remove(struct platform_device *pdev) | ||
681 | { | ||
682 | int ret = 0; | ||
683 | struct scmi_info *info = platform_get_drvdata(pdev); | ||
684 | struct idr *idr = &info->tx_idr; | ||
685 | |||
686 | mutex_lock(&scmi_list_mutex); | ||
687 | if (info->users) | ||
688 | ret = -EBUSY; | ||
689 | else | ||
690 | list_del(&info->node); | ||
691 | mutex_unlock(&scmi_list_mutex); | ||
692 | |||
693 | if (!ret) { | ||
694 | /* Safe to free channels since no more users */ | ||
695 | ret = idr_for_each(idr, scmi_mbox_free_channel, idr); | ||
696 | idr_destroy(&info->tx_idr); | ||
697 | } | ||
698 | |||
699 | return ret; | ||
700 | } | ||
701 | |||
702 | static inline int | ||
703 | scmi_mbox_chan_setup(struct scmi_info *info, struct device *dev, int prot_id) | ||
704 | { | ||
705 | int ret; | ||
706 | struct resource res; | ||
707 | resource_size_t size; | ||
708 | struct device_node *shmem, *np = dev->of_node; | ||
709 | struct scmi_chan_info *cinfo; | ||
710 | struct mbox_client *cl; | ||
711 | |||
712 | if (scmi_mailbox_check(np)) { | ||
713 | cinfo = idr_find(&info->tx_idr, SCMI_PROTOCOL_BASE); | ||
714 | goto idr_alloc; | ||
715 | } | ||
716 | |||
717 | cinfo = devm_kzalloc(info->dev, sizeof(*cinfo), GFP_KERNEL); | ||
718 | if (!cinfo) | ||
719 | return -ENOMEM; | ||
720 | |||
721 | cinfo->dev = dev; | ||
722 | |||
723 | cl = &cinfo->cl; | ||
724 | cl->dev = dev; | ||
725 | cl->rx_callback = scmi_rx_callback; | ||
726 | cl->tx_prepare = scmi_tx_prepare; | ||
727 | cl->tx_block = false; | ||
728 | cl->knows_txdone = true; | ||
729 | |||
730 | shmem = of_parse_phandle(np, "shmem", 0); | ||
731 | ret = of_address_to_resource(shmem, 0, &res); | ||
732 | of_node_put(shmem); | ||
733 | if (ret) { | ||
734 | dev_err(dev, "failed to get SCMI Tx payload mem resource\n"); | ||
735 | return ret; | ||
736 | } | ||
737 | |||
738 | size = resource_size(&res); | ||
739 | cinfo->payload = devm_ioremap(info->dev, res.start, size); | ||
740 | if (!cinfo->payload) { | ||
741 | dev_err(dev, "failed to ioremap SCMI Tx payload\n"); | ||
742 | return -EADDRNOTAVAIL; | ||
743 | } | ||
744 | |||
745 | /* Transmit channel is the first entry, i.e. index 0 */ | ||
746 | cinfo->chan = mbox_request_channel(cl, 0); | ||
747 | if (IS_ERR(cinfo->chan)) { | ||
748 | ret = PTR_ERR(cinfo->chan); | ||
749 | if (ret != -EPROBE_DEFER) | ||
750 | dev_err(dev, "failed to request SCMI Tx mailbox\n"); | ||
751 | return ret; | ||
752 | } | ||
753 | |||
754 | idr_alloc: | ||
755 | ret = idr_alloc(&info->tx_idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL); | ||
756 | if (ret != prot_id) { | ||
757 | dev_err(dev, "unable to allocate SCMI idr slot err %d\n", ret); | ||
758 | return ret; | ||
759 | } | ||
760 | |||
761 | cinfo->handle = &info->handle; | ||
762 | return 0; | ||
763 | } | ||
764 | |||
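/*
 * Editor's sketch (helper name hypothetical): after the setup above,
 * the transmit path can resolve its channel with a single IDR lookup.
 * Protocols whose child node carried no "mboxes" property resolve to
 * the same scmi_chan_info that SCMI_PROTOCOL_BASE registered.
 */
static struct scmi_chan_info *
example_chan_get(struct scmi_info *info, u8 prot_id)
{
	return idr_find(&info->tx_idr, prot_id);
}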
765 | static inline void | ||
766 | scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, | ||
767 | int prot_id) | ||
768 | { | ||
769 | struct scmi_device *sdev; | ||
770 | |||
771 | sdev = scmi_device_create(np, info->dev, prot_id); | ||
772 | if (!sdev) { | ||
773 | dev_err(info->dev, "failed to create %d protocol device\n", | ||
774 | prot_id); | ||
775 | return; | ||
776 | } | ||
777 | |||
778 | if (scmi_mbox_chan_setup(info, &sdev->dev, prot_id)) { | ||
779 | dev_err(&sdev->dev, "failed to setup transport\n"); | ||
780 | scmi_device_destroy(sdev); | ||
781 | } | ||
782 | |||
783 | /* set up the handle now that the transport is ready */ | ||
784 | scmi_set_handle(sdev); | ||
785 | } | ||
786 | |||
787 | static int scmi_probe(struct platform_device *pdev) | ||
788 | { | ||
789 | int ret; | ||
790 | struct scmi_handle *handle; | ||
791 | const struct scmi_desc *desc; | ||
792 | struct scmi_info *info; | ||
793 | struct device *dev = &pdev->dev; | ||
794 | struct device_node *child, *np = dev->of_node; | ||
795 | |||
796 | /* Only mailbox method supported, check for the presence of one */ | ||
797 | if (scmi_mailbox_check(np)) { | ||
798 | dev_err(dev, "no mailbox found in %pOF\n", np); | ||
799 | return -EINVAL; | ||
800 | } | ||
801 | |||
802 | desc = of_match_device(scmi_of_match, dev)->data; | ||
803 | |||
804 | info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); | ||
805 | if (!info) | ||
806 | return -ENOMEM; | ||
807 | |||
808 | info->dev = dev; | ||
809 | info->desc = desc; | ||
810 | INIT_LIST_HEAD(&info->node); | ||
811 | |||
812 | ret = scmi_xfer_info_init(info); | ||
813 | if (ret) | ||
814 | return ret; | ||
815 | |||
816 | platform_set_drvdata(pdev, info); | ||
817 | idr_init(&info->tx_idr); | ||
818 | |||
819 | handle = &info->handle; | ||
820 | handle->dev = info->dev; | ||
821 | handle->version = &info->version; | ||
822 | |||
823 | ret = scmi_mbox_chan_setup(info, dev, SCMI_PROTOCOL_BASE); | ||
824 | if (ret) | ||
825 | return ret; | ||
826 | |||
827 | ret = scmi_base_protocol_init(handle); | ||
828 | if (ret) { | ||
829 | dev_err(dev, "unable to communicate with SCMI(%d)\n", ret); | ||
830 | return ret; | ||
831 | } | ||
832 | |||
833 | mutex_lock(&scmi_list_mutex); | ||
834 | list_add_tail(&info->node, &scmi_list); | ||
835 | mutex_unlock(&scmi_list_mutex); | ||
836 | |||
837 | for_each_available_child_of_node(np, child) { | ||
838 | u32 prot_id; | ||
839 | |||
840 | if (of_property_read_u32(child, "reg", &prot_id)) | ||
841 | continue; | ||
842 | |||
843 | prot_id &= MSG_PROTOCOL_ID_MASK; | ||
844 | |||
845 | if (!scmi_is_protocol_implemented(handle, prot_id)) { | ||
846 | dev_err(dev, "SCMI protocol %d not implemented\n", | ||
847 | prot_id); | ||
848 | continue; | ||
849 | } | ||
850 | |||
851 | scmi_create_protocol_device(child, info, prot_id); | ||
852 | } | ||
853 | |||
854 | return 0; | ||
855 | } | ||
856 | |||
857 | static struct platform_driver scmi_driver = { | ||
858 | .driver = { | ||
859 | .name = "arm-scmi", | ||
860 | .of_match_table = scmi_of_match, | ||
861 | }, | ||
862 | .probe = scmi_probe, | ||
863 | .remove = scmi_remove, | ||
864 | }; | ||
865 | |||
866 | module_platform_driver(scmi_driver); | ||
867 | |||
868 | MODULE_ALIAS("platform:arm-scmi"); | ||
869 | MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); | ||
870 | MODULE_DESCRIPTION("ARM SCMI protocol driver"); | ||
871 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c new file mode 100644 index 000000000000..987c64d19801 --- /dev/null +++ b/drivers/firmware/arm_scmi/perf.c | |||
@@ -0,0 +1,481 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Performance Protocol | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #include <linux/of.h> | ||
9 | #include <linux/platform_device.h> | ||
10 | #include <linux/pm_opp.h> | ||
11 | #include <linux/sort.h> | ||
12 | |||
13 | #include "common.h" | ||
14 | |||
15 | enum scmi_performance_protocol_cmd { | ||
16 | PERF_DOMAIN_ATTRIBUTES = 0x3, | ||
17 | PERF_DESCRIBE_LEVELS = 0x4, | ||
18 | PERF_LIMITS_SET = 0x5, | ||
19 | PERF_LIMITS_GET = 0x6, | ||
20 | PERF_LEVEL_SET = 0x7, | ||
21 | PERF_LEVEL_GET = 0x8, | ||
22 | PERF_NOTIFY_LIMITS = 0x9, | ||
23 | PERF_NOTIFY_LEVEL = 0xa, | ||
24 | }; | ||
25 | |||
26 | struct scmi_opp { | ||
27 | u32 perf; | ||
28 | u32 power; | ||
29 | u32 trans_latency_us; | ||
30 | }; | ||
31 | |||
32 | struct scmi_msg_resp_perf_attributes { | ||
33 | __le16 num_domains; | ||
34 | __le16 flags; | ||
35 | #define POWER_SCALE_IN_MILLIWATT(x) ((x) & BIT(0)) | ||
36 | __le32 stats_addr_low; | ||
37 | __le32 stats_addr_high; | ||
38 | __le32 stats_size; | ||
39 | }; | ||
40 | |||
41 | struct scmi_msg_resp_perf_domain_attributes { | ||
42 | __le32 flags; | ||
43 | #define SUPPORTS_SET_LIMITS(x) ((x) & BIT(31)) | ||
44 | #define SUPPORTS_SET_PERF_LVL(x) ((x) & BIT(30)) | ||
45 | #define SUPPORTS_PERF_LIMIT_NOTIFY(x) ((x) & BIT(29)) | ||
46 | #define SUPPORTS_PERF_LEVEL_NOTIFY(x) ((x) & BIT(28)) | ||
47 | __le32 rate_limit_us; | ||
48 | __le32 sustained_freq_khz; | ||
49 | __le32 sustained_perf_level; | ||
50 | u8 name[SCMI_MAX_STR_SIZE]; | ||
51 | }; | ||
52 | |||
53 | struct scmi_msg_perf_describe_levels { | ||
54 | __le32 domain; | ||
55 | __le32 level_index; | ||
56 | }; | ||
57 | |||
58 | struct scmi_perf_set_limits { | ||
59 | __le32 domain; | ||
60 | __le32 max_level; | ||
61 | __le32 min_level; | ||
62 | }; | ||
63 | |||
64 | struct scmi_perf_get_limits { | ||
65 | __le32 max_level; | ||
66 | __le32 min_level; | ||
67 | }; | ||
68 | |||
69 | struct scmi_perf_set_level { | ||
70 | __le32 domain; | ||
71 | __le32 level; | ||
72 | }; | ||
73 | |||
74 | struct scmi_perf_notify_level_or_limits { | ||
75 | __le32 domain; | ||
76 | __le32 notify_enable; | ||
77 | }; | ||
78 | |||
79 | struct scmi_msg_resp_perf_describe_levels { | ||
80 | __le16 num_returned; | ||
81 | __le16 num_remaining; | ||
82 | struct { | ||
83 | __le32 perf_val; | ||
84 | __le32 power; | ||
85 | __le16 transition_latency_us; | ||
86 | __le16 reserved; | ||
87 | } opp[0]; | ||
88 | }; | ||
89 | |||
90 | struct perf_dom_info { | ||
91 | bool set_limits; | ||
92 | bool set_perf; | ||
93 | bool perf_limit_notify; | ||
94 | bool perf_level_notify; | ||
95 | u32 opp_count; | ||
96 | u32 sustained_freq_khz; | ||
97 | u32 sustained_perf_level; | ||
98 | u32 mult_factor; | ||
99 | char name[SCMI_MAX_STR_SIZE]; | ||
100 | struct scmi_opp opp[MAX_OPPS]; | ||
101 | }; | ||
102 | |||
103 | struct scmi_perf_info { | ||
104 | int num_domains; | ||
105 | bool power_scale_mw; | ||
106 | u64 stats_addr; | ||
107 | u32 stats_size; | ||
108 | struct perf_dom_info *dom_info; | ||
109 | }; | ||
110 | |||
111 | static int scmi_perf_attributes_get(const struct scmi_handle *handle, | ||
112 | struct scmi_perf_info *pi) | ||
113 | { | ||
114 | int ret; | ||
115 | struct scmi_xfer *t; | ||
116 | struct scmi_msg_resp_perf_attributes *attr; | ||
117 | |||
118 | ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES, | ||
119 | SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t); | ||
120 | if (ret) | ||
121 | return ret; | ||
122 | |||
123 | attr = t->rx.buf; | ||
124 | |||
125 | ret = scmi_do_xfer(handle, t); | ||
126 | if (!ret) { | ||
127 | u16 flags = le16_to_cpu(attr->flags); | ||
128 | |||
129 | pi->num_domains = le16_to_cpu(attr->num_domains); | ||
130 | pi->power_scale_mw = POWER_SCALE_IN_MILLIWATT(flags); | ||
131 | pi->stats_addr = le32_to_cpu(attr->stats_addr_low) | | ||
132 | (u64)le32_to_cpu(attr->stats_addr_high) << 32; | ||
133 | pi->stats_size = le32_to_cpu(attr->stats_size); | ||
134 | } | ||
135 | |||
136 | scmi_one_xfer_put(handle, t); | ||
137 | return ret; | ||
138 | } | ||
139 | |||
140 | static int | ||
141 | scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain, | ||
142 | struct perf_dom_info *dom_info) | ||
143 | { | ||
144 | int ret; | ||
145 | struct scmi_xfer *t; | ||
146 | struct scmi_msg_resp_perf_domain_attributes *attr; | ||
147 | |||
148 | ret = scmi_one_xfer_init(handle, PERF_DOMAIN_ATTRIBUTES, | ||
149 | SCMI_PROTOCOL_PERF, sizeof(domain), | ||
150 | sizeof(*attr), &t); | ||
151 | if (ret) | ||
152 | return ret; | ||
153 | |||
154 | *(__le32 *)t->tx.buf = cpu_to_le32(domain); | ||
155 | attr = t->rx.buf; | ||
156 | |||
157 | ret = scmi_do_xfer(handle, t); | ||
158 | if (!ret) { | ||
159 | u32 flags = le32_to_cpu(attr->flags); | ||
160 | |||
161 | dom_info->set_limits = SUPPORTS_SET_LIMITS(flags); | ||
162 | dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags); | ||
163 | dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags); | ||
164 | dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags); | ||
165 | dom_info->sustained_freq_khz = | ||
166 | le32_to_cpu(attr->sustained_freq_khz); | ||
167 | dom_info->sustained_perf_level = | ||
168 | le32_to_cpu(attr->sustained_perf_level); | ||
169 | dom_info->mult_factor = (dom_info->sustained_freq_khz * 1000) / | ||
170 | dom_info->sustained_perf_level; | ||
171 | memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); | ||
172 | } | ||
173 | |||
174 | scmi_one_xfer_put(handle, t); | ||
175 | return ret; | ||
176 | } | ||
177 | |||
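/*
 * Editor's worked example of the mult_factor conversion above
 * (all values made up): a domain sustaining 2 GHz at abstract
 * level 1000 yields 2 MHz per level, so level 600 maps to 1.2 GHz.
 */
u32 sustained_freq_khz = 2000000;	/* 2 GHz, expressed in kHz */
u32 sustained_perf_level = 1000;	/* abstract level at that rate */
u32 mult_factor = (sustained_freq_khz * 1000) / sustained_perf_level;
					/* = 2000000 Hz per level */
unsigned long freq = 600 * mult_factor;	/* = 1200000000 Hz */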
178 | static int opp_cmp_func(const void *opp1, const void *opp2) | ||
179 | { | ||
180 | const struct scmi_opp *t1 = opp1, *t2 = opp2; | ||
181 | |||
182 | return t1->perf - t2->perf; | ||
183 | } | ||
184 | |||
185 | static int | ||
186 | scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain, | ||
187 | struct perf_dom_info *perf_dom) | ||
188 | { | ||
189 | int ret, cnt; | ||
190 | u32 tot_opp_cnt = 0; | ||
191 | u16 num_returned, num_remaining; | ||
192 | struct scmi_xfer *t; | ||
193 | struct scmi_opp *opp; | ||
194 | struct scmi_msg_perf_describe_levels *dom_info; | ||
195 | struct scmi_msg_resp_perf_describe_levels *level_info; | ||
196 | |||
197 | ret = scmi_one_xfer_init(handle, PERF_DESCRIBE_LEVELS, | ||
198 | SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t); | ||
199 | if (ret) | ||
200 | return ret; | ||
201 | |||
202 | dom_info = t->tx.buf; | ||
203 | level_info = t->rx.buf; | ||
204 | |||
205 | do { | ||
206 | dom_info->domain = cpu_to_le32(domain); | ||
207 | /* Set the number of OPPs to be skipped/already read */ | ||
208 | dom_info->level_index = cpu_to_le32(tot_opp_cnt); | ||
209 | |||
210 | ret = scmi_do_xfer(handle, t); | ||
211 | if (ret) | ||
212 | break; | ||
213 | |||
214 | num_returned = le16_to_cpu(level_info->num_returned); | ||
215 | num_remaining = le16_to_cpu(level_info->num_remaining); | ||
216 | if (tot_opp_cnt + num_returned > MAX_OPPS) { | ||
217 | dev_err(handle->dev, "No. of OPPs exceeded MAX_OPPS"); | ||
218 | break; | ||
219 | } | ||
220 | |||
221 | opp = &perf_dom->opp[tot_opp_cnt]; | ||
222 | for (cnt = 0; cnt < num_returned; cnt++, opp++) { | ||
223 | opp->perf = le32_to_cpu(level_info->opp[cnt].perf_val); | ||
224 | opp->power = le32_to_cpu(level_info->opp[cnt].power); | ||
225 | opp->trans_latency_us = le16_to_cpu | ||
226 | (level_info->opp[cnt].transition_latency_us); | ||
227 | |||
228 | dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n", | ||
229 | opp->perf, opp->power, opp->trans_latency_us); | ||
230 | } | ||
231 | |||
232 | tot_opp_cnt += num_returned; | ||
233 | /* | ||
234 | * check for both returned and remaining to avoid infinite | ||
235 | * loop due to buggy firmware | ||
236 | */ | ||
237 | } while (num_returned && num_remaining); | ||
238 | |||
239 | perf_dom->opp_count = tot_opp_cnt; | ||
240 | scmi_one_xfer_put(handle, t); | ||
241 | |||
242 | sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL); | ||
243 | return ret; | ||
244 | } | ||
245 | |||
246 | static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain, | ||
247 | u32 max_perf, u32 min_perf) | ||
248 | { | ||
249 | int ret; | ||
250 | struct scmi_xfer *t; | ||
251 | struct scmi_perf_set_limits *limits; | ||
252 | |||
253 | ret = scmi_one_xfer_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF, | ||
254 | sizeof(*limits), 0, &t); | ||
255 | if (ret) | ||
256 | return ret; | ||
257 | |||
258 | limits = t->tx.buf; | ||
259 | limits->domain = cpu_to_le32(domain); | ||
260 | limits->max_level = cpu_to_le32(max_perf); | ||
261 | limits->min_level = cpu_to_le32(min_perf); | ||
262 | |||
263 | ret = scmi_do_xfer(handle, t); | ||
264 | |||
265 | scmi_one_xfer_put(handle, t); | ||
266 | return ret; | ||
267 | } | ||
268 | |||
269 | static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain, | ||
270 | u32 *max_perf, u32 *min_perf) | ||
271 | { | ||
272 | int ret; | ||
273 | struct scmi_xfer *t; | ||
274 | struct scmi_perf_get_limits *limits; | ||
275 | |||
276 | ret = scmi_one_xfer_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF, | ||
277 | sizeof(__le32), 0, &t); | ||
278 | if (ret) | ||
279 | return ret; | ||
280 | |||
281 | *(__le32 *)t->tx.buf = cpu_to_le32(domain); | ||
282 | |||
283 | ret = scmi_do_xfer(handle, t); | ||
284 | if (!ret) { | ||
285 | limits = t->rx.buf; | ||
286 | |||
287 | *max_perf = le32_to_cpu(limits->max_level); | ||
288 | *min_perf = le32_to_cpu(limits->min_level); | ||
289 | } | ||
290 | |||
291 | scmi_one_xfer_put(handle, t); | ||
292 | return ret; | ||
293 | } | ||
294 | |||
295 | static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain, | ||
296 | u32 level, bool poll) | ||
297 | { | ||
298 | int ret; | ||
299 | struct scmi_xfer *t; | ||
300 | struct scmi_perf_set_level *lvl; | ||
301 | |||
302 | ret = scmi_one_xfer_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF, | ||
303 | sizeof(*lvl), 0, &t); | ||
304 | if (ret) | ||
305 | return ret; | ||
306 | |||
307 | t->hdr.poll_completion = poll; | ||
308 | lvl = t->tx.buf; | ||
309 | lvl->domain = cpu_to_le32(domain); | ||
310 | lvl->level = cpu_to_le32(level); | ||
311 | |||
312 | ret = scmi_do_xfer(handle, t); | ||
313 | |||
314 | scmi_one_xfer_put(handle, t); | ||
315 | return ret; | ||
316 | } | ||
317 | |||
318 | static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain, | ||
319 | u32 *level, bool poll) | ||
320 | { | ||
321 | int ret; | ||
322 | struct scmi_xfer *t; | ||
323 | |||
324 | ret = scmi_one_xfer_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF, | ||
325 | sizeof(u32), sizeof(u32), &t); | ||
326 | if (ret) | ||
327 | return ret; | ||
328 | |||
329 | t->hdr.poll_completion = poll; | ||
330 | *(__le32 *)t->tx.buf = cpu_to_le32(domain); | ||
331 | |||
332 | ret = scmi_do_xfer(handle, t); | ||
333 | if (!ret) | ||
334 | *level = le32_to_cpu(*(__le32 *)t->rx.buf); | ||
335 | |||
336 | scmi_one_xfer_put(handle, t); | ||
337 | return ret; | ||
338 | } | ||
339 | |||
340 | /* Device specific ops */ | ||
341 | static int scmi_dev_domain_id(struct device *dev) | ||
342 | { | ||
343 | struct of_phandle_args clkspec; | ||
344 | |||
345 | if (of_parse_phandle_with_args(dev->of_node, "clocks", "#clock-cells", | ||
346 | 0, &clkspec)) | ||
347 | return -EINVAL; | ||
348 | |||
349 | return clkspec.args[0]; | ||
350 | } | ||
351 | |||
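/*
 * Editor's illustration (node names hypothetical, after the binding
 * example in this series): a consumer such as
 *
 *	cpu@0 {
 *		clocks = <&scmi_dvfs 0>;
 *	};
 *
 * makes scmi_dev_domain_id() return 0, the first cell of the
 * "clocks" specifier.
 */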
352 | static int scmi_dvfs_add_opps_to_device(const struct scmi_handle *handle, | ||
353 | struct device *dev) | ||
354 | { | ||
355 | int idx, ret, domain; | ||
356 | unsigned long freq; | ||
357 | struct scmi_opp *opp; | ||
358 | struct perf_dom_info *dom; | ||
359 | struct scmi_perf_info *pi = handle->perf_priv; | ||
360 | |||
361 | domain = scmi_dev_domain_id(dev); | ||
362 | if (domain < 0) | ||
363 | return domain; | ||
364 | |||
365 | dom = pi->dom_info + domain; | ||
366 | if (!dom) | ||
367 | return -EIO; | ||
368 | |||
369 | for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) { | ||
370 | freq = opp->perf * dom->mult_factor; | ||
371 | |||
372 | ret = dev_pm_opp_add(dev, freq, 0); | ||
373 | if (ret) { | ||
374 | dev_warn(dev, "failed to add opp %luHz\n", freq); | ||
375 | |||
376 | while (idx-- > 0) { | ||
377 | freq = (--opp)->perf * dom->mult_factor; | ||
378 | dev_pm_opp_remove(dev, freq); | ||
379 | } | ||
380 | return ret; | ||
381 | } | ||
382 | } | ||
383 | return 0; | ||
384 | } | ||
385 | |||
386 | static int scmi_dvfs_get_transition_latency(const struct scmi_handle *handle, | ||
387 | struct device *dev) | ||
388 | { | ||
389 | struct perf_dom_info *dom; | ||
390 | struct scmi_perf_info *pi = handle->perf_priv; | ||
391 | int domain = scmi_dev_domain_id(dev); | ||
392 | |||
393 | if (domain < 0) | ||
394 | return domain; | ||
395 | |||
396 | dom = pi->dom_info + domain; | ||
397 | if (!dom) | ||
398 | return -EIO; | ||
399 | |||
400 | /* uS to nS */ | ||
401 | return dom->opp[dom->opp_count - 1].trans_latency_us * 1000; | ||
402 | } | ||
403 | |||
404 | static int scmi_dvfs_freq_set(const struct scmi_handle *handle, u32 domain, | ||
405 | unsigned long freq, bool poll) | ||
406 | { | ||
407 | struct scmi_perf_info *pi = handle->perf_priv; | ||
408 | struct perf_dom_info *dom = pi->dom_info + domain; | ||
409 | |||
410 | return scmi_perf_level_set(handle, domain, freq / dom->mult_factor, | ||
411 | poll); | ||
412 | } | ||
413 | |||
414 | static int scmi_dvfs_freq_get(const struct scmi_handle *handle, u32 domain, | ||
415 | unsigned long *freq, bool poll) | ||
416 | { | ||
417 | int ret; | ||
418 | u32 level; | ||
419 | struct scmi_perf_info *pi = handle->perf_priv; | ||
420 | struct perf_dom_info *dom = pi->dom_info + domain; | ||
421 | |||
422 | ret = scmi_perf_level_get(handle, domain, &level, poll); | ||
423 | if (!ret) | ||
424 | *freq = level * dom->mult_factor; | ||
425 | |||
426 | return ret; | ||
427 | } | ||
428 | |||
429 | static struct scmi_perf_ops perf_ops = { | ||
430 | .limits_set = scmi_perf_limits_set, | ||
431 | .limits_get = scmi_perf_limits_get, | ||
432 | .level_set = scmi_perf_level_set, | ||
433 | .level_get = scmi_perf_level_get, | ||
434 | .device_domain_id = scmi_dev_domain_id, | ||
435 | .get_transition_latency = scmi_dvfs_get_transition_latency, | ||
436 | .add_opps_to_device = scmi_dvfs_add_opps_to_device, | ||
437 | .freq_set = scmi_dvfs_freq_set, | ||
438 | .freq_get = scmi_dvfs_freq_get, | ||
439 | }; | ||
440 | |||
441 | static int scmi_perf_protocol_init(struct scmi_handle *handle) | ||
442 | { | ||
443 | int domain; | ||
444 | u32 version; | ||
445 | struct scmi_perf_info *pinfo; | ||
446 | |||
447 | scmi_version_get(handle, SCMI_PROTOCOL_PERF, &version); | ||
448 | |||
449 | dev_dbg(handle->dev, "Performance Version %d.%d\n", | ||
450 | PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); | ||
451 | |||
452 | pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL); | ||
453 | if (!pinfo) | ||
454 | return -ENOMEM; | ||
455 | |||
456 | scmi_perf_attributes_get(handle, pinfo); | ||
457 | |||
458 | pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains, | ||
459 | sizeof(*pinfo->dom_info), GFP_KERNEL); | ||
460 | if (!pinfo->dom_info) | ||
461 | return -ENOMEM; | ||
462 | |||
463 | for (domain = 0; domain < pinfo->num_domains; domain++) { | ||
464 | struct perf_dom_info *dom = pinfo->dom_info + domain; | ||
465 | |||
466 | scmi_perf_domain_attributes_get(handle, domain, dom); | ||
467 | scmi_perf_describe_levels_get(handle, domain, dom); | ||
468 | } | ||
469 | |||
470 | handle->perf_ops = &perf_ops; | ||
471 | handle->perf_priv = pinfo; | ||
472 | |||
473 | return 0; | ||
474 | } | ||
475 | |||
476 | static int __init scmi_perf_init(void) | ||
477 | { | ||
478 | return scmi_protocol_register(SCMI_PROTOCOL_PERF, | ||
479 | &scmi_perf_protocol_init); | ||
480 | } | ||
481 | subsys_initcall(scmi_perf_init); | ||
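With the ops table registered on the handle, a consumer such as the scmi
cpufreq glue can drive DVFS without knowing the message formats. A hedged
usage sketch (the function name, cpu_dev, and the 1.2 GHz target are
assumptions, not code from this patch):

	static int example_set_rate(const struct scmi_handle *handle,
				    struct device *cpu_dev)
	{
		const struct scmi_perf_ops *ops = handle->perf_ops;
		unsigned long freq;
		int domain = ops->device_domain_id(cpu_dev);

		if (domain < 0)
			return domain;

		if (!ops->freq_set(handle, domain, 1200000000UL, false) &&
		    !ops->freq_get(handle, domain, &freq, false))
			dev_info(cpu_dev, "running at %lu Hz\n", freq);

		return 0;
	}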
diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c new file mode 100644 index 000000000000..087c2876cdf2 --- /dev/null +++ b/drivers/firmware/arm_scmi/power.c | |||
@@ -0,0 +1,221 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Power Protocol | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #include "common.h" | ||
9 | |||
10 | enum scmi_power_protocol_cmd { | ||
11 | POWER_DOMAIN_ATTRIBUTES = 0x3, | ||
12 | POWER_STATE_SET = 0x4, | ||
13 | POWER_STATE_GET = 0x5, | ||
14 | POWER_STATE_NOTIFY = 0x6, | ||
15 | }; | ||
16 | |||
17 | struct scmi_msg_resp_power_attributes { | ||
18 | __le16 num_domains; | ||
19 | __le16 reserved; | ||
20 | __le32 stats_addr_low; | ||
21 | __le32 stats_addr_high; | ||
22 | __le32 stats_size; | ||
23 | }; | ||
24 | |||
25 | struct scmi_msg_resp_power_domain_attributes { | ||
26 | __le32 flags; | ||
27 | #define SUPPORTS_STATE_SET_NOTIFY(x) ((x) & BIT(31)) | ||
28 | #define SUPPORTS_STATE_SET_ASYNC(x) ((x) & BIT(30)) | ||
29 | #define SUPPORTS_STATE_SET_SYNC(x) ((x) & BIT(29)) | ||
30 | u8 name[SCMI_MAX_STR_SIZE]; | ||
31 | }; | ||
32 | |||
33 | struct scmi_power_set_state { | ||
34 | __le32 flags; | ||
35 | #define STATE_SET_ASYNC BIT(0) | ||
36 | __le32 domain; | ||
37 | __le32 state; | ||
38 | }; | ||
39 | |||
40 | struct scmi_power_state_notify { | ||
41 | __le32 domain; | ||
42 | __le32 notify_enable; | ||
43 | }; | ||
44 | |||
45 | struct power_dom_info { | ||
46 | bool state_set_sync; | ||
47 | bool state_set_async; | ||
48 | bool state_set_notify; | ||
49 | char name[SCMI_MAX_STR_SIZE]; | ||
50 | }; | ||
51 | |||
52 | struct scmi_power_info { | ||
53 | int num_domains; | ||
54 | u64 stats_addr; | ||
55 | u32 stats_size; | ||
56 | struct power_dom_info *dom_info; | ||
57 | }; | ||
58 | |||
59 | static int scmi_power_attributes_get(const struct scmi_handle *handle, | ||
60 | struct scmi_power_info *pi) | ||
61 | { | ||
62 | int ret; | ||
63 | struct scmi_xfer *t; | ||
64 | struct scmi_msg_resp_power_attributes *attr; | ||
65 | |||
66 | ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES, | ||
67 | SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t); | ||
68 | if (ret) | ||
69 | return ret; | ||
70 | |||
71 | attr = t->rx.buf; | ||
72 | |||
73 | ret = scmi_do_xfer(handle, t); | ||
74 | if (!ret) { | ||
75 | pi->num_domains = le16_to_cpu(attr->num_domains); | ||
76 | pi->stats_addr = le32_to_cpu(attr->stats_addr_low) | | ||
77 | (u64)le32_to_cpu(attr->stats_addr_high) << 32; | ||
78 | pi->stats_size = le32_to_cpu(attr->stats_size); | ||
79 | } | ||
80 | |||
81 | scmi_one_xfer_put(handle, t); | ||
82 | return ret; | ||
83 | } | ||
84 | |||
85 | static int | ||
86 | scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain, | ||
87 | struct power_dom_info *dom_info) | ||
88 | { | ||
89 | int ret; | ||
90 | struct scmi_xfer *t; | ||
91 | struct scmi_msg_resp_power_domain_attributes *attr; | ||
92 | |||
93 | ret = scmi_one_xfer_init(handle, POWER_DOMAIN_ATTRIBUTES, | ||
94 | SCMI_PROTOCOL_POWER, sizeof(domain), | ||
95 | sizeof(*attr), &t); | ||
96 | if (ret) | ||
97 | return ret; | ||
98 | |||
99 | *(__le32 *)t->tx.buf = cpu_to_le32(domain); | ||
100 | attr = t->rx.buf; | ||
101 | |||
102 | ret = scmi_do_xfer(handle, t); | ||
103 | if (!ret) { | ||
104 | u32 flags = le32_to_cpu(attr->flags); | ||
105 | |||
106 | dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags); | ||
107 | dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags); | ||
108 | dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags); | ||
109 | memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); | ||
110 | } | ||
111 | |||
112 | scmi_one_xfer_put(handle, t); | ||
113 | return ret; | ||
114 | } | ||
115 | |||
116 | static int | ||
117 | scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state) | ||
118 | { | ||
119 | int ret; | ||
120 | struct scmi_xfer *t; | ||
121 | struct scmi_power_set_state *st; | ||
122 | |||
123 | ret = scmi_one_xfer_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER, | ||
124 | sizeof(*st), 0, &t); | ||
125 | if (ret) | ||
126 | return ret; | ||
127 | |||
128 | st = t->tx.buf; | ||
129 | st->flags = cpu_to_le32(0); | ||
130 | st->domain = cpu_to_le32(domain); | ||
131 | st->state = cpu_to_le32(state); | ||
132 | |||
133 | ret = scmi_do_xfer(handle, t); | ||
134 | |||
135 | scmi_one_xfer_put(handle, t); | ||
136 | return ret; | ||
137 | } | ||
138 | |||
139 | static int | ||
140 | scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state) | ||
141 | { | ||
142 | int ret; | ||
143 | struct scmi_xfer *t; | ||
144 | |||
145 | ret = scmi_one_xfer_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER, | ||
146 | sizeof(u32), sizeof(u32), &t); | ||
147 | if (ret) | ||
148 | return ret; | ||
149 | |||
150 | *(__le32 *)t->tx.buf = cpu_to_le32(domain); | ||
151 | |||
152 | ret = scmi_do_xfer(handle, t); | ||
153 | if (!ret) | ||
154 | *state = le32_to_cpu(*(__le32 *)t->rx.buf); | ||
155 | |||
156 | scmi_one_xfer_put(handle, t); | ||
157 | return ret; | ||
158 | } | ||
159 | |||
160 | static int scmi_power_num_domains_get(const struct scmi_handle *handle) | ||
161 | { | ||
162 | struct scmi_power_info *pi = handle->power_priv; | ||
163 | |||
164 | return pi->num_domains; | ||
165 | } | ||
166 | |||
167 | static char *scmi_power_name_get(const struct scmi_handle *handle, u32 domain) | ||
168 | { | ||
169 | struct scmi_power_info *pi = handle->power_priv; | ||
170 | struct power_dom_info *dom = pi->dom_info + domain; | ||
171 | |||
172 | return dom->name; | ||
173 | } | ||
174 | |||
175 | static struct scmi_power_ops power_ops = { | ||
176 | .num_domains_get = scmi_power_num_domains_get, | ||
177 | .name_get = scmi_power_name_get, | ||
178 | .state_set = scmi_power_state_set, | ||
179 | .state_get = scmi_power_state_get, | ||
180 | }; | ||
181 | |||
182 | static int scmi_power_protocol_init(struct scmi_handle *handle) | ||
183 | { | ||
184 | int domain; | ||
185 | u32 version; | ||
186 | struct scmi_power_info *pinfo; | ||
187 | |||
188 | scmi_version_get(handle, SCMI_PROTOCOL_POWER, &version); | ||
189 | |||
190 | dev_dbg(handle->dev, "Power Version %d.%d\n", | ||
191 | PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); | ||
192 | |||
193 | pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL); | ||
194 | if (!pinfo) | ||
195 | return -ENOMEM; | ||
196 | |||
197 | scmi_power_attributes_get(handle, pinfo); | ||
198 | |||
199 | pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains, | ||
200 | sizeof(*pinfo->dom_info), GFP_KERNEL); | ||
201 | if (!pinfo->dom_info) | ||
202 | return -ENOMEM; | ||
203 | |||
204 | for (domain = 0; domain < pinfo->num_domains; domain++) { | ||
205 | struct power_dom_info *dom = pinfo->dom_info + domain; | ||
206 | |||
207 | scmi_power_domain_attributes_get(handle, domain, dom); | ||
208 | } | ||
209 | |||
210 | handle->power_ops = &power_ops; | ||
211 | handle->power_priv = pinfo; | ||
212 | |||
213 | return 0; | ||
214 | } | ||
215 | |||
216 | static int __init scmi_power_init(void) | ||
217 | { | ||
218 | return scmi_protocol_register(SCMI_PROTOCOL_POWER, | ||
219 | &scmi_power_protocol_init); | ||
220 | } | ||
221 | subsys_initcall(scmi_power_init); | ||
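A hedged sketch of a power_ops consumer (the function name and domain
index are assumptions); the in-tree user is the generic power domain
glue in the next file:

	static void example_power_off(const struct scmi_handle *handle,
				      u32 dom)
	{
		const struct scmi_power_ops *ops = handle->power_ops;
		u32 state;

		if (!ops->state_set(handle, dom, SCMI_POWER_STATE_GENERIC_OFF) &&
		    !ops->state_get(handle, dom, &state))
			dev_dbg(handle->dev, "domain %u now in state %#x\n",
				dom, state);
	}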
diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c new file mode 100644 index 000000000000..87f737e01473 --- /dev/null +++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c | |||
@@ -0,0 +1,129 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * SCMI Generic power domain support. | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #include <linux/err.h> | ||
9 | #include <linux/io.h> | ||
10 | #include <linux/module.h> | ||
11 | #include <linux/pm_domain.h> | ||
12 | #include <linux/scmi_protocol.h> | ||
13 | |||
14 | struct scmi_pm_domain { | ||
15 | struct generic_pm_domain genpd; | ||
16 | const struct scmi_handle *handle; | ||
17 | const char *name; | ||
18 | u32 domain; | ||
19 | }; | ||
20 | |||
21 | #define to_scmi_pd(gpd) container_of(gpd, struct scmi_pm_domain, genpd) | ||
22 | |||
23 | static int scmi_pd_power(struct generic_pm_domain *domain, bool power_on) | ||
24 | { | ||
25 | int ret; | ||
26 | u32 state, ret_state; | ||
27 | struct scmi_pm_domain *pd = to_scmi_pd(domain); | ||
28 | const struct scmi_power_ops *ops = pd->handle->power_ops; | ||
29 | |||
30 | if (power_on) | ||
31 | state = SCMI_POWER_STATE_GENERIC_ON; | ||
32 | else | ||
33 | state = SCMI_POWER_STATE_GENERIC_OFF; | ||
34 | |||
35 | ret = ops->state_set(pd->handle, pd->domain, state); | ||
36 | if (!ret) | ||
37 | ret = ops->state_get(pd->handle, pd->domain, &ret_state); | ||
38 | if (!ret && state != ret_state) | ||
39 | return -EIO; | ||
40 | |||
41 | return ret; | ||
42 | } | ||
43 | |||
44 | static int scmi_pd_power_on(struct generic_pm_domain *domain) | ||
45 | { | ||
46 | return scmi_pd_power(domain, true); | ||
47 | } | ||
48 | |||
49 | static int scmi_pd_power_off(struct generic_pm_domain *domain) | ||
50 | { | ||
51 | return scmi_pd_power(domain, false); | ||
52 | } | ||
53 | |||
54 | static int scmi_pm_domain_probe(struct scmi_device *sdev) | ||
55 | { | ||
56 | int num_domains, i; | ||
57 | struct device *dev = &sdev->dev; | ||
58 | struct device_node *np = dev->of_node; | ||
59 | struct scmi_pm_domain *scmi_pd; | ||
60 | struct genpd_onecell_data *scmi_pd_data; | ||
61 | struct generic_pm_domain **domains; | ||
62 | const struct scmi_handle *handle = sdev->handle; | ||
63 | |||
64 | if (!handle || !handle->power_ops) | ||
65 | return -ENODEV; | ||
66 | |||
67 | num_domains = handle->power_ops->num_domains_get(handle); | ||
68 | if (num_domains < 0) { | ||
69 | dev_err(dev, "number of domains not found\n"); | ||
70 | return num_domains; | ||
71 | } | ||
72 | |||
73 | scmi_pd = devm_kcalloc(dev, num_domains, sizeof(*scmi_pd), GFP_KERNEL); | ||
74 | if (!scmi_pd) | ||
75 | return -ENOMEM; | ||
76 | |||
77 | scmi_pd_data = devm_kzalloc(dev, sizeof(*scmi_pd_data), GFP_KERNEL); | ||
78 | if (!scmi_pd_data) | ||
79 | return -ENOMEM; | ||
80 | |||
81 | domains = devm_kcalloc(dev, num_domains, sizeof(*domains), GFP_KERNEL); | ||
82 | if (!domains) | ||
83 | return -ENOMEM; | ||
84 | |||
85 | for (i = 0; i < num_domains; i++, scmi_pd++) { | ||
86 | u32 state; | ||
87 | |||
88 | domains[i] = &scmi_pd->genpd; | ||
89 | |||
90 | scmi_pd->domain = i; | ||
91 | scmi_pd->handle = handle; | ||
92 | scmi_pd->name = handle->power_ops->name_get(handle, i); | ||
93 | scmi_pd->genpd.name = scmi_pd->name; | ||
94 | scmi_pd->genpd.power_off = scmi_pd_power_off; | ||
95 | scmi_pd->genpd.power_on = scmi_pd_power_on; | ||
96 | |||
97 | if (handle->power_ops->state_get(handle, i, &state)) { | ||
98 | dev_warn(dev, "failed to get state for domain %d\n", i); | ||
99 | continue; | ||
100 | } | ||
101 | |||
102 | pm_genpd_init(&scmi_pd->genpd, NULL, | ||
103 | state == SCMI_POWER_STATE_GENERIC_OFF); | ||
104 | } | ||
105 | |||
106 | scmi_pd_data->domains = domains; | ||
107 | scmi_pd_data->num_domains = num_domains; | ||
108 | |||
109 | of_genpd_add_provider_onecell(np, scmi_pd_data); | ||
110 | |||
111 | return 0; | ||
112 | } | ||
113 | |||
114 | static const struct scmi_device_id scmi_id_table[] = { | ||
115 | { SCMI_PROTOCOL_POWER }, | ||
116 | { }, | ||
117 | }; | ||
118 | MODULE_DEVICE_TABLE(scmi, scmi_id_table); | ||
119 | |||
120 | static struct scmi_driver scmi_power_domain_driver = { | ||
121 | .name = "scmi-power-domain", | ||
122 | .probe = scmi_pm_domain_probe, | ||
123 | .id_table = scmi_id_table, | ||
124 | }; | ||
125 | module_scmi_driver(scmi_power_domain_driver); | ||
126 | |||
127 | MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); | ||
128 | MODULE_DESCRIPTION("ARM SCMI power domain driver"); | ||
129 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c new file mode 100644 index 000000000000..bbb469fea0ed --- /dev/null +++ b/drivers/firmware/arm_scmi/sensors.c | |||
@@ -0,0 +1,291 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface (SCMI) Sensor Protocol | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | |||
8 | #include "common.h" | ||
9 | |||
10 | enum scmi_sensor_protocol_cmd { | ||
11 | SENSOR_DESCRIPTION_GET = 0x3, | ||
12 | SENSOR_CONFIG_SET = 0x4, | ||
13 | SENSOR_TRIP_POINT_SET = 0x5, | ||
14 | SENSOR_READING_GET = 0x6, | ||
15 | }; | ||
16 | |||
17 | struct scmi_msg_resp_sensor_attributes { | ||
18 | __le16 num_sensors; | ||
19 | u8 max_requests; | ||
20 | u8 reserved; | ||
21 | __le32 reg_addr_low; | ||
22 | __le32 reg_addr_high; | ||
23 | __le32 reg_size; | ||
24 | }; | ||
25 | |||
26 | struct scmi_msg_resp_sensor_description { | ||
27 | __le16 num_returned; | ||
28 | __le16 num_remaining; | ||
29 | struct { | ||
30 | __le32 id; | ||
31 | __le32 attributes_low; | ||
32 | #define SUPPORTS_ASYNC_READ(x) ((x) & BIT(31)) | ||
33 | #define NUM_TRIP_POINTS(x) (((x) >> 4) & 0xff) | ||
34 | __le32 attributes_high; | ||
35 | #define SENSOR_TYPE(x) ((x) & 0xff) | ||
36 | #define SENSOR_SCALE(x) (((x) >> 11) & 0x3f) | ||
37 | #define SENSOR_UPDATE_SCALE(x) (((x) >> 22) & 0x1f) | ||
38 | #define SENSOR_UPDATE_BASE(x) (((x) >> 27) & 0x1f) | ||
39 | u8 name[SCMI_MAX_STR_SIZE]; | ||
40 | } desc[0]; | ||
41 | }; | ||
42 | |||
43 | struct scmi_msg_set_sensor_config { | ||
44 | __le32 id; | ||
45 | __le32 event_control; | ||
46 | }; | ||
47 | |||
48 | struct scmi_msg_set_sensor_trip_point { | ||
49 | __le32 id; | ||
50 | __le32 event_control; | ||
51 | #define SENSOR_TP_EVENT_MASK (0x3) | ||
52 | #define SENSOR_TP_DISABLED 0x0 | ||
53 | #define SENSOR_TP_POSITIVE 0x1 | ||
54 | #define SENSOR_TP_NEGATIVE 0x2 | ||
55 | #define SENSOR_TP_BOTH 0x3 | ||
56 | #define SENSOR_TP_ID(x) (((x) & 0xff) << 4) | ||
57 | __le32 value_low; | ||
58 | __le32 value_high; | ||
59 | }; | ||
60 | |||
61 | struct scmi_msg_sensor_reading_get { | ||
62 | __le32 id; | ||
63 | __le32 flags; | ||
64 | #define SENSOR_READ_ASYNC BIT(0) | ||
65 | }; | ||
66 | |||
67 | struct sensors_info { | ||
68 | int num_sensors; | ||
69 | int max_requests; | ||
70 | u64 reg_addr; | ||
71 | u32 reg_size; | ||
72 | struct scmi_sensor_info *sensors; | ||
73 | }; | ||
74 | |||
75 | static int scmi_sensor_attributes_get(const struct scmi_handle *handle, | ||
76 | struct sensors_info *si) | ||
77 | { | ||
78 | int ret; | ||
79 | struct scmi_xfer *t; | ||
80 | struct scmi_msg_resp_sensor_attributes *attr; | ||
81 | |||
82 | ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES, | ||
83 | SCMI_PROTOCOL_SENSOR, 0, sizeof(*attr), &t); | ||
84 | if (ret) | ||
85 | return ret; | ||
86 | |||
87 | attr = t->rx.buf; | ||
88 | |||
89 | ret = scmi_do_xfer(handle, t); | ||
90 | if (!ret) { | ||
91 | si->num_sensors = le16_to_cpu(attr->num_sensors); | ||
92 | si->max_requests = attr->max_requests; | ||
93 | si->reg_addr = le32_to_cpu(attr->reg_addr_low) | | ||
94 | (u64)le32_to_cpu(attr->reg_addr_high) << 32; | ||
95 | si->reg_size = le32_to_cpu(attr->reg_size); | ||
96 | } | ||
97 | |||
98 | scmi_one_xfer_put(handle, t); | ||
99 | return ret; | ||
100 | } | ||
101 | |||
102 | static int scmi_sensor_description_get(const struct scmi_handle *handle, | ||
103 | struct sensors_info *si) | ||
104 | { | ||
105 | int ret, cnt; | ||
106 | u32 desc_index = 0; | ||
107 | u16 num_returned, num_remaining; | ||
108 | struct scmi_xfer *t; | ||
109 | struct scmi_msg_resp_sensor_description *buf; | ||
110 | |||
111 | ret = scmi_one_xfer_init(handle, SENSOR_DESCRIPTION_GET, | ||
112 | SCMI_PROTOCOL_SENSOR, sizeof(__le32), 0, &t); | ||
113 | if (ret) | ||
114 | return ret; | ||
115 | |||
116 | buf = t->rx.buf; | ||
117 | |||
118 | do { | ||
119 | /* Set the number of sensors to be skipped/already read */ | ||
120 | *(__le32 *)t->tx.buf = cpu_to_le32(desc_index); | ||
121 | |||
122 | ret = scmi_do_xfer(handle, t); | ||
123 | if (ret) | ||
124 | break; | ||
125 | |||
126 | num_returned = le16_to_cpu(buf->num_returned); | ||
127 | num_remaining = le16_to_cpu(buf->num_remaining); | ||
128 | |||
129 | if (desc_index + num_returned > si->num_sensors) { | ||
130 | dev_err(handle->dev, "No. of sensors can't exceed %d", | ||
131 | si->num_sensors); | ||
132 | break; | ||
133 | } | ||
134 | |||
135 | for (cnt = 0; cnt < num_returned; cnt++) { | ||
136 | u32 attrh; | ||
137 | struct scmi_sensor_info *s; | ||
138 | |||
139 | attrh = le32_to_cpu(buf->desc[cnt].attributes_high); | ||
140 | s = &si->sensors[desc_index + cnt]; | ||
141 | s->id = le32_to_cpu(buf->desc[cnt].id); | ||
142 | s->type = SENSOR_TYPE(attrh); | ||
143 | memcpy(s->name, buf->desc[cnt].name, SCMI_MAX_STR_SIZE); | ||
144 | } | ||
145 | |||
146 | desc_index += num_returned; | ||
147 | /* | ||
148 | * check for both returned and remaining to avoid infinite | ||
149 | * loop due to buggy firmware | ||
150 | */ | ||
151 | } while (num_returned && num_remaining); | ||
152 | |||
153 | scmi_one_xfer_put(handle, t); | ||
154 | return ret; | ||
155 | } | ||
156 | |||
157 | static int | ||
158 | scmi_sensor_configuration_set(const struct scmi_handle *handle, u32 sensor_id) | ||
159 | { | ||
160 | int ret; | ||
161 | u32 evt_cntl = BIT(0); | ||
162 | struct scmi_xfer *t; | ||
163 | struct scmi_msg_set_sensor_config *cfg; | ||
164 | |||
165 | ret = scmi_one_xfer_init(handle, SENSOR_CONFIG_SET, | ||
166 | SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t); | ||
167 | if (ret) | ||
168 | return ret; | ||
169 | |||
170 | cfg = t->tx.buf; | ||
171 | cfg->id = cpu_to_le32(sensor_id); | ||
172 | cfg->event_control = cpu_to_le32(evt_cntl); | ||
173 | |||
174 | ret = scmi_do_xfer(handle, t); | ||
175 | |||
176 | scmi_one_xfer_put(handle, t); | ||
177 | return ret; | ||
178 | } | ||
179 | |||
180 | static int scmi_sensor_trip_point_set(const struct scmi_handle *handle, | ||
181 | u32 sensor_id, u8 trip_id, u64 trip_value) | ||
182 | { | ||
183 | int ret; | ||
184 | u32 evt_cntl = SENSOR_TP_BOTH; | ||
185 | struct scmi_xfer *t; | ||
186 | struct scmi_msg_set_sensor_trip_point *trip; | ||
187 | |||
188 | ret = scmi_one_xfer_init(handle, SENSOR_TRIP_POINT_SET, | ||
189 | SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t); | ||
190 | if (ret) | ||
191 | return ret; | ||
192 | |||
193 | trip = t->tx.buf; | ||
194 | trip->id = cpu_to_le32(sensor_id); | ||
195 | trip->event_control = cpu_to_le32(evt_cntl | SENSOR_TP_ID(trip_id)); | ||
196 | trip->value_low = cpu_to_le32(trip_value & 0xffffffff); | ||
197 | trip->value_high = cpu_to_le32(trip_value >> 32); | ||
198 | |||
199 | ret = scmi_do_xfer(handle, t); | ||
200 | |||
201 | scmi_one_xfer_put(handle, t); | ||
202 | return ret; | ||
203 | } | ||
204 | |||
205 | static int scmi_sensor_reading_get(const struct scmi_handle *handle, | ||
206 | u32 sensor_id, bool async, u64 *value) | ||
207 | { | ||
208 | int ret; | ||
209 | struct scmi_xfer *t; | ||
210 | struct scmi_msg_sensor_reading_get *sensor; | ||
211 | |||
212 | ret = scmi_one_xfer_init(handle, SENSOR_READING_GET, | ||
213 | SCMI_PROTOCOL_SENSOR, sizeof(*sensor), | ||
214 | sizeof(u64), &t); | ||
215 | if (ret) | ||
216 | return ret; | ||
217 | |||
218 | sensor = t->tx.buf; | ||
219 | sensor->id = cpu_to_le32(sensor_id); | ||
220 | sensor->flags = cpu_to_le32(async ? SENSOR_READ_ASYNC : 0); | ||
221 | |||
222 | ret = scmi_do_xfer(handle, t); | ||
223 | if (!ret) { | ||
224 | __le32 *pval = t->rx.buf; | ||
225 | |||
226 | *value = le32_to_cpu(*pval); | ||
227 | *value |= (u64)le32_to_cpu(*(pval + 1)) << 32; | ||
228 | } | ||
229 | |||
230 | scmi_one_xfer_put(handle, t); | ||
231 | return ret; | ||
232 | } | ||
233 | |||
234 | static const struct scmi_sensor_info * | ||
235 | scmi_sensor_info_get(const struct scmi_handle *handle, u32 sensor_id) | ||
236 | { | ||
237 | struct sensors_info *si = handle->sensor_priv; | ||
238 | |||
239 | return si->sensors + sensor_id; | ||
240 | } | ||
241 | |||
242 | static int scmi_sensor_count_get(const struct scmi_handle *handle) | ||
243 | { | ||
244 | struct sensors_info *si = handle->sensor_priv; | ||
245 | |||
246 | return si->num_sensors; | ||
247 | } | ||
248 | |||
249 | static struct scmi_sensor_ops sensor_ops = { | ||
250 | .count_get = scmi_sensor_count_get, | ||
251 | .info_get = scmi_sensor_info_get, | ||
252 | .configuration_set = scmi_sensor_configuration_set, | ||
253 | .trip_point_set = scmi_sensor_trip_point_set, | ||
254 | .reading_get = scmi_sensor_reading_get, | ||
255 | }; | ||
256 | |||
257 | static int scmi_sensors_protocol_init(struct scmi_handle *handle) | ||
258 | { | ||
259 | u32 version; | ||
260 | struct sensors_info *sinfo; | ||
261 | |||
262 | scmi_version_get(handle, SCMI_PROTOCOL_SENSOR, &version); | ||
263 | |||
264 | dev_dbg(handle->dev, "Sensor Version %d.%d\n", | ||
265 | PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); | ||
266 | |||
267 | sinfo = devm_kzalloc(handle->dev, sizeof(*sinfo), GFP_KERNEL); | ||
268 | if (!sinfo) | ||
269 | return -ENOMEM; | ||
270 | |||
271 | scmi_sensor_attributes_get(handle, sinfo); | ||
272 | |||
273 | sinfo->sensors = devm_kcalloc(handle->dev, sinfo->num_sensors, | ||
274 | sizeof(*sinfo->sensors), GFP_KERNEL); | ||
275 | if (!sinfo->sensors) | ||
276 | return -ENOMEM; | ||
277 | |||
278 | scmi_sensor_description_get(handle, sinfo); | ||
279 | |||
280 | handle->sensor_ops = &sensor_ops; | ||
281 | handle->sensor_priv = sinfo; | ||
282 | |||
283 | return 0; | ||
284 | } | ||
285 | |||
286 | static int __init scmi_sensors_init(void) | ||
287 | { | ||
288 | return scmi_protocol_register(SCMI_PROTOCOL_SENSOR, | ||
289 | &scmi_sensors_protocol_init); | ||
290 | } | ||
291 | subsys_initcall(scmi_sensors_init); | ||
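A hedged sketch of a sensor_ops consumer (function name hypothetical;
modeled loosely on how a hwmon glue would use this): enumerate the
sensors and take one synchronous reading each.

	static void example_read_sensors(const struct scmi_handle *handle)
	{
		const struct scmi_sensor_ops *ops = handle->sensor_ops;
		const struct scmi_sensor_info *s;
		u64 value;
		int i;

		for (i = 0; i < ops->count_get(handle); i++) {
			s = ops->info_get(handle, i);
			if (!ops->reading_get(handle, s->id, false, &value))
				dev_dbg(handle->dev, "%s: %llu\n",
					s->name, value);
		}
	}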
diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c index 7da9f1b83ebe..6d7a6c0a5e07 100644 --- a/drivers/firmware/arm_scpi.c +++ b/drivers/firmware/arm_scpi.c | |||
@@ -28,6 +28,7 @@ | |||
28 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt | 28 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt |
29 | 29 | ||
30 | #include <linux/bitmap.h> | 30 | #include <linux/bitmap.h> |
31 | #include <linux/bitfield.h> | ||
31 | #include <linux/device.h> | 32 | #include <linux/device.h> |
32 | #include <linux/err.h> | 33 | #include <linux/err.h> |
33 | #include <linux/export.h> | 34 | #include <linux/export.h> |
@@ -45,48 +46,32 @@ | |||
45 | #include <linux/sort.h> | 46 | #include <linux/sort.h> |
46 | #include <linux/spinlock.h> | 47 | #include <linux/spinlock.h> |
47 | 48 | ||
48 | #define CMD_ID_SHIFT 0 | 49 | #define CMD_ID_MASK GENMASK(6, 0) |
49 | #define CMD_ID_MASK 0x7f | 50 | #define CMD_TOKEN_ID_MASK GENMASK(15, 8) |
50 | #define CMD_TOKEN_ID_SHIFT 8 | 51 | #define CMD_DATA_SIZE_MASK GENMASK(24, 16) |
51 | #define CMD_TOKEN_ID_MASK 0xff | 52 | #define CMD_LEGACY_DATA_SIZE_MASK GENMASK(28, 20) |
52 | #define CMD_DATA_SIZE_SHIFT 16 | 53 | #define PACK_SCPI_CMD(cmd_id, tx_sz) \ |
53 | #define CMD_DATA_SIZE_MASK 0x1ff | 54 | (FIELD_PREP(CMD_ID_MASK, cmd_id) | \ |
54 | #define CMD_LEGACY_DATA_SIZE_SHIFT 20 | 55 | FIELD_PREP(CMD_DATA_SIZE_MASK, tx_sz)) |
55 | #define CMD_LEGACY_DATA_SIZE_MASK 0x1ff | 56 | #define PACK_LEGACY_SCPI_CMD(cmd_id, tx_sz) \ |
56 | #define PACK_SCPI_CMD(cmd_id, tx_sz) \ | 57 | (FIELD_PREP(CMD_ID_MASK, cmd_id) | \ |
57 | ((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) | \ | 58 | FIELD_PREP(CMD_LEGACY_DATA_SIZE_MASK, tx_sz)) |
58 | (((tx_sz) & CMD_DATA_SIZE_MASK) << CMD_DATA_SIZE_SHIFT)) | 59 | |
59 | #define ADD_SCPI_TOKEN(cmd, token) \ | 60 | #define CMD_SIZE(cmd) FIELD_GET(CMD_DATA_SIZE_MASK, cmd) |
60 | ((cmd) |= (((token) & CMD_TOKEN_ID_MASK) << CMD_TOKEN_ID_SHIFT)) | 61 | #define CMD_UNIQ_MASK (CMD_TOKEN_ID_MASK | CMD_ID_MASK) |
61 | #define PACK_LEGACY_SCPI_CMD(cmd_id, tx_sz) \ | ||
62 | ((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) | \ | ||
63 | (((tx_sz) & CMD_LEGACY_DATA_SIZE_MASK) << CMD_LEGACY_DATA_SIZE_SHIFT)) | ||
64 | |||
65 | #define CMD_SIZE(cmd) (((cmd) >> CMD_DATA_SIZE_SHIFT) & CMD_DATA_SIZE_MASK) | ||
66 | #define CMD_LEGACY_SIZE(cmd) (((cmd) >> CMD_LEGACY_DATA_SIZE_SHIFT) & \ | ||
67 | CMD_LEGACY_DATA_SIZE_MASK) | ||
68 | #define CMD_UNIQ_MASK (CMD_TOKEN_ID_MASK << CMD_TOKEN_ID_SHIFT | CMD_ID_MASK) | ||
69 | #define CMD_XTRACT_UNIQ(cmd) ((cmd) & CMD_UNIQ_MASK) | 62 | #define CMD_XTRACT_UNIQ(cmd) ((cmd) & CMD_UNIQ_MASK) |
70 | 63 | ||
71 | #define SCPI_SLOT 0 | 64 | #define SCPI_SLOT 0 |
72 | 65 | ||
73 | #define MAX_DVFS_DOMAINS 8 | 66 | #define MAX_DVFS_DOMAINS 8 |
74 | #define MAX_DVFS_OPPS 16 | 67 | #define MAX_DVFS_OPPS 16 |
75 | #define DVFS_LATENCY(hdr) (le32_to_cpu(hdr) >> 16) | 68 | |
76 | #define DVFS_OPP_COUNT(hdr) ((le32_to_cpu(hdr) >> 8) & 0xff) | 69 | #define PROTO_REV_MAJOR_MASK GENMASK(31, 16) |
77 | 70 | #define PROTO_REV_MINOR_MASK GENMASK(15, 0) | |
78 | #define PROTOCOL_REV_MINOR_BITS 16 | 71 | |
79 | #define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1) | 72 | #define FW_REV_MAJOR_MASK GENMASK(31, 24) |
80 | #define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS) | 73 | #define FW_REV_MINOR_MASK GENMASK(23, 16) |
81 | #define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK) | 74 | #define FW_REV_PATCH_MASK GENMASK(15, 0) |
82 | |||
83 | #define FW_REV_MAJOR_BITS 24 | ||
84 | #define FW_REV_MINOR_BITS 16 | ||
85 | #define FW_REV_PATCH_MASK ((1U << FW_REV_MINOR_BITS) - 1) | ||
86 | #define FW_REV_MINOR_MASK ((1U << FW_REV_MAJOR_BITS) - 1) | ||
87 | #define FW_REV_MAJOR(x) ((x) >> FW_REV_MAJOR_BITS) | ||
88 | #define FW_REV_MINOR(x) (((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS) | ||
89 | #define FW_REV_PATCH(x) ((x) & FW_REV_PATCH_MASK) | ||
90 | 75 | ||
91 | #define MAX_RX_TIMEOUT (msecs_to_jiffies(30)) | 76 | #define MAX_RX_TIMEOUT (msecs_to_jiffies(30)) |
92 | 77 | ||
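/*
 * Editor's worked example of the GENMASK/FIELD_PREP rework above
 * (values made up); the packing is behavior-preserving:
 *
 *	u32 cmd = PACK_SCPI_CMD(0x12, 8);
 *	// FIELD_PREP(GENMASK(6, 0), 0x12) | FIELD_PREP(GENMASK(24, 16), 8)
 *	//	== 0x00000012 | 0x00080000 == 0x00080012
 *	unsigned int sz = CMD_SIZE(cmd);	// FIELD_GET recovers 8
 */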
@@ -311,10 +296,6 @@ struct clk_get_info { | |||
311 | u8 name[20]; | 296 | u8 name[20]; |
312 | } __packed; | 297 | } __packed; |
313 | 298 | ||
314 | struct clk_get_value { | ||
315 | __le32 rate; | ||
316 | } __packed; | ||
317 | |||
318 | struct clk_set_value { | 299 | struct clk_set_value { |
319 | __le16 id; | 300 | __le16 id; |
320 | __le16 reserved; | 301 | __le16 reserved; |
@@ -328,7 +309,9 @@ struct legacy_clk_set_value { | |||
328 | } __packed; | 309 | } __packed; |
329 | 310 | ||
330 | struct dvfs_info { | 311 | struct dvfs_info { |
331 | __le32 header; | 312 | u8 domain; |
313 | u8 opp_count; | ||
314 | __le16 latency; | ||
332 | struct { | 315 | struct { |
333 | __le32 freq; | 316 | __le32 freq; |
334 | __le32 m_volt; | 317 | __le32 m_volt; |
@@ -340,10 +323,6 @@ struct dvfs_set { | |||
340 | u8 index; | 323 | u8 index; |
341 | } __packed; | 324 | } __packed; |
342 | 325 | ||
343 | struct sensor_capabilities { | ||
344 | __le16 sensors; | ||
345 | } __packed; | ||
346 | |||
347 | struct _scpi_sensor_info { | 326 | struct _scpi_sensor_info { |
348 | __le16 sensor_id; | 327 | __le16 sensor_id; |
349 | u8 class; | 328 | u8 class; |
@@ -351,11 +330,6 @@ struct _scpi_sensor_info { | |||
351 | char name[20]; | 330 | char name[20]; |
352 | }; | 331 | }; |
353 | 332 | ||
354 | struct sensor_value { | ||
355 | __le32 lo_val; | ||
356 | __le32 hi_val; | ||
357 | } __packed; | ||
358 | |||
359 | struct dev_pstate_set { | 333 | struct dev_pstate_set { |
360 | __le16 dev_id; | 334 | __le16 dev_id; |
361 | u8 pstate; | 335 | u8 pstate; |
@@ -419,19 +393,20 @@ static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd) | |||
419 | unsigned int len; | 393 | unsigned int len; |
420 | 394 | ||
421 | if (scpi_info->is_legacy) { | 395 | if (scpi_info->is_legacy) { |
422 | struct legacy_scpi_shared_mem *mem = ch->rx_payload; | 396 | struct legacy_scpi_shared_mem __iomem *mem = |
397 | ch->rx_payload; | ||
423 | 398 | ||
424 | /* RX Length is not replied by the legacy Firmware */ | 399 | /* RX Length is not replied by the legacy Firmware */ |
425 | len = match->rx_len; | 400 | len = match->rx_len; |
426 | 401 | ||
427 | match->status = le32_to_cpu(mem->status); | 402 | match->status = ioread32(&mem->status); |
428 | memcpy_fromio(match->rx_buf, mem->payload, len); | 403 | memcpy_fromio(match->rx_buf, mem->payload, len); |
429 | } else { | 404 | } else { |
430 | struct scpi_shared_mem *mem = ch->rx_payload; | 405 | struct scpi_shared_mem __iomem *mem = ch->rx_payload; |
431 | 406 | ||
432 | len = min(match->rx_len, CMD_SIZE(cmd)); | 407 | len = min_t(unsigned int, match->rx_len, CMD_SIZE(cmd)); |
433 | 408 | ||
434 | match->status = le32_to_cpu(mem->status); | 409 | match->status = ioread32(&mem->status); |
435 | memcpy_fromio(match->rx_buf, mem->payload, len); | 410 | memcpy_fromio(match->rx_buf, mem->payload, len); |
436 | } | 411 | } |
437 | 412 | ||
@@ -445,11 +420,11 @@ static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd) | |||
445 | static void scpi_handle_remote_msg(struct mbox_client *c, void *msg) | 420 | static void scpi_handle_remote_msg(struct mbox_client *c, void *msg) |
446 | { | 421 | { |
447 | struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); | 422 | struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); |
448 | struct scpi_shared_mem *mem = ch->rx_payload; | 423 | struct scpi_shared_mem __iomem *mem = ch->rx_payload; |
449 | u32 cmd = 0; | 424 | u32 cmd = 0; |
450 | 425 | ||
451 | if (!scpi_info->is_legacy) | 426 | if (!scpi_info->is_legacy) |
452 | cmd = le32_to_cpu(mem->command); | 427 | cmd = ioread32(&mem->command); |
453 | 428 | ||
454 | scpi_process_cmd(ch, cmd); | 429 | scpi_process_cmd(ch, cmd); |
455 | } | 430 | } |
@@ -459,7 +434,7 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg) | |||
459 | unsigned long flags; | 434 | unsigned long flags; |
460 | struct scpi_xfer *t = msg; | 435 | struct scpi_xfer *t = msg; |
461 | struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); | 436 | struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); |
462 | struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload; | 437 | struct scpi_shared_mem __iomem *mem = ch->tx_payload; |
463 | 438 | ||
464 | if (t->tx_buf) { | 439 | if (t->tx_buf) { |
465 | if (scpi_info->is_legacy) | 440 | if (scpi_info->is_legacy) |
@@ -471,14 +446,14 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg) | |||
471 | if (t->rx_buf) { | 446 | if (t->rx_buf) { |
472 | if (!(++ch->token)) | 447 | if (!(++ch->token)) |
473 | ++ch->token; | 448 | ++ch->token; |
474 | ADD_SCPI_TOKEN(t->cmd, ch->token); | 449 | t->cmd |= FIELD_PREP(CMD_TOKEN_ID_MASK, ch->token); |
475 | spin_lock_irqsave(&ch->rx_lock, flags); | 450 | spin_lock_irqsave(&ch->rx_lock, flags); |
476 | list_add_tail(&t->node, &ch->rx_pending); | 451 | list_add_tail(&t->node, &ch->rx_pending); |
477 | spin_unlock_irqrestore(&ch->rx_lock, flags); | 452 | spin_unlock_irqrestore(&ch->rx_lock, flags); |
478 | } | 453 | } |
479 | 454 | ||
480 | if (!scpi_info->is_legacy) | 455 | if (!scpi_info->is_legacy) |
481 | mem->command = cpu_to_le32(t->cmd); | 456 | iowrite32(t->cmd, &mem->command); |
482 | } | 457 | } |
483 | 458 | ||
484 | static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch) | 459 | static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch) |
@@ -583,13 +558,13 @@ scpi_clk_get_range(u16 clk_id, unsigned long *min, unsigned long *max) | |||
583 | static unsigned long scpi_clk_get_val(u16 clk_id) | 558 | static unsigned long scpi_clk_get_val(u16 clk_id) |
584 | { | 559 | { |
585 | int ret; | 560 | int ret; |
586 | struct clk_get_value clk; | 561 | __le32 rate; |
587 | __le16 le_clk_id = cpu_to_le16(clk_id); | 562 | __le16 le_clk_id = cpu_to_le16(clk_id); |
588 | 563 | ||
589 | ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id, | 564 | ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id, |
590 | sizeof(le_clk_id), &clk, sizeof(clk)); | 565 | sizeof(le_clk_id), &rate, sizeof(rate)); |
591 | 566 | ||
592 | return ret ? ret : le32_to_cpu(clk.rate); | 567 | return ret ? ret : le32_to_cpu(rate); |
593 | } | 568 | } |
594 | 569 | ||
595 | static int scpi_clk_set_val(u16 clk_id, unsigned long rate) | 570 | static int scpi_clk_set_val(u16 clk_id, unsigned long rate) |
@@ -665,8 +640,8 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain) | |||
665 | if (!info) | 640 | if (!info) |
666 | return ERR_PTR(-ENOMEM); | 641 | return ERR_PTR(-ENOMEM); |
667 | 642 | ||
668 | info->count = DVFS_OPP_COUNT(buf.header); | 643 | info->count = buf.opp_count; |
669 | info->latency = DVFS_LATENCY(buf.header) * 1000; /* uS to nS */ | 644 | info->latency = le16_to_cpu(buf.latency) * 1000; /* uS to nS */ |
670 | 645 | ||
671 | info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL); | 646 | info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL); |
672 | if (!info->opps) { | 647 | if (!info->opps) { |
@@ -713,9 +688,6 @@ static int scpi_dvfs_get_transition_latency(struct device *dev) | |||
713 | if (IS_ERR(info)) | 688 | if (IS_ERR(info)) |
714 | return PTR_ERR(info); | 689 | return PTR_ERR(info); |
715 | 690 | ||
716 | if (!info->latency) | ||
717 | return 0; | ||
718 | |||
719 | return info->latency; | 691 | return info->latency; |
720 | } | 692 | } |
721 | 693 | ||
@@ -746,13 +718,13 @@ static int scpi_dvfs_add_opps_to_device(struct device *dev) | |||
746 | 718 | ||
747 | static int scpi_sensor_get_capability(u16 *sensors) | 719 | static int scpi_sensor_get_capability(u16 *sensors) |
748 | { | 720 | { |
749 | struct sensor_capabilities cap_buf; | 721 | __le16 cap; |
750 | int ret; | 722 | int ret; |
751 | 723 | ||
752 | ret = scpi_send_message(CMD_SENSOR_CAPABILITIES, NULL, 0, &cap_buf, | 724 | ret = scpi_send_message(CMD_SENSOR_CAPABILITIES, NULL, 0, &cap, |
753 | sizeof(cap_buf)); | 725 | sizeof(cap)); |
754 | if (!ret) | 726 | if (!ret) |
755 | *sensors = le16_to_cpu(cap_buf.sensors); | 727 | *sensors = le16_to_cpu(cap); |
756 | 728 | ||
757 | return ret; | 729 | return ret; |
758 | } | 730 | } |
@@ -776,20 +748,19 @@ static int scpi_sensor_get_info(u16 sensor_id, struct scpi_sensor_info *info) | |||
776 | static int scpi_sensor_get_value(u16 sensor, u64 *val) | 748 | static int scpi_sensor_get_value(u16 sensor, u64 *val) |
777 | { | 749 | { |
778 | __le16 id = cpu_to_le16(sensor); | 750 | __le16 id = cpu_to_le16(sensor); |
779 | struct sensor_value buf; | 751 | __le64 value; |
780 | int ret; | 752 | int ret; |
781 | 753 | ||
782 | ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id), | 754 | ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id), |
783 | &buf, sizeof(buf)); | 755 | &value, sizeof(value)); |
784 | if (ret) | 756 | if (ret) |
785 | return ret; | 757 | return ret; |
786 | 758 | ||
787 | if (scpi_info->is_legacy) | 759 | if (scpi_info->is_legacy) |
788 | /* only 32-bits supported, hi_val can be junk */ | 760 | /* only 32-bits supported, upper 32 bits can be junk */ |
789 | *val = le32_to_cpu(buf.lo_val); | 761 | *val = le32_to_cpup((__le32 *)&value); |
790 | else | 762 | else |
791 | *val = (u64)le32_to_cpu(buf.hi_val) << 32 | | 763 | *val = le64_to_cpu(value); |
792 | le32_to_cpu(buf.lo_val); | ||
793 | 764 | ||
794 | return 0; | 765 | return 0; |
795 | } | 766 | } |
@@ -864,9 +835,9 @@ static ssize_t protocol_version_show(struct device *dev, | |||
864 | { | 835 | { |
865 | struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); | 836 | struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); |
866 | 837 | ||
867 | return sprintf(buf, "%d.%d\n", | 838 | return sprintf(buf, "%lu.%lu\n", |
868 | PROTOCOL_REV_MAJOR(scpi_info->protocol_version), | 839 | FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version), |
869 | PROTOCOL_REV_MINOR(scpi_info->protocol_version)); | 840 | FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version)); |
870 | } | 841 | } |
871 | static DEVICE_ATTR_RO(protocol_version); | 842 | static DEVICE_ATTR_RO(protocol_version); |
872 | 843 | ||
@@ -875,10 +846,10 @@ static ssize_t firmware_version_show(struct device *dev, | |||
875 | { | 846 | { |
876 | struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); | 847 | struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); |
877 | 848 | ||
878 | return sprintf(buf, "%d.%d.%d\n", | 849 | return sprintf(buf, "%lu.%lu.%lu\n", |
879 | FW_REV_MAJOR(scpi_info->firmware_version), | 850 | FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version), |
880 | FW_REV_MINOR(scpi_info->firmware_version), | 851 | FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version), |
881 | FW_REV_PATCH(scpi_info->firmware_version)); | 852 | FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version)); |
882 | } | 853 | } |
883 | static DEVICE_ATTR_RO(firmware_version); | 854 | static DEVICE_ATTR_RO(firmware_version); |
884 | 855 | ||
@@ -889,37 +860,26 @@ static struct attribute *versions_attrs[] = { | |||
889 | }; | 860 | }; |
890 | ATTRIBUTE_GROUPS(versions); | 861 | ATTRIBUTE_GROUPS(versions); |
891 | 862 | ||
892 | static void | 863 | static void scpi_free_channels(void *data) |
893 | scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count) | ||
894 | { | 864 | { |
865 | struct scpi_drvinfo *info = data; | ||
895 | int i; | 866 | int i; |
896 | 867 | ||
897 | for (i = 0; i < count && pchan->chan; i++, pchan++) { | 868 | for (i = 0; i < info->num_chans; i++) |
898 | mbox_free_channel(pchan->chan); | 869 | mbox_free_channel(info->channels[i].chan); |
899 | devm_kfree(dev, pchan->xfers); | ||
900 | devm_iounmap(dev, pchan->rx_payload); | ||
901 | } | ||
902 | } | 870 | } |
903 | 871 | ||
904 | static int scpi_remove(struct platform_device *pdev) | 872 | static int scpi_remove(struct platform_device *pdev) |
905 | { | 873 | { |
906 | int i; | 874 | int i; |
907 | struct device *dev = &pdev->dev; | ||
908 | struct scpi_drvinfo *info = platform_get_drvdata(pdev); | 875 | struct scpi_drvinfo *info = platform_get_drvdata(pdev); |
909 | 876 | ||
910 | scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */ | 877 | scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */ |
911 | 878 | ||
912 | of_platform_depopulate(dev); | ||
913 | sysfs_remove_groups(&dev->kobj, versions_groups); | ||
914 | scpi_free_channels(dev, info->channels, info->num_chans); | ||
915 | platform_set_drvdata(pdev, NULL); | ||
916 | |||
917 | for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) { | 879 | for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) { |
918 | kfree(info->dvfs[i]->opps); | 880 | kfree(info->dvfs[i]->opps); |
919 | kfree(info->dvfs[i]); | 881 | kfree(info->dvfs[i]); |
920 | } | 882 | } |
921 | devm_kfree(dev, info->channels); | ||
922 | devm_kfree(dev, info); | ||
923 | 883 | ||
924 | return 0; | 884 | return 0; |
925 | } | 885 | } |
@@ -952,7 +912,6 @@ static int scpi_probe(struct platform_device *pdev) | |||
952 | { | 912 | { |
953 | int count, idx, ret; | 913 | int count, idx, ret; |
954 | struct resource res; | 914 | struct resource res; |
955 | struct scpi_chan *scpi_chan; | ||
956 | struct device *dev = &pdev->dev; | 915 | struct device *dev = &pdev->dev; |
957 | struct device_node *np = dev->of_node; | 916 | struct device_node *np = dev->of_node; |
958 | 917 | ||
@@ -969,13 +928,19 @@ static int scpi_probe(struct platform_device *pdev) | |||
969 | return -ENODEV; | 928 | return -ENODEV; |
970 | } | 929 | } |
971 | 930 | ||
972 | scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL); | 931 | scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan), |
973 | if (!scpi_chan) | 932 | GFP_KERNEL); |
933 | if (!scpi_info->channels) | ||
974 | return -ENOMEM; | 934 | return -ENOMEM; |
975 | 935 | ||
976 | for (idx = 0; idx < count; idx++) { | 936 | ret = devm_add_action(dev, scpi_free_channels, scpi_info); |
937 | if (ret) | ||
938 | return ret; | ||
939 | |||
940 | for (; scpi_info->num_chans < count; scpi_info->num_chans++) { | ||
977 | resource_size_t size; | 941 | resource_size_t size; |
978 | struct scpi_chan *pchan = scpi_chan + idx; | 942 | int idx = scpi_info->num_chans; |
943 | struct scpi_chan *pchan = scpi_info->channels + idx; | ||
979 | struct mbox_client *cl = &pchan->cl; | 944 | struct mbox_client *cl = &pchan->cl; |
980 | struct device_node *shmem = of_parse_phandle(np, "shmem", idx); | 945 | struct device_node *shmem = of_parse_phandle(np, "shmem", idx); |
981 | 946 | ||
@@ -983,15 +948,14 @@ static int scpi_probe(struct platform_device *pdev) | |||
983 | of_node_put(shmem); | 948 | of_node_put(shmem); |
984 | if (ret) { | 949 | if (ret) { |
985 | dev_err(dev, "failed to get SCPI payload mem resource\n"); | 950 | dev_err(dev, "failed to get SCPI payload mem resource\n"); |
986 | goto err; | 951 | return ret; |
987 | } | 952 | } |
988 | 953 | ||
989 | size = resource_size(&res); | 954 | size = resource_size(&res); |
990 | pchan->rx_payload = devm_ioremap(dev, res.start, size); | 955 | pchan->rx_payload = devm_ioremap(dev, res.start, size); |
991 | if (!pchan->rx_payload) { | 956 | if (!pchan->rx_payload) { |
992 | dev_err(dev, "failed to ioremap SCPI payload\n"); | 957 | dev_err(dev, "failed to ioremap SCPI payload\n"); |
993 | ret = -EADDRNOTAVAIL; | 958 | return -EADDRNOTAVAIL; |
994 | goto err; | ||
995 | } | 959 | } |
996 | pchan->tx_payload = pchan->rx_payload + (size >> 1); | 960 | pchan->tx_payload = pchan->rx_payload + (size >> 1); |
997 | 961 | ||
@@ -1017,14 +981,9 @@ static int scpi_probe(struct platform_device *pdev) | |||
1017 | dev_err(dev, "failed to get channel%d err %d\n", | 981 | dev_err(dev, "failed to get channel%d err %d\n", |
1018 | idx, ret); | 982 | idx, ret); |
1019 | } | 983 | } |
1020 | err: | ||
1021 | scpi_free_channels(dev, scpi_chan, idx); | ||
1022 | scpi_info = NULL; | ||
1023 | return ret; | 984 | return ret; |
1024 | } | 985 | } |
1025 | 986 | ||
1026 | scpi_info->channels = scpi_chan; | ||
1027 | scpi_info->num_chans = count; | ||
1028 | scpi_info->commands = scpi_std_commands; | 987 | scpi_info->commands = scpi_std_commands; |
1029 | 988 | ||
1030 | platform_set_drvdata(pdev, scpi_info); | 989 | platform_set_drvdata(pdev, scpi_info); |
@@ -1043,23 +1002,31 @@ err: | |||
1043 | ret = scpi_init_versions(scpi_info); | 1002 | ret = scpi_init_versions(scpi_info); |
1044 | if (ret) { | 1003 | if (ret) { |
1045 | dev_err(dev, "incorrect or no SCP firmware found\n"); | 1004 | dev_err(dev, "incorrect or no SCP firmware found\n"); |
1046 | scpi_remove(pdev); | ||
1047 | return ret; | 1005 | return ret; |
1048 | } | 1006 | } |
1049 | 1007 | ||
1050 | dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n", | 1008 | if (scpi_info->is_legacy && !scpi_info->protocol_version && |
1051 | PROTOCOL_REV_MAJOR(scpi_info->protocol_version), | 1009 | !scpi_info->firmware_version) |
1052 | PROTOCOL_REV_MINOR(scpi_info->protocol_version), | 1010 | dev_info(dev, "SCP Protocol legacy pre-1.0 firmware\n"); |
1053 | FW_REV_MAJOR(scpi_info->firmware_version), | 1011 | else |
1054 | FW_REV_MINOR(scpi_info->firmware_version), | 1012 | dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n", |
1055 | FW_REV_PATCH(scpi_info->firmware_version)); | 1013 | FIELD_GET(PROTO_REV_MAJOR_MASK, |
1014 | scpi_info->protocol_version), | ||
1015 | FIELD_GET(PROTO_REV_MINOR_MASK, | ||
1016 | scpi_info->protocol_version), | ||
1017 | FIELD_GET(FW_REV_MAJOR_MASK, | ||
1018 | scpi_info->firmware_version), | ||
1019 | FIELD_GET(FW_REV_MINOR_MASK, | ||
1020 | scpi_info->firmware_version), | ||
1021 | FIELD_GET(FW_REV_PATCH_MASK, | ||
1022 | scpi_info->firmware_version)); | ||
1056 | scpi_info->scpi_ops = &scpi_ops; | 1023 | scpi_info->scpi_ops = &scpi_ops; |
1057 | 1024 | ||
1058 | ret = sysfs_create_groups(&dev->kobj, versions_groups); | 1025 | ret = devm_device_add_groups(dev, versions_groups); |
1059 | if (ret) | 1026 | if (ret) |
1060 | dev_err(dev, "unable to create sysfs version group\n"); | 1027 | dev_err(dev, "unable to create sysfs version group\n"); |
1061 | 1028 | ||
1062 | return of_platform_populate(dev->of_node, NULL, NULL, dev); | 1029 | return devm_of_platform_populate(dev); |
1063 | } | 1030 | } |
1064 | 1031 | ||
1065 | static const struct of_device_id scpi_of_match[] = { | 1032 | static const struct of_device_id scpi_of_match[] = { |
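The arm_scpi.c changes above follow two recurring kernel idioms: payload memory obtained from devm_ioremap() is typed __iomem and accessed through ioread32()/iowrite32() rather than dereferenced and byte-swapped with le32_to_cpu(), and hand-rolled shift/mask macros give way to FIELD_GET()/FIELD_PREP() from <linux/bitfield.h>. A minimal sketch of both patterns; the layout and mask below are illustrative, not the driver's actual ABI:

#include <linux/bitfield.h>
#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical shared-memory layout; the real one is fixed by the SCP firmware. */
struct demo_shmem {
	__le32 command;
	__le32 status;
	u8 payload[64];
};

#define DEMO_TOKEN_MASK	GENMASK(23, 16)	/* illustrative field position */

static u32 demo_read_status(struct demo_shmem __iomem *mem)
{
	/* ioread32() is a little-endian MMIO read, so no le32_to_cpu() needed */
	return ioread32(&mem->status);
}

static void demo_post_command(struct demo_shmem __iomem *mem, u32 cmd, u8 token)
{
	cmd |= FIELD_PREP(DEMO_TOKEN_MASK, token);	/* replaces open-coded (token << 16) */
	iowrite32(cmd, &mem->command);
}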
diff --git a/drivers/firmware/meson/meson_sm.c b/drivers/firmware/meson/meson_sm.c index ff204421117b..0ec2ca87318c 100644 --- a/drivers/firmware/meson/meson_sm.c +++ b/drivers/firmware/meson/meson_sm.c | |||
@@ -17,8 +17,10 @@ | |||
17 | #include <linux/arm-smccc.h> | 17 | #include <linux/arm-smccc.h> |
18 | #include <linux/bug.h> | 18 | #include <linux/bug.h> |
19 | #include <linux/io.h> | 19 | #include <linux/io.h> |
20 | #include <linux/module.h> | ||
20 | #include <linux/of.h> | 21 | #include <linux/of.h> |
21 | #include <linux/of_device.h> | 22 | #include <linux/of_device.h> |
23 | #include <linux/platform_device.h> | ||
22 | #include <linux/printk.h> | 24 | #include <linux/printk.h> |
23 | #include <linux/types.h> | 25 | #include <linux/types.h> |
24 | #include <linux/sizes.h> | 26 | #include <linux/sizes.h> |
@@ -217,21 +219,11 @@ static const struct of_device_id meson_sm_ids[] = { | |||
217 | { /* sentinel */ }, | 219 | { /* sentinel */ }, |
218 | }; | 220 | }; |
219 | 221 | ||
220 | int __init meson_sm_init(void) | 222 | static int __init meson_sm_probe(struct platform_device *pdev) |
221 | { | 223 | { |
222 | const struct meson_sm_chip *chip; | 224 | const struct meson_sm_chip *chip; |
223 | const struct of_device_id *matched_np; | ||
224 | struct device_node *np; | ||
225 | 225 | ||
226 | np = of_find_matching_node_and_match(NULL, meson_sm_ids, &matched_np); | 226 | chip = of_match_device(meson_sm_ids, &pdev->dev)->data; |
227 | if (!np) | ||
228 | return -ENODEV; | ||
229 | |||
230 | chip = matched_np->data; | ||
231 | if (!chip) { | ||
232 | pr_err("unable to setup secure-monitor data\n"); | ||
233 | goto out; | ||
234 | } | ||
235 | 227 | ||
236 | if (chip->cmd_shmem_in_base) { | 228 | if (chip->cmd_shmem_in_base) { |
237 | fw.sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base, | 229 | fw.sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base, |
@@ -257,4 +249,11 @@ out_in_base: | |||
257 | out: | 249 | out: |
258 | return -EINVAL; | 250 | return -EINVAL; |
259 | } | 251 | } |
260 | device_initcall(meson_sm_init); | 252 | |
253 | static struct platform_driver meson_sm_driver = { | ||
254 | .driver = { | ||
255 | .name = "meson-sm", | ||
256 | .of_match_table = of_match_ptr(meson_sm_ids), | ||
257 | }, | ||
258 | }; | ||
259 | module_platform_driver_probe(meson_sm_driver, meson_sm_probe); | ||
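The meson_sm rework is the standard conversion from a bare initcall that walks the device tree itself to a platform driver, which lets the driver core do the compatible matching and supply a struct device. A stripped-down sketch of the shape; the driver name and compatible string here are placeholders:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

static const struct of_device_id demo_fw_ids[] = {
	{ .compatible = "vendor,demo-fw" },
	{ /* sentinel */ },
};

static int __init demo_fw_probe(struct platform_device *pdev)
{
	/* the core only calls probe after a successful match */
	const struct of_device_id *id = of_match_device(demo_fw_ids, &pdev->dev);

	return id ? 0 : -ENODEV;
}

static struct platform_driver demo_fw_driver = {
	.driver = {
		.name = "demo-fw",
		.of_match_table = of_match_ptr(demo_fw_ids),
	},
};
/* probe stays in __init text; the device can never be re-bound later */
module_platform_driver_probe(demo_fw_driver, demo_fw_probe);

MODULE_LICENSE("GPL v2");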
diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c index a7f461f2e650..14a456afa379 100644 --- a/drivers/firmware/tegra/bpmp.c +++ b/drivers/firmware/tegra/bpmp.c | |||
@@ -70,57 +70,20 @@ void tegra_bpmp_put(struct tegra_bpmp *bpmp) | |||
70 | } | 70 | } |
71 | EXPORT_SYMBOL_GPL(tegra_bpmp_put); | 71 | EXPORT_SYMBOL_GPL(tegra_bpmp_put); |
72 | 72 | ||
73 | static int tegra_bpmp_channel_get_index(struct tegra_bpmp_channel *channel) | ||
74 | { | ||
75 | return channel - channel->bpmp->channels; | ||
76 | } | ||
77 | |||
78 | static int | 73 | static int |
79 | tegra_bpmp_channel_get_thread_index(struct tegra_bpmp_channel *channel) | 74 | tegra_bpmp_channel_get_thread_index(struct tegra_bpmp_channel *channel) |
80 | { | 75 | { |
81 | struct tegra_bpmp *bpmp = channel->bpmp; | 76 | struct tegra_bpmp *bpmp = channel->bpmp; |
82 | unsigned int offset, count; | 77 | unsigned int count; |
83 | int index; | 78 | int index; |
84 | 79 | ||
85 | offset = bpmp->soc->channels.thread.offset; | ||
86 | count = bpmp->soc->channels.thread.count; | 80 | count = bpmp->soc->channels.thread.count; |
87 | 81 | ||
88 | index = tegra_bpmp_channel_get_index(channel); | 82 | index = channel - channel->bpmp->threaded_channels; |
89 | if (index < 0) | 83 | if (index < 0 || index >= count) |
90 | return index; | ||
91 | |||
92 | if (index < offset || index >= offset + count) | ||
93 | return -EINVAL; | 84 | return -EINVAL; |
94 | 85 | ||
95 | return index - offset; | 86 | return index; |
96 | } | ||
97 | |||
98 | static struct tegra_bpmp_channel * | ||
99 | tegra_bpmp_channel_get_thread(struct tegra_bpmp *bpmp, unsigned int index) | ||
100 | { | ||
101 | unsigned int offset = bpmp->soc->channels.thread.offset; | ||
102 | unsigned int count = bpmp->soc->channels.thread.count; | ||
103 | |||
104 | if (index >= count) | ||
105 | return NULL; | ||
106 | |||
107 | return &bpmp->channels[offset + index]; | ||
108 | } | ||
109 | |||
110 | static struct tegra_bpmp_channel * | ||
111 | tegra_bpmp_channel_get_tx(struct tegra_bpmp *bpmp) | ||
112 | { | ||
113 | unsigned int offset = bpmp->soc->channels.cpu_tx.offset; | ||
114 | |||
115 | return &bpmp->channels[offset + smp_processor_id()]; | ||
116 | } | ||
117 | |||
118 | static struct tegra_bpmp_channel * | ||
119 | tegra_bpmp_channel_get_rx(struct tegra_bpmp *bpmp) | ||
120 | { | ||
121 | unsigned int offset = bpmp->soc->channels.cpu_rx.offset; | ||
122 | |||
123 | return &bpmp->channels[offset]; | ||
124 | } | 87 | } |
125 | 88 | ||
126 | static bool tegra_bpmp_message_valid(const struct tegra_bpmp_message *msg) | 89 | static bool tegra_bpmp_message_valid(const struct tegra_bpmp_message *msg) |
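The removed tegra_bpmp_channel_get_index() helper collapses into a pointer subtraction: subtracting an array's base from a pointer to one of its elements yields the element index directly. A self-contained illustration of the bounds-checked form now used (the names are made up):

#include <stddef.h>

struct channel { int token; };

/* Return the index of @ch within @base[0..count), or -1 if out of range. */
static int channel_index(const struct channel *base, size_t count,
			 const struct channel *ch)
{
	ptrdiff_t idx = ch - base;	/* counts elements, not bytes */

	if (idx < 0 || (size_t)idx >= count)
		return -1;
	return (int)idx;
}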
@@ -271,11 +234,7 @@ tegra_bpmp_write_threaded(struct tegra_bpmp *bpmp, unsigned int mrq, | |||
271 | goto unlock; | 234 | goto unlock; |
272 | } | 235 | } |
273 | 236 | ||
274 | channel = tegra_bpmp_channel_get_thread(bpmp, index); | 237 | channel = &bpmp->threaded_channels[index]; |
275 | if (!channel) { | ||
276 | err = -EINVAL; | ||
277 | goto unlock; | ||
278 | } | ||
279 | 238 | ||
280 | if (!tegra_bpmp_master_free(channel)) { | 239 | if (!tegra_bpmp_master_free(channel)) { |
281 | err = -EBUSY; | 240 | err = -EBUSY; |
@@ -328,12 +287,18 @@ int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp, | |||
328 | if (!tegra_bpmp_message_valid(msg)) | 287 | if (!tegra_bpmp_message_valid(msg)) |
329 | return -EINVAL; | 288 | return -EINVAL; |
330 | 289 | ||
331 | channel = tegra_bpmp_channel_get_tx(bpmp); | 290 | channel = bpmp->tx_channel; |
291 | |||
292 | spin_lock(&bpmp->atomic_tx_lock); | ||
332 | 293 | ||
333 | err = tegra_bpmp_channel_write(channel, msg->mrq, MSG_ACK, | 294 | err = tegra_bpmp_channel_write(channel, msg->mrq, MSG_ACK, |
334 | msg->tx.data, msg->tx.size); | 295 | msg->tx.data, msg->tx.size); |
335 | if (err < 0) | 296 | if (err < 0) { |
297 | spin_unlock(&bpmp->atomic_tx_lock); | ||
336 | return err; | 298 | return err; |
299 | } | ||
300 | |||
301 | spin_unlock(&bpmp->atomic_tx_lock); | ||
337 | 302 | ||
338 | err = mbox_send_message(bpmp->mbox.channel, NULL); | 303 | err = mbox_send_message(bpmp->mbox.channel, NULL); |
339 | if (err < 0) | 304 | if (err < 0) |
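With a single shared TX channel replacing the per-CPU ones, tegra_bpmp_transfer_atomic() has to serialize concurrent writers, hence the new atomic_tx_lock taken around the channel write. A minimal sketch of the locking shape; demo_channel_write() is a stub standing in for the real write helper:

#include <linux/spinlock.h>

struct demo_ctx {
	spinlock_t tx_lock;	/* protects the shared TX channel */
};

static int demo_channel_write(const void *data, size_t len)
{
	return 0;	/* stand-in for tegra_bpmp_channel_write() */
}

static int demo_transfer_atomic(struct demo_ctx *ctx, const void *data,
				size_t len)
{
	int err;

	/* plain spin_lock() suffices when the lock is never taken from IRQ context */
	spin_lock(&ctx->tx_lock);
	err = demo_channel_write(data, len);
	spin_unlock(&ctx->tx_lock);

	return err;
}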
@@ -607,7 +572,7 @@ static void tegra_bpmp_handle_rx(struct mbox_client *client, void *data) | |||
607 | unsigned int i, count; | 572 | unsigned int i, count; |
608 | unsigned long *busy; | 573 | unsigned long *busy; |
609 | 574 | ||
610 | channel = tegra_bpmp_channel_get_rx(bpmp); | 575 | channel = bpmp->rx_channel; |
611 | count = bpmp->soc->channels.thread.count; | 576 | count = bpmp->soc->channels.thread.count; |
612 | busy = bpmp->threaded.busy; | 577 | busy = bpmp->threaded.busy; |
613 | 578 | ||
@@ -619,9 +584,7 @@ static void tegra_bpmp_handle_rx(struct mbox_client *client, void *data) | |||
619 | for_each_set_bit(i, busy, count) { | 584 | for_each_set_bit(i, busy, count) { |
620 | struct tegra_bpmp_channel *channel; | 585 | struct tegra_bpmp_channel *channel; |
621 | 586 | ||
622 | channel = tegra_bpmp_channel_get_thread(bpmp, i); | 587 | channel = &bpmp->threaded_channels[i]; |
623 | if (!channel) | ||
624 | continue; | ||
625 | 588 | ||
626 | if (tegra_bpmp_master_acked(channel)) { | 589 | if (tegra_bpmp_master_acked(channel)) { |
627 | tegra_bpmp_channel_signal(channel); | 590 | tegra_bpmp_channel_signal(channel); |
@@ -698,7 +661,6 @@ static void tegra_bpmp_channel_cleanup(struct tegra_bpmp_channel *channel) | |||
698 | 661 | ||
699 | static int tegra_bpmp_probe(struct platform_device *pdev) | 662 | static int tegra_bpmp_probe(struct platform_device *pdev) |
700 | { | 663 | { |
701 | struct tegra_bpmp_channel *channel; | ||
702 | struct tegra_bpmp *bpmp; | 664 | struct tegra_bpmp *bpmp; |
703 | unsigned int i; | 665 | unsigned int i; |
704 | char tag[32]; | 666 | char tag[32]; |
@@ -732,7 +694,7 @@ static int tegra_bpmp_probe(struct platform_device *pdev) | |||
732 | } | 694 | } |
733 | 695 | ||
734 | bpmp->rx.virt = gen_pool_dma_alloc(bpmp->rx.pool, 4096, &bpmp->rx.phys); | 696 | bpmp->rx.virt = gen_pool_dma_alloc(bpmp->rx.pool, 4096, &bpmp->rx.phys); |
735 | if (!bpmp->rx.pool) { | 697 | if (!bpmp->rx.virt) { |
736 | dev_err(&pdev->dev, "failed to allocate from RX pool\n"); | 698 | dev_err(&pdev->dev, "failed to allocate from RX pool\n"); |
737 | err = -ENOMEM; | 699 | err = -ENOMEM; |
738 | goto free_tx; | 700 | goto free_tx; |
@@ -758,24 +720,45 @@ static int tegra_bpmp_probe(struct platform_device *pdev) | |||
758 | goto free_rx; | 720 | goto free_rx; |
759 | } | 721 | } |
760 | 722 | ||
761 | bpmp->num_channels = bpmp->soc->channels.cpu_tx.count + | 723 | spin_lock_init(&bpmp->atomic_tx_lock); |
762 | bpmp->soc->channels.thread.count + | 724 | bpmp->tx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->tx_channel), |
763 | bpmp->soc->channels.cpu_rx.count; | 725 | GFP_KERNEL); |
726 | if (!bpmp->tx_channel) { | ||
727 | err = -ENOMEM; | ||
728 | goto free_rx; | ||
729 | } | ||
764 | 730 | ||
765 | bpmp->channels = devm_kcalloc(&pdev->dev, bpmp->num_channels, | 731 | bpmp->rx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->rx_channel), |
766 | sizeof(*channel), GFP_KERNEL); | 732 | GFP_KERNEL); |
767 | if (!bpmp->channels) { | 733 | if (!bpmp->rx_channel) { |
768 | err = -ENOMEM; | 734 | err = -ENOMEM; |
769 | goto free_rx; | 735 | goto free_rx; |
770 | } | 736 | } |
771 | 737 | ||
772 | /* message channel initialization */ | 738 | bpmp->threaded_channels = devm_kcalloc(&pdev->dev, bpmp->threaded.count, |
773 | for (i = 0; i < bpmp->num_channels; i++) { | 739 | sizeof(*bpmp->threaded_channels), |
774 | struct tegra_bpmp_channel *channel = &bpmp->channels[i]; | 740 | GFP_KERNEL); |
741 | if (!bpmp->threaded_channels) { | ||
742 | err = -ENOMEM; | ||
743 | goto free_rx; | ||
744 | } | ||
775 | 745 | ||
776 | err = tegra_bpmp_channel_init(channel, bpmp, i); | 746 | err = tegra_bpmp_channel_init(bpmp->tx_channel, bpmp, |
747 | bpmp->soc->channels.cpu_tx.offset); | ||
748 | if (err < 0) | ||
749 | goto free_rx; | ||
750 | |||
751 | err = tegra_bpmp_channel_init(bpmp->rx_channel, bpmp, | ||
752 | bpmp->soc->channels.cpu_rx.offset); | ||
753 | if (err < 0) | ||
754 | goto cleanup_tx_channel; | ||
755 | |||
756 | for (i = 0; i < bpmp->threaded.count; i++) { | ||
757 | err = tegra_bpmp_channel_init( | ||
758 | &bpmp->threaded_channels[i], bpmp, | ||
759 | bpmp->soc->channels.thread.offset + i); | ||
777 | if (err < 0) | 760 | if (err < 0) |
778 | goto cleanup_channels; | 761 | goto cleanup_threaded_channels; |
779 | } | 762 | } |
780 | 763 | ||
781 | /* mbox registration */ | 764 | /* mbox registration */ |
@@ -788,15 +771,14 @@ static int tegra_bpmp_probe(struct platform_device *pdev) | |||
788 | if (IS_ERR(bpmp->mbox.channel)) { | 771 | if (IS_ERR(bpmp->mbox.channel)) { |
789 | err = PTR_ERR(bpmp->mbox.channel); | 772 | err = PTR_ERR(bpmp->mbox.channel); |
790 | dev_err(&pdev->dev, "failed to get HSP mailbox: %d\n", err); | 773 | dev_err(&pdev->dev, "failed to get HSP mailbox: %d\n", err); |
791 | goto cleanup_channels; | 774 | goto cleanup_threaded_channels; |
792 | } | 775 | } |
793 | 776 | ||
794 | /* reset message channels */ | 777 | /* reset message channels */ |
795 | for (i = 0; i < bpmp->num_channels; i++) { | 778 | tegra_bpmp_channel_reset(bpmp->tx_channel); |
796 | struct tegra_bpmp_channel *channel = &bpmp->channels[i]; | 779 | tegra_bpmp_channel_reset(bpmp->rx_channel); |
797 | 780 | for (i = 0; i < bpmp->threaded.count; i++) | |
798 | tegra_bpmp_channel_reset(channel); | 781 | tegra_bpmp_channel_reset(&bpmp->threaded_channels[i]); |
799 | } | ||
800 | 782 | ||
801 | err = tegra_bpmp_request_mrq(bpmp, MRQ_PING, | 783 | err = tegra_bpmp_request_mrq(bpmp, MRQ_PING, |
802 | tegra_bpmp_mrq_handle_ping, bpmp); | 784 | tegra_bpmp_mrq_handle_ping, bpmp); |
@@ -845,9 +827,15 @@ free_mrq: | |||
845 | tegra_bpmp_free_mrq(bpmp, MRQ_PING, bpmp); | 827 | tegra_bpmp_free_mrq(bpmp, MRQ_PING, bpmp); |
846 | free_mbox: | 828 | free_mbox: |
847 | mbox_free_channel(bpmp->mbox.channel); | 829 | mbox_free_channel(bpmp->mbox.channel); |
848 | cleanup_channels: | 830 | cleanup_threaded_channels: |
849 | while (i--) | 831 | for (i = 0; i < bpmp->threaded.count; i++) { |
850 | tegra_bpmp_channel_cleanup(&bpmp->channels[i]); | 832 | if (bpmp->threaded_channels[i].bpmp) |
833 | tegra_bpmp_channel_cleanup(&bpmp->threaded_channels[i]); | ||
834 | } | ||
835 | |||
836 | tegra_bpmp_channel_cleanup(bpmp->rx_channel); | ||
837 | cleanup_tx_channel: | ||
838 | tegra_bpmp_channel_cleanup(bpmp->tx_channel); | ||
851 | free_rx: | 839 | free_rx: |
852 | gen_pool_free(bpmp->rx.pool, (unsigned long)bpmp->rx.virt, 4096); | 840 | gen_pool_free(bpmp->rx.pool, (unsigned long)bpmp->rx.virt, 4096); |
853 | free_tx: | 841 | free_tx: |
@@ -858,18 +846,16 @@ free_tx: | |||
858 | static const struct tegra_bpmp_soc tegra186_soc = { | 846 | static const struct tegra_bpmp_soc tegra186_soc = { |
859 | .channels = { | 847 | .channels = { |
860 | .cpu_tx = { | 848 | .cpu_tx = { |
861 | .offset = 0, | 849 | .offset = 3, |
862 | .count = 6, | ||
863 | .timeout = 60 * USEC_PER_SEC, | 850 | .timeout = 60 * USEC_PER_SEC, |
864 | }, | 851 | }, |
865 | .thread = { | 852 | .thread = { |
866 | .offset = 6, | 853 | .offset = 0, |
867 | .count = 7, | 854 | .count = 3, |
868 | .timeout = 600 * USEC_PER_SEC, | 855 | .timeout = 600 * USEC_PER_SEC, |
869 | }, | 856 | }, |
870 | .cpu_rx = { | 857 | .cpu_rx = { |
871 | .offset = 13, | 858 | .offset = 13, |
872 | .count = 1, | ||
873 | .timeout = 0, | 859 | .timeout = 0, |
874 | }, | 860 | }, |
875 | }, | 861 | }, |
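The reworked tegra_bpmp_probe() is also a textbook example of the kernel's goto unwind: each fallible step jumps to a label that tears down only what was initialized before it, with labels listed in reverse order of setup (cleanup_tx_channel, cleanup_threaded_channels, free_rx, free_tx). A compact sketch of the idiom with made-up step names:

/* Stubs standing in for real init/teardown steps. */
static int step_a_init(void) { return 0; }
static void step_a_exit(void) { }
static int step_b_init(void) { return 0; }
static void step_b_exit(void) { }
static int step_c_init(void) { return 0; }

static int demo_probe(void)
{
	int err;

	err = step_a_init();
	if (err)
		return err;	/* nothing to unwind yet */

	err = step_b_init();
	if (err)
		goto cleanup_a;

	err = step_c_init();
	if (err)
		goto cleanup_b;

	return 0;

cleanup_b:
	step_b_exit();
cleanup_a:
	step_a_exit();
	return err;
}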
diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig index ef23553ff5cb..033e57366d56 100644 --- a/drivers/hwmon/Kconfig +++ b/drivers/hwmon/Kconfig | |||
@@ -317,6 +317,18 @@ config SENSORS_APPLESMC | |||
317 | Say Y here if you have an applicable laptop and want to experience | 317 | Say Y here if you have an applicable laptop and want to experience |
318 | the awesome power of applesmc. | 318 | the awesome power of applesmc. |
319 | 319 | ||
320 | config SENSORS_ARM_SCMI | ||
321 | tristate "ARM SCMI Sensors" | ||
322 | depends on ARM_SCMI_PROTOCOL | ||
323 | depends on THERMAL || !THERMAL_OF | ||
324 | help | ||
325 | This driver provides support for temperature, voltage, current | ||
326 | and power sensors available on SCMI based platforms. The actual | ||
327 | number and type of sensors exported depend on the platform. | ||
328 | |||
329 | This driver can also be built as a module. If so, the module | ||
330 | will be called scmi-hwmon. | ||
331 | |||
320 | config SENSORS_ARM_SCPI | 332 | config SENSORS_ARM_SCPI |
321 | tristate "ARM SCPI Sensors" | 333 | tristate "ARM SCPI Sensors" |
322 | depends on ARM_SCPI_PROTOCOL | 334 | depends on ARM_SCPI_PROTOCOL |
diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile index f814b4ace138..e7d52a36e6c4 100644 --- a/drivers/hwmon/Makefile +++ b/drivers/hwmon/Makefile | |||
@@ -46,6 +46,7 @@ obj-$(CONFIG_SENSORS_ADT7462) += adt7462.o | |||
46 | obj-$(CONFIG_SENSORS_ADT7470) += adt7470.o | 46 | obj-$(CONFIG_SENSORS_ADT7470) += adt7470.o |
47 | obj-$(CONFIG_SENSORS_ADT7475) += adt7475.o | 47 | obj-$(CONFIG_SENSORS_ADT7475) += adt7475.o |
48 | obj-$(CONFIG_SENSORS_APPLESMC) += applesmc.o | 48 | obj-$(CONFIG_SENSORS_APPLESMC) += applesmc.o |
49 | obj-$(CONFIG_SENSORS_ARM_SCMI) += scmi-hwmon.o | ||
49 | obj-$(CONFIG_SENSORS_ARM_SCPI) += scpi-hwmon.o | 50 | obj-$(CONFIG_SENSORS_ARM_SCPI) += scpi-hwmon.o |
50 | obj-$(CONFIG_SENSORS_ASC7621) += asc7621.o | 51 | obj-$(CONFIG_SENSORS_ASC7621) += asc7621.o |
51 | obj-$(CONFIG_SENSORS_ASPEED) += aspeed-pwm-tacho.o | 52 | obj-$(CONFIG_SENSORS_ASPEED) += aspeed-pwm-tacho.o |
diff --git a/drivers/hwmon/scmi-hwmon.c b/drivers/hwmon/scmi-hwmon.c new file mode 100644 index 000000000000..363bf56eb0f2 --- /dev/null +++ b/drivers/hwmon/scmi-hwmon.c | |||
@@ -0,0 +1,225 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * System Control and Management Interface(SCMI) based hwmon sensor driver | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | * Sudeep Holla <sudeep.holla@arm.com> | ||
7 | */ | ||
8 | |||
9 | #include <linux/hwmon.h> | ||
10 | #include <linux/module.h> | ||
11 | #include <linux/scmi_protocol.h> | ||
12 | #include <linux/slab.h> | ||
13 | #include <linux/sysfs.h> | ||
14 | #include <linux/thermal.h> | ||
15 | |||
16 | struct scmi_sensors { | ||
17 | const struct scmi_handle *handle; | ||
18 | const struct scmi_sensor_info **info[hwmon_max]; | ||
19 | }; | ||
20 | |||
21 | static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type, | ||
22 | u32 attr, int channel, long *val) | ||
23 | { | ||
24 | int ret; | ||
25 | u64 value; | ||
26 | const struct scmi_sensor_info *sensor; | ||
27 | struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev); | ||
28 | const struct scmi_handle *h = scmi_sensors->handle; | ||
29 | |||
30 | sensor = *(scmi_sensors->info[type] + channel); | ||
31 | ret = h->sensor_ops->reading_get(h, sensor->id, false, &value); | ||
32 | if (!ret) | ||
33 | *val = value; | ||
34 | |||
35 | return ret; | ||
36 | } | ||
37 | |||
38 | static int | ||
39 | scmi_hwmon_read_string(struct device *dev, enum hwmon_sensor_types type, | ||
40 | u32 attr, int channel, const char **str) | ||
41 | { | ||
42 | const struct scmi_sensor_info *sensor; | ||
43 | struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev); | ||
44 | |||
45 | sensor = *(scmi_sensors->info[type] + channel); | ||
46 | *str = sensor->name; | ||
47 | |||
48 | return 0; | ||
49 | } | ||
50 | |||
51 | static umode_t | ||
52 | scmi_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type, | ||
53 | u32 attr, int channel) | ||
54 | { | ||
55 | const struct scmi_sensor_info *sensor; | ||
56 | const struct scmi_sensors *scmi_sensors = drvdata; | ||
57 | |||
58 | sensor = *(scmi_sensors->info[type] + channel); | ||
59 | if (sensor && sensor->name) | ||
60 | return S_IRUGO; | ||
61 | |||
62 | return 0; | ||
63 | } | ||
64 | |||
65 | static const struct hwmon_ops scmi_hwmon_ops = { | ||
66 | .is_visible = scmi_hwmon_is_visible, | ||
67 | .read = scmi_hwmon_read, | ||
68 | .read_string = scmi_hwmon_read_string, | ||
69 | }; | ||
70 | |||
71 | static struct hwmon_chip_info scmi_chip_info = { | ||
72 | .ops = &scmi_hwmon_ops, | ||
73 | .info = NULL, | ||
74 | }; | ||
75 | |||
76 | static int scmi_hwmon_add_chan_info(struct hwmon_channel_info *scmi_hwmon_chan, | ||
77 | struct device *dev, int num, | ||
78 | enum hwmon_sensor_types type, u32 config) | ||
79 | { | ||
80 | int i; | ||
81 | u32 *cfg = devm_kcalloc(dev, num + 1, sizeof(*cfg), GFP_KERNEL); | ||
82 | |||
83 | if (!cfg) | ||
84 | return -ENOMEM; | ||
85 | |||
86 | scmi_hwmon_chan->type = type; | ||
87 | scmi_hwmon_chan->config = cfg; | ||
88 | for (i = 0; i < num; i++, cfg++) | ||
89 | *cfg = config; | ||
90 | |||
91 | return 0; | ||
92 | } | ||
93 | |||
94 | static enum hwmon_sensor_types scmi_types[] = { | ||
95 | [TEMPERATURE_C] = hwmon_temp, | ||
96 | [VOLTAGE] = hwmon_in, | ||
97 | [CURRENT] = hwmon_curr, | ||
98 | [POWER] = hwmon_power, | ||
99 | [ENERGY] = hwmon_energy, | ||
100 | }; | ||
101 | |||
102 | static u32 hwmon_attributes[] = { | ||
103 | [hwmon_chip] = HWMON_C_REGISTER_TZ, | ||
104 | [hwmon_temp] = HWMON_T_INPUT | HWMON_T_LABEL, | ||
105 | [hwmon_in] = HWMON_I_INPUT | HWMON_I_LABEL, | ||
106 | [hwmon_curr] = HWMON_C_INPUT | HWMON_C_LABEL, | ||
107 | [hwmon_power] = HWMON_P_INPUT | HWMON_P_LABEL, | ||
108 | [hwmon_energy] = HWMON_E_INPUT | HWMON_E_LABEL, | ||
109 | }; | ||
110 | |||
111 | static int scmi_hwmon_probe(struct scmi_device *sdev) | ||
112 | { | ||
113 | int i, idx; | ||
114 | u16 nr_sensors; | ||
115 | enum hwmon_sensor_types type; | ||
116 | struct scmi_sensors *scmi_sensors; | ||
117 | const struct scmi_sensor_info *sensor; | ||
118 | int nr_count[hwmon_max] = {0}, nr_types = 0; | ||
119 | const struct hwmon_chip_info *chip_info; | ||
120 | struct device *hwdev, *dev = &sdev->dev; | ||
121 | struct hwmon_channel_info *scmi_hwmon_chan; | ||
122 | const struct hwmon_channel_info **ptr_scmi_ci; | ||
123 | const struct scmi_handle *handle = sdev->handle; | ||
124 | |||
125 | if (!handle || !handle->sensor_ops) | ||
126 | return -ENODEV; | ||
127 | |||
128 | nr_sensors = handle->sensor_ops->count_get(handle); | ||
129 | if (!nr_sensors) | ||
130 | return -EIO; | ||
131 | |||
132 | scmi_sensors = devm_kzalloc(dev, sizeof(*scmi_sensors), GFP_KERNEL); | ||
133 | if (!scmi_sensors) | ||
134 | return -ENOMEM; | ||
135 | |||
136 | scmi_sensors->handle = handle; | ||
137 | |||
138 | for (i = 0; i < nr_sensors; i++) { | ||
139 | sensor = handle->sensor_ops->info_get(handle, i); | ||
140 | if (!sensor) | ||
141 | return -EINVAL; | ||
142 | |||
143 | switch (sensor->type) { | ||
144 | case TEMPERATURE_C: | ||
145 | case VOLTAGE: | ||
146 | case CURRENT: | ||
147 | case POWER: | ||
148 | case ENERGY: | ||
149 | type = scmi_types[sensor->type]; | ||
150 | if (!nr_count[type]) | ||
151 | nr_types++; | ||
152 | nr_count[type]++; | ||
153 | break; | ||
154 | } | ||
155 | } | ||
156 | |||
157 | if (nr_count[hwmon_temp]) | ||
158 | nr_count[hwmon_chip]++, nr_types++; | ||
159 | |||
160 | scmi_hwmon_chan = devm_kcalloc(dev, nr_types, sizeof(*scmi_hwmon_chan), | ||
161 | GFP_KERNEL); | ||
162 | if (!scmi_hwmon_chan) | ||
163 | return -ENOMEM; | ||
164 | |||
165 | ptr_scmi_ci = devm_kcalloc(dev, nr_types + 1, sizeof(*ptr_scmi_ci), | ||
166 | GFP_KERNEL); | ||
167 | if (!ptr_scmi_ci) | ||
168 | return -ENOMEM; | ||
169 | |||
170 | scmi_chip_info.info = ptr_scmi_ci; | ||
171 | chip_info = &scmi_chip_info; | ||
172 | |||
173 | for (type = 0; type < hwmon_max && nr_count[type]; type++) { | ||
174 | scmi_hwmon_add_chan_info(scmi_hwmon_chan, dev, nr_count[type], | ||
175 | type, hwmon_attributes[type]); | ||
176 | *ptr_scmi_ci++ = scmi_hwmon_chan++; | ||
177 | |||
178 | scmi_sensors->info[type] = | ||
179 | devm_kcalloc(dev, nr_count[type], | ||
180 | sizeof(*scmi_sensors->info), GFP_KERNEL); | ||
181 | if (!scmi_sensors->info[type]) | ||
182 | return -ENOMEM; | ||
183 | } | ||
184 | |||
185 | for (i = nr_sensors - 1; i >= 0 ; i--) { | ||
186 | sensor = handle->sensor_ops->info_get(handle, i); | ||
187 | if (!sensor) | ||
188 | continue; | ||
189 | |||
190 | switch (sensor->type) { | ||
191 | case TEMPERATURE_C: | ||
192 | case VOLTAGE: | ||
193 | case CURRENT: | ||
194 | case POWER: | ||
195 | case ENERGY: | ||
196 | type = scmi_types[sensor->type]; | ||
197 | idx = --nr_count[type]; | ||
198 | *(scmi_sensors->info[type] + idx) = sensor; | ||
199 | break; | ||
200 | } | ||
201 | } | ||
202 | |||
203 | hwdev = devm_hwmon_device_register_with_info(dev, "scmi_sensors", | ||
204 | scmi_sensors, chip_info, | ||
205 | NULL); | ||
206 | |||
207 | return PTR_ERR_OR_ZERO(hwdev); | ||
208 | } | ||
209 | |||
210 | static const struct scmi_device_id scmi_id_table[] = { | ||
211 | { SCMI_PROTOCOL_SENSOR }, | ||
212 | { }, | ||
213 | }; | ||
214 | MODULE_DEVICE_TABLE(scmi, scmi_id_table); | ||
215 | |||
216 | static struct scmi_driver scmi_hwmon_drv = { | ||
217 | .name = "scmi-hwmon", | ||
218 | .probe = scmi_hwmon_probe, | ||
219 | .id_table = scmi_id_table, | ||
220 | }; | ||
221 | module_scmi_driver(scmi_hwmon_drv); | ||
222 | |||
223 | MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); | ||
224 | MODULE_DESCRIPTION("ARM SCMI HWMON interface driver"); | ||
225 | MODULE_LICENSE("GPL v2"); | ||
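scmi-hwmon.c builds its hwmon_channel_info tables at probe time because the sensor set is only known after querying the firmware; with a fixed sensor set, the same registration reduces to static tables. A minimal, hypothetical example of the *_with_info API as used in this era (config arrays are zero-terminated, the info pointer array is NULL-terminated):

#include <linux/hwmon.h>

static int demo_read(struct device *dev, enum hwmon_sensor_types type,
		     u32 attr, int channel, long *val)
{
	*val = 42000;	/* e.g. 42.0 degrees Celsius, in millidegrees */
	return 0;
}

static umode_t demo_is_visible(const void *data, enum hwmon_sensor_types type,
			       u32 attr, int channel)
{
	return 0444;	/* read-only */
}

static const struct hwmon_ops demo_hwmon_ops = {
	.is_visible = demo_is_visible,
	.read = demo_read,
};

static const u32 demo_temp_config[] = {
	HWMON_T_INPUT,
	0	/* zero-terminated */
};

static const struct hwmon_channel_info demo_temp = {
	.type = hwmon_temp,
	.config = demo_temp_config,
};

static const struct hwmon_channel_info *demo_info[] = {
	&demo_temp,
	NULL	/* NULL-terminated */
};

static const struct hwmon_chip_info demo_chip_info = {
	.ops = &demo_hwmon_ops,
	.info = demo_info,
};

/* in probe:
 *	hwdev = devm_hwmon_device_register_with_info(dev, "demo", drvdata,
 *						      &demo_chip_info, NULL);
 */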
diff --git a/drivers/memory/emif.c b/drivers/memory/emif.c index 04644e7b42b1..2f214440008c 100644 --- a/drivers/memory/emif.c +++ b/drivers/memory/emif.c | |||
@@ -127,7 +127,7 @@ static int emif_regdump_show(struct seq_file *s, void *unused) | |||
127 | 127 | ||
128 | for (i = 0; i < EMIF_MAX_NUM_FREQUENCIES && regs_cache[i]; i++) { | 128 | for (i = 0; i < EMIF_MAX_NUM_FREQUENCIES && regs_cache[i]; i++) { |
129 | do_emif_regdump_show(s, emif, regs_cache[i]); | 129 | do_emif_regdump_show(s, emif, regs_cache[i]); |
130 | seq_printf(s, "\n"); | 130 | seq_putc(s, '\n'); |
131 | } | 131 | } |
132 | 132 | ||
133 | return 0; | 133 | return 0; |
diff --git a/drivers/memory/samsung/Kconfig b/drivers/memory/samsung/Kconfig index 9de12222061c..79ce7ea58903 100644 --- a/drivers/memory/samsung/Kconfig +++ b/drivers/memory/samsung/Kconfig | |||
@@ -1,3 +1,4 @@ | |||
1 | # SPDX-License-Identifier: GPL-2.0 | ||
1 | config SAMSUNG_MC | 2 | config SAMSUNG_MC |
2 | bool "Samsung Exynos Memory Controller support" if COMPILE_TEST | 3 | bool "Samsung Exynos Memory Controller support" if COMPILE_TEST |
3 | help | 4 | help |
diff --git a/drivers/memory/samsung/Makefile b/drivers/memory/samsung/Makefile index 9c554d5522ad..00587be66211 100644 --- a/drivers/memory/samsung/Makefile +++ b/drivers/memory/samsung/Makefile | |||
@@ -1 +1,2 @@ | |||
1 | # SPDX-License-Identifier: GPL-2.0 | ||
1 | obj-$(CONFIG_EXYNOS_SROM) += exynos-srom.o | 2 | obj-$(CONFIG_EXYNOS_SROM) += exynos-srom.o |
diff --git a/drivers/memory/samsung/exynos-srom.c b/drivers/memory/samsung/exynos-srom.c index bf827a666694..7edd7fb540f2 100644 --- a/drivers/memory/samsung/exynos-srom.c +++ b/drivers/memory/samsung/exynos-srom.c | |||
@@ -1,14 +1,10 @@ | |||
1 | /* | 1 | // SPDX-License-Identifier: GPL-2.0 |
2 | * Copyright (c) 2015 Samsung Electronics Co., Ltd. | 2 | // |
3 | * http://www.samsung.com/ | 3 | // Copyright (c) 2015 Samsung Electronics Co., Ltd. |
4 | * | 4 | // http://www.samsung.com/ |
5 | * EXYNOS - SROM Controller support | 5 | // |
6 | * Author: Pankaj Dubey <pankaj.dubey@samsung.com> | 6 | // EXYNOS - SROM Controller support |
7 | * | 7 | // Author: Pankaj Dubey <pankaj.dubey@samsung.com> |
8 | * This program is free software; you can redistribute it and/or modify | ||
9 | * it under the terms of the GNU General Public License version 2 as | ||
10 | * published by the Free Software Foundation. | ||
11 | */ | ||
12 | 8 | ||
13 | #include <linux/io.h> | 9 | #include <linux/io.h> |
14 | #include <linux/init.h> | 10 | #include <linux/init.h> |
diff --git a/drivers/memory/samsung/exynos-srom.h b/drivers/memory/samsung/exynos-srom.h index 34660c6a57a9..da612797f522 100644 --- a/drivers/memory/samsung/exynos-srom.h +++ b/drivers/memory/samsung/exynos-srom.h | |||
@@ -1,13 +1,10 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
1 | /* | 2 | /* |
2 | * Copyright (c) 2015 Samsung Electronics Co., Ltd. | 3 | * Copyright (c) 2015 Samsung Electronics Co., Ltd. |
3 | * http://www.samsung.com | 4 | * http://www.samsung.com |
4 | * | 5 | * |
5 | * Exynos SROMC register definitions | 6 | * Exynos SROMC register definitions |
6 | * | 7 | */ |
7 | * This program is free software; you can redistribute it and/or modify | ||
8 | * it under the terms of the GNU General Public License version 2 as | ||
9 | * published by the Free Software Foundation. | ||
10 | */ | ||
11 | 8 | ||
12 | #ifndef __EXYNOS_SROM_H | 9 | #ifndef __EXYNOS_SROM_H |
13 | #define __EXYNOS_SROM_H __FILE__ | 10 | #define __EXYNOS_SROM_H __FILE__ |
diff --git a/drivers/memory/ti-emif-pm.c b/drivers/memory/ti-emif-pm.c index 62a86c4bcd0b..632651f4b6e8 100644 --- a/drivers/memory/ti-emif-pm.c +++ b/drivers/memory/ti-emif-pm.c | |||
@@ -271,7 +271,6 @@ static int ti_emif_probe(struct platform_device *pdev) | |||
271 | emif_data->pm_data.ti_emif_base_addr_virt = devm_ioremap_resource(dev, | 271 | emif_data->pm_data.ti_emif_base_addr_virt = devm_ioremap_resource(dev, |
272 | res); | 272 | res); |
273 | if (IS_ERR(emif_data->pm_data.ti_emif_base_addr_virt)) { | 273 | if (IS_ERR(emif_data->pm_data.ti_emif_base_addr_virt)) { |
274 | dev_err(dev, "could not ioremap emif mem\n"); | ||
275 | ret = PTR_ERR(emif_data->pm_data.ti_emif_base_addr_virt); | 274 | ret = PTR_ERR(emif_data->pm_data.ti_emif_base_addr_virt); |
276 | return ret; | 275 | return ret; |
277 | } | 276 | } |
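The ti-emif-pm hunk drops a dev_err() that merely duplicated the message devm_ioremap_resource() already prints on failure; the idiomatic caller just propagates the error. A minimal sketch:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);	/* the helper already logged why */

	return 0;
}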
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index da5724cd89cf..28bb5a029558 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig | |||
@@ -5,6 +5,39 @@ | |||
5 | menu "Performance monitor support" | 5 | menu "Performance monitor support" |
6 | depends on PERF_EVENTS | 6 | depends on PERF_EVENTS |
7 | 7 | ||
8 | config ARM_CCI_PMU | ||
9 | bool | ||
10 | select ARM_CCI | ||
11 | |||
12 | config ARM_CCI400_PMU | ||
13 | bool "ARM CCI400 PMU support" | ||
14 | depends on (ARM && CPU_V7) || ARM64 | ||
15 | select ARM_CCI400_COMMON | ||
16 | select ARM_CCI_PMU | ||
17 | help | ||
18 | Support for PMU events monitoring on the ARM CCI-400 (cache coherent | ||
19 | interconnect). CCI-400 supports counting events related to the | ||
20 | connected slave/master interfaces. | ||
21 | |||
22 | config ARM_CCI5xx_PMU | ||
23 | bool "ARM CCI-500/CCI-550 PMU support" | ||
24 | depends on (ARM && CPU_V7) || ARM64 | ||
25 | select ARM_CCI_PMU | ||
26 | help | ||
27 | Support for PMU events monitoring on the ARM CCI-500/CCI-550 cache | ||
28 | coherent interconnects. Both of them provide 8 independent event counters, | ||
29 | which can count events pertaining to the slave/master interfaces as well | ||
30 | as the internal events to the CCI. | ||
31 | |||
32 | If unsure, say Y | ||
33 | |||
34 | config ARM_CCN | ||
35 | tristate "ARM CCN driver support" | ||
36 | depends on ARM || ARM64 | ||
37 | help | ||
38 | PMU (perf) driver supporting the ARM CCN (Cache Coherent Network) | ||
39 | interconnect. | ||
40 | |||
8 | config ARM_PMU | 41 | config ARM_PMU |
9 | depends on ARM || ARM64 | 42 | depends on ARM || ARM64 |
10 | bool "ARM PMU framework" | 43 | bool "ARM PMU framework" |
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index c2f27419bdf0..b3902bd37d53 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile | |||
@@ -1,4 +1,6 @@ | |||
1 | # SPDX-License-Identifier: GPL-2.0 | 1 | # SPDX-License-Identifier: GPL-2.0 |
2 | obj-$(CONFIG_ARM_CCI_PMU) += arm-cci.o | ||
3 | obj-$(CONFIG_ARM_CCN) += arm-ccn.o | ||
2 | obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o | 4 | obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o |
3 | obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o | 5 | obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o |
4 | obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o | 6 | obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o |
diff --git a/drivers/perf/arm-cci.c b/drivers/perf/arm-cci.c new file mode 100644 index 000000000000..383b2d3dcbc6 --- /dev/null +++ b/drivers/perf/arm-cci.c | |||
@@ -0,0 +1,1722 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | // CCI Cache Coherent Interconnect PMU driver | ||
3 | // Copyright (C) 2013-2018 Arm Ltd. | ||
4 | // Author: Punit Agrawal <punit.agrawal@arm.com>, Suzuki Poulose <suzuki.poulose@arm.com> | ||
5 | |||
6 | #include <linux/arm-cci.h> | ||
7 | #include <linux/io.h> | ||
8 | #include <linux/interrupt.h> | ||
9 | #include <linux/module.h> | ||
10 | #include <linux/of_address.h> | ||
11 | #include <linux/of_device.h> | ||
12 | #include <linux/of_irq.h> | ||
13 | #include <linux/of_platform.h> | ||
14 | #include <linux/perf_event.h> | ||
15 | #include <linux/platform_device.h> | ||
16 | #include <linux/slab.h> | ||
17 | #include <linux/spinlock.h> | ||
18 | |||
19 | #define DRIVER_NAME "ARM-CCI PMU" | ||
20 | |||
21 | #define CCI_PMCR 0x0100 | ||
22 | #define CCI_PID2 0x0fe8 | ||
23 | |||
24 | #define CCI_PMCR_CEN 0x00000001 | ||
25 | #define CCI_PMCR_NCNT_MASK 0x0000f800 | ||
26 | #define CCI_PMCR_NCNT_SHIFT 11 | ||
27 | |||
28 | #define CCI_PID2_REV_MASK 0xf0 | ||
29 | #define CCI_PID2_REV_SHIFT 4 | ||
30 | |||
31 | #define CCI_PMU_EVT_SEL 0x000 | ||
32 | #define CCI_PMU_CNTR 0x004 | ||
33 | #define CCI_PMU_CNTR_CTRL 0x008 | ||
34 | #define CCI_PMU_OVRFLW 0x00c | ||
35 | |||
36 | #define CCI_PMU_OVRFLW_FLAG 1 | ||
37 | |||
38 | #define CCI_PMU_CNTR_SIZE(model) ((model)->cntr_size) | ||
39 | #define CCI_PMU_CNTR_BASE(model, idx) ((idx) * CCI_PMU_CNTR_SIZE(model)) | ||
40 | #define CCI_PMU_CNTR_MASK ((1ULL << 32) -1) | ||
41 | #define CCI_PMU_CNTR_LAST(cci_pmu) (cci_pmu->num_cntrs - 1) | ||
42 | |||
43 | #define CCI_PMU_MAX_HW_CNTRS(model) \ | ||
44 | ((model)->num_hw_cntrs + (model)->fixed_hw_cntrs) | ||
45 | |||
46 | /* Types of interfaces that can generate events */ | ||
47 | enum { | ||
48 | CCI_IF_SLAVE, | ||
49 | CCI_IF_MASTER, | ||
50 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
51 | CCI_IF_GLOBAL, | ||
52 | #endif | ||
53 | CCI_IF_MAX, | ||
54 | }; | ||
55 | |||
56 | struct event_range { | ||
57 | u32 min; | ||
58 | u32 max; | ||
59 | }; | ||
60 | |||
61 | struct cci_pmu_hw_events { | ||
62 | struct perf_event **events; | ||
63 | unsigned long *used_mask; | ||
64 | raw_spinlock_t pmu_lock; | ||
65 | }; | ||
66 | |||
67 | struct cci_pmu; | ||
68 | /* | ||
69 | * struct cci_pmu_model: | ||
70 | * @fixed_hw_cntrs - Number of fixed event counters | ||
71 | * @num_hw_cntrs - Maximum number of programmable event counters | ||
72 | * @cntr_size - Size of an event counter mapping | ||
73 | */ | ||
74 | struct cci_pmu_model { | ||
75 | char *name; | ||
76 | u32 fixed_hw_cntrs; | ||
77 | u32 num_hw_cntrs; | ||
78 | u32 cntr_size; | ||
79 | struct attribute **format_attrs; | ||
80 | struct attribute **event_attrs; | ||
81 | struct event_range event_ranges[CCI_IF_MAX]; | ||
82 | int (*validate_hw_event)(struct cci_pmu *, unsigned long); | ||
83 | int (*get_event_idx)(struct cci_pmu *, struct cci_pmu_hw_events *, unsigned long); | ||
84 | void (*write_counters)(struct cci_pmu *, unsigned long *); | ||
85 | }; | ||
86 | |||
87 | static struct cci_pmu_model cci_pmu_models[]; | ||
88 | |||
89 | struct cci_pmu { | ||
90 | void __iomem *base; | ||
91 | void __iomem *ctrl_base; | ||
92 | struct pmu pmu; | ||
93 | int cpu; | ||
94 | int nr_irqs; | ||
95 | int *irqs; | ||
96 | unsigned long active_irqs; | ||
97 | const struct cci_pmu_model *model; | ||
98 | struct cci_pmu_hw_events hw_events; | ||
99 | struct platform_device *plat_device; | ||
100 | int num_cntrs; | ||
101 | atomic_t active_events; | ||
102 | struct mutex reserve_mutex; | ||
103 | }; | ||
104 | |||
105 | #define to_cci_pmu(c) (container_of(c, struct cci_pmu, pmu)) | ||
106 | |||
107 | static struct cci_pmu *g_cci_pmu; | ||
108 | |||
109 | enum cci_models { | ||
110 | #ifdef CONFIG_ARM_CCI400_PMU | ||
111 | CCI400_R0, | ||
112 | CCI400_R1, | ||
113 | #endif | ||
114 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
115 | CCI500_R0, | ||
116 | CCI550_R0, | ||
117 | #endif | ||
118 | CCI_MODEL_MAX | ||
119 | }; | ||
120 | |||
121 | static void pmu_write_counters(struct cci_pmu *cci_pmu, | ||
122 | unsigned long *mask); | ||
123 | static ssize_t cci_pmu_format_show(struct device *dev, | ||
124 | struct device_attribute *attr, char *buf); | ||
125 | static ssize_t cci_pmu_event_show(struct device *dev, | ||
126 | struct device_attribute *attr, char *buf); | ||
127 | |||
128 | #define CCI_EXT_ATTR_ENTRY(_name, _func, _config) \ | ||
129 | &((struct dev_ext_attribute[]) { \ | ||
130 | { __ATTR(_name, S_IRUGO, _func, NULL), (void *)_config } \ | ||
131 | })[0].attr.attr | ||
132 | |||
133 | #define CCI_FORMAT_EXT_ATTR_ENTRY(_name, _config) \ | ||
134 | CCI_EXT_ATTR_ENTRY(_name, cci_pmu_format_show, (char *)_config) | ||
135 | #define CCI_EVENT_EXT_ATTR_ENTRY(_name, _config) \ | ||
136 | CCI_EXT_ATTR_ENTRY(_name, cci_pmu_event_show, (unsigned long)_config) | ||
137 | |||
138 | /* CCI400 PMU Specific definitions */ | ||
139 | |||
140 | #ifdef CONFIG_ARM_CCI400_PMU | ||
141 | |||
142 | /* Port ids */ | ||
143 | #define CCI400_PORT_S0 0 | ||
144 | #define CCI400_PORT_S1 1 | ||
145 | #define CCI400_PORT_S2 2 | ||
146 | #define CCI400_PORT_S3 3 | ||
147 | #define CCI400_PORT_S4 4 | ||
148 | #define CCI400_PORT_M0 5 | ||
149 | #define CCI400_PORT_M1 6 | ||
150 | #define CCI400_PORT_M2 7 | ||
151 | |||
152 | #define CCI400_R1_PX 5 | ||
153 | |||
154 | /* | ||
155 | * Instead of an event id to monitor CCI cycles, a dedicated counter is | ||
156 | * provided. Use 0xff to represent CCI cycles and hope that no future revisions | ||
157 | * make use of this event in hardware. | ||
158 | */ | ||
159 | enum cci400_perf_events { | ||
160 | CCI400_PMU_CYCLES = 0xff | ||
161 | }; | ||
162 | |||
163 | #define CCI400_PMU_CYCLE_CNTR_IDX 0 | ||
164 | #define CCI400_PMU_CNTR0_IDX 1 | ||
165 | |||
166 | /* | ||
167 | * CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8 | ||
168 | * ports and bits 4:0 are event codes. There are different event codes | ||
169 | * associated with each port type. | ||
170 | * | ||
171 | * Additionally, the range of events associated with the port types changed | ||
172 | * between Rev0 and Rev1. | ||
173 | * | ||
174 | * The constants below define the range of valid codes for each port type for | ||
175 | * the different revisions and are used to validate the event to be monitored. | ||
176 | */ | ||
177 | |||
178 | #define CCI400_PMU_EVENT_MASK 0xffUL | ||
179 | #define CCI400_PMU_EVENT_SOURCE_SHIFT 5 | ||
180 | #define CCI400_PMU_EVENT_SOURCE_MASK 0x7 | ||
181 | #define CCI400_PMU_EVENT_CODE_SHIFT 0 | ||
182 | #define CCI400_PMU_EVENT_CODE_MASK 0x1f | ||
183 | #define CCI400_PMU_EVENT_SOURCE(event) \ | ||
184 | ((event >> CCI400_PMU_EVENT_SOURCE_SHIFT) & \ | ||
185 | CCI400_PMU_EVENT_SOURCE_MASK) | ||
186 | #define CCI400_PMU_EVENT_CODE(event) \ | ||
187 | ((event >> CCI400_PMU_EVENT_CODE_SHIFT) & CCI400_PMU_EVENT_CODE_MASK) | ||
188 | |||
189 | #define CCI400_R0_SLAVE_PORT_MIN_EV 0x00 | ||
190 | #define CCI400_R0_SLAVE_PORT_MAX_EV 0x13 | ||
191 | #define CCI400_R0_MASTER_PORT_MIN_EV 0x14 | ||
192 | #define CCI400_R0_MASTER_PORT_MAX_EV 0x1a | ||
193 | |||
194 | #define CCI400_R1_SLAVE_PORT_MIN_EV 0x00 | ||
195 | #define CCI400_R1_SLAVE_PORT_MAX_EV 0x14 | ||
196 | #define CCI400_R1_MASTER_PORT_MIN_EV 0x00 | ||
197 | #define CCI400_R1_MASTER_PORT_MAX_EV 0x11 | ||
198 | |||
199 | #define CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(_name, _config) \ | ||
200 | CCI_EXT_ATTR_ENTRY(_name, cci400_pmu_cycle_event_show, \ | ||
201 | (unsigned long)_config) | ||
202 | |||
203 | static ssize_t cci400_pmu_cycle_event_show(struct device *dev, | ||
204 | struct device_attribute *attr, char *buf); | ||
205 | |||
206 | static struct attribute *cci400_pmu_format_attrs[] = { | ||
207 | CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"), | ||
208 | CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-7"), | ||
209 | NULL | ||
210 | }; | ||
211 | |||
212 | static struct attribute *cci400_r0_pmu_event_attrs[] = { | ||
213 | /* Slave events */ | ||
214 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), | ||
215 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), | ||
216 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), | ||
217 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), | ||
218 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), | ||
219 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), | ||
220 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), | ||
221 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), | ||
222 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), | ||
223 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), | ||
224 | CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), | ||
225 | CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), | ||
226 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), | ||
227 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), | ||
228 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), | ||
229 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), | ||
230 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), | ||
231 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), | ||
232 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), | ||
233 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), | ||
234 | /* Master events */ | ||
235 | CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x14), | ||
236 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_addr_hazard, 0x15), | ||
237 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_id_hazard, 0x16), | ||
238 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_tt_full, 0x17), | ||
239 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x18), | ||
240 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x19), | ||
241 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_tt_full, 0x1A), | ||
242 | /* Special event for cycles counter */ | ||
243 | CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff), | ||
244 | NULL | ||
245 | }; | ||
246 | |||
247 | static struct attribute *cci400_r1_pmu_event_attrs[] = { | ||
248 | /* Slave events */ | ||
249 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), | ||
250 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), | ||
251 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), | ||
252 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), | ||
253 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), | ||
254 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), | ||
255 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), | ||
256 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), | ||
257 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), | ||
258 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), | ||
259 | CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), | ||
260 | CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), | ||
261 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), | ||
262 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), | ||
263 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), | ||
264 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), | ||
265 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), | ||
266 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), | ||
267 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), | ||
268 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), | ||
269 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_slave_id_hazard, 0x14), | ||
270 | /* Master events */ | ||
271 | CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x0), | ||
272 | CCI_EVENT_EXT_ATTR_ENTRY(mi_stall_cycle_addr_hazard, 0x1), | ||
273 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_master_id_hazard, 0x2), | ||
274 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_hi_prio_rtq_full, 0x3), | ||
275 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x4), | ||
276 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x5), | ||
277 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_wtq_full, 0x6), | ||
278 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_low_prio_rtq_full, 0x7), | ||
279 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_mid_prio_rtq_full, 0x8), | ||
280 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn0, 0x9), | ||
281 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn1, 0xA), | ||
282 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn2, 0xB), | ||
283 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn3, 0xC), | ||
284 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn0, 0xD), | ||
285 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn1, 0xE), | ||
286 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn2, 0xF), | ||
287 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn3, 0x10), | ||
288 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_unique_or_line_unique_addr_hazard, 0x11), | ||
289 | /* Special event for cycles counter */ | ||
290 | CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff), | ||
291 | NULL | ||
292 | }; | ||
293 | |||
294 | static ssize_t cci400_pmu_cycle_event_show(struct device *dev, | ||
295 | struct device_attribute *attr, char *buf) | ||
296 | { | ||
297 | struct dev_ext_attribute *eattr = container_of(attr, | ||
298 | struct dev_ext_attribute, attr); | ||
299 | return snprintf(buf, PAGE_SIZE, "config=0x%lx\n", (unsigned long)eattr->var); | ||
300 | } | ||
301 | |||
302 | static int cci400_get_event_idx(struct cci_pmu *cci_pmu, | ||
303 | struct cci_pmu_hw_events *hw, | ||
304 | unsigned long cci_event) | ||
305 | { | ||
306 | int idx; | ||
307 | |||
308 | /* cycles event idx is fixed */ | ||
309 | if (cci_event == CCI400_PMU_CYCLES) { | ||
310 | if (test_and_set_bit(CCI400_PMU_CYCLE_CNTR_IDX, hw->used_mask)) | ||
311 | return -EAGAIN; | ||
312 | |||
313 | return CCI400_PMU_CYCLE_CNTR_IDX; | ||
314 | } | ||
315 | |||
316 | for (idx = CCI400_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx) | ||
317 | if (!test_and_set_bit(idx, hw->used_mask)) | ||
318 | return idx; | ||
319 | |||
320 | /* No counters available */ | ||
321 | return -EAGAIN; | ||
322 | } | ||
323 | |||
324 | static int cci400_validate_hw_event(struct cci_pmu *cci_pmu, unsigned long hw_event) | ||
325 | { | ||
326 | u8 ev_source = CCI400_PMU_EVENT_SOURCE(hw_event); | ||
327 | u8 ev_code = CCI400_PMU_EVENT_CODE(hw_event); | ||
328 | int if_type; | ||
329 | |||
330 | if (hw_event & ~CCI400_PMU_EVENT_MASK) | ||
331 | return -ENOENT; | ||
332 | |||
333 | if (hw_event == CCI400_PMU_CYCLES) | ||
334 | return hw_event; | ||
335 | |||
336 | switch (ev_source) { | ||
337 | case CCI400_PORT_S0: | ||
338 | case CCI400_PORT_S1: | ||
339 | case CCI400_PORT_S2: | ||
340 | case CCI400_PORT_S3: | ||
341 | case CCI400_PORT_S4: | ||
342 | /* Slave Interface */ | ||
343 | if_type = CCI_IF_SLAVE; | ||
344 | break; | ||
345 | case CCI400_PORT_M0: | ||
346 | case CCI400_PORT_M1: | ||
347 | case CCI400_PORT_M2: | ||
348 | /* Master Interface */ | ||
349 | if_type = CCI_IF_MASTER; | ||
350 | break; | ||
351 | default: | ||
352 | return -ENOENT; | ||
353 | } | ||
354 | |||
355 | if (ev_code >= cci_pmu->model->event_ranges[if_type].min && | ||
356 | ev_code <= cci_pmu->model->event_ranges[if_type].max) | ||
357 | return hw_event; | ||
358 | |||
359 | return -ENOENT; | ||
360 | } | ||
361 | |||
362 | static int probe_cci400_revision(struct cci_pmu *cci_pmu) | ||
363 | { | ||
364 | int rev; | ||
365 | rev = readl_relaxed(cci_pmu->ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK; | ||
366 | rev >>= CCI_PID2_REV_SHIFT; | ||
367 | |||
368 | if (rev < CCI400_R1_PX) | ||
369 | return CCI400_R0; | ||
370 | else | ||
371 | return CCI400_R1; | ||
372 | } | ||
373 | |||
374 | static const struct cci_pmu_model *probe_cci_model(struct cci_pmu *cci_pmu) | ||
375 | { | ||
376 | if (platform_has_secure_cci_access()) | ||
377 | return &cci_pmu_models[probe_cci400_revision(cci_pmu)]; | ||
378 | return NULL; | ||
379 | } | ||
380 | #else /* !CONFIG_ARM_CCI400_PMU */ | ||
381 | static inline const struct cci_pmu_model *probe_cci_model(struct cci_pmu *cci_pmu) | ||
382 | { | ||
383 | return NULL; | ||
384 | } | ||
385 | #endif /* CONFIG_ARM_CCI400_PMU */ | ||
386 | |||
387 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
388 | |||
389 | /* | ||
390 | * CCI5xx PMU event id is a 9-bit value made of two parts. | ||
391 | * bits [8:5] - Source for the event | ||
392 | * bits [4:0] - Event code (specific to type of interface) | ||
393 | */ | ||
396 | |||
397 | /* Port ids */ | ||
398 | #define CCI5xx_PORT_S0 0x0 | ||
399 | #define CCI5xx_PORT_S1 0x1 | ||
400 | #define CCI5xx_PORT_S2 0x2 | ||
401 | #define CCI5xx_PORT_S3 0x3 | ||
402 | #define CCI5xx_PORT_S4 0x4 | ||
403 | #define CCI5xx_PORT_S5 0x5 | ||
404 | #define CCI5xx_PORT_S6 0x6 | ||
405 | |||
406 | #define CCI5xx_PORT_M0 0x8 | ||
407 | #define CCI5xx_PORT_M1 0x9 | ||
408 | #define CCI5xx_PORT_M2 0xa | ||
409 | #define CCI5xx_PORT_M3 0xb | ||
410 | #define CCI5xx_PORT_M4 0xc | ||
411 | #define CCI5xx_PORT_M5 0xd | ||
412 | #define CCI5xx_PORT_M6 0xe | ||
413 | |||
414 | #define CCI5xx_PORT_GLOBAL 0xf | ||
415 | |||
416 | #define CCI5xx_PMU_EVENT_MASK 0x1ffUL | ||
417 | #define CCI5xx_PMU_EVENT_SOURCE_SHIFT 0x5 | ||
418 | #define CCI5xx_PMU_EVENT_SOURCE_MASK 0xf | ||
419 | #define CCI5xx_PMU_EVENT_CODE_SHIFT 0x0 | ||
420 | #define CCI5xx_PMU_EVENT_CODE_MASK 0x1f | ||
421 | |||
422 | #define CCI5xx_PMU_EVENT_SOURCE(event) \ | ||
423 | ((event >> CCI5xx_PMU_EVENT_SOURCE_SHIFT) & CCI5xx_PMU_EVENT_SOURCE_MASK) | ||
424 | #define CCI5xx_PMU_EVENT_CODE(event) \ | ||
425 | ((event >> CCI5xx_PMU_EVENT_CODE_SHIFT) & CCI5xx_PMU_EVENT_CODE_MASK) | ||
426 | |||
427 | #define CCI5xx_SLAVE_PORT_MIN_EV 0x00 | ||
428 | #define CCI5xx_SLAVE_PORT_MAX_EV 0x1f | ||
429 | #define CCI5xx_MASTER_PORT_MIN_EV 0x00 | ||
430 | #define CCI5xx_MASTER_PORT_MAX_EV 0x06 | ||
431 | #define CCI5xx_GLOBAL_PORT_MIN_EV 0x00 | ||
432 | #define CCI5xx_GLOBAL_PORT_MAX_EV 0x0f | ||
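As an illustration of this encoding, here is a minimal standalone sketch that packs and unpacks a CCI-5xx config value. The constants are copies of the driver's defines above; the demo program itself is not part of the kernel source.

#include <stdio.h>

/* Copies of the driver's encoding constants, for a standalone demo. */
#define SRC_SHIFT	5
#define SRC_MASK	0xf
#define CODE_MASK	0x1f

int main(void)
{
	/* source 0x8 (master interface M0), event code 0x2 (mi_rrq_stall) */
	unsigned long config = (0x8UL << SRC_SHIFT) | 0x2UL;

	printf("config=0x%lx source=0x%lx event=0x%lx\n", config,
	       (config >> SRC_SHIFT) & SRC_MASK, config & CODE_MASK);
	/* prints: config=0x102 source=0x8 event=0x2 */
	return 0;
}
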
433 | |||
434 | |||
435 | #define CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(_name, _config) \ | ||
436 | CCI_EXT_ATTR_ENTRY(_name, cci5xx_pmu_global_event_show, \ | ||
437 | (unsigned long) _config) | ||
438 | |||
439 | static ssize_t cci5xx_pmu_global_event_show(struct device *dev, | ||
440 | struct device_attribute *attr, char *buf); | ||
441 | |||
442 | static struct attribute *cci5xx_pmu_format_attrs[] = { | ||
443 | CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"), | ||
444 | CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-8"), | ||
445 | NULL, | ||
446 | }; | ||
447 | |||
448 | static struct attribute *cci5xx_pmu_event_attrs[] = { | ||
449 | /* Slave events */ | ||
450 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_arvalid, 0x0), | ||
451 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_dev, 0x1), | ||
452 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_nonshareable, 0x2), | ||
453 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_non_alloc, 0x3), | ||
454 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_alloc, 0x4), | ||
455 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_invalidate, 0x5), | ||
456 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maint, 0x6), | ||
457 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), | ||
458 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rval, 0x8), | ||
459 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rlast_snoop, 0x9), | ||
460 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_awalid, 0xA), | ||
461 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_dev, 0xB), | ||
462 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_non_shareable, 0xC), | ||
463 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wb, 0xD), | ||
464 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wlu, 0xE), | ||
465 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wunique, 0xF), | ||
466 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_evict, 0x10), | ||
467 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_wrevict, 0x11), | ||
468 | CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_beat, 0x12), | ||
469 | CCI_EVENT_EXT_ATTR_ENTRY(si_srq_acvalid, 0x13), | ||
470 | CCI_EVENT_EXT_ATTR_ENTRY(si_srq_read, 0x14), | ||
471 | CCI_EVENT_EXT_ATTR_ENTRY(si_srq_clean, 0x15), | ||
472 | CCI_EVENT_EXT_ATTR_ENTRY(si_srq_data_transfer_low, 0x16), | ||
473 | CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_arvalid, 0x17), | ||
474 | CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall, 0x18), | ||
475 | CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall, 0x19), | ||
476 | CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_stall, 0x1A), | ||
477 | CCI_EVENT_EXT_ATTR_ENTRY(si_w_resp_stall, 0x1B), | ||
478 | CCI_EVENT_EXT_ATTR_ENTRY(si_srq_stall, 0x1C), | ||
479 | CCI_EVENT_EXT_ATTR_ENTRY(si_s_data_stall, 0x1D), | ||
480 | CCI_EVENT_EXT_ATTR_ENTRY(si_rq_stall_ot_limit, 0x1E), | ||
481 | CCI_EVENT_EXT_ATTR_ENTRY(si_r_stall_arbit, 0x1F), | ||
482 | |||
483 | /* Master events */ | ||
484 | CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_beat_any, 0x0), | ||
485 | CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_beat_any, 0x1), | ||
486 | CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall, 0x2), | ||
487 | CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_stall, 0x3), | ||
488 | CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall, 0x4), | ||
489 | CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_stall, 0x5), | ||
490 | CCI_EVENT_EXT_ATTR_ENTRY(mi_w_resp_stall, 0x6), | ||
491 | |||
492 | /* Global events */ | ||
493 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_0_1, 0x0), | ||
494 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_2_3, 0x1), | ||
495 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_4_5, 0x2), | ||
496 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_6_7, 0x3), | ||
497 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_0_1, 0x4), | ||
498 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_2_3, 0x5), | ||
499 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_4_5, 0x6), | ||
500 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_6_7, 0x7), | ||
501 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_back_invalidation, 0x8), | ||
502 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_alloc_busy, 0x9), | ||
503 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_tt_full, 0xA), | ||
504 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB), | ||
505 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC), | ||
506 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD), | ||
507 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE), | ||
508 | CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF), | ||
509 | NULL | ||
510 | }; | ||
511 | |||
512 | static ssize_t cci5xx_pmu_global_event_show(struct device *dev, | ||
513 | struct device_attribute *attr, char *buf) | ||
514 | { | ||
515 | struct dev_ext_attribute *eattr = container_of(attr, | ||
516 | struct dev_ext_attribute, attr); | ||
517 | /* Global events have a single, fixed source code */ | ||
518 | return snprintf(buf, PAGE_SIZE, "event=0x%lx,source=0x%x\n", | ||
519 | (unsigned long)eattr->var, CCI5xx_PORT_GLOBAL); | ||
520 | } | ||
521 | |||
522 | /* | ||
523 | * CCI500 provides 8 independent event counters that can count | ||
524 | * any of the events available. | ||
525 | * CCI500 PMU event source ids | ||
526 | * 0x0-0x6 - Slave interfaces | ||
527 | * 0x8-0xD - Master interfaces | ||
528 | * 0xf - Global Events | ||
529 | * 0x7,0xe - Reserved | ||
530 | */ | ||
531 | static int cci500_validate_hw_event(struct cci_pmu *cci_pmu, | ||
532 | unsigned long hw_event) | ||
533 | { | ||
534 | u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event); | ||
535 | u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event); | ||
536 | int if_type; | ||
537 | |||
538 | if (hw_event & ~CCI5xx_PMU_EVENT_MASK) | ||
539 | return -ENOENT; | ||
540 | |||
541 | switch (ev_source) { | ||
542 | case CCI5xx_PORT_S0: | ||
543 | case CCI5xx_PORT_S1: | ||
544 | case CCI5xx_PORT_S2: | ||
545 | case CCI5xx_PORT_S3: | ||
546 | case CCI5xx_PORT_S4: | ||
547 | case CCI5xx_PORT_S5: | ||
548 | case CCI5xx_PORT_S6: | ||
549 | if_type = CCI_IF_SLAVE; | ||
550 | break; | ||
551 | case CCI5xx_PORT_M0: | ||
552 | case CCI5xx_PORT_M1: | ||
553 | case CCI5xx_PORT_M2: | ||
554 | case CCI5xx_PORT_M3: | ||
555 | case CCI5xx_PORT_M4: | ||
556 | case CCI5xx_PORT_M5: | ||
557 | if_type = CCI_IF_MASTER; | ||
558 | break; | ||
559 | case CCI5xx_PORT_GLOBAL: | ||
560 | if_type = CCI_IF_GLOBAL; | ||
561 | break; | ||
562 | default: | ||
563 | return -ENOENT; | ||
564 | } | ||
565 | |||
566 | if (ev_code >= cci_pmu->model->event_ranges[if_type].min && | ||
567 | ev_code <= cci_pmu->model->event_ranges[if_type].max) | ||
568 | return hw_event; | ||
569 | |||
570 | return -ENOENT; | ||
571 | } | ||
572 | |||
573 | /* | ||
574 | * CCI550 provides 8 independent event counters that can count | ||
575 | * any of the events available. | ||
576 | * CCI550 PMU event source ids | ||
577 | * 0x0-0x6 - Slave interfaces | ||
578 | * 0x8-0xe - Master interfaces | ||
579 | * 0xf - Global Events | ||
580 | * 0x7 - Reserved | ||
581 | */ | ||
582 | static int cci550_validate_hw_event(struct cci_pmu *cci_pmu, | ||
583 | unsigned long hw_event) | ||
584 | { | ||
585 | u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event); | ||
586 | u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event); | ||
587 | int if_type; | ||
588 | |||
589 | if (hw_event & ~CCI5xx_PMU_EVENT_MASK) | ||
590 | return -ENOENT; | ||
591 | |||
592 | switch (ev_source) { | ||
593 | case CCI5xx_PORT_S0: | ||
594 | case CCI5xx_PORT_S1: | ||
595 | case CCI5xx_PORT_S2: | ||
596 | case CCI5xx_PORT_S3: | ||
597 | case CCI5xx_PORT_S4: | ||
598 | case CCI5xx_PORT_S5: | ||
599 | case CCI5xx_PORT_S6: | ||
600 | if_type = CCI_IF_SLAVE; | ||
601 | break; | ||
602 | case CCI5xx_PORT_M0: | ||
603 | case CCI5xx_PORT_M1: | ||
604 | case CCI5xx_PORT_M2: | ||
605 | case CCI5xx_PORT_M3: | ||
606 | case CCI5xx_PORT_M4: | ||
607 | case CCI5xx_PORT_M5: | ||
608 | case CCI5xx_PORT_M6: | ||
609 | if_type = CCI_IF_MASTER; | ||
610 | break; | ||
611 | case CCI5xx_PORT_GLOBAL: | ||
612 | if_type = CCI_IF_GLOBAL; | ||
613 | break; | ||
614 | default: | ||
615 | return -ENOENT; | ||
616 | } | ||
617 | |||
618 | if (ev_code >= cci_pmu->model->event_ranges[if_type].min && | ||
619 | ev_code <= cci_pmu->model->event_ranges[if_type].max) | ||
620 | return hw_event; | ||
621 | |||
622 | return -ENOENT; | ||
623 | } | ||
624 | |||
625 | #endif /* CONFIG_ARM_CCI5xx_PMU */ | ||
626 | |||
627 | /* | ||
628 | * Program the CCI PMU counters which have PERF_HES_ARCH set | ||
629 | * with the event period and mark them ready before we enable | ||
630 | * the PMU. | ||
631 | */ | ||
632 | static void cci_pmu_sync_counters(struct cci_pmu *cci_pmu) | ||
633 | { | ||
634 | int i; | ||
635 | struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events; | ||
636 | |||
637 | DECLARE_BITMAP(mask, cci_pmu->num_cntrs); | ||
638 | |||
639 | bitmap_zero(mask, cci_pmu->num_cntrs); | ||
640 | for_each_set_bit(i, cci_pmu->hw_events.used_mask, cci_pmu->num_cntrs) { | ||
641 | struct perf_event *event = cci_hw->events[i]; | ||
642 | |||
643 | if (WARN_ON(!event)) | ||
644 | continue; | ||
645 | |||
646 | /* Leave the events which are not counting */ | ||
647 | if (event->hw.state & PERF_HES_STOPPED) | ||
648 | continue; | ||
649 | if (event->hw.state & PERF_HES_ARCH) { | ||
650 | set_bit(i, mask); | ||
651 | event->hw.state &= ~PERF_HES_ARCH; | ||
652 | } | ||
653 | } | ||
654 | |||
655 | pmu_write_counters(cci_pmu, mask); | ||
656 | } | ||
657 | |||
658 | /* Should be called with cci_pmu->hw_events->pmu_lock held */ | ||
659 | static void __cci_pmu_enable_nosync(struct cci_pmu *cci_pmu) | ||
660 | { | ||
661 | u32 val; | ||
662 | |||
663 | /* Enable all the PMU counters. */ | ||
664 | val = readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) | CCI_PMCR_CEN; | ||
665 | writel(val, cci_pmu->ctrl_base + CCI_PMCR); | ||
666 | } | ||
667 | |||
668 | /* Should be called with cci_pmu->hw_events->pmu_lock held */ | ||
669 | static void __cci_pmu_enable_sync(struct cci_pmu *cci_pmu) | ||
670 | { | ||
671 | cci_pmu_sync_counters(cci_pmu); | ||
672 | __cci_pmu_enable_nosync(cci_pmu); | ||
673 | } | ||
674 | |||
675 | /* Should be called with cci_pmu->hw_events->pmu_lock held */ | ||
676 | static void __cci_pmu_disable(struct cci_pmu *cci_pmu) | ||
677 | { | ||
678 | u32 val; | ||
679 | |||
680 | /* Disable all the PMU counters. */ | ||
681 | val = readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) & ~CCI_PMCR_CEN; | ||
682 | writel(val, cci_pmu->ctrl_base + CCI_PMCR); | ||
683 | } | ||
684 | |||
685 | static ssize_t cci_pmu_format_show(struct device *dev, | ||
686 | struct device_attribute *attr, char *buf) | ||
687 | { | ||
688 | struct dev_ext_attribute *eattr = container_of(attr, | ||
689 | struct dev_ext_attribute, attr); | ||
690 | return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var); | ||
691 | } | ||
692 | |||
693 | static ssize_t cci_pmu_event_show(struct device *dev, | ||
694 | struct device_attribute *attr, char *buf) | ||
695 | { | ||
696 | struct dev_ext_attribute *eattr = container_of(attr, | ||
697 | struct dev_ext_attribute, attr); | ||
698 | /* source parameter is mandatory for normal PMU events */ | ||
699 | return snprintf(buf, PAGE_SIZE, "source=?,event=0x%lx\n", | ||
700 | (unsigned long)eattr->var); | ||
701 | } | ||
702 | |||
703 | static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx) | ||
704 | { | ||
705 | return 0 <= idx && idx <= CCI_PMU_CNTR_LAST(cci_pmu); | ||
706 | } | ||
707 | |||
708 | static u32 pmu_read_register(struct cci_pmu *cci_pmu, int idx, unsigned int offset) | ||
709 | { | ||
710 | return readl_relaxed(cci_pmu->base + | ||
711 | CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset); | ||
712 | } | ||
713 | |||
714 | static void pmu_write_register(struct cci_pmu *cci_pmu, u32 value, | ||
715 | int idx, unsigned int offset) | ||
716 | { | ||
717 | writel_relaxed(value, cci_pmu->base + | ||
718 | CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset); | ||
719 | } | ||
720 | |||
721 | static void pmu_disable_counter(struct cci_pmu *cci_pmu, int idx) | ||
722 | { | ||
723 | pmu_write_register(cci_pmu, 0, idx, CCI_PMU_CNTR_CTRL); | ||
724 | } | ||
725 | |||
726 | static void pmu_enable_counter(struct cci_pmu *cci_pmu, int idx) | ||
727 | { | ||
728 | pmu_write_register(cci_pmu, 1, idx, CCI_PMU_CNTR_CTRL); | ||
729 | } | ||
730 | |||
731 | static bool __maybe_unused | ||
732 | pmu_counter_is_enabled(struct cci_pmu *cci_pmu, int idx) | ||
733 | { | ||
734 | return (pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR_CTRL) & 0x1) != 0; | ||
735 | } | ||
736 | |||
737 | static void pmu_set_event(struct cci_pmu *cci_pmu, int idx, unsigned long event) | ||
738 | { | ||
739 | pmu_write_register(cci_pmu, event, idx, CCI_PMU_EVT_SEL); | ||
740 | } | ||
741 | |||
742 | /* | ||
743 | * For all counters on the CCI-PMU, disable any 'enabled' counters, | ||
744 | * saving the changed counters in the mask, so that we can restore | ||
745 | * it later using pmu_restore_counters. The mask is private to the | ||
746 | * caller. We cannot rely on the used_mask maintained by the CCI_PMU | ||
747 | * as it only tells us if the counter is assigned to a perf_event or not. | ||
748 | * The state of the perf_event cannot be locked by the PMU layer, hence | ||
749 | * we check the individual counter status (which can be locked by | ||
750 | * cci_pmu->hw_events->pmu_lock). | ||
751 | * | ||
752 | * @mask should be initialised to empty by the caller. | ||
753 | */ | ||
754 | static void __maybe_unused | ||
755 | pmu_save_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
756 | { | ||
757 | int i; | ||
758 | |||
759 | for (i = 0; i < cci_pmu->num_cntrs; i++) { | ||
760 | if (pmu_counter_is_enabled(cci_pmu, i)) { | ||
761 | set_bit(i, mask); | ||
762 | pmu_disable_counter(cci_pmu, i); | ||
763 | } | ||
764 | } | ||
765 | } | ||
766 | |||
767 | /* | ||
768 | * Restore the status of the counters. This is the reverse of | ||
769 | * pmu_save_counters(): for each counter set in the mask, re-enable it. | ||
770 | */ | ||
771 | static void __maybe_unused | ||
772 | pmu_restore_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
773 | { | ||
774 | int i; | ||
775 | |||
776 | for_each_set_bit(i, mask, cci_pmu->num_cntrs) | ||
777 | pmu_enable_counter(cci_pmu, i); | ||
778 | } | ||
779 | |||
780 | /* | ||
781 | * Returns the number of programmable counters actually implemented | ||
782 | * by the CCI. | ||
783 | */ | ||
784 | static u32 pmu_get_max_counters(struct cci_pmu *cci_pmu) | ||
785 | { | ||
786 | return (readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) & | ||
787 | CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT; | ||
788 | } | ||
789 | |||
790 | static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *event) | ||
791 | { | ||
792 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
793 | unsigned long cci_event = event->hw.config_base; | ||
794 | int idx; | ||
795 | |||
796 | if (cci_pmu->model->get_event_idx) | ||
797 | return cci_pmu->model->get_event_idx(cci_pmu, hw, cci_event); | ||
798 | |||
799 | /* Generic code to find an unused idx from the mask */ | ||
800 | for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) | ||
801 | if (!test_and_set_bit(idx, hw->used_mask)) | ||
802 | return idx; | ||
803 | |||
804 | /* No counters available */ | ||
805 | return -EAGAIN; | ||
806 | } | ||
807 | |||
808 | static int pmu_map_event(struct perf_event *event) | ||
809 | { | ||
810 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
811 | |||
812 | if (event->attr.type < PERF_TYPE_MAX || | ||
813 | !cci_pmu->model->validate_hw_event) | ||
814 | return -ENOENT; | ||
815 | |||
816 | return cci_pmu->model->validate_hw_event(cci_pmu, event->attr.config); | ||
817 | } | ||
818 | |||
819 | static int pmu_request_irq(struct cci_pmu *cci_pmu, irq_handler_t handler) | ||
820 | { | ||
821 | int i; | ||
822 | struct platform_device *pmu_device = cci_pmu->plat_device; | ||
823 | |||
824 | if (unlikely(!pmu_device)) | ||
825 | return -ENODEV; | ||
826 | |||
827 | if (cci_pmu->nr_irqs < 1) { | ||
828 | dev_err(&pmu_device->dev, "no IRQs defined for the CCI PMU\n"); | ||
829 | return -ENODEV; | ||
830 | } | ||
831 | |||
832 | /* | ||
833 | * Register all available CCI PMU interrupts. In the interrupt handler | ||
834 | * we iterate over the counters, checking for the interrupt source (the | ||
835 | * overflowing counter) and clearing it. | ||
836 | * | ||
837 | * This should allow handling of interrupts that are shared between counters. | ||
838 | */ | ||
839 | for (i = 0; i < cci_pmu->nr_irqs; i++) { | ||
840 | int err = request_irq(cci_pmu->irqs[i], handler, IRQF_SHARED, | ||
841 | "arm-cci-pmu", cci_pmu); | ||
842 | if (err) { | ||
843 | dev_err(&pmu_device->dev, "unable to request IRQ%d for ARM CCI PMU counters\n", | ||
844 | cci_pmu->irqs[i]); | ||
845 | return err; | ||
846 | } | ||
847 | |||
848 | set_bit(i, &cci_pmu->active_irqs); | ||
849 | } | ||
850 | |||
851 | return 0; | ||
852 | } | ||
853 | |||
854 | static void pmu_free_irq(struct cci_pmu *cci_pmu) | ||
855 | { | ||
856 | int i; | ||
857 | |||
858 | for (i = 0; i < cci_pmu->nr_irqs; i++) { | ||
859 | if (!test_and_clear_bit(i, &cci_pmu->active_irqs)) | ||
860 | continue; | ||
861 | |||
862 | free_irq(cci_pmu->irqs[i], cci_pmu); | ||
863 | } | ||
864 | } | ||
865 | |||
866 | static u32 pmu_read_counter(struct perf_event *event) | ||
867 | { | ||
868 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
869 | struct hw_perf_event *hw_counter = &event->hw; | ||
870 | int idx = hw_counter->idx; | ||
871 | u32 value; | ||
872 | |||
873 | if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) { | ||
874 | dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); | ||
875 | return 0; | ||
876 | } | ||
877 | value = pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR); | ||
878 | |||
879 | return value; | ||
880 | } | ||
881 | |||
882 | static void pmu_write_counter(struct cci_pmu *cci_pmu, u32 value, int idx) | ||
883 | { | ||
884 | pmu_write_register(cci_pmu, value, idx, CCI_PMU_CNTR); | ||
885 | } | ||
886 | |||
887 | static void __pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
888 | { | ||
889 | int i; | ||
890 | struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events; | ||
891 | |||
892 | for_each_set_bit(i, mask, cci_pmu->num_cntrs) { | ||
893 | struct perf_event *event = cci_hw->events[i]; | ||
894 | |||
895 | if (WARN_ON(!event)) | ||
896 | continue; | ||
897 | pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i); | ||
898 | } | ||
899 | } | ||
900 | |||
901 | static void pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
902 | { | ||
903 | if (cci_pmu->model->write_counters) | ||
904 | cci_pmu->model->write_counters(cci_pmu, mask); | ||
905 | else | ||
906 | __pmu_write_counters(cci_pmu, mask); | ||
907 | } | ||
908 | |||
909 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
910 | |||
911 | /* | ||
912 | * CCI-500/CCI-550 have advanced power-saving policies that can gate the | ||
913 | * clocks to the PMU counters, making writes to them ineffective. | ||
914 | * The only way to write to those counters is when the PMU is globally | ||
915 | * enabled and the particular counter is enabled. | ||
916 | * | ||
917 | * So we do the following: | ||
918 | * | ||
919 | * 1) Disable all the PMU counters, saving their current state | ||
920 | * 2) Enable the global PMU profiling, now that all counters are | ||
921 | * disabled. | ||
922 | * | ||
923 | * For each counter to be programmed, repeat steps 3-7: | ||
924 | * | ||
925 | * 3) Write an invalid event code to the event control register for the | ||
926 | *    counter, so that the counter is not modified while we program it. | ||
927 | * 4) Enable the counter control for the counter. | ||
928 | * 5) Set the counter value | ||
929 | * 6) Disable the counter | ||
930 | * 7) Restore the event in the target counter | ||
931 | * | ||
932 | * 8) Disable the global PMU. | ||
933 | * 9) Restore the status of the rest of the counters. | ||
934 | * | ||
935 | * We choose an event which, for CCI-5xx, is guaranteed not to count: | ||
936 | * the highest possible event code (0x1f) on master interface 0. | ||
937 | */ | ||
938 | #define CCI5xx_INVALID_EVENT ((CCI5xx_PORT_M0 << CCI5xx_PMU_EVENT_SOURCE_SHIFT) | \ | ||
939 | (CCI5xx_PMU_EVENT_CODE_MASK << CCI5xx_PMU_EVENT_CODE_SHIFT)) | ||
940 | static void cci5xx_pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask) | ||
941 | { | ||
942 | int i; | ||
943 | DECLARE_BITMAP(saved_mask, cci_pmu->num_cntrs); | ||
944 | |||
945 | bitmap_zero(saved_mask, cci_pmu->num_cntrs); | ||
946 | pmu_save_counters(cci_pmu, saved_mask); | ||
947 | |||
948 | /* | ||
949 | * Now that all the counters are disabled, we can safely turn the PMU on | ||
950 | * without syncing the status of the counters. | ||
951 | */ | ||
952 | __cci_pmu_enable_nosync(cci_pmu); | ||
953 | |||
954 | for_each_set_bit(i, mask, cci_pmu->num_cntrs) { | ||
955 | struct perf_event *event = cci_pmu->hw_events.events[i]; | ||
956 | |||
957 | if (WARN_ON(!event)) | ||
958 | continue; | ||
959 | |||
960 | pmu_set_event(cci_pmu, i, CCI5xx_INVALID_EVENT); | ||
961 | pmu_enable_counter(cci_pmu, i); | ||
962 | pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i); | ||
963 | pmu_disable_counter(cci_pmu, i); | ||
964 | pmu_set_event(cci_pmu, i, event->hw.config_base); | ||
965 | } | ||
966 | |||
967 | __cci_pmu_disable(cci_pmu); | ||
968 | |||
969 | pmu_restore_counters(cci_pmu, saved_mask); | ||
970 | } | ||
971 | |||
972 | #endif /* CONFIG_ARM_CCI5xx_PMU */ | ||
973 | |||
974 | static u64 pmu_event_update(struct perf_event *event) | ||
975 | { | ||
976 | struct hw_perf_event *hwc = &event->hw; | ||
977 | u64 delta, prev_raw_count, new_raw_count; | ||
978 | |||
979 | do { | ||
980 | prev_raw_count = local64_read(&hwc->prev_count); | ||
981 | new_raw_count = pmu_read_counter(event); | ||
982 | } while (local64_cmpxchg(&hwc->prev_count, prev_raw_count, | ||
983 | new_raw_count) != prev_raw_count); | ||
984 | |||
985 | delta = (new_raw_count - prev_raw_count) & CCI_PMU_CNTR_MASK; | ||
986 | |||
987 | local64_add(delta, &event->count); | ||
988 | |||
989 | return new_raw_count; | ||
990 | } | ||
991 | |||
992 | static void pmu_read(struct perf_event *event) | ||
993 | { | ||
994 | pmu_event_update(event); | ||
995 | } | ||
996 | |||
997 | static void pmu_event_set_period(struct perf_event *event) | ||
998 | { | ||
999 | struct hw_perf_event *hwc = &event->hw; | ||
1000 | /* | ||
1001 | * The CCI PMU counters have a period of 2^32. To account for the | ||
1002 | * possibility of extreme interrupt latency we program for a period of | ||
1003 | * half that. Hopefully we can handle the interrupt before another 2^31 | ||
1004 | * events occur and the counter overtakes its previous value. | ||
1005 | */ | ||
1006 | u64 val = 1ULL << 31; | ||
1007 | local64_set(&hwc->prev_count, val); | ||
1008 | |||
1009 | /* | ||
1010 | * The CCI PMU uses PERF_HES_ARCH to keep track of the counters whose | ||
1011 | * values need to be synced with the software state before the PMU is | ||
1012 | * enabled. Mark this counter for sync. | ||
1014 | */ | ||
1015 | hwc->state |= PERF_HES_ARCH; | ||
1016 | } | ||
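To make the half-period trick concrete, here is a hedged sketch of the wraparound arithmetic shared by pmu_event_set_period() and pmu_event_update(); the 32-bit mask value is an assumption standing in for CCI_PMU_CNTR_MASK.

#include <stdio.h>
#include <stdint.h>

#define CNTR_MASK 0xffffffffULL	/* assumed width of a CCI counter */

int main(void)
{
	uint64_t prev = 1ULL << 31;	/* value programmed at set_period time */
	uint64_t now = 0x10;		/* counter wrapped past 2^32, then counted 0x10 */
	uint64_t delta = (now - prev) & CNTR_MASK;

	/* 2^31 events to reach the wrap plus 0x10 after it: prints delta=0x80000010 */
	printf("delta=0x%llx\n", (unsigned long long)delta);
	return 0;
}

As long as the overflow interrupt is handled within another 2^31 events, the masked subtraction always yields the true event count.
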
1017 | |||
1018 | static irqreturn_t pmu_handle_irq(int irq_num, void *dev) | ||
1019 | { | ||
1020 | unsigned long flags; | ||
1021 | struct cci_pmu *cci_pmu = dev; | ||
1022 | struct cci_pmu_hw_events *events = &cci_pmu->hw_events; | ||
1023 | int idx, handled = IRQ_NONE; | ||
1024 | |||
1025 | raw_spin_lock_irqsave(&events->pmu_lock, flags); | ||
1026 | |||
1027 | /* Disable the PMU while we walk through the counters */ | ||
1028 | __cci_pmu_disable(cci_pmu); | ||
1029 | /* | ||
1030 | * Iterate over counters and update the corresponding perf events. | ||
1031 | * This should work regardless of whether we have a per-counter overflow | ||
1032 | * interrupt or a combined overflow interrupt. | ||
1033 | */ | ||
1034 | for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) { | ||
1035 | struct perf_event *event = events->events[idx]; | ||
1036 | |||
1037 | if (!event) | ||
1038 | continue; | ||
1039 | |||
1040 | /* Did this counter overflow? */ | ||
1041 | if (!(pmu_read_register(cci_pmu, idx, CCI_PMU_OVRFLW) & | ||
1042 | CCI_PMU_OVRFLW_FLAG)) | ||
1043 | continue; | ||
1044 | |||
1045 | pmu_write_register(cci_pmu, CCI_PMU_OVRFLW_FLAG, idx, | ||
1046 | CCI_PMU_OVRFLW); | ||
1047 | |||
1048 | pmu_event_update(event); | ||
1049 | pmu_event_set_period(event); | ||
1050 | handled = IRQ_HANDLED; | ||
1051 | } | ||
1052 | |||
1053 | /* Enable the PMU and sync possibly overflowed counters */ | ||
1054 | __cci_pmu_enable_sync(cci_pmu); | ||
1055 | raw_spin_unlock_irqrestore(&events->pmu_lock, flags); | ||
1056 | |||
1057 | return IRQ_RETVAL(handled); | ||
1058 | } | ||
1059 | |||
1060 | static int cci_pmu_get_hw(struct cci_pmu *cci_pmu) | ||
1061 | { | ||
1062 | int ret = pmu_request_irq(cci_pmu, pmu_handle_irq); | ||
1063 | if (ret) { | ||
1064 | pmu_free_irq(cci_pmu); | ||
1065 | return ret; | ||
1066 | } | ||
1067 | return 0; | ||
1068 | } | ||
1069 | |||
1070 | static void cci_pmu_put_hw(struct cci_pmu *cci_pmu) | ||
1071 | { | ||
1072 | pmu_free_irq(cci_pmu); | ||
1073 | } | ||
1074 | |||
1075 | static void hw_perf_event_destroy(struct perf_event *event) | ||
1076 | { | ||
1077 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1078 | atomic_t *active_events = &cci_pmu->active_events; | ||
1079 | struct mutex *reserve_mutex = &cci_pmu->reserve_mutex; | ||
1080 | |||
1081 | if (atomic_dec_and_mutex_lock(active_events, reserve_mutex)) { | ||
1082 | cci_pmu_put_hw(cci_pmu); | ||
1083 | mutex_unlock(reserve_mutex); | ||
1084 | } | ||
1085 | } | ||
1086 | |||
1087 | static void cci_pmu_enable(struct pmu *pmu) | ||
1088 | { | ||
1089 | struct cci_pmu *cci_pmu = to_cci_pmu(pmu); | ||
1090 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1091 | int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_cntrs); | ||
1092 | unsigned long flags; | ||
1093 | |||
1094 | if (!enabled) | ||
1095 | return; | ||
1096 | |||
1097 | raw_spin_lock_irqsave(&hw_events->pmu_lock, flags); | ||
1098 | __cci_pmu_enable_sync(cci_pmu); | ||
1099 | raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); | ||
1100 | |||
1101 | } | ||
1102 | |||
1103 | static void cci_pmu_disable(struct pmu *pmu) | ||
1104 | { | ||
1105 | struct cci_pmu *cci_pmu = to_cci_pmu(pmu); | ||
1106 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1107 | unsigned long flags; | ||
1108 | |||
1109 | raw_spin_lock_irqsave(&hw_events->pmu_lock, flags); | ||
1110 | __cci_pmu_disable(cci_pmu); | ||
1111 | raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); | ||
1112 | } | ||
1113 | |||
1114 | /* | ||
1115 | * Check if the idx represents a non-programmable counter. | ||
1116 | * All the fixed event counters are mapped before the programmable | ||
1117 | * counters. | ||
1118 | */ | ||
1119 | static bool pmu_fixed_hw_idx(struct cci_pmu *cci_pmu, int idx) | ||
1120 | { | ||
1121 | return (idx >= 0) && (idx < cci_pmu->model->fixed_hw_cntrs); | ||
1122 | } | ||
1123 | |||
1124 | static void cci_pmu_start(struct perf_event *event, int pmu_flags) | ||
1125 | { | ||
1126 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1127 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1128 | struct hw_perf_event *hwc = &event->hw; | ||
1129 | int idx = hwc->idx; | ||
1130 | unsigned long flags; | ||
1131 | |||
1132 | /* | ||
1133 | * To handle interrupt latency, we always reprogram the period | ||
1134 | * regardless of PERF_EF_RELOAD. | ||
1135 | */ | ||
1136 | if (pmu_flags & PERF_EF_RELOAD) | ||
1137 | WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE)); | ||
1138 | |||
1139 | hwc->state = 0; | ||
1140 | |||
1141 | if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) { | ||
1142 | dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); | ||
1143 | return; | ||
1144 | } | ||
1145 | |||
1146 | raw_spin_lock_irqsave(&hw_events->pmu_lock, flags); | ||
1147 | |||
1148 | /* Configure the counter unless you are counting a fixed event */ | ||
1149 | if (!pmu_fixed_hw_idx(cci_pmu, idx)) | ||
1150 | pmu_set_event(cci_pmu, idx, hwc->config_base); | ||
1151 | |||
1152 | pmu_event_set_period(event); | ||
1153 | pmu_enable_counter(cci_pmu, idx); | ||
1154 | |||
1155 | raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); | ||
1156 | } | ||
1157 | |||
1158 | static void cci_pmu_stop(struct perf_event *event, int pmu_flags) | ||
1159 | { | ||
1160 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1161 | struct hw_perf_event *hwc = &event->hw; | ||
1162 | int idx = hwc->idx; | ||
1163 | |||
1164 | if (hwc->state & PERF_HES_STOPPED) | ||
1165 | return; | ||
1166 | |||
1167 | if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) { | ||
1168 | dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); | ||
1169 | return; | ||
1170 | } | ||
1171 | |||
1172 | /* | ||
1173 | * We always reprogram the counter, so ignore PERF_EF_UPDATE. See | ||
1174 | * cci_pmu_start() | ||
1175 | */ | ||
1176 | pmu_disable_counter(cci_pmu, idx); | ||
1177 | pmu_event_update(event); | ||
1178 | hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; | ||
1179 | } | ||
1180 | |||
1181 | static int cci_pmu_add(struct perf_event *event, int flags) | ||
1182 | { | ||
1183 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1184 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1185 | struct hw_perf_event *hwc = &event->hw; | ||
1186 | int idx; | ||
1187 | int err = 0; | ||
1188 | |||
1189 | perf_pmu_disable(event->pmu); | ||
1190 | |||
1191 | /* If we don't have space for the counter then finish early. */ | ||
1192 | idx = pmu_get_event_idx(hw_events, event); | ||
1193 | if (idx < 0) { | ||
1194 | err = idx; | ||
1195 | goto out; | ||
1196 | } | ||
1197 | |||
1198 | event->hw.idx = idx; | ||
1199 | hw_events->events[idx] = event; | ||
1200 | |||
1201 | hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; | ||
1202 | if (flags & PERF_EF_START) | ||
1203 | cci_pmu_start(event, PERF_EF_RELOAD); | ||
1204 | |||
1205 | /* Propagate our changes to the userspace mapping. */ | ||
1206 | perf_event_update_userpage(event); | ||
1207 | |||
1208 | out: | ||
1209 | perf_pmu_enable(event->pmu); | ||
1210 | return err; | ||
1211 | } | ||
1212 | |||
1213 | static void cci_pmu_del(struct perf_event *event, int flags) | ||
1214 | { | ||
1215 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1216 | struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; | ||
1217 | struct hw_perf_event *hwc = &event->hw; | ||
1218 | int idx = hwc->idx; | ||
1219 | |||
1220 | cci_pmu_stop(event, PERF_EF_UPDATE); | ||
1221 | hw_events->events[idx] = NULL; | ||
1222 | clear_bit(idx, hw_events->used_mask); | ||
1223 | |||
1224 | perf_event_update_userpage(event); | ||
1225 | } | ||
1226 | |||
1227 | static int validate_event(struct pmu *cci_pmu, | ||
1228 | struct cci_pmu_hw_events *hw_events, | ||
1229 | struct perf_event *event) | ||
1230 | { | ||
1231 | if (is_software_event(event)) | ||
1232 | return 1; | ||
1233 | |||
1234 | /* | ||
1235 | * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The | ||
1236 | * core perf code won't check that the pmu->ctx == leader->ctx | ||
1237 | * until after pmu->event_init(event). | ||
1238 | */ | ||
1239 | if (event->pmu != cci_pmu) | ||
1240 | return 0; | ||
1241 | |||
1242 | if (event->state < PERF_EVENT_STATE_OFF) | ||
1243 | return 1; | ||
1244 | |||
1245 | if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec) | ||
1246 | return 1; | ||
1247 | |||
1248 | return pmu_get_event_idx(hw_events, event) >= 0; | ||
1249 | } | ||
1250 | |||
1251 | static int validate_group(struct perf_event *event) | ||
1252 | { | ||
1253 | struct perf_event *sibling, *leader = event->group_leader; | ||
1254 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1255 | unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)]; | ||
1256 | struct cci_pmu_hw_events fake_pmu = { | ||
1257 | /* | ||
1258 | * Initialise the fake PMU. We only need to populate the | ||
1259 | * used_mask for the purposes of validation. | ||
1260 | */ | ||
1261 | .used_mask = mask, | ||
1262 | }; | ||
1263 | memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long)); | ||
1264 | |||
1265 | if (!validate_event(event->pmu, &fake_pmu, leader)) | ||
1266 | return -EINVAL; | ||
1267 | |||
1268 | for_each_sibling_event(sibling, leader) { | ||
1269 | if (!validate_event(event->pmu, &fake_pmu, sibling)) | ||
1270 | return -EINVAL; | ||
1271 | } | ||
1272 | |||
1273 | if (!validate_event(event->pmu, &fake_pmu, event)) | ||
1274 | return -EINVAL; | ||
1275 | |||
1276 | return 0; | ||
1277 | } | ||
1278 | |||
1279 | static int __hw_perf_event_init(struct perf_event *event) | ||
1280 | { | ||
1281 | struct hw_perf_event *hwc = &event->hw; | ||
1282 | int mapping; | ||
1283 | |||
1284 | mapping = pmu_map_event(event); | ||
1285 | |||
1286 | if (mapping < 0) { | ||
1287 | pr_debug("event %x:%llx not supported\n", event->attr.type, | ||
1288 | event->attr.config); | ||
1289 | return mapping; | ||
1290 | } | ||
1291 | |||
1292 | /* | ||
1293 | * We don't assign an index until we actually place the event onto | ||
1294 | * hardware. Use -1 to signify that we haven't decided where to put it | ||
1295 | * yet. | ||
1296 | */ | ||
1297 | hwc->idx = -1; | ||
1298 | hwc->config_base = 0; | ||
1299 | hwc->config = 0; | ||
1300 | hwc->event_base = 0; | ||
1301 | |||
1302 | /* | ||
1303 | * Store the event encoding into the config_base field. | ||
1304 | */ | ||
1305 | hwc->config_base |= (unsigned long)mapping; | ||
1306 | |||
1307 | /* | ||
1308 | * Limit the sample_period to half of the counter width. That way, the | ||
1309 | * new counter value is far less likely to overtake the previous one | ||
1310 | * unless you have some serious IRQ latency issues. | ||
1311 | */ | ||
1312 | hwc->sample_period = CCI_PMU_CNTR_MASK >> 1; | ||
1313 | hwc->last_period = hwc->sample_period; | ||
1314 | local64_set(&hwc->period_left, hwc->sample_period); | ||
1315 | |||
1316 | if (event->group_leader != event) { | ||
1317 | if (validate_group(event) != 0) | ||
1318 | return -EINVAL; | ||
1319 | } | ||
1320 | |||
1321 | return 0; | ||
1322 | } | ||
1323 | |||
1324 | static int cci_pmu_event_init(struct perf_event *event) | ||
1325 | { | ||
1326 | struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); | ||
1327 | atomic_t *active_events = &cci_pmu->active_events; | ||
1328 | int err = 0; | ||
1329 | |||
1330 | if (event->attr.type != event->pmu->type) | ||
1331 | return -ENOENT; | ||
1332 | |||
1333 | /* Shared by all CPUs, no meaningful state to sample */ | ||
1334 | if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK) | ||
1335 | return -EOPNOTSUPP; | ||
1336 | |||
1337 | /* We have no filtering of any kind */ | ||
1338 | if (event->attr.exclude_user || | ||
1339 | event->attr.exclude_kernel || | ||
1340 | event->attr.exclude_hv || | ||
1341 | event->attr.exclude_idle || | ||
1342 | event->attr.exclude_host || | ||
1343 | event->attr.exclude_guest) | ||
1344 | return -EINVAL; | ||
1345 | |||
1346 | /* | ||
1347 | * Following the example set by other "uncore" PMUs, we accept any CPU | ||
1348 | * and rewrite its affinity dynamically rather than having perf core | ||
1349 | * handle cpu == -1 and pid == -1 for this case. | ||
1350 | * | ||
1351 | * The perf core will pin online CPUs for the duration of this call and | ||
1352 | * the event being installed into its context, so the PMU's CPU can't | ||
1353 | * change under our feet. | ||
1354 | */ | ||
1355 | if (event->cpu < 0) | ||
1356 | return -EINVAL; | ||
1357 | event->cpu = cci_pmu->cpu; | ||
1358 | |||
1359 | event->destroy = hw_perf_event_destroy; | ||
1360 | if (!atomic_inc_not_zero(active_events)) { | ||
1361 | mutex_lock(&cci_pmu->reserve_mutex); | ||
1362 | if (atomic_read(active_events) == 0) | ||
1363 | err = cci_pmu_get_hw(cci_pmu); | ||
1364 | if (!err) | ||
1365 | atomic_inc(active_events); | ||
1366 | mutex_unlock(&cci_pmu->reserve_mutex); | ||
1367 | } | ||
1368 | if (err) | ||
1369 | return err; | ||
1370 | |||
1371 | err = __hw_perf_event_init(event); | ||
1372 | if (err) | ||
1373 | hw_perf_event_destroy(event); | ||
1374 | |||
1375 | return err; | ||
1376 | } | ||
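As a usage note, a hedged userspace sketch of counting one CCI event through perf_event_open(); the sysfs path assumes a CCI-500 instance, and the config packing follows the format attributes above (event in config[4:0], source in config[8:5]).

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int type, fd;
	/* assumed path; the trailing name matches the registered PMU */
	FILE *f = fopen("/sys/bus/event_source/devices/CCI_500/type", "r");

	if (!f || fscanf(f, "%d", &type) != 1)
		return 1;
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.type = type;
	attr.size = sizeof(attr);
	attr.config = (0x0 << 5) | 0x1;	/* source=S0, event=si_rrq_dev */

	/* uncore PMU: pid == -1 and a concrete CPU; the driver rewrites event->cpu */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0)
		return 1;

	sleep(1);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("si_rrq_dev on S0: %lld\n", count);
	close(fd);
	return 0;
}

The perf tool composes the same config value from the sysfs format strings, e.g. perf stat -e CCI_500/source=0x0,event=0x1/.
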
1377 | |||
1378 | static ssize_t pmu_cpumask_attr_show(struct device *dev, | ||
1379 | struct device_attribute *attr, char *buf) | ||
1380 | { | ||
1381 | struct pmu *pmu = dev_get_drvdata(dev); | ||
1382 | struct cci_pmu *cci_pmu = to_cci_pmu(pmu); | ||
1383 | |||
1384 | return cpumap_print_to_pagebuf(true, buf, cpumask_of(cci_pmu->cpu)); | ||
1385 | } | ||
1386 | |||
1387 | static struct device_attribute pmu_cpumask_attr = | ||
1388 | __ATTR(cpumask, S_IRUGO, pmu_cpumask_attr_show, NULL); | ||
1389 | |||
1390 | static struct attribute *pmu_attrs[] = { | ||
1391 | &pmu_cpumask_attr.attr, | ||
1392 | NULL, | ||
1393 | }; | ||
1394 | |||
1395 | static struct attribute_group pmu_attr_group = { | ||
1396 | .attrs = pmu_attrs, | ||
1397 | }; | ||
1398 | |||
1399 | static struct attribute_group pmu_format_attr_group = { | ||
1400 | .name = "format", | ||
1401 | .attrs = NULL, /* Filled in cci_pmu_init_attrs */ | ||
1402 | }; | ||
1403 | |||
1404 | static struct attribute_group pmu_event_attr_group = { | ||
1405 | .name = "events", | ||
1406 | .attrs = NULL, /* Filled in cci_pmu_init_attrs */ | ||
1407 | }; | ||
1408 | |||
1409 | static const struct attribute_group *pmu_attr_groups[] = { | ||
1410 | &pmu_attr_group, | ||
1411 | &pmu_format_attr_group, | ||
1412 | &pmu_event_attr_group, | ||
1413 | NULL | ||
1414 | }; | ||
1415 | |||
1416 | static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev) | ||
1417 | { | ||
1418 | const struct cci_pmu_model *model = cci_pmu->model; | ||
1419 | char *name = model->name; | ||
1420 | u32 num_cntrs; | ||
1421 | |||
1422 | pmu_event_attr_group.attrs = model->event_attrs; | ||
1423 | pmu_format_attr_group.attrs = model->format_attrs; | ||
1424 | |||
1425 | cci_pmu->pmu = (struct pmu) { | ||
1426 | .name = cci_pmu->model->name, | ||
1427 | .task_ctx_nr = perf_invalid_context, | ||
1428 | .pmu_enable = cci_pmu_enable, | ||
1429 | .pmu_disable = cci_pmu_disable, | ||
1430 | .event_init = cci_pmu_event_init, | ||
1431 | .add = cci_pmu_add, | ||
1432 | .del = cci_pmu_del, | ||
1433 | .start = cci_pmu_start, | ||
1434 | .stop = cci_pmu_stop, | ||
1435 | .read = pmu_read, | ||
1436 | .attr_groups = pmu_attr_groups, | ||
1437 | }; | ||
1438 | |||
1439 | cci_pmu->plat_device = pdev; | ||
1440 | num_cntrs = pmu_get_max_counters(cci_pmu); | ||
1441 | if (num_cntrs > cci_pmu->model->num_hw_cntrs) { | ||
1442 | dev_warn(&pdev->dev, | ||
1443 | "PMU implements more counters(%d) than supported by" | ||
1444 | " the model(%d), truncated.", | ||
1445 | num_cntrs, cci_pmu->model->num_hw_cntrs); | ||
1446 | num_cntrs = cci_pmu->model->num_hw_cntrs; | ||
1447 | } | ||
1448 | cci_pmu->num_cntrs = num_cntrs + cci_pmu->model->fixed_hw_cntrs; | ||
1449 | |||
1450 | return perf_pmu_register(&cci_pmu->pmu, name, -1); | ||
1451 | } | ||
1452 | |||
1453 | static int cci_pmu_offline_cpu(unsigned int cpu) | ||
1454 | { | ||
1455 | int target; | ||
1456 | |||
1457 | if (!g_cci_pmu || cpu != g_cci_pmu->cpu) | ||
1458 | return 0; | ||
1459 | |||
1460 | target = cpumask_any_but(cpu_online_mask, cpu); | ||
1461 | if (target >= nr_cpu_ids) | ||
1462 | return 0; | ||
1463 | |||
1464 | perf_pmu_migrate_context(&g_cci_pmu->pmu, cpu, target); | ||
1465 | g_cci_pmu->cpu = target; | ||
1466 | return 0; | ||
1467 | } | ||
1468 | |||
1469 | static struct cci_pmu_model cci_pmu_models[] = { | ||
1470 | #ifdef CONFIG_ARM_CCI400_PMU | ||
1471 | [CCI400_R0] = { | ||
1472 | .name = "CCI_400", | ||
1473 | .fixed_hw_cntrs = 1, /* Cycle counter */ | ||
1474 | .num_hw_cntrs = 4, | ||
1475 | .cntr_size = SZ_4K, | ||
1476 | .format_attrs = cci400_pmu_format_attrs, | ||
1477 | .event_attrs = cci400_r0_pmu_event_attrs, | ||
1478 | .event_ranges = { | ||
1479 | [CCI_IF_SLAVE] = { | ||
1480 | CCI400_R0_SLAVE_PORT_MIN_EV, | ||
1481 | CCI400_R0_SLAVE_PORT_MAX_EV, | ||
1482 | }, | ||
1483 | [CCI_IF_MASTER] = { | ||
1484 | CCI400_R0_MASTER_PORT_MIN_EV, | ||
1485 | CCI400_R0_MASTER_PORT_MAX_EV, | ||
1486 | }, | ||
1487 | }, | ||
1488 | .validate_hw_event = cci400_validate_hw_event, | ||
1489 | .get_event_idx = cci400_get_event_idx, | ||
1490 | }, | ||
1491 | [CCI400_R1] = { | ||
1492 | .name = "CCI_400_r1", | ||
1493 | .fixed_hw_cntrs = 1, /* Cycle counter */ | ||
1494 | .num_hw_cntrs = 4, | ||
1495 | .cntr_size = SZ_4K, | ||
1496 | .format_attrs = cci400_pmu_format_attrs, | ||
1497 | .event_attrs = cci400_r1_pmu_event_attrs, | ||
1498 | .event_ranges = { | ||
1499 | [CCI_IF_SLAVE] = { | ||
1500 | CCI400_R1_SLAVE_PORT_MIN_EV, | ||
1501 | CCI400_R1_SLAVE_PORT_MAX_EV, | ||
1502 | }, | ||
1503 | [CCI_IF_MASTER] = { | ||
1504 | CCI400_R1_MASTER_PORT_MIN_EV, | ||
1505 | CCI400_R1_MASTER_PORT_MAX_EV, | ||
1506 | }, | ||
1507 | }, | ||
1508 | .validate_hw_event = cci400_validate_hw_event, | ||
1509 | .get_event_idx = cci400_get_event_idx, | ||
1510 | }, | ||
1511 | #endif | ||
1512 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
1513 | [CCI500_R0] = { | ||
1514 | .name = "CCI_500", | ||
1515 | .fixed_hw_cntrs = 0, | ||
1516 | .num_hw_cntrs = 8, | ||
1517 | .cntr_size = SZ_64K, | ||
1518 | .format_attrs = cci5xx_pmu_format_attrs, | ||
1519 | .event_attrs = cci5xx_pmu_event_attrs, | ||
1520 | .event_ranges = { | ||
1521 | [CCI_IF_SLAVE] = { | ||
1522 | CCI5xx_SLAVE_PORT_MIN_EV, | ||
1523 | CCI5xx_SLAVE_PORT_MAX_EV, | ||
1524 | }, | ||
1525 | [CCI_IF_MASTER] = { | ||
1526 | CCI5xx_MASTER_PORT_MIN_EV, | ||
1527 | CCI5xx_MASTER_PORT_MAX_EV, | ||
1528 | }, | ||
1529 | [CCI_IF_GLOBAL] = { | ||
1530 | CCI5xx_GLOBAL_PORT_MIN_EV, | ||
1531 | CCI5xx_GLOBAL_PORT_MAX_EV, | ||
1532 | }, | ||
1533 | }, | ||
1534 | .validate_hw_event = cci500_validate_hw_event, | ||
1535 | .write_counters = cci5xx_pmu_write_counters, | ||
1536 | }, | ||
1537 | [CCI550_R0] = { | ||
1538 | .name = "CCI_550", | ||
1539 | .fixed_hw_cntrs = 0, | ||
1540 | .num_hw_cntrs = 8, | ||
1541 | .cntr_size = SZ_64K, | ||
1542 | .format_attrs = cci5xx_pmu_format_attrs, | ||
1543 | .event_attrs = cci5xx_pmu_event_attrs, | ||
1544 | .event_ranges = { | ||
1545 | [CCI_IF_SLAVE] = { | ||
1546 | CCI5xx_SLAVE_PORT_MIN_EV, | ||
1547 | CCI5xx_SLAVE_PORT_MAX_EV, | ||
1548 | }, | ||
1549 | [CCI_IF_MASTER] = { | ||
1550 | CCI5xx_MASTER_PORT_MIN_EV, | ||
1551 | CCI5xx_MASTER_PORT_MAX_EV, | ||
1552 | }, | ||
1553 | [CCI_IF_GLOBAL] = { | ||
1554 | CCI5xx_GLOBAL_PORT_MIN_EV, | ||
1555 | CCI5xx_GLOBAL_PORT_MAX_EV, | ||
1556 | }, | ||
1557 | }, | ||
1558 | .validate_hw_event = cci550_validate_hw_event, | ||
1559 | .write_counters = cci5xx_pmu_write_counters, | ||
1560 | }, | ||
1561 | #endif | ||
1562 | }; | ||
1563 | |||
1564 | static const struct of_device_id arm_cci_pmu_matches[] = { | ||
1565 | #ifdef CONFIG_ARM_CCI400_PMU | ||
1566 | { | ||
1567 | .compatible = "arm,cci-400-pmu", | ||
1568 | .data = NULL, | ||
1569 | }, | ||
1570 | { | ||
1571 | .compatible = "arm,cci-400-pmu,r0", | ||
1572 | .data = &cci_pmu_models[CCI400_R0], | ||
1573 | }, | ||
1574 | { | ||
1575 | .compatible = "arm,cci-400-pmu,r1", | ||
1576 | .data = &cci_pmu_models[CCI400_R1], | ||
1577 | }, | ||
1578 | #endif | ||
1579 | #ifdef CONFIG_ARM_CCI5xx_PMU | ||
1580 | { | ||
1581 | .compatible = "arm,cci-500-pmu,r0", | ||
1582 | .data = &cci_pmu_models[CCI500_R0], | ||
1583 | }, | ||
1584 | { | ||
1585 | .compatible = "arm,cci-550-pmu,r0", | ||
1586 | .data = &cci_pmu_models[CCI550_R0], | ||
1587 | }, | ||
1588 | #endif | ||
1589 | {}, | ||
1590 | }; | ||
1591 | |||
1592 | static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs) | ||
1593 | { | ||
1594 | int i; | ||
1595 | |||
1596 | for (i = 0; i < nr_irqs; i++) | ||
1597 | if (irq == irqs[i]) | ||
1598 | return true; | ||
1599 | |||
1600 | return false; | ||
1601 | } | ||
1602 | |||
1603 | static struct cci_pmu *cci_pmu_alloc(struct device *dev) | ||
1604 | { | ||
1605 | struct cci_pmu *cci_pmu; | ||
1606 | const struct cci_pmu_model *model; | ||
1607 | |||
1608 | /* | ||
1609 | * All allocations are devm_*, hence we don't have to free them | ||
1610 | * explicitly on error; they are released automatically on driver | ||
1611 | * detach. | ||
1612 | */ | ||
1613 | cci_pmu = devm_kzalloc(dev, sizeof(*cci_pmu), GFP_KERNEL); | ||
1614 | if (!cci_pmu) | ||
1615 | return ERR_PTR(-ENOMEM); | ||
1616 | |||
1617 | cci_pmu->ctrl_base = *(void __iomem **)dev->platform_data; | ||
1618 | |||
1619 | model = of_device_get_match_data(dev); | ||
1620 | if (!model) { | ||
1621 | dev_warn(dev, | ||
1622 | "DEPRECATED compatible property, requires secure access to CCI registers"); | ||
1623 | model = probe_cci_model(cci_pmu); | ||
1624 | } | ||
1625 | if (!model) { | ||
1626 | dev_warn(dev, "CCI PMU version not supported\n"); | ||
1627 | return ERR_PTR(-ENODEV); | ||
1628 | } | ||
1629 | |||
1630 | cci_pmu->model = model; | ||
1631 | cci_pmu->irqs = devm_kcalloc(dev, CCI_PMU_MAX_HW_CNTRS(model), | ||
1632 | sizeof(*cci_pmu->irqs), GFP_KERNEL); | ||
1633 | if (!cci_pmu->irqs) | ||
1634 | return ERR_PTR(-ENOMEM); | ||
1635 | cci_pmu->hw_events.events = devm_kcalloc(dev, | ||
1636 | CCI_PMU_MAX_HW_CNTRS(model), | ||
1637 | sizeof(*cci_pmu->hw_events.events), | ||
1638 | GFP_KERNEL); | ||
1639 | if (!cci_pmu->hw_events.events) | ||
1640 | return ERR_PTR(-ENOMEM); | ||
1641 | cci_pmu->hw_events.used_mask = devm_kcalloc(dev, | ||
1642 | BITS_TO_LONGS(CCI_PMU_MAX_HW_CNTRS(model)), | ||
1643 | sizeof(*cci_pmu->hw_events.used_mask), | ||
1644 | GFP_KERNEL); | ||
1645 | if (!cci_pmu->hw_events.used_mask) | ||
1646 | return ERR_PTR(-ENOMEM); | ||
1647 | |||
1648 | return cci_pmu; | ||
1649 | } | ||
1650 | |||
1651 | static int cci_pmu_probe(struct platform_device *pdev) | ||
1652 | { | ||
1653 | struct resource *res; | ||
1654 | struct cci_pmu *cci_pmu; | ||
1655 | int i, ret, irq; | ||
1656 | |||
1657 | cci_pmu = cci_pmu_alloc(&pdev->dev); | ||
1658 | if (IS_ERR(cci_pmu)) | ||
1659 | return PTR_ERR(cci_pmu); | ||
1660 | |||
1661 | res = platform_get_resource(pdev, IORESOURCE_MEM, 0); | ||
1662 | cci_pmu->base = devm_ioremap_resource(&pdev->dev, res); | ||
1663 | if (IS_ERR(cci_pmu->base)) | ||
1664 | return PTR_ERR(cci_pmu->base); | ||
1665 | |||
1666 | /* | ||
1667 | * The CCI PMU has one overflow interrupt per counter, but some may be | ||
1668 | * tied together into a common interrupt. | ||
1669 | */ | ||
1670 | cci_pmu->nr_irqs = 0; | ||
1671 | for (i = 0; i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model); i++) { | ||
1672 | irq = platform_get_irq(pdev, i); | ||
1673 | if (irq < 0) | ||
1674 | break; | ||
1675 | |||
1676 | if (is_duplicate_irq(irq, cci_pmu->irqs, cci_pmu->nr_irqs)) | ||
1677 | continue; | ||
1678 | |||
1679 | cci_pmu->irqs[cci_pmu->nr_irqs++] = irq; | ||
1680 | } | ||
1681 | |||
1682 | /* | ||
1683 | * Ensure that the device tree has as many interrupts as the number | ||
1684 | * of counters. | ||
1685 | */ | ||
1686 | if (i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)) { | ||
1687 | dev_warn(&pdev->dev, "In-correct number of interrupts: %d, should be %d\n", | ||
1688 | i, CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)); | ||
1689 | return -EINVAL; | ||
1690 | } | ||
1691 | |||
1692 | raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock); | ||
1693 | mutex_init(&cci_pmu->reserve_mutex); | ||
1694 | atomic_set(&cci_pmu->active_events, 0); | ||
1695 | cci_pmu->cpu = get_cpu(); | ||
1696 | |||
1697 | ret = cci_pmu_init(cci_pmu, pdev); | ||
1698 | if (ret) { | ||
1699 | put_cpu(); | ||
1700 | return ret; | ||
1701 | } | ||
1702 | |||
1703 | cpuhp_setup_state_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE, | ||
1704 | "perf/arm/cci:online", NULL, | ||
1705 | cci_pmu_offline_cpu); | ||
1706 | put_cpu(); | ||
1707 | g_cci_pmu = cci_pmu; | ||
1708 | pr_info("ARM %s PMU driver probed", cci_pmu->model->name); | ||
1709 | return 0; | ||
1710 | } | ||
1711 | |||
1712 | static struct platform_driver cci_pmu_driver = { | ||
1713 | .driver = { | ||
1714 | .name = DRIVER_NAME, | ||
1715 | .of_match_table = arm_cci_pmu_matches, | ||
1716 | }, | ||
1717 | .probe = cci_pmu_probe, | ||
1718 | }; | ||
1719 | |||
1720 | builtin_platform_driver(cci_pmu_driver); | ||
1721 | MODULE_LICENSE("GPL"); | ||
1722 | MODULE_DESCRIPTION("ARM CCI PMU support"); | ||
diff --git a/drivers/bus/arm-ccn.c b/drivers/perf/arm-ccn.c index 65b7e4042ece..65b7e4042ece 100644 --- a/drivers/bus/arm-ccn.c +++ b/drivers/perf/arm-ccn.c | |||
diff --git a/drivers/reset/Kconfig b/drivers/reset/Kconfig index 7fc77696bb1e..c0b292be1b72 100644 --- a/drivers/reset/Kconfig +++ b/drivers/reset/Kconfig | |||
@@ -49,6 +49,7 @@ config RESET_HSDK | |||
49 | 49 | ||
50 | config RESET_IMX7 | 50 | config RESET_IMX7 |
51 | bool "i.MX7 Reset Driver" if COMPILE_TEST | 51 | bool "i.MX7 Reset Driver" if COMPILE_TEST |
52 | depends on HAS_IOMEM | ||
52 | default SOC_IMX7D | 53 | default SOC_IMX7D |
53 | select MFD_SYSCON | 54 | select MFD_SYSCON |
54 | help | 55 | help |
@@ -83,14 +84,24 @@ config RESET_PISTACHIO | |||
83 | 84 | ||
84 | config RESET_SIMPLE | 85 | config RESET_SIMPLE |
85 | bool "Simple Reset Controller Driver" if COMPILE_TEST | 86 | bool "Simple Reset Controller Driver" if COMPILE_TEST |
86 | default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX | 87 | default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED |
87 | help | 88 | help |
88 | This enables a simple reset controller driver for reset lines that | 89 | This enables a simple reset controller driver for reset lines that
89 | can be asserted and deasserted by toggling bits in a contiguous, | 90 | can be asserted and deasserted by toggling bits in a contiguous,
90 | exclusive register space. | 91 | exclusive register space. |
91 | 92 | ||
92 | Currently this driver supports Altera SoCFPGAs, the RCC reset | 93 | Currently this driver supports: |
93 | controller in STM32 MCUs, Allwinner SoCs, and ZTE's zx2967 family. | 94 | - Altera SoCFPGAs |
95 | - ASPEED BMC SoCs | ||
96 | - RCC reset controller in STM32 MCUs | ||
97 | - Allwinner SoCs | ||
98 | - ZTE's zx2967 family | ||
99 | |||
100 | config RESET_STM32MP157 | ||
101 | bool "STM32MP157 Reset Driver" if COMPILE_TEST | ||
102 | default MACH_STM32MP157 | ||
103 | help | ||
104 | This enables the RCC reset controller driver for STM32 MPUs. | ||
94 | 105 | ||
95 | config RESET_SUNXI | 106 | config RESET_SUNXI |
96 | bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI | 107 | bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI |
diff --git a/drivers/reset/Makefile b/drivers/reset/Makefile index 132c24f5ddb5..c1261dcfe9ad 100644 --- a/drivers/reset/Makefile +++ b/drivers/reset/Makefile | |||
@@ -15,6 +15,7 @@ obj-$(CONFIG_RESET_MESON) += reset-meson.o | |||
15 | obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o | 15 | obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o |
16 | obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o | 16 | obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o |
17 | obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o | 17 | obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o |
18 | obj-$(CONFIG_RESET_STM32MP157) += reset-stm32mp1.o | ||
18 | obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o | 19 | obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o |
19 | obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o | 20 | obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o |
20 | obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o | 21 | obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o |
diff --git a/drivers/reset/core.c b/drivers/reset/core.c index da4292e9de97..6488292e129c 100644 --- a/drivers/reset/core.c +++ b/drivers/reset/core.c | |||
@@ -23,6 +23,9 @@ | |||
23 | static DEFINE_MUTEX(reset_list_mutex); | 23 | static DEFINE_MUTEX(reset_list_mutex); |
24 | static LIST_HEAD(reset_controller_list); | 24 | static LIST_HEAD(reset_controller_list); |
25 | 25 | ||
26 | static DEFINE_MUTEX(reset_lookup_mutex); | ||
27 | static LIST_HEAD(reset_lookup_list); | ||
28 | |||
26 | /** | 29 | /** |
27 | * struct reset_control - a reset control | 30 | * struct reset_control - a reset control |
28 | * @rcdev: a pointer to the reset controller device | 31 | * @rcdev: a pointer to the reset controller device |
@@ -148,6 +151,33 @@ int devm_reset_controller_register(struct device *dev, | |||
148 | } | 151 | } |
149 | EXPORT_SYMBOL_GPL(devm_reset_controller_register); | 152 | EXPORT_SYMBOL_GPL(devm_reset_controller_register); |
150 | 153 | ||
154 | /** | ||
155 | * reset_controller_add_lookup - register a set of lookup entries | ||
156 | * @lookup: array of reset lookup entries | ||
157 | * @num_entries: number of entries in the lookup array | ||
158 | */ | ||
159 | void reset_controller_add_lookup(struct reset_control_lookup *lookup, | ||
160 | unsigned int num_entries) | ||
161 | { | ||
162 | struct reset_control_lookup *entry; | ||
163 | unsigned int i; | ||
164 | |||
165 | mutex_lock(&reset_lookup_mutex); | ||
166 | for (i = 0; i < num_entries; i++) { | ||
167 | entry = &lookup[i]; | ||
168 | |||
169 | if (!entry->dev_id || !entry->provider) { | ||
170 | pr_warn("%s(): reset lookup entry badly specified, skipping\n", | ||
171 | __func__); | ||
172 | continue; | ||
173 | } | ||
174 | |||
175 | list_add_tail(&entry->list, &reset_lookup_list); | ||
176 | } | ||
177 | mutex_unlock(&reset_lookup_mutex); | ||
178 | } | ||
179 | EXPORT_SYMBOL_GPL(reset_controller_add_lookup); | ||
180 | |||
151 | static inline struct reset_control_array * | 181 | static inline struct reset_control_array * |
152 | rstc_to_array(struct reset_control *rstc) { | 182 | rstc_to_array(struct reset_control *rstc) { |
153 | return container_of(rstc, struct reset_control_array, base); | 183 | return container_of(rstc, struct reset_control_array, base); |
@@ -493,6 +523,70 @@ struct reset_control *__of_reset_control_get(struct device_node *node, | |||
493 | } | 523 | } |
494 | EXPORT_SYMBOL_GPL(__of_reset_control_get); | 524 | EXPORT_SYMBOL_GPL(__of_reset_control_get); |
495 | 525 | ||
526 | static struct reset_controller_dev * | ||
527 | __reset_controller_by_name(const char *name) | ||
528 | { | ||
529 | struct reset_controller_dev *rcdev; | ||
530 | |||
531 | lockdep_assert_held(&reset_list_mutex); | ||
532 | |||
533 | list_for_each_entry(rcdev, &reset_controller_list, list) { | ||
534 | if (!rcdev->dev) | ||
535 | continue; | ||
536 | |||
537 | if (!strcmp(name, dev_name(rcdev->dev))) | ||
538 | return rcdev; | ||
539 | } | ||
540 | |||
541 | return NULL; | ||
542 | } | ||
543 | |||
544 | static struct reset_control * | ||
545 | __reset_control_get_from_lookup(struct device *dev, const char *con_id, | ||
546 | bool shared, bool optional) | ||
547 | { | ||
548 | const struct reset_control_lookup *lookup; | ||
549 | struct reset_controller_dev *rcdev; | ||
550 | const char *dev_id; |||
551 | struct reset_control *rstc = NULL; | ||
552 | |||
553 | if (!dev) | ||
554 | return ERR_PTR(-EINVAL); | ||
555 | dev_id = dev_name(dev); |||
556 | mutex_lock(&reset_lookup_mutex); | ||
557 | |||
558 | list_for_each_entry(lookup, &reset_lookup_list, list) { | ||
559 | if (strcmp(lookup->dev_id, dev_id)) | ||
560 | continue; | ||
561 | |||
562 | if ((!con_id && !lookup->con_id) || | ||
563 | ((con_id && lookup->con_id) && | ||
564 | !strcmp(con_id, lookup->con_id))) { | ||
565 | mutex_lock(&reset_list_mutex); | ||
566 | rcdev = __reset_controller_by_name(lookup->provider); | ||
567 | if (!rcdev) { | ||
568 | mutex_unlock(&reset_list_mutex); | ||
569 | mutex_unlock(&reset_lookup_mutex); | ||
570 | /* Reset provider may not be ready yet. */ | ||
571 | return ERR_PTR(-EPROBE_DEFER); | ||
572 | } | ||
573 | |||
574 | rstc = __reset_control_get_internal(rcdev, | ||
575 | lookup->index, | ||
576 | shared); | ||
577 | mutex_unlock(&reset_list_mutex); | ||
578 | break; | ||
579 | } | ||
580 | } | ||
581 | |||
582 | mutex_unlock(&reset_lookup_mutex); | ||
583 | |||
584 | if (!rstc) | ||
585 | return optional ? NULL : ERR_PTR(-ENOENT); | ||
586 | |||
587 | return rstc; | ||
588 | } | ||
589 | |||
496 | struct reset_control *__reset_control_get(struct device *dev, const char *id, | 590 | struct reset_control *__reset_control_get(struct device *dev, const char *id, |
497 | int index, bool shared, bool optional) | 591 | int index, bool shared, bool optional) |
498 | { | 592 | { |
@@ -500,7 +594,7 @@ struct reset_control *__reset_control_get(struct device *dev, const char *id, | |||
500 | return __of_reset_control_get(dev->of_node, id, index, shared, | 594 | return __of_reset_control_get(dev->of_node, id, index, shared, |
501 | optional); | 595 | optional); |
502 | 596 | ||
503 | return optional ? NULL : ERR_PTR(-EINVAL); | 597 | return __reset_control_get_from_lookup(dev, id, shared, optional); |
504 | } | 598 | } |
505 | EXPORT_SYMBOL_GPL(__reset_control_get); | 599 | EXPORT_SYMBOL_GPL(__reset_control_get); |
506 | 600 | ||
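Taken together, reset_controller_add_lookup() and __reset_control_get_from_lookup() let board files on non-DT platforms describe reset wiring statically while consumers keep calling the ordinary reset_control_get() variants. A minimal sketch of what such a board file could look like (the provider and consumer device names below are hypothetical, chosen only to illustrate the dev_name() matching):

	#include <linux/init.h>
	#include <linux/kernel.h>
	#include <linux/reset-controller.h>

	/* Hypothetical wiring: route line 3 of the controller registered as
	 * "10000000.reset" to the consumer device "foo-dma.0". The embedded
	 * list head is filled in by reset_controller_add_lookup() itself. */
	static struct reset_control_lookup board_reset_lookup[] = {
		{
			.provider = "10000000.reset",	/* dev_name() of the provider */
			.index    = 3,			/* line index on that controller */
			.dev_id   = "foo-dma.0",	/* dev_name() of the consumer */
			.con_id   = NULL,		/* matched when the consumer passes no ID */
		},
	};

	static int __init board_init(void)
	{
		reset_controller_add_lookup(board_reset_lookup,
					    ARRAY_SIZE(board_reset_lookup));
		return 0;
	}

Because __reset_controller_by_name() walks the live controller list, a lookup naming a provider that has not probed yet resolves to -EPROBE_DEFER rather than a hard failure, so ordering between provider and consumer is handled by the driver core's usual deferral mechanism.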
diff --git a/drivers/reset/reset-meson.c b/drivers/reset/reset-meson.c index 93cbee1ae8ef..5242e0679df7 100644 --- a/drivers/reset/reset-meson.c +++ b/drivers/reset/reset-meson.c | |||
@@ -124,29 +124,21 @@ static int meson_reset_deassert(struct reset_controller_dev *rcdev, | |||
124 | return meson_reset_level(rcdev, id, false); | 124 | return meson_reset_level(rcdev, id, false); |
125 | } | 125 | } |
126 | 126 | ||
127 | static const struct reset_control_ops meson_reset_meson8_ops = { | 127 | static const struct reset_control_ops meson_reset_ops = { |
128 | .reset = meson_reset_reset, | ||
129 | }; | ||
130 | |||
131 | static const struct reset_control_ops meson_reset_gx_ops = { | ||
132 | .reset = meson_reset_reset, | 128 | .reset = meson_reset_reset, |
133 | .assert = meson_reset_assert, | 129 | .assert = meson_reset_assert, |
134 | .deassert = meson_reset_deassert, | 130 | .deassert = meson_reset_deassert, |
135 | }; | 131 | }; |
136 | 132 | ||
137 | static const struct of_device_id meson_reset_dt_ids[] = { | 133 | static const struct of_device_id meson_reset_dt_ids[] = { |
138 | { .compatible = "amlogic,meson8b-reset", | 134 | { .compatible = "amlogic,meson8b-reset" }, |
139 | .data = &meson_reset_meson8_ops, }, | 135 | { .compatible = "amlogic,meson-gxbb-reset" }, |
140 | { .compatible = "amlogic,meson-gxbb-reset", | 136 | { .compatible = "amlogic,meson-axg-reset" }, |
141 | .data = &meson_reset_gx_ops, }, | ||
142 | { .compatible = "amlogic,meson-axg-reset", | ||
143 | .data = &meson_reset_gx_ops, }, | ||
144 | { /* sentinel */ }, | 137 | { /* sentinel */ }, |
145 | }; | 138 | }; |
146 | 139 | ||
147 | static int meson_reset_probe(struct platform_device *pdev) | 140 | static int meson_reset_probe(struct platform_device *pdev) |
148 | { | 141 | { |
149 | const struct reset_control_ops *ops; | ||
150 | struct meson_reset *data; | 142 | struct meson_reset *data; |
151 | struct resource *res; | 143 | struct resource *res; |
152 | 144 | ||
@@ -154,10 +146,6 @@ static int meson_reset_probe(struct platform_device *pdev) | |||
154 | if (!data) | 146 | if (!data) |
155 | return -ENOMEM; | 147 | return -ENOMEM; |
156 | 148 | ||
157 | ops = of_device_get_match_data(&pdev->dev); | ||
158 | if (!ops) | ||
159 | return -EINVAL; | ||
160 | |||
161 | res = platform_get_resource(pdev, IORESOURCE_MEM, 0); | 149 | res = platform_get_resource(pdev, IORESOURCE_MEM, 0); |
162 | data->reg_base = devm_ioremap_resource(&pdev->dev, res); | 150 | data->reg_base = devm_ioremap_resource(&pdev->dev, res); |
163 | if (IS_ERR(data->reg_base)) | 151 | if (IS_ERR(data->reg_base)) |
@@ -169,7 +157,7 @@ static int meson_reset_probe(struct platform_device *pdev) | |||
169 | 157 | ||
170 | data->rcdev.owner = THIS_MODULE; | 158 | data->rcdev.owner = THIS_MODULE; |
171 | data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG; | 159 | data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG; |
172 | data->rcdev.ops = ops; | 160 | data->rcdev.ops = &meson_reset_ops; |
173 | data->rcdev.of_node = pdev->dev.of_node; | 161 | data->rcdev.of_node = pdev->dev.of_node; |
174 | 162 | ||
175 | return devm_reset_controller_register(&pdev->dev, &data->rcdev); | 163 | return devm_reset_controller_register(&pdev->dev, &data->rcdev); |
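The net effect on reset-meson.c: a single meson_reset_ops providing .reset, .assert and .deassert now serves all three compatibles, which extends level-type assert/deassert control to Meson8b (previously limited to the pulsed .reset operation) and lets probe drop the of_device_get_match_data() indirection entirely.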
diff --git a/drivers/reset/reset-simple.c b/drivers/reset/reset-simple.c index 2d4f362ef025..f7ce8910a392 100644 --- a/drivers/reset/reset-simple.c +++ b/drivers/reset/reset-simple.c | |||
@@ -125,6 +125,8 @@ static const struct of_device_id reset_simple_dt_ids[] = { | |||
125 | .data = &reset_simple_active_low }, | 125 | .data = &reset_simple_active_low }, |
126 | { .compatible = "zte,zx296718-reset", | 126 | { .compatible = "zte,zx296718-reset", |
127 | .data = &reset_simple_active_low }, | 127 | .data = &reset_simple_active_low }, |
128 | { .compatible = "aspeed,ast2400-lpc-reset" }, | ||
129 | { .compatible = "aspeed,ast2500-lpc-reset" }, | ||
128 | { /* sentinel */ }, | 130 | { /* sentinel */ }, |
129 | }; | 131 | }; |
130 | 132 | ||
diff --git a/drivers/reset/reset-stm32mp1.c b/drivers/reset/reset-stm32mp1.c new file mode 100644 index 000000000000..b221a28041fa --- /dev/null +++ b/drivers/reset/reset-stm32mp1.c | |||
@@ -0,0 +1,115 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (C) STMicroelectronics 2018 - All Rights Reserved | ||
4 | * Author: Gabriel Fernandez <gabriel.fernandez@st.com> for STMicroelectronics. | ||
5 | */ | ||
6 | |||
7 | #include <linux/device.h> | ||
8 | #include <linux/err.h> | ||
9 | #include <linux/io.h> | ||
10 | #include <linux/of.h> | ||
11 | #include <linux/platform_device.h> | ||
12 | #include <linux/reset-controller.h> | ||
13 | |||
14 | #define CLR_OFFSET 0x4 | ||
15 | |||
16 | struct stm32_reset_data { | ||
17 | struct reset_controller_dev rcdev; | ||
18 | void __iomem *membase; | ||
19 | }; | ||
20 | |||
21 | static inline struct stm32_reset_data * | ||
22 | to_stm32_reset_data(struct reset_controller_dev *rcdev) | ||
23 | { | ||
24 | return container_of(rcdev, struct stm32_reset_data, rcdev); | ||
25 | } | ||
26 | |||
27 | static int stm32_reset_update(struct reset_controller_dev *rcdev, | ||
28 | unsigned long id, bool assert) | ||
29 | { | ||
30 | struct stm32_reset_data *data = to_stm32_reset_data(rcdev); | ||
31 | int reg_width = sizeof(u32); | ||
32 | int bank = id / (reg_width * BITS_PER_BYTE); | ||
33 | int offset = id % (reg_width * BITS_PER_BYTE); | ||
34 | void __iomem *addr; | ||
35 | |||
36 | addr = data->membase + (bank * reg_width); | ||
37 | if (!assert) | ||
38 | addr += CLR_OFFSET; | ||
39 | |||
40 | writel(BIT(offset), addr); | ||
41 | |||
42 | return 0; | ||
43 | } | ||
44 | |||
45 | static int stm32_reset_assert(struct reset_controller_dev *rcdev, | ||
46 | unsigned long id) | ||
47 | { | ||
48 | return stm32_reset_update(rcdev, id, true); | ||
49 | } | ||
50 | |||
51 | static int stm32_reset_deassert(struct reset_controller_dev *rcdev, | ||
52 | unsigned long id) | ||
53 | { | ||
54 | return stm32_reset_update(rcdev, id, false); | ||
55 | } | ||
56 | |||
57 | static int stm32_reset_status(struct reset_controller_dev *rcdev, | ||
58 | unsigned long id) | ||
59 | { | ||
60 | struct stm32_reset_data *data = to_stm32_reset_data(rcdev); | ||
61 | int reg_width = sizeof(u32); | ||
62 | int bank = id / (reg_width * BITS_PER_BYTE); | ||
63 | int offset = id % (reg_width * BITS_PER_BYTE); | ||
64 | u32 reg; | ||
65 | |||
66 | reg = readl(data->membase + (bank * reg_width)); | ||
67 | |||
68 | return !!(reg & BIT(offset)); | ||
69 | } | ||
70 | |||
71 | static const struct reset_control_ops stm32_reset_ops = { | ||
72 | .assert = stm32_reset_assert, | ||
73 | .deassert = stm32_reset_deassert, | ||
74 | .status = stm32_reset_status, | ||
75 | }; | ||
76 | |||
77 | static const struct of_device_id stm32_reset_dt_ids[] = { | ||
78 | { .compatible = "st,stm32mp1-rcc"}, | ||
79 | { /* sentinel */ }, | ||
80 | }; | ||
81 | |||
82 | static int stm32_reset_probe(struct platform_device *pdev) | ||
83 | { | ||
84 | struct device *dev = &pdev->dev; | ||
85 | struct stm32_reset_data *data; | ||
86 | void __iomem *membase; | ||
87 | struct resource *res; | ||
88 | |||
89 | data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); | ||
90 | if (!data) | ||
91 | return -ENOMEM; | ||
92 | |||
93 | res = platform_get_resource(pdev, IORESOURCE_MEM, 0); | ||
94 | membase = devm_ioremap_resource(dev, res); | ||
95 | if (IS_ERR(membase)) | ||
96 | return PTR_ERR(membase); | ||
97 | |||
98 | data->membase = membase; | ||
99 | data->rcdev.owner = THIS_MODULE; | ||
100 | data->rcdev.nr_resets = resource_size(res) * BITS_PER_BYTE; | ||
101 | data->rcdev.ops = &stm32_reset_ops; | ||
102 | data->rcdev.of_node = dev->of_node; | ||
103 | |||
104 | return devm_reset_controller_register(dev, &data->rcdev); | ||
105 | } | ||
106 | |||
107 | static struct platform_driver stm32_reset_driver = { | ||
108 | .probe = stm32_reset_probe, | ||
109 | .driver = { | ||
110 | .name = "stm32mp1-reset", | ||
111 | .of_match_table = stm32_reset_dt_ids, | ||
112 | }, | ||
113 | }; | ||
114 | |||
115 | builtin_platform_driver(stm32_reset_driver); | ||
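The driver leans on the RCC register layout, where each 32-bit set register is paired with a clear register 4 bytes above it (CLR_OFFSET). Worked through for reset id 70: bank = 70 / 32 = 2 and offset = 70 % 32 = 6, so assert writes BIT(6) to membase + 0x8 and deassert writes the same bit to membase + 0xc. stm32_reset_status() reads the set register back, so a set bit reports the line as asserted, and nr_resets falls out of the mapping as one line per bit of the I/O window.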
diff --git a/drivers/reset/reset-uniphier.c b/drivers/reset/reset-uniphier.c index e8bb023ff15e..360e06b20c53 100644 --- a/drivers/reset/reset-uniphier.c +++ b/drivers/reset/reset-uniphier.c | |||
@@ -63,6 +63,7 @@ static const struct uniphier_reset_data uniphier_pro4_sys_reset_data[] = { | |||
63 | UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (Ether, SATA, USB3) */ | 63 | UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (Ether, SATA, USB3) */ |
64 | UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ | 64 | UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ |
65 | UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */ | 65 | UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */ |
66 | UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */ | ||
66 | UNIPHIER_RESET_END, | 67 | UNIPHIER_RESET_END, |
67 | }; | 68 | }; |
68 | 69 | ||
@@ -72,6 +73,7 @@ static const struct uniphier_reset_data uniphier_pro5_sys_reset_data[] = { | |||
72 | UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (PCIe, USB3) */ | 73 | UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (PCIe, USB3) */ |
73 | UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ | 74 | UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ |
74 | UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */ | 75 | UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */ |
76 | UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */ | ||
75 | UNIPHIER_RESET_END, | 77 | UNIPHIER_RESET_END, |
76 | }; | 78 | }; |
77 | 79 | ||
@@ -88,6 +90,7 @@ static const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = { | |||
88 | UNIPHIER_RESETX(21, 0x2014, 1), /* USB31-PHY1 */ | 90 | UNIPHIER_RESETX(21, 0x2014, 1), /* USB31-PHY1 */ |
89 | UNIPHIER_RESETX(28, 0x2014, 12), /* SATA */ | 91 | UNIPHIER_RESETX(28, 0x2014, 12), /* SATA */ |
90 | UNIPHIER_RESET(29, 0x2014, 8), /* SATA-PHY (active high) */ | 92 | UNIPHIER_RESET(29, 0x2014, 8), /* SATA-PHY (active high) */ |
93 | UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */ | ||
91 | UNIPHIER_RESET_END, | 94 | UNIPHIER_RESET_END, |
92 | }; | 95 | }; |
93 | 96 | ||
@@ -121,6 +124,8 @@ static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = { | |||
121 | static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = { | 124 | static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = { |
122 | UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */ | 125 | UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */ |
123 | UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */ | 126 | UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */ |
127 | UNIPHIER_RESETX(6, 0x200c, 9), /* Ether0 */ | ||
128 | UNIPHIER_RESETX(7, 0x200c, 10), /* Ether1 */ | ||
124 | UNIPHIER_RESETX(8, 0x200c, 12), /* STDMAC */ | 129 | UNIPHIER_RESETX(8, 0x200c, 12), /* STDMAC */ |
125 | UNIPHIER_RESETX(12, 0x200c, 4), /* USB30 link (GIO0) */ | 130 | UNIPHIER_RESETX(12, 0x200c, 4), /* USB30 link (GIO0) */ |
126 | UNIPHIER_RESETX(13, 0x200c, 5), /* USB31 link (GIO1) */ | 131 | UNIPHIER_RESETX(13, 0x200c, 5), /* USB31 link (GIO1) */ |
diff --git a/drivers/soc/amlogic/meson-gx-pwrc-vpu.c b/drivers/soc/amlogic/meson-gx-pwrc-vpu.c index 2bdeebc48901..6289965c42e9 100644 --- a/drivers/soc/amlogic/meson-gx-pwrc-vpu.c +++ b/drivers/soc/amlogic/meson-gx-pwrc-vpu.c | |||
@@ -184,7 +184,8 @@ static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev) | |||
184 | 184 | ||
185 | rstc = devm_reset_control_array_get(&pdev->dev, false, false); | 185 | rstc = devm_reset_control_array_get(&pdev->dev, false, false); |
186 | if (IS_ERR(rstc)) { | 186 | if (IS_ERR(rstc)) { |
187 | dev_err(&pdev->dev, "failed to get reset lines\n"); | 187 | if (PTR_ERR(rstc) != -EPROBE_DEFER) |
188 | dev_err(&pdev->dev, "failed to get reset lines\n"); | ||
188 | return PTR_ERR(rstc); | 189 | return PTR_ERR(rstc); |
189 | } | 190 | } |
190 | 191 | ||
@@ -224,7 +225,11 @@ static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev) | |||
224 | 225 | ||
225 | static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev) | 226 | static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev) |
226 | { | 227 | { |
227 | meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd); | 228 | bool powered_off; |
229 | |||
230 | powered_off = meson_gx_pwrc_vpu_get_power(&vpu_hdmi_pd); | ||
231 | if (!powered_off) | ||
232 | meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd); | ||
228 | } | 233 | } |
229 | 234 | ||
230 | static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = { | 235 | static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = { |
diff --git a/drivers/soc/amlogic/meson-gx-socinfo.c b/drivers/soc/amlogic/meson-gx-socinfo.c index 8bdaa9b43d49..37ea0a1c24c8 100644 --- a/drivers/soc/amlogic/meson-gx-socinfo.c +++ b/drivers/soc/amlogic/meson-gx-socinfo.c | |||
@@ -33,6 +33,10 @@ static const struct meson_gx_soc_id { | |||
33 | { "GXL", 0x21 }, | 33 | { "GXL", 0x21 }, |
34 | { "GXM", 0x22 }, | 34 | { "GXM", 0x22 }, |
35 | { "TXL", 0x23 }, | 35 | { "TXL", 0x23 }, |
36 | { "TXLX", 0x24 }, | ||
37 | { "AXG", 0x25 }, | ||
38 | { "GXLX", 0x26 }, | ||
39 | { "TXHD", 0x27 }, | ||
36 | }; | 40 | }; |
37 | 41 | ||
38 | static const struct meson_gx_package_id { | 42 | static const struct meson_gx_package_id { |
@@ -45,9 +49,14 @@ static const struct meson_gx_package_id { | |||
45 | { "S905M", 0x1f, 0x20 }, | 49 | { "S905M", 0x1f, 0x20 }, |
46 | { "S905D", 0x21, 0 }, | 50 | { "S905D", 0x21, 0 }, |
47 | { "S905X", 0x21, 0x80 }, | 51 | { "S905X", 0x21, 0x80 }, |
52 | { "S905W", 0x21, 0xa0 }, | ||
48 | { "S905L", 0x21, 0xc0 }, | 53 | { "S905L", 0x21, 0xc0 }, |
49 | { "S905M2", 0x21, 0xe0 }, | 54 | { "S905M2", 0x21, 0xe0 }, |
50 | { "S912", 0x22, 0 }, | 55 | { "S912", 0x22, 0 }, |
56 | { "962X", 0x24, 0x10 }, | ||
57 | { "962E", 0x24, 0x20 }, | ||
58 | { "A113X", 0x25, 0x37 }, | ||
59 | { "A113D", 0x25, 0x22 }, | ||
51 | }; | 60 | }; |
52 | 61 | ||
53 | static inline unsigned int socinfo_to_major(u32 socinfo) | 62 | static inline unsigned int socinfo_to_major(u32 socinfo) |
@@ -98,7 +107,7 @@ static const char *socinfo_to_soc_id(u32 socinfo) | |||
98 | return "Unknown"; | 107 | return "Unknown"; |
99 | } | 108 | } |
100 | 109 | ||
101 | int __init meson_gx_socinfo_init(void) | 110 | static int __init meson_gx_socinfo_init(void) |
102 | { | 111 | { |
103 | struct soc_device_attribute *soc_dev_attr; | 112 | struct soc_device_attribute *soc_dev_attr; |
104 | struct soc_device *soc_dev; | 113 | struct soc_device *soc_dev; |
diff --git a/drivers/soc/amlogic/meson-mx-socinfo.c b/drivers/soc/amlogic/meson-mx-socinfo.c index 7bfff5ff22a2..78f0f1aeca57 100644 --- a/drivers/soc/amlogic/meson-mx-socinfo.c +++ b/drivers/soc/amlogic/meson-mx-socinfo.c | |||
@@ -104,7 +104,7 @@ static const struct of_device_id meson_mx_socinfo_analog_top_ids[] = { | |||
104 | { /* sentinel */ } | 104 | { /* sentinel */ } |
105 | }; | 105 | }; |
106 | 106 | ||
107 | int __init meson_mx_socinfo_init(void) | 107 | static int __init meson_mx_socinfo_init(void) |
108 | { | 108 | { |
109 | struct soc_device_attribute *soc_dev_attr; | 109 | struct soc_device_attribute *soc_dev_attr; |
110 | struct soc_device *soc_dev; | 110 | struct soc_device *soc_dev; |
diff --git a/drivers/soc/imx/gpc.c b/drivers/soc/imx/gpc.c index 750f93197411..c4d35f32af8d 100644 --- a/drivers/soc/imx/gpc.c +++ b/drivers/soc/imx/gpc.c | |||
@@ -254,6 +254,7 @@ static struct imx_pm_domain imx_gpc_domains[] = { | |||
254 | { | 254 | { |
255 | .base = { | 255 | .base = { |
256 | .name = "ARM", | 256 | .name = "ARM", |
257 | .flags = GENPD_FLAG_ALWAYS_ON, | ||
257 | }, | 258 | }, |
258 | }, { | 259 | }, { |
259 | .base = { | 260 | .base = { |
diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c index 435ce5ec648a..d762a46d434f 100644 --- a/drivers/soc/mediatek/mtk-scpsys.c +++ b/drivers/soc/mediatek/mtk-scpsys.c | |||
@@ -24,6 +24,7 @@ | |||
24 | #include <dt-bindings/power/mt2712-power.h> | 24 | #include <dt-bindings/power/mt2712-power.h> |
25 | #include <dt-bindings/power/mt6797-power.h> | 25 | #include <dt-bindings/power/mt6797-power.h> |
26 | #include <dt-bindings/power/mt7622-power.h> | 26 | #include <dt-bindings/power/mt7622-power.h> |
27 | #include <dt-bindings/power/mt7623a-power.h> | ||
27 | #include <dt-bindings/power/mt8173-power.h> | 28 | #include <dt-bindings/power/mt8173-power.h> |
28 | 29 | ||
29 | #define SPM_VDE_PWR_CON 0x0210 | 30 | #define SPM_VDE_PWR_CON 0x0210 |
@@ -518,7 +519,8 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = { | |||
518 | .name = "conn", | 519 | .name = "conn", |
519 | .sta_mask = PWR_STATUS_CONN, | 520 | .sta_mask = PWR_STATUS_CONN, |
520 | .ctl_offs = SPM_CONN_PWR_CON, | 521 | .ctl_offs = SPM_CONN_PWR_CON, |
521 | .bus_prot_mask = 0x0104, | 522 | .bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M | |
523 | MT2701_TOP_AXI_PROT_EN_CONN_S, | ||
522 | .clk_id = {CLK_NONE}, | 524 | .clk_id = {CLK_NONE}, |
523 | .active_wakeup = true, | 525 | .active_wakeup = true, |
524 | }, | 526 | }, |
@@ -528,7 +530,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = { | |||
528 | .ctl_offs = SPM_DIS_PWR_CON, | 530 | .ctl_offs = SPM_DIS_PWR_CON, |
529 | .sram_pdn_bits = GENMASK(11, 8), | 531 | .sram_pdn_bits = GENMASK(11, 8), |
530 | .clk_id = {CLK_MM}, | 532 | .clk_id = {CLK_MM}, |
531 | .bus_prot_mask = 0x0002, | 533 | .bus_prot_mask = MT2701_TOP_AXI_PROT_EN_MM_M0, |
532 | .active_wakeup = true, | 534 | .active_wakeup = true, |
533 | }, | 535 | }, |
534 | [MT2701_POWER_DOMAIN_MFG] = { | 536 | [MT2701_POWER_DOMAIN_MFG] = { |
@@ -664,12 +666,48 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = { | |||
664 | .name = "mfg", | 666 | .name = "mfg", |
665 | .sta_mask = PWR_STATUS_MFG, | 667 | .sta_mask = PWR_STATUS_MFG, |
666 | .ctl_offs = SPM_MFG_PWR_CON, | 668 | .ctl_offs = SPM_MFG_PWR_CON, |
667 | .sram_pdn_bits = GENMASK(11, 8), | 669 | .sram_pdn_bits = GENMASK(8, 8), |
668 | .sram_pdn_ack_bits = GENMASK(19, 16), | 670 | .sram_pdn_ack_bits = GENMASK(16, 16), |
669 | .clk_id = {CLK_MFG}, | 671 | .clk_id = {CLK_MFG}, |
670 | .bus_prot_mask = BIT(14) | BIT(21) | BIT(23), | 672 | .bus_prot_mask = BIT(14) | BIT(21) | BIT(23), |
671 | .active_wakeup = true, | 673 | .active_wakeup = true, |
672 | }, | 674 | }, |
675 | [MT2712_POWER_DOMAIN_MFG_SC1] = { | ||
676 | .name = "mfg_sc1", | ||
677 | .sta_mask = BIT(22), | ||
678 | .ctl_offs = 0x02c0, | ||
679 | .sram_pdn_bits = GENMASK(8, 8), | ||
680 | .sram_pdn_ack_bits = GENMASK(16, 16), | ||
681 | .clk_id = {CLK_NONE}, | ||
682 | .active_wakeup = true, | ||
683 | }, | ||
684 | [MT2712_POWER_DOMAIN_MFG_SC2] = { | ||
685 | .name = "mfg_sc2", | ||
686 | .sta_mask = BIT(23), | ||
687 | .ctl_offs = 0x02c4, | ||
688 | .sram_pdn_bits = GENMASK(8, 8), | ||
689 | .sram_pdn_ack_bits = GENMASK(16, 16), | ||
690 | .clk_id = {CLK_NONE}, | ||
691 | .active_wakeup = true, | ||
692 | }, | ||
693 | [MT2712_POWER_DOMAIN_MFG_SC3] = { | ||
694 | .name = "mfg_sc3", | ||
695 | .sta_mask = BIT(30), | ||
696 | .ctl_offs = 0x01f8, | ||
697 | .sram_pdn_bits = GENMASK(8, 8), | ||
698 | .sram_pdn_ack_bits = GENMASK(16, 16), | ||
699 | .clk_id = {CLK_NONE}, | ||
700 | .active_wakeup = true, | ||
701 | }, | ||
702 | }; | ||
703 | |||
704 | static const struct scp_subdomain scp_subdomain_mt2712[] = { | ||
705 | {MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_VDEC}, | ||
706 | {MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_VENC}, | ||
707 | {MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_ISP}, | ||
708 | {MT2712_POWER_DOMAIN_MFG, MT2712_POWER_DOMAIN_MFG_SC1}, | ||
709 | {MT2712_POWER_DOMAIN_MFG_SC1, MT2712_POWER_DOMAIN_MFG_SC2}, | ||
710 | {MT2712_POWER_DOMAIN_MFG_SC2, MT2712_POWER_DOMAIN_MFG_SC3}, | ||
673 | }; | 711 | }; |
674 | 712 | ||
675 | /* | 713 | /* |
@@ -794,6 +832,47 @@ static const struct scp_domain_data scp_domain_data_mt7622[] = { | |||
794 | }; | 832 | }; |
795 | 833 | ||
796 | /* | 834 | /* |
835 | * MT7623A power domain support | ||
836 | */ | ||
837 | |||
838 | static const struct scp_domain_data scp_domain_data_mt7623a[] = { | ||
839 | [MT7623A_POWER_DOMAIN_CONN] = { | ||
840 | .name = "conn", | ||
841 | .sta_mask = PWR_STATUS_CONN, | ||
842 | .ctl_offs = SPM_CONN_PWR_CON, | ||
843 | .bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M | | ||
844 | MT2701_TOP_AXI_PROT_EN_CONN_S, | ||
845 | .clk_id = {CLK_NONE}, | ||
846 | .active_wakeup = true, | ||
847 | }, | ||
848 | [MT7623A_POWER_DOMAIN_ETH] = { | ||
849 | .name = "eth", | ||
850 | .sta_mask = PWR_STATUS_ETH, | ||
851 | .ctl_offs = SPM_ETH_PWR_CON, | ||
852 | .sram_pdn_bits = GENMASK(11, 8), | ||
853 | .sram_pdn_ack_bits = GENMASK(15, 12), | ||
854 | .clk_id = {CLK_ETHIF}, | ||
855 | .active_wakeup = true, | ||
856 | }, | ||
857 | [MT7623A_POWER_DOMAIN_HIF] = { | ||
858 | .name = "hif", | ||
859 | .sta_mask = PWR_STATUS_HIF, | ||
860 | .ctl_offs = SPM_HIF_PWR_CON, | ||
861 | .sram_pdn_bits = GENMASK(11, 8), | ||
862 | .sram_pdn_ack_bits = GENMASK(15, 12), | ||
863 | .clk_id = {CLK_ETHIF}, | ||
864 | .active_wakeup = true, | ||
865 | }, | ||
866 | [MT7623A_POWER_DOMAIN_IFR_MSC] = { | ||
867 | .name = "ifr_msc", | ||
868 | .sta_mask = PWR_STATUS_IFR_MSC, | ||
869 | .ctl_offs = SPM_IFR_MSC_PWR_CON, | ||
870 | .clk_id = {CLK_NONE}, | ||
871 | .active_wakeup = true, | ||
872 | }, | ||
873 | }; | ||
874 | |||
875 | /* | ||
797 | * MT8173 power domain support | 876 | * MT8173 power domain support |
798 | */ | 877 | */ |
799 | 878 | ||
@@ -905,6 +984,8 @@ static const struct scp_soc_data mt2701_data = { | |||
905 | static const struct scp_soc_data mt2712_data = { | 984 | static const struct scp_soc_data mt2712_data = { |
906 | .domains = scp_domain_data_mt2712, | 985 | .domains = scp_domain_data_mt2712, |
907 | .num_domains = ARRAY_SIZE(scp_domain_data_mt2712), | 986 | .num_domains = ARRAY_SIZE(scp_domain_data_mt2712), |
987 | .subdomains = scp_subdomain_mt2712, | ||
988 | .num_subdomains = ARRAY_SIZE(scp_subdomain_mt2712), | ||
908 | .regs = { | 989 | .regs = { |
909 | .pwr_sta_offs = SPM_PWR_STATUS, | 990 | .pwr_sta_offs = SPM_PWR_STATUS, |
910 | .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND | 991 | .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND |
@@ -934,6 +1015,16 @@ static const struct scp_soc_data mt7622_data = { | |||
934 | .bus_prot_reg_update = true, | 1015 | .bus_prot_reg_update = true, |
935 | }; | 1016 | }; |
936 | 1017 | ||
1018 | static const struct scp_soc_data mt7623a_data = { | ||
1019 | .domains = scp_domain_data_mt7623a, | ||
1020 | .num_domains = ARRAY_SIZE(scp_domain_data_mt7623a), | ||
1021 | .regs = { | ||
1022 | .pwr_sta_offs = SPM_PWR_STATUS, | ||
1023 | .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND | ||
1024 | }, | ||
1025 | .bus_prot_reg_update = true, | ||
1026 | }; | ||
1027 | |||
937 | static const struct scp_soc_data mt8173_data = { | 1028 | static const struct scp_soc_data mt8173_data = { |
938 | .domains = scp_domain_data_mt8173, | 1029 | .domains = scp_domain_data_mt8173, |
939 | .num_domains = ARRAY_SIZE(scp_domain_data_mt8173), | 1030 | .num_domains = ARRAY_SIZE(scp_domain_data_mt8173), |
@@ -964,6 +1055,9 @@ static const struct of_device_id of_scpsys_match_tbl[] = { | |||
964 | .compatible = "mediatek,mt7622-scpsys", | 1055 | .compatible = "mediatek,mt7622-scpsys", |
965 | .data = &mt7622_data, | 1056 | .data = &mt7622_data, |
966 | }, { | 1057 | }, { |
1058 | .compatible = "mediatek,mt7623a-scpsys", | ||
1059 | .data = &mt7623a_data, | ||
1060 | }, { | ||
967 | .compatible = "mediatek,mt8173-scpsys", | 1061 | .compatible = "mediatek,mt8173-scpsys", |
968 | .data = &mt8173_data, | 1062 | .data = &mt8173_data, |
969 | }, { | 1063 | }, { |
@@ -992,7 +1086,7 @@ static int scpsys_probe(struct platform_device *pdev) | |||
992 | 1086 | ||
993 | pd_data = &scp->pd_data; | 1087 | pd_data = &scp->pd_data; |
994 | 1088 | ||
995 | for (i = 0, sd = soc->subdomains ; i < soc->num_subdomains ; i++) { | 1089 | for (i = 0, sd = soc->subdomains; i < soc->num_subdomains; i++, sd++) { |
996 | ret = pm_genpd_add_subdomain(pd_data->domains[sd->origin], | 1090 | ret = pm_genpd_add_subdomain(pd_data->domains[sd->origin], |
997 | pd_data->domains[sd->subdomain]); | 1091 | pd_data->domains[sd->subdomain]); |
998 | if (ret && IS_ENABLED(CONFIG_PM)) | 1092 | if (ret && IS_ENABLED(CONFIG_PM)) |
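Note the loop fix at the end of this hunk: the old code never advanced sd, so every iteration passed the first entry of the subdomain table to pm_genpd_add_subdomain() and the remaining parent/child links were never created. With sd++ in the increment clause, each pair, including the new MT2712 MFG -> MFG_SC1 -> MFG_SC2 -> MFG_SC3 chain above, is registered exactly once.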
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig index e050eb83341d..a993d19fa562 100644 --- a/drivers/soc/qcom/Kconfig +++ b/drivers/soc/qcom/Kconfig | |||
@@ -47,6 +47,7 @@ config QCOM_QMI_HELPERS | |||
47 | config QCOM_RMTFS_MEM | 47 | config QCOM_RMTFS_MEM |
48 | tristate "Qualcomm Remote Filesystem memory driver" | 48 | tristate "Qualcomm Remote Filesystem memory driver" |
49 | depends on ARCH_QCOM | 49 | depends on ARCH_QCOM |
50 | select QCOM_SCM | ||
50 | help | 51 | help |
51 | The Qualcomm remote filesystem memory driver is used for allocating | 52 | The Qualcomm remote filesystem memory driver is used for allocating |
52 | and exposing regions of shared memory with remote processors for the | 53 | and exposing regions of shared memory with remote processors for the |
diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c index 0a43b2e8906f..c8999e38b005 100644 --- a/drivers/soc/qcom/rmtfs_mem.c +++ b/drivers/soc/qcom/rmtfs_mem.c | |||
@@ -37,6 +37,8 @@ struct qcom_rmtfs_mem { | |||
37 | phys_addr_t size; | 37 | phys_addr_t size; |
38 | 38 | ||
39 | unsigned int client_id; | 39 | unsigned int client_id; |
40 | |||
41 | unsigned int perms; | ||
40 | }; | 42 | }; |
41 | 43 | ||
42 | static ssize_t qcom_rmtfs_mem_show(struct device *dev, | 44 | static ssize_t qcom_rmtfs_mem_show(struct device *dev, |
@@ -151,9 +153,11 @@ static void qcom_rmtfs_mem_release_device(struct device *dev) | |||
151 | static int qcom_rmtfs_mem_probe(struct platform_device *pdev) | 153 | static int qcom_rmtfs_mem_probe(struct platform_device *pdev) |
152 | { | 154 | { |
153 | struct device_node *node = pdev->dev.of_node; | 155 | struct device_node *node = pdev->dev.of_node; |
156 | struct qcom_scm_vmperm perms[2]; | ||
154 | struct reserved_mem *rmem; | 157 | struct reserved_mem *rmem; |
155 | struct qcom_rmtfs_mem *rmtfs_mem; | 158 | struct qcom_rmtfs_mem *rmtfs_mem; |
156 | u32 client_id; | 159 | u32 client_id; |
160 | u32 vmid; | ||
157 | int ret; | 161 | int ret; |
158 | 162 | ||
159 | rmem = of_reserved_mem_lookup(node); | 163 | rmem = of_reserved_mem_lookup(node); |
@@ -204,10 +208,31 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev) | |||
204 | 208 | ||
205 | rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device; | 209 | rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device; |
206 | 210 | ||
211 | ret = of_property_read_u32(node, "qcom,vmid", &vmid); | ||
212 | if (ret < 0 && ret != -EINVAL) { | ||
213 | dev_err(&pdev->dev, "failed to parse qcom,vmid\n"); | ||
214 | goto remove_cdev; | ||
215 | } else if (!ret) { | ||
216 | perms[0].vmid = QCOM_SCM_VMID_HLOS; | ||
217 | perms[0].perm = QCOM_SCM_PERM_RW; | ||
218 | perms[1].vmid = vmid; | ||
219 | perms[1].perm = QCOM_SCM_PERM_RW; | ||
220 | |||
221 | rmtfs_mem->perms = BIT(QCOM_SCM_VMID_HLOS); | ||
222 | ret = qcom_scm_assign_mem(rmtfs_mem->addr, rmtfs_mem->size, | ||
223 | &rmtfs_mem->perms, perms, 2); | ||
224 | if (ret < 0) { | ||
225 | dev_err(&pdev->dev, "assign memory failed\n"); | ||
226 | goto remove_cdev; | ||
227 | } | ||
228 | } | ||
229 | |||
207 | dev_set_drvdata(&pdev->dev, rmtfs_mem); | 230 | dev_set_drvdata(&pdev->dev, rmtfs_mem); |
208 | 231 | ||
209 | return 0; | 232 | return 0; |
210 | 233 | ||
234 | remove_cdev: | ||
235 | cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev); | ||
211 | put_device: | 236 | put_device: |
212 | put_device(&rmtfs_mem->dev); | 237 | put_device(&rmtfs_mem->dev); |
213 | 238 | ||
@@ -217,6 +242,15 @@ put_device: | |||
217 | static int qcom_rmtfs_mem_remove(struct platform_device *pdev) | 242 | static int qcom_rmtfs_mem_remove(struct platform_device *pdev) |
218 | { | 243 | { |
219 | struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev); | 244 | struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev); |
245 | struct qcom_scm_vmperm perm; | ||
246 | |||
247 | if (rmtfs_mem->perms) { | ||
248 | perm.vmid = QCOM_SCM_VMID_HLOS; | ||
249 | perm.perm = QCOM_SCM_PERM_RW; | ||
250 | |||
251 | qcom_scm_assign_mem(rmtfs_mem->addr, rmtfs_mem->size, | ||
252 | &rmtfs_mem->perms, &perm, 1); | ||
253 | } | ||
220 | 254 | ||
221 | cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev); | 255 | cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev); |
222 | put_device(&rmtfs_mem->dev); | 256 | put_device(&rmtfs_mem->dev); |
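The shape of the new rmtfs handoff: when the optional qcom,vmid property is present, probe builds a two-entry qcom_scm_vmperm array granting read/write to both HLOS and the named remote VM, and qcom_scm_assign_mem() updates rmtfs_mem->perms with the resulting ownership bitmap; remove reverses this by reassigning the region to HLOS alone before deleting the character device. An absent property (-EINVAL from of_property_read_u32()) skips the handoff entirely, so existing devicetrees keep working, and the new select QCOM_SCM in Kconfig guarantees the SCM calls are available.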
diff --git a/drivers/soc/qcom/wcnss_ctrl.c b/drivers/soc/qcom/wcnss_ctrl.c index d008e5b82db4..df3ccb30bc2d 100644 --- a/drivers/soc/qcom/wcnss_ctrl.c +++ b/drivers/soc/qcom/wcnss_ctrl.c | |||
@@ -249,7 +249,7 @@ static int wcnss_download_nv(struct wcnss_ctrl *wcnss, bool *expect_cbc) | |||
249 | /* Increment for next fragment */ | 249 | /* Increment for next fragment */ |
250 | req->seq++; | 250 | req->seq++; |
251 | 251 | ||
252 | data += req->hdr.len; | 252 | data += NV_FRAGMENT_SIZE; |
253 | left -= NV_FRAGMENT_SIZE; | 253 | left -= NV_FRAGMENT_SIZE; |
254 | } while (left > 0); | 254 | } while (left > 0); |
255 | 255 | ||
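This one-line fix keeps the NV download loop's two counters consistent: left was already decremented by the fixed NV_FRAGMENT_SIZE each iteration, while data advanced by req->hdr.len; whenever the two disagree, every fragment after the first is read from the wrong offset in the firmware blob. Stepping both by NV_FRAGMENT_SIZE keeps the source pointer and the remaining-byte counter in lockstep.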
diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c index 15e71fd6c513..96882ffde67e 100644 --- a/drivers/soc/rockchip/grf.c +++ b/drivers/soc/rockchip/grf.c | |||
@@ -43,6 +43,28 @@ static const struct rockchip_grf_info rk3036_grf __initconst = { | |||
43 | .num_values = ARRAY_SIZE(rk3036_defaults), | 43 | .num_values = ARRAY_SIZE(rk3036_defaults), |
44 | }; | 44 | }; |
45 | 45 | ||
46 | #define RK3128_GRF_SOC_CON0 0x140 | ||
47 | |||
48 | static const struct rockchip_grf_value rk3128_defaults[] __initconst = { | ||
49 | { "jtag switching", RK3128_GRF_SOC_CON0, HIWORD_UPDATE(0, 1, 8) }, | ||
50 | }; | ||
51 | |||
52 | static const struct rockchip_grf_info rk3128_grf __initconst = { | ||
53 | .values = rk3128_defaults, | ||
54 | .num_values = ARRAY_SIZE(rk3128_defaults), | ||
55 | }; | ||
56 | |||
57 | #define RK3228_GRF_SOC_CON6 0x418 | ||
58 | |||
59 | static const struct rockchip_grf_value rk3228_defaults[] __initconst = { | ||
60 | { "jtag switching", RK3228_GRF_SOC_CON6, HIWORD_UPDATE(0, 1, 8) }, | ||
61 | }; | ||
62 | |||
63 | static const struct rockchip_grf_info rk3228_grf __initconst = { | ||
64 | .values = rk3228_defaults, | ||
65 | .num_values = ARRAY_SIZE(rk3228_defaults), | ||
66 | }; | ||
67 | |||
46 | #define RK3288_GRF_SOC_CON0 0x244 | 68 | #define RK3288_GRF_SOC_CON0 0x244 |
47 | 69 | ||
48 | static const struct rockchip_grf_value rk3288_defaults[] __initconst = { | 70 | static const struct rockchip_grf_value rk3288_defaults[] __initconst = { |
@@ -92,6 +114,12 @@ static const struct of_device_id rockchip_grf_dt_match[] __initconst = { | |||
92 | .compatible = "rockchip,rk3036-grf", | 114 | .compatible = "rockchip,rk3036-grf", |
93 | .data = (void *)&rk3036_grf, | 115 | .data = (void *)&rk3036_grf, |
94 | }, { | 116 | }, { |
117 | .compatible = "rockchip,rk3128-grf", | ||
118 | .data = (void *)&rk3128_grf, | ||
119 | }, { | ||
120 | .compatible = "rockchip,rk3228-grf", | ||
121 | .data = (void *)&rk3228_grf, | ||
122 | }, { | ||
95 | .compatible = "rockchip,rk3288-grf", | 123 | .compatible = "rockchip,rk3288-grf", |
96 | .data = (void *)&rk3288_grf, | 124 | .data = (void *)&rk3288_grf, |
97 | }, { | 125 | }, { |
diff --git a/drivers/soc/rockchip/pm_domains.c b/drivers/soc/rockchip/pm_domains.c index 5c342167b9db..53efc386b1ad 100644 --- a/drivers/soc/rockchip/pm_domains.c +++ b/drivers/soc/rockchip/pm_domains.c | |||
@@ -67,7 +67,7 @@ struct rockchip_pm_domain { | |||
67 | struct regmap **qos_regmap; | 67 | struct regmap **qos_regmap; |
68 | u32 *qos_save_regs[MAX_QOS_REGS_NUM]; | 68 | u32 *qos_save_regs[MAX_QOS_REGS_NUM]; |
69 | int num_clks; | 69 | int num_clks; |
70 | struct clk *clks[]; | 70 | struct clk_bulk_data *clks; |
71 | }; | 71 | }; |
72 | 72 | ||
73 | struct rockchip_pmu { | 73 | struct rockchip_pmu { |
@@ -274,13 +274,18 @@ static void rockchip_do_pmu_set_power_domain(struct rockchip_pm_domain *pd, | |||
274 | 274 | ||
275 | static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on) | 275 | static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on) |
276 | { | 276 | { |
277 | int i; | 277 | struct rockchip_pmu *pmu = pd->pmu; |
278 | int ret; | ||
278 | 279 | ||
279 | mutex_lock(&pd->pmu->mutex); | 280 | mutex_lock(&pmu->mutex); |
280 | 281 | ||
281 | if (rockchip_pmu_domain_is_on(pd) != power_on) { | 282 | if (rockchip_pmu_domain_is_on(pd) != power_on) { |
282 | for (i = 0; i < pd->num_clks; i++) | 283 | ret = clk_bulk_enable(pd->num_clks, pd->clks); |
283 | clk_enable(pd->clks[i]); | 284 | if (ret < 0) { |
285 | dev_err(pmu->dev, "failed to enable clocks\n"); | ||
286 | mutex_unlock(&pmu->mutex); | ||
287 | return ret; | ||
288 | } | ||
284 | 289 | ||
285 | if (!power_on) { | 290 | if (!power_on) { |
286 | rockchip_pmu_save_qos(pd); | 291 | rockchip_pmu_save_qos(pd); |
@@ -298,11 +303,10 @@ static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on) | |||
298 | rockchip_pmu_restore_qos(pd); | 303 | rockchip_pmu_restore_qos(pd); |
299 | } | 304 | } |
300 | 305 | ||
301 | for (i = pd->num_clks - 1; i >= 0; i--) | 306 | clk_bulk_disable(pd->num_clks, pd->clks); |
302 | clk_disable(pd->clks[i]); | ||
303 | } | 307 | } |
304 | 308 | ||
305 | mutex_unlock(&pd->pmu->mutex); | 309 | mutex_unlock(&pmu->mutex); |
306 | return 0; | 310 | return 0; |
307 | } | 311 | } |
308 | 312 | ||
@@ -364,8 +368,6 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, | |||
364 | const struct rockchip_domain_info *pd_info; | 368 | const struct rockchip_domain_info *pd_info; |
365 | struct rockchip_pm_domain *pd; | 369 | struct rockchip_pm_domain *pd; |
366 | struct device_node *qos_node; | 370 | struct device_node *qos_node; |
367 | struct clk *clk; | ||
368 | int clk_cnt; | ||
369 | int i, j; | 371 | int i, j; |
370 | u32 id; | 372 | u32 id; |
371 | int error; | 373 | int error; |
@@ -391,41 +393,41 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, | |||
391 | return -EINVAL; | 393 | return -EINVAL; |
392 | } | 394 | } |
393 | 395 | ||
394 | clk_cnt = of_count_phandle_with_args(node, "clocks", "#clock-cells"); | 396 | pd = devm_kzalloc(pmu->dev, sizeof(*pd), GFP_KERNEL); |
395 | pd = devm_kzalloc(pmu->dev, | ||
396 | sizeof(*pd) + clk_cnt * sizeof(pd->clks[0]), | ||
397 | GFP_KERNEL); | ||
398 | if (!pd) | 397 | if (!pd) |
399 | return -ENOMEM; | 398 | return -ENOMEM; |
400 | 399 | ||
401 | pd->info = pd_info; | 400 | pd->info = pd_info; |
402 | pd->pmu = pmu; | 401 | pd->pmu = pmu; |
403 | 402 | ||
404 | for (i = 0; i < clk_cnt; i++) { | 403 | pd->num_clks = of_count_phandle_with_args(node, "clocks", |
405 | clk = of_clk_get(node, i); | 404 | "#clock-cells"); |
406 | if (IS_ERR(clk)) { | 405 | if (pd->num_clks > 0) { |
407 | error = PTR_ERR(clk); | 406 | pd->clks = devm_kcalloc(pmu->dev, pd->num_clks, |
407 | sizeof(*pd->clks), GFP_KERNEL); | ||
408 | if (!pd->clks) | ||
409 | return -ENOMEM; | ||
410 | } else { | ||
411 | dev_dbg(pmu->dev, "%s: doesn't have clocks: %d\n", | ||
412 | node->name, pd->num_clks); | ||
413 | pd->num_clks = 0; | ||
414 | } | ||
415 | |||
416 | for (i = 0; i < pd->num_clks; i++) { | ||
417 | pd->clks[i].clk = of_clk_get(node, i); | ||
418 | if (IS_ERR(pd->clks[i].clk)) { | ||
419 | error = PTR_ERR(pd->clks[i].clk); | ||
408 | dev_err(pmu->dev, | 420 | dev_err(pmu->dev, |
409 | "%s: failed to get clk at index %d: %d\n", | 421 | "%s: failed to get clk at index %d: %d\n", |
410 | node->name, i, error); | 422 | node->name, i, error); |
411 | goto err_out; | 423 | return error; |
412 | } | ||
413 | |||
414 | error = clk_prepare(clk); | ||
415 | if (error) { | ||
416 | dev_err(pmu->dev, | ||
417 | "%s: failed to prepare clk %pC (index %d): %d\n", | ||
418 | node->name, clk, i, error); | ||
419 | clk_put(clk); | ||
420 | goto err_out; | ||
421 | } | 424 | } |
422 | |||
423 | pd->clks[pd->num_clks++] = clk; | ||
424 | |||
425 | dev_dbg(pmu->dev, "added clock '%pC' to domain '%s'\n", | ||
426 | clk, node->name); | ||
427 | } | 425 | } |
428 | 426 | ||
427 | error = clk_bulk_prepare(pd->num_clks, pd->clks); | ||
428 | if (error) | ||
429 | goto err_put_clocks; | ||
430 | |||
429 | pd->num_qos = of_count_phandle_with_args(node, "pm_qos", | 431 | pd->num_qos = of_count_phandle_with_args(node, "pm_qos", |
430 | NULL); | 432 | NULL); |
431 | 433 | ||
@@ -435,7 +437,7 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, | |||
435 | GFP_KERNEL); | 437 | GFP_KERNEL); |
436 | if (!pd->qos_regmap) { | 438 | if (!pd->qos_regmap) { |
437 | error = -ENOMEM; | 439 | error = -ENOMEM; |
438 | goto err_out; | 440 | goto err_unprepare_clocks; |
439 | } | 441 | } |
440 | 442 | ||
441 | for (j = 0; j < MAX_QOS_REGS_NUM; j++) { | 443 | for (j = 0; j < MAX_QOS_REGS_NUM; j++) { |
@@ -445,7 +447,7 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, | |||
445 | GFP_KERNEL); | 447 | GFP_KERNEL); |
446 | if (!pd->qos_save_regs[j]) { | 448 | if (!pd->qos_save_regs[j]) { |
447 | error = -ENOMEM; | 449 | error = -ENOMEM; |
448 | goto err_out; | 450 | goto err_unprepare_clocks; |
449 | } | 451 | } |
450 | } | 452 | } |
451 | 453 | ||
@@ -453,13 +455,13 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, | |||
453 | qos_node = of_parse_phandle(node, "pm_qos", j); | 455 | qos_node = of_parse_phandle(node, "pm_qos", j); |
454 | if (!qos_node) { | 456 | if (!qos_node) { |
455 | error = -ENODEV; | 457 | error = -ENODEV; |
456 | goto err_out; | 458 | goto err_unprepare_clocks; |
457 | } | 459 | } |
458 | pd->qos_regmap[j] = syscon_node_to_regmap(qos_node); | 460 | pd->qos_regmap[j] = syscon_node_to_regmap(qos_node); |
459 | if (IS_ERR(pd->qos_regmap[j])) { | 461 | if (IS_ERR(pd->qos_regmap[j])) { |
460 | error = -ENODEV; | 462 | error = -ENODEV; |
461 | of_node_put(qos_node); | 463 | of_node_put(qos_node); |
462 | goto err_out; | 464 | goto err_unprepare_clocks; |
463 | } | 465 | } |
464 | of_node_put(qos_node); | 466 | of_node_put(qos_node); |
465 | } | 467 | } |
@@ -470,7 +472,7 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, | |||
470 | dev_err(pmu->dev, | 472 | dev_err(pmu->dev, |
471 | "failed to power on domain '%s': %d\n", | 473 | "failed to power on domain '%s': %d\n", |
472 | node->name, error); | 474 | node->name, error); |
473 | goto err_out; | 475 | goto err_unprepare_clocks; |
474 | } | 476 | } |
475 | 477 | ||
476 | pd->genpd.name = node->name; | 478 | pd->genpd.name = node->name; |
@@ -486,17 +488,16 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, | |||
486 | pmu->genpd_data.domains[id] = &pd->genpd; | 488 | pmu->genpd_data.domains[id] = &pd->genpd; |
487 | return 0; | 489 | return 0; |
488 | 490 | ||
489 | err_out: | 491 | err_unprepare_clocks: |
490 | while (--i >= 0) { | 492 | clk_bulk_unprepare(pd->num_clks, pd->clks); |
491 | clk_unprepare(pd->clks[i]); | 493 | err_put_clocks: |
492 | clk_put(pd->clks[i]); | 494 | clk_bulk_put(pd->num_clks, pd->clks); |
493 | } | ||
494 | return error; | 495 | return error; |
495 | } | 496 | } |
496 | 497 | ||
497 | static void rockchip_pm_remove_one_domain(struct rockchip_pm_domain *pd) | 498 | static void rockchip_pm_remove_one_domain(struct rockchip_pm_domain *pd) |
498 | { | 499 | { |
499 | int i, ret; | 500 | int ret; |
500 | 501 | ||
501 | /* | 502 | /* |
502 | * We're in the error cleanup already, so we only complain, | 503 | * We're in the error cleanup already, so we only complain, |
@@ -507,10 +508,8 @@ static void rockchip_pm_remove_one_domain(struct rockchip_pm_domain *pd) | |||
507 | dev_err(pd->pmu->dev, "failed to remove domain '%s' : %d - state may be inconsistent\n", | 508 | dev_err(pd->pmu->dev, "failed to remove domain '%s' : %d - state may be inconsistent\n", |
508 | pd->genpd.name, ret); | 509 | pd->genpd.name, ret); |
509 | 510 | ||
510 | for (i = 0; i < pd->num_clks; i++) { | 511 | clk_bulk_unprepare(pd->num_clks, pd->clks); |
511 | clk_unprepare(pd->clks[i]); | 512 | clk_bulk_put(pd->num_clks, pd->clks); |
512 | clk_put(pd->clks[i]); | ||
513 | } | ||
514 | 513 | ||
515 | /* protect the zeroing of pm->num_clks */ | 514 | /* protect the zeroing of pm->num_clks */ |
516 | mutex_lock(&pd->pmu->mutex); | 515 | mutex_lock(&pd->pmu->mutex); |
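The rockchip conversion swaps the open-coded clk array for the clk_bulk helpers, keeping the prepare-once/enable-per-transition split but acting on the whole array in one call, and it finally propagates enable failures that the old unchecked clk_enable() loop swallowed. The lifecycle reduces to this pattern (a condensed sketch of the code above, declarations and error handling elided):

	/* setup: resolve each clock into the clk_bulk_data array, then prepare */
	for (i = 0; i < pd->num_clks; i++)
		pd->clks[i].clk = of_clk_get(node, i);
	clk_bulk_prepare(pd->num_clks, pd->clks);

	/* every power transition */
	ret = clk_bulk_enable(pd->num_clks, pd->clks);	/* can fail, now checked */
	/* ... program the PMU, save/restore QoS ... */
	clk_bulk_disable(pd->num_clks, pd->clks);

	/* teardown (also the error path) */
	clk_bulk_unprepare(pd->num_clks, pd->clks);
	clk_bulk_put(pd->num_clks, pd->clks);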
diff --git a/drivers/soc/samsung/exynos-pmu.c b/drivers/soc/samsung/exynos-pmu.c index f56adbd9fb8b..d34ca201b8b7 100644 --- a/drivers/soc/samsung/exynos-pmu.c +++ b/drivers/soc/samsung/exynos-pmu.c | |||
@@ -85,10 +85,14 @@ static const struct of_device_id exynos_pmu_of_device_ids[] = { | |||
85 | .compatible = "samsung,exynos5250-pmu", | 85 | .compatible = "samsung,exynos5250-pmu", |
86 | .data = exynos_pmu_data_arm_ptr(exynos5250_pmu_data), | 86 | .data = exynos_pmu_data_arm_ptr(exynos5250_pmu_data), |
87 | }, { | 87 | }, { |
88 | .compatible = "samsung,exynos5410-pmu", | ||
89 | }, { | ||
88 | .compatible = "samsung,exynos5420-pmu", | 90 | .compatible = "samsung,exynos5420-pmu", |
89 | .data = exynos_pmu_data_arm_ptr(exynos5420_pmu_data), | 91 | .data = exynos_pmu_data_arm_ptr(exynos5420_pmu_data), |
90 | }, { | 92 | }, { |
91 | .compatible = "samsung,exynos5433-pmu", | 93 | .compatible = "samsung,exynos5433-pmu", |
94 | }, { | ||
95 | .compatible = "samsung,exynos7-pmu", | ||
92 | }, | 96 | }, |
93 | { /*sentinel*/ }, | 97 | { /*sentinel*/ }, |
94 | }; | 98 | }; |
@@ -126,6 +130,9 @@ static int exynos_pmu_probe(struct platform_device *pdev) | |||
126 | 130 | ||
127 | platform_set_drvdata(pdev, pmu_context); | 131 | platform_set_drvdata(pdev, pmu_context); |
128 | 132 | ||
133 | if (devm_of_platform_populate(dev)) | ||
134 | dev_err(dev, "Error populating children, reboot and poweroff might not work properly\n"); | ||
135 | |||
129 | dev_dbg(dev, "Exynos PMU Driver probe done\n"); | 136 | dev_dbg(dev, "Exynos PMU Driver probe done\n"); |
130 | return 0; | 137 | return 0; |
131 | } | 138 | } |
diff --git a/drivers/soc/tegra/Kconfig b/drivers/soc/tegra/Kconfig index 89ebe22a3e27..fe4481676da6 100644 --- a/drivers/soc/tegra/Kconfig +++ b/drivers/soc/tegra/Kconfig | |||
@@ -104,6 +104,16 @@ config ARCH_TEGRA_186_SOC | |||
104 | multi-format support, ISP for image capture processing and BPMP for | 104 | multi-format support, ISP for image capture processing and BPMP for |
105 | power management. | 105 | power management. |
106 | 106 | ||
107 | config ARCH_TEGRA_194_SOC | ||
108 | bool "NVIDIA Tegra194 SoC" | ||
109 | select MAILBOX | ||
110 | select TEGRA_BPMP | ||
111 | select TEGRA_HSP_MBOX | ||
112 | select TEGRA_IVC | ||
113 | select SOC_TEGRA_PMC | ||
114 | help | ||
115 | Enable support for the NVIDIA Tegra194 SoC. | ||
116 | |||
107 | endif | 117 | endif |
108 | endif | 118 | endif |
109 | 119 | ||
diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c index ce62a47a6647..d9fcdb592b39 100644 --- a/drivers/soc/tegra/pmc.c +++ b/drivers/soc/tegra/pmc.c | |||
@@ -127,8 +127,7 @@ struct tegra_powergate { | |||
127 | unsigned int id; | 127 | unsigned int id; |
128 | struct clk **clks; | 128 | struct clk **clks; |
129 | unsigned int num_clks; | 129 | unsigned int num_clks; |
130 | struct reset_control **resets; | 130 | struct reset_control *reset; |
131 | unsigned int num_resets; | ||
132 | }; | 131 | }; |
133 | 132 | ||
134 | struct tegra_io_pad_soc { | 133 | struct tegra_io_pad_soc { |
@@ -153,6 +152,7 @@ struct tegra_pmc_soc { | |||
153 | 152 | ||
154 | bool has_tsense_reset; | 153 | bool has_tsense_reset; |
155 | bool has_gpu_clamps; | 154 | bool has_gpu_clamps; |
155 | bool needs_mbist_war; | ||
156 | 156 | ||
157 | const struct tegra_io_pad_soc *io_pads; | 157 | const struct tegra_io_pad_soc *io_pads; |
158 | unsigned int num_io_pads; | 158 | unsigned int num_io_pads; |
@@ -368,31 +368,8 @@ out: | |||
368 | return err; | 368 | return err; |
369 | } | 369 | } |
370 | 370 | ||
371 | static int tegra_powergate_reset_assert(struct tegra_powergate *pg) | 371 | int __weak tegra210_clk_handle_mbist_war(unsigned int id) |
372 | { | 372 | { |
373 | unsigned int i; | ||
374 | int err; | ||
375 | |||
376 | for (i = 0; i < pg->num_resets; i++) { | ||
377 | err = reset_control_assert(pg->resets[i]); | ||
378 | if (err) | ||
379 | return err; | ||
380 | } | ||
381 | |||
382 | return 0; | ||
383 | } | ||
384 | |||
385 | static int tegra_powergate_reset_deassert(struct tegra_powergate *pg) | ||
386 | { | ||
387 | unsigned int i; | ||
388 | int err; | ||
389 | |||
390 | for (i = 0; i < pg->num_resets; i++) { | ||
391 | err = reset_control_deassert(pg->resets[i]); | ||
392 | if (err) | ||
393 | return err; | ||
394 | } | ||
395 | |||
396 | return 0; | 373 | return 0; |
397 | } | 374 | } |
398 | 375 | ||
@@ -401,7 +378,7 @@ static int tegra_powergate_power_up(struct tegra_powergate *pg, | |||
401 | { | 378 | { |
402 | int err; | 379 | int err; |
403 | 380 | ||
404 | err = tegra_powergate_reset_assert(pg); | 381 | err = reset_control_assert(pg->reset); |
405 | if (err) | 382 | if (err) |
406 | return err; | 383 | return err; |
407 | 384 | ||
@@ -425,12 +402,17 @@ static int tegra_powergate_power_up(struct tegra_powergate *pg, | |||
425 | 402 | ||
426 | usleep_range(10, 20); | 403 | usleep_range(10, 20); |
427 | 404 | ||
428 | err = tegra_powergate_reset_deassert(pg); | 405 | err = reset_control_deassert(pg->reset); |
429 | if (err) | 406 | if (err) |
430 | goto powergate_off; | 407 | goto powergate_off; |
431 | 408 | ||
432 | usleep_range(10, 20); | 409 | usleep_range(10, 20); |
433 | 410 | ||
411 | if (pg->pmc->soc->needs_mbist_war) | ||
412 | err = tegra210_clk_handle_mbist_war(pg->id); | ||
413 | if (err) | ||
414 | goto disable_clks; | ||
415 | |||
434 | if (disable_clocks) | 416 | if (disable_clocks) |
435 | tegra_powergate_disable_clocks(pg); | 417 | tegra_powergate_disable_clocks(pg); |
436 | 418 | ||
@@ -456,7 +438,7 @@ static int tegra_powergate_power_down(struct tegra_powergate *pg) | |||
456 | 438 | ||
457 | usleep_range(10, 20); | 439 | usleep_range(10, 20); |
458 | 440 | ||
459 | err = tegra_powergate_reset_assert(pg); | 441 | err = reset_control_assert(pg->reset); |
460 | if (err) | 442 | if (err) |
461 | goto disable_clks; | 443 | goto disable_clks; |
462 | 444 | ||
@@ -475,7 +457,7 @@ static int tegra_powergate_power_down(struct tegra_powergate *pg) | |||
475 | assert_resets: | 457 | assert_resets: |
476 | tegra_powergate_enable_clocks(pg); | 458 | tegra_powergate_enable_clocks(pg); |
477 | usleep_range(10, 20); | 459 | usleep_range(10, 20); |
478 | tegra_powergate_reset_deassert(pg); | 460 | reset_control_deassert(pg->reset); |
479 | usleep_range(10, 20); | 461 | usleep_range(10, 20); |
480 | 462 | ||
481 | disable_clks: | 463 | disable_clks: |
@@ -586,8 +568,8 @@ int tegra_powergate_sequence_power_up(unsigned int id, struct clk *clk, | |||
586 | pg.id = id; | 568 | pg.id = id; |
587 | pg.clks = &clk; | 569 | pg.clks = &clk; |
588 | pg.num_clks = 1; | 570 | pg.num_clks = 1; |
589 | pg.resets = &rst; | 571 | pg.reset = rst; |
590 | pg.num_resets = 1; | 572 | pg.pmc = pmc; |
591 | 573 | ||
592 | err = tegra_powergate_power_up(&pg, false); | 574 | err = tegra_powergate_power_up(&pg, false); |
593 | if (err) | 575 | if (err) |
@@ -775,45 +757,22 @@ err: | |||
775 | static int tegra_powergate_of_get_resets(struct tegra_powergate *pg, | 757 | static int tegra_powergate_of_get_resets(struct tegra_powergate *pg, |
776 | struct device_node *np, bool off) | 758 | struct device_node *np, bool off) |
777 | { | 759 | { |
778 | struct reset_control *rst; | ||
779 | unsigned int i, count; | ||
780 | int err; | 760 | int err; |
781 | 761 | ||
782 | count = of_count_phandle_with_args(np, "resets", "#reset-cells"); | 762 | pg->reset = of_reset_control_array_get_exclusive(np); |
783 | if (count == 0) | 763 | if (IS_ERR(pg->reset)) { |
784 | return -ENODEV; | 764 | err = PTR_ERR(pg->reset); |
785 | 765 | pr_err("failed to get device resets: %d\n", err); | |
786 | pg->resets = kcalloc(count, sizeof(rst), GFP_KERNEL); | 766 | return err; |
787 | if (!pg->resets) | ||
788 | return -ENOMEM; | ||
789 | |||
790 | for (i = 0; i < count; i++) { | ||
791 | pg->resets[i] = of_reset_control_get_by_index(np, i); | ||
792 | if (IS_ERR(pg->resets[i])) { | ||
793 | err = PTR_ERR(pg->resets[i]); | ||
794 | goto error; | ||
795 | } | ||
796 | |||
797 | if (off) | ||
798 | err = reset_control_assert(pg->resets[i]); | ||
799 | else | ||
800 | err = reset_control_deassert(pg->resets[i]); | ||
801 | |||
802 | if (err) { | ||
803 | reset_control_put(pg->resets[i]); | ||
804 | goto error; | ||
805 | } | ||
806 | } | 767 | } |
807 | 768 | ||
808 | pg->num_resets = count; | 769 | if (off) |
809 | 770 | err = reset_control_assert(pg->reset); | |
810 | return 0; | 771 | else |
811 | 772 | err = reset_control_deassert(pg->reset); | |
812 | error: | ||
813 | while (i--) | ||
814 | reset_control_put(pg->resets[i]); | ||
815 | 773 | ||
816 | kfree(pg->resets); | 774 | if (err) |
775 | reset_control_put(pg->reset); | ||
817 | 776 | ||
818 | return err; | 777 | return err; |
819 | } | 778 | } |
@@ -905,10 +864,7 @@ remove_genpd: | |||
905 | pm_genpd_remove(&pg->genpd); | 864 | pm_genpd_remove(&pg->genpd); |
906 | 865 | ||
907 | remove_resets: | 866 | remove_resets: |
908 | while (pg->num_resets--) | 867 | reset_control_put(pg->reset); |
909 | reset_control_put(pg->resets[pg->num_resets]); | ||
910 | |||
911 | kfree(pg->resets); | ||
912 | 868 | ||
913 | remove_clks: | 869 | remove_clks: |
914 | while (pg->num_clks--) | 870 | while (pg->num_clks--) |
@@ -1815,6 +1771,7 @@ static const struct tegra_pmc_soc tegra210_pmc_soc = { | |||
1815 | .cpu_powergates = tegra210_cpu_powergates, | 1771 | .cpu_powergates = tegra210_cpu_powergates, |
1816 | .has_tsense_reset = true, | 1772 | .has_tsense_reset = true, |
1817 | .has_gpu_clamps = true, | 1773 | .has_gpu_clamps = true, |
1774 | .needs_mbist_war = true, | ||
1818 | .num_io_pads = ARRAY_SIZE(tegra210_io_pads), | 1775 | .num_io_pads = ARRAY_SIZE(tegra210_io_pads), |
1819 | .io_pads = tegra210_io_pads, | 1776 | .io_pads = tegra210_io_pads, |
1820 | .regs = &tegra20_pmc_regs, | 1777 | .regs = &tegra20_pmc_regs, |
@@ -1920,6 +1877,7 @@ static const struct tegra_pmc_soc tegra186_pmc_soc = { | |||
1920 | }; | 1877 | }; |
1921 | 1878 | ||
1922 | static const struct of_device_id tegra_pmc_match[] = { | 1879 | static const struct of_device_id tegra_pmc_match[] = { |
1880 | { .compatible = "nvidia,tegra194-pmc", .data = &tegra186_pmc_soc }, | ||
1923 | { .compatible = "nvidia,tegra186-pmc", .data = &tegra186_pmc_soc }, | 1881 | { .compatible = "nvidia,tegra186-pmc", .data = &tegra186_pmc_soc }, |
1924 | { .compatible = "nvidia,tegra210-pmc", .data = &tegra210_pmc_soc }, | 1882 | { .compatible = "nvidia,tegra210-pmc", .data = &tegra210_pmc_soc }, |
1925 | { .compatible = "nvidia,tegra132-pmc", .data = &tegra124_pmc_soc }, | 1883 | { .compatible = "nvidia,tegra132-pmc", .data = &tegra124_pmc_soc }, |
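The powergate cleanup above rests on of_reset_control_array_get_exclusive(), which collects every phandle in a node's resets property behind one reset_control handle; reset_control_assert() and reset_control_deassert() on that handle fan out to all lines internally, so the per-powergate pointer array, count field and error-unwind loops all fold away. The companion tegra210_clk_handle_mbist_war() is declared __weak with a return-0 stub here, the apparent intent being that the Tegra210 clock driver supplies the real work-around while the needs_mbist_war flag gates when it is invoked.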
diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c index e9843c53fe31..e5fd5ed217da 100644 --- a/drivers/tee/optee/core.c +++ b/drivers/tee/optee/core.c | |||
@@ -356,6 +356,27 @@ static bool optee_msg_api_uid_is_optee_api(optee_invoke_fn *invoke_fn) | |||
356 | return false; | 356 | return false; |
357 | } | 357 | } |
358 | 358 | ||
359 | static void optee_msg_get_os_revision(optee_invoke_fn *invoke_fn) | ||
360 | { | ||
361 | union { | ||
362 | struct arm_smccc_res smccc; | ||
363 | struct optee_smc_call_get_os_revision_result result; | ||
364 | } res = { | ||
365 | .result = { | ||
366 | .build_id = 0 | ||
367 | } | ||
368 | }; | ||
369 | |||
370 | invoke_fn(OPTEE_SMC_CALL_GET_OS_REVISION, 0, 0, 0, 0, 0, 0, 0, | ||
371 | &res.smccc); | ||
372 | |||
373 | if (res.result.build_id) | ||
374 | pr_info("revision %lu.%lu (%08lx)", res.result.major, | ||
375 | res.result.minor, res.result.build_id); | ||
376 | else | ||
377 | pr_info("revision %lu.%lu", res.result.major, res.result.minor); | ||
378 | } | ||
379 | |||
359 | static bool optee_msg_api_revision_is_compatible(optee_invoke_fn *invoke_fn) | 380 | static bool optee_msg_api_revision_is_compatible(optee_invoke_fn *invoke_fn) |
360 | { | 381 | { |
361 | union { | 382 | union { |
@@ -547,6 +568,8 @@ static struct optee *optee_probe(struct device_node *np) | |||
547 | return ERR_PTR(-EINVAL); | 568 | return ERR_PTR(-EINVAL); |
548 | } | 569 | } |
549 | 570 | ||
571 | optee_msg_get_os_revision(invoke_fn); | ||
572 | |||
550 | if (!optee_msg_api_revision_is_compatible(invoke_fn)) { | 573 | if (!optee_msg_api_revision_is_compatible(invoke_fn)) { |
551 | pr_warn("api revision mismatch\n"); | 574 | pr_warn("api revision mismatch\n"); |
552 | return ERR_PTR(-EINVAL); | 575 | return ERR_PTR(-EINVAL); |
diff --git a/drivers/tee/optee/optee_smc.h b/drivers/tee/optee/optee_smc.h index 7cd327243ada..bbf0cf028c16 100644 --- a/drivers/tee/optee/optee_smc.h +++ b/drivers/tee/optee/optee_smc.h | |||
@@ -112,12 +112,20 @@ struct optee_smc_calls_revision_result { | |||
112 | * Trusted OS, not of the API. | 112 | * Trusted OS, not of the API. |
113 | * | 113 | * |
114 | * Returns revision in a0-1 in the same way as OPTEE_SMC_CALLS_REVISION | 114 | * Returns revision in a0-1 in the same way as OPTEE_SMC_CALLS_REVISION |
115 | * described above. | 115 | * described above. May optionally return a 32-bit build identifier in a2, |
116 | * with zero meaning unspecified. | ||
116 | */ | 117 | */ |
117 | #define OPTEE_SMC_FUNCID_GET_OS_REVISION OPTEE_MSG_FUNCID_GET_OS_REVISION | 118 | #define OPTEE_SMC_FUNCID_GET_OS_REVISION OPTEE_MSG_FUNCID_GET_OS_REVISION |
118 | #define OPTEE_SMC_CALL_GET_OS_REVISION \ | 119 | #define OPTEE_SMC_CALL_GET_OS_REVISION \ |
119 | OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_OS_REVISION) | 120 | OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_OS_REVISION) |
120 | 121 | ||
122 | struct optee_smc_call_get_os_revision_result { | ||
123 | unsigned long major; | ||
124 | unsigned long minor; | ||
125 | unsigned long build_id; | ||
126 | unsigned long reserved1; | ||
127 | }; | ||
128 | |||
121 | /* | 129 | /* |
122 | * Call with struct optee_msg_arg as argument | 130 | * Call with struct optee_msg_arg as argument |
123 | * | 131 | * |
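The new result struct above is laid out to overlay struct arm_smccc_res from include/linux/arm-smccc.h: both are four unsigned longs, so major, minor and build_id alias SMC return registers a0, a1 and a2. That is what lets optee_msg_get_os_revision() in core.c read the result through a union. An illustrative compile-time check of that layout assumption (the function name is invented):

#include <linux/arm-smccc.h>
#include <linux/build_bug.h>
#include <linux/compiler.h>
#include <linux/stddef.h>
#include "optee_smc.h"

/* Fails the build if build_id ever stops aliasing register a2. */
static void __maybe_unused optee_check_os_revision_layout(void)
{
        BUILD_BUG_ON(offsetof(struct optee_smc_call_get_os_revision_result,
                              build_id) !=
                     offsetof(struct arm_smccc_res, a2));
}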
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index 6c4b200a4560..0124a91c8d71 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c | |||
@@ -693,7 +693,7 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc, | |||
693 | { | 693 | { |
694 | struct tee_device *teedev; | 694 | struct tee_device *teedev; |
695 | void *ret; | 695 | void *ret; |
696 | int rc; | 696 | int rc, max_id; |
697 | int offs = 0; | 697 | int offs = 0; |
698 | 698 | ||
699 | if (!teedesc || !teedesc->name || !teedesc->ops || | 699 | if (!teedesc || !teedesc->name || !teedesc->ops || |
@@ -707,16 +707,20 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc, | |||
707 | goto err; | 707 | goto err; |
708 | } | 708 | } |
709 | 709 | ||
710 | if (teedesc->flags & TEE_DESC_PRIVILEGED) | 710 | max_id = TEE_NUM_DEVICES / 2; |
711 | |||
712 | if (teedesc->flags & TEE_DESC_PRIVILEGED) { | ||
711 | offs = TEE_NUM_DEVICES / 2; | 713 | offs = TEE_NUM_DEVICES / 2; |
714 | max_id = TEE_NUM_DEVICES; | ||
715 | } | ||
712 | 716 | ||
713 | spin_lock(&driver_lock); | 717 | spin_lock(&driver_lock); |
714 | teedev->id = find_next_zero_bit(dev_mask, TEE_NUM_DEVICES, offs); | 718 | teedev->id = find_next_zero_bit(dev_mask, max_id, offs); |
715 | if (teedev->id < TEE_NUM_DEVICES) | 719 | if (teedev->id < max_id) |
716 | set_bit(teedev->id, dev_mask); | 720 | set_bit(teedev->id, dev_mask); |
717 | spin_unlock(&driver_lock); | 721 | spin_unlock(&driver_lock); |
718 | 722 | ||
719 | if (teedev->id >= TEE_NUM_DEVICES) { | 723 | if (teedev->id >= max_id) { |
720 | ret = ERR_PTR(-ENOMEM); | 724 | ret = ERR_PTR(-ENOMEM); |
721 | goto err; | 725 | goto err; |
722 | } | 726 | } |
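The tee_core.c hunk above fixes a subtle allocator bug: device IDs are split into an unprivileged lower half and a privileged upper half of dev_mask, but the bitmap search was previously bounded by TEE_NUM_DEVICES in both cases, so once the lower half filled up an unprivileged device could take an ID from the privileged range. A condensed sketch of the corrected logic, assuming TEE_NUM_DEVICES is 32 as defined in tee_core.c and omitting the driver_lock serialization:

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/errno.h>

#define TEE_NUM_DEVICES 32

static DECLARE_BITMAP(dev_mask, TEE_NUM_DEVICES);

static int alloc_teedev_id(bool privileged)
{
        int offs = privileged ? TEE_NUM_DEVICES / 2 : 0;
        int max_id = privileged ? TEE_NUM_DEVICES : TEE_NUM_DEVICES / 2;
        int id;

        /* The search now stops at max_id, so an unprivileged device can
         * no longer spill into the privileged half of the ID space. */
        id = find_next_zero_bit(dev_mask, max_id, offs);
        if (id >= max_id)
                return -ENOMEM;
        set_bit(id, dev_mask);
        return id;
}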
diff --git a/include/dt-bindings/power/mt2712-power.h b/include/dt-bindings/power/mt2712-power.h index 92b46d772fae..2c147817efc2 100644 --- a/include/dt-bindings/power/mt2712-power.h +++ b/include/dt-bindings/power/mt2712-power.h | |||
@@ -22,5 +22,8 @@ | |||
22 | #define MT2712_POWER_DOMAIN_USB 5 | 22 | #define MT2712_POWER_DOMAIN_USB 5 |
23 | #define MT2712_POWER_DOMAIN_USB2 6 | 23 | #define MT2712_POWER_DOMAIN_USB2 6 |
24 | #define MT2712_POWER_DOMAIN_MFG 7 | 24 | #define MT2712_POWER_DOMAIN_MFG 7 |
25 | #define MT2712_POWER_DOMAIN_MFG_SC1 8 | ||
26 | #define MT2712_POWER_DOMAIN_MFG_SC2 9 | ||
27 | #define MT2712_POWER_DOMAIN_MFG_SC3 10 | ||
25 | 28 | ||
26 | #endif /* _DT_BINDINGS_POWER_MT2712_POWER_H */ | 29 | #endif /* _DT_BINDINGS_POWER_MT2712_POWER_H */ |
diff --git a/include/dt-bindings/power/mt7623a-power.h b/include/dt-bindings/power/mt7623a-power.h new file mode 100644 index 000000000000..2544822aa76b --- /dev/null +++ b/include/dt-bindings/power/mt7623a-power.h | |||
@@ -0,0 +1,10 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | #ifndef _DT_BINDINGS_POWER_MT7623A_POWER_H | ||
3 | #define _DT_BINDINGS_POWER_MT7623A_POWER_H | ||
4 | |||
5 | #define MT7623A_POWER_DOMAIN_CONN 0 | ||
6 | #define MT7623A_POWER_DOMAIN_ETH 1 | ||
7 | #define MT7623A_POWER_DOMAIN_HIF 2 | ||
8 | #define MT7623A_POWER_DOMAIN_IFR_MSC 3 | ||
9 | |||
10 | #endif /* _DT_BINDINGS_POWER_MT7623A_POWER_H */ | ||
diff --git a/include/dt-bindings/reset/stm32mp1-resets.h b/include/dt-bindings/reset/stm32mp1-resets.h new file mode 100644 index 000000000000..f0c3aaef67a0 --- /dev/null +++ b/include/dt-bindings/reset/stm32mp1-resets.h | |||
@@ -0,0 +1,108 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ | ||
2 | /* | ||
3 | * Copyright (C) STMicroelectronics 2018 - All Rights Reserved | ||
4 | * Author: Gabriel Fernandez <gabriel.fernandez@st.com> for STMicroelectronics. | ||
5 | */ | ||
6 | |||
7 | #ifndef _DT_BINDINGS_STM32MP1_RESET_H_ | ||
8 | #define _DT_BINDINGS_STM32MP1_RESET_H_ | ||
9 | |||
10 | #define LTDC_R 3072 | ||
11 | #define DSI_R 3076 | ||
12 | #define DDRPERFM_R 3080 | ||
13 | #define USBPHY_R 3088 | ||
14 | #define SPI6_R 3136 | ||
15 | #define I2C4_R 3138 | ||
16 | #define I2C6_R 3139 | ||
17 | #define USART1_R 3140 | ||
18 | #define STGEN_R 3156 | ||
19 | #define GPIOZ_R 3200 | ||
20 | #define CRYP1_R 3204 | ||
21 | #define HASH1_R 3205 | ||
22 | #define RNG1_R 3206 | ||
23 | #define AXIM_R 3216 | ||
24 | #define GPU_R 3269 | ||
25 | #define ETHMAC_R 3274 | ||
26 | #define FMC_R 3276 | ||
27 | #define QSPI_R 3278 | ||
28 | #define SDMMC1_R 3280 | ||
29 | #define SDMMC2_R 3281 | ||
30 | #define CRC1_R 3284 | ||
31 | #define USBH_R 3288 | ||
32 | #define MDMA_R 3328 | ||
33 | #define MCU_R 8225 | ||
34 | #define TIM2_R 19456 | ||
35 | #define TIM3_R 19457 | ||
36 | #define TIM4_R 19458 | ||
37 | #define TIM5_R 19459 | ||
38 | #define TIM6_R 19460 | ||
39 | #define TIM7_R 19461 | ||
40 | #define TIM12_R 19462 | ||
41 | #define TIM13_R 19463 | ||
42 | #define TIM14_R 19464 | ||
43 | #define LPTIM1_R 19465 | ||
44 | #define SPI2_R 19467 | ||
45 | #define SPI3_R 19468 | ||
46 | #define USART2_R 19470 | ||
47 | #define USART3_R 19471 | ||
48 | #define UART4_R 19472 | ||
49 | #define UART5_R 19473 | ||
50 | #define UART7_R 19474 | ||
51 | #define UART8_R 19475 | ||
52 | #define I2C1_R 19477 | ||
53 | #define I2C2_R 19478 | ||
54 | #define I2C3_R 19479 | ||
55 | #define I2C5_R 19480 | ||
56 | #define SPDIF_R 19482 | ||
57 | #define CEC_R 19483 | ||
58 | #define DAC12_R 19485 | ||
59 | #define MDIO_R 19847 | ||
60 | #define TIM1_R 19520 | ||
61 | #define TIM8_R 19521 | ||
62 | #define TIM15_R 19522 | ||
63 | #define TIM16_R 19523 | ||
64 | #define TIM17_R 19524 | ||
65 | #define SPI1_R 19528 | ||
66 | #define SPI4_R 19529 | ||
67 | #define SPI5_R 19530 | ||
68 | #define USART6_R 19533 | ||
69 | #define SAI1_R 19536 | ||
70 | #define SAI2_R 19537 | ||
71 | #define SAI3_R 19538 | ||
72 | #define DFSDM_R 19540 | ||
73 | #define FDCAN_R 19544 | ||
74 | #define LPTIM2_R 19584 | ||
75 | #define LPTIM3_R 19585 | ||
76 | #define LPTIM4_R 19586 | ||
77 | #define LPTIM5_R 19587 | ||
78 | #define SAI4_R 19592 | ||
79 | #define SYSCFG_R 19595 | ||
80 | #define VREF_R 19597 | ||
81 | #define TMPSENS_R 19600 | ||
82 | #define PMBCTRL_R 19601 | ||
83 | #define DMA1_R 19648 | ||
84 | #define DMA2_R 19649 | ||
85 | #define DMAMUX_R 19650 | ||
86 | #define ADC12_R 19653 | ||
87 | #define USBO_R 19656 | ||
88 | #define SDMMC3_R 19664 | ||
89 | #define CAMITF_R 19712 | ||
90 | #define CRYP2_R 19716 | ||
91 | #define HASH2_R 19717 | ||
92 | #define RNG2_R 19718 | ||
93 | #define CRC2_R 19719 | ||
94 | #define HSEM_R 19723 | ||
95 | #define MBOX_R 19724 | ||
96 | #define GPIOA_R 19776 | ||
97 | #define GPIOB_R 19777 | ||
98 | #define GPIOC_R 19778 | ||
99 | #define GPIOD_R 19779 | ||
100 | #define GPIOE_R 19780 | ||
101 | #define GPIOF_R 19781 | ||
102 | #define GPIOG_R 19782 | ||
103 | #define GPIOH_R 19783 | ||
104 | #define GPIOI_R 19784 | ||
105 | #define GPIOJ_R 19785 | ||
106 | #define GPIOK_R 19786 | ||
107 | |||
108 | #endif /* _DT_BINDINGS_STM32MP1_RESET_H_ */ | ||
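The reset IDs above appear to follow a fixed encoding rather than being arbitrary: id = (register offset / 4) * 32 + bit position, e.g. TIM2_R 19456 = 608 * 32 places TIM2 at bit 0 of the RCC register at offset 0x980, with TIM3..TIM7 on the following bits. A decoding sketch under that assumption (the helper name is invented):

/* Hedged sketch: split a reset ID into register byte offset and bit,
 * assuming the id = (offset / 4) * 32 + bit convention the values
 * above appear to follow. */
static inline void stm32mp1_reset_decode(unsigned int id,
                                         unsigned int *offset,
                                         unsigned int *bit)
{
        *offset = (id / 32) * 4;        /* byte offset of the reset register */
        *bit = id % 32;                 /* bit within that register */
}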
diff --git a/include/linux/hwmon.h b/include/linux/hwmon.h index ceb751987c40..e5fd2707b6df 100644 --- a/include/linux/hwmon.h +++ b/include/linux/hwmon.h | |||
@@ -29,6 +29,7 @@ enum hwmon_sensor_types { | |||
29 | hwmon_humidity, | 29 | hwmon_humidity, |
30 | hwmon_fan, | 30 | hwmon_fan, |
31 | hwmon_pwm, | 31 | hwmon_pwm, |
32 | hwmon_max, | ||
32 | }; | 33 | }; |
33 | 34 | ||
34 | enum hwmon_chip_attributes { | 35 | enum hwmon_chip_attributes { |
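The new hwmon_max entry is a conventional end-of-enum sentinel: it lets code size arrays indexed by enum hwmon_sensor_types without a hand-maintained count, which is what the SCMI hwmon driver added in this series relies on. A minimal illustration (the array name is invented):

#include <linux/hwmon.h>
#include <linux/types.h>

/* One counter per sensor type; stays correctly sized if new types are
 * appended before hwmon_max. */
static u32 sensor_count[hwmon_max];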
diff --git a/include/linux/reset-controller.h b/include/linux/reset-controller.h index adb88f8cefbc..9326d671b6e6 100644 --- a/include/linux/reset-controller.h +++ b/include/linux/reset-controller.h | |||
@@ -27,12 +27,38 @@ struct device_node; | |||
27 | struct of_phandle_args; | 27 | struct of_phandle_args; |
28 | 28 | ||
29 | /** | 29 | /** |
30 | * struct reset_control_lookup - represents a single lookup entry | ||
31 | * | ||
32 | * @list: internal list of all reset lookup entries | ||
33 | * @provider: name of the reset controller device controlling this reset line | ||
34 | * @index: ID of the reset controller in the reset controller device | ||
35 | * @dev_id: name of the device associated with this reset line | ||
36 | * @con_id: name of the reset line (can be NULL) | ||
37 | */ | ||
38 | struct reset_control_lookup { | ||
39 | struct list_head list; | ||
40 | const char *provider; | ||
41 | unsigned int index; | ||
42 | const char *dev_id; | ||
43 | const char *con_id; | ||
44 | }; | ||
45 | |||
46 | #define RESET_LOOKUP(_provider, _index, _dev_id, _con_id) \ | ||
47 | { \ | ||
48 | .provider = _provider, \ | ||
49 | .index = _index, \ | ||
50 | .dev_id = _dev_id, \ | ||
51 | .con_id = _con_id, \ | ||
52 | } | ||
53 | |||
54 | /** | ||
30 | * struct reset_controller_dev - reset controller entity that might | 55 | * struct reset_controller_dev - reset controller entity that might |
31 | * provide multiple reset controls | 56 | * provide multiple reset controls |
32 | * @ops: a pointer to device specific struct reset_control_ops | 57 | * @ops: a pointer to device specific struct reset_control_ops |
33 | * @owner: kernel module of the reset controller driver | 58 | * @owner: kernel module of the reset controller driver |
34 | * @list: internal list of reset controller devices | 59 | * @list: internal list of reset controller devices |
35 | * @reset_control_head: head of internal list of requested reset controls | 60 | * @reset_control_head: head of internal list of requested reset controls |
61 | * @dev: corresponding driver model device struct | ||
36 | * @of_node: corresponding device tree node as phandle target | 62 | * @of_node: corresponding device tree node as phandle target |
37 | * @of_reset_n_cells: number of cells in reset line specifiers | 63 | * @of_reset_n_cells: number of cells in reset line specifiers |
38 | * @of_xlate: translation function to translate from specifier as found in the | 64 | * @of_xlate: translation function to translate from specifier as found in the |
@@ -44,6 +70,7 @@ struct reset_controller_dev { | |||
44 | struct module *owner; | 70 | struct module *owner; |
45 | struct list_head list; | 71 | struct list_head list; |
46 | struct list_head reset_control_head; | 72 | struct list_head reset_control_head; |
73 | struct device *dev; | ||
47 | struct device_node *of_node; | 74 | struct device_node *of_node; |
48 | int of_reset_n_cells; | 75 | int of_reset_n_cells; |
49 | int (*of_xlate)(struct reset_controller_dev *rcdev, | 76 | int (*of_xlate)(struct reset_controller_dev *rcdev, |
@@ -58,4 +85,7 @@ struct device; | |||
58 | int devm_reset_controller_register(struct device *dev, | 85 | int devm_reset_controller_register(struct device *dev, |
59 | struct reset_controller_dev *rcdev); | 86 | struct reset_controller_dev *rcdev); |
60 | 87 | ||
88 | void reset_controller_add_lookup(struct reset_control_lookup *lookup, | ||
89 | unsigned int num_entries); | ||
90 | |||
61 | #endif | 91 | #endif |
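These additions form the board-file lookup path from the "reset: add support for non-DT systems" and "reset: modify the way reset lookup works for board files" commits in this pull: a board registers a table mapping a provider name and line index to a consumer device, and the consumer's reset_control_get() resolves through it. A hedged sketch using the API exactly as declared above; "reset.0" and "mmc0" are made-up provider and consumer names:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/reset-controller.h>

static struct reset_control_lookup board_reset_lookups[] = {
        /* reset line 2 of provider "reset.0" is the mmc0 reset */
        RESET_LOOKUP("reset.0", 2, "mmc0", NULL),
};

static void __init board_register_resets(void)
{
        reset_controller_add_lookup(board_reset_lookups,
                                    ARRAY_SIZE(board_reset_lookups));
}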
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h new file mode 100644 index 000000000000..b458c87b866c --- /dev/null +++ b/include/linux/scmi_protocol.h | |||
@@ -0,0 +1,277 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | /* | ||
3 | * SCMI Message Protocol driver header | ||
4 | * | ||
5 | * Copyright (C) 2018 ARM Ltd. | ||
6 | */ | ||
7 | #include <linux/device.h> | ||
8 | #include <linux/types.h> | ||
9 | |||
10 | #define SCMI_MAX_STR_SIZE 16 | ||
11 | #define SCMI_MAX_NUM_RATES 16 | ||
12 | |||
13 | /** | ||
14 | * struct scmi_revision_info - version information structure | ||
15 | * | ||
16 | * @major_ver: Major ABI version. A change here implies a risk of breaking | ||
17 | * backward compatibility. | ||
18 | * @minor_ver: Minor ABI version. A change here implies a new feature | ||
19 | * addition or a backward-compatible change in the ABI. | ||
20 | * @num_protocols: Number of protocols that are implemented, excluding the | ||
21 | * base protocol. | ||
22 | * @num_agents: Number of agents in the system. | ||
23 | * @impl_ver: A vendor-specific implementation version. | ||
24 | * @vendor_id: A vendor identifier (null-terminated ASCII string) | ||
25 | * @sub_vendor_id: A sub-vendor identifier (null-terminated ASCII string) | ||
26 | */ | ||
27 | struct scmi_revision_info { | ||
28 | u16 major_ver; | ||
29 | u16 minor_ver; | ||
30 | u8 num_protocols; | ||
31 | u8 num_agents; | ||
32 | u32 impl_ver; | ||
33 | char vendor_id[SCMI_MAX_STR_SIZE]; | ||
34 | char sub_vendor_id[SCMI_MAX_STR_SIZE]; | ||
35 | }; | ||
36 | |||
37 | struct scmi_clock_info { | ||
38 | char name[SCMI_MAX_STR_SIZE]; | ||
39 | bool rate_discrete; | ||
40 | union { | ||
41 | struct { | ||
42 | int num_rates; | ||
43 | u64 rates[SCMI_MAX_NUM_RATES]; | ||
44 | } list; | ||
45 | struct { | ||
46 | u64 min_rate; | ||
47 | u64 max_rate; | ||
48 | u64 step_size; | ||
49 | } range; | ||
50 | }; | ||
51 | }; | ||
52 | |||
53 | struct scmi_handle; | ||
54 | |||
55 | /** | ||
56 | * struct scmi_clk_ops - represents the various operations provided | ||
57 | * by SCMI Clock Protocol | ||
58 | * | ||
59 | * @count_get: get the count of clocks provided by SCMI | ||
60 | * @info_get: get the information of the specified clock | ||
61 | * @rate_get: request the current clock rate of a clock | ||
62 | * @rate_set: set the clock rate of a clock | ||
63 | * @enable: enables the specified clock | ||
64 | * @disable: disables the specified clock | ||
65 | */ | ||
66 | struct scmi_clk_ops { | ||
67 | int (*count_get)(const struct scmi_handle *handle); | ||
68 | |||
69 | const struct scmi_clock_info *(*info_get) | ||
70 | (const struct scmi_handle *handle, u32 clk_id); | ||
71 | int (*rate_get)(const struct scmi_handle *handle, u32 clk_id, | ||
72 | u64 *rate); | ||
73 | int (*rate_set)(const struct scmi_handle *handle, u32 clk_id, | ||
74 | u32 config, u64 rate); | ||
75 | int (*enable)(const struct scmi_handle *handle, u32 clk_id); | ||
76 | int (*disable)(const struct scmi_handle *handle, u32 clk_id); | ||
77 | }; | ||
78 | |||
79 | /** | ||
80 | * struct scmi_perf_ops - represents the various operations provided | ||
81 | * by SCMI Performance Protocol | ||
82 | * | ||
83 | * @limits_set: sets limits on the performance level of a domain | ||
84 | * @limits_get: gets limits on the performance level of a domain | ||
85 | * @level_set: sets the performance level of a domain | ||
86 | * @level_get: gets the performance level of a domain | ||
87 | * @device_domain_id: gets the scmi domain id for a given device | ||
88 | * @get_transition_latency: gets the DVFS transition latency for a given device | ||
89 | * @add_opps_to_device: adds all the OPPs for a given device | ||
90 | * @freq_set: sets the frequency for a given device using sustained frequency | ||
91 | * to sustained performance level mapping | ||
92 | * @freq_get: gets the frequency for a given device using sustained frequency | ||
93 | * to sustained performance level mapping | ||
94 | */ | ||
95 | struct scmi_perf_ops { | ||
96 | int (*limits_set)(const struct scmi_handle *handle, u32 domain, | ||
97 | u32 max_perf, u32 min_perf); | ||
98 | int (*limits_get)(const struct scmi_handle *handle, u32 domain, | ||
99 | u32 *max_perf, u32 *min_perf); | ||
100 | int (*level_set)(const struct scmi_handle *handle, u32 domain, | ||
101 | u32 level, bool poll); | ||
102 | int (*level_get)(const struct scmi_handle *handle, u32 domain, | ||
103 | u32 *level, bool poll); | ||
104 | int (*device_domain_id)(struct device *dev); | ||
105 | int (*get_transition_latency)(const struct scmi_handle *handle, | ||
106 | struct device *dev); | ||
107 | int (*add_opps_to_device)(const struct scmi_handle *handle, | ||
108 | struct device *dev); | ||
109 | int (*freq_set)(const struct scmi_handle *handle, u32 domain, | ||
110 | unsigned long rate, bool poll); | ||
111 | int (*freq_get)(const struct scmi_handle *handle, u32 domain, | ||
112 | unsigned long *rate, bool poll); | ||
113 | }; | ||
114 | |||
115 | /** | ||
116 | * struct scmi_power_ops - represents the various operations provided | ||
117 | * by SCMI Power Protocol | ||
118 | * | ||
119 | * @num_domains_get: get the count of power domains provided by SCMI | ||
120 | * @name_get: gets the name of a power domain | ||
121 | * @state_set: sets the power state of a power domain | ||
122 | * @state_get: gets the power state of a power domain | ||
123 | */ | ||
124 | struct scmi_power_ops { | ||
125 | int (*num_domains_get)(const struct scmi_handle *handle); | ||
126 | char *(*name_get)(const struct scmi_handle *handle, u32 domain); | ||
127 | #define SCMI_POWER_STATE_TYPE_SHIFT 30 | ||
128 | #define SCMI_POWER_STATE_ID_MASK (BIT(28) - 1) | ||
129 | #define SCMI_POWER_STATE_PARAM(type, id) \ | ||
130 | ((((type) & BIT(0)) << SCMI_POWER_STATE_TYPE_SHIFT) | \ | ||
131 | ((id) & SCMI_POWER_STATE_ID_MASK)) | ||
132 | #define SCMI_POWER_STATE_GENERIC_ON SCMI_POWER_STATE_PARAM(0, 0) | ||
133 | #define SCMI_POWER_STATE_GENERIC_OFF SCMI_POWER_STATE_PARAM(1, 0) | ||
134 | int (*state_set)(const struct scmi_handle *handle, u32 domain, | ||
135 | u32 state); | ||
136 | int (*state_get)(const struct scmi_handle *handle, u32 domain, | ||
137 | u32 *state); | ||
138 | }; | ||
139 | |||
140 | struct scmi_sensor_info { | ||
141 | u32 id; | ||
142 | u8 type; | ||
143 | char name[SCMI_MAX_STR_SIZE]; | ||
144 | }; | ||
145 | |||
146 | /* | ||
147 | * Partial list from Distributed Management Task Force (DMTF) specification: | ||
148 | * DSP0249 (Platform Level Data Model specification) | ||
149 | */ | ||
150 | enum scmi_sensor_class { | ||
151 | NONE = 0x0, | ||
152 | TEMPERATURE_C = 0x2, | ||
153 | VOLTAGE = 0x5, | ||
154 | CURRENT = 0x6, | ||
155 | POWER = 0x7, | ||
156 | ENERGY = 0x8, | ||
157 | }; | ||
158 | |||
159 | /** | ||
160 | * struct scmi_sensor_ops - represents the various operations provided | ||
161 | * by SCMI Sensor Protocol | ||
162 | * | ||
163 | * @count_get: get the count of sensors provided by SCMI | ||
164 | * @info_get: get the information of the specified sensor | ||
165 | * @configuration_set: controls notifications on cross-over events for | ||
166 | * the trip-points | ||
167 | * @trip_point_set: selects and configures a trip-point of interest | ||
168 | * @reading_get: gets the current value of the sensor | ||
169 | */ | ||
170 | struct scmi_sensor_ops { | ||
171 | int (*count_get)(const struct scmi_handle *handle); | ||
172 | |||
173 | const struct scmi_sensor_info *(*info_get) | ||
174 | (const struct scmi_handle *handle, u32 sensor_id); | ||
175 | int (*configuration_set)(const struct scmi_handle *handle, | ||
176 | u32 sensor_id); | ||
177 | int (*trip_point_set)(const struct scmi_handle *handle, u32 sensor_id, | ||
178 | u8 trip_id, u64 trip_value); | ||
179 | int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id, | ||
180 | bool async, u64 *value); | ||
181 | }; | ||
182 | |||
183 | /** | ||
184 | * struct scmi_handle - Handle returned to ARM SCMI clients for usage. | ||
185 | * | ||
186 | * @dev: pointer to the SCMI device | ||
187 | * @version: pointer to the structure containing SCMI version information | ||
188 | * @power_ops: pointer to set of power protocol operations | ||
189 | * @perf_ops: pointer to set of performance protocol operations | ||
190 | * @clk_ops: pointer to set of clock protocol operations | ||
191 | * @sensor_ops: pointer to set of sensor protocol operations | ||
192 | */ | ||
193 | struct scmi_handle { | ||
194 | struct device *dev; | ||
195 | struct scmi_revision_info *version; | ||
196 | struct scmi_perf_ops *perf_ops; | ||
197 | struct scmi_clk_ops *clk_ops; | ||
198 | struct scmi_power_ops *power_ops; | ||
199 | struct scmi_sensor_ops *sensor_ops; | ||
200 | /* for protocol internal use */ | ||
201 | void *perf_priv; | ||
202 | void *clk_priv; | ||
203 | void *power_priv; | ||
204 | void *sensor_priv; | ||
205 | }; | ||
206 | |||
207 | enum scmi_std_protocol { | ||
208 | SCMI_PROTOCOL_BASE = 0x10, | ||
209 | SCMI_PROTOCOL_POWER = 0x11, | ||
210 | SCMI_PROTOCOL_SYSTEM = 0x12, | ||
211 | SCMI_PROTOCOL_PERF = 0x13, | ||
212 | SCMI_PROTOCOL_CLOCK = 0x14, | ||
213 | SCMI_PROTOCOL_SENSOR = 0x15, | ||
214 | }; | ||
215 | |||
216 | struct scmi_device { | ||
217 | u32 id; | ||
218 | u8 protocol_id; | ||
219 | struct device dev; | ||
220 | struct scmi_handle *handle; | ||
221 | }; | ||
222 | |||
223 | #define to_scmi_dev(d) container_of(d, struct scmi_device, dev) | ||
224 | |||
225 | struct scmi_device * | ||
226 | scmi_device_create(struct device_node *np, struct device *parent, int protocol); | ||
227 | void scmi_device_destroy(struct scmi_device *scmi_dev); | ||
228 | |||
229 | struct scmi_device_id { | ||
230 | u8 protocol_id; | ||
231 | }; | ||
232 | |||
233 | struct scmi_driver { | ||
234 | const char *name; | ||
235 | int (*probe)(struct scmi_device *sdev); | ||
236 | void (*remove)(struct scmi_device *sdev); | ||
237 | const struct scmi_device_id *id_table; | ||
238 | |||
239 | struct device_driver driver; | ||
240 | }; | ||
241 | |||
242 | #define to_scmi_driver(d) container_of(d, struct scmi_driver, driver) | ||
243 | |||
244 | #ifdef CONFIG_ARM_SCMI_PROTOCOL | ||
245 | int scmi_driver_register(struct scmi_driver *driver, | ||
246 | struct module *owner, const char *mod_name); | ||
247 | void scmi_driver_unregister(struct scmi_driver *driver); | ||
248 | #else | ||
249 | static inline int | ||
250 | scmi_driver_register(struct scmi_driver *driver, struct module *owner, | ||
251 | const char *mod_name) | ||
252 | { | ||
253 | return -EINVAL; | ||
254 | } | ||
255 | |||
256 | static inline void scmi_driver_unregister(struct scmi_driver *driver) {} | ||
257 | #endif /* CONFIG_ARM_SCMI_PROTOCOL */ | ||
258 | |||
259 | #define scmi_register(driver) \ | ||
260 | scmi_driver_register(driver, THIS_MODULE, KBUILD_MODNAME) | ||
261 | #define scmi_unregister(driver) \ | ||
262 | scmi_driver_unregister(driver) | ||
263 | |||
264 | /** | ||
265 | * module_scmi_driver() - Helper macro for registering a scmi driver | ||
266 | * @__scmi_driver: scmi_driver structure | ||
267 | * | ||
268 | * Helper macro for scmi drivers to set up proper module init / exit | ||
269 | * functions. Replaces module_init() and module_exit() and keeps people from | ||
270 | * printing pointless things to the kernel log when their driver is loaded. | ||
271 | */ | ||
272 | #define module_scmi_driver(__scmi_driver) \ | ||
273 | module_driver(__scmi_driver, scmi_register, scmi_unregister) | ||
274 | |||
275 | typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *); | ||
276 | int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn); | ||
277 | void scmi_protocol_unregister(int protocol_id); | ||
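Taken together, the header defines the whole client-side contract: a driver declares the protocol it binds to via scmi_device_id, and its probe receives an scmi_device whose handle exposes the negotiated protocol operations. A minimal, hypothetical client that binds to the sensor protocol and logs the sensor count (all example names are invented):

#include <linux/module.h>
#include <linux/scmi_protocol.h>

static int example_sensors_probe(struct scmi_device *sdev)
{
        const struct scmi_handle *handle = sdev->handle;

        if (!handle || !handle->sensor_ops)
                return -ENODEV;

        dev_info(&sdev->dev, "%d sensors found\n",
                 handle->sensor_ops->count_get(handle));
        return 0;
}

static const struct scmi_device_id example_id_table[] = {
        { SCMI_PROTOCOL_SENSOR },
        { },
};

static struct scmi_driver example_sensors_driver = {
        .name = "example-scmi-sensors",
        .probe = example_sensors_probe,
        .id_table = example_id_table,
};
module_scmi_driver(example_sensors_driver);

MODULE_LICENSE("GPL v2");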
diff --git a/include/linux/soc/mediatek/infracfg.h b/include/linux/soc/mediatek/infracfg.h index b0a507d356ef..fd25f0148566 100644 --- a/include/linux/soc/mediatek/infracfg.h +++ b/include/linux/soc/mediatek/infracfg.h | |||
@@ -21,6 +21,10 @@ | |||
21 | #define MT8173_TOP_AXI_PROT_EN_MFG_M1 BIT(22) | 21 | #define MT8173_TOP_AXI_PROT_EN_MFG_M1 BIT(22) |
22 | #define MT8173_TOP_AXI_PROT_EN_MFG_SNOOP_OUT BIT(23) | 22 | #define MT8173_TOP_AXI_PROT_EN_MFG_SNOOP_OUT BIT(23) |
23 | 23 | ||
24 | #define MT2701_TOP_AXI_PROT_EN_MM_M0 BIT(1) | ||
25 | #define MT2701_TOP_AXI_PROT_EN_CONN_M BIT(2) | ||
26 | #define MT2701_TOP_AXI_PROT_EN_CONN_S BIT(8) | ||
27 | |||
24 | #define MT7622_TOP_AXI_PROT_EN_ETHSYS (BIT(3) | BIT(17)) | 28 | #define MT7622_TOP_AXI_PROT_EN_ETHSYS (BIT(3) | BIT(17)) |
25 | #define MT7622_TOP_AXI_PROT_EN_HIF0 (BIT(24) | BIT(25)) | 29 | #define MT7622_TOP_AXI_PROT_EN_HIF0 (BIT(24) | BIT(25)) |
26 | #define MT7622_TOP_AXI_PROT_EN_HIF1 (BIT(26) | BIT(27) | \ | 30 | #define MT7622_TOP_AXI_PROT_EN_HIF1 (BIT(26) | BIT(27) | \ |
diff --git a/include/linux/soc/samsung/exynos-pmu.h b/include/linux/soc/samsung/exynos-pmu.h index e57eb4b6cc5a..fc0b445bb36b 100644 --- a/include/linux/soc/samsung/exynos-pmu.h +++ b/include/linux/soc/samsung/exynos-pmu.h | |||
@@ -1,12 +1,9 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
1 | /* | 2 | /* |
2 | * Copyright (c) 2014 Samsung Electronics Co., Ltd. | 3 | * Copyright (c) 2014 Samsung Electronics Co., Ltd. |
3 | * http://www.samsung.com | 4 | * http://www.samsung.com |
4 | * | 5 | * |
5 | * Header for EXYNOS PMU Driver support | 6 | * Header for EXYNOS PMU Driver support |
6 | * | ||
7 | * This program is free software; you can redistribute it and/or modify | ||
8 | * it under the terms of the GNU General Public License version 2 as | ||
9 | * published by the Free Software Foundation. | ||
10 | */ | 7 | */ |
11 | 8 | ||
12 | #ifndef __LINUX_SOC_EXYNOS_PMU_H | 9 | #ifndef __LINUX_SOC_EXYNOS_PMU_H |
diff --git a/include/linux/soc/samsung/exynos-regs-pmu.h b/include/linux/soc/samsung/exynos-regs-pmu.h index bebdde5dccd6..66dcb9ec273a 100644 --- a/include/linux/soc/samsung/exynos-regs-pmu.h +++ b/include/linux/soc/samsung/exynos-regs-pmu.h | |||
@@ -1,14 +1,10 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
1 | /* | 2 | /* |
2 | * Copyright (c) 2010-2015 Samsung Electronics Co., Ltd. | 3 | * Copyright (c) 2010-2015 Samsung Electronics Co., Ltd. |
3 | * http://www.samsung.com | 4 | * http://www.samsung.com |
4 | * | 5 | * |
5 | * EXYNOS - Power management unit definition | 6 | * EXYNOS - Power management unit definition |
6 | * | 7 | * |
7 | * This program is free software; you can redistribute it and/or modify | ||
8 | * it under the terms of the GNU General Public License version 2 as | ||
9 | * published by the Free Software Foundation. | ||
10 | * | ||
11 | * | ||
12 | * Notice: | 8 | * Notice: |
13 | * This is not a list of all Exynos Power Management Unit SFRs. | 9 | * This is not a list of all Exynos Power Management Unit SFRs. |
14 | * There are too many of them, not mentioning subtle differences | 10 | * There are too many of them, not mentioning subtle differences |
diff --git a/include/soc/tegra/bpmp.h b/include/soc/tegra/bpmp.h index aeae4466dd25..e69e4c4d80ae 100644 --- a/include/soc/tegra/bpmp.h +++ b/include/soc/tegra/bpmp.h | |||
@@ -75,8 +75,8 @@ struct tegra_bpmp { | |||
75 | struct mbox_chan *channel; | 75 | struct mbox_chan *channel; |
76 | } mbox; | 76 | } mbox; |
77 | 77 | ||
78 | struct tegra_bpmp_channel *channels; | 78 | spinlock_t atomic_tx_lock; |
79 | unsigned int num_channels; | 79 | struct tegra_bpmp_channel *tx_channel, *rx_channel, *threaded_channels; |
80 | 80 | ||
81 | struct { | 81 | struct { |
82 | unsigned long *allocated; | 82 | unsigned long *allocated; |