author     Arnd Bergmann <arnd@arndb.de>    2018-03-07 10:45:07 -0500
committer  Arnd Bergmann <arnd@arndb.de>    2018-03-07 10:45:07 -0500
commit     f46f11dc1e86270935041fbc3920ba71a050a5fd (patch)
tree       e049f00eb27f573e2f3d90d379554c869f6469c3
parent     819d38e95f8ac3fb2e26e42e5c3f19d95a07c847 (diff)
parent     02f208c5c60549039445402505dea284e15f0f4f (diff)
Merge tag 'scmi-updates-4.17' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux into next/drivers
Pull "ARM SCMI support for v4.17" from Sudeep Holla: ARM System Control and Management Interface(SCMI)[1] is more flexible and easily extensible than any of the existing interfaces. Few existing as well as future ARM platforms provide micro-controllers to abstract various power and other system management tasks which have similar interfaces, both in terms of the functions that are provided by them, and in terms of how requests are communicated to them. There are quite a few protocols like ARM SCPI, TI SCI, QCOM RPM, Nvidia Tegra BPMP, and so on already. This specification is to standardize and avoid any further fragmentation in the design of such interface by various vendors. The current SCMI driver implementation is very basic and initial support. It lacks support for notifications, asynchronous/delayed response, perf/power statistics region and sensor register region. Mailbox is the only form of transport supported currently in the driver. SCMI supports interrupt based mailbox communication, where, on completion of the processing of a message, the caller receives an interrupt as well as polling for completion. SCMI is designed to minimize the dependency on the mailbox/transport hardware. So in terms of SCMI, each channel in the mailbox includes memory area, doorbell and completion interrupt. However the doorbell and completion interrupt is highly mailbox dependent which was bit of controversial as part of SCMI/mailbox discussions. Arnd and me discussed about the few aspects of SCMI and the mailbox framework: 1. Use of mailbox framework for doorbell type mailbox controller: - Such hardware may not require any data to be sent to signal the remote about the presence of a message. The channel will have in-built information on how to trigger the signal to the remote. There are few mailbox controller drivers which are purely doorbell based. e.g.QCOM IPC, STM, Tegra, ACPI PCC,..etc 2. Supporting other mailbox controller: - SCMI just needs a mechanism to signal the remote firmware. Such controller may need fixed message to be sent to trigger a doorbell. In such case we may need to get that data from DT and pass the same to the controller. It's not covered in the current DT binding, but can be extended as optional property in future. However handling notifications may be interesting on such mailbox, but again there is no way to interpret what the data field(remote message) means, it could be a bit mask or a number or don't-care. Arnd mentioned that he doesn't like the way the mailbox binding deals with doorbell-type hardware, but we do have quite a few precedent drivers already and changing the binding to add a data field would not make it any better, but could cause other problems. So he is happy with the status quo of SCMI implementation. 
[1] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0056a/index.html

* tag 'scmi-updates-4.17' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux:
  cpufreq: scmi: add support for fast frequency switching
  cpufreq: add support for CPU DVFS based on SCMI message protocol
  hwmon: add support for sensors exported via ARM SCMI
  hwmon: (core) Add hwmon_max to hwmon_sensor_types enumeration
  clk: add support for clocks provided by SCMI
  firmware: arm_scmi: add device power domain support using genpd
  firmware: arm_scmi: add per-protocol channels support using idr objects
  firmware: arm_scmi: refactor in preparation to support per-protocol channels
  firmware: arm_scmi: add option for polling based performance domain operations
  firmware: arm_scmi: add support for polling based SCMI transfers
  firmware: arm_scmi: probe and initialise all the supported protocols
  firmware: arm_scmi: add initial support for sensor protocol
  firmware: arm_scmi: add initial support for power protocol
  firmware: arm_scmi: add initial support for clock protocol
  firmware: arm_scmi: add initial support for performance protocol
  firmware: arm_scmi: add scmi protocol bus to enumerate protocol devices
  firmware: arm_scmi: add common infrastructure and support for base protocol
  firmware: arm_scmi: add basic driver infrastructure for SCMI
  dt-bindings: arm: add support for ARM System Control and Management Interface(SCMI) protocol
  dt-bindings: mailbox: add support for mailbox client shared memory
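A quick orientation before the diffstat: the series adds an SCMI protocol bus so that each protocol (clock, perf, power, sensors) is consumed by an ordinary driver. Below is a minimal, illustrative sketch of such a client, mirroring the pattern used by the clk-scmi and scmi-cpufreq drivers further down. The "scmi-example" name and the probe body are hypothetical; only the scmi_driver/scmi_device plumbing, the handle operations and SCMI_PROTOCOL_CLOCK come from this series.

/* Minimal SCMI protocol bus client sketch -- illustrative only */
#include <linux/device.h>
#include <linux/module.h>
#include <linux/scmi_protocol.h>

static int scmi_example_probe(struct scmi_device *sdev)
{
	const struct scmi_handle *handle = sdev->handle;

	/* The core fills in the per-protocol ops before probing us */
	if (!handle || !handle->clk_ops)
		return -ENODEV;

	dev_info(&sdev->dev, "firmware exposes %d clock(s)\n",
		 handle->clk_ops->count_get(handle));
	return 0;
}

static const struct scmi_device_id scmi_example_id_table[] = {
	{ SCMI_PROTOCOL_CLOCK },
	{ },
};
MODULE_DEVICE_TABLE(scmi, scmi_example_id_table);

static struct scmi_driver scmi_example_driver = {
	.name		= "scmi-example",
	.probe		= scmi_example_probe,
	.id_table	= scmi_example_id_table,
};
module_scmi_driver(scmi_example_driver);

MODULE_LICENSE("GPL v2");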
-rw-r--r--  Documentation/devicetree/bindings/arm/arm,scmi.txt | 179
-rw-r--r--  Documentation/devicetree/bindings/mailbox/mailbox.txt | 28
-rw-r--r--  MAINTAINERS | 11
-rw-r--r--  drivers/clk/Kconfig | 10
-rw-r--r--  drivers/clk/Makefile | 1
-rw-r--r--  drivers/clk/clk-scmi.c | 202
-rw-r--r--  drivers/cpufreq/Kconfig.arm | 11
-rw-r--r--  drivers/cpufreq/Makefile | 1
-rw-r--r--  drivers/cpufreq/scmi-cpufreq.c | 264
-rw-r--r--  drivers/firmware/Kconfig | 34
-rw-r--r--  drivers/firmware/Makefile | 1
-rw-r--r--  drivers/firmware/arm_scmi/Makefile | 5
-rw-r--r--  drivers/firmware/arm_scmi/base.c | 253
-rw-r--r--  drivers/firmware/arm_scmi/bus.c | 221
-rw-r--r--  drivers/firmware/arm_scmi/clock.c | 342
-rw-r--r--  drivers/firmware/arm_scmi/common.h | 105
-rw-r--r--  drivers/firmware/arm_scmi/driver.c | 871
-rw-r--r--  drivers/firmware/arm_scmi/perf.c | 481
-rw-r--r--  drivers/firmware/arm_scmi/power.c | 221
-rw-r--r--  drivers/firmware/arm_scmi/scmi_pm_domain.c | 129
-rw-r--r--  drivers/firmware/arm_scmi/sensors.c | 291
-rw-r--r--  drivers/hwmon/Kconfig | 12
-rw-r--r--  drivers/hwmon/Makefile | 1
-rw-r--r--  drivers/hwmon/scmi-hwmon.c | 225
-rw-r--r--  include/linux/hwmon.h | 1
-rw-r--r--  include/linux/scmi_protocol.h | 277
26 files changed, 4172 insertions(+), 5 deletions(-)
diff --git a/Documentation/devicetree/bindings/arm/arm,scmi.txt b/Documentation/devicetree/bindings/arm/arm,scmi.txt
new file mode 100644
index 000000000000..5f3719ab7075
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/arm,scmi.txt
@@ -0,0 +1,179 @@
1System Control and Management Interface (SCMI) Message Protocol
2----------------------------------------------------------
3
4The SCMI is intended to allow agents such as OSPM to manage various functions
5that are provided by the hardware platform it is running on, including power
6and performance functions.
7
8This binding is intended to define the interface that firmware implementing
9the SCMI, as described in ARM document number ARM DUI 0922B ("ARM System Control
10and Management Interface Platform Design Document")[0], provides to OSPM in
11the device tree.
12
13Required properties:
14
15The scmi node with the following properties shall be under the /firmware/ node.
16
17- compatible : shall be "arm,scmi"
18- mboxes: List of phandle and mailbox channel specifiers. It should contain
19 exactly one or two mailboxes: one for transmitting messages ("tx")
20 and an optional one for receiving notifications ("rx") if
21 supported.
22- shmem : List of phandles pointing to the shared memory (SHM) area, as per
23 the generic mailbox client binding.
24- #address-cells : should be '1' if the device has sub-nodes, maps to
25 protocol identifier for a given sub-node.
26- #size-cells : should be '0' as 'reg' property doesn't have any size
27 associated with it.
28
29Optional properties:
30
31- mbox-names: shall be "tx" or "rx" depending on mboxes entries.
32
33See Documentation/devicetree/bindings/mailbox/mailbox.txt for more details
34about the generic mailbox controller and client driver bindings.
35
36The mailbox is the only permitted method of calling the SCMI firmware.
37The mailbox doorbell is used as a mechanism to signal the presence of a
38message and/or notification.
39
40Each protocol supported shall have a sub-node with corresponding compatible
41as described in the following sections. If the platform supports a dedicated
42communication channel for a particular protocol, the three properties, namely
43mboxes, mbox-names and shmem, shall be present in the sub-node corresponding
44to that protocol.
45
46Clock/Performance bindings for the clocks/OPPs based on SCMI Message Protocol
47------------------------------------------------------------
48
49This binding uses the common clock binding[1].
50
51Required properties:
52- #clock-cells : Should be 1. Contains the Clock ID value used by SCMI commands.
53
54Power domain bindings for the power domains based on SCMI Message Protocol
55------------------------------------------------------------
56
57This binding for the SCMI power domain providers uses the generic power
58domain binding[2].
59
60Required properties:
61 - #power-domain-cells : Should be 1. Contains the device or the power
62 domain ID value used by SCMI commands.
63
64Sensor bindings for the sensors based on SCMI Message Protocol
65--------------------------------------------------------------
66SCMI provides an API to access the various sensors on the SoC.
67
68Required properties:
69- #thermal-sensor-cells: should be set to 1. This property follows the
70 thermal device tree bindings[3].
71
72 Valid cell values are raw identifiers (Sensor ID)
73 as used by the firmware. Refer to platform details
74 for your implementation for the IDs to use.
75
76SRAM and Shared Memory for SCMI
77-------------------------------
78
79A small area of SRAM is reserved for SCMI communication between application
80processors and SCP.
81
82The properties should follow the generic mmio-sram description found in [4]
83
84Each sub-node represents the reserved area for SCMI.
85
86Required sub-node properties:
87- reg : The base offset and size of the reserved area within the SRAM
88- compatible : should be "arm,scmi-shmem" for Non-secure SRAM based
89 shared memory
90
91[0] http://infocenter.arm.com/help/topic/com.arm.doc.den0056a/index.html
92[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
93[2] Documentation/devicetree/bindings/power/power_domain.txt
94[3] Documentation/devicetree/bindings/thermal/thermal.txt
95[4] Documentation/devicetree/bindings/sram/sram.txt
96
97Example:
98
99sram@50000000 {
100 compatible = "mmio-sram";
101 reg = <0x0 0x50000000 0x0 0x10000>;
102
103 #address-cells = <1>;
104 #size-cells = <1>;
105 ranges = <0 0x0 0x50000000 0x10000>;
106
107 cpu_scp_lpri: scp-shmem@0 {
108 compatible = "arm,scmi-shmem";
109 reg = <0x0 0x200>;
110 };
111
112 cpu_scp_hpri: scp-shmem@200 {
113 compatible = "arm,scmi-shmem";
114 reg = <0x200 0x200>;
115 };
116};
117
118mailbox@40000000 {
119 ....
120 #mbox-cells = <1>;
121 reg = <0x0 0x40000000 0x0 0x10000>;
122};
123
124firmware {
125
126 ...
127
128 scmi {
129 compatible = "arm,scmi";
130 mboxes = <&mailbox 0 &mailbox 1>;
131 mbox-names = "tx", "rx";
132 shmem = <&cpu_scp_lpri &cpu_scp_hpri>;
133 #address-cells = <1>;
134 #size-cells = <0>;
135
136 scmi_devpd: protocol@11 {
137 reg = <0x11>;
138 #power-domain-cells = <1>;
139 };
140
141 scmi_dvfs: protocol@13 {
142 reg = <0x13>;
143 #clock-cells = <1>;
144 };
145
146 scmi_clk: protocol@14 {
147 reg = <0x14>;
148 #clock-cells = <1>;
149 };
150
151 scmi_sensors0: protocol@15 {
152 reg = <0x15>;
153 #thermal-sensor-cells = <1>;
154 };
155 };
156};
157
158cpu@0 {
159 ...
160 reg = <0 0>;
161 clocks = <&scmi_dvfs 0>;
162};
163
164hdlcd@7ff60000 {
165 ...
166 reg = <0 0x7ff60000 0 0x1000>;
167 clocks = <&scmi_clk 4>;
168 power-domains = <&scmi_devpd 1>;
169};
170
171thermal-zones {
172 soc_thermal {
173 polling-delay-passive = <100>;
174 polling-delay = <1000>;
175 /* sensor ID */
176 thermal-sensors = <&scmi_sensors0 3>;
177 ...
178 };
179};
diff --git a/Documentation/devicetree/bindings/mailbox/mailbox.txt b/Documentation/devicetree/bindings/mailbox/mailbox.txt
index be05b9746c69..af8ecee2ac68 100644
--- a/Documentation/devicetree/bindings/mailbox/mailbox.txt
+++ b/Documentation/devicetree/bindings/mailbox/mailbox.txt
@@ -23,6 +23,11 @@ Required property:
23 23
24Optional property: 24Optional property:
25- mbox-names: List of identifier strings for each mailbox channel. 25- mbox-names: List of identifier strings for each mailbox channel.
26- shmem : List of phandle pointing to the shared memory(SHM) area between the
27 users of these mailboxes for IPC, one for each mailbox. This shared
28 memory can be part of any memory reserved for the purpose of this
29 communication between the mailbox client and the remote.
30
26 31
27Example: 32Example:
28 pwr_cntrl: power { 33 pwr_cntrl: power {
@@ -30,3 +35,26 @@ Example:
30 mbox-names = "pwr-ctrl", "rpc"; 35 mbox-names = "pwr-ctrl", "rpc";
31 mboxes = <&mailbox 0 &mailbox 1>; 36 mboxes = <&mailbox 0 &mailbox 1>;
32 }; 37 };
38
39Example with shared memory(shmem):
40
41 sram: sram@50000000 {
42 compatible = "mmio-sram";
43 reg = <0x50000000 0x10000>;
44
45 #address-cells = <1>;
46 #size-cells = <1>;
47 ranges = <0 0x50000000 0x10000>;
48
49 cl_shmem: shmem@0 {
50 compatible = "client-shmem";
51 reg = <0x0 0x200>;
52 };
53 };
54
55 client@2e000000 {
56 ...
57 mboxes = <&mailbox 0>;
58 shmem = <&cl_shmem>;
59 ..
60 };
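For readers new to the "shmem" property, here is a rough sketch (not taken from this series) of how a mailbox client driver might look up and map the shared memory area while requesting its channel; the SCMI driver introduced in this pull does something similar internally. All function and variable names below are illustrative.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/mailbox_client.h>
#include <linux/of.h>
#include <linux/of_address.h>

static int client_mbox_setup(struct device *dev, struct mbox_client *cl,
			     struct mbox_chan **chan, void __iomem **shmem)
{
	struct device_node *shmem_np;
	struct resource res;
	int ret;

	/* Resolve the shared memory area referenced by the "shmem" property */
	shmem_np = of_parse_phandle(dev->of_node, "shmem", 0);
	if (!shmem_np)
		return -ENODEV;

	ret = of_address_to_resource(shmem_np, 0, &res);
	of_node_put(shmem_np);
	if (ret)
		return ret;

	*shmem = devm_ioremap(dev, res.start, resource_size(&res));
	if (!*shmem)
		return -ENOMEM;

	/* Request the transmit channel listed in "mboxes"/"mbox-names" */
	cl->dev = dev;
	*chan = mbox_request_channel_byname(cl, "tx");
	return PTR_ERR_OR_ZERO(*chan);
}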
diff --git a/MAINTAINERS b/MAINTAINERS
index 4623caf8d72d..24e19837c113 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13395,15 +13395,16 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd.git
13395S: Supported 13395S: Supported
13396F: drivers/mfd/syscon.c 13396F: drivers/mfd/syscon.c
13397 13397
13398SYSTEM CONTROL & POWER INTERFACE (SCPI) Message Protocol drivers 13398SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
13399M: Sudeep Holla <sudeep.holla@arm.com> 13399M: Sudeep Holla <sudeep.holla@arm.com>
13400L: linux-arm-kernel@lists.infradead.org 13400L: linux-arm-kernel@lists.infradead.org
13401S: Maintained 13401S: Maintained
13402F: Documentation/devicetree/bindings/arm/arm,scpi.txt 13402F: Documentation/devicetree/bindings/arm/arm,sc[mp]i.txt
13403F: drivers/clk/clk-scpi.c 13403F: drivers/clk/clk-sc[mp]i.c
13404F: drivers/cpufreq/scpi-cpufreq.c 13404F: drivers/cpufreq/sc[mp]i-cpufreq.c
13405F: drivers/firmware/arm_scpi.c 13405F: drivers/firmware/arm_scpi.c
13406F: include/linux/scpi_protocol.h 13406F: drivers/firmware/arm_scmi/
13407F: include/linux/sc[mp]i_protocol.h
13407 13408
13408SYSTEM RESET/SHUTDOWN DRIVERS 13409SYSTEM RESET/SHUTDOWN DRIVERS
13409M: Sebastian Reichel <sre@kernel.org> 13410M: Sebastian Reichel <sre@kernel.org>
diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
index 98ce9fc6e6c0..7ae23b25b406 100644
--- a/drivers/clk/Kconfig
+++ b/drivers/clk/Kconfig
@@ -62,6 +62,16 @@ config COMMON_CLK_HI655X
62 multi-function device has one fixed-rate oscillator, clocked 62 multi-function device has one fixed-rate oscillator, clocked
63 at 32KHz. 63 at 32KHz.
64 64
65config COMMON_CLK_SCMI
66 tristate "Clock driver controlled via SCMI interface"
67 depends on ARM_SCMI_PROTOCOL || COMPILE_TEST
68 ---help---
69 This driver provides support for clocks that are controlled
70 by firmware that implements the SCMI interface.
71
72 This driver uses SCMI Message Protocol to interact with the
73 firmware providing all the clock controls.
74
65config COMMON_CLK_SCPI 75config COMMON_CLK_SCPI
66 tristate "Clock driver controlled via SCPI interface" 76 tristate "Clock driver controlled via SCPI interface"
67 depends on ARM_SCPI_PROTOCOL || COMPILE_TEST 77 depends on ARM_SCPI_PROTOCOL || COMPILE_TEST
diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
index 71ec41e6364f..6605513eaa94 100644
--- a/drivers/clk/Makefile
+++ b/drivers/clk/Makefile
@@ -41,6 +41,7 @@ obj-$(CONFIG_CLK_QORIQ) += clk-qoriq.o
41obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o 41obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o
42obj-$(CONFIG_COMMON_CLK_HI655X) += clk-hi655x.o 42obj-$(CONFIG_COMMON_CLK_HI655X) += clk-hi655x.o
43obj-$(CONFIG_COMMON_CLK_S2MPS11) += clk-s2mps11.o 43obj-$(CONFIG_COMMON_CLK_S2MPS11) += clk-s2mps11.o
44obj-$(CONFIG_COMMON_CLK_SCMI) += clk-scmi.o
44obj-$(CONFIG_COMMON_CLK_SCPI) += clk-scpi.o 45obj-$(CONFIG_COMMON_CLK_SCPI) += clk-scpi.o
45obj-$(CONFIG_COMMON_CLK_SI5351) += clk-si5351.o 46obj-$(CONFIG_COMMON_CLK_SI5351) += clk-si5351.o
46obj-$(CONFIG_COMMON_CLK_SI514) += clk-si514.o 47obj-$(CONFIG_COMMON_CLK_SI514) += clk-si514.o
diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
new file mode 100644
index 000000000000..26f1476d4a79
--- /dev/null
+++ b/drivers/clk/clk-scmi.c
@@ -0,0 +1,202 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Protocol based clock driver
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#include <linux/clk-provider.h>
9#include <linux/device.h>
10#include <linux/err.h>
11#include <linux/of.h>
12#include <linux/module.h>
13#include <linux/scmi_protocol.h>
14#include <asm/div64.h>
15
16struct scmi_clk {
17 u32 id;
18 struct clk_hw hw;
19 const struct scmi_clock_info *info;
20 const struct scmi_handle *handle;
21};
22
23#define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw)
24
25static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
26 unsigned long parent_rate)
27{
28 int ret;
29 u64 rate;
30 struct scmi_clk *clk = to_scmi_clk(hw);
31
32 ret = clk->handle->clk_ops->rate_get(clk->handle, clk->id, &rate);
33 if (ret)
34 return 0;
35 return rate;
36}
37
38static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
39 unsigned long *parent_rate)
40{
41 int step;
42 u64 fmin, fmax, ftmp;
43 struct scmi_clk *clk = to_scmi_clk(hw);
44
45 /*
46 * We can't figure out what rate it will be, so just return the
47 * rate back to the caller. scmi_clk_recalc_rate() will be called
48 * after the rate is set and we'll know what rate the clock is
49 * running at then.
50 */
51 if (clk->info->rate_discrete)
52 return rate;
53
54 fmin = clk->info->range.min_rate;
55 fmax = clk->info->range.max_rate;
56 if (rate <= fmin)
57 return fmin;
58 else if (rate >= fmax)
59 return fmax;
60
61 ftmp = rate - fmin;
62 ftmp += clk->info->range.step_size - 1; /* to round up */
63 step = do_div(ftmp, clk->info->range.step_size);
64
65 return step * clk->info->range.step_size + fmin;
66}
67
68static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
69 unsigned long parent_rate)
70{
71 struct scmi_clk *clk = to_scmi_clk(hw);
72
73 return clk->handle->clk_ops->rate_set(clk->handle, clk->id, 0, rate);
74}
75
76static int scmi_clk_enable(struct clk_hw *hw)
77{
78 struct scmi_clk *clk = to_scmi_clk(hw);
79
80 return clk->handle->clk_ops->enable(clk->handle, clk->id);
81}
82
83static void scmi_clk_disable(struct clk_hw *hw)
84{
85 struct scmi_clk *clk = to_scmi_clk(hw);
86
87 clk->handle->clk_ops->disable(clk->handle, clk->id);
88}
89
90static const struct clk_ops scmi_clk_ops = {
91 .recalc_rate = scmi_clk_recalc_rate,
92 .round_rate = scmi_clk_round_rate,
93 .set_rate = scmi_clk_set_rate,
94 /*
95 * We can't provide enable/disable callbacks as we can't perform them
96 * in atomic context. Since the clock framework provides the standard
97 * clk_prepare_enable() API, which covers callers using clk_enable() in
98 * non-atomic context, it should be fine to provide prepare/unprepare instead.
99 */
100 .prepare = scmi_clk_enable,
101 .unprepare = scmi_clk_disable,
102};
103
104static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk)
105{
106 int ret;
107 struct clk_init_data init = {
108 .flags = CLK_GET_RATE_NOCACHE,
109 .num_parents = 0,
110 .ops = &scmi_clk_ops,
111 .name = sclk->info->name,
112 };
113
114 sclk->hw.init = &init;
115 ret = devm_clk_hw_register(dev, &sclk->hw);
116 if (!ret)
117 clk_hw_set_rate_range(&sclk->hw, sclk->info->range.min_rate,
118 sclk->info->range.max_rate);
119 return ret;
120}
121
122static int scmi_clocks_probe(struct scmi_device *sdev)
123{
124 int idx, count, err;
125 struct clk_hw **hws;
126 struct clk_hw_onecell_data *clk_data;
127 struct device *dev = &sdev->dev;
128 struct device_node *np = dev->of_node;
129 const struct scmi_handle *handle = sdev->handle;
130
131 if (!handle || !handle->clk_ops)
132 return -ENODEV;
133
134 count = handle->clk_ops->count_get(handle);
135 if (count < 0) {
136 dev_err(dev, "%s: invalid clock output count\n", np->name);
137 return -EINVAL;
138 }
139
140 clk_data = devm_kzalloc(dev, sizeof(*clk_data) +
141 sizeof(*clk_data->hws) * count, GFP_KERNEL);
142 if (!clk_data)
143 return -ENOMEM;
144
145 clk_data->num = count;
146 hws = clk_data->hws;
147
148 for (idx = 0; idx < count; idx++) {
149 struct scmi_clk *sclk;
150
151 sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL);
152 if (!sclk)
153 return -ENOMEM;
154
155 sclk->info = handle->clk_ops->info_get(handle, idx);
156 if (!sclk->info) {
157 dev_dbg(dev, "invalid clock info for idx %d\n", idx);
158 continue;
159 }
160
161 sclk->id = idx;
162 sclk->handle = handle;
163
164 err = scmi_clk_ops_init(dev, sclk);
165 if (err) {
166 dev_err(dev, "failed to register clock %d\n", idx);
167 devm_kfree(dev, sclk);
168 hws[idx] = NULL;
169 } else {
170 dev_dbg(dev, "Registered clock:%s\n", sclk->info->name);
171 hws[idx] = &sclk->hw;
172 }
173 }
174
175 return of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
176}
177
178static void scmi_clocks_remove(struct scmi_device *sdev)
179{
180 struct device *dev = &sdev->dev;
181 struct device_node *np = dev->of_node;
182
183 of_clk_del_provider(np);
184}
185
186static const struct scmi_device_id scmi_id_table[] = {
187 { SCMI_PROTOCOL_CLOCK },
188 { },
189};
190MODULE_DEVICE_TABLE(scmi, scmi_id_table);
191
192static struct scmi_driver scmi_clocks_driver = {
193 .name = "scmi-clocks",
194 .probe = scmi_clocks_probe,
195 .remove = scmi_clocks_remove,
196 .id_table = scmi_id_table,
197};
198module_scmi_driver(scmi_clocks_driver);
199
200MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
201MODULE_DESCRIPTION("ARM SCMI clock driver");
202MODULE_LICENSE("GPL v2");
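A hypothetical consumer of one of these SCMI clocks looks like any other clk consumer. Because the driver above only provides prepare/unprepare (SCMI transfers can sleep), consumers must stick to the non-atomic clk_prepare_enable()/clk_disable_unprepare() pair; the device, rate and DT reference below are purely illustrative.

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_enable_scmi_clock(struct device *dev)
{
	/* e.g. "clocks = <&scmi_clk 4>;" in the consumer's DT node */
	struct clk *clk = devm_clk_get(dev, NULL);
	int ret;

	if (IS_ERR(clk))
		return PTR_ERR(clk);

	ret = clk_set_rate(clk, 148500000);	/* rate chosen for illustration */
	if (ret)
		return ret;

	/* Non-atomic path: ends up in scmi_clk_enable() via .prepare */
	return clk_prepare_enable(clk);
}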
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
index fb586e09682d..9bbb5b39d18a 100644
--- a/drivers/cpufreq/Kconfig.arm
+++ b/drivers/cpufreq/Kconfig.arm
@@ -238,6 +238,17 @@ config ARM_SA1100_CPUFREQ
238config ARM_SA1110_CPUFREQ 238config ARM_SA1110_CPUFREQ
239 bool 239 bool
240 240
241config ARM_SCMI_CPUFREQ
242 tristate "SCMI based CPUfreq driver"
243 depends on ARM_SCMI_PROTOCOL || COMPILE_TEST
244 select PM_OPP
245 help
246 This adds the CPUfreq driver support for ARM platforms using SCMI
247 protocol for CPU power management.
248
249 This driver uses SCMI Message Protocol driver to interact with the
250 firmware providing the CPU DVFS functionality.
251
241config ARM_SPEAR_CPUFREQ 252config ARM_SPEAR_CPUFREQ
242 bool "SPEAr CPUFreq support" 253 bool "SPEAr CPUFreq support"
243 depends on PLAT_SPEAR 254 depends on PLAT_SPEAR
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
index c60c1e141d9d..4987227b67df 100644
--- a/drivers/cpufreq/Makefile
+++ b/drivers/cpufreq/Makefile
@@ -75,6 +75,7 @@ obj-$(CONFIG_ARM_S3C24XX_CPUFREQ_DEBUGFS) += s3c24xx-cpufreq-debugfs.o
75obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o 75obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o
76obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o 76obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o
77obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o 77obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o
78obj-$(CONFIG_ARM_SCMI_CPUFREQ) += scmi-cpufreq.o
78obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o 79obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o
79obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o 80obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o
80obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o 81obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o
diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
new file mode 100644
index 000000000000..959a1dbe3835
--- /dev/null
+++ b/drivers/cpufreq/scmi-cpufreq.c
@@ -0,0 +1,264 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) based CPUFreq Interface driver
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 * Sudeep Holla <sudeep.holla@arm.com>
7 */
8
9#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
10
11#include <linux/cpu.h>
12#include <linux/cpufreq.h>
13#include <linux/cpumask.h>
14#include <linux/cpu_cooling.h>
15#include <linux/export.h>
16#include <linux/module.h>
17#include <linux/pm_opp.h>
18#include <linux/slab.h>
19#include <linux/scmi_protocol.h>
20#include <linux/types.h>
21
22struct scmi_data {
23 int domain_id;
24 struct device *cpu_dev;
25 struct thermal_cooling_device *cdev;
26};
27
28static const struct scmi_handle *handle;
29
30static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
31{
32 struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
33 struct scmi_perf_ops *perf_ops = handle->perf_ops;
34 struct scmi_data *priv = policy->driver_data;
35 unsigned long rate;
36 int ret;
37
38 ret = perf_ops->freq_get(handle, priv->domain_id, &rate, false);
39 if (ret)
40 return 0;
41 return rate / 1000;
42}
43
44/*
45 * perf_ops->freq_set is not synchronous; the actual OPP change will
46 * happen asynchronously, and the caller can be notified of it if the
47 * corresponding events are subscribed for with the SCMI firmware.
48 */
49static int
50scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
51{
52 int ret;
53 struct scmi_data *priv = policy->driver_data;
54 struct scmi_perf_ops *perf_ops = handle->perf_ops;
55 u64 freq = policy->freq_table[index].frequency * 1000;
56
57 ret = perf_ops->freq_set(handle, priv->domain_id, freq, false);
58 if (!ret)
59 arch_set_freq_scale(policy->related_cpus, freq,
60 policy->cpuinfo.max_freq);
61 return ret;
62}
63
64static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
65 unsigned int target_freq)
66{
67 struct scmi_data *priv = policy->driver_data;
68 struct scmi_perf_ops *perf_ops = handle->perf_ops;
69
70 if (!perf_ops->freq_set(handle, priv->domain_id,
71 target_freq * 1000, true)) {
72 arch_set_freq_scale(policy->related_cpus, target_freq,
73 policy->cpuinfo.max_freq);
74 return target_freq;
75 }
76
77 return 0;
78}
79
80static int
81scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
82{
83 int cpu, domain, tdomain;
84 struct device *tcpu_dev;
85
86 domain = handle->perf_ops->device_domain_id(cpu_dev);
87 if (domain < 0)
88 return domain;
89
90 for_each_possible_cpu(cpu) {
91 if (cpu == cpu_dev->id)
92 continue;
93
94 tcpu_dev = get_cpu_device(cpu);
95 if (!tcpu_dev)
96 continue;
97
98 tdomain = handle->perf_ops->device_domain_id(tcpu_dev);
99 if (tdomain == domain)
100 cpumask_set_cpu(cpu, cpumask);
101 }
102
103 return 0;
104}
105
106static int scmi_cpufreq_init(struct cpufreq_policy *policy)
107{
108 int ret;
109 unsigned int latency;
110 struct device *cpu_dev;
111 struct scmi_data *priv;
112 struct cpufreq_frequency_table *freq_table;
113
114 cpu_dev = get_cpu_device(policy->cpu);
115 if (!cpu_dev) {
116 pr_err("failed to get cpu%d device\n", policy->cpu);
117 return -ENODEV;
118 }
119
120 ret = handle->perf_ops->add_opps_to_device(handle, cpu_dev);
121 if (ret) {
122 dev_warn(cpu_dev, "failed to add opps to the device\n");
123 return ret;
124 }
125
126 ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus);
127 if (ret) {
128 dev_warn(cpu_dev, "failed to get sharing cpumask\n");
129 return ret;
130 }
131
132 ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
133 if (ret) {
134 dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
135 __func__, ret);
136 return ret;
137 }
138
139 ret = dev_pm_opp_get_opp_count(cpu_dev);
140 if (ret <= 0) {
141 dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n");
142 ret = -EPROBE_DEFER;
143 goto out_free_opp;
144 }
145
146 priv = kzalloc(sizeof(*priv), GFP_KERNEL);
147 if (!priv) {
148 ret = -ENOMEM;
149 goto out_free_opp;
150 }
151
152 ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
153 if (ret) {
154 dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
155 goto out_free_priv;
156 }
157
158 priv->cpu_dev = cpu_dev;
159 priv->domain_id = handle->perf_ops->device_domain_id(cpu_dev);
160
161 policy->driver_data = priv;
162
163 ret = cpufreq_table_validate_and_show(policy, freq_table);
164 if (ret) {
165 dev_err(cpu_dev, "%s: invalid frequency table: %d\n", __func__,
166 ret);
167 goto out_free_cpufreq_table;
168 }
169
170 /* SCMI allows DVFS request for any domain from any CPU */
171 policy->dvfs_possible_from_any_cpu = true;
172
173 latency = handle->perf_ops->get_transition_latency(handle, cpu_dev);
174 if (!latency)
175 latency = CPUFREQ_ETERNAL;
176
177 policy->cpuinfo.transition_latency = latency;
178
179 policy->fast_switch_possible = true;
180 return 0;
181
182out_free_cpufreq_table:
183 dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
184out_free_priv:
185 kfree(priv);
186out_free_opp:
187 dev_pm_opp_cpumask_remove_table(policy->cpus);
188
189 return ret;
190}
191
192static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
193{
194 struct scmi_data *priv = policy->driver_data;
195
196 cpufreq_cooling_unregister(priv->cdev);
197 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
198 kfree(priv);
199 dev_pm_opp_cpumask_remove_table(policy->related_cpus);
200
201 return 0;
202}
203
204static void scmi_cpufreq_ready(struct cpufreq_policy *policy)
205{
206 struct scmi_data *priv = policy->driver_data;
207
208 priv->cdev = of_cpufreq_cooling_register(policy);
209}
210
211static struct cpufreq_driver scmi_cpufreq_driver = {
212 .name = "scmi",
213 .flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
214 CPUFREQ_NEED_INITIAL_FREQ_CHECK,
215 .verify = cpufreq_generic_frequency_table_verify,
216 .attr = cpufreq_generic_attr,
217 .target_index = scmi_cpufreq_set_target,
218 .fast_switch = scmi_cpufreq_fast_switch,
219 .get = scmi_cpufreq_get_rate,
220 .init = scmi_cpufreq_init,
221 .exit = scmi_cpufreq_exit,
222 .ready = scmi_cpufreq_ready,
223};
224
225static int scmi_cpufreq_probe(struct scmi_device *sdev)
226{
227 int ret;
228
229 handle = sdev->handle;
230
231 if (!handle || !handle->perf_ops)
232 return -ENODEV;
233
234 ret = cpufreq_register_driver(&scmi_cpufreq_driver);
235 if (ret) {
236 dev_err(&sdev->dev, "%s: registering cpufreq failed, err: %d\n",
237 __func__, ret);
238 }
239
240 return ret;
241}
242
243static void scmi_cpufreq_remove(struct scmi_device *sdev)
244{
245 cpufreq_unregister_driver(&scmi_cpufreq_driver);
246}
247
248static const struct scmi_device_id scmi_id_table[] = {
249 { SCMI_PROTOCOL_PERF },
250 { },
251};
252MODULE_DEVICE_TABLE(scmi, scmi_id_table);
253
254static struct scmi_driver scmi_cpufreq_drv = {
255 .name = "scmi-cpufreq",
256 .probe = scmi_cpufreq_probe,
257 .remove = scmi_cpufreq_remove,
258 .id_table = scmi_id_table,
259};
260module_scmi_driver(scmi_cpufreq_drv);
261
262MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
263MODULE_DESCRIPTION("ARM SCMI CPUFreq interface driver");
264MODULE_LICENSE("GPL v2");
diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
index b7c748248e53..6e83880046d7 100644
--- a/drivers/firmware/Kconfig
+++ b/drivers/firmware/Kconfig
@@ -19,6 +19,40 @@ config ARM_PSCI_CHECKER
19 on and off through hotplug, so for now torture tests and PSCI checker 19 on and off through hotplug, so for now torture tests and PSCI checker
20 are mutually exclusive. 20 are mutually exclusive.
21 21
22config ARM_SCMI_PROTOCOL
23 bool "ARM System Control and Management Interface (SCMI) Message Protocol"
24 depends on ARM || ARM64 || COMPILE_TEST
25 depends on MAILBOX
26 help
27 ARM System Control and Management Interface (SCMI) protocol is a
28 set of operating system-independent software interfaces that are
29 used in system management. SCMI is extensible and currently provides
30 interfaces for: discovery and self-description of the interfaces
31 it supports; power domain management, which is the ability to place
32 a given device or domain into the various power-saving states that
33 it supports; performance management, which is the ability to control
34 the performance of a domain that is composed of compute engines
35 such as application processors and other accelerators; clock
36 management, which is the ability to set and inquire rates on platform
37 managed clocks; and sensor management, which is the ability to read
38 sensor data and be notified of sensor value changes.
39
40 This protocol library provides an interface for all the client drivers
41 making use of the features offered by SCMI.
42
43config ARM_SCMI_POWER_DOMAIN
44 tristate "SCMI power domain driver"
45 depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
46 default y
47 select PM_GENERIC_DOMAINS if PM
48 help
49 This enables support for the SCMI power domains which can be
50 enabled or disabled via the SCP firmware
51
52 This driver can also be built as a module. If so, the module
53 will be called scmi_pm_domain. Note this may be needed early in boot
54 before rootfs may be available.
55
22config ARM_SCPI_PROTOCOL 56config ARM_SCPI_PROTOCOL
23 tristate "ARM System Control and Power Interface (SCPI) Message Protocol" 57 tristate "ARM System Control and Power Interface (SCPI) Message Protocol"
24 depends on ARM || ARM64 || COMPILE_TEST 58 depends on ARM || ARM64 || COMPILE_TEST
diff --git a/drivers/firmware/Makefile b/drivers/firmware/Makefile
index b248238ddc6a..e18a041cfc53 100644
--- a/drivers/firmware/Makefile
+++ b/drivers/firmware/Makefile
@@ -25,6 +25,7 @@ obj-$(CONFIG_QCOM_SCM_32) += qcom_scm-32.o
25CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a 25CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a
26obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o 26obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o
27 27
28obj-$(CONFIG_ARM_SCMI_PROTOCOL) += arm_scmi/
28obj-y += broadcom/ 29obj-y += broadcom/
29obj-y += meson/ 30obj-y += meson/
30obj-$(CONFIG_GOOGLE_FIRMWARE) += google/ 31obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
new file mode 100644
index 000000000000..99e36c580fbc
--- /dev/null
+++ b/drivers/firmware/arm_scmi/Makefile
@@ -0,0 +1,5 @@
1obj-y = scmi-bus.o scmi-driver.o scmi-protocols.o
2scmi-bus-y = bus.o
3scmi-driver-y = driver.o
4scmi-protocols-y = base.o clock.o perf.o power.o sensors.o
5obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
new file mode 100644
index 000000000000..0d3806c0d432
--- /dev/null
+++ b/drivers/firmware/arm_scmi/base.c
@@ -0,0 +1,253 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Base Protocol
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#include "common.h"
9
10enum scmi_base_protocol_cmd {
11 BASE_DISCOVER_VENDOR = 0x3,
12 BASE_DISCOVER_SUB_VENDOR = 0x4,
13 BASE_DISCOVER_IMPLEMENT_VERSION = 0x5,
14 BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
15 BASE_DISCOVER_AGENT = 0x7,
16 BASE_NOTIFY_ERRORS = 0x8,
17};
18
19struct scmi_msg_resp_base_attributes {
20 u8 num_protocols;
21 u8 num_agents;
22 __le16 reserved;
23};
24
25/**
26 * scmi_base_attributes_get() - gets the implementation details
27 * that are associated with the base protocol.
28 *
29 * @handle - SCMI entity handle
30 *
31 * Return: 0 on success, else appropriate SCMI error.
32 */
33static int scmi_base_attributes_get(const struct scmi_handle *handle)
34{
35 int ret;
36 struct scmi_xfer *t;
37 struct scmi_msg_resp_base_attributes *attr_info;
38 struct scmi_revision_info *rev = handle->version;
39
40 ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
41 SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t);
42 if (ret)
43 return ret;
44
45 ret = scmi_do_xfer(handle, t);
46 if (!ret) {
47 attr_info = t->rx.buf;
48 rev->num_protocols = attr_info->num_protocols;
49 rev->num_agents = attr_info->num_agents;
50 }
51
52 scmi_one_xfer_put(handle, t);
53 return ret;
54}
55
56/**
57 * scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string.
58 *
59 * @handle - SCMI entity handle
60 * @sub_vendor - specify true if sub-vendor ID is needed
61 *
62 * Return: 0 on success, else appropriate SCMI error.
63 */
64static int
65scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
66{
67 u8 cmd;
68 int ret, size;
69 char *vendor_id;
70 struct scmi_xfer *t;
71 struct scmi_revision_info *rev = handle->version;
72
73 if (sub_vendor) {
74 cmd = BASE_DISCOVER_SUB_VENDOR;
75 vendor_id = rev->sub_vendor_id;
76 size = ARRAY_SIZE(rev->sub_vendor_id);
77 } else {
78 cmd = BASE_DISCOVER_VENDOR;
79 vendor_id = rev->vendor_id;
80 size = ARRAY_SIZE(rev->vendor_id);
81 }
82
83 ret = scmi_one_xfer_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
84 if (ret)
85 return ret;
86
87 ret = scmi_do_xfer(handle, t);
88 if (!ret)
89 memcpy(vendor_id, t->rx.buf, size);
90
91 scmi_one_xfer_put(handle, t);
92 return ret;
93}
94
95/**
96 * scmi_base_implementation_version_get() - gets a vendor-specific
97 * implementation 32-bit version. The format of the version number is
98 * vendor-specific
99 *
100 * @handle - SCMI entity handle
101 *
102 * Return: 0 on success, else appropriate SCMI error.
103 */
104static int
105scmi_base_implementation_version_get(const struct scmi_handle *handle)
106{
107 int ret;
108 __le32 *impl_ver;
109 struct scmi_xfer *t;
110 struct scmi_revision_info *rev = handle->version;
111
112 ret = scmi_one_xfer_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
113 SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t);
114 if (ret)
115 return ret;
116
117 ret = scmi_do_xfer(handle, t);
118 if (!ret) {
119 impl_ver = t->rx.buf;
120 rev->impl_ver = le32_to_cpu(*impl_ver);
121 }
122
123 scmi_one_xfer_put(handle, t);
124 return ret;
125}
126
127/**
128 * scmi_base_implementation_list_get() - gets the list of protocols that
129 * OSPM is allowed to access
130 *
131 * @handle - SCMI entity handle
132 * @protocols_imp - pointer to hold the list of protocol identifiers
133 *
134 * Return: 0 on success, else appropriate SCMI error.
135 */
136static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
137 u8 *protocols_imp)
138{
139 u8 *list;
140 int ret, loop;
141 struct scmi_xfer *t;
142 __le32 *num_skip, *num_ret;
143 u32 tot_num_ret = 0, loop_num_ret;
144 struct device *dev = handle->dev;
145
146 ret = scmi_one_xfer_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
147 SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t);
148 if (ret)
149 return ret;
150
151 num_skip = t->tx.buf;
152 num_ret = t->rx.buf;
153 list = t->rx.buf + sizeof(*num_ret);
154
155 do {
156 /* Set the number of protocols to be skipped/already read */
157 *num_skip = cpu_to_le32(tot_num_ret);
158
159 ret = scmi_do_xfer(handle, t);
160 if (ret)
161 break;
162
163 loop_num_ret = le32_to_cpu(*num_ret);
164 if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) {
165 dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP");
166 break;
167 }
168
169 for (loop = 0; loop < loop_num_ret; loop++)
170 protocols_imp[tot_num_ret + loop] = *(list + loop);
171
172 tot_num_ret += loop_num_ret;
173 } while (loop_num_ret);
174
175 scmi_one_xfer_put(handle, t);
176 return ret;
177}
178
179/**
180 * scmi_base_discover_agent_get() - discover the name of an agent
181 *
182 * @handle - SCMI entity handle
183 * @id - Agent identifier
184 * @name - Agent identifier ASCII string
185 *
186 * An agent id of 0 is reserved to identify the platform itself.
187 * Generally operating system is represented as "OSPM"
188 *
189 * Return: 0 on success, else appropriate SCMI error.
190 */
191static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
192 int id, char *name)
193{
194 int ret;
195 struct scmi_xfer *t;
196
197 ret = scmi_one_xfer_init(handle, BASE_DISCOVER_AGENT,
198 SCMI_PROTOCOL_BASE, sizeof(__le32),
199 SCMI_MAX_STR_SIZE, &t);
200 if (ret)
201 return ret;
202
203 *(__le32 *)t->tx.buf = cpu_to_le32(id);
204
205 ret = scmi_do_xfer(handle, t);
206 if (!ret)
207 memcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);
208
209 scmi_one_xfer_put(handle, t);
210 return ret;
211}
212
213int scmi_base_protocol_init(struct scmi_handle *h)
214{
215 int id, ret;
216 u8 *prot_imp;
217 u32 version;
218 char name[SCMI_MAX_STR_SIZE];
219 const struct scmi_handle *handle = h;
220 struct device *dev = handle->dev;
221 struct scmi_revision_info *rev = handle->version;
222
223 ret = scmi_version_get(handle, SCMI_PROTOCOL_BASE, &version);
224 if (ret)
225 return ret;
226
227 prot_imp = devm_kcalloc(dev, MAX_PROTOCOLS_IMP, sizeof(u8), GFP_KERNEL);
228 if (!prot_imp)
229 return -ENOMEM;
230
231 rev->major_ver = PROTOCOL_REV_MAJOR(version),
232 rev->minor_ver = PROTOCOL_REV_MINOR(version);
233
234 scmi_base_attributes_get(handle);
235 scmi_base_vendor_id_get(handle, false);
236 scmi_base_vendor_id_get(handle, true);
237 scmi_base_implementation_version_get(handle);
238 scmi_base_implementation_list_get(handle, prot_imp);
239 scmi_setup_protocol_implemented(handle, prot_imp);
240
241 dev_info(dev, "SCMI Protocol v%d.%d '%s:%s' Firmware version 0x%x\n",
242 rev->major_ver, rev->minor_ver, rev->vendor_id,
243 rev->sub_vendor_id, rev->impl_ver);
244 dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
245 rev->num_agents);
246
247 for (id = 0; id < rev->num_agents; id++) {
248 scmi_base_discover_agent_get(handle, id, name);
249 dev_dbg(dev, "Agent %d: %s\n", id, name);
250 }
251
252 return 0;
253}
diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
new file mode 100644
index 000000000000..f2760a596c28
--- /dev/null
+++ b/drivers/firmware/arm_scmi/bus.c
@@ -0,0 +1,221 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Message Protocol bus layer
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
9
10#include <linux/types.h>
11#include <linux/module.h>
12#include <linux/kernel.h>
13#include <linux/slab.h>
14#include <linux/device.h>
15
16#include "common.h"
17
18static DEFINE_IDA(scmi_bus_id);
19static DEFINE_IDR(scmi_protocols);
20static DEFINE_SPINLOCK(protocol_lock);
21
22static const struct scmi_device_id *
23scmi_dev_match_id(struct scmi_device *scmi_dev, struct scmi_driver *scmi_drv)
24{
25 const struct scmi_device_id *id = scmi_drv->id_table;
26
27 if (!id)
28 return NULL;
29
30 for (; id->protocol_id; id++)
31 if (id->protocol_id == scmi_dev->protocol_id)
32 return id;
33
34 return NULL;
35}
36
37static int scmi_dev_match(struct device *dev, struct device_driver *drv)
38{
39 struct scmi_driver *scmi_drv = to_scmi_driver(drv);
40 struct scmi_device *scmi_dev = to_scmi_dev(dev);
41 const struct scmi_device_id *id;
42
43 id = scmi_dev_match_id(scmi_dev, scmi_drv);
44 if (id)
45 return 1;
46
47 return 0;
48}
49
50static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle)
51{
52 scmi_prot_init_fn_t fn = idr_find(&scmi_protocols, protocol_id);
53
54 if (unlikely(!fn))
55 return -EINVAL;
56 return fn(handle);
57}
58
59static int scmi_dev_probe(struct device *dev)
60{
61 struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
62 struct scmi_device *scmi_dev = to_scmi_dev(dev);
63 const struct scmi_device_id *id;
64 int ret;
65
66 id = scmi_dev_match_id(scmi_dev, scmi_drv);
67 if (!id)
68 return -ENODEV;
69
70 if (!scmi_dev->handle)
71 return -EPROBE_DEFER;
72
73 ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle);
74 if (ret)
75 return ret;
76
77 return scmi_drv->probe(scmi_dev);
78}
79
80static int scmi_dev_remove(struct device *dev)
81{
82 struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
83 struct scmi_device *scmi_dev = to_scmi_dev(dev);
84
85 if (scmi_drv->remove)
86 scmi_drv->remove(scmi_dev);
87
88 return 0;
89}
90
91static struct bus_type scmi_bus_type = {
92 .name = "scmi_protocol",
93 .match = scmi_dev_match,
94 .probe = scmi_dev_probe,
95 .remove = scmi_dev_remove,
96};
97
98int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
99 const char *mod_name)
100{
101 int retval;
102
103 driver->driver.bus = &scmi_bus_type;
104 driver->driver.name = driver->name;
105 driver->driver.owner = owner;
106 driver->driver.mod_name = mod_name;
107
108 retval = driver_register(&driver->driver);
109 if (!retval)
110 pr_debug("registered new scmi driver %s\n", driver->name);
111
112 return retval;
113}
114EXPORT_SYMBOL_GPL(scmi_driver_register);
115
116void scmi_driver_unregister(struct scmi_driver *driver)
117{
118 driver_unregister(&driver->driver);
119}
120EXPORT_SYMBOL_GPL(scmi_driver_unregister);
121
122struct scmi_device *
123scmi_device_create(struct device_node *np, struct device *parent, int protocol)
124{
125 int id, retval;
126 struct scmi_device *scmi_dev;
127
128 id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL);
129 if (id < 0)
130 return NULL;
131
132 scmi_dev = kzalloc(sizeof(*scmi_dev), GFP_KERNEL);
133 if (!scmi_dev)
134 goto no_mem;
135
136 scmi_dev->id = id;
137 scmi_dev->protocol_id = protocol;
138 scmi_dev->dev.parent = parent;
139 scmi_dev->dev.of_node = np;
140 scmi_dev->dev.bus = &scmi_bus_type;
141 dev_set_name(&scmi_dev->dev, "scmi_dev.%d", id);
142
143 retval = device_register(&scmi_dev->dev);
144 if (!retval)
145 return scmi_dev;
146
147 put_device(&scmi_dev->dev);
148 kfree(scmi_dev);
149no_mem:
150 ida_simple_remove(&scmi_bus_id, id);
151 return NULL;
152}
153
154void scmi_device_destroy(struct scmi_device *scmi_dev)
155{
156 scmi_handle_put(scmi_dev->handle);
157 device_unregister(&scmi_dev->dev);
158 ida_simple_remove(&scmi_bus_id, scmi_dev->id);
159 kfree(scmi_dev);
160}
161
162void scmi_set_handle(struct scmi_device *scmi_dev)
163{
164 scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
165}
166
167int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn)
168{
169 int ret;
170
171 spin_lock(&protocol_lock);
172 ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1,
173 GFP_ATOMIC);
174 if (ret != protocol_id)
175 pr_err("unable to allocate SCMI idr slot, err %d\n", ret);
176 spin_unlock(&protocol_lock);
177
178 return ret;
179}
180EXPORT_SYMBOL_GPL(scmi_protocol_register);
181
182void scmi_protocol_unregister(int protocol_id)
183{
184 spin_lock(&protocol_lock);
185 idr_remove(&scmi_protocols, protocol_id);
186 spin_unlock(&protocol_lock);
187}
188EXPORT_SYMBOL_GPL(scmi_protocol_unregister);
189
190static int __scmi_devices_unregister(struct device *dev, void *data)
191{
192 struct scmi_device *scmi_dev = to_scmi_dev(dev);
193
194 scmi_device_destroy(scmi_dev);
195 return 0;
196}
197
198static void scmi_devices_unregister(void)
199{
200 bus_for_each_dev(&scmi_bus_type, NULL, NULL, __scmi_devices_unregister);
201}
202
203static int __init scmi_bus_init(void)
204{
205 int retval;
206
207 retval = bus_register(&scmi_bus_type);
208 if (retval)
209 pr_err("scmi protocol bus register failed (%d)\n", retval);
210
211 return retval;
212}
213subsys_initcall(scmi_bus_init);
214
215static void __exit scmi_bus_exit(void)
216{
217 scmi_devices_unregister();
218 bus_unregister(&scmi_bus_type);
219 ida_destroy(&scmi_bus_id);
220}
221module_exit(scmi_bus_exit);
diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c
new file mode 100644
index 000000000000..e8ffad33a0ff
--- /dev/null
+++ b/drivers/firmware/arm_scmi/clock.c
@@ -0,0 +1,342 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Clock Protocol
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#include "common.h"
9
10enum scmi_clock_protocol_cmd {
11 CLOCK_ATTRIBUTES = 0x3,
12 CLOCK_DESCRIBE_RATES = 0x4,
13 CLOCK_RATE_SET = 0x5,
14 CLOCK_RATE_GET = 0x6,
15 CLOCK_CONFIG_SET = 0x7,
16};
17
18struct scmi_msg_resp_clock_protocol_attributes {
19 __le16 num_clocks;
20 u8 max_async_req;
21 u8 reserved;
22};
23
24struct scmi_msg_resp_clock_attributes {
25 __le32 attributes;
26#define CLOCK_ENABLE BIT(0)
27 u8 name[SCMI_MAX_STR_SIZE];
28};
29
30struct scmi_clock_set_config {
31 __le32 id;
32 __le32 attributes;
33};
34
35struct scmi_msg_clock_describe_rates {
36 __le32 id;
37 __le32 rate_index;
38};
39
40struct scmi_msg_resp_clock_describe_rates {
41 __le32 num_rates_flags;
42#define NUM_RETURNED(x) ((x) & 0xfff)
43#define RATE_DISCRETE(x) !((x) & BIT(12))
44#define NUM_REMAINING(x) ((x) >> 16)
45 struct {
46 __le32 value_low;
47 __le32 value_high;
48 } rate[0];
49#define RATE_TO_U64(X) \
50({ \
51 typeof(X) x = (X); \
52 le32_to_cpu((x).value_low) | (u64)le32_to_cpu((x).value_high) << 32; \
53})
54};
55
56struct scmi_clock_set_rate {
57 __le32 flags;
58#define CLOCK_SET_ASYNC BIT(0)
59#define CLOCK_SET_DELAYED BIT(1)
60#define CLOCK_SET_ROUND_UP BIT(2)
61#define CLOCK_SET_ROUND_AUTO BIT(3)
62 __le32 id;
63 __le32 value_low;
64 __le32 value_high;
65};
66
67struct clock_info {
68 int num_clocks;
69 int max_async_req;
70 struct scmi_clock_info *clk;
71};
72
73static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle,
74 struct clock_info *ci)
75{
76 int ret;
77 struct scmi_xfer *t;
78 struct scmi_msg_resp_clock_protocol_attributes *attr;
79
80 ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
81 SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t);
82 if (ret)
83 return ret;
84
85 attr = t->rx.buf;
86
87 ret = scmi_do_xfer(handle, t);
88 if (!ret) {
89 ci->num_clocks = le16_to_cpu(attr->num_clocks);
90 ci->max_async_req = attr->max_async_req;
91 }
92
93 scmi_one_xfer_put(handle, t);
94 return ret;
95}
96
97static int scmi_clock_attributes_get(const struct scmi_handle *handle,
98 u32 clk_id, struct scmi_clock_info *clk)
99{
100 int ret;
101 struct scmi_xfer *t;
102 struct scmi_msg_resp_clock_attributes *attr;
103
104 ret = scmi_one_xfer_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK,
105 sizeof(clk_id), sizeof(*attr), &t);
106 if (ret)
107 return ret;
108
109 *(__le32 *)t->tx.buf = cpu_to_le32(clk_id);
110 attr = t->rx.buf;
111
112 ret = scmi_do_xfer(handle, t);
113 if (!ret)
114 memcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE);
115 else
116 clk->name[0] = '\0';
117
118 scmi_one_xfer_put(handle, t);
119 return ret;
120}
121
122static int
123scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
124 struct scmi_clock_info *clk)
125{
126 u64 *rate;
127 int ret, cnt;
128 bool rate_discrete;
129 u32 tot_rate_cnt = 0, rates_flag;
130 u16 num_returned, num_remaining;
131 struct scmi_xfer *t;
132 struct scmi_msg_clock_describe_rates *clk_desc;
133 struct scmi_msg_resp_clock_describe_rates *rlist;
134
135 ret = scmi_one_xfer_init(handle, CLOCK_DESCRIBE_RATES,
136 SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t);
137 if (ret)
138 return ret;
139
140 clk_desc = t->tx.buf;
141 rlist = t->rx.buf;
142
143 do {
144 clk_desc->id = cpu_to_le32(clk_id);
145 /* Set the number of rates to be skipped/already read */
146 clk_desc->rate_index = cpu_to_le32(tot_rate_cnt);
147
148 ret = scmi_do_xfer(handle, t);
149 if (ret)
150 break;
151
152 rates_flag = le32_to_cpu(rlist->num_rates_flags);
153 num_remaining = NUM_REMAINING(rates_flag);
154 rate_discrete = RATE_DISCRETE(rates_flag);
155 num_returned = NUM_RETURNED(rates_flag);
156
157 if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) {
158 dev_err(handle->dev, "No. of rates > MAX_NUM_RATES");
159 break;
160 }
161
162 if (!rate_discrete) {
163 clk->range.min_rate = RATE_TO_U64(rlist->rate[0]);
164 clk->range.max_rate = RATE_TO_U64(rlist->rate[1]);
165 clk->range.step_size = RATE_TO_U64(rlist->rate[2]);
166 dev_dbg(handle->dev, "Min %llu Max %llu Step %llu Hz\n",
167 clk->range.min_rate, clk->range.max_rate,
168 clk->range.step_size);
169 break;
170 }
171
172 rate = &clk->list.rates[tot_rate_cnt];
173 for (cnt = 0; cnt < num_returned; cnt++, rate++) {
174 *rate = RATE_TO_U64(rlist->rate[cnt]);
175 dev_dbg(handle->dev, "Rate %llu Hz\n", *rate);
176 }
177
178 tot_rate_cnt += num_returned;
179 /*
180 * check for both returned and remaining to avoid infinite
181 * loop due to buggy firmware
182 */
183 } while (num_returned && num_remaining);
184
185 if (rate_discrete)
186 clk->list.num_rates = tot_rate_cnt;
187
188 scmi_one_xfer_put(handle, t);
189 return ret;
190}
191
192static int
193scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value)
194{
195 int ret;
196 struct scmi_xfer *t;
197
198 ret = scmi_one_xfer_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK,
199 sizeof(__le32), sizeof(u64), &t);
200 if (ret)
201 return ret;
202
203 *(__le32 *)t->tx.buf = cpu_to_le32(clk_id);
204
205 ret = scmi_do_xfer(handle, t);
206 if (!ret) {
207 __le32 *pval = t->rx.buf;
208
209 *value = le32_to_cpu(*pval);
210 *value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
211 }
212
213 scmi_one_xfer_put(handle, t);
214 return ret;
215}
216
217static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
218 u32 config, u64 rate)
219{
220 int ret;
221 struct scmi_xfer *t;
222 struct scmi_clock_set_rate *cfg;
223
224 ret = scmi_one_xfer_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
225 sizeof(*cfg), 0, &t);
226 if (ret)
227 return ret;
228
229 cfg = t->tx.buf;
230 cfg->flags = cpu_to_le32(config);
231 cfg->id = cpu_to_le32(clk_id);
232 cfg->value_low = cpu_to_le32(rate & 0xffffffff);
233 cfg->value_high = cpu_to_le32(rate >> 32);
234
235 ret = scmi_do_xfer(handle, t);
236
237 scmi_one_xfer_put(handle, t);
238 return ret;
239}
240
241static int
242scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
243{
244 int ret;
245 struct scmi_xfer *t;
246 struct scmi_clock_set_config *cfg;
247
248 ret = scmi_one_xfer_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK,
249 sizeof(*cfg), 0, &t);
250 if (ret)
251 return ret;
252
253 cfg = t->tx.buf;
254 cfg->id = cpu_to_le32(clk_id);
255 cfg->attributes = cpu_to_le32(config);
256
257 ret = scmi_do_xfer(handle, t);
258
259 scmi_one_xfer_put(handle, t);
260 return ret;
261}
262
263static int scmi_clock_enable(const struct scmi_handle *handle, u32 clk_id)
264{
265 return scmi_clock_config_set(handle, clk_id, CLOCK_ENABLE);
266}
267
268static int scmi_clock_disable(const struct scmi_handle *handle, u32 clk_id)
269{
270 return scmi_clock_config_set(handle, clk_id, 0);
271}
272
273static int scmi_clock_count_get(const struct scmi_handle *handle)
274{
275 struct clock_info *ci = handle->clk_priv;
276
277 return ci->num_clocks;
278}
279
280static const struct scmi_clock_info *
281scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id)
282{
283 struct clock_info *ci = handle->clk_priv;
284 struct scmi_clock_info *clk = ci->clk + clk_id;
285
286 if (!clk->name || !clk->name[0])
287 return NULL;
288
289 return clk;
290}
291
292static struct scmi_clk_ops clk_ops = {
293 .count_get = scmi_clock_count_get,
294 .info_get = scmi_clock_info_get,
295 .rate_get = scmi_clock_rate_get,
296 .rate_set = scmi_clock_rate_set,
297 .enable = scmi_clock_enable,
298 .disable = scmi_clock_disable,
299};
300
301static int scmi_clock_protocol_init(struct scmi_handle *handle)
302{
303 u32 version;
304 int clkid, ret;
305 struct clock_info *cinfo;
306
307 scmi_version_get(handle, SCMI_PROTOCOL_CLOCK, &version);
308
309 dev_dbg(handle->dev, "Clock Version %d.%d\n",
310 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
311
312 cinfo = devm_kzalloc(handle->dev, sizeof(*cinfo), GFP_KERNEL);
313 if (!cinfo)
314 return -ENOMEM;
315
316 scmi_clock_protocol_attributes_get(handle, cinfo);
317
318 cinfo->clk = devm_kcalloc(handle->dev, cinfo->num_clocks,
319 sizeof(*cinfo->clk), GFP_KERNEL);
320 if (!cinfo->clk)
321 return -ENOMEM;
322
323 for (clkid = 0; clkid < cinfo->num_clocks; clkid++) {
324 struct scmi_clock_info *clk = cinfo->clk + clkid;
325
326 ret = scmi_clock_attributes_get(handle, clkid, clk);
327 if (!ret)
328 scmi_clock_describe_rates_get(handle, clkid, clk);
329 }
330
331 handle->clk_ops = &clk_ops;
332 handle->clk_priv = cinfo;
333
334 return 0;
335}
336
337static int __init scmi_clock_init(void)
338{
339 return scmi_protocol_register(SCMI_PROTOCOL_CLOCK,
340 &scmi_clock_protocol_init);
341}
342subsys_initcall(scmi_clock_init);
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
new file mode 100644
index 000000000000..0c30234f9098
--- /dev/null
+++ b/drivers/firmware/arm_scmi/common.h
@@ -0,0 +1,105 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Message Protocol
4 * driver common header file containing some definitions, structures
5 * and function prototypes used in all the different SCMI protocols.
6 *
7 * Copyright (C) 2018 ARM Ltd.
8 */
9
10#include <linux/completion.h>
11#include <linux/device.h>
12#include <linux/errno.h>
13#include <linux/kernel.h>
14#include <linux/scmi_protocol.h>
15#include <linux/types.h>
16
17#define PROTOCOL_REV_MINOR_BITS 16
18#define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1)
19#define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS)
20#define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK)
21#define MAX_PROTOCOLS_IMP 16
22#define MAX_OPPS 16
23
24enum scmi_common_cmd {
25 PROTOCOL_VERSION = 0x0,
26 PROTOCOL_ATTRIBUTES = 0x1,
27 PROTOCOL_MESSAGE_ATTRIBUTES = 0x2,
28};
29
30/**
31 * struct scmi_msg_resp_prot_version - Response for the protocol version command
32 *
33 * @major_version: Major version of the ABI that firmware supports
34 * @minor_version: Minor version of the ABI that firmware supports
35 *
36 * In general, ABI version changes follow the rule that minor version increments
37 * are backward compatible. Major revision changes in ABI may not be
38 * backward compatible.
39 *
40 * Response to a generic message with message type PROTOCOL_VERSION
41 */
42struct scmi_msg_resp_prot_version {
43 __le16 minor_version;
44 __le16 major_version;
45};
46
47/**
48 * struct scmi_msg_hdr - Message(Tx/Rx) header
49 *
50 * @id: The identifier of the command being sent
51 * @protocol_id: The identifier of the protocol used to send @id command
52 * @seq: The token to identify the message. When a message/command returns,
53 * the platform returns the whole message header unmodified including
54 * the token.
55 */
56struct scmi_msg_hdr {
57 u8 id;
58 u8 protocol_id;
59 u16 seq;
60 u32 status;
61 bool poll_completion;
62};
63
64/**
65 * struct scmi_msg - Message(Tx/Rx) structure
66 *
67 * @buf: Buffer pointer
68 * @len: Length of data in the Buffer
69 */
70struct scmi_msg {
71 void *buf;
72 size_t len;
73};
74
75/**
76 * struct scmi_xfer - Structure representing a message flow
77 *
78 * @hdr: Transmit message header
79 * @tx: Transmit message
80 * @rx: Receive message, the buffer should be pre-allocated to store
81 * message. If request-ACK protocol is used, we can reuse the same
82 * buffer for the rx path as we use for the tx path.
83 * @done: completion event
84 */
85
86struct scmi_xfer {
87 void *con_priv;
88 struct scmi_msg_hdr hdr;
89 struct scmi_msg tx;
90 struct scmi_msg rx;
91 struct completion done;
92};
93
94void scmi_one_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
95int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer);
96int scmi_one_xfer_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
97 size_t tx_size, size_t rx_size, struct scmi_xfer **p);
98int scmi_handle_put(const struct scmi_handle *handle);
99struct scmi_handle *scmi_handle_get(struct device *dev);
100void scmi_set_handle(struct scmi_device *scmi_dev);
101int scmi_version_get(const struct scmi_handle *h, u8 protocol, u32 *version);
102void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
103 u8 *prot_imp);
104
105int scmi_base_protocol_init(struct scmi_handle *h);
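The PROTOCOL_REV_* helpers above split the 32-bit word returned by PROTOCOL_VERSION into a major revision (upper 16 bits) and a minor revision (lower 16 bits). A self-contained sketch of that arithmetic, using a made-up version value:

#include <stdint.h>
#include <stdio.h>

#define PROTOCOL_REV_MINOR_BITS 16
#define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1)
#define PROTOCOL_REV_MAJOR(x)   ((x) >> PROTOCOL_REV_MINOR_BITS)
#define PROTOCOL_REV_MINOR(x)   ((x) & PROTOCOL_REV_MINOR_MASK)

int main(void)
{
	uint32_t version = 0x00010002;	/* hypothetical: major 1, minor 2 */

	printf("Version %u.%u\n",
	       PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
	return 0;
}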
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
new file mode 100644
index 000000000000..14b147135a0c
--- /dev/null
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -0,0 +1,871 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Message Protocol driver
4 *
5 * SCMI Message Protocol is used between the System Control Processor(SCP)
6 * and the Application Processors(AP). The Message Handling Unit(MHU)
7 * provides a mechanism for inter-processor communication between SCP's
8 * Cortex M3 and AP.
9 *
10 * SCP offers control and management of the core/cluster power states,
11 * DVFS for various power domains including the core/cluster, configuration
12 * of certain system clocks, thermal sensors and many others.
13 *
14 * Copyright (C) 2018 ARM Ltd.
15 */
16
17#include <linux/bitmap.h>
18#include <linux/export.h>
19#include <linux/io.h>
20#include <linux/kernel.h>
21#include <linux/ktime.h>
22#include <linux/mailbox_client.h>
23#include <linux/module.h>
24#include <linux/of_address.h>
25#include <linux/of_device.h>
26#include <linux/processor.h>
27#include <linux/semaphore.h>
28#include <linux/slab.h>
29
30#include "common.h"
31
32#define MSG_ID_SHIFT 0
33#define MSG_ID_MASK 0xff
34#define MSG_TYPE_SHIFT 8
35#define MSG_TYPE_MASK 0x3
36#define MSG_PROTOCOL_ID_SHIFT 10
37#define MSG_PROTOCOL_ID_MASK 0xff
38#define MSG_TOKEN_ID_SHIFT 18
39#define MSG_TOKEN_ID_MASK 0x3ff
40#define MSG_XTRACT_TOKEN(header) \
41 (((header) >> MSG_TOKEN_ID_SHIFT) & MSG_TOKEN_ID_MASK)
42
43enum scmi_error_codes {
44 SCMI_SUCCESS = 0, /* Success */
45 SCMI_ERR_SUPPORT = -1, /* Not supported */
46 SCMI_ERR_PARAMS = -2, /* Invalid Parameters */
47 SCMI_ERR_ACCESS = -3, /* Invalid access/permission denied */
48 SCMI_ERR_ENTRY = -4, /* Not found */
49 SCMI_ERR_RANGE = -5, /* Value out of range */
50 SCMI_ERR_BUSY = -6, /* Device busy */
51 SCMI_ERR_COMMS = -7, /* Communication Error */
52 SCMI_ERR_GENERIC = -8, /* Generic Error */
53 SCMI_ERR_HARDWARE = -9, /* Hardware Error */
54 SCMI_ERR_PROTOCOL = -10,/* Protocol Error */
55 SCMI_ERR_MAX
56};
57
58/* List of all SCMI devices active in system */
59static LIST_HEAD(scmi_list);
60/* Protection for the entire list */
61static DEFINE_MUTEX(scmi_list_mutex);
62
63/**
64 * struct scmi_xfers_info - Structure to manage transfer information
65 *
66 * @xfer_block: Preallocated Message array
67 * @xfer_alloc_table: Bitmap table for allocated messages.
68 * Index of this bitmap table is also used for message
69 * sequence identifier.
70 * @xfer_lock: Protection for message allocation
71 */
72struct scmi_xfers_info {
73 struct scmi_xfer *xfer_block;
74 unsigned long *xfer_alloc_table;
75 /* protect transfer allocation */
76 spinlock_t xfer_lock;
77};
78
79/**
80 * struct scmi_desc - Description of SoC integration
81 *
82 * @max_rx_timeout_ms: Timeout for communication with SoC (in Milliseconds)
83 * @max_msg: Maximum number of messages that can be pending
84 * simultaneously in the system
85 * @max_msg_size: Maximum size of data per message that can be handled.
86 */
87struct scmi_desc {
88 int max_rx_timeout_ms;
89 int max_msg;
90 int max_msg_size;
91};
92
93/**
94 * struct scmi_chan_info - Structure representing an SCMI channel's information
95 *
96 * @cl: Mailbox Client
97 * @chan: Transmit/Receive mailbox channel
98 * @payload: Transmit/Receive mailbox channel payload area
99 * @dev: Reference to device in the SCMI hierarchy corresponding to this
100 * channel
101 */
102struct scmi_chan_info {
103 struct mbox_client cl;
104 struct mbox_chan *chan;
105 void __iomem *payload;
106 struct device *dev;
107 struct scmi_handle *handle;
108};
109
110/**
111 * struct scmi_info - Structure representing a SCMI instance
112 *
113 * @dev: Device pointer
114 * @desc: SoC description for this instance
115 * @handle: Instance of SCMI handle to send to clients
116 * @version: SCMI revision information containing protocol version,
117 * implementation version and (sub-)vendor identification.
118 * @minfo: Message info
119 * @tx_idr: IDR object to map protocol id to channel info pointer
120 * @protocols_imp: list of protocols implemented, currently maximum of
121 * MAX_PROTOCOLS_IMP elements allocated by the base protocol
122 * @node: list head
123 * @users: Number of users of this instance
124 */
125struct scmi_info {
126 struct device *dev;
127 const struct scmi_desc *desc;
128 struct scmi_revision_info version;
129 struct scmi_handle handle;
130 struct scmi_xfers_info minfo;
131 struct idr tx_idr;
132 u8 *protocols_imp;
133 struct list_head node;
134 int users;
135};
136
137#define client_to_scmi_chan_info(c) container_of(c, struct scmi_chan_info, cl)
138#define handle_to_scmi_info(h) container_of(h, struct scmi_info, handle)
139
140/*
141 * SCMI specification requires all parameters, message headers, return
142 * arguments or any protocol data to be expressed in little endian
143 * format only.
144 */
145struct scmi_shared_mem {
146 __le32 reserved;
147 __le32 channel_status;
148#define SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR BIT(1)
149#define SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE BIT(0)
150 __le32 reserved1[2];
151 __le32 flags;
152#define SCMI_SHMEM_FLAG_INTR_ENABLED BIT(0)
153 __le32 length;
154 __le32 msg_header;
155 u8 msg_payload[0];
156};
157
158static const int scmi_linux_errmap[] = {
159 /* better than a switch case as long as the return values are contiguous */
160 0, /* SCMI_SUCCESS */
161 -EOPNOTSUPP, /* SCMI_ERR_SUPPORT */
162 -EINVAL, /* SCMI_ERR_PARAMS */
163 -EACCES, /* SCMI_ERR_ACCESS */
164 -ENOENT, /* SCMI_ERR_ENTRY */
165 -ERANGE, /* SCMI_ERR_RANGE */
166 -EBUSY, /* SCMI_ERR_BUSY */
167 -ECOMM, /* SCMI_ERR_COMMS */
168 -EIO, /* SCMI_ERR_GENERIC */
169 -EREMOTEIO, /* SCMI_ERR_HARDWARE */
170 -EPROTO, /* SCMI_ERR_PROTOCOL */
171};
172
173static inline int scmi_to_linux_errno(int errno)
174{
175 if (errno < SCMI_SUCCESS && errno > SCMI_ERR_MAX)
176 return scmi_linux_errmap[-errno];
177 return -EIO;
178}
179
180/**
181 * scmi_dump_header_dbg() - Helper to dump a message header.
182 *
183 * @dev: Device pointer corresponding to the SCMI entity
184 * @hdr: pointer to header.
185 */
186static inline void scmi_dump_header_dbg(struct device *dev,
187 struct scmi_msg_hdr *hdr)
188{
189 dev_dbg(dev, "Command ID: %x Sequence ID: %x Protocol: %x\n",
190 hdr->id, hdr->seq, hdr->protocol_id);
191}
192
193static void scmi_fetch_response(struct scmi_xfer *xfer,
194 struct scmi_shared_mem __iomem *mem)
195{
196 xfer->hdr.status = ioread32(mem->msg_payload);
197 /* Skip the length of header and status in payload area, i.e. 8 bytes */
198 xfer->rx.len = min_t(size_t, xfer->rx.len, ioread32(&mem->length) - 8);
199
200 /* Take a copy to the rx buffer.. */
201 memcpy_fromio(xfer->rx.buf, mem->msg_payload + 4, xfer->rx.len);
202}
203
204/**
205 * scmi_rx_callback() - mailbox client callback for receive messages
206 *
207 * @cl: client pointer
208 * @m: mailbox message
209 *
210 * Processes one received message to appropriate transfer information and
211 * signals completion of the transfer.
212 *
213 * NOTE: This function will be invoked in IRQ context, hence should be
214 * as optimal as possible.
215 */
216static void scmi_rx_callback(struct mbox_client *cl, void *m)
217{
218 u16 xfer_id;
219 struct scmi_xfer *xfer;
220 struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl);
221 struct device *dev = cinfo->dev;
222 struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
223 struct scmi_xfers_info *minfo = &info->minfo;
224 struct scmi_shared_mem __iomem *mem = cinfo->payload;
225
226 xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header));
227
228 /*
229 * Are we even expecting this?
230 */
231 if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
232 dev_err(dev, "message for %d is not expected!\n", xfer_id);
233 return;
234 }
235
236 xfer = &minfo->xfer_block[xfer_id];
237
238 scmi_dump_header_dbg(dev, &xfer->hdr);
239 /* Is the message of valid length? */
240 if (xfer->rx.len > info->desc->max_msg_size) {
241 dev_err(dev, "unable to handle %zu xfer(max %d)\n",
242 xfer->rx.len, info->desc->max_msg_size);
243 return;
244 }
245
246 scmi_fetch_response(xfer, mem);
247 complete(&xfer->done);
248}
249
250/**
251 * pack_scmi_header() - packs and returns 32-bit header
252 *
253 * @hdr: pointer to header containing all the information on message id,
254 * protocol id and sequence id.
255 */
256static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr)
257{
258 return ((hdr->id & MSG_ID_MASK) << MSG_ID_SHIFT) |
259 ((hdr->seq & MSG_TOKEN_ID_MASK) << MSG_TOKEN_ID_SHIFT) |
260 ((hdr->protocol_id & MSG_PROTOCOL_ID_MASK) << MSG_PROTOCOL_ID_SHIFT);
261}
262
263/**
264 * scmi_tx_prepare() - mailbox client callback to prepare for the transfer
265 *
266 * @cl: client pointer
267 * @m: mailbox message
268 *
269 * This function prepares the shared memory which contains the header and the
270 * payload.
271 */
272static void scmi_tx_prepare(struct mbox_client *cl, void *m)
273{
274 struct scmi_xfer *t = m;
275 struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl);
276 struct scmi_shared_mem __iomem *mem = cinfo->payload;
277
278 /* Mark channel busy + clear error */
279 iowrite32(0x0, &mem->channel_status);
280 iowrite32(t->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED,
281 &mem->flags);
282 iowrite32(sizeof(mem->msg_header) + t->tx.len, &mem->length);
283 iowrite32(pack_scmi_header(&t->hdr), &mem->msg_header);
284 if (t->tx.buf)
285 memcpy_toio(mem->msg_payload, t->tx.buf, t->tx.len);
286}
287
288/**
289 * scmi_one_xfer_get() - Allocate one message
290 *
291 * @handle: SCMI entity handle
292 *
293 * Helper function used by the various command implementations exposed to
294 * clients of this driver to allocate a free message slot.
295 *
296 * This function does not sleep: it only holds a spinlock for a short
297 * critical section to maintain the integrity of the internal data
298 * structures.
299 *
300 * Return: pointer to the allocated xfer on success, or an ERR_PTR on failure.
301 */
302static struct scmi_xfer *scmi_one_xfer_get(const struct scmi_handle *handle)
303{
304 u16 xfer_id;
305 struct scmi_xfer *xfer;
306 unsigned long flags, bit_pos;
307 struct scmi_info *info = handle_to_scmi_info(handle);
308 struct scmi_xfers_info *minfo = &info->minfo;
309
310 /* Keep the locked section as small as possible */
311 spin_lock_irqsave(&minfo->xfer_lock, flags);
312 bit_pos = find_first_zero_bit(minfo->xfer_alloc_table,
313 info->desc->max_msg);
314 if (bit_pos == info->desc->max_msg) {
315 spin_unlock_irqrestore(&minfo->xfer_lock, flags);
316 return ERR_PTR(-ENOMEM);
317 }
318 set_bit(bit_pos, minfo->xfer_alloc_table);
319 spin_unlock_irqrestore(&minfo->xfer_lock, flags);
320
321 xfer_id = bit_pos;
322
323 xfer = &minfo->xfer_block[xfer_id];
324 xfer->hdr.seq = xfer_id;
325 reinit_completion(&xfer->done);
326
327 return xfer;
328}
329
330/**
331 * scmi_one_xfer_put() - Release a message
332 *
333 * @handle: SCMI entity handle
334 * @xfer: message that was reserved by scmi_one_xfer_get
335 *
336 * This holds a spinlock to maintain integrity of internal data structures.
337 */
338void scmi_one_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer)
339{
340 unsigned long flags;
341 struct scmi_info *info = handle_to_scmi_info(handle);
342 struct scmi_xfers_info *minfo = &info->minfo;
343
344 /*
345 * Keep the locked section as small as possible
346 * NOTE: we might escape with smp_mb and no lock here..
347 * but just be conservative and symmetric.
348 */
349 spin_lock_irqsave(&minfo->xfer_lock, flags);
350 clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
351 spin_unlock_irqrestore(&minfo->xfer_lock, flags);
352}
353
354static bool
355scmi_xfer_poll_done(const struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
356{
357 struct scmi_shared_mem __iomem *mem = cinfo->payload;
358 u16 xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header));
359
360 if (xfer->hdr.seq != xfer_id)
361 return false;
362
363 return ioread32(&mem->channel_status) &
364 (SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR |
365 SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
366}
367
368#define SCMI_MAX_POLL_TO_NS (100 * NSEC_PER_USEC)
369
370static bool scmi_xfer_done_no_timeout(const struct scmi_chan_info *cinfo,
371 struct scmi_xfer *xfer, ktime_t stop)
372{
373 ktime_t __cur = ktime_get();
374
375 return scmi_xfer_poll_done(cinfo, xfer) || ktime_after(__cur, stop);
376}
377
378/**
379 * scmi_do_xfer() - Do one transfer
380 *
381 * @handle: Pointer to the SCMI entity handle
382 * @xfer: Transfer to initiate and wait for response
383 *
384 * Return: 0 if all goes well, -ETIMEDOUT in case of no response,
385 * or the corresponding error code if the transmit itself
386 * fails.
387 */
388int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
389{
390 int ret;
391 int timeout;
392 struct scmi_info *info = handle_to_scmi_info(handle);
393 struct device *dev = info->dev;
394 struct scmi_chan_info *cinfo;
395
396 cinfo = idr_find(&info->tx_idr, xfer->hdr.protocol_id);
397 if (unlikely(!cinfo))
398 return -EINVAL;
399
400 ret = mbox_send_message(cinfo->chan, xfer);
401 if (ret < 0) {
402 dev_dbg(dev, "mbox send fail %d\n", ret);
403 return ret;
404 }
405
406 /* mbox_send_message returns non-negative value on success, so reset */
407 ret = 0;
408
409 if (xfer->hdr.poll_completion) {
410 ktime_t stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLL_TO_NS);
411
412 spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, stop));
413
414 if (ktime_before(ktime_get(), stop))
415 scmi_fetch_response(xfer, cinfo->payload);
416 else
417 ret = -ETIMEDOUT;
418 } else {
419 /* And we wait for the response. */
420 timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
421 if (!wait_for_completion_timeout(&xfer->done, timeout)) {
422 dev_err(dev, "mbox timed out in resp(caller: %pS)\n",
423 (void *)_RET_IP_);
424 ret = -ETIMEDOUT;
425 }
426 }
427
428 if (!ret && xfer->hdr.status)
429 ret = scmi_to_linux_errno(xfer->hdr.status);
430
431 /*
432 * NOTE: we might prefer not to need the mailbox ticker to manage the
433 * transfer queueing since the protocol layer queues things by itself.
434 * Unfortunately, we have to kick the mailbox framework after we have
435 * received our message.
436 */
437 mbox_client_txdone(cinfo->chan, ret);
438
439 return ret;
440}
441
442/**
443 * scmi_one_xfer_init() - Allocate and initialise one message
444 *
445 * @handle: SCMI entity handle
446 * @msg_id: Message identifier
447 * @prot_id: Protocol identifier for the message
448 * @tx_size: transmit message size
449 * @rx_size: receive message size
450 * @p: pointer to the allocated and initialised message
451 *
452 * This function allocates the message using scmi_one_xfer_get() and
453 * initialises the header.
454 *
455 * Return: 0 if all went fine with @p pointing to message, else
456 * corresponding error.
457 */
458int scmi_one_xfer_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id,
459 size_t tx_size, size_t rx_size, struct scmi_xfer **p)
460{
461 int ret;
462 struct scmi_xfer *xfer;
463 struct scmi_info *info = handle_to_scmi_info(handle);
464 struct device *dev = info->dev;
465
466 /* Ensure we have sane transfer sizes */
467 if (rx_size > info->desc->max_msg_size ||
468 tx_size > info->desc->max_msg_size)
469 return -ERANGE;
470
471 xfer = scmi_one_xfer_get(handle);
472 if (IS_ERR(xfer)) {
473 ret = PTR_ERR(xfer);
474 dev_err(dev, "failed to get free message slot(%d)\n", ret);
475 return ret;
476 }
477
478 xfer->tx.len = tx_size;
479 xfer->rx.len = rx_size ? : info->desc->max_msg_size;
480 xfer->hdr.id = msg_id;
481 xfer->hdr.protocol_id = prot_id;
482 xfer->hdr.poll_completion = false;
483
484 *p = xfer;
485 return 0;
486}
487
488/**
489 * scmi_version_get() - command to get the version of an SCMI protocol
490 *
491 * @handle: Handle to SCMI entity information
492 *
493 * Retrieves the version of the specified protocol from the firmware.
494 *
495 * Return: 0 if all went fine, else return appropriate error.
496 */
497int scmi_version_get(const struct scmi_handle *handle, u8 protocol,
498 u32 *version)
499{
500 int ret;
501 __le32 *rev_info;
502 struct scmi_xfer *t;
503
504 ret = scmi_one_xfer_init(handle, PROTOCOL_VERSION, protocol, 0,
505 sizeof(*version), &t);
506 if (ret)
507 return ret;
508
509 ret = scmi_do_xfer(handle, t);
510 if (!ret) {
511 rev_info = t->rx.buf;
512 *version = le32_to_cpu(*rev_info);
513 }
514
515 scmi_one_xfer_put(handle, t);
516 return ret;
517}
518
519void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
520 u8 *prot_imp)
521{
522 struct scmi_info *info = handle_to_scmi_info(handle);
523
524 info->protocols_imp = prot_imp;
525}
526
527static bool
528scmi_is_protocol_implemented(const struct scmi_handle *handle, u8 prot_id)
529{
530 int i;
531 struct scmi_info *info = handle_to_scmi_info(handle);
532
533 if (!info->protocols_imp)
534 return false;
535
536 for (i = 0; i < MAX_PROTOCOLS_IMP; i++)
537 if (info->protocols_imp[i] == prot_id)
538 return true;
539 return false;
540}
541
542/**
543 * scmi_handle_get() - Get the SCMI handle for a device
544 *
545 * @dev: pointer to device for which we want SCMI handle
546 *
547 * NOTE: The function does not track individual clients of the framework
548 * and is expected to be maintained by caller of SCMI protocol library.
549 * scmi_handle_put must be balanced with successful scmi_handle_get
550 *
551 * Return: pointer to handle if successful, NULL on error
552 */
553struct scmi_handle *scmi_handle_get(struct device *dev)
554{
555 struct list_head *p;
556 struct scmi_info *info;
557 struct scmi_handle *handle = NULL;
558
559 mutex_lock(&scmi_list_mutex);
560 list_for_each(p, &scmi_list) {
561 info = list_entry(p, struct scmi_info, node);
562 if (dev->parent == info->dev) {
563 handle = &info->handle;
564 info->users++;
565 break;
566 }
567 }
568 mutex_unlock(&scmi_list_mutex);
569
570 return handle;
571}
572
573/**
574 * scmi_handle_put() - Release the handle acquired by scmi_handle_get
575 *
576 * @handle: handle acquired by scmi_handle_get
577 *
578 * NOTE: The function does not track individual clients of the framework
579 * and is expected to be maintained by caller of SCMI protocol library.
580 * scmi_handle_put must be balanced with successful scmi_handle_get
581 *
582 * Return: 0 if successfully released,
583 * or -EINVAL if a NULL handle was passed.
584 */
585int scmi_handle_put(const struct scmi_handle *handle)
586{
587 struct scmi_info *info;
588
589 if (!handle)
590 return -EINVAL;
591
592 info = handle_to_scmi_info(handle);
593 mutex_lock(&scmi_list_mutex);
594 if (!WARN_ON(!info->users))
595 info->users--;
596 mutex_unlock(&scmi_list_mutex);
597
598 return 0;
599}
600
601static const struct scmi_desc scmi_generic_desc = {
602 .max_rx_timeout_ms = 30, /* we may increase this if required */
603 .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */
604 .max_msg_size = 128,
605};
606
607/* Each compatible listed below must have a descriptor associated with it */
608static const struct of_device_id scmi_of_match[] = {
609 { .compatible = "arm,scmi", .data = &scmi_generic_desc },
610 { /* Sentinel */ },
611};
612
613MODULE_DEVICE_TABLE(of, scmi_of_match);
614
615static int scmi_xfer_info_init(struct scmi_info *sinfo)
616{
617 int i;
618 struct scmi_xfer *xfer;
619 struct device *dev = sinfo->dev;
620 const struct scmi_desc *desc = sinfo->desc;
621 struct scmi_xfers_info *info = &sinfo->minfo;
622
623 /* Pre-allocated messages, no more than what hdr.seq can support */
624 if (WARN_ON(desc->max_msg >= (MSG_TOKEN_ID_MASK + 1))) {
625 dev_err(dev, "Maximum message of %d exceeds supported %d\n",
626 desc->max_msg, MSG_TOKEN_ID_MASK + 1);
627 return -EINVAL;
628 }
629
630 info->xfer_block = devm_kcalloc(dev, desc->max_msg,
631 sizeof(*info->xfer_block), GFP_KERNEL);
632 if (!info->xfer_block)
633 return -ENOMEM;
634
635 info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(desc->max_msg),
636 sizeof(long), GFP_KERNEL);
637 if (!info->xfer_alloc_table)
638 return -ENOMEM;
639
640 bitmap_zero(info->xfer_alloc_table, desc->max_msg);
641
642 /* Pre-initialize the buffer pointer to pre-allocated buffers */
643 for (i = 0, xfer = info->xfer_block; i < desc->max_msg; i++, xfer++) {
644 xfer->rx.buf = devm_kcalloc(dev, sizeof(u8), desc->max_msg_size,
645 GFP_KERNEL);
646 if (!xfer->rx.buf)
647 return -ENOMEM;
648
649 xfer->tx.buf = xfer->rx.buf;
650 init_completion(&xfer->done);
651 }
652
653 spin_lock_init(&info->xfer_lock);
654
655 return 0;
656}
657
658static int scmi_mailbox_check(struct device_node *np)
659{
660 struct of_phandle_args arg;
661
662 return of_parse_phandle_with_args(np, "mboxes", "#mbox-cells", 0, &arg);
663}
664
665static int scmi_mbox_free_channel(int id, void *p, void *data)
666{
667 struct scmi_chan_info *cinfo = p;
668 struct idr *idr = data;
669
670 if (!IS_ERR_OR_NULL(cinfo->chan)) {
671 mbox_free_channel(cinfo->chan);
672 cinfo->chan = NULL;
673 }
674
675 idr_remove(idr, id);
676
677 return 0;
678}
679
680static int scmi_remove(struct platform_device *pdev)
681{
682 int ret = 0;
683 struct scmi_info *info = platform_get_drvdata(pdev);
684 struct idr *idr = &info->tx_idr;
685
686 mutex_lock(&scmi_list_mutex);
687 if (info->users)
688 ret = -EBUSY;
689 else
690 list_del(&info->node);
691 mutex_unlock(&scmi_list_mutex);
692
693 if (!ret) {
694 /* Safe to free channels since no more users */
695 ret = idr_for_each(idr, scmi_mbox_free_channel, idr);
696 idr_destroy(&info->tx_idr);
697 }
698
699 return ret;
700}
701
702static inline int
703scmi_mbox_chan_setup(struct scmi_info *info, struct device *dev, int prot_id)
704{
705 int ret;
706 struct resource res;
707 resource_size_t size;
708 struct device_node *shmem, *np = dev->of_node;
709 struct scmi_chan_info *cinfo;
710 struct mbox_client *cl;
711
712 if (scmi_mailbox_check(np)) {
713 cinfo = idr_find(&info->tx_idr, SCMI_PROTOCOL_BASE);
714 goto idr_alloc;
715 }
716
717 cinfo = devm_kzalloc(info->dev, sizeof(*cinfo), GFP_KERNEL);
718 if (!cinfo)
719 return -ENOMEM;
720
721 cinfo->dev = dev;
722
723 cl = &cinfo->cl;
724 cl->dev = dev;
725 cl->rx_callback = scmi_rx_callback;
726 cl->tx_prepare = scmi_tx_prepare;
727 cl->tx_block = false;
728 cl->knows_txdone = true;
729
730 shmem = of_parse_phandle(np, "shmem", 0);
731 ret = of_address_to_resource(shmem, 0, &res);
732 of_node_put(shmem);
733 if (ret) {
734 dev_err(dev, "failed to get SCMI Tx payload mem resource\n");
735 return ret;
736 }
737
738 size = resource_size(&res);
739 cinfo->payload = devm_ioremap(info->dev, res.start, size);
740 if (!cinfo->payload) {
741 dev_err(dev, "failed to ioremap SCMI Tx payload\n");
742 return -EADDRNOTAVAIL;
743 }
744
745 /* Transmit channel is first entry i.e. index 0 */
746 cinfo->chan = mbox_request_channel(cl, 0);
747 if (IS_ERR(cinfo->chan)) {
748 ret = PTR_ERR(cinfo->chan);
749 if (ret != -EPROBE_DEFER)
750 dev_err(dev, "failed to request SCMI Tx mailbox\n");
751 return ret;
752 }
753
754idr_alloc:
755 ret = idr_alloc(&info->tx_idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL);
756 if (ret != prot_id) {
757 dev_err(dev, "unable to allocate SCMI idr slot err %d\n", ret);
758 return ret;
759 }
760
761 cinfo->handle = &info->handle;
762 return 0;
763}
764
765static inline void
766scmi_create_protocol_device(struct device_node *np, struct scmi_info *info,
767 int prot_id)
768{
769 struct scmi_device *sdev;
770
771 sdev = scmi_device_create(np, info->dev, prot_id);
772 if (!sdev) {
773 dev_err(info->dev, "failed to create %d protocol device\n",
774 prot_id);
775 return;
776 }
777
778 if (scmi_mbox_chan_setup(info, &sdev->dev, prot_id)) {
779 dev_err(&sdev->dev, "failed to setup transport\n");
780 scmi_device_destroy(sdev);
781 }
782
783 /* setup handle now as the transport is ready */
784 scmi_set_handle(sdev);
785}
786
787static int scmi_probe(struct platform_device *pdev)
788{
789 int ret;
790 struct scmi_handle *handle;
791 const struct scmi_desc *desc;
792 struct scmi_info *info;
793 struct device *dev = &pdev->dev;
794 struct device_node *child, *np = dev->of_node;
795
796 /* Only mailbox method supported, check for the presence of one */
797 if (scmi_mailbox_check(np)) {
798 dev_err(dev, "no mailbox found in %pOF\n", np);
799 return -EINVAL;
800 }
801
802 desc = of_match_device(scmi_of_match, dev)->data;
803
804 info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
805 if (!info)
806 return -ENOMEM;
807
808 info->dev = dev;
809 info->desc = desc;
810 INIT_LIST_HEAD(&info->node);
811
812 ret = scmi_xfer_info_init(info);
813 if (ret)
814 return ret;
815
816 platform_set_drvdata(pdev, info);
817 idr_init(&info->tx_idr);
818
819 handle = &info->handle;
820 handle->dev = info->dev;
821 handle->version = &info->version;
822
823 ret = scmi_mbox_chan_setup(info, dev, SCMI_PROTOCOL_BASE);
824 if (ret)
825 return ret;
826
827 ret = scmi_base_protocol_init(handle);
828 if (ret) {
829 dev_err(dev, "unable to communicate with SCMI(%d)\n", ret);
830 return ret;
831 }
832
833 mutex_lock(&scmi_list_mutex);
834 list_add_tail(&info->node, &scmi_list);
835 mutex_unlock(&scmi_list_mutex);
836
837 for_each_available_child_of_node(np, child) {
838 u32 prot_id;
839
840 if (of_property_read_u32(child, "reg", &prot_id))
841 continue;
842
843 prot_id &= MSG_PROTOCOL_ID_MASK;
844
845 if (!scmi_is_protocol_implemented(handle, prot_id)) {
846 dev_err(dev, "SCMI protocol %d not implemented\n",
847 prot_id);
848 continue;
849 }
850
851 scmi_create_protocol_device(child, info, prot_id);
852 }
853
854 return 0;
855}
856
857static struct platform_driver scmi_driver = {
858 .driver = {
859 .name = "arm-scmi",
860 .of_match_table = scmi_of_match,
861 },
862 .probe = scmi_probe,
863 .remove = scmi_remove,
864};
865
866module_platform_driver(scmi_driver);
867
868MODULE_ALIAS("platform:arm-scmi");
869MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
870MODULE_DESCRIPTION("ARM SCMI protocol driver");
871MODULE_LICENSE("GPL v2");
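pack_scmi_header() and MSG_XTRACT_TOKEN() above place the message id in bits [7:0], the protocol id in bits [17:10] and the sequence token in bits [27:18] of the shared-memory message header. A stand-alone sketch of the same packing (the id, protocol and token values used below are only example inputs):

#include <stdint.h>
#include <stdio.h>

#define MSG_ID_SHIFT		0
#define MSG_ID_MASK		0xff
#define MSG_PROTOCOL_ID_SHIFT	10
#define MSG_PROTOCOL_ID_MASK	0xff
#define MSG_TOKEN_ID_SHIFT	18
#define MSG_TOKEN_ID_MASK	0x3ff
#define MSG_XTRACT_TOKEN(h)	(((h) >> MSG_TOKEN_ID_SHIFT) & MSG_TOKEN_ID_MASK)

static uint32_t pack_header(uint8_t id, uint8_t protocol_id, uint16_t seq)
{
	return ((uint32_t)(id & MSG_ID_MASK) << MSG_ID_SHIFT) |
	       ((uint32_t)(seq & MSG_TOKEN_ID_MASK) << MSG_TOKEN_ID_SHIFT) |
	       ((uint32_t)(protocol_id & MSG_PROTOCOL_ID_MASK) << MSG_PROTOCOL_ID_SHIFT);
}

int main(void)
{
	/* example inputs: message id 0x5, protocol id 0x14, sequence token 7 */
	uint32_t hdr = pack_header(0x5, 0x14, 7);

	printf("header = 0x%08x, token extracted back = %u\n",
	       hdr, MSG_XTRACT_TOKEN(hdr));
	return 0;
}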
diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
new file mode 100644
index 000000000000..987c64d19801
--- /dev/null
+++ b/drivers/firmware/arm_scmi/perf.c
@@ -0,0 +1,481 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Performance Protocol
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#include <linux/of.h>
9#include <linux/platform_device.h>
10#include <linux/pm_opp.h>
11#include <linux/sort.h>
12
13#include "common.h"
14
15enum scmi_performance_protocol_cmd {
16 PERF_DOMAIN_ATTRIBUTES = 0x3,
17 PERF_DESCRIBE_LEVELS = 0x4,
18 PERF_LIMITS_SET = 0x5,
19 PERF_LIMITS_GET = 0x6,
20 PERF_LEVEL_SET = 0x7,
21 PERF_LEVEL_GET = 0x8,
22 PERF_NOTIFY_LIMITS = 0x9,
23 PERF_NOTIFY_LEVEL = 0xa,
24};
25
26struct scmi_opp {
27 u32 perf;
28 u32 power;
29 u32 trans_latency_us;
30};
31
32struct scmi_msg_resp_perf_attributes {
33 __le16 num_domains;
34 __le16 flags;
35#define POWER_SCALE_IN_MILLIWATT(x) ((x) & BIT(0))
36 __le32 stats_addr_low;
37 __le32 stats_addr_high;
38 __le32 stats_size;
39};
40
41struct scmi_msg_resp_perf_domain_attributes {
42 __le32 flags;
43#define SUPPORTS_SET_LIMITS(x) ((x) & BIT(31))
44#define SUPPORTS_SET_PERF_LVL(x) ((x) & BIT(30))
45#define SUPPORTS_PERF_LIMIT_NOTIFY(x) ((x) & BIT(29))
46#define SUPPORTS_PERF_LEVEL_NOTIFY(x) ((x) & BIT(28))
47 __le32 rate_limit_us;
48 __le32 sustained_freq_khz;
49 __le32 sustained_perf_level;
50 u8 name[SCMI_MAX_STR_SIZE];
51};
52
53struct scmi_msg_perf_describe_levels {
54 __le32 domain;
55 __le32 level_index;
56};
57
58struct scmi_perf_set_limits {
59 __le32 domain;
60 __le32 max_level;
61 __le32 min_level;
62};
63
64struct scmi_perf_get_limits {
65 __le32 max_level;
66 __le32 min_level;
67};
68
69struct scmi_perf_set_level {
70 __le32 domain;
71 __le32 level;
72};
73
74struct scmi_perf_notify_level_or_limits {
75 __le32 domain;
76 __le32 notify_enable;
77};
78
79struct scmi_msg_resp_perf_describe_levels {
80 __le16 num_returned;
81 __le16 num_remaining;
82 struct {
83 __le32 perf_val;
84 __le32 power;
85 __le16 transition_latency_us;
86 __le16 reserved;
87 } opp[0];
88};
89
90struct perf_dom_info {
91 bool set_limits;
92 bool set_perf;
93 bool perf_limit_notify;
94 bool perf_level_notify;
95 u32 opp_count;
96 u32 sustained_freq_khz;
97 u32 sustained_perf_level;
98 u32 mult_factor;
99 char name[SCMI_MAX_STR_SIZE];
100 struct scmi_opp opp[MAX_OPPS];
101};
102
103struct scmi_perf_info {
104 int num_domains;
105 bool power_scale_mw;
106 u64 stats_addr;
107 u32 stats_size;
108 struct perf_dom_info *dom_info;
109};
110
111static int scmi_perf_attributes_get(const struct scmi_handle *handle,
112 struct scmi_perf_info *pi)
113{
114 int ret;
115 struct scmi_xfer *t;
116 struct scmi_msg_resp_perf_attributes *attr;
117
118 ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
119 SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t);
120 if (ret)
121 return ret;
122
123 attr = t->rx.buf;
124
125 ret = scmi_do_xfer(handle, t);
126 if (!ret) {
127 u16 flags = le16_to_cpu(attr->flags);
128
129 pi->num_domains = le16_to_cpu(attr->num_domains);
130 pi->power_scale_mw = POWER_SCALE_IN_MILLIWATT(flags);
131 pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
132 (u64)le32_to_cpu(attr->stats_addr_high) << 32;
133 pi->stats_size = le32_to_cpu(attr->stats_size);
134 }
135
136 scmi_one_xfer_put(handle, t);
137 return ret;
138}
139
140static int
141scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
142 struct perf_dom_info *dom_info)
143{
144 int ret;
145 struct scmi_xfer *t;
146 struct scmi_msg_resp_perf_domain_attributes *attr;
147
148 ret = scmi_one_xfer_init(handle, PERF_DOMAIN_ATTRIBUTES,
149 SCMI_PROTOCOL_PERF, sizeof(domain),
150 sizeof(*attr), &t);
151 if (ret)
152 return ret;
153
154 *(__le32 *)t->tx.buf = cpu_to_le32(domain);
155 attr = t->rx.buf;
156
157 ret = scmi_do_xfer(handle, t);
158 if (!ret) {
159 u32 flags = le32_to_cpu(attr->flags);
160
161 dom_info->set_limits = SUPPORTS_SET_LIMITS(flags);
162 dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags);
163 dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags);
164 dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags);
165 dom_info->sustained_freq_khz =
166 le32_to_cpu(attr->sustained_freq_khz);
167 dom_info->sustained_perf_level =
168 le32_to_cpu(attr->sustained_perf_level);
169 dom_info->mult_factor = (dom_info->sustained_freq_khz * 1000) /
170 dom_info->sustained_perf_level;
171 memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
172 }
173
174 scmi_one_xfer_put(handle, t);
175 return ret;
176}
177
178static int opp_cmp_func(const void *opp1, const void *opp2)
179{
180 const struct scmi_opp *t1 = opp1, *t2 = opp2;
181
182 return t1->perf - t2->perf;
183}
184
185static int
186scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
187 struct perf_dom_info *perf_dom)
188{
189 int ret, cnt;
190 u32 tot_opp_cnt = 0;
191 u16 num_returned, num_remaining;
192 struct scmi_xfer *t;
193 struct scmi_opp *opp;
194 struct scmi_msg_perf_describe_levels *dom_info;
195 struct scmi_msg_resp_perf_describe_levels *level_info;
196
197 ret = scmi_one_xfer_init(handle, PERF_DESCRIBE_LEVELS,
198 SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t);
199 if (ret)
200 return ret;
201
202 dom_info = t->tx.buf;
203 level_info = t->rx.buf;
204
205 do {
206 dom_info->domain = cpu_to_le32(domain);
207 /* Set the number of OPPs to be skipped/already read */
208 dom_info->level_index = cpu_to_le32(tot_opp_cnt);
209
210 ret = scmi_do_xfer(handle, t);
211 if (ret)
212 break;
213
214 num_returned = le16_to_cpu(level_info->num_returned);
215 num_remaining = le16_to_cpu(level_info->num_remaining);
216 if (tot_opp_cnt + num_returned > MAX_OPPS) {
217 dev_err(handle->dev, "No. of OPPs exceeded MAX_OPPS");
218 break;
219 }
220
221 opp = &perf_dom->opp[tot_opp_cnt];
222 for (cnt = 0; cnt < num_returned; cnt++, opp++) {
223 opp->perf = le32_to_cpu(level_info->opp[cnt].perf_val);
224 opp->power = le32_to_cpu(level_info->opp[cnt].power);
225 opp->trans_latency_us = le16_to_cpu
226 (level_info->opp[cnt].transition_latency_us);
227
228 dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n",
229 opp->perf, opp->power, opp->trans_latency_us);
230 }
231
232 tot_opp_cnt += num_returned;
233 /*
234 * check for both returned and remaining to avoid infinite
235 * loop due to buggy firmware
236 */
237 } while (num_returned && num_remaining);
238
239 perf_dom->opp_count = tot_opp_cnt;
240 scmi_one_xfer_put(handle, t);
241
242 sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL);
243 return ret;
244}
245
246static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
247 u32 max_perf, u32 min_perf)
248{
249 int ret;
250 struct scmi_xfer *t;
251 struct scmi_perf_set_limits *limits;
252
253 ret = scmi_one_xfer_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF,
254 sizeof(*limits), 0, &t);
255 if (ret)
256 return ret;
257
258 limits = t->tx.buf;
259 limits->domain = cpu_to_le32(domain);
260 limits->max_level = cpu_to_le32(max_perf);
261 limits->min_level = cpu_to_le32(min_perf);
262
263 ret = scmi_do_xfer(handle, t);
264
265 scmi_one_xfer_put(handle, t);
266 return ret;
267}
268
269static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
270 u32 *max_perf, u32 *min_perf)
271{
272 int ret;
273 struct scmi_xfer *t;
274 struct scmi_perf_get_limits *limits;
275
276 ret = scmi_one_xfer_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF,
277 sizeof(__le32), 0, &t);
278 if (ret)
279 return ret;
280
281 *(__le32 *)t->tx.buf = cpu_to_le32(domain);
282
283 ret = scmi_do_xfer(handle, t);
284 if (!ret) {
285 limits = t->rx.buf;
286
287 *max_perf = le32_to_cpu(limits->max_level);
288 *min_perf = le32_to_cpu(limits->min_level);
289 }
290
291 scmi_one_xfer_put(handle, t);
292 return ret;
293}
294
295static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
296 u32 level, bool poll)
297{
298 int ret;
299 struct scmi_xfer *t;
300 struct scmi_perf_set_level *lvl;
301
302 ret = scmi_one_xfer_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF,
303 sizeof(*lvl), 0, &t);
304 if (ret)
305 return ret;
306
307 t->hdr.poll_completion = poll;
308 lvl = t->tx.buf;
309 lvl->domain = cpu_to_le32(domain);
310 lvl->level = cpu_to_le32(level);
311
312 ret = scmi_do_xfer(handle, t);
313
314 scmi_one_xfer_put(handle, t);
315 return ret;
316}
317
318static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
319 u32 *level, bool poll)
320{
321 int ret;
322 struct scmi_xfer *t;
323
324 ret = scmi_one_xfer_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
325 sizeof(u32), sizeof(u32), &t);
326 if (ret)
327 return ret;
328
329 t->hdr.poll_completion = poll;
330 *(__le32 *)t->tx.buf = cpu_to_le32(domain);
331
332 ret = scmi_do_xfer(handle, t);
333 if (!ret)
334 *level = le32_to_cpu(*(__le32 *)t->rx.buf);
335
336 scmi_one_xfer_put(handle, t);
337 return ret;
338}
339
340/* Device specific ops */
341static int scmi_dev_domain_id(struct device *dev)
342{
343 struct of_phandle_args clkspec;
344
345 if (of_parse_phandle_with_args(dev->of_node, "clocks", "#clock-cells",
346 0, &clkspec))
347 return -EINVAL;
348
349 return clkspec.args[0];
350}
351
352static int scmi_dvfs_add_opps_to_device(const struct scmi_handle *handle,
353 struct device *dev)
354{
355 int idx, ret, domain;
356 unsigned long freq;
357 struct scmi_opp *opp;
358 struct perf_dom_info *dom;
359 struct scmi_perf_info *pi = handle->perf_priv;
360
361 domain = scmi_dev_domain_id(dev);
362 if (domain < 0)
363 return domain;
364
365 dom = pi->dom_info + domain;
366 if (!dom)
367 return -EIO;
368
369 for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) {
370 freq = opp->perf * dom->mult_factor;
371
372 ret = dev_pm_opp_add(dev, freq, 0);
373 if (ret) {
374 dev_warn(dev, "failed to add opp %luHz\n", freq);
375
376 while (idx-- > 0) {
377 freq = (--opp)->perf * dom->mult_factor;
378 dev_pm_opp_remove(dev, freq);
379 }
380 return ret;
381 }
382 }
383 return 0;
384}
385
386static int scmi_dvfs_get_transition_latency(const struct scmi_handle *handle,
387 struct device *dev)
388{
389 struct perf_dom_info *dom;
390 struct scmi_perf_info *pi = handle->perf_priv;
391 int domain = scmi_dev_domain_id(dev);
392
393 if (domain < 0)
394 return domain;
395
396 dom = pi->dom_info + domain;
397 if (!dom)
398 return -EIO;
399
400 /* us to ns */
401 return dom->opp[dom->opp_count - 1].trans_latency_us * 1000;
402}
403
404static int scmi_dvfs_freq_set(const struct scmi_handle *handle, u32 domain,
405 unsigned long freq, bool poll)
406{
407 struct scmi_perf_info *pi = handle->perf_priv;
408 struct perf_dom_info *dom = pi->dom_info + domain;
409
410 return scmi_perf_level_set(handle, domain, freq / dom->mult_factor,
411 poll);
412}
413
414static int scmi_dvfs_freq_get(const struct scmi_handle *handle, u32 domain,
415 unsigned long *freq, bool poll)
416{
417 int ret;
418 u32 level;
419 struct scmi_perf_info *pi = handle->perf_priv;
420 struct perf_dom_info *dom = pi->dom_info + domain;
421
422 ret = scmi_perf_level_get(handle, domain, &level, poll);
423 if (!ret)
424 *freq = level * dom->mult_factor;
425
426 return ret;
427}
428
429static struct scmi_perf_ops perf_ops = {
430 .limits_set = scmi_perf_limits_set,
431 .limits_get = scmi_perf_limits_get,
432 .level_set = scmi_perf_level_set,
433 .level_get = scmi_perf_level_get,
434 .device_domain_id = scmi_dev_domain_id,
435 .get_transition_latency = scmi_dvfs_get_transition_latency,
436 .add_opps_to_device = scmi_dvfs_add_opps_to_device,
437 .freq_set = scmi_dvfs_freq_set,
438 .freq_get = scmi_dvfs_freq_get,
439};
440
441static int scmi_perf_protocol_init(struct scmi_handle *handle)
442{
443 int domain;
444 u32 version;
445 struct scmi_perf_info *pinfo;
446
447 scmi_version_get(handle, SCMI_PROTOCOL_PERF, &version);
448
449 dev_dbg(handle->dev, "Performance Version %d.%d\n",
450 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
451
452 pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
453 if (!pinfo)
454 return -ENOMEM;
455
456 scmi_perf_attributes_get(handle, pinfo);
457
458 pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
459 sizeof(*pinfo->dom_info), GFP_KERNEL);
460 if (!pinfo->dom_info)
461 return -ENOMEM;
462
463 for (domain = 0; domain < pinfo->num_domains; domain++) {
464 struct perf_dom_info *dom = pinfo->dom_info + domain;
465
466 scmi_perf_domain_attributes_get(handle, domain, dom);
467 scmi_perf_describe_levels_get(handle, domain, dom);
468 }
469
470 handle->perf_ops = &perf_ops;
471 handle->perf_priv = pinfo;
472
473 return 0;
474}
475
476static int __init scmi_perf_init(void)
477{
478 return scmi_protocol_register(SCMI_PROTOCOL_PERF,
479 &scmi_perf_protocol_init);
480}
481subsys_initcall(scmi_perf_init);
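scmi_perf_domain_attributes_get() above derives mult_factor as (sustained_freq_khz * 1000) / sustained_perf_level, and the freq_set/freq_get ops then multiply or divide by it to translate between abstract performance levels and Hz. A small sketch of that conversion, using invented domain attributes:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* hypothetical domain attributes returned by the firmware */
	uint32_t sustained_freq_khz   = 1200000;	/* 1.2 GHz */
	uint32_t sustained_perf_level = 1200;

	/* same derivation as in scmi_perf_domain_attributes_get() */
	uint32_t mult_factor = (sustained_freq_khz * 1000) / sustained_perf_level;

	uint32_t level = 800;				/* an OPP's perf value */
	unsigned long freq = (unsigned long)level * mult_factor;

	printf("mult_factor = %u Hz per level\n", mult_factor);
	printf("level %u -> %lu Hz, and back -> level %lu\n",
	       level, freq, freq / mult_factor);
	return 0;
}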
diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
new file mode 100644
index 000000000000..087c2876cdf2
--- /dev/null
+++ b/drivers/firmware/arm_scmi/power.c
@@ -0,0 +1,221 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Power Protocol
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#include "common.h"
9
10enum scmi_power_protocol_cmd {
11 POWER_DOMAIN_ATTRIBUTES = 0x3,
12 POWER_STATE_SET = 0x4,
13 POWER_STATE_GET = 0x5,
14 POWER_STATE_NOTIFY = 0x6,
15};
16
17struct scmi_msg_resp_power_attributes {
18 __le16 num_domains;
19 __le16 reserved;
20 __le32 stats_addr_low;
21 __le32 stats_addr_high;
22 __le32 stats_size;
23};
24
25struct scmi_msg_resp_power_domain_attributes {
26 __le32 flags;
27#define SUPPORTS_STATE_SET_NOTIFY(x) ((x) & BIT(31))
28#define SUPPORTS_STATE_SET_ASYNC(x) ((x) & BIT(30))
29#define SUPPORTS_STATE_SET_SYNC(x) ((x) & BIT(29))
30 u8 name[SCMI_MAX_STR_SIZE];
31};
32
33struct scmi_power_set_state {
34 __le32 flags;
35#define STATE_SET_ASYNC BIT(0)
36 __le32 domain;
37 __le32 state;
38};
39
40struct scmi_power_state_notify {
41 __le32 domain;
42 __le32 notify_enable;
43};
44
45struct power_dom_info {
46 bool state_set_sync;
47 bool state_set_async;
48 bool state_set_notify;
49 char name[SCMI_MAX_STR_SIZE];
50};
51
52struct scmi_power_info {
53 int num_domains;
54 u64 stats_addr;
55 u32 stats_size;
56 struct power_dom_info *dom_info;
57};
58
59static int scmi_power_attributes_get(const struct scmi_handle *handle,
60 struct scmi_power_info *pi)
61{
62 int ret;
63 struct scmi_xfer *t;
64 struct scmi_msg_resp_power_attributes *attr;
65
66 ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
67 SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t);
68 if (ret)
69 return ret;
70
71 attr = t->rx.buf;
72
73 ret = scmi_do_xfer(handle, t);
74 if (!ret) {
75 pi->num_domains = le16_to_cpu(attr->num_domains);
76 pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
77 (u64)le32_to_cpu(attr->stats_addr_high) << 32;
78 pi->stats_size = le32_to_cpu(attr->stats_size);
79 }
80
81 scmi_one_xfer_put(handle, t);
82 return ret;
83}
84
85static int
86scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
87 struct power_dom_info *dom_info)
88{
89 int ret;
90 struct scmi_xfer *t;
91 struct scmi_msg_resp_power_domain_attributes *attr;
92
93 ret = scmi_one_xfer_init(handle, POWER_DOMAIN_ATTRIBUTES,
94 SCMI_PROTOCOL_POWER, sizeof(domain),
95 sizeof(*attr), &t);
96 if (ret)
97 return ret;
98
99 *(__le32 *)t->tx.buf = cpu_to_le32(domain);
100 attr = t->rx.buf;
101
102 ret = scmi_do_xfer(handle, t);
103 if (!ret) {
104 u32 flags = le32_to_cpu(attr->flags);
105
106 dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags);
107 dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags);
108 dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags);
109 memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
110 }
111
112 scmi_one_xfer_put(handle, t);
113 return ret;
114}
115
116static int
117scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
118{
119 int ret;
120 struct scmi_xfer *t;
121 struct scmi_power_set_state *st;
122
123 ret = scmi_one_xfer_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
124 sizeof(*st), 0, &t);
125 if (ret)
126 return ret;
127
128 st = t->tx.buf;
129 st->flags = cpu_to_le32(0);
130 st->domain = cpu_to_le32(domain);
131 st->state = cpu_to_le32(state);
132
133 ret = scmi_do_xfer(handle, t);
134
135 scmi_one_xfer_put(handle, t);
136 return ret;
137}
138
139static int
140scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
141{
142 int ret;
143 struct scmi_xfer *t;
144
145 ret = scmi_one_xfer_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
146 sizeof(u32), sizeof(u32), &t);
147 if (ret)
148 return ret;
149
150 *(__le32 *)t->tx.buf = cpu_to_le32(domain);
151
152 ret = scmi_do_xfer(handle, t);
153 if (!ret)
154 *state = le32_to_cpu(*(__le32 *)t->rx.buf);
155
156 scmi_one_xfer_put(handle, t);
157 return ret;
158}
159
160static int scmi_power_num_domains_get(const struct scmi_handle *handle)
161{
162 struct scmi_power_info *pi = handle->power_priv;
163
164 return pi->num_domains;
165}
166
167static char *scmi_power_name_get(const struct scmi_handle *handle, u32 domain)
168{
169 struct scmi_power_info *pi = handle->power_priv;
170 struct power_dom_info *dom = pi->dom_info + domain;
171
172 return dom->name;
173}
174
175static struct scmi_power_ops power_ops = {
176 .num_domains_get = scmi_power_num_domains_get,
177 .name_get = scmi_power_name_get,
178 .state_set = scmi_power_state_set,
179 .state_get = scmi_power_state_get,
180};
181
182static int scmi_power_protocol_init(struct scmi_handle *handle)
183{
184 int domain;
185 u32 version;
186 struct scmi_power_info *pinfo;
187
188 scmi_version_get(handle, SCMI_PROTOCOL_POWER, &version);
189
190 dev_dbg(handle->dev, "Power Version %d.%d\n",
191 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
192
193 pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
194 if (!pinfo)
195 return -ENOMEM;
196
197 scmi_power_attributes_get(handle, pinfo);
198
199 pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
200 sizeof(*pinfo->dom_info), GFP_KERNEL);
201 if (!pinfo->dom_info)
202 return -ENOMEM;
203
204 for (domain = 0; domain < pinfo->num_domains; domain++) {
205 struct power_dom_info *dom = pinfo->dom_info + domain;
206
207 scmi_power_domain_attributes_get(handle, domain, dom);
208 }
209
210 handle->power_ops = &power_ops;
211 handle->power_priv = pinfo;
212
213 return 0;
214}
215
216static int __init scmi_power_init(void)
217{
218 return scmi_protocol_register(SCMI_PROTOCOL_POWER,
219 &scmi_power_protocol_init);
220}
221subsys_initcall(scmi_power_init);
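scmi_power_attributes_get() above (like its perf and sensor counterparts) reassembles the 64-bit statistics region address from the two little-endian 32-bit words in the response. A sketch of that assembly, with made-up values assumed to be already converted to host byte order:

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	/* hypothetical words from the PROTOCOL_ATTRIBUTES response */
	uint32_t stats_addr_low  = 0x80000000;
	uint32_t stats_addr_high = 0x00000001;

	/* low word | high word shifted up, as in scmi_power_attributes_get() */
	uint64_t stats_addr = stats_addr_low | (uint64_t)stats_addr_high << 32;

	printf("statistics region at 0x%" PRIx64 "\n", stats_addr);
	return 0;
}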
diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c
new file mode 100644
index 000000000000..87f737e01473
--- /dev/null
+++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c
@@ -0,0 +1,129 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * SCMI Generic power domain support.
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#include <linux/err.h>
9#include <linux/io.h>
10#include <linux/module.h>
11#include <linux/pm_domain.h>
12#include <linux/scmi_protocol.h>
13
14struct scmi_pm_domain {
15 struct generic_pm_domain genpd;
16 const struct scmi_handle *handle;
17 const char *name;
18 u32 domain;
19};
20
21#define to_scmi_pd(gpd) container_of(gpd, struct scmi_pm_domain, genpd)
22
23static int scmi_pd_power(struct generic_pm_domain *domain, bool power_on)
24{
25 int ret;
26 u32 state, ret_state;
27 struct scmi_pm_domain *pd = to_scmi_pd(domain);
28 const struct scmi_power_ops *ops = pd->handle->power_ops;
29
30 if (power_on)
31 state = SCMI_POWER_STATE_GENERIC_ON;
32 else
33 state = SCMI_POWER_STATE_GENERIC_OFF;
34
35 ret = ops->state_set(pd->handle, pd->domain, state);
36 if (!ret)
37 ret = ops->state_get(pd->handle, pd->domain, &ret_state);
38 if (!ret && state != ret_state)
39 return -EIO;
40
41 return ret;
42}
43
44static int scmi_pd_power_on(struct generic_pm_domain *domain)
45{
46 return scmi_pd_power(domain, true);
47}
48
49static int scmi_pd_power_off(struct generic_pm_domain *domain)
50{
51 return scmi_pd_power(domain, false);
52}
53
54static int scmi_pm_domain_probe(struct scmi_device *sdev)
55{
56 int num_domains, i;
57 struct device *dev = &sdev->dev;
58 struct device_node *np = dev->of_node;
59 struct scmi_pm_domain *scmi_pd;
60 struct genpd_onecell_data *scmi_pd_data;
61 struct generic_pm_domain **domains;
62 const struct scmi_handle *handle = sdev->handle;
63
64 if (!handle || !handle->power_ops)
65 return -ENODEV;
66
67 num_domains = handle->power_ops->num_domains_get(handle);
68 if (num_domains < 0) {
69 dev_err(dev, "number of domains not found\n");
70 return num_domains;
71 }
72
73 scmi_pd = devm_kcalloc(dev, num_domains, sizeof(*scmi_pd), GFP_KERNEL);
74 if (!scmi_pd)
75 return -ENOMEM;
76
77 scmi_pd_data = devm_kzalloc(dev, sizeof(*scmi_pd_data), GFP_KERNEL);
78 if (!scmi_pd_data)
79 return -ENOMEM;
80
81 domains = devm_kcalloc(dev, num_domains, sizeof(*domains), GFP_KERNEL);
82 if (!domains)
83 return -ENOMEM;
84
85 for (i = 0; i < num_domains; i++, scmi_pd++) {
86 u32 state;
87
88 domains[i] = &scmi_pd->genpd;
89
90 scmi_pd->domain = i;
91 scmi_pd->handle = handle;
92 scmi_pd->name = handle->power_ops->name_get(handle, i);
93 scmi_pd->genpd.name = scmi_pd->name;
94 scmi_pd->genpd.power_off = scmi_pd_power_off;
95 scmi_pd->genpd.power_on = scmi_pd_power_on;
96
97 if (handle->power_ops->state_get(handle, i, &state)) {
98 dev_warn(dev, "failed to get state for domain %d\n", i);
99 continue;
100 }
101
102 pm_genpd_init(&scmi_pd->genpd, NULL,
103 state == SCMI_POWER_STATE_GENERIC_OFF);
104 }
105
106 scmi_pd_data->domains = domains;
107 scmi_pd_data->num_domains = num_domains;
108
109 of_genpd_add_provider_onecell(np, scmi_pd_data);
110
111 return 0;
112}
113
114static const struct scmi_device_id scmi_id_table[] = {
115 { SCMI_PROTOCOL_POWER },
116 { },
117};
118MODULE_DEVICE_TABLE(scmi, scmi_id_table);
119
120static struct scmi_driver scmi_power_domain_driver = {
121 .name = "scmi-power-domain",
122 .probe = scmi_pm_domain_probe,
123 .id_table = scmi_id_table,
124};
125module_scmi_driver(scmi_power_domain_driver);
126
127MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
128MODULE_DESCRIPTION("ARM SCMI power domain driver");
129MODULE_LICENSE("GPL v2");
diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
new file mode 100644
index 000000000000..bbb469fea0ed
--- /dev/null
+++ b/drivers/firmware/arm_scmi/sensors.c
@@ -0,0 +1,291 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface (SCMI) Sensor Protocol
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7
8#include "common.h"
9
10enum scmi_sensor_protocol_cmd {
11 SENSOR_DESCRIPTION_GET = 0x3,
12 SENSOR_CONFIG_SET = 0x4,
13 SENSOR_TRIP_POINT_SET = 0x5,
14 SENSOR_READING_GET = 0x6,
15};
16
17struct scmi_msg_resp_sensor_attributes {
18 __le16 num_sensors;
19 u8 max_requests;
20 u8 reserved;
21 __le32 reg_addr_low;
22 __le32 reg_addr_high;
23 __le32 reg_size;
24};
25
26struct scmi_msg_resp_sensor_description {
27 __le16 num_returned;
28 __le16 num_remaining;
29 struct {
30 __le32 id;
31 __le32 attributes_low;
32#define SUPPORTS_ASYNC_READ(x) ((x) & BIT(31))
33#define NUM_TRIP_POINTS(x) (((x) >> 4) & 0xff)
34 __le32 attributes_high;
35#define SENSOR_TYPE(x) ((x) & 0xff)
36#define SENSOR_SCALE(x) (((x) >> 11) & 0x3f)
37#define SENSOR_UPDATE_SCALE(x) (((x) >> 22) & 0x1f)
38#define SENSOR_UPDATE_BASE(x) (((x) >> 27) & 0x1f)
39 u8 name[SCMI_MAX_STR_SIZE];
40 } desc[0];
41};
42
43struct scmi_msg_set_sensor_config {
44 __le32 id;
45 __le32 event_control;
46};
47
48struct scmi_msg_set_sensor_trip_point {
49 __le32 id;
50 __le32 event_control;
51#define SENSOR_TP_EVENT_MASK (0x3)
52#define SENSOR_TP_DISABLED 0x0
53#define SENSOR_TP_POSITIVE 0x1
54#define SENSOR_TP_NEGATIVE 0x2
55#define SENSOR_TP_BOTH 0x3
56#define SENSOR_TP_ID(x) (((x) & 0xff) << 4)
57 __le32 value_low;
58 __le32 value_high;
59};
60
61struct scmi_msg_sensor_reading_get {
62 __le32 id;
63 __le32 flags;
64#define SENSOR_READ_ASYNC BIT(0)
65};
66
67struct sensors_info {
68 int num_sensors;
69 int max_requests;
70 u64 reg_addr;
71 u32 reg_size;
72 struct scmi_sensor_info *sensors;
73};
74
75static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
76 struct sensors_info *si)
77{
78 int ret;
79 struct scmi_xfer *t;
80 struct scmi_msg_resp_sensor_attributes *attr;
81
82 ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
83 SCMI_PROTOCOL_SENSOR, 0, sizeof(*attr), &t);
84 if (ret)
85 return ret;
86
87 attr = t->rx.buf;
88
89 ret = scmi_do_xfer(handle, t);
90 if (!ret) {
91 si->num_sensors = le16_to_cpu(attr->num_sensors);
92 si->max_requests = attr->max_requests;
93 si->reg_addr = le32_to_cpu(attr->reg_addr_low) |
94 (u64)le32_to_cpu(attr->reg_addr_high) << 32;
95 si->reg_size = le32_to_cpu(attr->reg_size);
96 }
97
98 scmi_one_xfer_put(handle, t);
99 return ret;
100}
101
102static int scmi_sensor_description_get(const struct scmi_handle *handle,
103 struct sensors_info *si)
104{
105 int ret, cnt;
106 u32 desc_index = 0;
107 u16 num_returned, num_remaining;
108 struct scmi_xfer *t;
109 struct scmi_msg_resp_sensor_description *buf;
110
111 ret = scmi_one_xfer_init(handle, SENSOR_DESCRIPTION_GET,
112 SCMI_PROTOCOL_SENSOR, sizeof(__le32), 0, &t);
113 if (ret)
114 return ret;
115
116 buf = t->rx.buf;
117
118 do {
119 /* Set the number of sensors to be skipped/already read */
120 *(__le32 *)t->tx.buf = cpu_to_le32(desc_index);
121
122 ret = scmi_do_xfer(handle, t);
123 if (ret)
124 break;
125
126 num_returned = le16_to_cpu(buf->num_returned);
127 num_remaining = le16_to_cpu(buf->num_remaining);
128
129 if (desc_index + num_returned > si->num_sensors) {
130 dev_err(handle->dev, "No. of sensors can't exceed %d\n",
131 si->num_sensors);
132 break;
133 }
134
135 for (cnt = 0; cnt < num_returned; cnt++) {
136 u32 attrh;
137 struct scmi_sensor_info *s;
138
139 attrh = le32_to_cpu(buf->desc[cnt].attributes_high);
140 s = &si->sensors[desc_index + cnt];
141 s->id = le32_to_cpu(buf->desc[cnt].id);
142 s->type = SENSOR_TYPE(attrh);
143 memcpy(s->name, buf->desc[cnt].name, SCMI_MAX_STR_SIZE);
144 }
145
146 desc_index += num_returned;
147 /*
148 * check for both returned and remaining to avoid infinite
149 * loop due to buggy firmware
150 */
151 } while (num_returned && num_remaining);
152
153 scmi_one_xfer_put(handle, t);
154 return ret;
155}
156
157static int
158scmi_sensor_configuration_set(const struct scmi_handle *handle, u32 sensor_id)
159{
160 int ret;
161 u32 evt_cntl = BIT(0);
162 struct scmi_xfer *t;
163 struct scmi_msg_set_sensor_config *cfg;
164
165 ret = scmi_one_xfer_init(handle, SENSOR_CONFIG_SET,
166 SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t);
167 if (ret)
168 return ret;
169
170 cfg = t->tx.buf;
171 cfg->id = cpu_to_le32(sensor_id);
172 cfg->event_control = cpu_to_le32(evt_cntl);
173
174 ret = scmi_do_xfer(handle, t);
175
176 scmi_one_xfer_put(handle, t);
177 return ret;
178}
179
180static int scmi_sensor_trip_point_set(const struct scmi_handle *handle,
181 u32 sensor_id, u8 trip_id, u64 trip_value)
182{
183 int ret;
184 u32 evt_cntl = SENSOR_TP_BOTH;
185 struct scmi_xfer *t;
186 struct scmi_msg_set_sensor_trip_point *trip;
187
188 ret = scmi_one_xfer_init(handle, SENSOR_TRIP_POINT_SET,
189 SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t);
190 if (ret)
191 return ret;
192
193 trip = t->tx.buf;
194 trip->id = cpu_to_le32(sensor_id);
195 trip->event_control = cpu_to_le32(evt_cntl | SENSOR_TP_ID(trip_id));
196 trip->value_low = cpu_to_le32(trip_value & 0xffffffff);
197 trip->value_high = cpu_to_le32(trip_value >> 32);
198
199 ret = scmi_do_xfer(handle, t);
200
201 scmi_one_xfer_put(handle, t);
202 return ret;
203}
204
205static int scmi_sensor_reading_get(const struct scmi_handle *handle,
206 u32 sensor_id, bool async, u64 *value)
207{
208 int ret;
209 struct scmi_xfer *t;
210 struct scmi_msg_sensor_reading_get *sensor;
211
212 ret = scmi_one_xfer_init(handle, SENSOR_READING_GET,
213 SCMI_PROTOCOL_SENSOR, sizeof(*sensor),
214 sizeof(u64), &t);
215 if (ret)
216 return ret;
217
218 sensor = t->tx.buf;
219 sensor->id = cpu_to_le32(sensor_id);
220 sensor->flags = cpu_to_le32(async ? SENSOR_READ_ASYNC : 0);
221
222 ret = scmi_do_xfer(handle, t);
223 if (!ret) {
224 __le32 *pval = t->rx.buf;
225
226 *value = le32_to_cpu(*pval);
227 *value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
228 }
229
230 scmi_one_xfer_put(handle, t);
231 return ret;
232}
233
234static const struct scmi_sensor_info *
235scmi_sensor_info_get(const struct scmi_handle *handle, u32 sensor_id)
236{
237 struct sensors_info *si = handle->sensor_priv;
238
239 return si->sensors + sensor_id;
240}
241
242static int scmi_sensor_count_get(const struct scmi_handle *handle)
243{
244 struct sensors_info *si = handle->sensor_priv;
245
246 return si->num_sensors;
247}
248
249static struct scmi_sensor_ops sensor_ops = {
250 .count_get = scmi_sensor_count_get,
251 .info_get = scmi_sensor_info_get,
252 .configuration_set = scmi_sensor_configuration_set,
253 .trip_point_set = scmi_sensor_trip_point_set,
254 .reading_get = scmi_sensor_reading_get,
255};
256
257static int scmi_sensors_protocol_init(struct scmi_handle *handle)
258{
259 u32 version;
260 struct sensors_info *sinfo;
261
262 scmi_version_get(handle, SCMI_PROTOCOL_SENSOR, &version);
263
264 dev_dbg(handle->dev, "Sensor Version %d.%d\n",
265 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
266
267 sinfo = devm_kzalloc(handle->dev, sizeof(*sinfo), GFP_KERNEL);
268 if (!sinfo)
269 return -ENOMEM;
270
271 scmi_sensor_attributes_get(handle, sinfo);
272
273 sinfo->sensors = devm_kcalloc(handle->dev, sinfo->num_sensors,
274 sizeof(*sinfo->sensors), GFP_KERNEL);
275 if (!sinfo->sensors)
276 return -ENOMEM;
277
278 scmi_sensor_description_get(handle, sinfo);
279
280 handle->sensor_ops = &sensor_ops;
281 handle->sensor_priv = sinfo;
282
283 return 0;
284}
285
286static int __init scmi_sensors_init(void)
287{
288 return scmi_protocol_register(SCMI_PROTOCOL_SENSOR,
289 &scmi_sensors_protocol_init);
290}
291subsys_initcall(scmi_sensors_init);
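
For orientation, here is an editor's sketch (not part of the series) of how a client driver consumes the sensor ops registered above through the scmi_handle; the function name is illustrative and error handling is kept minimal.

static int example_read_first_sensor(const struct scmi_handle *handle)
{
	int ret;
	u64 value;
	const struct scmi_sensor_info *info;

	if (!handle->sensor_ops || !handle->sensor_ops->count_get(handle))
		return -ENODEV;

	/* Describe sensor 0 and read it synchronously (async = false) */
	info = handle->sensor_ops->info_get(handle, 0);
	ret = handle->sensor_ops->reading_get(handle, info->id, false, &value);
	if (!ret)
		dev_info(handle->dev, "%s: %llu\n", info->name,
			 (unsigned long long)value);

	return ret;
}
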
diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
index ef23553ff5cb..033e57366d56 100644
--- a/drivers/hwmon/Kconfig
+++ b/drivers/hwmon/Kconfig
@@ -317,6 +317,18 @@ config SENSORS_APPLESMC
317 Say Y here if you have an applicable laptop and want to experience
318 the awesome power of applesmc.
319
320config SENSORS_ARM_SCMI
321 tristate "ARM SCMI Sensors"
322 depends on ARM_SCMI_PROTOCOL
323 depends on THERMAL || !THERMAL_OF
324 help
325 This driver provides support for temperature, voltage, current
326 and power sensors available on SCMI based platforms. The actual
327 number and type of sensors exported depend on the platform.
328
329 This driver can also be built as a module. If so, the module
330 will be called scmi-hwmon.
331
332config SENSORS_ARM_SCPI
333 tristate "ARM SCPI Sensors"
334 depends on ARM_SCPI_PROTOCOL
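
As a hedged illustration of the help text above, a kernel configuration fragment that builds the bridge as the scmi-hwmon module (assuming the SCMI transport itself is built in) could read:

CONFIG_ARM_SCMI_PROTOCOL=y
CONFIG_SENSORS_ARM_SCMI=m
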
diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile
index f814b4ace138..e7d52a36e6c4 100644
--- a/drivers/hwmon/Makefile
+++ b/drivers/hwmon/Makefile
@@ -46,6 +46,7 @@ obj-$(CONFIG_SENSORS_ADT7462) += adt7462.o
46obj-$(CONFIG_SENSORS_ADT7470) += adt7470.o
47obj-$(CONFIG_SENSORS_ADT7475) += adt7475.o
48obj-$(CONFIG_SENSORS_APPLESMC) += applesmc.o
49obj-$(CONFIG_SENSORS_ARM_SCMI) += scmi-hwmon.o
50obj-$(CONFIG_SENSORS_ARM_SCPI) += scpi-hwmon.o
51obj-$(CONFIG_SENSORS_ASC7621) += asc7621.o
52obj-$(CONFIG_SENSORS_ASPEED) += aspeed-pwm-tacho.o
diff --git a/drivers/hwmon/scmi-hwmon.c b/drivers/hwmon/scmi-hwmon.c
new file mode 100644
index 000000000000..32e750373ced
--- /dev/null
+++ b/drivers/hwmon/scmi-hwmon.c
@@ -0,0 +1,225 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * System Control and Management Interface(SCMI) based hwmon sensor driver
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 * Sudeep Holla <sudeep.holla@arm.com>
7 */
8
9#include <linux/hwmon.h>
10#include <linux/module.h>
11#include <linux/scmi_protocol.h>
12#include <linux/slab.h>
13#include <linux/sysfs.h>
14#include <linux/thermal.h>
15
16struct scmi_sensors {
17 const struct scmi_handle *handle;
18 const struct scmi_sensor_info **info[hwmon_max];
19};
20
21static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
22 u32 attr, int channel, long *val)
23{
24 int ret;
25 u64 value;
26 const struct scmi_sensor_info *sensor;
27 struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
28 const struct scmi_handle *h = scmi_sensors->handle;
29
30 sensor = *(scmi_sensors->info[type] + channel);
31 ret = h->sensor_ops->reading_get(h, sensor->id, false, &value);
32 if (!ret)
33 *val = value;
34
35 return ret;
36}
37
38static int
39scmi_hwmon_read_string(struct device *dev, enum hwmon_sensor_types type,
40 u32 attr, int channel, const char **str)
41{
42 const struct scmi_sensor_info *sensor;
43 struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
44
45 sensor = *(scmi_sensors->info[type] + channel);
46 *str = sensor->name;
47
48 return 0;
49}
50
51static umode_t
52scmi_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type,
53 u32 attr, int channel)
54{
55 const struct scmi_sensor_info *sensor;
56 const struct scmi_sensors *scmi_sensors = drvdata;
57
58 sensor = *(scmi_sensors->info[type] + channel);
59 if (sensor && sensor->name)
60 return S_IRUGO;
61
62 return 0;
63}
64
65static const struct hwmon_ops scmi_hwmon_ops = {
66 .is_visible = scmi_hwmon_is_visible,
67 .read = scmi_hwmon_read,
68 .read_string = scmi_hwmon_read_string,
69};
70
71static struct hwmon_chip_info scmi_chip_info = {
72 .ops = &scmi_hwmon_ops,
73 .info = NULL,
74};
75
76static int scmi_hwmon_add_chan_info(struct hwmon_channel_info *scmi_hwmon_chan,
77 struct device *dev, int num,
78 enum hwmon_sensor_types type, u32 config)
79{
80 int i;
81 u32 *cfg = devm_kcalloc(dev, num + 1, sizeof(*cfg), GFP_KERNEL);
82
83 if (!cfg)
84 return -ENOMEM;
85
86 scmi_hwmon_chan->type = type;
87 scmi_hwmon_chan->config = cfg;
88 for (i = 0; i < num; i++, cfg++)
89 *cfg = config;
90
91 return 0;
92}
93
94static enum hwmon_sensor_types scmi_types[] = {
95 [TEMPERATURE_C] = hwmon_temp,
96 [VOLTAGE] = hwmon_in,
97 [CURRENT] = hwmon_curr,
98 [POWER] = hwmon_power,
99 [ENERGY] = hwmon_energy,
100};
101
102static u32 hwmon_attributes[] = {
103 [hwmon_chip] = HWMON_C_REGISTER_TZ,
104 [hwmon_temp] = HWMON_T_INPUT | HWMON_T_LABEL,
105 [hwmon_in] = HWMON_I_INPUT | HWMON_I_LABEL,
106 [hwmon_curr] = HWMON_C_INPUT | HWMON_C_LABEL,
107 [hwmon_power] = HWMON_P_INPUT | HWMON_P_LABEL,
108 [hwmon_energy] = HWMON_E_INPUT | HWMON_E_LABEL,
109};
110
111static int scmi_hwmon_probe(struct scmi_device *sdev)
112{
113 int i, idx;
114 u16 nr_sensors;
115 enum hwmon_sensor_types type;
116 struct scmi_sensors *scmi_sensors;
117 const struct scmi_sensor_info *sensor;
118 int nr_count[hwmon_max] = {0}, nr_types = 0;
119 const struct hwmon_chip_info *chip_info;
120 struct device *hwdev, *dev = &sdev->dev;
121 struct hwmon_channel_info *scmi_hwmon_chan;
122 const struct hwmon_channel_info **ptr_scmi_ci;
123 const struct scmi_handle *handle = sdev->handle;
124
125 if (!handle || !handle->sensor_ops)
126 return -ENODEV;
127
128 nr_sensors = handle->sensor_ops->count_get(handle);
129 if (!nr_sensors)
130 return -EIO;
131
132 scmi_sensors = devm_kzalloc(dev, sizeof(*scmi_sensors), GFP_KERNEL);
133 if (!scmi_sensors)
134 return -ENOMEM;
135
136 scmi_sensors->handle = handle;
137
138 for (i = 0; i < nr_sensors; i++) {
139 sensor = handle->sensor_ops->info_get(handle, i);
140 if (!sensor)
141			return -EINVAL;
142
143 switch (sensor->type) {
144 case TEMPERATURE_C:
145 case VOLTAGE:
146 case CURRENT:
147 case POWER:
148 case ENERGY:
149 type = scmi_types[sensor->type];
150 if (!nr_count[type])
151 nr_types++;
152 nr_count[type]++;
153 break;
154 }
155 }
156
157 if (nr_count[hwmon_temp])
158 nr_count[hwmon_chip]++, nr_types++;
159
160 scmi_hwmon_chan = devm_kcalloc(dev, nr_types, sizeof(*scmi_hwmon_chan),
161 GFP_KERNEL);
162 if (!scmi_hwmon_chan)
163 return -ENOMEM;
164
165 ptr_scmi_ci = devm_kcalloc(dev, nr_types + 1, sizeof(*ptr_scmi_ci),
166 GFP_KERNEL);
167 if (!ptr_scmi_ci)
168 return -ENOMEM;
169
170 scmi_chip_info.info = ptr_scmi_ci;
171 chip_info = &scmi_chip_info;
172
173 for (type = 0; type < hwmon_max && nr_count[type]; type++) {
174 scmi_hwmon_add_chan_info(scmi_hwmon_chan, dev, nr_count[type],
175 type, hwmon_attributes[type]);
176 *ptr_scmi_ci++ = scmi_hwmon_chan++;
177
178 scmi_sensors->info[type] =
179 devm_kcalloc(dev, nr_count[type],
180 sizeof(*scmi_sensors->info), GFP_KERNEL);
181 if (!scmi_sensors->info[type])
182 return -ENOMEM;
183 }
184
185 for (i = nr_sensors - 1; i >= 0 ; i--) {
186 sensor = handle->sensor_ops->info_get(handle, i);
187 if (!sensor)
188 continue;
189
190 switch (sensor->type) {
191 case TEMPERATURE_C:
192 case VOLTAGE:
193 case CURRENT:
194 case POWER:
195 case ENERGY:
196 type = scmi_types[sensor->type];
197 idx = --nr_count[type];
198 *(scmi_sensors->info[type] + idx) = sensor;
199 break;
200 }
201 }
202
203 hwdev = devm_hwmon_device_register_with_info(dev, "scmi_sensors",
204 scmi_sensors, chip_info,
205 NULL);
206
207 return PTR_ERR_OR_ZERO(hwdev);
208}
209
210static const struct scmi_device_id scmi_id_table[] = {
211 { SCMI_PROTOCOL_SENSOR },
212 { },
213};
214MODULE_DEVICE_TABLE(scmi, scmi_id_table);
215
216static struct scmi_driver scmi_hwmon_drv = {
217 .name = "scmi-hwmon",
218 .probe = scmi_hwmon_probe,
219 .id_table = scmi_id_table,
220};
221module_scmi_driver(scmi_hwmon_drv);
222
223MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
224MODULE_DESCRIPTION("ARM SCMI HWMON interface driver");
225MODULE_LICENSE("GPL v2");
diff --git a/include/linux/hwmon.h b/include/linux/hwmon.h
index ceb751987c40..e5fd2707b6df 100644
--- a/include/linux/hwmon.h
+++ b/include/linux/hwmon.h
@@ -29,6 +29,7 @@ enum hwmon_sensor_types {
29 hwmon_humidity,
30 hwmon_fan,
31 hwmon_pwm,
32 hwmon_max,
33};
34
35enum hwmon_chip_attributes {
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
new file mode 100644
index 000000000000..b458c87b866c
--- /dev/null
+++ b/include/linux/scmi_protocol.h
@@ -0,0 +1,277 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * SCMI Message Protocol driver header
4 *
5 * Copyright (C) 2018 ARM Ltd.
6 */
7#include <linux/device.h>
8#include <linux/types.h>
9
10#define SCMI_MAX_STR_SIZE 16
11#define SCMI_MAX_NUM_RATES 16
12
13/**
14 * struct scmi_revision_info - version information structure
15 *
16 * @major_ver: Major ABI version. Change here implies risk of backward
17 * compatibility break.
18 * @minor_ver: Minor ABI version. Change here implies new feature addition,
19 * or compatible change in ABI.
20 * @num_protocols: Number of protocols that are implemented, excluding the
21 * base protocol.
22 * @num_agents: Number of agents in the system.
23 * @impl_ver: A vendor-specific implementation version.
24 * @vendor_id: A vendor identifier(Null terminated ASCII string)
25 * @sub_vendor_id: A sub-vendor identifier(Null terminated ASCII string)
26 */
27struct scmi_revision_info {
28 u16 major_ver;
29 u16 minor_ver;
30 u8 num_protocols;
31 u8 num_agents;
32 u32 impl_ver;
33 char vendor_id[SCMI_MAX_STR_SIZE];
34 char sub_vendor_id[SCMI_MAX_STR_SIZE];
35};
36
37struct scmi_clock_info {
38 char name[SCMI_MAX_STR_SIZE];
39 bool rate_discrete;
40 union {
41 struct {
42 int num_rates;
43 u64 rates[SCMI_MAX_NUM_RATES];
44 } list;
45 struct {
46 u64 min_rate;
47 u64 max_rate;
48 u64 step_size;
49 } range;
50 };
51};
52
53struct scmi_handle;
54
55/**
56 * struct scmi_clk_ops - represents the various operations provided
57 * by SCMI Clock Protocol
58 *
59 * @count_get: get the count of clocks provided by SCMI
60 * @info_get: get the information of the specified clock
61 * @rate_get: request the current clock rate of a clock
62 * @rate_set: set the clock rate of a clock
63 * @enable: enables the specified clock
64 * @disable: disables the specified clock
65 */
66struct scmi_clk_ops {
67 int (*count_get)(const struct scmi_handle *handle);
68
69 const struct scmi_clock_info *(*info_get)
70 (const struct scmi_handle *handle, u32 clk_id);
71 int (*rate_get)(const struct scmi_handle *handle, u32 clk_id,
72 u64 *rate);
73 int (*rate_set)(const struct scmi_handle *handle, u32 clk_id,
74 u32 config, u64 rate);
75 int (*enable)(const struct scmi_handle *handle, u32 clk_id);
76 int (*disable)(const struct scmi_handle *handle, u32 clk_id);
77};
78
79/**
80 * struct scmi_perf_ops - represents the various operations provided
81 * by SCMI Performance Protocol
82 *
83 * @limits_set: sets limits on the performance level of a domain
84 * @limits_get: gets limits on the performance level of a domain
85 * @level_set: sets the performance level of a domain
86 * @level_get: gets the performance level of a domain
87 * @device_domain_id: gets the scmi domain id for a given device
88 * @get_transition_latency: gets the DVFS transition latency for a given device
89 * @add_opps_to_device: adds all the OPPs for a given device
90 * @freq_set: sets the frequency for a given device using sustained frequency
91 * to sustained performance level mapping
92 * @freq_get: gets the frequency for a given device using sustained frequency
93 * to sustained performance level mapping
94 */
95struct scmi_perf_ops {
96 int (*limits_set)(const struct scmi_handle *handle, u32 domain,
97 u32 max_perf, u32 min_perf);
98 int (*limits_get)(const struct scmi_handle *handle, u32 domain,
99 u32 *max_perf, u32 *min_perf);
100 int (*level_set)(const struct scmi_handle *handle, u32 domain,
101 u32 level, bool poll);
102 int (*level_get)(const struct scmi_handle *handle, u32 domain,
103 u32 *level, bool poll);
104 int (*device_domain_id)(struct device *dev);
105 int (*get_transition_latency)(const struct scmi_handle *handle,
106 struct device *dev);
107 int (*add_opps_to_device)(const struct scmi_handle *handle,
108 struct device *dev);
109 int (*freq_set)(const struct scmi_handle *handle, u32 domain,
110 unsigned long rate, bool poll);
111 int (*freq_get)(const struct scmi_handle *handle, u32 domain,
112 unsigned long *rate, bool poll);
113};
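/*
 * Editor's sketch (not part of the patch): typical use of the perf ops
 * by a cpufreq-style client. 'dev' is an assumed CPU device and the
 * target rate is illustrative; error handling is trimmed for brevity.
 */
static int example_perf_setup(const struct scmi_handle *handle,
			      struct device *dev)
{
	int domain = handle->perf_ops->device_domain_id(dev);

	if (domain < 0)
		return domain;

	/* Populate the OPP table from the firmware-described levels ... */
	handle->perf_ops->add_opps_to_device(handle, dev);

	/* ... then request 800 MHz, polling for completion */
	return handle->perf_ops->freq_set(handle, domain, 800000000UL, true);
}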
114
115/**
116 * struct scmi_power_ops - represents the various operations provided
117 * by SCMI Power Protocol
118 *
119 * @num_domains_get: get the count of power domains provided by SCMI
120 * @name_get: gets the name of a power domain
121 * @state_set: sets the power state of a power domain
122 * @state_get: gets the power state of a power domain
123 */
124struct scmi_power_ops {
125 int (*num_domains_get)(const struct scmi_handle *handle);
126 char *(*name_get)(const struct scmi_handle *handle, u32 domain);
127#define SCMI_POWER_STATE_TYPE_SHIFT 30
128#define SCMI_POWER_STATE_ID_MASK (BIT(28) - 1)
129#define SCMI_POWER_STATE_PARAM(type, id) \
130 ((((type) & BIT(0)) << SCMI_POWER_STATE_TYPE_SHIFT) | \
131 ((id) & SCMI_POWER_STATE_ID_MASK))
132#define SCMI_POWER_STATE_GENERIC_ON SCMI_POWER_STATE_PARAM(0, 0)
133#define SCMI_POWER_STATE_GENERIC_OFF SCMI_POWER_STATE_PARAM(1, 0)
134 int (*state_set)(const struct scmi_handle *handle, u32 domain,
135 u32 state);
136 int (*state_get)(const struct scmi_handle *handle, u32 domain,
137 u32 *state);
138};
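/*
 * Editor's sketch (not part of the patch): cycling a power domain using
 * the generic states defined above; the domain index is illustrative.
 */
static int example_power_cycle(const struct scmi_handle *handle, u32 domain)
{
	int ret;

	ret = handle->power_ops->state_set(handle, domain,
					   SCMI_POWER_STATE_GENERIC_OFF);
	if (ret)
		return ret;

	return handle->power_ops->state_set(handle, domain,
					    SCMI_POWER_STATE_GENERIC_ON);
}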
139
140struct scmi_sensor_info {
141 u32 id;
142 u8 type;
143 char name[SCMI_MAX_STR_SIZE];
144};
145
146/*
147 * Partial list from Distributed Management Task Force (DMTF) specification:
148 * DSP0249 (Platform Level Data Model specification)
149 */
150enum scmi_sensor_class {
151 NONE = 0x0,
152 TEMPERATURE_C = 0x2,
153 VOLTAGE = 0x5,
154 CURRENT = 0x6,
155 POWER = 0x7,
156 ENERGY = 0x8,
157};
158
159/**
160 * struct scmi_sensor_ops - represents the various operations provided
161 * by SCMI Sensor Protocol
162 *
163 * @count_get: get the count of sensors provided by SCMI
164 * @info_get: get the information of the specified sensor
165 * @configuration_set: control notifications on cross-over events for
166 * the trip-points
167 * @trip_point_set: selects and configures a trip-point of interest
168 * @reading_get: gets the current value of the sensor
169 */
170struct scmi_sensor_ops {
171 int (*count_get)(const struct scmi_handle *handle);
172
173 const struct scmi_sensor_info *(*info_get)
174 (const struct scmi_handle *handle, u32 sensor_id);
175 int (*configuration_set)(const struct scmi_handle *handle,
176 u32 sensor_id);
177 int (*trip_point_set)(const struct scmi_handle *handle, u32 sensor_id,
178 u8 trip_id, u64 trip_value);
179 int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id,
180 bool async, u64 *value);
181};
182
183/**
184 * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
185 *
186 * @dev: pointer to the SCMI device
187 * @version: pointer to the structure containing SCMI version information
188 * @power_ops: pointer to set of power protocol operations
189 * @perf_ops: pointer to set of performance protocol operations
190 * @clk_ops: pointer to set of clock protocol operations
191 * @sensor_ops: pointer to set of sensor protocol operations
192 */
193struct scmi_handle {
194 struct device *dev;
195 struct scmi_revision_info *version;
196 struct scmi_perf_ops *perf_ops;
197 struct scmi_clk_ops *clk_ops;
198 struct scmi_power_ops *power_ops;
199 struct scmi_sensor_ops *sensor_ops;
200 /* for protocol internal use */
201 void *perf_priv;
202 void *clk_priv;
203 void *power_priv;
204 void *sensor_priv;
205};
206
207enum scmi_std_protocol {
208 SCMI_PROTOCOL_BASE = 0x10,
209 SCMI_PROTOCOL_POWER = 0x11,
210 SCMI_PROTOCOL_SYSTEM = 0x12,
211 SCMI_PROTOCOL_PERF = 0x13,
212 SCMI_PROTOCOL_CLOCK = 0x14,
213 SCMI_PROTOCOL_SENSOR = 0x15,
214};
215
216struct scmi_device {
217 u32 id;
218 u8 protocol_id;
219 struct device dev;
220 struct scmi_handle *handle;
221};
222
223#define to_scmi_dev(d) container_of(d, struct scmi_device, dev)
224
225struct scmi_device *
226scmi_device_create(struct device_node *np, struct device *parent, int protocol);
227void scmi_device_destroy(struct scmi_device *scmi_dev);
228
229struct scmi_device_id {
230 u8 protocol_id;
231};
232
233struct scmi_driver {
234 const char *name;
235 int (*probe)(struct scmi_device *sdev);
236 void (*remove)(struct scmi_device *sdev);
237 const struct scmi_device_id *id_table;
238
239 struct device_driver driver;
240};
241
242#define to_scmi_driver(d) container_of(d, struct scmi_driver, driver)
243
244#ifdef CONFIG_ARM_SCMI_PROTOCOL
245int scmi_driver_register(struct scmi_driver *driver,
246 struct module *owner, const char *mod_name);
247void scmi_driver_unregister(struct scmi_driver *driver);
248#else
249static inline int
250scmi_driver_register(struct scmi_driver *driver, struct module *owner,
251 const char *mod_name)
252{
253 return -EINVAL;
254}
255
256static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
257#endif /* CONFIG_ARM_SCMI_PROTOCOL */
258
259#define scmi_register(driver) \
260 scmi_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
261#define scmi_unregister(driver) \
262 scmi_driver_unregister(driver)
263
264/**
265 * module_scmi_driver() - Helper macro for registering a scmi driver
266 * @__scmi_driver: scmi_driver structure
267 *
268 * Helper macro for scmi drivers to set up proper module init / exit
269 * functions. Replaces module_init() and module_exit() and keeps people from
270 * printing pointless things to the kernel log when their driver is loaded.
271 */
272#define module_scmi_driver(__scmi_driver) \
273 module_driver(__scmi_driver, scmi_register, scmi_unregister)
274
275typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
276int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
277void scmi_protocol_unregister(int protocol_id);
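
Finally, as an editor's sketch rather than anything in the series, the minimal shape of a client built on this header mirrors the hwmon driver above: match on a protocol id, pick up the handle in probe, and let module_scmi_driver() generate the module plumbing. All names below are illustrative, and the probe body merely reports the current rate of clock 0 through the clock ops.

#include <linux/module.h>
#include <linux/scmi_protocol.h>

static int example_clk_probe(struct scmi_device *sdev)
{
	u64 rate;
	const struct scmi_handle *handle = sdev->handle;

	if (!handle || !handle->clk_ops)
		return -ENODEV;

	/* Query and report the current rate of clock 0 */
	if (!handle->clk_ops->rate_get(handle, 0, &rate))
		dev_info(&sdev->dev, "clk0 runs at %llu Hz\n",
			 (unsigned long long)rate);

	return 0;
}

static const struct scmi_device_id example_id_table[] = {
	{ SCMI_PROTOCOL_CLOCK },
	{ },
};
MODULE_DEVICE_TABLE(scmi, example_id_table);

static struct scmi_driver example_clk_driver = {
	.name = "scmi-clk-example",
	.probe = example_clk_probe,
	.id_table = example_id_table,
};
module_scmi_driver(example_clk_driver);

MODULE_LICENSE("GPL v2");
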