author:    Linus Torvalds <torvalds@linux-foundation.org>  2018-02-01 13:31:17 -0500
committer: Linus Torvalds <torvalds@linux-foundation.org>  2018-02-01 13:31:17 -0500
commit:    f6cff79f1d122f78a4b35bf4b2f0112afcd89ea4 (patch)
tree:      cf3a38576f9adbb3860982c25f72aebed2bb541a
parent:    47fcc0360cfb3fe82e4daddacad3c1cd80b0b75d (diff)
parent:    9ff6576e124b1227c27c1da43fe5f8ee908263e0 (diff)

Merge tag 'char-misc-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc driver updates from Greg KH:
"Here is the big pull request for char/misc drivers for 4.16-rc1.
There's a lot of stuff in here. Three new driver subsystems were added
for various types of hardware busses:
- siox
- slimbus
- soundwire
as well as a new vboxguest subsystem for the VirtualBox hypervisor
drivers.
There's also big updates from the FPGA subsystem, lots of Android
binder fixes, the usual handful of hyper-v updates, and lots of other
smaller driver updates.
All of these have been in linux-next for a long time, with no reported
issues"
* tag 'char-misc-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (155 commits)
char: lp: use true or false for boolean values
android: binder: use VM_ALLOC to get vm area
android: binder: Use true and false for boolean values
lkdtm: fix handle_irq_event symbol for INT_HW_IRQ_EN
EISA: Delete error message for a failed memory allocation in eisa_probe()
EISA: Whitespace cleanup
misc: remove AVR32 dependencies
virt: vbox: Add error mapping for VERR_INVALID_NAME and VERR_NO_MORE_FILES
soundwire: Fix a signedness bug
uio_hv_generic: fix new type mismatch warnings
uio_hv_generic: fix type mismatch warnings
auxdisplay: img-ascii-lcd: add missing MODULE_DESCRIPTION/AUTHOR/LICENSE
uio_hv_generic: add rescind support
uio_hv_generic: check that host supports monitor page
uio_hv_generic: create send and receive buffers
uio: document uio_hv_generic regions
doc: fix documentation about uio_hv_generic
vmbus: add monitor_id and subchannel_id to sysfs per channel
vmbus: fix ABI documentation
uio_hv_generic: use ISR callback method
...
150 files changed, 14288 insertions, 1142 deletions
diff --git a/Documentation/ABI/stable/sysfs-bus-vmbus b/Documentation/ABI/stable/sysfs-bus-vmbus
index d4077cc60d55..e46be65d0e1d 100644
--- a/Documentation/ABI/stable/sysfs-bus-vmbus
+++ b/Documentation/ABI/stable/sysfs-bus-vmbus
@@ -42,72 +42,93 @@ Contact: K. Y. Srinivasan <kys@microsoft.com>
 Description: The 16 bit vendor ID of the device
 Users: tools/hv/lsvmbus and user level RDMA libraries

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN
+Date: September 2017
+KernelVersion: 4.14
+Contact: Stephen Hemminger <sthemmin@microsoft.com>
+Description: Directory for per-channel information
+ NN is the VMBUS relid associated with the channel.
+
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
 Description: VCPU (sub)channel is affinitized to
-Users: tools/hv/lsvmbus and other debuggig tools
+Users: tools/hv/lsvmbus and other debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
 Description: VCPU (sub)channel is affinitized to
-Users: tools/hv/lsvmbus and other debuggig tools
+Users: tools/hv/lsvmbus and other debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/in_mask
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/in_mask
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
-Description: Inbound channel signaling state
+Description: Host to guest channel interrupt mask
 Users: Debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/latency
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/latency
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
 Description: Channel signaling latency
 Users: Debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/out_mask
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/out_mask
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
-Description: Outbound channel signaling state
+Description: Guest to host channel interrupt mask
 Users: Debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/pending
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/pending
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
 Description: Channel interrupt pending state
 Users: Debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/read_avail
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/read_avail
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
-Description: Bytes availabble to read
+Description: Bytes available to read
 Users: Debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/write_avail
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/write_avail
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
-Description: Bytes availabble to write
+Description: Bytes available to write
 Users: Debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/events
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/events
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
 Description: Number of times we have signaled the host
 Users: Debugging tools

-What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/interrupts
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/interrupts
 Date: September 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
 Description: Number of times we have taken an interrupt (incoming)
 Users: Debugging tools
+
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/subchannel_id
+Date: January 2018
+KernelVersion: 4.16
+Contact: Stephen Hemminger <sthemmin@microsoft.com>
+Description: Subchannel ID associated with VMBUS channel
+Users: Debugging tools and userspace drivers
+
+What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/monitor_id
+Date: January 2018
+KernelVersion: 4.16
+Contact: Stephen Hemminger <sthemmin@microsoft.com>
+Description: Monitor bit associated with channel
+Users: Debugging tools and userspace drivers
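As an editorial aside: the per-channel attributes above are plain integer files, so a debugging tool (in the spirit of tools/hv/lsvmbus) can read them with ordinary file I/O. A minimal Python sketch, assuming a channel directory path like `/sys/bus/vmbus/devices/<dev>/channels/<relid>`; the function name and attribute handling are illustrative only, not part of the ABI:

```python
from pathlib import Path

# Attributes documented above; subchannel_id/monitor_id only exist on 4.16+.
CHANNEL_ATTRS = ("cpu", "in_mask", "out_mask", "pending",
                 "read_avail", "write_avail", "events", "interrupts",
                 "subchannel_id", "monitor_id")

def read_channel_attrs(chan_dir, attrs=CHANNEL_ATTRS):
    """Return {attr: int} for one channel directory, e.g.
    /sys/bus/vmbus/devices/<dev>/channels/<relid>.

    Attributes absent on the running kernel are skipped rather than
    treated as errors, so the same code works across kernel versions.
    """
    chan = Path(chan_dir)
    out = {}
    for name in attrs:
        attr = chan / name
        if attr.is_file():
            out[name] = int(attr.read_text().strip())
    return out
```

A caller would typically iterate every directory matching `/sys/bus/vmbus/devices/*/channels/*` and print the resulting dictionaries.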
diff --git a/Documentation/ABI/testing/sysfs-bus-siox b/Documentation/ABI/testing/sysfs-bus-siox
new file mode 100644
index 000000000000..fed7c3765a4e
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-siox
@@ -0,0 +1,87 @@
+What: /sys/bus/siox/devices/siox-X/active
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ On reading represents the current state of the bus. If it
+ contains a "0" the bus is stopped and connected devices are
+ expected to not do anything because their watchdog triggered.
+ When the file contains a "1" the bus is operated and periodically
+ does a push-pull cycle to write and read data from the
+ connected devices.
+ When writing a "0" or "1" the bus moves to the described state.
+
+What: /sys/bus/siox/devices/siox-X/device_add
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Write-only file. Write
+
+ <type> <inbytes> <outbytes> <statustype>
+
+ to add a new device dynamically. <type> is the name that is used to match
+ to a driver (similar to the platform bus). <inbytes> and <outbytes> define
+ the length of the input and output shift registers in bytes, respectively.
+ <statustype> defines the 4 bit device type that is checked to identify
+ connection problems.
+ The new device is added to the end of the existing chain.
+
+What: /sys/bus/siox/devices/siox-X/device_remove
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Write-only file. A single write removes the last device in the siox chain.
+
+What: /sys/bus/siox/devices/siox-X/poll_interval_ns
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Defines the interval between two poll cycles in nanoseconds.
+ Note this is rounded to jiffies on writing. On reading the current value
+ is returned.
+
+What: /sys/bus/siox/devices/siox-X-Y/connected
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Read-only value. "0" means the Yth device on siox bus X isn't "connected",
+ i.e. communication with it is not ensured. "1" signals a working connection.
+
+What: /sys/bus/siox/devices/siox-X-Y/inbytes
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Read-only value reporting the inbytes value provided to siox-X/device_add.
+
+What: /sys/bus/siox/devices/siox-X-Y/status_errors
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Counts the number of time intervals when the read status byte doesn't yield the
+ expected value.
+
+What: /sys/bus/siox/devices/siox-X-Y/type
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Read-only value reporting the type value provided to siox-X/device_add.
+
+What: /sys/bus/siox/devices/siox-X-Y/watchdog
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Read-only value reporting if the watchdog of the siox device is
+ active. "0" means the watchdog is not active and the device is expected to
+ be operational. "1" means the watchdog keeps the device in reset.
+
+What: /sys/bus/siox/devices/siox-X-Y/watchdog_errors
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Read-only value reporting the number of time intervals when the
+ watchdog was active.
+
+What: /sys/bus/siox/devices/siox-X-Y/outbytes
+KernelVersion: 4.16
+Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+ Read-only value reporting the outbytes value provided to siox-X/device_add.
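The `device_add` record format above can be composed mechanically. A hedged sketch: `siox_device_add` and the `siox-0` bus directory are hypothetical names, and the range check on `<statustype>` follows the 4-bit description above:

```python
import os

def siox_device_add(bus_dir, dev_type, inbytes, outbytes, statustype):
    """Compose and write the '<type> <inbytes> <outbytes> <statustype>'
    record to <bus_dir>/device_add, as documented above.

    On a real system bus_dir would be something like
    /sys/bus/siox/devices/siox-0 (hypothetical path); statustype is the
    4-bit device type used to detect connection problems.
    """
    if not 0 <= statustype <= 0xF:
        raise ValueError("statustype is a 4-bit value (0..15)")
    record = "%s %d %d %d" % (dev_type, inbytes, outbytes, statustype)
    with open(os.path.join(bus_dir, "device_add"), "w") as f:
        f.write(record)
    return record
```

The new device then appears at the end of the chain as `siox-X-Y` with the read-only `type`, `inbytes`, and `outbytes` attributes echoing the values written here.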
diff --git a/Documentation/devicetree/bindings/eeprom/at25.txt b/Documentation/devicetree/bindings/eeprom/at25.txt
index e823d90b802f..b3bde97dc199 100644
--- a/Documentation/devicetree/bindings/eeprom/at25.txt
+++ b/Documentation/devicetree/bindings/eeprom/at25.txt
@@ -11,7 +11,9 @@ Required properties:
 - spi-max-frequency : max spi frequency to use
 - pagesize : size of the eeprom page
 - size : total eeprom size in bytes
-- address-width : number of address bits (one of 8, 16, or 24)
+- address-width : number of address bits (one of 8, 9, 16, or 24).
+  For 9 bits, the MSB of the address is sent as bit 3 of the instruction
+  byte, before the address byte.

 Optional properties:
 - spi-cpha : SPI shifted clock phase, as per spi-bus bindings.
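The 9-bit case added above places address bit 8 in bit 3 of the instruction byte. A small sketch of that encoding; the 0x03 READ opcode is an assumption typical of 25xx-family SPI EEPROMs, not something this binding states:

```python
def at25_9bit_read_op(addr, read_opcode=0x03):
    """Build (instruction_byte, address_byte) for a 9-bit-address part.

    Per the binding text above, the address MSB (bit 8) travels as
    bit 3 of the instruction byte; the low 8 bits follow as the
    address byte.
    """
    if not 0 <= addr < 512:
        raise ValueError("9-bit address out of range")
    instr = read_opcode | (((addr >> 8) & 1) << 3)
    return instr, addr & 0xFF
```

So reading address 0x1FF would send instruction byte 0x0B (READ with bit 3 set) followed by address byte 0xFF.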
diff --git a/Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt b/Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt
index 60bec4782806..265bdb7dc8aa 100644
--- a/Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt
+++ b/Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt
@@ -6,12 +6,17 @@ Required properties:
 - "rockchip,rk3188-efuse" - for RK3188 SoCs.
 - "rockchip,rk3228-efuse" - for RK3228 SoCs.
 - "rockchip,rk3288-efuse" - for RK3288 SoCs.
+- "rockchip,rk3328-efuse" - for RK3328 SoCs.
 - "rockchip,rk3368-efuse" - for RK3368 SoCs.
 - "rockchip,rk3399-efuse" - for RK3399 SoCs.
 - reg: Should contain the registers location and exact eFuse size
 - clocks: Should be the clock id of eFuse
 - clock-names: Should be "pclk_efuse"

+Optional properties:
+- rockchip,efuse-size: Should be the exact eFuse size in bytes; if this
+  property is defined, the eFuse size in the <reg> property is ignored.
+
 Deprecated properties:
 - compatible: "rockchip,rockchip-efuse"
   Old efuse compatible value compatible to rk3066a, rk3188 and rk3288
diff --git a/Documentation/devicetree/bindings/siox/eckelmann,siox-gpio.txt b/Documentation/devicetree/bindings/siox/eckelmann,siox-gpio.txt
new file mode 100644
index 000000000000..55259cf39c25
--- /dev/null
+++ b/Documentation/devicetree/bindings/siox/eckelmann,siox-gpio.txt
@@ -0,0 +1,19 @@
+Eckelmann SIOX GPIO bus
+
+Required properties:
+ - compatible : "eckelmann,siox-gpio"
+ - din-gpios, dout-gpios, dclk-gpios, dld-gpios: reference the GPIOs used for
+   the corresponding bus signals.
+
+Example:
+
+	siox {
+		compatible = "eckelmann,siox-gpio";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_siox>;
+
+		din-gpios = <&gpio6 11 0>;
+		dout-gpios = <&gpio6 8 0>;
+		dclk-gpios = <&gpio6 9 0>;
+		dld-gpios = <&gpio6 10 0>;
+	};
diff --git a/Documentation/devicetree/bindings/slimbus/bus.txt b/Documentation/devicetree/bindings/slimbus/bus.txt
new file mode 100644
index 000000000000..52fa6426388c
--- /dev/null
+++ b/Documentation/devicetree/bindings/slimbus/bus.txt
@@ -0,0 +1,50 @@
+SLIM (Serial Low Power Interchip Media Bus) bus
+
+SLIMbus is a 2-wire bus, and is used to communicate with peripheral
+components like audio-codecs.
+
+Required property for SLIMbus controller node:
+- compatible - name of SLIMbus controller
+
+Child nodes:
+Every SLIMbus controller node can contain zero or more child nodes
+representing slave devices on the bus. Every SLIMbus slave device is
+uniquely determined by the enumeration address containing 4 fields:
+Manufacturer ID, Product code, Device index, and Instance value for
+the device.
+If a child node is not present, the device is instantiated after device
+discovery (i.e. after the slave device reports itself present).
+
+In some cases it may be necessary to describe non-probeable device
+details such as non-standard ways of powering up a device. In
+such cases, child nodes for those devices will be present as
+slaves of the SLIMbus controller, as detailed below.
+
+Required properties for a SLIMbus child node if it is present:
+- reg - Should be ('Device index', 'Instance ID') from the SLIMbus
+	Enumeration Address.
+	Device Index uniquely identifies multiple Devices within
+	a single Component.
+	Instance ID is for the cases where multiple Devices of the
+	same type or Class are attached to the bus.
+
+- compatible - "slimMID,PID". The textual representation of Manufacturer ID,
+	Product Code, shall be in lower case hexadecimal with leading
+	zeroes suppressed.
+
+SLIMbus example for Qualcomm's SLIMbus manager component:
+
+	slim@28080000 {
+		compatible = "qcom,apq8064-slim", "qcom,slim";
+		reg = <0x28080000 0x2000>;
+		interrupts = <0 33 0>;
+		clocks = <&lcc SLIMBUS_SRC>, <&lcc AUDIO_SLIMBUS_CLK>;
+		clock-names = "iface", "core";
+		#address-cells = <2>;
+		#size-cells = <0>;
+
+		codec: wcd9310@1,0 {
+			compatible = "slim217,60";
+			reg = <1 0>;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/slimbus/slim-qcom-ctrl.txt b/Documentation/devicetree/bindings/slimbus/slim-qcom-ctrl.txt
new file mode 100644
index 000000000000..922dcb8ff24a
--- /dev/null
+++ b/Documentation/devicetree/bindings/slimbus/slim-qcom-ctrl.txt
@@ -0,0 +1,39 @@
+Qualcomm SLIMbus controller
+This controller is used when the application processor's driver controls the
+SLIMbus master component.
+
+Required properties:
+
+ - #address-cells - refer to Documentation/devicetree/bindings/slimbus/bus.txt
+ - #size-cells - refer to Documentation/devicetree/bindings/slimbus/bus.txt
+
+ - reg : Offset and length of the register region(s) for the device
+ - reg-names : Register region name(s) referenced in reg above
+	Required register resource entries are:
+	"ctrl": Physical address of controller register blocks
+	"slew": required for the "qcom,apq8064-slim" SoC.
+ - compatible : should be "qcom,<SOC-NAME>-slim" for a SoC-specific compatible,
+	followed by "qcom,slim" as a fallback.
+ - interrupts : Interrupt number used by this controller
+ - clocks : Interface and core clocks used by this SLIMbus controller
+ - clock-names : Required clock-name entries are:
+	"iface" : Interface clock for this controller
+	"core" : Clock for the controller core's BAM
+
+Example:
+
+	slim@28080000 {
+		compatible = "qcom,apq8064-slim", "qcom,slim";
+		reg = <0x28080000 0x2000>, <0x80207C 4>;
+		reg-names = "ctrl", "slew";
+		interrupts = <0 33 0>;
+		clocks = <&lcc SLIMBUS_SRC>, <&lcc AUDIO_SLIMBUS_CLK>;
+		clock-names = "iface", "core";
+		#address-cells = <2>;
+		#size-cells = <0>;
+
+		wcd9310: audio-codec@1,0 {
+			compatible = "slim217,60";
+			reg = <1 0>;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
index f776fb804a8c..6ec1a028a3a8 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
+++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -97,6 +97,7 @@ dptechnics DPTechnics
 dragino Dragino Technology Co., Limited
 ea Embedded Artists AB
 ebv EBV Elektronik
+eckelmann Eckelmann AG
 edt Emerging Display Technologies
 eeti eGalax_eMPIA Technology Inc
 elan Elan Microelectronic Corp.
diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
index d17a9876b473..e9b41b1634f3 100644
--- a/Documentation/driver-api/index.rst
+++ b/Documentation/driver-api/index.rst
@@ -47,6 +47,8 @@ available subsections can be seen below.
 gpio
 misc_devices
 dmaengine/index
+slimbus
+soundwire/index

 .. only:: subproject and html

diff --git a/Documentation/driver-api/slimbus.rst b/Documentation/driver-api/slimbus.rst
new file mode 100644
index 000000000000..7555ecd538de
--- /dev/null
+++ b/Documentation/driver-api/slimbus.rst
@@ -0,0 +1,127 @@
+============================
+Linux kernel SLIMbus support
+============================
+
+Overview
+========
+
+What is SLIMbus?
+----------------
+SLIMbus (Serial Low Power Interchip Media Bus) is a specification developed by
+the MIPI (Mobile Industry Processor Interface) Alliance. The bus uses a
+master/slave configuration, and is a 2-wire multi-drop implementation (clock,
+and data).
+
+Currently, SLIMbus is used to interface between application processors of SoCs
+(System-on-Chip) and peripheral components (typically a codec). SLIMbus uses
+Time-Division-Multiplexing to accommodate multiple data channels, and
+a control channel.
+
+The control channel is used for various control functions such as bus
+management, configuration and status updates. These messages can be unicast (e.g.
+reading/writing device specific values), or multicast (e.g. a data channel
+reconfiguration sequence is a broadcast message announced to all devices).
+
+A data channel is used for data-transfer between 2 SLIMbus devices. A data
+channel uses dedicated ports on the device.
+
+Hardware description:
+---------------------
+The SLIMbus specification defines different types of device classifications
+based on their capabilities.
+A manager device is responsible for enumeration, configuration, and dynamic
+channel allocation. Every bus has 1 active manager.
+
+A generic device is a device providing application functionality (e.g. codec).
+
+A framer device is responsible for clocking the bus, and transmitting frame-sync
+and framing information on the bus.
+
+Each SLIMbus component has an interface device for monitoring the physical layer.
+
+Typically each SoC contains a SLIMbus component having 1 manager, 1 framer device,
+1 generic device (for data channel support), and 1 interface device.
+An external peripheral SLIMbus component usually has 1 generic device (for
+functionality/data channel support), and an associated interface device.
+The generic device's registers are mapped as 'value elements' so that they can
+be written/read using the SLIMbus control channel, exchanging control/status type
+of information.
+In case there are multiple framer devices on the same bus, the manager device is
+responsible for selecting the active framer for clocking the bus.
+
+Per the specification, SLIMbus uses "clock gears" to do power management based on
+current frequency and bandwidth requirements. There are 10 clock gears and each
+gear changes the SLIMbus frequency to be twice its previous gear.
+
+Each device has a 6-byte enumeration-address and the manager assigns every
+device a 1-byte logical address after the devices report presence on the
+bus.
+
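The clock-gear rule above (10 gears, each running at twice the previous gear's frequency) reduces to a power of two. A tiny sketch of the relative scaling; the absolute gear-1 frequency is controller specific and deliberately not assumed here:

```python
def slimbus_gear_multiplier(gear):
    """Relative bus frequency for a SLIMbus clock gear.

    Per the text there are 10 gears and each doubles the previous one,
    so gear g runs at 2**(g-1) times the gear-1 frequency.
    """
    if not 1 <= gear <= 10:
        raise ValueError("SLIMbus defines clock gears 1..10")
    return 1 << (gear - 1)
```

So stepping from the lowest to the highest gear scales the bus frequency by a factor of 512, which is what lets the bus trade bandwidth for power.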
+Software description:
+---------------------
+There are 2 types of SLIMbus drivers:
+
+slim_controller represents a 'controller' for SLIMbus. This driver should
+implement duties needed by the SoC (manager device, associated
+interface device for monitoring the layers and reporting errors, default
+framer device).
+
+slim_device represents the 'generic device/component' for SLIMbus, and a
+slim_driver should implement the driver for that slim_device.
+
+Device notifications to the driver:
+-----------------------------------
+Since SLIMbus devices have mechanisms for reporting their presence, the
+framework allows drivers to bind when corresponding devices report their
+presence on the bus.
+However, it is possible that the driver needs to be probed
+first so that it can enable the corresponding SLIMbus device (e.g. power it up
+and/or take it out of reset). To support that behavior, the framework allows
+drivers to probe first as well (e.g. using the standard DeviceTree compatibility
+field).
+This creates the necessity for the driver to know when the device is functional
+(i.e. reported present). The device_up callback is used for that reason when the
+device reports present and is assigned a logical address by the controller.
+
+Similarly, SLIMbus devices 'report absent' when they go down. A 'device_down'
+callback notifies the driver when the device reports absent and its logical
+address assignment is invalidated by the controller.
+
+Another notification, "boot_device", is used to notify the slim_driver when the
+controller resets the bus. This notification allows the driver to take necessary
+steps to boot the device so that it's functional after the bus has been reset.
+
+Driver and Controller APIs:
+---------------------------
+.. kernel-doc:: include/linux/slimbus.h
+   :internal:
+
+.. kernel-doc:: drivers/slimbus/slimbus.h
+   :internal:
+
+.. kernel-doc:: drivers/slimbus/core.c
+   :export:
+
+Clock-pause:
+------------
+SLIMbus mandates that a reconfiguration sequence (known as clock-pause) be
+broadcast to all active devices on the bus before the bus can enter low-power
+mode. The controller uses this sequence when it decides to enter low-power mode
+so that corresponding clocks and/or power-rails can be turned off to save power.
+Clock-pause is exited by waking up the framer device (if the controller driver
+initiates exiting low power mode), or by toggling the data line (if a slave
+device wants to initiate it).
+
+Clock-pause APIs:
+~~~~~~~~~~~~~~~~~
+.. kernel-doc:: drivers/slimbus/sched.c
+   :export:
+
+Messaging:
+----------
+The framework supports regmap and read/write APIs to exchange control
+information with a SLIMbus device. APIs can be synchronous or asynchronous.
+The header file <linux/slimbus.h> has more documentation about messaging APIs.
+
+Messaging APIs:
+~~~~~~~~~~~~~~~
+.. kernel-doc:: drivers/slimbus/messaging.c
+   :export:
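The notification flow described above (device_up on present plus logical-address assignment, device_down on absent plus invalidation) can be sketched as a toy model. These classes are illustrative Python stand-ins for the behavior the text describes, not the kernel's real slim_device/slim_driver API:

```python
class ToySlimDevice:
    """Toy slave: identified by a 6-byte enumeration address, gets a
    1-byte logical address from the manager when it reports present."""
    def __init__(self, enum_addr):
        self.enum_addr = enum_addr   # 6-byte enumeration address
        self.laddr = None            # logical address, assigned by manager

class ToySlimDriver:
    """Records the notifications described above as they arrive."""
    def __init__(self):
        self.events = []
    def device_up(self, dev):        # device reported present, laddr valid
        self.events.append(("up", dev.laddr))
    def device_down(self, dev):      # device reported absent
        self.events.append(("down", dev.laddr))
    def boot_device(self, dev):      # controller reset the bus
        self.events.append(("boot", dev.laddr))

class ToySlimController:
    def __init__(self):
        self.next_laddr = 0
        self.bindings = {}           # enum_addr -> bound driver

    def register_driver(self, enum_addr, drv):
        self.bindings[enum_addr] = drv

    def report_present(self, dev):
        dev.laddr = self.next_laddr  # assign the 1-byte logical address
        self.next_laddr += 1
        drv = self.bindings.get(dev.enum_addr)
        if drv:
            drv.device_up(dev)       # driver learns the device is functional

    def report_absent(self, dev):
        drv = self.bindings.get(dev.enum_addr)
        if drv:
            drv.device_down(dev)
        dev.laddr = None             # controller invalidates the address
```

The point of the model: the driver never polls for the device; the controller drives both directions of the lifecycle through the callbacks.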
diff --git a/Documentation/driver-api/soundwire/index.rst b/Documentation/driver-api/soundwire/index.rst
new file mode 100644
index 000000000000..647e94654752
--- /dev/null
+++ b/Documentation/driver-api/soundwire/index.rst
@@ -0,0 +1,15 @@
+=======================
+SoundWire Documentation
+=======================
+
+.. toctree::
+   :maxdepth: 1
+
+   summary
+
+.. only:: subproject
+
+   Indices
+   =======
+
+   * :ref:`genindex`
diff --git a/Documentation/driver-api/soundwire/summary.rst b/Documentation/driver-api/soundwire/summary.rst
new file mode 100644
index 000000000000..8193125a2bfb
--- /dev/null
+++ b/Documentation/driver-api/soundwire/summary.rst
@@ -0,0 +1,207 @@
1 | =========================== | ||
2 | SoundWire Subsystem Summary | ||
3 | =========================== | ||
4 | |||
5 | SoundWire is a new interface ratified in 2015 by the MIPI Alliance. | ||
6 | SoundWire is used for transporting data typically related to audio | ||
7 | functions. SoundWire interface is optimized to integrate audio devices in | ||
8 | mobile or mobile inspired systems. | ||
9 | |||
10 | SoundWire is a 2-pin multi-drop interface with data and clock line. It | ||
11 | facilitates development of low cost, efficient, high performance systems. | ||
12 | Broad level key features of SoundWire interface include: | ||
13 | |||
14 | (1) Transporting all of payload data channels, control information, and setup | ||
15 | commands over a single two-pin interface. | ||
16 | |||
17 | (2) Lower clock frequency, and hence lower power consumption, by use of DDR | ||
18 | (Dual Data Rate) data transmission. | ||
19 | |||
20 | (3) Clock scaling and optional multiple data lanes to give wide flexibility | ||
21 | in data rate to match system requirements. | ||
22 | |||
23 | (4) Device status monitoring, including interrupt-style alerts to the Master. | ||
24 | |||
25 | The SoundWire protocol supports up to eleven Slave interfaces. All the | ||
26 | interfaces share a common Bus consisting of the data and clock lines. Each of the | ||
27 | Slaves can support up to 14 Data Ports, 13 of which are dedicated to audio | ||
28 | transport. Data Port0 is dedicated to transport of Bulk control information; | ||
29 | each of the audio Data Ports (1..13) can support up to 8 Channels in | ||
30 | transmit or receive mode (typically fixed direction, but configurable | ||
31 | direction is enabled by the specification). Bandwidth restrictions to | ||
32 | ~19.2..24.576 Mbit/s do not, however, allow all 11*13*8 channels to be | ||
33 | transmitted simultaneously. | ||
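The arithmetic behind the bandwidth caveat above can be checked directly. The sample rate and width below are purely illustrative assumptions (48 kHz, 16-bit PCM), not values taken from the specification:

```c
#include <stdint.h>

/* Aggregate payload in bits/s if every possible channel streamed at once.
 * Assumed parameters (48 kHz, 16-bit) are illustrative only. */
uint64_t sdw_payload_bps(unsigned slaves, unsigned audio_ports,
                         unsigned channels, unsigned rate_hz,
                         unsigned bits_per_sample)
{
	return (uint64_t)slaves * audio_ports * channels * rate_hz *
	       bits_per_sample;
}

/* 11 Slaves * 13 audio ports * 8 channels at 48 kHz/16-bit comes to about
 * 878 Mbit/s, far beyond the ~24.576 Mbit/s the bus clock allows. */
```

Even at modest PCM rates the full port/channel matrix exceeds the bus budget by more than an order of magnitude, which is why stream configuration has to budget bandwidth rather than assume all channels can run at once.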
34 | |||
35 | The figure below shows an example of connectivity between a SoundWire Master and | ||
36 | two Slave devices. :: | ||
37 | |||
38 | +---------------+ +---------------+ | ||
39 | | | Clock Signal | | | ||
40 | | Master |-------+-------------------------------| Slave | | ||
41 | | Interface | | Data Signal | Interface 1 | | ||
42 | | |-------|-------+-----------------------| | | ||
43 | +---------------+ | | +---------------+ | ||
44 | | | | ||
45 | | | | ||
46 | | | | ||
47 | +--+-------+--+ | ||
48 | | | | ||
49 | | Slave | | ||
50 | | Interface 2 | | ||
51 | | | | ||
52 | +-------------+ | ||
53 | |||
54 | |||
55 | Terminology | ||
56 | =========== | ||
57 | |||
58 | The MIPI SoundWire specification uses the term 'device' to refer to a Master | ||
59 | or Slave interface, which of course can be confusing. In this summary and | ||
60 | code we use the term interface only to refer to the hardware. We follow the | ||
61 | Linux device model by mapping each Slave interface connected on the bus as a | ||
62 | device managed by a specific driver. The Linux SoundWire subsystem provides | ||
63 | a framework to implement a SoundWire Slave driver with an API allowing | ||
64 | 3rd-party vendors to enable implementation-defined functionality while | ||
65 | common setup/configuration tasks are handled by the bus. | ||
66 | |||
67 | Bus: | ||
68 | Implements SoundWire Linux Bus which handles the SoundWire protocol. | ||
69 | Programs all the MIPI-defined Slave registers. Represents a SoundWire | ||
70 | Master. Multiple instances of Bus may be present in a system. | ||
71 | |||
72 | Slave: | ||
73 | Registers as SoundWire Slave device (Linux Device). Multiple Slave devices | ||
74 | can register to a Bus instance. | ||
75 | |||
76 | Slave driver: | ||
77 | Driver controlling the Slave device. MIPI-specified registers are controlled | ||
78 | directly by the Bus (and transmitted through the Master driver/interface). | ||
79 | Any implementation-defined Slave register is controlled by Slave driver. In | ||
80 | practice, it is expected that the Slave driver relies on regmap and does not | ||
81 | request direct register access. | ||
82 | |||
83 | Programming interfaces (SoundWire Master interface Driver) | ||
84 | ========================================================== | ||
85 | |||
86 | SoundWire Bus supports programming interfaces for the SoundWire Master | ||
87 | implementation and SoundWire Slave devices. All the code uses the "sdw" | ||
88 | prefix commonly used by SoC designers and 3rd party vendors. | ||
89 | |||
90 | Each of the SoundWire Master interfaces needs to be registered to the Bus. | ||
91 | The Bus implements an API to read standard Master MIPI properties and also | ||
92 | provides a callback in Master ops for the Master driver to implement its own | ||
93 | functions that provide capabilities information. DT support is not implemented at this | ||
94 | time but should be trivial to add since capabilities are enabled with the | ||
95 | ``device_property_`` API. | ||
96 | |||
97 | The Master interface, along with its capabilities, is registered based on a | ||
98 | board file, DT or ACPI. | ||
99 | |||
100 | Following is the Bus API to register the SoundWire Bus: | ||
101 | |||
102 | .. code-block:: c | ||
103 | |||
104 | int sdw_add_bus_master(struct sdw_bus *bus) | ||
105 | { | ||
106 | if (!bus->dev) | ||
107 | return -ENODEV; | ||
108 | |||
109 | mutex_init(&bus->lock); | ||
110 | INIT_LIST_HEAD(&bus->slaves); | ||
111 | |||
112 | /* Check ACPI for Slave devices */ | ||
113 | sdw_acpi_find_slaves(bus); | ||
114 | |||
115 | /* Check DT for Slave devices */ | ||
116 | sdw_of_find_slaves(bus); | ||
117 | |||
118 | return 0; | ||
119 | } | ||
120 | |||
121 | This initializes the sdw_bus object for the Master device. The "sdw_master_ops" | ||
122 | and "sdw_master_port_ops" callback functions are provided to the Bus. | ||
123 | |||
124 | "sdw_master_ops" is used by the Bus to control the Bus in a hardware-specific | ||
125 | way. It includes Bus control functions such as sending SoundWire | ||
126 | read/write messages on the Bus and setting up the clock frequency & Stream | ||
127 | Synchronization Point (SSP). The "sdw_master_ops" structure abstracts the | ||
128 | hardware details of the Master from the Bus. | ||
129 | |||
130 | "sdw_master_port_ops" is used by the Bus to set up the Port parameters of the | ||
131 | Master interface Port. The Master interface Port register map is not defined by | ||
132 | the MIPI specification, so the Bus calls the "sdw_master_port_ops" callback | ||
133 | functions to do Port operations like "Port Prepare", "Port Transport params | ||
134 | set", "Port enable and disable". The implementation of the Master driver can | ||
135 | then perform hardware-specific configurations. | ||
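The division of labor described above — the Bus owning the protocol and only touching hardware through registered callbacks — can be sketched with stand-in structures. The field names below are hypothetical; the real structures are declared in <linux/soundwire/sdw.h> and differ in detail:

```c
#include <stddef.h>

/* Illustrative stand-ins for the ops split described above; field names
 * are hypothetical, not the kernel's. */
struct mock_master_ops {
	int (*xfer_msg)(void *bus_ctx, const void *msg);     /* read/write messages */
	int (*set_clk_freq)(void *bus_ctx, unsigned int hz); /* clock scaling */
};

/* The Bus only touches the hardware through the registered callbacks. */
int mock_bus_send_msg(const struct mock_master_ops *ops, void *ctx,
		      const void *msg)
{
	if (!ops || !ops->xfer_msg)
		return -1;	/* kernel code would return a -E* error here */
	return ops->xfer_msg(ctx, msg);
}

static int noop_xfer(void *ctx, const void *msg)
{
	(void)ctx;
	(void)msg;
	return 0;	/* pretend the message went out on the wire */
}

const struct mock_master_ops noop_master_ops = { .xfer_msg = noop_xfer };
```

The point of the indirection is that the Bus core never needs to know how a given Master pushes bits onto the wire; a driver only fills in the callbacks for what its hardware supports.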
136 | |||
137 | Programming interfaces (SoundWire Slave Driver) | ||
138 | =============================================== | ||
139 | |||
140 | The MIPI specification requires each Slave interface to expose a unique | ||
141 | 48-bit identifier, stored in 6 read-only dev_id registers. This dev_id | ||
142 | identifier contains vendor and part information, as well as a field used | ||
143 | to differentiate between identical components. An additional class field is | ||
144 | currently unused. A Slave driver is written for a specific vendor and part | ||
145 | identifier; the Bus enumerates the Slave device based on these two ids. | ||
146 | Slave device and driver matching is done based on these two ids. Probe | ||
147 | of the Slave driver is called by the Bus on a successful match between device and | ||
148 | driver id. A parent/child relationship is enforced between Master and Slave | ||
149 | devices (the logical representation is aligned with the physical | ||
150 | connectivity). | ||
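Splitting the 48-bit dev_id into its fields can be sketched as below. The bit positions are an assumption based on the MIPI register ordering (version, unique ID, manufacturer ID, part ID, class); the authoritative parsing lives in drivers/soundwire/:

```c
#include <stdint.h>

/* Assumed layout (illustrative): [47:44] SDW version, [43:40] unique ID,
 * [39:24] manufacturer ID, [23:8] part ID, [7:0] class ID. */
struct sdw_devid {
	uint8_t  version;
	uint8_t  unique_id;
	uint16_t mfg_id;
	uint16_t part_id;
	uint8_t  class_id;
};

struct sdw_devid sdw_devid_parse(uint64_t addr)
{
	struct sdw_devid id = {
		.version   = (addr >> 44) & 0xf,
		.unique_id = (addr >> 40) & 0xf,
		.mfg_id    = (addr >> 24) & 0xffff,
		.part_id   = (addr >> 8)  & 0xffff,
		.class_id  = addr & 0xff,
	};
	return id;
}
```

The mfg_id/part_id pair extracted here is what gets matched against the driver's id table (the 0x025d/0x700 values in the SDW_SLAVE_ENTRY example below), while unique_id disambiguates identical parts on the same link.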
151 | |||
152 | The information on Master/Slave dependencies is stored in platform data, | ||
153 | board-file, ACPI or DT. The MIPI Software specification defines additional | ||
154 | link_id parameters for controllers that have multiple Master interfaces. The | ||
155 | dev_id registers are only unique in the scope of a link, and the link_id is | ||
156 | only unique in the scope of a controller. Neither dev_id nor link_id is | ||
157 | necessarily unique at the system level, but the parent/child information is | ||
158 | used to avoid ambiguity. | ||
159 | |||
160 | .. code-block:: c | ||
161 | |||
162 | static const struct sdw_device_id slave_id[] = { | ||
163 | SDW_SLAVE_ENTRY(0x025d, 0x700, 0), | ||
164 | {}, | ||
165 | }; | ||
166 | MODULE_DEVICE_TABLE(sdw, slave_id); | ||
167 | |||
168 | static struct sdw_driver slave_sdw_driver = { | ||
169 | .driver = { | ||
170 | .name = "slave_xxx", | ||
171 | .pm = &slave_runtime_pm, | ||
172 | }, | ||
173 | .probe = slave_sdw_probe, | ||
174 | .remove = slave_sdw_remove, | ||
175 | .ops = &slave_slave_ops, | ||
176 | .id_table = slave_id, | ||
177 | }; | ||
178 | |||
179 | |||
180 | For capabilities, the Bus implements an API to read standard Slave MIPI | ||
181 | properties and also provides a callback in Slave ops for the Slave driver to | ||
182 | implement its own function that provides capabilities information. The Bus | ||
183 | needs to know a set of Slave capabilities to program Slave registers and to | ||
184 | control Bus reconfigurations. | ||
185 | |||
186 | Future enhancements to be done | ||
187 | ============================== | ||
188 | |||
189 | (1) Bulk Register Access (BRA) transfers. | ||
190 | |||
191 | |||
192 | (2) Multiple data lane support. | ||
193 | |||
194 | Links | ||
195 | ===== | ||
196 | |||
197 | SoundWire MIPI specification 1.1 is available at: | ||
198 | https://members.mipi.org/wg/All-Members/document/70290 | ||
199 | |||
200 | SoundWire MIPI DisCo (Discovery and Configuration) specification is | ||
201 | available at: | ||
202 | https://www.mipi.org/specifications/mipi-disco-soundwire | ||
203 | |||
204 | (publicly accessible with registration or directly accessible to MIPI | ||
205 | members) | ||
206 | |||
207 | MIPI Alliance Manufacturer ID Page: mid.mipi.org | ||
diff --git a/Documentation/driver-api/uio-howto.rst b/Documentation/driver-api/uio-howto.rst index f73d660b2956..693e3bd84e79 100644 --- a/Documentation/driver-api/uio-howto.rst +++ b/Documentation/driver-api/uio-howto.rst | |||
@@ -667,27 +667,28 @@ Making the driver recognize the device | |||
667 | Since the driver does not declare any device GUID's, it will not get | 667 | Since the driver does not declare any device GUID's, it will not get |
668 | loaded automatically and will not automatically bind to any devices, you | 668 | loaded automatically and will not automatically bind to any devices, you |
669 | must load it and allocate id to the driver yourself. For example, to use | 669 | must load it and allocate id to the driver yourself. For example, to use |
670 | the network device GUID:: | 670 | the network device class GUID:: |
671 | 671 | ||
672 | modprobe uio_hv_generic | 672 | modprobe uio_hv_generic |
673 | echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" > /sys/bus/vmbus/drivers/uio_hv_generic/new_id | 673 | echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" > /sys/bus/vmbus/drivers/uio_hv_generic/new_id |
674 | 674 | ||
675 | If there already is a hardware specific kernel driver for the device, | 675 | If there already is a hardware specific kernel driver for the device, |
676 | the generic driver still won't bind to it, in this case if you want to | 676 | the generic driver still won't bind to it, in this case if you want to |
677 | use the generic driver (why would you?) you'll have to manually unbind | 677 | use the generic driver for a userspace library you'll have to manually unbind |
678 | the hardware specific driver and bind the generic driver, like this:: | 678 | the hardware specific driver and bind the generic driver, using the device specific GUID |
679 | like this:: | ||
679 | 680 | ||
680 | echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/hv_netvsc/unbind | 681 | echo -n ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/hv_netvsc/unbind |
681 | echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/uio_hv_generic/bind | 682 | echo -n ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/uio_hv_generic/bind |
682 | 683 | ||
683 | You can verify that the device has been bound to the driver by looking | 684 | You can verify that the device has been bound to the driver by looking |
684 | for it in sysfs, for example like the following:: | 685 | for it in sysfs, for example like the following:: |
685 | 686 | ||
686 | ls -l /sys/bus/vmbus/devices/vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver | 687 | ls -l /sys/bus/vmbus/devices/ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver |
687 | 688 | ||
688 | Which if successful should print:: | 689 | Which if successful should print:: |
689 | 690 | ||
690 | .../vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver -> ../../../bus/vmbus/drivers/uio_hv_generic | 691 | .../ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver -> ../../../bus/vmbus/drivers/uio_hv_generic |
691 | 692 | ||
692 | Things to know about uio_hv_generic | 693 | Things to know about uio_hv_generic |
693 | ----------------------------------- | 694 | ----------------------------------- |
@@ -697,6 +698,17 @@ prevents the device from generating further interrupts until the bit is | |||
697 | cleared. The userspace driver should clear this bit before blocking and | 698 | cleared. The userspace driver should clear this bit before blocking and |
698 | waiting for more interrupts. | 699 | waiting for more interrupts. |
699 | 700 | ||
701 | When the host rescinds a device, the interrupt file descriptor is marked | ||
702 | down and any reads of the interrupt file descriptor will return -EIO, | ||
703 | similar to a closed socket or disconnected serial device. | ||
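A userspace driver's interrupt loop might handle the rescind case like this. This is a sketch: the fd is assumed to be the open /dev/uioX descriptor, and reads deliver a 32-bit interrupt count per the standard UIO model:

```c
#include <errno.h>
#include <stdint.h>
#include <unistd.h>

/* Read one interrupt count from a UIO interrupt fd.  Returns 1 with the
 * count filled in, 0 on a short read, or -EIO once the host has rescinded
 * the device (matching the behavior described above). */
int uio_wait_irq(int fd, uint32_t *count)
{
	ssize_t n = read(fd, count, sizeof(*count));

	if (n == (ssize_t)sizeof(*count))
		return 1;		/* got an interrupt count */
	if (n < 0 && errno == EIO)
		return -EIO;		/* device rescinded: tear down and exit */
	return 0;
}
```

On -EIO the driver should stop polling and release its mappings, exactly as it would for a closed socket.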
704 | |||
705 | The vmbus device regions are mapped into uio device resources: | ||
706 | 0) Channel ring buffers: guest to host and host to guest | ||
707 | 1) Guest to host interrupt signalling pages | ||
708 | 2) Guest to host monitor page | ||
709 | 3) Network receive buffer region | ||
710 | 4) Network send buffer region | ||
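Each region listed above is accessed by calling mmap() on the UIO device node; by the standard UIO convention, mapping N is selected by passing an offset of N page sizes. A sketch of computing those offsets:

```c
#include <unistd.h>
#include <sys/types.h>

/* By UIO convention, mmap() selects mapping N via offset N * page size. */
off_t uio_map_offset(unsigned int map_index)
{
	return (off_t)map_index * sysconf(_SC_PAGESIZE);
}

/* e.g. map 0 = channel ring buffers, map 3 = network receive buffer:
 *
 *   ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
 *               fd, uio_map_offset(0));
 *
 * (len comes from the sysfs maps/mapN/size attribute for that region)
 */
```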
711 | |||
700 | Further information | 712 | Further information |
701 | =================== | 713 | =================== |
702 | 714 | ||
diff --git a/Documentation/fpga/fpga-mgr.txt b/Documentation/fpga/fpga-mgr.txt index 78f197fadfd1..cc6413ed6fc9 100644 --- a/Documentation/fpga/fpga-mgr.txt +++ b/Documentation/fpga/fpga-mgr.txt | |||
@@ -11,61 +11,65 @@ hidden away in a low level driver which registers a set of ops with the core. | |||
11 | The FPGA image data itself is very manufacturer specific, but for our purposes | 11 | The FPGA image data itself is very manufacturer specific, but for our purposes |
12 | it's just binary data. The FPGA manager core won't parse it. | 12 | it's just binary data. The FPGA manager core won't parse it. |
13 | 13 | ||
14 | The FPGA image to be programmed can be in a scatter gather list, a single | ||
15 | contiguous buffer, or a firmware file. Because allocating contiguous kernel | ||
16 | memory for the buffer should be avoided, users are encouraged to use a scatter | ||
17 | gather list instead if possible. | ||
18 | |||
19 | The particulars for programming the image are presented in a structure (struct | ||
20 | fpga_image_info). This struct contains parameters such as pointers to the | ||
21 | FPGA image as well as image-specific particulars such as whether the image was | ||
22 | built for full or partial reconfiguration. | ||
14 | 23 | ||
15 | API Functions: | 24 | API Functions: |
16 | ============== | 25 | ============== |
17 | 26 | ||
18 | To program the FPGA from a file or from a buffer: | 27 | To program the FPGA: |
19 | ------------------------------------------------- | 28 | -------------------- |
20 | |||
21 | int fpga_mgr_buf_load(struct fpga_manager *mgr, | ||
22 | struct fpga_image_info *info, | ||
23 | const char *buf, size_t count); | ||
24 | |||
25 | Load the FPGA from an image which exists as a contiguous buffer in | ||
26 | memory. Allocating contiguous kernel memory for the buffer should be avoided, | ||
27 | users are encouraged to use the _sg interface instead of this. | ||
28 | |||
29 | int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, | ||
30 | struct fpga_image_info *info, | ||
31 | struct sg_table *sgt); | ||
32 | 29 | ||
33 | Load the FPGA from an image from non-contiguous in memory. Callers can | 30 | int fpga_mgr_load(struct fpga_manager *mgr, |
34 | construct a sg_table using alloc_page backed memory. | 31 | struct fpga_image_info *info); |
35 | 32 | ||
36 | int fpga_mgr_firmware_load(struct fpga_manager *mgr, | 33 | Load the FPGA from an image which is indicated in the info. If successful, |
37 | struct fpga_image_info *info, | ||
38 | const char *image_name); | ||
39 | |||
40 | Load the FPGA from an image which exists as a file. The image file must be on | ||
41 | the firmware search path (see the firmware class documentation). If successful, | ||
42 | the FPGA ends up in operating mode. Return 0 on success or a negative error | 34 | the FPGA ends up in operating mode. Return 0 on success or a negative error |
43 | code. | 35 | code. |
44 | 36 | ||
45 | A FPGA design contained in a FPGA image file will likely have particulars that | 37 | To allocate or free a struct fpga_image_info: |
46 | affect how the image is programmed to the FPGA. These are contained in struct | 38 | --------------------------------------------- |
47 | fpga_image_info. Currently the only such particular is a single flag bit | 39 | |
48 | indicating whether the image is for full or partial reconfiguration. | 40 | struct fpga_image_info *fpga_image_info_alloc(struct device *dev); |
41 | |||
42 | void fpga_image_info_free(struct fpga_image_info *info); | ||
49 | 43 | ||
50 | To get/put a reference to a FPGA manager: | 44 | To get/put a reference to a FPGA manager: |
51 | ----------------------------------------- | 45 | ----------------------------------------- |
52 | 46 | ||
53 | struct fpga_manager *of_fpga_mgr_get(struct device_node *node); | 47 | struct fpga_manager *of_fpga_mgr_get(struct device_node *node); |
54 | struct fpga_manager *fpga_mgr_get(struct device *dev); | 48 | struct fpga_manager *fpga_mgr_get(struct device *dev); |
49 | void fpga_mgr_put(struct fpga_manager *mgr); | ||
55 | 50 | ||
56 | Given a DT node or device, get an exclusive reference to a FPGA manager. | 51 | Given a DT node or device, get a reference to a FPGA manager. This pointer |
52 | can be saved until you are ready to program the FPGA. fpga_mgr_put releases | ||
53 | the reference. | ||
57 | 54 | ||
58 | void fpga_mgr_put(struct fpga_manager *mgr); | ||
59 | 55 | ||
60 | Release the reference. | 56 | To get exclusive control of a FPGA manager: |
57 | ------------------------------------------- | ||
58 | |||
59 | int fpga_mgr_lock(struct fpga_manager *mgr); | ||
60 | void fpga_mgr_unlock(struct fpga_manager *mgr); | ||
61 | |||
62 | The user should call fpga_mgr_lock and verify that it returns 0 before | ||
63 | attempting to program the FPGA. Likewise, the user should call | ||
64 | fpga_mgr_unlock when done programming the FPGA. | ||
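The get → lock → load → unlock → put discipline described above can be sketched with stubbed-out managers. The stub functions below stand in for the kernel API so the control flow is visible outside the kernel; error handling is abbreviated:

```c
/* Stand-ins for the kernel API so the sequence can be shown self-contained. */
static int locked;

int stub_fpga_mgr_lock(void)
{
	if (locked)
		return -16;	/* mimics -EBUSY: someone else owns the manager */
	locked = 1;
	return 0;
}

void stub_fpga_mgr_unlock(void)
{
	locked = 0;
}

int stub_fpga_mgr_load(void)
{
	return locked ? 0 : -1;	/* programming requires holding the lock */
}

int program_fpga(void)
{
	int ret = stub_fpga_mgr_lock();

	if (ret)
		return ret;	/* verify lock returned 0 before programming */
	ret = stub_fpga_mgr_load();
	stub_fpga_mgr_unlock();	/* always unlock, even on load failure */
	return ret;
}
```

The lock is deliberately separate from the reference: a driver can hold a reference to the manager indefinitely, but only takes the lock for the duration of one programming cycle.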
61 | 65 | ||
62 | 66 | ||
63 | To register or unregister the low level FPGA-specific driver: | 67 | To register or unregister the low level FPGA-specific driver: |
64 | ------------------------------------------------------------- | 68 | ------------------------------------------------------------- |
65 | 69 | ||
66 | int fpga_mgr_register(struct device *dev, const char *name, | 70 | int fpga_mgr_register(struct device *dev, const char *name, |
67 | const struct fpga_manager_ops *mops, | 71 | const struct fpga_manager_ops *mops, |
68 | void *priv); | 72 | void *priv); |
69 | 73 | ||
70 | void fpga_mgr_unregister(struct device *dev); | 74 | void fpga_mgr_unregister(struct device *dev); |
71 | 75 | ||
@@ -75,62 +79,58 @@ device." | |||
75 | 79 | ||
76 | How to write an image buffer to a supported FPGA | 80 | How to write an image buffer to a supported FPGA |
77 | ================================================ | 81 | ================================================ |
78 | /* Include to get the API */ | ||
79 | #include <linux/fpga/fpga-mgr.h> | 82 | #include <linux/fpga/fpga-mgr.h> |
80 | 83 | ||
81 | /* device node that specifies the FPGA manager to use */ | 84 | struct fpga_manager *mgr; |
82 | struct device_node *mgr_node = ... | 85 | struct fpga_image_info *info; |
86 | int ret; | ||
83 | 87 | ||
84 | /* FPGA image is in this buffer. count is size of the buffer. */ | 88 | /* |
85 | char *buf = ... | 89 | * Get a reference to FPGA manager. The manager is not locked, so you can |
86 | int count = ... | 90 | * hold onto this reference without it preventing programming. |
91 | * | ||
92 | * This example uses the device node of the manager. Alternatively, use | ||
93 | * fpga_mgr_get(dev) instead if you have the device. | ||
94 | */ | ||
95 | mgr = of_fpga_mgr_get(mgr_node); | ||
87 | 96 | ||
88 | /* struct with information about the FPGA image to program. */ | 97 | /* struct with information about the FPGA image to program. */ |
89 | struct fpga_image_info info; | 98 | info = fpga_image_info_alloc(dev); |
90 | 99 | ||
91 | /* flags indicates whether to do full or partial reconfiguration */ | 100 | /* flags indicates whether to do full or partial reconfiguration */ |
92 | info.flags = 0; | 101 | info->flags = FPGA_MGR_PARTIAL_RECONFIG; |
93 | 102 | ||
94 | int ret; | 103 | /* |
104 | * At this point, indicate where the image is. This is pseudo-code; you're | ||
105 | * going to use one of these three. | ||
106 | */ | ||
107 | if (image is in a scatter gather table) { | ||
95 | 108 | ||
96 | /* Get exclusive control of FPGA manager */ | 109 | info->sgt = [your scatter gather table] |
97 | struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node); | ||
98 | 110 | ||
99 | /* Load the buffer to the FPGA */ | 111 | } else if (image is in a buffer) { |
100 | ret = fpga_mgr_buf_load(mgr, &info, buf, count); | ||
101 | |||
102 | /* Release the FPGA manager */ | ||
103 | fpga_mgr_put(mgr); | ||
104 | |||
105 | |||
106 | How to write an image file to a supported FPGA | ||
107 | ============================================== | ||
108 | /* Include to get the API */ | ||
109 | #include <linux/fpga/fpga-mgr.h> | ||
110 | 112 | ||
111 | /* device node that specifies the FPGA manager to use */ | 113 | info->buf = [your image buffer] |
112 | struct device_node *mgr_node = ... | 114 | info->count = [image buffer size] |
113 | 115 | ||
114 | /* FPGA image is in this file which is in the firmware search path */ | 116 | } else if (image is in a firmware file) { |
115 | const char *path = "fpga-image-9.rbf" | ||
116 | 117 | ||
117 | /* struct with information about the FPGA image to program. */ | 118 | info->firmware_name = devm_kstrdup(dev, firmware_name, GFP_KERNEL); |
118 | struct fpga_image_info info; | ||
119 | |||
120 | /* flags indicates whether to do full or partial reconfiguration */ | ||
121 | info.flags = 0; | ||
122 | 119 | ||
123 | int ret; | 120 | } |
124 | 121 | ||
125 | /* Get exclusive control of FPGA manager */ | 122 | /* Get exclusive control of FPGA manager */ |
126 | struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node); | 123 | ret = fpga_mgr_lock(mgr); |
127 | 124 | ||
128 | /* Get the firmware image (path) and load it to the FPGA */ | 125 | /* Load the image to the FPGA */ |
129 | ret = fpga_mgr_firmware_load(mgr, &info, path); | 126 | ret = fpga_mgr_load(mgr, info); |
130 | 127 | ||
131 | /* Release the FPGA manager */ | 128 | /* Release the FPGA manager */ |
129 | fpga_mgr_unlock(mgr); | ||
132 | fpga_mgr_put(mgr); | 130 | fpga_mgr_put(mgr); |
133 | 131 | ||
132 | /* Deallocate the image info if you're done with it */ | ||
133 | fpga_image_info_free(info); | ||
134 | 134 | ||
135 | How to support a new FPGA device | 135 | How to support a new FPGA device |
136 | ================================ | 136 | ================================ |
diff --git a/Documentation/fpga/fpga-region.txt b/Documentation/fpga/fpga-region.txt new file mode 100644 index 000000000000..139a02ba1ff6 --- /dev/null +++ b/Documentation/fpga/fpga-region.txt | |||
@@ -0,0 +1,95 @@ | |||
1 | FPGA Regions | ||
2 | |||
3 | Alan Tull 2017 | ||
4 | |||
5 | CONTENTS | ||
6 | - Introduction | ||
7 | - The FPGA region API | ||
8 | - Usage example | ||
9 | |||
10 | Introduction | ||
11 | ============ | ||
12 | |||
13 | This document is meant to be a brief overview of the FPGA region API usage. A | ||
14 | more conceptual look at regions can be found in [1]. | ||
15 | |||
16 | For the purposes of this API document, let's just say that a region associates | ||
17 | an FPGA Manager and a bridge (or bridges) with a reprogrammable region of an | ||
18 | FPGA or the whole FPGA. The API provides a way to register a region and to | ||
19 | program a region. | ||
20 | |||
21 | Currently the only layer above fpga-region.c in the kernel is the Device Tree | ||
22 | support (of-fpga-region.c) described in [1]. The DT support layer uses regions | ||
23 | to program the FPGA and then DT to handle enumeration. The common region code | ||
24 | is intended to be used by other schemes that have other ways of accomplishing | ||
25 | enumeration after programming. | ||
26 | |||
27 | An fpga-region can be set up to know the following things: | ||
28 | * which FPGA manager to use to do the programming | ||
29 | * which bridges to disable before programming and enable afterwards. | ||
30 | |||
31 | Additional info needed to program the FPGA image is passed in the struct | ||
32 | fpga_image_info [2] including: | ||
33 | * pointers to the image as either a scatter-gather buffer, a contiguous | ||
34 | buffer, or the name of firmware file | ||
35 | * flags indicating specifics such as whether the image is for partial | ||
36 | reconfiguration. | ||
37 | |||
38 | =================== | ||
39 | The FPGA region API | ||
40 | =================== | ||
41 | |||
42 | To register or unregister a region: | ||
43 | ----------------------------------- | ||
44 | |||
45 | int fpga_region_register(struct device *dev, | ||
46 | struct fpga_region *region); | ||
47 | int fpga_region_unregister(struct fpga_region *region); | ||
48 | |||
49 | An example of usage can be seen in the probe function of [3]. | ||
50 | |||
51 | To program an FPGA: | ||
52 | ------------------- | ||
53 | int fpga_region_program_fpga(struct fpga_region *region); | ||
54 | |||
55 | This function operates on info passed in the fpga_image_info | ||
56 | (region->info). | ||
57 | |||
58 | This function will attempt to: | ||
59 | * lock the region's mutex | ||
60 | * lock the region's FPGA manager | ||
61 | * build a list of FPGA bridges if a method has been specified to do so | ||
62 | * disable the bridges | ||
63 | * program the FPGA | ||
64 | * re-enable the bridges | ||
65 | * release the locks | ||
66 | |||
67 | ============= | ||
68 | Usage example | ||
69 | ============= | ||
70 | |||
71 | First, allocate the info struct: | ||
72 | |||
73 | info = fpga_image_info_alloc(dev); | ||
74 | if (!info) | ||
75 | return -ENOMEM; | ||
76 | |||
77 | Set flags as needed, e.g.: | ||
78 | |||
79 | info->flags |= FPGA_MGR_PARTIAL_RECONFIG; | ||
80 | |||
81 | Point to your FPGA image, such as: | ||
82 | |||
83 | info->sgt = &sgt; | ||
84 | |||
85 | Add info to region and do the programming: | ||
86 | |||
87 | region->info = info; | ||
88 | ret = fpga_region_program_fpga(region); | ||
89 | |||
90 | Then enumerate whatever hardware has appeared in the FPGA. | ||
91 | |||
92 | -- | ||
93 | [1] ../devicetree/bindings/fpga/fpga-region.txt | ||
94 | [2] ./fpga-mgr.txt | ||
95 | [3] ../../drivers/fpga/of-fpga-region.c | ||
diff --git a/Documentation/fpga/overview.txt b/Documentation/fpga/overview.txt new file mode 100644 index 000000000000..0f1236e7e675 --- /dev/null +++ b/Documentation/fpga/overview.txt | |||
@@ -0,0 +1,23 @@ | |||
1 | Linux kernel FPGA support | ||
2 | |||
3 | Alan Tull 2017 | ||
4 | |||
5 | The main point of this project has been to separate out the upper layers | ||
6 | that know when to reprogram an FPGA from the lower layers that know how to | ||
7 | reprogram a specific FPGA device. The intention is to make this manufacturer | ||
8 | agnostic, understanding that of course the FPGA images themselves are very | ||
9 | device specific. | ||
10 | |||
11 | The framework in the kernel includes: | ||
12 | * low level FPGA manager drivers that know how to program a specific device | ||
13 | * the fpga-mgr framework they are registered with | ||
14 | * low level FPGA bridge drivers for hard/soft bridges which are intended to | ||
15 | be disabled during FPGA programming | ||
16 | * the fpga-bridge framework they are registered with | ||
17 | * the fpga-region framework which associates and controls managers and bridges | ||
18 | as reconfigurable regions | ||
19 | * the of-fpga-region support for reprogramming FPGAs when device tree overlays | ||
20 | are applied. | ||
21 | |||
22 | I would encourage you, the user, to add code that creates FPGA regions rather | ||
23 | than trying to control managers and bridges separately. | ||
diff --git a/MAINTAINERS b/MAINTAINERS index 64c47587d1d4..a2a25331e3b1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS | |||
@@ -3421,8 +3421,8 @@ M: Arnd Bergmann <arnd@arndb.de> | |||
3421 | M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 3421 | M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
3422 | T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git | 3422 | T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git |
3423 | S: Supported | 3423 | S: Supported |
3424 | F: drivers/char/* | 3424 | F: drivers/char/ |
3425 | F: drivers/misc/* | 3425 | F: drivers/misc/ |
3426 | F: include/linux/miscdevice.h | 3426 | F: include/linux/miscdevice.h |
3427 | 3427 | ||
3428 | CHECKPATCH | 3428 | CHECKPATCH |
@@ -12526,6 +12526,13 @@ F: lib/siphash.c | |||
12526 | F: lib/test_siphash.c | 12526 | F: lib/test_siphash.c |
12527 | F: include/linux/siphash.h | 12527 | F: include/linux/siphash.h |
12528 | 12528 | ||
12529 | SIOX | ||
12530 | M: Gavin Schenk <g.schenk@eckelmann.de> | ||
12531 | M: Uwe Kleine-König <kernel@pengutronix.de> | ||
12532 | S: Supported | ||
12533 | F: drivers/siox/* | ||
12534 | F: include/trace/events/siox.h | ||
12535 | |||
12529 | SIS 190 ETHERNET DRIVER | 12536 | SIS 190 ETHERNET DRIVER |
12530 | M: Francois Romieu <romieu@fr.zoreil.com> | 12537 | M: Francois Romieu <romieu@fr.zoreil.com> |
12531 | L: netdev@vger.kernel.org | 12538 | L: netdev@vger.kernel.org |
@@ -12577,6 +12584,14 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git | |||
12577 | F: include/linux/srcu.h | 12584 | F: include/linux/srcu.h |
 F:	kernel/rcu/srcu.c
 
+SERIAL LOW-POWER INTER-CHIP MEDIA BUS (SLIMbus)
+M:	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
+L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
+S:	Maintained
+F:	drivers/slimbus/
+F:	Documentation/devicetree/bindings/slimbus/
+F:	include/linux/slimbus.h
+
 SMACK SECURITY MODULE
 M:	Casey Schaufler <casey@schaufler-ca.com>
 L:	linux-security-module@vger.kernel.org
@@ -12802,6 +12817,16 @@ F: Documentation/sound/alsa/soc/
 F:	sound/soc/
 F:	include/sound/soc*
 
+SOUNDWIRE SUBSYSTEM
+M:	Vinod Koul <vinod.koul@intel.com>
+M:	Sanyog Kale <sanyog.r.kale@intel.com>
+R:	Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
+L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
+S:	Supported
+F:	Documentation/driver-api/soundwire/
+F:	drivers/soundwire/
+F:	include/linux/soundwire/
+
 SP2 MEDIA DRIVER
 M:	Olli Salonen <olli.salonen@iki.fi>
 L:	linux-media@vger.kernel.org
@@ -14672,6 +14697,15 @@ S: Maintained
 F:	drivers/virtio/virtio_input.c
 F:	include/uapi/linux/virtio_input.h
 
+VIRTUAL BOX GUEST DEVICE DRIVER
+M:	Hans de Goede <hdegoede@redhat.com>
+M:	Arnd Bergmann <arnd@arndb.de>
+M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+S:	Maintained
+F:	include/linux/vbox_utils.h
+F:	include/uapi/linux/vbox*.h
+F:	drivers/virt/vboxguest/
+
 VIRTUAL SERIO DEVICE DRIVER
 M:	Stephen Chandler Paul <thatslyude@gmail.com>
 S:	Maintained
diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 189a398290db..a0a206556919 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -239,17 +239,24 @@ void hyperv_report_panic(struct pt_regs *regs, long err)
 }
 EXPORT_SYMBOL_GPL(hyperv_report_panic);
 
-bool hv_is_hypercall_page_setup(void)
+bool hv_is_hyperv_initialized(void)
 {
 	union hv_x64_msr_hypercall_contents hypercall_msr;
 
-	/* Check if the hypercall page is setup */
+	/*
+	 * Ensure that we're really on Hyper-V, and not a KVM or Xen
+	 * emulation of Hyper-V
+	 */
+	if (x86_hyper_type != X86_HYPER_MS_HYPERV)
+		return false;
+
+	/*
+	 * Verify that earlier initialization succeeded by checking
+	 * that the hypercall page is setup
+	 */
 	hypercall_msr.as_uint64 = 0;
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
 
-	if (!hypercall_msr.enable)
-		return false;
-
-	return true;
+	return hypercall_msr.enable;
 }
-EXPORT_SYMBOL_GPL(hv_is_hypercall_page_setup);
+EXPORT_SYMBOL_GPL(hv_is_hyperv_initialized);
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 8bf450b13d9f..b52af150cbd8 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -314,11 +314,11 @@ void hyperv_init(void);
 void hyperv_setup_mmu_ops(void);
 void hyper_alloc_mmu(void);
 void hyperv_report_panic(struct pt_regs *regs, long err);
-bool hv_is_hypercall_page_setup(void);
+bool hv_is_hyperv_initialized(void);
 void hyperv_cleanup(void);
 #else /* CONFIG_HYPERV */
 static inline void hyperv_init(void) {}
-static inline bool hv_is_hypercall_page_setup(void) { return false; }
+static inline bool hv_is_hyperv_initialized(void) { return false; }
 static inline void hyperv_cleanup(void) {}
 static inline void hyperv_setup_mmu_ops(void) {}
 #endif /* CONFIG_HYPERV */
diff --git a/drivers/Kconfig b/drivers/Kconfig
index ef5fb8395d76..879dc0604cba 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -153,6 +153,8 @@ source "drivers/remoteproc/Kconfig"
 
 source "drivers/rpmsg/Kconfig"
 
+source "drivers/soundwire/Kconfig"
+
 source "drivers/soc/Kconfig"
 
 source "drivers/devfreq/Kconfig"
@@ -213,4 +215,8 @@ source "drivers/opp/Kconfig"
 
 source "drivers/visorbus/Kconfig"
 
+source "drivers/siox/Kconfig"
+
+source "drivers/slimbus/Kconfig"
+
 endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 7a2330077e47..7a0438744053 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -87,6 +87,7 @@ obj-$(CONFIG_MTD) += mtd/
 obj-$(CONFIG_SPI)		+= spi/
 obj-$(CONFIG_SPMI)		+= spmi/
 obj-$(CONFIG_HSI)		+= hsi/
+obj-$(CONFIG_SLIMBUS)		+= slimbus/
 obj-y				+= net/
 obj-$(CONFIG_ATM)		+= atm/
 obj-$(CONFIG_FUSION)		+= message/
@@ -157,6 +158,7 @@ obj-$(CONFIG_MAILBOX) += mailbox/
 obj-$(CONFIG_HWSPINLOCK)	+= hwspinlock/
 obj-$(CONFIG_REMOTEPROC)	+= remoteproc/
 obj-$(CONFIG_RPMSG)		+= rpmsg/
+obj-$(CONFIG_SOUNDWIRE)		+= soundwire/
 
 # Virtualization drivers
 obj-$(CONFIG_VIRT_DRIVERS)	+= virt/
@@ -185,3 +187,4 @@ obj-$(CONFIG_FSI) += fsi/
 obj-$(CONFIG_TEE)		+= tee/
 obj-$(CONFIG_MULTIPLEXER)	+= mux/
 obj-$(CONFIG_UNISYS_VISORBUS)	+= visorbus/
+obj-$(CONFIG_SIOX)		+= siox/
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index cc89d0d2b965..d21040c5d343 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -141,7 +141,7 @@ enum {
 };
 static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR |
 	BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION;
-module_param_named(debug_mask, binder_debug_mask, uint, S_IWUSR | S_IRUGO);
+module_param_named(debug_mask, binder_debug_mask, uint, 0644);
 
 static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
 module_param_named(devices, binder_devices_param, charp, 0444);
@@ -160,7 +160,7 @@ static int binder_set_stop_on_user_error(const char *val,
 	return ret;
 }
 module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
-		  param_get_int, &binder_stop_on_user_error, S_IWUSR | S_IRUGO);
+		  param_get_int, &binder_stop_on_user_error, 0644);
 
 #define binder_debug(mask, x...) \
 	do { \
@@ -249,7 +249,7 @@ static struct binder_transaction_log_entry *binder_transaction_log_add(
 	unsigned int cur = atomic_inc_return(&log->cur);
 
 	if (cur >= ARRAY_SIZE(log->entry))
-		log->full = 1;
+		log->full = true;
 	e = &log->entry[cur % ARRAY_SIZE(log->entry)];
 	WRITE_ONCE(e->debug_id_done, 0);
 	/*
@@ -493,8 +493,6 @@ enum binder_deferred_state {
  *                        (protected by @inner_lock)
  * @todo:                 list of work for this process
  *                        (protected by @inner_lock)
- * @wait:                 wait queue head to wait for proc work
- *                        (invariant after initialized)
  * @stats:                per-process binder statistics
  *                        (atomics, no lock needed)
  * @delivered_death:      list of delivered death notification
@@ -537,7 +535,6 @@ struct binder_proc {
 	bool is_dead;
 
 	struct list_head todo;
-	wait_queue_head_t wait;
 	struct binder_stats stats;
 	struct list_head delivered_death;
 	int max_threads;
@@ -579,6 +576,8 @@ enum {
  *                        (protected by @proc->inner_lock)
  * @todo:                 list of work to do for this thread
  *                        (protected by @proc->inner_lock)
+ * @process_todo:         whether work in @todo should be processed
+ *                        (protected by @proc->inner_lock)
  * @return_error:         transaction errors reported by this thread
  *                        (only accessed by this thread)
  * @reply_error:          transaction errors reported by target thread
@@ -604,6 +603,7 @@ struct binder_thread {
 	bool looper_need_return; /* can be written by other thread */
 	struct binder_transaction *transaction_stack;
 	struct list_head todo;
+	bool process_todo;
 	struct binder_error return_error;
 	struct binder_error reply_error;
 	wait_queue_head_t wait;
@@ -789,6 +789,16 @@ static bool binder_worklist_empty(struct binder_proc *proc,
 	return ret;
 }
 
+/**
+ * binder_enqueue_work_ilocked() - Add an item to the work list
+ * @work:         struct binder_work to add to list
+ * @target_list:  list to add work to
+ *
+ * Adds the work to the specified list. Asserts that work
+ * is not already on a list.
+ *
+ * Requires the proc->inner_lock to be held.
+ */
 static void
 binder_enqueue_work_ilocked(struct binder_work *work,
 			    struct list_head *target_list)
@@ -799,22 +809,56 @@ binder_enqueue_work_ilocked(struct binder_work *work,
 }
 
 /**
- * binder_enqueue_work() - Add an item to the work list
- * @proc:         binder_proc associated with list
+ * binder_enqueue_deferred_thread_work_ilocked() - Add deferred thread work
+ * @thread:       thread to queue work to
  * @work:         struct binder_work to add to list
- * @target_list:  list to add work to
  *
- * Adds the work to the specified list. Asserts that work
- * is not already on a list.
+ * Adds the work to the todo list of the thread. Doesn't set the process_todo
+ * flag, which means that (if it wasn't already set) the thread will go to
+ * sleep without handling this work when it calls read.
+ *
+ * Requires the proc->inner_lock to be held.
  */
 static void
-binder_enqueue_work(struct binder_proc *proc,
-		    struct binder_work *work,
-		    struct list_head *target_list)
+binder_enqueue_deferred_thread_work_ilocked(struct binder_thread *thread,
+					    struct binder_work *work)
 {
-	binder_inner_proc_lock(proc);
-	binder_enqueue_work_ilocked(work, target_list);
-	binder_inner_proc_unlock(proc);
+	binder_enqueue_work_ilocked(work, &thread->todo);
+}
+
+/**
+ * binder_enqueue_thread_work_ilocked() - Add an item to the thread work list
+ * @thread:       thread to queue work to
+ * @work:         struct binder_work to add to list
+ *
+ * Adds the work to the todo list of the thread, and enables processing
+ * of the todo queue.
+ *
+ * Requires the proc->inner_lock to be held.
+ */
+static void
+binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
+				   struct binder_work *work)
+{
+	binder_enqueue_work_ilocked(work, &thread->todo);
+	thread->process_todo = true;
+}
+
+/**
+ * binder_enqueue_thread_work() - Add an item to the thread work list
+ * @thread:       thread to queue work to
+ * @work:         struct binder_work to add to list
+ *
+ * Adds the work to the todo list of the thread, and enables processing
+ * of the todo queue.
+ */
+static void
+binder_enqueue_thread_work(struct binder_thread *thread,
+			   struct binder_work *work)
+{
+	binder_inner_proc_lock(thread->proc);
+	binder_enqueue_thread_work_ilocked(thread, work);
+	binder_inner_proc_unlock(thread->proc);
 }
 
 static void
@@ -940,7 +984,7 @@ err:
 static bool binder_has_work_ilocked(struct binder_thread *thread,
 				    bool do_proc_work)
 {
-	return !binder_worklist_empty_ilocked(&thread->todo) ||
+	return thread->process_todo ||
 		thread->looper_need_return ||
 		(do_proc_work &&
 		 !binder_worklist_empty_ilocked(&thread->proc->todo));
@@ -1228,6 +1272,17 @@ static int binder_inc_node_nilocked(struct binder_node *node, int strong,
 		node->local_strong_refs++;
 		if (!node->has_strong_ref && target_list) {
 			binder_dequeue_work_ilocked(&node->work);
+			/*
+			 * Note: this function is the only place where we queue
+			 * directly to a thread->todo without using the
+			 * corresponding binder_enqueue_thread_work() helper
+			 * functions; in this case it's ok to not set the
+			 * process_todo flag, since we know this node work will
+			 * always be followed by other work that starts queue
+			 * processing: in case of synchronous transactions, a
+			 * BR_REPLY or BR_ERROR; in case of oneway
+			 * transactions, a BR_TRANSACTION_COMPLETE.
+			 */
 			binder_enqueue_work_ilocked(&node->work, target_list);
 		}
 	} else {
@@ -1239,6 +1294,9 @@ static int binder_inc_node_nilocked(struct binder_node *node, int strong,
 				node->debug_id);
 			return -EINVAL;
 		}
+		/*
+		 * See comment above
+		 */
 		binder_enqueue_work_ilocked(&node->work, target_list);
 	}
 }
@@ -1928,9 +1986,9 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 		binder_pop_transaction_ilocked(target_thread, t);
 		if (target_thread->reply_error.cmd == BR_OK) {
 			target_thread->reply_error.cmd = error_code;
-			binder_enqueue_work_ilocked(
-				&target_thread->reply_error.work,
-				&target_thread->todo);
+			binder_enqueue_thread_work_ilocked(
+				target_thread,
+				&target_thread->reply_error.work);
 			wake_up_interruptible(&target_thread->wait);
 		} else {
 			WARN(1, "Unexpected reply error: %u\n",
@@ -2569,20 +2627,18 @@ static bool binder_proc_transaction(struct binder_transaction *t,
 				    struct binder_proc *proc,
 				    struct binder_thread *thread)
 {
-	struct list_head *target_list = NULL;
 	struct binder_node *node = t->buffer->target_node;
 	bool oneway = !!(t->flags & TF_ONE_WAY);
-	bool wakeup = true;
+	bool pending_async = false;
 
 	BUG_ON(!node);
 	binder_node_lock(node);
 	if (oneway) {
 		BUG_ON(thread);
 		if (node->has_async_transaction) {
-			target_list = &node->async_todo;
-			wakeup = false;
+			pending_async = true;
 		} else {
-			node->has_async_transaction = 1;
+			node->has_async_transaction = true;
 		}
 	}
 
@@ -2594,19 +2650,17 @@ static bool binder_proc_transaction(struct binder_transaction *t,
 		return false;
 	}
 
-	if (!thread && !target_list)
+	if (!thread && !pending_async)
 		thread = binder_select_thread_ilocked(proc);
 
 	if (thread)
-		target_list = &thread->todo;
-	else if (!target_list)
-		target_list = &proc->todo;
+		binder_enqueue_thread_work_ilocked(thread, &t->work);
+	else if (!pending_async)
+		binder_enqueue_work_ilocked(&t->work, &proc->todo);
 	else
-		BUG_ON(target_list != &node->async_todo);
+		binder_enqueue_work_ilocked(&t->work, &node->async_todo);
 
-	binder_enqueue_work_ilocked(&t->work, target_list);
-
-	if (wakeup)
+	if (!pending_async)
 		binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);
 
 	binder_inner_proc_unlock(proc);
@@ -3101,10 +3155,10 @@ static void binder_transaction(struct binder_proc *proc,
 		}
 	}
 	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
-	binder_enqueue_work(proc, tcomplete, &thread->todo);
 	t->work.type = BINDER_WORK_TRANSACTION;
 
 	if (reply) {
+		binder_enqueue_thread_work(thread, tcomplete);
 		binder_inner_proc_lock(target_proc);
 		if (target_thread->is_dead) {
 			binder_inner_proc_unlock(target_proc);
@@ -3112,13 +3166,21 @@ static void binder_transaction(struct binder_proc *proc,
 		}
 		BUG_ON(t->buffer->async_transaction != 0);
 		binder_pop_transaction_ilocked(target_thread, in_reply_to);
-		binder_enqueue_work_ilocked(&t->work, &target_thread->todo);
+		binder_enqueue_thread_work_ilocked(target_thread, &t->work);
 		binder_inner_proc_unlock(target_proc);
 		wake_up_interruptible_sync(&target_thread->wait);
 		binder_free_transaction(in_reply_to);
 	} else if (!(t->flags & TF_ONE_WAY)) {
 		BUG_ON(t->buffer->async_transaction != 0);
 		binder_inner_proc_lock(proc);
+		/*
+		 * Defer the TRANSACTION_COMPLETE, so we don't return to
+		 * userspace immediately; this allows the target process to
+		 * immediately start processing this transaction, reducing
+		 * latency. We will then return the TRANSACTION_COMPLETE when
+		 * the target replies (or there is an error).
+		 */
+		binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
 		t->need_reply = 1;
 		t->from_parent = thread->transaction_stack;
 		thread->transaction_stack = t;
@@ -3132,6 +3194,7 @@ static void binder_transaction(struct binder_proc *proc,
 	} else {
 		BUG_ON(target_node == NULL);
 		BUG_ON(t->buffer->async_transaction != 1);
+		binder_enqueue_thread_work(thread, tcomplete);
 		if (!binder_proc_transaction(t, target_proc, NULL))
 			goto err_dead_proc_or_thread;
 	}
@@ -3210,15 +3273,11 @@ err_invalid_target_handle:
 	BUG_ON(thread->return_error.cmd != BR_OK);
 	if (in_reply_to) {
 		thread->return_error.cmd = BR_TRANSACTION_COMPLETE;
-		binder_enqueue_work(thread->proc,
-				    &thread->return_error.work,
-				    &thread->todo);
+		binder_enqueue_thread_work(thread, &thread->return_error.work);
 		binder_send_failed_reply(in_reply_to, return_error);
 	} else {
 		thread->return_error.cmd = return_error;
-		binder_enqueue_work(thread->proc,
-				    &thread->return_error.work,
-				    &thread->todo);
+		binder_enqueue_thread_work(thread, &thread->return_error.work);
 	}
 }
 
@@ -3424,7 +3483,7 @@ static int binder_thread_write(struct binder_proc *proc,
 			w = binder_dequeue_work_head_ilocked(
 					&buf_node->async_todo);
 			if (!w) {
-				buf_node->has_async_transaction = 0;
+				buf_node->has_async_transaction = false;
 			} else {
 				binder_enqueue_work_ilocked(
 						w, &proc->todo);
@@ -3522,10 +3581,9 @@
 				WARN_ON(thread->return_error.cmd !=
 					BR_OK);
 				thread->return_error.cmd = BR_ERROR;
-				binder_enqueue_work(
-					thread->proc,
-					&thread->return_error.work,
-					&thread->todo);
+				binder_enqueue_thread_work(
+					thread,
+					&thread->return_error.work);
 				binder_debug(
 					BINDER_DEBUG_FAILED_TRANSACTION,
 					"%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
@@ -3605,9 +3663,9 @@
 				if (thread->looper &
 				    (BINDER_LOOPER_STATE_REGISTERED |
 				     BINDER_LOOPER_STATE_ENTERED))
-					binder_enqueue_work_ilocked(
-						&death->work,
-						&thread->todo);
+					binder_enqueue_thread_work_ilocked(
+						thread,
+						&death->work);
 				else {
 					binder_enqueue_work_ilocked(
 						&death->work,
@@ -3662,8 +3720,8 @@ static int binder_thread_write(struct binder_proc *proc,
 			if (thread->looper &
 			    (BINDER_LOOPER_STATE_REGISTERED |
 			     BINDER_LOOPER_STATE_ENTERED))
-				binder_enqueue_work_ilocked(
-					&death->work, &thread->todo);
+				binder_enqueue_thread_work_ilocked(
+					thread, &death->work);
 			else {
 				binder_enqueue_work_ilocked(
 					&death->work,
@@ -3837,6 +3895,8 @@ retry:
 			break;
 		}
 		w = binder_dequeue_work_head_ilocked(list);
+		if (binder_worklist_empty_ilocked(&thread->todo))
+			thread->process_todo = false;
 
 		switch (w->type) {
 		case BINDER_WORK_TRANSACTION: {
@@ -4302,6 +4362,18 @@ static int binder_thread_release(struct binder_proc *proc,
 		if (t)
 			spin_lock(&t->lock);
 	}
+
+	/*
+	 * If this thread used poll, make sure we remove the waitqueue
+	 * from any epoll data structures holding it with POLLFREE.
+	 * waitqueue_active() is safe to use here because we're holding
+	 * the inner lock.
+	 */
+	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
+	    waitqueue_active(&thread->wait)) {
+		wake_up_poll(&thread->wait, POLLHUP | POLLFREE);
+	}
+
 	binder_inner_proc_unlock(thread->proc);
 
 	if (send_reply)
@@ -4646,7 +4718,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 	return 0;
 
 err_bad_arg:
-	pr_err("binder_mmap: %d %lx-%lx %s failed %d\n",
+	pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
 	       proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
 	return ret;
 }
@@ -4656,7 +4728,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
 	struct binder_proc *proc;
 	struct binder_device *binder_dev;
 
-	binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
+	binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__,
 		     current->group_leader->pid, current->pid);
 
 	proc = kzalloc(sizeof(*proc), GFP_KERNEL);
@@ -4695,7 +4767,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
 	 * anyway print all contexts that a given PID has, so this
 	 * is not a problem.
 	 */
-	proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
+	proc->debugfs_entry = debugfs_create_file(strbuf, 0444,
 		binder_debugfs_dir_entry_proc,
 		(void *)(unsigned long)proc->pid,
 		&binder_proc_fops);
@@ -5524,7 +5596,9 @@ static int __init binder_init(void)
 	struct binder_device *device;
 	struct hlist_node *tmp;
 
-	binder_alloc_shrinker_init();
+	ret = binder_alloc_shrinker_init();
+	if (ret)
+		return ret;
 
 	atomic_set(&binder_transaction_log.cur, ~0U);
 	atomic_set(&binder_transaction_log_failed.cur, ~0U);
@@ -5536,27 +5610,27 @@
 
 	if (binder_debugfs_dir_entry_root) {
 		debugfs_create_file("state",
-				    S_IRUGO,
+				    0444,
 				    binder_debugfs_dir_entry_root,
 				    NULL,
 				    &binder_state_fops);
 		debugfs_create_file("stats",
-				    S_IRUGO,
+				    0444,
 				    binder_debugfs_dir_entry_root,
 				    NULL,
 				    &binder_stats_fops);
 		debugfs_create_file("transactions",
-				    S_IRUGO,
+				    0444,
 				    binder_debugfs_dir_entry_root,
 				    NULL,
 				    &binder_transactions_fops);
 		debugfs_create_file("transaction_log",
-				    S_IRUGO,
+				    0444,
 				    binder_debugfs_dir_entry_root,
 				    &binder_transaction_log,
 				    &binder_transaction_log_fops);
 		debugfs_create_file("failed_transaction_log",
-				    S_IRUGO,
+				    0444,
 				    binder_debugfs_dir_entry_root,
 				    &binder_transaction_log_failed,
 				    &binder_transaction_log_fops);
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 6f6f745605af..5a426c877dfb 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -281,6 +281,9 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 			goto err_vm_insert_page_failed;
 		}
 
+		if (index + 1 > alloc->pages_high)
+			alloc->pages_high = index + 1;
+
 		trace_binder_alloc_page_end(alloc, index);
 		/* vm_insert_page does not seem to increment the refcount */
 	}
@@ -324,11 +327,12 @@ err_no_vma:
 	return vma ? -ENOMEM : -ESRCH;
 }
 
-struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
-						  size_t data_size,
-						  size_t offsets_size,
-						  size_t extra_buffers_size,
-						  int is_async)
+static struct binder_buffer *binder_alloc_new_buf_locked(
+				struct binder_alloc *alloc,
+				size_t data_size,
+				size_t offsets_size,
+				size_t extra_buffers_size,
+				int is_async)
 {
 	struct rb_node *n = alloc->free_buffers.rb_node;
 	struct binder_buffer *buffer;
@@ -666,7 +670,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 		goto err_already_mapped;
 	}
 
-	area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
+	area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC);
 	if (area == NULL) {
 		ret = -ENOMEM;
 		failure_string = "get_vm_area";
@@ -853,6 +857,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 	}
 	mutex_unlock(&alloc->mutex);
 	seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
860 | seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high); | ||
856 | } | 861 | } |
857 | 862 | ||
858 | /** | 863 | /** |
@@ -1002,8 +1007,14 @@ void binder_alloc_init(struct binder_alloc *alloc) | |||
1002 | INIT_LIST_HEAD(&alloc->buffers); | 1007 | INIT_LIST_HEAD(&alloc->buffers); |
1003 | } | 1008 | } |
1004 | 1009 | ||
1005 | void binder_alloc_shrinker_init(void) | 1010 | int binder_alloc_shrinker_init(void) |
1006 | { | 1011 | { |
1007 | list_lru_init(&binder_alloc_lru); | 1012 | int ret = list_lru_init(&binder_alloc_lru); |
1008 | register_shrinker(&binder_shrinker); | 1013 | |
1014 | if (ret == 0) { | ||
1015 | ret = register_shrinker(&binder_shrinker); | ||
1016 | if (ret) | ||
1017 | list_lru_destroy(&binder_alloc_lru); | ||
1018 | } | ||
1019 | return ret; | ||
1009 | } | 1020 | } |
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index 2dd33b6df104..9ef64e563856 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h | |||
@@ -92,6 +92,7 @@ struct binder_lru_page { | |||
92 | * @pages: array of binder_lru_page | 92 | * @pages: array of binder_lru_page |
93 | * @buffer_size: size of address space specified via mmap | 93 | * @buffer_size: size of address space specified via mmap |
94 | * @pid: pid for associated binder_proc (invariant after init) | 94 | * @pid: pid for associated binder_proc (invariant after init) |
95 | * @pages_high: high watermark of offset in @pages | ||
95 | * | 96 | * |
96 | * Bookkeeping structure for per-proc address space management for binder | 97 | * Bookkeeping structure for per-proc address space management for binder |
97 | * buffers. It is normally initialized during binder_init() and binder_mmap() | 98 | * buffers. It is normally initialized during binder_init() and binder_mmap() |
@@ -112,6 +113,7 @@ struct binder_alloc { | |||
112 | size_t buffer_size; | 113 | size_t buffer_size; |
113 | uint32_t buffer_free; | 114 | uint32_t buffer_free; |
114 | int pid; | 115 | int pid; |
116 | size_t pages_high; | ||
115 | }; | 117 | }; |
116 | 118 | ||
117 | #ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST | 119 | #ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST |
@@ -128,7 +130,7 @@ extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc, | |||
128 | size_t extra_buffers_size, | 130 | size_t extra_buffers_size, |
129 | int is_async); | 131 | int is_async); |
130 | extern void binder_alloc_init(struct binder_alloc *alloc); | 132 | extern void binder_alloc_init(struct binder_alloc *alloc); |
131 | void binder_alloc_shrinker_init(void); | 133 | extern int binder_alloc_shrinker_init(void); |
132 | extern void binder_alloc_vma_close(struct binder_alloc *alloc); | 134 | extern void binder_alloc_vma_close(struct binder_alloc *alloc); |
133 | extern struct binder_buffer * | 135 | extern struct binder_buffer * |
134 | binder_alloc_prepare_to_free(struct binder_alloc *alloc, | 136 | binder_alloc_prepare_to_free(struct binder_alloc *alloc, |
diff --git a/drivers/auxdisplay/img-ascii-lcd.c b/drivers/auxdisplay/img-ascii-lcd.c index db040b378224..9180b9bd5821 100644 --- a/drivers/auxdisplay/img-ascii-lcd.c +++ b/drivers/auxdisplay/img-ascii-lcd.c | |||
@@ -441,3 +441,7 @@ static struct platform_driver img_ascii_lcd_driver = { | |||
441 | .remove = img_ascii_lcd_remove, | 441 | .remove = img_ascii_lcd_remove, |
442 | }; | 442 | }; |
443 | module_platform_driver(img_ascii_lcd_driver); | 443 | module_platform_driver(img_ascii_lcd_driver); |
444 | |||
445 | MODULE_DESCRIPTION("Imagination Technologies ASCII LCD Display"); | ||
446 | MODULE_AUTHOR("Paul Burton <paul.burton@mips.com>"); | ||
447 | MODULE_LICENSE("GPL"); | ||
diff --git a/drivers/base/regmap/Kconfig b/drivers/base/regmap/Kconfig index 067073e4beb1..aff34c0c2a3e 100644 --- a/drivers/base/regmap/Kconfig +++ b/drivers/base/regmap/Kconfig | |||
@@ -20,6 +20,10 @@ config REGMAP_I2C | |||
20 | tristate | 20 | tristate |
21 | depends on I2C | 21 | depends on I2C |
22 | 22 | ||
23 | config REGMAP_SLIMBUS | ||
24 | tristate | ||
25 | depends on SLIMBUS | ||
26 | |||
23 | config REGMAP_SPI | 27 | config REGMAP_SPI |
24 | tristate | 28 | tristate |
25 | depends on SPI | 29 | depends on SPI |
diff --git a/drivers/base/regmap/Makefile b/drivers/base/regmap/Makefile index 22d263cca395..5ed0023fabda 100644 --- a/drivers/base/regmap/Makefile +++ b/drivers/base/regmap/Makefile | |||
@@ -8,6 +8,7 @@ obj-$(CONFIG_REGCACHE_COMPRESSED) += regcache-lzo.o | |||
8 | obj-$(CONFIG_DEBUG_FS) += regmap-debugfs.o | 8 | obj-$(CONFIG_DEBUG_FS) += regmap-debugfs.o |
9 | obj-$(CONFIG_REGMAP_AC97) += regmap-ac97.o | 9 | obj-$(CONFIG_REGMAP_AC97) += regmap-ac97.o |
10 | obj-$(CONFIG_REGMAP_I2C) += regmap-i2c.o | 10 | obj-$(CONFIG_REGMAP_I2C) += regmap-i2c.o |
11 | obj-$(CONFIG_REGMAP_SLIMBUS) += regmap-slimbus.o | ||
11 | obj-$(CONFIG_REGMAP_SPI) += regmap-spi.o | 12 | obj-$(CONFIG_REGMAP_SPI) += regmap-spi.o |
12 | obj-$(CONFIG_REGMAP_SPMI) += regmap-spmi.o | 13 | obj-$(CONFIG_REGMAP_SPMI) += regmap-spmi.o |
13 | obj-$(CONFIG_REGMAP_MMIO) += regmap-mmio.o | 14 | obj-$(CONFIG_REGMAP_MMIO) += regmap-mmio.o |
diff --git a/drivers/base/regmap/regmap-slimbus.c b/drivers/base/regmap/regmap-slimbus.c new file mode 100644 index 000000000000..c90bee81d954 --- /dev/null +++ b/drivers/base/regmap/regmap-slimbus.c | |||
@@ -0,0 +1,80 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | // Copyright (c) 2017, Linaro Ltd. | ||
3 | |||
4 | #include <linux/regmap.h> | ||
5 | #include <linux/slimbus.h> | ||
6 | #include <linux/module.h> | ||
7 | |||
8 | #include "internal.h" | ||
9 | |||
10 | static int regmap_slimbus_byte_reg_read(void *context, unsigned int reg, | ||
11 | unsigned int *val) | ||
12 | { | ||
13 | struct slim_device *sdev = context; | ||
14 | int v; | ||
15 | |||
16 | v = slim_readb(sdev, reg); | ||
17 | |||
18 | if (v < 0) | ||
19 | return v; | ||
20 | |||
21 | *val = v; | ||
22 | |||
23 | return 0; | ||
24 | } | ||
25 | |||
26 | static int regmap_slimbus_byte_reg_write(void *context, unsigned int reg, | ||
27 | unsigned int val) | ||
28 | { | ||
29 | struct slim_device *sdev = context; | ||
30 | |||
31 | return slim_writeb(sdev, reg, val); | ||
32 | } | ||
33 | |||
34 | static struct regmap_bus regmap_slimbus_bus = { | ||
35 | .reg_write = regmap_slimbus_byte_reg_write, | ||
36 | .reg_read = regmap_slimbus_byte_reg_read, | ||
37 | .reg_format_endian_default = REGMAP_ENDIAN_LITTLE, | ||
38 | .val_format_endian_default = REGMAP_ENDIAN_LITTLE, | ||
39 | }; | ||
40 | |||
41 | static const struct regmap_bus *regmap_get_slimbus(struct slim_device *slim, | ||
42 | const struct regmap_config *config) | ||
43 | { | ||
44 | if (config->val_bits == 8 && config->reg_bits == 8) | ||
45 | return ®map_slimbus_bus; | ||
46 | |||
47 | return ERR_PTR(-ENOTSUPP); | ||
48 | } | ||
49 | |||
50 | struct regmap *__regmap_init_slimbus(struct slim_device *slimbus, | ||
51 | const struct regmap_config *config, | ||
52 | struct lock_class_key *lock_key, | ||
53 | const char *lock_name) | ||
54 | { | ||
55 | const struct regmap_bus *bus = regmap_get_slimbus(slimbus, config); | ||
56 | |||
57 | if (IS_ERR(bus)) | ||
58 | return ERR_CAST(bus); | ||
59 | |||
60 | return __regmap_init(&slimbus->dev, bus, &slimbus->dev, config, | ||
61 | lock_key, lock_name); | ||
62 | } | ||
63 | EXPORT_SYMBOL_GPL(__regmap_init_slimbus); | ||
64 | |||
65 | struct regmap *__devm_regmap_init_slimbus(struct slim_device *slimbus, | ||
66 | const struct regmap_config *config, | ||
67 | struct lock_class_key *lock_key, | ||
68 | const char *lock_name) | ||
69 | { | ||
70 | const struct regmap_bus *bus = regmap_get_slimbus(slimbus, config); | ||
71 | |||
72 | if (IS_ERR(bus)) | ||
73 | return ERR_CAST(bus); | ||
74 | |||
75 | return __devm_regmap_init(&slimbus->dev, bus, &slimbus, config, | ||
76 | lock_key, lock_name); | ||
77 | } | ||
78 | EXPORT_SYMBOL_GPL(__devm_regmap_init_slimbus); | ||
79 | |||
80 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/char/lp.c b/drivers/char/lp.c index 8249762192d5..8c4dd1a3bb6a 100644 --- a/drivers/char/lp.c +++ b/drivers/char/lp.c | |||
@@ -659,17 +659,31 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd, | |||
659 | return retval; | 659 | return retval; |
660 | } | 660 | } |
661 | 661 | ||
662 | static int lp_set_timeout(unsigned int minor, struct timeval *par_timeout) | 662 | static int lp_set_timeout(unsigned int minor, s64 tv_sec, long tv_usec) |
663 | { | 663 | { |
664 | long to_jiffies; | 664 | long to_jiffies; |
665 | 665 | ||
666 | /* Convert to jiffies, place in lp_table */ | 666 | /* Convert to jiffies, place in lp_table */ |
667 | if ((par_timeout->tv_sec < 0) || | 667 | if (tv_sec < 0 || tv_usec < 0) |
668 | (par_timeout->tv_usec < 0)) { | ||
669 | return -EINVAL; | 668 | return -EINVAL; |
669 | |||
670 | /* | ||
671 | * we used to not check, so let's not make this fatal, | ||
672 | * but deal with user space passing a 32-bit tv_nsec in | ||
673 | * a 64-bit field, capping the timeout to 1 second | ||
674 | * worth of microseconds, and capping the total at | ||
675 | * MAX_JIFFY_OFFSET. | ||
676 | */ | ||
677 | if (tv_usec > 999999) | ||
678 | tv_usec = 999999; | ||
679 | |||
680 | if (tv_sec >= MAX_SEC_IN_JIFFIES - 1) { | ||
681 | to_jiffies = MAX_JIFFY_OFFSET; | ||
682 | } else { | ||
683 | to_jiffies = DIV_ROUND_UP(tv_usec, 1000000/HZ); | ||
684 | to_jiffies += tv_sec * (long) HZ; | ||
670 | } | 685 | } |
671 | to_jiffies = DIV_ROUND_UP(par_timeout->tv_usec, 1000000/HZ); | 686 | |
672 | to_jiffies += par_timeout->tv_sec * (long) HZ; | ||
673 | if (to_jiffies <= 0) { | 687 | if (to_jiffies <= 0) { |
674 | return -EINVAL; | 688 | return -EINVAL; |
675 | } | 689 | } |
@@ -677,23 +691,43 @@ static int lp_set_timeout(unsigned int minor, struct timeval *par_timeout) | |||
677 | return 0; | 691 | return 0; |
678 | } | 692 | } |
679 | 693 | ||
694 | static int lp_set_timeout32(unsigned int minor, void __user *arg) | ||
695 | { | ||
696 | s32 karg[2]; | ||
697 | |||
698 | if (copy_from_user(karg, arg, sizeof(karg))) | ||
699 | return -EFAULT; | ||
700 | |||
701 | return lp_set_timeout(minor, karg[0], karg[1]); | ||
702 | } | ||
703 | |||
704 | static int lp_set_timeout64(unsigned int minor, void __user *arg) | ||
705 | { | ||
706 | s64 karg[2]; | ||
707 | |||
708 | if (copy_from_user(karg, arg, sizeof(karg))) | ||
709 | return -EFAULT; | ||
710 | |||
711 | return lp_set_timeout(minor, karg[0], karg[1]); | ||
712 | } | ||
713 | |||
680 | static long lp_ioctl(struct file *file, unsigned int cmd, | 714 | static long lp_ioctl(struct file *file, unsigned int cmd, |
681 | unsigned long arg) | 715 | unsigned long arg) |
682 | { | 716 | { |
683 | unsigned int minor; | 717 | unsigned int minor; |
684 | struct timeval par_timeout; | ||
685 | int ret; | 718 | int ret; |
686 | 719 | ||
687 | minor = iminor(file_inode(file)); | 720 | minor = iminor(file_inode(file)); |
688 | mutex_lock(&lp_mutex); | 721 | mutex_lock(&lp_mutex); |
689 | switch (cmd) { | 722 | switch (cmd) { |
690 | case LPSETTIMEOUT: | 723 | case LPSETTIMEOUT_OLD: |
691 | if (copy_from_user(&par_timeout, (void __user *)arg, | 724 | if (BITS_PER_LONG == 32) { |
692 | sizeof (struct timeval))) { | 725 | ret = lp_set_timeout32(minor, (void __user *)arg); |
693 | ret = -EFAULT; | ||
694 | break; | 726 | break; |
695 | } | 727 | } |
696 | ret = lp_set_timeout(minor, &par_timeout); | 728 | /* fallthrough for 64-bit */ |
729 | case LPSETTIMEOUT_NEW: | ||
730 | ret = lp_set_timeout64(minor, (void __user *)arg); | ||
697 | break; | 731 | break; |
698 | default: | 732 | default: |
699 | ret = lp_do_ioctl(minor, cmd, arg, (void __user *)arg); | 733 | ret = lp_do_ioctl(minor, cmd, arg, (void __user *)arg); |
@@ -709,18 +743,19 @@ static long lp_compat_ioctl(struct file *file, unsigned int cmd, | |||
709 | unsigned long arg) | 743 | unsigned long arg) |
710 | { | 744 | { |
711 | unsigned int minor; | 745 | unsigned int minor; |
712 | struct timeval par_timeout; | ||
713 | int ret; | 746 | int ret; |
714 | 747 | ||
715 | minor = iminor(file_inode(file)); | 748 | minor = iminor(file_inode(file)); |
716 | mutex_lock(&lp_mutex); | 749 | mutex_lock(&lp_mutex); |
717 | switch (cmd) { | 750 | switch (cmd) { |
718 | case LPSETTIMEOUT: | 751 | case LPSETTIMEOUT_OLD: |
719 | if (compat_get_timeval(&par_timeout, compat_ptr(arg))) { | 752 | if (!COMPAT_USE_64BIT_TIME) { |
720 | ret = -EFAULT; | 753 | ret = lp_set_timeout32(minor, (void __user *)arg); |
721 | break; | 754 | break; |
722 | } | 755 | } |
723 | ret = lp_set_timeout(minor, &par_timeout); | 756 | /* fallthrough for x32 mode */ |
757 | case LPSETTIMEOUT_NEW: | ||
758 | ret = lp_set_timeout64(minor, (void __user *)arg); | ||
724 | break; | 759 | break; |
725 | #ifdef LP_STATS | 760 | #ifdef LP_STATS |
726 | case LPGETSTATS: | 761 | case LPGETSTATS: |
@@ -865,7 +900,7 @@ static int __init lp_setup (char *str) | |||
865 | printk(KERN_INFO "lp: too many ports, %s ignored.\n", | 900 | printk(KERN_INFO "lp: too many ports, %s ignored.\n", |
866 | str); | 901 | str); |
867 | } else if (!strcmp(str, "reset")) { | 902 | } else if (!strcmp(str, "reset")) { |
868 | reset = 1; | 903 | reset = true; |
869 | } | 904 | } |
870 | return 1; | 905 | return 1; |
871 | } | 906 | } |
diff --git a/drivers/char/mem.c b/drivers/char/mem.c index 6aefe5370e5b..052011bcf100 100644 --- a/drivers/char/mem.c +++ b/drivers/char/mem.c | |||
@@ -107,6 +107,8 @@ static ssize_t read_mem(struct file *file, char __user *buf, | |||
107 | phys_addr_t p = *ppos; | 107 | phys_addr_t p = *ppos; |
108 | ssize_t read, sz; | 108 | ssize_t read, sz; |
109 | void *ptr; | 109 | void *ptr; |
110 | char *bounce; | ||
111 | int err; | ||
110 | 112 | ||
111 | if (p != *ppos) | 113 | if (p != *ppos) |
112 | return 0; | 114 | return 0; |
@@ -129,15 +131,22 @@ static ssize_t read_mem(struct file *file, char __user *buf, | |||
129 | } | 131 | } |
130 | #endif | 132 | #endif |
131 | 133 | ||
134 | bounce = kmalloc(PAGE_SIZE, GFP_KERNEL); | ||
135 | if (!bounce) | ||
136 | return -ENOMEM; | ||
137 | |||
132 | while (count > 0) { | 138 | while (count > 0) { |
133 | unsigned long remaining; | 139 | unsigned long remaining; |
134 | int allowed; | 140 | int allowed; |
135 | 141 | ||
136 | sz = size_inside_page(p, count); | 142 | sz = size_inside_page(p, count); |
137 | 143 | ||
144 | err = -EPERM; | ||
138 | allowed = page_is_allowed(p >> PAGE_SHIFT); | 145 | allowed = page_is_allowed(p >> PAGE_SHIFT); |
139 | if (!allowed) | 146 | if (!allowed) |
140 | return -EPERM; | 147 | goto failed; |
148 | |||
149 | err = -EFAULT; | ||
141 | if (allowed == 2) { | 150 | if (allowed == 2) { |
142 | /* Show zeros for restricted memory. */ | 151 | /* Show zeros for restricted memory. */ |
143 | remaining = clear_user(buf, sz); | 152 | remaining = clear_user(buf, sz); |
@@ -149,24 +158,32 @@ static ssize_t read_mem(struct file *file, char __user *buf, | |||
149 | */ | 158 | */ |
150 | ptr = xlate_dev_mem_ptr(p); | 159 | ptr = xlate_dev_mem_ptr(p); |
151 | if (!ptr) | 160 | if (!ptr) |
152 | return -EFAULT; | 161 | goto failed; |
153 | |||
154 | remaining = copy_to_user(buf, ptr, sz); | ||
155 | 162 | ||
163 | err = probe_kernel_read(bounce, ptr, sz); | ||
156 | unxlate_dev_mem_ptr(p, ptr); | 164 | unxlate_dev_mem_ptr(p, ptr); |
165 | if (err) | ||
166 | goto failed; | ||
167 | |||
168 | remaining = copy_to_user(buf, bounce, sz); | ||
157 | } | 169 | } |
158 | 170 | ||
159 | if (remaining) | 171 | if (remaining) |
160 | return -EFAULT; | 172 | goto failed; |
161 | 173 | ||
162 | buf += sz; | 174 | buf += sz; |
163 | p += sz; | 175 | p += sz; |
164 | count -= sz; | 176 | count -= sz; |
165 | read += sz; | 177 | read += sz; |
166 | } | 178 | } |
179 | kfree(bounce); | ||
167 | 180 | ||
168 | *ppos += read; | 181 | *ppos += read; |
169 | return read; | 182 | return read; |
183 | |||
184 | failed: | ||
185 | kfree(bounce); | ||
186 | return err; | ||
170 | } | 187 | } |
171 | 188 | ||
172 | static ssize_t write_mem(struct file *file, const char __user *buf, | 189 | static ssize_t write_mem(struct file *file, const char __user *buf, |
diff --git a/drivers/char/xillybus/Kconfig b/drivers/char/xillybus/Kconfig index b302684d86c1..a1f16df08d32 100644 --- a/drivers/char/xillybus/Kconfig +++ b/drivers/char/xillybus/Kconfig | |||
@@ -4,7 +4,7 @@ | |||
4 | 4 | ||
5 | config XILLYBUS | 5 | config XILLYBUS |
6 | tristate "Xillybus generic FPGA interface" | 6 | tristate "Xillybus generic FPGA interface" |
7 | depends on PCI || (OF_ADDRESS && OF_IRQ) | 7 | depends on PCI || OF |
8 | select CRC32 | 8 | select CRC32 |
9 | help | 9 | help |
10 | Xillybus is a generic interface for peripherals designed on | 10 | Xillybus is a generic interface for peripherals designed on |
@@ -24,7 +24,7 @@ config XILLYBUS_PCIE | |||
24 | 24 | ||
25 | config XILLYBUS_OF | 25 | config XILLYBUS_OF |
26 | tristate "Xillybus over Device Tree" | 26 | tristate "Xillybus over Device Tree" |
27 | depends on OF_ADDRESS && OF_IRQ && HAS_DMA | 27 | depends on OF && HAS_DMA |
28 | help | 28 | help |
29 | Set to M if you want Xillybus to find its resources from the | 29 | Set to M if you want Xillybus to find its resources from the |
30 | Open Firmware Flattened Device Tree. If the target is an embedded | 30 | Open Firmware Flattened Device Tree. If the target is an embedded |
diff --git a/drivers/char/xillybus/xillybus_of.c b/drivers/char/xillybus/xillybus_of.c index 78a492f5acfb..4d6625ccb48f 100644 --- a/drivers/char/xillybus/xillybus_of.c +++ b/drivers/char/xillybus/xillybus_of.c | |||
@@ -15,10 +15,6 @@ | |||
15 | #include <linux/slab.h> | 15 | #include <linux/slab.h> |
16 | #include <linux/platform_device.h> | 16 | #include <linux/platform_device.h> |
17 | #include <linux/of.h> | 17 | #include <linux/of.h> |
18 | #include <linux/of_irq.h> | ||
19 | #include <linux/of_address.h> | ||
20 | #include <linux/of_device.h> | ||
21 | #include <linux/of_platform.h> | ||
22 | #include <linux/err.h> | 18 | #include <linux/err.h> |
23 | #include "xillybus.h" | 19 | #include "xillybus.h" |
24 | 20 | ||
@@ -123,7 +119,7 @@ static int xilly_drv_probe(struct platform_device *op) | |||
123 | struct xilly_endpoint *endpoint; | 119 | struct xilly_endpoint *endpoint; |
124 | int rc; | 120 | int rc; |
125 | int irq; | 121 | int irq; |
126 | struct resource res; | 122 | struct resource *res; |
127 | struct xilly_endpoint_hardware *ephw = &of_hw; | 123 | struct xilly_endpoint_hardware *ephw = &of_hw; |
128 | 124 | ||
129 | if (of_property_read_bool(dev->of_node, "dma-coherent")) | 125 | if (of_property_read_bool(dev->of_node, "dma-coherent")) |
@@ -136,13 +132,13 @@ static int xilly_drv_probe(struct platform_device *op) | |||
136 | 132 | ||
137 | dev_set_drvdata(dev, endpoint); | 133 | dev_set_drvdata(dev, endpoint); |
138 | 134 | ||
139 | rc = of_address_to_resource(dev->of_node, 0, &res); | 135 | res = platform_get_resource(op, IORESOURCE_MEM, 0); |
140 | endpoint->registers = devm_ioremap_resource(dev, &res); | 136 | endpoint->registers = devm_ioremap_resource(dev, res); |
141 | 137 | ||
142 | if (IS_ERR(endpoint->registers)) | 138 | if (IS_ERR(endpoint->registers)) |
143 | return PTR_ERR(endpoint->registers); | 139 | return PTR_ERR(endpoint->registers); |
144 | 140 | ||
145 | irq = irq_of_parse_and_map(dev->of_node, 0); | 141 | irq = platform_get_irq(op, 0); |
146 | 142 | ||
147 | rc = devm_request_irq(dev, irq, xillybus_isr, 0, xillyname, endpoint); | 143 | rc = devm_request_irq(dev, irq, xillybus_isr, 0, xillyname, endpoint); |
148 | 144 | ||
diff --git a/drivers/eisa/eisa-bus.c b/drivers/eisa/eisa-bus.c index 612afeaec3cb..1e8062f6dbfc 100644 --- a/drivers/eisa/eisa-bus.c +++ b/drivers/eisa/eisa-bus.c | |||
@@ -75,9 +75,9 @@ static void __init eisa_name_device(struct eisa_device *edev) | |||
75 | 75 | ||
76 | static char __init *decode_eisa_sig(unsigned long addr) | 76 | static char __init *decode_eisa_sig(unsigned long addr) |
77 | { | 77 | { |
78 | static char sig_str[EISA_SIG_LEN]; | 78 | static char sig_str[EISA_SIG_LEN]; |
79 | u8 sig[4]; | 79 | u8 sig[4]; |
80 | u16 rev; | 80 | u16 rev; |
81 | int i; | 81 | int i; |
82 | 82 | ||
83 | for (i = 0; i < 4; i++) { | 83 | for (i = 0; i < 4; i++) { |
@@ -96,14 +96,14 @@ static char __init *decode_eisa_sig(unsigned long addr) | |||
96 | if (!i && (sig[0] & 0x80)) | 96 | if (!i && (sig[0] & 0x80)) |
97 | return NULL; | 97 | return NULL; |
98 | } | 98 | } |
99 | 99 | ||
100 | sig_str[0] = ((sig[0] >> 2) & 0x1f) + ('A' - 1); | 100 | sig_str[0] = ((sig[0] >> 2) & 0x1f) + ('A' - 1); |
101 | sig_str[1] = (((sig[0] & 3) << 3) | (sig[1] >> 5)) + ('A' - 1); | 101 | sig_str[1] = (((sig[0] & 3) << 3) | (sig[1] >> 5)) + ('A' - 1); |
102 | sig_str[2] = (sig[1] & 0x1f) + ('A' - 1); | 102 | sig_str[2] = (sig[1] & 0x1f) + ('A' - 1); |
103 | rev = (sig[2] << 8) | sig[3]; | 103 | rev = (sig[2] << 8) | sig[3]; |
104 | sprintf(sig_str + 3, "%04X", rev); | 104 | sprintf(sig_str + 3, "%04X", rev); |
105 | 105 | ||
106 | return sig_str; | 106 | return sig_str; |
107 | } | 107 | } |
108 | 108 | ||
109 | static int eisa_bus_match(struct device *dev, struct device_driver *drv) | 109 | static int eisa_bus_match(struct device *dev, struct device_driver *drv) |
@@ -198,7 +198,7 @@ static int __init eisa_init_device(struct eisa_root_device *root, | |||
198 | sig = decode_eisa_sig(sig_addr); | 198 | sig = decode_eisa_sig(sig_addr); |
199 | if (!sig) | 199 | if (!sig) |
200 | return -1; /* No EISA device here */ | 200 | return -1; /* No EISA device here */ |
201 | 201 | ||
202 | memcpy(edev->id.sig, sig, EISA_SIG_LEN); | 202 | memcpy(edev->id.sig, sig, EISA_SIG_LEN); |
203 | edev->slot = slot; | 203 | edev->slot = slot; |
204 | edev->state = inb(SLOT_ADDRESS(root, slot) + EISA_CONFIG_OFFSET) | 204 | edev->state = inb(SLOT_ADDRESS(root, slot) + EISA_CONFIG_OFFSET) |
@@ -222,7 +222,7 @@ static int __init eisa_init_device(struct eisa_root_device *root, | |||
222 | 222 | ||
223 | if (is_forced_dev(enable_dev, enable_dev_count, root, edev)) | 223 | if (is_forced_dev(enable_dev, enable_dev_count, root, edev)) |
224 | edev->state = EISA_CONFIG_ENABLED | EISA_CONFIG_FORCED; | 224 | edev->state = EISA_CONFIG_ENABLED | EISA_CONFIG_FORCED; |
225 | 225 | ||
226 | if (is_forced_dev(disable_dev, disable_dev_count, root, edev)) | 226 | if (is_forced_dev(disable_dev, disable_dev_count, root, edev)) |
227 | edev->state = EISA_CONFIG_FORCED; | 227 | edev->state = EISA_CONFIG_FORCED; |
228 | 228 | ||
@@ -275,7 +275,7 @@ static int __init eisa_request_resources(struct eisa_root_device *root, | |||
275 | edev->res[i].start = edev->res[i].end = 0; | 275 | edev->res[i].start = edev->res[i].end = 0; |
276 | continue; | 276 | continue; |
277 | } | 277 | } |
278 | 278 | ||
279 | if (slot) { | 279 | if (slot) { |
280 | edev->res[i].name = NULL; | 280 | edev->res[i].name = NULL; |
281 | edev->res[i].start = SLOT_ADDRESS(root, slot) | 281 | edev->res[i].start = SLOT_ADDRESS(root, slot) |
@@ -295,7 +295,7 @@ static int __init eisa_request_resources(struct eisa_root_device *root, | |||
295 | } | 295 | } |
296 | 296 | ||
297 | return 0; | 297 | return 0; |
298 | 298 | ||
299 | failed: | 299 | failed: |
300 | while (--i >= 0) | 300 | while (--i >= 0) |
301 | release_resource(&edev->res[i]); | 301 | release_resource(&edev->res[i]); |
@@ -314,7 +314,7 @@ static void __init eisa_release_resources(struct eisa_device *edev) | |||
314 | 314 | ||
315 | static int __init eisa_probe(struct eisa_root_device *root) | 315 | static int __init eisa_probe(struct eisa_root_device *root) |
316 | { | 316 | { |
317 | int i, c; | 317 | int i, c; |
318 | struct eisa_device *edev; | 318 | struct eisa_device *edev; |
319 | char *enabled_str; | 319 | char *enabled_str; |
320 | 320 | ||
@@ -322,16 +322,14 @@ static int __init eisa_probe(struct eisa_root_device *root) | |||
322 | 322 | ||
323 | /* First try to get hold of slot 0. If there is no device | 323 | /* First try to get hold of slot 0. If there is no device |
324 | * here, simply fail, unless root->force_probe is set. */ | 324 | * here, simply fail, unless root->force_probe is set. */ |
325 | 325 | ||
326 | edev = kzalloc(sizeof(*edev), GFP_KERNEL); | 326 | edev = kzalloc(sizeof(*edev), GFP_KERNEL); |
327 | if (!edev) { | 327 | if (!edev) |
328 | dev_err(root->dev, "EISA: Couldn't allocate mainboard slot\n"); | ||
329 | return -ENOMEM; | 328 | return -ENOMEM; |
330 | } | 329 | |
331 | |||
332 | if (eisa_request_resources(root, edev, 0)) { | 330 | if (eisa_request_resources(root, edev, 0)) { |
333 | dev_warn(root->dev, | 331 | dev_warn(root->dev, |
334 | "EISA: Cannot allocate resource for mainboard\n"); | 332 | "EISA: Cannot allocate resource for mainboard\n"); |
335 | kfree(edev); | 333 | kfree(edev); |
336 | if (!root->force_probe) | 334 | if (!root->force_probe) |
337 | return -EBUSY; | 335 | return -EBUSY; |
@@ -350,14 +348,14 @@ static int __init eisa_probe(struct eisa_root_device *root) | |||
350 | 348 | ||
351 | if (eisa_register_device(edev)) { | 349 | if (eisa_register_device(edev)) { |
352 | dev_err(&edev->dev, "EISA: Failed to register %s\n", | 350 | dev_err(&edev->dev, "EISA: Failed to register %s\n", |
353 | edev->id.sig); | 351 | edev->id.sig); |
354 | eisa_release_resources(edev); | 352 | eisa_release_resources(edev); |
355 | kfree(edev); | 353 | kfree(edev); |
356 | } | 354 | } |
357 | 355 | ||
358 | force_probe: | 356 | force_probe: |
359 | 357 | ||
360 | for (c = 0, i = 1; i <= root->slots; i++) { | 358 | for (c = 0, i = 1; i <= root->slots; i++) { |
361 | edev = kzalloc(sizeof(*edev), GFP_KERNEL); | 359 | edev = kzalloc(sizeof(*edev), GFP_KERNEL); |
362 | if (!edev) { | 360 | if (!edev) { |
363 | dev_err(root->dev, "EISA: Out of memory for slot %d\n", | 361 | dev_err(root->dev, "EISA: Out of memory for slot %d\n", |
@@ -367,8 +365,8 @@ static int __init eisa_probe(struct eisa_root_device *root) | |||
367 | 365 | ||
368 | if (eisa_request_resources(root, edev, i)) { | 366 | if (eisa_request_resources(root, edev, i)) { |
369 | dev_warn(root->dev, | 367 | dev_warn(root->dev, |
370 | "Cannot allocate resource for EISA slot %d\n", | 368 | "Cannot allocate resource for EISA slot %d\n", |
371 | i); | 369 | i); |
372 | kfree(edev); | 370 | kfree(edev); |
373 | continue; | 371 | continue; |
374 | } | 372 | } |
@@ -395,11 +393,11 @@ static int __init eisa_probe(struct eisa_root_device *root) | |||
395 | 393 | ||
396 | if (eisa_register_device(edev)) { | 394 | if (eisa_register_device(edev)) { |
397 | dev_err(&edev->dev, "EISA: Failed to register %s\n", | 395 | dev_err(&edev->dev, "EISA: Failed to register %s\n", |
398 | edev->id.sig); | 396 | edev->id.sig); |
399 | eisa_release_resources(edev); | 397 | eisa_release_resources(edev); |
400 | kfree(edev); | 398 | kfree(edev); |
401 | } | 399 | } |
402 | } | 400 | } |
403 | 401 | ||
404 | dev_info(root->dev, "EISA: Detected %d card%s\n", c, c == 1 ? "" : "s"); | 402 | dev_info(root->dev, "EISA: Detected %d card%s\n", c, c == 1 ? "" : "s"); |
405 | return 0; | 403 | return 0; |
@@ -422,7 +420,7 @@ int __init eisa_root_register(struct eisa_root_device *root) | |||
422 | * been already registered. This prevents the virtual root | 420 | * been already registered. This prevents the virtual root |
423 | * device from registering after the real one has, for | 421 | * device from registering after the real one has, for |
424 | * example... */ | 422 | * example... */ |
425 | 423 | ||
426 | root->eisa_root_res.name = eisa_root_res.name; | 424 | root->eisa_root_res.name = eisa_root_res.name; |
427 | root->eisa_root_res.start = root->res->start; | 425 | root->eisa_root_res.start = root->res->start; |
428 | root->eisa_root_res.end = root->res->end; | 426 | root->eisa_root_res.end = root->res->end; |
@@ -431,7 +429,7 @@ int __init eisa_root_register(struct eisa_root_device *root) | |||
431 | err = request_resource(&eisa_root_res, &root->eisa_root_res); | 429 | err = request_resource(&eisa_root_res, &root->eisa_root_res); |
432 | if (err) | 430 | if (err) |
433 | return err; | 431 | return err; |
434 | 432 | ||
435 | root->bus_nr = eisa_bus_count++; | 433 | root->bus_nr = eisa_bus_count++; |
436 | 434 | ||
437 | err = eisa_probe(root); | 435 | err = eisa_probe(root); |
@@ -444,7 +442,7 @@ int __init eisa_root_register(struct eisa_root_device *root) | |||
444 | static int __init eisa_init(void) | 442 | static int __init eisa_init(void) |
445 | { | 443 | { |
446 | int r; | 444 | int r; |
447 | 445 | ||
448 | r = bus_register(&eisa_bus_type); | 446 | r = bus_register(&eisa_bus_type); |
449 | if (r) | 447 | if (r) |
450 | return r; | 448 | return r; |
diff --git a/drivers/eisa/pci_eisa.c b/drivers/eisa/pci_eisa.c index a333bf3517de..b5f367b44413 100644 --- a/drivers/eisa/pci_eisa.c +++ b/drivers/eisa/pci_eisa.c | |||
@@ -50,11 +50,11 @@ static int __init pci_eisa_init(struct pci_dev *pdev) | |||
50 | return -1; | 50 | return -1; |
51 | } | 51 | } |
52 | 52 | ||
53 | pci_eisa_root.dev = &pdev->dev; | 53 | pci_eisa_root.dev = &pdev->dev; |
54 | pci_eisa_root.res = bus_res; | 54 | pci_eisa_root.res = bus_res; |
55 | pci_eisa_root.bus_base_addr = bus_res->start; | 55 | pci_eisa_root.bus_base_addr = bus_res->start; |
56 | pci_eisa_root.slots = EISA_MAX_SLOTS; | 56 | pci_eisa_root.slots = EISA_MAX_SLOTS; |
57 | pci_eisa_root.dma_mask = pdev->dma_mask; | 57 | pci_eisa_root.dma_mask = pdev->dma_mask; |
58 | dev_set_drvdata(pci_eisa_root.dev, &pci_eisa_root); | 58 | dev_set_drvdata(pci_eisa_root.dev, &pci_eisa_root); |
59 | 59 | ||
60 | if (eisa_root_register (&pci_eisa_root)) { | 60 | if (eisa_root_register (&pci_eisa_root)) { |
diff --git a/drivers/eisa/virtual_root.c b/drivers/eisa/virtual_root.c index 535e4f9c83f4..f1221c1d6319 100644 --- a/drivers/eisa/virtual_root.c +++ b/drivers/eisa/virtual_root.c | |||
@@ -35,11 +35,11 @@ static struct platform_device eisa_root_dev = { | |||
35 | }; | 35 | }; |
36 | 36 | ||
37 | static struct eisa_root_device eisa_bus_root = { | 37 | static struct eisa_root_device eisa_bus_root = { |
38 | .dev = &eisa_root_dev.dev, | 38 | .dev = &eisa_root_dev.dev, |
39 | .bus_base_addr = 0, | 39 | .bus_base_addr = 0, |
40 | .res = &ioport_resource, | 40 | .res = &ioport_resource, |
41 | .slots = EISA_MAX_SLOTS, | 41 | .slots = EISA_MAX_SLOTS, |
42 | .dma_mask = 0xffffffff, | 42 | .dma_mask = 0xffffffff, |
43 | }; | 43 | }; |
44 | 44 | ||
45 | static void virtual_eisa_release (struct device *dev) | 45 | static void virtual_eisa_release (struct device *dev) |
@@ -50,13 +50,12 @@ static void virtual_eisa_release (struct device *dev) | |||
50 | static int __init virtual_eisa_root_init (void) | 50 | static int __init virtual_eisa_root_init (void) |
51 | { | 51 | { |
52 | int r; | 52 | int r; |
53 | 53 | ||
54 | if ((r = platform_device_register (&eisa_root_dev))) { | 54 | if ((r = platform_device_register (&eisa_root_dev))) |
55 | return r; | 55 | return r; |
56 | } | ||
57 | 56 | ||
58 | eisa_bus_root.force_probe = force_probe; | 57 | eisa_bus_root.force_probe = force_probe; |
59 | 58 | ||
60 | dev_set_drvdata(&eisa_root_dev.dev, &eisa_bus_root); | 59 | dev_set_drvdata(&eisa_root_dev.dev, &eisa_bus_root); |
61 | 60 | ||
62 | if (eisa_root_register (&eisa_bus_root)) { | 61 | if (eisa_root_register (&eisa_bus_root)) { |
diff --git a/drivers/extcon/extcon-adc-jack.c b/drivers/extcon/extcon-adc-jack.c index 3877d86c746a..18026354c332 100644 --- a/drivers/extcon/extcon-adc-jack.c +++ b/drivers/extcon/extcon-adc-jack.c | |||
@@ -144,7 +144,7 @@ static int adc_jack_probe(struct platform_device *pdev) | |||
144 | return err; | 144 | return err; |
145 | 145 | ||
146 | data->irq = platform_get_irq(pdev, 0); | 146 | data->irq = platform_get_irq(pdev, 0); |
147 | if (!data->irq) { | 147 | if (data->irq < 0) { |
148 | dev_err(&pdev->dev, "platform_get_irq failed\n"); | 148 | dev_err(&pdev->dev, "platform_get_irq failed\n"); |
149 | return -ENODEV; | 149 | return -ENODEV; |
150 | } | 150 | } |
diff --git a/drivers/extcon/extcon-axp288.c b/drivers/extcon/extcon-axp288.c index 1621f2f7f129..0a44d43802fe 100644 --- a/drivers/extcon/extcon-axp288.c +++ b/drivers/extcon/extcon-axp288.c | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * extcon-axp288.c - X-Power AXP288 PMIC extcon cable detection driver | 2 | * extcon-axp288.c - X-Power AXP288 PMIC extcon cable detection driver |
3 | * | 3 | * |
4 | * Copyright (C) 2016-2017 Hans de Goede <hdegoede@redhat.com> | ||
4 | * Copyright (C) 2015 Intel Corporation | 5 | * Copyright (C) 2015 Intel Corporation |
5 | * Author: Ramakrishna Pallala <ramakrishna.pallala@intel.com> | 6 | * Author: Ramakrishna Pallala <ramakrishna.pallala@intel.com> |
6 | * | 7 | * |
@@ -97,9 +98,11 @@ struct axp288_extcon_info { | |||
97 | struct device *dev; | 98 | struct device *dev; |
98 | struct regmap *regmap; | 99 | struct regmap *regmap; |
99 | struct regmap_irq_chip_data *regmap_irqc; | 100 | struct regmap_irq_chip_data *regmap_irqc; |
101 | struct delayed_work det_work; | ||
100 | int irq[EXTCON_IRQ_END]; | 102 | int irq[EXTCON_IRQ_END]; |
101 | struct extcon_dev *edev; | 103 | struct extcon_dev *edev; |
102 | unsigned int previous_cable; | 104 | unsigned int previous_cable; |
105 | bool first_detect_done; | ||
103 | }; | 106 | }; |
104 | 107 | ||
105 | /* Power up/down reason string array */ | 108 | /* Power up/down reason string array */ |
@@ -137,6 +140,25 @@ static void axp288_extcon_log_rsi(struct axp288_extcon_info *info) | |||
137 | regmap_write(info->regmap, AXP288_PS_BOOT_REASON_REG, clear_mask); | 140 | regmap_write(info->regmap, AXP288_PS_BOOT_REASON_REG, clear_mask); |
138 | } | 141 | } |
139 | 142 | ||
143 | static void axp288_chrg_detect_complete(struct axp288_extcon_info *info) | ||
144 | { | ||
145 | /* | ||
146 | * We depend on other drivers to do things like mux the data lines, | ||
147 | * enable/disable vbus based on the id-pin, etc. Sometimes the BIOS has | ||
148 | * not set these things up correctly resulting in the initial charger | ||
149 | * cable type detection giving a wrong result and we end up not charging | ||
150 | * or charging at only 0.5A. | ||
151 | * | ||
152 | * So we schedule a second cable type detection after 2 seconds to | ||
153 | * give the other drivers time to load and do their thing. | ||
154 | */ | ||
155 | if (!info->first_detect_done) { | ||
156 | queue_delayed_work(system_wq, &info->det_work, | ||
157 | msecs_to_jiffies(2000)); | ||
158 | info->first_detect_done = true; | ||
159 | } | ||
160 | } | ||
161 | |||
140 | static int axp288_handle_chrg_det_event(struct axp288_extcon_info *info) | 162 | static int axp288_handle_chrg_det_event(struct axp288_extcon_info *info) |
141 | { | 163 | { |
142 | int ret, stat, cfg, pwr_stat; | 164 | int ret, stat, cfg, pwr_stat; |
@@ -183,8 +205,8 @@ static int axp288_handle_chrg_det_event(struct axp288_extcon_info *info) | |||
183 | cable = EXTCON_CHG_USB_DCP; | 205 | cable = EXTCON_CHG_USB_DCP; |
184 | break; | 206 | break; |
185 | default: | 207 | default: |
186 | dev_warn(info->dev, | 208 | dev_warn(info->dev, "unknown (reserved) bc detect result\n"); |
187 | "disconnect or unknown or ID event\n"); | 209 | cable = EXTCON_CHG_USB_SDP; |
188 | } | 210 | } |
189 | 211 | ||
190 | no_vbus: | 212 | no_vbus: |
@@ -201,6 +223,8 @@ no_vbus: | |||
201 | info->previous_cable = cable; | 223 | info->previous_cable = cable; |
202 | } | 224 | } |
203 | 225 | ||
226 | axp288_chrg_detect_complete(info); | ||
227 | |||
204 | return 0; | 228 | return 0; |
205 | 229 | ||
206 | dev_det_ret: | 230 | dev_det_ret: |
@@ -222,8 +246,11 @@ static irqreturn_t axp288_extcon_isr(int irq, void *data) | |||
222 | return IRQ_HANDLED; | 246 | return IRQ_HANDLED; |
223 | } | 247 | } |
224 | 248 | ||
225 | static void axp288_extcon_enable(struct axp288_extcon_info *info) | 249 | static void axp288_extcon_det_work(struct work_struct *work) |
226 | { | 250 | { |
251 | struct axp288_extcon_info *info = | ||
252 | container_of(work, struct axp288_extcon_info, det_work.work); | ||
253 | |||
227 | regmap_update_bits(info->regmap, AXP288_BC_GLOBAL_REG, | 254 | regmap_update_bits(info->regmap, AXP288_BC_GLOBAL_REG, |
228 | BC_GLOBAL_RUN, 0); | 255 | BC_GLOBAL_RUN, 0); |
229 | /* Enable the charger detection logic */ | 256 | /* Enable the charger detection logic */ |
@@ -245,6 +272,7 @@ static int axp288_extcon_probe(struct platform_device *pdev) | |||
245 | info->regmap = axp20x->regmap; | 272 | info->regmap = axp20x->regmap; |
246 | info->regmap_irqc = axp20x->regmap_irqc; | 273 | info->regmap_irqc = axp20x->regmap_irqc; |
247 | info->previous_cable = EXTCON_NONE; | 274 | info->previous_cable = EXTCON_NONE; |
275 | INIT_DELAYED_WORK(&info->det_work, axp288_extcon_det_work); | ||
248 | 276 | ||
249 | platform_set_drvdata(pdev, info); | 277 | platform_set_drvdata(pdev, info); |
250 | 278 | ||
@@ -290,7 +318,7 @@ static int axp288_extcon_probe(struct platform_device *pdev) | |||
290 | } | 318 | } |
291 | 319 | ||
292 | /* Start charger cable type detection */ | 320 | /* Start charger cable type detection */ |
293 | axp288_extcon_enable(info); | 321 | queue_delayed_work(system_wq, &info->det_work, 0); |
294 | 322 | ||
295 | return 0; | 323 | return 0; |
296 | } | 324 | } |
diff --git a/drivers/extcon/extcon-max77693.c b/drivers/extcon/extcon-max77693.c index 643411066ad9..227651ff9666 100644 --- a/drivers/extcon/extcon-max77693.c +++ b/drivers/extcon/extcon-max77693.c | |||
@@ -266,7 +266,7 @@ static int max77693_muic_set_debounce_time(struct max77693_muic_info *info, | |||
266 | static int max77693_muic_set_path(struct max77693_muic_info *info, | 266 | static int max77693_muic_set_path(struct max77693_muic_info *info, |
267 | u8 val, bool attached) | 267 | u8 val, bool attached) |
268 | { | 268 | { |
269 | int ret = 0; | 269 | int ret; |
270 | unsigned int ctrl1, ctrl2 = 0; | 270 | unsigned int ctrl1, ctrl2 = 0; |
271 | 271 | ||
272 | if (attached) | 272 | if (attached) |
diff --git a/drivers/extcon/extcon-max8997.c b/drivers/extcon/extcon-max8997.c index 8152790d72e1..9f30f4929b72 100644 --- a/drivers/extcon/extcon-max8997.c +++ b/drivers/extcon/extcon-max8997.c | |||
@@ -204,7 +204,7 @@ static int max8997_muic_set_debounce_time(struct max8997_muic_info *info, | |||
204 | static int max8997_muic_set_path(struct max8997_muic_info *info, | 204 | static int max8997_muic_set_path(struct max8997_muic_info *info, |
205 | u8 val, bool attached) | 205 | u8 val, bool attached) |
206 | { | 206 | { |
207 | int ret = 0; | 207 | int ret; |
208 | u8 ctrl1, ctrl2 = 0; | 208 | u8 ctrl1, ctrl2 = 0; |
209 | 209 | ||
210 | if (attached) | 210 | if (attached) |
diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig index ad5448f718b3..f47ef848bcd0 100644 --- a/drivers/fpga/Kconfig +++ b/drivers/fpga/Kconfig | |||
@@ -11,25 +11,30 @@ menuconfig FPGA | |||
11 | 11 | ||
12 | if FPGA | 12 | if FPGA |
13 | 13 | ||
14 | config FPGA_REGION | 14 | config FPGA_MGR_SOCFPGA |
15 | tristate "FPGA Region" | 15 | tristate "Altera SOCFPGA FPGA Manager" |
16 | depends on OF && FPGA_BRIDGE | 16 | depends on ARCH_SOCFPGA || COMPILE_TEST |
17 | help | 17 | help |
18 | FPGA Regions allow loading FPGA images under control of | 18 | FPGA manager driver support for Altera SOCFPGA. |
19 | the Device Tree. | ||
20 | 19 | ||
21 | config FPGA_MGR_ICE40_SPI | 20 | config FPGA_MGR_SOCFPGA_A10 |
22 | tristate "Lattice iCE40 SPI" | 21 | tristate "Altera SoCFPGA Arria10" |
23 | depends on OF && SPI | 22 | depends on ARCH_SOCFPGA || COMPILE_TEST |
23 | select REGMAP_MMIO | ||
24 | help | 24 | help |
25 | FPGA manager driver support for Lattice iCE40 FPGAs over SPI. | 25 | FPGA manager driver support for Altera Arria10 SoCFPGA. |
26 | 26 | ||
27 | config FPGA_MGR_ALTERA_CVP | 27 | config ALTERA_PR_IP_CORE |
28 | tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager" | 28 | tristate "Altera Partial Reconfiguration IP Core" |
29 | depends on PCI | 29 | help |
30 | Core driver support for Altera Partial Reconfiguration IP component | ||
31 | |||
32 | config ALTERA_PR_IP_CORE_PLAT | ||
33 | tristate "Platform support of Altera Partial Reconfiguration IP Core" | ||
34 | depends on ALTERA_PR_IP_CORE && OF && HAS_IOMEM | ||
30 | help | 35 | help |
31 | FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V | 36 | Platform driver support for Altera Partial Reconfiguration IP |
32 | and Arria 10 Altera FPGAs using the CvP interface over PCIe. | 37 | component |
33 | 38 | ||
34 | config FPGA_MGR_ALTERA_PS_SPI | 39 | config FPGA_MGR_ALTERA_PS_SPI |
35 | tristate "Altera FPGA Passive Serial over SPI" | 40 | tristate "Altera FPGA Passive Serial over SPI" |
@@ -38,25 +43,19 @@ config FPGA_MGR_ALTERA_PS_SPI | |||
38 | FPGA manager driver support for Altera Arria/Cyclone/Stratix | 43 | FPGA manager driver support for Altera Arria/Cyclone/Stratix |
39 | using the passive serial interface over SPI. | 44 | using the passive serial interface over SPI. |
40 | 45 | ||
41 | config FPGA_MGR_SOCFPGA | 46 | config FPGA_MGR_ALTERA_CVP |
42 | tristate "Altera SOCFPGA FPGA Manager" | 47 | tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager" |
43 | depends on ARCH_SOCFPGA || COMPILE_TEST | 48 | depends on PCI |
44 | help | ||
45 | FPGA manager driver support for Altera SOCFPGA. | ||
46 | |||
47 | config FPGA_MGR_SOCFPGA_A10 | ||
48 | tristate "Altera SoCFPGA Arria10" | ||
49 | depends on ARCH_SOCFPGA || COMPILE_TEST | ||
50 | select REGMAP_MMIO | ||
51 | help | 49 | help |
52 | FPGA manager driver support for Altera Arria10 SoCFPGA. | 50 | FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V |
51 | and Arria 10 Altera FPGAs using the CvP interface over PCIe. | ||
53 | 52 | ||
54 | config FPGA_MGR_TS73XX | 53 | config FPGA_MGR_ZYNQ_FPGA |
55 | tristate "Technologic Systems TS-73xx SBC FPGA Manager" | 54 | tristate "Xilinx Zynq FPGA" |
56 | depends on ARCH_EP93XX && MACH_TS72XX | 55 | depends on ARCH_ZYNQ || COMPILE_TEST |
56 | depends on HAS_DMA | ||
57 | help | 57 | help |
58 | FPGA manager driver support for the Altera Cyclone II FPGA | 58 | FPGA manager driver support for Xilinx Zynq FPGAs. |
59 | present on the TS-73xx SBC boards. | ||
60 | 59 | ||
61 | config FPGA_MGR_XILINX_SPI | 60 | config FPGA_MGR_XILINX_SPI |
62 | tristate "Xilinx Configuration over Slave Serial (SPI)" | 61 | tristate "Xilinx Configuration over Slave Serial (SPI)" |
@@ -65,16 +64,21 @@ config FPGA_MGR_XILINX_SPI | |||
65 | FPGA manager driver support for Xilinx FPGA configuration | 64 | FPGA manager driver support for Xilinx FPGA configuration |
66 | over slave serial interface. | 65 | over slave serial interface. |
67 | 66 | ||
68 | config FPGA_MGR_ZYNQ_FPGA | 67 | config FPGA_MGR_ICE40_SPI |
69 | tristate "Xilinx Zynq FPGA" | 68 | tristate "Lattice iCE40 SPI" |
70 | depends on ARCH_ZYNQ || COMPILE_TEST | 69 | depends on OF && SPI |
71 | depends on HAS_DMA | ||
72 | help | 70 | help |
73 | FPGA manager driver support for Xilinx Zynq FPGAs. | 71 | FPGA manager driver support for Lattice iCE40 FPGAs over SPI. |
72 | |||
73 | config FPGA_MGR_TS73XX | ||
74 | tristate "Technologic Systems TS-73xx SBC FPGA Manager" | ||
75 | depends on ARCH_EP93XX && MACH_TS72XX | ||
76 | help | ||
77 | FPGA manager driver support for the Altera Cyclone II FPGA | ||
78 | present on the TS-73xx SBC boards. | ||
74 | 79 | ||
75 | config FPGA_BRIDGE | 80 | config FPGA_BRIDGE |
76 | tristate "FPGA Bridge Framework" | 81 | tristate "FPGA Bridge Framework" |
77 | depends on OF | ||
78 | help | 82 | help |
79 | Say Y here if you want to support bridges connected between host | 83 | Say Y here if you want to support bridges connected between host |
80 | processors and FPGAs or between FPGAs. | 84 | processors and FPGAs or between FPGAs. |
@@ -95,18 +99,6 @@ config ALTERA_FREEZE_BRIDGE | |||
95 | isolate one region of the FPGA from the busses while that | 99 | isolate one region of the FPGA from the busses while that |
96 | region is being reprogrammed. | 100 | region is being reprogrammed. |
97 | 101 | ||
98 | config ALTERA_PR_IP_CORE | ||
99 | tristate "Altera Partial Reconfiguration IP Core" | ||
100 | help | ||
101 | Core driver support for Altera Partial Reconfiguration IP component | ||
102 | |||
103 | config ALTERA_PR_IP_CORE_PLAT | ||
104 | tristate "Platform support of Altera Partial Reconfiguration IP Core" | ||
105 | depends on ALTERA_PR_IP_CORE && OF && HAS_IOMEM | ||
106 | help | ||
107 | Platform driver support for Altera Partial Reconfiguration IP | ||
108 | component | ||
109 | |||
110 | config XILINX_PR_DECOUPLER | 102 | config XILINX_PR_DECOUPLER |
111 | tristate "Xilinx LogiCORE PR Decoupler" | 103 | tristate "Xilinx LogiCORE PR Decoupler" |
112 | depends on FPGA_BRIDGE | 104 | depends on FPGA_BRIDGE |
@@ -117,4 +109,19 @@ config XILINX_PR_DECOUPLER | |||
117 | region of the FPGA from the busses while that region is | 109 | region of the FPGA from the busses while that region is |
118 | being reprogrammed during partial reconfig. | 110 | being reprogrammed during partial reconfig. |
119 | 111 | ||
112 | config FPGA_REGION | ||
113 | tristate "FPGA Region" | ||
114 | depends on FPGA_BRIDGE | ||
115 | help | ||
116 | FPGA Region common code. A FPGA Region controls a FPGA Manager | ||
117 | and the FPGA Bridges associated with either a reconfigurable | ||
118 | region of an FPGA or a whole FPGA. | ||
119 | |||
120 | config OF_FPGA_REGION | ||
121 | tristate "FPGA Region Device Tree Overlay Support" | ||
122 | depends on OF && FPGA_REGION | ||
123 | help | ||
124 | Support for loading FPGA images by applying a Device Tree | ||
125 | overlay. | ||
126 | |||
120 | endif # FPGA | 127 | endif # FPGA |
diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile index f98dcf1d89e1..3cb276a0f88d 100644 --- a/drivers/fpga/Makefile +++ b/drivers/fpga/Makefile | |||
@@ -26,3 +26,4 @@ obj-$(CONFIG_XILINX_PR_DECOUPLER) += xilinx-pr-decoupler.o | |||
26 | 26 | ||
27 | # High Level Interfaces | 27 | # High Level Interfaces |
28 | obj-$(CONFIG_FPGA_REGION) += fpga-region.o | 28 | obj-$(CONFIG_FPGA_REGION) += fpga-region.o |
29 | obj-$(CONFIG_OF_FPGA_REGION) += of-fpga-region.o | ||
diff --git a/drivers/fpga/fpga-bridge.c b/drivers/fpga/fpga-bridge.c index 9651aa56244a..31bd2c59c305 100644 --- a/drivers/fpga/fpga-bridge.c +++ b/drivers/fpga/fpga-bridge.c | |||
@@ -2,6 +2,7 @@ | |||
2 | * FPGA Bridge Framework Driver | 2 | * FPGA Bridge Framework Driver |
3 | * | 3 | * |
4 | * Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved. | 4 | * Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved. |
5 | * Copyright (C) 2017 Intel Corporation | ||
5 | * | 6 | * |
6 | * This program is free software; you can redistribute it and/or modify it | 7 | * This program is free software; you can redistribute it and/or modify it |
7 | * under the terms and conditions of the GNU General Public License, | 8 | * under the terms and conditions of the GNU General Public License, |
@@ -70,32 +71,13 @@ int fpga_bridge_disable(struct fpga_bridge *bridge) | |||
70 | } | 71 | } |
71 | EXPORT_SYMBOL_GPL(fpga_bridge_disable); | 72 | EXPORT_SYMBOL_GPL(fpga_bridge_disable); |
72 | 73 | ||
73 | /** | 74 | static struct fpga_bridge *__fpga_bridge_get(struct device *dev, |
74 | * of_fpga_bridge_get - get an exclusive reference to a fpga bridge | 75 | struct fpga_image_info *info) |
75 | * | ||
76 | * @np: node pointer of a FPGA bridge | ||
77 | * @info: fpga image specific information | ||
78 | * | ||
79 | * Return fpga_bridge struct if successful. | ||
80 | * Return -EBUSY if someone already has a reference to the bridge. | ||
81 | * Return -ENODEV if @np is not a FPGA Bridge. | ||
82 | */ | ||
83 | struct fpga_bridge *of_fpga_bridge_get(struct device_node *np, | ||
84 | struct fpga_image_info *info) | ||
85 | |||
86 | { | 76 | { |
87 | struct device *dev; | ||
88 | struct fpga_bridge *bridge; | 77 | struct fpga_bridge *bridge; |
89 | int ret = -ENODEV; | 78 | int ret = -ENODEV; |
90 | 79 | ||
91 | dev = class_find_device(fpga_bridge_class, NULL, np, | ||
92 | fpga_bridge_of_node_match); | ||
93 | if (!dev) | ||
94 | goto err_dev; | ||
95 | |||
96 | bridge = to_fpga_bridge(dev); | 80 | bridge = to_fpga_bridge(dev); |
97 | if (!bridge) | ||
98 | goto err_dev; | ||
99 | 81 | ||
100 | bridge->info = info; | 82 | bridge->info = info; |
101 | 83 | ||
@@ -117,8 +99,58 @@ err_dev: | |||
117 | put_device(dev); | 99 | put_device(dev); |
118 | return ERR_PTR(ret); | 100 | return ERR_PTR(ret); |
119 | } | 101 | } |
102 | |||
103 | /** | ||
104 | * of_fpga_bridge_get - get an exclusive reference to a fpga bridge | ||
105 | * | ||
106 | * @np: node pointer of a FPGA bridge | ||
107 | * @info: fpga image specific information | ||
108 | * | ||
109 | * Return fpga_bridge struct if successful. | ||
110 | * Return -EBUSY if someone already has a reference to the bridge. | ||
111 | * Return -ENODEV if @np is not a FPGA Bridge. | ||
112 | */ | ||
113 | struct fpga_bridge *of_fpga_bridge_get(struct device_node *np, | ||
114 | struct fpga_image_info *info) | ||
115 | { | ||
116 | struct device *dev; | ||
117 | |||
118 | dev = class_find_device(fpga_bridge_class, NULL, np, | ||
119 | fpga_bridge_of_node_match); | ||
120 | if (!dev) | ||
121 | return ERR_PTR(-ENODEV); | ||
122 | |||
123 | return __fpga_bridge_get(dev, info); | ||
124 | } | ||
120 | EXPORT_SYMBOL_GPL(of_fpga_bridge_get); | 125 | EXPORT_SYMBOL_GPL(of_fpga_bridge_get); |
121 | 126 | ||
127 | static int fpga_bridge_dev_match(struct device *dev, const void *data) | ||
128 | { | ||
129 | return dev->parent == data; | ||
130 | } | ||
131 | |||
132 | /** | ||
133 | * fpga_bridge_get - get an exclusive reference to a fpga bridge | ||
134 | * @dev: parent device that fpga bridge was registered with | ||
135 | * | ||
136 | * Given a device, get an exclusive reference to a fpga bridge. | ||
137 | * | ||
138 | * Return: fpga bridge struct or IS_ERR() condition containing error code. | ||
139 | */ | ||
140 | struct fpga_bridge *fpga_bridge_get(struct device *dev, | ||
141 | struct fpga_image_info *info) | ||
142 | { | ||
143 | struct device *bridge_dev; | ||
144 | |||
145 | bridge_dev = class_find_device(fpga_bridge_class, NULL, dev, | ||
146 | fpga_bridge_dev_match); | ||
147 | if (!bridge_dev) | ||
148 | return ERR_PTR(-ENODEV); | ||
149 | |||
150 | return __fpga_bridge_get(bridge_dev, info); | ||
151 | } | ||
152 | EXPORT_SYMBOL_GPL(fpga_bridge_get); | ||
153 | |||
122 | /** | 154 | /** |
123 | * fpga_bridge_put - release a reference to a bridge | 155 | * fpga_bridge_put - release a reference to a bridge |
124 | * | 156 | * |
@@ -206,7 +238,7 @@ void fpga_bridges_put(struct list_head *bridge_list) | |||
206 | EXPORT_SYMBOL_GPL(fpga_bridges_put); | 238 | EXPORT_SYMBOL_GPL(fpga_bridges_put); |
207 | 239 | ||
208 | /** | 240 | /** |
209 | * fpga_bridges_get_to_list - get a bridge, add it to a list | 241 | * of_fpga_bridge_get_to_list - get a bridge, add it to a list |
210 | * | 242 | * |
211 | * @np: node pointer of a FPGA bridge | 243 | * @np: node pointer of a FPGA bridge |
212 | * @info: fpga image specific information | 244 | * @info: fpga image specific information |
@@ -216,14 +248,44 @@ EXPORT_SYMBOL_GPL(fpga_bridges_put); | |||
216 | * | 248 | * |
217 | * Return 0 for success, error code from of_fpga_bridge_get() otherwise. | 249 | * Return 0 for success, error code from of_fpga_bridge_get() otherwise. |
218 | */ | 250 | */ |
219 | int fpga_bridge_get_to_list(struct device_node *np, | 251 | int of_fpga_bridge_get_to_list(struct device_node *np, |
252 | struct fpga_image_info *info, | ||
253 | struct list_head *bridge_list) | ||
254 | { | ||
255 | struct fpga_bridge *bridge; | ||
256 | unsigned long flags; | ||
257 | |||
258 | bridge = of_fpga_bridge_get(np, info); | ||
259 | if (IS_ERR(bridge)) | ||
260 | return PTR_ERR(bridge); | ||
261 | |||
262 | spin_lock_irqsave(&bridge_list_lock, flags); | ||
263 | list_add(&bridge->node, bridge_list); | ||
264 | spin_unlock_irqrestore(&bridge_list_lock, flags); | ||
265 | |||
266 | return 0; | ||
267 | } | ||
268 | EXPORT_SYMBOL_GPL(of_fpga_bridge_get_to_list); | ||
269 | |||
270 | /** | ||
271 | * fpga_bridge_get_to_list - given device, get a bridge, add it to a list | ||
272 | * | ||
273 | * @dev: FPGA bridge device | ||
274 | * @info: fpga image specific information | ||
275 | * @bridge_list: list of FPGA bridges | ||
276 | * | ||
277 | * Get an exclusive reference to the bridge and add it to the list. | ||
278 | * | ||
279 | * Return 0 for success, error code from fpga_bridge_get() othewise. | ||
280 | */ | ||
281 | int fpga_bridge_get_to_list(struct device *dev, | ||
220 | struct fpga_image_info *info, | 282 | struct fpga_image_info *info, |
221 | struct list_head *bridge_list) | 283 | struct list_head *bridge_list) |
222 | { | 284 | { |
223 | struct fpga_bridge *bridge; | 285 | struct fpga_bridge *bridge; |
224 | unsigned long flags; | 286 | unsigned long flags; |
225 | 287 | ||
226 | bridge = of_fpga_bridge_get(np, info); | 288 | bridge = fpga_bridge_get(dev, info); |
227 | if (IS_ERR(bridge)) | 289 | if (IS_ERR(bridge)) |
228 | return PTR_ERR(bridge); | 290 | return PTR_ERR(bridge); |
229 | 291 | ||
@@ -303,6 +365,7 @@ int fpga_bridge_register(struct device *dev, const char *name, | |||
303 | bridge->priv = priv; | 365 | bridge->priv = priv; |
304 | 366 | ||
305 | device_initialize(&bridge->dev); | 367 | device_initialize(&bridge->dev); |
368 | bridge->dev.groups = br_ops->groups; | ||
306 | bridge->dev.class = fpga_bridge_class; | 369 | bridge->dev.class = fpga_bridge_class; |
307 | bridge->dev.parent = dev; | 370 | bridge->dev.parent = dev; |
308 | bridge->dev.of_node = dev->of_node; | 371 | bridge->dev.of_node = dev->of_node; |
@@ -381,7 +444,7 @@ static void __exit fpga_bridge_dev_exit(void) | |||
381 | } | 444 | } |
382 | 445 | ||
383 | MODULE_DESCRIPTION("FPGA Bridge Driver"); | 446 | MODULE_DESCRIPTION("FPGA Bridge Driver"); |
384 | MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>"); | 447 | MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); |
385 | MODULE_LICENSE("GPL v2"); | 448 | MODULE_LICENSE("GPL v2"); |
386 | 449 | ||
387 | subsys_initcall(fpga_bridge_dev_init); | 450 | subsys_initcall(fpga_bridge_dev_init); |
diff --git a/drivers/fpga/fpga-mgr.c b/drivers/fpga/fpga-mgr.c index 188ffefa3cc3..9939d2cbc9a6 100644 --- a/drivers/fpga/fpga-mgr.c +++ b/drivers/fpga/fpga-mgr.c | |||
@@ -2,6 +2,7 @@ | |||
2 | * FPGA Manager Core | 2 | * FPGA Manager Core |
3 | * | 3 | * |
4 | * Copyright (C) 2013-2015 Altera Corporation | 4 | * Copyright (C) 2013-2015 Altera Corporation |
5 | * Copyright (C) 2017 Intel Corporation | ||
5 | * | 6 | * |
6 | * With code from the mailing list: | 7 | * With code from the mailing list: |
7 | * Copyright (C) 2013 Xilinx, Inc. | 8 | * Copyright (C) 2013 Xilinx, Inc. |
@@ -31,6 +32,40 @@ | |||
31 | static DEFINE_IDA(fpga_mgr_ida); | 32 | static DEFINE_IDA(fpga_mgr_ida); |
32 | static struct class *fpga_mgr_class; | 33 | static struct class *fpga_mgr_class; |
33 | 34 | ||
35 | struct fpga_image_info *fpga_image_info_alloc(struct device *dev) | ||
36 | { | ||
37 | struct fpga_image_info *info; | ||
38 | |||
39 | get_device(dev); | ||
40 | |||
41 | info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); | ||
42 | if (!info) { | ||
43 | put_device(dev); | ||
44 | return NULL; | ||
45 | } | ||
46 | |||
47 | info->dev = dev; | ||
48 | |||
49 | return info; | ||
50 | } | ||
51 | EXPORT_SYMBOL_GPL(fpga_image_info_alloc); | ||
52 | |||
53 | void fpga_image_info_free(struct fpga_image_info *info) | ||
54 | { | ||
55 | struct device *dev; | ||
56 | |||
57 | if (!info) | ||
58 | return; | ||
59 | |||
60 | dev = info->dev; | ||
61 | if (info->firmware_name) | ||
62 | devm_kfree(dev, info->firmware_name); | ||
63 | |||
64 | devm_kfree(dev, info); | ||
65 | put_device(dev); | ||
66 | } | ||
67 | EXPORT_SYMBOL_GPL(fpga_image_info_free); | ||
68 | |||
34 | /* | 69 | /* |
35 | * Call the low level driver's write_init function. This will do the | 70 | * Call the low level driver's write_init function. This will do the |
36 | * device-specific things to get the FPGA into the state where it is ready to | 71 | * device-specific things to get the FPGA into the state where it is ready to |
@@ -137,8 +172,9 @@ static int fpga_mgr_write_complete(struct fpga_manager *mgr, | |||
137 | * | 172 | * |
138 | * Return: 0 on success, negative error code otherwise. | 173 | * Return: 0 on success, negative error code otherwise. |
139 | */ | 174 | */ |
140 | int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, struct fpga_image_info *info, | 175 | static int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, |
141 | struct sg_table *sgt) | 176 | struct fpga_image_info *info, |
177 | struct sg_table *sgt) | ||
142 | { | 178 | { |
143 | int ret; | 179 | int ret; |
144 | 180 | ||
@@ -170,7 +206,6 @@ int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, struct fpga_image_info *info, | |||
170 | 206 | ||
171 | return fpga_mgr_write_complete(mgr, info); | 207 | return fpga_mgr_write_complete(mgr, info); |
172 | } | 208 | } |
173 | EXPORT_SYMBOL_GPL(fpga_mgr_buf_load_sg); | ||
174 | 209 | ||
175 | static int fpga_mgr_buf_load_mapped(struct fpga_manager *mgr, | 210 | static int fpga_mgr_buf_load_mapped(struct fpga_manager *mgr, |
176 | struct fpga_image_info *info, | 211 | struct fpga_image_info *info, |
@@ -210,8 +245,9 @@ static int fpga_mgr_buf_load_mapped(struct fpga_manager *mgr, | |||
210 | * | 245 | * |
211 | * Return: 0 on success, negative error code otherwise. | 246 | * Return: 0 on success, negative error code otherwise. |
212 | */ | 247 | */ |
213 | int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info, | 248 | static int fpga_mgr_buf_load(struct fpga_manager *mgr, |
214 | const char *buf, size_t count) | 249 | struct fpga_image_info *info, |
250 | const char *buf, size_t count) | ||
215 | { | 251 | { |
216 | struct page **pages; | 252 | struct page **pages; |
217 | struct sg_table sgt; | 253 | struct sg_table sgt; |
@@ -266,7 +302,6 @@ int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info, | |||
266 | 302 | ||
267 | return rc; | 303 | return rc; |
268 | } | 304 | } |
269 | EXPORT_SYMBOL_GPL(fpga_mgr_buf_load); | ||
270 | 305 | ||
271 | /** | 306 | /** |
272 | * fpga_mgr_firmware_load - request firmware and load to fpga | 307 | * fpga_mgr_firmware_load - request firmware and load to fpga |
@@ -282,9 +317,9 @@ EXPORT_SYMBOL_GPL(fpga_mgr_buf_load); | |||
282 | * | 317 | * |
283 | * Return: 0 on success, negative error code otherwise. | 318 | * Return: 0 on success, negative error code otherwise. |
284 | */ | 319 | */ |
285 | int fpga_mgr_firmware_load(struct fpga_manager *mgr, | 320 | static int fpga_mgr_firmware_load(struct fpga_manager *mgr, |
286 | struct fpga_image_info *info, | 321 | struct fpga_image_info *info, |
287 | const char *image_name) | 322 | const char *image_name) |
288 | { | 323 | { |
289 | struct device *dev = &mgr->dev; | 324 | struct device *dev = &mgr->dev; |
290 | const struct firmware *fw; | 325 | const struct firmware *fw; |
@@ -307,7 +342,18 @@ int fpga_mgr_firmware_load(struct fpga_manager *mgr, | |||
307 | 342 | ||
308 | return ret; | 343 | return ret; |
309 | } | 344 | } |
310 | EXPORT_SYMBOL_GPL(fpga_mgr_firmware_load); | 345 | |
346 | int fpga_mgr_load(struct fpga_manager *mgr, struct fpga_image_info *info) | ||
347 | { | ||
348 | if (info->sgt) | ||
349 | return fpga_mgr_buf_load_sg(mgr, info, info->sgt); | ||
350 | if (info->buf && info->count) | ||
351 | return fpga_mgr_buf_load(mgr, info, info->buf, info->count); | ||
352 | if (info->firmware_name) | ||
353 | return fpga_mgr_firmware_load(mgr, info, info->firmware_name); | ||
354 | return -EINVAL; | ||
355 | } | ||
356 | EXPORT_SYMBOL_GPL(fpga_mgr_load); | ||
311 | 357 | ||
312 | static const char * const state_str[] = { | 358 | static const char * const state_str[] = { |
313 | [FPGA_MGR_STATE_UNKNOWN] = "unknown", | 359 | [FPGA_MGR_STATE_UNKNOWN] = "unknown", |
@@ -364,28 +410,17 @@ ATTRIBUTE_GROUPS(fpga_mgr); | |||
364 | static struct fpga_manager *__fpga_mgr_get(struct device *dev) | 410 | static struct fpga_manager *__fpga_mgr_get(struct device *dev) |
365 | { | 411 | { |
366 | struct fpga_manager *mgr; | 412 | struct fpga_manager *mgr; |
367 | int ret = -ENODEV; | ||
368 | 413 | ||
369 | mgr = to_fpga_manager(dev); | 414 | mgr = to_fpga_manager(dev); |
370 | if (!mgr) | ||
371 | goto err_dev; | ||
372 | |||
373 | /* Get exclusive use of fpga manager */ | ||
374 | if (!mutex_trylock(&mgr->ref_mutex)) { | ||
375 | ret = -EBUSY; | ||
376 | goto err_dev; | ||
377 | } | ||
378 | 415 | ||
379 | if (!try_module_get(dev->parent->driver->owner)) | 416 | if (!try_module_get(dev->parent->driver->owner)) |
380 | goto err_ll_mod; | 417 | goto err_dev; |
381 | 418 | ||
382 | return mgr; | 419 | return mgr; |
383 | 420 | ||
384 | err_ll_mod: | ||
385 | mutex_unlock(&mgr->ref_mutex); | ||
386 | err_dev: | 421 | err_dev: |
387 | put_device(dev); | 422 | put_device(dev); |
388 | return ERR_PTR(ret); | 423 | return ERR_PTR(-ENODEV); |
389 | } | 424 | } |
390 | 425 | ||
391 | static int fpga_mgr_dev_match(struct device *dev, const void *data) | 426 | static int fpga_mgr_dev_match(struct device *dev, const void *data) |
@@ -394,10 +429,10 @@ static int fpga_mgr_dev_match(struct device *dev, const void *data) | |||
394 | } | 429 | } |
395 | 430 | ||
396 | /** | 431 | /** |
397 | * fpga_mgr_get - get an exclusive reference to a fpga mgr | 432 | * fpga_mgr_get - get a reference to a fpga mgr |
398 | * @dev: parent device that fpga mgr was registered with | 433 | * @dev: parent device that fpga mgr was registered with |
399 | * | 434 | * |
400 | * Given a device, get an exclusive reference to a fpga mgr. | 435 | * Given a device, get a reference to a fpga mgr. |
401 | * | 436 | * |
402 | * Return: fpga manager struct or IS_ERR() condition containing error code. | 437 | * Return: fpga manager struct or IS_ERR() condition containing error code. |
403 | */ | 438 | */ |
@@ -418,10 +453,10 @@ static int fpga_mgr_of_node_match(struct device *dev, const void *data) | |||
418 | } | 453 | } |
419 | 454 | ||
420 | /** | 455 | /** |
421 | * of_fpga_mgr_get - get an exclusive reference to a fpga mgr | 456 | * of_fpga_mgr_get - get a reference to a fpga mgr |
422 | * @node: device node | 457 | * @node: device node |
423 | * | 458 | * |
424 | * Given a device node, get an exclusive reference to a fpga mgr. | 459 | * Given a device node, get a reference to a fpga mgr. |
425 | * | 460 | * |
426 | * Return: fpga manager struct or IS_ERR() condition containing error code. | 461 | * Return: fpga manager struct or IS_ERR() condition containing error code. |
427 | */ | 462 | */ |
@@ -445,12 +480,41 @@ EXPORT_SYMBOL_GPL(of_fpga_mgr_get); | |||
445 | void fpga_mgr_put(struct fpga_manager *mgr) | 480 | void fpga_mgr_put(struct fpga_manager *mgr) |
446 | { | 481 | { |
447 | module_put(mgr->dev.parent->driver->owner); | 482 | module_put(mgr->dev.parent->driver->owner); |
448 | mutex_unlock(&mgr->ref_mutex); | ||
449 | put_device(&mgr->dev); | 483 | put_device(&mgr->dev); |
450 | } | 484 | } |
451 | EXPORT_SYMBOL_GPL(fpga_mgr_put); | 485 | EXPORT_SYMBOL_GPL(fpga_mgr_put); |
452 | 486 | ||
453 | /** | 487 | /** |
488 | * fpga_mgr_lock - Lock FPGA manager for exclusive use | ||
489 | * @mgr: fpga manager | ||
490 | * | ||
491 | * Given a pointer to FPGA Manager (from fpga_mgr_get() or | ||
492 | * of_fpga_mgr_get()), attempt to get the mutex. | ||
493 | * | ||
494 | * Return: 0 for success or -EBUSY | ||
495 | */ | ||
496 | int fpga_mgr_lock(struct fpga_manager *mgr) | ||
497 | { | ||
498 | if (!mutex_trylock(&mgr->ref_mutex)) { | ||
499 | dev_err(&mgr->dev, "FPGA manager is in use.\n"); | ||
500 | return -EBUSY; | ||
501 | } | ||
502 | |||
503 | return 0; | ||
504 | } | ||
505 | EXPORT_SYMBOL_GPL(fpga_mgr_lock); | ||
506 | |||
507 | /** | ||
508 | * fpga_mgr_unlock - Unlock FPGA manager | ||
509 | * @mgr: fpga manager | ||
510 | */ | ||
511 | void fpga_mgr_unlock(struct fpga_manager *mgr) | ||
512 | { | ||
513 | mutex_unlock(&mgr->ref_mutex); | ||
514 | } | ||
515 | EXPORT_SYMBOL_GPL(fpga_mgr_unlock); | ||
516 | |||
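[Editor's note] The new fpga_mgr_lock()/fpga_mgr_unlock() pair splits exclusive use out of get/put: mutex_trylock() either takes ownership immediately or fails with -EBUSY, never blocking. A minimal user-space sketch of the same pattern, with pthread_mutex_trylock() standing in for mutex_trylock() and an invented `fake_mgr` struct (not the kernel API):

```c
#include <errno.h>
#include <pthread.h>

/* Hypothetical user-space analogue of struct fpga_manager's ref_mutex. */
struct fake_mgr {
	pthread_mutex_t ref_mutex;
};

/* Mirrors fpga_mgr_lock(): non-blocking attempt at exclusive use. */
static int fake_mgr_lock(struct fake_mgr *mgr)
{
	if (pthread_mutex_trylock(&mgr->ref_mutex) != 0)
		return -EBUSY;	/* manager already in exclusive use */
	return 0;
}

/* Mirrors fpga_mgr_unlock(): release exclusive use. */
static void fake_mgr_unlock(struct fake_mgr *mgr)
{
	pthread_mutex_unlock(&mgr->ref_mutex);
}
```

A second lock attempt while the manager is held fails immediately rather than deadlocking, which is what lets callers such as fpga_region_program_fpga() report "FPGA manager is busy" and bail out.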
517 | /** | ||
454 | * fpga_mgr_register - register a low level fpga manager driver | 518 | * fpga_mgr_register - register a low level fpga manager driver |
455 | * @dev: fpga manager device from pdev | 519 | * @dev: fpga manager device from pdev |
456 | * @name: fpga manager name | 520 | * @name: fpga manager name |
@@ -503,6 +567,7 @@ int fpga_mgr_register(struct device *dev, const char *name, | |||
503 | 567 | ||
504 | device_initialize(&mgr->dev); | 568 | device_initialize(&mgr->dev); |
505 | mgr->dev.class = fpga_mgr_class; | 569 | mgr->dev.class = fpga_mgr_class; |
570 | mgr->dev.groups = mops->groups; | ||
506 | mgr->dev.parent = dev; | 571 | mgr->dev.parent = dev; |
507 | mgr->dev.of_node = dev->of_node; | 572 | mgr->dev.of_node = dev->of_node; |
508 | mgr->dev.id = id; | 573 | mgr->dev.id = id; |
@@ -578,7 +643,7 @@ static void __exit fpga_mgr_class_exit(void) | |||
578 | ida_destroy(&fpga_mgr_ida); | 643 | ida_destroy(&fpga_mgr_ida); |
579 | } | 644 | } |
580 | 645 | ||
581 | MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>"); | 646 | MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); |
582 | MODULE_DESCRIPTION("FPGA manager framework"); | 647 | MODULE_DESCRIPTION("FPGA manager framework"); |
583 | MODULE_LICENSE("GPL v2"); | 648 | MODULE_LICENSE("GPL v2"); |
584 | 649 | ||
diff --git a/drivers/fpga/fpga-region.c b/drivers/fpga/fpga-region.c
index d9ab7c75b14f..edab2a2e03ef 100644
--- a/drivers/fpga/fpga-region.c
+++ b/drivers/fpga/fpga-region.c
@@ -2,6 +2,7 @@ | |||
2 | * FPGA Region - Device Tree support for FPGA programming under Linux | 2 | * FPGA Region - Device Tree support for FPGA programming under Linux |
3 | * | 3 | * |
4 | * Copyright (C) 2013-2016 Altera Corporation | 4 | * Copyright (C) 2013-2016 Altera Corporation |
5 | * Copyright (C) 2017 Intel Corporation | ||
5 | * | 6 | * |
6 | * This program is free software; you can redistribute it and/or modify it | 7 | * This program is free software; you can redistribute it and/or modify it |
7 | * under the terms and conditions of the GNU General Public License, | 8 | * under the terms and conditions of the GNU General Public License, |
@@ -18,61 +19,30 @@ | |||
18 | 19 | ||
19 | #include <linux/fpga/fpga-bridge.h> | 20 | #include <linux/fpga/fpga-bridge.h> |
20 | #include <linux/fpga/fpga-mgr.h> | 21 | #include <linux/fpga/fpga-mgr.h> |
22 | #include <linux/fpga/fpga-region.h> | ||
21 | #include <linux/idr.h> | 23 | #include <linux/idr.h> |
22 | #include <linux/kernel.h> | 24 | #include <linux/kernel.h> |
23 | #include <linux/list.h> | 25 | #include <linux/list.h> |
24 | #include <linux/module.h> | 26 | #include <linux/module.h> |
25 | #include <linux/of_platform.h> | ||
26 | #include <linux/slab.h> | 27 | #include <linux/slab.h> |
27 | #include <linux/spinlock.h> | 28 | #include <linux/spinlock.h> |
28 | 29 | ||
29 | /** | ||
30 | * struct fpga_region - FPGA Region structure | ||
31 | * @dev: FPGA Region device | ||
32 | * @mutex: enforces exclusive reference to region | ||
33 | * @bridge_list: list of FPGA bridges specified in region | ||
34 | * @info: fpga image specific information | ||
35 | */ | ||
36 | struct fpga_region { | ||
37 | struct device dev; | ||
38 | struct mutex mutex; /* for exclusive reference to region */ | ||
39 | struct list_head bridge_list; | ||
40 | struct fpga_image_info *info; | ||
41 | }; | ||
42 | |||
43 | #define to_fpga_region(d) container_of(d, struct fpga_region, dev) | ||
44 | |||
45 | static DEFINE_IDA(fpga_region_ida); | 30 | static DEFINE_IDA(fpga_region_ida); |
46 | static struct class *fpga_region_class; | 31 | static struct class *fpga_region_class; |
47 | 32 | ||
48 | static const struct of_device_id fpga_region_of_match[] = { | 33 | struct fpga_region *fpga_region_class_find( |
49 | { .compatible = "fpga-region", }, | 34 | struct device *start, const void *data, |
50 | {}, | 35 | int (*match)(struct device *, const void *)) |
51 | }; | ||
52 | MODULE_DEVICE_TABLE(of, fpga_region_of_match); | ||
53 | |||
54 | static int fpga_region_of_node_match(struct device *dev, const void *data) | ||
55 | { | ||
56 | return dev->of_node == data; | ||
57 | } | ||
58 | |||
59 | /** | ||
60 | * fpga_region_find - find FPGA region | ||
61 | * @np: device node of FPGA Region | ||
62 | * Caller will need to put_device(®ion->dev) when done. | ||
63 | * Returns FPGA Region struct or NULL | ||
64 | */ | ||
65 | static struct fpga_region *fpga_region_find(struct device_node *np) | ||
66 | { | 36 | { |
67 | struct device *dev; | 37 | struct device *dev; |
68 | 38 | ||
69 | dev = class_find_device(fpga_region_class, NULL, np, | 39 | dev = class_find_device(fpga_region_class, start, data, match); |
70 | fpga_region_of_node_match); | ||
71 | if (!dev) | 40 | if (!dev) |
72 | return NULL; | 41 | return NULL; |
73 | 42 | ||
74 | return to_fpga_region(dev); | 43 | return to_fpga_region(dev); |
75 | } | 44 | } |
45 | EXPORT_SYMBOL_GPL(fpga_region_class_find); | ||
76 | 46 | ||
77 | /** | 47 | /** |
78 | * fpga_region_get - get an exclusive reference to a fpga region | 48 | * fpga_region_get - get an exclusive reference to a fpga region |
@@ -94,15 +64,13 @@ static struct fpga_region *fpga_region_get(struct fpga_region *region) | |||
94 | } | 64 | } |
95 | 65 | ||
96 | get_device(dev); | 66 | get_device(dev); |
97 | of_node_get(dev->of_node); | ||
98 | if (!try_module_get(dev->parent->driver->owner)) { | 67 | if (!try_module_get(dev->parent->driver->owner)) { |
99 | of_node_put(dev->of_node); | ||
100 | put_device(dev); | 68 | put_device(dev); |
101 | mutex_unlock(®ion->mutex); | 69 | mutex_unlock(®ion->mutex); |
102 | return ERR_PTR(-ENODEV); | 70 | return ERR_PTR(-ENODEV); |
103 | } | 71 | } |
104 | 72 | ||
105 | dev_dbg(®ion->dev, "get\n"); | 73 | dev_dbg(dev, "get\n"); |
106 | 74 | ||
107 | return region; | 75 | return region; |
108 | } | 76 | } |
@@ -116,403 +84,99 @@ static void fpga_region_put(struct fpga_region *region) | |||
116 | { | 84 | { |
117 | struct device *dev = ®ion->dev; | 85 | struct device *dev = ®ion->dev; |
118 | 86 | ||
119 | dev_dbg(®ion->dev, "put\n"); | 87 | dev_dbg(dev, "put\n"); |
120 | 88 | ||
121 | module_put(dev->parent->driver->owner); | 89 | module_put(dev->parent->driver->owner); |
122 | of_node_put(dev->of_node); | ||
123 | put_device(dev); | 90 | put_device(dev); |
124 | mutex_unlock(®ion->mutex); | 91 | mutex_unlock(®ion->mutex); |
125 | } | 92 | } |
126 | 93 | ||
127 | /** | 94 | /** |
128 | * fpga_region_get_manager - get exclusive reference for FPGA manager | ||
129 | * @region: FPGA region | ||
130 | * | ||
131 | * Get FPGA Manager from "fpga-mgr" property or from ancestor region. | ||
132 | * | ||
133 | * Caller should call fpga_mgr_put() when done with manager. | ||
134 | * | ||
135 | * Return: fpga manager struct or IS_ERR() condition containing error code. | ||
136 | */ | ||
137 | static struct fpga_manager *fpga_region_get_manager(struct fpga_region *region) | ||
138 | { | ||
139 | struct device *dev = ®ion->dev; | ||
140 | struct device_node *np = dev->of_node; | ||
141 | struct device_node *mgr_node; | ||
142 | struct fpga_manager *mgr; | ||
143 | |||
144 | of_node_get(np); | ||
145 | while (np) { | ||
146 | if (of_device_is_compatible(np, "fpga-region")) { | ||
147 | mgr_node = of_parse_phandle(np, "fpga-mgr", 0); | ||
148 | if (mgr_node) { | ||
149 | mgr = of_fpga_mgr_get(mgr_node); | ||
150 | of_node_put(np); | ||
151 | return mgr; | ||
152 | } | ||
153 | } | ||
154 | np = of_get_next_parent(np); | ||
155 | } | ||
156 | of_node_put(np); | ||
157 | |||
158 | return ERR_PTR(-EINVAL); | ||
159 | } | ||
160 | |||
161 | /** | ||
162 | * fpga_region_get_bridges - create a list of bridges | ||
163 | * @region: FPGA region | ||
164 | * @overlay: device node of the overlay | ||
165 | * | ||
166 | * Create a list of bridges including the parent bridge and the bridges | ||
167 | * specified by "fpga-bridges" property. Note that the | ||
168 | * fpga_bridges_enable/disable/put functions are all fine with an empty list | ||
169 | * if that happens. | ||
170 | * | ||
171 | * Caller should call fpga_bridges_put(®ion->bridge_list) when | ||
172 | * done with the bridges. | ||
173 | * | ||
174 | * Return 0 for success (even if there are no bridges specified) | ||
175 | * or -EBUSY if any of the bridges are in use. | ||
176 | */ | ||
177 | static int fpga_region_get_bridges(struct fpga_region *region, | ||
178 | struct device_node *overlay) | ||
179 | { | ||
180 | struct device *dev = ®ion->dev; | ||
181 | struct device_node *region_np = dev->of_node; | ||
182 | struct device_node *br, *np, *parent_br = NULL; | ||
183 | int i, ret; | ||
184 | |||
185 | /* If parent is a bridge, add to list */ | ||
186 | ret = fpga_bridge_get_to_list(region_np->parent, region->info, | ||
187 | ®ion->bridge_list); | ||
188 | if (ret == -EBUSY) | ||
189 | return ret; | ||
190 | |||
191 | if (!ret) | ||
192 | parent_br = region_np->parent; | ||
193 | |||
194 | /* If overlay has a list of bridges, use it. */ | ||
195 | if (of_parse_phandle(overlay, "fpga-bridges", 0)) | ||
196 | np = overlay; | ||
197 | else | ||
198 | np = region_np; | ||
199 | |||
200 | for (i = 0; ; i++) { | ||
201 | br = of_parse_phandle(np, "fpga-bridges", i); | ||
202 | if (!br) | ||
203 | break; | ||
204 | |||
205 | /* If parent bridge is in list, skip it. */ | ||
206 | if (br == parent_br) | ||
207 | continue; | ||
208 | |||
209 | /* If node is a bridge, get it and add to list */ | ||
210 | ret = fpga_bridge_get_to_list(br, region->info, | ||
211 | ®ion->bridge_list); | ||
212 | |||
213 | /* If any of the bridges are in use, give up */ | ||
214 | if (ret == -EBUSY) { | ||
215 | fpga_bridges_put(®ion->bridge_list); | ||
216 | return -EBUSY; | ||
217 | } | ||
218 | } | ||
219 | |||
220 | return 0; | ||
221 | } | ||
222 | |||
223 | /** | ||
224 | * fpga_region_program_fpga - program FPGA | 95 | * fpga_region_program_fpga - program FPGA |
225 | * @region: FPGA region | 96 | * @region: FPGA region |
226 | * @firmware_name: name of FPGA image firmware file | 97 | * Program an FPGA using fpga image info (region->info). |
227 | * @overlay: device node of the overlay | ||
228 | * Program an FPGA using information in the device tree. | ||
229 | * Function assumes that there is a firmware-name property. | ||
230 | * Return 0 for success or negative error code. | 98 | * Return 0 for success or negative error code. |
231 | */ | 99 | */ |
232 | static int fpga_region_program_fpga(struct fpga_region *region, | 100 | int fpga_region_program_fpga(struct fpga_region *region) |
233 | const char *firmware_name, | ||
234 | struct device_node *overlay) | ||
235 | { | 101 | { |
236 | struct fpga_manager *mgr; | 102 | struct device *dev = ®ion->dev; |
103 | struct fpga_image_info *info = region->info; | ||
237 | int ret; | 104 | int ret; |
238 | 105 | ||
239 | region = fpga_region_get(region); | 106 | region = fpga_region_get(region); |
240 | if (IS_ERR(region)) { | 107 | if (IS_ERR(region)) { |
241 | pr_err("failed to get fpga region\n"); | 108 | dev_err(dev, "failed to get FPGA region\n"); |
242 | return PTR_ERR(region); | 109 | return PTR_ERR(region); |
243 | } | 110 | } |
244 | 111 | ||
245 | mgr = fpga_region_get_manager(region); | 112 | ret = fpga_mgr_lock(region->mgr); |
246 | if (IS_ERR(mgr)) { | 113 | if (ret) { |
247 | pr_err("failed to get fpga region manager\n"); | 114 | dev_err(dev, "FPGA manager is busy\n"); |
248 | ret = PTR_ERR(mgr); | ||
249 | goto err_put_region; | 115 | goto err_put_region; |
250 | } | 116 | } |
251 | 117 | ||
252 | ret = fpga_region_get_bridges(region, overlay); | 118 | /* |
253 | if (ret) { | 119 | * In some cases, we already have a list of bridges in the |
254 | pr_err("failed to get fpga region bridges\n"); | 120 | * fpga region struct. Or we don't have any bridges. |
255 | goto err_put_mgr; | 121 | */ |
122 | if (region->get_bridges) { | ||
123 | ret = region->get_bridges(region); | ||
124 | if (ret) { | ||
125 | dev_err(dev, "failed to get fpga region bridges\n"); | ||
126 | goto err_unlock_mgr; | ||
127 | } | ||
256 | } | 128 | } |
257 | 129 | ||
258 | ret = fpga_bridges_disable(®ion->bridge_list); | 130 | ret = fpga_bridges_disable(®ion->bridge_list); |
259 | if (ret) { | 131 | if (ret) { |
260 | pr_err("failed to disable region bridges\n"); | 132 | dev_err(dev, "failed to disable bridges\n"); |
261 | goto err_put_br; | 133 | goto err_put_br; |
262 | } | 134 | } |
263 | 135 | ||
264 | ret = fpga_mgr_firmware_load(mgr, region->info, firmware_name); | 136 | ret = fpga_mgr_load(region->mgr, info); |
265 | if (ret) { | 137 | if (ret) { |
266 | pr_err("failed to load fpga image\n"); | 138 | dev_err(dev, "failed to load FPGA image\n"); |
267 | goto err_put_br; | 139 | goto err_put_br; |
268 | } | 140 | } |
269 | 141 | ||
270 | ret = fpga_bridges_enable(®ion->bridge_list); | 142 | ret = fpga_bridges_enable(®ion->bridge_list); |
271 | if (ret) { | 143 | if (ret) { |
272 | pr_err("failed to enable region bridges\n"); | 144 | dev_err(dev, "failed to enable region bridges\n"); |
273 | goto err_put_br; | 145 | goto err_put_br; |
274 | } | 146 | } |
275 | 147 | ||
276 | fpga_mgr_put(mgr); | 148 | fpga_mgr_unlock(region->mgr); |
277 | fpga_region_put(region); | 149 | fpga_region_put(region); |
278 | 150 | ||
279 | return 0; | 151 | return 0; |
280 | 152 | ||
281 | err_put_br: | 153 | err_put_br: |
282 | fpga_bridges_put(®ion->bridge_list); | 154 | if (region->get_bridges) |
283 | err_put_mgr: | 155 | fpga_bridges_put(®ion->bridge_list); |
284 | fpga_mgr_put(mgr); | 156 | err_unlock_mgr: |
157 | fpga_mgr_unlock(region->mgr); | ||
285 | err_put_region: | 158 | err_put_region: |
286 | fpga_region_put(region); | 159 | fpga_region_put(region); |
287 | 160 | ||
288 | return ret; | 161 | return ret; |
289 | } | 162 | } |
163 | EXPORT_SYMBOL_GPL(fpga_region_program_fpga); | ||
290 | 164 | ||
291 | /** | 165 | int fpga_region_register(struct device *dev, struct fpga_region *region) |
292 | * child_regions_with_firmware | ||
293 | * @overlay: device node of the overlay | ||
294 | * | ||
295 | * If the overlay adds child FPGA regions, they are not allowed to have | ||
296 | * firmware-name property. | ||
297 | * | ||
298 | * Return 0 for OK or -EINVAL if child FPGA region adds firmware-name. | ||
299 | */ | ||
300 | static int child_regions_with_firmware(struct device_node *overlay) | ||
301 | { | ||
302 | struct device_node *child_region; | ||
303 | const char *child_firmware_name; | ||
304 | int ret = 0; | ||
305 | |||
306 | of_node_get(overlay); | ||
307 | |||
308 | child_region = of_find_matching_node(overlay, fpga_region_of_match); | ||
309 | while (child_region) { | ||
310 | if (!of_property_read_string(child_region, "firmware-name", | ||
311 | &child_firmware_name)) { | ||
312 | ret = -EINVAL; | ||
313 | break; | ||
314 | } | ||
315 | child_region = of_find_matching_node(child_region, | ||
316 | fpga_region_of_match); | ||
317 | } | ||
318 | |||
319 | of_node_put(child_region); | ||
320 | |||
321 | if (ret) | ||
322 | pr_err("firmware-name not allowed in child FPGA region: %pOF", | ||
323 | child_region); | ||
324 | |||
325 | return ret; | ||
326 | } | ||
327 | |||
328 | /** | ||
329 | * fpga_region_notify_pre_apply - pre-apply overlay notification | ||
330 | * | ||
331 | * @region: FPGA region that the overlay was applied to | ||
332 | * @nd: overlay notification data | ||
333 | * | ||
334 | * Called when an overlay targeted to a FPGA Region is about to be | ||
335 | * applied. Function will check the properties that will be added to the FPGA | ||
336 | * region. If the checks pass, it will program the FPGA. | ||
337 | * | ||
338 | * The checks are: | ||
339 | * The overlay must add either firmware-name or external-fpga-config property | ||
340 | * to the FPGA Region. | ||
341 | * | ||
342 | * firmware-name : program the FPGA | ||
343 | * external-fpga-config : FPGA is already programmed | ||
344 | * encrypted-fpga-config : FPGA bitstream is encrypted | ||
345 | * | ||
346 | * The overlay can add other FPGA regions, but child FPGA regions cannot have a | ||
347 | * firmware-name property since those regions don't exist yet. | ||
348 | * | ||
349 | * If the overlay breaks the rules, the notifier returns an error and the | ||
350 | * overlay is rejected before it goes into the main tree. | ||
351 | * | ||
352 | * Returns 0 for success or negative error code for failure. | ||
353 | */ | ||
354 | static int fpga_region_notify_pre_apply(struct fpga_region *region, | ||
355 | struct of_overlay_notify_data *nd) | ||
356 | { | 166 | { |
357 | const char *firmware_name = NULL; | ||
358 | struct fpga_image_info *info; | ||
359 | int ret; | ||
360 | |||
361 | info = devm_kzalloc(®ion->dev, sizeof(*info), GFP_KERNEL); | ||
362 | if (!info) | ||
363 | return -ENOMEM; | ||
364 | |||
365 | region->info = info; | ||
366 | |||
367 | /* Reject overlay if child FPGA Regions have firmware-name property */ | ||
368 | ret = child_regions_with_firmware(nd->overlay); | ||
369 | if (ret) | ||
370 | return ret; | ||
371 | |||
372 | /* Read FPGA region properties from the overlay */ | ||
373 | if (of_property_read_bool(nd->overlay, "partial-fpga-config")) | ||
374 | info->flags |= FPGA_MGR_PARTIAL_RECONFIG; | ||
375 | |||
376 | if (of_property_read_bool(nd->overlay, "external-fpga-config")) | ||
377 | info->flags |= FPGA_MGR_EXTERNAL_CONFIG; | ||
378 | |||
379 | if (of_property_read_bool(nd->overlay, "encrypted-fpga-config")) | ||
380 | info->flags |= FPGA_MGR_ENCRYPTED_BITSTREAM; | ||
381 | |||
382 | of_property_read_string(nd->overlay, "firmware-name", &firmware_name); | ||
383 | |||
384 | of_property_read_u32(nd->overlay, "region-unfreeze-timeout-us", | ||
385 | &info->enable_timeout_us); | ||
386 | |||
387 | of_property_read_u32(nd->overlay, "region-freeze-timeout-us", | ||
388 | &info->disable_timeout_us); | ||
389 | |||
390 | of_property_read_u32(nd->overlay, "config-complete-timeout-us", | ||
391 | &info->config_complete_timeout_us); | ||
392 | |||
393 | /* If FPGA was externally programmed, don't specify firmware */ | ||
394 | if ((info->flags & FPGA_MGR_EXTERNAL_CONFIG) && firmware_name) { | ||
395 | pr_err("error: specified firmware and external-fpga-config"); | ||
396 | return -EINVAL; | ||
397 | } | ||
398 | |||
399 | /* FPGA is already configured externally. We're done. */ | ||
400 | if (info->flags & FPGA_MGR_EXTERNAL_CONFIG) | ||
401 | return 0; | ||
402 | |||
403 | /* If we got this far, we should be programming the FPGA */ | ||
404 | if (!firmware_name) { | ||
405 | pr_err("should specify firmware-name or external-fpga-config\n"); | ||
406 | return -EINVAL; | ||
407 | } | ||
408 | |||
409 | return fpga_region_program_fpga(region, firmware_name, nd->overlay); | ||
410 | } | ||
411 | |||
412 | /** | ||
413 | * fpga_region_notify_post_remove - post-remove overlay notification | ||
414 | * | ||
415 | * @region: FPGA region that was targeted by the overlay that was removed | ||
416 | * @nd: overlay notification data | ||
417 | * | ||
418 | * Called after an overlay has been removed if the overlay's target was a | ||
419 | * FPGA region. | ||
420 | */ | ||
421 | static void fpga_region_notify_post_remove(struct fpga_region *region, | ||
422 | struct of_overlay_notify_data *nd) | ||
423 | { | ||
424 | fpga_bridges_disable(®ion->bridge_list); | ||
425 | fpga_bridges_put(®ion->bridge_list); | ||
426 | devm_kfree(®ion->dev, region->info); | ||
427 | region->info = NULL; | ||
428 | } | ||
429 | |||
430 | /** | ||
431 | * of_fpga_region_notify - reconfig notifier for dynamic DT changes | ||
432 | * @nb: notifier block | ||
433 | * @action: notifier action | ||
434 | * @arg: reconfig data | ||
435 | * | ||
436 | * This notifier handles programming a FPGA when a "firmware-name" property is | ||
437 | * added to a fpga-region. | ||
438 | * | ||
439 | * Returns NOTIFY_OK or error if FPGA programming fails. | ||
440 | */ | ||
441 | static int of_fpga_region_notify(struct notifier_block *nb, | ||
442 | unsigned long action, void *arg) | ||
443 | { | ||
444 | struct of_overlay_notify_data *nd = arg; | ||
445 | struct fpga_region *region; | ||
446 | int ret; | ||
447 | |||
448 | switch (action) { | ||
449 | case OF_OVERLAY_PRE_APPLY: | ||
450 | pr_debug("%s OF_OVERLAY_PRE_APPLY\n", __func__); | ||
451 | break; | ||
452 | case OF_OVERLAY_POST_APPLY: | ||
453 | pr_debug("%s OF_OVERLAY_POST_APPLY\n", __func__); | ||
454 | return NOTIFY_OK; /* not for us */ | ||
455 | case OF_OVERLAY_PRE_REMOVE: | ||
456 | pr_debug("%s OF_OVERLAY_PRE_REMOVE\n", __func__); | ||
457 | return NOTIFY_OK; /* not for us */ | ||
458 | case OF_OVERLAY_POST_REMOVE: | ||
459 | pr_debug("%s OF_OVERLAY_POST_REMOVE\n", __func__); | ||
460 | break; | ||
461 | default: /* should not happen */ | ||
462 | return NOTIFY_OK; | ||
463 | } | ||
464 | |||
465 | region = fpga_region_find(nd->target); | ||
466 | if (!region) | ||
467 | return NOTIFY_OK; | ||
468 | |||
469 | ret = 0; | ||
470 | switch (action) { | ||
471 | case OF_OVERLAY_PRE_APPLY: | ||
472 | ret = fpga_region_notify_pre_apply(region, nd); | ||
473 | break; | ||
474 | |||
475 | case OF_OVERLAY_POST_REMOVE: | ||
476 | fpga_region_notify_post_remove(region, nd); | ||
477 | break; | ||
478 | } | ||
479 | |||
480 | put_device(®ion->dev); | ||
481 | |||
482 | if (ret) | ||
483 | return notifier_from_errno(ret); | ||
484 | |||
485 | return NOTIFY_OK; | ||
486 | } | ||
487 | |||
488 | static struct notifier_block fpga_region_of_nb = { | ||
489 | .notifier_call = of_fpga_region_notify, | ||
490 | }; | ||
491 | |||
492 | static int fpga_region_probe(struct platform_device *pdev) | ||
493 | { | ||
494 | struct device *dev = &pdev->dev; | ||
495 | struct device_node *np = dev->of_node; | ||
496 | struct fpga_region *region; | ||
497 | int id, ret = 0; | 167 | int id, ret = 0; |
498 | 168 | ||
499 | region = kzalloc(sizeof(*region), GFP_KERNEL); | ||
500 | if (!region) | ||
501 | return -ENOMEM; | ||
502 | |||
503 | id = ida_simple_get(&fpga_region_ida, 0, 0, GFP_KERNEL); | 169 | id = ida_simple_get(&fpga_region_ida, 0, 0, GFP_KERNEL); |
504 | if (id < 0) { | 170 | if (id < 0) |
505 | ret = id; | 171 | return id; |
506 | goto err_kfree; | ||
507 | } | ||
508 | 172 | ||
509 | mutex_init(®ion->mutex); | 173 | mutex_init(®ion->mutex); |
510 | INIT_LIST_HEAD(®ion->bridge_list); | 174 | INIT_LIST_HEAD(®ion->bridge_list); |
511 | |||
512 | device_initialize(®ion->dev); | 175 | device_initialize(®ion->dev); |
176 | region->dev.groups = region->groups; | ||
513 | region->dev.class = fpga_region_class; | 177 | region->dev.class = fpga_region_class; |
514 | region->dev.parent = dev; | 178 | region->dev.parent = dev; |
515 | region->dev.of_node = np; | 179 | region->dev.of_node = dev->of_node; |
516 | region->dev.id = id; | 180 | region->dev.id = id; |
517 | dev_set_drvdata(dev, region); | 181 | dev_set_drvdata(dev, region); |
518 | 182 | ||
@@ -524,44 +188,27 @@ static int fpga_region_probe(struct platform_device *pdev) | |||
524 | if (ret) | 188 | if (ret) |
525 | goto err_remove; | 189 | goto err_remove; |
526 | 190 | ||
527 | of_platform_populate(np, fpga_region_of_match, NULL, ®ion->dev); | ||
528 | |||
529 | dev_info(dev, "FPGA Region probed\n"); | ||
530 | |||
531 | return 0; | 191 | return 0; |
532 | 192 | ||
533 | err_remove: | 193 | err_remove: |
534 | ida_simple_remove(&fpga_region_ida, id); | 194 | ida_simple_remove(&fpga_region_ida, id); |
535 | err_kfree: | ||
536 | kfree(region); | ||
537 | |||
538 | return ret; | 195 | return ret; |
539 | } | 196 | } |
197 | EXPORT_SYMBOL_GPL(fpga_region_register); | ||
540 | 198 | ||
541 | static int fpga_region_remove(struct platform_device *pdev) | 199 | int fpga_region_unregister(struct fpga_region *region) |
542 | { | 200 | { |
543 | struct fpga_region *region = platform_get_drvdata(pdev); | ||
544 | |||
545 | device_unregister(®ion->dev); | 201 | device_unregister(®ion->dev); |
546 | 202 | ||
547 | return 0; | 203 | return 0; |
548 | } | 204 | } |
549 | 205 | EXPORT_SYMBOL_GPL(fpga_region_unregister); | |
550 | static struct platform_driver fpga_region_driver = { | ||
551 | .probe = fpga_region_probe, | ||
552 | .remove = fpga_region_remove, | ||
553 | .driver = { | ||
554 | .name = "fpga-region", | ||
555 | .of_match_table = of_match_ptr(fpga_region_of_match), | ||
556 | }, | ||
557 | }; | ||
558 | 206 | ||
559 | static void fpga_region_dev_release(struct device *dev) | 207 | static void fpga_region_dev_release(struct device *dev) |
560 | { | 208 | { |
561 | struct fpga_region *region = to_fpga_region(dev); | 209 | struct fpga_region *region = to_fpga_region(dev); |
562 | 210 | ||
563 | ida_simple_remove(&fpga_region_ida, region->dev.id); | 211 | ida_simple_remove(&fpga_region_ida, region->dev.id); |
564 | kfree(region); | ||
565 | } | 212 | } |
566 | 213 | ||
567 | /** | 214 | /** |
@@ -570,36 +217,17 @@ static void fpga_region_dev_release(struct device *dev) | |||
570 | */ | 217 | */ |
571 | static int __init fpga_region_init(void) | 218 | static int __init fpga_region_init(void) |
572 | { | 219 | { |
573 | int ret; | ||
574 | |||
575 | fpga_region_class = class_create(THIS_MODULE, "fpga_region"); | 220 | fpga_region_class = class_create(THIS_MODULE, "fpga_region"); |
576 | if (IS_ERR(fpga_region_class)) | 221 | if (IS_ERR(fpga_region_class)) |
577 | return PTR_ERR(fpga_region_class); | 222 | return PTR_ERR(fpga_region_class); |
578 | 223 | ||
579 | fpga_region_class->dev_release = fpga_region_dev_release; | 224 | fpga_region_class->dev_release = fpga_region_dev_release; |
580 | 225 | ||
581 | ret = of_overlay_notifier_register(&fpga_region_of_nb); | ||
582 | if (ret) | ||
583 | goto err_class; | ||
584 | |||
585 | ret = platform_driver_register(&fpga_region_driver); | ||
586 | if (ret) | ||
587 | goto err_plat; | ||
588 | |||
589 | return 0; | 226 | return 0; |
590 | |||
591 | err_plat: | ||
592 | of_overlay_notifier_unregister(&fpga_region_of_nb); | ||
593 | err_class: | ||
594 | class_destroy(fpga_region_class); | ||
595 | ida_destroy(&fpga_region_ida); | ||
596 | return ret; | ||
597 | } | 227 | } |
598 | 228 | ||
599 | static void __exit fpga_region_exit(void) | 229 | static void __exit fpga_region_exit(void) |
600 | { | 230 | { |
601 | platform_driver_unregister(&fpga_region_driver); | ||
602 | of_overlay_notifier_unregister(&fpga_region_of_nb); | ||
603 | class_destroy(fpga_region_class); | 231 | class_destroy(fpga_region_class); |
604 | ida_destroy(&fpga_region_ida); | 232 | ida_destroy(&fpga_region_ida); |
605 | } | 233 | } |
@@ -608,5 +236,5 @@ subsys_initcall(fpga_region_init); | |||
608 | module_exit(fpga_region_exit); | 236 | module_exit(fpga_region_exit); |
609 | 237 | ||
610 | MODULE_DESCRIPTION("FPGA Region"); | 238 | MODULE_DESCRIPTION("FPGA Region"); |
611 | MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>"); | 239 | MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); |
612 | MODULE_LICENSE("GPL v2"); | 240 | MODULE_LICENSE("GPL v2"); |
diff --git a/drivers/fpga/of-fpga-region.c b/drivers/fpga/of-fpga-region.c
new file mode 100644
index 000000000000..119ff75522f1
--- /dev/null
+++ b/drivers/fpga/of-fpga-region.c
@@ -0,0 +1,504 @@ | |||
1 | /* | ||
2 | * FPGA Region - Device Tree support for FPGA programming under Linux | ||
3 | * | ||
4 | * Copyright (C) 2013-2016 Altera Corporation | ||
5 | * Copyright (C) 2017 Intel Corporation | ||
6 | * | ||
7 | * This program is free software; you can redistribute it and/or modify it | ||
8 | * under the terms and conditions of the GNU General Public License, | ||
9 | * version 2, as published by the Free Software Foundation. | ||
10 | * | ||
11 | * This program is distributed in the hope it will be useful, but WITHOUT | ||
12 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or | ||
13 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for | ||
14 | * more details. | ||
15 | * | ||
16 | * You should have received a copy of the GNU General Public License along with | ||
17 | * this program. If not, see <http://www.gnu.org/licenses/>. | ||
18 | */ | ||
19 | |||
20 | #include <linux/fpga/fpga-bridge.h> | ||
21 | #include <linux/fpga/fpga-mgr.h> | ||
22 | #include <linux/fpga/fpga-region.h> | ||
23 | #include <linux/idr.h> | ||
24 | #include <linux/kernel.h> | ||
25 | #include <linux/list.h> | ||
26 | #include <linux/module.h> | ||
27 | #include <linux/of_platform.h> | ||
28 | #include <linux/slab.h> | ||
29 | #include <linux/spinlock.h> | ||
30 | |||
31 | static const struct of_device_id fpga_region_of_match[] = { | ||
32 | { .compatible = "fpga-region", }, | ||
33 | {}, | ||
34 | }; | ||
35 | MODULE_DEVICE_TABLE(of, fpga_region_of_match); | ||
36 | |||
37 | static int fpga_region_of_node_match(struct device *dev, const void *data) | ||
38 | { | ||
39 | return dev->of_node == data; | ||
40 | } | ||
41 | |||
42 | /** | ||
43 | * of_fpga_region_find - find FPGA region | ||
44 | * @np: device node of FPGA Region | ||
45 | * | ||
46 | * Caller will need to put_device(®ion->dev) when done. | ||
47 | * | ||
48 | * Returns FPGA Region struct or NULL | ||
49 | */ | ||
50 | static struct fpga_region *of_fpga_region_find(struct device_node *np) | ||
51 | { | ||
52 | return fpga_region_class_find(NULL, np, fpga_region_of_node_match); | ||
53 | } | ||
54 | |||
55 | /** | ||
56 | * of_fpga_region_get_mgr - get reference for FPGA manager | ||
57 | * @np: device node of FPGA region | ||
58 | * | ||
59 | * Get FPGA Manager from "fpga-mgr" property or from ancestor region. | ||
60 | * | ||
61 | * Caller should call fpga_mgr_put() when done with manager. | ||
62 | * | ||
63 | * Return: fpga manager struct or IS_ERR() condition containing error code. | ||
64 | */ | ||
65 | static struct fpga_manager *of_fpga_region_get_mgr(struct device_node *np) | ||
66 | { | ||
67 | struct device_node *mgr_node; | ||
68 | struct fpga_manager *mgr; | ||
69 | |||
70 | of_node_get(np); | ||
71 | while (np) { | ||
72 | if (of_device_is_compatible(np, "fpga-region")) { | ||
73 | mgr_node = of_parse_phandle(np, "fpga-mgr", 0); | ||
74 | if (mgr_node) { | ||
75 | mgr = of_fpga_mgr_get(mgr_node); | ||
76 | of_node_put(mgr_node); | ||
77 | of_node_put(np); | ||
78 | return mgr; | ||
79 | } | ||
80 | } | ||
81 | np = of_get_next_parent(np); | ||
82 | } | ||
83 | of_node_put(np); | ||
84 | |||
85 | return ERR_PTR(-EINVAL); | ||
86 | } | ||
87 | |||
88 | /** | ||
89 | * of_fpga_region_get_bridges - create a list of bridges | ||
90 | * @region: FPGA region | ||
91 | * | ||
92 | * Create a list of bridges including the parent bridge and the bridges | ||
93 | * specified by the "fpga-bridges" property. Note that the | ||
94 | * fpga_bridges_enable/disable/put functions all handle an empty | ||
95 | * list gracefully. | ||
96 | * | ||
97 | * Caller should call fpga_bridges_put(&region->bridge_list) when | ||
98 | * done with the bridges. | ||
99 | * | ||
100 | * Return: 0 for success (even if there are no bridges specified) | ||
101 | * or -EBUSY if any of the bridges are in use. | ||
102 | */ | ||
103 | static int of_fpga_region_get_bridges(struct fpga_region *region) | ||
104 | { | ||
105 | struct device *dev = &region->dev; | ||
106 | struct device_node *region_np = dev->of_node; | ||
107 | struct fpga_image_info *info = region->info; | ||
108 | struct device_node *br, *np, *parent_br = NULL; | ||
109 | int i, ret; | ||
110 | |||
111 | /* If parent is a bridge, add to list */ | ||
112 | ret = of_fpga_bridge_get_to_list(region_np->parent, info, | ||
113 | &region->bridge_list); | ||
114 | |||
115 | /* -EBUSY means parent is a bridge that is in use. Give up. */ | ||
116 | if (ret == -EBUSY) | ||
117 | return ret; | ||
118 | |||
119 | /* Zero return code means parent was a bridge and was added to list. */ | ||
120 | if (!ret) | ||
121 | parent_br = region_np->parent; | ||
122 | |||
123 | /* If overlay has a list of bridges, use it. */ | ||
124 | br = of_parse_phandle(info->overlay, "fpga-bridges", 0); | ||
125 | if (br) { | ||
126 | of_node_put(br); | ||
127 | np = info->overlay; | ||
128 | } else { | ||
129 | np = region_np; | ||
130 | } | ||
131 | |||
132 | for (i = 0; ; i++) { | ||
133 | br = of_parse_phandle(np, "fpga-bridges", i); | ||
134 | if (!br) | ||
135 | break; | ||
136 | |||
137 | /* If parent bridge is in list, skip it. */ | ||
138 | if (br == parent_br) { | ||
139 | of_node_put(br); | ||
140 | continue; | ||
141 | } | ||
142 | |||
143 | /* If node is a bridge, get it and add to list */ | ||
144 | ret = of_fpga_bridge_get_to_list(br, info, | ||
145 | &region->bridge_list); | ||
146 | of_node_put(br); | ||
147 | |||
148 | /* If any of the bridges are in use, give up */ | ||
149 | if (ret == -EBUSY) { | ||
150 | fpga_bridges_put(&region->bridge_list); | ||
151 | return -EBUSY; | ||
152 | } | ||
153 | } | ||
154 | |||
155 | return 0; | ||
156 | } | ||
157 | |||
158 | /** | ||
159 | * child_regions_with_firmware - check child regions for firmware-name | ||
160 | * @overlay: device node of the overlay | ||
161 | * | ||
162 | * If the overlay adds child FPGA regions, they are not allowed to have | ||
163 | * a firmware-name property. | ||
164 | * | ||
165 | * Return: 0 for OK or -EINVAL if a child FPGA region adds firmware-name. | ||
166 | */ | ||
167 | static int child_regions_with_firmware(struct device_node *overlay) | ||
168 | { | ||
169 | struct device_node *child_region; | ||
170 | const char *child_firmware_name; | ||
171 | int ret = 0; | ||
172 | |||
173 | of_node_get(overlay); | ||
174 | |||
175 | child_region = of_find_matching_node(overlay, fpga_region_of_match); | ||
176 | while (child_region) { | ||
177 | if (!of_property_read_string(child_region, "firmware-name", | ||
178 | &child_firmware_name)) { | ||
179 | ret = -EINVAL; | ||
180 | break; | ||
181 | } | ||
182 | child_region = of_find_matching_node(child_region, | ||
183 | fpga_region_of_match); | ||
184 | } | ||
185 | |||
186 | of_node_put(child_region); | ||
187 | |||
188 | if (ret) | ||
189 | pr_err("firmware-name not allowed in child FPGA region: %pOF\n", | ||
190 | child_region); | ||
191 | |||
192 | return ret; | ||
193 | } | ||
194 | |||
195 | /** | ||
196 | * of_fpga_region_parse_ov - parse and check overlay applied to region | ||
197 | * | ||
198 | * @region: FPGA region | ||
199 | * @overlay: overlay applied to the FPGA region | ||
200 | * | ||
201 | * Given an overlay applied to an FPGA region, parse the FPGA image-specific | ||
202 | * info in the overlay and do some checking. | ||
203 | * | ||
204 | * Returns: | ||
205 | * NULL if overlay doesn't direct us to program the FPGA. | ||
206 | * fpga_image_info struct if there is an image to program. | ||
207 | * error code for invalid overlay. | ||
208 | */ | ||
209 | static struct fpga_image_info *of_fpga_region_parse_ov( | ||
210 | struct fpga_region *region, | ||
211 | struct device_node *overlay) | ||
212 | { | ||
213 | struct device *dev = &region->dev; | ||
214 | struct fpga_image_info *info; | ||
215 | const char *firmware_name; | ||
216 | int ret; | ||
217 | |||
218 | if (region->info) { | ||
219 | dev_err(dev, "Region already has overlay applied.\n"); | ||
220 | return ERR_PTR(-EINVAL); | ||
221 | } | ||
222 | |||
223 | /* | ||
224 | * Reject overlay if child FPGA Regions added in the overlay have | ||
225 | * firmware-name property (would mean that an FPGA region that has | ||
226 | * not been added to the live tree yet is doing FPGA programming). | ||
227 | */ | ||
228 | ret = child_regions_with_firmware(overlay); | ||
229 | if (ret) | ||
230 | return ERR_PTR(ret); | ||
231 | |||
232 | info = fpga_image_info_alloc(dev); | ||
233 | if (!info) | ||
234 | return ERR_PTR(-ENOMEM); | ||
235 | |||
236 | info->overlay = overlay; | ||
237 | |||
238 | /* Read FPGA region properties from the overlay */ | ||
239 | if (of_property_read_bool(overlay, "partial-fpga-config")) | ||
240 | info->flags |= FPGA_MGR_PARTIAL_RECONFIG; | ||
241 | |||
242 | if (of_property_read_bool(overlay, "external-fpga-config")) | ||
243 | info->flags |= FPGA_MGR_EXTERNAL_CONFIG; | ||
244 | |||
245 | if (of_property_read_bool(overlay, "encrypted-fpga-config")) | ||
246 | info->flags |= FPGA_MGR_ENCRYPTED_BITSTREAM; | ||
247 | |||
248 | if (!of_property_read_string(overlay, "firmware-name", | ||
249 | &firmware_name)) { | ||
250 | info->firmware_name = devm_kstrdup(dev, firmware_name, | ||
251 | GFP_KERNEL); | ||
252 | if (!info->firmware_name) | ||
253 | return ERR_PTR(-ENOMEM); | ||
254 | } | ||
255 | |||
256 | of_property_read_u32(overlay, "region-unfreeze-timeout-us", | ||
257 | &info->enable_timeout_us); | ||
258 | |||
259 | of_property_read_u32(overlay, "region-freeze-timeout-us", | ||
260 | &info->disable_timeout_us); | ||
261 | |||
262 | of_property_read_u32(overlay, "config-complete-timeout-us", | ||
263 | &info->config_complete_timeout_us); | ||
264 | |||
265 | /* If the overlay is not programming the FPGA, we don't need FPGA image info */ | ||
266 | if (!info->firmware_name) { | ||
267 | ret = 0; | ||
268 | goto ret_no_info; | ||
269 | } | ||
270 | |||
271 | /* | ||
272 | * If overlay informs us FPGA was externally programmed, specifying | ||
273 | * firmware here would be ambiguous. | ||
274 | */ | ||
275 | if (info->flags & FPGA_MGR_EXTERNAL_CONFIG) { | ||
276 | dev_err(dev, "error: specified firmware and external-fpga-config\n"); | ||
277 | ret = -EINVAL; | ||
278 | goto ret_no_info; | ||
279 | } | ||
280 | |||
281 | return info; | ||
282 | ret_no_info: | ||
283 | fpga_image_info_free(info); | ||
284 | return ERR_PTR(ret); | ||
285 | } | ||
286 | |||
287 | /** | ||
288 | * of_fpga_region_notify_pre_apply - pre-apply overlay notification | ||
289 | * | ||
290 | * @region: FPGA region that the overlay was applied to | ||
291 | * @nd: overlay notification data | ||
292 | * | ||
293 | * Called when an overlay targeting an FPGA region is about to be applied. | ||
294 | * Parses the overlay for properties that influence how the FPGA will be | ||
295 | * programmed and does some checking. If the checks pass, programs the FPGA. | ||
296 | * If the checks fail, the overlay is rejected and does not get added to the | ||
297 | * live tree. | ||
298 | * | ||
299 | * Return: 0 for success or a negative error code on failure. | ||
300 | */ | ||
301 | static int of_fpga_region_notify_pre_apply(struct fpga_region *region, | ||
302 | struct of_overlay_notify_data *nd) | ||
303 | { | ||
304 | struct device *dev = &region->dev; | ||
305 | struct fpga_image_info *info; | ||
306 | int ret; | ||
307 | |||
308 | info = of_fpga_region_parse_ov(region, nd->overlay); | ||
309 | if (IS_ERR(info)) | ||
310 | return PTR_ERR(info); | ||
311 | |||
312 | /* If overlay doesn't program the FPGA, accept it anyway. */ | ||
313 | if (!info) | ||
314 | return 0; | ||
315 | |||
316 | if (region->info) { | ||
317 | dev_err(dev, "Region already has overlay applied.\n"); | ||
318 | return -EINVAL; | ||
319 | } | ||
320 | |||
321 | region->info = info; | ||
322 | ret = fpga_region_program_fpga(region); | ||
323 | if (ret) { | ||
324 | /* error; reject overlay */ | ||
325 | fpga_image_info_free(info); | ||
326 | region->info = NULL; | ||
327 | } | ||
328 | |||
329 | return ret; | ||
330 | } | ||
331 | |||
332 | /** | ||
333 | * of_fpga_region_notify_post_remove - post-remove overlay notification | ||
334 | * | ||
335 | * @region: FPGA region that was targeted by the overlay that was removed | ||
336 | * @nd: overlay notification data | ||
337 | * | ||
338 | * Called after an overlay has been removed if the overlay's target was an | ||
339 | * FPGA region. | ||
340 | */ | ||
341 | static void of_fpga_region_notify_post_remove(struct fpga_region *region, | ||
342 | struct of_overlay_notify_data *nd) | ||
343 | { | ||
344 | fpga_bridges_disable(&region->bridge_list); | ||
345 | fpga_bridges_put(&region->bridge_list); | ||
346 | fpga_image_info_free(region->info); | ||
347 | region->info = NULL; | ||
348 | } | ||
349 | |||
350 | /** | ||
351 | * of_fpga_region_notify - reconfig notifier for dynamic DT changes | ||
352 | * @nb: notifier block | ||
353 | * @action: notifier action | ||
354 | * @arg: reconfig data | ||
355 | * | ||
356 | * This notifier handles programming an FPGA when a "firmware-name" property is | ||
357 | * added to an fpga-region. | ||
358 | * | ||
359 | * Return: NOTIFY_OK, or an errno-encoded notifier value if FPGA programming fails. | ||
360 | */ | ||
361 | static int of_fpga_region_notify(struct notifier_block *nb, | ||
362 | unsigned long action, void *arg) | ||
363 | { | ||
364 | struct of_overlay_notify_data *nd = arg; | ||
365 | struct fpga_region *region; | ||
366 | int ret; | ||
367 | |||
368 | switch (action) { | ||
369 | case OF_OVERLAY_PRE_APPLY: | ||
370 | pr_debug("%s OF_OVERLAY_PRE_APPLY\n", __func__); | ||
371 | break; | ||
372 | case OF_OVERLAY_POST_APPLY: | ||
373 | pr_debug("%s OF_OVERLAY_POST_APPLY\n", __func__); | ||
374 | return NOTIFY_OK; /* not for us */ | ||
375 | case OF_OVERLAY_PRE_REMOVE: | ||
376 | pr_debug("%s OF_OVERLAY_PRE_REMOVE\n", __func__); | ||
377 | return NOTIFY_OK; /* not for us */ | ||
378 | case OF_OVERLAY_POST_REMOVE: | ||
379 | pr_debug("%s OF_OVERLAY_POST_REMOVE\n", __func__); | ||
380 | break; | ||
381 | default: /* should not happen */ | ||
382 | return NOTIFY_OK; | ||
383 | } | ||
384 | |||
385 | region = of_fpga_region_find(nd->target); | ||
386 | if (!region) | ||
387 | return NOTIFY_OK; | ||
388 | |||
389 | ret = 0; | ||
390 | switch (action) { | ||
391 | case OF_OVERLAY_PRE_APPLY: | ||
392 | ret = of_fpga_region_notify_pre_apply(region, nd); | ||
393 | break; | ||
394 | |||
395 | case OF_OVERLAY_POST_REMOVE: | ||
396 | of_fpga_region_notify_post_remove(region, nd); | ||
397 | break; | ||
398 | } | ||
399 | |||
400 | put_device(&region->dev); | ||
401 | |||
402 | if (ret) | ||
403 | return notifier_from_errno(ret); | ||
404 | |||
405 | return NOTIFY_OK; | ||
406 | } | ||
407 | |||
408 | static struct notifier_block fpga_region_of_nb = { | ||
409 | .notifier_call = of_fpga_region_notify, | ||
410 | }; | ||
411 | |||
412 | static int of_fpga_region_probe(struct platform_device *pdev) | ||
413 | { | ||
414 | struct device *dev = &pdev->dev; | ||
415 | struct device_node *np = dev->of_node; | ||
416 | struct fpga_region *region; | ||
417 | struct fpga_manager *mgr; | ||
418 | int ret; | ||
419 | |||
420 | /* Find the FPGA mgr specified by region or parent region. */ | ||
421 | mgr = of_fpga_region_get_mgr(np); | ||
422 | if (IS_ERR(mgr)) | ||
423 | return -EPROBE_DEFER; | ||
424 | |||
425 | region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL); | ||
426 | if (!region) { | ||
427 | ret = -ENOMEM; | ||
428 | goto eprobe_mgr_put; | ||
429 | } | ||
430 | |||
431 | region->mgr = mgr; | ||
432 | |||
433 | /* Specify how to get bridges for this type of region. */ | ||
434 | region->get_bridges = of_fpga_region_get_bridges; | ||
435 | |||
436 | ret = fpga_region_register(dev, region); | ||
437 | if (ret) | ||
438 | goto eprobe_mgr_put; | ||
439 | |||
440 | of_platform_populate(np, fpga_region_of_match, NULL, &region->dev); | ||
441 | |||
442 | dev_info(dev, "FPGA Region probed\n"); | ||
443 | |||
444 | return 0; | ||
445 | |||
446 | eprobe_mgr_put: | ||
447 | fpga_mgr_put(mgr); | ||
448 | return ret; | ||
449 | } | ||
450 | |||
451 | static int of_fpga_region_remove(struct platform_device *pdev) | ||
452 | { | ||
453 | struct fpga_region *region = platform_get_drvdata(pdev); | ||
454 | |||
455 | fpga_region_unregister(region); | ||
456 | fpga_mgr_put(region->mgr); | ||
457 | |||
458 | return 0; | ||
459 | } | ||
460 | |||
461 | static struct platform_driver of_fpga_region_driver = { | ||
462 | .probe = of_fpga_region_probe, | ||
463 | .remove = of_fpga_region_remove, | ||
464 | .driver = { | ||
465 | .name = "of-fpga-region", | ||
466 | .of_match_table = of_match_ptr(fpga_region_of_match), | ||
467 | }, | ||
468 | }; | ||
469 | |||
470 | /** | ||
471 | * of_fpga_region_init - init function for the of-fpga-region driver | ||
472 | * Registers the overlay reconfig notifier and the platform driver. | ||
473 | */ | ||
474 | static int __init of_fpga_region_init(void) | ||
475 | { | ||
476 | int ret; | ||
477 | |||
478 | ret = of_overlay_notifier_register(&fpga_region_of_nb); | ||
479 | if (ret) | ||
480 | return ret; | ||
481 | |||
482 | ret = platform_driver_register(&of_fpga_region_driver); | ||
483 | if (ret) | ||
484 | goto err_plat; | ||
485 | |||
486 | return 0; | ||
487 | |||
488 | err_plat: | ||
489 | of_overlay_notifier_unregister(&fpga_region_of_nb); | ||
490 | return ret; | ||
491 | } | ||
492 | |||
493 | static void __exit of_fpga_region_exit(void) | ||
494 | { | ||
495 | platform_driver_unregister(&of_fpga_region_driver); | ||
496 | of_overlay_notifier_unregister(&fpga_region_of_nb); | ||
497 | } | ||
498 | |||
499 | subsys_initcall(of_fpga_region_init); | ||
500 | module_exit(of_fpga_region_exit); | ||
501 | |||
502 | MODULE_DESCRIPTION("FPGA Region"); | ||
503 | MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); | ||
504 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/fpga/socfpga-a10.c b/drivers/fpga/socfpga-a10.c index f8770af0f6b5..a46e343a5b72 100644 --- a/drivers/fpga/socfpga-a10.c +++ b/drivers/fpga/socfpga-a10.c | |||
@@ -519,8 +519,14 @@ static int socfpga_a10_fpga_probe(struct platform_device *pdev) | |||
519 | return -EBUSY; | 519 | return -EBUSY; |
520 | } | 520 | } |
521 | 521 | ||
522 | return fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager", | 522 | ret = fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager", |
523 | &socfpga_a10_fpga_mgr_ops, priv); | 523 | &socfpga_a10_fpga_mgr_ops, priv); |
524 | if (ret) { | ||
525 | clk_disable_unprepare(priv->clk); | ||
526 | return ret; | ||
527 | } | ||
528 | |||
529 | return 0; | ||
524 | } | 530 | } |
525 | 531 | ||
526 | static int socfpga_a10_fpga_remove(struct platform_device *pdev) | 532 | static int socfpga_a10_fpga_remove(struct platform_device *pdev) |
diff --git a/drivers/fsi/Kconfig b/drivers/fsi/Kconfig index 6821ed0cd5e8..513e35173aaa 100644 --- a/drivers/fsi/Kconfig +++ b/drivers/fsi/Kconfig | |||
@@ -2,9 +2,7 @@ | |||
2 | # FSI subsystem | 2 | # FSI subsystem |
3 | # | 3 | # |
4 | 4 | ||
5 | menu "FSI support" | 5 | menuconfig FSI |
6 | |||
7 | config FSI | ||
8 | tristate "FSI support" | 6 | tristate "FSI support" |
9 | select CRC4 | 7 | select CRC4 |
10 | ---help--- | 8 | ---help--- |
@@ -34,5 +32,3 @@ config FSI_SCOM | |||
34 | This option enables an FSI based SCOM device driver. | 32 | This option enables an FSI based SCOM device driver. |
35 | 33 | ||
36 | endif | 34 | endif |
37 | |||
38 | endmenu | ||
diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index 8267439dd1ee..fe96aab9e794 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c | |||
@@ -49,9 +49,6 @@ struct hv_context hv_context = { | |||
49 | */ | 49 | */ |
50 | int hv_init(void) | 50 | int hv_init(void) |
51 | { | 51 | { |
52 | if (!hv_is_hypercall_page_setup()) | ||
53 | return -ENOTSUPP; | ||
54 | |||
55 | hv_context.cpu_context = alloc_percpu(struct hv_per_cpu_context); | 52 | hv_context.cpu_context = alloc_percpu(struct hv_per_cpu_context); |
56 | if (!hv_context.cpu_context) | 53 | if (!hv_context.cpu_context) |
57 | return -ENOMEM; | 54 | return -ENOMEM; |
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c index 610223f0e945..bc65c4d79c1f 100644 --- a/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c | |||
@@ -37,7 +37,6 @@ | |||
37 | #include <linux/sched/task_stack.h> | 37 | #include <linux/sched/task_stack.h> |
38 | 38 | ||
39 | #include <asm/hyperv.h> | 39 | #include <asm/hyperv.h> |
40 | #include <asm/hypervisor.h> | ||
41 | #include <asm/mshyperv.h> | 40 | #include <asm/mshyperv.h> |
42 | #include <linux/notifier.h> | 41 | #include <linux/notifier.h> |
43 | #include <linux/ptrace.h> | 42 | #include <linux/ptrace.h> |
@@ -1053,7 +1052,7 @@ static int vmbus_bus_init(void) | |||
1053 | * Initialize the per-cpu interrupt state and | 1052 | * Initialize the per-cpu interrupt state and |
1054 | * connect to the host. | 1053 | * connect to the host. |
1055 | */ | 1054 | */ |
1056 | ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/hyperv:online", | 1055 | ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hyperv/vmbus:online", |
1057 | hv_synic_init, hv_synic_cleanup); | 1056 | hv_synic_init, hv_synic_cleanup); |
1058 | if (ret < 0) | 1057 | if (ret < 0) |
1059 | goto err_alloc; | 1058 | goto err_alloc; |
@@ -1193,7 +1192,7 @@ static ssize_t out_mask_show(const struct vmbus_channel *channel, char *buf) | |||
1193 | 1192 | ||
1194 | return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); | 1193 | return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); |
1195 | } | 1194 | } |
1196 | VMBUS_CHAN_ATTR_RO(out_mask); | 1195 | static VMBUS_CHAN_ATTR_RO(out_mask); |
1197 | 1196 | ||
1198 | static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf) | 1197 | static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf) |
1199 | { | 1198 | { |
@@ -1201,7 +1200,7 @@ static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf) | |||
1201 | 1200 | ||
1202 | return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); | 1201 | return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); |
1203 | } | 1202 | } |
1204 | VMBUS_CHAN_ATTR_RO(in_mask); | 1203 | static VMBUS_CHAN_ATTR_RO(in_mask); |
1205 | 1204 | ||
1206 | static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf) | 1205 | static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf) |
1207 | { | 1206 | { |
@@ -1209,7 +1208,7 @@ static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf) | |||
1209 | 1208 | ||
1210 | return sprintf(buf, "%u\n", hv_get_bytes_to_read(rbi)); | 1209 | return sprintf(buf, "%u\n", hv_get_bytes_to_read(rbi)); |
1211 | } | 1210 | } |
1212 | VMBUS_CHAN_ATTR_RO(read_avail); | 1211 | static VMBUS_CHAN_ATTR_RO(read_avail); |
1213 | 1212 | ||
1214 | static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf) | 1213 | static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf) |
1215 | { | 1214 | { |
@@ -1217,13 +1216,13 @@ static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf) | |||
1217 | 1216 | ||
1218 | return sprintf(buf, "%u\n", hv_get_bytes_to_write(rbi)); | 1217 | return sprintf(buf, "%u\n", hv_get_bytes_to_write(rbi)); |
1219 | } | 1218 | } |
1220 | VMBUS_CHAN_ATTR_RO(write_avail); | 1219 | static VMBUS_CHAN_ATTR_RO(write_avail); |
1221 | 1220 | ||
1222 | static ssize_t show_target_cpu(const struct vmbus_channel *channel, char *buf) | 1221 | static ssize_t show_target_cpu(const struct vmbus_channel *channel, char *buf) |
1223 | { | 1222 | { |
1224 | return sprintf(buf, "%u\n", channel->target_cpu); | 1223 | return sprintf(buf, "%u\n", channel->target_cpu); |
1225 | } | 1224 | } |
1226 | VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL); | 1225 | static VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL); |
1227 | 1226 | ||
1228 | static ssize_t channel_pending_show(const struct vmbus_channel *channel, | 1227 | static ssize_t channel_pending_show(const struct vmbus_channel *channel, |
1229 | char *buf) | 1228 | char *buf) |
@@ -1232,7 +1231,7 @@ static ssize_t channel_pending_show(const struct vmbus_channel *channel, | |||
1232 | channel_pending(channel, | 1231 | channel_pending(channel, |
1233 | vmbus_connection.monitor_pages[1])); | 1232 | vmbus_connection.monitor_pages[1])); |
1234 | } | 1233 | } |
1235 | VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL); | 1234 | static VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL); |
1236 | 1235 | ||
1237 | static ssize_t channel_latency_show(const struct vmbus_channel *channel, | 1236 | static ssize_t channel_latency_show(const struct vmbus_channel *channel, |
1238 | char *buf) | 1237 | char *buf) |
@@ -1241,19 +1240,34 @@ static ssize_t channel_latency_show(const struct vmbus_channel *channel, | |||
1241 | channel_latency(channel, | 1240 | channel_latency(channel, |
1242 | vmbus_connection.monitor_pages[1])); | 1241 | vmbus_connection.monitor_pages[1])); |
1243 | } | 1242 | } |
1244 | VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL); | 1243 | static VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL); |
1245 | 1244 | ||
1246 | static ssize_t channel_interrupts_show(const struct vmbus_channel *channel, char *buf) | 1245 | static ssize_t channel_interrupts_show(const struct vmbus_channel *channel, char *buf) |
1247 | { | 1246 | { |
1248 | return sprintf(buf, "%llu\n", channel->interrupts); | 1247 | return sprintf(buf, "%llu\n", channel->interrupts); |
1249 | } | 1248 | } |
1250 | VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL); | 1249 | static VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL); |
1251 | 1250 | ||
1252 | static ssize_t channel_events_show(const struct vmbus_channel *channel, char *buf) | 1251 | static ssize_t channel_events_show(const struct vmbus_channel *channel, char *buf) |
1253 | { | 1252 | { |
1254 | return sprintf(buf, "%llu\n", channel->sig_events); | 1253 | return sprintf(buf, "%llu\n", channel->sig_events); |
1255 | } | 1254 | } |
1256 | VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL); | 1255 | static VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL); |
1256 | |||
1257 | static ssize_t subchannel_monitor_id_show(const struct vmbus_channel *channel, | ||
1258 | char *buf) | ||
1259 | { | ||
1260 | return sprintf(buf, "%u\n", channel->offermsg.monitorid); | ||
1261 | } | ||
1262 | static VMBUS_CHAN_ATTR(monitor_id, S_IRUGO, subchannel_monitor_id_show, NULL); | ||
1263 | |||
1264 | static ssize_t subchannel_id_show(const struct vmbus_channel *channel, | ||
1265 | char *buf) | ||
1266 | { | ||
1267 | return sprintf(buf, "%u\n", | ||
1268 | channel->offermsg.offer.sub_channel_index); | ||
1269 | } | ||
1270 | static VMBUS_CHAN_ATTR_RO(subchannel_id); | ||
1257 | 1271 | ||
1258 | static struct attribute *vmbus_chan_attrs[] = { | 1272 | static struct attribute *vmbus_chan_attrs[] = { |
1259 | &chan_attr_out_mask.attr, | 1273 | &chan_attr_out_mask.attr, |
@@ -1265,6 +1279,8 @@ static struct attribute *vmbus_chan_attrs[] = { | |||
1265 | &chan_attr_latency.attr, | 1279 | &chan_attr_latency.attr, |
1266 | &chan_attr_interrupts.attr, | 1280 | &chan_attr_interrupts.attr, |
1267 | &chan_attr_events.attr, | 1281 | &chan_attr_events.attr, |
1282 | &chan_attr_monitor_id.attr, | ||
1283 | &chan_attr_subchannel_id.attr, | ||
1268 | NULL | 1284 | NULL |
1269 | }; | 1285 | }; |
1270 | 1286 | ||
@@ -1717,7 +1733,7 @@ static int __init hv_acpi_init(void) | |||
1717 | { | 1733 | { |
1718 | int ret, t; | 1734 | int ret, t; |
1719 | 1735 | ||
1720 | if (x86_hyper_type != X86_HYPER_MS_HYPERV) | 1736 | if (!hv_is_hyperv_initialized()) |
1721 | return -ENODEV; | 1737 | return -ENODEV; |
1722 | 1738 | ||
1723 | init_completion(&probe_event); | 1739 | init_completion(&probe_event); |
diff --git a/drivers/hwtracing/coresight/coresight-dynamic-replicator.c b/drivers/hwtracing/coresight/coresight-dynamic-replicator.c index 8f4357e2626c..043da86b0fe9 100644 --- a/drivers/hwtracing/coresight/coresight-dynamic-replicator.c +++ b/drivers/hwtracing/coresight/coresight-dynamic-replicator.c | |||
@@ -163,10 +163,8 @@ static int replicator_probe(struct amba_device *adev, const struct amba_id *id) | |||
163 | desc.dev = &adev->dev; | 163 | desc.dev = &adev->dev; |
164 | desc.groups = replicator_groups; | 164 | desc.groups = replicator_groups; |
165 | drvdata->csdev = coresight_register(&desc); | 165 | drvdata->csdev = coresight_register(&desc); |
166 | if (IS_ERR(drvdata->csdev)) | ||
167 | return PTR_ERR(drvdata->csdev); | ||
168 | 166 | ||
169 | return 0; | 167 | return PTR_ERR_OR_ZERO(drvdata->csdev); |
170 | } | 168 | } |
171 | 169 | ||
172 | #ifdef CONFIG_PM | 170 | #ifdef CONFIG_PM |
diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c index e03e58933141..580cd381adf3 100644 --- a/drivers/hwtracing/coresight/coresight-etb10.c +++ b/drivers/hwtracing/coresight/coresight-etb10.c | |||
@@ -33,7 +33,6 @@ | |||
33 | #include <linux/mm.h> | 33 | #include <linux/mm.h> |
34 | #include <linux/perf_event.h> | 34 | #include <linux/perf_event.h> |
35 | 35 | ||
36 | #include <asm/local.h> | ||
37 | 36 | ||
38 | #include "coresight-priv.h" | 37 | #include "coresight-priv.h" |
39 | 38 | ||
diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c index fd3c396717f6..9f8ac0bef853 100644 --- a/drivers/hwtracing/coresight/coresight-funnel.c +++ b/drivers/hwtracing/coresight/coresight-funnel.c | |||
@@ -214,10 +214,8 @@ static int funnel_probe(struct amba_device *adev, const struct amba_id *id) | |||
214 | desc.dev = dev; | 214 | desc.dev = dev; |
215 | desc.groups = coresight_funnel_groups; | 215 | desc.groups = coresight_funnel_groups; |
216 | drvdata->csdev = coresight_register(&desc); | 216 | drvdata->csdev = coresight_register(&desc); |
217 | if (IS_ERR(drvdata->csdev)) | ||
218 | return PTR_ERR(drvdata->csdev); | ||
219 | 217 | ||
220 | return 0; | 218 | return PTR_ERR_OR_ZERO(drvdata->csdev); |
221 | } | 219 | } |
222 | 220 | ||
223 | #ifdef CONFIG_PM | 221 | #ifdef CONFIG_PM |
diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c index bef49a3a5ca7..805f7c2210fe 100644 --- a/drivers/hwtracing/coresight/coresight-tpiu.c +++ b/drivers/hwtracing/coresight/coresight-tpiu.c | |||
@@ -46,8 +46,11 @@ | |||
46 | #define TPIU_ITATBCTR0 0xef8 | 46 | #define TPIU_ITATBCTR0 0xef8 |
47 | 47 | ||
48 | /** register definition **/ | 48 | /** register definition **/ |
49 | /* FFSR - 0x300 */ | ||
50 | #define FFSR_FT_STOPPED BIT(1) | ||
49 | /* FFCR - 0x304 */ | 51 | /* FFCR - 0x304 */ |
50 | #define FFCR_FON_MAN BIT(6) | 52 | #define FFCR_FON_MAN BIT(6) |
53 | #define FFCR_STOP_FI BIT(12) | ||
51 | 54 | ||
52 | /** | 55 | /** |
53 | * @base: memory mapped base address for this component. | 56 | * @base: memory mapped base address for this component. |
@@ -85,10 +88,14 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata) | |||
85 | { | 88 | { |
86 | CS_UNLOCK(drvdata->base); | 89 | CS_UNLOCK(drvdata->base); |
87 | 90 | ||
88 | /* Clear formatter controle reg. */ | 91 | /* Clear formatter and stop on flush */ |
89 | writel_relaxed(0x0, drvdata->base + TPIU_FFCR); | 92 | writel_relaxed(FFCR_STOP_FI, drvdata->base + TPIU_FFCR); |
90 | /* Generate manual flush */ | 93 | /* Generate manual flush */ |
91 | writel_relaxed(FFCR_FON_MAN, drvdata->base + TPIU_FFCR); | 94 | writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR); |
95 | /* Wait for flush to complete */ | ||
96 | coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0); | ||
97 | /* Wait for formatter to stop */ | ||
98 | coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1); | ||
92 | 99 | ||
93 | CS_LOCK(drvdata->base); | 100 | CS_LOCK(drvdata->base); |
94 | } | 101 | } |
@@ -160,10 +167,8 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id) | |||
160 | desc.pdata = pdata; | 167 | desc.pdata = pdata; |
161 | desc.dev = dev; | 168 | desc.dev = dev; |
162 | drvdata->csdev = coresight_register(&desc); | 169 | drvdata->csdev = coresight_register(&desc); |
163 | if (IS_ERR(drvdata->csdev)) | ||
164 | return PTR_ERR(drvdata->csdev); | ||
165 | 170 | ||
166 | return 0; | 171 | return PTR_ERR_OR_ZERO(drvdata->csdev); |
167 | } | 172 | } |
168 | 173 | ||
169 | #ifdef CONFIG_PM | 174 | #ifdef CONFIG_PM |
diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c index b8091bef21dc..389c4baeca9d 100644 --- a/drivers/hwtracing/coresight/coresight.c +++ b/drivers/hwtracing/coresight/coresight.c | |||
@@ -843,32 +843,17 @@ static void coresight_fixup_orphan_conns(struct coresight_device *csdev) | |||
843 | } | 843 | } |
844 | 844 | ||
845 | 845 | ||
846 | static int coresight_name_match(struct device *dev, void *data) | ||
847 | { | ||
848 | char *to_match; | ||
849 | struct coresight_device *i_csdev; | ||
850 | |||
851 | to_match = data; | ||
852 | i_csdev = to_coresight_device(dev); | ||
853 | |||
854 | if (to_match && !strcmp(to_match, dev_name(&i_csdev->dev))) | ||
855 | return 1; | ||
856 | |||
857 | return 0; | ||
858 | } | ||
859 | |||
860 | static void coresight_fixup_device_conns(struct coresight_device *csdev) | 846 | static void coresight_fixup_device_conns(struct coresight_device *csdev) |
861 | { | 847 | { |
862 | int i; | 848 | int i; |
863 | struct device *dev = NULL; | ||
864 | struct coresight_connection *conn; | ||
865 | 849 | ||
866 | for (i = 0; i < csdev->nr_outport; i++) { | 850 | for (i = 0; i < csdev->nr_outport; i++) { |
867 | conn = &csdev->conns[i]; | 851 | struct coresight_connection *conn = &csdev->conns[i]; |
868 | dev = bus_find_device(&coresight_bustype, NULL, | 852 | struct device *dev = NULL; |
869 | (void *)conn->child_name, | ||
870 | coresight_name_match); | ||
871 | 853 | ||
854 | if (conn->child_name) | ||
855 | dev = bus_find_device_by_name(&coresight_bustype, NULL, | ||
856 | conn->child_name); | ||
872 | if (dev) { | 857 | if (dev) { |
873 | conn->child_dev = to_coresight_device(dev); | 858 | conn->child_dev = to_coresight_device(dev); |
874 | /* and put reference from 'bus_find_device()' */ | 859 | /* and put reference from 'bus_find_device()' */ |
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig index 7c0fa24f9067..6722073e339b 100644 --- a/drivers/misc/Kconfig +++ b/drivers/misc/Kconfig | |||
@@ -53,7 +53,7 @@ config AD525X_DPOT_SPI | |||
53 | 53 | ||
54 | config ATMEL_TCLIB | 54 | config ATMEL_TCLIB |
55 | bool "Atmel AT32/AT91 Timer/Counter Library" | 55 | bool "Atmel AT32/AT91 Timer/Counter Library" |
56 | depends on (AVR32 || ARCH_AT91) | 56 | depends on ARCH_AT91 |
57 | help | 57 | help |
58 | Select this if you want a library to allocate the Timer/Counter | 58 | Select this if you want a library to allocate the Timer/Counter |
59 | blocks found on many Atmel processors. This facilitates using | 59 | blocks found on many Atmel processors. This facilitates using |
@@ -192,7 +192,7 @@ config ICS932S401 | |||
192 | 192 | ||
193 | config ATMEL_SSC | 193 | config ATMEL_SSC |
194 | tristate "Device driver for Atmel SSC peripheral" | 194 | tristate "Device driver for Atmel SSC peripheral" |
195 | depends on HAS_IOMEM && (AVR32 || ARCH_AT91 || COMPILE_TEST) | 195 | depends on HAS_IOMEM && (ARCH_AT91 || COMPILE_TEST) |
196 | ---help--- | 196 | ---help--- |
197 | This option enables device driver support for Atmel Synchronized | 197 | This option enables device driver support for Atmel Synchronized |
198 | Serial Communication peripheral (SSC). | 198 | Serial Communication peripheral (SSC). |
diff --git a/drivers/misc/ad525x_dpot.c b/drivers/misc/ad525x_dpot.c index fe1672747bc1..bc591b7168db 100644 --- a/drivers/misc/ad525x_dpot.c +++ b/drivers/misc/ad525x_dpot.c | |||
@@ -3,7 +3,7 @@ | |||
3 | * Copyright (c) 2009-2010 Analog Devices, Inc. | 3 | * Copyright (c) 2009-2010 Analog Devices, Inc. |
4 | * Author: Michael Hennerich <hennerich@blackfin.uclinux.org> | 4 | * Author: Michael Hennerich <hennerich@blackfin.uclinux.org> |
5 | * | 5 | * |
6 | * DEVID #Wipers #Positions Resistor Options (kOhm) | 6 | * DEVID #Wipers #Positions Resistor Options (kOhm) |
7 | * AD5258 1 64 1, 10, 50, 100 | 7 | * AD5258 1 64 1, 10, 50, 100 |
8 | * AD5259 1 256 5, 10, 50, 100 | 8 | * AD5259 1 256 5, 10, 50, 100 |
9 | * AD5251 2 64 1, 10, 50, 100 | 9 | * AD5251 2 64 1, 10, 50, 100 |
@@ -84,12 +84,12 @@ | |||
84 | struct dpot_data { | 84 | struct dpot_data { |
85 | struct ad_dpot_bus_data bdata; | 85 | struct ad_dpot_bus_data bdata; |
86 | struct mutex update_lock; | 86 | struct mutex update_lock; |
87 | unsigned rdac_mask; | 87 | unsigned int rdac_mask; |
88 | unsigned max_pos; | 88 | unsigned int max_pos; |
89 | unsigned long devid; | 89 | unsigned long devid; |
90 | unsigned uid; | 90 | unsigned int uid; |
91 | unsigned feat; | 91 | unsigned int feat; |
92 | unsigned wipers; | 92 | unsigned int wipers; |
93 | u16 rdac_cache[MAX_RDACS]; | 93 | u16 rdac_cache[MAX_RDACS]; |
94 | DECLARE_BITMAP(otp_en_mask, MAX_RDACS); | 94 | DECLARE_BITMAP(otp_en_mask, MAX_RDACS); |
95 | }; | 95 | }; |
@@ -126,7 +126,7 @@ static inline int dpot_write_r8d16(struct dpot_data *dpot, u8 reg, u16 val) | |||
126 | 126 | ||
127 | static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg) | 127 | static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg) |
128 | { | 128 | { |
129 | unsigned ctrl = 0; | 129 | unsigned int ctrl = 0; |
130 | int value; | 130 | int value; |
131 | 131 | ||
132 | if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD))) { | 132 | if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD))) { |
@@ -175,7 +175,7 @@ static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg) | |||
175 | static s32 dpot_read_i2c(struct dpot_data *dpot, u8 reg) | 175 | static s32 dpot_read_i2c(struct dpot_data *dpot, u8 reg) |
176 | { | 176 | { |
177 | int value; | 177 | int value; |
178 | unsigned ctrl = 0; | 178 | unsigned int ctrl = 0; |
179 | 179 | ||
180 | switch (dpot->uid) { | 180 | switch (dpot->uid) { |
181 | case DPOT_UID(AD5246_ID): | 181 | case DPOT_UID(AD5246_ID): |
@@ -238,7 +238,7 @@ static s32 dpot_read(struct dpot_data *dpot, u8 reg) | |||
238 | 238 | ||
239 | static s32 dpot_write_spi(struct dpot_data *dpot, u8 reg, u16 value) | 239 | static s32 dpot_write_spi(struct dpot_data *dpot, u8 reg, u16 value) |
240 | { | 240 | { |
241 | unsigned val = 0; | 241 | unsigned int val = 0; |
242 | 242 | ||
243 | if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD | DPOT_ADDR_OTP))) { | 243 | if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD | DPOT_ADDR_OTP))) { |
244 | if (dpot->feat & F_RDACS_WONLY) | 244 | if (dpot->feat & F_RDACS_WONLY) |
@@ -328,7 +328,7 @@ static s32 dpot_write_spi(struct dpot_data *dpot, u8 reg, u16 value) | |||
328 | static s32 dpot_write_i2c(struct dpot_data *dpot, u8 reg, u16 value) | 328 | static s32 dpot_write_i2c(struct dpot_data *dpot, u8 reg, u16 value) |
329 | { | 329 | { |
330 | /* Only write the instruction byte for certain commands */ | 330 | /* Only write the instruction byte for certain commands */ |
331 | unsigned tmp = 0, ctrl = 0; | 331 | unsigned int tmp = 0, ctrl = 0; |
332 | 332 | ||
333 | switch (dpot->uid) { | 333 | switch (dpot->uid) { |
334 | case DPOT_UID(AD5246_ID): | 334 | case DPOT_UID(AD5246_ID): |
@@ -515,11 +515,11 @@ set_##_name(struct device *dev, \ | |||
515 | #define DPOT_DEVICE_SHOW_SET(name, reg) \ | 515 | #define DPOT_DEVICE_SHOW_SET(name, reg) \ |
516 | DPOT_DEVICE_SHOW(name, reg) \ | 516 | DPOT_DEVICE_SHOW(name, reg) \ |
517 | DPOT_DEVICE_SET(name, reg) \ | 517 | DPOT_DEVICE_SET(name, reg) \ |
518 | static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, set_##name); | 518 | static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, set_##name) |
519 | 519 | ||
520 | #define DPOT_DEVICE_SHOW_ONLY(name, reg) \ | 520 | #define DPOT_DEVICE_SHOW_ONLY(name, reg) \ |
521 | DPOT_DEVICE_SHOW(name, reg) \ | 521 | DPOT_DEVICE_SHOW(name, reg) \ |
522 | static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, NULL); | 522 | static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, NULL) |
523 | 523 | ||
524 | DPOT_DEVICE_SHOW_SET(rdac0, DPOT_ADDR_RDAC | DPOT_RDAC0); | 524 | DPOT_DEVICE_SHOW_SET(rdac0, DPOT_ADDR_RDAC | DPOT_RDAC0); |
525 | DPOT_DEVICE_SHOW_SET(eeprom0, DPOT_ADDR_EEPROM | DPOT_RDAC0); | 525 | DPOT_DEVICE_SHOW_SET(eeprom0, DPOT_ADDR_EEPROM | DPOT_RDAC0); |
@@ -616,7 +616,7 @@ set_##_name(struct device *dev, \ | |||
616 | { \ | 616 | { \ |
617 | return sysfs_do_cmd(dev, attr, buf, count, _cmd); \ | 617 | return sysfs_do_cmd(dev, attr, buf, count, _cmd); \ |
618 | } \ | 618 | } \ |
619 | static DEVICE_ATTR(_name, S_IWUSR | S_IRUGO, NULL, set_##_name); | 619 | static DEVICE_ATTR(_name, S_IWUSR | S_IRUGO, NULL, set_##_name) |
620 | 620 | ||
621 | DPOT_DEVICE_DO_CMD(inc_all, DPOT_INC_ALL); | 621 | DPOT_DEVICE_DO_CMD(inc_all, DPOT_INC_ALL); |
622 | DPOT_DEVICE_DO_CMD(dec_all, DPOT_DEC_ALL); | 622 | DPOT_DEVICE_DO_CMD(dec_all, DPOT_DEC_ALL); |
@@ -636,7 +636,7 @@ static const struct attribute_group ad525x_group_commands = { | |||
636 | }; | 636 | }; |
637 | 637 | ||
638 | static int ad_dpot_add_files(struct device *dev, | 638 | static int ad_dpot_add_files(struct device *dev, |
639 | unsigned features, unsigned rdac) | 639 | unsigned int features, unsigned int rdac) |
640 | { | 640 | { |
641 | int err = sysfs_create_file(&dev->kobj, | 641 | int err = sysfs_create_file(&dev->kobj, |
642 | dpot_attrib_wipers[rdac]); | 642 | dpot_attrib_wipers[rdac]); |
@@ -661,7 +661,7 @@ static int ad_dpot_add_files(struct device *dev, | |||
661 | } | 661 | } |
662 | 662 | ||
663 | static inline void ad_dpot_remove_files(struct device *dev, | 663 | static inline void ad_dpot_remove_files(struct device *dev, |
664 | unsigned features, unsigned rdac) | 664 | unsigned int features, unsigned int rdac) |
665 | { | 665 | { |
666 | sysfs_remove_file(&dev->kobj, | 666 | sysfs_remove_file(&dev->kobj, |
667 | dpot_attrib_wipers[rdac]); | 667 | dpot_attrib_wipers[rdac]); |
diff --git a/drivers/misc/ad525x_dpot.h b/drivers/misc/ad525x_dpot.h index 6bd1eba23bc0..443a51fd5680 100644 --- a/drivers/misc/ad525x_dpot.h +++ b/drivers/misc/ad525x_dpot.h | |||
@@ -195,12 +195,12 @@ enum dpot_devid { | |||
195 | struct dpot_data; | 195 | struct dpot_data; |
196 | 196 | ||
197 | struct ad_dpot_bus_ops { | 197 | struct ad_dpot_bus_ops { |
198 | int (*read_d8) (void *client); | 198 | int (*read_d8)(void *client); |
199 | int (*read_r8d8) (void *client, u8 reg); | 199 | int (*read_r8d8)(void *client, u8 reg); |
200 | int (*read_r8d16) (void *client, u8 reg); | 200 | int (*read_r8d16)(void *client, u8 reg); |
201 | int (*write_d8) (void *client, u8 val); | 201 | int (*write_d8)(void *client, u8 val); |
202 | int (*write_r8d8) (void *client, u8 reg, u8 val); | 202 | int (*write_r8d8)(void *client, u8 reg, u8 val); |
203 | int (*write_r8d16) (void *client, u8 reg, u16 val); | 203 | int (*write_r8d16)(void *client, u8 reg, u16 val); |
204 | }; | 204 | }; |
205 | 205 | ||
206 | struct ad_dpot_bus_data { | 206 | struct ad_dpot_bus_data { |
diff --git a/drivers/misc/apds990x.c b/drivers/misc/apds990x.c index c9f07032c2fc..ed9412d750b7 100644 --- a/drivers/misc/apds990x.c +++ b/drivers/misc/apds990x.c | |||
@@ -715,6 +715,7 @@ static ssize_t apds990x_rate_avail(struct device *dev, | |||
715 | { | 715 | { |
716 | int i; | 716 | int i; |
717 | int pos = 0; | 717 | int pos = 0; |
718 | |||
718 | for (i = 0; i < ARRAY_SIZE(arates_hz); i++) | 719 | for (i = 0; i < ARRAY_SIZE(arates_hz); i++) |
719 | pos += sprintf(buf + pos, "%d ", arates_hz[i]); | 720 | pos += sprintf(buf + pos, "%d ", arates_hz[i]); |
720 | sprintf(buf + pos - 1, "\n"); | 721 | sprintf(buf + pos - 1, "\n"); |
@@ -725,6 +726,7 @@ static ssize_t apds990x_rate_show(struct device *dev, | |||
725 | struct device_attribute *attr, char *buf) | 726 | struct device_attribute *attr, char *buf) |
726 | { | 727 | { |
727 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 728 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
729 | |||
728 | return sprintf(buf, "%d\n", chip->arate); | 730 | return sprintf(buf, "%d\n", chip->arate); |
729 | } | 731 | } |
730 | 732 | ||
@@ -784,6 +786,7 @@ static ssize_t apds990x_prox_show(struct device *dev, | |||
784 | { | 786 | { |
785 | ssize_t ret; | 787 | ssize_t ret; |
786 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 788 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
789 | |||
787 | if (pm_runtime_suspended(dev) || !chip->prox_en) | 790 | if (pm_runtime_suspended(dev) || !chip->prox_en) |
788 | return -EIO; | 791 | return -EIO; |
789 | 792 | ||
@@ -807,6 +810,7 @@ static ssize_t apds990x_prox_enable_show(struct device *dev, | |||
807 | struct device_attribute *attr, char *buf) | 810 | struct device_attribute *attr, char *buf) |
808 | { | 811 | { |
809 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 812 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
813 | |||
810 | return sprintf(buf, "%d\n", chip->prox_en); | 814 | return sprintf(buf, "%d\n", chip->prox_en); |
811 | } | 815 | } |
812 | 816 | ||
@@ -847,6 +851,7 @@ static ssize_t apds990x_prox_reporting_mode_show(struct device *dev, | |||
847 | struct device_attribute *attr, char *buf) | 851 | struct device_attribute *attr, char *buf) |
848 | { | 852 | { |
849 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 853 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
854 | |||
850 | return sprintf(buf, "%s\n", | 855 | return sprintf(buf, "%s\n", |
851 | reporting_modes[!!chip->prox_continuous_mode]); | 856 | reporting_modes[!!chip->prox_continuous_mode]); |
852 | } | 857 | } |
@@ -884,6 +889,7 @@ static ssize_t apds990x_lux_thresh_above_show(struct device *dev, | |||
884 | struct device_attribute *attr, char *buf) | 889 | struct device_attribute *attr, char *buf) |
885 | { | 890 | { |
886 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 891 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
892 | |||
887 | return sprintf(buf, "%d\n", chip->lux_thres_hi); | 893 | return sprintf(buf, "%d\n", chip->lux_thres_hi); |
888 | } | 894 | } |
889 | 895 | ||
@@ -891,6 +897,7 @@ static ssize_t apds990x_lux_thresh_below_show(struct device *dev, | |||
891 | struct device_attribute *attr, char *buf) | 897 | struct device_attribute *attr, char *buf) |
892 | { | 898 | { |
893 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 899 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
900 | |||
894 | return sprintf(buf, "%d\n", chip->lux_thres_lo); | 901 | return sprintf(buf, "%d\n", chip->lux_thres_lo); |
895 | } | 902 | } |
896 | 903 | ||
@@ -926,6 +933,7 @@ static ssize_t apds990x_lux_thresh_above_store(struct device *dev, | |||
926 | { | 933 | { |
927 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 934 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
928 | int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_hi, buf); | 935 | int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_hi, buf); |
936 | |||
929 | if (ret < 0) | 937 | if (ret < 0) |
930 | return ret; | 938 | return ret; |
931 | return len; | 939 | return len; |
@@ -937,6 +945,7 @@ static ssize_t apds990x_lux_thresh_below_store(struct device *dev, | |||
937 | { | 945 | { |
938 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 946 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
939 | int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_lo, buf); | 947 | int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_lo, buf); |
948 | |||
940 | if (ret < 0) | 949 | if (ret < 0) |
941 | return ret; | 950 | return ret; |
942 | return len; | 951 | return len; |
@@ -954,6 +963,7 @@ static ssize_t apds990x_prox_threshold_show(struct device *dev, | |||
954 | struct device_attribute *attr, char *buf) | 963 | struct device_attribute *attr, char *buf) |
955 | { | 964 | { |
956 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 965 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
966 | |||
957 | return sprintf(buf, "%d\n", chip->prox_thres); | 967 | return sprintf(buf, "%d\n", chip->prox_thres); |
958 | } | 968 | } |
959 | 969 | ||
@@ -1026,6 +1036,7 @@ static ssize_t apds990x_chip_id_show(struct device *dev, | |||
1026 | struct device_attribute *attr, char *buf) | 1036 | struct device_attribute *attr, char *buf) |
1027 | { | 1037 | { |
1028 | struct apds990x_chip *chip = dev_get_drvdata(dev); | 1038 | struct apds990x_chip *chip = dev_get_drvdata(dev); |
1039 | |||
1029 | return sprintf(buf, "%s %d\n", chip->chipname, chip->revision); | 1040 | return sprintf(buf, "%s %d\n", chip->chipname, chip->revision); |
1030 | } | 1041 | } |
1031 | 1042 | ||
diff --git a/drivers/misc/ds1682.c b/drivers/misc/ds1682.c index 7231260ac287..98a921ea9ee8 100644 --- a/drivers/misc/ds1682.c +++ b/drivers/misc/ds1682.c | |||
@@ -59,25 +59,42 @@ static ssize_t ds1682_show(struct device *dev, struct device_attribute *attr, | |||
59 | { | 59 | { |
60 | struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); | 60 | struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); |
61 | struct i2c_client *client = to_i2c_client(dev); | 61 | struct i2c_client *client = to_i2c_client(dev); |
62 | __le32 val = 0; | 62 | unsigned long long val, check; |
63 | __le32 val_le = 0; | ||
63 | int rc; | 64 | int rc; |
64 | 65 | ||
65 | dev_dbg(dev, "ds1682_show() called on %s\n", attr->attr.name); | 66 | dev_dbg(dev, "ds1682_show() called on %s\n", attr->attr.name); |
66 | 67 | ||
67 | /* Read the register */ | 68 | /* Read the register */ |
68 | rc = i2c_smbus_read_i2c_block_data(client, sattr->index, sattr->nr, | 69 | rc = i2c_smbus_read_i2c_block_data(client, sattr->index, sattr->nr, |
69 | (u8 *) & val); | 70 | (u8 *)&val_le); |
70 | if (rc < 0) | 71 | if (rc < 0) |
71 | return -EIO; | 72 | return -EIO; |
72 | 73 | ||
73 | /* Special case: the 32 bit regs are time values with 1/4s | 74 | val = le32_to_cpu(val_le); |
74 | * resolution, scale them up to milliseconds */ | 75 | |
75 | if (sattr->nr == 4) | 76 | if (sattr->index == DS1682_REG_ELAPSED) { |
76 | return sprintf(buf, "%llu\n", | 77 | int retries = 5; |
77 | ((unsigned long long)le32_to_cpu(val)) * 250); | 78 | |
79 | /* Detect and retry when a tick occurs mid-read */ | ||
80 | do { | ||
81 | rc = i2c_smbus_read_i2c_block_data(client, sattr->index, | ||
82 | sattr->nr, | ||
83 | (u8 *)&val_le); | ||
84 | if (rc < 0 || retries <= 0) | ||
85 | return -EIO; | ||
86 | |||
87 | check = val; | ||
88 | val = le32_to_cpu(val_le); | ||
89 | retries--; | ||
90 | } while (val != check && val != (check + 1)); | ||
91 | } | ||
78 | 92 | ||
79 | /* Format the output string and return # of bytes */ | 93 | /* Format the output string and return # of bytes |
80 | return sprintf(buf, "%li\n", (long)le32_to_cpu(val)); | 94 | * Special case: the 32 bit regs are time values with 1/4s |
95 | * resolution, scale them up to milliseconds | ||
96 | */ | ||
97 | return sprintf(buf, "%llu\n", (sattr->nr == 4) ? (val * 250) : val); | ||
81 | } | 98 | } |
82 | 99 | ||
83 | static ssize_t ds1682_store(struct device *dev, struct device_attribute *attr, | 100 | static ssize_t ds1682_store(struct device *dev, struct device_attribute *attr, |
diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c index 5afe4cd16569..9282ffd607ff 100644 --- a/drivers/misc/eeprom/at25.c +++ b/drivers/misc/eeprom/at25.c | |||
@@ -276,6 +276,9 @@ static int at25_fw_to_chip(struct device *dev, struct spi_eeprom *chip) | |||
276 | return -ENODEV; | 276 | return -ENODEV; |
277 | } | 277 | } |
278 | switch (val) { | 278 | switch (val) { |
279 | case 9: | ||
280 | chip->flags |= EE_INSTR_BIT3_IS_ADDR; | ||
281 | /* fall through */ | ||
279 | case 8: | 282 | case 8: |
280 | chip->flags |= EE_ADDR1; | 283 | chip->flags |= EE_ADDR1; |
281 | break; | 284 | break; |
diff --git a/drivers/misc/enclosure.c b/drivers/misc/enclosure.c index eb29113e0bac..5a17bfeb80d3 100644 --- a/drivers/misc/enclosure.c +++ b/drivers/misc/enclosure.c | |||
@@ -468,7 +468,7 @@ static struct class enclosure_class = { | |||
468 | .dev_groups = enclosure_class_groups, | 468 | .dev_groups = enclosure_class_groups, |
469 | }; | 469 | }; |
470 | 470 | ||
471 | static const char *const enclosure_status [] = { | 471 | static const char *const enclosure_status[] = { |
472 | [ENCLOSURE_STATUS_UNSUPPORTED] = "unsupported", | 472 | [ENCLOSURE_STATUS_UNSUPPORTED] = "unsupported", |
473 | [ENCLOSURE_STATUS_OK] = "OK", | 473 | [ENCLOSURE_STATUS_OK] = "OK", |
474 | [ENCLOSURE_STATUS_CRITICAL] = "critical", | 474 | [ENCLOSURE_STATUS_CRITICAL] = "critical", |
@@ -480,7 +480,7 @@ static const char *const enclosure_status [] = { | |||
480 | [ENCLOSURE_STATUS_MAX] = NULL, | 480 | [ENCLOSURE_STATUS_MAX] = NULL, |
481 | }; | 481 | }; |
482 | 482 | ||
483 | static const char *const enclosure_type [] = { | 483 | static const char *const enclosure_type[] = { |
484 | [ENCLOSURE_COMPONENT_DEVICE] = "device", | 484 | [ENCLOSURE_COMPONENT_DEVICE] = "device", |
485 | [ENCLOSURE_COMPONENT_ARRAY_DEVICE] = "array device", | 485 | [ENCLOSURE_COMPONENT_ARRAY_DEVICE] = "array device", |
486 | }; | 486 | }; |
@@ -680,13 +680,7 @@ ATTRIBUTE_GROUPS(enclosure_component); | |||
680 | 680 | ||
681 | static int __init enclosure_init(void) | 681 | static int __init enclosure_init(void) |
682 | { | 682 | { |
683 | int err; | 683 | return class_register(&enclosure_class); |
684 | |||
685 | err = class_register(&enclosure_class); | ||
686 | if (err) | ||
687 | return err; | ||
688 | |||
689 | return 0; | ||
690 | } | 684 | } |
691 | 685 | ||
692 | static void __exit enclosure_exit(void) | 686 | static void __exit enclosure_exit(void) |
diff --git a/drivers/misc/fsa9480.c b/drivers/misc/fsa9480.c index 71d2793b372c..607b489a6501 100644 --- a/drivers/misc/fsa9480.c +++ b/drivers/misc/fsa9480.c | |||
@@ -465,6 +465,7 @@ fail1: | |||
465 | static int fsa9480_remove(struct i2c_client *client) | 465 | static int fsa9480_remove(struct i2c_client *client) |
466 | { | 466 | { |
467 | struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client); | 467 | struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client); |
468 | |||
468 | if (client->irq) | 469 | if (client->irq) |
469 | free_irq(client->irq, usbsw); | 470 | free_irq(client->irq, usbsw); |
470 | 471 | ||
diff --git a/drivers/misc/genwqe/card_base.c b/drivers/misc/genwqe/card_base.c index 4fd21e86ad56..c7cd3675bcd1 100644 --- a/drivers/misc/genwqe/card_base.c +++ b/drivers/misc/genwqe/card_base.c | |||
@@ -153,11 +153,11 @@ static struct genwqe_dev *genwqe_dev_alloc(void) | |||
153 | cd->card_state = GENWQE_CARD_UNUSED; | 153 | cd->card_state = GENWQE_CARD_UNUSED; |
154 | spin_lock_init(&cd->print_lock); | 154 | spin_lock_init(&cd->print_lock); |
155 | 155 | ||
156 | cd->ddcb_software_timeout = genwqe_ddcb_software_timeout; | 156 | cd->ddcb_software_timeout = GENWQE_DDCB_SOFTWARE_TIMEOUT; |
157 | cd->kill_timeout = genwqe_kill_timeout; | 157 | cd->kill_timeout = GENWQE_KILL_TIMEOUT; |
158 | 158 | ||
159 | for (j = 0; j < GENWQE_MAX_VFS; j++) | 159 | for (j = 0; j < GENWQE_MAX_VFS; j++) |
160 | cd->vf_jobtimeout_msec[j] = genwqe_vf_jobtimeout_msec; | 160 | cd->vf_jobtimeout_msec[j] = GENWQE_VF_JOBTIMEOUT_MSEC; |
161 | 161 | ||
162 | genwqe_devices[i] = cd; | 162 | genwqe_devices[i] = cd; |
163 | return cd; | 163 | return cd; |
@@ -324,11 +324,11 @@ static bool genwqe_setup_pf_jtimer(struct genwqe_dev *cd) | |||
324 | u32 T = genwqe_T_psec(cd); | 324 | u32 T = genwqe_T_psec(cd); |
325 | u64 x; | 325 | u64 x; |
326 | 326 | ||
327 | if (genwqe_pf_jobtimeout_msec == 0) | 327 | if (GENWQE_PF_JOBTIMEOUT_MSEC == 0) |
328 | return false; | 328 | return false; |
329 | 329 | ||
330 | /* PF: large value needed, flash update 2sec per block */ | 330 | /* PF: large value needed, flash update 2sec per block */ |
331 | x = ilog2(genwqe_pf_jobtimeout_msec * | 331 | x = ilog2(GENWQE_PF_JOBTIMEOUT_MSEC * |
332 | 16000000000uL/(T * 15)) - 10; | 332 | 16000000000uL/(T * 15)) - 10; |
333 | 333 | ||
334 | genwqe_write_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, | 334 | genwqe_write_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, |
@@ -904,7 +904,7 @@ static int genwqe_reload_bistream(struct genwqe_dev *cd) | |||
904 | * b) a critical GFIR occured | 904 | * b) a critical GFIR occured |
905 | * | 905 | * |
906 | * Informational GFIRs are checked and potentially printed in | 906 | * Informational GFIRs are checked and potentially printed in |
907 | * health_check_interval seconds. | 907 | * GENWQE_HEALTH_CHECK_INTERVAL seconds. |
908 | */ | 908 | */ |
909 | static int genwqe_health_thread(void *data) | 909 | static int genwqe_health_thread(void *data) |
910 | { | 910 | { |
@@ -918,7 +918,7 @@ static int genwqe_health_thread(void *data) | |||
918 | rc = wait_event_interruptible_timeout(cd->health_waitq, | 918 | rc = wait_event_interruptible_timeout(cd->health_waitq, |
919 | (genwqe_health_check_cond(cd, &gfir) || | 919 | (genwqe_health_check_cond(cd, &gfir) || |
920 | (should_stop = kthread_should_stop())), | 920 | (should_stop = kthread_should_stop())), |
921 | genwqe_health_check_interval * HZ); | 921 | GENWQE_HEALTH_CHECK_INTERVAL * HZ); |
922 | 922 | ||
923 | if (should_stop) | 923 | if (should_stop) |
924 | break; | 924 | break; |
@@ -1028,7 +1028,7 @@ static int genwqe_health_check_start(struct genwqe_dev *cd) | |||
1028 | { | 1028 | { |
1029 | int rc; | 1029 | int rc; |
1030 | 1030 | ||
1031 | if (genwqe_health_check_interval <= 0) | 1031 | if (GENWQE_HEALTH_CHECK_INTERVAL <= 0) |
1032 | return 0; /* valid for disabling the service */ | 1032 | return 0; /* valid for disabling the service */ |
1033 | 1033 | ||
1034 | /* moved before request_irq() */ | 1034 | /* moved before request_irq() */ |
diff --git a/drivers/misc/genwqe/card_base.h b/drivers/misc/genwqe/card_base.h index 3743c87f8ab9..1c3967f10f55 100644 --- a/drivers/misc/genwqe/card_base.h +++ b/drivers/misc/genwqe/card_base.h | |||
@@ -47,13 +47,13 @@ | |||
47 | #define GENWQE_CARD_NO_MAX (16 * GENWQE_MAX_FUNCS) | 47 | #define GENWQE_CARD_NO_MAX (16 * GENWQE_MAX_FUNCS) |
48 | 48 | ||
49 | /* Compile parameters, some of them appear in debugfs for later adjustment */ | 49 | /* Compile parameters, some of them appear in debugfs for later adjustment */ |
50 | #define genwqe_ddcb_max 32 /* DDCBs on the work-queue */ | 50 | #define GENWQE_DDCB_MAX 32 /* DDCBs on the work-queue */ |
51 | #define genwqe_polling_enabled 0 /* in case of irqs not working */ | 51 | #define GENWQE_POLLING_ENABLED 0 /* in case of irqs not working */ |
52 | #define genwqe_ddcb_software_timeout 10 /* timeout per DDCB in seconds */ | 52 | #define GENWQE_DDCB_SOFTWARE_TIMEOUT 10 /* timeout per DDCB in seconds */ |
53 | #define genwqe_kill_timeout 8 /* time until process gets killed */ | 53 | #define GENWQE_KILL_TIMEOUT 8 /* time until process gets killed */ |
54 | #define genwqe_vf_jobtimeout_msec 250 /* 250 msec */ | 54 | #define GENWQE_VF_JOBTIMEOUT_MSEC 250 /* 250 msec */ |
55 | #define genwqe_pf_jobtimeout_msec 8000 /* 8 sec should be ok */ | 55 | #define GENWQE_PF_JOBTIMEOUT_MSEC 8000 /* 8 sec should be ok */ |
56 | #define genwqe_health_check_interval 4 /* <= 0: disabled */ | 56 | #define GENWQE_HEALTH_CHECK_INTERVAL 4 /* <= 0: disabled */ |
57 | 57 | ||
58 | /* Sysfs attribute groups used when we create the genwqe device */ | 58 | /* Sysfs attribute groups used when we create the genwqe device */ |
59 | extern const struct attribute_group *genwqe_attribute_groups[]; | 59 | extern const struct attribute_group *genwqe_attribute_groups[]; |
@@ -490,11 +490,9 @@ int genwqe_read_app_id(struct genwqe_dev *cd, char *app_name, int len); | |||
490 | 490 | ||
491 | /* Memory allocation/deallocation; dma address handling */ | 491 | /* Memory allocation/deallocation; dma address handling */ |
492 | int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, | 492 | int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, |
493 | void *uaddr, unsigned long size, | 493 | void *uaddr, unsigned long size); |
494 | struct ddcb_requ *req); | ||
495 | 494 | ||
496 | int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m, | 495 | int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m); |
497 | struct ddcb_requ *req); | ||
498 | 496 | ||
499 | static inline bool dma_mapping_used(struct dma_mapping *m) | 497 | static inline bool dma_mapping_used(struct dma_mapping *m) |
500 | { | 498 | { |
diff --git a/drivers/misc/genwqe/card_ddcb.c b/drivers/misc/genwqe/card_ddcb.c index ddfeefe39540..b7f8d35c17a9 100644 --- a/drivers/misc/genwqe/card_ddcb.c +++ b/drivers/misc/genwqe/card_ddcb.c | |||
@@ -500,7 +500,7 @@ int __genwqe_wait_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req) | |||
500 | 500 | ||
501 | rc = wait_event_interruptible_timeout(queue->ddcb_waitqs[ddcb_no], | 501 | rc = wait_event_interruptible_timeout(queue->ddcb_waitqs[ddcb_no], |
502 | ddcb_requ_finished(cd, req), | 502 | ddcb_requ_finished(cd, req), |
503 | genwqe_ddcb_software_timeout * HZ); | 503 | GENWQE_DDCB_SOFTWARE_TIMEOUT * HZ); |
504 | 504 | ||
505 | /* | 505 | /* |
506 | * We need to distinguish 3 cases here: | 506 | * We need to distinguish 3 cases here: |
@@ -633,7 +633,7 @@ int __genwqe_purge_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req) | |||
633 | __be32 old, new; | 633 | __be32 old, new; |
634 | 634 | ||
635 | /* unsigned long flags; */ | 635 | /* unsigned long flags; */ |
636 | if (genwqe_ddcb_software_timeout <= 0) { | 636 | if (GENWQE_DDCB_SOFTWARE_TIMEOUT <= 0) { |
637 | dev_err(&pci_dev->dev, | 637 | dev_err(&pci_dev->dev, |
638 | "[%s] err: software timeout is not set!\n", __func__); | 638 | "[%s] err: software timeout is not set!\n", __func__); |
639 | return -EFAULT; | 639 | return -EFAULT; |
@@ -641,7 +641,7 @@ int __genwqe_purge_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req) | |||
641 | 641 | ||
642 | pddcb = &queue->ddcb_vaddr[req->num]; | 642 | pddcb = &queue->ddcb_vaddr[req->num]; |
643 | 643 | ||
644 | for (t = 0; t < genwqe_ddcb_software_timeout * 10; t++) { | 644 | for (t = 0; t < GENWQE_DDCB_SOFTWARE_TIMEOUT * 10; t++) { |
645 | 645 | ||
646 | spin_lock_irqsave(&queue->ddcb_lock, flags); | 646 | spin_lock_irqsave(&queue->ddcb_lock, flags); |
647 | 647 | ||
@@ -718,7 +718,7 @@ go_home: | |||
718 | 718 | ||
719 | dev_err(&pci_dev->dev, | 719 | dev_err(&pci_dev->dev, |
720 | "[%s] err: DDCB#%d not purged and not completed after %d seconds QSTAT=%016llx!!\n", | 720 | "[%s] err: DDCB#%d not purged and not completed after %d seconds QSTAT=%016llx!!\n", |
721 | __func__, req->num, genwqe_ddcb_software_timeout, | 721 | __func__, req->num, GENWQE_DDCB_SOFTWARE_TIMEOUT, |
722 | queue_status); | 722 | queue_status); |
723 | 723 | ||
724 | print_ddcb_info(cd, req->queue); | 724 | print_ddcb_info(cd, req->queue); |
@@ -778,7 +778,7 @@ int __genwqe_enqueue_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req, | |||
778 | /* FIXME circumvention to improve performance when no irq is | 778 | /* FIXME circumvention to improve performance when no irq is |
779 | * there. | 779 | * there. |
780 | */ | 780 | */ |
781 | if (genwqe_polling_enabled) | 781 | if (GENWQE_POLLING_ENABLED) |
782 | genwqe_check_ddcb_queue(cd, queue); | 782 | genwqe_check_ddcb_queue(cd, queue); |
783 | 783 | ||
784 | /* | 784 | /* |
@@ -878,7 +878,7 @@ int __genwqe_enqueue_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req, | |||
878 | pddcb->icrc_hsi_shi_32 = cpu_to_be32((u32)icrc << 16); | 878 | pddcb->icrc_hsi_shi_32 = cpu_to_be32((u32)icrc << 16); |
879 | 879 | ||
880 | /* enable DDCB completion irq */ | 880 | /* enable DDCB completion irq */ |
881 | if (!genwqe_polling_enabled) | 881 | if (!GENWQE_POLLING_ENABLED) |
882 | pddcb->icrc_hsi_shi_32 |= DDCB_INTR_BE32; | 882 | pddcb->icrc_hsi_shi_32 |= DDCB_INTR_BE32; |
883 | 883 | ||
884 | dev_dbg(&pci_dev->dev, "INPUT DDCB#%d\n", req->num); | 884 | dev_dbg(&pci_dev->dev, "INPUT DDCB#%d\n", req->num); |
@@ -1028,10 +1028,10 @@ static int setup_ddcb_queue(struct genwqe_dev *cd, struct ddcb_queue *queue) | |||
1028 | unsigned int queue_size; | 1028 | unsigned int queue_size; |
1029 | struct pci_dev *pci_dev = cd->pci_dev; | 1029 | struct pci_dev *pci_dev = cd->pci_dev; |
1030 | 1030 | ||
1031 | if (genwqe_ddcb_max < 2) | 1031 | if (GENWQE_DDCB_MAX < 2) |
1032 | return -EINVAL; | 1032 | return -EINVAL; |
1033 | 1033 | ||
1034 | queue_size = roundup(genwqe_ddcb_max * sizeof(struct ddcb), PAGE_SIZE); | 1034 | queue_size = roundup(GENWQE_DDCB_MAX * sizeof(struct ddcb), PAGE_SIZE); |
1035 | 1035 | ||
1036 | queue->ddcbs_in_flight = 0; /* statistics */ | 1036 | queue->ddcbs_in_flight = 0; /* statistics */ |
1037 | queue->ddcbs_max_in_flight = 0; | 1037 | queue->ddcbs_max_in_flight = 0; |
@@ -1040,7 +1040,7 @@ static int setup_ddcb_queue(struct genwqe_dev *cd, struct ddcb_queue *queue) | |||
1040 | queue->wait_on_busy = 0; | 1040 | queue->wait_on_busy = 0; |
1041 | 1041 | ||
1042 | queue->ddcb_seq = 0x100; /* start sequence number */ | 1042 | queue->ddcb_seq = 0x100; /* start sequence number */ |
1043 | queue->ddcb_max = genwqe_ddcb_max; /* module parameter */ | 1043 | queue->ddcb_max = GENWQE_DDCB_MAX; |
1044 | queue->ddcb_vaddr = __genwqe_alloc_consistent(cd, queue_size, | 1044 | queue->ddcb_vaddr = __genwqe_alloc_consistent(cd, queue_size, |
1045 | &queue->ddcb_daddr); | 1045 | &queue->ddcb_daddr); |
1046 | if (queue->ddcb_vaddr == NULL) { | 1046 | if (queue->ddcb_vaddr == NULL) { |
@@ -1194,7 +1194,7 @@ static int genwqe_card_thread(void *data) | |||
1194 | 1194 | ||
1195 | genwqe_check_ddcb_queue(cd, &cd->queue); | 1195 | genwqe_check_ddcb_queue(cd, &cd->queue); |
1196 | 1196 | ||
1197 | if (genwqe_polling_enabled) { | 1197 | if (GENWQE_POLLING_ENABLED) { |
1198 | rc = wait_event_interruptible_timeout( | 1198 | rc = wait_event_interruptible_timeout( |
1199 | cd->queue_waitq, | 1199 | cd->queue_waitq, |
1200 | genwqe_ddcbs_in_flight(cd) || | 1200 | genwqe_ddcbs_in_flight(cd) || |
@@ -1340,7 +1340,7 @@ static int queue_wake_up_all(struct genwqe_dev *cd) | |||
1340 | int genwqe_finish_queue(struct genwqe_dev *cd) | 1340 | int genwqe_finish_queue(struct genwqe_dev *cd) |
1341 | { | 1341 | { |
1342 | int i, rc = 0, in_flight; | 1342 | int i, rc = 0, in_flight; |
1343 | int waitmax = genwqe_ddcb_software_timeout; | 1343 | int waitmax = GENWQE_DDCB_SOFTWARE_TIMEOUT; |
1344 | struct pci_dev *pci_dev = cd->pci_dev; | 1344 | struct pci_dev *pci_dev = cd->pci_dev; |
1345 | struct ddcb_queue *queue = &cd->queue; | 1345 | struct ddcb_queue *queue = &cd->queue; |
1346 | 1346 | ||
diff --git a/drivers/misc/genwqe/card_debugfs.c b/drivers/misc/genwqe/card_debugfs.c index c715534e7fe7..f921dd590271 100644 --- a/drivers/misc/genwqe/card_debugfs.c +++ b/drivers/misc/genwqe/card_debugfs.c | |||
@@ -198,7 +198,7 @@ static int genwqe_jtimer_show(struct seq_file *s, void *unused) | |||
198 | 198 | ||
199 | jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, 0); | 199 | jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, 0); |
200 | seq_printf(s, " PF 0x%016llx %d msec\n", jtimer, | 200 | seq_printf(s, " PF 0x%016llx %d msec\n", jtimer, |
201 | genwqe_pf_jobtimeout_msec); | 201 | GENWQE_PF_JOBTIMEOUT_MSEC); |
202 | 202 | ||
203 | for (vf_num = 0; vf_num < cd->num_vfs; vf_num++) { | 203 | for (vf_num = 0; vf_num < cd->num_vfs; vf_num++) { |
204 | jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, | 204 | jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, |
diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
index 3ecfa35457e0..0dd6b5ef314a 100644
--- a/drivers/misc/genwqe/card_dev.c
+++ b/drivers/misc/genwqe/card_dev.c
@@ -226,7 +226,7 @@ static void genwqe_remove_mappings(struct genwqe_file *cfile) | |||
226 | kfree(dma_map); | 226 | kfree(dma_map); |
227 | } else if (dma_map->type == GENWQE_MAPPING_SGL_TEMP) { | 227 | } else if (dma_map->type == GENWQE_MAPPING_SGL_TEMP) { |
228 | /* we use dma_map statically from the request */ | 228 | /* we use dma_map statically from the request */ |
229 | genwqe_user_vunmap(cd, dma_map, NULL); | 229 | genwqe_user_vunmap(cd, dma_map); |
230 | } | 230 | } |
231 | } | 231 | } |
232 | } | 232 | } |
@@ -249,7 +249,7 @@ static void genwqe_remove_pinnings(struct genwqe_file *cfile) | |||
249 | * deleted. | 249 | * deleted. |
250 | */ | 250 | */ |
251 | list_del_init(&dma_map->pin_list); | 251 | list_del_init(&dma_map->pin_list); |
252 | genwqe_user_vunmap(cd, dma_map, NULL); | 252 | genwqe_user_vunmap(cd, dma_map); |
253 | kfree(dma_map); | 253 | kfree(dma_map); |
254 | } | 254 | } |
255 | } | 255 | } |
@@ -790,7 +790,7 @@ static int genwqe_pin_mem(struct genwqe_file *cfile, struct genwqe_mem *m) | |||
790 | return -ENOMEM; | 790 | return -ENOMEM; |
791 | 791 | ||
792 | genwqe_mapping_init(dma_map, GENWQE_MAPPING_SGL_PINNED); | 792 | genwqe_mapping_init(dma_map, GENWQE_MAPPING_SGL_PINNED); |
793 | rc = genwqe_user_vmap(cd, dma_map, (void *)map_addr, map_size, NULL); | 793 | rc = genwqe_user_vmap(cd, dma_map, (void *)map_addr, map_size); |
794 | if (rc != 0) { | 794 | if (rc != 0) { |
795 | dev_err(&pci_dev->dev, | 795 | dev_err(&pci_dev->dev, |
796 | "[%s] genwqe_user_vmap rc=%d\n", __func__, rc); | 796 | "[%s] genwqe_user_vmap rc=%d\n", __func__, rc); |
@@ -820,7 +820,7 @@ static int genwqe_unpin_mem(struct genwqe_file *cfile, struct genwqe_mem *m) | |||
820 | return -ENOENT; | 820 | return -ENOENT; |
821 | 821 | ||
822 | genwqe_del_pin(cfile, dma_map); | 822 | genwqe_del_pin(cfile, dma_map); |
823 | genwqe_user_vunmap(cd, dma_map, NULL); | 823 | genwqe_user_vunmap(cd, dma_map); |
824 | kfree(dma_map); | 824 | kfree(dma_map); |
825 | return 0; | 825 | return 0; |
826 | } | 826 | } |
@@ -841,7 +841,7 @@ static int ddcb_cmd_cleanup(struct genwqe_file *cfile, struct ddcb_requ *req) | |||
841 | 841 | ||
842 | if (dma_mapping_used(dma_map)) { | 842 | if (dma_mapping_used(dma_map)) { |
843 | __genwqe_del_mapping(cfile, dma_map); | 843 | __genwqe_del_mapping(cfile, dma_map); |
844 | genwqe_user_vunmap(cd, dma_map, req); | 844 | genwqe_user_vunmap(cd, dma_map); |
845 | } | 845 | } |
846 | if (req->sgls[i].sgl != NULL) | 846 | if (req->sgls[i].sgl != NULL) |
847 | genwqe_free_sync_sgl(cd, &req->sgls[i]); | 847 | genwqe_free_sync_sgl(cd, &req->sgls[i]); |
@@ -947,7 +947,7 @@ static int ddcb_cmd_fixups(struct genwqe_file *cfile, struct ddcb_requ *req) | |||
947 | m->write = 0; | 947 | m->write = 0; |
948 | 948 | ||
949 | rc = genwqe_user_vmap(cd, m, (void *)u_addr, | 949 | rc = genwqe_user_vmap(cd, m, (void *)u_addr, |
950 | u_size, req); | 950 | u_size); |
951 | if (rc != 0) | 951 | if (rc != 0) |
952 | goto err_out; | 952 | goto err_out; |
953 | 953 | ||
@@ -1011,7 +1011,6 @@ static int do_execute_ddcb(struct genwqe_file *cfile, | |||
1011 | { | 1011 | { |
1012 | int rc; | 1012 | int rc; |
1013 | struct genwqe_ddcb_cmd *cmd; | 1013 | struct genwqe_ddcb_cmd *cmd; |
1014 | struct ddcb_requ *req; | ||
1015 | struct genwqe_dev *cd = cfile->cd; | 1014 | struct genwqe_dev *cd = cfile->cd; |
1016 | struct file *filp = cfile->filp; | 1015 | struct file *filp = cfile->filp; |
1017 | 1016 | ||
@@ -1019,8 +1018,6 @@ static int do_execute_ddcb(struct genwqe_file *cfile, | |||
1019 | if (cmd == NULL) | 1018 | if (cmd == NULL) |
1020 | return -ENOMEM; | 1019 | return -ENOMEM; |
1021 | 1020 | ||
1022 | req = container_of(cmd, struct ddcb_requ, cmd); | ||
1023 | |||
1024 | if (copy_from_user(cmd, (void __user *)arg, sizeof(*cmd))) { | 1021 | if (copy_from_user(cmd, (void __user *)arg, sizeof(*cmd))) { |
1025 | ddcb_requ_free(cmd); | 1022 | ddcb_requ_free(cmd); |
1026 | return -EFAULT; | 1023 | return -EFAULT; |
@@ -1345,7 +1342,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd) | |||
1345 | rc = genwqe_kill_fasync(cd, SIGIO); | 1342 | rc = genwqe_kill_fasync(cd, SIGIO); |
1346 | if (rc > 0) { | 1343 | if (rc > 0) { |
1347 | /* give kill_timeout seconds to close file descriptors ... */ | 1344 | /* give kill_timeout seconds to close file descriptors ... */ |
1348 | for (i = 0; (i < genwqe_kill_timeout) && | 1345 | for (i = 0; (i < GENWQE_KILL_TIMEOUT) && |
1349 | genwqe_open_files(cd); i++) { | 1346 | genwqe_open_files(cd); i++) { |
1350 | dev_info(&pci_dev->dev, " %d sec ...", i); | 1347 | dev_info(&pci_dev->dev, " %d sec ...", i); |
1351 | 1348 | ||
@@ -1363,7 +1360,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd) | |||
1363 | rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */ | 1360 | rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */ |
1364 | if (rc) { | 1361 | if (rc) { |
1365 | /* Give kill_timout more seconds to end processes */ | 1362 | /* Give kill_timout more seconds to end processes */ |
1366 | for (i = 0; (i < genwqe_kill_timeout) && | 1363 | for (i = 0; (i < GENWQE_KILL_TIMEOUT) && |
1367 | genwqe_open_files(cd); i++) { | 1364 | genwqe_open_files(cd); i++) { |
1368 | dev_warn(&pci_dev->dev, " %d sec ...", i); | 1365 | dev_warn(&pci_dev->dev, " %d sec ...", i); |
1369 | 1366 | ||
diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
index 5c0d917636f7..8f2e6442d88b 100644
--- a/drivers/misc/genwqe/card_utils.c
+++ b/drivers/misc/genwqe/card_utils.c
@@ -524,22 +524,16 @@ int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl) | |||
524 | } | 524 | } |
525 | 525 | ||
526 | /** | 526 | /** |
527 | * free_user_pages() - Give pinned pages back | 527 | * genwqe_free_user_pages() - Give pinned pages back |
528 | * | 528 | * |
529 | * Documentation of get_user_pages is in mm/memory.c: | 529 | * Documentation of get_user_pages is in mm/gup.c: |
530 | * | 530 | * |
531 | * If the page is written to, set_page_dirty (or set_page_dirty_lock, | 531 | * If the page is written to, set_page_dirty (or set_page_dirty_lock, |
532 | * as appropriate) must be called after the page is finished with, and | 532 | * as appropriate) must be called after the page is finished with, and |
533 | * before put_page is called. | 533 | * before put_page is called. |
534 | * | ||
535 | * FIXME Could be of use to others and might belong in the generic | ||
536 | * code, if others agree. E.g. | ||
537 | * ll_free_user_pages in drivers/staging/lustre/lustre/llite/rw26.c | ||
538 | * ceph_put_page_vector in net/ceph/pagevec.c | ||
539 | * maybe more? | ||
540 | */ | 534 | */ |
541 | static int free_user_pages(struct page **page_list, unsigned int nr_pages, | 535 | static int genwqe_free_user_pages(struct page **page_list, |
542 | int dirty) | 536 | unsigned int nr_pages, int dirty) |
543 | { | 537 | { |
544 | unsigned int i; | 538 | unsigned int i; |
545 | 539 | ||
@@ -577,7 +571,7 @@ static int free_user_pages(struct page **page_list, unsigned int nr_pages, | |||
577 | * Return: 0 if success | 571 | * Return: 0 if success |
578 | */ | 572 | */ |
579 | int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr, | 573 | int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr, |
580 | unsigned long size, struct ddcb_requ *req) | 574 | unsigned long size) |
581 | { | 575 | { |
582 | int rc = -EINVAL; | 576 | int rc = -EINVAL; |
583 | unsigned long data, offs; | 577 | unsigned long data, offs; |
@@ -617,7 +611,7 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr, | |||
617 | 611 | ||
618 | /* assumption: get_user_pages can be killed by signals. */ | 612 | /* assumption: get_user_pages can be killed by signals. */ |
619 | if (rc < m->nr_pages) { | 613 | if (rc < m->nr_pages) { |
620 | free_user_pages(m->page_list, rc, m->write); | 614 | genwqe_free_user_pages(m->page_list, rc, m->write); |
621 | rc = -EFAULT; | 615 | rc = -EFAULT; |
622 | goto fail_get_user_pages; | 616 | goto fail_get_user_pages; |
623 | } | 617 | } |
@@ -629,7 +623,7 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr, | |||
629 | return 0; | 623 | return 0; |
630 | 624 | ||
631 | fail_free_user_pages: | 625 | fail_free_user_pages: |
632 | free_user_pages(m->page_list, m->nr_pages, m->write); | 626 | genwqe_free_user_pages(m->page_list, m->nr_pages, m->write); |
633 | 627 | ||
634 | fail_get_user_pages: | 628 | fail_get_user_pages: |
635 | kfree(m->page_list); | 629 | kfree(m->page_list); |
@@ -647,8 +641,7 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr, | |||
647 | * @cd: pointer to genwqe device | 641 | * @cd: pointer to genwqe device |
648 | * @m: mapping params | 642 | * @m: mapping params |
649 | */ | 643 | */ |
650 | int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m, | 644 | int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m) |
651 | struct ddcb_requ *req) | ||
652 | { | 645 | { |
653 | struct pci_dev *pci_dev = cd->pci_dev; | 646 | struct pci_dev *pci_dev = cd->pci_dev; |
654 | 647 | ||
@@ -662,7 +655,7 @@ int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m, | |||
662 | genwqe_unmap_pages(cd, m->dma_list, m->nr_pages); | 655 | genwqe_unmap_pages(cd, m->dma_list, m->nr_pages); |
663 | 656 | ||
664 | if (m->page_list) { | 657 | if (m->page_list) { |
665 | free_user_pages(m->page_list, m->nr_pages, m->write); | 658 | genwqe_free_user_pages(m->page_list, m->nr_pages, m->write); |
666 | 659 | ||
667 | kfree(m->page_list); | 660 | kfree(m->page_list); |
668 | m->page_list = NULL; | 661 | m->page_list = NULL; |
diff --git a/drivers/misc/hpilo.c b/drivers/misc/hpilo.c
index 95ce3e891b1b..35693c0a78e2 100644
--- a/drivers/misc/hpilo.c
+++ b/drivers/misc/hpilo.c
@@ -1,12 +1,9 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
1 | /* | 2 | /* |
2 | * Driver for the HP iLO management processor. | 3 | * Driver for the HP iLO management processor. |
3 | * | 4 | * |
4 | * Copyright (C) 2008 Hewlett-Packard Development Company, L.P. | 5 | * Copyright (C) 2008 Hewlett-Packard Development Company, L.P. |
5 | * David Altobelli <david.altobelli@hpe.com> | 6 | * David Altobelli <david.altobelli@hpe.com> |
6 | * | ||
7 | * This program is free software; you can redistribute it and/or modify | ||
8 | * it under the terms of the GNU General Public License version 2 as | ||
9 | * published by the Free Software Foundation. | ||
10 | */ | 7 | */ |
11 | #include <linux/kernel.h> | 8 | #include <linux/kernel.h> |
12 | #include <linux/types.h> | 9 | #include <linux/types.h> |
diff --git a/drivers/misc/hpilo.h b/drivers/misc/hpilo.h
index b97672e0cf90..94dfb9e40e29 100644
--- a/drivers/misc/hpilo.h
+++ b/drivers/misc/hpilo.h
@@ -1,12 +1,9 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
1 | /* | 2 | /* |
2 | * linux/drivers/char/hpilo.h | 3 | * linux/drivers/char/hpilo.h |
3 | * | 4 | * |
4 | * Copyright (C) 2008 Hewlett-Packard Development Company, L.P. | 5 | * Copyright (C) 2008 Hewlett-Packard Development Company, L.P. |
5 | * David Altobelli <david.altobelli@hp.com> | 6 | * David Altobelli <david.altobelli@hp.com> |
6 | * | ||
7 | * This program is free software; you can redistribute it and/or modify | ||
8 | * it under the terms of the GNU General Public License version 2 as | ||
9 | * published by the Free Software Foundation. | ||
10 | */ | 7 | */ |
11 | #ifndef __HPILO_H | 8 | #ifndef __HPILO_H |
12 | #define __HPILO_H | 9 | #define __HPILO_H |
diff --git a/drivers/misc/ics932s401.c b/drivers/misc/ics932s401.c
index 28f51e01fd2b..81a0541ef3ac 100644
--- a/drivers/misc/ics932s401.c
+++ b/drivers/misc/ics932s401.c
@@ -33,7 +33,7 @@ static const unsigned short normal_i2c[] = { 0x69, I2C_CLIENT_END }; | |||
33 | 33 | ||
34 | /* ICS932S401 registers */ | 34 | /* ICS932S401 registers */ |
35 | #define ICS932S401_REG_CFG2 0x01 | 35 | #define ICS932S401_REG_CFG2 0x01 |
36 | #define ICS932S401_CFG1_SPREAD 0x01 | 36 | #define ICS932S401_CFG1_SPREAD 0x01 |
37 | #define ICS932S401_REG_CFG7 0x06 | 37 | #define ICS932S401_REG_CFG7 0x06 |
38 | #define ICS932S401_FS_MASK 0x07 | 38 | #define ICS932S401_FS_MASK 0x07 |
39 | #define ICS932S401_REG_VENDOR_REV 0x07 | 39 | #define ICS932S401_REG_VENDOR_REV 0x07 |
@@ -58,7 +58,7 @@ static const unsigned short normal_i2c[] = { 0x69, I2C_CLIENT_END }; | |||
58 | #define ICS932S401_REG_SRC_SPREAD1 0x11 | 58 | #define ICS932S401_REG_SRC_SPREAD1 0x11 |
59 | #define ICS932S401_REG_SRC_SPREAD2 0x12 | 59 | #define ICS932S401_REG_SRC_SPREAD2 0x12 |
60 | #define ICS932S401_REG_CPU_DIVISOR 0x13 | 60 | #define ICS932S401_REG_CPU_DIVISOR 0x13 |
61 | #define ICS932S401_CPU_DIVISOR_SHIFT 4 | 61 | #define ICS932S401_CPU_DIVISOR_SHIFT 4 |
62 | #define ICS932S401_REG_PCISRC_DIVISOR 0x14 | 62 | #define ICS932S401_REG_PCISRC_DIVISOR 0x14 |
63 | #define ICS932S401_SRC_DIVISOR_MASK 0x0F | 63 | #define ICS932S401_SRC_DIVISOR_MASK 0x0F |
64 | #define ICS932S401_PCI_DIVISOR_SHIFT 4 | 64 | #define ICS932S401_PCI_DIVISOR_SHIFT 4 |
@@ -225,6 +225,7 @@ static ssize_t show_cpu_clock_sel(struct device *dev, | |||
225 | else { | 225 | else { |
226 | /* Freq is neatly wrapped up for us */ | 226 | /* Freq is neatly wrapped up for us */ |
227 | int fid = data->regs[ICS932S401_REG_CFG7] & ICS932S401_FS_MASK; | 227 | int fid = data->regs[ICS932S401_REG_CFG7] & ICS932S401_FS_MASK; |
228 | |||
228 | freq = fs_speeds[fid]; | 229 | freq = fs_speeds[fid]; |
229 | if (data->regs[ICS932S401_REG_CTRL] & ICS932S401_CPU_ALT) { | 230 | if (data->regs[ICS932S401_REG_CTRL] & ICS932S401_CPU_ALT) { |
230 | switch (freq) { | 231 | switch (freq) { |
@@ -352,8 +353,7 @@ static DEVICE_ATTR(ref_clock, S_IRUGO, show_value, NULL); | |||
352 | static DEVICE_ATTR(cpu_spread, S_IRUGO, show_spread, NULL); | 353 | static DEVICE_ATTR(cpu_spread, S_IRUGO, show_spread, NULL); |
353 | static DEVICE_ATTR(src_spread, S_IRUGO, show_spread, NULL); | 354 | static DEVICE_ATTR(src_spread, S_IRUGO, show_spread, NULL); |
354 | 355 | ||
355 | static struct attribute *ics932s401_attr[] = | 356 | static struct attribute *ics932s401_attr[] = { |
356 | { | ||
357 | &dev_attr_spread_enabled.attr, | 357 | &dev_attr_spread_enabled.attr, |
358 | &dev_attr_cpu_clock_selection.attr, | 358 | &dev_attr_cpu_clock_selection.attr, |
359 | &dev_attr_cpu_clock.attr, | 359 | &dev_attr_cpu_clock.attr, |
diff --git a/drivers/misc/isl29003.c b/drivers/misc/isl29003.c
index 976df0013633..b8032882c865 100644
--- a/drivers/misc/isl29003.c
+++ b/drivers/misc/isl29003.c
@@ -78,6 +78,7 @@ static int __isl29003_read_reg(struct i2c_client *client, | |||
78 | u32 reg, u8 mask, u8 shift) | 78 | u32 reg, u8 mask, u8 shift) |
79 | { | 79 | { |
80 | struct isl29003_data *data = i2c_get_clientdata(client); | 80 | struct isl29003_data *data = i2c_get_clientdata(client); |
81 | |||
81 | return (data->reg_cache[reg] & mask) >> shift; | 82 | return (data->reg_cache[reg] & mask) >> shift; |
82 | } | 83 | } |
83 | 84 | ||
@@ -160,6 +161,7 @@ static int isl29003_get_power_state(struct i2c_client *client) | |||
160 | { | 161 | { |
161 | struct isl29003_data *data = i2c_get_clientdata(client); | 162 | struct isl29003_data *data = i2c_get_clientdata(client); |
162 | u8 cmdreg = data->reg_cache[ISL29003_REG_COMMAND]; | 163 | u8 cmdreg = data->reg_cache[ISL29003_REG_COMMAND]; |
164 | |||
163 | return ~cmdreg & ISL29003_ADC_PD; | 165 | return ~cmdreg & ISL29003_ADC_PD; |
164 | } | 166 | } |
165 | 167 | ||
@@ -196,6 +198,7 @@ static ssize_t isl29003_show_range(struct device *dev, | |||
196 | struct device_attribute *attr, char *buf) | 198 | struct device_attribute *attr, char *buf) |
197 | { | 199 | { |
198 | struct i2c_client *client = to_i2c_client(dev); | 200 | struct i2c_client *client = to_i2c_client(dev); |
201 | |||
199 | return sprintf(buf, "%i\n", isl29003_get_range(client)); | 202 | return sprintf(buf, "%i\n", isl29003_get_range(client)); |
200 | } | 203 | } |
201 | 204 | ||
@@ -231,6 +234,7 @@ static ssize_t isl29003_show_resolution(struct device *dev, | |||
231 | char *buf) | 234 | char *buf) |
232 | { | 235 | { |
233 | struct i2c_client *client = to_i2c_client(dev); | 236 | struct i2c_client *client = to_i2c_client(dev); |
237 | |||
234 | return sprintf(buf, "%d\n", isl29003_get_resolution(client)); | 238 | return sprintf(buf, "%d\n", isl29003_get_resolution(client)); |
235 | } | 239 | } |
236 | 240 | ||
@@ -264,6 +268,7 @@ static ssize_t isl29003_show_mode(struct device *dev, | |||
264 | struct device_attribute *attr, char *buf) | 268 | struct device_attribute *attr, char *buf) |
265 | { | 269 | { |
266 | struct i2c_client *client = to_i2c_client(dev); | 270 | struct i2c_client *client = to_i2c_client(dev); |
271 | |||
267 | return sprintf(buf, "%d\n", isl29003_get_mode(client)); | 272 | return sprintf(buf, "%d\n", isl29003_get_mode(client)); |
268 | } | 273 | } |
269 | 274 | ||
@@ -298,6 +303,7 @@ static ssize_t isl29003_show_power_state(struct device *dev, | |||
298 | char *buf) | 303 | char *buf) |
299 | { | 304 | { |
300 | struct i2c_client *client = to_i2c_client(dev); | 305 | struct i2c_client *client = to_i2c_client(dev); |
306 | |||
301 | return sprintf(buf, "%d\n", isl29003_get_power_state(client)); | 307 | return sprintf(buf, "%d\n", isl29003_get_power_state(client)); |
302 | } | 308 | } |
303 | 309 | ||
@@ -361,6 +367,7 @@ static int isl29003_init_client(struct i2c_client *client) | |||
361 | * if one of the reads fails, we consider the init failed */ | 367 | * if one of the reads fails, we consider the init failed */ |
362 | for (i = 0; i < ARRAY_SIZE(data->reg_cache); i++) { | 368 | for (i = 0; i < ARRAY_SIZE(data->reg_cache); i++) { |
363 | int v = i2c_smbus_read_byte_data(client, i); | 369 | int v = i2c_smbus_read_byte_data(client, i); |
370 | |||
364 | if (v < 0) | 371 | if (v < 0) |
365 | return -ENODEV; | 372 | return -ENODEV; |
366 | 373 | ||
diff --git a/drivers/misc/lkdtm_core.c b/drivers/misc/lkdtm_core.c
index ba92291508dc..4942da93d066 100644
--- a/drivers/misc/lkdtm_core.c
+++ b/drivers/misc/lkdtm_core.c
@@ -96,7 +96,7 @@ static struct crashpoint crashpoints[] = { | |||
96 | CRASHPOINT("DIRECT", NULL), | 96 | CRASHPOINT("DIRECT", NULL), |
97 | #ifdef CONFIG_KPROBES | 97 | #ifdef CONFIG_KPROBES |
98 | CRASHPOINT("INT_HARDWARE_ENTRY", "do_IRQ"), | 98 | CRASHPOINT("INT_HARDWARE_ENTRY", "do_IRQ"), |
99 | CRASHPOINT("INT_HW_IRQ_EN", "handle_IRQ_event"), | 99 | CRASHPOINT("INT_HW_IRQ_EN", "handle_irq_event"), |
100 | CRASHPOINT("INT_TASKLET_ENTRY", "tasklet_action"), | 100 | CRASHPOINT("INT_TASKLET_ENTRY", "tasklet_action"), |
101 | CRASHPOINT("FS_DEVRW", "ll_rw_block"), | 101 | CRASHPOINT("FS_DEVRW", "ll_rw_block"), |
102 | CRASHPOINT("MEM_SWAPOUT", "shrink_inactive_list"), | 102 | CRASHPOINT("MEM_SWAPOUT", "shrink_inactive_list"), |
diff --git a/drivers/misc/lkdtm_heap.c b/drivers/misc/lkdtm_heap.c
index f5494a6d4be5..65026d7de130 100644
--- a/drivers/misc/lkdtm_heap.c
+++ b/drivers/misc/lkdtm_heap.c
@@ -16,6 +16,8 @@ void lkdtm_OVERWRITE_ALLOCATION(void) | |||
16 | { | 16 | { |
17 | size_t len = 1020; | 17 | size_t len = 1020; |
18 | u32 *data = kmalloc(len, GFP_KERNEL); | 18 | u32 *data = kmalloc(len, GFP_KERNEL); |
19 | if (!data) | ||
20 | return; | ||
19 | 21 | ||
20 | data[1024 / sizeof(u32)] = 0x12345678; | 22 | data[1024 / sizeof(u32)] = 0x12345678; |
21 | kfree(data); | 23 | kfree(data); |
@@ -33,6 +35,8 @@ void lkdtm_WRITE_AFTER_FREE(void) | |||
33 | size_t offset = (len / sizeof(*base)) / 2; | 35 | size_t offset = (len / sizeof(*base)) / 2; |
34 | 36 | ||
35 | base = kmalloc(len, GFP_KERNEL); | 37 | base = kmalloc(len, GFP_KERNEL); |
38 | if (!base) | ||
39 | return; | ||
36 | pr_info("Allocated memory %p-%p\n", base, &base[offset * 2]); | 40 | pr_info("Allocated memory %p-%p\n", base, &base[offset * 2]); |
37 | pr_info("Attempting bad write to freed memory at %p\n", | 41 | pr_info("Attempting bad write to freed memory at %p\n", |
38 | &base[offset]); | 42 | &base[offset]); |
diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
index 1ac10cb64d6e..3e5eabdae8d9 100644
--- a/drivers/misc/mei/bus.c
+++ b/drivers/misc/mei/bus.c
@@ -543,14 +543,20 @@ int mei_cldev_disable(struct mei_cl_device *cldev) | |||
543 | mutex_lock(&bus->device_lock); | 543 | mutex_lock(&bus->device_lock); |
544 | 544 | ||
545 | if (!mei_cl_is_connected(cl)) { | 545 | if (!mei_cl_is_connected(cl)) { |
546 | dev_dbg(bus->dev, "Already disconnected"); | 546 | dev_dbg(bus->dev, "Already disconnected\n"); |
547 | err = 0; | ||
548 | goto out; | ||
549 | } | ||
550 | |||
551 | if (bus->dev_state == MEI_DEV_POWER_DOWN) { | ||
552 | dev_dbg(bus->dev, "Device is powering down, don't bother with disconnection\n"); | ||
547 | err = 0; | 553 | err = 0; |
548 | goto out; | 554 | goto out; |
549 | } | 555 | } |
550 | 556 | ||
551 | err = mei_cl_disconnect(cl); | 557 | err = mei_cl_disconnect(cl); |
552 | if (err < 0) | 558 | if (err < 0) |
553 | dev_err(bus->dev, "Could not disconnect from the ME client"); | 559 | dev_err(bus->dev, "Could not disconnect from the ME client\n"); |
554 | 560 | ||
555 | out: | 561 | out: |
556 | /* Flush queues and remove any pending read */ | 562 | /* Flush queues and remove any pending read */ |
diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
index 10dcf4ff99a5..334ab02e1de2 100644
--- a/drivers/misc/mei/hw-me.c
+++ b/drivers/misc/mei/hw-me.c
@@ -1260,7 +1260,9 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id) | |||
1260 | if (rets == -ENODATA) | 1260 | if (rets == -ENODATA) |
1261 | break; | 1261 | break; |
1262 | 1262 | ||
1263 | if (rets && dev->dev_state != MEI_DEV_RESETTING) { | 1263 | if (rets && |
1264 | (dev->dev_state != MEI_DEV_RESETTING && | ||
1265 | dev->dev_state != MEI_DEV_POWER_DOWN)) { | ||
1264 | dev_err(dev->dev, "mei_irq_read_handler ret = %d.\n", | 1266 | dev_err(dev->dev, "mei_irq_read_handler ret = %d.\n", |
1265 | rets); | 1267 | rets); |
1266 | schedule_work(&dev->reset_work); | 1268 | schedule_work(&dev->reset_work); |
diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c
index 24e4a4c96606..c2c8993e2a51 100644
--- a/drivers/misc/mei/hw-txe.c
+++ b/drivers/misc/mei/hw-txe.c
@@ -1127,7 +1127,9 @@ irqreturn_t mei_txe_irq_thread_handler(int irq, void *dev_id) | |||
1127 | if (test_and_clear_bit(TXE_INTR_OUT_DB_BIT, &hw->intr_cause)) { | 1127 | if (test_and_clear_bit(TXE_INTR_OUT_DB_BIT, &hw->intr_cause)) { |
1128 | /* Read from TXE */ | 1128 | /* Read from TXE */ |
1129 | rets = mei_irq_read_handler(dev, &cmpl_list, &slots); | 1129 | rets = mei_irq_read_handler(dev, &cmpl_list, &slots); |
1130 | if (rets && dev->dev_state != MEI_DEV_RESETTING) { | 1130 | if (rets && |
1131 | (dev->dev_state != MEI_DEV_RESETTING && | ||
1132 | dev->dev_state != MEI_DEV_POWER_DOWN)) { | ||
1131 | dev_err(dev->dev, | 1133 | dev_err(dev->dev, |
1132 | "mei_irq_read_handler ret = %d.\n", rets); | 1134 | "mei_irq_read_handler ret = %d.\n", rets); |
1133 | 1135 | ||
diff --git a/drivers/misc/mei/init.c b/drivers/misc/mei/init.c
index d2f691424dd1..c46f6e99a55e 100644
--- a/drivers/misc/mei/init.c
+++ b/drivers/misc/mei/init.c
@@ -310,6 +310,9 @@ void mei_stop(struct mei_device *dev) | |||
310 | { | 310 | { |
311 | dev_dbg(dev->dev, "stopping the device.\n"); | 311 | dev_dbg(dev->dev, "stopping the device.\n"); |
312 | 312 | ||
313 | mutex_lock(&dev->device_lock); | ||
314 | dev->dev_state = MEI_DEV_POWER_DOWN; | ||
315 | mutex_unlock(&dev->device_lock); | ||
313 | mei_cl_bus_remove_devices(dev); | 316 | mei_cl_bus_remove_devices(dev); |
314 | 317 | ||
315 | mei_cancel_work(dev); | 318 | mei_cancel_work(dev); |
@@ -319,7 +322,6 @@ void mei_stop(struct mei_device *dev) | |||
319 | 322 | ||
320 | mutex_lock(&dev->device_lock); | 323 | mutex_lock(&dev->device_lock); |
321 | 324 | ||
322 | dev->dev_state = MEI_DEV_POWER_DOWN; | ||
323 | mei_reset(dev); | 325 | mei_reset(dev); |
324 | /* move device to disabled state unconditionally */ | 326 | /* move device to disabled state unconditionally */ |
325 | dev->dev_state = MEI_DEV_DISABLED; | 327 | dev->dev_state = MEI_DEV_DISABLED; |
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index f4f17552c9b8..4a0ccda4d04b 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -238,8 +238,11 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | |||
238 | */ | 238 | */ |
239 | mei_me_set_pm_domain(dev); | 239 | mei_me_set_pm_domain(dev); |
240 | 240 | ||
241 | if (mei_pg_is_enabled(dev)) | 241 | if (mei_pg_is_enabled(dev)) { |
242 | pm_runtime_put_noidle(&pdev->dev); | 242 | pm_runtime_put_noidle(&pdev->dev); |
243 | if (hw->d0i3_supported) | ||
244 | pm_runtime_allow(&pdev->dev); | ||
245 | } | ||
243 | 246 | ||
244 | dev_dbg(&pdev->dev, "initialization successful.\n"); | 247 | dev_dbg(&pdev->dev, "initialization successful.\n"); |
245 | 248 | ||
diff --git a/drivers/misc/mic/vop/vop_vringh.c b/drivers/misc/mic/vop/vop_vringh.c
index 4120ed8f0cae..01d1f2ba7bb8 100644
--- a/drivers/misc/mic/vop/vop_vringh.c
+++ b/drivers/misc/mic/vop/vop_vringh.c
@@ -937,13 +937,10 @@ static long vop_ioctl(struct file *f, unsigned int cmd, unsigned long arg) | |||
937 | dd.num_vq > MIC_MAX_VRINGS) | 937 | dd.num_vq > MIC_MAX_VRINGS) |
938 | return -EINVAL; | 938 | return -EINVAL; |
939 | 939 | ||
940 | dd_config = kzalloc(mic_desc_size(&dd), GFP_KERNEL); | 940 | dd_config = memdup_user(argp, mic_desc_size(&dd)); |
941 | if (!dd_config) | 941 | if (IS_ERR(dd_config)) |
942 | return -ENOMEM; | 942 | return PTR_ERR(dd_config); |
943 | if (copy_from_user(dd_config, argp, mic_desc_size(&dd))) { | 943 | |
944 | ret = -EFAULT; | ||
945 | goto free_ret; | ||
946 | } | ||
947 | /* Ensure desc has not changed between the two reads */ | 944 | /* Ensure desc has not changed between the two reads */ |
948 | if (memcmp(&dd, dd_config, sizeof(dd))) { | 945 | if (memcmp(&dd, dd_config, sizeof(dd))) { |
949 | ret = -EINVAL; | 946 | ret = -EINVAL; |
@@ -995,17 +992,12 @@ _unlock_ret: | |||
995 | ret = vop_vdev_inited(vdev); | 992 | ret = vop_vdev_inited(vdev); |
996 | if (ret) | 993 | if (ret) |
997 | goto __unlock_ret; | 994 | goto __unlock_ret; |
998 | buf = kzalloc(vdev->dd->config_len, GFP_KERNEL); | 995 | buf = memdup_user(argp, vdev->dd->config_len); |
999 | if (!buf) { | 996 | if (IS_ERR(buf)) { |
1000 | ret = -ENOMEM; | 997 | ret = PTR_ERR(buf); |
1001 | goto __unlock_ret; | 998 | goto __unlock_ret; |
1002 | } | 999 | } |
1003 | if (copy_from_user(buf, argp, vdev->dd->config_len)) { | ||
1004 | ret = -EFAULT; | ||
1005 | goto done; | ||
1006 | } | ||
1007 | ret = vop_virtio_config_change(vdev, buf); | 1000 | ret = vop_virtio_config_change(vdev, buf); |
1008 | done: | ||
1009 | kfree(buf); | 1001 | kfree(buf); |
1010 | __unlock_ret: | 1002 | __unlock_ret: |
1011 | mutex_unlock(&vdev->vdev_mutex); | 1003 | mutex_unlock(&vdev->vdev_mutex); |
diff --git a/drivers/misc/vexpress-syscfg.c b/drivers/misc/vexpress-syscfg.c
index 2cde80c7bb93..9eea30f54fd6 100644
--- a/drivers/misc/vexpress-syscfg.c
+++ b/drivers/misc/vexpress-syscfg.c
@@ -270,10 +270,8 @@ static int vexpress_syscfg_probe(struct platform_device *pdev) | |||
270 | /* Must use dev.parent (MFD), as that's where DT phandle points at... */ | 270 | /* Must use dev.parent (MFD), as that's where DT phandle points at... */ |
271 | bridge = vexpress_config_bridge_register(pdev->dev.parent, | 271 | bridge = vexpress_config_bridge_register(pdev->dev.parent, |
272 | &vexpress_syscfg_bridge_ops, syscfg); | 272 | &vexpress_syscfg_bridge_ops, syscfg); |
273 | if (IS_ERR(bridge)) | ||
274 | return PTR_ERR(bridge); | ||
275 | 273 | ||
276 | return 0; | 274 | return PTR_ERR_OR_ZERO(bridge); |
277 | } | 275 | } |
278 | 276 | ||
279 | static const struct platform_device_id vexpress_syscfg_id_table[] = { | 277 | static const struct platform_device_id vexpress_syscfg_id_table[] = { |
diff --git a/drivers/mux/Kconfig b/drivers/mux/Kconfig
index 19e4e904c9bf..6241678e99af 100644
--- a/drivers/mux/Kconfig
+++ b/drivers/mux/Kconfig
@@ -1,3 +1,4 @@ | |||
1 | # SPDX-License-Identifier: GPL-2.0 | ||
1 | # | 2 | # |
2 | # Multiplexer devices | 3 | # Multiplexer devices |
3 | # | 4 | # |
diff --git a/drivers/mux/Makefile b/drivers/mux/Makefile
index 0e1e59760e3f..c3d883955fd5 100644
--- a/drivers/mux/Makefile
+++ b/drivers/mux/Makefile
@@ -1,3 +1,4 @@ | |||
1 | # SPDX-License-Identifier: GPL-2.0 | ||
1 | # | 2 | # |
2 | # Makefile for multiplexer devices. | 3 | # Makefile for multiplexer devices. |
3 | # | 4 | # |
diff --git a/drivers/mux/adg792a.c b/drivers/mux/adg792a.c
index 12aa221ab90d..6a8725cf3d71 100644
--- a/drivers/mux/adg792a.c
+++ b/drivers/mux/adg792a.c
@@ -1,13 +1,10 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
1 | /* | 2 | /* |
2 | * Multiplexer driver for Analog Devices ADG792A/G Triple 4:1 mux | 3 | * Multiplexer driver for Analog Devices ADG792A/G Triple 4:1 mux |
3 | * | 4 | * |
4 | * Copyright (C) 2017 Axentia Technologies AB | 5 | * Copyright (C) 2017 Axentia Technologies AB |
5 | * | 6 | * |
6 | * Author: Peter Rosin <peda@axentia.se> | 7 | * Author: Peter Rosin <peda@axentia.se> |
7 | * | ||
8 | * This program is free software; you can redistribute it and/or modify | ||
9 | * it under the terms of the GNU General Public License version 2 as | ||
10 | * published by the Free Software Foundation. | ||
11 | */ | 8 | */ |
12 | 9 | ||
13 | #include <linux/err.h> | 10 | #include <linux/err.h> |
diff --git a/drivers/mux/core.c b/drivers/mux/core.c
index 6e5cf9d9cd99..d1271c1ee23c 100644
--- a/drivers/mux/core.c
+++ b/drivers/mux/core.c
@@ -1,13 +1,10 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
1 | /* | 2 | /* |
2 | * Multiplexer subsystem | 3 | * Multiplexer subsystem |
3 | * | 4 | * |
4 | * Copyright (C) 2017 Axentia Technologies AB | 5 | * Copyright (C) 2017 Axentia Technologies AB |
5 | * | 6 | * |
6 | * Author: Peter Rosin <peda@axentia.se> | 7 | * Author: Peter Rosin <peda@axentia.se> |
7 | * | ||
8 | * This program is free software; you can redistribute it and/or modify | ||
9 | * it under the terms of the GNU General Public License version 2 as | ||
10 | * published by the Free Software Foundation. | ||
11 | */ | 8 | */ |
12 | 9 | ||
13 | #define pr_fmt(fmt) "mux-core: " fmt | 10 | #define pr_fmt(fmt) "mux-core: " fmt |
diff --git a/drivers/mux/gpio.c b/drivers/mux/gpio.c
index 468bf1709606..6fdd9316db8b 100644
--- a/drivers/mux/gpio.c
+++ b/drivers/mux/gpio.c
@@ -1,13 +1,10 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
1 | /* | 2 | /* |
2 | * GPIO-controlled multiplexer driver | 3 | * GPIO-controlled multiplexer driver |
3 | * | 4 | * |
4 | * Copyright (C) 2017 Axentia Technologies AB | 5 | * Copyright (C) 2017 Axentia Technologies AB |
5 | * | 6 | * |
6 | * Author: Peter Rosin <peda@axentia.se> | 7 | * Author: Peter Rosin <peda@axentia.se> |
7 | * | ||
8 | * This program is free software; you can redistribute it and/or modify | ||
9 | * it under the terms of the GNU General Public License version 2 as | ||
10 | * published by the Free Software Foundation. | ||
11 | */ | 8 | */ |
12 | 9 | ||
13 | #include <linux/err.h> | 10 | #include <linux/err.h> |
diff --git a/drivers/mux/mmio.c b/drivers/mux/mmio.c
index 37c1de359a70..935ac44aa209 100644
--- a/drivers/mux/mmio.c
+++ b/drivers/mux/mmio.c
@@ -1,11 +1,8 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
1 | /* | 2 | /* |
2 | * MMIO register bitfield-controlled multiplexer driver | 3 | * MMIO register bitfield-controlled multiplexer driver |
3 | * | 4 | * |
4 | * Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de> | 5 | * Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de> |
5 | * | ||
6 | * This program is free software; you can redistribute it and/or modify | ||
7 | * it under the terms of the GNU General Public License version 2 as | ||
8 | * published by the Free Software Foundation. | ||
9 | */ | 6 | */ |
10 | 7 | ||
11 | #include <linux/bitops.h> | 8 | #include <linux/bitops.h> |
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c index 5a5cefd12153..35a3dbeea324 100644 --- a/drivers/nvmem/core.c +++ b/drivers/nvmem/core.c | |||
@@ -444,7 +444,6 @@ static int nvmem_setup_compat(struct nvmem_device *nvmem, | |||
444 | struct nvmem_device *nvmem_register(const struct nvmem_config *config) | 444 | struct nvmem_device *nvmem_register(const struct nvmem_config *config) |
445 | { | 445 | { |
446 | struct nvmem_device *nvmem; | 446 | struct nvmem_device *nvmem; |
447 | struct device_node *np; | ||
448 | int rval; | 447 | int rval; |
449 | 448 | ||
450 | if (!config->dev) | 449 | if (!config->dev) |
@@ -464,8 +463,8 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) | |||
464 | nvmem->owner = config->owner; | 463 | nvmem->owner = config->owner; |
465 | if (!nvmem->owner && config->dev->driver) | 464 | if (!nvmem->owner && config->dev->driver) |
466 | nvmem->owner = config->dev->driver->owner; | 465 | nvmem->owner = config->dev->driver->owner; |
467 | nvmem->stride = config->stride; | 466 | nvmem->stride = config->stride ?: 1; |
468 | nvmem->word_size = config->word_size; | 467 | nvmem->word_size = config->word_size ?: 1; |
469 | nvmem->size = config->size; | 468 | nvmem->size = config->size; |
470 | nvmem->dev.type = &nvmem_provider_type; | 469 | nvmem->dev.type = &nvmem_provider_type; |
471 | nvmem->dev.bus = &nvmem_bus_type; | 470 | nvmem->dev.bus = &nvmem_bus_type; |
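The `config->stride ?: 1` and `config->word_size ?: 1` changes above use the GNU C binary conditional extension: `a ?: b` evaluates to `a` when `a` is non-zero and to `b` otherwise, evaluating `a` only once. A minimal user-space sketch of the same defaulting (the `effective_stride` helper is illustrative, not kernel code):

```c
/* GCC/Clang extension: a ?: b is equivalent to a ? a : b, with a evaluated once. */
static unsigned int effective_stride(unsigned int configured)
{
	/* fall back to a 1-byte stride when the provider left the field zeroed */
	return configured ?: 1;
}
```

This lets nvmem providers that never set `stride`/`word_size` (implicitly zero) behave as byte-granular devices instead of tripping alignment checks.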
@@ -473,13 +472,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) | |||
473 | nvmem->priv = config->priv; | 472 | nvmem->priv = config->priv; |
474 | nvmem->reg_read = config->reg_read; | 473 | nvmem->reg_read = config->reg_read; |
475 | nvmem->reg_write = config->reg_write; | 474 | nvmem->reg_write = config->reg_write; |
476 | np = config->dev->of_node; | 475 | nvmem->dev.of_node = config->dev->of_node; |
477 | nvmem->dev.of_node = np; | ||
478 | dev_set_name(&nvmem->dev, "%s%d", | 476 | dev_set_name(&nvmem->dev, "%s%d", |
479 | config->name ? : "nvmem", | 477 | config->name ? : "nvmem", |
480 | config->name ? config->id : nvmem->id); | 478 | config->name ? config->id : nvmem->id); |
481 | 479 | ||
482 | nvmem->read_only = of_property_read_bool(np, "read-only") | | 480 | nvmem->read_only = device_property_present(config->dev, "read-only") | |
483 | config->read_only; | 481 | config->read_only; |
484 | 482 | ||
485 | if (config->root_only) | 483 | if (config->root_only) |
@@ -600,16 +598,11 @@ static void __nvmem_device_put(struct nvmem_device *nvmem) | |||
600 | mutex_unlock(&nvmem_mutex); | 598 | mutex_unlock(&nvmem_mutex); |
601 | } | 599 | } |
602 | 600 | ||
603 | static int nvmem_match(struct device *dev, void *data) | ||
604 | { | ||
605 | return !strcmp(dev_name(dev), data); | ||
606 | } | ||
607 | |||
608 | static struct nvmem_device *nvmem_find(const char *name) | 601 | static struct nvmem_device *nvmem_find(const char *name) |
609 | { | 602 | { |
610 | struct device *d; | 603 | struct device *d; |
611 | 604 | ||
612 | d = bus_find_device(&nvmem_bus_type, NULL, (void *)name, nvmem_match); | 605 | d = bus_find_device_by_name(&nvmem_bus_type, NULL, name); |
613 | 606 | ||
614 | if (!d) | 607 | if (!d) |
615 | return NULL; | 608 | return NULL; |
diff --git a/drivers/nvmem/rockchip-efuse.c b/drivers/nvmem/rockchip-efuse.c index 123de77ca5d6..f13a8335f364 100644 --- a/drivers/nvmem/rockchip-efuse.c +++ b/drivers/nvmem/rockchip-efuse.c | |||
@@ -32,6 +32,14 @@ | |||
32 | #define RK3288_STROBE BIT(1) | 32 | #define RK3288_STROBE BIT(1) |
33 | #define RK3288_CSB BIT(0) | 33 | #define RK3288_CSB BIT(0) |
34 | 34 | ||
35 | #define RK3328_SECURE_SIZES 96 | ||
36 | #define RK3328_INT_STATUS 0x0018 | ||
37 | #define RK3328_DOUT 0x0020 | ||
38 | #define RK3328_AUTO_CTRL 0x0024 | ||
39 | #define RK3328_INT_FINISH BIT(0) | ||
40 | #define RK3328_AUTO_ENB BIT(0) | ||
41 | #define RK3328_AUTO_RD BIT(1) | ||
42 | |||
35 | #define RK3399_A_SHIFT 16 | 43 | #define RK3399_A_SHIFT 16 |
36 | #define RK3399_A_MASK 0x3ff | 44 | #define RK3399_A_MASK 0x3ff |
37 | #define RK3399_NBYTES 4 | 45 | #define RK3399_NBYTES 4 |
@@ -92,6 +100,60 @@ static int rockchip_rk3288_efuse_read(void *context, unsigned int offset, | |||
92 | return 0; | 100 | return 0; |
93 | } | 101 | } |
94 | 102 | ||
103 | static int rockchip_rk3328_efuse_read(void *context, unsigned int offset, | ||
104 | void *val, size_t bytes) | ||
105 | { | ||
106 | struct rockchip_efuse_chip *efuse = context; | ||
107 | unsigned int addr_start, addr_end, addr_offset, addr_len; | ||
108 | u32 out_value, status; | ||
109 | u8 *buf; | ||
110 | int ret, i = 0; | ||
111 | |||
112 | ret = clk_prepare_enable(efuse->clk); | ||
113 | if (ret < 0) { | ||
114 | dev_err(efuse->dev, "failed to prepare/enable efuse clk\n"); | ||
115 | return ret; | ||
116 | } | ||
117 | |||
118 | /* 128-byte efuse: 96 bytes for secure, 32 bytes for non-secure */ | ||
119 | offset += RK3328_SECURE_SIZES; | ||
120 | addr_start = rounddown(offset, RK3399_NBYTES) / RK3399_NBYTES; | ||
121 | addr_end = roundup(offset + bytes, RK3399_NBYTES) / RK3399_NBYTES; | ||
122 | addr_offset = offset % RK3399_NBYTES; | ||
123 | addr_len = addr_end - addr_start; | ||
124 | |||
125 | buf = kzalloc(sizeof(*buf) * addr_len * RK3399_NBYTES, GFP_KERNEL); | ||
126 | if (!buf) { | ||
127 | ret = -ENOMEM; | ||
128 | goto nomem; | ||
129 | } | ||
130 | |||
131 | while (addr_len--) { | ||
132 | writel(RK3328_AUTO_RD | RK3328_AUTO_ENB | | ||
133 | ((addr_start++ & RK3399_A_MASK) << RK3399_A_SHIFT), | ||
134 | efuse->base + RK3328_AUTO_CTRL); | ||
135 | udelay(4); | ||
136 | status = readl(efuse->base + RK3328_INT_STATUS); | ||
137 | if (!(status & RK3328_INT_FINISH)) { | ||
138 | ret = -EIO; | ||
139 | goto err; | ||
140 | } | ||
141 | out_value = readl(efuse->base + RK3328_DOUT); | ||
142 | writel(RK3328_INT_FINISH, efuse->base + RK3328_INT_STATUS); | ||
143 | |||
144 | memcpy(&buf[i], &out_value, RK3399_NBYTES); | ||
145 | i += RK3399_NBYTES; | ||
146 | } | ||
147 | |||
148 | memcpy(val, buf + addr_offset, bytes); | ||
149 | err: | ||
150 | kfree(buf); | ||
151 | nomem: | ||
152 | clk_disable_unprepare(efuse->clk); | ||
153 | |||
154 | return ret; | ||
155 | } | ||
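The RK3328 read path above widens an arbitrary `(offset, bytes)` request to whole 4-byte fuse words, reads word by word, and finally copies out only the requested slice. A stand-alone sketch of that index arithmetic (struct and function names are illustrative, not the driver's):

```c
#include <stdint.h>

#define NBYTES 4	/* fuse word size, matching RK3399_NBYTES */

struct fuse_window {
	unsigned int first_word;	/* index of first whole word to read */
	unsigned int nwords;		/* number of words covering the request */
	unsigned int skip;		/* bytes to skip inside the first word */
};

/* Widen a byte-granular request to whole NBYTES-sized fuse words. */
static struct fuse_window fuse_window(unsigned int offset, unsigned int bytes)
{
	struct fuse_window w;
	unsigned int start = offset / NBYTES;			/* rounddown */
	unsigned int end = (offset + bytes + NBYTES - 1) / NBYTES; /* roundup */

	w.first_word = start;
	w.nwords = end - start;
	w.skip = offset % NBYTES;
	return w;
}
```

For example, a 6-byte read at offset 5 covers words 1 and 2 (bytes 4..11), with one leading byte skipped when copying into the caller's buffer.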
156 | |||
95 | static int rockchip_rk3399_efuse_read(void *context, unsigned int offset, | 157 | static int rockchip_rk3399_efuse_read(void *context, unsigned int offset, |
96 | void *val, size_t bytes) | 158 | void *val, size_t bytes) |
97 | { | 159 | { |
@@ -181,6 +243,10 @@ static const struct of_device_id rockchip_efuse_match[] = { | |||
181 | .data = (void *)&rockchip_rk3288_efuse_read, | 243 | .data = (void *)&rockchip_rk3288_efuse_read, |
182 | }, | 244 | }, |
183 | { | 245 | { |
246 | .compatible = "rockchip,rk3328-efuse", | ||
247 | .data = (void *)&rockchip_rk3328_efuse_read, | ||
248 | }, | ||
249 | { | ||
184 | .compatible = "rockchip,rk3399-efuse", | 250 | .compatible = "rockchip,rk3399-efuse", |
185 | .data = (void *)&rockchip_rk3399_efuse_read, | 251 | .data = (void *)&rockchip_rk3399_efuse_read, |
186 | }, | 252 | }, |
@@ -217,7 +283,9 @@ static int rockchip_efuse_probe(struct platform_device *pdev) | |||
217 | return PTR_ERR(efuse->clk); | 283 | return PTR_ERR(efuse->clk); |
218 | 284 | ||
219 | efuse->dev = &pdev->dev; | 285 | efuse->dev = &pdev->dev; |
220 | econfig.size = resource_size(res); | 286 | if (of_property_read_u32(dev->of_node, "rockchip,efuse-size", |
287 | &econfig.size)) | ||
288 | econfig.size = resource_size(res); | ||
221 | econfig.reg_read = match->data; | 289 | econfig.reg_read = match->data; |
222 | econfig.priv = efuse; | 290 | econfig.priv = efuse; |
223 | econfig.dev = efuse->dev; | 291 | econfig.dev = efuse->dev; |
diff --git a/drivers/nvmem/uniphier-efuse.c b/drivers/nvmem/uniphier-efuse.c index 9d278b4e1dc7..be11880a1358 100644 --- a/drivers/nvmem/uniphier-efuse.c +++ b/drivers/nvmem/uniphier-efuse.c | |||
@@ -27,11 +27,11 @@ static int uniphier_reg_read(void *context, | |||
27 | unsigned int reg, void *_val, size_t bytes) | 27 | unsigned int reg, void *_val, size_t bytes) |
28 | { | 28 | { |
29 | struct uniphier_efuse_priv *priv = context; | 29 | struct uniphier_efuse_priv *priv = context; |
30 | u32 *val = _val; | 30 | u8 *val = _val; |
31 | int offs; | 31 | int offs; |
32 | 32 | ||
33 | for (offs = 0; offs < bytes; offs += sizeof(u32)) | 33 | for (offs = 0; offs < bytes; offs += sizeof(u8)) |
34 | *val++ = readl(priv->base + reg + offs); | 34 | *val++ = readb(priv->base + reg + offs); |
35 | 35 | ||
36 | return 0; | 36 | return 0; |
37 | } | 37 | } |
@@ -53,8 +53,8 @@ static int uniphier_efuse_probe(struct platform_device *pdev) | |||
53 | if (IS_ERR(priv->base)) | 53 | if (IS_ERR(priv->base)) |
54 | return PTR_ERR(priv->base); | 54 | return PTR_ERR(priv->base); |
55 | 55 | ||
56 | econfig.stride = 4; | 56 | econfig.stride = 1; |
57 | econfig.word_size = 4; | 57 | econfig.word_size = 1; |
58 | econfig.read_only = true; | 58 | econfig.read_only = true; |
59 | econfig.reg_read = uniphier_reg_read; | 59 | econfig.reg_read = uniphier_reg_read; |
60 | econfig.size = resource_size(res); | 60 | econfig.size = resource_size(res); |
diff --git a/drivers/siox/Kconfig b/drivers/siox/Kconfig new file mode 100644 index 000000000000..083d2e62189a --- /dev/null +++ b/drivers/siox/Kconfig | |||
@@ -0,0 +1,18 @@ | |||
1 | menuconfig SIOX | ||
2 | tristate "Eckelmann SIOX Support" | ||
3 | help | ||
4 | SIOX stands for Serial Input Output eXtension and is a synchronous | ||
5 | bus system invented by Eckelmann AG. It is used in their control and | ||
6 | remote monitoring systems for commercial and industrial refrigeration | ||
7 | to drive additional I/O units. | ||
8 | |||
9 | Unless you know better, it is probably safe to say "no" here. | ||
10 | |||
11 | if SIOX | ||
12 | |||
13 | config SIOX_BUS_GPIO | ||
14 | tristate "SIOX GPIO bus driver" | ||
15 | help | ||
16 | SIOX bus driver that controls the four bus lines using GPIOs. | ||
17 | |||
18 | endif | ||
diff --git a/drivers/siox/Makefile b/drivers/siox/Makefile new file mode 100644 index 000000000000..a956f65206d5 --- /dev/null +++ b/drivers/siox/Makefile | |||
@@ -0,0 +1,2 @@ | |||
1 | obj-$(CONFIG_SIOX) += siox-core.o | ||
2 | obj-$(CONFIG_SIOX_BUS_GPIO) += siox-bus-gpio.o | ||
diff --git a/drivers/siox/siox-bus-gpio.c b/drivers/siox/siox-bus-gpio.c new file mode 100644 index 000000000000..ea7ef982968b --- /dev/null +++ b/drivers/siox/siox-bus-gpio.c | |||
@@ -0,0 +1,172 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> | ||
4 | */ | ||
5 | |||
6 | #include <linux/gpio/consumer.h> | ||
7 | #include <linux/module.h> | ||
8 | #include <linux/platform_device.h> | ||
9 | |||
10 | #include <linux/delay.h> | ||
11 | |||
12 | #include "siox.h" | ||
13 | |||
14 | #define DRIVER_NAME "siox-gpio" | ||
15 | |||
16 | struct siox_gpio_ddata { | ||
17 | struct gpio_desc *din; | ||
18 | struct gpio_desc *dout; | ||
19 | struct gpio_desc *dclk; | ||
20 | struct gpio_desc *dld; | ||
21 | }; | ||
22 | |||
23 | static unsigned int siox_clkhigh_ns = 1000; | ||
24 | static unsigned int siox_loadhigh_ns; | ||
25 | static unsigned int siox_bytegap_ns; | ||
26 | |||
27 | static int siox_gpio_pushpull(struct siox_master *smaster, | ||
28 | size_t setbuf_len, const u8 setbuf[], | ||
29 | size_t getbuf_len, u8 getbuf[]) | ||
30 | { | ||
31 | struct siox_gpio_ddata *ddata = siox_master_get_devdata(smaster); | ||
32 | size_t i; | ||
33 | size_t cycles = max(setbuf_len, getbuf_len); | ||
34 | |||
35 | /* reset data and clock */ | ||
36 | gpiod_set_value_cansleep(ddata->dout, 0); | ||
37 | gpiod_set_value_cansleep(ddata->dclk, 0); | ||
38 | |||
39 | gpiod_set_value_cansleep(ddata->dld, 1); | ||
40 | ndelay(siox_loadhigh_ns); | ||
41 | gpiod_set_value_cansleep(ddata->dld, 0); | ||
42 | |||
43 | for (i = 0; i < cycles; ++i) { | ||
44 | u8 set = 0, get = 0; | ||
45 | size_t j; | ||
46 | |||
47 | if (i >= cycles - setbuf_len) | ||
48 | set = setbuf[i - (cycles - setbuf_len)]; | ||
49 | |||
50 | for (j = 0; j < 8; ++j) { | ||
51 | get <<= 1; | ||
52 | if (gpiod_get_value_cansleep(ddata->din)) | ||
53 | get |= 1; | ||
54 | |||
55 | /* DOUT is logically inverted */ | ||
56 | gpiod_set_value_cansleep(ddata->dout, !(set & 0x80)); | ||
57 | set <<= 1; | ||
58 | |||
59 | gpiod_set_value_cansleep(ddata->dclk, 1); | ||
60 | ndelay(siox_clkhigh_ns); | ||
61 | gpiod_set_value_cansleep(ddata->dclk, 0); | ||
62 | } | ||
63 | |||
64 | if (i < getbuf_len) | ||
65 | getbuf[i] = get; | ||
66 | |||
67 | ndelay(siox_bytegap_ns); | ||
68 | } | ||
69 | |||
70 | gpiod_set_value_cansleep(ddata->dld, 1); | ||
71 | ndelay(siox_loadhigh_ns); | ||
72 | gpiod_set_value_cansleep(ddata->dld, 0); | ||
73 | |||
74 | /* | ||
75 | * Resetting dout isn't necessary protocol-wise, but it makes the | ||
76 | * signals cleaner because the dout level is deterministic between | ||
77 | * cycles. Note that this only affects dout between the master and the | ||
78 | * first siox device. dout for the later devices depends on the output | ||
79 | * of the previous siox device. | ||
80 | */ | ||
81 | gpiod_set_value_cansleep(ddata->dout, 0); | ||
82 | |||
83 | return 0; | ||
84 | } | ||
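In each clock cycle the pushpull loop above samples DIN before driving DOUT, shifting both bytes MSB-first, with DOUT driven logically inverted. A host-side model of that inner 8-bit exchange (pure software; arrays stand in for the GPIO lines, and `exchange_byte` is a hypothetical name):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Exchange one byte MSB-first: sample din[j] as input bit j, then record
 * the (logically inverted) dout level the master drives for that bit.
 */
static uint8_t exchange_byte(uint8_t set, const bool din[8], bool dout[8])
{
	uint8_t get = 0;

	for (int j = 0; j < 8; j++) {
		get <<= 1;
		if (din[j])
			get |= 1;

		dout[j] = !(set & 0x80);	/* DOUT is logically inverted */
		set <<= 1;
	}

	return get;
}
```

The real driver interleaves `gpiod_set_value_cansleep()`/`gpiod_get_value_cansleep()` calls and clock pulses where this model just indexes arrays.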
85 | |||
86 | static int siox_gpio_probe(struct platform_device *pdev) | ||
87 | { | ||
88 | struct device *dev = &pdev->dev; | ||
89 | struct siox_gpio_ddata *ddata; | ||
90 | int ret; | ||
91 | struct siox_master *smaster; | ||
92 | |||
93 | smaster = siox_master_alloc(&pdev->dev, sizeof(*ddata)); | ||
94 | if (!smaster) { | ||
95 | dev_err(dev, "failed to allocate siox master\n"); | ||
96 | return -ENOMEM; | ||
97 | } | ||
98 | |||
99 | platform_set_drvdata(pdev, smaster); | ||
100 | ddata = siox_master_get_devdata(smaster); | ||
101 | |||
102 | ddata->din = devm_gpiod_get(dev, "din", GPIOD_IN); | ||
103 | if (IS_ERR(ddata->din)) { | ||
104 | ret = PTR_ERR(ddata->din); | ||
105 | dev_err(dev, "Failed to get %s GPIO: %d\n", "din", ret); | ||
106 | goto err; | ||
107 | } | ||
108 | |||
109 | ddata->dout = devm_gpiod_get(dev, "dout", GPIOD_OUT_LOW); | ||
110 | if (IS_ERR(ddata->dout)) { | ||
111 | ret = PTR_ERR(ddata->dout); | ||
112 | dev_err(dev, "Failed to get %s GPIO: %d\n", "dout", ret); | ||
113 | goto err; | ||
114 | } | ||
115 | |||
116 | ddata->dclk = devm_gpiod_get(dev, "dclk", GPIOD_OUT_LOW); | ||
117 | if (IS_ERR(ddata->dclk)) { | ||
118 | ret = PTR_ERR(ddata->dclk); | ||
119 | dev_err(dev, "Failed to get %s GPIO: %d\n", "dclk", ret); | ||
120 | goto err; | ||
121 | } | ||
122 | |||
123 | ddata->dld = devm_gpiod_get(dev, "dld", GPIOD_OUT_LOW); | ||
124 | if (IS_ERR(ddata->dld)) { | ||
125 | ret = PTR_ERR(ddata->dld); | ||
126 | dev_err(dev, "Failed to get %s GPIO: %d\n", "dld", ret); | ||
127 | goto err; | ||
128 | } | ||
129 | |||
130 | smaster->pushpull = siox_gpio_pushpull; | ||
131 | /* XXX: determine automatically like spi does */ | ||
132 | smaster->busno = 0; | ||
133 | |||
134 | ret = siox_master_register(smaster); | ||
135 | if (ret) { | ||
136 | dev_err(dev, "Failed to register siox master: %d\n", ret); | ||
137 | err: | ||
138 | siox_master_put(smaster); | ||
139 | } | ||
140 | |||
141 | return ret; | ||
142 | } | ||
143 | |||
144 | static int siox_gpio_remove(struct platform_device *pdev) | ||
145 | { | ||
146 | struct siox_master *master = platform_get_drvdata(pdev); | ||
147 | |||
148 | siox_master_unregister(master); | ||
149 | |||
150 | return 0; | ||
151 | } | ||
152 | |||
153 | static const struct of_device_id siox_gpio_dt_ids[] = { | ||
154 | { .compatible = "eckelmann,siox-gpio", }, | ||
155 | { /* sentinel */ } | ||
156 | }; | ||
157 | MODULE_DEVICE_TABLE(of, siox_gpio_dt_ids); | ||
158 | |||
159 | static struct platform_driver siox_gpio_driver = { | ||
160 | .probe = siox_gpio_probe, | ||
161 | .remove = siox_gpio_remove, | ||
162 | |||
163 | .driver = { | ||
164 | .name = DRIVER_NAME, | ||
165 | .of_match_table = siox_gpio_dt_ids, | ||
166 | }, | ||
167 | }; | ||
168 | module_platform_driver(siox_gpio_driver); | ||
169 | |||
170 | MODULE_AUTHOR("Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>"); | ||
171 | MODULE_LICENSE("GPL v2"); | ||
172 | MODULE_ALIAS("platform:" DRIVER_NAME); | ||
diff --git a/drivers/siox/siox-core.c b/drivers/siox/siox-core.c new file mode 100644 index 000000000000..fdfcdea25867 --- /dev/null +++ b/drivers/siox/siox-core.c | |||
@@ -0,0 +1,934 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> | ||
4 | */ | ||
5 | #include <linux/kernel.h> | ||
6 | #include <linux/device.h> | ||
7 | #include <linux/module.h> | ||
8 | #include <linux/slab.h> | ||
9 | #include <linux/sysfs.h> | ||
10 | |||
11 | #include "siox.h" | ||
12 | |||
13 | /* | ||
14 | * The lowest bit in the SIOX status word signals if the in-device watchdog is | ||
15 | * ok. If the bit is set, the device is functional. | ||
16 | * | ||
17 | * On writes, the in-device watchdog timer is reset when this bit toggles. | ||
18 | */ | ||
19 | #define SIOX_STATUS_WDG 0x01 | ||
20 | |||
21 | /* | ||
22 | * Bits 1 to 3 of the status word read as the bitwise negation of what was | ||
23 | * clocked in before. The value clocked in changes in each cycle, which | ||
24 | * makes it possible to detect transmit/receive problems. | ||
25 | */ | ||
26 | #define SIOX_STATUS_COUNTER 0x0e | ||
27 | |||
28 | /* | ||
29 | * Each SIOX device has a 4-bit type number that is neither 0 nor 15. It is | ||
30 | * available in the upper nibble of the read status. | ||
31 | * | ||
32 | * On write these bits are DC. | ||
33 | */ | ||
34 | #define SIOX_STATUS_TYPE 0xf0 | ||
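The three masks above partition the 8-bit status word: bit 0 is the watchdog flag, bits 3:1 the counter echo, bits 7:4 the device type. A small decoder sketch under those definitions (the struct and function are illustrative, not part of the driver):

```c
#include <stdbool.h>
#include <stdint.h>

#define SIOX_STATUS_WDG		0x01
#define SIOX_STATUS_COUNTER	0x0e
#define SIOX_STATUS_TYPE	0xf0

struct siox_status {
	bool wdg_ok;		/* bit 0: in-device watchdog is ok */
	uint8_t counter;	/* bits 3:1: inverted echo of the last write */
	uint8_t type;		/* bits 7:4: device type, never 0 or 15 */
};

static struct siox_status siox_status_decode(uint8_t status)
{
	struct siox_status s = {
		.wdg_ok  = status & SIOX_STATUS_WDG,
		.counter = (status & SIOX_STATUS_COUNTER) >> 1,
		.type    = (status & SIOX_STATUS_TYPE) >> 4,
	};
	return s;
}
```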
35 | |||
36 | #define CREATE_TRACE_POINTS | ||
37 | #include <trace/events/siox.h> | ||
38 | |||
39 | static bool siox_is_registered; | ||
40 | |||
41 | static void siox_master_lock(struct siox_master *smaster) | ||
42 | { | ||
43 | mutex_lock(&smaster->lock); | ||
44 | } | ||
45 | |||
46 | static void siox_master_unlock(struct siox_master *smaster) | ||
47 | { | ||
48 | mutex_unlock(&smaster->lock); | ||
49 | } | ||
50 | |||
51 | static inline u8 siox_status_clean(u8 status_read, u8 status_written) | ||
52 | { | ||
53 | /* | ||
54 | * bits 3:1 of status sample the respective bit in the status | ||
55 | * byte written in the previous cycle but inverted. So if you wrote the | ||
56 | * status word as 0xa before (counter = 0b101), it is expected to get | ||
57 | * back the counter bits as 0b010. | ||
58 | * | ||
59 | * So, given the last status written, this function toggles the counter | ||
60 | * bits that were written as zero, so that the counter bits in the | ||
61 | * return value are all zero iff the bits were read back as expected. | ||
62 | * This simplifies error detection. | ||
63 | */ | ||
64 | |||
65 | return status_read ^ (~status_written & 0xe); | ||
66 | } | ||
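Working through the 0xa example from the comment: `siox_status_clean()` XORs the read-back status with the inverted counter bits of the previous write, so a fault-free device yields all-zero counter bits. A stand-alone mirror of that one-liner:

```c
#include <stdint.h>

#define SIOX_STATUS_COUNTER 0x0e

/* Mirrors the driver's siox_status_clean(): zero counter bits == ok. */
static uint8_t status_clean(uint8_t status_read, uint8_t status_written)
{
	return status_read ^ (~status_written & SIOX_STATUS_COUNTER);
}
```

Writing 0x0a (counter 0b101) should read back counter 0b010, i.e. status 0x04 in the counter bits; cleaning that yields zero, while an uninverted echo leaves the counter bits set and flags an error.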
67 | |||
68 | static bool siox_device_counter_error(struct siox_device *sdevice, | ||
69 | u8 status_clean) | ||
70 | { | ||
71 | return (status_clean & SIOX_STATUS_COUNTER) != 0; | ||
72 | } | ||
73 | |||
74 | static bool siox_device_type_error(struct siox_device *sdevice, u8 status_clean) | ||
75 | { | ||
76 | u8 statustype = (status_clean & SIOX_STATUS_TYPE) >> 4; | ||
77 | |||
78 | /* | ||
79 | * If the device knows which value the type bits should have, check | ||
80 | * against this value; otherwise just rule out the invalid values 0b0000 | ||
81 | * and 0b1111. | ||
82 | */ | ||
83 | if (sdevice->statustype) { | ||
84 | if (statustype != sdevice->statustype) | ||
85 | return true; | ||
86 | } else { | ||
87 | switch (statustype) { | ||
88 | case 0: | ||
89 | case 0xf: | ||
90 | return true; | ||
91 | } | ||
92 | } | ||
93 | |||
94 | return false; | ||
95 | } | ||
96 | |||
97 | static bool siox_device_wdg_error(struct siox_device *sdevice, u8 status_clean) | ||
98 | { | ||
99 | return (status_clean & SIOX_STATUS_WDG) == 0; | ||
100 | } | ||
101 | |||
102 | /* | ||
103 | * If there is a type or counter error, the device is called "unsynced". | ||
104 | */ | ||
105 | bool siox_device_synced(struct siox_device *sdevice) | ||
106 | { | ||
107 | if (siox_device_type_error(sdevice, sdevice->status_read_clean)) | ||
108 | return false; | ||
109 | |||
110 | return !siox_device_counter_error(sdevice, sdevice->status_read_clean); | ||
111 | |||
112 | } | ||
113 | EXPORT_SYMBOL_GPL(siox_device_synced); | ||
114 | |||
115 | /* | ||
116 | * A device is called "connected" if it is synced and the watchdog is not | ||
117 | * asserted. | ||
118 | */ | ||
119 | bool siox_device_connected(struct siox_device *sdevice) | ||
120 | { | ||
121 | if (!siox_device_synced(sdevice)) | ||
122 | return false; | ||
123 | |||
124 | return !siox_device_wdg_error(sdevice, sdevice->status_read_clean); | ||
125 | } | ||
126 | EXPORT_SYMBOL_GPL(siox_device_connected); | ||
127 | |||
128 | static void siox_poll(struct siox_master *smaster) | ||
129 | { | ||
130 | struct siox_device *sdevice; | ||
131 | size_t i = smaster->setbuf_len; | ||
132 | unsigned int devno = 0; | ||
133 | int unsync_error = 0; | ||
134 | |||
135 | smaster->last_poll = jiffies; | ||
136 | |||
137 | /* | ||
138 | * The counter bits change in each second cycle, the watchdog bit | ||
139 | * toggles each time. | ||
140 | * The counter bits hold values from [0, 6]. 7 would be possible | ||
141 | * theoretically but the protocol designer considered that a bad idea | ||
142 | * for reasons unknown today. (Maybe that's because the status read | ||
143 | * back would then have only zeros in the counter bits, which might be | ||
144 | * confused with a stuck-at-0 error. But by the same reasoning, with | ||
145 | * s/0/1/, the value 0 could be skipped as well.) | ||
146 | */ | ||
147 | if (++smaster->status > 0x0d) | ||
148 | smaster->status = 0; | ||
149 | |||
150 | memset(smaster->buf, 0, smaster->setbuf_len); | ||
151 | |||
152 | /* prepare data pushed out to devices in buf[0..setbuf_len) */ | ||
153 | list_for_each_entry(sdevice, &smaster->devices, node) { | ||
154 | struct siox_driver *sdriver = | ||
155 | to_siox_driver(sdevice->dev.driver); | ||
156 | sdevice->status_written = smaster->status; | ||
157 | |||
158 | i -= sdevice->inbytes; | ||
159 | |||
160 | /* | ||
161 | * If the device or a previous one is unsynced, don't pet the | ||
162 | * watchdog. This is done to ensure that the device is kept in | ||
163 | * reset when something is wrong. | ||
164 | */ | ||
165 | if (!siox_device_synced(sdevice)) | ||
166 | unsync_error = 1; | ||
167 | |||
168 | if (sdriver && !unsync_error) | ||
169 | sdriver->set_data(sdevice, sdevice->status_written, | ||
170 | &smaster->buf[i + 1]); | ||
171 | else | ||
172 | /* | ||
173 | * Don't trigger watchdog if there is no driver or a | ||
174 | * sync problem | ||
175 | */ | ||
176 | sdevice->status_written &= ~SIOX_STATUS_WDG; | ||
177 | |||
178 | smaster->buf[i] = sdevice->status_written; | ||
179 | |||
180 | trace_siox_set_data(smaster, sdevice, devno, i); | ||
181 | |||
182 | devno++; | ||
183 | } | ||
184 | |||
185 | smaster->pushpull(smaster, smaster->setbuf_len, smaster->buf, | ||
186 | smaster->getbuf_len, | ||
187 | smaster->buf + smaster->setbuf_len); | ||
188 | |||
189 | unsync_error = 0; | ||
190 | |||
191 | /* interpret data pulled in from devices in buf[setbuf_len..] */ | ||
192 | devno = 0; | ||
193 | i = smaster->setbuf_len; | ||
194 | list_for_each_entry(sdevice, &smaster->devices, node) { | ||
195 | struct siox_driver *sdriver = | ||
196 | to_siox_driver(sdevice->dev.driver); | ||
197 | u8 status = smaster->buf[i + sdevice->outbytes - 1]; | ||
198 | u8 status_clean; | ||
199 | u8 prev_status_clean = sdevice->status_read_clean; | ||
200 | bool synced = true; | ||
201 | bool connected = true; | ||
202 | |||
203 | if (!siox_device_synced(sdevice)) | ||
204 | unsync_error = 1; | ||
205 | |||
206 | /* | ||
207 | * If the watchdog bit wasn't toggled in this cycle, report the | ||
208 | * watchdog as active to give a consistent view for drivers and | ||
209 | * sysfs consumers. | ||
210 | */ | ||
211 | if (!sdriver || unsync_error) | ||
212 | status &= ~SIOX_STATUS_WDG; | ||
213 | |||
214 | status_clean = | ||
215 | siox_status_clean(status, | ||
216 | sdevice->status_written_lastcycle); | ||
217 | |||
218 | /* Check counter bits */ | ||
219 | if (siox_device_counter_error(sdevice, status_clean)) { | ||
220 | bool prev_counter_error; | ||
221 | |||
222 | synced = false; | ||
223 | |||
224 | /* only report a new error if the last cycle was ok */ | ||
225 | prev_counter_error = | ||
226 | siox_device_counter_error(sdevice, | ||
227 | prev_status_clean); | ||
228 | if (!prev_counter_error) { | ||
229 | sdevice->status_errors++; | ||
230 | sysfs_notify_dirent(sdevice->status_errors_kn); | ||
231 | } | ||
232 | } | ||
233 | |||
234 | /* Check type bits */ | ||
235 | if (siox_device_type_error(sdevice, status_clean)) | ||
236 | synced = false; | ||
237 | |||
238 | /* If the device is unsynced report the watchdog as active */ | ||
239 | if (!synced) { | ||
240 | status &= ~SIOX_STATUS_WDG; | ||
241 | status_clean &= ~SIOX_STATUS_WDG; | ||
242 | } | ||
243 | |||
244 | if (siox_device_wdg_error(sdevice, status_clean)) | ||
245 | connected = false; | ||
246 | |||
247 | /* The watchdog state changed just now */ | ||
248 | if ((status_clean ^ prev_status_clean) & SIOX_STATUS_WDG) { | ||
249 | sysfs_notify_dirent(sdevice->watchdog_kn); | ||
250 | |||
251 | if (siox_device_wdg_error(sdevice, status_clean)) { | ||
252 | struct kernfs_node *wd_errs = | ||
253 | sdevice->watchdog_errors_kn; | ||
254 | |||
255 | sdevice->watchdog_errors++; | ||
256 | sysfs_notify_dirent(wd_errs); | ||
257 | } | ||
258 | } | ||
259 | |||
260 | if (connected != sdevice->connected) | ||
261 | sysfs_notify_dirent(sdevice->connected_kn); | ||
262 | |||
263 | sdevice->status_read_clean = status_clean; | ||
264 | sdevice->status_written_lastcycle = sdevice->status_written; | ||
265 | sdevice->connected = connected; | ||
266 | |||
267 | trace_siox_get_data(smaster, sdevice, devno, status_clean, i); | ||
268 | |||
269 | /* only give data read to driver if the device is connected */ | ||
270 | if (sdriver && connected) | ||
271 | sdriver->get_data(sdevice, &smaster->buf[i]); | ||
272 | |||
273 | devno++; | ||
274 | i += sdevice->outbytes; | ||
275 | } | ||
276 | } | ||
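The `if (++smaster->status > 0x0d)` update at the top of `siox_poll()` makes the status byte run through 14 values, so bit 0 (the watchdog bit) toggles every cycle while the counter bits 3:1 cycle through 0..6, never reaching 7. A minimal mirror of that update (helper name is hypothetical):

```c
#include <stdint.h>

/* Mirrors the driver's per-poll status update: wrap after 0x0d. */
static uint8_t next_status(uint8_t status)
{
	if (++status > 0x0d)
		status = 0;
	return status;
}
```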
277 | |||
278 | static int siox_poll_thread(void *data) | ||
279 | { | ||
280 | struct siox_master *smaster = data; | ||
281 | signed long timeout = 0; | ||
282 | |||
283 | get_device(&smaster->dev); | ||
284 | |||
285 | for (;;) { | ||
286 | if (kthread_should_stop()) { | ||
287 | put_device(&smaster->dev); | ||
288 | return 0; | ||
289 | } | ||
290 | |||
291 | siox_master_lock(smaster); | ||
292 | |||
293 | if (smaster->active) { | ||
294 | unsigned long next_poll = | ||
295 | smaster->last_poll + smaster->poll_interval; | ||
296 | if (time_is_before_eq_jiffies(next_poll)) | ||
297 | siox_poll(smaster); | ||
298 | |||
299 | timeout = smaster->poll_interval - | ||
300 | (jiffies - smaster->last_poll); | ||
301 | } else { | ||
302 | timeout = MAX_SCHEDULE_TIMEOUT; | ||
303 | } | ||
304 | |||
305 | /* | ||
306 | * Set the task to idle while holding the lock. This makes sure | ||
307 | * that we don't sleep too long when the bus is reenabled before | ||
308 | * schedule_timeout is reached. | ||
309 | */ | ||
310 | if (timeout > 0) | ||
311 | set_current_state(TASK_IDLE); | ||
312 | |||
313 | siox_master_unlock(smaster); | ||
314 | |||
315 | if (timeout > 0) | ||
316 | schedule_timeout(timeout); | ||
317 | |||
318 | /* | ||
319 | * I'm not clear if/why it is important to set the state to | ||
320 | * RUNNING again, but it fixes a "do not call blocking ops when | ||
321 | * !TASK_RUNNING;"-warning. | ||
322 | */ | ||
323 | set_current_state(TASK_RUNNING); | ||
324 | } | ||
325 | } | ||
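The poll thread computes its remaining sleep as `poll_interval - (jiffies - last_poll)`. Because jiffies arithmetic is done on unsigned values and the result is interpreted as signed, the computation stays correct across counter wraparound. A user-space sketch of the same idiom (names hypothetical, 32-bit ticks for illustration):

```c
#include <stdint.h>

/*
 * Remaining ticks until last + interval, as a signed value; correct even
 * when the tick counter has wrapped since `last` was taken.
 */
static int32_t ticks_remaining(uint32_t now, uint32_t last, uint32_t interval)
{
	return (int32_t)(interval - (now - last));
}
```

A negative result means the deadline already passed, matching the `timeout > 0` checks in the loop above.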
326 | |||
327 | static int __siox_start(struct siox_master *smaster) | ||
328 | { | ||
329 | if (!(smaster->setbuf_len + smaster->getbuf_len)) | ||
330 | return -ENODEV; | ||
331 | |||
332 | if (!smaster->buf) | ||
333 | return -ENOMEM; | ||
334 | |||
335 | if (smaster->active) | ||
336 | return 0; | ||
337 | |||
338 | smaster->active = 1; | ||
339 | wake_up_process(smaster->poll_thread); | ||
340 | |||
341 | return 1; | ||
342 | } | ||
343 | |||
344 | static int siox_start(struct siox_master *smaster) | ||
345 | { | ||
346 | int ret; | ||
347 | |||
348 | siox_master_lock(smaster); | ||
349 | ret = __siox_start(smaster); | ||
350 | siox_master_unlock(smaster); | ||
351 | |||
352 | return ret; | ||
353 | } | ||
354 | |||
355 | static int __siox_stop(struct siox_master *smaster) | ||
356 | { | ||
357 | if (smaster->active) { | ||
358 | struct siox_device *sdevice; | ||
359 | |||
360 | smaster->active = 0; | ||
361 | |||
362 | list_for_each_entry(sdevice, &smaster->devices, node) { | ||
363 | if (sdevice->connected) | ||
364 | sysfs_notify_dirent(sdevice->connected_kn); | ||
365 | sdevice->connected = false; | ||
366 | } | ||
367 | |||
368 | return 1; | ||
369 | } | ||
370 | return 0; | ||
371 | } | ||
372 | |||
373 | static int siox_stop(struct siox_master *smaster) | ||
374 | { | ||
375 | int ret; | ||
376 | |||
377 | siox_master_lock(smaster); | ||
378 | ret = __siox_stop(smaster); | ||
379 | siox_master_unlock(smaster); | ||
380 | |||
381 | return ret; | ||
382 | } | ||
383 | |||
384 | static ssize_t type_show(struct device *dev, | ||
385 | struct device_attribute *attr, char *buf) | ||
386 | { | ||
387 | struct siox_device *sdev = to_siox_device(dev); | ||
388 | |||
389 | return sprintf(buf, "%s\n", sdev->type); | ||
390 | } | ||
391 | |||
392 | static DEVICE_ATTR_RO(type); | ||
393 | |||
394 | static ssize_t inbytes_show(struct device *dev, | ||
395 | struct device_attribute *attr, char *buf) | ||
396 | { | ||
397 | struct siox_device *sdev = to_siox_device(dev); | ||
398 | |||
399 | return sprintf(buf, "%zu\n", sdev->inbytes); | ||
400 | } | ||
401 | |||
402 | static DEVICE_ATTR_RO(inbytes); | ||
403 | |||
404 | static ssize_t outbytes_show(struct device *dev, | ||
405 | struct device_attribute *attr, char *buf) | ||
406 | { | ||
407 | struct siox_device *sdev = to_siox_device(dev); | ||
408 | |||
409 | return sprintf(buf, "%zu\n", sdev->outbytes); | ||
410 | } | ||
411 | |||
412 | static DEVICE_ATTR_RO(outbytes); | ||
413 | |||
414 | static ssize_t status_errors_show(struct device *dev, | ||
415 | struct device_attribute *attr, char *buf) | ||
416 | { | ||
417 | struct siox_device *sdev = to_siox_device(dev); | ||
418 | unsigned int status_errors; | ||
419 | |||
420 | siox_master_lock(sdev->smaster); | ||
421 | |||
422 | status_errors = sdev->status_errors; | ||
423 | |||
424 | siox_master_unlock(sdev->smaster); | ||
425 | |||
426 | return sprintf(buf, "%u\n", status_errors); | ||
427 | } | ||
428 | |||
429 | static DEVICE_ATTR_RO(status_errors); | ||
430 | |||
431 | static ssize_t connected_show(struct device *dev, | ||
432 | struct device_attribute *attr, char *buf) | ||
433 | { | ||
434 | struct siox_device *sdev = to_siox_device(dev); | ||
435 | bool connected; | ||
436 | |||
437 | siox_master_lock(sdev->smaster); | ||
438 | |||
439 | connected = sdev->connected; | ||
440 | |||
441 | siox_master_unlock(sdev->smaster); | ||
442 | |||
443 | return sprintf(buf, "%u\n", connected); | ||
444 | } | ||
445 | |||
446 | static DEVICE_ATTR_RO(connected); | ||
447 | |||
448 | static ssize_t watchdog_show(struct device *dev, | ||
449 | struct device_attribute *attr, char *buf) | ||
450 | { | ||
451 | struct siox_device *sdev = to_siox_device(dev); | ||
452 | u8 status; | ||
453 | |||
454 | siox_master_lock(sdev->smaster); | ||
455 | |||
456 | status = sdev->status_read_clean; | ||
457 | |||
458 | siox_master_unlock(sdev->smaster); | ||
459 | |||
460 | return sprintf(buf, "%d\n", status & SIOX_STATUS_WDG); | ||
461 | } | ||
462 | |||
463 | static DEVICE_ATTR_RO(watchdog); | ||
464 | |||
465 | static ssize_t watchdog_errors_show(struct device *dev, | ||
466 | struct device_attribute *attr, char *buf) | ||
467 | { | ||
468 | struct siox_device *sdev = to_siox_device(dev); | ||
469 | unsigned int watchdog_errors; | ||
470 | |||
471 | siox_master_lock(sdev->smaster); | ||
472 | |||
473 | watchdog_errors = sdev->watchdog_errors; | ||
474 | |||
475 | siox_master_unlock(sdev->smaster); | ||
476 | |||
477 | return sprintf(buf, "%u\n", watchdog_errors); | ||
478 | } | ||
479 | |||
480 | static DEVICE_ATTR_RO(watchdog_errors); | ||
481 | |||
482 | static struct attribute *siox_device_attrs[] = { | ||
483 | &dev_attr_type.attr, | ||
484 | &dev_attr_inbytes.attr, | ||
485 | &dev_attr_outbytes.attr, | ||
486 | &dev_attr_status_errors.attr, | ||
487 | &dev_attr_connected.attr, | ||
488 | &dev_attr_watchdog.attr, | ||
489 | &dev_attr_watchdog_errors.attr, | ||
490 | NULL | ||
491 | }; | ||
492 | ATTRIBUTE_GROUPS(siox_device); | ||
493 | |||
494 | static void siox_device_release(struct device *dev) | ||
495 | { | ||
496 | struct siox_device *sdevice = to_siox_device(dev); | ||
497 | |||
498 | kfree(sdevice); | ||
499 | } | ||
500 | |||
501 | static struct device_type siox_device_type = { | ||
502 | .groups = siox_device_groups, | ||
503 | .release = siox_device_release, | ||
504 | }; | ||
505 | |||
506 | static int siox_match(struct device *dev, struct device_driver *drv) | ||
507 | { | ||
508 | if (dev->type != &siox_device_type) | ||
509 | return 0; | ||
510 | |||
511 | /* up to now there is only a single driver so keeping this simple */ | ||
512 | return 1; | ||
513 | } | ||
514 | |||
515 | static struct bus_type siox_bus_type = { | ||
516 | .name = "siox", | ||
517 | .match = siox_match, | ||
518 | }; | ||
519 | |||
520 | static int siox_driver_probe(struct device *dev) | ||
521 | { | ||
522 | struct siox_driver *sdriver = to_siox_driver(dev->driver); | ||
523 | struct siox_device *sdevice = to_siox_device(dev); | ||
524 | 	return sdriver->probe(sdevice); | ||
528 | } | ||
529 | |||
530 | static int siox_driver_remove(struct device *dev) | ||
531 | { | ||
532 | struct siox_driver *sdriver = | ||
533 | container_of(dev->driver, struct siox_driver, driver); | ||
534 | struct siox_device *sdevice = to_siox_device(dev); | ||
535 | 	return sdriver->remove(sdevice); | ||
539 | } | ||
540 | |||
541 | static void siox_driver_shutdown(struct device *dev) | ||
542 | { | ||
543 | struct siox_driver *sdriver = | ||
544 | container_of(dev->driver, struct siox_driver, driver); | ||
545 | struct siox_device *sdevice = to_siox_device(dev); | ||
546 | |||
547 | sdriver->shutdown(sdevice); | ||
548 | } | ||
549 | |||
550 | static ssize_t active_show(struct device *dev, | ||
551 | struct device_attribute *attr, char *buf) | ||
552 | { | ||
553 | struct siox_master *smaster = to_siox_master(dev); | ||
554 | |||
555 | return sprintf(buf, "%d\n", smaster->active); | ||
556 | } | ||
557 | |||
558 | static ssize_t active_store(struct device *dev, | ||
559 | struct device_attribute *attr, | ||
560 | const char *buf, size_t count) | ||
561 | { | ||
562 | struct siox_master *smaster = to_siox_master(dev); | ||
563 | int ret; | ||
564 | int active; | ||
565 | |||
566 | ret = kstrtoint(buf, 0, &active); | ||
567 | if (ret < 0) | ||
568 | return ret; | ||
569 | |||
570 | if (active) | ||
571 | ret = siox_start(smaster); | ||
572 | else | ||
573 | ret = siox_stop(smaster); | ||
574 | |||
575 | if (ret < 0) | ||
576 | return ret; | ||
577 | |||
578 | return count; | ||
579 | } | ||
580 | |||
581 | static DEVICE_ATTR_RW(active); | ||
582 | |||
583 | static struct siox_device *siox_device_add(struct siox_master *smaster, | ||
584 | const char *type, size_t inbytes, | ||
585 | size_t outbytes, u8 statustype); | ||
586 | |||
587 | static ssize_t device_add_store(struct device *dev, | ||
588 | struct device_attribute *attr, | ||
589 | const char *buf, size_t count) | ||
590 | { | ||
591 | struct siox_master *smaster = to_siox_master(dev); | ||
592 | 	struct siox_device *sdevice; | ||
592 | 	int ret; | ||
593 | 	char type[20] = ""; | ||
594 | 	size_t inbytes = 0, outbytes = 0; | ||
595 | 	u8 statustype = 0; | ||
596 | |||
597 | 	/* %19s leaves room for the NUL terminator in type[20] */ | ||
597 | 	ret = sscanf(buf, "%19s %zu %zu %hhu", type, &inbytes, | ||
598 | 		     &outbytes, &statustype); | ||
599 | 	if (ret != 3 && ret != 4) | ||
600 | 		return -EINVAL; | ||
601 | |||
602 | 	if (strcmp(type, "siox-12x8") || inbytes != 2 || outbytes != 4) | ||
603 | 		return -EINVAL; | ||
604 | |||
605 | 	sdevice = siox_device_add(smaster, "siox-12x8", inbytes, | ||
605 | 				  outbytes, statustype); | ||
605 | 	if (IS_ERR(sdevice)) | ||
605 | 		return PTR_ERR(sdevice); | ||
606 | |||
607 | return count; | ||
608 | } | ||
609 | |||
610 | static DEVICE_ATTR_WO(device_add); | ||
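The write-only attribute above is the userspace entry point for instantiating devices. As a hedged illustration (the sysfs path and bus number are assumptions, not part of this patch), driving it from a shell could look like:

```shell
# Fields are "<type> <inbytes> <outbytes> [<statustype>]"; only the
# siox-12x8 type with 2 input and 4 output bytes is accepted so far.
# Path assumes a master registered as busno 0.
echo "siox-12x8 2 4" > /sys/bus/siox/devices/siox-0/device_add
```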
611 | |||
612 | static void siox_device_remove(struct siox_master *smaster); | ||
613 | |||
614 | static ssize_t device_remove_store(struct device *dev, | ||
615 | struct device_attribute *attr, | ||
616 | const char *buf, size_t count) | ||
617 | { | ||
618 | struct siox_master *smaster = to_siox_master(dev); | ||
619 | |||
620 | /* XXX? require to write <type> <inbytes> <outbytes> */ | ||
621 | siox_device_remove(smaster); | ||
622 | |||
623 | return count; | ||
624 | } | ||
625 | |||
626 | static DEVICE_ATTR_WO(device_remove); | ||
627 | |||
628 | static ssize_t poll_interval_ns_show(struct device *dev, | ||
629 | struct device_attribute *attr, char *buf) | ||
630 | { | ||
631 | struct siox_master *smaster = to_siox_master(dev); | ||
632 | |||
633 | 	return sprintf(buf, "%llu\n", jiffies_to_nsecs(smaster->poll_interval)); | ||
634 | } | ||
635 | |||
636 | static ssize_t poll_interval_ns_store(struct device *dev, | ||
637 | struct device_attribute *attr, | ||
638 | const char *buf, size_t count) | ||
639 | { | ||
640 | struct siox_master *smaster = to_siox_master(dev); | ||
641 | int ret; | ||
642 | u64 val; | ||
643 | |||
644 | ret = kstrtou64(buf, 0, &val); | ||
645 | if (ret < 0) | ||
646 | return ret; | ||
647 | |||
648 | siox_master_lock(smaster); | ||
649 | |||
650 | smaster->poll_interval = nsecs_to_jiffies(val); | ||
651 | |||
652 | siox_master_unlock(smaster); | ||
653 | |||
654 | return count; | ||
655 | } | ||
656 | |||
657 | static DEVICE_ATTR_RW(poll_interval_ns); | ||
658 | |||
659 | static struct attribute *siox_master_attrs[] = { | ||
660 | &dev_attr_active.attr, | ||
661 | &dev_attr_device_add.attr, | ||
662 | &dev_attr_device_remove.attr, | ||
663 | &dev_attr_poll_interval_ns.attr, | ||
664 | NULL | ||
665 | }; | ||
666 | ATTRIBUTE_GROUPS(siox_master); | ||
667 | |||
668 | static void siox_master_release(struct device *dev) | ||
669 | { | ||
670 | struct siox_master *smaster = to_siox_master(dev); | ||
671 | |||
672 | kfree(smaster); | ||
673 | } | ||
674 | |||
675 | static struct device_type siox_master_type = { | ||
676 | .groups = siox_master_groups, | ||
677 | .release = siox_master_release, | ||
678 | }; | ||
679 | |||
680 | struct siox_master *siox_master_alloc(struct device *dev, | ||
681 | size_t size) | ||
682 | { | ||
683 | struct siox_master *smaster; | ||
684 | |||
685 | if (!dev) | ||
686 | return NULL; | ||
687 | |||
688 | smaster = kzalloc(sizeof(*smaster) + size, GFP_KERNEL); | ||
689 | if (!smaster) | ||
690 | return NULL; | ||
691 | |||
692 | device_initialize(&smaster->dev); | ||
693 | |||
694 | smaster->busno = -1; | ||
695 | smaster->dev.bus = &siox_bus_type; | ||
696 | smaster->dev.type = &siox_master_type; | ||
697 | smaster->dev.parent = dev; | ||
698 | smaster->poll_interval = DIV_ROUND_UP(HZ, 40); | ||
699 | |||
700 | dev_set_drvdata(&smaster->dev, &smaster[1]); | ||
701 | |||
702 | return smaster; | ||
703 | } | ||
704 | EXPORT_SYMBOL_GPL(siox_master_alloc); | ||
705 | |||
706 | int siox_master_register(struct siox_master *smaster) | ||
707 | { | ||
708 | int ret; | ||
709 | |||
710 | if (!siox_is_registered) | ||
711 | return -EPROBE_DEFER; | ||
712 | |||
713 | if (!smaster->pushpull) | ||
714 | return -EINVAL; | ||
715 | |||
716 | dev_set_name(&smaster->dev, "siox-%d", smaster->busno); | ||
717 | |||
718 | smaster->last_poll = jiffies; | ||
719 | smaster->poll_thread = kthread_create(siox_poll_thread, smaster, | ||
720 | "siox-%d", smaster->busno); | ||
721 | if (IS_ERR(smaster->poll_thread)) { | ||
722 | smaster->active = 0; | ||
723 | return PTR_ERR(smaster->poll_thread); | ||
724 | } | ||
725 | |||
726 | mutex_init(&smaster->lock); | ||
727 | INIT_LIST_HEAD(&smaster->devices); | ||
728 | |||
729 | ret = device_add(&smaster->dev); | ||
730 | if (ret) | ||
731 | kthread_stop(smaster->poll_thread); | ||
732 | |||
733 | return ret; | ||
734 | } | ||
735 | EXPORT_SYMBOL_GPL(siox_master_register); | ||
736 | |||
737 | void siox_master_unregister(struct siox_master *smaster) | ||
738 | { | ||
739 | /* remove device */ | ||
740 | device_del(&smaster->dev); | ||
741 | |||
742 | siox_master_lock(smaster); | ||
743 | |||
744 | __siox_stop(smaster); | ||
745 | |||
746 | while (smaster->num_devices) { | ||
747 | struct siox_device *sdevice; | ||
748 | |||
749 | sdevice = container_of(smaster->devices.prev, | ||
750 | struct siox_device, node); | ||
751 | list_del(&sdevice->node); | ||
752 | smaster->num_devices--; | ||
753 | |||
754 | siox_master_unlock(smaster); | ||
755 | |||
756 | device_unregister(&sdevice->dev); | ||
757 | |||
758 | siox_master_lock(smaster); | ||
759 | } | ||
760 | |||
761 | siox_master_unlock(smaster); | ||
762 | |||
763 | put_device(&smaster->dev); | ||
764 | } | ||
765 | EXPORT_SYMBOL_GPL(siox_master_unregister); | ||
766 | |||
767 | static struct siox_device *siox_device_add(struct siox_master *smaster, | ||
768 | const char *type, size_t inbytes, | ||
769 | size_t outbytes, u8 statustype) | ||
770 | { | ||
771 | struct siox_device *sdevice; | ||
772 | int ret; | ||
773 | size_t buf_len; | ||
774 | |||
775 | sdevice = kzalloc(sizeof(*sdevice), GFP_KERNEL); | ||
776 | if (!sdevice) | ||
777 | return ERR_PTR(-ENOMEM); | ||
778 | |||
779 | sdevice->type = type; | ||
780 | sdevice->inbytes = inbytes; | ||
781 | sdevice->outbytes = outbytes; | ||
782 | sdevice->statustype = statustype; | ||
783 | |||
784 | sdevice->smaster = smaster; | ||
785 | sdevice->dev.parent = &smaster->dev; | ||
786 | sdevice->dev.bus = &siox_bus_type; | ||
787 | sdevice->dev.type = &siox_device_type; | ||
788 | |||
789 | siox_master_lock(smaster); | ||
790 | |||
791 | dev_set_name(&sdevice->dev, "siox-%d-%d", | ||
792 | smaster->busno, smaster->num_devices); | ||
793 | |||
794 | buf_len = smaster->setbuf_len + inbytes + | ||
795 | smaster->getbuf_len + outbytes; | ||
796 | if (smaster->buf_len < buf_len) { | ||
797 | u8 *buf = krealloc(smaster->buf, buf_len, GFP_KERNEL); | ||
798 | |||
799 | if (!buf) { | ||
800 | dev_err(&smaster->dev, | ||
801 | "failed to realloc buffer to %zu\n", buf_len); | ||
802 | ret = -ENOMEM; | ||
803 | goto err_buf_alloc; | ||
804 | } | ||
805 | |||
806 | smaster->buf_len = buf_len; | ||
807 | smaster->buf = buf; | ||
808 | } | ||
809 | |||
810 | ret = device_register(&sdevice->dev); | ||
811 | if (ret) { | ||
812 | dev_err(&smaster->dev, "failed to register device: %d\n", ret); | ||
813 | |||
814 | goto err_device_register; | ||
815 | } | ||
816 | |||
817 | smaster->num_devices++; | ||
818 | list_add_tail(&sdevice->node, &smaster->devices); | ||
819 | |||
820 | smaster->setbuf_len += sdevice->inbytes; | ||
821 | smaster->getbuf_len += sdevice->outbytes; | ||
822 | |||
823 | sdevice->status_errors_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, | ||
824 | "status_errors"); | ||
825 | sdevice->watchdog_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, | ||
826 | "watchdog"); | ||
827 | sdevice->watchdog_errors_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, | ||
828 | "watchdog_errors"); | ||
829 | sdevice->connected_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, | ||
830 | "connected"); | ||
831 | |||
832 | siox_master_unlock(smaster); | ||
833 | |||
834 | return sdevice; | ||
835 | |||
836 | err_device_register: | ||
837 | /* don't care to make the buffer smaller again */ | ||
838 | |||
839 | err_buf_alloc: | ||
840 | siox_master_unlock(smaster); | ||
841 | |||
842 | kfree(sdevice); | ||
843 | |||
844 | return ERR_PTR(ret); | ||
845 | } | ||
846 | |||
847 | static void siox_device_remove(struct siox_master *smaster) | ||
848 | { | ||
849 | struct siox_device *sdevice; | ||
850 | |||
851 | siox_master_lock(smaster); | ||
852 | |||
853 | if (!smaster->num_devices) { | ||
854 | siox_master_unlock(smaster); | ||
855 | return; | ||
856 | } | ||
857 | |||
858 | sdevice = container_of(smaster->devices.prev, struct siox_device, node); | ||
859 | list_del(&sdevice->node); | ||
860 | smaster->num_devices--; | ||
861 | |||
862 | smaster->setbuf_len -= sdevice->inbytes; | ||
863 | smaster->getbuf_len -= sdevice->outbytes; | ||
864 | |||
865 | if (!smaster->num_devices) | ||
866 | __siox_stop(smaster); | ||
867 | |||
868 | siox_master_unlock(smaster); | ||
869 | |||
870 | /* | ||
871 | * This must be done without holding the master lock because we're | ||
872 | * called from device_remove_store which also holds a sysfs mutex. | ||
873 | * device_unregister tries to acquire the same lock. | ||
874 | */ | ||
875 | device_unregister(&sdevice->dev); | ||
876 | } | ||
877 | |||
878 | int __siox_driver_register(struct siox_driver *sdriver, struct module *owner) | ||
879 | { | ||
880 | int ret; | ||
881 | |||
882 | if (unlikely(!siox_is_registered)) | ||
883 | return -EPROBE_DEFER; | ||
884 | |||
885 | if (!sdriver->set_data && !sdriver->get_data) { | ||
886 | pr_err("Driver %s doesn't provide needed callbacks\n", | ||
887 | sdriver->driver.name); | ||
888 | return -EINVAL; | ||
889 | } | ||
890 | |||
891 | sdriver->driver.owner = owner; | ||
892 | sdriver->driver.bus = &siox_bus_type; | ||
893 | |||
894 | if (sdriver->probe) | ||
895 | sdriver->driver.probe = siox_driver_probe; | ||
896 | if (sdriver->remove) | ||
897 | sdriver->driver.remove = siox_driver_remove; | ||
898 | if (sdriver->shutdown) | ||
899 | sdriver->driver.shutdown = siox_driver_shutdown; | ||
900 | |||
901 | ret = driver_register(&sdriver->driver); | ||
902 | if (ret) | ||
903 | pr_err("Failed to register siox driver %s (%d)\n", | ||
904 | sdriver->driver.name, ret); | ||
905 | |||
906 | return ret; | ||
907 | } | ||
908 | EXPORT_SYMBOL_GPL(__siox_driver_register); | ||
909 | |||
910 | static int __init siox_init(void) | ||
911 | { | ||
912 | int ret; | ||
913 | |||
914 | ret = bus_register(&siox_bus_type); | ||
915 | if (ret) { | ||
916 | pr_err("Registration of SIOX bus type failed: %d\n", ret); | ||
917 | return ret; | ||
918 | } | ||
919 | |||
920 | siox_is_registered = true; | ||
921 | |||
922 | return 0; | ||
923 | } | ||
924 | subsys_initcall(siox_init); | ||
925 | |||
926 | static void __exit siox_exit(void) | ||
927 | { | ||
928 | bus_unregister(&siox_bus_type); | ||
929 | } | ||
930 | module_exit(siox_exit); | ||
931 | |||
932 | MODULE_AUTHOR("Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>"); | ||
933 | MODULE_DESCRIPTION("Eckelmann SIOX driver core"); | ||
934 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/siox/siox.h b/drivers/siox/siox.h
new file mode 100644
index 000000000000..c674bf6fb119
--- /dev/null
+++ b/drivers/siox/siox.h
@@ -0,0 +1,49 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> | ||
4 | */ | ||
5 | #include <linux/kernel.h> | ||
6 | #include <linux/kthread.h> | ||
7 | #include <linux/siox.h> | ||
8 | |||
9 | #define to_siox_master(_dev) container_of((_dev), struct siox_master, dev) | ||
10 | struct siox_master { | ||
11 | /* these fields should be initialized by the driver */ | ||
12 | int busno; | ||
13 | int (*pushpull)(struct siox_master *smaster, | ||
14 | size_t setbuf_len, const u8 setbuf[], | ||
15 | size_t getbuf_len, u8 getbuf[]); | ||
16 | |||
17 | /* might be initialized by the driver, if 0 it is set to HZ / 40 */ | ||
18 | unsigned long poll_interval; /* in jiffies */ | ||
19 | |||
20 | /* framework private stuff */ | ||
21 | struct mutex lock; | ||
22 | bool active; | ||
23 | struct module *owner; | ||
24 | struct device dev; | ||
25 | unsigned int num_devices; | ||
26 | struct list_head devices; | ||
27 | |||
28 | size_t setbuf_len, getbuf_len; | ||
29 | size_t buf_len; | ||
30 | u8 *buf; | ||
31 | u8 status; | ||
32 | |||
33 | unsigned long last_poll; | ||
34 | struct task_struct *poll_thread; | ||
35 | }; | ||
36 | |||
37 | static inline void *siox_master_get_devdata(struct siox_master *smaster) | ||
38 | { | ||
39 | return dev_get_drvdata(&smaster->dev); | ||
40 | } | ||
41 | |||
42 | struct siox_master *siox_master_alloc(struct device *dev, size_t size); | ||
43 | static inline void siox_master_put(struct siox_master *smaster) | ||
44 | { | ||
45 | put_device(&smaster->dev); | ||
46 | } | ||
47 | |||
48 | int siox_master_register(struct siox_master *smaster); | ||
49 | void siox_master_unregister(struct siox_master *smaster); | ||
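To make the master-side contract above concrete, here is a minimal, hypothetical sketch of a bus driver built on this header; the platform-device plumbing and the pushpull body are invented for illustration and are not part of this patch:

```c
/* Hypothetical SIOX master driver sketch; not part of this patch. */
static int my_pushpull(struct siox_master *smaster,
		       size_t setbuf_len, const u8 setbuf[],
		       size_t getbuf_len, u8 getbuf[])
{
	/* Shift setbuf out to the bus and clock getbuf back in here. */
	return 0;
}

static int my_probe(struct platform_device *pdev)
{
	struct siox_master *smaster;
	int ret;

	smaster = siox_master_alloc(&pdev->dev, 0 /* no private data */);
	if (!smaster)
		return -ENOMEM;

	smaster->busno = 0;
	smaster->pushpull = my_pushpull;

	ret = siox_master_register(smaster);
	if (ret)
		siox_master_put(smaster);

	return ret;
}
```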
diff --git a/drivers/slimbus/Kconfig b/drivers/slimbus/Kconfig
new file mode 100644
index 000000000000..1a632fad597e
--- /dev/null
+++ b/drivers/slimbus/Kconfig
@@ -0,0 +1,24 @@ | |||
1 | # SPDX-License-Identifier: GPL-2.0 | ||
2 | # | ||
3 | # SLIMbus driver configuration | ||
4 | # | ||
5 | menuconfig SLIMBUS | ||
6 | tristate "SLIMbus support" | ||
7 | help | ||
8 | 	  SLIMbus is a standard interface between a System-on-Chip and audio | ||
9 | 	  codecs and other peripheral components in typical embedded systems. | ||
10 | |||
11 | If unsure, choose N. | ||
12 | |||
13 | if SLIMBUS | ||
14 | |||
15 | # SLIMbus controllers | ||
16 | config SLIM_QCOM_CTRL | ||
17 | tristate "Qualcomm SLIMbus Manager Component" | ||
18 | depends on SLIMBUS | ||
19 | depends on HAS_IOMEM | ||
20 | help | ||
21 | 	  Select this driver if Qualcomm's SLIMbus Manager Component is | ||
22 | 	  programmed using the Linux kernel. | ||
23 | |||
24 | endif | ||
diff --git a/drivers/slimbus/Makefile b/drivers/slimbus/Makefile
new file mode 100644
index 000000000000..a35a3da4eb78
--- /dev/null
+++ b/drivers/slimbus/Makefile
@@ -0,0 +1,10 @@ | |||
1 | # SPDX-License-Identifier: GPL-2.0 | ||
2 | # | ||
3 | # Makefile for kernel SLIMbus framework. | ||
4 | # | ||
5 | obj-$(CONFIG_SLIMBUS) += slimbus.o | ||
6 | slimbus-y := core.o messaging.o sched.o | ||
7 | |||
8 | #Controllers | ||
9 | obj-$(CONFIG_SLIM_QCOM_CTRL) += slim-qcom-ctrl.o | ||
10 | slim-qcom-ctrl-y := qcom-ctrl.o | ||
diff --git a/drivers/slimbus/core.c b/drivers/slimbus/core.c
new file mode 100644
index 000000000000..4988a8f4d905
--- /dev/null
+++ b/drivers/slimbus/core.c
@@ -0,0 +1,480 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (c) 2011-2017, The Linux Foundation | ||
4 | */ | ||
5 | |||
6 | #include <linux/kernel.h> | ||
7 | #include <linux/errno.h> | ||
8 | #include <linux/slab.h> | ||
9 | #include <linux/init.h> | ||
10 | #include <linux/idr.h> | ||
11 | #include <linux/of.h> | ||
12 | #include <linux/pm_runtime.h> | ||
13 | #include <linux/slimbus.h> | ||
14 | #include "slimbus.h" | ||
15 | |||
16 | static DEFINE_IDA(ctrl_ida); | ||
17 | |||
18 | static const struct slim_device_id *slim_match(const struct slim_device_id *id, | ||
19 | const struct slim_device *sbdev) | ||
20 | { | ||
21 | while (id->manf_id != 0 || id->prod_code != 0) { | ||
22 | if (id->manf_id == sbdev->e_addr.manf_id && | ||
23 | id->prod_code == sbdev->e_addr.prod_code) | ||
24 | return id; | ||
25 | id++; | ||
26 | } | ||
27 | return NULL; | ||
28 | } | ||
29 | |||
30 | static int slim_device_match(struct device *dev, struct device_driver *drv) | ||
31 | { | ||
32 | struct slim_device *sbdev = to_slim_device(dev); | ||
33 | struct slim_driver *sbdrv = to_slim_driver(drv); | ||
34 | |||
35 | return !!slim_match(sbdrv->id_table, sbdev); | ||
36 | } | ||
37 | |||
38 | static int slim_device_probe(struct device *dev) | ||
39 | { | ||
40 | struct slim_device *sbdev = to_slim_device(dev); | ||
41 | struct slim_driver *sbdrv = to_slim_driver(dev->driver); | ||
42 | |||
43 | return sbdrv->probe(sbdev); | ||
44 | } | ||
45 | |||
46 | static int slim_device_remove(struct device *dev) | ||
47 | { | ||
48 | struct slim_device *sbdev = to_slim_device(dev); | ||
49 | struct slim_driver *sbdrv; | ||
50 | |||
51 | if (dev->driver) { | ||
52 | sbdrv = to_slim_driver(dev->driver); | ||
53 | if (sbdrv->remove) | ||
54 | sbdrv->remove(sbdev); | ||
55 | } | ||
56 | |||
57 | return 0; | ||
58 | } | ||
59 | |||
60 | struct bus_type slimbus_bus = { | ||
61 | .name = "slimbus", | ||
62 | .match = slim_device_match, | ||
63 | .probe = slim_device_probe, | ||
64 | .remove = slim_device_remove, | ||
65 | }; | ||
66 | EXPORT_SYMBOL_GPL(slimbus_bus); | ||
67 | |||
68 | /** | ||
69 | * __slim_driver_register() - Client driver registration with SLIMbus | ||
70 | * | ||
71 | * @drv: Client driver to be associated with client-device. | ||
72 | * @owner: owning module/driver | ||
73 | * | ||
74 | * This API will register the client driver with the SLIMbus. | ||
75 | * It is called from the driver's module-init function. | ||
76 | */ | ||
77 | int __slim_driver_register(struct slim_driver *drv, struct module *owner) | ||
78 | { | ||
79 | /* ID table and probe are mandatory */ | ||
80 | if (!drv->id_table || !drv->probe) | ||
81 | return -EINVAL; | ||
82 | |||
83 | drv->driver.bus = &slimbus_bus; | ||
84 | drv->driver.owner = owner; | ||
85 | |||
86 | return driver_register(&drv->driver); | ||
87 | } | ||
88 | EXPORT_SYMBOL_GPL(__slim_driver_register); | ||
89 | |||
90 | /* | ||
91 | * slim_driver_unregister() - Undo effect of slim_driver_register | ||
92 | * | ||
93 | * @drv: Client driver to be unregistered | ||
94 | */ | ||
95 | void slim_driver_unregister(struct slim_driver *drv) | ||
96 | { | ||
97 | driver_unregister(&drv->driver); | ||
98 | } | ||
99 | EXPORT_SYMBOL_GPL(slim_driver_unregister); | ||
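Taken together, the two helpers above define the client-driver lifecycle. A hypothetical client sketch follows; the IDs and names are made up, and slim_driver_register() is assumed to be the THIS_MODULE-passing wrapper around __slim_driver_register() in &lt;linux/slimbus.h&gt;:

```c
/* Hypothetical SLIMbus client sketch; IDs and names are illustrative. */
static const struct slim_device_id my_codec_ids[] = {
	{ 0x217, 0x60, 0, 0 },	/* manf_id, prod_code, dev_index, instance */
	{ }
};

static int my_codec_probe(struct slim_device *sbdev)
{
	return 0;
}

static struct slim_driver my_codec_driver = {
	.probe = my_codec_probe,
	.id_table = my_codec_ids,
	.driver = {
		.name = "my-slim-codec",
	},
};
/* Registered with slim_driver_register(&my_codec_driver) from module
 * init and torn down with slim_driver_unregister() from module exit. */
```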
100 | |||
101 | static void slim_dev_release(struct device *dev) | ||
102 | { | ||
103 | struct slim_device *sbdev = to_slim_device(dev); | ||
104 | |||
105 | kfree(sbdev); | ||
106 | } | ||
107 | |||
108 | static int slim_add_device(struct slim_controller *ctrl, | ||
109 | struct slim_device *sbdev, | ||
110 | struct device_node *node) | ||
111 | { | ||
112 | sbdev->dev.bus = &slimbus_bus; | ||
113 | sbdev->dev.parent = ctrl->dev; | ||
114 | sbdev->dev.release = slim_dev_release; | ||
115 | sbdev->dev.driver = NULL; | ||
116 | sbdev->ctrl = ctrl; | ||
117 | |||
118 | if (node) | ||
119 | sbdev->dev.of_node = of_node_get(node); | ||
120 | |||
121 | dev_set_name(&sbdev->dev, "%x:%x:%x:%x", | ||
122 | sbdev->e_addr.manf_id, | ||
123 | sbdev->e_addr.prod_code, | ||
124 | sbdev->e_addr.dev_index, | ||
125 | sbdev->e_addr.instance); | ||
126 | |||
127 | return device_register(&sbdev->dev); | ||
128 | } | ||
129 | |||
130 | static struct slim_device *slim_alloc_device(struct slim_controller *ctrl, | ||
131 | struct slim_eaddr *eaddr, | ||
132 | struct device_node *node) | ||
133 | { | ||
134 | struct slim_device *sbdev; | ||
135 | int ret; | ||
136 | |||
137 | sbdev = kzalloc(sizeof(*sbdev), GFP_KERNEL); | ||
138 | if (!sbdev) | ||
139 | return NULL; | ||
140 | |||
141 | sbdev->e_addr = *eaddr; | ||
142 | ret = slim_add_device(ctrl, sbdev, node); | ||
143 | if (ret) { | ||
144 | kfree(sbdev); | ||
145 | return NULL; | ||
146 | } | ||
147 | |||
148 | return sbdev; | ||
149 | } | ||
150 | |||
151 | static void of_register_slim_devices(struct slim_controller *ctrl) | ||
152 | { | ||
153 | struct device *dev = ctrl->dev; | ||
154 | struct device_node *node; | ||
155 | |||
156 | if (!ctrl->dev->of_node) | ||
157 | return; | ||
158 | |||
159 | for_each_child_of_node(ctrl->dev->of_node, node) { | ||
160 | struct slim_device *sbdev; | ||
161 | struct slim_eaddr e_addr; | ||
162 | const char *compat = NULL; | ||
163 | int reg[2], ret; | ||
164 | int manf_id, prod_code; | ||
165 | |||
166 | compat = of_get_property(node, "compatible", NULL); | ||
167 | if (!compat) | ||
168 | continue; | ||
169 | |||
170 | ret = sscanf(compat, "slim%x,%x", &manf_id, &prod_code); | ||
171 | if (ret != 2) { | ||
172 | dev_err(dev, "Manf ID & Product code not found %s\n", | ||
173 | compat); | ||
174 | continue; | ||
175 | } | ||
176 | |||
177 | ret = of_property_read_u32_array(node, "reg", reg, 2); | ||
178 | if (ret) { | ||
179 | dev_err(dev, "Device and Instance id not found:%d\n", | ||
180 | ret); | ||
181 | continue; | ||
182 | } | ||
183 | |||
184 | e_addr.dev_index = reg[0]; | ||
185 | e_addr.instance = reg[1]; | ||
186 | e_addr.manf_id = manf_id; | ||
187 | e_addr.prod_code = prod_code; | ||
188 | |||
189 | sbdev = slim_alloc_device(ctrl, &e_addr, node); | ||
190 | if (!sbdev) | ||
191 | continue; | ||
192 | } | ||
193 | } | ||
194 | |||
195 | /** | ||
196 | * slim_register_controller() - Controller bring-up and registration. | ||
197 | * | ||
198 | * @ctrl: Controller to be registered. | ||
199 | * | ||
200 | * A controller is registered with the framework using this API. | ||
201 | * If devices on a controller were registered before the controller, | ||
202 | * this will make sure that they get probed when the controller is up. | ||
203 | */ | ||
204 | int slim_register_controller(struct slim_controller *ctrl) | ||
205 | { | ||
206 | int id; | ||
207 | |||
208 | id = ida_simple_get(&ctrl_ida, 0, 0, GFP_KERNEL); | ||
209 | if (id < 0) | ||
210 | return id; | ||
211 | |||
212 | ctrl->id = id; | ||
213 | |||
214 | if (!ctrl->min_cg) | ||
215 | ctrl->min_cg = SLIM_MIN_CLK_GEAR; | ||
216 | if (!ctrl->max_cg) | ||
217 | ctrl->max_cg = SLIM_MAX_CLK_GEAR; | ||
218 | |||
219 | ida_init(&ctrl->laddr_ida); | ||
220 | idr_init(&ctrl->tid_idr); | ||
221 | mutex_init(&ctrl->lock); | ||
222 | mutex_init(&ctrl->sched.m_reconf); | ||
223 | init_completion(&ctrl->sched.pause_comp); | ||
224 | |||
225 | dev_dbg(ctrl->dev, "Bus [%s] registered:dev:%p\n", | ||
226 | ctrl->name, ctrl->dev); | ||
227 | |||
228 | of_register_slim_devices(ctrl); | ||
229 | |||
230 | return 0; | ||
231 | } | ||
232 | EXPORT_SYMBOL_GPL(slim_register_controller); | ||
233 | |||
234 | /* slim_remove_device: Remove the effect of slim_add_device() */ | ||
235 | static void slim_remove_device(struct slim_device *sbdev) | ||
236 | { | ||
237 | device_unregister(&sbdev->dev); | ||
238 | } | ||
239 | |||
240 | static int slim_ctrl_remove_device(struct device *dev, void *null) | ||
241 | { | ||
242 | slim_remove_device(to_slim_device(dev)); | ||
243 | return 0; | ||
244 | } | ||
245 | |||
246 | /** | ||
247 | * slim_unregister_controller() - Controller tear-down. | ||
248 | * | ||
249 | * @ctrl: Controller to tear-down. | ||
250 | */ | ||
251 | int slim_unregister_controller(struct slim_controller *ctrl) | ||
252 | { | ||
253 | /* Remove all clients */ | ||
254 | device_for_each_child(ctrl->dev, NULL, slim_ctrl_remove_device); | ||
255 | /* Enter Clock Pause */ | ||
256 | slim_ctrl_clk_pause(ctrl, false, 0); | ||
257 | ida_simple_remove(&ctrl_ida, ctrl->id); | ||
258 | |||
259 | return 0; | ||
260 | } | ||
261 | EXPORT_SYMBOL_GPL(slim_unregister_controller); | ||
262 | |||
263 | static void slim_device_update_status(struct slim_device *sbdev, | ||
264 | enum slim_device_status status) | ||
265 | { | ||
266 | struct slim_driver *sbdrv; | ||
267 | |||
268 | if (sbdev->status == status) | ||
269 | return; | ||
270 | |||
271 | sbdev->status = status; | ||
272 | if (!sbdev->dev.driver) | ||
273 | return; | ||
274 | |||
275 | sbdrv = to_slim_driver(sbdev->dev.driver); | ||
276 | if (sbdrv->device_status) | ||
277 | sbdrv->device_status(sbdev, sbdev->status); | ||
278 | } | ||
279 | |||
280 | /** | ||
281 | * slim_report_absent() - Controller calls this function when a device | ||
282 | * reports absent, OR when the device cannot be communicated with | ||
283 | * | ||
284 | * @sbdev: Device that cannot be reached, or sent report absent | ||
285 | */ | ||
286 | void slim_report_absent(struct slim_device *sbdev) | ||
287 | { | ||
288 | struct slim_controller *ctrl = sbdev->ctrl; | ||
289 | |||
290 | if (!ctrl) | ||
291 | return; | ||
292 | |||
293 | /* invalidate logical addresses */ | ||
294 | mutex_lock(&ctrl->lock); | ||
295 | sbdev->is_laddr_valid = false; | ||
296 | mutex_unlock(&ctrl->lock); | ||
297 | |||
298 | ida_simple_remove(&ctrl->laddr_ida, sbdev->laddr); | ||
299 | slim_device_update_status(sbdev, SLIM_DEVICE_STATUS_DOWN); | ||
300 | } | ||
301 | EXPORT_SYMBOL_GPL(slim_report_absent); | ||
302 | |||
303 | static bool slim_eaddr_equal(struct slim_eaddr *a, struct slim_eaddr *b) | ||
304 | { | ||
305 | return (a->manf_id == b->manf_id && | ||
306 | a->prod_code == b->prod_code && | ||
307 | a->dev_index == b->dev_index && | ||
308 | a->instance == b->instance); | ||
309 | } | ||
310 | |||
311 | static int slim_match_dev(struct device *dev, void *data) | ||
312 | { | ||
313 | struct slim_eaddr *e_addr = data; | ||
314 | struct slim_device *sbdev = to_slim_device(dev); | ||
315 | |||
316 | return slim_eaddr_equal(&sbdev->e_addr, e_addr); | ||
317 | } | ||
318 | |||
319 | static struct slim_device *find_slim_device(struct slim_controller *ctrl, | ||
320 | struct slim_eaddr *eaddr) | ||
321 | { | ||
322 | struct slim_device *sbdev; | ||
323 | struct device *dev; | ||
324 | |||
325 | dev = device_find_child(ctrl->dev, eaddr, slim_match_dev); | ||
326 | if (dev) { | ||
327 | sbdev = to_slim_device(dev); | ||
328 | return sbdev; | ||
329 | } | ||
330 | |||
331 | return NULL; | ||
332 | } | ||
333 | |||
334 | /** | ||
335 | * slim_get_device() - get handle to a device. | ||
336 | * | ||
337 | * @ctrl: Controller on which this device will be added/queried | ||
338 | * @e_addr: Enumeration address of the device to be queried | ||
339 | * | ||
340 | * Return: pointer to a device if it has already reported. Creates a new | ||
341 | * device and returns pointer to it if the device has not yet enumerated. | ||
342 | */ | ||
343 | struct slim_device *slim_get_device(struct slim_controller *ctrl, | ||
344 | struct slim_eaddr *e_addr) | ||
345 | { | ||
346 | struct slim_device *sbdev; | ||
347 | |||
348 | sbdev = find_slim_device(ctrl, e_addr); | ||
349 | if (!sbdev) { | ||
350 | sbdev = slim_alloc_device(ctrl, e_addr, NULL); | ||
351 | if (!sbdev) | ||
352 | return ERR_PTR(-ENOMEM); | ||
353 | } | ||
354 | |||
355 | return sbdev; | ||
356 | } | ||
357 | EXPORT_SYMBOL_GPL(slim_get_device); | ||
358 | |||
359 | static int slim_device_alloc_laddr(struct slim_device *sbdev, | ||
360 | bool report_present) | ||
361 | { | ||
362 | struct slim_controller *ctrl = sbdev->ctrl; | ||
363 | u8 laddr; | ||
364 | int ret; | ||
365 | |||
366 | mutex_lock(&ctrl->lock); | ||
367 | if (ctrl->get_laddr) { | ||
368 | ret = ctrl->get_laddr(ctrl, &sbdev->e_addr, &laddr); | ||
369 | if (ret < 0) | ||
370 | goto err; | ||
371 | } else if (report_present) { | ||
372 | ret = ida_simple_get(&ctrl->laddr_ida, | ||
373 | 0, SLIM_LA_MANAGER - 1, GFP_KERNEL); | ||
374 | if (ret < 0) | ||
375 | goto err; | ||
376 | |||
377 | laddr = ret; | ||
378 | } else { | ||
379 | ret = -EINVAL; | ||
380 | goto err; | ||
381 | } | ||
382 | |||
383 | if (ctrl->set_laddr) { | ||
384 | ret = ctrl->set_laddr(ctrl, &sbdev->e_addr, laddr); | ||
385 | if (ret) { | ||
386 | ret = -EINVAL; | ||
387 | goto err; | ||
388 | } | ||
389 | } | ||
390 | |||
391 | sbdev->laddr = laddr; | ||
392 | sbdev->is_laddr_valid = true; | ||
393 | |||
394 | slim_device_update_status(sbdev, SLIM_DEVICE_STATUS_UP); | ||
395 | |||
396 | dev_dbg(ctrl->dev, "setting slimbus l-addr:%x, ea:%x,%x,%x,%x\n", | ||
397 | laddr, sbdev->e_addr.manf_id, sbdev->e_addr.prod_code, | ||
398 | sbdev->e_addr.dev_index, sbdev->e_addr.instance); | ||
399 | |||
400 | err: | ||
401 | mutex_unlock(&ctrl->lock); | ||
402 | return ret; | ||
403 | |||
404 | } | ||
405 | |||
406 | /** | ||
407 | * slim_device_report_present() - Report enumerated device. | ||
408 | * | ||
409 | * @ctrl: Controller with which device is enumerated. | ||
410 | * @e_addr: Enumeration address of the device. | ||
411 | * @laddr: Return logical address (if valid flag is false) | ||
412 | * | ||
413 | * Called by controller in response to REPORT_PRESENT. Framework will assign | ||
414 | * a logical address to this enumeration address. | ||
415 | * Function returns -EXFULL to indicate that all logical addresses are already | ||
416 | * taken. | ||
417 | */ | ||
418 | int slim_device_report_present(struct slim_controller *ctrl, | ||
419 | struct slim_eaddr *e_addr, u8 *laddr) | ||
420 | { | ||
421 | struct slim_device *sbdev; | ||
422 | int ret; | ||
423 | |||
424 | ret = pm_runtime_get_sync(ctrl->dev); | ||
425 | |||
426 | if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) { | ||
427 | dev_err(ctrl->dev, "slim ctrl not active,state:%d, ret:%d\n", | ||
428 | ctrl->sched.clk_state, ret); | ||
429 | goto slimbus_not_active; | ||
430 | } | ||
431 | |||
432 | sbdev = slim_get_device(ctrl, e_addr); | ||
433 | if (IS_ERR(sbdev)) { | ||
434 | ret = -ENODEV; | ||
435 | goto slimbus_not_active; | ||
436 | } | ||
435 | |||
436 | if (sbdev->is_laddr_valid) { | ||
437 | *laddr = sbdev->laddr; | ||
438 | return 0; | ||
439 | } | ||
440 | |||
441 | ret = slim_device_alloc_laddr(sbdev, true); | ||
442 | |||
443 | slimbus_not_active: | ||
444 | pm_runtime_mark_last_busy(ctrl->dev); | ||
445 | pm_runtime_put_autosuspend(ctrl->dev); | ||
446 | return ret; | ||
447 | } | ||
448 | EXPORT_SYMBOL_GPL(slim_device_report_present); | ||
449 | |||
450 | /** | ||
451 | * slim_get_logical_addr() - get/allocate logical address of a SLIMbus device. | ||
452 | * | ||
453 | * @sbdev: client handle requesting the address. | ||
454 | * | ||
455 | * Return: zero if a logical address is valid or a new logical address | ||
456 | * has been assigned. error code in case of error. | ||
457 | */ | ||
458 | int slim_get_logical_addr(struct slim_device *sbdev) | ||
459 | { | ||
460 | if (!sbdev->is_laddr_valid) | ||
461 | return slim_device_alloc_laddr(sbdev, false); | ||
462 | |||
463 | return 0; | ||
464 | } | ||
465 | EXPORT_SYMBOL_GPL(slim_get_logical_addr); | ||
466 | |||
467 | static void __exit slimbus_exit(void) | ||
468 | { | ||
469 | bus_unregister(&slimbus_bus); | ||
470 | } | ||
471 | module_exit(slimbus_exit); | ||
472 | |||
473 | static int __init slimbus_init(void) | ||
474 | { | ||
475 | return bus_register(&slimbus_bus); | ||
476 | } | ||
477 | postcore_initcall(slimbus_init); | ||
478 | |||
479 | MODULE_LICENSE("GPL v2"); | ||
480 | MODULE_DESCRIPTION("SLIMbus core"); | ||
diff --git a/drivers/slimbus/messaging.c b/drivers/slimbus/messaging.c new file mode 100644 index 000000000000..884419c37e84 --- /dev/null +++ b/drivers/slimbus/messaging.c | |||
@@ -0,0 +1,332 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (c) 2011-2017, The Linux Foundation | ||
4 | */ | ||
5 | |||
6 | #include <linux/slab.h> | ||
7 | #include <linux/pm_runtime.h> | ||
8 | #include "slimbus.h" | ||
9 | |||
10 | /** | ||
11 | * slim_msg_response() - Deliver Message response received from a device to the | ||
12 | * framework. | ||
13 | * | ||
14 | * @ctrl: Controller handle | ||
15 | * @reply: Reply received from the device | ||
16 | * @len: Length of the reply | ||
17 | * @tid: Transaction ID received with which framework can associate reply. | ||
18 | * | ||
19 | * Called by controller to inform framework about the response received. | ||
20 | * This helps in making the API asynchronous, and the controller driver doesn't | ||
21 | * need to maintain an additional table mapping TIDs to buffers beyond the one | ||
22 | * the framework already keeps. | ||
23 | */ | ||
24 | void slim_msg_response(struct slim_controller *ctrl, u8 *reply, u8 tid, u8 len) | ||
25 | { | ||
26 | struct slim_msg_txn *txn; | ||
27 | struct slim_val_inf *msg; | ||
28 | unsigned long flags; | ||
29 | |||
30 | spin_lock_irqsave(&ctrl->txn_lock, flags); | ||
31 | txn = idr_find(&ctrl->tid_idr, tid); | ||
32 | if (txn == NULL) { | ||
33 | spin_unlock_irqrestore(&ctrl->txn_lock, flags); | ||
34 | return; | ||
35 | } | ||
36 | |||
37 | msg = txn->msg; | ||
38 | if (msg == NULL || msg->rbuf == NULL) { | ||
39 | dev_err(ctrl->dev, "Got response to invalid TID:%d, len:%d\n", | ||
40 | tid, len); | ||
41 | spin_unlock_irqrestore(&ctrl->txn_lock, flags); | ||
42 | return; | ||
43 | } | ||
44 | |||
45 | idr_remove(&ctrl->tid_idr, tid); | ||
46 | spin_unlock_irqrestore(&ctrl->txn_lock, flags); | ||
47 | |||
48 | memcpy(msg->rbuf, reply, len); | ||
49 | if (txn->comp) | ||
50 | complete(txn->comp); | ||
51 | |||
52 | /* Remove runtime-pm vote now that response was received for TID txn */ | ||
53 | pm_runtime_mark_last_busy(ctrl->dev); | ||
54 | pm_runtime_put_autosuspend(ctrl->dev); | ||
55 | } | ||
56 | EXPORT_SYMBOL_GPL(slim_msg_response); | ||
57 | |||
58 | /** | ||
59 | * slim_do_transfer() - Process a SLIMbus-messaging transaction | ||
60 | * | ||
61 | * @ctrl: Controller handle | ||
62 | * @txn: Transaction to be sent over SLIMbus | ||
63 | * | ||
64 | * Called by controller to transmit messaging transactions not dealing with | ||
65 | * Interface/Value elements (e.g. transmitting a message to assign a logical | ||
66 | * address to a slave device). | ||
67 | * | ||
68 | * Return: -ETIMEDOUT: If transmission of this message timed out | ||
69 | * (e.g. due to bus lines not being clocked or driven by controller) | ||
70 | */ | ||
71 | int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn) | ||
72 | { | ||
73 | DECLARE_COMPLETION_ONSTACK(done); | ||
74 | bool need_tid = false, clk_pause_msg = false; | ||
75 | unsigned long flags; | ||
76 | int ret, tid, timeout; | ||
77 | |||
78 | /* | ||
79 | * do not vote for runtime-PM if the transactions are part of clock | ||
80 | * pause sequence | ||
81 | */ | ||
82 | if (ctrl->sched.clk_state == SLIM_CLK_ENTERING_PAUSE && | ||
83 | (txn->mt == SLIM_MSG_MT_CORE && | ||
84 | txn->mc >= SLIM_MSG_MC_BEGIN_RECONFIGURATION && | ||
85 | txn->mc <= SLIM_MSG_MC_RECONFIGURE_NOW)) | ||
86 | clk_pause_msg = true; | ||
87 | |||
88 | if (!clk_pause_msg) { | ||
89 | ret = pm_runtime_get_sync(ctrl->dev); | ||
90 | if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) { | ||
91 | dev_err(ctrl->dev, "ctrl wrong state:%d, ret:%d\n", | ||
92 | ctrl->sched.clk_state, ret); | ||
93 | goto slim_xfer_err; | ||
94 | } | ||
95 | } | ||
96 | |||
97 | need_tid = slim_tid_txn(txn->mt, txn->mc); | ||
98 | |||
99 | if (need_tid) { | ||
100 | spin_lock_irqsave(&ctrl->txn_lock, flags); | ||
101 | tid = idr_alloc(&ctrl->tid_idr, txn, 0, | ||
102 | SLIM_MAX_TIDS, GFP_ATOMIC); | ||
103 | txn->tid = tid; | ||
104 | |||
105 | if (!txn->msg->comp) | ||
106 | txn->comp = &done; | ||
107 | else | ||
108 | txn->comp = txn->msg->comp; | ||
109 | |||
110 | spin_unlock_irqrestore(&ctrl->txn_lock, flags); | ||
111 | |||
112 | if (tid < 0) | ||
113 | return tid; | ||
114 | } | ||
115 | |||
116 | ret = ctrl->xfer_msg(ctrl, txn); | ||
117 | |||
118 | if (!ret && need_tid && !txn->msg->comp) { | ||
119 | unsigned long ms = txn->rl + HZ; | ||
120 | |||
121 | timeout = wait_for_completion_timeout(txn->comp, | ||
122 | msecs_to_jiffies(ms)); | ||
123 | if (!timeout) { | ||
124 | ret = -ETIMEDOUT; | ||
125 | spin_lock_irqsave(&ctrl->txn_lock, flags); | ||
126 | idr_remove(&ctrl->tid_idr, tid); | ||
127 | spin_unlock_irqrestore(&ctrl->txn_lock, flags); | ||
128 | } | ||
129 | } | ||
130 | |||
131 | if (ret) | ||
132 | dev_err(ctrl->dev, "Tx:MT:0x%x, MC:0x%x, LA:0x%x failed:%d\n", | ||
133 | txn->mt, txn->mc, txn->la, ret); | ||
134 | |||
135 | slim_xfer_err: | ||
136 | if (!clk_pause_msg && (!need_tid || ret == -ETIMEDOUT)) { | ||
137 | /* | ||
138 | * remove runtime-pm vote if this was TX only, or | ||
139 | * if there was error during this transaction | ||
140 | */ | ||
141 | pm_runtime_mark_last_busy(ctrl->dev); | ||
142 | pm_runtime_put_autosuspend(ctrl->dev); | ||
143 | } | ||
144 | return ret; | ||
145 | } | ||
146 | EXPORT_SYMBOL_GPL(slim_do_transfer); | ||
147 | |||
148 | static int slim_val_inf_sanity(struct slim_controller *ctrl, | ||
149 | struct slim_val_inf *msg, u8 mc) | ||
150 | { | ||
151 | if (!msg || msg->num_bytes > 16 || | ||
152 | (msg->start_offset + msg->num_bytes) > 0xC00) | ||
153 | goto reterr; | ||
154 | switch (mc) { | ||
155 | case SLIM_MSG_MC_REQUEST_VALUE: | ||
156 | case SLIM_MSG_MC_REQUEST_INFORMATION: | ||
157 | if (msg->rbuf != NULL) | ||
158 | return 0; | ||
159 | break; | ||
160 | |||
161 | case SLIM_MSG_MC_CHANGE_VALUE: | ||
162 | case SLIM_MSG_MC_CLEAR_INFORMATION: | ||
163 | if (msg->wbuf != NULL) | ||
164 | return 0; | ||
165 | break; | ||
166 | |||
167 | case SLIM_MSG_MC_REQUEST_CHANGE_VALUE: | ||
168 | case SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION: | ||
169 | if (msg->rbuf != NULL && msg->wbuf != NULL) | ||
170 | return 0; | ||
171 | break; | ||
172 | } | ||
173 | reterr: | ||
174 | if (msg) | ||
175 | dev_err(ctrl->dev, "Sanity check failed:msg:offset:0x%x, mc:%d\n", | ||
176 | msg->start_offset, mc); | ||
177 | return -EINVAL; | ||
178 | } | ||
179 | |||
180 | static u16 slim_slicesize(int code) | ||
181 | { | ||
182 | static const u8 sizetocode[16] = { | ||
183 | 0, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7 | ||
184 | }; | ||
185 | |||
186 | code = clamp(code, 1, (int)ARRAY_SIZE(sizetocode)); | ||
187 | |||
188 | return sizetocode[code - 1]; | ||
189 | } | ||
190 | |||
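The slice-size lookup above is easy to check in isolation. A minimal userspace sketch (not driver code) mirroring the table, with the clamp written out explicitly since the kernel's `clamp()` returns the clamped value rather than modifying its argument:

```c
#include <assert.h>

/* Userspace sketch of slim_slicesize(): a value-element transfer of
 * 1..16 bytes maps to a 3-bit slice code. The table mirrors the one
 * in the driver. */
static unsigned int slice_code(int nbytes)
{
	static const unsigned char sizetocode[16] = {
		0, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7
	};

	/* clamp nbytes to the valid 1..16 range before indexing */
	if (nbytes < 1)
		nbytes = 1;
	else if (nbytes > 16)
		nbytes = 16;

	return sizetocode[nbytes - 1];
}
```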
191 | /** | ||
192 | * slim_xfer_msg() - Transfer a value info message on slim device | ||
193 | * | ||
194 | * @sbdev: slim device to which this msg has to be transferred | ||
195 | * @msg: value info message pointer | ||
196 | * @mc: message code of the message | ||
197 | * | ||
198 | * Called by drivers that want to transfer value or info elements. | ||
199 | * | ||
200 | * Return: -ETIMEDOUT: If transmission of this message timed out | ||
201 | */ | ||
202 | int slim_xfer_msg(struct slim_device *sbdev, struct slim_val_inf *msg, | ||
203 | u8 mc) | ||
204 | { | ||
205 | DEFINE_SLIM_LDEST_TXN(txn_stack, mc, 6, sbdev->laddr, msg); | ||
206 | struct slim_msg_txn *txn = &txn_stack; | ||
207 | struct slim_controller *ctrl = sbdev->ctrl; | ||
208 | int ret; | ||
209 | u16 sl; | ||
210 | |||
211 | if (!ctrl) | ||
212 | return -EINVAL; | ||
213 | |||
214 | ret = slim_val_inf_sanity(ctrl, msg, mc); | ||
215 | if (ret) | ||
216 | return ret; | ||
217 | |||
218 | sl = slim_slicesize(msg->num_bytes); | ||
219 | |||
220 | dev_dbg(ctrl->dev, "SB xfer msg:os:%x, len:%d, MC:%x, sl:%x\n", | ||
221 | msg->start_offset, msg->num_bytes, mc, sl); | ||
222 | |||
223 | txn->ec = ((sl | (1 << 3)) | ((msg->start_offset & 0xFFF) << 4)); | ||
224 | |||
225 | switch (mc) { | ||
226 | case SLIM_MSG_MC_REQUEST_CHANGE_VALUE: | ||
227 | case SLIM_MSG_MC_CHANGE_VALUE: | ||
228 | case SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION: | ||
229 | case SLIM_MSG_MC_CLEAR_INFORMATION: | ||
230 | txn->rl += msg->num_bytes; | ||
231 | default: | ||
232 | break; | ||
233 | } | ||
234 | |||
235 | if (slim_tid_txn(txn->mt, txn->mc)) | ||
236 | txn->rl++; | ||
237 | |||
238 | return slim_do_transfer(ctrl, txn); | ||
239 | } | ||
240 | EXPORT_SYMBOL_GPL(slim_xfer_msg); | ||
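The element-code (EC) packing in slim_xfer_msg() can be sketched on its own. This is a userspace illustration, not driver code; the field layout (slice size in bits 0..2, byte-access flag in bit 3, 12-bit element offset in bits 4..15) is read off the shifts and masks used in the function:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the EC field assembled by slim_xfer_msg():
 * (sl | (1 << 3)) | ((start_offset & 0xFFF) << 4) */
static uint16_t pack_ec(uint16_t sl, uint16_t offset)
{
	return (sl | (1u << 3)) | ((offset & 0xFFF) << 4);
}
```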
241 | |||
242 | static void slim_fill_msg(struct slim_val_inf *msg, u32 addr, | ||
243 | size_t count, u8 *rbuf, u8 *wbuf) | ||
244 | { | ||
245 | msg->start_offset = addr; | ||
246 | msg->num_bytes = count; | ||
247 | msg->rbuf = rbuf; | ||
248 | msg->wbuf = wbuf; | ||
249 | } | ||
250 | |||
251 | /** | ||
252 | * slim_read() - Read SLIMbus value element | ||
253 | * | ||
254 | * @sdev: client handle. | ||
255 | * @addr: address of value element to read. | ||
256 | * @count: number of bytes to read. Maximum bytes allowed are 16. | ||
257 | * @val: buffer into which the read value is returned | ||
258 | * | ||
259 | * Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of | ||
260 | * this message timed out (e.g. due to bus lines not being clocked | ||
261 | * or driven by controller) | ||
262 | */ | ||
263 | int slim_read(struct slim_device *sdev, u32 addr, size_t count, u8 *val) | ||
264 | { | ||
265 | struct slim_val_inf msg; | ||
266 | |||
267 | slim_fill_msg(&msg, addr, count, val, NULL); | ||
268 | |||
269 | return slim_xfer_msg(sdev, &msg, SLIM_MSG_MC_REQUEST_VALUE); | ||
270 | } | ||
271 | EXPORT_SYMBOL_GPL(slim_read); | ||
272 | |||
273 | /** | ||
274 | * slim_readb() - Read byte from SLIMbus value element | ||
275 | * | ||
276 | * @sdev: client handle. | ||
277 | * @addr: address in the value element to read. | ||
278 | * | ||
279 | * Return: byte value of value element. | ||
280 | */ | ||
281 | int slim_readb(struct slim_device *sdev, u32 addr) | ||
282 | { | ||
283 | int ret; | ||
284 | u8 buf; | ||
285 | |||
286 | ret = slim_read(sdev, addr, 1, &buf); | ||
287 | if (ret < 0) | ||
288 | return ret; | ||
289 | else | ||
290 | return buf; | ||
291 | } | ||
292 | EXPORT_SYMBOL_GPL(slim_readb); | ||
293 | |||
294 | /** | ||
295 | * slim_write() - Write SLIMbus value element | ||
296 | * | ||
297 | * @sdev: client handle. | ||
298 | * @addr: address in the value element to write. | ||
299 | * @count: number of bytes to write. Maximum bytes allowed are 16. | ||
300 | * @val: value to write to value element | ||
301 | * | ||
302 | * Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of | ||
303 | * this message timed out (e.g. due to bus lines not being clocked | ||
304 | * or driven by controller) | ||
305 | */ | ||
306 | int slim_write(struct slim_device *sdev, u32 addr, size_t count, u8 *val) | ||
307 | { | ||
308 | struct slim_val_inf msg; | ||
309 | |||
310 | slim_fill_msg(&msg, addr, count, NULL, val); | ||
311 | |||
312 | return slim_xfer_msg(sdev, &msg, SLIM_MSG_MC_CHANGE_VALUE); | ||
313 | } | ||
314 | EXPORT_SYMBOL_GPL(slim_write); | ||
315 | |||
316 | /** | ||
317 | * slim_writeb() - Write byte to SLIMbus value element | ||
318 | * | ||
319 | * @sdev: client handle. | ||
320 | * @addr: address of value element to write. | ||
321 | * @value: value to write to value element | ||
322 | * | ||
323 | * Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of | ||
324 | * this message timed out (e.g. due to bus lines not being clocked | ||
325 | * or driven by controller) | ||
326 | * | ||
327 | */ | ||
328 | int slim_writeb(struct slim_device *sdev, u32 addr, u8 value) | ||
329 | { | ||
330 | return slim_write(sdev, addr, 1, &value); | ||
331 | } | ||
332 | EXPORT_SYMBOL_GPL(slim_writeb); | ||
diff --git a/drivers/slimbus/qcom-ctrl.c b/drivers/slimbus/qcom-ctrl.c new file mode 100644 index 000000000000..ffb46f915334 --- /dev/null +++ b/drivers/slimbus/qcom-ctrl.c | |||
@@ -0,0 +1,747 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (c) 2011-2017, The Linux Foundation | ||
4 | */ | ||
5 | |||
6 | #include <linux/irq.h> | ||
7 | #include <linux/kernel.h> | ||
8 | #include <linux/init.h> | ||
9 | #include <linux/slab.h> | ||
10 | #include <linux/io.h> | ||
11 | #include <linux/interrupt.h> | ||
12 | #include <linux/platform_device.h> | ||
13 | #include <linux/delay.h> | ||
14 | #include <linux/clk.h> | ||
15 | #include <linux/of.h> | ||
16 | #include <linux/pm_runtime.h> | ||
17 | #include "slimbus.h" | ||
18 | |||
19 | /* Manager registers */ | ||
20 | #define MGR_CFG 0x200 | ||
21 | #define MGR_STATUS 0x204 | ||
22 | #define MGR_INT_EN 0x210 | ||
23 | #define MGR_INT_STAT 0x214 | ||
24 | #define MGR_INT_CLR 0x218 | ||
25 | #define MGR_TX_MSG 0x230 | ||
26 | #define MGR_RX_MSG 0x270 | ||
27 | #define MGR_IE_STAT 0x2F0 | ||
28 | #define MGR_VE_STAT 0x300 | ||
29 | #define MGR_CFG_ENABLE 1 | ||
30 | |||
31 | /* Framer registers */ | ||
32 | #define FRM_CFG 0x400 | ||
33 | #define FRM_STAT 0x404 | ||
34 | #define FRM_INT_EN 0x410 | ||
35 | #define FRM_INT_STAT 0x414 | ||
36 | #define FRM_INT_CLR 0x418 | ||
37 | #define FRM_WAKEUP 0x41C | ||
38 | #define FRM_CLKCTL_DONE 0x420 | ||
39 | #define FRM_IE_STAT 0x430 | ||
40 | #define FRM_VE_STAT 0x440 | ||
41 | |||
42 | /* Interface registers */ | ||
43 | #define INTF_CFG 0x600 | ||
44 | #define INTF_STAT 0x604 | ||
45 | #define INTF_INT_EN 0x610 | ||
46 | #define INTF_INT_STAT 0x614 | ||
47 | #define INTF_INT_CLR 0x618 | ||
48 | #define INTF_IE_STAT 0x630 | ||
49 | #define INTF_VE_STAT 0x640 | ||
50 | |||
51 | /* Interrupt status bits */ | ||
52 | #define MGR_INT_TX_NACKED_2 BIT(25) | ||
53 | #define MGR_INT_MSG_BUF_CONTE BIT(26) | ||
54 | #define MGR_INT_RX_MSG_RCVD BIT(30) | ||
55 | #define MGR_INT_TX_MSG_SENT BIT(31) | ||
56 | |||
57 | /* Framer config register settings */ | ||
58 | #define FRM_ACTIVE 1 | ||
59 | #define CLK_GEAR 7 | ||
60 | #define ROOT_FREQ 11 | ||
61 | #define REF_CLK_GEAR 15 | ||
62 | #define INTR_WAKE 19 | ||
63 | |||
64 | #define SLIM_MSG_ASM_FIRST_WORD(l, mt, mc, dt, ad) \ | ||
65 | ((l) | ((mt) << 5) | ((mc) << 8) | ((dt) << 15) | ((ad) << 16)) | ||
66 | |||
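The first-word layout produced by SLIM_MSG_ASM_FIRST_WORD() can be illustrated with a small userspace sketch. The field widths used for the read-back masks below are assumptions inferred from the shift amounts in the macro (RL in bits 0..4, MT in 5..7, MC in 8..14, DT in bit 15, destination address from bit 16):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of SLIM_MSG_ASM_FIRST_WORD() from the driver */
#define FIRST_WORD(l, mt, mc, dt, ad) \
	((uint32_t)(l) | ((mt) << 5) | ((mc) << 8) | ((dt) << 15) | ((ad) << 16))

/* Field extractors; widths are assumptions from the macro's shifts */
static uint8_t word_rl(uint32_t w) { return w & 0x1F; }
static uint8_t word_mt(uint32_t w) { return (w >> 5) & 0x7; }
static uint8_t word_la(uint32_t w) { return (w >> 16) & 0xFF; }
```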
67 | #define SLIM_ROOT_FREQ 24576000 | ||
68 | #define QCOM_SLIM_AUTOSUSPEND 1000 | ||
69 | |||
70 | /* MAX message size over control channel */ | ||
71 | #define SLIM_MSGQ_BUF_LEN 40 | ||
72 | #define QCOM_TX_MSGS 2 | ||
73 | #define QCOM_RX_MSGS 8 | ||
74 | #define QCOM_BUF_ALLOC_RETRIES 10 | ||
75 | |||
76 | #define CFG_PORT(r, v) ((v) ? CFG_PORT_V2(r) : CFG_PORT_V1(r)) | ||
77 | |||
78 | /* V2 Component registers */ | ||
79 | #define CFG_PORT_V2(r) ((r ## _V2)) | ||
80 | #define COMP_CFG_V2 4 | ||
81 | #define COMP_TRUST_CFG_V2 0x3000 | ||
82 | |||
83 | /* V1 Component registers */ | ||
84 | #define CFG_PORT_V1(r) ((r ## _V1)) | ||
85 | #define COMP_CFG_V1 0 | ||
86 | #define COMP_TRUST_CFG_V1 0x14 | ||
87 | |||
88 | /* Resource group info for manager, and non-ported generic device-components */ | ||
89 | #define EE_MGR_RSC_GRP (1 << 10) | ||
90 | #define EE_NGD_2 (2 << 6) | ||
91 | #define EE_NGD_1 0 | ||
92 | |||
93 | struct slim_ctrl_buf { | ||
94 | void *base; | ||
95 | spinlock_t lock; | ||
96 | int head; | ||
97 | int tail; | ||
98 | int sl_sz; | ||
99 | int n; | ||
100 | }; | ||
101 | |||
102 | struct qcom_slim_ctrl { | ||
103 | struct slim_controller ctrl; | ||
104 | struct slim_framer framer; | ||
105 | struct device *dev; | ||
106 | void __iomem *base; | ||
107 | void __iomem *slew_reg; | ||
108 | |||
109 | struct slim_ctrl_buf rx; | ||
110 | struct slim_ctrl_buf tx; | ||
111 | |||
112 | struct completion **wr_comp; | ||
113 | int irq; | ||
114 | struct workqueue_struct *rxwq; | ||
115 | struct work_struct wd; | ||
116 | struct clk *rclk; | ||
117 | struct clk *hclk; | ||
118 | }; | ||
119 | |||
120 | static void qcom_slim_queue_tx(struct qcom_slim_ctrl *ctrl, void *buf, | ||
121 | u8 len, u32 tx_reg) | ||
122 | { | ||
123 | int count = (len + 3) >> 2; | ||
124 | |||
125 | __iowrite32_copy(ctrl->base + tx_reg, buf, count); | ||
126 | |||
127 | /* Ensure order of subsequent writes */ | ||
128 | mb(); | ||
129 | } | ||
130 | |||
131 | static void *slim_alloc_rxbuf(struct qcom_slim_ctrl *ctrl) | ||
132 | { | ||
133 | unsigned long flags; | ||
134 | int idx; | ||
135 | |||
136 | spin_lock_irqsave(&ctrl->rx.lock, flags); | ||
137 | if ((ctrl->rx.tail + 1) % ctrl->rx.n == ctrl->rx.head) { | ||
138 | spin_unlock_irqrestore(&ctrl->rx.lock, flags); | ||
139 | dev_err(ctrl->dev, "RX QUEUE full!"); | ||
140 | return NULL; | ||
141 | } | ||
142 | idx = ctrl->rx.tail; | ||
143 | ctrl->rx.tail = (ctrl->rx.tail + 1) % ctrl->rx.n; | ||
144 | spin_unlock_irqrestore(&ctrl->rx.lock, flags); | ||
145 | |||
146 | return ctrl->rx.base + (idx * ctrl->rx.sl_sz); | ||
147 | } | ||
148 | |||
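The rx/tx buffers above use a one-slot-open circular queue: the queue counts as full when advancing the tail would collide with the head, so one slot always stays unused. A minimal userspace sketch of that index arithmetic (illustration only, no locking):

```c
#include <assert.h>

/* One-slot-open ring as used by the driver's slim_ctrl_buf indices */
struct ring { int head, tail, n; };

/* Claim the next slot for a producer; -1 when full (as slim_alloc_rxbuf
 * drops the message) */
static int ring_push(struct ring *r)
{
	int idx;

	if ((r->tail + 1) % r->n == r->head)
		return -1;
	idx = r->tail;
	r->tail = (r->tail + 1) % r->n;
	return idx;
}

/* Consume the oldest slot; -1 when empty (cf. slim_get_current_rxbuf) */
static int ring_pop(struct ring *r)
{
	int idx;

	if (r->head == r->tail)
		return -1;
	idx = r->head;
	r->head = (r->head + 1) % r->n;
	return idx;
}
```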
149 | static void slim_ack_txn(struct qcom_slim_ctrl *ctrl, int err) | ||
150 | { | ||
151 | struct completion *comp; | ||
152 | unsigned long flags; | ||
153 | int idx; | ||
154 | |||
155 | spin_lock_irqsave(&ctrl->tx.lock, flags); | ||
156 | idx = ctrl->tx.head; | ||
157 | ctrl->tx.head = (ctrl->tx.head + 1) % ctrl->tx.n; | ||
158 | spin_unlock_irqrestore(&ctrl->tx.lock, flags); | ||
159 | |||
160 | comp = ctrl->wr_comp[idx]; | ||
161 | ctrl->wr_comp[idx] = NULL; | ||
162 | |||
163 | complete(comp); | ||
164 | } | ||
165 | |||
166 | static irqreturn_t qcom_slim_handle_tx_irq(struct qcom_slim_ctrl *ctrl, | ||
167 | u32 stat) | ||
168 | { | ||
169 | int err = 0; | ||
170 | |||
171 | if (stat & MGR_INT_TX_MSG_SENT) | ||
172 | writel_relaxed(MGR_INT_TX_MSG_SENT, | ||
173 | ctrl->base + MGR_INT_CLR); | ||
174 | |||
175 | if (stat & MGR_INT_TX_NACKED_2) { | ||
176 | u32 mgr_stat = readl_relaxed(ctrl->base + MGR_STATUS); | ||
177 | u32 mgr_ie_stat = readl_relaxed(ctrl->base + MGR_IE_STAT); | ||
178 | u32 frm_stat = readl_relaxed(ctrl->base + FRM_STAT); | ||
179 | u32 frm_cfg = readl_relaxed(ctrl->base + FRM_CFG); | ||
180 | u32 frm_intr_stat = readl_relaxed(ctrl->base + FRM_INT_STAT); | ||
181 | u32 frm_ie_stat = readl_relaxed(ctrl->base + FRM_IE_STAT); | ||
182 | u32 intf_stat = readl_relaxed(ctrl->base + INTF_STAT); | ||
183 | u32 intf_intr_stat = readl_relaxed(ctrl->base + INTF_INT_STAT); | ||
184 | u32 intf_ie_stat = readl_relaxed(ctrl->base + INTF_IE_STAT); | ||
185 | |||
186 | writel_relaxed(MGR_INT_TX_NACKED_2, ctrl->base + MGR_INT_CLR); | ||
187 | |||
188 | dev_err(ctrl->dev, "TX Nack MGR:int:0x%x, stat:0x%x\n", | ||
189 | stat, mgr_stat); | ||
190 | dev_err(ctrl->dev, "TX Nack MGR:ie:0x%x\n", mgr_ie_stat); | ||
191 | dev_err(ctrl->dev, "TX Nack FRM:int:0x%x, stat:0x%x\n", | ||
192 | frm_intr_stat, frm_stat); | ||
193 | dev_err(ctrl->dev, "TX Nack FRM:cfg:0x%x, ie:0x%x\n", | ||
194 | frm_cfg, frm_ie_stat); | ||
195 | dev_err(ctrl->dev, "TX Nack INTF:intr:0x%x, stat:0x%x\n", | ||
196 | intf_intr_stat, intf_stat); | ||
197 | dev_err(ctrl->dev, "TX Nack INTF:ie:0x%x\n", | ||
198 | intf_ie_stat); | ||
199 | err = -ENOTCONN; | ||
200 | } | ||
201 | |||
202 | slim_ack_txn(ctrl, err); | ||
203 | |||
204 | return IRQ_HANDLED; | ||
205 | } | ||
206 | |||
207 | static irqreturn_t qcom_slim_handle_rx_irq(struct qcom_slim_ctrl *ctrl, | ||
208 | u32 stat) | ||
209 | { | ||
210 | u32 *rx_buf, pkt[10]; | ||
211 | bool q_rx = false; | ||
212 | u8 mc, mt, len; | ||
213 | |||
214 | pkt[0] = readl_relaxed(ctrl->base + MGR_RX_MSG); | ||
215 | mt = SLIM_HEADER_GET_MT(pkt[0]); | ||
216 | len = SLIM_HEADER_GET_RL(pkt[0]); | ||
217 | mc = SLIM_HEADER_GET_MC(pkt[0]>>8); | ||
218 | |||
219 | /* | ||
220 | * this message cannot be handled by ISR, so | ||
221 | * let work-queue handle it | ||
222 | */ | ||
223 | if (mt == SLIM_MSG_MT_CORE && mc == SLIM_MSG_MC_REPORT_PRESENT) { | ||
224 | rx_buf = (u32 *)slim_alloc_rxbuf(ctrl); | ||
225 | if (!rx_buf) { | ||
226 | dev_err(ctrl->dev, "dropping RX:0x%x due to RX full\n", | ||
227 | pkt[0]); | ||
228 | goto rx_ret_irq; | ||
229 | } | ||
230 | rx_buf[0] = pkt[0]; | ||
231 | |||
232 | } else { | ||
233 | rx_buf = pkt; | ||
234 | } | ||
235 | |||
236 | __ioread32_copy(rx_buf + 1, ctrl->base + MGR_RX_MSG + 4, | ||
237 | DIV_ROUND_UP(len, 4)); | ||
238 | |||
239 | switch (mc) { | ||
240 | |||
241 | case SLIM_MSG_MC_REPORT_PRESENT: | ||
242 | q_rx = true; | ||
243 | break; | ||
244 | case SLIM_MSG_MC_REPLY_INFORMATION: | ||
245 | case SLIM_MSG_MC_REPLY_VALUE: | ||
246 | slim_msg_response(&ctrl->ctrl, (u8 *)(rx_buf + 1), | ||
247 | (u8)(*rx_buf >> 24), (len - 4)); | ||
248 | break; | ||
249 | default: | ||
250 | dev_err(ctrl->dev, "unsupported MC,%x MT:%x\n", | ||
251 | mc, mt); | ||
252 | break; | ||
253 | } | ||
254 | rx_ret_irq: | ||
255 | writel(MGR_INT_RX_MSG_RCVD, ctrl->base + | ||
256 | MGR_INT_CLR); | ||
257 | if (q_rx) | ||
258 | queue_work(ctrl->rxwq, &ctrl->wd); | ||
259 | |||
260 | return IRQ_HANDLED; | ||
261 | } | ||
262 | |||
263 | static irqreturn_t qcom_slim_interrupt(int irq, void *d) | ||
264 | { | ||
265 | struct qcom_slim_ctrl *ctrl = d; | ||
266 | u32 stat = readl_relaxed(ctrl->base + MGR_INT_STAT); | ||
267 | int ret = IRQ_NONE; | ||
268 | |||
269 | if (stat & MGR_INT_TX_MSG_SENT || stat & MGR_INT_TX_NACKED_2) | ||
270 | ret = qcom_slim_handle_tx_irq(ctrl, stat); | ||
271 | |||
272 | if (stat & MGR_INT_RX_MSG_RCVD) | ||
273 | ret = qcom_slim_handle_rx_irq(ctrl, stat); | ||
274 | |||
275 | return ret; | ||
276 | } | ||
277 | |||
278 | static int qcom_clk_pause_wakeup(struct slim_controller *sctrl) | ||
279 | { | ||
280 | struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev); | ||
281 | |||
282 | clk_prepare_enable(ctrl->hclk); | ||
283 | clk_prepare_enable(ctrl->rclk); | ||
284 | enable_irq(ctrl->irq); | ||
285 | |||
286 | writel_relaxed(1, ctrl->base + FRM_WAKEUP); | ||
287 | /* Make sure framer wakeup write goes through before ISR fires */ | ||
288 | mb(); | ||
289 | /* | ||
290 | * HW Workaround: Currently, slave is reporting lost-sync messages | ||
291 | * after SLIMbus comes out of clock pause. | ||
292 | * Transactions with the slave fail before the slave reports that message; | ||
293 | * give some time for that report to arrive. | ||
294 | * SLIMbus wakes up in clock gear 10 at 24.576 MHz. With each superframe | ||
295 | * being 250 us, we wait for 5-10 superframes here to ensure | ||
296 | * we get the message. | ||
297 | */ | ||
298 | usleep_range(1250, 2500); | ||
299 | return 0; | ||
300 | } | ||
301 | |||
302 | static void *slim_alloc_txbuf(struct qcom_slim_ctrl *ctrl, | ||
303 | struct slim_msg_txn *txn, | ||
304 | struct completion *done) | ||
305 | { | ||
306 | unsigned long flags; | ||
307 | int idx; | ||
308 | |||
309 | spin_lock_irqsave(&ctrl->tx.lock, flags); | ||
310 | if (((ctrl->tx.head + 1) % ctrl->tx.n) == ctrl->tx.tail) { | ||
311 | spin_unlock_irqrestore(&ctrl->tx.lock, flags); | ||
312 | dev_err(ctrl->dev, "controller TX buf unavailable"); | ||
313 | return NULL; | ||
314 | } | ||
315 | idx = ctrl->tx.tail; | ||
316 | ctrl->wr_comp[idx] = done; | ||
317 | ctrl->tx.tail = (ctrl->tx.tail + 1) % ctrl->tx.n; | ||
318 | |||
319 | spin_unlock_irqrestore(&ctrl->tx.lock, flags); | ||
320 | |||
321 | return ctrl->tx.base + (idx * ctrl->tx.sl_sz); | ||
322 | } | ||
323 | |||
324 | |||
325 | static int qcom_xfer_msg(struct slim_controller *sctrl, | ||
326 | struct slim_msg_txn *txn) | ||
327 | { | ||
328 | struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev); | ||
329 | DECLARE_COMPLETION_ONSTACK(done); | ||
330 | void *pbuf = slim_alloc_txbuf(ctrl, txn, &done); | ||
331 | unsigned long ms = txn->rl + HZ; | ||
332 | u8 *puc; | ||
333 | int ret = 0, timeout, retries = QCOM_BUF_ALLOC_RETRIES; | ||
334 | u8 la = txn->la; | ||
335 | u32 *head; | ||
336 | /* HW expects length field to be excluded */ | ||
337 | txn->rl--; | ||
338 | |||
339 | /* retry until a TX buffer becomes available */ | ||
340 | if (!pbuf) { | ||
341 | while (retries--) { | ||
342 | usleep_range(10000, 15000); | ||
343 | pbuf = slim_alloc_txbuf(ctrl, txn, &done); | ||
344 | if (pbuf) | ||
345 | break; | ||
346 | } | ||
347 | } | ||
348 | |||
349 | if (retries < 0 && !pbuf) | ||
350 | return -ENOMEM; | ||
351 | |||
352 | puc = (u8 *)pbuf; | ||
353 | head = (u32 *)pbuf; | ||
354 | |||
355 | if (txn->dt == SLIM_MSG_DEST_LOGICALADDR) { | ||
356 | *head = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt, | ||
357 | txn->mc, 0, la); | ||
358 | puc += 3; | ||
359 | } else { | ||
360 | *head = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt, | ||
361 | txn->mc, 1, la); | ||
362 | puc += 2; | ||
363 | } | ||
364 | |||
365 | if (slim_tid_txn(txn->mt, txn->mc)) | ||
366 | *(puc++) = txn->tid; | ||
367 | |||
368 | if (slim_ec_txn(txn->mt, txn->mc)) { | ||
369 | *(puc++) = (txn->ec & 0xFF); | ||
370 | *(puc++) = (txn->ec >> 8) & 0xFF; | ||
371 | } | ||
372 | |||
373 | if (txn->msg && txn->msg->wbuf) | ||
374 | memcpy(puc, txn->msg->wbuf, txn->msg->num_bytes); | ||
375 | |||
376 | qcom_slim_queue_tx(ctrl, head, txn->rl, MGR_TX_MSG); | ||
377 | timeout = wait_for_completion_timeout(&done, msecs_to_jiffies(ms)); | ||
378 | |||
379 | if (!timeout) { | ||
380 | dev_err(ctrl->dev, "TX timed out:MC:0x%x,mt:0x%x", txn->mc, | ||
381 | txn->mt); | ||
382 | ret = -ETIMEDOUT; | ||
383 | } | ||
384 | |||
385 | return ret; | ||
386 | |||
387 | } | ||
388 | |||
389 | static int qcom_set_laddr(struct slim_controller *sctrl, | ||
390 | struct slim_eaddr *ead, u8 laddr) | ||
391 | { | ||
392 | struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev); | ||
393 | struct { | ||
394 | __be16 manf_id; | ||
395 | __be16 prod_code; | ||
396 | u8 dev_index; | ||
397 | u8 instance; | ||
398 | u8 laddr; | ||
399 | } __packed p; | ||
400 | struct slim_val_inf msg = {0}; | ||
401 | DEFINE_SLIM_EDEST_TXN(txn, SLIM_MSG_MC_ASSIGN_LOGICAL_ADDRESS, | ||
402 | 10, laddr, &msg); | ||
403 | int ret; | ||
404 | |||
405 | p.manf_id = cpu_to_be16(ead->manf_id); | ||
406 | p.prod_code = cpu_to_be16(ead->prod_code); | ||
407 | p.dev_index = ead->dev_index; | ||
408 | p.instance = ead->instance; | ||
409 | p.laddr = laddr; | ||
410 | |||
411 | msg.wbuf = (void *)&p; | ||
412 | msg.num_bytes = 7; | ||
413 | ret = slim_do_transfer(&ctrl->ctrl, &txn); | ||
414 | |||
415 | if (ret) | ||
416 | dev_err(ctrl->dev, "set LA:0x%x failed:ret:%d\n", | ||
417 | laddr, ret); | ||
418 | return ret; | ||
419 | } | ||
420 | |||
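The packed payload built in qcom_set_laddr() serializes the enumeration address big-endian, followed by the device index, instance and assigned logical address, 7 bytes in all. A userspace sketch of that layout with the byte swaps written out (illustration only; the example values are arbitrary):

```c
#include <stdint.h>

/* 7-byte wire payload mirroring the __packed struct in qcom_set_laddr() */
struct ea_payload {
	uint8_t buf[7];
};

static void pack_ea(struct ea_payload *p, uint16_t manf, uint16_t prod,
		    uint8_t dev_index, uint8_t instance, uint8_t laddr)
{
	p->buf[0] = manf >> 8;    /* cpu_to_be16(manf_id) */
	p->buf[1] = manf & 0xFF;
	p->buf[2] = prod >> 8;    /* cpu_to_be16(prod_code) */
	p->buf[3] = prod & 0xFF;
	p->buf[4] = dev_index;
	p->buf[5] = instance;
	p->buf[6] = laddr;
}
```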
421 | static int slim_get_current_rxbuf(struct qcom_slim_ctrl *ctrl, void *buf) | ||
422 | { | ||
423 | unsigned long flags; | ||
424 | |||
425 | spin_lock_irqsave(&ctrl->rx.lock, flags); | ||
426 | if (ctrl->rx.tail == ctrl->rx.head) { | ||
427 | spin_unlock_irqrestore(&ctrl->rx.lock, flags); | ||
428 | return -ENODATA; | ||
429 | } | ||
430 | memcpy(buf, ctrl->rx.base + (ctrl->rx.head * ctrl->rx.sl_sz), | ||
431 | ctrl->rx.sl_sz); | ||
432 | |||
433 | ctrl->rx.head = (ctrl->rx.head + 1) % ctrl->rx.n; | ||
434 | spin_unlock_irqrestore(&ctrl->rx.lock, flags); | ||
435 | |||
436 | return 0; | ||
437 | } | ||
438 | |||
439 | static void qcom_slim_rxwq(struct work_struct *work) | ||
440 | { | ||
441 | u8 buf[SLIM_MSGQ_BUF_LEN]; | ||
442 | u8 mc, mt, len; | ||
443 | int ret; | ||
444 | struct qcom_slim_ctrl *ctrl = container_of(work, struct qcom_slim_ctrl, | ||
445 | wd); | ||
446 | |||
447 | while ((slim_get_current_rxbuf(ctrl, buf)) != -ENODATA) { | ||
448 | len = SLIM_HEADER_GET_RL(buf[0]); | ||
449 | mt = SLIM_HEADER_GET_MT(buf[0]); | ||
450 | mc = SLIM_HEADER_GET_MC(buf[1]); | ||
451 | if (mt == SLIM_MSG_MT_CORE && | ||
452 | mc == SLIM_MSG_MC_REPORT_PRESENT) { | ||
453 | struct slim_eaddr ea; | ||
454 | u8 laddr; | ||
455 | |||
456 | ea.manf_id = be16_to_cpup((__be16 *)&buf[2]); | ||
457 | ea.prod_code = be16_to_cpup((__be16 *)&buf[4]); | ||
458 | ea.dev_index = buf[6]; | ||
459 | ea.instance = buf[7]; | ||
460 | |||
461 | ret = slim_device_report_present(&ctrl->ctrl, &ea, | ||
462 | &laddr); | ||
463 | if (ret < 0) | ||
464 | dev_err(ctrl->dev, "assign laddr failed:%d\n", | ||
465 | ret); | ||
466 | } else { | ||
467 | dev_err(ctrl->dev, "unexpected message:mc:%x, mt:%x\n", | ||
468 | mc, mt); | ||
469 | } | ||
470 | } | ||
471 | } | ||
472 | |||
473 | static void qcom_slim_prg_slew(struct platform_device *pdev, | ||
474 | struct qcom_slim_ctrl *ctrl) | ||
475 | { | ||
476 | struct resource *slew_mem; | ||
477 | |||
478 | if (!ctrl->slew_reg) { | ||
479 | /* SLEW RATE register for this SLIMbus */ | ||
480 | slew_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, | ||
481 | "slew"); | ||
482 | if (!slew_mem) | ||
483 | return; | ||
484 | ctrl->slew_reg = devm_ioremap(&pdev->dev, slew_mem->start, | ||
485 | resource_size(slew_mem)); | ||
486 | if (!ctrl->slew_reg) | ||
487 | return; | ||
486 | } | ||
487 | |||
488 | writel_relaxed(1, ctrl->slew_reg); | ||
489 | /* Make sure SLIMbus-slew rate enabling goes through */ | ||
490 | wmb(); | ||
491 | } | ||
492 | |||
493 | static int qcom_slim_probe(struct platform_device *pdev) | ||
494 | { | ||
495 | struct qcom_slim_ctrl *ctrl; | ||
496 | struct slim_controller *sctrl; | ||
497 | struct resource *slim_mem; | ||
498 | int ret, ver; | ||
499 | |||
500 | ctrl = devm_kzalloc(&pdev->dev, sizeof(*ctrl), GFP_KERNEL); | ||
501 | if (!ctrl) | ||
502 | return -ENOMEM; | ||
503 | |||
504 | ctrl->hclk = devm_clk_get(&pdev->dev, "iface"); | ||
505 | if (IS_ERR(ctrl->hclk)) | ||
506 | return PTR_ERR(ctrl->hclk); | ||
507 | |||
508 | ctrl->rclk = devm_clk_get(&pdev->dev, "core"); | ||
509 | if (IS_ERR(ctrl->rclk)) | ||
510 | return PTR_ERR(ctrl->rclk); | ||
511 | |||
512 | ret = clk_set_rate(ctrl->rclk, SLIM_ROOT_FREQ); | ||
513 | if (ret) { | ||
514 | dev_err(&pdev->dev, "ref-clock set-rate failed:%d\n", ret); | ||
515 | return ret; | ||
516 | } | ||
517 | |||
518 | ctrl->irq = platform_get_irq(pdev, 0); | ||
519 | if (ctrl->irq < 0) { | ||
520 | dev_err(&pdev->dev, "no slimbus IRQ\n"); | ||
521 | return ctrl->irq; | ||
522 | } | ||
523 | |||
524 | sctrl = &ctrl->ctrl; | ||
525 | sctrl->dev = &pdev->dev; | ||
526 | ctrl->dev = &pdev->dev; | ||
527 | platform_set_drvdata(pdev, ctrl); | ||
528 | dev_set_drvdata(ctrl->dev, ctrl); | ||
529 | |||
530 | slim_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl"); | ||
531 | ctrl->base = devm_ioremap_resource(ctrl->dev, slim_mem); | ||
532 | if (IS_ERR(ctrl->base)) { | ||
533 | dev_err(&pdev->dev, "IOremap failed\n"); | ||
534 | return PTR_ERR(ctrl->base); | ||
535 | } | ||
536 | |||
537 | sctrl->set_laddr = qcom_set_laddr; | ||
538 | sctrl->xfer_msg = qcom_xfer_msg; | ||
539 | sctrl->wakeup = qcom_clk_pause_wakeup; | ||
540 | ctrl->tx.n = QCOM_TX_MSGS; | ||
541 | ctrl->tx.sl_sz = SLIM_MSGQ_BUF_LEN; | ||
542 | ctrl->rx.n = QCOM_RX_MSGS; | ||
543 | ctrl->rx.sl_sz = SLIM_MSGQ_BUF_LEN; | ||
544 | ctrl->wr_comp = kcalloc(QCOM_TX_MSGS, sizeof(struct completion *), | ||
545 | GFP_KERNEL); | ||
546 | if (!ctrl->wr_comp) | ||
547 | return -ENOMEM; | ||
548 | |||
549 | spin_lock_init(&ctrl->rx.lock); | ||
550 | spin_lock_init(&ctrl->tx.lock); | ||
551 | INIT_WORK(&ctrl->wd, qcom_slim_rxwq); | ||
552 | ctrl->rxwq = create_singlethread_workqueue("qcom_slim_rx"); | ||
553 | if (!ctrl->rxwq) { | ||
554 | dev_err(ctrl->dev, "Failed to start Rx WQ\n"); | ||
555 | return -ENOMEM; | ||
556 | } | ||
557 | |||
558 | ctrl->framer.rootfreq = SLIM_ROOT_FREQ / 8; | ||
559 | ctrl->framer.superfreq = | ||
560 | ctrl->framer.rootfreq / SLIM_CL_PER_SUPERFRAME_DIV8; | ||
561 | sctrl->a_framer = &ctrl->framer; | ||
562 | sctrl->clkgear = SLIM_MAX_CLK_GEAR; | ||
563 | |||
564 | qcom_slim_prg_slew(pdev, ctrl); | ||
565 | |||
566 | ret = devm_request_irq(&pdev->dev, ctrl->irq, qcom_slim_interrupt, | ||
567 | IRQF_TRIGGER_HIGH, "qcom_slim_irq", ctrl); | ||
568 | if (ret) { | ||
569 | dev_err(&pdev->dev, "request IRQ failed\n"); | ||
570 | goto err_request_irq_failed; | ||
571 | } | ||
572 | |||
573 | ret = clk_prepare_enable(ctrl->hclk); | ||
574 | if (ret) | ||
575 | goto err_hclk_enable_failed; | ||
576 | |||
577 | ret = clk_prepare_enable(ctrl->rclk); | ||
578 | if (ret) | ||
579 | goto err_rclk_enable_failed; | ||
580 | |||
581 | ctrl->tx.base = devm_kcalloc(&pdev->dev, ctrl->tx.n, ctrl->tx.sl_sz, | ||
582 | GFP_KERNEL); | ||
583 | if (!ctrl->tx.base) { | ||
584 | ret = -ENOMEM; | ||
585 | goto err; | ||
586 | } | ||
587 | |||
588 | ctrl->rx.base = devm_kcalloc(&pdev->dev, ctrl->rx.n, ctrl->rx.sl_sz, | ||
589 | GFP_KERNEL); | ||
590 | if (!ctrl->rx.base) { | ||
591 | ret = -ENOMEM; | ||
592 | goto err; | ||
593 | } | ||
594 | |||
595 | /* Register with framework before enabling frame, clock */ | ||
596 | ret = slim_register_controller(&ctrl->ctrl); | ||
597 | if (ret) { | ||
598 | dev_err(ctrl->dev, "error adding controller\n"); | ||
599 | goto err; | ||
600 | } | ||
601 | |||
602 | ver = readl_relaxed(ctrl->base); | ||
603 | /* Version info in 16 MSbits */ | ||
604 | ver >>= 16; | ||
605 | /* Component register initialization */ | ||
606 | writel(1, ctrl->base + CFG_PORT(COMP_CFG, ver)); | ||
607 | writel((EE_MGR_RSC_GRP | EE_NGD_2 | EE_NGD_1), | ||
608 | ctrl->base + CFG_PORT(COMP_TRUST_CFG, ver)); | ||
609 | |||
610 | writel((MGR_INT_TX_NACKED_2 | | ||
611 | MGR_INT_MSG_BUF_CONTE | MGR_INT_RX_MSG_RCVD | | ||
612 | MGR_INT_TX_MSG_SENT), ctrl->base + MGR_INT_EN); | ||
613 | writel(1, ctrl->base + MGR_CFG); | ||
614 | /* Framer register initialization */ | ||
615 | writel((1 << INTR_WAKE) | (0xA << REF_CLK_GEAR) | | ||
616 | (0xA << CLK_GEAR) | (1 << ROOT_FREQ) | (1 << FRM_ACTIVE) | 1, | ||
617 | ctrl->base + FRM_CFG); | ||
618 | writel(MGR_CFG_ENABLE, ctrl->base + MGR_CFG); | ||
619 | writel(1, ctrl->base + INTF_CFG); | ||
620 | writel(1, ctrl->base + CFG_PORT(COMP_CFG, ver)); | ||
621 | |||
622 | pm_runtime_use_autosuspend(&pdev->dev); | ||
623 | pm_runtime_set_autosuspend_delay(&pdev->dev, QCOM_SLIM_AUTOSUSPEND); | ||
624 | pm_runtime_set_active(&pdev->dev); | ||
625 | pm_runtime_mark_last_busy(&pdev->dev); | ||
626 | pm_runtime_enable(&pdev->dev); | ||
627 | |||
628 | dev_dbg(ctrl->dev, "QCOM SB controller is up:ver:0x%x!\n", ver); | ||
629 | return 0; | ||
630 | |||
631 | err: | ||
632 | clk_disable_unprepare(ctrl->rclk); | ||
633 | err_rclk_enable_failed: | ||
634 | clk_disable_unprepare(ctrl->hclk); | ||
635 | err_hclk_enable_failed: | ||
636 | err_request_irq_failed: | ||
637 | destroy_workqueue(ctrl->rxwq); | ||
638 | return ret; | ||
639 | } | ||
640 | |||
641 | static int qcom_slim_remove(struct platform_device *pdev) | ||
642 | { | ||
643 | struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); | ||
644 | |||
645 | pm_runtime_disable(&pdev->dev); | ||
646 | slim_unregister_controller(&ctrl->ctrl); | ||
647 | destroy_workqueue(ctrl->rxwq); | ||
648 | return 0; | ||
649 | } | ||
650 | |||
651 | /* | ||
652 | * If PM_RUNTIME is not defined, these 2 functions become helper | ||
653 | * functions to be called from system suspend/resume. | ||
654 | */ | ||
655 | #ifdef CONFIG_PM | ||
656 | static int qcom_slim_runtime_suspend(struct device *device) | ||
657 | { | ||
658 | struct platform_device *pdev = to_platform_device(device); | ||
659 | struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); | ||
660 | int ret; | ||
661 | |||
662 | dev_dbg(device, "pm_runtime: suspending...\n"); | ||
663 | ret = slim_ctrl_clk_pause(&ctrl->ctrl, false, SLIM_CLK_UNSPECIFIED); | ||
664 | if (ret) { | ||
665 | dev_err(device, "clk pause not entered:%d", ret); | ||
666 | } else { | ||
667 | disable_irq(ctrl->irq); | ||
668 | clk_disable_unprepare(ctrl->hclk); | ||
669 | clk_disable_unprepare(ctrl->rclk); | ||
670 | } | ||
671 | return ret; | ||
672 | } | ||
673 | |||
674 | static int qcom_slim_runtime_resume(struct device *device) | ||
675 | { | ||
676 | struct platform_device *pdev = to_platform_device(device); | ||
677 | struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); | ||
678 | int ret = 0; | ||
679 | |||
680 | dev_dbg(device, "pm_runtime: resuming...\n"); | ||
681 | ret = slim_ctrl_clk_pause(&ctrl->ctrl, true, 0); | ||
682 | if (ret) | ||
683 | dev_err(device, "clk pause not exited:%d", ret); | ||
684 | return ret; | ||
685 | } | ||
686 | #endif | ||
687 | |||
688 | #ifdef CONFIG_PM_SLEEP | ||
689 | static int qcom_slim_suspend(struct device *dev) | ||
690 | { | ||
691 | int ret = 0; | ||
692 | |||
693 | if (!pm_runtime_enabled(dev) || | ||
694 | (!pm_runtime_suspended(dev))) { | ||
695 | dev_dbg(dev, "system suspend"); | ||
696 | ret = qcom_slim_runtime_suspend(dev); | ||
697 | } | ||
698 | |||
699 | return ret; | ||
700 | } | ||
701 | |||
702 | static int qcom_slim_resume(struct device *dev) | ||
703 | { | ||
704 | if (!pm_runtime_enabled(dev) || !pm_runtime_suspended(dev)) { | ||
705 | int ret; | ||
706 | |||
707 | dev_dbg(dev, "system resume"); | ||
708 | ret = qcom_slim_runtime_resume(dev); | ||
709 | if (!ret) { | ||
710 | pm_runtime_mark_last_busy(dev); | ||
711 | pm_request_autosuspend(dev); | ||
712 | } | ||
713 | return ret; | ||
714 | |||
715 | } | ||
716 | return 0; | ||
717 | } | ||
718 | #endif /* CONFIG_PM_SLEEP */ | ||
719 | |||
720 | static const struct dev_pm_ops qcom_slim_dev_pm_ops = { | ||
721 | SET_SYSTEM_SLEEP_PM_OPS(qcom_slim_suspend, qcom_slim_resume) | ||
722 | SET_RUNTIME_PM_OPS( | ||
723 | qcom_slim_runtime_suspend, | ||
724 | qcom_slim_runtime_resume, | ||
725 | NULL | ||
726 | ) | ||
727 | }; | ||
728 | |||
729 | static const struct of_device_id qcom_slim_dt_match[] = { | ||
730 | { .compatible = "qcom,slim", }, | ||
731 | { .compatible = "qcom,apq8064-slim", }, | ||
732 | {} | ||
733 | }; | ||
734 | |||
735 | static struct platform_driver qcom_slim_driver = { | ||
736 | .probe = qcom_slim_probe, | ||
737 | .remove = qcom_slim_remove, | ||
738 | .driver = { | ||
739 | .name = "qcom_slim_ctrl", | ||
740 | .of_match_table = qcom_slim_dt_match, | ||
741 | .pm = &qcom_slim_dev_pm_ops, | ||
742 | }, | ||
743 | }; | ||
744 | module_platform_driver(qcom_slim_driver); | ||
745 | |||
746 | MODULE_LICENSE("GPL v2"); | ||
747 | MODULE_DESCRIPTION("Qualcomm SLIMbus Controller"); | ||
diff --git a/drivers/slimbus/sched.c b/drivers/slimbus/sched.c
new file mode 100644
index 000000000000..af84997d2742
--- /dev/null
+++ b/drivers/slimbus/sched.c
@@ -0,0 +1,121 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (c) 2011-2017, The Linux Foundation | ||
4 | */ | ||
5 | |||
6 | #include <linux/errno.h> | ||
7 | #include "slimbus.h" | ||
8 | |||
9 | /** | ||
10 | * slim_ctrl_clk_pause() - Called by slimbus controller to enter/exit | ||
11 | * 'clock pause' | ||
12 | * @ctrl: controller requesting bus to be paused or woken up | ||
13 | * @wakeup: Wakeup this controller from clock pause. | ||
14 | * @restart: Restart time value per spec used for clock pause. This value | ||
15 | * isn't used when controller is to be woken up. | ||
16 | * | ||
17 | * Slimbus specification needs this sequence to turn-off clocks for the bus. | ||
18 | * The sequence involves sending 3 broadcast messages (reconfiguration | ||
19 | * sequence) to inform all devices on the bus. | ||
20 | * To exit clock-pause, controller typically wakes up active framer device. | ||
21 | * This API executes clock pause reconfiguration sequence if wakeup is false. | ||
22 | * If wakeup is true, controller's wakeup is called. | ||
23 | * For entering clock-pause, -EBUSY is returned if a message txn is pending. | ||
24 | */ | ||
25 | int slim_ctrl_clk_pause(struct slim_controller *ctrl, bool wakeup, u8 restart) | ||
26 | { | ||
27 | int i, ret = 0; | ||
28 | unsigned long flags; | ||
29 | struct slim_sched *sched = &ctrl->sched; | ||
30 | struct slim_val_inf msg = {0, 0, NULL, NULL}; | ||
31 | |||
32 | DEFINE_SLIM_BCAST_TXN(txn, SLIM_MSG_MC_BEGIN_RECONFIGURATION, | ||
33 | 3, SLIM_LA_MANAGER, &msg); | ||
34 | |||
35 | if (!wakeup && restart > SLIM_CLK_UNSPECIFIED) | ||
36 | return -EINVAL; | ||
37 | |||
38 | mutex_lock(&sched->m_reconf); | ||
39 | if (wakeup) { | ||
40 | if (sched->clk_state == SLIM_CLK_ACTIVE) { | ||
41 | mutex_unlock(&sched->m_reconf); | ||
42 | return 0; | ||
43 | } | ||
44 | |||
45 | /* | ||
46 | * Fine-tune calculation based on clock gear, | ||
47 | * message-bandwidth after bandwidth management | ||
48 | */ | ||
49 | ret = wait_for_completion_timeout(&sched->pause_comp, | ||
50 | msecs_to_jiffies(100)); | ||
51 | if (!ret) { | ||
52 | mutex_unlock(&sched->m_reconf); | ||
53 | pr_err("Previous clock pause did not finish"); | ||
54 | return -ETIMEDOUT; | ||
55 | } | ||
56 | ret = 0; | ||
57 | |||
58 | /* | ||
59 | * Slimbus framework will call controller wakeup | ||
60 | * Controller should make sure that it sets active framer | ||
61 | * out of clock pause | ||
62 | */ | ||
63 | if (sched->clk_state == SLIM_CLK_PAUSED && ctrl->wakeup) | ||
64 | ret = ctrl->wakeup(ctrl); | ||
65 | if (!ret) | ||
66 | sched->clk_state = SLIM_CLK_ACTIVE; | ||
67 | mutex_unlock(&sched->m_reconf); | ||
68 | |||
69 | return ret; | ||
70 | } | ||
71 | |||
72 | /* already paused */ | ||
73 | if (ctrl->sched.clk_state == SLIM_CLK_PAUSED) { | ||
74 | mutex_unlock(&sched->m_reconf); | ||
75 | return 0; | ||
76 | } | ||
77 | |||
78 | spin_lock_irqsave(&ctrl->txn_lock, flags); | ||
79 | for (i = 0; i < SLIM_MAX_TIDS; i++) { | ||
80 | /* Pending response for a message */ | ||
81 | if (idr_find(&ctrl->tid_idr, i)) { | ||
82 | spin_unlock_irqrestore(&ctrl->txn_lock, flags); | ||
83 | mutex_unlock(&sched->m_reconf); | ||
84 | return -EBUSY; | ||
85 | } | ||
86 | } | ||
87 | spin_unlock_irqrestore(&ctrl->txn_lock, flags); | ||
88 | |||
89 | sched->clk_state = SLIM_CLK_ENTERING_PAUSE; | ||
90 | |||
91 | /* clock pause sequence */ | ||
92 | ret = slim_do_transfer(ctrl, &txn); | ||
93 | if (ret) | ||
94 | goto clk_pause_ret; | ||
95 | |||
96 | txn.mc = SLIM_MSG_MC_NEXT_PAUSE_CLOCK; | ||
97 | txn.rl = 4; | ||
98 | msg.num_bytes = 1; | ||
99 | msg.wbuf = &restart; | ||
100 | ret = slim_do_transfer(ctrl, &txn); | ||
101 | if (ret) | ||
102 | goto clk_pause_ret; | ||
103 | |||
104 | txn.mc = SLIM_MSG_MC_RECONFIGURE_NOW; | ||
105 | txn.rl = 3; | ||
106 | msg.num_bytes = 1; | ||
107 | msg.wbuf = NULL; | ||
108 | ret = slim_do_transfer(ctrl, &txn); | ||
109 | |||
110 | clk_pause_ret: | ||
111 | if (ret) { | ||
112 | sched->clk_state = SLIM_CLK_ACTIVE; | ||
113 | } else { | ||
114 | sched->clk_state = SLIM_CLK_PAUSED; | ||
115 | complete(&sched->pause_comp); | ||
116 | } | ||
117 | mutex_unlock(&sched->m_reconf); | ||
118 | |||
119 | return ret; | ||
120 | } | ||
121 | EXPORT_SYMBOL_GPL(slim_ctrl_clk_pause); | ||
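The `clk_pause_ret` error path above encodes a small state machine: a failed reconfiguration sequence drops the controller back to `SLIM_CLK_ACTIVE`, while success lands it in `SLIM_CLK_PAUSED` and signals `pause_comp`. A minimal userspace sketch of that outcome logic (the enum and function names here are illustrative, not the subsystem's API):

```c
#include <assert.h>

/* Mirror of the clk_pause_ret logic above: the result of the
 * three-message reconfiguration sequence decides the final state. */
enum clk_state { CLK_ACTIVE, CLK_ENTERING_PAUSE, CLK_PAUSED };

static enum clk_state finish_pause(enum clk_state s, int seq_ret)
{
	if (s != CLK_ENTERING_PAUSE)
		return s; /* no pause sequence in flight */
	return seq_ret ? CLK_ACTIVE : CLK_PAUSED;
}
```

On success the real code additionally completes `sched->pause_comp`, which a later wakeup waits on before touching the framer.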
diff --git a/drivers/slimbus/slimbus.h b/drivers/slimbus/slimbus.h
new file mode 100644
index 000000000000..79f8e05d92dd
--- /dev/null
+++ b/drivers/slimbus/slimbus.h
@@ -0,0 +1,261 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (c) 2011-2017, The Linux Foundation | ||
4 | */ | ||
5 | |||
6 | #ifndef _DRIVERS_SLIMBUS_H | ||
7 | #define _DRIVERS_SLIMBUS_H | ||
8 | #include <linux/module.h> | ||
9 | #include <linux/device.h> | ||
10 | #include <linux/mutex.h> | ||
11 | #include <linux/completion.h> | ||
12 | #include <linux/slimbus.h> | ||
13 | |||
14 | /* Standard values per SLIMbus spec needed by controllers and devices */ | ||
15 | #define SLIM_CL_PER_SUPERFRAME 6144 | ||
16 | #define SLIM_CL_PER_SUPERFRAME_DIV8 (SLIM_CL_PER_SUPERFRAME >> 3) | ||
17 | |||
18 | /* SLIMbus message types. Related to interpretation of message code. */ | ||
19 | #define SLIM_MSG_MT_CORE 0x0 | ||
20 | |||
21 | /* | ||
22 | * SLIM Broadcast header format | ||
23 | * BYTE 0: MT[7:5] RL[4:0] | ||
24 | * BYTE 1: RSVD[7] MC[6:0] | ||
25 | * BYTE 2: RSVD[7:6] DT[5:4] PI[3:0] | ||
26 | */ | ||
27 | #define SLIM_MSG_MT_MASK GENMASK(2, 0) | ||
28 | #define SLIM_MSG_MT_SHIFT 5 | ||
29 | #define SLIM_MSG_RL_MASK GENMASK(4, 0) | ||
30 | #define SLIM_MSG_RL_SHIFT 0 | ||
31 | #define SLIM_MSG_MC_MASK GENMASK(6, 0) | ||
32 | #define SLIM_MSG_MC_SHIFT 0 | ||
33 | #define SLIM_MSG_DT_MASK GENMASK(1, 0) | ||
34 | #define SLIM_MSG_DT_SHIFT 4 | ||
35 | |||
36 | #define SLIM_HEADER_GET_MT(b) ((b >> SLIM_MSG_MT_SHIFT) & SLIM_MSG_MT_MASK) | ||
37 | #define SLIM_HEADER_GET_RL(b) ((b >> SLIM_MSG_RL_SHIFT) & SLIM_MSG_RL_MASK) | ||
38 | #define SLIM_HEADER_GET_MC(b) ((b >> SLIM_MSG_MC_SHIFT) & SLIM_MSG_MC_MASK) | ||
39 | #define SLIM_HEADER_GET_DT(b) ((b >> SLIM_MSG_DT_SHIFT) & SLIM_MSG_DT_MASK) | ||
40 | |||
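The header layout in the comment block above can be exercised outside the kernel. Below is a hedged userspace mirror of the `SLIM_HEADER_GET_*()` accessors, decoding sample header bytes for a broadcast BEGIN_RECONFIGURATION message; the local `GENMASK()` is a simplified stand-in for the kernel's `include/linux/bits.h` version:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's GENMASK() */
#define GENMASK(h, l) (((1U << ((h) - (l) + 1)) - 1) << (l))

#define MT_MASK  GENMASK(2, 0)	/* BYTE 0: MT[7:5] */
#define MT_SHIFT 5
#define RL_MASK  GENMASK(4, 0)	/* BYTE 0: RL[4:0] */
#define RL_SHIFT 0
#define MC_MASK  GENMASK(6, 0)	/* BYTE 1: MC[6:0] */
#define MC_SHIFT 0
#define DT_MASK  GENMASK(1, 0)	/* BYTE 2: DT[5:4] */
#define DT_SHIFT 4

/* Same shift-and-mask pattern as the kernel macros above */
static uint8_t hdr_get_mt(uint8_t b0) { return (b0 >> MT_SHIFT) & MT_MASK; }
static uint8_t hdr_get_rl(uint8_t b0) { return (b0 >> RL_SHIFT) & RL_MASK; }
static uint8_t hdr_get_mc(uint8_t b1) { return (b1 >> MC_SHIFT) & MC_MASK; }
static uint8_t hdr_get_dt(uint8_t b2) { return (b2 >> DT_SHIFT) & DT_MASK; }
```

For example, byte 0 of `0xA3` decodes to MT 0x5 and RL 3, and an MC byte of `0x40` is SLIM_MSG_MC_BEGIN_RECONFIGURATION.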
41 | /* Device management messages used by this framework */ | ||
42 | #define SLIM_MSG_MC_REPORT_PRESENT 0x1 | ||
43 | #define SLIM_MSG_MC_ASSIGN_LOGICAL_ADDRESS 0x2 | ||
44 | #define SLIM_MSG_MC_REPORT_ABSENT 0xF | ||
45 | |||
46 | /* Clock pause Reconfiguration messages */ | ||
47 | #define SLIM_MSG_MC_BEGIN_RECONFIGURATION 0x40 | ||
48 | #define SLIM_MSG_MC_NEXT_PAUSE_CLOCK 0x4A | ||
49 | #define SLIM_MSG_MC_RECONFIGURE_NOW 0x5F | ||
50 | |||
51 | /* Clock pause values per SLIMbus spec */ | ||
52 | #define SLIM_CLK_FAST 0 | ||
53 | #define SLIM_CLK_CONST_PHASE 1 | ||
54 | #define SLIM_CLK_UNSPECIFIED 2 | ||
55 | |||
56 | /* Destination type Values */ | ||
57 | #define SLIM_MSG_DEST_LOGICALADDR 0 | ||
58 | #define SLIM_MSG_DEST_ENUMADDR 1 | ||
59 | #define SLIM_MSG_DEST_BROADCAST 3 | ||
60 | |||
61 | /* Standard values per SLIMbus spec needed by controllers and devices */ | ||
62 | #define SLIM_MAX_CLK_GEAR 10 | ||
63 | #define SLIM_MIN_CLK_GEAR 1 | ||
64 | |||
65 | /* Manager's logical address is set to 0xFF per spec */ | ||
66 | #define SLIM_LA_MANAGER 0xFF | ||
67 | |||
68 | #define SLIM_MAX_TIDS 256 | ||
69 | /** | ||
70 | * struct slim_framer - Represents SLIMbus framer. | ||
71 | * Every controller may have multiple framers. There is 1 active framer device | ||
72 | * responsible for clocking the bus. | ||
73 | * Manager is responsible for framer hand-over. | ||
74 | * @dev: Driver model representation of the device. | ||
75 | * @e_addr: Enumeration address of the framer. | ||
76 | * @rootfreq: Root Frequency at which the framer can run. This is maximum | ||
77 | * frequency ('clock gear 10') at which the bus can operate. | ||
78 | * @superfreq: Superframes per root frequency. Every frame is 6144 bits. | ||
79 | */ | ||
80 | struct slim_framer { | ||
81 | struct device dev; | ||
82 | struct slim_eaddr e_addr; | ||
83 | int rootfreq; | ||
84 | int superfreq; | ||
85 | }; | ||
86 | |||
87 | #define to_slim_framer(d) container_of(d, struct slim_framer, dev) | ||
88 | |||
89 | /** | ||
90 | * struct slim_msg_txn - Message to be sent by the controller. | ||
91 | * This structure has packet header, | ||
92 | * payload and buffer to be filled (if any) | ||
93 | * @rl: Header field. remaining length. | ||
94 | * @mt: Header field. Message type. | ||
95 | * @mc: Header field. LSB is message code for type mt. | ||
96 | * @dt: Header field. Destination type. | ||
97 | * @ec: Element code. Used for elemental access APIs. | ||
98 | * @tid: Transaction ID. Used for messages expecting response. | ||
99 | * (relevant for message-codes involving read operation) | ||
100 | * @la: Logical address of the device this message is going to. | ||
101 | * (Not used when destination type is broadcast.) | ||
102 | * @msg: Elemental access message to be read/written | ||
103 | * @comp: completion if read/write is synchronous, used internally | ||
104 | * for tid based transactions. | ||
105 | */ | ||
106 | struct slim_msg_txn { | ||
107 | u8 rl; | ||
108 | u8 mt; | ||
109 | u8 mc; | ||
110 | u8 dt; | ||
111 | u16 ec; | ||
112 | u8 tid; | ||
113 | u8 la; | ||
114 | struct slim_val_inf *msg; | ||
115 | struct completion *comp; | ||
116 | }; | ||
117 | |||
118 | /* Frequently used message transaction structures */ | ||
119 | #define DEFINE_SLIM_LDEST_TXN(name, mc, rl, la, msg) \ | ||
120 | struct slim_msg_txn name = { rl, 0, mc, SLIM_MSG_DEST_LOGICALADDR, 0,\ | ||
121 | 0, la, msg, } | ||
122 | |||
123 | #define DEFINE_SLIM_BCAST_TXN(name, mc, rl, la, msg) \ | ||
124 | struct slim_msg_txn name = { rl, 0, mc, SLIM_MSG_DEST_BROADCAST, 0,\ | ||
125 | 0, la, msg, } | ||
126 | |||
127 | #define DEFINE_SLIM_EDEST_TXN(name, mc, rl, la, msg) \ | ||
128 | struct slim_msg_txn name = { rl, 0, mc, SLIM_MSG_DEST_ENUMADDR, 0,\ | ||
129 | 0, la, msg, } | ||
130 | /** | ||
131 | * enum slim_clk_state: SLIMbus controller's clock state used internally for | ||
132 | * maintaining current clock state. | ||
133 | * @SLIM_CLK_ACTIVE: SLIMbus clock is active | ||
134 | * @SLIM_CLK_ENTERING_PAUSE: SLIMbus clock pause sequence is being sent on the | ||
135 | * bus. If this succeeds, state changes to SLIM_CLK_PAUSED. If the | ||
136 | * transition fails, state changes back to SLIM_CLK_ACTIVE | ||
137 | * @SLIM_CLK_PAUSED: SLIMbus controller clock has paused. | ||
138 | */ | ||
139 | enum slim_clk_state { | ||
140 | SLIM_CLK_ACTIVE, | ||
141 | SLIM_CLK_ENTERING_PAUSE, | ||
142 | SLIM_CLK_PAUSED, | ||
143 | }; | ||
144 | |||
145 | /** | ||
146 | * struct slim_sched: Framework uses this structure internally for scheduling. | ||
147 | * @clk_state: Controller's clock state from enum slim_clk_state | ||
148 | * @pause_comp: Signals completion of clock pause sequence. This is useful when | ||
149 | * client tries to call SLIMbus transaction when controller is entering | ||
150 | * clock pause. | ||
151 | * @m_reconf: This mutex is held until current reconfiguration (data channel | ||
152 | * scheduling, message bandwidth reservation) is done. Message APIs can | ||
153 | * use the bus concurrently when this mutex is held since elemental access | ||
154 | * messages can be sent on the bus when reconfiguration is in progress. | ||
155 | */ | ||
156 | struct slim_sched { | ||
157 | enum slim_clk_state clk_state; | ||
158 | struct completion pause_comp; | ||
159 | struct mutex m_reconf; | ||
160 | }; | ||
161 | |||
162 | /** | ||
163 | * struct slim_controller - Controls every instance of SLIMbus | ||
164 | * (similar to 'master' on SPI) | ||
165 | * @dev: Device interface to this driver | ||
166 | * @id: Board-specific number identifier for this controller/bus | ||
167 | * @name: Name for this controller | ||
168 | * @min_cg: Minimum clock gear supported by this controller (default value: 1) | ||
169 | * @max_cg: Maximum clock gear supported by this controller (default value: 10) | ||
170 | * @clkgear: Current clock gear in which this bus is running | ||
171 | * @laddr_ida: logical address id allocator | ||
172 | * @a_framer: Active framer which is clocking the bus managed by this controller | ||
173 | * @lock: Mutex protecting controller data structures | ||
174 | * @devices: Slim device list | ||
175 | * @tid_idr: tid id allocator | ||
176 | * @txn_lock: Lock to protect table of transactions | ||
177 | * @sched: scheduler structure used by the controller | ||
178 | * @xfer_msg: Transfer a message on this controller (this can be a broadcast | ||
179 | * control/status message like data channel setup, or a unicast message | ||
180 | * like value element read/write). | ||
181 | * @set_laddr: Setup logical address at laddr for the slave with elemental | ||
182 | * address e_addr. Drivers implementing controller will be expected to | ||
183 | * send unicast message to this device with its logical address. | ||
184 | * @get_laddr: It is possible that controller needs to set fixed logical | ||
185 | * address table and get_laddr can be used in that case so that controller | ||
186 | * can do this assignment. Use case is when the master is on the remote | ||
187 | * processor side, which is responsible for allocating laddr. | ||
188 | * @wakeup: This function pointer implements controller-specific procedure | ||
189 | * to wake it up from clock-pause. Framework will call this to bring | ||
190 | * the controller out of clock pause. | ||
191 | * | ||
192 | * 'Manager device' is responsible for device management, bandwidth | ||
193 | * allocation, channel setup, and port associations per channel. | ||
194 | * Device management means Logical address assignment/removal based on | ||
195 | * enumeration (report-present, report-absent) of a device. | ||
196 | * Bandwidth allocation is done dynamically by the manager based on active | ||
197 | * channels on the bus, message-bandwidth requests made by SLIMbus devices. | ||
198 | * Based on current bandwidth usage, manager chooses a frequency to run | ||
199 | * the bus at (in steps of 'clock-gear', 1 through 10, each clock gear | ||
200 | * representing twice the frequency of the previous gear). | ||
201 | * Manager is also responsible for entering (and exiting) low-power-mode | ||
202 | * (known as 'clock pause'). | ||
203 | * Manager can do handover of framer if there are multiple framers on the | ||
204 | * bus and a certain usecase warrants using certain framer to avoid keeping | ||
205 | * previous framer being powered-on. | ||
206 | * | ||
207 | * Controller here performs duties of the manager device, and 'interface | ||
208 | * device'. Interface device is responsible for monitoring the bus and | ||
209 | * reporting information such as loss-of-synchronization, data | ||
210 | * slot-collision. | ||
211 | */ | ||
212 | struct slim_controller { | ||
213 | struct device *dev; | ||
214 | unsigned int id; | ||
215 | char name[SLIMBUS_NAME_SIZE]; | ||
216 | int min_cg; | ||
217 | int max_cg; | ||
218 | int clkgear; | ||
219 | struct ida laddr_ida; | ||
220 | struct slim_framer *a_framer; | ||
221 | struct mutex lock; | ||
222 | struct list_head devices; | ||
223 | struct idr tid_idr; | ||
224 | spinlock_t txn_lock; | ||
225 | struct slim_sched sched; | ||
226 | int (*xfer_msg)(struct slim_controller *ctrl, | ||
227 | struct slim_msg_txn *tx); | ||
228 | int (*set_laddr)(struct slim_controller *ctrl, | ||
229 | struct slim_eaddr *ea, u8 laddr); | ||
230 | int (*get_laddr)(struct slim_controller *ctrl, | ||
231 | struct slim_eaddr *ea, u8 *laddr); | ||
232 | int (*wakeup)(struct slim_controller *ctrl); | ||
233 | }; | ||
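The clock-gear scaling described in the comment block above (gears 1 through 10, each doubling the previous one, with gear 10 running at root frequency) works out to a simple shift. A small illustrative sketch, where the 24.576 MHz root frequency is only an assumed example value:

```c
#include <assert.h>
#include <stdint.h>

/* Bus frequency at a given clock gear, from the gear-10 (root)
 * frequency: each step down halves the clock, per the comment above.
 * Valid for gear in [1, 10]. */
static uint32_t gear_freq(uint32_t root_hz, int gear)
{
	return root_hz >> (10 - gear);
}
```

This is why the manager can trade bandwidth for power simply by stepping `clkgear` between `SLIM_MIN_CLK_GEAR` and `SLIM_MAX_CLK_GEAR`.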
234 | |||
235 | int slim_device_report_present(struct slim_controller *ctrl, | ||
236 | struct slim_eaddr *e_addr, u8 *laddr); | ||
237 | void slim_report_absent(struct slim_device *sbdev); | ||
238 | int slim_register_controller(struct slim_controller *ctrl); | ||
239 | int slim_unregister_controller(struct slim_controller *ctrl); | ||
240 | void slim_msg_response(struct slim_controller *ctrl, u8 *reply, u8 tid, u8 l); | ||
241 | int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn); | ||
242 | int slim_ctrl_clk_pause(struct slim_controller *ctrl, bool wakeup, u8 restart); | ||
243 | |||
244 | static inline bool slim_tid_txn(u8 mt, u8 mc) | ||
245 | { | ||
246 | return (mt == SLIM_MSG_MT_CORE && | ||
247 | (mc == SLIM_MSG_MC_REQUEST_INFORMATION || | ||
248 | mc == SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION || | ||
249 | mc == SLIM_MSG_MC_REQUEST_VALUE || | ||
250 | mc == SLIM_MSG_MC_REQUEST_CHANGE_VALUE)); | ||
251 | } | ||
252 | |||
253 | static inline bool slim_ec_txn(u8 mt, u8 mc) | ||
254 | { | ||
255 | return (mt == SLIM_MSG_MT_CORE && | ||
256 | ((mc >= SLIM_MSG_MC_REQUEST_INFORMATION && | ||
257 | mc <= SLIM_MSG_MC_REPORT_INFORMATION) || | ||
258 | (mc >= SLIM_MSG_MC_REQUEST_VALUE && | ||
259 | mc <= SLIM_MSG_MC_CHANGE_VALUE))); | ||
260 | } | ||
261 | #endif /* _DRIVERS_SLIMBUS_H */ | ||
diff --git a/drivers/soundwire/Kconfig b/drivers/soundwire/Kconfig
new file mode 100644
index 000000000000..b46084b4b1f8
--- /dev/null
+++ b/drivers/soundwire/Kconfig
@@ -0,0 +1,37 @@ | |||
1 | # | ||
2 | # SoundWire subsystem configuration | ||
3 | # | ||
4 | |||
5 | menuconfig SOUNDWIRE | ||
6 | bool "SoundWire support" | ||
7 | ---help--- | ||
8 | SoundWire is a 2-Pin interface with data and clock line ratified | ||
9 | by the MIPI Alliance. SoundWire is used for transporting data | ||
10 | typically related to audio functions. SoundWire interface is | ||
11 | optimized to integrate audio devices in mobile or mobile inspired | ||
12 | systems. Say Y to enable this subsystem, N if you do not have such | ||
13 | a device | ||
14 | |||
15 | if SOUNDWIRE | ||
16 | |||
17 | comment "SoundWire Devices" | ||
18 | |||
19 | config SOUNDWIRE_BUS | ||
20 | tristate | ||
21 | select REGMAP_SOUNDWIRE | ||
22 | |||
23 | config SOUNDWIRE_CADENCE | ||
24 | tristate | ||
25 | |||
26 | config SOUNDWIRE_INTEL | ||
27 | tristate "Intel SoundWire Master driver" | ||
28 | select SOUNDWIRE_CADENCE | ||
29 | select SOUNDWIRE_BUS | ||
30 | depends on X86 && ACPI | ||
31 | ---help--- | ||
32 | SoundWire Intel Master driver. | ||
33 | If you have an Intel platform which has a SoundWire Master then | ||
34 | enable this config option to get the SoundWire support for that | ||
35 | device. | ||
36 | |||
37 | endif | ||
diff --git a/drivers/soundwire/Makefile b/drivers/soundwire/Makefile
new file mode 100644
index 000000000000..e1a74c5692aa
--- /dev/null
+++ b/drivers/soundwire/Makefile
@@ -0,0 +1,18 @@ | |||
1 | # | ||
2 | # Makefile for soundwire core | ||
3 | # | ||
4 | |||
5 | #Bus Objs | ||
6 | soundwire-bus-objs := bus_type.o bus.o slave.o mipi_disco.o | ||
7 | obj-$(CONFIG_SOUNDWIRE_BUS) += soundwire-bus.o | ||
8 | |||
9 | #Cadence Objs | ||
10 | soundwire-cadence-objs := cadence_master.o | ||
11 | obj-$(CONFIG_SOUNDWIRE_CADENCE) += soundwire-cadence.o | ||
12 | |||
13 | #Intel driver | ||
14 | soundwire-intel-objs := intel.o | ||
15 | obj-$(CONFIG_SOUNDWIRE_INTEL) += soundwire-intel.o | ||
16 | |||
17 | soundwire-intel-init-objs := intel_init.o | ||
18 | obj-$(CONFIG_SOUNDWIRE_INTEL) += soundwire-intel-init.o | ||
diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
new file mode 100644
index 000000000000..d6dc8e7a8614
--- /dev/null
+++ b/drivers/soundwire/bus.c
@@ -0,0 +1,997 @@ | |||
1 | // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #include <linux/acpi.h> | ||
5 | #include <linux/mod_devicetable.h> | ||
6 | #include <linux/pm_runtime.h> | ||
7 | #include <linux/soundwire/sdw_registers.h> | ||
8 | #include <linux/soundwire/sdw.h> | ||
9 | #include "bus.h" | ||
10 | |||
11 | /** | ||
12 | * sdw_add_bus_master() - add a bus Master instance | ||
13 | * @bus: bus instance | ||
14 | * | ||
15 | * Initializes the bus instance, read properties and create child | ||
16 | * devices. | ||
17 | */ | ||
18 | int sdw_add_bus_master(struct sdw_bus *bus) | ||
19 | { | ||
20 | int ret; | ||
21 | |||
22 | if (!bus->dev) { | ||
23 | pr_err("SoundWire bus has no device"); | ||
24 | return -ENODEV; | ||
25 | } | ||
26 | |||
27 | if (!bus->ops) { | ||
28 | dev_err(bus->dev, "SoundWire Bus ops are not set"); | ||
29 | return -EINVAL; | ||
30 | } | ||
31 | |||
32 | mutex_init(&bus->msg_lock); | ||
33 | mutex_init(&bus->bus_lock); | ||
34 | INIT_LIST_HEAD(&bus->slaves); | ||
35 | |||
36 | if (bus->ops->read_prop) { | ||
37 | ret = bus->ops->read_prop(bus); | ||
38 | if (ret < 0) { | ||
39 | dev_err(bus->dev, "Bus read properties failed:%d", ret); | ||
40 | return ret; | ||
41 | } | ||
42 | } | ||
43 | |||
44 | /* | ||
45 | * Device numbers in SoundWire are 0 thru 15. Enumeration device | ||
46 | * number (0), Broadcast device number (15), Group numbers (12 and | ||
47 | * 13) and Master device number (14) are not used for assignment so | ||
48 | * mask these and other higher bits. | ||
49 | */ | ||
50 | |||
51 | /* Set higher order bits */ | ||
52 | *bus->assigned = ~GENMASK(SDW_BROADCAST_DEV_NUM, SDW_ENUM_DEV_NUM); | ||
53 | |||
54 | /* Set enumeration device number and broadcast device number */ | ||
55 | set_bit(SDW_ENUM_DEV_NUM, bus->assigned); | ||
56 | set_bit(SDW_BROADCAST_DEV_NUM, bus->assigned); | ||
57 | |||
58 | /* Set group device numbers and master device number */ | ||
59 | set_bit(SDW_GROUP12_DEV_NUM, bus->assigned); | ||
60 | set_bit(SDW_GROUP13_DEV_NUM, bus->assigned); | ||
61 | set_bit(SDW_MASTER_DEV_NUM, bus->assigned); | ||
62 | |||
63 | /* | ||
64 | * SDW is an enumerable bus, but devices can be powered off. So, | ||
65 | * they won't be able to report as present. | ||
66 | * | ||
67 | * Create Slave devices based on Slaves described in | ||
68 | * the respective firmware (ACPI/DT) | ||
69 | */ | ||
70 | if (IS_ENABLED(CONFIG_ACPI) && ACPI_HANDLE(bus->dev)) | ||
71 | ret = sdw_acpi_find_slaves(bus); | ||
72 | else | ||
73 | ret = -ENOTSUPP; /* No ACPI/DT so error out */ | ||
74 | |||
75 | if (ret) { | ||
76 | dev_err(bus->dev, "Finding slaves failed:%d\n", ret); | ||
77 | return ret; | ||
78 | } | ||
79 | |||
80 | return 0; | ||
81 | } | ||
82 | EXPORT_SYMBOL(sdw_add_bus_master); | ||
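The device-number bookkeeping above pre-reserves the special SoundWire numbers (0 for enumeration, 12/13 for groups, 14 for the Master, 15 for broadcast) in a bitmap, so a newly enumerated Slave gets the first clear bit. A hedged userspace sketch of the same idea, with names of our own choosing rather than the subsystem's:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative device-number map: 16 device numbers, reserved ones
 * pre-set, first clear bit handed to a new Slave. */
enum { ENUM_DN = 0, GROUP12_DN = 12, GROUP13_DN = 13,
       MASTER_DN = 14, BCAST_DN = 15 };

static uint16_t init_assigned(void)
{
	return (1u << ENUM_DN) | (1u << GROUP12_DN) | (1u << GROUP13_DN) |
	       (1u << MASTER_DN) | (1u << BCAST_DN);
}

static int alloc_dev_num(uint16_t *map)
{
	for (int n = 0; n < 16; n++) {
		if (!(*map & (1u << n))) {
			*map |= 1u << n;	/* claim it */
			return n;
		}
	}
	return -1;	/* all assignable numbers in use */
}
```

With the reserved numbers set, the first two allocations yield 1 and 2, matching what a freshly added bus would hand out.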
83 | |||
84 | static int sdw_delete_slave(struct device *dev, void *data) | ||
85 | { | ||
86 | struct sdw_slave *slave = dev_to_sdw_dev(dev); | ||
87 | struct sdw_bus *bus = slave->bus; | ||
88 | |||
89 | mutex_lock(&bus->bus_lock); | ||
90 | |||
91 | if (slave->dev_num) /* clear dev_num if assigned */ | ||
92 | clear_bit(slave->dev_num, bus->assigned); | ||
93 | |||
94 | list_del_init(&slave->node); | ||
95 | mutex_unlock(&bus->bus_lock); | ||
96 | |||
97 | device_unregister(dev); | ||
98 | return 0; | ||
99 | } | ||
100 | |||
101 | /** | ||
102 | * sdw_delete_bus_master() - delete the bus master instance | ||
103 | * @bus: bus to be deleted | ||
104 | * | ||
105 | * Remove the instance, delete the child devices. | ||
106 | */ | ||
107 | void sdw_delete_bus_master(struct sdw_bus *bus) | ||
108 | { | ||
109 | device_for_each_child(bus->dev, NULL, sdw_delete_slave); | ||
110 | } | ||
111 | EXPORT_SYMBOL(sdw_delete_bus_master); | ||
112 | |||
113 | /* | ||
114 | * SDW IO Calls | ||
115 | */ | ||
116 | |||
117 | static inline int find_response_code(enum sdw_command_response resp) | ||
118 | { | ||
119 | switch (resp) { | ||
120 | case SDW_CMD_OK: | ||
121 | return 0; | ||
122 | |||
123 | case SDW_CMD_IGNORED: | ||
124 | return -ENODATA; | ||
125 | |||
126 | case SDW_CMD_TIMEOUT: | ||
127 | return -ETIMEDOUT; | ||
128 | |||
129 | default: | ||
130 | return -EIO; | ||
131 | } | ||
132 | } | ||
133 | |||
134 | static inline int do_transfer(struct sdw_bus *bus, struct sdw_msg *msg) | ||
135 | { | ||
136 | int retry = bus->prop.err_threshold; | ||
137 | enum sdw_command_response resp; | ||
138 | int ret = 0, i; | ||
139 | |||
140 | for (i = 0; i <= retry; i++) { | ||
141 | resp = bus->ops->xfer_msg(bus, msg); | ||
142 | ret = find_response_code(resp); | ||
143 | |||
144 | /* if cmd is ok or ignored return */ | ||
145 | if (ret == 0 || ret == -ENODATA) | ||
146 | return ret; | ||
147 | } | ||
148 | |||
149 | return ret; | ||
150 | } | ||
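The retry policy above is easy to misread: `err_threshold` is the number of *retries*, so a transfer is attempted `err_threshold + 1` times and stops early on success or on `-ENODATA` (command ignored). A minimal stand-alone sketch, with a hypothetical test double standing in for `bus->ops->xfer_msg`:

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the do_transfer() retry policy. fake_bus and fake_xfer
 * are made-up test doubles; only the loop shape mirrors the driver. */
struct fake_bus {
	int err_threshold;	/* retries allowed after the first try */
	int fail_first_n;	/* test double: fail this many attempts */
	int attempts;		/* attempts actually made */
};

static int fake_xfer(struct fake_bus *bus)
{
	bus->attempts++;
	return (bus->attempts <= bus->fail_first_n) ? -EIO : 0;
}

static int do_transfer_sketch(struct fake_bus *bus)
{
	int ret = 0, i;

	for (i = 0; i <= bus->err_threshold; i++) {
		ret = fake_xfer(bus);
		/* success or "command ignored" ends the loop early */
		if (ret == 0 || ret == -ENODATA)
			return ret;
	}
	return ret;	/* last error wins when retries are exhausted */
}
```

With `err_threshold = 2` the transfer is tried exactly three times.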
151 | |||
152 | static inline int do_transfer_defer(struct sdw_bus *bus, | ||
153 | struct sdw_msg *msg, struct sdw_defer *defer) | ||
154 | { | ||
155 | int retry = bus->prop.err_threshold; | ||
156 | enum sdw_command_response resp; | ||
157 | int ret = 0, i; | ||
158 | |||
159 | defer->msg = msg; | ||
160 | defer->length = msg->len; | ||
161 | |||
162 | for (i = 0; i <= retry; i++) { | ||
163 | resp = bus->ops->xfer_msg_defer(bus, msg, defer); | ||
164 | ret = find_response_code(resp); | ||
165 | /* if cmd is ok or ignored return */ | ||
166 | if (ret == 0 || ret == -ENODATA) | ||
167 | return ret; | ||
168 | } | ||
169 | |||
170 | return ret; | ||
171 | } | ||
172 | |||
173 | static int sdw_reset_page(struct sdw_bus *bus, u16 dev_num) | ||
174 | { | ||
175 | int retry = bus->prop.err_threshold; | ||
176 | enum sdw_command_response resp; | ||
177 | int ret = 0, i; | ||
178 | |||
179 | for (i = 0; i <= retry; i++) { | ||
180 | resp = bus->ops->reset_page_addr(bus, dev_num); | ||
181 | ret = find_response_code(resp); | ||
182 | /* if cmd is ok or ignored return */ | ||
183 | if (ret == 0 || ret == -ENODATA) | ||
184 | return ret; | ||
185 | } | ||
186 | |||
187 | return ret; | ||
188 | } | ||
189 | |||
190 | /** | ||
191 | * sdw_transfer() - Synchronously transfer a message to a SDW Slave device | ||
192 | * @bus: SDW bus | ||
193 | * @msg: SDW message to be xfered | ||
194 | */ | ||
195 | int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg) | ||
196 | { | ||
197 | int ret; | ||
198 | |||
199 | mutex_lock(&bus->msg_lock); | ||
200 | |||
201 | ret = do_transfer(bus, msg); | ||
202 | if (ret != 0 && ret != -ENODATA) | ||
203 | dev_err(bus->dev, "trf on Slave %d failed:%d\n", | ||
204 | msg->dev_num, ret); | ||
205 | |||
206 | if (msg->page) | ||
207 | sdw_reset_page(bus, msg->dev_num); | ||
208 | |||
209 | mutex_unlock(&bus->msg_lock); | ||
210 | |||
211 | return ret; | ||
212 | } | ||
213 | |||
214 | /** | ||
215 | * sdw_transfer_defer() - Asynchronously transfer a message to a SDW Slave device | ||
216 | * @bus: SDW bus | ||
217 | * @msg: SDW message to be xfered | ||
218 | * @defer: Defer block for signal completion | ||
219 | * | ||
220 | * Caller needs to hold msg_lock while calling this | ||
221 | */ | ||
222 | int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, | ||
223 | struct sdw_defer *defer) | ||
224 | { | ||
225 | int ret; | ||
226 | |||
227 | if (!bus->ops->xfer_msg_defer) | ||
228 | return -ENOTSUPP; | ||
229 | |||
230 | ret = do_transfer_defer(bus, msg, defer); | ||
231 | if (ret != 0 && ret != -ENODATA) | ||
232 | dev_err(bus->dev, "Defer trf on Slave %d failed:%d\n", | ||
233 | msg->dev_num, ret); | ||
234 | |||
235 | if (msg->page) | ||
236 | sdw_reset_page(bus, msg->dev_num); | ||
237 | |||
238 | return ret; | ||
239 | } | ||
240 | |||
242 | int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, | ||
243 | u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf) | ||
244 | { | ||
245 | memset(msg, 0, sizeof(*msg)); | ||
246 | msg->addr = addr; /* addr is 16 bit and truncated here */ | ||
247 | msg->len = count; | ||
248 | msg->dev_num = dev_num; | ||
249 | msg->flags = flags; | ||
250 | msg->buf = buf; | ||
251 | msg->ssp_sync = false; | ||
252 | msg->page = false; | ||
253 | |||
254 | if (addr < SDW_REG_NO_PAGE) { /* no paging area */ | ||
255 | return 0; | ||
256 | } else if (addr >= SDW_REG_MAX) { /* illegal addr */ | ||
257 | pr_err("SDW: Invalid address %x passed\n", addr); | ||
258 | return -EINVAL; | ||
259 | } | ||
260 | |||
261 | if (addr < SDW_REG_OPTIONAL_PAGE) { /* 32k but no page */ | ||
262 | if (slave && !slave->prop.paging_support) | ||
263 | return 0; | ||
264 | /* no else needed: execution falls through to paging */ | ||
265 | } | ||
266 | |||
267 | /* paging mandatory */ | ||
268 | if (dev_num == SDW_ENUM_DEV_NUM || dev_num == SDW_BROADCAST_DEV_NUM) { | ||
269 | pr_err("SDW: Invalid device for paging :%d\n", dev_num); | ||
270 | return -EINVAL; | ||
271 | } | ||
272 | |||
273 | if (!slave) { | ||
274 | pr_err("SDW: No slave for paging addr\n"); | ||
275 | return -EINVAL; | ||
276 | } else if (!slave->prop.paging_support) { | ||
277 | dev_err(&slave->dev, | ||
278 | "address %x needs paging but no support", addr); | ||
279 | return -EINVAL; | ||
280 | } | ||
281 | |||
282 | msg->addr_page1 = (addr >> SDW_REG_SHIFT(SDW_SCP_ADDRPAGE1_MASK)); | ||
283 | msg->addr_page2 = (addr >> SDW_REG_SHIFT(SDW_SCP_ADDRPAGE2_MASK)); | ||
284 | msg->addr |= BIT(15); | ||
285 | msg->page = true; | ||
286 | |||
287 | return 0; | ||
288 | } | ||
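The tail of `sdw_fill_msg()` splits a 32-bit register address into two 8-bit SCP address-page registers plus the 16-bit in-page address carried by the command itself. A sketch of that split, under the assumption that `addr_page1` carries address bits [23:16] and `addr_page2` bits [31:24] (the exact `SDW_SCP_ADDRPAGE*` mask values are not shown here, so these offsets are an assumption):

```c
#include <assert.h>
#include <stdint.h>

/* Hedged sketch of the address split in sdw_fill_msg(): page1 is
 * assumed to hold bits [23:16], page2 bits [31:24]; the message
 * address field keeps the truncated low 16 bits. */
struct page_split {
	uint8_t addr_page1;
	uint8_t addr_page2;
	uint16_t addr;		/* in-page address sent in the command */
};

static struct page_split split_paged_addr(uint32_t addr)
{
	struct page_split s;

	s.addr_page1 = (addr >> 16) & 0xff;
	s.addr_page2 = (addr >> 24) & 0xff;
	s.addr = addr & 0xffff;	/* truncated, as in msg->addr */
	return s;
}
```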
289 | |||
290 | /** | ||
291 | * sdw_nread() - Read "n" contiguous SDW Slave registers | ||
292 | * @slave: SDW Slave | ||
293 | * @addr: Register address | ||
294 | * @count: length | ||
295 | * @val: Buffer for values to be read | ||
296 | */ | ||
297 | int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) | ||
298 | { | ||
299 | struct sdw_msg msg; | ||
300 | int ret; | ||
301 | |||
302 | ret = sdw_fill_msg(&msg, slave, addr, count, | ||
303 | slave->dev_num, SDW_MSG_FLAG_READ, val); | ||
304 | if (ret < 0) | ||
305 | return ret; | ||
306 | |||
307 | ret = pm_runtime_get_sync(slave->bus->dev); | ||
308 | if (ret < 0) | ||
309 | return ret; | ||
310 | |||
311 | ret = sdw_transfer(slave->bus, &msg); | ||
312 | pm_runtime_put(slave->bus->dev); | ||
313 | |||
314 | return ret; | ||
315 | } | ||
316 | EXPORT_SYMBOL(sdw_nread); | ||
317 | |||
318 | /** | ||
319 | * sdw_nwrite() - Write "n" contiguous SDW Slave registers | ||
320 | * @slave: SDW Slave | ||
321 | * @addr: Register address | ||
322 | * @count: length | ||
323 | * @val: Buffer of values to be written | ||
324 | */ | ||
325 | int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) | ||
326 | { | ||
327 | struct sdw_msg msg; | ||
328 | int ret; | ||
329 | |||
330 | ret = sdw_fill_msg(&msg, slave, addr, count, | ||
331 | slave->dev_num, SDW_MSG_FLAG_WRITE, val); | ||
332 | if (ret < 0) | ||
333 | return ret; | ||
334 | |||
335 | ret = pm_runtime_get_sync(slave->bus->dev); | ||
336 | if (ret < 0) | ||
337 | return ret; | ||
338 | |||
339 | ret = sdw_transfer(slave->bus, &msg); | ||
340 | pm_runtime_put(slave->bus->dev); | ||
341 | |||
342 | return ret; | ||
343 | } | ||
344 | EXPORT_SYMBOL(sdw_nwrite); | ||
345 | |||
346 | /** | ||
347 | * sdw_read() - Read a SDW Slave register | ||
348 | * @slave: SDW Slave | ||
349 | * @addr: Register address | ||
350 | */ | ||
351 | int sdw_read(struct sdw_slave *slave, u32 addr) | ||
352 | { | ||
353 | u8 buf; | ||
354 | int ret; | ||
355 | |||
356 | ret = sdw_nread(slave, addr, 1, &buf); | ||
357 | if (ret < 0) | ||
358 | return ret; | ||
359 | else | ||
360 | return buf; | ||
361 | } | ||
362 | EXPORT_SYMBOL(sdw_read); | ||
363 | |||
364 | /** | ||
365 | * sdw_write() - Write a SDW Slave register | ||
366 | * @slave: SDW Slave | ||
367 | * @addr: Register address | ||
368 | * @value: Register value | ||
369 | */ | ||
370 | int sdw_write(struct sdw_slave *slave, u32 addr, u8 value) | ||
371 | { | ||
372 | return sdw_nwrite(slave, addr, 1, &value); | ||
373 | |||
374 | } | ||
375 | EXPORT_SYMBOL(sdw_write); | ||
376 | |||
377 | /* | ||
378 | * SDW alert handling | ||
379 | */ | ||
380 | |||
381 | /* called with bus_lock held */ | ||
382 | static struct sdw_slave *sdw_get_slave(struct sdw_bus *bus, int i) | ||
383 | { | ||
384 | struct sdw_slave *slave = NULL; | ||
385 | |||
386 | list_for_each_entry(slave, &bus->slaves, node) { | ||
387 | if (slave->dev_num == i) | ||
388 | return slave; | ||
389 | } | ||
390 | |||
391 | return NULL; | ||
392 | } | ||
393 | |||
394 | static int sdw_compare_devid(struct sdw_slave *slave, struct sdw_slave_id id) | ||
395 | { | ||
396 | |||
397 | if ((slave->id.unique_id != id.unique_id) || | ||
398 | (slave->id.mfg_id != id.mfg_id) || | ||
399 | (slave->id.part_id != id.part_id) || | ||
400 | (slave->id.class_id != id.class_id)) | ||
401 | return -ENODEV; | ||
402 | |||
403 | return 0; | ||
404 | } | ||
405 | |||
406 | /* called with bus_lock held */ | ||
407 | static int sdw_get_device_num(struct sdw_slave *slave) | ||
408 | { | ||
409 | int bit; | ||
410 | |||
411 | bit = find_first_zero_bit(slave->bus->assigned, SDW_MAX_DEVICES); | ||
412 | if (bit == SDW_MAX_DEVICES) { | ||
413 | bit = -ENODEV; | ||
414 | goto err; | ||
415 | } | ||
416 | |||
417 | /* | ||
418 | * Do not update dev_num in the Slave data structure here; | ||
419 | * update it only after programming dev_num has succeeded | ||
420 | */ | ||
421 | set_bit(bit, slave->bus->assigned); | ||
422 | |||
423 | err: | ||
424 | return bit; | ||
425 | } | ||
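`sdw_get_device_num()` claims the first free device number from the bus's `assigned` bitmap using the kernel's `find_first_zero_bit()`/`set_bit()` helpers. A portable sketch of the same allocation, with a single `unsigned long` standing in for the kernel's `DECLARE_BITMAP()` array (the value 11 for the device bound mirrors `SDW_MAX_DEVICES` as an assumption):

```c
#include <assert.h>
#include <errno.h>

/* Portable sketch of the dev_num allocation: scan for the first zero
 * bit, claim it, or fail with -ENODEV when the bitmap is full. */
#define SKETCH_MAX_DEVICES 11

static int alloc_dev_num(unsigned long *assigned)
{
	int bit;

	for (bit = 0; bit < SKETCH_MAX_DEVICES; bit++) {
		if (!(*assigned & (1UL << bit))) {
			*assigned |= 1UL << bit;	/* claim it */
			return bit;
		}
	}
	return -ENODEV;	/* every device number is in use */
}
```

Note that, like the original, the bit is claimed immediately; the Slave's own `dev_num` field is only updated once programming succeeds.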
426 | |||
427 | static int sdw_assign_device_num(struct sdw_slave *slave) | ||
428 | { | ||
429 | int ret, dev_num; | ||
430 | |||
431 | /* check first if device number is assigned, if so reuse that */ | ||
432 | if (!slave->dev_num) { | ||
433 | mutex_lock(&slave->bus->bus_lock); | ||
434 | dev_num = sdw_get_device_num(slave); | ||
435 | mutex_unlock(&slave->bus->bus_lock); | ||
436 | if (dev_num < 0) { | ||
437 | dev_err(slave->bus->dev, "Get dev_num failed: %d", | ||
438 | dev_num); | ||
439 | return dev_num; | ||
440 | } | ||
441 | } else { | ||
442 | dev_info(slave->bus->dev, | ||
443 | "Slave already registered dev_num:%d", | ||
444 | slave->dev_num); | ||
445 | |||
446 | /* Clear the slave->dev_num to transfer message on device 0 */ | ||
447 | dev_num = slave->dev_num; | ||
448 | slave->dev_num = 0; | ||
449 | |||
450 | } | ||
451 | |||
452 | ret = sdw_write(slave, SDW_SCP_DEVNUMBER, dev_num); | ||
453 | if (ret < 0) { | ||
454 | dev_err(&slave->dev, "Program device_num failed: %d", ret); | ||
455 | return ret; | ||
456 | } | ||
457 | |||
458 | /* After xfer of msg, restore dev_num */ | ||
459 | slave->dev_num = dev_num; | ||
460 | |||
461 | return 0; | ||
462 | } | ||
463 | |||
464 | void sdw_extract_slave_id(struct sdw_bus *bus, | ||
465 | u64 addr, struct sdw_slave_id *id) | ||
466 | { | ||
467 | dev_dbg(bus->dev, "SDW Slave Addr: %llx", addr); | ||
468 | |||
469 | /* | ||
470 | * Spec definition | ||
471 | * Register Bit Contents | ||
472 | * DevId_0 [7:4] 47:44 sdw_version | ||
473 | * DevId_0 [3:0] 43:40 unique_id | ||
474 | * DevId_1 39:32 mfg_id [15:8] | ||
475 | * DevId_2 31:24 mfg_id [7:0] | ||
476 | * DevId_3 23:16 part_id [15:8] | ||
477 | * DevId_4 15:08 part_id [7:0] | ||
478 | * DevId_5 07:00 class_id | ||
479 | */ | ||
480 | id->sdw_version = (addr >> 44) & GENMASK(3, 0); | ||
481 | id->unique_id = (addr >> 40) & GENMASK(3, 0); | ||
482 | id->mfg_id = (addr >> 24) & GENMASK(15, 0); | ||
483 | id->part_id = (addr >> 8) & GENMASK(15, 0); | ||
484 | id->class_id = addr & GENMASK(7, 0); | ||
485 | |||
486 | dev_dbg(bus->dev, | ||
487 | "SDW Slave class_id %x, part_id %x, mfg_id %x, unique_id %x, version %x", | ||
488 | id->class_id, id->part_id, id->mfg_id, | ||
489 | id->unique_id, id->sdw_version); | ||
490 | |||
491 | } | ||
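The 48-bit DevId value unpacks per the bit layout given in the comment above (version 47:44, unique_id 43:40, mfg_id 39:24, part_id 23:8, class_id 7:0). A stand-alone sketch of the same extraction:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of sdw_extract_slave_id(): unpack the 48-bit DevId value
 * following the register layout documented in the driver comment. */
struct slave_id_sketch {
	uint8_t sdw_version;
	uint8_t unique_id;
	uint16_t mfg_id;
	uint16_t part_id;
	uint8_t class_id;
};

static struct slave_id_sketch extract_slave_id(uint64_t addr)
{
	struct slave_id_sketch id;

	id.sdw_version = (addr >> 44) & 0xf;	/* bits 47:44 */
	id.unique_id = (addr >> 40) & 0xf;	/* bits 43:40 */
	id.mfg_id = (addr >> 24) & 0xffff;	/* bits 39:24 */
	id.part_id = (addr >> 8) & 0xffff;	/* bits 23:8 */
	id.class_id = addr & 0xff;		/* bits 7:0 */
	return id;
}
```

The field values in the test below are made up for illustration.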
492 | |||
493 | static int sdw_program_device_num(struct sdw_bus *bus) | ||
494 | { | ||
495 | u8 buf[SDW_NUM_DEV_ID_REGISTERS] = {0}; | ||
496 | struct sdw_slave *slave, *_s; | ||
497 | struct sdw_slave_id id; | ||
498 | struct sdw_msg msg; | ||
499 | bool found = false; | ||
500 | int count = 0, ret; | ||
501 | u64 addr; | ||
502 | |||
503 | /* No Slave, so use raw xfer api */ | ||
504 | ret = sdw_fill_msg(&msg, NULL, SDW_SCP_DEVID_0, | ||
505 | SDW_NUM_DEV_ID_REGISTERS, 0, SDW_MSG_FLAG_READ, buf); | ||
506 | if (ret < 0) | ||
507 | return ret; | ||
508 | |||
509 | do { | ||
510 | ret = sdw_transfer(bus, &msg); | ||
511 | if (ret == -ENODATA) { /* end of device id reads */ | ||
512 | ret = 0; | ||
513 | break; | ||
514 | } | ||
515 | if (ret < 0) { | ||
516 | dev_err(bus->dev, "DEVID read fail:%d\n", ret); | ||
517 | break; | ||
518 | } | ||
519 | |||
520 | /* | ||
521 | * Construct the addr and extract the ID fields. Cast the | ||
522 | * operands of the higher shifts to u64 to avoid 32-bit truncation. | ||
523 | */ | ||
524 | addr = buf[5] | (buf[4] << 8) | (buf[3] << 16) | | ||
525 | ((u64)buf[2] << 24) | ((u64)buf[1] << 32) | | ||
526 | ((u64)buf[0] << 40); | ||
527 | |||
528 | sdw_extract_slave_id(bus, addr, &id); | ||
529 | |||
530 | /* Now compare with entries */ | ||
531 | list_for_each_entry_safe(slave, _s, &bus->slaves, node) { | ||
532 | if (sdw_compare_devid(slave, id) == 0) { | ||
533 | found = true; | ||
534 | |||
535 | /* | ||
536 | * Assign a new dev_num to this Slave but do not | ||
537 | * mark it present yet. It will be marked present | ||
538 | * only after it reports ATTACHED on the new | ||
539 | * dev_num | ||
540 | */ | ||
541 | ret = sdw_assign_device_num(slave); | ||
542 | if (ret) { | ||
543 | dev_err(slave->bus->dev, | ||
544 | "Assign dev_num failed:%d", | ||
545 | ret); | ||
546 | return ret; | ||
547 | } | ||
548 | |||
549 | break; | ||
550 | } | ||
551 | } | ||
552 | |||
553 | if (!found) { | ||
554 | /* TODO: Park this device in Group 13 */ | ||
555 | dev_err(bus->dev, "Slave Entry not found"); | ||
556 | } | ||
557 | |||
558 | count++; | ||
559 | |||
560 | /* | ||
561 | * Loop until an error occurs or the retry count | ||
562 | * is exhausted. A device can drop off and rejoin | ||
563 | * during enumeration, so iterate up to twice the bound. | ||
564 | */ | ||
565 | |||
566 | } while (ret == 0 && count < (SDW_MAX_DEVICES * 2)); | ||
567 | |||
568 | return ret; | ||
569 | } | ||
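The DevId assembly in the loop above builds a 48-bit value from six bytes, `buf[0]` being the most significant (DevId_0). The casts matter: without them, `buf[1] << 32` would shift a 32-bit `int` by its full width (undefined behavior), and `buf[2] << 24` could land in the sign bit. A sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the byte assembly in sdw_program_device_num(): shifts of
 * 24 bits and above are performed on 64-bit operands so no bits are
 * lost to int truncation. */
static uint64_t assemble_devid(const uint8_t buf[6])
{
	return (uint64_t)buf[5] | ((uint64_t)buf[4] << 8) |
	       ((uint64_t)buf[3] << 16) | ((uint64_t)buf[2] << 24) |
	       ((uint64_t)buf[1] << 32) | ((uint64_t)buf[0] << 40);
}
```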
570 | |||
571 | static void sdw_modify_slave_status(struct sdw_slave *slave, | ||
572 | enum sdw_slave_status status) | ||
573 | { | ||
574 | mutex_lock(&slave->bus->bus_lock); | ||
575 | slave->status = status; | ||
576 | mutex_unlock(&slave->bus->bus_lock); | ||
577 | } | ||
578 | |||
579 | static int sdw_initialize_slave(struct sdw_slave *slave) | ||
580 | { | ||
581 | struct sdw_slave_prop *prop = &slave->prop; | ||
582 | int ret; | ||
583 | u8 val; | ||
584 | |||
585 | /* | ||
586 | * Set bus clash, parity and SCP implementation | ||
587 | * defined interrupt mask | ||
588 | * TODO: Read implementation defined interrupt mask | ||
589 | * from Slave property | ||
590 | */ | ||
591 | val = SDW_SCP_INT1_IMPL_DEF | SDW_SCP_INT1_BUS_CLASH | | ||
592 | SDW_SCP_INT1_PARITY; | ||
593 | |||
594 | /* Enable SCP interrupts */ | ||
595 | ret = sdw_update(slave, SDW_SCP_INTMASK1, val, val); | ||
596 | if (ret < 0) { | ||
597 | dev_err(slave->bus->dev, | ||
598 | "SDW_SCP_INTMASK1 write failed:%d", ret); | ||
599 | return ret; | ||
600 | } | ||
601 | |||
602 | /* No need to continue if DP0 is not present */ | ||
603 | if (!slave->prop.dp0_prop) | ||
604 | return 0; | ||
605 | |||
606 | /* Enable DP0 interrupts */ | ||
607 | val = prop->dp0_prop->device_interrupts; | ||
608 | val |= SDW_DP0_INT_PORT_READY | SDW_DP0_INT_BRA_FAILURE; | ||
609 | |||
610 | ret = sdw_update(slave, SDW_DP0_INTMASK, val, val); | ||
611 | if (ret < 0) { | ||
612 | dev_err(slave->bus->dev, | ||
613 | "SDW_DP0_INTMASK write failed:%d", ret); | ||
614 | return ret; | ||
615 | } | ||
616 | |||
617 | return 0; | ||
618 | } | ||
619 | |||
620 | static int sdw_handle_dp0_interrupt(struct sdw_slave *slave, u8 *slave_status) | ||
621 | { | ||
622 | u8 clear = 0, impl_int_mask; | ||
623 | int status, status2, ret, count = 0; | ||
624 | |||
625 | status = sdw_read(slave, SDW_DP0_INT); | ||
626 | if (status < 0) { | ||
627 | dev_err(slave->bus->dev, | ||
628 | "SDW_DP0_INT read failed:%d", status); | ||
629 | return status; | ||
630 | } | ||
631 | |||
632 | do { | ||
633 | |||
634 | if (status & SDW_DP0_INT_TEST_FAIL) { | ||
635 | dev_err(&slave->dev, "Test fail for port 0"); | ||
636 | clear |= SDW_DP0_INT_TEST_FAIL; | ||
637 | } | ||
638 | |||
639 | /* | ||
640 | * Assumption: PORT_READY interrupt will be received only for | ||
641 | * ports implementing Channel Prepare state machine (CP_SM) | ||
642 | */ | ||
643 | |||
644 | if (status & SDW_DP0_INT_PORT_READY) { | ||
645 | complete(&slave->port_ready[0]); | ||
646 | clear |= SDW_DP0_INT_PORT_READY; | ||
647 | } | ||
648 | |||
649 | if (status & SDW_DP0_INT_BRA_FAILURE) { | ||
650 | dev_err(&slave->dev, "BRA failed"); | ||
651 | clear |= SDW_DP0_INT_BRA_FAILURE; | ||
652 | } | ||
653 | |||
654 | impl_int_mask = SDW_DP0_INT_IMPDEF1 | | ||
655 | SDW_DP0_INT_IMPDEF2 | SDW_DP0_INT_IMPDEF3; | ||
656 | |||
657 | if (status & impl_int_mask) { | ||
658 | clear |= impl_int_mask; | ||
659 | *slave_status = clear; | ||
660 | } | ||
661 | |||
662 | /* clear the interrupt */ | ||
663 | ret = sdw_write(slave, SDW_DP0_INT, clear); | ||
664 | if (ret < 0) { | ||
665 | dev_err(slave->bus->dev, | ||
666 | "SDW_DP0_INT write failed:%d", ret); | ||
667 | return ret; | ||
668 | } | ||
669 | |||
670 | /* Read DP0 interrupt again */ | ||
671 | status2 = sdw_read(slave, SDW_DP0_INT); | ||
672 | if (status2 < 0) { | ||
673 | dev_err(slave->bus->dev, | ||
674 | "SDW_DP0_INT read failed:%d", status2); | ||
675 | return status2; | ||
676 | } | ||
677 | status &= status2; | ||
678 | |||
679 | count++; | ||
680 | |||
681 | /* we can get alerts while processing so keep retrying */ | ||
682 | } while (status != 0 && count < SDW_READ_INTR_CLEAR_RETRY); | ||
683 | |||
684 | if (count == SDW_READ_INTR_CLEAR_RETRY) | ||
685 | dev_warn(slave->bus->dev, "Reached MAX_RETRY on DP0 read"); | ||
686 | |||
687 | return ret; | ||
688 | } | ||
689 | |||
690 | static int sdw_handle_port_interrupt(struct sdw_slave *slave, | ||
691 | int port, u8 *slave_status) | ||
692 | { | ||
693 | u8 clear = 0, impl_int_mask; | ||
694 | int status, status2, ret, count = 0; | ||
695 | u32 addr; | ||
696 | |||
697 | if (port == 0) | ||
698 | return sdw_handle_dp0_interrupt(slave, slave_status); | ||
699 | |||
700 | addr = SDW_DPN_INT(port); | ||
701 | status = sdw_read(slave, addr); | ||
702 | if (status < 0) { | ||
703 | dev_err(slave->bus->dev, | ||
704 | "SDW_DPN_INT read failed:%d", status); | ||
705 | |||
706 | return status; | ||
707 | } | ||
708 | |||
709 | do { | ||
710 | |||
711 | if (status & SDW_DPN_INT_TEST_FAIL) { | ||
712 | dev_err(&slave->dev, "Test fail for port:%d", port); | ||
713 | clear |= SDW_DPN_INT_TEST_FAIL; | ||
714 | } | ||
715 | |||
716 | /* | ||
717 | * Assumption: PORT_READY interrupt will be received only | ||
718 | * for ports implementing CP_SM. | ||
719 | */ | ||
720 | if (status & SDW_DPN_INT_PORT_READY) { | ||
721 | complete(&slave->port_ready[port]); | ||
722 | clear |= SDW_DPN_INT_PORT_READY; | ||
723 | } | ||
724 | |||
725 | impl_int_mask = SDW_DPN_INT_IMPDEF1 | | ||
726 | SDW_DPN_INT_IMPDEF2 | SDW_DPN_INT_IMPDEF3; | ||
727 | |||
729 | if (status & impl_int_mask) { | ||
730 | clear |= impl_int_mask; | ||
731 | *slave_status = clear; | ||
732 | } | ||
733 | |||
734 | /* clear the interrupt */ | ||
735 | ret = sdw_write(slave, addr, clear); | ||
736 | if (ret < 0) { | ||
737 | dev_err(slave->bus->dev, | ||
738 | "SDW_DPN_INT write failed:%d", ret); | ||
739 | return ret; | ||
740 | } | ||
741 | |||
742 | /* Read DPN interrupt again */ | ||
743 | status2 = sdw_read(slave, addr); | ||
744 | if (status2 < 0) { | ||
745 | dev_err(slave->bus->dev, | ||
746 | "SDW_DPN_INT read failed:%d", status2); | ||
747 | return status2; | ||
748 | } | ||
749 | status &= status2; | ||
750 | |||
751 | count++; | ||
752 | |||
753 | /* we can get alerts while processing so keep retrying */ | ||
754 | } while (status != 0 && count < SDW_READ_INTR_CLEAR_RETRY); | ||
755 | |||
756 | if (count == SDW_READ_INTR_CLEAR_RETRY) | ||
757 | dev_warn(slave->bus->dev, "Reached MAX_RETRY on port read"); | ||
758 | |||
759 | return ret; | ||
760 | } | ||
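Both port handlers use the same drain pattern: ack the interrupt bits just seen, re-read the status register, and loop only on bits that are still (or again) pending, bounded by `SDW_READ_INTR_CLEAR_RETRY`. The key step is `status &= status2`, which drops bits that were successfully cleared. A small simulation with a fake register that re-asserts its bit a fixed number of times (all names here are made up):

```c
#include <assert.h>

/* Sketch of the interrupt-drain loop shared by the DP0 and DPn
 * handlers. fake_irq_reg is a test double; RETRY_CAP mirrors
 * SDW_READ_INTR_CLEAR_RETRY as an assumption. */
#define RETRY_CAP 10

struct fake_irq_reg {
	int pending;		/* current status bits */
	int reassert_left;	/* times the device re-raises bit 0 */
};

static int reg_read(struct fake_irq_reg *r) { return r->pending; }

static void reg_clear(struct fake_irq_reg *r, int bits)
{
	r->pending &= ~bits;
	if (r->reassert_left > 0) {
		r->reassert_left--;
		r->pending |= 0x1;	/* device raises the bit again */
	}
}

/* Returns the number of loop iterations taken to drain the register */
static int drain_irq(struct fake_irq_reg *r)
{
	int status = reg_read(r), status2, count = 0;

	do {
		reg_clear(r, status);	/* ack what we saw */
		status2 = reg_read(r);	/* anything arrive meanwhile? */
		status &= status2;	/* keep only still-pending bits */
		count++;
	} while (status != 0 && count < RETRY_CAP);

	return count;
}
```

A register that re-asserts twice takes three passes to drain; a permanently stuck bit would stop at the retry cap, matching the `dev_warn()` path above.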
761 | |||
762 | static int sdw_handle_slave_alerts(struct sdw_slave *slave) | ||
763 | { | ||
764 | struct sdw_slave_intr_status slave_intr; | ||
765 | u8 clear = 0, bit, port_status[15]; | ||
766 | int port_num, stat, ret, count = 0; | ||
767 | unsigned long port; | ||
768 | bool slave_notify = false; | ||
769 | u8 buf, buf2[2], _buf, _buf2[2]; | ||
770 | |||
771 | sdw_modify_slave_status(slave, SDW_SLAVE_ALERT); | ||
772 | |||
773 | /* Read Instat 1, Instat 2 and Instat 3 registers */ | ||
774 | buf = ret = sdw_read(slave, SDW_SCP_INT1); | ||
775 | if (ret < 0) { | ||
776 | dev_err(slave->bus->dev, | ||
777 | "SDW_SCP_INT1 read failed:%d", ret); | ||
778 | return ret; | ||
779 | } | ||
780 | |||
781 | ret = sdw_nread(slave, SDW_SCP_INTSTAT2, 2, buf2); | ||
782 | if (ret < 0) { | ||
783 | dev_err(slave->bus->dev, | ||
784 | "SDW_SCP_INT2/3 read failed:%d", ret); | ||
785 | return ret; | ||
786 | } | ||
787 | |||
788 | do { | ||
789 | /* | ||
790 | * Check parity, bus clash and Slave (impl defined) | ||
791 | * interrupt | ||
792 | */ | ||
793 | if (buf & SDW_SCP_INT1_PARITY) { | ||
794 | dev_err(&slave->dev, "Parity error detected"); | ||
795 | clear |= SDW_SCP_INT1_PARITY; | ||
796 | } | ||
797 | |||
798 | if (buf & SDW_SCP_INT1_BUS_CLASH) { | ||
799 | dev_err(&slave->dev, "Bus clash error detected"); | ||
800 | clear |= SDW_SCP_INT1_BUS_CLASH; | ||
801 | } | ||
802 | |||
803 | /* | ||
804 | * Bus clash and parity errors are unlikely to be recoverable. | ||
805 | * TODO: In such a scenario, reset the bus. Make this | ||
806 | * configurable via a sysfs property, with bus reset | ||
807 | * being the default. | ||
808 | */ | ||
809 | |||
810 | if (buf & SDW_SCP_INT1_IMPL_DEF) { | ||
811 | dev_dbg(&slave->dev, "Slave impl defined interrupt\n"); | ||
812 | clear |= SDW_SCP_INT1_IMPL_DEF; | ||
813 | slave_notify = true; | ||
814 | } | ||
815 | |||
816 | /* Check port 0 - 3 interrupts */ | ||
817 | port = buf & SDW_SCP_INT1_PORT0_3; | ||
818 | |||
819 | /* To get port number corresponding to bits, shift it */ | ||
820 | port = port >> SDW_REG_SHIFT(SDW_SCP_INT1_PORT0_3); | ||
821 | for_each_set_bit(bit, &port, 8) { | ||
822 | sdw_handle_port_interrupt(slave, bit, | ||
823 | &port_status[bit]); | ||
824 | |||
825 | } | ||
826 | |||
827 | /* Check if cascade 2 interrupt is present */ | ||
828 | if (buf & SDW_SCP_INT1_SCP2_CASCADE) { | ||
829 | port = buf2[0] & SDW_SCP_INTSTAT2_PORT4_10; | ||
830 | for_each_set_bit(bit, &port, 8) { | ||
831 | /* scp2 ports start from 4 */ | ||
832 | port_num = bit + 3; | ||
833 | sdw_handle_port_interrupt(slave, | ||
834 | port_num, | ||
835 | &port_status[port_num]); | ||
836 | } | ||
837 | } | ||
838 | |||
839 | /* now check last cascade */ | ||
840 | if (buf2[0] & SDW_SCP_INTSTAT2_SCP3_CASCADE) { | ||
841 | port = buf2[1] & SDW_SCP_INTSTAT3_PORT11_14; | ||
842 | for_each_set_bit(bit, &port, 8) { | ||
843 | /* scp3 ports start from 11 */ | ||
844 | port_num = bit + 10; | ||
845 | sdw_handle_port_interrupt(slave, | ||
846 | port_num, | ||
847 | &port_status[port_num]); | ||
848 | } | ||
849 | } | ||
850 | |||
851 | /* Update the Slave driver */ | ||
852 | if (slave_notify && (slave->ops) && | ||
853 | (slave->ops->interrupt_callback)) { | ||
854 | slave_intr.control_port = clear; | ||
855 | memcpy(slave_intr.port, &port_status, | ||
856 | sizeof(slave_intr.port)); | ||
857 | |||
858 | slave->ops->interrupt_callback(slave, &slave_intr); | ||
859 | } | ||
860 | |||
861 | /* Ack interrupt */ | ||
862 | ret = sdw_write(slave, SDW_SCP_INT1, clear); | ||
863 | if (ret < 0) { | ||
864 | dev_err(slave->bus->dev, | ||
865 | "SDW_SCP_INT1 write failed:%d", ret); | ||
866 | return ret; | ||
867 | } | ||
868 | |||
869 | /* | ||
870 | * Read status again to ensure no new interrupts arrived | ||
871 | * while servicing interrupts. | ||
872 | */ | ||
873 | _buf = ret = sdw_read(slave, SDW_SCP_INT1); | ||
874 | if (ret < 0) { | ||
875 | dev_err(slave->bus->dev, | ||
876 | "SDW_SCP_INT1 read failed:%d", ret); | ||
877 | return ret; | ||
878 | } | ||
879 | |||
880 | ret = sdw_nread(slave, SDW_SCP_INTSTAT2, 2, _buf2); | ||
881 | if (ret < 0) { | ||
882 | dev_err(slave->bus->dev, | ||
883 | "SDW_SCP_INT2/3 read failed:%d", ret); | ||
884 | return ret; | ||
885 | } | ||
886 | |||
887 | /* Make sure no interrupts are pending */ | ||
888 | buf &= _buf; | ||
889 | buf2[0] &= _buf2[0]; | ||
890 | buf2[1] &= _buf2[1]; | ||
891 | stat = buf || buf2[0] || buf2[1]; | ||
892 | |||
893 | /* | ||
894 | * Exit loop if Slave is continuously in ALERT state even | ||
895 | * after servicing the interrupt multiple times. | ||
896 | */ | ||
897 | count++; | ||
898 | |||
899 | /* we can get alerts while processing so keep retrying */ | ||
900 | } while (stat != 0 && count < SDW_READ_INTR_CLEAR_RETRY); | ||
901 | |||
902 | if (count == SDW_READ_INTR_CLEAR_RETRY) | ||
903 | dev_warn(slave->bus->dev, "Reached MAX_RETRY on alert read"); | ||
904 | |||
905 | return ret; | ||
906 | } | ||
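The `SDW_REG_SHIFT()` idiom used above (and in `sdw_fill_msg()`) derives the shift for a register field from its mask, i.e. the position of the mask's lowest set bit, so each field extraction is written once per mask. A portable sketch of that helper and a masked-field getter:

```c
#include <assert.h>

/* Sketch of the SDW_REG_SHIFT() idiom: compute the shift from a
 * field mask (lowest set bit position), then extract the field. */
static int reg_shift(unsigned int mask)
{
	int shift = 0;

	while (mask && !(mask & 1)) {
		mask >>= 1;
		shift++;
	}
	return shift;
}

static unsigned int field_get(unsigned int reg, unsigned int mask)
{
	return (reg & mask) >> reg_shift(mask);
}
```

This is how the port-number field is pulled out of `SDW_SCP_INT1_PORT0_3` before iterating its bits.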
907 | |||
908 | static int sdw_update_slave_status(struct sdw_slave *slave, | ||
909 | enum sdw_slave_status status) | ||
910 | { | ||
911 | if ((slave->ops) && (slave->ops->update_status)) | ||
912 | return slave->ops->update_status(slave, status); | ||
913 | |||
914 | return 0; | ||
915 | } | ||
916 | |||
917 | /** | ||
918 | * sdw_handle_slave_status() - Handle Slave status | ||
919 | * @bus: SDW bus instance | ||
920 | * @status: Status for all Slave(s) | ||
921 | */ | ||
922 | int sdw_handle_slave_status(struct sdw_bus *bus, | ||
923 | enum sdw_slave_status status[]) | ||
924 | { | ||
925 | enum sdw_slave_status prev_status; | ||
926 | struct sdw_slave *slave; | ||
927 | int i, ret = 0; | ||
928 | |||
929 | if (status[0] == SDW_SLAVE_ATTACHED) { | ||
930 | ret = sdw_program_device_num(bus); | ||
931 | if (ret) | ||
932 | dev_err(bus->dev, "Slave attach failed: %d", ret); | ||
933 | } | ||
934 | |||
935 | /* Continue to check other slave statuses */ | ||
936 | for (i = 1; i <= SDW_MAX_DEVICES; i++) { | ||
937 | mutex_lock(&bus->bus_lock); | ||
938 | if (!test_bit(i, bus->assigned)) { | ||
939 | mutex_unlock(&bus->bus_lock); | ||
940 | continue; | ||
941 | } | ||
942 | mutex_unlock(&bus->bus_lock); | ||
943 | |||
944 | slave = sdw_get_slave(bus, i); | ||
945 | if (!slave) | ||
946 | continue; | ||
947 | |||
948 | switch (status[i]) { | ||
949 | case SDW_SLAVE_UNATTACHED: | ||
950 | if (slave->status == SDW_SLAVE_UNATTACHED) | ||
951 | break; | ||
952 | |||
953 | sdw_modify_slave_status(slave, SDW_SLAVE_UNATTACHED); | ||
954 | break; | ||
955 | |||
956 | case SDW_SLAVE_ALERT: | ||
957 | ret = sdw_handle_slave_alerts(slave); | ||
958 | if (ret) | ||
959 | dev_err(bus->dev, | ||
960 | "Slave %d alert handling failed: %d", | ||
961 | i, ret); | ||
962 | break; | ||
963 | |||
964 | case SDW_SLAVE_ATTACHED: | ||
965 | if (slave->status == SDW_SLAVE_ATTACHED) | ||
966 | break; | ||
967 | |||
968 | prev_status = slave->status; | ||
969 | sdw_modify_slave_status(slave, SDW_SLAVE_ATTACHED); | ||
970 | |||
971 | if (prev_status == SDW_SLAVE_ALERT) | ||
972 | break; | ||
973 | |||
974 | ret = sdw_initialize_slave(slave); | ||
975 | if (ret) | ||
976 | dev_err(bus->dev, | ||
977 | "Slave %d initialization failed: %d", | ||
978 | i, ret); | ||
979 | |||
980 | break; | ||
981 | |||
982 | default: | ||
983 | dev_err(bus->dev, "Invalid slave %d status:%d", | ||
984 | i, status[i]); | ||
985 | break; | ||
986 | } | ||
987 | |||
988 | ret = sdw_update_slave_status(slave, status[i]); | ||
989 | if (ret) | ||
990 | dev_err(slave->bus->dev, | ||
991 | "Update Slave status failed:%d", ret); | ||
992 | |||
993 | } | ||
994 | |||
995 | return ret; | ||
996 | } | ||
997 | EXPORT_SYMBOL(sdw_handle_slave_status); | ||
diff --git a/drivers/soundwire/bus.h b/drivers/soundwire/bus.h new file mode 100644 index 000000000000..345c34d697e9 --- /dev/null +++ b/drivers/soundwire/bus.h | |||
@@ -0,0 +1,71 @@ | |||
1 | // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #ifndef __SDW_BUS_H | ||
5 | #define __SDW_BUS_H | ||
6 | |||
7 | #if IS_ENABLED(CONFIG_ACPI) | ||
8 | int sdw_acpi_find_slaves(struct sdw_bus *bus); | ||
9 | #else | ||
10 | static inline int sdw_acpi_find_slaves(struct sdw_bus *bus) | ||
11 | { | ||
12 | return -ENOTSUPP; | ||
13 | } | ||
14 | #endif | ||
15 | |||
16 | void sdw_extract_slave_id(struct sdw_bus *bus, | ||
17 | u64 addr, struct sdw_slave_id *id); | ||
18 | |||
19 | enum { | ||
20 | SDW_MSG_FLAG_READ = 0, | ||
21 | SDW_MSG_FLAG_WRITE, | ||
22 | }; | ||
23 | |||
24 | /** | ||
25 | * struct sdw_msg - Message structure | ||
26 | * @addr: Register address accessed in the Slave | ||
27 | * @len: number of bytes to be read or written | ||
28 | * @dev_num: Slave device number | ||
29 | * @addr_page1: SCP address page 1 Slave register | ||
30 | * @addr_page2: SCP address page 2 Slave register | ||
31 | * @flags: transfer flags, indicate if xfer is read or write | ||
32 | * @buf: message data buffer | ||
33 | * @ssp_sync: Send message at SSP (Stream Synchronization Point) | ||
34 | * @page: address requires paging | ||
35 | */ | ||
36 | struct sdw_msg { | ||
37 | u16 addr; | ||
38 | u16 len; | ||
39 | u8 dev_num; | ||
40 | u8 addr_page1; | ||
41 | u8 addr_page2; | ||
42 | u8 flags; | ||
43 | u8 *buf; | ||
44 | bool ssp_sync; | ||
45 | bool page; | ||
46 | }; | ||
47 | |||
48 | int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg); | ||
49 | int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, | ||
50 | struct sdw_defer *defer); | ||
51 | |||
52 | #define SDW_READ_INTR_CLEAR_RETRY 10 | ||
53 | |||
54 | int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, | ||
55 | u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf); | ||
56 | |||
57 | /* Read-Modify-Write Slave register */ | ||
58 | static inline int | ||
59 | sdw_update(struct sdw_slave *slave, u32 addr, u8 mask, u8 val) | ||
60 | { | ||
61 | int tmp; | ||
62 | |||
63 | tmp = sdw_read(slave, addr); | ||
64 | if (tmp < 0) | ||
65 | return tmp; | ||
66 | |||
67 | tmp = (tmp & ~mask) | val; | ||
68 | return sdw_write(slave, addr, tmp); | ||
69 | } | ||
70 | |||
71 | #endif /* __SDW_BUS_H */ | ||
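`sdw_update()` is a classic read-modify-write: only the bits covered by `mask` change, everything else in the register is preserved, and a read error short-circuits the write. A stand-alone sketch with a plain variable standing in for the Slave register:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of sdw_update(): read, mask, merge, write back. fake_reg
 * and its accessors are test doubles for sdw_read()/sdw_write(). */
static uint8_t fake_reg;

static int fake_read(void) { return fake_reg; }
static int fake_write(uint8_t val) { fake_reg = val; return 0; }

static int update_sketch(uint8_t mask, uint8_t val)
{
	int tmp = fake_read();

	if (tmp < 0)
		return tmp;	/* propagate read errors, as sdw_update does */

	tmp = (tmp & ~mask) | val;	/* touch only the masked bits */
	return fake_write((uint8_t)tmp);
}
```

This is what `sdw_initialize_slave()` relies on when enabling SCP and DP0 interrupt bits without disturbing neighboring mask bits.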
diff --git a/drivers/soundwire/bus_type.c b/drivers/soundwire/bus_type.c new file mode 100644 index 000000000000..d5f3a70c06b0 --- /dev/null +++ b/drivers/soundwire/bus_type.c | |||
@@ -0,0 +1,193 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #include <linux/module.h> | ||
5 | #include <linux/mod_devicetable.h> | ||
6 | #include <linux/pm_domain.h> | ||
7 | #include <linux/soundwire/sdw.h> | ||
8 | #include <linux/soundwire/sdw_type.h> | ||
9 | |||
10 | /** | ||
11 | * sdw_get_device_id - find the matching SoundWire device id | ||
12 | * @slave: SoundWire Slave Device | ||
13 | * @drv: SoundWire Slave Driver | ||
14 | * | ||
15 | * The match is done by comparing the mfg_id and part_id from the | ||
16 | * struct sdw_device_id. | ||
17 | */ | ||
18 | static const struct sdw_device_id * | ||
19 | sdw_get_device_id(struct sdw_slave *slave, struct sdw_driver *drv) | ||
20 | { | ||
21 | const struct sdw_device_id *id = drv->id_table; | ||
22 | |||
23 | while (id && id->mfg_id) { | ||
24 | if (slave->id.mfg_id == id->mfg_id && | ||
25 | slave->id.part_id == id->part_id) | ||
26 | return id; | ||
27 | id++; | ||
28 | } | ||
29 | |||
30 | return NULL; | ||
31 | } | ||
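The id-table walk above relies on the table being terminated by an entry whose `mfg_id` is zero, and a Slave binds when both `mfg_id` and `part_id` agree. A sketch with made-up table contents:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the sdw_get_device_id() walk: sentinel-terminated table,
 * match on mfg_id + part_id. The id values are hypothetical. */
struct id_sketch {
	unsigned int mfg_id;
	unsigned int part_id;
};

static const struct id_sketch *
match_id(const struct id_sketch *table, unsigned int mfg, unsigned int part)
{
	const struct id_sketch *id = table;

	while (id && id->mfg_id) {	/* mfg_id == 0 terminates */
		if (id->mfg_id == mfg && id->part_id == part)
			return id;
		id++;
	}
	return NULL;	/* no entry matched: driver does not bind */
}
```

`sdw_bus_match()` then only needs the truthiness of this result.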
32 | |||
33 | static int sdw_bus_match(struct device *dev, struct device_driver *ddrv) | ||
34 | { | ||
35 | struct sdw_slave *slave = dev_to_sdw_dev(dev); | ||
36 | struct sdw_driver *drv = drv_to_sdw_driver(ddrv); | ||
37 | |||
38 | return !!sdw_get_device_id(slave, drv); | ||
39 | } | ||
40 | |||
41 | int sdw_slave_modalias(const struct sdw_slave *slave, char *buf, size_t size) | ||
42 | { | ||
43 | /* modalias is sdw:m<mfg_id>p<part_id> */ | ||
44 | |||
45 | return snprintf(buf, size, "sdw:m%04Xp%04X\n", | ||
46 | slave->id.mfg_id, slave->id.part_id); | ||
47 | } | ||
48 | |||
49 | static int sdw_uevent(struct device *dev, struct kobj_uevent_env *env) | ||
50 | { | ||
51 | struct sdw_slave *slave = dev_to_sdw_dev(dev); | ||
52 | char modalias[32]; | ||
53 | |||
54 | sdw_slave_modalias(slave, modalias, sizeof(modalias)); | ||
55 | |||
56 | if (add_uevent_var(env, "MODALIAS=%s", modalias)) | ||
57 | return -ENOMEM; | ||
58 | |||
59 | return 0; | ||
60 | } | ||
61 | |||
62 | struct bus_type sdw_bus_type = { | ||
63 | .name = "soundwire", | ||
64 | .match = sdw_bus_match, | ||
65 | .uevent = sdw_uevent, | ||
66 | }; | ||
67 | EXPORT_SYMBOL_GPL(sdw_bus_type); | ||
68 | |||
69 | static int sdw_drv_probe(struct device *dev) | ||
70 | { | ||
71 | struct sdw_slave *slave = dev_to_sdw_dev(dev); | ||
72 | struct sdw_driver *drv = drv_to_sdw_driver(dev->driver); | ||
73 | const struct sdw_device_id *id; | ||
74 | int ret; | ||
75 | |||
76 | id = sdw_get_device_id(slave, drv); | ||
77 | if (!id) | ||
78 | return -ENODEV; | ||
79 | |||
80 | slave->ops = drv->ops; | ||
81 | |||
82 | /* | ||
83 | * attach to power domain but don't turn on (last arg) | ||
84 | */ | ||
85 | ret = dev_pm_domain_attach(dev, false); | ||
86 | if (ret != -EPROBE_DEFER) { | ||
87 | ret = drv->probe(slave, id); | ||
88 | if (ret) { | ||
89 | dev_err(dev, "Probe of %s failed: %d\n", drv->name, ret); | ||
90 | dev_pm_domain_detach(dev, false); | ||
91 | } | ||
92 | } | ||
93 | |||
94 | if (ret) | ||
95 | return ret; | ||
96 | |||
97 | /* device is probed so let's read the properties now */ | ||
98 | if (slave->ops && slave->ops->read_prop) | ||
99 | slave->ops->read_prop(slave); | ||
100 | |||
101 | /* | ||
102 | * Check for valid clk_stop_timeout, use DisCo worst case value of | ||
103 | * 300ms | ||
104 | * | ||
105 | * TODO: check the timeouts and driver removal case | ||
106 | */ | ||
107 | if (slave->prop.clk_stop_timeout == 0) | ||
108 | slave->prop.clk_stop_timeout = 300; | ||
109 | |||
110 | slave->bus->clk_stop_timeout = max_t(u32, slave->bus->clk_stop_timeout, | ||
111 | slave->prop.clk_stop_timeout); | ||
112 | |||
113 | return 0; | ||
114 | } | ||
115 | |||
116 | static int sdw_drv_remove(struct device *dev) | ||
117 | { | ||
118 | struct sdw_slave *slave = dev_to_sdw_dev(dev); | ||
119 | struct sdw_driver *drv = drv_to_sdw_driver(dev->driver); | ||
120 | int ret = 0; | ||
121 | |||
122 | if (drv->remove) | ||
123 | ret = drv->remove(slave); | ||
124 | |||
125 | dev_pm_domain_detach(dev, false); | ||
126 | |||
127 | return ret; | ||
128 | } | ||
129 | |||
130 | static void sdw_drv_shutdown(struct device *dev) | ||
131 | { | ||
132 | struct sdw_slave *slave = dev_to_sdw_dev(dev); | ||
133 | struct sdw_driver *drv = drv_to_sdw_driver(dev->driver); | ||
134 | |||
135 | if (drv->shutdown) | ||
136 | drv->shutdown(slave); | ||
137 | } | ||
138 | |||
139 | /** | ||
140 | * __sdw_register_driver() - register a SoundWire Slave driver | ||
141 | * @drv: driver to register | ||
142 | * @owner: owning module/driver | ||
143 | * | ||
144 | * Return: zero on success, else a negative error code. | ||
145 | */ | ||
146 | int __sdw_register_driver(struct sdw_driver *drv, struct module *owner) | ||
147 | { | ||
148 | drv->driver.bus = &sdw_bus_type; | ||
149 | |||
150 | if (!drv->probe) { | ||
151 | pr_err("driver %s didn't provide SDW probe routine\n", | ||
152 | drv->name); | ||
153 | return -EINVAL; | ||
154 | } | ||
155 | |||
156 | drv->driver.owner = owner; | ||
157 | drv->driver.probe = sdw_drv_probe; | ||
158 | |||
159 | if (drv->remove) | ||
160 | drv->driver.remove = sdw_drv_remove; | ||
161 | |||
162 | if (drv->shutdown) | ||
163 | drv->driver.shutdown = sdw_drv_shutdown; | ||
164 | |||
165 | return driver_register(&drv->driver); | ||
166 | } | ||
167 | EXPORT_SYMBOL_GPL(__sdw_register_driver); | ||
168 | |||
169 | /** | ||
170 | * sdw_unregister_driver() - unregister a SoundWire Slave driver | ||
171 | * @drv: driver to unregister | ||
172 | */ | ||
173 | void sdw_unregister_driver(struct sdw_driver *drv) | ||
174 | { | ||
175 | driver_unregister(&drv->driver); | ||
176 | } | ||
177 | EXPORT_SYMBOL_GPL(sdw_unregister_driver); | ||
178 | |||
179 | static int __init sdw_bus_init(void) | ||
180 | { | ||
181 | return bus_register(&sdw_bus_type); | ||
182 | } | ||
183 | |||
184 | static void __exit sdw_bus_exit(void) | ||
185 | { | ||
186 | bus_unregister(&sdw_bus_type); | ||
187 | } | ||
188 | |||
189 | postcore_initcall(sdw_bus_init); | ||
190 | module_exit(sdw_bus_exit); | ||
191 | |||
192 | MODULE_DESCRIPTION("SoundWire bus"); | ||
193 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c new file mode 100644 index 000000000000..3a9b1462039b --- /dev/null +++ b/drivers/soundwire/cadence_master.c | |||
@@ -0,0 +1,751 @@ | |||
1 | // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | /* | ||
5 | * Cadence SoundWire Master module | ||
6 | * Used by Master driver | ||
7 | */ | ||
8 | |||
9 | #include <linux/delay.h> | ||
10 | #include <linux/device.h> | ||
11 | #include <linux/interrupt.h> | ||
12 | #include <linux/module.h> | ||
13 | #include <linux/mod_devicetable.h> | ||
14 | #include <linux/soundwire/sdw_registers.h> | ||
15 | #include <linux/soundwire/sdw.h> | ||
16 | #include "bus.h" | ||
17 | #include "cadence_master.h" | ||
18 | |||
19 | #define CDNS_MCP_CONFIG 0x0 | ||
20 | |||
21 | #define CDNS_MCP_CONFIG_MCMD_RETRY GENMASK(27, 24) | ||
22 | #define CDNS_MCP_CONFIG_MPREQ_DELAY GENMASK(20, 16) | ||
23 | #define CDNS_MCP_CONFIG_MMASTER BIT(7) | ||
24 | #define CDNS_MCP_CONFIG_BUS_REL BIT(6) | ||
25 | #define CDNS_MCP_CONFIG_SNIFFER BIT(5) | ||
26 | #define CDNS_MCP_CONFIG_SSPMOD BIT(4) | ||
27 | #define CDNS_MCP_CONFIG_CMD BIT(3) | ||
28 | #define CDNS_MCP_CONFIG_OP GENMASK(2, 0) | ||
29 | #define CDNS_MCP_CONFIG_OP_NORMAL 0 | ||
30 | |||
31 | #define CDNS_MCP_CONTROL 0x4 | ||
32 | |||
33 | #define CDNS_MCP_CONTROL_RST_DELAY GENMASK(10, 8) | ||
34 | #define CDNS_MCP_CONTROL_CMD_RST BIT(7) | ||
35 | #define CDNS_MCP_CONTROL_SOFT_RST BIT(6) | ||
36 | #define CDNS_MCP_CONTROL_SW_RST BIT(5) | ||
37 | #define CDNS_MCP_CONTROL_HW_RST BIT(4) | ||
38 | #define CDNS_MCP_CONTROL_CLK_PAUSE BIT(3) | ||
39 | #define CDNS_MCP_CONTROL_CLK_STOP_CLR BIT(2) | ||
40 | #define CDNS_MCP_CONTROL_CMD_ACCEPT BIT(1) | ||
41 | #define CDNS_MCP_CONTROL_BLOCK_WAKEUP BIT(0) | ||
42 | |||
43 | |||
44 | #define CDNS_MCP_CMDCTRL 0x8 | ||
45 | #define CDNS_MCP_SSPSTAT 0xC | ||
46 | #define CDNS_MCP_FRAME_SHAPE 0x10 | ||
47 | #define CDNS_MCP_FRAME_SHAPE_INIT 0x14 | ||
48 | |||
49 | #define CDNS_MCP_CONFIG_UPDATE 0x18 | ||
50 | #define CDNS_MCP_CONFIG_UPDATE_BIT BIT(0) | ||
51 | |||
52 | #define CDNS_MCP_PHYCTRL 0x1C | ||
53 | #define CDNS_MCP_SSP_CTRL0 0x20 | ||
54 | #define CDNS_MCP_SSP_CTRL1 0x28 | ||
55 | #define CDNS_MCP_CLK_CTRL0 0x30 | ||
56 | #define CDNS_MCP_CLK_CTRL1 0x38 | ||
57 | |||
58 | #define CDNS_MCP_STAT 0x40 | ||
59 | |||
60 | #define CDNS_MCP_STAT_ACTIVE_BANK BIT(20) | ||
61 | #define CDNS_MCP_STAT_CLK_STOP BIT(16) | ||
62 | |||
63 | #define CDNS_MCP_INTSTAT 0x44 | ||
64 | #define CDNS_MCP_INTMASK 0x48 | ||
65 | |||
66 | #define CDNS_MCP_INT_IRQ BIT(31) | ||
67 | #define CDNS_MCP_INT_WAKEUP BIT(16) | ||
68 | #define CDNS_MCP_INT_SLAVE_RSVD BIT(15) | ||
69 | #define CDNS_MCP_INT_SLAVE_ALERT BIT(14) | ||
70 | #define CDNS_MCP_INT_SLAVE_ATTACH BIT(13) | ||
71 | #define CDNS_MCP_INT_SLAVE_NATTACH BIT(12) | ||
72 | #define CDNS_MCP_INT_SLAVE_MASK GENMASK(15, 12) | ||
73 | #define CDNS_MCP_INT_DPINT BIT(11) | ||
74 | #define CDNS_MCP_INT_CTRL_CLASH BIT(10) | ||
75 | #define CDNS_MCP_INT_DATA_CLASH BIT(9) | ||
76 | #define CDNS_MCP_INT_CMD_ERR BIT(7) | ||
77 | #define CDNS_MCP_INT_RX_WL BIT(2) | ||
78 | #define CDNS_MCP_INT_TXE BIT(1) | ||
79 | |||
80 | #define CDNS_MCP_INTSET 0x4C | ||
81 | |||
82 | #define CDNS_SDW_SLAVE_STAT 0x50 | ||
83 | #define CDNS_MCP_SLAVE_STAT_MASK GENMASK(1, 0) | ||
84 | |||
85 | #define CDNS_MCP_SLAVE_INTSTAT0 0x54 | ||
86 | #define CDNS_MCP_SLAVE_INTSTAT1 0x58 | ||
87 | #define CDNS_MCP_SLAVE_INTSTAT_NPRESENT BIT(0) | ||
88 | #define CDNS_MCP_SLAVE_INTSTAT_ATTACHED BIT(1) | ||
89 | #define CDNS_MCP_SLAVE_INTSTAT_ALERT BIT(2) | ||
90 | #define CDNS_MCP_SLAVE_INTSTAT_RESERVED BIT(3) | ||
91 | #define CDNS_MCP_SLAVE_STATUS_BITS GENMASK(3, 0) | ||
92 | #define CDNS_MCP_SLAVE_STATUS_NUM 4 | ||
93 | |||
94 | #define CDNS_MCP_SLAVE_INTMASK0 0x5C | ||
95 | #define CDNS_MCP_SLAVE_INTMASK1 0x60 | ||
96 | |||
97 | #define CDNS_MCP_SLAVE_INTMASK0_MASK GENMASK(30, 0) | ||
98 | #define CDNS_MCP_SLAVE_INTMASK1_MASK GENMASK(16, 0) | ||
99 | |||
100 | #define CDNS_MCP_PORT_INTSTAT 0x64 | ||
101 | #define CDNS_MCP_PDI_STAT 0x6C | ||
102 | |||
103 | #define CDNS_MCP_FIFOLEVEL 0x78 | ||
104 | #define CDNS_MCP_FIFOSTAT 0x7C | ||
105 | #define CDNS_MCP_RX_FIFO_AVAIL GENMASK(5, 0) | ||
106 | |||
107 | #define CDNS_MCP_CMD_BASE 0x80 | ||
108 | #define CDNS_MCP_RESP_BASE 0x80 | ||
109 | #define CDNS_MCP_CMD_LEN 0x20 | ||
110 | #define CDNS_MCP_CMD_WORD_LEN 0x4 | ||
111 | |||
112 | #define CDNS_MCP_CMD_SSP_TAG BIT(31) | ||
113 | #define CDNS_MCP_CMD_COMMAND GENMASK(30, 28) | ||
114 | #define CDNS_MCP_CMD_DEV_ADDR GENMASK(27, 24) | ||
115 | #define CDNS_MCP_CMD_REG_ADDR_H GENMASK(23, 16) | ||
116 | #define CDNS_MCP_CMD_REG_ADDR_L GENMASK(15, 8) | ||
117 | #define CDNS_MCP_CMD_REG_DATA GENMASK(7, 0) | ||
118 | |||
119 | #define CDNS_MCP_CMD_READ 2 | ||
120 | #define CDNS_MCP_CMD_WRITE 3 | ||
121 | |||
122 | #define CDNS_MCP_RESP_RDATA GENMASK(15, 8) | ||
123 | #define CDNS_MCP_RESP_ACK BIT(0) | ||
124 | #define CDNS_MCP_RESP_NACK BIT(1) | ||
125 | |||
126 | #define CDNS_DP_SIZE 128 | ||
127 | |||
128 | #define CDNS_DPN_B0_CONFIG(n) (0x100 + CDNS_DP_SIZE * (n)) | ||
129 | #define CDNS_DPN_B0_CH_EN(n) (0x104 + CDNS_DP_SIZE * (n)) | ||
130 | #define CDNS_DPN_B0_SAMPLE_CTRL(n) (0x108 + CDNS_DP_SIZE * (n)) | ||
131 | #define CDNS_DPN_B0_OFFSET_CTRL(n) (0x10C + CDNS_DP_SIZE * (n)) | ||
132 | #define CDNS_DPN_B0_HCTRL(n) (0x110 + CDNS_DP_SIZE * (n)) | ||
133 | #define CDNS_DPN_B0_ASYNC_CTRL(n) (0x114 + CDNS_DP_SIZE * (n)) | ||
134 | |||
135 | #define CDNS_DPN_B1_CONFIG(n) (0x118 + CDNS_DP_SIZE * (n)) | ||
136 | #define CDNS_DPN_B1_CH_EN(n) (0x11C + CDNS_DP_SIZE * (n)) | ||
137 | #define CDNS_DPN_B1_SAMPLE_CTRL(n) (0x120 + CDNS_DP_SIZE * (n)) | ||
138 | #define CDNS_DPN_B1_OFFSET_CTRL(n) (0x124 + CDNS_DP_SIZE * (n)) | ||
139 | #define CDNS_DPN_B1_HCTRL(n) (0x128 + CDNS_DP_SIZE * (n)) | ||
140 | #define CDNS_DPN_B1_ASYNC_CTRL(n) (0x12C + CDNS_DP_SIZE * (n)) | ||
141 | |||
142 | #define CDNS_DPN_CONFIG_BPM BIT(18) | ||
143 | #define CDNS_DPN_CONFIG_BGC GENMASK(17, 16) | ||
144 | #define CDNS_DPN_CONFIG_WL GENMASK(12, 8) | ||
145 | #define CDNS_DPN_CONFIG_PORT_DAT GENMASK(3, 2) | ||
146 | #define CDNS_DPN_CONFIG_PORT_FLOW GENMASK(1, 0) | ||
147 | |||
148 | #define CDNS_DPN_SAMPLE_CTRL_SI GENMASK(15, 0) | ||
149 | |||
150 | #define CDNS_DPN_OFFSET_CTRL_1 GENMASK(7, 0) | ||
151 | #define CDNS_DPN_OFFSET_CTRL_2 GENMASK(15, 8) | ||
152 | |||
153 | #define CDNS_DPN_HCTRL_HSTOP GENMASK(3, 0) | ||
154 | #define CDNS_DPN_HCTRL_HSTART GENMASK(7, 4) | ||
155 | #define CDNS_DPN_HCTRL_LCTRL GENMASK(10, 8) | ||
156 | |||
157 | #define CDNS_PORTCTRL 0x130 | ||
158 | #define CDNS_PORTCTRL_DIRN BIT(7) | ||
159 | #define CDNS_PORTCTRL_BANK_INVERT BIT(8) | ||
160 | |||
161 | #define CDNS_PORT_OFFSET 0x80 | ||
162 | |||
163 | #define CDNS_PDI_CONFIG(n) (0x1100 + (n) * 16) | ||
164 | |||
165 | #define CDNS_PDI_CONFIG_SOFT_RESET BIT(24) | ||
166 | #define CDNS_PDI_CONFIG_CHANNEL GENMASK(15, 8) | ||
167 | #define CDNS_PDI_CONFIG_PORT GENMASK(4, 0) | ||
168 | |||
169 | /* Driver defaults */ | ||
170 | |||
171 | #define CDNS_DEFAULT_CLK_DIVIDER 0 | ||
172 | #define CDNS_DEFAULT_FRAME_SHAPE 0x30 | ||
173 | #define CDNS_DEFAULT_SSP_INTERVAL 0x18 | ||
174 | #define CDNS_TX_TIMEOUT 2000 | ||
175 | |||
176 | #define CDNS_PCM_PDI_OFFSET 0x2 | ||
177 | #define CDNS_PDM_PDI_OFFSET 0x6 | ||
178 | |||
179 | #define CDNS_SCP_RX_FIFOLEVEL 0x2 | ||
180 | |||
181 | /* | ||
182 | * register accessor helpers | ||
183 | */ | ||
184 | static inline u32 cdns_readl(struct sdw_cdns *cdns, int offset) | ||
185 | { | ||
186 | return readl(cdns->registers + offset); | ||
187 | } | ||
188 | |||
189 | static inline void cdns_writel(struct sdw_cdns *cdns, int offset, u32 value) | ||
190 | { | ||
191 | writel(value, cdns->registers + offset); | ||
192 | } | ||
193 | |||
194 | static inline void cdns_updatel(struct sdw_cdns *cdns, | ||
195 | int offset, u32 mask, u32 val) | ||
196 | { | ||
197 | u32 tmp; | ||
198 | |||
199 | tmp = cdns_readl(cdns, offset); | ||
200 | tmp = (tmp & ~mask) | val; | ||
201 | cdns_writel(cdns, offset, tmp); | ||
202 | } | ||
203 | |||
204 | static int cdns_clear_bit(struct sdw_cdns *cdns, int offset, u32 value) | ||
205 | { | ||
206 | int timeout = 10; | ||
207 | u32 reg_read; | ||
208 | |||
209 | writel(value, cdns->registers + offset); | ||
210 | |||
211 | /* Wait for bit to be self cleared */ | ||
212 | do { | ||
213 | reg_read = readl(cdns->registers + offset); | ||
214 | if ((reg_read & value) == 0) | ||
215 | return 0; | ||
216 | |||
217 | timeout--; | ||
218 | udelay(50); | ||
219 | } while (timeout != 0); | ||
220 | |||
221 | return -EAGAIN; | ||
222 | } | ||
223 | |||
224 | /* | ||
225 | * IO Calls | ||
226 | */ | ||
227 | static enum sdw_command_response cdns_fill_msg_resp( | ||
228 | struct sdw_cdns *cdns, | ||
229 | struct sdw_msg *msg, int count, int offset) | ||
230 | { | ||
231 | int nack = 0, no_ack = 0; | ||
232 | int i; | ||
233 | |||
234 | /* check message response */ | ||
235 | for (i = 0; i < count; i++) { | ||
236 | if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) { | ||
237 | no_ack = 1; | ||
238 | dev_dbg(cdns->dev, "Msg Ack not received\n"); | ||
239 | if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) { | ||
240 | nack = 1; | ||
241 | dev_err(cdns->dev, "Msg NACK received\n"); | ||
242 | } | ||
243 | } | ||
244 | } | ||
245 | |||
246 | if (nack) { | ||
247 | dev_err(cdns->dev, "Msg NACKed for Slave %d\n", msg->dev_num); | ||
248 | return SDW_CMD_FAIL; | ||
249 | } else if (no_ack) { | ||
250 | dev_dbg(cdns->dev, "Msg ignored for Slave %d\n", msg->dev_num); | ||
251 | return SDW_CMD_IGNORED; | ||
252 | } | ||
253 | |||
254 | /* fill response */ | ||
255 | for (i = 0; i < count; i++) | ||
256 | msg->buf[i + offset] = cdns->response_buf[i] >> | ||
257 | SDW_REG_SHIFT(CDNS_MCP_RESP_RDATA); | ||
258 | |||
259 | return SDW_CMD_OK; | ||
260 | } | ||
261 | |||
262 | static enum sdw_command_response | ||
263 | _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd, | ||
264 | int offset, int count, bool defer) | ||
265 | { | ||
266 | unsigned long time; | ||
267 | u32 base, i, data; | ||
268 | u16 addr; | ||
269 | |||
270 | /* Program the watermark level for RX FIFO */ | ||
271 | if (cdns->msg_count != count) { | ||
272 | cdns_writel(cdns, CDNS_MCP_FIFOLEVEL, count); | ||
273 | cdns->msg_count = count; | ||
274 | } | ||
275 | |||
276 | base = CDNS_MCP_CMD_BASE; | ||
277 | addr = msg->addr; | ||
278 | |||
279 | for (i = 0; i < count; i++) { | ||
280 | data = msg->dev_num << SDW_REG_SHIFT(CDNS_MCP_CMD_DEV_ADDR); | ||
281 | data |= cmd << SDW_REG_SHIFT(CDNS_MCP_CMD_COMMAND); | ||
282 | data |= addr++ << SDW_REG_SHIFT(CDNS_MCP_CMD_REG_ADDR_L); | ||
283 | |||
284 | if (msg->flags == SDW_MSG_FLAG_WRITE) | ||
285 | data |= msg->buf[i + offset]; | ||
286 | |||
287 | data |= msg->ssp_sync << SDW_REG_SHIFT(CDNS_MCP_CMD_SSP_TAG); | ||
288 | cdns_writel(cdns, base, data); | ||
289 | base += CDNS_MCP_CMD_WORD_LEN; | ||
290 | } | ||
291 | |||
292 | if (defer) | ||
293 | return SDW_CMD_OK; | ||
294 | |||
295 | /* wait for timeout or response */ | ||
296 | time = wait_for_completion_timeout(&cdns->tx_complete, | ||
297 | msecs_to_jiffies(CDNS_TX_TIMEOUT)); | ||
298 | if (!time) { | ||
299 | dev_err(cdns->dev, "IO transfer timed out\n"); | ||
300 | msg->len = 0; | ||
301 | return SDW_CMD_TIMEOUT; | ||
302 | } | ||
303 | |||
304 | return cdns_fill_msg_resp(cdns, msg, count, offset); | ||
305 | } | ||
306 | |||
307 | static enum sdw_command_response cdns_program_scp_addr( | ||
308 | struct sdw_cdns *cdns, struct sdw_msg *msg) | ||
309 | { | ||
310 | int nack = 0, no_ack = 0; | ||
311 | unsigned long time; | ||
312 | u32 data[2], base; | ||
313 | int i; | ||
314 | |||
315 | /* Program the watermark level for RX FIFO */ | ||
316 | if (cdns->msg_count != CDNS_SCP_RX_FIFOLEVEL) { | ||
317 | cdns_writel(cdns, CDNS_MCP_FIFOLEVEL, CDNS_SCP_RX_FIFOLEVEL); | ||
318 | cdns->msg_count = CDNS_SCP_RX_FIFOLEVEL; | ||
319 | } | ||
320 | |||
321 | data[0] = msg->dev_num << SDW_REG_SHIFT(CDNS_MCP_CMD_DEV_ADDR); | ||
322 | data[0] |= 0x3 << SDW_REG_SHIFT(CDNS_MCP_CMD_COMMAND); | ||
323 | data[1] = data[0]; | ||
324 | |||
325 | data[0] |= SDW_SCP_ADDRPAGE1 << SDW_REG_SHIFT(CDNS_MCP_CMD_REG_ADDR_L); | ||
326 | data[1] |= SDW_SCP_ADDRPAGE2 << SDW_REG_SHIFT(CDNS_MCP_CMD_REG_ADDR_L); | ||
327 | |||
328 | data[0] |= msg->addr_page1; | ||
329 | data[1] |= msg->addr_page2; | ||
330 | |||
331 | base = CDNS_MCP_CMD_BASE; | ||
332 | cdns_writel(cdns, base, data[0]); | ||
333 | base += CDNS_MCP_CMD_WORD_LEN; | ||
334 | cdns_writel(cdns, base, data[1]); | ||
335 | |||
336 | time = wait_for_completion_timeout(&cdns->tx_complete, | ||
337 | msecs_to_jiffies(CDNS_TX_TIMEOUT)); | ||
338 | if (!time) { | ||
339 | dev_err(cdns->dev, "SCP msg transfer timed out\n"); | ||
340 | msg->len = 0; | ||
341 | return SDW_CMD_TIMEOUT; | ||
342 | } | ||
343 | |||
344 | /* check the response of the writes */ | ||
345 | for (i = 0; i < 2; i++) { | ||
346 | if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) { | ||
347 | no_ack = 1; | ||
348 | dev_err(cdns->dev, "Program SCP Ack not received"); | ||
349 | if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) { | ||
350 | nack = 1; | ||
351 | dev_err(cdns->dev, "Program SCP NACK received"); | ||
352 | } | ||
353 | } | ||
354 | } | ||
355 | |||
356 | /* For NACK, NO ack, don't return err if we are in Broadcast mode */ | ||
357 | if (nack) { | ||
358 | dev_err(cdns->dev, | ||
359 | "SCP_addrpage NACKed for Slave %d", msg->dev_num); | ||
360 | return SDW_CMD_FAIL; | ||
361 | } else if (no_ack) { | ||
362 | dev_dbg(cdns->dev, | ||
363 | "SCP_addrpage ignored for Slave %d", msg->dev_num); | ||
364 | return SDW_CMD_IGNORED; | ||
365 | } | ||
366 | |||
367 | return SDW_CMD_OK; | ||
368 | } | ||
369 | |||
370 | static int cdns_prep_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int *cmd) | ||
371 | { | ||
372 | int ret; | ||
373 | |||
374 | if (msg->page) { | ||
375 | ret = cdns_program_scp_addr(cdns, msg); | ||
376 | if (ret) { | ||
377 | msg->len = 0; | ||
378 | return ret; | ||
379 | } | ||
380 | } | ||
381 | |||
382 | switch (msg->flags) { | ||
383 | case SDW_MSG_FLAG_READ: | ||
384 | *cmd = CDNS_MCP_CMD_READ; | ||
385 | break; | ||
386 | |||
387 | case SDW_MSG_FLAG_WRITE: | ||
388 | *cmd = CDNS_MCP_CMD_WRITE; | ||
389 | break; | ||
390 | |||
391 | default: | ||
392 | dev_err(cdns->dev, "Invalid msg cmd: %d\n", msg->flags); | ||
393 | return -EINVAL; | ||
394 | } | ||
395 | |||
396 | return 0; | ||
397 | } | ||
398 | |||
399 | static enum sdw_command_response | ||
400 | cdns_xfer_msg(struct sdw_bus *bus, struct sdw_msg *msg) | ||
401 | { | ||
402 | struct sdw_cdns *cdns = bus_to_cdns(bus); | ||
403 | int cmd = 0, ret, i; | ||
404 | |||
405 | ret = cdns_prep_msg(cdns, msg, &cmd); | ||
406 | if (ret) | ||
407 | return SDW_CMD_FAIL_OTHER; | ||
408 | |||
409 | for (i = 0; i < msg->len / CDNS_MCP_CMD_LEN; i++) { | ||
410 | ret = _cdns_xfer_msg(cdns, msg, cmd, i * CDNS_MCP_CMD_LEN, | ||
411 | CDNS_MCP_CMD_LEN, false); | ||
412 | if (ret < 0) | ||
413 | goto exit; | ||
414 | } | ||
415 | |||
416 | if (!(msg->len % CDNS_MCP_CMD_LEN)) | ||
417 | goto exit; | ||
418 | |||
419 | ret = _cdns_xfer_msg(cdns, msg, cmd, i * CDNS_MCP_CMD_LEN, | ||
420 | msg->len % CDNS_MCP_CMD_LEN, false); | ||
421 | |||
422 | exit: | ||
423 | return ret; | ||
424 | } | ||
425 | |||
426 | static enum sdw_command_response | ||
427 | cdns_xfer_msg_defer(struct sdw_bus *bus, | ||
428 | struct sdw_msg *msg, struct sdw_defer *defer) | ||
429 | { | ||
430 | struct sdw_cdns *cdns = bus_to_cdns(bus); | ||
431 | int cmd = 0, ret; | ||
432 | |||
433 | /* for defer only 1 message is supported */ | ||
434 | if (msg->len > 1) | ||
435 | return -ENOTSUPP; | ||
436 | |||
437 | ret = cdns_prep_msg(cdns, msg, &cmd); | ||
438 | if (ret) | ||
439 | return SDW_CMD_FAIL_OTHER; | ||
440 | |||
441 | cdns->defer = defer; | ||
442 | cdns->defer->length = msg->len; | ||
443 | |||
444 | return _cdns_xfer_msg(cdns, msg, cmd, 0, msg->len, true); | ||
445 | } | ||
446 | |||
447 | static enum sdw_command_response | ||
448 | cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num) | ||
449 | { | ||
450 | struct sdw_cdns *cdns = bus_to_cdns(bus); | ||
451 | struct sdw_msg msg; | ||
452 | |||
453 | /* Create dummy message with valid device number */ | ||
454 | memset(&msg, 0, sizeof(msg)); | ||
455 | msg.dev_num = dev_num; | ||
456 | |||
457 | return cdns_program_scp_addr(cdns, &msg); | ||
458 | } | ||
459 | |||
460 | /* | ||
461 | * IRQ handling | ||
462 | */ | ||
463 | |||
464 | static void cdns_read_response(struct sdw_cdns *cdns) | ||
465 | { | ||
466 | u32 num_resp, cmd_base; | ||
467 | int i; | ||
468 | |||
469 | num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT); | ||
470 | num_resp &= CDNS_MCP_RX_FIFO_AVAIL; | ||
471 | |||
472 | cmd_base = CDNS_MCP_CMD_BASE; | ||
473 | |||
474 | for (i = 0; i < num_resp; i++) { | ||
475 | cdns->response_buf[i] = cdns_readl(cdns, cmd_base); | ||
476 | cmd_base += CDNS_MCP_CMD_WORD_LEN; | ||
477 | } | ||
478 | } | ||
479 | |||
480 | static int cdns_update_slave_status(struct sdw_cdns *cdns, | ||
481 | u32 slave0, u32 slave1) | ||
482 | { | ||
483 | enum sdw_slave_status status[SDW_MAX_DEVICES + 1]; | ||
484 | bool is_slave = false; | ||
485 | u64 slave, mask; | ||
486 | int i, set_status; | ||
487 | |||
488 | /* combine the two status */ | ||
489 | slave = ((u64)slave1 << 32) | slave0; | ||
490 | memset(status, 0, sizeof(status)); | ||
491 | |||
492 | for (i = 0; i <= SDW_MAX_DEVICES; i++) { | ||
493 | mask = (slave >> (i * CDNS_MCP_SLAVE_STATUS_NUM)) & | ||
494 | CDNS_MCP_SLAVE_STATUS_BITS; | ||
495 | if (!mask) | ||
496 | continue; | ||
497 | |||
498 | is_slave = true; | ||
499 | set_status = 0; | ||
500 | |||
501 | if (mask & CDNS_MCP_SLAVE_INTSTAT_RESERVED) { | ||
502 | status[i] = SDW_SLAVE_RESERVED; | ||
503 | set_status++; | ||
504 | } | ||
505 | |||
506 | if (mask & CDNS_MCP_SLAVE_INTSTAT_ATTACHED) { | ||
507 | status[i] = SDW_SLAVE_ATTACHED; | ||
508 | set_status++; | ||
509 | } | ||
510 | |||
511 | if (mask & CDNS_MCP_SLAVE_INTSTAT_ALERT) { | ||
512 | status[i] = SDW_SLAVE_ALERT; | ||
513 | set_status++; | ||
514 | } | ||
515 | |||
516 | if (mask & CDNS_MCP_SLAVE_INTSTAT_NPRESENT) { | ||
517 | status[i] = SDW_SLAVE_UNATTACHED; | ||
518 | set_status++; | ||
519 | } | ||
520 | |||
521 | /* first check if Slave reported multiple status */ | ||
522 | if (set_status > 1) { | ||
523 | dev_warn(cdns->dev, | ||
524 | "Slave reported multiple Status: %d\n", | ||
525 | status[i]); | ||
526 | /* | ||
527 | * TODO: we need to reread the status here by | ||
528 | * issuing a PING cmd | ||
529 | */ | ||
530 | } | ||
531 | } | ||
532 | |||
533 | if (is_slave) | ||
534 | return sdw_handle_slave_status(&cdns->bus, status); | ||
535 | |||
536 | return 0; | ||
537 | } | ||
538 | |||
539 | /** | ||
540 | * sdw_cdns_irq() - Cadence interrupt handler | ||
541 | * @irq: irq number | ||
542 | * @dev_id: irq context | ||
543 | */ | ||
544 | irqreturn_t sdw_cdns_irq(int irq, void *dev_id) | ||
545 | { | ||
546 | struct sdw_cdns *cdns = dev_id; | ||
547 | u32 int_status; | ||
548 | int ret = IRQ_HANDLED; | ||
549 | |||
550 | /* Check if the link is up */ | ||
551 | if (!cdns->link_up) | ||
552 | return IRQ_NONE; | ||
553 | |||
554 | int_status = cdns_readl(cdns, CDNS_MCP_INTSTAT); | ||
555 | |||
556 | if (!(int_status & CDNS_MCP_INT_IRQ)) | ||
557 | return IRQ_NONE; | ||
558 | |||
559 | if (int_status & CDNS_MCP_INT_RX_WL) { | ||
560 | cdns_read_response(cdns); | ||
561 | |||
562 | if (cdns->defer) { | ||
563 | cdns_fill_msg_resp(cdns, cdns->defer->msg, | ||
564 | cdns->defer->length, 0); | ||
565 | complete(&cdns->defer->complete); | ||
566 | cdns->defer = NULL; | ||
567 | } else | ||
568 | complete(&cdns->tx_complete); | ||
569 | } | ||
570 | |||
571 | if (int_status & CDNS_MCP_INT_CTRL_CLASH) { | ||
572 | |||
573 | /* Slave is driving bit slot during control word */ | ||
574 | dev_err_ratelimited(cdns->dev, "Bus clash for control word\n"); | ||
575 | int_status |= CDNS_MCP_INT_CTRL_CLASH; | ||
576 | } | ||
577 | |||
578 | if (int_status & CDNS_MCP_INT_DATA_CLASH) { | ||
579 | /* | ||
580 | * Multiple slaves trying to drive bit slot, or issue with | ||
581 | * ownership of data bits or Slave gone bonkers | ||
582 | */ | ||
583 | dev_err_ratelimited(cdns->dev, "Bus clash for data word\n"); | ||
584 | int_status |= CDNS_MCP_INT_DATA_CLASH; | ||
585 | } | ||
586 | |||
587 | if (int_status & CDNS_MCP_INT_SLAVE_MASK) { | ||
588 | /* Mask the Slave interrupt and wake thread */ | ||
589 | cdns_updatel(cdns, CDNS_MCP_INTMASK, | ||
590 | CDNS_MCP_INT_SLAVE_MASK, 0); | ||
591 | |||
592 | int_status &= ~CDNS_MCP_INT_SLAVE_MASK; | ||
593 | ret = IRQ_WAKE_THREAD; | ||
594 | } | ||
595 | |||
596 | cdns_writel(cdns, CDNS_MCP_INTSTAT, int_status); | ||
597 | return ret; | ||
598 | } | ||
599 | EXPORT_SYMBOL(sdw_cdns_irq); | ||
600 | |||
601 | /** | ||
602 | * sdw_cdns_thread() - Cadence irq thread handler | ||
603 | * @irq: irq number | ||
604 | * @dev_id: irq context | ||
605 | */ | ||
606 | irqreturn_t sdw_cdns_thread(int irq, void *dev_id) | ||
607 | { | ||
608 | struct sdw_cdns *cdns = dev_id; | ||
609 | u32 slave0, slave1; | ||
610 | |||
611 | dev_dbg(cdns->dev, "Slave status change\n"); | ||
612 | |||
613 | slave0 = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT0); | ||
614 | slave1 = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT1); | ||
615 | |||
616 | cdns_update_slave_status(cdns, slave0, slave1); | ||
617 | cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT0, slave0); | ||
618 | cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT1, slave1); | ||
619 | |||
620 | /* clear and unmask Slave interrupt now */ | ||
621 | cdns_writel(cdns, CDNS_MCP_INTSTAT, CDNS_MCP_INT_SLAVE_MASK); | ||
622 | cdns_updatel(cdns, CDNS_MCP_INTMASK, | ||
623 | CDNS_MCP_INT_SLAVE_MASK, CDNS_MCP_INT_SLAVE_MASK); | ||
624 | |||
625 | return IRQ_HANDLED; | ||
626 | } | ||
627 | EXPORT_SYMBOL(sdw_cdns_thread); | ||
628 | |||
629 | /* | ||
630 | * init routines | ||
631 | */ | ||
632 | static int _cdns_enable_interrupt(struct sdw_cdns *cdns) | ||
633 | { | ||
634 | u32 mask; | ||
635 | |||
636 | cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK0, | ||
637 | CDNS_MCP_SLAVE_INTMASK0_MASK); | ||
638 | cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK1, | ||
639 | CDNS_MCP_SLAVE_INTMASK1_MASK); | ||
640 | |||
641 | mask = CDNS_MCP_INT_SLAVE_RSVD | CDNS_MCP_INT_SLAVE_ALERT | | ||
642 | CDNS_MCP_INT_SLAVE_ATTACH | CDNS_MCP_INT_SLAVE_NATTACH | | ||
643 | CDNS_MCP_INT_CTRL_CLASH | CDNS_MCP_INT_DATA_CLASH | | ||
644 | CDNS_MCP_INT_RX_WL | CDNS_MCP_INT_IRQ | CDNS_MCP_INT_DPINT; | ||
645 | |||
646 | cdns_writel(cdns, CDNS_MCP_INTMASK, mask); | ||
647 | |||
648 | return 0; | ||
649 | } | ||
650 | |||
651 | /** | ||
652 | * sdw_cdns_enable_interrupt() - Enable SDW interrupts and update config | ||
653 | * @cdns: Cadence instance | ||
654 | */ | ||
655 | int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns) | ||
656 | { | ||
657 | int ret; | ||
658 | |||
659 | _cdns_enable_interrupt(cdns); | ||
660 | ret = cdns_clear_bit(cdns, CDNS_MCP_CONFIG_UPDATE, | ||
661 | CDNS_MCP_CONFIG_UPDATE_BIT); | ||
662 | if (ret < 0) | ||
663 | dev_err(cdns->dev, "Config update timed out\n"); | ||
664 | |||
665 | return ret; | ||
666 | } | ||
667 | EXPORT_SYMBOL(sdw_cdns_enable_interrupt); | ||
668 | |||
669 | /** | ||
670 | * sdw_cdns_init() - Cadence initialization | ||
671 | * @cdns: Cadence instance | ||
672 | */ | ||
673 | int sdw_cdns_init(struct sdw_cdns *cdns) | ||
674 | { | ||
675 | u32 val; | ||
676 | int ret; | ||
677 | |||
678 | /* Exit clock stop */ | ||
679 | ret = cdns_clear_bit(cdns, CDNS_MCP_CONTROL, | ||
680 | CDNS_MCP_CONTROL_CLK_STOP_CLR); | ||
681 | if (ret < 0) { | ||
682 | dev_err(cdns->dev, "Couldn't exit from clock stop\n"); | ||
683 | return ret; | ||
684 | } | ||
685 | |||
686 | /* Set clock divider */ | ||
687 | val = cdns_readl(cdns, CDNS_MCP_CLK_CTRL0); | ||
688 | val |= CDNS_DEFAULT_CLK_DIVIDER; | ||
689 | cdns_writel(cdns, CDNS_MCP_CLK_CTRL0, val); | ||
690 | |||
691 | /* Set the default frame shape */ | ||
692 | cdns_writel(cdns, CDNS_MCP_FRAME_SHAPE_INIT, CDNS_DEFAULT_FRAME_SHAPE); | ||
693 | |||
694 | /* Set SSP interval to default value */ | ||
695 | cdns_writel(cdns, CDNS_MCP_SSP_CTRL0, CDNS_DEFAULT_SSP_INTERVAL); | ||
696 | cdns_writel(cdns, CDNS_MCP_SSP_CTRL1, CDNS_DEFAULT_SSP_INTERVAL); | ||
697 | |||
698 | /* Set cmd accept mode */ | ||
699 | cdns_updatel(cdns, CDNS_MCP_CONTROL, CDNS_MCP_CONTROL_CMD_ACCEPT, | ||
700 | CDNS_MCP_CONTROL_CMD_ACCEPT); | ||
701 | |||
702 | /* Configure mcp config */ | ||
703 | val = cdns_readl(cdns, CDNS_MCP_CONFIG); | ||
704 | |||
705 | /* Set Max cmd retry to 15 */ | ||
706 | val |= CDNS_MCP_CONFIG_MCMD_RETRY; | ||
707 | |||
708 | /* Set frame delay between PREQ and ping frame to 15 frames */ | ||
709 | val |= 0xF << SDW_REG_SHIFT(CDNS_MCP_CONFIG_MPREQ_DELAY); | ||
710 | |||
711 | /* Disable auto bus release */ | ||
712 | val &= ~CDNS_MCP_CONFIG_BUS_REL; | ||
713 | |||
714 | /* Disable sniffer mode */ | ||
715 | val &= ~CDNS_MCP_CONFIG_SNIFFER; | ||
716 | |||
717 | /* Set cmd mode for Tx and Rx cmds */ | ||
718 | val &= ~CDNS_MCP_CONFIG_CMD; | ||
719 | |||
720 | /* Set operation to normal */ | ||
721 | val &= ~CDNS_MCP_CONFIG_OP; | ||
722 | val |= CDNS_MCP_CONFIG_OP_NORMAL; | ||
723 | |||
724 | cdns_writel(cdns, CDNS_MCP_CONFIG, val); | ||
725 | |||
726 | return 0; | ||
727 | } | ||
728 | EXPORT_SYMBOL(sdw_cdns_init); | ||
729 | |||
730 | struct sdw_master_ops sdw_cdns_master_ops = { | ||
731 | .read_prop = sdw_master_read_prop, | ||
732 | .xfer_msg = cdns_xfer_msg, | ||
733 | .xfer_msg_defer = cdns_xfer_msg_defer, | ||
734 | .reset_page_addr = cdns_reset_page_addr, | ||
735 | }; | ||
736 | EXPORT_SYMBOL(sdw_cdns_master_ops); | ||
737 | |||
738 | /** | ||
739 | * sdw_cdns_probe() - Cadence probe routine | ||
740 | * @cdns: Cadence instance | ||
741 | */ | ||
742 | int sdw_cdns_probe(struct sdw_cdns *cdns) | ||
743 | { | ||
744 | init_completion(&cdns->tx_complete); | ||
745 | |||
746 | return 0; | ||
747 | } | ||
748 | EXPORT_SYMBOL(sdw_cdns_probe); | ||
749 | |||
750 | MODULE_LICENSE("Dual BSD/GPL"); | ||
751 | MODULE_DESCRIPTION("Cadence SoundWire Library"); | ||
diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h new file mode 100644 index 000000000000..beaf6c9804eb --- /dev/null +++ b/drivers/soundwire/cadence_master.h | |||
@@ -0,0 +1,48 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ | ||
2 | /* Copyright(c) 2015-17 Intel Corporation. */ | ||
3 | |||
4 | #ifndef __SDW_CADENCE_H | ||
5 | #define __SDW_CADENCE_H | ||
6 | |||
7 | /** | ||
8 | * struct sdw_cdns - Cadence driver context | ||
9 | * @dev: Linux device | ||
10 | * @bus: Bus handle | ||
11 | * @instance: instance number | ||
12 | * @response_buf: SoundWire response buffer | ||
13 | * @tx_complete: Tx completion | ||
14 | * @defer: Defer pointer | ||
15 | * @registers: Cadence registers | ||
16 | * @link_up: Link status | ||
17 | * @msg_count: Messages sent on bus | ||
18 | */ | ||
19 | struct sdw_cdns { | ||
20 | struct device *dev; | ||
21 | struct sdw_bus bus; | ||
22 | unsigned int instance; | ||
23 | |||
24 | u32 response_buf[0x80]; | ||
25 | struct completion tx_complete; | ||
26 | struct sdw_defer *defer; | ||
27 | |||
28 | void __iomem *registers; | ||
29 | |||
30 | bool link_up; | ||
31 | unsigned int msg_count; | ||
32 | }; | ||
33 | |||
34 | #define bus_to_cdns(_bus) container_of(_bus, struct sdw_cdns, bus) | ||
35 | |||
36 | /* Exported symbols */ | ||
37 | |||
38 | int sdw_cdns_probe(struct sdw_cdns *cdns); | ||
39 | extern struct sdw_master_ops sdw_cdns_master_ops; | ||
40 | |||
41 | irqreturn_t sdw_cdns_irq(int irq, void *dev_id); | ||
42 | irqreturn_t sdw_cdns_thread(int irq, void *dev_id); | ||
43 | |||
44 | int sdw_cdns_init(struct sdw_cdns *cdns); | ||
45 | int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns); | ||
46 | |||
47 | |||
48 | #endif /* __SDW_CADENCE_H */ | ||
diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
new file mode 100644
index 000000000000..86a7bd1fc912
--- /dev/null
+++ b/drivers/soundwire/intel.c
@@ -0,0 +1,345 @@
// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
// Copyright(c) 2015-17 Intel Corporation.

/*
 * Soundwire Intel Master Driver
 */

#include <linux/acpi.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/soundwire/sdw_registers.h>
#include <linux/soundwire/sdw.h>
#include <linux/soundwire/sdw_intel.h>
#include "cadence_master.h"
#include "intel.h"

/* Intel SHIM Registers Definition */
#define SDW_SHIM_LCAP			0x0
#define SDW_SHIM_LCTL			0x4
#define SDW_SHIM_IPPTR			0x8
#define SDW_SHIM_SYNC			0xC

#define SDW_SHIM_CTLSCAP(x)		(0x010 + 0x60 * x)
#define SDW_SHIM_CTLS0CM(x)		(0x012 + 0x60 * x)
#define SDW_SHIM_CTLS1CM(x)		(0x014 + 0x60 * x)
#define SDW_SHIM_CTLS2CM(x)		(0x016 + 0x60 * x)
#define SDW_SHIM_CTLS3CM(x)		(0x018 + 0x60 * x)
#define SDW_SHIM_PCMSCAP(x)		(0x020 + 0x60 * x)

#define SDW_SHIM_PCMSYCHM(x, y)		(0x022 + (0x60 * x) + (0x2 * y))
#define SDW_SHIM_PCMSYCHC(x, y)		(0x042 + (0x60 * x) + (0x2 * y))
#define SDW_SHIM_PDMSCAP(x)		(0x062 + 0x60 * x)
#define SDW_SHIM_IOCTL(x)		(0x06C + 0x60 * x)
#define SDW_SHIM_CTMCTL(x)		(0x06E + 0x60 * x)

#define SDW_SHIM_WAKEEN			0x190
#define SDW_SHIM_WAKESTS		0x192

#define SDW_SHIM_LCTL_SPA		BIT(0)
#define SDW_SHIM_LCTL_CPA		BIT(8)

#define SDW_SHIM_SYNC_SYNCPRD_VAL	0x176F
#define SDW_SHIM_SYNC_SYNCPRD		GENMASK(14, 0)
#define SDW_SHIM_SYNC_SYNCCPU		BIT(15)
#define SDW_SHIM_SYNC_CMDSYNC_MASK	GENMASK(19, 16)
#define SDW_SHIM_SYNC_CMDSYNC		BIT(16)
#define SDW_SHIM_SYNC_SYNCGO		BIT(24)

#define SDW_SHIM_PCMSCAP_ISS		GENMASK(3, 0)
#define SDW_SHIM_PCMSCAP_OSS		GENMASK(7, 4)
#define SDW_SHIM_PCMSCAP_BSS		GENMASK(12, 8)

#define SDW_SHIM_PCMSYCM_LCHN		GENMASK(3, 0)
#define SDW_SHIM_PCMSYCM_HCHN		GENMASK(7, 4)
#define SDW_SHIM_PCMSYCM_STREAM		GENMASK(13, 8)
#define SDW_SHIM_PCMSYCM_DIR		BIT(15)

#define SDW_SHIM_PDMSCAP_ISS		GENMASK(3, 0)
#define SDW_SHIM_PDMSCAP_OSS		GENMASK(7, 4)
#define SDW_SHIM_PDMSCAP_BSS		GENMASK(12, 8)
#define SDW_SHIM_PDMSCAP_CPSS		GENMASK(15, 13)

#define SDW_SHIM_IOCTL_MIF		BIT(0)
#define SDW_SHIM_IOCTL_CO		BIT(1)
#define SDW_SHIM_IOCTL_COE		BIT(2)
#define SDW_SHIM_IOCTL_DO		BIT(3)
#define SDW_SHIM_IOCTL_DOE		BIT(4)
#define SDW_SHIM_IOCTL_BKE		BIT(5)
#define SDW_SHIM_IOCTL_WPDD		BIT(6)
#define SDW_SHIM_IOCTL_CIBD		BIT(8)
#define SDW_SHIM_IOCTL_DIBD		BIT(9)

#define SDW_SHIM_CTMCTL_DACTQE		BIT(0)
#define SDW_SHIM_CTMCTL_DODS		BIT(1)
#define SDW_SHIM_CTMCTL_DOAIS		GENMASK(4, 3)

#define SDW_SHIM_WAKEEN_ENABLE		BIT(0)
#define SDW_SHIM_WAKESTS_STATUS		BIT(0)

/* Intel ALH Register definitions */
#define SDW_ALH_STRMZCFG(x)		(0x000 + (0x4 * x))

#define SDW_ALH_STRMZCFG_DMAT_VAL	0x3
#define SDW_ALH_STRMZCFG_DMAT		GENMASK(7, 0)
#define SDW_ALH_STRMZCFG_CHN		GENMASK(19, 16)

struct sdw_intel {
	struct sdw_cdns cdns;
	int instance;
	struct sdw_intel_link_res *res;
};

#define cdns_to_intel(_cdns) container_of(_cdns, struct sdw_intel, cdns)

/*
 * Read, write helpers for HW registers
 */
static inline int intel_readl(void __iomem *base, int offset)
{
	return readl(base + offset);
}

static inline void intel_writel(void __iomem *base, int offset, int value)
{
	writel(value, base + offset);
}

static inline u16 intel_readw(void __iomem *base, int offset)
{
	return readw(base + offset);
}

static inline void intel_writew(void __iomem *base, int offset, u16 value)
{
	writew(value, base + offset);
}

static int intel_clear_bit(void __iomem *base, int offset, u32 value, u32 mask)
{
	int timeout = 10;
	u32 reg_read;

	writel(value, base + offset);
	do {
		reg_read = readl(base + offset);
		if (!(reg_read & mask))
			return 0;

		timeout--;
		udelay(50);
	} while (timeout != 0);

	return -EAGAIN;
}

static int intel_set_bit(void __iomem *base, int offset, u32 value, u32 mask)
{
	int timeout = 10;
	u32 reg_read;

	writel(value, base + offset);
	do {
		reg_read = readl(base + offset);
		if (reg_read & mask)
			return 0;

		timeout--;
		udelay(50);
	} while (timeout != 0);

	return -EAGAIN;
}

/*
 * shim ops
 */

static int intel_link_power_up(struct sdw_intel *sdw)
{
	unsigned int link_id = sdw->instance;
	void __iomem *shim = sdw->res->shim;
	int spa_mask, cpa_mask;
	int link_control, ret;

	/* Link power up sequence */
	link_control = intel_readl(shim, SDW_SHIM_LCTL);
	spa_mask = (SDW_SHIM_LCTL_SPA << link_id);
	cpa_mask = (SDW_SHIM_LCTL_CPA << link_id);
	link_control |= spa_mask;

	ret = intel_set_bit(shim, SDW_SHIM_LCTL, link_control, cpa_mask);
	if (ret < 0)
		return ret;

	sdw->cdns.link_up = true;
	return 0;
}

static int intel_shim_init(struct sdw_intel *sdw)
{
	void __iomem *shim = sdw->res->shim;
	unsigned int link_id = sdw->instance;
	int sync_reg, ret;
	u16 ioctl = 0, act = 0;

	/* Initialize Shim */
	ioctl |= SDW_SHIM_IOCTL_BKE;
	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	ioctl |= SDW_SHIM_IOCTL_WPDD;
	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	ioctl |= SDW_SHIM_IOCTL_DO;
	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	ioctl |= SDW_SHIM_IOCTL_DOE;
	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	/* Switch to MIP from Glue logic */
	ioctl = intel_readw(shim, SDW_SHIM_IOCTL(link_id));

	ioctl &= ~(SDW_SHIM_IOCTL_DOE);
	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	ioctl &= ~(SDW_SHIM_IOCTL_DO);
	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	ioctl |= (SDW_SHIM_IOCTL_MIF);
	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	ioctl &= ~(SDW_SHIM_IOCTL_BKE);
	ioctl &= ~(SDW_SHIM_IOCTL_COE);

	intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl);

	act |= 0x1 << SDW_REG_SHIFT(SDW_SHIM_CTMCTL_DOAIS);
	act |= SDW_SHIM_CTMCTL_DACTQE;
	act |= SDW_SHIM_CTMCTL_DODS;
	intel_writew(shim, SDW_SHIM_CTMCTL(link_id), act);

	/* Now set SyncPRD period */
	sync_reg = intel_readl(shim, SDW_SHIM_SYNC);
	sync_reg |= (SDW_SHIM_SYNC_SYNCPRD_VAL <<
			SDW_REG_SHIFT(SDW_SHIM_SYNC_SYNCPRD));

	/* Set SyncCPU bit */
	sync_reg |= SDW_SHIM_SYNC_SYNCCPU;
	ret = intel_clear_bit(shim, SDW_SHIM_SYNC, sync_reg,
				SDW_SHIM_SYNC_SYNCCPU);
	if (ret < 0)
		dev_err(sdw->cdns.dev, "Failed to set sync period: %d\n", ret);

	return ret;
}

static int intel_prop_read(struct sdw_bus *bus)
{
	/* Initialize with default handler to read all DisCo properties */
	sdw_master_read_prop(bus);

	/* BIOS is not giving some values correctly, so let's override them */
	bus->prop.num_freq = 1;
	bus->prop.freq = devm_kcalloc(bus->dev, bus->prop.num_freq,
			sizeof(*bus->prop.freq), GFP_KERNEL);
	if (!bus->prop.freq)
		return -ENOMEM;

	bus->prop.freq[0] = bus->prop.max_freq;
	bus->prop.err_threshold = 5;

	return 0;
}

/*
 * probe and init
 */
static int intel_probe(struct platform_device *pdev)
{
	struct sdw_intel *sdw;
	int ret;

	sdw = devm_kzalloc(&pdev->dev, sizeof(*sdw), GFP_KERNEL);
	if (!sdw)
		return -ENOMEM;

	sdw->instance = pdev->id;
	sdw->res = dev_get_platdata(&pdev->dev);
	sdw->cdns.dev = &pdev->dev;
	sdw->cdns.registers = sdw->res->registers;
	sdw->cdns.instance = sdw->instance;
	sdw->cdns.msg_count = 0;
	sdw->cdns.bus.dev = &pdev->dev;
	sdw->cdns.bus.link_id = pdev->id;

	sdw_cdns_probe(&sdw->cdns);

	/* Set property read ops */
	sdw_cdns_master_ops.read_prop = intel_prop_read;
	sdw->cdns.bus.ops = &sdw_cdns_master_ops;

	platform_set_drvdata(pdev, sdw);

	ret = sdw_add_bus_master(&sdw->cdns.bus);
	if (ret) {
		dev_err(&pdev->dev, "sdw_add_bus_master failed: %d\n", ret);
		goto err_master_reg;
	}

	/* Initialize shim and controller */
	intel_link_power_up(sdw);
	intel_shim_init(sdw);

	ret = sdw_cdns_init(&sdw->cdns);
	if (ret)
		goto err_init;

	ret = sdw_cdns_enable_interrupt(&sdw->cdns);
	if (ret)
		goto err_init;

	/* Acquire IRQ */
	ret = request_threaded_irq(sdw->res->irq, sdw_cdns_irq,
			sdw_cdns_thread, IRQF_SHARED, KBUILD_MODNAME,
			&sdw->cdns);
	if (ret < 0) {
		dev_err(sdw->cdns.dev, "unable to grab IRQ %d, disabling device\n",
				sdw->res->irq);
		goto err_init;
	}

	return 0;

err_init:
	sdw_delete_bus_master(&sdw->cdns.bus);
err_master_reg:
	return ret;
}

static int intel_remove(struct platform_device *pdev)
{
	struct sdw_intel *sdw;

	sdw = platform_get_drvdata(pdev);

	/* Use the same cookie that was passed to request_threaded_irq() */
	free_irq(sdw->res->irq, &sdw->cdns);
	sdw_delete_bus_master(&sdw->cdns.bus);

	return 0;
}

static struct platform_driver sdw_intel_drv = {
	.probe = intel_probe,
	.remove = intel_remove,
	.driver = {
		.name = "int-sdw",
	},
};

module_platform_driver(sdw_intel_drv);

MODULE_LICENSE("Dual BSD/GPL");
MODULE_ALIAS("platform:int-sdw");
MODULE_DESCRIPTION("Intel Soundwire Master Driver");
diff --git a/drivers/soundwire/intel.h b/drivers/soundwire/intel.h
new file mode 100644
index 000000000000..ffa30d9535a2
--- /dev/null
+++ b/drivers/soundwire/intel.h
@@ -0,0 +1,23 @@
// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
// Copyright(c) 2015-17 Intel Corporation.

#ifndef __SDW_INTEL_LOCAL_H
#define __SDW_INTEL_LOCAL_H

/**
 * struct sdw_intel_link_res - Soundwire link resources
 * @registers: Link IO registers base
 * @shim: Audio shim pointer
 * @alh: ALH (Audio Link Hub) pointer
 * @irq: Interrupt line
 *
 * This is set as pdata for each link instance.
 */
struct sdw_intel_link_res {
	void __iomem *registers;
	void __iomem *shim;
	void __iomem *alh;
	int irq;
};

#endif /* __SDW_INTEL_LOCAL_H */
diff --git a/drivers/soundwire/intel_init.c b/drivers/soundwire/intel_init.c
new file mode 100644
index 000000000000..6f2bb99526f2
--- /dev/null
+++ b/drivers/soundwire/intel_init.c
@@ -0,0 +1,198 @@
// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
// Copyright(c) 2015-17 Intel Corporation.

/*
 * SDW Intel Init Routines
 *
 * Initializes and creates SDW devices based on ACPI and Hardware values
 */

#include <linux/acpi.h>
#include <linux/platform_device.h>
#include <linux/soundwire/sdw_intel.h>
#include "intel.h"

#define SDW_MAX_LINKS		4
#define SDW_SHIM_LCAP		0x0
#define SDW_SHIM_BASE		0x2C000
#define SDW_ALH_BASE		0x2C800
#define SDW_LINK_BASE		0x30000
#define SDW_LINK_SIZE		0x10000

struct sdw_link_data {
	struct sdw_intel_link_res res;
	struct platform_device *pdev;
};

struct sdw_intel_ctx {
	int count;
	struct sdw_link_data *links;
};

static int sdw_intel_cleanup_pdev(struct sdw_intel_ctx *ctx)
{
	struct sdw_link_data *link = ctx->links;
	int i;

	if (!link)
		return 0;

	for (i = 0; i < ctx->count; i++) {
		if (link->pdev)
			platform_device_unregister(link->pdev);
		link++;
	}

	kfree(ctx->links);
	ctx->links = NULL;

	return 0;
}

static struct sdw_intel_ctx
*sdw_intel_add_controller(struct sdw_intel_res *res)
{
	struct platform_device_info pdevinfo;
	struct platform_device *pdev;
	struct sdw_link_data *link;
	struct sdw_intel_ctx *ctx;
	struct acpi_device *adev;
	int ret, i;
	u8 count;
	u32 caps;

	if (acpi_bus_get_device(res->handle, &adev))
		return NULL;

	/* Found controller, find links supported */
	count = 0;
	ret = fwnode_property_read_u8_array(acpi_fwnode_handle(adev),
				"mipi-sdw-master-count", &count, 1);

	/* Don't fail on error, continue and use hw value */
	if (ret) {
		dev_err(&adev->dev,
			"Failed to read mipi-sdw-master-count: %d\n", ret);
		count = SDW_MAX_LINKS;
	}

	/* Check SNDWLCAP.LCOUNT */
	caps = ioread32(res->mmio_base + SDW_SHIM_BASE + SDW_SHIM_LCAP);

	/* Check HW supported vs property value and use min of two */
	count = min_t(u8, caps, count);

	/* Check count is within bounds */
	if (count > SDW_MAX_LINKS) {
		dev_err(&adev->dev, "Link count %d exceeds max %d\n",
			count, SDW_MAX_LINKS);
		return NULL;
	}

	dev_dbg(&adev->dev, "Creating %d SDW Link devices\n", count);

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return NULL;

	ctx->count = count;
	ctx->links = kcalloc(ctx->count, sizeof(*ctx->links), GFP_KERNEL);
	if (!ctx->links)
		goto link_err;

	link = ctx->links;

	/* Create SDW Master devices */
	for (i = 0; i < count; i++) {
		link->res.irq = res->irq;
		link->res.registers = res->mmio_base + SDW_LINK_BASE
					+ (SDW_LINK_SIZE * i);
		link->res.shim = res->mmio_base + SDW_SHIM_BASE;
		link->res.alh = res->mmio_base + SDW_ALH_BASE;

		memset(&pdevinfo, 0, sizeof(pdevinfo));

		pdevinfo.parent = res->parent;
		pdevinfo.name = "int-sdw";
		pdevinfo.id = i;
		pdevinfo.fwnode = acpi_fwnode_handle(adev);
		pdevinfo.data = &link->res;
		pdevinfo.size_data = sizeof(link->res);

		pdev = platform_device_register_full(&pdevinfo);
		if (IS_ERR(pdev)) {
			dev_err(&adev->dev,
				"platform device creation failed: %ld\n",
				PTR_ERR(pdev));
			goto pdev_err;
		}

		link->pdev = pdev;
		link++;
	}

	return ctx;

pdev_err:
	sdw_intel_cleanup_pdev(ctx);
link_err:
	kfree(ctx);
	return NULL;
}

static acpi_status sdw_intel_acpi_cb(acpi_handle handle, u32 level,
					void *cdata, void **return_value)
{
	struct sdw_intel_res *res = cdata;
	struct acpi_device *adev;

	if (acpi_bus_get_device(handle, &adev)) {
		/* adev is not valid here, so don't dereference it */
		pr_err("%s: Couldn't find ACPI handle\n", __func__);
		return AE_NOT_FOUND;
	}

	res->handle = handle;
	return AE_OK;
}

/**
 * sdw_intel_init() - SoundWire Intel init routine
 * @parent_handle: ACPI parent handle
 * @res: resource data
 *
 * This scans the namespace and creates SoundWire link controller devices
 * based on the info queried.
 */
void *sdw_intel_init(acpi_handle *parent_handle, struct sdw_intel_res *res)
{
	acpi_status status;

	status = acpi_walk_namespace(ACPI_TYPE_DEVICE,
				parent_handle, 1,
				sdw_intel_acpi_cb,
				NULL, res, NULL);
	if (ACPI_FAILURE(status))
		return NULL;

	return sdw_intel_add_controller(res);
}
EXPORT_SYMBOL(sdw_intel_init);

/**
 * sdw_intel_exit() - SoundWire Intel exit
 * @arg: callback context
 *
 * Delete the controller instances created and clean up
 */
void sdw_intel_exit(void *arg)
{
	struct sdw_intel_ctx *ctx = arg;

	sdw_intel_cleanup_pdev(ctx);
	kfree(ctx);
}
EXPORT_SYMBOL(sdw_intel_exit);

MODULE_LICENSE("Dual BSD/GPL");
MODULE_DESCRIPTION("Intel Soundwire Init Library");
diff --git a/drivers/soundwire/mipi_disco.c b/drivers/soundwire/mipi_disco.c
new file mode 100644
index 000000000000..fdeba0c3b589
--- /dev/null
+++ b/drivers/soundwire/mipi_disco.c
@@ -0,0 +1,401 @@
// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
// Copyright(c) 2015-17 Intel Corporation.

/*
 * The MIPI Discovery And Configuration (DisCo) Specification for SoundWire
 * specifies properties to be implemented for SoundWire Masters and Slaves.
 * The DisCo spec doesn't mandate these properties; however, the SDW bus
 * cannot work without knowing these values.
 *
 * The helper functions read the Master and Slave properties. Implementers
 * of Master or Slave drivers can use any of the following three mechanisms:
 * a) Use these APIs as the .read_prop() callback for Master and Slave
 * b) Implement their own methods and set those as .read_prop(), but invoke
 *    the APIs in this file for the generic read and override the values
 *    with platform specific data
 * c) Implement their own methods which do not use anything provided here
 */

#include <linux/device.h>
#include <linux/property.h>
#include <linux/mod_devicetable.h>
#include <linux/soundwire/sdw.h>
#include "bus.h"

/**
 * sdw_master_read_prop() - Read Master properties
 * @bus: SDW bus instance
 */
int sdw_master_read_prop(struct sdw_bus *bus)
{
	struct sdw_master_prop *prop = &bus->prop;
	struct fwnode_handle *link;
	char name[32];
	int nval, i;

	device_property_read_u32(bus->dev,
			"mipi-sdw-sw-interface-revision", &prop->revision);

	/* Find master handle */
	snprintf(name, sizeof(name),
			"mipi-sdw-master-%d-subproperties", bus->link_id);

	link = device_get_named_child_node(bus->dev, name);
	if (!link) {
		dev_err(bus->dev, "Master node %s not found\n", name);
		return -EIO;
	}

	if (fwnode_property_read_bool(link,
			"mipi-sdw-clock-stop-mode0-supported"))
		prop->clk_stop_mode = SDW_CLK_STOP_MODE0;

	if (fwnode_property_read_bool(link,
			"mipi-sdw-clock-stop-mode1-supported"))
		prop->clk_stop_mode |= SDW_CLK_STOP_MODE1;

	fwnode_property_read_u32(link,
			"mipi-sdw-max-clock-frequency", &prop->max_freq);

	nval = fwnode_property_read_u32_array(link,
			"mipi-sdw-clock-frequencies-supported", NULL, 0);
	if (nval > 0) {
		prop->num_freq = nval;
		prop->freq = devm_kcalloc(bus->dev, prop->num_freq,
				sizeof(*prop->freq), GFP_KERNEL);
		if (!prop->freq)
			return -ENOMEM;

		fwnode_property_read_u32_array(link,
				"mipi-sdw-clock-frequencies-supported",
				prop->freq, prop->num_freq);
	}

	/*
	 * Check the frequencies supported. If FW doesn't provide max
	 * freq, then populate here by checking values.
	 */
	if (!prop->max_freq && prop->freq) {
		prop->max_freq = prop->freq[0];
		for (i = 1; i < prop->num_freq; i++) {
			if (prop->freq[i] > prop->max_freq)
				prop->max_freq = prop->freq[i];
		}
	}

	nval = fwnode_property_read_u32_array(link,
			"mipi-sdw-supported-clock-gears", NULL, 0);
	if (nval > 0) {
		prop->num_clk_gears = nval;
		prop->clk_gears = devm_kcalloc(bus->dev, prop->num_clk_gears,
				sizeof(*prop->clk_gears), GFP_KERNEL);
		if (!prop->clk_gears)
			return -ENOMEM;

		fwnode_property_read_u32_array(link,
				"mipi-sdw-supported-clock-gears",
				prop->clk_gears, prop->num_clk_gears);
	}

	fwnode_property_read_u32(link, "mipi-sdw-default-frame-rate",
			&prop->default_frame_rate);

	fwnode_property_read_u32(link, "mipi-sdw-default-frame-row-size",
			&prop->default_row);

	fwnode_property_read_u32(link, "mipi-sdw-default-frame-col-size",
			&prop->default_col);

	prop->dynamic_frame = fwnode_property_read_bool(link,
			"mipi-sdw-dynamic-frame-shape");

	fwnode_property_read_u32(link, "mipi-sdw-command-error-threshold",
			&prop->err_threshold);

	return 0;
}
EXPORT_SYMBOL(sdw_master_read_prop);

static int sdw_slave_read_dp0(struct sdw_slave *slave,
		struct fwnode_handle *port, struct sdw_dp0_prop *dp0)
{
	int nval;

	fwnode_property_read_u32(port, "mipi-sdw-port-max-wordlength",
			&dp0->max_word);

	fwnode_property_read_u32(port, "mipi-sdw-port-min-wordlength",
			&dp0->min_word);

	nval = fwnode_property_read_u32_array(port,
			"mipi-sdw-port-wordlength-configs", NULL, 0);
	if (nval > 0) {
		dp0->num_words = nval;
		dp0->words = devm_kcalloc(&slave->dev,
				dp0->num_words, sizeof(*dp0->words),
				GFP_KERNEL);
		if (!dp0->words)
			return -ENOMEM;

		fwnode_property_read_u32_array(port,
				"mipi-sdw-port-wordlength-configs",
				dp0->words, dp0->num_words);
	}

	dp0->flow_controlled = fwnode_property_read_bool(
			port, "mipi-sdw-bra-flow-controlled");

	dp0->simple_ch_prep_sm = fwnode_property_read_bool(
			port, "mipi-sdw-simplified-channel-prepare-sm");

	dp0->device_interrupts = fwnode_property_read_bool(
			port, "mipi-sdw-imp-def-dp0-interrupts-supported");

	return 0;
}

static int sdw_slave_read_dpn(struct sdw_slave *slave,
		struct sdw_dpn_prop *dpn, int count, int ports, char *type)
{
	struct fwnode_handle *node;
	u32 bit, i = 0;
	int nval;
	unsigned long addr;
	char name[40];

	addr = ports;
	/* valid ports are 1 to 14 so apply mask */
	addr &= GENMASK(14, 1);

	for_each_set_bit(bit, &addr, 32) {
		snprintf(name, sizeof(name),
				"mipi-sdw-dp-%d-%s-subproperties", bit, type);

		dpn[i].num = bit;

		node = device_get_named_child_node(&slave->dev, name);
		if (!node) {
			dev_err(&slave->dev, "%s dpN not found\n", name);
			return -EIO;
		}

		fwnode_property_read_u32(node, "mipi-sdw-port-max-wordlength",
				&dpn[i].max_word);
		fwnode_property_read_u32(node, "mipi-sdw-port-min-wordlength",
				&dpn[i].min_word);

		nval = fwnode_property_read_u32_array(node,
				"mipi-sdw-port-wordlength-configs", NULL, 0);
		if (nval > 0) {
			dpn[i].num_words = nval;
			dpn[i].words = devm_kcalloc(&slave->dev,
					dpn[i].num_words,
					sizeof(*dpn[i].words), GFP_KERNEL);
			if (!dpn[i].words)
				return -ENOMEM;

			fwnode_property_read_u32_array(node,
					"mipi-sdw-port-wordlength-configs",
					dpn[i].words, dpn[i].num_words);
		}

		fwnode_property_read_u32(node, "mipi-sdw-data-port-type",
				&dpn[i].type);

		fwnode_property_read_u32(node,
				"mipi-sdw-max-grouping-supported",
				&dpn[i].max_grouping);

		dpn[i].simple_ch_prep_sm = fwnode_property_read_bool(node,
				"mipi-sdw-simplified-channelprepare-sm");

		fwnode_property_read_u32(node,
				"mipi-sdw-port-channelprepare-timeout",
				&dpn[i].ch_prep_timeout);

		fwnode_property_read_u32(node,
				"mipi-sdw-imp-def-dpn-interrupts-supported",
				&dpn[i].device_interrupts);

		fwnode_property_read_u32(node, "mipi-sdw-min-channel-number",
				&dpn[i].min_ch);

		fwnode_property_read_u32(node, "mipi-sdw-max-channel-number",
				&dpn[i].max_ch);

		nval = fwnode_property_read_u32_array(node,
				"mipi-sdw-channel-number-list", NULL, 0);
		if (nval > 0) {
			dpn[i].num_ch = nval;
			dpn[i].ch = devm_kcalloc(&slave->dev, dpn[i].num_ch,
					sizeof(*dpn[i].ch), GFP_KERNEL);
			if (!dpn[i].ch)
				return -ENOMEM;

			fwnode_property_read_u32_array(node,
					"mipi-sdw-channel-number-list",
					dpn[i].ch, dpn[i].num_ch);
		}

		nval = fwnode_property_read_u32_array(node,
				"mipi-sdw-channel-combination-list", NULL, 0);
		if (nval > 0) {
			dpn[i].num_ch_combinations = nval;
			dpn[i].ch_combinations = devm_kcalloc(&slave->dev,
					dpn[i].num_ch_combinations,
					sizeof(*dpn[i].ch_combinations),
					GFP_KERNEL);
			if (!dpn[i].ch_combinations)
				return -ENOMEM;

			fwnode_property_read_u32_array(node,
					"mipi-sdw-channel-combination-list",
					dpn[i].ch_combinations,
					dpn[i].num_ch_combinations);
		}

		fwnode_property_read_u32(node,
				"mipi-sdw-modes-supported", &dpn[i].modes);

		fwnode_property_read_u32(node, "mipi-sdw-max-async-buffer",
				&dpn[i].max_async_buffer);

		dpn[i].block_pack_mode = fwnode_property_read_bool(node,
				"mipi-sdw-block-packing-mode");

		fwnode_property_read_u32(node, "mipi-sdw-port-encoding-type",
				&dpn[i].port_encoding);

		/* TODO: Read audio mode */

		i++;
	}

	return 0;
}
283 | |||
284 | /** | ||
285 | * sdw_slave_read_prop() - Read Slave properties | ||
286 | * @slave: SDW Slave | ||
287 | */ | ||
288 | int sdw_slave_read_prop(struct sdw_slave *slave) | ||
289 | { | ||
290 | struct sdw_slave_prop *prop = &slave->prop; | ||
291 | struct device *dev = &slave->dev; | ||
292 | struct fwnode_handle *port; | ||
293 | int num_of_ports, nval, i, dp0 = 0; | ||
294 | |||
295 | device_property_read_u32(dev, "mipi-sdw-sw-interface-revision", | ||
296 | &prop->mipi_revision); | ||
297 | |||
298 | prop->wake_capable = device_property_read_bool(dev, | ||
299 | "mipi-sdw-wake-up-unavailable"); | ||
300 | prop->wake_capable = !prop->wake_capable; | ||
301 | |||
302 | prop->test_mode_capable = device_property_read_bool(dev, | ||
303 | "mipi-sdw-test-mode-supported"); | ||
304 | |||
305 | prop->clk_stop_mode1 = false; | ||
306 | if (device_property_read_bool(dev, | ||
307 | "mipi-sdw-clock-stop-mode1-supported")) | ||
308 | prop->clk_stop_mode1 = true; | ||
309 | |||
310 | prop->simple_clk_stop_capable = device_property_read_bool(dev, | ||
311 | "mipi-sdw-simplified-clockstopprepare-sm-supported"); | ||
312 | |||
313 | device_property_read_u32(dev, "mipi-sdw-clockstopprepare-timeout", | ||
314 | &prop->clk_stop_timeout); | ||
315 | |||
316 | device_property_read_u32(dev, "mipi-sdw-slave-channelprepare-timeout", | ||
317 | &prop->ch_prep_timeout); | ||
318 | |||
319 | device_property_read_u32(dev, | ||
320 | "mipi-sdw-clockstopprepare-hard-reset-behavior", | ||
321 | &prop->reset_behave); | ||
322 | |||
323 | prop->high_PHY_capable = device_property_read_bool(dev, | ||
324 | "mipi-sdw-highPHY-capable"); | ||
325 | |||
326 | prop->paging_support = device_property_read_bool(dev, | ||
327 | "mipi-sdw-paging-support"); | ||
328 | |||
329 | prop->bank_delay_support = device_property_read_bool(dev, | ||
330 | "mipi-sdw-bank-delay-support"); | ||
331 | |||
332 | device_property_read_u32(dev, | ||
333 | "mipi-sdw-port15-read-behavior", &prop->p15_behave); | ||
334 | |||
335 | device_property_read_u32(dev, "mipi-sdw-master-count", | ||
336 | &prop->master_count); | ||
337 | |||
338 | device_property_read_u32(dev, "mipi-sdw-source-port-list", | ||
339 | &prop->source_ports); | ||
340 | |||
341 | device_property_read_u32(dev, "mipi-sdw-sink-port-list", | ||
342 | &prop->sink_ports); | ||
343 | |||
344 | /* Read dp0 properties */ | ||
345 | port = device_get_named_child_node(dev, "mipi-sdw-dp-0-subproperties"); | ||
346 | if (!port) { | ||
347 | dev_dbg(dev, "DP0 node not found!!\n"); | ||
348 | } else { | ||
349 | |||
350 | prop->dp0_prop = devm_kzalloc(&slave->dev, | ||
351 | sizeof(*prop->dp0_prop), GFP_KERNEL); | ||
352 | if (!prop->dp0_prop) | ||
353 | return -ENOMEM; | ||
354 | |||
355 | sdw_slave_read_dp0(slave, port, prop->dp0_prop); | ||
356 | dp0 = 1; | ||
357 | } | ||
358 | |||
359 | /* | ||
360 | * Based on each DPn port, get source and sink dpn properties. | ||
361 | * Also, some ports can operate as both source and sink. | ||
362 | */ | ||
363 | |||
364 | /* Allocate memory for set bits in port lists */ | ||
365 | nval = hweight32(prop->source_ports); | ||
366 | prop->src_dpn_prop = devm_kcalloc(&slave->dev, nval, | ||
367 | sizeof(*prop->src_dpn_prop), GFP_KERNEL); | ||
368 | if (!prop->src_dpn_prop) | ||
369 | return -ENOMEM; | ||
370 | |||
371 | /* Read dpn properties for source port(s) */ | ||
372 | sdw_slave_read_dpn(slave, prop->src_dpn_prop, nval, | ||
373 | prop->source_ports, "source"); | ||
374 | |||
375 | nval = hweight32(prop->sink_ports); | ||
376 | prop->sink_dpn_prop = devm_kcalloc(&slave->dev, nval, | ||
377 | sizeof(*prop->sink_dpn_prop), GFP_KERNEL); | ||
378 | if (!prop->sink_dpn_prop) | ||
379 | return -ENOMEM; | ||
380 | |||
381 | /* Read dpn properties for sink port(s) */ | ||
382 | sdw_slave_read_dpn(slave, prop->sink_dpn_prop, nval, | ||
383 | prop->sink_ports, "sink"); | ||
384 | |||
385 | /* some ports are bidirectional so check total ports by ORing */ | ||
386 | nval = prop->source_ports | prop->sink_ports; | ||
387 | num_of_ports = hweight32(nval) + dp0; /* add DP0 */ | ||
388 | |||
389 | /* Allocate port_ready based on num_of_ports */ | ||
390 | slave->port_ready = devm_kcalloc(&slave->dev, num_of_ports, | ||
391 | sizeof(*slave->port_ready), GFP_KERNEL); | ||
392 | if (!slave->port_ready) | ||
393 | return -ENOMEM; | ||
394 | |||
395 | /* Initialize completion */ | ||
396 | for (i = 0; i < num_of_ports; i++) | ||
397 | init_completion(&slave->port_ready[i]); | ||
398 | |||
399 | return 0; | ||
400 | } | ||
401 | EXPORT_SYMBOL(sdw_slave_read_prop); | ||
diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c new file mode 100644 index 000000000000..ac103bd0c176 --- /dev/null +++ b/drivers/soundwire/slave.c | |||
@@ -0,0 +1,114 @@ | |||
1 | // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #include <linux/acpi.h> | ||
5 | #include <linux/soundwire/sdw.h> | ||
6 | #include <linux/soundwire/sdw_type.h> | ||
7 | #include "bus.h" | ||
8 | |||
9 | static void sdw_slave_release(struct device *dev) | ||
10 | { | ||
11 | struct sdw_slave *slave = dev_to_sdw_dev(dev); | ||
12 | |||
13 | kfree(slave); | ||
14 | } | ||
15 | |||
16 | static int sdw_slave_add(struct sdw_bus *bus, | ||
17 | struct sdw_slave_id *id, struct fwnode_handle *fwnode) | ||
18 | { | ||
19 | struct sdw_slave *slave; | ||
20 | int ret; | ||
21 | |||
22 | slave = kzalloc(sizeof(*slave), GFP_KERNEL); | ||
23 | if (!slave) | ||
24 | return -ENOMEM; | ||
25 | |||
26 | /* Initialize data structure */ | ||
27 | memcpy(&slave->id, id, sizeof(*id)); | ||
28 | slave->dev.parent = bus->dev; | ||
29 | slave->dev.fwnode = fwnode; | ||
30 | |||
31 | /* name shall be sdw:link:mfg:part:class:unique */ | ||
32 | dev_set_name(&slave->dev, "sdw:%x:%x:%x:%x:%x", | ||
33 | bus->link_id, id->mfg_id, id->part_id, | ||
34 | id->class_id, id->unique_id); | ||
35 | |||
36 | slave->dev.release = sdw_slave_release; | ||
37 | slave->dev.bus = &sdw_bus_type; | ||
38 | slave->bus = bus; | ||
39 | slave->status = SDW_SLAVE_UNATTACHED; | ||
40 | slave->dev_num = 0; | ||
41 | |||
42 | mutex_lock(&bus->bus_lock); | ||
43 | list_add_tail(&slave->node, &bus->slaves); | ||
44 | mutex_unlock(&bus->bus_lock); | ||
45 | |||
46 | ret = device_register(&slave->dev); | ||
47 | if (ret) { | ||
48 | dev_err(bus->dev, "Failed to add slave: ret %d\n", ret); | ||
49 | |||
50 | /* | ||
51 | * On err, don't free but drop ref as this will be freed | ||
52 | * when release method is invoked. | ||
53 | */ | ||
54 | mutex_lock(&bus->bus_lock); | ||
55 | list_del(&slave->node); | ||
56 | mutex_unlock(&bus->bus_lock); | ||
57 | put_device(&slave->dev); | ||
58 | } | ||
59 | |||
60 | return ret; | ||
61 | } | ||
62 | |||
63 | #if IS_ENABLED(CONFIG_ACPI) | ||
64 | /* | ||
65 | * sdw_acpi_find_slaves() - Find Slave devices in Master ACPI node | ||
66 | * @bus: SDW bus instance | ||
67 | * | ||
68 | * Scans Master ACPI node for SDW child Slave devices and registers them. | ||
69 | */ | ||
70 | int sdw_acpi_find_slaves(struct sdw_bus *bus) | ||
71 | { | ||
72 | struct acpi_device *adev, *parent; | ||
73 | |||
74 | parent = ACPI_COMPANION(bus->dev); | ||
75 | if (!parent) { | ||
76 | dev_err(bus->dev, "Can't find parent for acpi bind\n"); | ||
77 | return -ENODEV; | ||
78 | } | ||
79 | |||
80 | list_for_each_entry(adev, &parent->children, node) { | ||
81 | unsigned long long addr; | ||
82 | struct sdw_slave_id id; | ||
83 | unsigned int link_id; | ||
84 | acpi_status status; | ||
85 | |||
86 | status = acpi_evaluate_integer(adev->handle, | ||
87 | METHOD_NAME__ADR, NULL, &addr); | ||
88 | |||
89 | if (ACPI_FAILURE(status)) { | ||
90 | dev_err(bus->dev, "_ADR resolution failed: %x\n", | ||
91 | status); | ||
92 | return status; | ||
93 | } | ||
94 | |||
95 | /* Extract link id from ADR, bits 51 to 48 (inclusive) */ | ||
96 | link_id = (addr >> 48) & GENMASK(3, 0); | ||
97 | |||
98 | /* Check for link_id match */ | ||
99 | if (link_id != bus->link_id) | ||
100 | continue; | ||
101 | |||
102 | sdw_extract_slave_id(bus, addr, &id); | ||
103 | |||
104 | /* | ||
105 | * don't error check for sdw_slave_add as we want to continue | ||
106 | * adding Slaves | ||
107 | */ | ||
108 | sdw_slave_add(bus, &id, acpi_fwnode_handle(adev)); | ||
109 | } | ||
110 | |||
111 | return 0; | ||
112 | } | ||
113 | |||
114 | #endif | ||
diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c index 48d5327d38d4..8ca549032c27 100644 --- a/drivers/uio/uio_hv_generic.c +++ b/drivers/uio/uio_hv_generic.c | |||
@@ -10,11 +10,13 @@ | |||
10 | * Since the driver does not declare any device ids, you must allocate | 10 | * Since the driver does not declare any device ids, you must allocate |
11 | * id and bind the device to the driver yourself. For example: | 11 | * id and bind the device to the driver yourself. For example: |
12 | * | 12 | * |
13 | * Associate Network GUID with UIO device | ||
13 | * # echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" \ | 14 | * # echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" \ |
14 | * > /sys/bus/vmbus/drivers/uio_hv_generic | 15 | * > /sys/bus/vmbus/drivers/uio_hv_generic/new_id |
15 | * # echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 \ | 16 | * Then rebind |
17 | * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \ | ||
16 | * > /sys/bus/vmbus/drivers/hv_netvsc/unbind | 18 | * > /sys/bus/vmbus/drivers/hv_netvsc/unbind |
17 | * # echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 \ | 19 | * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \ |
18 | * > /sys/bus/vmbus/drivers/uio_hv_generic/bind | 20 | * > /sys/bus/vmbus/drivers/uio_hv_generic/bind |
19 | */ | 21 | */ |
20 | 22 | ||
@@ -37,6 +39,10 @@ | |||
37 | #define DRIVER_AUTHOR "Stephen Hemminger <sthemmin at microsoft.com>" | 39 | #define DRIVER_AUTHOR "Stephen Hemminger <sthemmin at microsoft.com>" |
38 | #define DRIVER_DESC "Generic UIO driver for VMBus devices" | 40 | #define DRIVER_DESC "Generic UIO driver for VMBus devices" |
39 | 41 | ||
42 | #define HV_RING_SIZE 512 /* pages */ | ||
43 | #define SEND_BUFFER_SIZE (15 * 1024 * 1024) | ||
44 | #define RECV_BUFFER_SIZE (15 * 1024 * 1024) | ||
45 | |||
40 | /* | 46 | /* |
41 | * List of resources to be mapped to user space | 47 | * List of resources to be mapped to user space |
42 | * can be extended up to MAX_UIO_MAPS(5) items | 48 | * can be extended up to MAX_UIO_MAPS(5) items |
@@ -45,32 +51,22 @@ enum hv_uio_map { | |||
45 | TXRX_RING_MAP = 0, | 51 | TXRX_RING_MAP = 0, |
46 | INT_PAGE_MAP, | 52 | INT_PAGE_MAP, |
47 | MON_PAGE_MAP, | 53 | MON_PAGE_MAP, |
54 | RECV_BUF_MAP, | ||
55 | SEND_BUF_MAP | ||
48 | }; | 56 | }; |
49 | 57 | ||
50 | #define HV_RING_SIZE 512 | ||
51 | |||
52 | struct hv_uio_private_data { | 58 | struct hv_uio_private_data { |
53 | struct uio_info info; | 59 | struct uio_info info; |
54 | struct hv_device *device; | 60 | struct hv_device *device; |
55 | }; | ||
56 | |||
57 | static int | ||
58 | hv_uio_mmap(struct uio_info *info, struct vm_area_struct *vma) | ||
59 | { | ||
60 | int mi; | ||
61 | 61 | ||
62 | if (vma->vm_pgoff >= MAX_UIO_MAPS) | 62 | void *recv_buf; |
63 | return -EINVAL; | 63 | u32 recv_gpadl; |
64 | char recv_name[32]; /* "recv_4294967295" */ | ||
64 | 65 | ||
65 | if (info->mem[vma->vm_pgoff].size == 0) | 66 | void *send_buf; |
66 | return -EINVAL; | 67 | u32 send_gpadl; |
67 | 68 | char send_name[32]; | |
68 | mi = (int)vma->vm_pgoff; | 69 | }; |
69 | |||
70 | return remap_pfn_range(vma, vma->vm_start, | ||
71 | info->mem[mi].addr >> PAGE_SHIFT, | ||
72 | vma->vm_end - vma->vm_start, vma->vm_page_prot); | ||
73 | } | ||
74 | 70 | ||
75 | /* | 71 | /* |
76 | * This is the irqcontrol callback to be registered to uio_info. | 72 | * This is the irqcontrol callback to be registered to uio_info. |
@@ -107,6 +103,36 @@ static void hv_uio_channel_cb(void *context) | |||
107 | uio_event_notify(&pdata->info); | 103 | uio_event_notify(&pdata->info); |
108 | } | 104 | } |
109 | 105 | ||
106 | /* | ||
107 | * Callback from vmbus_event when channel is rescinded. | ||
108 | */ | ||
109 | static void hv_uio_rescind(struct vmbus_channel *channel) | ||
110 | { | ||
111 | struct hv_device *hv_dev = channel->primary_channel->device_obj; | ||
112 | struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); | ||
113 | |||
114 | /* | ||
115 | * Turn off the interrupt file handle | ||
116 | * Next read for event will return -EIO | ||
117 | */ | ||
118 | pdata->info.irq = 0; | ||
119 | |||
120 | /* Wake up reader */ | ||
121 | uio_event_notify(&pdata->info); | ||
122 | } | ||
123 | |||
124 | static void | ||
125 | hv_uio_cleanup(struct hv_device *dev, struct hv_uio_private_data *pdata) | ||
126 | { | ||
127 | if (pdata->send_gpadl) | ||
128 | vmbus_teardown_gpadl(dev->channel, pdata->send_gpadl); | ||
129 | vfree(pdata->send_buf); | ||
130 | |||
131 | if (pdata->recv_gpadl) | ||
132 | vmbus_teardown_gpadl(dev->channel, pdata->recv_gpadl); | ||
133 | vfree(pdata->recv_buf); | ||
134 | } | ||
135 | |||
110 | static int | 136 | static int |
111 | hv_uio_probe(struct hv_device *dev, | 137 | hv_uio_probe(struct hv_device *dev, |
112 | const struct hv_vmbus_device_id *dev_id) | 138 | const struct hv_vmbus_device_id *dev_id) |
@@ -124,36 +150,82 @@ hv_uio_probe(struct hv_device *dev, | |||
124 | if (ret) | 150 | if (ret) |
125 | goto fail; | 151 | goto fail; |
126 | 152 | ||
153 | /* Communicating with host has to be via shared memory not hypercall */ | ||
154 | if (!dev->channel->offermsg.monitor_allocated) { | ||
155 | dev_err(&dev->device, "vmbus channel requires hypercall\n"); | ||
156 | ret = -ENOTSUPP; | ||
157 | goto fail_close; | ||
158 | } | ||
159 | |||
127 | dev->channel->inbound.ring_buffer->interrupt_mask = 1; | 160 | dev->channel->inbound.ring_buffer->interrupt_mask = 1; |
128 | set_channel_read_mode(dev->channel, HV_CALL_DIRECT); | 161 | set_channel_read_mode(dev->channel, HV_CALL_ISR); |
129 | 162 | ||
130 | /* Fill general uio info */ | 163 | /* Fill general uio info */ |
131 | pdata->info.name = "uio_hv_generic"; | 164 | pdata->info.name = "uio_hv_generic"; |
132 | pdata->info.version = DRIVER_VERSION; | 165 | pdata->info.version = DRIVER_VERSION; |
133 | pdata->info.irqcontrol = hv_uio_irqcontrol; | 166 | pdata->info.irqcontrol = hv_uio_irqcontrol; |
134 | pdata->info.mmap = hv_uio_mmap; | ||
135 | pdata->info.irq = UIO_IRQ_CUSTOM; | 167 | pdata->info.irq = UIO_IRQ_CUSTOM; |
136 | 168 | ||
137 | /* mem resources */ | 169 | /* mem resources */ |
138 | pdata->info.mem[TXRX_RING_MAP].name = "txrx_rings"; | 170 | pdata->info.mem[TXRX_RING_MAP].name = "txrx_rings"; |
139 | pdata->info.mem[TXRX_RING_MAP].addr | 171 | pdata->info.mem[TXRX_RING_MAP].addr |
140 | = virt_to_phys(dev->channel->ringbuffer_pages); | 172 | = (uintptr_t)dev->channel->ringbuffer_pages; |
141 | pdata->info.mem[TXRX_RING_MAP].size | 173 | pdata->info.mem[TXRX_RING_MAP].size |
142 | = dev->channel->ringbuffer_pagecount * PAGE_SIZE; | 174 | = dev->channel->ringbuffer_pagecount << PAGE_SHIFT; |
143 | pdata->info.mem[TXRX_RING_MAP].memtype = UIO_MEM_LOGICAL; | 175 | pdata->info.mem[TXRX_RING_MAP].memtype = UIO_MEM_LOGICAL; |
144 | 176 | ||
145 | pdata->info.mem[INT_PAGE_MAP].name = "int_page"; | 177 | pdata->info.mem[INT_PAGE_MAP].name = "int_page"; |
146 | pdata->info.mem[INT_PAGE_MAP].addr = | 178 | pdata->info.mem[INT_PAGE_MAP].addr |
147 | virt_to_phys(vmbus_connection.int_page); | 179 | = (uintptr_t)vmbus_connection.int_page; |
148 | pdata->info.mem[INT_PAGE_MAP].size = PAGE_SIZE; | 180 | pdata->info.mem[INT_PAGE_MAP].size = PAGE_SIZE; |
149 | pdata->info.mem[INT_PAGE_MAP].memtype = UIO_MEM_LOGICAL; | 181 | pdata->info.mem[INT_PAGE_MAP].memtype = UIO_MEM_LOGICAL; |
150 | 182 | ||
151 | pdata->info.mem[MON_PAGE_MAP].name = "monitor_pages"; | 183 | pdata->info.mem[MON_PAGE_MAP].name = "monitor_page"; |
152 | pdata->info.mem[MON_PAGE_MAP].addr = | 184 | pdata->info.mem[MON_PAGE_MAP].addr |
153 | virt_to_phys(vmbus_connection.monitor_pages[1]); | 185 | = (uintptr_t)vmbus_connection.monitor_pages[1]; |
154 | pdata->info.mem[MON_PAGE_MAP].size = PAGE_SIZE; | 186 | pdata->info.mem[MON_PAGE_MAP].size = PAGE_SIZE; |
155 | pdata->info.mem[MON_PAGE_MAP].memtype = UIO_MEM_LOGICAL; | 187 | pdata->info.mem[MON_PAGE_MAP].memtype = UIO_MEM_LOGICAL; |
156 | 188 | ||
189 | pdata->recv_buf = vzalloc(RECV_BUFFER_SIZE); | ||
190 | if (pdata->recv_buf == NULL) { | ||
191 | ret = -ENOMEM; | ||
192 | goto fail_close; | ||
193 | } | ||
194 | |||
195 | ret = vmbus_establish_gpadl(dev->channel, pdata->recv_buf, | ||
196 | RECV_BUFFER_SIZE, &pdata->recv_gpadl); | ||
197 | if (ret) | ||
198 | goto fail_close; | ||
199 | |||
200 | /* put Global Physical Address Label in name */ | ||
201 | snprintf(pdata->recv_name, sizeof(pdata->recv_name), | ||
202 | "recv:%u", pdata->recv_gpadl); | ||
203 | pdata->info.mem[RECV_BUF_MAP].name = pdata->recv_name; | ||
204 | pdata->info.mem[RECV_BUF_MAP].addr | ||
205 | = (uintptr_t)pdata->recv_buf; | ||
206 | pdata->info.mem[RECV_BUF_MAP].size = RECV_BUFFER_SIZE; | ||
207 | pdata->info.mem[RECV_BUF_MAP].memtype = UIO_MEM_VIRTUAL; | ||
208 | |||
209 | |||
210 | pdata->send_buf = vzalloc(SEND_BUFFER_SIZE); | ||
211 | if (pdata->send_buf == NULL) { | ||
212 | ret = -ENOMEM; | ||
213 | goto fail_close; | ||
214 | } | ||
215 | |||
216 | ret = vmbus_establish_gpadl(dev->channel, pdata->send_buf, | ||
217 | SEND_BUFFER_SIZE, &pdata->send_gpadl); | ||
218 | if (ret) | ||
219 | goto fail_close; | ||
220 | |||
221 | snprintf(pdata->send_name, sizeof(pdata->send_name), | ||
222 | "send:%u", pdata->send_gpadl); | ||
223 | pdata->info.mem[SEND_BUF_MAP].name = pdata->send_name; | ||
224 | pdata->info.mem[SEND_BUF_MAP].addr | ||
225 | = (uintptr_t)pdata->send_buf; | ||
226 | pdata->info.mem[SEND_BUF_MAP].size = SEND_BUFFER_SIZE; | ||
227 | pdata->info.mem[SEND_BUF_MAP].memtype = UIO_MEM_VIRTUAL; | ||
228 | |||
157 | pdata->info.priv = pdata; | 229 | pdata->info.priv = pdata; |
158 | pdata->device = dev; | 230 | pdata->device = dev; |
159 | 231 | ||
@@ -163,11 +235,14 @@ hv_uio_probe(struct hv_device *dev, | |||
163 | goto fail_close; | 235 | goto fail_close; |
164 | } | 236 | } |
165 | 237 | ||
238 | vmbus_set_chn_rescind_callback(dev->channel, hv_uio_rescind); | ||
239 | |||
166 | hv_set_drvdata(dev, pdata); | 240 | hv_set_drvdata(dev, pdata); |
167 | 241 | ||
168 | return 0; | 242 | return 0; |
169 | 243 | ||
170 | fail_close: | 244 | fail_close: |
245 | hv_uio_cleanup(dev, pdata); | ||
171 | vmbus_close(dev->channel); | 246 | vmbus_close(dev->channel); |
172 | fail: | 247 | fail: |
173 | kfree(pdata); | 248 | kfree(pdata); |
@@ -184,6 +259,7 @@ hv_uio_remove(struct hv_device *dev) | |||
184 | return 0; | 259 | return 0; |
185 | 260 | ||
186 | uio_unregister_device(&pdata->info); | 261 | uio_unregister_device(&pdata->info); |
262 | hv_uio_cleanup(dev, pdata); | ||
187 | hv_set_drvdata(dev, NULL); | 263 | hv_set_drvdata(dev, NULL); |
188 | vmbus_close(dev->channel); | 264 | vmbus_close(dev->channel); |
189 | kfree(pdata); | 265 | kfree(pdata); |
diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig index 99ebdde590f8..8d9cdfbd6bcc 100644 --- a/drivers/virt/Kconfig +++ b/drivers/virt/Kconfig | |||
@@ -30,4 +30,5 @@ config FSL_HV_MANAGER | |||
30 | 4) A kernel interface for receiving callbacks when a managed | 30 | 4) A kernel interface for receiving callbacks when a managed |
31 | partition shuts down. | 31 | partition shuts down. |
32 | 32 | ||
33 | source "drivers/virt/vboxguest/Kconfig" | ||
33 | endif | 34 | endif |
diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile index c47f04dd343b..d3f7b2540890 100644 --- a/drivers/virt/Makefile +++ b/drivers/virt/Makefile | |||
@@ -3,3 +3,4 @@ | |||
3 | # | 3 | # |
4 | 4 | ||
5 | obj-$(CONFIG_FSL_HV_MANAGER) += fsl_hypervisor.o | 5 | obj-$(CONFIG_FSL_HV_MANAGER) += fsl_hypervisor.o |
6 | obj-y += vboxguest/ | ||
diff --git a/drivers/virt/vboxguest/Kconfig b/drivers/virt/vboxguest/Kconfig new file mode 100644 index 000000000000..fffd318a10fe --- /dev/null +++ b/drivers/virt/vboxguest/Kconfig | |||
@@ -0,0 +1,18 @@ | |||
1 | config VBOXGUEST | ||
2 | tristate "Virtual Box Guest integration support" | ||
3 | depends on X86 && PCI && INPUT | ||
4 | help | ||
5 | This is a driver for the Virtual Box Guest PCI device used in | ||
6 | Virtual Box virtual machines. Enabling this driver will add | ||
7 | support for Virtual Box Guest integration features such as | ||
8 | copy-and-paste, seamless mode and OpenGL pass-through. | ||
9 | |||
10 | This driver also offers vboxguest IPC functionality which is needed | ||
11 | for the vboxfs driver which offers folder sharing support. | ||
12 | |||
13 | If you enable this driver you should also enable the VBOXVIDEO option. | ||
14 | |||
15 | Although it is possible to build this module in, it is advised | ||
16 | to build this driver as a module, so that it can be updated | ||
17 | independently of the kernel. Select M to build this driver as a | ||
18 | module. | ||
diff --git a/drivers/virt/vboxguest/Makefile b/drivers/virt/vboxguest/Makefile new file mode 100644 index 000000000000..203b8f465817 --- /dev/null +++ b/drivers/virt/vboxguest/Makefile | |||
@@ -0,0 +1,3 @@ | |||
1 | vboxguest-y := vboxguest_linux.o vboxguest_core.o vboxguest_utils.o | ||
2 | |||
3 | obj-$(CONFIG_VBOXGUEST) += vboxguest.o | ||
diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c new file mode 100644 index 000000000000..190dbf8cfcb5 --- /dev/null +++ b/drivers/virt/vboxguest/vboxguest_core.c | |||
@@ -0,0 +1,1571 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* | ||
3 | * vboxguest core guest-device handling code, VBoxGuest.cpp in upstream svn. | ||
4 | * | ||
5 | * Copyright (C) 2007-2016 Oracle Corporation | ||
6 | */ | ||
7 | |||
8 | #include <linux/device.h> | ||
9 | #include <linux/mm.h> | ||
10 | #include <linux/sched.h> | ||
11 | #include <linux/sizes.h> | ||
12 | #include <linux/slab.h> | ||
13 | #include <linux/vbox_err.h> | ||
14 | #include <linux/vbox_utils.h> | ||
15 | #include <linux/vmalloc.h> | ||
16 | #include "vboxguest_core.h" | ||
17 | #include "vboxguest_version.h" | ||
18 | |||
19 | /* Get the pointer to the first HGCM parameter. */ | ||
20 | #define VBG_IOCTL_HGCM_CALL_PARMS(a) \ | ||
21 | ((struct vmmdev_hgcm_function_parameter *)( \ | ||
22 | (u8 *)(a) + sizeof(struct vbg_ioctl_hgcm_call))) | ||
23 | /* Get the pointer to the first HGCM parameter in a 32-bit request. */ | ||
24 | #define VBG_IOCTL_HGCM_CALL_PARMS32(a) \ | ||
25 | ((struct vmmdev_hgcm_function_parameter32 *)( \ | ||
26 | (u8 *)(a) + sizeof(struct vbg_ioctl_hgcm_call))) | ||
27 | |||
28 | #define GUEST_MAPPINGS_TRIES 5 | ||
29 | |||
30 | /** | ||
31 | * Reserves memory in which the VMM can relocate any guest mappings | ||
32 | * that are floating around. | ||
33 | * | ||
34 | * This operation is a little bit tricky since the VMM might not accept | ||
35 | * just any address because of address clashes between the three contexts | ||
36 | * it operates in, so we try several times. | ||
37 | * | ||
38 | * Failure to reserve the guest mappings is ignored. | ||
39 | * | ||
40 | * @gdev: The Guest extension device. | ||
41 | */ | ||
42 | static void vbg_guest_mappings_init(struct vbg_dev *gdev) | ||
43 | { | ||
44 | struct vmmdev_hypervisorinfo *req; | ||
45 | void *guest_mappings[GUEST_MAPPINGS_TRIES]; | ||
46 | struct page **pages = NULL; | ||
47 | u32 size, hypervisor_size; | ||
48 | int i, rc; | ||
49 | |||
50 | /* Query the required space. */ | ||
51 | req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_GET_HYPERVISOR_INFO); | ||
52 | if (!req) | ||
53 | return; | ||
54 | |||
55 | req->hypervisor_start = 0; | ||
56 | req->hypervisor_size = 0; | ||
57 | rc = vbg_req_perform(gdev, req); | ||
58 | if (rc < 0) | ||
59 | goto out; | ||
60 | |||
61 | /* | ||
62 | * The VMM will report back if there is nothing it wants to map, like | ||
63 | * for instance in VT-x and AMD-V mode. | ||
64 | */ | ||
65 | if (req->hypervisor_size == 0) | ||
66 | goto out; | ||
67 | |||
68 | hypervisor_size = req->hypervisor_size; | ||
69 | /* Add 4M so that we can align the vmap to 4MiB as the host requires. */ | ||
70 | size = PAGE_ALIGN(req->hypervisor_size) + SZ_4M; | ||
71 | |||
72 | pages = kmalloc(sizeof(*pages) * (size >> PAGE_SHIFT), GFP_KERNEL); | ||
73 | if (!pages) | ||
74 | goto out; | ||
75 | |||
76 | gdev->guest_mappings_dummy_page = alloc_page(GFP_HIGHUSER); | ||
77 | if (!gdev->guest_mappings_dummy_page) | ||
78 | goto out; | ||
79 | |||
80 | for (i = 0; i < (size >> PAGE_SHIFT); i++) | ||
81 | pages[i] = gdev->guest_mappings_dummy_page; | ||
82 | |||
83 | /* | ||
84 | * Try several times, the VMM might not accept some addresses because | ||
85 | * of address clashes between the three contexts. | ||
86 | */ | ||
87 | for (i = 0; i < GUEST_MAPPINGS_TRIES; i++) { | ||
88 | guest_mappings[i] = vmap(pages, (size >> PAGE_SHIFT), | ||
89 | VM_MAP, PAGE_KERNEL_RO); | ||
90 | if (!guest_mappings[i]) | ||
91 | break; | ||
92 | |||
93 | req->header.request_type = VMMDEVREQ_SET_HYPERVISOR_INFO; | ||
94 | req->header.rc = VERR_INTERNAL_ERROR; | ||
95 | req->hypervisor_size = hypervisor_size; | ||
96 | req->hypervisor_start = | ||
97 | (unsigned long)PTR_ALIGN(guest_mappings[i], SZ_4M); | ||
98 | |||
99 | rc = vbg_req_perform(gdev, req); | ||
100 | if (rc >= 0) { | ||
101 | gdev->guest_mappings = guest_mappings[i]; | ||
102 | break; | ||
103 | } | ||
104 | } | ||
105 | |||
106 | /* Free vmap's from failed attempts. */ | ||
107 | while (--i >= 0) | ||
108 | vunmap(guest_mappings[i]); | ||
109 | |||
110 | /* On failure free the dummy-page backing the vmap */ | ||
111 | if (!gdev->guest_mappings) { | ||
112 | __free_page(gdev->guest_mappings_dummy_page); | ||
113 | gdev->guest_mappings_dummy_page = NULL; | ||
114 | } | ||
115 | |||
116 | out: | ||
117 | kfree(req); | ||
118 | kfree(pages); | ||
119 | } | ||
120 | |||
121 | /** | ||
122 | * Undo what vbg_guest_mappings_init did. | ||
123 | * | ||
124 | * @gdev: The Guest extension device. | ||
125 | */ | ||
126 | static void vbg_guest_mappings_exit(struct vbg_dev *gdev) | ||
127 | { | ||
128 | struct vmmdev_hypervisorinfo *req; | ||
129 | int rc; | ||
130 | |||
131 | if (!gdev->guest_mappings) | ||
132 | return; | ||
133 | |||
134 | /* | ||
135 | * Tell the host that we're going to free the memory we reserved for | ||
136 | * it, then free it up. (Leak the memory if anything goes wrong here.) | ||
137 | */ | ||
138 | req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_HYPERVISOR_INFO); | ||
139 | if (!req) | ||
140 | return; | ||
141 | |||
142 | req->hypervisor_start = 0; | ||
143 | req->hypervisor_size = 0; | ||
144 | |||
145 | rc = vbg_req_perform(gdev, req); | ||
146 | |||
147 | kfree(req); | ||
148 | |||
149 | if (rc < 0) { | ||
150 | vbg_err("%s error: %d\n", __func__, rc); | ||
151 | return; | ||
152 | } | ||
153 | |||
154 | vunmap(gdev->guest_mappings); | ||
155 | gdev->guest_mappings = NULL; | ||
156 | |||
157 | __free_page(gdev->guest_mappings_dummy_page); | ||
158 | gdev->guest_mappings_dummy_page = NULL; | ||
159 | } | ||
160 | |||
161 | /** | ||
162 | * Report the guest information to the host. | ||
163 | * Return: 0 or negative errno value. | ||
164 | * @gdev: The Guest extension device. | ||
165 | */ | ||
166 | static int vbg_report_guest_info(struct vbg_dev *gdev) | ||
167 | { | ||
168 | /* | ||
169 | * Allocate and fill in the two guest info reports. | ||
170 | */ | ||
171 | struct vmmdev_guest_info *req1 = NULL; | ||
172 | struct vmmdev_guest_info2 *req2 = NULL; | ||
173 | int rc, ret = -ENOMEM; | ||
174 | |||
175 | req1 = vbg_req_alloc(sizeof(*req1), VMMDEVREQ_REPORT_GUEST_INFO); | ||
176 | req2 = vbg_req_alloc(sizeof(*req2), VMMDEVREQ_REPORT_GUEST_INFO2); | ||
177 | if (!req1 || !req2) | ||
178 | goto out_free; | ||
179 | |||
180 | req1->interface_version = VMMDEV_VERSION; | ||
181 | req1->os_type = VMMDEV_OSTYPE_LINUX26; | ||
182 | #if __BITS_PER_LONG == 64 | ||
183 | req1->os_type |= VMMDEV_OSTYPE_X64; | ||
184 | #endif | ||
185 | |||
186 | req2->additions_major = VBG_VERSION_MAJOR; | ||
187 | req2->additions_minor = VBG_VERSION_MINOR; | ||
188 | req2->additions_build = VBG_VERSION_BUILD; | ||
189 | req2->additions_revision = VBG_SVN_REV; | ||
190 | /* (no features defined yet) */ | ||
191 | req2->additions_features = 0; | ||
192 | strlcpy(req2->name, VBG_VERSION_STRING, | ||
193 | sizeof(req2->name)); | ||
194 | |||
195 | /* | ||
196 | * There are two protocols here: | ||
197 | * 1. INFO2 + INFO1. Supported by >=3.2.51. | ||
198 | * 2. INFO1 and optionally INFO2. The old protocol. | ||
199 | * | ||
200 | * We try protocol 2 first. It will fail with VERR_NOT_SUPPORTED | ||
201 | * if not supported by the VMMDev (message ordering requirement). | ||
202 | */ | ||
203 | rc = vbg_req_perform(gdev, req2); | ||
204 | if (rc >= 0) { | ||
205 | rc = vbg_req_perform(gdev, req1); | ||
206 | } else if (rc == VERR_NOT_SUPPORTED || rc == VERR_NOT_IMPLEMENTED) { | ||
207 | rc = vbg_req_perform(gdev, req1); | ||
208 | if (rc >= 0) { | ||
209 | rc = vbg_req_perform(gdev, req2); | ||
210 | if (rc == VERR_NOT_IMPLEMENTED) | ||
211 | rc = VINF_SUCCESS; | ||
212 | } | ||
213 | } | ||
214 | ret = vbg_status_code_to_errno(rc); | ||
215 | |||
216 | out_free: | ||
217 | kfree(req2); | ||
218 | kfree(req1); | ||
219 | return ret; | ||
220 | } | ||
221 | |||
222 | /** | ||
223 | * Report the guest driver status to the host. | ||
224 | * Return: 0 or negative errno value. | ||
225 | * @gdev: The Guest extension device. | ||
226 | * @active: Flag whether the driver is now active or not. | ||
227 | */ | ||
228 | static int vbg_report_driver_status(struct vbg_dev *gdev, bool active) | ||
229 | { | ||
230 | struct vmmdev_guest_status *req; | ||
231 | int rc; | ||
232 | |||
233 | req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_REPORT_GUEST_STATUS); | ||
234 | if (!req) | ||
235 | return -ENOMEM; | ||
236 | |||
237 | req->facility = VBOXGUEST_FACILITY_TYPE_VBOXGUEST_DRIVER; | ||
238 | if (active) | ||
239 | req->status = VBOXGUEST_FACILITY_STATUS_ACTIVE; | ||
240 | else | ||
241 | req->status = VBOXGUEST_FACILITY_STATUS_INACTIVE; | ||
242 | req->flags = 0; | ||
243 | |||
244 | rc = vbg_req_perform(gdev, req); | ||
245 | if (rc == VERR_NOT_IMPLEMENTED) /* Compatibility with older hosts. */ | ||
246 | rc = VINF_SUCCESS; | ||
247 | |||
248 | kfree(req); | ||
249 | |||
250 | return vbg_status_code_to_errno(rc); | ||
251 | } | ||
252 | |||
253 | /** | ||
254 | * Inflate the balloon by one chunk. The caller owns the balloon mutex. | ||
255 | * Return: 0 or negative errno value. | ||
256 | * @gdev: The Guest extension device. | ||
257 | * @chunk_idx: Index of the chunk. | ||
258 | */ | ||
259 | static int vbg_balloon_inflate(struct vbg_dev *gdev, u32 chunk_idx) | ||
260 | { | ||
261 | struct vmmdev_memballoon_change *req = gdev->mem_balloon.change_req; | ||
262 | struct page **pages; | ||
263 | int i, rc, ret; | ||
264 | |||
265 | pages = kmalloc(sizeof(*pages) * VMMDEV_MEMORY_BALLOON_CHUNK_PAGES, | ||
266 | GFP_KERNEL | __GFP_NOWARN); | ||
267 | if (!pages) | ||
268 | return -ENOMEM; | ||
269 | |||
270 | req->header.size = sizeof(*req); | ||
271 | req->inflate = true; | ||
272 | req->pages = VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; | ||
273 | |||
274 | for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++) { | ||
275 | pages[i] = alloc_page(GFP_KERNEL | __GFP_NOWARN); | ||
276 | if (!pages[i]) { | ||
277 | ret = -ENOMEM; | ||
278 | goto out_error; | ||
279 | } | ||
280 | |||
281 | req->phys_page[i] = page_to_phys(pages[i]); | ||
282 | } | ||
283 | |||
284 | rc = vbg_req_perform(gdev, req); | ||
285 | if (rc < 0) { | ||
286 | vbg_err("%s error, rc: %d\n", __func__, rc); | ||
287 | ret = vbg_status_code_to_errno(rc); | ||
288 | goto out_error; | ||
289 | } | ||
290 | |||
291 | gdev->mem_balloon.pages[chunk_idx] = pages; | ||
292 | |||
293 | return 0; | ||
294 | |||
295 | out_error: | ||
296 | while (--i >= 0) | ||
297 | __free_page(pages[i]); | ||
298 | kfree(pages); | ||
299 | |||
300 | return ret; | ||
301 | } | ||
302 | |||
303 | /** | ||
304 | * Deflate the balloon by one chunk. The caller owns the balloon mutex. | ||
305 | * Return: 0 or negative errno value. | ||
306 | * @gdev: The Guest extension device. | ||
307 | * @chunk_idx: Index of the chunk. | ||
308 | */ | ||
309 | static int vbg_balloon_deflate(struct vbg_dev *gdev, u32 chunk_idx) | ||
310 | { | ||
311 | struct vmmdev_memballoon_change *req = gdev->mem_balloon.change_req; | ||
312 | struct page **pages = gdev->mem_balloon.pages[chunk_idx]; | ||
313 | int i, rc; | ||
314 | |||
315 | req->header.size = sizeof(*req); | ||
316 | req->inflate = false; | ||
317 | req->pages = VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; | ||
318 | |||
319 | for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++) | ||
320 | req->phys_page[i] = page_to_phys(pages[i]); | ||
321 | |||
322 | rc = vbg_req_perform(gdev, req); | ||
323 | if (rc < 0) { | ||
324 | vbg_err("%s error, rc: %d\n", __func__, rc); | ||
325 | return vbg_status_code_to_errno(rc); | ||
326 | } | ||
327 | |||
328 | for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++) | ||
329 | __free_page(pages[i]); | ||
330 | kfree(pages); | ||
331 | gdev->mem_balloon.pages[chunk_idx] = NULL; | ||
332 | |||
333 | return 0; | ||
334 | } | ||
335 | |||
/**
 * Respond to VMMDEV_EVENT_BALLOON_CHANGE_REQUEST events, query the size
 * the host wants the balloon to be and adjust accordingly.
 */
static void vbg_balloon_work(struct work_struct *work)
{
	struct vbg_dev *gdev =
		container_of(work, struct vbg_dev, mem_balloon.work);
	struct vmmdev_memballoon_info *req = gdev->mem_balloon.get_req;
	u32 i, chunks;
	int rc, ret;

	/*
	 * Setting this bit means that we request the value from the host and
	 * change the guest memory balloon according to the returned value.
	 */
	req->event_ack = VMMDEV_EVENT_BALLOON_CHANGE_REQUEST;
	rc = vbg_req_perform(gdev, req);
	if (rc < 0) {
		vbg_err("%s error, rc: %d\n", __func__, rc);
		return;
	}

	/*
	 * The host always returns the same maximum amount of chunks, so
	 * we do this once.
	 */
	if (!gdev->mem_balloon.max_chunks) {
		gdev->mem_balloon.pages =
			devm_kcalloc(gdev->dev, req->phys_mem_chunks,
				     sizeof(struct page **), GFP_KERNEL);
		if (!gdev->mem_balloon.pages)
			return;

		gdev->mem_balloon.max_chunks = req->phys_mem_chunks;
	}

	chunks = req->balloon_chunks;
	if (chunks > gdev->mem_balloon.max_chunks) {
		vbg_err("%s: illegal balloon size %u (max=%u)\n",
			__func__, chunks, gdev->mem_balloon.max_chunks);
		return;
	}

	if (chunks > gdev->mem_balloon.chunks) {
		/* inflate */
		for (i = gdev->mem_balloon.chunks; i < chunks; i++) {
			ret = vbg_balloon_inflate(gdev, i);
			if (ret < 0)
				return;

			gdev->mem_balloon.chunks++;
		}
	} else {
		/* deflate */
		for (i = gdev->mem_balloon.chunks; i-- > chunks;) {
			ret = vbg_balloon_deflate(gdev, i);
			if (ret < 0)
				return;

			gdev->mem_balloon.chunks--;
		}
	}
}

/**
 * Callback for heartbeat timer.
 */
static void vbg_heartbeat_timer(struct timer_list *t)
{
	struct vbg_dev *gdev = from_timer(gdev, t, heartbeat_timer);

	vbg_req_perform(gdev, gdev->guest_heartbeat_req);
	mod_timer(&gdev->heartbeat_timer,
		  msecs_to_jiffies(gdev->heartbeat_interval_ms));
}

/**
 * Configure the host to check guest's heartbeat
 * and get heartbeat interval from the host.
 * @gdev:    The Guest extension device.
 * @enabled: Set true to enable guest heartbeat checks on host.
 * Return: 0 or negative errno value.
 */
static int vbg_heartbeat_host_config(struct vbg_dev *gdev, bool enabled)
{
	struct vmmdev_heartbeat *req;
	int rc;

	req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_HEARTBEAT_CONFIGURE);
	if (!req)
		return -ENOMEM;

	req->enabled = enabled;
	req->interval_ns = 0;
	rc = vbg_req_perform(gdev, req);
	do_div(req->interval_ns, 1000000); /* ns -> ms */
	gdev->heartbeat_interval_ms = req->interval_ns;
	kfree(req);

	return vbg_status_code_to_errno(rc);
}

/**
 * Initializes the heartbeat timer. This feature may be disabled by the host.
 * @gdev: The Guest extension device.
 * Return: 0 or negative errno value.
 */
static int vbg_heartbeat_init(struct vbg_dev *gdev)
{
	int ret;

	/* Make sure that heartbeat checking is disabled if we fail. */
	ret = vbg_heartbeat_host_config(gdev, false);
	if (ret < 0)
		return ret;

	ret = vbg_heartbeat_host_config(gdev, true);
	if (ret < 0)
		return ret;

	/*
	 * Preallocate the request to use it from the timer callback because:
	 *    1) on Windows vbg_req_alloc must be called at IRQL <= APC_LEVEL
	 *       and the timer callback runs at DISPATCH_LEVEL;
	 *    2) avoid repeated allocations.
	 */
	gdev->guest_heartbeat_req = vbg_req_alloc(
					sizeof(*gdev->guest_heartbeat_req),
					VMMDEVREQ_GUEST_HEARTBEAT);
	if (!gdev->guest_heartbeat_req)
		return -ENOMEM;

	vbg_info("%s: Setting up heartbeat to trigger every %d milliseconds\n",
		 __func__, gdev->heartbeat_interval_ms);
	mod_timer(&gdev->heartbeat_timer, 0);

	return 0;
}

/**
 * Cleanup heartbeat code, stop HB timer and disable host heartbeat checking.
 * @gdev: The Guest extension device.
 */
static void vbg_heartbeat_exit(struct vbg_dev *gdev)
{
	del_timer_sync(&gdev->heartbeat_timer);
	vbg_heartbeat_host_config(gdev, false);
	kfree(gdev->guest_heartbeat_req);
}

/**
 * Applies a change to the bit usage tracker.
 * @tracker:  The bit usage tracker.
 * @changed:  The bits to change.
 * @previous: The previous value of the bits.
 * Return: true if the mask changed, false if not.
 */
static bool vbg_track_bit_usage(struct vbg_bit_usage_tracker *tracker,
				u32 changed, u32 previous)
{
	bool global_change = false;

	while (changed) {
		u32 bit = ffs(changed) - 1;
		u32 bitmask = BIT(bit);

		if (bitmask & previous) {
			tracker->per_bit_usage[bit] -= 1;
			if (tracker->per_bit_usage[bit] == 0) {
				global_change = true;
				tracker->mask &= ~bitmask;
			}
		} else {
			tracker->per_bit_usage[bit] += 1;
			if (tracker->per_bit_usage[bit] == 1) {
				global_change = true;
				tracker->mask |= bitmask;
			}
		}

		changed &= ~bitmask;
	}

	return global_change;
}

/**
 * Init and termination worker for resetting the event filter on the host.
 * @gdev:         The Guest extension device.
 * @fixed_events: Fixed events (init time).
 * Return: 0 or negative errno value.
 */
static int vbg_reset_host_event_filter(struct vbg_dev *gdev,
				       u32 fixed_events)
{
	struct vmmdev_mask *req;
	int rc;

	req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_CTL_GUEST_FILTER_MASK);
	if (!req)
		return -ENOMEM;

	req->not_mask = U32_MAX & ~fixed_events;
	req->or_mask = fixed_events;
	rc = vbg_req_perform(gdev, req);
	if (rc < 0)
		vbg_err("%s error, rc: %d\n", __func__, rc);

	kfree(req);
	return vbg_status_code_to_errno(rc);
}

/**
 * Changes the event filter mask for the given session.
 *
 * This is called in response to VBG_IOCTL_CHANGE_FILTER_MASK as well as to
 * do session cleanup. Takes the session mutex.
 *
 * @gdev:                The Guest extension device.
 * @session:             The session.
 * @or_mask:             The events to add.
 * @not_mask:            The events to remove.
 * @session_termination: Set if we're called by the session cleanup code.
 *                       This tweaks the error handling so we perform
 *                       proper session cleanup even if the host
 *                       misbehaves.
 * Return: 0 or negative errno value.
 */
static int vbg_set_session_event_filter(struct vbg_dev *gdev,
					struct vbg_session *session,
					u32 or_mask, u32 not_mask,
					bool session_termination)
{
	struct vmmdev_mask *req;
	u32 changed, previous;
	int rc, ret = 0;

	/* Allocate a request buffer before taking the mutex */
	req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_CTL_GUEST_FILTER_MASK);
	if (!req) {
		if (!session_termination)
			return -ENOMEM;
		/* Ignore allocation failure, we must do session cleanup. */
	}

	mutex_lock(&gdev->session_mutex);

	/* Apply the changes to the session mask. */
	previous = session->event_filter;
	session->event_filter |= or_mask;
	session->event_filter &= ~not_mask;

	/* If anything actually changed, update the global usage counters. */
	changed = previous ^ session->event_filter;
	if (!changed)
		goto out;

	vbg_track_bit_usage(&gdev->event_filter_tracker, changed, previous);
	or_mask = gdev->fixed_events | gdev->event_filter_tracker.mask;

	if (gdev->event_filter_host == or_mask || !req)
		goto out;

	gdev->event_filter_host = or_mask;
	req->or_mask = or_mask;
	req->not_mask = ~or_mask;
	rc = vbg_req_perform(gdev, req);
	if (rc < 0) {
		ret = vbg_status_code_to_errno(rc);

		/* Failed, roll back (unless it's session termination time). */
		gdev->event_filter_host = U32_MAX;
		if (session_termination)
			goto out;

		vbg_track_bit_usage(&gdev->event_filter_tracker, changed,
				    session->event_filter);
		session->event_filter = previous;
	}

out:
	mutex_unlock(&gdev->session_mutex);
	kfree(req);

	return ret;
}

/**
 * Init and termination worker for setting the guest capabilities to zero
 * on the host.
 * @gdev: The Guest extension device.
 * Return: 0 or negative errno value.
 */
static int vbg_reset_host_capabilities(struct vbg_dev *gdev)
{
	struct vmmdev_mask *req;
	int rc;

	req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_GUEST_CAPABILITIES);
	if (!req)
		return -ENOMEM;

	req->not_mask = U32_MAX;
	req->or_mask = 0;
	rc = vbg_req_perform(gdev, req);
	if (rc < 0)
		vbg_err("%s error, rc: %d\n", __func__, rc);

	kfree(req);
	return vbg_status_code_to_errno(rc);
}

/**
 * Sets the guest capabilities for a session. Takes the session mutex.
 * @gdev:                The Guest extension device.
 * @session:             The session.
 * @or_mask:             The capabilities to add.
 * @not_mask:            The capabilities to remove.
 * @session_termination: Set if we're called by the session cleanup code.
 *                       This tweaks the error handling so we perform
 *                       proper session cleanup even if the host
 *                       misbehaves.
 * Return: 0 or negative errno value.
 */
static int vbg_set_session_capabilities(struct vbg_dev *gdev,
					struct vbg_session *session,
					u32 or_mask, u32 not_mask,
					bool session_termination)
{
	struct vmmdev_mask *req;
	u32 changed, previous;
	int rc, ret = 0;

	/* Allocate a request buffer before taking the mutex */
	req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_GUEST_CAPABILITIES);
	if (!req) {
		if (!session_termination)
			return -ENOMEM;
		/* Ignore allocation failure, we must do session cleanup. */
	}

	mutex_lock(&gdev->session_mutex);

	/* Apply the changes to the session mask. */
	previous = session->guest_caps;
	session->guest_caps |= or_mask;
	session->guest_caps &= ~not_mask;

	/* If anything actually changed, update the global usage counters. */
	changed = previous ^ session->guest_caps;
	if (!changed)
		goto out;

	vbg_track_bit_usage(&gdev->guest_caps_tracker, changed, previous);
	or_mask = gdev->guest_caps_tracker.mask;

	if (gdev->guest_caps_host == or_mask || !req)
		goto out;

	gdev->guest_caps_host = or_mask;
	req->or_mask = or_mask;
	req->not_mask = ~or_mask;
	rc = vbg_req_perform(gdev, req);
	if (rc < 0) {
		ret = vbg_status_code_to_errno(rc);

		/* Failed, roll back (unless it's session termination time). */
		gdev->guest_caps_host = U32_MAX;
		if (session_termination)
			goto out;

		vbg_track_bit_usage(&gdev->guest_caps_tracker, changed,
				    session->guest_caps);
		session->guest_caps = previous;
	}

out:
	mutex_unlock(&gdev->session_mutex);
	kfree(req);

	return ret;
}

/**
 * vbg_query_host_version - get the host feature mask and version information.
 * @gdev: The Guest extension device.
 * Return: 0 or negative errno value.
 */
static int vbg_query_host_version(struct vbg_dev *gdev)
{
	struct vmmdev_host_version *req;
	int rc, ret;

	req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_GET_HOST_VERSION);
	if (!req)
		return -ENOMEM;

	rc = vbg_req_perform(gdev, req);
	ret = vbg_status_code_to_errno(rc);
	if (ret)
		goto out;

	snprintf(gdev->host_version, sizeof(gdev->host_version), "%u.%u.%ur%u",
		 req->major, req->minor, req->build, req->revision);
	gdev->host_features = req->features;

	vbg_info("vboxguest: host-version: %s %#x\n", gdev->host_version,
		 gdev->host_features);

	if (!(req->features & VMMDEV_HVF_HGCM_PHYS_PAGE_LIST)) {
		vbg_err("vboxguest: Error: host too old (does not support page-lists)\n");
		ret = -ENODEV;
	}

out:
	kfree(req);
	return ret;
}

/**
 * Initializes the VBoxGuest device extension when the
 * device driver is loaded.
 *
 * The native code locates the VMMDev on the PCI bus and retrieves
 * the MMIO and I/O port ranges; this function takes care of
 * mapping the MMIO memory (if present). Upon successful return
 * the native code should set up the interrupt handler.
 *
 * @gdev:         The Guest extension device.
 * @fixed_events: Events that will be enabled upon init and no client
 *                will ever be allowed to mask.
 * Return: 0 or negative errno value.
 */
int vbg_core_init(struct vbg_dev *gdev, u32 fixed_events)
{
	int ret = -ENOMEM;

	gdev->fixed_events = fixed_events | VMMDEV_EVENT_HGCM;
	gdev->event_filter_host = U32_MAX;	/* forces a report */
	gdev->guest_caps_host = U32_MAX;	/* forces a report */

	init_waitqueue_head(&gdev->event_wq);
	init_waitqueue_head(&gdev->hgcm_wq);
	spin_lock_init(&gdev->event_spinlock);
	mutex_init(&gdev->session_mutex);
	mutex_init(&gdev->cancel_req_mutex);
	timer_setup(&gdev->heartbeat_timer, vbg_heartbeat_timer, 0);
	INIT_WORK(&gdev->mem_balloon.work, vbg_balloon_work);

	gdev->mem_balloon.get_req =
		vbg_req_alloc(sizeof(*gdev->mem_balloon.get_req),
			      VMMDEVREQ_GET_MEMBALLOON_CHANGE_REQ);
	gdev->mem_balloon.change_req =
		vbg_req_alloc(sizeof(*gdev->mem_balloon.change_req),
			      VMMDEVREQ_CHANGE_MEMBALLOON);
	gdev->cancel_req =
		vbg_req_alloc(sizeof(*(gdev->cancel_req)),
			      VMMDEVREQ_HGCM_CANCEL2);
	gdev->ack_events_req =
		vbg_req_alloc(sizeof(*gdev->ack_events_req),
			      VMMDEVREQ_ACKNOWLEDGE_EVENTS);
	gdev->mouse_status_req =
		vbg_req_alloc(sizeof(*gdev->mouse_status_req),
			      VMMDEVREQ_GET_MOUSE_STATUS);

	if (!gdev->mem_balloon.get_req || !gdev->mem_balloon.change_req ||
	    !gdev->cancel_req || !gdev->ack_events_req ||
	    !gdev->mouse_status_req)
		goto err_free_reqs;

	ret = vbg_query_host_version(gdev);
	if (ret)
		goto err_free_reqs;

	ret = vbg_report_guest_info(gdev);
	if (ret) {
		vbg_err("vboxguest: vbg_report_guest_info error: %d\n", ret);
		goto err_free_reqs;
	}

	ret = vbg_reset_host_event_filter(gdev, gdev->fixed_events);
	if (ret) {
		vbg_err("vboxguest: Error setting fixed event filter: %d\n",
			ret);
		goto err_free_reqs;
	}

	ret = vbg_reset_host_capabilities(gdev);
	if (ret) {
		vbg_err("vboxguest: Error clearing guest capabilities: %d\n",
			ret);
		goto err_free_reqs;
	}

	ret = vbg_core_set_mouse_status(gdev, 0);
	if (ret) {
		vbg_err("vboxguest: Error clearing mouse status: %d\n", ret);
		goto err_free_reqs;
	}

	/* These may fail without requiring the driver init to fail. */
	vbg_guest_mappings_init(gdev);
	vbg_heartbeat_init(gdev);

	/* All done! */
	ret = vbg_report_driver_status(gdev, true);
	if (ret < 0)
		vbg_err("vboxguest: Error reporting driver status: %d\n", ret);

	return 0;

err_free_reqs:
	kfree(gdev->mouse_status_req);
	kfree(gdev->ack_events_req);
	kfree(gdev->cancel_req);
	kfree(gdev->mem_balloon.change_req);
	kfree(gdev->mem_balloon.get_req);
	return ret;
}

/**
 * Call this on exit to clean-up vboxguest-core managed resources.
 *
 * The native code should call this before the driver is unloaded,
 * but don't call this on shutdown.
 * @gdev: The Guest extension device.
 */
void vbg_core_exit(struct vbg_dev *gdev)
{
	vbg_heartbeat_exit(gdev);
	vbg_guest_mappings_exit(gdev);

	/* Clear the host flags (mouse status etc). */
	vbg_reset_host_event_filter(gdev, 0);
	vbg_reset_host_capabilities(gdev);
	vbg_core_set_mouse_status(gdev, 0);

	kfree(gdev->mouse_status_req);
	kfree(gdev->ack_events_req);
	kfree(gdev->cancel_req);
	kfree(gdev->mem_balloon.change_req);
	kfree(gdev->mem_balloon.get_req);
}

/**
 * Creates a VBoxGuest user session.
 *
 * vboxguest_linux.c calls this when userspace opens the char-device.
 * @gdev: The Guest extension device.
 * @user: Set if this is a session for the vboxuser device.
 * Return: A pointer to the new session or an ERR_PTR on error.
 */
struct vbg_session *vbg_core_open_session(struct vbg_dev *gdev, bool user)
{
	struct vbg_session *session;

	session = kzalloc(sizeof(*session), GFP_KERNEL);
	if (!session)
		return ERR_PTR(-ENOMEM);

	session->gdev = gdev;
	session->user_session = user;

	return session;
}

/**
 * Closes a VBoxGuest session.
 * @session: The session to close (and free).
 */
void vbg_core_close_session(struct vbg_session *session)
{
	struct vbg_dev *gdev = session->gdev;
	int i, rc;

	vbg_set_session_capabilities(gdev, session, 0, U32_MAX, true);
	vbg_set_session_event_filter(gdev, session, 0, U32_MAX, true);

	for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) {
		if (!session->hgcm_client_ids[i])
			continue;

		vbg_hgcm_disconnect(gdev, session->hgcm_client_ids[i], &rc);
	}

	kfree(session);
}

static int vbg_ioctl_chk(struct vbg_ioctl_hdr *hdr, size_t in_size,
			 size_t out_size)
{
	if (hdr->size_in != (sizeof(*hdr) + in_size) ||
	    hdr->size_out != (sizeof(*hdr) + out_size))
		return -EINVAL;

	return 0;
}

static int vbg_ioctl_driver_version_info(
	struct vbg_ioctl_driver_version_info *info)
{
	const u16 vbg_maj_version = VBG_IOC_VERSION >> 16;
	u16 min_maj_version, req_maj_version;

	if (vbg_ioctl_chk(&info->hdr, sizeof(info->u.in), sizeof(info->u.out)))
		return -EINVAL;

	req_maj_version = info->u.in.req_version >> 16;
	min_maj_version = info->u.in.min_version >> 16;

	if (info->u.in.min_version > info->u.in.req_version ||
	    min_maj_version != req_maj_version)
		return -EINVAL;

	if (info->u.in.min_version <= VBG_IOC_VERSION &&
	    min_maj_version == vbg_maj_version) {
		info->u.out.session_version = VBG_IOC_VERSION;
	} else {
		info->u.out.session_version = U32_MAX;
		info->hdr.rc = VERR_VERSION_MISMATCH;
	}

	info->u.out.driver_version = VBG_IOC_VERSION;
	info->u.out.driver_revision = 0;
	info->u.out.reserved1 = 0;
	info->u.out.reserved2 = 0;

	return 0;
}

static bool vbg_wait_event_cond(struct vbg_dev *gdev,
				struct vbg_session *session,
				u32 event_mask)
{
	unsigned long flags;
	bool wakeup;
	u32 events;

	spin_lock_irqsave(&gdev->event_spinlock, flags);

	events = gdev->pending_events & event_mask;
	wakeup = events || session->cancel_waiters;

	spin_unlock_irqrestore(&gdev->event_spinlock, flags);

	return wakeup;
}

/* Must be called with the event_spinlock held */
static u32 vbg_consume_events_locked(struct vbg_dev *gdev,
				     struct vbg_session *session,
				     u32 event_mask)
{
	u32 events = gdev->pending_events & event_mask;

	gdev->pending_events &= ~events;
	return events;
}

static int vbg_ioctl_wait_for_events(struct vbg_dev *gdev,
				     struct vbg_session *session,
				     struct vbg_ioctl_wait_for_events *wait)
{
	u32 timeout_ms = wait->u.in.timeout_ms;
	u32 event_mask = wait->u.in.events;
	unsigned long flags;
	long timeout;
	int ret = 0;

	if (vbg_ioctl_chk(&wait->hdr, sizeof(wait->u.in), sizeof(wait->u.out)))
		return -EINVAL;

	if (timeout_ms == U32_MAX)
		timeout = MAX_SCHEDULE_TIMEOUT;
	else
		timeout = msecs_to_jiffies(timeout_ms);

	wait->u.out.events = 0;
	do {
		timeout = wait_event_interruptible_timeout(
				gdev->event_wq,
				vbg_wait_event_cond(gdev, session, event_mask),
				timeout);

		spin_lock_irqsave(&gdev->event_spinlock, flags);

		if (timeout < 0 || session->cancel_waiters) {
			ret = -EINTR;
		} else if (timeout == 0) {
			ret = -ETIMEDOUT;
		} else {
			wait->u.out.events =
			   vbg_consume_events_locked(gdev, session, event_mask);
		}

		spin_unlock_irqrestore(&gdev->event_spinlock, flags);

		/*
		 * Someone else may have consumed the event(s) first, in
		 * which case we go back to waiting.
		 */
	} while (ret == 0 && wait->u.out.events == 0);

	return ret;
}

static int vbg_ioctl_interrupt_all_wait_events(struct vbg_dev *gdev,
					       struct vbg_session *session,
					       struct vbg_ioctl_hdr *hdr)
{
	unsigned long flags;

	if (hdr->size_in != sizeof(*hdr) || hdr->size_out != sizeof(*hdr))
		return -EINVAL;

	spin_lock_irqsave(&gdev->event_spinlock, flags);
	session->cancel_waiters = true;
	spin_unlock_irqrestore(&gdev->event_spinlock, flags);

	wake_up(&gdev->event_wq);

	return 0;
}

/**
 * Checks if the VMM request is allowed in the context of the given session.
 * @gdev:    The Guest extension device.
 * @session: The calling session.
 * @req:     The request.
 * Return: 0 or negative errno value.
 */
static int vbg_req_allowed(struct vbg_dev *gdev, struct vbg_session *session,
			   const struct vmmdev_request_header *req)
{
	const struct vmmdev_guest_status *guest_status;
	bool trusted_apps_only;

	switch (req->request_type) {
	/* Trusted user apps only. */
	case VMMDEVREQ_QUERY_CREDENTIALS:
	case VMMDEVREQ_REPORT_CREDENTIALS_JUDGEMENT:
	case VMMDEVREQ_REGISTER_SHARED_MODULE:
	case VMMDEVREQ_UNREGISTER_SHARED_MODULE:
	case VMMDEVREQ_WRITE_COREDUMP:
	case VMMDEVREQ_GET_CPU_HOTPLUG_REQ:
	case VMMDEVREQ_SET_CPU_HOTPLUG_STATUS:
	case VMMDEVREQ_CHECK_SHARED_MODULES:
	case VMMDEVREQ_GET_PAGE_SHARING_STATUS:
	case VMMDEVREQ_DEBUG_IS_PAGE_SHARED:
	case VMMDEVREQ_REPORT_GUEST_STATS:
	case VMMDEVREQ_REPORT_GUEST_USER_STATE:
	case VMMDEVREQ_GET_STATISTICS_CHANGE_REQ:
		trusted_apps_only = true;
		break;

	/* Anyone. */
	case VMMDEVREQ_GET_MOUSE_STATUS:
	case VMMDEVREQ_SET_MOUSE_STATUS:
	case VMMDEVREQ_SET_POINTER_SHAPE:
	case VMMDEVREQ_GET_HOST_VERSION:
	case VMMDEVREQ_IDLE:
	case VMMDEVREQ_GET_HOST_TIME:
	case VMMDEVREQ_SET_POWER_STATUS:
	case VMMDEVREQ_ACKNOWLEDGE_EVENTS:
	case VMMDEVREQ_CTL_GUEST_FILTER_MASK:
	case VMMDEVREQ_REPORT_GUEST_STATUS:
	case VMMDEVREQ_GET_DISPLAY_CHANGE_REQ:
	case VMMDEVREQ_VIDEMODE_SUPPORTED:
	case VMMDEVREQ_GET_HEIGHT_REDUCTION:
	case VMMDEVREQ_GET_DISPLAY_CHANGE_REQ2:
	case VMMDEVREQ_VIDEMODE_SUPPORTED2:
	case VMMDEVREQ_VIDEO_ACCEL_ENABLE:
	case VMMDEVREQ_VIDEO_ACCEL_FLUSH:
	case VMMDEVREQ_VIDEO_SET_VISIBLE_REGION:
	case VMMDEVREQ_GET_DISPLAY_CHANGE_REQEX:
	case VMMDEVREQ_GET_SEAMLESS_CHANGE_REQ:
	case VMMDEVREQ_GET_VRDPCHANGE_REQ:
	case VMMDEVREQ_LOG_STRING:
	case VMMDEVREQ_GET_SESSION_ID:
		trusted_apps_only = false;
		break;

	/* Depends on the request parameters... */
	case VMMDEVREQ_REPORT_GUEST_CAPABILITIES:
		guest_status = (const struct vmmdev_guest_status *)req;
		switch (guest_status->facility) {
		case VBOXGUEST_FACILITY_TYPE_ALL:
		case VBOXGUEST_FACILITY_TYPE_VBOXGUEST_DRIVER:
			vbg_err("Denying userspace vmm report guest cap. call facility %#08x\n",
				guest_status->facility);
			return -EPERM;
		case VBOXGUEST_FACILITY_TYPE_VBOX_SERVICE:
			trusted_apps_only = true;
			break;
		case VBOXGUEST_FACILITY_TYPE_VBOX_TRAY_CLIENT:
		case VBOXGUEST_FACILITY_TYPE_SEAMLESS:
		case VBOXGUEST_FACILITY_TYPE_GRAPHICS:
		default:
			trusted_apps_only = false;
			break;
		}
		break;

	/* Anything else is not allowed. */
	default:
		vbg_err("Denying userspace vmm call type %#08x\n",
			req->request_type);
		return -EPERM;
	}

	if (trusted_apps_only && session->user_session) {
		vbg_err("Denying userspace vmm call type %#08x through vboxuser device node\n",
			req->request_type);
		return -EPERM;
	}

	return 0;
}

static int vbg_ioctl_vmmrequest(struct vbg_dev *gdev,
				struct vbg_session *session, void *data)
{
	struct vbg_ioctl_hdr *hdr = data;
	int ret;

	if (hdr->size_in != hdr->size_out)
		return -EINVAL;

	if (hdr->size_in > VMMDEV_MAX_VMMDEVREQ_SIZE)
		return -E2BIG;

	if (hdr->type == VBG_IOCTL_HDR_TYPE_DEFAULT)
		return -EINVAL;

	ret = vbg_req_allowed(gdev, session, data);
	if (ret < 0)
		return ret;

	vbg_req_perform(gdev, data);
	WARN_ON(hdr->rc == VINF_HGCM_ASYNC_EXECUTE);

	return 0;
}

static int vbg_ioctl_hgcm_connect(struct vbg_dev *gdev,
				  struct vbg_session *session,
				  struct vbg_ioctl_hgcm_connect *conn)
{
	u32 client_id;
	int i, ret;

	if (vbg_ioctl_chk(&conn->hdr, sizeof(conn->u.in), sizeof(conn->u.out)))
		return -EINVAL;

	/* Find a free place in the session's clients array and claim it */
	mutex_lock(&gdev->session_mutex);
	for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) {
		if (!session->hgcm_client_ids[i]) {
			session->hgcm_client_ids[i] = U32_MAX;
			break;
		}
	}
	mutex_unlock(&gdev->session_mutex);

	if (i >= ARRAY_SIZE(session->hgcm_client_ids))
		return -EMFILE;

	ret = vbg_hgcm_connect(gdev, &conn->u.in.loc, &client_id,
			       &conn->hdr.rc);

	mutex_lock(&gdev->session_mutex);
	if (ret == 0 && conn->hdr.rc >= 0) {
		conn->u.out.client_id = client_id;
		session->hgcm_client_ids[i] = client_id;
	} else {
		conn->u.out.client_id = 0;
		session->hgcm_client_ids[i] = 0;
	}
	mutex_unlock(&gdev->session_mutex);

	return ret;
}

static int vbg_ioctl_hgcm_disconnect(struct vbg_dev *gdev,
				     struct vbg_session *session,
				     struct vbg_ioctl_hgcm_disconnect *disconn)
{
	u32 client_id;
	int i, ret;

	if (vbg_ioctl_chk(&disconn->hdr, sizeof(disconn->u.in), 0))
		return -EINVAL;

	client_id = disconn->u.in.client_id;
	if (client_id == 0 || client_id == U32_MAX)
		return -EINVAL;

	mutex_lock(&gdev->session_mutex);
	for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) {
		if (session->hgcm_client_ids[i] == client_id) {
			session->hgcm_client_ids[i] = U32_MAX;
			break;
		}
	}
	mutex_unlock(&gdev->session_mutex);

	if (i >= ARRAY_SIZE(session->hgcm_client_ids))
		return -EINVAL;

	ret = vbg_hgcm_disconnect(gdev, client_id, &disconn->hdr.rc);

	mutex_lock(&gdev->session_mutex);
	if (ret == 0 && disconn->hdr.rc >= 0)
		session->hgcm_client_ids[i] = 0;
	else
		session->hgcm_client_ids[i] = client_id;
	mutex_unlock(&gdev->session_mutex);

	return ret;
}

static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev,
			       struct vbg_session *session, bool f32bit,
			       struct vbg_ioctl_hgcm_call *call)
{
	size_t actual_size;
	u32 client_id;
	int i, ret;

	if (call->hdr.size_in < sizeof(*call))
		return -EINVAL;

	if (call->hdr.size_in != call->hdr.size_out)
		return -EINVAL;

	if (call->parm_count > VMMDEV_HGCM_MAX_PARMS)
		return -E2BIG;

	client_id = call->client_id;
	if (client_id == 0 || client_id == U32_MAX)
		return -EINVAL;

	actual_size = sizeof(*call);
	if (f32bit)
		actual_size += call->parm_count *
			       sizeof(struct vmmdev_hgcm_function_parameter32);
	else
		actual_size += call->parm_count *
			       sizeof(struct vmmdev_hgcm_function_parameter);
	if (call->hdr.size_in < actual_size) {
		vbg_debug("VBG_IOCTL_HGCM_CALL: hdr.size_in %d required size is %zd\n",
			  call->hdr.size_in, actual_size);
1290 | return -EINVAL; | ||
1291 | } | ||
1292 | call->hdr.size_out = actual_size; | ||
1293 | |||
1294 | /* | ||
1295 | * Validate the client id. | ||
1296 | */ | ||
1297 | mutex_lock(&gdev->session_mutex); | ||
1298 | for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) | ||
1299 | if (session->hgcm_client_ids[i] == client_id) | ||
1300 | break; | ||
1301 | mutex_unlock(&gdev->session_mutex); | ||
1302 | if (i >= ARRAY_SIZE(session->hgcm_client_ids)) { | ||
1303 | vbg_debug("VBG_IOCTL_HGCM_CALL: invalid handle, client_id=%#08x\n", | ||
1304 | client_id); | ||
1305 | return -EINVAL; | ||
1306 | } | ||
1307 | |||
1308 | if (f32bit) | ||
1309 | ret = vbg_hgcm_call32(gdev, client_id, | ||
1310 | call->function, call->timeout_ms, | ||
1311 | VBG_IOCTL_HGCM_CALL_PARMS32(call), | ||
1312 | call->parm_count, &call->hdr.rc); | ||
1313 | else | ||
1314 | ret = vbg_hgcm_call(gdev, client_id, | ||
1315 | call->function, call->timeout_ms, | ||
1316 | VBG_IOCTL_HGCM_CALL_PARMS(call), | ||
1317 | call->parm_count, &call->hdr.rc); | ||
1318 | |||
1319 | if (ret == -E2BIG) { | ||
1320 | /* E2BIG needs to be reported through the hdr.rc field. */ | ||
1321 | call->hdr.rc = VERR_OUT_OF_RANGE; | ||
1322 | ret = 0; | ||
1323 | } | ||
1324 | |||
1325 | if (ret && ret != -EINTR && ret != -ETIMEDOUT) | ||
1326 | vbg_err("VBG_IOCTL_HGCM_CALL error: %d\n", ret); | ||
1327 | |||
1328 | return ret; | ||
1329 | } | ||
1330 | |||
1331 | static int vbg_ioctl_log(struct vbg_ioctl_log *log) | ||
1332 | { | ||
1333 | if (log->hdr.size_out != sizeof(log->hdr)) | ||
1334 | return -EINVAL; | ||
1335 | |||
1336 | vbg_info("%.*s", (int)(log->hdr.size_in - sizeof(log->hdr)), | ||
1337 | log->u.in.msg); | ||
1338 | |||
1339 | return 0; | ||
1340 | } | ||
1341 | |||
1342 | static int vbg_ioctl_change_filter_mask(struct vbg_dev *gdev, | ||
1343 | struct vbg_session *session, | ||
1344 | struct vbg_ioctl_change_filter *filter) | ||
1345 | { | ||
1346 | u32 or_mask, not_mask; | ||
1347 | |||
1348 | if (vbg_ioctl_chk(&filter->hdr, sizeof(filter->u.in), 0)) | ||
1349 | return -EINVAL; | ||
1350 | |||
1351 | or_mask = filter->u.in.or_mask; | ||
1352 | not_mask = filter->u.in.not_mask; | ||
1353 | |||
1354 | if ((or_mask | not_mask) & ~VMMDEV_EVENT_VALID_EVENT_MASK) | ||
1355 | return -EINVAL; | ||
1356 | |||
1357 | return vbg_set_session_event_filter(gdev, session, or_mask, not_mask, | ||
1358 | false); | ||
1359 | } | ||
1360 | |||
1361 | static int vbg_ioctl_change_guest_capabilities(struct vbg_dev *gdev, | ||
1362 | struct vbg_session *session, struct vbg_ioctl_set_guest_caps *caps) | ||
1363 | { | ||
1364 | u32 or_mask, not_mask; | ||
1365 | int ret; | ||
1366 | |||
1367 | if (vbg_ioctl_chk(&caps->hdr, sizeof(caps->u.in), sizeof(caps->u.out))) | ||
1368 | return -EINVAL; | ||
1369 | |||
1370 | or_mask = caps->u.in.or_mask; | ||
1371 | not_mask = caps->u.in.not_mask; | ||
1372 | |||
1373 | if ((or_mask | not_mask) & ~VMMDEV_EVENT_VALID_EVENT_MASK) | ||
1374 | return -EINVAL; | ||
1375 | |||
1376 | ret = vbg_set_session_capabilities(gdev, session, or_mask, not_mask, | ||
1377 | false); | ||
1378 | if (ret) | ||
1379 | return ret; | ||
1380 | |||
1381 | caps->u.out.session_caps = session->guest_caps; | ||
1382 | caps->u.out.global_caps = gdev->guest_caps_host; | ||
1383 | |||
1384 | return 0; | ||
1385 | } | ||
1386 | |||
1387 | static int vbg_ioctl_check_balloon(struct vbg_dev *gdev, | ||
1388 | struct vbg_ioctl_check_balloon *balloon_info) | ||
1389 | { | ||
1390 | if (vbg_ioctl_chk(&balloon_info->hdr, 0, sizeof(balloon_info->u.out))) | ||
1391 | return -EINVAL; | ||
1392 | |||
1393 | balloon_info->u.out.balloon_chunks = gdev->mem_balloon.chunks; | ||
1394 | /* | ||
1395 | * Under Linux we handle VMMDEV_EVENT_BALLOON_CHANGE_REQUEST | ||
1396 | * events entirely in the kernel, see vbg_core_isr(). | ||
1397 | */ | ||
1398 | balloon_info->u.out.handle_in_r3 = false; | ||
1399 | |||
1400 | return 0; | ||
1401 | } | ||
1402 | |||
1403 | static int vbg_ioctl_write_core_dump(struct vbg_dev *gdev, | ||
1404 | struct vbg_ioctl_write_coredump *dump) | ||
1405 | { | ||
1406 | struct vmmdev_write_core_dump *req; | ||
1407 | |||
1408 | if (vbg_ioctl_chk(&dump->hdr, sizeof(dump->u.in), 0)) | ||
1409 | return -EINVAL; | ||
1410 | |||
1411 | req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_WRITE_COREDUMP); | ||
1412 | if (!req) | ||
1413 | return -ENOMEM; | ||
1414 | |||
1415 | req->flags = dump->u.in.flags; | ||
1416 | dump->hdr.rc = vbg_req_perform(gdev, req); | ||
1417 | |||
1418 | kfree(req); | ||
1419 | return 0; | ||
1420 | } | ||
1421 | |||
1422 | /** | ||
1423 | * Common IOCtl for user to kernel communication. | ||
1424 | * Return: 0 or negative errno value. | ||
1425 | * @session: The client session. | ||
1426 | * @req: The requested function. | ||
1427 | * @data: The i/o data buffer, minimum size sizeof(struct vbg_ioctl_hdr). | ||
1428 | */ | ||
1429 | int vbg_core_ioctl(struct vbg_session *session, unsigned int req, void *data) | ||
1430 | { | ||
1431 | unsigned int req_no_size = req & ~IOCSIZE_MASK; | ||
1432 | struct vbg_dev *gdev = session->gdev; | ||
1433 | struct vbg_ioctl_hdr *hdr = data; | ||
1434 | bool f32bit = false; | ||
1435 | |||
1436 | hdr->rc = VINF_SUCCESS; | ||
1437 | if (!hdr->size_out) | ||
1438 | hdr->size_out = hdr->size_in; | ||
1439 | |||
1440 | /* | ||
1441 | * hdr->version and hdr->size_in / hdr->size_out minimum size are | ||
1442 | * already checked by vbg_misc_device_ioctl(). | ||
1443 | */ | ||
1444 | |||
1445 | /* For VMMDEV_REQUEST hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT */ | ||
1446 | if (req_no_size == VBG_IOCTL_VMMDEV_REQUEST(0) || | ||
1447 | req == VBG_IOCTL_VMMDEV_REQUEST_BIG) | ||
1448 | return vbg_ioctl_vmmrequest(gdev, session, data); | ||
1449 | |||
1450 | if (hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT) | ||
1451 | return -EINVAL; | ||
1452 | |||
1453 | /* Fixed size requests. */ | ||
1454 | switch (req) { | ||
1455 | case VBG_IOCTL_DRIVER_VERSION_INFO: | ||
1456 | return vbg_ioctl_driver_version_info(data); | ||
1457 | case VBG_IOCTL_HGCM_CONNECT: | ||
1458 | return vbg_ioctl_hgcm_connect(gdev, session, data); | ||
1459 | case VBG_IOCTL_HGCM_DISCONNECT: | ||
1460 | return vbg_ioctl_hgcm_disconnect(gdev, session, data); | ||
1461 | case VBG_IOCTL_WAIT_FOR_EVENTS: | ||
1462 | return vbg_ioctl_wait_for_events(gdev, session, data); | ||
1463 | case VBG_IOCTL_INTERRUPT_ALL_WAIT_FOR_EVENTS: | ||
1464 | return vbg_ioctl_interrupt_all_wait_events(gdev, session, data); | ||
1465 | case VBG_IOCTL_CHANGE_FILTER_MASK: | ||
1466 | return vbg_ioctl_change_filter_mask(gdev, session, data); | ||
1467 | case VBG_IOCTL_CHANGE_GUEST_CAPABILITIES: | ||
1468 | return vbg_ioctl_change_guest_capabilities(gdev, session, data); | ||
1469 | case VBG_IOCTL_CHECK_BALLOON: | ||
1470 | return vbg_ioctl_check_balloon(gdev, data); | ||
1471 | case VBG_IOCTL_WRITE_CORE_DUMP: | ||
1472 | return vbg_ioctl_write_core_dump(gdev, data); | ||
1473 | } | ||
1474 | |||
1475 | /* Variable sized requests. */ | ||
1476 | switch (req_no_size) { | ||
1477 | #ifdef CONFIG_COMPAT | ||
1478 | case VBG_IOCTL_HGCM_CALL_32(0): | ||
1479 | f32bit = true; | ||
1480 | /* Fall through */ | ||
1481 | #endif | ||
1482 | case VBG_IOCTL_HGCM_CALL(0): | ||
1483 | return vbg_ioctl_hgcm_call(gdev, session, f32bit, data); | ||
1484 | case VBG_IOCTL_LOG(0): | ||
1485 | return vbg_ioctl_log(data); | ||
1486 | } | ||
1487 | |||
1488 | vbg_debug("VGDrvCommonIoCtl: Unknown req %#08x\n", req); | ||
1489 | return -ENOTTY; | ||
1490 | } | ||
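The dispatcher above matches the variable-sized requests by masking the size bits out of the ioctl number (`req & ~IOCSIZE_MASK`), so a single `case` label covers the same command at any payload size. The following standalone sketch demonstrates that trick; the bit layout used here is illustrative, not the kernel's exact `_IOC` encoding:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative ioctl number layout: low 16 bits = command code,
 * bits 16..29 = payload size (mirrors the role of IOCSIZE_MASK). */
#define SIZE_SHIFT 16
#define SIZE_MASK  (0x3fffu << SIZE_SHIFT)

/* Build a request number from a command code and a payload size. */
static uint32_t mk_req(uint16_t cmd, uint16_t size)
{
    return (uint32_t)cmd | ((uint32_t)size << SIZE_SHIFT);
}

/* The vbg_core_ioctl() dispatch trick: strip the size field so one
 * comparison matches a request regardless of its payload size. */
static int is_same_cmd(uint32_t req, uint32_t tmpl)
{
    return (req & ~SIZE_MASK) == (tmpl & ~SIZE_MASK);
}
```

This is why `VBG_IOCTL_HGCM_CALL(0)` and `VBG_IOCTL_LOG(0)` appear with a zero size argument in the `switch` on `req_no_size`: the zero-size form is the size-stripped template.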
1491 | |||
1492 | /** | ||
1493 | * Report guest supported mouse-features to the host. | ||
1494 | * | ||
1495 | * Return: 0 or negative errno value. | ||
1496 | * @gdev: The Guest extension device. | ||
1497 | * @features: The set of features to report to the host. | ||
1498 | */ | ||
1499 | int vbg_core_set_mouse_status(struct vbg_dev *gdev, u32 features) | ||
1500 | { | ||
1501 | struct vmmdev_mouse_status *req; | ||
1502 | int rc; | ||
1503 | |||
1504 | req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_MOUSE_STATUS); | ||
1505 | if (!req) | ||
1506 | return -ENOMEM; | ||
1507 | |||
1508 | req->mouse_features = features; | ||
1509 | req->pointer_pos_x = 0; | ||
1510 | req->pointer_pos_y = 0; | ||
1511 | |||
1512 | rc = vbg_req_perform(gdev, req); | ||
1513 | if (rc < 0) | ||
1514 | vbg_err("%s error, rc: %d\n", __func__, rc); | ||
1515 | |||
1516 | kfree(req); | ||
1517 | return vbg_status_code_to_errno(rc); | ||
1518 | } | ||
1519 | |||
1520 | /** Core interrupt service routine. */ | ||
1521 | irqreturn_t vbg_core_isr(int irq, void *dev_id) | ||
1522 | { | ||
1523 | struct vbg_dev *gdev = dev_id; | ||
1524 | struct vmmdev_events *req = gdev->ack_events_req; | ||
1525 | bool mouse_position_changed = false; | ||
1526 | unsigned long flags; | ||
1527 | u32 events = 0; | ||
1528 | int rc; | ||
1529 | |||
1530 | if (!gdev->mmio->V.V1_04.have_events) | ||
1531 | return IRQ_NONE; | ||
1532 | |||
1533 | /* Get and acknowledge events. */ | ||
1534 | req->header.rc = VERR_INTERNAL_ERROR; | ||
1535 | req->events = 0; | ||
1536 | rc = vbg_req_perform(gdev, req); | ||
1537 | if (rc < 0) { | ||
1538 | vbg_err("Error performing events req, rc: %d\n", rc); | ||
1539 | return IRQ_NONE; | ||
1540 | } | ||
1541 | |||
1542 | events = req->events; | ||
1543 | |||
1544 | if (events & VMMDEV_EVENT_MOUSE_POSITION_CHANGED) { | ||
1545 | mouse_position_changed = true; | ||
1546 | events &= ~VMMDEV_EVENT_MOUSE_POSITION_CHANGED; | ||
1547 | } | ||
1548 | |||
1549 | if (events & VMMDEV_EVENT_HGCM) { | ||
1550 | wake_up(&gdev->hgcm_wq); | ||
1551 | events &= ~VMMDEV_EVENT_HGCM; | ||
1552 | } | ||
1553 | |||
1554 | if (events & VMMDEV_EVENT_BALLOON_CHANGE_REQUEST) { | ||
1555 | schedule_work(&gdev->mem_balloon.work); | ||
1556 | events &= ~VMMDEV_EVENT_BALLOON_CHANGE_REQUEST; | ||
1557 | } | ||
1558 | |||
1559 | if (events) { | ||
1560 | spin_lock_irqsave(&gdev->event_spinlock, flags); | ||
1561 | gdev->pending_events |= events; | ||
1562 | spin_unlock_irqrestore(&gdev->event_spinlock, flags); | ||
1563 | |||
1564 | wake_up(&gdev->event_wq); | ||
1565 | } | ||
1566 | |||
1567 | if (mouse_position_changed) | ||
1568 | vbg_linux_mouse_event(gdev); | ||
1569 | |||
1570 | return IRQ_HANDLED; | ||
1571 | } | ||
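The ISR follows a test-handle-clear pattern: each event bit it knows how to service is handled and masked off, and whatever remains is OR-ed into `pending_events` under the spinlock for sleeping waiters. A standalone sketch of that pattern, with hypothetical event bits standing in for the `VMMDEV_EVENT_*` flags:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical event bits mirroring the VMMDEV_EVENT_* flags above. */
#define EVT_MOUSE   0x01u
#define EVT_HGCM    0x02u
#define EVT_BALLOON 0x04u

struct dispatch_result {
    bool mouse;
    bool hgcm;
    bool balloon;
    uint32_t pending;   /* unrecognized events, left for waiters */
};

/* Mirror of the ISR's flow: test a bit, mark it handled, clear it,
 * and stash the remainder as the ISR does with gdev->pending_events. */
static struct dispatch_result dispatch(uint32_t events)
{
    struct dispatch_result r = { 0 };

    if (events & EVT_MOUSE) {
        r.mouse = true;
        events &= ~EVT_MOUSE;
    }
    if (events & EVT_HGCM) {
        r.hgcm = true;          /* would wake_up(&gdev->hgcm_wq) */
        events &= ~EVT_HGCM;
    }
    if (events & EVT_BALLOON) {
        r.balloon = true;       /* would schedule the balloon work */
        events &= ~EVT_BALLOON;
    }
    r.pending = events;         /* would be OR-ed into pending_events */
    return r;
}
```

Note that the real ISR defers the mouse event to the very end (after the spinlock section), since `vbg_linux_mouse_event()` may do more work than a simple wakeup.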
diff --git a/drivers/virt/vboxguest/vboxguest_core.h b/drivers/virt/vboxguest/vboxguest_core.h
new file mode 100644
index 000000000000..6c784bf4fa6d
--- /dev/null
+++ b/drivers/virt/vboxguest/vboxguest_core.h
@@ -0,0 +1,174 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* Copyright (C) 2010-2016 Oracle Corporation */ | ||
3 | |||
4 | #ifndef __VBOXGUEST_CORE_H__ | ||
5 | #define __VBOXGUEST_CORE_H__ | ||
6 | |||
7 | #include <linux/input.h> | ||
8 | #include <linux/interrupt.h> | ||
9 | #include <linux/kernel.h> | ||
10 | #include <linux/list.h> | ||
11 | #include <linux/miscdevice.h> | ||
12 | #include <linux/spinlock.h> | ||
13 | #include <linux/wait.h> | ||
14 | #include <linux/workqueue.h> | ||
15 | #include <linux/vboxguest.h> | ||
16 | #include "vmmdev.h" | ||
17 | |||
18 | struct vbg_session; | ||
19 | |||
20 | /** VBox guest memory balloon. */ | ||
21 | struct vbg_mem_balloon { | ||
22 | /** Work handling VMMDEV_EVENT_BALLOON_CHANGE_REQUEST events */ | ||
23 | struct work_struct work; | ||
24 | /** Pre-allocated vmmdev_memballoon_info req for query */ | ||
25 | struct vmmdev_memballoon_info *get_req; | ||
26 | /** Pre-allocated vmmdev_memballoon_change req for inflate / deflate */ | ||
27 | struct vmmdev_memballoon_change *change_req; | ||
28 | /** The current number of chunks in the balloon. */ | ||
29 | u32 chunks; | ||
30 | /** The maximum number of chunks in the balloon. */ | ||
31 | u32 max_chunks; | ||
32 | /** | ||
33 | * Array of pointers to page arrays. A page * array is allocated for | ||
34 | * each chunk when inflating, and freed when deflating. | ||
35 | */ | ||
36 | struct page ***pages; | ||
37 | }; | ||
38 | |||
39 | /** | ||
40 | * Per bit usage tracker for a u32 mask. | ||
41 | * | ||
42 | * Used for optimal handling of guest properties and event filter. | ||
43 | */ | ||
44 | struct vbg_bit_usage_tracker { | ||
45 | /** Per bit usage counters. */ | ||
46 | u32 per_bit_usage[32]; | ||
47 | /** The current mask according to per_bit_usage. */ | ||
48 | u32 mask; | ||
49 | }; | ||
50 | |||
51 | /** VBox guest device (data) extension. */ | ||
52 | struct vbg_dev { | ||
53 | struct device *dev; | ||
54 | /** The base of the adapter I/O ports. */ | ||
55 | u16 io_port; | ||
56 | /** Pointer to the mapping of the VMMDev adapter memory. */ | ||
57 | struct vmmdev_memory *mmio; | ||
58 | /** Host version */ | ||
59 | char host_version[64]; | ||
60 | /** Host features */ | ||
61 | unsigned int host_features; | ||
62 | /** | ||
63 | * Dummy page and vmap address for reserved kernel virtual-address | ||
64 | * space for the guest mappings, only used on hosts lacking vtx. | ||
65 | */ | ||
66 | struct page *guest_mappings_dummy_page; | ||
67 | void *guest_mappings; | ||
68 | /** Spinlock protecting pending_events. */ | ||
69 | spinlock_t event_spinlock; | ||
70 | /** Preallocated struct vmmdev_events for the IRQ handler. */ | ||
71 | struct vmmdev_events *ack_events_req; | ||
72 | /** Wait-for-event list for threads waiting for multiple events. */ | ||
73 | wait_queue_head_t event_wq; | ||
74 | /** Mask of pending events. */ | ||
75 | u32 pending_events; | ||
76 | /** Wait-for-event list for threads waiting on HGCM async completion. */ | ||
77 | wait_queue_head_t hgcm_wq; | ||
78 | /** Pre-allocated hgcm cancel2 req. for cancellation on timeout */ | ||
79 | struct vmmdev_hgcm_cancel2 *cancel_req; | ||
80 | /** Mutex protecting cancel_req accesses */ | ||
81 | struct mutex cancel_req_mutex; | ||
82 | /** Pre-allocated mouse-status request for the input-device handling. */ | ||
83 | struct vmmdev_mouse_status *mouse_status_req; | ||
84 | /** Input device for reporting abs mouse coordinates to the guest. */ | ||
85 | struct input_dev *input; | ||
86 | |||
87 | /** Memory balloon information. */ | ||
88 | struct vbg_mem_balloon mem_balloon; | ||
89 | |||
90 | /** Lock for session related items in vbg_dev and vbg_session */ | ||
91 | struct mutex session_mutex; | ||
92 | /** Events we won't permit anyone to filter out. */ | ||
93 | u32 fixed_events; | ||
94 | /** | ||
95 | * Usage counters for the host events (excludes fixed events). | ||
96 | * Protected by session_mutex. | ||
97 | */ | ||
98 | struct vbg_bit_usage_tracker event_filter_tracker; | ||
99 | /** | ||
100 | * The event filter last reported to the host (or UINT32_MAX). | ||
101 | * Protected by session_mutex. | ||
102 | */ | ||
103 | u32 event_filter_host; | ||
104 | |||
105 | /** | ||
106 | * Usage counters for guest capabilities. Indexed by capability bit | ||
107 | * number, one count per session using a capability. | ||
108 | * Protected by session_mutex. | ||
109 | */ | ||
110 | struct vbg_bit_usage_tracker guest_caps_tracker; | ||
111 | /** | ||
112 | * The guest capabilities last reported to the host (or UINT32_MAX). | ||
113 | * Protected by session_mutex. | ||
114 | */ | ||
115 | u32 guest_caps_host; | ||
116 | |||
117 | /** | ||
118 | * Heartbeat timer which fires every heartbeat_interval_ms; | ||
119 | * its handler sends VMMDEVREQ_GUEST_HEARTBEAT to the | ||
120 | * VMMDev. | ||
121 | */ | ||
122 | struct timer_list heartbeat_timer; | ||
123 | /** Heartbeat timer interval in ms. */ | ||
124 | int heartbeat_interval_ms; | ||
125 | /** Preallocated VMMDEVREQ_GUEST_HEARTBEAT request. */ | ||
126 | struct vmmdev_request_header *guest_heartbeat_req; | ||
127 | |||
128 | /** "vboxguest" char-device */ | ||
129 | struct miscdevice misc_device; | ||
130 | /** "vboxuser" char-device */ | ||
131 | struct miscdevice misc_device_user; | ||
132 | }; | ||
133 | |||
134 | /** The VBoxGuest per session data. */ | ||
135 | struct vbg_session { | ||
136 | /** Pointer to the device extension. */ | ||
137 | struct vbg_dev *gdev; | ||
138 | |||
139 | /** | ||
140 | * Array containing HGCM client IDs associated with this session. | ||
141 | * These will be automatically disconnected when the session is closed. | ||
142 | * Protected by vbg_gdev.session_mutex. | ||
143 | */ | ||
144 | u32 hgcm_client_ids[64]; | ||
145 | /** | ||
146 | * Host events requested by the session. | ||
147 | * An event type requested in any guest session will be added to the | ||
148 | * host filter. Protected by vbg_gdev.session_mutex. | ||
149 | */ | ||
150 | u32 event_filter; | ||
151 | /** | ||
152 | * Guest capabilities for this session. | ||
153 | * A capability claimed by any guest session will be reported to the | ||
154 | * host. Protected by vbg_gdev.session_mutex. | ||
155 | */ | ||
156 | u32 guest_caps; | ||
157 | /** Does this session belong to a root process or a user one? */ | ||
158 | bool user_session; | ||
159 | /** Set on CANCEL_ALL_WAITEVENTS, protected by vbg_dev.event_spinlock. */ | ||
160 | bool cancel_waiters; | ||
161 | }; | ||
162 | |||
163 | int vbg_core_init(struct vbg_dev *gdev, u32 fixed_events); | ||
164 | void vbg_core_exit(struct vbg_dev *gdev); | ||
165 | struct vbg_session *vbg_core_open_session(struct vbg_dev *gdev, bool user); | ||
166 | void vbg_core_close_session(struct vbg_session *session); | ||
167 | int vbg_core_ioctl(struct vbg_session *session, unsigned int req, void *data); | ||
168 | int vbg_core_set_mouse_status(struct vbg_dev *gdev, u32 features); | ||
169 | |||
170 | irqreturn_t vbg_core_isr(int irq, void *dev_id); | ||
171 | |||
172 | void vbg_linux_mouse_event(struct vbg_dev *gdev); | ||
173 | |||
174 | #endif | ||
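Every ioctl buffer exchanged with this driver starts with a `vbg_ioctl_hdr`, which `vbg_misc_device_ioctl()` validates before copying anything else in. A userspace-testable sketch of those size checks follows; the mirror struct's layout and the version constant are illustrative stand-ins, not the real uAPI ABI:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Local mirror of the header every VBG ioctl buffer starts with;
 * field order and the version value are illustrative only. */
struct hdr {
    uint32_t size_in;
    uint32_t version;
    uint32_t type;
    int32_t  rc;
    uint32_t size_out;
};

#define HDR_VERSION 0x10001u
#define MAX_SIZE (16u * 1024u * 1024u)   /* the SZ_16M cap in the driver */

/* Re-implements the checks vbg_misc_device_ioctl() applies before
 * allocating and filling the kernel-side buffer. ioc_size plays the
 * role of _IOC_SIZE(req); it is 0 for variable-sized requests. */
static bool hdr_ok(const struct hdr *h, size_t ioc_size)
{
    size_t size;

    if (h->version != HDR_VERSION)
        return false;
    /* size_in must cover the header; size_out may be 0 (meaning
     * "same as size_in") but otherwise must also cover the header. */
    if (h->size_in < sizeof(*h) ||
        (h->size_out && h->size_out < sizeof(*h)))
        return false;
    size = h->size_in > h->size_out ? h->size_in : h->size_out;
    if (ioc_size && ioc_size != size)
        return false;
    return size <= MAX_SIZE;
}
```

Only after these checks pass does the driver allocate `max(size_in, size_out)` bytes, zero-fill the tail beyond `size_in`, and hand the buffer to `vbg_core_ioctl()`.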
diff --git a/drivers/virt/vboxguest/vboxguest_linux.c b/drivers/virt/vboxguest/vboxguest_linux.c
new file mode 100644
index 000000000000..82e280d38cc2
--- /dev/null
+++ b/drivers/virt/vboxguest/vboxguest_linux.c
@@ -0,0 +1,466 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | /* | ||
3 | * vboxguest linux pci driver, char-dev and input-device code. | ||
4 | * | ||
5 | * Copyright (C) 2006-2016 Oracle Corporation | ||
6 | */ | ||
7 | |||
8 | #include <linux/input.h> | ||
9 | #include <linux/kernel.h> | ||
10 | #include <linux/miscdevice.h> | ||
11 | #include <linux/module.h> | ||
12 | #include <linux/pci.h> | ||
13 | #include <linux/poll.h> | ||
14 | #include <linux/vbox_utils.h> | ||
15 | #include "vboxguest_core.h" | ||
16 | |||
17 | /** The device name. */ | ||
18 | #define DEVICE_NAME "vboxguest" | ||
19 | /** The device name for the device node open to everyone. */ | ||
20 | #define DEVICE_NAME_USER "vboxuser" | ||
21 | /** VirtualBox PCI vendor ID. */ | ||
22 | #define VBOX_VENDORID 0x80ee | ||
23 | /** VMMDev PCI card product ID. */ | ||
24 | #define VMMDEV_DEVICEID 0xcafe | ||
25 | |||
26 | /** Mutex protecting the global vbg_gdev pointer used by vbg_get/put_gdev. */ | ||
27 | static DEFINE_MUTEX(vbg_gdev_mutex); | ||
28 | /** Global vbg_gdev pointer used by vbg_get/put_gdev. */ | ||
29 | static struct vbg_dev *vbg_gdev; | ||
30 | |||
31 | static int vbg_misc_device_open(struct inode *inode, struct file *filp) | ||
32 | { | ||
33 | struct vbg_session *session; | ||
34 | struct vbg_dev *gdev; | ||
35 | |||
36 | /* misc_open sets filp->private_data to our misc device */ | ||
37 | gdev = container_of(filp->private_data, struct vbg_dev, misc_device); | ||
38 | |||
39 | session = vbg_core_open_session(gdev, false); | ||
40 | if (IS_ERR(session)) | ||
41 | return PTR_ERR(session); | ||
42 | |||
43 | filp->private_data = session; | ||
44 | return 0; | ||
45 | } | ||
46 | |||
47 | static int vbg_misc_device_user_open(struct inode *inode, struct file *filp) | ||
48 | { | ||
49 | struct vbg_session *session; | ||
50 | struct vbg_dev *gdev; | ||
51 | |||
52 | /* misc_open sets filp->private_data to our misc device */ | ||
53 | gdev = container_of(filp->private_data, struct vbg_dev, | ||
54 | misc_device_user); | ||
55 | |||
56 | session = vbg_core_open_session(gdev, true); | ||
57 | if (IS_ERR(session)) | ||
58 | return PTR_ERR(session); | ||
59 | |||
60 | filp->private_data = session; | ||
61 | return 0; | ||
62 | } | ||
63 | |||
64 | /** | ||
65 | * Close device. | ||
66 | * Return: 0 on success, negated errno on failure. | ||
67 | * @inode: Pointer to inode info structure. | ||
68 | * @filp: Associated file pointer. | ||
69 | */ | ||
70 | static int vbg_misc_device_close(struct inode *inode, struct file *filp) | ||
71 | { | ||
72 | vbg_core_close_session(filp->private_data); | ||
73 | filp->private_data = NULL; | ||
74 | return 0; | ||
75 | } | ||
76 | |||
77 | /** | ||
78 | * Device I/O Control entry point. | ||
79 | * Return: 0 on success, negated errno on failure. | ||
80 | * @filp: Associated file pointer. | ||
81 | * @req: The request specified to ioctl(). | ||
82 | * @arg: The argument specified to ioctl(). | ||
83 | */ | ||
84 | static long vbg_misc_device_ioctl(struct file *filp, unsigned int req, | ||
85 | unsigned long arg) | ||
86 | { | ||
87 | struct vbg_session *session = filp->private_data; | ||
88 | size_t returned_size, size; | ||
89 | struct vbg_ioctl_hdr hdr; | ||
90 | int ret = 0; | ||
91 | void *buf; | ||
92 | |||
93 | if (copy_from_user(&hdr, (void __user *)arg, sizeof(hdr))) | ||
94 | return -EFAULT; | ||
95 | |||
96 | if (hdr.version != VBG_IOCTL_HDR_VERSION) | ||
97 | return -EINVAL; | ||
98 | |||
99 | if (hdr.size_in < sizeof(hdr) || | ||
100 | (hdr.size_out && hdr.size_out < sizeof(hdr))) | ||
101 | return -EINVAL; | ||
102 | |||
103 | size = max(hdr.size_in, hdr.size_out); | ||
104 | if (_IOC_SIZE(req) && _IOC_SIZE(req) != size) | ||
105 | return -EINVAL; | ||
106 | if (size > SZ_16M) | ||
107 | return -E2BIG; | ||
108 | |||
109 | /* __GFP_DMA32 because IOCTL_VMMDEV_REQUEST passes this to the host */ | ||
110 | buf = kmalloc(size, GFP_KERNEL | __GFP_DMA32); | ||
111 | if (!buf) | ||
112 | return -ENOMEM; | ||
113 | |||
114 | if (copy_from_user(buf, (void __user *)arg, hdr.size_in)) { | ||
115 | ret = -EFAULT; | ||
116 | goto out; | ||
117 | } | ||
118 | if (hdr.size_in < size) | ||
119 | memset(buf + hdr.size_in, 0, size - hdr.size_in); | ||
120 | |||
121 | ret = vbg_core_ioctl(session, req, buf); | ||
122 | if (ret) | ||
123 | goto out; | ||
124 | |||
125 | returned_size = ((struct vbg_ioctl_hdr *)buf)->size_out; | ||
126 | if (returned_size > size) { | ||
127 | vbg_debug("%s: too much output data %zu > %zu\n", | ||
128 | __func__, returned_size, size); | ||
129 | returned_size = size; | ||
130 | } | ||
131 | if (copy_to_user((void __user *)arg, buf, returned_size) != 0) | ||
132 | ret = -EFAULT; | ||
133 | |||
134 | out: | ||
135 | kfree(buf); | ||
136 | |||
137 | return ret; | ||
138 | } | ||
139 | |||
140 | /** The file_operations structures. */ | ||
141 | static const struct file_operations vbg_misc_device_fops = { | ||
142 | .owner = THIS_MODULE, | ||
143 | .open = vbg_misc_device_open, | ||
144 | .release = vbg_misc_device_close, | ||
145 | .unlocked_ioctl = vbg_misc_device_ioctl, | ||
146 | #ifdef CONFIG_COMPAT | ||
147 | .compat_ioctl = vbg_misc_device_ioctl, | ||
148 | #endif | ||
149 | }; | ||
150 | static const struct file_operations vbg_misc_device_user_fops = { | ||
151 | .owner = THIS_MODULE, | ||
152 | .open = vbg_misc_device_user_open, | ||
153 | .release = vbg_misc_device_close, | ||
154 | .unlocked_ioctl = vbg_misc_device_ioctl, | ||
155 | #ifdef CONFIG_COMPAT | ||
156 | .compat_ioctl = vbg_misc_device_ioctl, | ||
157 | #endif | ||
158 | }; | ||
159 | |||
160 | /** | ||
161 | * Called when the input device is first opened. | ||
162 | * | ||
163 | * Sets up absolute mouse reporting. | ||
164 | */ | ||
165 | static int vbg_input_open(struct input_dev *input) | ||
166 | { | ||
167 | struct vbg_dev *gdev = input_get_drvdata(input); | ||
168 | u32 feat = VMMDEV_MOUSE_GUEST_CAN_ABSOLUTE | VMMDEV_MOUSE_NEW_PROTOCOL; | ||
169 | int ret; | ||
170 | |||
171 | ret = vbg_core_set_mouse_status(gdev, feat); | ||
172 | if (ret) | ||
173 | return ret; | ||
174 | |||
175 | return 0; | ||
176 | } | ||
177 | |||
178 | /** | ||
179 | * Called if all open handles to the input device are closed. | ||
180 | * | ||
181 | * Disables absolute reporting. | ||
182 | */ | ||
183 | static void vbg_input_close(struct input_dev *input) | ||
184 | { | ||
185 | struct vbg_dev *gdev = input_get_drvdata(input); | ||
186 | |||
187 | vbg_core_set_mouse_status(gdev, 0); | ||
188 | } | ||
189 | |||
190 | /** | ||
191 | * Creates the kernel input device. | ||
192 | * | ||
193 | * Return: 0 on success, negated errno on failure. | ||
194 | */ | ||
195 | static int vbg_create_input_device(struct vbg_dev *gdev) | ||
196 | { | ||
197 | struct input_dev *input; | ||
198 | |||
199 | input = devm_input_allocate_device(gdev->dev); | ||
200 | if (!input) | ||
201 | return -ENOMEM; | ||
202 | |||
203 | input->id.bustype = BUS_PCI; | ||
204 | input->id.vendor = VBOX_VENDORID; | ||
205 | input->id.product = VMMDEV_DEVICEID; | ||
206 | input->open = vbg_input_open; | ||
207 | input->close = vbg_input_close; | ||
208 | input->dev.parent = gdev->dev; | ||
209 | input->name = "VirtualBox mouse integration"; | ||
210 | |||
211 | input_set_abs_params(input, ABS_X, VMMDEV_MOUSE_RANGE_MIN, | ||
212 | VMMDEV_MOUSE_RANGE_MAX, 0, 0); | ||
213 | input_set_abs_params(input, ABS_Y, VMMDEV_MOUSE_RANGE_MIN, | ||
214 | VMMDEV_MOUSE_RANGE_MAX, 0, 0); | ||
215 | input_set_capability(input, EV_KEY, BTN_MOUSE); | ||
216 | input_set_drvdata(input, gdev); | ||
217 | |||
218 | gdev->input = input; | ||
219 | |||
220 | return input_register_device(gdev->input); | ||
221 | } | ||
222 | |||
223 | static ssize_t host_version_show(struct device *dev, | ||
224 | struct device_attribute *attr, char *buf) | ||
225 | { | ||
226 | struct vbg_dev *gdev = dev_get_drvdata(dev); | ||
227 | |||
228 | return sprintf(buf, "%s\n", gdev->host_version); | ||
229 | } | ||
230 | |||
231 | static ssize_t host_features_show(struct device *dev, | ||
232 | struct device_attribute *attr, char *buf) | ||
233 | { | ||
234 | struct vbg_dev *gdev = dev_get_drvdata(dev); | ||
235 | |||
236 | return sprintf(buf, "%#x\n", gdev->host_features); | ||
237 | } | ||
238 | |||
239 | static DEVICE_ATTR_RO(host_version); | ||
240 | static DEVICE_ATTR_RO(host_features); | ||
241 | |||
242 | /** | ||
243 | * Does the PCI detection and init of the device. | ||
244 | * | ||
245 | * Return: 0 on success, negated errno on failure. | ||
246 | */ | ||
247 | static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id) | ||
248 | { | ||
249 | struct device *dev = &pci->dev; | ||
250 | resource_size_t io, io_len, mmio, mmio_len; | ||
251 | struct vmmdev_memory *vmmdev; | ||
252 | struct vbg_dev *gdev; | ||
253 | int ret; | ||
254 | |||
255 | gdev = devm_kzalloc(dev, sizeof(*gdev), GFP_KERNEL); | ||
256 | if (!gdev) | ||
257 | return -ENOMEM; | ||
258 | |||
259 | ret = pci_enable_device(pci); | ||
260 | if (ret != 0) { | ||
261 | vbg_err("vboxguest: Error enabling device: %d\n", ret); | ||
262 | return ret; | ||
263 | } | ||
264 | |||
265 | ret = -ENODEV; | ||
266 | |||
267 | io = pci_resource_start(pci, 0); | ||
268 | io_len = pci_resource_len(pci, 0); | ||
269 | if (!io || !io_len) { | ||
270 | vbg_err("vboxguest: Error IO-port resource (0) is missing\n"); | ||
271 | goto err_disable_pcidev; | ||
272 | } | ||
273 | if (devm_request_region(dev, io, io_len, DEVICE_NAME) == NULL) { | ||
274 | vbg_err("vboxguest: Error could not claim IO resource\n"); | ||
275 | ret = -EBUSY; | ||
276 | goto err_disable_pcidev; | ||
277 | } | ||
278 | |||
279 | mmio = pci_resource_start(pci, 1); | ||
280 | mmio_len = pci_resource_len(pci, 1); | ||
281 | if (!mmio || !mmio_len) { | ||
282 | vbg_err("vboxguest: Error MMIO resource (1) is missing\n"); | ||
283 | goto err_disable_pcidev; | ||
284 | } | ||
285 | |||
286 | if (devm_request_mem_region(dev, mmio, mmio_len, DEVICE_NAME) == NULL) { | ||
287 | vbg_err("vboxguest: Error could not claim MMIO resource\n"); | ||
288 | ret = -EBUSY; | ||
289 | goto err_disable_pcidev; | ||
290 | } | ||
291 | |||
292 | vmmdev = devm_ioremap(dev, mmio, mmio_len); | ||
293 | if (!vmmdev) { | ||
294 | vbg_err("vboxguest: Error ioremap failed; MMIO addr=%pap size=%pap\n", | ||
295 | &mmio, &mmio_len); | ||
296 | goto err_disable_pcidev; | ||
297 | } | ||
298 | |||
299 | /* Validate MMIO region version and size. */ | ||
300 | if (vmmdev->version != VMMDEV_MEMORY_VERSION || | ||
301 | vmmdev->size < 32 || vmmdev->size > mmio_len) { | ||
302 | vbg_err("vboxguest: Bogus VMMDev memory; version=%08x (expected %08x) size=%d (expected <= %d)\n", | ||
303 | vmmdev->version, VMMDEV_MEMORY_VERSION, | ||
304 | vmmdev->size, (int)mmio_len); | ||
305 | goto err_disable_pcidev; | ||
306 | } | ||
307 | |||
308 | gdev->io_port = io; | ||
309 | gdev->mmio = vmmdev; | ||
310 | gdev->dev = dev; | ||
311 | gdev->misc_device.minor = MISC_DYNAMIC_MINOR; | ||
312 | gdev->misc_device.name = DEVICE_NAME; | ||
313 | gdev->misc_device.fops = &vbg_misc_device_fops; | ||
314 | gdev->misc_device_user.minor = MISC_DYNAMIC_MINOR; | ||
315 | gdev->misc_device_user.name = DEVICE_NAME_USER; | ||
316 | gdev->misc_device_user.fops = &vbg_misc_device_user_fops; | ||
317 | |||
318 | ret = vbg_core_init(gdev, VMMDEV_EVENT_MOUSE_POSITION_CHANGED); | ||
319 | if (ret) | ||
320 | goto err_disable_pcidev; | ||
321 | |||
322 | ret = vbg_create_input_device(gdev); | ||
323 | if (ret) { | ||
324 | vbg_err("vboxguest: Error creating input device: %d\n", ret); | ||
325 | goto err_vbg_core_exit; | ||
326 | } | ||
327 | |||
328 | ret = devm_request_irq(dev, pci->irq, vbg_core_isr, IRQF_SHARED, | ||
329 | DEVICE_NAME, gdev); | ||
330 | if (ret) { | ||
331 | vbg_err("vboxguest: Error requesting irq: %d\n", ret); | ||
332 | goto err_vbg_core_exit; | ||
333 | } | ||
334 | |||
335 | ret = misc_register(&gdev->misc_device); | ||
336 | if (ret) { | ||
337 | vbg_err("vboxguest: Error misc_register %s failed: %d\n", | ||
338 | DEVICE_NAME, ret); | ||
339 | goto err_vbg_core_exit; | ||
340 | } | ||
341 | |||
342 | ret = misc_register(&gdev->misc_device_user); | ||
343 | if (ret) { | ||
344 | vbg_err("vboxguest: Error misc_register %s failed: %d\n", | ||
345 | DEVICE_NAME_USER, ret); | ||
346 | goto err_unregister_misc_device; | ||
347 | } | ||
348 | |||
349 | mutex_lock(&vbg_gdev_mutex); | ||
350 | if (!vbg_gdev) | ||
351 | vbg_gdev = gdev; | ||
352 | else | ||
353 | ret = -EBUSY; | ||
354 | mutex_unlock(&vbg_gdev_mutex); | ||
355 | |||
356 | if (ret) { | ||
357 | vbg_err("vboxguest: Error more than one vbox guest pci device\n"); | ||
358 | goto err_unregister_misc_device_user; | ||
359 | } | ||
360 | |||
361 | pci_set_drvdata(pci, gdev); | ||
362 | device_create_file(dev, &dev_attr_host_version); | ||
363 | device_create_file(dev, &dev_attr_host_features); | ||
364 | |||
365 | vbg_info("vboxguest: misc device minor %d, IRQ %d, I/O port %x, MMIO at %pap (size %pap)\n", | ||
366 | gdev->misc_device.minor, pci->irq, gdev->io_port, | ||
367 | &mmio, &mmio_len); | ||
368 | |||
369 | return 0; | ||
370 | |||
371 | err_unregister_misc_device_user: | ||
372 | misc_deregister(&gdev->misc_device_user); | ||
373 | err_unregister_misc_device: | ||
374 | misc_deregister(&gdev->misc_device); | ||
375 | err_vbg_core_exit: | ||
376 | vbg_core_exit(gdev); | ||
377 | err_disable_pcidev: | ||
378 | pci_disable_device(pci); | ||
379 | |||
380 | return ret; | ||
381 | } | ||
382 | |||
383 | static void vbg_pci_remove(struct pci_dev *pci) | ||
384 | { | ||
385 | struct vbg_dev *gdev = pci_get_drvdata(pci); | ||
386 | |||
387 | mutex_lock(&vbg_gdev_mutex); | ||
388 | vbg_gdev = NULL; | ||
389 | mutex_unlock(&vbg_gdev_mutex); | ||
390 | |||
391 | device_remove_file(gdev->dev, &dev_attr_host_features); | ||
392 | device_remove_file(gdev->dev, &dev_attr_host_version); | ||
393 | misc_deregister(&gdev->misc_device_user); | ||
394 | misc_deregister(&gdev->misc_device); | ||
395 | vbg_core_exit(gdev); | ||
396 | pci_disable_device(pci); | ||
397 | } | ||
398 | |||
399 | struct vbg_dev *vbg_get_gdev(void) | ||
400 | { | ||
401 | mutex_lock(&vbg_gdev_mutex); | ||
402 | |||
403 | /* | ||
404 | * Note: on success we keep the mutex locked until vbg_put_gdev(); | ||
405 | * this stops vbg_pci_remove from removing the device from underneath | ||
406 | * vboxsf. vboxsf will only hold a reference for a short while. | ||
407 | */ | ||
408 | if (vbg_gdev) | ||
409 | return vbg_gdev; | ||
410 | |||
411 | mutex_unlock(&vbg_gdev_mutex); | ||
412 | return ERR_PTR(-ENODEV); | ||
413 | } | ||
414 | EXPORT_SYMBOL(vbg_get_gdev); | ||
415 | |||
416 | void vbg_put_gdev(struct vbg_dev *gdev) | ||
417 | { | ||
418 | WARN_ON(gdev != vbg_gdev); | ||
419 | mutex_unlock(&vbg_gdev_mutex); | ||
420 | } | ||
421 | EXPORT_SYMBOL(vbg_put_gdev); | ||
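The `vbg_get_gdev()`/`vbg_put_gdev()` pair above relies on an unusual contract: on success the getter returns with the mutex still held, pinning the device until the matching put. A minimal single-threaded userspace model of that contract (a `bool` stands in for the mutex; names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Model of the vbg_get_gdev()/vbg_put_gdev() contract: the getter
 * returns with the lock still held, so vbg_pci_remove() cannot tear
 * the device down between a get and its matching put.
 */
static bool dev_mutex_locked;
static void *the_dev;

static void *get_dev(void)
{
	dev_mutex_locked = true;	/* mutex_lock() */
	if (the_dev)
		return the_dev;		/* success: lock stays held */
	dev_mutex_locked = false;	/* mutex_unlock() on failure */
	return NULL;
}

static void put_dev(void)
{
	dev_mutex_locked = false;	/* mutex_unlock() */
}
```

The asymmetry (lock released only on the failure path) is the whole point: a caller such as vboxsf holds the lock only for the short window between get and put.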
422 | |||
423 | /** | ||
424 | * Callback for mouse events. | ||
425 | * | ||
426 | * This is called at the end of the ISR, after leaving the event spinlock, if | ||
427 | * VMMDEV_EVENT_MOUSE_POSITION_CHANGED was raised by the host. | ||
428 | * | ||
429 | * @gdev: The device extension. | ||
430 | */ | ||
431 | void vbg_linux_mouse_event(struct vbg_dev *gdev) | ||
432 | { | ||
433 | int rc; | ||
434 | |||
435 | /* Report events to the kernel input device */ | ||
436 | gdev->mouse_status_req->mouse_features = 0; | ||
437 | gdev->mouse_status_req->pointer_pos_x = 0; | ||
438 | gdev->mouse_status_req->pointer_pos_y = 0; | ||
439 | rc = vbg_req_perform(gdev, gdev->mouse_status_req); | ||
440 | if (rc >= 0) { | ||
441 | input_report_abs(gdev->input, ABS_X, | ||
442 | gdev->mouse_status_req->pointer_pos_x); | ||
443 | input_report_abs(gdev->input, ABS_Y, | ||
444 | gdev->mouse_status_req->pointer_pos_y); | ||
445 | input_sync(gdev->input); | ||
446 | } | ||
447 | } | ||
448 | |||
449 | static const struct pci_device_id vbg_pci_ids[] = { | ||
450 | { .vendor = VBOX_VENDORID, .device = VMMDEV_DEVICEID }, | ||
451 | {} | ||
452 | }; | ||
453 | MODULE_DEVICE_TABLE(pci, vbg_pci_ids); | ||
454 | |||
455 | static struct pci_driver vbg_pci_driver = { | ||
456 | .name = DEVICE_NAME, | ||
457 | .id_table = vbg_pci_ids, | ||
458 | .probe = vbg_pci_probe, | ||
459 | .remove = vbg_pci_remove, | ||
460 | }; | ||
461 | |||
462 | module_pci_driver(vbg_pci_driver); | ||
463 | |||
464 | MODULE_AUTHOR("Oracle Corporation"); | ||
465 | MODULE_DESCRIPTION("Oracle VM VirtualBox Guest Additions for Linux Module"); | ||
466 | MODULE_LICENSE("GPL"); | ||
diff --git a/drivers/virt/vboxguest/vboxguest_utils.c b/drivers/virt/vboxguest/vboxguest_utils.c
new file mode 100644
index 000000000000..0f0dab8023cf
--- /dev/null
+++ b/drivers/virt/vboxguest/vboxguest_utils.c
@@ -0,0 +1,803 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* | ||
3 | * vboxguest vmm-req and hgcm-call code, VBoxGuestR0LibHGCMInternal.cpp, | ||
4 | * VBoxGuestR0LibGenericRequest.cpp and RTErrConvertToErrno.cpp in vbox svn. | ||
5 | * | ||
6 | * Copyright (C) 2006-2016 Oracle Corporation | ||
7 | */ | ||
8 | |||
9 | #include <linux/errno.h> | ||
10 | #include <linux/kernel.h> | ||
11 | #include <linux/mm.h> | ||
12 | #include <linux/module.h> | ||
13 | #include <linux/sizes.h> | ||
14 | #include <linux/slab.h> | ||
15 | #include <linux/uaccess.h> | ||
16 | #include <linux/vmalloc.h> | ||
17 | #include <linux/vbox_err.h> | ||
18 | #include <linux/vbox_utils.h> | ||
19 | #include "vboxguest_core.h" | ||
20 | |||
21 | /* Get the pointer to the first parameter of a HGCM call request. */ | ||
22 | #define VMMDEV_HGCM_CALL_PARMS(a) \ | ||
23 | ((struct vmmdev_hgcm_function_parameter *)( \ | ||
24 | (u8 *)(a) + sizeof(struct vmmdev_hgcm_call))) | ||
25 | |||
26 | /* The max parameter buffer size for a user request. */ | ||
27 | #define VBG_MAX_HGCM_USER_PARM (24 * SZ_1M) | ||
28 | /* The max parameter buffer size for a kernel request. */ | ||
29 | #define VBG_MAX_HGCM_KERNEL_PARM (16 * SZ_1M) | ||
30 | |||
31 | #define VBG_DEBUG_PORT 0x504 | ||
32 | |||
33 | /* This protects vbg_log_buf and serializes VBG_DEBUG_PORT accesses */ | ||
34 | static DEFINE_SPINLOCK(vbg_log_lock); | ||
35 | static char vbg_log_buf[128]; | ||
36 | |||
37 | #define VBG_LOG(name, pr_func) \ | ||
38 | void name(const char *fmt, ...) \ | ||
39 | { \ | ||
40 | unsigned long flags; \ | ||
41 | va_list args; \ | ||
42 | int i, count; \ | ||
43 | \ | ||
44 | va_start(args, fmt); \ | ||
45 | spin_lock_irqsave(&vbg_log_lock, flags); \ | ||
46 | \ | ||
47 | count = vscnprintf(vbg_log_buf, sizeof(vbg_log_buf), fmt, args);\ | ||
48 | for (i = 0; i < count; i++) \ | ||
49 | outb(vbg_log_buf[i], VBG_DEBUG_PORT); \ | ||
50 | \ | ||
51 | pr_func("%s", vbg_log_buf); \ | ||
52 | \ | ||
53 | spin_unlock_irqrestore(&vbg_log_lock, flags); \ | ||
54 | va_end(args); \ | ||
55 | } \ | ||
56 | EXPORT_SYMBOL(name) | ||
57 | |||
58 | VBG_LOG(vbg_info, pr_info); | ||
59 | VBG_LOG(vbg_warn, pr_warn); | ||
60 | VBG_LOG(vbg_err, pr_err); | ||
61 | #if defined(DEBUG) && !defined(CONFIG_DYNAMIC_DEBUG) | ||
62 | VBG_LOG(vbg_debug, pr_debug); | ||
63 | #endif | ||
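The `VBG_LOG()` macro above stamps out several variadic logging functions sharing one buffer. A userspace sketch of the same pattern, with the kernel-only pieces (the spinlock serialization, `vscnprintf()`, and the byte-by-byte `outb()` to `VBG_DEBUG_PORT`) omitted or replaced by their libc equivalents:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static char demo_log_buf[128];

/*
 * Userspace sketch of the VBG_LOG() pattern: one macro generates
 * several variadic wrappers that format into a shared buffer and
 * then emit it. Unlike the driver, there is no locking here, so
 * this version is not safe for concurrent callers.
 */
#define DEMO_LOG(name, stream)						\
void name(const char *fmt, ...)						\
{									\
	va_list args;							\
									\
	va_start(args, fmt);						\
	vsnprintf(demo_log_buf, sizeof(demo_log_buf), fmt, args);	\
	va_end(args);							\
	fprintf(stream, "%s", demo_log_buf);				\
}

DEMO_LOG(demo_info, stdout)
DEMO_LOG(demo_err, stderr)
```

The driver's version keeps the shared buffer safe by taking `vbg_log_lock` with interrupts disabled around both the formatting and the port I/O.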
64 | |||
65 | void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type) | ||
66 | { | ||
67 | struct vmmdev_request_header *req; | ||
68 | |||
69 | req = kmalloc(len, GFP_KERNEL | __GFP_DMA32); | ||
70 | if (!req) | ||
71 | return NULL; | ||
72 | |||
73 | memset(req, 0xaa, len); | ||
74 | |||
75 | req->size = len; | ||
76 | req->version = VMMDEV_REQUEST_HEADER_VERSION; | ||
77 | req->request_type = req_type; | ||
78 | req->rc = VERR_GENERAL_FAILURE; | ||
79 | req->reserved1 = 0; | ||
80 | req->reserved2 = 0; | ||
81 | |||
82 | return req; | ||
83 | } | ||
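`vbg_req_alloc()` deliberately poisons the whole allocation with `0xaa` before filling in the header, so any field a caller forgets to initialize stands out. A userspace model of that idiom, with placeholder constants and a simplified header struct (the real layout lives in the VMMDev headers):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder constants, NOT the real VMMDev values. */
#define DEMO_REQ_VERSION         0x10001
#define DEMO_ERR_GENERAL_FAILURE (-1)

struct req_header {
	uint32_t size;
	uint32_t version;
	uint32_t request_type;
	int32_t  rc;
	uint32_t reserved1;
	uint32_t reserved2;
};

/*
 * Model of vbg_req_alloc(): poison the allocation, then fill in the
 * header so uninitialized-field bugs are easy to spot in a dump.
 */
static void *req_alloc(size_t len, uint32_t req_type)
{
	struct req_header *req = malloc(len);

	if (!req)
		return NULL;

	memset(req, 0xaa, len);

	req->size = (uint32_t)len;
	req->version = DEMO_REQ_VERSION;
	req->request_type = req_type;
	req->rc = DEMO_ERR_GENERAL_FAILURE;
	req->reserved1 = 0;
	req->reserved2 = 0;

	return req;
}
```

The kernel version additionally requests `__GFP_DMA32` because the physical address of the request is handed to the host through a 32-bit I/O port write.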
84 | |||
85 | /* Note this function returns a VBox status code, not a negative errno!! */ | ||
86 | int vbg_req_perform(struct vbg_dev *gdev, void *req) | ||
87 | { | ||
88 | unsigned long phys_req = virt_to_phys(req); | ||
89 | |||
90 | outl(phys_req, gdev->io_port + VMMDEV_PORT_OFF_REQUEST); | ||
91 | /* | ||
92 | * The host changes the request as a result of the outl, make sure | ||
93 | * the outl and any reads of the req happen in the correct order. | ||
94 | */ | ||
95 | mb(); | ||
96 | |||
97 | return ((struct vmmdev_request_header *)req)->rc; | ||
98 | } | ||
99 | |||
100 | static bool hgcm_req_done(struct vbg_dev *gdev, | ||
101 | struct vmmdev_hgcmreq_header *header) | ||
102 | { | ||
103 | unsigned long flags; | ||
104 | bool done; | ||
105 | |||
106 | spin_lock_irqsave(&gdev->event_spinlock, flags); | ||
107 | done = header->flags & VMMDEV_HGCM_REQ_DONE; | ||
108 | spin_unlock_irqrestore(&gdev->event_spinlock, flags); | ||
109 | |||
110 | return done; | ||
111 | } | ||
112 | |||
113 | int vbg_hgcm_connect(struct vbg_dev *gdev, | ||
114 | struct vmmdev_hgcm_service_location *loc, | ||
115 | u32 *client_id, int *vbox_status) | ||
116 | { | ||
117 | struct vmmdev_hgcm_connect *hgcm_connect = NULL; | ||
118 | int rc; | ||
119 | |||
120 | hgcm_connect = vbg_req_alloc(sizeof(*hgcm_connect), | ||
121 | VMMDEVREQ_HGCM_CONNECT); | ||
122 | if (!hgcm_connect) | ||
123 | return -ENOMEM; | ||
124 | |||
125 | hgcm_connect->header.flags = 0; | ||
126 | memcpy(&hgcm_connect->loc, loc, sizeof(*loc)); | ||
127 | hgcm_connect->client_id = 0; | ||
128 | |||
129 | rc = vbg_req_perform(gdev, hgcm_connect); | ||
130 | |||
131 | if (rc == VINF_HGCM_ASYNC_EXECUTE) | ||
132 | wait_event(gdev->hgcm_wq, | ||
133 | hgcm_req_done(gdev, &hgcm_connect->header)); | ||
134 | |||
135 | if (rc >= 0) { | ||
136 | *client_id = hgcm_connect->client_id; | ||
137 | rc = hgcm_connect->header.result; | ||
138 | } | ||
139 | |||
140 | kfree(hgcm_connect); | ||
141 | |||
142 | *vbox_status = rc; | ||
143 | return 0; | ||
144 | } | ||
145 | EXPORT_SYMBOL(vbg_hgcm_connect); | ||
146 | |||
147 | int vbg_hgcm_disconnect(struct vbg_dev *gdev, u32 client_id, int *vbox_status) | ||
148 | { | ||
149 | struct vmmdev_hgcm_disconnect *hgcm_disconnect = NULL; | ||
150 | int rc; | ||
151 | |||
152 | hgcm_disconnect = vbg_req_alloc(sizeof(*hgcm_disconnect), | ||
153 | VMMDEVREQ_HGCM_DISCONNECT); | ||
154 | if (!hgcm_disconnect) | ||
155 | return -ENOMEM; | ||
156 | |||
157 | hgcm_disconnect->header.flags = 0; | ||
158 | hgcm_disconnect->client_id = client_id; | ||
159 | |||
160 | rc = vbg_req_perform(gdev, hgcm_disconnect); | ||
161 | |||
162 | if (rc == VINF_HGCM_ASYNC_EXECUTE) | ||
163 | wait_event(gdev->hgcm_wq, | ||
164 | hgcm_req_done(gdev, &hgcm_disconnect->header)); | ||
165 | |||
166 | if (rc >= 0) | ||
167 | rc = hgcm_disconnect->header.result; | ||
168 | |||
169 | kfree(hgcm_disconnect); | ||
170 | |||
171 | *vbox_status = rc; | ||
172 | return 0; | ||
173 | } | ||
174 | EXPORT_SYMBOL(vbg_hgcm_disconnect); | ||
175 | |||
176 | static u32 hgcm_call_buf_size_in_pages(void *buf, u32 len) | ||
177 | { | ||
178 | u32 size = PAGE_ALIGN(len + ((unsigned long)buf & ~PAGE_MASK)); | ||
179 | |||
180 | return size >> PAGE_SHIFT; | ||
181 | } | ||
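The arithmetic in `hgcm_call_buf_size_in_pages()` adds the buffer's offset within its first page to the length before rounding up, so an unaligned buffer can span one more page than the length alone suggests. A standalone model of the same computation (the `DEMO_` macros mirror the kernel's `PAGE_*` helpers for a 4 KiB page):

```c
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12
#define DEMO_PAGE_SIZE  (1UL << DEMO_PAGE_SHIFT)
#define DEMO_PAGE_MASK  (~(DEMO_PAGE_SIZE - 1))
/* Round up to the next page boundary, like the kernel's PAGE_ALIGN(). */
#define DEMO_PAGE_ALIGN(x) (((x) + DEMO_PAGE_SIZE - 1) & DEMO_PAGE_MASK)

/*
 * Model of hgcm_call_buf_size_in_pages(): include the offset of the
 * buffer within its first page, then round up to whole pages.
 */
static uint32_t buf_size_in_pages(uintptr_t buf, uint32_t len)
{
	uint32_t size = (uint32_t)DEMO_PAGE_ALIGN(len + (buf & ~DEMO_PAGE_MASK));

	return size >> DEMO_PAGE_SHIFT;
}
```

For example, an 8-byte buffer starting 4 bytes before a page boundary needs two pages, even though its length fits in one.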
182 | |||
183 | static void hgcm_call_add_pagelist_size(void *buf, u32 len, size_t *extra) | ||
184 | { | ||
185 | u32 page_count; | ||
186 | |||
187 | page_count = hgcm_call_buf_size_in_pages(buf, len); | ||
188 | *extra += offsetof(struct vmmdev_hgcm_pagelist, pages[page_count]); | ||
189 | } | ||
190 | |||
191 | static int hgcm_call_preprocess_linaddr( | ||
192 | const struct vmmdev_hgcm_function_parameter *src_parm, | ||
193 | void **bounce_buf_ret, size_t *extra) | ||
194 | { | ||
195 | void *buf, *bounce_buf; | ||
196 | bool copy_in; | ||
197 | u32 len; | ||
198 | int ret; | ||
199 | |||
200 | buf = (void *)src_parm->u.pointer.u.linear_addr; | ||
201 | len = src_parm->u.pointer.size; | ||
202 | copy_in = src_parm->type != VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT; | ||
203 | |||
204 | if (len > VBG_MAX_HGCM_USER_PARM) | ||
205 | return -E2BIG; | ||
206 | |||
207 | bounce_buf = kvmalloc(len, GFP_KERNEL); | ||
208 | if (!bounce_buf) | ||
209 | return -ENOMEM; | ||
210 | |||
211 | /* | ||
212 | * Store the buffer before the copy, so that the caller frees it | ||
213 | * on the copy_from_user() error path (avoids leaking bounce_buf). | ||
214 | */ | ||
215 | *bounce_buf_ret = bounce_buf; | ||
216 | |||
217 | if (copy_in) { | ||
218 | ret = copy_from_user(bounce_buf, (void __user *)buf, len); | ||
219 | if (ret) | ||
220 | return -EFAULT; | ||
221 | } else { | ||
222 | memset(bounce_buf, 0, len); | ||
223 | } | ||
224 | |||
225 | hgcm_call_add_pagelist_size(bounce_buf, len, extra); | ||
226 | return 0; | ||
222 | } | ||
223 | |||
224 | /** | ||
225 | * Preprocesses the HGCM call: validates parameters, allocates bounce | ||
226 | * buffers and figures out how much extra storage is needed for page | ||
227 | * lists. | ||
228 | * Return: 0 or negative errno value. | ||
229 | * @src_parm: Pointer to source function call parameters. | ||
230 | * @parm_count: Number of function call parameters. | ||
231 | * @bounce_bufs_ret: Where to return the allocated bounce-buffer array. | ||
232 | * @extra: Where to return the extra request space needed for | ||
233 | * physical page lists. | ||
234 | */ | ||
234 | static int hgcm_call_preprocess( | ||
235 | const struct vmmdev_hgcm_function_parameter *src_parm, | ||
236 | u32 parm_count, void ***bounce_bufs_ret, size_t *extra) | ||
237 | { | ||
238 | void *buf, **bounce_bufs = NULL; | ||
239 | u32 i, len; | ||
240 | int ret; | ||
241 | |||
242 | for (i = 0; i < parm_count; i++, src_parm++) { | ||
243 | switch (src_parm->type) { | ||
244 | case VMMDEV_HGCM_PARM_TYPE_32BIT: | ||
245 | case VMMDEV_HGCM_PARM_TYPE_64BIT: | ||
246 | break; | ||
247 | |||
248 | case VMMDEV_HGCM_PARM_TYPE_LINADDR: | ||
249 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: | ||
250 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: | ||
251 | if (!bounce_bufs) { | ||
252 | bounce_bufs = kcalloc(parm_count, | ||
253 | sizeof(void *), | ||
254 | GFP_KERNEL); | ||
255 | if (!bounce_bufs) | ||
256 | return -ENOMEM; | ||
257 | |||
258 | *bounce_bufs_ret = bounce_bufs; | ||
259 | } | ||
260 | |||
261 | ret = hgcm_call_preprocess_linaddr(src_parm, | ||
262 | &bounce_bufs[i], | ||
263 | extra); | ||
264 | if (ret) | ||
265 | return ret; | ||
266 | |||
267 | break; | ||
268 | |||
269 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: | ||
270 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: | ||
271 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: | ||
272 | buf = (void *)src_parm->u.pointer.u.linear_addr; | ||
273 | len = src_parm->u.pointer.size; | ||
274 | if (WARN_ON(len > VBG_MAX_HGCM_KERNEL_PARM)) | ||
275 | return -E2BIG; | ||
276 | |||
277 | hgcm_call_add_pagelist_size(buf, len, extra); | ||
278 | break; | ||
279 | |||
280 | default: | ||
281 | return -EINVAL; | ||
282 | } | ||
283 | } | ||
284 | |||
285 | return 0; | ||
286 | } | ||
287 | |||
288 | /** | ||
289 | * Translates linear address types to page list direction flags. | ||
290 | * | ||
291 | * Return: page list flags. | ||
292 | * @type: The type. | ||
293 | */ | ||
294 | static u32 hgcm_call_linear_addr_type_to_pagelist_flags( | ||
295 | enum vmmdev_hgcm_function_parameter_type type) | ||
296 | { | ||
297 | switch (type) { | ||
298 | default: | ||
299 | WARN_ON(1); | ||
300 | /* Fall through */ | ||
301 | case VMMDEV_HGCM_PARM_TYPE_LINADDR: | ||
302 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: | ||
303 | return VMMDEV_HGCM_F_PARM_DIRECTION_BOTH; | ||
304 | |||
305 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: | ||
306 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: | ||
307 | return VMMDEV_HGCM_F_PARM_DIRECTION_TO_HOST; | ||
308 | |||
309 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: | ||
310 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: | ||
311 | return VMMDEV_HGCM_F_PARM_DIRECTION_FROM_HOST; | ||
312 | } | ||
313 | } | ||
314 | |||
315 | static void hgcm_call_init_linaddr(struct vmmdev_hgcm_call *call, | ||
316 | struct vmmdev_hgcm_function_parameter *dst_parm, void *buf, u32 len, | ||
317 | enum vmmdev_hgcm_function_parameter_type type, u32 *off_extra) | ||
318 | { | ||
319 | struct vmmdev_hgcm_pagelist *dst_pg_lst; | ||
320 | struct page *page; | ||
321 | bool is_vmalloc; | ||
322 | u32 i, page_count; | ||
323 | |||
324 | dst_parm->type = type; | ||
325 | |||
326 | if (len == 0) { | ||
327 | dst_parm->u.pointer.size = 0; | ||
328 | dst_parm->u.pointer.u.linear_addr = 0; | ||
329 | return; | ||
330 | } | ||
331 | |||
332 | dst_pg_lst = (void *)call + *off_extra; | ||
333 | page_count = hgcm_call_buf_size_in_pages(buf, len); | ||
334 | is_vmalloc = is_vmalloc_addr(buf); | ||
335 | |||
336 | dst_parm->type = VMMDEV_HGCM_PARM_TYPE_PAGELIST; | ||
337 | dst_parm->u.page_list.size = len; | ||
338 | dst_parm->u.page_list.offset = *off_extra; | ||
339 | dst_pg_lst->flags = hgcm_call_linear_addr_type_to_pagelist_flags(type); | ||
340 | dst_pg_lst->offset_first_page = (unsigned long)buf & ~PAGE_MASK; | ||
341 | dst_pg_lst->page_count = page_count; | ||
342 | |||
343 | for (i = 0; i < page_count; i++) { | ||
344 | if (is_vmalloc) | ||
345 | page = vmalloc_to_page(buf); | ||
346 | else | ||
347 | page = virt_to_page(buf); | ||
348 | |||
349 | dst_pg_lst->pages[i] = page_to_phys(page); | ||
350 | buf += PAGE_SIZE; | ||
351 | } | ||
352 | |||
353 | *off_extra += offsetof(struct vmmdev_hgcm_pagelist, pages[page_count]); | ||
354 | } | ||
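Both `hgcm_call_add_pagelist_size()` and `hgcm_call_init_linaddr()` size the page list with `offsetof(struct vmmdev_hgcm_pagelist, pages[page_count])` rather than `sizeof`, because `pages[]` is a flexible array member. A self-contained illustration of that sizing idiom (the struct here mirrors the driver's field names, but the exact widths are assumptions for the sketch):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative page-list layout: a fixed header followed by a
 * flexible array of physical page addresses.
 */
struct demo_pagelist {
	uint32_t flags;
	uint16_t offset_first_page;
	uint16_t page_count;
	uint64_t pages[];	/* flexible array member */
};

/*
 * Space needed for a page list with page_count entries: the header
 * plus page_count array slots. sizeof(struct demo_pagelist) would
 * not include the array at all.
 */
static size_t pagelist_size(uint16_t page_count)
{
	return offsetof(struct demo_pagelist, pages[page_count]);
}
```

Using `offsetof` with a non-constant index in the member designator is a GCC/Clang extension, but one the kernel itself relies on.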
355 | |||
356 | /** | ||
357 | * Initializes the call request that we're sending to the host. | ||
358 | * @call: The call to initialize. | ||
359 | * @client_id: The client ID of the caller. | ||
360 | * @function: The function number of the function to call. | ||
361 | * @src_parm: Pointer to source function call parameters. | ||
362 | * @parm_count: Number of function call parameters. | ||
363 | * @bounce_bufs: The bouncebuffer array. | ||
364 | */ | ||
365 | static void hgcm_call_init_call( | ||
366 | struct vmmdev_hgcm_call *call, u32 client_id, u32 function, | ||
367 | const struct vmmdev_hgcm_function_parameter *src_parm, | ||
368 | u32 parm_count, void **bounce_bufs) | ||
369 | { | ||
370 | struct vmmdev_hgcm_function_parameter *dst_parm = | ||
371 | VMMDEV_HGCM_CALL_PARMS(call); | ||
372 | u32 i, off_extra = (uintptr_t)(dst_parm + parm_count) - (uintptr_t)call; | ||
373 | void *buf; | ||
374 | |||
375 | call->header.flags = 0; | ||
376 | call->header.result = VINF_SUCCESS; | ||
377 | call->client_id = client_id; | ||
378 | call->function = function; | ||
379 | call->parm_count = parm_count; | ||
380 | |||
381 | for (i = 0; i < parm_count; i++, src_parm++, dst_parm++) { | ||
382 | switch (src_parm->type) { | ||
383 | case VMMDEV_HGCM_PARM_TYPE_32BIT: | ||
384 | case VMMDEV_HGCM_PARM_TYPE_64BIT: | ||
385 | *dst_parm = *src_parm; | ||
386 | break; | ||
387 | |||
388 | case VMMDEV_HGCM_PARM_TYPE_LINADDR: | ||
389 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: | ||
390 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: | ||
391 | hgcm_call_init_linaddr(call, dst_parm, bounce_bufs[i], | ||
392 | src_parm->u.pointer.size, | ||
393 | src_parm->type, &off_extra); | ||
394 | break; | ||
395 | |||
396 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: | ||
397 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: | ||
398 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: | ||
399 | buf = (void *)src_parm->u.pointer.u.linear_addr; | ||
400 | hgcm_call_init_linaddr(call, dst_parm, buf, | ||
401 | src_parm->u.pointer.size, | ||
402 | src_parm->type, &off_extra); | ||
403 | break; | ||
404 | |||
405 | default: | ||
406 | WARN_ON(1); | ||
407 | dst_parm->type = VMMDEV_HGCM_PARM_TYPE_INVALID; | ||
408 | } | ||
409 | } | ||
410 | } | ||
411 | |||
412 | /** | ||
413 | * Tries to cancel a pending HGCM call. | ||
414 | * | ||
415 | * Return: VBox status code | ||
416 | */ | ||
417 | static int hgcm_cancel_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call) | ||
418 | { | ||
419 | int rc; | ||
420 | |||
421 | /* | ||
422 | * We use a pre-allocated request for cancellations, which is | ||
423 | * protected by cancel_req_mutex. This means that all cancellations | ||
424 | * get serialized, this should be fine since they should be rare. | ||
425 | */ | ||
426 | mutex_lock(&gdev->cancel_req_mutex); | ||
427 | gdev->cancel_req->phys_req_to_cancel = virt_to_phys(call); | ||
428 | rc = vbg_req_perform(gdev, gdev->cancel_req); | ||
429 | mutex_unlock(&gdev->cancel_req_mutex); | ||
430 | |||
431 | if (rc == VERR_NOT_IMPLEMENTED) { | ||
432 | call->header.flags |= VMMDEV_HGCM_REQ_CANCELLED; | ||
433 | call->header.header.request_type = VMMDEVREQ_HGCM_CANCEL; | ||
434 | |||
435 | rc = vbg_req_perform(gdev, call); | ||
436 | if (rc == VERR_INVALID_PARAMETER) | ||
437 | rc = VERR_NOT_FOUND; | ||
438 | } | ||
439 | |||
440 | if (rc >= 0) | ||
441 | call->header.flags |= VMMDEV_HGCM_REQ_CANCELLED; | ||
442 | |||
443 | return rc; | ||
444 | } | ||
445 | |||
446 | /** | ||
447 | * Performs the call and completion wait. | ||
448 | * Return: 0 or negative errno value. | ||
449 | * @gdev: The VBoxGuest device extension. | ||
450 | * @call: The call to execute. | ||
451 | * @timeout_ms: Timeout in ms. | ||
452 | * @leak_it: Where to return whether the request must be leaked rather | ||
453 | * than freed; set when cancellation fails and the host may | ||
454 | * still write to the request. | ||
454 | */ | ||
455 | static int vbg_hgcm_do_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call, | ||
456 | u32 timeout_ms, bool *leak_it) | ||
457 | { | ||
458 | int rc, cancel_rc, ret; | ||
459 | long timeout; | ||
460 | |||
461 | *leak_it = false; | ||
462 | |||
463 | rc = vbg_req_perform(gdev, call); | ||
464 | |||
465 | /* | ||
466 | * If the call failed, then pretend success. Upper layers will | ||
467 | * interpret the result code in the packet. | ||
468 | */ | ||
469 | if (rc < 0) { | ||
470 | call->header.result = rc; | ||
471 | return 0; | ||
472 | } | ||
473 | |||
474 | if (rc != VINF_HGCM_ASYNC_EXECUTE) | ||
475 | return 0; | ||
476 | |||
477 | /* Host decided to process the request asynchronously, wait for it */ | ||
478 | if (timeout_ms == U32_MAX) | ||
479 | timeout = MAX_SCHEDULE_TIMEOUT; | ||
480 | else | ||
481 | timeout = msecs_to_jiffies(timeout_ms); | ||
482 | |||
483 | timeout = wait_event_interruptible_timeout( | ||
484 | gdev->hgcm_wq, | ||
485 | hgcm_req_done(gdev, &call->header), | ||
486 | timeout); | ||
487 | |||
488 | /* timeout > 0 means hgcm_req_done has returned true, so success */ | ||
489 | if (timeout > 0) | ||
490 | return 0; | ||
491 | |||
492 | if (timeout == 0) | ||
493 | ret = -ETIMEDOUT; | ||
494 | else | ||
495 | ret = -EINTR; | ||
496 | |||
497 | /* Cancel the request */ | ||
498 | cancel_rc = hgcm_cancel_call(gdev, call); | ||
499 | if (cancel_rc >= 0) | ||
500 | return ret; | ||
501 | |||
502 | /* | ||
503 | * Failed to cancel; this should mean that the cancel has lost the | ||
504 | * race with normal completion. Wait while the host completes it. | ||
505 | */ | ||
506 | if (cancel_rc == VERR_NOT_FOUND || cancel_rc == VERR_SEM_DESTROYED) | ||
507 | timeout = msecs_to_jiffies(500); | ||
508 | else | ||
509 | timeout = msecs_to_jiffies(2000); | ||
510 | |||
511 | timeout = wait_event_timeout(gdev->hgcm_wq, | ||
512 | hgcm_req_done(gdev, &call->header), | ||
513 | timeout); | ||
514 | |||
515 | if (WARN_ON(timeout == 0)) { | ||
516 | /* We really should never get here */ | ||
517 | vbg_err("%s: Call timed out and cancellation failed, leaking the request\n", | ||
518 | __func__); | ||
519 | *leak_it = true; | ||
520 | return ret; | ||
521 | } | ||
522 | |||
523 | /* The call has completed normally after all */ | ||
524 | return 0; | ||
525 | } | ||
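The branching at the heart of `vbg_hgcm_do_call()` follows the return convention of `wait_event_interruptible_timeout()`: a positive value means the condition became true before the timeout, zero means timeout, and a negative value means a signal interrupted the wait. That mapping can be factored into a pure function and checked in isolation (a model, not driver code):

```c
#include <errno.h>

/*
 * Model of the completion-wait outcome handling above:
 *   > 0  -> the request completed, nothing to do
 *   == 0 -> timed out, caller must cancel the request
 *   < 0  -> interrupted by a signal, caller must cancel the request
 */
static int wait_result_to_errno(long wait_ret)
{
	if (wait_ret > 0)
		return 0;		/* completed normally */
	if (wait_ret == 0)
		return -ETIMEDOUT;	/* proceed to cancellation */
	return -EINTR;			/* proceed to cancellation */
}
```

Only when cancellation subsequently fails *and* the host never completes the request does the driver give up and leak it, since freeing memory the host may still write to would corrupt the guest.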
526 | |||
527 | /** | ||
528 | * Copies the result of the call back to the caller info structure and user | ||
529 | * buffers. | ||
530 | * Return: 0 or negative errno value. | ||
531 | * @call: HGCM call request. | ||
532 | * @dst_parm: Pointer to function call parameters destination. | ||
533 | * @parm_count: Number of function call parameters. | ||
534 | * @bounce_bufs: The bouncebuffer array. | ||
535 | */ | ||
536 | static int hgcm_call_copy_back_result( | ||
537 | const struct vmmdev_hgcm_call *call, | ||
538 | struct vmmdev_hgcm_function_parameter *dst_parm, | ||
539 | u32 parm_count, void **bounce_bufs) | ||
540 | { | ||
541 | const struct vmmdev_hgcm_function_parameter *src_parm = | ||
542 | VMMDEV_HGCM_CALL_PARMS(call); | ||
543 | void __user *p; | ||
544 | int ret; | ||
545 | u32 i; | ||
546 | |||
547 | /* Copy back parameters. */ | ||
548 | for (i = 0; i < parm_count; i++, src_parm++, dst_parm++) { | ||
549 | switch (dst_parm->type) { | ||
550 | case VMMDEV_HGCM_PARM_TYPE_32BIT: | ||
551 | case VMMDEV_HGCM_PARM_TYPE_64BIT: | ||
552 | *dst_parm = *src_parm; | ||
553 | break; | ||
554 | |||
555 | case VMMDEV_HGCM_PARM_TYPE_PAGELIST: | ||
556 | dst_parm->u.page_list.size = src_parm->u.page_list.size; | ||
557 | break; | ||
558 | |||
559 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: | ||
560 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: | ||
561 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: | ||
562 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: | ||
563 | dst_parm->u.pointer.size = src_parm->u.pointer.size; | ||
564 | break; | ||
565 | |||
566 | case VMMDEV_HGCM_PARM_TYPE_LINADDR: | ||
567 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: | ||
568 | dst_parm->u.pointer.size = src_parm->u.pointer.size; | ||
569 | |||
570 | p = (void __user *)dst_parm->u.pointer.u.linear_addr; | ||
571 | ret = copy_to_user(p, bounce_bufs[i], | ||
572 | min(src_parm->u.pointer.size, | ||
573 | dst_parm->u.pointer.size)); | ||
574 | if (ret) | ||
575 | return -EFAULT; | ||
576 | break; | ||
577 | |||
578 | default: | ||
579 | WARN_ON(1); | ||
580 | return -EINVAL; | ||
581 | } | ||
582 | } | ||
583 | |||
584 | return 0; | ||
585 | } | ||
586 | |||
587 | int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function, | ||
588 | u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms, | ||
589 | u32 parm_count, int *vbox_status) | ||
590 | { | ||
591 | struct vmmdev_hgcm_call *call; | ||
592 | void **bounce_bufs = NULL; | ||
593 | bool leak_it; | ||
594 | size_t size; | ||
595 | int i, ret; | ||
596 | |||
597 | size = sizeof(struct vmmdev_hgcm_call) + | ||
598 | parm_count * sizeof(struct vmmdev_hgcm_function_parameter); | ||
599 | /* | ||
600 | * Validate and buffer the parameters for the call. This also increases | ||
601 | * 'size' with the amount of extra space needed for page lists. | ||
602 | */ | ||
603 | ret = hgcm_call_preprocess(parms, parm_count, &bounce_bufs, &size); | ||
604 | if (ret) { | ||
605 | /* Even on error bounce bufs may still have been allocated */ | ||
606 | goto free_bounce_bufs; | ||
607 | } | ||
608 | |||
609 | call = vbg_req_alloc(size, VMMDEVREQ_HGCM_CALL); | ||
610 | if (!call) { | ||
611 | ret = -ENOMEM; | ||
612 | goto free_bounce_bufs; | ||
613 | } | ||
614 | |||
615 | hgcm_call_init_call(call, client_id, function, parms, parm_count, | ||
616 | bounce_bufs); | ||
617 | |||
618 | ret = vbg_hgcm_do_call(gdev, call, timeout_ms, &leak_it); | ||
619 | if (ret == 0) { | ||
620 | *vbox_status = call->header.result; | ||
621 | ret = hgcm_call_copy_back_result(call, parms, parm_count, | ||
622 | bounce_bufs); | ||
623 | } | ||
624 | |||
625 | if (!leak_it) | ||
626 | kfree(call); | ||
627 | |||
628 | free_bounce_bufs: | ||
629 | if (bounce_bufs) { | ||
630 | for (i = 0; i < parm_count; i++) | ||
631 | kvfree(bounce_bufs[i]); | ||
632 | kfree(bounce_bufs); | ||
633 | } | ||
634 | |||
635 | return ret; | ||
636 | } | ||
637 | EXPORT_SYMBOL(vbg_hgcm_call); | ||
638 | |||
639 | #ifdef CONFIG_COMPAT | ||
640 | int vbg_hgcm_call32( | ||
641 | struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, | ||
642 | struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, | ||
643 | int *vbox_status) | ||
644 | { | ||
645 | struct vmmdev_hgcm_function_parameter *parm64 = NULL; | ||
646 | u32 i, size; | ||
647 | int ret = 0; | ||
648 | |||
649 | /* KISS allocate a temporary request and convert the parameters. */ | ||
650 | size = parm_count * sizeof(struct vmmdev_hgcm_function_parameter); | ||
651 | parm64 = kzalloc(size, GFP_KERNEL); | ||
652 | if (!parm64) | ||
653 | return -ENOMEM; | ||
654 | |||
655 | for (i = 0; i < parm_count; i++) { | ||
656 | switch (parm32[i].type) { | ||
657 | case VMMDEV_HGCM_PARM_TYPE_32BIT: | ||
658 | parm64[i].type = VMMDEV_HGCM_PARM_TYPE_32BIT; | ||
659 | parm64[i].u.value32 = parm32[i].u.value32; | ||
660 | break; | ||
661 | |||
662 | case VMMDEV_HGCM_PARM_TYPE_64BIT: | ||
663 | parm64[i].type = VMMDEV_HGCM_PARM_TYPE_64BIT; | ||
664 | parm64[i].u.value64 = parm32[i].u.value64; | ||
665 | break; | ||
666 | |||
667 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: | ||
668 | case VMMDEV_HGCM_PARM_TYPE_LINADDR: | ||
669 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: | ||
670 | parm64[i].type = parm32[i].type; | ||
671 | parm64[i].u.pointer.size = parm32[i].u.pointer.size; | ||
672 | parm64[i].u.pointer.u.linear_addr = | ||
673 | parm32[i].u.pointer.u.linear_addr; | ||
674 | break; | ||
675 | |||
676 | default: | ||
677 | ret = -EINVAL; | ||
678 | } | ||
679 | if (ret < 0) | ||
680 | goto out_free; | ||
681 | } | ||
682 | |||
683 | ret = vbg_hgcm_call(gdev, client_id, function, timeout_ms, | ||
684 | parm64, parm_count, vbox_status); | ||
685 | if (ret < 0) | ||
686 | goto out_free; | ||
687 | |||
688 | /* Copy back. */ | ||
689 | for (i = 0; i < parm_count; i++) { | ||
690 | switch (parm64[i].type) { | ||
691 | case VMMDEV_HGCM_PARM_TYPE_32BIT: | ||
692 | parm32[i].u.value32 = parm64[i].u.value32; | ||
693 | break; | ||
694 | |||
695 | case VMMDEV_HGCM_PARM_TYPE_64BIT: | ||
696 | parm32[i].u.value64 = parm64[i].u.value64; | ||
697 | break; | ||
698 | |||
699 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: | ||
700 | case VMMDEV_HGCM_PARM_TYPE_LINADDR: | ||
701 | case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: | ||
702 | parm32[i].u.pointer.size = parm64[i].u.pointer.size; | ||
703 | break; | ||
704 | |||
705 | default: | ||
706 | WARN_ON(1); | ||
707 | ret = -EINVAL; | ||
708 | } | ||
709 | } | ||
710 | |||
711 | out_free: | ||
712 | kfree(parm64); | ||
713 | return ret; | ||
714 | } | ||
715 | #endif | ||
716 | |||
717 | static const int vbg_status_code_to_errno_table[] = { | ||
718 | [-VERR_ACCESS_DENIED] = -EPERM, | ||
719 | [-VERR_FILE_NOT_FOUND] = -ENOENT, | ||
720 | [-VERR_PROCESS_NOT_FOUND] = -ESRCH, | ||
721 | [-VERR_INTERRUPTED] = -EINTR, | ||
722 | [-VERR_DEV_IO_ERROR] = -EIO, | ||
723 | [-VERR_TOO_MUCH_DATA] = -E2BIG, | ||
724 | [-VERR_BAD_EXE_FORMAT] = -ENOEXEC, | ||
725 | [-VERR_INVALID_HANDLE] = -EBADF, | ||
726 | [-VERR_TRY_AGAIN] = -EAGAIN, | ||
727 | [-VERR_NO_MEMORY] = -ENOMEM, | ||
728 | [-VERR_INVALID_POINTER] = -EFAULT, | ||
729 | [-VERR_RESOURCE_BUSY] = -EBUSY, | ||
730 | [-VERR_ALREADY_EXISTS] = -EEXIST, | ||
731 | [-VERR_NOT_SAME_DEVICE] = -EXDEV, | ||
732 | [-VERR_NOT_A_DIRECTORY] = -ENOTDIR, | ||
733 | [-VERR_PATH_NOT_FOUND] = -ENOTDIR, | ||
734 | [-VERR_INVALID_NAME] = -ENOENT, | ||
735 | [-VERR_IS_A_DIRECTORY] = -EISDIR, | ||
736 | [-VERR_INVALID_PARAMETER] = -EINVAL, | ||
737 | [-VERR_TOO_MANY_OPEN_FILES] = -ENFILE, | ||
738 | [-VERR_INVALID_FUNCTION] = -ENOTTY, | ||
739 | [-VERR_SHARING_VIOLATION] = -ETXTBSY, | ||
740 | [-VERR_FILE_TOO_BIG] = -EFBIG, | ||
741 | [-VERR_DISK_FULL] = -ENOSPC, | ||
742 | [-VERR_SEEK_ON_DEVICE] = -ESPIPE, | ||
743 | [-VERR_WRITE_PROTECT] = -EROFS, | ||
744 | [-VERR_BROKEN_PIPE] = -EPIPE, | ||
745 | [-VERR_DEADLOCK] = -EDEADLK, | ||
746 | [-VERR_FILENAME_TOO_LONG] = -ENAMETOOLONG, | ||
747 | [-VERR_FILE_LOCK_FAILED] = -ENOLCK, | ||
748 | [-VERR_NOT_IMPLEMENTED] = -ENOSYS, | ||
749 | [-VERR_NOT_SUPPORTED] = -ENOSYS, | ||
750 | [-VERR_DIR_NOT_EMPTY] = -ENOTEMPTY, | ||
751 | [-VERR_TOO_MANY_SYMLINKS] = -ELOOP, | ||
752 | [-VERR_NO_MORE_FILES] = -ENODATA, | ||
753 | [-VERR_NO_DATA] = -ENODATA, | ||
754 | [-VERR_NET_NO_NETWORK] = -ENONET, | ||
755 | [-VERR_NET_NOT_UNIQUE_NAME] = -ENOTUNIQ, | ||
756 | [-VERR_NO_TRANSLATION] = -EILSEQ, | ||
757 | [-VERR_NET_NOT_SOCKET] = -ENOTSOCK, | ||
758 | [-VERR_NET_DEST_ADDRESS_REQUIRED] = -EDESTADDRREQ, | ||
759 | [-VERR_NET_MSG_SIZE] = -EMSGSIZE, | ||
760 | [-VERR_NET_PROTOCOL_TYPE] = -EPROTOTYPE, | ||
761 | [-VERR_NET_PROTOCOL_NOT_AVAILABLE] = -ENOPROTOOPT, | ||
762 | [-VERR_NET_PROTOCOL_NOT_SUPPORTED] = -EPROTONOSUPPORT, | ||
763 | [-VERR_NET_SOCKET_TYPE_NOT_SUPPORTED] = -ESOCKTNOSUPPORT, | ||
764 | [-VERR_NET_OPERATION_NOT_SUPPORTED] = -EOPNOTSUPP, | ||
765 | [-VERR_NET_PROTOCOL_FAMILY_NOT_SUPPORTED] = -EPFNOSUPPORT, | ||
766 | [-VERR_NET_ADDRESS_FAMILY_NOT_SUPPORTED] = -EAFNOSUPPORT, | ||
767 | [-VERR_NET_ADDRESS_IN_USE] = -EADDRINUSE, | ||
768 | [-VERR_NET_ADDRESS_NOT_AVAILABLE] = -EADDRNOTAVAIL, | ||
769 | [-VERR_NET_DOWN] = -ENETDOWN, | ||
770 | [-VERR_NET_UNREACHABLE] = -ENETUNREACH, | ||
771 | [-VERR_NET_CONNECTION_RESET] = -ENETRESET, | ||
772 | [-VERR_NET_CONNECTION_ABORTED] = -ECONNABORTED, | ||
773 | [-VERR_NET_CONNECTION_RESET_BY_PEER] = -ECONNRESET, | ||
774 | [-VERR_NET_NO_BUFFER_SPACE] = -ENOBUFS, | ||
775 | [-VERR_NET_ALREADY_CONNECTED] = -EISCONN, | ||
776 | [-VERR_NET_NOT_CONNECTED] = -ENOTCONN, | ||
777 | [-VERR_NET_SHUTDOWN] = -ESHUTDOWN, | ||
778 | [-VERR_NET_TOO_MANY_REFERENCES] = -ETOOMANYREFS, | ||
779 | [-VERR_TIMEOUT] = -ETIMEDOUT, | ||
780 | [-VERR_NET_CONNECTION_REFUSED] = -ECONNREFUSED, | ||
781 | [-VERR_NET_HOST_DOWN] = -EHOSTDOWN, | ||
782 | [-VERR_NET_HOST_UNREACHABLE] = -EHOSTUNREACH, | ||
783 | [-VERR_NET_ALREADY_IN_PROGRESS] = -EALREADY, | ||
784 | [-VERR_NET_IN_PROGRESS] = -EINPROGRESS, | ||
785 | [-VERR_MEDIA_NOT_PRESENT] = -ENOMEDIUM, | ||
786 | [-VERR_MEDIA_NOT_RECOGNIZED] = -EMEDIUMTYPE, | ||
787 | }; | ||
788 | |||
789 | int vbg_status_code_to_errno(int rc) | ||
790 | { | ||
791 | if (rc >= 0) | ||
792 | return 0; | ||
793 | |||
794 | rc = -rc; | ||
795 | if (rc >= ARRAY_SIZE(vbg_status_code_to_errno_table) || | ||
796 | vbg_status_code_to_errno_table[rc] == 0) { | ||
797 | vbg_warn("%s: Unhandled err %d\n", __func__, -rc); | ||
798 | return -EPROTO; | ||
799 | } | ||
800 | |||
801 | return vbg_status_code_to_errno_table[rc]; | ||
802 | } | ||
803 | EXPORT_SYMBOL(vbg_status_code_to_errno); | ||
diff --git a/drivers/virt/vboxguest/vboxguest_version.h b/drivers/virt/vboxguest/vboxguest_version.h new file mode 100644 index 000000000000..77f0c8f8a231 --- /dev/null +++ b/drivers/virt/vboxguest/vboxguest_version.h | |||
@@ -0,0 +1,19 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* | ||
3 | * VBox Guest additions version info; this is used by the host to determine | ||
4 | * supported guest-addition features in some cases, so it will need to be | ||
5 | * synced with vbox upstream's versioning scheme when we implement / port | ||
6 | * new features from the upstream out-of-tree vboxguest driver. | ||
7 | */ | ||
8 | |||
9 | #ifndef __VBOX_VERSION_H__ | ||
10 | #define __VBOX_VERSION_H__ | ||
11 | |||
12 | /* Last synced October 4th 2017 */ | ||
13 | #define VBG_VERSION_MAJOR 5 | ||
14 | #define VBG_VERSION_MINOR 2 | ||
15 | #define VBG_VERSION_BUILD 0 | ||
16 | #define VBG_SVN_REV 68940 | ||
17 | #define VBG_VERSION_STRING "5.2.0" | ||
18 | |||
19 | #endif | ||
diff --git a/drivers/virt/vboxguest/vmmdev.h b/drivers/virt/vboxguest/vmmdev.h new file mode 100644 index 000000000000..5e2ae978935d --- /dev/null +++ b/drivers/virt/vboxguest/vmmdev.h | |||
@@ -0,0 +1,449 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* | ||
3 | * Virtual Device for Guest <-> VMM/Host communication interface | ||
4 | * | ||
5 | * Copyright (C) 2006-2016 Oracle Corporation | ||
6 | */ | ||
7 | |||
8 | #ifndef __VBOX_VMMDEV_H__ | ||
9 | #define __VBOX_VMMDEV_H__ | ||
10 | |||
11 | #include <asm/bitsperlong.h> | ||
12 | #include <linux/sizes.h> | ||
13 | #include <linux/types.h> | ||
14 | #include <linux/vbox_vmmdev_types.h> | ||
15 | |||
16 | /* Port for generic request interface (relative offset). */ | ||
17 | #define VMMDEV_PORT_OFF_REQUEST 0 | ||
18 | |||
19 | /** Layout of VMMDEV RAM region that contains information for guest. */ | ||
20 | struct vmmdev_memory { | ||
21 | /** The size of this structure. */ | ||
22 | u32 size; | ||
23 | /** The structure version. (VMMDEV_MEMORY_VERSION) */ | ||
24 | u32 version; | ||
25 | |||
26 | union { | ||
27 | struct { | ||
28 | /** Flag telling that VMMDev has events pending. */ | ||
29 | u8 have_events; | ||
30 | /** Explicit padding, MBZ. */ | ||
31 | u8 padding[3]; | ||
32 | } V1_04; | ||
33 | |||
34 | struct { | ||
35 | /** Pending events flags, set by host. */ | ||
36 | u32 host_events; | ||
37 | /** Mask of events the guest wants, set by guest. */ | ||
38 | u32 guest_event_mask; | ||
39 | } V1_03; | ||
40 | } V; | ||
41 | |||
42 | /* struct vbva_memory, not used */ | ||
43 | }; | ||
44 | VMMDEV_ASSERT_SIZE(vmmdev_memory, 8 + 8); | ||
45 | |||
46 | /** Version of vmmdev_memory structure (vmmdev_memory::version). */ | ||
47 | #define VMMDEV_MEMORY_VERSION (1) | ||
48 | |||
49 | /* Host mouse capabilities have been changed. */ | ||
50 | #define VMMDEV_EVENT_MOUSE_CAPABILITIES_CHANGED BIT(0) | ||
51 | /* HGCM event. */ | ||
52 | #define VMMDEV_EVENT_HGCM BIT(1) | ||
53 | /* A display change request has been issued. */ | ||
54 | #define VMMDEV_EVENT_DISPLAY_CHANGE_REQUEST BIT(2) | ||
55 | /* Credentials are available for judgement. */ | ||
56 | #define VMMDEV_EVENT_JUDGE_CREDENTIALS BIT(3) | ||
57 | /* The guest has been restored. */ | ||
58 | #define VMMDEV_EVENT_RESTORED BIT(4) | ||
59 | /* Seamless mode state changed. */ | ||
60 | #define VMMDEV_EVENT_SEAMLESS_MODE_CHANGE_REQUEST BIT(5) | ||
61 | /* Memory balloon size changed. */ | ||
62 | #define VMMDEV_EVENT_BALLOON_CHANGE_REQUEST BIT(6) | ||
63 | /* Statistics interval changed. */ | ||
64 | #define VMMDEV_EVENT_STATISTICS_INTERVAL_CHANGE_REQUEST BIT(7) | ||
65 | /* VRDP status changed. */ | ||
66 | #define VMMDEV_EVENT_VRDP BIT(8) | ||
67 | /* New mouse position data available. */ | ||
68 | #define VMMDEV_EVENT_MOUSE_POSITION_CHANGED BIT(9) | ||
69 | /* CPU hotplug event occurred. */ | ||
70 | #define VMMDEV_EVENT_CPU_HOTPLUG BIT(10) | ||
71 | /* The mask of valid events, for sanity checking. */ | ||
72 | #define VMMDEV_EVENT_VALID_EVENT_MASK 0x000007ffU | ||
73 | |||
74 | /* | ||
75 | * Additions are allowed to work only if additions_major == vmmdev_current && | ||
76 | * additions_minor <= vmmdev_current. Additions version is reported to host | ||
77 | * (VMMDev) by VMMDEVREQ_REPORT_GUEST_INFO. | ||
78 | */ | ||
79 | #define VMMDEV_VERSION 0x00010004 | ||
80 | #define VMMDEV_VERSION_MAJOR (VMMDEV_VERSION >> 16) | ||
81 | #define VMMDEV_VERSION_MINOR (VMMDEV_VERSION & 0xffff) | ||
82 | |||
83 | /* Maximum request packet size. */ | ||
84 | #define VMMDEV_MAX_VMMDEVREQ_SIZE 1048576 | ||
85 | |||
86 | /* Version of vmmdev_request_header structure. */ | ||
87 | #define VMMDEV_REQUEST_HEADER_VERSION 0x10001 | ||
88 | |||
89 | /** struct vmmdev_request_header - Generic VMMDev request header. */ | ||
90 | struct vmmdev_request_header { | ||
91 | /** IN: Size of the structure in bytes (including body). */ | ||
92 | u32 size; | ||
93 | /** IN: Version of the structure. */ | ||
94 | u32 version; | ||
95 | /** IN: Type of the request. */ | ||
96 | enum vmmdev_request_type request_type; | ||
97 | /** OUT: Return code. */ | ||
98 | s32 rc; | ||
99 | /** Reserved field no.1. MBZ. */ | ||
100 | u32 reserved1; | ||
101 | /** Reserved field no.2. MBZ. */ | ||
102 | u32 reserved2; | ||
103 | }; | ||
104 | VMMDEV_ASSERT_SIZE(vmmdev_request_header, 24); | ||
105 | |||
106 | /** | ||
107 | * struct vmmdev_mouse_status - Mouse status request structure. | ||
108 | * | ||
109 | * Used by VMMDEVREQ_GET_MOUSE_STATUS and VMMDEVREQ_SET_MOUSE_STATUS. | ||
110 | */ | ||
111 | struct vmmdev_mouse_status { | ||
112 | /** header */ | ||
113 | struct vmmdev_request_header header; | ||
114 | /** Mouse feature mask. See VMMDEV_MOUSE_*. */ | ||
115 | u32 mouse_features; | ||
116 | /** Mouse x position. */ | ||
117 | s32 pointer_pos_x; | ||
118 | /** Mouse y position. */ | ||
119 | s32 pointer_pos_y; | ||
120 | }; | ||
121 | VMMDEV_ASSERT_SIZE(vmmdev_mouse_status, 24 + 12); | ||
122 | |||
123 | /* The guest can (== wants to) handle absolute coordinates. */ | ||
124 | #define VMMDEV_MOUSE_GUEST_CAN_ABSOLUTE BIT(0) | ||
125 | /* | ||
126 | * The host can (== wants to) send absolute coordinates. | ||
127 | * (Input not captured.) | ||
128 | */ | ||
129 | #define VMMDEV_MOUSE_HOST_WANTS_ABSOLUTE BIT(1) | ||
130 | /* | ||
131 | * The guest can *NOT* switch to software cursor and therefore depends on the | ||
132 | * host cursor. | ||
133 | * | ||
134 | * When guest additions are installed and the host has promised to display the | ||
135 | * cursor itself, the guest installs a hardware mouse driver. Don't ask the | ||
136 | * guest to switch to a software cursor then. | ||
137 | */ | ||
138 | #define VMMDEV_MOUSE_GUEST_NEEDS_HOST_CURSOR BIT(2) | ||
139 | /* The host does NOT provide support for drawing the cursor itself. */ | ||
140 | #define VMMDEV_MOUSE_HOST_CANNOT_HWPOINTER BIT(3) | ||
141 | /* The guest can read VMMDev events to find out about pointer movement */ | ||
142 | #define VMMDEV_MOUSE_NEW_PROTOCOL BIT(4) | ||
143 | /* | ||
144 | * If the guest changes the status of the VMMDEV_MOUSE_GUEST_NEEDS_HOST_CURSOR | ||
145 | * bit, the host will honour this. | ||
146 | */ | ||
147 | #define VMMDEV_MOUSE_HOST_RECHECKS_NEEDS_HOST_CURSOR BIT(5) | ||
148 | /* | ||
149 | * The host supplies an absolute pointing device. The Guest Additions may | ||
150 | * wish to use this to decide whether to install their own driver. | ||
151 | */ | ||
152 | #define VMMDEV_MOUSE_HOST_HAS_ABS_DEV BIT(6) | ||
153 | |||
154 | /* The minimum value our pointing device can return. */ | ||
155 | #define VMMDEV_MOUSE_RANGE_MIN 0 | ||
156 | /* The maximum value our pointing device can return. */ | ||
157 | #define VMMDEV_MOUSE_RANGE_MAX 0xFFFF | ||
158 | |||
159 | /** | ||
160 | * struct vmmdev_host_version - VirtualBox host version request structure. | ||
161 | * | ||
162 | * VBG uses this to detect the presence of new features in the interface. | ||
163 | */ | ||
164 | struct vmmdev_host_version { | ||
165 | /** Header. */ | ||
166 | struct vmmdev_request_header header; | ||
167 | /** Major version. */ | ||
168 | u16 major; | ||
169 | /** Minor version. */ | ||
170 | u16 minor; | ||
171 | /** Build number. */ | ||
172 | u32 build; | ||
173 | /** SVN revision. */ | ||
174 | u32 revision; | ||
175 | /** Feature mask. */ | ||
176 | u32 features; | ||
177 | }; | ||
178 | VMMDEV_ASSERT_SIZE(vmmdev_host_version, 24 + 16); | ||
179 | |||
180 | /* Physical page lists are supported by HGCM. */ | ||
181 | #define VMMDEV_HVF_HGCM_PHYS_PAGE_LIST BIT(0) | ||
182 | |||
183 | /** | ||
184 | * struct vmmdev_mask - Structure to set / clear bits in a mask used for | ||
185 | * VMMDEVREQ_SET_GUEST_CAPABILITIES and VMMDEVREQ_CTL_GUEST_FILTER_MASK. | ||
186 | */ | ||
187 | struct vmmdev_mask { | ||
188 | /** Header. */ | ||
189 | struct vmmdev_request_header header; | ||
190 | /** Mask of bits to be set. */ | ||
191 | u32 or_mask; | ||
192 | /** Mask of bits to be cleared. */ | ||
193 | u32 not_mask; | ||
194 | }; | ||
195 | VMMDEV_ASSERT_SIZE(vmmdev_mask, 24 + 8); | ||
196 | |||
197 | /* The guest supports seamless display rendering. */ | ||
198 | #define VMMDEV_GUEST_SUPPORTS_SEAMLESS BIT(0) | ||
199 | /* The guest supports mapping guest to host windows. */ | ||
200 | #define VMMDEV_GUEST_SUPPORTS_GUEST_HOST_WINDOW_MAPPING BIT(1) | ||
201 | /* | ||
202 | * The guest graphical additions are active. | ||
203 | * Used for fast activation and deactivation of certain graphical operations | ||
204 | * (e.g. resizing & seamless). The legacy VMMDEVREQ_REPORT_GUEST_CAPABILITIES | ||
205 | * request sets this automatically, but VMMDEVREQ_SET_GUEST_CAPABILITIES does | ||
206 | * not. | ||
207 | */ | ||
208 | #define VMMDEV_GUEST_SUPPORTS_GRAPHICS BIT(2) | ||
209 | |||
210 | /** struct vmmdev_hypervisorinfo - Hypervisor info structure. */ | ||
211 | struct vmmdev_hypervisorinfo { | ||
212 | /** Header. */ | ||
213 | struct vmmdev_request_header header; | ||
214 | /** | ||
215 | * Guest virtual address of proposed hypervisor start. | ||
216 | * Not used by VMMDEVREQ_GET_HYPERVISOR_INFO. | ||
217 | */ | ||
218 | u32 hypervisor_start; | ||
219 | /** Hypervisor size in bytes. */ | ||
220 | u32 hypervisor_size; | ||
221 | }; | ||
222 | VMMDEV_ASSERT_SIZE(vmmdev_hypervisorinfo, 24 + 8); | ||
223 | |||
224 | /** struct vmmdev_events - Pending events structure. */ | ||
225 | struct vmmdev_events { | ||
226 | /** Header. */ | ||
227 | struct vmmdev_request_header header; | ||
228 | /** OUT: Pending event mask. */ | ||
229 | u32 events; | ||
230 | }; | ||
231 | VMMDEV_ASSERT_SIZE(vmmdev_events, 24 + 4); | ||
232 | |||
233 | #define VMMDEV_OSTYPE_LINUX26 0x53000 | ||
234 | #define VMMDEV_OSTYPE_X64 BIT(8) | ||
235 | |||
236 | /** struct vmmdev_guestinfo - Guest information report. */ | ||
237 | struct vmmdev_guest_info { | ||
238 | /** Header. */ | ||
239 | struct vmmdev_request_header header; | ||
240 | /** | ||
241 | * The VMMDev interface version expected by additions. | ||
242 | * *Deprecated*, do not use anymore! Will be removed. | ||
243 | */ | ||
244 | u32 interface_version; | ||
245 | /** Guest OS type. */ | ||
246 | u32 os_type; | ||
247 | }; | ||
248 | VMMDEV_ASSERT_SIZE(vmmdev_guest_info, 24 + 8); | ||
249 | |||
250 | /** struct vmmdev_guestinfo2 - Guest information report, version 2. */ | ||
251 | struct vmmdev_guest_info2 { | ||
252 | /** Header. */ | ||
253 | struct vmmdev_request_header header; | ||
254 | /** Major version. */ | ||
255 | u16 additions_major; | ||
256 | /** Minor version. */ | ||
257 | u16 additions_minor; | ||
258 | /** Build number. */ | ||
259 | u32 additions_build; | ||
260 | /** SVN revision. */ | ||
261 | u32 additions_revision; | ||
262 | /** Feature mask, currently unused. */ | ||
263 | u32 additions_features; | ||
264 | /** | ||
265 | * The intentional meaning of this field was: | ||
266 | * Some additional information, for example 'Beta 1' or something like | ||
267 | * that. | ||
268 | * | ||
269 | * The way it was actually implemented: VBG_VERSION_STRING. | ||
270 | * | ||
271 | * This means the first three members are duplicated in this field (if | ||
272 | * the guest build config is sane). So, the user must check this and | ||
273 | * chop it off before usage. There is, because of the Main code's blind | ||
274 | * trust in the field's content, no way back. | ||
275 | */ | ||
276 | char name[128]; | ||
277 | }; | ||
278 | VMMDEV_ASSERT_SIZE(vmmdev_guest_info2, 24 + 144); | ||
279 | |||
280 | enum vmmdev_guest_facility_type { | ||
281 | VBOXGUEST_FACILITY_TYPE_UNKNOWN = 0, | ||
282 | VBOXGUEST_FACILITY_TYPE_VBOXGUEST_DRIVER = 20, | ||
283 | /* VBoxGINA / VBoxCredProv / pam_vbox. */ | ||
284 | VBOXGUEST_FACILITY_TYPE_AUTO_LOGON = 90, | ||
285 | VBOXGUEST_FACILITY_TYPE_VBOX_SERVICE = 100, | ||
286 | /* VBoxTray (Windows), VBoxClient (Linux, Unix). */ | ||
287 | VBOXGUEST_FACILITY_TYPE_VBOX_TRAY_CLIENT = 101, | ||
288 | VBOXGUEST_FACILITY_TYPE_SEAMLESS = 1000, | ||
289 | VBOXGUEST_FACILITY_TYPE_GRAPHICS = 1100, | ||
290 | VBOXGUEST_FACILITY_TYPE_ALL = 0x7ffffffe, | ||
291 | /* Ensure the enum is a 32 bit data-type */ | ||
292 | VBOXGUEST_FACILITY_TYPE_SIZEHACK = 0x7fffffff | ||
293 | }; | ||
294 | |||
295 | enum vmmdev_guest_facility_status { | ||
296 | VBOXGUEST_FACILITY_STATUS_INACTIVE = 0, | ||
297 | VBOXGUEST_FACILITY_STATUS_PAUSED = 1, | ||
298 | VBOXGUEST_FACILITY_STATUS_PRE_INIT = 20, | ||
299 | VBOXGUEST_FACILITY_STATUS_INIT = 30, | ||
300 | VBOXGUEST_FACILITY_STATUS_ACTIVE = 50, | ||
301 | VBOXGUEST_FACILITY_STATUS_TERMINATING = 100, | ||
302 | VBOXGUEST_FACILITY_STATUS_TERMINATED = 101, | ||
303 | VBOXGUEST_FACILITY_STATUS_FAILED = 800, | ||
304 | VBOXGUEST_FACILITY_STATUS_UNKNOWN = 999, | ||
305 | /* Ensure the enum is a 32 bit data-type */ | ||
306 | VBOXGUEST_FACILITY_STATUS_SIZEHACK = 0x7fffffff | ||
307 | }; | ||
308 | |||
309 | /** struct vmmdev_guest_status - Guest Additions status structure. */ | ||
310 | struct vmmdev_guest_status { | ||
311 | /** Header. */ | ||
312 | struct vmmdev_request_header header; | ||
313 | /** Facility the status is indicated for. */ | ||
314 | enum vmmdev_guest_facility_type facility; | ||
315 | /** Current guest status. */ | ||
316 | enum vmmdev_guest_facility_status status; | ||
317 | /** Flags, not used at the moment. */ | ||
318 | u32 flags; | ||
319 | }; | ||
320 | VMMDEV_ASSERT_SIZE(vmmdev_guest_status, 24 + 12); | ||
321 | |||
322 | #define VMMDEV_MEMORY_BALLOON_CHUNK_SIZE (1048576) | ||
323 | #define VMMDEV_MEMORY_BALLOON_CHUNK_PAGES (1048576 / 4096) | ||
324 | |||
325 | /** struct vmmdev_memballoon_info - Memory-balloon info structure. */ | ||
326 | struct vmmdev_memballoon_info { | ||
327 | /** Header. */ | ||
328 | struct vmmdev_request_header header; | ||
329 | /** Balloon size in megabytes. */ | ||
330 | u32 balloon_chunks; | ||
331 | /** Guest ram size in megabytes. */ | ||
332 | u32 phys_mem_chunks; | ||
333 | /** | ||
334 | * Setting this to VMMDEV_EVENT_BALLOON_CHANGE_REQUEST indicates that | ||
335 | * the request is a response to that event. | ||
336 | * (Don't confuse this with VMMDEVREQ_ACKNOWLEDGE_EVENTS.) | ||
337 | */ | ||
338 | u32 event_ack; | ||
339 | }; | ||
340 | VMMDEV_ASSERT_SIZE(vmmdev_memballoon_info, 24 + 12); | ||
341 | |||
342 | /** struct vmmdev_memballoon_change - Change the size of the balloon. */ | ||
343 | struct vmmdev_memballoon_change { | ||
344 | /** Header. */ | ||
345 | struct vmmdev_request_header header; | ||
346 | /** The number of pages in the array. */ | ||
347 | u32 pages; | ||
348 | /** true = inflate, false = deflate. */ | ||
349 | u32 inflate; | ||
350 | /** Physical address (u64) of each page. */ | ||
351 | u64 phys_page[VMMDEV_MEMORY_BALLOON_CHUNK_PAGES]; | ||
352 | }; | ||
353 | |||
354 | /** struct vmmdev_write_core_dump - Write Core Dump request data. */ | ||
355 | struct vmmdev_write_core_dump { | ||
356 | /** Header. */ | ||
357 | struct vmmdev_request_header header; | ||
358 | /** Flags (reserved, MBZ). */ | ||
359 | u32 flags; | ||
360 | }; | ||
361 | VMMDEV_ASSERT_SIZE(vmmdev_write_core_dump, 24 + 4); | ||
362 | |||
363 | /** struct vmmdev_heartbeat - Heart beat check state structure. */ | ||
364 | struct vmmdev_heartbeat { | ||
365 | /** Header. */ | ||
366 | struct vmmdev_request_header header; | ||
367 | /** OUT: Guest heartbeat interval in nanosec. */ | ||
368 | u64 interval_ns; | ||
369 | /** Heartbeat check flag. */ | ||
370 | u8 enabled; | ||
371 | /** Explicit padding, MBZ. */ | ||
372 | u8 padding[3]; | ||
373 | } __packed; | ||
374 | VMMDEV_ASSERT_SIZE(vmmdev_heartbeat, 24 + 12); | ||
375 | |||
376 | #define VMMDEV_HGCM_REQ_DONE BIT(0) | ||
377 | #define VMMDEV_HGCM_REQ_CANCELLED BIT(1) | ||
378 | |||
379 | /** struct vmmdev_hgcmreq_header - vmmdev HGCM requests header. */ | ||
380 | struct vmmdev_hgcmreq_header { | ||
381 | /** Request header. */ | ||
382 | struct vmmdev_request_header header; | ||
383 | |||
384 | /** HGCM flags. */ | ||
385 | u32 flags; | ||
386 | |||
387 | /** Result code. */ | ||
388 | s32 result; | ||
389 | }; | ||
390 | VMMDEV_ASSERT_SIZE(vmmdev_hgcmreq_header, 24 + 8); | ||
391 | |||
392 | /** struct vmmdev_hgcm_connect - HGCM connect request structure. */ | ||
393 | struct vmmdev_hgcm_connect { | ||
394 | /** HGCM request header. */ | ||
395 | struct vmmdev_hgcmreq_header header; | ||
396 | |||
397 | /** IN: Description of service to connect to. */ | ||
398 | struct vmmdev_hgcm_service_location loc; | ||
399 | |||
400 | /** OUT: Client identifier assigned by local instance of HGCM. */ | ||
401 | u32 client_id; | ||
402 | }; | ||
403 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_connect, 32 + 132 + 4); | ||
404 | |||
405 | /** struct vmmdev_hgcm_disconnect - HGCM disconnect request structure. */ | ||
406 | struct vmmdev_hgcm_disconnect { | ||
407 | /** HGCM request header. */ | ||
408 | struct vmmdev_hgcmreq_header header; | ||
409 | |||
410 | /** IN: Client identifier. */ | ||
411 | u32 client_id; | ||
412 | }; | ||
413 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_disconnect, 32 + 4); | ||
414 | |||
415 | #define VMMDEV_HGCM_MAX_PARMS 32 | ||
416 | |||
417 | /** struct vmmdev_hgcm_call - HGCM call request structure. */ | ||
418 | struct vmmdev_hgcm_call { | ||
419 | /* request header */ | ||
420 | struct vmmdev_hgcmreq_header header; | ||
421 | |||
422 | /** IN: Client identifier. */ | ||
423 | u32 client_id; | ||
424 | /** IN: Service function number. */ | ||
425 | u32 function; | ||
426 | /** IN: Number of parameters. */ | ||
427 | u32 parm_count; | ||
428 | /** Parameters follow in form: HGCMFunctionParameter32|64 parms[X]; */ | ||
429 | }; | ||
430 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_call, 32 + 12); | ||
431 | |||
432 | /** | ||
433 | * struct vmmdev_hgcm_cancel2 - HGCM cancel request structure, version 2. | ||
434 | * | ||
435 | * After the request, header.rc will be: | ||
436 | * | ||
437 | * VINF_SUCCESS when cancelled. | ||
438 | * VERR_NOT_FOUND if the specified request cannot be found. | ||
439 | * VERR_INVALID_PARAMETER if the address is invalid. | ||
440 | */ | ||
441 | struct vmmdev_hgcm_cancel2 { | ||
442 | /** Header. */ | ||
443 | struct vmmdev_request_header header; | ||
444 | /** The physical address of the request to cancel. */ | ||
445 | u32 phys_req_to_cancel; | ||
446 | }; | ||
447 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_cancel2, 24 + 4); | ||
448 | |||
449 | #endif | ||
diff --git a/drivers/vme/vme.c b/drivers/vme/vme.c index 81246221a13b..92500f6bdad1 100644 --- a/drivers/vme/vme.c +++ b/drivers/vme/vme.c | |||
@@ -1290,7 +1290,7 @@ struct vme_error_handler *vme_register_error_handler( | |||
1290 | { | 1290 | { |
1291 | struct vme_error_handler *handler; | 1291 | struct vme_error_handler *handler; |
1292 | 1292 | ||
1293 | handler = kmalloc(sizeof(*handler), GFP_KERNEL); | 1293 | handler = kmalloc(sizeof(*handler), GFP_ATOMIC); |
1294 | if (!handler) | 1294 | if (!handler) |
1295 | return NULL; | 1295 | return NULL; |
1296 | 1296 | ||
diff --git a/include/linux/fpga/fpga-bridge.h b/include/linux/fpga/fpga-bridge.h index aa66c87c120b..3694821a6d2d 100644 --- a/include/linux/fpga/fpga-bridge.h +++ b/include/linux/fpga/fpga-bridge.h | |||
@@ -1,10 +1,11 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | 1 | /* SPDX-License-Identifier: GPL-2.0 */ |
2 | #include <linux/device.h> | ||
3 | #include <linux/fpga/fpga-mgr.h> | ||
4 | 2 | ||
5 | #ifndef _LINUX_FPGA_BRIDGE_H | 3 | #ifndef _LINUX_FPGA_BRIDGE_H |
6 | #define _LINUX_FPGA_BRIDGE_H | 4 | #define _LINUX_FPGA_BRIDGE_H |
7 | 5 | ||
6 | #include <linux/device.h> | ||
7 | #include <linux/fpga/fpga-mgr.h> | ||
8 | |||
8 | struct fpga_bridge; | 9 | struct fpga_bridge; |
9 | 10 | ||
10 | /** | 11 | /** |
@@ -12,11 +13,13 @@ struct fpga_bridge; | |||
12 | * @enable_show: returns the FPGA bridge's status | 13 | * @enable_show: returns the FPGA bridge's status |
13 | * @enable_set: set a FPGA bridge as enabled or disabled | 14 | * @enable_set: set a FPGA bridge as enabled or disabled |
14 | * @fpga_bridge_remove: set FPGA into a specific state during driver remove | 15 | * @fpga_bridge_remove: set FPGA into a specific state during driver remove |
16 | * @groups: optional attribute groups. | ||
15 | */ | 17 | */ |
16 | struct fpga_bridge_ops { | 18 | struct fpga_bridge_ops { |
17 | int (*enable_show)(struct fpga_bridge *bridge); | 19 | int (*enable_show)(struct fpga_bridge *bridge); |
18 | int (*enable_set)(struct fpga_bridge *bridge, bool enable); | 20 | int (*enable_set)(struct fpga_bridge *bridge, bool enable); |
19 | void (*fpga_bridge_remove)(struct fpga_bridge *bridge); | 21 | void (*fpga_bridge_remove)(struct fpga_bridge *bridge); |
22 | const struct attribute_group **groups; | ||
20 | }; | 23 | }; |
21 | 24 | ||
22 | /** | 25 | /** |
@@ -43,6 +46,8 @@ struct fpga_bridge { | |||
43 | 46 | ||
44 | struct fpga_bridge *of_fpga_bridge_get(struct device_node *node, | 47 | struct fpga_bridge *of_fpga_bridge_get(struct device_node *node, |
45 | struct fpga_image_info *info); | 48 | struct fpga_image_info *info); |
49 | struct fpga_bridge *fpga_bridge_get(struct device *dev, | ||
50 | struct fpga_image_info *info); | ||
46 | void fpga_bridge_put(struct fpga_bridge *bridge); | 51 | void fpga_bridge_put(struct fpga_bridge *bridge); |
47 | int fpga_bridge_enable(struct fpga_bridge *bridge); | 52 | int fpga_bridge_enable(struct fpga_bridge *bridge); |
48 | int fpga_bridge_disable(struct fpga_bridge *bridge); | 53 | int fpga_bridge_disable(struct fpga_bridge *bridge); |
@@ -50,9 +55,12 @@ int fpga_bridge_disable(struct fpga_bridge *bridge); | |||
50 | int fpga_bridges_enable(struct list_head *bridge_list); | 55 | int fpga_bridges_enable(struct list_head *bridge_list); |
51 | int fpga_bridges_disable(struct list_head *bridge_list); | 56 | int fpga_bridges_disable(struct list_head *bridge_list); |
52 | void fpga_bridges_put(struct list_head *bridge_list); | 57 | void fpga_bridges_put(struct list_head *bridge_list); |
53 | int fpga_bridge_get_to_list(struct device_node *np, | 58 | int fpga_bridge_get_to_list(struct device *dev, |
54 | struct fpga_image_info *info, | 59 | struct fpga_image_info *info, |
55 | struct list_head *bridge_list); | 60 | struct list_head *bridge_list); |
61 | int of_fpga_bridge_get_to_list(struct device_node *np, | ||
62 | struct fpga_image_info *info, | ||
63 | struct list_head *bridge_list); | ||
56 | 64 | ||
57 | int fpga_bridge_register(struct device *dev, const char *name, | 65 | int fpga_bridge_register(struct device *dev, const char *name, |
58 | const struct fpga_bridge_ops *br_ops, void *priv); | 66 | const struct fpga_bridge_ops *br_ops, void *priv); |
diff --git a/include/linux/fpga/fpga-mgr.h b/include/linux/fpga/fpga-mgr.h index bfa14bc023fb..3c6de23aabdf 100644 --- a/include/linux/fpga/fpga-mgr.h +++ b/include/linux/fpga/fpga-mgr.h | |||
@@ -1,7 +1,8 @@ | |||
1 | /* | 1 | /* |
2 | * FPGA Framework | 2 | * FPGA Framework |
3 | * | 3 | * |
4 | * Copyright (C) 2013-2015 Altera Corporation | 4 | * Copyright (C) 2013-2016 Altera Corporation |
5 | * Copyright (C) 2017 Intel Corporation | ||
5 | * | 6 | * |
6 | * This program is free software; you can redistribute it and/or modify it | 7 | * This program is free software; you can redistribute it and/or modify it |
7 | * under the terms and conditions of the GNU General Public License, | 8 | * under the terms and conditions of the GNU General Public License, |
@@ -15,12 +16,12 @@ | |||
15 | * You should have received a copy of the GNU General Public License along with | 16 | * You should have received a copy of the GNU General Public License along with |
16 | * this program. If not, see <http://www.gnu.org/licenses/>. | 17 | * this program. If not, see <http://www.gnu.org/licenses/>. |
17 | */ | 18 | */ |
18 | #include <linux/mutex.h> | ||
19 | #include <linux/platform_device.h> | ||
20 | |||
21 | #ifndef _LINUX_FPGA_MGR_H | 19 | #ifndef _LINUX_FPGA_MGR_H |
22 | #define _LINUX_FPGA_MGR_H | 20 | #define _LINUX_FPGA_MGR_H |
23 | 21 | ||
22 | #include <linux/mutex.h> | ||
23 | #include <linux/platform_device.h> | ||
24 | |||
24 | struct fpga_manager; | 25 | struct fpga_manager; |
25 | struct sg_table; | 26 | struct sg_table; |
26 | 27 | ||
@@ -83,12 +84,26 @@ enum fpga_mgr_states { | |||
83 | * @disable_timeout_us: maximum time to disable traffic through bridge (uSec) | 84 | * @disable_timeout_us: maximum time to disable traffic through bridge (uSec) |
84 | * @config_complete_timeout_us: maximum time for FPGA to switch to operating | 85 | * @config_complete_timeout_us: maximum time for FPGA to switch to operating |
85 | * status in the write_complete op. | 86 | * status in the write_complete op. |
87 | * @firmware_name: name of FPGA image firmware file | ||
88 | * @sgt: scatter/gather table containing FPGA image | ||
89 | * @buf: contiguous buffer containing FPGA image | ||
90 | * @count: size of buf | ||
91 | * @dev: device that owns this | ||
92 | * @overlay: Device Tree overlay | ||
86 | */ | 93 | */ |
87 | struct fpga_image_info { | 94 | struct fpga_image_info { |
88 | u32 flags; | 95 | u32 flags; |
89 | u32 enable_timeout_us; | 96 | u32 enable_timeout_us; |
90 | u32 disable_timeout_us; | 97 | u32 disable_timeout_us; |
91 | u32 config_complete_timeout_us; | 98 | u32 config_complete_timeout_us; |
99 | char *firmware_name; | ||
100 | struct sg_table *sgt; | ||
101 | const char *buf; | ||
102 | size_t count; | ||
103 | struct device *dev; | ||
104 | #ifdef CONFIG_OF | ||
105 | struct device_node *overlay; | ||
106 | #endif | ||
92 | }; | 107 | }; |
93 | 108 | ||
94 | /** | 109 | /** |
@@ -100,6 +115,7 @@ struct fpga_image_info { | |||
100 | * @write_sg: write the scatter list of configuration data to the FPGA | 115 | * @write_sg: write the scatter list of configuration data to the FPGA |
101 | * @write_complete: set FPGA to operating state after writing is done | 116 | * @write_complete: set FPGA to operating state after writing is done |
102 | * @fpga_remove: optional: Set FPGA into a specific state during driver remove | 117 | * @fpga_remove: optional: Set FPGA into a specific state during driver remove |
118 | * @groups: optional attribute groups. | ||
103 | * | 119 | * |
104 | * fpga_manager_ops are the low level functions implemented by a specific | 120 | * fpga_manager_ops are the low level functions implemented by a specific |
105 | * fpga manager driver. The optional ones are tested for NULL before being | 121 | * fpga manager driver. The optional ones are tested for NULL before being |
@@ -116,6 +132,7 @@ struct fpga_manager_ops { | |||
116 | int (*write_complete)(struct fpga_manager *mgr, | 132 | int (*write_complete)(struct fpga_manager *mgr, |
117 | struct fpga_image_info *info); | 133 | struct fpga_image_info *info); |
118 | void (*fpga_remove)(struct fpga_manager *mgr); | 134 | void (*fpga_remove)(struct fpga_manager *mgr); |
135 | const struct attribute_group **groups; | ||
119 | }; | 136 | }; |
120 | 137 | ||
121 | /** | 138 | /** |
@@ -138,14 +155,14 @@ struct fpga_manager { | |||
138 | 155 | ||
139 | #define to_fpga_manager(d) container_of(d, struct fpga_manager, dev) | 156 | #define to_fpga_manager(d) container_of(d, struct fpga_manager, dev) |
140 | 157 | ||
141 | int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info, | 158 | struct fpga_image_info *fpga_image_info_alloc(struct device *dev); |
142 | const char *buf, size_t count); | 159 | |
143 | int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, struct fpga_image_info *info, | 160 | void fpga_image_info_free(struct fpga_image_info *info); |
144 | struct sg_table *sgt); | 161 | |
162 | int fpga_mgr_load(struct fpga_manager *mgr, struct fpga_image_info *info); | ||
145 | 163 | ||
146 | int fpga_mgr_firmware_load(struct fpga_manager *mgr, | 164 | int fpga_mgr_lock(struct fpga_manager *mgr); |
147 | struct fpga_image_info *info, | 165 | void fpga_mgr_unlock(struct fpga_manager *mgr); |
148 | const char *image_name); | ||
149 | 166 | ||
150 | struct fpga_manager *of_fpga_mgr_get(struct device_node *node); | 167 | struct fpga_manager *of_fpga_mgr_get(struct device_node *node); |
151 | 168 | ||
diff --git a/include/linux/fpga/fpga-region.h b/include/linux/fpga/fpga-region.h new file mode 100644 index 000000000000..b6520318ab9c --- /dev/null +++ b/include/linux/fpga/fpga-region.h | |||
@@ -0,0 +1,40 @@ | |||
1 | #ifndef _FPGA_REGION_H | ||
2 | #define _FPGA_REGION_H | ||
3 | |||
4 | #include <linux/device.h> | ||
5 | #include <linux/fpga/fpga-mgr.h> | ||
6 | #include <linux/fpga/fpga-bridge.h> | ||
7 | |||
8 | /** | ||
9 | * struct fpga_region - FPGA Region structure | ||
10 | * @dev: FPGA Region device | ||
11 | * @mutex: enforces exclusive reference to region | ||
12 | * @bridge_list: list of FPGA bridges specified in region | ||
13 | * @mgr: FPGA manager | ||
14 | * @info: FPGA image info | ||
15 | * @priv: private data | ||
16 | * @get_bridges: optional function to get bridges to a list | ||
17 | * @groups: optional attribute groups. | ||
18 | */ | ||
19 | struct fpga_region { | ||
20 | struct device dev; | ||
21 | struct mutex mutex; /* for exclusive reference to region */ | ||
22 | struct list_head bridge_list; | ||
23 | struct fpga_manager *mgr; | ||
24 | struct fpga_image_info *info; | ||
25 | void *priv; | ||
26 | int (*get_bridges)(struct fpga_region *region); | ||
27 | const struct attribute_group **groups; | ||
28 | }; | ||
29 | |||
30 | #define to_fpga_region(d) container_of(d, struct fpga_region, dev) | ||
31 | |||
32 | struct fpga_region *fpga_region_class_find( | ||
33 | struct device *start, const void *data, | ||
34 | int (*match)(struct device *, const void *)); | ||
35 | |||
36 | int fpga_region_program_fpga(struct fpga_region *region); | ||
37 | int fpga_region_register(struct device *dev, struct fpga_region *region); | ||
38 | int fpga_region_unregister(struct fpga_region *region); | ||
39 | |||
40 | #endif /* _FPGA_REGION_H */ | ||
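The `to_fpga_region()` macro above is the kernel's usual `container_of()` idiom: given a pointer to the embedded `struct device`, recover the enclosing `struct fpga_region`. A minimal userspace sketch of that idiom (all names here are made up for illustration, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Re-creation of container_of(): subtract the member's offset from the
 * member pointer to get back to the start of the enclosing structure. */
#define my_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_device {
	int id;
};

struct fake_region {
	int priv;
	struct fake_device dev;	/* embedded, like struct fpga_region's dev */
};

#define to_fake_region(d) my_container_of(d, struct fake_region, dev)

/* Given only the embedded device pointer, recover the region. */
struct fake_region *region_of(struct fake_device *d)
{
	return to_fake_region(d);
}
```

This is why the driver core can hand callbacks a bare `struct device *` and the subsystem can still find its own wrapper structure without any extra lookup table.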
diff --git a/include/linux/i7300_idle.h b/include/linux/i7300_idle.h
deleted file mode 100644
index 4dbe651f71f5..000000000000
--- a/include/linux/i7300_idle.h
+++ /dev/null
@@ -1,84 +0,0 @@
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | |||
3 | #ifndef I7300_IDLE_H | ||
4 | #define I7300_IDLE_H | ||
5 | |||
6 | #include <linux/pci.h> | ||
7 | |||
8 | /* | ||
9 | * I/O AT controls (PCI bus 0 device 8 function 0) | ||
10 | * DIMM controls (PCI bus 0 device 16 function 1) | ||
11 | */ | ||
12 | #define IOAT_BUS 0 | ||
13 | #define IOAT_DEVFN PCI_DEVFN(8, 0) | ||
14 | #define MEMCTL_BUS 0 | ||
15 | #define MEMCTL_DEVFN PCI_DEVFN(16, 1) | ||
16 | |||
17 | struct fbd_ioat { | ||
18 | unsigned int vendor; | ||
19 | unsigned int ioat_dev; | ||
20 | unsigned int enabled; | ||
21 | }; | ||
22 | |||
23 | /* | ||
24 | * The i5000 chip-set has the same hooks as the i7300 | ||
25 | * but it is not enabled by default and must be | ||
26 | * manually enabled with "forceload=1" because it is | ||
27 | * only lightly validated. | ||
28 | */ | ||
29 | |||
30 | static const struct fbd_ioat fbd_ioat_list[] = { | ||
31 | {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB, 1}, | ||
32 | {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT, 0}, | ||
33 | {0, 0} | ||
34 | }; | ||
35 | |||
36 | /* table of devices that work with this driver */ | ||
37 | static const struct pci_device_id pci_tbl[] = { | ||
38 | { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_FBD_CNB) }, | ||
39 | { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_5000_ERR) }, | ||
40 | { } /* Terminating entry */ | ||
41 | }; | ||
42 | |||
43 | /* Check for known platforms with I/O-AT */ | ||
44 | static inline int i7300_idle_platform_probe(struct pci_dev **fbd_dev, | ||
45 | struct pci_dev **ioat_dev, | ||
46 | int enable_all) | ||
47 | { | ||
48 | int i; | ||
49 | struct pci_dev *memdev, *dmadev; | ||
50 | |||
51 | memdev = pci_get_bus_and_slot(MEMCTL_BUS, MEMCTL_DEVFN); | ||
52 | if (!memdev) | ||
53 | return -ENODEV; | ||
54 | |||
55 | for (i = 0; pci_tbl[i].vendor != 0; i++) { | ||
56 | if (memdev->vendor == pci_tbl[i].vendor && | ||
57 | memdev->device == pci_tbl[i].device) { | ||
58 | break; | ||
59 | } | ||
60 | } | ||
61 | if (pci_tbl[i].vendor == 0) | ||
62 | return -ENODEV; | ||
63 | |||
64 | dmadev = pci_get_bus_and_slot(IOAT_BUS, IOAT_DEVFN); | ||
65 | if (!dmadev) | ||
66 | return -ENODEV; | ||
67 | |||
68 | for (i = 0; fbd_ioat_list[i].vendor != 0; i++) { | ||
69 | if (dmadev->vendor == fbd_ioat_list[i].vendor && | ||
70 | dmadev->device == fbd_ioat_list[i].ioat_dev) { | ||
71 | if (!(fbd_ioat_list[i].enabled || enable_all)) | ||
72 | continue; | ||
73 | if (fbd_dev) | ||
74 | *fbd_dev = memdev; | ||
75 | if (ioat_dev) | ||
76 | *ioat_dev = dmadev; | ||
77 | |||
78 | return 0; | ||
79 | } | ||
80 | } | ||
81 | return -ENODEV; | ||
82 | } | ||
83 | |||
84 | #endif | ||
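The deleted `i7300_idle_platform_probe()` above walks two sentinel-terminated ID tables (`pci_tbl` ends with an all-zero entry, `fbd_ioat_list` with a zero vendor) and returns `-ENODEV` when the scan reaches the terminator. A standalone sketch of that lookup pattern, with made-up vendor/device IDs:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative ID-table entry; real code uses struct pci_device_id. */
struct fake_id {
	unsigned int vendor;
	unsigned int device;
};

static const struct fake_id table[] = {
	{ 0x8086, 0x25f0 },	/* example IDs, not from any spec */
	{ 0x8086, 0x360c },
	{ 0, 0 }		/* terminating entry */
};

/* Scan until the zero-vendor sentinel; NULL plays the -ENODEV role. */
const struct fake_id *table_find(unsigned int vendor, unsigned int device)
{
	int i;

	for (i = 0; table[i].vendor != 0; i++)
		if (table[i].vendor == vendor && table[i].device == device)
			return &table[i];
	return NULL;
}
```

The sentinel convention keeps the table self-describing: no separate length needs to be passed around, which is why the kernel uses it for device ID tables throughout.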
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index abb6dc2ebbf8..48fb2b43c35a 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -229,6 +229,12 @@ struct hda_device_id {
 	unsigned long driver_data;
 };
 
+struct sdw_device_id {
+	__u16 mfg_id;
+	__u16 part_id;
+	kernel_ulong_t driver_data;
+};
+
 /*
  * Struct used for matching a device
  */
@@ -452,6 +458,19 @@ struct spi_device_id {
 	kernel_ulong_t driver_data;	/* Data private to the driver */
 };
 
+/* SLIMbus */
+
+#define SLIMBUS_NAME_SIZE	32
+#define SLIMBUS_MODULE_PREFIX	"slim:"
+
+struct slim_device_id {
+	__u16 manf_id, prod_code;
+	__u16 dev_index, instance;
+
+	/* Data private to the driver */
+	kernel_ulong_t driver_data;
+};
+
 #define SPMI_NAME_SIZE	32
 #define SPMI_MODULE_PREFIX "spmi:"
 
diff --git a/include/linux/mux/consumer.h b/include/linux/mux/consumer.h
index ea96d4c82be7..5fc6bb2fefad 100644
--- a/include/linux/mux/consumer.h
+++ b/include/linux/mux/consumer.h
@@ -1,13 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * mux/consumer.h - definitions for the multiplexer consumer interface
  *
  * Copyright (C) 2017 Axentia Technologies AB
  *
  * Author: Peter Rosin <peda@axentia.se>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
  */
 
 #ifndef _LINUX_MUX_CONSUMER_H
diff --git a/include/linux/mux/driver.h b/include/linux/mux/driver.h
index 35c3579c3304..627a2c6bc02d 100644
--- a/include/linux/mux/driver.h
+++ b/include/linux/mux/driver.h
@@ -1,13 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * mux/driver.h - definitions for the multiplexer driver interface
  *
  * Copyright (C) 2017 Axentia Technologies AB
  *
  * Author: Peter Rosin <peda@axentia.se>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
  */
 
 #ifndef _LINUX_MUX_DRIVER_H
diff --git a/include/linux/regmap.h b/include/linux/regmap.h
index 20268b7d5001..6a3aeba40e9e 100644
--- a/include/linux/regmap.h
+++ b/include/linux/regmap.h
@@ -24,6 +24,7 @@ struct module;
 struct device;
 struct i2c_client;
 struct irq_domain;
+struct slim_device;
 struct spi_device;
 struct spmi_device;
 struct regmap;
@@ -511,6 +512,10 @@ struct regmap *__regmap_init_i2c(struct i2c_client *i2c,
 				 const struct regmap_config *config,
 				 struct lock_class_key *lock_key,
 				 const char *lock_name);
+struct regmap *__regmap_init_slimbus(struct slim_device *slimbus,
+				 const struct regmap_config *config,
+				 struct lock_class_key *lock_key,
+				 const char *lock_name);
 struct regmap *__regmap_init_spi(struct spi_device *dev,
 				 const struct regmap_config *config,
 				 struct lock_class_key *lock_key,
@@ -636,6 +641,19 @@ int regmap_attach_dev(struct device *dev, struct regmap *map,
 				i2c, config)
 
 /**
+ * regmap_init_slimbus() - Initialise register map
+ *
+ * @slimbus: Device that will be interacted with
+ * @config: Configuration for register map
+ *
+ * The return value will be an ERR_PTR() on error or a valid pointer to
+ * a struct regmap.
+ */
+#define regmap_init_slimbus(slimbus, config)				\
+	__regmap_lockdep_wrapper(__regmap_init_slimbus, #config,	\
+				slimbus, config)
+
+/**
  * regmap_init_spi() - Initialise register map
  *
  * @dev: Device that will be interacted with
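Note that `regmap_init_slimbus()` passes `#config` to `__regmap_lockdep_wrapper()`, so lockdep can name the lock class after the variable the caller passed in. The stringification idiom in isolation (the config variable name below is hypothetical):

```c
#include <assert.h>
#include <string.h>

/* The # operator turns a macro argument's token into a string literal
 * at preprocessing time, with no runtime cost. */
#define lock_name_of(config) #config

const char *example(void)
{
	/* Stringifies the identifier, not its value. */
	return lock_name_of(wm8904_regmap_config);
}
```

Because the name is captured by the wrapper macro rather than inside `__regmap_init_slimbus()`, every call site gets a distinct, human-readable lock class name in lockdep reports.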
diff --git a/include/linux/siox.h b/include/linux/siox.h
new file mode 100644
index 000000000000..d79624e83134
--- /dev/null
+++ b/include/linux/siox.h
@@ -0,0 +1,77 @@
1 | /* | ||
2 | * Copyright (C) 2015 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> | ||
3 | * | ||
4 | * This program is free software; you can redistribute it and/or modify it under | ||
5 | * the terms of the GNU General Public License version 2 as published by the | ||
6 | * Free Software Foundation. | ||
7 | */ | ||
8 | |||
9 | #include <linux/device.h> | ||
10 | |||
11 | #define to_siox_device(_dev) container_of((_dev), struct siox_device, dev) | ||
12 | struct siox_device { | ||
13 | struct list_head node; /* node in smaster->devices */ | ||
14 | struct siox_master *smaster; | ||
15 | struct device dev; | ||
16 | |||
17 | const char *type; | ||
18 | size_t inbytes; | ||
19 | size_t outbytes; | ||
20 | u8 statustype; | ||
21 | |||
22 | u8 status_read_clean; | ||
23 | u8 status_written; | ||
24 | u8 status_written_lastcycle; | ||
25 | bool connected; | ||
26 | |||
27 | /* statistics */ | ||
28 | unsigned int watchdog_errors; | ||
29 | unsigned int status_errors; | ||
30 | |||
31 | struct kernfs_node *status_errors_kn; | ||
32 | struct kernfs_node *watchdog_kn; | ||
33 | struct kernfs_node *watchdog_errors_kn; | ||
34 | struct kernfs_node *connected_kn; | ||
35 | }; | ||
36 | |||
37 | bool siox_device_synced(struct siox_device *sdevice); | ||
38 | bool siox_device_connected(struct siox_device *sdevice); | ||
39 | |||
40 | struct siox_driver { | ||
41 | int (*probe)(struct siox_device *sdevice); | ||
42 | int (*remove)(struct siox_device *sdevice); | ||
43 | void (*shutdown)(struct siox_device *sdevice); | ||
44 | |||
45 | /* | ||
46 | * buf is big enough to hold sdev->inbytes - 1 bytes, the status byte | ||
47 | * is in the scope of the framework. | ||
48 | */ | ||
49 | int (*set_data)(struct siox_device *sdevice, u8 status, u8 buf[]); | ||
50 | /* | ||
51 | * buf is big enough to hold sdev->outbytes - 1 bytes, the status byte | ||
52 | * is in the scope of the framework | ||
53 | */ | ||
54 | int (*get_data)(struct siox_device *sdevice, const u8 buf[]); | ||
55 | |||
56 | struct device_driver driver; | ||
57 | }; | ||
58 | |||
59 | static inline struct siox_driver *to_siox_driver(struct device_driver *driver) | ||
60 | { | ||
61 | if (driver) | ||
62 | return container_of(driver, struct siox_driver, driver); | ||
63 | else | ||
64 | return NULL; | ||
65 | } | ||
66 | |||
67 | int __siox_driver_register(struct siox_driver *sdriver, struct module *owner); | ||
68 | |||
69 | static inline int siox_driver_register(struct siox_driver *sdriver) | ||
70 | { | ||
71 | return __siox_driver_register(sdriver, THIS_MODULE); | ||
72 | } | ||
73 | |||
74 | static inline void siox_driver_unregister(struct siox_driver *sdriver) | ||
75 | { | ||
76 | return driver_unregister(&sdriver->driver); | ||
77 | } | ||
diff --git a/include/linux/slimbus.h b/include/linux/slimbus.h
new file mode 100644
index 000000000000..c36cf121d2cd
--- /dev/null
+++ b/include/linux/slimbus.h
@@ -0,0 +1,164 @@
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * Copyright (c) 2011-2017, The Linux Foundation | ||
4 | */ | ||
5 | |||
6 | #ifndef _LINUX_SLIMBUS_H | ||
7 | #define _LINUX_SLIMBUS_H | ||
8 | #include <linux/device.h> | ||
9 | #include <linux/module.h> | ||
10 | #include <linux/completion.h> | ||
11 | #include <linux/mod_devicetable.h> | ||
12 | |||
13 | extern struct bus_type slimbus_bus; | ||
14 | |||
15 | /** | ||
16 | * struct slim_eaddr - Enumeration address for a SLIMbus device | ||
17 | * @manf_id: Manufacturer Id for the device | ||
18 | * @prod_code: Product code | ||
19 | * @dev_index: Device index | ||
20 | * @instance: Instance value | ||
21 | */ | ||
22 | struct slim_eaddr { | ||
23 | u16 manf_id; | ||
24 | u16 prod_code; | ||
25 | u8 dev_index; | ||
26 | u8 instance; | ||
27 | } __packed; | ||
28 | |||
29 | /** | ||
30 | * enum slim_device_status - slim device status | ||
31 | * @SLIM_DEVICE_STATUS_DOWN: Slim device is absent or not reported yet. | ||
32 | * @SLIM_DEVICE_STATUS_UP: Slim device is announced on the bus. | ||
33 | * @SLIM_DEVICE_STATUS_RESERVED: Reserved for future use. | ||
34 | */ | ||
35 | enum slim_device_status { | ||
36 | SLIM_DEVICE_STATUS_DOWN = 0, | ||
37 | SLIM_DEVICE_STATUS_UP, | ||
38 | SLIM_DEVICE_STATUS_RESERVED, | ||
39 | }; | ||
40 | |||
41 | struct slim_controller; | ||
42 | |||
43 | /** | ||
44 | * struct slim_device - Slim device handle. | ||
45 | * @dev: Driver model representation of the device. | ||
46 | * @e_addr: Enumeration address of this device. | ||
47 | * @status: slim device status | ||
48 | * @ctrl: slim controller instance. | ||
49 | * @laddr: 1-byte Logical address of this device. | ||
50 | * @is_laddr_valid: indicates if the laddr is valid or not | ||
51 | * | ||
52 | * This is the client/device handle returned when a SLIMbus | ||
53 | * device is registered with a controller. | ||
54 | * Pointer to this structure is used by client-driver as a handle. | ||
55 | */ | ||
56 | struct slim_device { | ||
57 | struct device dev; | ||
58 | struct slim_eaddr e_addr; | ||
59 | struct slim_controller *ctrl; | ||
60 | enum slim_device_status status; | ||
61 | u8 laddr; | ||
62 | bool is_laddr_valid; | ||
63 | }; | ||
64 | |||
65 | #define to_slim_device(d) container_of(d, struct slim_device, dev) | ||
66 | |||
67 | /** | ||
68 | * struct slim_driver - SLIMbus 'generic device' (slave) device driver | ||
69 | * (similar to 'spi_device' on SPI) | ||
70 | * @probe: Binds this driver to a SLIMbus device. | ||
71 | * @remove: Unbinds this driver from the SLIMbus device. | ||
72 | * @shutdown: Standard shutdown callback used during powerdown/halt. | ||
73 | * @device_status: This callback is called when | ||
74 | * - The device reports present and gets a laddr assigned | ||
75 | * - The device reports absent, or the bus goes down. | ||
76 | * @driver: SLIMbus device drivers should initialize name and owner field of | ||
77 | * this structure | ||
78 | * @id_table: List of SLIMbus devices supported by this driver | ||
79 | */ | ||
80 | |||
81 | struct slim_driver { | ||
82 | int (*probe)(struct slim_device *sl); | ||
83 | void (*remove)(struct slim_device *sl); | ||
84 | void (*shutdown)(struct slim_device *sl); | ||
85 | int (*device_status)(struct slim_device *sl, | ||
86 | enum slim_device_status s); | ||
87 | struct device_driver driver; | ||
88 | const struct slim_device_id *id_table; | ||
89 | }; | ||
90 | #define to_slim_driver(d) container_of(d, struct slim_driver, driver) | ||
91 | |||
92 | /** | ||
93 | * struct slim_val_inf - Slimbus value or information element | ||
94 | * @start_offset: Specifies starting offset in information/value element map | ||
95 | * @rbuf: buffer to read the values | ||
96 | * @wbuf: buffer to write | ||
97 | * @num_bytes: up to 16. This ensures that the message will fit the slicesize | ||
98 | * per SLIMbus spec | ||
99 | * @comp: completion for asynchronous operations, valid only if TID is | ||
100 | * required for transaction, like REQUEST operations. | ||
101 | * Rest of the transactions are synchronous anyway. | ||
102 | */ | ||
103 | struct slim_val_inf { | ||
104 | u16 start_offset; | ||
105 | u8 num_bytes; | ||
106 | u8 *rbuf; | ||
107 | const u8 *wbuf; | ||
108 | struct completion *comp; | ||
109 | }; | ||
110 | |||
111 | /* | ||
112 | * use a macro to avoid include chaining to get THIS_MODULE | ||
113 | */ | ||
114 | #define slim_driver_register(drv) \ | ||
115 | __slim_driver_register(drv, THIS_MODULE) | ||
116 | int __slim_driver_register(struct slim_driver *drv, struct module *owner); | ||
117 | void slim_driver_unregister(struct slim_driver *drv); | ||
118 | |||
119 | /** | ||
120 | * module_slim_driver() - Helper macro for registering a SLIMbus driver | ||
121 | * @__slim_driver: slimbus_driver struct | ||
122 | * | ||
123 | * Helper macro for SLIMbus drivers which do not do anything special in module | ||
124 | * init/exit. This eliminates a lot of boilerplate. Each module may only | ||
125 | * use this macro once, and calling it replaces module_init() and module_exit() | ||
126 | */ | ||
127 | #define module_slim_driver(__slim_driver) \ | ||
128 | module_driver(__slim_driver, slim_driver_register, \ | ||
129 | slim_driver_unregister) | ||
130 | |||
131 | static inline void *slim_get_devicedata(const struct slim_device *dev) | ||
132 | { | ||
133 | return dev_get_drvdata(&dev->dev); | ||
134 | } | ||
135 | |||
136 | static inline void slim_set_devicedata(struct slim_device *dev, void *data) | ||
137 | { | ||
138 | dev_set_drvdata(&dev->dev, data); | ||
139 | } | ||
140 | |||
141 | struct slim_device *slim_get_device(struct slim_controller *ctrl, | ||
142 | struct slim_eaddr *e_addr); | ||
143 | int slim_get_logical_addr(struct slim_device *sbdev); | ||
144 | |||
145 | /* Information Element management messages */ | ||
146 | #define SLIM_MSG_MC_REQUEST_INFORMATION 0x20 | ||
147 | #define SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION 0x21 | ||
148 | #define SLIM_MSG_MC_REPLY_INFORMATION 0x24 | ||
149 | #define SLIM_MSG_MC_CLEAR_INFORMATION 0x28 | ||
150 | #define SLIM_MSG_MC_REPORT_INFORMATION 0x29 | ||
151 | |||
152 | /* Value Element management messages */ | ||
153 | #define SLIM_MSG_MC_REQUEST_VALUE 0x60 | ||
154 | #define SLIM_MSG_MC_REQUEST_CHANGE_VALUE 0x61 | ||
155 | #define SLIM_MSG_MC_REPLY_VALUE 0x64 | ||
156 | #define SLIM_MSG_MC_CHANGE_VALUE 0x68 | ||
157 | |||
158 | int slim_xfer_msg(struct slim_device *sbdev, struct slim_val_inf *msg, | ||
159 | u8 mc); | ||
160 | int slim_readb(struct slim_device *sdev, u32 addr); | ||
161 | int slim_writeb(struct slim_device *sdev, u32 addr, u8 value); | ||
162 | int slim_read(struct slim_device *sdev, u32 addr, size_t count, u8 *val); | ||
163 | int slim_write(struct slim_device *sdev, u32 addr, size_t count, u8 *val); | ||
164 | #endif /* _LINUX_SLIMBUS_H */ | ||
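`struct slim_eaddr` above is declared `__packed` so its four fields (two `u16`s and two `u8`s) occupy exactly the six bytes of the on-bus enumeration address, with no compiler-inserted padding. A userspace replica of the layout (GCC/Clang attribute syntax assumed; the kernel's `__packed` expands to the same attribute):

```c
#include <assert.h>
#include <stdint.h>

/* Replica of the SLIMbus enumeration address: 2 + 2 + 1 + 1 bytes,
 * packed so it maps directly onto the 6-byte wire format. */
struct eaddr {
	uint16_t manf_id;
	uint16_t prod_code;
	uint8_t dev_index;
	uint8_t instance;
} __attribute__((packed));
```

Packing matters whenever a struct doubles as a wire or register format: without it the compiler is free to pad for alignment, and the in-memory layout would no longer match the protocol.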
diff --git a/include/linux/soundwire/sdw.h b/include/linux/soundwire/sdw.h
new file mode 100644
index 000000000000..e91fdcf41049
--- /dev/null
+++ b/include/linux/soundwire/sdw.h
@@ -0,0 +1,479 @@
1 | // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #ifndef __SOUNDWIRE_H | ||
5 | #define __SOUNDWIRE_H | ||
6 | |||
7 | struct sdw_bus; | ||
8 | struct sdw_slave; | ||
9 | |||
10 | /* SDW spec defines and enums, as defined by MIPI 1.1. Spec */ | ||
11 | |||
12 | /* SDW Broadcast Device Number */ | ||
13 | #define SDW_BROADCAST_DEV_NUM 15 | ||
14 | |||
15 | /* SDW Enumeration Device Number */ | ||
16 | #define SDW_ENUM_DEV_NUM 0 | ||
17 | |||
18 | /* SDW Group Device Numbers */ | ||
19 | #define SDW_GROUP12_DEV_NUM 12 | ||
20 | #define SDW_GROUP13_DEV_NUM 13 | ||
21 | |||
22 | /* SDW Master Device Number, not supported yet */ | ||
23 | #define SDW_MASTER_DEV_NUM 14 | ||
24 | |||
25 | #define SDW_NUM_DEV_ID_REGISTERS 6 | ||
26 | |||
27 | #define SDW_MAX_DEVICES 11 | ||
28 | |||
29 | /** | ||
30 | * enum sdw_slave_status - Slave status | ||
31 | * @SDW_SLAVE_UNATTACHED: Slave is not attached with the bus. | ||
32 | * @SDW_SLAVE_ATTACHED: Slave is attached with bus. | ||
33 | * @SDW_SLAVE_ALERT: Some alert condition on the Slave | ||
34 | * @SDW_SLAVE_RESERVED: Reserved for future use | ||
35 | */ | ||
36 | enum sdw_slave_status { | ||
37 | SDW_SLAVE_UNATTACHED = 0, | ||
38 | SDW_SLAVE_ATTACHED = 1, | ||
39 | SDW_SLAVE_ALERT = 2, | ||
40 | SDW_SLAVE_RESERVED = 3, | ||
41 | }; | ||
42 | |||
43 | /** | ||
44 | * enum sdw_command_response - Command response as defined by SDW spec | ||
45 | * @SDW_CMD_OK: cmd was successful | ||
46 | * @SDW_CMD_IGNORED: cmd was ignored | ||
47 | * @SDW_CMD_FAIL: cmd was NACKed | ||
48 | * @SDW_CMD_TIMEOUT: cmd timed out | ||
49 | * @SDW_CMD_FAIL_OTHER: cmd failed due to other reason than above | ||
50 | * | ||
51 | * NOTE: The enum is different than actual Spec as response in the Spec is | ||
52 | * combination of ACK/NAK bits | ||
53 | * | ||
54 | * SDW_CMD_TIMEOUT/FAIL_OTHER is defined for SW use, not in spec | ||
55 | */ | ||
56 | enum sdw_command_response { | ||
57 | SDW_CMD_OK = 0, | ||
58 | SDW_CMD_IGNORED = 1, | ||
59 | SDW_CMD_FAIL = 2, | ||
60 | SDW_CMD_TIMEOUT = 3, | ||
61 | SDW_CMD_FAIL_OTHER = 4, | ||
62 | }; | ||
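As the kernel-doc above notes, the spec encodes a command response as a combination of ACK/NAK bits, while this enum flattens it into one value (with two software-only additions). One plausible decode of those bits into the flat shape, for illustration only (the exact bit semantics are not taken from the MIPI spec):

```c
#include <assert.h>

/* Flat response codes, mirroring the shape of enum sdw_command_response. */
enum cmd_response { CMD_OK = 0, CMD_IGNORED = 1, CMD_FAIL = 2 };

/* Assumed mapping: NAK set -> failed; ACK set -> ok; neither -> ignored. */
enum cmd_response decode(int ack, int nak)
{
	if (nak)
		return CMD_FAIL;
	return ack ? CMD_OK : CMD_IGNORED;
}
```

Collapsing the bit pair into an enum lets callers switch on one value, and leaves room for the purely software states (`SDW_CMD_TIMEOUT`, `SDW_CMD_FAIL_OTHER`) that have no wire encoding at all.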
63 | |||
64 | /* | ||
65 | * SDW properties, defined in MIPI DisCo spec v1.0 | ||
66 | */ | ||
67 | enum sdw_clk_stop_reset_behave { | ||
68 | SDW_CLK_STOP_KEEP_STATUS = 1, | ||
69 | }; | ||
70 | |||
71 | /** | ||
72 | * enum sdw_p15_behave - Slave Port 15 behaviour when the Master attempts a | ||
73 | * read | ||
74 | * @SDW_P15_READ_IGNORED: Read is ignored | ||
75 | * @SDW_P15_CMD_OK: Command is ok | ||
76 | */ | ||
77 | enum sdw_p15_behave { | ||
78 | SDW_P15_READ_IGNORED = 0, | ||
79 | SDW_P15_CMD_OK = 1, | ||
80 | }; | ||
81 | |||
82 | /** | ||
83 | * enum sdw_dpn_type - Data port types | ||
84 | * @SDW_DPN_FULL: Full Data Port is supported | ||
85 | * @SDW_DPN_SIMPLE: Simplified Data Port as defined in spec. | ||
86 | * DPN_SampleCtrl2, DPN_OffsetCtrl2, DPN_HCtrl and DPN_BlockCtrl3 | ||
87 | * are not implemented. | ||
88 | * @SDW_DPN_REDUCED: Reduced Data Port as defined in spec. | ||
89 | * DPN_SampleCtrl2, DPN_HCtrl are not implemented. | ||
90 | */ | ||
91 | enum sdw_dpn_type { | ||
92 | SDW_DPN_FULL = 0, | ||
93 | SDW_DPN_SIMPLE = 1, | ||
94 | SDW_DPN_REDUCED = 2, | ||
95 | }; | ||
96 | |||
97 | /** | ||
98 | * enum sdw_clk_stop_mode - Clock Stop modes | ||
99 | * @SDW_CLK_STOP_MODE0: Slave can continue operation seamlessly on clock | ||
100 | * restart | ||
101 | * @SDW_CLK_STOP_MODE1: Slave may have entered a deeper power-saving mode, | ||
102 | * not capable of continuing operation seamlessly when the clock restarts | ||
103 | */ | ||
104 | enum sdw_clk_stop_mode { | ||
105 | SDW_CLK_STOP_MODE0 = 0, | ||
106 | SDW_CLK_STOP_MODE1 = 1, | ||
107 | }; | ||
108 | |||
109 | /** | ||
110 | * struct sdw_dp0_prop - DP0 properties | ||
111 | * @max_word: Maximum number of bits in a Payload Channel Sample, 1 to 64 | ||
112 | * (inclusive) | ||
113 | * @min_word: Minimum number of bits in a Payload Channel Sample, 1 to 64 | ||
114 | * (inclusive) | ||
115 | * @num_words: number of wordlengths supported | ||
116 | * @words: wordlengths supported | ||
117 | * @flow_controlled: Slave implementation results in an OK_NotReady | ||
118 | * response | ||
119 | * @simple_ch_prep_sm: If channel prepare sequence is required | ||
120 | * @device_interrupts: If implementation-defined interrupts are supported | ||
121 | * | ||
122 | * The wordlengths are specified by Spec as max, min AND number of | ||
123 | * discrete values, implementation can define based on the wordlengths they | ||
124 | * support | ||
125 | */ | ||
126 | struct sdw_dp0_prop { | ||
127 | u32 max_word; | ||
128 | u32 min_word; | ||
129 | u32 num_words; | ||
130 | u32 *words; | ||
131 | bool flow_controlled; | ||
132 | bool simple_ch_prep_sm; | ||
133 | bool device_interrupts; | ||
134 | }; | ||
135 | |||
136 | /** | ||
137 | * struct sdw_dpn_audio_mode - Audio mode properties for DPn | ||
138 | * @bus_min_freq: Minimum bus frequency, in Hz | ||
139 | * @bus_max_freq: Maximum bus frequency, in Hz | ||
140 | * @bus_num_freq: Number of discrete frequencies supported | ||
141 | * @bus_freq: Discrete bus frequencies, in Hz | ||
142 | * @min_freq: Minimum sampling frequency, in Hz | ||
143 | * @max_freq: Maximum sampling bus frequency, in Hz | ||
144 | * @num_freq: Number of discrete sampling frequency supported | ||
145 | * @freq: Discrete sampling frequencies, in Hz | ||
146 | * @prep_ch_behave: Specifies the dependencies between Channel Prepare | ||
147 | * sequence and bus clock configuration | ||
148 | * If 0, Channel Prepare can happen at any Bus clock rate | ||
149 | * If 1, Channel Prepare sequence shall happen only after Bus clock is | ||
150 | * changed to a frequency supported by this mode or compatible modes | ||
151 | * described by the next field | ||
152 | * @glitchless: Bitmap describing possible glitchless transitions from this | ||
153 | * Audio Mode to other Audio Modes | ||
154 | */ | ||
155 | struct sdw_dpn_audio_mode { | ||
156 | u32 bus_min_freq; | ||
157 | u32 bus_max_freq; | ||
158 | u32 bus_num_freq; | ||
159 | u32 *bus_freq; | ||
160 | u32 max_freq; | ||
161 | u32 min_freq; | ||
162 | u32 num_freq; | ||
163 | u32 *freq; | ||
164 | u32 prep_ch_behave; | ||
165 | u32 glitchless; | ||
166 | }; | ||
167 | |||
168 | /** | ||
169 | * struct sdw_dpn_prop - Data Port DPn properties | ||
170 | * @num: port number | ||
171 | * @max_word: Maximum number of bits in a Payload Channel Sample, 1 to 64 | ||
172 | * (inclusive) | ||
173 | * @min_word: Minimum number of bits in a Payload Channel Sample, 1 to 64 | ||
174 | * (inclusive) | ||
175 | * @num_words: Number of discrete supported wordlengths | ||
176 | * @words: Discrete supported wordlength | ||
177 | * @type: Data port type. Full, Simplified or Reduced | ||
178 | * @max_grouping: Maximum number of samples that can be grouped together for | ||
179 | * a full data port | ||
180 | * @simple_ch_prep_sm: If the port supports simplified channel prepare state | ||
181 | * machine | ||
182 | * @ch_prep_timeout: Port-specific timeout value, in milliseconds | ||
183 | * @device_interrupts: If set, each bit corresponds to support for | ||
184 | * implementation-defined interrupts | ||
185 | * @max_ch: Maximum channels supported | ||
186 | * @min_ch: Minimum channels supported | ||
187 | * @num_ch: Number of discrete channels supported | ||
188 | * @ch: Discrete channels supported | ||
189 | * @num_ch_combinations: Number of channel combinations supported | ||
190 | * @ch_combinations: Channel combinations supported | ||
191 | * @modes: SDW mode supported | ||
192 | * @max_async_buffer: Number of samples that this port can buffer in | ||
193 | * asynchronous modes | ||
194 | * @block_pack_mode: Type of block port mode supported | ||
195 | * @port_encoding: Payload Channel Sample encoding schemes supported | ||
196 | * @audio_modes: Audio modes supported | ||
197 | */ | ||
198 | struct sdw_dpn_prop { | ||
199 | u32 num; | ||
200 | u32 max_word; | ||
201 | u32 min_word; | ||
202 | u32 num_words; | ||
203 | u32 *words; | ||
204 | enum sdw_dpn_type type; | ||
205 | u32 max_grouping; | ||
206 | bool simple_ch_prep_sm; | ||
207 | u32 ch_prep_timeout; | ||
208 | u32 device_interrupts; | ||
209 | u32 max_ch; | ||
210 | u32 min_ch; | ||
211 | u32 num_ch; | ||
212 | u32 *ch; | ||
213 | u32 num_ch_combinations; | ||
214 | u32 *ch_combinations; | ||
215 | u32 modes; | ||
216 | u32 max_async_buffer; | ||
217 | bool block_pack_mode; | ||
218 | u32 port_encoding; | ||
219 | struct sdw_dpn_audio_mode *audio_modes; | ||
220 | }; | ||
221 | |||
222 | /** | ||
223 | * struct sdw_slave_prop - SoundWire Slave properties | ||
224 | * @mipi_revision: Spec version of the implementation | ||
225 | * @wake_capable: Wake-up events are supported | ||
226 | * @test_mode_capable: If test mode is supported | ||
227 | * @clk_stop_mode1: Clock-Stop Mode 1 is supported | ||
228 | * @simple_clk_stop_capable: Simple clock mode is supported | ||
229 | * @clk_stop_timeout: Worst-case latency of the Clock Stop Prepare State | ||
230 | * Machine transitions, in milliseconds | ||
231 | * @ch_prep_timeout: Worst-case latency of the Channel Prepare State Machine | ||
232 | * transitions, in milliseconds | ||
233 | * @reset_behave: Slave keeps the status of the SlaveStopClockPrepare | ||
234 | * state machine (P=1 SCSP_SM) after exit from clock-stop mode1 | ||
235 | * @high_PHY_capable: Slave is HighPHY capable | ||
236 | * @paging_support: Slave implements paging registers SCP_AddrPage1 and | ||
237 | * SCP_AddrPage2 | ||
238 | * @bank_delay_support: Slave implements bank delay/bridge support registers | ||
239 | * SCP_BankDelay and SCP_NextFrame | ||
240 | * @p15_behave: Slave behavior when the Master attempts a read to the Port15 | ||
241 | * alias | ||
242 | * @lane_control_support: Slave supports lane control | ||
243 | * @master_count: Number of Masters present on this Slave | ||
244 | * @source_ports: Bitmap identifying source ports | ||
245 | * @sink_ports: Bitmap identifying sink ports | ||
246 | * @dp0_prop: Data Port 0 properties | ||
247 | * @src_dpn_prop: Source Data Port N properties | ||
248 | * @sink_dpn_prop: Sink Data Port N properties | ||
249 | */ | ||
250 | struct sdw_slave_prop { | ||
251 | u32 mipi_revision; | ||
252 | bool wake_capable; | ||
253 | bool test_mode_capable; | ||
254 | bool clk_stop_mode1; | ||
255 | bool simple_clk_stop_capable; | ||
256 | u32 clk_stop_timeout; | ||
257 | u32 ch_prep_timeout; | ||
258 | enum sdw_clk_stop_reset_behave reset_behave; | ||
259 | bool high_PHY_capable; | ||
260 | bool paging_support; | ||
261 | bool bank_delay_support; | ||
262 | enum sdw_p15_behave p15_behave; | ||
263 | bool lane_control_support; | ||
264 | u32 master_count; | ||
265 | u32 source_ports; | ||
266 | u32 sink_ports; | ||
267 | struct sdw_dp0_prop *dp0_prop; | ||
268 | struct sdw_dpn_prop *src_dpn_prop; | ||
269 | struct sdw_dpn_prop *sink_dpn_prop; | ||
270 | }; | ||
271 | |||
272 | /** | ||
273 | * struct sdw_master_prop - Master properties | ||
274 | * @revision: MIPI spec version of the implementation | ||
275 | * @master_count: Number of masters | ||
276 | * @clk_stop_mode: Bitmap for Clock Stop modes supported | ||
277 | * @max_freq: Maximum Bus clock frequency, in Hz | ||
278 | * @num_clk_gears: Number of clock gears supported | ||
279 | * @clk_gears: Clock gears supported | ||
280 | * @num_freq: Number of clock frequencies supported, in Hz | ||
281 | * @freq: Clock frequencies supported, in Hz | ||
282 | * @default_frame_rate: Controller default Frame rate, in Hz | ||
283 | * @default_row: Number of rows | ||
284 | * @default_col: Number of columns | ||
285 | * @dynamic_frame: Dynamic frame supported | ||
286 | * @err_threshold: Number of times that software may retry sending a single | ||
287 | * command | ||
288 | * @dpn_prop: Data Port N properties | ||
289 | */ | ||
290 | struct sdw_master_prop { | ||
291 | u32 revision; | ||
292 | u32 master_count; | ||
293 | enum sdw_clk_stop_mode clk_stop_mode; | ||
294 | u32 max_freq; | ||
295 | u32 num_clk_gears; | ||
296 | u32 *clk_gears; | ||
297 | u32 num_freq; | ||
298 | u32 *freq; | ||
299 | u32 default_frame_rate; | ||
300 | u32 default_row; | ||
301 | u32 default_col; | ||
302 | bool dynamic_frame; | ||
303 | u32 err_threshold; | ||
304 | struct sdw_dpn_prop *dpn_prop; | ||
305 | }; | ||
306 | |||
307 | int sdw_master_read_prop(struct sdw_bus *bus); | ||
308 | int sdw_slave_read_prop(struct sdw_slave *slave); | ||
309 | |||
310 | /* | ||
311 | * SDW Slave Structures and APIs | ||
312 | */ | ||
313 | |||
314 | /** | ||
315 | * struct sdw_slave_id - Slave ID | ||
316 | * @mfg_id: MIPI Manufacturer ID | ||
317 | * @part_id: Device Part ID | ||
318 | * @class_id: MIPI Class ID, unused now. | ||
319 | * Currently a placeholder in MIPI SoundWire Spec | ||
320 | * @unique_id: Device unique ID | ||
321 | * @sdw_version: SDW version implemented | ||
322 | * | ||
323 | * The order of the IDs here does not follow the DisCo spec definitions | ||
324 | */ | ||
325 | struct sdw_slave_id { | ||
326 | __u16 mfg_id; | ||
327 | __u16 part_id; | ||
328 | __u8 class_id; | ||
329 | __u8 unique_id:4; | ||
330 | __u8 sdw_version:4; | ||
331 | }; | ||
332 | |||
333 | /** | ||
334 | * struct sdw_slave_intr_status - Slave interrupt status | ||
335 | * @control_port: control port status | ||
336 | * @port: data port status | ||
337 | */ | ||
338 | struct sdw_slave_intr_status { | ||
339 | u8 control_port; | ||
340 | u8 port[15]; | ||
341 | }; | ||
342 | |||
343 | /** | ||
344 | * struct sdw_slave_ops - Slave driver callback ops | ||
345 | * @read_prop: Read Slave properties | ||
346 | * @interrupt_callback: Device interrupt notification (invoked in thread | ||
347 | * context) | ||
348 | * @update_status: Update Slave status | ||
349 | */ | ||
350 | struct sdw_slave_ops { | ||
351 | int (*read_prop)(struct sdw_slave *sdw); | ||
352 | int (*interrupt_callback)(struct sdw_slave *slave, | ||
353 | struct sdw_slave_intr_status *status); | ||
354 | int (*update_status)(struct sdw_slave *slave, | ||
355 | enum sdw_slave_status status); | ||
356 | }; | ||
357 | |||
358 | /** | ||
359 | * struct sdw_slave - SoundWire Slave | ||
360 | * @id: MIPI device ID | ||
361 | * @dev: Linux device | ||
362 | * @status: Status reported by the Slave | ||
363 | * @bus: Bus handle | ||
364 | * @ops: Slave callback ops | ||
365 | * @prop: Slave properties | ||
366 | * @node: node for bus list | ||
367 | * @port_ready: Port ready completion flag for each Slave port | ||
368 | * @dev_num: Device Number assigned by Bus | ||
369 | */ | ||
370 | struct sdw_slave { | ||
371 | struct sdw_slave_id id; | ||
372 | struct device dev; | ||
373 | enum sdw_slave_status status; | ||
374 | struct sdw_bus *bus; | ||
375 | const struct sdw_slave_ops *ops; | ||
376 | struct sdw_slave_prop prop; | ||
377 | struct list_head node; | ||
378 | struct completion *port_ready; | ||
379 | u16 dev_num; | ||
380 | }; | ||
381 | |||
382 | #define dev_to_sdw_dev(_dev) container_of(_dev, struct sdw_slave, dev) | ||
383 | |||
384 | struct sdw_driver { | ||
385 | const char *name; | ||
386 | |||
387 | int (*probe)(struct sdw_slave *sdw, | ||
388 | const struct sdw_device_id *id); | ||
389 | int (*remove)(struct sdw_slave *sdw); | ||
390 | void (*shutdown)(struct sdw_slave *sdw); | ||
391 | |||
392 | const struct sdw_device_id *id_table; | ||
393 | const struct sdw_slave_ops *ops; | ||
394 | |||
395 | struct device_driver driver; | ||
396 | }; | ||
397 | |||
398 | #define SDW_SLAVE_ENTRY(_mfg_id, _part_id, _drv_data) \ | ||
399 | { .mfg_id = (_mfg_id), .part_id = (_part_id), \ | ||
400 | .driver_data = (unsigned long)(_drv_data) } | ||
401 | |||
402 | int sdw_handle_slave_status(struct sdw_bus *bus, | ||
403 | enum sdw_slave_status status[]); | ||
404 | |||
405 | /* | ||
406 | * SDW master structures and APIs | ||
407 | */ | ||
408 | |||
409 | struct sdw_msg; | ||
410 | |||
411 | /** | ||
412 | * struct sdw_defer - SDW deferred message | ||
413 | * @length: message length | ||
414 | * @complete: message completion | ||
415 | * @msg: SDW message | ||
416 | */ | ||
417 | struct sdw_defer { | ||
418 | int length; | ||
419 | struct completion complete; | ||
420 | struct sdw_msg *msg; | ||
421 | }; | ||
422 | |||
423 | /** | ||
424 | * struct sdw_master_ops - Master driver ops | ||
425 | * @read_prop: Read Master properties | ||
426 | * @xfer_msg: Transfer message callback | ||
427 | * @xfer_msg_defer: Defer version of transfer message callback | ||
428 | * @reset_page_addr: Reset the SCP page address registers | ||
429 | */ | ||
430 | struct sdw_master_ops { | ||
431 | int (*read_prop)(struct sdw_bus *bus); | ||
432 | |||
433 | enum sdw_command_response (*xfer_msg) | ||
434 | (struct sdw_bus *bus, struct sdw_msg *msg); | ||
435 | enum sdw_command_response (*xfer_msg_defer) | ||
436 | (struct sdw_bus *bus, struct sdw_msg *msg, | ||
437 | struct sdw_defer *defer); | ||
438 | enum sdw_command_response (*reset_page_addr) | ||
439 | (struct sdw_bus *bus, unsigned int dev_num); | ||
440 | }; | ||
441 | |||
442 | /** | ||
443 | * struct sdw_bus - SoundWire bus | ||
444 | * @dev: Master linux device | ||
445 | * @link_id: Link id number, can be 0 to N, unique for each Master | ||
446 | * @slaves: list of Slaves on this bus | ||
447 | * @assigned: Bitmap for Slave device numbers. | ||
448 | * Bit set implies used number, bit clear implies unused number. | ||
449 | * @bus_lock: bus lock | ||
450 | * @msg_lock: message lock | ||
451 | * @ops: Master callback ops | ||
452 | * @prop: Master properties | ||
453 | * @defer_msg: Defer message | ||
454 | * @clk_stop_timeout: Clock stop timeout computed | ||
455 | */ | ||
456 | struct sdw_bus { | ||
457 | struct device *dev; | ||
458 | unsigned int link_id; | ||
459 | struct list_head slaves; | ||
460 | DECLARE_BITMAP(assigned, SDW_MAX_DEVICES); | ||
461 | struct mutex bus_lock; | ||
462 | struct mutex msg_lock; | ||
463 | const struct sdw_master_ops *ops; | ||
464 | struct sdw_master_prop prop; | ||
465 | struct sdw_defer defer_msg; | ||
466 | unsigned int clk_stop_timeout; | ||
467 | }; | ||
468 | |||
469 | int sdw_add_bus_master(struct sdw_bus *bus); | ||
470 | void sdw_delete_bus_master(struct sdw_bus *bus); | ||
471 | |||
472 | /* messaging and data APIs */ | ||
473 | |||
474 | int sdw_read(struct sdw_slave *slave, u32 addr); | ||
475 | int sdw_write(struct sdw_slave *slave, u32 addr, u8 value); | ||
476 | int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val); | ||
477 | int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val); | ||
478 | |||
479 | #endif /* __SOUNDWIRE_H */ | ||
diff --git a/include/linux/soundwire/sdw_intel.h b/include/linux/soundwire/sdw_intel.h new file mode 100644 index 000000000000..4b37528f592d --- /dev/null +++ b/include/linux/soundwire/sdw_intel.h | |||
@@ -0,0 +1,24 @@ | |||
1 | // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #ifndef __SDW_INTEL_H | ||
5 | #define __SDW_INTEL_H | ||
6 | |||
7 | /** | ||
8 | * struct sdw_intel_res - SoundWire Intel resource structure | ||
9 | * @mmio_base: mmio base of SoundWire registers | ||
10 | * @irq: interrupt number | ||
11 | * @handle: ACPI parent handle | ||
12 | * @parent: parent device | ||
13 | */ | ||
14 | struct sdw_intel_res { | ||
15 | void __iomem *mmio_base; | ||
16 | int irq; | ||
17 | acpi_handle handle; | ||
18 | struct device *parent; | ||
19 | }; | ||
20 | |||
21 | void *sdw_intel_init(acpi_handle *parent_handle, struct sdw_intel_res *res); | ||
22 | void sdw_intel_exit(void *arg); | ||
23 | |||
24 | #endif | ||
diff --git a/include/linux/soundwire/sdw_registers.h b/include/linux/soundwire/sdw_registers.h new file mode 100644 index 000000000000..df472b1ab410 --- /dev/null +++ b/include/linux/soundwire/sdw_registers.h | |||
@@ -0,0 +1,194 @@ | |||
1 | // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #ifndef __SDW_REGISTERS_H | ||
5 | #define __SDW_REGISTERS_H | ||
6 | |||
7 | /* | ||
8 | * Typically we define both registers and shifts, but the shift can be | ||
9 | * generated from the mask using bit primitives like ffs(), so we use | ||
10 | * that and avoid defining separate shifts. | ||
11 | */ | ||
12 | #define SDW_REG_SHIFT(n) (ffs(n) - 1) | ||
13 | |||
14 | /* | ||
15 | * SDW registers as defined by MIPI 1.1 Spec | ||
16 | */ | ||
17 | #define SDW_REGADDR GENMASK(14, 0) | ||
18 | #define SDW_SCP_ADDRPAGE2_MASK GENMASK(22, 15) | ||
19 | #define SDW_SCP_ADDRPAGE1_MASK GENMASK(30, 23) | ||
20 | |||
21 | #define SDW_REG_NO_PAGE 0x00008000 | ||
22 | #define SDW_REG_OPTIONAL_PAGE 0x00010000 | ||
23 | #define SDW_REG_MAX 0x80000000 | ||
24 | |||
25 | #define SDW_DPN_SIZE 0x100 | ||
26 | #define SDW_BANK1_OFFSET 0x10 | ||
27 | |||
28 | /* | ||
29 | * DP0 Interrupt register & bits | ||
30 | * | ||
31 | * The spec treats Status (RO) and Clear (WC) as separate registers, but | ||
32 | * they share the same address, so treat them as one register with WC. | ||
33 | */ | ||
34 | |||
35 | /* Both the INT and STATUS registers share the same address */ | ||
36 | #define SDW_DP0_INT 0x0 | ||
37 | #define SDW_DP0_INTMASK 0x1 | ||
38 | #define SDW_DP0_PORTCTRL 0x2 | ||
39 | #define SDW_DP0_BLOCKCTRL1 0x3 | ||
40 | #define SDW_DP0_PREPARESTATUS 0x4 | ||
41 | #define SDW_DP0_PREPARECTRL 0x5 | ||
42 | |||
43 | #define SDW_DP0_INT_TEST_FAIL BIT(0) | ||
44 | #define SDW_DP0_INT_PORT_READY BIT(1) | ||
45 | #define SDW_DP0_INT_BRA_FAILURE BIT(2) | ||
46 | #define SDW_DP0_INT_IMPDEF1 BIT(5) | ||
47 | #define SDW_DP0_INT_IMPDEF2 BIT(6) | ||
48 | #define SDW_DP0_INT_IMPDEF3 BIT(7) | ||
49 | |||
50 | #define SDW_DP0_PORTCTRL_DATAMODE GENMASK(3, 2) | ||
51 | #define SDW_DP0_PORTCTRL_NXTINVBANK BIT(4) | ||
52 | #define SDW_DP0_PORTCTRL_BPT_PAYLD GENMASK(7, 6) | ||
53 | |||
54 | #define SDW_DP0_CHANNELEN 0x20 | ||
55 | #define SDW_DP0_SAMPLECTRL1 0x22 | ||
56 | #define SDW_DP0_SAMPLECTRL2 0x23 | ||
57 | #define SDW_DP0_OFFSETCTRL1 0x24 | ||
58 | #define SDW_DP0_OFFSETCTRL2 0x25 | ||
59 | #define SDW_DP0_HCTRL 0x26 | ||
60 | #define SDW_DP0_LANECTRL 0x28 | ||
61 | |||
62 | /* Both the INT and STATUS registers share the same address */ | ||
63 | #define SDW_SCP_INT1 0x40 | ||
64 | #define SDW_SCP_INTMASK1 0x41 | ||
65 | |||
66 | #define SDW_SCP_INT1_PARITY BIT(0) | ||
67 | #define SDW_SCP_INT1_BUS_CLASH BIT(1) | ||
68 | #define SDW_SCP_INT1_IMPL_DEF BIT(2) | ||
69 | #define SDW_SCP_INT1_SCP2_CASCADE BIT(7) | ||
70 | #define SDW_SCP_INT1_PORT0_3 GENMASK(6, 3) | ||
71 | |||
72 | #define SDW_SCP_INTSTAT2 0x42 | ||
73 | #define SDW_SCP_INTSTAT2_SCP3_CASCADE BIT(7) | ||
74 | #define SDW_SCP_INTSTAT2_PORT4_10 GENMASK(6, 0) | ||
75 | |||
76 | |||
77 | #define SDW_SCP_INTSTAT3 0x43 | ||
78 | #define SDW_SCP_INTSTAT3_PORT11_14 GENMASK(3, 0) | ||
79 | |||
80 | /* Number of interrupt status registers */ | ||
81 | #define SDW_NUM_INT_STAT_REGISTERS 3 | ||
82 | |||
83 | /* Number of interrupt clear registers */ | ||
84 | #define SDW_NUM_INT_CLEAR_REGISTERS 1 | ||
85 | |||
86 | #define SDW_SCP_CTRL 0x44 | ||
87 | #define SDW_SCP_CTRL_CLK_STP_NOW BIT(1) | ||
88 | #define SDW_SCP_CTRL_FORCE_RESET BIT(7) | ||
89 | |||
90 | #define SDW_SCP_STAT 0x44 | ||
91 | #define SDW_SCP_STAT_CLK_STP_NF BIT(0) | ||
92 | #define SDW_SCP_STAT_HPHY_NOK BIT(5) | ||
93 | #define SDW_SCP_STAT_CURR_BANK BIT(6) | ||
94 | |||
95 | #define SDW_SCP_SYSTEMCTRL 0x45 | ||
96 | #define SDW_SCP_SYSTEMCTRL_CLK_STP_PREP BIT(0) | ||
97 | #define SDW_SCP_SYSTEMCTRL_CLK_STP_MODE BIT(2) | ||
98 | #define SDW_SCP_SYSTEMCTRL_WAKE_UP_EN BIT(3) | ||
99 | #define SDW_SCP_SYSTEMCTRL_HIGH_PHY BIT(4) | ||
100 | |||
101 | #define SDW_SCP_SYSTEMCTRL_CLK_STP_MODE0 0 | ||
102 | #define SDW_SCP_SYSTEMCTRL_CLK_STP_MODE1 BIT(2) | ||
103 | |||
104 | #define SDW_SCP_DEVNUMBER 0x46 | ||
105 | #define SDW_SCP_HIGH_PHY_CHECK 0x47 | ||
106 | #define SDW_SCP_ADDRPAGE1 0x48 | ||
107 | #define SDW_SCP_ADDRPAGE2 0x49 | ||
108 | #define SDW_SCP_KEEPEREN 0x4A | ||
109 | #define SDW_SCP_BANKDELAY 0x4B | ||
110 | #define SDW_SCP_TESTMODE 0x4F | ||
111 | #define SDW_SCP_DEVID_0 0x50 | ||
112 | #define SDW_SCP_DEVID_1 0x51 | ||
113 | #define SDW_SCP_DEVID_2 0x52 | ||
114 | #define SDW_SCP_DEVID_3 0x53 | ||
115 | #define SDW_SCP_DEVID_4 0x54 | ||
116 | #define SDW_SCP_DEVID_5 0x55 | ||
117 | |||
118 | /* Banked Registers */ | ||
119 | #define SDW_SCP_FRAMECTRL_B0 0x60 | ||
120 | #define SDW_SCP_FRAMECTRL_B1 (0x60 + SDW_BANK1_OFFSET) | ||
121 | #define SDW_SCP_NEXTFRAME_B0 0x61 | ||
122 | #define SDW_SCP_NEXTFRAME_B1 (0x61 + SDW_BANK1_OFFSET) | ||
123 | |||
124 | /* Both the INT and STATUS registers share the same address */ | ||
125 | #define SDW_DPN_INT(n) (0x0 + SDW_DPN_SIZE * (n)) | ||
126 | #define SDW_DPN_INTMASK(n) (0x1 + SDW_DPN_SIZE * (n)) | ||
127 | #define SDW_DPN_PORTCTRL(n) (0x2 + SDW_DPN_SIZE * (n)) | ||
128 | #define SDW_DPN_BLOCKCTRL1(n) (0x3 + SDW_DPN_SIZE * (n)) | ||
129 | #define SDW_DPN_PREPARESTATUS(n) (0x4 + SDW_DPN_SIZE * (n)) | ||
130 | #define SDW_DPN_PREPARECTRL(n) (0x5 + SDW_DPN_SIZE * (n)) | ||
131 | |||
132 | #define SDW_DPN_INT_TEST_FAIL BIT(0) | ||
133 | #define SDW_DPN_INT_PORT_READY BIT(1) | ||
134 | #define SDW_DPN_INT_IMPDEF1 BIT(5) | ||
135 | #define SDW_DPN_INT_IMPDEF2 BIT(6) | ||
136 | #define SDW_DPN_INT_IMPDEF3 BIT(7) | ||
137 | |||
138 | #define SDW_DPN_PORTCTRL_FLOWMODE GENMASK(1, 0) | ||
139 | #define SDW_DPN_PORTCTRL_DATAMODE GENMASK(3, 2) | ||
140 | #define SDW_DPN_PORTCTRL_NXTINVBANK BIT(4) | ||
141 | |||
142 | #define SDW_DPN_BLOCKCTRL1_WDLEN GENMASK(5, 0) | ||
143 | |||
144 | #define SDW_DPN_PREPARECTRL_CH_PREP GENMASK(7, 0) | ||
145 | |||
146 | #define SDW_DPN_CHANNELEN_B0(n) (0x20 + SDW_DPN_SIZE * (n)) | ||
147 | #define SDW_DPN_CHANNELEN_B1(n) (0x30 + SDW_DPN_SIZE * (n)) | ||
148 | |||
149 | #define SDW_DPN_BLOCKCTRL2_B0(n) (0x21 + SDW_DPN_SIZE * (n)) | ||
150 | #define SDW_DPN_BLOCKCTRL2_B1(n) (0x31 + SDW_DPN_SIZE * (n)) | ||
151 | |||
152 | #define SDW_DPN_SAMPLECTRL1_B0(n) (0x22 + SDW_DPN_SIZE * (n)) | ||
153 | #define SDW_DPN_SAMPLECTRL1_B1(n) (0x32 + SDW_DPN_SIZE * (n)) | ||
154 | |||
155 | #define SDW_DPN_SAMPLECTRL2_B0(n) (0x23 + SDW_DPN_SIZE * (n)) | ||
156 | #define SDW_DPN_SAMPLECTRL2_B1(n) (0x33 + SDW_DPN_SIZE * (n)) | ||
157 | |||
158 | #define SDW_DPN_OFFSETCTRL1_B0(n) (0x24 + SDW_DPN_SIZE * (n)) | ||
159 | #define SDW_DPN_OFFSETCTRL1_B1(n) (0x34 + SDW_DPN_SIZE * (n)) | ||
160 | |||
161 | #define SDW_DPN_OFFSETCTRL2_B0(n) (0x25 + SDW_DPN_SIZE * (n)) | ||
162 | #define SDW_DPN_OFFSETCTRL2_B1(n) (0x35 + SDW_DPN_SIZE * (n)) | ||
163 | |||
164 | #define SDW_DPN_HCTRL_B0(n) (0x26 + SDW_DPN_SIZE * (n)) | ||
165 | #define SDW_DPN_HCTRL_B1(n) (0x36 + SDW_DPN_SIZE * (n)) | ||
166 | |||
167 | #define SDW_DPN_BLOCKCTRL3_B0(n) (0x27 + SDW_DPN_SIZE * (n)) | ||
168 | #define SDW_DPN_BLOCKCTRL3_B1(n) (0x37 + SDW_DPN_SIZE * (n)) | ||
169 | |||
170 | #define SDW_DPN_LANECTRL_B0(n) (0x28 + SDW_DPN_SIZE * (n)) | ||
171 | #define SDW_DPN_LANECTRL_B1(n) (0x38 + SDW_DPN_SIZE * (n)) | ||
172 | |||
173 | #define SDW_DPN_SAMPLECTRL_LOW GENMASK(7, 0) | ||
174 | #define SDW_DPN_SAMPLECTRL_HIGH GENMASK(15, 8) | ||
175 | |||
176 | #define SDW_DPN_HCTRL_HSTART GENMASK(7, 4) | ||
177 | #define SDW_DPN_HCTRL_HSTOP GENMASK(3, 0) | ||
178 | |||
179 | #define SDW_NUM_CASC_PORT_INTSTAT1 4 | ||
180 | #define SDW_CASC_PORT_START_INTSTAT1 0 | ||
181 | #define SDW_CASC_PORT_MASK_INTSTAT1 0x8 | ||
182 | #define SDW_CASC_PORT_REG_OFFSET_INTSTAT1 0x0 | ||
183 | |||
184 | #define SDW_NUM_CASC_PORT_INTSTAT2 7 | ||
185 | #define SDW_CASC_PORT_START_INTSTAT2 4 | ||
186 | #define SDW_CASC_PORT_MASK_INTSTAT2 1 | ||
187 | #define SDW_CASC_PORT_REG_OFFSET_INTSTAT2 1 | ||
188 | |||
189 | #define SDW_NUM_CASC_PORT_INTSTAT3 4 | ||
190 | #define SDW_CASC_PORT_START_INTSTAT3 11 | ||
191 | #define SDW_CASC_PORT_MASK_INTSTAT3 1 | ||
192 | #define SDW_CASC_PORT_REG_OFFSET_INTSTAT3 2 | ||
193 | |||
194 | #endif /* __SDW_REGISTERS_H */ | ||
diff --git a/include/linux/soundwire/sdw_type.h b/include/linux/soundwire/sdw_type.h new file mode 100644 index 000000000000..9fd553e553e9 --- /dev/null +++ b/include/linux/soundwire/sdw_type.h | |||
@@ -0,0 +1,19 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | // Copyright(c) 2015-17 Intel Corporation. | ||
3 | |||
4 | #ifndef __SOUNDWIRE_TYPES_H | ||
5 | #define __SOUNDWIRE_TYPES_H | ||
6 | |||
7 | extern struct bus_type sdw_bus_type; | ||
8 | |||
9 | #define drv_to_sdw_driver(_drv) container_of(_drv, struct sdw_driver, driver) | ||
10 | |||
11 | #define sdw_register_driver(drv) \ | ||
12 | __sdw_register_driver(drv, THIS_MODULE) | ||
13 | |||
14 | int __sdw_register_driver(struct sdw_driver *drv, struct module *); | ||
15 | void sdw_unregister_driver(struct sdw_driver *drv); | ||
16 | |||
17 | int sdw_slave_modalias(const struct sdw_slave *slave, char *buf, size_t size); | ||
18 | |||
19 | #endif /* __SOUNDWIRE_TYPES_H */ | ||
diff --git a/include/linux/vbox_utils.h b/include/linux/vbox_utils.h new file mode 100644 index 000000000000..c71def6b310f --- /dev/null +++ b/include/linux/vbox_utils.h | |||
@@ -0,0 +1,79 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* Copyright (C) 2006-2016 Oracle Corporation */ | ||
3 | |||
4 | #ifndef __VBOX_UTILS_H__ | ||
5 | #define __VBOX_UTILS_H__ | ||
6 | |||
7 | #include <linux/printk.h> | ||
8 | #include <linux/vbox_vmmdev_types.h> | ||
9 | |||
10 | struct vbg_dev; | ||
11 | |||
12 | /** | ||
13 | * vboxguest logging functions; these log to the backdoor and also call | ||
14 | * the equivalent kernel pr_foo() function. | ||
15 | */ | ||
16 | __printf(1, 2) void vbg_info(const char *fmt, ...); | ||
17 | __printf(1, 2) void vbg_warn(const char *fmt, ...); | ||
18 | __printf(1, 2) void vbg_err(const char *fmt, ...); | ||
19 | |||
20 | /* Only use backdoor logging for non-dynamic debug builds */ | ||
21 | #if defined(DEBUG) && !defined(CONFIG_DYNAMIC_DEBUG) | ||
22 | __printf(1, 2) void vbg_debug(const char *fmt, ...); | ||
23 | #else | ||
24 | #define vbg_debug pr_debug | ||
25 | #endif | ||
26 | |||
27 | /** | ||
28 | * Allocate memory for generic request and initialize the request header. | ||
29 | * | ||
30 | * Return: the allocated memory | ||
31 | * @len: Size of memory block required for the request. | ||
32 | * @req_type: The generic request type. | ||
33 | */ | ||
34 | void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type); | ||
35 | |||
36 | /** | ||
37 | * Perform a generic request. | ||
38 | * | ||
39 | * Return: VBox status code | ||
40 | * @gdev: The Guest extension device. | ||
41 | * @req: Pointer to the request structure. | ||
42 | */ | ||
43 | int vbg_req_perform(struct vbg_dev *gdev, void *req); | ||
44 | |||
45 | int vbg_hgcm_connect(struct vbg_dev *gdev, | ||
46 | struct vmmdev_hgcm_service_location *loc, | ||
47 | u32 *client_id, int *vbox_status); | ||
48 | |||
49 | int vbg_hgcm_disconnect(struct vbg_dev *gdev, u32 client_id, int *vbox_status); | ||
50 | |||
51 | int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function, | ||
52 | u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms, | ||
53 | u32 parm_count, int *vbox_status); | ||
54 | |||
55 | int vbg_hgcm_call32( | ||
56 | struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, | ||
57 | struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, | ||
58 | int *vbox_status); | ||
59 | |||
60 | /** | ||
61 | * Convert a VirtualBox status code to a standard Linux kernel return value. | ||
62 | * Return: 0 or negative errno value. | ||
63 | * @rc: VirtualBox status code to convert. | ||
64 | */ | ||
65 | int vbg_status_code_to_errno(int rc); | ||
66 | |||
67 | /** | ||
68 | * Helper for the vboxsf driver to get a reference to the guest device. | ||
69 | * Return: a pointer to the gdev; or an ERR_PTR value on error. | ||
70 | */ | ||
71 | struct vbg_dev *vbg_get_gdev(void); | ||
72 | |||
73 | /** | ||
74 | * Helper for the vboxsf driver to put a guest device reference. | ||
75 | * @gdev: Reference returned by vbg_get_gdev to put. | ||
76 | */ | ||
77 | void vbg_put_gdev(struct vbg_dev *gdev); | ||
78 | |||
79 | #endif | ||
diff --git a/include/trace/events/siox.h b/include/trace/events/siox.h new file mode 100644 index 000000000000..68a43fc2c3a5 --- /dev/null +++ b/include/trace/events/siox.h | |||
@@ -0,0 +1,66 @@ | |||
1 | #undef TRACE_SYSTEM | ||
2 | #define TRACE_SYSTEM siox | ||
3 | |||
4 | #if !defined(_TRACE_SIOX_H) || defined(TRACE_HEADER_MULTI_READ) | ||
5 | #define _TRACE_SIOX_H | ||
6 | |||
7 | #include <linux/tracepoint.h> | ||
8 | |||
9 | TRACE_EVENT(siox_set_data, | ||
10 | TP_PROTO(const struct siox_master *smaster, | ||
11 | const struct siox_device *sdevice, | ||
12 | unsigned int devno, size_t bufoffset), | ||
13 | TP_ARGS(smaster, sdevice, devno, bufoffset), | ||
14 | TP_STRUCT__entry( | ||
15 | __field(int, busno) | ||
16 | __field(unsigned int, devno) | ||
17 | __field(size_t, inbytes) | ||
18 | __dynamic_array(u8, buf, sdevice->inbytes) | ||
19 | ), | ||
20 | TP_fast_assign( | ||
21 | __entry->busno = smaster->busno; | ||
22 | __entry->devno = devno; | ||
23 | __entry->inbytes = sdevice->inbytes; | ||
24 | memcpy(__get_dynamic_array(buf), | ||
25 | smaster->buf + bufoffset, sdevice->inbytes); | ||
26 | ), | ||
27 | TP_printk("siox-%d-%u [%*phD]", | ||
28 | __entry->busno, | ||
29 | __entry->devno, | ||
30 | (int)__entry->inbytes, __get_dynamic_array(buf) | ||
31 | ) | ||
32 | ); | ||
33 | |||
34 | TRACE_EVENT(siox_get_data, | ||
35 | TP_PROTO(const struct siox_master *smaster, | ||
36 | const struct siox_device *sdevice, | ||
37 | unsigned int devno, u8 status_clean, | ||
38 | size_t bufoffset), | ||
39 | TP_ARGS(smaster, sdevice, devno, status_clean, bufoffset), | ||
40 | TP_STRUCT__entry( | ||
41 | __field(int, busno) | ||
42 | __field(unsigned int, devno) | ||
43 | __field(u8, status_clean) | ||
44 | __field(size_t, outbytes) | ||
45 | __dynamic_array(u8, buf, sdevice->outbytes) | ||
46 | ), | ||
47 | TP_fast_assign( | ||
48 | __entry->busno = smaster->busno; | ||
49 | __entry->devno = devno; | ||
50 | __entry->status_clean = status_clean; | ||
51 | __entry->outbytes = sdevice->outbytes; | ||
52 | memcpy(__get_dynamic_array(buf), | ||
53 | smaster->buf + bufoffset, sdevice->outbytes); | ||
54 | ), | ||
55 | TP_printk("siox-%d-%u (%02hhx) [%*phD]", | ||
56 | __entry->busno, | ||
57 | __entry->devno, | ||
58 | __entry->status_clean, | ||
59 | (int)__entry->outbytes, __get_dynamic_array(buf) | ||
60 | ) | ||
61 | ); | ||
62 | |||
63 | #endif /* if !defined(_TRACE_SIOX_H) || defined(TRACE_HEADER_MULTI_READ) */ | ||
64 | |||
65 | /* This part must be outside protection */ | ||
66 | #include <trace/define_trace.h> | ||
diff --git a/include/uapi/linux/lp.h b/include/uapi/linux/lp.h index dafcfe4e4834..8589a27037d7 100644 --- a/include/uapi/linux/lp.h +++ b/include/uapi/linux/lp.h | |||
@@ -8,6 +8,8 @@ | |||
8 | #ifndef _UAPI_LINUX_LP_H | 8 | #ifndef _UAPI_LINUX_LP_H |
9 | #define _UAPI_LINUX_LP_H | 9 | #define _UAPI_LINUX_LP_H |
10 | 10 | ||
11 | #include <linux/types.h> | ||
12 | #include <linux/ioctl.h> | ||
11 | 13 | ||
12 | /* | 14 | /* |
13 | * Per POSIX guidelines, this module reserves the LP and lp prefixes | 15 | * Per POSIX guidelines, this module reserves the LP and lp prefixes |
@@ -88,7 +90,15 @@ | |||
88 | #define LPGETSTATS 0x060d /* get statistics (struct lp_stats) */ | 90 | #define LPGETSTATS 0x060d /* get statistics (struct lp_stats) */ |
89 | #endif | 91 | #endif |
90 | #define LPGETFLAGS 0x060e /* get status flags */ | 92 | #define LPGETFLAGS 0x060e /* get status flags */ |
91 | #define LPSETTIMEOUT 0x060f /* set parport timeout */ | 93 | #define LPSETTIMEOUT_OLD 0x060f /* set parport timeout */ |
94 | #define LPSETTIMEOUT_NEW \ | ||
95 | _IOW(0x6, 0xf, __s64[2]) /* set parport timeout */ | ||
96 | #if __BITS_PER_LONG == 64 | ||
97 | #define LPSETTIMEOUT LPSETTIMEOUT_OLD | ||
98 | #else | ||
99 | #define LPSETTIMEOUT (sizeof(time_t) > sizeof(__kernel_long_t) ? \ | ||
100 | LPSETTIMEOUT_NEW : LPSETTIMEOUT_OLD) | ||
101 | #endif | ||
92 | 102 | ||
93 | /* timeout for printk'ing a timeout, in jiffies (100ths of a second). | 103 | /* timeout for printk'ing a timeout, in jiffies (100ths of a second). |
94 | This is also used for re-checking error conditions if LP_ABORT is | 104 | This is also used for re-checking error conditions if LP_ABORT is |
diff --git a/include/uapi/linux/vbox_err.h b/include/uapi/linux/vbox_err.h new file mode 100644 index 000000000000..7eae536ff1e6 --- /dev/null +++ b/include/uapi/linux/vbox_err.h | |||
@@ -0,0 +1,151 @@ | |||
1 | /* SPDX-License-Identifier: MIT */ | ||
2 | /* Copyright (C) 2017 Oracle Corporation */ | ||
3 | |||
4 | #ifndef __UAPI_VBOX_ERR_H__ | ||
5 | #define __UAPI_VBOX_ERR_H__ | ||
6 | |||
7 | #define VINF_SUCCESS 0 | ||
8 | #define VERR_GENERAL_FAILURE (-1) | ||
9 | #define VERR_INVALID_PARAMETER (-2) | ||
10 | #define VERR_INVALID_MAGIC (-3) | ||
11 | #define VERR_INVALID_HANDLE (-4) | ||
12 | #define VERR_LOCK_FAILED (-5) | ||
13 | #define VERR_INVALID_POINTER (-6) | ||
14 | #define VERR_IDT_FAILED (-7) | ||
15 | #define VERR_NO_MEMORY (-8) | ||
16 | #define VERR_ALREADY_LOADED (-9) | ||
17 | #define VERR_PERMISSION_DENIED (-10) | ||
18 | #define VERR_VERSION_MISMATCH (-11) | ||
19 | #define VERR_NOT_IMPLEMENTED (-12) | ||
20 | #define VERR_INVALID_FLAGS (-13) | ||
21 | |||
22 | #define VERR_NOT_EQUAL (-18) | ||
23 | #define VERR_NOT_SYMLINK (-19) | ||
24 | #define VERR_NO_TMP_MEMORY (-20) | ||
25 | #define VERR_INVALID_FMODE (-21) | ||
26 | #define VERR_WRONG_ORDER (-22) | ||
27 | #define VERR_NO_TLS_FOR_SELF (-23) | ||
28 | #define VERR_FAILED_TO_SET_SELF_TLS (-24) | ||
29 | #define VERR_NO_CONT_MEMORY (-26) | ||
30 | #define VERR_NO_PAGE_MEMORY (-27) | ||
31 | #define VERR_THREAD_IS_DEAD (-29) | ||
32 | #define VERR_THREAD_NOT_WAITABLE (-30) | ||
33 | #define VERR_PAGE_TABLE_NOT_PRESENT (-31) | ||
34 | #define VERR_INVALID_CONTEXT (-32) | ||
35 | #define VERR_TIMER_BUSY (-33) | ||
36 | #define VERR_ADDRESS_CONFLICT (-34) | ||
37 | #define VERR_UNRESOLVED_ERROR (-35) | ||
38 | #define VERR_INVALID_FUNCTION (-36) | ||
39 | #define VERR_NOT_SUPPORTED (-37) | ||
40 | #define VERR_ACCESS_DENIED (-38) | ||
41 | #define VERR_INTERRUPTED (-39) | ||
42 | #define VERR_TIMEOUT (-40) | ||
43 | #define VERR_BUFFER_OVERFLOW (-41) | ||
44 | #define VERR_TOO_MUCH_DATA (-42) | ||
45 | #define VERR_MAX_THRDS_REACHED (-43) | ||
46 | #define VERR_MAX_PROCS_REACHED (-44) | ||
47 | #define VERR_SIGNAL_REFUSED (-45) | ||
48 | #define VERR_SIGNAL_PENDING (-46) | ||
49 | #define VERR_SIGNAL_INVALID (-47) | ||
50 | #define VERR_STATE_CHANGED (-48) | ||
51 | #define VERR_INVALID_UUID_FORMAT (-49) | ||
52 | #define VERR_PROCESS_NOT_FOUND (-50) | ||
53 | #define VERR_PROCESS_RUNNING (-51) | ||
54 | #define VERR_TRY_AGAIN (-52) | ||
55 | #define VERR_PARSE_ERROR (-53) | ||
56 | #define VERR_OUT_OF_RANGE (-54) | ||
57 | #define VERR_NUMBER_TOO_BIG (-55) | ||
58 | #define VERR_NO_DIGITS (-56) | ||
59 | #define VERR_NEGATIVE_UNSIGNED (-57) | ||
60 | #define VERR_NO_TRANSLATION (-58) | ||
61 | |||
62 | #define VERR_NOT_FOUND (-78) | ||
63 | #define VERR_INVALID_STATE (-79) | ||
64 | #define VERR_OUT_OF_RESOURCES (-80) | ||
65 | |||
66 | #define VERR_FILE_NOT_FOUND (-102) | ||
67 | #define VERR_PATH_NOT_FOUND (-103) | ||
68 | #define VERR_INVALID_NAME (-104) | ||
69 | #define VERR_ALREADY_EXISTS (-105) | ||
70 | #define VERR_TOO_MANY_OPEN_FILES (-106) | ||
71 | #define VERR_SEEK (-107) | ||
72 | #define VERR_NEGATIVE_SEEK (-108) | ||
73 | #define VERR_SEEK_ON_DEVICE (-109) | ||
74 | #define VERR_EOF (-110) | ||
75 | #define VERR_READ_ERROR (-111) | ||
76 | #define VERR_WRITE_ERROR (-112) | ||
77 | #define VERR_WRITE_PROTECT (-113) | ||
78 | #define VERR_SHARING_VIOLATION (-114) | ||
79 | #define VERR_FILE_LOCK_FAILED (-115) | ||
80 | #define VERR_FILE_LOCK_VIOLATION (-116) | ||
81 | #define VERR_CANT_CREATE (-117) | ||
82 | #define VERR_CANT_DELETE_DIRECTORY (-118) | ||
83 | #define VERR_NOT_SAME_DEVICE (-119) | ||
84 | #define VERR_FILENAME_TOO_LONG (-120) | ||
85 | #define VERR_MEDIA_NOT_PRESENT (-121) | ||
86 | #define VERR_MEDIA_NOT_RECOGNIZED (-122) | ||
87 | #define VERR_FILE_NOT_LOCKED (-123) | ||
88 | #define VERR_FILE_LOCK_LOST (-124) | ||
89 | #define VERR_DIR_NOT_EMPTY (-125) | ||
90 | #define VERR_NOT_A_DIRECTORY (-126) | ||
91 | #define VERR_IS_A_DIRECTORY (-127) | ||
92 | #define VERR_FILE_TOO_BIG (-128) | ||
93 | |||
94 | #define VERR_NET_IO_ERROR (-400) | ||
95 | #define VERR_NET_OUT_OF_RESOURCES (-401) | ||
96 | #define VERR_NET_HOST_NOT_FOUND (-402) | ||
97 | #define VERR_NET_PATH_NOT_FOUND (-403) | ||
98 | #define VERR_NET_PRINT_ERROR (-404) | ||
99 | #define VERR_NET_NO_NETWORK (-405) | ||
100 | #define VERR_NET_NOT_UNIQUE_NAME (-406) | ||
101 | |||
102 | #define VERR_NET_IN_PROGRESS (-436) | ||
103 | #define VERR_NET_ALREADY_IN_PROGRESS (-437) | ||
104 | #define VERR_NET_NOT_SOCKET (-438) | ||
105 | #define VERR_NET_DEST_ADDRESS_REQUIRED (-439) | ||
106 | #define VERR_NET_MSG_SIZE (-440) | ||
107 | #define VERR_NET_PROTOCOL_TYPE (-441) | ||
108 | #define VERR_NET_PROTOCOL_NOT_AVAILABLE (-442) | ||
109 | #define VERR_NET_PROTOCOL_NOT_SUPPORTED (-443) | ||
110 | #define VERR_NET_SOCKET_TYPE_NOT_SUPPORTED (-444) | ||
111 | #define VERR_NET_OPERATION_NOT_SUPPORTED (-445) | ||
112 | #define VERR_NET_PROTOCOL_FAMILY_NOT_SUPPORTED (-446) | ||
113 | #define VERR_NET_ADDRESS_FAMILY_NOT_SUPPORTED (-447) | ||
114 | #define VERR_NET_ADDRESS_IN_USE (-448) | ||
115 | #define VERR_NET_ADDRESS_NOT_AVAILABLE (-449) | ||
116 | #define VERR_NET_DOWN (-450) | ||
117 | #define VERR_NET_UNREACHABLE (-451) | ||
118 | #define VERR_NET_CONNECTION_RESET (-452) | ||
119 | #define VERR_NET_CONNECTION_ABORTED (-453) | ||
120 | #define VERR_NET_CONNECTION_RESET_BY_PEER (-454) | ||
121 | #define VERR_NET_NO_BUFFER_SPACE (-455) | ||
122 | #define VERR_NET_ALREADY_CONNECTED (-456) | ||
123 | #define VERR_NET_NOT_CONNECTED (-457) | ||
124 | #define VERR_NET_SHUTDOWN (-458) | ||
125 | #define VERR_NET_TOO_MANY_REFERENCES (-459) | ||
126 | #define VERR_NET_CONNECTION_TIMED_OUT (-460) | ||
127 | #define VERR_NET_CONNECTION_REFUSED (-461) | ||
128 | #define VERR_NET_HOST_DOWN (-464) | ||
129 | #define VERR_NET_HOST_UNREACHABLE (-465) | ||
130 | #define VERR_NET_PROTOCOL_ERROR (-466) | ||
131 | #define VERR_NET_INCOMPLETE_TX_PACKET (-467) | ||
132 | |||
133 | /* misc. unsorted codes */ | ||
134 | #define VERR_RESOURCE_BUSY (-138) | ||
135 | #define VERR_DISK_FULL (-152) | ||
136 | #define VERR_TOO_MANY_SYMLINKS (-156) | ||
137 | #define VERR_NO_MORE_FILES (-201) | ||
138 | #define VERR_INTERNAL_ERROR (-225) | ||
139 | #define VERR_INTERNAL_ERROR_2 (-226) | ||
140 | #define VERR_INTERNAL_ERROR_3 (-227) | ||
141 | #define VERR_INTERNAL_ERROR_4 (-228) | ||
142 | #define VERR_DEV_IO_ERROR (-250) | ||
143 | #define VERR_IO_BAD_LENGTH (-255) | ||
144 | #define VERR_BROKEN_PIPE (-301) | ||
145 | #define VERR_NO_DATA (-304) | ||
146 | #define VERR_SEM_DESTROYED (-363) | ||
147 | #define VERR_DEADLOCK (-365) | ||
148 | #define VERR_BAD_EXE_FORMAT (-608) | ||
149 | #define VINF_HGCM_ASYNC_EXECUTE (2903) | ||
150 | |||
151 | #endif | ||
diff --git a/include/uapi/linux/vbox_vmmdev_types.h b/include/uapi/linux/vbox_vmmdev_types.h new file mode 100644 index 000000000000..0e68024f36c7 --- /dev/null +++ b/include/uapi/linux/vbox_vmmdev_types.h | |||
@@ -0,0 +1,226 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* | ||
3 | * Virtual Device for Guest <-> VMM/Host communication, type definitions | ||
4 | * which are also used for the vboxguest ioctl interface / by vboxsf | ||
5 | * | ||
6 | * Copyright (C) 2006-2016 Oracle Corporation | ||
7 | */ | ||
8 | |||
9 | #ifndef __UAPI_VBOX_VMMDEV_TYPES_H__ | ||
10 | #define __UAPI_VBOX_VMMDEV_TYPES_H__ | ||
11 | |||
12 | #include <asm/bitsperlong.h> | ||
13 | #include <linux/types.h> | ||
14 | |||
15 | /* | ||
16 | * We cannot use Linux's compiletime_assert here because it expects to be used | ||
17 | * inside a function only. Use a typedef to a char array with a negative size. | ||
18 | */ | ||
19 | #define VMMDEV_ASSERT_SIZE(type, size) \ | ||
20 | typedef char type ## _asrt_size[1 - 2*!!(sizeof(struct type) != (size))] | ||
21 | |||
22 | /** enum vmmdev_request_type - VMMDev request types. */ | ||
23 | enum vmmdev_request_type { | ||
24 | VMMDEVREQ_INVALID_REQUEST = 0, | ||
25 | VMMDEVREQ_GET_MOUSE_STATUS = 1, | ||
26 | VMMDEVREQ_SET_MOUSE_STATUS = 2, | ||
27 | VMMDEVREQ_SET_POINTER_SHAPE = 3, | ||
28 | VMMDEVREQ_GET_HOST_VERSION = 4, | ||
29 | VMMDEVREQ_IDLE = 5, | ||
30 | VMMDEVREQ_GET_HOST_TIME = 10, | ||
31 | VMMDEVREQ_GET_HYPERVISOR_INFO = 20, | ||
32 | VMMDEVREQ_SET_HYPERVISOR_INFO = 21, | ||
33 | VMMDEVREQ_REGISTER_PATCH_MEMORY = 22, /* since version 3.0.6 */ | ||
34 | VMMDEVREQ_DEREGISTER_PATCH_MEMORY = 23, /* since version 3.0.6 */ | ||
35 | VMMDEVREQ_SET_POWER_STATUS = 30, | ||
36 | VMMDEVREQ_ACKNOWLEDGE_EVENTS = 41, | ||
37 | VMMDEVREQ_CTL_GUEST_FILTER_MASK = 42, | ||
38 | VMMDEVREQ_REPORT_GUEST_INFO = 50, | ||
39 | VMMDEVREQ_REPORT_GUEST_INFO2 = 58, /* since version 3.2.0 */ | ||
40 | VMMDEVREQ_REPORT_GUEST_STATUS = 59, /* since version 3.2.8 */ | ||
41 | VMMDEVREQ_REPORT_GUEST_USER_STATE = 74, /* since version 4.3 */ | ||
42 | /* Retrieve a display resize request sent by the host, deprecated. */ | ||
43 | VMMDEVREQ_GET_DISPLAY_CHANGE_REQ = 51, | ||
44 | VMMDEVREQ_VIDEMODE_SUPPORTED = 52, | ||
45 | VMMDEVREQ_GET_HEIGHT_REDUCTION = 53, | ||
46 | /** | ||
47 | * @VMMDEVREQ_GET_DISPLAY_CHANGE_REQ2: | ||
48 | * Retrieve a display resize request sent by the host. | ||
49 | * | ||
50 | * Queries a display resize request sent from the host. If the | ||
51 | * event_ack member is set to true and there is an unqueried request | ||
52 | * available for one of the virtual displays, then that request will | ||
53 | * be returned. If several displays have unqueried requests the lowest | ||
54 | * numbered display will be chosen first. Only the most recent unseen | ||
55 | * request for each display is remembered. | ||
56 | * If event_ack is set to false, the last host request queried with | ||
57 | * event_ack set is resent, or failing that the most recent received | ||
58 | * from the host. If no host request was ever received then all zeros | ||
59 | * are returned. | ||
60 | */ | ||
61 | VMMDEVREQ_GET_DISPLAY_CHANGE_REQ2 = 54, | ||
62 | VMMDEVREQ_REPORT_GUEST_CAPABILITIES = 55, | ||
63 | VMMDEVREQ_SET_GUEST_CAPABILITIES = 56, | ||
64 | VMMDEVREQ_VIDEMODE_SUPPORTED2 = 57, /* since version 3.2.0 */ | ||
65 | VMMDEVREQ_GET_DISPLAY_CHANGE_REQEX = 80, /* since version 4.2.4 */ | ||
66 | VMMDEVREQ_HGCM_CONNECT = 60, | ||
67 | VMMDEVREQ_HGCM_DISCONNECT = 61, | ||
68 | VMMDEVREQ_HGCM_CALL32 = 62, | ||
69 | VMMDEVREQ_HGCM_CALL64 = 63, | ||
70 | VMMDEVREQ_HGCM_CANCEL = 64, | ||
71 | VMMDEVREQ_HGCM_CANCEL2 = 65, | ||
72 | VMMDEVREQ_VIDEO_ACCEL_ENABLE = 70, | ||
73 | VMMDEVREQ_VIDEO_ACCEL_FLUSH = 71, | ||
74 | VMMDEVREQ_VIDEO_SET_VISIBLE_REGION = 72, | ||
75 | VMMDEVREQ_GET_SEAMLESS_CHANGE_REQ = 73, | ||
76 | VMMDEVREQ_QUERY_CREDENTIALS = 100, | ||
77 | VMMDEVREQ_REPORT_CREDENTIALS_JUDGEMENT = 101, | ||
78 | VMMDEVREQ_REPORT_GUEST_STATS = 110, | ||
79 | VMMDEVREQ_GET_MEMBALLOON_CHANGE_REQ = 111, | ||
80 | VMMDEVREQ_GET_STATISTICS_CHANGE_REQ = 112, | ||
81 | VMMDEVREQ_CHANGE_MEMBALLOON = 113, | ||
82 | VMMDEVREQ_GET_VRDPCHANGE_REQ = 150, | ||
83 | VMMDEVREQ_LOG_STRING = 200, | ||
84 | VMMDEVREQ_GET_CPU_HOTPLUG_REQ = 210, | ||
85 | VMMDEVREQ_SET_CPU_HOTPLUG_STATUS = 211, | ||
86 | VMMDEVREQ_REGISTER_SHARED_MODULE = 212, | ||
87 | VMMDEVREQ_UNREGISTER_SHARED_MODULE = 213, | ||
88 | VMMDEVREQ_CHECK_SHARED_MODULES = 214, | ||
89 | VMMDEVREQ_GET_PAGE_SHARING_STATUS = 215, | ||
90 | VMMDEVREQ_DEBUG_IS_PAGE_SHARED = 216, | ||
91 | VMMDEVREQ_GET_SESSION_ID = 217, /* since version 3.2.8 */ | ||
92 | VMMDEVREQ_WRITE_COREDUMP = 218, | ||
93 | VMMDEVREQ_GUEST_HEARTBEAT = 219, | ||
94 | VMMDEVREQ_HEARTBEAT_CONFIGURE = 220, | ||
95 | /* Ensure the enum is a 32 bit data-type */ | ||
96 | VMMDEVREQ_SIZEHACK = 0x7fffffff | ||
97 | }; | ||
98 | |||
99 | #if __BITS_PER_LONG == 64 | ||
100 | #define VMMDEVREQ_HGCM_CALL VMMDEVREQ_HGCM_CALL64 | ||
101 | #else | ||
102 | #define VMMDEVREQ_HGCM_CALL VMMDEVREQ_HGCM_CALL32 | ||
103 | #endif | ||
104 | |||
105 | /** HGCM service location types. */ | ||
106 | enum vmmdev_hgcm_service_location_type { | ||
107 | VMMDEV_HGCM_LOC_INVALID = 0, | ||
108 | VMMDEV_HGCM_LOC_LOCALHOST = 1, | ||
109 | VMMDEV_HGCM_LOC_LOCALHOST_EXISTING = 2, | ||
110 | /* Ensure the enum is a 32 bit data-type */ | ||
111 | VMMDEV_HGCM_LOC_SIZEHACK = 0x7fffffff | ||
112 | }; | ||
113 | |||
114 | /** HGCM host service location. */ | ||
115 | struct vmmdev_hgcm_service_location_localhost { | ||
116 | /** Service name */ | ||
117 | char service_name[128]; | ||
118 | }; | ||
119 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_service_location_localhost, 128); | ||
120 | |||
121 | /** HGCM service location. */ | ||
122 | struct vmmdev_hgcm_service_location { | ||
123 | /** Type of the location. */ | ||
124 | enum vmmdev_hgcm_service_location_type type; | ||
125 | |||
126 | union { | ||
127 | struct vmmdev_hgcm_service_location_localhost localhost; | ||
128 | } u; | ||
129 | }; | ||
130 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_service_location, 128 + 4); | ||
131 | |||
132 | /** HGCM function parameter type. */ | ||
133 | enum vmmdev_hgcm_function_parameter_type { | ||
134 | VMMDEV_HGCM_PARM_TYPE_INVALID = 0, | ||
135 | VMMDEV_HGCM_PARM_TYPE_32BIT = 1, | ||
136 | VMMDEV_HGCM_PARM_TYPE_64BIT = 2, | ||
137 | /** Deprecated, doesn't work; use PAGELIST instead. */ | ||
138 | VMMDEV_HGCM_PARM_TYPE_PHYSADDR = 3, | ||
139 | /** In and Out, user-memory */ | ||
140 | VMMDEV_HGCM_PARM_TYPE_LINADDR = 4, | ||
141 | /** In, user-memory (read; host<-guest) */ | ||
142 | VMMDEV_HGCM_PARM_TYPE_LINADDR_IN = 5, | ||
143 | /** Out, user-memory (write; host->guest) */ | ||
144 | VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT = 6, | ||
145 | /** In and Out, kernel-memory */ | ||
146 | VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL = 7, | ||
147 | /** In, kernel-memory (read; host<-guest) */ | ||
148 | VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN = 8, | ||
149 | /** Out, kernel-memory (write; host->guest) */ | ||
150 | VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT = 9, | ||
151 | /** Physical addresses of locked pages for a buffer. */ | ||
152 | VMMDEV_HGCM_PARM_TYPE_PAGELIST = 10, | ||
153 | /* Ensure the enum is a 32 bit data-type */ | ||
154 | VMMDEV_HGCM_PARM_TYPE_SIZEHACK = 0x7fffffff | ||
155 | }; | ||
156 | |||
157 | /** HGCM function parameter, 32-bit client. */ | ||
158 | struct vmmdev_hgcm_function_parameter32 { | ||
159 | enum vmmdev_hgcm_function_parameter_type type; | ||
160 | union { | ||
161 | __u32 value32; | ||
162 | __u64 value64; | ||
163 | struct { | ||
164 | __u32 size; | ||
165 | union { | ||
166 | __u32 phys_addr; | ||
167 | __u32 linear_addr; | ||
168 | } u; | ||
169 | } pointer; | ||
170 | struct { | ||
171 | /** Size of the buffer described by the page list. */ | ||
172 | __u32 size; | ||
173 | /** Relative to the request header. */ | ||
174 | __u32 offset; | ||
175 | } page_list; | ||
176 | } u; | ||
177 | } __packed; | ||
178 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_function_parameter32, 4 + 8); | ||
179 | |||
180 | /** HGCM function parameter, 64-bit client. */ | ||
181 | struct vmmdev_hgcm_function_parameter64 { | ||
182 | enum vmmdev_hgcm_function_parameter_type type; | ||
183 | union { | ||
184 | __u32 value32; | ||
185 | __u64 value64; | ||
186 | struct { | ||
187 | __u32 size; | ||
188 | union { | ||
189 | __u64 phys_addr; | ||
190 | __u64 linear_addr; | ||
191 | } u; | ||
192 | } __packed pointer; | ||
193 | struct { | ||
194 | /** Size of the buffer described by the page list. */ | ||
195 | __u32 size; | ||
196 | /** Relative to the request header. */ | ||
197 | __u32 offset; | ||
198 | } page_list; | ||
199 | } __packed u; | ||
200 | } __packed; | ||
201 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_function_parameter64, 4 + 12); | ||
202 | |||
203 | #if __BITS_PER_LONG == 64 | ||
204 | #define vmmdev_hgcm_function_parameter vmmdev_hgcm_function_parameter64 | ||
205 | #else | ||
206 | #define vmmdev_hgcm_function_parameter vmmdev_hgcm_function_parameter32 | ||
207 | #endif | ||
208 | |||
209 | #define VMMDEV_HGCM_F_PARM_DIRECTION_NONE 0x00000000U | ||
210 | #define VMMDEV_HGCM_F_PARM_DIRECTION_TO_HOST 0x00000001U | ||
211 | #define VMMDEV_HGCM_F_PARM_DIRECTION_FROM_HOST 0x00000002U | ||
212 | #define VMMDEV_HGCM_F_PARM_DIRECTION_BOTH 0x00000003U | ||
213 | |||
214 | /** | ||
215 | * struct vmmdev_hgcm_pagelist - VMMDEV_HGCM_PARM_TYPE_PAGELIST parameters | ||
216 | * point to this structure to actually describe the buffer. | ||
217 | */ | ||
218 | struct vmmdev_hgcm_pagelist { | ||
219 | __u32 flags; /** VMMDEV_HGCM_F_PARM_*. */ | ||
220 | __u16 offset_first_page; /** Data offset in the first page. */ | ||
221 | __u16 page_count; /** Number of pages. */ | ||
222 | __u64 pages[1]; /** Page addresses. */ | ||
223 | }; | ||
224 | VMMDEV_ASSERT_SIZE(vmmdev_hgcm_pagelist, 4 + 2 + 2 + 8); | ||
225 | |||
226 | #endif | ||
diff --git a/include/uapi/linux/vboxguest.h b/include/uapi/linux/vboxguest.h new file mode 100644 index 000000000000..612f0c7d3558 --- /dev/null +++ b/include/uapi/linux/vboxguest.h | |||
@@ -0,0 +1,330 @@ | |||
1 | /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ | ||
2 | /* | ||
3 | * VBoxGuest - VirtualBox Guest Additions Driver Interface. | ||
4 | * | ||
5 | * Copyright (C) 2006-2016 Oracle Corporation | ||
6 | */ | ||
7 | |||
8 | #ifndef __UAPI_VBOXGUEST_H__ | ||
9 | #define __UAPI_VBOXGUEST_H__ | ||
10 | |||
11 | #include <asm/bitsperlong.h> | ||
12 | #include <linux/ioctl.h> | ||
13 | #include <linux/vbox_err.h> | ||
14 | #include <linux/vbox_vmmdev_types.h> | ||
15 | |||
16 | /* Version of vbg_ioctl_hdr structure. */ | ||
17 | #define VBG_IOCTL_HDR_VERSION 0x10001 | ||
18 | /* Default request type. Use this for non-VMMDev requests. */ | ||
19 | #define VBG_IOCTL_HDR_TYPE_DEFAULT 0 | ||
20 | |||
21 | /** | ||
22 | * Common ioctl header. | ||
23 | * | ||
24 | * This is a mirror of vmmdev_request_header to prevent duplicating data and | ||
25 | * needing to verify things multiple times. | ||
26 | */ | ||
27 | struct vbg_ioctl_hdr { | ||
28 | /** IN: The request input size, and output size if size_out is zero. */ | ||
29 | __u32 size_in; | ||
30 | /** IN: Structure version (VBG_IOCTL_HDR_VERSION) */ | ||
31 | __u32 version; | ||
32 | /** IN: The VMMDev request type or VBG_IOCTL_HDR_TYPE_DEFAULT. */ | ||
33 | __u32 type; | ||
34 | /** | ||
35 | * OUT: The VBox status code of the operation, out direction only. | ||
36 | * This is a VINF_ or VERR_ value as defined in vbox_err.h. | ||
37 | */ | ||
38 | __s32 rc; | ||
39 | /** IN: Output size. Set to zero to use size_in as output size. */ | ||
40 | __u32 size_out; | ||
41 | /** Reserved, MBZ. */ | ||
42 | __u32 reserved; | ||
43 | }; | ||
44 | VMMDEV_ASSERT_SIZE(vbg_ioctl_hdr, 24); | ||
45 | |||
46 | |||
47 | /* | ||
48 | * The VBoxGuest I/O control version. | ||
49 | * | ||
50 | * As usual, the high word contains the major version; changes to it | ||
51 | * signify incompatible changes. | ||
52 | |||
53 | * The lower word is the minor version number; it is increased when new | ||
54 | * functions are added or existing ones are changed in a backwards-compatible manner. | ||
55 | */ | ||
56 | #define VBG_IOC_VERSION 0x00010000u | ||
57 | |||
58 | /** | ||
59 | * VBG_IOCTL_DRIVER_VERSION_INFO data structure | ||
60 | * | ||
61 | * Note VBG_IOCTL_DRIVER_VERSION_INFO may switch the session to a backwards | ||
62 | * compatible interface version if req_version indicates older client code. | ||
63 | */ | ||
64 | struct vbg_ioctl_driver_version_info { | ||
65 | /** The header. */ | ||
66 | struct vbg_ioctl_hdr hdr; | ||
67 | union { | ||
68 | struct { | ||
69 | /** Requested interface version (VBG_IOC_VERSION). */ | ||
70 | __u32 req_version; | ||
71 | /** | ||
72 | * Minimum interface version number (typically the | ||
73 | * major version part of VBG_IOC_VERSION). | ||
74 | */ | ||
75 | __u32 min_version; | ||
76 | /** Reserved, MBZ. */ | ||
77 | __u32 reserved1; | ||
78 | /** Reserved, MBZ. */ | ||
79 | __u32 reserved2; | ||
80 | } in; | ||
81 | struct { | ||
82 | /** Version for this session (typ. VBG_IOC_VERSION). */ | ||
83 | __u32 session_version; | ||
84 | /** Version of the IDC interface (VBG_IOC_VERSION). */ | ||
85 | __u32 driver_version; | ||
86 | /** The SVN revision of the driver, or 0. */ | ||
87 | __u32 driver_revision; | ||
88 | /** Reserved \#1 (zero until defined). */ | ||
89 | __u32 reserved1; | ||
90 | /** Reserved \#2 (zero until defined). */ | ||
91 | __u32 reserved2; | ||
92 | } out; | ||
93 | } u; | ||
94 | }; | ||
95 | VMMDEV_ASSERT_SIZE(vbg_ioctl_driver_version_info, 24 + 20); | ||
96 | |||
97 | #define VBG_IOCTL_DRIVER_VERSION_INFO \ | ||
98 | _IOWR('V', 0, struct vbg_ioctl_driver_version_info) | ||
99 | |||
100 | |||
101 | /* IOCTL to perform a VMM Device request less than 1KB in size. */ | ||
102 | #define VBG_IOCTL_VMMDEV_REQUEST(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 2, s) | ||
103 | |||
104 | |||
105 | /* IOCTL to perform a VMM Device request larger than 1KB. */ | ||
106 | #define VBG_IOCTL_VMMDEV_REQUEST_BIG _IOC(_IOC_READ | _IOC_WRITE, 'V', 3, 0) | ||
107 | |||
108 | |||
109 | /** VBG_IOCTL_HGCM_CONNECT data structure. */ | ||
110 | struct vbg_ioctl_hgcm_connect { | ||
111 | struct vbg_ioctl_hdr hdr; | ||
112 | union { | ||
113 | struct { | ||
114 | struct vmmdev_hgcm_service_location loc; | ||
115 | } in; | ||
116 | struct { | ||
117 | __u32 client_id; | ||
118 | } out; | ||
119 | } u; | ||
120 | }; | ||
121 | VMMDEV_ASSERT_SIZE(vbg_ioctl_hgcm_connect, 24 + 132); | ||
122 | |||
123 | #define VBG_IOCTL_HGCM_CONNECT \ | ||
124 | _IOWR('V', 4, struct vbg_ioctl_hgcm_connect) | ||
125 | |||
126 | |||
127 | /** VBG_IOCTL_HGCM_DISCONNECT data structure. */ | ||
128 | struct vbg_ioctl_hgcm_disconnect { | ||
129 | struct vbg_ioctl_hdr hdr; | ||
130 | union { | ||
131 | struct { | ||
132 | __u32 client_id; | ||
133 | } in; | ||
134 | } u; | ||
135 | }; | ||
136 | VMMDEV_ASSERT_SIZE(vbg_ioctl_hgcm_disconnect, 24 + 4); | ||
137 | |||
138 | #define VBG_IOCTL_HGCM_DISCONNECT \ | ||
139 | _IOWR('V', 5, struct vbg_ioctl_hgcm_disconnect) | ||
140 | |||
141 | |||
142 | /** VBG_IOCTL_HGCM_CALL data structure. */ | ||
143 | struct vbg_ioctl_hgcm_call { | ||
144 | /** The header. */ | ||
145 | struct vbg_ioctl_hdr hdr; | ||
146 | /** Input: The id of the caller. */ | ||
147 | __u32 client_id; | ||
148 | /** Input: Function number. */ | ||
149 | __u32 function; | ||
150 | /** | ||
151 | * Input: How long to wait (milliseconds) for completion before | ||
152 | * cancelling the call. Set to -1 to wait indefinitely. | ||
153 | */ | ||
154 | __u32 timeout_ms; | ||
155 | /** Interruptible flag, ignored for userspace calls. */ | ||
156 | __u8 interruptible; | ||
157 | /** Explicit padding, MBZ. */ | ||
158 | __u8 reserved; | ||
159 | /** | ||
160 | * Input: How many parameters follow this structure. | ||
161 | * | ||
162 | * The parameters are either HGCMFunctionParameter64 or 32, | ||
163 | * depending on whether we're receiving a 64-bit or 32-bit request. | ||
164 | * | ||
165 | * The current maximum is 61 parameters (given a 1KB max request size, | ||
166 | * and a 64-bit parameter size of 16 bytes). | ||
167 | */ | ||
168 | __u16 parm_count; | ||
169 | /* | ||
170 | * Parameters follow in form: | ||
171 | * struct hgcm_function_parameter<32|64> parms[parm_count] | ||
172 | */ | ||
173 | }; | ||
174 | VMMDEV_ASSERT_SIZE(vbg_ioctl_hgcm_call, 24 + 16); | ||
175 | |||
176 | #define VBG_IOCTL_HGCM_CALL_32(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 6, s) | ||
177 | #define VBG_IOCTL_HGCM_CALL_64(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 7, s) | ||
178 | #if __BITS_PER_LONG == 64 | ||
179 | #define VBG_IOCTL_HGCM_CALL(s) VBG_IOCTL_HGCM_CALL_64(s) | ||
180 | #else | ||
181 | #define VBG_IOCTL_HGCM_CALL(s) VBG_IOCTL_HGCM_CALL_32(s) | ||
182 | #endif | ||
183 | |||
184 | |||
185 | /** VBG_IOCTL_LOG data structure. */ | ||
186 | struct vbg_ioctl_log { | ||
187 | /** The header. */ | ||
188 | struct vbg_ioctl_hdr hdr; | ||
189 | union { | ||
190 | struct { | ||
191 | /** | ||
192 | * The log message; this may be zero-terminated. If it | ||
193 | * is not zero-terminated, then the length is determined | ||
194 | * from the input size. | ||
195 | */ | ||
196 | char msg[1]; | ||
197 | } in; | ||
198 | } u; | ||
199 | }; | ||
200 | |||
201 | #define VBG_IOCTL_LOG(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 9, s) | ||
202 | |||
203 | |||
204 | /** VBG_IOCTL_WAIT_FOR_EVENTS data structure. */ | ||
205 | struct vbg_ioctl_wait_for_events { | ||
206 | /** The header. */ | ||
207 | struct vbg_ioctl_hdr hdr; | ||
208 | union { | ||
209 | struct { | ||
210 | /** Timeout in milliseconds. */ | ||
211 | __u32 timeout_ms; | ||
212 | /** Events to wait for. */ | ||
213 | __u32 events; | ||
214 | } in; | ||
215 | struct { | ||
216 | /** Events that occurred. */ | ||
217 | __u32 events; | ||
218 | } out; | ||
219 | } u; | ||
220 | }; | ||
221 | VMMDEV_ASSERT_SIZE(vbg_ioctl_wait_for_events, 24 + 8); | ||
222 | |||
223 | #define VBG_IOCTL_WAIT_FOR_EVENTS \ | ||
224 | _IOWR('V', 10, struct vbg_ioctl_wait_for_events) | ||
225 | |||
226 | |||
227 | /* | ||
228 | * IOCTL to VBoxGuest to interrupt (cancel) any pending | ||
229 | * VBG_IOCTL_WAIT_FOR_EVENTS and return. | ||
230 | * | ||
231 | * Handled inside the vboxguest driver and not seen by the host at all. | ||
232 | * After calling this, VBG_IOCTL_WAIT_FOR_EVENTS should no longer be called in | ||
233 | * the same session. Any VBG_IOCTL_WAIT_FOR_EVENTS calls in the same session | ||
234 | * made after calling this will directly exit with -EINTR. | ||
235 | */ | ||
236 | #define VBG_IOCTL_INTERRUPT_ALL_WAIT_FOR_EVENTS \ | ||
237 | _IOWR('V', 11, struct vbg_ioctl_hdr) | ||
238 | |||
239 | |||
240 | /** VBG_IOCTL_CHANGE_FILTER_MASK data structure. */ | ||
241 | struct vbg_ioctl_change_filter { | ||
242 | /** The header. */ | ||
243 | struct vbg_ioctl_hdr hdr; | ||
244 | union { | ||
245 | struct { | ||
246 | /** Flags to set. */ | ||
247 | __u32 or_mask; | ||
248 | /** Flags to remove. */ | ||
249 | __u32 not_mask; | ||
250 | } in; | ||
251 | } u; | ||
252 | }; | ||
253 | VMMDEV_ASSERT_SIZE(vbg_ioctl_change_filter, 24 + 8); | ||
254 | |||
255 | /* IOCTL to VBoxGuest to control the event filter mask. */ | ||
256 | #define VBG_IOCTL_CHANGE_FILTER_MASK \ | ||
257 | _IOWR('V', 12, struct vbg_ioctl_change_filter) | ||
258 | |||
259 | |||
260 | /** VBG_IOCTL_CHANGE_GUEST_CAPABILITIES data structure. */ | ||
261 | struct vbg_ioctl_set_guest_caps { | ||
262 | /** The header. */ | ||
263 | struct vbg_ioctl_hdr hdr; | ||
264 | union { | ||
265 | struct { | ||
266 | /** Capabilities to set (VMMDEV_GUEST_SUPPORTS_XXX). */ | ||
267 | __u32 or_mask; | ||
268 | /** Capabilities to drop (VMMDEV_GUEST_SUPPORTS_XXX). */ | ||
269 | __u32 not_mask; | ||
270 | } in; | ||
271 | struct { | ||
272 | /** Capabilities held by the session after the call. */ | ||
273 | __u32 session_caps; | ||
274 | /** Capabilities for all the sessions after the call. */ | ||
275 | __u32 global_caps; | ||
276 | } out; | ||
277 | } u; | ||
278 | }; | ||
279 | VMMDEV_ASSERT_SIZE(vbg_ioctl_set_guest_caps, 24 + 8); | ||
280 | |||
281 | #define VBG_IOCTL_CHANGE_GUEST_CAPABILITIES \ | ||
282 | _IOWR('V', 14, struct vbg_ioctl_set_guest_caps) | ||
283 | |||
284 | |||
285 | /** VBG_IOCTL_CHECK_BALLOON data structure. */ | ||
286 | struct vbg_ioctl_check_balloon { | ||
287 | /** The header. */ | ||
288 | struct vbg_ioctl_hdr hdr; | ||
289 | union { | ||
290 | struct { | ||
291 | /** The size of the balloon in chunks of 1MB. */ | ||
292 | __u32 balloon_chunks; | ||
293 | /** | ||
294 | * false = handled in R0, no further action required. | ||
295 | * true = allocate balloon memory in R3. | ||
296 | */ | ||
297 | __u8 handle_in_r3; | ||
298 | /** Explicit padding, MBZ. */ | ||
299 | __u8 padding[3]; | ||
300 | } out; | ||
301 | } u; | ||
302 | }; | ||
303 | VMMDEV_ASSERT_SIZE(vbg_ioctl_check_balloon, 24 + 8); | ||
304 | |||
305 | /* | ||
306 | * IOCTL to check memory ballooning. | ||
307 | * | ||
308 | * The guest kernel module will ask the host for the current size of the | ||
309 | * balloon and adjust the size; or it will set handle_in_r3 = true, making R3 | ||
310 | * responsible for allocating memory and calling VBG_IOCTL_CHANGE_BALLOON. | ||
311 | */ | ||
312 | #define VBG_IOCTL_CHECK_BALLOON \ | ||
313 | _IOWR('V', 17, struct vbg_ioctl_check_balloon) | ||
314 | |||
315 | |||
316 | /** VBG_IOCTL_WRITE_CORE_DUMP data structure. */ | ||
317 | struct vbg_ioctl_write_coredump { | ||
318 | struct vbg_ioctl_hdr hdr; | ||
319 | union { | ||
320 | struct { | ||
321 | __u32 flags; /** Flags (reserved, MBZ). */ | ||
322 | } in; | ||
323 | } u; | ||
324 | }; | ||
325 | VMMDEV_ASSERT_SIZE(vbg_ioctl_write_coredump, 24 + 4); | ||
326 | |||
327 | #define VBG_IOCTL_WRITE_CORE_DUMP \ | ||
328 | _IOWR('V', 19, struct vbg_ioctl_write_coredump) | ||
329 | |||
330 | #endif | ||
diff --git a/scripts/mod/devicetable-offsets.c b/scripts/mod/devicetable-offsets.c index 9826b9a6543c..9fad6afe4c41 100644 --- a/scripts/mod/devicetable-offsets.c +++ b/scripts/mod/devicetable-offsets.c | |||
@@ -203,6 +203,10 @@ int main(void) | |||
203 | DEVID_FIELD(hda_device_id, rev_id); | 203 | DEVID_FIELD(hda_device_id, rev_id); |
204 | DEVID_FIELD(hda_device_id, api_version); | 204 | DEVID_FIELD(hda_device_id, api_version); |
205 | 205 | ||
206 | DEVID(sdw_device_id); | ||
207 | DEVID_FIELD(sdw_device_id, mfg_id); | ||
208 | DEVID_FIELD(sdw_device_id, part_id); | ||
209 | |||
206 | DEVID(fsl_mc_device_id); | 210 | DEVID(fsl_mc_device_id); |
207 | DEVID_FIELD(fsl_mc_device_id, vendor); | 211 | DEVID_FIELD(fsl_mc_device_id, vendor); |
208 | DEVID_FIELD(fsl_mc_device_id, obj_type); | 212 | DEVID_FIELD(fsl_mc_device_id, obj_type); |
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c index 6ef6e63f96fd..b9beeaa4695b 100644 --- a/scripts/mod/file2alias.c +++ b/scripts/mod/file2alias.c | |||
@@ -1289,6 +1289,21 @@ static int do_hda_entry(const char *filename, void *symval, char *alias) | |||
1289 | } | 1289 | } |
1290 | ADD_TO_DEVTABLE("hdaudio", hda_device_id, do_hda_entry); | 1290 | ADD_TO_DEVTABLE("hdaudio", hda_device_id, do_hda_entry); |
1291 | 1291 | ||
1292 | /* Looks like: sdw:mNpN */ | ||
1293 | static int do_sdw_entry(const char *filename, void *symval, char *alias) | ||
1294 | { | ||
1295 | DEF_FIELD(symval, sdw_device_id, mfg_id); | ||
1296 | DEF_FIELD(symval, sdw_device_id, part_id); | ||
1297 | |||
1298 | strcpy(alias, "sdw:"); | ||
1299 | ADD(alias, "m", mfg_id != 0, mfg_id); | ||
1300 | ADD(alias, "p", part_id != 0, part_id); | ||
1301 | |||
1302 | add_wildcard(alias); | ||
1303 | return 1; | ||
1304 | } | ||
1305 | ADD_TO_DEVTABLE("sdw", sdw_device_id, do_sdw_entry); | ||
1306 | |||
1292 | /* Looks like: fsl-mc:vNdN */ | 1307 | /* Looks like: fsl-mc:vNdN */ |
1293 | static int do_fsl_mc_entry(const char *filename, void *symval, | 1308 | static int do_fsl_mc_entry(const char *filename, void *symval, |
1294 | char *alias) | 1309 | char *alias) |
diff --git a/security/Kconfig b/security/Kconfig index b0cb9a5f9448..3709db95027f 100644 --- a/security/Kconfig +++ b/security/Kconfig | |||
@@ -154,6 +154,7 @@ config HARDENED_USERCOPY | |||
154 | bool "Harden memory copies between kernel and userspace" | 154 | bool "Harden memory copies between kernel and userspace" |
155 | depends on HAVE_HARDENED_USERCOPY_ALLOCATOR | 155 | depends on HAVE_HARDENED_USERCOPY_ALLOCATOR |
156 | select BUG | 156 | select BUG |
157 | imply STRICT_DEVMEM | ||
157 | help | 158 | help |
158 | This option checks for obviously wrong memory regions when | 159 | This option checks for obviously wrong memory regions when |
159 | copying memory to/from the kernel (via copy_to_user() and | 160 | copying memory to/from the kernel (via copy_to_user() and |
diff --git a/tools/hv/Makefile b/tools/hv/Makefile index 31503819454d..1139d71fa0cf 100644 --- a/tools/hv/Makefile +++ b/tools/hv/Makefile | |||
@@ -7,9 +7,30 @@ CFLAGS = $(WARNINGS) -g $(shell getconf LFS_CFLAGS) | |||
7 | 7 | ||
8 | CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi -I../../include | 8 | CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi -I../../include |
9 | 9 | ||
10 | all: hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon | 10 | sbindir ?= /usr/sbin |
11 | libexecdir ?= /usr/libexec | ||
12 | sharedstatedir ?= /var/lib | ||
13 | |||
14 | ALL_PROGRAMS := hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon | ||
15 | |||
16 | ALL_SCRIPTS := hv_get_dhcp_info.sh hv_get_dns_info.sh hv_set_ifconfig.sh | ||
17 | |||
18 | all: $(ALL_PROGRAMS) | ||
19 | |||
11 | %: %.c | 20 | %: %.c |
12 | $(CC) $(CFLAGS) -o $@ $^ | 21 | $(CC) $(CFLAGS) -o $@ $^ |
13 | 22 | ||
14 | clean: | 23 | clean: |
15 | $(RM) hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon | 24 | $(RM) hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon |
25 | |||
26 | install: all | ||
27 | install -d -m 755 $(DESTDIR)$(sbindir); \ | ||
28 | install -d -m 755 $(DESTDIR)$(libexecdir)/hypervkvpd; \ | ||
29 | install -d -m 755 $(DESTDIR)$(sharedstatedir); \ | ||
30 | for program in $(ALL_PROGRAMS); do \ | ||
31 | install $$program -m 755 $(DESTDIR)$(sbindir); \ | ||
32 | done; \ | ||
33 | install -m 755 lsvmbus $(DESTDIR)$(sbindir); \ | ||
34 | for script in $(ALL_SCRIPTS); do \ | ||
35 | install $$script -m 755 $(DESTDIR)$(libexecdir)/hypervkvpd/$${script%.sh}; \ | ||
36 | done | ||