author     Linus Torvalds <torvalds@linux-foundation.org>   2019-09-23 22:16:01 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>   2019-09-23 22:16:01 -0400
commit     299d14d4c31aff3b37a03894e012edf8421676ee (patch)
tree       cb1ca4a273ff202818461f339bf947acb11924a3
parent     e94f8ccde4710f9a3e51dd3bc6134c96e33f29b3 (diff)
parent     c5048a73b4770304699cb15e3ffcb97acab685f7 (diff)
Merge tag 'pci-v5.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Consolidate _HPP/_HPX stuff in pci-acpi.c and simplify it
(Krzysztof Wilczynski)
- Fix incorrect PCIe device types and remove dev->has_secondary_link
to simplify code that deals with upstream/downstream ports (Mika
Westerberg)
- After suspend, restore Resizable BAR size bits correctly for 1MB
BARs (Sumit Saxena)
- Enable PCI_MSI_IRQ_DOMAIN support for RISC-V (Wesley Terpstra)
Virtualization:
- Add ACS quirks for iProc PAXB (Abhinav Ratna), Amazon Annapurna
Labs (Ali Saidi)
- Move sysfs SR-IOV functions to iov.c (Kelsey Skunberg)
- Remove group write permissions from sysfs sriov_numvfs,
sriov_drivers_autoprobe (Kelsey Skunberg)
Hotplug:
- Simplify pciehp indicator control (Denis Efremov)
Peer-to-peer DMA:
- Allow P2P DMA between root ports for whitelisted bridges (Logan
Gunthorpe)
- Whitelist some Intel host bridges for P2P DMA (Logan Gunthorpe)
- DMA map P2P DMA requests that traverse host bridge (Logan
Gunthorpe)
Amazon Annapurna Labs host bridge driver:
- Add DT binding and controller driver (Jonathan Chocron)
Hyper-V host bridge driver:
- Fix hv_pci_dev->pci_slot use-after-free (Dexuan Cui)
- Fix PCI domain number collisions (Haiyang Zhang)
- Use instance ID bytes 4 & 5 as PCI domain numbers (Haiyang Zhang)
- Fix build errors on non-SYSFS config (Randy Dunlap)
i.MX6 host bridge driver:
- Limit DBI register length (Stefan Agner)
Intel VMD host bridge driver:
- Fix config addressing issues (Jon Derrick)
Layerscape host bridge driver:
- Add bar_fixed_64bit property to endpoint driver (Xiaowei Bao)
- Add CONFIG_PCI_LAYERSCAPE_EP to build EP/RC drivers separately
(Xiaowei Bao)
Mediatek host bridge driver:
- Add MT7629 controller support (Jianjun Wang)
Mobiveil host bridge driver:
- Fix CPU base address setup (Hou Zhiqiang)
- Make "num-lanes" property optional (Hou Zhiqiang)
Tegra host bridge driver:
- Fix OF node reference leak (Nishka Dasgupta)
- Disable MSI for root ports to work around design problem (Vidya
Sagar)
- Add Tegra194 DT binding and controller support (Vidya Sagar)
- Add support for sideband pins and slot regulators (Vidya Sagar)
- Add PIPE2UPHY support (Vidya Sagar)
Misc:
- Remove unused pci_block_cfg_access() et al (Kelsey Skunberg)
- Unexport pci_bus_get(), etc (Kelsey Skunberg)
- Hide PM, VC, link speed, ATS, ECRC, PTM constants and interfaces in
the PCI core (Kelsey Skunberg)
- Clean up sysfs DEVICE_ATTR() usage (Kelsey Skunberg)
- Mark expected switch fall-through (Gustavo A. R. Silva)
- Propagate errors for optional regulators and PHYs (Thierry Reding)
- Fix kernel command line resource_alignment parameter issues (Logan
Gunthorpe)"
* tag 'pci-v5.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (112 commits)
PCI: Add pci_irq_vector() and other stubs when !CONFIG_PCI
arm64: tegra: Add PCIe slot supply information in p2972-0000 platform
arm64: tegra: Add configuration for PCIe C5 sideband signals
PCI: tegra: Add support to enable slot regulators
PCI: tegra: Add support to configure sideband pins
PCI: vmd: Fix shadow offsets to reflect spec changes
PCI: vmd: Fix config addressing when using bus offsets
PCI: dwc: Add validation that PCIe core is set to correct mode
PCI: dwc: al: Add Amazon Annapurna Labs PCIe controller driver
dt-bindings: PCI: Add Amazon's Annapurna Labs PCIe host bridge binding
PCI: Add quirk to disable MSI-X support for Amazon's Annapurna Labs Root Port
PCI/VPD: Prevent VPD access for Amazon's Annapurna Labs Root Port
PCI: Add ACS quirk for Amazon Annapurna Labs root ports
PCI: Add Amazon's Annapurna Labs vendor ID
MAINTAINERS: Add PCI native host/endpoint controllers designated reviewer
PCI: hv: Use bytes 4 and 5 from instance ID as the PCI domain numbers
dt-bindings: PCI: tegra: Add PCIe slot supplies regulator entries
dt-bindings: PCI: tegra: Add sideband pins configuration entries
PCI: tegra: Add Tegra194 PCIe support
PCI: Get rid of dev->has_secondary_link flag
...
99 files changed, 4201 insertions, 1186 deletions
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index d3814789304f..254d8a369f32 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt | |||
@@ -3465,12 +3465,13 @@ | |||
3465 | specify the device is described above. | 3465 | specify the device is described above. |
3466 | If <order of align> is not specified, | 3466 | If <order of align> is not specified, |
3467 | PAGE_SIZE is used as alignment. | 3467 | PAGE_SIZE is used as alignment. |
3468 | PCI-PCI bridge can be specified, if resource | 3468 | A PCI-PCI bridge can be specified if resource |
3469 | windows need to be expanded. | 3469 | windows need to be expanded. |
3470 | To specify the alignment for several | 3470 | To specify the alignment for several |
3471 | instances of a device, the PCI vendor, | 3471 | instances of a device, the PCI vendor, |
3472 | device, subvendor, and subdevice may be | 3472 | device, subvendor, and subdevice may be |
3473 | specified, e.g., 4096@pci:8086:9c22:103c:198f | 3473 | specified, e.g., 12@pci:8086:9c22:103c:198f |
3474 | for 4096-byte alignment. | ||
3474 | ecrc= Enable/disable PCIe ECRC (transaction layer | 3475 | ecrc= Enable/disable PCIe ECRC (transaction layer |
3475 | end-to-end CRC checking). | 3476 | end-to-end CRC checking). |
3476 | bios: Use BIOS/firmware settings. This is the | 3477 | bios: Use BIOS/firmware settings. This is the |
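The hunk above corrects the resource_alignment example to pass an alignment order (2^12 = 4096 bytes) rather than a raw byte count, matching the <order of align> field described earlier in the same entry. As a sketch of how the corrected form is used on a kernel command line (the vendor/device IDs are simply the ones from the documentation's own example):

    pci=resource_alignment=12@pci:8086:9c22:103c:198f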
diff --git a/Documentation/devicetree/bindings/pci/designware-pcie.txt b/Documentation/devicetree/bindings/pci/designware-pcie.txt index 5561a1c060d0..78494c4050f7 100644 --- a/Documentation/devicetree/bindings/pci/designware-pcie.txt +++ b/Documentation/devicetree/bindings/pci/designware-pcie.txt | |||
@@ -11,7 +11,6 @@ Required properties: | |||
11 | the ATU address space. | 11 | the ATU address space. |
12 | (The old way of getting the configuration address space from "ranges" | 12 | (The old way of getting the configuration address space from "ranges" |
13 | is deprecated and should be avoided.) | 13 | is deprecated and should be avoided.) |
14 | - num-lanes: number of lanes to use | ||
15 | RC mode: | 14 | RC mode: |
16 | - #address-cells: set to <3> | 15 | - #address-cells: set to <3> |
17 | - #size-cells: set to <2> | 16 | - #size-cells: set to <2> |
@@ -34,6 +33,11 @@ Optional properties: | |||
34 | - clock-names: Must include the following entries: | 33 | - clock-names: Must include the following entries: |
35 | - "pcie" | 34 | - "pcie" |
36 | - "pcie_bus" | 35 | - "pcie_bus" |
36 | - snps,enable-cdm-check: This is a boolean property and if present enables | ||
37 | automatic checking of CDM (Configuration Dependent Module) registers | ||
38 | for data corruption. CDM registers include standard PCIe configuration | ||
39 | space registers, Port Logic registers, DMA and iATU (internal Address | ||
40 | Translation Unit) registers. | ||
37 | RC mode: | 41 | RC mode: |
38 | - num-viewport: number of view ports configured in hardware. If a platform | 42 | - num-viewport: number of view ports configured in hardware. If a platform |
39 | does not specify it, the driver assumes 2. | 43 | does not specify it, the driver assumes 2. |
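The snps,enable-cdm-check property documented above is a plain boolean flag on the controller node. A minimal, hypothetical fragment showing where it would sit (the node name, addresses and other properties here are illustrative only, not taken from the patch):

	/* Hypothetical example; only snps,enable-cdm-check comes from the binding change above */
	pcie: pcie@40000000 {
		compatible = "snps,dw-pcie";
		reg = <0x40000000 0x1000>, <0x40100000 0x2000>;
		reg-names = "dbi", "config";
		device_type = "pci";
		snps,enable-cdm-check;
	};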
diff --git a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt index a7f5f5afa0e6..de4b2baf91e8 100644 --- a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt +++ b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt | |||
@@ -50,7 +50,7 @@ Additional required properties for imx7d-pcie and imx8mq-pcie: | |||
50 | - power-domains: Must be set to a phandle pointing to PCIE_PHY power domain | 50 | - power-domains: Must be set to a phandle pointing to PCIE_PHY power domain |
51 | - resets: Must contain phandles to PCIe-related reset lines exposed by SRC | 51 | - resets: Must contain phandles to PCIe-related reset lines exposed by SRC |
52 | IP block | 52 | IP block |
53 | - reset-names: Must contain the following entires: | 53 | - reset-names: Must contain the following entries: |
54 | - "pciephy" | 54 | - "pciephy" |
55 | - "apps" | 55 | - "apps" |
56 | - "turnoff" | 56 | - "turnoff" |
diff --git a/Documentation/devicetree/bindings/pci/mediatek-pcie.txt b/Documentation/devicetree/bindings/pci/mediatek-pcie.txt index 92437a366e5f..7468d666763a 100644 --- a/Documentation/devicetree/bindings/pci/mediatek-pcie.txt +++ b/Documentation/devicetree/bindings/pci/mediatek-pcie.txt | |||
@@ -6,6 +6,7 @@ Required properties: | |||
6 | "mediatek,mt2712-pcie" | 6 | "mediatek,mt2712-pcie" |
7 | "mediatek,mt7622-pcie" | 7 | "mediatek,mt7622-pcie" |
8 | "mediatek,mt7623-pcie" | 8 | "mediatek,mt7623-pcie" |
9 | "mediatek,mt7629-pcie" | ||
9 | - device_type: Must be "pci" | 10 | - device_type: Must be "pci" |
10 | - reg: Base addresses and lengths of the PCIe subsys and root ports. | 11 | - reg: Base addresses and lengths of the PCIe subsys and root ports. |
11 | - reg-names: Names of the above areas to use during resource lookup. | 12 | - reg-names: Names of the above areas to use during resource lookup. |
diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt new file mode 100644 index 000000000000..b739f92da58e --- /dev/null +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt | |||
@@ -0,0 +1,171 @@ | |||
1 | NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based) | ||
2 | |||
3 | This PCIe host controller is based on the Synopsis Designware PCIe IP | ||
4 | and thus inherits all the common properties defined in designware-pcie.txt. | ||
5 | |||
6 | Required properties: | ||
7 | - compatible: For Tegra19x, must contain "nvidia,tegra194-pcie". | ||
8 | - device_type: Must be "pci" | ||
9 | - power-domains: A phandle to the node that controls power to the respective | ||
10 | PCIe controller and a specifier name for the PCIe controller. Following are | ||
11 | the specifiers for the different PCIe controllers | ||
12 | TEGRA194_POWER_DOMAIN_PCIEX8B: C0 | ||
13 | TEGRA194_POWER_DOMAIN_PCIEX1A: C1 | ||
14 | TEGRA194_POWER_DOMAIN_PCIEX1A: C2 | ||
15 | TEGRA194_POWER_DOMAIN_PCIEX1A: C3 | ||
16 | TEGRA194_POWER_DOMAIN_PCIEX4A: C4 | ||
17 | TEGRA194_POWER_DOMAIN_PCIEX8A: C5 | ||
18 | these specifiers are defined in | ||
19 | "include/dt-bindings/power/tegra194-powergate.h" file. | ||
20 | - reg: A list of physical base address and length pairs for each set of | ||
21 | controller registers. Must contain an entry for each entry in the reg-names | ||
22 | property. | ||
23 | - reg-names: Must include the following entries: | ||
24 | "appl": Controller's application logic registers | ||
25 | "config": As per the definition in designware-pcie.txt | ||
26 | "atu_dma": iATU and DMA registers. This is where the iATU (internal Address | ||
27 | Translation Unit) registers of the PCIe core are made available | ||
28 | for SW access. | ||
29 | "dbi": The aperture where root port's own configuration registers are | ||
30 | available | ||
31 | - interrupts: A list of interrupt outputs of the controller. Must contain an | ||
32 | entry for each entry in the interrupt-names property. | ||
33 | - interrupt-names: Must include the following entries: | ||
34 | "intr": The Tegra interrupt that is asserted for controller interrupts | ||
35 | "msi": The Tegra interrupt that is asserted when an MSI is received | ||
36 | - bus-range: Range of bus numbers associated with this controller | ||
37 | - #address-cells: Address representation for root ports (must be 3) | ||
38 | - cell 0 specifies the bus and device numbers of the root port: | ||
39 | [23:16]: bus number | ||
40 | [15:11]: device number | ||
41 | - cell 1 denotes the upper 32 address bits and should be 0 | ||
42 | - cell 2 contains the lower 32 address bits and is used to translate to the | ||
43 | CPU address space | ||
44 | - #size-cells: Size representation for root ports (must be 2) | ||
45 | - ranges: Describes the translation of addresses for root ports and standard | ||
46 | PCI regions. The entries must be 7 cells each, where the first three cells | ||
47 | correspond to the address as described for the #address-cells property | ||
48 | above, the fourth and fifth cells are for the physical CPU address to | ||
49 | translate to and the sixth and seventh cells are as described for the | ||
50 | #size-cells property above. | ||
51 | - Entries setup the mapping for the standard I/O, memory and | ||
52 | prefetchable PCI regions. The first cell determines the type of region | ||
53 | that is setup: | ||
54 | - 0x81000000: I/O memory region | ||
55 | - 0x82000000: non-prefetchable memory region | ||
56 | - 0xc2000000: prefetchable memory region | ||
57 | Please refer to the standard PCI bus binding document for a more detailed | ||
58 | explanation. | ||
59 | - #interrupt-cells: Size representation for interrupts (must be 1) | ||
60 | - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties | ||
61 | Please refer to the standard PCI bus binding document for a more detailed | ||
62 | explanation. | ||
63 | - clocks: Must contain an entry for each entry in clock-names. | ||
64 | See ../clocks/clock-bindings.txt for details. | ||
65 | - clock-names: Must include the following entries: | ||
66 | - core | ||
67 | - resets: Must contain an entry for each entry in reset-names. | ||
68 | See ../reset/reset.txt for details. | ||
69 | - reset-names: Must include the following entries: | ||
70 | - apb | ||
71 | - core | ||
72 | - phys: Must contain a phandle to P2U PHY for each entry in phy-names. | ||
73 | - phy-names: Must include an entry for each active lane. | ||
74 | "p2u-N": where N ranges from 0 to one less than the total number of lanes | ||
75 | - nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed | ||
76 | by controller-id. Following are the controller ids for each controller. | ||
77 | 0: C0 | ||
78 | 1: C1 | ||
79 | 2: C2 | ||
80 | 3: C3 | ||
81 | 4: C4 | ||
82 | 5: C5 | ||
83 | - vddio-pex-ctl-supply: Regulator supply for PCIe side band signals | ||
84 | |||
85 | Optional properties: | ||
86 | - pinctrl-names: A list of pinctrl state names. | ||
87 | It is mandatory for C5 controller and optional for other controllers. | ||
88 | - "default": Configures PCIe I/O for proper operation. | ||
89 | - pinctrl-0: phandle for the 'default' state of pin configuration. | ||
90 | It is mandatory for C5 controller and optional for other controllers. | ||
91 | - supports-clkreq: Refer to Documentation/devicetree/bindings/pci/pci.txt | ||
92 | - nvidia,update-fc-fixup: This is a boolean property and needs to be present to | ||
93 | improve performance when a platform is designed in such a way that it | ||
94 | satisfies at least one of the following conditions thereby enabling root | ||
95 | port to exchange optimum number of FC (Flow Control) credits with | ||
96 | downstream devices | ||
97 | 1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS) | ||
98 | 2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and | ||
99 | a) speed is Gen-2 and MPS is 256B | ||
100 | b) speed is >= Gen-3 with any MPS | ||
101 | - nvidia,aspm-cmrt-us: Common Mode Restore Time for proper operation of ASPM | ||
102 | to be specified in microseconds | ||
103 | - nvidia,aspm-pwr-on-t-us: Power On time for proper operation of ASPM to be | ||
104 | specified in microseconds | ||
105 | - nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be | ||
106 | specified in microseconds | ||
107 | - vpcie3v3-supply: A phandle to the regulator node that supplies 3.3V to the slot | ||
108 | if the platform has one such slot. (Ex:- x16 slot owned by C5 controller | ||
109 | in p2972-0000 platform). | ||
110 | - vpcie12v-supply: A phandle to the regulator node that supplies 12V to the slot | ||
111 | if the platform has one such slot. (Ex:- x16 slot owned by C5 controller | ||
112 | in p2972-0000 platform). | ||
113 | |||
114 | Examples: | ||
115 | ========= | ||
116 | |||
117 | Tegra194: | ||
118 | -------- | ||
119 | |||
120 | pcie@14180000 { | ||
121 | compatible = "nvidia,tegra194-pcie", "snps,dw-pcie"; | ||
122 | power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>; | ||
123 | reg = <0x00 0x14180000 0x0 0x00020000 /* appl registers (128K) */ | ||
124 | 0x00 0x38000000 0x0 0x00040000 /* configuration space (256K) */ | ||
125 | 0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K) */ | ||
126 | reg-names = "appl", "config", "atu_dma"; | ||
127 | |||
128 | #address-cells = <3>; | ||
129 | #size-cells = <2>; | ||
130 | device_type = "pci"; | ||
131 | num-lanes = <8>; | ||
132 | linux,pci-domain = <0>; | ||
133 | |||
134 | pinctrl-names = "default"; | ||
135 | pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>; | ||
136 | |||
137 | clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>; | ||
138 | clock-names = "core"; | ||
139 | |||
140 | resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>, | ||
141 | <&bpmp TEGRA194_RESET_PEX0_CORE_0>; | ||
142 | reset-names = "apb", "core"; | ||
143 | |||
144 | interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */ | ||
145 | <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */ | ||
146 | interrupt-names = "intr", "msi"; | ||
147 | |||
148 | #interrupt-cells = <1>; | ||
149 | interrupt-map-mask = <0 0 0 0>; | ||
150 | interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>; | ||
151 | |||
152 | nvidia,bpmp = <&bpmp 0>; | ||
153 | |||
154 | supports-clkreq; | ||
155 | nvidia,aspm-cmrt-us = <60>; | ||
156 | nvidia,aspm-pwr-on-t-us = <20>; | ||
157 | nvidia,aspm-l0s-entrance-latency-us = <3>; | ||
158 | |||
159 | bus-range = <0x0 0xff>; | ||
160 | ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000 /* downstream I/O (1MB) */ | ||
161 | 0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000 /* non-prefetchable memory (30MB) */ | ||
162 | 0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>; /* prefetchable memory (16GB) */ | ||
163 | |||
164 | vddio-pex-ctl-supply = <&vdd_1v8ao>; | ||
165 | vpcie3v3-supply = <&vdd_3v3_pcie>; | ||
166 | vpcie12v-supply = <&vdd_12v_pcie>; | ||
167 | |||
168 | phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>, | ||
169 | <&p2u_hsio_5>; | ||
170 | phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3"; | ||
171 | }; | ||
diff --git a/Documentation/devicetree/bindings/pci/pci-armada8k.txt b/Documentation/devicetree/bindings/pci/pci-armada8k.txt index 8324a4ee6f06..7a813d0e6d63 100644 --- a/Documentation/devicetree/bindings/pci/pci-armada8k.txt +++ b/Documentation/devicetree/bindings/pci/pci-armada8k.txt | |||
@@ -11,7 +11,7 @@ Required properties: | |||
11 | - reg-names: | 11 | - reg-names: |
12 | - "ctrl" for the control register region | 12 | - "ctrl" for the control register region |
13 | - "config" for the config space region | 13 | - "config" for the config space region |
14 | - interrupts: Interrupt specifier for the PCIe controler | 14 | - interrupts: Interrupt specifier for the PCIe controller |
15 | - clocks: reference to the PCIe controller clocks | 15 | - clocks: reference to the PCIe controller clocks |
16 | - clock-names: mandatory if there is a second clock, in this case the | 16 | - clock-names: mandatory if there is a second clock, in this case the |
17 | name must be "core" for the first clock and "reg" for the second | 17 | name must be "core" for the first clock and "reg" for the second |
diff --git a/Documentation/devicetree/bindings/pci/pci.txt b/Documentation/devicetree/bindings/pci/pci.txt index 2a5d91024059..29bcbd88f457 100644 --- a/Documentation/devicetree/bindings/pci/pci.txt +++ b/Documentation/devicetree/bindings/pci/pci.txt | |||
@@ -27,6 +27,11 @@ driver implementation may support the following properties: | |||
27 | - reset-gpios: | 27 | - reset-gpios: |
28 | If present this property specifies PERST# GPIO. Host drivers can parse the | 28 | If present this property specifies PERST# GPIO. Host drivers can parse the |
29 | GPIO and apply fundamental reset to endpoints. | 29 | GPIO and apply fundamental reset to endpoints. |
30 | - supports-clkreq: | ||
31 | If present this property specifies that CLKREQ signal routing exists from | ||
32 | root port to downstream device and host bridge drivers can do programming | ||
33 | which depends on CLKREQ signal existence. For example, programming root port | ||
34 | not to advertise ASPM L1 Sub-States support if there is no CLKREQ signal. | ||
30 | 35 | ||
31 | PCI-PCI Bridge properties | 36 | PCI-PCI Bridge properties |
32 | ------------------------- | 37 | ------------------------- |
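The supports-clkreq property added above is also a boolean: it is set on a root port or host bridge node only when CLKREQ# routing to the downstream device actually exists. The Tegra194 example earlier in this merge shows it in a full controller node; as a bare, hypothetical overlay-style fragment (the pcie_c5 label is illustrative only):

	&pcie_c5 {
		supports-clkreq;
	};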
diff --git a/Documentation/devicetree/bindings/pci/pcie-al.txt b/Documentation/devicetree/bindings/pci/pcie-al.txt new file mode 100644 index 000000000000..557a5089229d --- /dev/null +++ b/Documentation/devicetree/bindings/pci/pcie-al.txt | |||
@@ -0,0 +1,46 @@ | |||
1 | * Amazon Annapurna Labs PCIe host bridge | ||
2 | |||
3 | Amazon's Annapurna Labs PCIe Host Controller is based on the Synopsys DesignWare | ||
4 | PCI core. It inherits common properties defined in | ||
5 | Documentation/devicetree/bindings/pci/designware-pcie.txt. | ||
6 | |||
7 | Properties of the host controller node that differ from it are: | ||
8 | |||
9 | - compatible: | ||
10 | Usage: required | ||
11 | Value type: <stringlist> | ||
12 | Definition: Value should contain | ||
13 | - "amazon,al-alpine-v2-pcie" for alpine_v2 | ||
14 | - "amazon,al-alpine-v3-pcie" for alpine_v3 | ||
15 | |||
16 | - reg: | ||
17 | Usage: required | ||
18 | Value type: <prop-encoded-array> | ||
19 | Definition: Register ranges as listed in the reg-names property | ||
20 | |||
21 | - reg-names: | ||
22 | Usage: required | ||
23 | Value type: <stringlist> | ||
24 | Definition: Must include the following entries | ||
25 | - "config" PCIe ECAM space | ||
26 | - "controller" AL proprietary registers | ||
27 | - "dbi" Designware PCIe registers | ||
28 | |||
29 | Example: | ||
30 | |||
31 | pcie-external0: pcie@fb600000 { | ||
32 | compatible = "amazon,al-alpine-v3-pcie"; | ||
33 | reg = <0x0 0xfb600000 0x0 0x00100000 | ||
34 | 0x0 0xfd800000 0x0 0x00010000 | ||
35 | 0x0 0xfd810000 0x0 0x00001000>; | ||
36 | reg-names = "config", "controller", "dbi"; | ||
37 | bus-range = <0 255>; | ||
38 | device_type = "pci"; | ||
39 | #address-cells = <3>; | ||
40 | #size-cells = <2>; | ||
41 | #interrupt-cells = <1>; | ||
42 | interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>; | ||
43 | interrupt-map-mask = <0x00 0 0 7>; | ||
44 | interrupt-map = <0x0000 0 0 1 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>; /* INTa */ | ||
45 | ranges = <0x02000000 0x0 0xc0010000 0x0 0xc0010000 0x0 0x07ff0000>; | ||
46 | }; | ||
diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt new file mode 100644 index 000000000000..d23ff90baad5 --- /dev/null +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt | |||
@@ -0,0 +1,28 @@ | |||
1 | NVIDIA Tegra194 P2U binding | ||
2 | |||
3 | Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High | ||
4 | Speed) each interfacing with 12 and 8 P2U instances respectively. | ||
5 | A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE | ||
6 | interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe | ||
7 | lane. | ||
8 | |||
9 | Required properties: | ||
10 | - compatible: For Tegra19x, must contain "nvidia,tegra194-p2u". | ||
11 | - reg: Should be the physical address space and length of respective each P2U | ||
12 | instance. | ||
13 | - reg-names: Must include the entry "ctl". | ||
14 | |||
15 | Required properties for PHY port node: | ||
16 | - #phy-cells: Defined by generic PHY bindings. Must be 0. | ||
17 | |||
18 | Refer to phy/phy-bindings.txt for the generic PHY binding properties. | ||
19 | |||
20 | Example: | ||
21 | |||
22 | p2u_hsio_0: phy@3e10000 { | ||
23 | compatible = "nvidia,tegra194-p2u"; | ||
24 | reg = <0x03e10000 0x10000>; | ||
25 | reg-names = "ctl"; | ||
26 | |||
27 | #phy-cells = <0>; | ||
28 | }; | ||
diff --git a/MAINTAINERS b/MAINTAINERS index 7ea0c11b8e8d..6ee4bb5f6d30 100644 --- a/MAINTAINERS +++ b/MAINTAINERS | |||
@@ -12580,16 +12580,18 @@ F: arch/x86/kernel/early-quirks.c | |||
12580 | 12580 | ||
12581 | PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS | 12581 | PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS |
12582 | M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> | 12582 | M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> |
12583 | R: Andrew Murray <andrew.murray@arm.com> | ||
12583 | L: linux-pci@vger.kernel.org | 12584 | L: linux-pci@vger.kernel.org |
12584 | Q: http://patchwork.ozlabs.org/project/linux-pci/list/ | 12585 | Q: http://patchwork.ozlabs.org/project/linux-pci/list/ |
12585 | T: git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/ | 12586 | T: git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/ |
12586 | S: Supported | 12587 | S: Supported |
12587 | F: drivers/pci/controller/ | 12588 | F: drivers/pci/controller/ |
12588 | 12589 | ||
12589 | PCIE DRIVER FOR ANNAPURNA LABS | 12590 | PCIE DRIVER FOR AMAZON ANNAPURNA LABS |
12590 | M: Jonathan Chocron <jonnyc@amazon.com> | 12591 | M: Jonathan Chocron <jonnyc@amazon.com> |
12591 | L: linux-pci@vger.kernel.org | 12592 | L: linux-pci@vger.kernel.org |
12592 | S: Maintained | 12593 | S: Maintained |
12594 | F: Documentation/devicetree/bindings/pci/pcie-al.txt | ||
12593 | F: drivers/pci/controller/dwc/pcie-al.c | 12595 | F: drivers/pci/controller/dwc/pcie-al.c |
12594 | 12596 | ||
12595 | PCIE DRIVER FOR AMLOGIC MESON | 12597 | PCIE DRIVER FOR AMLOGIC MESON |
diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi index 464df4290ffc..2f6977ada447 100644 --- a/arch/arm/boot/dts/ls1021a.dtsi +++ b/arch/arm/boot/dts/ls1021a.dtsi | |||
@@ -874,7 +874,6 @@ | |||
874 | #address-cells = <3>; | 874 | #address-cells = <3>; |
875 | #size-cells = <2>; | 875 | #size-cells = <2>; |
876 | device_type = "pci"; | 876 | device_type = "pci"; |
877 | num-lanes = <4>; | ||
878 | num-viewport = <6>; | 877 | num-viewport = <6>; |
879 | bus-range = <0x0 0xff>; | 878 | bus-range = <0x0 0xff>; |
880 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ | 879 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -899,7 +898,6 @@ | |||
899 | #address-cells = <3>; | 898 | #address-cells = <3>; |
900 | #size-cells = <2>; | 899 | #size-cells = <2>; |
901 | device_type = "pci"; | 900 | device_type = "pci"; |
902 | num-lanes = <4>; | ||
903 | num-viewport = <6>; | 901 | num-viewport = <6>; |
904 | bus-range = <0x0 0xff>; | 902 | bus-range = <0x0 0xff>; |
905 | ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */ | 903 | ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */ |
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi index 124a7e2d8442..337919366dc8 100644 --- a/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi +++ b/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi | |||
@@ -486,7 +486,6 @@ | |||
486 | #address-cells = <3>; | 486 | #address-cells = <3>; |
487 | #size-cells = <2>; | 487 | #size-cells = <2>; |
488 | device_type = "pci"; | 488 | device_type = "pci"; |
489 | num-lanes = <4>; | ||
490 | num-viewport = <2>; | 489 | num-viewport = <2>; |
491 | bus-range = <0x0 0xff>; | 490 | bus-range = <0x0 0xff>; |
492 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ | 491 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ |
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi index 71d9ed9ff985..c084c7a4b6a6 100644 --- a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi +++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi | |||
@@ -677,7 +677,6 @@ | |||
677 | #size-cells = <2>; | 677 | #size-cells = <2>; |
678 | device_type = "pci"; | 678 | device_type = "pci"; |
679 | dma-coherent; | 679 | dma-coherent; |
680 | num-lanes = <4>; | ||
681 | num-viewport = <6>; | 680 | num-viewport = <6>; |
682 | bus-range = <0x0 0xff>; | 681 | bus-range = <0x0 0xff>; |
683 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ | 682 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -704,7 +703,6 @@ | |||
704 | #size-cells = <2>; | 703 | #size-cells = <2>; |
705 | device_type = "pci"; | 704 | device_type = "pci"; |
706 | dma-coherent; | 705 | dma-coherent; |
707 | num-lanes = <2>; | ||
708 | num-viewport = <6>; | 706 | num-viewport = <6>; |
709 | bus-range = <0x0 0xff>; | 707 | bus-range = <0x0 0xff>; |
710 | ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */ | 708 | ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -731,7 +729,6 @@ | |||
731 | #size-cells = <2>; | 729 | #size-cells = <2>; |
732 | device_type = "pci"; | 730 | device_type = "pci"; |
733 | dma-coherent; | 731 | dma-coherent; |
734 | num-lanes = <2>; | ||
735 | num-viewport = <6>; | 732 | num-viewport = <6>; |
736 | bus-range = <0x0 0xff>; | 733 | bus-range = <0x0 0xff>; |
737 | ranges = <0x81000000 0x0 0x00000000 0x50 0x00010000 0x0 0x00010000 /* downstream I/O */ | 734 | ranges = <0x81000000 0x0 0x00000000 0x50 0x00010000 0x0 0x00010000 /* downstream I/O */ |
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi index b0ef08b090dd..d4c1da3d4bde 100644 --- a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi +++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi | |||
@@ -649,7 +649,6 @@ | |||
649 | #size-cells = <2>; | 649 | #size-cells = <2>; |
650 | device_type = "pci"; | 650 | device_type = "pci"; |
651 | dma-coherent; | 651 | dma-coherent; |
652 | num-lanes = <4>; | ||
653 | num-viewport = <8>; | 652 | num-viewport = <8>; |
654 | bus-range = <0x0 0xff>; | 653 | bus-range = <0x0 0xff>; |
655 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ | 654 | ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -671,7 +670,6 @@ | |||
671 | reg-names = "regs", "addr_space"; | 670 | reg-names = "regs", "addr_space"; |
672 | num-ib-windows = <6>; | 671 | num-ib-windows = <6>; |
673 | num-ob-windows = <8>; | 672 | num-ob-windows = <8>; |
674 | num-lanes = <2>; | ||
675 | status = "disabled"; | 673 | status = "disabled"; |
676 | }; | 674 | }; |
677 | 675 | ||
@@ -687,7 +685,6 @@ | |||
687 | #size-cells = <2>; | 685 | #size-cells = <2>; |
688 | device_type = "pci"; | 686 | device_type = "pci"; |
689 | dma-coherent; | 687 | dma-coherent; |
690 | num-lanes = <2>; | ||
691 | num-viewport = <8>; | 688 | num-viewport = <8>; |
692 | bus-range = <0x0 0xff>; | 689 | bus-range = <0x0 0xff>; |
693 | ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */ | 690 | ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -709,7 +706,6 @@ | |||
709 | reg-names = "regs", "addr_space"; | 706 | reg-names = "regs", "addr_space"; |
710 | num-ib-windows = <6>; | 707 | num-ib-windows = <6>; |
711 | num-ob-windows = <8>; | 708 | num-ob-windows = <8>; |
712 | num-lanes = <2>; | ||
713 | status = "disabled"; | 709 | status = "disabled"; |
714 | }; | 710 | }; |
715 | 711 | ||
@@ -725,7 +721,6 @@ | |||
725 | #size-cells = <2>; | 721 | #size-cells = <2>; |
726 | device_type = "pci"; | 722 | device_type = "pci"; |
727 | dma-coherent; | 723 | dma-coherent; |
728 | num-lanes = <2>; | ||
729 | num-viewport = <8>; | 724 | num-viewport = <8>; |
730 | bus-range = <0x0 0xff>; | 725 | bus-range = <0x0 0xff>; |
731 | ranges = <0x81000000 0x0 0x00000000 0x50 0x00010000 0x0 0x00010000 /* downstream I/O */ | 726 | ranges = <0x81000000 0x0 0x00000000 0x50 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -747,7 +742,6 @@ | |||
747 | reg-names = "regs", "addr_space"; | 742 | reg-names = "regs", "addr_space"; |
748 | num-ib-windows = <6>; | 743 | num-ib-windows = <6>; |
749 | num-ob-windows = <8>; | 744 | num-ob-windows = <8>; |
750 | num-lanes = <2>; | ||
751 | status = "disabled"; | 745 | status = "disabled"; |
752 | }; | 746 | }; |
753 | 747 | ||
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi index d1469b0747c7..c676d0771762 100644 --- a/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi +++ b/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi | |||
@@ -469,7 +469,6 @@ | |||
469 | #size-cells = <2>; | 469 | #size-cells = <2>; |
470 | device_type = "pci"; | 470 | device_type = "pci"; |
471 | dma-coherent; | 471 | dma-coherent; |
472 | num-lanes = <4>; | ||
473 | num-viewport = <256>; | 472 | num-viewport = <256>; |
474 | bus-range = <0x0 0xff>; | 473 | bus-range = <0x0 0xff>; |
475 | ranges = <0x81000000 0x0 0x00000000 0x20 0x00010000 0x0 0x00010000 /* downstream I/O */ | 474 | ranges = <0x81000000 0x0 0x00000000 0x20 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -495,7 +494,6 @@ | |||
495 | #size-cells = <2>; | 494 | #size-cells = <2>; |
496 | device_type = "pci"; | 495 | device_type = "pci"; |
497 | dma-coherent; | 496 | dma-coherent; |
498 | num-lanes = <4>; | ||
499 | num-viewport = <6>; | 497 | num-viewport = <6>; |
500 | bus-range = <0x0 0xff>; | 498 | bus-range = <0x0 0xff>; |
501 | ranges = <0x81000000 0x0 0x00000000 0x28 0x00010000 0x0 0x00010000 /* downstream I/O */ | 499 | ranges = <0x81000000 0x0 0x00000000 0x28 0x00010000 0x0 0x00010000 /* downstream I/O */ |
@@ -521,7 +519,6 @@ | |||
521 | #size-cells = <2>; | 519 | #size-cells = <2>; |
522 | device_type = "pci"; | 520 | device_type = "pci"; |
523 | dma-coherent; | 521 | dma-coherent; |
524 | num-lanes = <8>; | ||
525 | num-viewport = <6>; | 522 | num-viewport = <6>; |
526 | bus-range = <0x0 0xff>; | 523 | bus-range = <0x0 0xff>; |
527 | ranges = <0x81000000 0x0 0x00000000 0x30 0x00010000 0x0 0x00010000 /* downstream I/O */ | 524 | ranges = <0x81000000 0x0 0x00000000 0x30 0x00010000 0x0 0x00010000 /* downstream I/O */ |
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi index 64101c9962ce..7a0be8eaa84a 100644 --- a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi +++ b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi | |||
@@ -639,7 +639,6 @@ | |||
639 | #size-cells = <2>; | 639 | #size-cells = <2>; |
640 | device_type = "pci"; | 640 | device_type = "pci"; |
641 | dma-coherent; | 641 | dma-coherent; |
642 | num-lanes = <4>; | ||
643 | num-viewport = <6>; | 642 | num-viewport = <6>; |
644 | bus-range = <0x0 0xff>; | 643 | bus-range = <0x0 0xff>; |
645 | msi-parent = <&its>; | 644 | msi-parent = <&its>; |
@@ -661,7 +660,6 @@ | |||
661 | #size-cells = <2>; | 660 | #size-cells = <2>; |
662 | device_type = "pci"; | 661 | device_type = "pci"; |
663 | dma-coherent; | 662 | dma-coherent; |
664 | num-lanes = <4>; | ||
665 | num-viewport = <6>; | 663 | num-viewport = <6>; |
666 | bus-range = <0x0 0xff>; | 664 | bus-range = <0x0 0xff>; |
667 | msi-parent = <&its>; | 665 | msi-parent = <&its>; |
@@ -683,7 +681,6 @@ | |||
683 | #size-cells = <2>; | 681 | #size-cells = <2>; |
684 | device_type = "pci"; | 682 | device_type = "pci"; |
685 | dma-coherent; | 683 | dma-coherent; |
686 | num-lanes = <8>; | ||
687 | num-viewport = <256>; | 684 | num-viewport = <256>; |
688 | bus-range = <0x0 0xff>; | 685 | bus-range = <0x0 0xff>; |
689 | msi-parent = <&its>; | 686 | msi-parent = <&its>; |
@@ -705,7 +702,6 @@ | |||
705 | #size-cells = <2>; | 702 | #size-cells = <2>; |
706 | device_type = "pci"; | 703 | device_type = "pci"; |
707 | dma-coherent; | 704 | dma-coherent; |
708 | num-lanes = <4>; | ||
709 | num-viewport = <6>; | 705 | num-viewport = <6>; |
710 | bus-range = <0x0 0xff>; | 706 | bus-range = <0x0 0xff>; |
711 | msi-parent = <&its>; | 707 | msi-parent = <&its>; |
diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi index 62e07e1197cc..4c38426a6969 100644 --- a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi | |||
@@ -289,5 +289,29 @@ | |||
289 | gpio = <&gpio TEGRA194_MAIN_GPIO(A, 3) GPIO_ACTIVE_HIGH>; | 289 | gpio = <&gpio TEGRA194_MAIN_GPIO(A, 3) GPIO_ACTIVE_HIGH>; |
290 | enable-active-high; | 290 | enable-active-high; |
291 | }; | 291 | }; |
292 | |||
293 | vdd_3v3_pcie: regulator@2 { | ||
294 | compatible = "regulator-fixed"; | ||
295 | reg = <2>; | ||
296 | |||
297 | regulator-name = "PEX_3V3"; | ||
298 | regulator-min-microvolt = <3300000>; | ||
299 | regulator-max-microvolt = <3300000>; | ||
300 | gpio = <&gpio TEGRA194_MAIN_GPIO(Z, 2) GPIO_ACTIVE_HIGH>; | ||
301 | regulator-boot-on; | ||
302 | enable-active-high; | ||
303 | }; | ||
304 | |||
305 | vdd_12v_pcie: regulator@3 { | ||
306 | compatible = "regulator-fixed"; | ||
307 | reg = <3>; | ||
308 | |||
309 | regulator-name = "VDD_12V"; | ||
310 | regulator-min-microvolt = <1200000>; | ||
311 | regulator-max-microvolt = <1200000>; | ||
312 | gpio = <&gpio TEGRA194_MAIN_GPIO(A, 1) GPIO_ACTIVE_LOW>; | ||
313 | regulator-boot-on; | ||
314 | enable-active-low; | ||
315 | }; | ||
292 | }; | 316 | }; |
293 | }; | 317 | }; |
diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2972-0000.dts b/arch/arm64/boot/dts/nvidia/tegra194-p2972-0000.dts index 23597d53c9c9..d47cd8c4dd24 100644 --- a/arch/arm64/boot/dts/nvidia/tegra194-p2972-0000.dts +++ b/arch/arm64/boot/dts/nvidia/tegra194-p2972-0000.dts | |||
@@ -93,9 +93,11 @@ | |||
93 | }; | 93 | }; |
94 | 94 | ||
95 | pcie@141a0000 { | 95 | pcie@141a0000 { |
96 | status = "disabled"; | 96 | status = "okay"; |
97 | 97 | ||
98 | vddio-pex-ctl-supply = <&vdd_1v8ao>; | 98 | vddio-pex-ctl-supply = <&vdd_1v8ao>; |
99 | vpcie3v3-supply = <&vdd_3v3_pcie>; | ||
100 | vpcie12v-supply = <&vdd_12v_pcie>; | ||
99 | 101 | ||
100 | phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>, | 102 | phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>, |
101 | <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>, | 103 | <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>, |
diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi index adebbbf36bd0..3c0cf54f0aab 100644 --- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi | |||
@@ -3,8 +3,9 @@ | |||
3 | #include <dt-bindings/gpio/tegra194-gpio.h> | 3 | #include <dt-bindings/gpio/tegra194-gpio.h> |
4 | #include <dt-bindings/interrupt-controller/arm-gic.h> | 4 | #include <dt-bindings/interrupt-controller/arm-gic.h> |
5 | #include <dt-bindings/mailbox/tegra186-hsp.h> | 5 | #include <dt-bindings/mailbox/tegra186-hsp.h> |
6 | #include <dt-bindings/reset/tegra194-reset.h> | 6 | #include <dt-bindings/pinctrl/pinctrl-tegra.h> |
7 | #include <dt-bindings/power/tegra194-powergate.h> | 7 | #include <dt-bindings/power/tegra194-powergate.h> |
8 | #include <dt-bindings/reset/tegra194-reset.h> | ||
8 | #include <dt-bindings/thermal/tegra194-bpmp-thermal.h> | 9 | #include <dt-bindings/thermal/tegra194-bpmp-thermal.h> |
9 | 10 | ||
10 | / { | 11 | / { |
@@ -130,6 +131,38 @@ | |||
130 | }; | 131 | }; |
131 | }; | 132 | }; |
132 | 133 | ||
134 | pinmux: pinmux@2430000 { | ||
135 | compatible = "nvidia,tegra194-pinmux"; | ||
136 | reg = <0x2430000 0x17000 | ||
137 | 0xc300000 0x4000>; | ||
138 | |||
139 | status = "okay"; | ||
140 | |||
141 | pex_rst_c5_out_state: pex_rst_c5_out { | ||
142 | pex_rst { | ||
143 | nvidia,pins = "pex_l5_rst_n_pgg1"; | ||
144 | nvidia,schmitt = <TEGRA_PIN_DISABLE>; | ||
145 | nvidia,lpdr = <TEGRA_PIN_ENABLE>; | ||
146 | nvidia,enable-input = <TEGRA_PIN_DISABLE>; | ||
147 | nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>; | ||
148 | nvidia,tristate = <TEGRA_PIN_DISABLE>; | ||
149 | nvidia,pull = <TEGRA_PIN_PULL_NONE>; | ||
150 | }; | ||
151 | }; | ||
152 | |||
153 | clkreq_c5_bi_dir_state: clkreq_c5_bi_dir { | ||
154 | clkreq { | ||
155 | nvidia,pins = "pex_l5_clkreq_n_pgg0"; | ||
156 | nvidia,schmitt = <TEGRA_PIN_DISABLE>; | ||
157 | nvidia,lpdr = <TEGRA_PIN_ENABLE>; | ||
158 | nvidia,enable-input = <TEGRA_PIN_ENABLE>; | ||
159 | nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>; | ||
160 | nvidia,tristate = <TEGRA_PIN_DISABLE>; | ||
161 | nvidia,pull = <TEGRA_PIN_PULL_NONE>; | ||
162 | }; | ||
163 | }; | ||
164 | }; | ||
165 | |||
133 | uarta: serial@3100000 { | 166 | uarta: serial@3100000 { |
134 | compatible = "nvidia,tegra194-uart", "nvidia,tegra20-uart"; | 167 | compatible = "nvidia,tegra194-uart", "nvidia,tegra20-uart"; |
135 | reg = <0x03100000 0x40>; | 168 | reg = <0x03100000 0x40>; |
@@ -1365,6 +1398,9 @@ | |||
1365 | num-viewport = <8>; | 1398 | num-viewport = <8>; |
1366 | linux,pci-domain = <5>; | 1399 | linux,pci-domain = <5>; |
1367 | 1400 | ||
1401 | pinctrl-names = "default"; | ||
1402 | pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>; | ||
1403 | |||
1368 | clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>, | 1404 | clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>, |
1369 | <&bpmp TEGRA194_CLK_PEX1_CORE_5M>; | 1405 | <&bpmp TEGRA194_CLK_PEX1_CORE_5M>; |
1370 | clock-names = "core", "core_m"; | 1406 | clock-names = "core", "core_m"; |
diff --git a/arch/microblaze/include/asm/pci.h b/arch/microblaze/include/asm/pci.h index 21ddba9188b2..7c4dc5d85f53 100644 --- a/arch/microblaze/include/asm/pci.h +++ b/arch/microblaze/include/asm/pci.h | |||
@@ -66,8 +66,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file, | |||
66 | unsigned long size, | 66 | unsigned long size, |
67 | pgprot_t prot); | 67 | pgprot_t prot); |
68 | 68 | ||
69 | #define HAVE_ARCH_PCI_RESOURCE_TO_USER | ||
70 | |||
71 | /* This part of code was originally in xilinx-pci.h */ | 69 | /* This part of code was originally in xilinx-pci.h */ |
72 | #ifdef CONFIG_PCI_XILINX | 70 | #ifdef CONFIG_PCI_XILINX |
73 | extern void __init xilinx_pci_init(void); | 71 | extern void __init xilinx_pci_init(void); |
diff --git a/arch/mips/include/asm/pci.h b/arch/mips/include/asm/pci.h index 436099883022..6f48649201c5 100644 --- a/arch/mips/include/asm/pci.h +++ b/arch/mips/include/asm/pci.h | |||
@@ -108,7 +108,6 @@ extern unsigned long PCIBIOS_MIN_MEM; | |||
108 | 108 | ||
109 | #define HAVE_PCI_MMAP | 109 | #define HAVE_PCI_MMAP |
110 | #define ARCH_GENERIC_PCI_MMAP_RESOURCE | 110 | #define ARCH_GENERIC_PCI_MMAP_RESOURCE |
111 | #define HAVE_ARCH_PCI_RESOURCE_TO_USER | ||
112 | 111 | ||
113 | /* | 112 | /* |
114 | * Dynamic DMA mapping stuff. | 113 | * Dynamic DMA mapping stuff. |
diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h index 2372d35533ad..327567b8f7d6 100644 --- a/arch/powerpc/include/asm/pci.h +++ b/arch/powerpc/include/asm/pci.h | |||
@@ -112,8 +112,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file, | |||
112 | unsigned long size, | 112 | unsigned long size, |
113 | pgprot_t prot); | 113 | pgprot_t prot); |
114 | 114 | ||
115 | #define HAVE_ARCH_PCI_RESOURCE_TO_USER | ||
116 | |||
117 | extern resource_size_t pcibios_io_space_offset(struct pci_controller *hose); | 115 | extern resource_size_t pcibios_io_space_offset(struct pci_controller *hose); |
118 | extern void pcibios_setup_bus_devices(struct pci_bus *bus); | 116 | extern void pcibios_setup_bus_devices(struct pci_bus *bus); |
119 | extern void pcibios_setup_bus_self(struct pci_bus *bus); | 117 | extern void pcibios_setup_bus_self(struct pci_bus *bus); |
diff --git a/arch/sparc/include/asm/pci.h b/arch/sparc/include/asm/pci.h index cfec79bb1831..4deddf430e5d 100644 --- a/arch/sparc/include/asm/pci.h +++ b/arch/sparc/include/asm/pci.h | |||
@@ -38,8 +38,6 @@ static inline int pci_proc_domain(struct pci_bus *bus) | |||
38 | #define arch_can_pci_mmap_io() 1 | 38 | #define arch_can_pci_mmap_io() 1 |
39 | #define HAVE_ARCH_PCI_GET_UNMAPPED_AREA | 39 | #define HAVE_ARCH_PCI_GET_UNMAPPED_AREA |
40 | #define get_pci_unmapped_area get_fb_unmapped_area | 40 | #define get_pci_unmapped_area get_fb_unmapped_area |
41 | |||
42 | #define HAVE_ARCH_PCI_RESOURCE_TO_USER | ||
43 | #endif /* CONFIG_SPARC64 */ | 41 | #endif /* CONFIG_SPARC64 */ |
44 | 42 | ||
45 | #if defined(CONFIG_SPARC64) || defined(CONFIG_LEON_PCI) | 43 | #if defined(CONFIG_SPARC64) || defined(CONFIG_LEON_PCI) |
diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c index 314a187ed572..d1e666ef3fcc 100644 --- a/drivers/acpi/pci_root.c +++ b/drivers/acpi/pci_root.c | |||
@@ -15,7 +15,6 @@ | |||
15 | #include <linux/pm_runtime.h> | 15 | #include <linux/pm_runtime.h> |
16 | #include <linux/pci.h> | 16 | #include <linux/pci.h> |
17 | #include <linux/pci-acpi.h> | 17 | #include <linux/pci-acpi.h> |
18 | #include <linux/pci-aspm.h> | ||
19 | #include <linux/dmar.h> | 18 | #include <linux/dmar.h> |
20 | #include <linux/acpi.h> | 19 | #include <linux/acpi.h> |
21 | #include <linux/slab.h> | 20 | #include <linux/slab.h> |
diff --git a/drivers/char/xillybus/xillybus_pcie.c b/drivers/char/xillybus/xillybus_pcie.c index 02c15952b103..18b0c392bc93 100644 --- a/drivers/char/xillybus/xillybus_pcie.c +++ b/drivers/char/xillybus/xillybus_pcie.c | |||
@@ -9,7 +9,6 @@ | |||
9 | 9 | ||
10 | #include <linux/module.h> | 10 | #include <linux/module.h> |
11 | #include <linux/pci.h> | 11 | #include <linux/pci.h> |
12 | #include <linux/pci-aspm.h> | ||
13 | #include <linux/slab.h> | 12 | #include <linux/slab.h> |
14 | #include "xillybus.h" | 13 | #include "xillybus.h" |
15 | 14 | ||
diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c index dce06108c8c3..5337393d4dfe 100644 --- a/drivers/infiniband/core/rw.c +++ b/drivers/infiniband/core/rw.c | |||
@@ -583,8 +583,10 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num, | |||
583 | break; | 583 | break; |
584 | } | 584 | } |
585 | 585 | ||
586 | /* P2PDMA contexts do not need to be unmapped */ | 586 | if (is_pci_p2pdma_page(sg_page(sg))) |
587 | if (!is_pci_p2pdma_page(sg_page(sg))) | 587 | pci_p2pdma_unmap_sg(qp->pd->device->dma_device, sg, |
588 | sg_cnt, dir); | ||
589 | else | ||
588 | ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); | 590 | ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); |
589 | } | 591 | } |
590 | EXPORT_SYMBOL(rdma_rw_ctx_destroy); | 592 | EXPORT_SYMBOL(rdma_rw_ctx_destroy); |
diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h index 34cd67951aec..6c51b1bad8c4 100644 --- a/drivers/net/ethernet/intel/e1000e/e1000.h +++ b/drivers/net/ethernet/intel/e1000e/e1000.h | |||
@@ -13,7 +13,6 @@ | |||
13 | #include <linux/io.h> | 13 | #include <linux/io.h> |
14 | #include <linux/netdevice.h> | 14 | #include <linux/netdevice.h> |
15 | #include <linux/pci.h> | 15 | #include <linux/pci.h> |
16 | #include <linux/pci-aspm.h> | ||
17 | #include <linux/crc32.h> | 16 | #include <linux/crc32.h> |
18 | #include <linux/if_vlan.h> | 17 | #include <linux/if_vlan.h> |
19 | #include <linux/timecounter.h> | 18 | #include <linux/timecounter.h> |
diff --git a/drivers/net/ethernet/jme.c b/drivers/net/ethernet/jme.c index 6d52cf5ce20e..25aa400e2e3c 100644 --- a/drivers/net/ethernet/jme.c +++ b/drivers/net/ethernet/jme.c | |||
@@ -14,7 +14,6 @@ | |||
14 | #include <linux/module.h> | 14 | #include <linux/module.h> |
15 | #include <linux/kernel.h> | 15 | #include <linux/kernel.h> |
16 | #include <linux/pci.h> | 16 | #include <linux/pci.h> |
17 | #include <linux/pci-aspm.h> | ||
18 | #include <linux/netdevice.h> | 17 | #include <linux/netdevice.h> |
19 | #include <linux/etherdevice.h> | 18 | #include <linux/etherdevice.h> |
20 | #include <linux/ethtool.h> | 19 | #include <linux/ethtool.h> |
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c index 0ef01db1f8b8..74f81fe03810 100644 --- a/drivers/net/ethernet/realtek/r8169_main.c +++ b/drivers/net/ethernet/realtek/r8169_main.c | |||
@@ -28,7 +28,6 @@ | |||
28 | #include <linux/dma-mapping.h> | 28 | #include <linux/dma-mapping.h> |
29 | #include <linux/pm_runtime.h> | 29 | #include <linux/pm_runtime.h> |
30 | #include <linux/prefetch.h> | 30 | #include <linux/prefetch.h> |
31 | #include <linux/pci-aspm.h> | ||
32 | #include <linux/ipv6.h> | 31 | #include <linux/ipv6.h> |
33 | #include <net/ip6_checksum.h> | 32 | #include <net/ip6_checksum.h> |
34 | 33 | ||
diff --git a/drivers/net/wireless/ath/ath5k/pci.c b/drivers/net/wireless/ath/ath5k/pci.c index c6156cc38940..d5ee32ce9eb3 100644 --- a/drivers/net/wireless/ath/ath5k/pci.c +++ b/drivers/net/wireless/ath/ath5k/pci.c | |||
@@ -18,7 +18,6 @@ | |||
18 | 18 | ||
19 | #include <linux/nl80211.h> | 19 | #include <linux/nl80211.h> |
20 | #include <linux/pci.h> | 20 | #include <linux/pci.h> |
21 | #include <linux/pci-aspm.h> | ||
22 | #include <linux/etherdevice.h> | 21 | #include <linux/etherdevice.h> |
23 | #include <linux/module.h> | 22 | #include <linux/module.h> |
24 | #include "../ath.h" | 23 | #include "../ath.h" |
diff --git a/drivers/net/wireless/intel/iwlegacy/3945-mac.c b/drivers/net/wireless/intel/iwlegacy/3945-mac.c index b82da75a9ae3..4fbcc7fba3cc 100644 --- a/drivers/net/wireless/intel/iwlegacy/3945-mac.c +++ b/drivers/net/wireless/intel/iwlegacy/3945-mac.c | |||
@@ -18,7 +18,6 @@ | |||
18 | #include <linux/module.h> | 18 | #include <linux/module.h> |
19 | #include <linux/init.h> | 19 | #include <linux/init.h> |
20 | #include <linux/pci.h> | 20 | #include <linux/pci.h> |
21 | #include <linux/pci-aspm.h> | ||
22 | #include <linux/slab.h> | 21 | #include <linux/slab.h> |
23 | #include <linux/dma-mapping.h> | 22 | #include <linux/dma-mapping.h> |
24 | #include <linux/delay.h> | 23 | #include <linux/delay.h> |
diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c index fa2c02881939..ffb705b18fb1 100644 --- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c +++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c | |||
@@ -18,7 +18,6 @@ | |||
18 | #include <linux/module.h> | 18 | #include <linux/module.h> |
19 | #include <linux/init.h> | 19 | #include <linux/init.h> |
20 | #include <linux/pci.h> | 20 | #include <linux/pci.h> |
21 | #include <linux/pci-aspm.h> | ||
22 | #include <linux/slab.h> | 21 | #include <linux/slab.h> |
23 | #include <linux/dma-mapping.h> | 22 | #include <linux/dma-mapping.h> |
24 | #include <linux/delay.h> | 23 | #include <linux/delay.h> |
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c index 5ab87a8dc907..f8a1f985a1d8 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c | |||
@@ -62,7 +62,6 @@ | |||
62 | * | 62 | * |
63 | *****************************************************************************/ | 63 | *****************************************************************************/ |
64 | #include <linux/pci.h> | 64 | #include <linux/pci.h> |
65 | #include <linux/pci-aspm.h> | ||
66 | #include <linux/interrupt.h> | 65 | #include <linux/interrupt.h> |
67 | #include <linux/debugfs.h> | 66 | #include <linux/debugfs.h> |
68 | #include <linux/sched.h> | 67 | #include <linux/sched.h> |
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 6b4d7b064b38..c0808f9eb8ab 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c | |||
@@ -549,8 +549,10 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) | |||
549 | 549 | ||
550 | WARN_ON_ONCE(!iod->nents); | 550 | WARN_ON_ONCE(!iod->nents); |
551 | 551 | ||
552 | /* P2PDMA requests do not need to be unmapped */ | 552 | if (is_pci_p2pdma_page(sg_page(iod->sg))) |
553 | if (!is_pci_p2pdma_page(sg_page(iod->sg))) | 553 | pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents, |
554 | rq_dma_dir(req)); | ||
555 | else | ||
554 | dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req)); | 556 | dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req)); |
555 | 557 | ||
556 | 558 | ||
@@ -834,8 +836,8 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, | |||
834 | goto out; | 836 | goto out; |
835 | 837 | ||
836 | if (is_pci_p2pdma_page(sg_page(iod->sg))) | 838 | if (is_pci_p2pdma_page(sg_page(iod->sg))) |
837 | nr_mapped = pci_p2pdma_map_sg(dev->dev, iod->sg, iod->nents, | 839 | nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg, |
838 | rq_dma_dir(req)); | 840 | iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN); |
839 | else | 841 | else |
840 | nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, | 842 | nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, |
841 | rq_dma_dir(req), DMA_ATTR_NO_WARN); | 843 | rq_dma_dir(req), DMA_ATTR_NO_WARN); |
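The rdma_rw and NVMe hunks above make the same change: scatterlists that were mapped through the P2PDMA API are now also unmapped through it (previously the unmap was simply skipped), and the NVMe map path moves to the _attrs variant so DMA_ATTR_NO_WARN can be passed. A rough, self-contained sketch of that pairing, with hypothetical helper names (example_map_sg/example_unmap_sg) rather than the drivers' real functions:

	#include <linux/dma-mapping.h>
	#include <linux/mm.h>
	#include <linux/pci-p2pdma.h>
	#include <linux/scatterlist.h>

	static int example_map_sg(struct device *dev, struct scatterlist *sg,
				  int nents, enum dma_data_direction dir)
	{
		/* P2PDMA pages must go through the P2PDMA mapping API */
		if (is_pci_p2pdma_page(sg_page(sg)))
			return pci_p2pdma_map_sg_attrs(dev, sg, nents, dir,
						       DMA_ATTR_NO_WARN);

		return dma_map_sg_attrs(dev, sg, nents, dir, DMA_ATTR_NO_WARN);
	}

	static void example_unmap_sg(struct device *dev, struct scatterlist *sg,
				     int nents, enum dma_data_direction dir)
	{
		/* ... and, after this series, must be unmapped through it as well */
		if (is_pci_p2pdma_page(sg_page(sg)))
			pci_p2pdma_unmap_sg(dev, sg, nents, dir);
		else
			dma_unmap_sg(dev, sg, nents, dir);
	}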
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index c313de96a357..a304f5ea11b9 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig | |||
@@ -52,7 +52,7 @@ config PCI_MSI | |||
52 | If you don't know what to do here, say Y. | 52 | If you don't know what to do here, say Y. |
53 | 53 | ||
54 | config PCI_MSI_IRQ_DOMAIN | 54 | config PCI_MSI_IRQ_DOMAIN |
55 | def_bool ARC || ARM || ARM64 || X86 | 55 | def_bool ARC || ARM || ARM64 || X86 || RISCV |
56 | depends on PCI_MSI | 56 | depends on PCI_MSI |
57 | select GENERIC_MSI_IRQ_DOMAIN | 57 | select GENERIC_MSI_IRQ_DOMAIN |
58 | 58 | ||
@@ -170,7 +170,7 @@ config PCI_P2PDMA | |||
170 | 170 | ||
171 | Many PCIe root complexes do not support P2P transactions and | 171 | Many PCIe root complexes do not support P2P transactions and |
172 | it's hard to tell which support it at all, so at this time, | 172 | it's hard to tell which support it at all, so at this time, |
173 | P2P DMA transations must be between devices behind the same root | 173 | P2P DMA transactions must be between devices behind the same root |
174 | port. | 174 | port. |
175 | 175 | ||
176 | If unsure, say N. | 176 | If unsure, say N. |
@@ -181,7 +181,7 @@ config PCI_LABEL | |||
181 | 181 | ||
182 | config PCI_HYPERV | 182 | config PCI_HYPERV |
183 | tristate "Hyper-V PCI Frontend" | 183 | tristate "Hyper-V PCI Frontend" |
184 | depends on X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64 | 184 | depends on X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS |
185 | select PCI_HYPERV_INTERFACE | 185 | select PCI_HYPERV_INTERFACE |
186 | help | 186 | help |
187 | The PCI device frontend driver allows the kernel to import arbitrary | 187 | The PCI device frontend driver allows the kernel to import arbitrary |
diff --git a/drivers/pci/access.c b/drivers/pci/access.c index 544922f097c0..2fccb5762c76 100644 --- a/drivers/pci/access.c +++ b/drivers/pci/access.c | |||
@@ -336,15 +336,6 @@ static inline int pcie_cap_version(const struct pci_dev *dev) | |||
336 | return pcie_caps_reg(dev) & PCI_EXP_FLAGS_VERS; | 336 | return pcie_caps_reg(dev) & PCI_EXP_FLAGS_VERS; |
337 | } | 337 | } |
338 | 338 | ||
339 | static bool pcie_downstream_port(const struct pci_dev *dev) | ||
340 | { | ||
341 | int type = pci_pcie_type(dev); | ||
342 | |||
343 | return type == PCI_EXP_TYPE_ROOT_PORT || | ||
344 | type == PCI_EXP_TYPE_DOWNSTREAM || | ||
345 | type == PCI_EXP_TYPE_PCIE_BRIDGE; | ||
346 | } | ||
347 | |||
348 | bool pcie_cap_has_lnkctl(const struct pci_dev *dev) | 339 | bool pcie_cap_has_lnkctl(const struct pci_dev *dev) |
349 | { | 340 | { |
350 | int type = pci_pcie_type(dev); | 341 | int type = pci_pcie_type(dev); |
diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c index 495059d923f7..8e40b3e6da77 100644 --- a/drivers/pci/bus.c +++ b/drivers/pci/bus.c | |||
@@ -417,11 +417,9 @@ struct pci_bus *pci_bus_get(struct pci_bus *bus) | |||
417 | get_device(&bus->dev); | 417 | get_device(&bus->dev); |
418 | return bus; | 418 | return bus; |
419 | } | 419 | } |
420 | EXPORT_SYMBOL(pci_bus_get); | ||
421 | 420 | ||
422 | void pci_bus_put(struct pci_bus *bus) | 421 | void pci_bus_put(struct pci_bus *bus) |
423 | { | 422 | { |
424 | if (bus) | 423 | if (bus) |
425 | put_device(&bus->dev); | 424 | put_device(&bus->dev); |
426 | } | 425 | } |
427 | EXPORT_SYMBOL(pci_bus_put); | ||
diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig index 6ea778ae4877..0ba988b5b5bc 100644 --- a/drivers/pci/controller/dwc/Kconfig +++ b/drivers/pci/controller/dwc/Kconfig | |||
@@ -131,13 +131,29 @@ config PCI_KEYSTONE_EP | |||
131 | DesignWare core functions to implement the driver. | 131 | DesignWare core functions to implement the driver. |
132 | 132 | ||
133 | config PCI_LAYERSCAPE | 133 | config PCI_LAYERSCAPE |
134 | bool "Freescale Layerscape PCIe controller" | 134 | bool "Freescale Layerscape PCIe controller - Host mode" |
135 | depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST) | 135 | depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST) |
136 | depends on PCI_MSI_IRQ_DOMAIN | 136 | depends on PCI_MSI_IRQ_DOMAIN |
137 | select MFD_SYSCON | 137 | select MFD_SYSCON |
138 | select PCIE_DW_HOST | 138 | select PCIE_DW_HOST |
139 | help | 139 | help |
140 | Say Y here if you want PCIe controller support on Layerscape SoCs. | 140 | Say Y here if you want to enable PCIe controller support on Layerscape |
141 | SoCs to work in Host mode. | ||
142 | This controller can work either as EP or RC. The RCW[HOST_AGT_PEX] | ||
143 | determines which PCIe controller works in EP mode and which PCIe | ||
144 | controller works in RC mode. | ||
145 | |||
146 | config PCI_LAYERSCAPE_EP | ||
147 | bool "Freescale Layerscape PCIe controller - Endpoint mode" | ||
148 | depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST) | ||
149 | depends on PCI_ENDPOINT | ||
150 | select PCIE_DW_EP | ||
151 | help | ||
152 | Say Y here if you want to enable PCIe controller support on Layerscape | ||
153 | SoCs to work in Endpoint mode. | ||
154 | This controller can work either as EP or RC. The RCW[HOST_AGT_PEX] | ||
155 | determines which PCIe controller works in EP mode and which PCIe | ||
156 | controller works in RC mode. | ||
141 | 157 | ||
142 | config PCI_HISI | 158 | config PCI_HISI |
143 | depends on OF && (ARM64 || COMPILE_TEST) | 159 | depends on OF && (ARM64 || COMPILE_TEST) |
@@ -220,6 +236,16 @@ config PCI_MESON | |||
220 | and therefore the driver re-uses the DesignWare core functions to | 236 | and therefore the driver re-uses the DesignWare core functions to |
221 | implement the driver. | 237 | implement the driver. |
222 | 238 | ||
239 | config PCIE_TEGRA194 | ||
240 | tristate "NVIDIA Tegra194 (and later) PCIe controller" | ||
241 | depends on ARCH_TEGRA_194_SOC || COMPILE_TEST | ||
242 | depends on PCI_MSI_IRQ_DOMAIN | ||
243 | select PCIE_DW_HOST | ||
244 | select PHY_TEGRA194_P2U | ||
245 | help | ||
246 | Say Y here if you want support for DesignWare core based PCIe host | ||
247 | controller found in NVIDIA Tegra194 SoC. | ||
248 | |||
223 | config PCIE_UNIPHIER | 249 | config PCIE_UNIPHIER |
224 | bool "Socionext UniPhier PCIe controllers" | 250 | bool "Socionext UniPhier PCIe controllers" |
225 | depends on ARCH_UNIPHIER || COMPILE_TEST | 251 | depends on ARCH_UNIPHIER || COMPILE_TEST |
@@ -230,4 +256,16 @@ config PCIE_UNIPHIER | |||
230 | Say Y here if you want PCIe controller support on UniPhier SoCs. | 256 | Say Y here if you want PCIe controller support on UniPhier SoCs. |
231 | This driver supports LD20 and PXs3 SoCs. | 257 | This driver supports LD20 and PXs3 SoCs. |
232 | 258 | ||
259 | config PCIE_AL | ||
260 | bool "Amazon Annapurna Labs PCIe controller" | ||
261 | depends on OF && (ARM64 || COMPILE_TEST) | ||
262 | depends on PCI_MSI_IRQ_DOMAIN | ||
263 | select PCIE_DW_HOST | ||
264 | help | ||
265 | Say Y here to enable support for Amazon's Annapurna Labs PCIe | ||
266 | controller IP on Amazon SoCs. The PCIe controller uses the DesignWare | ||
267 | core plus Annapurna Labs proprietary hardware wrappers. This is | ||
268 | required only for DT-based platforms. ACPI platforms with the | ||
269 | Annapurna Labs PCIe controller don't need to enable this. | ||
270 | |||
233 | endmenu | 271 | endmenu |
diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
index b085dfd4fab7..69faff371f11 100644
--- a/drivers/pci/controller/dwc/Makefile
+++ b/drivers/pci/controller/dwc/Makefile
@@ -8,13 +8,15 @@ obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o | |||
8 | obj-$(CONFIG_PCI_IMX6) += pci-imx6.o | 8 | obj-$(CONFIG_PCI_IMX6) += pci-imx6.o |
9 | obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o | 9 | obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o |
10 | obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o | 10 | obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o |
11 | obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o pci-layerscape-ep.o | 11 | obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o |
12 | obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o | ||
12 | obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o | 13 | obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o |
13 | obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o | 14 | obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o |
14 | obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o | 15 | obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o |
15 | obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o | 16 | obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o |
16 | obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o | 17 | obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o |
17 | obj-$(CONFIG_PCI_MESON) += pci-meson.o | 18 | obj-$(CONFIG_PCI_MESON) += pci-meson.o |
19 | obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o | ||
18 | obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o | 20 | obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o |
19 | 21 | ||
20 | # The following drivers are for devices that use the generic ACPI | 22 | # The following drivers are for devices that use the generic ACPI |
diff --git a/drivers/pci/controller/dwc/pci-exynos.c b/drivers/pci/controller/dwc/pci-exynos.c
index cee5f2f590e2..14a6ba4067fb 100644
--- a/drivers/pci/controller/dwc/pci-exynos.c
+++ b/drivers/pci/controller/dwc/pci-exynos.c
@@ -465,7 +465,7 @@ static int __init exynos_pcie_probe(struct platform_device *pdev) | |||
465 | 465 | ||
466 | ep->phy = devm_of_phy_get(dev, np, NULL); | 466 | ep->phy = devm_of_phy_get(dev, np, NULL); |
467 | if (IS_ERR(ep->phy)) { | 467 | if (IS_ERR(ep->phy)) { |
468 | if (PTR_ERR(ep->phy) == -EPROBE_DEFER) | 468 | if (PTR_ERR(ep->phy) != -ENODEV) |
469 | return PTR_ERR(ep->phy); | 469 | return PTR_ERR(ep->phy); |
470 | 470 | ||
471 | ep->phy = NULL; | 471 | ep->phy = NULL; |
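The pci-exynos.c hunk above is one instance of a pattern applied throughout this series for optional PHYs and regulators: only -ENODEV means "the resource is not described, carry on without it"; every other error, including -EPROBE_DEFER, is now propagated to the caller instead of being silently dropped. A minimal sketch of the idiom (the helper name is illustrative, not part of the patch; it assumes the usual <linux/phy/phy.h> and <linux/err.h> machinery):

static int example_get_optional_phy(struct device *dev, struct device_node *np,
				    struct phy **phy)
{
	*phy = devm_of_phy_get(dev, np, NULL);
	if (IS_ERR(*phy)) {
		if (PTR_ERR(*phy) != -ENODEV)
			return PTR_ERR(*phy);	/* includes -EPROBE_DEFER */
		*phy = NULL;			/* PHY is genuinely optional */
	}

	return 0;
}

The same shape appears below for the vpcie regulators in pci-imx6.c and pcie-histb.c and for the per-lane PHYs in pcie-armada8k.c.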
diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
index 9b5cb5b70389..acfbd34032a8 100644
--- a/drivers/pci/controller/dwc/pci-imx6.c
+++ b/drivers/pci/controller/dwc/pci-imx6.c
@@ -57,6 +57,7 @@ enum imx6_pcie_variants { | |||
57 | struct imx6_pcie_drvdata { | 57 | struct imx6_pcie_drvdata { |
58 | enum imx6_pcie_variants variant; | 58 | enum imx6_pcie_variants variant; |
59 | u32 flags; | 59 | u32 flags; |
60 | int dbi_length; | ||
60 | }; | 61 | }; |
61 | 62 | ||
62 | struct imx6_pcie { | 63 | struct imx6_pcie { |
@@ -1173,8 +1174,8 @@ static int imx6_pcie_probe(struct platform_device *pdev) | |||
1173 | 1174 | ||
1174 | imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); | 1175 | imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); |
1175 | if (IS_ERR(imx6_pcie->vpcie)) { | 1176 | if (IS_ERR(imx6_pcie->vpcie)) { |
1176 | if (PTR_ERR(imx6_pcie->vpcie) == -EPROBE_DEFER) | 1177 | if (PTR_ERR(imx6_pcie->vpcie) != -ENODEV) |
1177 | return -EPROBE_DEFER; | 1178 | return PTR_ERR(imx6_pcie->vpcie); |
1178 | imx6_pcie->vpcie = NULL; | 1179 | imx6_pcie->vpcie = NULL; |
1179 | } | 1180 | } |
1180 | 1181 | ||
@@ -1212,6 +1213,7 @@ static const struct imx6_pcie_drvdata drvdata[] = { | |||
1212 | .variant = IMX6Q, | 1213 | .variant = IMX6Q, |
1213 | .flags = IMX6_PCIE_FLAG_IMX6_PHY | | 1214 | .flags = IMX6_PCIE_FLAG_IMX6_PHY | |
1214 | IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE, | 1215 | IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE, |
1216 | .dbi_length = 0x200, | ||
1215 | }, | 1217 | }, |
1216 | [IMX6SX] = { | 1218 | [IMX6SX] = { |
1217 | .variant = IMX6SX, | 1219 | .variant = IMX6SX, |
@@ -1254,6 +1256,37 @@ static struct platform_driver imx6_pcie_driver = { | |||
1254 | .shutdown = imx6_pcie_shutdown, | 1256 | .shutdown = imx6_pcie_shutdown, |
1255 | }; | 1257 | }; |
1256 | 1258 | ||
1259 | static void imx6_pcie_quirk(struct pci_dev *dev) | ||
1260 | { | ||
1261 | struct pci_bus *bus = dev->bus; | ||
1262 | struct pcie_port *pp = bus->sysdata; | ||
1263 | |||
1264 | /* Bus parent is the PCI bridge, its parent is this platform driver */ | ||
1265 | if (!bus->dev.parent || !bus->dev.parent->parent) | ||
1266 | return; | ||
1267 | |||
1268 | /* Make sure we only quirk devices associated with this driver */ | ||
1269 | if (bus->dev.parent->parent->driver != &imx6_pcie_driver.driver) | ||
1270 | return; | ||
1271 | |||
1272 | if (bus->number == pp->root_bus_nr) { | ||
1273 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
1274 | struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci); | ||
1275 | |||
1276 | /* | ||
1277 | * Limit config length to avoid the kernel reading beyond | ||
1278 | * the register set and causing an abort on i.MX 6Quad | ||
1279 | */ | ||
1280 | if (imx6_pcie->drvdata->dbi_length) { | ||
1281 | dev->cfg_size = imx6_pcie->drvdata->dbi_length; | ||
1282 | dev_info(&dev->dev, "Limiting cfg_size to %d\n", | ||
1283 | dev->cfg_size); | ||
1284 | } | ||
1285 | } | ||
1286 | } | ||
1287 | DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_SYNOPSYS, 0xabcd, | ||
1288 | PCI_CLASS_BRIDGE_PCI, 8, imx6_pcie_quirk); | ||
1289 | |||
1257 | static int __init imx6_pcie_init(void) | 1290 | static int __init imx6_pcie_init(void) |
1258 | { | 1291 | { |
1259 | #ifdef CONFIG_ARM | 1292 | #ifdef CONFIG_ARM |
diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c
index be61d96cc95e..ca9aa4501e7e 100644
--- a/drivers/pci/controller/dwc/pci-layerscape-ep.c
+++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c
@@ -44,6 +44,7 @@ static const struct pci_epc_features ls_pcie_epc_features = { | |||
44 | .linkup_notifier = false, | 44 | .linkup_notifier = false, |
45 | .msi_capable = true, | 45 | .msi_capable = true, |
46 | .msix_capable = false, | 46 | .msix_capable = false, |
47 | .bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4), | ||
47 | }; | 48 | }; |
48 | 49 | ||
49 | static const struct pci_epc_features* | 50 | static const struct pci_epc_features* |
diff --git a/drivers/pci/controller/dwc/pcie-al.c b/drivers/pci/controller/dwc/pcie-al.c
index 3ab58f0584a8..1eeda2f6371f 100644
--- a/drivers/pci/controller/dwc/pcie-al.c
+++ b/drivers/pci/controller/dwc/pcie-al.c
@@ -91,3 +91,368 @@ struct pci_ecam_ops al_pcie_ops = { | |||
91 | }; | 91 | }; |
92 | 92 | ||
93 | #endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */ | 93 | #endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */ |
94 | |||
95 | #ifdef CONFIG_PCIE_AL | ||
96 | |||
97 | #include <linux/of_pci.h> | ||
98 | #include "pcie-designware.h" | ||
99 | |||
100 | #define AL_PCIE_REV_ID_2 2 | ||
101 | #define AL_PCIE_REV_ID_3 3 | ||
102 | #define AL_PCIE_REV_ID_4 4 | ||
103 | |||
104 | #define AXI_BASE_OFFSET 0x0 | ||
105 | |||
106 | #define DEVICE_ID_OFFSET 0x16c | ||
107 | |||
108 | #define DEVICE_REV_ID 0x0 | ||
109 | #define DEVICE_REV_ID_DEV_ID_MASK GENMASK(31, 16) | ||
110 | |||
111 | #define DEVICE_REV_ID_DEV_ID_X4 0 | ||
112 | #define DEVICE_REV_ID_DEV_ID_X8 2 | ||
113 | #define DEVICE_REV_ID_DEV_ID_X16 4 | ||
114 | |||
115 | #define OB_CTRL_REV1_2_OFFSET 0x0040 | ||
116 | #define OB_CTRL_REV3_5_OFFSET 0x0030 | ||
117 | |||
118 | #define CFG_TARGET_BUS 0x0 | ||
119 | #define CFG_TARGET_BUS_MASK_MASK GENMASK(7, 0) | ||
120 | #define CFG_TARGET_BUS_BUSNUM_MASK GENMASK(15, 8) | ||
121 | |||
122 | #define CFG_CONTROL 0x4 | ||
123 | #define CFG_CONTROL_SUBBUS_MASK GENMASK(15, 8) | ||
124 | #define CFG_CONTROL_SEC_BUS_MASK GENMASK(23, 16) | ||
125 | |||
126 | struct al_pcie_reg_offsets { | ||
127 | unsigned int ob_ctrl; | ||
128 | }; | ||
129 | |||
130 | struct al_pcie_target_bus_cfg { | ||
131 | u8 reg_val; | ||
132 | u8 reg_mask; | ||
133 | u8 ecam_mask; | ||
134 | }; | ||
135 | |||
136 | struct al_pcie { | ||
137 | struct dw_pcie *pci; | ||
138 | void __iomem *controller_base; /* base of PCIe unit (not DW core) */ | ||
139 | struct device *dev; | ||
140 | resource_size_t ecam_size; | ||
141 | unsigned int controller_rev_id; | ||
142 | struct al_pcie_reg_offsets reg_offsets; | ||
143 | struct al_pcie_target_bus_cfg target_bus_cfg; | ||
144 | }; | ||
145 | |||
146 | #define PCIE_ECAM_DEVFN(x) (((x) & 0xff) << 12) | ||
147 | |||
148 | #define to_al_pcie(x) dev_get_drvdata((x)->dev) | ||
149 | |||
150 | static inline u32 al_pcie_controller_readl(struct al_pcie *pcie, u32 offset) | ||
151 | { | ||
152 | return readl_relaxed(pcie->controller_base + offset); | ||
153 | } | ||
154 | |||
155 | static inline void al_pcie_controller_writel(struct al_pcie *pcie, u32 offset, | ||
156 | u32 val) | ||
157 | { | ||
158 | writel_relaxed(val, pcie->controller_base + offset); | ||
159 | } | ||
160 | |||
161 | static int al_pcie_rev_id_get(struct al_pcie *pcie, unsigned int *rev_id) | ||
162 | { | ||
163 | u32 dev_rev_id_val; | ||
164 | u32 dev_id_val; | ||
165 | |||
166 | dev_rev_id_val = al_pcie_controller_readl(pcie, AXI_BASE_OFFSET + | ||
167 | DEVICE_ID_OFFSET + | ||
168 | DEVICE_REV_ID); | ||
169 | dev_id_val = FIELD_GET(DEVICE_REV_ID_DEV_ID_MASK, dev_rev_id_val); | ||
170 | |||
171 | switch (dev_id_val) { | ||
172 | case DEVICE_REV_ID_DEV_ID_X4: | ||
173 | *rev_id = AL_PCIE_REV_ID_2; | ||
174 | break; | ||
175 | case DEVICE_REV_ID_DEV_ID_X8: | ||
176 | *rev_id = AL_PCIE_REV_ID_3; | ||
177 | break; | ||
178 | case DEVICE_REV_ID_DEV_ID_X16: | ||
179 | *rev_id = AL_PCIE_REV_ID_4; | ||
180 | break; | ||
181 | default: | ||
182 | dev_err(pcie->dev, "Unsupported dev_id_val (0x%x)\n", | ||
183 | dev_id_val); | ||
184 | return -EINVAL; | ||
185 | } | ||
186 | |||
187 | dev_dbg(pcie->dev, "dev_id_val: 0x%x\n", dev_id_val); | ||
188 | |||
189 | return 0; | ||
190 | } | ||
191 | |||
192 | static int al_pcie_reg_offsets_set(struct al_pcie *pcie) | ||
193 | { | ||
194 | switch (pcie->controller_rev_id) { | ||
195 | case AL_PCIE_REV_ID_2: | ||
196 | pcie->reg_offsets.ob_ctrl = OB_CTRL_REV1_2_OFFSET; | ||
197 | break; | ||
198 | case AL_PCIE_REV_ID_3: | ||
199 | case AL_PCIE_REV_ID_4: | ||
200 | pcie->reg_offsets.ob_ctrl = OB_CTRL_REV3_5_OFFSET; | ||
201 | break; | ||
202 | default: | ||
203 | dev_err(pcie->dev, "Unsupported controller rev_id: 0x%x\n", | ||
204 | pcie->controller_rev_id); | ||
205 | return -EINVAL; | ||
206 | } | ||
207 | |||
208 | return 0; | ||
209 | } | ||
210 | |||
211 | static inline void al_pcie_target_bus_set(struct al_pcie *pcie, | ||
212 | u8 target_bus, | ||
213 | u8 mask_target_bus) | ||
214 | { | ||
215 | u32 reg; | ||
216 | |||
217 | reg = FIELD_PREP(CFG_TARGET_BUS_MASK_MASK, mask_target_bus) | | ||
218 | FIELD_PREP(CFG_TARGET_BUS_BUSNUM_MASK, target_bus); | ||
219 | |||
220 | al_pcie_controller_writel(pcie, AXI_BASE_OFFSET + | ||
221 | pcie->reg_offsets.ob_ctrl + CFG_TARGET_BUS, | ||
222 | reg); | ||
223 | } | ||
224 | |||
225 | static void __iomem *al_pcie_conf_addr_map(struct al_pcie *pcie, | ||
226 | unsigned int busnr, | ||
227 | unsigned int devfn) | ||
228 | { | ||
229 | struct al_pcie_target_bus_cfg *target_bus_cfg = &pcie->target_bus_cfg; | ||
230 | unsigned int busnr_ecam = busnr & target_bus_cfg->ecam_mask; | ||
231 | unsigned int busnr_reg = busnr & target_bus_cfg->reg_mask; | ||
232 | struct pcie_port *pp = &pcie->pci->pp; | ||
233 | void __iomem *pci_base_addr; | ||
234 | |||
235 | pci_base_addr = (void __iomem *)((uintptr_t)pp->va_cfg0_base + | ||
236 | (busnr_ecam << 20) + | ||
237 | PCIE_ECAM_DEVFN(devfn)); | ||
238 | |||
239 | if (busnr_reg != target_bus_cfg->reg_val) { | ||
240 | dev_dbg(pcie->pci->dev, "Changing target bus busnum val from 0x%x to 0x%x\n", | ||
241 | target_bus_cfg->reg_val, busnr_reg); | ||
242 | target_bus_cfg->reg_val = busnr_reg; | ||
243 | al_pcie_target_bus_set(pcie, | ||
244 | target_bus_cfg->reg_val, | ||
245 | target_bus_cfg->reg_mask); | ||
246 | } | ||
247 | |||
248 | return pci_base_addr; | ||
249 | } | ||
250 | |||
251 | static int al_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
252 | unsigned int devfn, int where, int size, | ||
253 | u32 *val) | ||
254 | { | ||
255 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
256 | struct al_pcie *pcie = to_al_pcie(pci); | ||
257 | unsigned int busnr = bus->number; | ||
258 | void __iomem *pci_addr; | ||
259 | int rc; | ||
260 | |||
261 | pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn); | ||
262 | |||
263 | rc = dw_pcie_read(pci_addr + where, size, val); | ||
264 | |||
265 | dev_dbg(pci->dev, "%d-byte config read from %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n", | ||
266 | size, pci_domain_nr(bus), bus->number, | ||
267 | PCI_SLOT(devfn), PCI_FUNC(devfn), where, | ||
268 | (pci_addr + where), *val); | ||
269 | |||
270 | return rc; | ||
271 | } | ||
272 | |||
273 | static int al_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
274 | unsigned int devfn, int where, int size, | ||
275 | u32 val) | ||
276 | { | ||
277 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
278 | struct al_pcie *pcie = to_al_pcie(pci); | ||
279 | unsigned int busnr = bus->number; | ||
280 | void __iomem *pci_addr; | ||
281 | int rc; | ||
282 | |||
283 | pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn); | ||
284 | |||
285 | rc = dw_pcie_write(pci_addr + where, size, val); | ||
286 | |||
287 | dev_dbg(pci->dev, "%d-byte config write to %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n", | ||
288 | size, pci_domain_nr(bus), bus->number, | ||
289 | PCI_SLOT(devfn), PCI_FUNC(devfn), where, | ||
290 | (pci_addr + where), val); | ||
291 | |||
292 | return rc; | ||
293 | } | ||
294 | |||
295 | static void al_pcie_config_prepare(struct al_pcie *pcie) | ||
296 | { | ||
297 | struct al_pcie_target_bus_cfg *target_bus_cfg; | ||
298 | struct pcie_port *pp = &pcie->pci->pp; | ||
299 | unsigned int ecam_bus_mask; | ||
300 | u32 cfg_control_offset; | ||
301 | u8 subordinate_bus; | ||
302 | u8 secondary_bus; | ||
303 | u32 cfg_control; | ||
304 | u32 reg; | ||
305 | |||
306 | target_bus_cfg = &pcie->target_bus_cfg; | ||
307 | |||
308 | ecam_bus_mask = (pcie->ecam_size >> 20) - 1; | ||
309 | if (ecam_bus_mask > 255) { | ||
310 | dev_warn(pcie->dev, "ECAM window size is larger than 256MB. Cutting off at 256\n"); | ||
311 | ecam_bus_mask = 255; | ||
312 | } | ||
313 | |||
314 | /* This portion is taken from the transaction address */ | ||
315 | target_bus_cfg->ecam_mask = ecam_bus_mask; | ||
316 | /* This portion is taken from the cfg_target_bus reg */ | ||
317 | target_bus_cfg->reg_mask = ~target_bus_cfg->ecam_mask; | ||
318 | target_bus_cfg->reg_val = pp->busn->start & target_bus_cfg->reg_mask; | ||
319 | |||
320 | al_pcie_target_bus_set(pcie, target_bus_cfg->reg_val, | ||
321 | target_bus_cfg->reg_mask); | ||
322 | |||
323 | secondary_bus = pp->busn->start + 1; | ||
324 | subordinate_bus = pp->busn->end; | ||
325 | |||
326 | /* Set the valid values of secondary and subordinate buses */ | ||
327 | cfg_control_offset = AXI_BASE_OFFSET + pcie->reg_offsets.ob_ctrl + | ||
328 | CFG_CONTROL; | ||
329 | |||
330 | cfg_control = al_pcie_controller_readl(pcie, cfg_control_offset); | ||
331 | |||
332 | reg = cfg_control & | ||
333 | ~(CFG_CONTROL_SEC_BUS_MASK | CFG_CONTROL_SUBBUS_MASK); | ||
334 | |||
335 | reg |= FIELD_PREP(CFG_CONTROL_SUBBUS_MASK, subordinate_bus) | | ||
336 | FIELD_PREP(CFG_CONTROL_SEC_BUS_MASK, secondary_bus); | ||
337 | |||
338 | al_pcie_controller_writel(pcie, cfg_control_offset, reg); | ||
339 | } | ||
340 | |||
341 | static int al_pcie_host_init(struct pcie_port *pp) | ||
342 | { | ||
343 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
344 | struct al_pcie *pcie = to_al_pcie(pci); | ||
345 | int rc; | ||
346 | |||
347 | rc = al_pcie_rev_id_get(pcie, &pcie->controller_rev_id); | ||
348 | if (rc) | ||
349 | return rc; | ||
350 | |||
351 | rc = al_pcie_reg_offsets_set(pcie); | ||
352 | if (rc) | ||
353 | return rc; | ||
354 | |||
355 | al_pcie_config_prepare(pcie); | ||
356 | |||
357 | return 0; | ||
358 | } | ||
359 | |||
360 | static const struct dw_pcie_host_ops al_pcie_host_ops = { | ||
361 | .rd_other_conf = al_pcie_rd_other_conf, | ||
362 | .wr_other_conf = al_pcie_wr_other_conf, | ||
363 | .host_init = al_pcie_host_init, | ||
364 | }; | ||
365 | |||
366 | static int al_add_pcie_port(struct pcie_port *pp, | ||
367 | struct platform_device *pdev) | ||
368 | { | ||
369 | struct device *dev = &pdev->dev; | ||
370 | int ret; | ||
371 | |||
372 | pp->ops = &al_pcie_host_ops; | ||
373 | |||
374 | ret = dw_pcie_host_init(pp); | ||
375 | if (ret) { | ||
376 | dev_err(dev, "failed to initialize host\n"); | ||
377 | return ret; | ||
378 | } | ||
379 | |||
380 | return 0; | ||
381 | } | ||
382 | |||
383 | static const struct dw_pcie_ops dw_pcie_ops = { | ||
384 | }; | ||
385 | |||
386 | static int al_pcie_probe(struct platform_device *pdev) | ||
387 | { | ||
388 | struct device *dev = &pdev->dev; | ||
389 | struct resource *controller_res; | ||
390 | struct resource *ecam_res; | ||
391 | struct resource *dbi_res; | ||
392 | struct al_pcie *al_pcie; | ||
393 | struct dw_pcie *pci; | ||
394 | |||
395 | al_pcie = devm_kzalloc(dev, sizeof(*al_pcie), GFP_KERNEL); | ||
396 | if (!al_pcie) | ||
397 | return -ENOMEM; | ||
398 | |||
399 | pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL); | ||
400 | if (!pci) | ||
401 | return -ENOMEM; | ||
402 | |||
403 | pci->dev = dev; | ||
404 | pci->ops = &dw_pcie_ops; | ||
405 | |||
406 | al_pcie->pci = pci; | ||
407 | al_pcie->dev = dev; | ||
408 | |||
409 | dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); | ||
410 | pci->dbi_base = devm_pci_remap_cfg_resource(dev, dbi_res); | ||
411 | if (IS_ERR(pci->dbi_base)) { | ||
412 | dev_err(dev, "couldn't remap dbi base %pR\n", dbi_res); | ||
413 | return PTR_ERR(pci->dbi_base); | ||
414 | } | ||
415 | |||
416 | ecam_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); | ||
417 | if (!ecam_res) { | ||
418 | dev_err(dev, "couldn't find 'config' reg in DT\n"); | ||
419 | return -ENOENT; | ||
420 | } | ||
421 | al_pcie->ecam_size = resource_size(ecam_res); | ||
422 | |||
423 | controller_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, | ||
424 | "controller"); | ||
425 | al_pcie->controller_base = devm_ioremap_resource(dev, controller_res); | ||
426 | if (IS_ERR(al_pcie->controller_base)) { | ||
427 | dev_err(dev, "couldn't remap controller base %pR\n", | ||
428 | controller_res); | ||
429 | return PTR_ERR(al_pcie->controller_base); | ||
430 | } | ||
431 | |||
432 | dev_dbg(dev, "From DT: dbi_base: %pR, controller_base: %pR\n", | ||
433 | dbi_res, controller_res); | ||
434 | |||
435 | platform_set_drvdata(pdev, al_pcie); | ||
436 | |||
437 | return al_add_pcie_port(&pci->pp, pdev); | ||
438 | } | ||
439 | |||
440 | static const struct of_device_id al_pcie_of_match[] = { | ||
441 | { .compatible = "amazon,al-alpine-v2-pcie", | ||
442 | }, | ||
443 | { .compatible = "amazon,al-alpine-v3-pcie", | ||
444 | }, | ||
445 | {}, | ||
446 | }; | ||
447 | |||
448 | static struct platform_driver al_pcie_driver = { | ||
449 | .driver = { | ||
450 | .name = "al-pcie", | ||
451 | .of_match_table = al_pcie_of_match, | ||
452 | .suppress_bind_attrs = true, | ||
453 | }, | ||
454 | .probe = al_pcie_probe, | ||
455 | }; | ||
456 | builtin_platform_driver(al_pcie_driver); | ||
457 | |||
458 | #endif /* CONFIG_PCIE_AL*/ | ||
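In the pcie-al.c driver added above, the bus number of a config access is split in two: the bits covered by the ECAM ("config") window come straight from the transaction address, while the remaining upper bits are programmed into the cfg_target_bus register and cached so that register is only rewritten when that portion actually changes. A worked example with hypothetical numbers, assuming a 32 MB config window (1 MB of ECAM space per bus), mirroring al_pcie_config_prepare() and al_pcie_conf_addr_map():

	resource_size_t ecam_size = 32 << 20;	/* hypothetical "config" resource size */
	u8 ecam_mask  = (ecam_size >> 20) - 1;	/* 0x1f: bus[4:0] taken from the ECAM address */
	u8 reg_mask   = ~ecam_mask;		/* 0xe0: bus[7:5] taken from cfg_target_bus */
	u8 busnr      = 0x25;			/* config access targeting bus 0x25 */
	u8 busnr_ecam = busnr & ecam_mask;	/* 0x05: selects the 1 MB slice at busnr_ecam << 20 */
	u8 busnr_reg  = busnr & reg_mask;	/* 0x20: written via al_pcie_target_bus_set() */

Since the bus number is only eight bits wide, windows larger than 256 MB are clamped, as the dev_warn() in al_pcie_config_prepare() notes.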
diff --git a/drivers/pci/controller/dwc/pcie-armada8k.c b/drivers/pci/controller/dwc/pcie-armada8k.c
index 3d55dc78d999..49596547e8c2 100644
--- a/drivers/pci/controller/dwc/pcie-armada8k.c
+++ b/drivers/pci/controller/dwc/pcie-armada8k.c
@@ -118,11 +118,10 @@ static int armada8k_pcie_setup_phys(struct armada8k_pcie *pcie) | |||
118 | 118 | ||
119 | for (i = 0; i < ARMADA8K_PCIE_MAX_LANES; i++) { | 119 | for (i = 0; i < ARMADA8K_PCIE_MAX_LANES; i++) { |
120 | pcie->phy[i] = devm_of_phy_get_by_index(dev, node, i); | 120 | pcie->phy[i] = devm_of_phy_get_by_index(dev, node, i); |
121 | if (IS_ERR(pcie->phy[i]) && | ||
122 | (PTR_ERR(pcie->phy[i]) == -EPROBE_DEFER)) | ||
123 | return PTR_ERR(pcie->phy[i]); | ||
124 | |||
125 | if (IS_ERR(pcie->phy[i])) { | 121 | if (IS_ERR(pcie->phy[i])) { |
122 | if (PTR_ERR(pcie->phy[i]) != -ENODEV) | ||
123 | return PTR_ERR(pcie->phy[i]); | ||
124 | |||
126 | pcie->phy[i] = NULL; | 125 | pcie->phy[i] = NULL; |
127 | continue; | 126 | continue; |
128 | } | 127 | } |
diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index 2bf5a35c0570..3dd2e2697294 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -40,39 +40,6 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) | |||
40 | __dw_pcie_ep_reset_bar(pci, bar, 0); | 40 | __dw_pcie_ep_reset_bar(pci, bar, 0); |
41 | } | 41 | } |
42 | 42 | ||
43 | static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr, | ||
44 | u8 cap) | ||
45 | { | ||
46 | u8 cap_id, next_cap_ptr; | ||
47 | u16 reg; | ||
48 | |||
49 | if (!cap_ptr) | ||
50 | return 0; | ||
51 | |||
52 | reg = dw_pcie_readw_dbi(pci, cap_ptr); | ||
53 | cap_id = (reg & 0x00ff); | ||
54 | |||
55 | if (cap_id > PCI_CAP_ID_MAX) | ||
56 | return 0; | ||
57 | |||
58 | if (cap_id == cap) | ||
59 | return cap_ptr; | ||
60 | |||
61 | next_cap_ptr = (reg & 0xff00) >> 8; | ||
62 | return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap); | ||
63 | } | ||
64 | |||
65 | static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap) | ||
66 | { | ||
67 | u8 next_cap_ptr; | ||
68 | u16 reg; | ||
69 | |||
70 | reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST); | ||
71 | next_cap_ptr = (reg & 0x00ff); | ||
72 | |||
73 | return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap); | ||
74 | } | ||
75 | |||
76 | static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, | 43 | static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, |
77 | struct pci_epf_header *hdr) | 44 | struct pci_epf_header *hdr) |
78 | { | 45 | { |
@@ -531,6 +498,7 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) | |||
531 | int ret; | 498 | int ret; |
532 | u32 reg; | 499 | u32 reg; |
533 | void *addr; | 500 | void *addr; |
501 | u8 hdr_type; | ||
534 | unsigned int nbars; | 502 | unsigned int nbars; |
535 | unsigned int offset; | 503 | unsigned int offset; |
536 | struct pci_epc *epc; | 504 | struct pci_epc *epc; |
@@ -595,6 +563,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) | |||
595 | if (ep->ops->ep_init) | 563 | if (ep->ops->ep_init) |
596 | ep->ops->ep_init(ep); | 564 | ep->ops->ep_init(ep); |
597 | 565 | ||
566 | hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE); | ||
567 | if (hdr_type != PCI_HEADER_TYPE_NORMAL) { | ||
568 | dev_err(pci->dev, "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n", | ||
569 | hdr_type); | ||
570 | return -EIO; | ||
571 | } | ||
572 | |||
598 | ret = of_property_read_u8(np, "max-functions", &epc->max_functions); | 573 | ret = of_property_read_u8(np, "max-functions", &epc->max_functions); |
599 | if (ret < 0) | 574 | if (ret < 0) |
600 | epc->max_functions = 1; | 575 | epc->max_functions = 1; |
@@ -612,9 +587,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) | |||
612 | dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n"); | 587 | dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n"); |
613 | return -ENOMEM; | 588 | return -ENOMEM; |
614 | } | 589 | } |
615 | ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI); | 590 | ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI); |
616 | 591 | ||
617 | ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX); | 592 | ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX); |
618 | 593 | ||
619 | offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); | 594 | offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); |
620 | if (offset) { | 595 | if (offset) { |
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index f93252d0da5b..0f36a926059a 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -323,6 +323,7 @@ int dw_pcie_host_init(struct pcie_port *pp) | |||
323 | struct pci_bus *child; | 323 | struct pci_bus *child; |
324 | struct pci_host_bridge *bridge; | 324 | struct pci_host_bridge *bridge; |
325 | struct resource *cfg_res; | 325 | struct resource *cfg_res; |
326 | u32 hdr_type; | ||
326 | int ret; | 327 | int ret; |
327 | 328 | ||
328 | raw_spin_lock_init(&pci->pp.lock); | 329 | raw_spin_lock_init(&pci->pp.lock); |
@@ -464,6 +465,21 @@ int dw_pcie_host_init(struct pcie_port *pp) | |||
464 | goto err_free_msi; | 465 | goto err_free_msi; |
465 | } | 466 | } |
466 | 467 | ||
468 | ret = dw_pcie_rd_own_conf(pp, PCI_HEADER_TYPE, 1, &hdr_type); | ||
469 | if (ret != PCIBIOS_SUCCESSFUL) { | ||
470 | dev_err(pci->dev, "Failed reading PCI_HEADER_TYPE cfg space reg (ret: 0x%x)\n", | ||
471 | ret); | ||
472 | ret = pcibios_err_to_errno(ret); | ||
473 | goto err_free_msi; | ||
474 | } | ||
475 | if (hdr_type != PCI_HEADER_TYPE_BRIDGE) { | ||
476 | dev_err(pci->dev, | ||
477 | "PCIe controller is not set to bridge type (hdr_type: 0x%x)!\n", | ||
478 | hdr_type); | ||
479 | ret = -EIO; | ||
480 | goto err_free_msi; | ||
481 | } | ||
482 | |||
467 | pp->root_bus_nr = pp->busn->start; | 483 | pp->root_bus_nr = pp->busn->start; |
468 | 484 | ||
469 | bridge->dev.parent = dev; | 485 | bridge->dev.parent = dev; |
@@ -628,6 +644,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp) | |||
628 | u32 val, ctrl, num_ctrls; | 644 | u32 val, ctrl, num_ctrls; |
629 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | 645 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); |
630 | 646 | ||
647 | /* | ||
648 | * Enable DBI read-only registers for writing/updating configuration. | ||
649 | * Write permission gets disabled towards the end of this function. | ||
650 | */ | ||
651 | dw_pcie_dbi_ro_wr_en(pci); | ||
652 | |||
631 | dw_pcie_setup(pci); | 653 | dw_pcie_setup(pci); |
632 | 654 | ||
633 | if (!pp->ops->msi_host_init) { | 655 | if (!pp->ops->msi_host_init) { |
@@ -650,12 +672,10 @@ void dw_pcie_setup_rc(struct pcie_port *pp) | |||
650 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000); | 672 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000); |
651 | 673 | ||
652 | /* Setup interrupt pins */ | 674 | /* Setup interrupt pins */ |
653 | dw_pcie_dbi_ro_wr_en(pci); | ||
654 | val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE); | 675 | val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE); |
655 | val &= 0xffff00ff; | 676 | val &= 0xffff00ff; |
656 | val |= 0x00000100; | 677 | val |= 0x00000100; |
657 | dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val); | 678 | dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val); |
658 | dw_pcie_dbi_ro_wr_dis(pci); | ||
659 | 679 | ||
660 | /* Setup bus numbers */ | 680 | /* Setup bus numbers */ |
661 | val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS); | 681 | val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS); |
@@ -687,15 +707,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp) | |||
687 | 707 | ||
688 | dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); | 708 | dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); |
689 | 709 | ||
690 | /* Enable write permission for the DBI read-only register */ | ||
691 | dw_pcie_dbi_ro_wr_en(pci); | ||
692 | /* Program correct class for RC */ | 710 | /* Program correct class for RC */ |
693 | dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI); | 711 | dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI); |
694 | /* Better disable write permission right after the update */ | ||
695 | dw_pcie_dbi_ro_wr_dis(pci); | ||
696 | 712 | ||
697 | dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val); | 713 | dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val); |
698 | val |= PORT_LOGIC_SPEED_CHANGE; | 714 | val |= PORT_LOGIC_SPEED_CHANGE; |
699 | dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val); | 715 | dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val); |
716 | |||
717 | dw_pcie_dbi_ro_wr_dis(pci); | ||
700 | } | 718 | } |
701 | EXPORT_SYMBOL_GPL(dw_pcie_setup_rc); | 719 | EXPORT_SYMBOL_GPL(dw_pcie_setup_rc); |
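dw_pcie_setup_rc() above now takes DBI read-only write permission once for the whole programming sequence and drops it at the end, instead of toggling it around each individual register write. A minimal sketch of that bracketing pattern (the function is illustrative, not part of the patch):

static void example_program_rc_class(struct dw_pcie *pci)
{
	u32 val;

	dw_pcie_dbi_ro_wr_en(pci);		/* unlock the DBI read-only registers */

	val = dw_pcie_readl_dbi(pci, PCI_CLASS_REVISION);
	val &= ~(0xffffU << 16);
	val |= PCI_CLASS_BRIDGE_PCI << 16;	/* program the RC class code */
	dw_pcie_writel_dbi(pci, PCI_CLASS_REVISION, val);

	dw_pcie_dbi_ro_wr_dis(pci);		/* drop write permission again */
}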
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 7d25102c304c..820488dfeaed 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -14,6 +14,86 @@ | |||
14 | 14 | ||
15 | #include "pcie-designware.h" | 15 | #include "pcie-designware.h" |
16 | 16 | ||
17 | /* | ||
18 | * These interfaces resemble the pci_find_*capability() interfaces, but these | ||
19 | * are for configuring host controllers, which are bridges *to* PCI devices but | ||
20 | * are not PCI devices themselves. | ||
21 | */ | ||
22 | static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr, | ||
23 | u8 cap) | ||
24 | { | ||
25 | u8 cap_id, next_cap_ptr; | ||
26 | u16 reg; | ||
27 | |||
28 | if (!cap_ptr) | ||
29 | return 0; | ||
30 | |||
31 | reg = dw_pcie_readw_dbi(pci, cap_ptr); | ||
32 | cap_id = (reg & 0x00ff); | ||
33 | |||
34 | if (cap_id > PCI_CAP_ID_MAX) | ||
35 | return 0; | ||
36 | |||
37 | if (cap_id == cap) | ||
38 | return cap_ptr; | ||
39 | |||
40 | next_cap_ptr = (reg & 0xff00) >> 8; | ||
41 | return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap); | ||
42 | } | ||
43 | |||
44 | u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap) | ||
45 | { | ||
46 | u8 next_cap_ptr; | ||
47 | u16 reg; | ||
48 | |||
49 | reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST); | ||
50 | next_cap_ptr = (reg & 0x00ff); | ||
51 | |||
52 | return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap); | ||
53 | } | ||
54 | EXPORT_SYMBOL_GPL(dw_pcie_find_capability); | ||
55 | |||
56 | static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start, | ||
57 | u8 cap) | ||
58 | { | ||
59 | u32 header; | ||
60 | int ttl; | ||
61 | int pos = PCI_CFG_SPACE_SIZE; | ||
62 | |||
63 | /* minimum 8 bytes per capability */ | ||
64 | ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; | ||
65 | |||
66 | if (start) | ||
67 | pos = start; | ||
68 | |||
69 | header = dw_pcie_readl_dbi(pci, pos); | ||
70 | /* | ||
71 | * If we have no capabilities, this is indicated by cap ID, | ||
72 | * cap version and next pointer all being 0. | ||
73 | */ | ||
74 | if (header == 0) | ||
75 | return 0; | ||
76 | |||
77 | while (ttl-- > 0) { | ||
78 | if (PCI_EXT_CAP_ID(header) == cap && pos != start) | ||
79 | return pos; | ||
80 | |||
81 | pos = PCI_EXT_CAP_NEXT(header); | ||
82 | if (pos < PCI_CFG_SPACE_SIZE) | ||
83 | break; | ||
84 | |||
85 | header = dw_pcie_readl_dbi(pci, pos); | ||
86 | } | ||
87 | |||
88 | return 0; | ||
89 | } | ||
90 | |||
91 | u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap) | ||
92 | { | ||
93 | return dw_pcie_find_next_ext_capability(pci, 0, cap); | ||
94 | } | ||
95 | EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability); | ||
96 | |||
17 | int dw_pcie_read(void __iomem *addr, int size, u32 *val) | 97 | int dw_pcie_read(void __iomem *addr, int size, u32 *val) |
18 | { | 98 | { |
19 | if (!IS_ALIGNED((uintptr_t)addr, size)) { | 99 | if (!IS_ALIGNED((uintptr_t)addr, size)) { |
@@ -376,10 +456,11 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci) | |||
376 | usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); | 456 | usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); |
377 | } | 457 | } |
378 | 458 | ||
379 | dev_err(pci->dev, "Phy link never came up\n"); | 459 | dev_info(pci->dev, "Phy link never came up\n"); |
380 | 460 | ||
381 | return -ETIMEDOUT; | 461 | return -ETIMEDOUT; |
382 | } | 462 | } |
463 | EXPORT_SYMBOL_GPL(dw_pcie_wait_for_link); | ||
383 | 464 | ||
384 | int dw_pcie_link_up(struct dw_pcie *pci) | 465 | int dw_pcie_link_up(struct dw_pcie *pci) |
385 | { | 466 | { |
@@ -423,8 +504,10 @@ void dw_pcie_setup(struct dw_pcie *pci) | |||
423 | 504 | ||
424 | 505 | ||
425 | ret = of_property_read_u32(np, "num-lanes", &lanes); | 506 | ret = of_property_read_u32(np, "num-lanes", &lanes); |
426 | if (ret) | 507 | if (ret) { |
427 | lanes = 0; | 508 | dev_dbg(pci->dev, "property num-lanes isn't found\n"); |
509 | return; | ||
510 | } | ||
428 | 511 | ||
429 | /* Set the number of lanes */ | 512 | /* Set the number of lanes */ |
430 | val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL); | 513 | val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL); |
@@ -466,4 +549,11 @@ void dw_pcie_setup(struct dw_pcie *pci) | |||
466 | break; | 549 | break; |
467 | } | 550 | } |
468 | dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); | 551 | dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); |
552 | |||
553 | if (of_property_read_bool(np, "snps,enable-cdm-check")) { | ||
554 | val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS); | ||
555 | val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS | | ||
556 | PCIE_PL_CHK_REG_CHK_REG_START; | ||
557 | dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val); | ||
558 | } | ||
469 | } | 559 | } |
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index ffed084a0b4f..5a18e94e52c8 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -86,6 +86,15 @@ | |||
86 | #define PCIE_MISC_CONTROL_1_OFF 0x8BC | 86 | #define PCIE_MISC_CONTROL_1_OFF 0x8BC |
87 | #define PCIE_DBI_RO_WR_EN BIT(0) | 87 | #define PCIE_DBI_RO_WR_EN BIT(0) |
88 | 88 | ||
89 | #define PCIE_PL_CHK_REG_CONTROL_STATUS 0xB20 | ||
90 | #define PCIE_PL_CHK_REG_CHK_REG_START BIT(0) | ||
91 | #define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS BIT(1) | ||
92 | #define PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR BIT(16) | ||
93 | #define PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR BIT(17) | ||
94 | #define PCIE_PL_CHK_REG_CHK_REG_COMPLETE BIT(18) | ||
95 | |||
96 | #define PCIE_PL_CHK_REG_ERR_ADDR 0xB28 | ||
97 | |||
89 | /* | 98 | /* |
90 | * iATU Unroll-specific register definitions | 99 | * iATU Unroll-specific register definitions |
91 | * From 4.80 core version the address translation will be made by unroll | 100 | * From 4.80 core version the address translation will be made by unroll |
@@ -251,6 +260,9 @@ struct dw_pcie { | |||
251 | #define to_dw_pcie_from_ep(endpoint) \ | 260 | #define to_dw_pcie_from_ep(endpoint) \ |
252 | container_of((endpoint), struct dw_pcie, ep) | 261 | container_of((endpoint), struct dw_pcie, ep) |
253 | 262 | ||
263 | u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap); | ||
264 | u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap); | ||
265 | |||
254 | int dw_pcie_read(void __iomem *addr, int size, u32 *val); | 266 | int dw_pcie_read(void __iomem *addr, int size, u32 *val); |
255 | int dw_pcie_write(void __iomem *addr, int size, u32 val); | 267 | int dw_pcie_write(void __iomem *addr, int size, u32 val); |
256 | 268 | ||
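The helpers added in pcie-designware.c and declared in pcie-designware.h above give DWC-based drivers a pci_find_capability()-style walk over the controller's own configuration space through DBI reads; dw_pcie_ep_init() now uses them to look up the MSI and MSI-X capability offsets instead of carrying a private copy of the walker. A minimal usage sketch (assumes an already-initialized struct dw_pcie *pci; the function name is illustrative):

static u16 example_read_link_status(struct dw_pcie *pci)
{
	u8 pcie_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);

	if (!pcie_cap)
		return 0;	/* no PCI Express capability found */

	return dw_pcie_readw_dbi(pci, pcie_cap + PCI_EXP_LNKSTA);
}

The new pcie-tegra194.c driver below follows the same pattern, reading PCI_EXP_LNKSTA relative to a cached pcie_cap_base in apply_bad_link_workaround().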
diff --git a/drivers/pci/controller/dwc/pcie-histb.c b/drivers/pci/controller/dwc/pcie-histb.c
index 954bc2b74bbc..811b5c6d62ea 100644
--- a/drivers/pci/controller/dwc/pcie-histb.c
+++ b/drivers/pci/controller/dwc/pcie-histb.c
@@ -340,8 +340,8 @@ static int histb_pcie_probe(struct platform_device *pdev) | |||
340 | 340 | ||
341 | hipcie->vpcie = devm_regulator_get_optional(dev, "vpcie"); | 341 | hipcie->vpcie = devm_regulator_get_optional(dev, "vpcie"); |
342 | if (IS_ERR(hipcie->vpcie)) { | 342 | if (IS_ERR(hipcie->vpcie)) { |
343 | if (PTR_ERR(hipcie->vpcie) == -EPROBE_DEFER) | 343 | if (PTR_ERR(hipcie->vpcie) != -ENODEV) |
344 | return -EPROBE_DEFER; | 344 | return PTR_ERR(hipcie->vpcie); |
345 | hipcie->vpcie = NULL; | 345 | hipcie->vpcie = NULL; |
346 | } | 346 | } |
347 | 347 | ||
diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c
index 8df1914226be..c19617a912bd 100644
--- a/drivers/pci/controller/dwc/pcie-kirin.c
+++ b/drivers/pci/controller/dwc/pcie-kirin.c
@@ -436,7 +436,7 @@ static int kirin_pcie_host_init(struct pcie_port *pp) | |||
436 | return 0; | 436 | return 0; |
437 | } | 437 | } |
438 | 438 | ||
439 | static struct dw_pcie_ops kirin_dw_pcie_ops = { | 439 | static const struct dw_pcie_ops kirin_dw_pcie_ops = { |
440 | .read_dbi = kirin_pcie_read_dbi, | 440 | .read_dbi = kirin_pcie_read_dbi, |
441 | .write_dbi = kirin_pcie_write_dbi, | 441 | .write_dbi = kirin_pcie_write_dbi, |
442 | .link_up = kirin_pcie_link_up, | 442 | .link_up = kirin_pcie_link_up, |
diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
new file mode 100644
index 000000000000..f89f5acee72d
--- /dev/null
+++ b/drivers/pci/controller/dwc/pcie-tegra194.c
@@ -0,0 +1,1732 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0+ | ||
2 | /* | ||
3 | * PCIe host controller driver for Tegra194 SoC | ||
4 | * | ||
5 | * Copyright (C) 2019 NVIDIA Corporation. | ||
6 | * | ||
7 | * Author: Vidya Sagar <vidyas@nvidia.com> | ||
8 | */ | ||
9 | |||
10 | #include <linux/clk.h> | ||
11 | #include <linux/debugfs.h> | ||
12 | #include <linux/delay.h> | ||
13 | #include <linux/gpio.h> | ||
14 | #include <linux/interrupt.h> | ||
15 | #include <linux/iopoll.h> | ||
16 | #include <linux/kernel.h> | ||
17 | #include <linux/module.h> | ||
18 | #include <linux/of.h> | ||
19 | #include <linux/of_device.h> | ||
20 | #include <linux/of_gpio.h> | ||
21 | #include <linux/of_irq.h> | ||
22 | #include <linux/of_pci.h> | ||
23 | #include <linux/pci.h> | ||
24 | #include <linux/phy/phy.h> | ||
25 | #include <linux/pinctrl/consumer.h> | ||
26 | #include <linux/platform_device.h> | ||
27 | #include <linux/pm_runtime.h> | ||
28 | #include <linux/random.h> | ||
29 | #include <linux/reset.h> | ||
30 | #include <linux/resource.h> | ||
31 | #include <linux/types.h> | ||
32 | #include "pcie-designware.h" | ||
33 | #include <soc/tegra/bpmp.h> | ||
34 | #include <soc/tegra/bpmp-abi.h> | ||
35 | #include "../../pci.h" | ||
36 | |||
37 | #define APPL_PINMUX 0x0 | ||
38 | #define APPL_PINMUX_PEX_RST BIT(0) | ||
39 | #define APPL_PINMUX_CLKREQ_OVERRIDE_EN BIT(2) | ||
40 | #define APPL_PINMUX_CLKREQ_OVERRIDE BIT(3) | ||
41 | #define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN BIT(4) | ||
42 | #define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE BIT(5) | ||
43 | #define APPL_PINMUX_CLKREQ_OUT_OVRD_EN BIT(9) | ||
44 | #define APPL_PINMUX_CLKREQ_OUT_OVRD BIT(10) | ||
45 | |||
46 | #define APPL_CTRL 0x4 | ||
47 | #define APPL_CTRL_SYS_PRE_DET_STATE BIT(6) | ||
48 | #define APPL_CTRL_LTSSM_EN BIT(7) | ||
49 | #define APPL_CTRL_HW_HOT_RST_EN BIT(20) | ||
50 | #define APPL_CTRL_HW_HOT_RST_MODE_MASK GENMASK(1, 0) | ||
51 | #define APPL_CTRL_HW_HOT_RST_MODE_SHIFT 22 | ||
52 | #define APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST 0x1 | ||
53 | |||
54 | #define APPL_INTR_EN_L0_0 0x8 | ||
55 | #define APPL_INTR_EN_L0_0_LINK_STATE_INT_EN BIT(0) | ||
56 | #define APPL_INTR_EN_L0_0_MSI_RCV_INT_EN BIT(4) | ||
57 | #define APPL_INTR_EN_L0_0_INT_INT_EN BIT(8) | ||
58 | #define APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN BIT(19) | ||
59 | #define APPL_INTR_EN_L0_0_SYS_INTR_EN BIT(30) | ||
60 | #define APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN BIT(31) | ||
61 | |||
62 | #define APPL_INTR_STATUS_L0 0xC | ||
63 | #define APPL_INTR_STATUS_L0_LINK_STATE_INT BIT(0) | ||
64 | #define APPL_INTR_STATUS_L0_INT_INT BIT(8) | ||
65 | #define APPL_INTR_STATUS_L0_CDM_REG_CHK_INT BIT(18) | ||
66 | |||
67 | #define APPL_INTR_EN_L1_0_0 0x1C | ||
68 | #define APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN BIT(1) | ||
69 | |||
70 | #define APPL_INTR_STATUS_L1_0_0 0x20 | ||
71 | #define APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED BIT(1) | ||
72 | |||
73 | #define APPL_INTR_STATUS_L1_1 0x2C | ||
74 | #define APPL_INTR_STATUS_L1_2 0x30 | ||
75 | #define APPL_INTR_STATUS_L1_3 0x34 | ||
76 | #define APPL_INTR_STATUS_L1_6 0x3C | ||
77 | #define APPL_INTR_STATUS_L1_7 0x40 | ||
78 | |||
79 | #define APPL_INTR_EN_L1_8_0 0x44 | ||
80 | #define APPL_INTR_EN_L1_8_BW_MGT_INT_EN BIT(2) | ||
81 | #define APPL_INTR_EN_L1_8_AUTO_BW_INT_EN BIT(3) | ||
82 | #define APPL_INTR_EN_L1_8_INTX_EN BIT(11) | ||
83 | #define APPL_INTR_EN_L1_8_AER_INT_EN BIT(15) | ||
84 | |||
85 | #define APPL_INTR_STATUS_L1_8_0 0x4C | ||
86 | #define APPL_INTR_STATUS_L1_8_0_EDMA_INT_MASK GENMASK(11, 6) | ||
87 | #define APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS BIT(2) | ||
88 | #define APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS BIT(3) | ||
89 | |||
90 | #define APPL_INTR_STATUS_L1_9 0x54 | ||
91 | #define APPL_INTR_STATUS_L1_10 0x58 | ||
92 | #define APPL_INTR_STATUS_L1_11 0x64 | ||
93 | #define APPL_INTR_STATUS_L1_13 0x74 | ||
94 | #define APPL_INTR_STATUS_L1_14 0x78 | ||
95 | #define APPL_INTR_STATUS_L1_15 0x7C | ||
96 | #define APPL_INTR_STATUS_L1_17 0x88 | ||
97 | |||
98 | #define APPL_INTR_EN_L1_18 0x90 | ||
99 | #define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMPLT BIT(2) | ||
100 | #define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR BIT(1) | ||
101 | #define APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR BIT(0) | ||
102 | |||
103 | #define APPL_INTR_STATUS_L1_18 0x94 | ||
104 | #define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT BIT(2) | ||
105 | #define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR BIT(1) | ||
106 | #define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR BIT(0) | ||
107 | |||
108 | #define APPL_MSI_CTRL_2 0xB0 | ||
109 | |||
110 | #define APPL_LTR_MSG_1 0xC4 | ||
111 | #define LTR_MSG_REQ BIT(15) | ||
112 | #define LTR_MST_NO_SNOOP_SHIFT 16 | ||
113 | |||
114 | #define APPL_LTR_MSG_2 0xC8 | ||
115 | #define APPL_LTR_MSG_2_LTR_MSG_REQ_STATE BIT(3) | ||
116 | |||
117 | #define APPL_LINK_STATUS 0xCC | ||
118 | #define APPL_LINK_STATUS_RDLH_LINK_UP BIT(0) | ||
119 | |||
120 | #define APPL_DEBUG 0xD0 | ||
121 | #define APPL_DEBUG_PM_LINKST_IN_L2_LAT BIT(21) | ||
122 | #define APPL_DEBUG_PM_LINKST_IN_L0 0x11 | ||
123 | #define APPL_DEBUG_LTSSM_STATE_MASK GENMASK(8, 3) | ||
124 | #define APPL_DEBUG_LTSSM_STATE_SHIFT 3 | ||
125 | #define LTSSM_STATE_PRE_DETECT 5 | ||
126 | |||
127 | #define APPL_RADM_STATUS 0xE4 | ||
128 | #define APPL_PM_XMT_TURNOFF_STATE BIT(0) | ||
129 | |||
130 | #define APPL_DM_TYPE 0x100 | ||
131 | #define APPL_DM_TYPE_MASK GENMASK(3, 0) | ||
132 | #define APPL_DM_TYPE_RP 0x4 | ||
133 | #define APPL_DM_TYPE_EP 0x0 | ||
134 | |||
135 | #define APPL_CFG_BASE_ADDR 0x104 | ||
136 | #define APPL_CFG_BASE_ADDR_MASK GENMASK(31, 12) | ||
137 | |||
138 | #define APPL_CFG_IATU_DMA_BASE_ADDR 0x108 | ||
139 | #define APPL_CFG_IATU_DMA_BASE_ADDR_MASK GENMASK(31, 18) | ||
140 | |||
141 | #define APPL_CFG_MISC 0x110 | ||
142 | #define APPL_CFG_MISC_SLV_EP_MODE BIT(14) | ||
143 | #define APPL_CFG_MISC_ARCACHE_MASK GENMASK(13, 10) | ||
144 | #define APPL_CFG_MISC_ARCACHE_SHIFT 10 | ||
145 | #define APPL_CFG_MISC_ARCACHE_VAL 3 | ||
146 | |||
147 | #define APPL_CFG_SLCG_OVERRIDE 0x114 | ||
148 | #define APPL_CFG_SLCG_OVERRIDE_SLCG_EN_MASTER BIT(0) | ||
149 | |||
150 | #define APPL_CAR_RESET_OVRD 0x12C | ||
151 | #define APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N BIT(0) | ||
152 | |||
153 | #define IO_BASE_IO_DECODE BIT(0) | ||
154 | #define IO_BASE_IO_DECODE_BIT8 BIT(8) | ||
155 | |||
156 | #define CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE BIT(0) | ||
157 | #define CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE BIT(16) | ||
158 | |||
159 | #define CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF 0x718 | ||
160 | #define CFG_TIMER_CTRL_ACK_NAK_SHIFT (19) | ||
161 | |||
162 | #define EVENT_COUNTER_ALL_CLEAR 0x3 | ||
163 | #define EVENT_COUNTER_ENABLE_ALL 0x7 | ||
164 | #define EVENT_COUNTER_ENABLE_SHIFT 2 | ||
165 | #define EVENT_COUNTER_EVENT_SEL_MASK GENMASK(7, 0) | ||
166 | #define EVENT_COUNTER_EVENT_SEL_SHIFT 16 | ||
167 | #define EVENT_COUNTER_EVENT_Tx_L0S 0x2 | ||
168 | #define EVENT_COUNTER_EVENT_Rx_L0S 0x3 | ||
169 | #define EVENT_COUNTER_EVENT_L1 0x5 | ||
170 | #define EVENT_COUNTER_EVENT_L1_1 0x7 | ||
171 | #define EVENT_COUNTER_EVENT_L1_2 0x8 | ||
172 | #define EVENT_COUNTER_GROUP_SEL_SHIFT 24 | ||
173 | #define EVENT_COUNTER_GROUP_5 0x5 | ||
174 | |||
175 | #define PORT_LOGIC_ACK_F_ASPM_CTRL 0x70C | ||
176 | #define ENTER_ASPM BIT(30) | ||
177 | #define L0S_ENTRANCE_LAT_SHIFT 24 | ||
178 | #define L0S_ENTRANCE_LAT_MASK GENMASK(26, 24) | ||
179 | #define L1_ENTRANCE_LAT_SHIFT 27 | ||
180 | #define L1_ENTRANCE_LAT_MASK GENMASK(29, 27) | ||
181 | #define N_FTS_SHIFT 8 | ||
182 | #define N_FTS_MASK GENMASK(7, 0) | ||
183 | #define N_FTS_VAL 52 | ||
184 | |||
185 | #define PORT_LOGIC_GEN2_CTRL 0x80C | ||
186 | #define PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE BIT(17) | ||
187 | #define FTS_MASK GENMASK(7, 0) | ||
188 | #define FTS_VAL 52 | ||
189 | |||
190 | #define PORT_LOGIC_MSI_CTRL_INT_0_EN 0x828 | ||
191 | |||
192 | #define GEN3_EQ_CONTROL_OFF 0x8a8 | ||
193 | #define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT 8 | ||
194 | #define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK GENMASK(23, 8) | ||
195 | #define GEN3_EQ_CONTROL_OFF_FB_MODE_MASK GENMASK(3, 0) | ||
196 | |||
197 | #define GEN3_RELATED_OFF 0x890 | ||
198 | #define GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL BIT(0) | ||
199 | #define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16) | ||
200 | #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24 | ||
201 | #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK GENMASK(25, 24) | ||
202 | |||
203 | #define PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT 0x8D0 | ||
204 | #define AMBA_ERROR_RESPONSE_CRS_SHIFT 3 | ||
205 | #define AMBA_ERROR_RESPONSE_CRS_MASK GENMASK(1, 0) | ||
206 | #define AMBA_ERROR_RESPONSE_CRS_OKAY 0 | ||
207 | #define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFFFFFF 1 | ||
208 | #define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 2 | ||
209 | |||
210 | #define PORT_LOGIC_MSIX_DOORBELL 0x948 | ||
211 | |||
212 | #define CAP_SPCIE_CAP_OFF 0x154 | ||
213 | #define CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK GENMASK(3, 0) | ||
214 | #define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK GENMASK(11, 8) | ||
215 | #define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT 8 | ||
216 | |||
217 | #define PME_ACK_TIMEOUT 10000 | ||
218 | |||
219 | #define LTSSM_TIMEOUT 50000 /* 50ms */ | ||
220 | |||
221 | #define GEN3_GEN4_EQ_PRESET_INIT 5 | ||
222 | |||
223 | #define GEN1_CORE_CLK_FREQ 62500000 | ||
224 | #define GEN2_CORE_CLK_FREQ 125000000 | ||
225 | #define GEN3_CORE_CLK_FREQ 250000000 | ||
226 | #define GEN4_CORE_CLK_FREQ 500000000 | ||
227 | |||
228 | static const unsigned int pcie_gen_freq[] = { | ||
229 | GEN1_CORE_CLK_FREQ, | ||
230 | GEN2_CORE_CLK_FREQ, | ||
231 | GEN3_CORE_CLK_FREQ, | ||
232 | GEN4_CORE_CLK_FREQ | ||
233 | }; | ||
234 | |||
235 | static const u32 event_cntr_ctrl_offset[] = { | ||
236 | 0x1d8, | ||
237 | 0x1a8, | ||
238 | 0x1a8, | ||
239 | 0x1a8, | ||
240 | 0x1c4, | ||
241 | 0x1d8 | ||
242 | }; | ||
243 | |||
244 | static const u32 event_cntr_data_offset[] = { | ||
245 | 0x1dc, | ||
246 | 0x1ac, | ||
247 | 0x1ac, | ||
248 | 0x1ac, | ||
249 | 0x1c8, | ||
250 | 0x1dc | ||
251 | }; | ||
252 | |||
253 | struct tegra_pcie_dw { | ||
254 | struct device *dev; | ||
255 | struct resource *appl_res; | ||
256 | struct resource *dbi_res; | ||
257 | struct resource *atu_dma_res; | ||
258 | void __iomem *appl_base; | ||
259 | struct clk *core_clk; | ||
260 | struct reset_control *core_apb_rst; | ||
261 | struct reset_control *core_rst; | ||
262 | struct dw_pcie pci; | ||
263 | struct tegra_bpmp *bpmp; | ||
264 | |||
265 | bool supports_clkreq; | ||
266 | bool enable_cdm_check; | ||
267 | bool link_state; | ||
268 | bool update_fc_fixup; | ||
269 | u8 init_link_width; | ||
270 | u32 msi_ctrl_int; | ||
271 | u32 num_lanes; | ||
272 | u32 max_speed; | ||
273 | u32 cid; | ||
274 | u32 cfg_link_cap_l1sub; | ||
275 | u32 pcie_cap_base; | ||
276 | u32 aspm_cmrt; | ||
277 | u32 aspm_pwr_on_t; | ||
278 | u32 aspm_l0s_enter_lat; | ||
279 | |||
280 | struct regulator *pex_ctl_supply; | ||
281 | struct regulator *slot_ctl_3v3; | ||
282 | struct regulator *slot_ctl_12v; | ||
283 | |||
284 | unsigned int phy_count; | ||
285 | struct phy **phys; | ||
286 | |||
287 | struct dentry *debugfs; | ||
288 | }; | ||
289 | |||
290 | static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci) | ||
291 | { | ||
292 | return container_of(pci, struct tegra_pcie_dw, pci); | ||
293 | } | ||
294 | |||
295 | static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value, | ||
296 | const u32 reg) | ||
297 | { | ||
298 | writel_relaxed(value, pcie->appl_base + reg); | ||
299 | } | ||
300 | |||
301 | static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg) | ||
302 | { | ||
303 | return readl_relaxed(pcie->appl_base + reg); | ||
304 | } | ||
305 | |||
306 | struct tegra_pcie_soc { | ||
307 | enum dw_pcie_device_mode mode; | ||
308 | }; | ||
309 | |||
310 | static void apply_bad_link_workaround(struct pcie_port *pp) | ||
311 | { | ||
312 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
313 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
314 | u32 current_link_width; | ||
315 | u16 val; | ||
316 | |||
317 | /* | ||
318 | * NOTE:- Since this scenario is uncommon and link as such is not | ||
319 | * stable anyway, not waiting to confirm if link is really | ||
320 | * transitioning to Gen-2 speed | ||
321 | */ | ||
322 | val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA); | ||
323 | if (val & PCI_EXP_LNKSTA_LBMS) { | ||
324 | current_link_width = (val & PCI_EXP_LNKSTA_NLW) >> | ||
325 | PCI_EXP_LNKSTA_NLW_SHIFT; | ||
326 | if (pcie->init_link_width > current_link_width) { | ||
327 | dev_warn(pci->dev, "PCIe link is bad, width reduced\n"); | ||
328 | val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + | ||
329 | PCI_EXP_LNKCTL2); | ||
330 | val &= ~PCI_EXP_LNKCTL2_TLS; | ||
331 | val |= PCI_EXP_LNKCTL2_TLS_2_5GT; | ||
332 | dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + | ||
333 | PCI_EXP_LNKCTL2, val); | ||
334 | |||
335 | val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + | ||
336 | PCI_EXP_LNKCTL); | ||
337 | val |= PCI_EXP_LNKCTL_RL; | ||
338 | dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + | ||
339 | PCI_EXP_LNKCTL, val); | ||
340 | } | ||
341 | } | ||
342 | } | ||
343 | |||
344 | static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie) | ||
345 | { | ||
346 | struct dw_pcie *pci = &pcie->pci; | ||
347 | struct pcie_port *pp = &pci->pp; | ||
348 | u32 val, tmp; | ||
349 | u16 val_w; | ||
350 | |||
351 | val = appl_readl(pcie, APPL_INTR_STATUS_L0); | ||
352 | if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) { | ||
353 | val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0); | ||
354 | if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) { | ||
355 | appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0); | ||
356 | |||
357 | /* SBR & Surprise Link Down WAR */ | ||
358 | val = appl_readl(pcie, APPL_CAR_RESET_OVRD); | ||
359 | val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N; | ||
360 | appl_writel(pcie, val, APPL_CAR_RESET_OVRD); | ||
361 | udelay(1); | ||
362 | val = appl_readl(pcie, APPL_CAR_RESET_OVRD); | ||
363 | val |= APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N; | ||
364 | appl_writel(pcie, val, APPL_CAR_RESET_OVRD); | ||
365 | |||
366 | val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL); | ||
367 | val |= PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE; | ||
368 | dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val); | ||
369 | } | ||
370 | } | ||
371 | |||
372 | if (val & APPL_INTR_STATUS_L0_INT_INT) { | ||
373 | val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0); | ||
374 | if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) { | ||
375 | appl_writel(pcie, | ||
376 | APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS, | ||
377 | APPL_INTR_STATUS_L1_8_0); | ||
378 | apply_bad_link_workaround(pp); | ||
379 | } | ||
380 | if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) { | ||
381 | appl_writel(pcie, | ||
382 | APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS, | ||
383 | APPL_INTR_STATUS_L1_8_0); | ||
384 | |||
385 | val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + | ||
386 | PCI_EXP_LNKSTA); | ||
387 | dev_dbg(pci->dev, "Link Speed : Gen-%u\n", val_w & | ||
388 | PCI_EXP_LNKSTA_CLS); | ||
389 | } | ||
390 | } | ||
391 | |||
392 | val = appl_readl(pcie, APPL_INTR_STATUS_L0); | ||
393 | if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) { | ||
394 | val = appl_readl(pcie, APPL_INTR_STATUS_L1_18); | ||
395 | tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS); | ||
396 | if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) { | ||
397 | dev_info(pci->dev, "CDM check complete\n"); | ||
398 | tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE; | ||
399 | } | ||
400 | if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) { | ||
401 | dev_err(pci->dev, "CDM comparison mismatch\n"); | ||
402 | tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR; | ||
403 | } | ||
404 | if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) { | ||
405 | dev_err(pci->dev, "CDM Logic error\n"); | ||
406 | tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR; | ||
407 | } | ||
408 | dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp); | ||
409 | tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR); | ||
410 | dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp); | ||
411 | } | ||
412 | |||
413 | return IRQ_HANDLED; | ||
414 | } | ||
415 | |||
416 | static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg) | ||
417 | { | ||
418 | struct tegra_pcie_dw *pcie = arg; | ||
419 | |||
420 | return tegra_pcie_rp_irq_handler(pcie); | ||
421 | } | ||
422 | |||
423 | static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size, | ||
424 | u32 *val) | ||
425 | { | ||
426 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
427 | |||
428 | /* | ||
429 | * This is an endpoint-mode-specific register that happens to appear even | ||
430 | * when the controller is operating in root port mode, and the system | ||
431 | * hangs when it is accessed with the link in the ASPM L1 state. | ||
432 | * So skip accessing it altogether. | ||
433 | */ | ||
434 | if (where == PORT_LOGIC_MSIX_DOORBELL) { | ||
435 | *val = 0x00000000; | ||
436 | return PCIBIOS_SUCCESSFUL; | ||
437 | } | ||
438 | |||
439 | return dw_pcie_read(pci->dbi_base + where, size, val); | ||
440 | } | ||
441 | |||
442 | static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size, | ||
443 | u32 val) | ||
444 | { | ||
445 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
446 | |||
447 | /* | ||
448 | * This is an endpoint-mode-specific register that happens to appear even | ||
449 | * when the controller is operating in root port mode, and the system | ||
450 | * hangs when it is accessed with the link in the ASPM L1 state. | ||
451 | * So skip accessing it altogether. | ||
452 | */ | ||
453 | if (where == PORT_LOGIC_MSIX_DOORBELL) | ||
454 | return PCIBIOS_SUCCESSFUL; | ||
455 | |||
456 | return dw_pcie_write(pci->dbi_base + where, size, val); | ||
457 | } | ||
458 | |||
459 | #if defined(CONFIG_PCIEASPM) | ||
460 | static void disable_aspm_l11(struct tegra_pcie_dw *pcie) | ||
461 | { | ||
462 | u32 val; | ||
463 | |||
464 | val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub); | ||
465 | val &= ~PCI_L1SS_CAP_ASPM_L1_1; | ||
466 | dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val); | ||
467 | } | ||
468 | |||
469 | static void disable_aspm_l12(struct tegra_pcie_dw *pcie) | ||
470 | { | ||
471 | u32 val; | ||
472 | |||
473 | val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub); | ||
474 | val &= ~PCI_L1SS_CAP_ASPM_L1_2; | ||
475 | dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val); | ||
476 | } | ||
477 | |||
478 | static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event) | ||
479 | { | ||
480 | u32 val; | ||
481 | |||
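| /* | ||
| * Select the requested event in Group-5, enable all counters, then read | ||
| * the latched count back from the event counter data register. | ||
| */ | ||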
482 | val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]); | ||
483 | val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT); | ||
484 | val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT; | ||
485 | val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT; | ||
486 | val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT; | ||
487 | dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val); | ||
488 | val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]); | ||
489 | |||
490 | return val; | ||
491 | } | ||
492 | |||
493 | static int aspm_state_cnt(struct seq_file *s, void *data) | ||
494 | { | ||
495 | struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *) | ||
496 | dev_get_drvdata(s->private); | ||
497 | u32 val; | ||
498 | |||
499 | seq_printf(s, "Tx L0s entry count : %u\n", | ||
500 | event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S)); | ||
501 | |||
502 | seq_printf(s, "Rx L0s entry count : %u\n", | ||
503 | event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S)); | ||
504 | |||
505 | seq_printf(s, "Link L1 entry count : %u\n", | ||
506 | event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1)); | ||
507 | |||
508 | seq_printf(s, "Link L1.1 entry count : %u\n", | ||
509 | event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1)); | ||
510 | |||
511 | seq_printf(s, "Link L1.2 entry count : %u\n", | ||
512 | event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2)); | ||
513 | |||
514 | /* Clear all counters */ | ||
515 | dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], | ||
516 | EVENT_COUNTER_ALL_CLEAR); | ||
517 | |||
518 | /* Re-enable counting */ | ||
519 | val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT; | ||
520 | val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT; | ||
521 | dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val); | ||
522 | |||
523 | return 0; | ||
524 | } | ||
525 | |||
526 | static void init_host_aspm(struct tegra_pcie_dw *pcie) | ||
527 | { | ||
528 | struct dw_pcie *pci = &pcie->pci; | ||
529 | u32 val; | ||
530 | |||
531 | val = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS); | ||
532 | pcie->cfg_link_cap_l1sub = val + PCI_L1SS_CAP; | ||
533 | |||
534 | /* Enable ASPM counters */ | ||
535 | val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT; | ||
536 | val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT; | ||
537 | dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val); | ||
538 | |||
539 | /* Program T_cmrt and T_pwr_on values */ | ||
540 | val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub); | ||
541 | val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE); | ||
542 | val |= (pcie->aspm_cmrt << 8); | ||
543 | val |= (pcie->aspm_pwr_on_t << 19); | ||
544 | dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val); | ||
545 | |||
546 | /* Program L0s and L1 entrance latencies */ | ||
547 | val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL); | ||
548 | val &= ~L0S_ENTRANCE_LAT_MASK; | ||
549 | val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT); | ||
550 | val |= ENTER_ASPM; | ||
551 | dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val); | ||
552 | } | ||
553 | |||
554 | static int init_debugfs(struct tegra_pcie_dw *pcie) | ||
555 | { | ||
556 | struct dentry *d; | ||
557 | |||
558 | d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt", | ||
559 | pcie->debugfs, aspm_state_cnt); | ||
560 | if (IS_ERR_OR_NULL(d)) | ||
561 | dev_err(pcie->dev, | ||
562 | "Failed to create debugfs file \"aspm_state_cnt\"\n"); | ||
563 | |||
564 | return 0; | ||
565 | } | ||
566 | #else | ||
567 | static inline void disable_aspm_l12(struct tegra_pcie_dw *pcie) { return; } | ||
568 | static inline void disable_aspm_l11(struct tegra_pcie_dw *pcie) { return; } | ||
569 | static inline void init_host_aspm(struct tegra_pcie_dw *pcie) { return; } | ||
570 | static inline int init_debugfs(struct tegra_pcie_dw *pcie) { return 0; } | ||
571 | #endif | ||
572 | |||
573 | static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp) | ||
574 | { | ||
575 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
576 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
577 | u32 val; | ||
578 | u16 val_w; | ||
579 | |||
580 | val = appl_readl(pcie, APPL_INTR_EN_L0_0); | ||
581 | val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN; | ||
582 | appl_writel(pcie, val, APPL_INTR_EN_L0_0); | ||
583 | |||
584 | val = appl_readl(pcie, APPL_INTR_EN_L1_0_0); | ||
585 | val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN; | ||
586 | appl_writel(pcie, val, APPL_INTR_EN_L1_0_0); | ||
587 | |||
588 | if (pcie->enable_cdm_check) { | ||
589 | val = appl_readl(pcie, APPL_INTR_EN_L0_0); | ||
590 | val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN; | ||
591 | appl_writel(pcie, val, APPL_INTR_EN_L0_0); | ||
592 | |||
593 | val = appl_readl(pcie, APPL_INTR_EN_L1_18); | ||
594 | val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR; | ||
595 | val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR; | ||
596 | appl_writel(pcie, val, APPL_INTR_EN_L1_18); | ||
597 | } | ||
598 | |||
599 | val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base + | ||
600 | PCI_EXP_LNKSTA); | ||
601 | pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >> | ||
602 | PCI_EXP_LNKSTA_NLW_SHIFT; | ||
603 | |||
604 | val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base + | ||
605 | PCI_EXP_LNKCTL); | ||
606 | val_w |= PCI_EXP_LNKCTL_LBMIE; | ||
607 | dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL, | ||
608 | val_w); | ||
609 | } | ||
610 | |||
611 | static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp) | ||
612 | { | ||
613 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
614 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
615 | u32 val; | ||
616 | |||
617 | /* Enable legacy interrupt generation */ | ||
618 | val = appl_readl(pcie, APPL_INTR_EN_L0_0); | ||
619 | val |= APPL_INTR_EN_L0_0_SYS_INTR_EN; | ||
620 | val |= APPL_INTR_EN_L0_0_INT_INT_EN; | ||
621 | appl_writel(pcie, val, APPL_INTR_EN_L0_0); | ||
622 | |||
623 | val = appl_readl(pcie, APPL_INTR_EN_L1_8_0); | ||
624 | val |= APPL_INTR_EN_L1_8_INTX_EN; | ||
625 | val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN; | ||
626 | val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN; | ||
627 | if (IS_ENABLED(CONFIG_PCIEAER)) | ||
628 | val |= APPL_INTR_EN_L1_8_AER_INT_EN; | ||
629 | appl_writel(pcie, val, APPL_INTR_EN_L1_8_0); | ||
630 | } | ||
631 | |||
632 | static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp) | ||
633 | { | ||
634 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
635 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
636 | u32 val; | ||
637 | |||
638 | dw_pcie_msi_init(pp); | ||
639 | |||
640 | /* Enable MSI interrupt generation */ | ||
641 | val = appl_readl(pcie, APPL_INTR_EN_L0_0); | ||
642 | val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN; | ||
643 | val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN; | ||
644 | appl_writel(pcie, val, APPL_INTR_EN_L0_0); | ||
645 | } | ||
646 | |||
647 | static void tegra_pcie_enable_interrupts(struct pcie_port *pp) | ||
648 | { | ||
649 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
650 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
651 | |||
652 | /* Clear interrupt statuses before enabling interrupts */ | ||
653 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0); | ||
654 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0); | ||
655 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1); | ||
656 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2); | ||
657 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3); | ||
658 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6); | ||
659 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7); | ||
660 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0); | ||
661 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9); | ||
662 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10); | ||
663 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11); | ||
664 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13); | ||
665 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14); | ||
666 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15); | ||
667 | appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17); | ||
668 | |||
669 | tegra_pcie_enable_system_interrupts(pp); | ||
670 | tegra_pcie_enable_legacy_interrupts(pp); | ||
671 | if (IS_ENABLED(CONFIG_PCI_MSI)) | ||
672 | tegra_pcie_enable_msi_interrupts(pp); | ||
673 | } | ||
674 | |||
675 | static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie) | ||
676 | { | ||
677 | struct dw_pcie *pci = &pcie->pci; | ||
678 | u32 val, offset, i; | ||
679 | |||
680 | /* Program init preset */ | ||
681 | for (i = 0; i < pcie->num_lanes; i++) { | ||
682 | dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF | ||
683 | + (i * 2), 2, &val); | ||
684 | val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK; | ||
685 | val |= GEN3_GEN4_EQ_PRESET_INIT; | ||
686 | val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK; | ||
687 | val |= (GEN3_GEN4_EQ_PRESET_INIT << | ||
688 | CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT); | ||
689 | dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF | ||
690 | + (i * 2), 2, val); | ||
691 | |||
692 | offset = dw_pcie_find_ext_capability(pci, | ||
693 | PCI_EXT_CAP_ID_PL_16GT) + | ||
694 | PCI_PL_16GT_LE_CTRL; | ||
695 | dw_pcie_read(pci->dbi_base + offset + i, 1, &val); | ||
696 | val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK; | ||
697 | val |= GEN3_GEN4_EQ_PRESET_INIT; | ||
698 | val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK; | ||
699 | val |= (GEN3_GEN4_EQ_PRESET_INIT << | ||
700 | PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT); | ||
701 | dw_pcie_write(pci->dbi_base + offset + i, 1, val); | ||
702 | } | ||
703 | |||
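| /* | ||
| * Program the EQ preset request vector once with RATE_SHADOW_SEL cleared | ||
| * and once with it set, so both data rates get their request vectors. | ||
| */ | ||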
704 | val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); | ||
705 | val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; | ||
706 | dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); | ||
707 | |||
708 | val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); | ||
709 | val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK; | ||
710 | val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT); | ||
711 | val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK; | ||
712 | dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val); | ||
713 | |||
714 | val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); | ||
715 | val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; | ||
716 | val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT); | ||
717 | dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); | ||
718 | |||
719 | val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); | ||
720 | val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK; | ||
721 | val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT); | ||
722 | val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK; | ||
723 | dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val); | ||
724 | |||
725 | val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); | ||
726 | val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; | ||
727 | dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); | ||
728 | } | ||
729 | |||
730 | static void tegra_pcie_prepare_host(struct pcie_port *pp) | ||
731 | { | ||
732 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
733 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
734 | u32 val; | ||
735 | |||
736 | val = dw_pcie_readl_dbi(pci, PCI_IO_BASE); | ||
737 | val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8); | ||
738 | dw_pcie_writel_dbi(pci, PCI_IO_BASE, val); | ||
739 | |||
740 | val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE); | ||
741 | val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE; | ||
742 | val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE; | ||
743 | dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val); | ||
744 | |||
745 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); | ||
746 | |||
747 | /* Configure FTS */ | ||
748 | val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL); | ||
749 | val &= ~(N_FTS_MASK << N_FTS_SHIFT); | ||
750 | val |= N_FTS_VAL << N_FTS_SHIFT; | ||
751 | dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val); | ||
752 | |||
753 | val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL); | ||
754 | val &= ~FTS_MASK; | ||
755 | val |= FTS_VAL; | ||
756 | dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val); | ||
757 | |||
758 | /* Return 0xFFFF0001 in response to CRS (Configuration Request Retry Status) */ | ||
759 | val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT); | ||
760 | val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT); | ||
761 | val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 << | ||
762 | AMBA_ERROR_RESPONSE_CRS_SHIFT); | ||
763 | dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val); | ||
764 | |||
765 | /* Configure Max Speed from DT */ | ||
766 | if (pcie->max_speed && pcie->max_speed != -EINVAL) { | ||
767 | val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + | ||
768 | PCI_EXP_LNKCAP); | ||
769 | val &= ~PCI_EXP_LNKCAP_SLS; | ||
770 | val |= pcie->max_speed; | ||
771 | dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, | ||
772 | val); | ||
773 | } | ||
774 | |||
775 | /* Configure Max lane width from DT */ | ||
776 | val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP); | ||
777 | val &= ~PCI_EXP_LNKCAP_MLW; | ||
778 | val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT); | ||
779 | dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val); | ||
780 | |||
781 | config_gen3_gen4_eq_presets(pcie); | ||
782 | |||
783 | init_host_aspm(pcie); | ||
784 | |||
785 | val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); | ||
786 | val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; | ||
787 | dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); | ||
788 | |||
789 | if (pcie->update_fc_fixup) { | ||
790 | val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF); | ||
791 | val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT; | ||
792 | dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val); | ||
793 | } | ||
794 | |||
795 | dw_pcie_setup_rc(pp); | ||
796 | |||
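| /* | ||
| * Run the core clock at the Gen4 rate while bringing the link up; it is | ||
| * adjusted to the negotiated link speed once the link is up. | ||
| */ | ||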
797 | clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ); | ||
798 | |||
799 | /* Assert RST */ | ||
800 | val = appl_readl(pcie, APPL_PINMUX); | ||
801 | val &= ~APPL_PINMUX_PEX_RST; | ||
802 | appl_writel(pcie, val, APPL_PINMUX); | ||
803 | |||
804 | usleep_range(100, 200); | ||
805 | |||
806 | /* Enable LTSSM */ | ||
807 | val = appl_readl(pcie, APPL_CTRL); | ||
808 | val |= APPL_CTRL_LTSSM_EN; | ||
809 | appl_writel(pcie, val, APPL_CTRL); | ||
810 | |||
811 | /* De-assert RST */ | ||
812 | val = appl_readl(pcie, APPL_PINMUX); | ||
813 | val |= APPL_PINMUX_PEX_RST; | ||
814 | appl_writel(pcie, val, APPL_PINMUX); | ||
815 | |||
816 | msleep(100); | ||
817 | } | ||
818 | |||
819 | static int tegra_pcie_dw_host_init(struct pcie_port *pp) | ||
820 | { | ||
821 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
822 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
823 | u32 val, tmp, offset, speed; | ||
824 | |||
825 | tegra_pcie_prepare_host(pp); | ||
826 | |||
827 | if (dw_pcie_wait_for_link(pci)) { | ||
828 | /* | ||
829 | * Some endpoints cannot bring the link up if the root port has the | ||
830 | * Data Link Feature (DLF) enabled. Refer to PCIe Base Spec rev 4.0 | ||
831 | * ver 1.0, sec 3.4.2 & 7.7.4, for more information on Scaled Flow | ||
832 | * Control and DLF. | ||
833 | * So confirm that this is indeed the case here and attempt link-up | ||
834 | * once again with DLF disabled. | ||
835 | */ | ||
836 | val = appl_readl(pcie, APPL_DEBUG); | ||
837 | val &= APPL_DEBUG_LTSSM_STATE_MASK; | ||
838 | val >>= APPL_DEBUG_LTSSM_STATE_SHIFT; | ||
839 | tmp = appl_readl(pcie, APPL_LINK_STATUS); | ||
840 | tmp &= APPL_LINK_STATUS_RDLH_LINK_UP; | ||
841 | if (!(val == 0x11 && !tmp)) { | ||
842 | /* Link is down for a legitimate reason; nothing more to do */ | ||
843 | return 0; | ||
844 | } | ||
845 | |||
846 | dev_info(pci->dev, "Link is down in DLL"); | ||
847 | dev_info(pci->dev, "Trying again with DLFE disabled\n"); | ||
848 | /* Disable LTSSM */ | ||
849 | val = appl_readl(pcie, APPL_CTRL); | ||
850 | val &= ~APPL_CTRL_LTSSM_EN; | ||
851 | appl_writel(pcie, val, APPL_CTRL); | ||
852 | |||
853 | reset_control_assert(pcie->core_rst); | ||
854 | reset_control_deassert(pcie->core_rst); | ||
855 | |||
856 | offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF); | ||
857 | val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP); | ||
858 | val &= ~PCI_DLF_EXCHANGE_ENABLE; | ||
859 | dw_pcie_writel_dbi(pci, offset + PCI_DLF_CAP, val); | ||
860 | |||
861 | tegra_pcie_prepare_host(pp); | ||
862 | |||
863 | if (dw_pcie_wait_for_link(pci)) | ||
864 | return 0; | ||
865 | } | ||
866 | |||
867 | speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) & | ||
868 | PCI_EXP_LNKSTA_CLS; | ||
869 | clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]); | ||
870 | |||
871 | tegra_pcie_enable_interrupts(pp); | ||
872 | |||
873 | return 0; | ||
874 | } | ||
875 | |||
876 | static int tegra_pcie_dw_link_up(struct dw_pcie *pci) | ||
877 | { | ||
878 | struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); | ||
879 | u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA); | ||
880 | |||
881 | return !!(val & PCI_EXP_LNKSTA_DLLLA); | ||
882 | } | ||
883 | |||
884 | static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp) | ||
885 | { | ||
886 | pp->num_vectors = MAX_MSI_IRQS; | ||
887 | } | ||
888 | |||
889 | static const struct dw_pcie_ops tegra_dw_pcie_ops = { | ||
890 | .link_up = tegra_pcie_dw_link_up, | ||
891 | }; | ||
892 | |||
893 | static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = { | ||
894 | .rd_own_conf = tegra_pcie_dw_rd_own_conf, | ||
895 | .wr_own_conf = tegra_pcie_dw_wr_own_conf, | ||
896 | .host_init = tegra_pcie_dw_host_init, | ||
897 | .set_num_vectors = tegra_pcie_set_msi_vec_num, | ||
898 | }; | ||
899 | |||
900 | static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie) | ||
901 | { | ||
902 | unsigned int phy_count = pcie->phy_count; | ||
903 | |||
904 | while (phy_count--) { | ||
905 | phy_power_off(pcie->phys[phy_count]); | ||
906 | phy_exit(pcie->phys[phy_count]); | ||
907 | } | ||
908 | } | ||
909 | |||
910 | static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie) | ||
911 | { | ||
912 | unsigned int i; | ||
913 | int ret; | ||
914 | |||
915 | for (i = 0; i < pcie->phy_count; i++) { | ||
916 | ret = phy_init(pcie->phys[i]); | ||
917 | if (ret < 0) | ||
918 | goto phy_power_off; | ||
919 | |||
920 | ret = phy_power_on(pcie->phys[i]); | ||
921 | if (ret < 0) | ||
922 | goto phy_exit; | ||
923 | } | ||
924 | |||
925 | return 0; | ||
926 | |||
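| /* | ||
| * Unwind in reverse order. Jumping to phy_exit skips phy_power_off() for | ||
| * the PHY whose phy_power_on() failed, since it was never powered on. | ||
| */ | ||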
927 | phy_power_off: | ||
928 | while (i--) { | ||
929 | phy_power_off(pcie->phys[i]); | ||
930 | phy_exit: | ||
931 | phy_exit(pcie->phys[i]); | ||
932 | } | ||
933 | |||
934 | return ret; | ||
935 | } | ||
936 | |||
937 | static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie) | ||
938 | { | ||
939 | struct device_node *np = pcie->dev->of_node; | ||
940 | int ret; | ||
941 | |||
942 | ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt); | ||
943 | if (ret < 0) { | ||
944 | dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret); | ||
945 | return ret; | ||
946 | } | ||
947 | |||
948 | ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us", | ||
949 | &pcie->aspm_pwr_on_t); | ||
950 | if (ret < 0) | ||
951 | dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n", | ||
952 | ret); | ||
953 | |||
954 | ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us", | ||
955 | &pcie->aspm_l0s_enter_lat); | ||
956 | if (ret < 0) | ||
957 | dev_info(pcie->dev, | ||
958 | "Failed to read ASPM L0s Entrance latency: %d\n", ret); | ||
959 | |||
960 | ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes); | ||
961 | if (ret < 0) { | ||
962 | dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret); | ||
963 | return ret; | ||
964 | } | ||
965 | |||
966 | pcie->max_speed = of_pci_get_max_link_speed(np); | ||
967 | |||
968 | ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid); | ||
969 | if (ret) { | ||
970 | dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret); | ||
971 | return ret; | ||
972 | } | ||
973 | |||
974 | ret = of_property_count_strings(np, "phy-names"); | ||
975 | if (ret < 0) { | ||
976 | dev_err(pcie->dev, "Failed to find PHY entries: %d\n", | ||
977 | ret); | ||
978 | return ret; | ||
979 | } | ||
980 | pcie->phy_count = ret; | ||
981 | |||
982 | if (of_property_read_bool(np, "nvidia,update-fc-fixup")) | ||
983 | pcie->update_fc_fixup = true; | ||
984 | |||
985 | pcie->supports_clkreq = | ||
986 | of_property_read_bool(pcie->dev->of_node, "supports-clkreq"); | ||
987 | |||
988 | pcie->enable_cdm_check = | ||
989 | of_property_read_bool(np, "snps,enable-cdm-check"); | ||
990 | |||
991 | return 0; | ||
992 | } | ||
993 | |||
994 | static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie, | ||
995 | bool enable) | ||
996 | { | ||
997 | struct mrq_uphy_response resp; | ||
998 | struct tegra_bpmp_message msg; | ||
999 | struct mrq_uphy_request req; | ||
1000 | |||
1001 | /* Controller-5 doesn't need to have its state set by BPMP-FW */ | ||
1002 | if (pcie->cid == 5) | ||
1003 | return 0; | ||
1004 | |||
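| /* | ||
| * Build an MRQ_UPHY request asking the BPMP firmware to enable or disable | ||
| * the UPHY logic for this controller, and wait for its response. | ||
| */ | ||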
1005 | memset(&req, 0, sizeof(req)); | ||
1006 | memset(&resp, 0, sizeof(resp)); | ||
1007 | |||
1008 | req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE; | ||
1009 | req.controller_state.pcie_controller = pcie->cid; | ||
1010 | req.controller_state.enable = enable; | ||
1011 | |||
1012 | memset(&msg, 0, sizeof(msg)); | ||
1013 | msg.mrq = MRQ_UPHY; | ||
1014 | msg.tx.data = &req; | ||
1015 | msg.tx.size = sizeof(req); | ||
1016 | msg.rx.data = &resp; | ||
1017 | msg.rx.size = sizeof(resp); | ||
1018 | |||
1019 | return tegra_bpmp_transfer(pcie->bpmp, &msg); | ||
1020 | } | ||
1021 | |||
1022 | static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie) | ||
1023 | { | ||
1024 | struct pcie_port *pp = &pcie->pci.pp; | ||
1025 | struct pci_bus *child, *root_bus = NULL; | ||
1026 | struct pci_dev *pdev; | ||
1027 | |||
1028 | /* | ||
1029 | * With some endpoints, the link doesn't go into the L2 state on Tegra | ||
1030 | * if they are not in the D0 state. So make sure that the immediate | ||
1031 | * downstream devices are in D0 before sending PME_Turn_Off to put the | ||
1032 | * link into L2. | ||
1033 | * This is as per PCI Express Base r4.0 v1.0, September 27 2017, | ||
1034 | * sec 5.2 Link State Power Management (page 428). | ||
1035 | */ | ||
1036 | |||
1037 | list_for_each_entry(child, &pp->root_bus->children, node) { | ||
1038 | /* Find the bus immediately downstream of the root port */ | ||
1039 | if (child->parent == pp->root_bus) { | ||
1040 | root_bus = child; | ||
1041 | break; | ||
1042 | } | ||
1043 | } | ||
1044 | |||
1045 | if (!root_bus) { | ||
1046 | dev_err(pcie->dev, "Failed to find downstream devices\n"); | ||
1047 | return; | ||
1048 | } | ||
1049 | |||
1050 | list_for_each_entry(pdev, &root_bus->devices, bus_list) { | ||
1051 | if (PCI_SLOT(pdev->devfn) == 0) { | ||
1052 | if (pci_set_power_state(pdev, PCI_D0)) | ||
1053 | dev_err(pcie->dev, | ||
1054 | "Failed to transition %s to D0 state\n", | ||
1055 | dev_name(&pdev->dev)); | ||
1056 | } | ||
1057 | } | ||
1058 | } | ||
1059 | |||
1060 | static int tegra_pcie_get_slot_regulators(struct tegra_pcie_dw *pcie) | ||
1061 | { | ||
1062 | pcie->slot_ctl_3v3 = devm_regulator_get_optional(pcie->dev, "vpcie3v3"); | ||
1063 | if (IS_ERR(pcie->slot_ctl_3v3)) { | ||
1064 | if (PTR_ERR(pcie->slot_ctl_3v3) != -ENODEV) | ||
1065 | return PTR_ERR(pcie->slot_ctl_3v3); | ||
1066 | |||
1067 | pcie->slot_ctl_3v3 = NULL; | ||
1068 | } | ||
1069 | |||
1070 | pcie->slot_ctl_12v = devm_regulator_get_optional(pcie->dev, "vpcie12v"); | ||
1071 | if (IS_ERR(pcie->slot_ctl_12v)) { | ||
1072 | if (PTR_ERR(pcie->slot_ctl_12v) != -ENODEV) | ||
1073 | return PTR_ERR(pcie->slot_ctl_12v); | ||
1074 | |||
1075 | pcie->slot_ctl_12v = NULL; | ||
1076 | } | ||
1077 | |||
1078 | return 0; | ||
1079 | } | ||
1080 | |||
1081 | static int tegra_pcie_enable_slot_regulators(struct tegra_pcie_dw *pcie) | ||
1082 | { | ||
1083 | int ret; | ||
1084 | |||
1085 | if (pcie->slot_ctl_3v3) { | ||
1086 | ret = regulator_enable(pcie->slot_ctl_3v3); | ||
1087 | if (ret < 0) { | ||
1088 | dev_err(pcie->dev, | ||
1089 | "Failed to enable 3.3V slot supply: %d\n", ret); | ||
1090 | return ret; | ||
1091 | } | ||
1092 | } | ||
1093 | |||
1094 | if (pcie->slot_ctl_12v) { | ||
1095 | ret = regulator_enable(pcie->slot_ctl_12v); | ||
1096 | if (ret < 0) { | ||
1097 | dev_err(pcie->dev, | ||
1098 | "Failed to enable 12V slot supply: %d\n", ret); | ||
1099 | goto fail_12v_enable; | ||
1100 | } | ||
1101 | } | ||
1102 | |||
1103 | /* | ||
1104 | * According to PCI Express Card Electromechanical Specification | ||
1105 | * Revision 1.1, Table-2.4, T_PVPERL (Power stable to PERST# inactive) | ||
1106 | * should be a minimum of 100ms. | ||
1107 | */ | ||
1108 | if (pcie->slot_ctl_3v3 || pcie->slot_ctl_12v) | ||
1109 | msleep(100); | ||
1110 | |||
1111 | return 0; | ||
1112 | |||
1113 | fail_12v_enable: | ||
1114 | if (pcie->slot_ctl_3v3) | ||
1115 | regulator_disable(pcie->slot_ctl_3v3); | ||
1116 | return ret; | ||
1117 | } | ||
1118 | |||
1119 | static void tegra_pcie_disable_slot_regulators(struct tegra_pcie_dw *pcie) | ||
1120 | { | ||
1121 | if (pcie->slot_ctl_12v) | ||
1122 | regulator_disable(pcie->slot_ctl_12v); | ||
1123 | if (pcie->slot_ctl_3v3) | ||
1124 | regulator_disable(pcie->slot_ctl_3v3); | ||
1125 | } | ||
1126 | |||
1127 | static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie, | ||
1128 | bool en_hw_hot_rst) | ||
1129 | { | ||
1130 | int ret; | ||
1131 | u32 val; | ||
1132 | |||
1133 | ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true); | ||
1134 | if (ret) { | ||
1135 | dev_err(pcie->dev, | ||
1136 | "Failed to enable controller %u: %d\n", pcie->cid, ret); | ||
1137 | return ret; | ||
1138 | } | ||
1139 | |||
1140 | ret = tegra_pcie_enable_slot_regulators(pcie); | ||
1141 | if (ret < 0) | ||
1142 | goto fail_slot_reg_en; | ||
1143 | |||
1144 | ret = regulator_enable(pcie->pex_ctl_supply); | ||
1145 | if (ret < 0) { | ||
1146 | dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret); | ||
1147 | goto fail_reg_en; | ||
1148 | } | ||
1149 | |||
1150 | ret = clk_prepare_enable(pcie->core_clk); | ||
1151 | if (ret) { | ||
1152 | dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret); | ||
1153 | goto fail_core_clk; | ||
1154 | } | ||
1155 | |||
1156 | ret = reset_control_deassert(pcie->core_apb_rst); | ||
1157 | if (ret) { | ||
1158 | dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n", | ||
1159 | ret); | ||
1160 | goto fail_core_apb_rst; | ||
1161 | } | ||
1162 | |||
1163 | if (en_hw_hot_rst) { | ||
1164 | /* Enable HW_HOT_RST mode */ | ||
1165 | val = appl_readl(pcie, APPL_CTRL); | ||
1166 | val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << | ||
1167 | APPL_CTRL_HW_HOT_RST_MODE_SHIFT); | ||
1168 | val |= APPL_CTRL_HW_HOT_RST_EN; | ||
1169 | appl_writel(pcie, val, APPL_CTRL); | ||
1170 | } | ||
1171 | |||
1172 | ret = tegra_pcie_enable_phy(pcie); | ||
1173 | if (ret) { | ||
1174 | dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret); | ||
1175 | goto fail_phy; | ||
1176 | } | ||
1177 | |||
1178 | /* Update CFG base address */ | ||
1179 | appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK, | ||
1180 | APPL_CFG_BASE_ADDR); | ||
1181 | |||
1182 | /* Configure this core for RP mode operation */ | ||
1183 | appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE); | ||
1184 | |||
1185 | appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE); | ||
1186 | |||
1187 | val = appl_readl(pcie, APPL_CTRL); | ||
1188 | appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL); | ||
1189 | |||
1190 | val = appl_readl(pcie, APPL_CFG_MISC); | ||
1191 | val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT); | ||
1192 | appl_writel(pcie, val, APPL_CFG_MISC); | ||
1193 | |||
1194 | if (!pcie->supports_clkreq) { | ||
1195 | val = appl_readl(pcie, APPL_PINMUX); | ||
1196 | val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN; | ||
1197 | val |= APPL_PINMUX_CLKREQ_OUT_OVRD; | ||
1198 | appl_writel(pcie, val, APPL_PINMUX); | ||
1199 | } | ||
1200 | |||
1201 | /* Update iATU_DMA base address */ | ||
1202 | appl_writel(pcie, | ||
1203 | pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK, | ||
1204 | APPL_CFG_IATU_DMA_BASE_ADDR); | ||
1205 | |||
1206 | reset_control_deassert(pcie->core_rst); | ||
1207 | |||
1208 | pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, | ||
1209 | PCI_CAP_ID_EXP); | ||
1210 | |||
1211 | /* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */ | ||
1212 | if (!pcie->supports_clkreq) { | ||
1213 | disable_aspm_l11(pcie); | ||
1214 | disable_aspm_l12(pcie); | ||
1215 | } | ||
1216 | |||
1217 | return ret; | ||
1218 | |||
1219 | fail_phy: | ||
1220 | reset_control_assert(pcie->core_apb_rst); | ||
1221 | fail_core_apb_rst: | ||
1222 | clk_disable_unprepare(pcie->core_clk); | ||
1223 | fail_core_clk: | ||
1224 | regulator_disable(pcie->pex_ctl_supply); | ||
1225 | fail_reg_en: | ||
1226 | tegra_pcie_disable_slot_regulators(pcie); | ||
1227 | fail_slot_reg_en: | ||
1228 | tegra_pcie_bpmp_set_ctrl_state(pcie, false); | ||
1229 | |||
1230 | return ret; | ||
1231 | } | ||
1232 | |||
1233 | static int __deinit_controller(struct tegra_pcie_dw *pcie) | ||
1234 | { | ||
1235 | int ret; | ||
1236 | |||
1237 | ret = reset_control_assert(pcie->core_rst); | ||
1238 | if (ret) { | ||
1239 | dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n", | ||
1240 | ret); | ||
1241 | return ret; | ||
1242 | } | ||
1243 | |||
1244 | tegra_pcie_disable_phy(pcie); | ||
1245 | |||
1246 | ret = reset_control_assert(pcie->core_apb_rst); | ||
1247 | if (ret) { | ||
1248 | dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret); | ||
1249 | return ret; | ||
1250 | } | ||
1251 | |||
1252 | clk_disable_unprepare(pcie->core_clk); | ||
1253 | |||
1254 | ret = regulator_disable(pcie->pex_ctl_supply); | ||
1255 | if (ret) { | ||
1256 | dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret); | ||
1257 | return ret; | ||
1258 | } | ||
1259 | |||
1260 | tegra_pcie_disable_slot_regulators(pcie); | ||
1261 | |||
1262 | ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false); | ||
1263 | if (ret) { | ||
1264 | dev_err(pcie->dev, "Failed to disable controller %d: %d\n", | ||
1265 | pcie->cid, ret); | ||
1266 | return ret; | ||
1267 | } | ||
1268 | |||
1269 | return ret; | ||
1270 | } | ||
1271 | |||
1272 | static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie) | ||
1273 | { | ||
1274 | struct dw_pcie *pci = &pcie->pci; | ||
1275 | struct pcie_port *pp = &pci->pp; | ||
1276 | int ret; | ||
1277 | |||
1278 | ret = tegra_pcie_config_controller(pcie, false); | ||
1279 | if (ret < 0) | ||
1280 | return ret; | ||
1281 | |||
1282 | pp->ops = &tegra_pcie_dw_host_ops; | ||
1283 | |||
1284 | ret = dw_pcie_host_init(pp); | ||
1285 | if (ret < 0) { | ||
1286 | dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret); | ||
1287 | goto fail_host_init; | ||
1288 | } | ||
1289 | |||
1290 | return 0; | ||
1291 | |||
1292 | fail_host_init: | ||
1293 | return __deinit_controller(pcie); | ||
1294 | } | ||
1295 | |||
1296 | static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie) | ||
1297 | { | ||
1298 | u32 val; | ||
1299 | |||
1300 | if (!tegra_pcie_dw_link_up(&pcie->pci)) | ||
1301 | return 0; | ||
1302 | |||
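| /* Trigger transmission of PME_Turn_Off and poll until the link settles in L2 */ | ||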
1303 | val = appl_readl(pcie, APPL_RADM_STATUS); | ||
1304 | val |= APPL_PM_XMT_TURNOFF_STATE; | ||
1305 | appl_writel(pcie, val, APPL_RADM_STATUS); | ||
1306 | |||
1307 | return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val, | ||
1308 | val & APPL_DEBUG_PM_LINKST_IN_L2_LAT, | ||
1309 | 1, PME_ACK_TIMEOUT); | ||
1310 | } | ||
1311 | |||
1312 | static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie) | ||
1313 | { | ||
1314 | u32 data; | ||
1315 | int err; | ||
1316 | |||
1317 | if (!tegra_pcie_dw_link_up(&pcie->pci)) { | ||
1318 | dev_dbg(pcie->dev, "PCIe link is not up\n"); | ||
1319 | return; | ||
1320 | } | ||
1321 | |||
1322 | if (tegra_pcie_try_link_l2(pcie)) { | ||
1323 | dev_info(pcie->dev, "Link didn't transition to L2 state\n"); | ||
1324 | /* | ||
1325 | * The TX lane clock frequency resets to Gen1 only if the link is in | ||
1326 | * the L2 or detect state. | ||
1327 | * So assert PEX_RST to the endpoint to force the root port into the | ||
1328 | * detect state. | ||
1329 | */ | ||
1330 | data = appl_readl(pcie, APPL_PINMUX); | ||
1331 | data &= ~APPL_PINMUX_PEX_RST; | ||
1332 | appl_writel(pcie, data, APPL_PINMUX); | ||
1333 | |||
1334 | err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, | ||
1335 | data, | ||
1336 | ((data & | ||
1337 | APPL_DEBUG_LTSSM_STATE_MASK) >> | ||
1338 | APPL_DEBUG_LTSSM_STATE_SHIFT) == | ||
1339 | LTSSM_STATE_PRE_DETECT, | ||
1340 | 1, LTSSM_TIMEOUT); | ||
1341 | if (err) { | ||
1342 | dev_info(pcie->dev, "Link didn't go to detect state\n"); | ||
1343 | } else { | ||
1344 | /* Disable LTSSM after link is in detect state */ | ||
1345 | data = appl_readl(pcie, APPL_CTRL); | ||
1346 | data &= ~APPL_CTRL_LTSSM_EN; | ||
1347 | appl_writel(pcie, data, APPL_CTRL); | ||
1348 | } | ||
1349 | } | ||
1350 | /* | ||
1351 | * DBI registers may not be accessible after this, as PLL-E may be | ||
1352 | * powered down depending on how CLKREQ is pulled by the endpoint. | ||
1353 | */ | ||
1354 | data = appl_readl(pcie, APPL_PINMUX); | ||
1355 | data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE); | ||
1356 | /* Cut REFCLK to slot */ | ||
1357 | data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN; | ||
1358 | data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE; | ||
1359 | appl_writel(pcie, data, APPL_PINMUX); | ||
1360 | } | ||
1361 | |||
1362 | static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie) | ||
1363 | { | ||
1364 | tegra_pcie_downstream_dev_to_D0(pcie); | ||
1365 | dw_pcie_host_deinit(&pcie->pci.pp); | ||
1366 | tegra_pcie_dw_pme_turnoff(pcie); | ||
1367 | |||
1368 | return __deinit_controller(pcie); | ||
1369 | } | ||
1370 | |||
1371 | static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie) | ||
1372 | { | ||
1373 | struct pcie_port *pp = &pcie->pci.pp; | ||
1374 | struct device *dev = pcie->dev; | ||
1375 | char *name; | ||
1376 | int ret; | ||
1377 | |||
1378 | if (IS_ENABLED(CONFIG_PCI_MSI)) { | ||
1379 | pp->msi_irq = of_irq_get_byname(dev->of_node, "msi"); | ||
1380 | if (pp->msi_irq <= 0) { | ||
1381 | dev_err(dev, "Failed to get MSI interrupt\n"); | ||
1382 | return -ENODEV; | ||
1383 | } | ||
1384 | } | ||
1385 | |||
1386 | pm_runtime_enable(dev); | ||
1387 | |||
1388 | ret = pm_runtime_get_sync(dev); | ||
1389 | if (ret < 0) { | ||
1390 | dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n", | ||
1391 | ret); | ||
1392 | goto fail_pm_get_sync; | ||
1393 | } | ||
1394 | |||
1395 | ret = pinctrl_pm_select_default_state(dev); | ||
1396 | if (ret < 0) { | ||
1397 | dev_err(dev, "Failed to configure sideband pins: %d\n", ret); | ||
1398 | goto fail_pinctrl; | ||
1399 | } | ||
1400 | |||
1401 | tegra_pcie_init_controller(pcie); | ||
1402 | |||
1403 | pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci); | ||
1404 | if (!pcie->link_state) { | ||
1405 | ret = -ENOMEDIUM; | ||
1406 | goto fail_host_init; | ||
1407 | } | ||
1408 | |||
1409 | name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node); | ||
1410 | if (!name) { | ||
1411 | ret = -ENOMEM; | ||
1412 | goto fail_host_init; | ||
1413 | } | ||
1414 | |||
1415 | pcie->debugfs = debugfs_create_dir(name, NULL); | ||
1416 | if (!pcie->debugfs) | ||
1417 | dev_err(dev, "Failed to create debugfs\n"); | ||
1418 | else | ||
1419 | init_debugfs(pcie); | ||
1420 | |||
1421 | return ret; | ||
1422 | |||
1423 | fail_host_init: | ||
1424 | tegra_pcie_deinit_controller(pcie); | ||
1425 | fail_pinctrl: | ||
1426 | pm_runtime_put_sync(dev); | ||
1427 | fail_pm_get_sync: | ||
1428 | pm_runtime_disable(dev); | ||
1429 | return ret; | ||
1430 | } | ||
1431 | |||
1432 | static int tegra_pcie_dw_probe(struct platform_device *pdev) | ||
1433 | { | ||
1434 | struct device *dev = &pdev->dev; | ||
1435 | struct resource *atu_dma_res; | ||
1436 | struct tegra_pcie_dw *pcie; | ||
1437 | struct resource *dbi_res; | ||
1438 | struct pcie_port *pp; | ||
1439 | struct dw_pcie *pci; | ||
1440 | struct phy **phys; | ||
1441 | char *name; | ||
1442 | int ret; | ||
1443 | u32 i; | ||
1444 | |||
1445 | pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); | ||
1446 | if (!pcie) | ||
1447 | return -ENOMEM; | ||
1448 | |||
1449 | pci = &pcie->pci; | ||
1450 | pci->dev = &pdev->dev; | ||
1451 | pci->ops = &tegra_dw_pcie_ops; | ||
1452 | pp = &pci->pp; | ||
1453 | pcie->dev = &pdev->dev; | ||
1454 | |||
1455 | ret = tegra_pcie_dw_parse_dt(pcie); | ||
1456 | if (ret < 0) { | ||
1457 | dev_err(dev, "Failed to parse device tree: %d\n", ret); | ||
1458 | return ret; | ||
1459 | } | ||
1460 | |||
1461 | ret = tegra_pcie_get_slot_regulators(pcie); | ||
1462 | if (ret < 0) { | ||
1463 | dev_err(dev, "Failed to get slot regulators: %d\n", ret); | ||
1464 | return ret; | ||
1465 | } | ||
1466 | |||
1467 | pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl"); | ||
1468 | if (IS_ERR(pcie->pex_ctl_supply)) { | ||
1469 | ret = PTR_ERR(pcie->pex_ctl_supply); | ||
1470 | if (ret != -EPROBE_DEFER) | ||
1471 | dev_err(dev, "Failed to get regulator: %ld\n", | ||
1472 | PTR_ERR(pcie->pex_ctl_supply)); | ||
1473 | return ret; | ||
1474 | } | ||
1475 | |||
1476 | pcie->core_clk = devm_clk_get(dev, "core"); | ||
1477 | if (IS_ERR(pcie->core_clk)) { | ||
1478 | dev_err(dev, "Failed to get core clock: %ld\n", | ||
1479 | PTR_ERR(pcie->core_clk)); | ||
1480 | return PTR_ERR(pcie->core_clk); | ||
1481 | } | ||
1482 | |||
1483 | pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, | ||
1484 | "appl"); | ||
1485 | if (!pcie->appl_res) { | ||
1486 | dev_err(dev, "Failed to find \"appl\" region\n"); | ||
1487 | return -ENODEV; | ||
1488 | } | ||
1489 | |||
1490 | pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res); | ||
1491 | if (IS_ERR(pcie->appl_base)) | ||
1492 | return PTR_ERR(pcie->appl_base); | ||
1493 | |||
1494 | pcie->core_apb_rst = devm_reset_control_get(dev, "apb"); | ||
1495 | if (IS_ERR(pcie->core_apb_rst)) { | ||
1496 | dev_err(dev, "Failed to get APB reset: %ld\n", | ||
1497 | PTR_ERR(pcie->core_apb_rst)); | ||
1498 | return PTR_ERR(pcie->core_apb_rst); | ||
1499 | } | ||
1500 | |||
1501 | phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL); | ||
1502 | if (!phys) | ||
1503 | return -ENOMEM; | ||
1504 | |||
1505 | for (i = 0; i < pcie->phy_count; i++) { | ||
1506 | name = kasprintf(GFP_KERNEL, "p2u-%u", i); | ||
1507 | if (!name) { | ||
1508 | dev_err(dev, "Failed to create P2U string\n"); | ||
1509 | return -ENOMEM; | ||
1510 | } | ||
1511 | phys[i] = devm_phy_get(dev, name); | ||
1512 | kfree(name); | ||
1513 | if (IS_ERR(phys[i])) { | ||
1514 | ret = PTR_ERR(phys[i]); | ||
1515 | if (ret != -EPROBE_DEFER) | ||
1516 | dev_err(dev, "Failed to get PHY: %d\n", ret); | ||
1517 | return ret; | ||
1518 | } | ||
1519 | } | ||
1520 | |||
1521 | pcie->phys = phys; | ||
1522 | |||
1523 | dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); | ||
1524 | if (!dbi_res) { | ||
1525 | dev_err(dev, "Failed to find \"dbi\" region\n"); | ||
1526 | return -ENODEV; | ||
1527 | } | ||
1528 | pcie->dbi_res = dbi_res; | ||
1529 | |||
1530 | pci->dbi_base = devm_ioremap_resource(dev, dbi_res); | ||
1531 | if (IS_ERR(pci->dbi_base)) | ||
1532 | return PTR_ERR(pci->dbi_base); | ||
1533 | |||
1534 | /* Tegra HW locates DBI2 at a fixed offset from DBI */ | ||
1535 | pci->dbi_base2 = pci->dbi_base + 0x1000; | ||
1536 | |||
1537 | atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, | ||
1538 | "atu_dma"); | ||
1539 | if (!atu_dma_res) { | ||
1540 | dev_err(dev, "Failed to find \"atu_dma\" region\n"); | ||
1541 | return -ENODEV; | ||
1542 | } | ||
1543 | pcie->atu_dma_res = atu_dma_res; | ||
1544 | |||
1545 | pci->atu_base = devm_ioremap_resource(dev, atu_dma_res); | ||
1546 | if (IS_ERR(pci->atu_base)) | ||
1547 | return PTR_ERR(pci->atu_base); | ||
1548 | |||
1549 | pcie->core_rst = devm_reset_control_get(dev, "core"); | ||
1550 | if (IS_ERR(pcie->core_rst)) { | ||
1551 | dev_err(dev, "Failed to get core reset: %ld\n", | ||
1552 | PTR_ERR(pcie->core_rst)); | ||
1553 | return PTR_ERR(pcie->core_rst); | ||
1554 | } | ||
1555 | |||
1556 | pp->irq = platform_get_irq_byname(pdev, "intr"); | ||
1557 | if (pp->irq <= 0) { | ||
1558 | dev_err(dev, "Failed to get \"intr\" interrupt\n"); | ||
1559 | return -ENODEV; | ||
1560 | } | ||
1561 | |||
1562 | ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler, | ||
1563 | IRQF_SHARED, "tegra-pcie-intr", pcie); | ||
1564 | if (ret) { | ||
1565 | dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret); | ||
1566 | return ret; | ||
1567 | } | ||
1568 | |||
1569 | pcie->bpmp = tegra_bpmp_get(dev); | ||
1570 | if (IS_ERR(pcie->bpmp)) | ||
1571 | return PTR_ERR(pcie->bpmp); | ||
1572 | |||
1573 | platform_set_drvdata(pdev, pcie); | ||
1574 | |||
1575 | ret = tegra_pcie_config_rp(pcie); | ||
1576 | if (ret && ret != -ENOMEDIUM) | ||
1577 | goto fail; | ||
1578 | else | ||
1579 | return 0; | ||
1580 | |||
1581 | fail: | ||
1582 | tegra_bpmp_put(pcie->bpmp); | ||
1583 | return ret; | ||
1584 | } | ||
1585 | |||
1586 | static int tegra_pcie_dw_remove(struct platform_device *pdev) | ||
1587 | { | ||
1588 | struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); | ||
1589 | |||
1590 | if (!pcie->link_state) | ||
1591 | return 0; | ||
1592 | |||
1593 | debugfs_remove_recursive(pcie->debugfs); | ||
1594 | tegra_pcie_deinit_controller(pcie); | ||
1595 | pm_runtime_put_sync(pcie->dev); | ||
1596 | pm_runtime_disable(pcie->dev); | ||
1597 | tegra_bpmp_put(pcie->bpmp); | ||
1598 | |||
1599 | return 0; | ||
1600 | } | ||
1601 | |||
1602 | static int tegra_pcie_dw_suspend_late(struct device *dev) | ||
1603 | { | ||
1604 | struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); | ||
1605 | u32 val; | ||
1606 | |||
1607 | if (!pcie->link_state) | ||
1608 | return 0; | ||
1609 | |||
1610 | /* Enable HW_HOT_RST mode */ | ||
1611 | val = appl_readl(pcie, APPL_CTRL); | ||
1612 | val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << | ||
1613 | APPL_CTRL_HW_HOT_RST_MODE_SHIFT); | ||
1614 | val |= APPL_CTRL_HW_HOT_RST_EN; | ||
1615 | appl_writel(pcie, val, APPL_CTRL); | ||
1616 | |||
1617 | return 0; | ||
1618 | } | ||
1619 | |||
1620 | static int tegra_pcie_dw_suspend_noirq(struct device *dev) | ||
1621 | { | ||
1622 | struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); | ||
1623 | |||
1624 | if (!pcie->link_state) | ||
1625 | return 0; | ||
1626 | |||
1627 | /* Save MSI interrupt vector */ | ||
1628 | pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci, | ||
1629 | PORT_LOGIC_MSI_CTRL_INT_0_EN); | ||
1630 | tegra_pcie_downstream_dev_to_D0(pcie); | ||
1631 | tegra_pcie_dw_pme_turnoff(pcie); | ||
1632 | |||
1633 | return __deinit_controller(pcie); | ||
1634 | } | ||
1635 | |||
1636 | static int tegra_pcie_dw_resume_noirq(struct device *dev) | ||
1637 | { | ||
1638 | struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); | ||
1639 | int ret; | ||
1640 | |||
1641 | if (!pcie->link_state) | ||
1642 | return 0; | ||
1643 | |||
1644 | ret = tegra_pcie_config_controller(pcie, true); | ||
1645 | if (ret < 0) | ||
1646 | return ret; | ||
1647 | |||
1648 | ret = tegra_pcie_dw_host_init(&pcie->pci.pp); | ||
1649 | if (ret < 0) { | ||
1650 | dev_err(dev, "Failed to init host: %d\n", ret); | ||
1651 | goto fail_host_init; | ||
1652 | } | ||
1653 | |||
1654 | /* Restore MSI interrupt vector */ | ||
1655 | dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN, | ||
1656 | pcie->msi_ctrl_int); | ||
1657 | |||
1658 | return 0; | ||
1659 | |||
1660 | fail_host_init: | ||
1661 | return __deinit_controller(pcie); | ||
1662 | } | ||
1663 | |||
1664 | static int tegra_pcie_dw_resume_early(struct device *dev) | ||
1665 | { | ||
1666 | struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); | ||
1667 | u32 val; | ||
1668 | |||
1669 | if (!pcie->link_state) | ||
1670 | return 0; | ||
1671 | |||
1672 | /* Disable HW_HOT_RST mode */ | ||
1673 | val = appl_readl(pcie, APPL_CTRL); | ||
1674 | val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << | ||
1675 | APPL_CTRL_HW_HOT_RST_MODE_SHIFT); | ||
1676 | val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST << | ||
1677 | APPL_CTRL_HW_HOT_RST_MODE_SHIFT; | ||
1678 | val &= ~APPL_CTRL_HW_HOT_RST_EN; | ||
1679 | appl_writel(pcie, val, APPL_CTRL); | ||
1680 | |||
1681 | return 0; | ||
1682 | } | ||
1683 | |||
1684 | static void tegra_pcie_dw_shutdown(struct platform_device *pdev) | ||
1685 | { | ||
1686 | struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); | ||
1687 | |||
1688 | if (!pcie->link_state) | ||
1689 | return; | ||
1690 | |||
1691 | debugfs_remove_recursive(pcie->debugfs); | ||
1692 | tegra_pcie_downstream_dev_to_D0(pcie); | ||
1693 | |||
1694 | disable_irq(pcie->pci.pp.irq); | ||
1695 | if (IS_ENABLED(CONFIG_PCI_MSI)) | ||
1696 | disable_irq(pcie->pci.pp.msi_irq); | ||
1697 | |||
1698 | tegra_pcie_dw_pme_turnoff(pcie); | ||
1699 | __deinit_controller(pcie); | ||
1700 | } | ||
1701 | |||
1702 | static const struct of_device_id tegra_pcie_dw_of_match[] = { | ||
1703 | { | ||
1704 | .compatible = "nvidia,tegra194-pcie", | ||
1705 | }, | ||
1706 | {}, | ||
1707 | }; | ||
1708 | |||
1709 | static const struct dev_pm_ops tegra_pcie_dw_pm_ops = { | ||
1710 | .suspend_late = tegra_pcie_dw_suspend_late, | ||
1711 | .suspend_noirq = tegra_pcie_dw_suspend_noirq, | ||
1712 | .resume_noirq = tegra_pcie_dw_resume_noirq, | ||
1713 | .resume_early = tegra_pcie_dw_resume_early, | ||
1714 | }; | ||
1715 | |||
1716 | static struct platform_driver tegra_pcie_dw_driver = { | ||
1717 | .probe = tegra_pcie_dw_probe, | ||
1718 | .remove = tegra_pcie_dw_remove, | ||
1719 | .shutdown = tegra_pcie_dw_shutdown, | ||
1720 | .driver = { | ||
1721 | .name = "tegra194-pcie", | ||
1722 | .pm = &tegra_pcie_dw_pm_ops, | ||
1723 | .of_match_table = tegra_pcie_dw_of_match, | ||
1724 | }, | ||
1725 | }; | ||
1726 | module_platform_driver(tegra_pcie_dw_driver); | ||
1727 | |||
1728 | MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match); | ||
1729 | |||
1730 | MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>"); | ||
1731 | MODULE_DESCRIPTION("NVIDIA PCIe host controller driver"); | ||
1732 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/pci/controller/pci-host-common.c b/drivers/pci/controller/pci-host-common.c
index c742881b5061..c8cb9c5188a4 100644
--- a/drivers/pci/controller/pci-host-common.c
+++ b/drivers/pci/controller/pci-host-common.c
@@ -43,9 +43,8 @@ static struct pci_config_window *gen_pci_init(struct device *dev, | |||
43 | goto err_out; | 43 | goto err_out; |
44 | } | 44 | } |
45 | 45 | ||
46 | err = devm_add_action(dev, gen_pci_unmap_cfg, cfg); | 46 | err = devm_add_action_or_reset(dev, gen_pci_unmap_cfg, cfg); |
47 | if (err) { | 47 | if (err) { |
48 | gen_pci_unmap_cfg(cfg); | ||
49 | goto err_out; | 48 | goto err_out; |
50 | } | 49 | } |
51 | return cfg; | 50 | return cfg; |
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index 0ca73c851e0f..f1f300218fab 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -2809,6 +2809,48 @@ static void put_hvpcibus(struct hv_pcibus_device *hbus) | |||
2809 | complete(&hbus->remove_event); | 2809 | complete(&hbus->remove_event); |
2810 | } | 2810 | } |
2811 | 2811 | ||
2812 | #define HVPCI_DOM_MAP_SIZE (64 * 1024) | ||
2813 | static DECLARE_BITMAP(hvpci_dom_map, HVPCI_DOM_MAP_SIZE); | ||
2814 | |||
2815 | /* | ||
2816 | * PCI domain number 0 is used by emulated devices on Gen1 VMs, so define 0 | ||
2817 | * as invalid for passthrough PCI devices of this driver. | ||
2818 | */ | ||
2819 | #define HVPCI_DOM_INVALID 0 | ||
2820 | |||
2821 | /** | ||
2822 | * hv_get_dom_num() - Get a valid PCI domain number | ||
2823 | * Check if the PCI domain number is in use, and return another number if | ||
2824 | * it is in use. | ||
2825 | * | ||
2826 | * @dom: Requested domain number | ||
2827 | * | ||
2828 | * return: domain number on success, HVPCI_DOM_INVALID on failure | ||
2829 | */ | ||
2830 | static u16 hv_get_dom_num(u16 dom) | ||
2831 | { | ||
2832 | unsigned int i; | ||
2833 | |||
2834 | if (test_and_set_bit(dom, hvpci_dom_map) == 0) | ||
2835 | return dom; | ||
2836 | |||
2837 | for_each_clear_bit(i, hvpci_dom_map, HVPCI_DOM_MAP_SIZE) { | ||
2838 | if (test_and_set_bit(i, hvpci_dom_map) == 0) | ||
2839 | return i; | ||
2840 | } | ||
2841 | |||
2842 | return HVPCI_DOM_INVALID; | ||
2843 | } | ||
2844 | |||
2845 | /** | ||
2846 | * hv_put_dom_num() - Mark the PCI domain number as free | ||
2847 | * @dom: Domain number to be freed | ||
2848 | */ | ||
2849 | static void hv_put_dom_num(u16 dom) | ||
2850 | { | ||
2851 | clear_bit(dom, hvpci_dom_map); | ||
2852 | } | ||
2853 | |||
2812 | /** | 2854 | /** |
2813 | * hv_pci_probe() - New VMBus channel probe, for a root PCI bus | 2855 | * hv_pci_probe() - New VMBus channel probe, for a root PCI bus |
2814 | * @hdev: VMBus's tracking struct for this root PCI bus | 2856 | * @hdev: VMBus's tracking struct for this root PCI bus |
@@ -2820,6 +2862,7 @@ static int hv_pci_probe(struct hv_device *hdev, | |||
2820 | const struct hv_vmbus_device_id *dev_id) | 2862 | const struct hv_vmbus_device_id *dev_id) |
2821 | { | 2863 | { |
2822 | struct hv_pcibus_device *hbus; | 2864 | struct hv_pcibus_device *hbus; |
2865 | u16 dom_req, dom; | ||
2823 | char *name; | 2866 | char *name; |
2824 | int ret; | 2867 | int ret; |
2825 | 2868 | ||
@@ -2835,19 +2878,34 @@ static int hv_pci_probe(struct hv_device *hdev, | |||
2835 | hbus->state = hv_pcibus_init; | 2878 | hbus->state = hv_pcibus_init; |
2836 | 2879 | ||
2837 | /* | 2880 | /* |
2838 | * The PCI bus "domain" is what is called "segment" in ACPI and | 2881 | * The PCI bus "domain" is what is called "segment" in ACPI and other |
2839 | * other specs. Pull it from the instance ID, to get something | 2882 | * specs. Pull it from the instance ID, to get something usually |
2840 | * unique. Bytes 8 and 9 are what is used in Windows guests, so | 2883 | * unique. In rare cases of collision, we will find out another number |
2841 | * do the same thing for consistency. Note that, since this code | 2884 | * not in use. |
2842 | * only runs in a Hyper-V VM, Hyper-V can (and does) guarantee | 2885 | * |
2843 | * that (1) the only domain in use for something that looks like | 2886 | * Note that, since this code only runs in a Hyper-V VM, Hyper-V |
2844 | * a physical PCI bus (which is actually emulated by the | 2887 | * together with this guest driver can guarantee that (1) The only |
2845 | * hypervisor) is domain 0 and (2) there will be no overlap | 2888 | * domain used by Gen1 VMs for something that looks like a physical |
2846 | * between domains derived from these instance IDs in the same | 2889 | * PCI bus (which is actually emulated by the hypervisor) is domain 0. |
2847 | * VM. | 2890 | * (2) There will be no overlap between domains (after fixing possible |
2891 | * collisions) in the same VM. | ||
2848 | */ | 2892 | */ |
2849 | hbus->sysdata.domain = hdev->dev_instance.b[9] | | 2893 | dom_req = hdev->dev_instance.b[5] << 8 | hdev->dev_instance.b[4]; |
2850 | hdev->dev_instance.b[8] << 8; | 2894 | dom = hv_get_dom_num(dom_req); |
2895 | |||
2896 | if (dom == HVPCI_DOM_INVALID) { | ||
2897 | dev_err(&hdev->device, | ||
2898 | "Unable to use dom# 0x%hx or other numbers", dom_req); | ||
2899 | ret = -EINVAL; | ||
2900 | goto free_bus; | ||
2901 | } | ||
2902 | |||
2903 | if (dom != dom_req) | ||
2904 | dev_info(&hdev->device, | ||
2905 | "PCI dom# 0x%hx has collision, using 0x%hx", | ||
2906 | dom_req, dom); | ||
2907 | |||
2908 | hbus->sysdata.domain = dom; | ||
2851 | 2909 | ||
2852 | hbus->hdev = hdev; | 2910 | hbus->hdev = hdev; |
2853 | refcount_set(&hbus->remove_lock, 1); | 2911 | refcount_set(&hbus->remove_lock, 1); |
@@ -2862,7 +2920,7 @@ static int hv_pci_probe(struct hv_device *hdev, | |||
2862 | hbus->sysdata.domain); | 2920 | hbus->sysdata.domain); |
2863 | if (!hbus->wq) { | 2921 | if (!hbus->wq) { |
2864 | ret = -ENOMEM; | 2922 | ret = -ENOMEM; |
2865 | goto free_bus; | 2923 | goto free_dom; |
2866 | } | 2924 | } |
2867 | 2925 | ||
2868 | ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0, | 2926 | ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0, |
@@ -2946,6 +3004,8 @@ close: | |||
2946 | vmbus_close(hdev->channel); | 3004 | vmbus_close(hdev->channel); |
2947 | destroy_wq: | 3005 | destroy_wq: |
2948 | destroy_workqueue(hbus->wq); | 3006 | destroy_workqueue(hbus->wq); |
3007 | free_dom: | ||
3008 | hv_put_dom_num(hbus->sysdata.domain); | ||
2949 | free_bus: | 3009 | free_bus: |
2950 | free_page((unsigned long)hbus); | 3010 | free_page((unsigned long)hbus); |
2951 | return ret; | 3011 | return ret; |
@@ -3008,8 +3068,8 @@ static int hv_pci_remove(struct hv_device *hdev) | |||
3008 | /* Remove the bus from PCI's point of view. */ | 3068 | /* Remove the bus from PCI's point of view. */ |
3009 | pci_lock_rescan_remove(); | 3069 | pci_lock_rescan_remove(); |
3010 | pci_stop_root_bus(hbus->pci_bus); | 3070 | pci_stop_root_bus(hbus->pci_bus); |
3011 | pci_remove_root_bus(hbus->pci_bus); | ||
3012 | hv_pci_remove_slots(hbus); | 3071 | hv_pci_remove_slots(hbus); |
3072 | pci_remove_root_bus(hbus->pci_bus); | ||
3013 | pci_unlock_rescan_remove(); | 3073 | pci_unlock_rescan_remove(); |
3014 | hbus->state = hv_pcibus_removed; | 3074 | hbus->state = hv_pcibus_removed; |
3015 | } | 3075 | } |
@@ -3027,6 +3087,9 @@ static int hv_pci_remove(struct hv_device *hdev) | |||
3027 | put_hvpcibus(hbus); | 3087 | put_hvpcibus(hbus); |
3028 | wait_for_completion(&hbus->remove_event); | 3088 | wait_for_completion(&hbus->remove_event); |
3029 | destroy_workqueue(hbus->wq); | 3089 | destroy_workqueue(hbus->wq); |
3090 | |||
3091 | hv_put_dom_num(hbus->sysdata.domain); | ||
3092 | |||
3030 | free_page((unsigned long)hbus); | 3093 | free_page((unsigned long)hbus); |
3031 | return 0; | 3094 | return 0; |
3032 | } | 3095 | } |
@@ -3058,6 +3121,9 @@ static void __exit exit_hv_pci_drv(void) | |||
3058 | 3121 | ||
3059 | static int __init init_hv_pci_drv(void) | 3122 | static int __init init_hv_pci_drv(void) |
3060 | { | 3123 | { |
3124 | /* Set the invalid domain number's bit, so it will not be used */ | ||
3125 | set_bit(HVPCI_DOM_INVALID, hvpci_dom_map); | ||
3126 | |||
3061 | /* Initialize PCI block r/w interface */ | 3127 | /* Initialize PCI block r/w interface */ |
3062 | hvpci_block_ops.read_block = hv_read_config_block; | 3128 | hvpci_block_ops.read_block = hv_read_config_block; |
3063 | hvpci_block_ops.write_block = hv_write_config_block; | 3129 | hvpci_block_ops.write_block = hv_write_config_block; |
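A minimal user-space sketch of the collision handling used above: hv_get_dom_num() prefers the domain built from instance ID bytes 4 and 5 and falls back to the first free number when that domain is already taken, while hv_put_dom_num() releases it again on error and on remove. The bitmap and helper names below are local to this example, not the driver's.

	#include <stdbool.h>
	#include <stdio.h>

	#define DOM_MAP_SIZE	0x10000		/* 16-bit PCI domain space */
	#define DOM_INVALID	0xffff		/* mirrors HVPCI_DOM_INVALID */

	static bool dom_map[DOM_MAP_SIZE];

	/* Prefer the requested domain; on collision pick any free one. */
	static unsigned int get_dom_num(unsigned int req)
	{
		unsigned int i;

		if (!dom_map[req]) {
			dom_map[req] = true;
			return req;
		}

		for (i = 0; i < DOM_MAP_SIZE; i++) {
			if (!dom_map[i]) {
				dom_map[i] = true;
				return i;
			}
		}

		return DOM_INVALID;
	}

	static void put_dom_num(unsigned int dom)
	{
		dom_map[dom] = false;
	}

	int main(void)
	{
		dom_map[DOM_INVALID] = true;	/* like init_hv_pci_drv() above */

		unsigned int a = get_dom_num(0x1234);	/* -> 0x1234 */
		unsigned int b = get_dom_num(0x1234);	/* collision -> 0x0000 */

		printf("a=0x%04x b=0x%04x\n", a, b);
		put_dom_num(b);
		put_dom_num(a);
		return 0;
	}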
diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c index 9a917b2456f6..673a1725ef38 100644 --- a/drivers/pci/controller/pci-tegra.c +++ b/drivers/pci/controller/pci-tegra.c | |||
@@ -2237,14 +2237,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) | |||
2237 | err = of_pci_get_devfn(port); | 2237 | err = of_pci_get_devfn(port); |
2238 | if (err < 0) { | 2238 | if (err < 0) { |
2239 | dev_err(dev, "failed to parse address: %d\n", err); | 2239 | dev_err(dev, "failed to parse address: %d\n", err); |
2240 | return err; | 2240 | goto err_node_put; |
2241 | } | 2241 | } |
2242 | 2242 | ||
2243 | index = PCI_SLOT(err); | 2243 | index = PCI_SLOT(err); |
2244 | 2244 | ||
2245 | if (index < 1 || index > soc->num_ports) { | 2245 | if (index < 1 || index > soc->num_ports) { |
2246 | dev_err(dev, "invalid port number: %d\n", index); | 2246 | dev_err(dev, "invalid port number: %d\n", index); |
2247 | return -EINVAL; | 2247 | err = -EINVAL; |
2248 | goto err_node_put; | ||
2248 | } | 2249 | } |
2249 | 2250 | ||
2250 | index--; | 2251 | index--; |
@@ -2253,12 +2254,13 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) | |||
2253 | if (err < 0) { | 2254 | if (err < 0) { |
2254 | dev_err(dev, "failed to parse # of lanes: %d\n", | 2255 | dev_err(dev, "failed to parse # of lanes: %d\n", |
2255 | err); | 2256 | err); |
2256 | return err; | 2257 | goto err_node_put; |
2257 | } | 2258 | } |
2258 | 2259 | ||
2259 | if (value > 16) { | 2260 | if (value > 16) { |
2260 | dev_err(dev, "invalid # of lanes: %u\n", value); | 2261 | dev_err(dev, "invalid # of lanes: %u\n", value); |
2261 | return -EINVAL; | 2262 | err = -EINVAL; |
2263 | goto err_node_put; | ||
2262 | } | 2264 | } |
2263 | 2265 | ||
2264 | lanes |= value << (index << 3); | 2266 | lanes |= value << (index << 3); |
@@ -2272,13 +2274,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) | |||
2272 | lane += value; | 2274 | lane += value; |
2273 | 2275 | ||
2274 | rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL); | 2276 | rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL); |
2275 | if (!rp) | 2277 | if (!rp) { |
2276 | return -ENOMEM; | 2278 | err = -ENOMEM; |
2279 | goto err_node_put; | ||
2280 | } | ||
2277 | 2281 | ||
2278 | err = of_address_to_resource(port, 0, &rp->regs); | 2282 | err = of_address_to_resource(port, 0, &rp->regs); |
2279 | if (err < 0) { | 2283 | if (err < 0) { |
2280 | dev_err(dev, "failed to parse address: %d\n", err); | 2284 | dev_err(dev, "failed to parse address: %d\n", err); |
2281 | return err; | 2285 | goto err_node_put; |
2282 | } | 2286 | } |
2283 | 2287 | ||
2284 | INIT_LIST_HEAD(&rp->list); | 2288 | INIT_LIST_HEAD(&rp->list); |
@@ -2330,6 +2334,10 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) | |||
2330 | return err; | 2334 | return err; |
2331 | 2335 | ||
2332 | return 0; | 2336 | return 0; |
2337 | |||
2338 | err_node_put: | ||
2339 | of_node_put(port); | ||
2340 | return err; | ||
2333 | } | 2341 | } |
2334 | 2342 | ||
2335 | /* | 2343 | /* |
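The shape of the Tegra fix above, reduced to a sketch: for_each_child_of_node() takes a reference on each child node it yields, so any early exit from the loop body must drop that reference before returning. parse_one() stands in for the per-port parsing done in the real function and is a placeholder.

	#include <linux/of.h>

	static int parse_one(struct device_node *port);	/* placeholder */

	static int parse_ports(struct device_node *parent)
	{
		struct device_node *port;
		int err;

		for_each_child_of_node(parent, port) {
			err = parse_one(port);
			if (err < 0)
				goto err_node_put;	/* don't leak the ref */
		}

		return 0;

	err_node_put:
		of_node_put(port);
		return err;
	}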
diff --git a/drivers/pci/controller/pcie-iproc-platform.c b/drivers/pci/controller/pcie-iproc-platform.c index 5a3550b6bb29..9ee6200a66f4 100644 --- a/drivers/pci/controller/pcie-iproc-platform.c +++ b/drivers/pci/controller/pcie-iproc-platform.c | |||
@@ -93,12 +93,9 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev) | |||
93 | pcie->need_ib_cfg = of_property_read_bool(np, "dma-ranges"); | 93 | pcie->need_ib_cfg = of_property_read_bool(np, "dma-ranges"); |
94 | 94 | ||
95 | /* PHY use is optional */ | 95 | /* PHY use is optional */ |
96 | pcie->phy = devm_phy_get(dev, "pcie-phy"); | 96 | pcie->phy = devm_phy_optional_get(dev, "pcie-phy"); |
97 | if (IS_ERR(pcie->phy)) { | 97 | if (IS_ERR(pcie->phy)) |
98 | if (PTR_ERR(pcie->phy) == -EPROBE_DEFER) | 98 | return PTR_ERR(pcie->phy); |
99 | return -EPROBE_DEFER; | ||
100 | pcie->phy = NULL; | ||
101 | } | ||
102 | 99 | ||
103 | ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &resources, | 100 | ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &resources, |
104 | &iobase); | 101 | &iobase); |
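Why the simpler iProc error handling is safe: devm_phy_optional_get() returns NULL when no "pcie-phy" is described and an ERR_PTR (including -EPROBE_DEFER) on real failures, and the generic PHY API treats a NULL phy as a no-op. A sketch under that assumption:

	#include <linux/phy/phy.h>

	static int bring_up_optional_phy(struct phy *phy)
	{
		int ret;

		ret = phy_init(phy);		/* returns 0 when phy is NULL */
		if (ret)
			return ret;

		return phy_power_on(phy);	/* likewise a no-op for NULL */
	}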
diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c index 80601e1b939e..626a7c352dfd 100644 --- a/drivers/pci/controller/pcie-mediatek.c +++ b/drivers/pci/controller/pcie-mediatek.c | |||
@@ -73,6 +73,7 @@ | |||
73 | #define PCIE_MSI_VECTOR 0x0c0 | 73 | #define PCIE_MSI_VECTOR 0x0c0 |
74 | 74 | ||
75 | #define PCIE_CONF_VEND_ID 0x100 | 75 | #define PCIE_CONF_VEND_ID 0x100 |
76 | #define PCIE_CONF_DEVICE_ID 0x102 | ||
76 | #define PCIE_CONF_CLASS_ID 0x106 | 77 | #define PCIE_CONF_CLASS_ID 0x106 |
77 | 78 | ||
78 | #define PCIE_INT_MASK 0x420 | 79 | #define PCIE_INT_MASK 0x420 |
@@ -141,12 +142,16 @@ struct mtk_pcie_port; | |||
141 | /** | 142 | /** |
142 | * struct mtk_pcie_soc - differentiate between host generations | 143 | * struct mtk_pcie_soc - differentiate between host generations |
143 | * @need_fix_class_id: whether this host's class ID needed to be fixed or not | 144 | * @need_fix_class_id: whether this host's class ID needed to be fixed or not |
145 | * @need_fix_device_id: whether this host's device ID needed to be fixed or not | ||
146 | * @device_id: device ID which this host needs to be fixed to | ||
144 | * @ops: pointer to configuration access functions | 147 | * @ops: pointer to configuration access functions |
145 | * @startup: pointer to controller setting functions | 148 | * @startup: pointer to controller setting functions |
146 | * @setup_irq: pointer to initialize IRQ functions | 149 | * @setup_irq: pointer to initialize IRQ functions |
147 | */ | 150 | */ |
148 | struct mtk_pcie_soc { | 151 | struct mtk_pcie_soc { |
149 | bool need_fix_class_id; | 152 | bool need_fix_class_id; |
153 | bool need_fix_device_id; | ||
154 | unsigned int device_id; | ||
150 | struct pci_ops *ops; | 155 | struct pci_ops *ops; |
151 | int (*startup)(struct mtk_pcie_port *port); | 156 | int (*startup)(struct mtk_pcie_port *port); |
152 | int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node); | 157 | int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node); |
@@ -630,8 +635,6 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc) | |||
630 | } | 635 | } |
631 | 636 | ||
632 | chained_irq_exit(irqchip, desc); | 637 | chained_irq_exit(irqchip, desc); |
633 | |||
634 | return; | ||
635 | } | 638 | } |
636 | 639 | ||
637 | static int mtk_pcie_setup_irq(struct mtk_pcie_port *port, | 640 | static int mtk_pcie_setup_irq(struct mtk_pcie_port *port, |
@@ -696,6 +699,9 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) | |||
696 | writew(val, port->base + PCIE_CONF_CLASS_ID); | 699 | writew(val, port->base + PCIE_CONF_CLASS_ID); |
697 | } | 700 | } |
698 | 701 | ||
702 | if (soc->need_fix_device_id) | ||
703 | writew(soc->device_id, port->base + PCIE_CONF_DEVICE_ID); | ||
704 | |||
699 | /* 100ms timeout value should be enough for Gen1/2 training */ | 705 | /* 100ms timeout value should be enough for Gen1/2 training */ |
700 | err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, | 706 | err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, |
701 | !!(val & PCIE_PORT_LINKUP_V2), 20, | 707 | !!(val & PCIE_PORT_LINKUP_V2), 20, |
@@ -1216,11 +1222,21 @@ static const struct mtk_pcie_soc mtk_pcie_soc_mt7622 = { | |||
1216 | .setup_irq = mtk_pcie_setup_irq, | 1222 | .setup_irq = mtk_pcie_setup_irq, |
1217 | }; | 1223 | }; |
1218 | 1224 | ||
1225 | static const struct mtk_pcie_soc mtk_pcie_soc_mt7629 = { | ||
1226 | .need_fix_class_id = true, | ||
1227 | .need_fix_device_id = true, | ||
1228 | .device_id = PCI_DEVICE_ID_MEDIATEK_7629, | ||
1229 | .ops = &mtk_pcie_ops_v2, | ||
1230 | .startup = mtk_pcie_startup_port_v2, | ||
1231 | .setup_irq = mtk_pcie_setup_irq, | ||
1232 | }; | ||
1233 | |||
1219 | static const struct of_device_id mtk_pcie_ids[] = { | 1234 | static const struct of_device_id mtk_pcie_ids[] = { |
1220 | { .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 }, | 1235 | { .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 }, |
1221 | { .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 }, | 1236 | { .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 }, |
1222 | { .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_mt2712 }, | 1237 | { .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_mt2712 }, |
1223 | { .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_mt7622 }, | 1238 | { .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_mt7622 }, |
1239 | { .compatible = "mediatek,mt7629-pcie", .data = &mtk_pcie_soc_mt7629 }, | ||
1224 | {}, | 1240 | {}, |
1225 | }; | 1241 | }; |
1226 | 1242 | ||
diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c index 672e633601c7..a45a6447b01d 100644 --- a/drivers/pci/controller/pcie-mobiveil.c +++ b/drivers/pci/controller/pcie-mobiveil.c | |||
@@ -88,6 +88,7 @@ | |||
88 | #define AMAP_CTRL_TYPE_MASK 3 | 88 | #define AMAP_CTRL_TYPE_MASK 3 |
89 | 89 | ||
90 | #define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win) | 90 | #define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win) |
91 | #define PAB_EXT_PEX_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0xb4a0, win) | ||
91 | #define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win) | 92 | #define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win) |
92 | #define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win) | 93 | #define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win) |
93 | #define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win) | 94 | #define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win) |
@@ -462,7 +463,7 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie) | |||
462 | } | 463 | } |
463 | 464 | ||
464 | static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, | 465 | static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, |
465 | u64 pci_addr, u32 type, u64 size) | 466 | u64 cpu_addr, u64 pci_addr, u32 type, u64 size) |
466 | { | 467 | { |
467 | u32 value; | 468 | u32 value; |
468 | u64 size64 = ~(size - 1); | 469 | u64 size64 = ~(size - 1); |
@@ -482,7 +483,10 @@ static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, | |||
482 | csr_writel(pcie, upper_32_bits(size64), | 483 | csr_writel(pcie, upper_32_bits(size64), |
483 | PAB_EXT_PEX_AMAP_SIZEN(win_num)); | 484 | PAB_EXT_PEX_AMAP_SIZEN(win_num)); |
484 | 485 | ||
485 | csr_writel(pcie, pci_addr, PAB_PEX_AMAP_AXI_WIN(win_num)); | 486 | csr_writel(pcie, lower_32_bits(cpu_addr), |
487 | PAB_PEX_AMAP_AXI_WIN(win_num)); | ||
488 | csr_writel(pcie, upper_32_bits(cpu_addr), | ||
489 | PAB_EXT_PEX_AMAP_AXI_WIN(win_num)); | ||
486 | 490 | ||
487 | csr_writel(pcie, lower_32_bits(pci_addr), | 491 | csr_writel(pcie, lower_32_bits(pci_addr), |
488 | PAB_PEX_AMAP_PEX_WIN_L(win_num)); | 492 | PAB_PEX_AMAP_PEX_WIN_L(win_num)); |
@@ -624,7 +628,7 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie) | |||
624 | CFG_WINDOW_TYPE, resource_size(pcie->ob_io_res)); | 628 | CFG_WINDOW_TYPE, resource_size(pcie->ob_io_res)); |
625 | 629 | ||
626 | /* memory inbound translation window */ | 630 | /* memory inbound translation window */ |
627 | program_ib_windows(pcie, WIN_NUM_0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE); | 631 | program_ib_windows(pcie, WIN_NUM_0, 0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE); |
628 | 632 | ||
629 | /* Get the I/O and memory ranges from DT */ | 633 | /* Get the I/O and memory ranges from DT */ |
630 | resource_list_for_each_entry(win, &pcie->resources) { | 634 | resource_list_for_each_entry(win, &pcie->resources) { |
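With the extra cpu_addr argument, program_ib_windows() now programs both halves of the AXI target: the low word into PAB_PEX_AMAP_AXI_WIN and the high word into the new PAB_EXT_PEX_AMAP_AXI_WIN. An invented example of a second inbound window (the window number and addresses are illustrative only):

	/* Map PCI address 0 to CPU/AXI address 0x20_0000_0000. */
	program_ib_windows(pcie, WIN_NUM_1,
			   0x2000000000ULL,	/* cpu_addr (AXI side) */
			   0x0,			/* pci_addr */
			   MEM_WINDOW_TYPE, IB_WIN_SIZE);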
diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c index 8d20f1793a61..ef8e677ce9d1 100644 --- a/drivers/pci/controller/pcie-rockchip-host.c +++ b/drivers/pci/controller/pcie-rockchip-host.c | |||
@@ -608,29 +608,29 @@ static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip) | |||
608 | 608 | ||
609 | rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v"); | 609 | rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v"); |
610 | if (IS_ERR(rockchip->vpcie12v)) { | 610 | if (IS_ERR(rockchip->vpcie12v)) { |
611 | if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER) | 611 | if (PTR_ERR(rockchip->vpcie12v) != -ENODEV) |
612 | return -EPROBE_DEFER; | 612 | return PTR_ERR(rockchip->vpcie12v); |
613 | dev_info(dev, "no vpcie12v regulator found\n"); | 613 | dev_info(dev, "no vpcie12v regulator found\n"); |
614 | } | 614 | } |
615 | 615 | ||
616 | rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3"); | 616 | rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3"); |
617 | if (IS_ERR(rockchip->vpcie3v3)) { | 617 | if (IS_ERR(rockchip->vpcie3v3)) { |
618 | if (PTR_ERR(rockchip->vpcie3v3) == -EPROBE_DEFER) | 618 | if (PTR_ERR(rockchip->vpcie3v3) != -ENODEV) |
619 | return -EPROBE_DEFER; | 619 | return PTR_ERR(rockchip->vpcie3v3); |
620 | dev_info(dev, "no vpcie3v3 regulator found\n"); | 620 | dev_info(dev, "no vpcie3v3 regulator found\n"); |
621 | } | 621 | } |
622 | 622 | ||
623 | rockchip->vpcie1v8 = devm_regulator_get_optional(dev, "vpcie1v8"); | 623 | rockchip->vpcie1v8 = devm_regulator_get_optional(dev, "vpcie1v8"); |
624 | if (IS_ERR(rockchip->vpcie1v8)) { | 624 | if (IS_ERR(rockchip->vpcie1v8)) { |
625 | if (PTR_ERR(rockchip->vpcie1v8) == -EPROBE_DEFER) | 625 | if (PTR_ERR(rockchip->vpcie1v8) != -ENODEV) |
626 | return -EPROBE_DEFER; | 626 | return PTR_ERR(rockchip->vpcie1v8); |
627 | dev_info(dev, "no vpcie1v8 regulator found\n"); | 627 | dev_info(dev, "no vpcie1v8 regulator found\n"); |
628 | } | 628 | } |
629 | 629 | ||
630 | rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9"); | 630 | rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9"); |
631 | if (IS_ERR(rockchip->vpcie0v9)) { | 631 | if (IS_ERR(rockchip->vpcie0v9)) { |
632 | if (PTR_ERR(rockchip->vpcie0v9) == -EPROBE_DEFER) | 632 | if (PTR_ERR(rockchip->vpcie0v9) != -ENODEV) |
633 | return -EPROBE_DEFER; | 633 | return PTR_ERR(rockchip->vpcie0v9); |
634 | dev_info(dev, "no vpcie0v9 regulator found\n"); | 634 | dev_info(dev, "no vpcie0v9 regulator found\n"); |
635 | } | 635 | } |
636 | 636 | ||
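Note that when a regulator is genuinely absent (-ENODEV) the pointer is left as an ERR_PTR, so the later consumers keep their IS_ERR() guard before enabling. Roughly, as in the existing rockchip_pcie code paths:

	if (!IS_ERR(rockchip->vpcie12v)) {
		err = regulator_enable(rockchip->vpcie12v);
		if (err)
			dev_err(dev, "failed to enable vpcie12v regulator\n");
	}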
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index 4575e0c6dc4b..a35d3f3996d7 100644 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c | |||
@@ -31,6 +31,9 @@ | |||
31 | #define PCI_REG_VMLOCK 0x70 | 31 | #define PCI_REG_VMLOCK 0x70 |
32 | #define MB2_SHADOW_EN(vmlock) (vmlock & 0x2) | 32 | #define MB2_SHADOW_EN(vmlock) (vmlock & 0x2) |
33 | 33 | ||
34 | #define MB2_SHADOW_OFFSET 0x2000 | ||
35 | #define MB2_SHADOW_SIZE 16 | ||
36 | |||
34 | enum vmd_features { | 37 | enum vmd_features { |
35 | /* | 38 | /* |
36 | * Device may contain registers which hint the physical location of the | 39 | * Device may contain registers which hint the physical location of the |
@@ -94,6 +97,7 @@ struct vmd_dev { | |||
94 | struct resource resources[3]; | 97 | struct resource resources[3]; |
95 | struct irq_domain *irq_domain; | 98 | struct irq_domain *irq_domain; |
96 | struct pci_bus *bus; | 99 | struct pci_bus *bus; |
100 | u8 busn_start; | ||
97 | 101 | ||
98 | struct dma_map_ops dma_ops; | 102 | struct dma_map_ops dma_ops; |
99 | struct dma_domain dma_domain; | 103 | struct dma_domain dma_domain; |
@@ -440,7 +444,8 @@ static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus, | |||
440 | unsigned int devfn, int reg, int len) | 444 | unsigned int devfn, int reg, int len) |
441 | { | 445 | { |
442 | char __iomem *addr = vmd->cfgbar + | 446 | char __iomem *addr = vmd->cfgbar + |
443 | (bus->number << 20) + (devfn << 12) + reg; | 447 | ((bus->number - vmd->busn_start) << 20) + |
448 | (devfn << 12) + reg; | ||
444 | 449 | ||
445 | if ((addr - vmd->cfgbar) + len >= | 450 | if ((addr - vmd->cfgbar) + len >= |
446 | resource_size(&vmd->dev->resource[VMD_CFGBAR])) | 451 | resource_size(&vmd->dev->resource[VMD_CFGBAR])) |
@@ -563,7 +568,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) | |||
563 | unsigned long flags; | 568 | unsigned long flags; |
564 | LIST_HEAD(resources); | 569 | LIST_HEAD(resources); |
565 | resource_size_t offset[2] = {0}; | 570 | resource_size_t offset[2] = {0}; |
566 | resource_size_t membar2_offset = 0x2000, busn_start = 0; | 571 | resource_size_t membar2_offset = 0x2000; |
567 | struct pci_bus *child; | 572 | struct pci_bus *child; |
568 | 573 | ||
569 | /* | 574 | /* |
@@ -576,7 +581,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) | |||
576 | u32 vmlock; | 581 | u32 vmlock; |
577 | int ret; | 582 | int ret; |
578 | 583 | ||
579 | membar2_offset = 0x2018; | 584 | membar2_offset = MB2_SHADOW_OFFSET + MB2_SHADOW_SIZE; |
580 | ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock); | 585 | ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock); |
581 | if (ret || vmlock == ~0) | 586 | if (ret || vmlock == ~0) |
582 | return -ENODEV; | 587 | return -ENODEV; |
@@ -588,9 +593,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) | |||
588 | if (!membar2) | 593 | if (!membar2) |
589 | return -ENOMEM; | 594 | return -ENOMEM; |
590 | offset[0] = vmd->dev->resource[VMD_MEMBAR1].start - | 595 | offset[0] = vmd->dev->resource[VMD_MEMBAR1].start - |
591 | readq(membar2 + 0x2008); | 596 | readq(membar2 + MB2_SHADOW_OFFSET); |
592 | offset[1] = vmd->dev->resource[VMD_MEMBAR2].start - | 597 | offset[1] = vmd->dev->resource[VMD_MEMBAR2].start - |
593 | readq(membar2 + 0x2010); | 598 | readq(membar2 + MB2_SHADOW_OFFSET + 8); |
594 | pci_iounmap(vmd->dev, membar2); | 599 | pci_iounmap(vmd->dev, membar2); |
595 | } | 600 | } |
596 | } | 601 | } |
@@ -606,14 +611,14 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) | |||
606 | pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig); | 611 | pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig); |
607 | if (BUS_RESTRICT_CAP(vmcap) && | 612 | if (BUS_RESTRICT_CAP(vmcap) && |
608 | (BUS_RESTRICT_CFG(vmconfig) == 0x1)) | 613 | (BUS_RESTRICT_CFG(vmconfig) == 0x1)) |
609 | busn_start = 128; | 614 | vmd->busn_start = 128; |
610 | } | 615 | } |
611 | 616 | ||
612 | res = &vmd->dev->resource[VMD_CFGBAR]; | 617 | res = &vmd->dev->resource[VMD_CFGBAR]; |
613 | vmd->resources[0] = (struct resource) { | 618 | vmd->resources[0] = (struct resource) { |
614 | .name = "VMD CFGBAR", | 619 | .name = "VMD CFGBAR", |
615 | .start = busn_start, | 620 | .start = vmd->busn_start, |
616 | .end = busn_start + (resource_size(res) >> 20) - 1, | 621 | .end = vmd->busn_start + (resource_size(res) >> 20) - 1, |
617 | .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, | 622 | .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, |
618 | }; | 623 | }; |
619 | 624 | ||
@@ -681,8 +686,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) | |||
681 | pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]); | 686 | pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]); |
682 | pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]); | 687 | pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]); |
683 | 688 | ||
684 | vmd->bus = pci_create_root_bus(&vmd->dev->dev, busn_start, &vmd_ops, | 689 | vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start, |
685 | sd, &resources); | 690 | &vmd_ops, sd, &resources); |
686 | if (!vmd->bus) { | 691 | if (!vmd->bus) { |
687 | pci_free_resource_list(&resources); | 692 | pci_free_resource_list(&resources); |
688 | irq_domain_remove(vmd->irq_domain); | 693 | irq_domain_remove(vmd->irq_domain); |
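A standalone illustration of the corrected CFGBAR indexing: when bus restriction is active the first bus behind VMD is 128 (0x80), so config offsets must be computed relative to busn_start rather than to absolute bus 0. Values below are examples.

	#include <stdio.h>

	int main(void)
	{
		unsigned int busn_start = 0x80;	/* BUS_RESTRICT_CFG == 1 */
		unsigned int bus = 0x81, devfn = 0x08, reg = 0x10;

		unsigned long off = ((unsigned long)(bus - busn_start) << 20) +
				    (devfn << 12) + reg;

		printf("cfgbar offset = 0x%lx\n", off);	/* 0x108010 */
		return 0;
	}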
diff --git a/drivers/pci/hotplug/cpci_hotplug_core.c b/drivers/pci/hotplug/cpci_hotplug_core.c index 603eadf3d965..d0559d2faf50 100644 --- a/drivers/pci/hotplug/cpci_hotplug_core.c +++ b/drivers/pci/hotplug/cpci_hotplug_core.c | |||
@@ -563,7 +563,6 @@ cleanup_slots(void) | |||
563 | } | 563 | } |
564 | cleanup_null: | 564 | cleanup_null: |
565 | up_write(&list_rwsem); | 565 | up_write(&list_rwsem); |
566 | return; | ||
567 | } | 566 | } |
568 | 567 | ||
569 | int | 568 | int |
diff --git a/drivers/pci/hotplug/cpqphp_core.c b/drivers/pci/hotplug/cpqphp_core.c index 16bbb183695a..b8aacb41a83c 100644 --- a/drivers/pci/hotplug/cpqphp_core.c +++ b/drivers/pci/hotplug/cpqphp_core.c | |||
@@ -173,7 +173,6 @@ static void pci_print_IRQ_route(void) | |||
173 | dbg("%d %d %d %d\n", tbus, tdevice >> 3, tdevice & 0x7, tslot); | 173 | dbg("%d %d %d %d\n", tbus, tdevice >> 3, tdevice & 0x7, tslot); |
174 | 174 | ||
175 | } | 175 | } |
176 | return; | ||
177 | } | 176 | } |
178 | 177 | ||
179 | 178 | ||
diff --git a/drivers/pci/hotplug/cpqphp_ctrl.c b/drivers/pci/hotplug/cpqphp_ctrl.c index b7f4e1f099d9..68de958a9be8 100644 --- a/drivers/pci/hotplug/cpqphp_ctrl.c +++ b/drivers/pci/hotplug/cpqphp_ctrl.c | |||
@@ -1872,8 +1872,6 @@ static void interrupt_event_handler(struct controller *ctrl) | |||
1872 | } | 1872 | } |
1873 | } /* End of FOR loop */ | 1873 | } /* End of FOR loop */ |
1874 | } | 1874 | } |
1875 | |||
1876 | return; | ||
1877 | } | 1875 | } |
1878 | 1876 | ||
1879 | 1877 | ||
@@ -1943,8 +1941,6 @@ void cpqhp_pushbutton_thread(struct timer_list *t) | |||
1943 | 1941 | ||
1944 | p_slot->state = STATIC_STATE; | 1942 | p_slot->state = STATIC_STATE; |
1945 | } | 1943 | } |
1946 | |||
1947 | return; | ||
1948 | } | 1944 | } |
1949 | 1945 | ||
1950 | 1946 | ||
diff --git a/drivers/pci/hotplug/cpqphp_nvram.h b/drivers/pci/hotplug/cpqphp_nvram.h index 918ff8dbfe62..70e879b6a23f 100644 --- a/drivers/pci/hotplug/cpqphp_nvram.h +++ b/drivers/pci/hotplug/cpqphp_nvram.h | |||
@@ -16,10 +16,7 @@ | |||
16 | 16 | ||
17 | #ifndef CONFIG_HOTPLUG_PCI_COMPAQ_NVRAM | 17 | #ifndef CONFIG_HOTPLUG_PCI_COMPAQ_NVRAM |
18 | 18 | ||
19 | static inline void compaq_nvram_init(void __iomem *rom_start) | 19 | static inline void compaq_nvram_init(void __iomem *rom_start) { } |
20 | { | ||
21 | return; | ||
22 | } | ||
23 | 20 | ||
24 | static inline int compaq_nvram_load(void __iomem *rom_start, struct controller *ctrl) | 21 | static inline int compaq_nvram_load(void __iomem *rom_start, struct controller *ctrl) |
25 | { | 22 | { |
diff --git a/drivers/pci/hotplug/ibmphp_res.c b/drivers/pci/hotplug/ibmphp_res.c index 5e8caf7a4452..5c93aa14f0de 100644 --- a/drivers/pci/hotplug/ibmphp_res.c +++ b/drivers/pci/hotplug/ibmphp_res.c | |||
@@ -1941,6 +1941,7 @@ static int __init update_bridge_ranges(struct bus_node **bus) | |||
1941 | break; | 1941 | break; |
1942 | case PCI_HEADER_TYPE_BRIDGE: | 1942 | case PCI_HEADER_TYPE_BRIDGE: |
1943 | function = 0x8; | 1943 | function = 0x8; |
1944 | /* fall through */ | ||
1944 | case PCI_HEADER_TYPE_MULTIBRIDGE: | 1945 | case PCI_HEADER_TYPE_MULTIBRIDGE: |
1945 | /* We assume here that only 1 bus behind the bridge | 1946 | /* We assume here that only 1 bus behind the bridge |
1946 | TO DO: add functionality for several: | 1947 | TO DO: add functionality for several: |
diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h index 8c51a04b8083..654c972b8ea0 100644 --- a/drivers/pci/hotplug/pciehp.h +++ b/drivers/pci/hotplug/pciehp.h | |||
@@ -110,9 +110,9 @@ struct controller { | |||
110 | * | 110 | * |
111 | * @OFF_STATE: slot is powered off, no subordinate devices are enumerated | 111 | * @OFF_STATE: slot is powered off, no subordinate devices are enumerated |
112 | * @BLINKINGON_STATE: slot will be powered on after the 5 second delay, | 112 | * @BLINKINGON_STATE: slot will be powered on after the 5 second delay, |
113 | * green led is blinking | 113 | * Power Indicator is blinking |
114 | * @BLINKINGOFF_STATE: slot will be powered off after the 5 second delay, | 114 | * @BLINKINGOFF_STATE: slot will be powered off after the 5 second delay, |
115 | * green led is blinking | 115 | * Power Indicator is blinking |
116 | * @POWERON_STATE: slot is currently powering on | 116 | * @POWERON_STATE: slot is currently powering on |
117 | * @POWEROFF_STATE: slot is currently powering off | 117 | * @POWEROFF_STATE: slot is currently powering off |
118 | * @ON_STATE: slot is powered on, subordinate devices have been enumerated | 118 | * @ON_STATE: slot is powered on, subordinate devices have been enumerated |
@@ -167,12 +167,11 @@ int pciehp_power_on_slot(struct controller *ctrl); | |||
167 | void pciehp_power_off_slot(struct controller *ctrl); | 167 | void pciehp_power_off_slot(struct controller *ctrl); |
168 | void pciehp_get_power_status(struct controller *ctrl, u8 *status); | 168 | void pciehp_get_power_status(struct controller *ctrl, u8 *status); |
169 | 169 | ||
170 | void pciehp_set_attention_status(struct controller *ctrl, u8 status); | 170 | #define INDICATOR_NOOP -1 /* Leave indicator unchanged */ |
171 | void pciehp_set_indicators(struct controller *ctrl, int pwr, int attn); | ||
172 | |||
171 | void pciehp_get_latch_status(struct controller *ctrl, u8 *status); | 173 | void pciehp_get_latch_status(struct controller *ctrl, u8 *status); |
172 | int pciehp_query_power_fault(struct controller *ctrl); | 174 | int pciehp_query_power_fault(struct controller *ctrl); |
173 | void pciehp_green_led_on(struct controller *ctrl); | ||
174 | void pciehp_green_led_off(struct controller *ctrl); | ||
175 | void pciehp_green_led_blink(struct controller *ctrl); | ||
176 | bool pciehp_card_present(struct controller *ctrl); | 175 | bool pciehp_card_present(struct controller *ctrl); |
177 | bool pciehp_card_present_or_link_active(struct controller *ctrl); | 176 | bool pciehp_card_present_or_link_active(struct controller *ctrl); |
178 | int pciehp_check_link_status(struct controller *ctrl); | 177 | int pciehp_check_link_status(struct controller *ctrl); |
diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c index 6ad0d86762cb..b3122c151b80 100644 --- a/drivers/pci/hotplug/pciehp_core.c +++ b/drivers/pci/hotplug/pciehp_core.c | |||
@@ -95,15 +95,20 @@ static void cleanup_slot(struct controller *ctrl) | |||
95 | } | 95 | } |
96 | 96 | ||
97 | /* | 97 | /* |
98 | * set_attention_status - Turns the Amber LED for a slot on, off or blink | 98 | * set_attention_status - Turns the Attention Indicator on, off or blinking |
99 | */ | 99 | */ |
100 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | 100 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) |
101 | { | 101 | { |
102 | struct controller *ctrl = to_ctrl(hotplug_slot); | 102 | struct controller *ctrl = to_ctrl(hotplug_slot); |
103 | struct pci_dev *pdev = ctrl->pcie->port; | 103 | struct pci_dev *pdev = ctrl->pcie->port; |
104 | 104 | ||
105 | if (status) | ||
106 | status <<= PCI_EXP_SLTCTL_ATTN_IND_SHIFT; | ||
107 | else | ||
108 | status = PCI_EXP_SLTCTL_ATTN_IND_OFF; | ||
109 | |||
105 | pci_config_pm_runtime_get(pdev); | 110 | pci_config_pm_runtime_get(pdev); |
106 | pciehp_set_attention_status(ctrl, status); | 111 | pciehp_set_indicators(ctrl, INDICATOR_NOOP, status); |
107 | pci_config_pm_runtime_put(pdev); | 112 | pci_config_pm_runtime_put(pdev); |
108 | return 0; | 113 | return 0; |
109 | } | 114 | } |
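The shift in set_attention_status() exists because the sysfs interface still takes 0 (off), 1 (on), or 2 (blink), while pciehp_set_indicators() now expects register-encoded Slot Control values. A quick standalone check of the mapping, using the constants from include/uapi/linux/pci_regs.h (Attention Indicator Control is bits 7:6):

	#include <assert.h>

	#define PCI_EXP_SLTCTL_ATTN_IND_SHIFT	6
	#define PCI_EXP_SLTCTL_ATTN_IND_ON	0x0040
	#define PCI_EXP_SLTCTL_ATTN_IND_BLINK	0x0080
	#define PCI_EXP_SLTCTL_ATTN_IND_OFF	0x00c0

	int main(void)
	{
		/* sysfs values 1 and 2 shift straight into the field */
		assert((1 << PCI_EXP_SLTCTL_ATTN_IND_SHIFT) ==
		       PCI_EXP_SLTCTL_ATTN_IND_ON);
		assert((2 << PCI_EXP_SLTCTL_ATTN_IND_SHIFT) ==
		       PCI_EXP_SLTCTL_ATTN_IND_BLINK);
		/* sysfs value 0 is translated explicitly to the OFF encoding */
		return 0;
	}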
diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c index 631ced0ab28a..21af7b16d7a4 100644 --- a/drivers/pci/hotplug/pciehp_ctrl.c +++ b/drivers/pci/hotplug/pciehp_ctrl.c | |||
@@ -30,7 +30,10 @@ | |||
30 | 30 | ||
31 | static void set_slot_off(struct controller *ctrl) | 31 | static void set_slot_off(struct controller *ctrl) |
32 | { | 32 | { |
33 | /* turn off slot, turn on Amber LED, turn off Green LED if supported*/ | 33 | /* |
34 | * Turn off slot, turn on attention indicator, turn off power | ||
35 | * indicator | ||
36 | */ | ||
34 | if (POWER_CTRL(ctrl)) { | 37 | if (POWER_CTRL(ctrl)) { |
35 | pciehp_power_off_slot(ctrl); | 38 | pciehp_power_off_slot(ctrl); |
36 | 39 | ||
@@ -42,8 +45,8 @@ static void set_slot_off(struct controller *ctrl) | |||
42 | msleep(1000); | 45 | msleep(1000); |
43 | } | 46 | } |
44 | 47 | ||
45 | pciehp_green_led_off(ctrl); | 48 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, |
46 | pciehp_set_attention_status(ctrl, 1); | 49 | PCI_EXP_SLTCTL_ATTN_IND_ON); |
47 | } | 50 | } |
48 | 51 | ||
49 | /** | 52 | /** |
@@ -65,7 +68,8 @@ static int board_added(struct controller *ctrl) | |||
65 | return retval; | 68 | return retval; |
66 | } | 69 | } |
67 | 70 | ||
68 | pciehp_green_led_blink(ctrl); | 71 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, |
72 | INDICATOR_NOOP); | ||
69 | 73 | ||
70 | /* Check link training status */ | 74 | /* Check link training status */ |
71 | retval = pciehp_check_link_status(ctrl); | 75 | retval = pciehp_check_link_status(ctrl); |
@@ -90,8 +94,8 @@ static int board_added(struct controller *ctrl) | |||
90 | } | 94 | } |
91 | } | 95 | } |
92 | 96 | ||
93 | pciehp_green_led_on(ctrl); | 97 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, |
94 | pciehp_set_attention_status(ctrl, 0); | 98 | PCI_EXP_SLTCTL_ATTN_IND_OFF); |
95 | return 0; | 99 | return 0; |
96 | 100 | ||
97 | err_exit: | 101 | err_exit: |
@@ -100,7 +104,7 @@ err_exit: | |||
100 | } | 104 | } |
101 | 105 | ||
102 | /** | 106 | /** |
103 | * remove_board - Turns off slot and LEDs | 107 | * remove_board - Turn off slot and Power Indicator |
104 | * @ctrl: PCIe hotplug controller where board is being removed | 108 | * @ctrl: PCIe hotplug controller where board is being removed |
105 | * @safe_removal: whether the board is safely removed (versus surprise removed) | 109 | * @safe_removal: whether the board is safely removed (versus surprise removed) |
106 | */ | 110 | */ |
@@ -123,8 +127,8 @@ static void remove_board(struct controller *ctrl, bool safe_removal) | |||
123 | &ctrl->pending_events); | 127 | &ctrl->pending_events); |
124 | } | 128 | } |
125 | 129 | ||
126 | /* turn off Green LED */ | 130 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, |
127 | pciehp_green_led_off(ctrl); | 131 | INDICATOR_NOOP); |
128 | } | 132 | } |
129 | 133 | ||
130 | static int pciehp_enable_slot(struct controller *ctrl); | 134 | static int pciehp_enable_slot(struct controller *ctrl); |
@@ -171,9 +175,9 @@ void pciehp_handle_button_press(struct controller *ctrl) | |||
171 | ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n", | 175 | ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n", |
172 | slot_name(ctrl)); | 176 | slot_name(ctrl)); |
173 | } | 177 | } |
174 | /* blink green LED and turn off amber */ | 178 | /* blink power indicator and turn off attention */ |
175 | pciehp_green_led_blink(ctrl); | 179 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, |
176 | pciehp_set_attention_status(ctrl, 0); | 180 | PCI_EXP_SLTCTL_ATTN_IND_OFF); |
177 | schedule_delayed_work(&ctrl->button_work, 5 * HZ); | 181 | schedule_delayed_work(&ctrl->button_work, 5 * HZ); |
178 | break; | 182 | break; |
179 | case BLINKINGOFF_STATE: | 183 | case BLINKINGOFF_STATE: |
@@ -187,12 +191,13 @@ void pciehp_handle_button_press(struct controller *ctrl) | |||
187 | cancel_delayed_work(&ctrl->button_work); | 191 | cancel_delayed_work(&ctrl->button_work); |
188 | if (ctrl->state == BLINKINGOFF_STATE) { | 192 | if (ctrl->state == BLINKINGOFF_STATE) { |
189 | ctrl->state = ON_STATE; | 193 | ctrl->state = ON_STATE; |
190 | pciehp_green_led_on(ctrl); | 194 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, |
195 | PCI_EXP_SLTCTL_ATTN_IND_OFF); | ||
191 | } else { | 196 | } else { |
192 | ctrl->state = OFF_STATE; | 197 | ctrl->state = OFF_STATE; |
193 | pciehp_green_led_off(ctrl); | 198 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, |
199 | PCI_EXP_SLTCTL_ATTN_IND_OFF); | ||
194 | } | 200 | } |
195 | pciehp_set_attention_status(ctrl, 0); | ||
196 | ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n", | 201 | ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n", |
197 | slot_name(ctrl)); | 202 | slot_name(ctrl)); |
198 | break; | 203 | break; |
@@ -310,7 +315,9 @@ static int pciehp_enable_slot(struct controller *ctrl) | |||
310 | pm_runtime_get_sync(&ctrl->pcie->port->dev); | 315 | pm_runtime_get_sync(&ctrl->pcie->port->dev); |
311 | ret = __pciehp_enable_slot(ctrl); | 316 | ret = __pciehp_enable_slot(ctrl); |
312 | if (ret && ATTN_BUTTN(ctrl)) | 317 | if (ret && ATTN_BUTTN(ctrl)) |
313 | pciehp_green_led_off(ctrl); /* may be blinking */ | 318 | /* may be blinking */ |
319 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, | ||
320 | INDICATOR_NOOP); | ||
314 | pm_runtime_put(&ctrl->pcie->port->dev); | 321 | pm_runtime_put(&ctrl->pcie->port->dev); |
315 | 322 | ||
316 | mutex_lock(&ctrl->state_lock); | 323 | mutex_lock(&ctrl->state_lock); |
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c index bd990e3371e3..1a522c1c4177 100644 --- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c | |||
@@ -418,65 +418,40 @@ int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot, | |||
418 | return 0; | 418 | return 0; |
419 | } | 419 | } |
420 | 420 | ||
421 | void pciehp_set_attention_status(struct controller *ctrl, u8 value) | 421 | /** |
422 | * pciehp_set_indicators() - set attention indicator, power indicator, or both | ||
423 | * @ctrl: PCIe hotplug controller | ||
424 | * @pwr: one of: | ||
425 | * PCI_EXP_SLTCTL_PWR_IND_ON | ||
426 | * PCI_EXP_SLTCTL_PWR_IND_BLINK | ||
427 | * PCI_EXP_SLTCTL_PWR_IND_OFF | ||
428 | * @attn: one of: | ||
429 | * PCI_EXP_SLTCTL_ATTN_IND_ON | ||
430 | * PCI_EXP_SLTCTL_ATTN_IND_BLINK | ||
431 | * PCI_EXP_SLTCTL_ATTN_IND_OFF | ||
432 | * | ||
433 | * Either @pwr or @attn can also be INDICATOR_NOOP to leave that indicator | ||
434 | * unchanged. | ||
435 | */ | ||
436 | void pciehp_set_indicators(struct controller *ctrl, int pwr, int attn) | ||
422 | { | 437 | { |
423 | u16 slot_cmd; | 438 | u16 cmd = 0, mask = 0; |
424 | 439 | ||
425 | if (!ATTN_LED(ctrl)) | 440 | if (PWR_LED(ctrl) && pwr != INDICATOR_NOOP) { |
426 | return; | 441 | cmd |= (pwr & PCI_EXP_SLTCTL_PIC); |
427 | 442 | mask |= PCI_EXP_SLTCTL_PIC; | |
428 | switch (value) { | ||
429 | case 0: /* turn off */ | ||
430 | slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_OFF; | ||
431 | break; | ||
432 | case 1: /* turn on */ | ||
433 | slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_ON; | ||
434 | break; | ||
435 | case 2: /* turn blink */ | ||
436 | slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_BLINK; | ||
437 | break; | ||
438 | default: | ||
439 | return; | ||
440 | } | 443 | } |
441 | pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC); | ||
442 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, | ||
443 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); | ||
444 | } | ||
445 | 444 | ||
446 | void pciehp_green_led_on(struct controller *ctrl) | 445 | if (ATTN_LED(ctrl) && attn != INDICATOR_NOOP) { |
447 | { | 446 | cmd |= (attn & PCI_EXP_SLTCTL_AIC); |
448 | if (!PWR_LED(ctrl)) | 447 | mask |= PCI_EXP_SLTCTL_AIC; |
449 | return; | 448 | } |
450 | |||
451 | pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, | ||
452 | PCI_EXP_SLTCTL_PIC); | ||
453 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, | ||
454 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, | ||
455 | PCI_EXP_SLTCTL_PWR_IND_ON); | ||
456 | } | ||
457 | |||
458 | void pciehp_green_led_off(struct controller *ctrl) | ||
459 | { | ||
460 | if (!PWR_LED(ctrl)) | ||
461 | return; | ||
462 | |||
463 | pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, | ||
464 | PCI_EXP_SLTCTL_PIC); | ||
465 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, | ||
466 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, | ||
467 | PCI_EXP_SLTCTL_PWR_IND_OFF); | ||
468 | } | ||
469 | |||
470 | void pciehp_green_led_blink(struct controller *ctrl) | ||
471 | { | ||
472 | if (!PWR_LED(ctrl)) | ||
473 | return; | ||
474 | 449 | ||
475 | pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, | 450 | if (cmd) { |
476 | PCI_EXP_SLTCTL_PIC); | 451 | pcie_write_cmd_nowait(ctrl, cmd, mask); |
477 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, | 452 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, |
478 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, | 453 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd); |
479 | PCI_EXP_SLTCTL_PWR_IND_BLINK); | 454 | } |
480 | } | 455 | } |
481 | 456 | ||
482 | int pciehp_power_on_slot(struct controller *ctrl) | 457 | int pciehp_power_on_slot(struct controller *ctrl) |
@@ -638,8 +613,8 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) | |||
638 | if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) { | 613 | if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) { |
639 | ctrl->power_fault_detected = 1; | 614 | ctrl->power_fault_detected = 1; |
640 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl)); | 615 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl)); |
641 | pciehp_set_attention_status(ctrl, 1); | 616 | pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, |
642 | pciehp_green_led_off(ctrl); | 617 | PCI_EXP_SLTCTL_ATTN_IND_ON); |
643 | } | 618 | } |
644 | 619 | ||
645 | /* | 620 | /* |
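The consolidated helper lets callers update both indicators with a single Slot Control write, or leave one of them alone via INDICATOR_NOOP, as the calls elsewhere in this series show:

	/* Both indicators in one write (power fault path above): */
	pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
			      PCI_EXP_SLTCTL_ATTN_IND_ON);

	/* Power Indicator only; Attention Indicator left untouched: */
	pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, INDICATOR_NOOP);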
diff --git a/drivers/pci/hotplug/rpadlpar_core.c b/drivers/pci/hotplug/rpadlpar_core.c index 182f9e3443ee..977946e4e613 100644 --- a/drivers/pci/hotplug/rpadlpar_core.c +++ b/drivers/pci/hotplug/rpadlpar_core.c | |||
@@ -473,7 +473,6 @@ int __init rpadlpar_io_init(void) | |||
473 | void rpadlpar_io_exit(void) | 473 | void rpadlpar_io_exit(void) |
474 | { | 474 | { |
475 | dlpar_sysfs_exit(); | 475 | dlpar_sysfs_exit(); |
476 | return; | ||
477 | } | 476 | } |
478 | 477 | ||
479 | module_init(rpadlpar_io_init); | 478 | module_init(rpadlpar_io_init); |
diff --git a/drivers/pci/hotplug/rpaphp_core.c b/drivers/pci/hotplug/rpaphp_core.c index c3899ee1db99..18627bb21e9e 100644 --- a/drivers/pci/hotplug/rpaphp_core.c +++ b/drivers/pci/hotplug/rpaphp_core.c | |||
@@ -408,7 +408,6 @@ static void __exit cleanup_slots(void) | |||
408 | pci_hp_deregister(&slot->hotplug_slot); | 408 | pci_hp_deregister(&slot->hotplug_slot); |
409 | dealloc_slot_struct(slot); | 409 | dealloc_slot_struct(slot); |
410 | } | 410 | } |
411 | return; | ||
412 | } | 411 | } |
413 | 412 | ||
414 | static int __init rpaphp_init(void) | 413 | static int __init rpaphp_init(void) |
diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c index 525fd3f272b3..b3f972e8cfed 100644 --- a/drivers/pci/iov.c +++ b/drivers/pci/iov.c | |||
@@ -240,6 +240,173 @@ void pci_iov_remove_virtfn(struct pci_dev *dev, int id) | |||
240 | pci_dev_put(dev); | 240 | pci_dev_put(dev); |
241 | } | 241 | } |
242 | 242 | ||
243 | static ssize_t sriov_totalvfs_show(struct device *dev, | ||
244 | struct device_attribute *attr, | ||
245 | char *buf) | ||
246 | { | ||
247 | struct pci_dev *pdev = to_pci_dev(dev); | ||
248 | |||
249 | return sprintf(buf, "%u\n", pci_sriov_get_totalvfs(pdev)); | ||
250 | } | ||
251 | |||
252 | static ssize_t sriov_numvfs_show(struct device *dev, | ||
253 | struct device_attribute *attr, | ||
254 | char *buf) | ||
255 | { | ||
256 | struct pci_dev *pdev = to_pci_dev(dev); | ||
257 | |||
258 | return sprintf(buf, "%u\n", pdev->sriov->num_VFs); | ||
259 | } | ||
260 | |||
261 | /* | ||
262 | * num_vfs > 0; number of VFs to enable | ||
263 | * num_vfs = 0; disable all VFs | ||
264 | * | ||
265 | * Note: SRIOV spec does not allow partial VF | ||
266 | * disable, so it's all or none. | ||
267 | */ | ||
268 | static ssize_t sriov_numvfs_store(struct device *dev, | ||
269 | struct device_attribute *attr, | ||
270 | const char *buf, size_t count) | ||
271 | { | ||
272 | struct pci_dev *pdev = to_pci_dev(dev); | ||
273 | int ret; | ||
274 | u16 num_vfs; | ||
275 | |||
276 | ret = kstrtou16(buf, 0, &num_vfs); | ||
277 | if (ret < 0) | ||
278 | return ret; | ||
279 | |||
280 | if (num_vfs > pci_sriov_get_totalvfs(pdev)) | ||
281 | return -ERANGE; | ||
282 | |||
283 | device_lock(&pdev->dev); | ||
284 | |||
285 | if (num_vfs == pdev->sriov->num_VFs) | ||
286 | goto exit; | ||
287 | |||
288 | /* is PF driver loaded w/callback */ | ||
289 | if (!pdev->driver || !pdev->driver->sriov_configure) { | ||
290 | pci_info(pdev, "Driver does not support SRIOV configuration via sysfs\n"); | ||
291 | ret = -ENOENT; | ||
292 | goto exit; | ||
293 | } | ||
294 | |||
295 | if (num_vfs == 0) { | ||
296 | /* disable VFs */ | ||
297 | ret = pdev->driver->sriov_configure(pdev, 0); | ||
298 | goto exit; | ||
299 | } | ||
300 | |||
301 | /* enable VFs */ | ||
302 | if (pdev->sriov->num_VFs) { | ||
303 | pci_warn(pdev, "%d VFs already enabled. Disable before enabling %d VFs\n", | ||
304 | pdev->sriov->num_VFs, num_vfs); | ||
305 | ret = -EBUSY; | ||
306 | goto exit; | ||
307 | } | ||
308 | |||
309 | ret = pdev->driver->sriov_configure(pdev, num_vfs); | ||
310 | if (ret < 0) | ||
311 | goto exit; | ||
312 | |||
313 | if (ret != num_vfs) | ||
314 | pci_warn(pdev, "%d VFs requested; only %d enabled\n", | ||
315 | num_vfs, ret); | ||
316 | |||
317 | exit: | ||
318 | device_unlock(&pdev->dev); | ||
319 | |||
320 | if (ret < 0) | ||
321 | return ret; | ||
322 | |||
323 | return count; | ||
324 | } | ||
325 | |||
326 | static ssize_t sriov_offset_show(struct device *dev, | ||
327 | struct device_attribute *attr, | ||
328 | char *buf) | ||
329 | { | ||
330 | struct pci_dev *pdev = to_pci_dev(dev); | ||
331 | |||
332 | return sprintf(buf, "%u\n", pdev->sriov->offset); | ||
333 | } | ||
334 | |||
335 | static ssize_t sriov_stride_show(struct device *dev, | ||
336 | struct device_attribute *attr, | ||
337 | char *buf) | ||
338 | { | ||
339 | struct pci_dev *pdev = to_pci_dev(dev); | ||
340 | |||
341 | return sprintf(buf, "%u\n", pdev->sriov->stride); | ||
342 | } | ||
343 | |||
344 | static ssize_t sriov_vf_device_show(struct device *dev, | ||
345 | struct device_attribute *attr, | ||
346 | char *buf) | ||
347 | { | ||
348 | struct pci_dev *pdev = to_pci_dev(dev); | ||
349 | |||
350 | return sprintf(buf, "%x\n", pdev->sriov->vf_device); | ||
351 | } | ||
352 | |||
353 | static ssize_t sriov_drivers_autoprobe_show(struct device *dev, | ||
354 | struct device_attribute *attr, | ||
355 | char *buf) | ||
356 | { | ||
357 | struct pci_dev *pdev = to_pci_dev(dev); | ||
358 | |||
359 | return sprintf(buf, "%u\n", pdev->sriov->drivers_autoprobe); | ||
360 | } | ||
361 | |||
362 | static ssize_t sriov_drivers_autoprobe_store(struct device *dev, | ||
363 | struct device_attribute *attr, | ||
364 | const char *buf, size_t count) | ||
365 | { | ||
366 | struct pci_dev *pdev = to_pci_dev(dev); | ||
367 | bool drivers_autoprobe; | ||
368 | |||
369 | if (kstrtobool(buf, &drivers_autoprobe) < 0) | ||
370 | return -EINVAL; | ||
371 | |||
372 | pdev->sriov->drivers_autoprobe = drivers_autoprobe; | ||
373 | |||
374 | return count; | ||
375 | } | ||
376 | |||
377 | static DEVICE_ATTR_RO(sriov_totalvfs); | ||
378 | static DEVICE_ATTR_RW(sriov_numvfs); | ||
379 | static DEVICE_ATTR_RO(sriov_offset); | ||
380 | static DEVICE_ATTR_RO(sriov_stride); | ||
381 | static DEVICE_ATTR_RO(sriov_vf_device); | ||
382 | static DEVICE_ATTR_RW(sriov_drivers_autoprobe); | ||
383 | |||
384 | static struct attribute *sriov_dev_attrs[] = { | ||
385 | &dev_attr_sriov_totalvfs.attr, | ||
386 | &dev_attr_sriov_numvfs.attr, | ||
387 | &dev_attr_sriov_offset.attr, | ||
388 | &dev_attr_sriov_stride.attr, | ||
389 | &dev_attr_sriov_vf_device.attr, | ||
390 | &dev_attr_sriov_drivers_autoprobe.attr, | ||
391 | NULL, | ||
392 | }; | ||
393 | |||
394 | static umode_t sriov_attrs_are_visible(struct kobject *kobj, | ||
395 | struct attribute *a, int n) | ||
396 | { | ||
397 | struct device *dev = kobj_to_dev(kobj); | ||
398 | |||
399 | if (!dev_is_pf(dev)) | ||
400 | return 0; | ||
401 | |||
402 | return a->mode; | ||
403 | } | ||
404 | |||
405 | const struct attribute_group sriov_dev_attr_group = { | ||
406 | .attrs = sriov_dev_attrs, | ||
407 | .is_visible = sriov_attrs_are_visible, | ||
408 | }; | ||
409 | |||
243 | int __weak pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) | 410 | int __weak pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) |
244 | { | 411 | { |
245 | return 0; | 412 | return 0; |
@@ -557,8 +724,8 @@ static void sriov_restore_state(struct pci_dev *dev) | |||
557 | ctrl |= iov->ctrl & PCI_SRIOV_CTRL_ARI; | 724 | ctrl |= iov->ctrl & PCI_SRIOV_CTRL_ARI; |
558 | pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, ctrl); | 725 | pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, ctrl); |
559 | 726 | ||
560 | for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++) | 727 | for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) |
561 | pci_update_resource(dev, i); | 728 | pci_update_resource(dev, i + PCI_IOV_RESOURCES); |
562 | 729 | ||
563 | pci_write_config_dword(dev, iov->pos + PCI_SRIOV_SYS_PGSIZE, iov->pgsz); | 730 | pci_write_config_dword(dev, iov->pos + PCI_SRIOV_SYS_PGSIZE, iov->pgsz); |
564 | pci_iov_set_numvfs(dev, iov->num_VFs); | 731 | pci_iov_set_numvfs(dev, iov->num_VFs); |
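For context on the contract sriov_numvfs_store() relies on: the PF driver exposes a .sriov_configure callback that enables num_vfs VFs (or disables them all when num_vfs is 0) and returns the count actually enabled, which the store handler compares against the request. A hedged sketch with an invented driver name:

	#include <linux/pci.h>

	static int mydrv_sriov_configure(struct pci_dev *pdev, int num_vfs)
	{
		if (num_vfs == 0) {
			pci_disable_sriov(pdev);
			return 0;
		}

		/* 0 on success from pci_enable_sriov(); report num_vfs back */
		return pci_enable_sriov(pdev, num_vfs) ?: num_vfs;
	}

	static struct pci_driver mydrv = {
		.name = "mydrv",
		.sriov_configure = mydrv_sriov_configure,
	};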
diff --git a/drivers/pci/of.c b/drivers/pci/of.c index bc7b27a28795..36891e7deee3 100644 --- a/drivers/pci/of.c +++ b/drivers/pci/of.c | |||
@@ -353,7 +353,7 @@ EXPORT_SYMBOL_GPL(devm_of_pci_get_host_bridge_resources); | |||
353 | /** | 353 | /** |
354 | * of_irq_parse_pci - Resolve the interrupt for a PCI device | 354 | * of_irq_parse_pci - Resolve the interrupt for a PCI device |
355 | * @pdev: the device whose interrupt is to be resolved | 355 | * @pdev: the device whose interrupt is to be resolved |
356 | * @out_irq: structure of_irq filled by this function | 356 | * @out_irq: structure of_phandle_args filled by this function |
357 | * | 357 | * |
358 | * This function resolves the PCI interrupt for a given PCI device. If a | 358 | * This function resolves the PCI interrupt for a given PCI device. If a |
359 | * device-node exists for a given pci_dev, it will use normal OF tree | 359 | * device-node exists for a given pci_dev, it will use normal OF tree |
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 234476226529..0608aae72ccc 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c | |||
@@ -18,13 +18,32 @@ | |||
18 | #include <linux/percpu-refcount.h> | 18 | #include <linux/percpu-refcount.h> |
19 | #include <linux/random.h> | 19 | #include <linux/random.h> |
20 | #include <linux/seq_buf.h> | 20 | #include <linux/seq_buf.h> |
21 | #include <linux/iommu.h> | 21 | #include <linux/xarray.h> |
22 | |||
23 | enum pci_p2pdma_map_type { | ||
24 | PCI_P2PDMA_MAP_UNKNOWN = 0, | ||
25 | PCI_P2PDMA_MAP_NOT_SUPPORTED, | ||
26 | PCI_P2PDMA_MAP_BUS_ADDR, | ||
27 | PCI_P2PDMA_MAP_THRU_HOST_BRIDGE, | ||
28 | }; | ||
22 | 29 | ||
23 | struct pci_p2pdma { | 30 | struct pci_p2pdma { |
24 | struct gen_pool *pool; | 31 | struct gen_pool *pool; |
25 | bool p2pmem_published; | 32 | bool p2pmem_published; |
33 | struct xarray map_types; | ||
26 | }; | 34 | }; |
27 | 35 | ||
36 | struct pci_p2pdma_pagemap { | ||
37 | struct dev_pagemap pgmap; | ||
38 | struct pci_dev *provider; | ||
39 | u64 bus_offset; | ||
40 | }; | ||
41 | |||
42 | static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap) | ||
43 | { | ||
44 | return container_of(pgmap, struct pci_p2pdma_pagemap, pgmap); | ||
45 | } | ||
46 | |||
28 | static ssize_t size_show(struct device *dev, struct device_attribute *attr, | 47 | static ssize_t size_show(struct device *dev, struct device_attribute *attr, |
29 | char *buf) | 48 | char *buf) |
30 | { | 49 | { |
@@ -87,6 +106,7 @@ static void pci_p2pdma_release(void *data) | |||
87 | 106 | ||
88 | gen_pool_destroy(p2pdma->pool); | 107 | gen_pool_destroy(p2pdma->pool); |
89 | sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); | 108 | sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); |
109 | xa_destroy(&p2pdma->map_types); | ||
90 | } | 110 | } |
91 | 111 | ||
92 | static int pci_p2pdma_setup(struct pci_dev *pdev) | 112 | static int pci_p2pdma_setup(struct pci_dev *pdev) |
@@ -98,6 +118,8 @@ static int pci_p2pdma_setup(struct pci_dev *pdev) | |||
98 | if (!p2p) | 118 | if (!p2p) |
99 | return -ENOMEM; | 119 | return -ENOMEM; |
100 | 120 | ||
121 | xa_init(&p2p->map_types); | ||
122 | |||
101 | p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev)); | 123 | p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev)); |
102 | if (!p2p->pool) | 124 | if (!p2p->pool) |
103 | goto out; | 125 | goto out; |
@@ -135,6 +157,7 @@ out: | |||
135 | int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, | 157 | int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, |
136 | u64 offset) | 158 | u64 offset) |
137 | { | 159 | { |
160 | struct pci_p2pdma_pagemap *p2p_pgmap; | ||
138 | struct dev_pagemap *pgmap; | 161 | struct dev_pagemap *pgmap; |
139 | void *addr; | 162 | void *addr; |
140 | int error; | 163 | int error; |
@@ -157,14 +180,18 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, | |||
157 | return error; | 180 | return error; |
158 | } | 181 | } |
159 | 182 | ||
160 | pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL); | 183 | p2p_pgmap = devm_kzalloc(&pdev->dev, sizeof(*p2p_pgmap), GFP_KERNEL); |
161 | if (!pgmap) | 184 | if (!p2p_pgmap) |
162 | return -ENOMEM; | 185 | return -ENOMEM; |
186 | |||
187 | pgmap = &p2p_pgmap->pgmap; | ||
163 | pgmap->res.start = pci_resource_start(pdev, bar) + offset; | 188 | pgmap->res.start = pci_resource_start(pdev, bar) + offset; |
164 | pgmap->res.end = pgmap->res.start + size - 1; | 189 | pgmap->res.end = pgmap->res.start + size - 1; |
165 | pgmap->res.flags = pci_resource_flags(pdev, bar); | 190 | pgmap->res.flags = pci_resource_flags(pdev, bar); |
166 | pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; | 191 | pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; |
167 | pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) - | 192 | |
193 | p2p_pgmap->provider = pdev; | ||
194 | p2p_pgmap->bus_offset = pci_bus_address(pdev, bar) - | ||
168 | pci_resource_start(pdev, bar); | 195 | pci_resource_start(pdev, bar); |
169 | 196 | ||
170 | addr = devm_memremap_pages(&pdev->dev, pgmap); | 197 | addr = devm_memremap_pages(&pdev->dev, pgmap); |
@@ -246,19 +273,32 @@ static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev) | |||
246 | seq_buf_printf(buf, "%s;", pci_name(pdev)); | 273 | seq_buf_printf(buf, "%s;", pci_name(pdev)); |
247 | } | 274 | } |
248 | 275 | ||
249 | /* | 276 | static const struct pci_p2pdma_whitelist_entry { |
250 | * If we can't find a common upstream bridge take a look at the root | 277 | unsigned short vendor; |
251 | * complex and compare it to a whitelist of known good hardware. | 278 | unsigned short device; |
252 | */ | 279 | enum { |
253 | static bool root_complex_whitelist(struct pci_dev *dev) | 280 | REQ_SAME_HOST_BRIDGE = 1 << 0, |
281 | } flags; | ||
282 | } pci_p2pdma_whitelist[] = { | ||
283 | /* AMD ZEN */ | ||
284 | {PCI_VENDOR_ID_AMD, 0x1450, 0}, | ||
285 | |||
286 | /* Intel Xeon E5/Core i7 */ | ||
287 | {PCI_VENDOR_ID_INTEL, 0x3c00, REQ_SAME_HOST_BRIDGE}, | ||
288 | {PCI_VENDOR_ID_INTEL, 0x3c01, REQ_SAME_HOST_BRIDGE}, | ||
289 | /* Intel Xeon E7 v3/Xeon E5 v3/Core i7 */ | ||
290 | {PCI_VENDOR_ID_INTEL, 0x2f00, REQ_SAME_HOST_BRIDGE}, | ||
291 | {PCI_VENDOR_ID_INTEL, 0x2f01, REQ_SAME_HOST_BRIDGE}, | ||
292 | {} | ||
293 | }; | ||
294 | |||
295 | static bool __host_bridge_whitelist(struct pci_host_bridge *host, | ||
296 | bool same_host_bridge) | ||
254 | { | 297 | { |
255 | struct pci_host_bridge *host = pci_find_host_bridge(dev->bus); | ||
256 | struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0)); | 298 | struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0)); |
299 | const struct pci_p2pdma_whitelist_entry *entry; | ||
257 | unsigned short vendor, device; | 300 | unsigned short vendor, device; |
258 | 301 | ||
259 | if (iommu_present(dev->dev.bus)) | ||
260 | return false; | ||
261 | |||
262 | if (!root) | 302 | if (!root) |
263 | return false; | 303 | return false; |
264 | 304 | ||
@@ -266,65 +306,49 @@ static bool root_complex_whitelist(struct pci_dev *dev) | |||
266 | device = root->device; | 306 | device = root->device; |
267 | pci_dev_put(root); | 307 | pci_dev_put(root); |
268 | 308 | ||
269 | /* AMD ZEN host bridges can do peer to peer */ | 309 | for (entry = pci_p2pdma_whitelist; entry->vendor; entry++) { |
270 | if (vendor == PCI_VENDOR_ID_AMD && device == 0x1450) | 310 | if (vendor != entry->vendor || device != entry->device) |
311 | continue; | ||
312 | if (entry->flags & REQ_SAME_HOST_BRIDGE && !same_host_bridge) | ||
313 | return false; | ||
314 | |||
271 | return true; | 315 | return true; |
316 | } | ||
272 | 317 | ||
273 | return false; | 318 | return false; |
274 | } | 319 | } |
275 | 320 | ||
276 | /* | 321 | /* |
277 | * Find the distance through the nearest common upstream bridge between | 322 | * If we can't find a common upstream bridge take a look at the root |
278 | * two PCI devices. | 323 | * complex and compare it to a whitelist of known good hardware. |
279 | * | ||
280 | * If the two devices are the same device then 0 will be returned. | ||
281 | * | ||
282 | * If there are two virtual functions of the same device behind the same | ||
283 | * bridge port then 2 will be returned (one step down to the PCIe switch, | ||
284 | * then one step back to the same device). | ||
285 | * | ||
286 | * In the case where two devices are connected to the same PCIe switch, the | ||
287 | * value 4 will be returned. This corresponds to the following PCI tree: | ||
288 | * | ||
289 | * -+ Root Port | ||
290 | * \+ Switch Upstream Port | ||
291 | * +-+ Switch Downstream Port | ||
292 | * + \- Device A | ||
293 | * \-+ Switch Downstream Port | ||
294 | * \- Device B | ||
295 | * | ||
296 | * The distance is 4 because we traverse from Device A through the downstream | ||
297 | * port of the switch, to the common upstream port, back up to the second | ||
298 | * downstream port and then to Device B. | ||
299 | * | ||
300 | * Any two devices that don't have a common upstream bridge will return -1. | ||
301 | * In this way devices on separate PCIe root ports will be rejected, which | ||
302 | * is what we want for peer-to-peer seeing each PCIe root port defines a | ||
303 | * separate hierarchy domain and there's no way to determine whether the root | ||
304 | * complex supports forwarding between them. | ||
305 | * | ||
306 | * In the case where two devices are connected to different PCIe switches, | ||
307 | * this function will still return a positive distance as long as both | ||
308 | * switches eventually have a common upstream bridge. Note this covers | ||
309 | * the case of using multiple PCIe switches to achieve a desired level of | ||
310 | * fan-out from a root port. The exact distance will be a function of the | ||
311 | * number of switches between Device A and Device B. | ||
312 | * | ||
313 | * If a bridge which has any ACS redirection bits set is in the path | ||
314 | * then this functions will return -2. This is so we reject any | ||
315 | * cases where the TLPs are forwarded up into the root complex. | ||
316 | * In this case, a list of all infringing bridge addresses will be | ||
317 | * populated in acs_list (assuming it's non-null) for printk purposes. | ||
318 | */ | 324 | */ |
319 | static int upstream_bridge_distance(struct pci_dev *provider, | 325 | static bool host_bridge_whitelist(struct pci_dev *a, struct pci_dev *b) |
320 | struct pci_dev *client, | 326 | { |
321 | struct seq_buf *acs_list) | 327 | struct pci_host_bridge *host_a = pci_find_host_bridge(a->bus); |
328 | struct pci_host_bridge *host_b = pci_find_host_bridge(b->bus); | ||
329 | |||
330 | if (host_a == host_b) | ||
331 | return __host_bridge_whitelist(host_a, true); | ||
332 | |||
333 | if (__host_bridge_whitelist(host_a, false) && | ||
334 | __host_bridge_whitelist(host_b, false)) | ||
335 | return true; | ||
336 | |||
337 | return false; | ||
338 | } | ||
339 | |||
340 | static enum pci_p2pdma_map_type | ||
341 | __upstream_bridge_distance(struct pci_dev *provider, struct pci_dev *client, | ||
342 | int *dist, bool *acs_redirects, struct seq_buf *acs_list) | ||
322 | { | 343 | { |
323 | struct pci_dev *a = provider, *b = client, *bb; | 344 | struct pci_dev *a = provider, *b = client, *bb; |
324 | int dist_a = 0; | 345 | int dist_a = 0; |
325 | int dist_b = 0; | 346 | int dist_b = 0; |
326 | int acs_cnt = 0; | 347 | int acs_cnt = 0; |
327 | 348 | ||
349 | if (acs_redirects) | ||
350 | *acs_redirects = false; | ||
351 | |||
328 | /* | 352 | /* |
329 | * Note, we don't need to take references to devices returned by | 353 | * Note, we don't need to take references to devices returned by |
330 | * pci_upstream_bridge() seeing we hold a reference to a child | 354 | * pci_upstream_bridge() seeing we hold a reference to a child |
@@ -353,15 +377,10 @@ static int upstream_bridge_distance(struct pci_dev *provider, | |||
353 | dist_a++; | 377 | dist_a++; |
354 | } | 378 | } |
355 | 379 | ||
356 | /* | 380 | if (dist) |
357 | * Allow the connection if both devices are on a whitelisted root | 381 | *dist = dist_a + dist_b; |
358 | * complex, but add an arbitrary large value to the distance. | ||
359 | */ | ||
360 | if (root_complex_whitelist(provider) && | ||
361 | root_complex_whitelist(client)) | ||
362 | return 0x1000 + dist_a + dist_b; | ||
363 | 382 | ||
364 | return -1; | 383 | return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE; |
365 | 384 | ||
366 | check_b_path_acs: | 385 | check_b_path_acs: |
367 | bb = b; | 386 | bb = b; |
@@ -378,33 +397,110 @@ check_b_path_acs: | |||
378 | bb = pci_upstream_bridge(bb); | 397 | bb = pci_upstream_bridge(bb); |
379 | } | 398 | } |
380 | 399 | ||
381 | if (acs_cnt) | 400 | if (dist) |
382 | return -2; | 401 | *dist = dist_a + dist_b; |
402 | |||
403 | if (acs_cnt) { | ||
404 | if (acs_redirects) | ||
405 | *acs_redirects = true; | ||
406 | |||
407 | return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE; | ||
408 | } | ||
409 | |||
410 | return PCI_P2PDMA_MAP_BUS_ADDR; | ||
411 | } | ||
412 | |||
413 | static unsigned long map_types_idx(struct pci_dev *client) | ||
414 | { | ||
415 | return (pci_domain_nr(client->bus) << 16) | | ||
416 | (client->bus->number << 8) | client->devfn; | ||
417 | } | ||
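A minimal sketch (not part of the patch) of how this xarray key is laid out, assuming a hypothetical client at domain 0000, bus 03, devfn 01:

    /* Standalone illustration of map_types_idx(): domain[31:16] | bus[15:8] | devfn[7:0] */
    #include <stdio.h>

    int main(void)
    {
            unsigned long domain = 0x0000, bus = 0x03, devfn = 0x01;
            unsigned long idx = (domain << 16) | (bus << 8) | devfn;

            printf("map_types index for 0000:03:00.1 = 0x%lx\n", idx);  /* prints 0x301 */
            return 0;
    }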
418 | |||
419 | /* | ||
420 | * Find the distance through the nearest common upstream bridge between | ||
421 | * two PCI devices. | ||
422 | * | ||
423 | * If the two devices are the same device then 0 will be returned. | ||
424 | * | ||
425 | * If there are two virtual functions of the same device behind the same | ||
426 | * bridge port then 2 will be returned (one step down to the PCIe switch, | ||
427 | * then one step back to the same device). | ||
428 | * | ||
429 | * In the case where two devices are connected to the same PCIe switch, the | ||
430 | * value 4 will be returned. This corresponds to the following PCI tree: | ||
431 | * | ||
432 | * -+ Root Port | ||
433 | * \+ Switch Upstream Port | ||
434 | * +-+ Switch Downstream Port | ||
435 | * + \- Device A | ||
436 | * \-+ Switch Downstream Port | ||
437 | * \- Device B | ||
438 | * | ||
439 | * The distance is 4 because we traverse from Device A through the downstream | ||
440 | * port of the switch, to the common upstream port, back up to the second | ||
441 | * downstream port and then to Device B. | ||
442 | * | ||
443 | * Any two devices that cannot communicate using p2pdma will return | ||
444 | * PCI_P2PDMA_MAP_NOT_SUPPORTED. | ||
445 | * | ||
446 | * Any two devices that have a data path that goes through the host bridge | ||
447 | * will consult a whitelist. If the host bridges are on the whitelist, | ||
448 | * this function will return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE. | ||
449 | * | ||
450 | * If either bridge is not on the whitelist this function returns | ||
451 | * PCI_P2PDMA_MAP_NOT_SUPPORTED. | ||
452 | * | ||
453 | * If a bridge which has any ACS redirection bits set is in the path, | ||
454 | * acs_redirects will be set to true. In this case, a list of all infringing | ||
455 | * bridge addresses will be populated in acs_list (assuming it's non-null) | ||
456 | * for printk purposes. | ||
457 | */ | ||
458 | static enum pci_p2pdma_map_type | ||
459 | upstream_bridge_distance(struct pci_dev *provider, struct pci_dev *client, | ||
460 | int *dist, bool *acs_redirects, struct seq_buf *acs_list) | ||
461 | { | ||
462 | enum pci_p2pdma_map_type map_type; | ||
463 | |||
464 | map_type = __upstream_bridge_distance(provider, client, dist, | ||
465 | acs_redirects, acs_list); | ||
466 | |||
467 | if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) { | ||
468 | if (!host_bridge_whitelist(provider, client)) | ||
469 | map_type = PCI_P2PDMA_MAP_NOT_SUPPORTED; | ||
470 | } | ||
471 | |||
472 | if (provider->p2pdma) | ||
473 | xa_store(&provider->p2pdma->map_types, map_types_idx(client), | ||
474 | xa_mk_value(map_type), GFP_KERNEL); | ||
383 | 475 | ||
384 | return dist_a + dist_b; | 476 | return map_type; |
385 | } | 477 | } |
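As a hedged sketch of what this returns for the topology drawn in the comment above (devA and devB are placeholders assumed to sit behind the same switch, with no ACS redirect in the path):

    int dist;
    bool acs_redirects;
    enum pci_p2pdma_map_type type;

    type = upstream_bridge_distance(devA, devB, &dist, &acs_redirects, NULL);
    /*
     * Expected for the switch topology above:
     *   type          == PCI_P2PDMA_MAP_BUS_ADDR  (pure switch path, host bridge not involved)
     *   dist          == 4
     *   acs_redirects == false
     */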
386 | 478 | ||
387 | static int upstream_bridge_distance_warn(struct pci_dev *provider, | 479 | static enum pci_p2pdma_map_type |
388 | struct pci_dev *client) | 480 | upstream_bridge_distance_warn(struct pci_dev *provider, struct pci_dev *client, |
481 | int *dist) | ||
389 | { | 482 | { |
390 | struct seq_buf acs_list; | 483 | struct seq_buf acs_list; |
484 | bool acs_redirects; | ||
391 | int ret; | 485 | int ret; |
392 | 486 | ||
393 | seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE); | 487 | seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE); |
394 | if (!acs_list.buffer) | 488 | if (!acs_list.buffer) |
395 | return -ENOMEM; | 489 | return -ENOMEM; |
396 | 490 | ||
397 | ret = upstream_bridge_distance(provider, client, &acs_list); | 491 | ret = upstream_bridge_distance(provider, client, dist, &acs_redirects, |
398 | if (ret == -2) { | 492 | &acs_list); |
399 | pci_warn(client, "cannot be used for peer-to-peer DMA as ACS redirect is set between the client and provider (%s)\n", | 493 | if (acs_redirects) { |
494 | pci_warn(client, "ACS redirect is set between the client and provider (%s)\n", | ||
400 | pci_name(provider)); | 495 | pci_name(provider)); |
401 | /* Drop final semicolon */ | 496 | /* Drop final semicolon */ |
402 | acs_list.buffer[acs_list.len-1] = 0; | 497 | acs_list.buffer[acs_list.len-1] = 0; |
403 | pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n", | 498 | pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n", |
404 | acs_list.buffer); | 499 | acs_list.buffer); |
500 | } | ||
405 | 501 | ||
406 | } else if (ret < 0) { | 502 | if (ret == PCI_P2PDMA_MAP_NOT_SUPPORTED) { |
407 | pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge\n", | 503 | pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge or whitelisted host bridge\n", |
408 | pci_name(provider)); | 504 | pci_name(provider)); |
409 | } | 505 | } |
410 | 506 | ||
@@ -421,22 +517,22 @@ static int upstream_bridge_distance_warn(struct pci_dev *provider, | |||
421 | * @num_clients: number of clients in the array | 517 | * @num_clients: number of clients in the array |
422 | * @verbose: if true, print warnings for devices when we return -1 | 518 | * @verbose: if true, print warnings for devices when we return -1 |
423 | * | 519 | * |
424 | * Returns -1 if any of the clients are not compatible (behind the same | 520 | * Returns -1 if any of the clients are not compatible, otherwise returns a |
425 | * root port as the provider), otherwise returns a positive number where | 521 | * positive number where a lower number is the preferable choice. (If there's |
426 | * a lower number is the preferable choice. (If there's one client | 522 | * one client that's the same as the provider it will return 0, which is best |
427 | * that's the same as the provider it will return 0, which is best choice). | 523 | * choice). |
428 | * | 524 | * |
429 | * For now, "compatible" means the provider and the clients are all behind | 525 | * "compatible" means the provider and the clients are either all behind |
430 | * the same PCI root port. This cuts out cases that may work but is safest | 526 | * the same PCI root port or the host bridges connected to each of the devices |
431 | * for the user. Future work can expand this to white-list root complexes that | 527 | * are listed in the 'pci_p2pdma_whitelist'. |
432 | * can safely forward between each ports. | ||
433 | */ | 528 | */ |
434 | int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, | 529 | int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, |
435 | int num_clients, bool verbose) | 530 | int num_clients, bool verbose) |
436 | { | 531 | { |
437 | bool not_supported = false; | 532 | bool not_supported = false; |
438 | struct pci_dev *pci_client; | 533 | struct pci_dev *pci_client; |
439 | int distance = 0; | 534 | int total_dist = 0; |
535 | int distance; | ||
440 | int i, ret; | 536 | int i, ret; |
441 | 537 | ||
442 | if (num_clients == 0) | 538 | if (num_clients == 0) |
@@ -461,26 +557,26 @@ int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, | |||
461 | 557 | ||
462 | if (verbose) | 558 | if (verbose) |
463 | ret = upstream_bridge_distance_warn(provider, | 559 | ret = upstream_bridge_distance_warn(provider, |
464 | pci_client); | 560 | pci_client, &distance); |
465 | else | 561 | else |
466 | ret = upstream_bridge_distance(provider, pci_client, | 562 | ret = upstream_bridge_distance(provider, pci_client, |
467 | NULL); | 563 | &distance, NULL, NULL); |
468 | 564 | ||
469 | pci_dev_put(pci_client); | 565 | pci_dev_put(pci_client); |
470 | 566 | ||
471 | if (ret < 0) | 567 | if (ret == PCI_P2PDMA_MAP_NOT_SUPPORTED) |
472 | not_supported = true; | 568 | not_supported = true; |
473 | 569 | ||
474 | if (not_supported && !verbose) | 570 | if (not_supported && !verbose) |
475 | break; | 571 | break; |
476 | 572 | ||
477 | distance += ret; | 573 | total_dist += distance; |
478 | } | 574 | } |
479 | 575 | ||
480 | if (not_supported) | 576 | if (not_supported) |
481 | return -1; | 577 | return -1; |
482 | 578 | ||
483 | return distance; | 579 | return total_dist; |
484 | } | 580 | } |
485 | EXPORT_SYMBOL_GPL(pci_p2pdma_distance_many); | 581 | EXPORT_SYMBOL_GPL(pci_p2pdma_distance_many); |
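A hedged usage sketch (device names are placeholders, not from this patch): a driver holding a candidate p2pmem provider can check whether all of its DMA clients can reach it before committing to it:

    struct device *clients[] = { &nvme_dev->dev, &rnic_dev->dev };  /* hypothetical clients */
    int dist;

    dist = pci_p2pdma_distance_many(p2pmem_pdev, clients, ARRAY_SIZE(clients), true);
    if (dist < 0)
            return -ENXIO;  /* at least one client cannot do P2P DMA with this provider */

    /* otherwise a smaller dist indicates a closer, preferable provider */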
486 | 582 | ||
@@ -706,21 +802,19 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) | |||
706 | } | 802 | } |
707 | EXPORT_SYMBOL_GPL(pci_p2pmem_publish); | 803 | EXPORT_SYMBOL_GPL(pci_p2pmem_publish); |
708 | 804 | ||
709 | /** | 805 | static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct pci_dev *provider, |
710 | * pci_p2pdma_map_sg - map a PCI peer-to-peer scatterlist for DMA | 806 | struct pci_dev *client) |
711 | * @dev: device doing the DMA request | 807 | { |
712 | * @sg: scatter list to map | 808 | if (!provider->p2pdma) |
713 | * @nents: elements in the scatterlist | 809 | return PCI_P2PDMA_MAP_NOT_SUPPORTED; |
714 | * @dir: DMA direction | 810 | |
715 | * | 811 | return xa_to_value(xa_load(&provider->p2pdma->map_types, |
716 | * Scatterlists mapped with this function should not be unmapped in any way. | 812 | map_types_idx(client))); |
717 | * | 813 | } |
718 | * Returns the number of SG entries mapped or 0 on error. | 814 | |
719 | */ | 815 | static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap, |
720 | int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents, | 816 | struct device *dev, struct scatterlist *sg, int nents) |
721 | enum dma_data_direction dir) | ||
722 | { | 817 | { |
723 | struct dev_pagemap *pgmap; | ||
724 | struct scatterlist *s; | 818 | struct scatterlist *s; |
725 | phys_addr_t paddr; | 819 | phys_addr_t paddr; |
726 | int i; | 820 | int i; |
@@ -736,16 +830,80 @@ int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents, | |||
736 | return 0; | 830 | return 0; |
737 | 831 | ||
738 | for_each_sg(sg, s, nents, i) { | 832 | for_each_sg(sg, s, nents, i) { |
739 | pgmap = sg_page(s)->pgmap; | ||
740 | paddr = sg_phys(s); | 833 | paddr = sg_phys(s); |
741 | 834 | ||
742 | s->dma_address = paddr - pgmap->pci_p2pdma_bus_offset; | 835 | s->dma_address = paddr - p2p_pgmap->bus_offset; |
743 | sg_dma_len(s) = s->length; | 836 | sg_dma_len(s) = s->length; |
744 | } | 837 | } |
745 | 838 | ||
746 | return nents; | 839 | return nents; |
747 | } | 840 | } |
748 | EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg); | 841 | |
842 | /** | ||
843 | * pci_p2pdma_map_sg - map a PCI peer-to-peer scatterlist for DMA | ||
844 | * @dev: device doing the DMA request | ||
845 | * @sg: scatter list to map | ||
846 | * @nents: elements in the scatterlist | ||
847 | * @dir: DMA direction | ||
848 | * @attrs: DMA attributes passed to dma_map_sg() (if called) | ||
849 | * | ||
850 | * Scatterlists mapped with this function should be unmapped using | ||
851 | * pci_p2pdma_unmap_sg_attrs(). | ||
852 | * | ||
853 | * Returns the number of SG entries mapped or 0 on error. | ||
854 | */ | ||
855 | int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg, | ||
856 | int nents, enum dma_data_direction dir, unsigned long attrs) | ||
857 | { | ||
858 | struct pci_p2pdma_pagemap *p2p_pgmap = | ||
859 | to_p2p_pgmap(sg_page(sg)->pgmap); | ||
860 | struct pci_dev *client; | ||
861 | |||
862 | if (WARN_ON_ONCE(!dev_is_pci(dev))) | ||
863 | return 0; | ||
864 | |||
865 | client = to_pci_dev(dev); | ||
866 | |||
867 | switch (pci_p2pdma_map_type(p2p_pgmap->provider, client)) { | ||
868 | case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: | ||
869 | return dma_map_sg_attrs(dev, sg, nents, dir, attrs); | ||
870 | case PCI_P2PDMA_MAP_BUS_ADDR: | ||
871 | return __pci_p2pdma_map_sg(p2p_pgmap, dev, sg, nents); | ||
872 | default: | ||
873 | WARN_ON_ONCE(1); | ||
874 | return 0; | ||
875 | } | ||
876 | } | ||
877 | EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg_attrs); | ||
878 | |||
879 | /** | ||
880 | * pci_p2pdma_unmap_sg - unmap a PCI peer-to-peer scatterlist that was | ||
881 | * mapped with pci_p2pdma_map_sg() | ||
882 | * @dev: device doing the DMA request | ||
883 | * @sg: scatter list to map | ||
884 | * @nents: number of elements returned by pci_p2pdma_map_sg() | ||
885 | * @dir: DMA direction | ||
886 | * @attrs: DMA attributes passed to dma_unmap_sg() (if called) | ||
887 | */ | ||
888 | void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, | ||
889 | int nents, enum dma_data_direction dir, unsigned long attrs) | ||
890 | { | ||
891 | struct pci_p2pdma_pagemap *p2p_pgmap = | ||
892 | to_p2p_pgmap(sg_page(sg)->pgmap); | ||
893 | enum pci_p2pdma_map_type map_type; | ||
894 | struct pci_dev *client; | ||
895 | |||
896 | if (WARN_ON_ONCE(!dev_is_pci(dev))) | ||
897 | return; | ||
898 | |||
899 | client = to_pci_dev(dev); | ||
900 | |||
901 | map_type = pci_p2pdma_map_type(p2p_pgmap->provider, client); | ||
902 | |||
903 | if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) | ||
904 | dma_unmap_sg_attrs(dev, sg, nents, dir, attrs); | ||
905 | } | ||
906 | EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs); | ||
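A minimal, hypothetical pairing of the two helpers above, assuming sgl already references p2pdma pages from a single provider (error handling trimmed):

    int mapped;

    mapped = pci_p2pdma_map_sg_attrs(dma_dev, sgl, nents, DMA_TO_DEVICE, 0);
    if (!mapped)
            return -EIO;

    /* ... run the transfer ... */

    /* per the kerneldoc above, unmap with the count returned by the map call */
    pci_p2pdma_unmap_sg_attrs(dma_dev, sgl, mapped, DMA_TO_DEVICE, 0);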
749 | 907 | ||
750 | /** | 908 | /** |
751 | * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store | 909 | * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store |
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
index 45049f558860..0c02d500158f 100644
--- a/drivers/pci/pci-acpi.c
+++ b/drivers/pci/pci-acpi.c

@@ -14,7 +14,6 @@ | |||
14 | #include <linux/msi.h> | 14 | #include <linux/msi.h> |
15 | #include <linux/pci_hotplug.h> | 15 | #include <linux/pci_hotplug.h> |
16 | #include <linux/module.h> | 16 | #include <linux/module.h> |
17 | #include <linux/pci-aspm.h> | ||
18 | #include <linux/pci-acpi.h> | 17 | #include <linux/pci-acpi.h> |
19 | #include <linux/pm_runtime.h> | 18 | #include <linux/pm_runtime.h> |
20 | #include <linux/pm_qos.h> | 19 | #include <linux/pm_qos.h> |
@@ -118,8 +117,58 @@ phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle) | |||
118 | return (phys_addr_t)mcfg_addr; | 117 | return (phys_addr_t)mcfg_addr; |
119 | } | 118 | } |
120 | 119 | ||
120 | /* _HPX PCI Setting Record (Type 0); same as _HPP */ | ||
121 | struct hpx_type0 { | ||
122 | u32 revision; /* Not present in _HPP */ | ||
123 | u8 cache_line_size; /* Not applicable to PCIe */ | ||
124 | u8 latency_timer; /* Not applicable to PCIe */ | ||
125 | u8 enable_serr; | ||
126 | u8 enable_perr; | ||
127 | }; | ||
128 | |||
129 | static struct hpx_type0 pci_default_type0 = { | ||
130 | .revision = 1, | ||
131 | .cache_line_size = 8, | ||
132 | .latency_timer = 0x40, | ||
133 | .enable_serr = 0, | ||
134 | .enable_perr = 0, | ||
135 | }; | ||
136 | |||
137 | static void program_hpx_type0(struct pci_dev *dev, struct hpx_type0 *hpx) | ||
138 | { | ||
139 | u16 pci_cmd, pci_bctl; | ||
140 | |||
141 | if (!hpx) | ||
142 | hpx = &pci_default_type0; | ||
143 | |||
144 | if (hpx->revision > 1) { | ||
145 | pci_warn(dev, "PCI settings rev %d not supported; using defaults\n", | ||
146 | hpx->revision); | ||
147 | hpx = &pci_default_type0; | ||
148 | } | ||
149 | |||
150 | pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, hpx->cache_line_size); | ||
151 | pci_write_config_byte(dev, PCI_LATENCY_TIMER, hpx->latency_timer); | ||
152 | pci_read_config_word(dev, PCI_COMMAND, &pci_cmd); | ||
153 | if (hpx->enable_serr) | ||
154 | pci_cmd |= PCI_COMMAND_SERR; | ||
155 | if (hpx->enable_perr) | ||
156 | pci_cmd |= PCI_COMMAND_PARITY; | ||
157 | pci_write_config_word(dev, PCI_COMMAND, pci_cmd); | ||
158 | |||
159 | /* Program bridge control value */ | ||
160 | if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) { | ||
161 | pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER, | ||
162 | hpx->latency_timer); | ||
163 | pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl); | ||
164 | if (hpx->enable_perr) | ||
165 | pci_bctl |= PCI_BRIDGE_CTL_PARITY; | ||
166 | pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl); | ||
167 | } | ||
168 | } | ||
169 | |||
121 | static acpi_status decode_type0_hpx_record(union acpi_object *record, | 170 | static acpi_status decode_type0_hpx_record(union acpi_object *record, |
122 | struct hpp_type0 *hpx0) | 171 | struct hpx_type0 *hpx0) |
123 | { | 172 | { |
124 | int i; | 173 | int i; |
125 | union acpi_object *fields = record->package.elements; | 174 | union acpi_object *fields = record->package.elements; |
@@ -146,8 +195,30 @@ static acpi_status decode_type0_hpx_record(union acpi_object *record, | |||
146 | return AE_OK; | 195 | return AE_OK; |
147 | } | 196 | } |
148 | 197 | ||
198 | /* _HPX PCI-X Setting Record (Type 1) */ | ||
199 | struct hpx_type1 { | ||
200 | u32 revision; | ||
201 | u8 max_mem_read; | ||
202 | u8 avg_max_split; | ||
203 | u16 tot_max_split; | ||
204 | }; | ||
205 | |||
206 | static void program_hpx_type1(struct pci_dev *dev, struct hpx_type1 *hpx) | ||
207 | { | ||
208 | int pos; | ||
209 | |||
210 | if (!hpx) | ||
211 | return; | ||
212 | |||
213 | pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); | ||
214 | if (!pos) | ||
215 | return; | ||
216 | |||
217 | pci_warn(dev, "PCI-X settings not supported\n"); | ||
218 | } | ||
219 | |||
149 | static acpi_status decode_type1_hpx_record(union acpi_object *record, | 220 | static acpi_status decode_type1_hpx_record(union acpi_object *record, |
150 | struct hpp_type1 *hpx1) | 221 | struct hpx_type1 *hpx1) |
151 | { | 222 | { |
152 | int i; | 223 | int i; |
153 | union acpi_object *fields = record->package.elements; | 224 | union acpi_object *fields = record->package.elements; |
@@ -173,8 +244,130 @@ static acpi_status decode_type1_hpx_record(union acpi_object *record, | |||
173 | return AE_OK; | 244 | return AE_OK; |
174 | } | 245 | } |
175 | 246 | ||
247 | static bool pcie_root_rcb_set(struct pci_dev *dev) | ||
248 | { | ||
249 | struct pci_dev *rp = pcie_find_root_port(dev); | ||
250 | u16 lnkctl; | ||
251 | |||
252 | if (!rp) | ||
253 | return false; | ||
254 | |||
255 | pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &lnkctl); | ||
256 | if (lnkctl & PCI_EXP_LNKCTL_RCB) | ||
257 | return true; | ||
258 | |||
259 | return false; | ||
260 | } | ||
261 | |||
262 | /* _HPX PCI Express Setting Record (Type 2) */ | ||
263 | struct hpx_type2 { | ||
264 | u32 revision; | ||
265 | u32 unc_err_mask_and; | ||
266 | u32 unc_err_mask_or; | ||
267 | u32 unc_err_sever_and; | ||
268 | u32 unc_err_sever_or; | ||
269 | u32 cor_err_mask_and; | ||
270 | u32 cor_err_mask_or; | ||
271 | u32 adv_err_cap_and; | ||
272 | u32 adv_err_cap_or; | ||
273 | u16 pci_exp_devctl_and; | ||
274 | u16 pci_exp_devctl_or; | ||
275 | u16 pci_exp_lnkctl_and; | ||
276 | u16 pci_exp_lnkctl_or; | ||
277 | u32 sec_unc_err_sever_and; | ||
278 | u32 sec_unc_err_sever_or; | ||
279 | u32 sec_unc_err_mask_and; | ||
280 | u32 sec_unc_err_mask_or; | ||
281 | }; | ||
282 | |||
283 | static void program_hpx_type2(struct pci_dev *dev, struct hpx_type2 *hpx) | ||
284 | { | ||
285 | int pos; | ||
286 | u32 reg32; | ||
287 | |||
288 | if (!hpx) | ||
289 | return; | ||
290 | |||
291 | if (!pci_is_pcie(dev)) | ||
292 | return; | ||
293 | |||
294 | if (hpx->revision > 1) { | ||
295 | pci_warn(dev, "PCIe settings rev %d not supported\n", | ||
296 | hpx->revision); | ||
297 | return; | ||
298 | } | ||
299 | |||
300 | /* | ||
301 | * Don't allow _HPX to change MPS or MRRS settings. We manage | ||
302 | * those to make sure they're consistent with the rest of the | ||
303 | * platform. | ||
304 | */ | ||
305 | hpx->pci_exp_devctl_and |= PCI_EXP_DEVCTL_PAYLOAD | | ||
306 | PCI_EXP_DEVCTL_READRQ; | ||
307 | hpx->pci_exp_devctl_or &= ~(PCI_EXP_DEVCTL_PAYLOAD | | ||
308 | PCI_EXP_DEVCTL_READRQ); | ||
309 | |||
310 | /* Initialize Device Control Register */ | ||
311 | pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL, | ||
312 | ~hpx->pci_exp_devctl_and, hpx->pci_exp_devctl_or); | ||
313 | |||
314 | /* Initialize Link Control Register */ | ||
315 | if (pcie_cap_has_lnkctl(dev)) { | ||
316 | |||
317 | /* | ||
318 | * If the Root Port supports Read Completion Boundary of | ||
319 | * 128, set RCB to 128. Otherwise, clear it. | ||
320 | */ | ||
321 | hpx->pci_exp_lnkctl_and |= PCI_EXP_LNKCTL_RCB; | ||
322 | hpx->pci_exp_lnkctl_or &= ~PCI_EXP_LNKCTL_RCB; | ||
323 | if (pcie_root_rcb_set(dev)) | ||
324 | hpx->pci_exp_lnkctl_or |= PCI_EXP_LNKCTL_RCB; | ||
325 | |||
326 | pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL, | ||
327 | ~hpx->pci_exp_lnkctl_and, hpx->pci_exp_lnkctl_or); | ||
328 | } | ||
329 | |||
330 | /* Find Advanced Error Reporting Enhanced Capability */ | ||
331 | pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); | ||
332 | if (!pos) | ||
333 | return; | ||
334 | |||
335 | /* Initialize Uncorrectable Error Mask Register */ | ||
336 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, ®32); | ||
337 | reg32 = (reg32 & hpx->unc_err_mask_and) | hpx->unc_err_mask_or; | ||
338 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, reg32); | ||
339 | |||
340 | /* Initialize Uncorrectable Error Severity Register */ | ||
341 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, ®32); | ||
342 | reg32 = (reg32 & hpx->unc_err_sever_and) | hpx->unc_err_sever_or; | ||
343 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, reg32); | ||
344 | |||
345 | /* Initialize Correctable Error Mask Register */ | ||
346 | pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, ®32); | ||
347 | reg32 = (reg32 & hpx->cor_err_mask_and) | hpx->cor_err_mask_or; | ||
348 | pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg32); | ||
349 | |||
350 | /* Initialize Advanced Error Capabilities and Control Register */ | ||
351 | pci_read_config_dword(dev, pos + PCI_ERR_CAP, ®32); | ||
352 | reg32 = (reg32 & hpx->adv_err_cap_and) | hpx->adv_err_cap_or; | ||
353 | |||
354 | /* Don't enable ECRC generation or checking if unsupported */ | ||
355 | if (!(reg32 & PCI_ERR_CAP_ECRC_GENC)) | ||
356 | reg32 &= ~PCI_ERR_CAP_ECRC_GENE; | ||
357 | if (!(reg32 & PCI_ERR_CAP_ECRC_CHKC)) | ||
358 | reg32 &= ~PCI_ERR_CAP_ECRC_CHKE; | ||
359 | pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32); | ||
360 | |||
361 | /* | ||
362 | * FIXME: The following two registers are not supported yet. | ||
363 | * | ||
364 | * o Secondary Uncorrectable Error Severity Register | ||
365 | * o Secondary Uncorrectable Error Mask Register | ||
366 | */ | ||
367 | } | ||
368 | |||
176 | static acpi_status decode_type2_hpx_record(union acpi_object *record, | 369 | static acpi_status decode_type2_hpx_record(union acpi_object *record, |
177 | struct hpp_type2 *hpx2) | 370 | struct hpx_type2 *hpx2) |
178 | { | 371 | { |
179 | int i; | 372 | int i; |
180 | union acpi_object *fields = record->package.elements; | 373 | union acpi_object *fields = record->package.elements; |
@@ -213,6 +406,164 @@ static acpi_status decode_type2_hpx_record(union acpi_object *record, | |||
213 | return AE_OK; | 406 | return AE_OK; |
214 | } | 407 | } |
215 | 408 | ||
409 | /* _HPX PCI Express Setting Record (Type 3) */ | ||
410 | struct hpx_type3 { | ||
411 | u16 device_type; | ||
412 | u16 function_type; | ||
413 | u16 config_space_location; | ||
414 | u16 pci_exp_cap_id; | ||
415 | u16 pci_exp_cap_ver; | ||
416 | u16 pci_exp_vendor_id; | ||
417 | u16 dvsec_id; | ||
418 | u16 dvsec_rev; | ||
419 | u16 match_offset; | ||
420 | u32 match_mask_and; | ||
421 | u32 match_value; | ||
422 | u16 reg_offset; | ||
423 | u32 reg_mask_and; | ||
424 | u32 reg_mask_or; | ||
425 | }; | ||
426 | |||
427 | enum hpx_type3_dev_type { | ||
428 | HPX_TYPE_ENDPOINT = BIT(0), | ||
429 | HPX_TYPE_LEG_END = BIT(1), | ||
430 | HPX_TYPE_RC_END = BIT(2), | ||
431 | HPX_TYPE_RC_EC = BIT(3), | ||
432 | HPX_TYPE_ROOT_PORT = BIT(4), | ||
433 | HPX_TYPE_UPSTREAM = BIT(5), | ||
434 | HPX_TYPE_DOWNSTREAM = BIT(6), | ||
435 | HPX_TYPE_PCI_BRIDGE = BIT(7), | ||
436 | HPX_TYPE_PCIE_BRIDGE = BIT(8), | ||
437 | }; | ||
438 | |||
439 | static u16 hpx3_device_type(struct pci_dev *dev) | ||
440 | { | ||
441 | u16 pcie_type = pci_pcie_type(dev); | ||
442 | const int pcie_to_hpx3_type[] = { | ||
443 | [PCI_EXP_TYPE_ENDPOINT] = HPX_TYPE_ENDPOINT, | ||
444 | [PCI_EXP_TYPE_LEG_END] = HPX_TYPE_LEG_END, | ||
445 | [PCI_EXP_TYPE_RC_END] = HPX_TYPE_RC_END, | ||
446 | [PCI_EXP_TYPE_RC_EC] = HPX_TYPE_RC_EC, | ||
447 | [PCI_EXP_TYPE_ROOT_PORT] = HPX_TYPE_ROOT_PORT, | ||
448 | [PCI_EXP_TYPE_UPSTREAM] = HPX_TYPE_UPSTREAM, | ||
449 | [PCI_EXP_TYPE_DOWNSTREAM] = HPX_TYPE_DOWNSTREAM, | ||
450 | [PCI_EXP_TYPE_PCI_BRIDGE] = HPX_TYPE_PCI_BRIDGE, | ||
451 | [PCI_EXP_TYPE_PCIE_BRIDGE] = HPX_TYPE_PCIE_BRIDGE, | ||
452 | }; | ||
453 | |||
454 | if (pcie_type >= ARRAY_SIZE(pcie_to_hpx3_type)) | ||
455 | return 0; | ||
456 | |||
457 | return pcie_to_hpx3_type[pcie_type]; | ||
458 | } | ||
459 | |||
460 | enum hpx_type3_fn_type { | ||
461 | HPX_FN_NORMAL = BIT(0), | ||
462 | HPX_FN_SRIOV_PHYS = BIT(1), | ||
463 | HPX_FN_SRIOV_VIRT = BIT(2), | ||
464 | }; | ||
465 | |||
466 | static u8 hpx3_function_type(struct pci_dev *dev) | ||
467 | { | ||
468 | if (dev->is_virtfn) | ||
469 | return HPX_FN_SRIOV_VIRT; | ||
470 | else if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV) > 0) | ||
471 | return HPX_FN_SRIOV_PHYS; | ||
472 | else | ||
473 | return HPX_FN_NORMAL; | ||
474 | } | ||
475 | |||
476 | static bool hpx3_cap_ver_matches(u8 pcie_cap_id, u8 hpx3_cap_id) | ||
477 | { | ||
478 | u8 cap_ver = hpx3_cap_id & 0xf; | ||
479 | |||
480 | if ((hpx3_cap_id & BIT(4)) && cap_ver >= pcie_cap_id) | ||
481 | return true; | ||
482 | else if (cap_ver == pcie_cap_id) | ||
483 | return true; | ||
484 | |||
485 | return false; | ||
486 | } | ||
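Purely mechanical illustration of the comparison above, with made-up values (the low nibble carries the version; BIT(4) relaxes the check from exact match to ">="):

    hpx3_cap_ver_matches(2, 0x12);  /* BIT(4) set, 2 >= 2       -> true  */
    hpx3_cap_ver_matches(3, 0x12);  /* BIT(4) set, but 2 < 3    -> false */
    hpx3_cap_ver_matches(3, 0x03);  /* BIT(4) clear, exact 3==3 -> true  */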
487 | |||
488 | enum hpx_type3_cfg_loc { | ||
489 | HPX_CFG_PCICFG = 0, | ||
490 | HPX_CFG_PCIE_CAP = 1, | ||
491 | HPX_CFG_PCIE_CAP_EXT = 2, | ||
492 | HPX_CFG_VEND_CAP = 3, | ||
493 | HPX_CFG_DVSEC = 4, | ||
494 | HPX_CFG_MAX, | ||
495 | }; | ||
496 | |||
497 | static void program_hpx_type3_register(struct pci_dev *dev, | ||
498 | const struct hpx_type3 *reg) | ||
499 | { | ||
500 | u32 match_reg, write_reg, header, orig_value; | ||
501 | u16 pos; | ||
502 | |||
503 | if (!(hpx3_device_type(dev) & reg->device_type)) | ||
504 | return; | ||
505 | |||
506 | if (!(hpx3_function_type(dev) & reg->function_type)) | ||
507 | return; | ||
508 | |||
509 | switch (reg->config_space_location) { | ||
510 | case HPX_CFG_PCICFG: | ||
511 | pos = 0; | ||
512 | break; | ||
513 | case HPX_CFG_PCIE_CAP: | ||
514 | pos = pci_find_capability(dev, reg->pci_exp_cap_id); | ||
515 | if (pos == 0) | ||
516 | return; | ||
517 | |||
518 | break; | ||
519 | case HPX_CFG_PCIE_CAP_EXT: | ||
520 | pos = pci_find_ext_capability(dev, reg->pci_exp_cap_id); | ||
521 | if (pos == 0) | ||
522 | return; | ||
523 | |||
524 | pci_read_config_dword(dev, pos, &header); | ||
525 | if (!hpx3_cap_ver_matches(PCI_EXT_CAP_VER(header), | ||
526 | reg->pci_exp_cap_ver)) | ||
527 | return; | ||
528 | |||
529 | break; | ||
530 | case HPX_CFG_VEND_CAP: /* Fall through */ | ||
531 | case HPX_CFG_DVSEC: /* Fall through */ | ||
532 | default: | ||
533 | pci_warn(dev, "Encountered _HPX type 3 with unsupported config space location"); | ||
534 | return; | ||
535 | } | ||
536 | |||
537 | pci_read_config_dword(dev, pos + reg->match_offset, &match_reg); | ||
538 | |||
539 | if ((match_reg & reg->match_mask_and) != reg->match_value) | ||
540 | return; | ||
541 | |||
542 | pci_read_config_dword(dev, pos + reg->reg_offset, &write_reg); | ||
543 | orig_value = write_reg; | ||
544 | write_reg &= reg->reg_mask_and; | ||
545 | write_reg |= reg->reg_mask_or; | ||
546 | |||
547 | if (orig_value == write_reg) | ||
548 | return; | ||
549 | |||
550 | pci_write_config_dword(dev, pos + reg->reg_offset, write_reg); | ||
551 | |||
552 | pci_dbg(dev, "Applied _HPX3 at [0x%x]: 0x%08x -> 0x%08x", | ||
553 | pos, orig_value, write_reg); | ||
554 | } | ||
555 | |||
556 | static void program_hpx_type3(struct pci_dev *dev, struct hpx_type3 *hpx) | ||
557 | { | ||
558 | if (!hpx) | ||
559 | return; | ||
560 | |||
561 | if (!pci_is_pcie(dev)) | ||
562 | return; | ||
563 | |||
564 | program_hpx_type3_register(dev, hpx); | ||
565 | } | ||
566 | |||
216 | static void parse_hpx3_register(struct hpx_type3 *hpx3_reg, | 567 | static void parse_hpx3_register(struct hpx_type3 *hpx3_reg, |
217 | union acpi_object *reg_fields) | 568 | union acpi_object *reg_fields) |
218 | { | 569 | { |
@@ -233,8 +584,7 @@ static void parse_hpx3_register(struct hpx_type3 *hpx3_reg, | |||
233 | } | 584 | } |
234 | 585 | ||
235 | static acpi_status program_type3_hpx_record(struct pci_dev *dev, | 586 | static acpi_status program_type3_hpx_record(struct pci_dev *dev, |
236 | union acpi_object *record, | 587 | union acpi_object *record) |
237 | const struct hotplug_program_ops *hp_ops) | ||
238 | { | 588 | { |
239 | union acpi_object *fields = record->package.elements; | 589 | union acpi_object *fields = record->package.elements; |
240 | u32 desc_count, expected_length, revision; | 590 | u32 desc_count, expected_length, revision; |
@@ -258,7 +608,7 @@ static acpi_status program_type3_hpx_record(struct pci_dev *dev, | |||
258 | for (i = 0; i < desc_count; i++) { | 608 | for (i = 0; i < desc_count; i++) { |
259 | reg_fields = fields + 3 + i * 14; | 609 | reg_fields = fields + 3 + i * 14; |
260 | parse_hpx3_register(&hpx3, reg_fields); | 610 | parse_hpx3_register(&hpx3, reg_fields); |
261 | hp_ops->program_type3(dev, &hpx3); | 611 | program_hpx_type3(dev, &hpx3); |
262 | } | 612 | } |
263 | 613 | ||
264 | break; | 614 | break; |
@@ -271,15 +621,14 @@ static acpi_status program_type3_hpx_record(struct pci_dev *dev, | |||
271 | return AE_OK; | 621 | return AE_OK; |
272 | } | 622 | } |
273 | 623 | ||
274 | static acpi_status acpi_run_hpx(struct pci_dev *dev, acpi_handle handle, | 624 | static acpi_status acpi_run_hpx(struct pci_dev *dev, acpi_handle handle) |
275 | const struct hotplug_program_ops *hp_ops) | ||
276 | { | 625 | { |
277 | acpi_status status; | 626 | acpi_status status; |
278 | struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; | 627 | struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; |
279 | union acpi_object *package, *record, *fields; | 628 | union acpi_object *package, *record, *fields; |
280 | struct hpp_type0 hpx0; | 629 | struct hpx_type0 hpx0; |
281 | struct hpp_type1 hpx1; | 630 | struct hpx_type1 hpx1; |
282 | struct hpp_type2 hpx2; | 631 | struct hpx_type2 hpx2; |
283 | u32 type; | 632 | u32 type; |
284 | int i; | 633 | int i; |
285 | 634 | ||
@@ -314,24 +663,24 @@ static acpi_status acpi_run_hpx(struct pci_dev *dev, acpi_handle handle, | |||
314 | status = decode_type0_hpx_record(record, &hpx0); | 663 | status = decode_type0_hpx_record(record, &hpx0); |
315 | if (ACPI_FAILURE(status)) | 664 | if (ACPI_FAILURE(status)) |
316 | goto exit; | 665 | goto exit; |
317 | hp_ops->program_type0(dev, &hpx0); | 666 | program_hpx_type0(dev, &hpx0); |
318 | break; | 667 | break; |
319 | case 1: | 668 | case 1: |
320 | memset(&hpx1, 0, sizeof(hpx1)); | 669 | memset(&hpx1, 0, sizeof(hpx1)); |
321 | status = decode_type1_hpx_record(record, &hpx1); | 670 | status = decode_type1_hpx_record(record, &hpx1); |
322 | if (ACPI_FAILURE(status)) | 671 | if (ACPI_FAILURE(status)) |
323 | goto exit; | 672 | goto exit; |
324 | hp_ops->program_type1(dev, &hpx1); | 673 | program_hpx_type1(dev, &hpx1); |
325 | break; | 674 | break; |
326 | case 2: | 675 | case 2: |
327 | memset(&hpx2, 0, sizeof(hpx2)); | 676 | memset(&hpx2, 0, sizeof(hpx2)); |
328 | status = decode_type2_hpx_record(record, &hpx2); | 677 | status = decode_type2_hpx_record(record, &hpx2); |
329 | if (ACPI_FAILURE(status)) | 678 | if (ACPI_FAILURE(status)) |
330 | goto exit; | 679 | goto exit; |
331 | hp_ops->program_type2(dev, &hpx2); | 680 | program_hpx_type2(dev, &hpx2); |
332 | break; | 681 | break; |
333 | case 3: | 682 | case 3: |
334 | status = program_type3_hpx_record(dev, record, hp_ops); | 683 | status = program_type3_hpx_record(dev, record); |
335 | if (ACPI_FAILURE(status)) | 684 | if (ACPI_FAILURE(status)) |
336 | goto exit; | 685 | goto exit; |
337 | break; | 686 | break; |
@@ -347,16 +696,15 @@ static acpi_status acpi_run_hpx(struct pci_dev *dev, acpi_handle handle, | |||
347 | return status; | 696 | return status; |
348 | } | 697 | } |
349 | 698 | ||
350 | static acpi_status acpi_run_hpp(struct pci_dev *dev, acpi_handle handle, | 699 | static acpi_status acpi_run_hpp(struct pci_dev *dev, acpi_handle handle) |
351 | const struct hotplug_program_ops *hp_ops) | ||
352 | { | 700 | { |
353 | acpi_status status; | 701 | acpi_status status; |
354 | struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; | 702 | struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; |
355 | union acpi_object *package, *fields; | 703 | union acpi_object *package, *fields; |
356 | struct hpp_type0 hpp0; | 704 | struct hpx_type0 hpx0; |
357 | int i; | 705 | int i; |
358 | 706 | ||
359 | memset(&hpp0, 0, sizeof(hpp0)); | 707 | memset(&hpx0, 0, sizeof(hpx0)); |
360 | 708 | ||
361 | status = acpi_evaluate_object(handle, "_HPP", NULL, &buffer); | 709 | status = acpi_evaluate_object(handle, "_HPP", NULL, &buffer); |
362 | if (ACPI_FAILURE(status)) | 710 | if (ACPI_FAILURE(status)) |
@@ -377,26 +725,24 @@ static acpi_status acpi_run_hpp(struct pci_dev *dev, acpi_handle handle, | |||
377 | } | 725 | } |
378 | } | 726 | } |
379 | 727 | ||
380 | hpp0.revision = 1; | 728 | hpx0.revision = 1; |
381 | hpp0.cache_line_size = fields[0].integer.value; | 729 | hpx0.cache_line_size = fields[0].integer.value; |
382 | hpp0.latency_timer = fields[1].integer.value; | 730 | hpx0.latency_timer = fields[1].integer.value; |
383 | hpp0.enable_serr = fields[2].integer.value; | 731 | hpx0.enable_serr = fields[2].integer.value; |
384 | hpp0.enable_perr = fields[3].integer.value; | 732 | hpx0.enable_perr = fields[3].integer.value; |
385 | 733 | ||
386 | hp_ops->program_type0(dev, &hpp0); | 734 | program_hpx_type0(dev, &hpx0); |
387 | 735 | ||
388 | exit: | 736 | exit: |
389 | kfree(buffer.pointer); | 737 | kfree(buffer.pointer); |
390 | return status; | 738 | return status; |
391 | } | 739 | } |
392 | 740 | ||
393 | /* pci_get_hp_params | 741 | /* pci_acpi_program_hp_params |
394 | * | 742 | * |
395 | * @dev - the pci_dev for which we want parameters | 743 | * @dev - the pci_dev for which we want parameters |
396 | * @hpp - allocated by the caller | ||
397 | */ | 744 | */ |
398 | int pci_acpi_program_hp_params(struct pci_dev *dev, | 745 | int pci_acpi_program_hp_params(struct pci_dev *dev) |
399 | const struct hotplug_program_ops *hp_ops) | ||
400 | { | 746 | { |
401 | acpi_status status; | 747 | acpi_status status; |
402 | acpi_handle handle, phandle; | 748 | acpi_handle handle, phandle; |
@@ -419,10 +765,10 @@ int pci_acpi_program_hp_params(struct pci_dev *dev, | |||
419 | * this pci dev. | 765 | * this pci dev. |
420 | */ | 766 | */ |
421 | while (handle) { | 767 | while (handle) { |
422 | status = acpi_run_hpx(dev, handle, hp_ops); | 768 | status = acpi_run_hpx(dev, handle); |
423 | if (ACPI_SUCCESS(status)) | 769 | if (ACPI_SUCCESS(status)) |
424 | return 0; | 770 | return 0; |
425 | status = acpi_run_hpp(dev, handle, hp_ops); | 771 | status = acpi_run_hpp(dev, handle); |
426 | if (ACPI_SUCCESS(status)) | 772 | if (ACPI_SUCCESS(status)) |
427 | return 0; | 773 | return 0; |
428 | if (acpi_is_root_bridge(handle)) | 774 | if (acpi_is_root_bridge(handle)) |
diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
index 06083b86d4f4..5fd90105510d 100644
--- a/drivers/pci/pci-bridge-emul.c
+++ b/drivers/pci/pci-bridge-emul.c
@@ -38,7 +38,7 @@ struct pci_bridge_reg_behavior { | |||
38 | u32 rsvd; | 38 | u32 rsvd; |
39 | }; | 39 | }; |
40 | 40 | ||
41 | const static struct pci_bridge_reg_behavior pci_regs_behavior[] = { | 41 | static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { |
42 | [PCI_VENDOR_ID / 4] = { .ro = ~0 }, | 42 | [PCI_VENDOR_ID / 4] = { .ro = ~0 }, |
43 | [PCI_COMMAND / 4] = { | 43 | [PCI_COMMAND / 4] = { |
44 | .rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY | | 44 | .rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY | |
@@ -173,7 +173,7 @@ const static struct pci_bridge_reg_behavior pci_regs_behavior[] = { | |||
173 | }, | 173 | }, |
174 | }; | 174 | }; |
175 | 175 | ||
176 | const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { | 176 | static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { |
177 | [PCI_CAP_LIST_ID / 4] = { | 177 | [PCI_CAP_LIST_ID / 4] = { |
178 | /* | 178 | /* |
179 | * Capability ID, Next Capability Pointer and | 179 | * Capability ID, Next Capability Pointer and |
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index 965c72104150..868e35109284 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -464,9 +464,7 @@ static ssize_t dev_rescan_store(struct device *dev, | |||
464 | } | 464 | } |
465 | return count; | 465 | return count; |
466 | } | 466 | } |
467 | static struct device_attribute dev_rescan_attr = __ATTR(rescan, | 467 | static DEVICE_ATTR_WO(dev_rescan); |
468 | (S_IWUSR|S_IWGRP), | ||
469 | NULL, dev_rescan_store); | ||
470 | 468 | ||
471 | static ssize_t remove_store(struct device *dev, struct device_attribute *attr, | 469 | static ssize_t remove_store(struct device *dev, struct device_attribute *attr, |
472 | const char *buf, size_t count) | 470 | const char *buf, size_t count) |
@@ -480,13 +478,12 @@ static ssize_t remove_store(struct device *dev, struct device_attribute *attr, | |||
480 | pci_stop_and_remove_bus_device_locked(to_pci_dev(dev)); | 478 | pci_stop_and_remove_bus_device_locked(to_pci_dev(dev)); |
481 | return count; | 479 | return count; |
482 | } | 480 | } |
483 | static struct device_attribute dev_remove_attr = __ATTR_IGNORE_LOCKDEP(remove, | 481 | static DEVICE_ATTR_IGNORE_LOCKDEP(remove, 0220, NULL, |
484 | (S_IWUSR|S_IWGRP), | 482 | remove_store); |
485 | NULL, remove_store); | ||
486 | 483 | ||
487 | static ssize_t dev_bus_rescan_store(struct device *dev, | 484 | static ssize_t bus_rescan_store(struct device *dev, |
488 | struct device_attribute *attr, | 485 | struct device_attribute *attr, |
489 | const char *buf, size_t count) | 486 | const char *buf, size_t count) |
490 | { | 487 | { |
491 | unsigned long val; | 488 | unsigned long val; |
492 | struct pci_bus *bus = to_pci_bus(dev); | 489 | struct pci_bus *bus = to_pci_bus(dev); |
@@ -504,7 +501,7 @@ static ssize_t dev_bus_rescan_store(struct device *dev, | |||
504 | } | 501 | } |
505 | return count; | 502 | return count; |
506 | } | 503 | } |
507 | static DEVICE_ATTR(rescan, (S_IWUSR|S_IWGRP), NULL, dev_bus_rescan_store); | 504 | static DEVICE_ATTR_WO(bus_rescan); |
508 | 505 | ||
509 | #if defined(CONFIG_PM) && defined(CONFIG_ACPI) | 506 | #if defined(CONFIG_PM) && defined(CONFIG_ACPI) |
510 | static ssize_t d3cold_allowed_store(struct device *dev, | 507 | static ssize_t d3cold_allowed_store(struct device *dev, |
@@ -551,154 +548,6 @@ static ssize_t devspec_show(struct device *dev, | |||
551 | static DEVICE_ATTR_RO(devspec); | 548 | static DEVICE_ATTR_RO(devspec); |
552 | #endif | 549 | #endif |
553 | 550 | ||
554 | #ifdef CONFIG_PCI_IOV | ||
555 | static ssize_t sriov_totalvfs_show(struct device *dev, | ||
556 | struct device_attribute *attr, | ||
557 | char *buf) | ||
558 | { | ||
559 | struct pci_dev *pdev = to_pci_dev(dev); | ||
560 | |||
561 | return sprintf(buf, "%u\n", pci_sriov_get_totalvfs(pdev)); | ||
562 | } | ||
563 | |||
564 | |||
565 | static ssize_t sriov_numvfs_show(struct device *dev, | ||
566 | struct device_attribute *attr, | ||
567 | char *buf) | ||
568 | { | ||
569 | struct pci_dev *pdev = to_pci_dev(dev); | ||
570 | |||
571 | return sprintf(buf, "%u\n", pdev->sriov->num_VFs); | ||
572 | } | ||
573 | |||
574 | /* | ||
575 | * num_vfs > 0; number of VFs to enable | ||
576 | * num_vfs = 0; disable all VFs | ||
577 | * | ||
578 | * Note: SRIOV spec doesn't allow partial VF | ||
579 | * disable, so it's all or none. | ||
580 | */ | ||
581 | static ssize_t sriov_numvfs_store(struct device *dev, | ||
582 | struct device_attribute *attr, | ||
583 | const char *buf, size_t count) | ||
584 | { | ||
585 | struct pci_dev *pdev = to_pci_dev(dev); | ||
586 | int ret; | ||
587 | u16 num_vfs; | ||
588 | |||
589 | ret = kstrtou16(buf, 0, &num_vfs); | ||
590 | if (ret < 0) | ||
591 | return ret; | ||
592 | |||
593 | if (num_vfs > pci_sriov_get_totalvfs(pdev)) | ||
594 | return -ERANGE; | ||
595 | |||
596 | device_lock(&pdev->dev); | ||
597 | |||
598 | if (num_vfs == pdev->sriov->num_VFs) | ||
599 | goto exit; | ||
600 | |||
601 | /* is PF driver loaded w/callback */ | ||
602 | if (!pdev->driver || !pdev->driver->sriov_configure) { | ||
603 | pci_info(pdev, "Driver doesn't support SRIOV configuration via sysfs\n"); | ||
604 | ret = -ENOENT; | ||
605 | goto exit; | ||
606 | } | ||
607 | |||
608 | if (num_vfs == 0) { | ||
609 | /* disable VFs */ | ||
610 | ret = pdev->driver->sriov_configure(pdev, 0); | ||
611 | goto exit; | ||
612 | } | ||
613 | |||
614 | /* enable VFs */ | ||
615 | if (pdev->sriov->num_VFs) { | ||
616 | pci_warn(pdev, "%d VFs already enabled. Disable before enabling %d VFs\n", | ||
617 | pdev->sriov->num_VFs, num_vfs); | ||
618 | ret = -EBUSY; | ||
619 | goto exit; | ||
620 | } | ||
621 | |||
622 | ret = pdev->driver->sriov_configure(pdev, num_vfs); | ||
623 | if (ret < 0) | ||
624 | goto exit; | ||
625 | |||
626 | if (ret != num_vfs) | ||
627 | pci_warn(pdev, "%d VFs requested; only %d enabled\n", | ||
628 | num_vfs, ret); | ||
629 | |||
630 | exit: | ||
631 | device_unlock(&pdev->dev); | ||
632 | |||
633 | if (ret < 0) | ||
634 | return ret; | ||
635 | |||
636 | return count; | ||
637 | } | ||
638 | |||
639 | static ssize_t sriov_offset_show(struct device *dev, | ||
640 | struct device_attribute *attr, | ||
641 | char *buf) | ||
642 | { | ||
643 | struct pci_dev *pdev = to_pci_dev(dev); | ||
644 | |||
645 | return sprintf(buf, "%u\n", pdev->sriov->offset); | ||
646 | } | ||
647 | |||
648 | static ssize_t sriov_stride_show(struct device *dev, | ||
649 | struct device_attribute *attr, | ||
650 | char *buf) | ||
651 | { | ||
652 | struct pci_dev *pdev = to_pci_dev(dev); | ||
653 | |||
654 | return sprintf(buf, "%u\n", pdev->sriov->stride); | ||
655 | } | ||
656 | |||
657 | static ssize_t sriov_vf_device_show(struct device *dev, | ||
658 | struct device_attribute *attr, | ||
659 | char *buf) | ||
660 | { | ||
661 | struct pci_dev *pdev = to_pci_dev(dev); | ||
662 | |||
663 | return sprintf(buf, "%x\n", pdev->sriov->vf_device); | ||
664 | } | ||
665 | |||
666 | static ssize_t sriov_drivers_autoprobe_show(struct device *dev, | ||
667 | struct device_attribute *attr, | ||
668 | char *buf) | ||
669 | { | ||
670 | struct pci_dev *pdev = to_pci_dev(dev); | ||
671 | |||
672 | return sprintf(buf, "%u\n", pdev->sriov->drivers_autoprobe); | ||
673 | } | ||
674 | |||
675 | static ssize_t sriov_drivers_autoprobe_store(struct device *dev, | ||
676 | struct device_attribute *attr, | ||
677 | const char *buf, size_t count) | ||
678 | { | ||
679 | struct pci_dev *pdev = to_pci_dev(dev); | ||
680 | bool drivers_autoprobe; | ||
681 | |||
682 | if (kstrtobool(buf, &drivers_autoprobe) < 0) | ||
683 | return -EINVAL; | ||
684 | |||
685 | pdev->sriov->drivers_autoprobe = drivers_autoprobe; | ||
686 | |||
687 | return count; | ||
688 | } | ||
689 | |||
690 | static struct device_attribute sriov_totalvfs_attr = __ATTR_RO(sriov_totalvfs); | ||
691 | static struct device_attribute sriov_numvfs_attr = | ||
692 | __ATTR(sriov_numvfs, (S_IRUGO|S_IWUSR|S_IWGRP), | ||
693 | sriov_numvfs_show, sriov_numvfs_store); | ||
694 | static struct device_attribute sriov_offset_attr = __ATTR_RO(sriov_offset); | ||
695 | static struct device_attribute sriov_stride_attr = __ATTR_RO(sriov_stride); | ||
696 | static struct device_attribute sriov_vf_device_attr = __ATTR_RO(sriov_vf_device); | ||
697 | static struct device_attribute sriov_drivers_autoprobe_attr = | ||
698 | __ATTR(sriov_drivers_autoprobe, (S_IRUGO|S_IWUSR|S_IWGRP), | ||
699 | sriov_drivers_autoprobe_show, sriov_drivers_autoprobe_store); | ||
700 | #endif /* CONFIG_PCI_IOV */ | ||
701 | |||
702 | static ssize_t driver_override_store(struct device *dev, | 551 | static ssize_t driver_override_store(struct device *dev, |
703 | struct device_attribute *attr, | 552 | struct device_attribute *attr, |
704 | const char *buf, size_t count) | 553 | const char *buf, size_t count) |
@@ -792,7 +641,7 @@ static struct attribute *pcie_dev_attrs[] = { | |||
792 | }; | 641 | }; |
793 | 642 | ||
794 | static struct attribute *pcibus_attrs[] = { | 643 | static struct attribute *pcibus_attrs[] = { |
795 | &dev_attr_rescan.attr, | 644 | &dev_attr_bus_rescan.attr, |
796 | &dev_attr_cpuaffinity.attr, | 645 | &dev_attr_cpuaffinity.attr, |
797 | &dev_attr_cpulistaffinity.attr, | 646 | &dev_attr_cpulistaffinity.attr, |
798 | NULL, | 647 | NULL, |
@@ -820,7 +669,7 @@ static ssize_t boot_vga_show(struct device *dev, struct device_attribute *attr, | |||
820 | !!(pdev->resource[PCI_ROM_RESOURCE].flags & | 669 | !!(pdev->resource[PCI_ROM_RESOURCE].flags & |
821 | IORESOURCE_ROM_SHADOW)); | 670 | IORESOURCE_ROM_SHADOW)); |
822 | } | 671 | } |
823 | static struct device_attribute vga_attr = __ATTR_RO(boot_vga); | 672 | static DEVICE_ATTR_RO(boot_vga); |
824 | 673 | ||
825 | static ssize_t pci_read_config(struct file *filp, struct kobject *kobj, | 674 | static ssize_t pci_read_config(struct file *filp, struct kobject *kobj, |
826 | struct bin_attribute *bin_attr, char *buf, | 675 | struct bin_attribute *bin_attr, char *buf, |
@@ -1085,7 +934,7 @@ void pci_create_legacy_files(struct pci_bus *b) | |||
1085 | sysfs_bin_attr_init(b->legacy_io); | 934 | sysfs_bin_attr_init(b->legacy_io); |
1086 | b->legacy_io->attr.name = "legacy_io"; | 935 | b->legacy_io->attr.name = "legacy_io"; |
1087 | b->legacy_io->size = 0xffff; | 936 | b->legacy_io->size = 0xffff; |
1088 | b->legacy_io->attr.mode = S_IRUSR | S_IWUSR; | 937 | b->legacy_io->attr.mode = 0600; |
1089 | b->legacy_io->read = pci_read_legacy_io; | 938 | b->legacy_io->read = pci_read_legacy_io; |
1090 | b->legacy_io->write = pci_write_legacy_io; | 939 | b->legacy_io->write = pci_write_legacy_io; |
1091 | b->legacy_io->mmap = pci_mmap_legacy_io; | 940 | b->legacy_io->mmap = pci_mmap_legacy_io; |
@@ -1099,7 +948,7 @@ void pci_create_legacy_files(struct pci_bus *b) | |||
1099 | sysfs_bin_attr_init(b->legacy_mem); | 948 | sysfs_bin_attr_init(b->legacy_mem); |
1100 | b->legacy_mem->attr.name = "legacy_mem"; | 949 | b->legacy_mem->attr.name = "legacy_mem"; |
1101 | b->legacy_mem->size = 1024*1024; | 950 | b->legacy_mem->size = 1024*1024; |
1102 | b->legacy_mem->attr.mode = S_IRUSR | S_IWUSR; | 951 | b->legacy_mem->attr.mode = 0600; |
1103 | b->legacy_mem->mmap = pci_mmap_legacy_mem; | 952 | b->legacy_mem->mmap = pci_mmap_legacy_mem; |
1104 | pci_adjust_legacy_attr(b, pci_mmap_mem); | 953 | pci_adjust_legacy_attr(b, pci_mmap_mem); |
1105 | error = device_create_bin_file(&b->dev, b->legacy_mem); | 954 | error = device_create_bin_file(&b->dev, b->legacy_mem); |
@@ -1306,7 +1155,7 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine) | |||
1306 | } | 1155 | } |
1307 | } | 1156 | } |
1308 | res_attr->attr.name = res_attr_name; | 1157 | res_attr->attr.name = res_attr_name; |
1309 | res_attr->attr.mode = S_IRUSR | S_IWUSR; | 1158 | res_attr->attr.mode = 0600; |
1310 | res_attr->size = pci_resource_len(pdev, num); | 1159 | res_attr->size = pci_resource_len(pdev, num); |
1311 | res_attr->private = (void *)(unsigned long)num; | 1160 | res_attr->private = (void *)(unsigned long)num; |
1312 | retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr); | 1161 | retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr); |
@@ -1419,7 +1268,7 @@ static ssize_t pci_read_rom(struct file *filp, struct kobject *kobj, | |||
1419 | static const struct bin_attribute pci_config_attr = { | 1268 | static const struct bin_attribute pci_config_attr = { |
1420 | .attr = { | 1269 | .attr = { |
1421 | .name = "config", | 1270 | .name = "config", |
1422 | .mode = S_IRUGO | S_IWUSR, | 1271 | .mode = 0644, |
1423 | }, | 1272 | }, |
1424 | .size = PCI_CFG_SPACE_SIZE, | 1273 | .size = PCI_CFG_SPACE_SIZE, |
1425 | .read = pci_read_config, | 1274 | .read = pci_read_config, |
@@ -1429,7 +1278,7 @@ static const struct bin_attribute pci_config_attr = { | |||
1429 | static const struct bin_attribute pcie_config_attr = { | 1278 | static const struct bin_attribute pcie_config_attr = { |
1430 | .attr = { | 1279 | .attr = { |
1431 | .name = "config", | 1280 | .name = "config", |
1432 | .mode = S_IRUGO | S_IWUSR, | 1281 | .mode = 0644, |
1433 | }, | 1282 | }, |
1434 | .size = PCI_CFG_SPACE_EXP_SIZE, | 1283 | .size = PCI_CFG_SPACE_EXP_SIZE, |
1435 | .read = pci_read_config, | 1284 | .read = pci_read_config, |
@@ -1458,7 +1307,7 @@ static ssize_t reset_store(struct device *dev, struct device_attribute *attr, | |||
1458 | return count; | 1307 | return count; |
1459 | } | 1308 | } |
1460 | 1309 | ||
1461 | static struct device_attribute reset_attr = __ATTR(reset, 0200, NULL, reset_store); | 1310 | static DEVICE_ATTR(reset, 0200, NULL, reset_store); |
1462 | 1311 | ||
1463 | static int pci_create_capabilities_sysfs(struct pci_dev *dev) | 1312 | static int pci_create_capabilities_sysfs(struct pci_dev *dev) |
1464 | { | 1313 | { |
@@ -1468,7 +1317,7 @@ static int pci_create_capabilities_sysfs(struct pci_dev *dev) | |||
1468 | pcie_aspm_create_sysfs_dev_files(dev); | 1317 | pcie_aspm_create_sysfs_dev_files(dev); |
1469 | 1318 | ||
1470 | if (dev->reset_fn) { | 1319 | if (dev->reset_fn) { |
1471 | retval = device_create_file(&dev->dev, &reset_attr); | 1320 | retval = device_create_file(&dev->dev, &dev_attr_reset); |
1472 | if (retval) | 1321 | if (retval) |
1473 | goto error; | 1322 | goto error; |
1474 | } | 1323 | } |
@@ -1511,7 +1360,7 @@ int __must_check pci_create_sysfs_dev_files(struct pci_dev *pdev) | |||
1511 | sysfs_bin_attr_init(attr); | 1360 | sysfs_bin_attr_init(attr); |
1512 | attr->size = rom_size; | 1361 | attr->size = rom_size; |
1513 | attr->attr.name = "rom"; | 1362 | attr->attr.name = "rom"; |
1514 | attr->attr.mode = S_IRUSR | S_IWUSR; | 1363 | attr->attr.mode = 0600; |
1515 | attr->read = pci_read_rom; | 1364 | attr->read = pci_read_rom; |
1516 | attr->write = pci_write_rom; | 1365 | attr->write = pci_write_rom; |
1517 | retval = sysfs_create_bin_file(&pdev->dev.kobj, attr); | 1366 | retval = sysfs_create_bin_file(&pdev->dev.kobj, attr); |
@@ -1553,7 +1402,7 @@ static void pci_remove_capabilities_sysfs(struct pci_dev *dev) | |||
1553 | pcie_vpd_remove_sysfs_dev_files(dev); | 1402 | pcie_vpd_remove_sysfs_dev_files(dev); |
1554 | pcie_aspm_remove_sysfs_dev_files(dev); | 1403 | pcie_aspm_remove_sysfs_dev_files(dev); |
1555 | if (dev->reset_fn) { | 1404 | if (dev->reset_fn) { |
1556 | device_remove_file(&dev->dev, &reset_attr); | 1405 | device_remove_file(&dev->dev, &dev_attr_reset); |
1557 | dev->reset_fn = 0; | 1406 | dev->reset_fn = 0; |
1558 | } | 1407 | } |
1559 | } | 1408 | } |
@@ -1606,7 +1455,7 @@ static int __init pci_sysfs_init(void) | |||
1606 | late_initcall(pci_sysfs_init); | 1455 | late_initcall(pci_sysfs_init); |
1607 | 1456 | ||
1608 | static struct attribute *pci_dev_dev_attrs[] = { | 1457 | static struct attribute *pci_dev_dev_attrs[] = { |
1609 | &vga_attr.attr, | 1458 | &dev_attr_boot_vga.attr, |
1610 | NULL, | 1459 | NULL, |
1611 | }; | 1460 | }; |
1612 | 1461 | ||
@@ -1616,7 +1465,7 @@ static umode_t pci_dev_attrs_are_visible(struct kobject *kobj, | |||
1616 | struct device *dev = kobj_to_dev(kobj); | 1465 | struct device *dev = kobj_to_dev(kobj); |
1617 | struct pci_dev *pdev = to_pci_dev(dev); | 1466 | struct pci_dev *pdev = to_pci_dev(dev); |
1618 | 1467 | ||
1619 | if (a == &vga_attr.attr) | 1468 | if (a == &dev_attr_boot_vga.attr) |
1620 | if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA) | 1469 | if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA) |
1621 | return 0; | 1470 | return 0; |
1622 | 1471 | ||
@@ -1624,8 +1473,8 @@ static umode_t pci_dev_attrs_are_visible(struct kobject *kobj, | |||
1624 | } | 1473 | } |
1625 | 1474 | ||
1626 | static struct attribute *pci_dev_hp_attrs[] = { | 1475 | static struct attribute *pci_dev_hp_attrs[] = { |
1627 | &dev_remove_attr.attr, | 1476 | &dev_attr_remove.attr, |
1628 | &dev_rescan_attr.attr, | 1477 | &dev_attr_dev_rescan.attr, |
1629 | NULL, | 1478 | NULL, |
1630 | }; | 1479 | }; |
1631 | 1480 | ||
@@ -1697,34 +1546,6 @@ static const struct attribute_group pci_dev_hp_attr_group = { | |||
1697 | .is_visible = pci_dev_hp_attrs_are_visible, | 1546 | .is_visible = pci_dev_hp_attrs_are_visible, |
1698 | }; | 1547 | }; |
1699 | 1548 | ||
1700 | #ifdef CONFIG_PCI_IOV | ||
1701 | static struct attribute *sriov_dev_attrs[] = { | ||
1702 | &sriov_totalvfs_attr.attr, | ||
1703 | &sriov_numvfs_attr.attr, | ||
1704 | &sriov_offset_attr.attr, | ||
1705 | &sriov_stride_attr.attr, | ||
1706 | &sriov_vf_device_attr.attr, | ||
1707 | &sriov_drivers_autoprobe_attr.attr, | ||
1708 | NULL, | ||
1709 | }; | ||
1710 | |||
1711 | static umode_t sriov_attrs_are_visible(struct kobject *kobj, | ||
1712 | struct attribute *a, int n) | ||
1713 | { | ||
1714 | struct device *dev = kobj_to_dev(kobj); | ||
1715 | |||
1716 | if (!dev_is_pf(dev)) | ||
1717 | return 0; | ||
1718 | |||
1719 | return a->mode; | ||
1720 | } | ||
1721 | |||
1722 | static const struct attribute_group sriov_dev_attr_group = { | ||
1723 | .attrs = sriov_dev_attrs, | ||
1724 | .is_visible = sriov_attrs_are_visible, | ||
1725 | }; | ||
1726 | #endif /* CONFIG_PCI_IOV */ | ||
1727 | |||
1728 | static const struct attribute_group pci_dev_attr_group = { | 1549 | static const struct attribute_group pci_dev_attr_group = { |
1729 | .attrs = pci_dev_dev_attrs, | 1550 | .attrs = pci_dev_dev_attrs, |
1730 | .is_visible = pci_dev_attrs_are_visible, | 1551 | .is_visible = pci_dev_attrs_are_visible, |
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 1b27b5af3d55..e7982af9a5d8 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -890,8 +890,8 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state) | |||
890 | 890 | ||
891 | pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); | 891 | pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); |
892 | dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK); | 892 | dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK); |
893 | if (dev->current_state != state && printk_ratelimit()) | 893 | if (dev->current_state != state) |
894 | pci_info(dev, "Refused to change power state, currently in D%d\n", | 894 | pci_info_ratelimited(dev, "Refused to change power state, currently in D%d\n", |
895 | dev->current_state); | 895 | dev->current_state); |
896 | 896 | ||
897 | /* | 897 | /* |
@@ -1443,7 +1443,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev) | |||
1443 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); | 1443 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); |
1444 | bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX; | 1444 | bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX; |
1445 | res = pdev->resource + bar_idx; | 1445 | res = pdev->resource + bar_idx; |
1446 | size = order_base_2((resource_size(res) >> 20) | 1) - 1; | 1446 | size = ilog2(resource_size(res)) - 20; |
1447 | ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE; | 1447 | ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE; |
1448 | ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT; | 1448 | ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT; |
1449 | pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl); | 1449 | pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl); |
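For context on the hunk above (not part of the patch): the Resizable BAR size field encodes log2 of the BAR size in MB, so a 1 MB BAR must be written back as 0, 2 MB as 1, and so on. The replaced expression appears to underflow for a 1 MB resource (order_base_2(1) - 1 is -1, assuming order_base_2(1) == 0), while the new ilog2()-based form yields the expected encoding. A small user-space sketch of the arithmetic, with a hand-rolled ilog2 standing in for the kernel helper:

#include <stdio.h>

/* Stand-in for the kernel's ilog2() on power-of-two sizes. */
static unsigned int my_ilog2(unsigned long long v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned long long sizes[] = { 1ULL << 20, 2ULL << 20, 256ULL << 20 };

	for (int i = 0; i < 3; i++) {
		/* New encoding from the hunk: log2(bytes) - 20 */
		unsigned int enc = my_ilog2(sizes[i]) - 20;

		printf("%llu MB BAR -> size bits %u\n", sizes[i] >> 20, enc);
	}
	return 0;	/* prints 0, 1 and 8 for 1 MB, 2 MB and 256 MB */
}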
@@ -3581,7 +3581,7 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask) | |||
3581 | } | 3581 | } |
3582 | 3582 | ||
3583 | /* Ensure upstream ports don't block AtomicOps on egress */ | 3583 | /* Ensure upstream ports don't block AtomicOps on egress */ |
3584 | if (!bridge->has_secondary_link) { | 3584 | if (pci_pcie_type(bridge) == PCI_EXP_TYPE_UPSTREAM) { |
3585 | pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2, | 3585 | pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2, |
3586 | &ctl2); | 3586 | &ctl2); |
3587 | if (ctl2 & PCI_EXP_DEVCTL2_ATOMIC_EGRESS_BLOCK) | 3587 | if (ctl2 & PCI_EXP_DEVCTL2_ATOMIC_EGRESS_BLOCK) |
@@ -5923,8 +5923,19 @@ resource_size_t __weak pcibios_default_alignment(void) | |||
5923 | return 0; | 5923 | return 0; |
5924 | } | 5924 | } |
5925 | 5925 | ||
5926 | #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE | 5926 | /* |
5927 | static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0}; | 5927 | * Arches that don't want to expose struct resource to userland as-is in |
5928 | * sysfs and /proc can implement their own pci_resource_to_user(). | ||
5929 | */ | ||
5930 | void __weak pci_resource_to_user(const struct pci_dev *dev, int bar, | ||
5931 | const struct resource *rsrc, | ||
5932 | resource_size_t *start, resource_size_t *end) | ||
5933 | { | ||
5934 | *start = rsrc->start; | ||
5935 | *end = rsrc->end; | ||
5936 | } | ||
5937 | |||
5938 | static char *resource_alignment_param; | ||
5928 | static DEFINE_SPINLOCK(resource_alignment_lock); | 5939 | static DEFINE_SPINLOCK(resource_alignment_lock); |
5929 | 5940 | ||
5930 | /** | 5941 | /** |
@@ -5945,7 +5956,7 @@ static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev, | |||
5945 | 5956 | ||
5946 | spin_lock(&resource_alignment_lock); | 5957 | spin_lock(&resource_alignment_lock); |
5947 | p = resource_alignment_param; | 5958 | p = resource_alignment_param; |
5948 | if (!*p && !align) | 5959 | if (!p || !*p) |
5949 | goto out; | 5960 | goto out; |
5950 | if (pci_has_flag(PCI_PROBE_ONLY)) { | 5961 | if (pci_has_flag(PCI_PROBE_ONLY)) { |
5951 | align = 0; | 5962 | align = 0; |
@@ -6109,35 +6120,41 @@ void pci_reassigndev_resource_alignment(struct pci_dev *dev) | |||
6109 | } | 6120 | } |
6110 | } | 6121 | } |
6111 | 6122 | ||
6112 | static ssize_t pci_set_resource_alignment_param(const char *buf, size_t count) | 6123 | static ssize_t resource_alignment_show(struct bus_type *bus, char *buf) |
6113 | { | 6124 | { |
6114 | if (count > RESOURCE_ALIGNMENT_PARAM_SIZE - 1) | 6125 | size_t count = 0; |
6115 | count = RESOURCE_ALIGNMENT_PARAM_SIZE - 1; | ||
6116 | spin_lock(&resource_alignment_lock); | ||
6117 | strncpy(resource_alignment_param, buf, count); | ||
6118 | resource_alignment_param[count] = '\0'; | ||
6119 | spin_unlock(&resource_alignment_lock); | ||
6120 | return count; | ||
6121 | } | ||
6122 | 6126 | ||
6123 | static ssize_t pci_get_resource_alignment_param(char *buf, size_t size) | ||
6124 | { | ||
6125 | size_t count; | ||
6126 | spin_lock(&resource_alignment_lock); | 6127 | spin_lock(&resource_alignment_lock); |
6127 | count = snprintf(buf, size, "%s", resource_alignment_param); | 6128 | if (resource_alignment_param) |
6129 | count = snprintf(buf, PAGE_SIZE, "%s", resource_alignment_param); | ||
6128 | spin_unlock(&resource_alignment_lock); | 6130 | spin_unlock(&resource_alignment_lock); |
6129 | return count; | ||
6130 | } | ||
6131 | 6131 | ||
6132 | static ssize_t resource_alignment_show(struct bus_type *bus, char *buf) | 6132 | /* |
6133 | { | 6133 | * When set by the command line, resource_alignment_param will not |
6134 | return pci_get_resource_alignment_param(buf, PAGE_SIZE); | 6134 | * have a trailing line feed, which is ugly. So conditionally add |
6135 | * it here. | ||
6136 | */ | ||
6137 | if (count >= 2 && buf[count - 2] != '\n' && count < PAGE_SIZE - 1) { | ||
6138 | buf[count - 1] = '\n'; | ||
6139 | buf[count++] = 0; | ||
6140 | } | ||
6141 | |||
6142 | return count; | ||
6135 | } | 6143 | } |
6136 | 6144 | ||
6137 | static ssize_t resource_alignment_store(struct bus_type *bus, | 6145 | static ssize_t resource_alignment_store(struct bus_type *bus, |
6138 | const char *buf, size_t count) | 6146 | const char *buf, size_t count) |
6139 | { | 6147 | { |
6140 | return pci_set_resource_alignment_param(buf, count); | 6148 | char *param = kstrndup(buf, count, GFP_KERNEL); |
6149 | |||
6150 | if (!param) | ||
6151 | return -ENOMEM; | ||
6152 | |||
6153 | spin_lock(&resource_alignment_lock); | ||
6154 | kfree(resource_alignment_param); | ||
6155 | resource_alignment_param = param; | ||
6156 | spin_unlock(&resource_alignment_lock); | ||
6157 | return count; | ||
6141 | } | 6158 | } |
6142 | 6159 | ||
6143 | static BUS_ATTR_RW(resource_alignment); | 6160 | static BUS_ATTR_RW(resource_alignment); |
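A minimal sketch of the pattern the new resource_alignment_store() above adopts, using only the kstrndup()/spinlock facilities visible in the hunk: the parameter is copied to the heap outside the lock, then swapped in and the previous value freed, so any length is accepted without a fixed-size buffer. Names here are illustrative, not the kernel's:

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static char *example_param;			/* plays the role of resource_alignment_param */
static DEFINE_SPINLOCK(example_lock);

static ssize_t example_store(const char *buf, size_t count)
{
	char *param = kstrndup(buf, count, GFP_KERNEL);	/* allocate outside the lock */

	if (!param)
		return -ENOMEM;

	spin_lock(&example_lock);
	kfree(example_param);				/* drop any previous value */
	example_param = param;
	spin_unlock(&example_lock);

	return count;
}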
@@ -6266,8 +6283,7 @@ static int __init pci_setup(char *str) | |||
6266 | } else if (!strncmp(str, "cbmemsize=", 10)) { | 6283 | } else if (!strncmp(str, "cbmemsize=", 10)) { |
6267 | pci_cardbus_mem_size = memparse(str + 10, &str); | 6284 | pci_cardbus_mem_size = memparse(str + 10, &str); |
6268 | } else if (!strncmp(str, "resource_alignment=", 19)) { | 6285 | } else if (!strncmp(str, "resource_alignment=", 19)) { |
6269 | pci_set_resource_alignment_param(str + 19, | 6286 | resource_alignment_param = str + 19; |
6270 | strlen(str + 19)); | ||
6271 | } else if (!strncmp(str, "ecrc=", 5)) { | 6287 | } else if (!strncmp(str, "ecrc=", 5)) { |
6272 | pcie_ecrc_get_policy(str + 5); | 6288 | pcie_ecrc_get_policy(str + 5); |
6273 | } else if (!strncmp(str, "hpiosize=", 9)) { | 6289 | } else if (!strncmp(str, "hpiosize=", 9)) { |
@@ -6302,15 +6318,18 @@ static int __init pci_setup(char *str) | |||
6302 | early_param("pci", pci_setup); | 6318 | early_param("pci", pci_setup); |
6303 | 6319 | ||
6304 | /* | 6320 | /* |
6305 | * 'disable_acs_redir_param' is initialized in pci_setup(), above, to point | 6321 | * 'resource_alignment_param' and 'disable_acs_redir_param' are initialized |
6306 | * to data in the __initdata section which will be freed after the init | 6322 | * in pci_setup(), above, to point to data in the __initdata section which |
6307 | * sequence is complete. We can't allocate memory in pci_setup() because some | 6323 | * will be freed after the init sequence is complete. We can't allocate memory |
6308 | * architectures do not have any memory allocation service available during | 6324 | * in pci_setup() because some architectures do not have any memory allocation |
6309 | * an early_param() call. So we allocate memory and copy the variable here | 6325 | * service available during an early_param() call. So we allocate memory and |
6310 | * before the init section is freed. | 6326 | * copy the variable here before the init section is freed. |
6327 | * | ||
6311 | */ | 6328 | */ |
6312 | static int __init pci_realloc_setup_params(void) | 6329 | static int __init pci_realloc_setup_params(void) |
6313 | { | 6330 | { |
6331 | resource_alignment_param = kstrdup(resource_alignment_param, | ||
6332 | GFP_KERNEL); | ||
6314 | disable_acs_redir_param = kstrdup(disable_acs_redir_param, GFP_KERNEL); | 6333 | disable_acs_redir_param = kstrdup(disable_acs_redir_param, GFP_KERNEL); |
6315 | 6334 | ||
6316 | return 0; | 6335 | return 0; |
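The comment block and the added kstrdup() above describe a general pattern worth spelling out: early_param() handlers may run before any allocator is available, and the string they receive lives in init data that is discarded after boot, so the pointer is re-duplicated from an initcall that runs while the init sections still exist. A hedged, self-contained sketch with hypothetical names:

#include <linux/init.h>
#include <linux/slab.h>

static char *example_param;	/* initially points into soon-to-be-freed init data */

static int __init example_setup(char *str)
{
	example_param = str;	/* no memory allocation is safe this early */
	return 0;
}
early_param("example", example_setup);

static int __init example_realloc_setup_params(void)
{
	/* Copy to the heap before the init sections are freed. */
	example_param = kstrdup(example_param, GFP_KERNEL);
	return 0;
}
pure_initcall(example_realloc_setup_params);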
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h index d22d1b807701..3f6947ee3324 100644 --- a/drivers/pci/pci.h +++ b/drivers/pci/pci.h | |||
@@ -39,6 +39,11 @@ int pci_probe_reset_function(struct pci_dev *dev); | |||
39 | int pci_bridge_secondary_bus_reset(struct pci_dev *dev); | 39 | int pci_bridge_secondary_bus_reset(struct pci_dev *dev); |
40 | int pci_bus_error_reset(struct pci_dev *dev); | 40 | int pci_bus_error_reset(struct pci_dev *dev); |
41 | 41 | ||
42 | #define PCI_PM_D2_DELAY 200 | ||
43 | #define PCI_PM_D3_WAIT 10 | ||
44 | #define PCI_PM_D3COLD_WAIT 100 | ||
45 | #define PCI_PM_BUS_WAIT 50 | ||
46 | |||
42 | /** | 47 | /** |
43 | * struct pci_platform_pm_ops - Firmware PM callbacks | 48 | * struct pci_platform_pm_ops - Firmware PM callbacks |
44 | * | 49 | * |
@@ -84,6 +89,8 @@ void pci_power_up(struct pci_dev *dev); | |||
84 | void pci_disable_enabled_device(struct pci_dev *dev); | 89 | void pci_disable_enabled_device(struct pci_dev *dev); |
85 | int pci_finish_runtime_suspend(struct pci_dev *dev); | 90 | int pci_finish_runtime_suspend(struct pci_dev *dev); |
86 | void pcie_clear_root_pme_status(struct pci_dev *dev); | 91 | void pcie_clear_root_pme_status(struct pci_dev *dev); |
92 | bool pci_check_pme_status(struct pci_dev *dev); | ||
93 | void pci_pme_wakeup_bus(struct pci_bus *bus); | ||
87 | int __pci_pme_wakeup(struct pci_dev *dev, void *ign); | 94 | int __pci_pme_wakeup(struct pci_dev *dev, void *ign); |
88 | void pci_pme_restore(struct pci_dev *dev); | 95 | void pci_pme_restore(struct pci_dev *dev); |
89 | bool pci_dev_need_resume(struct pci_dev *dev); | 96 | bool pci_dev_need_resume(struct pci_dev *dev); |
@@ -118,11 +125,25 @@ static inline bool pci_power_manageable(struct pci_dev *pci_dev) | |||
118 | return !pci_has_subordinate(pci_dev) || pci_dev->bridge_d3; | 125 | return !pci_has_subordinate(pci_dev) || pci_dev->bridge_d3; |
119 | } | 126 | } |
120 | 127 | ||
128 | static inline bool pcie_downstream_port(const struct pci_dev *dev) | ||
129 | { | ||
130 | int type = pci_pcie_type(dev); | ||
131 | |||
132 | return type == PCI_EXP_TYPE_ROOT_PORT || | ||
133 | type == PCI_EXP_TYPE_DOWNSTREAM || | ||
134 | type == PCI_EXP_TYPE_PCIE_BRIDGE; | ||
135 | } | ||
136 | |||
121 | int pci_vpd_init(struct pci_dev *dev); | 137 | int pci_vpd_init(struct pci_dev *dev); |
122 | void pci_vpd_release(struct pci_dev *dev); | 138 | void pci_vpd_release(struct pci_dev *dev); |
123 | void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev); | 139 | void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev); |
124 | void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev); | 140 | void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev); |
125 | 141 | ||
142 | /* PCI Virtual Channel */ | ||
143 | int pci_save_vc_state(struct pci_dev *dev); | ||
144 | void pci_restore_vc_state(struct pci_dev *dev); | ||
145 | void pci_allocate_vc_save_buffers(struct pci_dev *dev); | ||
146 | |||
126 | /* PCI /proc functions */ | 147 | /* PCI /proc functions */ |
127 | #ifdef CONFIG_PROC_FS | 148 | #ifdef CONFIG_PROC_FS |
128 | int pci_proc_attach_device(struct pci_dev *dev); | 149 | int pci_proc_attach_device(struct pci_dev *dev); |
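A short usage sketch for the pcie_downstream_port() helper introduced in this hunk (illustrative only): it replaces the per-device has_secondary_link flag by classifying the upstream end of a Link purely from the port type, which the aspm.c, err.c and vc.c hunks later in this diff rely on.

#include <linux/pci.h>
#include "pci.h"	/* the new pcie_downstream_port() helper above lives here */

/* Hedged sketch: does 'pdev' drive the downstream (link-facing) end of a Link? */
static bool example_owns_link(struct pci_dev *pdev)
{
	if (!pci_is_pcie(pdev))
		return false;

	/* Root Ports, Switch Downstream Ports and PCI/PCI-X-to-PCIe bridges qualify. */
	return pcie_downstream_port(pdev);
}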
@@ -196,6 +217,9 @@ extern const struct attribute_group *pcibus_groups[]; | |||
196 | extern const struct device_type pci_dev_type; | 217 | extern const struct device_type pci_dev_type; |
197 | extern const struct attribute_group *pci_bus_groups[]; | 218 | extern const struct attribute_group *pci_bus_groups[]; |
198 | 219 | ||
220 | extern unsigned long pci_hotplug_io_size; | ||
221 | extern unsigned long pci_hotplug_mem_size; | ||
222 | extern unsigned long pci_hotplug_bus_size; | ||
199 | 223 | ||
200 | /** | 224 | /** |
201 | * pci_match_one_device - Tell if a PCI device structure has a matching | 225 | * pci_match_one_device - Tell if a PCI device structure has a matching |
@@ -236,6 +260,9 @@ enum pci_bar_type { | |||
236 | pci_bar_mem64, /* A 64-bit memory BAR */ | 260 | pci_bar_mem64, /* A 64-bit memory BAR */ |
237 | }; | 261 | }; |
238 | 262 | ||
263 | struct device *pci_get_host_bridge_device(struct pci_dev *dev); | ||
264 | void pci_put_host_bridge_device(struct device *dev); | ||
265 | |||
239 | int pci_configure_extended_tags(struct pci_dev *dev, void *ign); | 266 | int pci_configure_extended_tags(struct pci_dev *dev, void *ign); |
240 | bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl, | 267 | bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl, |
241 | int crs_timeout); | 268 | int crs_timeout); |
@@ -256,6 +283,8 @@ bool pci_bus_clip_resource(struct pci_dev *dev, int idx); | |||
256 | 283 | ||
257 | void pci_reassigndev_resource_alignment(struct pci_dev *dev); | 284 | void pci_reassigndev_resource_alignment(struct pci_dev *dev); |
258 | void pci_disable_bridge_window(struct pci_dev *dev); | 285 | void pci_disable_bridge_window(struct pci_dev *dev); |
286 | struct pci_bus *pci_bus_get(struct pci_bus *bus); | ||
287 | void pci_bus_put(struct pci_bus *bus); | ||
259 | 288 | ||
260 | /* PCIe link information */ | 289 | /* PCIe link information */ |
261 | #define PCIE_SPEED2STR(speed) \ | 290 | #define PCIE_SPEED2STR(speed) \ |
@@ -279,6 +308,7 @@ u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, | |||
279 | enum pcie_link_width *width); | 308 | enum pcie_link_width *width); |
280 | void __pcie_print_link_status(struct pci_dev *dev, bool verbose); | 309 | void __pcie_print_link_status(struct pci_dev *dev, bool verbose); |
281 | void pcie_report_downtraining(struct pci_dev *dev); | 310 | void pcie_report_downtraining(struct pci_dev *dev); |
311 | void pcie_update_link_speed(struct pci_bus *bus, u16 link_status); | ||
282 | 312 | ||
283 | /* Single Root I/O Virtualization */ | 313 | /* Single Root I/O Virtualization */ |
284 | struct pci_sriov { | 314 | struct pci_sriov { |
@@ -418,11 +448,12 @@ static inline void pci_restore_dpc_state(struct pci_dev *dev) {} | |||
418 | #endif | 448 | #endif |
419 | 449 | ||
420 | #ifdef CONFIG_PCI_ATS | 450 | #ifdef CONFIG_PCI_ATS |
451 | /* Address Translation Service */ | ||
452 | void pci_ats_init(struct pci_dev *dev); | ||
421 | void pci_restore_ats_state(struct pci_dev *dev); | 453 | void pci_restore_ats_state(struct pci_dev *dev); |
422 | #else | 454 | #else |
423 | static inline void pci_restore_ats_state(struct pci_dev *dev) | 455 | static inline void pci_ats_init(struct pci_dev *d) { } |
424 | { | 456 | static inline void pci_restore_ats_state(struct pci_dev *dev) { } |
425 | } | ||
426 | #endif /* CONFIG_PCI_ATS */ | 457 | #endif /* CONFIG_PCI_ATS */ |
427 | 458 | ||
428 | #ifdef CONFIG_PCI_IOV | 459 | #ifdef CONFIG_PCI_IOV |
@@ -433,7 +464,7 @@ void pci_iov_update_resource(struct pci_dev *dev, int resno); | |||
433 | resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno); | 464 | resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno); |
434 | void pci_restore_iov_state(struct pci_dev *dev); | 465 | void pci_restore_iov_state(struct pci_dev *dev); |
435 | int pci_iov_bus_range(struct pci_bus *bus); | 466 | int pci_iov_bus_range(struct pci_bus *bus); |
436 | 467 | extern const struct attribute_group sriov_dev_attr_group; | |
437 | #else | 468 | #else |
438 | static inline int pci_iov_init(struct pci_dev *dev) | 469 | static inline int pci_iov_init(struct pci_dev *dev) |
439 | { | 470 | { |
@@ -518,10 +549,21 @@ static inline void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev) { } | |||
518 | static inline void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev) { } | 549 | static inline void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev) { } |
519 | #endif | 550 | #endif |
520 | 551 | ||
552 | #ifdef CONFIG_PCIE_ECRC | ||
553 | void pcie_set_ecrc_checking(struct pci_dev *dev); | ||
554 | void pcie_ecrc_get_policy(char *str); | ||
555 | #else | ||
556 | static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { } | ||
557 | static inline void pcie_ecrc_get_policy(char *str) { } | ||
558 | #endif | ||
559 | |||
521 | #ifdef CONFIG_PCIE_PTM | 560 | #ifdef CONFIG_PCIE_PTM |
522 | void pci_ptm_init(struct pci_dev *dev); | 561 | void pci_ptm_init(struct pci_dev *dev); |
562 | int pci_enable_ptm(struct pci_dev *dev, u8 *granularity); | ||
523 | #else | 563 | #else |
524 | static inline void pci_ptm_init(struct pci_dev *dev) { } | 564 | static inline void pci_ptm_init(struct pci_dev *dev) { } |
565 | static inline int pci_enable_ptm(struct pci_dev *dev, u8 *granularity) | ||
566 | { return -EINVAL; } | ||
525 | #endif | 567 | #endif |
526 | 568 | ||
527 | struct pci_dev_reset_methods { | 569 | struct pci_dev_reset_methods { |
@@ -558,6 +600,10 @@ struct device_node; | |||
558 | int of_pci_parse_bus_range(struct device_node *node, struct resource *res); | 600 | int of_pci_parse_bus_range(struct device_node *node, struct resource *res); |
559 | int of_get_pci_domain_nr(struct device_node *node); | 601 | int of_get_pci_domain_nr(struct device_node *node); |
560 | int of_pci_get_max_link_speed(struct device_node *node); | 602 | int of_pci_get_max_link_speed(struct device_node *node); |
603 | void pci_set_of_node(struct pci_dev *dev); | ||
604 | void pci_release_of_node(struct pci_dev *dev); | ||
605 | void pci_set_bus_of_node(struct pci_bus *bus); | ||
606 | void pci_release_bus_of_node(struct pci_bus *bus); | ||
561 | 607 | ||
562 | #else | 608 | #else |
563 | static inline int | 609 | static inline int |
@@ -577,6 +623,11 @@ of_pci_get_max_link_speed(struct device_node *node) | |||
577 | { | 623 | { |
578 | return -EINVAL; | 624 | return -EINVAL; |
579 | } | 625 | } |
626 | |||
627 | static inline void pci_set_of_node(struct pci_dev *dev) { } | ||
628 | static inline void pci_release_of_node(struct pci_dev *dev) { } | ||
629 | static inline void pci_set_bus_of_node(struct pci_bus *bus) { } | ||
630 | static inline void pci_release_bus_of_node(struct pci_bus *bus) { } | ||
580 | #endif /* CONFIG_OF */ | 631 | #endif /* CONFIG_OF */ |
581 | 632 | ||
582 | #if defined(CONFIG_OF_ADDRESS) | 633 | #if defined(CONFIG_OF_ADDRESS) |
@@ -607,4 +658,13 @@ static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { } | |||
607 | static inline void pci_aer_clear_device_status(struct pci_dev *dev) { } | 658 | static inline void pci_aer_clear_device_status(struct pci_dev *dev) { } |
608 | #endif | 659 | #endif |
609 | 660 | ||
661 | #ifdef CONFIG_ACPI | ||
662 | int pci_acpi_program_hp_params(struct pci_dev *dev); | ||
663 | #else | ||
664 | static inline int pci_acpi_program_hp_params(struct pci_dev *dev) | ||
665 | { | ||
666 | return -ENODEV; | ||
667 | } | ||
668 | #endif | ||
669 | |||
610 | #endif /* DRIVERS_PCI_H */ | 670 | #endif /* DRIVERS_PCI_H */ |
diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c index 464f8f92653f..652ef23bba35 100644 --- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c | |||
@@ -18,7 +18,6 @@ | |||
18 | #include <linux/slab.h> | 18 | #include <linux/slab.h> |
19 | #include <linux/jiffies.h> | 19 | #include <linux/jiffies.h> |
20 | #include <linux/delay.h> | 20 | #include <linux/delay.h> |
21 | #include <linux/pci-aspm.h> | ||
22 | #include "../pci.h" | 21 | #include "../pci.h" |
23 | 22 | ||
24 | #ifdef MODULE_PARAM_PREFIX | 23 | #ifdef MODULE_PARAM_PREFIX |
@@ -913,10 +912,10 @@ void pcie_aspm_init_link_state(struct pci_dev *pdev) | |||
913 | 912 | ||
914 | /* | 913 | /* |
915 | * We allocate pcie_link_state for the component on the upstream | 914 | * We allocate pcie_link_state for the component on the upstream |
916 | * end of a Link, so there's nothing to do unless this device has a | 915 | * end of a Link, so there's nothing to do unless this device is |
917 | * Link on its secondary side. | 916 | * a downstream port. |
918 | */ | 917 | */ |
919 | if (!pdev->has_secondary_link) | 918 | if (!pcie_downstream_port(pdev)) |
920 | return; | 919 | return; |
921 | 920 | ||
922 | /* VIA has a strange chipset, root port is under a bridge */ | 921 | /* VIA has a strange chipset, root port is under a bridge */ |
@@ -1070,7 +1069,7 @@ static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem) | |||
1070 | if (!pci_is_pcie(pdev)) | 1069 | if (!pci_is_pcie(pdev)) |
1071 | return 0; | 1070 | return 0; |
1072 | 1071 | ||
1073 | if (pdev->has_secondary_link) | 1072 | if (pcie_downstream_port(pdev)) |
1074 | parent = pdev; | 1073 | parent = pdev; |
1075 | if (!parent || !parent->link_state) | 1074 | if (!parent || !parent->link_state) |
1076 | return -EINVAL; | 1075 | return -EINVAL; |
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c index 773197a12568..b0e6048a9208 100644 --- a/drivers/pci/pcie/err.c +++ b/drivers/pci/pcie/err.c | |||
@@ -166,7 +166,7 @@ static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service) | |||
166 | driver = pcie_port_find_service(dev, service); | 166 | driver = pcie_port_find_service(dev, service); |
167 | if (driver && driver->reset_link) { | 167 | if (driver && driver->reset_link) { |
168 | status = driver->reset_link(dev); | 168 | status = driver->reset_link(dev); |
169 | } else if (dev->has_secondary_link) { | 169 | } else if (pcie_downstream_port(dev)) { |
170 | status = default_reset_link(dev); | 170 | status = default_reset_link(dev); |
171 | } else { | 171 | } else { |
172 | pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n", | 172 | pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n", |
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index dbeeb385fb9f..3d5271a7a849 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c | |||
@@ -1426,26 +1426,38 @@ void set_pcie_port_type(struct pci_dev *pdev) | |||
1426 | pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, ®16); | 1426 | pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, ®16); |
1427 | pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD; | 1427 | pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD; |
1428 | 1428 | ||
1429 | parent = pci_upstream_bridge(pdev); | ||
1430 | if (!parent) | ||
1431 | return; | ||
1432 | |||
1429 | /* | 1433 | /* |
1430 | * A Root Port or a PCI-to-PCIe bridge is always the upstream end | 1434 | * Some systems do not identify their upstream/downstream ports |
1431 | * of a Link. No PCIe component has two Links. Two Links are | 1435 | * correctly so detect impossible configurations here and correct |
1432 | * connected by a Switch that has a Port on each Link and internal | 1436 | * the port type accordingly. |
1433 | * logic to connect the two Ports. | ||
1434 | */ | 1437 | */ |
1435 | type = pci_pcie_type(pdev); | 1438 | type = pci_pcie_type(pdev); |
1436 | if (type == PCI_EXP_TYPE_ROOT_PORT || | 1439 | if (type == PCI_EXP_TYPE_DOWNSTREAM) { |
1437 | type == PCI_EXP_TYPE_PCIE_BRIDGE) | ||
1438 | pdev->has_secondary_link = 1; | ||
1439 | else if (type == PCI_EXP_TYPE_UPSTREAM || | ||
1440 | type == PCI_EXP_TYPE_DOWNSTREAM) { | ||
1441 | parent = pci_upstream_bridge(pdev); | ||
1442 | |||
1443 | /* | 1440 | /* |
1444 | * Usually there's an upstream device (Root Port or Switch | 1441 | * If pdev claims to be downstream port but the parent |
1445 | * Downstream Port), but we can't assume one exists. | 1442 | * device is also downstream port assume pdev is actually |
1443 | * upstream port. | ||
1446 | */ | 1444 | */ |
1447 | if (parent && !parent->has_secondary_link) | 1445 | if (pcie_downstream_port(parent)) { |
1448 | pdev->has_secondary_link = 1; | 1446 | pci_info(pdev, "claims to be downstream port but is acting as upstream port, correcting type\n"); |
1447 | pdev->pcie_flags_reg &= ~PCI_EXP_FLAGS_TYPE; | ||
1448 | pdev->pcie_flags_reg |= PCI_EXP_TYPE_UPSTREAM; | ||
1449 | } | ||
1450 | } else if (type == PCI_EXP_TYPE_UPSTREAM) { | ||
1451 | /* | ||
1452 | * If pdev claims to be upstream port but the parent | ||
1453 | * device is also upstream port assume pdev is actually | ||
1454 | * downstream port. | ||
1455 | */ | ||
1456 | if (pci_pcie_type(parent) == PCI_EXP_TYPE_UPSTREAM) { | ||
1457 | pci_info(pdev, "claims to be upstream port but is acting as downstream port, correcting type\n"); | ||
1458 | pdev->pcie_flags_reg &= ~PCI_EXP_FLAGS_TYPE; | ||
1458 | pdev->pcie_flags_reg |= PCI_EXP_TYPE_DOWNSTREAM << 4; | ||
1460 | } | ||
1449 | } | 1461 | } |
1450 | } | 1462 | } |
1451 | 1463 | ||
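The correction in set_pcie_port_type() works because later consumers never re-read config space; they call pci_pcie_type(), which extracts the Device/Port Type from the cached pcie_flags_reg. A paraphrased sketch (not a verbatim quote), assuming the type sits in bits 7:4 covered by PCI_EXP_FLAGS_TYPE:

/* Paraphrased sketch of the accessor the corrected flags feed into. */
static inline int example_pcie_type(const struct pci_dev *dev)
{
	return (dev->pcie_flags_reg & PCI_EXP_FLAGS_TYPE) >> 4;	/* bits 7:4 */
}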
@@ -1915,275 +1927,6 @@ static void pci_configure_mps(struct pci_dev *dev) | |||
1915 | p_mps, mps, mpss); | 1927 | p_mps, mps, mpss); |
1916 | } | 1928 | } |
1917 | 1929 | ||
1918 | static struct hpp_type0 pci_default_type0 = { | ||
1919 | .revision = 1, | ||
1920 | .cache_line_size = 8, | ||
1921 | .latency_timer = 0x40, | ||
1922 | .enable_serr = 0, | ||
1923 | .enable_perr = 0, | ||
1924 | }; | ||
1925 | |||
1926 | static void program_hpp_type0(struct pci_dev *dev, struct hpp_type0 *hpp) | ||
1927 | { | ||
1928 | u16 pci_cmd, pci_bctl; | ||
1929 | |||
1930 | if (!hpp) | ||
1931 | hpp = &pci_default_type0; | ||
1932 | |||
1933 | if (hpp->revision > 1) { | ||
1934 | pci_warn(dev, "PCI settings rev %d not supported; using defaults\n", | ||
1935 | hpp->revision); | ||
1936 | hpp = &pci_default_type0; | ||
1937 | } | ||
1938 | |||
1939 | pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, hpp->cache_line_size); | ||
1940 | pci_write_config_byte(dev, PCI_LATENCY_TIMER, hpp->latency_timer); | ||
1941 | pci_read_config_word(dev, PCI_COMMAND, &pci_cmd); | ||
1942 | if (hpp->enable_serr) | ||
1943 | pci_cmd |= PCI_COMMAND_SERR; | ||
1944 | if (hpp->enable_perr) | ||
1945 | pci_cmd |= PCI_COMMAND_PARITY; | ||
1946 | pci_write_config_word(dev, PCI_COMMAND, pci_cmd); | ||
1947 | |||
1948 | /* Program bridge control value */ | ||
1949 | if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) { | ||
1950 | pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER, | ||
1951 | hpp->latency_timer); | ||
1952 | pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl); | ||
1953 | if (hpp->enable_perr) | ||
1954 | pci_bctl |= PCI_BRIDGE_CTL_PARITY; | ||
1955 | pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl); | ||
1956 | } | ||
1957 | } | ||
1958 | |||
1959 | static void program_hpp_type1(struct pci_dev *dev, struct hpp_type1 *hpp) | ||
1960 | { | ||
1961 | int pos; | ||
1962 | |||
1963 | if (!hpp) | ||
1964 | return; | ||
1965 | |||
1966 | pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); | ||
1967 | if (!pos) | ||
1968 | return; | ||
1969 | |||
1970 | pci_warn(dev, "PCI-X settings not supported\n"); | ||
1971 | } | ||
1972 | |||
1973 | static bool pcie_root_rcb_set(struct pci_dev *dev) | ||
1974 | { | ||
1975 | struct pci_dev *rp = pcie_find_root_port(dev); | ||
1976 | u16 lnkctl; | ||
1977 | |||
1978 | if (!rp) | ||
1979 | return false; | ||
1980 | |||
1981 | pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &lnkctl); | ||
1982 | if (lnkctl & PCI_EXP_LNKCTL_RCB) | ||
1983 | return true; | ||
1984 | |||
1985 | return false; | ||
1986 | } | ||
1987 | |||
1988 | static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp) | ||
1989 | { | ||
1990 | int pos; | ||
1991 | u32 reg32; | ||
1992 | |||
1993 | if (!hpp) | ||
1994 | return; | ||
1995 | |||
1996 | if (!pci_is_pcie(dev)) | ||
1997 | return; | ||
1998 | |||
1999 | if (hpp->revision > 1) { | ||
2000 | pci_warn(dev, "PCIe settings rev %d not supported\n", | ||
2001 | hpp->revision); | ||
2002 | return; | ||
2003 | } | ||
2004 | |||
2005 | /* | ||
2006 | * Don't allow _HPX to change MPS or MRRS settings. We manage | ||
2007 | * those to make sure they're consistent with the rest of the | ||
2008 | * platform. | ||
2009 | */ | ||
2010 | hpp->pci_exp_devctl_and |= PCI_EXP_DEVCTL_PAYLOAD | | ||
2011 | PCI_EXP_DEVCTL_READRQ; | ||
2012 | hpp->pci_exp_devctl_or &= ~(PCI_EXP_DEVCTL_PAYLOAD | | ||
2013 | PCI_EXP_DEVCTL_READRQ); | ||
2014 | |||
2015 | /* Initialize Device Control Register */ | ||
2016 | pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL, | ||
2017 | ~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or); | ||
2018 | |||
2019 | /* Initialize Link Control Register */ | ||
2020 | if (pcie_cap_has_lnkctl(dev)) { | ||
2021 | |||
2022 | /* | ||
2023 | * If the Root Port supports Read Completion Boundary of | ||
2024 | * 128, set RCB to 128. Otherwise, clear it. | ||
2025 | */ | ||
2026 | hpp->pci_exp_lnkctl_and |= PCI_EXP_LNKCTL_RCB; | ||
2027 | hpp->pci_exp_lnkctl_or &= ~PCI_EXP_LNKCTL_RCB; | ||
2028 | if (pcie_root_rcb_set(dev)) | ||
2029 | hpp->pci_exp_lnkctl_or |= PCI_EXP_LNKCTL_RCB; | ||
2030 | |||
2031 | pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL, | ||
2032 | ~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or); | ||
2033 | } | ||
2034 | |||
2035 | /* Find Advanced Error Reporting Enhanced Capability */ | ||
2036 | pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); | ||
2037 | if (!pos) | ||
2038 | return; | ||
2039 | |||
2040 | /* Initialize Uncorrectable Error Mask Register */ | ||
2041 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, ®32); | ||
2042 | reg32 = (reg32 & hpp->unc_err_mask_and) | hpp->unc_err_mask_or; | ||
2043 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, reg32); | ||
2044 | |||
2045 | /* Initialize Uncorrectable Error Severity Register */ | ||
2046 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, ®32); | ||
2047 | reg32 = (reg32 & hpp->unc_err_sever_and) | hpp->unc_err_sever_or; | ||
2048 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, reg32); | ||
2049 | |||
2050 | /* Initialize Correctable Error Mask Register */ | ||
2051 | pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, ®32); | ||
2052 | reg32 = (reg32 & hpp->cor_err_mask_and) | hpp->cor_err_mask_or; | ||
2053 | pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg32); | ||
2054 | |||
2055 | /* Initialize Advanced Error Capabilities and Control Register */ | ||
2056 | pci_read_config_dword(dev, pos + PCI_ERR_CAP, ®32); | ||
2057 | reg32 = (reg32 & hpp->adv_err_cap_and) | hpp->adv_err_cap_or; | ||
2058 | |||
2059 | /* Don't enable ECRC generation or checking if unsupported */ | ||
2060 | if (!(reg32 & PCI_ERR_CAP_ECRC_GENC)) | ||
2061 | reg32 &= ~PCI_ERR_CAP_ECRC_GENE; | ||
2062 | if (!(reg32 & PCI_ERR_CAP_ECRC_CHKC)) | ||
2063 | reg32 &= ~PCI_ERR_CAP_ECRC_CHKE; | ||
2064 | pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32); | ||
2065 | |||
2066 | /* | ||
2067 | * FIXME: The following two registers are not supported yet. | ||
2068 | * | ||
2069 | * o Secondary Uncorrectable Error Severity Register | ||
2070 | * o Secondary Uncorrectable Error Mask Register | ||
2071 | */ | ||
2072 | } | ||
2073 | |||
2074 | static u16 hpx3_device_type(struct pci_dev *dev) | ||
2075 | { | ||
2076 | u16 pcie_type = pci_pcie_type(dev); | ||
2077 | const int pcie_to_hpx3_type[] = { | ||
2078 | [PCI_EXP_TYPE_ENDPOINT] = HPX_TYPE_ENDPOINT, | ||
2079 | [PCI_EXP_TYPE_LEG_END] = HPX_TYPE_LEG_END, | ||
2080 | [PCI_EXP_TYPE_RC_END] = HPX_TYPE_RC_END, | ||
2081 | [PCI_EXP_TYPE_RC_EC] = HPX_TYPE_RC_EC, | ||
2082 | [PCI_EXP_TYPE_ROOT_PORT] = HPX_TYPE_ROOT_PORT, | ||
2083 | [PCI_EXP_TYPE_UPSTREAM] = HPX_TYPE_UPSTREAM, | ||
2084 | [PCI_EXP_TYPE_DOWNSTREAM] = HPX_TYPE_DOWNSTREAM, | ||
2085 | [PCI_EXP_TYPE_PCI_BRIDGE] = HPX_TYPE_PCI_BRIDGE, | ||
2086 | [PCI_EXP_TYPE_PCIE_BRIDGE] = HPX_TYPE_PCIE_BRIDGE, | ||
2087 | }; | ||
2088 | |||
2089 | if (pcie_type >= ARRAY_SIZE(pcie_to_hpx3_type)) | ||
2090 | return 0; | ||
2091 | |||
2092 | return pcie_to_hpx3_type[pcie_type]; | ||
2093 | } | ||
2094 | |||
2095 | static u8 hpx3_function_type(struct pci_dev *dev) | ||
2096 | { | ||
2097 | if (dev->is_virtfn) | ||
2098 | return HPX_FN_SRIOV_VIRT; | ||
2099 | else if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV) > 0) | ||
2100 | return HPX_FN_SRIOV_PHYS; | ||
2101 | else | ||
2102 | return HPX_FN_NORMAL; | ||
2103 | } | ||
2104 | |||
2105 | static bool hpx3_cap_ver_matches(u8 pcie_cap_id, u8 hpx3_cap_id) | ||
2106 | { | ||
2107 | u8 cap_ver = hpx3_cap_id & 0xf; | ||
2108 | |||
2109 | if ((hpx3_cap_id & BIT(4)) && cap_ver >= pcie_cap_id) | ||
2110 | return true; | ||
2111 | else if (cap_ver == pcie_cap_id) | ||
2112 | return true; | ||
2113 | |||
2114 | return false; | ||
2115 | } | ||
2116 | |||
2117 | static void program_hpx_type3_register(struct pci_dev *dev, | ||
2118 | const struct hpx_type3 *reg) | ||
2119 | { | ||
2120 | u32 match_reg, write_reg, header, orig_value; | ||
2121 | u16 pos; | ||
2122 | |||
2123 | if (!(hpx3_device_type(dev) & reg->device_type)) | ||
2124 | return; | ||
2125 | |||
2126 | if (!(hpx3_function_type(dev) & reg->function_type)) | ||
2127 | return; | ||
2128 | |||
2129 | switch (reg->config_space_location) { | ||
2130 | case HPX_CFG_PCICFG: | ||
2131 | pos = 0; | ||
2132 | break; | ||
2133 | case HPX_CFG_PCIE_CAP: | ||
2134 | pos = pci_find_capability(dev, reg->pci_exp_cap_id); | ||
2135 | if (pos == 0) | ||
2136 | return; | ||
2137 | |||
2138 | break; | ||
2139 | case HPX_CFG_PCIE_CAP_EXT: | ||
2140 | pos = pci_find_ext_capability(dev, reg->pci_exp_cap_id); | ||
2141 | if (pos == 0) | ||
2142 | return; | ||
2143 | |||
2144 | pci_read_config_dword(dev, pos, &header); | ||
2145 | if (!hpx3_cap_ver_matches(PCI_EXT_CAP_VER(header), | ||
2146 | reg->pci_exp_cap_ver)) | ||
2147 | return; | ||
2148 | |||
2149 | break; | ||
2150 | case HPX_CFG_VEND_CAP: /* Fall through */ | ||
2151 | case HPX_CFG_DVSEC: /* Fall through */ | ||
2152 | default: | ||
2153 | pci_warn(dev, "Encountered _HPX type 3 with unsupported config space location"); | ||
2154 | return; | ||
2155 | } | ||
2156 | |||
2157 | pci_read_config_dword(dev, pos + reg->match_offset, &match_reg); | ||
2158 | |||
2159 | if ((match_reg & reg->match_mask_and) != reg->match_value) | ||
2160 | return; | ||
2161 | |||
2162 | pci_read_config_dword(dev, pos + reg->reg_offset, &write_reg); | ||
2163 | orig_value = write_reg; | ||
2164 | write_reg &= reg->reg_mask_and; | ||
2165 | write_reg |= reg->reg_mask_or; | ||
2166 | |||
2167 | if (orig_value == write_reg) | ||
2168 | return; | ||
2169 | |||
2170 | pci_write_config_dword(dev, pos + reg->reg_offset, write_reg); | ||
2171 | |||
2172 | pci_dbg(dev, "Applied _HPX3 at [0x%x]: 0x%08x -> 0x%08x", | ||
2173 | pos, orig_value, write_reg); | ||
2174 | } | ||
2175 | |||
2176 | static void program_hpx_type3(struct pci_dev *dev, struct hpx_type3 *hpx3) | ||
2177 | { | ||
2178 | if (!hpx3) | ||
2179 | return; | ||
2180 | |||
2181 | if (!pci_is_pcie(dev)) | ||
2182 | return; | ||
2183 | |||
2184 | program_hpx_type3_register(dev, hpx3); | ||
2185 | } | ||
2186 | |||
2187 | int pci_configure_extended_tags(struct pci_dev *dev, void *ign) | 1930 | int pci_configure_extended_tags(struct pci_dev *dev, void *ign) |
2188 | { | 1931 | { |
2189 | struct pci_host_bridge *host; | 1932 | struct pci_host_bridge *host; |
@@ -2364,13 +2107,6 @@ static void pci_configure_serr(struct pci_dev *dev) | |||
2364 | 2107 | ||
2365 | static void pci_configure_device(struct pci_dev *dev) | 2108 | static void pci_configure_device(struct pci_dev *dev) |
2366 | { | 2109 | { |
2367 | static const struct hotplug_program_ops hp_ops = { | ||
2368 | .program_type0 = program_hpp_type0, | ||
2369 | .program_type1 = program_hpp_type1, | ||
2370 | .program_type2 = program_hpp_type2, | ||
2371 | .program_type3 = program_hpx_type3, | ||
2372 | }; | ||
2373 | |||
2374 | pci_configure_mps(dev); | 2110 | pci_configure_mps(dev); |
2375 | pci_configure_extended_tags(dev, NULL); | 2111 | pci_configure_extended_tags(dev, NULL); |
2376 | pci_configure_relaxed_ordering(dev); | 2112 | pci_configure_relaxed_ordering(dev); |
@@ -2378,7 +2114,7 @@ static void pci_configure_device(struct pci_dev *dev) | |||
2378 | pci_configure_eetlp_prefix(dev); | 2114 | pci_configure_eetlp_prefix(dev); |
2379 | pci_configure_serr(dev); | 2115 | pci_configure_serr(dev); |
2380 | 2116 | ||
2381 | pci_acpi_program_hp_params(dev, &hp_ops); | 2117 | pci_acpi_program_hp_params(dev); |
2382 | } | 2118 | } |
2383 | 2119 | ||
2384 | static void pci_release_capabilities(struct pci_dev *dev) | 2120 | static void pci_release_capabilities(struct pci_dev *dev) |
@@ -2759,12 +2495,8 @@ static int only_one_child(struct pci_bus *bus) | |||
2759 | * A PCIe Downstream Port normally leads to a Link with only Device | 2495 | * A PCIe Downstream Port normally leads to a Link with only Device |
2760 | * 0 on it (PCIe spec r3.1, sec 7.3.1). As an optimization, scan | 2496 | * 0 on it (PCIe spec r3.1, sec 7.3.1). As an optimization, scan |
2761 | * only for Device 0 in that situation. | 2497 | * only for Device 0 in that situation. |
2762 | * | ||
2763 | * Checking has_secondary_link is a hack to identify Downstream | ||
2764 | * Ports because sometimes Switches are configured such that the | ||
2765 | * PCIe Port Type labels are backwards. | ||
2766 | */ | 2498 | */ |
2767 | if (bridge && pci_is_pcie(bridge) && bridge->has_secondary_link) | 2499 | if (bridge && pci_is_pcie(bridge) && pcie_downstream_port(bridge)) |
2768 | return 1; | 2500 | return 1; |
2769 | 2501 | ||
2770 | return 0; | 2502 | return 0; |
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index 44c4ae1abd00..320255e5e8f8 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c | |||
@@ -20,7 +20,6 @@ | |||
20 | #include <linux/delay.h> | 20 | #include <linux/delay.h> |
21 | #include <linux/acpi.h> | 21 | #include <linux/acpi.h> |
22 | #include <linux/dmi.h> | 22 | #include <linux/dmi.h> |
23 | #include <linux/pci-aspm.h> | ||
24 | #include <linux/ioport.h> | 23 | #include <linux/ioport.h> |
25 | #include <linux/sched.h> | 24 | #include <linux/sched.h> |
26 | #include <linux/ktime.h> | 25 | #include <linux/ktime.h> |
@@ -2593,6 +2592,59 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, | |||
2593 | nvenet_msi_disable); | 2592 | nvenet_msi_disable); |
2594 | 2593 | ||
2595 | /* | 2594 | /* |
2595 | * PCIe spec r4.0 sec 7.7.1.2 and sec 7.7.2.2 say that if MSI/MSI-X is enabled, | ||
2596 | * then the device can't use INTx interrupts. Tegra's PCIe root ports don't | ||
2597 | * generate MSI interrupts for PME and AER events; instead, only INTx interrupts | ||
2598 | * are generated. Though Tegra's PCIe root ports can generate MSI interrupts | ||
2599 | * for other events, since the PCIe specification doesn't support using a mix of | ||
2600 | * INTx and MSI/MSI-X, it is required to disable MSI interrupts to avoid port | ||
2601 | * service drivers registering their respective ISRs for MSIs. | ||
2602 | */ | ||
2603 | static void pci_quirk_nvidia_tegra_disable_rp_msi(struct pci_dev *dev) | ||
2604 | { | ||
2605 | dev->no_msi = 1; | ||
2606 | } | ||
2607 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad0, | ||
2608 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2609 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2610 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad1, | ||
2611 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2612 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2613 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad2, | ||
2614 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2615 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2616 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf0, | ||
2617 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2618 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2619 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf1, | ||
2620 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2621 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2622 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1c, | ||
2623 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2624 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2625 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1d, | ||
2626 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2627 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2628 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e12, | ||
2629 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2630 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2631 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e13, | ||
2632 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2633 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2634 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0fae, | ||
2635 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2636 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2637 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0faf, | ||
2638 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2639 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2640 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e5, | ||
2641 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2642 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2643 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e6, | ||
2644 | PCI_CLASS_BRIDGE_PCI, 8, | ||
2645 | pci_quirk_nvidia_tegra_disable_rp_msi); | ||
2646 | |||
2647 | /* | ||
2596 | * Some versions of the MCP55 bridge from Nvidia have a legacy IRQ routing | 2648 | * Some versions of the MCP55 bridge from Nvidia have a legacy IRQ routing |
2597 | * config register. This register controls the routing of legacy | 2649 | * config register. This register controls the routing of legacy |
2598 | * interrupts from devices that route through the MCP55. If this register | 2650 | * interrupts from devices that route through the MCP55. If this register |
@@ -2925,6 +2977,24 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, 0x10a1, | |||
2925 | quirk_msi_intx_disable_qca_bug); | 2977 | quirk_msi_intx_disable_qca_bug); |
2926 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, 0xe091, | 2978 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, 0xe091, |
2927 | quirk_msi_intx_disable_qca_bug); | 2979 | quirk_msi_intx_disable_qca_bug); |
2980 | |||
2981 | /* | ||
2982 | * Amazon's Annapurna Labs 1c36:0031 Root Ports don't support MSI-X, so it | ||
2983 | * should be disabled on platforms where the device (mistakenly) advertises it. | ||
2984 | * | ||
2985 | * Notice that this quirk also disables MSI (which may work, but hasn't been | ||
2986 | * tested), since currently there is no standard way to disable only MSI-X. | ||
2987 | * | ||
2988 | * The 0031 device id is reused for other non Root Port device types, | ||
2989 | * therefore the quirk is registered for the PCI_CLASS_BRIDGE_PCI class. | ||
2990 | */ | ||
2991 | static void quirk_al_msi_disable(struct pci_dev *dev) | ||
2992 | { | ||
2993 | dev->no_msi = 1; | ||
2994 | pci_warn(dev, "Disabling MSI/MSI-X\n"); | ||
2995 | } | ||
2996 | DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, | ||
2997 | PCI_CLASS_BRIDGE_PCI, 8, quirk_al_msi_disable); | ||
2928 | #endif /* CONFIG_PCI_MSI */ | 2998 | #endif /* CONFIG_PCI_MSI */ |
2929 | 2999 | ||
2930 | /* | 3000 | /* |
@@ -4366,6 +4436,24 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags) | |||
4366 | return ret; | 4436 | return ret; |
4367 | } | 4437 | } |
4368 | 4438 | ||
4439 | static int pci_quirk_al_acs(struct pci_dev *dev, u16 acs_flags) | ||
4440 | { | ||
4441 | if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) | ||
4442 | return -ENOTTY; | ||
4443 | |||
4444 | /* | ||
4445 | * Amazon's Annapurna Labs root ports don't include an ACS capability, | ||
4446 | * but do include ACS-like functionality. The hardware doesn't support | ||
4447 | * peer-to-peer transactions via the root port and each has a unique | ||
4448 | * segment number. | ||
4449 | * | ||
4450 | * Additionally, the root ports cannot send traffic to each other. | ||
4451 | */ | ||
4452 | acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); | ||
4453 | |||
4454 | return acs_flags ? 0 : 1; | ||
4455 | } | ||
4456 | |||
4369 | /* | 4457 | /* |
4370 | * Sunrise Point PCH root ports implement ACS, but unfortunately as shown in | 4458 | * Sunrise Point PCH root ports implement ACS, but unfortunately as shown in |
4371 | * the datasheet (Intel 100 Series Chipset Family PCH Datasheet, Vol. 2, | 4459 | * the datasheet (Intel 100 Series Chipset Family PCH Datasheet, Vol. 2, |
@@ -4466,6 +4554,19 @@ static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags) | |||
4466 | return acs_flags ? 0 : 1; | 4554 | return acs_flags ? 0 : 1; |
4467 | } | 4555 | } |
4468 | 4556 | ||
4557 | static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags) | ||
4558 | { | ||
4559 | /* | ||
4560 | * iProc PAXB Root Ports don't advertise an ACS capability, but | ||
4561 | * they do not allow peer-to-peer transactions between Root Ports. | ||
4562 | * Allow each Root Port to be in a separate IOMMU group by masking | ||
4563 | * SV/RR/CR/UF bits. | ||
4564 | */ | ||
4565 | acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); | ||
4566 | |||
4567 | return acs_flags ? 0 : 1; | ||
4568 | } | ||
4569 | |||
4469 | static const struct pci_dev_acs_enabled { | 4570 | static const struct pci_dev_acs_enabled { |
4470 | u16 vendor; | 4571 | u16 vendor; |
4471 | u16 device; | 4572 | u16 device; |
@@ -4559,6 +4660,9 @@ static const struct pci_dev_acs_enabled { | |||
4559 | { PCI_VENDOR_ID_AMPERE, 0xE00A, pci_quirk_xgene_acs }, | 4660 | { PCI_VENDOR_ID_AMPERE, 0xE00A, pci_quirk_xgene_acs }, |
4560 | { PCI_VENDOR_ID_AMPERE, 0xE00B, pci_quirk_xgene_acs }, | 4661 | { PCI_VENDOR_ID_AMPERE, 0xE00B, pci_quirk_xgene_acs }, |
4561 | { PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs }, | 4662 | { PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs }, |
4663 | { PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs }, | ||
4664 | /* Amazon Annapurna Labs */ | ||
4665 | { PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs }, | ||
4562 | { 0 } | 4666 | { 0 } |
4563 | }; | 4667 | }; |
4564 | 4668 | ||
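For orientation, a simplified sketch modelled on quirks.c's pci_dev_specific_acs_enabled() (not a verbatim copy): entries like the two added above are walked in order, and the first matching callback that returns >= 0 decides whether the requested ACS flags are effectively provided.

#include <linux/pci.h>

/* Relies on the pci_dev_acs_enabled[] table extended in the hunk above. */
static int example_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags)
{
	const struct pci_dev_acs_enabled *i;
	int ret;

	for (i = pci_dev_acs_enabled; i->acs_enabled; i++) {
		if ((i->vendor == dev->vendor ||
		     i->vendor == (u16)PCI_ANY_ID) &&
		    (i->device == dev->device ||
		     i->device == (u16)PCI_ANY_ID)) {
			ret = i->acs_enabled(dev, acs_flags);
			if (ret >= 0)
				return ret;
		}
	}

	return -ENOTTY;
}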
diff --git a/drivers/pci/search.c b/drivers/pci/search.c index 7f4e65872b8d..bade14002fd8 100644 --- a/drivers/pci/search.c +++ b/drivers/pci/search.c | |||
@@ -15,7 +15,6 @@ | |||
15 | #include "pci.h" | 15 | #include "pci.h" |
16 | 16 | ||
17 | DECLARE_RWSEM(pci_bus_sem); | 17 | DECLARE_RWSEM(pci_bus_sem); |
18 | EXPORT_SYMBOL_GPL(pci_bus_sem); | ||
19 | 18 | ||
20 | /* | 19 | /* |
21 | * pci_for_each_dma_alias - Iterate over DMA aliases for a device | 20 | * pci_for_each_dma_alias - Iterate over DMA aliases for a device |
diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index 79b1fa6519be..e7dbe21705ba 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c | |||
@@ -1662,8 +1662,8 @@ static int iov_resources_unassigned(struct pci_dev *dev, void *data) | |||
1662 | int i; | 1662 | int i; |
1663 | bool *unassigned = data; | 1663 | bool *unassigned = data; |
1664 | 1664 | ||
1665 | for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++) { | 1665 | for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { |
1666 | struct resource *r = &dev->resource[i]; | 1666 | struct resource *r = &dev->resource[i + PCI_IOV_RESOURCES]; |
1667 | struct pci_bus_region region; | 1667 | struct pci_bus_region region; |
1668 | 1668 | ||
1669 | /* Not assigned or rejected by kernel? */ | 1669 | /* Not assigned or rejected by kernel? */ |
diff --git a/drivers/pci/vc.c b/drivers/pci/vc.c index 5acd9c02683a..5486f8768c86 100644 --- a/drivers/pci/vc.c +++ b/drivers/pci/vc.c | |||
@@ -13,6 +13,8 @@ | |||
13 | #include <linux/pci_regs.h> | 13 | #include <linux/pci_regs.h> |
14 | #include <linux/types.h> | 14 | #include <linux/types.h> |
15 | 15 | ||
16 | #include "pci.h" | ||
17 | |||
16 | /** | 18 | /** |
17 | * pci_vc_save_restore_dwords - Save or restore a series of dwords | 19 | * pci_vc_save_restore_dwords - Save or restore a series of dwords |
18 | * @dev: device | 20 | * @dev: device |
@@ -105,7 +107,7 @@ static void pci_vc_enable(struct pci_dev *dev, int pos, int res) | |||
105 | struct pci_dev *link = NULL; | 107 | struct pci_dev *link = NULL; |
106 | 108 | ||
107 | /* Enable VCs from the downstream device */ | 109 | /* Enable VCs from the downstream device */ |
108 | if (!dev->has_secondary_link) | 110 | if (!pci_is_pcie(dev) || !pcie_downstream_port(dev)) |
109 | return; | 111 | return; |
110 | 112 | ||
111 | ctrl_pos = pos + PCI_VC_RES_CTRL + (res * PCI_CAP_VC_PER_VC_SIZEOF); | 113 | ctrl_pos = pos + PCI_VC_RES_CTRL + (res * PCI_CAP_VC_PER_VC_SIZEOF); |
@@ -409,7 +411,6 @@ void pci_restore_vc_state(struct pci_dev *dev) | |||
409 | * For each type of VC capability, VC/VC9/MFVC, find the capability, size | 411 | * For each type of VC capability, VC/VC9/MFVC, find the capability, size |
410 | * it, and allocate a buffer for save/restore. | 412 | * it, and allocate a buffer for save/restore. |
411 | */ | 413 | */ |
412 | |||
413 | void pci_allocate_vc_save_buffers(struct pci_dev *dev) | 414 | void pci_allocate_vc_save_buffers(struct pci_dev *dev) |
414 | { | 415 | { |
415 | int i; | 416 | int i; |
diff --git a/drivers/pci/vpd.c b/drivers/pci/vpd.c index 4963c2e2bd4c..7915d10f9aa1 100644 --- a/drivers/pci/vpd.c +++ b/drivers/pci/vpd.c | |||
@@ -571,6 +571,12 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd); | |||
571 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID, | 571 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID, |
572 | quirk_blacklist_vpd); | 572 | quirk_blacklist_vpd); |
573 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd); | 573 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd); |
574 | /* | ||
575 | * The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port | ||
576 | * device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class. | ||
577 | */ | ||
578 | DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, | ||
579 | PCI_CLASS_BRIDGE_PCI, 8, quirk_blacklist_vpd); | ||
574 | 580 | ||
575 | /* | 581 | /* |
576 | * For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the | 582 | * For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the |
diff --git a/drivers/phy/tegra/Kconfig b/drivers/phy/tegra/Kconfig index e516967d695b..f9817c3ae85f 100644 --- a/drivers/phy/tegra/Kconfig +++ b/drivers/phy/tegra/Kconfig | |||
@@ -7,3 +7,10 @@ config PHY_TEGRA_XUSB | |||
7 | 7 | ||
8 | To compile this driver as a module, choose M here: the module will | 8 | To compile this driver as a module, choose M here: the module will |
9 | be called phy-tegra-xusb. | 9 | be called phy-tegra-xusb. |
10 | |||
11 | config PHY_TEGRA194_P2U | ||
12 | tristate "NVIDIA Tegra194 PIPE2UPHY PHY driver" | ||
13 | depends on ARCH_TEGRA_194_SOC || COMPILE_TEST | ||
14 | select GENERIC_PHY | ||
15 | help | ||
16 | Enable this to support the P2U (PIPE to UPHY) that is part of Tegra 19x SOCs. | ||
diff --git a/drivers/phy/tegra/Makefile b/drivers/phy/tegra/Makefile index 64ccaeacb631..320dd389f34d 100644 --- a/drivers/phy/tegra/Makefile +++ b/drivers/phy/tegra/Makefile | |||
@@ -6,3 +6,4 @@ phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_124_SOC) += xusb-tegra124.o | |||
6 | phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_132_SOC) += xusb-tegra124.o | 6 | phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_132_SOC) += xusb-tegra124.o |
7 | phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_210_SOC) += xusb-tegra210.o | 7 | phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_210_SOC) += xusb-tegra210.o |
8 | phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_186_SOC) += xusb-tegra186.o | 8 | phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_186_SOC) += xusb-tegra186.o |
9 | obj-$(CONFIG_PHY_TEGRA194_P2U) += phy-tegra194-p2u.o | ||
diff --git a/drivers/phy/tegra/phy-tegra194-p2u.c b/drivers/phy/tegra/phy-tegra194-p2u.c new file mode 100644 index 000000000000..7042bed9feaa --- /dev/null +++ b/drivers/phy/tegra/phy-tegra194-p2u.c | |||
@@ -0,0 +1,120 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0+ | ||
2 | /* | ||
3 | * P2U (PIPE to UPHY) driver for Tegra T194 SoC | ||
4 | * | ||
5 | * Copyright (C) 2019 NVIDIA Corporation. | ||
6 | * | ||
7 | * Author: Vidya Sagar <vidyas@nvidia.com> | ||
8 | */ | ||
9 | |||
10 | #include <linux/err.h> | ||
11 | #include <linux/io.h> | ||
12 | #include <linux/module.h> | ||
13 | #include <linux/of.h> | ||
14 | #include <linux/of_platform.h> | ||
15 | #include <linux/phy/phy.h> | ||
16 | |||
17 | #define P2U_PERIODIC_EQ_CTRL_GEN3 0xc0 | ||
18 | #define P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN BIT(0) | ||
19 | #define P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN BIT(1) | ||
20 | #define P2U_PERIODIC_EQ_CTRL_GEN4 0xc4 | ||
21 | #define P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN BIT(1) | ||
22 | |||
23 | #define P2U_RX_DEBOUNCE_TIME 0xa4 | ||
24 | #define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK 0xffff | ||
25 | #define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL 160 | ||
26 | |||
27 | struct tegra_p2u { | ||
28 | void __iomem *base; | ||
29 | }; | ||
30 | |||
31 | static inline void p2u_writel(struct tegra_p2u *phy, const u32 value, | ||
32 | const u32 reg) | ||
33 | { | ||
34 | writel_relaxed(value, phy->base + reg); | ||
35 | } | ||
36 | |||
37 | static inline u32 p2u_readl(struct tegra_p2u *phy, const u32 reg) | ||
38 | { | ||
39 | return readl_relaxed(phy->base + reg); | ||
40 | } | ||
41 | |||
42 | static int tegra_p2u_power_on(struct phy *x) | ||
43 | { | ||
44 | struct tegra_p2u *phy = phy_get_drvdata(x); | ||
45 | u32 val; | ||
46 | |||
47 | val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN3); | ||
48 | val &= ~P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN; | ||
49 | val |= P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN; | ||
50 | p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN3); | ||
51 | |||
52 | val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN4); | ||
53 | val |= P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN; | ||
54 | p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN4); | ||
55 | |||
56 | val = p2u_readl(phy, P2U_RX_DEBOUNCE_TIME); | ||
57 | val &= ~P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK; | ||
58 | val |= P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL; | ||
59 | p2u_writel(phy, val, P2U_RX_DEBOUNCE_TIME); | ||
60 | |||
61 | return 0; | ||
62 | } | ||
63 | |||
64 | static const struct phy_ops ops = { | ||
65 | .power_on = tegra_p2u_power_on, | ||
66 | .owner = THIS_MODULE, | ||
67 | }; | ||
68 | |||
69 | static int tegra_p2u_probe(struct platform_device *pdev) | ||
70 | { | ||
71 | struct phy_provider *phy_provider; | ||
72 | struct device *dev = &pdev->dev; | ||
73 | struct phy *generic_phy; | ||
74 | struct tegra_p2u *phy; | ||
75 | struct resource *res; | ||
76 | |||
77 | phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL); | ||
78 | if (!phy) | ||
79 | return -ENOMEM; | ||
80 | |||
81 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctl"); | ||
82 | phy->base = devm_ioremap_resource(dev, res); | ||
83 | if (IS_ERR(phy->base)) | ||
84 | return PTR_ERR(phy->base); | ||
85 | |||
86 | platform_set_drvdata(pdev, phy); | ||
87 | |||
88 | generic_phy = devm_phy_create(dev, NULL, &ops); | ||
89 | if (IS_ERR(generic_phy)) | ||
90 | return PTR_ERR(generic_phy); | ||
91 | |||
92 | phy_set_drvdata(generic_phy, phy); | ||
93 | |||
94 | phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); | ||
95 | if (IS_ERR(phy_provider)) | ||
96 | return PTR_ERR(phy_provider); | ||
97 | |||
98 | return 0; | ||
99 | } | ||
100 | |||
101 | static const struct of_device_id tegra_p2u_id_table[] = { | ||
102 | { | ||
103 | .compatible = "nvidia,tegra194-p2u", | ||
104 | }, | ||
105 | {} | ||
106 | }; | ||
107 | MODULE_DEVICE_TABLE(of, tegra_p2u_id_table); | ||
108 | |||
109 | static struct platform_driver tegra_p2u_driver = { | ||
110 | .probe = tegra_p2u_probe, | ||
111 | .driver = { | ||
112 | .name = "tegra194-p2u", | ||
113 | .of_match_table = tegra_p2u_id_table, | ||
114 | }, | ||
115 | }; | ||
116 | module_platform_driver(tegra_p2u_driver); | ||
117 | |||
118 | MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>"); | ||
119 | MODULE_DESCRIPTION("NVIDIA Tegra194 PIPE2UPHY PHY driver"); | ||
120 | MODULE_LICENSE("GPL v2"); | ||
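A hedged consumer sketch for the new P2U PHY driver (the function name, the "p2u-0" handle and the error handling are illustrative assumptions, not taken from the Tegra194 PCIe controller driver): a controller looks the PHY up via its phys/phy-names properties and powers it on, which reaches tegra_p2u_power_on() above through the PHY core.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/phy/phy.h>

static int example_enable_p2u(struct device *dev)
{
	struct phy *p2u;
	int ret;

	p2u = devm_phy_get(dev, "p2u-0");	/* assumed name of one phy-names entry */
	if (IS_ERR(p2u))
		return PTR_ERR(p2u);

	ret = phy_init(p2u);		/* a no-op here: the ops above only provide .power_on */
	if (ret)
		return ret;

	return phy_power_on(p2u);	/* invokes tegra_p2u_power_on() via the PHY core */
}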
diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c index 644f7f5c61a2..4a858789e6c5 100644 --- a/drivers/scsi/aacraid/linit.c +++ b/drivers/scsi/aacraid/linit.c | |||
@@ -27,7 +27,6 @@ | |||
27 | #include <linux/moduleparam.h> | 27 | #include <linux/moduleparam.h> |
28 | #include <linux/pci.h> | 28 | #include <linux/pci.h> |
29 | #include <linux/aer.h> | 29 | #include <linux/aer.h> |
30 | #include <linux/pci-aspm.h> | ||
31 | #include <linux/slab.h> | 30 | #include <linux/slab.h> |
32 | #include <linux/mutex.h> | 31 | #include <linux/mutex.h> |
33 | #include <linux/spinlock.h> | 32 | #include <linux/spinlock.h> |
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c index 1bb6aada93fa..ac39ed79ccaa 100644 --- a/drivers/scsi/hpsa.c +++ b/drivers/scsi/hpsa.c | |||
@@ -21,7 +21,6 @@ | |||
21 | #include <linux/interrupt.h> | 21 | #include <linux/interrupt.h> |
22 | #include <linux/types.h> | 22 | #include <linux/types.h> |
23 | #include <linux/pci.h> | 23 | #include <linux/pci.h> |
24 | #include <linux/pci-aspm.h> | ||
25 | #include <linux/kernel.h> | 24 | #include <linux/kernel.h> |
26 | #include <linux/slab.h> | 25 | #include <linux/slab.h> |
27 | #include <linux/delay.h> | 26 | #include <linux/delay.h> |
diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c index d0c2f8d6f2a2..c8e512ba6d39 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c +++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c | |||
@@ -51,7 +51,6 @@ | |||
51 | #include <linux/workqueue.h> | 51 | #include <linux/workqueue.h> |
52 | #include <linux/delay.h> | 52 | #include <linux/delay.h> |
53 | #include <linux/pci.h> | 53 | #include <linux/pci.h> |
54 | #include <linux/pci-aspm.h> | ||
55 | #include <linux/interrupt.h> | 54 | #include <linux/interrupt.h> |
56 | #include <linux/aer.h> | 55 | #include <linux/aer.h> |
57 | #include <linux/raid_class.h> | 56 | #include <linux/raid_class.h> |
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index fb2a0bd826b9..bef51e35d8d2 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -111,7 +111,6 @@ struct dev_pagemap { | |||
111 | struct completion done; | 111 | struct completion done; |
112 | enum memory_type type; | 112 | enum memory_type type; |
113 | unsigned int flags; | 113 | unsigned int flags; |
114 | u64 pci_p2pdma_bus_offset; | ||
115 | const struct dev_pagemap_ops *ops; | 114 | const struct dev_pagemap_ops *ops; |
116 | }; | 115 | }; |
117 | 116 | ||
diff --git a/include/linux/pci-aspm.h b/include/linux/pci-aspm.h
deleted file mode 100644
index 67064145d76e..000000000000
--- a/include/linux/pci-aspm.h
+++ /dev/null
@@ -1,36 +0,0 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | /* | ||
3 | * aspm.h | ||
4 | * | ||
5 | * PCI Express ASPM defines and function prototypes | ||
6 | * | ||
7 | * Copyright (C) 2007 Intel Corp. | ||
8 | * Zhang Yanmin (yanmin.zhang@intel.com) | ||
9 | * Shaohua Li (shaohua.li@intel.com) | ||
10 | * | ||
11 | * For more information, please consult the following manuals (look at | ||
12 | * http://www.pcisig.com/ for how to get them): | ||
13 | * | ||
14 | * PCI Express Specification | ||
15 | */ | ||
16 | |||
17 | #ifndef LINUX_ASPM_H | ||
18 | #define LINUX_ASPM_H | ||
19 | |||
20 | #include <linux/pci.h> | ||
21 | |||
22 | #define PCIE_LINK_STATE_L0S 1 | ||
23 | #define PCIE_LINK_STATE_L1 2 | ||
24 | #define PCIE_LINK_STATE_CLKPM 4 | ||
25 | |||
26 | #ifdef CONFIG_PCIEASPM | ||
27 | int pci_disable_link_state(struct pci_dev *pdev, int state); | ||
28 | int pci_disable_link_state_locked(struct pci_dev *pdev, int state); | ||
29 | void pcie_no_aspm(void); | ||
30 | #else | ||
31 | static inline int pci_disable_link_state(struct pci_dev *pdev, int state) | ||
32 | { return 0; } | ||
33 | static inline void pcie_no_aspm(void) { } | ||
34 | #endif | ||
35 | |||
36 | #endif /* LINUX_ASPM_H */ | ||
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index bca9bc3e5be7..8318a97c9c61 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -30,8 +30,10 @@ struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev, | |||
30 | unsigned int *nents, u32 length); | 30 | unsigned int *nents, u32 length); |
31 | void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl); | 31 | void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl); |
32 | void pci_p2pmem_publish(struct pci_dev *pdev, bool publish); | 32 | void pci_p2pmem_publish(struct pci_dev *pdev, bool publish); |
33 | int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents, | 33 | int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg, |
34 | enum dma_data_direction dir); | 34 | int nents, enum dma_data_direction dir, unsigned long attrs); |
35 | void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, | ||
36 | int nents, enum dma_data_direction dir, unsigned long attrs); | ||
35 | int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, | 37 | int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, |
36 | bool *use_p2pdma); | 38 | bool *use_p2pdma); |
37 | ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, | 39 | ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, |
@@ -81,11 +83,17 @@ static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev, | |||
81 | static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) | 83 | static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) |
82 | { | 84 | { |
83 | } | 85 | } |
84 | static inline int pci_p2pdma_map_sg(struct device *dev, | 86 | static inline int pci_p2pdma_map_sg_attrs(struct device *dev, |
85 | struct scatterlist *sg, int nents, enum dma_data_direction dir) | 87 | struct scatterlist *sg, int nents, enum dma_data_direction dir, |
88 | unsigned long attrs) | ||
86 | { | 89 | { |
87 | return 0; | 90 | return 0; |
88 | } | 91 | } |
92 | static inline void pci_p2pdma_unmap_sg_attrs(struct device *dev, | ||
93 | struct scatterlist *sg, int nents, enum dma_data_direction dir, | ||
94 | unsigned long attrs) | ||
95 | { | ||
96 | } | ||
89 | static inline int pci_p2pdma_enable_store(const char *page, | 97 | static inline int pci_p2pdma_enable_store(const char *page, |
90 | struct pci_dev **p2p_dev, bool *use_p2pdma) | 98 | struct pci_dev **p2p_dev, bool *use_p2pdma) |
91 | { | 99 | { |
@@ -111,4 +119,16 @@ static inline struct pci_dev *pci_p2pmem_find(struct device *client) | |||
111 | return pci_p2pmem_find_many(&client, 1); | 119 | return pci_p2pmem_find_many(&client, 1); |
112 | } | 120 | } |
113 | 121 | ||
122 | static inline int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, | ||
123 | int nents, enum dma_data_direction dir) | ||
124 | { | ||
125 | return pci_p2pdma_map_sg_attrs(dev, sg, nents, dir, 0); | ||
126 | } | ||
127 | |||
128 | static inline void pci_p2pdma_unmap_sg(struct device *dev, | ||
129 | struct scatterlist *sg, int nents, enum dma_data_direction dir) | ||
130 | { | ||
131 | pci_p2pdma_unmap_sg_attrs(dev, sg, nents, dir, 0); | ||
132 | } | ||
133 | |||
114 | #endif /* _LINUX_PCI_P2P_H */ | 134 | #endif /* _LINUX_PCI_P2P_H */ |
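The change above renames the mapping helpers to take DMA attributes and keeps the old names as static inline wrappers, so existing callers remain source-compatible. A rough usage sketch (error handling trimmed, names illustrative):

	#include <linux/pci-p2pdma.h>
	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	static int example_p2p_map(struct device *dev, struct scatterlist *sg, int nents)
	{
		/* Old-style call: the wrapper passes attrs == 0. */
		if (!pci_p2pdma_map_sg(dev, sg, nents, DMA_BIDIRECTIONAL))
			return -EIO;
		pci_p2pdma_unmap_sg(dev, sg, nents, DMA_BIDIRECTIONAL);

		/* New-style call with explicit DMA attributes. */
		if (!pci_p2pdma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE,
					     DMA_ATTR_NO_WARN))
			return -EIO;
		pci_p2pdma_unmap_sg_attrs(dev, sg, nents, DMA_TO_DEVICE,
					  DMA_ATTR_NO_WARN);
		return 0;
	}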
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 82e4cd1b7ac3..f9088c89a534 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -6,12 +6,18 @@ | |||
6 | * Copyright 1994, Drew Eckhardt | 6 | * Copyright 1994, Drew Eckhardt |
7 | * Copyright 1997--1999 Martin Mares <mj@ucw.cz> | 7 | * Copyright 1997--1999 Martin Mares <mj@ucw.cz> |
8 | * | 8 | * |
9 | * PCI Express ASPM defines and function prototypes | ||
10 | * Copyright (c) 2007 Intel Corp. | ||
11 | * Zhang Yanmin (yanmin.zhang@intel.com) | ||
12 | * Shaohua Li (shaohua.li@intel.com) | ||
13 | * | ||
9 | * For more information, please consult the following manuals (look at | 14 | * For more information, please consult the following manuals (look at |
10 | * http://www.pcisig.com/ for how to get them): | 15 | * http://www.pcisig.com/ for how to get them): |
11 | * | 16 | * |
12 | * PCI BIOS Specification | 17 | * PCI BIOS Specification |
13 | * PCI Local Bus Specification | 18 | * PCI Local Bus Specification |
14 | * PCI to PCI Bridge Specification | 19 | * PCI to PCI Bridge Specification |
20 | * PCI Express Specification | ||
15 | * PCI System Design Guide | 21 | * PCI System Design Guide |
16 | */ | 22 | */ |
17 | #ifndef LINUX_PCI_H | 23 | #ifndef LINUX_PCI_H |
@@ -145,11 +151,6 @@ static inline const char *pci_power_name(pci_power_t state) | |||
145 | return pci_power_names[1 + (__force int) state]; | 151 | return pci_power_names[1 + (__force int) state]; |
146 | } | 152 | } |
147 | 153 | ||
148 | #define PCI_PM_D2_DELAY 200 | ||
149 | #define PCI_PM_D3_WAIT 10 | ||
150 | #define PCI_PM_D3COLD_WAIT 100 | ||
151 | #define PCI_PM_BUS_WAIT 50 | ||
152 | |||
153 | /** | 154 | /** |
154 | * typedef pci_channel_state_t | 155 | * typedef pci_channel_state_t |
155 | * | 156 | * |
@@ -418,7 +419,6 @@ struct pci_dev { | |||
418 | unsigned int broken_intx_masking:1; /* INTx masking can't be used */ | 419 | unsigned int broken_intx_masking:1; /* INTx masking can't be used */ |
419 | unsigned int io_window_1k:1; /* Intel bridge 1K I/O windows */ | 420 | unsigned int io_window_1k:1; /* Intel bridge 1K I/O windows */ |
420 | unsigned int irq_managed:1; | 421 | unsigned int irq_managed:1; |
421 | unsigned int has_secondary_link:1; | ||
422 | unsigned int non_compliant_bars:1; /* Broken BARs; ignore them */ | 422 | unsigned int non_compliant_bars:1; /* Broken BARs; ignore them */ |
423 | unsigned int is_probed:1; /* Device probing in progress */ | 423 | unsigned int is_probed:1; /* Device probing in progress */ |
424 | unsigned int link_active_reporting:1;/* Device capable of reporting link active */ | 424 | unsigned int link_active_reporting:1;/* Device capable of reporting link active */ |
@@ -649,9 +649,6 @@ static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev) | |||
649 | return dev->bus->self; | 649 | return dev->bus->self; |
650 | } | 650 | } |
651 | 651 | ||
652 | struct device *pci_get_host_bridge_device(struct pci_dev *dev); | ||
653 | void pci_put_host_bridge_device(struct device *dev); | ||
654 | |||
655 | #ifdef CONFIG_PCI_MSI | 652 | #ifdef CONFIG_PCI_MSI |
656 | static inline bool pci_dev_msi_enabled(struct pci_dev *pci_dev) | 653 | static inline bool pci_dev_msi_enabled(struct pci_dev *pci_dev) |
657 | { | 654 | { |
@@ -925,6 +922,11 @@ enum { | |||
925 | PCI_SCAN_ALL_PCIE_DEVS = 0x00000040, /* Scan all, not just dev 0 */ | 922 | PCI_SCAN_ALL_PCIE_DEVS = 0x00000040, /* Scan all, not just dev 0 */ |
926 | }; | 923 | }; |
927 | 924 | ||
925 | #define PCI_IRQ_LEGACY (1 << 0) /* Allow legacy interrupts */ | ||
926 | #define PCI_IRQ_MSI (1 << 1) /* Allow MSI interrupts */ | ||
927 | #define PCI_IRQ_MSIX (1 << 2) /* Allow MSI-X interrupts */ | ||
928 | #define PCI_IRQ_AFFINITY (1 << 3) /* Auto-assign affinity */ | ||
929 | |||
928 | /* These external functions are only available when PCI support is enabled */ | 930 | /* These external functions are only available when PCI support is enabled */ |
929 | #ifdef CONFIG_PCI | 931 | #ifdef CONFIG_PCI |
930 | 932 | ||
@@ -969,7 +971,7 @@ resource_size_t pcibios_align_resource(void *, const struct resource *, | |||
969 | resource_size_t, | 971 | resource_size_t, |
970 | resource_size_t); | 972 | resource_size_t); |
971 | 973 | ||
972 | /* Weak but can be overriden by arch */ | 974 | /* Weak but can be overridden by arch */ |
973 | void pci_fixup_cardbus(struct pci_bus *); | 975 | void pci_fixup_cardbus(struct pci_bus *); |
974 | 976 | ||
975 | /* Generic PCI functions used internally */ | 977 | /* Generic PCI functions used internally */ |
@@ -995,7 +997,6 @@ struct pci_bus *pci_scan_root_bus(struct device *parent, int bus, | |||
995 | int pci_scan_root_bus_bridge(struct pci_host_bridge *bridge); | 997 | int pci_scan_root_bus_bridge(struct pci_host_bridge *bridge); |
996 | struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, | 998 | struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, |
997 | int busnr); | 999 | int busnr); |
998 | void pcie_update_link_speed(struct pci_bus *bus, u16 link_status); | ||
999 | struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr, | 1000 | struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr, |
1000 | const char *name, | 1001 | const char *name, |
1001 | struct hotplug_slot *hotplug); | 1002 | struct hotplug_slot *hotplug); |
@@ -1241,19 +1242,12 @@ int pci_wake_from_d3(struct pci_dev *dev, bool enable); | |||
1241 | int pci_prepare_to_sleep(struct pci_dev *dev); | 1242 | int pci_prepare_to_sleep(struct pci_dev *dev); |
1242 | int pci_back_from_sleep(struct pci_dev *dev); | 1243 | int pci_back_from_sleep(struct pci_dev *dev); |
1243 | bool pci_dev_run_wake(struct pci_dev *dev); | 1244 | bool pci_dev_run_wake(struct pci_dev *dev); |
1244 | bool pci_check_pme_status(struct pci_dev *dev); | ||
1245 | void pci_pme_wakeup_bus(struct pci_bus *bus); | ||
1246 | void pci_d3cold_enable(struct pci_dev *dev); | 1245 | void pci_d3cold_enable(struct pci_dev *dev); |
1247 | void pci_d3cold_disable(struct pci_dev *dev); | 1246 | void pci_d3cold_disable(struct pci_dev *dev); |
1248 | bool pcie_relaxed_ordering_enabled(struct pci_dev *dev); | 1247 | bool pcie_relaxed_ordering_enabled(struct pci_dev *dev); |
1249 | void pci_wakeup_bus(struct pci_bus *bus); | 1248 | void pci_wakeup_bus(struct pci_bus *bus); |
1250 | void pci_bus_set_current_state(struct pci_bus *bus, pci_power_t state); | 1249 | void pci_bus_set_current_state(struct pci_bus *bus, pci_power_t state); |
1251 | 1250 | ||
1252 | /* PCI Virtual Channel */ | ||
1253 | int pci_save_vc_state(struct pci_dev *dev); | ||
1254 | void pci_restore_vc_state(struct pci_dev *dev); | ||
1255 | void pci_allocate_vc_save_buffers(struct pci_dev *dev); | ||
1256 | |||
1257 | /* For use by arch with custom probe code */ | 1251 | /* For use by arch with custom probe code */ |
1258 | void set_pcie_port_type(struct pci_dev *pdev); | 1252 | void set_pcie_port_type(struct pci_dev *pdev); |
1259 | void set_pcie_hotplug_bridge(struct pci_dev *pdev); | 1253 | void set_pcie_hotplug_bridge(struct pci_dev *pdev); |
@@ -1297,8 +1291,6 @@ int pci_request_selected_regions_exclusive(struct pci_dev *, int, const char *); | |||
1297 | void pci_release_selected_regions(struct pci_dev *, int); | 1291 | void pci_release_selected_regions(struct pci_dev *, int); |
1298 | 1292 | ||
1299 | /* drivers/pci/bus.c */ | 1293 | /* drivers/pci/bus.c */ |
1300 | struct pci_bus *pci_bus_get(struct pci_bus *bus); | ||
1301 | void pci_bus_put(struct pci_bus *bus); | ||
1302 | void pci_add_resource(struct list_head *resources, struct resource *res); | 1294 | void pci_add_resource(struct list_head *resources, struct resource *res); |
1303 | void pci_add_resource_offset(struct list_head *resources, struct resource *res, | 1295 | void pci_add_resource_offset(struct list_head *resources, struct resource *res, |
1304 | resource_size_t offset); | 1296 | resource_size_t offset); |
@@ -1408,11 +1400,6 @@ resource_size_t pcibios_window_alignment(struct pci_bus *bus, | |||
1408 | int pci_set_vga_state(struct pci_dev *pdev, bool decode, | 1400 | int pci_set_vga_state(struct pci_dev *pdev, bool decode, |
1409 | unsigned int command_bits, u32 flags); | 1401 | unsigned int command_bits, u32 flags); |
1410 | 1402 | ||
1411 | #define PCI_IRQ_LEGACY (1 << 0) /* Allow legacy interrupts */ | ||
1412 | #define PCI_IRQ_MSI (1 << 1) /* Allow MSI interrupts */ | ||
1413 | #define PCI_IRQ_MSIX (1 << 2) /* Allow MSI-X interrupts */ | ||
1414 | #define PCI_IRQ_AFFINITY (1 << 3) /* Auto-assign affinity */ | ||
1415 | |||
1416 | /* | 1403 | /* |
1417 | * Virtual interrupts allow for more interrupts to be allocated | 1404 | * Virtual interrupts allow for more interrupts to be allocated |
1418 | * than the device has interrupts for. These are not programmed | 1405 | * than the device has interrupts for. These are not programmed |
@@ -1517,14 +1504,6 @@ static inline int pci_irq_get_node(struct pci_dev *pdev, int vec) | |||
1517 | } | 1504 | } |
1518 | #endif | 1505 | #endif |
1519 | 1506 | ||
1520 | static inline int | ||
1521 | pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs, | ||
1522 | unsigned int max_vecs, unsigned int flags) | ||
1523 | { | ||
1524 | return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags, | ||
1525 | NULL); | ||
1526 | } | ||
1527 | |||
1528 | /** | 1507 | /** |
1529 | * pci_irqd_intx_xlate() - Translate PCI INTx value to an IRQ domain hwirq | 1508 | * pci_irqd_intx_xlate() - Translate PCI INTx value to an IRQ domain hwirq |
1530 | * @d: the INTx IRQ domain | 1509 | * @d: the INTx IRQ domain |
@@ -1565,10 +1544,22 @@ extern bool pcie_ports_native; | |||
1565 | #define pcie_ports_native false | 1544 | #define pcie_ports_native false |
1566 | #endif | 1545 | #endif |
1567 | 1546 | ||
1547 | #define PCIE_LINK_STATE_L0S 1 | ||
1548 | #define PCIE_LINK_STATE_L1 2 | ||
1549 | #define PCIE_LINK_STATE_CLKPM 4 | ||
1550 | |||
1568 | #ifdef CONFIG_PCIEASPM | 1551 | #ifdef CONFIG_PCIEASPM |
1552 | int pci_disable_link_state(struct pci_dev *pdev, int state); | ||
1553 | int pci_disable_link_state_locked(struct pci_dev *pdev, int state); | ||
1554 | void pcie_no_aspm(void); | ||
1569 | bool pcie_aspm_support_enabled(void); | 1555 | bool pcie_aspm_support_enabled(void); |
1570 | bool pcie_aspm_enabled(struct pci_dev *pdev); | 1556 | bool pcie_aspm_enabled(struct pci_dev *pdev); |
1571 | #else | 1557 | #else |
1558 | static inline int pci_disable_link_state(struct pci_dev *pdev, int state) | ||
1559 | { return 0; } | ||
1560 | static inline int pci_disable_link_state_locked(struct pci_dev *pdev, int state) | ||
1561 | { return 0; } | ||
1562 | static inline void pcie_no_aspm(void) { } | ||
1572 | static inline bool pcie_aspm_support_enabled(void) { return false; } | 1563 | static inline bool pcie_aspm_support_enabled(void) { return false; } |
1573 | static inline bool pcie_aspm_enabled(struct pci_dev *pdev) { return false; } | 1564 | static inline bool pcie_aspm_enabled(struct pci_dev *pdev) { return false; } |
1574 | #endif | 1565 | #endif |
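With the ASPM declarations folded into <linux/pci.h> (and <linux/pci-aspm.h> deleted above), drivers no longer need a separate include; the SCSI hunks earlier in this diff are exactly that cleanup. A minimal sketch of a call that now only requires <linux/pci.h>:

	#include <linux/pci.h>

	/* Sketch: keep the link out of L0s/L1 for a latency-sensitive device. */
	static void example_disable_aspm(struct pci_dev *pdev)
	{
		pci_disable_link_state(pdev,
				       PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
	}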
@@ -1579,23 +1570,8 @@ bool pci_aer_available(void); | |||
1579 | static inline bool pci_aer_available(void) { return false; } | 1570 | static inline bool pci_aer_available(void) { return false; } |
1580 | #endif | 1571 | #endif |
1581 | 1572 | ||
1582 | #ifdef CONFIG_PCIE_ECRC | ||
1583 | void pcie_set_ecrc_checking(struct pci_dev *dev); | ||
1584 | void pcie_ecrc_get_policy(char *str); | ||
1585 | #else | ||
1586 | static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { } | ||
1587 | static inline void pcie_ecrc_get_policy(char *str) { } | ||
1588 | #endif | ||
1589 | |||
1590 | bool pci_ats_disabled(void); | 1573 | bool pci_ats_disabled(void); |
1591 | 1574 | ||
1592 | #ifdef CONFIG_PCIE_PTM | ||
1593 | int pci_enable_ptm(struct pci_dev *dev, u8 *granularity); | ||
1594 | #else | ||
1595 | static inline int pci_enable_ptm(struct pci_dev *dev, u8 *granularity) | ||
1596 | { return -EINVAL; } | ||
1597 | #endif | ||
1598 | |||
1599 | void pci_cfg_access_lock(struct pci_dev *dev); | 1575 | void pci_cfg_access_lock(struct pci_dev *dev); |
1600 | bool pci_cfg_access_trylock(struct pci_dev *dev); | 1576 | bool pci_cfg_access_trylock(struct pci_dev *dev); |
1601 | void pci_cfg_access_unlock(struct pci_dev *dev); | 1577 | void pci_cfg_access_unlock(struct pci_dev *dev); |
@@ -1749,11 +1725,6 @@ static inline void pci_release_regions(struct pci_dev *dev) { } | |||
1749 | 1725 | ||
1750 | static inline unsigned long pci_address_to_pio(phys_addr_t addr) { return -1; } | 1726 | static inline unsigned long pci_address_to_pio(phys_addr_t addr) { return -1; } |
1751 | 1727 | ||
1752 | static inline void pci_block_cfg_access(struct pci_dev *dev) { } | ||
1753 | static inline int pci_block_cfg_access_in_atomic(struct pci_dev *dev) | ||
1754 | { return 0; } | ||
1755 | static inline void pci_unblock_cfg_access(struct pci_dev *dev) { } | ||
1756 | |||
1757 | static inline struct pci_bus *pci_find_next_bus(const struct pci_bus *from) | 1728 | static inline struct pci_bus *pci_find_next_bus(const struct pci_bus *from) |
1758 | { return NULL; } | 1729 | { return NULL; } |
1759 | static inline struct pci_dev *pci_get_slot(struct pci_bus *bus, | 1730 | static inline struct pci_dev *pci_get_slot(struct pci_bus *bus, |
@@ -1782,17 +1753,36 @@ static inline const struct pci_device_id *pci_match_id(const struct pci_device_i | |||
1782 | struct pci_dev *dev) | 1753 | struct pci_dev *dev) |
1783 | { return NULL; } | 1754 | { return NULL; } |
1784 | static inline bool pci_ats_disabled(void) { return true; } | 1755 | static inline bool pci_ats_disabled(void) { return true; } |
1756 | |||
1757 | static inline int pci_irq_vector(struct pci_dev *dev, unsigned int nr) | ||
1758 | { | ||
1759 | return -EINVAL; | ||
1760 | } | ||
1761 | |||
1762 | static inline int | ||
1763 | pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs, | ||
1764 | unsigned int max_vecs, unsigned int flags, | ||
1765 | struct irq_affinity *aff_desc) | ||
1766 | { | ||
1767 | return -ENOSPC; | ||
1768 | } | ||
1785 | #endif /* CONFIG_PCI */ | 1769 | #endif /* CONFIG_PCI */ |
1786 | 1770 | ||
1771 | static inline int | ||
1772 | pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs, | ||
1773 | unsigned int max_vecs, unsigned int flags) | ||
1774 | { | ||
1775 | return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags, | ||
1776 | NULL); | ||
1777 | } | ||
1778 | |||
1787 | #ifdef CONFIG_PCI_ATS | 1779 | #ifdef CONFIG_PCI_ATS |
1788 | /* Address Translation Service */ | 1780 | /* Address Translation Service */ |
1789 | void pci_ats_init(struct pci_dev *dev); | ||
1790 | int pci_enable_ats(struct pci_dev *dev, int ps); | 1781 | int pci_enable_ats(struct pci_dev *dev, int ps); |
1791 | void pci_disable_ats(struct pci_dev *dev); | 1782 | void pci_disable_ats(struct pci_dev *dev); |
1792 | int pci_ats_queue_depth(struct pci_dev *dev); | 1783 | int pci_ats_queue_depth(struct pci_dev *dev); |
1793 | int pci_ats_page_aligned(struct pci_dev *dev); | 1784 | int pci_ats_page_aligned(struct pci_dev *dev); |
1794 | #else | 1785 | #else |
1795 | static inline void pci_ats_init(struct pci_dev *d) { } | ||
1796 | static inline int pci_enable_ats(struct pci_dev *d, int ps) { return -ENODEV; } | 1786 | static inline int pci_enable_ats(struct pci_dev *d, int ps) { return -ENODEV; } |
1797 | static inline void pci_disable_ats(struct pci_dev *d) { } | 1787 | static inline void pci_disable_ats(struct pci_dev *d) { } |
1798 | static inline int pci_ats_queue_depth(struct pci_dev *d) { return -ENODEV; } | 1788 | static inline int pci_ats_queue_depth(struct pci_dev *d) { return -ENODEV; } |
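The stubs added above let common driver code that allocates IRQ vectors build even when CONFIG_PCI is disabled; the calls then simply fail at run time (-ENOSPC / -EINVAL). A hedged sketch of such a caller (the example_ names are illustrative):

	#include <linux/pci.h>
	#include <linux/interrupt.h>

	static int example_setup_irq(struct pci_dev *pdev, irq_handler_t handler,
				     void *data)
	{
		int nvecs, irq;

		nvecs = pci_alloc_irq_vectors(pdev, 1, 4,
					      PCI_IRQ_MSIX | PCI_IRQ_MSI |
					      PCI_IRQ_LEGACY);
		if (nvecs < 0)
			return nvecs;

		irq = pci_irq_vector(pdev, 0);
		if (irq < 0)
			return irq;

		return request_irq(irq, handler, 0, "example", data);
	}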
@@ -1803,7 +1793,7 @@ static inline int pci_ats_page_aligned(struct pci_dev *dev) { return 0; } | |||
1803 | 1793 | ||
1804 | #include <asm/pci.h> | 1794 | #include <asm/pci.h> |
1805 | 1795 | ||
1806 | /* These two functions provide almost identical functionality. Depennding | 1796 | /* These two functions provide almost identical functionality. Depending |
1807 | * on the architecture, one will be implemented as a wrapper around the | 1797 | * on the architecture, one will be implemented as a wrapper around the |
1808 | * other (in drivers/pci/mmap.c). | 1798 | * other (in drivers/pci/mmap.c). |
1809 | * | 1799 | * |
@@ -1872,25 +1862,9 @@ static inline const char *pci_name(const struct pci_dev *pdev) | |||
1872 | return dev_name(&pdev->dev); | 1862 | return dev_name(&pdev->dev); |
1873 | } | 1863 | } |
1874 | 1864 | ||
1875 | |||
1876 | /* | ||
1877 | * Some archs don't want to expose struct resource to userland as-is | ||
1878 | * in sysfs and /proc | ||
1879 | */ | ||
1880 | #ifdef HAVE_ARCH_PCI_RESOURCE_TO_USER | ||
1881 | void pci_resource_to_user(const struct pci_dev *dev, int bar, | 1865 | void pci_resource_to_user(const struct pci_dev *dev, int bar, |
1882 | const struct resource *rsrc, | 1866 | const struct resource *rsrc, |
1883 | resource_size_t *start, resource_size_t *end); | 1867 | resource_size_t *start, resource_size_t *end); |
1884 | #else | ||
1885 | static inline void pci_resource_to_user(const struct pci_dev *dev, int bar, | ||
1886 | const struct resource *rsrc, resource_size_t *start, | ||
1887 | resource_size_t *end) | ||
1888 | { | ||
1889 | *start = rsrc->start; | ||
1890 | *end = rsrc->end; | ||
1891 | } | ||
1892 | #endif /* HAVE_ARCH_PCI_RESOURCE_TO_USER */ | ||
1893 | |||
1894 | 1868 | ||
1895 | /* | 1869 | /* |
1896 | * The world is not perfect and supplies us with broken PCI devices. | 1870 | * The world is not perfect and supplies us with broken PCI devices. |
@@ -2032,10 +2006,6 @@ extern unsigned long pci_cardbus_mem_size; | |||
2032 | extern u8 pci_dfl_cache_line_size; | 2006 | extern u8 pci_dfl_cache_line_size; |
2033 | extern u8 pci_cache_line_size; | 2007 | extern u8 pci_cache_line_size; |
2034 | 2008 | ||
2035 | extern unsigned long pci_hotplug_io_size; | ||
2036 | extern unsigned long pci_hotplug_mem_size; | ||
2037 | extern unsigned long pci_hotplug_bus_size; | ||
2038 | |||
2039 | /* Architecture-specific versions may override these (weak) */ | 2009 | /* Architecture-specific versions may override these (weak) */ |
2040 | void pcibios_disable_device(struct pci_dev *dev); | 2010 | void pcibios_disable_device(struct pci_dev *dev); |
2041 | void pcibios_set_master(struct pci_dev *dev); | 2011 | void pcibios_set_master(struct pci_dev *dev); |
@@ -2305,10 +2275,6 @@ int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off, | |||
2305 | #ifdef CONFIG_OF | 2275 | #ifdef CONFIG_OF |
2306 | struct device_node; | 2276 | struct device_node; |
2307 | struct irq_domain; | 2277 | struct irq_domain; |
2308 | void pci_set_of_node(struct pci_dev *dev); | ||
2309 | void pci_release_of_node(struct pci_dev *dev); | ||
2310 | void pci_set_bus_of_node(struct pci_bus *bus); | ||
2311 | void pci_release_bus_of_node(struct pci_bus *bus); | ||
2312 | struct irq_domain *pci_host_bridge_of_msi_domain(struct pci_bus *bus); | 2278 | struct irq_domain *pci_host_bridge_of_msi_domain(struct pci_bus *bus); |
2313 | int pci_parse_request_of_pci_ranges(struct device *dev, | 2279 | int pci_parse_request_of_pci_ranges(struct device *dev, |
2314 | struct list_head *resources, | 2280 | struct list_head *resources, |
@@ -2318,10 +2284,6 @@ int pci_parse_request_of_pci_ranges(struct device *dev, | |||
2318 | struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus); | 2284 | struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus); |
2319 | 2285 | ||
2320 | #else /* CONFIG_OF */ | 2286 | #else /* CONFIG_OF */ |
2321 | static inline void pci_set_of_node(struct pci_dev *dev) { } | ||
2322 | static inline void pci_release_of_node(struct pci_dev *dev) { } | ||
2323 | static inline void pci_set_bus_of_node(struct pci_bus *bus) { } | ||
2324 | static inline void pci_release_bus_of_node(struct pci_bus *bus) { } | ||
2325 | static inline struct irq_domain * | 2287 | static inline struct irq_domain * |
2326 | pci_host_bridge_of_msi_domain(struct pci_bus *bus) { return NULL; } | 2288 | pci_host_bridge_of_msi_domain(struct pci_bus *bus) { return NULL; } |
2327 | static inline int pci_parse_request_of_pci_ranges(struct device *dev, | 2289 | static inline int pci_parse_request_of_pci_ranges(struct device *dev, |
@@ -2435,4 +2397,7 @@ void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type); | |||
2435 | #define pci_notice_ratelimited(pdev, fmt, arg...) \ | 2397 | #define pci_notice_ratelimited(pdev, fmt, arg...) \ |
2436 | dev_notice_ratelimited(&(pdev)->dev, fmt, ##arg) | 2398 | dev_notice_ratelimited(&(pdev)->dev, fmt, ##arg) |
2437 | 2399 | ||
2400 | #define pci_info_ratelimited(pdev, fmt, arg...) \ | ||
2401 | dev_info_ratelimited(&(pdev)->dev, fmt, ##arg) | ||
2402 | |||
2438 | #endif /* LINUX_PCI_H */ | 2403 | #endif /* LINUX_PCI_H */ |
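The new pci_info_ratelimited() mirrors the existing pci_notice_ratelimited() wrapper; a trivial illustrative use:

	/* Sketch: rate-limited informational message tied to the PCI device. */
	static void example_report_retry(struct pci_dev *pdev, int err)
	{
		pci_info_ratelimited(pdev, "transient error %d, retrying\n", err);
	}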
diff --git a/include/linux/pci_hotplug.h b/include/linux/pci_hotplug.h
index f694eb2ca978..b482e42d7153 100644
--- a/include/linux/pci_hotplug.h
+++ b/include/linux/pci_hotplug.h
@@ -86,114 +86,14 @@ void pci_hp_deregister(struct hotplug_slot *slot); | |||
86 | #define pci_hp_initialize(slot, bus, nr, name) \ | 86 | #define pci_hp_initialize(slot, bus, nr, name) \ |
87 | __pci_hp_initialize(slot, bus, nr, name, THIS_MODULE, KBUILD_MODNAME) | 87 | __pci_hp_initialize(slot, bus, nr, name, THIS_MODULE, KBUILD_MODNAME) |
88 | 88 | ||
89 | /* PCI Setting Record (Type 0) */ | ||
90 | struct hpp_type0 { | ||
91 | u32 revision; | ||
92 | u8 cache_line_size; | ||
93 | u8 latency_timer; | ||
94 | u8 enable_serr; | ||
95 | u8 enable_perr; | ||
96 | }; | ||
97 | |||
98 | /* PCI-X Setting Record (Type 1) */ | ||
99 | struct hpp_type1 { | ||
100 | u32 revision; | ||
101 | u8 max_mem_read; | ||
102 | u8 avg_max_split; | ||
103 | u16 tot_max_split; | ||
104 | }; | ||
105 | |||
106 | /* PCI Express Setting Record (Type 2) */ | ||
107 | struct hpp_type2 { | ||
108 | u32 revision; | ||
109 | u32 unc_err_mask_and; | ||
110 | u32 unc_err_mask_or; | ||
111 | u32 unc_err_sever_and; | ||
112 | u32 unc_err_sever_or; | ||
113 | u32 cor_err_mask_and; | ||
114 | u32 cor_err_mask_or; | ||
115 | u32 adv_err_cap_and; | ||
116 | u32 adv_err_cap_or; | ||
117 | u16 pci_exp_devctl_and; | ||
118 | u16 pci_exp_devctl_or; | ||
119 | u16 pci_exp_lnkctl_and; | ||
120 | u16 pci_exp_lnkctl_or; | ||
121 | u32 sec_unc_err_sever_and; | ||
122 | u32 sec_unc_err_sever_or; | ||
123 | u32 sec_unc_err_mask_and; | ||
124 | u32 sec_unc_err_mask_or; | ||
125 | }; | ||
126 | |||
127 | /* | ||
128 | * _HPX PCI Express Setting Record (Type 3) | ||
129 | */ | ||
130 | struct hpx_type3 { | ||
131 | u16 device_type; | ||
132 | u16 function_type; | ||
133 | u16 config_space_location; | ||
134 | u16 pci_exp_cap_id; | ||
135 | u16 pci_exp_cap_ver; | ||
136 | u16 pci_exp_vendor_id; | ||
137 | u16 dvsec_id; | ||
138 | u16 dvsec_rev; | ||
139 | u16 match_offset; | ||
140 | u32 match_mask_and; | ||
141 | u32 match_value; | ||
142 | u16 reg_offset; | ||
143 | u32 reg_mask_and; | ||
144 | u32 reg_mask_or; | ||
145 | }; | ||
146 | |||
147 | struct hotplug_program_ops { | ||
148 | void (*program_type0)(struct pci_dev *dev, struct hpp_type0 *hpp); | ||
149 | void (*program_type1)(struct pci_dev *dev, struct hpp_type1 *hpp); | ||
150 | void (*program_type2)(struct pci_dev *dev, struct hpp_type2 *hpp); | ||
151 | void (*program_type3)(struct pci_dev *dev, struct hpx_type3 *hpp); | ||
152 | }; | ||
153 | |||
154 | enum hpx_type3_dev_type { | ||
155 | HPX_TYPE_ENDPOINT = BIT(0), | ||
156 | HPX_TYPE_LEG_END = BIT(1), | ||
157 | HPX_TYPE_RC_END = BIT(2), | ||
158 | HPX_TYPE_RC_EC = BIT(3), | ||
159 | HPX_TYPE_ROOT_PORT = BIT(4), | ||
160 | HPX_TYPE_UPSTREAM = BIT(5), | ||
161 | HPX_TYPE_DOWNSTREAM = BIT(6), | ||
162 | HPX_TYPE_PCI_BRIDGE = BIT(7), | ||
163 | HPX_TYPE_PCIE_BRIDGE = BIT(8), | ||
164 | }; | ||
165 | |||
166 | enum hpx_type3_fn_type { | ||
167 | HPX_FN_NORMAL = BIT(0), | ||
168 | HPX_FN_SRIOV_PHYS = BIT(1), | ||
169 | HPX_FN_SRIOV_VIRT = BIT(2), | ||
170 | }; | ||
171 | |||
172 | enum hpx_type3_cfg_loc { | ||
173 | HPX_CFG_PCICFG = 0, | ||
174 | HPX_CFG_PCIE_CAP = 1, | ||
175 | HPX_CFG_PCIE_CAP_EXT = 2, | ||
176 | HPX_CFG_VEND_CAP = 3, | ||
177 | HPX_CFG_DVSEC = 4, | ||
178 | HPX_CFG_MAX, | ||
179 | }; | ||
180 | |||
181 | #ifdef CONFIG_ACPI | 89 | #ifdef CONFIG_ACPI |
182 | #include <linux/acpi.h> | 90 | #include <linux/acpi.h> |
183 | int pci_acpi_program_hp_params(struct pci_dev *dev, | ||
184 | const struct hotplug_program_ops *hp_ops); | ||
185 | bool pciehp_is_native(struct pci_dev *bridge); | 91 | bool pciehp_is_native(struct pci_dev *bridge); |
186 | int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge); | 92 | int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge); |
187 | bool shpchp_is_native(struct pci_dev *bridge); | 93 | bool shpchp_is_native(struct pci_dev *bridge); |
188 | int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle); | 94 | int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle); |
189 | int acpi_pci_detect_ejectable(acpi_handle handle); | 95 | int acpi_pci_detect_ejectable(acpi_handle handle); |
190 | #else | 96 | #else |
191 | static inline int pci_acpi_program_hp_params(struct pci_dev *dev, | ||
192 | const struct hotplug_program_ops *hp_ops) | ||
193 | { | ||
194 | return -ENODEV; | ||
195 | } | ||
196 | |||
197 | static inline int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge) | 97 | static inline int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge) |
198 | { | 98 | { |
199 | return 0; | 99 | return 0; |
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index de1b75e963ef..21a572469a4e 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2134,6 +2134,7 @@ | |||
2134 | #define PCI_VENDOR_ID_MYRICOM 0x14c1 | 2134 | #define PCI_VENDOR_ID_MYRICOM 0x14c1 |
2135 | 2135 | ||
2136 | #define PCI_VENDOR_ID_MEDIATEK 0x14c3 | 2136 | #define PCI_VENDOR_ID_MEDIATEK 0x14c3 |
2137 | #define PCI_DEVICE_ID_MEDIATEK_7629 0x7629 | ||
2137 | 2138 | ||
2138 | #define PCI_VENDOR_ID_TITAN 0x14D2 | 2139 | #define PCI_VENDOR_ID_TITAN 0x14D2 |
2139 | #define PCI_DEVICE_ID_TITAN_010L 0x8001 | 2140 | #define PCI_DEVICE_ID_TITAN_010L 0x8001 |
@@ -2574,6 +2575,8 @@ | |||
2574 | 2575 | ||
2575 | #define PCI_VENDOR_ID_ASMEDIA 0x1b21 | 2576 | #define PCI_VENDOR_ID_ASMEDIA 0x1b21 |
2576 | 2577 | ||
2578 | #define PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS 0x1c36 | ||
2579 | |||
2577 | #define PCI_VENDOR_ID_CIRCUITCO 0x1cc8 | 2580 | #define PCI_VENDOR_ID_CIRCUITCO 0x1cc8 |
2578 | #define PCI_SUBSYSTEM_ID_CIRCUITCO_MINNOWBOARD 0x0001 | 2581 | #define PCI_SUBSYSTEM_ID_CIRCUITCO_MINNOWBOARD 0x0001 |
2579 | 2582 | ||
diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
index f28e562d7ca8..29d6e93fd15e 100644
--- a/include/uapi/linux/pci_regs.h
+++ b/include/uapi/linux/pci_regs.h
@@ -591,6 +591,7 @@ | |||
591 | #define PCI_EXP_SLTCTL_CCIE 0x0010 /* Command Completed Interrupt Enable */ | 591 | #define PCI_EXP_SLTCTL_CCIE 0x0010 /* Command Completed Interrupt Enable */ |
592 | #define PCI_EXP_SLTCTL_HPIE 0x0020 /* Hot-Plug Interrupt Enable */ | 592 | #define PCI_EXP_SLTCTL_HPIE 0x0020 /* Hot-Plug Interrupt Enable */ |
593 | #define PCI_EXP_SLTCTL_AIC 0x00c0 /* Attention Indicator Control */ | 593 | #define PCI_EXP_SLTCTL_AIC 0x00c0 /* Attention Indicator Control */ |
594 | #define PCI_EXP_SLTCTL_ATTN_IND_SHIFT 6 /* Attention Indicator shift */ | ||
594 | #define PCI_EXP_SLTCTL_ATTN_IND_ON 0x0040 /* Attention Indicator on */ | 595 | #define PCI_EXP_SLTCTL_ATTN_IND_ON 0x0040 /* Attention Indicator on */ |
595 | #define PCI_EXP_SLTCTL_ATTN_IND_BLINK 0x0080 /* Attention Indicator blinking */ | 596 | #define PCI_EXP_SLTCTL_ATTN_IND_BLINK 0x0080 /* Attention Indicator blinking */ |
596 | #define PCI_EXP_SLTCTL_ATTN_IND_OFF 0x00c0 /* Attention Indicator off */ | 597 | #define PCI_EXP_SLTCTL_ATTN_IND_OFF 0x00c0 /* Attention Indicator off */ |
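The new PCI_EXP_SLTCTL_ATTN_IND_SHIFT lets hotplug code handle the 2-bit Attention Indicator field numerically instead of only through the individual _ON/_BLINK/_OFF masks. A small illustrative helper (not taken from the patch):

	#include <linux/types.h>
	#include <linux/pci_regs.h>

	/* Sketch: extract the indicator state (1 = on, 2 = blink, 3 = off). */
	static u8 example_attn_state(u16 slot_ctrl)
	{
		return (slot_ctrl & PCI_EXP_SLTCTL_AIC) >>
		       PCI_EXP_SLTCTL_ATTN_IND_SHIFT;
	}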
@@ -713,7 +714,9 @@ | |||
713 | #define PCI_EXT_CAP_ID_DPC 0x1D /* Downstream Port Containment */ | 714 | #define PCI_EXT_CAP_ID_DPC 0x1D /* Downstream Port Containment */ |
714 | #define PCI_EXT_CAP_ID_L1SS 0x1E /* L1 PM Substates */ | 715 | #define PCI_EXT_CAP_ID_L1SS 0x1E /* L1 PM Substates */ |
715 | #define PCI_EXT_CAP_ID_PTM 0x1F /* Precision Time Measurement */ | 716 | #define PCI_EXT_CAP_ID_PTM 0x1F /* Precision Time Measurement */ |
716 | #define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PTM | 717 | #define PCI_EXT_CAP_ID_DLF 0x25 /* Data Link Feature */ |
718 | #define PCI_EXT_CAP_ID_PL_16GT 0x26 /* Physical Layer 16.0 GT/s */ | ||
719 | #define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PL_16GT | ||
717 | 720 | ||
718 | #define PCI_EXT_CAP_DSN_SIZEOF 12 | 721 | #define PCI_EXT_CAP_DSN_SIZEOF 12 |
719 | #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40 | 722 | #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40 |
@@ -1053,4 +1056,14 @@ | |||
1053 | #define PCI_L1SS_CTL1_LTR_L12_TH_SCALE 0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */ | 1056 | #define PCI_L1SS_CTL1_LTR_L12_TH_SCALE 0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */ |
1054 | #define PCI_L1SS_CTL2 0x0c /* Control 2 Register */ | 1057 | #define PCI_L1SS_CTL2 0x0c /* Control 2 Register */ |
1055 | 1058 | ||
1059 | /* Data Link Feature */ | ||
1060 | #define PCI_DLF_CAP 0x04 /* Capabilities Register */ | ||
1061 | #define PCI_DLF_EXCHANGE_ENABLE 0x80000000 /* Data Link Feature Exchange Enable */ | ||
1062 | |||
1063 | /* Physical Layer 16.0 GT/s */ | ||
1064 | #define PCI_PL_16GT_LE_CTRL 0x20 /* Lane Equalization Control Register */ | ||
1065 | #define PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK 0x0000000F | ||
1066 | #define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK 0x000000F0 | ||
1067 | #define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT 4 | ||
1068 | |||
1056 | #endif /* LINUX_PCI_REGS_H */ | 1069 | #endif /* LINUX_PCI_REGS_H */ |
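These register definitions back the new Data Link Feature and 16.0 GT/s lane equalization handling. As a hedged sketch of how the DLF constants can be used (the helper name is illustrative):

	#include <linux/pci.h>

	/* Sketch: check whether a device advertises Data Link Feature exchange. */
	static bool example_dlf_exchange_enabled(struct pci_dev *pdev)
	{
		int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DLF);
		u32 cap;

		if (!pos)
			return false;

		pci_read_config_dword(pdev, pos + PCI_DLF_CAP, &cap);
		return cap & PCI_DLF_EXCHANGE_ENABLE;
	}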