author		Linus Torvalds <torvalds@linux-foundation.org>	2018-10-25 09:50:48 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2018-10-25 09:50:48 -0400
commit		bd6bf7c10484f026505814b690104cdef27ed460 (patch)
tree		0ff42aa1dae7f7e28a5de5a74bfea641cae0841c
parent		a41efc2a0f68cea26665ab9e6d991c9bf33b3f59 (diff)
parent		663569db6476795c7955289529ea0154e3d768bf (diff)
Merge tag 'pci-v4.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- Fix ASPM link_state teardown on removal (Lukas Wunner)
- Fix misleading _OSC ASPM message (Sinan Kaya)
- Make _OSC optional for PCI (Sinan Kaya)
- Don't initialize ASPM link state when ACPI_FADT_NO_ASPM is set
(Patrick Talbert)
- Remove x86 and arm64 node-local allocation for host bridge structures
(Punit Agrawal)
- Pay attention to device-specific _PXM node values (Jonathan Cameron)
- Support new Immediate Readiness bit (Felipe Balbi)
- Differentiate between pciehp surprise and safe removal (Lukas Wunner)
- Remove unnecessary pciehp includes (Lukas Wunner)
- Drop pciehp hotplug_slot_ops wrappers (Lukas Wunner)
- Tolerate PCIe Slot Presence Detect being hardwired to zero to work
around broken hardware, e.g., the Wilocity switch/wireless device
(Lukas Wunner)
- Unify pciehp controller & slot structs (Lukas Wunner)
- Constify hotplug_slot_ops (Lukas Wunner)
- Drop hotplug_slot_info (Lukas Wunner)
- Embed hotplug_slot struct into users instead of allocating it
separately (Lukas Wunner)
- Initialize PCIe port service drivers directly instead of relying on
initcall ordering (Keith Busch)
- Restore PCI config state after a slot reset (Keith Busch)
- Save/restore DPC config state along with other PCI config state
(Keith Busch)
- Reference count devices during AER handling to avoid a race with
concurrent hot removal (Keith Busch)
- If an Upstream Port reports ERR_FATAL, don't try to read the Port's
config space because it is probably unreachable (Keith Busch)
- During error handling, use slot-specific reset instead of secondary
bus reset to avoid link up/down issues on hotplug ports (Keith Busch)
- Restore previous AER/DPC handling that does not remove and
re-enumerate devices on ERR_FATAL (Keith Busch)
- Notify all drivers that may be affected by error recovery resets
(Keith Busch)
- Always generate error recovery uevents, even if a driver doesn't have
error callbacks (Keith Busch)
- Make PCIe link active reporting detection generic (Keith Busch)
- Support D3cold in PCIe hierarchies during system sleep and runtime,
including hotplug and Thunderbolt ports (Mika Westerberg)
- Handle hpmemsize/hpiosize kernel parameters uniformly, whether slots
are empty or occupied (Jon Derrick)
- Remove duplicated include from pci/pcie/err.c and unused variable
from cpqphp (YueHaibing)
- Remove driver pci_cleanup_aer_uncorrect_error_status() calls (Oza
Pawandeep)
- Uninline PCI bus accessors for better ftracing (Keith Busch)
- Remove unused AER Root Port .error_resume method (Keith Busch)
- Use kfifo in AER instead of a local version (Keith Busch)
- Use threaded IRQ in AER bottom half (Keith Busch)
- Use managed resources in AER core (Keith Busch)
- Reuse pcie_port_find_device() for AER injection (Keith Busch)
- Abstract AER interrupt handling to disconnect error injection (Keith
Busch)
- Refactor AER injection callbacks to simplify future improvements
(Keith Busch)
- Remove unused Netronome NFP32xx Device IDs (Jakub Kicinski)
- Use bitmap_zalloc() for dma_alias_mask (Andy Shevchenko)
- Add switch fall-through annotations (Gustavo A. R. Silva)
- Remove unused Switchtec quirk variable (Joshua Abraham)
- Fix pci.c kernel-doc warning (Randy Dunlap)
- Remove trivial PCI wrappers for DMA APIs (Christoph Hellwig)
- Add Intel GPU device IDs to spurious interrupt quirk (Bin Meng)
- Run Switchtec DMA aliasing quirk only on NTB endpoints to avoid
useless dmesg errors (Logan Gunthorpe)
- Update Switchtec NTB documentation (Wesley Yung)
- Remove redundant "default n" from Kconfig (Bartlomiej Zolnierkiewicz)
- Avoid panic when drivers enable MSI/MSI-X twice (Tonghao Zhang)
- Add PCI support for peer-to-peer DMA (Logan Gunthorpe)
- Add sysfs group for PCI peer-to-peer memory statistics (Logan
Gunthorpe)
- Add PCI peer-to-peer DMA scatterlist mapping interface (Logan
Gunthorpe)
- Add PCI configfs/sysfs helpers for use by peer-to-peer users (Logan
Gunthorpe)
- Add PCI peer-to-peer DMA driver writer's documentation (Logan
Gunthorpe)
- Add block layer flag to indicate driver support for PCI peer-to-peer
DMA (Logan Gunthorpe)
- Map Infiniband scatterlists for peer-to-peer DMA if they contain P2P
memory (Logan Gunthorpe)
- Register nvme-pci CMB buffer as PCI peer-to-peer memory (Logan
Gunthorpe)
- Add nvme-pci support for PCI peer-to-peer memory in requests (Logan
Gunthorpe)
- Use PCI peer-to-peer memory in nvme (Stephen Bates, Steve Wise,
Christoph Hellwig, Logan Gunthorpe)
- Cache VF config space size to optimize enumeration of many VFs
(KarimAllah Ahmed)
- Remove unnecessary <linux/pci-ats.h> include (Bjorn Helgaas)
- Fix VMD AERSID quirk Device ID matching (Jon Derrick)
- Fix Cadence PHY handling during probe (Alan Douglas)
- Signal Cadence Endpoint interrupts via AXI region 0 instead of last
region (Alan Douglas)
- Write Cadence Endpoint MSI interrupts with 32 bits of data (Alan
Douglas)
- Remove redundant controller tests for "device_type == pci" (Rob
Herring)
- Document R-Car E3 (R8A77990) bindings (Tho Vu)
- Add device tree support for R-Car r8a7744 (Biju Das)
- Drop unused mvebu PCIe capability code (Thomas Petazzoni)
- Add shared PCI bridge emulation code (Thomas Petazzoni)
- Convert mvebu to use shared PCI bridge emulation (Thomas Petazzoni)
- Add aardvark Root Port emulation (Thomas Petazzoni)
- Support 100MHz/200MHz refclocks for i.MX6 (Lucas Stach)
- Add initial power management for i.MX7 (Leonard Crestez)
- Add PME_Turn_Off support for i.MX7 (Leonard Crestez)
- Fix qcom runtime power management error handling (Bjorn Andersson)
- Update TI dra7xx unaligned access errata workaround for host mode as
well as endpoint mode (Vignesh R)
- Fix kirin section mismatch warning (Nathan Chancellor)
- Remove iproc PAXC slot check to allow VF support (Jitendra Bhivare)
- Quirk Keystone K2G to limit MRRS to 256 (Kishon Vijay Abraham I)
- Update Keystone to use MRRS quirk for host bridge instead of open
coding (Kishon Vijay Abraham I)
- Refactor Keystone link establishment (Kishon Vijay Abraham I)
- Simplify and speed up Keystone link training (Kishon Vijay Abraham I)
- Remove unused Keystone host_init argument (Kishon Vijay Abraham I)
- Merge Keystone driver files into one (Kishon Vijay Abraham I)
- Remove redundant Keystone platform_set_drvdata() (Kishon Vijay
Abraham I)
- Rename Keystone functions for uniformity (Kishon Vijay Abraham I)
- Add Keystone device control module DT binding (Kishon Vijay Abraham
I)
- Use SYSCON API to get Keystone control module device IDs (Kishon
Vijay Abraham I)
- Clean up Keystone PHY handling (Kishon Vijay Abraham I)
- Use runtime PM APIs to enable Keystone clock (Kishon Vijay Abraham I)
- Clean up Keystone config space access checks (Kishon Vijay Abraham I)
- Get Keystone outbound window count from DT (Kishon Vijay Abraham I)
- Clean up Keystone outbound window configuration (Kishon Vijay Abraham
I)
- Clean up Keystone DBI setup (Kishon Vijay Abraham I)
- Clean up Keystone ks_pcie_link_up() (Kishon Vijay Abraham I)
- Fix Keystone IRQ status checking (Kishon Vijay Abraham I)
- Add debug messages for all Keystone errors (Kishon Vijay Abraham I)
- Clean up Keystone includes and macros (Kishon Vijay Abraham I)
- Fix Mediatek unchecked return value from devm_pci_remap_iospace()
(Gustavo A. R. Silva)
- Fix Mediatek endpoint/port matching logic (Honghui Zhang)
- Change Mediatek Root Port Class Code to PCI_CLASS_BRIDGE_PCI (Honghui
Zhang)
- Remove redundant Mediatek PM domain check (Honghui Zhang)
- Convert Mediatek to pci_host_probe() (Honghui Zhang)
- Fix Mediatek MSI enablement (Honghui Zhang)
- Add Mediatek system PM support for MT2712 and MT7622 (Honghui Zhang)
- Add Mediatek loadable module support (Honghui Zhang)
- Detach VMD resources after stopping root bus to prevent orphan
resources (Jon Derrick)
- Convert pcitest build process to that used by other tools (iio, perf,
etc) (Gustavo Pimentel)
* tag 'pci-v4.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (140 commits)
PCI/AER: Refactor error injection fallbacks
PCI/AER: Abstract AER interrupt handling
PCI/AER: Reuse existing pcie_port_find_device() interface
PCI/AER: Use managed resource allocations
PCI: pcie: Remove redundant 'default n' from Kconfig
PCI: aardvark: Implement emulated root PCI bridge config space
PCI: mvebu: Convert to PCI emulated bridge config space
PCI: mvebu: Drop unused PCI express capability code
PCI: Introduce PCI bridge emulated config space common logic
PCI: vmd: Detach resources after stopping root bus
nvmet: Optionally use PCI P2P memory
nvmet: Introduce helper functions to allocate and free request SGLs
nvme-pci: Add support for P2P memory in requests
nvme-pci: Use PCI p2pmem subsystem to manage the CMB
IB/core: Ensure we map P2P memory correctly in rdma_rw_ctx_[init|destroy]()
block: Add PCI P2P flag for request queue
PCI/P2PDMA: Add P2P DMA driver writer's documentation
docs-rst: Add a new directory for PCI documentation
PCI/P2PDMA: Introduce configfs/sysfs enable attribute helpers
PCI/P2PDMA: Add PCI p2pmem DMA mappings to adjust the bus offset
...
162 files changed, 5004 insertions, 3114 deletions
diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
index 44d4b2be92fd..8bfee557e50e 100644
--- a/Documentation/ABI/testing/sysfs-bus-pci
+++ b/Documentation/ABI/testing/sysfs-bus-pci
@@ -323,3 +323,27 @@ Description:
 
 		This is similar to /sys/bus/pci/drivers_autoprobe, but
 		affects only the VFs associated with a specific PF.
+
+What:		/sys/bus/pci/devices/.../p2pmem/size
+Date:		November 2017
+Contact:	Logan Gunthorpe <logang@deltatee.com>
+Description:
+		If the device has any Peer-to-Peer memory registered, this
+		file contains the total amount of memory that the device
+		provides (in decimal).
+
+What:		/sys/bus/pci/devices/.../p2pmem/available
+Date:		November 2017
+Contact:	Logan Gunthorpe <logang@deltatee.com>
+Description:
+		If the device has any Peer-to-Peer memory registered, this
+		file contains the amount of memory that has not been
+		allocated (in decimal).
+
+What:		/sys/bus/pci/devices/.../p2pmem/published
+Date:		November 2017
+Contact:	Logan Gunthorpe <logang@deltatee.com>
+Description:
+		If the device has any Peer-to-Peer memory registered, this
+		file contains a '1' if the memory has been published for
+		use outside the driver that owns the device.
diff --git a/Documentation/PCI/endpoint/pci-test-howto.txt b/Documentation/PCI/endpoint/pci-test-howto.txt
index e40cf0fb58d7..040479f437a5 100644
--- a/Documentation/PCI/endpoint/pci-test-howto.txt
+++ b/Documentation/PCI/endpoint/pci-test-howto.txt
@@ -99,17 +99,20 @@ Note that the devices listed here correspond to the value populated in 1.4 above
 2.2 Using Endpoint Test function Device
 
 pcitest.sh added in tools/pci/ can be used to run all the default PCI endpoint
-tests. Before pcitest.sh can be used pcitest.c should be compiled using the
-following commands.
+tests. To compile this tool the following commands should be used:
 
-	cd <kernel-dir>
-	make headers_install ARCH=arm
-	arm-linux-gnueabihf-gcc -Iusr/include tools/pci/pcitest.c -o pcitest
-	cp pcitest <rootfs>/usr/sbin/
-	cp tools/pci/pcitest.sh <rootfs>
+	# cd <kernel-dir>
+	# make -C tools/pci
+
+or if you desire to compile and install in your system:
+
+	# cd <kernel-dir>
+	# make -C tools/pci install
+
+The tool and script will be located in <rootfs>/usr/bin/
 
 2.2.1 pcitest.sh Output
-# ./pcitest.sh
+# pcitest.sh
 BAR tests
 
 	BAR0: OKAY
diff --git a/Documentation/PCI/pci-error-recovery.txt b/Documentation/PCI/pci-error-recovery.txt
index 688b69121e82..0b6bb3ef449e 100644
--- a/Documentation/PCI/pci-error-recovery.txt
+++ b/Documentation/PCI/pci-error-recovery.txt
@@ -110,7 +110,7 @@ The actual steps taken by a platform to recover from a PCI error
 event will be platform-dependent, but will follow the general
 sequence described below.
 
-STEP 0: Error Event: ERR_NONFATAL
+STEP 0: Error Event
 -------------------
 A PCI bus error is detected by the PCI hardware. On powerpc, the slot
 is isolated, in that all I/O is blocked: all reads return 0xffffffff,
@@ -228,7 +228,13 @@ proceeds to either STEP3 (Link Reset) or to STEP 5 (Resume Operations).
 If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
 proceeds to STEP 4 (Slot Reset)
 
-STEP 3: Slot Reset
+STEP 3: Link Reset
+------------------
+The platform resets the link. This is a PCI-Express specific step
+and is done whenever a fatal error has been detected that can be
+"solved" by resetting the link.
+
+STEP 4: Slot Reset
 ------------------
 
 In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
@@ -314,7 +320,7 @@ Failure).
 >>> However, it probably should.
 
 
-STEP 4: Resume Operations
+STEP 5: Resume Operations
 -------------------------
 The platform will call the resume() callback on all affected device
 drivers if all drivers on the segment have returned
@@ -326,7 +332,7 @@ a result code.
 At this point, if a new error happens, the platform will restart
 a new error recovery sequence.
 
-STEP 5: Permanent Failure
+STEP 6: Permanent Failure
 -------------------------
 A "permanent failure" has occurred, and the platform cannot recover
 the device. The platform will call error_detected() with a
@@ -349,27 +355,6 @@ errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
 for additional detail on real-life experience of the causes of
 software errors.
 
-STEP 0: Error Event: ERR_FATAL
--------------------
-PCI bus error is detected by the PCI hardware. On powerpc, the slot is
-isolated, in that all I/O is blocked: all reads return 0xffffffff, all
-writes are ignored.
-
-STEP 1: Remove devices
---------------------
-Platform removes the devices depending on the error agent, it could be
-this port for all subordinates or upstream component (likely downstream
-port)
-
-STEP 2: Reset link
---------------------
-The platform resets the link. This is a PCI-Express specific step and is
-done whenever a fatal error has been detected that can be "solved" by
-resetting the link.
-
-STEP 3: Re-enumerate the devices
---------------------
-Initiates the re-enumeration.
 
 Conclusion; General Remarks
 ---------------------------
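The renumbered STEP sequence in pci-error-recovery.txt corresponds to the driver-side callbacks in struct pci_error_handlers. A minimal sketch of a driver participating in that flow (the "foo" driver is hypothetical; the callback names, result codes, and signatures are the real <linux/pci.h> interfaces of this era):

```
#include <linux/pci.h>

/* STEP 1: I/O to the device is already blocked when this is called. */
static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
					   enum pci_channel_state state)
{
	if (state == pci_channel_io_perm_failure)
		return PCI_ERS_RESULT_DISCONNECT;	/* STEP 6 */
	return PCI_ERS_RESULT_NEED_RESET;		/* go to STEP 4 */
}

/* STEP 4: called after the slot reset; re-initialize the device. */
static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
{
	pci_restore_state(pdev);
	return PCI_ERS_RESULT_RECOVERED;
}

/* STEP 5: all drivers recovered; normal operation may restart. */
static void foo_resume(struct pci_dev *pdev)
{
}

static const struct pci_error_handlers foo_err_handlers = {
	.error_detected	= foo_error_detected,
	.slot_reset	= foo_slot_reset,
	.resume		= foo_resume,
};
```

A driver wires this struct into its struct pci_driver via the .err_handler field; the "Restore PCI config state after a slot reset" change in this merge does the pci_restore_state() step in the core for drivers as well.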
diff --git a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
index cb33421184a0..f37494d5a7be 100644
--- a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
@@ -50,6 +50,7 @@ Additional required properties for imx7d-pcie:
 - reset-names: Must contain the following entires:
 	- "pciephy"
 	- "apps"
+	- "turnoff"
 
 Example:
 
diff --git a/Documentation/devicetree/bindings/pci/pci-keystone.txt b/Documentation/devicetree/bindings/pci/pci-keystone.txt
index 4dd17de549a7..2030ee0dc4f9 100644
--- a/Documentation/devicetree/bindings/pci/pci-keystone.txt
+++ b/Documentation/devicetree/bindings/pci/pci-keystone.txt
@@ -19,6 +19,9 @@ pcie_msi_intc : Interrupt controller device node for MSI IRQ chip
 interrupt-cells: should be set to 1
 interrupts: GIC interrupt lines connected to PCI MSI interrupt lines
 
+ti,syscon-pcie-id : phandle to the device control module required to set device
+		    id and vendor id.
+
 Example:
 pcie_msi_intc: msi-interrupt-controller {
 	interrupt-controller;
diff --git a/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt b/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
index 9fe7e12a7bf3..b94078f58d8e 100644
--- a/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
+++ b/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
@@ -7,6 +7,7 @@ OHCI and EHCI controllers.
 
 Required properties:
 - compatible: "renesas,pci-r8a7743" for the R8A7743 SoC;
+	      "renesas,pci-r8a7744" for the R8A7744 SoC;
 	      "renesas,pci-r8a7745" for the R8A7745 SoC;
 	      "renesas,pci-r8a7790" for the R8A7790 SoC;
 	      "renesas,pci-r8a7791" for the R8A7791 SoC;
diff --git a/Documentation/devicetree/bindings/pci/rcar-pci.txt b/Documentation/devicetree/bindings/pci/rcar-pci.txt
index a5f7fc62d10e..976ef7bfff93 100644
--- a/Documentation/devicetree/bindings/pci/rcar-pci.txt
+++ b/Documentation/devicetree/bindings/pci/rcar-pci.txt
@@ -2,6 +2,7 @@
 
 Required properties:
 compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
+	    "renesas,pcie-r8a7744" for the R8A7744 SoC;
 	    "renesas,pcie-r8a7779" for the R8A7779 SoC;
 	    "renesas,pcie-r8a7790" for the R8A7790 SoC;
 	    "renesas,pcie-r8a7791" for the R8A7791 SoC;
@@ -9,6 +10,7 @@ compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
 	    "renesas,pcie-r8a7795" for the R8A7795 SoC;
 	    "renesas,pcie-r8a7796" for the R8A7796 SoC;
 	    "renesas,pcie-r8a77980" for the R8A77980 SoC;
+	    "renesas,pcie-r8a77990" for the R8A77990 SoC;
 	    "renesas,pcie-rcar-gen2" for a generic R-Car Gen2 or
 	    RZ/G1 compatible device.
 	    "renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.
diff --git a/Documentation/devicetree/bindings/pci/ti-pci.txt b/Documentation/devicetree/bindings/pci/ti-pci.txt
index 7f7af3044016..452fe48c4fdd 100644
--- a/Documentation/devicetree/bindings/pci/ti-pci.txt
+++ b/Documentation/devicetree/bindings/pci/ti-pci.txt
@@ -26,6 +26,11 @@ HOST MODE
 	ranges,
 	interrupt-map-mask,
 	interrupt-map : as specified in ../designware-pcie.txt
+ - ti,syscon-unaligned-access: phandle to the syscon DT node. The 1st argument
+			       should contain the register offset within syscon
+			       and the 2nd argument should contain the bit field
+			       for setting the bit to enable unaligned
+			       access.
 
 DEVICE MODE
 ===========
diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
index 1f0cdba2b2bf..909f991b4c0d 100644
--- a/Documentation/driver-api/index.rst
+++ b/Documentation/driver-api/index.rst
@@ -30,7 +30,7 @@ available subsections can be seen below.
    input
    usb/index
    firewire
-   pci
+   pci/index
    spi
    i2c
    hsi
diff --git a/Documentation/driver-api/pci/index.rst b/Documentation/driver-api/pci/index.rst
new file mode 100644
index 000000000000..c6cf1fef61ce
--- /dev/null
+++ b/Documentation/driver-api/pci/index.rst
@@ -0,0 +1,22 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================================
+The Linux PCI driver implementer's API guide
+============================================
+
+.. class:: toc-title
+
+	   Table of contents
+
+.. toctree::
+   :maxdepth: 2
+
+   pci
+   p2pdma
+
+.. only:: subproject and html
+
+   Indices
+   =======
+
+   * :ref:`genindex`
diff --git a/Documentation/driver-api/pci/p2pdma.rst b/Documentation/driver-api/pci/p2pdma.rst
new file mode 100644
index 000000000000..4c577fa7bef9
--- /dev/null
+++ b/Documentation/driver-api/pci/p2pdma.rst
@@ -0,0 +1,145 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================
+PCI Peer-to-Peer DMA Support
+============================
+
+The PCI bus has pretty decent support for performing DMA transfers
+between two devices on the bus. This type of transaction is henceforth
+called Peer-to-Peer (or P2P). However, there are a number of issues that
+make P2P transactions tricky to do in a perfectly safe way.
+
+One of the biggest issues is that PCI doesn't require forwarding
+transactions between hierarchy domains, and in PCIe, each Root Port
+defines a separate hierarchy domain. To make things worse, there is no
+simple way to determine if a given Root Complex supports this or not.
+(See PCIe r4.0, sec 1.3.1). Therefore, as of this writing, the kernel
+only supports doing P2P when the endpoints involved are all behind the
+same PCI bridge, as such devices are all in the same PCI hierarchy
+domain, and the spec guarantees that all transactions within the
+hierarchy will be routable, but it does not require routing
+between hierarchies.
+
+The second issue is that to make use of existing interfaces in Linux,
+memory that is used for P2P transactions needs to be backed by struct
+pages. However, PCI BARs are not typically cache coherent so there are
+a few corner case gotchas with these pages so developers need to
+be careful about what they do with them.
+
+
+Driver Writer's Guide
+=====================
+
+In a given P2P implementation there may be three or more different
+types of kernel drivers in play:
+
+* Provider - A driver which provides or publishes P2P resources like
+  memory or doorbell registers to other drivers.
+* Client - A driver which makes use of a resource by setting up a
+  DMA transaction to or from it.
+* Orchestrator - A driver which orchestrates the flow of data between
+  clients and providers.
+
+In many cases there could be overlap between these three types (i.e.,
+it may be typical for a driver to be both a provider and a client).
+
+For example, in the NVMe Target Copy Offload implementation:
+
+* The NVMe PCI driver is both a client, provider and orchestrator
+  in that it exposes any CMB (Controller Memory Buffer) as a P2P memory
+  resource (provider), it accepts P2P memory pages as buffers in requests
+  to be used directly (client) and it can also make use of the CMB as
+  submission queue entries (orchestrator).
+* The RDMA driver is a client in this arrangement so that an RNIC
+  can DMA directly to the memory exposed by the NVMe device.
+* The NVMe Target driver (nvmet) can orchestrate the data from the RNIC
+  to the P2P memory (CMB) and then to the NVMe device (and vice versa).
+
+This is currently the only arrangement supported by the kernel but
+one could imagine slight tweaks to this that would allow for the same
+functionality. For example, if a specific RNIC added a BAR with some
+memory behind it, its driver could add support as a P2P provider and
+then the NVMe Target could use the RNIC's memory instead of the CMB
+in cases where the NVMe cards in use do not have CMB support.
+
+
+Provider Drivers
+----------------
+
+A provider simply needs to register a BAR (or a portion of a BAR)
+as a P2P DMA resource using :c:func:`pci_p2pdma_add_resource()`.
+This will register struct pages for all the specified memory.
+
+After that it may optionally publish all of its resources as
+P2P memory using :c:func:`pci_p2pmem_publish()`. This will allow
+any orchestrator drivers to find and use the memory. When marked in
+this way, the resource must be regular memory with no side effects.
+
+For the time being this is fairly rudimentary in that all resources
+are typically going to be P2P memory. Future work will likely expand
+this to include other types of resources like doorbells.
+
+
+Client Drivers
+--------------
+
+A client driver typically only has to conditionally change its DMA map
+routine to use the mapping function :c:func:`pci_p2pdma_map_sg()` instead
+of the usual :c:func:`dma_map_sg()` function. Memory mapped in this
+way does not need to be unmapped.
+
+The client may also, optionally, make use of
+:c:func:`is_pci_p2pdma_page()` to determine when to use the P2P mapping
+functions and when to use the regular mapping functions. In some
+situations, it may be more appropriate to use a flag to indicate a
+given request is P2P memory and map appropriately. It is important to
+ensure that struct pages that back P2P memory stay out of code that
+does not have support for them as other code may treat the pages as
+regular memory which may not be appropriate.
+
+
+Orchestrator Drivers
+--------------------
+
+The first task an orchestrator driver must do is compile a list of
+all client devices that will be involved in a given transaction. For
+example, the NVMe Target driver creates a list including the namespace
+block device and the RNIC in use. If the orchestrator has access to
+a specific P2P provider to use it may check compatibility using
+:c:func:`pci_p2pdma_distance()` otherwise it may find a memory provider
+that's compatible with all clients using :c:func:`pci_p2pmem_find()`.
+If more than one provider is supported, the one nearest to all the clients will
+be chosen first. If more than one provider is an equal distance away, the
+one returned will be chosen at random (the choice is truly
+random, not arbitrary). This function returns the PCI device to use for the provider
+with a reference taken and therefore when it's no longer needed it should be
+returned with pci_dev_put().
+
+Once a provider is selected, the orchestrator can then use
+:c:func:`pci_alloc_p2pmem()` and :c:func:`pci_free_p2pmem()` to
+allocate P2P memory from the provider. :c:func:`pci_p2pmem_alloc_sgl()`
+and :c:func:`pci_p2pmem_free_sgl()` are convenience functions for
+allocating scatter-gather lists with P2P memory.
+
+Struct Page Caveats
+-------------------
+
+Driver writers should be very careful about not passing these special
+struct pages to code that isn't prepared for it. At this time, the kernel
+interfaces do not have any checks for ensuring this. This obviously
+precludes passing these pages to userspace.
+
+P2P memory is also technically IO memory but should never have any side
+effects behind it. Thus, the order of loads and stores should not be important
+and ioreadX(), iowriteX() and friends should not be necessary.
+However, as the memory is not cache coherent, if access ever needs to
+be protected by a spinlock then :c:func:`mmiowb()` must be used before
+unlocking the lock. (See ACQUIRES VS I/O ACCESSES in
+Documentation/memory-barriers.txt)
+
+
+P2P DMA Support Library
+=======================
+
+.. kernel-doc:: drivers/pci/p2pdma.c
+   :export:
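The orchestrator flow that the new p2pdma.rst describes (find a compatible provider, allocate p2pmem, release the reference) can be sketched as follows. This is a hedged sketch: `my_setup_p2p` and its arguments are hypothetical, and the exact signatures are assumed to follow the v4.20 p2pdma code; only the function names come from the document.

```
#include <linux/pci-p2pdma.h>

/* Hypothetical orchestrator helper: pick a p2pmem provider compatible
 * with every client device and exercise an allocation from it.
 */
static int my_setup_p2p(struct device **clients, int num_clients, size_t len)
{
	struct pci_dev *provider;
	void *buf;

	/* Find the provider nearest to all clients (random tie-break). */
	provider = pci_p2pmem_find(clients, num_clients);
	if (!provider)
		return -ENODEV;

	buf = pci_alloc_p2pmem(provider, len);
	if (!buf) {
		pci_dev_put(provider);
		return -ENOMEM;
	}

	/* ... clients would map the buffer with pci_p2pdma_map_sg() ... */

	pci_free_p2pmem(provider, buf, len);
	pci_dev_put(provider);	/* pci_p2pmem_find() took a reference */
	return 0;
}
```

Note the reference-counting detail from the document: the provider returned by pci_p2pmem_find() must be dropped with pci_dev_put() when no longer needed.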
diff --git a/Documentation/driver-api/pci.rst b/Documentation/driver-api/pci/pci.rst
index ca85e5e78b2c..ca85e5e78b2c 100644
--- a/Documentation/driver-api/pci.rst
+++ b/Documentation/driver-api/pci/pci.rst
diff --git a/Documentation/switchtec.txt b/Documentation/switchtec.txt
index f788264921ff..30d6a64e53f7 100644
--- a/Documentation/switchtec.txt
+++ b/Documentation/switchtec.txt
@@ -23,7 +23,7 @@ The primary means of communicating with the Switchtec management firmware is | |||
23 | through the Memory-mapped Remote Procedure Call (MRPC) interface. | 23 | through the Memory-mapped Remote Procedure Call (MRPC) interface. |
24 | Commands are submitted to the interface with a 4-byte command | 24 | Commands are submitted to the interface with a 4-byte command |
25 | identifier and up to 1KB of command specific data. The firmware will | 25 | identifier and up to 1KB of command specific data. The firmware will |
26 | respond with a 4 bytes return code and up to 1KB of command specific | 26 | respond with a 4-byte return code and up to 1KB of command-specific |
27 | data. The interface only processes a single command at a time. | 27 | data. The interface only processes a single command at a time. |
28 | 28 | ||
29 | 29 | ||
@@ -36,8 +36,8 @@ device: /dev/switchtec#, one for each management endpoint in the system. | |||
36 | The char device has the following semantics: | 36 | The char device has the following semantics: |
37 | 37 | ||
38 | * A write must consist of at least 4 bytes and no more than 1028 bytes. | 38 | * A write must consist of at least 4 bytes and no more than 1028 bytes. |
39 | The first four bytes will be interpreted as the command to run and | 39 | The first 4 bytes will be interpreted as the Command ID and the |
40 | the remainder will be used as the input data. A write will send the | 40 | remainder will be used as the input data. A write will send the |
41 | command to the firmware to begin processing. | 41 | command to the firmware to begin processing. |
42 | 42 | ||
43 | * Each write must be followed by exactly one read. Any double write will | 43 | * Each write must be followed by exactly one read. Any double write will |
@@ -45,9 +45,9 @@ The char device has the following semantics: | |||
45 | produce an error. | 45 | produce an error. |
46 | 46 | ||
47 | * A read will block until the firmware completes the command and return | 47 | * A read will block until the firmware completes the command and return |
48 | the four bytes of status plus up to 1024 bytes of output data. (The | 48 | the 4-byte Command Return Value plus up to 1024 bytes of output |
49 | length will be specified by the size parameter of the read call -- | 49 | data. (The length will be specified by the size parameter of the read |
50 | reading less than 4 bytes will produce an error. | 50 | call -- reading less than 4 bytes will produce an error.) |
51 | 51 | ||
52 | * The poll call will also be supported for userspace applications that | 52 | * The poll call will also be supported for userspace applications that |
53 | need to do other things while waiting for the command to complete. | 53 | need to do other things while waiting for the command to complete. |
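The write/read semantics described above can be illustrated from user space. This is a minimal sketch, not a real client: the buffer-packing helpers below are hypothetical names, the command ID and payload are placeholders, and the 4-byte fields are copied in host byte order for simplicity. In a real program, each write() of a packed request to /dev/switchtec# must be followed by exactly one read() of at least 4 bytes.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Pack an MRPC request: a 4-byte command ID followed by up to 1KB of
 * command-specific input data. Returns the number of bytes to write()
 * to the char device, or 0 if the payload would exceed the 1KB limit
 * (writes are capped at 4 + 1024 = 1028 bytes). */
static size_t mrpc_pack(uint8_t *buf, uint32_t cmd_id,
			const void *data, size_t data_len)
{
	if (data_len > 1024)
		return 0;
	memcpy(buf, &cmd_id, 4);	/* first 4 bytes: command ID */
	memcpy(buf + 4, data, data_len);
	return 4 + data_len;
}

/* Unpack an MRPC response read() back from the device: the first
 * 4 bytes are the command return value, the rest is output data. */
static uint32_t mrpc_ret(const uint8_t *buf)
{
	uint32_t ret;

	memcpy(&ret, buf, 4);
	return ret;
}
```

A double write without an intervening read, or a read with no preceding write, would produce an error per the semantics above, so a client should strictly alternate the two calls.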
@@ -83,10 +83,20 @@ The following IOCTLs are also supported by the device: | |||
83 | Non-Transparent Bridge (NTB) Driver | 83 | Non-Transparent Bridge (NTB) Driver |
84 | =================================== | 84 | =================================== |
85 | 85 | ||
86 | An NTB driver is provided for the switchtec hardware in switchtec_ntb. | 86 | An NTB hardware driver is provided for the Switchtec hardware in |
87 | Currently, it only supports switches configured with exactly 2 | 87 | ntb_hw_switchtec. Currently, it only supports switches configured with |
88 | partitions. It also requires the following configuration settings: | 88 | exactly 2 NT partitions and zero or more non-NT partitions. It also requires |
89 | the following configuration settings: | ||
89 | 90 | ||
90 | * Both partitions must be able to access each other's GAS spaces. | 91 | * Both NT partitions must be able to access each other's GAS spaces. |
91 | Thus, the bits in the GAS Access Vector under Management Settings | 92 | Thus, the bits in the GAS Access Vector under Management Settings |
92 | must be set to support this. | 93 | must be set to support this. |
94 | * Kernel configuration MUST include support for NTB (CONFIG_NTB needs | ||
95 | to be set) | ||
96 | |||
97 | NT EP BAR 2 will be dynamically configured as a Direct Window, and | ||
98 | the configuration file does not need to configure it explicitly. | ||
99 | |||
100 | Please refer to Documentation/ntb.txt in the Linux source tree for an overall | ||
101 | understanding of the Linux NTB stack. ntb_hw_switchtec works as an NTB | ||
102 | Hardware Driver in this stack. | ||
diff --git a/MAINTAINERS b/MAINTAINERS index 7dc00ae1bdc0..16fb17ce1475 100644 --- a/MAINTAINERS +++ b/MAINTAINERS | |||
@@ -11299,7 +11299,7 @@ M: Murali Karicheri <m-karicheri2@ti.com> | |||
11299 | L: linux-pci@vger.kernel.org | 11299 | L: linux-pci@vger.kernel.org |
11300 | L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) | 11300 | L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) |
11301 | S: Maintained | 11301 | S: Maintained |
11302 | F: drivers/pci/controller/dwc/*keystone* | 11302 | F: drivers/pci/controller/dwc/pci-keystone.c |
11303 | 11303 | ||
11304 | PCI ENDPOINT SUBSYSTEM | 11304 | PCI ENDPOINT SUBSYSTEM |
11305 | M: Kishon Vijay Abraham I <kishon@ti.com> | 11305 | M: Kishon Vijay Abraham I <kishon@ti.com> |
diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi index 7234e8330a57..efbdeaaa8dcd 100644 --- a/arch/arm/boot/dts/imx7d.dtsi +++ b/arch/arm/boot/dts/imx7d.dtsi | |||
@@ -146,8 +146,9 @@ | |||
146 | fsl,max-link-speed = <2>; | 146 | fsl,max-link-speed = <2>; |
147 | power-domains = <&pgc_pcie_phy>; | 147 | power-domains = <&pgc_pcie_phy>; |
148 | resets = <&src IMX7_RESET_PCIEPHY>, | 148 | resets = <&src IMX7_RESET_PCIEPHY>, |
149 | <&src IMX7_RESET_PCIE_CTRL_APPS_EN>; | 149 | <&src IMX7_RESET_PCIE_CTRL_APPS_EN>, |
150 | reset-names = "pciephy", "apps"; | 150 | <&src IMX7_RESET_PCIE_CTRL_APPS_TURNOFF>; |
151 | reset-names = "pciephy", "apps", "turnoff"; | ||
151 | status = "disabled"; | 152 | status = "disabled"; |
152 | }; | 153 | }; |
153 | }; | 154 | }; |
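After this hunk, a board referencing the i.MX7D PCIe node ends up with three named resets. A sketch of the resulting properties (overlay-reference style, labels as in imx7d.dtsi; only the reset-related properties are shown):

```dts
&pcie {
	resets = <&src IMX7_RESET_PCIEPHY>,
		 <&src IMX7_RESET_PCIE_CTRL_APPS_EN>,
		 <&src IMX7_RESET_PCIE_CTRL_APPS_TURNOFF>;
	reset-names = "pciephy", "apps", "turnoff";
};
```

The driver looks the new reset up by the "turnoff" name, so the entries in resets and reset-names must stay in the same order.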
diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c index 0e2ea1c78542..bb85e2f4603f 100644 --- a/arch/arm64/kernel/pci.c +++ b/arch/arm64/kernel/pci.c | |||
@@ -165,16 +165,15 @@ static void pci_acpi_generic_release_info(struct acpi_pci_root_info *ci) | |||
165 | /* Interface called from ACPI code to setup PCI host controller */ | 165 | /* Interface called from ACPI code to setup PCI host controller */ |
166 | struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root) | 166 | struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root) |
167 | { | 167 | { |
168 | int node = acpi_get_node(root->device->handle); | ||
169 | struct acpi_pci_generic_root_info *ri; | 168 | struct acpi_pci_generic_root_info *ri; |
170 | struct pci_bus *bus, *child; | 169 | struct pci_bus *bus, *child; |
171 | struct acpi_pci_root_ops *root_ops; | 170 | struct acpi_pci_root_ops *root_ops; |
172 | 171 | ||
173 | ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node); | 172 | ri = kzalloc(sizeof(*ri), GFP_KERNEL); |
174 | if (!ri) | 173 | if (!ri) |
175 | return NULL; | 174 | return NULL; |
176 | 175 | ||
177 | root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node); | 176 | root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL); |
178 | if (!root_ops) { | 177 | if (!root_ops) { |
179 | kfree(ri); | 178 | kfree(ri); |
180 | return NULL; | 179 | return NULL; |
diff --git a/arch/powerpc/include/asm/pnv-pci.h b/arch/powerpc/include/asm/pnv-pci.h index 7f627e3f4da4..630eb8b1b7ed 100644 --- a/arch/powerpc/include/asm/pnv-pci.h +++ b/arch/powerpc/include/asm/pnv-pci.h | |||
@@ -54,7 +54,6 @@ void pnv_cxl_release_hwirq_ranges(struct cxl_irq_ranges *irqs, | |||
54 | 54 | ||
55 | struct pnv_php_slot { | 55 | struct pnv_php_slot { |
56 | struct hotplug_slot slot; | 56 | struct hotplug_slot slot; |
57 | struct hotplug_slot_info slot_info; | ||
58 | uint64_t id; | 57 | uint64_t id; |
59 | char *name; | 58 | char *name; |
60 | int slot_no; | 59 | int slot_no; |
@@ -72,6 +71,7 @@ struct pnv_php_slot { | |||
72 | struct pci_dev *pdev; | 71 | struct pci_dev *pdev; |
73 | struct pci_bus *bus; | 72 | struct pci_bus *bus; |
74 | bool power_state_check; | 73 | bool power_state_check; |
74 | u8 attention_state; | ||
75 | void *fdt; | 75 | void *fdt; |
76 | void *dt; | 76 | void *dt; |
77 | struct of_changeset ocs; | 77 | struct of_changeset ocs; |
diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c index 5559dcaddd5e..948656069cdd 100644 --- a/arch/x86/pci/acpi.c +++ b/arch/x86/pci/acpi.c | |||
@@ -356,7 +356,7 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root) | |||
356 | } else { | 356 | } else { |
357 | struct pci_root_info *info; | 357 | struct pci_root_info *info; |
358 | 358 | ||
359 | info = kzalloc_node(sizeof(*info), GFP_KERNEL, node); | 359 | info = kzalloc(sizeof(*info), GFP_KERNEL); |
360 | if (!info) | 360 | if (!info) |
361 | dev_err(&root->device->dev, | 361 | dev_err(&root->device->dev, |
362 | "pci_bus %04x:%02x: ignored (out of memory)\n", | 362 | "pci_bus %04x:%02x: ignored (out of memory)\n", |
diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c index 13f4485ca388..30a5111ae5fd 100644 --- a/arch/x86/pci/fixup.c +++ b/arch/x86/pci/fixup.c | |||
@@ -629,17 +629,11 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff); | |||
629 | static void quirk_no_aersid(struct pci_dev *pdev) | 629 | static void quirk_no_aersid(struct pci_dev *pdev) |
630 | { | 630 | { |
631 | /* VMD Domain */ | 631 | /* VMD Domain */ |
632 | if (is_vmd(pdev->bus)) | 632 | if (is_vmd(pdev->bus) && pci_is_root_bus(pdev->bus)) |
633 | pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID; | 633 | pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID; |
634 | } | 634 | } |
635 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid); | 635 | DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, |
636 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid); | 636 | PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid); |
637 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid); | ||
638 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid); | ||
639 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334a, quirk_no_aersid); | ||
640 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334b, quirk_no_aersid); | ||
641 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334c, quirk_no_aersid); | ||
642 | DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334d, quirk_no_aersid); | ||
643 | 637 | ||
644 | #ifdef CONFIG_PHYS_ADDR_T_64BIT | 638 | #ifdef CONFIG_PHYS_ADDR_T_64BIT |
645 | 639 | ||
diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c index 7433035ded95..707aafc7c2aa 100644 --- a/drivers/acpi/pci_root.c +++ b/drivers/acpi/pci_root.c | |||
@@ -421,7 +421,8 @@ out: | |||
421 | } | 421 | } |
422 | EXPORT_SYMBOL(acpi_pci_osc_control_set); | 422 | EXPORT_SYMBOL(acpi_pci_osc_control_set); |
423 | 423 | ||
424 | static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm) | 424 | static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, |
425 | bool is_pcie) | ||
425 | { | 426 | { |
426 | u32 support, control, requested; | 427 | u32 support, control, requested; |
427 | acpi_status status; | 428 | acpi_status status; |
@@ -455,9 +456,15 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm) | |||
455 | decode_osc_support(root, "OS supports", support); | 456 | decode_osc_support(root, "OS supports", support); |
456 | status = acpi_pci_osc_support(root, support); | 457 | status = acpi_pci_osc_support(root, support); |
457 | if (ACPI_FAILURE(status)) { | 458 | if (ACPI_FAILURE(status)) { |
458 | dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n", | ||
459 | acpi_format_exception(status)); | ||
460 | *no_aspm = 1; | 459 | *no_aspm = 1; |
460 | |||
461 | /* _OSC is optional for PCI host bridges */ | ||
462 | if ((status == AE_NOT_FOUND) && !is_pcie) | ||
463 | return; | ||
464 | |||
465 | dev_info(&device->dev, "_OSC failed (%s)%s\n", | ||
466 | acpi_format_exception(status), | ||
467 | pcie_aspm_support_enabled() ? "; disabling ASPM" : ""); | ||
461 | return; | 468 | return; |
462 | } | 469 | } |
463 | 470 | ||
@@ -533,6 +540,7 @@ static int acpi_pci_root_add(struct acpi_device *device, | |||
533 | acpi_handle handle = device->handle; | 540 | acpi_handle handle = device->handle; |
534 | int no_aspm = 0; | 541 | int no_aspm = 0; |
535 | bool hotadd = system_state == SYSTEM_RUNNING; | 542 | bool hotadd = system_state == SYSTEM_RUNNING; |
543 | bool is_pcie; | ||
536 | 544 | ||
537 | root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL); | 545 | root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL); |
538 | if (!root) | 546 | if (!root) |
@@ -590,7 +598,8 @@ static int acpi_pci_root_add(struct acpi_device *device, | |||
590 | 598 | ||
591 | root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle); | 599 | root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle); |
592 | 600 | ||
593 | negotiate_os_control(root, &no_aspm); | 601 | is_pcie = strcmp(acpi_device_hid(device), "PNP0A08") == 0; |
602 | negotiate_os_control(root, &no_aspm, is_pcie); | ||
594 | 603 | ||
595 | /* | 604 | /* |
596 | * TBD: Need PCI interface for enumeration/configuration of roots. | 605 | * TBD: Need PCI interface for enumeration/configuration of roots. |
diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c index 693cf05b0cc4..8c7c4583b52d 100644 --- a/drivers/acpi/property.c +++ b/drivers/acpi/property.c | |||
@@ -24,11 +24,15 @@ static int acpi_data_get_property_array(const struct acpi_device_data *data, | |||
24 | acpi_object_type type, | 24 | acpi_object_type type, |
25 | const union acpi_object **obj); | 25 | const union acpi_object **obj); |
26 | 26 | ||
27 | /* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */ | 27 | static const guid_t prp_guids[] = { |
28 | static const guid_t prp_guid = | 28 | /* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */ |
29 | GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c, | 29 | GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c, |
30 | 0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01); | 30 | 0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01), |
31 | /* ACPI _DSD data subnodes GUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */ | 31 | /* Hotplug in D3 GUID: 6211e2c0-58a3-4af3-90e1-927a4e0c55a4 */ |
32 | GUID_INIT(0x6211e2c0, 0x58a3, 0x4af3, | ||
33 | 0x90, 0xe1, 0x92, 0x7a, 0x4e, 0x0c, 0x55, 0xa4), | ||
34 | }; | ||
35 | |||
32 | static const guid_t ads_guid = | 36 | static const guid_t ads_guid = |
33 | GUID_INIT(0xdbb8e3e6, 0x5886, 0x4ba6, | 37 | GUID_INIT(0xdbb8e3e6, 0x5886, 0x4ba6, |
34 | 0x87, 0x95, 0x13, 0x19, 0xf5, 0x2a, 0x96, 0x6b); | 38 | 0x87, 0x95, 0x13, 0x19, 0xf5, 0x2a, 0x96, 0x6b); |
@@ -56,6 +60,7 @@ static bool acpi_nondev_subnode_extract(const union acpi_object *desc, | |||
56 | dn->name = link->package.elements[0].string.pointer; | 60 | dn->name = link->package.elements[0].string.pointer; |
57 | dn->fwnode.ops = &acpi_data_fwnode_ops; | 61 | dn->fwnode.ops = &acpi_data_fwnode_ops; |
58 | dn->parent = parent; | 62 | dn->parent = parent; |
63 | INIT_LIST_HEAD(&dn->data.properties); | ||
59 | INIT_LIST_HEAD(&dn->data.subnodes); | 64 | INIT_LIST_HEAD(&dn->data.subnodes); |
60 | 65 | ||
61 | result = acpi_extract_properties(desc, &dn->data); | 66 | result = acpi_extract_properties(desc, &dn->data); |
@@ -288,6 +293,35 @@ static void acpi_init_of_compatible(struct acpi_device *adev) | |||
288 | adev->flags.of_compatible_ok = 1; | 293 | adev->flags.of_compatible_ok = 1; |
289 | } | 294 | } |
290 | 295 | ||
296 | static bool acpi_is_property_guid(const guid_t *guid) | ||
297 | { | ||
298 | int i; | ||
299 | |||
300 | for (i = 0; i < ARRAY_SIZE(prp_guids); i++) { | ||
301 | if (guid_equal(guid, &prp_guids[i])) | ||
302 | return true; | ||
303 | } | ||
304 | |||
305 | return false; | ||
306 | } | ||
307 | |||
308 | struct acpi_device_properties * | ||
309 | acpi_data_add_props(struct acpi_device_data *data, const guid_t *guid, | ||
310 | const union acpi_object *properties) | ||
311 | { | ||
312 | struct acpi_device_properties *props; | ||
313 | |||
314 | props = kzalloc(sizeof(*props), GFP_KERNEL); | ||
315 | if (props) { | ||
316 | INIT_LIST_HEAD(&props->list); | ||
317 | props->guid = guid; | ||
318 | props->properties = properties; | ||
319 | list_add_tail(&props->list, &data->properties); | ||
320 | } | ||
321 | |||
322 | return props; | ||
323 | } | ||
324 | |||
291 | static bool acpi_extract_properties(const union acpi_object *desc, | 325 | static bool acpi_extract_properties(const union acpi_object *desc, |
292 | struct acpi_device_data *data) | 326 | struct acpi_device_data *data) |
293 | { | 327 | { |
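The new acpi_is_property_guid() helper above is a plain linear scan over a small GUID table. A standalone sketch of the same pattern, using a stand-in 16-byte guid type and memcmp-based equality rather than the kernel's guid_t/guid_equal():

```c
#include <stdbool.h>
#include <string.h>
#include <stddef.h>

typedef struct { unsigned char b[16]; } guid;	/* stand-in for guid_t */

static bool guid_eq(const guid *a, const guid *b)
{
	return memcmp(a->b, b->b, sizeof(a->b)) == 0;
}

/* Mirror of acpi_is_property_guid(): accept a _DSD package only if its
 * GUID matches one of the recognized property GUIDs in the table. */
static bool is_property_guid(const guid *g, const guid *table, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (guid_eq(g, &table[i]))
			return true;
	return false;
}
```

A linear scan is fine here because the table holds only a handful of recognized GUIDs (the _DSD properties GUID plus the hotplug-in-D3 GUID in this series).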
@@ -312,7 +346,7 @@ static bool acpi_extract_properties(const union acpi_object *desc, | |||
312 | properties->type != ACPI_TYPE_PACKAGE) | 346 | properties->type != ACPI_TYPE_PACKAGE) |
313 | break; | 347 | break; |
314 | 348 | ||
315 | if (!guid_equal((guid_t *)guid->buffer.pointer, &prp_guid)) | 349 | if (!acpi_is_property_guid((guid_t *)guid->buffer.pointer)) |
316 | continue; | 350 | continue; |
317 | 351 | ||
318 | /* | 352 | /* |
@@ -320,13 +354,13 @@ static bool acpi_extract_properties(const union acpi_object *desc, | |||
320 | * package immediately following it. | 354 | * package immediately following it. |
321 | */ | 355 | */ |
322 | if (!acpi_properties_format_valid(properties)) | 356 | if (!acpi_properties_format_valid(properties)) |
323 | break; | 357 | continue; |
324 | 358 | ||
325 | data->properties = properties; | 359 | acpi_data_add_props(data, (const guid_t *)guid->buffer.pointer, |
326 | return true; | 360 | properties); |
327 | } | 361 | } |
328 | 362 | ||
329 | return false; | 363 | return !list_empty(&data->properties); |
330 | } | 364 | } |
331 | 365 | ||
332 | void acpi_init_properties(struct acpi_device *adev) | 366 | void acpi_init_properties(struct acpi_device *adev) |
@@ -336,6 +370,7 @@ void acpi_init_properties(struct acpi_device *adev) | |||
336 | acpi_status status; | 370 | acpi_status status; |
337 | bool acpi_of = false; | 371 | bool acpi_of = false; |
338 | 372 | ||
373 | INIT_LIST_HEAD(&adev->data.properties); | ||
339 | INIT_LIST_HEAD(&adev->data.subnodes); | 374 | INIT_LIST_HEAD(&adev->data.subnodes); |
340 | 375 | ||
341 | if (!adev->handle) | 376 | if (!adev->handle) |
@@ -398,11 +433,16 @@ static void acpi_destroy_nondev_subnodes(struct list_head *list) | |||
398 | 433 | ||
399 | void acpi_free_properties(struct acpi_device *adev) | 434 | void acpi_free_properties(struct acpi_device *adev) |
400 | { | 435 | { |
436 | struct acpi_device_properties *props, *tmp; | ||
437 | |||
401 | acpi_destroy_nondev_subnodes(&adev->data.subnodes); | 438 | acpi_destroy_nondev_subnodes(&adev->data.subnodes); |
402 | ACPI_FREE((void *)adev->data.pointer); | 439 | ACPI_FREE((void *)adev->data.pointer); |
403 | adev->data.of_compatible = NULL; | 440 | adev->data.of_compatible = NULL; |
404 | adev->data.pointer = NULL; | 441 | adev->data.pointer = NULL; |
405 | adev->data.properties = NULL; | 442 | list_for_each_entry_safe(props, tmp, &adev->data.properties, list) { |
443 | list_del(&props->list); | ||
444 | kfree(props); | ||
445 | } | ||
406 | } | 446 | } |
407 | 447 | ||
408 | /** | 448 | /** |
@@ -427,32 +467,37 @@ static int acpi_data_get_property(const struct acpi_device_data *data, | |||
427 | const char *name, acpi_object_type type, | 467 | const char *name, acpi_object_type type, |
428 | const union acpi_object **obj) | 468 | const union acpi_object **obj) |
429 | { | 469 | { |
430 | const union acpi_object *properties; | 470 | const struct acpi_device_properties *props; |
431 | int i; | ||
432 | 471 | ||
433 | if (!data || !name) | 472 | if (!data || !name) |
434 | return -EINVAL; | 473 | return -EINVAL; |
435 | 474 | ||
436 | if (!data->pointer || !data->properties) | 475 | if (!data->pointer || list_empty(&data->properties)) |
437 | return -EINVAL; | 476 | return -EINVAL; |
438 | 477 | ||
439 | properties = data->properties; | 478 | list_for_each_entry(props, &data->properties, list) { |
440 | for (i = 0; i < properties->package.count; i++) { | 479 | const union acpi_object *properties; |
441 | const union acpi_object *propname, *propvalue; | 480 | unsigned int i; |
442 | const union acpi_object *property; | ||
443 | 481 | ||
444 | property = &properties->package.elements[i]; | 482 | properties = props->properties; |
483 | for (i = 0; i < properties->package.count; i++) { | ||
484 | const union acpi_object *propname, *propvalue; | ||
485 | const union acpi_object *property; | ||
445 | 486 | ||
446 | propname = &property->package.elements[0]; | 487 | property = &properties->package.elements[i]; |
447 | propvalue = &property->package.elements[1]; | ||
448 | 488 | ||
449 | if (!strcmp(name, propname->string.pointer)) { | 489 | propname = &property->package.elements[0]; |
450 | if (type != ACPI_TYPE_ANY && propvalue->type != type) | 490 | propvalue = &property->package.elements[1]; |
451 | return -EPROTO; | ||
452 | if (obj) | ||
453 | *obj = propvalue; | ||
454 | 491 | ||
455 | return 0; | 492 | if (!strcmp(name, propname->string.pointer)) { |
493 | if (type != ACPI_TYPE_ANY && | ||
494 | propvalue->type != type) | ||
495 | return -EPROTO; | ||
496 | if (obj) | ||
497 | *obj = propvalue; | ||
498 | |||
499 | return 0; | ||
500 | } | ||
456 | } | 501 | } |
457 | } | 502 | } |
458 | return -EINVAL; | 503 | return -EINVAL; |
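After this rework a device can carry several property packages, one per recognized GUID, and acpi_data_get_property() walks them in list order, returning the first name match. A simplified user-space model of that two-level search, with hypothetical flat structs instead of the kernel's list_head and union acpi_object:

```c
#include <string.h>
#include <stddef.h>

struct prop { const char *name; int value; };

/* One entry per recognized GUID, like struct acpi_device_properties. */
struct prop_set { const struct prop *props; size_t count; };

/* First-match lookup across all property sets: outer loop over sets,
 * inner loop over entries, mirroring the reworked
 * acpi_data_get_property(). */
static const struct prop *find_prop(const struct prop_set *sets,
				    size_t nsets, const char *name)
{
	size_t s, i;

	for (s = 0; s < nsets; s++)
		for (i = 0; i < sets[s].count; i++)
			if (strcmp(name, sets[s].props[i].name) == 0)
				return &sets[s].props[i];
	return NULL;	/* the kernel code returns -EINVAL here */
}
```

Because lookup stops at the first match, the order in which acpi_data_add_props() appends packages decides which set wins when two GUIDs define the same property name.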
diff --git a/drivers/acpi/x86/apple.c b/drivers/acpi/x86/apple.c index bb1984f6c9fe..b7c98ff82d78 100644 --- a/drivers/acpi/x86/apple.c +++ b/drivers/acpi/x86/apple.c | |||
@@ -132,8 +132,8 @@ void acpi_extract_apple_properties(struct acpi_device *adev) | |||
132 | } | 132 | } |
133 | WARN_ON(free_space != (void *)newprops + newsize); | 133 | WARN_ON(free_space != (void *)newprops + newsize); |
134 | 134 | ||
135 | adev->data.properties = newprops; | ||
136 | adev->data.pointer = newprops; | 135 | adev->data.pointer = newprops; |
136 | acpi_data_add_props(&adev->data, &apple_prp_guid, newprops); | ||
137 | 137 | ||
138 | out_free: | 138 | out_free: |
139 | ACPI_FREE(props); | 139 | ACPI_FREE(props); |
diff --git a/drivers/ata/sata_inic162x.c b/drivers/ata/sata_inic162x.c index 9b6d7930d1c7..e0bcf9b2dab0 100644 --- a/drivers/ata/sata_inic162x.c +++ b/drivers/ata/sata_inic162x.c | |||
@@ -873,7 +873,7 @@ static int inic_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) | |||
873 | * like others but it will lock up the whole machine HARD if | 873 | * like others but it will lock up the whole machine HARD if |
874 | * 65536 byte PRD entry is fed. Reduce maximum segment size. | 874 | * 65536 byte PRD entry is fed. Reduce maximum segment size. |
875 | */ | 875 | */ |
876 | rc = pci_set_dma_max_seg_size(pdev, 65536 - 512); | 876 | rc = dma_set_max_seg_size(&pdev->dev, 65536 - 512); |
877 | if (rc) { | 877 | if (rc) { |
878 | dev_err(&pdev->dev, "failed to set the maximum segment size\n"); | 878 | dev_err(&pdev->dev, "failed to set the maximum segment size\n"); |
879 | return rc; | 879 | return rc; |
diff --git a/drivers/block/rsxx/core.c b/drivers/block/rsxx/core.c index 639051502181..0cf4509d575c 100644 --- a/drivers/block/rsxx/core.c +++ b/drivers/block/rsxx/core.c | |||
@@ -780,7 +780,7 @@ static int rsxx_pci_probe(struct pci_dev *dev, | |||
780 | goto failed_enable; | 780 | goto failed_enable; |
781 | 781 | ||
782 | pci_set_master(dev); | 782 | pci_set_master(dev); |
783 | pci_set_dma_max_seg_size(dev, RSXX_HW_BLK_SIZE); | 783 | dma_set_max_seg_size(&dev->dev, RSXX_HW_BLK_SIZE); |
784 | 784 | ||
785 | st = dma_set_mask(&dev->dev, DMA_BIT_MASK(64)); | 785 | st = dma_set_mask(&dev->dev, DMA_BIT_MASK(64)); |
786 | if (st) { | 786 | if (st) { |
diff --git a/drivers/crypto/qat/qat_common/adf_aer.c b/drivers/crypto/qat/qat_common/adf_aer.c index 9225d060e18f..f5e960d23a7a 100644 --- a/drivers/crypto/qat/qat_common/adf_aer.c +++ b/drivers/crypto/qat/qat_common/adf_aer.c | |||
@@ -198,7 +198,6 @@ static pci_ers_result_t adf_slot_reset(struct pci_dev *pdev) | |||
198 | pr_err("QAT: Can't find acceleration device\n"); | 198 | pr_err("QAT: Can't find acceleration device\n"); |
199 | return PCI_ERS_RESULT_DISCONNECT; | 199 | return PCI_ERS_RESULT_DISCONNECT; |
200 | } | 200 | } |
201 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
202 | if (adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_SYNC)) | 201 | if (adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_SYNC)) |
203 | return PCI_ERS_RESULT_DISCONNECT; | 202 | return PCI_ERS_RESULT_DISCONNECT; |
204 | 203 | ||
diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c index 0fec3c554fe3..2d810dfcdc48 100644 --- a/drivers/dma/ioat/init.c +++ b/drivers/dma/ioat/init.c | |||
@@ -1258,7 +1258,6 @@ static pci_ers_result_t ioat_pcie_error_detected(struct pci_dev *pdev, | |||
1258 | static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev) | 1258 | static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev) |
1259 | { | 1259 | { |
1260 | pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED; | 1260 | pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED; |
1261 | int err; | ||
1262 | 1261 | ||
1263 | dev_dbg(&pdev->dev, "%s post reset handling\n", DRV_NAME); | 1262 | dev_dbg(&pdev->dev, "%s post reset handling\n", DRV_NAME); |
1264 | 1263 | ||
@@ -1273,12 +1272,6 @@ static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev) | |||
1273 | pci_wake_from_d3(pdev, false); | 1272 | pci_wake_from_d3(pdev, false); |
1274 | } | 1273 | } |
1275 | 1274 | ||
1276 | err = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
1277 | if (err) { | ||
1278 | dev_err(&pdev->dev, | ||
1279 | "AER uncorrect error status clear failed: %#x\n", err); | ||
1280 | } | ||
1281 | |||
1282 | return result; | 1275 | return result; |
1283 | } | 1276 | } |
1284 | 1277 | ||
diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c index 2b1a7b455aa8..55b72fbe1631 100644 --- a/drivers/gpio/gpiolib-acpi.c +++ b/drivers/gpio/gpiolib-acpi.c | |||
@@ -1194,7 +1194,7 @@ int acpi_gpio_count(struct device *dev, const char *con_id) | |||
1194 | bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id) | 1194 | bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id) |
1195 | { | 1195 | { |
1196 | /* Never allow fallback if the device has properties */ | 1196 | /* Never allow fallback if the device has properties */ |
1197 | if (adev->data.properties || adev->driver_gpios) | 1197 | if (acpi_dev_has_props(adev) || adev->driver_gpios) |
1198 | return false; | 1198 | return false; |
1199 | 1199 | ||
1200 | return con_id == NULL; | 1200 | return con_id == NULL; |
diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c index 683e6d11a564..d22c4a2ebac6 100644 --- a/drivers/infiniband/core/rw.c +++ b/drivers/infiniband/core/rw.c | |||
@@ -12,6 +12,7 @@ | |||
12 | */ | 12 | */ |
13 | #include <linux/moduleparam.h> | 13 | #include <linux/moduleparam.h> |
14 | #include <linux/slab.h> | 14 | #include <linux/slab.h> |
15 | #include <linux/pci-p2pdma.h> | ||
15 | #include <rdma/mr_pool.h> | 16 | #include <rdma/mr_pool.h> |
16 | #include <rdma/rw.h> | 17 | #include <rdma/rw.h> |
17 | 18 | ||
@@ -280,7 +281,11 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num, | |||
280 | struct ib_device *dev = qp->pd->device; | 281 | struct ib_device *dev = qp->pd->device; |
281 | int ret; | 282 | int ret; |
282 | 283 | ||
283 | ret = ib_dma_map_sg(dev, sg, sg_cnt, dir); | 284 | if (is_pci_p2pdma_page(sg_page(sg))) |
285 | ret = pci_p2pdma_map_sg(dev->dma_device, sg, sg_cnt, dir); | ||
286 | else | ||
287 | ret = ib_dma_map_sg(dev, sg, sg_cnt, dir); | ||
288 | |||
284 | if (!ret) | 289 | if (!ret) |
285 | return -ENOMEM; | 290 | return -ENOMEM; |
286 | sg_cnt = ret; | 291 | sg_cnt = ret; |
@@ -602,7 +607,9 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num, | |||
602 | break; | 607 | break; |
603 | } | 608 | } |
604 | 609 | ||
605 | ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); | 610 | /* P2PDMA contexts do not need to be unmapped */ |
611 | if (!is_pci_p2pdma_page(sg_page(sg))) | ||
612 | ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); | ||
606 | } | 613 | } |
607 | EXPORT_SYMBOL(rdma_rw_ctx_destroy); | 614 | EXPORT_SYMBOL(rdma_rw_ctx_destroy); |
608 | 615 | ||
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c index 347fe18b1a41..62d6f197ec0b 100644 --- a/drivers/infiniband/hw/cxgb4/qp.c +++ b/drivers/infiniband/hw/cxgb4/qp.c | |||
@@ -99,7 +99,7 @@ static void dealloc_oc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) | |||
99 | static void dealloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) | 99 | static void dealloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) |
100 | { | 100 | { |
101 | dma_free_coherent(&(rdev->lldi.pdev->dev), sq->memsize, sq->queue, | 101 | dma_free_coherent(&(rdev->lldi.pdev->dev), sq->memsize, sq->queue, |
102 | pci_unmap_addr(sq, mapping)); | 102 | dma_unmap_addr(sq, mapping)); |
103 | } | 103 | } |
104 | 104 | ||
105 | static void dealloc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) | 105 | static void dealloc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) |
@@ -132,7 +132,7 @@ static int alloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) | |||
132 | if (!sq->queue) | 132 | if (!sq->queue) |
133 | return -ENOMEM; | 133 | return -ENOMEM; |
134 | sq->phys_addr = virt_to_phys(sq->queue); | 134 | sq->phys_addr = virt_to_phys(sq->queue); |
135 | pci_unmap_addr_set(sq, mapping, sq->dma_addr); | 135 | dma_unmap_addr_set(sq, mapping, sq->dma_addr); |
136 | return 0; | 136 | return 0; |
137 | } | 137 | } |
138 | 138 | ||
@@ -2521,7 +2521,7 @@ static void free_srq_queue(struct c4iw_srq *srq, struct c4iw_dev_ucontext *uctx, | |||
2521 | 2521 | ||
2522 | dma_free_coherent(&rdev->lldi.pdev->dev, | 2522 | dma_free_coherent(&rdev->lldi.pdev->dev, |
2523 | wq->memsize, wq->queue, | 2523 | wq->memsize, wq->queue, |
2524 | pci_unmap_addr(wq, mapping)); | 2524 | dma_unmap_addr(wq, mapping)); |
2525 | c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size); | 2525 | c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size); |
2526 | kfree(wq->sw_rq); | 2526 | kfree(wq->sw_rq); |
2527 | c4iw_put_qpid(rdev, wq->qid, uctx); | 2527 | c4iw_put_qpid(rdev, wq->qid, uctx); |
@@ -2570,7 +2570,7 @@ static int alloc_srq_queue(struct c4iw_srq *srq, struct c4iw_dev_ucontext *uctx, | |||
2570 | goto err_free_rqtpool; | 2570 | goto err_free_rqtpool; |
2571 | 2571 | ||
2572 | memset(wq->queue, 0, wq->memsize); | 2572 | memset(wq->queue, 0, wq->memsize); |
2573 | pci_unmap_addr_set(wq, mapping, wq->dma_addr); | 2573 | dma_unmap_addr_set(wq, mapping, wq->dma_addr); |
2574 | 2574 | ||
2575 | wq->bar2_va = c4iw_bar2_addrs(rdev, wq->qid, T4_BAR2_QTYPE_EGRESS, | 2575 | wq->bar2_va = c4iw_bar2_addrs(rdev, wq->qid, T4_BAR2_QTYPE_EGRESS, |
2576 | &wq->bar2_qid, | 2576 | &wq->bar2_qid, |
@@ -2649,7 +2649,7 @@ static int alloc_srq_queue(struct c4iw_srq *srq, struct c4iw_dev_ucontext *uctx, | |||
2649 | err_free_queue: | 2649 | err_free_queue: |
2650 | dma_free_coherent(&rdev->lldi.pdev->dev, | 2650 | dma_free_coherent(&rdev->lldi.pdev->dev, |
2651 | wq->memsize, wq->queue, | 2651 | wq->memsize, wq->queue, |
2652 | pci_unmap_addr(wq, mapping)); | 2652 | dma_unmap_addr(wq, mapping)); |
2653 | err_free_rqtpool: | 2653 | err_free_rqtpool: |
2654 | c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size); | 2654 | c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size); |
2655 | err_free_pending_wrs: | 2655 | err_free_pending_wrs: |
diff --git a/drivers/infiniband/hw/cxgb4/t4.h b/drivers/infiniband/hw/cxgb4/t4.h index e42021fd6fd6..fff6d48d262f 100644 --- a/drivers/infiniband/hw/cxgb4/t4.h +++ b/drivers/infiniband/hw/cxgb4/t4.h | |||
@@ -397,7 +397,7 @@ struct t4_srq_pending_wr { | |||
397 | struct t4_srq { | 397 | struct t4_srq { |
398 | union t4_recv_wr *queue; | 398 | union t4_recv_wr *queue; |
399 | dma_addr_t dma_addr; | 399 | dma_addr_t dma_addr; |
400 | DECLARE_PCI_UNMAP_ADDR(mapping); | 400 | DEFINE_DMA_UNMAP_ADDR(mapping); |
401 | struct t4_swrqe *sw_rq; | 401 | struct t4_swrqe *sw_rq; |
402 | void __iomem *bar2_va; | 402 | void __iomem *bar2_va; |
403 | u64 bar2_pa; | 403 | u64 bar2_pa; |
diff --git a/drivers/infiniband/hw/hfi1/pcie.c b/drivers/infiniband/hw/hfi1/pcie.c
index 6c967dde58e7..cca413eaa74e 100644
--- a/drivers/infiniband/hw/hfi1/pcie.c
+++ b/drivers/infiniband/hw/hfi1/pcie.c
@@ -650,7 +650,6 @@ pci_resume(struct pci_dev *pdev) | |||
650 | struct hfi1_devdata *dd = pci_get_drvdata(pdev); | 650 | struct hfi1_devdata *dd = pci_get_drvdata(pdev); |
651 | 651 | ||
652 | dd_dev_info(dd, "HFI1 resume function called\n"); | 652 | dd_dev_info(dd, "HFI1 resume function called\n"); |
653 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
654 | /* | 653 | /* |
655 | * Running jobs will fail, since it's asynchronous | 654 | * Running jobs will fail, since it's asynchronous |
656 | * unlike sysfs-requested reset. Better than | 655 | * unlike sysfs-requested reset. Better than |
diff --git a/drivers/infiniband/hw/qib/qib_pcie.c b/drivers/infiniband/hw/qib/qib_pcie.c
index 5ac7b31c346b..30595b358d8f 100644
--- a/drivers/infiniband/hw/qib/qib_pcie.c
+++ b/drivers/infiniband/hw/qib/qib_pcie.c
@@ -597,7 +597,6 @@ qib_pci_resume(struct pci_dev *pdev) | |||
597 | struct qib_devdata *dd = pci_get_drvdata(pdev); | 597 | struct qib_devdata *dd = pci_get_drvdata(pdev); |
598 | 598 | ||
599 | qib_devinfo(pdev, "QIB resume function called\n"); | 599 | qib_devinfo(pdev, "QIB resume function called\n"); |
600 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
601 | /* | 600 | /* |
602 | * Running jobs will fail, since it's asynchronous | 601 | * Running jobs will fail, since it's asynchronous |
603 | * unlike sysfs-requested reset. Better than | 602 | * unlike sysfs-requested reset. Better than |
diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
index 6d3221134927..7968c644ad86 100644
--- a/drivers/net/ethernet/atheros/alx/main.c
+++ b/drivers/net/ethernet/atheros/alx/main.c
@@ -1964,8 +1964,6 @@ static pci_ers_result_t alx_pci_error_slot_reset(struct pci_dev *pdev) | |||
1964 | if (!alx_reset_mac(hw)) | 1964 | if (!alx_reset_mac(hw)) |
1965 | rc = PCI_ERS_RESULT_RECOVERED; | 1965 | rc = PCI_ERS_RESULT_RECOVERED; |
1966 | out: | 1966 | out: |
1967 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
1968 | |||
1969 | rtnl_unlock(); | 1967 | rtnl_unlock(); |
1970 | 1968 | ||
1971 | return rc; | 1969 | return rc; |
diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
index 122fdb80a789..bbb247116045 100644
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -8793,13 +8793,6 @@ static pci_ers_result_t bnx2_io_slot_reset(struct pci_dev *pdev) | |||
8793 | if (!(bp->flags & BNX2_FLAG_AER_ENABLED)) | 8793 | if (!(bp->flags & BNX2_FLAG_AER_ENABLED)) |
8794 | return result; | 8794 | return result; |
8795 | 8795 | ||
8796 | err = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
8797 | if (err) { | ||
8798 | dev_err(&pdev->dev, | ||
8799 | "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", | ||
8800 | err); /* non-fatal, continue */ | ||
8801 | } | ||
8802 | |||
8803 | return result; | 8796 | return result; |
8804 | } | 8797 | } |
8805 | 8798 | ||
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 40093d88353f..95309b27c7d1 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -14380,14 +14380,6 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev) | |||
14380 | 14380 | ||
14381 | rtnl_unlock(); | 14381 | rtnl_unlock(); |
14382 | 14382 | ||
14383 | /* If AER, perform cleanup of the PCIe registers */ | ||
14384 | if (bp->flags & AER_ENABLED) { | ||
14385 | if (pci_cleanup_aer_uncorrect_error_status(pdev)) | ||
14386 | BNX2X_ERR("pci_cleanup_aer_uncorrect_error_status failed\n"); | ||
14387 | else | ||
14388 | DP(NETIF_MSG_HW, "pci_cleanup_aer_uncorrect_error_status succeeded\n"); | ||
14389 | } | ||
14390 | |||
14391 | return PCI_ERS_RESULT_RECOVERED; | 14383 | return PCI_ERS_RESULT_RECOVERED; |
14392 | } | 14384 | } |
14393 | 14385 | ||
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index de987ccdcf1f..dd85d790f638 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -10354,13 +10354,6 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev) | |||
10354 | 10354 | ||
10355 | rtnl_unlock(); | 10355 | rtnl_unlock(); |
10356 | 10356 | ||
10357 | err = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
10358 | if (err) { | ||
10359 | dev_err(&pdev->dev, | ||
10360 | "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", | ||
10361 | err); /* non-fatal, continue */ | ||
10362 | } | ||
10363 | |||
10364 | return PCI_ERS_RESULT_RECOVERED; | 10357 | return PCI_ERS_RESULT_RECOVERED; |
10365 | } | 10358 | } |
10366 | 10359 | ||
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 2de0590a62c4..05a46926016a 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -4767,7 +4767,6 @@ static pci_ers_result_t eeh_slot_reset(struct pci_dev *pdev) | |||
4767 | pci_set_master(pdev); | 4767 | pci_set_master(pdev); |
4768 | pci_restore_state(pdev); | 4768 | pci_restore_state(pdev); |
4769 | pci_save_state(pdev); | 4769 | pci_save_state(pdev); |
4770 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
4771 | 4770 | ||
4772 | if (t4_wait_dev_ready(adap->regs) < 0) | 4771 | if (t4_wait_dev_ready(adap->regs) < 0) |
4773 | return PCI_ERS_RESULT_DISCONNECT; | 4772 | return PCI_ERS_RESULT_DISCONNECT; |
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index bff74752cef1..c5ad7a4f4d83 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -6146,7 +6146,6 @@ static pci_ers_result_t be_eeh_reset(struct pci_dev *pdev) | |||
6146 | if (status) | 6146 | if (status) |
6147 | return PCI_ERS_RESULT_DISCONNECT; | 6147 | return PCI_ERS_RESULT_DISCONNECT; |
6148 | 6148 | ||
6149 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
6150 | be_clear_error(adapter, BE_CLEAR_ALL); | 6149 | be_clear_error(adapter, BE_CLEAR_ALL); |
6151 | return PCI_ERS_RESULT_RECOVERED; | 6150 | return PCI_ERS_RESULT_RECOVERED; |
6152 | } | 6151 | } |
diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
index c0f9faca70c4..16a73bd9f4cb 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -6854,8 +6854,6 @@ static pci_ers_result_t e1000_io_slot_reset(struct pci_dev *pdev) | |||
6854 | result = PCI_ERS_RESULT_RECOVERED; | 6854 | result = PCI_ERS_RESULT_RECOVERED; |
6855 | } | 6855 | } |
6856 | 6856 | ||
6857 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
6858 | |||
6859 | return result; | 6857 | return result; |
6860 | } | 6858 | } |
6861 | 6859 | ||
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
index c859ababeed5..02345d381303 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
@@ -2440,8 +2440,6 @@ static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev) | |||
2440 | result = PCI_ERS_RESULT_RECOVERED; | 2440 | result = PCI_ERS_RESULT_RECOVERED; |
2441 | } | 2441 | } |
2442 | 2442 | ||
2443 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
2444 | |||
2445 | return result; | 2443 | return result; |
2446 | } | 2444 | } |
2447 | 2445 | ||
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 330bafe6a689..bc71a21c1dc2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -14552,7 +14552,6 @@ static pci_ers_result_t i40e_pci_error_slot_reset(struct pci_dev *pdev) | |||
14552 | { | 14552 | { |
14553 | struct i40e_pf *pf = pci_get_drvdata(pdev); | 14553 | struct i40e_pf *pf = pci_get_drvdata(pdev); |
14554 | pci_ers_result_t result; | 14554 | pci_ers_result_t result; |
14555 | int err; | ||
14556 | u32 reg; | 14555 | u32 reg; |
14557 | 14556 | ||
14558 | dev_dbg(&pdev->dev, "%s\n", __func__); | 14557 | dev_dbg(&pdev->dev, "%s\n", __func__); |
@@ -14573,14 +14572,6 @@ static pci_ers_result_t i40e_pci_error_slot_reset(struct pci_dev *pdev) | |||
14573 | result = PCI_ERS_RESULT_DISCONNECT; | 14572 | result = PCI_ERS_RESULT_DISCONNECT; |
14574 | } | 14573 | } |
14575 | 14574 | ||
14576 | err = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
14577 | if (err) { | ||
14578 | dev_info(&pdev->dev, | ||
14579 | "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", | ||
14580 | err); | ||
14581 | /* non-fatal, continue */ | ||
14582 | } | ||
14583 | |||
14584 | return result; | 14575 | return result; |
14585 | } | 14576 | } |
14586 | 14577 | ||
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 0d29df8accd8..5df88ad8ac81 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -9086,7 +9086,6 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev) | |||
9086 | struct igb_adapter *adapter = netdev_priv(netdev); | 9086 | struct igb_adapter *adapter = netdev_priv(netdev); |
9087 | struct e1000_hw *hw = &adapter->hw; | 9087 | struct e1000_hw *hw = &adapter->hw; |
9088 | pci_ers_result_t result; | 9088 | pci_ers_result_t result; |
9089 | int err; | ||
9090 | 9089 | ||
9091 | if (pci_enable_device_mem(pdev)) { | 9090 | if (pci_enable_device_mem(pdev)) { |
9092 | dev_err(&pdev->dev, | 9091 | dev_err(&pdev->dev, |
@@ -9110,14 +9109,6 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev) | |||
9110 | result = PCI_ERS_RESULT_RECOVERED; | 9109 | result = PCI_ERS_RESULT_RECOVERED; |
9111 | } | 9110 | } |
9112 | 9111 | ||
9113 | err = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
9114 | if (err) { | ||
9115 | dev_err(&pdev->dev, | ||
9116 | "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", | ||
9117 | err); | ||
9118 | /* non-fatal, continue */ | ||
9119 | } | ||
9120 | |||
9121 | return result; | 9112 | return result; |
9122 | } | 9113 | } |
9123 | 9114 | ||
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 51268772a999..0049a2becd7e 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -11322,8 +11322,6 @@ static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev, | |||
11322 | /* Free device reference count */ | 11322 | /* Free device reference count */ |
11323 | pci_dev_put(vfdev); | 11323 | pci_dev_put(vfdev); |
11324 | } | 11324 | } |
11325 | |||
11326 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
11327 | } | 11325 | } |
11328 | 11326 | ||
11329 | /* | 11327 | /* |
@@ -11373,7 +11371,6 @@ static pci_ers_result_t ixgbe_io_slot_reset(struct pci_dev *pdev) | |||
11373 | { | 11371 | { |
11374 | struct ixgbe_adapter *adapter = pci_get_drvdata(pdev); | 11372 | struct ixgbe_adapter *adapter = pci_get_drvdata(pdev); |
11375 | pci_ers_result_t result; | 11373 | pci_ers_result_t result; |
11376 | int err; | ||
11377 | 11374 | ||
11378 | if (pci_enable_device_mem(pdev)) { | 11375 | if (pci_enable_device_mem(pdev)) { |
11379 | e_err(probe, "Cannot re-enable PCI device after reset.\n"); | 11376 | e_err(probe, "Cannot re-enable PCI device after reset.\n"); |
@@ -11393,13 +11390,6 @@ static pci_ers_result_t ixgbe_io_slot_reset(struct pci_dev *pdev) | |||
11393 | result = PCI_ERS_RESULT_RECOVERED; | 11390 | result = PCI_ERS_RESULT_RECOVERED; |
11394 | } | 11391 | } |
11395 | 11392 | ||
11396 | err = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
11397 | if (err) { | ||
11398 | e_dev_err("pci_cleanup_aer_uncorrect_error_status " | ||
11399 | "failed 0x%0x\n", err); | ||
11400 | /* non-fatal, continue */ | ||
11401 | } | ||
11402 | |||
11403 | return result; | 11393 | return result; |
11404 | } | 11394 | } |
11405 | 11395 | ||
diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
index 59c70be22a84..7d9819d80e44 100644
--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
@@ -1784,11 +1784,6 @@ static pci_ers_result_t netxen_io_slot_reset(struct pci_dev *pdev) | |||
1784 | return err ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; | 1784 | return err ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; |
1785 | } | 1785 | } |
1786 | 1786 | ||
1787 | static void netxen_io_resume(struct pci_dev *pdev) | ||
1788 | { | ||
1789 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
1790 | } | ||
1791 | |||
1792 | static void netxen_nic_shutdown(struct pci_dev *pdev) | 1787 | static void netxen_nic_shutdown(struct pci_dev *pdev) |
1793 | { | 1788 | { |
1794 | struct netxen_adapter *adapter = pci_get_drvdata(pdev); | 1789 | struct netxen_adapter *adapter = pci_get_drvdata(pdev); |
@@ -3465,7 +3460,6 @@ netxen_free_ip_list(struct netxen_adapter *adapter, bool master) | |||
3465 | static const struct pci_error_handlers netxen_err_handler = { | 3460 | static const struct pci_error_handlers netxen_err_handler = { |
3466 | .error_detected = netxen_io_error_detected, | 3461 | .error_detected = netxen_io_error_detected, |
3467 | .slot_reset = netxen_io_slot_reset, | 3462 | .slot_reset = netxen_io_slot_reset, |
3468 | .resume = netxen_io_resume, | ||
3469 | }; | 3463 | }; |
3470 | 3464 | ||
3471 | static struct pci_driver netxen_driver = { | 3465 | static struct pci_driver netxen_driver = { |
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
index a79d84f99102..2a533280b124 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
@@ -4233,7 +4233,6 @@ static void qlcnic_83xx_io_resume(struct pci_dev *pdev) | |||
4233 | { | 4233 | { |
4234 | struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); | 4234 | struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); |
4235 | 4235 | ||
4236 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
4237 | if (test_and_clear_bit(__QLCNIC_AER, &adapter->state)) | 4236 | if (test_and_clear_bit(__QLCNIC_AER, &adapter->state)) |
4238 | qlcnic_83xx_aer_start_poll_work(adapter); | 4237 | qlcnic_83xx_aer_start_poll_work(adapter); |
4239 | } | 4238 | } |
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
index dbd48012224f..d42ba2293d8c 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
@@ -3930,7 +3930,6 @@ static void qlcnic_82xx_io_resume(struct pci_dev *pdev) | |||
3930 | u32 state; | 3930 | u32 state; |
3931 | struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); | 3931 | struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); |
3932 | 3932 | ||
3933 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
3934 | state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE); | 3933 | state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE); |
3935 | if (state == QLCNIC_DEV_READY && test_and_clear_bit(__QLCNIC_AER, | 3934 | if (state == QLCNIC_DEV_READY && test_and_clear_bit(__QLCNIC_AER, |
3936 | &adapter->state)) | 3935 | &adapter->state)) |
diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
index 3d0dd39c289e..98fe7e762e17 100644
--- a/drivers/net/ethernet/sfc/efx.c
+++ b/drivers/net/ethernet/sfc/efx.c
@@ -3821,7 +3821,6 @@ static pci_ers_result_t efx_io_slot_reset(struct pci_dev *pdev) | |||
3821 | { | 3821 | { |
3822 | struct efx_nic *efx = pci_get_drvdata(pdev); | 3822 | struct efx_nic *efx = pci_get_drvdata(pdev); |
3823 | pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; | 3823 | pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; |
3824 | int rc; | ||
3825 | 3824 | ||
3826 | if (pci_enable_device(pdev)) { | 3825 | if (pci_enable_device(pdev)) { |
3827 | netif_err(efx, hw, efx->net_dev, | 3826 | netif_err(efx, hw, efx->net_dev, |
@@ -3829,13 +3828,6 @@ static pci_ers_result_t efx_io_slot_reset(struct pci_dev *pdev) | |||
3829 | status = PCI_ERS_RESULT_DISCONNECT; | 3828 | status = PCI_ERS_RESULT_DISCONNECT; |
3830 | } | 3829 | } |
3831 | 3830 | ||
3832 | rc = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
3833 | if (rc) { | ||
3834 | netif_err(efx, hw, efx->net_dev, | ||
3835 | "pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc); | ||
3836 | /* Non-fatal error. Continue. */ | ||
3837 | } | ||
3838 | |||
3839 | return status; | 3831 | return status; |
3840 | } | 3832 | } |
3841 | 3833 | ||
diff --git a/drivers/net/ethernet/sfc/falcon/efx.c b/drivers/net/ethernet/sfc/falcon/efx.c
index 03e2455c502e..8b1f94d7a6c5 100644
--- a/drivers/net/ethernet/sfc/falcon/efx.c
+++ b/drivers/net/ethernet/sfc/falcon/efx.c
@@ -3160,7 +3160,6 @@ static pci_ers_result_t ef4_io_slot_reset(struct pci_dev *pdev) | |||
3160 | { | 3160 | { |
3161 | struct ef4_nic *efx = pci_get_drvdata(pdev); | 3161 | struct ef4_nic *efx = pci_get_drvdata(pdev); |
3162 | pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; | 3162 | pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; |
3163 | int rc; | ||
3164 | 3163 | ||
3165 | if (pci_enable_device(pdev)) { | 3164 | if (pci_enable_device(pdev)) { |
3166 | netif_err(efx, hw, efx->net_dev, | 3165 | netif_err(efx, hw, efx->net_dev, |
@@ -3168,13 +3167,6 @@ static pci_ers_result_t ef4_io_slot_reset(struct pci_dev *pdev) | |||
3168 | status = PCI_ERS_RESULT_DISCONNECT; | 3167 | status = PCI_ERS_RESULT_DISCONNECT; |
3169 | } | 3168 | } |
3170 | 3169 | ||
3171 | rc = pci_cleanup_aer_uncorrect_error_status(pdev); | ||
3172 | if (rc) { | ||
3173 | netif_err(efx, hw, efx->net_dev, | ||
3174 | "pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc); | ||
3175 | /* Non-fatal error. Continue. */ | ||
3176 | } | ||
3177 | |||
3178 | return status; | 3170 | return status; |
3179 | } | 3171 | } |
3180 | 3172 | ||
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9e4a30b05bd2..2e65be8b1387 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3064,7 +3064,11 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid) | |||
3064 | ns->queue = blk_mq_init_queue(ctrl->tagset); | 3064 | ns->queue = blk_mq_init_queue(ctrl->tagset); |
3065 | if (IS_ERR(ns->queue)) | 3065 | if (IS_ERR(ns->queue)) |
3066 | goto out_free_ns; | 3066 | goto out_free_ns; |
3067 | |||
3067 | blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue); | 3068 | blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue); |
3069 | if (ctrl->ops->flags & NVME_F_PCI_P2PDMA) | ||
3070 | blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue); | ||
3071 | |||
3068 | ns->queue->queuedata = ns; | 3072 | ns->queue->queuedata = ns; |
3069 | ns->ctrl = ctrl; | 3073 | ns->ctrl = ctrl; |
3070 | 3074 | ||
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9fefba039d1e..cee79cb388af 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -343,6 +343,7 @@ struct nvme_ctrl_ops { | |||
343 | unsigned int flags; | 343 | unsigned int flags; |
344 | #define NVME_F_FABRICS (1 << 0) | 344 | #define NVME_F_FABRICS (1 << 0) |
345 | #define NVME_F_METADATA_SUPPORTED (1 << 1) | 345 | #define NVME_F_METADATA_SUPPORTED (1 << 1) |
346 | #define NVME_F_PCI_P2PDMA (1 << 2) | ||
346 | int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val); | 347 | int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val); |
347 | int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val); | 348 | int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val); |
348 | int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val); | 349 | int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val); |
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4e023cd007e1..f30031945ee4 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -30,6 +30,7 @@ | |||
30 | #include <linux/types.h> | 30 | #include <linux/types.h> |
31 | #include <linux/io-64-nonatomic-lo-hi.h> | 31 | #include <linux/io-64-nonatomic-lo-hi.h> |
32 | #include <linux/sed-opal.h> | 32 | #include <linux/sed-opal.h> |
33 | #include <linux/pci-p2pdma.h> | ||
33 | 34 | ||
34 | #include "nvme.h" | 35 | #include "nvme.h" |
35 | 36 | ||
@@ -99,9 +100,8 @@ struct nvme_dev { | |||
99 | struct work_struct remove_work; | 100 | struct work_struct remove_work; |
100 | struct mutex shutdown_lock; | 101 | struct mutex shutdown_lock; |
101 | bool subsystem; | 102 | bool subsystem; |
102 | void __iomem *cmb; | ||
103 | pci_bus_addr_t cmb_bus_addr; | ||
104 | u64 cmb_size; | 103 | u64 cmb_size; |
104 | bool cmb_use_sqes; | ||
105 | u32 cmbsz; | 105 | u32 cmbsz; |
106 | u32 cmbloc; | 106 | u32 cmbloc; |
107 | struct nvme_ctrl ctrl; | 107 | struct nvme_ctrl ctrl; |
@@ -158,7 +158,7 @@ struct nvme_queue { | |||
158 | struct nvme_dev *dev; | 158 | struct nvme_dev *dev; |
159 | spinlock_t sq_lock; | 159 | spinlock_t sq_lock; |
160 | struct nvme_command *sq_cmds; | 160 | struct nvme_command *sq_cmds; |
161 | struct nvme_command __iomem *sq_cmds_io; | 161 | bool sq_cmds_is_io; |
162 | spinlock_t cq_lock ____cacheline_aligned_in_smp; | 162 | spinlock_t cq_lock ____cacheline_aligned_in_smp; |
163 | volatile struct nvme_completion *cqes; | 163 | volatile struct nvme_completion *cqes; |
164 | struct blk_mq_tags **tags; | 164 | struct blk_mq_tags **tags; |
@@ -447,11 +447,8 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set) | |||
447 | static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd) | 447 | static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd) |
448 | { | 448 | { |
449 | spin_lock(&nvmeq->sq_lock); | 449 | spin_lock(&nvmeq->sq_lock); |
450 | if (nvmeq->sq_cmds_io) | 450 | |
451 | memcpy_toio(&nvmeq->sq_cmds_io[nvmeq->sq_tail], cmd, | 451 | memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd)); |
452 | sizeof(*cmd)); | ||
453 | else | ||
454 | memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd)); | ||
455 | 452 | ||
456 | if (++nvmeq->sq_tail == nvmeq->q_depth) | 453 | if (++nvmeq->sq_tail == nvmeq->q_depth) |
457 | nvmeq->sq_tail = 0; | 454 | nvmeq->sq_tail = 0; |
@@ -748,8 +745,13 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, | |||
748 | goto out; | 745 | goto out; |
749 | 746 | ||
750 | ret = BLK_STS_RESOURCE; | 747 | ret = BLK_STS_RESOURCE; |
751 | nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, dma_dir, | 748 | |
752 | DMA_ATTR_NO_WARN); | 749 | if (is_pci_p2pdma_page(sg_page(iod->sg))) |
750 | nr_mapped = pci_p2pdma_map_sg(dev->dev, iod->sg, iod->nents, | ||
751 | dma_dir); | ||
752 | else | ||
753 | nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, | ||
754 | dma_dir, DMA_ATTR_NO_WARN); | ||
753 | if (!nr_mapped) | 755 | if (!nr_mapped) |
754 | goto out; | 756 | goto out; |
755 | 757 | ||
@@ -791,7 +793,10 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) | |||
791 | DMA_TO_DEVICE : DMA_FROM_DEVICE; | 793 | DMA_TO_DEVICE : DMA_FROM_DEVICE; |
792 | 794 | ||
793 | if (iod->nents) { | 795 | if (iod->nents) { |
794 | dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir); | 796 | /* P2PDMA requests do not need to be unmapped */ |
797 | if (!is_pci_p2pdma_page(sg_page(iod->sg))) | ||
798 | dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir); | ||
799 | |||
795 | if (blk_integrity_rq(req)) | 800 | if (blk_integrity_rq(req)) |
796 | dma_unmap_sg(dev->dev, &iod->meta_sg, 1, dma_dir); | 801 | dma_unmap_sg(dev->dev, &iod->meta_sg, 1, dma_dir); |
797 | } | 802 | } |
@@ -1232,9 +1237,18 @@ static void nvme_free_queue(struct nvme_queue *nvmeq) | |||
1232 | { | 1237 | { |
1233 | dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth), | 1238 | dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth), |
1234 | (void *)nvmeq->cqes, nvmeq->cq_dma_addr); | 1239 | (void *)nvmeq->cqes, nvmeq->cq_dma_addr); |
1235 | if (nvmeq->sq_cmds) | 1240 | |
1236 | dma_free_coherent(nvmeq->q_dmadev, SQ_SIZE(nvmeq->q_depth), | 1241 | if (nvmeq->sq_cmds) { |
1237 | nvmeq->sq_cmds, nvmeq->sq_dma_addr); | 1242 | if (nvmeq->sq_cmds_is_io) |
1243 | pci_free_p2pmem(to_pci_dev(nvmeq->q_dmadev), | ||
1244 | nvmeq->sq_cmds, | ||
1245 | SQ_SIZE(nvmeq->q_depth)); | ||
1246 | else | ||
1247 | dma_free_coherent(nvmeq->q_dmadev, | ||
1248 | SQ_SIZE(nvmeq->q_depth), | ||
1249 | nvmeq->sq_cmds, | ||
1250 | nvmeq->sq_dma_addr); | ||
1251 | } | ||
1238 | } | 1252 | } |
1239 | 1253 | ||
1240 | static void nvme_free_queues(struct nvme_dev *dev, int lowest) | 1254 | static void nvme_free_queues(struct nvme_dev *dev, int lowest) |
@@ -1323,12 +1337,21 @@ static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues, | |||
1323 | static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq, | 1337 | static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq, |
1324 | int qid, int depth) | 1338 | int qid, int depth) |
1325 | { | 1339 | { |
1326 | /* CMB SQEs will be mapped before creation */ | 1340 | struct pci_dev *pdev = to_pci_dev(dev->dev); |
1327 | if (qid && dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) | 1341 | |
1328 | return 0; | 1342 | if (qid && dev->cmb_use_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) { |
1343 | nvmeq->sq_cmds = pci_alloc_p2pmem(pdev, SQ_SIZE(depth)); | ||
1344 | nvmeq->sq_dma_addr = pci_p2pmem_virt_to_bus(pdev, | ||
1345 | nvmeq->sq_cmds); | ||
1346 | nvmeq->sq_cmds_is_io = true; | ||
1347 | } | ||
1348 | |||
1349 | if (!nvmeq->sq_cmds) { | ||
1350 | nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth), | ||
1351 | &nvmeq->sq_dma_addr, GFP_KERNEL); | ||
1352 | nvmeq->sq_cmds_is_io = false; | ||
1353 | } | ||
1329 | 1354 | ||
1330 | nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth), | ||
1331 | &nvmeq->sq_dma_addr, GFP_KERNEL); | ||
1332 | if (!nvmeq->sq_cmds) | 1355 | if (!nvmeq->sq_cmds) |
1333 | return -ENOMEM; | 1356 | return -ENOMEM; |
1334 | return 0; | 1357 | return 0; |
@@ -1405,13 +1428,6 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid) | |||
1405 | int result; | 1428 | int result; |
1406 | s16 vector; | 1429 | s16 vector; |
1407 | 1430 | ||
1408 | if (dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) { | ||
1409 | unsigned offset = (qid - 1) * roundup(SQ_SIZE(nvmeq->q_depth), | ||
1410 | dev->ctrl.page_size); | ||
1411 | nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset; | ||
1412 | nvmeq->sq_cmds_io = dev->cmb + offset; | ||
1413 | } | ||
1414 | |||
1415 | /* | 1431 | /* |
1416 | * A queue's vector matches the queue identifier unless the controller | 1432 | * A queue's vector matches the queue identifier unless the controller |
1417 | * has only one vector available. | 1433 | * has only one vector available. |
@@ -1652,9 +1668,6 @@ static void nvme_map_cmb(struct nvme_dev *dev) | |||
1652 | return; | 1668 | return; |
1653 | dev->cmbloc = readl(dev->bar + NVME_REG_CMBLOC); | 1669 | dev->cmbloc = readl(dev->bar + NVME_REG_CMBLOC); |
1654 | 1670 | ||
1655 | if (!use_cmb_sqes) | ||
1656 | return; | ||
1657 | |||
1658 | size = nvme_cmb_size_unit(dev) * nvme_cmb_size(dev); | 1671 | size = nvme_cmb_size_unit(dev) * nvme_cmb_size(dev); |
1659 | offset = nvme_cmb_size_unit(dev) * NVME_CMB_OFST(dev->cmbloc); | 1672 | offset = nvme_cmb_size_unit(dev) * NVME_CMB_OFST(dev->cmbloc); |
1660 | bar = NVME_CMB_BIR(dev->cmbloc); | 1673 | bar = NVME_CMB_BIR(dev->cmbloc); |
@@ -1671,11 +1684,18 @@ static void nvme_map_cmb(struct nvme_dev *dev) | |||
1671 | if (size > bar_size - offset) | 1684 | if (size > bar_size - offset) |
1672 | size = bar_size - offset; | 1685 | size = bar_size - offset; |
1673 | 1686 | ||
1674 | dev->cmb = ioremap_wc(pci_resource_start(pdev, bar) + offset, size); | 1687 | if (pci_p2pdma_add_resource(pdev, bar, size, offset)) { |
1675 | if (!dev->cmb) | 1688 | dev_warn(dev->ctrl.device, |
1689 | "failed to register the CMB\n"); | ||
1676 | return; | 1690 | return; |
1677 | dev->cmb_bus_addr = pci_bus_address(pdev, bar) + offset; | 1691 | } |
1692 | |||
1678 | dev->cmb_size = size; | 1693 | dev->cmb_size = size; |
1694 | dev->cmb_use_sqes = use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS); | ||
1695 | |||
1696 | if ((dev->cmbsz & (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) == | ||
1697 | (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) | ||
1698 | pci_p2pmem_publish(pdev, true); | ||
1679 | 1699 | ||
1680 | if (sysfs_add_file_to_group(&dev->ctrl.device->kobj, | 1700 | if (sysfs_add_file_to_group(&dev->ctrl.device->kobj, |
1681 | &dev_attr_cmb.attr, NULL)) | 1701 | &dev_attr_cmb.attr, NULL)) |
@@ -1685,12 +1705,10 @@ static void nvme_map_cmb(struct nvme_dev *dev) | |||
1685 | 1705 | ||
1686 | static inline void nvme_release_cmb(struct nvme_dev *dev) | 1706 | static inline void nvme_release_cmb(struct nvme_dev *dev) |
1687 | { | 1707 | { |
1688 | if (dev->cmb) { | 1708 | if (dev->cmb_size) { |
1689 | iounmap(dev->cmb); | ||
1690 | dev->cmb = NULL; | ||
1691 | sysfs_remove_file_from_group(&dev->ctrl.device->kobj, | 1709 | sysfs_remove_file_from_group(&dev->ctrl.device->kobj, |
1692 | &dev_attr_cmb.attr, NULL); | 1710 | &dev_attr_cmb.attr, NULL); |
1693 | dev->cmbsz = 0; | 1711 | dev->cmb_size = 0; |
1694 | } | 1712 | } |
1695 | } | 1713 | } |
1696 | 1714 | ||
@@ -1889,13 +1907,13 @@ static int nvme_setup_io_queues(struct nvme_dev *dev) | |||
1889 | if (nr_io_queues == 0) | 1907 | if (nr_io_queues == 0) |
1890 | return 0; | 1908 | return 0; |
1891 | 1909 | ||
1892 | if (dev->cmb && (dev->cmbsz & NVME_CMBSZ_SQS)) { | 1910 | if (dev->cmb_use_sqes) { |
1893 | result = nvme_cmb_qdepth(dev, nr_io_queues, | 1911 | result = nvme_cmb_qdepth(dev, nr_io_queues, |
1894 | sizeof(struct nvme_command)); | 1912 | sizeof(struct nvme_command)); |
1895 | if (result > 0) | 1913 | if (result > 0) |
1896 | dev->q_depth = result; | 1914 | dev->q_depth = result; |
1897 | else | 1915 | else |
1898 | nvme_release_cmb(dev); | 1916 | dev->cmb_use_sqes = false; |
1899 | } | 1917 | } |
1900 | 1918 | ||
1901 | do { | 1919 | do { |
@@ -2390,7 +2408,8 @@ static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size) | |||
2390 | static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { | 2408 | static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { |
2391 | .name = "pcie", | 2409 | .name = "pcie", |
2392 | .module = THIS_MODULE, | 2410 | .module = THIS_MODULE, |
2393 | .flags = NVME_F_METADATA_SUPPORTED, | 2411 | .flags = NVME_F_METADATA_SUPPORTED | |
2412 | NVME_F_PCI_P2PDMA, | ||
2394 | .reg_read32 = nvme_pci_reg_read32, | 2413 | .reg_read32 = nvme_pci_reg_read32, |
2395 | .reg_write32 = nvme_pci_reg_write32, | 2414 | .reg_write32 = nvme_pci_reg_write32, |
2396 | .reg_read64 = nvme_pci_reg_read64, | 2415 | .reg_read64 = nvme_pci_reg_read64, |
@@ -2648,7 +2667,6 @@ static void nvme_error_resume(struct pci_dev *pdev) | |||
2648 | struct nvme_dev *dev = pci_get_drvdata(pdev); | 2667 | struct nvme_dev *dev = pci_get_drvdata(pdev); |
2649 | 2668 | ||
2650 | flush_work(&dev->ctrl.reset_work); | 2669 | flush_work(&dev->ctrl.reset_work); |
2651 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
2652 | } | 2670 | } |
2653 | 2671 | ||
2654 | static const struct pci_error_handlers nvme_err_handler = { | 2672 | static const struct pci_error_handlers nvme_err_handler = { |
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c index b37a8e3e3f80..d895579b6c5d 100644 --- a/drivers/nvme/target/configfs.c +++ b/drivers/nvme/target/configfs.c | |||
@@ -17,6 +17,8 @@ | |||
17 | #include <linux/slab.h> | 17 | #include <linux/slab.h> |
18 | #include <linux/stat.h> | 18 | #include <linux/stat.h> |
19 | #include <linux/ctype.h> | 19 | #include <linux/ctype.h> |
20 | #include <linux/pci.h> | ||
21 | #include <linux/pci-p2pdma.h> | ||
20 | 22 | ||
21 | #include "nvmet.h" | 23 | #include "nvmet.h" |
22 | 24 | ||
@@ -340,6 +342,48 @@ out_unlock: | |||
340 | 342 | ||
341 | CONFIGFS_ATTR(nvmet_ns_, device_path); | 343 | CONFIGFS_ATTR(nvmet_ns_, device_path); |
342 | 344 | ||
345 | #ifdef CONFIG_PCI_P2PDMA | ||
346 | static ssize_t nvmet_ns_p2pmem_show(struct config_item *item, char *page) | ||
347 | { | ||
348 | struct nvmet_ns *ns = to_nvmet_ns(item); | ||
349 | |||
350 | return pci_p2pdma_enable_show(page, ns->p2p_dev, ns->use_p2pmem); | ||
351 | } | ||
352 | |||
353 | static ssize_t nvmet_ns_p2pmem_store(struct config_item *item, | ||
354 | const char *page, size_t count) | ||
355 | { | ||
356 | struct nvmet_ns *ns = to_nvmet_ns(item); | ||
357 | struct pci_dev *p2p_dev = NULL; | ||
358 | bool use_p2pmem; | ||
359 | int ret = count; | ||
360 | int error; | ||
361 | |||
362 | mutex_lock(&ns->subsys->lock); | ||
363 | if (ns->enabled) { | ||
364 | ret = -EBUSY; | ||
365 | goto out_unlock; | ||
366 | } | ||
367 | |||
368 | error = pci_p2pdma_enable_store(page, &p2p_dev, &use_p2pmem); | ||
369 | if (error) { | ||
370 | ret = error; | ||
371 | goto out_unlock; | ||
372 | } | ||
373 | |||
374 | ns->use_p2pmem = use_p2pmem; | ||
375 | pci_dev_put(ns->p2p_dev); | ||
376 | ns->p2p_dev = p2p_dev; | ||
377 | |||
378 | out_unlock: | ||
379 | mutex_unlock(&ns->subsys->lock); | ||
380 | |||
381 | return ret; | ||
382 | } | ||
383 | |||
384 | CONFIGFS_ATTR(nvmet_ns_, p2pmem); | ||
385 | #endif /* CONFIG_PCI_P2PDMA */ | ||
386 | |||
343 | static ssize_t nvmet_ns_device_uuid_show(struct config_item *item, char *page) | 387 | static ssize_t nvmet_ns_device_uuid_show(struct config_item *item, char *page) |
344 | { | 388 | { |
345 | return sprintf(page, "%pUb\n", &to_nvmet_ns(item)->uuid); | 389 | return sprintf(page, "%pUb\n", &to_nvmet_ns(item)->uuid); |
@@ -509,6 +553,9 @@ static struct configfs_attribute *nvmet_ns_attrs[] = { | |||
509 | &nvmet_ns_attr_ana_grpid, | 553 | &nvmet_ns_attr_ana_grpid, |
510 | &nvmet_ns_attr_enable, | 554 | &nvmet_ns_attr_enable, |
511 | &nvmet_ns_attr_buffered_io, | 555 | &nvmet_ns_attr_buffered_io, |
556 | #ifdef CONFIG_PCI_P2PDMA | ||
557 | &nvmet_ns_attr_p2pmem, | ||
558 | #endif | ||
512 | NULL, | 559 | NULL, |
513 | }; | 560 | }; |
514 | 561 | ||
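The new `p2pmem` configfs attribute added above accepts either a boolean (enable with automatic provider selection, or disable) or a specific PCI device address that pins the provider. A hedged model of the parsing that `pci_p2pdma_enable_store()` performs — the enum names and the strict `dddd:bb:dd.f` check are illustrative, not the kernel's exact implementation:

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

enum p2p_mode { P2P_OFF, P2P_AUTO, P2P_PINNED, P2P_INVALID };

static bool parse_bool(const char *s, bool *val)
{
    if (!strcmp(s, "0") || !strcmp(s, "n") || !strcmp(s, "no"))  { *val = false; return true; }
    if (!strcmp(s, "1") || !strcmp(s, "y") || !strcmp(s, "yes")) { *val = true;  return true; }
    return false;
}

/* Accept only the canonical domain:bus:dev.fn form, e.g. 0000:01:00.1 */
static bool looks_like_bdf(const char *s)
{
    if (strlen(s) != 12 || s[4] != ':' || s[7] != ':' || s[10] != '.')
        return false;
    for (int i = 0; i < 12; i++) {
        if (i == 4 || i == 7 || i == 10)
            continue;
        if (!isxdigit((unsigned char)s[i]))
            return false;
    }
    return true;
}

static enum p2p_mode parse_p2pmem(const char *page)
{
    bool b;

    if (parse_bool(page, &b))
        return b ? P2P_AUTO : P2P_OFF;   /* provider chosen later */
    if (looks_like_bdf(page))
        return P2P_PINNED;               /* would look up the pci_dev */
    return P2P_INVALID;                  /* real code returns -EINVAL */
}
```

In the real store routine the pinned case also takes a `pci_dev` reference, which is why the hunk drops the old reference with `pci_dev_put(ns->p2p_dev)` before assigning the new one.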
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c index 0acdff9e6842..f4efe289dc7b 100644 --- a/drivers/nvme/target/core.c +++ b/drivers/nvme/target/core.c | |||
@@ -15,6 +15,7 @@ | |||
15 | #include <linux/module.h> | 15 | #include <linux/module.h> |
16 | #include <linux/random.h> | 16 | #include <linux/random.h> |
17 | #include <linux/rculist.h> | 17 | #include <linux/rculist.h> |
18 | #include <linux/pci-p2pdma.h> | ||
18 | 19 | ||
19 | #include "nvmet.h" | 20 | #include "nvmet.h" |
20 | 21 | ||
@@ -365,9 +366,93 @@ static void nvmet_ns_dev_disable(struct nvmet_ns *ns) | |||
365 | nvmet_file_ns_disable(ns); | 366 | nvmet_file_ns_disable(ns); |
366 | } | 367 | } |
367 | 368 | ||
369 | static int nvmet_p2pmem_ns_enable(struct nvmet_ns *ns) | ||
370 | { | ||
371 | int ret; | ||
372 | struct pci_dev *p2p_dev; | ||
373 | |||
374 | if (!ns->use_p2pmem) | ||
375 | return 0; | ||
376 | |||
377 | if (!ns->bdev) { | ||
378 | pr_err("peer-to-peer DMA is not supported by non-block device namespaces\n"); | ||
379 | return -EINVAL; | ||
380 | } | ||
381 | |||
382 | if (!blk_queue_pci_p2pdma(ns->bdev->bd_queue)) { | ||
383 | pr_err("peer-to-peer DMA is not supported by the driver of %s\n", | ||
384 | ns->device_path); | ||
385 | return -EINVAL; | ||
386 | } | ||
387 | |||
388 | if (ns->p2p_dev) { | ||
389 | ret = pci_p2pdma_distance(ns->p2p_dev, nvmet_ns_dev(ns), true); | ||
390 | if (ret < 0) | ||
391 | return -EINVAL; | ||
392 | } else { | ||
393 | /* | ||
394 | * Right now we just check that there is p2pmem available so | ||
395 | * we can report an error to the user right away if there | ||
396 | * is not. We'll find the actual device to use once we | ||
397 | * setup the controller when the port's device is available. | ||
398 | */ | ||
399 | |||
400 | p2p_dev = pci_p2pmem_find(nvmet_ns_dev(ns)); | ||
401 | if (!p2p_dev) { | ||
402 | pr_err("no peer-to-peer memory is available for %s\n", | ||
403 | ns->device_path); | ||
404 | return -EINVAL; | ||
405 | } | ||
406 | |||
407 | pci_dev_put(p2p_dev); | ||
408 | } | ||
409 | |||
410 | return 0; | ||
411 | } | ||
412 | |||
413 | /* | ||
414 | * Note: ctrl->subsys->lock should be held when calling this function | ||
415 | */ | ||
416 | static void nvmet_p2pmem_ns_add_p2p(struct nvmet_ctrl *ctrl, | ||
417 | struct nvmet_ns *ns) | ||
418 | { | ||
419 | struct device *clients[2]; | ||
420 | struct pci_dev *p2p_dev; | ||
421 | int ret; | ||
422 | |||
423 | if (!ctrl->p2p_client) | ||
424 | return; | ||
425 | |||
426 | if (ns->p2p_dev) { | ||
427 | ret = pci_p2pdma_distance(ns->p2p_dev, ctrl->p2p_client, true); | ||
428 | if (ret < 0) | ||
429 | return; | ||
430 | |||
431 | p2p_dev = pci_dev_get(ns->p2p_dev); | ||
432 | } else { | ||
433 | clients[0] = ctrl->p2p_client; | ||
434 | clients[1] = nvmet_ns_dev(ns); | ||
435 | |||
436 | p2p_dev = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients)); | ||
437 | if (!p2p_dev) { | ||
438 | pr_err("no peer-to-peer memory is available that's supported by %s and %s\n", | ||
439 | dev_name(ctrl->p2p_client), ns->device_path); | ||
440 | return; | ||
441 | } | ||
442 | } | ||
443 | |||
444 | ret = radix_tree_insert(&ctrl->p2p_ns_map, ns->nsid, p2p_dev); | ||
445 | if (ret < 0) | ||
446 | pci_dev_put(p2p_dev); | ||
447 | |||
448 | pr_info("using p2pmem on %s for nsid %d\n", pci_name(p2p_dev), | ||
449 | ns->nsid); | ||
450 | } | ||
451 | |||
368 | int nvmet_ns_enable(struct nvmet_ns *ns) | 452 | int nvmet_ns_enable(struct nvmet_ns *ns) |
369 | { | 453 | { |
370 | struct nvmet_subsys *subsys = ns->subsys; | 454 | struct nvmet_subsys *subsys = ns->subsys; |
455 | struct nvmet_ctrl *ctrl; | ||
371 | int ret; | 456 | int ret; |
372 | 457 | ||
373 | mutex_lock(&subsys->lock); | 458 | mutex_lock(&subsys->lock); |
@@ -384,6 +469,13 @@ int nvmet_ns_enable(struct nvmet_ns *ns) | |||
384 | if (ret) | 469 | if (ret) |
385 | goto out_unlock; | 470 | goto out_unlock; |
386 | 471 | ||
472 | ret = nvmet_p2pmem_ns_enable(ns); | ||
473 | if (ret) | ||
474 | goto out_unlock; | ||
475 | |||
476 | list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) | ||
477 | nvmet_p2pmem_ns_add_p2p(ctrl, ns); | ||
478 | |||
387 | ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace, | 479 | ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace, |
388 | 0, GFP_KERNEL); | 480 | 0, GFP_KERNEL); |
389 | if (ret) | 481 | if (ret) |
@@ -418,6 +510,9 @@ out_unlock: | |||
418 | mutex_unlock(&subsys->lock); | 510 | mutex_unlock(&subsys->lock); |
419 | return ret; | 511 | return ret; |
420 | out_dev_put: | 512 | out_dev_put: |
513 | list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) | ||
514 | pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid)); | ||
515 | |||
421 | nvmet_ns_dev_disable(ns); | 516 | nvmet_ns_dev_disable(ns); |
422 | goto out_unlock; | 517 | goto out_unlock; |
423 | } | 518 | } |
@@ -425,6 +520,7 @@ out_dev_put: | |||
425 | void nvmet_ns_disable(struct nvmet_ns *ns) | 520 | void nvmet_ns_disable(struct nvmet_ns *ns) |
426 | { | 521 | { |
427 | struct nvmet_subsys *subsys = ns->subsys; | 522 | struct nvmet_subsys *subsys = ns->subsys; |
523 | struct nvmet_ctrl *ctrl; | ||
428 | 524 | ||
429 | mutex_lock(&subsys->lock); | 525 | mutex_lock(&subsys->lock); |
430 | if (!ns->enabled) | 526 | if (!ns->enabled) |
@@ -434,6 +530,10 @@ void nvmet_ns_disable(struct nvmet_ns *ns) | |||
434 | list_del_rcu(&ns->dev_link); | 530 | list_del_rcu(&ns->dev_link); |
435 | if (ns->nsid == subsys->max_nsid) | 531 | if (ns->nsid == subsys->max_nsid) |
436 | subsys->max_nsid = nvmet_max_nsid(subsys); | 532 | subsys->max_nsid = nvmet_max_nsid(subsys); |
533 | |||
534 | list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) | ||
535 | pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid)); | ||
536 | |||
437 | mutex_unlock(&subsys->lock); | 537 | mutex_unlock(&subsys->lock); |
438 | 538 | ||
439 | /* | 539 | /* |
@@ -450,6 +550,7 @@ void nvmet_ns_disable(struct nvmet_ns *ns) | |||
450 | percpu_ref_exit(&ns->ref); | 550 | percpu_ref_exit(&ns->ref); |
451 | 551 | ||
452 | mutex_lock(&subsys->lock); | 552 | mutex_lock(&subsys->lock); |
553 | |||
453 | subsys->nr_namespaces--; | 554 | subsys->nr_namespaces--; |
454 | nvmet_ns_changed(subsys, ns->nsid); | 555 | nvmet_ns_changed(subsys, ns->nsid); |
455 | nvmet_ns_dev_disable(ns); | 556 | nvmet_ns_dev_disable(ns); |
@@ -725,6 +826,51 @@ void nvmet_req_execute(struct nvmet_req *req) | |||
725 | } | 826 | } |
726 | EXPORT_SYMBOL_GPL(nvmet_req_execute); | 827 | EXPORT_SYMBOL_GPL(nvmet_req_execute); |
727 | 828 | ||
829 | int nvmet_req_alloc_sgl(struct nvmet_req *req) | ||
830 | { | ||
831 | struct pci_dev *p2p_dev = NULL; | ||
832 | |||
833 | if (IS_ENABLED(CONFIG_PCI_P2PDMA)) { | ||
834 | if (req->sq->ctrl && req->ns) | ||
835 | p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map, | ||
836 | req->ns->nsid); | ||
837 | |||
838 | req->p2p_dev = NULL; | ||
839 | if (req->sq->qid && p2p_dev) { | ||
840 | req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt, | ||
841 | req->transfer_len); | ||
842 | if (req->sg) { | ||
843 | req->p2p_dev = p2p_dev; | ||
844 | return 0; | ||
845 | } | ||
846 | } | ||
847 | |||
848 | /* | ||
849 | * If no P2P memory was available we fallback to using | ||
850 | * regular memory | ||
851 | */ | ||
852 | } | ||
853 | |||
854 | req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt); | ||
855 | if (!req->sg) | ||
856 | return -ENOMEM; | ||
857 | |||
858 | return 0; | ||
859 | } | ||
860 | EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl); | ||
861 | |||
862 | void nvmet_req_free_sgl(struct nvmet_req *req) | ||
863 | { | ||
864 | if (req->p2p_dev) | ||
865 | pci_p2pmem_free_sgl(req->p2p_dev, req->sg); | ||
866 | else | ||
867 | sgl_free(req->sg); | ||
868 | |||
869 | req->sg = NULL; | ||
870 | req->sg_cnt = 0; | ||
871 | } | ||
872 | EXPORT_SYMBOL_GPL(nvmet_req_free_sgl); | ||
873 | |||
728 | static inline bool nvmet_cc_en(u32 cc) | 874 | static inline bool nvmet_cc_en(u32 cc) |
729 | { | 875 | { |
730 | return (cc >> NVME_CC_EN_SHIFT) & 0x1; | 876 | return (cc >> NVME_CC_EN_SHIFT) & 0x1; |
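The `nvmet_req_alloc_sgl()`/`nvmet_req_free_sgl()` pair added above follows a common pattern: try the preferred (P2P) allocator first, fall back to regular memory, and record which allocator succeeded so the free path can return the buffer to the right pool. A self-contained sketch of that pattern — `p2p_alloc()` here is a stand-in that can be made to fail, not a real kernel API:

```c
#include <assert.h>
#include <stdlib.h>

struct buf {
    void *mem;
    int from_p2p;       /* plays the role of req->p2p_dev being set */
};

/* Model allocator: succeeds only when a P2P provider is "available". */
static void *p2p_alloc(size_t len, int p2p_available)
{
    return p2p_available ? malloc(len) : NULL;
}

static int buf_alloc(struct buf *b, size_t len, int p2p_available)
{
    b->from_p2p = 0;
    b->mem = p2p_alloc(len, p2p_available);
    if (b->mem) {
        b->from_p2p = 1;
        return 0;
    }
    b->mem = malloc(len);        /* regular-memory fallback */
    return b->mem ? 0 : -1;      /* -ENOMEM in the kernel code */
}

static void buf_free(struct buf *b)
{
    /* mirrors nvmet_req_free_sgl(): route to the owning allocator
     * (both pools are plain malloc in this model) */
    free(b->mem);
    b->mem = NULL;
    b->from_p2p = 0;
}
```

Note that in the kernel version the P2P path is only attempted for I/O queues (`req->sq->qid` non-zero), since admin commands must not touch P2P memory.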
@@ -921,6 +1067,37 @@ bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys, | |||
921 | return __nvmet_host_allowed(subsys, hostnqn); | 1067 | return __nvmet_host_allowed(subsys, hostnqn); |
922 | } | 1068 | } |
923 | 1069 | ||
1070 | /* | ||
1071 | * Note: ctrl->subsys->lock should be held when calling this function | ||
1072 | */ | ||
1073 | static void nvmet_setup_p2p_ns_map(struct nvmet_ctrl *ctrl, | ||
1074 | struct nvmet_req *req) | ||
1075 | { | ||
1076 | struct nvmet_ns *ns; | ||
1077 | |||
1078 | if (!req->p2p_client) | ||
1079 | return; | ||
1080 | |||
1081 | ctrl->p2p_client = get_device(req->p2p_client); | ||
1082 | |||
1083 | list_for_each_entry_rcu(ns, &ctrl->subsys->namespaces, dev_link) | ||
1084 | nvmet_p2pmem_ns_add_p2p(ctrl, ns); | ||
1085 | } | ||
1086 | |||
1087 | /* | ||
1088 | * Note: ctrl->subsys->lock should be held when calling this function | ||
1089 | */ | ||
1090 | static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl) | ||
1091 | { | ||
1092 | struct radix_tree_iter iter; | ||
1093 | void __rcu **slot; | ||
1094 | |||
1095 | radix_tree_for_each_slot(slot, &ctrl->p2p_ns_map, &iter, 0) | ||
1096 | pci_dev_put(radix_tree_deref_slot(slot)); | ||
1097 | |||
1098 | put_device(ctrl->p2p_client); | ||
1099 | } | ||
1100 | |||
924 | u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, | 1101 | u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, |
925 | struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp) | 1102 | struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp) |
926 | { | 1103 | { |
@@ -962,6 +1139,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, | |||
962 | 1139 | ||
963 | INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work); | 1140 | INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work); |
964 | INIT_LIST_HEAD(&ctrl->async_events); | 1141 | INIT_LIST_HEAD(&ctrl->async_events); |
1142 | INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL); | ||
965 | 1143 | ||
966 | memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE); | 1144 | memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE); |
967 | memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE); | 1145 | memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE); |
@@ -1026,6 +1204,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, | |||
1026 | 1204 | ||
1027 | mutex_lock(&subsys->lock); | 1205 | mutex_lock(&subsys->lock); |
1028 | list_add_tail(&ctrl->subsys_entry, &subsys->ctrls); | 1206 | list_add_tail(&ctrl->subsys_entry, &subsys->ctrls); |
1207 | nvmet_setup_p2p_ns_map(ctrl, req); | ||
1029 | mutex_unlock(&subsys->lock); | 1208 | mutex_unlock(&subsys->lock); |
1030 | 1209 | ||
1031 | *ctrlp = ctrl; | 1210 | *ctrlp = ctrl; |
@@ -1053,6 +1232,7 @@ static void nvmet_ctrl_free(struct kref *ref) | |||
1053 | struct nvmet_subsys *subsys = ctrl->subsys; | 1232 | struct nvmet_subsys *subsys = ctrl->subsys; |
1054 | 1233 | ||
1055 | mutex_lock(&subsys->lock); | 1234 | mutex_lock(&subsys->lock); |
1235 | nvmet_release_p2p_ns_map(ctrl); | ||
1056 | list_del(&ctrl->subsys_entry); | 1236 | list_del(&ctrl->subsys_entry); |
1057 | mutex_unlock(&subsys->lock); | 1237 | mutex_unlock(&subsys->lock); |
1058 | 1238 | ||
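The lifetime rule running through the core.c hunks above is that `ctrl->p2p_ns_map` owns a reference on every device it stores: insertion takes a reference (`pci_dev_get`), and deletion drops it (`pci_dev_put(radix_tree_delete(...))`). A tiny fixed-size table stands in for the radix tree in this sketch; all names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

#define MAP_SLOTS 8

struct dev { int refs; };
struct ns_map { struct dev *slot[MAP_SLOTS]; };

static struct dev *dev_get(struct dev *d) { if (d) d->refs++; return d; }
static void dev_put(struct dev *d)        { if (d) d->refs--; }

static int map_insert(struct ns_map *m, int nsid, struct dev *d)
{
    if (nsid < 0 || nsid >= MAP_SLOTS || m->slot[nsid])
        return -1;               /* like radix_tree_insert's -EEXIST */
    m->slot[nsid] = dev_get(d);  /* map holds its own reference */
    return 0;
}

static void map_delete(struct ns_map *m, int nsid)
{
    /* mirrors pci_dev_put(radix_tree_delete(...)): dropping a
     * missing entry is a harmless no-op, as both helpers accept NULL */
    if (nsid >= 0 && nsid < MAP_SLOTS) {
        dev_put(m->slot[nsid]);
        m->slot[nsid] = NULL;
    }
}
```

That NULL-tolerance is what lets `nvmet_ns_disable()` and the `out_dev_put` error path delete the nsid from every controller unconditionally, whether or not that controller ever mapped the namespace.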
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c index f93fb5711142..c1ec3475a140 100644 --- a/drivers/nvme/target/io-cmd-bdev.c +++ b/drivers/nvme/target/io-cmd-bdev.c | |||
@@ -78,6 +78,9 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req) | |||
78 | op = REQ_OP_READ; | 78 | op = REQ_OP_READ; |
79 | } | 79 | } |
80 | 80 | ||
81 | if (is_pci_p2pdma_page(sg_page(req->sg))) | ||
82 | op_flags |= REQ_NOMERGE; | ||
83 | |||
81 | sector = le64_to_cpu(req->cmd->rw.slba); | 84 | sector = le64_to_cpu(req->cmd->rw.slba); |
82 | sector <<= (req->ns->blksize_shift - 9); | 85 | sector <<= (req->ns->blksize_shift - 9); |
83 | 86 | ||
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h index 08f7b57a1203..c2b4d9ee6391 100644 --- a/drivers/nvme/target/nvmet.h +++ b/drivers/nvme/target/nvmet.h | |||
@@ -26,6 +26,7 @@ | |||
26 | #include <linux/configfs.h> | 26 | #include <linux/configfs.h> |
27 | #include <linux/rcupdate.h> | 27 | #include <linux/rcupdate.h> |
28 | #include <linux/blkdev.h> | 28 | #include <linux/blkdev.h> |
29 | #include <linux/radix-tree.h> | ||
29 | 30 | ||
30 | #define NVMET_ASYNC_EVENTS 4 | 31 | #define NVMET_ASYNC_EVENTS 4 |
31 | #define NVMET_ERROR_LOG_SLOTS 128 | 32 | #define NVMET_ERROR_LOG_SLOTS 128 |
@@ -77,6 +78,9 @@ struct nvmet_ns { | |||
77 | struct completion disable_done; | 78 | struct completion disable_done; |
78 | mempool_t *bvec_pool; | 79 | mempool_t *bvec_pool; |
79 | struct kmem_cache *bvec_cache; | 80 | struct kmem_cache *bvec_cache; |
81 | |||
82 | int use_p2pmem; | ||
83 | struct pci_dev *p2p_dev; | ||
80 | }; | 84 | }; |
81 | 85 | ||
82 | static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item) | 86 | static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item) |
@@ -84,6 +88,11 @@ static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item) | |||
84 | return container_of(to_config_group(item), struct nvmet_ns, group); | 88 | return container_of(to_config_group(item), struct nvmet_ns, group); |
85 | } | 89 | } |
86 | 90 | ||
91 | static inline struct device *nvmet_ns_dev(struct nvmet_ns *ns) | ||
92 | { | ||
93 | return ns->bdev ? disk_to_dev(ns->bdev->bd_disk) : NULL; | ||
94 | } | ||
95 | |||
87 | struct nvmet_cq { | 96 | struct nvmet_cq { |
88 | u16 qid; | 97 | u16 qid; |
89 | u16 size; | 98 | u16 size; |
@@ -184,6 +193,9 @@ struct nvmet_ctrl { | |||
184 | 193 | ||
185 | char subsysnqn[NVMF_NQN_FIELD_LEN]; | 194 | char subsysnqn[NVMF_NQN_FIELD_LEN]; |
186 | char hostnqn[NVMF_NQN_FIELD_LEN]; | 195 | char hostnqn[NVMF_NQN_FIELD_LEN]; |
196 | |||
197 | struct device *p2p_client; | ||
198 | struct radix_tree_root p2p_ns_map; | ||
187 | }; | 199 | }; |
188 | 200 | ||
189 | struct nvmet_subsys { | 201 | struct nvmet_subsys { |
@@ -295,6 +307,9 @@ struct nvmet_req { | |||
295 | 307 | ||
296 | void (*execute)(struct nvmet_req *req); | 308 | void (*execute)(struct nvmet_req *req); |
297 | const struct nvmet_fabrics_ops *ops; | 309 | const struct nvmet_fabrics_ops *ops; |
310 | |||
311 | struct pci_dev *p2p_dev; | ||
312 | struct device *p2p_client; | ||
298 | }; | 313 | }; |
299 | 314 | ||
300 | extern struct workqueue_struct *buffered_io_wq; | 315 | extern struct workqueue_struct *buffered_io_wq; |
@@ -337,6 +352,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq, | |||
337 | void nvmet_req_uninit(struct nvmet_req *req); | 352 | void nvmet_req_uninit(struct nvmet_req *req); |
338 | void nvmet_req_execute(struct nvmet_req *req); | 353 | void nvmet_req_execute(struct nvmet_req *req); |
339 | void nvmet_req_complete(struct nvmet_req *req, u16 status); | 354 | void nvmet_req_complete(struct nvmet_req *req, u16 status); |
355 | int nvmet_req_alloc_sgl(struct nvmet_req *req); | ||
356 | void nvmet_req_free_sgl(struct nvmet_req *req); | ||
340 | 357 | ||
341 | void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid, | 358 | void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid, |
342 | u16 size); | 359 | u16 size); |
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c index bd265aceb90c..ddce100be57a 100644 --- a/drivers/nvme/target/rdma.c +++ b/drivers/nvme/target/rdma.c | |||
@@ -504,7 +504,7 @@ static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp) | |||
504 | } | 504 | } |
505 | 505 | ||
506 | if (rsp->req.sg != rsp->cmd->inline_sg) | 506 | if (rsp->req.sg != rsp->cmd->inline_sg) |
507 | sgl_free(rsp->req.sg); | 507 | nvmet_req_free_sgl(&rsp->req); |
508 | 508 | ||
509 | if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list))) | 509 | if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list))) |
510 | nvmet_rdma_process_wr_wait_list(queue); | 510 | nvmet_rdma_process_wr_wait_list(queue); |
@@ -653,24 +653,24 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp, | |||
653 | { | 653 | { |
654 | struct rdma_cm_id *cm_id = rsp->queue->cm_id; | 654 | struct rdma_cm_id *cm_id = rsp->queue->cm_id; |
655 | u64 addr = le64_to_cpu(sgl->addr); | 655 | u64 addr = le64_to_cpu(sgl->addr); |
656 | u32 len = get_unaligned_le24(sgl->length); | ||
657 | u32 key = get_unaligned_le32(sgl->key); | 656 | u32 key = get_unaligned_le32(sgl->key); |
658 | int ret; | 657 | int ret; |
659 | 658 | ||
659 | rsp->req.transfer_len = get_unaligned_le24(sgl->length); | ||
660 | |||
660 | /* no data command? */ | 661 | /* no data command? */ |
661 | if (!len) | 662 | if (!rsp->req.transfer_len) |
662 | return 0; | 663 | return 0; |
663 | 664 | ||
664 | rsp->req.sg = sgl_alloc(len, GFP_KERNEL, &rsp->req.sg_cnt); | 665 | ret = nvmet_req_alloc_sgl(&rsp->req); |
665 | if (!rsp->req.sg) | 666 | if (ret < 0) |
666 | return NVME_SC_INTERNAL; | 667 | goto error_out; |
667 | 668 | ||
668 | ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num, | 669 | ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num, |
669 | rsp->req.sg, rsp->req.sg_cnt, 0, addr, key, | 670 | rsp->req.sg, rsp->req.sg_cnt, 0, addr, key, |
670 | nvmet_data_dir(&rsp->req)); | 671 | nvmet_data_dir(&rsp->req)); |
671 | if (ret < 0) | 672 | if (ret < 0) |
672 | return NVME_SC_INTERNAL; | 673 | goto error_out; |
673 | rsp->req.transfer_len += len; | ||
674 | rsp->n_rdma += ret; | 674 | rsp->n_rdma += ret; |
675 | 675 | ||
676 | if (invalidate) { | 676 | if (invalidate) { |
@@ -679,6 +679,10 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp, | |||
679 | } | 679 | } |
680 | 680 | ||
681 | return 0; | 681 | return 0; |
682 | |||
683 | error_out: | ||
684 | rsp->req.transfer_len = 0; | ||
685 | return NVME_SC_INTERNAL; | ||
682 | } | 686 | } |
683 | 687 | ||
684 | static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp) | 688 | static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp) |
@@ -746,6 +750,8 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue, | |||
746 | cmd->send_sge.addr, cmd->send_sge.length, | 750 | cmd->send_sge.addr, cmd->send_sge.length, |
747 | DMA_TO_DEVICE); | 751 | DMA_TO_DEVICE); |
748 | 752 | ||
753 | cmd->req.p2p_client = &queue->dev->device->dev; | ||
754 | |||
749 | if (!nvmet_req_init(&cmd->req, &queue->nvme_cq, | 755 | if (!nvmet_req_init(&cmd->req, &queue->nvme_cq, |
750 | &queue->nvme_sq, &nvmet_rdma_ops)) | 756 | &queue->nvme_sq, &nvmet_rdma_ops)) |
751 | return; | 757 | return; |
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index 56ff8f6d31fc..2dcc30429e8b 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig | |||
@@ -98,6 +98,9 @@ config PCI_ECAM | |||
98 | config PCI_LOCKLESS_CONFIG | 98 | config PCI_LOCKLESS_CONFIG |
99 | bool | 99 | bool |
100 | 100 | ||
101 | config PCI_BRIDGE_EMUL | ||
102 | bool | ||
103 | |||
101 | config PCI_IOV | 104 | config PCI_IOV |
102 | bool "PCI IOV support" | 105 | bool "PCI IOV support" |
103 | depends on PCI | 106 | depends on PCI |
@@ -132,6 +135,23 @@ config PCI_PASID | |||
132 | 135 | ||
133 | If unsure, say N. | 136 | If unsure, say N. |
134 | 137 | ||
138 | config PCI_P2PDMA | ||
139 | bool "PCI peer-to-peer transfer support" | ||
140 | depends on PCI && ZONE_DEVICE | ||
141 | select GENERIC_ALLOCATOR | ||
142 | help | ||
143 | Enables drivers to do PCI peer-to-peer transactions to and from | ||
144 | BARs that are exposed in other devices that are part of | ||
145 | the hierarchy where peer-to-peer DMA is guaranteed by the PCI | ||
146 | specification to work (i.e. anything below a single PCI bridge). | ||
147 | |||
148 | Many PCIe root complexes do not support P2P transactions and | ||
149 | it's hard to tell which ones do, so at this time, | ||
150 | P2P DMA transactions must be between devices behind the same root | ||
151 | port. | ||
152 | |||
153 | If unsure, say N. | ||
154 | |||
135 | config PCI_LABEL | 155 | config PCI_LABEL |
136 | def_bool y if (DMI || ACPI) | 156 | def_bool y if (DMI || ACPI) |
137 | depends on PCI | 157 | depends on PCI |
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 1b2cfe51e8d7..f2bda77a2df1 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile | |||
@@ -19,6 +19,7 @@ obj-$(CONFIG_HOTPLUG_PCI) += hotplug/ | |||
19 | obj-$(CONFIG_PCI_MSI) += msi.o | 19 | obj-$(CONFIG_PCI_MSI) += msi.o |
20 | obj-$(CONFIG_PCI_ATS) += ats.o | 20 | obj-$(CONFIG_PCI_ATS) += ats.o |
21 | obj-$(CONFIG_PCI_IOV) += iov.o | 21 | obj-$(CONFIG_PCI_IOV) += iov.o |
22 | obj-$(CONFIG_PCI_BRIDGE_EMUL) += pci-bridge-emul.o | ||
22 | obj-$(CONFIG_ACPI) += pci-acpi.o | 23 | obj-$(CONFIG_ACPI) += pci-acpi.o |
23 | obj-$(CONFIG_PCI_LABEL) += pci-label.o | 24 | obj-$(CONFIG_PCI_LABEL) += pci-label.o |
24 | obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o | 25 | obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o |
@@ -26,6 +27,7 @@ obj-$(CONFIG_PCI_SYSCALL) += syscall.o | |||
26 | obj-$(CONFIG_PCI_STUB) += pci-stub.o | 27 | obj-$(CONFIG_PCI_STUB) += pci-stub.o |
27 | obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o | 28 | obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o |
28 | obj-$(CONFIG_PCI_ECAM) += ecam.o | 29 | obj-$(CONFIG_PCI_ECAM) += ecam.o |
30 | obj-$(CONFIG_PCI_P2PDMA) += p2pdma.o | ||
29 | obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o | 31 | obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o |
30 | 32 | ||
31 | # Endpoint library must be initialized before its users | 33 | # Endpoint library must be initialized before its users |
diff --git a/drivers/pci/access.c b/drivers/pci/access.c index a3ad2fe185b9..544922f097c0 100644 --- a/drivers/pci/access.c +++ b/drivers/pci/access.c | |||
@@ -33,7 +33,7 @@ DEFINE_RAW_SPINLOCK(pci_lock); | |||
33 | #endif | 33 | #endif |
34 | 34 | ||
35 | #define PCI_OP_READ(size, type, len) \ | 35 | #define PCI_OP_READ(size, type, len) \ |
36 | int pci_bus_read_config_##size \ | 36 | int noinline pci_bus_read_config_##size \ |
37 | (struct pci_bus *bus, unsigned int devfn, int pos, type *value) \ | 37 | (struct pci_bus *bus, unsigned int devfn, int pos, type *value) \ |
38 | { \ | 38 | { \ |
39 | int res; \ | 39 | int res; \ |
@@ -48,7 +48,7 @@ int pci_bus_read_config_##size \ | |||
48 | } | 48 | } |
49 | 49 | ||
50 | #define PCI_OP_WRITE(size, type, len) \ | 50 | #define PCI_OP_WRITE(size, type, len) \ |
51 | int pci_bus_write_config_##size \ | 51 | int noinline pci_bus_write_config_##size \ |
52 | (struct pci_bus *bus, unsigned int devfn, int pos, type value) \ | 52 | (struct pci_bus *bus, unsigned int devfn, int pos, type value) \ |
53 | { \ | 53 | { \ |
54 | int res; \ | 54 | int res; \ |
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig index 028b287466fb..6671946dbf66 100644 --- a/drivers/pci/controller/Kconfig +++ b/drivers/pci/controller/Kconfig | |||
@@ -9,12 +9,14 @@ config PCI_MVEBU | |||
9 | depends on MVEBU_MBUS | 9 | depends on MVEBU_MBUS |
10 | depends on ARM | 10 | depends on ARM |
11 | depends on OF | 11 | depends on OF |
12 | select PCI_BRIDGE_EMUL | ||
12 | 13 | ||
13 | config PCI_AARDVARK | 14 | config PCI_AARDVARK |
14 | bool "Aardvark PCIe controller" | 15 | bool "Aardvark PCIe controller" |
15 | depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST | 16 | depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST |
16 | depends on OF | 17 | depends on OF |
17 | depends on PCI_MSI_IRQ_DOMAIN | 18 | depends on PCI_MSI_IRQ_DOMAIN |
19 | select PCI_BRIDGE_EMUL | ||
18 | help | 20 | help |
19 | Add support for Aardvark 64bit PCIe Host Controller. This | 21 | Add support for Aardvark 64bit PCIe Host Controller. This |
20 | controller is part of the South Bridge of the Marvel Armada | 22 | controller is part of the South Bridge of the Marvel Armada |
@@ -231,7 +233,7 @@ config PCIE_ROCKCHIP_EP | |||
231 | available to support GEN2 with 4 slots. | 233 | available to support GEN2 with 4 slots. |
232 | 234 | ||
233 | config PCIE_MEDIATEK | 235 | config PCIE_MEDIATEK |
234 | bool "MediaTek PCIe controller" | 236 | tristate "MediaTek PCIe controller" |
235 | depends on ARCH_MEDIATEK || COMPILE_TEST | 237 | depends on ARCH_MEDIATEK || COMPILE_TEST |
236 | depends on OF | 238 | depends on OF |
237 | depends on PCI_MSI_IRQ_DOMAIN | 239 | depends on PCI_MSI_IRQ_DOMAIN |
diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile index 5d2ce72c7a52..fcf91eacfc63 100644 --- a/drivers/pci/controller/dwc/Makefile +++ b/drivers/pci/controller/dwc/Makefile | |||
@@ -7,7 +7,7 @@ obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o | |||
7 | obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o | 7 | obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o |
8 | obj-$(CONFIG_PCI_IMX6) += pci-imx6.o | 8 | obj-$(CONFIG_PCI_IMX6) += pci-imx6.o |
9 | obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o | 9 | obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o |
10 | obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o | 10 | obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o |
11 | obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o | 11 | obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o |
12 | obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o | 12 | obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o |
13 | obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o | 13 | obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o |
diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c index ce9224a36f62..a32d6dde7a57 100644 --- a/drivers/pci/controller/dwc/pci-dra7xx.c +++ b/drivers/pci/controller/dwc/pci-dra7xx.c | |||
@@ -542,7 +542,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = { | |||
542 | }; | 542 | }; |
543 | 543 | ||
544 | /* | 544 | /* |
545 | * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870 | 545 | * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870 |
546 | * @dra7xx: the dra7xx device where the workaround should be applied | 546 | * @dra7xx: the dra7xx device where the workaround should be applied |
547 | * | 547 | * |
548 | * Access to the PCIe slave port that are not 32-bit aligned will result | 548 | * Access to the PCIe slave port that are not 32-bit aligned will result |
@@ -552,7 +552,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = { | |||
552 | * | 552 | * |
553 | * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1. | 553 | * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1. |
554 | */ | 554 | */ |
555 | static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev) | 555 | static int dra7xx_pcie_unaligned_memaccess(struct device *dev) |
556 | { | 556 | { |
557 | int ret; | 557 | int ret; |
558 | struct device_node *np = dev->of_node; | 558 | struct device_node *np = dev->of_node; |
@@ -704,6 +704,11 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) | |||
704 | 704 | ||
705 | dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE, | 705 | dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE, |
706 | DEVICE_TYPE_RC); | 706 | DEVICE_TYPE_RC); |
707 | |||
708 | ret = dra7xx_pcie_unaligned_memaccess(dev); | ||
709 | if (ret) | ||
710 | dev_err(dev, "WA for Errata i870 not applied\n"); | ||
711 | |||
707 | ret = dra7xx_add_pcie_port(dra7xx, pdev); | 712 | ret = dra7xx_add_pcie_port(dra7xx, pdev); |
708 | if (ret < 0) | 713 | if (ret < 0) |
709 | goto err_gpio; | 714 | goto err_gpio; |
@@ -717,7 +722,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) | |||
717 | dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE, | 722 | dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE, |
718 | DEVICE_TYPE_EP); | 723 | DEVICE_TYPE_EP); |
719 | 724 | ||
720 | ret = dra7xx_pcie_ep_unaligned_memaccess(dev); | 725 | ret = dra7xx_pcie_unaligned_memaccess(dev); |
721 | if (ret) | 726 | if (ret) |
722 | goto err_gpio; | 727 | goto err_gpio; |
723 | 728 | ||
diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c index 4a9a673b4777..2cbef2d7c207 100644 --- a/drivers/pci/controller/dwc/pci-imx6.c +++ b/drivers/pci/controller/dwc/pci-imx6.c | |||
@@ -50,6 +50,7 @@ struct imx6_pcie { | |||
50 | struct regmap *iomuxc_gpr; | 50 | struct regmap *iomuxc_gpr; |
51 | struct reset_control *pciephy_reset; | 51 | struct reset_control *pciephy_reset; |
52 | struct reset_control *apps_reset; | 52 | struct reset_control *apps_reset; |
53 | struct reset_control *turnoff_reset; | ||
53 | enum imx6_pcie_variants variant; | 54 | enum imx6_pcie_variants variant; |
54 | u32 tx_deemph_gen1; | 55 | u32 tx_deemph_gen1; |
55 | u32 tx_deemph_gen2_3p5db; | 56 | u32 tx_deemph_gen2_3p5db; |
@@ -97,6 +98,16 @@ struct imx6_pcie { | |||
97 | #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) | 98 | #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) |
98 | 99 | ||
99 | /* PHY registers (not memory-mapped) */ | 100 | /* PHY registers (not memory-mapped) */ |
101 | #define PCIE_PHY_ATEOVRD 0x10 | ||
102 | #define PCIE_PHY_ATEOVRD_EN (0x1 << 2) | ||
103 | #define PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT 0 | ||
104 | #define PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK 0x1 | ||
105 | |||
106 | #define PCIE_PHY_MPLL_OVRD_IN_LO 0x11 | ||
107 | #define PCIE_PHY_MPLL_MULTIPLIER_SHIFT 2 | ||
108 | #define PCIE_PHY_MPLL_MULTIPLIER_MASK 0x7f | ||
109 | #define PCIE_PHY_MPLL_MULTIPLIER_OVRD (0x1 << 9) | ||
110 | |||
100 | #define PCIE_PHY_RX_ASIC_OUT 0x100D | 111 | #define PCIE_PHY_RX_ASIC_OUT 0x100D |
101 | #define PCIE_PHY_RX_ASIC_OUT_VALID (1 << 0) | 112 | #define PCIE_PHY_RX_ASIC_OUT_VALID (1 << 0) |
102 | 113 | ||
@@ -508,6 +519,50 @@ static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie) | |||
508 | IMX6Q_GPR12_DEVICE_TYPE, PCI_EXP_TYPE_ROOT_PORT << 12); | 519 | IMX6Q_GPR12_DEVICE_TYPE, PCI_EXP_TYPE_ROOT_PORT << 12); |
509 | } | 520 | } |
510 | 521 | ||
522 | static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie) | ||
523 | { | ||
524 | unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy); | ||
525 | int mult, div; | ||
526 | u32 val; | ||
527 | |||
528 | switch (phy_rate) { | ||
529 | case 125000000: | ||
530 | /* | ||
531 | * The default settings of the MPLL are for a 125MHz input | ||
532 | * clock, so no need to reconfigure anything in that case. | ||
533 | */ | ||
534 | return 0; | ||
535 | case 100000000: | ||
536 | mult = 25; | ||
537 | div = 0; | ||
538 | break; | ||
539 | case 200000000: | ||
540 | mult = 25; | ||
541 | div = 1; | ||
542 | break; | ||
543 | default: | ||
544 | dev_err(imx6_pcie->pci->dev, | ||
545 | "Unsupported PHY reference clock rate %lu\n", phy_rate); | ||
546 | return -EINVAL; | ||
547 | } | ||
548 | |||
549 | pcie_phy_read(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, &val); | ||
550 | val &= ~(PCIE_PHY_MPLL_MULTIPLIER_MASK << | ||
551 | PCIE_PHY_MPLL_MULTIPLIER_SHIFT); | ||
552 | val |= mult << PCIE_PHY_MPLL_MULTIPLIER_SHIFT; | ||
553 | val |= PCIE_PHY_MPLL_MULTIPLIER_OVRD; | ||
554 | pcie_phy_write(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, val); | ||
555 | |||
556 | pcie_phy_read(imx6_pcie, PCIE_PHY_ATEOVRD, &val); | ||
557 | val &= ~(PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK << | ||
558 | PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT); | ||
559 | val |= div << PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT; | ||
560 | val |= PCIE_PHY_ATEOVRD_EN; | ||
561 | pcie_phy_write(imx6_pcie, PCIE_PHY_ATEOVRD, val); | ||
562 | |||
563 | return 0; | ||
564 | } | ||
565 | |||
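The imx6_setup_phy_mpll() hunk above packs a multiplier and a reference-clock divider into two PHY override registers. A small sketch of that bit packing (plain Python, not driver code; the shifts and masks mirror the PCIE_PHY_MPLL_* and PCIE_PHY_ATEOVRD_* defines added in this hunk, and the helper names are hypothetical):

```python
# Field layout from the defines added above.
MPLL_MULT_SHIFT = 2
MPLL_MULT_MASK = 0x7f
MPLL_MULT_OVRD = 1 << 9   # PCIE_PHY_MPLL_MULTIPLIER_OVRD
ATEOVRD_EN = 1 << 2       # PCIE_PHY_ATEOVRD_EN
REF_CLKDIV_SHIFT = 0
REF_CLKDIV_MASK = 0x1

def pack_mpll(old, mult):
    """Clear the multiplier field, set the new value, assert the override bit."""
    val = old & ~(MPLL_MULT_MASK << MPLL_MULT_SHIFT)
    val |= (mult & MPLL_MULT_MASK) << MPLL_MULT_SHIFT
    return val | MPLL_MULT_OVRD

def pack_ateovrd(old, div):
    """Clear the ref-clock divider field, set it, enable the override."""
    val = old & ~(REF_CLKDIV_MASK << REF_CLKDIV_SHIFT)
    val |= (div & REF_CLKDIV_MASK) << REF_CLKDIV_SHIFT
    return val | ATEOVRD_EN

# 100 MHz reference clock: mult = 25, div = 0 (per the switch in the hunk).
print(hex(pack_mpll(0, 25)))    # multiplier 25 at bit 2, override at bit 9
print(hex(pack_ateovrd(0, 0)))  # divider 0, enable at bit 2
```

The read-modify-write shape matches the pcie_phy_read()/pcie_phy_write() pairs in the hunk: only the target field is cleared, everything else in the register is preserved.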
511 | static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie) | 566 | static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie) |
512 | { | 567 | { |
513 | struct dw_pcie *pci = imx6_pcie->pci; | 568 | struct dw_pcie *pci = imx6_pcie->pci; |
@@ -542,6 +597,24 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie) | |||
542 | return -EINVAL; | 597 | return -EINVAL; |
543 | } | 598 | } |
544 | 599 | ||
600 | static void imx6_pcie_ltssm_enable(struct device *dev) | ||
601 | { | ||
602 | struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); | ||
603 | |||
604 | switch (imx6_pcie->variant) { | ||
605 | case IMX6Q: | ||
606 | case IMX6SX: | ||
607 | case IMX6QP: | ||
608 | regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, | ||
609 | IMX6Q_GPR12_PCIE_CTL_2, | ||
610 | IMX6Q_GPR12_PCIE_CTL_2); | ||
611 | break; | ||
612 | case IMX7D: | ||
613 | reset_control_deassert(imx6_pcie->apps_reset); | ||
614 | break; | ||
615 | } | ||
616 | } | ||
617 | |||
545 | static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie) | 618 | static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie) |
546 | { | 619 | { |
547 | struct dw_pcie *pci = imx6_pcie->pci; | 620 | struct dw_pcie *pci = imx6_pcie->pci; |
@@ -560,11 +633,7 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie) | |||
560 | dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp); | 633 | dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp); |
561 | 634 | ||
562 | /* Start LTSSM. */ | 635 | /* Start LTSSM. */ |
563 | if (imx6_pcie->variant == IMX7D) | 636 | imx6_pcie_ltssm_enable(dev); |
564 | reset_control_deassert(imx6_pcie->apps_reset); | ||
565 | else | ||
566 | regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, | ||
567 | IMX6Q_GPR12_PCIE_CTL_2, 1 << 10); | ||
568 | 637 | ||
569 | ret = imx6_pcie_wait_for_link(imx6_pcie); | 638 | ret = imx6_pcie_wait_for_link(imx6_pcie); |
570 | if (ret) | 639 | if (ret) |
@@ -632,6 +701,7 @@ static int imx6_pcie_host_init(struct pcie_port *pp) | |||
632 | imx6_pcie_assert_core_reset(imx6_pcie); | 701 | imx6_pcie_assert_core_reset(imx6_pcie); |
633 | imx6_pcie_init_phy(imx6_pcie); | 702 | imx6_pcie_init_phy(imx6_pcie); |
634 | imx6_pcie_deassert_core_reset(imx6_pcie); | 703 | imx6_pcie_deassert_core_reset(imx6_pcie); |
704 | imx6_setup_phy_mpll(imx6_pcie); | ||
635 | dw_pcie_setup_rc(pp); | 705 | dw_pcie_setup_rc(pp); |
636 | imx6_pcie_establish_link(imx6_pcie); | 706 | imx6_pcie_establish_link(imx6_pcie); |
637 | 707 | ||
@@ -682,6 +752,94 @@ static const struct dw_pcie_ops dw_pcie_ops = { | |||
682 | .link_up = imx6_pcie_link_up, | 752 | .link_up = imx6_pcie_link_up, |
683 | }; | 753 | }; |
684 | 754 | ||
755 | #ifdef CONFIG_PM_SLEEP | ||
756 | static void imx6_pcie_ltssm_disable(struct device *dev) | ||
757 | { | ||
758 | struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); | ||
759 | |||
760 | switch (imx6_pcie->variant) { | ||
761 | case IMX6SX: | ||
762 | case IMX6QP: | ||
763 | regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, | ||
764 | IMX6Q_GPR12_PCIE_CTL_2, 0); | ||
765 | break; | ||
766 | case IMX7D: | ||
767 | reset_control_assert(imx6_pcie->apps_reset); | ||
768 | break; | ||
769 | default: | ||
770 | dev_err(dev, "ltssm_disable not supported\n"); | ||
771 | } | ||
772 | } | ||
773 | |||
774 | static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie) | ||
775 | { | ||
776 | reset_control_assert(imx6_pcie->turnoff_reset); | ||
777 | reset_control_deassert(imx6_pcie->turnoff_reset); | ||
778 | |||
779 | /* | ||
780 | * Components with an upstream port must respond to | ||
781 | * PME_Turn_Off with PME_TO_Ack but we can't check. | ||
782 | * | ||
783 | * The standard recommends a 1-10ms timeout after which to | ||
784 | * proceed anyway as if acks were received. | ||
785 | */ | ||
786 | usleep_range(1000, 10000); | ||
787 | } | ||
788 | |||
789 | static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie) | ||
790 | { | ||
791 | clk_disable_unprepare(imx6_pcie->pcie); | ||
792 | clk_disable_unprepare(imx6_pcie->pcie_phy); | ||
793 | clk_disable_unprepare(imx6_pcie->pcie_bus); | ||
794 | |||
795 | if (imx6_pcie->variant == IMX7D) { | ||
796 | regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, | ||
797 | IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, | ||
798 | IMX7D_GPR12_PCIE_PHY_REFCLK_SEL); | ||
799 | } | ||
800 | } | ||
801 | |||
802 | static int imx6_pcie_suspend_noirq(struct device *dev) | ||
803 | { | ||
804 | struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); | ||
805 | |||
806 | if (imx6_pcie->variant != IMX7D) | ||
807 | return 0; | ||
808 | |||
809 | imx6_pcie_pm_turnoff(imx6_pcie); | ||
810 | imx6_pcie_clk_disable(imx6_pcie); | ||
811 | imx6_pcie_ltssm_disable(dev); | ||
812 | |||
813 | return 0; | ||
814 | } | ||
815 | |||
816 | static int imx6_pcie_resume_noirq(struct device *dev) | ||
817 | { | ||
818 | int ret; | ||
819 | struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); | ||
820 | struct pcie_port *pp = &imx6_pcie->pci->pp; | ||
821 | |||
822 | if (imx6_pcie->variant != IMX7D) | ||
823 | return 0; | ||
824 | |||
825 | imx6_pcie_assert_core_reset(imx6_pcie); | ||
826 | imx6_pcie_init_phy(imx6_pcie); | ||
827 | imx6_pcie_deassert_core_reset(imx6_pcie); | ||
828 | dw_pcie_setup_rc(pp); | ||
829 | |||
830 | ret = imx6_pcie_establish_link(imx6_pcie); | ||
831 | if (ret < 0) | ||
832 | dev_info(dev, "pcie link is down after resume.\n"); | ||
833 | |||
834 | return 0; | ||
835 | } | ||
836 | #endif | ||
837 | |||
838 | static const struct dev_pm_ops imx6_pcie_pm_ops = { | ||
839 | SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx6_pcie_suspend_noirq, | ||
840 | imx6_pcie_resume_noirq) | ||
841 | }; | ||
842 | |||
685 | static int imx6_pcie_probe(struct platform_device *pdev) | 843 | static int imx6_pcie_probe(struct platform_device *pdev) |
686 | { | 844 | { |
687 | struct device *dev = &pdev->dev; | 845 | struct device *dev = &pdev->dev; |
@@ -776,6 +934,13 @@ static int imx6_pcie_probe(struct platform_device *pdev) | |||
776 | break; | 934 | break; |
777 | } | 935 | } |
778 | 936 | ||
937 | /* Grab turnoff reset */ | ||
938 | imx6_pcie->turnoff_reset = devm_reset_control_get_optional_exclusive(dev, "turnoff"); | ||
939 | if (IS_ERR(imx6_pcie->turnoff_reset)) { | ||
940 | dev_err(dev, "Failed to get TURNOFF reset control\n"); | ||
941 | return PTR_ERR(imx6_pcie->turnoff_reset); | ||
942 | } | ||
943 | |||
779 | /* Grab GPR config register range */ | 944 | /* Grab GPR config register range */ |
780 | imx6_pcie->iomuxc_gpr = | 945 | imx6_pcie->iomuxc_gpr = |
781 | syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr"); | 946 | syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr"); |
@@ -848,6 +1013,7 @@ static struct platform_driver imx6_pcie_driver = { | |||
848 | .name = "imx6q-pcie", | 1013 | .name = "imx6q-pcie", |
849 | .of_match_table = imx6_pcie_of_match, | 1014 | .of_match_table = imx6_pcie_of_match, |
850 | .suppress_bind_attrs = true, | 1015 | .suppress_bind_attrs = true, |
1016 | .pm = &imx6_pcie_pm_ops, | ||
851 | }, | 1017 | }, |
852 | .probe = imx6_pcie_probe, | 1018 | .probe = imx6_pcie_probe, |
853 | .shutdown = imx6_pcie_shutdown, | 1019 | .shutdown = imx6_pcie_shutdown, |
diff --git a/drivers/pci/controller/dwc/pci-keystone-dw.c b/drivers/pci/controller/dwc/pci-keystone-dw.c deleted file mode 100644 index 0682213328e9..000000000000 --- a/drivers/pci/controller/dwc/pci-keystone-dw.c +++ /dev/null | |||
@@ -1,484 +0,0 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * DesignWare application register space functions for Keystone PCI controller | ||
4 | * | ||
5 | * Copyright (C) 2013-2014 Texas Instruments., Ltd. | ||
6 | * http://www.ti.com | ||
7 | * | ||
8 | * Author: Murali Karicheri <m-karicheri2@ti.com> | ||
9 | */ | ||
10 | |||
11 | #include <linux/irq.h> | ||
12 | #include <linux/irqdomain.h> | ||
13 | #include <linux/irqreturn.h> | ||
14 | #include <linux/module.h> | ||
15 | #include <linux/of.h> | ||
16 | #include <linux/of_pci.h> | ||
17 | #include <linux/pci.h> | ||
18 | #include <linux/platform_device.h> | ||
19 | |||
20 | #include "pcie-designware.h" | ||
21 | #include "pci-keystone.h" | ||
22 | |||
23 | /* Application register defines */ | ||
24 | #define LTSSM_EN_VAL 1 | ||
25 | #define LTSSM_STATE_MASK 0x1f | ||
26 | #define LTSSM_STATE_L0 0x11 | ||
27 | #define DBI_CS2_EN_VAL 0x20 | ||
28 | #define OB_XLAT_EN_VAL 2 | ||
29 | |||
30 | /* Application registers */ | ||
31 | #define CMD_STATUS 0x004 | ||
32 | #define CFG_SETUP 0x008 | ||
33 | #define OB_SIZE 0x030 | ||
34 | #define CFG_PCIM_WIN_SZ_IDX 3 | ||
35 | #define CFG_PCIM_WIN_CNT 32 | ||
36 | #define SPACE0_REMOTE_CFG_OFFSET 0x1000 | ||
37 | #define OB_OFFSET_INDEX(n) (0x200 + (8 * n)) | ||
38 | #define OB_OFFSET_HI(n) (0x204 + (8 * n)) | ||
39 | |||
40 | /* IRQ register defines */ | ||
41 | #define IRQ_EOI 0x050 | ||
42 | #define IRQ_STATUS 0x184 | ||
43 | #define IRQ_ENABLE_SET 0x188 | ||
44 | #define IRQ_ENABLE_CLR 0x18c | ||
45 | |||
46 | #define MSI_IRQ 0x054 | ||
47 | #define MSI0_IRQ_STATUS 0x104 | ||
48 | #define MSI0_IRQ_ENABLE_SET 0x108 | ||
49 | #define MSI0_IRQ_ENABLE_CLR 0x10c | ||
50 | #define IRQ_STATUS 0x184 | ||
51 | #define MSI_IRQ_OFFSET 4 | ||
52 | |||
53 | /* Error IRQ bits */ | ||
54 | #define ERR_AER BIT(5) /* ECRC error */ | ||
55 | #define ERR_AXI BIT(4) /* AXI tag lookup fatal error */ | ||
56 | #define ERR_CORR BIT(3) /* Correctable error */ | ||
57 | #define ERR_NONFATAL BIT(2) /* Non-fatal error */ | ||
58 | #define ERR_FATAL BIT(1) /* Fatal error */ | ||
59 | #define ERR_SYS BIT(0) /* System (fatal, non-fatal, or correctable) */ | ||
60 | #define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \ | ||
61 | ERR_NONFATAL | ERR_FATAL | ERR_SYS) | ||
62 | #define ERR_FATAL_IRQ (ERR_FATAL | ERR_AXI) | ||
63 | #define ERR_IRQ_STATUS_RAW 0x1c0 | ||
64 | #define ERR_IRQ_STATUS 0x1c4 | ||
65 | #define ERR_IRQ_ENABLE_SET 0x1c8 | ||
66 | #define ERR_IRQ_ENABLE_CLR 0x1cc | ||
67 | |||
68 | /* Config space registers */ | ||
69 | #define DEBUG0 0x728 | ||
70 | |||
71 | #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) | ||
72 | |||
73 | static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset, | ||
74 | u32 *bit_pos) | ||
75 | { | ||
76 | *reg_offset = offset % 8; | ||
77 | *bit_pos = offset >> 3; | ||
78 | } | ||
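update_reg_offset_bit_pos() above splits an MSI number across the eight banked status registers: the bank index is `irq % 8` and the bit within the bank is `irq / 8`. A quick sketch of that decomposition (plain Python, outside the kernel):

```python
def msi_reg_and_bit(irq):
    # Mirrors update_reg_offset_bit_pos(): register bank = irq % 8,
    # bit position within that bank = irq >> 3. The caller then addresses
    # the bank as MSI0_IRQ_STATUS + (bank << 4).
    return irq % 8, irq >> 3

# MSI 0..7 land in banks 0..7 at bit 0; MSI 8 wraps back to bank 0, bit 1.
print(msi_reg_and_bit(0))   # (0, 0)
print(msi_reg_and_bit(8))   # (0, 1)
print(msi_reg_and_bit(9))   # (1, 1)
```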
79 | |||
80 | phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp) | ||
81 | { | ||
82 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
83 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
84 | |||
85 | return ks_pcie->app.start + MSI_IRQ; | ||
86 | } | ||
87 | |||
88 | static u32 ks_dw_app_readl(struct keystone_pcie *ks_pcie, u32 offset) | ||
89 | { | ||
90 | return readl(ks_pcie->va_app_base + offset); | ||
91 | } | ||
92 | |||
93 | static void ks_dw_app_writel(struct keystone_pcie *ks_pcie, u32 offset, u32 val) | ||
94 | { | ||
95 | writel(val, ks_pcie->va_app_base + offset); | ||
96 | } | ||
97 | |||
98 | void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset) | ||
99 | { | ||
100 | struct dw_pcie *pci = ks_pcie->pci; | ||
101 | struct pcie_port *pp = &pci->pp; | ||
102 | struct device *dev = pci->dev; | ||
103 | u32 pending, vector; | ||
104 | int src, virq; | ||
105 | |||
106 | pending = ks_dw_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4)); | ||
107 | |||
108 | /* | ||
109 | * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit | ||
110 | * shows 1, 9, 17, 25 and so forth | ||
111 | */ | ||
112 | for (src = 0; src < 4; src++) { | ||
113 | if (BIT(src) & pending) { | ||
114 | vector = offset + (src << 3); | ||
115 | virq = irq_linear_revmap(pp->irq_domain, vector); | ||
116 | dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", | ||
117 | src, vector, virq); | ||
118 | generic_handle_irq(virq); | ||
119 | } | ||
120 | } | ||
121 | } | ||
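The comment in ks_dw_pcie_handle_msi_irq() describes an interleaved layout: status bit `src` of bank `offset` corresponds to MSI vector `offset + (src << 3)`. A sketch that walks a pending mask the same way as the loop above (hypothetical helper, plain Python):

```python
def pending_vectors(offset, pending):
    # Bank `offset` carries vectors offset, offset+8, offset+16, offset+24
    # in status bits 0..3, matching the loop in ks_dw_pcie_handle_msi_irq().
    return [offset + (src << 3) for src in range(4) if pending & (1 << src)]

print(pending_vectors(0, 0b1111))  # bank 0 -> vectors 0, 8, 16, 24
print(pending_vectors(1, 0b0101))  # bank 1 -> vectors 1, 17
```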
122 | |||
123 | void ks_dw_pcie_msi_irq_ack(int irq, struct pcie_port *pp) | ||
124 | { | ||
125 | u32 reg_offset, bit_pos; | ||
126 | struct keystone_pcie *ks_pcie; | ||
127 | struct dw_pcie *pci; | ||
128 | |||
129 | pci = to_dw_pcie_from_pp(pp); | ||
130 | ks_pcie = to_keystone_pcie(pci); | ||
131 | update_reg_offset_bit_pos(irq, ®_offset, &bit_pos); | ||
132 | |||
133 | ks_dw_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4), | ||
134 | BIT(bit_pos)); | ||
135 | ks_dw_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET); | ||
136 | } | ||
137 | |||
138 | void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq) | ||
139 | { | ||
140 | u32 reg_offset, bit_pos; | ||
141 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
142 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
143 | |||
144 | update_reg_offset_bit_pos(irq, ®_offset, &bit_pos); | ||
145 | ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4), | ||
146 | BIT(bit_pos)); | ||
147 | } | ||
148 | |||
149 | void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq) | ||
150 | { | ||
151 | u32 reg_offset, bit_pos; | ||
152 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
153 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
154 | |||
155 | update_reg_offset_bit_pos(irq, ®_offset, &bit_pos); | ||
156 | ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4), | ||
157 | BIT(bit_pos)); | ||
158 | } | ||
159 | |||
160 | int ks_dw_pcie_msi_host_init(struct pcie_port *pp) | ||
161 | { | ||
162 | return dw_pcie_allocate_domains(pp); | ||
163 | } | ||
164 | |||
165 | void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) | ||
166 | { | ||
167 | int i; | ||
168 | |||
169 | for (i = 0; i < PCI_NUM_INTX; i++) | ||
170 | ks_dw_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1); | ||
171 | } | ||
172 | |||
173 | void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset) | ||
174 | { | ||
175 | struct dw_pcie *pci = ks_pcie->pci; | ||
176 | struct device *dev = pci->dev; | ||
177 | u32 pending; | ||
178 | int virq; | ||
179 | |||
180 | pending = ks_dw_app_readl(ks_pcie, IRQ_STATUS + (offset << 4)); | ||
181 | |||
182 | if (BIT(0) & pending) { | ||
183 | virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset); | ||
184 | dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq); | ||
185 | generic_handle_irq(virq); | ||
186 | } | ||
187 | |||
188 | /* EOI the INTx interrupt */ | ||
189 | ks_dw_app_writel(ks_pcie, IRQ_EOI, offset); | ||
190 | } | ||
191 | |||
192 | void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie) | ||
193 | { | ||
194 | ks_dw_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL); | ||
195 | } | ||
196 | |||
197 | irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie) | ||
198 | { | ||
199 | u32 status; | ||
200 | |||
201 | status = ks_dw_app_readl(ks_pcie, ERR_IRQ_STATUS_RAW) & ERR_IRQ_ALL; | ||
202 | if (!status) | ||
203 | return IRQ_NONE; | ||
204 | |||
205 | if (status & ERR_FATAL_IRQ) | ||
206 | dev_err(ks_pcie->pci->dev, "fatal error (status %#010x)\n", | ||
207 | status); | ||
208 | |||
209 | /* Ack the IRQ; status bits are RW1C */ | ||
210 | ks_dw_app_writel(ks_pcie, ERR_IRQ_STATUS, status); | ||
211 | return IRQ_HANDLED; | ||
212 | } | ||
213 | |||
214 | static void ks_dw_pcie_ack_legacy_irq(struct irq_data *d) | ||
215 | { | ||
216 | } | ||
217 | |||
218 | static void ks_dw_pcie_mask_legacy_irq(struct irq_data *d) | ||
219 | { | ||
220 | } | ||
221 | |||
222 | static void ks_dw_pcie_unmask_legacy_irq(struct irq_data *d) | ||
223 | { | ||
224 | } | ||
225 | |||
226 | static struct irq_chip ks_dw_pcie_legacy_irq_chip = { | ||
227 | .name = "Keystone-PCI-Legacy-IRQ", | ||
228 | .irq_ack = ks_dw_pcie_ack_legacy_irq, | ||
229 | .irq_mask = ks_dw_pcie_mask_legacy_irq, | ||
230 | .irq_unmask = ks_dw_pcie_unmask_legacy_irq, | ||
231 | }; | ||
232 | |||
233 | static int ks_dw_pcie_init_legacy_irq_map(struct irq_domain *d, | ||
234 | unsigned int irq, irq_hw_number_t hw_irq) | ||
235 | { | ||
236 | irq_set_chip_and_handler(irq, &ks_dw_pcie_legacy_irq_chip, | ||
237 | handle_level_irq); | ||
238 | irq_set_chip_data(irq, d->host_data); | ||
239 | |||
240 | return 0; | ||
241 | } | ||
242 | |||
243 | static const struct irq_domain_ops ks_dw_pcie_legacy_irq_domain_ops = { | ||
244 | .map = ks_dw_pcie_init_legacy_irq_map, | ||
245 | .xlate = irq_domain_xlate_onetwocell, | ||
246 | }; | ||
247 | |||
248 | /** | ||
249 | * ks_dw_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask | ||
250 | * registers | ||
251 | * | ||
252 | * Since modification of dbi_cs2 involves different clock domain, read the | ||
253 | * status back to ensure the transition is complete. | ||
254 | */ | ||
255 | static void ks_dw_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie) | ||
256 | { | ||
257 | u32 val; | ||
258 | |||
259 | val = ks_dw_app_readl(ks_pcie, CMD_STATUS); | ||
260 | ks_dw_app_writel(ks_pcie, CMD_STATUS, DBI_CS2_EN_VAL | val); | ||
261 | |||
262 | do { | ||
263 | val = ks_dw_app_readl(ks_pcie, CMD_STATUS); | ||
264 | } while (!(val & DBI_CS2_EN_VAL)); | ||
265 | } | ||
266 | |||
267 | /** | ||
268 | * ks_dw_pcie_clear_dbi_mode() - Disable DBI mode | ||
269 | * | ||
270 | * Since modification of dbi_cs2 involves different clock domain, read the | ||
271 | * status back to ensure the transition is complete. | ||
272 | */ | ||
273 | static void ks_dw_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie) | ||
274 | { | ||
275 | u32 val; | ||
276 | |||
277 | val = ks_dw_app_readl(ks_pcie, CMD_STATUS); | ||
278 | ks_dw_app_writel(ks_pcie, CMD_STATUS, ~DBI_CS2_EN_VAL & val); | ||
279 | |||
280 | do { | ||
281 | val = ks_dw_app_readl(ks_pcie, CMD_STATUS); | ||
282 | } while (val & DBI_CS2_EN_VAL); | ||
283 | } | ||
284 | |||
285 | void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) | ||
286 | { | ||
287 | struct dw_pcie *pci = ks_pcie->pci; | ||
288 | struct pcie_port *pp = &pci->pp; | ||
289 | u32 start = pp->mem->start, end = pp->mem->end; | ||
290 | int i, tr_size; | ||
291 | u32 val; | ||
292 | |||
293 | /* Disable BARs for inbound access */ | ||
294 | ks_dw_pcie_set_dbi_mode(ks_pcie); | ||
295 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); | ||
296 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0); | ||
297 | ks_dw_pcie_clear_dbi_mode(ks_pcie); | ||
298 | |||
299 | /* Set outbound translation size per window division */ | ||
300 | ks_dw_app_writel(ks_pcie, OB_SIZE, CFG_PCIM_WIN_SZ_IDX & 0x7); | ||
301 | |||
302 | tr_size = (1 << (CFG_PCIM_WIN_SZ_IDX & 0x7)) * SZ_1M; | ||
303 | |||
304 | /* Using Direct 1:1 mapping of RC <-> PCI memory space */ | ||
305 | for (i = 0; (i < CFG_PCIM_WIN_CNT) && (start < end); i++) { | ||
306 | ks_dw_app_writel(ks_pcie, OB_OFFSET_INDEX(i), start | 1); | ||
307 | ks_dw_app_writel(ks_pcie, OB_OFFSET_HI(i), 0); | ||
308 | start += tr_size; | ||
309 | } | ||
310 | |||
311 | /* Enable OB translation */ | ||
312 | val = ks_dw_app_readl(ks_pcie, CMD_STATUS); | ||
313 | ks_dw_app_writel(ks_pcie, CMD_STATUS, OB_XLAT_EN_VAL | val); | ||
314 | } | ||
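The outbound setup above carves the RC memory space into fixed-size 1:1 windows: `tr_size = (1 << (CFG_PCIM_WIN_SZ_IDX & 0x7)) * SZ_1M`, i.e. 8 MiB with the index of 3 defined in this file, up to 32 windows. A sketch of the window math (plain Python; the function name is hypothetical):

```python
SZ_1M = 1 << 20
CFG_PCIM_WIN_SZ_IDX = 3   # from the defines in pci-keystone-dw.c
CFG_PCIM_WIN_CNT = 32

def ob_windows(start, end):
    # Mirrors the loop in ks_dw_pcie_setup_rc_app_regs(): 1:1 windows of
    # tr_size bytes, capped at CFG_PCIM_WIN_CNT entries; bit 0 of each
    # OB_OFFSET_INDEX(n) entry is the enable bit.
    tr_size = (1 << (CFG_PCIM_WIN_SZ_IDX & 0x7)) * SZ_1M
    wins = []
    i = 0
    while i < CFG_PCIM_WIN_CNT and start < end:
        wins.append(start | 1)
        start += tr_size
        i += 1
    return tr_size, wins

tr, wins = ob_windows(0x60000000, 0x62000000)  # a 32 MiB region
print(tr // SZ_1M, len(wins))
```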
315 | |||
316 | /** | ||
317 | * ks_pcie_cfg_setup() - Set up configuration space address for a device | ||
318 | * | ||
319 | * @ks_pcie: ptr to keystone_pcie structure | ||
320 | * @bus: Bus number the device is residing on | ||
321 | * @devfn: device, function number info | ||
322 | * | ||
323 | * Forms and returns the address of configuration space mapped in PCIESS | ||
324 | * address space 0. Also configures CFG_SETUP for remote configuration space | ||
325 | * access. | ||
326 | * | ||
327 | * The address space has two regions to access configuration - local and remote. | ||
328 | * We access local region for bus 0 (as RC is attached on bus 0) and remote | ||
329 | * region for others with TYPE 1 access when bus > 1. As for device on bus = 1, | ||
330 | * we will do TYPE 0 access as it will be on our secondary bus (logical). | ||
331 | * CFG_SETUP is needed only for remote configuration access. | ||
332 | */ | ||
333 | static void __iomem *ks_pcie_cfg_setup(struct keystone_pcie *ks_pcie, u8 bus, | ||
334 | unsigned int devfn) | ||
335 | { | ||
336 | u8 device = PCI_SLOT(devfn), function = PCI_FUNC(devfn); | ||
337 | struct dw_pcie *pci = ks_pcie->pci; | ||
338 | struct pcie_port *pp = &pci->pp; | ||
339 | u32 regval; | ||
340 | |||
341 | if (bus == 0) | ||
342 | return pci->dbi_base; | ||
343 | |||
344 | regval = (bus << 16) | (device << 8) | function; | ||
345 | |||
346 | /* | ||
347 | * Since Bus#1 will be a virtual bus, we need to have TYPE0 | ||
348 | * access only. | ||
349 | * TYPE 1 | ||
350 | */ | ||
351 | if (bus != 1) | ||
352 | regval |= BIT(24); | ||
353 | |||
354 | ks_dw_app_writel(ks_pcie, CFG_SETUP, regval); | ||
355 | return pp->va_cfg0_base; | ||
356 | } | ||
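ks_pcie_cfg_setup() above encodes the target as `bus << 16 | device << 8 | function`, setting bit 24 to select a Type 1 request for any bus beyond the (virtual) secondary bus 1; the CFG_BUS/CFG_DEVICE/CFG_FUNC/CFG_TYPE1 macros added to pci-keystone.c later in this patch spell out the same layout. A sketch of the encoding (plain Python, hypothetical helper name):

```python
def cfg_setup_val(bus, devfn):
    # Mirrors ks_pcie_cfg_setup(): encode bus/device/function, and use a
    # Type 1 config request (bit 24) for anything past the secondary bus.
    device, function = devfn >> 3, devfn & 0x7
    regval = (bus << 16) | (device << 8) | function
    if bus != 1:
        regval |= 1 << 24       # CFG_TYPE1
    return regval

print(hex(cfg_setup_val(1, (2 << 3) | 1)))  # bus 1, dev 2, fn 1: Type 0
print(hex(cfg_setup_val(2, 0)))             # bus 2: Type 1 bit set
```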
357 | |||
358 | int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
359 | unsigned int devfn, int where, int size, u32 *val) | ||
360 | { | ||
361 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
362 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
363 | u8 bus_num = bus->number; | ||
364 | void __iomem *addr; | ||
365 | |||
366 | addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn); | ||
367 | |||
368 | return dw_pcie_read(addr + where, size, val); | ||
369 | } | ||
370 | |||
371 | int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
372 | unsigned int devfn, int where, int size, u32 val) | ||
373 | { | ||
374 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
375 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
376 | u8 bus_num = bus->number; | ||
377 | void __iomem *addr; | ||
378 | |||
379 | addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn); | ||
380 | |||
381 | return dw_pcie_write(addr + where, size, val); | ||
382 | } | ||
383 | |||
384 | /** | ||
385 | * ks_dw_pcie_v3_65_scan_bus() - keystone scan_bus post initialization | ||
386 | * | ||
387 | * This sets BAR0 to enable inbound access for MSI_IRQ register | ||
388 | */ | ||
389 | void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp) | ||
390 | { | ||
391 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
392 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
393 | |||
394 | /* Configure and set up BAR0 */ | ||
395 | ks_dw_pcie_set_dbi_mode(ks_pcie); | ||
396 | |||
397 | /* Enable BAR0 */ | ||
398 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1); | ||
399 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1); | ||
400 | |||
401 | ks_dw_pcie_clear_dbi_mode(ks_pcie); | ||
402 | |||
403 | /* | ||
404 | * For BAR0, just setting bus address for inbound writes (MSI) should | ||
405 | * be sufficient. Use physical address to avoid any conflicts. | ||
406 | */ | ||
407 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start); | ||
408 | } | ||
409 | |||
410 | /** | ||
411 | * ks_dw_pcie_link_up() - Check if link up | ||
412 | */ | ||
413 | int ks_dw_pcie_link_up(struct dw_pcie *pci) | ||
414 | { | ||
415 | u32 val; | ||
416 | |||
417 | val = dw_pcie_readl_dbi(pci, DEBUG0); | ||
418 | return (val & LTSSM_STATE_MASK) == LTSSM_STATE_L0; | ||
419 | } | ||
420 | |||
421 | void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie) | ||
422 | { | ||
423 | u32 val; | ||
424 | |||
425 | /* Disable Link training */ | ||
426 | val = ks_dw_app_readl(ks_pcie, CMD_STATUS); | ||
427 | val &= ~LTSSM_EN_VAL; | ||
428 | ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); | ||
429 | |||
430 | /* Initiate Link Training */ | ||
431 | val = ks_dw_app_readl(ks_pcie, CMD_STATUS); | ||
432 | ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); | ||
433 | } | ||
434 | |||
435 | /** | ||
436 | * ks_dw_pcie_host_init() - initialize host for v3_65 dw hardware | ||
437 | * | ||
438 | * Ioremap the register resources, initialize legacy irq domain | ||
439 | * and call dw_pcie_v3_65_host_init() API to initialize the Keystone | ||
440 | * PCI host controller. | ||
441 | */ | ||
442 | int __init ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, | ||
443 | struct device_node *msi_intc_np) | ||
444 | { | ||
445 | struct dw_pcie *pci = ks_pcie->pci; | ||
446 | struct pcie_port *pp = &pci->pp; | ||
447 | struct device *dev = pci->dev; | ||
448 | struct platform_device *pdev = to_platform_device(dev); | ||
449 | struct resource *res; | ||
450 | |||
451 | /* Index 0 is the config reg. space address */ | ||
452 | res = platform_get_resource(pdev, IORESOURCE_MEM, 0); | ||
453 | pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); | ||
454 | if (IS_ERR(pci->dbi_base)) | ||
455 | return PTR_ERR(pci->dbi_base); | ||
456 | |||
457 | /* | ||
458 | * We set these same and is used in pcie rd/wr_other_conf | ||
459 | * functions | ||
460 | */ | ||
461 | pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET; | ||
462 | pp->va_cfg1_base = pp->va_cfg0_base; | ||
463 | |||
464 | /* Index 1 is the application reg. space address */ | ||
465 | res = platform_get_resource(pdev, IORESOURCE_MEM, 1); | ||
466 | ks_pcie->va_app_base = devm_ioremap_resource(dev, res); | ||
467 | if (IS_ERR(ks_pcie->va_app_base)) | ||
468 | return PTR_ERR(ks_pcie->va_app_base); | ||
469 | |||
470 | ks_pcie->app = *res; | ||
471 | |||
472 | /* Create legacy IRQ domain */ | ||
473 | ks_pcie->legacy_irq_domain = | ||
474 | irq_domain_add_linear(ks_pcie->legacy_intc_np, | ||
475 | PCI_NUM_INTX, | ||
476 | &ks_dw_pcie_legacy_irq_domain_ops, | ||
477 | NULL); | ||
478 | if (!ks_pcie->legacy_irq_domain) { | ||
479 | dev_err(dev, "Failed to add irq domain for legacy irqs\n"); | ||
480 | return -EINVAL; | ||
481 | } | ||
482 | |||
483 | return dw_pcie_host_init(pp); | ||
484 | } | ||
diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c index e88bd221fffe..14f2b0b4ed5e 100644 --- a/drivers/pci/controller/dwc/pci-keystone.c +++ b/drivers/pci/controller/dwc/pci-keystone.c | |||
@@ -9,40 +9,510 @@ | |||
9 | * Implementation based on pci-exynos.c and pcie-designware.c | 9 | * Implementation based on pci-exynos.c and pcie-designware.c |
10 | */ | 10 | */ |
11 | 11 | ||
12 | #include <linux/irqchip/chained_irq.h> | ||
13 | #include <linux/clk.h> | 12 | #include <linux/clk.h> |
14 | #include <linux/delay.h> | 13 | #include <linux/delay.h> |
14 | #include <linux/init.h> | ||
15 | #include <linux/interrupt.h> | 15 | #include <linux/interrupt.h> |
16 | #include <linux/irqchip/chained_irq.h> | ||
16 | #include <linux/irqdomain.h> | 17 | #include <linux/irqdomain.h> |
17 | #include <linux/init.h> | 18 | #include <linux/mfd/syscon.h> |
18 | #include <linux/msi.h> | 19 | #include <linux/msi.h> |
19 | #include <linux/of_irq.h> | ||
20 | #include <linux/of.h> | 20 | #include <linux/of.h> |
21 | #include <linux/of_irq.h> | ||
21 | #include <linux/of_pci.h> | 22 | #include <linux/of_pci.h> |
22 | #include <linux/platform_device.h> | ||
23 | #include <linux/phy/phy.h> | 23 | #include <linux/phy/phy.h> |
24 | #include <linux/platform_device.h> | ||
25 | #include <linux/regmap.h> | ||
24 | #include <linux/resource.h> | 26 | #include <linux/resource.h> |
25 | #include <linux/signal.h> | 27 | #include <linux/signal.h> |
26 | 28 | ||
27 | #include "pcie-designware.h" | 29 | #include "pcie-designware.h" |
28 | #include "pci-keystone.h" | ||
29 | 30 | ||
30 | #define DRIVER_NAME "keystone-pcie" | 31 | #define PCIE_VENDORID_MASK 0xffff |
32 | #define PCIE_DEVICEID_SHIFT 16 | ||
33 | |||
34 | /* Application registers */ | ||
35 | #define CMD_STATUS 0x004 | ||
36 | #define LTSSM_EN_VAL BIT(0) | ||
37 | #define OB_XLAT_EN_VAL BIT(1) | ||
38 | #define DBI_CS2 BIT(5) | ||
39 | |||
40 | #define CFG_SETUP 0x008 | ||
41 | #define CFG_BUS(x) (((x) & 0xff) << 16) | ||
42 | #define CFG_DEVICE(x) (((x) & 0x1f) << 8) | ||
43 | #define CFG_FUNC(x) ((x) & 0x7) | ||
44 | #define CFG_TYPE1 BIT(24) | ||
45 | |||
46 | #define OB_SIZE 0x030 | ||
47 | #define SPACE0_REMOTE_CFG_OFFSET 0x1000 | ||
48 | #define OB_OFFSET_INDEX(n) (0x200 + (8 * (n))) | ||
49 | #define OB_OFFSET_HI(n) (0x204 + (8 * (n))) | ||
50 | #define OB_ENABLEN BIT(0) | ||
51 | #define OB_WIN_SIZE 8 /* 8MB */ | ||
52 | |||
53 | /* IRQ register defines */ | ||
54 | #define IRQ_EOI 0x050 | ||
55 | #define IRQ_STATUS 0x184 | ||
56 | #define IRQ_ENABLE_SET 0x188 | ||
57 | #define IRQ_ENABLE_CLR 0x18c | ||
58 | |||
59 | #define MSI_IRQ 0x054 | ||
60 | #define MSI0_IRQ_STATUS 0x104 | ||
61 | #define MSI0_IRQ_ENABLE_SET 0x108 | ||
62 | #define MSI0_IRQ_ENABLE_CLR 0x10c | ||
63 | #define IRQ_STATUS 0x184 | ||
64 | #define MSI_IRQ_OFFSET 4 | ||
65 | |||
66 | #define ERR_IRQ_STATUS 0x1c4 | ||
67 | #define ERR_IRQ_ENABLE_SET 0x1c8 | ||
68 | #define ERR_AER BIT(5) /* ECRC error */ | ||
69 | #define ERR_AXI BIT(4) /* AXI tag lookup fatal error */ | ||
70 | #define ERR_CORR BIT(3) /* Correctable error */ | ||
71 | #define ERR_NONFATAL BIT(2) /* Non-fatal error */ | ||
72 | #define ERR_FATAL BIT(1) /* Fatal error */ | ||
73 | #define ERR_SYS BIT(0) /* System error */ | ||
74 | #define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \ | ||
75 | ERR_NONFATAL | ERR_FATAL | ERR_SYS) | ||
76 | |||
77 | #define MAX_MSI_HOST_IRQS 8 | ||
78 | /* PCIE controller device IDs */ | ||
79 | #define PCIE_RC_K2HK 0xb008 | ||
80 | #define PCIE_RC_K2E 0xb009 | ||
81 | #define PCIE_RC_K2L 0xb00a | ||
82 | #define PCIE_RC_K2G 0xb00b | ||
83 | |||
84 | #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) | ||
85 | |||
86 | struct keystone_pcie { | ||
87 | struct dw_pcie *pci; | ||
88 | /* PCI Device ID */ | ||
89 | u32 device_id; | ||
90 | int num_legacy_host_irqs; | ||
91 | int legacy_host_irqs[PCI_NUM_INTX]; | ||
92 | struct device_node *legacy_intc_np; | ||
93 | |||
94 | int num_msi_host_irqs; | ||
95 | int msi_host_irqs[MAX_MSI_HOST_IRQS]; | ||
96 | int num_lanes; | ||
97 | u32 num_viewport; | ||
98 | struct phy **phy; | ||
99 | struct device_link **link; | ||
100 | struct device_node *msi_intc_np; | ||
101 | struct irq_domain *legacy_irq_domain; | ||
102 | struct device_node *np; | ||
103 | |||
104 | int error_irq; | ||
105 | |||
106 | /* Application register space */ | ||
107 | void __iomem *va_app_base; /* DT 1st resource */ | ||
108 | struct resource app; | ||
109 | }; | ||
31 | 110 | ||
32 | /* DEV_STAT_CTRL */ | 111 | static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset, |
33 | #define PCIE_CAP_BASE 0x70 | 112 | u32 *bit_pos) |
113 | { | ||
114 | *reg_offset = offset % 8; | ||
115 | *bit_pos = offset >> 3; | ||
116 | } | ||
34 | 117 | ||
35 | /* PCIE controller device IDs */ | 118 | static phys_addr_t ks_pcie_get_msi_addr(struct pcie_port *pp) |
36 | #define PCIE_RC_K2HK 0xb008 | 119 | { |
37 | #define PCIE_RC_K2E 0xb009 | 120 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); |
38 | #define PCIE_RC_K2L 0xb00a | 121 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); |
122 | |||
123 | return ks_pcie->app.start + MSI_IRQ; | ||
124 | } | ||
125 | |||
126 | static u32 ks_pcie_app_readl(struct keystone_pcie *ks_pcie, u32 offset) | ||
127 | { | ||
128 | return readl(ks_pcie->va_app_base + offset); | ||
129 | } | ||
130 | |||
131 | static void ks_pcie_app_writel(struct keystone_pcie *ks_pcie, u32 offset, | ||
132 | u32 val) | ||
133 | { | ||
134 | writel(val, ks_pcie->va_app_base + offset); | ||
135 | } | ||
136 | |||
137 | static void ks_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset) | ||
138 | { | ||
139 | struct dw_pcie *pci = ks_pcie->pci; | ||
140 | struct pcie_port *pp = &pci->pp; | ||
141 | struct device *dev = pci->dev; | ||
142 | u32 pending, vector; | ||
143 | int src, virq; | ||
144 | |||
145 | pending = ks_pcie_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4)); | ||
146 | |||
147 | /* | ||
148 | * MSI0 status bits 0-3 show vectors 0, 8, 16, 24; MSI1 status bits | ||
149 | * show vectors 1, 9, 17, 25; and so forth | ||
150 | */ | ||
151 | for (src = 0; src < 4; src++) { | ||
152 | if (BIT(src) & pending) { | ||
153 | vector = offset + (src << 3); | ||
154 | virq = irq_linear_revmap(pp->irq_domain, vector); | ||
155 | dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", | ||
156 | src, vector, virq); | ||
157 | generic_handle_irq(virq); | ||
158 | } | ||
159 | } | ||
160 | } | ||
161 | |||
162 | static void ks_pcie_msi_irq_ack(int irq, struct pcie_port *pp) | ||
163 | { | ||
164 | u32 reg_offset, bit_pos; | ||
165 | struct keystone_pcie *ks_pcie; | ||
166 | struct dw_pcie *pci; | ||
167 | |||
168 | pci = to_dw_pcie_from_pp(pp); | ||
169 | ks_pcie = to_keystone_pcie(pci); | ||
170 | update_reg_offset_bit_pos(irq, ®_offset, &bit_pos); | ||
171 | |||
172 | ks_pcie_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4), | ||
173 | BIT(bit_pos)); | ||
174 | ks_pcie_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET); | ||
175 | } | ||
176 | |||
177 | static void ks_pcie_msi_set_irq(struct pcie_port *pp, int irq) | ||
178 | { | ||
179 | u32 reg_offset, bit_pos; | ||
180 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
181 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
182 | |||
183 | update_reg_offset_bit_pos(irq, ®_offset, &bit_pos); | ||
184 | ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4), | ||
185 | BIT(bit_pos)); | ||
186 | } | ||
187 | |||
188 | static void ks_pcie_msi_clear_irq(struct pcie_port *pp, int irq) | ||
189 | { | ||
190 | u32 reg_offset, bit_pos; | ||
191 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
192 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
193 | |||
194 | update_reg_offset_bit_pos(irq, ®_offset, &bit_pos); | ||
195 | ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4), | ||
196 | BIT(bit_pos)); | ||
197 | } | ||
198 | |||
199 | static int ks_pcie_msi_host_init(struct pcie_port *pp) | ||
200 | { | ||
201 | return dw_pcie_allocate_domains(pp); | ||
202 | } | ||
203 | |||
204 | static void ks_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) | ||
205 | { | ||
206 | int i; | ||
207 | |||
208 | for (i = 0; i < PCI_NUM_INTX; i++) | ||
209 | ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1); | ||
210 | } | ||
211 | |||
212 | static void ks_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, | ||
213 | int offset) | ||
214 | { | ||
215 | struct dw_pcie *pci = ks_pcie->pci; | ||
216 | struct device *dev = pci->dev; | ||
217 | u32 pending; | ||
218 | int virq; | ||
219 | |||
220 | pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS + (offset << 4)); | ||
221 | |||
222 | if (BIT(0) & pending) { | ||
223 | virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset); | ||
224 | dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq); | ||
225 | generic_handle_irq(virq); | ||
226 | } | ||
227 | |||
228 | /* EOI the INTx interrupt */ | ||
229 | ks_pcie_app_writel(ks_pcie, IRQ_EOI, offset); | ||
230 | } | ||
231 | |||
232 | static void ks_pcie_enable_error_irq(struct keystone_pcie *ks_pcie) | ||
233 | { | ||
234 | ks_pcie_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL); | ||
235 | } | ||
236 | |||
237 | static irqreturn_t ks_pcie_handle_error_irq(struct keystone_pcie *ks_pcie) | ||
238 | { | ||
239 | u32 reg; | ||
240 | struct device *dev = ks_pcie->pci->dev; | ||
241 | |||
242 | reg = ks_pcie_app_readl(ks_pcie, ERR_IRQ_STATUS); | ||
243 | if (!reg) | ||
244 | return IRQ_NONE; | ||
245 | |||
246 | if (reg & ERR_SYS) | ||
247 | dev_err(dev, "System Error\n"); | ||
248 | |||
249 | if (reg & ERR_FATAL) | ||
250 | dev_err(dev, "Fatal Error\n"); | ||
251 | |||
252 | if (reg & ERR_NONFATAL) | ||
253 | dev_dbg(dev, "Non Fatal Error\n"); | ||
254 | |||
255 | if (reg & ERR_CORR) | ||
256 | dev_dbg(dev, "Correctable Error\n"); | ||
257 | |||
258 | if (reg & ERR_AXI) | ||
259 | dev_err(dev, "AXI tag lookup fatal Error\n"); | ||
260 | |||
261 | if (reg & ERR_AER) | ||
262 | dev_err(dev, "ECRC Error\n"); | ||
263 | |||
264 | ks_pcie_app_writel(ks_pcie, ERR_IRQ_STATUS, reg); | ||
265 | |||
266 | return IRQ_HANDLED; | ||
267 | } | ||
268 | |||
269 | static void ks_pcie_ack_legacy_irq(struct irq_data *d) | ||
270 | { | ||
271 | } | ||
272 | |||
273 | static void ks_pcie_mask_legacy_irq(struct irq_data *d) | ||
274 | { | ||
275 | } | ||
276 | |||
277 | static void ks_pcie_unmask_legacy_irq(struct irq_data *d) | ||
278 | { | ||
279 | } | ||
280 | |||
281 | static struct irq_chip ks_pcie_legacy_irq_chip = { | ||
282 | .name = "Keystone-PCI-Legacy-IRQ", | ||
283 | .irq_ack = ks_pcie_ack_legacy_irq, | ||
284 | .irq_mask = ks_pcie_mask_legacy_irq, | ||
285 | .irq_unmask = ks_pcie_unmask_legacy_irq, | ||
286 | }; | ||
287 | |||
288 | static int ks_pcie_init_legacy_irq_map(struct irq_domain *d, | ||
289 | unsigned int irq, | ||
290 | irq_hw_number_t hw_irq) | ||
291 | { | ||
292 | irq_set_chip_and_handler(irq, &ks_pcie_legacy_irq_chip, | ||
293 | handle_level_irq); | ||
294 | irq_set_chip_data(irq, d->host_data); | ||
295 | |||
296 | return 0; | ||
297 | } | ||
298 | |||
299 | static const struct irq_domain_ops ks_pcie_legacy_irq_domain_ops = { | ||
300 | .map = ks_pcie_init_legacy_irq_map, | ||
301 | .xlate = irq_domain_xlate_onetwocell, | ||
302 | }; | ||
303 | |||
304 | /** | ||
305 | * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask | ||
306 | * registers | ||
307 | * | ||
308 | * Since modification of dbi_cs2 involves a different clock domain, read the | ||
309 | * status back to ensure the transition is complete. | ||
310 | */ | ||
311 | static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie) | ||
312 | { | ||
313 | u32 val; | ||
314 | |||
315 | val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); | ||
316 | val |= DBI_CS2; | ||
317 | ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); | ||
318 | |||
319 | do { | ||
320 | val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); | ||
321 | } while (!(val & DBI_CS2)); | ||
322 | } | ||
323 | |||
324 | /** | ||
325 | * ks_pcie_clear_dbi_mode() - Disable DBI mode | ||
326 | * | ||
327 | * Since modification of dbi_cs2 involves a different clock domain, read the | ||
328 | * status back to ensure the transition is complete. | ||
329 | */ | ||
330 | static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie) | ||
331 | { | ||
332 | u32 val; | ||
333 | |||
334 | val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); | ||
335 | val &= ~DBI_CS2; | ||
336 | ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); | ||
337 | |||
338 | do { | ||
339 | val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); | ||
340 | } while (val & DBI_CS2); | ||
341 | } | ||
342 | |||
343 | static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) | ||
344 | { | ||
345 | u32 val; | ||
346 | u32 num_viewport = ks_pcie->num_viewport; | ||
347 | struct dw_pcie *pci = ks_pcie->pci; | ||
348 | struct pcie_port *pp = &pci->pp; | ||
349 | u64 start = pp->mem->start; | ||
350 | u64 end = pp->mem->end; | ||
351 | int i; | ||
352 | |||
353 | /* Disable BARs for inbound access */ | ||
354 | ks_pcie_set_dbi_mode(ks_pcie); | ||
355 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); | ||
356 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0); | ||
357 | ks_pcie_clear_dbi_mode(ks_pcie); | ||
358 | |||
359 | val = ilog2(OB_WIN_SIZE); | ||
360 | ks_pcie_app_writel(ks_pcie, OB_SIZE, val); | ||
361 | |||
362 | /* Using Direct 1:1 mapping of RC <-> PCI memory space */ | ||
363 | for (i = 0; i < num_viewport && (start < end); i++) { | ||
364 | ks_pcie_app_writel(ks_pcie, OB_OFFSET_INDEX(i), | ||
365 | lower_32_bits(start) | OB_ENABLEN); | ||
366 | ks_pcie_app_writel(ks_pcie, OB_OFFSET_HI(i), | ||
367 | upper_32_bits(start)); | ||
368 | start += OB_WIN_SIZE; | ||
369 | } | ||
370 | |||
371 | val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); | ||
372 | val |= OB_XLAT_EN_VAL; | ||
373 | ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); | ||
374 | } | ||
375 | |||
376 | static int ks_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
377 | unsigned int devfn, int where, int size, | ||
378 | u32 *val) | ||
379 | { | ||
380 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
381 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
382 | u32 reg; | ||
383 | |||
384 | reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) | | ||
385 | CFG_FUNC(PCI_FUNC(devfn)); | ||
386 | if (bus->parent->number != pp->root_bus_nr) | ||
387 | reg |= CFG_TYPE1; | ||
388 | ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg); | ||
389 | |||
390 | return dw_pcie_read(pp->va_cfg0_base + where, size, val); | ||
391 | } | ||
392 | |||
393 | static int ks_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
394 | unsigned int devfn, int where, int size, | ||
395 | u32 val) | ||
396 | { | ||
397 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
398 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
399 | u32 reg; | ||
400 | |||
401 | reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) | | ||
402 | CFG_FUNC(PCI_FUNC(devfn)); | ||
403 | if (bus->parent->number != pp->root_bus_nr) | ||
404 | reg |= CFG_TYPE1; | ||
405 | ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg); | ||
406 | |||
407 | return dw_pcie_write(pp->va_cfg0_base + where, size, val); | ||
408 | } | ||
409 | |||
410 | /** | ||
411 | * ks_pcie_v3_65_scan_bus() - keystone scan_bus post initialization | ||
412 | * | ||
413 | * This sets BAR0 to enable inbound access for MSI_IRQ register | ||
414 | */ | ||
415 | static void ks_pcie_v3_65_scan_bus(struct pcie_port *pp) | ||
416 | { | ||
417 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | ||
418 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | ||
419 | |||
420 | /* Configure and set up BAR0 */ | ||
421 | ks_pcie_set_dbi_mode(ks_pcie); | ||
422 | |||
423 | /* Enable BAR0 */ | ||
424 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1); | ||
425 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1); | ||
426 | |||
427 | ks_pcie_clear_dbi_mode(ks_pcie); | ||
428 | |||
429 | /* | ||
430 | * For BAR0, just setting bus address for inbound writes (MSI) should | ||
431 | * be sufficient. Use physical address to avoid any conflicts. | ||
432 | */ | ||
433 | dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start); | ||
434 | } | ||
435 | |||
436 | /** | ||
437 | * ks_pcie_link_up() - Check if link up | ||
438 | */ | ||
439 | static int ks_pcie_link_up(struct dw_pcie *pci) | ||
440 | { | ||
441 | u32 val; | ||
442 | |||
443 | val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0); | ||
444 | val &= PORT_LOGIC_LTSSM_STATE_MASK; | ||
445 | return (val == PORT_LOGIC_LTSSM_STATE_L0); | ||
446 | } | ||
447 | |||
448 | static void ks_pcie_initiate_link_train(struct keystone_pcie *ks_pcie) | ||
449 | { | ||
450 | u32 val; | ||
39 | 451 | ||
40 | #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) | 452 | /* Disable Link training */ |
453 | val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); | ||
454 | val &= ~LTSSM_EN_VAL; | ||
455 | ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); | ||
41 | 456 | ||
42 | static void quirk_limit_mrrs(struct pci_dev *dev) | 457 | /* Initiate Link Training */ |
458 | val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); | ||
459 | ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); | ||
460 | } | ||
461 | |||
462 | /** | ||
463 | * ks_pcie_dw_host_init() - initialize host for v3_65 dw hardware | ||
464 | * | ||
465 | * Ioremap the register resources, initialize legacy irq domain | ||
466 | * and call dw_pcie_v3_65_host_init() API to initialize the Keystone | ||
467 | * PCI host controller. | ||
468 | */ | ||
469 | static int __init ks_pcie_dw_host_init(struct keystone_pcie *ks_pcie) | ||
470 | { | ||
471 | struct dw_pcie *pci = ks_pcie->pci; | ||
472 | struct pcie_port *pp = &pci->pp; | ||
473 | struct device *dev = pci->dev; | ||
474 | struct platform_device *pdev = to_platform_device(dev); | ||
475 | struct resource *res; | ||
476 | |||
477 | /* Index 0 is the config reg. space address */ | ||
478 | res = platform_get_resource(pdev, IORESOURCE_MEM, 0); | ||
479 | pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); | ||
480 | if (IS_ERR(pci->dbi_base)) | ||
481 | return PTR_ERR(pci->dbi_base); | ||
482 | |||
483 | /* | ||
484 | * We set these to the same address; it is used in the pcie | ||
485 | * rd/wr_other_conf functions | ||
486 | */ | ||
487 | pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET; | ||
488 | pp->va_cfg1_base = pp->va_cfg0_base; | ||
489 | |||
490 | /* Index 1 is the application reg. space address */ | ||
491 | res = platform_get_resource(pdev, IORESOURCE_MEM, 1); | ||
492 | ks_pcie->va_app_base = devm_ioremap_resource(dev, res); | ||
493 | if (IS_ERR(ks_pcie->va_app_base)) | ||
494 | return PTR_ERR(ks_pcie->va_app_base); | ||
495 | |||
496 | ks_pcie->app = *res; | ||
497 | |||
498 | /* Create legacy IRQ domain */ | ||
499 | ks_pcie->legacy_irq_domain = | ||
500 | irq_domain_add_linear(ks_pcie->legacy_intc_np, | ||
501 | PCI_NUM_INTX, | ||
502 | &ks_pcie_legacy_irq_domain_ops, | ||
503 | NULL); | ||
504 | if (!ks_pcie->legacy_irq_domain) { | ||
505 | dev_err(dev, "Failed to add irq domain for legacy irqs\n"); | ||
506 | return -EINVAL; | ||
507 | } | ||
508 | |||
509 | return dw_pcie_host_init(pp); | ||
510 | } | ||
511 | |||
512 | static void ks_pcie_quirk(struct pci_dev *dev) | ||
43 | { | 513 | { |
44 | struct pci_bus *bus = dev->bus; | 514 | struct pci_bus *bus = dev->bus; |
45 | struct pci_dev *bridge = bus->self; | 515 | struct pci_dev *bridge; |
46 | static const struct pci_device_id rc_pci_devids[] = { | 516 | static const struct pci_device_id rc_pci_devids[] = { |
47 | { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK), | 517 | { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK), |
48 | .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, | 518 | .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, |
@@ -50,11 +520,13 @@ static void quirk_limit_mrrs(struct pci_dev *dev) | |||
50 | .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, | 520 | .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, |
51 | { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L), | 521 | { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L), |
52 | .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, | 522 | .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, |
523 | { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2G), | ||
524 | .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, | ||
53 | { 0, }, | 525 | { 0, }, |
54 | }; | 526 | }; |
55 | 527 | ||
56 | if (pci_is_root_bus(bus)) | 528 | if (pci_is_root_bus(bus)) |
57 | return; | 529 | bridge = dev; |
58 | 530 | ||
59 | /* look for the host bridge */ | 531 | /* look for the host bridge */ |
60 | while (!pci_is_root_bus(bus)) { | 532 | while (!pci_is_root_bus(bus)) { |
@@ -62,43 +534,39 @@ static void quirk_limit_mrrs(struct pci_dev *dev) | |||
62 | bus = bus->parent; | 534 | bus = bus->parent; |
63 | } | 535 | } |
64 | 536 | ||
65 | if (bridge) { | 537 | if (!bridge) |
66 | /* | 538 | return; |
67 | * Keystone PCI controller has a h/w limitation of | 539 | |
68 | * 256 bytes maximum read request size. It can't handle | 540 | /* |
69 | * anything higher than this. So force this limit on | 541 | * Keystone PCI controller has a h/w limitation of |
70 | * all downstream devices. | 542 | * 256 bytes maximum read request size. It can't handle |
71 | */ | 543 | * anything higher than this. So force this limit on |
72 | if (pci_match_id(rc_pci_devids, bridge)) { | 544 | * all downstream devices. |
73 | if (pcie_get_readrq(dev) > 256) { | 545 | */ |
74 | dev_info(&dev->dev, "limiting MRRS to 256\n"); | 546 | if (pci_match_id(rc_pci_devids, bridge)) { |
75 | pcie_set_readrq(dev, 256); | 547 | if (pcie_get_readrq(dev) > 256) { |
76 | } | 548 | dev_info(&dev->dev, "limiting MRRS to 256\n"); |
549 | pcie_set_readrq(dev, 256); | ||
77 | } | 550 | } |
78 | } | 551 | } |
79 | } | 552 | } |
80 | DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, quirk_limit_mrrs); | 553 | DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk); |
81 | 554 | ||
82 | static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie) | 555 | static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie) |
83 | { | 556 | { |
84 | struct dw_pcie *pci = ks_pcie->pci; | 557 | struct dw_pcie *pci = ks_pcie->pci; |
85 | struct pcie_port *pp = &pci->pp; | ||
86 | struct device *dev = pci->dev; | 558 | struct device *dev = pci->dev; |
87 | unsigned int retries; | ||
88 | |||
89 | dw_pcie_setup_rc(pp); | ||
90 | 559 | ||
91 | if (dw_pcie_link_up(pci)) { | 560 | if (dw_pcie_link_up(pci)) { |
92 | dev_info(dev, "Link already up\n"); | 561 | dev_info(dev, "Link already up\n"); |
93 | return 0; | 562 | return 0; |
94 | } | 563 | } |
95 | 564 | ||
565 | ks_pcie_initiate_link_train(ks_pcie); | ||
566 | |||
96 | /* check if the link is up or not */ | 567 | /* check if the link is up or not */ |
97 | for (retries = 0; retries < 5; retries++) { | 568 | if (!dw_pcie_wait_for_link(pci)) |
98 | ks_dw_pcie_initiate_link_train(ks_pcie); | 569 | return 0; |
99 | if (!dw_pcie_wait_for_link(pci)) | ||
100 | return 0; | ||
101 | } | ||
102 | 570 | ||
103 | dev_err(dev, "phy link never came up\n"); | 571 | dev_err(dev, "phy link never came up\n"); |
104 | return -ETIMEDOUT; | 572 | return -ETIMEDOUT; |
@@ -121,7 +589,7 @@ static void ks_pcie_msi_irq_handler(struct irq_desc *desc) | |||
121 | * ack operation. | 589 | * ack operation. |
122 | */ | 590 | */ |
123 | chained_irq_enter(chip, desc); | 591 | chained_irq_enter(chip, desc); |
124 | ks_dw_pcie_handle_msi_irq(ks_pcie, offset); | 592 | ks_pcie_handle_msi_irq(ks_pcie, offset); |
125 | chained_irq_exit(chip, desc); | 593 | chained_irq_exit(chip, desc); |
126 | } | 594 | } |
127 | 595 | ||
@@ -150,7 +618,7 @@ static void ks_pcie_legacy_irq_handler(struct irq_desc *desc) | |||
150 | * ack operation. | 618 | * ack operation. |
151 | */ | 619 | */ |
152 | chained_irq_enter(chip, desc); | 620 | chained_irq_enter(chip, desc); |
153 | ks_dw_pcie_handle_legacy_irq(ks_pcie, irq_offset); | 621 | ks_pcie_handle_legacy_irq(ks_pcie, irq_offset); |
154 | chained_irq_exit(chip, desc); | 622 | chained_irq_exit(chip, desc); |
155 | } | 623 | } |
156 | 624 | ||
@@ -222,7 +690,7 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie) | |||
222 | ks_pcie_legacy_irq_handler, | 690 | ks_pcie_legacy_irq_handler, |
223 | ks_pcie); | 691 | ks_pcie); |
224 | } | 692 | } |
225 | ks_dw_pcie_enable_legacy_irqs(ks_pcie); | 693 | ks_pcie_enable_legacy_irqs(ks_pcie); |
226 | 694 | ||
227 | /* MSI IRQ */ | 695 | /* MSI IRQ */ |
228 | if (IS_ENABLED(CONFIG_PCI_MSI)) { | 696 | if (IS_ENABLED(CONFIG_PCI_MSI)) { |
@@ -234,7 +702,7 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie) | |||
234 | } | 702 | } |
235 | 703 | ||
236 | if (ks_pcie->error_irq > 0) | 704 | if (ks_pcie->error_irq > 0) |
237 | ks_dw_pcie_enable_error_irq(ks_pcie); | 705 | ks_pcie_enable_error_irq(ks_pcie); |
238 | } | 706 | } |
239 | 707 | ||
240 | /* | 708 | /* |
@@ -242,8 +710,8 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie) | |||
242 | * bus error instead of returning 0xffffffff. This handler always returns 0 | 710 | * bus error instead of returning 0xffffffff. This handler always returns 0 |
243 | * for these kinds of faults. | 711 | for these kinds of faults. |
244 | */ | 712 | */ |
245 | static int keystone_pcie_fault(unsigned long addr, unsigned int fsr, | 713 | static int ks_pcie_fault(unsigned long addr, unsigned int fsr, |
246 | struct pt_regs *regs) | 714 | struct pt_regs *regs) |
247 | { | 715 | { |
248 | unsigned long instr = *(unsigned long *) instruction_pointer(regs); | 716 | unsigned long instr = *(unsigned long *) instruction_pointer(regs); |
249 | 717 | ||
@@ -257,59 +725,78 @@ static int keystone_pcie_fault(unsigned long addr, unsigned int fsr, | |||
257 | return 0; | 725 | return 0; |
258 | } | 726 | } |
259 | 727 | ||
728 | static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie) | ||
729 | { | ||
730 | int ret; | ||
731 | unsigned int id; | ||
732 | struct regmap *devctrl_regs; | ||
733 | struct dw_pcie *pci = ks_pcie->pci; | ||
734 | struct device *dev = pci->dev; | ||
735 | struct device_node *np = dev->of_node; | ||
736 | |||
737 | devctrl_regs = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-id"); | ||
738 | if (IS_ERR(devctrl_regs)) | ||
739 | return PTR_ERR(devctrl_regs); | ||
740 | |||
741 | ret = regmap_read(devctrl_regs, 0, &id); | ||
742 | if (ret) | ||
743 | return ret; | ||
744 | |||
745 | dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, id & PCIE_VENDORID_MASK); | ||
746 | dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, id >> PCIE_DEVICEID_SHIFT); | ||
747 | |||
748 | return 0; | ||
749 | } | ||
750 | |||
260 | static int __init ks_pcie_host_init(struct pcie_port *pp) | 751 | static int __init ks_pcie_host_init(struct pcie_port *pp) |
261 | { | 752 | { |
262 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); | 753 | struct dw_pcie *pci = to_dw_pcie_from_pp(pp); |
263 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); | 754 | struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); |
264 | u32 val; | 755 | int ret; |
756 | |||
757 | dw_pcie_setup_rc(pp); | ||
265 | 758 | ||
266 | ks_pcie_establish_link(ks_pcie); | 759 | ks_pcie_establish_link(ks_pcie); |
267 | ks_dw_pcie_setup_rc_app_regs(ks_pcie); | 760 | ks_pcie_setup_rc_app_regs(ks_pcie); |
268 | ks_pcie_setup_interrupts(ks_pcie); | 761 | ks_pcie_setup_interrupts(ks_pcie); |
269 | writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8), | 762 | writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8), |
270 | pci->dbi_base + PCI_IO_BASE); | 763 | pci->dbi_base + PCI_IO_BASE); |
271 | 764 | ||
272 | /* update the Vendor ID */ | 765 | ret = ks_pcie_init_id(ks_pcie); |
273 | writew(ks_pcie->device_id, pci->dbi_base + PCI_DEVICE_ID); | 766 | if (ret < 0) |
274 | 767 | return ret; | |
275 | /* update the DEV_STAT_CTRL to publish right mrrs */ | ||
276 | val = readl(pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL); | ||
277 | val &= ~PCI_EXP_DEVCTL_READRQ; | ||
278 | /* set the mrrs to 256 bytes */ | ||
279 | val |= BIT(12); | ||
280 | writel(val, pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL); | ||
281 | 768 | ||
282 | /* | 769 | /* |
283 | * PCIe access errors that result into OCP errors are caught by ARM as | 770 | * PCIe access errors that result into OCP errors are caught by ARM as |
284 | * "External aborts" | 771 | * "External aborts" |
285 | */ | 772 | */ |
286 | hook_fault_code(17, keystone_pcie_fault, SIGBUS, 0, | 773 | hook_fault_code(17, ks_pcie_fault, SIGBUS, 0, |
287 | "Asynchronous external abort"); | 774 | "Asynchronous external abort"); |
288 | 775 | ||
289 | return 0; | 776 | return 0; |
290 | } | 777 | } |
291 | 778 | ||
292 | static const struct dw_pcie_host_ops keystone_pcie_host_ops = { | 779 | static const struct dw_pcie_host_ops ks_pcie_host_ops = { |
293 | .rd_other_conf = ks_dw_pcie_rd_other_conf, | 780 | .rd_other_conf = ks_pcie_rd_other_conf, |
294 | .wr_other_conf = ks_dw_pcie_wr_other_conf, | 781 | .wr_other_conf = ks_pcie_wr_other_conf, |
295 | .host_init = ks_pcie_host_init, | 782 | .host_init = ks_pcie_host_init, |
296 | .msi_set_irq = ks_dw_pcie_msi_set_irq, | 783 | .msi_set_irq = ks_pcie_msi_set_irq, |
297 | .msi_clear_irq = ks_dw_pcie_msi_clear_irq, | 784 | .msi_clear_irq = ks_pcie_msi_clear_irq, |
298 | .get_msi_addr = ks_dw_pcie_get_msi_addr, | 785 | .get_msi_addr = ks_pcie_get_msi_addr, |
299 | .msi_host_init = ks_dw_pcie_msi_host_init, | 786 | .msi_host_init = ks_pcie_msi_host_init, |
300 | .msi_irq_ack = ks_dw_pcie_msi_irq_ack, | 787 | .msi_irq_ack = ks_pcie_msi_irq_ack, |
301 | .scan_bus = ks_dw_pcie_v3_65_scan_bus, | 788 | .scan_bus = ks_pcie_v3_65_scan_bus, |
302 | }; | 789 | }; |
303 | 790 | ||
304 | static irqreturn_t pcie_err_irq_handler(int irq, void *priv) | 791 | static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv) |
305 | { | 792 | { |
306 | struct keystone_pcie *ks_pcie = priv; | 793 | struct keystone_pcie *ks_pcie = priv; |
307 | 794 | ||
308 | return ks_dw_pcie_handle_error_irq(ks_pcie); | 795 | return ks_pcie_handle_error_irq(ks_pcie); |
309 | } | 796 | } |
310 | 797 | ||
311 | static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie, | 798 | static int __init ks_pcie_add_pcie_port(struct keystone_pcie *ks_pcie, |
312 | struct platform_device *pdev) | 799 | struct platform_device *pdev) |
313 | { | 800 | { |
314 | struct dw_pcie *pci = ks_pcie->pci; | 801 | struct dw_pcie *pci = ks_pcie->pci; |
315 | struct pcie_port *pp = &pci->pp; | 802 | struct pcie_port *pp = &pci->pp; |
@@ -338,7 +825,7 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie, | |||
338 | if (ks_pcie->error_irq <= 0) | 825 | if (ks_pcie->error_irq <= 0) |
339 | dev_info(dev, "no error IRQ defined\n"); | 826 | dev_info(dev, "no error IRQ defined\n"); |
340 | else { | 827 | else { |
341 | ret = request_irq(ks_pcie->error_irq, pcie_err_irq_handler, | 828 | ret = request_irq(ks_pcie->error_irq, ks_pcie_err_irq_handler, |
342 | IRQF_SHARED, "pcie-error-irq", ks_pcie); | 829 | IRQF_SHARED, "pcie-error-irq", ks_pcie); |
343 | if (ret < 0) { | 830 | if (ret < 0) { |
344 | dev_err(dev, "failed to request error IRQ %d\n", | 831 | dev_err(dev, "failed to request error IRQ %d\n", |
@@ -347,8 +834,8 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie, | |||
347 | } | 834 | } |
348 | } | 835 | } |
349 | 836 | ||
350 | pp->ops = &keystone_pcie_host_ops; | 837 | pp->ops = &ks_pcie_host_ops; |
351 | ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np); | 838 | ret = ks_pcie_dw_host_init(ks_pcie); |
352 | if (ret) { | 839 | if (ret) { |
353 | dev_err(dev, "failed to initialize host\n"); | 840 | dev_err(dev, "failed to initialize host\n"); |
354 | return ret; | 841 | return ret; |
@@ -365,28 +852,62 @@ static const struct of_device_id ks_pcie_of_match[] = { | |||
365 | { }, | 852 | { }, |
366 | }; | 853 | }; |
367 | 854 | ||
368 | static const struct dw_pcie_ops dw_pcie_ops = { | 855 | static const struct dw_pcie_ops ks_pcie_dw_pcie_ops = { |
369 | .link_up = ks_dw_pcie_link_up, | 856 | .link_up = ks_pcie_link_up, |
370 | }; | 857 | }; |
371 | 858 | ||
372 | static int __exit ks_pcie_remove(struct platform_device *pdev) | 859 | static void ks_pcie_disable_phy(struct keystone_pcie *ks_pcie) |
373 | { | 860 | { |
374 | struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev); | 861 | int num_lanes = ks_pcie->num_lanes; |
375 | 862 | ||
376 | clk_disable_unprepare(ks_pcie->clk); | 863 | while (num_lanes--) { |
864 | phy_power_off(ks_pcie->phy[num_lanes]); | ||
865 | phy_exit(ks_pcie->phy[num_lanes]); | ||
866 | } | ||
867 | } | ||
868 | |||
869 | static int ks_pcie_enable_phy(struct keystone_pcie *ks_pcie) | ||
870 | { | ||
871 | int i; | ||
872 | int ret; | ||
873 | int num_lanes = ks_pcie->num_lanes; | ||
874 | |||
875 | for (i = 0; i < num_lanes; i++) { | ||
876 | ret = phy_init(ks_pcie->phy[i]); | ||
877 | if (ret < 0) | ||
878 | goto err_phy; | ||
879 | |||
880 | ret = phy_power_on(ks_pcie->phy[i]); | ||
881 | if (ret < 0) { | ||
882 | phy_exit(ks_pcie->phy[i]); | ||
883 | goto err_phy; | ||
884 | } | ||
885 | } | ||
377 | 886 | ||
378 | return 0; | 887 | return 0; |
888 | |||
889 | err_phy: | ||
890 | while (--i >= 0) { | ||
891 | phy_power_off(ks_pcie->phy[i]); | ||
892 | phy_exit(ks_pcie->phy[i]); | ||
893 | } | ||
894 | |||
895 | return ret; | ||
379 | } | 896 | } |
380 | 897 | ||
381 | static int __init ks_pcie_probe(struct platform_device *pdev) | 898 | static int __init ks_pcie_probe(struct platform_device *pdev) |
382 | { | 899 | { |
383 | struct device *dev = &pdev->dev; | 900 | struct device *dev = &pdev->dev; |
901 | struct device_node *np = dev->of_node; | ||
384 | struct dw_pcie *pci; | 902 | struct dw_pcie *pci; |
385 | struct keystone_pcie *ks_pcie; | 903 | struct keystone_pcie *ks_pcie; |
386 | struct resource *res; | 904 | struct device_link **link; |
387 | void __iomem *reg_p; | 905 | u32 num_viewport; |
388 | struct phy *phy; | 906 | struct phy **phy; |
907 | u32 num_lanes; | ||
908 | char name[10]; | ||
389 | int ret; | 909 | int ret; |
910 | int i; | ||
390 | 911 | ||
391 | ks_pcie = devm_kzalloc(dev, sizeof(*ks_pcie), GFP_KERNEL); | 912 | ks_pcie = devm_kzalloc(dev, sizeof(*ks_pcie), GFP_KERNEL); |
392 | if (!ks_pcie) | 913 | if (!ks_pcie) |
@@ -397,54 +918,99 @@ static int __init ks_pcie_probe(struct platform_device *pdev) | |||
397 | return -ENOMEM; | 918 | return -ENOMEM; |
398 | 919 | ||
399 | pci->dev = dev; | 920 | pci->dev = dev; |
400 | pci->ops = &dw_pcie_ops; | 921 | pci->ops = &ks_pcie_dw_pcie_ops; |
401 | 922 | ||
402 | ks_pcie->pci = pci; | 923 | ret = of_property_read_u32(np, "num-viewport", &num_viewport); |
924 | if (ret < 0) { | ||
925 | dev_err(dev, "unable to read *num-viewport* property\n"); | ||
926 | return ret; | ||
927 | } | ||
403 | 928 | ||
404 | /* initialize SerDes Phy if present */ | 929 | ret = of_property_read_u32(np, "num-lanes", &num_lanes); |
405 | phy = devm_phy_get(dev, "pcie-phy"); | 930 | if (ret) |
406 | if (PTR_ERR_OR_ZERO(phy) == -EPROBE_DEFER) | 931 | num_lanes = 1; |
407 | return PTR_ERR(phy); | ||
408 | 932 | ||
409 | if (!IS_ERR_OR_NULL(phy)) { | 933 | phy = devm_kzalloc(dev, sizeof(*phy) * num_lanes, GFP_KERNEL); |
410 | ret = phy_init(phy); | 934 | if (!phy) |
411 | if (ret < 0) | 935 | return -ENOMEM; |
412 | return ret; | 936 | |
937 | link = devm_kzalloc(dev, sizeof(*link) * num_lanes, GFP_KERNEL); | ||
938 | if (!link) | ||
939 | return -ENOMEM; | ||
940 | |||
941 | for (i = 0; i < num_lanes; i++) { | ||
942 | snprintf(name, sizeof(name), "pcie-phy%d", i); | ||
943 | phy[i] = devm_phy_optional_get(dev, name); | ||
944 | if (IS_ERR(phy[i])) { | ||
945 | ret = PTR_ERR(phy[i]); | ||
946 | goto err_link; | ||
947 | } | ||
948 | |||
949 | if (!phy[i]) | ||
950 | continue; | ||
951 | |||
952 | link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS); | ||
953 | if (!link[i]) { | ||
954 | ret = -EINVAL; | ||
955 | goto err_link; | ||
956 | } | ||
413 | } | 957 | } |
414 | 958 | ||
415 | /* index 2 is to read PCI DEVICE_ID */ | 959 | ks_pcie->np = np; |
416 | res = platform_get_resource(pdev, IORESOURCE_MEM, 2); | 960 | ks_pcie->pci = pci; |
417 | reg_p = devm_ioremap_resource(dev, res); | 961 | ks_pcie->link = link; |
418 | if (IS_ERR(reg_p)) | 962 | ks_pcie->num_lanes = num_lanes; |
419 | return PTR_ERR(reg_p); | 963 | ks_pcie->num_viewport = num_viewport; |
420 | ks_pcie->device_id = readl(reg_p) >> 16; | 964 | ks_pcie->phy = phy; |
421 | devm_iounmap(dev, reg_p); | ||
422 | devm_release_mem_region(dev, res->start, resource_size(res)); | ||
423 | 965 | ||
424 | ks_pcie->np = dev->of_node; | 966 | ret = ks_pcie_enable_phy(ks_pcie); |
425 | platform_set_drvdata(pdev, ks_pcie); | 967 | if (ret) { |
426 | ks_pcie->clk = devm_clk_get(dev, "pcie"); | 968 | dev_err(dev, "failed to enable phy\n"); |
427 | if (IS_ERR(ks_pcie->clk)) { | 969 | goto err_link; |
428 | dev_err(dev, "Failed to get pcie rc clock\n"); | ||
429 | return PTR_ERR(ks_pcie->clk); | ||
430 | } | 970 | } |
431 | ret = clk_prepare_enable(ks_pcie->clk); | ||
432 | if (ret) | ||
433 | return ret; | ||
434 | 971 | ||
435 | platform_set_drvdata(pdev, ks_pcie); | 972 | platform_set_drvdata(pdev, ks_pcie); |
973 | pm_runtime_enable(dev); | ||
974 | ret = pm_runtime_get_sync(dev); | ||
975 | if (ret < 0) { | ||
976 | dev_err(dev, "pm_runtime_get_sync failed\n"); | ||
977 | goto err_get_sync; | ||
978 | } | ||
436 | 979 | ||
437 | ret = ks_add_pcie_port(ks_pcie, pdev); | 980 | ret = ks_pcie_add_pcie_port(ks_pcie, pdev); |
438 | if (ret < 0) | 981 | if (ret < 0) |
439 | goto fail_clk; | 982 | goto err_get_sync; |
440 | 983 | ||
441 | return 0; | 984 | return 0; |
442 | fail_clk: | 985 | |
443 | clk_disable_unprepare(ks_pcie->clk); | 986 | err_get_sync: |
987 | pm_runtime_put(dev); | ||
988 | pm_runtime_disable(dev); | ||
989 | ks_pcie_disable_phy(ks_pcie); | ||
990 | |||
991 | err_link: | ||
992 | while (--i >= 0 && link[i]) | ||
993 | device_link_del(link[i]); | ||
444 | 994 | ||
445 | return ret; | 995 | return ret; |
446 | } | 996 | } |
447 | 997 | ||
998 | static int __exit ks_pcie_remove(struct platform_device *pdev) | ||
999 | { | ||
1000 | struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev); | ||
1001 | struct device_link **link = ks_pcie->link; | ||
1002 | int num_lanes = ks_pcie->num_lanes; | ||
1003 | struct device *dev = &pdev->dev; | ||
1004 | |||
1005 | pm_runtime_put(dev); | ||
1006 | pm_runtime_disable(dev); | ||
1007 | ks_pcie_disable_phy(ks_pcie); | ||
1008 | while (num_lanes--) | ||
1009 | device_link_del(link[num_lanes]); | ||
1010 | |||
1011 | return 0; | ||
1012 | } | ||
1013 | |||
448 | static struct platform_driver ks_pcie_driver __refdata = { | 1014 | static struct platform_driver ks_pcie_driver __refdata = { |
449 | .probe = ks_pcie_probe, | 1015 | .probe = ks_pcie_probe, |
450 | .remove = __exit_p(ks_pcie_remove), | 1016 | .remove = __exit_p(ks_pcie_remove), |
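The reworked keystone probe above acquires one PHY and one device link per lane, and on any failure jumps to `err_link`, releasing only the lanes already set up with `while (--i >= 0)`. A minimal, self-contained C sketch of that acquire-in-order/unwind-in-reverse pattern (the `acquire_lane`/`release_lane` names are illustrative stand-ins, not kernel APIs):

```c
#include <assert.h>

/* Hypothetical stand-ins for the per-lane resources the probe
 * acquires (PHYs and device links in the real driver). */
#define NUM_LANES 4

static int acquired[NUM_LANES];

static int acquire_lane(int i, int fail_at)
{
	if (i == fail_at)
		return -1;	/* simulate a failed phy/link lookup */
	acquired[i] = 1;
	return 0;
}

static void release_lane(int i)
{
	acquired[i] = 0;
}

/* Mirrors the probe's error flow: on failure at lane i, unwind only
 * lanes 0..i-1 (the "err_link" label in the hunk above). */
int probe_lanes(int fail_at)
{
	int i, ret = 0;

	for (i = 0; i < NUM_LANES; i++) {
		ret = acquire_lane(i, fail_at);
		if (ret)
			goto err_unwind;
	}
	return 0;

err_unwind:
	while (--i >= 0)
		release_lane(i);
	return ret;
}
```

The pre-decrement in the unwind loop is what skips the lane that failed, so a resource that was never obtained is never released.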
diff --git a/drivers/pci/controller/dwc/pci-keystone.h b/drivers/pci/controller/dwc/pci-keystone.h
deleted file mode 100644
index 8a13da391543..000000000000
--- a/drivers/pci/controller/dwc/pci-keystone.h
+++ /dev/null
@@ -1,57 +0,0 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | /* | ||
3 | * Keystone PCI Controller's common includes | ||
4 | * | ||
5 | * Copyright (C) 2013-2014 Texas Instruments., Ltd. | ||
6 | * http://www.ti.com | ||
7 | * | ||
8 | * Author: Murali Karicheri <m-karicheri2@ti.com> | ||
9 | */ | ||
10 | |||
11 | #define MAX_MSI_HOST_IRQS 8 | ||
12 | |||
13 | struct keystone_pcie { | ||
14 | struct dw_pcie *pci; | ||
15 | struct clk *clk; | ||
16 | /* PCI Device ID */ | ||
17 | u32 device_id; | ||
18 | int num_legacy_host_irqs; | ||
19 | int legacy_host_irqs[PCI_NUM_INTX]; | ||
20 | struct device_node *legacy_intc_np; | ||
21 | |||
22 | int num_msi_host_irqs; | ||
23 | int msi_host_irqs[MAX_MSI_HOST_IRQS]; | ||
24 | struct device_node *msi_intc_np; | ||
25 | struct irq_domain *legacy_irq_domain; | ||
26 | struct device_node *np; | ||
27 | |||
28 | int error_irq; | ||
29 | |||
30 | /* Application register space */ | ||
31 | void __iomem *va_app_base; /* DT 1st resource */ | ||
32 | struct resource app; | ||
33 | }; | ||
34 | |||
35 | /* Keystone DW specific MSI controller APIs/definitions */ | ||
36 | void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset); | ||
37 | phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp); | ||
38 | |||
39 | /* Keystone specific PCI controller APIs */ | ||
40 | void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie); | ||
41 | void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset); | ||
42 | void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie); | ||
43 | irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie); | ||
44 | int ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, | ||
45 | struct device_node *msi_intc_np); | ||
46 | int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
47 | unsigned int devfn, int where, int size, u32 val); | ||
48 | int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, | ||
49 | unsigned int devfn, int where, int size, u32 *val); | ||
50 | void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie); | ||
51 | void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie); | ||
52 | void ks_dw_pcie_msi_irq_ack(int i, struct pcie_port *pp); | ||
53 | void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq); | ||
54 | void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq); | ||
55 | void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp); | ||
56 | int ks_dw_pcie_msi_host_init(struct pcie_port *pp); | ||
57 | int ks_dw_pcie_link_up(struct dw_pcie *pci); | ||
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 9f1a5e399b70..0989d880ac46 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -36,6 +36,10 @@ | |||
36 | #define PORT_LINK_MODE_4_LANES (0x7 << 16) | 36 | #define PORT_LINK_MODE_4_LANES (0x7 << 16) |
37 | #define PORT_LINK_MODE_8_LANES (0xf << 16) | 37 | #define PORT_LINK_MODE_8_LANES (0xf << 16) |
38 | 38 | ||
39 | #define PCIE_PORT_DEBUG0 0x728 | ||
40 | #define PORT_LOGIC_LTSSM_STATE_MASK 0x1f | ||
41 | #define PORT_LOGIC_LTSSM_STATE_L0 0x11 | ||
42 | |||
39 | #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C | 43 | #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C |
40 | #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) | 44 | #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) |
41 | #define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8) | 45 | #define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8) |
diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c
index 5352e0c3be82..9b599296205d 100644
--- a/drivers/pci/controller/dwc/pcie-kirin.c
+++ b/drivers/pci/controller/dwc/pcie-kirin.c
@@ -467,8 +467,8 @@ static int kirin_pcie_add_msi(struct dw_pcie *pci, | |||
467 | return 0; | 467 | return 0; |
468 | } | 468 | } |
469 | 469 | ||
470 | static int __init kirin_add_pcie_port(struct dw_pcie *pci, | 470 | static int kirin_add_pcie_port(struct dw_pcie *pci, |
471 | struct platform_device *pdev) | 471 | struct platform_device *pdev) |
472 | { | 472 | { |
473 | int ret; | 473 | int ret; |
474 | 474 | ||
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 4352c1cb926d..d185ea5fe996 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -1089,7 +1089,6 @@ static int qcom_pcie_host_init(struct pcie_port *pp) | |||
1089 | struct qcom_pcie *pcie = to_qcom_pcie(pci); | 1089 | struct qcom_pcie *pcie = to_qcom_pcie(pci); |
1090 | int ret; | 1090 | int ret; |
1091 | 1091 | ||
1092 | pm_runtime_get_sync(pci->dev); | ||
1093 | qcom_ep_reset_assert(pcie); | 1092 | qcom_ep_reset_assert(pcie); |
1094 | 1093 | ||
1095 | ret = pcie->ops->init(pcie); | 1094 | ret = pcie->ops->init(pcie); |
@@ -1126,7 +1125,6 @@ err_disable_phy: | |||
1126 | phy_power_off(pcie->phy); | 1125 | phy_power_off(pcie->phy); |
1127 | err_deinit: | 1126 | err_deinit: |
1128 | pcie->ops->deinit(pcie); | 1127 | pcie->ops->deinit(pcie); |
1129 | pm_runtime_put(pci->dev); | ||
1130 | 1128 | ||
1131 | return ret; | 1129 | return ret; |
1132 | } | 1130 | } |
@@ -1216,6 +1214,12 @@ static int qcom_pcie_probe(struct platform_device *pdev) | |||
1216 | return -ENOMEM; | 1214 | return -ENOMEM; |
1217 | 1215 | ||
1218 | pm_runtime_enable(dev); | 1216 | pm_runtime_enable(dev); |
1217 | ret = pm_runtime_get_sync(dev); | ||
1218 | if (ret < 0) { | ||
1219 | pm_runtime_disable(dev); | ||
1220 | return ret; | ||
1221 | } | ||
1222 | |||
1219 | pci->dev = dev; | 1223 | pci->dev = dev; |
1220 | pci->ops = &dw_pcie_ops; | 1224 | pci->ops = &dw_pcie_ops; |
1221 | pp = &pci->pp; | 1225 | pp = &pci->pp; |
@@ -1225,44 +1229,56 @@ static int qcom_pcie_probe(struct platform_device *pdev) | |||
1225 | pcie->ops = of_device_get_match_data(dev); | 1229 | pcie->ops = of_device_get_match_data(dev); |
1226 | 1230 | ||
1227 | pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW); | 1231 | pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW); |
1228 | if (IS_ERR(pcie->reset)) | 1232 | if (IS_ERR(pcie->reset)) { |
1229 | return PTR_ERR(pcie->reset); | 1233 | ret = PTR_ERR(pcie->reset); |
1234 | goto err_pm_runtime_put; | ||
1235 | } | ||
1230 | 1236 | ||
1231 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf"); | 1237 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf"); |
1232 | pcie->parf = devm_ioremap_resource(dev, res); | 1238 | pcie->parf = devm_ioremap_resource(dev, res); |
1233 | if (IS_ERR(pcie->parf)) | 1239 | if (IS_ERR(pcie->parf)) { |
1234 | return PTR_ERR(pcie->parf); | 1240 | ret = PTR_ERR(pcie->parf); |
1241 | goto err_pm_runtime_put; | ||
1242 | } | ||
1235 | 1243 | ||
1236 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); | 1244 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); |
1237 | pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); | 1245 | pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); |
1238 | if (IS_ERR(pci->dbi_base)) | 1246 | if (IS_ERR(pci->dbi_base)) { |
1239 | return PTR_ERR(pci->dbi_base); | 1247 | ret = PTR_ERR(pci->dbi_base); |
1248 | goto err_pm_runtime_put; | ||
1249 | } | ||
1240 | 1250 | ||
1241 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); | 1251 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); |
1242 | pcie->elbi = devm_ioremap_resource(dev, res); | 1252 | pcie->elbi = devm_ioremap_resource(dev, res); |
1243 | if (IS_ERR(pcie->elbi)) | 1253 | if (IS_ERR(pcie->elbi)) { |
1244 | return PTR_ERR(pcie->elbi); | 1254 | ret = PTR_ERR(pcie->elbi); |
1255 | goto err_pm_runtime_put; | ||
1256 | } | ||
1245 | 1257 | ||
1246 | pcie->phy = devm_phy_optional_get(dev, "pciephy"); | 1258 | pcie->phy = devm_phy_optional_get(dev, "pciephy"); |
1247 | if (IS_ERR(pcie->phy)) | 1259 | if (IS_ERR(pcie->phy)) { |
1248 | return PTR_ERR(pcie->phy); | 1260 | ret = PTR_ERR(pcie->phy); |
1261 | goto err_pm_runtime_put; | ||
1262 | } | ||
1249 | 1263 | ||
1250 | ret = pcie->ops->get_resources(pcie); | 1264 | ret = pcie->ops->get_resources(pcie); |
1251 | if (ret) | 1265 | if (ret) |
1252 | return ret; | 1266 | goto err_pm_runtime_put; |
1253 | 1267 | ||
1254 | pp->ops = &qcom_pcie_dw_ops; | 1268 | pp->ops = &qcom_pcie_dw_ops; |
1255 | 1269 | ||
1256 | if (IS_ENABLED(CONFIG_PCI_MSI)) { | 1270 | if (IS_ENABLED(CONFIG_PCI_MSI)) { |
1257 | pp->msi_irq = platform_get_irq_byname(pdev, "msi"); | 1271 | pp->msi_irq = platform_get_irq_byname(pdev, "msi"); |
1258 | if (pp->msi_irq < 0) | 1272 | if (pp->msi_irq < 0) { |
1259 | return pp->msi_irq; | 1273 | ret = pp->msi_irq; |
1274 | goto err_pm_runtime_put; | ||
1275 | } | ||
1260 | } | 1276 | } |
1261 | 1277 | ||
1262 | ret = phy_init(pcie->phy); | 1278 | ret = phy_init(pcie->phy); |
1263 | if (ret) { | 1279 | if (ret) { |
1264 | pm_runtime_disable(&pdev->dev); | 1280 | pm_runtime_disable(&pdev->dev); |
1265 | return ret; | 1281 | goto err_pm_runtime_put; |
1266 | } | 1282 | } |
1267 | 1283 | ||
1268 | platform_set_drvdata(pdev, pcie); | 1284 | platform_set_drvdata(pdev, pcie); |
@@ -1271,10 +1287,16 @@ static int qcom_pcie_probe(struct platform_device *pdev) | |||
1271 | if (ret) { | 1287 | if (ret) { |
1272 | dev_err(dev, "cannot initialize host\n"); | 1288 | dev_err(dev, "cannot initialize host\n"); |
1273 | pm_runtime_disable(&pdev->dev); | 1289 | pm_runtime_disable(&pdev->dev); |
1274 | return ret; | 1290 | goto err_pm_runtime_put; |
1275 | } | 1291 | } |
1276 | 1292 | ||
1277 | return 0; | 1293 | return 0; |
1294 | |||
1295 | err_pm_runtime_put: | ||
1296 | pm_runtime_put(dev); | ||
1297 | pm_runtime_disable(dev); | ||
1298 | |||
1299 | return ret; | ||
1278 | } | 1300 | } |
1279 | 1301 | ||
1280 | static const struct of_device_id qcom_pcie_match[] = { | 1302 | static const struct of_device_id qcom_pcie_match[] = { |
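The qcom changes above move `pm_runtime_get_sync()` to the top of probe and funnel every later failure through a single `err_pm_runtime_put` label, so the runtime-PM usage count stays balanced no matter where probe bails out. A toy model of that invariant (plain C, `pm_get`/`pm_put` are hypothetical stand-ins for the runtime-PM calls):

```c
#include <assert.h>

/* Toy usage counter modelling the runtime-PM reference the probe
 * now takes once up front and drops on any failure path. */
static int usage_count;

static int pm_get(void)  { ++usage_count; return 0; }
static void pm_put(void) { --usage_count; }

/* step_to_fail < 0 means every probe step succeeds. */
int probe(int nr_steps, int step_to_fail)
{
	int i, ret;

	ret = pm_get();			/* pm_runtime_get_sync() */
	if (ret)
		return ret;

	for (i = 0; i < nr_steps; i++) {
		if (i == step_to_fail) {
			ret = -1;	/* e.g. a resource lookup failed */
			goto err_pm_runtime_put;
		}
	}
	return 0;			/* stays powered while bound */

err_pm_runtime_put:
	pm_put();			/* single exit keeps get/put balanced */
	return ret;
}
```

On success the count is left at one (the device is held active while bound); on any failure it returns to zero, which is exactly what the scattered early `return`s in the old probe could not guarantee.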
diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
index 6b4555ff2548..750081c1cb48 100644
--- a/drivers/pci/controller/pci-aardvark.c
+++ b/drivers/pci/controller/pci-aardvark.c
@@ -20,12 +20,16 @@ | |||
20 | #include <linux/of_pci.h> | 20 | #include <linux/of_pci.h> |
21 | 21 | ||
22 | #include "../pci.h" | 22 | #include "../pci.h" |
23 | #include "../pci-bridge-emul.h" | ||
23 | 24 | ||
24 | /* PCIe core registers */ | 25 | /* PCIe core registers */ |
26 | #define PCIE_CORE_DEV_ID_REG 0x0 | ||
25 | #define PCIE_CORE_CMD_STATUS_REG 0x4 | 27 | #define PCIE_CORE_CMD_STATUS_REG 0x4 |
26 | #define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0) | 28 | #define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0) |
27 | #define PCIE_CORE_CMD_MEM_ACCESS_EN BIT(1) | 29 | #define PCIE_CORE_CMD_MEM_ACCESS_EN BIT(1) |
28 | #define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2) | 30 | #define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2) |
31 | #define PCIE_CORE_DEV_REV_REG 0x8 | ||
32 | #define PCIE_CORE_PCIEXP_CAP 0xc0 | ||
29 | #define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8 | 33 | #define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8 |
30 | #define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4) | 34 | #define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4) |
31 | #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 | 35 | #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 |
@@ -41,7 +45,10 @@ | |||
41 | #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6) | 45 | #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6) |
42 | #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7) | 46 | #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7) |
43 | #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8) | 47 | #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8) |
44 | 48 | #define PCIE_CORE_INT_A_ASSERT_ENABLE 1 | |
49 | #define PCIE_CORE_INT_B_ASSERT_ENABLE 2 | ||
50 | #define PCIE_CORE_INT_C_ASSERT_ENABLE 3 | ||
51 | #define PCIE_CORE_INT_D_ASSERT_ENABLE 4 | ||
45 | /* PIO registers base address and register offsets */ | 52 | /* PIO registers base address and register offsets */ |
46 | #define PIO_BASE_ADDR 0x4000 | 53 | #define PIO_BASE_ADDR 0x4000 |
47 | #define PIO_CTRL (PIO_BASE_ADDR + 0x0) | 54 | #define PIO_CTRL (PIO_BASE_ADDR + 0x0) |
@@ -93,7 +100,9 @@ | |||
93 | #define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5) | 100 | #define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5) |
94 | #define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6) | 101 | #define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6) |
95 | #define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10) | 102 | #define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10) |
103 | #define PCIE_MSG_LOG_REG (CONTROL_BASE_ADDR + 0x30) | ||
96 | #define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40) | 104 | #define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40) |
105 | #define PCIE_MSG_PM_PME_MASK BIT(7) | ||
97 | #define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44) | 106 | #define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44) |
98 | #define PCIE_ISR0_MSI_INT_PENDING BIT(24) | 107 | #define PCIE_ISR0_MSI_INT_PENDING BIT(24) |
99 | #define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val)) | 108 | #define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val)) |
@@ -189,6 +198,7 @@ struct advk_pcie { | |||
189 | struct mutex msi_used_lock; | 198 | struct mutex msi_used_lock; |
190 | u16 msi_msg; | 199 | u16 msi_msg; |
191 | int root_bus_nr; | 200 | int root_bus_nr; |
201 | struct pci_bridge_emul bridge; | ||
192 | }; | 202 | }; |
193 | 203 | ||
194 | static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg) | 204 | static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg) |
@@ -390,6 +400,109 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie) | |||
390 | return -ETIMEDOUT; | 400 | return -ETIMEDOUT; |
391 | } | 401 | } |
392 | 402 | ||
403 | |||
404 | static pci_bridge_emul_read_status_t | ||
405 | advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, | ||
406 | int reg, u32 *value) | ||
407 | { | ||
408 | struct advk_pcie *pcie = bridge->data; | ||
409 | |||
410 | |||
411 | switch (reg) { | ||
412 | case PCI_EXP_SLTCTL: | ||
413 | *value = PCI_EXP_SLTSTA_PDS << 16; | ||
414 | return PCI_BRIDGE_EMUL_HANDLED; | ||
415 | |||
416 | case PCI_EXP_RTCTL: { | ||
417 | u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG); | ||
418 | *value = (val & PCIE_MSG_PM_PME_MASK) ? PCI_EXP_RTCTL_PMEIE : 0; | ||
419 | return PCI_BRIDGE_EMUL_HANDLED; | ||
420 | } | ||
421 | |||
422 | case PCI_EXP_RTSTA: { | ||
423 | u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG); | ||
424 | u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG); | ||
425 | *value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16); | ||
426 | return PCI_BRIDGE_EMUL_HANDLED; | ||
427 | } | ||
428 | |||
429 | case PCI_CAP_LIST_ID: | ||
430 | case PCI_EXP_DEVCAP: | ||
431 | case PCI_EXP_DEVCTL: | ||
432 | case PCI_EXP_LNKCAP: | ||
433 | case PCI_EXP_LNKCTL: | ||
434 | *value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg); | ||
435 | return PCI_BRIDGE_EMUL_HANDLED; | ||
436 | default: | ||
437 | return PCI_BRIDGE_EMUL_NOT_HANDLED; | ||
438 | } | ||
439 | |||
440 | } | ||
441 | |||
442 | static void | ||
443 | advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, | ||
444 | int reg, u32 old, u32 new, u32 mask) | ||
445 | { | ||
446 | struct advk_pcie *pcie = bridge->data; | ||
447 | |||
448 | switch (reg) { | ||
449 | case PCI_EXP_DEVCTL: | ||
450 | case PCI_EXP_LNKCTL: | ||
451 | advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg); | ||
452 | break; | ||
453 | |||
454 | case PCI_EXP_RTCTL: | ||
455 | new = (new & PCI_EXP_RTCTL_PMEIE) << 3; | ||
456 | advk_writel(pcie, new, PCIE_ISR0_MASK_REG); | ||
457 | break; | ||
458 | |||
459 | case PCI_EXP_RTSTA: | ||
460 | new = (new & PCI_EXP_RTSTA_PME) >> 9; | ||
461 | advk_writel(pcie, new, PCIE_ISR0_REG); | ||
462 | break; | ||
463 | |||
464 | default: | ||
465 | break; | ||
466 | } | ||
467 | } | ||
468 | |||
469 | struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = { | ||
470 | .read_pcie = advk_pci_bridge_emul_pcie_conf_read, | ||
471 | .write_pcie = advk_pci_bridge_emul_pcie_conf_write, | ||
472 | }; | ||
473 | |||
474 | /* | ||
475 | * Initialize the configuration space of the PCI-to-PCI bridge | ||
476 | * associated with the given PCIe interface. | ||
477 | */ | ||
478 | static void advk_sw_pci_bridge_init(struct advk_pcie *pcie) | ||
479 | { | ||
480 | struct pci_bridge_emul *bridge = &pcie->bridge; | ||
481 | |||
482 | bridge->conf.vendor = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff; | ||
483 | bridge->conf.device = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) >> 16; | ||
484 | bridge->conf.class_revision = | ||
485 | advk_readl(pcie, PCIE_CORE_DEV_REV_REG) & 0xff; | ||
486 | |||
487 | /* Support 32 bits I/O addressing */ | ||
488 | bridge->conf.iobase = PCI_IO_RANGE_TYPE_32; | ||
489 | bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32; | ||
490 | |||
491 | /* Support 64 bits memory pref */ | ||
492 | bridge->conf.pref_mem_base = PCI_PREF_RANGE_TYPE_64; | ||
493 | bridge->conf.pref_mem_limit = PCI_PREF_RANGE_TYPE_64; | ||
494 | |||
495 | /* Support interrupt A for MSI feature */ | ||
496 | bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE; | ||
497 | |||
498 | bridge->has_pcie = true; | ||
499 | bridge->data = pcie; | ||
500 | bridge->ops = &advk_pci_bridge_emul_ops; | ||
501 | |||
502 | pci_bridge_emul_init(bridge); | ||
503 | |||
504 | } | ||
505 | |||
393 | static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus, | 506 | static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus, |
394 | int devfn) | 507 | int devfn) |
395 | { | 508 | { |
@@ -411,6 +524,10 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, | |||
411 | return PCIBIOS_DEVICE_NOT_FOUND; | 524 | return PCIBIOS_DEVICE_NOT_FOUND; |
412 | } | 525 | } |
413 | 526 | ||
527 | if (bus->number == pcie->root_bus_nr) | ||
528 | return pci_bridge_emul_conf_read(&pcie->bridge, where, | ||
529 | size, val); | ||
530 | |||
414 | /* Start PIO */ | 531 | /* Start PIO */ |
415 | advk_writel(pcie, 0, PIO_START); | 532 | advk_writel(pcie, 0, PIO_START); |
416 | advk_writel(pcie, 1, PIO_ISR); | 533 | advk_writel(pcie, 1, PIO_ISR); |
@@ -418,7 +535,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, | |||
418 | /* Program the control register */ | 535 | /* Program the control register */ |
419 | reg = advk_readl(pcie, PIO_CTRL); | 536 | reg = advk_readl(pcie, PIO_CTRL); |
420 | reg &= ~PIO_CTRL_TYPE_MASK; | 537 | reg &= ~PIO_CTRL_TYPE_MASK; |
421 | if (bus->number == pcie->root_bus_nr) | 538 | if (bus->primary == pcie->root_bus_nr) |
422 | reg |= PCIE_CONFIG_RD_TYPE0; | 539 | reg |= PCIE_CONFIG_RD_TYPE0; |
423 | else | 540 | else |
424 | reg |= PCIE_CONFIG_RD_TYPE1; | 541 | reg |= PCIE_CONFIG_RD_TYPE1; |
@@ -463,6 +580,10 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, | |||
463 | if (!advk_pcie_valid_device(pcie, bus, devfn)) | 580 | if (!advk_pcie_valid_device(pcie, bus, devfn)) |
464 | return PCIBIOS_DEVICE_NOT_FOUND; | 581 | return PCIBIOS_DEVICE_NOT_FOUND; |
465 | 582 | ||
583 | if (bus->number == pcie->root_bus_nr) | ||
584 | return pci_bridge_emul_conf_write(&pcie->bridge, where, | ||
585 | size, val); | ||
586 | |||
466 | if (where % size) | 587 | if (where % size) |
467 | return PCIBIOS_SET_FAILED; | 588 | return PCIBIOS_SET_FAILED; |
468 | 589 | ||
@@ -473,7 +594,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, | |||
473 | /* Program the control register */ | 594 | /* Program the control register */ |
474 | reg = advk_readl(pcie, PIO_CTRL); | 595 | reg = advk_readl(pcie, PIO_CTRL); |
475 | reg &= ~PIO_CTRL_TYPE_MASK; | 596 | reg &= ~PIO_CTRL_TYPE_MASK; |
476 | if (bus->number == pcie->root_bus_nr) | 597 | if (bus->primary == pcie->root_bus_nr) |
477 | reg |= PCIE_CONFIG_WR_TYPE0; | 598 | reg |= PCIE_CONFIG_WR_TYPE0; |
478 | else | 599 | else |
479 | reg |= PCIE_CONFIG_WR_TYPE1; | 600 | reg |= PCIE_CONFIG_WR_TYPE1; |
@@ -875,6 +996,8 @@ static int advk_pcie_probe(struct platform_device *pdev) | |||
875 | 996 | ||
876 | advk_pcie_setup_hw(pcie); | 997 | advk_pcie_setup_hw(pcie); |
877 | 998 | ||
999 | advk_sw_pci_bridge_init(pcie); | ||
1000 | |||
878 | ret = advk_pcie_init_irq_domain(pcie); | 1001 | ret = advk_pcie_init_irq_domain(pcie); |
879 | if (ret) { | 1002 | if (ret) { |
880 | dev_err(dev, "Failed to initialize irq\n"); | 1003 | dev_err(dev, "Failed to initialize irq\n"); |
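In the aardvark hunks above, config accesses to the root bus are now routed to `pci_bridge_emul`, whose driver callback either claims a register (`PCI_BRIDGE_EMUL_HANDLED`) or defers to the emulated defaults (`PCI_BRIDGE_EMUL_NOT_HANDLED`). A stripped-down sketch of that dispatch, with invented register offsets and a zeroed fallback standing in for the emulated config space:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified mirror of the pci_bridge_emul read dispatch. */
enum emul_status { EMUL_HANDLED, EMUL_NOT_HANDLED };

/* Illustrative offsets only, not the real capability layout. */
#define REG_SLTCTL 0x18
#define REG_LNKCAP 0x0c

static enum emul_status drv_read(int reg, uint32_t *val)
{
	if (reg == REG_SLTCTL) {	/* synthesize slot status */
		*val = 1u << 22;
		return EMUL_HANDLED;
	}
	return EMUL_NOT_HANDLED;	/* let the core answer */
}

uint32_t bridge_conf_read(int reg)
{
	uint32_t val;

	if (drv_read(reg, &val) == EMUL_HANDLED)
		return val;	/* driver-provided value */
	return 0;		/* fall back to stored defaults */
}
```

The point of the split is that the driver only has to intercept the few registers backed by real hardware state (slot, root control/status, link registers here), while everything else comes from the emulated bridge's stored configuration.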
diff --git a/drivers/pci/controller/pci-host-common.c b/drivers/pci/controller/pci-host-common.c
index d8f10451f273..c742881b5061 100644
--- a/drivers/pci/controller/pci-host-common.c
+++ b/drivers/pci/controller/pci-host-common.c
@@ -58,9 +58,7 @@ err_out: | |||
58 | int pci_host_common_probe(struct platform_device *pdev, | 58 | int pci_host_common_probe(struct platform_device *pdev, |
59 | struct pci_ecam_ops *ops) | 59 | struct pci_ecam_ops *ops) |
60 | { | 60 | { |
61 | const char *type; | ||
62 | struct device *dev = &pdev->dev; | 61 | struct device *dev = &pdev->dev; |
63 | struct device_node *np = dev->of_node; | ||
64 | struct pci_host_bridge *bridge; | 62 | struct pci_host_bridge *bridge; |
65 | struct pci_config_window *cfg; | 63 | struct pci_config_window *cfg; |
66 | struct list_head resources; | 64 | struct list_head resources; |
@@ -70,12 +68,6 @@ int pci_host_common_probe(struct platform_device *pdev, | |||
70 | if (!bridge) | 68 | if (!bridge) |
71 | return -ENOMEM; | 69 | return -ENOMEM; |
72 | 70 | ||
73 | type = of_get_property(np, "device_type", NULL); | ||
74 | if (!type || strcmp(type, "pci")) { | ||
75 | dev_err(dev, "invalid \"device_type\" %s\n", type); | ||
76 | return -EINVAL; | ||
77 | } | ||
78 | |||
79 | of_pci_check_probe_only(); | 71 | of_pci_check_probe_only(); |
80 | 72 | ||
81 | /* Parse and map our Configuration Space windows */ | 73 | /* Parse and map our Configuration Space windows */ |
diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
index a41d79b8d46a..fa0fc46edb0c 100644
--- a/drivers/pci/controller/pci-mvebu.c
+++ b/drivers/pci/controller/pci-mvebu.c
@@ -22,6 +22,7 @@ | |||
22 | #include <linux/of_platform.h> | 22 | #include <linux/of_platform.h> |
23 | 23 | ||
24 | #include "../pci.h" | 24 | #include "../pci.h" |
25 | #include "../pci-bridge-emul.h" | ||
25 | 26 | ||
26 | /* | 27 | /* |
27 | * PCIe unit register offsets. | 28 | * PCIe unit register offsets. |
@@ -63,61 +64,6 @@ | |||
63 | #define PCIE_DEBUG_CTRL 0x1a60 | 64 | #define PCIE_DEBUG_CTRL 0x1a60 |
64 | #define PCIE_DEBUG_SOFT_RESET BIT(20) | 65 | #define PCIE_DEBUG_SOFT_RESET BIT(20) |
65 | 66 | ||
66 | enum { | ||
67 | PCISWCAP = PCI_BRIDGE_CONTROL + 2, | ||
68 | PCISWCAP_EXP_LIST_ID = PCISWCAP + PCI_CAP_LIST_ID, | ||
69 | PCISWCAP_EXP_DEVCAP = PCISWCAP + PCI_EXP_DEVCAP, | ||
70 | PCISWCAP_EXP_DEVCTL = PCISWCAP + PCI_EXP_DEVCTL, | ||
71 | PCISWCAP_EXP_LNKCAP = PCISWCAP + PCI_EXP_LNKCAP, | ||
72 | PCISWCAP_EXP_LNKCTL = PCISWCAP + PCI_EXP_LNKCTL, | ||
73 | PCISWCAP_EXP_SLTCAP = PCISWCAP + PCI_EXP_SLTCAP, | ||
74 | PCISWCAP_EXP_SLTCTL = PCISWCAP + PCI_EXP_SLTCTL, | ||
75 | PCISWCAP_EXP_RTCTL = PCISWCAP + PCI_EXP_RTCTL, | ||
76 | PCISWCAP_EXP_RTSTA = PCISWCAP + PCI_EXP_RTSTA, | ||
77 | PCISWCAP_EXP_DEVCAP2 = PCISWCAP + PCI_EXP_DEVCAP2, | ||
78 | PCISWCAP_EXP_DEVCTL2 = PCISWCAP + PCI_EXP_DEVCTL2, | ||
79 | PCISWCAP_EXP_LNKCAP2 = PCISWCAP + PCI_EXP_LNKCAP2, | ||
80 | PCISWCAP_EXP_LNKCTL2 = PCISWCAP + PCI_EXP_LNKCTL2, | ||
81 | PCISWCAP_EXP_SLTCAP2 = PCISWCAP + PCI_EXP_SLTCAP2, | ||
82 | PCISWCAP_EXP_SLTCTL2 = PCISWCAP + PCI_EXP_SLTCTL2, | ||
83 | }; | ||
84 | |||
85 | /* PCI configuration space of a PCI-to-PCI bridge */ | ||
86 | struct mvebu_sw_pci_bridge { | ||
87 | u16 vendor; | ||
88 | u16 device; | ||
89 | u16 command; | ||
90 | u16 status; | ||
91 | u16 class; | ||
92 | u8 interface; | ||
93 | u8 revision; | ||
94 | u8 bist; | ||
95 | u8 header_type; | ||
96 | u8 latency_timer; | ||
97 | u8 cache_line_size; | ||
98 | u32 bar[2]; | ||
99 | u8 primary_bus; | ||
100 | u8 secondary_bus; | ||
101 | u8 subordinate_bus; | ||
102 | u8 secondary_latency_timer; | ||
103 | u8 iobase; | ||
104 | u8 iolimit; | ||
105 | u16 secondary_status; | ||
106 | u16 membase; | ||
107 | u16 memlimit; | ||
108 | u16 iobaseupper; | ||
109 | u16 iolimitupper; | ||
110 | u32 romaddr; | ||
111 | u8 intline; | ||
112 | u8 intpin; | ||
113 | u16 bridgectrl; | ||
114 | |||
115 | /* PCI express capability */ | ||
116 | u32 pcie_sltcap; | ||
117 | u16 pcie_devctl; | ||
118 | u16 pcie_rtctl; | ||
119 | }; | ||
120 | |||
121 | struct mvebu_pcie_port; | 67 | struct mvebu_pcie_port; |
122 | 68 | ||
123 | /* Structure representing all PCIe interfaces */ | 69 | /* Structure representing all PCIe interfaces */ |
@@ -153,7 +99,7 @@ struct mvebu_pcie_port { | |||
153 | struct clk *clk; | 99 | struct clk *clk; |
154 | struct gpio_desc *reset_gpio; | 100 | struct gpio_desc *reset_gpio; |
155 | char *reset_name; | 101 | char *reset_name; |
156 | struct mvebu_sw_pci_bridge bridge; | 102 | struct pci_bridge_emul bridge; |
157 | struct device_node *dn; | 103 | struct device_node *dn; |
158 | struct mvebu_pcie *pcie; | 104 | struct mvebu_pcie *pcie; |
159 | struct mvebu_pcie_window memwin; | 105 | struct mvebu_pcie_window memwin; |
@@ -415,11 +361,12 @@ static void mvebu_pcie_set_window(struct mvebu_pcie_port *port, | |||
415 | static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port) | 361 | static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port) |
416 | { | 362 | { |
417 | struct mvebu_pcie_window desired = {}; | 363 | struct mvebu_pcie_window desired = {}; |
364 | struct pci_bridge_emul_conf *conf = &port->bridge.conf; | ||
418 | 365 | ||
419 | /* Are the new iobase/iolimit values invalid? */ | 366 | /* Are the new iobase/iolimit values invalid? */ |
420 | if (port->bridge.iolimit < port->bridge.iobase || | 367 | if (conf->iolimit < conf->iobase || |
421 | port->bridge.iolimitupper < port->bridge.iobaseupper || | 368 | conf->iolimitupper < conf->iobaseupper || |
422 | !(port->bridge.command & PCI_COMMAND_IO)) { | 369 | !(conf->command & PCI_COMMAND_IO)) { |
423 | mvebu_pcie_set_window(port, port->io_target, port->io_attr, | 370 | mvebu_pcie_set_window(port, port->io_target, port->io_attr, |
424 | &desired, &port->iowin); | 371 | &desired, &port->iowin); |
425 | return; | 372 | return; |
@@ -438,11 +385,11 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port) | |||
438 | * specifications. iobase is the bus address, port->iowin_base | 385 | * specifications. iobase is the bus address, port->iowin_base |
439 | * is the CPU address. | 386 | * is the CPU address. |
440 | */ | 387 | */ |
441 | desired.remap = ((port->bridge.iobase & 0xF0) << 8) | | 388 | desired.remap = ((conf->iobase & 0xF0) << 8) | |
442 | (port->bridge.iobaseupper << 16); | 389 | (conf->iobaseupper << 16); |
443 | desired.base = port->pcie->io.start + desired.remap; | 390 | desired.base = port->pcie->io.start + desired.remap; |
444 | desired.size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) | | 391 | desired.size = ((0xFFF | ((conf->iolimit & 0xF0) << 8) | |
445 | (port->bridge.iolimitupper << 16)) - | 392 | (conf->iolimitupper << 16)) - |
446 | desired.remap) + | 393 | desired.remap) + |
447 | 1; | 394 | 1; |
448 | 395 | ||
@@ -453,10 +400,11 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port) | |||
453 | static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port) | 400 | static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port) |
454 | { | 401 | { |
455 | struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP}; | 402 | struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP}; |
403 | struct pci_bridge_emul_conf *conf = &port->bridge.conf; | ||
456 | 404 | ||
457 | /* Are the new membase/memlimit values invalid? */ | 405 | /* Are the new membase/memlimit values invalid? */ |
458 | if (port->bridge.memlimit < port->bridge.membase || | 406 | if (conf->memlimit < conf->membase || |
459 | !(port->bridge.command & PCI_COMMAND_MEMORY)) { | 407 | !(conf->command & PCI_COMMAND_MEMORY)) { |
460 | mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, | 408 | mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, |
461 | &desired, &port->memwin); | 409 | &desired, &port->memwin); |
462 | return; | 410 | return; |
@@ -468,130 +416,32 @@ static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port) | |||
468 | * window to setup, according to the PCI-to-PCI bridge | 416 | * window to setup, according to the PCI-to-PCI bridge |
469 | * specifications. | 417 | * specifications. |
470 | */ | 418 | */ |
471 | desired.base = ((port->bridge.membase & 0xFFF0) << 16); | 419 | desired.base = ((conf->membase & 0xFFF0) << 16); |
472 | desired.size = (((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) - | 420 | desired.size = (((conf->memlimit & 0xFFF0) << 16) | 0xFFFFF) - |
473 | desired.base + 1; | 421 | desired.base + 1; |
474 | 422 | ||
475 | mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired, | 423 | mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired, |
476 | &port->memwin); | 424 | &port->memwin); |
477 | } | 425 | } |
478 | 426 | ||
479 | /* | 427 | static pci_bridge_emul_read_status_t |
480 | * Initialize the configuration space of the PCI-to-PCI bridge | 428 | mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, |
481 | * associated with the given PCIe interface. | 429 | int reg, u32 *value) |
482 | */ | ||
483 | static void mvebu_sw_pci_bridge_init(struct mvebu_pcie_port *port) | ||
484 | { | ||
485 | struct mvebu_sw_pci_bridge *bridge = &port->bridge; | ||
486 | |||
487 | memset(bridge, 0, sizeof(struct mvebu_sw_pci_bridge)); | ||
488 | |||
489 | bridge->class = PCI_CLASS_BRIDGE_PCI; | ||
490 | bridge->vendor = PCI_VENDOR_ID_MARVELL; | ||
491 | bridge->device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16; | ||
492 | bridge->revision = mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff; | ||
493 | bridge->header_type = PCI_HEADER_TYPE_BRIDGE; | ||
494 | bridge->cache_line_size = 0x10; | ||
495 | |||
496 | /* We support 32 bits I/O addressing */ | ||
497 | bridge->iobase = PCI_IO_RANGE_TYPE_32; | ||
498 | bridge->iolimit = PCI_IO_RANGE_TYPE_32; | ||
499 | |||
500 | /* Add capabilities */ | ||
501 | bridge->status = PCI_STATUS_CAP_LIST; | ||
502 | } | ||
503 | |||
504 | /* | ||
505 | * Read the configuration space of the PCI-to-PCI bridge associated to | ||
506 | * the given PCIe interface. | ||
507 | */ | ||
508 | static int mvebu_sw_pci_bridge_read(struct mvebu_pcie_port *port, | ||
509 | unsigned int where, int size, u32 *value) | ||
510 | { | 430 | { |
511 | struct mvebu_sw_pci_bridge *bridge = &port->bridge; | 431 | struct mvebu_pcie_port *port = bridge->data; |
512 | |||
513 | switch (where & ~3) { | ||
514 | case PCI_VENDOR_ID: | ||
515 | *value = bridge->device << 16 | bridge->vendor; | ||
516 | break; | ||
517 | |||
518 | case PCI_COMMAND: | ||
519 | *value = bridge->command | bridge->status << 16; | ||
520 | break; | ||
521 | |||
522 | case PCI_CLASS_REVISION: | ||
523 | *value = bridge->class << 16 | bridge->interface << 8 | | ||
524 | bridge->revision; | ||
525 | break; | ||
526 | |||
527 | case PCI_CACHE_LINE_SIZE: | ||
528 | *value = bridge->bist << 24 | bridge->header_type << 16 | | ||
529 | bridge->latency_timer << 8 | bridge->cache_line_size; | ||
530 | break; | ||
531 | |||
532 | case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1: | ||
533 | *value = bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4]; | ||
534 | break; | ||
535 | |||
536 | case PCI_PRIMARY_BUS: | ||
537 | *value = (bridge->secondary_latency_timer << 24 | | ||
538 | bridge->subordinate_bus << 16 | | ||
539 | bridge->secondary_bus << 8 | | ||
540 | bridge->primary_bus); | ||
541 | break; | ||
542 | |||
543 | case PCI_IO_BASE: | ||
544 | if (!mvebu_has_ioport(port)) | ||
545 | *value = bridge->secondary_status << 16; | ||
546 | else | ||
547 | *value = (bridge->secondary_status << 16 | | ||
548 | bridge->iolimit << 8 | | ||
549 | bridge->iobase); | ||
550 | break; | ||
551 | |||
552 | case PCI_MEMORY_BASE: | ||
553 | *value = (bridge->memlimit << 16 | bridge->membase); | ||
554 | break; | ||
555 | |||
556 | case PCI_PREF_MEMORY_BASE: | ||
557 | *value = 0; | ||
558 | break; | ||
559 | |||
560 | case PCI_IO_BASE_UPPER16: | ||
561 | *value = (bridge->iolimitupper << 16 | bridge->iobaseupper); | ||
562 | break; | ||
563 | |||
564 | case PCI_CAPABILITY_LIST: | ||
565 | *value = PCISWCAP; | ||
566 | break; | ||
567 | |||
568 | case PCI_ROM_ADDRESS1: | ||
569 | *value = 0; | ||
570 | break; | ||
571 | 432 | ||
572 | case PCI_INTERRUPT_LINE: | 433 | switch (reg) { |
573 | /* LINE PIN MIN_GNT MAX_LAT */ | 434 | case PCI_EXP_DEVCAP: |
574 | *value = 0; | ||
575 | break; | ||
576 | |||
577 | case PCISWCAP_EXP_LIST_ID: | ||
578 | /* Set PCIe v2, root port, slot support */ | ||
579 | *value = (PCI_EXP_TYPE_ROOT_PORT << 4 | 2 | | ||
580 | PCI_EXP_FLAGS_SLOT) << 16 | PCI_CAP_ID_EXP; | ||
581 | break; | ||
582 | |||
583 | case PCISWCAP_EXP_DEVCAP: | ||
584 | *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP); | 435 | *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP); |
585 | break; | 436 | break; |
586 | 437 | ||
587 | case PCISWCAP_EXP_DEVCTL: | 438 | case PCI_EXP_DEVCTL: |
588 | *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) & | 439 | *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) & |
589 | ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE | | 440 | ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE | |
590 | PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE); | 441 | PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE); |
591 | *value |= bridge->pcie_devctl; | ||
592 | break; | 442 | break; |
593 | 443 | ||
594 | case PCISWCAP_EXP_LNKCAP: | 444 | case PCI_EXP_LNKCAP: |
595 | /* | 445 | /* |
596 | * PCIe requires the clock power management capability to be | 446 | * PCIe requires the clock power management capability to be |
597 | * hard-wired to zero for downstream ports | 447 | * hard-wired to zero for downstream ports |
@@ -600,176 +450,140 @@ static int mvebu_sw_pci_bridge_read(struct mvebu_pcie_port *port, | |||
600 | ~PCI_EXP_LNKCAP_CLKPM; | 450 | ~PCI_EXP_LNKCAP_CLKPM; |
601 | break; | 451 | break; |
602 | 452 | ||
603 | case PCISWCAP_EXP_LNKCTL: | 453 | case PCI_EXP_LNKCTL: |
604 | *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); | 454 | *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); |
605 | break; | 455 | break; |
606 | 456 | ||
607 | case PCISWCAP_EXP_SLTCAP: | 457 | case PCI_EXP_SLTCTL: |
608 | *value = bridge->pcie_sltcap; | ||
609 | break; | ||
610 | |||
611 | case PCISWCAP_EXP_SLTCTL: | ||
612 | *value = PCI_EXP_SLTSTA_PDS << 16; | 458 | *value = PCI_EXP_SLTSTA_PDS << 16; |
613 | break; | 459 | break; |
614 | 460 | ||
615 | case PCISWCAP_EXP_RTCTL: | 461 | case PCI_EXP_RTSTA: |
616 | *value = bridge->pcie_rtctl; | ||
617 | break; | ||
618 | |||
619 | case PCISWCAP_EXP_RTSTA: | ||
620 | *value = mvebu_readl(port, PCIE_RC_RTSTA); | 462 | *value = mvebu_readl(port, PCIE_RC_RTSTA); |
621 | break; | 463 | break; |
622 | 464 | ||
623 | /* PCIe requires the v2 fields to be hard-wired to zero */ | ||
624 | case PCISWCAP_EXP_DEVCAP2: | ||
625 | case PCISWCAP_EXP_DEVCTL2: | ||
626 | case PCISWCAP_EXP_LNKCAP2: | ||
627 | case PCISWCAP_EXP_LNKCTL2: | ||
628 | case PCISWCAP_EXP_SLTCAP2: | ||
629 | case PCISWCAP_EXP_SLTCTL2: | ||
630 | default: | 465 | default: |
631 | /* | 466 | return PCI_BRIDGE_EMUL_NOT_HANDLED; |
632 | * PCI defines configuration read accesses to reserved or | ||
633 | * unimplemented registers to read as zero and complete | ||
634 | * normally. | ||
635 | */ | ||
636 | *value = 0; | ||
637 | return PCIBIOS_SUCCESSFUL; | ||
638 | } | 467 | } |
639 | 468 | ||
640 | if (size == 2) | 469 | return PCI_BRIDGE_EMUL_HANDLED; |
641 | *value = (*value >> (8 * (where & 3))) & 0xffff; | ||
642 | else if (size == 1) | ||
643 | *value = (*value >> (8 * (where & 3))) & 0xff; | ||
644 | |||
645 | return PCIBIOS_SUCCESSFUL; | ||
646 | } | 470 | } |
647 | 471 | ||
648 | /* Write to the PCI-to-PCI bridge configuration space */ | 472 | static void |
649 | static int mvebu_sw_pci_bridge_write(struct mvebu_pcie_port *port, | 473 | mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge, |
650 | unsigned int where, int size, u32 value) | 474 | int reg, u32 old, u32 new, u32 mask) |
651 | { | 475 | { |
652 | struct mvebu_sw_pci_bridge *bridge = &port->bridge; | 476 | struct mvebu_pcie_port *port = bridge->data; |
653 | u32 mask, reg; | 477 | struct pci_bridge_emul_conf *conf = &bridge->conf; |
654 | int err; | ||
655 | |||
656 | if (size == 4) | ||
657 | mask = 0x0; | ||
658 | else if (size == 2) | ||
659 | mask = ~(0xffff << ((where & 3) * 8)); | ||
660 | else if (size == 1) | ||
661 | mask = ~(0xff << ((where & 3) * 8)); | ||
662 | else | ||
663 | return PCIBIOS_BAD_REGISTER_NUMBER; | ||
664 | |||
665 | err = mvebu_sw_pci_bridge_read(port, where & ~3, 4, ®); | ||
666 | if (err) | ||
667 | return err; | ||
668 | 478 | ||
669 | value = (reg & mask) | value << ((where & 3) * 8); | 479 | switch (reg) { |
670 | |||
671 | switch (where & ~3) { | ||
672 | case PCI_COMMAND: | 480 | case PCI_COMMAND: |
673 | { | 481 | { |
674 | u32 old = bridge->command; | ||
675 | |||
676 | if (!mvebu_has_ioport(port)) | 482 | if (!mvebu_has_ioport(port)) |
677 | value &= ~PCI_COMMAND_IO; | 483 | conf->command &= ~PCI_COMMAND_IO; |
678 | 484 | ||
679 | bridge->command = value & 0xffff; | 485 | if ((old ^ new) & PCI_COMMAND_IO) |
680 | if ((old ^ bridge->command) & PCI_COMMAND_IO) | ||
681 | mvebu_pcie_handle_iobase_change(port); | 486 | mvebu_pcie_handle_iobase_change(port); |
682 | if ((old ^ bridge->command) & PCI_COMMAND_MEMORY) | 487 | if ((old ^ new) & PCI_COMMAND_MEMORY) |
683 | mvebu_pcie_handle_membase_change(port); | 488 | mvebu_pcie_handle_membase_change(port); |
684 | break; | ||
685 | } | ||
686 | 489 | ||
687 | case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1: | ||
688 | bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4] = value; | ||
689 | break; | 490 | break; |
491 | } | ||
690 | 492 | ||
691 | case PCI_IO_BASE: | 493 | case PCI_IO_BASE: |
692 | /* | 494 | /* |
693 | * We also keep bit 1 set, it is a read-only bit that | 495 | * We keep bit 1 set, it is a read-only bit that |
694 | * indicates we support 32 bits addressing for the | 496 | * indicates we support 32 bits addressing for the |
695 | * I/O | 497 | * I/O |
696 | */ | 498 | */ |
697 | bridge->iobase = (value & 0xff) | PCI_IO_RANGE_TYPE_32; | 499 | conf->iobase |= PCI_IO_RANGE_TYPE_32; |
698 | bridge->iolimit = ((value >> 8) & 0xff) | PCI_IO_RANGE_TYPE_32; | 500 | conf->iolimit |= PCI_IO_RANGE_TYPE_32; |
699 | mvebu_pcie_handle_iobase_change(port); | 501 | mvebu_pcie_handle_iobase_change(port); |
700 | break; | 502 | break; |
701 | 503 | ||
702 | case PCI_MEMORY_BASE: | 504 | case PCI_MEMORY_BASE: |
703 | bridge->membase = value & 0xffff; | ||
704 | bridge->memlimit = value >> 16; | ||
705 | mvebu_pcie_handle_membase_change(port); | 505 | mvebu_pcie_handle_membase_change(port); |
706 | break; | 506 | break; |
707 | 507 | ||
708 | case PCI_IO_BASE_UPPER16: | 508 | case PCI_IO_BASE_UPPER16: |
709 | bridge->iobaseupper = value & 0xffff; | ||
710 | bridge->iolimitupper = value >> 16; | ||
711 | mvebu_pcie_handle_iobase_change(port); | 509 | mvebu_pcie_handle_iobase_change(port); |
712 | break; | 510 | break; |
713 | 511 | ||
714 | case PCI_PRIMARY_BUS: | 512 | case PCI_PRIMARY_BUS: |
715 | bridge->primary_bus = value & 0xff; | 513 | mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus); |
716 | bridge->secondary_bus = (value >> 8) & 0xff; | 514 | break; |
717 | bridge->subordinate_bus = (value >> 16) & 0xff; | 515 | |
718 | bridge->secondary_latency_timer = (value >> 24) & 0xff; | 516 | default: |
719 | mvebu_pcie_set_local_bus_nr(port, bridge->secondary_bus); | ||
720 | break; | 517 | break; |
518 | } | ||
519 | } | ||
520 | |||
521 | static void | ||
522 | mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, | ||
523 | int reg, u32 old, u32 new, u32 mask) | ||
524 | { | ||
525 | struct mvebu_pcie_port *port = bridge->data; | ||
721 | 526 | ||
722 | case PCISWCAP_EXP_DEVCTL: | 527 | switch (reg) { |
528 | case PCI_EXP_DEVCTL: | ||
723 | /* | 529 | /* |
724 | * Armada370 data says these bits must always | 530 | * Armada370 data says these bits must always |
725 | * be zero when in root complex mode. | 531 | * be zero when in root complex mode. |
726 | */ | 532 | */ |
727 | value &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE | | 533 | new &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE | |
728 | PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE); | 534 | PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE); |
729 | |||
730 | /* | ||
731 | * If the mask is 0xffff0000, then we only want to write | ||
732 | * the device control register, rather than clearing the | ||
733 | * RW1C bits in the device status register. Mask out the | ||
734 | * status register bits. | ||
735 | */ | ||
736 | if (mask == 0xffff0000) | ||
737 | value &= 0xffff; | ||
738 | 535 | ||
739 | mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL); | 536 | mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL); |
740 | break; | 537 | break; |
741 | 538 | ||
742 | case PCISWCAP_EXP_LNKCTL: | 539 | case PCI_EXP_LNKCTL: |
743 | /* | 540 | /* |
744 | * If we don't support CLKREQ, we must ensure that the | 541 | * If we don't support CLKREQ, we must ensure that the |
745 | * CLKREQ enable bit always reads zero. Since we haven't | 542 | * CLKREQ enable bit always reads zero. Since we haven't |
746 | * had this capability, and it's dependent on board wiring, | 543 | * had this capability, and it's dependent on board wiring, |
747 | * disable it for the time being. | 544 | * disable it for the time being. |
748 | */ | 545 | */ |
749 | value &= ~PCI_EXP_LNKCTL_CLKREQ_EN; | 546 | new &= ~PCI_EXP_LNKCTL_CLKREQ_EN; |
750 | 547 | ||
751 | /* | 548 | mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); |
752 | * If the mask is 0xffff0000, then we only want to write | ||
753 | * the link control register, rather than clearing the | ||
754 | * RW1C bits in the link status register. Mask out the | ||
755 | * RW1C status register bits. | ||
756 | */ | ||
757 | if (mask == 0xffff0000) | ||
758 | value &= ~((PCI_EXP_LNKSTA_LABS | | ||
759 | PCI_EXP_LNKSTA_LBMS) << 16); | ||
760 | |||
761 | mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); | ||
762 | break; | 549 | break; |
763 | 550 | ||
764 | case PCISWCAP_EXP_RTSTA: | 551 | case PCI_EXP_RTSTA: |
765 | mvebu_writel(port, value, PCIE_RC_RTSTA); | 552 | mvebu_writel(port, new, PCIE_RC_RTSTA); |
766 | break; | 553 | break; |
554 | } | ||
555 | } | ||
767 | 556 | ||
768 | default: | 557 | struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = { |
769 | break; | 558 | .write_base = mvebu_pci_bridge_emul_base_conf_write, |
559 | .read_pcie = mvebu_pci_bridge_emul_pcie_conf_read, | ||
560 | .write_pcie = mvebu_pci_bridge_emul_pcie_conf_write, | ||
561 | }; | ||
562 | |||
563 | /* | ||
564 | * Initialize the configuration space of the PCI-to-PCI bridge | ||
565 | * associated with the given PCIe interface. | ||
566 | */ | ||
567 | static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port) | ||
568 | { | ||
569 | struct pci_bridge_emul *bridge = &port->bridge; | ||
570 | |||
571 | bridge->conf.vendor = PCI_VENDOR_ID_MARVELL; | ||
572 | bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16; | ||
573 | bridge->conf.class_revision = | ||
574 | mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff; | ||
575 | |||
576 | if (mvebu_has_ioport(port)) { | ||
577 | /* We support 32 bits I/O addressing */ | ||
578 | bridge->conf.iobase = PCI_IO_RANGE_TYPE_32; | ||
579 | bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32; | ||
770 | } | 580 | } |
771 | 581 | ||
772 | return PCIBIOS_SUCCESSFUL; | 582 | bridge->has_pcie = true; |
583 | bridge->data = port; | ||
584 | bridge->ops = &mvebu_pci_bridge_emul_ops; | ||
585 | |||
586 | pci_bridge_emul_init(bridge); | ||
773 | } | 587 | } |
774 | 588 | ||
775 | static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys) | 589 | static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys) |
@@ -789,8 +603,8 @@ static struct mvebu_pcie_port *mvebu_pcie_find_port(struct mvebu_pcie *pcie, | |||
789 | if (bus->number == 0 && port->devfn == devfn) | 603 | if (bus->number == 0 && port->devfn == devfn) |
790 | return port; | 604 | return port; |
791 | if (bus->number != 0 && | 605 | if (bus->number != 0 && |
792 | bus->number >= port->bridge.secondary_bus && | 606 | bus->number >= port->bridge.conf.secondary_bus && |
793 | bus->number <= port->bridge.subordinate_bus) | 607 | bus->number <= port->bridge.conf.subordinate_bus) |
794 | return port; | 608 | return port; |
795 | } | 609 | } |
796 | 610 | ||
@@ -811,7 +625,8 @@ static int mvebu_pcie_wr_conf(struct pci_bus *bus, u32 devfn, | |||
811 | 625 | ||
812 | /* Access the emulated PCI-to-PCI bridge */ | 626 | /* Access the emulated PCI-to-PCI bridge */ |
813 | if (bus->number == 0) | 627 | if (bus->number == 0) |
814 | return mvebu_sw_pci_bridge_write(port, where, size, val); | 628 | return pci_bridge_emul_conf_write(&port->bridge, where, |
629 | size, val); | ||
815 | 630 | ||
816 | if (!mvebu_pcie_link_up(port)) | 631 | if (!mvebu_pcie_link_up(port)) |
817 | return PCIBIOS_DEVICE_NOT_FOUND; | 632 | return PCIBIOS_DEVICE_NOT_FOUND; |
@@ -839,7 +654,8 @@ static int mvebu_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where, | |||
839 | 654 | ||
840 | /* Access the emulated PCI-to-PCI bridge */ | 655 | /* Access the emulated PCI-to-PCI bridge */ |
841 | if (bus->number == 0) | 656 | if (bus->number == 0) |
842 | return mvebu_sw_pci_bridge_read(port, where, size, val); | 657 | return pci_bridge_emul_conf_read(&port->bridge, where, |
658 | size, val); | ||
843 | 659 | ||
844 | if (!mvebu_pcie_link_up(port)) { | 660 | if (!mvebu_pcie_link_up(port)) { |
845 | *val = 0xffffffff; | 661 | *val = 0xffffffff; |
@@ -1297,7 +1113,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev) | |||
1297 | 1113 | ||
1298 | mvebu_pcie_setup_hw(port); | 1114 | mvebu_pcie_setup_hw(port); |
1299 | mvebu_pcie_set_local_dev_nr(port, 1); | 1115 | mvebu_pcie_set_local_dev_nr(port, 1); |
1300 | mvebu_sw_pci_bridge_init(port); | 1116 | mvebu_pci_bridge_emul_init(port); |
1301 | } | 1117 | } |
1302 | 1118 | ||
1303 | pcie->nports = i; | 1119 | pcie->nports = i; |
diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c index 9e87dd7f9ac3..c3a088910f48 100644 --- a/drivers/pci/controller/pcie-cadence-ep.c +++ b/drivers/pci/controller/pcie-cadence-ep.c | |||
@@ -258,7 +258,6 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, | |||
258 | u8 intx, bool is_asserted) | 258 | u8 intx, bool is_asserted) |
259 | { | 259 | { |
260 | struct cdns_pcie *pcie = &ep->pcie; | 260 | struct cdns_pcie *pcie = &ep->pcie; |
261 | u32 r = ep->max_regions - 1; | ||
262 | u32 offset; | 261 | u32 offset; |
263 | u16 status; | 262 | u16 status; |
264 | u8 msg_code; | 263 | u8 msg_code; |
@@ -268,8 +267,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, | |||
268 | /* Set the outbound region if needed. */ | 267 | /* Set the outbound region if needed. */ |
269 | if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY || | 268 | if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY || |
270 | ep->irq_pci_fn != fn)) { | 269 | ep->irq_pci_fn != fn)) { |
271 | /* Last region was reserved for IRQ writes. */ | 270 | /* First region was reserved for IRQ writes. */ |
272 | cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r, | 271 | cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0, |
273 | ep->irq_phys_addr); | 272 | ep->irq_phys_addr); |
274 | ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY; | 273 | ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY; |
275 | ep->irq_pci_fn = fn; | 274 | ep->irq_pci_fn = fn; |
@@ -347,8 +346,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, | |||
347 | /* Set the outbound region if needed. */ | 346 | /* Set the outbound region if needed. */ |
348 | if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) || | 347 | if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) || |
349 | ep->irq_pci_fn != fn)) { | 348 | ep->irq_pci_fn != fn)) { |
350 | /* Last region was reserved for IRQ writes. */ | 349 | /* First region was reserved for IRQ writes. */ |
351 | cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1, | 350 | cdns_pcie_set_outbound_region(pcie, fn, 0, |
352 | false, | 351 | false, |
353 | ep->irq_phys_addr, | 352 | ep->irq_phys_addr, |
354 | pci_addr & ~pci_addr_mask, | 353 | pci_addr & ~pci_addr_mask, |
@@ -356,7 +355,7 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, | |||
356 | ep->irq_pci_addr = (pci_addr & ~pci_addr_mask); | 355 | ep->irq_pci_addr = (pci_addr & ~pci_addr_mask); |
357 | ep->irq_pci_fn = fn; | 356 | ep->irq_pci_fn = fn; |
358 | } | 357 | } |
359 | writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask)); | 358 | writel(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask)); |
360 | 359 | ||
361 | return 0; | 360 | return 0; |
362 | } | 361 | } |
@@ -517,6 +516,8 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev) | |||
517 | goto free_epc_mem; | 516 | goto free_epc_mem; |
518 | } | 517 | } |
519 | ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE; | 518 | ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE; |
519 | /* Reserve region 0 for IRQs */ | ||
520 | set_bit(0, &ep->ob_region_map); | ||
520 | 521 | ||
521 | return 0; | 522 | return 0; |
522 | 523 | ||
diff --git a/drivers/pci/controller/pcie-cadence-host.c b/drivers/pci/controller/pcie-cadence-host.c index ec394f6a19c8..97e251090b4f 100644 --- a/drivers/pci/controller/pcie-cadence-host.c +++ b/drivers/pci/controller/pcie-cadence-host.c | |||
@@ -235,7 +235,6 @@ static int cdns_pcie_host_init(struct device *dev, | |||
235 | 235 | ||
236 | static int cdns_pcie_host_probe(struct platform_device *pdev) | 236 | static int cdns_pcie_host_probe(struct platform_device *pdev) |
237 | { | 237 | { |
238 | const char *type; | ||
239 | struct device *dev = &pdev->dev; | 238 | struct device *dev = &pdev->dev; |
240 | struct device_node *np = dev->of_node; | 239 | struct device_node *np = dev->of_node; |
241 | struct pci_host_bridge *bridge; | 240 | struct pci_host_bridge *bridge; |
@@ -268,12 +267,6 @@ static int cdns_pcie_host_probe(struct platform_device *pdev) | |||
268 | rc->device_id = 0xffff; | 267 | rc->device_id = 0xffff; |
269 | of_property_read_u16(np, "device-id", &rc->device_id); | 268 | of_property_read_u16(np, "device-id", &rc->device_id); |
270 | 269 | ||
271 | type = of_get_property(np, "device_type", NULL); | ||
272 | if (!type || strcmp(type, "pci")) { | ||
273 | dev_err(dev, "invalid \"device_type\" %s\n", type); | ||
274 | return -EINVAL; | ||
275 | } | ||
276 | |||
277 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg"); | 270 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg"); |
278 | pcie->reg_base = devm_ioremap_resource(dev, res); | 271 | pcie->reg_base = devm_ioremap_resource(dev, res); |
279 | if (IS_ERR(pcie->reg_base)) { | 272 | if (IS_ERR(pcie->reg_base)) { |
diff --git a/drivers/pci/controller/pcie-cadence.c b/drivers/pci/controller/pcie-cadence.c index 975bcdd6b5c0..cd795f6fc1e2 100644 --- a/drivers/pci/controller/pcie-cadence.c +++ b/drivers/pci/controller/pcie-cadence.c | |||
@@ -190,14 +190,16 @@ int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie) | |||
190 | 190 | ||
191 | for (i = 0; i < phy_count; i++) { | 191 | for (i = 0; i < phy_count; i++) { |
192 | of_property_read_string_index(np, "phy-names", i, &name); | 192 | of_property_read_string_index(np, "phy-names", i, &name); |
193 | phy[i] = devm_phy_optional_get(dev, name); | 193 | phy[i] = devm_phy_get(dev, name); |
194 | if (IS_ERR(phy)) | 194 | if (IS_ERR(phy[i])) { |
195 | return PTR_ERR(phy); | 195 | ret = PTR_ERR(phy[i]); |
196 | 196 | goto err_phy; | |
197 | } | ||
197 | link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS); | 198 | link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS); |
198 | if (!link[i]) { | 199 | if (!link[i]) { |
200 | devm_phy_put(dev, phy[i]); | ||
199 | ret = -EINVAL; | 201 | ret = -EINVAL; |
200 | goto err_link; | 202 | goto err_phy; |
201 | } | 203 | } |
202 | } | 204 | } |
203 | 205 | ||
@@ -207,13 +209,15 @@ int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie) | |||
207 | 209 | ||
208 | ret = cdns_pcie_enable_phy(pcie); | 210 | ret = cdns_pcie_enable_phy(pcie); |
209 | if (ret) | 211 | if (ret) |
210 | goto err_link; | 212 | goto err_phy; |
211 | 213 | ||
212 | return 0; | 214 | return 0; |
213 | 215 | ||
214 | err_link: | 216 | err_phy: |
215 | while (--i >= 0) | 217 | while (--i >= 0) { |
216 | device_link_del(link[i]); | 218 | device_link_del(link[i]); |
219 | devm_phy_put(dev, phy[i]); | ||
220 | } | ||
217 | 221 | ||
218 | return ret; | 222 | return ret; |
219 | } | 223 | } |
diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c index 3160e9342a2f..c20fd6bd68fd 100644 --- a/drivers/pci/controller/pcie-iproc.c +++ b/drivers/pci/controller/pcie-iproc.c | |||
@@ -630,14 +630,6 @@ static void __iomem *iproc_pcie_map_cfg_bus(struct iproc_pcie *pcie, | |||
630 | return (pcie->base + offset); | 630 | return (pcie->base + offset); |
631 | } | 631 | } |
632 | 632 | ||
633 | /* | ||
634 | * PAXC is connected to an internally emulated EP within the SoC. It | ||
635 | * allows only one device. | ||
636 | */ | ||
637 | if (pcie->ep_is_internal) | ||
638 | if (slot > 0) | ||
639 | return NULL; | ||
640 | |||
641 | return iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where); | 633 | return iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where); |
642 | } | 634 | } |
643 | 635 | ||
diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c index 861dda69f366..d069a76cbb95 100644 --- a/drivers/pci/controller/pcie-mediatek.c +++ b/drivers/pci/controller/pcie-mediatek.c | |||
@@ -15,6 +15,7 @@ | |||
15 | #include <linux/irqdomain.h> | 15 | #include <linux/irqdomain.h> |
16 | #include <linux/kernel.h> | 16 | #include <linux/kernel.h> |
17 | #include <linux/msi.h> | 17 | #include <linux/msi.h> |
18 | #include <linux/module.h> | ||
18 | #include <linux/of_address.h> | 19 | #include <linux/of_address.h> |
19 | #include <linux/of_pci.h> | 20 | #include <linux/of_pci.h> |
20 | #include <linux/of_platform.h> | 21 | #include <linux/of_platform.h> |
@@ -162,6 +163,7 @@ struct mtk_pcie_soc { | |||
162 | * @phy: pointer to PHY control block | 163 | * @phy: pointer to PHY control block |
163 | * @lane: lane count | 164 | * @lane: lane count |
164 | * @slot: port slot | 165 | * @slot: port slot |
166 | * @irq: GIC irq | ||
165 | * @irq_domain: legacy INTx IRQ domain | 167 | * @irq_domain: legacy INTx IRQ domain |
166 | * @inner_domain: inner IRQ domain | 168 | * @inner_domain: inner IRQ domain |
167 | * @msi_domain: MSI IRQ domain | 169 | * @msi_domain: MSI IRQ domain |
@@ -182,6 +184,7 @@ struct mtk_pcie_port { | |||
182 | struct phy *phy; | 184 | struct phy *phy; |
183 | u32 lane; | 185 | u32 lane; |
184 | u32 slot; | 186 | u32 slot; |
187 | int irq; | ||
185 | struct irq_domain *irq_domain; | 188 | struct irq_domain *irq_domain; |
186 | struct irq_domain *inner_domain; | 189 | struct irq_domain *inner_domain; |
187 | struct irq_domain *msi_domain; | 190 | struct irq_domain *msi_domain; |
@@ -225,10 +228,8 @@ static void mtk_pcie_subsys_powerdown(struct mtk_pcie *pcie) | |||
225 | 228 | ||
226 | clk_disable_unprepare(pcie->free_ck); | 229 | clk_disable_unprepare(pcie->free_ck); |
227 | 230 | ||
228 | if (dev->pm_domain) { | 231 | pm_runtime_put_sync(dev); |
229 | pm_runtime_put_sync(dev); | 232 | pm_runtime_disable(dev); |
230 | pm_runtime_disable(dev); | ||
231 | } | ||
232 | } | 233 | } |
233 | 234 | ||
234 | static void mtk_pcie_port_free(struct mtk_pcie_port *port) | 235 | static void mtk_pcie_port_free(struct mtk_pcie_port *port) |
@@ -337,6 +338,17 @@ static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus, | |||
337 | { | 338 | { |
338 | struct mtk_pcie *pcie = bus->sysdata; | 339 | struct mtk_pcie *pcie = bus->sysdata; |
339 | struct mtk_pcie_port *port; | 340 | struct mtk_pcie_port *port; |
341 | struct pci_dev *dev = NULL; | ||
342 | |||
343 | /* | ||
344 | * Walk the bus hierarchy to get the devfn value | ||
345 | * of the port in the root bus. | ||
346 | */ | ||
347 | while (bus && bus->number) { | ||
348 | dev = bus->self; | ||
349 | bus = dev->bus; | ||
350 | devfn = dev->devfn; | ||
351 | } | ||
340 | 352 | ||
341 | list_for_each_entry(port, &pcie->ports, list) | 353 | list_for_each_entry(port, &pcie->ports, list) |
342 | if (port->slot == PCI_SLOT(devfn)) | 354 | if (port->slot == PCI_SLOT(devfn)) |
@@ -383,75 +395,6 @@ static struct pci_ops mtk_pcie_ops_v2 = { | |||
383 | .write = mtk_pcie_config_write, | 395 | .write = mtk_pcie_config_write, |
384 | }; | 396 | }; |
385 | 397 | ||
386 | static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) | ||
387 | { | ||
388 | struct mtk_pcie *pcie = port->pcie; | ||
389 | struct resource *mem = &pcie->mem; | ||
390 | const struct mtk_pcie_soc *soc = port->pcie->soc; | ||
391 | u32 val; | ||
392 | size_t size; | ||
393 | int err; | ||
394 | |||
395 | /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ | ||
396 | if (pcie->base) { | ||
397 | val = readl(pcie->base + PCIE_SYS_CFG_V2); | ||
398 | val |= PCIE_CSR_LTSSM_EN(port->slot) | | ||
399 | PCIE_CSR_ASPM_L1_EN(port->slot); | ||
400 | writel(val, pcie->base + PCIE_SYS_CFG_V2); | ||
401 | } | ||
402 | |||
403 | /* Assert all reset signals */ | ||
404 | writel(0, port->base + PCIE_RST_CTRL); | ||
405 | |||
406 | /* | ||
407 | * Enable PCIe link down reset, if link status changed from link up to | ||
408 | * link down, this will reset MAC control registers and configuration | ||
409 | * space. | ||
410 | */ | ||
411 | writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL); | ||
412 | |||
413 | /* De-assert PHY, PE, PIPE, MAC and configuration reset */ | ||
414 | val = readl(port->base + PCIE_RST_CTRL); | ||
415 | val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB | | ||
416 | PCIE_MAC_SRSTB | PCIE_CRSTB; | ||
417 | writel(val, port->base + PCIE_RST_CTRL); | ||
418 | |||
419 | /* Set up vendor ID and class code */ | ||
420 | if (soc->need_fix_class_id) { | ||
421 | val = PCI_VENDOR_ID_MEDIATEK; | ||
422 | writew(val, port->base + PCIE_CONF_VEND_ID); | ||
423 | |||
424 | val = PCI_CLASS_BRIDGE_HOST; | ||
425 | writew(val, port->base + PCIE_CONF_CLASS_ID); | ||
426 | } | ||
427 | |||
428 | /* 100ms timeout value should be enough for Gen1/2 training */ | ||
429 | err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, | ||
430 | !!(val & PCIE_PORT_LINKUP_V2), 20, | ||
431 | 100 * USEC_PER_MSEC); | ||
432 | if (err) | ||
433 | return -ETIMEDOUT; | ||
434 | |||
435 | /* Set INTx mask */ | ||
436 | val = readl(port->base + PCIE_INT_MASK); | ||
437 | val &= ~INTX_MASK; | ||
438 | writel(val, port->base + PCIE_INT_MASK); | ||
439 | |||
440 | /* Set AHB to PCIe translation windows */ | ||
441 | size = mem->end - mem->start; | ||
442 | val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size)); | ||
443 | writel(val, port->base + PCIE_AHB_TRANS_BASE0_L); | ||
444 | |||
445 | val = upper_32_bits(mem->start); | ||
446 | writel(val, port->base + PCIE_AHB_TRANS_BASE0_H); | ||
447 | |||
448 | /* Set PCIe to AXI translation memory space.*/ | ||
449 | val = fls(0xffffffff) | WIN_ENABLE; | ||
450 | writel(val, port->base + PCIE_AXI_WINDOW0); | ||
451 | |||
452 | return 0; | ||
453 | } | ||
454 | |||
455 | static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) | 398 | static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) |
456 | { | 399 | { |
457 | struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data); | 400 | struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data); |
@@ -590,6 +533,27 @@ static void mtk_pcie_enable_msi(struct mtk_pcie_port *port) | |||
590 | writel(val, port->base + PCIE_INT_MASK); | 533 | writel(val, port->base + PCIE_INT_MASK); |
591 | } | 534 | } |
592 | 535 | ||
536 | static void mtk_pcie_irq_teardown(struct mtk_pcie *pcie) | ||
537 | { | ||
538 | struct mtk_pcie_port *port, *tmp; | ||
539 | |||
540 | list_for_each_entry_safe(port, tmp, &pcie->ports, list) { | ||
541 | irq_set_chained_handler_and_data(port->irq, NULL, NULL); | ||
542 | |||
543 | if (port->irq_domain) | ||
544 | irq_domain_remove(port->irq_domain); | ||
545 | |||
546 | if (IS_ENABLED(CONFIG_PCI_MSI)) { | ||
547 | if (port->msi_domain) | ||
548 | irq_domain_remove(port->msi_domain); | ||
549 | if (port->inner_domain) | ||
550 | irq_domain_remove(port->inner_domain); | ||
551 | } | ||
552 | |||
553 | irq_dispose_mapping(port->irq); | ||
554 | } | ||
555 | } | ||
556 | |||
593 | static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq, | 557 | static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq, |
594 | irq_hw_number_t hwirq) | 558 | irq_hw_number_t hwirq) |
595 | { | 559 | { |
@@ -628,8 +592,6 @@ static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port, | |||
628 | ret = mtk_pcie_allocate_msi_domains(port); | 592 | ret = mtk_pcie_allocate_msi_domains(port); |
629 | if (ret) | 593 | if (ret) |
630 | return ret; | 594 | return ret; |
631 | |||
632 | mtk_pcie_enable_msi(port); | ||
633 | } | 595 | } |
634 | 596 | ||
635 | return 0; | 597 | return 0; |
@@ -682,7 +644,7 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port, | |||
682 | struct mtk_pcie *pcie = port->pcie; | 644 | struct mtk_pcie *pcie = port->pcie; |
683 | struct device *dev = pcie->dev; | 645 | struct device *dev = pcie->dev; |
684 | struct platform_device *pdev = to_platform_device(dev); | 646 | struct platform_device *pdev = to_platform_device(dev); |
685 | int err, irq; | 647 | int err; |
686 | 648 | ||
687 | err = mtk_pcie_init_irq_domain(port, node); | 649 | err = mtk_pcie_init_irq_domain(port, node); |
688 | if (err) { | 650 | if (err) { |
@@ -690,8 +652,81 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port, | |||
690 | return err; | 652 | return err; |
691 | } | 653 | } |
692 | 654 | ||
693 | irq = platform_get_irq(pdev, port->slot); | 655 | port->irq = platform_get_irq(pdev, port->slot); |
694 | irq_set_chained_handler_and_data(irq, mtk_pcie_intr_handler, port); | 656 | irq_set_chained_handler_and_data(port->irq, |
657 | mtk_pcie_intr_handler, port); | ||
658 | |||
659 | return 0; | ||
660 | } | ||
661 | |||
662 | static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) | ||
663 | { | ||
664 | struct mtk_pcie *pcie = port->pcie; | ||
665 | struct resource *mem = &pcie->mem; | ||
666 | const struct mtk_pcie_soc *soc = port->pcie->soc; | ||
667 | u32 val; | ||
668 | size_t size; | ||
669 | int err; | ||
670 | |||
671 | /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ | ||
672 | if (pcie->base) { | ||
673 | val = readl(pcie->base + PCIE_SYS_CFG_V2); | ||
674 | val |= PCIE_CSR_LTSSM_EN(port->slot) | | ||
675 | PCIE_CSR_ASPM_L1_EN(port->slot); | ||
676 | writel(val, pcie->base + PCIE_SYS_CFG_V2); | ||
677 | } | ||
678 | |||
679 | /* Assert all reset signals */ | ||
680 | writel(0, port->base + PCIE_RST_CTRL); | ||
681 | |||
682 | /* | ||
683 | * Enable PCIe link down reset. If the link status changes from link up | ||
684 | * to link down, this will reset the MAC control registers and | ||
685 | * configuration space. | ||
686 | */ | ||
687 | writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL); | ||
688 | |||
689 | /* De-assert PHY, PE, PIPE, MAC and configuration reset */ | ||
690 | val = readl(port->base + PCIE_RST_CTRL); | ||
691 | val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB | | ||
692 | PCIE_MAC_SRSTB | PCIE_CRSTB; | ||
693 | writel(val, port->base + PCIE_RST_CTRL); | ||
694 | |||
695 | /* Set up vendor ID and class code */ | ||
696 | if (soc->need_fix_class_id) { | ||
697 | val = PCI_VENDOR_ID_MEDIATEK; | ||
698 | writew(val, port->base + PCIE_CONF_VEND_ID); | ||
699 | |||
700 | val = PCI_CLASS_BRIDGE_PCI; | ||
701 | writew(val, port->base + PCIE_CONF_CLASS_ID); | ||
702 | } | ||
703 | |||
704 | /* 100ms timeout value should be enough for Gen1/2 training */ | ||
705 | err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, | ||
706 | !!(val & PCIE_PORT_LINKUP_V2), 20, | ||
707 | 100 * USEC_PER_MSEC); | ||
708 | if (err) | ||
709 | return -ETIMEDOUT; | ||
710 | |||
711 | /* Set INTx mask */ | ||
712 | val = readl(port->base + PCIE_INT_MASK); | ||
713 | val &= ~INTX_MASK; | ||
714 | writel(val, port->base + PCIE_INT_MASK); | ||
715 | |||
716 | if (IS_ENABLED(CONFIG_PCI_MSI)) | ||
717 | mtk_pcie_enable_msi(port); | ||
718 | |||
719 | /* Set AHB to PCIe translation windows */ | ||
720 | size = mem->end - mem->start; | ||
721 | val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size)); | ||
722 | writel(val, port->base + PCIE_AHB_TRANS_BASE0_L); | ||
723 | |||
724 | val = upper_32_bits(mem->start); | ||
725 | writel(val, port->base + PCIE_AHB_TRANS_BASE0_H); | ||
726 | |||
727 | /* Set PCIe to AXI translation memory space */ | ||
728 | val = fls(0xffffffff) | WIN_ENABLE; | ||
729 | writel(val, port->base + PCIE_AXI_WINDOW0); | ||
695 | 730 | ||
696 | return 0; | 731 | return 0; |
697 | } | 732 | } |
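mtk_pcie_startup_port_v2() waits for link training with readl_poll_timeout(), which rereads a status register until a condition holds or a time budget expires, then returns 0 or -ETIMEDOUT. A self-contained sketch of that polling contract against a simulated status register (the bit value and all names here are illustrative, not the driver's actual register layout):

```c
#include <stdint.h>
#include <errno.h>

#define DEMO_LINKUP_BIT (1u << 10)	/* hypothetical link-up bit */

static int reads_left = 3;		/* link comes up on the 3rd poll */

/* Simulated MMIO read of the link status register. */
static uint32_t read_link_status(void)
{
	if (--reads_left > 0)
		return 0;
	return DEMO_LINKUP_BIT;
}

/* Minimal stand-in for readl_poll_timeout(): retry until the condition
 * holds or the attempt budget (standing in for the timeout) runs out. */
static int poll_linkup(int max_polls)
{
	uint32_t val;

	while (max_polls--) {
		val = read_link_status();
		if (val & DEMO_LINKUP_BIT)
			return 0;
	}
	return -ETIMEDOUT;
}
```

The driver's 100 ms budget with a 20 µs poll interval corresponds to a few thousand such reads, generous for Gen1/2 link training.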
@@ -987,10 +1022,8 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie) | |||
987 | pcie->free_ck = NULL; | 1022 | pcie->free_ck = NULL; |
988 | } | 1023 | } |
989 | 1024 | ||
990 | if (dev->pm_domain) { | 1025 | pm_runtime_enable(dev); |
991 | pm_runtime_enable(dev); | 1026 | pm_runtime_get_sync(dev); |
992 | pm_runtime_get_sync(dev); | ||
993 | } | ||
994 | 1027 | ||
995 | /* enable top level clock */ | 1028 | /* enable top level clock */ |
996 | err = clk_prepare_enable(pcie->free_ck); | 1029 | err = clk_prepare_enable(pcie->free_ck); |
@@ -1002,10 +1035,8 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie) | |||
1002 | return 0; | 1035 | return 0; |
1003 | 1036 | ||
1004 | err_free_ck: | 1037 | err_free_ck: |
1005 | if (dev->pm_domain) { | 1038 | pm_runtime_put_sync(dev); |
1006 | pm_runtime_put_sync(dev); | 1039 | pm_runtime_disable(dev); |
1007 | pm_runtime_disable(dev); | ||
1008 | } | ||
1009 | 1040 | ||
1010 | return err; | 1041 | return err; |
1011 | } | 1042 | } |
@@ -1109,36 +1140,10 @@ static int mtk_pcie_request_resources(struct mtk_pcie *pcie) | |||
1109 | if (err < 0) | 1140 | if (err < 0) |
1110 | return err; | 1141 | return err; |
1111 | 1142 | ||
1112 | devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start); | 1143 | err = devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start); |
1113 | 1144 | if (err) | |
1114 | return 0; | ||
1115 | } | ||
1116 | |||
1117 | static int mtk_pcie_register_host(struct pci_host_bridge *host) | ||
1118 | { | ||
1119 | struct mtk_pcie *pcie = pci_host_bridge_priv(host); | ||
1120 | struct pci_bus *child; | ||
1121 | int err; | ||
1122 | |||
1123 | host->busnr = pcie->busn.start; | ||
1124 | host->dev.parent = pcie->dev; | ||
1125 | host->ops = pcie->soc->ops; | ||
1126 | host->map_irq = of_irq_parse_and_map_pci; | ||
1127 | host->swizzle_irq = pci_common_swizzle; | ||
1128 | host->sysdata = pcie; | ||
1129 | |||
1130 | err = pci_scan_root_bus_bridge(host); | ||
1131 | if (err < 0) | ||
1132 | return err; | 1145 | return err; |
1133 | 1146 | ||
1134 | pci_bus_size_bridges(host->bus); | ||
1135 | pci_bus_assign_resources(host->bus); | ||
1136 | |||
1137 | list_for_each_entry(child, &host->bus->children, node) | ||
1138 | pcie_bus_configure_settings(child); | ||
1139 | |||
1140 | pci_bus_add_devices(host->bus); | ||
1141 | |||
1142 | return 0; | 1147 | return 0; |
1143 | } | 1148 | } |
1144 | 1149 | ||
@@ -1168,7 +1173,14 @@ static int mtk_pcie_probe(struct platform_device *pdev) | |||
1168 | if (err) | 1173 | if (err) |
1169 | goto put_resources; | 1174 | goto put_resources; |
1170 | 1175 | ||
1171 | err = mtk_pcie_register_host(host); | 1176 | host->busnr = pcie->busn.start; |
1177 | host->dev.parent = pcie->dev; | ||
1178 | host->ops = pcie->soc->ops; | ||
1179 | host->map_irq = of_irq_parse_and_map_pci; | ||
1180 | host->swizzle_irq = pci_common_swizzle; | ||
1181 | host->sysdata = pcie; | ||
1182 | |||
1183 | err = pci_host_probe(host); | ||
1172 | if (err) | 1184 | if (err) |
1173 | goto put_resources; | 1185 | goto put_resources; |
1174 | 1186 | ||
@@ -1181,6 +1193,80 @@ put_resources: | |||
1181 | return err; | 1193 | return err; |
1182 | } | 1194 | } |
1183 | 1195 | ||
1196 | |||
1197 | static void mtk_pcie_free_resources(struct mtk_pcie *pcie) | ||
1198 | { | ||
1199 | struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); | ||
1200 | struct list_head *windows = &host->windows; | ||
1201 | |||
1202 | pci_free_resource_list(windows); | ||
1203 | } | ||
1204 | |||
1205 | static int mtk_pcie_remove(struct platform_device *pdev) | ||
1206 | { | ||
1207 | struct mtk_pcie *pcie = platform_get_drvdata(pdev); | ||
1208 | struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); | ||
1209 | |||
1210 | pci_stop_root_bus(host->bus); | ||
1211 | pci_remove_root_bus(host->bus); | ||
1212 | mtk_pcie_free_resources(pcie); | ||
1213 | |||
1214 | mtk_pcie_irq_teardown(pcie); | ||
1215 | |||
1216 | mtk_pcie_put_resources(pcie); | ||
1217 | |||
1218 | return 0; | ||
1219 | } | ||
1220 | |||
1221 | static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev) | ||
1222 | { | ||
1223 | struct mtk_pcie *pcie = dev_get_drvdata(dev); | ||
1224 | struct mtk_pcie_port *port; | ||
1225 | |||
1226 | if (list_empty(&pcie->ports)) | ||
1227 | return 0; | ||
1228 | |||
1229 | list_for_each_entry(port, &pcie->ports, list) { | ||
1230 | clk_disable_unprepare(port->pipe_ck); | ||
1231 | clk_disable_unprepare(port->obff_ck); | ||
1232 | clk_disable_unprepare(port->axi_ck); | ||
1233 | clk_disable_unprepare(port->aux_ck); | ||
1234 | clk_disable_unprepare(port->ahb_ck); | ||
1235 | clk_disable_unprepare(port->sys_ck); | ||
1236 | phy_power_off(port->phy); | ||
1237 | phy_exit(port->phy); | ||
1238 | } | ||
1239 | |||
1240 | clk_disable_unprepare(pcie->free_ck); | ||
1241 | |||
1242 | return 0; | ||
1243 | } | ||
1244 | |||
1245 | static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev) | ||
1246 | { | ||
1247 | struct mtk_pcie *pcie = dev_get_drvdata(dev); | ||
1248 | struct mtk_pcie_port *port, *tmp; | ||
1249 | |||
1250 | if (list_empty(&pcie->ports)) | ||
1251 | return 0; | ||
1252 | |||
1253 | clk_prepare_enable(pcie->free_ck); | ||
1254 | |||
1255 | list_for_each_entry_safe(port, tmp, &pcie->ports, list) | ||
1256 | mtk_pcie_enable_port(port); | ||
1257 | |||
1258 | /* In case an EP was removed during system suspend */ | ||
1259 | if (list_empty(&pcie->ports)) | ||
1260 | clk_disable_unprepare(pcie->free_ck); | ||
1261 | |||
1262 | return 0; | ||
1263 | } | ||
1264 | |||
1265 | static const struct dev_pm_ops mtk_pcie_pm_ops = { | ||
1266 | SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq, | ||
1267 | mtk_pcie_resume_noirq) | ||
1268 | }; | ||
1269 | |||
1184 | static const struct mtk_pcie_soc mtk_pcie_soc_v1 = { | 1270 | static const struct mtk_pcie_soc mtk_pcie_soc_v1 = { |
1185 | .ops = &mtk_pcie_ops, | 1271 | .ops = &mtk_pcie_ops, |
1186 | .startup = mtk_pcie_startup_port, | 1272 | .startup = mtk_pcie_startup_port, |
@@ -1209,10 +1295,13 @@ static const struct of_device_id mtk_pcie_ids[] = { | |||
1209 | 1295 | ||
1210 | static struct platform_driver mtk_pcie_driver = { | 1296 | static struct platform_driver mtk_pcie_driver = { |
1211 | .probe = mtk_pcie_probe, | 1297 | .probe = mtk_pcie_probe, |
1298 | .remove = mtk_pcie_remove, | ||
1212 | .driver = { | 1299 | .driver = { |
1213 | .name = "mtk-pcie", | 1300 | .name = "mtk-pcie", |
1214 | .of_match_table = mtk_pcie_ids, | 1301 | .of_match_table = mtk_pcie_ids, |
1215 | .suppress_bind_attrs = true, | 1302 | .suppress_bind_attrs = true, |
1303 | .pm = &mtk_pcie_pm_ops, | ||
1216 | }, | 1304 | }, |
1217 | }; | 1305 | }; |
1218 | builtin_platform_driver(mtk_pcie_driver); | 1306 | module_platform_driver(mtk_pcie_driver); |
1307 | MODULE_LICENSE("GPL v2"); | ||
diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c
index a939e8d31735..77052a0712d0 100644
--- a/drivers/pci/controller/pcie-mobiveil.c
+++ b/drivers/pci/controller/pcie-mobiveil.c
@@ -301,13 +301,6 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie) | |||
301 | struct platform_device *pdev = pcie->pdev; | 301 | struct platform_device *pdev = pcie->pdev; |
302 | struct device_node *node = dev->of_node; | 302 | struct device_node *node = dev->of_node; |
303 | struct resource *res; | 303 | struct resource *res; |
304 | const char *type; | ||
305 | |||
306 | type = of_get_property(node, "device_type", NULL); | ||
307 | if (!type || strcmp(type, "pci")) { | ||
308 | dev_err(dev, "invalid \"device_type\" %s\n", type); | ||
309 | return -EINVAL; | ||
310 | } | ||
311 | 304 | ||
312 | /* map config resource */ | 305 | /* map config resource */ |
313 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, | 306 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, |
diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c
index fb32840ce8e6..81538d77f790 100644
--- a/drivers/pci/controller/pcie-xilinx-nwl.c
+++ b/drivers/pci/controller/pcie-xilinx-nwl.c
@@ -777,16 +777,7 @@ static int nwl_pcie_parse_dt(struct nwl_pcie *pcie, | |||
777 | struct platform_device *pdev) | 777 | struct platform_device *pdev) |
778 | { | 778 | { |
779 | struct device *dev = pcie->dev; | 779 | struct device *dev = pcie->dev; |
780 | struct device_node *node = dev->of_node; | ||
781 | struct resource *res; | 780 | struct resource *res; |
782 | const char *type; | ||
783 | |||
784 | /* Check for device type */ | ||
785 | type = of_get_property(node, "device_type", NULL); | ||
786 | if (!type || strcmp(type, "pci")) { | ||
787 | dev_err(dev, "invalid \"device_type\" %s\n", type); | ||
788 | return -EINVAL; | ||
789 | } | ||
790 | 781 | ||
791 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "breg"); | 782 | res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "breg"); |
792 | pcie->breg_base = devm_ioremap_resource(dev, res); | 783 | pcie->breg_base = devm_ioremap_resource(dev, res); |
diff --git a/drivers/pci/controller/pcie-xilinx.c b/drivers/pci/controller/pcie-xilinx.c
index 7b1389d8e2a5..9bd1a35cd5d8 100644
--- a/drivers/pci/controller/pcie-xilinx.c
+++ b/drivers/pci/controller/pcie-xilinx.c
@@ -574,15 +574,8 @@ static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port) | |||
574 | struct device *dev = port->dev; | 574 | struct device *dev = port->dev; |
575 | struct device_node *node = dev->of_node; | 575 | struct device_node *node = dev->of_node; |
576 | struct resource regs; | 576 | struct resource regs; |
577 | const char *type; | ||
578 | int err; | 577 | int err; |
579 | 578 | ||
580 | type = of_get_property(node, "device_type", NULL); | ||
581 | if (!type || strcmp(type, "pci")) { | ||
582 | dev_err(dev, "invalid \"device_type\" %s\n", type); | ||
583 | return -EINVAL; | ||
584 | } | ||
585 | |||
586 | err = of_address_to_resource(node, 0, ®s); | 579 | err = of_address_to_resource(node, 0, ®s); |
587 | if (err) { | 580 | if (err) { |
588 | dev_err(dev, "missing \"reg\" property\n"); | 581 | dev_err(dev, "missing \"reg\" property\n"); |
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index f31ed62d518c..e50b0b5815ff 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -809,12 +809,12 @@ static void vmd_remove(struct pci_dev *dev) | |||
809 | { | 809 | { |
810 | struct vmd_dev *vmd = pci_get_drvdata(dev); | 810 | struct vmd_dev *vmd = pci_get_drvdata(dev); |
811 | 811 | ||
812 | vmd_detach_resources(vmd); | ||
813 | sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); | 812 | sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); |
814 | pci_stop_root_bus(vmd->bus); | 813 | pci_stop_root_bus(vmd->bus); |
815 | pci_remove_root_bus(vmd->bus); | 814 | pci_remove_root_bus(vmd->bus); |
816 | vmd_cleanup_srcu(vmd); | 815 | vmd_cleanup_srcu(vmd); |
817 | vmd_teardown_dma_ops(vmd); | 816 | vmd_teardown_dma_ops(vmd); |
817 | vmd_detach_resources(vmd); | ||
818 | irq_domain_remove(vmd->irq_domain); | 818 | irq_domain_remove(vmd->irq_domain); |
819 | } | 819 | } |
820 | 820 | ||
diff --git a/drivers/pci/hotplug/TODO b/drivers/pci/hotplug/TODO
new file mode 100644
index 000000000000..a32070be5adf
--- /dev/null
+++ b/drivers/pci/hotplug/TODO
@@ -0,0 +1,74 @@ | |||
1 | Contributions are solicited in particular to remedy the following issues: | ||
2 | |||
3 | cpcihp: | ||
4 | |||
5 | * There are no implementations of the ->hardware_test, ->get_power and | ||
6 | ->set_power callbacks in struct cpci_hp_controller_ops. Why were they | ||
7 | introduced? Can they be removed from the struct? | ||
8 | |||
9 | cpqphp: | ||
10 | |||
11 | * The driver spawns a kthread cpqhp_event_thread() which is woken by the | ||
12 | hardirq handler cpqhp_ctrl_intr(). Convert this to threaded IRQ handling. | ||
13 | The kthread is also woken from the timer pushbutton_helper_thread(); | ||
14 | convert it to call irq_wake_thread(). Use pciehp as a template. | ||
15 | |||
16 | * A large portion of cpqphp_ctrl.c and cpqphp_pci.c concerns resource | ||
17 | management. Doesn't this duplicate functionality in the core? | ||
18 | |||
19 | ibmphp: | ||
20 | |||
21 | * Implementations of hotplug_slot_ops callbacks such as get_adapter_present() | ||
22 | in ibmphp_core.c create a copy of the struct slot on the stack, then perform | ||
23 | the actual operation on that copy. Determine if this overhead is necessary, | ||
24 | delete it if not. The functions also perform a NULL pointer check on the | ||
25 | struct hotplug_slot; this seems superfluous. | ||
26 | |||
27 | * Several functions access the pci_slot member in struct hotplug_slot even | ||
28 | though pci_hotplug.h declares it private. See get_max_bus_speed() for an | ||
29 | example. Either the pci_slot member should no longer be declared private | ||
30 | or ibmphp should store a pointer to its bus in struct slot. Probably the | ||
31 | former. | ||
32 | |||
33 | * The functions get_max_adapter_speed() and get_bus_name() are commented out. | ||
34 | Can they be deleted? There are also forward declarations at the top of | ||
35 | ibmphp_core.c as well as pointers in ibmphp_hotplug_slot_ops, likewise | ||
36 | commented out. | ||
37 | |||
38 | * ibmphp_init_devno() takes a struct slot **, it could instead take a | ||
39 | struct slot *. | ||
40 | |||
41 | * The return value of pci_hp_register() is not checked. | ||
42 | |||
43 | * iounmap(io_mem) is called in the error path of ebda_rsrc_controller() | ||
44 | and once more in the error path of its caller ibmphp_access_ebda(). | ||
45 | |||
46 | * The various slot data structures are difficult to follow and need to be | ||
47 | simplified. A lot of functions are too large and too complex; they need | ||
48 | to be broken up into smaller, manageable pieces. Negative examples are | ||
49 | ebda_rsrc_controller() and configure_bridge(). | ||
50 | |||
51 | * A large portion of ibmphp_res.c and ibmphp_pci.c concerns resource | ||
52 | management. Doesn't this duplicate functionality in the core? | ||
53 | |||
54 | sgi_hotplug: | ||
55 | |||
56 | * Several functions access the pci_slot member in struct hotplug_slot even | ||
57 | though pci_hotplug.h declares it private. See sn_hp_destroy() for an | ||
58 | example. Either the pci_slot member should no longer be declared private | ||
59 | or sgi_hotplug should store a pointer to it in struct slot. Probably the | ||
60 | former. | ||
61 | |||
62 | shpchp: | ||
63 | |||
64 | * There is only a single implementation of struct hpc_ops. Can the struct be | ||
65 | removed and its functions invoked directly? This has already been done in | ||
66 | pciehp with commit 82a9e79ef132 ("PCI: pciehp: remove hpc_ops"). Clarify | ||
67 | if there was a specific reason not to apply the same change to shpchp. | ||
68 | |||
69 | * The ->get_mode1_ECC_cap callback in shpchp_hpc_ops is never invoked. | ||
70 | Why was it introduced? Can it be removed? | ||
71 | |||
72 | * The hardirq handler shpc_isr() queues events on a workqueue. It can be | ||
73 | simplified by converting it to threaded IRQ handling. Use pciehp as a | ||
74 | template. | ||
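Several TODO items above ask for a conversion from hardirq-woken kthreads or workqueues to threaded IRQ handling. The kernel contract (request_threaded_irq()) splits a handler in two: a top half that runs in hard IRQ context, only checks and acknowledges the interrupt, and returns IRQ_WAKE_THREAD to defer real work to a dedicated, sleepable IRQ thread. A toy userspace sketch of that contract (all demo_* names are hypothetical, and deliver_irq() merely stands in for the genirq core):

```c
/* Return values mirroring the kernel's irqreturn_t contract. */
enum irqreturn { IRQ_NONE, IRQ_HANDLED, IRQ_WAKE_THREAD };

static int pending_events;
static int handled_events;

/* Top half: would run in hard IRQ context, so it only acks and defers. */
static enum irqreturn demo_hardirq(void)
{
	if (!pending_events)
		return IRQ_NONE;	/* spurious / not our interrupt */
	return IRQ_WAKE_THREAD;		/* defer to the IRQ thread */
}

/* Bottom half: would run in the dedicated IRQ thread and may sleep. */
static enum irqreturn demo_thread_fn(void)
{
	while (pending_events) {
		pending_events--;
		handled_events++;
	}
	return IRQ_HANDLED;
}

/* Toy dispatcher standing in for the genirq core's wake-up of the
 * IRQ thread when the top half returns IRQ_WAKE_THREAD. */
static void deliver_irq(void)
{
	if (demo_hardirq() == IRQ_WAKE_THREAD)
		demo_thread_fn();
}
```

This is the split pciehp already uses and the TODO suggests copying; the genirq core handles the thread creation, wake-up, and synchronization that cpqphp and shpchp currently open-code.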
diff --git a/drivers/pci/hotplug/acpiphp.h b/drivers/pci/hotplug/acpiphp.h
index e438a2d734f2..cf3058404f41 100644
--- a/drivers/pci/hotplug/acpiphp.h
+++ b/drivers/pci/hotplug/acpiphp.h
@@ -33,15 +33,19 @@ struct acpiphp_slot; | |||
33 | * struct slot - slot information for each *physical* slot | 33 | * struct slot - slot information for each *physical* slot |
34 | */ | 34 | */ |
35 | struct slot { | 35 | struct slot { |
36 | struct hotplug_slot *hotplug_slot; | 36 | struct hotplug_slot hotplug_slot; |
37 | struct acpiphp_slot *acpi_slot; | 37 | struct acpiphp_slot *acpi_slot; |
38 | struct hotplug_slot_info info; | ||
39 | unsigned int sun; /* ACPI _SUN (Slot User Number) value */ | 38 | unsigned int sun; /* ACPI _SUN (Slot User Number) value */ |
40 | }; | 39 | }; |
41 | 40 | ||
42 | static inline const char *slot_name(struct slot *slot) | 41 | static inline const char *slot_name(struct slot *slot) |
43 | { | 42 | { |
44 | return hotplug_slot_name(slot->hotplug_slot); | 43 | return hotplug_slot_name(&slot->hotplug_slot); |
44 | } | ||
45 | |||
46 | static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot) | ||
47 | { | ||
48 | return container_of(hotplug_slot, struct slot, hotplug_slot); | ||
45 | } | 49 | } |
46 | 50 | ||
47 | /* | 51 | /* |
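The new to_slot() helper relies on container_of(), which recovers a pointer to the enclosing struct slot from a pointer to its embedded hotplug_slot member; that is what lets the series drop the separately allocated hotplug_slot and its ->private back-pointer. A standalone sketch of the pattern (the structs are reduced stand-ins for the kernel types, and this simplified container_of() omits the kernel version's type checking):

```c
#include <stddef.h>

/* Stand-ins for the kernel types; only the layout matters here. */
struct hotplug_slot {
	const void *ops;
};

struct slot {
	struct hotplug_slot hotplug_slot;	/* embedded, not a pointer */
	unsigned int sun;
};

/* Recover the enclosing struct from a pointer to one of its members
 * by subtracting the member's offset within the struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct slot *to_slot(struct hotplug_slot *hotplug_slot)
{
	return container_of(hotplug_slot, struct slot, hotplug_slot);
}
```

Because hotplug_slot is the first member here, the subtraction happens to be zero, but container_of() works for a member at any offset, which is why the callbacks can drop the ->private field entirely.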
diff --git a/drivers/pci/hotplug/acpiphp_core.c b/drivers/pci/hotplug/acpiphp_core.c
index ad32ffbc4b91..c9e2bd40c038 100644
--- a/drivers/pci/hotplug/acpiphp_core.c
+++ b/drivers/pci/hotplug/acpiphp_core.c
@@ -57,7 +57,7 @@ static int get_attention_status(struct hotplug_slot *slot, u8 *value); | |||
57 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); | 57 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); |
58 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); | 58 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); |
59 | 59 | ||
60 | static struct hotplug_slot_ops acpi_hotplug_slot_ops = { | 60 | static const struct hotplug_slot_ops acpi_hotplug_slot_ops = { |
61 | .enable_slot = enable_slot, | 61 | .enable_slot = enable_slot, |
62 | .disable_slot = disable_slot, | 62 | .disable_slot = disable_slot, |
63 | .set_attention_status = set_attention_status, | 63 | .set_attention_status = set_attention_status, |
@@ -118,7 +118,7 @@ EXPORT_SYMBOL_GPL(acpiphp_unregister_attention); | |||
118 | */ | 118 | */ |
119 | static int enable_slot(struct hotplug_slot *hotplug_slot) | 119 | static int enable_slot(struct hotplug_slot *hotplug_slot) |
120 | { | 120 | { |
121 | struct slot *slot = hotplug_slot->private; | 121 | struct slot *slot = to_slot(hotplug_slot); |
122 | 122 | ||
123 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 123 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
124 | 124 | ||
@@ -135,7 +135,7 @@ static int enable_slot(struct hotplug_slot *hotplug_slot) | |||
135 | */ | 135 | */ |
136 | static int disable_slot(struct hotplug_slot *hotplug_slot) | 136 | static int disable_slot(struct hotplug_slot *hotplug_slot) |
137 | { | 137 | { |
138 | struct slot *slot = hotplug_slot->private; | 138 | struct slot *slot = to_slot(hotplug_slot); |
139 | 139 | ||
140 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 140 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
141 | 141 | ||
@@ -179,7 +179,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | |||
179 | */ | 179 | */ |
180 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | 180 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) |
181 | { | 181 | { |
182 | struct slot *slot = hotplug_slot->private; | 182 | struct slot *slot = to_slot(hotplug_slot); |
183 | 183 | ||
184 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 184 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
185 | 185 | ||
@@ -225,7 +225,7 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
225 | */ | 225 | */ |
226 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | 226 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) |
227 | { | 227 | { |
228 | struct slot *slot = hotplug_slot->private; | 228 | struct slot *slot = to_slot(hotplug_slot); |
229 | 229 | ||
230 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 230 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
231 | 231 | ||
@@ -245,7 +245,7 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
245 | */ | 245 | */ |
246 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | 246 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) |
247 | { | 247 | { |
248 | struct slot *slot = hotplug_slot->private; | 248 | struct slot *slot = to_slot(hotplug_slot); |
249 | 249 | ||
250 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 250 | pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
251 | 251 | ||
@@ -266,39 +266,26 @@ int acpiphp_register_hotplug_slot(struct acpiphp_slot *acpiphp_slot, | |||
266 | if (!slot) | 266 | if (!slot) |
267 | goto error; | 267 | goto error; |
268 | 268 | ||
269 | slot->hotplug_slot = kzalloc(sizeof(*slot->hotplug_slot), GFP_KERNEL); | 269 | slot->hotplug_slot.ops = &acpi_hotplug_slot_ops; |
270 | if (!slot->hotplug_slot) | ||
271 | goto error_slot; | ||
272 | |||
273 | slot->hotplug_slot->info = &slot->info; | ||
274 | |||
275 | slot->hotplug_slot->private = slot; | ||
276 | slot->hotplug_slot->ops = &acpi_hotplug_slot_ops; | ||
277 | 270 | ||
278 | slot->acpi_slot = acpiphp_slot; | 271 | slot->acpi_slot = acpiphp_slot; |
279 | slot->hotplug_slot->info->power_status = acpiphp_get_power_status(slot->acpi_slot); | ||
280 | slot->hotplug_slot->info->attention_status = 0; | ||
281 | slot->hotplug_slot->info->latch_status = acpiphp_get_latch_status(slot->acpi_slot); | ||
282 | slot->hotplug_slot->info->adapter_status = acpiphp_get_adapter_status(slot->acpi_slot); | ||
283 | 272 | ||
284 | acpiphp_slot->slot = slot; | 273 | acpiphp_slot->slot = slot; |
285 | slot->sun = sun; | 274 | slot->sun = sun; |
286 | snprintf(name, SLOT_NAME_SIZE, "%u", sun); | 275 | snprintf(name, SLOT_NAME_SIZE, "%u", sun); |
287 | 276 | ||
288 | retval = pci_hp_register(slot->hotplug_slot, acpiphp_slot->bus, | 277 | retval = pci_hp_register(&slot->hotplug_slot, acpiphp_slot->bus, |
289 | acpiphp_slot->device, name); | 278 | acpiphp_slot->device, name); |
290 | if (retval == -EBUSY) | 279 | if (retval == -EBUSY) |
291 | goto error_hpslot; | 280 | goto error_slot; |
292 | if (retval) { | 281 | if (retval) { |
293 | pr_err("pci_hp_register failed with error %d\n", retval); | 282 | pr_err("pci_hp_register failed with error %d\n", retval); |
294 | goto error_hpslot; | 283 | goto error_slot; |
295 | } | 284 | } |
296 | 285 | ||
297 | pr_info("Slot [%s] registered\n", slot_name(slot)); | 286 | pr_info("Slot [%s] registered\n", slot_name(slot)); |
298 | 287 | ||
299 | return 0; | 288 | return 0; |
300 | error_hpslot: | ||
301 | kfree(slot->hotplug_slot); | ||
302 | error_slot: | 289 | error_slot: |
303 | kfree(slot); | 290 | kfree(slot); |
304 | error: | 291 | error: |
@@ -312,8 +299,7 @@ void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *acpiphp_slot) | |||
312 | 299 | ||
313 | pr_info("Slot [%s] unregistered\n", slot_name(slot)); | 300 | pr_info("Slot [%s] unregistered\n", slot_name(slot)); |
314 | 301 | ||
315 | pci_hp_deregister(slot->hotplug_slot); | 302 | pci_hp_deregister(&slot->hotplug_slot); |
316 | kfree(slot->hotplug_slot); | ||
317 | kfree(slot); | 303 | kfree(slot); |
318 | } | 304 | } |
319 | 305 | ||
diff --git a/drivers/pci/hotplug/acpiphp_ibm.c b/drivers/pci/hotplug/acpiphp_ibm.c
index 41713f16ff97..df48b3b03ab4 100644
--- a/drivers/pci/hotplug/acpiphp_ibm.c
+++ b/drivers/pci/hotplug/acpiphp_ibm.c
@@ -41,7 +41,7 @@ MODULE_VERSION(DRIVER_VERSION); | |||
41 | #define IBM_HARDWARE_ID1 "IBM37D0" | 41 | #define IBM_HARDWARE_ID1 "IBM37D0" |
42 | #define IBM_HARDWARE_ID2 "IBM37D4" | 42 | #define IBM_HARDWARE_ID2 "IBM37D4" |
43 | 43 | ||
44 | #define hpslot_to_sun(A) (((struct slot *)((A)->private))->sun) | 44 | #define hpslot_to_sun(A) (to_slot(A)->sun) |
45 | 45 | ||
46 | /* union apci_descriptor - allows access to the | 46 | /* union apci_descriptor - allows access to the |
47 | * various device descriptors that are embedded in the | 47 | * various device descriptors that are embedded in the |
diff --git a/drivers/pci/hotplug/cpci_hotplug.h b/drivers/pci/hotplug/cpci_hotplug.h
index 4658557be01a..f33ff2bca414 100644
--- a/drivers/pci/hotplug/cpci_hotplug.h
+++ b/drivers/pci/hotplug/cpci_hotplug.h
@@ -32,8 +32,10 @@ struct slot { | |||
32 | unsigned int devfn; | 32 | unsigned int devfn; |
33 | struct pci_bus *bus; | 33 | struct pci_bus *bus; |
34 | struct pci_dev *dev; | 34 | struct pci_dev *dev; |
35 | unsigned int latch_status:1; | ||
36 | unsigned int adapter_status:1; | ||
35 | unsigned int extracting; | 37 | unsigned int extracting; |
36 | struct hotplug_slot *hotplug_slot; | 38 | struct hotplug_slot hotplug_slot; |
37 | struct list_head slot_list; | 39 | struct list_head slot_list; |
38 | }; | 40 | }; |
39 | 41 | ||
@@ -58,7 +60,12 @@ struct cpci_hp_controller { | |||
58 | 60 | ||
59 | static inline const char *slot_name(struct slot *slot) | 61 | static inline const char *slot_name(struct slot *slot) |
60 | { | 62 | { |
61 | return hotplug_slot_name(slot->hotplug_slot); | 63 | return hotplug_slot_name(&slot->hotplug_slot); |
64 | } | ||
65 | |||
66 | static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot) | ||
67 | { | ||
68 | return container_of(hotplug_slot, struct slot, hotplug_slot); | ||
62 | } | 69 | } |
63 | 70 | ||
64 | int cpci_hp_register_controller(struct cpci_hp_controller *controller); | 71 | int cpci_hp_register_controller(struct cpci_hp_controller *controller); |
diff --git a/drivers/pci/hotplug/cpci_hotplug_core.c b/drivers/pci/hotplug/cpci_hotplug_core.c
index 52a339baf06c..603eadf3d965 100644
--- a/drivers/pci/hotplug/cpci_hotplug_core.c
+++ b/drivers/pci/hotplug/cpci_hotplug_core.c
@@ -57,7 +57,7 @@ static int get_attention_status(struct hotplug_slot *slot, u8 *value); | |||
57 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); | 57 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); |
58 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); | 58 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); |
59 | 59 | ||
60 | static struct hotplug_slot_ops cpci_hotplug_slot_ops = { | 60 | static const struct hotplug_slot_ops cpci_hotplug_slot_ops = { |
61 | .enable_slot = enable_slot, | 61 | .enable_slot = enable_slot, |
62 | .disable_slot = disable_slot, | 62 | .disable_slot = disable_slot, |
63 | .set_attention_status = set_attention_status, | 63 | .set_attention_status = set_attention_status, |
@@ -68,29 +68,9 @@ static struct hotplug_slot_ops cpci_hotplug_slot_ops = { | |||
68 | }; | 68 | }; |
69 | 69 | ||
70 | static int | 70 | static int |
71 | update_latch_status(struct hotplug_slot *hotplug_slot, u8 value) | ||
72 | { | ||
73 | struct hotplug_slot_info info; | ||
74 | |||
75 | memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info)); | ||
76 | info.latch_status = value; | ||
77 | return pci_hp_change_slot_info(hotplug_slot, &info); | ||
78 | } | ||
79 | |||
80 | static int | ||
81 | update_adapter_status(struct hotplug_slot *hotplug_slot, u8 value) | ||
82 | { | ||
83 | struct hotplug_slot_info info; | ||
84 | |||
85 | memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info)); | ||
86 | info.adapter_status = value; | ||
87 | return pci_hp_change_slot_info(hotplug_slot, &info); | ||
88 | } | ||
89 | |||
90 | static int | ||
91 | enable_slot(struct hotplug_slot *hotplug_slot) | 71 | enable_slot(struct hotplug_slot *hotplug_slot) |
92 | { | 72 | { |
93 | struct slot *slot = hotplug_slot->private; | 73 | struct slot *slot = to_slot(hotplug_slot); |
94 | int retval = 0; | 74 | int retval = 0; |
95 | 75 | ||
96 | dbg("%s - physical_slot = %s", __func__, slot_name(slot)); | 76 | dbg("%s - physical_slot = %s", __func__, slot_name(slot)); |
@@ -103,7 +83,7 @@ enable_slot(struct hotplug_slot *hotplug_slot) | |||
103 | static int | 83 | static int |
104 | disable_slot(struct hotplug_slot *hotplug_slot) | 84 | disable_slot(struct hotplug_slot *hotplug_slot) |
105 | { | 85 | { |
106 | struct slot *slot = hotplug_slot->private; | 86 | struct slot *slot = to_slot(hotplug_slot); |
107 | int retval = 0; | 87 | int retval = 0; |
108 | 88 | ||
109 | dbg("%s - physical_slot = %s", __func__, slot_name(slot)); | 89 | dbg("%s - physical_slot = %s", __func__, slot_name(slot)); |
@@ -135,8 +115,7 @@ disable_slot(struct hotplug_slot *hotplug_slot) | |||
135 | goto disable_error; | 115 | goto disable_error; |
136 | } | 116 | } |
137 | 117 | ||
138 | if (update_adapter_status(slot->hotplug_slot, 0)) | 118 | slot->adapter_status = 0; |
139 | warn("failure to update adapter file"); | ||
140 | 119 | ||
141 | if (slot->extracting) { | 120 | if (slot->extracting) { |
142 | slot->extracting = 0; | 121 | slot->extracting = 0; |
@@ -160,7 +139,7 @@ cpci_get_power_status(struct slot *slot) | |||
160 | static int | 139 | static int |
161 | get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | 140 | get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) |
162 | { | 141 | { |
163 | struct slot *slot = hotplug_slot->private; | 142 | struct slot *slot = to_slot(hotplug_slot); |
164 | 143 | ||
165 | *value = cpci_get_power_status(slot); | 144 | *value = cpci_get_power_status(slot); |
166 | return 0; | 145 | return 0; |
@@ -169,7 +148,7 @@ get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
169 | static int | 148 | static int |
170 | get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | 149 | get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) |
171 | { | 150 | { |
172 | struct slot *slot = hotplug_slot->private; | 151 | struct slot *slot = to_slot(hotplug_slot); |
173 | 152 | ||
174 | *value = cpci_get_attention_status(slot); | 153 | *value = cpci_get_attention_status(slot); |
175 | return 0; | 154 | return 0; |
@@ -178,27 +157,29 @@ get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
178 | static int | 157 | static int |
179 | set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | 158 | set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) |
180 | { | 159 | { |
181 | return cpci_set_attention_status(hotplug_slot->private, status); | 160 | return cpci_set_attention_status(to_slot(hotplug_slot), status); |
182 | } | 161 | } |
183 | 162 | ||
184 | static int | 163 | static int |
185 | get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | 164 | get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) |
186 | { | 165 | { |
187 | *value = hotplug_slot->info->adapter_status; | 166 | struct slot *slot = to_slot(hotplug_slot); |
167 | |||
168 | *value = slot->adapter_status; | ||
188 | return 0; | 169 | return 0; |
189 | } | 170 | } |
190 | 171 | ||
191 | static int | 172 | static int |
192 | get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | 173 | get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) |
193 | { | 174 | { |
194 | *value = hotplug_slot->info->latch_status; | 175 | struct slot *slot = to_slot(hotplug_slot); |
176 | |||
177 | *value = slot->latch_status; | ||
195 | return 0; | 178 | return 0; |
196 | } | 179 | } |
197 | 180 | ||
198 | static void release_slot(struct slot *slot) | 181 | static void release_slot(struct slot *slot) |
199 | { | 182 | { |
200 | kfree(slot->hotplug_slot->info); | ||
201 | kfree(slot->hotplug_slot); | ||
202 | pci_dev_put(slot->dev); | 183 | pci_dev_put(slot->dev); |
203 | kfree(slot); | 184 | kfree(slot); |
204 | } | 185 | } |
@@ -209,8 +190,6 @@ int | |||
209 | cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last) | 190 | cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last) |
210 | { | 191 | { |
211 | struct slot *slot; | 192 | struct slot *slot; |
212 | struct hotplug_slot *hotplug_slot; | ||
213 | struct hotplug_slot_info *info; | ||
214 | char name[SLOT_NAME_SIZE]; | 193 | char name[SLOT_NAME_SIZE]; |
215 | int status; | 194 | int status; |
216 | int i; | 195 | int i; |
@@ -229,43 +208,19 @@ cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last) | |||
229 | goto error; | 208 | goto error; |
230 | } | 209 | } |
231 | 210 | ||
232 | hotplug_slot = | ||
233 | kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL); | ||
234 | if (!hotplug_slot) { | ||
235 | status = -ENOMEM; | ||
236 | goto error_slot; | ||
237 | } | ||
238 | slot->hotplug_slot = hotplug_slot; | ||
239 | |||
240 | info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL); | ||
241 | if (!info) { | ||
242 | status = -ENOMEM; | ||
243 | goto error_hpslot; | ||
244 | } | ||
245 | hotplug_slot->info = info; | ||
246 | |||
247 | slot->bus = bus; | 211 | slot->bus = bus; |
248 | slot->number = i; | 212 | slot->number = i; |
249 | slot->devfn = PCI_DEVFN(i, 0); | 213 | slot->devfn = PCI_DEVFN(i, 0); |
250 | 214 | ||
251 | snprintf(name, SLOT_NAME_SIZE, "%02x:%02x", bus->number, i); | 215 | snprintf(name, SLOT_NAME_SIZE, "%02x:%02x", bus->number, i); |
252 | 216 | ||
253 | hotplug_slot->private = slot; | 217 | slot->hotplug_slot.ops = &cpci_hotplug_slot_ops; |
254 | hotplug_slot->ops = &cpci_hotplug_slot_ops; | ||
255 | |||
256 | /* | ||
257 | * Initialize the slot info structure with some known | ||
258 | * good values. | ||
259 | */ | ||
260 | dbg("initializing slot %s", name); | ||
261 | info->power_status = cpci_get_power_status(slot); | ||
262 | info->attention_status = cpci_get_attention_status(slot); | ||
263 | 218 | ||
264 | dbg("registering slot %s", name); | 219 | dbg("registering slot %s", name); |
265 | status = pci_hp_register(slot->hotplug_slot, bus, i, name); | 220 | status = pci_hp_register(&slot->hotplug_slot, bus, i, name); |
266 | if (status) { | 221 | if (status) { |
267 | err("pci_hp_register failed with error %d", status); | 222 | err("pci_hp_register failed with error %d", status); |
268 | goto error_info; | 223 | goto error_slot; |
269 | } | 224 | } |
270 | dbg("slot registered with name: %s", slot_name(slot)); | 225 | dbg("slot registered with name: %s", slot_name(slot)); |
271 | 226 | ||
@@ -276,10 +231,6 @@ cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last) | |||
276 | up_write(&list_rwsem); | 231 | up_write(&list_rwsem); |
277 | } | 232 | } |
278 | return 0; | 233 | return 0; |
279 | error_info: | ||
280 | kfree(info); | ||
281 | error_hpslot: | ||
282 | kfree(hotplug_slot); | ||
283 | error_slot: | 234 | error_slot: |
284 | kfree(slot); | 235 | kfree(slot); |
285 | error: | 236 | error: |
@@ -305,7 +256,7 @@ cpci_hp_unregister_bus(struct pci_bus *bus) | |||
305 | slots--; | 256 | slots--; |
306 | 257 | ||
307 | dbg("deregistering slot %s", slot_name(slot)); | 258 | dbg("deregistering slot %s", slot_name(slot)); |
308 | pci_hp_deregister(slot->hotplug_slot); | 259 | pci_hp_deregister(&slot->hotplug_slot); |
309 | release_slot(slot); | 260 | release_slot(slot); |
310 | } | 261 | } |
311 | } | 262 | } |
@@ -359,10 +310,8 @@ init_slots(int clear_ins) | |||
359 | __func__, slot_name(slot)); | 310 | __func__, slot_name(slot)); |
360 | dev = pci_get_slot(slot->bus, PCI_DEVFN(slot->number, 0)); | 311 | dev = pci_get_slot(slot->bus, PCI_DEVFN(slot->number, 0)); |
361 | if (dev) { | 312 | if (dev) { |
362 | if (update_adapter_status(slot->hotplug_slot, 1)) | 313 | slot->adapter_status = 1; |
363 | warn("failure to update adapter file"); | 314 | slot->latch_status = 1; |
364 | if (update_latch_status(slot->hotplug_slot, 1)) | ||
365 | warn("failure to update latch file"); | ||
366 | slot->dev = dev; | 315 | slot->dev = dev; |
367 | } | 316 | } |
368 | } | 317 | } |
@@ -424,11 +373,8 @@ check_slots(void) | |||
424 | dbg("%s - slot %s HS_CSR (2) = %04x", | 373 | dbg("%s - slot %s HS_CSR (2) = %04x", |
425 | __func__, slot_name(slot), hs_csr); | 374 | __func__, slot_name(slot), hs_csr); |
426 | 375 | ||
427 | if (update_latch_status(slot->hotplug_slot, 1)) | 376 | slot->latch_status = 1; |
428 | warn("failure to update latch file"); | 377 | slot->adapter_status = 1; |
429 | |||
430 | if (update_adapter_status(slot->hotplug_slot, 1)) | ||
431 | warn("failure to update adapter file"); | ||
432 | 378 | ||
433 | cpci_led_off(slot); | 379 | cpci_led_off(slot); |
434 | 380 | ||
@@ -449,9 +395,7 @@ check_slots(void) | |||
449 | __func__, slot_name(slot), hs_csr); | 395 | __func__, slot_name(slot), hs_csr); |
450 | 396 | ||
451 | if (!slot->extracting) { | 397 | if (!slot->extracting) { |
452 | if (update_latch_status(slot->hotplug_slot, 0)) | 398 | slot->latch_status = 0; |
453 | warn("failure to update latch file"); | ||
454 | |||
455 | slot->extracting = 1; | 399 | slot->extracting = 1; |
456 | atomic_inc(&extracting); | 400 | atomic_inc(&extracting); |
457 | } | 401 | } |
@@ -465,8 +409,7 @@ check_slots(void) | |||
465 | */ | 409 | */ |
466 | err("card in slot %s was improperly removed", | 410 | err("card in slot %s was improperly removed", |
467 | slot_name(slot)); | 411 | slot_name(slot)); |
468 | if (update_adapter_status(slot->hotplug_slot, 0)) | 412 | slot->adapter_status = 0; |
469 | warn("failure to update adapter file"); | ||
470 | slot->extracting = 0; | 413 | slot->extracting = 0; |
471 | atomic_dec(&extracting); | 414 | atomic_dec(&extracting); |
472 | } | 415 | } |
@@ -615,7 +558,7 @@ cleanup_slots(void) | |||
615 | goto cleanup_null; | 558 | goto cleanup_null; |
616 | list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) { | 559 | list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) { |
617 | list_del(&slot->slot_list); | 560 | list_del(&slot->slot_list); |
618 | pci_hp_deregister(slot->hotplug_slot); | 561 | pci_hp_deregister(&slot->hotplug_slot); |
619 | release_slot(slot); | 562 | release_slot(slot); |
620 | } | 563 | } |
621 | cleanup_null: | 564 | cleanup_null: |
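The hunks above replace the separately kzalloc'd `hotplug_slot` and `hotplug_slot_info` with a `struct hotplug_slot` embedded directly in `struct slot`, which is why the `error_info`/`error_hpslot` unwind labels and the `kfree()` calls in `release_slot()` disappear. A minimal sketch of that allocation pattern (all names here are hypothetical stand-ins, not the kernel structures):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical miniature of the embedded-struct pattern: one
 * allocation now covers both the slot and its hotplug_slot, so the
 * multi-label error unwind in the original code collapses to one path. */

struct hotplug_slot {
	const void *ops;
};

struct slot {
	int number;
	struct hotplug_slot hotplug_slot;	/* embedded, not a pointer */
};

static struct slot *alloc_slot(int number)
{
	/* Single zeroed allocation, analogous to kzalloc(GFP_KERNEL). */
	struct slot *slot = calloc(1, sizeof(*slot));

	if (!slot)
		return NULL;
	slot->number = number;
	return slot;
}

static void release_slot(struct slot *slot)
{
	free(slot);	/* frees the embedded hotplug_slot too */
}
```

A failure after `alloc_slot()` only ever has one object to free, which is exactly why the patch can retarget `goto error_info` to `goto error_slot`.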
diff --git a/drivers/pci/hotplug/cpci_hotplug_pci.c b/drivers/pci/hotplug/cpci_hotplug_pci.c index 389b8fb50cd9..2c16adb7f4ec 100644 --- a/drivers/pci/hotplug/cpci_hotplug_pci.c +++ b/drivers/pci/hotplug/cpci_hotplug_pci.c | |||
@@ -194,8 +194,7 @@ int cpci_led_on(struct slot *slot) | |||
194 | slot->devfn, | 194 | slot->devfn, |
195 | hs_cap + 2, | 195 | hs_cap + 2, |
196 | hs_csr)) { | 196 | hs_csr)) { |
197 | err("Could not set LOO for slot %s", | 197 | err("Could not set LOO for slot %s", slot_name(slot)); |
198 | hotplug_slot_name(slot->hotplug_slot)); | ||
199 | return -ENODEV; | 198 | return -ENODEV; |
200 | } | 199 | } |
201 | } | 200 | } |
@@ -223,8 +222,7 @@ int cpci_led_off(struct slot *slot) | |||
223 | slot->devfn, | 222 | slot->devfn, |
224 | hs_cap + 2, | 223 | hs_cap + 2, |
225 | hs_csr)) { | 224 | hs_csr)) { |
226 | err("Could not clear LOO for slot %s", | 225 | err("Could not clear LOO for slot %s", slot_name(slot)); |
227 | hotplug_slot_name(slot->hotplug_slot)); | ||
228 | return -ENODEV; | 226 | return -ENODEV; |
229 | } | 227 | } |
230 | } | 228 | } |
diff --git a/drivers/pci/hotplug/cpqphp.h b/drivers/pci/hotplug/cpqphp.h index db78b394a075..77e4e0142fbc 100644 --- a/drivers/pci/hotplug/cpqphp.h +++ b/drivers/pci/hotplug/cpqphp.h | |||
@@ -260,7 +260,7 @@ struct slot { | |||
260 | u8 hp_slot; | 260 | u8 hp_slot; |
261 | struct controller *ctrl; | 261 | struct controller *ctrl; |
262 | void __iomem *p_sm_slot; | 262 | void __iomem *p_sm_slot; |
263 | struct hotplug_slot *hotplug_slot; | 263 | struct hotplug_slot hotplug_slot; |
264 | }; | 264 | }; |
265 | 265 | ||
266 | struct pci_resource { | 266 | struct pci_resource { |
@@ -445,7 +445,12 @@ extern u8 cpqhp_disk_irq; | |||
445 | 445 | ||
446 | static inline const char *slot_name(struct slot *slot) | 446 | static inline const char *slot_name(struct slot *slot) |
447 | { | 447 | { |
448 | return hotplug_slot_name(slot->hotplug_slot); | 448 | return hotplug_slot_name(&slot->hotplug_slot); |
449 | } | ||
450 | |||
451 | static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot) | ||
452 | { | ||
453 | return container_of(hotplug_slot, struct slot, hotplug_slot); | ||
449 | } | 454 | } |
450 | 455 | ||
451 | /* | 456 | /* |
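The new `to_slot()` helper added to cpqphp.h recovers the containing `struct slot` from a `struct hotplug_slot *` via `container_of`, replacing the old `hotplug_slot->private` back-pointer. A self-contained sketch of that mechanism, using a local `container_of` definition and simplified struct layouts rather than the kernel's own:

```c
#include <assert.h>
#include <stddef.h>

struct hotplug_slot {
	const void *ops;
};

struct slot {
	int number;
	struct hotplug_slot hotplug_slot;
};

/* Same arithmetic as the kernel macro: subtract the member's offset
 * from the member's address to find the enclosing structure. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct slot *to_slot(struct hotplug_slot *hotplug_slot)
{
	return container_of(hotplug_slot, struct slot, hotplug_slot);
}
```

Because the relationship is computed from the type layout, there is no `private` pointer to initialize or to NULL-check, which is what lets the ibmphp hunks below drop their `if (pslot)` guards.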
diff --git a/drivers/pci/hotplug/cpqphp_core.c b/drivers/pci/hotplug/cpqphp_core.c index 5a06636e910a..16bbb183695a 100644 --- a/drivers/pci/hotplug/cpqphp_core.c +++ b/drivers/pci/hotplug/cpqphp_core.c | |||
@@ -121,7 +121,6 @@ static int init_SERR(struct controller *ctrl) | |||
121 | { | 121 | { |
122 | u32 tempdword; | 122 | u32 tempdword; |
123 | u32 number_of_slots; | 123 | u32 number_of_slots; |
124 | u8 physical_slot; | ||
125 | 124 | ||
126 | if (!ctrl) | 125 | if (!ctrl) |
127 | return 1; | 126 | return 1; |
@@ -131,7 +130,6 @@ static int init_SERR(struct controller *ctrl) | |||
131 | number_of_slots = readb(ctrl->hpc_reg + SLOT_MASK) & 0x0F; | 130 | number_of_slots = readb(ctrl->hpc_reg + SLOT_MASK) & 0x0F; |
132 | /* Loop through slots */ | 131 | /* Loop through slots */ |
133 | while (number_of_slots) { | 132 | while (number_of_slots) { |
134 | physical_slot = tempdword; | ||
135 | writeb(0, ctrl->hpc_reg + SLOT_SERR); | 133 | writeb(0, ctrl->hpc_reg + SLOT_SERR); |
136 | tempdword++; | 134 | tempdword++; |
137 | number_of_slots--; | 135 | number_of_slots--; |
@@ -275,9 +273,7 @@ static int ctrl_slot_cleanup(struct controller *ctrl) | |||
275 | 273 | ||
276 | while (old_slot) { | 274 | while (old_slot) { |
277 | next_slot = old_slot->next; | 275 | next_slot = old_slot->next; |
278 | pci_hp_deregister(old_slot->hotplug_slot); | 276 | pci_hp_deregister(&old_slot->hotplug_slot); |
279 | kfree(old_slot->hotplug_slot->info); | ||
280 | kfree(old_slot->hotplug_slot); | ||
281 | kfree(old_slot); | 277 | kfree(old_slot); |
282 | old_slot = next_slot; | 278 | old_slot = next_slot; |
283 | } | 279 | } |
@@ -419,7 +415,7 @@ cpqhp_set_attention_status(struct controller *ctrl, struct pci_func *func, | |||
419 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | 415 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) |
420 | { | 416 | { |
421 | struct pci_func *slot_func; | 417 | struct pci_func *slot_func; |
422 | struct slot *slot = hotplug_slot->private; | 418 | struct slot *slot = to_slot(hotplug_slot); |
423 | struct controller *ctrl = slot->ctrl; | 419 | struct controller *ctrl = slot->ctrl; |
424 | u8 bus; | 420 | u8 bus; |
425 | u8 devfn; | 421 | u8 devfn; |
@@ -446,7 +442,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | |||
446 | static int process_SI(struct hotplug_slot *hotplug_slot) | 442 | static int process_SI(struct hotplug_slot *hotplug_slot) |
447 | { | 443 | { |
448 | struct pci_func *slot_func; | 444 | struct pci_func *slot_func; |
449 | struct slot *slot = hotplug_slot->private; | 445 | struct slot *slot = to_slot(hotplug_slot); |
450 | struct controller *ctrl = slot->ctrl; | 446 | struct controller *ctrl = slot->ctrl; |
451 | u8 bus; | 447 | u8 bus; |
452 | u8 devfn; | 448 | u8 devfn; |
@@ -478,7 +474,7 @@ static int process_SI(struct hotplug_slot *hotplug_slot) | |||
478 | static int process_SS(struct hotplug_slot *hotplug_slot) | 474 | static int process_SS(struct hotplug_slot *hotplug_slot) |
479 | { | 475 | { |
480 | struct pci_func *slot_func; | 476 | struct pci_func *slot_func; |
481 | struct slot *slot = hotplug_slot->private; | 477 | struct slot *slot = to_slot(hotplug_slot); |
482 | struct controller *ctrl = slot->ctrl; | 478 | struct controller *ctrl = slot->ctrl; |
483 | u8 bus; | 479 | u8 bus; |
484 | u8 devfn; | 480 | u8 devfn; |
@@ -505,7 +501,7 @@ static int process_SS(struct hotplug_slot *hotplug_slot) | |||
505 | 501 | ||
506 | static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value) | 502 | static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value) |
507 | { | 503 | { |
508 | struct slot *slot = hotplug_slot->private; | 504 | struct slot *slot = to_slot(hotplug_slot); |
509 | struct controller *ctrl = slot->ctrl; | 505 | struct controller *ctrl = slot->ctrl; |
510 | 506 | ||
511 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 507 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
@@ -516,7 +512,7 @@ static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value) | |||
516 | 512 | ||
517 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | 513 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) |
518 | { | 514 | { |
519 | struct slot *slot = hotplug_slot->private; | 515 | struct slot *slot = to_slot(hotplug_slot); |
520 | struct controller *ctrl = slot->ctrl; | 516 | struct controller *ctrl = slot->ctrl; |
521 | 517 | ||
522 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 518 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
@@ -527,7 +523,7 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
527 | 523 | ||
528 | static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | 524 | static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) |
529 | { | 525 | { |
530 | struct slot *slot = hotplug_slot->private; | 526 | struct slot *slot = to_slot(hotplug_slot); |
531 | struct controller *ctrl = slot->ctrl; | 527 | struct controller *ctrl = slot->ctrl; |
532 | 528 | ||
533 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 529 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
@@ -538,7 +534,7 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
538 | 534 | ||
539 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | 535 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) |
540 | { | 536 | { |
541 | struct slot *slot = hotplug_slot->private; | 537 | struct slot *slot = to_slot(hotplug_slot); |
542 | struct controller *ctrl = slot->ctrl; | 538 | struct controller *ctrl = slot->ctrl; |
543 | 539 | ||
544 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 540 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
@@ -550,7 +546,7 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
550 | 546 | ||
551 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | 547 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) |
552 | { | 548 | { |
553 | struct slot *slot = hotplug_slot->private; | 549 | struct slot *slot = to_slot(hotplug_slot); |
554 | struct controller *ctrl = slot->ctrl; | 550 | struct controller *ctrl = slot->ctrl; |
555 | 551 | ||
556 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); | 552 | dbg("%s - physical_slot = %s\n", __func__, slot_name(slot)); |
@@ -560,7 +556,7 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
560 | return 0; | 556 | return 0; |
561 | } | 557 | } |
562 | 558 | ||
563 | static struct hotplug_slot_ops cpqphp_hotplug_slot_ops = { | 559 | static const struct hotplug_slot_ops cpqphp_hotplug_slot_ops = { |
564 | .set_attention_status = set_attention_status, | 560 | .set_attention_status = set_attention_status, |
565 | .enable_slot = process_SI, | 561 | .enable_slot = process_SI, |
566 | .disable_slot = process_SS, | 562 | .disable_slot = process_SS, |
@@ -578,8 +574,6 @@ static int ctrl_slot_setup(struct controller *ctrl, | |||
578 | void __iomem *smbios_table) | 574 | void __iomem *smbios_table) |
579 | { | 575 | { |
580 | struct slot *slot; | 576 | struct slot *slot; |
581 | struct hotplug_slot *hotplug_slot; | ||
582 | struct hotplug_slot_info *hotplug_slot_info; | ||
583 | struct pci_bus *bus = ctrl->pci_bus; | 577 | struct pci_bus *bus = ctrl->pci_bus; |
584 | u8 number_of_slots; | 578 | u8 number_of_slots; |
585 | u8 slot_device; | 579 | u8 slot_device; |
@@ -605,22 +599,6 @@ static int ctrl_slot_setup(struct controller *ctrl, | |||
605 | goto error; | 599 | goto error; |
606 | } | 600 | } |
607 | 601 | ||
608 | slot->hotplug_slot = kzalloc(sizeof(*(slot->hotplug_slot)), | ||
609 | GFP_KERNEL); | ||
610 | if (!slot->hotplug_slot) { | ||
611 | result = -ENOMEM; | ||
612 | goto error_slot; | ||
613 | } | ||
614 | hotplug_slot = slot->hotplug_slot; | ||
615 | |||
616 | hotplug_slot->info = kzalloc(sizeof(*(hotplug_slot->info)), | ||
617 | GFP_KERNEL); | ||
618 | if (!hotplug_slot->info) { | ||
619 | result = -ENOMEM; | ||
620 | goto error_hpslot; | ||
621 | } | ||
622 | hotplug_slot_info = hotplug_slot->info; | ||
623 | |||
624 | slot->ctrl = ctrl; | 602 | slot->ctrl = ctrl; |
625 | slot->bus = ctrl->bus; | 603 | slot->bus = ctrl->bus; |
626 | slot->device = slot_device; | 604 | slot->device = slot_device; |
@@ -669,29 +647,20 @@ static int ctrl_slot_setup(struct controller *ctrl, | |||
669 | ((read_slot_enable(ctrl) << 2) >> ctrl_slot) & 0x04; | 647 | ((read_slot_enable(ctrl) << 2) >> ctrl_slot) & 0x04; |
670 | 648 | ||
671 | /* register this slot with the hotplug pci core */ | 649 | /* register this slot with the hotplug pci core */ |
672 | hotplug_slot->private = slot; | ||
673 | snprintf(name, SLOT_NAME_SIZE, "%u", slot->number); | 650 | snprintf(name, SLOT_NAME_SIZE, "%u", slot->number); |
674 | hotplug_slot->ops = &cpqphp_hotplug_slot_ops; | 651 | slot->hotplug_slot.ops = &cpqphp_hotplug_slot_ops; |
675 | |||
676 | hotplug_slot_info->power_status = get_slot_enabled(ctrl, slot); | ||
677 | hotplug_slot_info->attention_status = | ||
678 | cpq_get_attention_status(ctrl, slot); | ||
679 | hotplug_slot_info->latch_status = | ||
680 | cpq_get_latch_status(ctrl, slot); | ||
681 | hotplug_slot_info->adapter_status = | ||
682 | get_presence_status(ctrl, slot); | ||
683 | 652 | ||
684 | dbg("registering bus %d, dev %d, number %d, ctrl->slot_device_offset %d, slot %d\n", | 653 | dbg("registering bus %d, dev %d, number %d, ctrl->slot_device_offset %d, slot %d\n", |
685 | slot->bus, slot->device, | 654 | slot->bus, slot->device, |
686 | slot->number, ctrl->slot_device_offset, | 655 | slot->number, ctrl->slot_device_offset, |
687 | slot_number); | 656 | slot_number); |
688 | result = pci_hp_register(hotplug_slot, | 657 | result = pci_hp_register(&slot->hotplug_slot, |
689 | ctrl->pci_dev->bus, | 658 | ctrl->pci_dev->bus, |
690 | slot->device, | 659 | slot->device, |
691 | name); | 660 | name); |
692 | if (result) { | 661 | if (result) { |
693 | err("pci_hp_register failed with error %d\n", result); | 662 | err("pci_hp_register failed with error %d\n", result); |
694 | goto error_info; | 663 | goto error_slot; |
695 | } | 664 | } |
696 | 665 | ||
697 | slot->next = ctrl->slot; | 666 | slot->next = ctrl->slot; |
@@ -703,10 +672,6 @@ static int ctrl_slot_setup(struct controller *ctrl, | |||
703 | } | 672 | } |
704 | 673 | ||
705 | return 0; | 674 | return 0; |
706 | error_info: | ||
707 | kfree(hotplug_slot_info); | ||
708 | error_hpslot: | ||
709 | kfree(hotplug_slot); | ||
710 | error_slot: | 675 | error_slot: |
711 | kfree(slot); | 676 | kfree(slot); |
712 | error: | 677 | error: |
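This file also constifies `cpqphp_hotplug_slot_ops`, matching the `const struct hotplug_slot_ops` change in the cPCI driver above. A small sketch of the pattern, with hypothetical callback names (the constness, not the callbacks, is the point):

```c
#include <assert.h>

struct hotplug_slot_ops {
	int (*enable_slot)(void);
	int (*disable_slot)(void);
};

static int demo_enable(void)
{
	return 0;	/* pretend success */
}

static int demo_disable(void)
{
	return 0;
}

/* Declaring the ops table const lets the compiler place it in
 * read-only data and rejects accidental writes at build time. */
static const struct hotplug_slot_ops demo_ops = {
	.enable_slot	= demo_enable,
	.disable_slot	= demo_disable,
};

static int run_enable(const struct hotplug_slot_ops *ops)
{
	return ops->enable_slot();
}
```

Any attempt such as `demo_ops.enable_slot = demo_disable;` would now fail to compile, which is the hardening this series applies across the hotplug drivers.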
diff --git a/drivers/pci/hotplug/cpqphp_ctrl.c b/drivers/pci/hotplug/cpqphp_ctrl.c index 616df442520b..b7f4e1f099d9 100644 --- a/drivers/pci/hotplug/cpqphp_ctrl.c +++ b/drivers/pci/hotplug/cpqphp_ctrl.c | |||
@@ -1130,9 +1130,7 @@ static u8 set_controller_speed(struct controller *ctrl, u8 adapter_speed, u8 hp_ | |||
1130 | for (slot = ctrl->slot; slot; slot = slot->next) { | 1130 | for (slot = ctrl->slot; slot; slot = slot->next) { |
1131 | if (slot->device == (hp_slot + ctrl->slot_device_offset)) | 1131 | if (slot->device == (hp_slot + ctrl->slot_device_offset)) |
1132 | continue; | 1132 | continue; |
1133 | if (!slot->hotplug_slot || !slot->hotplug_slot->info) | 1133 | if (get_presence_status(ctrl, slot) == 0) |
1134 | continue; | ||
1135 | if (slot->hotplug_slot->info->adapter_status == 0) | ||
1136 | continue; | 1134 | continue; |
1137 | /* If another adapter is running on the same segment but at a | 1135 | /* If another adapter is running on the same segment but at a |
1138 | * lower speed/mode, we allow the new adapter to function at | 1136 | * lower speed/mode, we allow the new adapter to function at |
@@ -1767,24 +1765,6 @@ void cpqhp_event_stop_thread(void) | |||
1767 | } | 1765 | } |
1768 | 1766 | ||
1769 | 1767 | ||
1770 | static int update_slot_info(struct controller *ctrl, struct slot *slot) | ||
1771 | { | ||
1772 | struct hotplug_slot_info *info; | ||
1773 | int result; | ||
1774 | |||
1775 | info = kmalloc(sizeof(*info), GFP_KERNEL); | ||
1776 | if (!info) | ||
1777 | return -ENOMEM; | ||
1778 | |||
1779 | info->power_status = get_slot_enabled(ctrl, slot); | ||
1780 | info->attention_status = cpq_get_attention_status(ctrl, slot); | ||
1781 | info->latch_status = cpq_get_latch_status(ctrl, slot); | ||
1782 | info->adapter_status = get_presence_status(ctrl, slot); | ||
1783 | result = pci_hp_change_slot_info(slot->hotplug_slot, info); | ||
1784 | kfree(info); | ||
1785 | return result; | ||
1786 | } | ||
1787 | |||
1788 | static void interrupt_event_handler(struct controller *ctrl) | 1768 | static void interrupt_event_handler(struct controller *ctrl) |
1789 | { | 1769 | { |
1790 | int loop = 0; | 1770 | int loop = 0; |
@@ -1884,9 +1864,6 @@ static void interrupt_event_handler(struct controller *ctrl) | |||
1884 | /***********POWER FAULT */ | 1864 | /***********POWER FAULT */ |
1885 | else if (ctrl->event_queue[loop].event_type == INT_POWER_FAULT) { | 1865 | else if (ctrl->event_queue[loop].event_type == INT_POWER_FAULT) { |
1886 | dbg("power fault\n"); | 1866 | dbg("power fault\n"); |
1887 | } else { | ||
1888 | /* refresh notification */ | ||
1889 | update_slot_info(ctrl, p_slot); | ||
1890 | } | 1867 | } |
1891 | 1868 | ||
1892 | ctrl->event_queue[loop].event_type = 0; | 1869 | ctrl->event_queue[loop].event_type = 0; |
@@ -2057,9 +2034,6 @@ int cpqhp_process_SI(struct controller *ctrl, struct pci_func *func) | |||
2057 | if (rc) | 2034 | if (rc) |
2058 | dbg("%s: rc = %d\n", __func__, rc); | 2035 | dbg("%s: rc = %d\n", __func__, rc); |
2059 | 2036 | ||
2060 | if (p_slot) | ||
2061 | update_slot_info(ctrl, p_slot); | ||
2062 | |||
2063 | return rc; | 2037 | return rc; |
2064 | } | 2038 | } |
2065 | 2039 | ||
@@ -2125,9 +2099,6 @@ int cpqhp_process_SS(struct controller *ctrl, struct pci_func *func) | |||
2125 | rc = 1; | 2099 | rc = 1; |
2126 | } | 2100 | } |
2127 | 2101 | ||
2128 | if (p_slot) | ||
2129 | update_slot_info(ctrl, p_slot); | ||
2130 | |||
2131 | return rc; | 2102 | return rc; |
2132 | } | 2103 | } |
2133 | 2104 | ||
diff --git a/drivers/pci/hotplug/ibmphp.h b/drivers/pci/hotplug/ibmphp.h index fddb78606c74..b89f850c3a4e 100644 --- a/drivers/pci/hotplug/ibmphp.h +++ b/drivers/pci/hotplug/ibmphp.h | |||
@@ -698,7 +698,7 @@ struct slot { | |||
698 | u8 supported_bus_mode; | 698 | u8 supported_bus_mode; |
699 | u8 flag; /* this is for disable slot and polling */ | 699 | u8 flag; /* this is for disable slot and polling */ |
700 | u8 ctlr_index; | 700 | u8 ctlr_index; |
701 | struct hotplug_slot *hotplug_slot; | 701 | struct hotplug_slot hotplug_slot; |
702 | struct controller *ctrl; | 702 | struct controller *ctrl; |
703 | struct pci_func *func; | 703 | struct pci_func *func; |
704 | u8 irq[4]; | 704 | u8 irq[4]; |
@@ -740,7 +740,12 @@ int ibmphp_do_disable_slot(struct slot *slot_cur); | |||
740 | int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be static */ | 740 | int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be static */ |
741 | int ibmphp_configure_card(struct pci_func *, u8); | 741 | int ibmphp_configure_card(struct pci_func *, u8); |
742 | int ibmphp_unconfigure_card(struct slot **, int); | 742 | int ibmphp_unconfigure_card(struct slot **, int); |
743 | extern struct hotplug_slot_ops ibmphp_hotplug_slot_ops; | 743 | extern const struct hotplug_slot_ops ibmphp_hotplug_slot_ops; |
744 | |||
745 | static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot) | ||
746 | { | ||
747 | return container_of(hotplug_slot, struct slot, hotplug_slot); | ||
748 | } | ||
744 | 749 | ||
745 | #endif //__IBMPHP_H | 750 | #endif //__IBMPHP_H |
746 | 751 | ||
diff --git a/drivers/pci/hotplug/ibmphp_core.c b/drivers/pci/hotplug/ibmphp_core.c index 4ea57e9019f1..08a58e911fc2 100644 --- a/drivers/pci/hotplug/ibmphp_core.c +++ b/drivers/pci/hotplug/ibmphp_core.c | |||
@@ -247,11 +247,8 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value) | |||
247 | break; | 247 | break; |
248 | } | 248 | } |
249 | if (rc == 0) { | 249 | if (rc == 0) { |
250 | pslot = hotplug_slot->private; | 250 | pslot = to_slot(hotplug_slot); |
251 | if (pslot) | 251 | rc = ibmphp_hpc_writeslot(pslot, cmd); |
252 | rc = ibmphp_hpc_writeslot(pslot, cmd); | ||
253 | else | ||
254 | rc = -ENODEV; | ||
255 | } | 252 | } |
256 | } else | 253 | } else |
257 | rc = -ENODEV; | 254 | rc = -ENODEV; |
@@ -273,19 +270,15 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
273 | 270 | ||
274 | ibmphp_lock_operations(); | 271 | ibmphp_lock_operations(); |
275 | if (hotplug_slot) { | 272 | if (hotplug_slot) { |
276 | pslot = hotplug_slot->private; | 273 | pslot = to_slot(hotplug_slot); |
277 | if (pslot) { | 274 | memcpy(&myslot, pslot, sizeof(struct slot)); |
278 | memcpy(&myslot, pslot, sizeof(struct slot)); | 275 | rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS, |
279 | rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS, | 276 | &myslot.status); |
280 | &(myslot.status)); | 277 | if (!rc) |
281 | if (!rc) | 278 | rc = ibmphp_hpc_readslot(pslot, READ_EXTSLOTSTATUS, |
282 | rc = ibmphp_hpc_readslot(pslot, | 279 | &myslot.ext_status); |
283 | READ_EXTSLOTSTATUS, | 280 | if (!rc) |
284 | &(myslot.ext_status)); | 281 | *value = SLOT_ATTN(myslot.status, myslot.ext_status); |
285 | if (!rc) | ||
286 | *value = SLOT_ATTN(myslot.status, | ||
287 | myslot.ext_status); | ||
288 | } | ||
289 | } | 282 | } |
290 | 283 | ||
291 | ibmphp_unlock_operations(); | 284 | ibmphp_unlock_operations(); |
@@ -303,14 +296,12 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
 		(ulong) hotplug_slot, (ulong) value);
 	ibmphp_lock_operations();
 	if (hotplug_slot) {
-		pslot = hotplug_slot->private;
-		if (pslot) {
-			memcpy(&myslot, pslot, sizeof(struct slot));
-			rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
-					&(myslot.status));
-			if (!rc)
-				*value = SLOT_LATCH(myslot.status);
-		}
+		pslot = to_slot(hotplug_slot);
+		memcpy(&myslot, pslot, sizeof(struct slot));
+		rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+					 &myslot.status);
+		if (!rc)
+			*value = SLOT_LATCH(myslot.status);
 	}

 	ibmphp_unlock_operations();
@@ -330,14 +321,12 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
 		(ulong) hotplug_slot, (ulong) value);
 	ibmphp_lock_operations();
 	if (hotplug_slot) {
-		pslot = hotplug_slot->private;
-		if (pslot) {
-			memcpy(&myslot, pslot, sizeof(struct slot));
-			rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
-					&(myslot.status));
-			if (!rc)
-				*value = SLOT_PWRGD(myslot.status);
-		}
+		pslot = to_slot(hotplug_slot);
+		memcpy(&myslot, pslot, sizeof(struct slot));
+		rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+					 &myslot.status);
+		if (!rc)
+			*value = SLOT_PWRGD(myslot.status);
 	}

 	ibmphp_unlock_operations();
@@ -357,18 +346,16 @@ static int get_adapter_present(struct hotplug_slot *hotplug_slot, u8 *value)
 		(ulong) hotplug_slot, (ulong) value);
 	ibmphp_lock_operations();
 	if (hotplug_slot) {
-		pslot = hotplug_slot->private;
-		if (pslot) {
-			memcpy(&myslot, pslot, sizeof(struct slot));
-			rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
-					&(myslot.status));
-			if (!rc) {
-				present = SLOT_PRESENT(myslot.status);
-				if (present == HPC_SLOT_EMPTY)
-					*value = 0;
-				else
-					*value = 1;
-			}
-		}
+		pslot = to_slot(hotplug_slot);
+		memcpy(&myslot, pslot, sizeof(struct slot));
+		rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+					 &myslot.status);
+		if (!rc) {
+			present = SLOT_PRESENT(myslot.status);
+			if (present == HPC_SLOT_EMPTY)
+				*value = 0;
+			else
+				*value = 1;
+		}
 	}

@@ -382,7 +369,7 @@ static int get_max_bus_speed(struct slot *slot)
 	int rc = 0;
 	u8 mode = 0;
 	enum pci_bus_speed speed;
-	struct pci_bus *bus = slot->hotplug_slot->pci_slot->bus;
+	struct pci_bus *bus = slot->hotplug_slot.pci_slot->bus;

 	debug("%s - Entry slot[%p]\n", __func__, slot);

@@ -582,29 +569,10 @@ static int validate(struct slot *slot_cur, int opn)
 ****************************************************************************/
 int ibmphp_update_slot_info(struct slot *slot_cur)
 {
-	struct hotplug_slot_info *info;
-	struct pci_bus *bus = slot_cur->hotplug_slot->pci_slot->bus;
-	int rc;
+	struct pci_bus *bus = slot_cur->hotplug_slot.pci_slot->bus;
 	u8 bus_speed;
 	u8 mode;

-	info = kmalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
-	if (!info)
-		return -ENOMEM;
-
-	info->power_status = SLOT_PWRGD(slot_cur->status);
-	info->attention_status = SLOT_ATTN(slot_cur->status,
-					   slot_cur->ext_status);
-	info->latch_status = SLOT_LATCH(slot_cur->status);
-	if (!SLOT_PRESENT(slot_cur->status)) {
-		info->adapter_status = 0;
-		/* info->max_adapter_speed_status = MAX_ADAPTER_NONE; */
-	} else {
-		info->adapter_status = 1;
-		/* get_max_adapter_speed_1(slot_cur->hotplug_slot,
-					&info->max_adapter_speed_status, 0); */
-	}
-
 	bus_speed = slot_cur->bus_on->current_speed;
 	mode = slot_cur->bus_on->current_bus_mode;

@@ -630,9 +598,7 @@ int ibmphp_update_slot_info(struct slot *slot_cur)
 	bus->cur_bus_speed = bus_speed;
 	// To do: bus_names

-	rc = pci_hp_change_slot_info(slot_cur->hotplug_slot, info);
-	kfree(info);
-	return rc;
+	return 0;
 }


@@ -673,7 +639,7 @@ static void free_slots(void)

 	list_for_each_entry_safe(slot_cur, next, &ibmphp_slot_head,
 				 ibm_slot_list) {
-		pci_hp_del(slot_cur->hotplug_slot);
+		pci_hp_del(&slot_cur->hotplug_slot);
 		slot_cur->ctrl = NULL;
 		slot_cur->bus_on = NULL;

@@ -683,9 +649,7 @@ static void free_slots(void)
 		 */
 		ibmphp_unconfigure_card(&slot_cur, -1);

-		pci_hp_destroy(slot_cur->hotplug_slot);
-		kfree(slot_cur->hotplug_slot->info);
-		kfree(slot_cur->hotplug_slot);
+		pci_hp_destroy(&slot_cur->hotplug_slot);
 		kfree(slot_cur);
 	}
 	debug("%s -- exit\n", __func__);
@@ -1007,7 +971,7 @@ static int enable_slot(struct hotplug_slot *hs)
 	ibmphp_lock_operations();

 	debug("ENABLING SLOT........\n");
-	slot_cur = hs->private;
+	slot_cur = to_slot(hs);

 	rc = validate(slot_cur, ENABLE);
 	if (rc) {
@@ -1095,8 +1059,7 @@ static int enable_slot(struct hotplug_slot *hs)

 	slot_cur->func = kzalloc(sizeof(struct pci_func), GFP_KERNEL);
 	if (!slot_cur->func) {
-		/* We cannot do update_slot_info here, since no memory for
-		 * kmalloc n.e.ways, and update_slot_info allocates some */
+		/* do update_slot_info here? */
 		rc = -ENOMEM;
 		goto error_power;
 	}
@@ -1169,7 +1132,7 @@ error_power:
 **************************************************************/
 static int ibmphp_disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	int rc;

 	ibmphp_lock_operations();
@@ -1259,7 +1222,7 @@ error:
 		goto exit;
 	}

-struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
+const struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
 	.set_attention_status = set_attention_status,
 	.enable_slot = enable_slot,
 	.disable_slot = ibmphp_disable_slot,
diff --git a/drivers/pci/hotplug/ibmphp_ebda.c b/drivers/pci/hotplug/ibmphp_ebda.c
index 6f8e90e3ec08..11a2661dc062 100644
--- a/drivers/pci/hotplug/ibmphp_ebda.c
+++ b/drivers/pci/hotplug/ibmphp_ebda.c
@@ -666,36 +666,8 @@ static int fillslotinfo(struct hotplug_slot *hotplug_slot)
 	struct slot *slot;
 	int rc = 0;

-	if (!hotplug_slot || !hotplug_slot->private)
-		return -EINVAL;
-
-	slot = hotplug_slot->private;
+	slot = to_slot(hotplug_slot);
 	rc = ibmphp_hpc_readslot(slot, READ_ALLSTAT, NULL);
-	if (rc)
-		return rc;
-
-	// power - enabled:1  not:0
-	hotplug_slot->info->power_status = SLOT_POWER(slot->status);
-
-	// attention - off:0, on:1, blinking:2
-	hotplug_slot->info->attention_status = SLOT_ATTN(slot->status, slot->ext_status);
-
-	// latch - open:1 closed:0
-	hotplug_slot->info->latch_status = SLOT_LATCH(slot->status);
-
-	// pci board - present:1 not:0
-	if (SLOT_PRESENT(slot->status))
-		hotplug_slot->info->adapter_status = 1;
-	else
-		hotplug_slot->info->adapter_status = 0;
-	/*
-	if (slot->bus_on->supported_bus_mode
-		&& (slot->bus_on->supported_speed == BUS_SPEED_66))
-		hotplug_slot->info->max_bus_speed_status = BUS_SPEED_66PCIX;
-	else
-		hotplug_slot->info->max_bus_speed_status = slot->bus_on->supported_speed;
-	*/
-
 	return rc;
 }

@@ -712,7 +684,6 @@ static int __init ebda_rsrc_controller(void)
 	u8 ctlr_id, temp, bus_index;
 	u16 ctlr, slot, bus;
 	u16 slot_num, bus_num, index;
-	struct hotplug_slot *hp_slot_ptr;
 	struct controller *hpc_ptr;
 	struct ebda_hpc_bus *bus_ptr;
 	struct ebda_hpc_slot *slot_ptr;
@@ -771,7 +742,7 @@ static int __init ebda_rsrc_controller(void)
 			bus_info_ptr1 = kzalloc(sizeof(struct bus_info), GFP_KERNEL);
 			if (!bus_info_ptr1) {
 				rc = -ENOMEM;
-				goto error_no_hp_slot;
+				goto error_no_slot;
 			}
 			bus_info_ptr1->slot_min = slot_ptr->slot_num;
 			bus_info_ptr1->slot_max = slot_ptr->slot_num;
@@ -842,7 +813,7 @@ static int __init ebda_rsrc_controller(void)
 				(hpc_ptr->u.isa_ctlr.io_end - hpc_ptr->u.isa_ctlr.io_start + 1),
 				"ibmphp")) {
 				rc = -ENODEV;
-				goto error_no_hp_slot;
+				goto error_no_slot;
 			}
 			hpc_ptr->irq = readb(io_mem + addr + 4);
 			addr += 5;
@@ -857,7 +828,7 @@ static int __init ebda_rsrc_controller(void)
 			break;
 		default:
 			rc = -ENODEV;
-			goto error_no_hp_slot;
+			goto error_no_slot;
 		}

 		//reorganize chassis' linked list
@@ -870,19 +841,6 @@ static int __init ebda_rsrc_controller(void)

 		// register slots with hpc core as well as create linked list of ibm slot
 		for (index = 0; index < hpc_ptr->slot_count; index++) {
-
-			hp_slot_ptr = kzalloc(sizeof(*hp_slot_ptr), GFP_KERNEL);
-			if (!hp_slot_ptr) {
-				rc = -ENOMEM;
-				goto error_no_hp_slot;
-			}
-
-			hp_slot_ptr->info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
-			if (!hp_slot_ptr->info) {
-				rc = -ENOMEM;
-				goto error_no_hp_info;
-			}
-
 			tmp_slot = kzalloc(sizeof(*tmp_slot), GFP_KERNEL);
 			if (!tmp_slot) {
 				rc = -ENOMEM;
@@ -909,7 +867,6 @@ static int __init ebda_rsrc_controller(void)

 			bus_info_ptr1 = ibmphp_find_same_bus_num(hpc_ptr->slots[index].slot_bus_num);
 			if (!bus_info_ptr1) {
-				kfree(tmp_slot);
 				rc = -ENODEV;
 				goto error;
 			}
@@ -919,22 +876,19 @@ static int __init ebda_rsrc_controller(void)

 			tmp_slot->ctlr_index = hpc_ptr->slots[index].ctl_index;
 			tmp_slot->number = hpc_ptr->slots[index].slot_num;
-			tmp_slot->hotplug_slot = hp_slot_ptr;
-
-			hp_slot_ptr->private = tmp_slot;

-			rc = fillslotinfo(hp_slot_ptr);
+			rc = fillslotinfo(&tmp_slot->hotplug_slot);
 			if (rc)
 				goto error;

-			rc = ibmphp_init_devno((struct slot **) &hp_slot_ptr->private);
+			rc = ibmphp_init_devno(&tmp_slot);
 			if (rc)
 				goto error;
-			hp_slot_ptr->ops = &ibmphp_hotplug_slot_ops;
+			tmp_slot->hotplug_slot.ops = &ibmphp_hotplug_slot_ops;

 			// end of registering ibm slot with hotplug core

-			list_add(&((struct slot *)(hp_slot_ptr->private))->ibm_slot_list, &ibmphp_slot_head);
+			list_add(&tmp_slot->ibm_slot_list, &ibmphp_slot_head);
 		}

 	print_bus_info();
@@ -944,7 +898,7 @@ static int __init ebda_rsrc_controller(void)

 	list_for_each_entry(tmp_slot, &ibmphp_slot_head, ibm_slot_list) {
 		snprintf(name, SLOT_NAME_SIZE, "%s", create_file_name(tmp_slot));
-		pci_hp_register(tmp_slot->hotplug_slot,
+		pci_hp_register(&tmp_slot->hotplug_slot,
 			pci_find_bus(0, tmp_slot->bus), tmp_slot->device, name);
 	}

@@ -953,12 +907,8 @@ static int __init ebda_rsrc_controller(void)
 	return 0;

 error:
-	kfree(hp_slot_ptr->private);
+	kfree(tmp_slot);
 error_no_slot:
-	kfree(hp_slot_ptr->info);
-error_no_hp_info:
-	kfree(hp_slot_ptr);
-error_no_hp_slot:
 	free_ebda_hpc(hpc_ptr);
 error_no_hpc:
 	iounmap(io_mem);
diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
index 90fde5f106d8..5ac31f683b85 100644
--- a/drivers/pci/hotplug/pci_hotplug_core.c
+++ b/drivers/pci/hotplug/pci_hotplug_core.c
@@ -49,15 +49,13 @@ static DEFINE_MUTEX(pci_hp_mutex);
 #define GET_STATUS(name, type)					\
 static int get_##name(struct hotplug_slot *slot, type *value)	\
 {								\
-	struct hotplug_slot_ops *ops = slot->ops;		\
+	const struct hotplug_slot_ops *ops = slot->ops;		\
 	int retval = 0;						\
-	if (!try_module_get(ops->owner))			\
+	if (!try_module_get(slot->owner))			\
 		return -ENODEV;					\
 	if (ops->get_##name)					\
 		retval = ops->get_##name(slot, value);		\
-	else							\
-		*value = slot->info->name;			\
-	module_put(ops->owner);					\
+	module_put(slot->owner);				\
 	return retval;						\
 }

@@ -90,7 +88,7 @@ static ssize_t power_write_file(struct pci_slot *pci_slot, const char *buf,
 	power = (u8)(lpower & 0xff);
 	dbg("power = %d\n", power);

-	if (!try_module_get(slot->ops->owner)) {
+	if (!try_module_get(slot->owner)) {
 		retval = -ENODEV;
 		goto exit;
 	}
@@ -109,7 +107,7 @@ static ssize_t power_write_file(struct pci_slot *pci_slot, const char *buf,
 		err("Illegal value specified for power\n");
 		retval = -EINVAL;
 	}
-	module_put(slot->ops->owner);
+	module_put(slot->owner);

 exit:
 	if (retval)
@@ -138,7 +136,8 @@ static ssize_t attention_read_file(struct pci_slot *pci_slot, char *buf)
 static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
 				    size_t count)
 {
-	struct hotplug_slot_ops *ops = pci_slot->hotplug->ops;
+	struct hotplug_slot *slot = pci_slot->hotplug;
+	const struct hotplug_slot_ops *ops = slot->ops;
 	unsigned long lattention;
 	u8 attention;
 	int retval = 0;
@@ -147,13 +146,13 @@ static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
 	attention = (u8)(lattention & 0xff);
 	dbg(" - attention = %d\n", attention);

-	if (!try_module_get(ops->owner)) {
+	if (!try_module_get(slot->owner)) {
 		retval = -ENODEV;
 		goto exit;
 	}
 	if (ops->set_attention_status)
-		retval = ops->set_attention_status(pci_slot->hotplug, attention);
-	module_put(ops->owner);
+		retval = ops->set_attention_status(slot, attention);
+	module_put(slot->owner);

 exit:
 	if (retval)
@@ -213,13 +212,13 @@ static ssize_t test_write_file(struct pci_slot *pci_slot, const char *buf,
 	test = (u32)(ltest & 0xffffffff);
 	dbg("test = %d\n", test);

-	if (!try_module_get(slot->ops->owner)) {
+	if (!try_module_get(slot->owner)) {
 		retval = -ENODEV;
 		goto exit;
 	}
 	if (slot->ops->hardware_test)
 		retval = slot->ops->hardware_test(slot, test);
-	module_put(slot->ops->owner);
+	module_put(slot->owner);

 exit:
 	if (retval)
@@ -444,11 +443,11 @@ int __pci_hp_initialize(struct hotplug_slot *slot, struct pci_bus *bus,

 	if (slot == NULL)
 		return -ENODEV;
-	if ((slot->info == NULL) || (slot->ops == NULL))
+	if (slot->ops == NULL)
 		return -EINVAL;

-	slot->ops->owner = owner;
-	slot->ops->mod_name = mod_name;
+	slot->owner = owner;
+	slot->mod_name = mod_name;

 	/*
 	 * No problems if we call this interface from both ACPI_PCI_SLOT
@@ -559,28 +558,6 @@ void pci_hp_destroy(struct hotplug_slot *slot)
 }
 EXPORT_SYMBOL_GPL(pci_hp_destroy);

-/**
- * pci_hp_change_slot_info - changes the slot's information structure in the core
- * @slot: pointer to the slot whose info has changed
- * @info: pointer to the info copy into the slot's info structure
- *
- * @slot must have been registered with the pci
- * hotplug subsystem previously with a call to pci_hp_register().
- *
- * Returns 0 if successful, anything else for an error.
- */
-int pci_hp_change_slot_info(struct hotplug_slot *slot,
-			    struct hotplug_slot_info *info)
-{
-	if (!slot || !info)
-		return -ENODEV;
-
-	memcpy(slot->info, info, sizeof(struct hotplug_slot_info));
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pci_hp_change_slot_info);
-
 static int __init pci_hotplug_init(void)
 {
 	int result;
diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h index 811cf83f956d..506e1d923a1f 100644 --- a/drivers/pci/hotplug/pciehp.h +++ b/drivers/pci/hotplug/pciehp.h | |||
@@ -19,7 +19,6 @@ | |||
19 | #include <linux/pci.h> | 19 | #include <linux/pci.h> |
20 | #include <linux/pci_hotplug.h> | 20 | #include <linux/pci_hotplug.h> |
21 | #include <linux/delay.h> | 21 | #include <linux/delay.h> |
22 | #include <linux/sched/signal.h> /* signal_pending() */ | ||
23 | #include <linux/mutex.h> | 22 | #include <linux/mutex.h> |
24 | #include <linux/rwsem.h> | 23 | #include <linux/rwsem.h> |
25 | #include <linux/workqueue.h> | 24 | #include <linux/workqueue.h> |
@@ -60,71 +59,63 @@ do { \ | |||
60 | #define SLOT_NAME_SIZE 10 | 59 | #define SLOT_NAME_SIZE 10 |
61 | 60 | ||
62 | /** | 61 | /** |
63 | * struct slot - PCIe hotplug slot | ||
64 | * @state: current state machine position | ||
65 | * @ctrl: pointer to the slot's controller structure | ||
66 | * @hotplug_slot: pointer to the structure registered with the PCI hotplug core | ||
67 | * @work: work item to turn the slot on or off after 5 seconds in response to | ||
68 | * an Attention Button press | ||
69 | * @lock: protects reads and writes of @state; | ||
70 | * protects scheduling, execution and cancellation of @work | ||
71 | */ | ||
72 | struct slot { | ||
73 | u8 state; | ||
74 | struct controller *ctrl; | ||
75 | struct hotplug_slot *hotplug_slot; | ||
76 | struct delayed_work work; | ||
77 | struct mutex lock; | ||
78 | }; | ||
79 | |||
80 | /** | ||
81 | * struct controller - PCIe hotplug controller | 62 | * struct controller - PCIe hotplug controller |
82 | * @ctrl_lock: serializes writes to the Slot Control register | ||
83 | * @pcie: pointer to the controller's PCIe port service device | 63 | * @pcie: pointer to the controller's PCIe port service device |
84 | * @reset_lock: prevents access to the Data Link Layer Link Active bit in the | ||
85 | * Link Status register and to the Presence Detect State bit in the Slot | ||
86 | * Status register during a slot reset which may cause them to flap | ||
87 | * @slot: pointer to the controller's slot structure | ||
88 | * @queue: wait queue to wake up on reception of a Command Completed event, | ||
89 | * used for synchronous writes to the Slot Control register | ||
90 | * @slot_cap: cached copy of the Slot Capabilities register | 64 | * @slot_cap: cached copy of the Slot Capabilities register |
91 | * @slot_ctrl: cached copy of the Slot Control register | 65 | * @slot_ctrl: cached copy of the Slot Control register |
92 | * @poll_thread: thread to poll for slot events if no IRQ is available, | 66 | * @ctrl_lock: serializes writes to the Slot Control register |
93 | * enabled with pciehp_poll_mode module parameter | ||
94 | * @cmd_started: jiffies when the Slot Control register was last written; | 67 | * @cmd_started: jiffies when the Slot Control register was last written; |
95 | * the next write is allowed 1 second later, absent a Command Completed | 68 | * the next write is allowed 1 second later, absent a Command Completed |
96 | * interrupt (PCIe r4.0, sec 6.7.3.2) | 69 | * interrupt (PCIe r4.0, sec 6.7.3.2) |
97 | * @cmd_busy: flag set on Slot Control register write, cleared by IRQ handler | 70 | * @cmd_busy: flag set on Slot Control register write, cleared by IRQ handler |
98 | * on reception of a Command Completed event | 71 | * on reception of a Command Completed event |
99 | * @link_active_reporting: cached copy of Data Link Layer Link Active Reporting | 72 | * @queue: wait queue to wake up on reception of a Command Completed event, |
100 | * Capable bit in Link Capabilities register; if this bit is zero, the | 73 | * used for synchronous writes to the Slot Control register |
101 | * Data Link Layer Link Active bit in the Link Status register will never | 74 | * @pending_events: used by the IRQ handler to save events retrieved from the |
102 | * be set and the driver is thus confined to wait 1 second before assuming | 75 | * Slot Status register for later consumption by the IRQ thread |
103 | * the link to a hotplugged device is up and accessing it | ||
104 | * @notification_enabled: whether the IRQ was requested successfully | 76 | * @notification_enabled: whether the IRQ was requested successfully |
105 | * @power_fault_detected: whether a power fault was detected by the hardware | 77 | * @power_fault_detected: whether a power fault was detected by the hardware |
106 | * that has not yet been cleared by the user | 78 | * that has not yet been cleared by the user |
107 | * @pending_events: used by the IRQ handler to save events retrieved from the | 79 | * @poll_thread: thread to poll for slot events if no IRQ is available, |
108 | * Slot Status register for later consumption by the IRQ thread | 80 | * enabled with pciehp_poll_mode module parameter |
81 | * @state: current state machine position | ||
82 | * @state_lock: protects reads and writes of @state; | ||
83 | * protects scheduling, execution and cancellation of @button_work | ||
84 | * @button_work: work item to turn the slot on or off after 5 seconds | ||
85 | * in response to an Attention Button press | ||
86 | * @hotplug_slot: structure registered with the PCI hotplug core | ||
87 | * @reset_lock: prevents access to the Data Link Layer Link Active bit in the | ||
88 | * Link Status register and to the Presence Detect State bit in the Slot | ||
89 | * Status register during a slot reset which may cause them to flap | ||
109 | * @request_result: result of last user request submitted to the IRQ thread | 90 | * @request_result: result of last user request submitted to the IRQ thread |
110 | * @requester: wait queue to wake up on completion of user request, | 91 | * @requester: wait queue to wake up on completion of user request, |
111 | * used for synchronous slot enable/disable request via sysfs | 92 | * used for synchronous slot enable/disable request via sysfs |
93 | * | ||
94 | * PCIe hotplug has a 1:1 relationship between controller and slot, hence | ||
95 | * unlike other drivers, the two aren't represented by separate structures. | ||
112 | */ | 96 | */ |
113 | struct controller { | 97 | struct controller { |
114 | struct mutex ctrl_lock; | ||
115 | struct pcie_device *pcie; | 98 | struct pcie_device *pcie; |
116 | struct rw_semaphore reset_lock; | 99 | |
117 | struct slot *slot; | 100 | u32 slot_cap; /* capabilities and quirks */ |
118 | wait_queue_head_t queue; | 101 | |
119 | u32 slot_cap; | 102 | u16 slot_ctrl; /* control register access */ |
120 | u16 slot_ctrl; | 103 | struct mutex ctrl_lock; |
121 | struct task_struct *poll_thread; | 104 | unsigned long cmd_started; |
122 | unsigned long cmd_started; /* jiffies */ | ||
123 | unsigned int cmd_busy:1; | 105 | unsigned int cmd_busy:1; |
124 | unsigned int link_active_reporting:1; | 106 | wait_queue_head_t queue; |
107 | |||
108 | atomic_t pending_events; /* event handling */ | ||
125 | unsigned int notification_enabled:1; | 109 | unsigned int notification_enabled:1; |
126 | unsigned int power_fault_detected; | 110 | unsigned int power_fault_detected; |
127 | atomic_t pending_events; | 111 | struct task_struct *poll_thread; |
112 | |||
113 | u8 state; /* state machine */ | ||
114 | struct mutex state_lock; | ||
115 | struct delayed_work button_work; | ||
116 | |||
117 | struct hotplug_slot hotplug_slot; /* hotplug core interface */ | ||
118 | struct rw_semaphore reset_lock; | ||
128 | int request_result; | 119 | int request_result; |
129 | wait_queue_head_t requester; | 120 | wait_queue_head_t requester; |
130 | }; | 121 | }; |
@@ -174,42 +165,50 @@ struct controller { | |||
174 | #define NO_CMD_CMPL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS) | 165 | #define NO_CMD_CMPL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS) |
175 | #define PSN(ctrl) (((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19) | 166 | #define PSN(ctrl) (((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19) |
176 | 167 | ||
177 | int pciehp_sysfs_enable_slot(struct slot *slot); | ||
178 | int pciehp_sysfs_disable_slot(struct slot *slot); | ||
179 | void pciehp_request(struct controller *ctrl, int action); | 168 | void pciehp_request(struct controller *ctrl, int action); |
180 | void pciehp_handle_button_press(struct slot *slot); | 169 | void pciehp_handle_button_press(struct controller *ctrl); |
181 | void pciehp_handle_disable_request(struct slot *slot); | 170 | void pciehp_handle_disable_request(struct controller *ctrl); |
182 | void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events); | 171 | void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events); |
183 | int pciehp_configure_device(struct slot *p_slot); | 172 | int pciehp_configure_device(struct controller *ctrl); |
184 | void pciehp_unconfigure_device(struct slot *p_slot); | 173 | void pciehp_unconfigure_device(struct controller *ctrl, bool presence); |
185 | void pciehp_queue_pushbutton_work(struct work_struct *work); | 174 | void pciehp_queue_pushbutton_work(struct work_struct *work); |
186 | struct controller *pcie_init(struct pcie_device *dev); | 175 | struct controller *pcie_init(struct pcie_device *dev); |
187 | int pcie_init_notification(struct controller *ctrl); | 176 | int pcie_init_notification(struct controller *ctrl); |
188 | void pcie_shutdown_notification(struct controller *ctrl); | 177 | void pcie_shutdown_notification(struct controller *ctrl); |
189 | void pcie_clear_hotplug_events(struct controller *ctrl); | 178 | void pcie_clear_hotplug_events(struct controller *ctrl); |
190 | int pciehp_power_on_slot(struct slot *slot); | 179 | void pcie_enable_interrupt(struct controller *ctrl); |
191 | void pciehp_power_off_slot(struct slot *slot); | 180 | void pcie_disable_interrupt(struct controller *ctrl); |
192 | void pciehp_get_power_status(struct slot *slot, u8 *status); | 181 | int pciehp_power_on_slot(struct controller *ctrl); |
193 | void pciehp_get_attention_status(struct slot *slot, u8 *status); | 182 | void pciehp_power_off_slot(struct controller *ctrl); |
194 | 183 | void pciehp_get_power_status(struct controller *ctrl, u8 *status); | |
195 | void pciehp_set_attention_status(struct slot *slot, u8 status); | 184 | |
196 | void pciehp_get_latch_status(struct slot *slot, u8 *status); | 185 | void pciehp_set_attention_status(struct controller *ctrl, u8 status); |
197 | void pciehp_get_adapter_status(struct slot *slot, u8 *status); | 186 | void pciehp_get_latch_status(struct controller *ctrl, u8 *status); |
198 | int pciehp_query_power_fault(struct slot *slot); | 187 | int pciehp_query_power_fault(struct controller *ctrl); |
199 | void pciehp_green_led_on(struct slot *slot); | 188 | void pciehp_green_led_on(struct controller *ctrl); |
200 | void pciehp_green_led_off(struct slot *slot); | 189 | void pciehp_green_led_off(struct controller *ctrl); |
201 | void pciehp_green_led_blink(struct slot *slot); | 190 | void pciehp_green_led_blink(struct controller *ctrl); |
191 | bool pciehp_card_present(struct controller *ctrl); | ||
192 | bool pciehp_card_present_or_link_active(struct controller *ctrl); | ||
202 | int pciehp_check_link_status(struct controller *ctrl); | 193 | int pciehp_check_link_status(struct controller *ctrl); |
203 | bool pciehp_check_link_active(struct controller *ctrl); | 194 | bool pciehp_check_link_active(struct controller *ctrl); |
204 | void pciehp_release_ctrl(struct controller *ctrl); | 195 | void pciehp_release_ctrl(struct controller *ctrl); |
205 | int pciehp_reset_slot(struct slot *slot, int probe); | ||
206 | 196 | ||
197 | int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot); | ||
198 | int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot); | ||
199 | int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe); | ||
200 | int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status); | ||
207 | int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status); | 201 | int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status); |
208 | int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status); | 202 | int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status); |
209 | 203 | ||
210 | static inline const char *slot_name(struct slot *slot) | 204 | static inline const char *slot_name(struct controller *ctrl) |
205 | { | ||
206 | return hotplug_slot_name(&ctrl->hotplug_slot); | ||
207 | } | ||
208 | |||
209 | static inline struct controller *to_ctrl(struct hotplug_slot *hotplug_slot) | ||
211 | { | 210 | { |
212 | return hotplug_slot_name(slot->hotplug_slot); | 211 | return container_of(hotplug_slot, struct controller, hotplug_slot); |
213 | } | 212 | } |
214 | 213 | ||
215 | #endif /* _PCIEHP_H */ | 214 | #endif /* _PCIEHP_H */ |
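The new to_ctrl() helper above works because this series embeds the hotplug_slot directly in struct controller, so the parent structure can be recovered from a pointer to the member. A minimal standalone sketch of that container_of idiom (the struct fields here are illustrative, not the kernel's full definitions):

```c
#include <stddef.h>

/* Sketch of the container_of idiom used by to_ctrl(): hotplug_slot is
 * embedded in controller, so subtracting the member offset from a
 * member pointer yields the enclosing structure. */
struct hotplug_slot { const char *name; };
struct controller { int slot_cap; struct hotplug_slot hotplug_slot; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct controller *to_ctrl(struct hotplug_slot *hotplug_slot)
{
	return container_of(hotplug_slot, struct controller, hotplug_slot);
}
```

Because the member is embedded rather than separately allocated, freeing the controller frees the slot too, which is what lets the patch delete the kfree(hotplug_slot) calls.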
diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
index ec48c9433ae5..fc5366b50e95 100644
--- a/drivers/pci/hotplug/pciehp_core.c
+++ b/drivers/pci/hotplug/pciehp_core.c
@@ -23,8 +23,6 @@ | |||
23 | #include <linux/types.h> | 23 | #include <linux/types.h> |
24 | #include <linux/pci.h> | 24 | #include <linux/pci.h> |
25 | #include "pciehp.h" | 25 | #include "pciehp.h" |
26 | #include <linux/interrupt.h> | ||
27 | #include <linux/time.h> | ||
28 | 26 | ||
29 | #include "../pci.h" | 27 | #include "../pci.h" |
30 | 28 | ||
@@ -47,45 +45,30 @@ MODULE_PARM_DESC(pciehp_poll_time, "Polling mechanism frequency, in seconds"); | |||
47 | #define PCIE_MODULE_NAME "pciehp" | 45 | #define PCIE_MODULE_NAME "pciehp" |
48 | 46 | ||
49 | static int set_attention_status(struct hotplug_slot *slot, u8 value); | 47 | static int set_attention_status(struct hotplug_slot *slot, u8 value); |
50 | static int enable_slot(struct hotplug_slot *slot); | ||
51 | static int disable_slot(struct hotplug_slot *slot); | ||
52 | static int get_power_status(struct hotplug_slot *slot, u8 *value); | 48 | static int get_power_status(struct hotplug_slot *slot, u8 *value); |
53 | static int get_attention_status(struct hotplug_slot *slot, u8 *value); | ||
54 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); | 49 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); |
55 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); | 50 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); |
56 | static int reset_slot(struct hotplug_slot *slot, int probe); | ||
57 | 51 | ||
58 | static int init_slot(struct controller *ctrl) | 52 | static int init_slot(struct controller *ctrl) |
59 | { | 53 | { |
60 | struct slot *slot = ctrl->slot; | 54 | struct hotplug_slot_ops *ops; |
61 | struct hotplug_slot *hotplug = NULL; | ||
62 | struct hotplug_slot_info *info = NULL; | ||
63 | struct hotplug_slot_ops *ops = NULL; | ||
64 | char name[SLOT_NAME_SIZE]; | 55 | char name[SLOT_NAME_SIZE]; |
65 | int retval = -ENOMEM; | 56 | int retval; |
66 | |||
67 | hotplug = kzalloc(sizeof(*hotplug), GFP_KERNEL); | ||
68 | if (!hotplug) | ||
69 | goto out; | ||
70 | |||
71 | info = kzalloc(sizeof(*info), GFP_KERNEL); | ||
72 | if (!info) | ||
73 | goto out; | ||
74 | 57 | ||
75 | /* Setup hotplug slot ops */ | 58 | /* Setup hotplug slot ops */ |
76 | ops = kzalloc(sizeof(*ops), GFP_KERNEL); | 59 | ops = kzalloc(sizeof(*ops), GFP_KERNEL); |
77 | if (!ops) | 60 | if (!ops) |
78 | goto out; | 61 | return -ENOMEM; |
79 | 62 | ||
80 | ops->enable_slot = enable_slot; | 63 | ops->enable_slot = pciehp_sysfs_enable_slot; |
81 | ops->disable_slot = disable_slot; | 64 | ops->disable_slot = pciehp_sysfs_disable_slot; |
82 | ops->get_power_status = get_power_status; | 65 | ops->get_power_status = get_power_status; |
83 | ops->get_adapter_status = get_adapter_status; | 66 | ops->get_adapter_status = get_adapter_status; |
84 | ops->reset_slot = reset_slot; | 67 | ops->reset_slot = pciehp_reset_slot; |
85 | if (MRL_SENS(ctrl)) | 68 | if (MRL_SENS(ctrl)) |
86 | ops->get_latch_status = get_latch_status; | 69 | ops->get_latch_status = get_latch_status; |
87 | if (ATTN_LED(ctrl)) { | 70 | if (ATTN_LED(ctrl)) { |
88 | ops->get_attention_status = get_attention_status; | 71 | ops->get_attention_status = pciehp_get_attention_status; |
89 | ops->set_attention_status = set_attention_status; | 72 | ops->set_attention_status = set_attention_status; |
90 | } else if (ctrl->pcie->port->hotplug_user_indicators) { | 73 | } else if (ctrl->pcie->port->hotplug_user_indicators) { |
91 | ops->get_attention_status = pciehp_get_raw_indicator_status; | 74 | ops->get_attention_status = pciehp_get_raw_indicator_status; |
@@ -93,33 +76,24 @@ static int init_slot(struct controller *ctrl) | |||
93 | } | 76 | } |
94 | 77 | ||
95 | /* register this slot with the hotplug pci core */ | 78 | /* register this slot with the hotplug pci core */ |
96 | hotplug->info = info; | 79 | ctrl->hotplug_slot.ops = ops; |
97 | hotplug->private = slot; | ||
98 | hotplug->ops = ops; | ||
99 | slot->hotplug_slot = hotplug; | ||
100 | snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl)); | 80 | snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl)); |
101 | 81 | ||
102 | retval = pci_hp_initialize(hotplug, | 82 | retval = pci_hp_initialize(&ctrl->hotplug_slot, |
103 | ctrl->pcie->port->subordinate, 0, name); | 83 | ctrl->pcie->port->subordinate, 0, name); |
104 | if (retval) | ||
105 | ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval); | ||
106 | out: | ||
107 | if (retval) { | 84 | if (retval) { |
85 | ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval); | ||
108 | kfree(ops); | 86 | kfree(ops); |
109 | kfree(info); | ||
110 | kfree(hotplug); | ||
111 | } | 87 | } |
112 | return retval; | 88 | return retval; |
113 | } | 89 | } |
114 | 90 | ||
115 | static void cleanup_slot(struct controller *ctrl) | 91 | static void cleanup_slot(struct controller *ctrl) |
116 | { | 92 | { |
117 | struct hotplug_slot *hotplug_slot = ctrl->slot->hotplug_slot; | 93 | struct hotplug_slot *hotplug_slot = &ctrl->hotplug_slot; |
118 | 94 | ||
119 | pci_hp_destroy(hotplug_slot); | 95 | pci_hp_destroy(hotplug_slot); |
120 | kfree(hotplug_slot->ops); | 96 | kfree(hotplug_slot->ops); |
121 | kfree(hotplug_slot->info); | ||
122 | kfree(hotplug_slot); | ||
123 | } | 97 | } |
124 | 98 | ||
125 | /* | 99 | /* |
@@ -127,79 +101,48 @@ static void cleanup_slot(struct controller *ctrl) | |||
127 | */ | 101 | */ |
128 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | 102 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) |
129 | { | 103 | { |
130 | struct slot *slot = hotplug_slot->private; | 104 | struct controller *ctrl = to_ctrl(hotplug_slot); |
131 | struct pci_dev *pdev = slot->ctrl->pcie->port; | 105 | struct pci_dev *pdev = ctrl->pcie->port; |
132 | 106 | ||
133 | pci_config_pm_runtime_get(pdev); | 107 | pci_config_pm_runtime_get(pdev); |
134 | pciehp_set_attention_status(slot, status); | 108 | pciehp_set_attention_status(ctrl, status); |
135 | pci_config_pm_runtime_put(pdev); | 109 | pci_config_pm_runtime_put(pdev); |
136 | return 0; | 110 | return 0; |
137 | } | 111 | } |
138 | 112 | ||
139 | |||
140 | static int enable_slot(struct hotplug_slot *hotplug_slot) | ||
141 | { | ||
142 | struct slot *slot = hotplug_slot->private; | ||
143 | |||
144 | return pciehp_sysfs_enable_slot(slot); | ||
145 | } | ||
146 | |||
147 | |||
148 | static int disable_slot(struct hotplug_slot *hotplug_slot) | ||
149 | { | ||
150 | struct slot *slot = hotplug_slot->private; | ||
151 | |||
152 | return pciehp_sysfs_disable_slot(slot); | ||
153 | } | ||
154 | |||
155 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | 113 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) |
156 | { | 114 | { |
157 | struct slot *slot = hotplug_slot->private; | 115 | struct controller *ctrl = to_ctrl(hotplug_slot); |
158 | struct pci_dev *pdev = slot->ctrl->pcie->port; | 116 | struct pci_dev *pdev = ctrl->pcie->port; |
159 | 117 | ||
160 | pci_config_pm_runtime_get(pdev); | 118 | pci_config_pm_runtime_get(pdev); |
161 | pciehp_get_power_status(slot, value); | 119 | pciehp_get_power_status(ctrl, value); |
162 | pci_config_pm_runtime_put(pdev); | 120 | pci_config_pm_runtime_put(pdev); |
163 | return 0; | 121 | return 0; |
164 | } | 122 | } |
165 | 123 | ||
166 | static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | ||
167 | { | ||
168 | struct slot *slot = hotplug_slot->private; | ||
169 | |||
170 | pciehp_get_attention_status(slot, value); | ||
171 | return 0; | ||
172 | } | ||
173 | |||
174 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | 124 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) |
175 | { | 125 | { |
176 | struct slot *slot = hotplug_slot->private; | 126 | struct controller *ctrl = to_ctrl(hotplug_slot); |
177 | struct pci_dev *pdev = slot->ctrl->pcie->port; | 127 | struct pci_dev *pdev = ctrl->pcie->port; |
178 | 128 | ||
179 | pci_config_pm_runtime_get(pdev); | 129 | pci_config_pm_runtime_get(pdev); |
180 | pciehp_get_latch_status(slot, value); | 130 | pciehp_get_latch_status(ctrl, value); |
181 | pci_config_pm_runtime_put(pdev); | 131 | pci_config_pm_runtime_put(pdev); |
182 | return 0; | 132 | return 0; |
183 | } | 133 | } |
184 | 134 | ||
185 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | 135 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) |
186 | { | 136 | { |
187 | struct slot *slot = hotplug_slot->private; | 137 | struct controller *ctrl = to_ctrl(hotplug_slot); |
188 | struct pci_dev *pdev = slot->ctrl->pcie->port; | 138 | struct pci_dev *pdev = ctrl->pcie->port; |
189 | 139 | ||
190 | pci_config_pm_runtime_get(pdev); | 140 | pci_config_pm_runtime_get(pdev); |
191 | pciehp_get_adapter_status(slot, value); | 141 | *value = pciehp_card_present_or_link_active(ctrl); |
192 | pci_config_pm_runtime_put(pdev); | 142 | pci_config_pm_runtime_put(pdev); |
193 | return 0; | 143 | return 0; |
194 | } | 144 | } |
195 | 145 | ||
196 | static int reset_slot(struct hotplug_slot *hotplug_slot, int probe) | ||
197 | { | ||
198 | struct slot *slot = hotplug_slot->private; | ||
199 | |||
200 | return pciehp_reset_slot(slot, probe); | ||
201 | } | ||
202 | |||
203 | /** | 146 | /** |
204 | * pciehp_check_presence() - synthesize event if presence has changed | 147 | * pciehp_check_presence() - synthesize event if presence has changed |
205 | * | 148 | * |
@@ -212,20 +155,19 @@ static int reset_slot(struct hotplug_slot *hotplug_slot, int probe) | |||
212 | */ | 155 | */ |
213 | static void pciehp_check_presence(struct controller *ctrl) | 156 | static void pciehp_check_presence(struct controller *ctrl) |
214 | { | 157 | { |
215 | struct slot *slot = ctrl->slot; | 158 | bool occupied; |
216 | u8 occupied; | ||
217 | 159 | ||
218 | down_read(&ctrl->reset_lock); | 160 | down_read(&ctrl->reset_lock); |
219 | mutex_lock(&slot->lock); | 161 | mutex_lock(&ctrl->state_lock); |
220 | 162 | ||
221 | pciehp_get_adapter_status(slot, &occupied); | 163 | occupied = pciehp_card_present_or_link_active(ctrl); |
222 | if ((occupied && (slot->state == OFF_STATE || | 164 | if ((occupied && (ctrl->state == OFF_STATE || |
223 | slot->state == BLINKINGON_STATE)) || | 165 | ctrl->state == BLINKINGON_STATE)) || |
224 | (!occupied && (slot->state == ON_STATE || | 166 | (!occupied && (ctrl->state == ON_STATE || |
225 | slot->state == BLINKINGOFF_STATE))) | 167 | ctrl->state == BLINKINGOFF_STATE))) |
226 | pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); | 168 | pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); |
227 | 169 | ||
228 | mutex_unlock(&slot->lock); | 170 | mutex_unlock(&ctrl->state_lock); |
229 | up_read(&ctrl->reset_lock); | 171 | up_read(&ctrl->reset_lock); |
230 | } | 172 | } |
231 | 173 | ||
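pciehp_check_presence() above synthesizes a Presence Detect Changed event only when the cached slot state disagrees with physical occupancy. A hedged standalone reduction of that decision (the state names mirror the driver's, but the helper function itself is hypothetical):

```c
/* Sketch of the presence-check decision: an event is requested only
 * when occupancy and cached state contradict each other. */
enum { OFF_STATE, BLINKINGON_STATE, ON_STATE, BLINKINGOFF_STATE };

static int presence_event_needed(int state, int occupied)
{
	if (occupied)	/* card present but slot thought to be off */
		return state == OFF_STATE || state == BLINKINGON_STATE;
	/* card absent but slot thought to be on */
	return state == ON_STATE || state == BLINKINGOFF_STATE;
}
```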
@@ -233,7 +175,6 @@ static int pciehp_probe(struct pcie_device *dev) | |||
233 | { | 175 | { |
234 | int rc; | 176 | int rc; |
235 | struct controller *ctrl; | 177 | struct controller *ctrl; |
236 | struct slot *slot; | ||
237 | 178 | ||
238 | /* If this is not a "hotplug" service, we have no business here. */ | 179 | /* If this is not a "hotplug" service, we have no business here. */ |
239 | if (dev->service != PCIE_PORT_SERVICE_HP) | 180 | if (dev->service != PCIE_PORT_SERVICE_HP) |
@@ -271,8 +212,7 @@ static int pciehp_probe(struct pcie_device *dev) | |||
271 | } | 212 | } |
272 | 213 | ||
273 | /* Publish to user space */ | 214 | /* Publish to user space */ |
274 | slot = ctrl->slot; | 215 | rc = pci_hp_add(&ctrl->hotplug_slot); |
275 | rc = pci_hp_add(slot->hotplug_slot); | ||
276 | if (rc) { | 216 | if (rc) { |
277 | ctrl_err(ctrl, "Publication to user space failed (%d)\n", rc); | 217 | ctrl_err(ctrl, "Publication to user space failed (%d)\n", rc); |
278 | goto err_out_shutdown_notification; | 218 | goto err_out_shutdown_notification; |
@@ -295,29 +235,43 @@ static void pciehp_remove(struct pcie_device *dev) | |||
295 | { | 235 | { |
296 | struct controller *ctrl = get_service_data(dev); | 236 | struct controller *ctrl = get_service_data(dev); |
297 | 237 | ||
298 | pci_hp_del(ctrl->slot->hotplug_slot); | 238 | pci_hp_del(&ctrl->hotplug_slot); |
299 | pcie_shutdown_notification(ctrl); | 239 | pcie_shutdown_notification(ctrl); |
300 | cleanup_slot(ctrl); | 240 | cleanup_slot(ctrl); |
301 | pciehp_release_ctrl(ctrl); | 241 | pciehp_release_ctrl(ctrl); |
302 | } | 242 | } |
303 | 243 | ||
304 | #ifdef CONFIG_PM | 244 | #ifdef CONFIG_PM |
245 | static bool pme_is_native(struct pcie_device *dev) | ||
246 | { | ||
247 | const struct pci_host_bridge *host; | ||
248 | |||
249 | host = pci_find_host_bridge(dev->port->bus); | ||
250 | return pcie_ports_native || host->native_pme; | ||
251 | } | ||
252 | |||
305 | static int pciehp_suspend(struct pcie_device *dev) | 253 | static int pciehp_suspend(struct pcie_device *dev) |
306 | { | 254 | { |
255 | /* | ||
256 | * Disable hotplug interrupt so that it does not trigger | ||
257 | * immediately when the downstream link goes down. | ||
258 | */ | ||
259 | if (pme_is_native(dev)) | ||
260 | pcie_disable_interrupt(get_service_data(dev)); | ||
261 | |||
307 | return 0; | 262 | return 0; |
308 | } | 263 | } |
309 | 264 | ||
310 | static int pciehp_resume_noirq(struct pcie_device *dev) | 265 | static int pciehp_resume_noirq(struct pcie_device *dev) |
311 | { | 266 | { |
312 | struct controller *ctrl = get_service_data(dev); | 267 | struct controller *ctrl = get_service_data(dev); |
313 | struct slot *slot = ctrl->slot; | ||
314 | 268 | ||
315 | /* pci_restore_state() just wrote to the Slot Control register */ | 269 | /* pci_restore_state() just wrote to the Slot Control register */ |
316 | ctrl->cmd_started = jiffies; | 270 | ctrl->cmd_started = jiffies; |
317 | ctrl->cmd_busy = true; | 271 | ctrl->cmd_busy = true; |
318 | 272 | ||
319 | /* clear spurious events from rediscovery of inserted card */ | 273 | /* clear spurious events from rediscovery of inserted card */ |
320 | if (slot->state == ON_STATE || slot->state == BLINKINGOFF_STATE) | 274 | if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) |
321 | pcie_clear_hotplug_events(ctrl); | 275 | pcie_clear_hotplug_events(ctrl); |
322 | 276 | ||
323 | return 0; | 277 | return 0; |
@@ -327,10 +281,29 @@ static int pciehp_resume(struct pcie_device *dev) | |||
327 | { | 281 | { |
328 | struct controller *ctrl = get_service_data(dev); | 282 | struct controller *ctrl = get_service_data(dev); |
329 | 283 | ||
284 | if (pme_is_native(dev)) | ||
285 | pcie_enable_interrupt(ctrl); | ||
286 | |||
330 | pciehp_check_presence(ctrl); | 287 | pciehp_check_presence(ctrl); |
331 | 288 | ||
332 | return 0; | 289 | return 0; |
333 | } | 290 | } |
291 | |||
292 | static int pciehp_runtime_resume(struct pcie_device *dev) | ||
293 | { | ||
294 | struct controller *ctrl = get_service_data(dev); | ||
295 | |||
296 | /* pci_restore_state() just wrote to the Slot Control register */ | ||
297 | ctrl->cmd_started = jiffies; | ||
298 | ctrl->cmd_busy = true; | ||
299 | |||
300 | /* clear spurious events from rediscovery of inserted card */ | ||
301 | if ((ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) && | ||
302 | pme_is_native(dev)) | ||
303 | pcie_clear_hotplug_events(ctrl); | ||
304 | |||
305 | return pciehp_resume(dev); | ||
306 | } | ||
334 | #endif /* PM */ | 307 | #endif /* PM */ |
335 | 308 | ||
336 | static struct pcie_port_service_driver hpdriver_portdrv = { | 309 | static struct pcie_port_service_driver hpdriver_portdrv = { |
@@ -345,10 +318,12 @@ static struct pcie_port_service_driver hpdriver_portdrv = { | |||
345 | .suspend = pciehp_suspend, | 318 | .suspend = pciehp_suspend, |
346 | .resume_noirq = pciehp_resume_noirq, | 319 | .resume_noirq = pciehp_resume_noirq, |
347 | .resume = pciehp_resume, | 320 | .resume = pciehp_resume, |
321 | .runtime_suspend = pciehp_suspend, | ||
322 | .runtime_resume = pciehp_runtime_resume, | ||
348 | #endif /* PM */ | 323 | #endif /* PM */ |
349 | }; | 324 | }; |
350 | 325 | ||
351 | static int __init pcied_init(void) | 326 | int __init pcie_hp_init(void) |
352 | { | 327 | { |
353 | int retval = 0; | 328 | int retval = 0; |
354 | 329 | ||
@@ -359,4 +334,3 @@ static int __init pcied_init(void) | |||
359 | 334 | ||
360 | return retval; | 335 | return retval; |
361 | } | 336 | } |
362 | device_initcall(pcied_init); | ||
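The rename from a static pcied_init() registered via device_initcall() to an exported pcie_hp_init() reflects the series' move to initialize port service drivers directly rather than relying on initcall ordering. A simplified sketch of the pattern (function bodies and the caller name are stand-ins, not the real registration code):

```c
/* Sketch: explicit init call instead of device_initcall(), so the
 * port driver controls when the hotplug service is registered. */
static int hp_registered;

int pcie_hp_init(void)              /* was: static pcied_init() */
{
	hp_registered = 1;          /* stands in for service registration */
	return 0;
}

int pcie_portdrv_setup(void)        /* hypothetical caller */
{
	int ret = pcie_hp_init();   /* direct call, no initcall race */
	return ret ? ret : (hp_registered ? 0 : -1);
}
```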
diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
index da7c72372ffc..3f3df4c29f6e 100644
--- a/drivers/pci/hotplug/pciehp_ctrl.c
+++ b/drivers/pci/hotplug/pciehp_ctrl.c
@@ -13,24 +13,24 @@ | |||
13 | * | 13 | * |
14 | */ | 14 | */ |
15 | 15 | ||
16 | #include <linux/module.h> | ||
17 | #include <linux/kernel.h> | 16 | #include <linux/kernel.h> |
18 | #include <linux/types.h> | 17 | #include <linux/types.h> |
19 | #include <linux/slab.h> | ||
20 | #include <linux/pm_runtime.h> | 18 | #include <linux/pm_runtime.h> |
21 | #include <linux/pci.h> | 19 | #include <linux/pci.h> |
22 | #include "../pci.h" | ||
23 | #include "pciehp.h" | 20 | #include "pciehp.h" |
24 | 21 | ||
25 | /* The following routines constitute the bulk of the | 22 | /* The following routines constitute the bulk of the |
26 | hotplug controller logic | 23 | hotplug controller logic |
27 | */ | 24 | */ |
28 | 25 | ||
29 | static void set_slot_off(struct controller *ctrl, struct slot *pslot) | 26 | #define SAFE_REMOVAL true |
27 | #define SURPRISE_REMOVAL false | ||
28 | |||
29 | static void set_slot_off(struct controller *ctrl) | ||
30 | { | 30 | { |
31 | /* turn off slot, turn on Amber LED, turn off Green LED if supported*/ | 31 | /* turn off slot, turn on Amber LED, turn off Green LED if supported*/ |
32 | if (POWER_CTRL(ctrl)) { | 32 | if (POWER_CTRL(ctrl)) { |
33 | pciehp_power_off_slot(pslot); | 33 | pciehp_power_off_slot(ctrl); |
34 | 34 | ||
35 | /* | 35 | /* |
36 | * After turning power off, we must wait for at least 1 second | 36 | * After turning power off, we must wait for at least 1 second |
@@ -40,31 +40,30 @@ static void set_slot_off(struct controller *ctrl, struct slot *pslot) | |||
40 | msleep(1000); | 40 | msleep(1000); |
41 | } | 41 | } |
42 | 42 | ||
43 | pciehp_green_led_off(pslot); | 43 | pciehp_green_led_off(ctrl); |
44 | pciehp_set_attention_status(pslot, 1); | 44 | pciehp_set_attention_status(ctrl, 1); |
45 | } | 45 | } |
46 | 46 | ||
47 | /** | 47 | /** |
48 | * board_added - Called after a board has been added to the system. | 48 | * board_added - Called after a board has been added to the system. |
49 | * @p_slot: &slot where board is added | 49 | * @ctrl: PCIe hotplug controller where board is added |
50 | * | 50 | * |
51 | * Turns power on for the board. | 51 | * Turns power on for the board. |
52 | * Configures board. | 52 | * Configures board. |
53 | */ | 53 | */ |
54 | static int board_added(struct slot *p_slot) | 54 | static int board_added(struct controller *ctrl) |
55 | { | 55 | { |
56 | int retval = 0; | 56 | int retval = 0; |
57 | struct controller *ctrl = p_slot->ctrl; | ||
58 | struct pci_bus *parent = ctrl->pcie->port->subordinate; | 57 | struct pci_bus *parent = ctrl->pcie->port->subordinate; |
59 | 58 | ||
60 | if (POWER_CTRL(ctrl)) { | 59 | if (POWER_CTRL(ctrl)) { |
61 | /* Power on slot */ | 60 | /* Power on slot */ |
62 | retval = pciehp_power_on_slot(p_slot); | 61 | retval = pciehp_power_on_slot(ctrl); |
63 | if (retval) | 62 | if (retval) |
64 | return retval; | 63 | return retval; |
65 | } | 64 | } |
66 | 65 | ||
67 | pciehp_green_led_blink(p_slot); | 66 | pciehp_green_led_blink(ctrl); |
68 | 67 | ||
69 | /* Check link training status */ | 68 | /* Check link training status */ |
70 | retval = pciehp_check_link_status(ctrl); | 69 | retval = pciehp_check_link_status(ctrl); |
@@ -74,13 +73,13 @@ static int board_added(struct slot *p_slot) | |||
74 | } | 73 | } |
75 | 74 | ||
76 | /* Check for a power fault */ | 75 | /* Check for a power fault */ |
77 | if (ctrl->power_fault_detected || pciehp_query_power_fault(p_slot)) { | 76 | if (ctrl->power_fault_detected || pciehp_query_power_fault(ctrl)) { |
78 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(p_slot)); | 77 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl)); |
79 | retval = -EIO; | 78 | retval = -EIO; |
80 | goto err_exit; | 79 | goto err_exit; |
81 | } | 80 | } |
82 | 81 | ||
83 | retval = pciehp_configure_device(p_slot); | 82 | retval = pciehp_configure_device(ctrl); |
84 | if (retval) { | 83 | if (retval) { |
85 | if (retval != -EEXIST) { | 84 | if (retval != -EEXIST) { |
86 | ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n", | 85 | ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n", |
@@ -89,27 +88,26 @@ static int board_added(struct slot *p_slot) | |||
89 | } | 88 | } |
90 | } | 89 | } |
91 | 90 | ||
92 | pciehp_green_led_on(p_slot); | 91 | pciehp_green_led_on(ctrl); |
93 | pciehp_set_attention_status(p_slot, 0); | 92 | pciehp_set_attention_status(ctrl, 0); |
94 | return 0; | 93 | return 0; |
95 | 94 | ||
96 | err_exit: | 95 | err_exit: |
97 | set_slot_off(ctrl, p_slot); | 96 | set_slot_off(ctrl); |
98 | return retval; | 97 | return retval; |
99 | } | 98 | } |
100 | 99 | ||
101 | /** | 100 | /** |
102 | * remove_board - Turns off slot and LEDs | 101 | * remove_board - Turns off slot and LEDs |
103 | * @p_slot: slot where board is being removed | 102 | * @ctrl: PCIe hotplug controller where board is being removed |
103 | * @safe_removal: whether the board is safely removed (versus surprise removed) | ||
104 | */ | 104 | */ |
105 | static void remove_board(struct slot *p_slot) | 105 | static void remove_board(struct controller *ctrl, bool safe_removal) |
106 | { | 106 | { |
107 | struct controller *ctrl = p_slot->ctrl; | 107 | pciehp_unconfigure_device(ctrl, safe_removal); |
108 | |||
109 | pciehp_unconfigure_device(p_slot); | ||
110 | 108 | ||
111 | if (POWER_CTRL(ctrl)) { | 109 | if (POWER_CTRL(ctrl)) { |
112 | pciehp_power_off_slot(p_slot); | 110 | pciehp_power_off_slot(ctrl); |
113 | 111 | ||
114 | /* | 112 | /* |
115 | * After turning power off, we must wait for at least 1 second | 113 | * After turning power off, we must wait for at least 1 second |
@@ -120,11 +118,11 @@ static void remove_board(struct slot *p_slot) | |||
120 | } | 118 | } |
121 | 119 | ||
122 | /* turn off Green LED */ | 120 | /* turn off Green LED */ |
123 | pciehp_green_led_off(p_slot); | 121 | pciehp_green_led_off(ctrl); |
124 | } | 122 | } |
125 | 123 | ||
126 | static int pciehp_enable_slot(struct slot *slot); | 124 | static int pciehp_enable_slot(struct controller *ctrl); |
127 | static int pciehp_disable_slot(struct slot *slot); | 125 | static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal); |
128 | 126 | ||
129 | void pciehp_request(struct controller *ctrl, int action) | 127 | void pciehp_request(struct controller *ctrl, int action) |
130 | { | 128 | { |
@@ -135,11 +133,11 @@ void pciehp_request(struct controller *ctrl, int action) | |||
135 | 133 | ||
136 | void pciehp_queue_pushbutton_work(struct work_struct *work) | 134 | void pciehp_queue_pushbutton_work(struct work_struct *work) |
137 | { | 135 | { |
138 | struct slot *p_slot = container_of(work, struct slot, work.work); | 136 | struct controller *ctrl = container_of(work, struct controller, |
139 | struct controller *ctrl = p_slot->ctrl; | 137 | button_work.work); |
140 | 138 | ||
141 | mutex_lock(&p_slot->lock); | 139 | mutex_lock(&ctrl->state_lock); |
142 | switch (p_slot->state) { | 140 | switch (ctrl->state) { |
143 | case BLINKINGOFF_STATE: | 141 | case BLINKINGOFF_STATE: |
144 | pciehp_request(ctrl, DISABLE_SLOT); | 142 | pciehp_request(ctrl, DISABLE_SLOT); |
145 | break; | 143 | break; |
@@ -149,30 +147,28 @@ void pciehp_queue_pushbutton_work(struct work_struct *work) | |||
149 | default: | 147 | default: |
150 | break; | 148 | break; |
151 | } | 149 | } |
152 | mutex_unlock(&p_slot->lock); | 150 | mutex_unlock(&ctrl->state_lock); |
153 | } | 151 | } |
154 | 152 | ||
155 | void pciehp_handle_button_press(struct slot *p_slot) | 153 | void pciehp_handle_button_press(struct controller *ctrl) |
156 | { | 154 | { |
157 | struct controller *ctrl = p_slot->ctrl; | 155 | mutex_lock(&ctrl->state_lock); |
158 | 156 | switch (ctrl->state) { | |
159 | mutex_lock(&p_slot->lock); | ||
160 | switch (p_slot->state) { | ||
161 | case OFF_STATE: | 157 | case OFF_STATE: |
162 | case ON_STATE: | 158 | case ON_STATE: |
163 | if (p_slot->state == ON_STATE) { | 159 | if (ctrl->state == ON_STATE) { |
164 | p_slot->state = BLINKINGOFF_STATE; | 160 | ctrl->state = BLINKINGOFF_STATE; |
165 | ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n", | 161 | ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n", |
166 | slot_name(p_slot)); | 162 | slot_name(ctrl)); |
167 | } else { | 163 | } else { |
168 | p_slot->state = BLINKINGON_STATE; | 164 | ctrl->state = BLINKINGON_STATE; |
169 | ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n", | 165 | ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n", |
170 | slot_name(p_slot)); | 166 | slot_name(ctrl)); |
171 | } | 167 | } |
172 | /* blink green LED and turn off amber */ | 168 | /* blink green LED and turn off amber */ |
173 | pciehp_green_led_blink(p_slot); | 169 | pciehp_green_led_blink(ctrl); |
174 | pciehp_set_attention_status(p_slot, 0); | 170 | pciehp_set_attention_status(ctrl, 0); |
175 | schedule_delayed_work(&p_slot->work, 5 * HZ); | 171 | schedule_delayed_work(&ctrl->button_work, 5 * HZ); |
176 | break; | 172 | break; |
177 | case BLINKINGOFF_STATE: | 173 | case BLINKINGOFF_STATE: |
178 | case BLINKINGON_STATE: | 174 | case BLINKINGON_STATE: |
@@ -181,197 +177,184 @@ void pciehp_handle_button_press(struct slot *p_slot) | |||
181 | * press the attention again before the 5 sec. limit | 177 | * press the attention again before the 5 sec. limit |
182 | * expires to cancel hot-add or hot-remove | 178 | * expires to cancel hot-add or hot-remove |
183 | */ | 179 | */ |
184 | ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(p_slot)); | 180 | ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(ctrl)); |
185 | cancel_delayed_work(&p_slot->work); | 181 | cancel_delayed_work(&ctrl->button_work); |
186 | if (p_slot->state == BLINKINGOFF_STATE) { | 182 | if (ctrl->state == BLINKINGOFF_STATE) { |
187 | p_slot->state = ON_STATE; | 183 | ctrl->state = ON_STATE; |
188 | pciehp_green_led_on(p_slot); | 184 | pciehp_green_led_on(ctrl); |
189 | } else { | 185 | } else { |
190 | p_slot->state = OFF_STATE; | 186 | ctrl->state = OFF_STATE; |
191 | pciehp_green_led_off(p_slot); | 187 | pciehp_green_led_off(ctrl); |
192 | } | 188 | } |
193 | pciehp_set_attention_status(p_slot, 0); | 189 | pciehp_set_attention_status(ctrl, 0); |
194 | ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n", | 190 | ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n", |
195 | slot_name(p_slot)); | 191 | slot_name(ctrl)); |
196 | break; | 192 | break; |
197 | default: | 193 | default: |
198 | ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n", | 194 | ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n", |
199 | slot_name(p_slot), p_slot->state); | 195 | slot_name(ctrl), ctrl->state); |
200 | break; | 196 | break; |
201 | } | 197 | } |
202 | mutex_unlock(&p_slot->lock); | 198 | mutex_unlock(&ctrl->state_lock); |
203 | } | 199 | } |
204 | 200 | ||
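The button-press handler above is a two-phase state machine: a press arms a blinking five-second cancel window toward the opposite power state, and a second press inside the window reverts. A standalone reduction of just the transitions (the helper is illustrative; the real handler also drives LEDs, logging, and delayed work):

```c
/* Sketch of the attention-button state transitions. */
enum state { OFF_STATE, BLINKINGON_STATE, ON_STATE, BLINKINGOFF_STATE };

static enum state button_press(enum state s)
{
	switch (s) {
	case ON_STATE:          return BLINKINGOFF_STATE; /* arm power-off */
	case OFF_STATE:         return BLINKINGON_STATE;  /* arm power-on  */
	case BLINKINGOFF_STATE: return ON_STATE;          /* cancel        */
	case BLINKINGON_STATE:  return OFF_STATE;         /* cancel        */
	}
	return s;
}
```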
205 | void pciehp_handle_disable_request(struct slot *slot) | 201 | void pciehp_handle_disable_request(struct controller *ctrl) |
206 | { | 202 | { |
207 | struct controller *ctrl = slot->ctrl; | 203 | mutex_lock(&ctrl->state_lock); |
208 | 204 | switch (ctrl->state) { | |
209 | mutex_lock(&slot->lock); | ||
210 | switch (slot->state) { | ||
211 | case BLINKINGON_STATE: | 205 | case BLINKINGON_STATE: |
212 | case BLINKINGOFF_STATE: | 206 | case BLINKINGOFF_STATE: |
213 | cancel_delayed_work(&slot->work); | 207 | cancel_delayed_work(&ctrl->button_work); |
214 | break; | 208 | break; |
215 | } | 209 | } |
216 | slot->state = POWEROFF_STATE; | 210 | ctrl->state = POWEROFF_STATE; |
217 | mutex_unlock(&slot->lock); | 211 | mutex_unlock(&ctrl->state_lock); |
218 | 212 | ||
219 | ctrl->request_result = pciehp_disable_slot(slot); | 213 | ctrl->request_result = pciehp_disable_slot(ctrl, SAFE_REMOVAL); |
220 | } | 214 | } |
221 | 215 | ||
222 | void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events) | 216 | void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events) |
223 | { | 217 | { |
224 | struct controller *ctrl = slot->ctrl; | 218 | bool present, link_active; |
225 | bool link_active; | ||
226 | u8 present; | ||
227 | 219 | ||
228 | /* | 220 | /* |
229 | * If the slot is on and presence or link has changed, turn it off. | 221 | * If the slot is on and presence or link has changed, turn it off. |
230 | * Even if it's occupied again, we cannot assume the card is the same. | 222 | * Even if it's occupied again, we cannot assume the card is the same. |
231 | */ | 223 | */ |
232 | mutex_lock(&slot->lock); | 224 | mutex_lock(&ctrl->state_lock); |
233 | switch (slot->state) { | 225 | switch (ctrl->state) { |
234 | case BLINKINGOFF_STATE: | 226 | case BLINKINGOFF_STATE: |
235 | cancel_delayed_work(&slot->work); | 227 | cancel_delayed_work(&ctrl->button_work); |
236 | /* fall through */ | 228 | /* fall through */ |
237 | case ON_STATE: | 229 | case ON_STATE: |
238 | slot->state = POWEROFF_STATE; | 230 | ctrl->state = POWEROFF_STATE; |
239 | mutex_unlock(&slot->lock); | 231 | mutex_unlock(&ctrl->state_lock); |
240 | if (events & PCI_EXP_SLTSTA_DLLSC) | 232 | if (events & PCI_EXP_SLTSTA_DLLSC) |
241 | ctrl_info(ctrl, "Slot(%s): Link Down\n", | 233 | ctrl_info(ctrl, "Slot(%s): Link Down\n", |
242 | slot_name(slot)); | 234 | slot_name(ctrl)); |
243 | if (events & PCI_EXP_SLTSTA_PDC) | 235 | if (events & PCI_EXP_SLTSTA_PDC) |
244 | ctrl_info(ctrl, "Slot(%s): Card not present\n", | 236 | ctrl_info(ctrl, "Slot(%s): Card not present\n", |
245 | slot_name(slot)); | 237 | slot_name(ctrl)); |
246 | pciehp_disable_slot(slot); | 238 | pciehp_disable_slot(ctrl, SURPRISE_REMOVAL); |
247 | break; | 239 | break; |
248 | default: | 240 | default: |
249 | mutex_unlock(&slot->lock); | 241 | mutex_unlock(&ctrl->state_lock); |
250 | break; | 242 | break; |
251 | } | 243 | } |
252 | 244 | ||
253 | /* Turn the slot on if it's occupied or link is up */ | 245 | /* Turn the slot on if it's occupied or link is up */ |
254 | mutex_lock(&slot->lock); | 246 | mutex_lock(&ctrl->state_lock); |
255 | pciehp_get_adapter_status(slot, &present); | 247 | present = pciehp_card_present(ctrl); |
256 | link_active = pciehp_check_link_active(ctrl); | 248 | link_active = pciehp_check_link_active(ctrl); |
257 | if (!present && !link_active) { | 249 | if (!present && !link_active) { |
258 | mutex_unlock(&slot->lock); | 250 | mutex_unlock(&ctrl->state_lock); |
259 | return; | 251 | return; |
260 | } | 252 | } |
261 | 253 | ||
262 | switch (slot->state) { | 254 | switch (ctrl->state) { |
263 | case BLINKINGON_STATE: | 255 | case BLINKINGON_STATE: |
264 | cancel_delayed_work(&slot->work); | 256 | cancel_delayed_work(&ctrl->button_work); |
265 | /* fall through */ | 257 | /* fall through */ |
266 | case OFF_STATE: | 258 | case OFF_STATE: |
267 | slot->state = POWERON_STATE; | 259 | ctrl->state = POWERON_STATE; |
268 | mutex_unlock(&slot->lock); | 260 | mutex_unlock(&ctrl->state_lock); |
269 | if (present) | 261 | if (present) |
270 | ctrl_info(ctrl, "Slot(%s): Card present\n", | 262 | ctrl_info(ctrl, "Slot(%s): Card present\n", |
271 | slot_name(slot)); | 263 | slot_name(ctrl)); |
272 | if (link_active) | 264 | if (link_active) |
273 | ctrl_info(ctrl, "Slot(%s): Link Up\n", | 265 | ctrl_info(ctrl, "Slot(%s): Link Up\n", |
274 | slot_name(slot)); | 266 | slot_name(ctrl)); |
275 | ctrl->request_result = pciehp_enable_slot(slot); | 267 | ctrl->request_result = pciehp_enable_slot(ctrl); |
276 | break; | 268 | break; |
277 | default: | 269 | default: |
278 | mutex_unlock(&slot->lock); | 270 | mutex_unlock(&ctrl->state_lock); |
279 | break; | 271 | break; |
280 | } | 272 | } |
281 | } | 273 | } |
282 | 274 | ||
283 | static int __pciehp_enable_slot(struct slot *p_slot) | 275 | static int __pciehp_enable_slot(struct controller *ctrl) |
284 | { | 276 | { |
285 | u8 getstatus = 0; | 277 | u8 getstatus = 0; |
286 | struct controller *ctrl = p_slot->ctrl; | ||
287 | 278 | ||
288 | pciehp_get_adapter_status(p_slot, &getstatus); | 279 | if (MRL_SENS(ctrl)) { |
289 | if (!getstatus) { | 280 | pciehp_get_latch_status(ctrl, &getstatus); |
290 | ctrl_info(ctrl, "Slot(%s): No adapter\n", slot_name(p_slot)); | ||
291 | return -ENODEV; | ||
292 | } | ||
293 | if (MRL_SENS(p_slot->ctrl)) { | ||
294 | pciehp_get_latch_status(p_slot, &getstatus); | ||
295 | if (getstatus) { | 281 | if (getstatus) { |
296 | ctrl_info(ctrl, "Slot(%s): Latch open\n", | 282 | ctrl_info(ctrl, "Slot(%s): Latch open\n", |
297 | slot_name(p_slot)); | 283 | slot_name(ctrl)); |
298 | return -ENODEV; | 284 | return -ENODEV; |
299 | } | 285 | } |
300 | } | 286 | } |
301 | 287 | ||
302 | if (POWER_CTRL(p_slot->ctrl)) { | 288 | if (POWER_CTRL(ctrl)) { |
303 | pciehp_get_power_status(p_slot, &getstatus); | 289 | pciehp_get_power_status(ctrl, &getstatus); |
304 | if (getstatus) { | 290 | if (getstatus) { |
305 | ctrl_info(ctrl, "Slot(%s): Already enabled\n", | 291 | ctrl_info(ctrl, "Slot(%s): Already enabled\n", |
306 | slot_name(p_slot)); | 292 | slot_name(ctrl)); |
307 | return 0; | 293 | return 0; |
308 | } | 294 | } |
309 | } | 295 | } |
310 | 296 | ||
311 | return board_added(p_slot); | 297 | return board_added(ctrl); |
312 | } | 298 | } |
313 | 299 | ||
314 | static int pciehp_enable_slot(struct slot *slot) | 300 | static int pciehp_enable_slot(struct controller *ctrl) |
315 | { | 301 | { |
316 | struct controller *ctrl = slot->ctrl; | ||
317 | int ret; | 302 | int ret; |
318 | 303 | ||
319 | pm_runtime_get_sync(&ctrl->pcie->port->dev); | 304 | pm_runtime_get_sync(&ctrl->pcie->port->dev); |
320 | ret = __pciehp_enable_slot(slot); | 305 | ret = __pciehp_enable_slot(ctrl); |
321 | if (ret && ATTN_BUTTN(ctrl)) | 306 | if (ret && ATTN_BUTTN(ctrl)) |
322 | pciehp_green_led_off(slot); /* may be blinking */ | 307 | pciehp_green_led_off(ctrl); /* may be blinking */ |
323 | pm_runtime_put(&ctrl->pcie->port->dev); | 308 | pm_runtime_put(&ctrl->pcie->port->dev); |
324 | 309 | ||
325 | mutex_lock(&slot->lock); | 310 | mutex_lock(&ctrl->state_lock); |
326 | slot->state = ret ? OFF_STATE : ON_STATE; | 311 | ctrl->state = ret ? OFF_STATE : ON_STATE; |
327 | mutex_unlock(&slot->lock); | 312 | mutex_unlock(&ctrl->state_lock); |
328 | 313 | ||
329 | return ret; | 314 | return ret; |
330 | } | 315 | } |
331 | 316 | ||
332 | static int __pciehp_disable_slot(struct slot *p_slot) | 317 | static int __pciehp_disable_slot(struct controller *ctrl, bool safe_removal) |
333 | { | 318 | { |
334 | u8 getstatus = 0; | 319 | u8 getstatus = 0; |
335 | struct controller *ctrl = p_slot->ctrl; | ||
336 | 320 | ||
337 | if (POWER_CTRL(p_slot->ctrl)) { | 321 | if (POWER_CTRL(ctrl)) { |
338 | pciehp_get_power_status(p_slot, &getstatus); | 322 | pciehp_get_power_status(ctrl, &getstatus); |
339 | if (!getstatus) { | 323 | if (!getstatus) { |
340 | ctrl_info(ctrl, "Slot(%s): Already disabled\n", | 324 | ctrl_info(ctrl, "Slot(%s): Already disabled\n", |
341 | slot_name(p_slot)); | 325 | slot_name(ctrl)); |
342 | return -EINVAL; | 326 | return -EINVAL; |
343 | } | 327 | } |
344 | } | 328 | } |
345 | 329 | ||
346 | remove_board(p_slot); | 330 | remove_board(ctrl, safe_removal); |
347 | return 0; | 331 | return 0; |
348 | } | 332 | } |
349 | 333 | ||
350 | static int pciehp_disable_slot(struct slot *slot) | 334 | static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal) |
351 | { | 335 | { |
352 | struct controller *ctrl = slot->ctrl; | ||
353 | int ret; | 336 | int ret; |
354 | 337 | ||
355 | pm_runtime_get_sync(&ctrl->pcie->port->dev); | 338 | pm_runtime_get_sync(&ctrl->pcie->port->dev); |
356 | ret = __pciehp_disable_slot(slot); | 339 | ret = __pciehp_disable_slot(ctrl, safe_removal); |
357 | pm_runtime_put(&ctrl->pcie->port->dev); | 340 | pm_runtime_put(&ctrl->pcie->port->dev); |
358 | 341 | ||
359 | mutex_lock(&slot->lock); | 342 | mutex_lock(&ctrl->state_lock); |
360 | slot->state = OFF_STATE; | 343 | ctrl->state = OFF_STATE; |
361 | mutex_unlock(&slot->lock); | 344 | mutex_unlock(&ctrl->state_lock); |
362 | 345 | ||
363 | return ret; | 346 | return ret; |
364 | } | 347 | } |
365 | 348 | ||
366 | int pciehp_sysfs_enable_slot(struct slot *p_slot) | 349 | int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot) |
367 | { | 350 | { |
368 | struct controller *ctrl = p_slot->ctrl; | 351 | struct controller *ctrl = to_ctrl(hotplug_slot); |
369 | 352 | ||
370 | mutex_lock(&p_slot->lock); | 353 | mutex_lock(&ctrl->state_lock); |
371 | switch (p_slot->state) { | 354 | switch (ctrl->state) { |
372 | case BLINKINGON_STATE: | 355 | case BLINKINGON_STATE: |
373 | case OFF_STATE: | 356 | case OFF_STATE: |
374 | mutex_unlock(&p_slot->lock); | 357 | mutex_unlock(&ctrl->state_lock); |
375 | /* | 358 | /* |
376 | * The IRQ thread becomes a no-op if the user pulls out the | 359 | * The IRQ thread becomes a no-op if the user pulls out the |
377 | * card before the thread wakes up, so initialize to -ENODEV. | 360 | * card before the thread wakes up, so initialize to -ENODEV. |
@@ -383,53 +366,53 @@ int pciehp_sysfs_enable_slot(struct slot *p_slot) | |||
383 | return ctrl->request_result; | 366 | return ctrl->request_result; |
384 | case POWERON_STATE: | 367 | case POWERON_STATE: |
385 | ctrl_info(ctrl, "Slot(%s): Already in powering on state\n", | 368 | ctrl_info(ctrl, "Slot(%s): Already in powering on state\n", |
386 | slot_name(p_slot)); | 369 | slot_name(ctrl)); |
387 | break; | 370 | break; |
388 | case BLINKINGOFF_STATE: | 371 | case BLINKINGOFF_STATE: |
389 | case ON_STATE: | 372 | case ON_STATE: |
390 | case POWEROFF_STATE: | 373 | case POWEROFF_STATE: |
391 | ctrl_info(ctrl, "Slot(%s): Already enabled\n", | 374 | ctrl_info(ctrl, "Slot(%s): Already enabled\n", |
392 | slot_name(p_slot)); | 375 | slot_name(ctrl)); |
393 | break; | 376 | break; |
394 | default: | 377 | default: |
395 | ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n", | 378 | ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n", |
396 | slot_name(p_slot), p_slot->state); | 379 | slot_name(ctrl), ctrl->state); |
397 | break; | 380 | break; |
398 | } | 381 | } |
399 | mutex_unlock(&p_slot->lock); | 382 | mutex_unlock(&ctrl->state_lock); |
400 | 383 | ||
401 | return -ENODEV; | 384 | return -ENODEV; |
402 | } | 385 | } |
403 | 386 | ||
404 | int pciehp_sysfs_disable_slot(struct slot *p_slot) | 387 | int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot) |
405 | { | 388 | { |
406 | struct controller *ctrl = p_slot->ctrl; | 389 | struct controller *ctrl = to_ctrl(hotplug_slot); |
407 | 390 | ||
408 | mutex_lock(&p_slot->lock); | 391 | mutex_lock(&ctrl->state_lock); |
409 | switch (p_slot->state) { | 392 | switch (ctrl->state) { |
410 | case BLINKINGOFF_STATE: | 393 | case BLINKINGOFF_STATE: |
411 | case ON_STATE: | 394 | case ON_STATE: |
412 | mutex_unlock(&p_slot->lock); | 395 | mutex_unlock(&ctrl->state_lock); |
413 | pciehp_request(ctrl, DISABLE_SLOT); | 396 | pciehp_request(ctrl, DISABLE_SLOT); |
414 | wait_event(ctrl->requester, | 397 | wait_event(ctrl->requester, |
415 | !atomic_read(&ctrl->pending_events)); | 398 | !atomic_read(&ctrl->pending_events)); |
416 | return ctrl->request_result; | 399 | return ctrl->request_result; |
417 | case POWEROFF_STATE: | 400 | case POWEROFF_STATE: |
418 | ctrl_info(ctrl, "Slot(%s): Already in powering off state\n", | 401 | ctrl_info(ctrl, "Slot(%s): Already in powering off state\n", |
419 | slot_name(p_slot)); | 402 | slot_name(ctrl)); |
420 | break; | 403 | break; |
421 | case BLINKINGON_STATE: | 404 | case BLINKINGON_STATE: |
422 | case OFF_STATE: | 405 | case OFF_STATE: |
423 | case POWERON_STATE: | 406 | case POWERON_STATE: |
424 | ctrl_info(ctrl, "Slot(%s): Already disabled\n", | 407 | ctrl_info(ctrl, "Slot(%s): Already disabled\n", |
425 | slot_name(p_slot)); | 408 | slot_name(ctrl)); |
426 | break; | 409 | break; |
427 | default: | 410 | default: |
428 | ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n", | 411 | ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n", |
429 | slot_name(p_slot), p_slot->state); | 412 | slot_name(ctrl), ctrl->state); |
430 | break; | 413 | break; |
431 | } | 414 | } |
432 | mutex_unlock(&p_slot->lock); | 415 | mutex_unlock(&ctrl->state_lock); |
433 | 416 | ||
434 | return -ENODEV; | 417 | return -ENODEV; |
435 | } | 418 | } |
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index a938abdb41ce..7dd443aea5a5 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c

@@ -13,15 +13,12 @@ | |||
13 | */ | 13 | */ |
14 | 14 | ||
15 | #include <linux/kernel.h> | 15 | #include <linux/kernel.h> |
16 | #include <linux/module.h> | ||
17 | #include <linux/types.h> | 16 | #include <linux/types.h> |
18 | #include <linux/signal.h> | ||
19 | #include <linux/jiffies.h> | 17 | #include <linux/jiffies.h> |
20 | #include <linux/kthread.h> | 18 | #include <linux/kthread.h> |
21 | #include <linux/pci.h> | 19 | #include <linux/pci.h> |
22 | #include <linux/pm_runtime.h> | 20 | #include <linux/pm_runtime.h> |
23 | #include <linux/interrupt.h> | 21 | #include <linux/interrupt.h> |
24 | #include <linux/time.h> | ||
25 | #include <linux/slab.h> | 22 | #include <linux/slab.h> |
26 | 23 | ||
27 | #include "../pci.h" | 24 | #include "../pci.h" |
@@ -43,7 +40,7 @@ static inline int pciehp_request_irq(struct controller *ctrl) | |||
43 | if (pciehp_poll_mode) { | 40 | if (pciehp_poll_mode) { |
44 | ctrl->poll_thread = kthread_run(&pciehp_poll, ctrl, | 41 | ctrl->poll_thread = kthread_run(&pciehp_poll, ctrl, |
45 | "pciehp_poll-%s", | 42 | "pciehp_poll-%s", |
46 | slot_name(ctrl->slot)); | 43 | slot_name(ctrl)); |
47 | return PTR_ERR_OR_ZERO(ctrl->poll_thread); | 44 | return PTR_ERR_OR_ZERO(ctrl->poll_thread); |
48 | } | 45 | } |
49 | 46 | ||
@@ -217,13 +214,6 @@ bool pciehp_check_link_active(struct controller *ctrl) | |||
217 | return ret; | 214 | return ret; |
218 | } | 215 | } |
219 | 216 | ||
220 | static void pcie_wait_link_active(struct controller *ctrl) | ||
221 | { | ||
222 | struct pci_dev *pdev = ctrl_dev(ctrl); | ||
223 | |||
224 | pcie_wait_for_link(pdev, true); | ||
225 | } | ||
226 | |||
227 | static bool pci_bus_check_dev(struct pci_bus *bus, int devfn) | 217 | static bool pci_bus_check_dev(struct pci_bus *bus, int devfn) |
228 | { | 218 | { |
229 | u32 l; | 219 | u32 l; |
@@ -256,18 +246,9 @@ int pciehp_check_link_status(struct controller *ctrl) | |||
256 | bool found; | 246 | bool found; |
257 | u16 lnk_status; | 247 | u16 lnk_status; |
258 | 248 | ||
259 | /* | 249 | if (!pcie_wait_for_link(pdev, true)) |
260 | * Data Link Layer Link Active Reporting must be capable for | 250 | return -1; |
261 | * hot-plug capable downstream port. But old controller might | ||
262 | * not implement it. In this case, we wait for 1000 ms. | ||
263 | */ | ||
264 | if (ctrl->link_active_reporting) | ||
265 | pcie_wait_link_active(ctrl); | ||
266 | else | ||
267 | msleep(1000); | ||
268 | 251 | ||
269 | /* wait 100ms before read pci conf, and try in 1s */ | ||
270 | msleep(100); | ||
271 | found = pci_bus_check_dev(ctrl->pcie->port->subordinate, | 252 | found = pci_bus_check_dev(ctrl->pcie->port->subordinate, |
272 | PCI_DEVFN(0, 0)); | 253 | PCI_DEVFN(0, 0)); |
273 | 254 | ||
@@ -318,8 +299,8 @@ static int pciehp_link_enable(struct controller *ctrl) | |||
318 | int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot, | 299 | int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot, |
319 | u8 *status) | 300 | u8 *status) |
320 | { | 301 | { |
321 | struct slot *slot = hotplug_slot->private; | 302 | struct controller *ctrl = to_ctrl(hotplug_slot); |
322 | struct pci_dev *pdev = ctrl_dev(slot->ctrl); | 303 | struct pci_dev *pdev = ctrl_dev(ctrl); |
323 | u16 slot_ctrl; | 304 | u16 slot_ctrl; |
324 | 305 | ||
325 | pci_config_pm_runtime_get(pdev); | 306 | pci_config_pm_runtime_get(pdev); |
@@ -329,9 +310,9 @@ int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot, | |||
329 | return 0; | 310 | return 0; |
330 | } | 311 | } |
331 | 312 | ||
332 | void pciehp_get_attention_status(struct slot *slot, u8 *status) | 313 | int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status) |
333 | { | 314 | { |
334 | struct controller *ctrl = slot->ctrl; | 315 | struct controller *ctrl = to_ctrl(hotplug_slot); |
335 | struct pci_dev *pdev = ctrl_dev(ctrl); | 316 | struct pci_dev *pdev = ctrl_dev(ctrl); |
336 | u16 slot_ctrl; | 317 | u16 slot_ctrl; |
337 | 318 | ||
@@ -355,11 +336,12 @@ void pciehp_get_attention_status(struct slot *slot, u8 *status) | |||
355 | *status = 0xFF; | 336 | *status = 0xFF; |
356 | break; | 337 | break; |
357 | } | 338 | } |
339 | |||
340 | return 0; | ||
358 | } | 341 | } |
359 | 342 | ||
360 | void pciehp_get_power_status(struct slot *slot, u8 *status) | 343 | void pciehp_get_power_status(struct controller *ctrl, u8 *status) |
361 | { | 344 | { |
362 | struct controller *ctrl = slot->ctrl; | ||
363 | struct pci_dev *pdev = ctrl_dev(ctrl); | 345 | struct pci_dev *pdev = ctrl_dev(ctrl); |
364 | u16 slot_ctrl; | 346 | u16 slot_ctrl; |
365 | 347 | ||
@@ -380,27 +362,41 @@ void pciehp_get_power_status(struct slot *slot, u8 *status) | |||
380 | } | 362 | } |
381 | } | 363 | } |
382 | 364 | ||
383 | void pciehp_get_latch_status(struct slot *slot, u8 *status) | 365 | void pciehp_get_latch_status(struct controller *ctrl, u8 *status) |
384 | { | 366 | { |
385 | struct pci_dev *pdev = ctrl_dev(slot->ctrl); | 367 | struct pci_dev *pdev = ctrl_dev(ctrl); |
386 | u16 slot_status; | 368 | u16 slot_status; |
387 | 369 | ||
388 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); | 370 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); |
389 | *status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS); | 371 | *status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS); |
390 | } | 372 | } |
391 | 373 | ||
392 | void pciehp_get_adapter_status(struct slot *slot, u8 *status) | 374 | bool pciehp_card_present(struct controller *ctrl) |
393 | { | 375 | { |
394 | struct pci_dev *pdev = ctrl_dev(slot->ctrl); | 376 | struct pci_dev *pdev = ctrl_dev(ctrl); |
395 | u16 slot_status; | 377 | u16 slot_status; |
396 | 378 | ||
397 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); | 379 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); |
398 | *status = !!(slot_status & PCI_EXP_SLTSTA_PDS); | 380 | return slot_status & PCI_EXP_SLTSTA_PDS; |
399 | } | 381 | } |
400 | 382 | ||
401 | int pciehp_query_power_fault(struct slot *slot) | 383 | /** |
384 | * pciehp_card_present_or_link_active() - whether given slot is occupied | ||
385 | * @ctrl: PCIe hotplug controller | ||
386 | * | ||
387 | * Unlike pciehp_card_present(), which determines presence solely from the | ||
388 | * Presence Detect State bit, this helper also returns true if the Link Active | ||
389 | * bit is set. This is a concession to broken hotplug ports which hardwire | ||
390 | * Presence Detect State to zero, such as Wilocity's [1ae9:0200]. | ||
391 | */ | ||
392 | bool pciehp_card_present_or_link_active(struct controller *ctrl) | ||
393 | { | ||
394 | return pciehp_card_present(ctrl) || pciehp_check_link_active(ctrl); | ||
395 | } | ||
396 | |||
397 | int pciehp_query_power_fault(struct controller *ctrl) | ||
402 | { | 398 | { |
403 | struct pci_dev *pdev = ctrl_dev(slot->ctrl); | 399 | struct pci_dev *pdev = ctrl_dev(ctrl); |
404 | u16 slot_status; | 400 | u16 slot_status; |
405 | 401 | ||
406 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); | 402 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); |
@@ -410,8 +406,7 @@ int pciehp_query_power_fault(struct slot *slot) | |||
410 | int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot, | 406 | int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot, |
411 | u8 status) | 407 | u8 status) |
412 | { | 408 | { |
413 | struct slot *slot = hotplug_slot->private; | 409 | struct controller *ctrl = to_ctrl(hotplug_slot); |
414 | struct controller *ctrl = slot->ctrl; | ||
415 | struct pci_dev *pdev = ctrl_dev(ctrl); | 410 | struct pci_dev *pdev = ctrl_dev(ctrl); |
416 | 411 | ||
417 | pci_config_pm_runtime_get(pdev); | 412 | pci_config_pm_runtime_get(pdev); |
@@ -421,9 +416,8 @@ int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot, | |||
421 | return 0; | 416 | return 0; |
422 | } | 417 | } |
423 | 418 | ||
424 | void pciehp_set_attention_status(struct slot *slot, u8 value) | 419 | void pciehp_set_attention_status(struct controller *ctrl, u8 value) |
425 | { | 420 | { |
426 | struct controller *ctrl = slot->ctrl; | ||
427 | u16 slot_cmd; | 421 | u16 slot_cmd; |
428 | 422 | ||
429 | if (!ATTN_LED(ctrl)) | 423 | if (!ATTN_LED(ctrl)) |
@@ -447,10 +441,8 @@ void pciehp_set_attention_status(struct slot *slot, u8 value) | |||
447 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); | 441 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); |
448 | } | 442 | } |
449 | 443 | ||
450 | void pciehp_green_led_on(struct slot *slot) | 444 | void pciehp_green_led_on(struct controller *ctrl) |
451 | { | 445 | { |
452 | struct controller *ctrl = slot->ctrl; | ||
453 | |||
454 | if (!PWR_LED(ctrl)) | 446 | if (!PWR_LED(ctrl)) |
455 | return; | 447 | return; |
456 | 448 | ||
@@ -461,10 +453,8 @@ void pciehp_green_led_on(struct slot *slot) | |||
461 | PCI_EXP_SLTCTL_PWR_IND_ON); | 453 | PCI_EXP_SLTCTL_PWR_IND_ON); |
462 | } | 454 | } |
463 | 455 | ||
464 | void pciehp_green_led_off(struct slot *slot) | 456 | void pciehp_green_led_off(struct controller *ctrl) |
465 | { | 457 | { |
466 | struct controller *ctrl = slot->ctrl; | ||
467 | |||
468 | if (!PWR_LED(ctrl)) | 458 | if (!PWR_LED(ctrl)) |
469 | return; | 459 | return; |
470 | 460 | ||
@@ -475,10 +465,8 @@ void pciehp_green_led_off(struct slot *slot) | |||
475 | PCI_EXP_SLTCTL_PWR_IND_OFF); | 465 | PCI_EXP_SLTCTL_PWR_IND_OFF); |
476 | } | 466 | } |
477 | 467 | ||
478 | void pciehp_green_led_blink(struct slot *slot) | 468 | void pciehp_green_led_blink(struct controller *ctrl) |
479 | { | 469 | { |
480 | struct controller *ctrl = slot->ctrl; | ||
481 | |||
482 | if (!PWR_LED(ctrl)) | 470 | if (!PWR_LED(ctrl)) |
483 | return; | 471 | return; |
484 | 472 | ||
@@ -489,9 +477,8 @@ void pciehp_green_led_blink(struct slot *slot) | |||
489 | PCI_EXP_SLTCTL_PWR_IND_BLINK); | 477 | PCI_EXP_SLTCTL_PWR_IND_BLINK); |
490 | } | 478 | } |
491 | 479 | ||
492 | int pciehp_power_on_slot(struct slot *slot) | 480 | int pciehp_power_on_slot(struct controller *ctrl) |
493 | { | 481 | { |
494 | struct controller *ctrl = slot->ctrl; | ||
495 | struct pci_dev *pdev = ctrl_dev(ctrl); | 482 | struct pci_dev *pdev = ctrl_dev(ctrl); |
496 | u16 slot_status; | 483 | u16 slot_status; |
497 | int retval; | 484 | int retval; |
@@ -515,10 +502,8 @@ int pciehp_power_on_slot(struct slot *slot) | |||
515 | return retval; | 502 | return retval; |
516 | } | 503 | } |
517 | 504 | ||
518 | void pciehp_power_off_slot(struct slot *slot) | 505 | void pciehp_power_off_slot(struct controller *ctrl) |
519 | { | 506 | { |
520 | struct controller *ctrl = slot->ctrl; | ||
521 | |||
522 | pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC); | 507 | pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC); |
523 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, | 508 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, |
524 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, | 509 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, |
@@ -533,9 +518,11 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id) | |||
533 | u16 status, events; | 518 | u16 status, events; |
534 | 519 | ||
535 | /* | 520 | /* |
536 | * Interrupts only occur in D3hot or shallower (PCIe r4.0, sec 6.7.3.4). | 521 | * Interrupts only occur in D3hot or shallower and only if enabled |
522 | * in the Slot Control register (PCIe r4.0, sec 6.7.3.4). | ||
537 | */ | 523 | */ |
538 | if (pdev->current_state == PCI_D3cold) | 524 | if (pdev->current_state == PCI_D3cold || |
525 | (!(ctrl->slot_ctrl & PCI_EXP_SLTCTL_HPIE) && !pciehp_poll_mode)) | ||
539 | return IRQ_NONE; | 526 | return IRQ_NONE; |
540 | 527 | ||
541 | /* | 528 | /* |
@@ -616,7 +603,6 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) | |||
616 | { | 603 | { |
617 | struct controller *ctrl = (struct controller *)dev_id; | 604 | struct controller *ctrl = (struct controller *)dev_id; |
618 | struct pci_dev *pdev = ctrl_dev(ctrl); | 605 | struct pci_dev *pdev = ctrl_dev(ctrl); |
619 | struct slot *slot = ctrl->slot; | ||
620 | irqreturn_t ret; | 606 | irqreturn_t ret; |
621 | u32 events; | 607 | u32 events; |
622 | 608 | ||
@@ -642,16 +628,16 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) | |||
642 | /* Check Attention Button Pressed */ | 628 | /* Check Attention Button Pressed */ |
643 | if (events & PCI_EXP_SLTSTA_ABP) { | 629 | if (events & PCI_EXP_SLTSTA_ABP) { |
644 | ctrl_info(ctrl, "Slot(%s): Attention button pressed\n", | 630 | ctrl_info(ctrl, "Slot(%s): Attention button pressed\n", |
645 | slot_name(slot)); | 631 | slot_name(ctrl)); |
646 | pciehp_handle_button_press(slot); | 632 | pciehp_handle_button_press(ctrl); |
647 | } | 633 | } |
648 | 634 | ||
649 | /* Check Power Fault Detected */ | 635 | /* Check Power Fault Detected */ |
650 | if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) { | 636 | if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) { |
651 | ctrl->power_fault_detected = 1; | 637 | ctrl->power_fault_detected = 1; |
652 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(slot)); | 638 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl)); |
653 | pciehp_set_attention_status(slot, 1); | 639 | pciehp_set_attention_status(ctrl, 1); |
654 | pciehp_green_led_off(slot); | 640 | pciehp_green_led_off(ctrl); |
655 | } | 641 | } |
656 | 642 | ||
657 | /* | 643 | /* |
@@ -660,9 +646,9 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) | |||
660 | */ | 646 | */ |
661 | down_read(&ctrl->reset_lock); | 647 | down_read(&ctrl->reset_lock); |
662 | if (events & DISABLE_SLOT) | 648 | if (events & DISABLE_SLOT) |
663 | pciehp_handle_disable_request(slot); | 649 | pciehp_handle_disable_request(ctrl); |
664 | else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) | 650 | else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) |
665 | pciehp_handle_presence_or_link_change(slot, events); | 651 | pciehp_handle_presence_or_link_change(ctrl, events); |
666 | up_read(&ctrl->reset_lock); | 652 | up_read(&ctrl->reset_lock); |
667 | 653 | ||
668 | pci_config_pm_runtime_put(pdev); | 654 | pci_config_pm_runtime_put(pdev); |
@@ -748,6 +734,16 @@ void pcie_clear_hotplug_events(struct controller *ctrl) | |||
748 | PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC); | 734 | PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC); |
749 | } | 735 | } |
750 | 736 | ||
737 | void pcie_enable_interrupt(struct controller *ctrl) | ||
738 | { | ||
739 | pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE); | ||
740 | } | ||
741 | |||
742 | void pcie_disable_interrupt(struct controller *ctrl) | ||
743 | { | ||
744 | pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE); | ||
745 | } | ||
746 | |||
751 | /* | 747 | /* |
752 | * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary | 748 | * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary |
753 | * bus reset of the bridge, but at the same time we want to ensure that it is | 749 | * bus reset of the bridge, but at the same time we want to ensure that it is |
@@ -756,9 +752,9 @@ void pcie_clear_hotplug_events(struct controller *ctrl) | |||
756 | * momentarily, if we see that they could interfere. Also, clear any spurious | 752 | * momentarily, if we see that they could interfere. Also, clear any spurious |
757 | * events after. | 753 | * events after. |
758 | */ | 754 | */ |
759 | int pciehp_reset_slot(struct slot *slot, int probe) | 755 | int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe) |
760 | { | 756 | { |
761 | struct controller *ctrl = slot->ctrl; | 757 | struct controller *ctrl = to_ctrl(hotplug_slot); |
762 | struct pci_dev *pdev = ctrl_dev(ctrl); | 758 | struct pci_dev *pdev = ctrl_dev(ctrl); |
763 | u16 stat_mask = 0, ctrl_mask = 0; | 759 | u16 stat_mask = 0, ctrl_mask = 0; |
764 | int rc; | 760 | int rc; |
@@ -808,34 +804,6 @@ void pcie_shutdown_notification(struct controller *ctrl) | |||
808 | } | 804 | } |
809 | } | 805 | } |
810 | 806 | ||
811 | static int pcie_init_slot(struct controller *ctrl) | ||
812 | { | ||
813 | struct pci_bus *subordinate = ctrl_dev(ctrl)->subordinate; | ||
814 | struct slot *slot; | ||
815 | |||
816 | slot = kzalloc(sizeof(*slot), GFP_KERNEL); | ||
817 | if (!slot) | ||
818 | return -ENOMEM; | ||
819 | |||
820 | down_read(&pci_bus_sem); | ||
821 | slot->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE; | ||
822 | up_read(&pci_bus_sem); | ||
823 | |||
824 | slot->ctrl = ctrl; | ||
825 | mutex_init(&slot->lock); | ||
826 | INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work); | ||
827 | ctrl->slot = slot; | ||
828 | return 0; | ||
829 | } | ||
830 | |||
831 | static void pcie_cleanup_slot(struct controller *ctrl) | ||
832 | { | ||
833 | struct slot *slot = ctrl->slot; | ||
834 | |||
835 | cancel_delayed_work_sync(&slot->work); | ||
836 | kfree(slot); | ||
837 | } | ||
838 | |||
839 | static inline void dbg_ctrl(struct controller *ctrl) | 807 | static inline void dbg_ctrl(struct controller *ctrl) |
840 | { | 808 | { |
841 | struct pci_dev *pdev = ctrl->pcie->port; | 809 | struct pci_dev *pdev = ctrl->pcie->port; |
@@ -857,12 +825,13 @@ struct controller *pcie_init(struct pcie_device *dev) | |||
857 | { | 825 | { |
858 | struct controller *ctrl; | 826 | struct controller *ctrl; |
859 | u32 slot_cap, link_cap; | 827 | u32 slot_cap, link_cap; |
860 | u8 occupied, poweron; | 828 | u8 poweron; |
861 | struct pci_dev *pdev = dev->port; | 829 | struct pci_dev *pdev = dev->port; |
830 | struct pci_bus *subordinate = pdev->subordinate; | ||
862 | 831 | ||
863 | ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL); | 832 | ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL); |
864 | if (!ctrl) | 833 | if (!ctrl) |
865 | goto abort; | 834 | return NULL; |
866 | 835 | ||
867 | ctrl->pcie = dev; | 836 | ctrl->pcie = dev; |
868 | pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap); | 837 | pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap); |
@@ -879,15 +848,19 @@ struct controller *pcie_init(struct pcie_device *dev) | |||
879 | 848 | ||
880 | ctrl->slot_cap = slot_cap; | 849 | ctrl->slot_cap = slot_cap; |
881 | mutex_init(&ctrl->ctrl_lock); | 850 | mutex_init(&ctrl->ctrl_lock); |
851 | mutex_init(&ctrl->state_lock); | ||
882 | init_rwsem(&ctrl->reset_lock); | 852 | init_rwsem(&ctrl->reset_lock); |
883 | init_waitqueue_head(&ctrl->requester); | 853 | init_waitqueue_head(&ctrl->requester); |
884 | init_waitqueue_head(&ctrl->queue); | 854 | init_waitqueue_head(&ctrl->queue); |
855 | INIT_DELAYED_WORK(&ctrl->button_work, pciehp_queue_pushbutton_work); | ||
885 | dbg_ctrl(ctrl); | 856 | dbg_ctrl(ctrl); |
886 | 857 | ||
858 | down_read(&pci_bus_sem); | ||
859 | ctrl->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE; | ||
860 | up_read(&pci_bus_sem); | ||
861 | |||
887 | /* Check if Data Link Layer Link Active Reporting is implemented */ | 862 | /* Check if Data Link Layer Link Active Reporting is implemented */ |
888 | pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap); | 863 | pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap); |
889 | if (link_cap & PCI_EXP_LNKCAP_DLLLARC) | ||
890 | ctrl->link_active_reporting = 1; | ||
891 | 864 | ||
892 | /* Clear all remaining event bits in Slot Status register. */ | 865 | /* Clear all remaining event bits in Slot Status register. */ |
893 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, | 866 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, |
@@ -909,33 +882,24 @@ struct controller *pcie_init(struct pcie_device *dev) | |||
909 | FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC), | 882 | FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC), |
910 | pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : ""); | 883 | pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : ""); |
911 | 884 | ||
912 | if (pcie_init_slot(ctrl)) | ||
913 | goto abort_ctrl; | ||
914 | |||
915 | /* | 885 | /* |
916 | * If empty slot's power status is on, turn power off. The IRQ isn't | 886 | * If empty slot's power status is on, turn power off. The IRQ isn't |
917 | * requested yet, so avoid triggering a notification with this command. | 887 | * requested yet, so avoid triggering a notification with this command. |
918 | */ | 888 | */ |
919 | if (POWER_CTRL(ctrl)) { | 889 | if (POWER_CTRL(ctrl)) { |
920 | pciehp_get_adapter_status(ctrl->slot, &occupied); | 890 | pciehp_get_power_status(ctrl, &poweron); |
921 | pciehp_get_power_status(ctrl->slot, &poweron); | 891 | if (!pciehp_card_present_or_link_active(ctrl) && poweron) { |
922 | if (!occupied && poweron) { | ||
923 | pcie_disable_notification(ctrl); | 892 | pcie_disable_notification(ctrl); |
924 | pciehp_power_off_slot(ctrl->slot); | 893 | pciehp_power_off_slot(ctrl); |
925 | } | 894 | } |
926 | } | 895 | } |
927 | 896 | ||
928 | return ctrl; | 897 | return ctrl; |
929 | |||
930 | abort_ctrl: | ||
931 | kfree(ctrl); | ||
932 | abort: | ||
933 | return NULL; | ||
934 | } | 898 | } |
935 | 899 | ||
936 | void pciehp_release_ctrl(struct controller *ctrl) | 900 | void pciehp_release_ctrl(struct controller *ctrl) |
937 | { | 901 | { |
938 | pcie_cleanup_slot(ctrl); | 902 | cancel_delayed_work_sync(&ctrl->button_work); |
939 | kfree(ctrl); | 903 | kfree(ctrl); |
940 | } | 904 | } |
941 | 905 | ||
diff --git a/drivers/pci/hotplug/pciehp_pci.c b/drivers/pci/hotplug/pciehp_pci.c
index 5c58c22e0c08..b9c1396db6fe 100644
--- a/drivers/pci/hotplug/pciehp_pci.c
+++ b/drivers/pci/hotplug/pciehp_pci.c
@@ -13,20 +13,26 @@ | |||
13 | * | 13 | * |
14 | */ | 14 | */ |
15 | 15 | ||
16 | #include <linux/module.h> | ||
17 | #include <linux/kernel.h> | 16 | #include <linux/kernel.h> |
18 | #include <linux/types.h> | 17 | #include <linux/types.h> |
19 | #include <linux/pci.h> | 18 | #include <linux/pci.h> |
20 | #include "../pci.h" | 19 | #include "../pci.h" |
21 | #include "pciehp.h" | 20 | #include "pciehp.h" |
22 | 21 | ||
23 | int pciehp_configure_device(struct slot *p_slot) | 22 | /** |
23 | * pciehp_configure_device() - enumerate PCI devices below a hotplug bridge | ||
24 | * @ctrl: PCIe hotplug controller | ||
25 | * | ||
26 | * Enumerate PCI devices below a hotplug bridge and add them to the system. | ||
27 | * Return 0 on success, %-EEXIST if the devices are already enumerated or | ||
28 | * %-ENODEV if enumeration failed. | ||
29 | */ | ||
30 | int pciehp_configure_device(struct controller *ctrl) | ||
24 | { | 31 | { |
25 | struct pci_dev *dev; | 32 | struct pci_dev *dev; |
26 | struct pci_dev *bridge = p_slot->ctrl->pcie->port; | 33 | struct pci_dev *bridge = ctrl->pcie->port; |
27 | struct pci_bus *parent = bridge->subordinate; | 34 | struct pci_bus *parent = bridge->subordinate; |
28 | int num, ret = 0; | 35 | int num, ret = 0; |
29 | struct controller *ctrl = p_slot->ctrl; | ||
30 | 36 | ||
31 | pci_lock_rescan_remove(); | 37 | pci_lock_rescan_remove(); |
32 | 38 | ||
@@ -62,17 +68,28 @@ int pciehp_configure_device(struct slot *p_slot) | |||
62 | return ret; | 68 | return ret; |
63 | } | 69 | } |
64 | 70 | ||
65 | void pciehp_unconfigure_device(struct slot *p_slot) | 71 | /** |
72 | * pciehp_unconfigure_device() - remove PCI devices below a hotplug bridge | ||
73 | * @ctrl: PCIe hotplug controller | ||
74 | * @presence: whether the card is still present in the slot; | ||
75 | * true for safe removal via sysfs or an Attention Button press, | ||
76 | * false for surprise removal | ||
77 | * | ||
78 | * Unbind PCI devices below a hotplug bridge from their drivers and remove | ||
79 | * them from the system. Safely removed devices are quiesced. Surprise | ||
80 | * removed devices are marked as such to prevent further accesses. | ||
81 | */ | ||
82 | void pciehp_unconfigure_device(struct controller *ctrl, bool presence) | ||
66 | { | 83 | { |
67 | u8 presence = 0; | ||
68 | struct pci_dev *dev, *temp; | 84 | struct pci_dev *dev, *temp; |
69 | struct pci_bus *parent = p_slot->ctrl->pcie->port->subordinate; | 85 | struct pci_bus *parent = ctrl->pcie->port->subordinate; |
70 | u16 command; | 86 | u16 command; |
71 | struct controller *ctrl = p_slot->ctrl; | ||
72 | 87 | ||
73 | ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n", | 88 | ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n", |
74 | __func__, pci_domain_nr(parent), parent->number); | 89 | __func__, pci_domain_nr(parent), parent->number); |
75 | pciehp_get_adapter_status(p_slot, &presence); | 90 | |
91 | if (!presence) | ||
92 | pci_walk_bus(parent, pci_dev_set_disconnected, NULL); | ||
76 | 93 | ||
77 | pci_lock_rescan_remove(); | 94 | pci_lock_rescan_remove(); |
78 | 95 | ||
@@ -85,12 +102,6 @@ void pciehp_unconfigure_device(struct slot *p_slot) | |||
85 | list_for_each_entry_safe_reverse(dev, temp, &parent->devices, | 102 | list_for_each_entry_safe_reverse(dev, temp, &parent->devices, |
86 | bus_list) { | 103 | bus_list) { |
87 | pci_dev_get(dev); | 104 | pci_dev_get(dev); |
88 | if (!presence) { | ||
89 | pci_dev_set_disconnected(dev, NULL); | ||
90 | if (pci_has_subordinate(dev)) | ||
91 | pci_walk_bus(dev->subordinate, | ||
92 | pci_dev_set_disconnected, NULL); | ||
93 | } | ||
94 | pci_stop_and_remove_bus_device(dev); | 105 | pci_stop_and_remove_bus_device(dev); |
95 | /* | 106 | /* |
96 | * Ensure that no new Requests will be generated from | 107 | * Ensure that no new Requests will be generated from |
diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c index 3276a5e4c430..ee54f5bacad1 100644 --- a/drivers/pci/hotplug/pnv_php.c +++ b/drivers/pci/hotplug/pnv_php.c | |||
@@ -275,14 +275,13 @@ static int pnv_php_add_devtree(struct pnv_php_slot *php_slot) | |||
275 | goto free_fdt1; | 275 | goto free_fdt1; |
276 | } | 276 | } |
277 | 277 | ||
278 | fdt = kzalloc(fdt_totalsize(fdt1), GFP_KERNEL); | 278 | fdt = kmemdup(fdt1, fdt_totalsize(fdt1), GFP_KERNEL); |
279 | if (!fdt) { | 279 | if (!fdt) { |
280 | ret = -ENOMEM; | 280 | ret = -ENOMEM; |
281 | goto free_fdt1; | 281 | goto free_fdt1; |
282 | } | 282 | } |
283 | 283 | ||
284 | /* Unflatten device tree blob */ | 284 | /* Unflatten device tree blob */ |
285 | memcpy(fdt, fdt1, fdt_totalsize(fdt1)); | ||
286 | dt = of_fdt_unflatten_tree(fdt, php_slot->dn, NULL); | 285 | dt = of_fdt_unflatten_tree(fdt, php_slot->dn, NULL); |
287 | if (!dt) { | 286 | if (!dt) { |
288 | ret = -EINVAL; | 287 | ret = -EINVAL; |
@@ -328,10 +327,15 @@ out: | |||
328 | return ret; | 327 | return ret; |
329 | } | 328 | } |
330 | 329 | ||
330 | static inline struct pnv_php_slot *to_pnv_php_slot(struct hotplug_slot *slot) | ||
331 | { | ||
332 | return container_of(slot, struct pnv_php_slot, slot); | ||
333 | } | ||
334 | |||
331 | int pnv_php_set_slot_power_state(struct hotplug_slot *slot, | 335 | int pnv_php_set_slot_power_state(struct hotplug_slot *slot, |
332 | uint8_t state) | 336 | uint8_t state) |
333 | { | 337 | { |
334 | struct pnv_php_slot *php_slot = slot->private; | 338 | struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); |
335 | struct opal_msg msg; | 339 | struct opal_msg msg; |
336 | int ret; | 340 | int ret; |
337 | 341 | ||
@@ -363,7 +367,7 @@ EXPORT_SYMBOL_GPL(pnv_php_set_slot_power_state); | |||
363 | 367 | ||
364 | static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state) | 368 | static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state) |
365 | { | 369 | { |
366 | struct pnv_php_slot *php_slot = slot->private; | 370 | struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); |
367 | uint8_t power_state = OPAL_PCI_SLOT_POWER_ON; | 371 | uint8_t power_state = OPAL_PCI_SLOT_POWER_ON; |
368 | int ret; | 372 | int ret; |
369 | 373 | ||
@@ -378,7 +382,6 @@ static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state) | |||
378 | ret); | 382 | ret); |
379 | } else { | 383 | } else { |
380 | *state = power_state; | 384 | *state = power_state; |
381 | slot->info->power_status = power_state; | ||
382 | } | 385 | } |
383 | 386 | ||
384 | return 0; | 387 | return 0; |
@@ -386,7 +389,7 @@ static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state) | |||
386 | 389 | ||
387 | static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state) | 390 | static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state) |
388 | { | 391 | { |
389 | struct pnv_php_slot *php_slot = slot->private; | 392 | struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); |
390 | uint8_t presence = OPAL_PCI_SLOT_EMPTY; | 393 | uint8_t presence = OPAL_PCI_SLOT_EMPTY; |
391 | int ret; | 394 | int ret; |
392 | 395 | ||
@@ -397,7 +400,6 @@ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state) | |||
397 | ret = pnv_pci_get_presence_state(php_slot->id, &presence); | 400 | ret = pnv_pci_get_presence_state(php_slot->id, &presence); |
398 | if (ret >= 0) { | 401 | if (ret >= 0) { |
399 | *state = presence; | 402 | *state = presence; |
400 | slot->info->adapter_status = presence; | ||
401 | ret = 0; | 403 | ret = 0; |
402 | } else { | 404 | } else { |
403 | pci_warn(php_slot->pdev, "Error %d getting presence\n", ret); | 405 | pci_warn(php_slot->pdev, "Error %d getting presence\n", ret); |
@@ -406,10 +408,20 @@ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state) | |||
406 | return ret; | 408 | return ret; |
407 | } | 409 | } |
408 | 410 | ||
411 | static int pnv_php_get_attention_state(struct hotplug_slot *slot, u8 *state) | ||
412 | { | ||
413 | struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); | ||
414 | |||
415 | *state = php_slot->attention_state; | ||
416 | return 0; | ||
417 | } | ||
418 | |||
409 | static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state) | 419 | static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state) |
410 | { | 420 | { |
421 | struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); | ||
422 | |||
411 | /* FIXME: Make it real once firmware supports it */ | 423 | /* FIXME: Make it real once firmware supports it */ |
412 | slot->info->attention_status = state; | 424 | php_slot->attention_state = state; |
413 | 425 | ||
414 | return 0; | 426 | return 0; |
415 | } | 427 | } |
@@ -501,15 +513,14 @@ scan: | |||
501 | 513 | ||
502 | static int pnv_php_enable_slot(struct hotplug_slot *slot) | 514 | static int pnv_php_enable_slot(struct hotplug_slot *slot) |
503 | { | 515 | { |
504 | struct pnv_php_slot *php_slot = container_of(slot, | 516 | struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); |
505 | struct pnv_php_slot, slot); | ||
506 | 517 | ||
507 | return pnv_php_enable(php_slot, true); | 518 | return pnv_php_enable(php_slot, true); |
508 | } | 519 | } |
509 | 520 | ||
510 | static int pnv_php_disable_slot(struct hotplug_slot *slot) | 521 | static int pnv_php_disable_slot(struct hotplug_slot *slot) |
511 | { | 522 | { |
512 | struct pnv_php_slot *php_slot = slot->private; | 523 | struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); |
513 | int ret; | 524 | int ret; |
514 | 525 | ||
515 | if (php_slot->state != PNV_PHP_STATE_POPULATED) | 526 | if (php_slot->state != PNV_PHP_STATE_POPULATED) |
@@ -530,9 +541,10 @@ static int pnv_php_disable_slot(struct hotplug_slot *slot) | |||
530 | return ret; | 541 | return ret; |
531 | } | 542 | } |
532 | 543 | ||
533 | static struct hotplug_slot_ops php_slot_ops = { | 544 | static const struct hotplug_slot_ops php_slot_ops = { |
534 | .get_power_status = pnv_php_get_power_state, | 545 | .get_power_status = pnv_php_get_power_state, |
535 | .get_adapter_status = pnv_php_get_adapter_state, | 546 | .get_adapter_status = pnv_php_get_adapter_state, |
547 | .get_attention_status = pnv_php_get_attention_state, | ||
536 | .set_attention_status = pnv_php_set_attention_state, | 548 | .set_attention_status = pnv_php_set_attention_state, |
537 | .enable_slot = pnv_php_enable_slot, | 549 | .enable_slot = pnv_php_enable_slot, |
538 | .disable_slot = pnv_php_disable_slot, | 550 | .disable_slot = pnv_php_disable_slot, |
@@ -594,8 +606,6 @@ static struct pnv_php_slot *pnv_php_alloc_slot(struct device_node *dn) | |||
594 | php_slot->id = id; | 606 | php_slot->id = id; |
595 | php_slot->power_state_check = false; | 607 | php_slot->power_state_check = false; |
596 | php_slot->slot.ops = &php_slot_ops; | 608 | php_slot->slot.ops = &php_slot_ops; |
597 | php_slot->slot.info = &php_slot->slot_info; | ||
598 | php_slot->slot.private = php_slot; | ||
599 | 609 | ||
600 | INIT_LIST_HEAD(&php_slot->children); | 610 | INIT_LIST_HEAD(&php_slot->children); |
601 | INIT_LIST_HEAD(&php_slot->link); | 611 | INIT_LIST_HEAD(&php_slot->link); |
diff --git a/drivers/pci/hotplug/rpaphp.h b/drivers/pci/hotplug/rpaphp.h index c8311724bd76..bdc954d70869 100644 --- a/drivers/pci/hotplug/rpaphp.h +++ b/drivers/pci/hotplug/rpaphp.h | |||
@@ -63,16 +63,22 @@ struct slot { | |||
63 | u32 index; | 63 | u32 index; |
64 | u32 type; | 64 | u32 type; |
65 | u32 power_domain; | 65 | u32 power_domain; |
66 | u8 attention_status; | ||
66 | char *name; | 67 | char *name; |
67 | struct device_node *dn; | 68 | struct device_node *dn; |
68 | struct pci_bus *bus; | 69 | struct pci_bus *bus; |
69 | struct list_head *pci_devs; | 70 | struct list_head *pci_devs; |
70 | struct hotplug_slot *hotplug_slot; | 71 | struct hotplug_slot hotplug_slot; |
71 | }; | 72 | }; |
72 | 73 | ||
73 | extern struct hotplug_slot_ops rpaphp_hotplug_slot_ops; | 74 | extern const struct hotplug_slot_ops rpaphp_hotplug_slot_ops; |
74 | extern struct list_head rpaphp_slot_head; | 75 | extern struct list_head rpaphp_slot_head; |
75 | 76 | ||
77 | static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot) | ||
78 | { | ||
79 | return container_of(hotplug_slot, struct slot, hotplug_slot); | ||
80 | } | ||
81 | |||
76 | /* function prototypes */ | 82 | /* function prototypes */ |
77 | 83 | ||
78 | /* rpaphp_pci.c */ | 84 | /* rpaphp_pci.c */ |
diff --git a/drivers/pci/hotplug/rpaphp_core.c b/drivers/pci/hotplug/rpaphp_core.c index 857c358b727b..bcd5d357ca23 100644 --- a/drivers/pci/hotplug/rpaphp_core.c +++ b/drivers/pci/hotplug/rpaphp_core.c | |||
@@ -52,7 +52,7 @@ module_param_named(debug, rpaphp_debug, bool, 0644); | |||
52 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value) | 52 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value) |
53 | { | 53 | { |
54 | int rc; | 54 | int rc; |
55 | struct slot *slot = (struct slot *)hotplug_slot->private; | 55 | struct slot *slot = to_slot(hotplug_slot); |
56 | 56 | ||
57 | switch (value) { | 57 | switch (value) { |
58 | case 0: | 58 | case 0: |
@@ -66,7 +66,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value) | |||
66 | 66 | ||
67 | rc = rtas_set_indicator(DR_INDICATOR, slot->index, value); | 67 | rc = rtas_set_indicator(DR_INDICATOR, slot->index, value); |
68 | if (!rc) | 68 | if (!rc) |
69 | hotplug_slot->info->attention_status = value; | 69 | slot->attention_status = value; |
70 | 70 | ||
71 | return rc; | 71 | return rc; |
72 | } | 72 | } |
@@ -79,7 +79,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value) | |||
79 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | 79 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) |
80 | { | 80 | { |
81 | int retval, level; | 81 | int retval, level; |
82 | struct slot *slot = (struct slot *)hotplug_slot->private; | 82 | struct slot *slot = to_slot(hotplug_slot); |
83 | 83 | ||
84 | retval = rtas_get_power_level(slot->power_domain, &level); | 84 | retval = rtas_get_power_level(slot->power_domain, &level); |
85 | if (!retval) | 85 | if (!retval) |
@@ -94,14 +94,14 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
94 | */ | 94 | */ |
95 | static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | 95 | static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) |
96 | { | 96 | { |
97 | struct slot *slot = (struct slot *)hotplug_slot->private; | 97 | struct slot *slot = to_slot(hotplug_slot); |
98 | *value = slot->hotplug_slot->info->attention_status; | 98 | *value = slot->attention_status; |
99 | return 0; | 99 | return 0; |
100 | } | 100 | } |
101 | 101 | ||
102 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | 102 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) |
103 | { | 103 | { |
104 | struct slot *slot = (struct slot *)hotplug_slot->private; | 104 | struct slot *slot = to_slot(hotplug_slot); |
105 | int rc, state; | 105 | int rc, state; |
106 | 106 | ||
107 | rc = rpaphp_get_sensor_state(slot, &state); | 107 | rc = rpaphp_get_sensor_state(slot, &state); |
@@ -409,7 +409,7 @@ static void __exit cleanup_slots(void) | |||
409 | list_for_each_entry_safe(slot, next, &rpaphp_slot_head, | 409 | list_for_each_entry_safe(slot, next, &rpaphp_slot_head, |
410 | rpaphp_slot_list) { | 410 | rpaphp_slot_list) { |
411 | list_del(&slot->rpaphp_slot_list); | 411 | list_del(&slot->rpaphp_slot_list); |
412 | pci_hp_deregister(slot->hotplug_slot); | 412 | pci_hp_deregister(&slot->hotplug_slot); |
413 | dealloc_slot_struct(slot); | 413 | dealloc_slot_struct(slot); |
414 | } | 414 | } |
415 | return; | 415 | return; |
@@ -434,7 +434,7 @@ static void __exit rpaphp_exit(void) | |||
434 | 434 | ||
435 | static int enable_slot(struct hotplug_slot *hotplug_slot) | 435 | static int enable_slot(struct hotplug_slot *hotplug_slot) |
436 | { | 436 | { |
437 | struct slot *slot = (struct slot *)hotplug_slot->private; | 437 | struct slot *slot = to_slot(hotplug_slot); |
438 | int state; | 438 | int state; |
439 | int retval; | 439 | int retval; |
440 | 440 | ||
@@ -464,7 +464,7 @@ static int enable_slot(struct hotplug_slot *hotplug_slot) | |||
464 | 464 | ||
465 | static int disable_slot(struct hotplug_slot *hotplug_slot) | 465 | static int disable_slot(struct hotplug_slot *hotplug_slot) |
466 | { | 466 | { |
467 | struct slot *slot = (struct slot *)hotplug_slot->private; | 467 | struct slot *slot = to_slot(hotplug_slot); |
468 | if (slot->state == NOT_CONFIGURED) | 468 | if (slot->state == NOT_CONFIGURED) |
469 | return -EINVAL; | 469 | return -EINVAL; |
470 | 470 | ||
@@ -477,7 +477,7 @@ static int disable_slot(struct hotplug_slot *hotplug_slot) | |||
477 | return 0; | 477 | return 0; |
478 | } | 478 | } |
479 | 479 | ||
480 | struct hotplug_slot_ops rpaphp_hotplug_slot_ops = { | 480 | const struct hotplug_slot_ops rpaphp_hotplug_slot_ops = { |
481 | .enable_slot = enable_slot, | 481 | .enable_slot = enable_slot, |
482 | .disable_slot = disable_slot, | 482 | .disable_slot = disable_slot, |
483 | .set_attention_status = set_attention_status, | 483 | .set_attention_status = set_attention_status, |
diff --git a/drivers/pci/hotplug/rpaphp_pci.c b/drivers/pci/hotplug/rpaphp_pci.c index 0aac33e15dab..beca61badeea 100644 --- a/drivers/pci/hotplug/rpaphp_pci.c +++ b/drivers/pci/hotplug/rpaphp_pci.c | |||
@@ -54,25 +54,21 @@ int rpaphp_get_sensor_state(struct slot *slot, int *state) | |||
54 | * rpaphp_enable_slot - record slot state, config pci device | 54 | * rpaphp_enable_slot - record slot state, config pci device |
55 | * @slot: target &slot | 55 | * @slot: target &slot |
56 | * | 56 | * |
57 | * Initialize values in the slot, and the hotplug_slot info | 57 | * Initialize values in the slot structure to indicate if there is a pci card |
58 | * structures to indicate if there is a pci card plugged into | 58 | * plugged into the slot. If the slot is not empty, run the pcibios routine |
59 | * the slot. If the slot is not empty, run the pcibios routine | ||
60 | * to get pcibios stuff correctly set up. | 59 | * to get pcibios stuff correctly set up. |
61 | */ | 60 | */ |
62 | int rpaphp_enable_slot(struct slot *slot) | 61 | int rpaphp_enable_slot(struct slot *slot) |
63 | { | 62 | { |
64 | int rc, level, state; | 63 | int rc, level, state; |
65 | struct pci_bus *bus; | 64 | struct pci_bus *bus; |
66 | struct hotplug_slot_info *info = slot->hotplug_slot->info; | ||
67 | 65 | ||
68 | info->adapter_status = NOT_VALID; | ||
69 | slot->state = EMPTY; | 66 | slot->state = EMPTY; |
70 | 67 | ||
71 | /* Find out if the power is turned on for the slot */ | 68 | /* Find out if the power is turned on for the slot */ |
72 | rc = rtas_get_power_level(slot->power_domain, &level); | 69 | rc = rtas_get_power_level(slot->power_domain, &level); |
73 | if (rc) | 70 | if (rc) |
74 | return rc; | 71 | return rc; |
75 | info->power_status = level; | ||
76 | 72 | ||
77 | /* Figure out if there is an adapter in the slot */ | 73 | /* Figure out if there is an adapter in the slot */ |
78 | rc = rpaphp_get_sensor_state(slot, &state); | 74 | rc = rpaphp_get_sensor_state(slot, &state); |
@@ -85,13 +81,11 @@ int rpaphp_enable_slot(struct slot *slot) | |||
85 | return -EINVAL; | 81 | return -EINVAL; |
86 | } | 82 | } |
87 | 83 | ||
88 | info->adapter_status = EMPTY; | ||
89 | slot->bus = bus; | 84 | slot->bus = bus; |
90 | slot->pci_devs = &bus->devices; | 85 | slot->pci_devs = &bus->devices; |
91 | 86 | ||
92 | /* if there's an adapter in the slot, go add the pci devices */ | 87 | /* if there's an adapter in the slot, go add the pci devices */ |
93 | if (state == PRESENT) { | 88 | if (state == PRESENT) { |
94 | info->adapter_status = NOT_CONFIGURED; | ||
95 | slot->state = NOT_CONFIGURED; | 89 | slot->state = NOT_CONFIGURED; |
96 | 90 | ||
97 | /* non-empty slot has to have child */ | 91 | /* non-empty slot has to have child */ |
@@ -105,7 +99,6 @@ int rpaphp_enable_slot(struct slot *slot) | |||
105 | pci_hp_add_devices(bus); | 99 | pci_hp_add_devices(bus); |
106 | 100 | ||
107 | if (!list_empty(&bus->devices)) { | 101 | if (!list_empty(&bus->devices)) { |
108 | info->adapter_status = CONFIGURED; | ||
109 | slot->state = CONFIGURED; | 102 | slot->state = CONFIGURED; |
110 | } | 103 | } |
111 | 104 | ||
diff --git a/drivers/pci/hotplug/rpaphp_slot.c b/drivers/pci/hotplug/rpaphp_slot.c index b916c8e4372d..5282aa3e33c5 100644 --- a/drivers/pci/hotplug/rpaphp_slot.c +++ b/drivers/pci/hotplug/rpaphp_slot.c | |||
@@ -21,9 +21,7 @@ | |||
21 | /* free up the memory used by a slot */ | 21 | /* free up the memory used by a slot */ |
22 | void dealloc_slot_struct(struct slot *slot) | 22 | void dealloc_slot_struct(struct slot *slot) |
23 | { | 23 | { |
24 | kfree(slot->hotplug_slot->info); | ||
25 | kfree(slot->name); | 24 | kfree(slot->name); |
26 | kfree(slot->hotplug_slot); | ||
27 | kfree(slot); | 25 | kfree(slot); |
28 | } | 26 | } |
29 | 27 | ||
@@ -35,28 +33,16 @@ struct slot *alloc_slot_struct(struct device_node *dn, | |||
35 | slot = kzalloc(sizeof(struct slot), GFP_KERNEL); | 33 | slot = kzalloc(sizeof(struct slot), GFP_KERNEL); |
36 | if (!slot) | 34 | if (!slot) |
37 | goto error_nomem; | 35 | goto error_nomem; |
38 | slot->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL); | ||
39 | if (!slot->hotplug_slot) | ||
40 | goto error_slot; | ||
41 | slot->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info), | ||
42 | GFP_KERNEL); | ||
43 | if (!slot->hotplug_slot->info) | ||
44 | goto error_hpslot; | ||
45 | slot->name = kstrdup(drc_name, GFP_KERNEL); | 36 | slot->name = kstrdup(drc_name, GFP_KERNEL); |
46 | if (!slot->name) | 37 | if (!slot->name) |
47 | goto error_info; | 38 | goto error_slot; |
48 | slot->dn = dn; | 39 | slot->dn = dn; |
49 | slot->index = drc_index; | 40 | slot->index = drc_index; |
50 | slot->power_domain = power_domain; | 41 | slot->power_domain = power_domain; |
51 | slot->hotplug_slot->private = slot; | 42 | slot->hotplug_slot.ops = &rpaphp_hotplug_slot_ops; |
52 | slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops; | ||
53 | 43 | ||
54 | return (slot); | 44 | return (slot); |
55 | 45 | ||
56 | error_info: | ||
57 | kfree(slot->hotplug_slot->info); | ||
58 | error_hpslot: | ||
59 | kfree(slot->hotplug_slot); | ||
60 | error_slot: | 46 | error_slot: |
61 | kfree(slot); | 47 | kfree(slot); |
62 | error_nomem: | 48 | error_nomem: |
@@ -77,7 +63,7 @@ static int is_registered(struct slot *slot) | |||
77 | int rpaphp_deregister_slot(struct slot *slot) | 63 | int rpaphp_deregister_slot(struct slot *slot) |
78 | { | 64 | { |
79 | int retval = 0; | 65 | int retval = 0; |
80 | struct hotplug_slot *php_slot = slot->hotplug_slot; | 66 | struct hotplug_slot *php_slot = &slot->hotplug_slot; |
81 | 67 | ||
82 | dbg("%s - Entry: deregistering slot=%s\n", | 68 | dbg("%s - Entry: deregistering slot=%s\n", |
83 | __func__, slot->name); | 69 | __func__, slot->name); |
@@ -93,7 +79,7 @@ EXPORT_SYMBOL_GPL(rpaphp_deregister_slot); | |||
93 | 79 | ||
94 | int rpaphp_register_slot(struct slot *slot) | 80 | int rpaphp_register_slot(struct slot *slot) |
95 | { | 81 | { |
96 | struct hotplug_slot *php_slot = slot->hotplug_slot; | 82 | struct hotplug_slot *php_slot = &slot->hotplug_slot; |
97 | struct device_node *child; | 83 | struct device_node *child; |
98 | u32 my_index; | 84 | u32 my_index; |
99 | int retval; | 85 | int retval; |
diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c index 93b5341d282c..30ee72268790 100644 --- a/drivers/pci/hotplug/s390_pci_hpc.c +++ b/drivers/pci/hotplug/s390_pci_hpc.c | |||
@@ -32,10 +32,15 @@ static int zpci_fn_configured(enum zpci_state state) | |||
32 | */ | 32 | */ |
33 | struct slot { | 33 | struct slot { |
34 | struct list_head slot_list; | 34 | struct list_head slot_list; |
35 | struct hotplug_slot *hotplug_slot; | 35 | struct hotplug_slot hotplug_slot; |
36 | struct zpci_dev *zdev; | 36 | struct zpci_dev *zdev; |
37 | }; | 37 | }; |
38 | 38 | ||
39 | static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot) | ||
40 | { | ||
41 | return container_of(hotplug_slot, struct slot, hotplug_slot); | ||
42 | } | ||
43 | |||
39 | static inline int slot_configure(struct slot *slot) | 44 | static inline int slot_configure(struct slot *slot) |
40 | { | 45 | { |
41 | int ret = sclp_pci_configure(slot->zdev->fid); | 46 | int ret = sclp_pci_configure(slot->zdev->fid); |
@@ -60,7 +65,7 @@ static inline int slot_deconfigure(struct slot *slot) | |||
60 | 65 | ||
61 | static int enable_slot(struct hotplug_slot *hotplug_slot) | 66 | static int enable_slot(struct hotplug_slot *hotplug_slot) |
62 | { | 67 | { |
63 | struct slot *slot = hotplug_slot->private; | 68 | struct slot *slot = to_slot(hotplug_slot); |
64 | int rc; | 69 | int rc; |
65 | 70 | ||
66 | if (slot->zdev->state != ZPCI_FN_STATE_STANDBY) | 71 | if (slot->zdev->state != ZPCI_FN_STATE_STANDBY) |
@@ -88,7 +93,7 @@ out_deconfigure: | |||
88 | 93 | ||
89 | static int disable_slot(struct hotplug_slot *hotplug_slot) | 94 | static int disable_slot(struct hotplug_slot *hotplug_slot) |
90 | { | 95 | { |
91 | struct slot *slot = hotplug_slot->private; | 96 | struct slot *slot = to_slot(hotplug_slot); |
92 | struct pci_dev *pdev; | 97 | struct pci_dev *pdev; |
93 | int rc; | 98 | int rc; |
94 | 99 | ||
@@ -110,7 +115,7 @@ static int disable_slot(struct hotplug_slot *hotplug_slot) | |||
110 | 115 | ||
111 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | 116 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) |
112 | { | 117 | { |
113 | struct slot *slot = hotplug_slot->private; | 118 | struct slot *slot = to_slot(hotplug_slot); |
114 | 119 | ||
115 | switch (slot->zdev->state) { | 120 | switch (slot->zdev->state) { |
116 | case ZPCI_FN_STATE_STANDBY: | 121 | case ZPCI_FN_STATE_STANDBY: |
@@ -130,7 +135,7 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
130 | return 0; | 135 | return 0; |
131 | } | 136 | } |
132 | 137 | ||
133 | static struct hotplug_slot_ops s390_hotplug_slot_ops = { | 138 | static const struct hotplug_slot_ops s390_hotplug_slot_ops = { |
134 | .enable_slot = enable_slot, | 139 | .enable_slot = enable_slot, |
135 | .disable_slot = disable_slot, | 140 | .disable_slot = disable_slot, |
136 | .get_power_status = get_power_status, | 141 | .get_power_status = get_power_status, |
@@ -139,8 +144,6 @@ static struct hotplug_slot_ops s390_hotplug_slot_ops = { | |||
139 | 144 | ||
140 | int zpci_init_slot(struct zpci_dev *zdev) | 145 | int zpci_init_slot(struct zpci_dev *zdev) |
141 | { | 146 | { |
142 | struct hotplug_slot *hotplug_slot; | ||
143 | struct hotplug_slot_info *info; | ||
144 | char name[SLOT_NAME_SIZE]; | 147 | char name[SLOT_NAME_SIZE]; |
145 | struct slot *slot; | 148 | struct slot *slot; |
146 | int rc; | 149 | int rc; |
@@ -152,26 +155,11 @@ int zpci_init_slot(struct zpci_dev *zdev) | |||
152 | if (!slot) | 155 | if (!slot) |
153 | goto error; | 156 | goto error; |
154 | 157 | ||
155 | hotplug_slot = kzalloc(sizeof(*hotplug_slot), GFP_KERNEL); | ||
156 | if (!hotplug_slot) | ||
157 | goto error_hp; | ||
158 | hotplug_slot->private = slot; | ||
159 | |||
160 | slot->hotplug_slot = hotplug_slot; | ||
161 | slot->zdev = zdev; | 158 | slot->zdev = zdev; |
162 | 159 | slot->hotplug_slot.ops = &s390_hotplug_slot_ops; | |
163 | info = kzalloc(sizeof(*info), GFP_KERNEL); | ||
164 | if (!info) | ||
165 | goto error_info; | ||
166 | hotplug_slot->info = info; | ||
167 | |||
168 | hotplug_slot->ops = &s390_hotplug_slot_ops; | ||
169 | |||
170 | get_power_status(hotplug_slot, &info->power_status); | ||
171 | get_adapter_status(hotplug_slot, &info->adapter_status); | ||
172 | 160 | ||
173 | snprintf(name, SLOT_NAME_SIZE, "%08x", zdev->fid); | 161 | snprintf(name, SLOT_NAME_SIZE, "%08x", zdev->fid); |
174 | rc = pci_hp_register(slot->hotplug_slot, zdev->bus, | 162 | rc = pci_hp_register(&slot->hotplug_slot, zdev->bus, |
175 | ZPCI_DEVFN, name); | 163 | ZPCI_DEVFN, name); |
176 | if (rc) | 164 | if (rc) |
177 | goto error_reg; | 165 | goto error_reg; |
@@ -180,10 +168,6 @@ int zpci_init_slot(struct zpci_dev *zdev) | |||
180 | return 0; | 168 | return 0; |
181 | 169 | ||
182 | error_reg: | 170 | error_reg: |
183 | kfree(info); | ||
184 | error_info: | ||
185 | kfree(hotplug_slot); | ||
186 | error_hp: | ||
187 | kfree(slot); | 171 | kfree(slot); |
188 | error: | 172 | error: |
189 | return -ENOMEM; | 173 | return -ENOMEM; |
@@ -198,9 +182,7 @@ void zpci_exit_slot(struct zpci_dev *zdev) | |||
198 | if (slot->zdev != zdev) | 182 | if (slot->zdev != zdev) |
199 | continue; | 183 | continue; |
200 | list_del(&slot->slot_list); | 184 | list_del(&slot->slot_list); |
201 | pci_hp_deregister(slot->hotplug_slot); | 185 | pci_hp_deregister(&slot->hotplug_slot); |
202 | kfree(slot->hotplug_slot->info); | ||
203 | kfree(slot->hotplug_slot); | ||
204 | kfree(slot); | 186 | kfree(slot); |
205 | } | 187 | } |
206 | } | 188 | } |
diff --git a/drivers/pci/hotplug/sgi_hotplug.c b/drivers/pci/hotplug/sgi_hotplug.c index babd23409f61..231f5bdd3d2d 100644 --- a/drivers/pci/hotplug/sgi_hotplug.c +++ b/drivers/pci/hotplug/sgi_hotplug.c | |||
@@ -56,7 +56,7 @@ struct slot { | |||
56 | int device_num; | 56 | int device_num; |
57 | struct pci_bus *pci_bus; | 57 | struct pci_bus *pci_bus; |
58 | /* this struct for glue internal only */ | 58 | /* this struct for glue internal only */ |
59 | struct hotplug_slot *hotplug_slot; | 59 | struct hotplug_slot hotplug_slot; |
60 | struct list_head hp_list; | 60 | struct list_head hp_list; |
61 | char physical_path[SN_SLOT_NAME_SIZE]; | 61 | char physical_path[SN_SLOT_NAME_SIZE]; |
62 | }; | 62 | }; |
@@ -80,7 +80,7 @@ static int enable_slot(struct hotplug_slot *slot); | |||
80 | static int disable_slot(struct hotplug_slot *slot); | 80 | static int disable_slot(struct hotplug_slot *slot); |
81 | static inline int get_power_status(struct hotplug_slot *slot, u8 *value); | 81 | static inline int get_power_status(struct hotplug_slot *slot, u8 *value); |
82 | 82 | ||
83 | static struct hotplug_slot_ops sn_hotplug_slot_ops = { | 83 | static const struct hotplug_slot_ops sn_hotplug_slot_ops = { |
84 | .enable_slot = enable_slot, | 84 | .enable_slot = enable_slot, |
85 | .disable_slot = disable_slot, | 85 | .disable_slot = disable_slot, |
86 | .get_power_status = get_power_status, | 86 | .get_power_status = get_power_status, |
@@ -88,10 +88,15 @@ static struct hotplug_slot_ops sn_hotplug_slot_ops = { | |||
88 | 88 | ||
89 | static DEFINE_MUTEX(sn_hotplug_mutex); | 89 | static DEFINE_MUTEX(sn_hotplug_mutex); |
90 | 90 | ||
91 | static struct slot *to_slot(struct hotplug_slot *bss_hotplug_slot) | ||
92 | { | ||
93 | return container_of(bss_hotplug_slot, struct slot, hotplug_slot); | ||
94 | } | ||
95 | |||
91 | static ssize_t path_show(struct pci_slot *pci_slot, char *buf) | 96 | static ssize_t path_show(struct pci_slot *pci_slot, char *buf) |
92 | { | 97 | { |
93 | int retval = -ENOENT; | 98 | int retval = -ENOENT; |
94 | struct slot *slot = pci_slot->hotplug->private; | 99 | struct slot *slot = to_slot(pci_slot->hotplug); |
95 | 100 | ||
96 | if (!slot) | 101 | if (!slot) |
97 | return retval; | 102 | return retval; |
@@ -156,7 +161,7 @@ static int sn_pci_bus_valid(struct pci_bus *pci_bus) | |||
156 | return -EIO; | 161 | return -EIO; |
157 | } | 162 | } |
158 | 163 | ||
159 | static int sn_hp_slot_private_alloc(struct hotplug_slot *bss_hotplug_slot, | 164 | static int sn_hp_slot_private_alloc(struct hotplug_slot **bss_hotplug_slot, |
160 | struct pci_bus *pci_bus, int device, | 165 | struct pci_bus *pci_bus, int device, |
161 | char *name) | 166 | char *name) |
162 | { | 167 | { |
@@ -168,7 +173,6 @@ static int sn_hp_slot_private_alloc(struct hotplug_slot *bss_hotplug_slot, | |||
168 | slot = kzalloc(sizeof(*slot), GFP_KERNEL); | 173 | slot = kzalloc(sizeof(*slot), GFP_KERNEL); |
169 | if (!slot) | 174 | if (!slot) |
170 | return -ENOMEM; | 175 | return -ENOMEM; |
171 | bss_hotplug_slot->private = slot; | ||
172 | 176 | ||
173 | slot->device_num = device; | 177 | slot->device_num = device; |
174 | slot->pci_bus = pci_bus; | 178 | slot->pci_bus = pci_bus; |
@@ -179,8 +183,8 @@ static int sn_hp_slot_private_alloc(struct hotplug_slot *bss_hotplug_slot, | |||
179 | 183 | ||
180 | sn_generate_path(pci_bus, slot->physical_path); | 184 | sn_generate_path(pci_bus, slot->physical_path); |
181 | 185 | ||
182 | slot->hotplug_slot = bss_hotplug_slot; | ||
183 | list_add(&slot->hp_list, &sn_hp_list); | 186 | list_add(&slot->hp_list, &sn_hp_list); |
187 | *bss_hotplug_slot = &slot->hotplug_slot; | ||
184 | 188 | ||
185 | return 0; | 189 | return 0; |
186 | } | 190 | } |
@@ -192,10 +196,9 @@ static struct hotplug_slot *sn_hp_destroy(void) | |||
192 | struct hotplug_slot *bss_hotplug_slot = NULL; | 196 | struct hotplug_slot *bss_hotplug_slot = NULL; |
193 | 197 | ||
194 | list_for_each_entry(slot, &sn_hp_list, hp_list) { | 198 | list_for_each_entry(slot, &sn_hp_list, hp_list) { |
195 | bss_hotplug_slot = slot->hotplug_slot; | 199 | bss_hotplug_slot = &slot->hotplug_slot; |
196 | pci_slot = bss_hotplug_slot->pci_slot; | 200 | pci_slot = bss_hotplug_slot->pci_slot; |
197 | list_del(&((struct slot *)bss_hotplug_slot->private)-> | 201 | list_del(&slot->hp_list); |
198 | hp_list); | ||
199 | sysfs_remove_file(&pci_slot->kobj, | 202 | sysfs_remove_file(&pci_slot->kobj, |
200 | &sn_slot_path_attr.attr); | 203 | &sn_slot_path_attr.attr); |
201 | break; | 204 | break; |
@@ -227,7 +230,7 @@ static void sn_bus_free_data(struct pci_dev *dev) | |||
227 | static int sn_slot_enable(struct hotplug_slot *bss_hotplug_slot, | 230 | static int sn_slot_enable(struct hotplug_slot *bss_hotplug_slot, |
228 | int device_num, char **ssdt) | 231 | int device_num, char **ssdt) |
229 | { | 232 | { |
230 | struct slot *slot = bss_hotplug_slot->private; | 233 | struct slot *slot = to_slot(bss_hotplug_slot); |
231 | struct pcibus_info *pcibus_info; | 234 | struct pcibus_info *pcibus_info; |
232 | struct pcibr_slot_enable_resp resp; | 235 | struct pcibr_slot_enable_resp resp; |
233 | int rc; | 236 | int rc; |
@@ -267,7 +270,7 @@ static int sn_slot_enable(struct hotplug_slot *bss_hotplug_slot, | |||
267 | static int sn_slot_disable(struct hotplug_slot *bss_hotplug_slot, | 270 | static int sn_slot_disable(struct hotplug_slot *bss_hotplug_slot, |
268 | int device_num, int action) | 271 | int device_num, int action) |
269 | { | 272 | { |
270 | struct slot *slot = bss_hotplug_slot->private; | 273 | struct slot *slot = to_slot(bss_hotplug_slot); |
271 | struct pcibus_info *pcibus_info; | 274 | struct pcibus_info *pcibus_info; |
272 | struct pcibr_slot_disable_resp resp; | 275 | struct pcibr_slot_disable_resp resp; |
273 | int rc; | 276 | int rc; |
@@ -323,7 +326,7 @@ static int sn_slot_disable(struct hotplug_slot *bss_hotplug_slot, | |||
323 | */ | 326 | */ |
324 | static int enable_slot(struct hotplug_slot *bss_hotplug_slot) | 327 | static int enable_slot(struct hotplug_slot *bss_hotplug_slot) |
325 | { | 328 | { |
326 | struct slot *slot = bss_hotplug_slot->private; | 329 | struct slot *slot = to_slot(bss_hotplug_slot); |
327 | struct pci_bus *new_bus = NULL; | 330 | struct pci_bus *new_bus = NULL; |
328 | struct pci_dev *dev; | 331 | struct pci_dev *dev; |
329 | int num_funcs; | 332 | int num_funcs; |
@@ -469,7 +472,7 @@ static int enable_slot(struct hotplug_slot *bss_hotplug_slot) | |||
469 | 472 | ||
470 | static int disable_slot(struct hotplug_slot *bss_hotplug_slot) | 473 | static int disable_slot(struct hotplug_slot *bss_hotplug_slot) |
471 | { | 474 | { |
472 | struct slot *slot = bss_hotplug_slot->private; | 475 | struct slot *slot = to_slot(bss_hotplug_slot); |
473 | struct pci_dev *dev, *temp; | 476 | struct pci_dev *dev, *temp; |
474 | int rc; | 477 | int rc; |
475 | acpi_handle ssdt_hdl = NULL; | 478 | acpi_handle ssdt_hdl = NULL; |
@@ -571,7 +574,7 @@ static int disable_slot(struct hotplug_slot *bss_hotplug_slot) | |||
571 | static inline int get_power_status(struct hotplug_slot *bss_hotplug_slot, | 574 | static inline int get_power_status(struct hotplug_slot *bss_hotplug_slot, |
572 | u8 *value) | 575 | u8 *value) |
573 | { | 576 | { |
574 | struct slot *slot = bss_hotplug_slot->private; | 577 | struct slot *slot = to_slot(bss_hotplug_slot); |
575 | struct pcibus_info *pcibus_info; | 578 | struct pcibus_info *pcibus_info; |
576 | u32 power; | 579 | u32 power; |
577 | 580 | ||
@@ -585,9 +588,7 @@ static inline int get_power_status(struct hotplug_slot *bss_hotplug_slot, | |||
585 | 588 | ||
586 | static void sn_release_slot(struct hotplug_slot *bss_hotplug_slot) | 589 | static void sn_release_slot(struct hotplug_slot *bss_hotplug_slot) |
587 | { | 590 | { |
588 | kfree(bss_hotplug_slot->info); | 591 | kfree(to_slot(bss_hotplug_slot)); |
589 | kfree(bss_hotplug_slot->private); | ||
590 | kfree(bss_hotplug_slot); | ||
591 | } | 592 | } |
592 | 593 | ||
593 | static int sn_hotplug_slot_register(struct pci_bus *pci_bus) | 594 | static int sn_hotplug_slot_register(struct pci_bus *pci_bus) |
@@ -607,22 +608,7 @@ static int sn_hotplug_slot_register(struct pci_bus *pci_bus) | |||
607 | if (sn_pci_slot_valid(pci_bus, device) != 1) | 608 | if (sn_pci_slot_valid(pci_bus, device) != 1) |
608 | continue; | 609 | continue; |
609 | 610 | ||
610 | bss_hotplug_slot = kzalloc(sizeof(*bss_hotplug_slot), | 611 | if (sn_hp_slot_private_alloc(&bss_hotplug_slot, |
611 | GFP_KERNEL); | ||
612 | if (!bss_hotplug_slot) { | ||
613 | rc = -ENOMEM; | ||
614 | goto alloc_err; | ||
615 | } | ||
616 | |||
617 | bss_hotplug_slot->info = | ||
618 | kzalloc(sizeof(struct hotplug_slot_info), | ||
619 | GFP_KERNEL); | ||
620 | if (!bss_hotplug_slot->info) { | ||
621 | rc = -ENOMEM; | ||
622 | goto alloc_err; | ||
623 | } | ||
624 | |||
625 | if (sn_hp_slot_private_alloc(bss_hotplug_slot, | ||
626 | pci_bus, device, name)) { | 612 | pci_bus, device, name)) { |
627 | rc = -ENOMEM; | 613 | rc = -ENOMEM; |
628 | goto alloc_err; | 614 | goto alloc_err; |
@@ -637,7 +623,7 @@ static int sn_hotplug_slot_register(struct pci_bus *pci_bus) | |||
637 | rc = sysfs_create_file(&pci_slot->kobj, | 623 | rc = sysfs_create_file(&pci_slot->kobj, |
638 | &sn_slot_path_attr.attr); | 624 | &sn_slot_path_attr.attr); |
639 | if (rc) | 625 | if (rc) |
640 | goto register_err; | 626 | goto alloc_err; |
641 | } | 627 | } |
642 | pci_dbg(pci_bus->self, "Registered bus with hotplug\n"); | 628 | pci_dbg(pci_bus->self, "Registered bus with hotplug\n"); |
643 | return rc; | 629 | return rc; |
@@ -646,14 +632,11 @@ register_err: | |||
646 | pci_dbg(pci_bus->self, "bus failed to register with err = %d\n", | 632 | pci_dbg(pci_bus->self, "bus failed to register with err = %d\n", |
647 | rc); | 633 | rc); |
648 | 634 | ||
649 | alloc_err: | ||
650 | if (rc == -ENOMEM) | ||
651 | pci_dbg(pci_bus->self, "Memory allocation error\n"); | ||
652 | |||
653 | /* destroy THIS element */ | 635 | /* destroy THIS element */ |
654 | if (bss_hotplug_slot) | 636 | sn_hp_destroy(); |
655 | sn_release_slot(bss_hotplug_slot); | 637 | sn_release_slot(bss_hotplug_slot); |
656 | 638 | ||
639 | alloc_err: | ||
657 | /* destroy anything else on the list */ | 640 | /* destroy anything else on the list */ |
658 | while ((bss_hotplug_slot = sn_hp_destroy())) { | 641 | while ((bss_hotplug_slot = sn_hp_destroy())) { |
659 | pci_hp_deregister(bss_hotplug_slot); | 642 | pci_hp_deregister(bss_hotplug_slot); |
diff --git a/drivers/pci/hotplug/shpchp.h b/drivers/pci/hotplug/shpchp.h index 516e4835019c..f7f13ee5d06e 100644 --- a/drivers/pci/hotplug/shpchp.h +++ b/drivers/pci/hotplug/shpchp.h | |||
@@ -67,11 +67,13 @@ struct slot { | |||
67 | u32 number; | 67 | u32 number; |
68 | u8 is_a_board; | 68 | u8 is_a_board; |
69 | u8 state; | 69 | u8 state; |
70 | u8 attention_save; | ||
70 | u8 presence_save; | 71 | u8 presence_save; |
72 | u8 latch_save; | ||
71 | u8 pwr_save; | 73 | u8 pwr_save; |
72 | struct controller *ctrl; | 74 | struct controller *ctrl; |
73 | const struct hpc_ops *hpc_ops; | 75 | const struct hpc_ops *hpc_ops; |
74 | struct hotplug_slot *hotplug_slot; | 76 | struct hotplug_slot hotplug_slot; |
75 | struct list_head slot_list; | 77 | struct list_head slot_list; |
76 | struct delayed_work work; /* work for button event */ | 78 | struct delayed_work work; /* work for button event */ |
77 | struct mutex lock; | 79 | struct mutex lock; |
@@ -169,7 +171,7 @@ int shpc_init(struct controller *ctrl, struct pci_dev *pdev); | |||
169 | 171 | ||
170 | static inline const char *slot_name(struct slot *slot) | 172 | static inline const char *slot_name(struct slot *slot) |
171 | { | 173 | { |
172 | return hotplug_slot_name(slot->hotplug_slot); | 174 | return hotplug_slot_name(&slot->hotplug_slot); |
173 | } | 175 | } |
174 | 176 | ||
175 | struct ctrl_reg { | 177 | struct ctrl_reg { |
@@ -207,7 +209,7 @@ enum ctrl_offsets { | |||
207 | 209 | ||
208 | static inline struct slot *get_slot(struct hotplug_slot *hotplug_slot) | 210 | static inline struct slot *get_slot(struct hotplug_slot *hotplug_slot) |
209 | { | 211 | { |
210 | return hotplug_slot->private; | 212 | return container_of(hotplug_slot, struct slot, hotplug_slot); |
211 | } | 213 | } |
212 | 214 | ||
213 | static inline struct slot *shpchp_find_slot(struct controller *ctrl, u8 device) | 215 | static inline struct slot *shpchp_find_slot(struct controller *ctrl, u8 device) |
diff --git a/drivers/pci/hotplug/shpchp_core.c b/drivers/pci/hotplug/shpchp_core.c index 97cee23f3d51..81a918d47895 100644 --- a/drivers/pci/hotplug/shpchp_core.c +++ b/drivers/pci/hotplug/shpchp_core.c | |||
@@ -51,7 +51,7 @@ static int get_attention_status(struct hotplug_slot *slot, u8 *value); | |||
51 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); | 51 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); |
52 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); | 52 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); |
53 | 53 | ||
54 | static struct hotplug_slot_ops shpchp_hotplug_slot_ops = { | 54 | static const struct hotplug_slot_ops shpchp_hotplug_slot_ops = { |
55 | .set_attention_status = set_attention_status, | 55 | .set_attention_status = set_attention_status, |
56 | .enable_slot = enable_slot, | 56 | .enable_slot = enable_slot, |
57 | .disable_slot = disable_slot, | 57 | .disable_slot = disable_slot, |
@@ -65,7 +65,6 @@ static int init_slots(struct controller *ctrl) | |||
65 | { | 65 | { |
66 | struct slot *slot; | 66 | struct slot *slot; |
67 | struct hotplug_slot *hotplug_slot; | 67 | struct hotplug_slot *hotplug_slot; |
68 | struct hotplug_slot_info *info; | ||
69 | char name[SLOT_NAME_SIZE]; | 68 | char name[SLOT_NAME_SIZE]; |
70 | int retval; | 69 | int retval; |
71 | int i; | 70 | int i; |
@@ -77,19 +76,7 @@ static int init_slots(struct controller *ctrl) | |||
77 | goto error; | 76 | goto error; |
78 | } | 77 | } |
79 | 78 | ||
80 | hotplug_slot = kzalloc(sizeof(*hotplug_slot), GFP_KERNEL); | 79 | hotplug_slot = &slot->hotplug_slot; |
81 | if (!hotplug_slot) { | ||
82 | retval = -ENOMEM; | ||
83 | goto error_slot; | ||
84 | } | ||
85 | slot->hotplug_slot = hotplug_slot; | ||
86 | |||
87 | info = kzalloc(sizeof(*info), GFP_KERNEL); | ||
88 | if (!info) { | ||
89 | retval = -ENOMEM; | ||
90 | goto error_hpslot; | ||
91 | } | ||
92 | hotplug_slot->info = info; | ||
93 | 80 | ||
94 | slot->hp_slot = i; | 81 | slot->hp_slot = i; |
95 | slot->ctrl = ctrl; | 82 | slot->ctrl = ctrl; |
@@ -101,14 +88,13 @@ static int init_slots(struct controller *ctrl) | |||
101 | slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number); | 88 | slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number); |
102 | if (!slot->wq) { | 89 | if (!slot->wq) { |
103 | retval = -ENOMEM; | 90 | retval = -ENOMEM; |
104 | goto error_info; | 91 | goto error_slot; |
105 | } | 92 | } |
106 | 93 | ||
107 | mutex_init(&slot->lock); | 94 | mutex_init(&slot->lock); |
108 | INIT_DELAYED_WORK(&slot->work, shpchp_queue_pushbutton_work); | 95 | INIT_DELAYED_WORK(&slot->work, shpchp_queue_pushbutton_work); |
109 | 96 | ||
110 | /* register this slot with the hotplug pci core */ | 97 | /* register this slot with the hotplug pci core */ |
111 | hotplug_slot->private = slot; | ||
112 | snprintf(name, SLOT_NAME_SIZE, "%d", slot->number); | 98 | snprintf(name, SLOT_NAME_SIZE, "%d", slot->number); |
113 | hotplug_slot->ops = &shpchp_hotplug_slot_ops; | 99 | hotplug_slot->ops = &shpchp_hotplug_slot_ops; |
114 | 100 | ||
@@ -116,7 +102,7 @@ static int init_slots(struct controller *ctrl) | |||
116 | pci_domain_nr(ctrl->pci_dev->subordinate), | 102 | pci_domain_nr(ctrl->pci_dev->subordinate), |
117 | slot->bus, slot->device, slot->hp_slot, slot->number, | 103 | slot->bus, slot->device, slot->hp_slot, slot->number, |
118 | ctrl->slot_device_offset); | 104 | ctrl->slot_device_offset); |
119 | retval = pci_hp_register(slot->hotplug_slot, | 105 | retval = pci_hp_register(hotplug_slot, |
120 | ctrl->pci_dev->subordinate, slot->device, name); | 106 | ctrl->pci_dev->subordinate, slot->device, name); |
121 | if (retval) { | 107 | if (retval) { |
122 | ctrl_err(ctrl, "pci_hp_register failed with error %d\n", | 108 | ctrl_err(ctrl, "pci_hp_register failed with error %d\n", |
@@ -124,10 +110,10 @@ static int init_slots(struct controller *ctrl) | |||
124 | goto error_slotwq; | 110 | goto error_slotwq; |
125 | } | 111 | } |
126 | 112 | ||
127 | get_power_status(hotplug_slot, &info->power_status); | 113 | get_power_status(hotplug_slot, &slot->pwr_save); |
128 | get_attention_status(hotplug_slot, &info->attention_status); | 114 | get_attention_status(hotplug_slot, &slot->attention_save); |
129 | get_latch_status(hotplug_slot, &info->latch_status); | 115 | get_latch_status(hotplug_slot, &slot->latch_save); |
130 | get_adapter_status(hotplug_slot, &info->adapter_status); | 116 | get_adapter_status(hotplug_slot, &slot->presence_save); |
131 | 117 | ||
132 | list_add(&slot->slot_list, &ctrl->slot_list); | 118 | list_add(&slot->slot_list, &ctrl->slot_list); |
133 | } | 119 | } |
@@ -135,10 +121,6 @@ static int init_slots(struct controller *ctrl) | |||
135 | return 0; | 121 | return 0; |
136 | error_slotwq: | 122 | error_slotwq: |
137 | destroy_workqueue(slot->wq); | 123 | destroy_workqueue(slot->wq); |
138 | error_info: | ||
139 | kfree(info); | ||
140 | error_hpslot: | ||
141 | kfree(hotplug_slot); | ||
142 | error_slot: | 124 | error_slot: |
143 | kfree(slot); | 125 | kfree(slot); |
144 | error: | 126 | error: |
@@ -153,9 +135,7 @@ void cleanup_slots(struct controller *ctrl) | |||
153 | list_del(&slot->slot_list); | 135 | list_del(&slot->slot_list); |
154 | cancel_delayed_work(&slot->work); | 136 | cancel_delayed_work(&slot->work); |
155 | destroy_workqueue(slot->wq); | 137 | destroy_workqueue(slot->wq); |
156 | pci_hp_deregister(slot->hotplug_slot); | 138 | pci_hp_deregister(&slot->hotplug_slot); |
157 | kfree(slot->hotplug_slot->info); | ||
158 | kfree(slot->hotplug_slot); | ||
159 | kfree(slot); | 139 | kfree(slot); |
160 | } | 140 | } |
161 | } | 141 | } |
@@ -170,7 +150,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | |||
170 | ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", | 150 | ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", |
171 | __func__, slot_name(slot)); | 151 | __func__, slot_name(slot)); |
172 | 152 | ||
173 | hotplug_slot->info->attention_status = status; | 153 | slot->attention_save = status; |
174 | slot->hpc_ops->set_attention_status(slot, status); | 154 | slot->hpc_ops->set_attention_status(slot, status); |
175 | 155 | ||
176 | return 0; | 156 | return 0; |
@@ -206,7 +186,7 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
206 | 186 | ||
207 | retval = slot->hpc_ops->get_power_status(slot, value); | 187 | retval = slot->hpc_ops->get_power_status(slot, value); |
208 | if (retval < 0) | 188 | if (retval < 0) |
209 | *value = hotplug_slot->info->power_status; | 189 | *value = slot->pwr_save; |
210 | 190 | ||
211 | return 0; | 191 | return 0; |
212 | } | 192 | } |
@@ -221,7 +201,7 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
221 | 201 | ||
222 | retval = slot->hpc_ops->get_attention_status(slot, value); | 202 | retval = slot->hpc_ops->get_attention_status(slot, value); |
223 | if (retval < 0) | 203 | if (retval < 0) |
224 | *value = hotplug_slot->info->attention_status; | 204 | *value = slot->attention_save; |
225 | 205 | ||
226 | return 0; | 206 | return 0; |
227 | } | 207 | } |
@@ -236,7 +216,7 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
236 | 216 | ||
237 | retval = slot->hpc_ops->get_latch_status(slot, value); | 217 | retval = slot->hpc_ops->get_latch_status(slot, value); |
238 | if (retval < 0) | 218 | if (retval < 0) |
239 | *value = hotplug_slot->info->latch_status; | 219 | *value = slot->latch_save; |
240 | 220 | ||
241 | return 0; | 221 | return 0; |
242 | } | 222 | } |
@@ -251,7 +231,7 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
251 | 231 | ||
252 | retval = slot->hpc_ops->get_adapter_status(slot, value); | 232 | retval = slot->hpc_ops->get_adapter_status(slot, value); |
253 | if (retval < 0) | 233 | if (retval < 0) |
254 | *value = hotplug_slot->info->adapter_status; | 234 | *value = slot->presence_save; |
255 | 235 | ||
256 | return 0; | 236 | return 0; |
257 | } | 237 | } |
diff --git a/drivers/pci/hotplug/shpchp_ctrl.c b/drivers/pci/hotplug/shpchp_ctrl.c index 1267dcc5a531..078003dcde5b 100644 --- a/drivers/pci/hotplug/shpchp_ctrl.c +++ b/drivers/pci/hotplug/shpchp_ctrl.c | |||
@@ -446,23 +446,12 @@ void shpchp_queue_pushbutton_work(struct work_struct *work) | |||
446 | mutex_unlock(&p_slot->lock); | 446 | mutex_unlock(&p_slot->lock); |
447 | } | 447 | } |
448 | 448 | ||
449 | static int update_slot_info (struct slot *slot) | 449 | static void update_slot_info(struct slot *slot) |
450 | { | 450 | { |
451 | struct hotplug_slot_info *info; | 451 | slot->hpc_ops->get_power_status(slot, &slot->pwr_save); |
452 | int result; | 452 | slot->hpc_ops->get_attention_status(slot, &slot->attention_save); |
453 | 453 | slot->hpc_ops->get_latch_status(slot, &slot->latch_save); | |
454 | info = kmalloc(sizeof(*info), GFP_KERNEL); | 454 | slot->hpc_ops->get_adapter_status(slot, &slot->presence_save); |
455 | if (!info) | ||
456 | return -ENOMEM; | ||
457 | |||
458 | slot->hpc_ops->get_power_status(slot, &(info->power_status)); | ||
459 | slot->hpc_ops->get_attention_status(slot, &(info->attention_status)); | ||
460 | slot->hpc_ops->get_latch_status(slot, &(info->latch_status)); | ||
461 | slot->hpc_ops->get_adapter_status(slot, &(info->adapter_status)); | ||
462 | |||
463 | result = pci_hp_change_slot_info(slot->hotplug_slot, info); | ||
464 | kfree (info); | ||
465 | return result; | ||
466 | } | 455 | } |
467 | 456 | ||
468 | /* | 457 | /* |
diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c index c5f3cd4ed766..9616eca3182f 100644 --- a/drivers/pci/iov.c +++ b/drivers/pci/iov.c | |||
@@ -13,7 +13,6 @@ | |||
13 | #include <linux/export.h> | 13 | #include <linux/export.h> |
14 | #include <linux/string.h> | 14 | #include <linux/string.h> |
15 | #include <linux/delay.h> | 15 | #include <linux/delay.h> |
16 | #include <linux/pci-ats.h> | ||
17 | #include "pci.h" | 16 | #include "pci.h" |
18 | 17 | ||
19 | #define VIRTFN_ID_LEN 16 | 18 | #define VIRTFN_ID_LEN 16 |
@@ -133,6 +132,8 @@ static void pci_read_vf_config_common(struct pci_dev *virtfn) | |||
133 | &physfn->sriov->subsystem_vendor); | 132 | &physfn->sriov->subsystem_vendor); |
134 | pci_read_config_word(virtfn, PCI_SUBSYSTEM_ID, | 133 | pci_read_config_word(virtfn, PCI_SUBSYSTEM_ID, |
135 | &physfn->sriov->subsystem_device); | 134 | &physfn->sriov->subsystem_device); |
135 | |||
136 | physfn->sriov->cfg_size = pci_cfg_space_size(virtfn); | ||
136 | } | 137 | } |
137 | 138 | ||
138 | int pci_iov_add_virtfn(struct pci_dev *dev, int id) | 139 | int pci_iov_add_virtfn(struct pci_dev *dev, int id) |
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index f2ef896464b3..af24ed50a245 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c | |||
@@ -958,7 +958,6 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, | |||
958 | } | 958 | } |
959 | } | 959 | } |
960 | } | 960 | } |
961 | WARN_ON(!!dev->msix_enabled); | ||
962 | 961 | ||
963 | /* Check whether driver already requested for MSI irq */ | 962 | /* Check whether driver already requested for MSI irq */ |
964 | if (dev->msi_enabled) { | 963 | if (dev->msi_enabled) { |
@@ -1028,8 +1027,6 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, | |||
1028 | if (!pci_msi_supported(dev, minvec)) | 1027 | if (!pci_msi_supported(dev, minvec)) |
1029 | return -EINVAL; | 1028 | return -EINVAL; |
1030 | 1029 | ||
1031 | WARN_ON(!!dev->msi_enabled); | ||
1032 | |||
1033 | /* Check whether driver already requested MSI-X irqs */ | 1030 | /* Check whether driver already requested MSI-X irqs */ |
1034 | if (dev->msix_enabled) { | 1031 | if (dev->msix_enabled) { |
1035 | pci_info(dev, "can't enable MSI (MSI-X already enabled)\n"); | 1032 | pci_info(dev, "can't enable MSI (MSI-X already enabled)\n"); |
@@ -1039,6 +1036,9 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, | |||
1039 | if (maxvec < minvec) | 1036 | if (maxvec < minvec) |
1040 | return -ERANGE; | 1037 | return -ERANGE; |
1041 | 1038 | ||
1039 | if (WARN_ON_ONCE(dev->msi_enabled)) | ||
1040 | return -EINVAL; | ||
1041 | |||
1042 | nvec = pci_msi_vec_count(dev); | 1042 | nvec = pci_msi_vec_count(dev); |
1043 | if (nvec < 0) | 1043 | if (nvec < 0) |
1044 | return nvec; | 1044 | return nvec; |
@@ -1087,6 +1087,9 @@ static int __pci_enable_msix_range(struct pci_dev *dev, | |||
1087 | if (maxvec < minvec) | 1087 | if (maxvec < minvec) |
1088 | return -ERANGE; | 1088 | return -ERANGE; |
1089 | 1089 | ||
1090 | if (WARN_ON_ONCE(dev->msix_enabled)) | ||
1091 | return -EINVAL; | ||
1092 | |||
1090 | for (;;) { | 1093 | for (;;) { |
1091 | if (affd) { | 1094 | if (affd) { |
1092 | nvec = irq_calc_affinity_vectors(minvec, nvec, affd); | 1095 | nvec = irq_calc_affinity_vectors(minvec, nvec, affd); |
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c new file mode 100644 index 000000000000..ae3c5b25dcc7 --- /dev/null +++ b/drivers/pci/p2pdma.c | |||
@@ -0,0 +1,805 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0 | ||
2 | /* | ||
3 | * PCI Peer 2 Peer DMA support. | ||
4 | * | ||
5 | * Copyright (c) 2016-2018, Logan Gunthorpe | ||
6 | * Copyright (c) 2016-2017, Microsemi Corporation | ||
7 | * Copyright (c) 2017, Christoph Hellwig | ||
8 | * Copyright (c) 2018, Eideticom Inc. | ||
9 | */ | ||
10 | |||
11 | #define pr_fmt(fmt) "pci-p2pdma: " fmt | ||
12 | #include <linux/ctype.h> | ||
13 | #include <linux/pci-p2pdma.h> | ||
14 | #include <linux/module.h> | ||
15 | #include <linux/slab.h> | ||
16 | #include <linux/genalloc.h> | ||
17 | #include <linux/memremap.h> | ||
18 | #include <linux/percpu-refcount.h> | ||
19 | #include <linux/random.h> | ||
20 | #include <linux/seq_buf.h> | ||
21 | |||
22 | struct pci_p2pdma { | ||
23 | struct percpu_ref devmap_ref; | ||
24 | struct completion devmap_ref_done; | ||
25 | struct gen_pool *pool; | ||
26 | bool p2pmem_published; | ||
27 | }; | ||
28 | |||
29 | static ssize_t size_show(struct device *dev, struct device_attribute *attr, | ||
30 | char *buf) | ||
31 | { | ||
32 | struct pci_dev *pdev = to_pci_dev(dev); | ||
33 | size_t size = 0; | ||
34 | |||
35 | if (pdev->p2pdma->pool) | ||
36 | size = gen_pool_size(pdev->p2pdma->pool); | ||
37 | |||
38 | return snprintf(buf, PAGE_SIZE, "%zd\n", size); | ||
39 | } | ||
40 | static DEVICE_ATTR_RO(size); | ||
41 | |||
42 | static ssize_t available_show(struct device *dev, struct device_attribute *attr, | ||
43 | char *buf) | ||
44 | { | ||
45 | struct pci_dev *pdev = to_pci_dev(dev); | ||
46 | size_t avail = 0; | ||
47 | |||
48 | if (pdev->p2pdma->pool) | ||
49 | avail = gen_pool_avail(pdev->p2pdma->pool); | ||
50 | |||
51 | return snprintf(buf, PAGE_SIZE, "%zd\n", avail); | ||
52 | } | ||
53 | static DEVICE_ATTR_RO(available); | ||
54 | |||
55 | static ssize_t published_show(struct device *dev, struct device_attribute *attr, | ||
56 | char *buf) | ||
57 | { | ||
58 | struct pci_dev *pdev = to_pci_dev(dev); | ||
59 | |||
60 | return snprintf(buf, PAGE_SIZE, "%d\n", | ||
61 | pdev->p2pdma->p2pmem_published); | ||
62 | } | ||
63 | static DEVICE_ATTR_RO(published); | ||
64 | |||
65 | static struct attribute *p2pmem_attrs[] = { | ||
66 | &dev_attr_size.attr, | ||
67 | &dev_attr_available.attr, | ||
68 | &dev_attr_published.attr, | ||
69 | NULL, | ||
70 | }; | ||
71 | |||
72 | static const struct attribute_group p2pmem_group = { | ||
73 | .attrs = p2pmem_attrs, | ||
74 | .name = "p2pmem", | ||
75 | }; | ||
76 | |||
77 | static void pci_p2pdma_percpu_release(struct percpu_ref *ref) | ||
78 | { | ||
79 | struct pci_p2pdma *p2p = | ||
80 | container_of(ref, struct pci_p2pdma, devmap_ref); | ||
81 | |||
82 | complete_all(&p2p->devmap_ref_done); | ||
83 | } | ||
84 | |||
85 | static void pci_p2pdma_percpu_kill(void *data) | ||
86 | { | ||
87 | struct percpu_ref *ref = data; | ||
88 | |||
89 | /* | ||
90 | * pci_p2pdma_add_resource() may be called multiple times | ||
91 | * by a driver and may register the percpu_kill devm action multiple | ||
92 | * times. We only want the first action to actually kill the | ||
93 | * percpu_ref. | ||
94 | */ | ||
95 | if (percpu_ref_is_dying(ref)) | ||
96 | return; | ||
97 | |||
98 | percpu_ref_kill(ref); | ||
99 | } | ||
100 | |||
101 | static void pci_p2pdma_release(void *data) | ||
102 | { | ||
103 | struct pci_dev *pdev = data; | ||
104 | |||
105 | if (!pdev->p2pdma) | ||
106 | return; | ||
107 | |||
108 | wait_for_completion(&pdev->p2pdma->devmap_ref_done); | ||
109 | percpu_ref_exit(&pdev->p2pdma->devmap_ref); | ||
110 | |||
111 | gen_pool_destroy(pdev->p2pdma->pool); | ||
112 | sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); | ||
113 | pdev->p2pdma = NULL; | ||
114 | } | ||
115 | |||
116 | static int pci_p2pdma_setup(struct pci_dev *pdev) | ||
117 | { | ||
118 | int error = -ENOMEM; | ||
119 | struct pci_p2pdma *p2p; | ||
120 | |||
121 | p2p = devm_kzalloc(&pdev->dev, sizeof(*p2p), GFP_KERNEL); | ||
122 | if (!p2p) | ||
123 | return -ENOMEM; | ||
124 | |||
125 | p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev)); | ||
126 | if (!p2p->pool) | ||
127 | goto out; | ||
128 | |||
129 | init_completion(&p2p->devmap_ref_done); | ||
130 | error = percpu_ref_init(&p2p->devmap_ref, | ||
131 | pci_p2pdma_percpu_release, 0, GFP_KERNEL); | ||
132 | if (error) | ||
133 | goto out_pool_destroy; | ||
134 | |||
135 | error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev); | ||
136 | if (error) | ||
137 | goto out_pool_destroy; | ||
138 | |||
139 | pdev->p2pdma = p2p; | ||
140 | |||
141 | error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group); | ||
142 | if (error) | ||
143 | goto out_pool_destroy; | ||
144 | |||
145 | return 0; | ||
146 | |||
147 | out_pool_destroy: | ||
148 | pdev->p2pdma = NULL; | ||
149 | gen_pool_destroy(p2p->pool); | ||
150 | out: | ||
151 | devm_kfree(&pdev->dev, p2p); | ||
152 | return error; | ||
153 | } | ||
154 | |||
155 | /** | ||
156 | * pci_p2pdma_add_resource - add memory for use as p2p memory | ||
157 | * @pdev: the device to add the memory to | ||
158 | * @bar: PCI BAR to add | ||
159 | * @size: size of the memory to add, may be zero to use the whole BAR | ||
160 | * @offset: offset into the PCI BAR | ||
161 | * | ||
162 | * The memory will be given ZONE_DEVICE struct pages so that it may | ||
163 | * be used with any DMA request. | ||
164 | */ | ||
165 | int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, | ||
166 | u64 offset) | ||
167 | { | ||
168 | struct dev_pagemap *pgmap; | ||
169 | void *addr; | ||
170 | int error; | ||
171 | |||
172 | if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) | ||
173 | return -EINVAL; | ||
174 | |||
175 | if (offset >= pci_resource_len(pdev, bar)) | ||
176 | return -EINVAL; | ||
177 | |||
178 | if (!size) | ||
179 | size = pci_resource_len(pdev, bar) - offset; | ||
180 | |||
181 | if (size + offset > pci_resource_len(pdev, bar)) | ||
182 | return -EINVAL; | ||
183 | |||
184 | if (!pdev->p2pdma) { | ||
185 | error = pci_p2pdma_setup(pdev); | ||
186 | if (error) | ||
187 | return error; | ||
188 | } | ||
189 | |||
190 | pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL); | ||
191 | if (!pgmap) | ||
192 | return -ENOMEM; | ||
193 | |||
194 | pgmap->res.start = pci_resource_start(pdev, bar) + offset; | ||
195 | pgmap->res.end = pgmap->res.start + size - 1; | ||
196 | pgmap->res.flags = pci_resource_flags(pdev, bar); | ||
197 | pgmap->ref = &pdev->p2pdma->devmap_ref; | ||
198 | pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; | ||
199 | pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) - | ||
200 | pci_resource_start(pdev, bar); | ||
201 | |||
202 | addr = devm_memremap_pages(&pdev->dev, pgmap); | ||
203 | if (IS_ERR(addr)) { | ||
204 | error = PTR_ERR(addr); | ||
205 | goto pgmap_free; | ||
206 | } | ||
207 | |||
208 | error = gen_pool_add_virt(pdev->p2pdma->pool, (unsigned long)addr, | ||
209 | pci_bus_address(pdev, bar) + offset, | ||
210 | resource_size(&pgmap->res), dev_to_node(&pdev->dev)); | ||
211 | if (error) | ||
212 | goto pgmap_free; | ||
213 | |||
214 | error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_percpu_kill, | ||
215 | &pdev->p2pdma->devmap_ref); | ||
216 | if (error) | ||
217 | goto pgmap_free; | ||
218 | |||
219 | pci_info(pdev, "added peer-to-peer DMA memory %pR\n", | ||
220 | &pgmap->res); | ||
221 | |||
222 | return 0; | ||
223 | |||
224 | pgmap_free: | ||
225 | devm_kfree(&pdev->dev, pgmap); | ||
226 | return error; | ||
227 | } | ||
228 | EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource); | ||
229 | |||
230 | /* | ||
231 | * Note this function returns the parent PCI device with a | ||
232 | * reference taken. It is the caller's responsibility to drop | ||
233 | * the reference. | ||
234 | */ | ||
235 | static struct pci_dev *find_parent_pci_dev(struct device *dev) | ||
236 | { | ||
237 | struct device *parent; | ||
238 | |||
239 | dev = get_device(dev); | ||
240 | |||
241 | while (dev) { | ||
242 | if (dev_is_pci(dev)) | ||
243 | return to_pci_dev(dev); | ||
244 | |||
245 | parent = get_device(dev->parent); | ||
246 | put_device(dev); | ||
247 | dev = parent; | ||
248 | } | ||
249 | |||
250 | return NULL; | ||
251 | } | ||
252 | |||
253 | /* | ||
254 | * Check if a PCI bridge has its ACS redirection bits set to redirect P2P | ||
255 | * TLPs upstream via ACS. Returns 1 if the packets will be redirected | ||
256 | * upstream, 0 otherwise. | ||
257 | */ | ||
258 | static int pci_bridge_has_acs_redir(struct pci_dev *pdev) | ||
259 | { | ||
260 | int pos; | ||
261 | u16 ctrl; | ||
262 | |||
263 | pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS); | ||
264 | if (!pos) | ||
265 | return 0; | ||
266 | |||
267 | pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl); | ||
268 | |||
269 | if (ctrl & (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC)) | ||
270 | return 1; | ||
271 | |||
272 | return 0; | ||
273 | } | ||
274 | |||
275 | static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev) | ||
276 | { | ||
277 | if (!buf) | ||
278 | return; | ||
279 | |||
280 | seq_buf_printf(buf, "%s;", pci_name(pdev)); | ||
281 | } | ||
282 | |||
/*
 * Find the distance through the nearest common upstream bridge between
 * two PCI devices.
 *
 * If the two devices are the same device then 0 will be returned.
 *
 * If there are two virtual functions of the same device behind the same
 * bridge port then 2 will be returned (one step down to the PCIe switch,
 * then one step back to the same device).
 *
 * In the case where two devices are connected to the same PCIe switch, the
 * value 4 will be returned. This corresponds to the following PCI tree:
 *
 *     -+  Root Port
 *      \+ Switch Upstream Port
 *       +-+ Switch Downstream Port
 *       + \- Device A
 *       \-+ Switch Downstream Port
 *         \- Device B
 *
 * The distance is 4 because we traverse from Device A through the downstream
 * port of the switch, to the common upstream port, back up to the second
 * downstream port and then to Device B.
 *
 * Any two devices that don't have a common upstream bridge will return -1.
 * In this way devices on separate PCIe root ports will be rejected, which
 * is what we want for peer-to-peer since each PCIe root port defines a
 * separate hierarchy domain and there's no way to determine whether the root
 * complex supports forwarding between them.
 *
 * In the case where two devices are connected to different PCIe switches,
 * this function will still return a positive distance as long as both
 * switches eventually have a common upstream bridge. Note this covers
 * the case of using multiple PCIe switches to achieve a desired level of
 * fan-out from a root port. The exact distance will be a function of the
 * number of switches between Device A and Device B.
 *
 * If a bridge which has any ACS redirection bits set is in the path
 * then this function will return -2. This is so we reject any
 * cases where the TLPs are forwarded up into the root complex.
 * In this case, a list of all infringing bridge addresses will be
 * populated in acs_list (assuming it's non-null) for printk purposes.
 */
static int upstream_bridge_distance(struct pci_dev *a,
				    struct pci_dev *b,
				    struct seq_buf *acs_list)
{
	int dist_a = 0;
	int dist_b = 0;
	struct pci_dev *bb = NULL;
	int acs_cnt = 0;

	/*
	 * Note, we don't need to take references to devices returned by
	 * pci_upstream_bridge() since we hold a reference to a child
	 * device which will already hold a reference to the upstream bridge.
	 */

	while (a) {
		dist_b = 0;

		if (pci_bridge_has_acs_redir(a)) {
			seq_buf_print_bus_devfn(acs_list, a);
			acs_cnt++;
		}

		bb = b;

		while (bb) {
			if (a == bb)
				goto check_b_path_acs;

			bb = pci_upstream_bridge(bb);
			dist_b++;
		}

		a = pci_upstream_bridge(a);
		dist_a++;
	}

	return -1;

check_b_path_acs:
	bb = b;

	while (bb) {
		if (a == bb)
			break;

		if (pci_bridge_has_acs_redir(bb)) {
			seq_buf_print_bus_devfn(acs_list, bb);
			acs_cnt++;
		}

		bb = pci_upstream_bridge(bb);
	}

	if (acs_cnt)
		return -2;

	return dist_a + dist_b;
}
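The nested walk above can be modeled in plain userspace C. This is a sketch only: the `node` struct and `bridge_distance()` below are hypothetical stand-ins for `struct pci_dev` and the `pci_upstream_bridge()` chain, with the ACS bookkeeping left out:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct pci_dev: only the upstream link matters here. */
struct node {
	struct node *parent;
};

/*
 * Walk up from 'a' one level at a time; for each ancestor of 'a', walk
 * the whole chain above 'b' looking for a match. The first match found
 * is the nearest common upstream bridge, and the distance is the sum of
 * the hops taken on both sides. Returns -1 when no common ancestor
 * exists (devices in separate hierarchy domains).
 */
static int bridge_distance(struct node *a, struct node *b)
{
	int dist_a = 0;

	while (a) {
		int dist_b = 0;
		struct node *bb = b;

		while (bb) {
			if (a == bb)
				return dist_a + dist_b;
			bb = bb->parent;
			dist_b++;
		}
		a = a->parent;
		dist_a++;
	}
	return -1;
}
```

Building the tree from the comment block (one root port, one switch upstream port, two downstream ports) reproduces the documented distance of 4 between Device A and Device B.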

static int upstream_bridge_distance_warn(struct pci_dev *provider,
					 struct pci_dev *client)
{
	struct seq_buf acs_list;
	int ret;

	seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE);
	if (!acs_list.buffer)
		return -ENOMEM;

	ret = upstream_bridge_distance(provider, client, &acs_list);
	if (ret == -2) {
		pci_warn(client, "cannot be used for peer-to-peer DMA as ACS redirect is set between the client and provider (%s)\n",
			 pci_name(provider));
		/* Drop final semicolon */
		acs_list.buffer[acs_list.len - 1] = 0;
		pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n",
			 acs_list.buffer);

	} else if (ret < 0) {
		pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge\n",
			 pci_name(provider));
	}

	kfree(acs_list.buffer);

	return ret;
}

/**
 * pci_p2pdma_distance_many - determine the cumulative distance between
 *	a p2pdma provider and the clients in use
 * @provider: p2pdma provider to check against the client list
 * @clients: array of devices to check (NULL-terminated)
 * @num_clients: number of clients in the array
 * @verbose: if true, print warnings for devices when we return -1
 *
 * Returns -1 if any of the clients are not compatible (i.e., not behind
 * the same root port as the provider), otherwise returns a positive number
 * where a lower number is the preferable choice. (If one client is the
 * same as the provider, 0 will be returned, which is the best choice.)
 *
 * For now, "compatible" means the provider and the clients are all behind
 * the same PCI root port. This cuts out cases that may work but it is
 * safest for the user. Future work can expand this to whitelist root
 * complexes that can safely forward between their ports.
 */
int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
			     int num_clients, bool verbose)
{
	bool not_supported = false;
	struct pci_dev *pci_client;
	int distance = 0;
	int i, ret;

	if (num_clients == 0)
		return -1;

	for (i = 0; i < num_clients; i++) {
		pci_client = find_parent_pci_dev(clients[i]);
		if (!pci_client) {
			if (verbose)
				dev_warn(clients[i],
					 "cannot be used for peer-to-peer DMA as it is not a PCI device\n");
			return -1;
		}

		if (verbose)
			ret = upstream_bridge_distance_warn(provider,
							    pci_client);
		else
			ret = upstream_bridge_distance(provider, pci_client,
						       NULL);

		pci_dev_put(pci_client);

		if (ret < 0)
			not_supported = true;

		if (not_supported && !verbose)
			break;

		distance += ret;
	}

	if (not_supported)
		return -1;

	return distance;
}
EXPORT_SYMBOL_GPL(pci_p2pdma_distance_many);

/**
 * pci_has_p2pmem - check if a given PCI device has published any p2pmem
 * @pdev: PCI device to check
 */
bool pci_has_p2pmem(struct pci_dev *pdev)
{
	return pdev->p2pdma && pdev->p2pdma->p2pmem_published;
}
EXPORT_SYMBOL_GPL(pci_has_p2pmem);

/**
 * pci_p2pmem_find_many - find a peer-to-peer DMA memory device compatible with
 *	the specified list of clients and shortest distance (as determined
 *	by pci_p2pdma_distance_many())
 * @clients: array of devices to check (NULL-terminated)
 * @num_clients: number of client devices in the list
 *
 * If multiple devices are behind the same switch, the one "closest" to the
 * client devices in use will be chosen first. (So if one of the providers is
 * the same as one of the clients, that provider will be used ahead of any
 * other providers that are unrelated.) If multiple providers are an equal
 * distance away, one will be chosen at random.
 *
 * Returns a pointer to the PCI device with a reference taken (use pci_dev_put
 * to return the reference) or NULL if no compatible device is found. The
 * found provider will also be assigned to the client list.
 */
struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients)
{
	struct pci_dev *pdev = NULL;
	int distance;
	int closest_distance = INT_MAX;
	struct pci_dev **closest_pdevs;
	int dev_cnt = 0;
	const int max_devs = PAGE_SIZE / sizeof(*closest_pdevs);
	int i;

	closest_pdevs = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!closest_pdevs)
		return NULL;

	while ((pdev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, pdev))) {
		if (!pci_has_p2pmem(pdev))
			continue;

		distance = pci_p2pdma_distance_many(pdev, clients,
						    num_clients, false);
		if (distance < 0 || distance > closest_distance)
			continue;

		if (distance == closest_distance && dev_cnt >= max_devs)
			continue;

		if (distance < closest_distance) {
			for (i = 0; i < dev_cnt; i++)
				pci_dev_put(closest_pdevs[i]);

			dev_cnt = 0;
			closest_distance = distance;
		}

		closest_pdevs[dev_cnt++] = pci_dev_get(pdev);
	}

	if (dev_cnt)
		pdev = pci_dev_get(closest_pdevs[prandom_u32_max(dev_cnt)]);

	for (i = 0; i < dev_cnt; i++)
		pci_dev_put(closest_pdevs[i]);

	kfree(closest_pdevs);
	return pdev;
}
EXPORT_SYMBOL_GPL(pci_p2pmem_find_many);

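The selection policy above (keep every candidate tied for the smallest non-negative distance, then pick one of them at random) can be sketched in userspace C. `pick_closest()` is a hypothetical helper; the random pick the kernel makes with `prandom_u32_max()` is replaced by a caller-supplied `choice` index so the behavior is testable:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Given per-provider distances (negative = incompatible), collect the
 * indices tied for the smallest distance into 'out' (capacity n), then
 * store the chosen index in out[0]. Returns -1 when no provider is
 * compatible, 0 otherwise.
 */
static int pick_closest(const int *dist, size_t n, size_t choice,
			size_t *out)
{
	int best = -1;
	size_t cnt = 0, i;

	for (i = 0; i < n; i++) {
		if (dist[i] < 0)
			continue;	/* incompatible provider: skip */
		if (best < 0 || dist[i] < best) {
			best = dist[i];	/* new best: restart candidate set */
			cnt = 0;
		}
		if (dist[i] == best)
			out[cnt++] = i;
	}
	if (!cnt)
		return -1;
	*out = out[choice % cnt];
	return 0;
}
```

With distances `{4, -1, 2, 2}` the candidate set is providers 2 and 3, mirroring how pci_p2pmem_find_many() resets `closest_pdevs` whenever a strictly closer provider appears.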
/**
 * pci_alloc_p2pmem - allocate peer-to-peer DMA memory
 * @pdev: the device to allocate memory from
 * @size: number of bytes to allocate
 *
 * Returns the allocated memory or NULL on error.
 */
void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size)
{
	void *ret;

	if (unlikely(!pdev->p2pdma))
		return NULL;

	if (unlikely(!percpu_ref_tryget_live(&pdev->p2pdma->devmap_ref)))
		return NULL;

	ret = (void *)gen_pool_alloc(pdev->p2pdma->pool, size);

	if (unlikely(!ret))
		percpu_ref_put(&pdev->p2pdma->devmap_ref);

	return ret;
}
EXPORT_SYMBOL_GPL(pci_alloc_p2pmem);

/**
 * pci_free_p2pmem - free peer-to-peer DMA memory
 * @pdev: the device the memory was allocated from
 * @addr: address of the memory that was allocated
 * @size: number of bytes that was allocated
 */
void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size)
{
	gen_pool_free(pdev->p2pdma->pool, (uintptr_t)addr, size);
	percpu_ref_put(&pdev->p2pdma->devmap_ref);
}
EXPORT_SYMBOL_GPL(pci_free_p2pmem);

/**
 * pci_p2pmem_virt_to_bus - return the PCI bus address for a given virtual
 *	address obtained with pci_alloc_p2pmem()
 * @pdev: the device the memory was allocated from
 * @addr: address of the memory that was allocated
 */
pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, void *addr)
{
	if (!addr)
		return 0;
	if (!pdev->p2pdma)
		return 0;

	/*
	 * Note: when we added the memory to the pool we used the PCI
	 * bus address as the physical address. So gen_pool_virt_to_phys()
	 * actually returns the bus address despite the misleading name.
	 */
	return gen_pool_virt_to_phys(pdev->p2pdma->pool, (unsigned long)addr);
}
EXPORT_SYMBOL_GPL(pci_p2pmem_virt_to_bus);

/**
 * pci_p2pmem_alloc_sgl - allocate peer-to-peer DMA memory in a scatterlist
 * @pdev: the device to allocate memory from
 * @nents: returns the number of SG entries in the list
 * @length: number of bytes to allocate
 *
 * Returns the allocated scatterlist or NULL on error.
 */
struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev,
					 unsigned int *nents, u32 length)
{
	struct scatterlist *sg;
	void *addr;

	sg = kzalloc(sizeof(*sg), GFP_KERNEL);
	if (!sg)
		return NULL;

	sg_init_table(sg, 1);

	addr = pci_alloc_p2pmem(pdev, length);
	if (!addr)
		goto out_free_sg;

	sg_set_buf(sg, addr, length);
	*nents = 1;
	return sg;

out_free_sg:
	kfree(sg);
	return NULL;
}
EXPORT_SYMBOL_GPL(pci_p2pmem_alloc_sgl);

/**
 * pci_p2pmem_free_sgl - free a scatterlist allocated by pci_p2pmem_alloc_sgl()
 * @pdev: the device the memory was allocated from
 * @sgl: the allocated scatterlist
 */
void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl)
{
	struct scatterlist *sg;
	int count;

	for_each_sg(sgl, sg, INT_MAX, count) {
		if (!sg)
			break;

		pci_free_p2pmem(pdev, sg_virt(sg), sg->length);
	}
	kfree(sgl);
}
EXPORT_SYMBOL_GPL(pci_p2pmem_free_sgl);

/**
 * pci_p2pmem_publish - publish the peer-to-peer DMA memory for use by
 *	other devices with pci_p2pmem_find()
 * @pdev: the device with peer-to-peer DMA memory to publish
 * @publish: set to true to publish the memory, false to unpublish it
 *
 * Published memory can be used by other PCI device drivers for
 * peer-to-peer DMA operations. Non-published memory is reserved for
 * exclusive use of the device driver that registers the peer-to-peer
 * memory.
 */
void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
{
	if (pdev->p2pdma)
		pdev->p2pdma->p2pmem_published = publish;
}
EXPORT_SYMBOL_GPL(pci_p2pmem_publish);

/**
 * pci_p2pdma_map_sg - map a PCI peer-to-peer scatterlist for DMA
 * @dev: device doing the DMA request
 * @sg: scatter list to map
 * @nents: elements in the scatterlist
 * @dir: DMA direction
 *
 * Scatterlists mapped with this function should not be unmapped in any way.
 *
 * Returns the number of SG entries mapped or 0 on error.
 */
int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
		      enum dma_data_direction dir)
{
	struct dev_pagemap *pgmap;
	struct scatterlist *s;
	phys_addr_t paddr;
	int i;

	/*
	 * p2pdma mappings are not compatible with devices that use
	 * dma_virt_ops. If the upper layers do the right thing
	 * this should never happen because it will be prevented
	 * by the check in pci_p2pdma_add_client().
	 */
	if (WARN_ON_ONCE(IS_ENABLED(CONFIG_DMA_VIRT_OPS) &&
			 dev->dma_ops == &dma_virt_ops))
		return 0;

	for_each_sg(sg, s, nents, i) {
		pgmap = sg_page(s)->pgmap;
		paddr = sg_phys(s);

		s->dma_address = paddr - pgmap->pci_p2pdma_bus_offset;
		sg_dma_len(s) = s->length;
	}

	return nents;
}
EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg);

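The per-entry "mapping" above is pure arithmetic: a p2pdma page's physical address and its PCI bus address differ by a constant per-pagemap offset, which is why no IOMMU call appears in the loop. A minimal sketch, with hypothetical values standing in for `sg_phys()` and `pgmap->pci_p2pdma_bus_offset`:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Translate a p2pdma physical address to the PCI bus address by
 * subtracting the pagemap's fixed bus offset, exactly as the loop in
 * pci_p2pdma_map_sg() fills in s->dma_address.
 */
static uint64_t p2p_bus_addr(uint64_t paddr, uint64_t bus_offset)
{
	return paddr - bus_offset;
}
```

A zero offset (bus address equal to physical address) is simply the identity case of the same computation.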
/**
 * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store
 *	to enable p2pdma
 * @page: contents of the value to be stored
 * @p2p_dev: returns the PCI device that was selected to be used
 *	(if one was specified in the stored value)
 * @use_p2pdma: returns whether to enable p2pdma or not
 *
 * Parses an attribute value to decide whether to enable p2pdma.
 * The value can select a PCI device (using its full BDF device
 * name) or a boolean (in any format strtobool() accepts). A false
 * value disables p2pdma, a true value tells the caller to find a
 * compatible device automatically, and naming a PCI device tells the
 * caller to use that specific provider.
 *
 * pci_p2pdma_enable_show() should be used as the show operation for
 * the attribute.
 *
 * Returns 0 on success
 */
int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
			    bool *use_p2pdma)
{
	struct device *dev;

	dev = bus_find_device_by_name(&pci_bus_type, NULL, page);
	if (dev) {
		*use_p2pdma = true;
		*p2p_dev = to_pci_dev(dev);

		if (!pci_has_p2pmem(*p2p_dev)) {
			pci_err(*p2p_dev,
				"PCI device has no peer-to-peer memory: %s\n",
				page);
			pci_dev_put(*p2p_dev);
			return -ENODEV;
		}

		return 0;
	} else if ((page[0] == '0' || page[0] == '1') && !iscntrl(page[1])) {
		/*
		 * If the user enters a PCI device that doesn't exist
		 * like "0000:01:00.1", we don't want strtobool to think
		 * it's a '0' when it's clearly not what the user wanted.
		 * So we require 0's and 1's to be exactly one character.
		 */
	} else if (!strtobool(page, use_p2pdma)) {
		return 0;
	}

	pr_err("No such PCI device: %.*s\n", (int)strcspn(page, "\n"), page);
	return -ENODEV;
}
EXPORT_SYMBOL_GPL(pci_p2pdma_enable_store);

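The "exactly one character" rule above can be modeled in userspace. `parse_enable()` is a hypothetical sketch: the device lookup is not modeled (it just fails), and the `iscntrl()` check is approximated by accepting only a terminating NUL or newline after a bare '0'/'1':

```c
#include <assert.h>
#include <string.h>

/*
 * Model of the boolean branch of pci_p2pdma_enable_store(): a bare
 * "0"/"1" (optionally newline-terminated) toggles the feature, while
 * "0000:01:00.1"-style strings that failed the device lookup must NOT
 * be silently parsed as the boolean '0'.
 */
static int parse_enable(const char *page, int *use_p2pdma)
{
	if ((page[0] == '0' || page[0] == '1') &&
	    !(page[1] == '\0' || page[1] == '\n'))
		return -1;	/* looks like a (mistyped) device name */

	if (page[0] == '0' || page[0] == '1') {
		*use_p2pdma = (page[0] == '1');
		return 0;
	}
	return -1;	/* device lookup not modeled here */
}
```

This captures why the kernel checks `!iscntrl(page[1])` before falling back to strtobool(): without it, a typoed BDF beginning with '0' would silently disable p2pdma.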
/**
 * pci_p2pdma_enable_show - show a configfs/sysfs attribute indicating
 *	whether p2pdma is enabled
 * @page: buffer to print the value into
 * @p2p_dev: the selected p2p device (NULL if no device is selected)
 * @use_p2pdma: whether p2pdma has been enabled
 *
 * Attributes that use pci_p2pdma_enable_store() should use this function
 * to show the value of the attribute.
 *
 * Returns the number of characters printed.
 */
ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
			       bool use_p2pdma)
{
	if (!use_p2pdma)
		return sprintf(page, "0\n");

	if (!p2p_dev)
		return sprintf(page, "1\n");

	return sprintf(page, "%s\n", pci_name(p2p_dev));
}
EXPORT_SYMBOL_GPL(pci_p2pdma_enable_show);
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
index c2ab57705043..2a4aa6468579 100644
--- a/drivers/pci/pci-acpi.c
+++ b/drivers/pci/pci-acpi.c
@@ -519,6 +519,46 @@ static pci_power_t acpi_pci_choose_state(struct pci_dev *pdev)
 	return PCI_POWER_ERROR;
 }
 
+static struct acpi_device *acpi_pci_find_companion(struct device *dev);
+
+static bool acpi_pci_bridge_d3(struct pci_dev *dev)
+{
+	const struct fwnode_handle *fwnode;
+	struct acpi_device *adev;
+	struct pci_dev *root;
+	u8 val;
+
+	if (!dev->is_hotplug_bridge)
+		return false;
+
+	/*
+	 * Look for a special _DSD property for the root port and if it
+	 * is set we know the hierarchy behind it supports D3 just fine.
+	 */
+	root = pci_find_pcie_root_port(dev);
+	if (!root)
+		return false;
+
+	adev = ACPI_COMPANION(&root->dev);
+	if (root == dev) {
+		/*
+		 * It is possible that the ACPI companion is not yet bound
+		 * for the root port so look it up manually here.
+		 */
+		if (!adev && !pci_dev_is_added(root))
+			adev = acpi_pci_find_companion(&root->dev);
+	}
+
+	if (!adev)
+		return false;
+
+	fwnode = acpi_fwnode_handle(adev);
+	if (fwnode_property_read_u8(fwnode, "HotPlugSupportInD3", &val))
+		return false;
+
+	return val == 1;
+}
+
 static bool acpi_pci_power_manageable(struct pci_dev *dev)
 {
 	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
@@ -548,6 +588,7 @@ static int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
 			error = -EBUSY;
 			break;
 		}
+		/* Fall through */
 	case PCI_D0:
 	case PCI_D1:
 	case PCI_D2:
@@ -635,6 +676,7 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
 }
 
 static const struct pci_platform_pm_ops acpi_pci_platform_pm = {
+	.bridge_d3 = acpi_pci_bridge_d3,
 	.is_manageable = acpi_pci_power_manageable,
 	.set_state = acpi_pci_set_power_state,
 	.get_state = acpi_pci_get_power_state,
@@ -751,10 +793,15 @@ static void pci_acpi_setup(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct acpi_device *adev = ACPI_COMPANION(dev);
+	int node;
 
 	if (!adev)
 		return;
 
+	node = acpi_get_node(adev->handle);
+	if (node != NUMA_NO_NODE)
+		set_dev_node(dev, node);
+
 	pci_acpi_optimize_delay(pci_dev, adev->handle);
 
 	pci_acpi_add_pm_notifier(adev, pci_dev);
@@ -762,19 +809,33 @@ static void pci_acpi_setup(struct device *dev)
 		return;
 
 	device_set_wakeup_capable(dev, true);
+	/*
+	 * For bridges that can do D3 we enable wake automatically (as
+	 * we do for the power management itself in that case). The
+	 * reason is that the bridge may have additional methods such as
+	 * _DSW that need to be called.
+	 */
+	if (pci_dev->bridge_d3)
+		device_wakeup_enable(dev);
+
 	acpi_pci_wakeup(pci_dev, false);
 }
 
 static void pci_acpi_cleanup(struct device *dev)
 {
 	struct acpi_device *adev = ACPI_COMPANION(dev);
+	struct pci_dev *pci_dev = to_pci_dev(dev);
 
 	if (!adev)
 		return;
 
 	pci_acpi_remove_pm_notifier(adev);
-	if (adev->wakeup.flags.valid)
+	if (adev->wakeup.flags.valid) {
+		if (pci_dev->bridge_d3)
+			device_wakeup_disable(dev);
+
 		device_set_wakeup_capable(dev, false);
+	}
 }
 
 static bool pci_acpi_bus_match(struct device *dev)
diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
new file mode 100644
index 000000000000..129738362d90
--- /dev/null
+++ b/drivers/pci/pci-bridge-emul.c
@@ -0,0 +1,408 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2018 Marvell
 *
 * Author: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
 *
 * This file helps PCI controller drivers implement a fake root port
 * PCI bridge when the HW doesn't provide such a root port PCI
 * bridge.
 *
 * It emulates a PCI bridge by providing a fake PCI configuration
 * space (and optionally a PCIe capability configuration space) in
 * memory. By default the read/write operations simply read and update
 * this fake configuration space in memory. However, PCI controller
 * drivers can provide through the 'struct pci_bridge_emul_ops'
 * structure a set of operations to override or complement this
 * default behavior.
 */

#include <linux/pci.h>
#include "pci-bridge-emul.h"

#define PCI_BRIDGE_CONF_END	PCI_STD_HEADER_SIZEOF
#define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
#define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)

/*
 * Initialize a pci_bridge_emul structure to represent a fake PCI
 * bridge configuration space. The caller needs to have initialized
 * the PCI configuration space with whatever values make sense
 * (typically at least vendor, device, revision), the ->ops pointer,
 * and optionally ->data and ->has_pcie.
 */
void pci_bridge_emul_init(struct pci_bridge_emul *bridge)
{
	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
	bridge->conf.cache_line_size = 0x10;
	bridge->conf.status = PCI_STATUS_CAP_LIST;

	if (bridge->has_pcie) {
		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
		/* Set PCIe v2, root port, slot support */
		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
			PCI_EXP_FLAGS_SLOT;
	}
}

struct pci_bridge_reg_behavior {
	/* Read-only bits */
	u32 ro;

	/* Read-write bits */
	u32 rw;

	/* Write-1-to-clear bits */
	u32 w1c;

	/* Reserved bits (hardwired to 0) */
	u32 rsvd;
};

static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
	[PCI_VENDOR_ID / 4] = { .ro = ~0 },
	[PCI_COMMAND / 4] = {
		.rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
		       PCI_COMMAND_MASTER | PCI_COMMAND_PARITY |
		       PCI_COMMAND_SERR),
		.ro = ((PCI_COMMAND_SPECIAL | PCI_COMMAND_INVALIDATE |
			PCI_COMMAND_VGA_PALETTE | PCI_COMMAND_WAIT |
			PCI_COMMAND_FAST_BACK) |
		       (PCI_STATUS_CAP_LIST | PCI_STATUS_66MHZ |
			PCI_STATUS_FAST_BACK | PCI_STATUS_DEVSEL_MASK) << 16),
		.rsvd = GENMASK(15, 10) | ((BIT(6) | GENMASK(3, 0)) << 16),
		.w1c = (PCI_STATUS_PARITY |
			PCI_STATUS_SIG_TARGET_ABORT |
			PCI_STATUS_REC_TARGET_ABORT |
			PCI_STATUS_REC_MASTER_ABORT |
			PCI_STATUS_SIG_SYSTEM_ERROR |
			PCI_STATUS_DETECTED_PARITY) << 16,
	},
	[PCI_CLASS_REVISION / 4] = { .ro = ~0 },

	/*
	 * Cache Line Size register: implemented as read-only, as we do
	 * not pretend to implement "Memory Write and Invalidate"
	 * transactions.
	 *
	 * Latency Timer Register: implemented as read-only, as "A
	 * bridge that is not capable of a burst transfer of more than
	 * two data phases on its primary interface is permitted to
	 * hardwire the Latency Timer to a value of 16 or less"
	 *
	 * Header Type: always read-only
	 *
	 * BIST register: implemented as read-only, as "A bridge that
	 * does not support BIST must implement this register as a
	 * read-only register that returns 0 when read"
	 */
	[PCI_CACHE_LINE_SIZE / 4] = { .ro = ~0 },

	/*
	 * Base Address registers not used must be implemented as
	 * read-only registers that return 0 when read.
	 */
	[PCI_BASE_ADDRESS_0 / 4] = { .ro = ~0 },
	[PCI_BASE_ADDRESS_1 / 4] = { .ro = ~0 },

	[PCI_PRIMARY_BUS / 4] = {
		/* Primary, secondary and subordinate bus are RW */
		.rw = GENMASK(24, 0),
		/* Secondary latency is read-only */
		.ro = GENMASK(31, 24),
	},

	[PCI_IO_BASE / 4] = {
		/* The high four bits of I/O base/limit are RW */
		.rw = (GENMASK(15, 12) | GENMASK(7, 4)),

		/* The low four bits of I/O base/limit are RO */
		.ro = (((PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
			 PCI_STATUS_DEVSEL_MASK) << 16) |
		       GENMASK(11, 8) | GENMASK(3, 0)),

		.w1c = (PCI_STATUS_PARITY |
			PCI_STATUS_SIG_TARGET_ABORT |
			PCI_STATUS_REC_TARGET_ABORT |
			PCI_STATUS_REC_MASTER_ABORT |
			PCI_STATUS_SIG_SYSTEM_ERROR |
			PCI_STATUS_DETECTED_PARITY) << 16,

		.rsvd = ((BIT(6) | GENMASK(4, 0)) << 16),
	},

	[PCI_MEMORY_BASE / 4] = {
		/* The high 12 bits of mem base/limit are RW */
		.rw = GENMASK(31, 20) | GENMASK(15, 4),

		/* The low four bits of mem base/limit are RO */
		.ro = GENMASK(19, 16) | GENMASK(3, 0),
	},

	[PCI_PREF_MEMORY_BASE / 4] = {
		/* The high 12 bits of pref mem base/limit are RW */
		.rw = GENMASK(31, 20) | GENMASK(15, 4),

		/* The low four bits of pref mem base/limit are RO */
		.ro = GENMASK(19, 16) | GENMASK(3, 0),
	},

	[PCI_PREF_BASE_UPPER32 / 4] = {
		.rw = ~0,
	},

	[PCI_PREF_LIMIT_UPPER32 / 4] = {
		.rw = ~0,
	},

	[PCI_IO_BASE_UPPER16 / 4] = {
		.rw = ~0,
	},

	[PCI_CAPABILITY_LIST / 4] = {
		.ro = GENMASK(7, 0),
		.rsvd = GENMASK(31, 8),
	},

	[PCI_ROM_ADDRESS1 / 4] = {
		.rw = GENMASK(31, 11) | BIT(0),
		.rsvd = GENMASK(10, 1),
	},

	/*
	 * Interrupt line (bits 7:0) are RW, interrupt pin (bits 15:8)
	 * are RO, and bridge control (31:16) are a mix of RW, RO,
	 * reserved and W1C bits
	 */
	[PCI_INTERRUPT_LINE / 4] = {
		/* Interrupt line is RW */
		.rw = (GENMASK(7, 0) |
		       ((PCI_BRIDGE_CTL_PARITY |
			 PCI_BRIDGE_CTL_SERR |
			 PCI_BRIDGE_CTL_ISA |
			 PCI_BRIDGE_CTL_VGA |
			 PCI_BRIDGE_CTL_MASTER_ABORT |
			 PCI_BRIDGE_CTL_BUS_RESET |
			 BIT(8) | BIT(9) | BIT(11)) << 16)),

		/* Interrupt pin is RO */
		.ro = (GENMASK(15, 8) | ((PCI_BRIDGE_CTL_FAST_BACK) << 16)),

		.w1c = BIT(10) << 16,

		.rsvd = (GENMASK(15, 12) | BIT(4)) << 16,
	},
};

199 | static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { | ||
200 | [PCI_CAP_LIST_ID / 4] = { | ||
201 | /* | ||
202 | * Capability ID, Next Capability Pointer and | ||
203 | * Capabilities register are all read-only. | ||
204 | */ | ||
205 | .ro = ~0, | ||
206 | }, | ||
207 | |||
208 | [PCI_EXP_DEVCAP / 4] = { | ||
209 | .ro = ~0, | ||
210 | }, | ||
211 | |||
212 | [PCI_EXP_DEVCTL / 4] = { | ||
213 | /* Device control register is RW */ | ||
214 | .rw = GENMASK(15, 0), | ||
215 | |||
216 | /* | ||
217 | * Device status register has 4 bits W1C, then 2 bits | ||
218 | * RO, the rest is reserved | ||
219 | */ | ||
220 | .w1c = GENMASK(19, 16), | ||
221 | .ro = GENMASK(21, 20), | ||
222 | .rsvd = GENMASK(31, 22), | ||
223 | }, | ||
224 | |||
225 | [PCI_EXP_LNKCAP / 4] = { | ||
226 | /* All bits are RO, except bit 23 which is reserved */ | ||
227 | .ro = lower_32_bits(~BIT(23)), | ||
228 | .rsvd = BIT(23), | ||
229 | }, | ||
230 | |||
231 | [PCI_EXP_LNKCTL / 4] = { | ||
232 | /* | ||
233 | * Link control has bits [1:0] and [11:3] RW, the | ||
234 | * other bits are reserved. | ||
235 | * Link status has bits [13:0] RO, and bits [15:14] | ||
236 | * W1C. | ||
237 | */ | ||
238 | .rw = GENMASK(11, 3) | GENMASK(1, 0), | ||
239 | .ro = GENMASK(13, 0) << 16, | ||
240 | .w1c = GENMASK(15, 14) << 16, | ||
241 | .rsvd = GENMASK(15, 12) | BIT(2), | ||
242 | }, | ||
243 | |||
244 | [PCI_EXP_SLTCAP / 4] = { | ||
245 | .ro = ~0, | ||
246 | }, | ||
247 | |||
248 | [PCI_EXP_SLTCTL / 4] = { | ||
249 | /* | ||
250 | * Slot control has bits [12:0] RW, the rest is | ||
251 | * reserved. | ||
252 | * | ||
253 | * Slot status has a mix of W1C and RO bits, as well | ||
254 | * as reserved bits. | ||
255 | */ | ||
256 | .rw = GENMASK(12, 0), | ||
257 | .w1c = (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | | ||
258 | PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC | | ||
259 | PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16, | ||
260 | .ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS | | ||
261 | PCI_EXP_SLTSTA_EIS) << 16, | ||
262 | .rsvd = GENMASK(15, 13) | (GENMASK(15, 9) << 16), | ||
263 | }, | ||
264 | |||
265 | [PCI_EXP_RTCTL / 4] = { | ||
266 | /* | ||
267 | * Root control has bits [4:0] RW, the rest is | ||
268 | * reserved. | ||
269 | * | ||
270 | * Root status has bit 0 RO, the rest is reserved. | ||
271 | */ | ||
272 | .rw = (PCI_EXP_RTCTL_SECEE | PCI_EXP_RTCTL_SENFEE | | ||
273 | PCI_EXP_RTCTL_SEFEE | PCI_EXP_RTCTL_PMEIE | | ||
274 | PCI_EXP_RTCTL_CRSSVE), | ||
275 | .ro = PCI_EXP_RTCAP_CRSVIS << 16, | ||
276 | .rsvd = GENMASK(15, 5) | (GENMASK(15, 1) << 16), | ||
277 | }, | ||
278 | |||
279 | [PCI_EXP_RTSTA / 4] = { | ||
280 | .ro = GENMASK(15, 0) | PCI_EXP_RTSTA_PENDING, | ||
281 | .w1c = PCI_EXP_RTSTA_PME, | ||
282 | .rsvd = GENMASK(31, 18), | ||
283 | }, | ||
284 | }; | ||
285 | |||
286 | /* | ||
287 | * Should be called by the PCI controller driver when reading the PCI | ||
288 | * configuration space of the fake bridge. It will call back the | ||
289 | * ->ops->read_base or ->ops->read_pcie operations. | ||
290 | */ | ||
291 | int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, | ||
292 | int size, u32 *value) | ||
293 | { | ||
294 | int ret; | ||
295 | int reg = where & ~3; | ||
296 | pci_bridge_emul_read_status_t (*read_op)(struct pci_bridge_emul *bridge, | ||
297 | int reg, u32 *value); | ||
298 | u32 *cfgspace; | ||
299 | const struct pci_bridge_reg_behavior *behavior; | ||
300 | |||
301 | if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) { | ||
302 | *value = 0; | ||
303 | return PCIBIOS_SUCCESSFUL; | ||
304 | } | ||
305 | |||
306 | if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END) { | ||
307 | *value = 0; | ||
308 | return PCIBIOS_SUCCESSFUL; | ||
309 | } | ||
310 | |||
311 | if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) { | ||
312 | reg -= PCI_CAP_PCIE_START; | ||
313 | read_op = bridge->ops->read_pcie; | ||
314 | cfgspace = (u32 *) &bridge->pcie_conf; | ||
315 | behavior = pcie_cap_regs_behavior; | ||
316 | } else { | ||
317 | read_op = bridge->ops->read_base; | ||
318 | cfgspace = (u32 *) &bridge->conf; | ||
319 | behavior = pci_regs_behavior; | ||
320 | } | ||
321 | |||
322 | if (read_op) | ||
323 | ret = read_op(bridge, reg, value); | ||
324 | else | ||
325 | ret = PCI_BRIDGE_EMUL_NOT_HANDLED; | ||
326 | |||
327 | if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED) | ||
328 | *value = cfgspace[reg / 4]; | ||
329 | |||
330 | /* | ||
331 | * Make sure we never return any reserved bit with a value | ||
332 | * different from 0. | ||
333 | */ | ||
334 | *value &= ~behavior[reg / 4].rsvd; | ||
335 | |||
336 | if (size == 1) | ||
337 | *value = (*value >> (8 * (where & 3))) & 0xff; | ||
338 | else if (size == 2) | ||
339 | *value = (*value >> (8 * (where & 3))) & 0xffff; | ||
340 | else if (size != 4) | ||
341 | return PCIBIOS_BAD_REGISTER_NUMBER; | ||
342 | |||
343 | return PCIBIOS_SUCCESSFUL; | ||
344 | } | ||
345 | |||
346 | /* | ||
347 | * Should be called by the PCI controller driver when writing the PCI | ||
348 | * configuration space of the fake bridge. It will call back the | ||
349 | * ->ops->write_base or ->ops->write_pcie operations. | ||
350 | */ | ||
351 | int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, | ||
352 | int size, u32 value) | ||
353 | { | ||
354 | int reg = where & ~3; | ||
355 | int ret, shift; | ||
356 | u32 mask, old, new; | ||
356 | void (*write_op)(struct pci_bridge_emul *bridge, int reg, | ||
357 | u32 old, u32 new, u32 mask); | ||
358 | u32 *cfgspace; | ||
359 | const struct pci_bridge_reg_behavior *behavior; | ||
360 | |||
361 | if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) | ||
362 | return PCIBIOS_SUCCESSFUL; | ||
363 | |||
364 | if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END) | ||
365 | return PCIBIOS_SUCCESSFUL; | ||
366 | |||
367 | shift = (where & 0x3) * 8; | ||
368 | |||
369 | if (size == 4) | ||
370 | mask = 0xffffffff; | ||
371 | else if (size == 2) | ||
372 | mask = 0xffff << shift; | ||
373 | else if (size == 1) | ||
374 | mask = 0xff << shift; | ||
375 | else | ||
376 | return PCIBIOS_BAD_REGISTER_NUMBER; | ||
377 | |||
378 | ret = pci_bridge_emul_conf_read(bridge, reg, 4, &old); | ||
379 | if (ret != PCIBIOS_SUCCESSFUL) | ||
380 | return ret; | ||
381 | |||
382 | if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) { | ||
383 | reg -= PCI_CAP_PCIE_START; | ||
384 | write_op = bridge->ops->write_pcie; | ||
385 | cfgspace = (u32 *) &bridge->pcie_conf; | ||
386 | behavior = pcie_cap_regs_behavior; | ||
387 | } else { | ||
388 | write_op = bridge->ops->write_base; | ||
389 | cfgspace = (u32 *) &bridge->conf; | ||
390 | behavior = pci_regs_behavior; | ||
391 | } | ||
392 | |||
393 | /* Keep all bits, except the RW bits */ | ||
394 | new = old & (~mask | ~behavior[reg / 4].rw); | ||
395 | |||
396 | /* Update the value of the RW bits */ | ||
397 | new |= (value << shift) & (behavior[reg / 4].rw & mask); | ||
398 | |||
399 | /* Clear the W1C bits */ | ||
400 | new &= ~((value << shift) & (behavior[reg / 4].w1c & mask)); | ||
401 | |||
402 | cfgspace[reg / 4] = new; | ||
403 | |||
404 | if (write_op) | ||
405 | write_op(bridge, reg, old, new, mask); | ||
406 | |||
407 | return PCIBIOS_SUCCESSFUL; | ||
408 | } | ||
diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h new file mode 100644 index 000000000000..9d510ccf738b --- /dev/null +++ b/drivers/pci/pci-bridge-emul.h | |||
@@ -0,0 +1,124 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | #ifndef __PCI_BRIDGE_EMUL_H__ | ||
3 | #define __PCI_BRIDGE_EMUL_H__ | ||
4 | |||
5 | #include <linux/kernel.h> | ||
6 | |||
7 | /* PCI configuration space of a PCI-to-PCI bridge. */ | ||
8 | struct pci_bridge_emul_conf { | ||
9 | u16 vendor; | ||
10 | u16 device; | ||
11 | u16 command; | ||
12 | u16 status; | ||
13 | u32 class_revision; | ||
14 | u8 cache_line_size; | ||
15 | u8 latency_timer; | ||
16 | u8 header_type; | ||
17 | u8 bist; | ||
18 | u32 bar[2]; | ||
19 | u8 primary_bus; | ||
20 | u8 secondary_bus; | ||
21 | u8 subordinate_bus; | ||
22 | u8 secondary_latency_timer; | ||
23 | u8 iobase; | ||
24 | u8 iolimit; | ||
25 | u16 secondary_status; | ||
26 | u16 membase; | ||
27 | u16 memlimit; | ||
28 | u16 pref_mem_base; | ||
29 | u16 pref_mem_limit; | ||
30 | u32 prefbaseupper; | ||
31 | u32 preflimitupper; | ||
32 | u16 iobaseupper; | ||
33 | u16 iolimitupper; | ||
34 | u8 capabilities_pointer; | ||
35 | u8 reserve[3]; | ||
36 | u32 romaddr; | ||
37 | u8 intline; | ||
38 | u8 intpin; | ||
39 | u16 bridgectrl; | ||
40 | }; | ||
41 | |||
42 | /* PCI configuration space of the PCIe capabilities */ | ||
43 | struct pci_bridge_emul_pcie_conf { | ||
44 | u8 cap_id; | ||
45 | u8 next; | ||
46 | u16 cap; | ||
47 | u32 devcap; | ||
48 | u16 devctl; | ||
49 | u16 devsta; | ||
50 | u32 lnkcap; | ||
51 | u16 lnkctl; | ||
52 | u16 lnksta; | ||
53 | u32 slotcap; | ||
54 | u16 slotctl; | ||
55 | u16 slotsta; | ||
56 | u16 rootctl; | ||
57 | u16 rsvd; | ||
58 | u32 rootsta; | ||
59 | u32 devcap2; | ||
60 | u16 devctl2; | ||
61 | u16 devsta2; | ||
62 | u32 lnkcap2; | ||
63 | u16 lnkctl2; | ||
64 | u16 lnksta2; | ||
65 | u32 slotcap2; | ||
66 | u16 slotctl2; | ||
67 | u16 slotsta2; | ||
68 | }; | ||
69 | |||
70 | struct pci_bridge_emul; | ||
71 | |||
72 | typedef enum { PCI_BRIDGE_EMUL_HANDLED, | ||
73 | PCI_BRIDGE_EMUL_NOT_HANDLED } pci_bridge_emul_read_status_t; | ||
74 | |||
75 | struct pci_bridge_emul_ops { | ||
76 | /* | ||
77 | * Called when reading from the regular PCI bridge | ||
78 | * configuration space. Return PCI_BRIDGE_EMUL_HANDLED when the | ||
79 | * operation has handled the read operation and filled in the | ||
80 | * *value, or PCI_BRIDGE_EMUL_NOT_HANDLED when the read should | ||
81 | * be emulated by the common code by reading from the | ||
82 | * in-memory copy of the configuration space. | ||
83 | */ | ||
84 | pci_bridge_emul_read_status_t (*read_base)(struct pci_bridge_emul *bridge, | ||
85 | int reg, u32 *value); | ||
86 | |||
87 | /* | ||
88 | * Same as ->read_base(), except it is for reading from the | ||
89 | * PCIe capability configuration space. | ||
90 | */ | ||
91 | pci_bridge_emul_read_status_t (*read_pcie)(struct pci_bridge_emul *bridge, | ||
92 | int reg, u32 *value); | ||
93 | /* | ||
94 | * Called when writing to the regular PCI bridge configuration | ||
95 | * space. old is the current value, new is the new value being | ||
96 | * written, and mask indicates which parts of the value are | ||
97 | * being changed. | ||
98 | */ | ||
99 | void (*write_base)(struct pci_bridge_emul *bridge, int reg, | ||
100 | u32 old, u32 new, u32 mask); | ||
101 | |||
102 | /* | ||
103 | * Same as ->write_base(), except it is for writing to the | ||
104 | * PCIe capability configuration space. | ||
105 | */ | ||
106 | void (*write_pcie)(struct pci_bridge_emul *bridge, int reg, | ||
107 | u32 old, u32 new, u32 mask); | ||
108 | }; | ||
109 | |||
110 | struct pci_bridge_emul { | ||
111 | struct pci_bridge_emul_conf conf; | ||
112 | struct pci_bridge_emul_pcie_conf pcie_conf; | ||
113 | struct pci_bridge_emul_ops *ops; | ||
114 | void *data; | ||
115 | bool has_pcie; | ||
116 | }; | ||
117 | |||
118 | void pci_bridge_emul_init(struct pci_bridge_emul *bridge); | ||
119 | int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, | ||
120 | int size, u32 *value); | ||
121 | int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, | ||
122 | int size, u32 value); | ||
123 | |||
124 | #endif /* __PCI_BRIDGE_EMUL_H__ */ | ||
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 51b6c81671c1..d068f11d08a7 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c | |||
@@ -35,6 +35,8 @@ | |||
35 | #include <linux/aer.h> | 35 | #include <linux/aer.h> |
36 | #include "pci.h" | 36 | #include "pci.h" |
37 | 37 | ||
38 | DEFINE_MUTEX(pci_slot_mutex); | ||
39 | |||
38 | const char *pci_power_names[] = { | 40 | const char *pci_power_names[] = { |
39 | "error", "D0", "D1", "D2", "D3hot", "D3cold", "unknown", | 41 | "error", "D0", "D1", "D2", "D3hot", "D3cold", "unknown", |
40 | }; | 42 | }; |
@@ -196,7 +198,7 @@ EXPORT_SYMBOL_GPL(pci_ioremap_wc_bar); | |||
196 | /** | 198 | /** |
197 | * pci_dev_str_match_path - test if a path string matches a device | 199 | * pci_dev_str_match_path - test if a path string matches a device |
198 | * @dev: the PCI device to test | 200 | * @dev: the PCI device to test |
199 | * @p: string to match the device against | 201 | * @path: string to match the device against |
200 | * @endptr: pointer to the string after the match | 202 | * @endptr: pointer to the string after the match |
201 | * | 203 | * |
202 | * Test if a string (typically from a kernel parameter) formatted as a | 204 | * Test if a string (typically from a kernel parameter) formatted as a |
@@ -791,6 +793,11 @@ static inline bool platform_pci_need_resume(struct pci_dev *dev) | |||
791 | return pci_platform_pm ? pci_platform_pm->need_resume(dev) : false; | 793 | return pci_platform_pm ? pci_platform_pm->need_resume(dev) : false; |
792 | } | 794 | } |
793 | 795 | ||
796 | static inline bool platform_pci_bridge_d3(struct pci_dev *dev) | ||
797 | { | ||
798 | return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false; | ||
799 | } | ||
800 | |||
794 | /** | 801 | /** |
795 | * pci_raw_set_power_state - Use PCI PM registers to set the power state of | 802 | * pci_raw_set_power_state - Use PCI PM registers to set the power state of |
796 | * given PCI device | 803 | * given PCI device |
@@ -999,7 +1006,7 @@ static void __pci_start_power_transition(struct pci_dev *dev, pci_power_t state) | |||
999 | * because have already delayed for the bridge. | 1006 | * because have already delayed for the bridge. |
1000 | */ | 1007 | */ |
1001 | if (dev->runtime_d3cold) { | 1008 | if (dev->runtime_d3cold) { |
1002 | if (dev->d3cold_delay) | 1009 | if (dev->d3cold_delay && !dev->imm_ready) |
1003 | msleep(dev->d3cold_delay); | 1010 | msleep(dev->d3cold_delay); |
1004 | /* | 1011 | /* |
1005 | * When powering on a bridge from D3cold, the | 1012 | * When powering on a bridge from D3cold, the |
@@ -1284,6 +1291,7 @@ int pci_save_state(struct pci_dev *dev) | |||
1284 | if (i != 0) | 1291 | if (i != 0) |
1285 | return i; | 1292 | return i; |
1286 | 1293 | ||
1294 | pci_save_dpc_state(dev); | ||
1287 | return pci_save_vc_state(dev); | 1295 | return pci_save_vc_state(dev); |
1288 | } | 1296 | } |
1289 | EXPORT_SYMBOL(pci_save_state); | 1297 | EXPORT_SYMBOL(pci_save_state); |
@@ -1389,6 +1397,7 @@ void pci_restore_state(struct pci_dev *dev) | |||
1389 | pci_restore_ats_state(dev); | 1397 | pci_restore_ats_state(dev); |
1390 | pci_restore_vc_state(dev); | 1398 | pci_restore_vc_state(dev); |
1391 | pci_restore_rebar_state(dev); | 1399 | pci_restore_rebar_state(dev); |
1400 | pci_restore_dpc_state(dev); | ||
1392 | 1401 | ||
1393 | pci_cleanup_aer_error_status_regs(dev); | 1402 | pci_cleanup_aer_error_status_regs(dev); |
1394 | 1403 | ||
@@ -2144,10 +2153,13 @@ static int __pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable | |||
2144 | int ret = 0; | 2153 | int ret = 0; |
2145 | 2154 | ||
2146 | /* | 2155 | /* |
2147 | * Bridges can only signal wakeup on behalf of subordinate devices, | 2156 | * Bridges that are not power-manageable directly only signal |
2148 | * but that is set up elsewhere, so skip them. | 2157 | * wakeup on behalf of subordinate devices which is set up |
2158 | * elsewhere, so skip them. However, bridges that are | ||
2159 | * power-manageable may signal wakeup for themselves (for example, | ||
2160 | * on a hotplug event) and they need to be covered here. | ||
2149 | */ | 2161 | */ |
2150 | if (pci_has_subordinate(dev)) | 2162 | if (!pci_power_manageable(dev)) |
2151 | return 0; | 2163 | return 0; |
2152 | 2164 | ||
2153 | /* Don't do the same thing twice in a row for one device. */ | 2165 | /* Don't do the same thing twice in a row for one device. */ |
@@ -2522,6 +2534,10 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge) | |||
2522 | if (bridge->is_thunderbolt) | 2534 | if (bridge->is_thunderbolt) |
2523 | return true; | 2535 | return true; |
2524 | 2536 | ||
2537 | /* Platform might know better if the bridge supports D3 */ | ||
2538 | if (platform_pci_bridge_d3(bridge)) | ||
2539 | return true; | ||
2540 | |||
2525 | /* | 2541 | /* |
2526 | * Hotplug ports handled natively by the OS were not validated | 2542 | * Hotplug ports handled natively by the OS were not validated |
2527 | * by vendors for runtime D3 at least until 2018 because there | 2543 | * by vendors for runtime D3 at least until 2018 because there |
@@ -2655,6 +2671,7 @@ EXPORT_SYMBOL_GPL(pci_d3cold_disable); | |||
2655 | void pci_pm_init(struct pci_dev *dev) | 2671 | void pci_pm_init(struct pci_dev *dev) |
2656 | { | 2672 | { |
2657 | int pm; | 2673 | int pm; |
2674 | u16 status; | ||
2658 | u16 pmc; | 2675 | u16 pmc; |
2659 | 2676 | ||
2660 | pm_runtime_forbid(&dev->dev); | 2677 | pm_runtime_forbid(&dev->dev); |
@@ -2717,6 +2734,10 @@ void pci_pm_init(struct pci_dev *dev) | |||
2717 | /* Disable the PME# generation functionality */ | 2734 | /* Disable the PME# generation functionality */ |
2718 | pci_pme_active(dev, false); | 2735 | pci_pme_active(dev, false); |
2719 | } | 2736 | } |
2737 | |||
2738 | pci_read_config_word(dev, PCI_STATUS, &status); | ||
2739 | if (status & PCI_STATUS_IMM_READY) | ||
2740 | dev->imm_ready = 1; | ||
2720 | } | 2741 | } |
2721 | 2742 | ||
2722 | static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop) | 2743 | static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop) |
@@ -4387,6 +4408,9 @@ int pcie_flr(struct pci_dev *dev) | |||
4387 | 4408 | ||
4388 | pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR); | 4409 | pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR); |
4389 | 4410 | ||
4411 | if (dev->imm_ready) | ||
4412 | return 0; | ||
4413 | |||
4390 | /* | 4414 | /* |
4391 | * Per PCIe r4.0, sec 6.6.2, a device must complete an FLR within | 4415 | * Per PCIe r4.0, sec 6.6.2, a device must complete an FLR within |
4392 | * 100ms, but may silently discard requests while the FLR is in | 4416 | * 100ms, but may silently discard requests while the FLR is in |
@@ -4428,6 +4452,9 @@ static int pci_af_flr(struct pci_dev *dev, int probe) | |||
4428 | 4452 | ||
4429 | pci_write_config_byte(dev, pos + PCI_AF_CTRL, PCI_AF_CTRL_FLR); | 4453 | pci_write_config_byte(dev, pos + PCI_AF_CTRL, PCI_AF_CTRL_FLR); |
4430 | 4454 | ||
4455 | if (dev->imm_ready) | ||
4456 | return 0; | ||
4457 | |||
4431 | /* | 4458 | /* |
4432 | * Per Advanced Capabilities for Conventional PCI ECN, 13 April 2006, | 4459 | * Per Advanced Capabilities for Conventional PCI ECN, 13 April 2006, |
4433 | * updated 27 July 2006; a device must complete an FLR within | 4460 | * updated 27 July 2006; a device must complete an FLR within |
@@ -4496,21 +4523,42 @@ bool pcie_wait_for_link(struct pci_dev *pdev, bool active) | |||
4496 | bool ret; | 4523 | bool ret; |
4497 | u16 lnk_status; | 4524 | u16 lnk_status; |
4498 | 4525 | ||
4526 | /* | ||
4527 | * Some controllers might not implement link active reporting. In this | ||
4528 | * case, we wait for 1000 + 100 ms. | ||
4529 | */ | ||
4530 | if (!pdev->link_active_reporting) { | ||
4531 | msleep(1100); | ||
4532 | return true; | ||
4533 | } | ||
4534 | |||
4535 | /* | ||
4536 | * Per PCIe r4.0, sec 6.6.1, a component must enter LTSSM Detect within | ||
4537 | * 20ms, after which we should expect the link to be active if the reset | ||
4538 | * was successful. If so, software must wait a minimum of 100 ms before | ||
4539 | * sending configuration requests to devices downstream of this port. | ||
4540 | * | ||
4541 | * If the link fails to activate, either the device was physically | ||
4542 | * removed or the link is permanently failed. | ||
4543 | */ | ||
4544 | if (active) | ||
4545 | msleep(20); | ||
4499 | for (;;) { | 4546 | for (;;) { |
4500 | pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); | 4547 | pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); |
4501 | ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA); | 4548 | ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA); |
4502 | if (ret == active) | 4549 | if (ret == active) |
4503 | return true; | 4550 | break; |
4504 | if (timeout <= 0) | 4551 | if (timeout <= 0) |
4505 | break; | 4552 | break; |
4506 | msleep(10); | 4553 | msleep(10); |
4507 | timeout -= 10; | 4554 | timeout -= 10; |
4508 | } | 4555 | } |
4509 | 4556 | if (active && ret) | |
4510 | pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", | 4557 | msleep(100); |
4511 | active ? "set" : "cleared"); | 4558 | else if (ret != active) |
4512 | 4559 | pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", | |
4513 | return false; | 4560 | active ? "set" : "cleared"); |
4561 | return ret == active; | ||
4514 | } | 4562 | } |
4515 | 4563 | ||
4516 | void pci_reset_secondary_bus(struct pci_dev *dev) | 4564 | void pci_reset_secondary_bus(struct pci_dev *dev) |
@@ -4582,13 +4630,13 @@ static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe) | |||
4582 | { | 4630 | { |
4583 | int rc = -ENOTTY; | 4631 | int rc = -ENOTTY; |
4584 | 4632 | ||
4585 | if (!hotplug || !try_module_get(hotplug->ops->owner)) | 4633 | if (!hotplug || !try_module_get(hotplug->owner)) |
4586 | return rc; | 4634 | return rc; |
4587 | 4635 | ||
4588 | if (hotplug->ops->reset_slot) | 4636 | if (hotplug->ops->reset_slot) |
4589 | rc = hotplug->ops->reset_slot(hotplug, probe); | 4637 | rc = hotplug->ops->reset_slot(hotplug, probe); |
4590 | 4638 | ||
4591 | module_put(hotplug->ops->owner); | 4639 | module_put(hotplug->owner); |
4592 | 4640 | ||
4593 | return rc; | 4641 | return rc; |
4594 | } | 4642 | } |
@@ -5165,6 +5213,41 @@ static int pci_bus_reset(struct pci_bus *bus, int probe) | |||
5165 | } | 5213 | } |
5166 | 5214 | ||
5167 | /** | 5215 | /** |
5216 | * pci_bus_error_reset - reset the bridge's subordinate bus | ||
5217 | * @bridge: The parent device that connects to the bus to reset | ||
5218 | * | ||
5219 | * This function will first try to reset the slots on this bus if the method is | ||
5220 | * available. If slot reset fails or is not available, this will fall back to a | ||
5221 | * secondary bus reset. | ||
5222 | */ | ||
5223 | int pci_bus_error_reset(struct pci_dev *bridge) | ||
5224 | { | ||
5225 | struct pci_bus *bus = bridge->subordinate; | ||
5226 | struct pci_slot *slot; | ||
5227 | |||
5228 | if (!bus) | ||
5229 | return -ENOTTY; | ||
5230 | |||
5231 | mutex_lock(&pci_slot_mutex); | ||
5232 | if (list_empty(&bus->slots)) | ||
5233 | goto bus_reset; | ||
5234 | |||
5235 | list_for_each_entry(slot, &bus->slots, list) | ||
5236 | if (pci_probe_reset_slot(slot)) | ||
5237 | goto bus_reset; | ||
5238 | |||
5239 | list_for_each_entry(slot, &bus->slots, list) | ||
5240 | if (pci_slot_reset(slot, 0)) | ||
5241 | goto bus_reset; | ||
5242 | |||
5243 | mutex_unlock(&pci_slot_mutex); | ||
5244 | return 0; | ||
5245 | bus_reset: | ||
5246 | mutex_unlock(&pci_slot_mutex); | ||
5247 | return pci_bus_reset(bridge->subordinate, 0); | ||
5248 | } | ||
5249 | |||
5250 | /** | ||
5168 | * pci_probe_reset_bus - probe whether a PCI bus can be reset | 5251 | * pci_probe_reset_bus - probe whether a PCI bus can be reset |
5169 | * @bus: PCI bus to probe | 5252 | * @bus: PCI bus to probe |
5170 | * | 5253 | * |
@@ -5701,8 +5784,7 @@ int pci_set_vga_state(struct pci_dev *dev, bool decode, | |||
5701 | void pci_add_dma_alias(struct pci_dev *dev, u8 devfn) | 5784 | void pci_add_dma_alias(struct pci_dev *dev, u8 devfn) |
5702 | { | 5785 | { |
5703 | if (!dev->dma_alias_mask) | 5786 | if (!dev->dma_alias_mask) |
5704 | dev->dma_alias_mask = kcalloc(BITS_TO_LONGS(U8_MAX), | 5787 | dev->dma_alias_mask = bitmap_zalloc(U8_MAX, GFP_KERNEL); |
5705 | sizeof(long), GFP_KERNEL); | ||
5706 | if (!dev->dma_alias_mask) { | 5788 | if (!dev->dma_alias_mask) { |
5707 | pci_warn(dev, "Unable to allocate DMA alias mask\n"); | 5789 | pci_warn(dev, "Unable to allocate DMA alias mask\n"); |
5708 | return; | 5790 | return; |
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h index 6e0d1528d471..662b7457db23 100644 --- a/drivers/pci/pci.h +++ b/drivers/pci/pci.h | |||
@@ -35,10 +35,13 @@ int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vmai, | |||
35 | 35 | ||
36 | int pci_probe_reset_function(struct pci_dev *dev); | 36 | int pci_probe_reset_function(struct pci_dev *dev); |
37 | int pci_bridge_secondary_bus_reset(struct pci_dev *dev); | 37 | int pci_bridge_secondary_bus_reset(struct pci_dev *dev); |
38 | int pci_bus_error_reset(struct pci_dev *dev); | ||
38 | 39 | ||
39 | /** | 40 | /** |
40 | * struct pci_platform_pm_ops - Firmware PM callbacks | 41 | * struct pci_platform_pm_ops - Firmware PM callbacks |
41 | * | 42 | * |
43 | * @bridge_d3: Does the bridge allow entering into D3 | ||
44 | * | ||
42 | * @is_manageable: returns 'true' if given device is power manageable by the | 45 | * @is_manageable: returns 'true' if given device is power manageable by the |
43 | * platform firmware | 46 | * platform firmware |
44 | * | 47 | * |
@@ -60,6 +63,7 @@ int pci_bridge_secondary_bus_reset(struct pci_dev *dev); | |||
60 | * these callbacks are mandatory. | 63 | * these callbacks are mandatory. |
61 | */ | 64 | */ |
62 | struct pci_platform_pm_ops { | 65 | struct pci_platform_pm_ops { |
66 | bool (*bridge_d3)(struct pci_dev *dev); | ||
63 | bool (*is_manageable)(struct pci_dev *dev); | 67 | bool (*is_manageable)(struct pci_dev *dev); |
64 | int (*set_state)(struct pci_dev *dev, pci_power_t state); | 68 | int (*set_state)(struct pci_dev *dev, pci_power_t state); |
65 | pci_power_t (*get_state)(struct pci_dev *dev); | 69 | pci_power_t (*get_state)(struct pci_dev *dev); |
@@ -136,6 +140,7 @@ static inline void pci_remove_legacy_files(struct pci_bus *bus) { return; } | |||
136 | 140 | ||
137 | /* Lock for read/write access to pci device and bus lists */ | 141 | /* Lock for read/write access to pci device and bus lists */ |
138 | extern struct rw_semaphore pci_bus_sem; | 142 | extern struct rw_semaphore pci_bus_sem; |
143 | extern struct mutex pci_slot_mutex; | ||
139 | 144 | ||
140 | extern raw_spinlock_t pci_lock; | 145 | extern raw_spinlock_t pci_lock; |
141 | 146 | ||
@@ -285,6 +290,7 @@ struct pci_sriov { | |||
285 | u16 driver_max_VFs; /* Max num VFs driver supports */ | 290 | u16 driver_max_VFs; /* Max num VFs driver supports */ |
286 | struct pci_dev *dev; /* Lowest numbered PF */ | 291 | struct pci_dev *dev; /* Lowest numbered PF */ |
287 | struct pci_dev *self; /* This PF */ | 292 | struct pci_dev *self; /* This PF */ |
293 | u32 cfg_size; /* VF config space size */ | ||
288 | u32 class; /* VF device */ | 294 | u32 class; /* VF device */ |
289 | u8 hdr_type; /* VF header type */ | 295 | u8 hdr_type; /* VF header type */ |
290 | u16 subsystem_vendor; /* VF subsystem vendor */ | 296 | u16 subsystem_vendor; /* VF subsystem vendor */ |
@@ -293,21 +299,71 @@ struct pci_sriov { | |||
293 | bool drivers_autoprobe; /* Auto probing of VFs by driver */ | 299 | bool drivers_autoprobe; /* Auto probing of VFs by driver */ |
294 | }; | 300 | }; |
295 | 301 | ||
296 | /* pci_dev priv_flags */ | 302 | /** |
297 | #define PCI_DEV_DISCONNECTED 0 | 303 | * pci_dev_set_io_state - Set the new error state if possible. |
298 | #define PCI_DEV_ADDED 1 | 304 | * |
305 | * @dev: pci device to set new error_state | ||
306 | * @new: the state we want dev to be in | ||
307 | * | ||
308 | * Must be called with device_lock held. | ||
309 | * | ||
310 | * Returns true if state has been changed to the requested state. | ||
311 | */ | ||
312 | static inline bool pci_dev_set_io_state(struct pci_dev *dev, | ||
313 | pci_channel_state_t new) | ||
314 | { | ||
315 | bool changed = false; | ||
316 | |||
317 | device_lock_assert(&dev->dev); | ||
318 | switch (new) { | ||
319 | case pci_channel_io_perm_failure: | ||
320 | switch (dev->error_state) { | ||
321 | case pci_channel_io_frozen: | ||
322 | case pci_channel_io_normal: | ||
323 | case pci_channel_io_perm_failure: | ||
324 | changed = true; | ||
325 | break; | ||
326 | } | ||
327 | break; | ||
328 | case pci_channel_io_frozen: | ||
329 | switch (dev->error_state) { | ||
330 | case pci_channel_io_frozen: | ||
331 | case pci_channel_io_normal: | ||
332 | changed = true; | ||
333 | break; | ||
334 | } | ||
335 | break; | ||
336 | case pci_channel_io_normal: | ||
337 | switch (dev->error_state) { | ||
338 | case pci_channel_io_frozen: | ||
339 | case pci_channel_io_normal: | ||
340 | changed = true; | ||
341 | break; | ||
342 | } | ||
343 | break; | ||
344 | } | ||
345 | if (changed) | ||
346 | dev->error_state = new; | ||
347 | return changed; | ||
348 | } | ||
299 | 349 | ||
300 | static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused) | 350 | static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused) |
301 | { | 351 | { |
302 | set_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags); | 352 | device_lock(&dev->dev); |
353 | pci_dev_set_io_state(dev, pci_channel_io_perm_failure); | ||
354 | device_unlock(&dev->dev); | ||
355 | |||
303 | return 0; | 356 | return 0; |
304 | } | 357 | } |
305 | 358 | ||
306 | static inline bool pci_dev_is_disconnected(const struct pci_dev *dev) | 359 | static inline bool pci_dev_is_disconnected(const struct pci_dev *dev) |
307 | { | 360 | { |
308 | return test_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags); | 361 | return dev->error_state == pci_channel_io_perm_failure; |
309 | } | 362 | } |
310 | 363 | ||
364 | /* pci_dev priv_flags */ | ||
365 | #define PCI_DEV_ADDED 0 | ||
366 | |||
311 | static inline void pci_dev_assign_added(struct pci_dev *dev, bool added) | 367 | static inline void pci_dev_assign_added(struct pci_dev *dev, bool added) |
312 | { | 368 | { |
313 | assign_bit(PCI_DEV_ADDED, &dev->priv_flags, added); | 369 | assign_bit(PCI_DEV_ADDED, &dev->priv_flags, added); |
@@ -346,6 +402,14 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info); | |||
346 | void aer_print_error(struct pci_dev *dev, struct aer_err_info *info); | 402 | void aer_print_error(struct pci_dev *dev, struct aer_err_info *info); |
347 | #endif /* CONFIG_PCIEAER */ | 403 | #endif /* CONFIG_PCIEAER */ |
348 | 404 | ||
405 | #ifdef CONFIG_PCIE_DPC | ||
406 | void pci_save_dpc_state(struct pci_dev *dev); | ||
407 | void pci_restore_dpc_state(struct pci_dev *dev); | ||
408 | #else | ||
409 | static inline void pci_save_dpc_state(struct pci_dev *dev) {} | ||
410 | static inline void pci_restore_dpc_state(struct pci_dev *dev) {} | ||
411 | #endif | ||
412 | |||
349 | #ifdef CONFIG_PCI_ATS | 413 | #ifdef CONFIG_PCI_ATS |
350 | void pci_restore_ats_state(struct pci_dev *dev); | 414 | void pci_restore_ats_state(struct pci_dev *dev); |
351 | #else | 415 | #else |
@@ -423,8 +487,8 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev) | |||
423 | #endif | 487 | #endif |
424 | 488 | ||
425 | /* PCI error reporting and recovery */ | 489 | /* PCI error reporting and recovery */ |
426 | void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service); | 490 | void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state, |
427 | void pcie_do_nonfatal_recovery(struct pci_dev *dev); | 491 | u32 service); |
428 | 492 | ||
429 | bool pcie_wait_for_link(struct pci_dev *pdev, bool active); | 493 | bool pcie_wait_for_link(struct pci_dev *pdev, bool active); |
430 | #ifdef CONFIG_PCIEASPM | 494 | #ifdef CONFIG_PCIEASPM |
diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig index 0a1e9d379bc5..44742b2e1126 100644 --- a/drivers/pci/pcie/Kconfig +++ b/drivers/pci/pcie/Kconfig | |||
@@ -36,7 +36,6 @@ config PCIEAER | |||
36 | config PCIEAER_INJECT | 36 | config PCIEAER_INJECT |
37 | tristate "PCI Express error injection support" | 37 | tristate "PCI Express error injection support" |
38 | depends on PCIEAER | 38 | depends on PCIEAER |
39 | default n | ||
40 | help | 39 | help |
41 | This enables PCI Express Root Port Advanced Error Reporting | 40 | This enables PCI Express Root Port Advanced Error Reporting |
42 | (AER) software error injector. | 41 | (AER) software error injector. |
@@ -84,7 +83,6 @@ config PCIEASPM | |||
84 | config PCIEASPM_DEBUG | 83 | config PCIEASPM_DEBUG |
85 | bool "Debug PCI Express ASPM" | 84 | bool "Debug PCI Express ASPM" |
86 | depends on PCIEASPM | 85 | depends on PCIEASPM |
87 | default n | ||
88 | help | 86 | help |
89 | This enables PCI Express ASPM debug support. It will add per-device | 87 | This enables PCI Express ASPM debug support. It will add per-device |
90 | interface to control ASPM. | 88 | interface to control ASPM. |
@@ -129,7 +127,6 @@ config PCIE_PME | |||
129 | config PCIE_DPC | 127 | config PCIE_DPC |
130 | bool "PCI Express Downstream Port Containment support" | 128 | bool "PCI Express Downstream Port Containment support" |
131 | depends on PCIEPORTBUS && PCIEAER | 129 | depends on PCIEPORTBUS && PCIEAER |
132 | default n | ||
133 | help | 130 | help |
134 | This enables PCI Express Downstream Port Containment (DPC) | 131 | This enables PCI Express Downstream Port Containment (DPC) |
135 | driver support. DPC events from Root and Downstream ports | 132 | driver support. DPC events from Root and Downstream ports |
@@ -139,7 +136,6 @@ config PCIE_DPC | |||
139 | 136 | ||
140 | config PCIE_PTM | 137 | config PCIE_PTM |
141 | bool "PCI Express Precision Time Measurement support" | 138 | bool "PCI Express Precision Time Measurement support" |
142 | default n | ||
143 | depends on PCIEPORTBUS | 139 | depends on PCIEPORTBUS |
144 | help | 140 | help |
145 | This enables PCI Express Precision Time Measurement (PTM) | 141 | This enables PCI Express Precision Time Measurement (PTM) |
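The `default n` deletions above are pure cleanup: a Kconfig `bool` or `tristate` symbol with no `default` line already defaults to `n`. A hedged illustration with a made-up symbol:

```kconfig
config PCIE_EXAMPLE
	bool "Example PCIe service"
	depends on PCIEPORTBUS
	help
	  With no "default" line, this symbol defaults to n, so an
	  explicit "default n" adds nothing.
```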
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index 83180edd6ed4..a90a9194ac4a 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -30,7 +30,7 @@ | |||
30 | #include "../pci.h" | 30 | #include "../pci.h" |
31 | #include "portdrv.h" | 31 | #include "portdrv.h" |
32 | 32 | ||
33 | #define AER_ERROR_SOURCES_MAX 100 | 33 | #define AER_ERROR_SOURCES_MAX 128 |
34 | 34 | ||
35 | #define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */ | 35 | #define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */ |
36 | #define AER_MAX_TYPEOF_UNCOR_ERRS 26 /* as per PCI_ERR_UNCOR_STATUS*/ | 36 | #define AER_MAX_TYPEOF_UNCOR_ERRS 26 /* as per PCI_ERR_UNCOR_STATUS*/ |
@@ -42,21 +42,7 @@ struct aer_err_source { | |||
42 | 42 | ||
43 | struct aer_rpc { | 43 | struct aer_rpc { |
44 | struct pci_dev *rpd; /* Root Port device */ | 44 | struct pci_dev *rpd; /* Root Port device */ |
45 | struct work_struct dpc_handler; | 45 | DECLARE_KFIFO(aer_fifo, struct aer_err_source, AER_ERROR_SOURCES_MAX); |
46 | struct aer_err_source e_sources[AER_ERROR_SOURCES_MAX]; | ||
47 | struct aer_err_info e_info; | ||
48 | unsigned short prod_idx; /* Error Producer Index */ | ||
49 | unsigned short cons_idx; /* Error Consumer Index */ | ||
50 | int isr; | ||
51 | spinlock_t e_lock; /* | ||
52 | * Lock access to Error Status/ID Regs | ||
53 | * and error producer/consumer index | ||
54 | */ | ||
55 | struct mutex rpc_mutex; /* | ||
56 | * only one thread could do | ||
57 | * recovery on the same | ||
58 | * root port hierarchy | ||
59 | */ | ||
60 | }; | 46 | }; |
61 | 47 | ||
62 | /* AER stats for the device */ | 48 | /* AER stats for the device */ |
@@ -866,7 +852,7 @@ void cper_print_aer(struct pci_dev *dev, int aer_severity, | |||
866 | static int add_error_device(struct aer_err_info *e_info, struct pci_dev *dev) | 852 | static int add_error_device(struct aer_err_info *e_info, struct pci_dev *dev) |
867 | { | 853 | { |
868 | if (e_info->error_dev_num < AER_MAX_MULTI_ERR_DEVICES) { | 854 | if (e_info->error_dev_num < AER_MAX_MULTI_ERR_DEVICES) { |
869 | e_info->dev[e_info->error_dev_num] = dev; | 855 | e_info->dev[e_info->error_dev_num] = pci_dev_get(dev); |
870 | e_info->error_dev_num++; | 856 | e_info->error_dev_num++; |
871 | return 0; | 857 | return 0; |
872 | } | 858 | } |
@@ -1010,9 +996,12 @@ static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info) | |||
1010 | info->status); | 996 | info->status); |
1011 | pci_aer_clear_device_status(dev); | 997 | pci_aer_clear_device_status(dev); |
1012 | } else if (info->severity == AER_NONFATAL) | 998 | } else if (info->severity == AER_NONFATAL) |
1013 | pcie_do_nonfatal_recovery(dev); | 999 | pcie_do_recovery(dev, pci_channel_io_normal, |
1000 | PCIE_PORT_SERVICE_AER); | ||
1014 | else if (info->severity == AER_FATAL) | 1001 | else if (info->severity == AER_FATAL) |
1015 | pcie_do_fatal_recovery(dev, PCIE_PORT_SERVICE_AER); | 1002 | pcie_do_recovery(dev, pci_channel_io_frozen, |
1003 | PCIE_PORT_SERVICE_AER); | ||
1004 | pci_dev_put(dev); | ||
1016 | } | 1005 | } |
1017 | 1006 | ||
1018 | #ifdef CONFIG_ACPI_APEI_PCIEAER | 1007 | #ifdef CONFIG_ACPI_APEI_PCIEAER |
@@ -1047,9 +1036,11 @@ static void aer_recover_work_func(struct work_struct *work) | |||
1047 | } | 1036 | } |
1048 | cper_print_aer(pdev, entry.severity, entry.regs); | 1037 | cper_print_aer(pdev, entry.severity, entry.regs); |
1049 | if (entry.severity == AER_NONFATAL) | 1038 | if (entry.severity == AER_NONFATAL) |
1050 | pcie_do_nonfatal_recovery(pdev); | 1039 | pcie_do_recovery(pdev, pci_channel_io_normal, |
1040 | PCIE_PORT_SERVICE_AER); | ||
1051 | else if (entry.severity == AER_FATAL) | 1041 | else if (entry.severity == AER_FATAL) |
1052 | pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_AER); | 1042 | pcie_do_recovery(pdev, pci_channel_io_frozen, |
1043 | PCIE_PORT_SERVICE_AER); | ||
1053 | pci_dev_put(pdev); | 1044 | pci_dev_put(pdev); |
1054 | } | 1045 | } |
1055 | } | 1046 | } |
@@ -1065,7 +1056,6 @@ static DECLARE_WORK(aer_recover_work, aer_recover_work_func); | |||
1065 | void aer_recover_queue(int domain, unsigned int bus, unsigned int devfn, | 1056 | void aer_recover_queue(int domain, unsigned int bus, unsigned int devfn, |
1066 | int severity, struct aer_capability_regs *aer_regs) | 1057 | int severity, struct aer_capability_regs *aer_regs) |
1067 | { | 1058 | { |
1068 | unsigned long flags; | ||
1069 | struct aer_recover_entry entry = { | 1059 | struct aer_recover_entry entry = { |
1070 | .bus = bus, | 1060 | .bus = bus, |
1071 | .devfn = devfn, | 1061 | .devfn = devfn, |
@@ -1074,13 +1064,12 @@ void aer_recover_queue(int domain, unsigned int bus, unsigned int devfn, | |||
1074 | .regs = aer_regs, | 1064 | .regs = aer_regs, |
1075 | }; | 1065 | }; |
1076 | 1066 | ||
1077 | spin_lock_irqsave(&aer_recover_ring_lock, flags); | 1067 | if (kfifo_in_spinlocked(&aer_recover_ring, &entry, sizeof(entry), |
1078 | if (kfifo_put(&aer_recover_ring, entry)) | 1068 | &aer_recover_ring_lock)) |
1079 | schedule_work(&aer_recover_work); | 1069 | schedule_work(&aer_recover_work); |
1080 | else | 1070 | else |
1081 | pr_err("AER recover: Buffer overflow when recovering AER for %04x:%02x:%02x:%x\n", | 1071 | pr_err("AER recover: Buffer overflow when recovering AER for %04x:%02x:%02x:%x\n", |
1082 | domain, bus, PCI_SLOT(devfn), PCI_FUNC(devfn)); | 1072 | domain, bus, PCI_SLOT(devfn), PCI_FUNC(devfn)); |
1083 | spin_unlock_irqrestore(&aer_recover_ring_lock, flags); | ||
1084 | } | 1073 | } |
1085 | EXPORT_SYMBOL_GPL(aer_recover_queue); | 1074 | EXPORT_SYMBOL_GPL(aer_recover_queue); |
1086 | #endif | 1075 | #endif |
@@ -1115,8 +1104,9 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info) | |||
1115 | &info->mask); | 1104 | &info->mask); |
1116 | if (!(info->status & ~info->mask)) | 1105 | if (!(info->status & ~info->mask)) |
1117 | return 0; | 1106 | return 0; |
1118 | } else if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE || | 1107 | } else if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || |
1119 | info->severity == AER_NONFATAL) { | 1108 | pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM || |
1109 | info->severity == AER_NONFATAL) { | ||
1120 | 1110 | ||
1121 | /* Link is still healthy for IO reads */ | 1111 | /* Link is still healthy for IO reads */ |
1122 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, | 1112 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, |
@@ -1170,7 +1160,7 @@ static void aer_isr_one_error(struct aer_rpc *rpc, | |||
1170 | struct aer_err_source *e_src) | 1160 | struct aer_err_source *e_src) |
1171 | { | 1161 | { |
1172 | struct pci_dev *pdev = rpc->rpd; | 1162 | struct pci_dev *pdev = rpc->rpd; |
1173 | struct aer_err_info *e_info = &rpc->e_info; | 1163 | struct aer_err_info e_info; |
1174 | 1164 | ||
1175 | pci_rootport_aer_stats_incr(pdev, e_src); | 1165 | pci_rootport_aer_stats_incr(pdev, e_src); |
1176 | 1166 | ||
@@ -1179,83 +1169,57 @@ static void aer_isr_one_error(struct aer_rpc *rpc, | |||
1179 | * uncorrectable error being logged. Report correctable error first. | 1169 | * uncorrectable error being logged. Report correctable error first. |
1180 | */ | 1170 | */ |
1181 | if (e_src->status & PCI_ERR_ROOT_COR_RCV) { | 1171 | if (e_src->status & PCI_ERR_ROOT_COR_RCV) { |
1182 | e_info->id = ERR_COR_ID(e_src->id); | 1172 | e_info.id = ERR_COR_ID(e_src->id); |
1183 | e_info->severity = AER_CORRECTABLE; | 1173 | e_info.severity = AER_CORRECTABLE; |
1184 | 1174 | ||
1185 | if (e_src->status & PCI_ERR_ROOT_MULTI_COR_RCV) | 1175 | if (e_src->status & PCI_ERR_ROOT_MULTI_COR_RCV) |
1186 | e_info->multi_error_valid = 1; | 1176 | e_info.multi_error_valid = 1; |
1187 | else | 1177 | else |
1188 | e_info->multi_error_valid = 0; | 1178 | e_info.multi_error_valid = 0; |
1189 | aer_print_port_info(pdev, e_info); | 1179 | aer_print_port_info(pdev, &e_info); |
1190 | 1180 | ||
1191 | if (find_source_device(pdev, e_info)) | 1181 | if (find_source_device(pdev, &e_info)) |
1192 | aer_process_err_devices(e_info); | 1182 | aer_process_err_devices(&e_info); |
1193 | } | 1183 | } |
1194 | 1184 | ||
1195 | if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) { | 1185 | if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) { |
1196 | e_info->id = ERR_UNCOR_ID(e_src->id); | 1186 | e_info.id = ERR_UNCOR_ID(e_src->id); |
1197 | 1187 | ||
1198 | if (e_src->status & PCI_ERR_ROOT_FATAL_RCV) | 1188 | if (e_src->status & PCI_ERR_ROOT_FATAL_RCV) |
1199 | e_info->severity = AER_FATAL; | 1189 | e_info.severity = AER_FATAL; |
1200 | else | 1190 | else |
1201 | e_info->severity = AER_NONFATAL; | 1191 | e_info.severity = AER_NONFATAL; |
1202 | 1192 | ||
1203 | if (e_src->status & PCI_ERR_ROOT_MULTI_UNCOR_RCV) | 1193 | if (e_src->status & PCI_ERR_ROOT_MULTI_UNCOR_RCV) |
1204 | e_info->multi_error_valid = 1; | 1194 | e_info.multi_error_valid = 1; |
1205 | else | 1195 | else |
1206 | e_info->multi_error_valid = 0; | 1196 | e_info.multi_error_valid = 0; |
1207 | 1197 | ||
1208 | aer_print_port_info(pdev, e_info); | 1198 | aer_print_port_info(pdev, &e_info); |
1209 | 1199 | ||
1210 | if (find_source_device(pdev, e_info)) | 1200 | if (find_source_device(pdev, &e_info)) |
1211 | aer_process_err_devices(e_info); | 1201 | aer_process_err_devices(&e_info); |
1212 | } | 1202 | } |
1213 | } | 1203 | } |
1214 | 1204 | ||
1215 | /** | 1205 | /** |
1216 | * get_e_source - retrieve an error source | ||
1217 | * @rpc: pointer to the root port which holds an error | ||
1218 | * @e_src: pointer to store retrieved error source | ||
1219 | * | ||
1220 | * Return 1 if an error source is retrieved, otherwise 0. | ||
1221 | * | ||
1222 | * Invoked by DPC handler to consume an error. | ||
1223 | */ | ||
1224 | static int get_e_source(struct aer_rpc *rpc, struct aer_err_source *e_src) | ||
1225 | { | ||
1226 | unsigned long flags; | ||
1227 | |||
1228 | /* Lock access to Root error producer/consumer index */ | ||
1229 | spin_lock_irqsave(&rpc->e_lock, flags); | ||
1230 | if (rpc->prod_idx == rpc->cons_idx) { | ||
1231 | spin_unlock_irqrestore(&rpc->e_lock, flags); | ||
1232 | return 0; | ||
1233 | } | ||
1234 | |||
1235 | *e_src = rpc->e_sources[rpc->cons_idx]; | ||
1236 | rpc->cons_idx++; | ||
1237 | if (rpc->cons_idx == AER_ERROR_SOURCES_MAX) | ||
1238 | rpc->cons_idx = 0; | ||
1239 | spin_unlock_irqrestore(&rpc->e_lock, flags); | ||
1240 | |||
1241 | return 1; | ||
1242 | } | ||
1243 | |||
1244 | /** | ||
1245 | * aer_isr - consume errors detected by root port | 1206 | * aer_isr - consume errors detected by root port |
1246 | * @work: definition of this work item | 1207 | * @work: definition of this work item |
1247 | * | 1208 | * |
1248 | * Invoked, as DPC, when root port records new detected error | 1209 | * Invoked, as DPC, when root port records new detected error |
1249 | */ | 1210 | */ |
1250 | static void aer_isr(struct work_struct *work) | 1211 | static irqreturn_t aer_isr(int irq, void *context) |
1251 | { | 1212 | { |
1252 | struct aer_rpc *rpc = container_of(work, struct aer_rpc, dpc_handler); | 1213 | struct pcie_device *dev = (struct pcie_device *)context; |
1214 | struct aer_rpc *rpc = get_service_data(dev); | ||
1253 | struct aer_err_source uninitialized_var(e_src); | 1215 | struct aer_err_source uninitialized_var(e_src); |
1254 | 1216 | ||
1255 | mutex_lock(&rpc->rpc_mutex); | 1217 | if (kfifo_is_empty(&rpc->aer_fifo)) |
1256 | while (get_e_source(rpc, &e_src)) | 1218 | return IRQ_NONE; |
1219 | |||
1220 | while (kfifo_get(&rpc->aer_fifo, &e_src)) | ||
1257 | aer_isr_one_error(rpc, &e_src); | 1221 | aer_isr_one_error(rpc, &e_src); |
1258 | mutex_unlock(&rpc->rpc_mutex); | 1222 | return IRQ_HANDLED; |
1259 | } | 1223 | } |
1260 | 1224 | ||
1261 | /** | 1225 | /** |
@@ -1265,56 +1229,26 @@ static void aer_isr(struct work_struct *work) | |||
1265 | * | 1229 | * |
1266 | * Invoked when Root Port detects AER messages. | 1230 | * Invoked when Root Port detects AER messages. |
1267 | */ | 1231 | */ |
1268 | irqreturn_t aer_irq(int irq, void *context) | 1232 | static irqreturn_t aer_irq(int irq, void *context) |
1269 | { | 1233 | { |
1270 | unsigned int status, id; | ||
1271 | struct pcie_device *pdev = (struct pcie_device *)context; | 1234 | struct pcie_device *pdev = (struct pcie_device *)context; |
1272 | struct aer_rpc *rpc = get_service_data(pdev); | 1235 | struct aer_rpc *rpc = get_service_data(pdev); |
1273 | int next_prod_idx; | 1236 | struct pci_dev *rp = rpc->rpd; |
1274 | unsigned long flags; | 1237 | struct aer_err_source e_src = {}; |
1275 | int pos; | 1238 | int pos = rp->aer_cap; |
1276 | |||
1277 | pos = pdev->port->aer_cap; | ||
1278 | /* | ||
1279 | * Must lock access to Root Error Status Reg, Root Error ID Reg, | ||
1280 | * and Root error producer/consumer index | ||
1281 | */ | ||
1282 | spin_lock_irqsave(&rpc->e_lock, flags); | ||
1283 | 1239 | ||
1284 | /* Read error status */ | 1240 | pci_read_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, &e_src.status); |
1285 | pci_read_config_dword(pdev->port, pos + PCI_ERR_ROOT_STATUS, &status); | 1241 | if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV))) |
1286 | if (!(status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV))) { | ||
1287 | spin_unlock_irqrestore(&rpc->e_lock, flags); | ||
1288 | return IRQ_NONE; | 1242 | return IRQ_NONE; |
1289 | } | ||
1290 | 1243 | ||
1291 | /* Read error source and clear error status */ | 1244 | pci_read_config_dword(rp, pos + PCI_ERR_ROOT_ERR_SRC, &e_src.id); |
1292 | pci_read_config_dword(pdev->port, pos + PCI_ERR_ROOT_ERR_SRC, &id); | 1245 | pci_write_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, e_src.status); |
1293 | pci_write_config_dword(pdev->port, pos + PCI_ERR_ROOT_STATUS, status); | ||
1294 | 1246 | ||
1295 | /* Store error source for later DPC handler */ | 1247 | if (!kfifo_put(&rpc->aer_fifo, e_src)) |
1296 | next_prod_idx = rpc->prod_idx + 1; | ||
1297 | if (next_prod_idx == AER_ERROR_SOURCES_MAX) | ||
1298 | next_prod_idx = 0; | ||
1299 | if (next_prod_idx == rpc->cons_idx) { | ||
1300 | /* | ||
1301 | * Error Storm Condition - possibly the same error occurred. | ||
1302 | * Drop the error. | ||
1303 | */ | ||
1304 | spin_unlock_irqrestore(&rpc->e_lock, flags); | ||
1305 | return IRQ_HANDLED; | 1248 | return IRQ_HANDLED; |
1306 | } | ||
1307 | rpc->e_sources[rpc->prod_idx].status = status; | ||
1308 | rpc->e_sources[rpc->prod_idx].id = id; | ||
1309 | rpc->prod_idx = next_prod_idx; | ||
1310 | spin_unlock_irqrestore(&rpc->e_lock, flags); | ||
1311 | |||
1312 | /* Invoke DPC handler */ | ||
1313 | schedule_work(&rpc->dpc_handler); | ||
1314 | 1249 | ||
1315 | return IRQ_HANDLED; | 1250 | return IRQ_WAKE_THREAD; |
1316 | } | 1251 | } |
1317 | EXPORT_SYMBOL_GPL(aer_irq); | ||
1318 | 1252 | ||
1319 | static int set_device_error_reporting(struct pci_dev *dev, void *data) | 1253 | static int set_device_error_reporting(struct pci_dev *dev, void *data) |
1320 | { | 1254 | { |
@@ -1423,33 +1357,6 @@ static void aer_disable_rootport(struct aer_rpc *rpc) | |||
1423 | } | 1357 | } |
1424 | 1358 | ||
1425 | /** | 1359 | /** |
1426 | * aer_alloc_rpc - allocate Root Port data structure | ||
1427 | * @dev: pointer to the pcie_dev data structure | ||
1428 | * | ||
1429 | * Invoked when Root Port's AER service is loaded. | ||
1430 | */ | ||
1431 | static struct aer_rpc *aer_alloc_rpc(struct pcie_device *dev) | ||
1432 | { | ||
1433 | struct aer_rpc *rpc; | ||
1434 | |||
1435 | rpc = kzalloc(sizeof(struct aer_rpc), GFP_KERNEL); | ||
1436 | if (!rpc) | ||
1437 | return NULL; | ||
1438 | |||
1439 | /* Initialize Root lock access, e_lock, to Root Error Status Reg */ | ||
1440 | spin_lock_init(&rpc->e_lock); | ||
1441 | |||
1442 | rpc->rpd = dev->port; | ||
1443 | INIT_WORK(&rpc->dpc_handler, aer_isr); | ||
1444 | mutex_init(&rpc->rpc_mutex); | ||
1445 | |||
1446 | /* Use PCIe bus function to store rpc into PCIe device */ | ||
1447 | set_service_data(dev, rpc); | ||
1448 | |||
1449 | return rpc; | ||
1450 | } | ||
1451 | |||
1452 | /** | ||
1453 | * aer_remove - clean up resources | 1360 | * aer_remove - clean up resources |
1454 | * @dev: pointer to the pcie_dev data structure | 1361 | * @dev: pointer to the pcie_dev data structure |
1455 | * | 1362 | * |
@@ -1459,16 +1366,7 @@ static void aer_remove(struct pcie_device *dev) | |||
1459 | { | 1366 | { |
1460 | struct aer_rpc *rpc = get_service_data(dev); | 1367 | struct aer_rpc *rpc = get_service_data(dev); |
1461 | 1368 | ||
1462 | if (rpc) { | 1369 | aer_disable_rootport(rpc); |
1463 | /* If register interrupt service, it must be free. */ | ||
1464 | if (rpc->isr) | ||
1465 | free_irq(dev->irq, dev); | ||
1466 | |||
1467 | flush_work(&rpc->dpc_handler); | ||
1468 | aer_disable_rootport(rpc); | ||
1469 | kfree(rpc); | ||
1470 | set_service_data(dev, NULL); | ||
1471 | } | ||
1472 | } | 1370 | } |
1473 | 1371 | ||
1474 | /** | 1372 | /** |
@@ -1481,27 +1379,24 @@ static int aer_probe(struct pcie_device *dev) | |||
1481 | { | 1379 | { |
1482 | int status; | 1380 | int status; |
1483 | struct aer_rpc *rpc; | 1381 | struct aer_rpc *rpc; |
1484 | struct device *device = &dev->port->dev; | 1382 | struct device *device = &dev->device; |
1485 | 1383 | ||
1486 | /* Alloc rpc data structure */ | 1384 | rpc = devm_kzalloc(device, sizeof(struct aer_rpc), GFP_KERNEL); |
1487 | rpc = aer_alloc_rpc(dev); | ||
1488 | if (!rpc) { | 1385 | if (!rpc) { |
1489 | dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n"); | 1386 | dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n"); |
1490 | aer_remove(dev); | ||
1491 | return -ENOMEM; | 1387 | return -ENOMEM; |
1492 | } | 1388 | } |
1389 | rpc->rpd = dev->port; | ||
1390 | set_service_data(dev, rpc); | ||
1493 | 1391 | ||
1494 | /* Request IRQ ISR */ | 1392 | status = devm_request_threaded_irq(device, dev->irq, aer_irq, aer_isr, |
1495 | status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv", dev); | 1393 | IRQF_SHARED, "aerdrv", dev); |
1496 | if (status) { | 1394 | if (status) { |
1497 | dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n", | 1395 | dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n", |
1498 | dev->irq); | 1396 | dev->irq); |
1499 | aer_remove(dev); | ||
1500 | return status; | 1397 | return status; |
1501 | } | 1398 | } |
1502 | 1399 | ||
1503 | rpc->isr = 1; | ||
1504 | |||
1505 | aer_enable_rootport(rpc); | 1400 | aer_enable_rootport(rpc); |
1506 | dev_info(device, "AER enabled with IRQ %d\n", dev->irq); | 1401 | dev_info(device, "AER enabled with IRQ %d\n", dev->irq); |
1507 | return 0; | 1402 | return 0; |
@@ -1526,7 +1421,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev) | |||
1526 | reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; | 1421 | reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; |
1527 | pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); | 1422 | pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); |
1528 | 1423 | ||
1529 | rc = pci_bridge_secondary_bus_reset(dev); | 1424 | rc = pci_bus_error_reset(dev); |
1530 | pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n"); | 1425 | pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n"); |
1531 | 1426 | ||
1532 | /* Clear Root Error Status */ | 1427 | /* Clear Root Error Status */ |
@@ -1541,18 +1436,6 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev) | |||
1541 | return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; | 1436 | return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; |
1542 | } | 1437 | } |
1543 | 1438 | ||
1544 | /** | ||
1545 | * aer_error_resume - clean up corresponding error status bits | ||
1546 | * @dev: pointer to Root Port's pci_dev data structure | ||
1547 | * | ||
1548 | * Invoked by Port Bus driver during nonfatal recovery. | ||
1549 | */ | ||
1550 | static void aer_error_resume(struct pci_dev *dev) | ||
1551 | { | ||
1552 | pci_aer_clear_device_status(dev); | ||
1553 | pci_cleanup_aer_uncorrect_error_status(dev); | ||
1554 | } | ||
1555 | |||
1556 | static struct pcie_port_service_driver aerdriver = { | 1439 | static struct pcie_port_service_driver aerdriver = { |
1557 | .name = "aer", | 1440 | .name = "aer", |
1558 | .port_type = PCI_EXP_TYPE_ROOT_PORT, | 1441 | .port_type = PCI_EXP_TYPE_ROOT_PORT, |
@@ -1560,7 +1443,6 @@ static struct pcie_port_service_driver aerdriver = { | |||
1560 | 1443 | ||
1561 | .probe = aer_probe, | 1444 | .probe = aer_probe, |
1562 | .remove = aer_remove, | 1445 | .remove = aer_remove, |
1563 | .error_resume = aer_error_resume, | ||
1564 | .reset_link = aer_root_reset, | 1446 | .reset_link = aer_root_reset, |
1565 | }; | 1447 | }; |
1566 | 1448 | ||
@@ -1569,10 +1451,9 @@ static struct pcie_port_service_driver aerdriver = { | |||
1569 | * | 1451 | * |
1570 | * Invoked when AER root service driver is loaded. | 1452 | * Invoked when AER root service driver is loaded. |
1571 | */ | 1453 | */ |
1572 | static int __init aer_service_init(void) | 1454 | int __init pcie_aer_init(void) |
1573 | { | 1455 | { |
1574 | if (!pci_aer_available() || aer_acpi_firmware_first()) | 1456 | if (!pci_aer_available() || aer_acpi_firmware_first()) |
1575 | return -ENXIO; | 1457 | return -ENXIO; |
1576 | return pcie_port_service_register(&aerdriver); | 1458 | return pcie_port_service_register(&aerdriver); |
1577 | } | 1459 | } |
1578 | device_initcall(aer_service_init); | ||
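The aer.c rework above replaces the hand-rolled `e_sources[]` ring (producer/consumer indices plus `e_lock`) with a `DECLARE_KFIFO()` queue, and `AER_ERROR_SOURCES_MAX` moves from 100 to 128 because kfifo requires a power-of-two capacity: free-running indices wrap with a cheap mask instead of a compare-and-reset. A self-contained sketch of that FIFO shape (illustrative names, not the kfifo API itself):

```c
#include <assert.h>

#define FIFO_SIZE 128                        /* must be a power of two */

struct err_src { unsigned int status, id; };

struct err_fifo {
	struct err_src buf[FIFO_SIZE];
	unsigned int in, out;                /* free-running indices */
};

static int fifo_put(struct err_fifo *f, struct err_src e)
{
	if (f->in - f->out == FIFO_SIZE)
		return 0;                    /* full: drop, as aer_irq() does */
	f->buf[f->in++ & (FIFO_SIZE - 1)] = e;
	return 1;
}

static int fifo_get(struct err_fifo *f, struct err_src *e)
{
	if (f->in == f->out)
		return 0;                    /* empty */
	*e = f->buf[f->out++ & (FIFO_SIZE - 1)];
	return 1;
}

static int fifo_roundtrip(void)
{
	struct err_fifo f = { .in = 0, .out = 0 };
	struct err_src e = { .status = 5, .id = 9 }, out = { 0, 0 };

	if (!fifo_put(&f, e) || !fifo_get(&f, &out))
		return -1;
	return (int)out.id;                  /* 9 on success */
}
```

The same change lets the hard IRQ handler (`aer_irq()`) publish entries and the threaded handler (`aer_isr()`) drain them without the old `e_lock`/`rpc_mutex` pair.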
diff --git a/drivers/pci/pcie/aer_inject.c b/drivers/pci/pcie/aer_inject.c
index 0eb24346cad3..95d4759664b3 100644
--- a/drivers/pci/pcie/aer_inject.c
+++ b/drivers/pci/pcie/aer_inject.c
@@ -14,6 +14,7 @@ | |||
14 | 14 | ||
15 | #include <linux/module.h> | 15 | #include <linux/module.h> |
16 | #include <linux/init.h> | 16 | #include <linux/init.h> |
17 | #include <linux/irq.h> | ||
17 | #include <linux/miscdevice.h> | 18 | #include <linux/miscdevice.h> |
18 | #include <linux/pci.h> | 19 | #include <linux/pci.h> |
19 | #include <linux/slab.h> | 20 | #include <linux/slab.h> |
@@ -175,14 +176,48 @@ static u32 *find_pci_config_dword(struct aer_error *err, int where, | |||
175 | return target; | 176 | return target; |
176 | } | 177 | } |
177 | 178 | ||
179 | static int aer_inj_read(struct pci_bus *bus, unsigned int devfn, int where, | ||
180 | int size, u32 *val) | ||
181 | { | ||
182 | struct pci_ops *ops, *my_ops; | ||
183 | int rv; | ||
184 | |||
185 | ops = __find_pci_bus_ops(bus); | ||
186 | if (!ops) | ||
187 | return -1; | ||
188 | |||
189 | my_ops = bus->ops; | ||
190 | bus->ops = ops; | ||
191 | rv = ops->read(bus, devfn, where, size, val); | ||
192 | bus->ops = my_ops; | ||
193 | |||
194 | return rv; | ||
195 | } | ||
196 | |||
197 | static int aer_inj_write(struct pci_bus *bus, unsigned int devfn, int where, | ||
198 | int size, u32 val) | ||
199 | { | ||
200 | struct pci_ops *ops, *my_ops; | ||
201 | int rv; | ||
202 | |||
203 | ops = __find_pci_bus_ops(bus); | ||
204 | if (!ops) | ||
205 | return -1; | ||
206 | |||
207 | my_ops = bus->ops; | ||
208 | bus->ops = ops; | ||
209 | rv = ops->write(bus, devfn, where, size, val); | ||
210 | bus->ops = my_ops; | ||
211 | |||
212 | return rv; | ||
213 | } | ||
214 | |||
178 | static int aer_inj_read_config(struct pci_bus *bus, unsigned int devfn, | 215 | static int aer_inj_read_config(struct pci_bus *bus, unsigned int devfn, |
179 | int where, int size, u32 *val) | 216 | int where, int size, u32 *val) |
180 | { | 217 | { |
181 | u32 *sim; | 218 | u32 *sim; |
182 | struct aer_error *err; | 219 | struct aer_error *err; |
183 | unsigned long flags; | 220 | unsigned long flags; |
184 | struct pci_ops *ops; | ||
185 | struct pci_ops *my_ops; | ||
186 | int domain; | 221 | int domain; |
187 | int rv; | 222 | int rv; |
188 | 223 | ||
@@ -203,18 +238,7 @@ static int aer_inj_read_config(struct pci_bus *bus, unsigned int devfn, | |||
203 | return 0; | 238 | return 0; |
204 | } | 239 | } |
205 | out: | 240 | out: |
206 | ops = __find_pci_bus_ops(bus); | 241 | rv = aer_inj_read(bus, devfn, where, size, val); |
207 | /* | ||
208 | * pci_lock must already be held, so we can directly | ||
209 | * manipulate bus->ops. Many config access functions, | ||
210 | * including pci_generic_config_read() require the original | ||
211 | * bus->ops be installed to function, so temporarily put them | ||
212 | * back. | ||
213 | */ | ||
214 | my_ops = bus->ops; | ||
215 | bus->ops = ops; | ||
216 | rv = ops->read(bus, devfn, where, size, val); | ||
217 | bus->ops = my_ops; | ||
218 | spin_unlock_irqrestore(&inject_lock, flags); | 242 | spin_unlock_irqrestore(&inject_lock, flags); |
219 | return rv; | 243 | return rv; |
220 | } | 244 | } |
@@ -226,8 +250,6 @@ static int aer_inj_write_config(struct pci_bus *bus, unsigned int devfn, | |||
226 | struct aer_error *err; | 250 | struct aer_error *err; |
227 | unsigned long flags; | 251 | unsigned long flags; |
228 | int rw1cs; | 252 | int rw1cs; |
229 | struct pci_ops *ops; | ||
230 | struct pci_ops *my_ops; | ||
231 | int domain; | 253 | int domain; |
232 | int rv; | 254 | int rv; |
233 | 255 | ||
@@ -251,18 +273,7 @@ static int aer_inj_write_config(struct pci_bus *bus, unsigned int devfn, | |||
251 | return 0; | 273 | return 0; |
252 | } | 274 | } |
253 | out: | 275 | out: |
254 | ops = __find_pci_bus_ops(bus); | 276 | rv = aer_inj_write(bus, devfn, where, size, val); |
255 | /* | ||
256 | * pci_lock must already be held, so we can directly | ||
257 | * manipulate bus->ops. Many config access functions, | ||
258 | * including pci_generic_config_write() require the original | ||
259 | * bus->ops be installed to function, so temporarily put them | ||
260 | * back. | ||
261 | */ | ||
262 | my_ops = bus->ops; | ||
263 | bus->ops = ops; | ||
264 | rv = ops->write(bus, devfn, where, size, val); | ||
265 | bus->ops = my_ops; | ||
266 | spin_unlock_irqrestore(&inject_lock, flags); | 277 | spin_unlock_irqrestore(&inject_lock, flags); |
267 | return rv; | 278 | return rv; |
268 | } | 279 | } |
@@ -303,32 +314,13 @@ out: | |||
303 | return 0; | 314 | return 0; |
304 | } | 315 | } |
305 | 316 | ||
306 | static int find_aer_device_iter(struct device *device, void *data) | ||
307 | { | ||
308 | struct pcie_device **result = data; | ||
309 | struct pcie_device *pcie_dev; | ||
310 | |||
311 | if (device->bus == &pcie_port_bus_type) { | ||
312 | pcie_dev = to_pcie_device(device); | ||
313 | if (pcie_dev->service & PCIE_PORT_SERVICE_AER) { | ||
314 | *result = pcie_dev; | ||
315 | return 1; | ||
316 | } | ||
317 | } | ||
318 | return 0; | ||
319 | } | ||
320 | |||
321 | static int find_aer_device(struct pci_dev *dev, struct pcie_device **result) | ||
322 | { | ||
323 | return device_for_each_child(&dev->dev, result, find_aer_device_iter); | ||
324 | } | ||
325 | |||
326 | static int aer_inject(struct aer_error_inj *einj) | 317 | static int aer_inject(struct aer_error_inj *einj) |
327 | { | 318 | { |
328 | struct aer_error *err, *rperr; | 319 | struct aer_error *err, *rperr; |
329 | struct aer_error *err_alloc = NULL, *rperr_alloc = NULL; | 320 | struct aer_error *err_alloc = NULL, *rperr_alloc = NULL; |
330 | struct pci_dev *dev, *rpdev; | 321 | struct pci_dev *dev, *rpdev; |
331 | struct pcie_device *edev; | 322 | struct pcie_device *edev; |
323 | struct device *device; | ||
332 | unsigned long flags; | 324 | unsigned long flags; |
333 | unsigned int devfn = PCI_DEVFN(einj->dev, einj->fn); | 325 | unsigned int devfn = PCI_DEVFN(einj->dev, einj->fn); |
334 | int pos_cap_err, rp_pos_cap_err; | 326 | int pos_cap_err, rp_pos_cap_err; |
@@ -464,7 +456,9 @@ static int aer_inject(struct aer_error_inj *einj) | |||
464 | if (ret) | 456 | if (ret) |
465 | goto out_put; | 457 | goto out_put; |
466 | 458 | ||
467 | if (find_aer_device(rpdev, &edev)) { | 459 | device = pcie_port_find_device(rpdev, PCIE_PORT_SERVICE_AER); |
460 | if (device) { | ||
461 | edev = to_pcie_device(device); | ||
468 | if (!get_service_data(edev)) { | 462 | if (!get_service_data(edev)) { |
469 | dev_warn(&edev->device, | 463 | dev_warn(&edev->device, |
470 | "aer_inject: AER service is not initialized\n"); | 464 | "aer_inject: AER service is not initialized\n"); |
@@ -474,7 +468,9 @@ static int aer_inject(struct aer_error_inj *einj) | |||
474 | dev_info(&edev->device, | 468 | dev_info(&edev->device, |
475 | "aer_inject: Injecting errors %08x/%08x into device %s\n", | 469 | "aer_inject: Injecting errors %08x/%08x into device %s\n", |
476 | einj->cor_status, einj->uncor_status, pci_name(dev)); | 470 | einj->cor_status, einj->uncor_status, pci_name(dev)); |
477 | aer_irq(-1, edev); | 471 | local_irq_disable(); |
472 | generic_handle_irq(edev->irq); | ||
473 | local_irq_enable(); | ||
478 | } else { | 474 | } else { |
479 | pci_err(rpdev, "aer_inject: AER device not found\n"); | 475 | pci_err(rpdev, "aer_inject: AER device not found\n"); |
480 | ret = -ENODEV; | 476 | ret = -ENODEV; |
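The aer_inject.c change factors the duplicated `bus->ops` juggling into `aer_inj_read()`/`aer_inj_write()`: save the injector's ops, reinstall the original ops so config accessors that consult `bus->ops` work, call, then restore. A sketch of that save/swap/restore pattern (all names here are illustrative stand-ins for the PCI structures):

```c
#include <assert.h>

struct ops { int (*read)(int); };
struct bus { const struct ops *ops; };

static int real_read(int x) { return x + 1; }   /* the hardware accessor */
static int fake_read(int x) { return -x; }      /* the injector's override */

static const struct ops real_ops = { real_read };
static const struct ops fake_ops = { fake_read };

static int call_with_real_ops(struct bus *b, const struct ops *real, int arg)
{
	const struct ops *saved = b->ops;   /* remember the injected ops */
	int rv;

	b->ops = real;                      /* put the original ops back */
	rv = b->ops->read(arg);
	b->ops = saved;                     /* reinstall the injector */
	return rv;
}

static int demo(void)
{
	struct bus b = { .ops = &fake_ops };
	int rv = call_with_real_ops(&b, &real_ops, 41);

	/* 1 iff the real accessor ran and the injector was restored */
	return (rv == 42 && b.ops == &fake_ops) ? 1 : 0;
}
```

In the kernel this is only safe because `pci_lock` is already held around the swap, as the removed comments noted.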
diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
index 5326916715d2..dcb29cb76dc6 100644
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -895,7 +895,7 @@ void pcie_aspm_init_link_state(struct pci_dev *pdev) | |||
895 | struct pcie_link_state *link; | 895 | struct pcie_link_state *link; |
896 | int blacklist = !!pcie_aspm_sanity_check(pdev); | 896 | int blacklist = !!pcie_aspm_sanity_check(pdev); |
897 | 897 | ||
898 | if (!aspm_support_enabled) | 898 | if (!aspm_support_enabled || aspm_disabled) |
899 | return; | 899 | return; |
900 | 900 | ||
901 | if (pdev->link_state) | 901 | if (pdev->link_state) |
@@ -991,7 +991,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev) | |||
991 | * All PCIe functions are in one slot, remove one function will remove | 991 | * All PCIe functions are in one slot, remove one function will remove |
992 | * the whole slot, so just wait until we are the last function left. | 992 | * the whole slot, so just wait until we are the last function left. |
993 | */ | 993 | */ |
994 | if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices)) | 994 | if (!list_empty(&parent->subordinate->devices)) |
995 | goto out; | 995 | goto out; |
996 | 996 | ||
997 | link = parent->link_state; | 997 | link = parent->link_state; |
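The aspm.c hunk changes the "last function in the slot" test from `list_is_last()` to `list_empty()`: by the time `pcie_aspm_exit_link_state()` runs, the departing device has already been unlinked from `bus->devices`, so the correct question is whether the list is now empty, not whether an already-removed node is last. A minimal intrusive-list sketch (simplified stand-ins for `<linux/list.h>`):

```c
#include <assert.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
	n->next = h->next; n->prev = h;
	h->next->prev = n; h->next = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

static int last_function_check(void)
{
	struct list_head devices, fn0;

	list_init(&devices);
	list_add(&fn0, &devices);    /* one function on the bus */
	list_del(&fn0);              /* removal path already unlinked it */
	return list_empty(&devices); /* 1: safe to tear down link state */
}
```

With the old `list_is_last(&pdev->bus_list, ...)` form, the check consulted a node no longer on the list, which is exactly what the fix avoids.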
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index f03279fc87cd..e435d12e61a0 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -44,6 +44,58 @@ static const char * const rp_pio_error_string[] = { | |||
44 | "Memory Request Completion Timeout", /* Bit Position 18 */ | 44 | "Memory Request Completion Timeout", /* Bit Position 18 */ |
45 | }; | 45 | }; |
46 | 46 | ||
47 | static struct dpc_dev *to_dpc_dev(struct pci_dev *dev) | ||
48 | { | ||
49 | struct device *device; | ||
50 | |||
51 | device = pcie_port_find_device(dev, PCIE_PORT_SERVICE_DPC); | ||
52 | if (!device) | ||
53 | return NULL; | ||
54 | return get_service_data(to_pcie_device(device)); | ||
55 | } | ||
56 | |||
57 | void pci_save_dpc_state(struct pci_dev *dev) | ||
58 | { | ||
59 | struct dpc_dev *dpc; | ||
60 | struct pci_cap_saved_state *save_state; | ||
61 | u16 *cap; | ||
62 | |||
63 | if (!pci_is_pcie(dev)) | ||
64 | return; | ||
65 | |||
66 | dpc = to_dpc_dev(dev); | ||
67 | if (!dpc) | ||
68 | return; | ||
69 | |||
70 | save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC); | ||
71 | if (!save_state) | ||
72 | return; | ||
73 | |||
74 | cap = (u16 *)&save_state->cap.data[0]; | ||
75 | pci_read_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, cap); | ||
76 | } | ||
77 | |||
78 | void pci_restore_dpc_state(struct pci_dev *dev) | ||
79 | { | ||
80 | struct dpc_dev *dpc; | ||
81 | struct pci_cap_saved_state *save_state; | ||
82 | u16 *cap; | ||
83 | |||
84 | if (!pci_is_pcie(dev)) | ||
85 | return; | ||
86 | |||
87 | dpc = to_dpc_dev(dev); | ||
88 | if (!dpc) | ||
89 | return; | ||
90 | |||
91 | save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC); | ||
92 | if (!save_state) | ||
93 | return; | ||
94 | |||
95 | cap = (u16 *)&save_state->cap.data[0]; | ||
96 | pci_write_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, *cap); | ||
97 | } | ||
98 | |||
47 | static int dpc_wait_rp_inactive(struct dpc_dev *dpc) | 99 | static int dpc_wait_rp_inactive(struct dpc_dev *dpc) |
48 | { | 100 | { |
49 | unsigned long timeout = jiffies + HZ; | 101 | unsigned long timeout = jiffies + HZ; |
@@ -67,18 +119,13 @@ static int dpc_wait_rp_inactive(struct dpc_dev *dpc) | |||
67 | static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev) | 119 | static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev) |
68 | { | 120 | { |
69 | struct dpc_dev *dpc; | 121 | struct dpc_dev *dpc; |
70 | struct pcie_device *pciedev; | ||
71 | struct device *devdpc; | ||
72 | |||
73 | u16 cap; | 122 | u16 cap; |
74 | 123 | ||
75 | /* | 124 | /* |
76 | * DPC disables the Link automatically in hardware, so it has | 125 | * DPC disables the Link automatically in hardware, so it has |
77 | * already been reset by the time we get here. | 126 | * already been reset by the time we get here. |
78 | */ | 127 | */ |
79 | devdpc = pcie_port_find_device(pdev, PCIE_PORT_SERVICE_DPC); | 128 | dpc = to_dpc_dev(pdev); |
80 | pciedev = to_pcie_device(devdpc); | ||
81 | dpc = get_service_data(pciedev); | ||
82 | cap = dpc->cap_pos; | 129 | cap = dpc->cap_pos; |
83 | 130 | ||
84 | /* | 131 | /* |
@@ -93,10 +140,12 @@ static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev) | |||
93 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, | 140 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, |
94 | PCI_EXP_DPC_STATUS_TRIGGER); | 141 | PCI_EXP_DPC_STATUS_TRIGGER); |
95 | 142 | ||
143 | if (!pcie_wait_for_link(pdev, true)) | ||
144 | return PCI_ERS_RESULT_DISCONNECT; | ||
145 | |||
96 | return PCI_ERS_RESULT_RECOVERED; | 146 | return PCI_ERS_RESULT_RECOVERED; |
97 | } | 147 | } |
98 | 148 | ||
99 | |||
100 | static void dpc_process_rp_pio_error(struct dpc_dev *dpc) | 149 | static void dpc_process_rp_pio_error(struct dpc_dev *dpc) |
101 | { | 150 | { |
102 | struct device *dev = &dpc->dev->device; | 151 | struct device *dev = &dpc->dev->device; |
@@ -169,7 +218,7 @@ static irqreturn_t dpc_handler(int irq, void *context) | |||
169 | 218 | ||
170 | reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1; | 219 | reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1; |
171 | ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5; | 220 | ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5; |
172 | dev_warn(dev, "DPC %s detected, remove downstream devices\n", | 221 | dev_warn(dev, "DPC %s detected\n", |
173 | (reason == 0) ? "unmasked uncorrectable error" : | 222 | (reason == 0) ? "unmasked uncorrectable error" : |
174 | (reason == 1) ? "ERR_NONFATAL" : | 223 | (reason == 1) ? "ERR_NONFATAL" : |
175 | (reason == 2) ? "ERR_FATAL" : | 224 | (reason == 2) ? "ERR_FATAL" : |
@@ -186,7 +235,7 @@ static irqreturn_t dpc_handler(int irq, void *context) | |||
186 | } | 235 | } |
187 | 236 | ||
188 | /* We configure DPC so it only triggers on ERR_FATAL */ | 237 | /* We configure DPC so it only triggers on ERR_FATAL */ |
189 | pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_DPC); | 238 | pcie_do_recovery(pdev, pci_channel_io_frozen, PCIE_PORT_SERVICE_DPC); |
190 | 239 | ||
191 | return IRQ_HANDLED; | 240 | return IRQ_HANDLED; |
192 | } | 241 | } |
@@ -259,6 +308,8 @@ static int dpc_probe(struct pcie_device *dev) | |||
259 | FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP), | 308 | FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP), |
260 | FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size, | 309 | FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size, |
261 | FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE)); | 310 | FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE)); |
311 | |||
312 | pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16)); | ||
262 | return status; | 313 | return status; |
263 | } | 314 | } |
264 | 315 | ||
@@ -282,8 +333,7 @@ static struct pcie_port_service_driver dpcdriver = { | |||
282 | .reset_link = dpc_reset_link, | 333 | .reset_link = dpc_reset_link, |
283 | }; | 334 | }; |
284 | 335 | ||
285 | static int __init dpc_service_init(void) | 336 | int __init pcie_dpc_init(void) |
286 | { | 337 | { |
287 | return pcie_port_service_register(&dpcdriver); | 338 | return pcie_port_service_register(&dpcdriver); |
288 | } | 339 | } |
289 | device_initcall(dpc_service_init); | ||
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c index 708fd3a0d646..773197a12568 100644 --- a/drivers/pci/pcie/err.c +++ b/drivers/pci/pcie/err.c | |||
@@ -12,18 +12,12 @@ | |||
12 | 12 | ||
13 | #include <linux/pci.h> | 13 | #include <linux/pci.h> |
14 | #include <linux/module.h> | 14 | #include <linux/module.h> |
15 | #include <linux/pci.h> | ||
16 | #include <linux/kernel.h> | 15 | #include <linux/kernel.h> |
17 | #include <linux/errno.h> | 16 | #include <linux/errno.h> |
18 | #include <linux/aer.h> | 17 | #include <linux/aer.h> |
19 | #include "portdrv.h" | 18 | #include "portdrv.h" |
20 | #include "../pci.h" | 19 | #include "../pci.h" |
21 | 20 | ||
22 | struct aer_broadcast_data { | ||
23 | enum pci_channel_state state; | ||
24 | enum pci_ers_result result; | ||
25 | }; | ||
26 | |||
27 | static pci_ers_result_t merge_result(enum pci_ers_result orig, | 21 | static pci_ers_result_t merge_result(enum pci_ers_result orig, |
28 | enum pci_ers_result new) | 22 | enum pci_ers_result new) |
29 | { | 23 | { |
@@ -49,66 +43,52 @@ static pci_ers_result_t merge_result(enum pci_ers_result orig, | |||
49 | return orig; | 43 | return orig; |
50 | } | 44 | } |
51 | 45 | ||
52 | static int report_error_detected(struct pci_dev *dev, void *data) | 46 | static int report_error_detected(struct pci_dev *dev, |
47 | enum pci_channel_state state, | ||
48 | enum pci_ers_result *result) | ||
53 | { | 49 | { |
54 | pci_ers_result_t vote; | 50 | pci_ers_result_t vote; |
55 | const struct pci_error_handlers *err_handler; | 51 | const struct pci_error_handlers *err_handler; |
56 | struct aer_broadcast_data *result_data; | ||
57 | |||
58 | result_data = (struct aer_broadcast_data *) data; | ||
59 | 52 | ||
60 | device_lock(&dev->dev); | 53 | device_lock(&dev->dev); |
61 | dev->error_state = result_data->state; | 54 | if (!pci_dev_set_io_state(dev, state) || |
62 | 55 | !dev->driver || | |
63 | if (!dev->driver || | ||
64 | !dev->driver->err_handler || | 56 | !dev->driver->err_handler || |
65 | !dev->driver->err_handler->error_detected) { | 57 | !dev->driver->err_handler->error_detected) { |
66 | if (result_data->state == pci_channel_io_frozen && | ||
67 | dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) { | ||
68 | /* | ||
69 | * In case of fatal recovery, if one of down- | ||
70 | * stream device has no driver. We might be | ||
71 | * unable to recover because a later insmod | ||
72 | * of a driver for this device is unaware of | ||
73 | * its hw state. | ||
74 | */ | ||
75 | pci_printk(KERN_DEBUG, dev, "device has %s\n", | ||
76 | dev->driver ? | ||
77 | "no AER-aware driver" : "no driver"); | ||
78 | } | ||
79 | |||
80 | /* | 58 | /* |
81 | * If there's any device in the subtree that does not | 59 | * If any device in the subtree does not have an error_detected |
82 | * have an error_detected callback, returning | 60 | * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent |
83 | * PCI_ERS_RESULT_NO_AER_DRIVER prevents calling of | 61 | * error callbacks of "any" device in the subtree, and will |
84 | * the subsequent mmio_enabled/slot_reset/resume | 62 | * exit in the disconnected error state. |
85 | * callbacks of "any" device in the subtree. All the | ||
86 | * devices in the subtree are left in the error state | ||
87 | * without recovery. | ||
88 | */ | 63 | */ |
89 | |||
90 | if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) | 64 | if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) |
91 | vote = PCI_ERS_RESULT_NO_AER_DRIVER; | 65 | vote = PCI_ERS_RESULT_NO_AER_DRIVER; |
92 | else | 66 | else |
93 | vote = PCI_ERS_RESULT_NONE; | 67 | vote = PCI_ERS_RESULT_NONE; |
94 | } else { | 68 | } else { |
95 | err_handler = dev->driver->err_handler; | 69 | err_handler = dev->driver->err_handler; |
96 | vote = err_handler->error_detected(dev, result_data->state); | 70 | vote = err_handler->error_detected(dev, state); |
97 | pci_uevent_ers(dev, PCI_ERS_RESULT_NONE); | ||
98 | } | 71 | } |
99 | 72 | pci_uevent_ers(dev, vote); | |
100 | result_data->result = merge_result(result_data->result, vote); | 73 | *result = merge_result(*result, vote); |
101 | device_unlock(&dev->dev); | 74 | device_unlock(&dev->dev); |
102 | return 0; | 75 | return 0; |
103 | } | 76 | } |
104 | 77 | ||
78 | static int report_frozen_detected(struct pci_dev *dev, void *data) | ||
79 | { | ||
80 | return report_error_detected(dev, pci_channel_io_frozen, data); | ||
81 | } | ||
82 | |||
83 | static int report_normal_detected(struct pci_dev *dev, void *data) | ||
84 | { | ||
85 | return report_error_detected(dev, pci_channel_io_normal, data); | ||
86 | } | ||
87 | |||
105 | static int report_mmio_enabled(struct pci_dev *dev, void *data) | 88 | static int report_mmio_enabled(struct pci_dev *dev, void *data) |
106 | { | 89 | { |
107 | pci_ers_result_t vote; | 90 | pci_ers_result_t vote, *result = data; |
108 | const struct pci_error_handlers *err_handler; | 91 | const struct pci_error_handlers *err_handler; |
109 | struct aer_broadcast_data *result_data; | ||
110 | |||
111 | result_data = (struct aer_broadcast_data *) data; | ||
112 | 92 | ||
113 | device_lock(&dev->dev); | 93 | device_lock(&dev->dev); |
114 | if (!dev->driver || | 94 | if (!dev->driver || |
@@ -118,7 +98,7 @@ static int report_mmio_enabled(struct pci_dev *dev, void *data) | |||
118 | 98 | ||
119 | err_handler = dev->driver->err_handler; | 99 | err_handler = dev->driver->err_handler; |
120 | vote = err_handler->mmio_enabled(dev); | 100 | vote = err_handler->mmio_enabled(dev); |
121 | result_data->result = merge_result(result_data->result, vote); | 101 | *result = merge_result(*result, vote); |
122 | out: | 102 | out: |
123 | device_unlock(&dev->dev); | 103 | device_unlock(&dev->dev); |
124 | return 0; | 104 | return 0; |
@@ -126,11 +106,8 @@ out: | |||
126 | 106 | ||
127 | static int report_slot_reset(struct pci_dev *dev, void *data) | 107 | static int report_slot_reset(struct pci_dev *dev, void *data) |
128 | { | 108 | { |
129 | pci_ers_result_t vote; | 109 | pci_ers_result_t vote, *result = data; |
130 | const struct pci_error_handlers *err_handler; | 110 | const struct pci_error_handlers *err_handler; |
131 | struct aer_broadcast_data *result_data; | ||
132 | |||
133 | result_data = (struct aer_broadcast_data *) data; | ||
134 | 111 | ||
135 | device_lock(&dev->dev); | 112 | device_lock(&dev->dev); |
136 | if (!dev->driver || | 113 | if (!dev->driver || |
@@ -140,7 +117,7 @@ static int report_slot_reset(struct pci_dev *dev, void *data) | |||
140 | 117 | ||
141 | err_handler = dev->driver->err_handler; | 118 | err_handler = dev->driver->err_handler; |
142 | vote = err_handler->slot_reset(dev); | 119 | vote = err_handler->slot_reset(dev); |
143 | result_data->result = merge_result(result_data->result, vote); | 120 | *result = merge_result(*result, vote); |
144 | out: | 121 | out: |
145 | device_unlock(&dev->dev); | 122 | device_unlock(&dev->dev); |
146 | return 0; | 123 | return 0; |
@@ -151,17 +128,16 @@ static int report_resume(struct pci_dev *dev, void *data) | |||
151 | const struct pci_error_handlers *err_handler; | 128 | const struct pci_error_handlers *err_handler; |
152 | 129 | ||
153 | device_lock(&dev->dev); | 130 | device_lock(&dev->dev); |
154 | dev->error_state = pci_channel_io_normal; | 131 | if (!pci_dev_set_io_state(dev, pci_channel_io_normal) || |
155 | 132 | !dev->driver || | |
156 | if (!dev->driver || | ||
157 | !dev->driver->err_handler || | 133 | !dev->driver->err_handler || |
158 | !dev->driver->err_handler->resume) | 134 | !dev->driver->err_handler->resume) |
159 | goto out; | 135 | goto out; |
160 | 136 | ||
161 | err_handler = dev->driver->err_handler; | 137 | err_handler = dev->driver->err_handler; |
162 | err_handler->resume(dev); | 138 | err_handler->resume(dev); |
163 | pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED); | ||
164 | out: | 139 | out: |
140 | pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED); | ||
165 | device_unlock(&dev->dev); | 141 | device_unlock(&dev->dev); |
166 | return 0; | 142 | return 0; |
167 | } | 143 | } |
@@ -177,207 +153,86 @@ static pci_ers_result_t default_reset_link(struct pci_dev *dev) | |||
177 | { | 153 | { |
178 | int rc; | 154 | int rc; |
179 | 155 | ||
180 | rc = pci_bridge_secondary_bus_reset(dev); | 156 | rc = pci_bus_error_reset(dev); |
181 | pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n"); | 157 | pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n"); |
182 | return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; | 158 | return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; |
183 | } | 159 | } |
184 | 160 | ||
185 | static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service) | 161 | static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service) |
186 | { | 162 | { |
187 | struct pci_dev *udev; | ||
188 | pci_ers_result_t status; | 163 | pci_ers_result_t status; |
189 | struct pcie_port_service_driver *driver = NULL; | 164 | struct pcie_port_service_driver *driver = NULL; |
190 | 165 | ||
191 | if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { | 166 | driver = pcie_port_find_service(dev, service); |
192 | /* Reset this port for all subordinates */ | ||
193 | udev = dev; | ||
194 | } else { | ||
195 | /* Reset the upstream component (likely downstream port) */ | ||
196 | udev = dev->bus->self; | ||
197 | } | ||
198 | |||
199 | /* Use the aer driver of the component firstly */ | ||
200 | driver = pcie_port_find_service(udev, service); | ||
201 | |||
202 | if (driver && driver->reset_link) { | 167 | if (driver && driver->reset_link) { |
203 | status = driver->reset_link(udev); | 168 | status = driver->reset_link(dev); |
204 | } else if (udev->has_secondary_link) { | 169 | } else if (dev->has_secondary_link) { |
205 | status = default_reset_link(udev); | 170 | status = default_reset_link(dev); |
206 | } else { | 171 | } else { |
207 | pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n", | 172 | pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n", |
208 | pci_name(udev)); | 173 | pci_name(dev)); |
209 | return PCI_ERS_RESULT_DISCONNECT; | 174 | return PCI_ERS_RESULT_DISCONNECT; |
210 | } | 175 | } |
211 | 176 | ||
212 | if (status != PCI_ERS_RESULT_RECOVERED) { | 177 | if (status != PCI_ERS_RESULT_RECOVERED) { |
213 | pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n", | 178 | pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n", |
214 | pci_name(udev)); | 179 | pci_name(dev)); |
215 | return PCI_ERS_RESULT_DISCONNECT; | 180 | return PCI_ERS_RESULT_DISCONNECT; |
216 | } | 181 | } |
217 | 182 | ||
218 | return status; | 183 | return status; |
219 | } | 184 | } |
220 | 185 | ||
221 | /** | 186 | void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state, |
222 | * broadcast_error_message - handle message broadcast to downstream drivers | 187 | u32 service) |
223 | * @dev: pointer to from where in a hierarchy message is broadcasted down | ||
224 | * @state: error state | ||
225 | * @error_mesg: message to print | ||
226 | * @cb: callback to be broadcasted | ||
227 | * | ||
228 | * Invoked during error recovery process. Once being invoked, the content | ||
229 | * of error severity will be broadcasted to all downstream drivers in a | ||
230 | * hierarchy in question. | ||
231 | */ | ||
232 | static pci_ers_result_t broadcast_error_message(struct pci_dev *dev, | ||
233 | enum pci_channel_state state, | ||
234 | char *error_mesg, | ||
235 | int (*cb)(struct pci_dev *, void *)) | ||
236 | { | ||
237 | struct aer_broadcast_data result_data; | ||
238 | |||
239 | pci_printk(KERN_DEBUG, dev, "broadcast %s message\n", error_mesg); | ||
240 | result_data.state = state; | ||
241 | if (cb == report_error_detected) | ||
242 | result_data.result = PCI_ERS_RESULT_CAN_RECOVER; | ||
243 | else | ||
244 | result_data.result = PCI_ERS_RESULT_RECOVERED; | ||
245 | |||
246 | if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { | ||
247 | /* | ||
248 | * If the error is reported by a bridge, we think this error | ||
249 | * is related to the downstream link of the bridge, so we | ||
250 | * do error recovery on all subordinates of the bridge instead | ||
251 | * of the bridge and clear the error status of the bridge. | ||
252 | */ | ||
253 | if (cb == report_error_detected) | ||
254 | dev->error_state = state; | ||
255 | pci_walk_bus(dev->subordinate, cb, &result_data); | ||
256 | if (cb == report_resume) { | ||
257 | pci_aer_clear_device_status(dev); | ||
258 | pci_cleanup_aer_uncorrect_error_status(dev); | ||
259 | dev->error_state = pci_channel_io_normal; | ||
260 | } | ||
261 | } else { | ||
262 | /* | ||
263 | * If the error is reported by an end point, we think this | ||
264 | * error is related to the upstream link of the end point. | ||
265 | * The error is non fatal so the bus is ok; just invoke | ||
266 | * the callback for the function that logged the error. | ||
267 | */ | ||
268 | cb(dev, &result_data); | ||
269 | } | ||
270 | |||
271 | return result_data.result; | ||
272 | } | ||
273 | |||
274 | /** | ||
275 | * pcie_do_fatal_recovery - handle fatal error recovery process | ||
276 | * @dev: pointer to a pci_dev data structure of agent detecting an error | ||
277 | * | ||
278 | * Invoked when an error is fatal. Once being invoked, removes the devices | ||
279 | * beneath this AER agent, followed by reset link e.g. secondary bus reset | ||
280 | * followed by re-enumeration of devices. | ||
281 | */ | ||
282 | void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service) | ||
283 | { | 188 | { |
284 | struct pci_dev *udev; | 189 | pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER; |
285 | struct pci_bus *parent; | 190 | struct pci_bus *bus; |
286 | struct pci_dev *pdev, *temp; | 191 | |
287 | pci_ers_result_t result; | 192 | /* |
288 | 193 | * Error recovery runs on all subordinates of the first downstream port. | |
289 | if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) | 194 | * If the downstream port detected the error, it is cleared at the end. |
290 | udev = dev; | 195 | */ |
196 | if (!(pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || | ||
197 | pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM)) | ||
198 | dev = dev->bus->self; | ||
199 | bus = dev->subordinate; | ||
200 | |||
201 | pci_dbg(dev, "broadcast error_detected message\n"); | ||
202 | if (state == pci_channel_io_frozen) | ||
203 | pci_walk_bus(bus, report_frozen_detected, &status); | ||
291 | else | 204 | else |
292 | udev = dev->bus->self; | 205 | pci_walk_bus(bus, report_normal_detected, &status); |
293 | |||
294 | parent = udev->subordinate; | ||
295 | pci_lock_rescan_remove(); | ||
296 | pci_dev_get(dev); | ||
297 | list_for_each_entry_safe_reverse(pdev, temp, &parent->devices, | ||
298 | bus_list) { | ||
299 | pci_dev_get(pdev); | ||
300 | pci_dev_set_disconnected(pdev, NULL); | ||
301 | if (pci_has_subordinate(pdev)) | ||
302 | pci_walk_bus(pdev->subordinate, | ||
303 | pci_dev_set_disconnected, NULL); | ||
304 | pci_stop_and_remove_bus_device(pdev); | ||
305 | pci_dev_put(pdev); | ||
306 | } | ||
307 | |||
308 | result = reset_link(udev, service); | ||
309 | 206 | ||
310 | if ((service == PCIE_PORT_SERVICE_AER) && | 207 | if (state == pci_channel_io_frozen && |
311 | (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)) { | 208 | reset_link(dev, service) != PCI_ERS_RESULT_RECOVERED) |
312 | /* | 209 | goto failed; |
313 | * If the error is reported by a bridge, we think this error | ||
314 | * is related to the downstream link of the bridge, so we | ||
315 | * do error recovery on all subordinates of the bridge instead | ||
316 | * of the bridge and clear the error status of the bridge. | ||
317 | */ | ||
318 | pci_aer_clear_fatal_status(dev); | ||
319 | pci_aer_clear_device_status(dev); | ||
320 | } | ||
321 | 210 | ||
322 | if (result == PCI_ERS_RESULT_RECOVERED) { | 211 | if (status == PCI_ERS_RESULT_CAN_RECOVER) { |
323 | if (pcie_wait_for_link(udev, true)) | 212 | status = PCI_ERS_RESULT_RECOVERED; |
324 | pci_rescan_bus(udev->bus); | 213 | pci_dbg(dev, "broadcast mmio_enabled message\n"); |
325 | pci_info(dev, "Device recovery from fatal error successful\n"); | 214 | pci_walk_bus(bus, report_mmio_enabled, &status); |
326 | } else { | ||
327 | pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT); | ||
328 | pci_info(dev, "Device recovery from fatal error failed\n"); | ||
329 | } | 215 | } |
330 | 216 | ||
331 | pci_dev_put(dev); | ||
332 | pci_unlock_rescan_remove(); | ||
333 | } | ||
334 | |||
335 | /** | ||
336 | * pcie_do_nonfatal_recovery - handle nonfatal error recovery process | ||
337 | * @dev: pointer to a pci_dev data structure of agent detecting an error | ||
338 | * | ||
339 | * Invoked when an error is nonfatal/fatal. Once being invoked, broadcast | ||
340 | * error detected message to all downstream drivers within a hierarchy in | ||
341 | * question and return the returned code. | ||
342 | */ | ||
343 | void pcie_do_nonfatal_recovery(struct pci_dev *dev) | ||
344 | { | ||
345 | pci_ers_result_t status; | ||
346 | enum pci_channel_state state; | ||
347 | |||
348 | state = pci_channel_io_normal; | ||
349 | |||
350 | status = broadcast_error_message(dev, | ||
351 | state, | ||
352 | "error_detected", | ||
353 | report_error_detected); | ||
354 | |||
355 | if (status == PCI_ERS_RESULT_CAN_RECOVER) | ||
356 | status = broadcast_error_message(dev, | ||
357 | state, | ||
358 | "mmio_enabled", | ||
359 | report_mmio_enabled); | ||
360 | |||
361 | if (status == PCI_ERS_RESULT_NEED_RESET) { | 217 | if (status == PCI_ERS_RESULT_NEED_RESET) { |
362 | /* | 218 | /* |
363 | * TODO: Should call platform-specific | 219 | * TODO: Should call platform-specific |
364 | * functions to reset slot before calling | 220 | * functions to reset slot before calling |
365 | * drivers' slot_reset callbacks? | 221 | * drivers' slot_reset callbacks? |
366 | */ | 222 | */ |
367 | status = broadcast_error_message(dev, | 223 | status = PCI_ERS_RESULT_RECOVERED; |
368 | state, | 224 | pci_dbg(dev, "broadcast slot_reset message\n"); |
369 | "slot_reset", | 225 | pci_walk_bus(bus, report_slot_reset, &status); |
370 | report_slot_reset); | ||
371 | } | 226 | } |
372 | 227 | ||
373 | if (status != PCI_ERS_RESULT_RECOVERED) | 228 | if (status != PCI_ERS_RESULT_RECOVERED) |
374 | goto failed; | 229 | goto failed; |
375 | 230 | ||
376 | broadcast_error_message(dev, | 231 | pci_dbg(dev, "broadcast resume message\n"); |
377 | state, | 232 | pci_walk_bus(bus, report_resume, &status); |
378 | "resume", | ||
379 | report_resume); | ||
380 | 233 | ||
234 | pci_aer_clear_device_status(dev); | ||
235 | pci_cleanup_aer_uncorrect_error_status(dev); | ||
381 | pci_info(dev, "AER: Device recovery successful\n"); | 236 | pci_info(dev, "AER: Device recovery successful\n"); |
382 | return; | 237 | return; |
383 | 238 | ||
diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c index 3ed67676ea2a..0dbcf429089f 100644 --- a/drivers/pci/pcie/pme.c +++ b/drivers/pci/pcie/pme.c | |||
@@ -432,6 +432,31 @@ static void pcie_pme_remove(struct pcie_device *srv) | |||
432 | kfree(get_service_data(srv)); | 432 | kfree(get_service_data(srv)); |
433 | } | 433 | } |
434 | 434 | ||
435 | static int pcie_pme_runtime_suspend(struct pcie_device *srv) | ||
436 | { | ||
437 | struct pcie_pme_service_data *data = get_service_data(srv); | ||
438 | |||
439 | spin_lock_irq(&data->lock); | ||
440 | pcie_pme_interrupt_enable(srv->port, false); | ||
441 | pcie_clear_root_pme_status(srv->port); | ||
442 | data->noirq = true; | ||
443 | spin_unlock_irq(&data->lock); | ||
444 | |||
445 | return 0; | ||
446 | } | ||
447 | |||
448 | static int pcie_pme_runtime_resume(struct pcie_device *srv) | ||
449 | { | ||
450 | struct pcie_pme_service_data *data = get_service_data(srv); | ||
451 | |||
452 | spin_lock_irq(&data->lock); | ||
453 | pcie_pme_interrupt_enable(srv->port, true); | ||
454 | data->noirq = false; | ||
455 | spin_unlock_irq(&data->lock); | ||
456 | |||
457 | return 0; | ||
458 | } | ||
459 | |||
435 | static struct pcie_port_service_driver pcie_pme_driver = { | 460 | static struct pcie_port_service_driver pcie_pme_driver = { |
436 | .name = "pcie_pme", | 461 | .name = "pcie_pme", |
437 | .port_type = PCI_EXP_TYPE_ROOT_PORT, | 462 | .port_type = PCI_EXP_TYPE_ROOT_PORT, |
@@ -439,6 +464,8 @@ static struct pcie_port_service_driver pcie_pme_driver = { | |||
439 | 464 | ||
440 | .probe = pcie_pme_probe, | 465 | .probe = pcie_pme_probe, |
441 | .suspend = pcie_pme_suspend, | 466 | .suspend = pcie_pme_suspend, |
467 | .runtime_suspend = pcie_pme_runtime_suspend, | ||
468 | .runtime_resume = pcie_pme_runtime_resume, | ||
442 | .resume = pcie_pme_resume, | 469 | .resume = pcie_pme_resume, |
443 | .remove = pcie_pme_remove, | 470 | .remove = pcie_pme_remove, |
444 | }; | 471 | }; |
@@ -446,8 +473,7 @@ static struct pcie_port_service_driver pcie_pme_driver = { | |||
446 | /** | 473 | /** |
447 | * pcie_pme_service_init - Register the PCIe PME service driver. | 474 | * pcie_pme_service_init - Register the PCIe PME service driver. |
448 | */ | 475 | */ |
449 | static int __init pcie_pme_service_init(void) | 476 | int __init pcie_pme_init(void) |
450 | { | 477 | { |
451 | return pcie_port_service_register(&pcie_pme_driver); | 478 | return pcie_port_service_register(&pcie_pme_driver); |
452 | } | 479 | } |
453 | device_initcall(pcie_pme_service_init); | ||
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h index d59afa42fc14..e495f04394d0 100644 --- a/drivers/pci/pcie/portdrv.h +++ b/drivers/pci/pcie/portdrv.h | |||
@@ -23,6 +23,30 @@ | |||
23 | 23 | ||
24 | #define PCIE_PORT_DEVICE_MAXSERVICES 4 | 24 | #define PCIE_PORT_DEVICE_MAXSERVICES 4 |
25 | 25 | ||
26 | #ifdef CONFIG_PCIEAER | ||
27 | int pcie_aer_init(void); | ||
28 | #else | ||
29 | static inline int pcie_aer_init(void) { return 0; } | ||
30 | #endif | ||
31 | |||
32 | #ifdef CONFIG_HOTPLUG_PCI_PCIE | ||
33 | int pcie_hp_init(void); | ||
34 | #else | ||
35 | static inline int pcie_hp_init(void) { return 0; } | ||
36 | #endif | ||
37 | |||
38 | #ifdef CONFIG_PCIE_PME | ||
39 | int pcie_pme_init(void); | ||
40 | #else | ||
41 | static inline int pcie_pme_init(void) { return 0; } | ||
42 | #endif | ||
43 | |||
44 | #ifdef CONFIG_PCIE_DPC | ||
45 | int pcie_dpc_init(void); | ||
46 | #else | ||
47 | static inline int pcie_dpc_init(void) { return 0; } | ||
48 | #endif | ||
49 | |||
26 | /* Port Type */ | 50 | /* Port Type */ |
27 | #define PCIE_ANY_PORT (~0) | 51 | #define PCIE_ANY_PORT (~0) |
28 | 52 | ||
@@ -52,6 +76,8 @@ struct pcie_port_service_driver { | |||
52 | int (*suspend) (struct pcie_device *dev); | 76 | int (*suspend) (struct pcie_device *dev); |
53 | int (*resume_noirq) (struct pcie_device *dev); | 77 | int (*resume_noirq) (struct pcie_device *dev); |
54 | int (*resume) (struct pcie_device *dev); | 78 | int (*resume) (struct pcie_device *dev); |
79 | int (*runtime_suspend) (struct pcie_device *dev); | ||
80 | int (*runtime_resume) (struct pcie_device *dev); | ||
55 | 81 | ||
56 | /* Device driver may resume normal operations */ | 82 | /* Device driver may resume normal operations */ |
57 | void (*error_resume)(struct pci_dev *dev); | 83 | void (*error_resume)(struct pci_dev *dev); |
@@ -85,6 +111,8 @@ int pcie_port_device_register(struct pci_dev *dev); | |||
85 | int pcie_port_device_suspend(struct device *dev); | 111 | int pcie_port_device_suspend(struct device *dev); |
86 | int pcie_port_device_resume_noirq(struct device *dev); | 112 | int pcie_port_device_resume_noirq(struct device *dev); |
87 | int pcie_port_device_resume(struct device *dev); | 113 | int pcie_port_device_resume(struct device *dev); |
114 | int pcie_port_device_runtime_suspend(struct device *dev); | ||
115 | int pcie_port_device_runtime_resume(struct device *dev); | ||
88 | #endif | 116 | #endif |
89 | void pcie_port_device_remove(struct pci_dev *dev); | 117 | void pcie_port_device_remove(struct pci_dev *dev); |
90 | int __must_check pcie_port_bus_register(void); | 118 | int __must_check pcie_port_bus_register(void); |
@@ -123,10 +151,6 @@ static inline int pcie_aer_get_firmware_first(struct pci_dev *pci_dev) | |||
123 | } | 151 | } |
124 | #endif | 152 | #endif |
125 | 153 | ||
126 | #ifdef CONFIG_PCIEAER | ||
127 | irqreturn_t aer_irq(int irq, void *context); | ||
128 | #endif | ||
129 | |||
130 | struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev, | 154 | struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev, |
131 | u32 service); | 155 | u32 service); |
132 | struct device *pcie_port_find_device(struct pci_dev *dev, u32 service); | 156 | struct device *pcie_port_find_device(struct pci_dev *dev, u32 service); |
diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c index 7c37d815229e..f458ac9cb70c 100644 --- a/drivers/pci/pcie/portdrv_core.c +++ b/drivers/pci/pcie/portdrv_core.c | |||
@@ -395,6 +395,26 @@ int pcie_port_device_resume(struct device *dev) | |||
395 | size_t off = offsetof(struct pcie_port_service_driver, resume); | 395 | size_t off = offsetof(struct pcie_port_service_driver, resume); |
396 | return device_for_each_child(dev, &off, pm_iter); | 396 | return device_for_each_child(dev, &off, pm_iter); |
397 | } | 397 | } |
398 | |||
399 | /** | ||
400 | * pcie_port_device_runtime_suspend - runtime suspend port services | ||
401 | * @dev: PCI Express port to handle | ||
402 | */ | ||
403 | int pcie_port_device_runtime_suspend(struct device *dev) | ||
404 | { | ||
405 | size_t off = offsetof(struct pcie_port_service_driver, runtime_suspend); | ||
406 | return device_for_each_child(dev, &off, pm_iter); | ||
407 | } | ||
408 | |||
409 | /** | ||
410 | * pcie_port_device_runtime_resume - runtime resume port services | ||
411 | * @dev: PCI Express port to handle | ||
412 | */ | ||
413 | int pcie_port_device_runtime_resume(struct device *dev) | ||
414 | { | ||
415 | size_t off = offsetof(struct pcie_port_service_driver, runtime_resume); | ||
416 | return device_for_each_child(dev, &off, pm_iter); | ||
417 | } | ||
398 | #endif /* PM */ | 418 | #endif /* PM */ |
399 | 419 | ||
400 | static int remove_iter(struct device *dev, void *data) | 420 | static int remove_iter(struct device *dev, void *data) |
@@ -466,6 +486,7 @@ struct device *pcie_port_find_device(struct pci_dev *dev, | |||
466 | device = pdrvs.dev; | 486 | device = pdrvs.dev; |
467 | return device; | 487 | return device; |
468 | } | 488 | } |
489 | EXPORT_SYMBOL_GPL(pcie_port_find_device); | ||
469 | 490 | ||
470 | /** | 491 | /** |
471 | * pcie_port_device_remove - unregister PCI Express port service devices | 492 | * pcie_port_device_remove - unregister PCI Express port service devices |
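The new `pcie_port_device_runtime_suspend()`/`_resume()` helpers reuse the existing `pm_iter()` dispatch: each passes the `offsetof()` of the relevant callback within `struct pcie_port_service_driver`, and the iterator invokes whatever function pointer sits at that offset in each child service driver, skipping NULLs. A minimal userspace sketch of that offset-based dispatch (names here are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for pcie_port_service_driver: a table of
 * optional PM callbacks, any of which may be NULL. */
struct service_ops {
	int (*suspend)(void);
	int (*resume)(void);
	int (*runtime_suspend)(void);
	int (*runtime_resume)(void);
};

/* Mirrors what pm_iter() does: given a byte offset into the ops
 * struct, fetch the callback stored there and call it if present. */
static int call_service_cb(struct service_ops *ops, size_t off)
{
	int (**cb)(void) = (int (**)(void))((char *)ops + off);

	if (*cb)
		return (*cb)();
	return 0;	/* callback not implemented: treat as success */
}

static int fake_runtime_suspend(void)
{
	return 42;	/* sentinel so a test can see the call happened */
}
```

One `pm_iter()` plus a per-phase `offsetof()` keeps the four suspend/resume entry points from duplicating the child-walking loop.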
diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index eef22dc29140..0acca3596807 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -45,12 +45,10 @@ __setup("pcie_ports=", pcie_port_setup); | |||
45 | #ifdef CONFIG_PM | 45 | #ifdef CONFIG_PM |
46 | static int pcie_port_runtime_suspend(struct device *dev) | 46 | static int pcie_port_runtime_suspend(struct device *dev) |
47 | { | 47 | { |
48 | return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY; | 48 | if (!to_pci_dev(dev)->bridge_d3) |
49 | } | 49 | return -EBUSY; |
50 | 50 | ||
51 | static int pcie_port_runtime_resume(struct device *dev) | 51 | return pcie_port_device_runtime_suspend(dev); |
52 | { | ||
53 | return 0; | ||
54 | } | 52 | } |
55 | 53 | ||
56 | static int pcie_port_runtime_idle(struct device *dev) | 54 | static int pcie_port_runtime_idle(struct device *dev) |
@@ -73,7 +71,7 @@ static const struct dev_pm_ops pcie_portdrv_pm_ops = { | |||
73 | .restore_noirq = pcie_port_device_resume_noirq, | 71 | .restore_noirq = pcie_port_device_resume_noirq, |
74 | .restore = pcie_port_device_resume, | 72 | .restore = pcie_port_device_resume, |
75 | .runtime_suspend = pcie_port_runtime_suspend, | 73 | .runtime_suspend = pcie_port_runtime_suspend, |
76 | .runtime_resume = pcie_port_runtime_resume, | 74 | .runtime_resume = pcie_port_device_runtime_resume, |
77 | .runtime_idle = pcie_port_runtime_idle, | 75 | .runtime_idle = pcie_port_runtime_idle, |
78 | }; | 76 | }; |
79 | 77 | ||
@@ -109,8 +107,8 @@ static int pcie_portdrv_probe(struct pci_dev *dev, | |||
109 | 107 | ||
110 | pci_save_state(dev); | 108 | pci_save_state(dev); |
111 | 109 | ||
112 | dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_SMART_SUSPEND | | 110 | dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NEVER_SKIP | |
113 | DPM_FLAG_LEAVE_SUSPENDED); | 111 | DPM_FLAG_SMART_SUSPEND); |
114 | 112 | ||
115 | if (pci_bridge_d3_possible(dev)) { | 113 | if (pci_bridge_d3_possible(dev)) { |
116 | /* | 114 | /* |
@@ -146,6 +144,13 @@ static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev, | |||
146 | return PCI_ERS_RESULT_CAN_RECOVER; | 144 | return PCI_ERS_RESULT_CAN_RECOVER; |
147 | } | 145 | } |
148 | 146 | ||
147 | static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev) | ||
148 | { | ||
149 | pci_restore_state(dev); | ||
150 | pci_save_state(dev); | ||
151 | return PCI_ERS_RESULT_RECOVERED; | ||
152 | } | ||
153 | |||
149 | static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev) | 154 | static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev) |
150 | { | 155 | { |
151 | return PCI_ERS_RESULT_RECOVERED; | 156 | return PCI_ERS_RESULT_RECOVERED; |
@@ -185,6 +190,7 @@ static const struct pci_device_id port_pci_ids[] = { { | |||
185 | 190 | ||
186 | static const struct pci_error_handlers pcie_portdrv_err_handler = { | 191 | static const struct pci_error_handlers pcie_portdrv_err_handler = { |
187 | .error_detected = pcie_portdrv_error_detected, | 192 | .error_detected = pcie_portdrv_error_detected, |
193 | .slot_reset = pcie_portdrv_slot_reset, | ||
188 | .mmio_enabled = pcie_portdrv_mmio_enabled, | 194 | .mmio_enabled = pcie_portdrv_mmio_enabled, |
189 | .resume = pcie_portdrv_err_resume, | 195 | .resume = pcie_portdrv_err_resume, |
190 | }; | 196 | }; |
@@ -226,11 +232,20 @@ static const struct dmi_system_id pcie_portdrv_dmi_table[] __initconst = { | |||
226 | {} | 232 | {} |
227 | }; | 233 | }; |
228 | 234 | ||
235 | static void __init pcie_init_services(void) | ||
236 | { | ||
237 | pcie_aer_init(); | ||
238 | pcie_pme_init(); | ||
239 | pcie_dpc_init(); | ||
240 | pcie_hp_init(); | ||
241 | } | ||
242 | |||
229 | static int __init pcie_portdrv_init(void) | 243 | static int __init pcie_portdrv_init(void) |
230 | { | 244 | { |
231 | if (pcie_ports_disabled) | 245 | if (pcie_ports_disabled) |
232 | return -EACCES; | 246 | return -EACCES; |
233 | 247 | ||
248 | pcie_init_services(); | ||
234 | dmi_check_system(pcie_portdrv_dmi_table); | 249 | dmi_check_system(pcie_portdrv_dmi_table); |
235 | 250 | ||
236 | return pci_register_driver(&pcie_portdriver); | 251 | return pci_register_driver(&pcie_portdriver); |
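The new `pcie_portdrv_slot_reset()` restores the port's config space from the saved snapshot and then immediately re-saves it, so the snapshot stays valid for any subsequent reset. A toy model of that restore-then-resave pattern (the struct and helpers are illustrative; the real code uses `pci_restore_state()`/`pci_save_state()` on `struct pci_dev`):

```c
#include <assert.h>
#include <string.h>

/* Toy device: "live" config registers plus a snapshot taken at probe. */
struct toy_dev {
	unsigned int config[4];
	unsigned int saved[4];
};

static void toy_save_state(struct toy_dev *d)
{
	memcpy(d->saved, d->config, sizeof(d->saved));
}

static void toy_restore_state(struct toy_dev *d)
{
	memcpy(d->config, d->saved, sizeof(d->config));
}

/* Mirrors pcie_portdrv_slot_reset(): the reset wipes config space,
 * so restore it, then re-save so the snapshot matches reality again. */
static void toy_slot_reset(struct toy_dev *d)
{
	memset(d->config, 0, sizeof(d->config));	/* reset side effect */
	toy_restore_state(d);
	toy_save_state(d);
}
```

Without the trailing re-save, a second reset would restore whatever stale snapshot was left over, which is exactly the hole the patch closes for ports.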
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 201f9e5ff55c..b1c05b5054a0 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -713,6 +713,7 @@ static void pci_set_bus_speed(struct pci_bus *bus) | |||
713 | 713 | ||
714 | pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap); | 714 | pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap); |
715 | bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS]; | 715 | bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS]; |
716 | bridge->link_active_reporting = !!(linkcap & PCI_EXP_LNKCAP_DLLLARC); | ||
716 | 717 | ||
717 | pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta); | 718 | pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta); |
718 | pcie_update_link_speed(bus, linksta); | 719 | pcie_update_link_speed(bus, linksta); |
@@ -1438,12 +1439,29 @@ static int pci_cfg_space_size_ext(struct pci_dev *dev) | |||
1438 | return PCI_CFG_SPACE_EXP_SIZE; | 1439 | return PCI_CFG_SPACE_EXP_SIZE; |
1439 | } | 1440 | } |
1440 | 1441 | ||
1442 | #ifdef CONFIG_PCI_IOV | ||
1443 | static bool is_vf0(struct pci_dev *dev) | ||
1444 | { | ||
1445 | if (pci_iov_virtfn_devfn(dev->physfn, 0) == dev->devfn && | ||
1446 | pci_iov_virtfn_bus(dev->physfn, 0) == dev->bus->number) | ||
1447 | return true; | ||
1448 | |||
1449 | return false; | ||
1450 | } | ||
1451 | #endif | ||
1452 | |||
1441 | int pci_cfg_space_size(struct pci_dev *dev) | 1453 | int pci_cfg_space_size(struct pci_dev *dev) |
1442 | { | 1454 | { |
1443 | int pos; | 1455 | int pos; |
1444 | u32 status; | 1456 | u32 status; |
1445 | u16 class; | 1457 | u16 class; |
1446 | 1458 | ||
1459 | #ifdef CONFIG_PCI_IOV | ||
1460 | /* Read cached value for all VFs except for VF0 */ | ||
1461 | if (dev->is_virtfn && !is_vf0(dev)) | ||
1462 | return dev->physfn->sriov->cfg_size; | ||
1463 | #endif | ||
1464 | |||
1447 | if (dev->bus->bus_flags & PCI_BUS_FLAGS_NO_EXTCFG) | 1465 | if (dev->bus->bus_flags & PCI_BUS_FLAGS_NO_EXTCFG) |
1448 | return PCI_CFG_SPACE_SIZE; | 1466 | return PCI_CFG_SPACE_SIZE; |
1449 | 1467 | ||
@@ -2143,7 +2161,7 @@ static void pci_release_dev(struct device *dev) | |||
2143 | pcibios_release_device(pci_dev); | 2161 | pcibios_release_device(pci_dev); |
2144 | pci_bus_put(pci_dev->bus); | 2162 | pci_bus_put(pci_dev->bus); |
2145 | kfree(pci_dev->driver_override); | 2163 | kfree(pci_dev->driver_override); |
2146 | kfree(pci_dev->dma_alias_mask); | 2164 | bitmap_free(pci_dev->dma_alias_mask); |
2147 | kfree(pci_dev); | 2165 | kfree(pci_dev); |
2148 | } | 2166 | } |
2149 | 2167 | ||
@@ -2397,8 +2415,8 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus) | |||
2397 | dev->dev.dma_parms = &dev->dma_parms; | 2415 | dev->dev.dma_parms = &dev->dma_parms; |
2398 | dev->dev.coherent_dma_mask = 0xffffffffull; | 2416 | dev->dev.coherent_dma_mask = 0xffffffffull; |
2399 | 2417 | ||
2400 | pci_set_dma_max_seg_size(dev, 65536); | 2418 | dma_set_max_seg_size(&dev->dev, 65536); |
2401 | pci_set_dma_seg_boundary(dev, 0xffffffff); | 2419 | dma_set_seg_boundary(&dev->dev, 0xffffffff); |
2402 | 2420 | ||
2403 | /* Fix up broken headers */ | 2421 | /* Fix up broken headers */ |
2404 | pci_fixup_device(pci_fixup_header, dev); | 2422 | pci_fixup_device(pci_fixup_header, dev); |
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 6bc27b7fd452..4700d24e5d55 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3190,7 +3190,11 @@ static void disable_igfx_irq(struct pci_dev *dev) | |||
3190 | 3190 | ||
3191 | pci_iounmap(dev, regs); | 3191 | pci_iounmap(dev, regs); |
3192 | } | 3192 | } |
3193 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq); | ||
3194 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq); | ||
3195 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq); | ||
3193 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq); | 3196 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq); |
3197 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq); | ||
3194 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq); | 3198 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq); |
3195 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq); | 3199 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq); |
3196 | 3200 | ||
@@ -4987,7 +4991,6 @@ static void quirk_switchtec_ntb_dma_alias(struct pci_dev *pdev) | |||
4987 | void __iomem *mmio; | 4991 | void __iomem *mmio; |
4988 | struct ntb_info_regs __iomem *mmio_ntb; | 4992 | struct ntb_info_regs __iomem *mmio_ntb; |
4989 | struct ntb_ctrl_regs __iomem *mmio_ctrl; | 4993 | struct ntb_ctrl_regs __iomem *mmio_ctrl; |
4990 | struct sys_info_regs __iomem *mmio_sys_info; | ||
4991 | u64 partition_map; | 4994 | u64 partition_map; |
4992 | u8 partition; | 4995 | u8 partition; |
4993 | int pp; | 4996 | int pp; |
@@ -5008,7 +5011,6 @@ static void quirk_switchtec_ntb_dma_alias(struct pci_dev *pdev) | |||
5008 | 5011 | ||
5009 | mmio_ntb = mmio + SWITCHTEC_GAS_NTB_OFFSET; | 5012 | mmio_ntb = mmio + SWITCHTEC_GAS_NTB_OFFSET; |
5010 | mmio_ctrl = (void __iomem *) mmio_ntb + SWITCHTEC_NTB_REG_CTRL_OFFSET; | 5013 | mmio_ctrl = (void __iomem *) mmio_ntb + SWITCHTEC_NTB_REG_CTRL_OFFSET; |
5011 | mmio_sys_info = mmio + SWITCHTEC_GAS_SYS_INFO_OFFSET; | ||
5012 | 5014 | ||
5013 | partition = ioread8(&mmio_ntb->partition_id); | 5015 | partition = ioread8(&mmio_ntb->partition_id); |
5014 | 5016 | ||
@@ -5057,59 +5059,37 @@ static void quirk_switchtec_ntb_dma_alias(struct pci_dev *pdev) | |||
5057 | pci_iounmap(pdev, mmio); | 5059 | pci_iounmap(pdev, mmio); |
5058 | pci_disable_device(pdev); | 5060 | pci_disable_device(pdev); |
5059 | } | 5061 | } |
5060 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8531, | 5062 | #define SWITCHTEC_QUIRK(vid) \ |
5061 | quirk_switchtec_ntb_dma_alias); | 5063 | DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_MICROSEMI, vid, \ |
5062 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8532, | 5064 | PCI_CLASS_BRIDGE_OTHER, 8, quirk_switchtec_ntb_dma_alias) |
5063 | quirk_switchtec_ntb_dma_alias); | 5065 | |
5064 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8533, | 5066 | SWITCHTEC_QUIRK(0x8531); /* PFX 24xG3 */ |
5065 | quirk_switchtec_ntb_dma_alias); | 5067 | SWITCHTEC_QUIRK(0x8532); /* PFX 32xG3 */ |
5066 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8534, | 5068 | SWITCHTEC_QUIRK(0x8533); /* PFX 48xG3 */ |
5067 | quirk_switchtec_ntb_dma_alias); | 5069 | SWITCHTEC_QUIRK(0x8534); /* PFX 64xG3 */ |
5068 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8535, | 5070 | SWITCHTEC_QUIRK(0x8535); /* PFX 80xG3 */ |
5069 | quirk_switchtec_ntb_dma_alias); | 5071 | SWITCHTEC_QUIRK(0x8536); /* PFX 96xG3 */ |
5070 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8536, | 5072 | SWITCHTEC_QUIRK(0x8541); /* PSX 24xG3 */ |
5071 | quirk_switchtec_ntb_dma_alias); | 5073 | SWITCHTEC_QUIRK(0x8542); /* PSX 32xG3 */ |
5072 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8543, | 5074 | SWITCHTEC_QUIRK(0x8543); /* PSX 48xG3 */ |
5073 | quirk_switchtec_ntb_dma_alias); | 5075 | SWITCHTEC_QUIRK(0x8544); /* PSX 64xG3 */ |
5074 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8544, | 5076 | SWITCHTEC_QUIRK(0x8545); /* PSX 80xG3 */ |
5075 | quirk_switchtec_ntb_dma_alias); | 5077 | SWITCHTEC_QUIRK(0x8546); /* PSX 96xG3 */ |
5076 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8545, | 5078 | SWITCHTEC_QUIRK(0x8551); /* PAX 24XG3 */ |
5077 | quirk_switchtec_ntb_dma_alias); | 5079 | SWITCHTEC_QUIRK(0x8552); /* PAX 32XG3 */ |
5078 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8546, | 5080 | SWITCHTEC_QUIRK(0x8553); /* PAX 48XG3 */ |
5079 | quirk_switchtec_ntb_dma_alias); | 5081 | SWITCHTEC_QUIRK(0x8554); /* PAX 64XG3 */ |
5080 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8551, | 5082 | SWITCHTEC_QUIRK(0x8555); /* PAX 80XG3 */ |
5081 | quirk_switchtec_ntb_dma_alias); | 5083 | SWITCHTEC_QUIRK(0x8556); /* PAX 96XG3 */ |
5082 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8552, | 5084 | SWITCHTEC_QUIRK(0x8561); /* PFXL 24XG3 */ |
5083 | quirk_switchtec_ntb_dma_alias); | 5085 | SWITCHTEC_QUIRK(0x8562); /* PFXL 32XG3 */ |
5084 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8553, | 5086 | SWITCHTEC_QUIRK(0x8563); /* PFXL 48XG3 */ |
5085 | quirk_switchtec_ntb_dma_alias); | 5087 | SWITCHTEC_QUIRK(0x8564); /* PFXL 64XG3 */ |
5086 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8554, | 5088 | SWITCHTEC_QUIRK(0x8565); /* PFXL 80XG3 */ |
5087 | quirk_switchtec_ntb_dma_alias); | 5089 | SWITCHTEC_QUIRK(0x8566); /* PFXL 96XG3 */ |
5088 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8555, | 5090 | SWITCHTEC_QUIRK(0x8571); /* PFXI 24XG3 */ |
5089 | quirk_switchtec_ntb_dma_alias); | 5091 | SWITCHTEC_QUIRK(0x8572); /* PFXI 32XG3 */ |
5090 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8556, | 5092 | SWITCHTEC_QUIRK(0x8573); /* PFXI 48XG3 */ |
5091 | quirk_switchtec_ntb_dma_alias); | 5093 | SWITCHTEC_QUIRK(0x8574); /* PFXI 64XG3 */ |
5092 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8561, | 5094 | SWITCHTEC_QUIRK(0x8575); /* PFXI 80XG3 */ |
5093 | quirk_switchtec_ntb_dma_alias); | 5095 | SWITCHTEC_QUIRK(0x8576); /* PFXI 96XG3 */ |
5094 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8562, | ||
5095 | quirk_switchtec_ntb_dma_alias); | ||
5096 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8563, | ||
5097 | quirk_switchtec_ntb_dma_alias); | ||
5098 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8564, | ||
5099 | quirk_switchtec_ntb_dma_alias); | ||
5100 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8565, | ||
5101 | quirk_switchtec_ntb_dma_alias); | ||
5102 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8566, | ||
5103 | quirk_switchtec_ntb_dma_alias); | ||
5104 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8571, | ||
5105 | quirk_switchtec_ntb_dma_alias); | ||
5106 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8572, | ||
5107 | quirk_switchtec_ntb_dma_alias); | ||
5108 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8573, | ||
5109 | quirk_switchtec_ntb_dma_alias); | ||
5110 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8574, | ||
5111 | quirk_switchtec_ntb_dma_alias); | ||
5112 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8575, | ||
5113 | quirk_switchtec_ntb_dma_alias); | ||
5114 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8576, | ||
5115 | quirk_switchtec_ntb_dma_alias); | ||
diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
index 461e7fd2756f..e9c6b120cf45 100644
--- a/drivers/pci/remove.c
+++ b/drivers/pci/remove.c
@@ -25,9 +25,6 @@ static void pci_stop_dev(struct pci_dev *dev) | |||
25 | 25 | ||
26 | pci_dev_assign_added(dev, false); | 26 | pci_dev_assign_added(dev, false); |
27 | } | 27 | } |
28 | |||
29 | if (dev->bus->self) | ||
30 | pcie_aspm_exit_link_state(dev); | ||
31 | } | 28 | } |
32 | 29 | ||
33 | static void pci_destroy_dev(struct pci_dev *dev) | 30 | static void pci_destroy_dev(struct pci_dev *dev) |
@@ -41,6 +38,7 @@ static void pci_destroy_dev(struct pci_dev *dev) | |||
41 | list_del(&dev->bus_list); | 38 | list_del(&dev->bus_list); |
42 | up_write(&pci_bus_sem); | 39 | up_write(&pci_bus_sem); |
43 | 40 | ||
41 | pcie_aspm_exit_link_state(dev); | ||
44 | pci_bridge_d3_update(dev); | 42 | pci_bridge_d3_update(dev); |
45 | pci_free_resources(dev); | 43 | pci_free_resources(dev); |
46 | put_device(&dev->dev); | 44 | put_device(&dev->dev); |
diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
index 79b1824e83b4..ed960436df5e 100644
--- a/drivers/pci/setup-bus.c
+++ b/drivers/pci/setup-bus.c
@@ -811,6 +811,8 @@ static struct resource *find_free_bus_resource(struct pci_bus *bus, | |||
811 | static resource_size_t calculate_iosize(resource_size_t size, | 811 | static resource_size_t calculate_iosize(resource_size_t size, |
812 | resource_size_t min_size, | 812 | resource_size_t min_size, |
813 | resource_size_t size1, | 813 | resource_size_t size1, |
814 | resource_size_t add_size, | ||
815 | resource_size_t children_add_size, | ||
814 | resource_size_t old_size, | 816 | resource_size_t old_size, |
815 | resource_size_t align) | 817 | resource_size_t align) |
816 | { | 818 | { |
@@ -823,15 +825,18 @@ static resource_size_t calculate_iosize(resource_size_t size, | |||
823 | #if defined(CONFIG_ISA) || defined(CONFIG_EISA) | 825 | #if defined(CONFIG_ISA) || defined(CONFIG_EISA) |
824 | size = (size & 0xff) + ((size & ~0xffUL) << 2); | 826 | size = (size & 0xff) + ((size & ~0xffUL) << 2); |
825 | #endif | 827 | #endif |
826 | size = ALIGN(size + size1, align); | 828 | size = size + size1; |
827 | if (size < old_size) | 829 | if (size < old_size) |
828 | size = old_size; | 830 | size = old_size; |
831 | |||
832 | size = ALIGN(max(size, add_size) + children_add_size, align); | ||
829 | return size; | 833 | return size; |
830 | } | 834 | } |
831 | 835 | ||
832 | static resource_size_t calculate_memsize(resource_size_t size, | 836 | static resource_size_t calculate_memsize(resource_size_t size, |
833 | resource_size_t min_size, | 837 | resource_size_t min_size, |
834 | resource_size_t size1, | 838 | resource_size_t add_size, |
839 | resource_size_t children_add_size, | ||
835 | resource_size_t old_size, | 840 | resource_size_t old_size, |
836 | resource_size_t align) | 841 | resource_size_t align) |
837 | { | 842 | { |
@@ -841,7 +846,8 @@ static resource_size_t calculate_memsize(resource_size_t size, | |||
841 | old_size = 0; | 846 | old_size = 0; |
842 | if (size < old_size) | 847 | if (size < old_size) |
843 | size = old_size; | 848 | size = old_size; |
844 | size = ALIGN(size + size1, align); | 849 | |
850 | size = ALIGN(max(size, add_size) + children_add_size, align); | ||
845 | return size; | 851 | return size; |
846 | } | 852 | } |
847 | 853 | ||
@@ -930,12 +936,10 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size, | |||
930 | } | 936 | } |
931 | } | 937 | } |
932 | 938 | ||
933 | size0 = calculate_iosize(size, min_size, size1, | 939 | size0 = calculate_iosize(size, min_size, size1, 0, 0, |
934 | resource_size(b_res), min_align); | 940 | resource_size(b_res), min_align); |
935 | if (children_add_size > add_size) | 941 | size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 : |
936 | add_size = children_add_size; | 942 | calculate_iosize(size, min_size, size1, add_size, children_add_size, |
937 | size1 = (!realloc_head || (realloc_head && !add_size)) ? size0 : | ||
938 | calculate_iosize(size, min_size, add_size + size1, | ||
939 | resource_size(b_res), min_align); | 943 | resource_size(b_res), min_align); |
940 | if (!size0 && !size1) { | 944 | if (!size0 && !size1) { |
941 | if (b_res->start || b_res->end) | 945 | if (b_res->start || b_res->end) |
@@ -1079,12 +1083,10 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, | |||
1079 | 1083 | ||
1080 | min_align = calculate_mem_align(aligns, max_order); | 1084 | min_align = calculate_mem_align(aligns, max_order); |
1081 | min_align = max(min_align, window_alignment(bus, b_res->flags)); | 1085 | min_align = max(min_align, window_alignment(bus, b_res->flags)); |
1082 | size0 = calculate_memsize(size, min_size, 0, resource_size(b_res), min_align); | 1086 | size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align); |
1083 | add_align = max(min_align, add_align); | 1087 | add_align = max(min_align, add_align); |
1084 | if (children_add_size > add_size) | 1088 | size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 : |
1085 | add_size = children_add_size; | 1089 | calculate_memsize(size, min_size, add_size, children_add_size, |
1086 | size1 = (!realloc_head || (realloc_head && !add_size)) ? size0 : | ||
1087 | calculate_memsize(size, min_size, add_size, | ||
1088 | resource_size(b_res), add_align); | 1090 | resource_size(b_res), add_align); |
1089 | if (!size0 && !size1) { | 1091 | if (!size0 && !size1) { |
1090 | if (b_res->start || b_res->end) | 1092 | if (b_res->start || b_res->end) |
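The reworked sizing helpers fold the optional extras into one step: take the larger of the required size and `add_size`, add `children_add_size`, then round up to the window alignment. A standalone sketch of the new `calculate_memsize()` formula (using the kernel's power-of-two `ALIGN()`; this is an approximation of the in-tree helper, not a copy):

```c
#include <assert.h>

typedef unsigned long resource_size_t;

/* Kernel-style round-up; 'a' must be a power of two. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((resource_size_t)(a) - 1))

static resource_size_t calculate_memsize(resource_size_t size,
					 resource_size_t min_size,
					 resource_size_t add_size,
					 resource_size_t children_add_size,
					 resource_size_t old_size,
					 resource_size_t align)
{
	if (size < min_size)
		size = min_size;
	if (old_size == 1)	/* a 1-byte old window counts as none */
		old_size = 0;
	if (size < old_size)
		size = old_size;

	/* The patch's key line: optional add_size competes with the
	 * required size, children's extras are stacked on top, and
	 * only then is the total aligned. */
	return ALIGN((size > add_size ? size : add_size) + children_add_size,
		     align);
}
```

The callers then compute `size0` with both extras zeroed (the must-have window) and `size1` with them included (the optional, larger window), instead of the old scheme where `children_add_size` silently overwrote `add_size`.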
diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
index e634229ece89..c46d5e1ff536 100644
--- a/drivers/pci/slot.c
+++ b/drivers/pci/slot.c
@@ -14,7 +14,6 @@ | |||
14 | 14 | ||
15 | struct kset *pci_slots_kset; | 15 | struct kset *pci_slots_kset; |
16 | EXPORT_SYMBOL_GPL(pci_slots_kset); | 16 | EXPORT_SYMBOL_GPL(pci_slots_kset); |
17 | static DEFINE_MUTEX(pci_slot_mutex); | ||
18 | 17 | ||
19 | static ssize_t pci_slot_attr_show(struct kobject *kobj, | 18 | static ssize_t pci_slot_attr_show(struct kobject *kobj, |
20 | struct attribute *attr, char *buf) | 19 | struct attribute *attr, char *buf) |
@@ -371,7 +370,7 @@ void pci_hp_create_module_link(struct pci_slot *pci_slot) | |||
371 | 370 | ||
372 | if (!slot || !slot->ops) | 371 | if (!slot || !slot->ops) |
373 | return; | 372 | return; |
374 | kobj = kset_find_obj(module_kset, slot->ops->mod_name); | 373 | kobj = kset_find_obj(module_kset, slot->mod_name); |
375 | if (!kobj) | 374 | if (!kobj) |
376 | return; | 375 | return; |
377 | ret = sysfs_create_link(&pci_slot->kobj, kobj, "module"); | 376 | ret = sysfs_create_link(&pci_slot->kobj, kobj, "module"); |
diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
index 2d6e272315a8..93ee2d5466f8 100644
--- a/drivers/platform/x86/asus-wmi.c
+++ b/drivers/platform/x86/asus-wmi.c
@@ -254,7 +254,7 @@ struct asus_wmi { | |||
254 | int asus_hwmon_num_fans; | 254 | int asus_hwmon_num_fans; |
255 | int asus_hwmon_pwm; | 255 | int asus_hwmon_pwm; |
256 | 256 | ||
257 | struct hotplug_slot *hotplug_slot; | 257 | struct hotplug_slot hotplug_slot; |
258 | struct mutex hotplug_lock; | 258 | struct mutex hotplug_lock; |
259 | struct mutex wmi_lock; | 259 | struct mutex wmi_lock; |
260 | struct workqueue_struct *hotplug_workqueue; | 260 | struct workqueue_struct *hotplug_workqueue; |
@@ -753,7 +753,7 @@ static void asus_rfkill_hotplug(struct asus_wmi *asus) | |||
753 | if (asus->wlan.rfkill) | 753 | if (asus->wlan.rfkill) |
754 | rfkill_set_sw_state(asus->wlan.rfkill, blocked); | 754 | rfkill_set_sw_state(asus->wlan.rfkill, blocked); |
755 | 755 | ||
756 | if (asus->hotplug_slot) { | 756 | if (asus->hotplug_slot.ops) { |
757 | bus = pci_find_bus(0, 1); | 757 | bus = pci_find_bus(0, 1); |
758 | if (!bus) { | 758 | if (!bus) { |
759 | pr_warn("Unable to find PCI bus 1?\n"); | 759 | pr_warn("Unable to find PCI bus 1?\n"); |
@@ -858,7 +858,8 @@ static void asus_unregister_rfkill_notifier(struct asus_wmi *asus, char *node) | |||
858 | static int asus_get_adapter_status(struct hotplug_slot *hotplug_slot, | 858 | static int asus_get_adapter_status(struct hotplug_slot *hotplug_slot, |
859 | u8 *value) | 859 | u8 *value) |
860 | { | 860 | { |
861 | struct asus_wmi *asus = hotplug_slot->private; | 861 | struct asus_wmi *asus = container_of(hotplug_slot, |
862 | struct asus_wmi, hotplug_slot); | ||
862 | int result = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_WLAN); | 863 | int result = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_WLAN); |
863 | 864 | ||
864 | if (result < 0) | 865 | if (result < 0) |
@@ -868,8 +869,7 @@ static int asus_get_adapter_status(struct hotplug_slot *hotplug_slot, | |||
868 | return 0; | 869 | return 0; |
869 | } | 870 | } |
870 | 871 | ||
871 | static struct hotplug_slot_ops asus_hotplug_slot_ops = { | 872 | static const struct hotplug_slot_ops asus_hotplug_slot_ops = { |
872 | .owner = THIS_MODULE, | ||
873 | .get_adapter_status = asus_get_adapter_status, | 873 | .get_adapter_status = asus_get_adapter_status, |
874 | .get_power_status = asus_get_adapter_status, | 874 | .get_power_status = asus_get_adapter_status, |
875 | }; | 875 | }; |
@@ -899,21 +899,9 @@ static int asus_setup_pci_hotplug(struct asus_wmi *asus) | |||
899 | 899 | ||
900 | INIT_WORK(&asus->hotplug_work, asus_hotplug_work); | 900 | INIT_WORK(&asus->hotplug_work, asus_hotplug_work); |
901 | 901 | ||
902 | asus->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL); | 902 | asus->hotplug_slot.ops = &asus_hotplug_slot_ops; |
903 | if (!asus->hotplug_slot) | ||
904 | goto error_slot; | ||
905 | |||
906 | asus->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info), | ||
907 | GFP_KERNEL); | ||
908 | if (!asus->hotplug_slot->info) | ||
909 | goto error_info; | ||
910 | 903 | ||
911 | asus->hotplug_slot->private = asus; | 904 | ret = pci_hp_register(&asus->hotplug_slot, bus, 0, "asus-wifi"); |
912 | asus->hotplug_slot->ops = &asus_hotplug_slot_ops; | ||
913 | asus_get_adapter_status(asus->hotplug_slot, | ||
914 | &asus->hotplug_slot->info->adapter_status); | ||
915 | |||
916 | ret = pci_hp_register(asus->hotplug_slot, bus, 0, "asus-wifi"); | ||
917 | if (ret) { | 905 | if (ret) { |
918 | pr_err("Unable to register hotplug slot - %d\n", ret); | 906 | pr_err("Unable to register hotplug slot - %d\n", ret); |
919 | goto error_register; | 907 | goto error_register; |
@@ -922,11 +910,7 @@ static int asus_setup_pci_hotplug(struct asus_wmi *asus) | |||
922 | return 0; | 910 | return 0; |
923 | 911 | ||
924 | error_register: | 912 | error_register: |
925 | kfree(asus->hotplug_slot->info); | 913 | asus->hotplug_slot.ops = NULL; |
926 | error_info: | ||
927 | kfree(asus->hotplug_slot); | ||
928 | asus->hotplug_slot = NULL; | ||
929 | error_slot: | ||
930 | destroy_workqueue(asus->hotplug_workqueue); | 914 | destroy_workqueue(asus->hotplug_workqueue); |
931 | error_workqueue: | 915 | error_workqueue: |
932 | return ret; | 916 | return ret; |
@@ -1054,11 +1038,8 @@ static void asus_wmi_rfkill_exit(struct asus_wmi *asus) | |||
1054 | * asus_unregister_rfkill_notifier() | 1038 | * asus_unregister_rfkill_notifier() |
1055 | */ | 1039 | */ |
1056 | asus_rfkill_hotplug(asus); | 1040 | asus_rfkill_hotplug(asus); |
1057 | if (asus->hotplug_slot) { | 1041 | if (asus->hotplug_slot.ops) |
1058 | pci_hp_deregister(asus->hotplug_slot); | 1042 | pci_hp_deregister(&asus->hotplug_slot); |
1059 | kfree(asus->hotplug_slot->info); | ||
1060 | kfree(asus->hotplug_slot); | ||
1061 | } | ||
1062 | if (asus->hotplug_workqueue) | 1043 | if (asus->hotplug_workqueue) |
1063 | destroy_workqueue(asus->hotplug_workqueue); | 1044 | destroy_workqueue(asus->hotplug_workqueue); |
1064 | 1045 | ||
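Embedding `struct hotplug_slot` in the driver struct (here and in eeepc-laptop below) removes both the separate allocation and the `->private` back-pointer: callbacks receive the inner struct and recover the outer one with `container_of()`, and a non-NULL `->ops` doubles as the "slot is registered" flag. A userspace sketch of that recovery pattern (struct contents are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-in for the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct hotplug_slot {
	const void *ops;	/* non-NULL once registered */
};

struct asus_wmi {
	int wlan_state;
	struct hotplug_slot hotplug_slot;	/* embedded, not kzalloc'd */
};

/* Mirrors asus_get_adapter_status(): the callback gets the embedded
 * slot and walks back to the enclosing driver struct. */
static int get_adapter_status(struct hotplug_slot *slot)
{
	struct asus_wmi *asus =
		container_of(slot, struct asus_wmi, hotplug_slot);

	return asus->wlan_state;
}
```

The same transformation also let the series constify `hotplug_slot_ops` and drop `hotplug_slot_info`, since no per-slot mutable state remains behind the pointer.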
diff --git a/drivers/platform/x86/eeepc-laptop.c b/drivers/platform/x86/eeepc-laptop.c
index a4bbf6ecd1f0..e6946a9beb5a 100644
--- a/drivers/platform/x86/eeepc-laptop.c
+++ b/drivers/platform/x86/eeepc-laptop.c
@@ -177,7 +177,7 @@ struct eeepc_laptop { | |||
177 | struct rfkill *wwan3g_rfkill; | 177 | struct rfkill *wwan3g_rfkill; |
178 | struct rfkill *wimax_rfkill; | 178 | struct rfkill *wimax_rfkill; |
179 | 179 | ||
180 | struct hotplug_slot *hotplug_slot; | 180 | struct hotplug_slot hotplug_slot; |
181 | struct mutex hotplug_lock; | 181 | struct mutex hotplug_lock; |
182 | 182 | ||
183 | struct led_classdev tpd_led; | 183 | struct led_classdev tpd_led; |
@@ -582,7 +582,7 @@ static void eeepc_rfkill_hotplug(struct eeepc_laptop *eeepc, acpi_handle handle) | |||
582 | mutex_lock(&eeepc->hotplug_lock); | 582 | mutex_lock(&eeepc->hotplug_lock); |
583 | pci_lock_rescan_remove(); | 583 | pci_lock_rescan_remove(); |
584 | 584 | ||
585 | if (!eeepc->hotplug_slot) | 585 | if (!eeepc->hotplug_slot.ops) |
586 | goto out_unlock; | 586 | goto out_unlock; |
587 | 587 | ||
588 | port = acpi_get_pci_dev(handle); | 588 | port = acpi_get_pci_dev(handle); |
@@ -715,8 +715,11 @@ static void eeepc_unregister_rfkill_notifier(struct eeepc_laptop *eeepc, | |||
715 | static int eeepc_get_adapter_status(struct hotplug_slot *hotplug_slot, | 715 | static int eeepc_get_adapter_status(struct hotplug_slot *hotplug_slot, |
716 | u8 *value) | 716 | u8 *value) |
717 | { | 717 | { |
718 | struct eeepc_laptop *eeepc = hotplug_slot->private; | 718 | struct eeepc_laptop *eeepc; |
719 | int val = get_acpi(eeepc, CM_ASL_WLAN); | 719 | int val; |
720 | |||
721 | eeepc = container_of(hotplug_slot, struct eeepc_laptop, hotplug_slot); | ||
722 | val = get_acpi(eeepc, CM_ASL_WLAN); | ||
720 | 723 | ||
721 | if (val == 1 || val == 0) | 724 | if (val == 1 || val == 0) |
722 | *value = val; | 725 | *value = val; |
@@ -726,8 +729,7 @@ static int eeepc_get_adapter_status(struct hotplug_slot *hotplug_slot, | |||
726 | return 0; | 729 | return 0; |
727 | } | 730 | } |
728 | 731 | ||
729 | static struct hotplug_slot_ops eeepc_hotplug_slot_ops = { | 732 | static const struct hotplug_slot_ops eeepc_hotplug_slot_ops = { |
730 | .owner = THIS_MODULE, | ||
731 | .get_adapter_status = eeepc_get_adapter_status, | 733 | .get_adapter_status = eeepc_get_adapter_status, |
732 | .get_power_status = eeepc_get_adapter_status, | 734 | .get_power_status = eeepc_get_adapter_status, |
733 | }; | 735 | }; |
@@ -742,21 +744,9 @@ static int eeepc_setup_pci_hotplug(struct eeepc_laptop *eeepc) | |||
742 | return -ENODEV; | 744 | return -ENODEV; |
743 | } | 745 | } |
744 | 746 | ||
745 | eeepc->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL); | 747 | eeepc->hotplug_slot.ops = &eeepc_hotplug_slot_ops; |
746 | if (!eeepc->hotplug_slot) | ||
747 | goto error_slot; | ||
748 | 748 | ||
749 | eeepc->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info), | 749 | ret = pci_hp_register(&eeepc->hotplug_slot, bus, 0, "eeepc-wifi"); |
750 | GFP_KERNEL); | ||
751 | if (!eeepc->hotplug_slot->info) | ||
752 | goto error_info; | ||
753 | |||
754 | eeepc->hotplug_slot->private = eeepc; | ||
755 | eeepc->hotplug_slot->ops = &eeepc_hotplug_slot_ops; | ||
756 | eeepc_get_adapter_status(eeepc->hotplug_slot, | ||
757 | &eeepc->hotplug_slot->info->adapter_status); | ||
758 | |||
759 | ret = pci_hp_register(eeepc->hotplug_slot, bus, 0, "eeepc-wifi"); | ||
760 | if (ret) { | 750 | if (ret) { |
761 | pr_err("Unable to register hotplug slot - %d\n", ret); | 751 | pr_err("Unable to register hotplug slot - %d\n", ret); |
762 | goto error_register; | 752 | goto error_register; |
@@ -765,11 +755,7 @@ static int eeepc_setup_pci_hotplug(struct eeepc_laptop *eeepc) | |||
765 | return 0; | 755 | return 0; |
766 | 756 | ||
767 | error_register: | 757 | error_register: |
768 | kfree(eeepc->hotplug_slot->info); | 758 | eeepc->hotplug_slot.ops = NULL; |
769 | error_info: | ||
770 | kfree(eeepc->hotplug_slot); | ||
771 | eeepc->hotplug_slot = NULL; | ||
772 | error_slot: | ||
773 | return ret; | 759 | return ret; |
774 | } | 760 | } |
775 | 761 | ||
@@ -830,11 +816,8 @@ static void eeepc_rfkill_exit(struct eeepc_laptop *eeepc) | |||
830 | eeepc->wlan_rfkill = NULL; | 816 | eeepc->wlan_rfkill = NULL; |
831 | } | 817 | } |
832 | 818 | ||
833 | if (eeepc->hotplug_slot) { | 819 | if (eeepc->hotplug_slot.ops) |
834 | pci_hp_deregister(eeepc->hotplug_slot); | 820 | pci_hp_deregister(&eeepc->hotplug_slot); |
835 | kfree(eeepc->hotplug_slot->info); | ||
836 | kfree(eeepc->hotplug_slot); | ||
837 | } | ||
838 | 821 | ||
839 | if (eeepc->bluetooth_rfkill) { | 822 | if (eeepc->bluetooth_rfkill) { |
840 | rfkill_unregister(eeepc->bluetooth_rfkill); | 823 | rfkill_unregister(eeepc->bluetooth_rfkill); |
diff --git a/drivers/reset/reset-imx7.c b/drivers/reset/reset-imx7.c index 97d9f08271c5..77911fa8f31d 100644 --- a/drivers/reset/reset-imx7.c +++ b/drivers/reset/reset-imx7.c | |||
@@ -67,6 +67,7 @@ static const struct imx7_src_signal imx7_src_signals[IMX7_RESET_NUM] = { | |||
67 | [IMX7_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, BIT(2) | BIT(1) }, | 67 | [IMX7_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, BIT(2) | BIT(1) }, |
68 | [IMX7_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) }, | 68 | [IMX7_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) }, |
69 | [IMX7_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) }, | 69 | [IMX7_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) }, |
70 | [IMX7_RESET_PCIE_CTRL_APPS_TURNOFF] = { SRC_PCIEPHY_RCR, BIT(11) }, | ||
70 | [IMX7_RESET_DDRC_PRST] = { SRC_DDRC_RCR, BIT(0) }, | 71 | [IMX7_RESET_DDRC_PRST] = { SRC_DDRC_RCR, BIT(0) }, |
71 | [IMX7_RESET_DDRC_CORE_RST] = { SRC_DDRC_RCR, BIT(1) }, | 72 | [IMX7_RESET_DDRC_CORE_RST] = { SRC_DDRC_RCR, BIT(1) }, |
72 | }; | 73 | }; |
diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c index c0631895154e..f96ec68af2e5 100644 --- a/drivers/s390/net/ism_drv.c +++ b/drivers/s390/net/ism_drv.c | |||
@@ -515,8 +515,8 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id) | |||
515 | if (ret) | 515 | if (ret) |
516 | goto err_unmap; | 516 | goto err_unmap; |
517 | 517 | ||
518 | pci_set_dma_seg_boundary(pdev, SZ_1M - 1); | 518 | dma_set_seg_boundary(&pdev->dev, SZ_1M - 1); |
519 | pci_set_dma_max_seg_size(pdev, SZ_1M); | 519 | dma_set_max_seg_size(&pdev->dev, SZ_1M); |
520 | pci_set_master(pdev); | 520 | pci_set_master(pdev); |
521 | 521 | ||
522 | ism->smcd = smcd_alloc_dev(&pdev->dev, dev_name(&pdev->dev), &ism_ops, | 522 | ism->smcd = smcd_alloc_dev(&pdev->dev, dev_name(&pdev->dev), &ism_ops, |
diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c index 04443577d48b..2d4e4ddc5ace 100644 --- a/drivers/scsi/aacraid/linit.c +++ b/drivers/scsi/aacraid/linit.c | |||
@@ -1747,7 +1747,7 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) | |||
1747 | shost->max_sectors = (shost->sg_tablesize * 8) + 112; | 1747 | shost->max_sectors = (shost->sg_tablesize * 8) + 112; |
1748 | } | 1748 | } |
1749 | 1749 | ||
1750 | error = pci_set_dma_max_seg_size(pdev, | 1750 | error = dma_set_max_seg_size(&pdev->dev, |
1751 | (aac->adapter_info.options & AAC_OPT_NEW_COMM) ? | 1751 | (aac->adapter_info.options & AAC_OPT_NEW_COMM) ? |
1752 | (shost->max_sectors << 9) : 65536); | 1752 | (shost->max_sectors << 9) : 65536); |
1753 | if (error) | 1753 | if (error) |
@@ -2055,8 +2055,6 @@ static void aac_pci_resume(struct pci_dev *pdev) | |||
2055 | struct scsi_device *sdev = NULL; | 2055 | struct scsi_device *sdev = NULL; |
2056 | struct aac_dev *aac = (struct aac_dev *)shost_priv(shost); | 2056 | struct aac_dev *aac = (struct aac_dev *)shost_priv(shost); |
2057 | 2057 | ||
2058 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
2059 | |||
2060 | if (aac_adapter_ioremap(aac, aac->base_size)) { | 2058 | if (aac_adapter_ioremap(aac, aac->base_size)) { |
2061 | 2059 | ||
2062 | dev_err(&pdev->dev, "aacraid: ioremap failed\n"); | 2060 | dev_err(&pdev->dev, "aacraid: ioremap failed\n"); |
diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c index 3660059784f7..a3019d8a7402 100644 --- a/drivers/scsi/be2iscsi/be_main.c +++ b/drivers/scsi/be2iscsi/be_main.c | |||
@@ -5529,7 +5529,6 @@ static pci_ers_result_t beiscsi_eeh_reset(struct pci_dev *pdev) | |||
5529 | return PCI_ERS_RESULT_DISCONNECT; | 5529 | return PCI_ERS_RESULT_DISCONNECT; |
5530 | } | 5530 | } |
5531 | 5531 | ||
5532 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
5533 | return PCI_ERS_RESULT_RECOVERED; | 5532 | return PCI_ERS_RESULT_RECOVERED; |
5534 | } | 5533 | } |
5535 | 5534 | ||
diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c index bd7e6a6fc1f1..911efc98d1fd 100644 --- a/drivers/scsi/bfa/bfad.c +++ b/drivers/scsi/bfa/bfad.c | |||
@@ -1569,8 +1569,6 @@ bfad_pci_slot_reset(struct pci_dev *pdev) | |||
1569 | if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(32)) != 0) | 1569 | if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(32)) != 0) |
1570 | goto out_disable_device; | 1570 | goto out_disable_device; |
1571 | 1571 | ||
1572 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
1573 | |||
1574 | if (restart_bfa(bfad) == -1) | 1572 | if (restart_bfa(bfad) == -1) |
1575 | goto out_disable_device; | 1573 | goto out_disable_device; |
1576 | 1574 | ||
diff --git a/drivers/scsi/csiostor/csio_init.c b/drivers/scsi/csiostor/csio_init.c index ed2dae657964..66b230bee7bc 100644 --- a/drivers/scsi/csiostor/csio_init.c +++ b/drivers/scsi/csiostor/csio_init.c | |||
@@ -1102,7 +1102,6 @@ csio_pci_slot_reset(struct pci_dev *pdev) | |||
1102 | pci_set_master(pdev); | 1102 | pci_set_master(pdev); |
1103 | pci_restore_state(pdev); | 1103 | pci_restore_state(pdev); |
1104 | pci_save_state(pdev); | 1104 | pci_save_state(pdev); |
1105 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
1106 | 1105 | ||
1107 | /* Bring HW s/m to ready state. | 1106 | /* Bring HW s/m to ready state. |
1108 | * but don't resume IOs. | 1107 | * but don't resume IOs. |
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index f3cae733ae2d..0503237b8145 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c | |||
@@ -11329,10 +11329,6 @@ lpfc_io_resume_s3(struct pci_dev *pdev) | |||
11329 | 11329 | ||
11330 | /* Bring device online, it will be no-op for non-fatal error resume */ | 11330 | /* Bring device online, it will be no-op for non-fatal error resume */ |
11331 | lpfc_online(phba); | 11331 | lpfc_online(phba); |
11332 | |||
11333 | /* Clean up Advanced Error Reporting (AER) if needed */ | ||
11334 | if (phba->hba_flag & HBA_AER_ENABLED) | ||
11335 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
11336 | } | 11332 | } |
11337 | 11333 | ||
11338 | /** | 11334 | /** |
@@ -12144,10 +12140,6 @@ lpfc_io_resume_s4(struct pci_dev *pdev) | |||
12144 | /* Bring the device back online */ | 12140 | /* Bring the device back online */ |
12145 | lpfc_online(phba); | 12141 | lpfc_online(phba); |
12146 | } | 12142 | } |
12147 | |||
12148 | /* Clean up Advanced Error Reporting (AER) if needed */ | ||
12149 | if (phba->hba_flag & HBA_AER_ENABLED) | ||
12150 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
12151 | } | 12143 | } |
12152 | 12144 | ||
12153 | /** | 12145 | /** |
diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c index 53133cfd420f..86eaa893adfc 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c +++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c | |||
@@ -10828,7 +10828,6 @@ scsih_pci_resume(struct pci_dev *pdev) | |||
10828 | 10828 | ||
10829 | pr_info(MPT3SAS_FMT "PCI error: resume callback!!\n", ioc->name); | 10829 | pr_info(MPT3SAS_FMT "PCI error: resume callback!!\n", ioc->name); |
10830 | 10830 | ||
10831 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
10832 | mpt3sas_base_start_watchdog(ioc); | 10831 | mpt3sas_base_start_watchdog(ioc); |
10833 | scsi_unblock_requests(ioc->shost); | 10832 | scsi_unblock_requests(ioc->shost); |
10834 | } | 10833 | } |
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index 42b8f0d3e580..8fe2d7329bfe 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c | |||
@@ -6839,8 +6839,6 @@ qla2xxx_pci_resume(struct pci_dev *pdev) | |||
6839 | "The device failed to resume I/O from slot/link_reset.\n"); | 6839 | "The device failed to resume I/O from slot/link_reset.\n"); |
6840 | } | 6840 | } |
6841 | 6841 | ||
6842 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
6843 | |||
6844 | ha->flags.eeh_busy = 0; | 6842 | ha->flags.eeh_busy = 0; |
6845 | } | 6843 | } |
6846 | 6844 | ||
diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c index 0e13349dce57..ab3a924e3e11 100644 --- a/drivers/scsi/qla4xxx/ql4_os.c +++ b/drivers/scsi/qla4xxx/ql4_os.c | |||
@@ -9824,7 +9824,6 @@ qla4xxx_pci_resume(struct pci_dev *pdev) | |||
9824 | __func__); | 9824 | __func__); |
9825 | } | 9825 | } |
9826 | 9826 | ||
9827 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
9828 | clear_bit(AF_EEH_BUSY, &ha->flags); | 9827 | clear_bit(AF_EEH_BUSY, &ha->flags); |
9829 | } | 9828 | } |
9830 | 9829 | ||
diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h index 53600f527a70..0300374101cd 100644 --- a/include/acpi/acpi_bus.h +++ b/include/acpi/acpi_bus.h | |||
@@ -346,10 +346,16 @@ struct acpi_device_physical_node { | |||
346 | bool put_online:1; | 346 | bool put_online:1; |
347 | }; | 347 | }; |
348 | 348 | ||
349 | struct acpi_device_properties { | ||
350 | const guid_t *guid; | ||
351 | const union acpi_object *properties; | ||
352 | struct list_head list; | ||
353 | }; | ||
354 | |||
349 | /* ACPI Device Specific Data (_DSD) */ | 355 | /* ACPI Device Specific Data (_DSD) */ |
350 | struct acpi_device_data { | 356 | struct acpi_device_data { |
351 | const union acpi_object *pointer; | 357 | const union acpi_object *pointer; |
352 | const union acpi_object *properties; | 358 | struct list_head properties; |
353 | const union acpi_object *of_compatible; | 359 | const union acpi_object *of_compatible; |
354 | struct list_head subnodes; | 360 | struct list_head subnodes; |
355 | }; | 361 | }; |
diff --git a/include/dt-bindings/reset/imx7-reset.h b/include/dt-bindings/reset/imx7-reset.h index 63948170c7b2..31b3f87dde9a 100644 --- a/include/dt-bindings/reset/imx7-reset.h +++ b/include/dt-bindings/reset/imx7-reset.h | |||
@@ -56,7 +56,9 @@ | |||
56 | #define IMX7_RESET_DDRC_PRST 23 | 56 | #define IMX7_RESET_DDRC_PRST 23 |
57 | #define IMX7_RESET_DDRC_CORE_RST 24 | 57 | #define IMX7_RESET_DDRC_CORE_RST 24 |
58 | 58 | ||
59 | #define IMX7_RESET_NUM 25 | 59 | #define IMX7_RESET_PCIE_CTRL_APPS_TURNOFF 25 |
60 | |||
61 | #define IMX7_RESET_NUM 26 | ||
60 | 62 | ||
61 | #endif | 63 | #endif |
62 | 64 | ||
diff --git a/include/linux/acpi.h b/include/linux/acpi.h index af4628979d13..ed80f147bd50 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h | |||
@@ -1072,6 +1072,15 @@ static inline int acpi_node_get_property_reference( | |||
1072 | NR_FWNODE_REFERENCE_ARGS, args); | 1072 | NR_FWNODE_REFERENCE_ARGS, args); |
1073 | } | 1073 | } |
1074 | 1074 | ||
1075 | static inline bool acpi_dev_has_props(const struct acpi_device *adev) | ||
1076 | { | ||
1077 | return !list_empty(&adev->data.properties); | ||
1078 | } | ||
1079 | |||
1080 | struct acpi_device_properties * | ||
1081 | acpi_data_add_props(struct acpi_device_data *data, const guid_t *guid, | ||
1082 | const union acpi_object *properties); | ||
1083 | |||
1075 | int acpi_node_prop_get(const struct fwnode_handle *fwnode, const char *propname, | 1084 | int acpi_node_prop_get(const struct fwnode_handle *fwnode, const char *propname, |
1076 | void **valptr); | 1085 | void **valptr); |
1077 | int acpi_dev_prop_read_single(struct acpi_device *adev, | 1086 | int acpi_dev_prop_read_single(struct acpi_device *adev, |
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 61207560e826..7d423721b327 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h | |||
@@ -704,6 +704,7 @@ struct request_queue { | |||
704 | #define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */ | 704 | #define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */ |
705 | #define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */ | 705 | #define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */ |
706 | #define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */ | 706 | #define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */ |
707 | #define QUEUE_FLAG_PCI_P2PDMA 29 /* device supports PCI p2p requests */ | ||
707 | 708 | ||
708 | #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ | 709 | #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ |
709 | (1 << QUEUE_FLAG_SAME_COMP) | \ | 710 | (1 << QUEUE_FLAG_SAME_COMP) | \ |
@@ -736,6 +737,8 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q); | |||
736 | #define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags) | 737 | #define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags) |
737 | #define blk_queue_scsi_passthrough(q) \ | 738 | #define blk_queue_scsi_passthrough(q) \ |
738 | test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags) | 739 | test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags) |
740 | #define blk_queue_pci_p2pdma(q) \ | ||
741 | test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags) | ||
739 | 742 | ||
740 | #define blk_noretry_request(rq) \ | 743 | #define blk_noretry_request(rq) \ |
741 | ((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \ | 744 | ((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \ |
diff --git a/include/linux/memremap.h b/include/linux/memremap.h index f91f9e763557..0ac69ddf5fc4 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h | |||
@@ -53,11 +53,16 @@ struct vmem_altmap { | |||
53 | * wakeup event whenever a page is unpinned and becomes idle. This | 53 | * wakeup event whenever a page is unpinned and becomes idle. This |
54 | * wakeup is used to coordinate physical address space management (ex: | 54 | * wakeup is used to coordinate physical address space management (ex: |
55 | * fs truncate/hole punch) vs pinned pages (ex: device dma). | 55 | * fs truncate/hole punch) vs pinned pages (ex: device dma). |
56 | * | ||
57 | * MEMORY_DEVICE_PCI_P2PDMA: | ||
58 | * Device memory residing in a PCI BAR intended for use with Peer-to-Peer | ||
59 | * transactions. | ||
56 | */ | 60 | */ |
57 | enum memory_type { | 61 | enum memory_type { |
58 | MEMORY_DEVICE_PRIVATE = 1, | 62 | MEMORY_DEVICE_PRIVATE = 1, |
59 | MEMORY_DEVICE_PUBLIC, | 63 | MEMORY_DEVICE_PUBLIC, |
60 | MEMORY_DEVICE_FS_DAX, | 64 | MEMORY_DEVICE_FS_DAX, |
65 | MEMORY_DEVICE_PCI_P2PDMA, | ||
61 | }; | 66 | }; |
62 | 67 | ||
63 | /* | 68 | /* |
@@ -120,6 +125,7 @@ struct dev_pagemap { | |||
120 | struct device *dev; | 125 | struct device *dev; |
121 | void *data; | 126 | void *data; |
122 | enum memory_type type; | 127 | enum memory_type type; |
128 | u64 pci_p2pdma_bus_offset; | ||
123 | }; | 129 | }; |
124 | 130 | ||
125 | #ifdef CONFIG_ZONE_DEVICE | 131 | #ifdef CONFIG_ZONE_DEVICE |
diff --git a/include/linux/mm.h b/include/linux/mm.h index 0416a7204be3..daa2b8f1e9a8 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h | |||
@@ -890,6 +890,19 @@ static inline bool is_device_public_page(const struct page *page) | |||
890 | page->pgmap->type == MEMORY_DEVICE_PUBLIC; | 890 | page->pgmap->type == MEMORY_DEVICE_PUBLIC; |
891 | } | 891 | } |
892 | 892 | ||
893 | #ifdef CONFIG_PCI_P2PDMA | ||
894 | static inline bool is_pci_p2pdma_page(const struct page *page) | ||
895 | { | ||
896 | return is_zone_device_page(page) && | ||
897 | page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA; | ||
898 | } | ||
899 | #else /* CONFIG_PCI_P2PDMA */ | ||
900 | static inline bool is_pci_p2pdma_page(const struct page *page) | ||
901 | { | ||
902 | return false; | ||
903 | } | ||
904 | #endif /* CONFIG_PCI_P2PDMA */ | ||
905 | |||
893 | #else /* CONFIG_DEV_PAGEMAP_OPS */ | 906 | #else /* CONFIG_DEV_PAGEMAP_OPS */ |
894 | static inline void dev_pagemap_get_ops(void) | 907 | static inline void dev_pagemap_get_ops(void) |
895 | { | 908 | { |
@@ -913,6 +926,11 @@ static inline bool is_device_public_page(const struct page *page) | |||
913 | { | 926 | { |
914 | return false; | 927 | return false; |
915 | } | 928 | } |
929 | |||
930 | static inline bool is_pci_p2pdma_page(const struct page *page) | ||
931 | { | ||
932 | return false; | ||
933 | } | ||
916 | #endif /* CONFIG_DEV_PAGEMAP_OPS */ | 934 | #endif /* CONFIG_DEV_PAGEMAP_OPS */ |
917 | 935 | ||
918 | static inline void get_page(struct page *page) | 936 | static inline void get_page(struct page *page) |
diff --git a/include/linux/pci-dma-compat.h b/include/linux/pci-dma-compat.h index c3f1b44ade29..cb1adf0b78a9 100644 --- a/include/linux/pci-dma-compat.h +++ b/include/linux/pci-dma-compat.h | |||
@@ -119,29 +119,11 @@ static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask) | |||
119 | { | 119 | { |
120 | return dma_set_coherent_mask(&dev->dev, mask); | 120 | return dma_set_coherent_mask(&dev->dev, mask); |
121 | } | 121 | } |
122 | |||
123 | static inline int pci_set_dma_max_seg_size(struct pci_dev *dev, | ||
124 | unsigned int size) | ||
125 | { | ||
126 | return dma_set_max_seg_size(&dev->dev, size); | ||
127 | } | ||
128 | |||
129 | static inline int pci_set_dma_seg_boundary(struct pci_dev *dev, | ||
130 | unsigned long mask) | ||
131 | { | ||
132 | return dma_set_seg_boundary(&dev->dev, mask); | ||
133 | } | ||
134 | #else | 122 | #else |
135 | static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask) | 123 | static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask) |
136 | { return -EIO; } | 124 | { return -EIO; } |
137 | static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask) | 125 | static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask) |
138 | { return -EIO; } | 126 | { return -EIO; } |
139 | static inline int pci_set_dma_max_seg_size(struct pci_dev *dev, | ||
140 | unsigned int size) | ||
141 | { return -EIO; } | ||
142 | static inline int pci_set_dma_seg_boundary(struct pci_dev *dev, | ||
143 | unsigned long mask) | ||
144 | { return -EIO; } | ||
145 | #endif | 127 | #endif |
146 | 128 | ||
147 | #endif | 129 | #endif |
diff --git a/include/linux/pci-dma.h b/include/linux/pci-dma.h deleted file mode 100644 index 0f7aa7353ca3..000000000000 --- a/include/linux/pci-dma.h +++ /dev/null | |||
@@ -1,12 +0,0 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | #ifndef _LINUX_PCI_DMA_H | ||
3 | #define _LINUX_PCI_DMA_H | ||
4 | |||
5 | #define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME) DEFINE_DMA_UNMAP_ADDR(ADDR_NAME); | ||
6 | #define DECLARE_PCI_UNMAP_LEN(LEN_NAME) DEFINE_DMA_UNMAP_LEN(LEN_NAME); | ||
7 | #define pci_unmap_addr dma_unmap_addr | ||
8 | #define pci_unmap_addr_set dma_unmap_addr_set | ||
9 | #define pci_unmap_len dma_unmap_len | ||
10 | #define pci_unmap_len_set dma_unmap_len_set | ||
11 | |||
12 | #endif | ||
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h new file mode 100644 index 000000000000..bca9bc3e5be7 --- /dev/null +++ b/include/linux/pci-p2pdma.h | |||
@@ -0,0 +1,114 @@ | |||
1 | /* SPDX-License-Identifier: GPL-2.0 */ | ||
2 | /* | ||
3 | * PCI Peer 2 Peer DMA support. | ||
4 | * | ||
5 | * Copyright (c) 2016-2018, Logan Gunthorpe | ||
6 | * Copyright (c) 2016-2017, Microsemi Corporation | ||
7 | * Copyright (c) 2017, Christoph Hellwig | ||
8 | * Copyright (c) 2018, Eideticom Inc. | ||
9 | */ | ||
10 | |||
11 | #ifndef _LINUX_PCI_P2PDMA_H | ||
12 | #define _LINUX_PCI_P2PDMA_H | ||
13 | |||
14 | #include <linux/pci.h> | ||
15 | |||
16 | struct block_device; | ||
17 | struct scatterlist; | ||
18 | |||
19 | #ifdef CONFIG_PCI_P2PDMA | ||
20 | int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, | ||
21 | u64 offset); | ||
22 | int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, | ||
23 | int num_clients, bool verbose); | ||
24 | bool pci_has_p2pmem(struct pci_dev *pdev); | ||
25 | struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients); | ||
26 | void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size); | ||
27 | void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size); | ||
28 | pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, void *addr); | ||
29 | struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev, | ||
30 | unsigned int *nents, u32 length); | ||
31 | void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl); | ||
32 | void pci_p2pmem_publish(struct pci_dev *pdev, bool publish); | ||
33 | int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents, | ||
34 | enum dma_data_direction dir); | ||
35 | int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, | ||
36 | bool *use_p2pdma); | ||
37 | ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, | ||
38 | bool use_p2pdma); | ||
39 | #else /* CONFIG_PCI_P2PDMA */ | ||
40 | static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, | ||
41 | size_t size, u64 offset) | ||
42 | { | ||
43 | return -EOPNOTSUPP; | ||
44 | } | ||
45 | static inline int pci_p2pdma_distance_many(struct pci_dev *provider, | ||
46 | struct device **clients, int num_clients, bool verbose) | ||
47 | { | ||
48 | return -1; | ||
49 | } | ||
50 | static inline bool pci_has_p2pmem(struct pci_dev *pdev) | ||
51 | { | ||
52 | return false; | ||
53 | } | ||
54 | static inline struct pci_dev *pci_p2pmem_find_many(struct device **clients, | ||
55 | int num_clients) | ||
56 | { | ||
57 | return NULL; | ||
58 | } | ||
59 | static inline void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size) | ||
60 | { | ||
61 | return NULL; | ||
62 | } | ||
63 | static inline void pci_free_p2pmem(struct pci_dev *pdev, void *addr, | ||
64 | size_t size) | ||
65 | { | ||
66 | } | ||
67 | static inline pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, | ||
68 | void *addr) | ||
69 | { | ||
70 | return 0; | ||
71 | } | ||
72 | static inline struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev, | ||
73 | unsigned int *nents, u32 length) | ||
74 | { | ||
75 | return NULL; | ||
76 | } | ||
77 | static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev, | ||
78 | struct scatterlist *sgl) | ||
79 | { | ||
80 | } | ||
81 | static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) | ||
82 | { | ||
83 | } | ||
84 | static inline int pci_p2pdma_map_sg(struct device *dev, | ||
85 | struct scatterlist *sg, int nents, enum dma_data_direction dir) | ||
86 | { | ||
87 | return 0; | ||
88 | } | ||
89 | static inline int pci_p2pdma_enable_store(const char *page, | ||
90 | struct pci_dev **p2p_dev, bool *use_p2pdma) | ||
91 | { | ||
92 | *use_p2pdma = false; | ||
93 | return 0; | ||
94 | } | ||
95 | static inline ssize_t pci_p2pdma_enable_show(char *page, | ||
96 | struct pci_dev *p2p_dev, bool use_p2pdma) | ||
97 | { | ||
98 | return sprintf(page, "none\n"); | ||
99 | } | ||
100 | #endif /* CONFIG_PCI_P2PDMA */ | ||
101 | |||
102 | |||
103 | static inline int pci_p2pdma_distance(struct pci_dev *provider, | ||
104 | struct device *client, bool verbose) | ||
105 | { | ||
106 | return pci_p2pdma_distance_many(provider, &client, 1, verbose); | ||
107 | } | ||
108 | |||
109 | static inline struct pci_dev *pci_p2pmem_find(struct device *client) | ||
110 | { | ||
111 | return pci_p2pmem_find_many(&client, 1); | ||
112 | } | ||
113 | |||
114 | #endif /* _LINUX_PCI_P2P_H */ | ||
diff --git a/include/linux/pci.h b/include/linux/pci.h index 2c4755032475..11c71c4ecf75 100644 --- a/include/linux/pci.h +++ b/include/linux/pci.h | |||
@@ -281,6 +281,7 @@ struct pcie_link_state; | |||
281 | struct pci_vpd; | 281 | struct pci_vpd; |
282 | struct pci_sriov; | 282 | struct pci_sriov; |
283 | struct pci_ats; | 283 | struct pci_ats; |
284 | struct pci_p2pdma; | ||
284 | 285 | ||
285 | /* The pci_dev structure describes PCI devices */ | 286 | /* The pci_dev structure describes PCI devices */ |
286 | struct pci_dev { | 287 | struct pci_dev { |
@@ -325,6 +326,7 @@ struct pci_dev { | |||
325 | pci_power_t current_state; /* Current operating state. In ACPI, | 326 | pci_power_t current_state; /* Current operating state. In ACPI, |
326 | this is D0-D3, D0 being fully | 327 | this is D0-D3, D0 being fully |
327 | functional, and D3 being off. */ | 328 | functional, and D3 being off. */ |
329 | unsigned int imm_ready:1; /* Supports Immediate Readiness */ | ||
328 | u8 pm_cap; /* PM capability offset */ | 330 | u8 pm_cap; /* PM capability offset */ |
329 | unsigned int pme_support:5; /* Bitmask of states from which PME# | 331 | unsigned int pme_support:5; /* Bitmask of states from which PME# |
330 | can be generated */ | 332 | can be generated */ |
@@ -402,6 +404,7 @@ struct pci_dev { | |||
402 | unsigned int has_secondary_link:1; | 404 | unsigned int has_secondary_link:1; |
403 | unsigned int non_compliant_bars:1; /* Broken BARs; ignore them */ | 405 | unsigned int non_compliant_bars:1; /* Broken BARs; ignore them */ |
404 | unsigned int is_probed:1; /* Device probing in progress */ | 406 | unsigned int is_probed:1; /* Device probing in progress */ |
407 | unsigned int link_active_reporting:1;/* Device capable of reporting link active */ | ||
405 | pci_dev_flags_t dev_flags; | 408 | pci_dev_flags_t dev_flags; |
406 | atomic_t enable_cnt; /* pci_enable_device has been called */ | 409 | atomic_t enable_cnt; /* pci_enable_device has been called */ |
407 | 410 | ||
@@ -439,6 +442,9 @@ struct pci_dev { | |||
439 | #ifdef CONFIG_PCI_PASID | 442 | #ifdef CONFIG_PCI_PASID |
440 | u16 pasid_features; | 443 | u16 pasid_features; |
441 | #endif | 444 | #endif |
445 | #ifdef CONFIG_PCI_P2PDMA | ||
446 | struct pci_p2pdma *p2pdma; | ||
447 | #endif | ||
442 | phys_addr_t rom; /* Physical address if not from BAR */ | 448 | phys_addr_t rom; /* Physical address if not from BAR */ |
443 | size_t romlen; /* Length if not from BAR */ | 449 | size_t romlen; /* Length if not from BAR */ |
444 | char *driver_override; /* Driver name to force a match */ | 450 | char *driver_override; /* Driver name to force a match */ |
@@ -1342,7 +1348,6 @@ int pci_set_vga_state(struct pci_dev *pdev, bool decode, | |||
1342 | 1348 | ||
1343 | /* kmem_cache style wrapper around pci_alloc_consistent() */ | 1349 | /* kmem_cache style wrapper around pci_alloc_consistent() */ |
1344 | 1350 | ||
1345 | #include <linux/pci-dma.h> | ||
1346 | #include <linux/dmapool.h> | 1351 | #include <linux/dmapool.h> |
1347 | 1352 | ||
1348 | #define pci_pool dma_pool | 1353 | #define pci_pool dma_pool |
diff --git a/include/linux/pci_hotplug.h b/include/linux/pci_hotplug.h index a6d6650a0490..7acc9f91e72b 100644 --- a/include/linux/pci_hotplug.h +++ b/include/linux/pci_hotplug.h | |||
@@ -16,8 +16,6 @@ | |||
16 | 16 | ||
17 | /** | 17 | /** |
18 | * struct hotplug_slot_ops -the callbacks that the hotplug pci core can use | 18 | * struct hotplug_slot_ops -the callbacks that the hotplug pci core can use |
19 | * @owner: The module owner of this structure | ||
20 | * @mod_name: The module name (KBUILD_MODNAME) of this structure | ||
21 | * @enable_slot: Called when the user wants to enable a specific pci slot | 19 | * @enable_slot: Called when the user wants to enable a specific pci slot |
22 | * @disable_slot: Called when the user wants to disable a specific pci slot | 20 | * @disable_slot: Called when the user wants to disable a specific pci slot |
23 | * @set_attention_status: Called to set the specific slot's attention LED to | 21 | * @set_attention_status: Called to set the specific slot's attention LED to |
@@ -25,17 +23,9 @@ | |||
25 | * @hardware_test: Called to run a specified hardware test on the specified | 23 | * @hardware_test: Called to run a specified hardware test on the specified |
26 | * slot. | 24 | * slot. |
27 | * @get_power_status: Called to get the current power status of a slot. | 25 | * @get_power_status: Called to get the current power status of a slot. |
28 | * If this field is NULL, the value passed in the struct hotplug_slot_info | ||
29 | * will be used when this value is requested by a user. | ||
30 | * @get_attention_status: Called to get the current attention status of a slot. | 26 | * @get_attention_status: Called to get the current attention status of a slot. |
31 | * If this field is NULL, the value passed in the struct hotplug_slot_info | ||
32 | * will be used when this value is requested by a user. | ||
33 | * @get_latch_status: Called to get the current latch status of a slot. | 27 | * @get_latch_status: Called to get the current latch status of a slot. |
34 | * If this field is NULL, the value passed in the struct hotplug_slot_info | ||
35 | * will be used when this value is requested by a user. | ||
36 | * @get_adapter_status: Called to get see if an adapter is present in the slot or not. | 28 | * @get_adapter_status: Called to get see if an adapter is present in the slot or not. |
37 | * If this field is NULL, the value passed in the struct hotplug_slot_info | ||
38 | * will be used when this value is requested by a user. | ||
39 | * @reset_slot: Optional interface to allow override of a bus reset for the | 29 | * @reset_slot: Optional interface to allow override of a bus reset for the |
40 | * slot for cases where a secondary bus reset can result in spurious | 30 | * slot for cases where a secondary bus reset can result in spurious |
41 | * hotplug events or where a slot can be reset independent of the bus. | 31 | * hotplug events or where a slot can be reset independent of the bus. |
@@ -46,8 +36,6 @@ | |||
46 | * set an LED, enable / disable power, etc.) | 36 | * set an LED, enable / disable power, etc.) |
47 | */ | 37 | */ |
48 | struct hotplug_slot_ops { | 38 | struct hotplug_slot_ops { |
49 | struct module *owner; | ||
50 | const char *mod_name; | ||
51 | int (*enable_slot) (struct hotplug_slot *slot); | 39 | int (*enable_slot) (struct hotplug_slot *slot); |
52 | int (*disable_slot) (struct hotplug_slot *slot); | 40 | int (*disable_slot) (struct hotplug_slot *slot); |
53 | int (*set_attention_status) (struct hotplug_slot *slot, u8 value); | 41 | int (*set_attention_status) (struct hotplug_slot *slot, u8 value); |
@@ -60,37 +48,19 @@ struct hotplug_slot_ops { | |||
60 | }; | 48 | }; |
61 | 49 | ||
62 | /** | 50 | /** |
63 | * struct hotplug_slot_info - used to notify the hotplug pci core of the state of the slot | ||
64 | * @power_status: if power is enabled or not (1/0) | ||
65 | * @attention_status: if the attention light is enabled or not (1/0) | ||
66 | * @latch_status: if the latch (if any) is open or closed (1/0) | ||
67 | * @adapter_status: if there is a pci board present in the slot or not (1/0) | ||
68 | * | ||
69 | * Used to notify the hotplug pci core of the status of a specific slot. | ||
70 | */ | ||
71 | struct hotplug_slot_info { | ||
72 | u8 power_status; | ||
73 | u8 attention_status; | ||
74 | u8 latch_status; | ||
75 | u8 adapter_status; | ||
76 | }; | ||
77 | |||
78 | /** | ||
79 | * struct hotplug_slot - used to register a physical slot with the hotplug pci core | 51 | * struct hotplug_slot - used to register a physical slot with the hotplug pci core |
80 | * @ops: pointer to the &struct hotplug_slot_ops to be used for this slot | 52 | * @ops: pointer to the &struct hotplug_slot_ops to be used for this slot |
81 | * @info: pointer to the &struct hotplug_slot_info for the initial values for | 53 | * @owner: The module owner of this structure |
82 | * this slot. | 54 | * @mod_name: The module name (KBUILD_MODNAME) of this structure |
83 | * @private: used by the hotplug pci controller driver to store whatever it | ||
84 | * needs. | ||
85 | */ | 55 | */ |
86 | struct hotplug_slot { | 56 | struct hotplug_slot { |
87 | struct hotplug_slot_ops *ops; | 57 | const struct hotplug_slot_ops *ops; |
88 | struct hotplug_slot_info *info; | ||
89 | void *private; | ||
90 | 58 | ||
91 | /* Variables below this are for use only by the hotplug pci core. */ | 59 | /* Variables below this are for use only by the hotplug pci core. */ |
92 | struct list_head slot_list; | 60 | struct list_head slot_list; |
93 | struct pci_slot *pci_slot; | 61 | struct pci_slot *pci_slot; |
62 | struct module *owner; | ||
63 | const char *mod_name; | ||
94 | }; | 64 | }; |
95 | 65 | ||
96 | static inline const char *hotplug_slot_name(const struct hotplug_slot *slot) | 66 | static inline const char *hotplug_slot_name(const struct hotplug_slot *slot) |
@@ -110,9 +80,6 @@ void pci_hp_del(struct hotplug_slot *slot);
 void pci_hp_destroy(struct hotplug_slot *slot);
 void pci_hp_deregister(struct hotplug_slot *slot);
 
-int __must_check pci_hp_change_slot_info(struct hotplug_slot *slot,
-					 struct hotplug_slot_info *info);
-
 /* use a define to avoid include chaining to get THIS_MODULE & friends */
 #define pci_hp_register(slot, pbus, devnr, name) \
 	__pci_hp_register(slot, pbus, devnr, name, THIS_MODULE, KBUILD_MODNAME)
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 8ddc2ffe3895..69f0abe1ba1a 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2543,8 +2543,6 @@
 #define PCI_VENDOR_ID_HUAWEI		0x19e5
 
 #define PCI_VENDOR_ID_NETRONOME		0x19ee
-#define PCI_DEVICE_ID_NETRONOME_NFP3200	0x3200
-#define PCI_DEVICE_ID_NETRONOME_NFP3240	0x3240
 #define PCI_DEVICE_ID_NETRONOME_NFP4000	0x4000
 #define PCI_DEVICE_ID_NETRONOME_NFP5000	0x5000
 #define PCI_DEVICE_ID_NETRONOME_NFP6000	0x6000
diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
index ee556ccc93f4..e1e9888c85e6 100644
--- a/include/uapi/linux/pci_regs.h
+++ b/include/uapi/linux/pci_regs.h
@@ -52,6 +52,7 @@
 #define  PCI_COMMAND_INTX_DISABLE 0x400	/* INTx Emulation Disable */
 
 #define PCI_STATUS		0x06	/* 16 bits */
+#define  PCI_STATUS_IMM_READY	0x01	/* Immediate Readiness */
 #define  PCI_STATUS_INTERRUPT	0x08	/* Interrupt status */
 #define  PCI_STATUS_CAP_LIST	0x10	/* Support Capability List */
 #define  PCI_STATUS_66MHZ	0x20	/* Support 66 MHz PCI 2.1 bus */
diff --git a/tools/Makefile b/tools/Makefile
index be02c8b904db..abb358a70ad0 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -21,6 +21,7 @@ help:
 	@echo '  leds                   - LEDs tools'
 	@echo '  liblockdep             - user-space wrapper for kernel locking-validator'
 	@echo '  bpf                    - misc BPF tools'
+	@echo '  pci                    - PCI tools'
 	@echo '  perf                   - Linux performance measurement and analysis tool'
 	@echo '  selftests              - various kernel selftests'
 	@echo '  spi                    - spi tools'
@@ -59,7 +60,7 @@ acpi: FORCE
 cpupower: FORCE
 	$(call descend,power/$@)
 
-cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi: FORCE
+cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi pci: FORCE
 	$(call descend,$@)
 
 liblockdep: FORCE
@@ -94,7 +95,7 @@ kvm_stat: FORCE
 all: acpi cgroup cpupower gpio hv firewire liblockdep \
 		perf selftests spi turbostat usb \
 		virtio vm bpf x86_energy_perf_policy \
-		tmon freefall iio objtool kvm_stat wmi
+		tmon freefall iio objtool kvm_stat wmi pci
 
 acpi_install:
 	$(call descend,power/$(@:_install=),install)
@@ -102,7 +103,7 @@ acpi_install:
 cpupower_install:
 	$(call descend,power/$(@:_install=),install)
 
-cgroup_install firewire_install gpio_install hv_install iio_install perf_install spi_install usb_install virtio_install vm_install bpf_install objtool_install wmi_install:
+cgroup_install firewire_install gpio_install hv_install iio_install perf_install spi_install usb_install virtio_install vm_install bpf_install objtool_install wmi_install pci_install:
 	$(call descend,$(@:_install=),install)
 
 liblockdep_install:
@@ -128,7 +129,7 @@ install: acpi_install cgroup_install cpupower_install gpio_install \
 		perf_install selftests_install turbostat_install usb_install \
 		virtio_install vm_install bpf_install x86_energy_perf_policy_install \
 		tmon_install freefall_install objtool_install kvm_stat_install \
-		wmi_install
+		wmi_install pci_install
 
 acpi_clean:
 	$(call descend,power/acpi,clean)
@@ -136,7 +137,7 @@ acpi_clean:
 cpupower_clean:
 	$(call descend,power/cpupower,clean)
 
-cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean:
+cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean:
 	$(call descend,$(@:_clean=),clean)
 
 liblockdep_clean:
@@ -174,6 +175,6 @@ clean: acpi_clean cgroup_clean cpupower_clean hv_clean firewire_clean \
 		perf_clean selftests_clean turbostat_clean spi_clean usb_clean virtio_clean \
 		vm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \
 		freefall_clean build_clean libbpf_clean libsubcmd_clean liblockdep_clean \
-		gpio_clean objtool_clean leds_clean wmi_clean
+		gpio_clean objtool_clean leds_clean wmi_clean pci_clean
 
 .PHONY: FORCE
diff --git a/tools/pci/Build b/tools/pci/Build
new file mode 100644
index 000000000000..c375aea21790
--- /dev/null
+++ b/tools/pci/Build
@@ -0,0 +1 @@
+pcitest-y += pcitest.o
diff --git a/tools/pci/Makefile b/tools/pci/Makefile
new file mode 100644
index 000000000000..46e4c2f318c9
--- /dev/null
+++ b/tools/pci/Makefile
@@ -0,0 +1,53 @@
+# SPDX-License-Identifier: GPL-2.0
+include ../scripts/Makefile.include
+
+bindir ?= /usr/bin
+
+ifeq ($(srctree),)
+srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+endif
+
+# Do not use make's built-in rules
+# (this improves performance and avoids hard-to-debug behaviour);
+MAKEFLAGS += -r
+
+CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include
+
+ALL_TARGETS := pcitest pcitest.sh
+ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
+
+all: $(ALL_PROGRAMS)
+
+export srctree OUTPUT CC LD CFLAGS
+include $(srctree)/tools/build/Makefile.include
+
+#
+# We need the following to be outside of kernel tree
+#
+$(OUTPUT)include/linux/: ../../include/uapi/linux/
+	mkdir -p $(OUTPUT)include/linux/ 2>&1 || true
+	ln -sf $(CURDIR)/../../include/uapi/linux/pcitest.h $@
+
+prepare: $(OUTPUT)include/linux/
+
+PCITEST_IN := $(OUTPUT)pcitest-in.o
+$(PCITEST_IN): prepare FORCE
+	$(Q)$(MAKE) $(build)=pcitest
+$(OUTPUT)pcitest: $(PCITEST_IN)
+	$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
+
+clean:
+	rm -f $(ALL_PROGRAMS)
+	rm -rf $(OUTPUT)include/
+	find $(if $(OUTPUT),$(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
+
+install: $(ALL_PROGRAMS)
+	install -d -m 755 $(DESTDIR)$(bindir);		\
+	for program in $(ALL_PROGRAMS); do		\
+		install $$program $(DESTDIR)$(bindir);	\
+	done
+
+FORCE:
+
+.PHONY: all install clean FORCE prepare
diff --git a/tools/pci/pcitest.c b/tools/pci/pcitest.c
index af146bb03b4d..ec4d51f3308b 100644
--- a/tools/pci/pcitest.c
+++ b/tools/pci/pcitest.c
@@ -23,7 +23,6 @@
 #include <stdio.h>
 #include <stdlib.h>
 #include <sys/ioctl.h>
-#include <time.h>
 #include <unistd.h>
 
 #include <linux/pcitest.h>
@@ -48,17 +47,15 @@ struct pci_test {
 	unsigned long size;
 };
 
-static int run_test(struct pci_test *test)
+static void run_test(struct pci_test *test)
 {
 	long ret;
 	int fd;
-	struct timespec start, end;
-	double time;
 
 	fd = open(test->device, O_RDWR);
 	if (fd < 0) {
 		perror("can't open PCI Endpoint Test device");
-		return fd;
+		return;
 	}
 
 	if (test->barnum >= 0 && test->barnum <= 5) {