author     Linus Torvalds <torvalds@linux-foundation.org>  2018-08-16 12:21:54 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2018-08-16 12:21:54 -0400
commit     4e31843f681c34f7185e7d169fe627c9d891ce2c (patch)
tree       7717ffdea5621cb68edb746bb21f6acd32f49aa9 /drivers/pci
parent     f91e654474d413201ae578820fb63f8a811f6c4e (diff)
parent     fa687fb9ced47b97bd22297366e788dac1927dd7 (diff)
Merge tag 'pci-v4.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- Decode AER errors with names similar to "lspci" (Tyler Baicar)
- Expose AER statistics in sysfs (Rajat Jain)
- Clear AER status bits selectively based on the type of recovery (Oza
Pawandeep)
- Honor "pcie_ports=native" even if HEST sets FIRMWARE_FIRST (Alexandru
Gagniuc)
- Don't clear AER status bits if we're using the "Firmware-First"
strategy where firmware owns the registers (Alexandru Gagniuc)
- Use sysfs_match_string() to simplify ASPM sysfs parsing (Andy
Shevchenko)
- Remove unnecessary includes of <linux/pci-aspm.h> (Bjorn Helgaas)
- Defer DPC event handling to work queue (Keith Busch)
- Use threaded IRQ for DPC bottom half (Keith Busch)
- Print AER status while handling DPC events (Keith Busch)
- Work around IDT switch ACS Source Validation erratum (James
Puthukattukaran)
- Emit diagnostics for all cases of PCIe Link downtraining (Links
operating slower than they're capable of) (Alexandru Gagniuc)
- Skip VFs when configuring Max Payload Size (Myron Stowe)
- Reduce Root Port Max Payload Size if necessary when hot-adding a
device below it (Myron Stowe)
- Simplify SHPC existence/permission checks (Bjorn Helgaas)
- Remove hotplug sample skeleton driver (Lukas Wunner)
- Convert pciehp to threaded IRQ handling (Lukas Wunner)
- Improve pciehp tolerance of missed events and initially unstable
links (Lukas Wunner)
- Clear spurious pciehp events on resume (Lukas Wunner)
- Add pciehp runtime PM support, including for Thunderbolt controllers
(Lukas Wunner)
- Support interrupts from pciehp bridges in D3hot (Lukas Wunner)
- Mark fall-through switch cases before enabling -Wimplicit-fallthrough
(Gustavo A. R. Silva)
- Move DMA-debug PCI init from arch code to PCI core (Christoph
Hellwig)
- Fix pci_request_irq() usage of IRQF_ONESHOT when no handler is
supplied (Heiner Kallweit)
- Unify PCI and DMA direction #defines (Shunyong Yang)
- Add PCI_DEVICE_DATA() macro (Andy Shevchenko)
- Check for VPD completion before checking for timeout (Bert Kenward)
- Limit Netronome NFP5000 config space size to work around erratum
(Jakub Kicinski)
- Set IRQCHIP_ONESHOT_SAFE for PCI MSI irqchips (Heiner Kallweit)
- Document ACPI description of PCI host bridges (Bjorn Helgaas)
- Add "pci=disable_acs_redir=" parameter to disable ACS redirection for
peer-to-peer DMA support (we don't have the peer-to-peer support yet;
this is just one piece) (Logan Gunthorpe)
- Clean up devm_of_pci_get_host_bridge_resources() resource allocation
(Jan Kiszka)
- Fixup resizable BARs after suspend/resume (Christian König)
- Make "pci=earlydump" generic (Sinan Kaya)
- Fix ROM BAR access routines to stay in bounds and check for signature
correctly (Rex Zhu)
- Add DMA alias quirk for Microsemi Switchtec NTB (Doug Meyer)
- Expand documentation for pci_add_dma_alias() (Logan Gunthorpe)
- To avoid bus errors, enable PASID only if entire path supports
End-End TLP prefixes (Sinan Kaya)
- Unify slot and bus reset functions and remove hotplug knowledge from
callers (Sinan Kaya)
- Add Function-Level Reset quirks for Intel and Samsung NVMe devices to
fix guest reboot issues (Alex Williamson)
- Add function 1 DMA alias quirk for Marvell 88SS9183 PCIe SSD
Controller (Bjorn Helgaas)
- Remove Xilinx AXI-PCIe host bridge arch dependency (Palmer Dabbelt)
- Remove Aardvark outbound window configuration (Evan Wang)
- Fix Aardvark bridge window sizing issue (Zachary Zhang)
- Convert Aardvark to use pci_host_probe() to reduce code duplication
(Thomas Petazzoni)
- Correct the Cadence cdns_pcie_writel() signature (Alan Douglas)
- Add Cadence support for optional generic PHYs (Alan Douglas)
- Add Cadence power management ops (Alan Douglas)
- Remove redundant variable from Cadence driver (Colin Ian King)
- Add Kirin MSI support (Xiaowei Song)
- Drop unnecessary root_bus_nr setting from exynos, imx6, keystone,
armada8k, artpec6, designware-plat, histb, qcom, spear13xx (Shawn
Guo)
- Move link notification settings from DesignWare core to individual
drivers (Gustavo Pimentel)
- Add endpoint library MSI-X interfaces (Gustavo Pimentel)
- Correct signature of endpoint library IRQ interfaces (Gustavo
Pimentel)
- Add DesignWare endpoint library MSI-X callbacks (Gustavo Pimentel)
- Add endpoint library MSI-X test support (Gustavo Pimentel)
- Remove unnecessary GFP_ATOMIC from Hyper-V "new child" allocation
(Jia-Ju Bai)
- Add more devices to Broadcom PAXC quirk (Ray Jui)
- Work around corrupted Broadcom PAXC config space to enable SMMU and
GICv3 ITS (Ray Jui)
- Disable MSI parsing to work around broken Broadcom PAXC logic in some
devices (Ray Jui)
- Hide unconfigured functions to work around a Broadcom PAXC defect
(Ray Jui)
- Lower iproc log level to reduce console output during boot (Ray Jui)
- Fix mobiveil iomem/phys_addr_t type usage (Lorenzo Pieralisi)
- Fix mobiveil missing include file (Lorenzo Pieralisi)
- Add mobiveil Kconfig/Makefile support (Lorenzo Pieralisi)
- Fix mvebu I/O space remapping issues (Thomas Petazzoni)
- Use generic pci_host_bridge in mvebu instead of ARM-specific API
(Thomas Petazzoni)
- Whitelist VMD devices with fast interrupt handlers to avoid sharing
vectors with slow handlers (Keith Busch)
* tag 'pci-v4.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (153 commits)
PCI/AER: Don't clear AER bits if error handling is Firmware-First
PCI: Limit config space size for Netronome NFP5000
PCI/MSI: Set IRQCHIP_ONESHOT_SAFE for PCI-MSI irqchips
PCI/VPD: Check for VPD access completion before checking for timeout
PCI: Add PCI_DEVICE_DATA() macro to fully describe device ID entry
PCI: Match Root Port's MPS to endpoint's MPSS as necessary
PCI: Skip MPS logic for Virtual Functions (VFs)
PCI: Add function 1 DMA alias quirk for Marvell 88SS9183
PCI: Check for PCIe Link downtraining
PCI: Add ACS Redirect disable quirk for Intel Sunrise Point
PCI: Add device-specific ACS Redirect disable infrastructure
PCI: Convert device-specific ACS quirks from NULL termination to ARRAY_SIZE
PCI: Add "pci=disable_acs_redir=" parameter for peer-to-peer support
PCI: Allow specifying devices using a base bus and path of devfns
PCI: Make specifying PCI devices in kernel parameters reusable
PCI: Hide ACS quirk declarations inside PCI core
PCI: Delay after FLR of Intel DC P3700 NVMe
PCI: Disable Samsung SM961/PM961 NVMe before FLR
PCI: Export pcie_has_flr()
PCI: mvebu: Drop bogus comment above mvebu_pcie_map_registers()
...
Diffstat (limited to 'drivers/pci')
73 files changed, 2785 insertions, 1563 deletions
diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c
index 4923a2a8e14b..5b78f3b1b918 100644
--- a/drivers/pci/ats.c
+++ b/drivers/pci/ats.c
@@ -273,6 +273,9 @@ int pci_enable_pasid(struct pci_dev *pdev, int features)
 	if (WARN_ON(pdev->pasid_enabled))
 		return -EBUSY;
 
+	if (!pdev->eetlp_prefix_path)
+		return -EINVAL;
+
 	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID);
 	if (!pos)
 		return -EINVAL;
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index cc9fa02d32a0..028b287466fb 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -102,7 +102,7 @@ config PCI_HOST_GENERIC
 
 config PCIE_XILINX
 	bool "Xilinx AXI PCIe host bridge support"
-	depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC) || COMPILE_TEST
+	depends on OF || COMPILE_TEST
 	help
 	  Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
 	  Host Bridge driver.
@@ -239,6 +239,16 @@ config PCIE_MEDIATEK
 	  Say Y here if you want to enable PCIe controller support on
 	  MediaTek SoCs.
 
+config PCIE_MOBIVEIL
+	bool "Mobiveil AXI PCIe controller"
+	depends on ARCH_ZYNQMP || COMPILE_TEST
+	depends on OF
+	depends on PCI_MSI_IRQ_DOMAIN
+	help
+	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
+	  Soft IP. It has up to 8 outbound and inbound windows
+	  for address translation and it is a PCIe Gen4 IP.
+
 config PCIE_TANGO_SMP8759
 	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
 	depends on ARCH_TANGO && PCI_MSI && OF
diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
index 24322b92f200..d56a507495c5 100644
--- a/drivers/pci/controller/Makefile
+++ b/drivers/pci/controller/Makefile
@@ -26,6 +26,7 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
 obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
 obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
 obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
+obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
 obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
 obj-$(CONFIG_VMD) += vmd.o
 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
index 345aab56ce8b..ce9224a36f62 100644
--- a/drivers/pci/controller/dwc/pci-dra7xx.c
+++ b/drivers/pci/controller/dwc/pci-dra7xx.c
@@ -370,7 +370,7 @@ static void dra7xx_pcie_raise_msi_irq(struct dra7xx_pcie *dra7xx,
 }
 
 static int dra7xx_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
-				 enum pci_epc_irq_type type, u8 interrupt_num)
+				 enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
diff --git a/drivers/pci/controller/dwc/pci-exynos.c b/drivers/pci/controller/dwc/pci-exynos.c
index 4cc1e5df8c79..cee5f2f590e2 100644
--- a/drivers/pci/controller/dwc/pci-exynos.c
+++ b/drivers/pci/controller/dwc/pci-exynos.c
@@ -421,7 +421,6 @@ static int __init exynos_add_pcie_port(struct exynos_pcie *ep,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &exynos_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
index 80f604602783..4a9a673b4777 100644
--- a/drivers/pci/controller/dwc/pci-imx6.c
+++ b/drivers/pci/controller/dwc/pci-imx6.c
@@ -667,7 +667,6 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &imx6_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
index 3722a5f31e5e..e88bd221fffe 100644
--- a/drivers/pci/controller/dwc/pci-keystone.c
+++ b/drivers/pci/controller/dwc/pci-keystone.c
@@ -347,7 +347,6 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &keystone_pcie_host_ops;
 	ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np);
 	if (ret) {
diff --git a/drivers/pci/controller/dwc/pcie-armada8k.c b/drivers/pci/controller/dwc/pcie-armada8k.c
index 072fd7ecc29f..0c389a30ef5d 100644
--- a/drivers/pci/controller/dwc/pcie-armada8k.c
+++ b/drivers/pci/controller/dwc/pcie-armada8k.c
@@ -172,7 +172,6 @@ static int armada8k_add_pcie_port(struct armada8k_pcie *pcie,
 	struct device *dev = &pdev->dev;
 	int ret;
 
-	pp->root_bus_nr = -1;
 	pp->ops = &armada8k_pcie_host_ops;
 
 	pp->irq = platform_get_irq(pdev, 0);
diff --git a/drivers/pci/controller/dwc/pcie-artpec6.c b/drivers/pci/controller/dwc/pcie-artpec6.c
index 321b56cfd5d0..dba83abfe764 100644
--- a/drivers/pci/controller/dwc/pcie-artpec6.c
+++ b/drivers/pci/controller/dwc/pcie-artpec6.c
@@ -399,7 +399,6 @@ static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &artpec6_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
@@ -428,7 +427,7 @@ static void artpec6_pcie_ep_init(struct dw_pcie_ep *ep)
 }
 
 static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
-				  enum pci_epc_irq_type type, u8 interrupt_num)
+				  enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 
diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index 8650416f6f9e..1e7b02221eac 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -40,6 +40,39 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 	__dw_pcie_ep_reset_bar(pci, bar, 0);
 }
 
+static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
+				     u8 cap)
+{
+	u8 cap_id, next_cap_ptr;
+	u16 reg;
+
+	reg = dw_pcie_readw_dbi(pci, cap_ptr);
+	next_cap_ptr = (reg & 0xff00) >> 8;
+	cap_id = (reg & 0x00ff);
+
+	if (!next_cap_ptr || cap_id > PCI_CAP_ID_MAX)
+		return 0;
+
+	if (cap_id == cap)
+		return cap_ptr;
+
+	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
+}
+
+static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
+{
+	u8 next_cap_ptr;
+	u16 reg;
+
+	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
+	next_cap_ptr = (reg & 0x00ff);
+
+	if (!next_cap_ptr)
+		return 0;
+
+	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
+}
+
 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
 				   struct pci_epf_header *hdr)
 {
@@ -213,36 +246,84 @@ static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no,
 
 static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no)
 {
-	int val;
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
+
+	if (!ep->msi_cap)
+		return -EINVAL;
+
+	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	if (!(val & PCI_MSI_FLAGS_ENABLE))
+		return -EINVAL;
+
+	val = (val & PCI_MSI_FLAGS_QSIZE) >> 4;
+
+	return val;
+}
+
+static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
+{
+	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
+
+	if (!ep->msi_cap)
+		return -EINVAL;
+
+	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	val &= ~PCI_MSI_FLAGS_QMASK;
+	val |= (interrupts << 1) & PCI_MSI_FLAGS_QMASK;
+	dw_pcie_dbi_ro_wr_en(pci);
+	dw_pcie_writew_dbi(pci, reg, val);
+	dw_pcie_dbi_ro_wr_dis(pci);
+
+	return 0;
+}
+
+static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
+{
+	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
 
-	val = dw_pcie_readw_dbi(pci, MSI_MESSAGE_CONTROL);
-	if (!(val & MSI_CAP_MSI_EN_MASK))
+	if (!ep->msix_cap)
 		return -EINVAL;
 
-	val = (val & MSI_CAP_MME_MASK) >> MSI_CAP_MME_SHIFT;
+	reg = ep->msix_cap + PCI_MSIX_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	if (!(val & PCI_MSIX_FLAGS_ENABLE))
+		return -EINVAL;
+
+	val &= PCI_MSIX_FLAGS_QSIZE;
+
 	return val;
 }
 
-static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 encode_int)
+static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
 {
-	int val;
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
+
+	if (!ep->msix_cap)
+		return -EINVAL;
 
-	val = dw_pcie_readw_dbi(pci, MSI_MESSAGE_CONTROL);
-	val &= ~MSI_CAP_MMC_MASK;
-	val |= (encode_int << MSI_CAP_MMC_SHIFT) & MSI_CAP_MMC_MASK;
+	reg = ep->msix_cap + PCI_MSIX_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	val &= ~PCI_MSIX_FLAGS_QSIZE;
+	val |= interrupts;
 	dw_pcie_dbi_ro_wr_en(pci);
-	dw_pcie_writew_dbi(pci, MSI_MESSAGE_CONTROL, val);
+	dw_pcie_writew_dbi(pci, reg, val);
 	dw_pcie_dbi_ro_wr_dis(pci);
 
 	return 0;
 }
 
 static int dw_pcie_ep_raise_irq(struct pci_epc *epc, u8 func_no,
-				enum pci_epc_irq_type type, u8 interrupt_num)
+				enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 
@@ -282,32 +363,52 @@ static const struct pci_epc_ops epc_ops = {
 	.unmap_addr = dw_pcie_ep_unmap_addr,
 	.set_msi = dw_pcie_ep_set_msi,
 	.get_msi = dw_pcie_ep_get_msi,
+	.set_msix = dw_pcie_ep_set_msix,
+	.get_msix = dw_pcie_ep_get_msix,
 	.raise_irq = dw_pcie_ep_raise_irq,
 	.start = dw_pcie_ep_start,
 	.stop = dw_pcie_ep_stop,
 };
 
+int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct device *dev = pci->dev;
+
+	dev_err(dev, "EP cannot trigger legacy IRQs\n");
+
+	return -EINVAL;
+}
+
 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 			     u8 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct pci_epc *epc = ep->epc;
 	u16 msg_ctrl, msg_data;
-	u32 msg_addr_lower, msg_addr_upper;
+	u32 msg_addr_lower, msg_addr_upper, reg;
 	u64 msg_addr;
 	bool has_upper;
 	int ret;
 
+	if (!ep->msi_cap)
+		return -EINVAL;
+
 	/* Raise MSI per the PCI Local Bus Specification Revision 3.0, 6.8.1. */
-	msg_ctrl = dw_pcie_readw_dbi(pci, MSI_MESSAGE_CONTROL);
+	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	msg_ctrl = dw_pcie_readw_dbi(pci, reg);
 	has_upper = !!(msg_ctrl & PCI_MSI_FLAGS_64BIT);
-	msg_addr_lower = dw_pcie_readl_dbi(pci, MSI_MESSAGE_ADDR_L32);
+	reg = ep->msi_cap + PCI_MSI_ADDRESS_LO;
+	msg_addr_lower = dw_pcie_readl_dbi(pci, reg);
 	if (has_upper) {
-		msg_addr_upper = dw_pcie_readl_dbi(pci, MSI_MESSAGE_ADDR_U32);
-		msg_data = dw_pcie_readw_dbi(pci, MSI_MESSAGE_DATA_64);
+		reg = ep->msi_cap + PCI_MSI_ADDRESS_HI;
+		msg_addr_upper = dw_pcie_readl_dbi(pci, reg);
+		reg = ep->msi_cap + PCI_MSI_DATA_64;
+		msg_data = dw_pcie_readw_dbi(pci, reg);
 	} else {
 		msg_addr_upper = 0;
-		msg_data = dw_pcie_readw_dbi(pci, MSI_MESSAGE_DATA_32);
+		reg = ep->msi_cap + PCI_MSI_DATA_32;
+		msg_data = dw_pcie_readw_dbi(pci, reg);
 	}
 	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
 	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
@@ -322,6 +423,64 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 	return 0;
 }
 
+int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+			      u16 interrupt_num)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct pci_epc *epc = ep->epc;
+	u16 tbl_offset, bir;
+	u32 bar_addr_upper, bar_addr_lower;
+	u32 msg_addr_upper, msg_addr_lower;
+	u32 reg, msg_data, vec_ctrl;
+	u64 tbl_addr, msg_addr, reg_u64;
+	void __iomem *msix_tbl;
+	int ret;
+
+	reg = ep->msix_cap + PCI_MSIX_TABLE;
+	tbl_offset = dw_pcie_readl_dbi(pci, reg);
+	bir = (tbl_offset & PCI_MSIX_TABLE_BIR);
+	tbl_offset &= PCI_MSIX_TABLE_OFFSET;
+	tbl_offset >>= 3;
+
+	reg = PCI_BASE_ADDRESS_0 + (4 * bir);
+	bar_addr_upper = 0;
+	bar_addr_lower = dw_pcie_readl_dbi(pci, reg);
+	reg_u64 = (bar_addr_lower & PCI_BASE_ADDRESS_MEM_TYPE_MASK);
+	if (reg_u64 == PCI_BASE_ADDRESS_MEM_TYPE_64)
+		bar_addr_upper = dw_pcie_readl_dbi(pci, reg + 4);
+
+	tbl_addr = ((u64) bar_addr_upper) << 32 | bar_addr_lower;
+	tbl_addr += (tbl_offset + ((interrupt_num - 1) * PCI_MSIX_ENTRY_SIZE));
+	tbl_addr &= PCI_BASE_ADDRESS_MEM_MASK;
+
+	msix_tbl = ioremap_nocache(ep->phys_base + tbl_addr,
+				   PCI_MSIX_ENTRY_SIZE);
+	if (!msix_tbl)
+		return -EINVAL;
+
+	msg_addr_lower = readl(msix_tbl + PCI_MSIX_ENTRY_LOWER_ADDR);
+	msg_addr_upper = readl(msix_tbl + PCI_MSIX_ENTRY_UPPER_ADDR);
+	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
+	msg_data = readl(msix_tbl + PCI_MSIX_ENTRY_DATA);
+	vec_ctrl = readl(msix_tbl + PCI_MSIX_ENTRY_VECTOR_CTRL);
+
+	iounmap(msix_tbl);
+
+	if (vec_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT)
+		return -EPERM;
+
+	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
+				  epc->mem->page_size);
+	if (ret)
+		return ret;
+
+	writel(msg_data, ep->msi_mem);
+
+	dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
+
+	return 0;
+}
+
 void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 {
 	struct pci_epc *epc = ep->epc;
@@ -386,15 +545,18 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		return -ENOMEM;
 	ep->outbound_addr = addr;
 
-	if (ep->ops->ep_init)
-		ep->ops->ep_init(ep);
-
 	epc = devm_pci_epc_create(dev, &epc_ops);
 	if (IS_ERR(epc)) {
 		dev_err(dev, "Failed to create epc device\n");
 		return PTR_ERR(epc);
 	}
 
+	ep->epc = epc;
+	epc_set_drvdata(epc, ep);
+
+	if (ep->ops->ep_init)
+		ep->ops->ep_init(ep);
+
 	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
 	if (ret < 0)
 		epc->max_functions = 1;
@@ -409,15 +571,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 	ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
 					     epc->mem->page_size);
 	if (!ep->msi_mem) {
-		dev_err(dev, "Failed to reserve memory for MSI\n");
+		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
 		return -ENOMEM;
 	}
+	ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
 
-	epc->features = EPC_FEATURE_NO_LINKUP_NOTIFIER;
-	EPC_FEATURE_SET_BAR(epc->features, BAR_0);
+	ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
 
-	ep->epc = epc;
-	epc_set_drvdata(epc, ep);
 	dw_pcie_setup(pci);
 
 	return 0;
diff --git a/drivers/pci/controller/dwc/pcie-designware-plat.c b/drivers/pci/controller/dwc/pcie-designware-plat.c
index 5937fed4c938..c12bf794d69c 100644
--- a/drivers/pci/controller/dwc/pcie-designware-plat.c
+++ b/drivers/pci/controller/dwc/pcie-designware-plat.c
@@ -70,24 +70,29 @@ static const struct dw_pcie_ops dw_pcie_ops = {
 static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct pci_epc *epc = ep->epc;
 	enum pci_barno bar;
 
 	for (bar = BAR_0; bar <= BAR_5; bar++)
 		dw_pcie_ep_reset_bar(pci, bar);
+
+	epc->features |= EPC_FEATURE_NO_LINKUP_NOTIFIER;
+	epc->features |= EPC_FEATURE_MSIX_AVAILABLE;
 }
 
 static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 				     enum pci_epc_irq_type type,
-				     u8 interrupt_num)
+				     u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 
 	switch (type) {
 	case PCI_EPC_IRQ_LEGACY:
-		dev_err(pci->dev, "EP cannot trigger legacy IRQs\n");
-		return -EINVAL;
+		return dw_pcie_ep_raise_legacy_irq(ep, func_no);
 	case PCI_EPC_IRQ_MSI:
 		return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
+	case PCI_EPC_IRQ_MSIX:
+		return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
 	default:
 		dev_err(pci->dev, "UNKNOWN IRQ type\n");
 	}
@@ -118,7 +123,6 @@ static int dw_plat_add_pcie_port(struct dw_plat_pcie *dw_plat_pcie,
 		return pp->msi_irq;
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &dw_plat_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index bee4e2535a61..96126fd8403c 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -96,17 +96,6 @@
 #define PCIE_GET_ATU_INB_UNR_REG_OFFSET(region)	\
 		((0x3 << 20) | ((region) << 9) | (0x1 << 8))
 
-#define MSI_MESSAGE_CONTROL		0x52
-#define MSI_CAP_MMC_SHIFT		1
-#define MSI_CAP_MMC_MASK		(7 << MSI_CAP_MMC_SHIFT)
-#define MSI_CAP_MME_SHIFT		4
-#define MSI_CAP_MSI_EN_MASK		0x1
-#define MSI_CAP_MME_MASK		(7 << MSI_CAP_MME_SHIFT)
-#define MSI_MESSAGE_ADDR_L32		0x54
-#define MSI_MESSAGE_ADDR_U32		0x58
-#define MSI_MESSAGE_DATA_32		0x58
-#define MSI_MESSAGE_DATA_64		0x5C
-
 #define MAX_MSI_IRQS			256
 #define MAX_MSI_IRQS_PER_CTRL		32
 #define MAX_MSI_CTRLS			(MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
@@ -191,7 +180,7 @@ enum dw_pcie_as_type {
 struct dw_pcie_ep_ops {
 	void	(*ep_init)(struct dw_pcie_ep *ep);
 	int	(*raise_irq)(struct dw_pcie_ep *ep, u8 func_no,
-			     enum pci_epc_irq_type type, u8 interrupt_num);
+			     enum pci_epc_irq_type type, u16 interrupt_num);
 };
 
 struct dw_pcie_ep {
@@ -208,6 +197,8 @@ struct dw_pcie_ep {
 	u32			num_ob_windows;
 	void __iomem		*msi_mem;
 	phys_addr_t		msi_mem_phys;
+	u8			msi_cap;	/* MSI capability offset */
+	u8			msix_cap;	/* MSI-X capability offset */
 };
 
 struct dw_pcie_ops {
@@ -357,8 +348,11 @@ static inline int dw_pcie_allocate_domains(struct pcie_port *pp)
 void dw_pcie_ep_linkup(struct dw_pcie_ep *ep);
 int dw_pcie_ep_init(struct dw_pcie_ep *ep);
 void dw_pcie_ep_exit(struct dw_pcie_ep *ep);
+int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no);
 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 			     u8 interrupt_num);
+int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+			      u16 interrupt_num);
 void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar);
 #else
 static inline void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
@@ -374,12 +368,23 @@ static inline void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 {
 }
 
+static inline int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no)
+{
+	return 0;
+}
+
 static inline int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 					   u8 interrupt_num)
 {
 	return 0;
 }
 
+static inline int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+					    u16 interrupt_num)
+{
+	return 0;
+}
+
 static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 {
 }
diff --git a/drivers/pci/controller/dwc/pcie-histb.c b/drivers/pci/controller/dwc/pcie-histb.c
index 3611d6ce9a92..7b32e619b959 100644
--- a/drivers/pci/controller/dwc/pcie-histb.c
+++ b/drivers/pci/controller/dwc/pcie-histb.c
@@ -420,7 +420,6 @@ static int histb_pcie_probe(struct platform_device *pdev)
 		phy_init(hipcie->phy);
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &histb_pcie_host_ops;
 
 	platform_set_drvdata(pdev, hipcie);
diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c
index d2970a009eb5..5352e0c3be82 100644
--- a/drivers/pci/controller/dwc/pcie-kirin.c
+++ b/drivers/pci/controller/dwc/pcie-kirin.c
@@ -430,6 +430,9 @@ static int kirin_pcie_host_init(struct pcie_port *pp)
 {
 	kirin_pcie_establish_link(pp);
 
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		dw_pcie_msi_init(pp);
+
 	return 0;
 }
 
@@ -445,9 +448,34 @@ static const struct dw_pcie_host_ops kirin_pcie_host_ops = {
 	.host_init = kirin_pcie_host_init,
 };
 
+static int kirin_pcie_add_msi(struct dw_pcie *pci,
+			      struct platform_device *pdev)
+{
+	int irq;
+
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		irq = platform_get_irq(pdev, 0);
+		if (irq < 0) {
+			dev_err(&pdev->dev,
+				"failed to get MSI IRQ (%d)\n", irq);
+			return irq;
+		}
+
+		pci->pp.msi_irq = irq;
+	}
+
+	return 0;
+}
+
 static int __init kirin_add_pcie_port(struct dw_pcie *pci,
 				      struct platform_device *pdev)
 {
+	int ret;
+
+	ret = kirin_pcie_add_msi(pci, pdev);
+	if (ret)
+		return ret;
+
 	pci->pp.ops = &kirin_pcie_host_ops;
 
 	return dw_pcie_host_init(&pci->pp);
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index a1d0198081a6..4352c1cb926d 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -1251,7 +1251,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
-	pp->root_bus_nr = -1;
 	pp->ops = &qcom_pcie_dw_ops;
 
 	if (IS_ENABLED(CONFIG_PCI_MSI)) {
diff --git a/drivers/pci/controller/dwc/pcie-spear13xx.c b/drivers/pci/controller/dwc/pcie-spear13xx.c
index ecb58f7b7566..7d0cdfd8138b 100644
--- a/drivers/pci/controller/dwc/pcie-spear13xx.c
+++ b/drivers/pci/controller/dwc/pcie-spear13xx.c
@@ -210,7 +210,6 @@ static int spear13xx_add_pcie_port(struct spear13xx_pcie *spear13xx_pcie,
 		return ret;
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &spear13xx_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
index 0fae816fba39..6b4555ff2548 100644
--- a/drivers/pci/controller/pci-aardvark.c
+++ b/drivers/pci/controller/pci-aardvark.c
@@ -111,24 +111,6 @@
 #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
 #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
 
-/* PCIe window configuration */
-#define OB_WIN_BASE_ADDR			0x4c00
-#define OB_WIN_BLOCK_SIZE			0x20
-#define OB_WIN_REG_ADDR(win, offset)		(OB_WIN_BASE_ADDR + \
-						 OB_WIN_BLOCK_SIZE * (win) + \
-						 (offset))
-#define OB_WIN_MATCH_LS(win)			OB_WIN_REG_ADDR(win, 0x00)
-#define OB_WIN_MATCH_MS(win)			OB_WIN_REG_ADDR(win, 0x04)
-#define OB_WIN_REMAP_LS(win)			OB_WIN_REG_ADDR(win, 0x08)
-#define OB_WIN_REMAP_MS(win)			OB_WIN_REG_ADDR(win, 0x0c)
-#define OB_WIN_MASK_LS(win)			OB_WIN_REG_ADDR(win, 0x10)
-#define OB_WIN_MASK_MS(win)			OB_WIN_REG_ADDR(win, 0x14)
-#define OB_WIN_ACTIONS(win)			OB_WIN_REG_ADDR(win, 0x18)
-
-/* PCIe window types */
-#define OB_PCIE_MEM				0x0
-#define OB_PCIE_IO				0x4
-
 /* LMI registers base address and register offsets */
 #define LMI_BASE_ADDR				0x6000
 #define CFG_REG					(LMI_BASE_ADDR + 0x0)
@@ -247,34 +229,9 @@ static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
 	return -ETIMEDOUT;
 }
 
-/*
- * Set PCIe address window register which could be used for memory
- * mapping.
- */
-static void advk_pcie_set_ob_win(struct advk_pcie *pcie,
-				 u32 win_num, u32 match_ms,
-				 u32 match_ls, u32 mask_ms,
-				 u32 mask_ls, u32 remap_ms,
-				 u32 remap_ls, u32 action)
-{
-	advk_writel(pcie, match_ls, OB_WIN_MATCH_LS(win_num));
-	advk_writel(pcie, match_ms, OB_WIN_MATCH_MS(win_num));
-	advk_writel(pcie, mask_ms, OB_WIN_MASK_MS(win_num));
-	advk_writel(pcie, mask_ls, OB_WIN_MASK_LS(win_num));
-	advk_writel(pcie, remap_ms, OB_WIN_REMAP_MS(win_num));
-	advk_writel(pcie, remap_ls, OB_WIN_REMAP_LS(win_num));
-	advk_writel(pcie, action, OB_WIN_ACTIONS(win_num));
-	advk_writel(pcie, match_ls | BIT(0), OB_WIN_MATCH_LS(win_num));
-}
-
 static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 {
 	u32 reg;
-	int i;
-
-	/* Point PCIe unit MBUS decode windows to DRAM space */
-	for (i = 0; i < 8; i++)
-		advk_pcie_set_ob_win(pcie, i, 0, 0, 0, 0, 0, 0, 0);
 
 	/* Set to Direct mode */
 	reg = advk_readl(pcie, CTRL_CONFIG_REG);
@@ -433,6 +390,15 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
 	return -ETIMEDOUT;
 }
 
+static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+				   int devfn)
+{
+	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0)
+		return false;
+
+	return true;
+}
+
 static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 			     int where, int size, u32 *val)
 {
@@ -440,7 +406,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 	u32 reg;
 	int ret;
 
-	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0) {
+	if (!advk_pcie_valid_device(pcie, bus, devfn)) {
 		*val = 0xffffffff;
 		return PCIBIOS_DEVICE_NOT_FOUND;
 	}
@@ -494,7 +460,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 	int offset;
 	int ret;
 
-	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0)
+	if (!advk_pcie_valid_device(pcie, bus, devfn))
 		return PCIBIOS_DEVICE_NOT_FOUND;
 
 	if (where % size)
@@ -843,12 +809,6 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
 
 		switch (resource_type(res)) {
 		case IORESOURCE_IO:
-			advk_pcie_set_ob_win(pcie, 1,
-					     upper_32_bits(res->start),
-					     lower_32_bits(res->start),
-					     0, 0xF8000000, 0,
-					     lower_32_bits(res->start),
-					     OB_PCIE_IO);
 			err = devm_pci_remap_iospace(dev, res, iobase);
 			if (err) {
 				dev_warn(dev, "error %d: failed to map resource %pR\n",
@@ -857,12 +817,6 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
 			}
 			break;
 		case IORESOURCE_MEM:
-			advk_pcie_set_ob_win(pcie, 0,
-					     upper_32_bits(res->start),
-					     lower_32_bits(res->start),
-					     0x0, 0xF8000000, 0,
-					     lower_32_bits(res->start),
-					     (2 << 20) | OB_PCIE_MEM);
 			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
 			break;
 		case IORESOURCE_BUS:
@@ -889,7 +843,6 @@ static int advk_pcie_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct advk_pcie *pcie;
 	struct resource *res;
-	struct pci_bus *bus, *child;
 	struct pci_host_bridge *bridge;
 	int ret, irq;
 
@@ -943,21 +896,13 @@ static int advk_pcie_probe(struct platform_device *pdev)
 	bridge->map_irq = of_irq_parse_and_map_pci;
 	bridge->swizzle_irq = pci_common_swizzle;
 
-	ret = pci_scan_root_bus_bridge(bridge);
+	ret = pci_host_probe(bridge);
 	if (ret < 0) {
 		advk_pcie_remove_msi_irq_domain(pcie);
 		advk_pcie_remove_irq_domain(pcie);
 		return ret;
 	}
 
-	bus = bridge->bus;
-
-	pci_bus_assign_resources(bus);
-
-	list_for_each_entry(child, &bus->children, node)
-		pcie_bus_configure_settings(child);
-
-	pci_bus_add_devices(bus);
 	return 0;
 }
 
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index d4d4a55f09f8..c00f82cc54aa 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -1546,7 +1546,7 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
 	unsigned long flags;
 	int ret;
 
-	hpdev = kzalloc(sizeof(*hpdev), GFP_ATOMIC);
+	hpdev = kzalloc(sizeof(*hpdev), GFP_KERNEL);
 	if (!hpdev)
 		return NULL;
 
diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
index 23e270839e6a..50eb0729385b 100644
--- a/drivers/pci/controller/pci-mvebu.c
+++ b/drivers/pci/controller/pci-mvebu.c
@@ -125,6 +125,7 @@ struct mvebu_pcie {
 	struct platform_device *pdev;
 	struct mvebu_pcie_port *ports;
 	struct msi_controller *msi;
+	struct list_head resources;
 	struct resource io;
 	struct resource realio;
 	struct resource mem;
@@ -800,7 +801,7 @@ static struct mvebu_pcie_port *mvebu_pcie_find_port(struct mvebu_pcie *pcie,
 static int mvebu_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 			      int where, int size, u32 val)
 {
-	struct mvebu_pcie *pcie = sys_to_pcie(bus->sysdata);
+	struct mvebu_pcie *pcie = bus->sysdata;
 	struct mvebu_pcie_port *port;
 	int ret;
 
@@ -826,7 +827,7 @@ static int mvebu_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 static int mvebu_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
 			      int size, u32 *val)
 {
-	struct mvebu_pcie *pcie = sys_to_pcie(bus->sysdata);
+	struct mvebu_pcie *pcie = bus->sysdata;
 	struct mvebu_pcie_port *port;
 	int ret;
 
@@ -857,36 +858,6 @@ static struct pci_ops mvebu_pcie_ops = {
 	.write = mvebu_pcie_wr_conf,
 };
 
-static int mvebu_pcie_setup(int nr, struct pci_sys_data *sys)
-{
-	struct mvebu_pcie *pcie = sys_to_pcie(sys);
-	int err, i;
-
-	pcie->mem.name = "PCI MEM";
-	pcie->realio.name = "PCI I/O";
-
-	if (resource_size(&pcie->realio) != 0)
-		pci_add_resource_offset(&sys->resources, &pcie->realio,
-					sys->io_offset);
-
-	pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
-	pci_add_resource(&sys->resources, &pcie->busn);
-
-	err = devm_request_pci_bus_resources(&pcie->pdev->dev, &sys->resources);
-	if (err)
-		return 0;
-
-	for (i = 0; i < pcie->nports; i++) {
-		struct mvebu_pcie_port *port = &pcie->ports[i];
-
-		if (!port->base)
-			continue;
-		mvebu_pcie_setup_hw(port);
-	}
-
-	return 1;
-}
-
 static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev,
 						 const struct resource *res,
 						 resource_size_t start,
@@ -917,31 +888,6 @@ static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev,
 	return start;
 }
 
-static void mvebu_pcie_enable(struct mvebu_pcie *pcie)
-{
-	struct hw_pci hw;
-
-	memset(&hw, 0, sizeof(hw));
-
-#ifdef CONFIG_PCI_MSI
-	hw.msi_ctrl = pcie->msi;
-#endif
-
-	hw.nr_controllers = 1;
-	hw.private_data   = (void **)&pcie;
-	hw.setup          = mvebu_pcie_setup;
-	hw.map_irq        = of_irq_parse_and_map_pci;
-	hw.ops            = &mvebu_pcie_ops;
-	hw.align_resource = mvebu_pcie_align_resource;
-
-	pci_common_init_dev(&pcie->pdev->dev, &hw);
-}
-
-/*
- * Looks up the list of register addresses encoded into the reg =
- * <...> property for one that matches the given port/lane. Once
- * found, maps it.
- */
 static void __iomem *mvebu_pcie_map_registers(struct platform_device *pdev,
 					      struct device_node *np,
 					      struct mvebu_pcie_port *port)
@@ -1190,46 +1136,79 @@ static void mvebu_pcie_powerdown(struct mvebu_pcie_port *port)
 	clk_disable_unprepare(port->clk);
 }
 
-static int mvebu_pcie_probe(struct platform_device *pdev)
+/*
+ * We can't use devm_of_pci_get_host_bridge_resources() because we
+ * need to parse our special DT properties encoding the MEM and IO
+ * apertures.
+ */
+static int mvebu_pcie_parse_request_resources(struct mvebu_pcie *pcie)
 {
-	struct device *dev = &pdev->dev;
-	struct mvebu_pcie *pcie;
+	struct device *dev = &pcie->pdev->dev;
 	struct device_node *np = dev->of_node;
-	struct device_node *child;
-	int num, i, ret;
+	unsigned int i;
+	int ret;
 
-	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
-	if (!pcie)
-		return -ENOMEM;
+	INIT_LIST_HEAD(&pcie->resources);
 
-	pcie->pdev = pdev;
-	platform_set_drvdata(pdev, pcie);
+	/* Get the bus range */
+	ret = of_pci_parse_bus_range(np, &pcie->busn);
+	if (ret) {
+		dev_err(dev, "failed to parse bus-range property: %d\n", ret);
+		return ret;
+	}
+	pci_add_resource(&pcie->resources, &pcie->busn);
 
-	/* Get the PCIe memory and I/O aperture */
+	/* Get the PCIe memory aperture */
 	mvebu_mbus_get_pcie_mem_aperture(&pcie->mem);
 	if (resource_size(&pcie->mem) == 0) {
 		dev_err(dev, "invalid memory aperture size\n");
 		return -EINVAL;
 	}
 
+	pcie->mem.name = "PCI MEM";
+	pci_add_resource(&pcie->resources, &pcie->mem);
+
+	/* Get the PCIe IO aperture */
 	mvebu_mbus_get_pcie_io_aperture(&pcie->io);
 
 	if (resource_size(&pcie->io) != 0) {
 		pcie->realio.flags = pcie->io.flags;
 		pcie->realio.start = PCIBIOS_MIN_IO;
 		pcie->realio.end = min_t(resource_size_t,
-					 IO_SPACE_LIMIT,
-					 resource_size(&pcie->io));
-	} else
-		pcie->realio = pcie->io;
+					 IO_SPACE_LIMIT - SZ_64K,
+					 resource_size(&pcie->io) - 1);
+		pcie->realio.name = "PCI I/O";
 
-	/* Get the bus range */
-	ret = of_pci_parse_bus_range(np, &pcie->busn);
-	if (ret) {
-		dev_err(dev, "failed to parse bus-range property: %d\n", ret);
-		return ret;
-	}
+		for (i = 0; i < resource_size(&pcie->realio); i += SZ_64K)
+			pci_ioremap_io(i, pcie->io.start + i);
+
+		pci_add_resource(&pcie->resources, &pcie->realio);
+	}
 
+	return devm_request_pci_bus_resources(dev, &pcie->resources);
+}
+
+static int mvebu_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct mvebu_pcie *pcie;
+	struct pci_host_bridge *bridge;
+	struct device_node *np = dev->of_node;
+	struct device_node *child;
+	int num, i, ret;
+
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct mvebu_pcie));
+	if (!bridge)
+		return -ENOMEM;
+
+	pcie = pci_host_bridge_priv(bridge);
+	pcie->pdev = pdev;
+	platform_set_drvdata(pdev, pcie);
+
+	ret = mvebu_pcie_parse_request_resources(pcie);
+	if (ret)
+		return ret;
+
 	num = of_get_available_child_count(np);
 
 	pcie->ports = devm_kcalloc(dev, num, sizeof(*pcie->ports), GFP_KERNEL);
@@ -1272,20 +1251,24 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
 			continue;
 		}
 
+		mvebu_pcie_setup_hw(port);
 		mvebu_pcie_set_local_dev_nr(port, 1);
 		mvebu_sw_pci_bridge_init(port);
 	}
 
 	pcie->nports = i;
 
-	for (i = 0; i < (IO_SPACE_LIMIT - SZ_64K); i += SZ_64K)
-		pci_ioremap_io(i, pcie->io.start + i);
-
-	mvebu_pcie_enable(pcie);
-
-	platform_set_drvdata(pdev, pcie);
-
-	return 0;
+	list_splice_init(&pcie->resources, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->sysdata = pcie;
+	bridge->busnr = 0;
+	bridge->ops = &mvebu_pcie_ops;
+	bridge->map_irq = of_irq_parse_and_map_pci;
+	bridge->swizzle_irq = pci_common_swizzle;
+	bridge->align_resource = mvebu_pcie_align_resource;
+	bridge->msi = pcie->msi;
+
+	return pci_host_probe(bridge);
 }
 
 static const struct of_device_id mvebu_pcie_of_match_table[] = {
diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c
index e3fe4124e3af..9e87dd7f9ac3 100644
--- a/drivers/pci/controller/pcie-cadence-ep.c
+++ b/drivers/pci/controller/pcie-cadence-ep.c
@@ -238,7 +238,7 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
 	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
-	u16 flags, mmc, mme;
+	u16 flags, mme;
 
 	/* Validate that the MSI feature is actually enabled. */
 	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
@@ -249,7 +249,6 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
 	 * Get the Multiple Message Enable bitfield from the Message Control
 	 * register.
 	 */
-	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
 	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
 
 	return mme;
@@ -363,7 +362,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
 }
 
 static int cdns_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
-				  enum pci_epc_irq_type type, u8 interrupt_num)
+				  enum pci_epc_irq_type type,
+				  u16 interrupt_num)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 
@@ -439,6 +439,7 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 	struct pci_epc *epc;
 	struct resource *res;
 	int ret;
+	int phy_count;
 
 	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
 	if (!ep)
@@ -473,6 +474,12 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 	if (!ep->ob_addr)
 		return -ENOMEM;
 
+	ret = cdns_pcie_init_phy(dev, pcie);
+	if (ret) {
+		dev_err(dev, "failed to init phy\n");
+		return ret;
+	}
+	platform_set_drvdata(pdev, pcie);
 	pm_runtime_enable(dev);
 	ret = pm_runtime_get_sync(dev);
 	if (ret < 0) {
@@ -521,6 +528,10 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 
  err_get_sync:
 	pm_runtime_disable(dev);
+	cdns_pcie_disable_phy(pcie);
+	phy_count = pcie->phy_count;
+	while (phy_count--)
+		device_link_del(pcie->link[phy_count]);
 
 	return ret;
 }
@@ -528,6 +539,7 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 static void cdns_pcie_ep_shutdown(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
 	int ret;
 
 	ret = pm_runtime_put_sync(dev);
@@ -536,13 +548,14 @@ static void cdns_pcie_ep_shutdown(struct platform_device *pdev)
 
 	pm_runtime_disable(dev);
 
-	/* The PCIe controller can't be disabled. */
+	cdns_pcie_disable_phy(pcie);
 }
 
 static struct platform_driver cdns_pcie_ep_driver = {
 	.driver = {
 		.name = "cdns-pcie-ep",
 		.of_match_table = cdns_pcie_ep_of_match,
+		.pm	= &cdns_pcie_pm_ops,
 	},
 	.probe = cdns_pcie_ep_probe,
 	.shutdown = cdns_pcie_ep_shutdown,
diff --git a/drivers/pci/controller/pcie-cadence-host.c b/drivers/pci/controller/pcie-cadence-host.c index a4ebbd37b553..ec394f6a19c8 100644 --- a/drivers/pci/controller/pcie-cadence-host.c +++ b/drivers/pci/controller/pcie-cadence-host.c | |||
@@ -58,6 +58,11 @@ static void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, | |||
58 | 58 | ||
59 | return pcie->reg_base + (where & 0xfff); | 59 | return pcie->reg_base + (where & 0xfff); |
60 | } | 60 | } |
61 | /* Check that the link is up */ | ||
62 | if (!(cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE) & 0x1)) | ||
63 | return NULL; | ||
64 | /* Clear AXI link-down status */ | ||
65 | cdns_pcie_writel(pcie, CDNS_PCIE_AT_LINKDOWN, 0x0); | ||
61 | 66 | ||
62 | /* Update Output registers for AXI region 0. */ | 67 | /* Update Output registers for AXI region 0. */ |
63 | addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) | | 68 | addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) | |
@@ -239,6 +244,7 @@ static int cdns_pcie_host_probe(struct platform_device *pdev)
 	struct cdns_pcie *pcie;
 	struct resource *res;
 	int ret;
+	int phy_count;
 
 	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
 	if (!bridge)
@@ -290,6 +296,13 @@ static int cdns_pcie_host_probe(struct platform_device *pdev)
 	}
 	pcie->mem_res = res;
 
+	ret = cdns_pcie_init_phy(dev, pcie);
+	if (ret) {
+		dev_err(dev, "failed to init phy\n");
+		return ret;
+	}
+	platform_set_drvdata(pdev, pcie);
+
 	pm_runtime_enable(dev);
 	ret = pm_runtime_get_sync(dev);
 	if (ret < 0) {
@@ -322,15 +335,35 @@ static int cdns_pcie_host_probe(struct platform_device *pdev)
 
 err_get_sync:
 	pm_runtime_disable(dev);
+	cdns_pcie_disable_phy(pcie);
+	phy_count = pcie->phy_count;
+	while (phy_count--)
+		device_link_del(pcie->link[phy_count]);
 
 	return ret;
 }
 
+static void cdns_pcie_shutdown(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	ret = pm_runtime_put_sync(dev);
+	if (ret < 0)
+		dev_dbg(dev, "pm_runtime_put_sync failed\n");
+
+	pm_runtime_disable(dev);
+	cdns_pcie_disable_phy(pcie);
+}
+
 static struct platform_driver cdns_pcie_host_driver = {
 	.driver = {
 		.name = "cdns-pcie-host",
 		.of_match_table = cdns_pcie_host_of_match,
+		.pm	= &cdns_pcie_pm_ops,
 	},
 	.probe = cdns_pcie_host_probe,
+	.shutdown = cdns_pcie_shutdown,
 };
 builtin_platform_driver(cdns_pcie_host_driver);
diff --git a/drivers/pci/controller/pcie-cadence.c b/drivers/pci/controller/pcie-cadence.c
index 138d113eb45d..86f1b002c846 100644
--- a/drivers/pci/controller/pcie-cadence.c
+++ b/drivers/pci/controller/pcie-cadence.c
@@ -124,3 +124,126 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), 0);
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0);
 }
+
+void cdns_pcie_disable_phy(struct cdns_pcie *pcie)
+{
+	int i = pcie->phy_count;
+
+	while (i--) {
+		phy_power_off(pcie->phy[i]);
+		phy_exit(pcie->phy[i]);
+	}
+}
+
+int cdns_pcie_enable_phy(struct cdns_pcie *pcie)
+{
+	int ret;
+	int i;
+
+	for (i = 0; i < pcie->phy_count; i++) {
+		ret = phy_init(pcie->phy[i]);
+		if (ret < 0)
+			goto err_phy;
+
+		ret = phy_power_on(pcie->phy[i]);
+		if (ret < 0) {
+			phy_exit(pcie->phy[i]);
+			goto err_phy;
+		}
+	}
+
+	return 0;
+
+err_phy:
+	while (--i >= 0) {
+		phy_power_off(pcie->phy[i]);
+		phy_exit(pcie->phy[i]);
+	}
+
+	return ret;
+}
+
+int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
+{
+	struct device_node *np = dev->of_node;
+	int phy_count;
+	struct phy **phy;
+	struct device_link **link;
+	int i;
+	int ret;
+	const char *name;
+
+	phy_count = of_property_count_strings(np, "phy-names");
+	if (phy_count < 1) {
+		dev_err(dev, "no phy-names. PHY will not be initialized\n");
+		pcie->phy_count = 0;
+		return 0;
+	}
+
+	phy = devm_kzalloc(dev, sizeof(*phy) * phy_count, GFP_KERNEL);
+	if (!phy)
+		return -ENOMEM;
+
+	link = devm_kzalloc(dev, sizeof(*link) * phy_count, GFP_KERNEL);
+	if (!link)
+		return -ENOMEM;
+
+	for (i = 0; i < phy_count; i++) {
+		of_property_read_string_index(np, "phy-names", i, &name);
+		phy[i] = devm_phy_optional_get(dev, name);
+		if (IS_ERR(phy))
+			return PTR_ERR(phy);
+
+		link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
+		if (!link[i]) {
+			ret = -EINVAL;
+			goto err_link;
+		}
+	}
+
+	pcie->phy_count = phy_count;
+	pcie->phy = phy;
+	pcie->link = link;
+
+	ret = cdns_pcie_enable_phy(pcie);
+	if (ret)
+		goto err_link;
+
+	return 0;
+
+err_link:
+	while (--i >= 0)
+		device_link_del(link[i]);
+
+	return ret;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int cdns_pcie_suspend_noirq(struct device *dev)
+{
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
+
+	cdns_pcie_disable_phy(pcie);
+
+	return 0;
+}
+
+static int cdns_pcie_resume_noirq(struct device *dev)
+{
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	ret = cdns_pcie_enable_phy(pcie);
+	if (ret) {
+		dev_err(dev, "failed to enable phy\n");
+		return ret;
+	}
+
+	return 0;
+}
+#endif
+
+const struct dev_pm_ops cdns_pcie_pm_ops = {
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
+				      cdns_pcie_resume_noirq)
+};
diff --git a/drivers/pci/controller/pcie-cadence.h b/drivers/pci/controller/pcie-cadence.h
index 4bb27333b05c..ae6bf2a2b3d3 100644
--- a/drivers/pci/controller/pcie-cadence.h
+++ b/drivers/pci/controller/pcie-cadence.h
@@ -8,6 +8,7 @@
 
 #include <linux/kernel.h>
 #include <linux/pci.h>
+#include <linux/phy/phy.h>
 
 /*
  * Local Management Registers
@@ -165,6 +166,9 @@
 #define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
 	(CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
 
+/* AXI link down register */
+#define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
+
 enum cdns_pcie_rp_bar {
 	RP_BAR0,
 	RP_BAR1,
@@ -229,6 +233,9 @@ struct cdns_pcie {
 	struct resource *mem_res;
 	bool is_rc;
 	u8 bus;
+	int phy_count;
+	struct phy **phy;
+	struct device_link **link;
 };
 
 /* Register access */
@@ -279,7 +286,7 @@ static inline void cdns_pcie_ep_fn_writew(struct cdns_pcie *pcie, u8 fn,
 }
 
 static inline void cdns_pcie_ep_fn_writel(struct cdns_pcie *pcie, u8 fn,
-					  u32 reg, u16 value)
+					  u32 reg, u32 value)
 {
 	writel(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
 }
@@ -307,5 +314,9 @@ void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u8 fn,
 					      u32 r, u64 cpu_addr);
 
 void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
+void cdns_pcie_disable_phy(struct cdns_pcie *pcie);
+int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
+int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
+extern const struct dev_pm_ops cdns_pcie_pm_ops;
 
 #endif /* _PCIE_CADENCE_H */
diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c
index 3c76c5fa4f32..3160e9342a2f 100644
--- a/drivers/pci/controller/pcie-iproc.c
+++ b/drivers/pci/controller/pcie-iproc.c
@@ -85,6 +85,8 @@
 #define IMAP_VALID_SHIFT	0
 #define IMAP_VALID		BIT(IMAP_VALID_SHIFT)
 
+#define IPROC_PCI_PM_CAP	0x48
+#define IPROC_PCI_PM_CAP_MASK	0xffff
 #define IPROC_PCI_EXP_CAP	0xac
 
 #define IPROC_PCIE_REG_INVALID	0xffff
@@ -375,6 +377,17 @@ static const u16 iproc_pcie_reg_paxc_v2[] = {
 	[IPROC_PCIE_CFG_DATA]		= 0x1fc,
 };
 
+/*
+ * List of device IDs of controllers that have corrupted capability list that
+ * require SW fixup
+ */
+static const u16 iproc_pcie_corrupt_cap_did[] = {
+	0x16cd,
+	0x16f0,
+	0xd802,
+	0xd804
+};
+
 static inline struct iproc_pcie *iproc_data(struct pci_bus *bus)
 {
 	struct iproc_pcie *pcie = bus->sysdata;
@@ -495,6 +508,49 @@ static unsigned int iproc_pcie_cfg_retry(void __iomem *cfg_data_p)
 	return data;
 }
 
+static void iproc_pcie_fix_cap(struct iproc_pcie *pcie, int where, u32 *val)
+{
+	u32 i, dev_id;
+
+	switch (where & ~0x3) {
+	case PCI_VENDOR_ID:
+		dev_id = *val >> 16;
+
+		/*
+		 * Activate fixup for those controllers that have corrupted
+		 * capability list registers
+		 */
+		for (i = 0; i < ARRAY_SIZE(iproc_pcie_corrupt_cap_did); i++)
+			if (dev_id == iproc_pcie_corrupt_cap_did[i])
+				pcie->fix_paxc_cap = true;
+		break;
+
+	case IPROC_PCI_PM_CAP:
+		if (pcie->fix_paxc_cap) {
+			/* advertise PM, force next capability to PCIe */
+			*val &= ~IPROC_PCI_PM_CAP_MASK;
+			*val |= IPROC_PCI_EXP_CAP << 8 | PCI_CAP_ID_PM;
+		}
+		break;
+
+	case IPROC_PCI_EXP_CAP:
+		if (pcie->fix_paxc_cap) {
+			/* advertise root port, version 2, terminate here */
+			*val = (PCI_EXP_TYPE_ROOT_PORT << 4 | 2) << 16 |
+				PCI_CAP_ID_EXP;
+		}
+		break;
+
+	case IPROC_PCI_EXP_CAP + PCI_EXP_RTCTL:
+		/* Don't advertise CRS SV support */
+		*val &= ~(PCI_EXP_RTCAP_CRSVIS << 16);
+		break;
+
+	default:
+		break;
+	}
+}
+
 static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
 				  int where, int size, u32 *val)
 {
@@ -509,13 +565,10 @@ static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
 	/* root complex access */
 	if (busno == 0) {
 		ret = pci_generic_config_read32(bus, devfn, where, size, val);
-		if (ret != PCIBIOS_SUCCESSFUL)
-			return ret;
+		if (ret == PCIBIOS_SUCCESSFUL)
+			iproc_pcie_fix_cap(pcie, where, val);
 
-		/* Don't advertise CRS SV support */
-		if ((where & ~0x3) == IPROC_PCI_EXP_CAP + PCI_EXP_RTCTL)
-			*val &= ~(PCI_EXP_RTCAP_CRSVIS << 16);
-		return PCIBIOS_SUCCESSFUL;
+		return ret;
 	}
 
 	cfg_data_p = iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where);
@@ -529,6 +582,25 @@ static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
 	if (size <= 2)
 		*val = (data >> (8 * (where & 3))) & ((1 << (size * 8)) - 1);
 
+	/*
+	 * For PAXC and PAXCv2, the total number of PFs that one can enumerate
+	 * depends on the firmware configuration. Unfortunately, due to an ASIC
+	 * bug, unconfigured PFs cannot be properly hidden from the root
+	 * complex. As a result, write access to these PFs will cause bus lock
+	 * up on the embedded processor
+	 *
+	 * Since all unconfigured PFs are left with an incorrect, staled device
+	 * ID of 0x168e (PCI_DEVICE_ID_NX2_57810), we try to catch those access
+	 * early here and reject them all
+	 */
+#define DEVICE_ID_MASK     0xffff0000
+#define DEVICE_ID_SHIFT    16
+	if (pcie->rej_unconfig_pf &&
+	    (where & CFG_ADDR_REG_NUM_MASK) == PCI_VENDOR_ID)
+		if ((*val & DEVICE_ID_MASK) ==
+		    (PCI_DEVICE_ID_NX2_57810 << DEVICE_ID_SHIFT))
+			return PCIBIOS_FUNC_NOT_SUPPORTED;
+
 	return PCIBIOS_SUCCESSFUL;
 }
 
534 | 606 | ||
@@ -628,7 +700,7 @@ static int iproc_pcie_config_read32(struct pci_bus *bus, unsigned int devfn, | |||
628 | struct iproc_pcie *pcie = iproc_data(bus); | 700 | struct iproc_pcie *pcie = iproc_data(bus); |
629 | 701 | ||
630 | iproc_pcie_apb_err_disable(bus, true); | 702 | iproc_pcie_apb_err_disable(bus, true); |
631 | if (pcie->type == IPROC_PCIE_PAXB_V2) | 703 | if (pcie->iproc_cfg_read) |
632 | ret = iproc_pcie_config_read(bus, devfn, where, size, val); | 704 | ret = iproc_pcie_config_read(bus, devfn, where, size, val); |
633 | else | 705 | else |
634 | ret = pci_generic_config_read32(bus, devfn, where, size, val); | 706 | ret = pci_generic_config_read32(bus, devfn, where, size, val); |
@@ -808,14 +880,14 @@ static inline int iproc_pcie_ob_write(struct iproc_pcie *pcie, int window_idx, | |||
808 | writel(lower_32_bits(pci_addr), pcie->base + omap_offset); | 880 | writel(lower_32_bits(pci_addr), pcie->base + omap_offset); |
809 | writel(upper_32_bits(pci_addr), pcie->base + omap_offset + 4); | 881 | writel(upper_32_bits(pci_addr), pcie->base + omap_offset + 4); |
810 | 882 | ||
811 | dev_info(dev, "ob window [%d]: offset 0x%x axi %pap pci %pap\n", | 883 | dev_dbg(dev, "ob window [%d]: offset 0x%x axi %pap pci %pap\n", |
812 | window_idx, oarr_offset, &axi_addr, &pci_addr); | 884 | window_idx, oarr_offset, &axi_addr, &pci_addr); |
813 | dev_info(dev, "oarr lo 0x%x oarr hi 0x%x\n", | 885 | dev_dbg(dev, "oarr lo 0x%x oarr hi 0x%x\n", |
814 | readl(pcie->base + oarr_offset), | 886 | readl(pcie->base + oarr_offset), |
815 | readl(pcie->base + oarr_offset + 4)); | 887 | readl(pcie->base + oarr_offset + 4)); |
816 | dev_info(dev, "omap lo 0x%x omap hi 0x%x\n", | 888 | dev_dbg(dev, "omap lo 0x%x omap hi 0x%x\n", |
817 | readl(pcie->base + omap_offset), | 889 | readl(pcie->base + omap_offset), |
818 | readl(pcie->base + omap_offset + 4)); | 890 | readl(pcie->base + omap_offset + 4)); |
819 | 891 | ||
820 | return 0; | 892 | return 0; |
821 | } | 893 | } |
@@ -982,8 +1054,8 @@ static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx, | |||
982 | iproc_pcie_reg_is_invalid(imap_offset)) | 1054 | iproc_pcie_reg_is_invalid(imap_offset)) |
983 | return -EINVAL; | 1055 | return -EINVAL; |
984 | 1056 | ||
985 | dev_info(dev, "ib region [%d]: offset 0x%x axi %pap pci %pap\n", | 1057 | dev_dbg(dev, "ib region [%d]: offset 0x%x axi %pap pci %pap\n", |
986 | region_idx, iarr_offset, &axi_addr, &pci_addr); | 1058 | region_idx, iarr_offset, &axi_addr, &pci_addr); |
987 | 1059 | ||
988 | /* | 1060 | /* |
989 | * Program the IARR registers. The upper 32-bit IARR register is | 1061 | * Program the IARR registers. The upper 32-bit IARR register is |
@@ -993,9 +1065,9 @@ static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx, | |||
993 | pcie->base + iarr_offset); | 1065 | pcie->base + iarr_offset); |
994 | writel(upper_32_bits(pci_addr), pcie->base + iarr_offset + 4); | 1066 | writel(upper_32_bits(pci_addr), pcie->base + iarr_offset + 4); |
995 | 1067 | ||
996 | dev_info(dev, "iarr lo 0x%x iarr hi 0x%x\n", | 1068 | dev_dbg(dev, "iarr lo 0x%x iarr hi 0x%x\n", |
997 | readl(pcie->base + iarr_offset), | 1069 | readl(pcie->base + iarr_offset), |
998 | readl(pcie->base + iarr_offset + 4)); | 1070 | readl(pcie->base + iarr_offset + 4)); |
999 | 1071 | ||
1000 | /* | 1072 | /* |
1001 | * Now program the IMAP registers. Each IARR region may have one or | 1073 | * Now program the IMAP registers. Each IARR region may have one or |
@@ -1009,10 +1081,10 @@ static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx, | |||
1009 | writel(upper_32_bits(axi_addr), | 1081 | writel(upper_32_bits(axi_addr), |
1010 | pcie->base + imap_offset + ib_map->imap_addr_offset); | 1082 | pcie->base + imap_offset + ib_map->imap_addr_offset); |
1011 | 1083 | ||
1012 | dev_info(dev, "imap window [%d] lo 0x%x hi 0x%x\n", | 1084 | dev_dbg(dev, "imap window [%d] lo 0x%x hi 0x%x\n", |
1013 | window_idx, readl(pcie->base + imap_offset), | 1085 | window_idx, readl(pcie->base + imap_offset), |
1014 | readl(pcie->base + imap_offset + | 1086 | readl(pcie->base + imap_offset + |
1015 | ib_map->imap_addr_offset)); | 1087 | ib_map->imap_addr_offset)); |
1016 | 1088 | ||
1017 | imap_offset += ib_map->imap_window_offset; | 1089 | imap_offset += ib_map->imap_window_offset; |
1018 | axi_addr += size; | 1090 | axi_addr += size; |
@@ -1144,10 +1216,22 @@ static int iproc_pcie_paxb_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr)
 	return ret;
 }
 
-static void iproc_pcie_paxc_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr)
+static void iproc_pcie_paxc_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr,
+					 bool enable)
 {
 	u32 val;
 
+	if (!enable) {
+		/*
+		 * Disable PAXC MSI steering. All write transfers will be
+		 * treated as non-MSI transfers
+		 */
+		val = iproc_pcie_read_reg(pcie, IPROC_PCIE_MSI_EN_CFG);
+		val &= ~MSI_ENABLE_CFG;
+		iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_EN_CFG, val);
+		return;
+	}
+
 	/*
 	 * Program bits [43:13] of address of GITS_TRANSLATER register into
 	 * bits [30:0] of the MSI base address register. In fact, in all iProc
@@ -1201,7 +1285,7 @@ static int iproc_pcie_msi_steer(struct iproc_pcie *pcie,
 			return ret;
 		break;
 	case IPROC_PCIE_PAXC_V2:
-		iproc_pcie_paxc_v2_msi_steer(pcie, msi_addr);
+		iproc_pcie_paxc_v2_msi_steer(pcie, msi_addr, true);
 		break;
 	default:
 		return -EINVAL;
@@ -1271,6 +1355,7 @@ static int iproc_pcie_rev_init(struct iproc_pcie *pcie)
 		break;
 	case IPROC_PCIE_PAXB:
 		regs = iproc_pcie_reg_paxb;
+		pcie->iproc_cfg_read = true;
 		pcie->has_apb_err_disable = true;
 		if (pcie->need_ob_cfg) {
 			pcie->ob_map = paxb_ob_map;
@@ -1293,10 +1378,14 @@ static int iproc_pcie_rev_init(struct iproc_pcie *pcie)
 	case IPROC_PCIE_PAXC:
 		regs = iproc_pcie_reg_paxc;
 		pcie->ep_is_internal = true;
+		pcie->iproc_cfg_read = true;
+		pcie->rej_unconfig_pf = true;
 		break;
 	case IPROC_PCIE_PAXC_V2:
 		regs = iproc_pcie_reg_paxc_v2;
 		pcie->ep_is_internal = true;
+		pcie->iproc_cfg_read = true;
+		pcie->rej_unconfig_pf = true;
 		pcie->need_msi_steer = true;
 		break;
 	default:
@@ -1427,6 +1516,24 @@ int iproc_pcie_remove(struct iproc_pcie *pcie)
 }
 EXPORT_SYMBOL(iproc_pcie_remove);
 
+/*
+ * The MSI parsing logic in certain revisions of Broadcom PAXC based root
+ * complex does not work and needs to be disabled
+ */
+static void quirk_paxc_disable_msi_parsing(struct pci_dev *pdev)
+{
+	struct iproc_pcie *pcie = iproc_data(pdev->bus);
+
+	if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
+		iproc_pcie_paxc_v2_msi_steer(pcie, 0, false);
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16f0,
+			quirk_paxc_disable_msi_parsing);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0xd802,
+			quirk_paxc_disable_msi_parsing);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0xd804,
+			quirk_paxc_disable_msi_parsing);
+
 MODULE_AUTHOR("Ray Jui <rjui@broadcom.com>");
 MODULE_DESCRIPTION("Broadcom iPROC PCIe common driver");
 MODULE_LICENSE("GPL v2");
diff --git a/drivers/pci/controller/pcie-iproc.h b/drivers/pci/controller/pcie-iproc.h
index 814b600b383a..4f03ea539805 100644
--- a/drivers/pci/controller/pcie-iproc.h
+++ b/drivers/pci/controller/pcie-iproc.h
@@ -58,8 +58,13 @@ struct iproc_msi;
 * @phy: optional PHY device that controls the Serdes
 * @map_irq: function callback to map interrupts
 * @ep_is_internal: indicates an internal emulated endpoint device is connected
+ * @iproc_cfg_read: indicates the iProc config read function should be used
+ * @rej_unconfig_pf: indicates the root complex needs to detect and reject
+ * enumeration against unconfigured physical functions emulated in the ASIC
 * @has_apb_err_disable: indicates the controller can be configured to prevent
 * unsupported request from being forwarded as an APB bus error
+ * @fix_paxc_cap: indicates the controller has corrupted capability list in its
+ * config space registers and requires SW based fixup
 *
 * @need_ob_cfg: indicates SW needs to configure the outbound mapping window
 * @ob: outbound mapping related parameters
@@ -84,7 +89,10 @@ struct iproc_pcie {
 	struct phy *phy;
 	int (*map_irq)(const struct pci_dev *, u8, u8);
 	bool ep_is_internal;
+	bool iproc_cfg_read;
+	bool rej_unconfig_pf;
 	bool has_apb_err_disable;
+	bool fix_paxc_cap;
 
 	bool need_ob_cfg;
 	struct iproc_pcie_ob ob;
diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c
index cf0aa7cee5b0..a939e8d31735 100644
--- a/drivers/pci/controller/pcie-mobiveil.c
+++ b/drivers/pci/controller/pcie-mobiveil.c
@@ -23,6 +23,8 @@
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 
+#include "../pci.h"
+
 /* register offsets and bit positions */
 
 /*
@@ -130,7 +132,7 @@ struct mobiveil_pcie {
 	void __iomem *config_axi_slave_base;	/* endpoint config base */
 	void __iomem *csr_axi_slave_base;	/* root port config base */
 	void __iomem *apb_csr_base;	/* MSI register base */
-	void __iomem *pcie_reg_base;	/* Physical PCIe Controller Base */
+	phys_addr_t pcie_reg_base;	/* Physical PCIe Controller Base */
 	struct irq_domain *intx_domain;
 	raw_spinlock_t intx_mask_lock;
 	int irq;
diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
index 6beba8ed7b84..b8163c56a142 100644
--- a/drivers/pci/controller/pcie-rockchip-ep.c
+++ b/drivers/pci/controller/pcie-rockchip-ep.c
@@ -472,7 +472,7 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
 
 static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
 				      enum pci_epc_irq_type type,
-				      u8 interrupt_num)
+				      u16 interrupt_num)
 {
 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
 
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 942b64fc7f1f..fd2dbd7eed7b 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -197,9 +197,20 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
 	int i, best = 1;
 	unsigned long flags;
 
-	if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1)
+	if (vmd->msix_count == 1)
 		return &vmd->irqs[0];
 
+	/*
+	 * White list for fast-interrupt handlers. All others will share the
+	 * "slow" interrupt vector.
+	 */
+	switch (msi_desc_to_pci_dev(desc)->class) {
+	case PCI_CLASS_STORAGE_EXPRESS:
+		break;
+	default:
+		return &vmd->irqs[0];
+	}
+
 	raw_spin_lock_irqsave(&list_lock, flags);
 	for (i = 1; i < vmd->msix_count; i++)
 		if (vmd->irqs[i].count < vmd->irqs[best].count)
diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
index 63ed706445b9..3e86fa3c7da3 100644
--- a/drivers/pci/endpoint/functions/pci-epf-test.c
+++ b/drivers/pci/endpoint/functions/pci-epf-test.c
@@ -18,13 +18,16 @@
 #include <linux/pci-epf.h>
 #include <linux/pci_regs.h>
 
+#define IRQ_TYPE_LEGACY			0
+#define IRQ_TYPE_MSI			1
+#define IRQ_TYPE_MSIX			2
+
 #define COMMAND_RAISE_LEGACY_IRQ	BIT(0)
 #define COMMAND_RAISE_MSI_IRQ		BIT(1)
-#define MSI_NUMBER_SHIFT		2
-#define MSI_NUMBER_MASK			(0x3f << MSI_NUMBER_SHIFT)
-#define COMMAND_READ			BIT(8)
-#define COMMAND_WRITE			BIT(9)
-#define COMMAND_COPY			BIT(10)
+#define COMMAND_RAISE_MSIX_IRQ		BIT(2)
+#define COMMAND_READ			BIT(3)
+#define COMMAND_WRITE			BIT(4)
+#define COMMAND_COPY			BIT(5)
 
 #define STATUS_READ_SUCCESS		BIT(0)
 #define STATUS_READ_FAIL		BIT(1)
@@ -45,6 +48,7 @@ struct pci_epf_test {
 	struct pci_epf	*epf;
 	enum pci_barno	test_reg_bar;
 	bool		linkup_notifier;
+	bool		msix_available;
 	struct delayed_work	cmd_handler;
 };
 
@@ -56,6 +60,8 @@ struct pci_epf_test_reg {
 	u64	dst_addr;
 	u32	size;
 	u32	checksum;
+	u32	irq_type;
+	u32	irq_number;
 } __packed;
 
 static struct pci_epf_header test_header = {
@@ -244,31 +250,42 @@ err:
 	return ret;
 }
 
-static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq)
+static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq_type,
+				   u16 irq)
 {
-	u8 msi_count;
 	struct pci_epf *epf = epf_test->epf;
+	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
 	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
 	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	reg->status |= STATUS_IRQ_RAISED;
-	msi_count = pci_epc_get_msi(epc, epf->func_no);
-	if (irq > msi_count || msi_count <= 0)
+
+	switch (irq_type) {
+	case IRQ_TYPE_LEGACY:
 		pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_LEGACY, 0);
-	else
+		break;
+	case IRQ_TYPE_MSI:
 		pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSI, irq);
+		break;
+	case IRQ_TYPE_MSIX:
+		pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSIX, irq);
+		break;
+	default:
+		dev_err(dev, "Failed to raise IRQ, unknown type\n");
+		break;
+	}
 }
 
 static void pci_epf_test_cmd_handler(struct work_struct *work)
 {
 	int ret;
-	u8 irq;
-	u8 msi_count;
+	int count;
 	u32 command;
 	struct pci_epf_test *epf_test = container_of(work, struct pci_epf_test,
 						     cmd_handler.work);
 	struct pci_epf *epf = epf_test->epf;
+	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
 	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
 	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
@@ -280,7 +297,10 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
 	reg->command = 0;
 	reg->status = 0;
 
-	irq = (command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT;
+	if (reg->irq_type > IRQ_TYPE_MSIX) {
+		dev_err(dev, "Failed to detect IRQ type\n");
+		goto reset_handler;
+	}
 
 	if (command & COMMAND_RAISE_LEGACY_IRQ) {
 		reg->status = STATUS_IRQ_RAISED;
@@ -294,7 +314,8 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) | |||
294 | reg->status |= STATUS_WRITE_FAIL; | 314 | reg->status |= STATUS_WRITE_FAIL; |
295 | else | 315 | else |
296 | reg->status |= STATUS_WRITE_SUCCESS; | 316 | reg->status |= STATUS_WRITE_SUCCESS; |
297 | pci_epf_test_raise_irq(epf_test, irq); | 317 | pci_epf_test_raise_irq(epf_test, reg->irq_type, |
318 | reg->irq_number); | ||
298 | goto reset_handler; | 319 | goto reset_handler; |
299 | } | 320 | } |
300 | 321 | ||
@@ -304,7 +325,8 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) | |||
304 | reg->status |= STATUS_READ_SUCCESS; | 325 | reg->status |= STATUS_READ_SUCCESS; |
305 | else | 326 | else |
306 | reg->status |= STATUS_READ_FAIL; | 327 | reg->status |= STATUS_READ_FAIL; |
307 | pci_epf_test_raise_irq(epf_test, irq); | 328 | pci_epf_test_raise_irq(epf_test, reg->irq_type, |
329 | reg->irq_number); | ||
308 | goto reset_handler; | 330 | goto reset_handler; |
309 | } | 331 | } |
310 | 332 | ||
@@ -314,16 +336,28 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) | |||
314 | reg->status |= STATUS_COPY_SUCCESS; | 336 | reg->status |= STATUS_COPY_SUCCESS; |
315 | else | 337 | else |
316 | reg->status |= STATUS_COPY_FAIL; | 338 | reg->status |= STATUS_COPY_FAIL; |
317 | pci_epf_test_raise_irq(epf_test, irq); | 339 | pci_epf_test_raise_irq(epf_test, reg->irq_type, |
340 | reg->irq_number); | ||
318 | goto reset_handler; | 341 | goto reset_handler; |
319 | } | 342 | } |
320 | 343 | ||
321 | if (command & COMMAND_RAISE_MSI_IRQ) { | 344 | if (command & COMMAND_RAISE_MSI_IRQ) { |
322 | msi_count = pci_epc_get_msi(epc, epf->func_no); | 345 | count = pci_epc_get_msi(epc, epf->func_no); |
323 | if (irq > msi_count || msi_count <= 0) | 346 | if (reg->irq_number > count || count <= 0) |
324 | goto reset_handler; | 347 | goto reset_handler; |
325 | reg->status = STATUS_IRQ_RAISED; | 348 | reg->status = STATUS_IRQ_RAISED; |
326 | pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSI, irq); | 349 | pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSI, |
350 | reg->irq_number); | ||
351 | goto reset_handler; | ||
352 | } | ||
353 | |||
354 | if (command & COMMAND_RAISE_MSIX_IRQ) { | ||
355 | count = pci_epc_get_msix(epc, epf->func_no); | ||
356 | if (reg->irq_number > count || count <= 0) | ||
357 | goto reset_handler; | ||
358 | reg->status = STATUS_IRQ_RAISED; | ||
359 | pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSIX, | ||
360 | reg->irq_number); | ||
327 | goto reset_handler; | 361 | goto reset_handler; |
328 | } | 362 | } |
329 | 363 | ||
@@ -440,6 +474,8 @@ static int pci_epf_test_bind(struct pci_epf *epf) | |||
440 | else | 474 | else |
441 | epf_test->linkup_notifier = true; | 475 | epf_test->linkup_notifier = true; |
442 | 476 | ||
477 | epf_test->msix_available = epc->features & EPC_FEATURE_MSIX_AVAILABLE; | ||
478 | |||
443 | epf_test->test_reg_bar = EPC_FEATURE_GET_BAR(epc->features); | 479 | epf_test->test_reg_bar = EPC_FEATURE_GET_BAR(epc->features); |
444 | 480 | ||
445 | ret = pci_epc_write_header(epc, epf->func_no, header); | 481 | ret = pci_epc_write_header(epc, epf->func_no, header); |
@@ -457,8 +493,18 @@ static int pci_epf_test_bind(struct pci_epf *epf) | |||
457 | return ret; | 493 | return ret; |
458 | 494 | ||
459 | ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts); | 495 | ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts); |
460 | if (ret) | 496 | if (ret) { |
497 | dev_err(dev, "MSI configuration failed\n"); | ||
461 | return ret; | 498 | return ret; |
499 | } | ||
500 | |||
501 | if (epf_test->msix_available) { | ||
502 | ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts); | ||
503 | if (ret) { | ||
504 | dev_err(dev, "MSI-X configuration failed\n"); | ||
505 | return ret; | ||
506 | } | ||
507 | } | ||
462 | 508 | ||
463 | if (!epf_test->linkup_notifier) | 509 | if (!epf_test->linkup_notifier) |
464 | queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work); | 510 | queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work); |
diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
index 018ea3433cb5..d1288a0bd530 100644
--- a/drivers/pci/endpoint/pci-ep-cfs.c
+++ b/drivers/pci/endpoint/pci-ep-cfs.c
@@ -286,6 +286,28 @@ static ssize_t pci_epf_msi_interrupts_show(struct config_item *item,
 		       to_pci_epf_group(item)->epf->msi_interrupts);
 }
 
+static ssize_t pci_epf_msix_interrupts_store(struct config_item *item,
+					     const char *page, size_t len)
+{
+	u16 val;
+	int ret;
+
+	ret = kstrtou16(page, 0, &val);
+	if (ret)
+		return ret;
+
+	to_pci_epf_group(item)->epf->msix_interrupts = val;
+
+	return len;
+}
+
+static ssize_t pci_epf_msix_interrupts_show(struct config_item *item,
+					    char *page)
+{
+	return sprintf(page, "%d\n",
+		       to_pci_epf_group(item)->epf->msix_interrupts);
+}
+
 PCI_EPF_HEADER_R(vendorid)
 PCI_EPF_HEADER_W_u16(vendorid)
 
@@ -327,6 +349,7 @@ CONFIGFS_ATTR(pci_epf_, subsys_vendor_id);
 CONFIGFS_ATTR(pci_epf_, subsys_id);
 CONFIGFS_ATTR(pci_epf_, interrupt_pin);
 CONFIGFS_ATTR(pci_epf_, msi_interrupts);
+CONFIGFS_ATTR(pci_epf_, msix_interrupts);
 
 static struct configfs_attribute *pci_epf_attrs[] = {
 	&pci_epf_attr_vendorid,
@@ -340,6 +363,7 @@ static struct configfs_attribute *pci_epf_attrs[] = {
 	&pci_epf_attr_subsys_id,
 	&pci_epf_attr_interrupt_pin,
 	&pci_epf_attr_msi_interrupts,
+	&pci_epf_attr_msix_interrupts,
 	NULL,
 };
 
diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
index b0ee42739c3c..094dcc3203b8 100644
--- a/drivers/pci/endpoint/pci-epc-core.c
+++ b/drivers/pci/endpoint/pci-epc-core.c
@@ -131,13 +131,13 @@ EXPORT_SYMBOL_GPL(pci_epc_start);
  * pci_epc_raise_irq() - interrupt the host system
  * @epc: the EPC device which has to interrupt the host
  * @func_no: the endpoint function number in the EPC device
- * @type: specify the type of interrupt; legacy or MSI
- * @interrupt_num: the MSI interrupt number
+ * @type: specify the type of interrupt; legacy, MSI or MSI-X
+ * @interrupt_num: the MSI or MSI-X interrupt number
  *
- * Invoke to raise an MSI or legacy interrupt
+ * Invoke to raise a legacy, MSI or MSI-X interrupt
  */
 int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
-		      enum pci_epc_irq_type type, u8 interrupt_num)
+		      enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	int ret;
 	unsigned long flags;
@@ -201,7 +201,8 @@ int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
 	u8 encode_int;
 	unsigned long flags;
 
-	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
+	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
+	    interrupts > 32)
 		return -EINVAL;
 
 	if (!epc->ops->set_msi)
@@ -218,6 +219,63 @@ int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
 EXPORT_SYMBOL_GPL(pci_epc_set_msi);
 
 /**
+ * pci_epc_get_msix() - get the number of MSI-X interrupt numbers allocated
+ * @epc: the EPC device from which MSI-X interrupts were requested
+ * @func_no: the endpoint function number in the EPC device
+ *
+ * Invoke to get the number of MSI-X interrupts allocated by the RC
+ */
+int pci_epc_get_msix(struct pci_epc *epc, u8 func_no)
+{
+	int interrupt;
+	unsigned long flags;
+
+	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
+		return 0;
+
+	if (!epc->ops->get_msix)
+		return 0;
+
+	spin_lock_irqsave(&epc->lock, flags);
+	interrupt = epc->ops->get_msix(epc, func_no);
+	spin_unlock_irqrestore(&epc->lock, flags);
+
+	if (interrupt < 0)
+		return 0;
+
+	return interrupt + 1;
+}
+EXPORT_SYMBOL_GPL(pci_epc_get_msix);
+
+/**
+ * pci_epc_set_msix() - set the number of MSI-X interrupt numbers required
+ * @epc: the EPC device on which MSI-X has to be configured
+ * @func_no: the endpoint function number in the EPC device
+ * @interrupts: number of MSI-X interrupts required by the EPF
+ *
+ * Invoke to set the required number of MSI-X interrupts.
+ */
+int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
+{
+	int ret;
+	unsigned long flags;
+
+	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
+	    interrupts < 1 || interrupts > 2048)
+		return -EINVAL;
+
+	if (!epc->ops->set_msix)
+		return 0;
+
+	spin_lock_irqsave(&epc->lock, flags);
+	ret = epc->ops->set_msix(epc, func_no, interrupts - 1);
+	spin_unlock_irqrestore(&epc->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pci_epc_set_msix);
+
+/**
  * pci_epc_unmap_addr() - unmap CPU address from PCI address
  * @epc: the EPC device on which address is allocated
  * @func_no: the endpoint function number in the EPC device
diff --git a/drivers/pci/hotplug/acpi_pcihp.c b/drivers/pci/hotplug/acpi_pcihp.c
index 5bd6c1573295..6b7c1ed58e7e 100644
--- a/drivers/pci/hotplug/acpi_pcihp.c
+++ b/drivers/pci/hotplug/acpi_pcihp.c
@@ -74,20 +74,6 @@ int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev)
 	struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
 
 	/*
-	 * Per PCI firmware specification, we should run the ACPI _OSC
-	 * method to get control of hotplug hardware before using it. If
-	 * an _OSC is missing, we look for an OSHP to do the same thing.
-	 * To handle different BIOS behavior, we look for _OSC on a root
-	 * bridge preferentially (according to PCI fw spec). Later for
-	 * OSHP within the scope of the hotplug controller and its parents,
-	 * up to the host bridge under which this controller exists.
-	 */
-	if (shpchp_is_native(pdev))
-		return 0;
-
-	/* If _OSC exists, we should not evaluate OSHP */
-
-	/*
 	 * If there's no ACPI host bridge (i.e., ACPI support is compiled
 	 * into the kernel but the hardware platform doesn't support ACPI),
 	 * there's nothing to do here.
@@ -97,9 +83,25 @@ int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev)
 	if (!root)
 		return 0;
 
-	if (root->osc_support_set)
-		goto no_control;
+	/*
+	 * If _OSC exists, it determines whether we're allowed to manage
+	 * the SHPC. We executed it while enumerating the host bridge.
+	 */
+	if (root->osc_support_set) {
+		if (host->native_shpc_hotplug)
+			return 0;
+		return -ENODEV;
+	}
 
+	/*
+	 * In the absence of _OSC, we're always allowed to manage the SHPC.
+	 * However, if an OSHP method is present, we must execute it so the
+	 * firmware can transfer control to the OS, e.g., direct interrupts
+	 * to the OS instead of to the firmware.
+	 *
+	 * N.B. The PCI Firmware Spec (r3.2, sec 4.8) does not endorse
+	 * searching up the ACPI hierarchy, so the loops below are suspect.
+	 */
 	handle = ACPI_HANDLE(&pdev->dev);
 	if (!handle) {
 		/*
@@ -128,7 +130,7 @@ int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev)
 		if (ACPI_FAILURE(status))
 			break;
 	}
-no_control:
+
 	pci_info(pdev, "Cannot get control of SHPC hotplug\n");
 	kfree(string.pointer);
 	return -ENODEV;
diff --git a/drivers/pci/hotplug/acpiphp_core.c b/drivers/pci/hotplug/acpiphp_core.c
index 12b5655fd390..ad32ffbc4b91 100644
--- a/drivers/pci/hotplug/acpiphp_core.c
+++ b/drivers/pci/hotplug/acpiphp_core.c
@@ -254,20 +254,6 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
 	return 0;
 }
 
-/**
- * release_slot - free up the memory used by a slot
- * @hotplug_slot: slot to free
- */
-static void release_slot(struct hotplug_slot *hotplug_slot)
-{
-	struct slot *slot = hotplug_slot->private;
-
-	pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
-
-	kfree(slot->hotplug_slot);
-	kfree(slot);
-}
-
 /* callback routine to initialize 'struct slot' for each slot */
 int acpiphp_register_hotplug_slot(struct acpiphp_slot *acpiphp_slot,
 				  unsigned int sun)
@@ -287,7 +273,6 @@ int acpiphp_register_hotplug_slot(struct acpiphp_slot *acpiphp_slot,
 	slot->hotplug_slot->info = &slot->info;
 
 	slot->hotplug_slot->private = slot;
-	slot->hotplug_slot->release = &release_slot;
 	slot->hotplug_slot->ops = &acpi_hotplug_slot_ops;
 
 	slot->acpi_slot = acpiphp_slot;
@@ -324,13 +309,12 @@ error:
 void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *acpiphp_slot)
 {
 	struct slot *slot = acpiphp_slot->slot;
-	int retval = 0;
 
 	pr_info("Slot [%s] unregistered\n", slot_name(slot));
 
-	retval = pci_hp_deregister(slot->hotplug_slot);
-	if (retval)
-		pr_err("pci_hp_deregister failed with error %d\n", retval);
+	pci_hp_deregister(slot->hotplug_slot);
+	kfree(slot->hotplug_slot);
+	kfree(slot);
 }
 
 
diff --git a/drivers/pci/hotplug/cpci_hotplug_core.c b/drivers/pci/hotplug/cpci_hotplug_core.c
index 07b533adc9df..52a339baf06c 100644
--- a/drivers/pci/hotplug/cpci_hotplug_core.c
+++ b/drivers/pci/hotplug/cpci_hotplug_core.c
@@ -195,10 +195,8 @@ get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
 	return 0;
 }
 
-static void release_slot(struct hotplug_slot *hotplug_slot)
+static void release_slot(struct slot *slot)
 {
-	struct slot *slot = hotplug_slot->private;
-
 	kfree(slot->hotplug_slot->info);
 	kfree(slot->hotplug_slot);
 	pci_dev_put(slot->dev);
@@ -253,7 +251,6 @@ cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last)
 		snprintf(name, SLOT_NAME_SIZE, "%02x:%02x", bus->number, i);
 
 		hotplug_slot->private = slot;
-		hotplug_slot->release = &release_slot;
 		hotplug_slot->ops = &cpci_hotplug_slot_ops;
 
 		/*
@@ -308,12 +305,8 @@ cpci_hp_unregister_bus(struct pci_bus *bus)
 			slots--;
 
 			dbg("deregistering slot %s", slot_name(slot));
-			status = pci_hp_deregister(slot->hotplug_slot);
-			if (status) {
-				err("pci_hp_deregister failed with error %d",
-				    status);
-				break;
-			}
+			pci_hp_deregister(slot->hotplug_slot);
+			release_slot(slot);
 		}
 	}
 	up_write(&list_rwsem);
@@ -623,6 +616,7 @@ cleanup_slots(void)
 	list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) {
 		list_del(&slot->slot_list);
 		pci_hp_deregister(slot->hotplug_slot);
+		release_slot(slot);
 	}
 cleanup_null:
 	up_write(&list_rwsem);
diff --git a/drivers/pci/hotplug/cpqphp_core.c b/drivers/pci/hotplug/cpqphp_core.c
index 1797e36ec586..5a06636e910a 100644
--- a/drivers/pci/hotplug/cpqphp_core.c
+++ b/drivers/pci/hotplug/cpqphp_core.c
@@ -266,17 +266,6 @@ static void __iomem *get_SMBIOS_entry(void __iomem *smbios_start,
 	return previous;
 }
 
-static void release_slot(struct hotplug_slot *hotplug_slot)
-{
-	struct slot *slot = hotplug_slot->private;
-
-	dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
-
-	kfree(slot->hotplug_slot->info);
-	kfree(slot->hotplug_slot);
-	kfree(slot);
-}
-
 static int ctrl_slot_cleanup(struct controller *ctrl)
 {
 	struct slot *old_slot, *next_slot;
@@ -285,9 +274,11 @@ static int ctrl_slot_cleanup(struct controller *ctrl)
 	ctrl->slot = NULL;
 
 	while (old_slot) {
-		/* memory will be freed by the release_slot callback */
 		next_slot = old_slot->next;
 		pci_hp_deregister(old_slot->hotplug_slot);
+		kfree(old_slot->hotplug_slot->info);
+		kfree(old_slot->hotplug_slot);
+		kfree(old_slot);
 		old_slot = next_slot;
 	}
 
@@ -678,7 +669,6 @@ static int ctrl_slot_setup(struct controller *ctrl,
 			((read_slot_enable(ctrl) << 2) >> ctrl_slot) & 0x04;
 
 		/* register this slot with the hotplug pci core */
-		hotplug_slot->release = &release_slot;
 		hotplug_slot->private = slot;
 		snprintf(name, SLOT_NAME_SIZE, "%u", slot->number);
 		hotplug_slot->ops = &cpqphp_hotplug_slot_ops;
diff --git a/drivers/pci/hotplug/ibmphp_core.c b/drivers/pci/hotplug/ibmphp_core.c
index 1869b0411ce0..4ea57e9019f1 100644
--- a/drivers/pci/hotplug/ibmphp_core.c
+++ b/drivers/pci/hotplug/ibmphp_core.c
@@ -673,7 +673,20 @@ static void free_slots(void)
 
 	list_for_each_entry_safe(slot_cur, next, &ibmphp_slot_head,
 				 ibm_slot_list) {
-		pci_hp_deregister(slot_cur->hotplug_slot);
+		pci_hp_del(slot_cur->hotplug_slot);
+		slot_cur->ctrl = NULL;
+		slot_cur->bus_on = NULL;
+
+		/*
+		 * We don't want to actually remove the resources,
+		 * since ibmphp_free_resources() will do just that.
+		 */
+		ibmphp_unconfigure_card(&slot_cur, -1);
+
+		pci_hp_destroy(slot_cur->hotplug_slot);
+		kfree(slot_cur->hotplug_slot->info);
+		kfree(slot_cur->hotplug_slot);
+		kfree(slot_cur);
 	}
 	debug("%s -- exit\n", __func__);
 }
diff --git a/drivers/pci/hotplug/ibmphp_ebda.c b/drivers/pci/hotplug/ibmphp_ebda.c
index 64549aa24c0f..6f8e90e3ec08 100644
--- a/drivers/pci/hotplug/ibmphp_ebda.c
+++ b/drivers/pci/hotplug/ibmphp_ebda.c
@@ -699,25 +699,6 @@ static int fillslotinfo(struct hotplug_slot *hotplug_slot)
 	return rc;
 }
 
-static void release_slot(struct hotplug_slot *hotplug_slot)
-{
-	struct slot *slot;
-
-	if (!hotplug_slot || !hotplug_slot->private)
-		return;
-
-	slot = hotplug_slot->private;
-	kfree(slot->hotplug_slot->info);
-	kfree(slot->hotplug_slot);
-	slot->ctrl = NULL;
-	slot->bus_on = NULL;
-
-	/* we don't want to actually remove the resources, since free_resources will do just that */
-	ibmphp_unconfigure_card(&slot, -1);
-
-	kfree(slot);
-}
-
 static struct pci_driver ibmphp_driver;
 
 /*
@@ -941,7 +922,6 @@ static int __init ebda_rsrc_controller(void)
 		tmp_slot->hotplug_slot = hp_slot_ptr;
 
 		hp_slot_ptr->private = tmp_slot;
-		hp_slot_ptr->release = release_slot;
 
 		rc = fillslotinfo(hp_slot_ptr);
 		if (rc)
diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c index af92fed46ab7..90fde5f106d8 100644 --- a/drivers/pci/hotplug/pci_hotplug_core.c +++ b/drivers/pci/hotplug/pci_hotplug_core.c | |||
@@ -396,8 +396,9 @@ static struct hotplug_slot *get_slot_from_name(const char *name) | |||
396 | * @owner: caller module owner | 396 | * @owner: caller module owner |
397 | * @mod_name: caller module name | 397 | * @mod_name: caller module name |
398 | * | 398 | * |
399 | * Registers a hotplug slot with the pci hotplug subsystem, which will allow | 399 | * Prepares a hotplug slot for in-kernel use and immediately publishes it to |
400 | * userspace interaction to the slot. | 400 | * user space in one go. Drivers may alternatively carry out the two steps |
401 | * separately by invoking pci_hp_initialize() and pci_hp_add(). | ||
401 | * | 402 | * |
402 | * Returns 0 if successful, anything else for an error. | 403 | * Returns 0 if successful, anything else for an error. |
403 | */ | 404 | */ |
@@ -406,45 +407,91 @@ int __pci_hp_register(struct hotplug_slot *slot, struct pci_bus *bus, | |||
406 | struct module *owner, const char *mod_name) | 407 | struct module *owner, const char *mod_name) |
407 | { | 408 | { |
408 | int result; | 409 | int result; |
410 | |||
411 | result = __pci_hp_initialize(slot, bus, devnr, name, owner, mod_name); | ||
412 | if (result) | ||
413 | return result; | ||
414 | |||
415 | result = pci_hp_add(slot); | ||
416 | if (result) | ||
417 | pci_hp_destroy(slot); | ||
418 | |||
419 | return result; | ||
420 | } | ||
421 | EXPORT_SYMBOL_GPL(__pci_hp_register); | ||
422 | |||
423 | /** | ||
424 | * __pci_hp_initialize - prepare hotplug slot for in-kernel use | ||
425 | * @slot: pointer to the &struct hotplug_slot to initialize | ||
426 | * @bus: bus this slot is on | ||
427 | * @devnr: slot number | ||
428 | * @name: name registered with kobject core | ||
429 | * @owner: caller module owner | ||
430 | * @mod_name: caller module name | ||
431 | * | ||
432 | * Allocate and fill in a PCI slot for use by a hotplug driver. Once this has | ||
433 | * been called, the driver may invoke hotplug_slot_name() to get the slot's | ||
434 | * unique name. The driver must be prepared to handle a ->reset_slot callback | ||
435 | * from this point on. | ||
436 | * | ||
437 | * Returns 0 on success or a negative int on error. | ||
438 | */ | ||
439 | int __pci_hp_initialize(struct hotplug_slot *slot, struct pci_bus *bus, | ||
440 | int devnr, const char *name, struct module *owner, | ||
441 | const char *mod_name) | ||
442 | { | ||
409 | struct pci_slot *pci_slot; | 443 | struct pci_slot *pci_slot; |
410 | 444 | ||
411 | if (slot == NULL) | 445 | if (slot == NULL) |
412 | return -ENODEV; | 446 | return -ENODEV; |
413 | if ((slot->info == NULL) || (slot->ops == NULL)) | 447 | if ((slot->info == NULL) || (slot->ops == NULL)) |
414 | return -EINVAL; | 448 | return -EINVAL; |
415 | if (slot->release == NULL) { | ||
416 | dbg("Why are you trying to register a hotplug slot without a proper release function?\n"); | ||
417 | return -EINVAL; | ||
418 | } | ||
419 | 449 | ||
420 | slot->ops->owner = owner; | 450 | slot->ops->owner = owner; |
421 | slot->ops->mod_name = mod_name; | 451 | slot->ops->mod_name = mod_name; |
422 | 452 | ||
423 | mutex_lock(&pci_hp_mutex); | ||
424 | /* | 453 | /* |
425 | * No problems if we call this interface from both ACPI_PCI_SLOT | 454 | * No problems if we call this interface from both ACPI_PCI_SLOT |
426 | * driver and call it here again. If we've already created the | 455 | * driver and call it here again. If we've already created the |
427 | * pci_slot, the interface will simply bump the refcount. | 456 | * pci_slot, the interface will simply bump the refcount. |
428 | */ | 457 | */ |
429 | pci_slot = pci_create_slot(bus, devnr, name, slot); | 458 | pci_slot = pci_create_slot(bus, devnr, name, slot); |
430 | if (IS_ERR(pci_slot)) { | 459 | if (IS_ERR(pci_slot)) |
431 | result = PTR_ERR(pci_slot); | 460 | return PTR_ERR(pci_slot); |
432 | goto out; | ||
433 | } | ||
434 | 461 | ||
435 | slot->pci_slot = pci_slot; | 462 | slot->pci_slot = pci_slot; |
436 | pci_slot->hotplug = slot; | 463 | pci_slot->hotplug = slot; |
464 | return 0; | ||
465 | } | ||
466 | EXPORT_SYMBOL_GPL(__pci_hp_initialize); | ||
437 | 467 | ||
438 | list_add(&slot->slot_list, &pci_hotplug_slot_list); | 468 | /** |
469 | * pci_hp_add - publish hotplug slot to user space | ||
470 | * @slot: pointer to the &struct hotplug_slot to publish | ||
471 | * | ||
472 | * Make a hotplug slot's sysfs interface available and inform user space of its | ||
473 | * addition by sending a uevent. The hotplug driver must be prepared to handle | ||
474 | * all &struct hotplug_slot_ops callbacks from this point on. | ||
475 | * | ||
476 | * Returns 0 on success or a negative int on error. | ||
477 | */ | ||
478 | int pci_hp_add(struct hotplug_slot *slot) | ||
479 | { | ||
480 | struct pci_slot *pci_slot = slot->pci_slot; | ||
481 | int result; | ||
439 | 482 | ||
440 | result = fs_add_slot(pci_slot); | 483 | result = fs_add_slot(pci_slot); |
484 | if (result) | ||
485 | return result; | ||
486 | |||
441 | kobject_uevent(&pci_slot->kobj, KOBJ_ADD); | 487 | kobject_uevent(&pci_slot->kobj, KOBJ_ADD); |
442 | dbg("Added slot %s to the list\n", name); | 488 | mutex_lock(&pci_hp_mutex); |
443 | out: | 489 | list_add(&slot->slot_list, &pci_hotplug_slot_list); |
444 | mutex_unlock(&pci_hp_mutex); | 490 | mutex_unlock(&pci_hp_mutex); |
445 | return result; | 491 | dbg("Added slot %s to the list\n", hotplug_slot_name(slot)); |
492 | return 0; | ||
446 | } | 493 | } |
447 | EXPORT_SYMBOL_GPL(__pci_hp_register); | 494 | EXPORT_SYMBOL_GPL(pci_hp_add); |
448 | 495 | ||
449 | /** | 496 | /** |
450 | * pci_hp_deregister - deregister a hotplug_slot with the PCI hotplug subsystem | 497 | * pci_hp_deregister - deregister a hotplug_slot with the PCI hotplug subsystem |
@@ -455,35 +502,62 @@ EXPORT_SYMBOL_GPL(__pci_hp_register); | |||
455 | * | 502 | * |
456 | * Returns 0 if successful, anything else for an error. | 503 | * Returns 0 if successful, anything else for an error. |
457 | */ | 504 | */ |
458 | int pci_hp_deregister(struct hotplug_slot *slot) | 505 | void pci_hp_deregister(struct hotplug_slot *slot) |
506 | { | ||
507 | pci_hp_del(slot); | ||
508 | pci_hp_destroy(slot); | ||
509 | } | ||
510 | EXPORT_SYMBOL_GPL(pci_hp_deregister); | ||
511 | |||
512 | /** | ||
513 | * pci_hp_del - unpublish hotplug slot from user space | ||
514 | * @slot: pointer to the &struct hotplug_slot to unpublish | ||
515 | * | ||
516 | * Remove a hotplug slot's sysfs interface. | ||
519 | */ | ||
520 | void pci_hp_del(struct hotplug_slot *slot) | ||
459 | { | 521 | { |
460 | struct hotplug_slot *temp; | 522 | struct hotplug_slot *temp; |
461 | struct pci_slot *pci_slot; | ||
462 | 523 | ||
463 | if (!slot) | 524 | if (WARN_ON(!slot)) |
464 | return -ENODEV; | 525 | return; |
465 | 526 | ||
466 | mutex_lock(&pci_hp_mutex); | 527 | mutex_lock(&pci_hp_mutex); |
467 | temp = get_slot_from_name(hotplug_slot_name(slot)); | 528 | temp = get_slot_from_name(hotplug_slot_name(slot)); |
468 | if (temp != slot) { | 529 | if (WARN_ON(temp != slot)) { |
469 | mutex_unlock(&pci_hp_mutex); | 530 | mutex_unlock(&pci_hp_mutex); |
470 | return -ENODEV; | 531 | return; |
471 | } | 532 | } |
472 | 533 | ||
473 | list_del(&slot->slot_list); | 534 | list_del(&slot->slot_list); |
474 | 535 | mutex_unlock(&pci_hp_mutex); | |
475 | pci_slot = slot->pci_slot; | ||
476 | fs_remove_slot(pci_slot); | ||
477 | dbg("Removed slot %s from the list\n", hotplug_slot_name(slot)); | 536 | dbg("Removed slot %s from the list\n", hotplug_slot_name(slot)); |
537 | fs_remove_slot(slot->pci_slot); | ||
538 | } | ||
539 | EXPORT_SYMBOL_GPL(pci_hp_del); | ||
478 | 540 | ||
479 | slot->release(slot); | 541 | /** |
542 | * pci_hp_destroy - remove hotplug slot from in-kernel use | ||
543 | * @slot: pointer to the &struct hotplug_slot to destroy | ||
544 | * | ||
545 | * Destroy a PCI slot used by a hotplug driver. Once this has been called, | ||
546 | * the driver may no longer invoke hotplug_slot_name() to get the slot's | ||
547 | * unique name. The driver no longer needs to handle a ->reset_slot callback | ||
548 | * from this point on. | ||
549 | * | ||
550 | * Returns 0 on success or a negative int on error. | ||
551 | */ | ||
552 | void pci_hp_destroy(struct hotplug_slot *slot) | ||
553 | { | ||
554 | struct pci_slot *pci_slot = slot->pci_slot; | ||
555 | |||
556 | slot->pci_slot = NULL; | ||
480 | pci_slot->hotplug = NULL; | 557 | pci_slot->hotplug = NULL; |
481 | pci_destroy_slot(pci_slot); | 558 | pci_destroy_slot(pci_slot); |
482 | mutex_unlock(&pci_hp_mutex); | ||
483 | |||
484 | return 0; | ||
485 | } | 559 | } |
486 | EXPORT_SYMBOL_GPL(pci_hp_deregister); | 560 | EXPORT_SYMBOL_GPL(pci_hp_destroy); |
487 | 561 | ||
488 | /** | 562 | /** |
489 | * pci_hp_change_slot_info - changes the slot's information structure in the core | 563 | * pci_hp_change_slot_info - changes the slot's information structure in the core |
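The hunk above splits the old monolithic `__pci_hp_register()` into an in-kernel initialization phase and a separate user-space publication phase (`pci_hp_add()`), with the teardown mirrored by `pci_hp_del()`/`pci_hp_destroy()`. A minimal sketch of that two-stage pattern — toy names and error codes, not the kernel API:

```c
#include <assert.h>
#include <string.h>

/* Toy model of the split: *_initialize() sets up in-kernel state only,
 * *_add() publishes to user space. In the kernel, the first phase links
 * the pci_slot and hotplug_slot structures; the second creates the sysfs
 * entries and emits the KOBJ_ADD uevent. */
struct toy_slot {
	char name[16];
	int initialized;
	int published;
};

static int toy_hp_initialize(struct toy_slot *slot, const char *name)
{
	if (!slot || !name)
		return -1;			/* -EINVAL in the kernel */
	strncpy(slot->name, name, sizeof(slot->name) - 1);
	slot->name[sizeof(slot->name) - 1] = '\0';
	slot->initialized = 1;
	return 0;
}

static int toy_hp_add(struct toy_slot *slot)
{
	if (!slot->initialized)
		return -1;			/* must initialize first */
	slot->published = 1;			/* sysfs + uevent in the kernel */
	return 0;
}
```

The point of the split is that a driver can take callbacks on a fully initialized slot before user space can see it, and can unpublish a slot while still using `hotplug_slot_name()` until the final destroy.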
diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h index 5f892065585e..811cf83f956d 100644 --- a/drivers/pci/hotplug/pciehp.h +++ b/drivers/pci/hotplug/pciehp.h | |||
@@ -21,6 +21,7 @@ | |||
21 | #include <linux/delay.h> | 21 | #include <linux/delay.h> |
22 | #include <linux/sched/signal.h> /* signal_pending() */ | 22 | #include <linux/sched/signal.h> /* signal_pending() */ |
23 | #include <linux/mutex.h> | 23 | #include <linux/mutex.h> |
24 | #include <linux/rwsem.h> | ||
24 | #include <linux/workqueue.h> | 25 | #include <linux/workqueue.h> |
25 | 26 | ||
26 | #include "../pcie/portdrv.h" | 27 | #include "../pcie/portdrv.h" |
@@ -57,49 +58,111 @@ do { \ | |||
57 | dev_warn(&ctrl->pcie->device, format, ## arg) | 58 | dev_warn(&ctrl->pcie->device, format, ## arg) |
58 | 59 | ||
59 | #define SLOT_NAME_SIZE 10 | 60 | #define SLOT_NAME_SIZE 10 |
61 | |||
62 | /** | ||
63 | * struct slot - PCIe hotplug slot | ||
64 | * @state: current state machine position | ||
65 | * @ctrl: pointer to the slot's controller structure | ||
66 | * @hotplug_slot: pointer to the structure registered with the PCI hotplug core | ||
67 | * @work: work item to turn the slot on or off after 5 seconds in response to | ||
68 | * an Attention Button press | ||
69 | * @lock: protects reads and writes of @state; | ||
70 | * protects scheduling, execution and cancellation of @work | ||
71 | */ | ||
60 | struct slot { | 72 | struct slot { |
61 | u8 state; | 73 | u8 state; |
62 | struct controller *ctrl; | 74 | struct controller *ctrl; |
63 | struct hotplug_slot *hotplug_slot; | 75 | struct hotplug_slot *hotplug_slot; |
64 | struct delayed_work work; /* work for button event */ | 76 | struct delayed_work work; |
65 | struct mutex lock; | 77 | struct mutex lock; |
66 | struct mutex hotplug_lock; | ||
67 | struct workqueue_struct *wq; | ||
68 | }; | ||
69 | |||
70 | struct event_info { | ||
71 | u32 event_type; | ||
72 | struct slot *p_slot; | ||
73 | struct work_struct work; | ||
74 | }; | 78 | }; |
75 | 79 | ||
80 | /** | ||
81 | * struct controller - PCIe hotplug controller | ||
82 | * @ctrl_lock: serializes writes to the Slot Control register | ||
83 | * @pcie: pointer to the controller's PCIe port service device | ||
84 | * @reset_lock: prevents access to the Data Link Layer Link Active bit in the | ||
85 | * Link Status register and to the Presence Detect State bit in the Slot | ||
86 | * Status register during a slot reset which may cause them to flap | ||
87 | * @slot: pointer to the controller's slot structure | ||
88 | * @queue: wait queue to wake up on reception of a Command Completed event, | ||
89 | * used for synchronous writes to the Slot Control register | ||
90 | * @slot_cap: cached copy of the Slot Capabilities register | ||
91 | * @slot_ctrl: cached copy of the Slot Control register | ||
92 | * @poll_thread: thread to poll for slot events if no IRQ is available, | ||
93 | * enabled with pciehp_poll_mode module parameter | ||
94 | * @cmd_started: jiffies when the Slot Control register was last written; | ||
95 | * the next write is allowed 1 second later, absent a Command Completed | ||
96 | * interrupt (PCIe r4.0, sec 6.7.3.2) | ||
97 | * @cmd_busy: flag set on Slot Control register write, cleared by IRQ handler | ||
98 | * on reception of a Command Completed event | ||
99 | * @link_active_reporting: cached copy of Data Link Layer Link Active Reporting | ||
100 | * Capable bit in Link Capabilities register; if this bit is zero, the | ||
101 | * Data Link Layer Link Active bit in the Link Status register will never | ||
102 | * be set and the driver is thus confined to wait 1 second before assuming | ||
103 | * the link to a hotplugged device is up and accessing it | ||
104 | * @notification_enabled: whether the IRQ was requested successfully | ||
105 | * @power_fault_detected: whether a power fault was detected by the hardware | ||
106 | * that has not yet been cleared by the user | ||
107 | * @pending_events: used by the IRQ handler to save events retrieved from the | ||
108 | * Slot Status register for later consumption by the IRQ thread | ||
109 | * @request_result: result of last user request submitted to the IRQ thread | ||
110 | * @requester: wait queue to wake up on completion of user request, | ||
111 | * used for synchronous slot enable/disable request via sysfs | ||
112 | */ | ||
76 | struct controller { | 113 | struct controller { |
77 | struct mutex ctrl_lock; /* controller lock */ | 114 | struct mutex ctrl_lock; |
78 | struct pcie_device *pcie; /* PCI Express port service */ | 115 | struct pcie_device *pcie; |
116 | struct rw_semaphore reset_lock; | ||
79 | struct slot *slot; | 117 | struct slot *slot; |
80 | wait_queue_head_t queue; /* sleep & wake process */ | 118 | wait_queue_head_t queue; |
81 | u32 slot_cap; | 119 | u32 slot_cap; |
82 | u16 slot_ctrl; | 120 | u16 slot_ctrl; |
83 | struct timer_list poll_timer; | 121 | struct task_struct *poll_thread; |
84 | unsigned long cmd_started; /* jiffies */ | 122 | unsigned long cmd_started; /* jiffies */ |
85 | unsigned int cmd_busy:1; | 123 | unsigned int cmd_busy:1; |
86 | unsigned int link_active_reporting:1; | 124 | unsigned int link_active_reporting:1; |
87 | unsigned int notification_enabled:1; | 125 | unsigned int notification_enabled:1; |
88 | unsigned int power_fault_detected; | 126 | unsigned int power_fault_detected; |
127 | atomic_t pending_events; | ||
128 | int request_result; | ||
129 | wait_queue_head_t requester; | ||
89 | }; | 130 | }; |
90 | 131 | ||
91 | #define INT_PRESENCE_ON 1 | 132 | /** |
92 | #define INT_PRESENCE_OFF 2 | 133 | * DOC: Slot state |
93 | #define INT_POWER_FAULT 3 | 134 | * |
94 | #define INT_BUTTON_PRESS 4 | 135 | * @OFF_STATE: slot is powered off, no subordinate devices are enumerated |
95 | #define INT_LINK_UP 5 | 136 | * @BLINKINGON_STATE: slot will be powered on after the 5 second delay, |
96 | #define INT_LINK_DOWN 6 | 137 | * green led is blinking |
97 | 138 | * @BLINKINGOFF_STATE: slot will be powered off after the 5 second delay, | |
98 | #define STATIC_STATE 0 | 139 | * green led is blinking |
140 | * @POWERON_STATE: slot is currently powering on | ||
141 | * @POWEROFF_STATE: slot is currently powering off | ||
142 | * @ON_STATE: slot is powered on, subordinate devices have been enumerated | ||
143 | */ | ||
144 | #define OFF_STATE 0 | ||
99 | #define BLINKINGON_STATE 1 | 145 | #define BLINKINGON_STATE 1 |
100 | #define BLINKINGOFF_STATE 2 | 146 | #define BLINKINGOFF_STATE 2 |
101 | #define POWERON_STATE 3 | 147 | #define POWERON_STATE 3 |
102 | #define POWEROFF_STATE 4 | 148 | #define POWEROFF_STATE 4 |
149 | #define ON_STATE 5 | ||
150 | |||
151 | /** | ||
152 | * DOC: Flags to request an action from the IRQ thread | ||
153 | * | ||
154 | * These are stored together with events read from the Slot Status register, | ||
155 | * hence must be greater than its 16-bit width. | ||
156 | * | ||
157 | * %DISABLE_SLOT: Disable the slot in response to a user request via sysfs or | ||
158 | * an Attention Button press after the 5 second delay | ||
159 | * %RERUN_ISR: Used by the IRQ handler to inform the IRQ thread that the | ||
160 | * hotplug port was inaccessible when the interrupt occurred, requiring | ||
161 | * that the IRQ handler is rerun by the IRQ thread after it has made the | ||
162 | * hotplug port accessible by runtime resuming its parents to D0 | ||
163 | */ | ||
164 | #define DISABLE_SLOT (1 << 16) | ||
165 | #define RERUN_ISR (1 << 17) | ||
103 | 166 | ||
104 | #define ATTN_BUTTN(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_ABP) | 167 | #define ATTN_BUTTN(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_ABP) |
105 | #define POWER_CTRL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_PCP) | 168 | #define POWER_CTRL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_PCP) |
@@ -113,15 +176,17 @@ struct controller { | |||
113 | 176 | ||
114 | int pciehp_sysfs_enable_slot(struct slot *slot); | 177 | int pciehp_sysfs_enable_slot(struct slot *slot); |
115 | int pciehp_sysfs_disable_slot(struct slot *slot); | 178 | int pciehp_sysfs_disable_slot(struct slot *slot); |
116 | void pciehp_queue_interrupt_event(struct slot *slot, u32 event_type); | 179 | void pciehp_request(struct controller *ctrl, int action); |
180 | void pciehp_handle_button_press(struct slot *slot); | ||
181 | void pciehp_handle_disable_request(struct slot *slot); | ||
182 | void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events); | ||
117 | int pciehp_configure_device(struct slot *p_slot); | 183 | int pciehp_configure_device(struct slot *p_slot); |
118 | int pciehp_unconfigure_device(struct slot *p_slot); | 184 | void pciehp_unconfigure_device(struct slot *p_slot); |
119 | void pciehp_queue_pushbutton_work(struct work_struct *work); | 185 | void pciehp_queue_pushbutton_work(struct work_struct *work); |
120 | struct controller *pcie_init(struct pcie_device *dev); | 186 | struct controller *pcie_init(struct pcie_device *dev); |
121 | int pcie_init_notification(struct controller *ctrl); | 187 | int pcie_init_notification(struct controller *ctrl); |
122 | int pciehp_enable_slot(struct slot *p_slot); | 188 | void pcie_shutdown_notification(struct controller *ctrl); |
123 | int pciehp_disable_slot(struct slot *p_slot); | 189 | void pcie_clear_hotplug_events(struct controller *ctrl); |
124 | void pcie_reenable_notification(struct controller *ctrl); | ||
125 | int pciehp_power_on_slot(struct slot *slot); | 190 | int pciehp_power_on_slot(struct slot *slot); |
126 | void pciehp_power_off_slot(struct slot *slot); | 191 | void pciehp_power_off_slot(struct slot *slot); |
127 | void pciehp_get_power_status(struct slot *slot, u8 *status); | 192 | void pciehp_get_power_status(struct slot *slot, u8 *status); |
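The new `DISABLE_SLOT` and `RERUN_ISR` flags are deliberately placed above bit 15 so that software requests and hardware events read from the 16-bit Slot Status register can share one 32-bit word without colliding. An illustrative sketch of that packing (macro names are stand-ins, and a plain variable stands in for the driver's `atomic_t`):

```c
#include <assert.h>
#include <stdint.h>

#define HW_EVENT_MASK   0x0000ffffu	/* Slot Status register width */
#define SW_DISABLE_SLOT (1u << 16)	/* software request flags live above */
#define SW_RERUN_ISR    (1u << 17)

static uint32_t pending;

/* Post either a hardware event or a software request. */
static void post(uint32_t action)
{
	pending |= action;
}

/* Consume only the hardware events; software requests stay pending. */
static uint32_t take_hw_events(void)
{
	uint32_t ev = pending & HW_EVENT_MASK;

	pending &= ~HW_EVENT_MASK;
	return ev;
}
```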
diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c index 44a6a63802d5..ec48c9433ae5 100644 --- a/drivers/pci/hotplug/pciehp_core.c +++ b/drivers/pci/hotplug/pciehp_core.c | |||
@@ -26,11 +26,12 @@ | |||
26 | #include <linux/interrupt.h> | 26 | #include <linux/interrupt.h> |
27 | #include <linux/time.h> | 27 | #include <linux/time.h> |
28 | 28 | ||
29 | #include "../pci.h" | ||
30 | |||
29 | /* Global variables */ | 31 | /* Global variables */ |
30 | bool pciehp_debug; | 32 | bool pciehp_debug; |
31 | bool pciehp_poll_mode; | 33 | bool pciehp_poll_mode; |
32 | int pciehp_poll_time; | 34 | int pciehp_poll_time; |
33 | static bool pciehp_force; | ||
34 | 35 | ||
35 | /* | 36 | /* |
36 | * not really modular, but the easiest way to keep compat with existing | 37 | * not really modular, but the easiest way to keep compat with existing |
@@ -39,11 +40,9 @@ static bool pciehp_force; | |||
39 | module_param(pciehp_debug, bool, 0644); | 40 | module_param(pciehp_debug, bool, 0644); |
40 | module_param(pciehp_poll_mode, bool, 0644); | 41 | module_param(pciehp_poll_mode, bool, 0644); |
41 | module_param(pciehp_poll_time, int, 0644); | 42 | module_param(pciehp_poll_time, int, 0644); |
42 | module_param(pciehp_force, bool, 0644); | ||
43 | MODULE_PARM_DESC(pciehp_debug, "Debugging mode enabled or not"); | 43 | MODULE_PARM_DESC(pciehp_debug, "Debugging mode enabled or not"); |
44 | MODULE_PARM_DESC(pciehp_poll_mode, "Using polling mechanism for hot-plug events or not"); | 44 | MODULE_PARM_DESC(pciehp_poll_mode, "Using polling mechanism for hot-plug events or not"); |
45 | MODULE_PARM_DESC(pciehp_poll_time, "Polling mechanism frequency, in seconds"); | 45 | MODULE_PARM_DESC(pciehp_poll_time, "Polling mechanism frequency, in seconds"); |
46 | MODULE_PARM_DESC(pciehp_force, "Force pciehp, even if OSHP is missing"); | ||
47 | 46 | ||
48 | #define PCIE_MODULE_NAME "pciehp" | 47 | #define PCIE_MODULE_NAME "pciehp" |
49 | 48 | ||
@@ -56,17 +55,6 @@ static int get_latch_status(struct hotplug_slot *slot, u8 *value); | |||
56 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); | 55 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); |
57 | static int reset_slot(struct hotplug_slot *slot, int probe); | 56 | static int reset_slot(struct hotplug_slot *slot, int probe); |
58 | 57 | ||
59 | /** | ||
60 | * release_slot - free up the memory used by a slot | ||
61 | * @hotplug_slot: slot to free | ||
62 | */ | ||
63 | static void release_slot(struct hotplug_slot *hotplug_slot) | ||
64 | { | ||
65 | kfree(hotplug_slot->ops); | ||
66 | kfree(hotplug_slot->info); | ||
67 | kfree(hotplug_slot); | ||
68 | } | ||
69 | |||
70 | static int init_slot(struct controller *ctrl) | 58 | static int init_slot(struct controller *ctrl) |
71 | { | 59 | { |
72 | struct slot *slot = ctrl->slot; | 60 | struct slot *slot = ctrl->slot; |
@@ -107,15 +95,14 @@ static int init_slot(struct controller *ctrl) | |||
107 | /* register this slot with the hotplug pci core */ | 95 | /* register this slot with the hotplug pci core */ |
108 | hotplug->info = info; | 96 | hotplug->info = info; |
109 | hotplug->private = slot; | 97 | hotplug->private = slot; |
110 | hotplug->release = &release_slot; | ||
111 | hotplug->ops = ops; | 98 | hotplug->ops = ops; |
112 | slot->hotplug_slot = hotplug; | 99 | slot->hotplug_slot = hotplug; |
113 | snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl)); | 100 | snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl)); |
114 | 101 | ||
115 | retval = pci_hp_register(hotplug, | 102 | retval = pci_hp_initialize(hotplug, |
116 | ctrl->pcie->port->subordinate, 0, name); | 103 | ctrl->pcie->port->subordinate, 0, name); |
117 | if (retval) | 104 | if (retval) |
118 | ctrl_err(ctrl, "pci_hp_register failed: error %d\n", retval); | 105 | ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval); |
119 | out: | 106 | out: |
120 | if (retval) { | 107 | if (retval) { |
121 | kfree(ops); | 108 | kfree(ops); |
@@ -127,7 +114,12 @@ out: | |||
127 | 114 | ||
128 | static void cleanup_slot(struct controller *ctrl) | 115 | static void cleanup_slot(struct controller *ctrl) |
129 | { | 116 | { |
130 | pci_hp_deregister(ctrl->slot->hotplug_slot); | 117 | struct hotplug_slot *hotplug_slot = ctrl->slot->hotplug_slot; |
118 | |||
119 | pci_hp_destroy(hotplug_slot); | ||
120 | kfree(hotplug_slot->ops); | ||
121 | kfree(hotplug_slot->info); | ||
122 | kfree(hotplug_slot); | ||
131 | } | 123 | } |
132 | 124 | ||
133 | /* | 125 | /* |
@@ -136,8 +128,11 @@ static void cleanup_slot(struct controller *ctrl) | |||
136 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | 128 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) |
137 | { | 129 | { |
138 | struct slot *slot = hotplug_slot->private; | 130 | struct slot *slot = hotplug_slot->private; |
131 | struct pci_dev *pdev = slot->ctrl->pcie->port; | ||
139 | 132 | ||
133 | pci_config_pm_runtime_get(pdev); | ||
140 | pciehp_set_attention_status(slot, status); | 134 | pciehp_set_attention_status(slot, status); |
135 | pci_config_pm_runtime_put(pdev); | ||
141 | return 0; | 136 | return 0; |
142 | } | 137 | } |
143 | 138 | ||
@@ -160,8 +155,11 @@ static int disable_slot(struct hotplug_slot *hotplug_slot) | |||
160 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | 155 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) |
161 | { | 156 | { |
162 | struct slot *slot = hotplug_slot->private; | 157 | struct slot *slot = hotplug_slot->private; |
158 | struct pci_dev *pdev = slot->ctrl->pcie->port; | ||
163 | 159 | ||
160 | pci_config_pm_runtime_get(pdev); | ||
164 | pciehp_get_power_status(slot, value); | 161 | pciehp_get_power_status(slot, value); |
162 | pci_config_pm_runtime_put(pdev); | ||
165 | return 0; | 163 | return 0; |
166 | } | 164 | } |
167 | 165 | ||
@@ -176,16 +174,22 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
176 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | 174 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) |
177 | { | 175 | { |
178 | struct slot *slot = hotplug_slot->private; | 176 | struct slot *slot = hotplug_slot->private; |
177 | struct pci_dev *pdev = slot->ctrl->pcie->port; | ||
179 | 178 | ||
179 | pci_config_pm_runtime_get(pdev); | ||
180 | pciehp_get_latch_status(slot, value); | 180 | pciehp_get_latch_status(slot, value); |
181 | pci_config_pm_runtime_put(pdev); | ||
181 | return 0; | 182 | return 0; |
182 | } | 183 | } |
183 | 184 | ||
184 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | 185 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) |
185 | { | 186 | { |
186 | struct slot *slot = hotplug_slot->private; | 187 | struct slot *slot = hotplug_slot->private; |
188 | struct pci_dev *pdev = slot->ctrl->pcie->port; | ||
187 | 189 | ||
190 | pci_config_pm_runtime_get(pdev); | ||
188 | pciehp_get_adapter_status(slot, value); | 191 | pciehp_get_adapter_status(slot, value); |
192 | pci_config_pm_runtime_put(pdev); | ||
189 | return 0; | 193 | return 0; |
190 | } | 194 | } |
191 | 195 | ||
@@ -196,12 +200,40 @@ static int reset_slot(struct hotplug_slot *hotplug_slot, int probe) | |||
196 | return pciehp_reset_slot(slot, probe); | 200 | return pciehp_reset_slot(slot, probe); |
197 | } | 201 | } |
198 | 202 | ||
203 | /** | ||
204 | * pciehp_check_presence() - synthesize event if presence has changed | ||
205 | * | ||
206 | * On probe and resume, an explicit presence check is necessary to bring up an | ||
207 | * occupied slot or bring down an unoccupied slot. This can't be triggered by | ||
208 | * events in the Slot Status register, they may be stale and are therefore | ||
209 | * cleared. Secondly, sending an interrupt for "events that occur while | ||
210 | * interrupt generation is disabled [when] interrupt generation is subsequently | ||
211 | * enabled" is optional per PCIe r4.0, sec 6.7.3.4. | ||
212 | */ | ||
213 | static void pciehp_check_presence(struct controller *ctrl) | ||
214 | { | ||
215 | struct slot *slot = ctrl->slot; | ||
216 | u8 occupied; | ||
217 | |||
218 | down_read(&ctrl->reset_lock); | ||
219 | mutex_lock(&slot->lock); | ||
220 | |||
221 | pciehp_get_adapter_status(slot, &occupied); | ||
222 | if ((occupied && (slot->state == OFF_STATE || | ||
223 | slot->state == BLINKINGON_STATE)) || | ||
224 | (!occupied && (slot->state == ON_STATE || | ||
225 | slot->state == BLINKINGOFF_STATE))) | ||
226 | pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); | ||
227 | |||
228 | mutex_unlock(&slot->lock); | ||
229 | up_read(&ctrl->reset_lock); | ||
230 | } | ||
231 | |||
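The condition inside `pciehp_check_presence()` synthesizes a presence-detect event only when physical occupancy disagrees with the state machine's notion of the slot. The predicate can be isolated as follows (state values copied from the header for illustration):

```c
#include <assert.h>

enum { OFF_STATE, BLINKINGON_STATE, BLINKINGOFF_STATE,
       POWERON_STATE, POWEROFF_STATE, ON_STATE };

/* Mirror of the check in pciehp_check_presence(): an occupied slot that
 * is off (or about to power on) needs bring-up; an empty slot that is on
 * (or about to power off) needs bring-down. */
static int needs_event(int occupied, int state)
{
	if (occupied)
		return state == OFF_STATE || state == BLINKINGON_STATE;
	return state == ON_STATE || state == BLINKINGOFF_STATE;
}
```

States already in transition (`POWERON_STATE`/`POWEROFF_STATE`) fall through to "no event", since the in-flight operation will settle the slot.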
199 | static int pciehp_probe(struct pcie_device *dev) | 232 | static int pciehp_probe(struct pcie_device *dev) |
200 | { | 233 | { |
201 | int rc; | 234 | int rc; |
202 | struct controller *ctrl; | 235 | struct controller *ctrl; |
203 | struct slot *slot; | 236 | struct slot *slot; |
204 | u8 occupied, poweron; | ||
205 | 237 | ||
206 | /* If this is not a "hotplug" service, we have no business here. */ | 238 | /* If this is not a "hotplug" service, we have no business here. */ |
207 | if (dev->service != PCIE_PORT_SERVICE_HP) | 239 | if (dev->service != PCIE_PORT_SERVICE_HP) |
@@ -238,21 +270,20 @@ static int pciehp_probe(struct pcie_device *dev) | |||
238 | goto err_out_free_ctrl_slot; | 270 | goto err_out_free_ctrl_slot; |
239 | } | 271 | } |
240 | 272 | ||
241 | /* Check if slot is occupied */ | 273 | /* Publish to user space */ |
242 | slot = ctrl->slot; | 274 | slot = ctrl->slot; |
243 | pciehp_get_adapter_status(slot, &occupied); | 275 | rc = pci_hp_add(slot->hotplug_slot); |
244 | pciehp_get_power_status(slot, &poweron); | 276 | if (rc) { |
245 | if (occupied && pciehp_force) { | 277 | ctrl_err(ctrl, "Publication to user space failed (%d)\n", rc); |
246 | mutex_lock(&slot->hotplug_lock); | 278 | goto err_out_shutdown_notification; |
247 | pciehp_enable_slot(slot); | ||
248 | mutex_unlock(&slot->hotplug_lock); | ||
249 | } | 279 | } |
250 | /* If empty slot's power status is on, turn power off */ | 280 | |
251 | if (!occupied && poweron && POWER_CTRL(ctrl)) | 281 | pciehp_check_presence(ctrl); |
252 | pciehp_power_off_slot(slot); | ||
253 | 282 | ||
254 | return 0; | 283 | return 0; |
255 | 284 | ||
285 | err_out_shutdown_notification: | ||
286 | pcie_shutdown_notification(ctrl); | ||
256 | err_out_free_ctrl_slot: | 287 | err_out_free_ctrl_slot: |
257 | cleanup_slot(ctrl); | 288 | cleanup_slot(ctrl); |
258 | err_out_release_ctlr: | 289 | err_out_release_ctlr: |
@@ -264,6 +295,8 @@ static void pciehp_remove(struct pcie_device *dev) | |||
264 | { | 295 | { |
265 | struct controller *ctrl = get_service_data(dev); | 296 | struct controller *ctrl = get_service_data(dev); |
266 | 297 | ||
298 | pci_hp_del(ctrl->slot->hotplug_slot); | ||
299 | pcie_shutdown_notification(ctrl); | ||
267 | cleanup_slot(ctrl); | 300 | cleanup_slot(ctrl); |
268 | pciehp_release_ctrl(ctrl); | 301 | pciehp_release_ctrl(ctrl); |
269 | } | 302 | } |
@@ -274,27 +307,28 @@ static int pciehp_suspend(struct pcie_device *dev) | |||
274 | return 0; | 307 | return 0; |
275 | } | 308 | } |
276 | 309 | ||
277 | static int pciehp_resume(struct pcie_device *dev) | 310 | static int pciehp_resume_noirq(struct pcie_device *dev) |
278 | { | 311 | { |
279 | struct controller *ctrl; | 312 | struct controller *ctrl = get_service_data(dev); |
280 | struct slot *slot; | 313 | struct slot *slot = ctrl->slot; |
281 | u8 status; | ||
282 | 314 | ||
283 | ctrl = get_service_data(dev); | 315 | /* pci_restore_state() just wrote to the Slot Control register */ |
316 | ctrl->cmd_started = jiffies; | ||
317 | ctrl->cmd_busy = true; | ||
284 | 318 | ||
285 | /* reinitialize the chipset's event detection logic */ | 319 | /* clear spurious events from rediscovery of inserted card */ |
286 | pcie_reenable_notification(ctrl); | 320 | if (slot->state == ON_STATE || slot->state == BLINKINGOFF_STATE) |
321 | pcie_clear_hotplug_events(ctrl); | ||
287 | 322 | ||
288 | slot = ctrl->slot; | 323 | return 0; |
324 | } | ||
325 | |||
326 | static int pciehp_resume(struct pcie_device *dev) | ||
327 | { | ||
328 | struct controller *ctrl = get_service_data(dev); | ||
329 | |||
330 | pciehp_check_presence(ctrl); | ||
289 | 331 | ||
290 | /* Check if slot is occupied */ | ||
291 | pciehp_get_adapter_status(slot, &status); | ||
292 | mutex_lock(&slot->hotplug_lock); | ||
293 | if (status) | ||
294 | pciehp_enable_slot(slot); | ||
295 | else | ||
296 | pciehp_disable_slot(slot); | ||
297 | mutex_unlock(&slot->hotplug_lock); | ||
298 | return 0; | 332 | return 0; |
299 | } | 333 | } |
300 | #endif /* PM */ | 334 | #endif /* PM */ |
@@ -309,6 +343,7 @@ static struct pcie_port_service_driver hpdriver_portdrv = { | |||
309 | 343 | ||
310 | #ifdef CONFIG_PM | 344 | #ifdef CONFIG_PM |
311 | .suspend = pciehp_suspend, | 345 | .suspend = pciehp_suspend, |
346 | .resume_noirq = pciehp_resume_noirq, | ||
312 | .resume = pciehp_resume, | 347 | .resume = pciehp_resume, |
313 | #endif /* PM */ | 348 | #endif /* PM */ |
314 | }; | 349 | }; |
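The new `pciehp_resume_noirq()` marks the Slot Control register as freshly written by setting `cmd_started` and `cmd_busy`, feeding the write-throttling rule documented in the struct: a subsequent write must wait for a Command Completed event or 1 second (PCIe r4.0, sec 6.7.3.2). A sketch of that rule under an assumed 1000-tick HZ:

```c
#include <assert.h>

#define HZ 1000	/* illustrative tick rate, one second of "jiffies" */

struct toy_ctrl {
	unsigned long cmd_started;	/* time of last Slot Control write */
	int cmd_busy;			/* cleared by Command Completed IRQ */
};

/* A new write may proceed once the previous command completed, or after
 * one second has elapsed without a Command Completed interrupt. */
static int write_allowed(const struct toy_ctrl *c, unsigned long now)
{
	return !c->cmd_busy || now - c->cmd_started >= HZ;
}
```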
diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c index c684faa43387..da7c72372ffc 100644 --- a/drivers/pci/hotplug/pciehp_ctrl.c +++ b/drivers/pci/hotplug/pciehp_ctrl.c | |||
@@ -17,28 +17,11 @@ | |||
17 | #include <linux/kernel.h> | 17 | #include <linux/kernel.h> |
18 | #include <linux/types.h> | 18 | #include <linux/types.h> |
19 | #include <linux/slab.h> | 19 | #include <linux/slab.h> |
20 | #include <linux/pm_runtime.h> | ||
20 | #include <linux/pci.h> | 21 | #include <linux/pci.h> |
21 | #include "../pci.h" | 22 | #include "../pci.h" |
22 | #include "pciehp.h" | 23 | #include "pciehp.h" |
23 | 24 | ||
24 | static void interrupt_event_handler(struct work_struct *work); | ||
25 | |||
26 | void pciehp_queue_interrupt_event(struct slot *p_slot, u32 event_type) | ||
27 | { | ||
28 | struct event_info *info; | ||
29 | |||
30 | info = kmalloc(sizeof(*info), GFP_ATOMIC); | ||
31 | if (!info) { | ||
32 | ctrl_err(p_slot->ctrl, "dropped event %d (ENOMEM)\n", event_type); | ||
33 | return; | ||
34 | } | ||
35 | |||
36 | INIT_WORK(&info->work, interrupt_event_handler); | ||
37 | info->event_type = event_type; | ||
38 | info->p_slot = p_slot; | ||
39 | queue_work(p_slot->wq, &info->work); | ||
40 | } | ||
41 | |||
42 | /* The following routines constitute the bulk of the | 25 | /* The following routines constitute the bulk of the |
43 | hotplug controller logic | 26 | hotplug controller logic |
44 | */ | 27 | */ |
@@ -119,14 +102,11 @@ err_exit: | |||
119 | * remove_board - Turns off slot and LEDs | 102 | * remove_board - Turns off slot and LEDs |
120 | * @p_slot: slot where board is being removed | 103 | * @p_slot: slot where board is being removed |
121 | */ | 104 | */ |
122 | static int remove_board(struct slot *p_slot) | 105 | static void remove_board(struct slot *p_slot) |
123 | { | 106 | { |
124 | int retval; | ||
125 | struct controller *ctrl = p_slot->ctrl; | 107 | struct controller *ctrl = p_slot->ctrl; |
126 | 108 | ||
127 | retval = pciehp_unconfigure_device(p_slot); | 109 | pciehp_unconfigure_device(p_slot); |
128 | if (retval) | ||
129 | return retval; | ||
130 | 110 | ||
131 | if (POWER_CTRL(ctrl)) { | 111 | if (POWER_CTRL(ctrl)) { |
132 | pciehp_power_off_slot(p_slot); | 112 | pciehp_power_off_slot(p_slot); |
@@ -141,86 +121,30 @@ static int remove_board(struct slot *p_slot) | |||
141 | 121 | ||
142 | /* turn off Green LED */ | 122 | /* turn off Green LED */ |
143 | pciehp_green_led_off(p_slot); | 123 | pciehp_green_led_off(p_slot); |
144 | return 0; | ||
145 | } | 124 | } |
146 | 125 | ||
147 | struct power_work_info { | 126 | static int pciehp_enable_slot(struct slot *slot); |
148 | struct slot *p_slot; | 127 | static int pciehp_disable_slot(struct slot *slot); |
149 | struct work_struct work; | ||
150 | unsigned int req; | ||
151 | #define DISABLE_REQ 0 | ||
152 | #define ENABLE_REQ 1 | ||
153 | }; | ||
154 | |||
155 | /** | ||
156 | * pciehp_power_thread - handle pushbutton events | ||
157 | * @work: &struct work_struct describing work to be done | ||
158 | * | ||
159 | * Scheduled procedure to handle blocking stuff for the pushbuttons. | ||
160 | * Handles all pending events and exits. | ||
161 | */ | ||
162 | static void pciehp_power_thread(struct work_struct *work) | ||
163 | { | ||
164 | struct power_work_info *info = | ||
165 | container_of(work, struct power_work_info, work); | ||
166 | struct slot *p_slot = info->p_slot; | ||
167 | int ret; | ||
168 | |||
169 | switch (info->req) { | ||
170 | case DISABLE_REQ: | ||
171 | mutex_lock(&p_slot->hotplug_lock); | ||
172 | pciehp_disable_slot(p_slot); | ||
173 | mutex_unlock(&p_slot->hotplug_lock); | ||
174 | mutex_lock(&p_slot->lock); | ||
175 | p_slot->state = STATIC_STATE; | ||
176 | mutex_unlock(&p_slot->lock); | ||
177 | break; | ||
178 | case ENABLE_REQ: | ||
179 | mutex_lock(&p_slot->hotplug_lock); | ||
180 | ret = pciehp_enable_slot(p_slot); | ||
181 | mutex_unlock(&p_slot->hotplug_lock); | ||
182 | if (ret) | ||
183 | pciehp_green_led_off(p_slot); | ||
184 | mutex_lock(&p_slot->lock); | ||
185 | p_slot->state = STATIC_STATE; | ||
186 | mutex_unlock(&p_slot->lock); | ||
187 | break; | ||
188 | default: | ||
189 | break; | ||
190 | } | ||
191 | |||
192 | kfree(info); | ||
193 | } | ||
194 | 128 | ||
195 | static void pciehp_queue_power_work(struct slot *p_slot, int req) | 129 | void pciehp_request(struct controller *ctrl, int action) |
196 | { | 130 | { |
197 | struct power_work_info *info; | 131 | atomic_or(action, &ctrl->pending_events); |
198 | 132 | if (!pciehp_poll_mode) | |
199 | p_slot->state = (req == ENABLE_REQ) ? POWERON_STATE : POWEROFF_STATE; | 133 | irq_wake_thread(ctrl->pcie->irq, ctrl); |
200 | |||
201 | info = kmalloc(sizeof(*info), GFP_KERNEL); | ||
202 | if (!info) { | ||
203 | ctrl_err(p_slot->ctrl, "no memory to queue %s request\n", | ||
204 | (req == ENABLE_REQ) ? "poweron" : "poweroff"); | ||
205 | return; | ||
206 | } | ||
207 | info->p_slot = p_slot; | ||
208 | INIT_WORK(&info->work, pciehp_power_thread); | ||
209 | info->req = req; | ||
210 | queue_work(p_slot->wq, &info->work); | ||
211 | } | 134 | } |
212 | 135 | ||
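`pciehp_request()` replaces the old kmalloc'd per-event work items with a lock-free scheme: requesters OR an action flag into `pending_events` and wake the IRQ thread, which drains the whole word in one shot. A sketch of the two sides using C11 atomics in place of the kernel's `atomic_or()`/`atomic_xchg()`:

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_uint pending_events;

/* Requester side: accumulate the action; in the driver this is followed
 * by irq_wake_thread(ctrl->pcie->irq, ctrl). No allocation can fail. */
static void toy_request(unsigned int action)
{
	atomic_fetch_or(&pending_events, action);
}

/* IRQ-thread side: grab and clear everything pending in one atomic swap,
 * so events posted concurrently are either seen now or on the next run,
 * never lost. */
static unsigned int toy_fetch_events(void)
{
	return atomic_exchange(&pending_events, 0);
}
```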
213 | void pciehp_queue_pushbutton_work(struct work_struct *work) | 136 | void pciehp_queue_pushbutton_work(struct work_struct *work) |
214 | { | 137 | { |
215 | struct slot *p_slot = container_of(work, struct slot, work.work); | 138 | struct slot *p_slot = container_of(work, struct slot, work.work); |
139 | struct controller *ctrl = p_slot->ctrl; | ||
216 | 140 | ||
217 | mutex_lock(&p_slot->lock); | 141 | mutex_lock(&p_slot->lock); |
218 | switch (p_slot->state) { | 142 | switch (p_slot->state) { |
219 | case BLINKINGOFF_STATE: | 143 | case BLINKINGOFF_STATE: |
220 | pciehp_queue_power_work(p_slot, DISABLE_REQ); | 144 | pciehp_request(ctrl, DISABLE_SLOT); |
221 | break; | 145 | break; |
222 | case BLINKINGON_STATE: | 146 | case BLINKINGON_STATE: |
223 | pciehp_queue_power_work(p_slot, ENABLE_REQ); | 147 | pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); |
224 | break; | 148 | break; |
225 | default: | 149 | default: |
226 | break; | 150 | break; |
@@ -228,18 +152,15 @@ void pciehp_queue_pushbutton_work(struct work_struct *work) | |||
228 | mutex_unlock(&p_slot->lock); | 152 | mutex_unlock(&p_slot->lock); |
229 | } | 153 | } |
230 | 154 | ||
231 | /* | 155 | void pciehp_handle_button_press(struct slot *p_slot) |
232 | * Note: This function must be called with slot->lock held | ||
233 | */ | ||
234 | static void handle_button_press_event(struct slot *p_slot) | ||
235 | { | 156 | { |
236 | struct controller *ctrl = p_slot->ctrl; | 157 | struct controller *ctrl = p_slot->ctrl; |
237 | u8 getstatus; | ||
238 | 158 | ||
159 | mutex_lock(&p_slot->lock); | ||
239 | switch (p_slot->state) { | 160 | switch (p_slot->state) { |
240 | case STATIC_STATE: | 161 | case OFF_STATE: |
241 | pciehp_get_power_status(p_slot, &getstatus); | 162 | case ON_STATE: |
242 | if (getstatus) { | 163 | if (p_slot->state == ON_STATE) { |
243 | p_slot->state = BLINKINGOFF_STATE; | 164 | p_slot->state = BLINKINGOFF_STATE; |
244 | ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n", | 165 | ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n", |
245 | slot_name(p_slot)); | 166 | slot_name(p_slot)); |
@@ -251,7 +172,7 @@ static void handle_button_press_event(struct slot *p_slot) | |||
251 | /* blink green LED and turn off amber */ | 172 | /* blink green LED and turn off amber */ |
252 | pciehp_green_led_blink(p_slot); | 173 | pciehp_green_led_blink(p_slot); |
253 | pciehp_set_attention_status(p_slot, 0); | 174 | pciehp_set_attention_status(p_slot, 0); |
254 | queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ); | 175 | schedule_delayed_work(&p_slot->work, 5 * HZ); |
255 | break; | 176 | break; |
256 | case BLINKINGOFF_STATE: | 177 | case BLINKINGOFF_STATE: |
257 | case BLINKINGON_STATE: | 178 | case BLINKINGON_STATE: |
@@ -262,118 +183,104 @@ static void handle_button_press_event(struct slot *p_slot) | |||
262 | */ | 183 | */ |
263 | ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(p_slot)); | 184 | ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(p_slot)); |
264 | cancel_delayed_work(&p_slot->work); | 185 | cancel_delayed_work(&p_slot->work); |
265 | if (p_slot->state == BLINKINGOFF_STATE) | 186 | if (p_slot->state == BLINKINGOFF_STATE) { |
187 | p_slot->state = ON_STATE; | ||
266 | pciehp_green_led_on(p_slot); | 188 | pciehp_green_led_on(p_slot); |
267 | else | 189 | } else { |
190 | p_slot->state = OFF_STATE; | ||
268 | pciehp_green_led_off(p_slot); | 191 | pciehp_green_led_off(p_slot); |
192 | } | ||
269 | pciehp_set_attention_status(p_slot, 0); | 193 | pciehp_set_attention_status(p_slot, 0); |
270 | ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n", | 194 | ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n", |
271 | slot_name(p_slot)); | 195 | slot_name(p_slot)); |
272 | p_slot->state = STATIC_STATE; | ||
273 | break; | ||
274 | case POWEROFF_STATE: | ||
275 | case POWERON_STATE: | ||
276 | /* | ||
277 | * Ignore if the slot is on power-on or power-off state; | ||
278 | * this means that the previous attention button action | ||
279 | * to hot-add or hot-remove is undergoing | ||
280 | */ | ||
281 | ctrl_info(ctrl, "Slot(%s): Button ignored\n", | ||
282 | slot_name(p_slot)); | ||
283 | break; | 196 | break; |
284 | default: | 197 | default: |
285 | ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n", | 198 | ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n", |
286 | slot_name(p_slot), p_slot->state); | 199 | slot_name(p_slot), p_slot->state); |
287 | break; | 200 | break; |
288 | } | 201 | } |
202 | mutex_unlock(&p_slot->lock); | ||
289 | } | 203 | } |
290 | 204 | ||
291 | /* | 205 | void pciehp_handle_disable_request(struct slot *slot) |
292 | * Note: This function must be called with slot->lock held | ||
293 | */ | ||
294 | static void handle_link_event(struct slot *p_slot, u32 event) | ||
295 | { | 206 | { |
296 | struct controller *ctrl = p_slot->ctrl; | 207 | struct controller *ctrl = slot->ctrl; |
297 | 208 | ||
298 | switch (p_slot->state) { | 209 | mutex_lock(&slot->lock); |
210 | switch (slot->state) { | ||
299 | case BLINKINGON_STATE: | 211 | case BLINKINGON_STATE: |
300 | case BLINKINGOFF_STATE: | 212 | case BLINKINGOFF_STATE: |
301 | cancel_delayed_work(&p_slot->work); | 213 | cancel_delayed_work(&slot->work); |
302 | /* Fall through */ | ||
303 | case STATIC_STATE: | ||
304 | pciehp_queue_power_work(p_slot, event == INT_LINK_UP ? | ||
305 | ENABLE_REQ : DISABLE_REQ); | ||
306 | break; | ||
307 | case POWERON_STATE: | ||
308 | if (event == INT_LINK_UP) { | ||
309 | ctrl_info(ctrl, "Slot(%s): Link Up event ignored; already powering on\n", | ||
310 | slot_name(p_slot)); | ||
311 | } else { | ||
312 | ctrl_info(ctrl, "Slot(%s): Link Down event queued; currently getting powered on\n", | ||
313 | slot_name(p_slot)); | ||
314 | pciehp_queue_power_work(p_slot, DISABLE_REQ); | ||
315 | } | ||
316 | break; | ||
317 | case POWEROFF_STATE: | ||
318 | if (event == INT_LINK_UP) { | ||
319 | ctrl_info(ctrl, "Slot(%s): Link Up event queued; currently getting powered off\n", | ||
320 | slot_name(p_slot)); | ||
321 | pciehp_queue_power_work(p_slot, ENABLE_REQ); | ||
322 | } else { | ||
323 | ctrl_info(ctrl, "Slot(%s): Link Down event ignored; already powering off\n", | ||
324 | slot_name(p_slot)); | ||
325 | } | ||
326 | break; | ||
327 | default: | ||
328 | ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n", | ||
329 | slot_name(p_slot), p_slot->state); | ||
330 | break; | 214 | break; |
331 | } | 215 | } |
216 | slot->state = POWEROFF_STATE; | ||
217 | mutex_unlock(&slot->lock); | ||
218 | |||
219 | ctrl->request_result = pciehp_disable_slot(slot); | ||
332 | } | 220 | } |
333 | 221 | ||
334 | static void interrupt_event_handler(struct work_struct *work) | 222 | void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events) |
335 | { | 223 | { |
336 | struct event_info *info = container_of(work, struct event_info, work); | 224 | struct controller *ctrl = slot->ctrl; |
337 | struct slot *p_slot = info->p_slot; | 225 | bool link_active; |
338 | struct controller *ctrl = p_slot->ctrl; | 226 | u8 present; |
339 | 227 | ||
340 | mutex_lock(&p_slot->lock); | 228 | /* |
341 | switch (info->event_type) { | 229 | * If the slot is on and presence or link has changed, turn it off. |
342 | case INT_BUTTON_PRESS: | 230 | * Even if it's occupied again, we cannot assume the card is the same. |
343 | handle_button_press_event(p_slot); | 231 | */ |
344 | break; | 232 | mutex_lock(&slot->lock); |
345 | case INT_POWER_FAULT: | 233 | switch (slot->state) { |
346 | if (!POWER_CTRL(ctrl)) | 234 | case BLINKINGOFF_STATE: |
347 | break; | 235 | cancel_delayed_work(&slot->work); |
348 | pciehp_set_attention_status(p_slot, 1); | 236 | /* fall through */ |
349 | pciehp_green_led_off(p_slot); | 237 | case ON_STATE: |
350 | break; | 238 | slot->state = POWEROFF_STATE; |
351 | case INT_PRESENCE_ON: | 239 | mutex_unlock(&slot->lock); |
352 | pciehp_queue_power_work(p_slot, ENABLE_REQ); | 240 | if (events & PCI_EXP_SLTSTA_DLLSC) |
241 | ctrl_info(ctrl, "Slot(%s): Link Down\n", | ||
242 | slot_name(slot)); | ||
243 | if (events & PCI_EXP_SLTSTA_PDC) | ||
244 | ctrl_info(ctrl, "Slot(%s): Card not present\n", | ||
245 | slot_name(slot)); | ||
246 | pciehp_disable_slot(slot); | ||
353 | break; | 247 | break; |
354 | case INT_PRESENCE_OFF: | 248 | default: |
355 | /* | 249 | mutex_unlock(&slot->lock); |
356 | * Regardless of surprise capability, we need to | ||
357 | * definitely remove a card that has been pulled out! | ||
358 | */ | ||
359 | pciehp_queue_power_work(p_slot, DISABLE_REQ); | ||
360 | break; | 250 | break; |
361 | case INT_LINK_UP: | 251 | } |
362 | case INT_LINK_DOWN: | 252 | |
363 | handle_link_event(p_slot, info->event_type); | 253 | /* Turn the slot on if it's occupied or link is up */ |
254 | mutex_lock(&slot->lock); | ||
255 | pciehp_get_adapter_status(slot, &present); | ||
256 | link_active = pciehp_check_link_active(ctrl); | ||
257 | if (!present && !link_active) { | ||
258 | mutex_unlock(&slot->lock); | ||
259 | return; | ||
260 | } | ||
261 | |||
262 | switch (slot->state) { | ||
263 | case BLINKINGON_STATE: | ||
264 | cancel_delayed_work(&slot->work); | ||
265 | /* fall through */ | ||
266 | case OFF_STATE: | ||
267 | slot->state = POWERON_STATE; | ||
268 | mutex_unlock(&slot->lock); | ||
269 | if (present) | ||
270 | ctrl_info(ctrl, "Slot(%s): Card present\n", | ||
271 | slot_name(slot)); | ||
272 | if (link_active) | ||
273 | ctrl_info(ctrl, "Slot(%s): Link Up\n", | ||
274 | slot_name(slot)); | ||
275 | ctrl->request_result = pciehp_enable_slot(slot); | ||
364 | break; | 276 | break; |
365 | default: | 277 | default: |
278 | mutex_unlock(&slot->lock); | ||
366 | break; | 279 | break; |
367 | } | 280 | } |
368 | mutex_unlock(&p_slot->lock); | ||
369 | |||
370 | kfree(info); | ||
371 | } | 281 | } |
372 | 282 | ||
373 | /* | 283 | static int __pciehp_enable_slot(struct slot *p_slot) |
374 | * Note: This function must be called with slot->hotplug_lock held | ||
375 | */ | ||
376 | int pciehp_enable_slot(struct slot *p_slot) | ||
377 | { | 284 | { |
378 | u8 getstatus = 0; | 285 | u8 getstatus = 0; |
379 | struct controller *ctrl = p_slot->ctrl; | 286 | struct controller *ctrl = p_slot->ctrl; |
@@ -404,17 +311,29 @@ int pciehp_enable_slot(struct slot *p_slot) | |||
404 | return board_added(p_slot); | 311 | return board_added(p_slot); |
405 | } | 312 | } |
406 | 313 | ||
407 | /* | 314 | static int pciehp_enable_slot(struct slot *slot) |
408 | * Note: This function must be called with slot->hotplug_lock held | 315 | { |
409 | */ | 316 | struct controller *ctrl = slot->ctrl; |
410 | int pciehp_disable_slot(struct slot *p_slot) | 317 | int ret; |
318 | |||
319 | pm_runtime_get_sync(&ctrl->pcie->port->dev); | ||
320 | ret = __pciehp_enable_slot(slot); | ||
321 | if (ret && ATTN_BUTTN(ctrl)) | ||
322 | pciehp_green_led_off(slot); /* may be blinking */ | ||
323 | pm_runtime_put(&ctrl->pcie->port->dev); | ||
324 | |||
325 | mutex_lock(&slot->lock); | ||
326 | slot->state = ret ? OFF_STATE : ON_STATE; | ||
327 | mutex_unlock(&slot->lock); | ||
328 | |||
329 | return ret; | ||
330 | } | ||
331 | |||
332 | static int __pciehp_disable_slot(struct slot *p_slot) | ||
411 | { | 333 | { |
412 | u8 getstatus = 0; | 334 | u8 getstatus = 0; |
413 | struct controller *ctrl = p_slot->ctrl; | 335 | struct controller *ctrl = p_slot->ctrl; |
414 | 336 | ||
415 | if (!p_slot->ctrl) | ||
416 | return 1; | ||
417 | |||
418 | if (POWER_CTRL(p_slot->ctrl)) { | 337 | if (POWER_CTRL(p_slot->ctrl)) { |
419 | pciehp_get_power_status(p_slot, &getstatus); | 338 | pciehp_get_power_status(p_slot, &getstatus); |
420 | if (!getstatus) { | 339 | if (!getstatus) { |
@@ -424,32 +343,50 @@ int pciehp_disable_slot(struct slot *p_slot) | |||
424 | } | 343 | } |
425 | } | 344 | } |
426 | 345 | ||
427 | return remove_board(p_slot); | 346 | remove_board(p_slot); |
347 | return 0; | ||
348 | } | ||
349 | |||
350 | static int pciehp_disable_slot(struct slot *slot) | ||
351 | { | ||
352 | struct controller *ctrl = slot->ctrl; | ||
353 | int ret; | ||
354 | |||
355 | pm_runtime_get_sync(&ctrl->pcie->port->dev); | ||
356 | ret = __pciehp_disable_slot(slot); | ||
357 | pm_runtime_put(&ctrl->pcie->port->dev); | ||
358 | |||
359 | mutex_lock(&slot->lock); | ||
360 | slot->state = OFF_STATE; | ||
361 | mutex_unlock(&slot->lock); | ||
362 | |||
363 | return ret; | ||
428 | } | 364 | } |
429 | 365 | ||
430 | int pciehp_sysfs_enable_slot(struct slot *p_slot) | 366 | int pciehp_sysfs_enable_slot(struct slot *p_slot) |
431 | { | 367 | { |
432 | int retval = -ENODEV; | ||
433 | struct controller *ctrl = p_slot->ctrl; | 368 | struct controller *ctrl = p_slot->ctrl; |
434 | 369 | ||
435 | mutex_lock(&p_slot->lock); | 370 | mutex_lock(&p_slot->lock); |
436 | switch (p_slot->state) { | 371 | switch (p_slot->state) { |
437 | case BLINKINGON_STATE: | 372 | case BLINKINGON_STATE: |
438 | cancel_delayed_work(&p_slot->work); | 373 | case OFF_STATE: |
439 | case STATIC_STATE: | ||
440 | p_slot->state = POWERON_STATE; | ||
441 | mutex_unlock(&p_slot->lock); | 374 | mutex_unlock(&p_slot->lock); |
442 | mutex_lock(&p_slot->hotplug_lock); | 375 | /* |
443 | retval = pciehp_enable_slot(p_slot); | 376 | * The IRQ thread becomes a no-op if the user pulls out the |
444 | mutex_unlock(&p_slot->hotplug_lock); | 377 | * card before the thread wakes up, so initialize to -ENODEV. |
445 | mutex_lock(&p_slot->lock); | 378 | */ |
446 | p_slot->state = STATIC_STATE; | 379 | ctrl->request_result = -ENODEV; |
447 | break; | 380 | pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); |
381 | wait_event(ctrl->requester, | ||
382 | !atomic_read(&ctrl->pending_events)); | ||
383 | return ctrl->request_result; | ||
448 | case POWERON_STATE: | 384 | case POWERON_STATE: |
449 | ctrl_info(ctrl, "Slot(%s): Already in powering on state\n", | 385 | ctrl_info(ctrl, "Slot(%s): Already in powering on state\n", |
450 | slot_name(p_slot)); | 386 | slot_name(p_slot)); |
451 | break; | 387 | break; |
452 | case BLINKINGOFF_STATE: | 388 | case BLINKINGOFF_STATE: |
389 | case ON_STATE: | ||
453 | case POWEROFF_STATE: | 390 | case POWEROFF_STATE: |
454 | ctrl_info(ctrl, "Slot(%s): Already enabled\n", | 391 | ctrl_info(ctrl, "Slot(%s): Already enabled\n", |
455 | slot_name(p_slot)); | 392 | slot_name(p_slot)); |
@@ -461,32 +398,28 @@ int pciehp_sysfs_enable_slot(struct slot *p_slot) | |||
461 | } | 398 | } |
462 | mutex_unlock(&p_slot->lock); | 399 | mutex_unlock(&p_slot->lock); |
463 | 400 | ||
464 | return retval; | 401 | return -ENODEV; |
465 | } | 402 | } |
466 | 403 | ||
467 | int pciehp_sysfs_disable_slot(struct slot *p_slot) | 404 | int pciehp_sysfs_disable_slot(struct slot *p_slot) |
468 | { | 405 | { |
469 | int retval = -ENODEV; | ||
470 | struct controller *ctrl = p_slot->ctrl; | 406 | struct controller *ctrl = p_slot->ctrl; |
471 | 407 | ||
472 | mutex_lock(&p_slot->lock); | 408 | mutex_lock(&p_slot->lock); |
473 | switch (p_slot->state) { | 409 | switch (p_slot->state) { |
474 | case BLINKINGOFF_STATE: | 410 | case BLINKINGOFF_STATE: |
475 | cancel_delayed_work(&p_slot->work); | 411 | case ON_STATE: |
476 | case STATIC_STATE: | ||
477 | p_slot->state = POWEROFF_STATE; | ||
478 | mutex_unlock(&p_slot->lock); | 412 | mutex_unlock(&p_slot->lock); |
479 | mutex_lock(&p_slot->hotplug_lock); | 413 | pciehp_request(ctrl, DISABLE_SLOT); |
480 | retval = pciehp_disable_slot(p_slot); | 414 | wait_event(ctrl->requester, |
481 | mutex_unlock(&p_slot->hotplug_lock); | 415 | !atomic_read(&ctrl->pending_events)); |
482 | mutex_lock(&p_slot->lock); | 416 | return ctrl->request_result; |
483 | p_slot->state = STATIC_STATE; | ||
484 | break; | ||
485 | case POWEROFF_STATE: | 417 | case POWEROFF_STATE: |
486 | ctrl_info(ctrl, "Slot(%s): Already in powering off state\n", | 418 | ctrl_info(ctrl, "Slot(%s): Already in powering off state\n", |
487 | slot_name(p_slot)); | 419 | slot_name(p_slot)); |
488 | break; | 420 | break; |
489 | case BLINKINGON_STATE: | 421 | case BLINKINGON_STATE: |
422 | case OFF_STATE: | ||
490 | case POWERON_STATE: | 423 | case POWERON_STATE: |
491 | ctrl_info(ctrl, "Slot(%s): Already disabled\n", | 424 | ctrl_info(ctrl, "Slot(%s): Already disabled\n", |
492 | slot_name(p_slot)); | 425 | slot_name(p_slot)); |
@@ -498,5 +431,5 @@ int pciehp_sysfs_disable_slot(struct slot *p_slot) | |||
498 | } | 431 | } |
499 | mutex_unlock(&p_slot->lock); | 432 | mutex_unlock(&p_slot->lock); |
500 | 433 | ||
501 | return retval; | 434 | return -ENODEV; |
502 | } | 435 | } |
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c index 718b6073afad..7136e3430925 100644 --- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c | |||
@@ -17,8 +17,9 @@ | |||
17 | #include <linux/types.h> | 17 | #include <linux/types.h> |
18 | #include <linux/signal.h> | 18 | #include <linux/signal.h> |
19 | #include <linux/jiffies.h> | 19 | #include <linux/jiffies.h> |
20 | #include <linux/timer.h> | 20 | #include <linux/kthread.h> |
21 | #include <linux/pci.h> | 21 | #include <linux/pci.h> |
22 | #include <linux/pm_runtime.h> | ||
22 | #include <linux/interrupt.h> | 23 | #include <linux/interrupt.h> |
23 | #include <linux/time.h> | 24 | #include <linux/time.h> |
24 | #include <linux/slab.h> | 25 | #include <linux/slab.h> |
@@ -31,47 +32,24 @@ static inline struct pci_dev *ctrl_dev(struct controller *ctrl) | |||
31 | return ctrl->pcie->port; | 32 | return ctrl->pcie->port; |
32 | } | 33 | } |
33 | 34 | ||
34 | static irqreturn_t pcie_isr(int irq, void *dev_id); | 35 | static irqreturn_t pciehp_isr(int irq, void *dev_id); |
35 | static void start_int_poll_timer(struct controller *ctrl, int sec); | 36 | static irqreturn_t pciehp_ist(int irq, void *dev_id); |
36 | 37 | static int pciehp_poll(void *data); | |
37 | /* This is the interrupt polling timeout function. */ | ||
38 | static void int_poll_timeout(struct timer_list *t) | ||
39 | { | ||
40 | struct controller *ctrl = from_timer(ctrl, t, poll_timer); | ||
41 | |||
42 | /* Poll for interrupt events. regs == NULL => polling */ | ||
43 | pcie_isr(0, ctrl); | ||
44 | |||
45 | if (!pciehp_poll_time) | ||
46 | pciehp_poll_time = 2; /* default polling interval is 2 sec */ | ||
47 | |||
48 | start_int_poll_timer(ctrl, pciehp_poll_time); | ||
49 | } | ||
50 | |||
51 | /* This function starts the interrupt polling timer. */ | ||
52 | static void start_int_poll_timer(struct controller *ctrl, int sec) | ||
53 | { | ||
54 | /* Clamp to sane value */ | ||
55 | if ((sec <= 0) || (sec > 60)) | ||
56 | sec = 2; | ||
57 | |||
58 | ctrl->poll_timer.expires = jiffies + sec * HZ; | ||
59 | add_timer(&ctrl->poll_timer); | ||
60 | } | ||
61 | 38 | ||
62 | static inline int pciehp_request_irq(struct controller *ctrl) | 39 | static inline int pciehp_request_irq(struct controller *ctrl) |
63 | { | 40 | { |
64 | int retval, irq = ctrl->pcie->irq; | 41 | int retval, irq = ctrl->pcie->irq; |
65 | 42 | ||
66 | /* Install interrupt polling timer. Start with 10 sec delay */ | ||
67 | if (pciehp_poll_mode) { | 43 | if (pciehp_poll_mode) { |
68 | timer_setup(&ctrl->poll_timer, int_poll_timeout, 0); | 44 | ctrl->poll_thread = kthread_run(&pciehp_poll, ctrl, |
69 | start_int_poll_timer(ctrl, 10); | 45 | "pciehp_poll-%s", |
70 | return 0; | 46 | slot_name(ctrl->slot)); |
47 | return PTR_ERR_OR_ZERO(ctrl->poll_thread); | ||
71 | } | 48 | } |
72 | 49 | ||
73 | /* Installs the interrupt handler */ | 50 | /* Installs the interrupt handler */ |
74 | retval = request_irq(irq, pcie_isr, IRQF_SHARED, MY_NAME, ctrl); | 51 | retval = request_threaded_irq(irq, pciehp_isr, pciehp_ist, |
52 | IRQF_SHARED, MY_NAME, ctrl); | ||
75 | if (retval) | 53 | if (retval) |
76 | ctrl_err(ctrl, "Cannot get irq %d for the hotplug controller\n", | 54 | ctrl_err(ctrl, "Cannot get irq %d for the hotplug controller\n", |
77 | irq); | 55 | irq); |
@@ -81,7 +59,7 @@ static inline int pciehp_request_irq(struct controller *ctrl) | |||
81 | static inline void pciehp_free_irq(struct controller *ctrl) | 59 | static inline void pciehp_free_irq(struct controller *ctrl) |
82 | { | 60 | { |
83 | if (pciehp_poll_mode) | 61 | if (pciehp_poll_mode) |
84 | del_timer_sync(&ctrl->poll_timer); | 62 | kthread_stop(ctrl->poll_thread); |
85 | else | 63 | else |
86 | free_irq(ctrl->pcie->irq, ctrl); | 64 | free_irq(ctrl->pcie->irq, ctrl); |
87 | } | 65 | } |
@@ -293,6 +271,11 @@ int pciehp_check_link_status(struct controller *ctrl) | |||
293 | found = pci_bus_check_dev(ctrl->pcie->port->subordinate, | 271 | found = pci_bus_check_dev(ctrl->pcie->port->subordinate, |
294 | PCI_DEVFN(0, 0)); | 272 | PCI_DEVFN(0, 0)); |
295 | 273 | ||
274 | /* ignore link or presence changes up to this point */ | ||
275 | if (found) | ||
276 | atomic_and(~(PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC), | ||
277 | &ctrl->pending_events); | ||
278 | |||
296 | pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); | 279 | pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); |
297 | ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status); | 280 | ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status); |
298 | if ((lnk_status & PCI_EXP_LNKSTA_LT) || | 281 | if ((lnk_status & PCI_EXP_LNKSTA_LT) || |
@@ -339,7 +322,9 @@ int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot, | |||
339 | struct pci_dev *pdev = ctrl_dev(slot->ctrl); | 322 | struct pci_dev *pdev = ctrl_dev(slot->ctrl); |
340 | u16 slot_ctrl; | 323 | u16 slot_ctrl; |
341 | 324 | ||
325 | pci_config_pm_runtime_get(pdev); | ||
342 | pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl); | 326 | pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl); |
327 | pci_config_pm_runtime_put(pdev); | ||
343 | *status = (slot_ctrl & (PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC)) >> 6; | 328 | *status = (slot_ctrl & (PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC)) >> 6; |
344 | return 0; | 329 | return 0; |
345 | } | 330 | } |
@@ -350,7 +335,9 @@ void pciehp_get_attention_status(struct slot *slot, u8 *status) | |||
350 | struct pci_dev *pdev = ctrl_dev(ctrl); | 335 | struct pci_dev *pdev = ctrl_dev(ctrl); |
351 | u16 slot_ctrl; | 336 | u16 slot_ctrl; |
352 | 337 | ||
338 | pci_config_pm_runtime_get(pdev); | ||
353 | pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl); | 339 | pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl); |
340 | pci_config_pm_runtime_put(pdev); | ||
354 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x, value read %x\n", __func__, | 341 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x, value read %x\n", __func__, |
355 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_ctrl); | 342 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_ctrl); |
356 | 343 | ||
@@ -425,9 +412,12 @@ int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot, | |||
425 | { | 412 | { |
426 | struct slot *slot = hotplug_slot->private; | 413 | struct slot *slot = hotplug_slot->private; |
427 | struct controller *ctrl = slot->ctrl; | 414 | struct controller *ctrl = slot->ctrl; |
415 | struct pci_dev *pdev = ctrl_dev(ctrl); | ||
428 | 416 | ||
417 | pci_config_pm_runtime_get(pdev); | ||
429 | pcie_write_cmd_nowait(ctrl, status << 6, | 418 | pcie_write_cmd_nowait(ctrl, status << 6, |
430 | PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC); | 419 | PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC); |
420 | pci_config_pm_runtime_put(pdev); | ||
431 | return 0; | 421 | return 0; |
432 | } | 422 | } |
433 | 423 | ||
@@ -539,20 +529,35 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id) | |||
539 | { | 529 | { |
540 | struct controller *ctrl = (struct controller *)dev_id; | 530 | struct controller *ctrl = (struct controller *)dev_id; |
541 | struct pci_dev *pdev = ctrl_dev(ctrl); | 531 | struct pci_dev *pdev = ctrl_dev(ctrl); |
542 | struct pci_bus *subordinate = pdev->subordinate; | 532 | struct device *parent = pdev->dev.parent; |
543 | struct pci_dev *dev; | ||
544 | struct slot *slot = ctrl->slot; | ||
545 | u16 status, events; | 533 | u16 status, events; |
546 | u8 present; | ||
547 | bool link; | ||
548 | 534 | ||
549 | /* Interrupts cannot originate from a controller that's asleep */ | 535 | /* |
536 | * Interrupts only occur in D3hot or shallower (PCIe r4.0, sec 6.7.3.4). | ||
537 | */ | ||
550 | if (pdev->current_state == PCI_D3cold) | 538 | if (pdev->current_state == PCI_D3cold) |
551 | return IRQ_NONE; | 539 | return IRQ_NONE; |
552 | 540 | ||
541 | /* | ||
542 | * Keep the port accessible by holding a runtime PM ref on its parent. | ||
543 | * Defer resume of the parent to the IRQ thread if it's suspended. | ||
544 | * Mask the interrupt until then. | ||
545 | */ | ||
546 | if (parent) { | ||
547 | pm_runtime_get_noresume(parent); | ||
548 | if (!pm_runtime_active(parent)) { | ||
549 | pm_runtime_put(parent); | ||
550 | disable_irq_nosync(irq); | ||
551 | atomic_or(RERUN_ISR, &ctrl->pending_events); | ||
552 | return IRQ_WAKE_THREAD; | ||
553 | } | ||
554 | } | ||
555 | |||
553 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &status); | 556 | pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &status); |
554 | if (status == (u16) ~0) { | 557 | if (status == (u16) ~0) { |
555 | ctrl_info(ctrl, "%s: no response from device\n", __func__); | 558 | ctrl_info(ctrl, "%s: no response from device\n", __func__); |
559 | if (parent) | ||
560 | pm_runtime_put(parent); | ||
556 | return IRQ_NONE; | 561 | return IRQ_NONE; |
557 | } | 562 | } |
558 | 563 | ||
@@ -571,86 +576,119 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id) | |||
571 | if (ctrl->power_fault_detected) | 576 | if (ctrl->power_fault_detected) |
572 | events &= ~PCI_EXP_SLTSTA_PFD; | 577 | events &= ~PCI_EXP_SLTSTA_PFD; |
573 | 578 | ||
574 | if (!events) | 579 | if (!events) { |
580 | if (parent) | ||
581 | pm_runtime_put(parent); | ||
575 | return IRQ_NONE; | 582 | return IRQ_NONE; |
576 | 583 | } | |
577 | /* Capture link status before clearing interrupts */ | ||
578 | if (events & PCI_EXP_SLTSTA_DLLSC) | ||
579 | link = pciehp_check_link_active(ctrl); | ||
580 | 584 | ||
581 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events); | 585 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events); |
582 | ctrl_dbg(ctrl, "pending interrupts %#06x from Slot Status\n", events); | 586 | ctrl_dbg(ctrl, "pending interrupts %#06x from Slot Status\n", events); |
587 | if (parent) | ||
588 | pm_runtime_put(parent); | ||
583 | 589 | ||
584 | /* Check Command Complete Interrupt Pending */ | 590 | /* |
591 | * Command Completed notifications are not deferred to the | ||
592 | * IRQ thread because it may be waiting for their arrival. | ||
593 | */ | ||
585 | if (events & PCI_EXP_SLTSTA_CC) { | 594 | if (events & PCI_EXP_SLTSTA_CC) { |
586 | ctrl->cmd_busy = 0; | 595 | ctrl->cmd_busy = 0; |
587 | smp_mb(); | 596 | smp_mb(); |
588 | wake_up(&ctrl->queue); | 597 | wake_up(&ctrl->queue); |
598 | |||
599 | if (events == PCI_EXP_SLTSTA_CC) | ||
600 | return IRQ_HANDLED; | ||
601 | |||
602 | events &= ~PCI_EXP_SLTSTA_CC; | ||
603 | } | ||
604 | |||
605 | if (pdev->ignore_hotplug) { | ||
606 | ctrl_dbg(ctrl, "ignoring hotplug event %#06x\n", events); | ||
607 | return IRQ_HANDLED; | ||
589 | } | 608 | } |
590 | 609 | ||
591 | if (subordinate) { | 610 | /* Save pending events for consumption by IRQ thread. */ |
592 | list_for_each_entry(dev, &subordinate->devices, bus_list) { | 611 | atomic_or(events, &ctrl->pending_events); |
593 | if (dev->ignore_hotplug) { | 612 | return IRQ_WAKE_THREAD; |
594 | ctrl_dbg(ctrl, "ignoring hotplug event %#06x (%s requested no hotplug)\n", | 613 | } |
595 | events, pci_name(dev)); | 614 | |
596 | return IRQ_HANDLED; | 615 | static irqreturn_t pciehp_ist(int irq, void *dev_id) |
597 | } | 616 | { |
617 | struct controller *ctrl = (struct controller *)dev_id; | ||
618 | struct pci_dev *pdev = ctrl_dev(ctrl); | ||
619 | struct slot *slot = ctrl->slot; | ||
620 | irqreturn_t ret; | ||
621 | u32 events; | ||
622 | |||
623 | pci_config_pm_runtime_get(pdev); | ||
624 | |||
625 | /* rerun pciehp_isr() if the port was inaccessible on interrupt */ | ||
626 | if (atomic_fetch_and(~RERUN_ISR, &ctrl->pending_events) & RERUN_ISR) { | ||
627 | ret = pciehp_isr(irq, dev_id); | ||
628 | enable_irq(irq); | ||
629 | if (ret != IRQ_WAKE_THREAD) { | ||
630 | pci_config_pm_runtime_put(pdev); | ||
631 | return ret; | ||
598 | } | 632 | } |
599 | } | 633 | } |
600 | 634 | ||
635 | synchronize_hardirq(irq); | ||
636 | events = atomic_xchg(&ctrl->pending_events, 0); | ||
637 | if (!events) { | ||
638 | pci_config_pm_runtime_put(pdev); | ||
639 | return IRQ_NONE; | ||
640 | } | ||
641 | |||
601 | /* Check Attention Button Pressed */ | 642 | /* Check Attention Button Pressed */ |
602 | if (events & PCI_EXP_SLTSTA_ABP) { | 643 | if (events & PCI_EXP_SLTSTA_ABP) { |
603 | ctrl_info(ctrl, "Slot(%s): Attention button pressed\n", | 644 | ctrl_info(ctrl, "Slot(%s): Attention button pressed\n", |
604 | slot_name(slot)); | 645 | slot_name(slot)); |
605 | pciehp_queue_interrupt_event(slot, INT_BUTTON_PRESS); | 646 | pciehp_handle_button_press(slot); |
606 | } | 647 | } |
607 | 648 | ||
608 | /* | 649 | /* |
609 | * Check Link Status Changed at higher precedence than Presence | 650 | * Disable requests have higher priority than Presence Detect Changed |
610 | * Detect Changed. The PDS value may be set to "card present" from | 651 | * or Data Link Layer State Changed events. |
611 | * out-of-band detection, which may be in conflict with a Link Down | ||
612 | * and cause the wrong event to queue. | ||
613 | */ | 652 | */ |
614 | if (events & PCI_EXP_SLTSTA_DLLSC) { | 653 | down_read(&ctrl->reset_lock); |
615 | ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot), | 654 | if (events & DISABLE_SLOT) |
616 | link ? "Up" : "Down"); | 655 | pciehp_handle_disable_request(slot); |
617 | pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP : | 656 | else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) |
618 | INT_LINK_DOWN); | 657 | pciehp_handle_presence_or_link_change(slot, events); |
619 | } else if (events & PCI_EXP_SLTSTA_PDC) { | 658 | up_read(&ctrl->reset_lock); |
620 | present = !!(status & PCI_EXP_SLTSTA_PDS); | ||
621 | ctrl_info(ctrl, "Slot(%s): Card %spresent\n", slot_name(slot), | ||
622 | present ? "" : "not "); | ||
623 | pciehp_queue_interrupt_event(slot, present ? INT_PRESENCE_ON : | ||
624 | INT_PRESENCE_OFF); | ||
625 | } | ||
626 | 659 | ||
627 | /* Check Power Fault Detected */ | 660 | /* Check Power Fault Detected */ |
628 | if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) { | 661 | if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) { |
629 | ctrl->power_fault_detected = 1; | 662 | ctrl->power_fault_detected = 1; |
630 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(slot)); | 663 | ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(slot)); |
631 | pciehp_queue_interrupt_event(slot, INT_POWER_FAULT); | 664 | pciehp_set_attention_status(slot, 1); |
665 | pciehp_green_led_off(slot); | ||
632 | } | 666 | } |
633 | 667 | ||
668 | pci_config_pm_runtime_put(pdev); | ||
669 | wake_up(&ctrl->requester); | ||
634 | return IRQ_HANDLED; | 670 | return IRQ_HANDLED; |
635 | } | 671 | } |
636 | 672 | ||
637 | static irqreturn_t pcie_isr(int irq, void *dev_id) | 673 | static int pciehp_poll(void *data) |
638 | { | 674 | { |
639 | irqreturn_t rc, handled = IRQ_NONE; | 675 | struct controller *ctrl = data; |
640 | 676 | ||
641 | /* | 677 | schedule_timeout_idle(10 * HZ); /* start with 10 sec delay */ |
642 | * To guarantee that all interrupt events are serviced, we need to | 678 | |
643 | * re-inspect Slot Status register after clearing what is presumed | 679 | while (!kthread_should_stop()) { |
644 | * to be the last pending interrupt. | 680 | /* poll for interrupt events or user requests */ |
645 | */ | 681 | while (pciehp_isr(IRQ_NOTCONNECTED, ctrl) == IRQ_WAKE_THREAD || |
646 | do { | 682 | atomic_read(&ctrl->pending_events)) |
647 | rc = pciehp_isr(irq, dev_id); | 683 | pciehp_ist(IRQ_NOTCONNECTED, ctrl); |
648 | if (rc == IRQ_HANDLED) | ||
649 | handled = IRQ_HANDLED; | ||
650 | } while (rc == IRQ_HANDLED); | ||
651 | 684 | ||
652 | /* Return IRQ_HANDLED if we handled one or more events */ | 685 | if (pciehp_poll_time <= 0 || pciehp_poll_time > 60) |
653 | return handled; | 686 | pciehp_poll_time = 2; /* clamp to sane value */ |
687 | |||
688 | schedule_timeout_idle(pciehp_poll_time * HZ); | ||
689 | } | ||
690 | |||
691 | return 0; | ||
654 | } | 692 | } |
655 | 693 | ||
656 | static void pcie_enable_notification(struct controller *ctrl) | 694 | static void pcie_enable_notification(struct controller *ctrl) |
@@ -691,17 +729,6 @@ static void pcie_enable_notification(struct controller *ctrl) | |||
691 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd); | 729 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd); |
692 | } | 730 | } |
693 | 731 | ||
694 | void pcie_reenable_notification(struct controller *ctrl) | ||
695 | { | ||
696 | /* | ||
697 | * Clear both Presence and Data Link Layer Changed to make sure | ||
698 | * those events still fire after we have re-enabled them. | ||
699 | */ | ||
700 | pcie_capability_write_word(ctrl->pcie->port, PCI_EXP_SLTSTA, | ||
701 | PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC); | ||
702 | pcie_enable_notification(ctrl); | ||
703 | } | ||
704 | |||
705 | static void pcie_disable_notification(struct controller *ctrl) | 732 | static void pcie_disable_notification(struct controller *ctrl) |
706 | { | 733 | { |
707 | u16 mask; | 734 | u16 mask; |
@@ -715,6 +742,12 @@ static void pcie_disable_notification(struct controller *ctrl) | |||
715 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0); | 742 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0); |
716 | } | 743 | } |
717 | 744 | ||
745 | void pcie_clear_hotplug_events(struct controller *ctrl) | ||
746 | { | ||
747 | pcie_capability_write_word(ctrl_dev(ctrl), PCI_EXP_SLTSTA, | ||
748 | PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC); | ||
749 | } | ||
750 | |||
718 | /* | 751 | /* |
719 | * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary | 752 | * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary |
720 | * bus reset of the bridge, but at the same time we want to ensure that it is | 753 | * bus reset of the bridge, but at the same time we want to ensure that it is |
@@ -728,10 +761,13 @@ int pciehp_reset_slot(struct slot *slot, int probe) | |||
728 | struct controller *ctrl = slot->ctrl; | 761 | struct controller *ctrl = slot->ctrl; |
729 | struct pci_dev *pdev = ctrl_dev(ctrl); | 762 | struct pci_dev *pdev = ctrl_dev(ctrl); |
730 | u16 stat_mask = 0, ctrl_mask = 0; | 763 | u16 stat_mask = 0, ctrl_mask = 0; |
764 | int rc; | ||
731 | 765 | ||
732 | if (probe) | 766 | if (probe) |
733 | return 0; | 767 | return 0; |
734 | 768 | ||
769 | down_write(&ctrl->reset_lock); | ||
770 | |||
735 | if (!ATTN_BUTTN(ctrl)) { | 771 | if (!ATTN_BUTTN(ctrl)) { |
736 | ctrl_mask |= PCI_EXP_SLTCTL_PDCE; | 772 | ctrl_mask |= PCI_EXP_SLTCTL_PDCE; |
737 | stat_mask |= PCI_EXP_SLTSTA_PDC; | 773 | stat_mask |= PCI_EXP_SLTSTA_PDC; |
@@ -742,18 +778,16 @@ int pciehp_reset_slot(struct slot *slot, int probe) | |||
742 | pcie_write_cmd(ctrl, 0, ctrl_mask); | 778 | pcie_write_cmd(ctrl, 0, ctrl_mask); |
743 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, | 779 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, |
744 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0); | 780 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0); |
745 | if (pciehp_poll_mode) | ||
746 | del_timer_sync(&ctrl->poll_timer); | ||
747 | 781 | ||
748 | pci_reset_bridge_secondary_bus(ctrl->pcie->port); | 782 | rc = pci_bridge_secondary_bus_reset(ctrl->pcie->port); |
749 | 783 | ||
750 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask); | 784 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask); |
751 | pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask); | 785 | pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask); |
752 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, | 786 | ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, |
753 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask); | 787 | pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask); |
754 | if (pciehp_poll_mode) | 788 | |
755 | int_poll_timeout(&ctrl->poll_timer); | 789 | up_write(&ctrl->reset_lock); |
756 | return 0; | 790 | return rc; |
757 | } | 791 | } |
758 | 792 | ||
759 | int pcie_init_notification(struct controller *ctrl) | 793 | int pcie_init_notification(struct controller *ctrl) |
@@ -765,7 +799,7 @@ int pcie_init_notification(struct controller *ctrl) | |||
765 | return 0; | 799 | return 0; |
766 | } | 800 | } |
767 | 801 | ||
768 | static void pcie_shutdown_notification(struct controller *ctrl) | 802 | void pcie_shutdown_notification(struct controller *ctrl) |
769 | { | 803 | { |
770 | if (ctrl->notification_enabled) { | 804 | if (ctrl->notification_enabled) { |
771 | pcie_disable_notification(ctrl); | 805 | pcie_disable_notification(ctrl); |
@@ -776,32 +810,29 @@ static void pcie_shutdown_notification(struct controller *ctrl) | |||
776 | 810 | ||
777 | static int pcie_init_slot(struct controller *ctrl) | 811 | static int pcie_init_slot(struct controller *ctrl) |
778 | { | 812 | { |
813 | struct pci_bus *subordinate = ctrl_dev(ctrl)->subordinate; | ||
779 | struct slot *slot; | 814 | struct slot *slot; |
780 | 815 | ||
781 | slot = kzalloc(sizeof(*slot), GFP_KERNEL); | 816 | slot = kzalloc(sizeof(*slot), GFP_KERNEL); |
782 | if (!slot) | 817 | if (!slot) |
783 | return -ENOMEM; | 818 | return -ENOMEM; |
784 | 819 | ||
785 | slot->wq = alloc_ordered_workqueue("pciehp-%u", 0, PSN(ctrl)); | 820 | down_read(&pci_bus_sem); |
786 | if (!slot->wq) | 821 | slot->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE; |
787 | goto abort; | 822 | up_read(&pci_bus_sem); |
788 | 823 | ||
789 | slot->ctrl = ctrl; | 824 | slot->ctrl = ctrl; |
790 | mutex_init(&slot->lock); | 825 | mutex_init(&slot->lock); |
791 | mutex_init(&slot->hotplug_lock); | ||
792 | INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work); | 826 | INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work); |
793 | ctrl->slot = slot; | 827 | ctrl->slot = slot; |
794 | return 0; | 828 | return 0; |
795 | abort: | ||
796 | kfree(slot); | ||
797 | return -ENOMEM; | ||
798 | } | 829 | } |
799 | 830 | ||
800 | static void pcie_cleanup_slot(struct controller *ctrl) | 831 | static void pcie_cleanup_slot(struct controller *ctrl) |
801 | { | 832 | { |
802 | struct slot *slot = ctrl->slot; | 833 | struct slot *slot = ctrl->slot; |
803 | cancel_delayed_work(&slot->work); | 834 | |
804 | destroy_workqueue(slot->wq); | 835 | cancel_delayed_work_sync(&slot->work); |
805 | kfree(slot); | 836 | kfree(slot); |
806 | } | 837 | } |
807 | 838 | ||
@@ -826,6 +857,7 @@ struct controller *pcie_init(struct pcie_device *dev) | |||
826 | { | 857 | { |
827 | struct controller *ctrl; | 858 | struct controller *ctrl; |
828 | u32 slot_cap, link_cap; | 859 | u32 slot_cap, link_cap; |
860 | u8 occupied, poweron; | ||
829 | struct pci_dev *pdev = dev->port; | 861 | struct pci_dev *pdev = dev->port; |
830 | 862 | ||
831 | ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL); | 863 | ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL); |
@@ -847,6 +879,8 @@ struct controller *pcie_init(struct pcie_device *dev) | |||
847 | 879 | ||
848 | ctrl->slot_cap = slot_cap; | 880 | ctrl->slot_cap = slot_cap; |
849 | mutex_init(&ctrl->ctrl_lock); | 881 | mutex_init(&ctrl->ctrl_lock); |
882 | init_rwsem(&ctrl->reset_lock); | ||
883 | init_waitqueue_head(&ctrl->requester); | ||
850 | init_waitqueue_head(&ctrl->queue); | 884 | init_waitqueue_head(&ctrl->queue); |
851 | dbg_ctrl(ctrl); | 885 | dbg_ctrl(ctrl); |
852 | 886 | ||
@@ -855,16 +889,11 @@ struct controller *pcie_init(struct pcie_device *dev) | |||
855 | if (link_cap & PCI_EXP_LNKCAP_DLLLARC) | 889 | if (link_cap & PCI_EXP_LNKCAP_DLLLARC) |
856 | ctrl->link_active_reporting = 1; | 890 | ctrl->link_active_reporting = 1; |
857 | 891 | ||
858 | /* | 892 | /* Clear all remaining event bits in Slot Status register. */ |
859 | * Clear all remaining event bits in Slot Status register except | ||
860 | * Presence Detect Changed. We want to make sure possible | ||
861 | * hotplug event is triggered when the interrupt is unmasked so | ||
862 | * that we don't lose that event. | ||
863 | */ | ||
864 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, | 893 | pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, |
865 | PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | | 894 | PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | |
866 | PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC | | 895 | PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC | |
867 | PCI_EXP_SLTSTA_DLLSC); | 896 | PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC); |
868 | 897 | ||
869 | ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c%s\n", | 898 | ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c%s\n", |
870 | (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19, | 899 | (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19, |
@@ -883,6 +912,19 @@ struct controller *pcie_init(struct pcie_device *dev) | |||
883 | if (pcie_init_slot(ctrl)) | 912 | if (pcie_init_slot(ctrl)) |
884 | goto abort_ctrl; | 913 | goto abort_ctrl; |
885 | 914 | ||
915 | /* | ||
916 | * If empty slot's power status is on, turn power off. The IRQ isn't | ||
917 | * requested yet, so avoid triggering a notification with this command. | ||
918 | */ | ||
919 | if (POWER_CTRL(ctrl)) { | ||
920 | pciehp_get_adapter_status(ctrl->slot, &occupied); | ||
921 | pciehp_get_power_status(ctrl->slot, &poweron); | ||
922 | if (!occupied && poweron) { | ||
923 | pcie_disable_notification(ctrl); | ||
924 | pciehp_power_off_slot(ctrl->slot); | ||
925 | } | ||
926 | } | ||
927 | |||
886 | return ctrl; | 928 | return ctrl; |
887 | 929 | ||
888 | abort_ctrl: | 930 | abort_ctrl: |
@@ -893,7 +935,6 @@ abort: | |||
893 | 935 | ||
894 | void pciehp_release_ctrl(struct controller *ctrl) | 936 | void pciehp_release_ctrl(struct controller *ctrl) |
895 | { | 937 | { |
896 | pcie_shutdown_notification(ctrl); | ||
897 | pcie_cleanup_slot(ctrl); | 938 | pcie_cleanup_slot(ctrl); |
898 | kfree(ctrl); | 939 | kfree(ctrl); |
899 | } | 940 | } |
diff --git a/drivers/pci/hotplug/pciehp_pci.c b/drivers/pci/hotplug/pciehp_pci.c index 3f518dea856d..5c58c22e0c08 100644 --- a/drivers/pci/hotplug/pciehp_pci.c +++ b/drivers/pci/hotplug/pciehp_pci.c | |||
@@ -62,9 +62,8 @@ int pciehp_configure_device(struct slot *p_slot) | |||
62 | return ret; | 62 | return ret; |
63 | } | 63 | } |
64 | 64 | ||
65 | int pciehp_unconfigure_device(struct slot *p_slot) | 65 | void pciehp_unconfigure_device(struct slot *p_slot) |
66 | { | 66 | { |
67 | int rc = 0; | ||
68 | u8 presence = 0; | 67 | u8 presence = 0; |
69 | struct pci_dev *dev, *temp; | 68 | struct pci_dev *dev, *temp; |
70 | struct pci_bus *parent = p_slot->ctrl->pcie->port->subordinate; | 69 | struct pci_bus *parent = p_slot->ctrl->pcie->port->subordinate; |
@@ -107,5 +106,4 @@ int pciehp_unconfigure_device(struct slot *p_slot) | |||
107 | } | 106 | } |
108 | 107 | ||
109 | pci_unlock_rescan_remove(); | 108 | pci_unlock_rescan_remove(); |
110 | return rc; | ||
111 | } | 109 | } |
diff --git a/drivers/pci/hotplug/pcihp_skeleton.c b/drivers/pci/hotplug/pcihp_skeleton.c deleted file mode 100644 index c19694a04d2c..000000000000 --- a/drivers/pci/hotplug/pcihp_skeleton.c +++ /dev/null | |||
@@ -1,348 +0,0 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0+ | ||
2 | /* | ||
3 | * PCI Hot Plug Controller Skeleton Driver - 0.3 | ||
4 | * | ||
5 | * Copyright (C) 2001,2003 Greg Kroah-Hartman (greg@kroah.com) | ||
6 | * Copyright (C) 2001,2003 IBM Corp. | ||
7 | * | ||
8 | * All rights reserved. | ||
9 | * | ||
10 | * This driver is to be used as a skeleton driver to show how to interface | ||
11 | * with the pci hotplug core easily. | ||
12 | * | ||
13 | * Send feedback to <greg@kroah.com> | ||
14 | * | ||
15 | */ | ||
16 | |||
17 | #include <linux/module.h> | ||
18 | #include <linux/moduleparam.h> | ||
19 | #include <linux/kernel.h> | ||
20 | #include <linux/slab.h> | ||
21 | #include <linux/pci.h> | ||
22 | #include <linux/pci_hotplug.h> | ||
23 | #include <linux/init.h> | ||
24 | |||
25 | #define SLOT_NAME_SIZE 10 | ||
26 | struct slot { | ||
27 | u8 number; | ||
28 | struct hotplug_slot *hotplug_slot; | ||
29 | struct list_head slot_list; | ||
30 | char name[SLOT_NAME_SIZE]; | ||
31 | }; | ||
32 | |||
33 | static LIST_HEAD(slot_list); | ||
34 | |||
35 | #define MY_NAME "pcihp_skeleton" | ||
36 | |||
37 | #define dbg(format, arg...) \ | ||
38 | do { \ | ||
39 | if (debug) \ | ||
40 | printk(KERN_DEBUG "%s: " format "\n", \ | ||
41 | MY_NAME, ## arg); \ | ||
42 | } while (0) | ||
43 | #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME, ## arg) | ||
44 | #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME, ## arg) | ||
45 | #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME, ## arg) | ||
46 | |||
47 | /* local variables */ | ||
48 | static bool debug; | ||
49 | static int num_slots; | ||
50 | |||
51 | #define DRIVER_VERSION "0.3" | ||
52 | #define DRIVER_AUTHOR "Greg Kroah-Hartman <greg@kroah.com>" | ||
53 | #define DRIVER_DESC "Hot Plug PCI Controller Skeleton Driver" | ||
54 | |||
55 | MODULE_AUTHOR(DRIVER_AUTHOR); | ||
56 | MODULE_DESCRIPTION(DRIVER_DESC); | ||
57 | MODULE_LICENSE("GPL"); | ||
58 | module_param(debug, bool, 0644); | ||
59 | MODULE_PARM_DESC(debug, "Debugging mode enabled or not"); | ||
60 | |||
61 | static int enable_slot(struct hotplug_slot *slot); | ||
62 | static int disable_slot(struct hotplug_slot *slot); | ||
63 | static int set_attention_status(struct hotplug_slot *slot, u8 value); | ||
64 | static int hardware_test(struct hotplug_slot *slot, u32 value); | ||
65 | static int get_power_status(struct hotplug_slot *slot, u8 *value); | ||
66 | static int get_attention_status(struct hotplug_slot *slot, u8 *value); | ||
67 | static int get_latch_status(struct hotplug_slot *slot, u8 *value); | ||
68 | static int get_adapter_status(struct hotplug_slot *slot, u8 *value); | ||
69 | |||
70 | static struct hotplug_slot_ops skel_hotplug_slot_ops = { | ||
71 | .enable_slot = enable_slot, | ||
72 | .disable_slot = disable_slot, | ||
73 | .set_attention_status = set_attention_status, | ||
74 | .hardware_test = hardware_test, | ||
75 | .get_power_status = get_power_status, | ||
76 | .get_attention_status = get_attention_status, | ||
77 | .get_latch_status = get_latch_status, | ||
78 | .get_adapter_status = get_adapter_status, | ||
79 | }; | ||
80 | |||
81 | static int enable_slot(struct hotplug_slot *hotplug_slot) | ||
82 | { | ||
83 | struct slot *slot = hotplug_slot->private; | ||
84 | int retval = 0; | ||
85 | |||
86 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
87 | |||
88 | /* | ||
89 | * Fill in code here to enable the specified slot | ||
90 | */ | ||
91 | |||
92 | return retval; | ||
93 | } | ||
94 | |||
95 | static int disable_slot(struct hotplug_slot *hotplug_slot) | ||
96 | { | ||
97 | struct slot *slot = hotplug_slot->private; | ||
98 | int retval = 0; | ||
99 | |||
100 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
101 | |||
102 | /* | ||
103 | * Fill in code here to disable the specified slot | ||
104 | */ | ||
105 | |||
106 | return retval; | ||
107 | } | ||
108 | |||
109 | static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) | ||
110 | { | ||
111 | struct slot *slot = hotplug_slot->private; | ||
112 | int retval = 0; | ||
113 | |||
114 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
115 | |||
116 | switch (status) { | ||
117 | case 0: | ||
118 | /* | ||
119 | * Fill in code here to turn light off | ||
120 | */ | ||
121 | break; | ||
122 | |||
123 | case 1: | ||
124 | default: | ||
125 | /* | ||
126 | * Fill in code here to turn light on | ||
127 | */ | ||
128 | break; | ||
129 | } | ||
130 | |||
131 | return retval; | ||
132 | } | ||
133 | |||
134 | static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value) | ||
135 | { | ||
136 | struct slot *slot = hotplug_slot->private; | ||
137 | int retval = 0; | ||
138 | |||
139 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
140 | |||
141 | switch (value) { | ||
142 | case 0: | ||
143 | /* Specify a test here */ | ||
144 | break; | ||
145 | case 1: | ||
146 | /* Specify another test here */ | ||
147 | break; | ||
148 | } | ||
149 | |||
150 | return retval; | ||
151 | } | ||
152 | |||
153 | static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) | ||
154 | { | ||
155 | struct slot *slot = hotplug_slot->private; | ||
156 | int retval = 0; | ||
157 | |||
158 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
159 | |||
160 | /* | ||
161 | * Fill in logic to get the current power status of the specific | ||
162 | * slot and store it in the *value location. | ||
163 | */ | ||
164 | |||
165 | return retval; | ||
166 | } | ||
167 | |||
168 | static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) | ||
169 | { | ||
170 | struct slot *slot = hotplug_slot->private; | ||
171 | int retval = 0; | ||
172 | |||
173 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
174 | |||
175 | /* | ||
176 | * Fill in logic to get the current attention status of the specific | ||
177 | * slot and store it in the *value location. | ||
178 | */ | ||
179 | |||
180 | return retval; | ||
181 | } | ||
182 | |||
183 | static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) | ||
184 | { | ||
185 | struct slot *slot = hotplug_slot->private; | ||
186 | int retval = 0; | ||
187 | |||
188 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
189 | |||
190 | /* | ||
191 | * Fill in logic to get the current latch status of the specific | ||
192 | * slot and store it in the *value location. | ||
193 | */ | ||
194 | |||
195 | return retval; | ||
196 | } | ||
197 | |||
198 | static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | ||
199 | { | ||
200 | struct slot *slot = hotplug_slot->private; | ||
201 | int retval = 0; | ||
202 | |||
203 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
204 | |||
205 | /* | ||
206 | * Fill in logic to get the current adapter status of the specific | ||
207 | * slot and store it in the *value location. | ||
208 | */ | ||
209 | |||
210 | return retval; | ||
211 | } | ||
212 | |||
213 | static void release_slot(struct hotplug_slot *hotplug_slot) | ||
214 | { | ||
215 | struct slot *slot = hotplug_slot->private; | ||
216 | |||
217 | dbg("%s - physical_slot = %s\n", __func__, hotplug_slot->name); | ||
218 | kfree(slot->hotplug_slot->info); | ||
219 | kfree(slot->hotplug_slot); | ||
220 | kfree(slot); | ||
221 | } | ||
222 | |||
223 | static void make_slot_name(struct slot *slot) | ||
224 | { | ||
225 | /* | ||
226 | * Stupid way to make a filename out of the slot name. | ||
227 | * replace this if your hardware provides a better way to name slots. | ||
228 | */ | ||
229 | snprintf(slot->hotplug_slot->name, SLOT_NAME_SIZE, "%d", slot->number); | ||
230 | } | ||
231 | |||
232 | /** | ||
233 | * init_slots - initialize 'struct slot' structures for each slot | ||
234 | * | ||
235 | */ | ||
236 | static int __init init_slots(void) | ||
237 | { | ||
238 | struct slot *slot; | ||
239 | struct hotplug_slot *hotplug_slot; | ||
240 | struct hotplug_slot_info *info; | ||
241 | int retval; | ||
242 | int i; | ||
243 | |||
244 | /* | ||
245 | * Create a structure for each slot, and register that slot | ||
246 | * with the pci_hotplug subsystem. | ||
247 | */ | ||
248 | for (i = 0; i < num_slots; ++i) { | ||
249 | slot = kzalloc(sizeof(*slot), GFP_KERNEL); | ||
250 | if (!slot) { | ||
251 | retval = -ENOMEM; | ||
252 | goto error; | ||
253 | } | ||
254 | |||
255 | hotplug_slot = kzalloc(sizeof(*hotplug_slot), GFP_KERNEL); | ||
256 | if (!hotplug_slot) { | ||
257 | retval = -ENOMEM; | ||
258 | goto error_slot; | ||
259 | } | ||
260 | slot->hotplug_slot = hotplug_slot; | ||
261 | |||
262 | info = kzalloc(sizeof(*info), GFP_KERNEL); | ||
263 | if (!info) { | ||
264 | retval = -ENOMEM; | ||
265 | goto error_hpslot; | ||
266 | } | ||
267 | hotplug_slot->info = info; | ||
268 | |||
269 | slot->number = i; | ||
270 | |||
271 | hotplug_slot->name = slot->name; | ||
272 | hotplug_slot->private = slot; | ||
273 | hotplug_slot->release = &release_slot; | ||
274 | make_slot_name(slot); | ||
275 | hotplug_slot->ops = &skel_hotplug_slot_ops; | ||
276 | |||
277 | /* | ||
278 | * Initialize the slot info structure with some known | ||
279 | * good values. | ||
280 | */ | ||
281 | get_power_status(hotplug_slot, &info->power_status); | ||
282 | get_attention_status(hotplug_slot, &info->attention_status); | ||
283 | get_latch_status(hotplug_slot, &info->latch_status); | ||
284 | get_adapter_status(hotplug_slot, &info->adapter_status); | ||
285 | |||
286 | dbg("registering slot %d\n", i); | ||
287 | retval = pci_hp_register(slot->hotplug_slot); | ||
288 | if (retval) { | ||
289 | err("pci_hp_register failed with error %d\n", retval); | ||
290 | goto error_info; | ||
291 | } | ||
292 | |||
293 | /* add slot to our internal list */ | ||
294 | list_add(&slot->slot_list, &slot_list); | ||
295 | } | ||
296 | |||
297 | return 0; | ||
298 | error_info: | ||
299 | kfree(info); | ||
300 | error_hpslot: | ||
301 | kfree(hotplug_slot); | ||
302 | error_slot: | ||
303 | kfree(slot); | ||
304 | error: | ||
305 | return retval; | ||
306 | } | ||
307 | |||
308 | static void __exit cleanup_slots(void) | ||
309 | { | ||
310 | struct slot *slot, *next; | ||
311 | |||
312 | /* | ||
313 | * Unregister all of our slots with the pci_hotplug subsystem. | ||
314 | * Memory will be freed in release_slot() callback after slot's | ||
315 | * lifespan is finished. | ||
316 | */ | ||
317 | list_for_each_entry_safe(slot, next, &slot_list, slot_list) { | ||
318 | list_del(&slot->slot_list); | ||
319 | pci_hp_deregister(slot->hotplug_slot); | ||
320 | } | ||
321 | } | ||
322 | |||
323 | static int __init pcihp_skel_init(void) | ||
324 | { | ||
325 | int retval; | ||
326 | |||
327 | info(DRIVER_DESC " version: " DRIVER_VERSION "\n"); | ||
328 | /* | ||
329 | * Do specific initialization stuff for your driver here | ||
330 | * like initializing your controller hardware (if any) and | ||
331 | * determining the number of slots you have in the system | ||
332 | * right now. | ||
333 | */ | ||
334 | num_slots = 5; | ||
335 | |||
336 | return init_slots(); | ||
337 | } | ||
338 | |||
339 | static void __exit pcihp_skel_exit(void) | ||
340 | { | ||
341 | /* | ||
342 | * Clean everything up. | ||
343 | */ | ||
344 | cleanup_slots(); | ||
345 | } | ||
346 | |||
347 | module_init(pcihp_skel_init); | ||
348 | module_exit(pcihp_skel_exit); | ||
diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c index 6c2e8d7307c6..3276a5e4c430 100644 --- a/drivers/pci/hotplug/pnv_php.c +++ b/drivers/pci/hotplug/pnv_php.c | |||
@@ -538,9 +538,8 @@ static struct hotplug_slot_ops php_slot_ops = { | |||
538 | .disable_slot = pnv_php_disable_slot, | 538 | .disable_slot = pnv_php_disable_slot, |
539 | }; | 539 | }; |
540 | 540 | ||
541 | static void pnv_php_release(struct hotplug_slot *slot) | 541 | static void pnv_php_release(struct pnv_php_slot *php_slot) |
542 | { | 542 | { |
543 | struct pnv_php_slot *php_slot = slot->private; | ||
544 | unsigned long flags; | 543 | unsigned long flags; |
545 | 544 | ||
546 | /* Remove from global or child list */ | 545 | /* Remove from global or child list */ |
@@ -596,7 +595,6 @@ static struct pnv_php_slot *pnv_php_alloc_slot(struct device_node *dn) | |||
596 | php_slot->power_state_check = false; | 595 | php_slot->power_state_check = false; |
597 | php_slot->slot.ops = &php_slot_ops; | 596 | php_slot->slot.ops = &php_slot_ops; |
598 | php_slot->slot.info = &php_slot->slot_info; | 597 | php_slot->slot.info = &php_slot->slot_info; |
599 | php_slot->slot.release = pnv_php_release; | ||
600 | php_slot->slot.private = php_slot; | 598 | php_slot->slot.private = php_slot; |
601 | 599 | ||
602 | INIT_LIST_HEAD(&php_slot->children); | 600 | INIT_LIST_HEAD(&php_slot->children); |
@@ -924,6 +922,7 @@ static void pnv_php_unregister_one(struct device_node *dn) | |||
924 | 922 | ||
925 | php_slot->state = PNV_PHP_STATE_OFFLINE; | 923 | php_slot->state = PNV_PHP_STATE_OFFLINE; |
926 | pci_hp_deregister(&php_slot->slot); | 924 | pci_hp_deregister(&php_slot->slot); |
925 | pnv_php_release(php_slot); | ||
927 | pnv_php_put_slot(php_slot); | 926 | pnv_php_put_slot(php_slot); |
928 | } | 927 | } |
929 | 928 | ||
diff --git a/drivers/pci/hotplug/rpaphp_core.c b/drivers/pci/hotplug/rpaphp_core.c index fb5e0845429d..857c358b727b 100644 --- a/drivers/pci/hotplug/rpaphp_core.c +++ b/drivers/pci/hotplug/rpaphp_core.c | |||
@@ -404,13 +404,13 @@ static void __exit cleanup_slots(void) | |||
404 | /* | 404 | /* |
405 | * Unregister all of our slots with the pci_hotplug subsystem, | 405 | * Unregister all of our slots with the pci_hotplug subsystem, |
406 | * and free up all memory that we had allocated. | 406 | * and free up all memory that we had allocated. |
407 | * memory will be freed in release_slot callback. | ||
408 | */ | 407 | */ |
409 | 408 | ||
410 | list_for_each_entry_safe(slot, next, &rpaphp_slot_head, | 409 | list_for_each_entry_safe(slot, next, &rpaphp_slot_head, |
411 | rpaphp_slot_list) { | 410 | rpaphp_slot_list) { |
412 | list_del(&slot->rpaphp_slot_list); | 411 | list_del(&slot->rpaphp_slot_list); |
413 | pci_hp_deregister(slot->hotplug_slot); | 412 | pci_hp_deregister(slot->hotplug_slot); |
413 | dealloc_slot_struct(slot); | ||
414 | } | 414 | } |
415 | return; | 415 | return; |
416 | } | 416 | } |
diff --git a/drivers/pci/hotplug/rpaphp_slot.c b/drivers/pci/hotplug/rpaphp_slot.c index 3840a2075e6a..b916c8e4372d 100644 --- a/drivers/pci/hotplug/rpaphp_slot.c +++ b/drivers/pci/hotplug/rpaphp_slot.c | |||
@@ -19,12 +19,6 @@ | |||
19 | #include "rpaphp.h" | 19 | #include "rpaphp.h" |
20 | 20 | ||
21 | /* free up the memory used by a slot */ | 21 | /* free up the memory used by a slot */ |
22 | static void rpaphp_release_slot(struct hotplug_slot *hotplug_slot) | ||
23 | { | ||
24 | struct slot *slot = (struct slot *) hotplug_slot->private; | ||
25 | dealloc_slot_struct(slot); | ||
26 | } | ||
27 | |||
28 | void dealloc_slot_struct(struct slot *slot) | 22 | void dealloc_slot_struct(struct slot *slot) |
29 | { | 23 | { |
30 | kfree(slot->hotplug_slot->info); | 24 | kfree(slot->hotplug_slot->info); |
@@ -56,7 +50,6 @@ struct slot *alloc_slot_struct(struct device_node *dn, | |||
56 | slot->power_domain = power_domain; | 50 | slot->power_domain = power_domain; |
57 | slot->hotplug_slot->private = slot; | 51 | slot->hotplug_slot->private = slot; |
58 | slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops; | 52 | slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops; |
59 | slot->hotplug_slot->release = &rpaphp_release_slot; | ||
60 | 53 | ||
61 | return (slot); | 54 | return (slot); |
62 | 55 | ||
@@ -90,10 +83,8 @@ int rpaphp_deregister_slot(struct slot *slot) | |||
90 | __func__, slot->name); | 83 | __func__, slot->name); |
91 | 84 | ||
92 | list_del(&slot->rpaphp_slot_list); | 85 | list_del(&slot->rpaphp_slot_list); |
93 | 86 | pci_hp_deregister(php_slot); | |
94 | retval = pci_hp_deregister(php_slot); | 87 | dealloc_slot_struct(slot); |
95 | if (retval) | ||
96 | err("Problem unregistering a slot %s\n", slot->name); | ||
97 | 88 | ||
98 | dbg("%s - Exit: rc[%d]\n", __func__, retval); | 89 | dbg("%s - Exit: rc[%d]\n", __func__, retval); |
99 | return retval; | 90 | return retval; |
diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c index ffdc2977395d..93b5341d282c 100644 --- a/drivers/pci/hotplug/s390_pci_hpc.c +++ b/drivers/pci/hotplug/s390_pci_hpc.c | |||
@@ -130,15 +130,6 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
130 | return 0; | 130 | return 0; |
131 | } | 131 | } |
132 | 132 | ||
133 | static void release_slot(struct hotplug_slot *hotplug_slot) | ||
134 | { | ||
135 | struct slot *slot = hotplug_slot->private; | ||
136 | |||
137 | kfree(slot->hotplug_slot->info); | ||
138 | kfree(slot->hotplug_slot); | ||
139 | kfree(slot); | ||
140 | } | ||
141 | |||
142 | static struct hotplug_slot_ops s390_hotplug_slot_ops = { | 133 | static struct hotplug_slot_ops s390_hotplug_slot_ops = { |
143 | .enable_slot = enable_slot, | 134 | .enable_slot = enable_slot, |
144 | .disable_slot = disable_slot, | 135 | .disable_slot = disable_slot, |
@@ -175,7 +166,6 @@ int zpci_init_slot(struct zpci_dev *zdev) | |||
175 | hotplug_slot->info = info; | 166 | hotplug_slot->info = info; |
176 | 167 | ||
177 | hotplug_slot->ops = &s390_hotplug_slot_ops; | 168 | hotplug_slot->ops = &s390_hotplug_slot_ops; |
178 | hotplug_slot->release = &release_slot; | ||
179 | 169 | ||
180 | get_power_status(hotplug_slot, &info->power_status); | 170 | get_power_status(hotplug_slot, &info->power_status); |
181 | get_adapter_status(hotplug_slot, &info->adapter_status); | 171 | get_adapter_status(hotplug_slot, &info->adapter_status); |
@@ -209,5 +199,8 @@ void zpci_exit_slot(struct zpci_dev *zdev) | |||
209 | continue; | 199 | continue; |
210 | list_del(&slot->slot_list); | 200 | list_del(&slot->slot_list); |
211 | pci_hp_deregister(slot->hotplug_slot); | 201 | pci_hp_deregister(slot->hotplug_slot); |
202 | kfree(slot->hotplug_slot->info); | ||
203 | kfree(slot->hotplug_slot); | ||
204 | kfree(slot); | ||
212 | } | 205 | } |
213 | } | 206 | } |
diff --git a/drivers/pci/hotplug/sgi_hotplug.c b/drivers/pci/hotplug/sgi_hotplug.c index 78b6bdbb3a39..babd23409f61 100644 --- a/drivers/pci/hotplug/sgi_hotplug.c +++ b/drivers/pci/hotplug/sgi_hotplug.c | |||
@@ -628,7 +628,6 @@ static int sn_hotplug_slot_register(struct pci_bus *pci_bus) | |||
628 | goto alloc_err; | 628 | goto alloc_err; |
629 | } | 629 | } |
630 | bss_hotplug_slot->ops = &sn_hotplug_slot_ops; | 630 | bss_hotplug_slot->ops = &sn_hotplug_slot_ops; |
631 | bss_hotplug_slot->release = &sn_release_slot; | ||
632 | 631 | ||
633 | rc = pci_hp_register(bss_hotplug_slot, pci_bus, device, name); | 632 | rc = pci_hp_register(bss_hotplug_slot, pci_bus, device, name); |
634 | if (rc) | 633 | if (rc) |
@@ -656,8 +655,10 @@ alloc_err: | |||
656 | sn_release_slot(bss_hotplug_slot); | 655 | sn_release_slot(bss_hotplug_slot); |
657 | 656 | ||
658 | /* destroy anything else on the list */ | 657 | /* destroy anything else on the list */ |
659 | while ((bss_hotplug_slot = sn_hp_destroy())) | 658 | while ((bss_hotplug_slot = sn_hp_destroy())) { |
660 | pci_hp_deregister(bss_hotplug_slot); | 659 | pci_hp_deregister(bss_hotplug_slot); |
660 | sn_release_slot(bss_hotplug_slot); | ||
661 | } | ||
661 | 662 | ||
662 | return rc; | 663 | return rc; |
663 | } | 664 | } |
@@ -703,8 +704,10 @@ static void __exit sn_pci_hotplug_exit(void) | |||
703 | { | 704 | { |
704 | struct hotplug_slot *bss_hotplug_slot; | 705 | struct hotplug_slot *bss_hotplug_slot; |
705 | 706 | ||
706 | while ((bss_hotplug_slot = sn_hp_destroy())) | 707 | while ((bss_hotplug_slot = sn_hp_destroy())) { |
707 | pci_hp_deregister(bss_hotplug_slot); | 708 | pci_hp_deregister(bss_hotplug_slot); |
709 | sn_release_slot(bss_hotplug_slot); | ||
710 | } | ||
708 | 711 | ||
709 | if (!list_empty(&sn_hp_list)) | 712 | if (!list_empty(&sn_hp_list)) |
710 | printk(KERN_ERR "%s: internal list is not empty\n", __FILE__); | 713 | printk(KERN_ERR "%s: internal list is not empty\n", __FILE__); |
diff --git a/drivers/pci/hotplug/shpchp_core.c b/drivers/pci/hotplug/shpchp_core.c index e91be287f292..97cee23f3d51 100644 --- a/drivers/pci/hotplug/shpchp_core.c +++ b/drivers/pci/hotplug/shpchp_core.c | |||
@@ -61,22 +61,6 @@ static struct hotplug_slot_ops shpchp_hotplug_slot_ops = { | |||
61 | .get_adapter_status = get_adapter_status, | 61 | .get_adapter_status = get_adapter_status, |
62 | }; | 62 | }; |
63 | 63 | ||
64 | /** | ||
65 | * release_slot - free up the memory used by a slot | ||
66 | * @hotplug_slot: slot to free | ||
67 | */ | ||
68 | static void release_slot(struct hotplug_slot *hotplug_slot) | ||
69 | { | ||
70 | struct slot *slot = hotplug_slot->private; | ||
71 | |||
72 | ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", | ||
73 | __func__, slot_name(slot)); | ||
74 | |||
75 | kfree(slot->hotplug_slot->info); | ||
76 | kfree(slot->hotplug_slot); | ||
77 | kfree(slot); | ||
78 | } | ||
79 | |||
80 | static int init_slots(struct controller *ctrl) | 64 | static int init_slots(struct controller *ctrl) |
81 | { | 65 | { |
82 | struct slot *slot; | 66 | struct slot *slot; |
@@ -125,7 +109,6 @@ static int init_slots(struct controller *ctrl) | |||
125 | 109 | ||
126 | /* register this slot with the hotplug pci core */ | 110 | /* register this slot with the hotplug pci core */ |
127 | hotplug_slot->private = slot; | 111 | hotplug_slot->private = slot; |
128 | hotplug_slot->release = &release_slot; | ||
129 | snprintf(name, SLOT_NAME_SIZE, "%d", slot->number); | 112 | snprintf(name, SLOT_NAME_SIZE, "%d", slot->number); |
130 | hotplug_slot->ops = &shpchp_hotplug_slot_ops; | 113 | hotplug_slot->ops = &shpchp_hotplug_slot_ops; |
131 | 114 | ||
@@ -171,6 +154,9 @@ void cleanup_slots(struct controller *ctrl) | |||
171 | cancel_delayed_work(&slot->work); | 154 | cancel_delayed_work(&slot->work); |
172 | destroy_workqueue(slot->wq); | 155 | destroy_workqueue(slot->wq); |
173 | pci_hp_deregister(slot->hotplug_slot); | 156 | pci_hp_deregister(slot->hotplug_slot); |
157 | kfree(slot->hotplug_slot->info); | ||
158 | kfree(slot->hotplug_slot); | ||
159 | kfree(slot); | ||
174 | } | 160 | } |
175 | } | 161 | } |
176 | 162 | ||
@@ -270,11 +256,30 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) | |||
270 | return 0; | 256 | return 0; |
271 | } | 257 | } |
272 | 258 | ||
259 | static bool shpc_capable(struct pci_dev *bridge) | ||
260 | { | ||
261 | /* | ||
262 | * It is assumed that AMD GOLAM chips support SHPC but they do not | ||
263 | * have SHPC capability. | ||
264 | */ | ||
265 | if (bridge->vendor == PCI_VENDOR_ID_AMD && | ||
266 | bridge->device == PCI_DEVICE_ID_AMD_GOLAM_7450) | ||
267 | return true; | ||
268 | |||
269 | if (pci_find_capability(bridge, PCI_CAP_ID_SHPC)) | ||
270 | return true; | ||
271 | |||
272 | return false; | ||
273 | } | ||
274 | |||
273 | static int shpc_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | 275 | static int shpc_probe(struct pci_dev *pdev, const struct pci_device_id *ent) |
274 | { | 276 | { |
275 | int rc; | 277 | int rc; |
276 | struct controller *ctrl; | 278 | struct controller *ctrl; |
277 | 279 | ||
280 | if (!shpc_capable(pdev)) | ||
281 | return -ENODEV; | ||
282 | |||
278 | if (acpi_get_hp_hw_control_from_firmware(pdev)) | 283 | if (acpi_get_hp_hw_control_from_firmware(pdev)) |
279 | return -ENODEV; | 284 | return -ENODEV; |
280 | 285 | ||
@@ -303,6 +308,7 @@ static int shpc_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | |||
303 | if (rc) | 308 | if (rc) |
304 | goto err_cleanup_slots; | 309 | goto err_cleanup_slots; |
305 | 310 | ||
311 | pdev->shpc_managed = 1; | ||
306 | return 0; | 312 | return 0; |
307 | 313 | ||
308 | err_cleanup_slots: | 314 | err_cleanup_slots: |
@@ -319,6 +325,7 @@ static void shpc_remove(struct pci_dev *dev) | |||
319 | { | 325 | { |
320 | struct controller *ctrl = pci_get_drvdata(dev); | 326 | struct controller *ctrl = pci_get_drvdata(dev); |
321 | 327 | ||
328 | dev->shpc_managed = 0; | ||
322 | shpchp_remove_ctrl_files(ctrl); | 329 | shpchp_remove_ctrl_files(ctrl); |
323 | ctrl->hpc_ops->release_ctlr(ctrl); | 330 | ctrl->hpc_ops->release_ctlr(ctrl); |
324 | kfree(ctrl); | 331 | kfree(ctrl); |
diff --git a/drivers/pci/hotplug/shpchp_ctrl.c b/drivers/pci/hotplug/shpchp_ctrl.c index 1047b56e5730..1267dcc5a531 100644 --- a/drivers/pci/hotplug/shpchp_ctrl.c +++ b/drivers/pci/hotplug/shpchp_ctrl.c | |||
@@ -654,6 +654,7 @@ int shpchp_sysfs_enable_slot(struct slot *p_slot) | |||
654 | switch (p_slot->state) { | 654 | switch (p_slot->state) { |
655 | case BLINKINGON_STATE: | 655 | case BLINKINGON_STATE: |
656 | cancel_delayed_work(&p_slot->work); | 656 | cancel_delayed_work(&p_slot->work); |
657 | /* fall through */ | ||
657 | case STATIC_STATE: | 658 | case STATIC_STATE: |
658 | p_slot->state = POWERON_STATE; | 659 | p_slot->state = POWERON_STATE; |
659 | mutex_unlock(&p_slot->lock); | 660 | mutex_unlock(&p_slot->lock); |
@@ -689,6 +690,7 @@ int shpchp_sysfs_disable_slot(struct slot *p_slot) | |||
689 | switch (p_slot->state) { | 690 | switch (p_slot->state) { |
690 | case BLINKINGOFF_STATE: | 691 | case BLINKINGOFF_STATE: |
691 | cancel_delayed_work(&p_slot->work); | 692 | cancel_delayed_work(&p_slot->work); |
693 | /* fall through */ | ||
692 | case STATIC_STATE: | 694 | case STATIC_STATE: |
693 | p_slot->state = POWEROFF_STATE; | 695 | p_slot->state = POWEROFF_STATE; |
694 | mutex_unlock(&p_slot->lock); | 696 | mutex_unlock(&p_slot->lock); |
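The two shpchp_ctrl.c hunks above only add `/* fall through */` comments: the fall-through from the BLINKINGON case into the STATIC case was always intentional (cancel the blink work, then take the normal power-on/off path), and the comment makes that explicit for `-Wimplicit-fallthrough`. A minimal userspace sketch of the same control flow, with hypothetical names standing in for the slot machinery:

```c
#include <assert.h>

enum slot_state { STATIC_STATE, BLINKINGON_STATE, POWERON_STATE };

/* Hypothetical reduction of the shpchp_sysfs_enable_slot() switch:
 * BLINKINGON_STATE cancels the pending blink work and then deliberately
 * falls through to the STATIC_STATE power-on path. */
static enum slot_state enable_slot(enum slot_state state, int *cancelled)
{
	switch (state) {
	case BLINKINGON_STATE:
		*cancelled = 1;	/* stands in for cancel_delayed_work() */
		/* fall through */
	case STATIC_STATE:
		state = POWERON_STATE;
		break;
	default:
		break;
	}
	return state;
}
```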
diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c index 0f04ae648cf1..c5f3cd4ed766 100644 --- a/drivers/pci/iov.c +++ b/drivers/pci/iov.c | |||
@@ -818,15 +818,15 @@ int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs) | |||
818 | { | 818 | { |
819 | if (!dev->is_physfn) | 819 | if (!dev->is_physfn) |
820 | return -ENOSYS; | 820 | return -ENOSYS; |
821 | |||
821 | if (numvfs > dev->sriov->total_VFs) | 822 | if (numvfs > dev->sriov->total_VFs) |
822 | return -EINVAL; | 823 | return -EINVAL; |
823 | 824 | ||
824 | /* Shouldn't change if VFs already enabled */ | 825 | /* Shouldn't change if VFs already enabled */ |
825 | if (dev->sriov->ctrl & PCI_SRIOV_CTRL_VFE) | 826 | if (dev->sriov->ctrl & PCI_SRIOV_CTRL_VFE) |
826 | return -EBUSY; | 827 | return -EBUSY; |
827 | else | ||
828 | dev->sriov->driver_max_VFs = numvfs; | ||
829 | 828 | ||
829 | dev->sriov->driver_max_VFs = numvfs; | ||
830 | return 0; | 830 | return 0; |
831 | } | 831 | } |
832 | EXPORT_SYMBOL_GPL(pci_sriov_set_totalvfs); | 832 | EXPORT_SYMBOL_GPL(pci_sriov_set_totalvfs); |
diff --git a/drivers/pci/irq.c b/drivers/pci/irq.c index 2a808e10645f..a1de501a2729 100644 --- a/drivers/pci/irq.c +++ b/drivers/pci/irq.c | |||
@@ -86,13 +86,17 @@ int pci_request_irq(struct pci_dev *dev, unsigned int nr, irq_handler_t handler, | |||
86 | va_list ap; | 86 | va_list ap; |
87 | int ret; | 87 | int ret; |
88 | char *devname; | 88 | char *devname; |
89 | unsigned long irqflags = IRQF_SHARED; | ||
90 | |||
91 | if (!handler) | ||
92 | irqflags |= IRQF_ONESHOT; | ||
89 | 93 | ||
90 | va_start(ap, fmt); | 94 | va_start(ap, fmt); |
91 | devname = kvasprintf(GFP_KERNEL, fmt, ap); | 95 | devname = kvasprintf(GFP_KERNEL, fmt, ap); |
92 | va_end(ap); | 96 | va_end(ap); |
93 | 97 | ||
94 | ret = request_threaded_irq(pci_irq_vector(dev, nr), handler, thread_fn, | 98 | ret = request_threaded_irq(pci_irq_vector(dev, nr), handler, thread_fn, |
95 | IRQF_SHARED, devname, dev_id); | 99 | irqflags, devname, dev_id); |
96 | if (ret) | 100 | if (ret) |
97 | kfree(devname); | 101 | kfree(devname); |
98 | return ret; | 102 | return ret; |
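The irq.c hunk encodes a genirq rule: a threaded interrupt registered with no primary handler must carry IRQF_ONESHOT so the line stays masked until the thread has run; otherwise a level-triggered source would re-fire immediately, and request_threaded_irq() rejects the combination (unless the irqchip is oneshot-safe, which is what the msi.c hunk below arranges for MSI). A sketch of just the flag selection, with the flag values copied from the kernel's interrupt.h:

```c
#include <assert.h>
#include <stdbool.h>

#define IRQF_SHARED	0x00000080
#define IRQF_ONESHOT	0x00002000

/* Flag selection mirroring the pci_request_irq() change: keep the line
 * masked until the handler thread completes when there is no primary
 * (hardirq) handler to acknowledge the interrupt. */
static unsigned long pci_irqflags(bool have_primary_handler)
{
	unsigned long irqflags = IRQF_SHARED;

	if (!have_primary_handler)
		irqflags |= IRQF_ONESHOT;
	return irqflags;
}
```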
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index 4d88afdfc843..f2ef896464b3 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c | |||
@@ -1446,6 +1446,9 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, | |||
1446 | if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE)) | 1446 | if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE)) |
1447 | info->flags |= MSI_FLAG_MUST_REACTIVATE; | 1447 | info->flags |= MSI_FLAG_MUST_REACTIVATE; |
1448 | 1448 | ||
1449 | /* PCI-MSI is oneshot-safe */ | ||
1450 | info->chip->flags |= IRQCHIP_ONESHOT_SAFE; | ||
1451 | |||
1449 | domain = msi_create_irq_domain(fwnode, info, parent); | 1452 | domain = msi_create_irq_domain(fwnode, info, parent); |
1450 | if (!domain) | 1453 | if (!domain) |
1451 | return NULL; | 1454 | return NULL; |
diff --git a/drivers/pci/of.c b/drivers/pci/of.c index 69a60d6ebd73..1836b8ddf292 100644 --- a/drivers/pci/of.c +++ b/drivers/pci/of.c | |||
@@ -266,7 +266,7 @@ int devm_of_pci_get_host_bridge_resources(struct device *dev, | |||
266 | struct list_head *resources, resource_size_t *io_base) | 266 | struct list_head *resources, resource_size_t *io_base) |
267 | { | 267 | { |
268 | struct device_node *dev_node = dev->of_node; | 268 | struct device_node *dev_node = dev->of_node; |
269 | struct resource *res; | 269 | struct resource *res, tmp_res; |
270 | struct resource *bus_range; | 270 | struct resource *bus_range; |
271 | struct of_pci_range range; | 271 | struct of_pci_range range; |
272 | struct of_pci_range_parser parser; | 272 | struct of_pci_range_parser parser; |
@@ -320,18 +320,16 @@ int devm_of_pci_get_host_bridge_resources(struct device *dev, | |||
320 | if (range.cpu_addr == OF_BAD_ADDR || range.size == 0) | 320 | if (range.cpu_addr == OF_BAD_ADDR || range.size == 0) |
321 | continue; | 321 | continue; |
322 | 322 | ||
323 | res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL); | 323 | err = of_pci_range_to_resource(&range, dev_node, &tmp_res); |
324 | if (err) | ||
325 | continue; | ||
326 | |||
327 | res = devm_kmemdup(dev, &tmp_res, sizeof(tmp_res), GFP_KERNEL); | ||
324 | if (!res) { | 328 | if (!res) { |
325 | err = -ENOMEM; | 329 | err = -ENOMEM; |
326 | goto failed; | 330 | goto failed; |
327 | } | 331 | } |
328 | 332 | ||
329 | err = of_pci_range_to_resource(&range, dev_node, res); | ||
330 | if (err) { | ||
331 | devm_kfree(dev, res); | ||
332 | continue; | ||
333 | } | ||
334 | |||
335 | if (resource_type(res) == IORESOURCE_IO) { | 333 | if (resource_type(res) == IORESOURCE_IO) { |
336 | if (!io_base) { | 334 | if (!io_base) { |
337 | dev_err(dev, "I/O range found for %pOF. Please provide an io_base pointer to save CPU base address\n", | 335 | dev_err(dev, "I/O range found for %pOF. Please provide an io_base pointer to save CPU base address\n", |
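The of.c hunk inverts the allocate-then-parse order: parse into a stack temporary first, then duplicate it with devm_kmemdup() only on success, so a failed parse no longer needs a devm_kfree() to undo the allocation. A userspace analogue of the pattern, with a hypothetical string parser standing in for of_pci_range_to_resource():

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct resource { unsigned long start, end; };

/* Hypothetical parser: fills *res from a "start-end" hex string,
 * returns 0 on success. */
static int parse_range(const char *s, struct resource *res)
{
	return sscanf(s, "%lx-%lx", &res->start, &res->end) == 2 ? 0 : -1;
}

/* Parse into a stack temporary first; allocate (cf. devm_kmemdup())
 * only once the parse has succeeded, so failure paths free nothing. */
static struct resource *parse_range_alloc(const char *s)
{
	struct resource tmp, *res;

	if (parse_range(s, &tmp))
		return NULL;
	res = malloc(sizeof(tmp));
	if (res)
		memcpy(res, &tmp, sizeof(tmp));
	return res;
}
```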
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c index 89ee6a2b6eb8..738e3546abb1 100644 --- a/drivers/pci/pci-acpi.c +++ b/drivers/pci/pci-acpi.c | |||
@@ -403,24 +403,7 @@ bool pciehp_is_native(struct pci_dev *bridge) | |||
403 | */ | 403 | */ |
404 | bool shpchp_is_native(struct pci_dev *bridge) | 404 | bool shpchp_is_native(struct pci_dev *bridge) |
405 | { | 405 | { |
406 | const struct pci_host_bridge *host; | 406 | return bridge->shpc_managed; |
407 | |||
408 | if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_SHPC)) | ||
409 | return false; | ||
410 | |||
411 | /* | ||
412 | * It is assumed that AMD GOLAM chips support SHPC but they do not | ||
413 | * have SHPC capability. | ||
414 | */ | ||
415 | if (bridge->vendor == PCI_VENDOR_ID_AMD && | ||
416 | bridge->device == PCI_DEVICE_ID_AMD_GOLAM_7450) | ||
417 | return true; | ||
418 | |||
419 | if (!pci_find_capability(bridge, PCI_CAP_ID_SHPC)) | ||
420 | return false; | ||
421 | |||
422 | host = pci_find_host_bridge(bridge->bus); | ||
423 | return host->native_shpc_hotplug; | ||
424 | } | 407 | } |
425 | 408 | ||
426 | /** | 409 | /** |
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index 6792292b5fc7..bef17c3fca67 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c | |||
@@ -1668,7 +1668,7 @@ static int __init pci_driver_init(void) | |||
1668 | if (ret) | 1668 | if (ret) |
1669 | return ret; | 1669 | return ret; |
1670 | #endif | 1670 | #endif |
1671 | 1671 | dma_debug_add_bus(&pci_bus_type); | |
1672 | return 0; | 1672 | return 0; |
1673 | } | 1673 | } |
1674 | postcore_initcall(pci_driver_init); | 1674 | postcore_initcall(pci_driver_init); |
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index 0c4653c1d2ce..9ecfe13157c0 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c | |||
@@ -23,7 +23,6 @@ | |||
23 | #include <linux/fs.h> | 23 | #include <linux/fs.h> |
24 | #include <linux/capability.h> | 24 | #include <linux/capability.h> |
25 | #include <linux/security.h> | 25 | #include <linux/security.h> |
26 | #include <linux/pci-aspm.h> | ||
27 | #include <linux/slab.h> | 26 | #include <linux/slab.h> |
28 | #include <linux/vgaarb.h> | 27 | #include <linux/vgaarb.h> |
29 | #include <linux/pm_runtime.h> | 28 | #include <linux/pm_runtime.h> |
@@ -1449,7 +1448,9 @@ static ssize_t reset_store(struct device *dev, struct device_attribute *attr, | |||
1449 | if (val != 1) | 1448 | if (val != 1) |
1450 | return -EINVAL; | 1449 | return -EINVAL; |
1451 | 1450 | ||
1451 | pm_runtime_get_sync(dev); | ||
1452 | result = pci_reset_function(pdev); | 1452 | result = pci_reset_function(pdev); |
1453 | pm_runtime_put(dev); | ||
1453 | if (result < 0) | 1454 | if (result < 0) |
1454 | return result; | 1455 | return result; |
1455 | 1456 | ||
@@ -1746,6 +1747,9 @@ static const struct attribute_group *pci_dev_attr_groups[] = { | |||
1746 | #endif | 1747 | #endif |
1747 | &pci_bridge_attr_group, | 1748 | &pci_bridge_attr_group, |
1748 | &pcie_dev_attr_group, | 1749 | &pcie_dev_attr_group, |
1750 | #ifdef CONFIG_PCIEAER | ||
1751 | &aer_stats_attr_group, | ||
1752 | #endif | ||
1749 | NULL, | 1753 | NULL, |
1750 | }; | 1754 | }; |
1751 | 1755 | ||
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index aa1684d99b70..29ff9619b5fa 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c | |||
@@ -23,7 +23,6 @@ | |||
23 | #include <linux/string.h> | 23 | #include <linux/string.h> |
24 | #include <linux/log2.h> | 24 | #include <linux/log2.h> |
25 | #include <linux/logic_pio.h> | 25 | #include <linux/logic_pio.h> |
26 | #include <linux/pci-aspm.h> | ||
27 | #include <linux/pm_wakeup.h> | 26 | #include <linux/pm_wakeup.h> |
28 | #include <linux/interrupt.h> | 27 | #include <linux/interrupt.h> |
29 | #include <linux/device.h> | 28 | #include <linux/device.h> |
@@ -115,6 +114,9 @@ static bool pcie_ari_disabled; | |||
115 | /* If set, the PCIe ATS capability will not be used. */ | 114 | /* If set, the PCIe ATS capability will not be used. */ |
116 | static bool pcie_ats_disabled; | 115 | static bool pcie_ats_disabled; |
117 | 116 | ||
117 | /* If set, the PCI config space of each device is printed during boot. */ | ||
118 | bool pci_early_dump; | ||
119 | |||
118 | bool pci_ats_disabled(void) | 120 | bool pci_ats_disabled(void) |
119 | { | 121 | { |
120 | return pcie_ats_disabled; | 122 | return pcie_ats_disabled; |
@@ -191,6 +193,168 @@ void __iomem *pci_ioremap_wc_bar(struct pci_dev *pdev, int bar) | |||
191 | EXPORT_SYMBOL_GPL(pci_ioremap_wc_bar); | 193 | EXPORT_SYMBOL_GPL(pci_ioremap_wc_bar); |
192 | #endif | 194 | #endif |
193 | 195 | ||
196 | /** | ||
197 | * pci_dev_str_match_path - test if a path string matches a device | ||
198 | * @dev: the PCI device to test | ||
199 | * @p: string to match the device against | ||
200 | * @endptr: pointer to the string after the match | ||
201 | * | ||
202 | * Test if a string (typically from a kernel parameter) formatted as a | ||
203 | * path of device/function addresses matches a PCI device. The string must | ||
204 | * be of the form: | ||
205 | * | ||
206 | * [<domain>:]<bus>:<device>.<func>[/<device>.<func>]* | ||
207 | * | ||
208 | * A path for a device can be obtained using 'lspci -t'. Using a path | ||
209 | * is more robust against bus renumbering than using only a single bus, | ||
210 | * device and function address. | ||
211 | * | ||
212 | * Returns 1 if the string matches the device, 0 if it does not and | ||
213 | * a negative error code if it fails to parse the string. | ||
214 | */ | ||
215 | static int pci_dev_str_match_path(struct pci_dev *dev, const char *path, | ||
216 | const char **endptr) | ||
217 | { | ||
218 | int ret; | ||
219 | int seg, bus, slot, func; | ||
220 | char *wpath, *p; | ||
221 | char end; | ||
222 | |||
223 | *endptr = strchrnul(path, ';'); | ||
224 | |||
225 | wpath = kmemdup_nul(path, *endptr - path, GFP_KERNEL); | ||
226 | if (!wpath) | ||
227 | return -ENOMEM; | ||
228 | |||
229 | while (1) { | ||
230 | p = strrchr(wpath, '/'); | ||
231 | if (!p) | ||
232 | break; | ||
233 | ret = sscanf(p, "/%x.%x%c", &slot, &func, &end); | ||
234 | if (ret != 2) { | ||
235 | ret = -EINVAL; | ||
236 | goto free_and_exit; | ||
237 | } | ||
238 | |||
239 | if (dev->devfn != PCI_DEVFN(slot, func)) { | ||
240 | ret = 0; | ||
241 | goto free_and_exit; | ||
242 | } | ||
243 | |||
244 | /* | ||
245 | * Note: we don't need to get a reference to the upstream | ||
246 | * bridge because we hold a reference to the top level | ||
247 | * device which should hold a reference to the bridge, | ||
248 | * and so on. | ||
249 | */ | ||
250 | dev = pci_upstream_bridge(dev); | ||
251 | if (!dev) { | ||
252 | ret = 0; | ||
253 | goto free_and_exit; | ||
254 | } | ||
255 | |||
256 | *p = 0; | ||
257 | } | ||
258 | |||
259 | ret = sscanf(wpath, "%x:%x:%x.%x%c", &seg, &bus, &slot, | ||
260 | &func, &end); | ||
261 | if (ret != 4) { | ||
262 | seg = 0; | ||
263 | ret = sscanf(wpath, "%x:%x.%x%c", &bus, &slot, &func, &end); | ||
264 | if (ret != 3) { | ||
265 | ret = -EINVAL; | ||
266 | goto free_and_exit; | ||
267 | } | ||
268 | } | ||
269 | |||
270 | ret = (seg == pci_domain_nr(dev->bus) && | ||
271 | bus == dev->bus->number && | ||
272 | dev->devfn == PCI_DEVFN(slot, func)); | ||
273 | |||
274 | free_and_exit: | ||
275 | kfree(wpath); | ||
276 | return ret; | ||
277 | } | ||
278 | |||
279 | /** | ||
280 | * pci_dev_str_match - test if a string matches a device | ||
281 | * @dev: the PCI device to test | ||
282 | * @p: string to match the device against | ||
283 | * @endptr: pointer to the string after the match | ||
284 | * | ||
285 | * Test if a string (typically from a kernel parameter) matches a specified | ||
286 | * PCI device. The string may be of one of the following formats: | ||
287 | * | ||
288 | * [<domain>:]<bus>:<device>.<func>[/<device>.<func>]* | ||
289 | * pci:<vendor>:<device>[:<subvendor>:<subdevice>] | ||
290 | * | ||
291 | * The first format specifies a PCI bus/device/function address which | ||
292 | * may change if new hardware is inserted, if motherboard firmware changes, | ||
293 | * or due to changes caused in kernel parameters. If the domain is | ||
294 | * left unspecified, it is taken to be 0. In order to be robust against | ||
295 | * bus renumbering issues, a path of PCI device/function numbers may be used | ||
296 | * to address the specific device. The path for a device can be determined | ||
297 | * through the use of 'lspci -t'. | ||
298 | * | ||
299 | * The second format matches devices using IDs in the configuration | ||
300 | * space which may match multiple devices in the system. A value of 0 | ||
301 | * for any field will match all devices. (Note: this differs from | ||
302 | * in-kernel code that uses PCI_ANY_ID which is ~0; this is for | ||
303 | * legacy reasons and convenience so users don't have to specify | ||
304 | * FFFFFFFFs on the command line.) | ||
305 | * | ||
306 | * Returns 1 if the string matches the device, 0 if it does not and | ||
307 | * a negative error code if the string cannot be parsed. | ||
308 | */ | ||
309 | static int pci_dev_str_match(struct pci_dev *dev, const char *p, | ||
310 | const char **endptr) | ||
311 | { | ||
312 | int ret; | ||
313 | int count; | ||
314 | unsigned short vendor, device, subsystem_vendor, subsystem_device; | ||
315 | |||
316 | if (strncmp(p, "pci:", 4) == 0) { | ||
317 | /* PCI vendor/device (subvendor/subdevice) IDs are specified */ | ||
318 | p += 4; | ||
319 | ret = sscanf(p, "%hx:%hx:%hx:%hx%n", &vendor, &device, | ||
320 | &subsystem_vendor, &subsystem_device, &count); | ||
321 | if (ret != 4) { | ||
322 | ret = sscanf(p, "%hx:%hx%n", &vendor, &device, &count); | ||
323 | if (ret != 2) | ||
324 | return -EINVAL; | ||
325 | |||
326 | subsystem_vendor = 0; | ||
327 | subsystem_device = 0; | ||
328 | } | ||
329 | |||
330 | p += count; | ||
331 | |||
332 | if ((!vendor || vendor == dev->vendor) && | ||
333 | (!device || device == dev->device) && | ||
334 | (!subsystem_vendor || | ||
335 | subsystem_vendor == dev->subsystem_vendor) && | ||
336 | (!subsystem_device || | ||
337 | subsystem_device == dev->subsystem_device)) | ||
338 | goto found; | ||
339 | } else { | ||
340 | /* | ||
341 | * PCI Bus, Device, Function IDs are specified | ||
342 | * (optionally, may include a path of devfns following it) | ||
343 | */ | ||
344 | ret = pci_dev_str_match_path(dev, p, &p); | ||
345 | if (ret < 0) | ||
346 | return ret; | ||
347 | else if (ret) | ||
348 | goto found; | ||
349 | } | ||
350 | |||
351 | *endptr = p; | ||
352 | return 0; | ||
353 | |||
354 | found: | ||
355 | *endptr = p; | ||
356 | return 1; | ||
357 | } | ||
194 | 358 | ||
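The new pci_dev_str_match_path() above leans on sscanf()'s return value and a trailing `%c` sentinel to both parse and validate each address component. A simplified, testable userspace version of its final step (matching `[<domain>:]<bus>:<device>.<func>`, domain defaulting to 0) shows the idiom; the helper name is illustrative, not a kernel API:

```c
#include <assert.h>
#include <stdio.h>

/* Simplified form of the last parse in pci_dev_str_match_path():
 * accept "<domain>:<bus>:<device>.<func>" or "<bus>:<device>.<func>".
 * The trailing %c deliberately over-reads: if it matches, there is
 * junk after the address and the conversion count exceeds the
 * expected value, so the string is rejected. */
static int parse_bdf(const char *s, int *seg, int *bus, int *slot, int *func)
{
	char end;
	int ret;

	ret = sscanf(s, "%x:%x:%x.%x%c", seg, bus, slot, func, &end);
	if (ret == 4)
		return 0;
	*seg = 0;
	ret = sscanf(s, "%x:%x.%x%c", bus, slot, func, &end);
	return ret == 3 ? 0 : -1;
}
```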
195 | static int __pci_find_next_cap_ttl(struct pci_bus *bus, unsigned int devfn, | 359 | static int __pci_find_next_cap_ttl(struct pci_bus *bus, unsigned int devfn, |
196 | u8 pos, int cap, int *ttl) | 360 | u8 pos, int cap, int *ttl) |
@@ -1171,6 +1335,33 @@ static void pci_restore_config_space(struct pci_dev *pdev) | |||
1171 | } | 1335 | } |
1172 | } | 1336 | } |
1173 | 1337 | ||
1338 | static void pci_restore_rebar_state(struct pci_dev *pdev) | ||
1339 | { | ||
1340 | unsigned int pos, nbars, i; | ||
1341 | u32 ctrl; | ||
1342 | |||
1343 | pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR); | ||
1344 | if (!pos) | ||
1345 | return; | ||
1346 | |||
1347 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); | ||
1348 | nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >> | ||
1349 | PCI_REBAR_CTRL_NBAR_SHIFT; | ||
1350 | |||
1351 | for (i = 0; i < nbars; i++, pos += 8) { | ||
1352 | struct resource *res; | ||
1353 | int bar_idx, size; | ||
1354 | |||
1355 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); | ||
1356 | bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX; | ||
1357 | res = pdev->resource + bar_idx; | ||
1358 | size = order_base_2((resource_size(res) >> 20) | 1) - 1; | ||
1359 | ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE; | ||
1360 | ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT; | ||
1361 | pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl); | ||
1362 | } | ||
1363 | } | ||
1364 | |||
1174 | /** | 1365 | /** |
1175 | * pci_restore_state - Restore the saved state of a PCI device | 1366 | * pci_restore_state - Restore the saved state of a PCI device |
1176 | * @dev: - PCI device that we're dealing with | 1367 | * @dev: - PCI device that we're dealing with |
@@ -1186,6 +1377,7 @@ void pci_restore_state(struct pci_dev *dev) | |||
1186 | pci_restore_pri_state(dev); | 1377 | pci_restore_pri_state(dev); |
1187 | pci_restore_ats_state(dev); | 1378 | pci_restore_ats_state(dev); |
1188 | pci_restore_vc_state(dev); | 1379 | pci_restore_vc_state(dev); |
1380 | pci_restore_rebar_state(dev); | ||
1189 | 1381 | ||
1190 | pci_cleanup_aer_error_status_regs(dev); | 1382 | pci_cleanup_aer_error_status_regs(dev); |
1191 | 1383 | ||
@@ -2045,6 +2237,7 @@ static pci_power_t pci_target_state(struct pci_dev *dev, bool wakeup) | |||
2045 | case PCI_D2: | 2237 | case PCI_D2: |
2046 | if (pci_no_d1d2(dev)) | 2238 | if (pci_no_d1d2(dev)) |
2047 | break; | 2239 | break; |
2240 | /* else: fall through */ | ||
2048 | default: | 2241 | default: |
2049 | target_state = state; | 2242 | target_state = state; |
2050 | } | 2243 | } |
@@ -2290,7 +2483,7 @@ void pci_config_pm_runtime_put(struct pci_dev *pdev) | |||
2290 | * @bridge: Bridge to check | 2483 | * @bridge: Bridge to check |
2291 | * | 2484 | * |
2292 | * This function checks if it is possible to move the bridge to D3. | 2485 | * This function checks if it is possible to move the bridge to D3. |
2293 | * Currently we only allow D3 for recent enough PCIe ports. | 2486 | * Currently we only allow D3 for recent enough PCIe ports and Thunderbolt. |
2294 | */ | 2487 | */ |
2295 | bool pci_bridge_d3_possible(struct pci_dev *bridge) | 2488 | bool pci_bridge_d3_possible(struct pci_dev *bridge) |
2296 | { | 2489 | { |
@@ -2305,18 +2498,27 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge) | |||
2305 | return false; | 2498 | return false; |
2306 | 2499 | ||
2307 | /* | 2500 | /* |
2308 | * Hotplug interrupts cannot be delivered if the link is down, | 2501 | * Hotplug ports handled by firmware in System Management Mode |
2309 | * so parents of a hotplug port must stay awake. In addition, | ||
2310 | * hotplug ports handled by firmware in System Management Mode | ||
2311 | * may not be put into D3 by the OS (Thunderbolt on non-Macs). | 2502 | * may not be put into D3 by the OS (Thunderbolt on non-Macs). |
2312 | * For simplicity, disallow in general for now. | ||
2313 | */ | 2503 | */ |
2314 | if (bridge->is_hotplug_bridge) | 2504 | if (bridge->is_hotplug_bridge && !pciehp_is_native(bridge)) |
2315 | return false; | 2505 | return false; |
2316 | 2506 | ||
2317 | if (pci_bridge_d3_force) | 2507 | if (pci_bridge_d3_force) |
2318 | return true; | 2508 | return true; |
2319 | 2509 | ||
2510 | /* Even the oldest 2010 Thunderbolt controller supports D3. */ | ||
2511 | if (bridge->is_thunderbolt) | ||
2512 | return true; | ||
2513 | |||
2514 | /* | ||
2515 | * Hotplug ports handled natively by the OS were not validated | ||
2516 | * by vendors for runtime D3 at least until 2018 because there | ||
2517 | * was no OS support. | ||
2518 | */ | ||
2519 | if (bridge->is_hotplug_bridge) | ||
2520 | return false; | ||
2521 | |||
2320 | /* | 2522 | /* |
2321 | * It should be safe to put PCIe ports from 2015 or newer | 2523 | * It should be safe to put PCIe ports from 2015 or newer |
2322 | * to D3. | 2524 | * to D3. |
@@ -2820,6 +3022,66 @@ void pci_request_acs(void) | |||
2820 | pci_acs_enable = 1; | 3022 | pci_acs_enable = 1; |
2821 | } | 3023 | } |
2822 | 3024 | ||
3025 | static const char *disable_acs_redir_param; | ||
3026 | |||
3027 | /** | ||
3028 | * pci_disable_acs_redir - disable ACS redirect capabilities | ||
3029 | * @dev: the PCI device | ||
3030 | * | ||
3031 | * For only devices specified in the disable_acs_redir parameter. | ||
3032 | */ | ||
3033 | static void pci_disable_acs_redir(struct pci_dev *dev) | ||
3034 | { | ||
3035 | int ret = 0; | ||
3036 | const char *p; | ||
3037 | int pos; | ||
3038 | u16 ctrl; | ||
3039 | |||
3040 | if (!disable_acs_redir_param) | ||
3041 | return; | ||
3042 | |||
3043 | p = disable_acs_redir_param; | ||
3044 | while (*p) { | ||
3045 | ret = pci_dev_str_match(dev, p, &p); | ||
3046 | if (ret < 0) { | ||
3047 | pr_info_once("PCI: Can't parse disable_acs_redir parameter: %s\n", | ||
3048 | disable_acs_redir_param); | ||
3049 | |||
3050 | break; | ||
3051 | } else if (ret == 1) { | ||
3052 | /* Found a match */ | ||
3053 | break; | ||
3054 | } | ||
3055 | |||
3056 | if (*p != ';' && *p != ',') { | ||
3057 | /* End of param or invalid format */ | ||
3058 | break; | ||
3059 | } | ||
3060 | p++; | ||
3061 | } | ||
3062 | |||
3063 | if (ret != 1) | ||
3064 | return; | ||
3065 | |||
3066 | if (!pci_dev_specific_disable_acs_redir(dev)) | ||
3067 | return; | ||
3068 | |||
3069 | pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); | ||
3070 | if (!pos) { | ||
3071 | pci_warn(dev, "cannot disable ACS redirect for this hardware as it does not have ACS capabilities\n"); | ||
3072 | return; | ||
3073 | } | ||
3074 | |||
3075 | pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl); | ||
3076 | |||
3077 | /* P2P Request & Completion Redirect */ | ||
3078 | ctrl &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC); | ||
3079 | |||
3080 | pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl); | ||
3081 | |||
3082 | pci_info(dev, "disabled ACS redirect\n"); | ||
3083 | } | ||
3084 | |||
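The write-back step of pci_disable_acs_redir() is a plain read-modify-write that clears only the three redirect-related enables and leaves the rest of ACS control (Source Validation, Translation Blocking, ...) untouched. The bit definitions below are copied from the kernel's pci_regs.h; the helper wrapping them is illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define PCI_ACS_SV	0x0001	/* Source Validation */
#define PCI_ACS_RR	0x0004	/* P2P Request Redirect */
#define PCI_ACS_CR	0x0008	/* P2P Completion Redirect */
#define PCI_ACS_EC	0x0020	/* P2P Egress Control */

/* Modify step of the read-modify-write in pci_disable_acs_redir():
 * drop the redirect enables, preserve every other ACS control bit. */
static uint16_t acs_ctrl_without_redir(uint16_t ctrl)
{
	return ctrl & ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC);
}
```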
2823 | /** | 3085 | /** |
2824 | * pci_std_enable_acs - enable ACS on devices using standard ACS capabilites | 3086 | * pci_std_enable_acs - enable ACS on devices using standard ACS capabilites |
2825 | * @dev: the PCI device | 3087 | * @dev: the PCI device |
@@ -2859,12 +3121,22 @@ static void pci_std_enable_acs(struct pci_dev *dev) | |||
2859 | void pci_enable_acs(struct pci_dev *dev) | 3121 | void pci_enable_acs(struct pci_dev *dev) |
2860 | { | 3122 | { |
2861 | if (!pci_acs_enable) | 3123 | if (!pci_acs_enable) |
2862 | return; | 3124 | goto disable_acs_redir; |
2863 | 3125 | ||
2864 | if (!pci_dev_specific_enable_acs(dev)) | 3126 | if (!pci_dev_specific_enable_acs(dev)) |
2865 | return; | 3127 | goto disable_acs_redir; |
2866 | 3128 | ||
2867 | pci_std_enable_acs(dev); | 3129 | pci_std_enable_acs(dev); |
3130 | |||
3131 | disable_acs_redir: | ||
3132 | /* | ||
3133 | * Note: pci_disable_acs_redir() must be called even if ACS was not | ||
3134 | * enabled by the kernel because it may have been enabled by | ||
3135 | * platform firmware. So if we are told to disable it, we should | ||
3136 | * always disable it after setting the kernel's default | ||
3137 | * preferences. | ||
3138 | */ | ||
3139 | pci_disable_acs_redir(dev); | ||
2868 | } | 3140 | } |
2869 | 3141 | ||
2870 | static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags) | 3142 | static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags) |
@@ -3070,7 +3342,7 @@ int pci_rebar_get_current_size(struct pci_dev *pdev, int bar) | |||
3070 | return pos; | 3342 | return pos; |
3071 | 3343 | ||
3072 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); | 3344 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); |
3073 | return (ctrl & PCI_REBAR_CTRL_BAR_SIZE) >> 8; | 3345 | return (ctrl & PCI_REBAR_CTRL_BAR_SIZE) >> PCI_REBAR_CTRL_BAR_SHIFT; |
3074 | } | 3346 | } |
3075 | 3347 | ||
3076 | /** | 3348 | /** |
@@ -3093,7 +3365,7 @@ int pci_rebar_set_size(struct pci_dev *pdev, int bar, int size) | |||
3093 | 3365 | ||
3094 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); | 3366 | pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); |
3095 | ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE; | 3367 | ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE; |
3096 | ctrl |= size << 8; | 3368 | ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT; |
3097 | pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl); | 3369 | pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl); |
3098 | return 0; | 3370 | return 0; |
3099 | } | 3371 | } |
@@ -4077,7 +4349,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout) | |||
4077 | * Returns true if the device advertises support for PCIe function level | 4349 | * Returns true if the device advertises support for PCIe function level |
4078 | * resets. | 4350 | * resets. |
4079 | */ | 4351 | */ |
4080 | static bool pcie_has_flr(struct pci_dev *dev) | 4352 | bool pcie_has_flr(struct pci_dev *dev) |
4081 | { | 4353 | { |
4082 | u32 cap; | 4354 | u32 cap; |
4083 | 4355 | ||
@@ -4087,6 +4359,7 @@ static bool pcie_has_flr(struct pci_dev *dev) | |||
4087 | pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap); | 4359 | pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap); |
4088 | return cap & PCI_EXP_DEVCAP_FLR; | 4360 | return cap & PCI_EXP_DEVCAP_FLR; |
4089 | } | 4361 | } |
4362 | EXPORT_SYMBOL_GPL(pcie_has_flr); | ||
4090 | 4363 | ||
4091 | /** | 4364 | /** |
4092 | * pcie_flr - initiate a PCIe function level reset | 4365 | * pcie_flr - initiate a PCIe function level reset |
@@ -4262,19 +4535,18 @@ void __weak pcibios_reset_secondary_bus(struct pci_dev *dev) | |||
4262 | } | 4535 | } |
4263 | 4536 | ||
4264 | /** | 4537 | /** |
4265 | * pci_reset_bridge_secondary_bus - Reset the secondary bus on a PCI bridge. | 4538 | * pci_bridge_secondary_bus_reset - Reset the secondary bus on a PCI bridge. |
4266 | * @dev: Bridge device | 4539 | * @dev: Bridge device |
4267 | * | 4540 | * |
4268 | * Use the bridge control register to assert reset on the secondary bus. | 4541 | * Use the bridge control register to assert reset on the secondary bus. |
4269 | * Devices on the secondary bus are left in power-on state. | 4542 | * Devices on the secondary bus are left in power-on state. |
4270 | */ | 4543 | */ |
4271 | int pci_reset_bridge_secondary_bus(struct pci_dev *dev) | 4544 | int pci_bridge_secondary_bus_reset(struct pci_dev *dev) |
4272 | { | 4545 | { |
4273 | pcibios_reset_secondary_bus(dev); | 4546 | pcibios_reset_secondary_bus(dev); |
4274 | 4547 | ||
4275 | return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS); | 4548 | return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS); |
4276 | } | 4549 | } |
4277 | EXPORT_SYMBOL_GPL(pci_reset_bridge_secondary_bus); | ||
4278 | 4550 | ||
4279 | static int pci_parent_bus_reset(struct pci_dev *dev, int probe) | 4551 | static int pci_parent_bus_reset(struct pci_dev *dev, int probe) |
4280 | { | 4552 | { |
@@ -4291,9 +4563,7 @@ static int pci_parent_bus_reset(struct pci_dev *dev, int probe) | |||
4291 | if (probe) | 4563 | if (probe) |
4292 | return 0; | 4564 | return 0; |
4293 | 4565 | ||
4294 | pci_reset_bridge_secondary_bus(dev->bus->self); | 4566 | return pci_bridge_secondary_bus_reset(dev->bus->self); |
4295 | |||
4296 | return 0; | ||
4297 | } | 4567 | } |
4298 | 4568 | ||
4299 | static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe) | 4569 | static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe) |
@@ -4825,7 +5095,7 @@ int pci_probe_reset_slot(struct pci_slot *slot) | |||
4825 | EXPORT_SYMBOL_GPL(pci_probe_reset_slot); | 5095 | EXPORT_SYMBOL_GPL(pci_probe_reset_slot); |
4826 | 5096 | ||
4827 | /** | 5097 | /** |
4828 | * pci_reset_slot - reset a PCI slot | 5098 | * __pci_reset_slot - Try to reset a PCI slot |
4829 | * @slot: PCI slot to reset | 5099 | * @slot: PCI slot to reset |
4830 | * | 5100 | * |
4831 | * A PCI bus may host multiple slots, each slot may support a reset mechanism | 5101 | * A PCI bus may host multiple slots, each slot may support a reset mechanism |
@@ -4837,33 +5107,9 @@ EXPORT_SYMBOL_GPL(pci_probe_reset_slot); | |||
4837 | * through this function. PCI config space of all devices in the slot and | 5107 | * through this function. PCI config space of all devices in the slot and |
4838 | * behind the slot is saved before and restored after reset. | 5108 | * behind the slot is saved before and restored after reset. |
4839 | * | 5109 | * |
4840 | * Return 0 on success, non-zero on error. | ||
4841 | */ | ||
4842 | int pci_reset_slot(struct pci_slot *slot) | ||
4843 | { | ||
4844 | int rc; | ||
4845 | |||
4846 | rc = pci_slot_reset(slot, 1); | ||
4847 | if (rc) | ||
4848 | return rc; | ||
4849 | |||
4850 | pci_slot_save_and_disable(slot); | ||
4851 | |||
4852 | rc = pci_slot_reset(slot, 0); | ||
4853 | |||
4854 | pci_slot_restore(slot); | ||
4855 | |||
4856 | return rc; | ||
4857 | } | ||
4858 | EXPORT_SYMBOL_GPL(pci_reset_slot); | ||
4859 | |||
4860 | /** | ||
4861 | * pci_try_reset_slot - Try to reset a PCI slot | ||
4862 | * @slot: PCI slot to reset | ||
4863 | * | ||
4864 | * Same as above except return -EAGAIN if the slot cannot be locked | 5110 | * Same as above except return -EAGAIN if the slot cannot be locked |
4865 | */ | 5111 | */ |
4866 | int pci_try_reset_slot(struct pci_slot *slot) | 5112 | static int __pci_reset_slot(struct pci_slot *slot) |
4867 | { | 5113 | { |
4868 | int rc; | 5114 | int rc; |
4869 | 5115 | ||
@@ -4884,10 +5130,11 @@ int pci_try_reset_slot(struct pci_slot *slot) | |||
4884 | 5130 | ||
4885 | return rc; | 5131 | return rc; |
4886 | } | 5132 | } |
4887 | EXPORT_SYMBOL_GPL(pci_try_reset_slot); | ||
4888 | 5133 | ||
4889 | static int pci_bus_reset(struct pci_bus *bus, int probe) | 5134 | static int pci_bus_reset(struct pci_bus *bus, int probe) |
4890 | { | 5135 | { |
5136 | int ret; | ||
5137 | |||
4891 | if (!bus->self || !pci_bus_resetable(bus)) | 5138 | if (!bus->self || !pci_bus_resetable(bus)) |
4892 | return -ENOTTY; | 5139 | return -ENOTTY; |
4893 | 5140 | ||
@@ -4898,11 +5145,11 @@ static int pci_bus_reset(struct pci_bus *bus, int probe) | |||
4898 | 5145 | ||
4899 | might_sleep(); | 5146 | might_sleep(); |
4900 | 5147 | ||
4901 | pci_reset_bridge_secondary_bus(bus->self); | 5148 | ret = pci_bridge_secondary_bus_reset(bus->self); |
4902 | 5149 | ||
4903 | pci_bus_unlock(bus); | 5150 | pci_bus_unlock(bus); |
4904 | 5151 | ||
4905 | return 0; | 5152 | return ret; |
4906 | } | 5153 | } |
4907 | 5154 | ||
4908 | /** | 5155 | /** |
@@ -4918,15 +5165,12 @@ int pci_probe_reset_bus(struct pci_bus *bus) | |||
4918 | EXPORT_SYMBOL_GPL(pci_probe_reset_bus); | 5165 | EXPORT_SYMBOL_GPL(pci_probe_reset_bus); |
4919 | 5166 | ||
4920 | /** | 5167 | /** |
4921 | * pci_reset_bus - reset a PCI bus | 5168 | * __pci_reset_bus - Try to reset a PCI bus |
4922 | * @bus: top level PCI bus to reset | 5169 | * @bus: top level PCI bus to reset |
4923 | * | 5170 | * |
4924 | * Do a bus reset on the given bus and any subordinate buses, saving | 5171 | * Same as above except return -EAGAIN if the bus cannot be locked |
4925 | * and restoring state of all devices. | ||
4926 | * | ||
4927 | * Return 0 on success, non-zero on error. | ||
4928 | */ | 5172 | */ |
4929 | int pci_reset_bus(struct pci_bus *bus) | 5173 | static int __pci_reset_bus(struct pci_bus *bus) |
4930 | { | 5174 | { |
4931 | int rc; | 5175 | int rc; |
4932 | 5176 | ||
@@ -4936,42 +5180,30 @@ int pci_reset_bus(struct pci_bus *bus) | |||
4936 | 5180 | ||
4937 | pci_bus_save_and_disable(bus); | 5181 | pci_bus_save_and_disable(bus); |
4938 | 5182 | ||
4939 | rc = pci_bus_reset(bus, 0); | 5183 | if (pci_bus_trylock(bus)) { |
5184 | might_sleep(); | ||
5185 | rc = pci_bridge_secondary_bus_reset(bus->self); | ||
5186 | pci_bus_unlock(bus); | ||
5187 | } else | ||
5188 | rc = -EAGAIN; | ||
4940 | 5189 | ||
4941 | pci_bus_restore(bus); | 5190 | pci_bus_restore(bus); |
4942 | 5191 | ||
4943 | return rc; | 5192 | return rc; |
4944 | } | 5193 | } |
4945 | EXPORT_SYMBOL_GPL(pci_reset_bus); | ||
4946 | 5194 | ||
4947 | /** | 5195 | /** |
4948 | * pci_try_reset_bus - Try to reset a PCI bus | 5196 | * pci_reset_bus - Try to reset a PCI bus |
4949 | * @bus: top level PCI bus to reset | 5197 | * @pdev: top level PCI device to reset via slot/bus |
4950 | * | 5198 | * |
4951 | * Same as above except return -EAGAIN if the bus cannot be locked | 5199 | * Same as above except return -EAGAIN if the bus cannot be locked |
4952 | */ | 5200 | */ |
4953 | int pci_try_reset_bus(struct pci_bus *bus) | 5201 | int pci_reset_bus(struct pci_dev *pdev) |
4954 | { | 5202 | { |
4955 | int rc; | 5203 | return pci_probe_reset_slot(pdev->slot) ? |
4956 | 5204 | __pci_reset_slot(pdev->slot) : __pci_reset_bus(pdev->bus); | |
4957 | rc = pci_bus_reset(bus, 1); | ||
4958 | if (rc) | ||
4959 | return rc; | ||
4960 | |||
4961 | pci_bus_save_and_disable(bus); | ||
4962 | |||
4963 | if (pci_bus_trylock(bus)) { | ||
4964 | might_sleep(); | ||
4965 | pci_reset_bridge_secondary_bus(bus->self); | ||
4966 | pci_bus_unlock(bus); | ||
4967 | } else | ||
4968 | rc = -EAGAIN; | ||
4969 | |||
4970 | pci_bus_restore(bus); | ||
4971 | |||
4972 | return rc; | ||
4973 | } | 5205 | } |
4974 | EXPORT_SYMBOL_GPL(pci_try_reset_bus); | 5206 | EXPORT_SYMBOL_GPL(pci_reset_bus); |
4975 | 5207 | ||
4976 | /** | 5208 | /** |
4977 | * pcix_get_max_mmrbc - get PCI-X maximum designed memory read byte count | 5209 | * pcix_get_max_mmrbc - get PCI-X maximum designed memory read byte count |
@@ -5304,14 +5536,16 @@ u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, | |||
5304 | } | 5536 | } |
5305 | 5537 | ||
5306 | /** | 5538 | /** |
5307 | * pcie_print_link_status - Report the PCI device's link speed and width | 5539 | * __pcie_print_link_status - Report the PCI device's link speed and width |
5308 | * @dev: PCI device to query | 5540 | * @dev: PCI device to query |
5541 | * @verbose: Print info even when enough bandwidth is available | ||
5309 | * | 5542 | * |
5310 | * Report the available bandwidth at the device. If this is less than the | 5543 | * If the available bandwidth at the device is less than the device is |
5311 | * device is capable of, report the device's maximum possible bandwidth and | 5544 | * capable of, report the device's maximum possible bandwidth and the |
5312 | * the upstream link that limits its performance to less than that. | 5545 | * upstream link that limits its performance. If @verbose, always print |
5546 | * the available bandwidth, even if the device isn't constrained. | ||
5313 | */ | 5547 | */ |
5314 | void pcie_print_link_status(struct pci_dev *dev) | 5548 | void __pcie_print_link_status(struct pci_dev *dev, bool verbose) |
5315 | { | 5549 | { |
5316 | enum pcie_link_width width, width_cap; | 5550 | enum pcie_link_width width, width_cap; |
5317 | enum pci_bus_speed speed, speed_cap; | 5551 | enum pci_bus_speed speed, speed_cap; |
@@ -5321,11 +5555,11 @@ void pcie_print_link_status(struct pci_dev *dev) | |||
5321 | bw_cap = pcie_bandwidth_capable(dev, &speed_cap, &width_cap); | 5555 | bw_cap = pcie_bandwidth_capable(dev, &speed_cap, &width_cap); |
5322 | bw_avail = pcie_bandwidth_available(dev, &limiting_dev, &speed, &width); | 5556 | bw_avail = pcie_bandwidth_available(dev, &limiting_dev, &speed, &width); |
5323 | 5557 | ||
5324 | if (bw_avail >= bw_cap) | 5558 | if (bw_avail >= bw_cap && verbose) |
5325 | pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n", | 5559 | pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n", |
5326 | bw_cap / 1000, bw_cap % 1000, | 5560 | bw_cap / 1000, bw_cap % 1000, |
5327 | PCIE_SPEED2STR(speed_cap), width_cap); | 5561 | PCIE_SPEED2STR(speed_cap), width_cap); |
5328 | else | 5562 | else if (bw_avail < bw_cap) |
5329 | pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n", | 5563 | pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n", |
5330 | bw_avail / 1000, bw_avail % 1000, | 5564 | bw_avail / 1000, bw_avail % 1000, |
5331 | PCIE_SPEED2STR(speed), width, | 5565 | PCIE_SPEED2STR(speed), width, |
@@ -5333,6 +5567,17 @@ void pcie_print_link_status(struct pci_dev *dev) | |||
5333 | bw_cap / 1000, bw_cap % 1000, | 5567 | bw_cap / 1000, bw_cap % 1000, |
5334 | PCIE_SPEED2STR(speed_cap), width_cap); | 5568 | PCIE_SPEED2STR(speed_cap), width_cap); |
5335 | } | 5569 | } |
5570 | |||
5571 | /** | ||
5572 | * pcie_print_link_status - Report the PCI device's link speed and width | ||
5573 | * @dev: PCI device to query | ||
5574 | * | ||
5575 | * Report the available bandwidth at the device. | ||
5576 | */ | ||
5577 | void pcie_print_link_status(struct pci_dev *dev) | ||
5578 | { | ||
5579 | __pcie_print_link_status(dev, true); | ||
5580 | } | ||
5336 | EXPORT_SYMBOL(pcie_print_link_status); | 5581 | EXPORT_SYMBOL(pcie_print_link_status); |
5337 | 5582 | ||
5338 | /** | 5583 | /** |
@@ -5427,8 +5672,19 @@ int pci_set_vga_state(struct pci_dev *dev, bool decode, | |||
5427 | * @dev: the PCI device for which alias is added | 5672 | * @dev: the PCI device for which alias is added |
5428 | * @devfn: alias slot and function | 5673 | * @devfn: alias slot and function |
5429 | * | 5674 | * |
5430 | * This helper encodes 8-bit devfn as bit number in dma_alias_mask. | 5675 | * This helper encodes an 8-bit devfn as a bit number in dma_alias_mask |
5431 | * It should be called early, preferably as PCI fixup header quirk. | 5676 | * which is used to program permissible bus-devfn source addresses for DMA |
5677 | * requests in an IOMMU. These aliases factor into IOMMU group creation | ||
5678 | * and are useful for devices generating DMA requests beyond or different | ||
5679 | * from their logical bus-devfn. Examples include device quirks where the | ||
5680 | * device simply uses the wrong devfn, as well as non-transparent bridges | ||
5681 | * where the alias may be a proxy for devices in another domain. | ||
5682 | * | ||
5683 | * IOMMU group creation is performed during device discovery or addition, | ||
5684 | * prior to any potential DMA mapping and therefore prior to driver probing | ||
5685 | * (especially for userspace assigned devices where IOMMU group definition | ||
5686 | * cannot be left as a userspace activity). DMA aliases should therefore | ||
5687 | * be configured via quirks, such as the PCI fixup header quirk. | ||
5432 | */ | 5688 | */ |
5433 | void pci_add_dma_alias(struct pci_dev *dev, u8 devfn) | 5689 | void pci_add_dma_alias(struct pci_dev *dev, u8 devfn) |
5434 | { | 5690 | { |
@@ -5494,10 +5750,10 @@ static DEFINE_SPINLOCK(resource_alignment_lock); | |||
5494 | static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev, | 5750 | static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev, |
5495 | bool *resize) | 5751 | bool *resize) |
5496 | { | 5752 | { |
5497 | int seg, bus, slot, func, align_order, count; | 5753 | int align_order, count; |
5498 | unsigned short vendor, device, subsystem_vendor, subsystem_device; | ||
5499 | resource_size_t align = pcibios_default_alignment(); | 5754 | resource_size_t align = pcibios_default_alignment(); |
5500 | char *p; | 5755 | const char *p; |
5756 | int ret; | ||
5501 | 5757 | ||
5502 | spin_lock(&resource_alignment_lock); | 5758 | spin_lock(&resource_alignment_lock); |
5503 | p = resource_alignment_param; | 5759 | p = resource_alignment_param; |
@@ -5517,58 +5773,21 @@ static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev, | |||
5517 | } else { | 5773 | } else { |
5518 | align_order = -1; | 5774 | align_order = -1; |
5519 | } | 5775 | } |
5520 | if (strncmp(p, "pci:", 4) == 0) { | 5776 | |
5521 | /* PCI vendor/device (subvendor/subdevice) ids are specified */ | 5777 | ret = pci_dev_str_match(dev, p, &p); |
5522 | p += 4; | 5778 | if (ret == 1) { |
5523 | if (sscanf(p, "%hx:%hx:%hx:%hx%n", | 5779 | *resize = true; |
5524 | &vendor, &device, &subsystem_vendor, &subsystem_device, &count) != 4) { | 5780 | if (align_order == -1) |
5525 | if (sscanf(p, "%hx:%hx%n", &vendor, &device, &count) != 2) { | 5781 | align = PAGE_SIZE; |
5526 | printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: pci:%s\n", | 5782 | else |
5527 | p); | 5783 | align = 1 << align_order; |
5528 | break; | 5784 | break; |
5529 | } | 5785 | } else if (ret < 0) { |
5530 | subsystem_vendor = subsystem_device = 0; | 5786 | pr_err("PCI: Can't parse resource_alignment parameter: %s\n", |
5531 | } | 5787 | p); |
5532 | p += count; | 5788 | break; |
5533 | if ((!vendor || (vendor == dev->vendor)) && | ||
5534 | (!device || (device == dev->device)) && | ||
5535 | (!subsystem_vendor || (subsystem_vendor == dev->subsystem_vendor)) && | ||
5536 | (!subsystem_device || (subsystem_device == dev->subsystem_device))) { | ||
5537 | *resize = true; | ||
5538 | if (align_order == -1) | ||
5539 | align = PAGE_SIZE; | ||
5540 | else | ||
5541 | align = 1 << align_order; | ||
5542 | /* Found */ | ||
5543 | break; | ||
5544 | } | ||
5545 | } | ||
5546 | else { | ||
5547 | if (sscanf(p, "%x:%x:%x.%x%n", | ||
5548 | &seg, &bus, &slot, &func, &count) != 4) { | ||
5549 | seg = 0; | ||
5550 | if (sscanf(p, "%x:%x.%x%n", | ||
5551 | &bus, &slot, &func, &count) != 3) { | ||
5552 | /* Invalid format */ | ||
5553 | printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: %s\n", | ||
5554 | p); | ||
5555 | break; | ||
5556 | } | ||
5557 | } | ||
5558 | p += count; | ||
5559 | if (seg == pci_domain_nr(dev->bus) && | ||
5560 | bus == dev->bus->number && | ||
5561 | slot == PCI_SLOT(dev->devfn) && | ||
5562 | func == PCI_FUNC(dev->devfn)) { | ||
5563 | *resize = true; | ||
5564 | if (align_order == -1) | ||
5565 | align = PAGE_SIZE; | ||
5566 | else | ||
5567 | align = 1 << align_order; | ||
5568 | /* Found */ | ||
5569 | break; | ||
5570 | } | ||
5571 | } | 5789 | } |
5790 | |||
5572 | if (*p != ';' && *p != ',') { | 5791 | if (*p != ';' && *p != ',') { |
5573 | /* End of param or invalid format */ | 5792 | /* End of param or invalid format */ |
5574 | break; | 5793 | break; |
@@ -5845,6 +6064,8 @@ static int __init pci_setup(char *str) | |||
5845 | pcie_ats_disabled = true; | 6064 | pcie_ats_disabled = true; |
5846 | } else if (!strcmp(str, "noaer")) { | 6065 | } else if (!strcmp(str, "noaer")) { |
5847 | pci_no_aer(); | 6066 | pci_no_aer(); |
6067 | } else if (!strcmp(str, "earlydump")) { | ||
6068 | pci_early_dump = true; | ||
5848 | } else if (!strncmp(str, "realloc=", 8)) { | 6069 | } else if (!strncmp(str, "realloc=", 8)) { |
5849 | pci_realloc_get_opt(str + 8); | 6070 | pci_realloc_get_opt(str + 8); |
5850 | } else if (!strncmp(str, "realloc", 7)) { | 6071 | } else if (!strncmp(str, "realloc", 7)) { |
@@ -5881,6 +6102,8 @@ static int __init pci_setup(char *str) | |||
5881 | pcie_bus_config = PCIE_BUS_PEER2PEER; | 6102 | pcie_bus_config = PCIE_BUS_PEER2PEER; |
5882 | } else if (!strncmp(str, "pcie_scan_all", 13)) { | 6103 | } else if (!strncmp(str, "pcie_scan_all", 13)) { |
5883 | pci_add_flags(PCI_SCAN_ALL_PCIE_DEVS); | 6104 | pci_add_flags(PCI_SCAN_ALL_PCIE_DEVS); |
6105 | } else if (!strncmp(str, "disable_acs_redir=", 18)) { | ||
6106 | disable_acs_redir_param = str + 18; | ||
5884 | } else { | 6107 | } else { |
5885 | printk(KERN_ERR "PCI: Unknown option `%s'\n", | 6108 | printk(KERN_ERR "PCI: Unknown option `%s'\n", |
5886 | str); | 6109 | str); |
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 08817253c8a2..6e0d1528d471 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -7,6 +7,7 @@ | |||
7 | #define PCI_VSEC_ID_INTEL_TBT 0x1234 /* Thunderbolt */ | 7 | #define PCI_VSEC_ID_INTEL_TBT 0x1234 /* Thunderbolt */ |
8 | 8 | ||
9 | extern const unsigned char pcie_link_speed[]; | 9 | extern const unsigned char pcie_link_speed[]; |
10 | extern bool pci_early_dump; | ||
10 | 11 | ||
11 | bool pcie_cap_has_lnkctl(const struct pci_dev *dev); | 12 | bool pcie_cap_has_lnkctl(const struct pci_dev *dev); |
12 | 13 | ||
@@ -33,6 +34,7 @@ int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vmai, | |||
33 | enum pci_mmap_api mmap_api); | 34 | enum pci_mmap_api mmap_api); |
34 | 35 | ||
35 | int pci_probe_reset_function(struct pci_dev *dev); | 36 | int pci_probe_reset_function(struct pci_dev *dev); |
37 | int pci_bridge_secondary_bus_reset(struct pci_dev *dev); | ||
36 | 38 | ||
37 | /** | 39 | /** |
38 | * struct pci_platform_pm_ops - Firmware PM callbacks | 40 | * struct pci_platform_pm_ops - Firmware PM callbacks |
@@ -225,6 +227,10 @@ enum pci_bar_type { | |||
225 | int pci_configure_extended_tags(struct pci_dev *dev, void *ign); | 227 | int pci_configure_extended_tags(struct pci_dev *dev, void *ign); |
226 | bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl, | 228 | bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl, |
227 | int crs_timeout); | 229 | int crs_timeout); |
230 | bool pci_bus_generic_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl, | ||
231 | int crs_timeout); | ||
232 | int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *pl, int crs_timeout); | ||
233 | |||
228 | int pci_setup_device(struct pci_dev *dev); | 234 | int pci_setup_device(struct pci_dev *dev); |
229 | int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type, | 235 | int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type, |
230 | struct resource *res, unsigned int reg); | 236 | struct resource *res, unsigned int reg); |
@@ -259,6 +265,7 @@ enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev); | |||
259 | enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev); | 265 | enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev); |
260 | u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, | 266 | u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, |
261 | enum pcie_link_width *width); | 267 | enum pcie_link_width *width); |
268 | void __pcie_print_link_status(struct pci_dev *dev, bool verbose); | ||
262 | 269 | ||
263 | /* Single Root I/O Virtualization */ | 270 | /* Single Root I/O Virtualization */ |
264 | struct pci_sriov { | 271 | struct pci_sriov { |
@@ -311,6 +318,34 @@ static inline bool pci_dev_is_added(const struct pci_dev *dev) | |||
311 | return test_bit(PCI_DEV_ADDED, &dev->priv_flags); | 318 | return test_bit(PCI_DEV_ADDED, &dev->priv_flags); |
312 | } | 319 | } |
313 | 320 | ||
321 | #ifdef CONFIG_PCIEAER | ||
322 | #include <linux/aer.h> | ||
323 | |||
324 | #define AER_MAX_MULTI_ERR_DEVICES 5 /* Not likely to have more */ | ||
325 | |||
326 | struct aer_err_info { | ||
327 | struct pci_dev *dev[AER_MAX_MULTI_ERR_DEVICES]; | ||
328 | int error_dev_num; | ||
329 | |||
330 | unsigned int id:16; | ||
331 | |||
332 | unsigned int severity:2; /* 0:NONFATAL | 1:FATAL | 2:COR */ | ||
333 | unsigned int __pad1:5; | ||
334 | unsigned int multi_error_valid:1; | ||
335 | |||
336 | unsigned int first_error:5; | ||
337 | unsigned int __pad2:2; | ||
338 | unsigned int tlp_header_valid:1; | ||
339 | |||
340 | unsigned int status; /* COR/UNCOR Error Status */ | ||
341 | unsigned int mask; /* COR/UNCOR Error Mask */ | ||
342 | struct aer_header_log_regs tlp; /* TLP Header */ | ||
343 | }; | ||
344 | |||
345 | int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info); | ||
346 | void aer_print_error(struct pci_dev *dev, struct aer_err_info *info); | ||
347 | #endif /* CONFIG_PCIEAER */ | ||
348 | |||
314 | #ifdef CONFIG_PCI_ATS | 349 | #ifdef CONFIG_PCI_ATS |
315 | void pci_restore_ats_state(struct pci_dev *dev); | 350 | void pci_restore_ats_state(struct pci_dev *dev); |
316 | #else | 351 | #else |
@@ -367,6 +402,25 @@ static inline resource_size_t pci_resource_alignment(struct pci_dev *dev, | |||
367 | } | 402 | } |
368 | 403 | ||
369 | void pci_enable_acs(struct pci_dev *dev); | 404 | void pci_enable_acs(struct pci_dev *dev); |
405 | #ifdef CONFIG_PCI_QUIRKS | ||
406 | int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags); | ||
407 | int pci_dev_specific_enable_acs(struct pci_dev *dev); | ||
408 | int pci_dev_specific_disable_acs_redir(struct pci_dev *dev); | ||
409 | #else | ||
410 | static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev, | ||
411 | u16 acs_flags) | ||
412 | { | ||
413 | return -ENOTTY; | ||
414 | } | ||
415 | static inline int pci_dev_specific_enable_acs(struct pci_dev *dev) | ||
416 | { | ||
417 | return -ENOTTY; | ||
418 | } | ||
419 | static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev) | ||
420 | { | ||
421 | return -ENOTTY; | ||
422 | } | ||
423 | #endif | ||
370 | 424 | ||
371 | /* PCI error reporting and recovery */ | 425 | /* PCI error reporting and recovery */ |
372 | void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service); | 426 | void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service); |
@@ -467,4 +521,19 @@ static inline int devm_of_pci_get_host_bridge_resources(struct device *dev, | |||
467 | } | 521 | } |
468 | #endif | 522 | #endif |
469 | 523 | ||
524 | #ifdef CONFIG_PCIEAER | ||
525 | void pci_no_aer(void); | ||
526 | void pci_aer_init(struct pci_dev *dev); | ||
527 | void pci_aer_exit(struct pci_dev *dev); | ||
528 | extern const struct attribute_group aer_stats_attr_group; | ||
529 | void pci_aer_clear_fatal_status(struct pci_dev *dev); | ||
530 | void pci_aer_clear_device_status(struct pci_dev *dev); | ||
531 | #else | ||
532 | static inline void pci_no_aer(void) { } | ||
533 | static inline void pci_aer_init(struct pci_dev *d) { } | ||
534 | static inline void pci_aer_exit(struct pci_dev *d) { } | ||
535 | static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { } | ||
536 | static inline void pci_aer_clear_device_status(struct pci_dev *dev) { } | ||
537 | #endif | ||
538 | |||
470 | #endif /* DRIVERS_PCI_H */ | 539 | #endif /* DRIVERS_PCI_H */ |
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index a2e88386af28..83180edd6ed4 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -31,26 +31,9 @@ | |||
31 | #include "portdrv.h" | 31 | #include "portdrv.h" |
32 | 32 | ||
33 | #define AER_ERROR_SOURCES_MAX 100 | 33 | #define AER_ERROR_SOURCES_MAX 100 |
34 | #define AER_MAX_MULTI_ERR_DEVICES 5 /* Not likely to have more */ | ||
35 | 34 | ||
36 | struct aer_err_info { | 35 | #define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */ |
37 | struct pci_dev *dev[AER_MAX_MULTI_ERR_DEVICES]; | 36 | #define AER_MAX_TYPEOF_UNCOR_ERRS 26 /* as per PCI_ERR_UNCOR_STATUS*/ |
38 | int error_dev_num; | ||
39 | |||
40 | unsigned int id:16; | ||
41 | |||
42 | unsigned int severity:2; /* 0:NONFATAL | 1:FATAL | 2:COR */ | ||
43 | unsigned int __pad1:5; | ||
44 | unsigned int multi_error_valid:1; | ||
45 | |||
46 | unsigned int first_error:5; | ||
47 | unsigned int __pad2:2; | ||
48 | unsigned int tlp_header_valid:1; | ||
49 | |||
50 | unsigned int status; /* COR/UNCOR Error Status */ | ||
51 | unsigned int mask; /* COR/UNCOR Error Mask */ | ||
52 | struct aer_header_log_regs tlp; /* TLP Header */ | ||
53 | }; | ||
54 | 37 | ||
55 | struct aer_err_source { | 38 | struct aer_err_source { |
56 | unsigned int status; | 39 | unsigned int status; |
@@ -76,6 +59,42 @@ struct aer_rpc { | |||
76 | */ | 59 | */ |
77 | }; | 60 | }; |
78 | 61 | ||
62 | /* AER stats for the device */ | ||
63 | struct aer_stats { | ||
64 | |||
65 | /* | ||
66 | * Fields for all AER capable devices. They indicate the errors | ||
67 | * "as seen by this device". Note that this may mean that if an | ||
68 | * end point is causing problems, the AER counters may increment | ||
69 | * at its link partner (e.g. root port) because the errors will be | ||
70 | * "seen" by the link partner and not the problematic end point | ||
71 | * itself (which may report all counters as 0 as it never saw any | ||
72 | * problems). | ||
73 | */ | ||
74 | /* Counters for different type of correctable errors */ | ||
75 | u64 dev_cor_errs[AER_MAX_TYPEOF_COR_ERRS]; | ||
76 | /* Counters for different type of fatal uncorrectable errors */ | ||
77 | u64 dev_fatal_errs[AER_MAX_TYPEOF_UNCOR_ERRS]; | ||
78 | /* Counters for different type of nonfatal uncorrectable errors */ | ||
79 | u64 dev_nonfatal_errs[AER_MAX_TYPEOF_UNCOR_ERRS]; | ||
80 | /* Total number of ERR_COR sent by this device */ | ||
81 | u64 dev_total_cor_errs; | ||
82 | /* Total number of ERR_FATAL sent by this device */ | ||
83 | u64 dev_total_fatal_errs; | ||
84 | /* Total number of ERR_NONFATAL sent by this device */ | ||
85 | u64 dev_total_nonfatal_errs; | ||
86 | |||
87 | /* | ||
88 | * Fields for Root ports & root complex event collectors only; these | ||
89 | * indicate the total number of ERR_COR, ERR_FATAL, and ERR_NONFATAL | ||
90 | * messages received by the root port / event collector, INCLUDING the | ||
91 | * ones that are generated internally (by the root port itself) | ||
92 | */ | ||
93 | u64 rootport_total_cor_errs; | ||
94 | u64 rootport_total_fatal_errs; | ||
95 | u64 rootport_total_nonfatal_errs; | ||
96 | }; | ||
97 | |||
79 | #define AER_LOG_TLP_MASKS (PCI_ERR_UNC_POISON_TLP| \ | 98 | #define AER_LOG_TLP_MASKS (PCI_ERR_UNC_POISON_TLP| \ |
80 | PCI_ERR_UNC_ECRC| \ | 99 | PCI_ERR_UNC_ECRC| \ |
81 | PCI_ERR_UNC_UNSUP| \ | 100 | PCI_ERR_UNC_UNSUP| \ |
@@ -303,12 +322,13 @@ int pcie_aer_get_firmware_first(struct pci_dev *dev) | |||
303 | if (!pci_is_pcie(dev)) | 322 | if (!pci_is_pcie(dev)) |
304 | return 0; | 323 | return 0; |
305 | 324 | ||
325 | if (pcie_ports_native) | ||
326 | return 0; | ||
327 | |||
306 | if (!dev->__aer_firmware_first_valid) | 328 | if (!dev->__aer_firmware_first_valid) |
307 | aer_set_firmware_first(dev); | 329 | aer_set_firmware_first(dev); |
308 | return dev->__aer_firmware_first; | 330 | return dev->__aer_firmware_first; |
309 | } | 331 | } |
310 | #define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \ | ||
311 | PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE) | ||
312 | 332 | ||
313 | static bool aer_firmware_first; | 333 | static bool aer_firmware_first; |
314 | 334 | ||
@@ -323,6 +343,9 @@ bool aer_acpi_firmware_first(void) | |||
323 | .firmware_first = 0, | 343 | .firmware_first = 0, |
324 | }; | 344 | }; |
325 | 345 | ||
346 | if (pcie_ports_native) | ||
347 | return false; | ||
348 | |||
326 | if (!parsed) { | 349 | if (!parsed) { |
327 | apei_hest_parse(aer_hest_parse, &info); | 350 | apei_hest_parse(aer_hest_parse, &info); |
328 | aer_firmware_first = info.firmware_first; | 351 | aer_firmware_first = info.firmware_first; |
@@ -357,16 +380,30 @@ int pci_disable_pcie_error_reporting(struct pci_dev *dev) | |||
357 | } | 380 | } |
358 | EXPORT_SYMBOL_GPL(pci_disable_pcie_error_reporting); | 381 | EXPORT_SYMBOL_GPL(pci_disable_pcie_error_reporting); |
359 | 382 | ||
383 | void pci_aer_clear_device_status(struct pci_dev *dev) | ||
384 | { | ||
385 | u16 sta; | ||
386 | |||
387 | pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &sta); | ||
388 | pcie_capability_write_word(dev, PCI_EXP_DEVSTA, sta); | ||
389 | } | ||
390 | |||
360 | int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev) | 391 | int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev) |
361 | { | 392 | { |
362 | int pos; | 393 | int pos; |
363 | u32 status; | 394 | u32 status, sev; |
364 | 395 | ||
365 | pos = dev->aer_cap; | 396 | pos = dev->aer_cap; |
366 | if (!pos) | 397 | if (!pos) |
367 | return -EIO; | 398 | return -EIO; |
368 | 399 | ||
400 | if (pcie_aer_get_firmware_first(dev)) | ||
401 | return -EIO; | ||
402 | |||
403 | /* Clear status bits for ERR_NONFATAL errors only */ | ||
369 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); | 404 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); |
405 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev); | ||
406 | status &= ~sev; | ||
370 | if (status) | 407 | if (status) |
371 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status); | 408 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status); |
372 | 409 | ||
@@ -374,6 +411,26 @@ int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev) | |||
374 | } | 411 | } |
375 | EXPORT_SYMBOL_GPL(pci_cleanup_aer_uncorrect_error_status); | 412 | EXPORT_SYMBOL_GPL(pci_cleanup_aer_uncorrect_error_status); |
376 | 413 | ||
414 | void pci_aer_clear_fatal_status(struct pci_dev *dev) | ||
415 | { | ||
416 | int pos; | ||
417 | u32 status, sev; | ||
418 | |||
419 | pos = dev->aer_cap; | ||
420 | if (!pos) | ||
421 | return; | ||
422 | |||
423 | if (pcie_aer_get_firmware_first(dev)) | ||
424 | return; | ||
425 | |||
426 | /* Clear status bits for ERR_FATAL errors only */ | ||
427 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); | ||
428 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev); | ||
429 | status &= sev; | ||
430 | if (status) | ||
431 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status); | ||
432 | } | ||
433 | |||
377 | int pci_cleanup_aer_error_status_regs(struct pci_dev *dev) | 434 | int pci_cleanup_aer_error_status_regs(struct pci_dev *dev) |
378 | { | 435 | { |
379 | int pos; | 436 | int pos; |
@@ -387,6 +444,9 @@ int pci_cleanup_aer_error_status_regs(struct pci_dev *dev) | |||
387 | if (!pos) | 444 | if (!pos) |
388 | return -EIO; | 445 | return -EIO; |
389 | 446 | ||
447 | if (pcie_aer_get_firmware_first(dev)) | ||
448 | return -EIO; | ||
449 | |||
390 | port_type = pci_pcie_type(dev); | 450 | port_type = pci_pcie_type(dev); |
391 | if (port_type == PCI_EXP_TYPE_ROOT_PORT) { | 451 | if (port_type == PCI_EXP_TYPE_ROOT_PORT) { |
392 | pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &status); | 452 | pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &status); |
@@ -402,10 +462,20 @@ int pci_cleanup_aer_error_status_regs(struct pci_dev *dev) | |||
402 | return 0; | 462 | return 0; |
403 | } | 463 | } |
404 | 464 | ||
405 | int pci_aer_init(struct pci_dev *dev) | 465 | void pci_aer_init(struct pci_dev *dev) |
406 | { | 466 | { |
407 | dev->aer_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); | 467 | dev->aer_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); |
408 | return pci_cleanup_aer_error_status_regs(dev); | 468 | |
469 | if (dev->aer_cap) | ||
470 | dev->aer_stats = kzalloc(sizeof(struct aer_stats), GFP_KERNEL); | ||
471 | |||
472 | pci_cleanup_aer_error_status_regs(dev); | ||
473 | } | ||
474 | |||
475 | void pci_aer_exit(struct pci_dev *dev) | ||
476 | { | ||
477 | kfree(dev->aer_stats); | ||
478 | dev->aer_stats = NULL; | ||
409 | } | 479 | } |
410 | 480 | ||
411 | #define AER_AGENT_RECEIVER 0 | 481 | #define AER_AGENT_RECEIVER 0 |
@@ -458,52 +528,52 @@ static const char *aer_error_layer[] = { | |||
458 | "Transaction Layer" | 528 | "Transaction Layer" |
459 | }; | 529 | }; |
460 | 530 | ||
461 | static const char *aer_correctable_error_string[] = { | 531 | static const char *aer_correctable_error_string[AER_MAX_TYPEOF_COR_ERRS] = { |
462 | "Receiver Error", /* Bit Position 0 */ | 532 | "RxErr", /* Bit Position 0 */ |
463 | NULL, | 533 | NULL, |
464 | NULL, | 534 | NULL, |
465 | NULL, | 535 | NULL, |
466 | NULL, | 536 | NULL, |
467 | NULL, | 537 | NULL, |
468 | "Bad TLP", /* Bit Position 6 */ | 538 | "BadTLP", /* Bit Position 6 */ |
469 | "Bad DLLP", /* Bit Position 7 */ | 539 | "BadDLLP", /* Bit Position 7 */ |
470 | "RELAY_NUM Rollover", /* Bit Position 8 */ | 540 | "Rollover", /* Bit Position 8 */ |
471 | NULL, | 541 | NULL, |
472 | NULL, | 542 | NULL, |
473 | NULL, | 543 | NULL, |
474 | "Replay Timer Timeout", /* Bit Position 12 */ | 544 | "Timeout", /* Bit Position 12 */ |
475 | "Advisory Non-Fatal", /* Bit Position 13 */ | 545 | "NonFatalErr", /* Bit Position 13 */ |
476 | "Corrected Internal Error", /* Bit Position 14 */ | 546 | "CorrIntErr", /* Bit Position 14 */ |
477 | "Header Log Overflow", /* Bit Position 15 */ | 547 | "HeaderOF", /* Bit Position 15 */ |
478 | }; | 548 | }; |
479 | 549 | ||
480 | static const char *aer_uncorrectable_error_string[] = { | 550 | static const char *aer_uncorrectable_error_string[AER_MAX_TYPEOF_UNCOR_ERRS] = { |
481 | "Undefined", /* Bit Position 0 */ | 551 | "Undefined", /* Bit Position 0 */ |
482 | NULL, | 552 | NULL, |
483 | NULL, | 553 | NULL, |
484 | NULL, | 554 | NULL, |
485 | "Data Link Protocol", /* Bit Position 4 */ | 555 | "DLP", /* Bit Position 4 */ |
486 | "Surprise Down Error", /* Bit Position 5 */ | 556 | "SDES", /* Bit Position 5 */ |
487 | NULL, | 557 | NULL, |
488 | NULL, | 558 | NULL, |
489 | NULL, | 559 | NULL, |
490 | NULL, | 560 | NULL, |
491 | NULL, | 561 | NULL, |
492 | NULL, | 562 | NULL, |
493 | "Poisoned TLP", /* Bit Position 12 */ | 563 | "TLP", /* Bit Position 12 */ |
494 | "Flow Control Protocol", /* Bit Position 13 */ | 564 | "FCP", /* Bit Position 13 */ |
495 | "Completion Timeout", /* Bit Position 14 */ | 565 | "CmpltTO", /* Bit Position 14 */ |
496 | "Completer Abort", /* Bit Position 15 */ | 566 | "CmpltAbrt", /* Bit Position 15 */ |
497 | "Unexpected Completion", /* Bit Position 16 */ | 567 | "UnxCmplt", /* Bit Position 16 */ |
498 | "Receiver Overflow", /* Bit Position 17 */ | 568 | "RxOF", /* Bit Position 17 */ |
499 | "Malformed TLP", /* Bit Position 18 */ | 569 | "MalfTLP", /* Bit Position 18 */ |
500 | "ECRC", /* Bit Position 19 */ | 570 | "ECRC", /* Bit Position 19 */ |
501 | "Unsupported Request", /* Bit Position 20 */ | 571 | "UnsupReq", /* Bit Position 20 */ |
502 | "ACS Violation", /* Bit Position 21 */ | 572 | "ACSViol", /* Bit Position 21 */ |
503 | "Uncorrectable Internal Error", /* Bit Position 22 */ | 573 | "UncorrIntErr", /* Bit Position 22 */ |
504 | "MC Blocked TLP", /* Bit Position 23 */ | 574 | "BlockedTLP", /* Bit Position 23 */ |
505 | "AtomicOp Egress Blocked", /* Bit Position 24 */ | 575 | "AtomicOpBlocked", /* Bit Position 24 */ |
506 | "TLP Prefix Blocked Error", /* Bit Position 25 */ | 576 | "TLPBlockedErr", /* Bit Position 25 */ |
507 | }; | 577 | }; |
508 | 578 | ||
509 | static const char *aer_agent_string[] = { | 579 | static const char *aer_agent_string[] = { |
@@ -513,6 +583,144 @@ static const char *aer_agent_string[] = { | |||
513 | "Transmitter ID" | 583 | "Transmitter ID" |
514 | }; | 584 | }; |
515 | 585 | ||
586 | #define aer_stats_dev_attr(name, stats_array, strings_array, \ | ||
587 | total_string, total_field) \ | ||
588 | static ssize_t \ | ||
589 | name##_show(struct device *dev, struct device_attribute *attr, \ | ||
590 | char *buf) \ | ||
591 | { \ | ||
592 | unsigned int i; \ | ||
593 | char *str = buf; \ | ||
594 | struct pci_dev *pdev = to_pci_dev(dev); \ | ||
595 | u64 *stats = pdev->aer_stats->stats_array; \ | ||
596 | \ | ||
597 | for (i = 0; i < ARRAY_SIZE(strings_array); i++) { \ | ||
598 | if (strings_array[i]) \ | ||
599 | str += sprintf(str, "%s %llu\n", \ | ||
600 | strings_array[i], stats[i]); \ | ||
601 | else if (stats[i]) \ | ||
602 | str += sprintf(str, #stats_array "_bit[%d] %llu\n",\ | ||
603 | i, stats[i]); \ | ||
604 | } \ | ||
605 | str += sprintf(str, "TOTAL_%s %llu\n", total_string, \ | ||
606 | pdev->aer_stats->total_field); \ | ||
607 | return str-buf; \ | ||
608 | } \ | ||
609 | static DEVICE_ATTR_RO(name) | ||
610 | |||
611 | aer_stats_dev_attr(aer_dev_correctable, dev_cor_errs, | ||
612 | aer_correctable_error_string, "ERR_COR", | ||
613 | dev_total_cor_errs); | ||
614 | aer_stats_dev_attr(aer_dev_fatal, dev_fatal_errs, | ||
615 | aer_uncorrectable_error_string, "ERR_FATAL", | ||
616 | dev_total_fatal_errs); | ||
617 | aer_stats_dev_attr(aer_dev_nonfatal, dev_nonfatal_errs, | ||
618 | aer_uncorrectable_error_string, "ERR_NONFATAL", | ||
619 | dev_total_nonfatal_errs); | ||
620 | |||
621 | #define aer_stats_rootport_attr(name, field) \ | ||
622 | static ssize_t \ | ||
623 | name##_show(struct device *dev, struct device_attribute *attr, \ | ||
624 | char *buf) \ | ||
625 | { \ | ||
626 | struct pci_dev *pdev = to_pci_dev(dev); \ | ||
627 | return sprintf(buf, "%llu\n", pdev->aer_stats->field); \ | ||
628 | } \ | ||
629 | static DEVICE_ATTR_RO(name) | ||
630 | |||
631 | aer_stats_rootport_attr(aer_rootport_total_err_cor, | ||
632 | rootport_total_cor_errs); | ||
633 | aer_stats_rootport_attr(aer_rootport_total_err_fatal, | ||
634 | rootport_total_fatal_errs); | ||
635 | aer_stats_rootport_attr(aer_rootport_total_err_nonfatal, | ||
636 | rootport_total_nonfatal_errs); | ||
637 | |||
638 | static struct attribute *aer_stats_attrs[] __ro_after_init = { | ||
639 | &dev_attr_aer_dev_correctable.attr, | ||
640 | &dev_attr_aer_dev_fatal.attr, | ||
641 | &dev_attr_aer_dev_nonfatal.attr, | ||
642 | &dev_attr_aer_rootport_total_err_cor.attr, | ||
643 | &dev_attr_aer_rootport_total_err_fatal.attr, | ||
644 | &dev_attr_aer_rootport_total_err_nonfatal.attr, | ||
645 | NULL | ||
646 | }; | ||
647 | |||
648 | static umode_t aer_stats_attrs_are_visible(struct kobject *kobj, | ||
649 | struct attribute *a, int n) | ||
650 | { | ||
651 | struct device *dev = kobj_to_dev(kobj); | ||
652 | struct pci_dev *pdev = to_pci_dev(dev); | ||
653 | |||
654 | if (!pdev->aer_stats) | ||
655 | return 0; | ||
656 | |||
657 | if ((a == &dev_attr_aer_rootport_total_err_cor.attr || | ||
658 | a == &dev_attr_aer_rootport_total_err_fatal.attr || | ||
659 | a == &dev_attr_aer_rootport_total_err_nonfatal.attr) && | ||
660 | pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) | ||
661 | return 0; | ||
662 | |||
663 | return a->mode; | ||
664 | } | ||
665 | |||
666 | const struct attribute_group aer_stats_attr_group = { | ||
667 | .attrs = aer_stats_attrs, | ||
668 | .is_visible = aer_stats_attrs_are_visible, | ||
669 | }; | ||
670 | |||
671 | static void pci_dev_aer_stats_incr(struct pci_dev *pdev, | ||
672 | struct aer_err_info *info) | ||
673 | { | ||
674 | int status, i, max = -1; | ||
675 | u64 *counter = NULL; | ||
676 | struct aer_stats *aer_stats = pdev->aer_stats; | ||
677 | |||
678 | if (!aer_stats) | ||
679 | return; | ||
680 | |||
681 | switch (info->severity) { | ||
682 | case AER_CORRECTABLE: | ||
683 | aer_stats->dev_total_cor_errs++; | ||
684 | counter = &aer_stats->dev_cor_errs[0]; | ||
685 | max = AER_MAX_TYPEOF_COR_ERRS; | ||
686 | break; | ||
687 | case AER_NONFATAL: | ||
688 | aer_stats->dev_total_nonfatal_errs++; | ||
689 | counter = &aer_stats->dev_nonfatal_errs[0]; | ||
690 | max = AER_MAX_TYPEOF_UNCOR_ERRS; | ||
691 | break; | ||
692 | case AER_FATAL: | ||
693 | aer_stats->dev_total_fatal_errs++; | ||
694 | counter = &aer_stats->dev_fatal_errs[0]; | ||
695 | max = AER_MAX_TYPEOF_UNCOR_ERRS; | ||
696 | break; | ||
697 | } | ||
698 | |||
699 | status = (info->status & ~info->mask); | ||
700 | for (i = 0; i < max; i++) | ||
701 | if (status & (1 << i)) | ||
702 | counter[i]++; | ||
703 | } | ||
704 | |||
705 | static void pci_rootport_aer_stats_incr(struct pci_dev *pdev, | ||
706 | struct aer_err_source *e_src) | ||
707 | { | ||
708 | struct aer_stats *aer_stats = pdev->aer_stats; | ||
709 | |||
710 | if (!aer_stats) | ||
711 | return; | ||
712 | |||
713 | if (e_src->status & PCI_ERR_ROOT_COR_RCV) | ||
714 | aer_stats->rootport_total_cor_errs++; | ||
715 | |||
716 | if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) { | ||
717 | if (e_src->status & PCI_ERR_ROOT_FATAL_RCV) | ||
718 | aer_stats->rootport_total_fatal_errs++; | ||
719 | else | ||
720 | aer_stats->rootport_total_nonfatal_errs++; | ||
721 | } | ||
722 | } | ||
723 | |||
516 | static void __print_tlp_header(struct pci_dev *dev, | 724 | static void __print_tlp_header(struct pci_dev *dev, |
517 | struct aer_header_log_regs *t) | 725 | struct aer_header_log_regs *t) |
518 | { | 726 | { |
@@ -545,9 +753,10 @@ static void __aer_print_error(struct pci_dev *dev, | |||
545 | pci_err(dev, " [%2d] Unknown Error Bit%s\n", | 753 | pci_err(dev, " [%2d] Unknown Error Bit%s\n", |
546 | i, info->first_error == i ? " (First)" : ""); | 754 | i, info->first_error == i ? " (First)" : ""); |
547 | } | 755 | } |
756 | pci_dev_aer_stats_incr(dev, info); | ||
548 | } | 757 | } |
549 | 758 | ||
550 | static void aer_print_error(struct pci_dev *dev, struct aer_err_info *info) | 759 | void aer_print_error(struct pci_dev *dev, struct aer_err_info *info) |
551 | { | 760 | { |
552 | int layer, agent; | 761 | int layer, agent; |
553 | int id = ((dev->bus->number << 8) | dev->devfn); | 762 | int id = ((dev->bus->number << 8) | dev->devfn); |
@@ -799,6 +1008,7 @@ static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info) | |||
799 | if (pos) | 1008 | if (pos) |
800 | pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS, | 1009 | pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS, |
801 | info->status); | 1010 | info->status); |
1011 | pci_aer_clear_device_status(dev); | ||
802 | } else if (info->severity == AER_NONFATAL) | 1012 | } else if (info->severity == AER_NONFATAL) |
803 | pcie_do_nonfatal_recovery(dev); | 1013 | pcie_do_nonfatal_recovery(dev); |
804 | else if (info->severity == AER_FATAL) | 1014 | else if (info->severity == AER_FATAL) |
@@ -876,7 +1086,7 @@ EXPORT_SYMBOL_GPL(aer_recover_queue); | |||
876 | #endif | 1086 | #endif |
877 | 1087 | ||
878 | /** | 1088 | /** |
879 | * get_device_error_info - read error status from dev and store it to info | 1089 | * aer_get_device_error_info - read error status from dev and store it to info |
880 | * @dev: pointer to the device expected to have an error record | 1090 | * @dev: pointer to the device expected to have an error record |
881 | * @info: pointer to structure to store the error record | 1091 | * @info: pointer to structure to store the error record |
882 | * | 1092 | * |
@@ -884,7 +1094,7 @@ EXPORT_SYMBOL_GPL(aer_recover_queue); | |||
884 | * | 1094 | * |
885 | * Note that @info is reused among all error devices. Clear fields properly. | 1095 | * Note that @info is reused among all error devices. Clear fields properly. |
886 | */ | 1096 | */ |
887 | static int get_device_error_info(struct pci_dev *dev, struct aer_err_info *info) | 1097 | int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info) |
888 | { | 1098 | { |
889 | int pos, temp; | 1099 | int pos, temp; |
890 | 1100 | ||
@@ -942,11 +1152,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info) | |||
942 | 1152 | ||
943 | /* Report all errors before handling them, so records are not lost to resets etc. */ | 1153 | /* Report all errors before handling them, so records are not lost to resets etc. */ |
944 | for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) { | 1154 | for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) { |
945 | if (get_device_error_info(e_info->dev[i], e_info)) | 1155 | if (aer_get_device_error_info(e_info->dev[i], e_info)) |
946 | aer_print_error(e_info->dev[i], e_info); | 1156 | aer_print_error(e_info->dev[i], e_info); |
947 | } | 1157 | } |
948 | for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) { | 1158 | for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) { |
949 | if (get_device_error_info(e_info->dev[i], e_info)) | 1159 | if (aer_get_device_error_info(e_info->dev[i], e_info)) |
950 | handle_error_source(e_info->dev[i], e_info); | 1160 | handle_error_source(e_info->dev[i], e_info); |
951 | } | 1161 | } |
952 | } | 1162 | } |
@@ -962,6 +1172,8 @@ static void aer_isr_one_error(struct aer_rpc *rpc, | |||
962 | struct pci_dev *pdev = rpc->rpd; | 1172 | struct pci_dev *pdev = rpc->rpd; |
963 | struct aer_err_info *e_info = &rpc->e_info; | 1173 | struct aer_err_info *e_info = &rpc->e_info; |
964 | 1174 | ||
1175 | pci_rootport_aer_stats_incr(pdev, e_src); | ||
1176 | |||
965 | /* | 1177 | /* |
966 | * It is possible that both a correctable error and an | 1178 | * It is possible that both a correctable error and an |
967 | * uncorrectable error are logged. Report the correctable error first. | 1179 | * uncorrectable error are logged. Report the correctable error first. |
@@ -1305,6 +1517,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev) | |||
1305 | { | 1517 | { |
1306 | u32 reg32; | 1518 | u32 reg32; |
1307 | int pos; | 1519 | int pos; |
1520 | int rc; | ||
1308 | 1521 | ||
1309 | pos = dev->aer_cap; | 1522 | pos = dev->aer_cap; |
1310 | 1523 | ||
@@ -1313,7 +1526,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev) | |||
1313 | reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; | 1526 | reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; |
1314 | pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); | 1527 | pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); |
1315 | 1528 | ||
1316 | pci_reset_bridge_secondary_bus(dev); | 1529 | rc = pci_bridge_secondary_bus_reset(dev); |
1317 | pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n"); | 1530 | pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n"); |
1318 | 1531 | ||
1319 | /* Clear Root Error Status */ | 1532 | /* Clear Root Error Status */ |
@@ -1325,7 +1538,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev) | |||
1325 | reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; | 1538 | reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; |
1326 | pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); | 1539 | pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); |
1327 | 1540 | ||
1328 | return PCI_ERS_RESULT_RECOVERED; | 1541 | return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; |
1329 | } | 1542 | } |
1330 | 1543 | ||
1331 | /** | 1544 | /** |
@@ -1336,20 +1549,8 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev) | |||
1336 | */ | 1549 | */ |
1337 | static void aer_error_resume(struct pci_dev *dev) | 1550 | static void aer_error_resume(struct pci_dev *dev) |
1338 | { | 1551 | { |
1339 | int pos; | 1552 | pci_aer_clear_device_status(dev); |
1340 | u32 status, mask; | 1553 | pci_cleanup_aer_uncorrect_error_status(dev); |
1341 | u16 reg16; | ||
1342 | |||
1343 | /* Clean up Root device status */ | ||
1344 | pcie_capability_read_word(dev, PCI_EXP_DEVSTA, ®16); | ||
1345 | pcie_capability_write_word(dev, PCI_EXP_DEVSTA, reg16); | ||
1346 | |||
1347 | /* Clean AER Root Error Status */ | ||
1348 | pos = dev->aer_cap; | ||
1349 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); | ||
1350 | pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &mask); | ||
1351 | status &= ~mask; /* Clear corresponding nonfatal bits */ | ||
1352 | pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status); | ||
1353 | } | 1554 | } |
1354 | 1555 | ||
1355 | static struct pcie_port_service_driver aerdriver = { | 1556 | static struct pcie_port_service_driver aerdriver = { |
diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c index c687c817b47d..5326916715d2 100644 --- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c | |||
@@ -1127,11 +1127,9 @@ static int pcie_aspm_set_policy(const char *val, | |||
1127 | 1127 | ||
1128 | if (aspm_disabled) | 1128 | if (aspm_disabled) |
1129 | return -EPERM; | 1129 | return -EPERM; |
1130 | for (i = 0; i < ARRAY_SIZE(policy_str); i++) | 1130 | i = sysfs_match_string(policy_str, val); |
1131 | if (!strncmp(val, policy_str[i], strlen(policy_str[i]))) | 1131 | if (i < 0) |
1132 | break; | 1132 | return i; |
1133 | if (i >= ARRAY_SIZE(policy_str)) | ||
1134 | return -EINVAL; | ||
1135 | if (i == aspm_policy) | 1133 | if (i == aspm_policy) |
1136 | return 0; | 1134 | return 0; |
1137 | 1135 | ||
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c index 921ed979109d..f03279fc87cd 100644 --- a/drivers/pci/pcie/dpc.c +++ b/drivers/pci/pcie/dpc.c | |||
@@ -6,6 +6,7 @@ | |||
6 | * Copyright (C) 2016 Intel Corp. | 6 | * Copyright (C) 2016 Intel Corp. |
7 | */ | 7 | */ |
8 | 8 | ||
9 | #include <linux/aer.h> | ||
9 | #include <linux/delay.h> | 10 | #include <linux/delay.h> |
10 | #include <linux/interrupt.h> | 11 | #include <linux/interrupt.h> |
11 | #include <linux/init.h> | 12 | #include <linux/init.h> |
@@ -16,10 +17,8 @@ | |||
16 | 17 | ||
17 | struct dpc_dev { | 18 | struct dpc_dev { |
18 | struct pcie_device *dev; | 19 | struct pcie_device *dev; |
19 | struct work_struct work; | ||
20 | u16 cap_pos; | 20 | u16 cap_pos; |
21 | bool rp_extensions; | 21 | bool rp_extensions; |
22 | u32 rp_pio_status; | ||
23 | u8 rp_log_size; | 22 | u8 rp_log_size; |
24 | }; | 23 | }; |
25 | 24 | ||
@@ -65,19 +64,13 @@ static int dpc_wait_rp_inactive(struct dpc_dev *dpc) | |||
65 | return 0; | 64 | return 0; |
66 | } | 65 | } |
67 | 66 | ||
68 | static void dpc_wait_link_inactive(struct dpc_dev *dpc) | ||
69 | { | ||
70 | struct pci_dev *pdev = dpc->dev->port; | ||
71 | |||
72 | pcie_wait_for_link(pdev, false); | ||
73 | } | ||
74 | |||
75 | static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev) | 67 | static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev) |
76 | { | 68 | { |
77 | struct dpc_dev *dpc; | 69 | struct dpc_dev *dpc; |
78 | struct pcie_device *pciedev; | 70 | struct pcie_device *pciedev; |
79 | struct device *devdpc; | 71 | struct device *devdpc; |
80 | u16 cap, ctl; | 72 | |
73 | u16 cap; | ||
81 | 74 | ||
82 | /* | 75 | /* |
83 | * DPC disables the Link automatically in hardware, so it has | 76 | * DPC disables the Link automatically in hardware, so it has |
@@ -92,34 +85,17 @@ static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev) | |||
92 | * Wait until the Link is inactive, then clear DPC Trigger Status | 85 | * Wait until the Link is inactive, then clear DPC Trigger Status |
93 | * to allow the Port to leave DPC. | 86 | * to allow the Port to leave DPC. |
94 | */ | 87 | */ |
95 | dpc_wait_link_inactive(dpc); | 88 | pcie_wait_for_link(pdev, false); |
96 | 89 | ||
97 | if (dpc->rp_extensions && dpc_wait_rp_inactive(dpc)) | 90 | if (dpc->rp_extensions && dpc_wait_rp_inactive(dpc)) |
98 | return PCI_ERS_RESULT_DISCONNECT; | 91 | return PCI_ERS_RESULT_DISCONNECT; |
99 | if (dpc->rp_extensions && dpc->rp_pio_status) { | ||
100 | pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, | ||
101 | dpc->rp_pio_status); | ||
102 | dpc->rp_pio_status = 0; | ||
103 | } | ||
104 | 92 | ||
105 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, | 93 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, |
106 | PCI_EXP_DPC_STATUS_TRIGGER); | 94 | PCI_EXP_DPC_STATUS_TRIGGER); |
107 | 95 | ||
108 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_CTL, &ctl); | ||
109 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_CTL, | ||
110 | ctl | PCI_EXP_DPC_CTL_INT_EN); | ||
111 | |||
112 | return PCI_ERS_RESULT_RECOVERED; | 96 | return PCI_ERS_RESULT_RECOVERED; |
113 | } | 97 | } |
114 | 98 | ||
115 | static void dpc_work(struct work_struct *work) | ||
116 | { | ||
117 | struct dpc_dev *dpc = container_of(work, struct dpc_dev, work); | ||
118 | struct pci_dev *pdev = dpc->dev->port; | ||
119 | |||
120 | /* We configure DPC so it only triggers on ERR_FATAL */ | ||
121 | pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_DPC); | ||
122 | } | ||
123 | 99 | ||
124 | static void dpc_process_rp_pio_error(struct dpc_dev *dpc) | 100 | static void dpc_process_rp_pio_error(struct dpc_dev *dpc) |
125 | { | 101 | { |
@@ -134,8 +110,6 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc) | |||
134 | dev_err(dev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n", | 110 | dev_err(dev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n", |
135 | status, mask); | 111 | status, mask); |
136 | 112 | ||
137 | dpc->rp_pio_status = status; | ||
138 | |||
139 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SEVERITY, &sev); | 113 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SEVERITY, &sev); |
140 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SYSERROR, &syserr); | 114 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SYSERROR, &syserr); |
141 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_EXCEPTION, &exc); | 115 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_EXCEPTION, &exc); |
@@ -146,15 +120,14 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc) | |||
146 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &dpc_status); | 120 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &dpc_status); |
147 | first_error = (dpc_status & 0x1f00) >> 8; | 121 | first_error = (dpc_status & 0x1f00) >> 8; |
148 | 122 | ||
149 | status &= ~mask; | ||
150 | for (i = 0; i < ARRAY_SIZE(rp_pio_error_string); i++) { | 123 | for (i = 0; i < ARRAY_SIZE(rp_pio_error_string); i++) { |
151 | if (status & (1 << i)) | 124 | if ((status & ~mask) & (1 << i)) |
152 | dev_err(dev, "[%2d] %s%s\n", i, rp_pio_error_string[i], | 125 | dev_err(dev, "[%2d] %s%s\n", i, rp_pio_error_string[i], |
153 | first_error == i ? " (First)" : ""); | 126 | first_error == i ? " (First)" : ""); |
154 | } | 127 | } |
155 | 128 | ||
156 | if (dpc->rp_log_size < 4) | 129 | if (dpc->rp_log_size < 4) |
157 | return; | 130 | goto clear_status; |
158 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG, | 131 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG, |
159 | &dw0); | 132 | &dw0); |
160 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 4, | 133 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 4, |
@@ -167,7 +140,7 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc) | |||
167 | dw0, dw1, dw2, dw3); | 140 | dw0, dw1, dw2, dw3); |
168 | 141 | ||
169 | if (dpc->rp_log_size < 5) | 142 | if (dpc->rp_log_size < 5) |
170 | return; | 143 | goto clear_status; |
171 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log); | 144 | pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log); |
172 | dev_err(dev, "RP PIO ImpSpec Log %#010x\n", log); | 145 | dev_err(dev, "RP PIO ImpSpec Log %#010x\n", log); |
173 | 146 | ||
@@ -176,43 +149,26 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc) | |||
176 | cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix); | 149 | cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix); |
177 | dev_err(dev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix); | 150 | dev_err(dev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix); |
178 | } | 151 | } |
152 | clear_status: | ||
153 | pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status); | ||
179 | } | 154 | } |
180 | 155 | ||
181 | static irqreturn_t dpc_irq(int irq, void *context) | 156 | static irqreturn_t dpc_handler(int irq, void *context) |
182 | { | 157 | { |
183 | struct dpc_dev *dpc = (struct dpc_dev *)context; | 158 | struct aer_err_info info; |
159 | struct dpc_dev *dpc = context; | ||
184 | struct pci_dev *pdev = dpc->dev->port; | 160 | struct pci_dev *pdev = dpc->dev->port; |
185 | struct device *dev = &dpc->dev->device; | 161 | struct device *dev = &dpc->dev->device; |
186 | u16 cap = dpc->cap_pos, ctl, status, source, reason, ext_reason; | 162 | u16 cap = dpc->cap_pos, status, source, reason, ext_reason; |
187 | |||
188 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_CTL, &ctl); | ||
189 | |||
190 | if (!(ctl & PCI_EXP_DPC_CTL_INT_EN) || ctl == (u16)(~0)) | ||
191 | return IRQ_NONE; | ||
192 | 163 | ||
193 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status); | 164 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status); |
194 | 165 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source); | |
195 | if (!(status & PCI_EXP_DPC_STATUS_INTERRUPT)) | ||
196 | return IRQ_NONE; | ||
197 | |||
198 | if (!(status & PCI_EXP_DPC_STATUS_TRIGGER)) { | ||
199 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, | ||
200 | PCI_EXP_DPC_STATUS_INTERRUPT); | ||
201 | return IRQ_HANDLED; | ||
202 | } | ||
203 | |||
204 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_CTL, | ||
205 | ctl & ~PCI_EXP_DPC_CTL_INT_EN); | ||
206 | |||
207 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, | ||
208 | &source); | ||
209 | 166 | ||
210 | dev_info(dev, "DPC containment event, status:%#06x source:%#06x\n", | 167 | dev_info(dev, "DPC containment event, status:%#06x source:%#06x\n", |
211 | status, source); | 168 | status, source); |
212 | 169 | ||
213 | reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1; | 170 | reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1; |
214 | ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5; | 171 | ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5; |
215 | |||
216 | dev_warn(dev, "DPC %s detected, remove downstream devices\n", | 172 | dev_warn(dev, "DPC %s detected, remove downstream devices\n", |
217 | (reason == 0) ? "unmasked uncorrectable error" : | 173 | (reason == 0) ? "unmasked uncorrectable error" : |
218 | (reason == 1) ? "ERR_NONFATAL" : | 174 | (reason == 1) ? "ERR_NONFATAL" : |
@@ -220,15 +176,36 @@ static irqreturn_t dpc_irq(int irq, void *context) | |||
220 | (ext_reason == 0) ? "RP PIO error" : | 176 | (ext_reason == 0) ? "RP PIO error" : |
221 | (ext_reason == 1) ? "software trigger" : | 177 | (ext_reason == 1) ? "software trigger" : |
222 | "reserved error"); | 178 | "reserved error"); |
179 | |||
223 | /* show RP PIO error detail information */ | 180 | /* show RP PIO error detail information */ |
224 | if (dpc->rp_extensions && reason == 3 && ext_reason == 0) | 181 | if (dpc->rp_extensions && reason == 3 && ext_reason == 0) |
225 | dpc_process_rp_pio_error(dpc); | 182 | dpc_process_rp_pio_error(dpc); |
183 | else if (reason == 0 && aer_get_device_error_info(pdev, &info)) { | ||
184 | aer_print_error(pdev, &info); | ||
185 | pci_cleanup_aer_uncorrect_error_status(pdev); | ||
186 | } | ||
226 | 187 | ||
227 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, | 188 | /* We configure DPC so it only triggers on ERR_FATAL */ |
228 | PCI_EXP_DPC_STATUS_INTERRUPT); | 189 | pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_DPC); |
190 | |||
191 | return IRQ_HANDLED; | ||
192 | } | ||
193 | |||
194 | static irqreturn_t dpc_irq(int irq, void *context) | ||
195 | { | ||
196 | struct dpc_dev *dpc = (struct dpc_dev *)context; | ||
197 | struct pci_dev *pdev = dpc->dev->port; | ||
198 | u16 cap = dpc->cap_pos, status; | ||
199 | |||
200 | pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status); | ||
229 | 201 | ||
230 | schedule_work(&dpc->work); | 202 | if (!(status & PCI_EXP_DPC_STATUS_INTERRUPT) || status == (u16)(~0)) |
203 | return IRQ_NONE; | ||
231 | 204 | ||
205 | pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, | ||
206 | PCI_EXP_DPC_STATUS_INTERRUPT); | ||
207 | if (status & PCI_EXP_DPC_STATUS_TRIGGER) | ||
208 | return IRQ_WAKE_THREAD; | ||
232 | return IRQ_HANDLED; | 209 | return IRQ_HANDLED; |
233 | } | 210 | } |
234 | 211 | ||
@@ -250,11 +227,11 @@ static int dpc_probe(struct pcie_device *dev) | |||
250 | 227 | ||
251 | dpc->cap_pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DPC); | 228 | dpc->cap_pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DPC); |
252 | dpc->dev = dev; | 229 | dpc->dev = dev; |
253 | INIT_WORK(&dpc->work, dpc_work); | ||
254 | set_service_data(dev, dpc); | 230 | set_service_data(dev, dpc); |
255 | 231 | ||
256 | status = devm_request_irq(device, dev->irq, dpc_irq, IRQF_SHARED, | 232 | status = devm_request_threaded_irq(device, dev->irq, dpc_irq, |
257 | "pcie-dpc", dpc); | 233 | dpc_handler, IRQF_SHARED, |
234 | "pcie-dpc", dpc); | ||
258 | if (status) { | 235 | if (status) { |
259 | dev_warn(device, "request IRQ%d failed: %d\n", dev->irq, | 236 | dev_warn(device, "request IRQ%d failed: %d\n", dev->irq, |
260 | status); | 237 | status); |
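The dpc.c conversion splits the old workqueue-based handler into a hard-IRQ half (`dpc_irq`), which only acks the interrupt and decides whether a sleepable thread half (`dpc_handler`) must run, and hands the rest to `devm_request_threaded_irq()`. A sketch of the hard half's decision logic, with the status word simulated rather than read from config space (bit values follow the DPC Status register layout, but treat them as assumptions here):

```c
#include <assert.h>
#include <stdint.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED, IRQ_WAKE_THREAD };

#define DPC_STATUS_TRIGGER   0x0001  /* containment triggered */
#define DPC_STATUS_INTERRUPT 0x0008  /* interrupt asserted */

/* Mirrors dpc_irq(): reject spurious/shared interrupts and dead
 * devices, ack the interrupt, and wake the thread only when DPC
 * actually triggered. */
static enum irqreturn hard_half(uint16_t status)
{
        if (!(status & DPC_STATUS_INTERRUPT) || status == (uint16_t)~0)
                return IRQ_NONE;        /* not ours, or device gone */
        /* the real handler writes DPC_STATUS_INTERRUPT back to ack here */
        if (status & DPC_STATUS_TRIGGER)
                return IRQ_WAKE_THREAD; /* run dpc_handler() in a thread */
        return IRQ_HANDLED;
}
```

Keeping only register pokes in the hard half is what allows `dpc_handler()` to call `pcie_do_fatal_recovery()` — which can sleep — without the deferred-work plumbing the old `dpc_work()` needed.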
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c index f02e334beb45..708fd3a0d646 100644 --- a/drivers/pci/pcie/err.c +++ b/drivers/pci/pcie/err.c | |||
@@ -175,9 +175,11 @@ out: | |||
175 | */ | 175 | */ |
176 | static pci_ers_result_t default_reset_link(struct pci_dev *dev) | 176 | static pci_ers_result_t default_reset_link(struct pci_dev *dev) |
177 | { | 177 | { |
178 | pci_reset_bridge_secondary_bus(dev); | 178 | int rc; |
179 | |||
180 | rc = pci_bridge_secondary_bus_reset(dev); | ||
179 | pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n"); | 181 | pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n"); |
180 | return PCI_ERS_RESULT_RECOVERED; | 182 | return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; |
181 | } | 183 | } |
182 | 184 | ||
183 | static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service) | 185 | static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service) |
@@ -252,6 +254,7 @@ static pci_ers_result_t broadcast_error_message(struct pci_dev *dev, | |||
252 | dev->error_state = state; | 254 | dev->error_state = state; |
253 | pci_walk_bus(dev->subordinate, cb, &result_data); | 255 | pci_walk_bus(dev->subordinate, cb, &result_data); |
254 | if (cb == report_resume) { | 256 | if (cb == report_resume) { |
257 | pci_aer_clear_device_status(dev); | ||
255 | pci_cleanup_aer_uncorrect_error_status(dev); | 258 | pci_cleanup_aer_uncorrect_error_status(dev); |
256 | dev->error_state = pci_channel_io_normal; | 259 | dev->error_state = pci_channel_io_normal; |
257 | } | 260 | } |
@@ -259,15 +262,10 @@ static pci_ers_result_t broadcast_error_message(struct pci_dev *dev, | |||
259 | /* | 262 | /* |
260 | * If the error is reported by an end point, we think this | 263 | * If the error is reported by an end point, we think this |
261 | * error is related to the upstream link of the end point. | 264 | * error is related to the upstream link of the end point. |
265 | * The error is non fatal so the bus is ok; just invoke | ||
266 | * the callback for the function that logged the error. | ||
262 | */ | 267 | */ |
263 | if (state == pci_channel_io_normal) | 268 | cb(dev, &result_data); |
264 | /* | ||
265 | * the error is non fatal so the bus is ok, just invoke | ||
266 | * the callback for the function that logged the error. | ||
267 | */ | ||
268 | cb(dev, &result_data); | ||
269 | else | ||
270 | pci_walk_bus(dev->bus, cb, &result_data); | ||
271 | } | 269 | } |
272 | 270 | ||
273 | return result_data.result; | 271 | return result_data.result; |
@@ -317,7 +315,8 @@ void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service) | |||
317 | * do error recovery on all subordinates of the bridge instead | 315 | * do error recovery on all subordinates of the bridge instead |
318 | * of the bridge and clear the error status of the bridge. | 316 | * of the bridge and clear the error status of the bridge. |
319 | */ | 317 | */ |
320 | pci_cleanup_aer_uncorrect_error_status(dev); | 318 | pci_aer_clear_fatal_status(dev); |
319 | pci_aer_clear_device_status(dev); | ||
321 | } | 320 | } |
322 | 321 | ||
323 | if (result == PCI_ERS_RESULT_RECOVERED) { | 322 | if (result == PCI_ERS_RESULT_RECOVERED) { |
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
index 6ffc797a0dc1..d59afa42fc14 100644
--- a/drivers/pci/pcie/portdrv.h
+++ b/drivers/pci/pcie/portdrv.h
@@ -50,6 +50,7 @@ struct pcie_port_service_driver {
 	int (*probe) (struct pcie_device *dev);
 	void (*remove) (struct pcie_device *dev);
 	int (*suspend) (struct pcie_device *dev);
+	int (*resume_noirq) (struct pcie_device *dev);
 	int (*resume) (struct pcie_device *dev);

 	/* Device driver may resume normal operations */
@@ -82,6 +83,7 @@ extern struct bus_type pcie_port_bus_type;
 int pcie_port_device_register(struct pci_dev *dev);
 #ifdef CONFIG_PM
 int pcie_port_device_suspend(struct device *dev);
+int pcie_port_device_resume_noirq(struct device *dev);
 int pcie_port_device_resume(struct device *dev);
 #endif
 void pcie_port_device_remove(struct pci_dev *dev);
diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
index e0261ad4bcdd..7c37d815229e 100644
--- a/drivers/pci/pcie/portdrv_core.c
+++ b/drivers/pci/pcie/portdrv_core.c
@@ -353,14 +353,19 @@ error_disable:
 }

 #ifdef CONFIG_PM
-static int suspend_iter(struct device *dev, void *data)
+typedef int (*pcie_pm_callback_t)(struct pcie_device *);
+
+static int pm_iter(struct device *dev, void *data)
 {
 	struct pcie_port_service_driver *service_driver;
+	size_t offset = *(size_t *)data;
+	pcie_pm_callback_t cb;

 	if ((dev->bus == &pcie_port_bus_type) && dev->driver) {
 		service_driver = to_service_driver(dev->driver);
-		if (service_driver->suspend)
-			service_driver->suspend(to_pcie_device(dev));
+		cb = *(pcie_pm_callback_t *)((void *)service_driver + offset);
+		if (cb)
+			return cb(to_pcie_device(dev));
 	}
 	return 0;
 }
@@ -371,20 +376,14 @@ static int suspend_iter(struct device *dev, void *data)
  */
 int pcie_port_device_suspend(struct device *dev)
 {
-	return device_for_each_child(dev, NULL, suspend_iter);
+	size_t off = offsetof(struct pcie_port_service_driver, suspend);
+	return device_for_each_child(dev, &off, pm_iter);
 }

-static int resume_iter(struct device *dev, void *data)
+int pcie_port_device_resume_noirq(struct device *dev)
 {
-	struct pcie_port_service_driver *service_driver;
-
-	if ((dev->bus == &pcie_port_bus_type) &&
-	    (dev->driver)) {
-		service_driver = to_service_driver(dev->driver);
-		if (service_driver->resume)
-			service_driver->resume(to_pcie_device(dev));
-	}
-	return 0;
+	size_t off = offsetof(struct pcie_port_service_driver, resume_noirq);
+	return device_for_each_child(dev, &off, pm_iter);
 }

 /**
@@ -393,7 +392,8 @@ static int resume_iter(struct device *dev, void *data)
  */
 int pcie_port_device_resume(struct device *dev)
 {
-	return device_for_each_child(dev, NULL, resume_iter);
+	size_t off = offsetof(struct pcie_port_service_driver, resume);
+	return device_for_each_child(dev, &off, pm_iter);
 }
 #endif /* PM */
diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index 973f1b80a038..eef22dc29140 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -42,17 +42,6 @@ __setup("pcie_ports=", pcie_port_setup);

 /* global data */

-static int pcie_portdrv_restore_config(struct pci_dev *dev)
-{
-	int retval;
-
-	retval = pci_enable_device(dev);
-	if (retval)
-		return retval;
-	pci_set_master(dev);
-	return 0;
-}
-
 #ifdef CONFIG_PM
 static int pcie_port_runtime_suspend(struct device *dev)
 {
@@ -76,10 +65,12 @@ static int pcie_port_runtime_idle(struct device *dev)

 static const struct dev_pm_ops pcie_portdrv_pm_ops = {
 	.suspend = pcie_port_device_suspend,
+	.resume_noirq = pcie_port_device_resume_noirq,
 	.resume = pcie_port_device_resume,
 	.freeze = pcie_port_device_suspend,
 	.thaw = pcie_port_device_resume,
 	.poweroff = pcie_port_device_suspend,
+	.restore_noirq = pcie_port_device_resume_noirq,
 	.restore = pcie_port_device_resume,
 	.runtime_suspend = pcie_port_runtime_suspend,
 	.runtime_resume = pcie_port_runtime_resume,
@@ -160,19 +151,6 @@ static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
 	return PCI_ERS_RESULT_RECOVERED;
 }

-static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
-{
-	/* If fatal, restore cfg space for possible link reset at upstream */
-	if (dev->error_state == pci_channel_io_frozen) {
-		dev->state_saved = true;
-		pci_restore_state(dev);
-		pcie_portdrv_restore_config(dev);
-		pci_enable_pcie_error_reporting(dev);
-	}
-
-	return PCI_ERS_RESULT_RECOVERED;
-}
-
 static int resume_iter(struct device *device, void *data)
 {
 	struct pcie_device *pcie_device;
@@ -208,7 +186,6 @@ static const struct pci_device_id port_pci_ids[] = { {
 static const struct pci_error_handlers pcie_portdrv_err_handler = {
 	.error_detected = pcie_portdrv_error_detected,
 	.mmio_enabled = pcie_portdrv_mmio_enabled,
-	.slot_reset = pcie_portdrv_slot_reset,
 	.resume = pcie_portdrv_err_resume,
 };
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 611adcd9c169..ec784009a36b 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -13,7 +13,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/cpumask.h>
-#include <linux/pci-aspm.h>
 #include <linux/aer.h>
 #include <linux/acpi.h>
 #include <linux/hypervisor.h>
@@ -1549,6 +1548,20 @@ static int pci_intx_mask_broken(struct pci_dev *dev)
 	return 0;
 }

+static void early_dump_pci_device(struct pci_dev *pdev)
+{
+	u32 value[256 / 4];
+	int i;
+
+	pci_info(pdev, "config space:\n");
+
+	for (i = 0; i < 256; i += 4)
+		pci_read_config_dword(pdev, i, &value[i / 4]);
+
+	print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1,
+		       value, 256, false);
+}
+
 /**
  * pci_setup_device - Fill in class and map information of a device
  * @dev: the device structure to fill
@@ -1598,6 +1611,9 @@ int pci_setup_device(struct pci_dev *dev)
 	pci_printk(KERN_DEBUG, dev, "[%04x:%04x] type %02x class %#08x\n",
 		   dev->vendor, dev->device, dev->hdr_type, dev->class);

+	if (pci_early_dump)
+		early_dump_pci_device(dev);
+
 	/* Need to have dev->class ready */
 	dev->cfg_size = pci_cfg_space_size(dev);
@@ -1725,11 +1741,15 @@ int pci_setup_device(struct pci_dev *dev)
 static void pci_configure_mps(struct pci_dev *dev)
 {
 	struct pci_dev *bridge = pci_upstream_bridge(dev);
-	int mps, p_mps, rc;
+	int mps, mpss, p_mps, rc;

 	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
 		return;

+	/* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */
+	if (dev->is_virtfn)
+		return;
+
 	mps = pcie_get_mps(dev);
 	p_mps = pcie_get_mps(bridge);

@@ -1749,6 +1769,14 @@ static void pci_configure_mps(struct pci_dev *dev)
 	if (pcie_bus_config != PCIE_BUS_DEFAULT)
 		return;

+	mpss = 128 << dev->pcie_mpss;
+	if (mpss < p_mps && pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT) {
+		pcie_set_mps(bridge, mpss);
+		pci_info(dev, "Upstream bridge's Max Payload Size set to %d (was %d, max %d)\n",
+			 mpss, p_mps, 128 << bridge->pcie_mpss);
+		p_mps = pcie_get_mps(bridge);
+	}
+
 	rc = pcie_set_mps(dev, p_mps);
 	if (rc) {
 		pci_warn(dev, "can't set Max Payload Size to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n",
@@ -1757,7 +1785,7 @@ static void pci_configure_mps(struct pci_dev *dev)
 	}

 	pci_info(dev, "Max Payload Size set to %d (was %d, max %d)\n",
-		 p_mps, mps, mpss);
+		 p_mps, mps, mpss);
 }

 static struct hpp_type0 pci_default_type0 = {
@@ -2042,6 +2070,29 @@ static void pci_configure_ltr(struct pci_dev *dev)
 #endif
 }

+static void pci_configure_eetlp_prefix(struct pci_dev *dev)
+{
+#ifdef CONFIG_PCI_PASID
+	struct pci_dev *bridge;
+	u32 cap;
+
+	if (!pci_is_pcie(dev))
+		return;
+
+	pcie_capability_read_dword(dev, PCI_EXP_DEVCAP2, &cap);
+	if (!(cap & PCI_EXP_DEVCAP2_EE_PREFIX))
+		return;
+
+	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+		dev->eetlp_prefix_path = 1;
+	else {
+		bridge = pci_upstream_bridge(dev);
+		if (bridge && bridge->eetlp_prefix_path)
+			dev->eetlp_prefix_path = 1;
+	}
+#endif
+}
+
 static void pci_configure_device(struct pci_dev *dev)
 {
 	struct hotplug_params hpp;
@@ -2051,6 +2102,7 @@ static void pci_configure_device(struct pci_dev *dev)
 	pci_configure_extended_tags(dev, NULL);
 	pci_configure_relaxed_ordering(dev);
 	pci_configure_ltr(dev);
+	pci_configure_eetlp_prefix(dev);

 	memset(&hpp, 0, sizeof(hpp));
 	ret = pci_get_hp_params(dev, &hpp);
@@ -2064,6 +2116,7 @@ static void pci_configure_device(struct pci_dev *dev)

 static void pci_release_capabilities(struct pci_dev *dev)
 {
+	pci_aer_exit(dev);
 	pci_vpd_release(dev);
 	pci_iov_release(dev);
 	pci_free_cap_save_buffers(dev);
@@ -2156,8 +2209,8 @@ static bool pci_bus_wait_crs(struct pci_bus *bus, int devfn, u32 *l,
 	return true;
 }

-bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
-				int timeout)
+bool pci_bus_generic_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
+					int timeout)
 {
 	if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l))
 		return false;
@@ -2172,6 +2225,24 @@ bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,

 	return true;
 }
+
+bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
+				int timeout)
+{
+#ifdef CONFIG_PCI_QUIRKS
+	struct pci_dev *bridge = bus->self;
+
+	/*
+	 * Certain IDT switches have an issue where they improperly trigger
+	 * ACS Source Validation errors on completions for config reads.
+	 */
+	if (bridge && bridge->vendor == PCI_VENDOR_ID_IDT &&
+	    bridge->device == 0x80b5)
+		return pci_idt_bus_quirk(bus, devfn, l, timeout);
+#endif
+
+	return pci_bus_generic_read_dev_vendor_id(bus, devfn, l, timeout);
+}
 EXPORT_SYMBOL(pci_bus_read_dev_vendor_id);

 /*
@@ -2205,6 +2276,25 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
 	return dev;
 }

+static void pcie_report_downtraining(struct pci_dev *dev)
+{
+	if (!pci_is_pcie(dev))
+		return;
+
+	/* Look from the device up to avoid downstream ports with no devices */
+	if ((pci_pcie_type(dev) != PCI_EXP_TYPE_ENDPOINT) &&
+	    (pci_pcie_type(dev) != PCI_EXP_TYPE_LEG_END) &&
+	    (pci_pcie_type(dev) != PCI_EXP_TYPE_UPSTREAM))
+		return;
+
+	/* Multi-function PCIe devices share the same link/status */
+	if (PCI_FUNC(dev->devfn) != 0 || dev->is_virtfn)
+		return;
+
+	/* Print link status only if the device is constrained by the fabric */
+	__pcie_print_link_status(dev, false);
+}
+
 static void pci_init_capabilities(struct pci_dev *dev)
 {
 	/* Enhanced Allocation */
@@ -2240,6 +2330,8 @@ static void pci_init_capabilities(struct pci_dev *dev)
 	/* Advanced Error Reporting */
 	pci_aer_init(dev);

+	pcie_report_downtraining(dev);
+
 	if (pci_probe_reset_function(dev) == 0)
 		dev->reset_fn = 1;
 }
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index f439de848658..46f58a9771d7 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -25,8 +25,10 @@
 #include <linux/sched.h>
 #include <linux/ktime.h>
 #include <linux/mm.h>
+#include <linux/nvme.h>
 #include <linux/platform_data/x86/apple.h>
 #include <linux/pm_runtime.h>
+#include <linux/switchtec.h>
 #include <asm/dma.h>	/* isa_dma_bridge_buggy */
 #include "pci.h"

@@ -460,6 +462,7 @@ static void quirk_nfp6000(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP4000, quirk_nfp6000);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP6000, quirk_nfp6000);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP5000, quirk_nfp6000);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP6000_VF, quirk_nfp6000);

 /* On IBM Crocodile ipr SAS adapters, expand BAR to system page size */
@@ -2105,6 +2108,7 @@ static void quirk_netmos(struct pci_dev *dev)
 		if (dev->subsystem_vendor == PCI_VENDOR_ID_IBM &&
 		    dev->subsystem_device == 0x0299)
 			return;
+		/* else: fall through */
 	case PCI_DEVICE_ID_NETMOS_9735:
 	case PCI_DEVICE_ID_NETMOS_9745:
 	case PCI_DEVICE_ID_NETMOS_9845:
@@ -2352,6 +2356,9 @@ static void quirk_paxc_bridge(struct pci_dev *pdev)
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16cd, quirk_paxc_bridge);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16f0, quirk_paxc_bridge);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0xd750, quirk_paxc_bridge);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0xd802, quirk_paxc_bridge);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0xd804, quirk_paxc_bridge);
 #endif

 /*
@@ -3664,6 +3671,108 @@ static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
 #define PCI_DEVICE_ID_INTEL_IVB_M_VGA      0x0156
 #define PCI_DEVICE_ID_INTEL_IVB_M2_VGA     0x0166

+/*
+ * The Samsung SM961/PM961 controller can sometimes enter a fatal state after
+ * FLR where config space reads from the device return -1.  We seem to be
+ * able to avoid this condition if we disable the NVMe controller prior to
+ * FLR.  This quirk is generic for any NVMe class device requiring similar
+ * assistance to quiesce the device prior to FLR.
+ *
+ * NVMe specification: https://nvmexpress.org/resources/specifications/
+ * Revision 1.0e:
+ *    Chapter 2: Required and optional PCI config registers
+ *    Chapter 3: NVMe control registers
+ *    Chapter 7.3: Reset behavior
+ */
+static int nvme_disable_and_flr(struct pci_dev *dev, int probe)
+{
+	void __iomem *bar;
+	u16 cmd;
+	u32 cfg;
+
+	if (dev->class != PCI_CLASS_STORAGE_EXPRESS ||
+	    !pcie_has_flr(dev) || !pci_resource_start(dev, 0))
+		return -ENOTTY;
+
+	if (probe)
+		return 0;
+
+	bar = pci_iomap(dev, 0, NVME_REG_CC + sizeof(cfg));
+	if (!bar)
+		return -ENOTTY;
+
+	pci_read_config_word(dev, PCI_COMMAND, &cmd);
+	pci_write_config_word(dev, PCI_COMMAND, cmd | PCI_COMMAND_MEMORY);
+
+	cfg = readl(bar + NVME_REG_CC);
+
+	/* Disable controller if enabled */
+	if (cfg & NVME_CC_ENABLE) {
+		u32 cap = readl(bar + NVME_REG_CAP);
+		unsigned long timeout;
+
+		/*
+		 * Per nvme_disable_ctrl() skip shutdown notification as it
+		 * could complete commands to the admin queue.  We only intend
+		 * to quiesce the device before reset.
+		 */
+		cfg &= ~(NVME_CC_SHN_MASK | NVME_CC_ENABLE);
+
+		writel(cfg, bar + NVME_REG_CC);
+
+		/*
+		 * Some controllers require an additional delay here, see
+		 * NVME_QUIRK_DELAY_BEFORE_CHK_RDY.  None of those are yet
+		 * supported by this quirk.
+		 */
+
+		/* Cap register provides max timeout in 500ms increments */
+		timeout = ((NVME_CAP_TIMEOUT(cap) + 1) * HZ / 2) + jiffies;
+
+		for (;;) {
+			u32 status = readl(bar + NVME_REG_CSTS);
+
+			/* Ready status becomes zero on disable complete */
+			if (!(status & NVME_CSTS_RDY))
+				break;
+
+			msleep(100);
+
+			if (time_after(jiffies, timeout)) {
+				pci_warn(dev, "Timeout waiting for NVMe ready status to clear after disable\n");
+				break;
+			}
+		}
+	}
+
+	pci_iounmap(dev, bar);
+
+	pcie_flr(dev);
+
+	return 0;
+}
+
+/*
+ * Intel DC P3700 NVMe controller will timeout waiting for ready status
+ * to change after NVMe enable if the driver starts interacting with the
+ * device too soon after FLR.  A 250ms delay after FLR has heuristically
+ * proven to produce reliably working results for device assignment cases.
+ */
+static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
+{
+	if (!pcie_has_flr(dev))
+		return -ENOTTY;
+
+	if (probe)
+		return 0;
+
+	pcie_flr(dev);
+
+	msleep(250);
+
+	return 0;
+}
+
 static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
 	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82599_SFP_VF,
		 reset_intel_82599_sfp_virtfn },
@@ -3671,6 +3780,8 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
 		reset_ivb_igd },
 	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IVB_M2_VGA,
 		reset_ivb_igd },
+	{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
+	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
 	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
 		reset_chelsio_generic_dev },
 	{ 0 }
@@ -3740,6 +3851,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x917a,
 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c78 */
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9182,
			 quirk_dma_func1_alias);
+/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c134 */
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9183,
+			 quirk_dma_func1_alias);
 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c46 */
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x91a0,
			 quirk_dma_func1_alias);
@@ -4553,27 +4667,79 @@ static int pci_quirk_enable_intel_spt_pch_acs(struct pci_dev *dev)
 	return 0;
 }

-static const struct pci_dev_enable_acs {
+static int pci_quirk_disable_intel_spt_pch_acs_redir(struct pci_dev *dev)
+{
+	int pos;
+	u32 cap, ctrl;
+
+	if (!pci_quirk_intel_spt_pch_acs_match(dev))
+		return -ENOTTY;
+
+	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS);
+	if (!pos)
+		return -ENOTTY;
+
+	pci_read_config_dword(dev, pos + PCI_ACS_CAP, &cap);
+	pci_read_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, &ctrl);
+
+	ctrl &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC);
+
+	pci_write_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, ctrl);
+
+	pci_info(dev, "Intel SPT PCH root port workaround: disabled ACS redirect\n");
+
+	return 0;
+}
+
+static const struct pci_dev_acs_ops {
 	u16 vendor;
 	u16 device;
 	int (*enable_acs)(struct pci_dev *dev);
-} pci_dev_enable_acs[] = {
-	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_enable_intel_pch_acs },
-	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_enable_intel_spt_pch_acs },
-	{ 0 }
+	int (*disable_acs_redir)(struct pci_dev *dev);
+} pci_dev_acs_ops[] = {
+	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+	    .enable_acs = pci_quirk_enable_intel_pch_acs,
+	},
+	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+	    .enable_acs = pci_quirk_enable_intel_spt_pch_acs,
+	    .disable_acs_redir = pci_quirk_disable_intel_spt_pch_acs_redir,
+	},
 };

 int pci_dev_specific_enable_acs(struct pci_dev *dev)
 {
-	const struct pci_dev_enable_acs *i;
-	int ret;
+	const struct pci_dev_acs_ops *p;
+	int i, ret;
+
+	for (i = 0; i < ARRAY_SIZE(pci_dev_acs_ops); i++) {
+		p = &pci_dev_acs_ops[i];
+		if ((p->vendor == dev->vendor ||
+		     p->vendor == (u16)PCI_ANY_ID) &&
+		    (p->device == dev->device ||
+		     p->device == (u16)PCI_ANY_ID) &&
+		    p->enable_acs) {
+			ret = p->enable_acs(dev);
+			if (ret >= 0)
+				return ret;
+		}
+	}

-	for (i = pci_dev_enable_acs; i->enable_acs; i++) {
-		if ((i->vendor == dev->vendor ||
-		     i->vendor == (u16)PCI_ANY_ID) &&
-		    (i->device == dev->device ||
-		     i->device == (u16)PCI_ANY_ID)) {
-			ret = i->enable_acs(dev);
+	return -ENOTTY;
+}
+
+int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
+{
+	const struct pci_dev_acs_ops *p;
+	int i, ret;
+
+	for (i = 0; i < ARRAY_SIZE(pci_dev_acs_ops); i++) {
+		p = &pci_dev_acs_ops[i];
+		if ((p->vendor == dev->vendor ||
+		     p->vendor == (u16)PCI_ANY_ID) &&
+		    (p->device == dev->device ||
+		     p->device == (u16)PCI_ANY_ID) &&
+		    p->disable_acs_redir) {
+			ret = p->disable_acs_redir(dev);
 			if (ret >= 0)
 				return ret;
 		}
@@ -4753,3 +4919,197 @@ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_AMD, PCI_ANY_ID,
			      PCI_CLASS_MULTIMEDIA_HD_AUDIO, 8, quirk_gpu_hda);
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
			      PCI_CLASS_MULTIMEDIA_HD_AUDIO, 8, quirk_gpu_hda);
+
+/*
+ * Some IDT switches incorrectly flag an ACS Source Validation error on
+ * completions for config read requests even though PCIe r4.0, sec
+ * 6.12.1.1, says that completions are never affected by ACS Source
+ * Validation.  Here's the text of IDT 89H32H8G3-YC, erratum #36:
+ *
+ *   Item #36 - Downstream port applies ACS Source Validation to Completions
+ *   Section 6.12.1.1 of the PCI Express Base Specification 3.1 states that
+ *   completions are never affected by ACS Source Validation.  However,
+ *   completions received by a downstream port of the PCIe switch from a
+ *   device that has not yet captured a PCIe bus number are incorrectly
+ *   dropped by ACS Source Validation by the switch downstream port.
+ *
+ * The workaround suggested by IDT is to issue a config write to the
+ * downstream device before issuing the first config read.  This allows the
+ * downstream device to capture its bus and device numbers (see PCIe r4.0,
+ * sec 2.2.9), thus avoiding the ACS error on the completion.
+ *
+ * However, we don't know when the device is ready to accept the config
+ * write, so we do config reads until we receive a non-Config Request Retry
+ * Status, then do the config write.
+ *
+ * To avoid hitting the erratum when doing the config reads, we disable ACS
+ * SV around this process.
+ */
+int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *l, int timeout)
+{
+	int pos;
+	u16 ctrl = 0;
+	bool found;
+	struct pci_dev *bridge = bus->self;
+
+	pos = pci_find_ext_capability(bridge, PCI_EXT_CAP_ID_ACS);
+
+	/* Disable ACS SV before initial config reads */
+	if (pos) {
+		pci_read_config_word(bridge, pos + PCI_ACS_CTRL, &ctrl);
+		if (ctrl & PCI_ACS_SV)
+			pci_write_config_word(bridge, pos + PCI_ACS_CTRL,
+					      ctrl & ~PCI_ACS_SV);
+	}
+
+	found = pci_bus_generic_read_dev_vendor_id(bus, devfn, l, timeout);
+
+	/* Write Vendor ID (read-only) so the endpoint latches its bus/dev */
+	if (found)
+		pci_bus_write_config_word(bus, devfn, PCI_VENDOR_ID, 0);
+
+	/* Re-enable ACS_SV if it was previously enabled */
+	if (ctrl & PCI_ACS_SV)
+		pci_write_config_word(bridge, pos + PCI_ACS_CTRL, ctrl);
+
+	return found;
+}
4977 | |||
+/*
+ * Microsemi Switchtec NTB uses devfn proxy IDs to move TLPs between
+ * NT endpoints via the internal switch fabric. These IDs replace the
+ * originating requestor ID TLPs which access host memory on peer NTB
+ * ports. Therefore, all proxy IDs must be aliased to the NTB device
+ * to permit access when the IOMMU is turned on.
+ */
+static void quirk_switchtec_ntb_dma_alias(struct pci_dev *pdev)
+{
+	void __iomem *mmio;
+	struct ntb_info_regs __iomem *mmio_ntb;
+	struct ntb_ctrl_regs __iomem *mmio_ctrl;
+	struct sys_info_regs __iomem *mmio_sys_info;
+	u64 partition_map;
+	u8 partition;
+	int pp;
+
+	if (pci_enable_device(pdev)) {
+		pci_err(pdev, "Cannot enable Switchtec device\n");
+		return;
+	}
+
+	mmio = pci_iomap(pdev, 0, 0);
+	if (mmio == NULL) {
+		pci_disable_device(pdev);
+		pci_err(pdev, "Cannot iomap Switchtec device\n");
+		return;
+	}
+
+	pci_info(pdev, "Setting Switchtec proxy ID aliases\n");
+
+	mmio_ntb = mmio + SWITCHTEC_GAS_NTB_OFFSET;
+	mmio_ctrl = (void __iomem *) mmio_ntb + SWITCHTEC_NTB_REG_CTRL_OFFSET;
+	mmio_sys_info = mmio + SWITCHTEC_GAS_SYS_INFO_OFFSET;
+
+	partition = ioread8(&mmio_ntb->partition_id);
+
+	partition_map = ioread32(&mmio_ntb->ep_map);
+	partition_map |= ((u64) ioread32(&mmio_ntb->ep_map + 4)) << 32;
+	partition_map &= ~(1ULL << partition);
+
+	for (pp = 0; pp < (sizeof(partition_map) * 8); pp++) {
+		struct ntb_ctrl_regs __iomem *mmio_peer_ctrl;
+		u32 table_sz = 0;
+		int te;
+
+		if (!(partition_map & (1ULL << pp)))
+			continue;
+
+		pci_dbg(pdev, "Processing partition %d\n", pp);
+
+		mmio_peer_ctrl = &mmio_ctrl[pp];
+
+		table_sz = ioread16(&mmio_peer_ctrl->req_id_table_size);
+		if (!table_sz) {
+			pci_warn(pdev, "Partition %d table_sz 0\n", pp);
+			continue;
+		}
+
+		if (table_sz > 512) {
+			pci_warn(pdev,
+				 "Invalid Switchtec partition %d table_sz %d\n",
+				 pp, table_sz);
+			continue;
+		}
+
+		for (te = 0; te < table_sz; te++) {
+			u32 rid_entry;
+			u8 devfn;
+
+			rid_entry = ioread32(&mmio_peer_ctrl->req_id_table[te]);
+			devfn = (rid_entry >> 1) & 0xFF;
+			pci_dbg(pdev,
+				"Aliasing Partition %d Proxy ID %02x.%d\n",
+				pp, PCI_SLOT(devfn), PCI_FUNC(devfn));
+			pci_add_dma_alias(pdev, devfn);
+		}
+	}
+
+	pci_iounmap(pdev, mmio);
+	pci_disable_device(pdev);
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8531,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8532,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8533,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8534,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8535,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8536,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8543,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8544,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8545,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8546,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8551,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8552,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8553,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8554,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8555,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8556,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8561,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8562,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8563,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8564,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8565,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8566,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8571,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8572,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8573,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8574,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8575,
+			quirk_switchtec_ntb_dma_alias);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8576,
+			quirk_switchtec_ntb_dma_alias);
diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
index 5e3d0dced2b8..461e7fd2756f 100644
--- a/drivers/pci/remove.c
+++ b/drivers/pci/remove.c
@@ -1,7 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/pci.h>
 #include <linux/module.h>
-#include <linux/pci-aspm.h>
 #include "pci.h"
 
 static void pci_free_resources(struct pci_dev *dev)
diff --git a/drivers/pci/rom.c b/drivers/pci/rom.c
index a7b5c37a85ec..137bf0cee897 100644
--- a/drivers/pci/rom.c
+++ b/drivers/pci/rom.c
@@ -80,7 +80,8 @@ EXPORT_SYMBOL_GPL(pci_disable_rom);
  * The PCI window size could be much larger than the
  * actual image size.
  */
-size_t pci_get_rom_size(struct pci_dev *pdev, void __iomem *rom, size_t size)
+static size_t pci_get_rom_size(struct pci_dev *pdev, void __iomem *rom,
+			       size_t size)
 {
 	void __iomem *image;
 	int last_image;
@@ -106,8 +107,14 @@ size_t pci_get_rom_size(struct pci_dev *pdev, void __iomem *rom, size_t size)
 		length = readw(pds + 16);
 		image += length * 512;
 		/* Avoid iterating through memory outside the resource window */
-		if (image > rom + size)
+		if (image >= rom + size)
 			break;
+		if (!last_image) {
+			if (readw(image) != 0xAA55) {
+				pci_info(pdev, "No more image in the PCI ROM\n");
+				break;
+			}
+		}
 	} while (length && !last_image);
 
 	/* never return a size larger than the PCI resource window */
diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
index 47cd0c037433..9940cc70f38b 100644
--- a/drivers/pci/switch/switchtec.c
+++ b/drivers/pci/switch/switchtec.c
@@ -641,7 +641,7 @@ static int ioctl_event_summary(struct switchtec_dev *stdev,
 
 	for (i = 0; i < SWITCHTEC_MAX_PFF_CSR; i++) {
 		reg = ioread16(&stdev->mmio_pff_csr[i].vendor_id);
-		if (reg != MICROSEMI_VENDOR_ID)
+		if (reg != PCI_VENDOR_ID_MICROSEMI)
 			break;
 
 		reg = ioread32(&stdev->mmio_pff_csr[i].pff_event_summary);
@@ -1203,7 +1203,7 @@ static void init_pff(struct switchtec_dev *stdev)
 
 	for (i = 0; i < SWITCHTEC_MAX_PFF_CSR; i++) {
 		reg = ioread16(&stdev->mmio_pff_csr[i].vendor_id);
-		if (reg != MICROSEMI_VENDOR_ID)
+		if (reg != PCI_VENDOR_ID_MICROSEMI)
 			break;
 	}
 
@@ -1267,7 +1267,7 @@ static int switchtec_pci_probe(struct pci_dev *pdev,
 	struct switchtec_dev *stdev;
 	int rc;
 
-	if (pdev->class == MICROSEMI_NTB_CLASSCODE)
+	if (pdev->class == (PCI_CLASS_BRIDGE_OTHER << 8))
 		request_module_nowait("ntb_hw_switchtec");
 
 	stdev = stdev_create(pdev);
@@ -1321,19 +1321,19 @@ static void switchtec_pci_remove(struct pci_dev *pdev)
 
 #define SWITCHTEC_PCI_DEVICE(device_id) \
 	{ \
-		.vendor     = MICROSEMI_VENDOR_ID, \
+		.vendor     = PCI_VENDOR_ID_MICROSEMI, \
 		.device     = device_id, \
 		.subvendor  = PCI_ANY_ID, \
 		.subdevice  = PCI_ANY_ID, \
-		.class      = MICROSEMI_MGMT_CLASSCODE, \
+		.class      = (PCI_CLASS_MEMORY_OTHER << 8), \
 		.class_mask = 0xFFFFFFFF, \
 	}, \
 	{ \
-		.vendor     = MICROSEMI_VENDOR_ID, \
+		.vendor     = PCI_VENDOR_ID_MICROSEMI, \
 		.device     = device_id, \
 		.subvendor  = PCI_ANY_ID, \
 		.subdevice  = PCI_ANY_ID, \
-		.class      = MICROSEMI_NTB_CLASSCODE, \
+		.class      = (PCI_CLASS_BRIDGE_OTHER << 8), \
 		.class_mask = 0xFFFFFFFF, \
 	}
 
diff --git a/drivers/pci/vpd.c b/drivers/pci/vpd.c
index 8617565ba561..4963c2e2bd4c 100644
--- a/drivers/pci/vpd.c
+++ b/drivers/pci/vpd.c
@@ -146,7 +146,7 @@ static int pci_vpd_wait(struct pci_dev *dev)
 	if (!vpd->busy)
 		return 0;
 
-	while (time_before(jiffies, timeout)) {
+	do {
 		ret = pci_user_read_config_word(dev, vpd->cap + PCI_VPD_ADDR,
 						&status);
 		if (ret < 0)
@@ -160,10 +160,13 @@ static int pci_vpd_wait(struct pci_dev *dev)
 		if (fatal_signal_pending(current))
 			return -EINTR;
 
+		if (time_after(jiffies, timeout))
+			break;
+
 		usleep_range(10, max_sleep);
 		if (max_sleep < 1024)
 			max_sleep *= 2;
-	}
+	} while (true);
 
 	pci_warn(dev, "VPD access failed. This is likely a firmware bug on this device. Contact the card vendor for a firmware update\n");
 	return -ETIMEDOUT;