| author    | Linus Torvalds <torvalds@linux-foundation.org>        | 2016-05-19 16:10:54 -0400 |
|-----------|-------------------------------------------------------|---------------------------|
| committer | Linus Torvalds <torvalds@linux-foundation.org>        | 2016-05-19 16:10:54 -0400 |
| commit    | 7afd16f882887c9adc69cd1794f5e57777723217 (patch)      |                           |
| tree      | 73683d08184a227cf627230d8b887a92c51a32e5 /drivers/pci |                           |
| parent    | a37571a29eca963562ff5a9233db4a5c73c72cf9 (diff)       |                           |
| parent    | e257ef55ce51d7ec399193ee85acda8b8759d930 (diff)       |                           |
Merge tag 'pci-v4.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Refine PCI support check in pcibios_init() (Adrian-Ken Rueegsegger)
- Provide common functions for ECAM mapping (Jayachandran C)
- Allow all PCIe services on non-ACPI host bridges (Jon Derrick)
- Remove return values from pcie_port_platform_notify() and relatives (Jon Derrick)
- Widen portdrv service type from 4 bits to 8 bits (Keith Busch)
- Add Downstream Port Containment portdrv service type (Keith Busch)
- Add Downstream Port Containment driver (Keith Busch)
Resource management:
- Identify Enhanced Allocation (EA) BAR Equivalent resources in sysfs (Alex Williamson)
- Supply CPU physical address (not bus address) to iomem_is_exclusive() (Bjorn Helgaas)
- alpha: Call iomem_is_exclusive() for IORESOURCE_MEM, but not IORESOURCE_IO (Bjorn Helgaas)
- Mark Broadwell-EP Home Agent 1 as having non-compliant BARs (Prarit Bhargava)
- Disable all BAR sizing for devices with non-compliant BARs (Prarit Bhargava)
- Move PCI I/O space management from OF to PCI core code (Tomasz Nowicki)
PCI device hotplug:
- acpiphp_ibm: Avoid uninitialized variable reference (Dan Carpenter)
- Use cached copy of PCI_EXP_SLTCAP_HPC bit (Lukas Wunner)
Virtualization:
- Mark Intel i40e NIC INTx masking as broken (Alex Williamson)
- Reverse standard ACS vs device-specific ACS enabling (Alex Williamson)
- Work around Intel Sunrise Point PCH incorrect ACS capability (Alex Williamson)
IOMMU:
- Add pci_add_dma_alias() to abstract implementation (Bjorn Helgaas)
- Move informational printk to pci_add_dma_alias() (Bjorn Helgaas)
- Add support for multiple DMA aliases (Jacek Lawrynowicz)
- Add DMA alias quirk for mic_x200_dma (Jacek Lawrynowicz)
Thunderbolt:
- Fix double free of drom buffer (Andreas Noever)
- Add Intel Thunderbolt device IDs (Lukas Wunner)
- Fix typos and magic number (Lukas Wunner)
- Support 1st gen Light Ridge controller (Lukas Wunner)
Generic host bridge driver:
- Use generic ECAM API (Jayachandran C)
Cavium ThunderX host bridge driver:
- Don't clobber read-only bits in bridge config registers (David Daney)
- Use generic ECAM API (Jayachandran C)
Freescale i.MX6 host bridge driver:
- Use enum instead of bool for variant indicator (Andrey Smirnov)
- Implement reset sequence for i.MX6+ (Andrey Smirnov)
- Factor out ref clock enable (Bjorn Helgaas)
- Add initial imx6sx support (Christoph Fritz)
- Add reset-gpio-active-high boolean property to DT (Petr Štetiar)
- Add DT property for link gen, default to Gen1 (Tim Harvey)
- dts: Specify imx6qp version of PCIe core (Andrey Smirnov)
- dts: Fix PCIe reset GPIO polarity on Toradex Apalis Ixora (Petr Štetiar)
Marvell Armada host bridge driver:
- add DT binding for Marvell Armada 7K/8K PCIe controller (Thomas Petazzoni)
- Add driver for Marvell Armada 7K/8K PCIe controller (Thomas Petazzoni)
Marvell MVEBU host bridge driver:
- Constify mvebu_pcie_pm_ops structure (Jisheng Zhang)
- Use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS for mvebu_pcie_pm_ops (Jisheng Zhang)
Microsoft Hyper-V host bridge driver:
- Report resources release after stopping the bus (Vitaly Kuznetsov)
- Add explicit barriers to config space access (Vitaly Kuznetsov)
Renesas R-Car host bridge driver:
- Select PCI_MSI_IRQ_DOMAIN (Arnd Bergmann)
Synopsys DesignWare host bridge driver:
- Remove incorrect RC memory base/limit configuration (Gabriele Paoloni)
- Move Root Complex setup code to dw_pcie_setup_rc() (Jisheng Zhang)
TI Keystone host bridge driver:
- Add error IRQ handler (Murali Karicheri)
- Remove unnecessary goto statement (Murali Karicheri)
Miscellaneous:
- Fix spelling errors (Colin Ian King)"
* tag 'pci-v4.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (48 commits)
PCI: Disable all BAR sizing for devices with non-compliant BARs
x86/PCI: Mark Broadwell-EP Home Agent 1 as having non-compliant BARs
PCI: Identify Enhanced Allocation (EA) BAR Equivalent resources in sysfs
PCI, of: Move PCI I/O space management to PCI core code
PCI: generic, thunder: Use generic ECAM API
PCI: Provide common functions for ECAM mapping
PCI: hv: Add explicit barriers to config space access
PCI: Use cached copy of PCI_EXP_SLTCAP_HPC bit
PCI: Add Downstream Port Containment driver
PCI: Add Downstream Port Containment portdrv service type
PCI: Widen portdrv service type from 4 bits to 8 bits
PCI: designware: Remove incorrect RC memory base/limit configuration
PCI: hv: Report resources release after stopping the bus
ARM: dts: imx6qp: Specify imx6qp version of PCIe core
PCI: imx6: Implement reset sequence for i.MX6+
PCI: imx6: Use enum instead of bool for variant indicator
PCI: thunder: Don't clobber read-only bits in bridge config registers
thunderbolt: Fix double free of drom buffer
PCI: rcar: Select PCI_MSI_IRQ_DOMAIN
PCI: armada: Add driver for Marvell Armada 7K/8K PCIe controller
...
Diffstat (limited to 'drivers/pci')
33 files changed, 1496 insertions, 424 deletions
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 209292e067d2..56389be5d08b 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -83,6 +83,9 @@ config HT_IRQ
 config PCI_ATS
 	bool
 
+config PCI_ECAM
+	bool
+
 config PCI_IOV
 	bool "PCI IOV support"
 	depends on PCI
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 2154092ddee8..1fa6925733d3 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -55,6 +55,8 @@ obj-$(CONFIG_PCI_SYSCALL) += syscall.o
 
 obj-$(CONFIG_PCI_STUB) += pci-stub.o
 
+obj-$(CONFIG_PCI_ECAM) += ecam.o
+
 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o
 
 obj-$(CONFIG_OF) += of.o
diff --git a/drivers/pci/ecam.c b/drivers/pci/ecam.c
new file mode 100644
index 000000000000..f9832ad8efe2
--- /dev/null
+++ b/drivers/pci/ecam.c
@@ -0,0 +1,164 @@
+/*
+ * Copyright 2016 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation (the "GPL").
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License version 2 (GPLv2) for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * version 2 (GPLv2) along with this source code.
+ */
+
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+
+#include "ecam.h"
+
+/*
+ * On 64-bit systems, we do a single ioremap for the whole config space
+ * since we have enough virtual address range available. On 32-bit, we
+ * ioremap the config space for each bus individually.
+ */
+static const bool per_bus_mapping = !config_enabled(CONFIG_64BIT);
+
+/*
+ * Create a PCI config space window
+ *  - reserve mem region
+ *  - alloc struct pci_config_window with space for all mappings
+ *  - ioremap the config space
+ */
+struct pci_config_window *pci_ecam_create(struct device *dev,
+		struct resource *cfgres, struct resource *busr,
+		struct pci_ecam_ops *ops)
+{
+	struct pci_config_window *cfg;
+	unsigned int bus_range, bus_range_max, bsz;
+	struct resource *conflict;
+	int i, err;
+
+	if (busr->start > busr->end)
+		return ERR_PTR(-EINVAL);
+
+	cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
+	if (!cfg)
+		return ERR_PTR(-ENOMEM);
+
+	cfg->ops = ops;
+	cfg->busr.start = busr->start;
+	cfg->busr.end = busr->end;
+	cfg->busr.flags = IORESOURCE_BUS;
+	bus_range = resource_size(&cfg->busr);
+	bus_range_max = resource_size(cfgres) >> ops->bus_shift;
+	if (bus_range > bus_range_max) {
+		bus_range = bus_range_max;
+		cfg->busr.end = busr->start + bus_range - 1;
+		dev_warn(dev, "ECAM area %pR can only accommodate %pR (reduced from %pR desired)\n",
+			 cfgres, &cfg->busr, busr);
+	}
+	bsz = 1 << ops->bus_shift;
+
+	cfg->res.start = cfgres->start;
+	cfg->res.end = cfgres->end;
+	cfg->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+	cfg->res.name = "PCI ECAM";
+
+	conflict = request_resource_conflict(&iomem_resource, &cfg->res);
+	if (conflict) {
+		err = -EBUSY;
+		dev_err(dev, "can't claim ECAM area %pR: address conflict with %s %pR\n",
+			&cfg->res, conflict->name, conflict);
+		goto err_exit;
+	}
+
+	if (per_bus_mapping) {
+		cfg->winp = kcalloc(bus_range, sizeof(*cfg->winp), GFP_KERNEL);
+		if (!cfg->winp)
+			goto err_exit_malloc;
+		for (i = 0; i < bus_range; i++) {
+			cfg->winp[i] = ioremap(cfgres->start + i * bsz, bsz);
+			if (!cfg->winp[i])
+				goto err_exit_iomap;
+		}
+	} else {
+		cfg->win = ioremap(cfgres->start, bus_range * bsz);
+		if (!cfg->win)
+			goto err_exit_iomap;
+	}
+
+	if (ops->init) {
+		err = ops->init(dev, cfg);
+		if (err)
+			goto err_exit;
+	}
+	dev_info(dev, "ECAM at %pR for %pR\n", &cfg->res, &cfg->busr);
+	return cfg;
+
+err_exit_iomap:
+	dev_err(dev, "ECAM ioremap failed\n");
+err_exit_malloc:
+	err = -ENOMEM;
+err_exit:
+	pci_ecam_free(cfg);
+	return ERR_PTR(err);
+}
+
+void pci_ecam_free(struct pci_config_window *cfg)
+{
+	int i;
+
+	if (per_bus_mapping) {
+		if (cfg->winp) {
+			for (i = 0; i < resource_size(&cfg->busr); i++)
+				if (cfg->winp[i])
+					iounmap(cfg->winp[i]);
+			kfree(cfg->winp);
+		}
+	} else {
+		if (cfg->win)
+			iounmap(cfg->win);
+	}
+	if (cfg->res.parent)
+		release_resource(&cfg->res);
+	kfree(cfg);
+}
+
+/*
+ * Function to implement the pci_ops ->map_bus method
+ */
+void __iomem *pci_ecam_map_bus(struct pci_bus *bus, unsigned int devfn,
+			       int where)
+{
+	struct pci_config_window *cfg = bus->sysdata;
+	unsigned int devfn_shift = cfg->ops->bus_shift - 8;
+	unsigned int busn = bus->number;
+	void __iomem *base;
+
+	if (busn < cfg->busr.start || busn > cfg->busr.end)
+		return NULL;
+
+	busn -= cfg->busr.start;
+	if (per_bus_mapping)
+		base = cfg->winp[busn];
+	else
+		base = cfg->win + (busn << cfg->ops->bus_shift);
+	return base + (devfn << devfn_shift) + where;
+}
+
+/* ECAM ops */
+struct pci_ecam_ops pci_generic_ecam_ops = {
+	.bus_shift	= 20,
+	.pci_ops	= {
+		.map_bus	= pci_ecam_map_bus,
+		.read		= pci_generic_config_read,
+		.write		= pci_generic_config_write,
+	}
+};
diff --git a/drivers/pci/ecam.h b/drivers/pci/ecam.h
new file mode 100644
index 000000000000..9878bebd45bb
--- /dev/null
+++ b/drivers/pci/ecam.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright 2016 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation (the "GPL").
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License version 2 (GPLv2) for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * version 2 (GPLv2) along with this source code.
+ */
+#ifndef DRIVERS_PCI_ECAM_H
+#define DRIVERS_PCI_ECAM_H
+
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+
+/*
+ * struct to hold pci ops and bus shift of the config window
+ * for a PCI controller.
+ */
+struct pci_config_window;
+struct pci_ecam_ops {
+	unsigned int		bus_shift;
+	struct pci_ops		pci_ops;
+	int			(*init)(struct device *,
+					struct pci_config_window *);
+};
+
+/*
+ * struct to hold the mappings of a config space window. This
+ * is expected to be used as sysdata for PCI controllers that
+ * use ECAM.
+ */
+struct pci_config_window {
+	struct resource		res;
+	struct resource		busr;
+	void			*priv;
+	struct pci_ecam_ops	*ops;
+	union {
+		void __iomem	*win;	/* 64-bit single mapping */
+		void __iomem	**winp; /* 32-bit per-bus mapping */
+	};
+};
+
+/* create and free pci_config_window */
+struct pci_config_window *pci_ecam_create(struct device *dev,
+		struct resource *cfgres, struct resource *busr,
+		struct pci_ecam_ops *ops);
+void pci_ecam_free(struct pci_config_window *cfg);
+
+/* map_bus when ->sysdata is an instance of pci_config_window */
+void __iomem *pci_ecam_map_bus(struct pci_bus *bus, unsigned int devfn,
+			       int where);
+/* default ECAM ops */
+extern struct pci_ecam_ops pci_generic_ecam_ops;
+
+#ifdef CONFIG_PCI_HOST_GENERIC
+/* for DT-based PCI controllers that support ECAM */
+int pci_host_common_probe(struct platform_device *pdev,
+			  struct pci_ecam_ops *ops);
+#endif
+#endif
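
Taken together, ecam.c and ecam.h give host controller drivers a small reusable API: describe the config-space window and bus range, pick (or supply) a pci_ecam_ops, and let the core do the reservation and mapping. The sketch below is illustrative only -- a hypothetical caller with error handling trimmed, not code from this series -- showing how pci_ecam_create(), the generic ops and pci_ecam_free() are meant to fit together. pci-host-common.c further down does essentially this, with devm-based unwinding on top.

```c
/* Hypothetical caller, for illustration only: map an ECAM window covering
 * buses 0x00-0xff and scan the hierarchy behind it. */
static int example_ecam_scan(struct device *dev, struct resource *cfgres,
			     struct list_head *resources)
{
	struct resource busr = {
		.start	= 0,
		.end	= 0xff,
		.flags	= IORESOURCE_BUS,
	};
	struct pci_config_window *cfg;
	struct pci_bus *bus;

	/* Reserves the region and ioremaps it (per bus on 32-bit). */
	cfg = pci_ecam_create(dev, cfgres, &busr, &pci_generic_ecam_ops);
	if (IS_ERR(cfg))
		return PTR_ERR(cfg);

	/* cfg becomes ->sysdata, which pci_ecam_map_bus() relies on. */
	bus = pci_scan_root_bus(dev, cfg->busr.start,
				&pci_generic_ecam_ops.pci_ops, cfg, resources);
	if (!bus) {
		pci_ecam_free(cfg);
		return -ENODEV;
	}
	pci_bus_add_devices(bus);
	return 0;
}
```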
diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig
index 8fb1cf54617d..5d2374e4ee7f 100644
--- a/drivers/pci/host/Kconfig
+++ b/drivers/pci/host/Kconfig
@@ -72,11 +72,14 @@ config PCI_RCAR_GEN2
 config PCIE_RCAR
 	bool "Renesas R-Car PCIe controller"
 	depends on ARCH_RENESAS || (ARM && COMPILE_TEST)
+	select PCI_MSI
+	select PCI_MSI_IRQ_DOMAIN
 	help
 	  Say Y here if you want PCIe controller support on R-Car SoCs.
 
 config PCI_HOST_COMMON
 	bool
+	select PCI_ECAM
 
 config PCI_HOST_GENERIC
 	bool "Generic PCI host controller"
@@ -231,4 +234,15 @@ config PCI_HOST_THUNDER_ECAM
 	help
 	  Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs.
 
+config PCIE_ARMADA_8K
+	bool "Marvell Armada-8K PCIe controller"
+	depends on ARCH_MVEBU
+	select PCIE_DW
+	select PCIEPORTBUS
+	help
+	  Say Y here if you want to enable PCIe controller support on
+	  Armada-8K SoCs. The PCIe controller on Armada-8K is based on
+	  Designware hardware and therefore the driver re-uses the
+	  Designware core functions to implement the driver.
+
 endmenu
diff --git a/drivers/pci/host/Makefile b/drivers/pci/host/Makefile
index d3d8e1b36fb9..9c8698e89e96 100644
--- a/drivers/pci/host/Makefile
+++ b/drivers/pci/host/Makefile
@@ -28,3 +28,4 @@ obj-$(CONFIG_PCI_HISI) += pcie-hisi.o
 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
 obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o
 obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
+obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
diff --git a/drivers/pci/host/pci-dra7xx.c b/drivers/pci/host/pci-dra7xx.c
index 2ca3a1f30ebf..f441130407e7 100644
--- a/drivers/pci/host/pci-dra7xx.c
+++ b/drivers/pci/host/pci-dra7xx.c
@@ -142,13 +142,13 @@ static void dra7xx_pcie_enable_interrupts(struct pcie_port *pp)
 
 static void dra7xx_pcie_host_init(struct pcie_port *pp)
 {
-	dw_pcie_setup_rc(pp);
-
 	pp->io_base &= DRA7XX_CPU_TO_BUS_ADDR;
 	pp->mem_base &= DRA7XX_CPU_TO_BUS_ADDR;
 	pp->cfg0_base &= DRA7XX_CPU_TO_BUS_ADDR;
 	pp->cfg1_base &= DRA7XX_CPU_TO_BUS_ADDR;
 
+	dw_pcie_setup_rc(pp);
+
 	dra7xx_pcie_establish_link(pp);
 	if (IS_ENABLED(CONFIG_PCI_MSI))
 		dw_pcie_msi_init(pp);
diff --git a/drivers/pci/host/pci-host-common.c b/drivers/pci/host/pci-host-common.c
index e9f850f07968..8cba7ab73df9 100644
--- a/drivers/pci/host/pci-host-common.c
+++ b/drivers/pci/host/pci-host-common.c
@@ -22,27 +22,21 @@
 #include <linux/of_pci.h>
 #include <linux/platform_device.h>
 
-#include "pci-host-common.h"
+#include "../ecam.h"
 
-static void gen_pci_release_of_pci_ranges(struct gen_pci *pci)
-{
-	pci_free_resource_list(&pci->resources);
-}
-
-static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci)
+static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
+		struct list_head *resources, struct resource **bus_range)
 {
 	int err, res_valid = 0;
-	struct device *dev = pci->host.dev.parent;
 	struct device_node *np = dev->of_node;
 	resource_size_t iobase;
 	struct resource_entry *win;
 
-	err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources,
-					       &iobase);
+	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
 	if (err)
 		return err;
 
-	resource_list_for_each_entry(win, &pci->resources) {
+	resource_list_for_each_entry(win, resources) {
 		struct resource *parent, *res = win->res;
 
 		switch (resource_type(res)) {
@@ -60,7 +54,7 @@ static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci)
 			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
 			break;
 		case IORESOURCE_BUS:
-			pci->cfg.bus_range = res;
+			*bus_range = res;
 		default:
 			continue;
 		}
@@ -79,65 +73,60 @@ static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci)
 	return 0;
 
 out_release_res:
-	gen_pci_release_of_pci_ranges(pci);
 	return err;
 }
 
-static int gen_pci_parse_map_cfg_windows(struct gen_pci *pci)
+static void gen_pci_unmap_cfg(void *ptr)
+{
+	pci_ecam_free((struct pci_config_window *)ptr);
+}
+
+static struct pci_config_window *gen_pci_init(struct device *dev,
+		struct list_head *resources, struct pci_ecam_ops *ops)
 {
 	int err;
-	u8 bus_max;
-	resource_size_t busn;
-	struct resource *bus_range;
-	struct device *dev = pci->host.dev.parent;
-	struct device_node *np = dev->of_node;
-	u32 sz = 1 << pci->cfg.ops->bus_shift;
+	struct resource cfgres;
+	struct resource *bus_range = NULL;
+	struct pci_config_window *cfg;
 
-	err = of_address_to_resource(np, 0, &pci->cfg.res);
+	/* Parse our PCI ranges and request their resources */
+	err = gen_pci_parse_request_of_pci_ranges(dev, resources, &bus_range);
+	if (err)
+		goto err_out;
+
+	err = of_address_to_resource(dev->of_node, 0, &cfgres);
 	if (err) {
 		dev_err(dev, "missing \"reg\" property\n");
-		return err;
+		goto err_out;
 	}
 
-	/* Limit the bus-range to fit within reg */
-	bus_max = pci->cfg.bus_range->start +
-		  (resource_size(&pci->cfg.res) >> pci->cfg.ops->bus_shift) - 1;
-	pci->cfg.bus_range->end = min_t(resource_size_t,
-					pci->cfg.bus_range->end, bus_max);
-
-	pci->cfg.win = devm_kcalloc(dev, resource_size(pci->cfg.bus_range),
-				    sizeof(*pci->cfg.win), GFP_KERNEL);
-	if (!pci->cfg.win)
-		return -ENOMEM;
-
-	/* Map our Configuration Space windows */
-	if (!devm_request_mem_region(dev, pci->cfg.res.start,
-				     resource_size(&pci->cfg.res),
-				     "Configuration Space"))
-		return -ENOMEM;
-
-	bus_range = pci->cfg.bus_range;
-	for (busn = bus_range->start; busn <= bus_range->end; ++busn) {
-		u32 idx = busn - bus_range->start;
-
-		pci->cfg.win[idx] = devm_ioremap(dev,
-						 pci->cfg.res.start + idx * sz,
-						 sz);
-		if (!pci->cfg.win[idx])
-			return -ENOMEM;
+	cfg = pci_ecam_create(dev, &cfgres, bus_range, ops);
+	if (IS_ERR(cfg)) {
+		err = PTR_ERR(cfg);
+		goto err_out;
 	}
 
-	return 0;
+	err = devm_add_action(dev, gen_pci_unmap_cfg, cfg);
+	if (err) {
+		gen_pci_unmap_cfg(cfg);
+		goto err_out;
+	}
+	return cfg;
+
+err_out:
+	pci_free_resource_list(resources);
+	return ERR_PTR(err);
 }
 
 int pci_host_common_probe(struct platform_device *pdev,
-			  struct gen_pci *pci)
+			  struct pci_ecam_ops *ops)
 {
-	int err;
 	const char *type;
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
 	struct pci_bus *bus, *child;
+	struct pci_config_window *cfg;
+	struct list_head resources;
 
 	type = of_get_property(np, "device_type", NULL);
 	if (!type || strcmp(type, "pci")) {
@@ -147,29 +136,18 @@ int pci_host_common_probe(struct platform_device *pdev,
 
 	of_pci_check_probe_only();
 
-	pci->host.dev.parent = dev;
-	INIT_LIST_HEAD(&pci->host.windows);
-	INIT_LIST_HEAD(&pci->resources);
-
-	/* Parse our PCI ranges and request their resources */
-	err = gen_pci_parse_request_of_pci_ranges(pci);
-	if (err)
-		return err;
-
 	/* Parse and map our Configuration Space windows */
-	err = gen_pci_parse_map_cfg_windows(pci);
-	if (err) {
-		gen_pci_release_of_pci_ranges(pci);
-		return err;
-	}
+	INIT_LIST_HEAD(&resources);
+	cfg = gen_pci_init(dev, &resources, ops);
+	if (IS_ERR(cfg))
+		return PTR_ERR(cfg);
 
 	/* Do not reassign resources if probe only */
 	if (!pci_has_flag(PCI_PROBE_ONLY))
 		pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS);
 
-
-	bus = pci_scan_root_bus(dev, pci->cfg.bus_range->start,
-				&pci->cfg.ops->ops, pci, &pci->resources);
+	bus = pci_scan_root_bus(dev, cfg->busr.start, &ops->pci_ops, cfg,
+				&resources);
 	if (!bus) {
 		dev_err(dev, "Scanning rootbus failed");
 		return -ENODEV;
diff --git a/drivers/pci/host/pci-host-common.h b/drivers/pci/host/pci-host-common.h
deleted file mode 100644
index 09f3fa0a55d7..000000000000
--- a/drivers/pci/host/pci-host-common.h
+++ /dev/null
@@ -1,47 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- *
- * Copyright (C) 2014 ARM Limited
- *
- * Author: Will Deacon <will.deacon@arm.com>
- */
-
-#ifndef _PCI_HOST_COMMON_H
-#define _PCI_HOST_COMMON_H
-
-#include <linux/kernel.h>
-#include <linux/platform_device.h>
-
-struct gen_pci_cfg_bus_ops {
-	u32 bus_shift;
-	struct pci_ops ops;
-};
-
-struct gen_pci_cfg_windows {
-	struct resource res;
-	struct resource *bus_range;
-	void __iomem **win;
-
-	struct gen_pci_cfg_bus_ops *ops;
-};
-
-struct gen_pci {
-	struct pci_host_bridge host;
-	struct gen_pci_cfg_windows cfg;
-	struct list_head resources;
-};
-
-int pci_host_common_probe(struct platform_device *pdev,
-			  struct gen_pci *pci);
-
-#endif /* _PCI_HOST_COMMON_H */
diff --git a/drivers/pci/host/pci-host-generic.c b/drivers/pci/host/pci-host-generic.c
index e8aa78faa16d..6eaceab1bf04 100644
--- a/drivers/pci/host/pci-host-generic.c
+++ b/drivers/pci/host/pci-host-generic.c
@@ -25,41 +25,12 @@
 #include <linux/of_pci.h>
 #include <linux/platform_device.h>
 
-#include "pci-host-common.h"
+#include "../ecam.h"
 
-static void __iomem *gen_pci_map_cfg_bus_cam(struct pci_bus *bus,
-					     unsigned int devfn,
-					     int where)
-{
-	struct gen_pci *pci = bus->sysdata;
-	resource_size_t idx = bus->number - pci->cfg.bus_range->start;
-
-	return pci->cfg.win[idx] + ((devfn << 8) | where);
-}
-
-static struct gen_pci_cfg_bus_ops gen_pci_cfg_cam_bus_ops = {
+static struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = {
 	.bus_shift	= 16,
-	.ops = {
-		.map_bus	= gen_pci_map_cfg_bus_cam,
-		.read		= pci_generic_config_read,
-		.write		= pci_generic_config_write,
-	}
-};
-
-static void __iomem *gen_pci_map_cfg_bus_ecam(struct pci_bus *bus,
-					      unsigned int devfn,
-					      int where)
-{
-	struct gen_pci *pci = bus->sysdata;
-	resource_size_t idx = bus->number - pci->cfg.bus_range->start;
-
-	return pci->cfg.win[idx] + ((devfn << 12) | where);
-}
-
-static struct gen_pci_cfg_bus_ops gen_pci_cfg_ecam_bus_ops = {
-	.bus_shift	= 20,
-	.ops = {
-		.map_bus	= gen_pci_map_cfg_bus_ecam,
+	.pci_ops	= {
+		.map_bus	= pci_ecam_map_bus,
 		.read		= pci_generic_config_read,
 		.write		= pci_generic_config_write,
 	}
@@ -70,25 +41,22 @@ static const struct of_device_id gen_pci_of_match[] = {
 	  .data = &gen_pci_cfg_cam_bus_ops },
 
 	{ .compatible = "pci-host-ecam-generic",
-	  .data = &gen_pci_cfg_ecam_bus_ops },
+	  .data = &pci_generic_ecam_ops },
 
 	{ },
 };
+
 MODULE_DEVICE_TABLE(of, gen_pci_of_match);
 
 static int gen_pci_probe(struct platform_device *pdev)
 {
-	struct device *dev = &pdev->dev;
 	const struct of_device_id *of_id;
-	struct gen_pci *pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
-
-	if (!pci)
-		return -ENOMEM;
+	struct pci_ecam_ops *ops;
 
-	of_id = of_match_node(gen_pci_of_match, dev->of_node);
-	pci->cfg.ops = (struct gen_pci_cfg_bus_ops *)of_id->data;
+	of_id = of_match_node(gen_pci_of_match, pdev->dev.of_node);
+	ops = (struct pci_ecam_ops *)of_id->data;
 
-	return pci_host_common_probe(pdev, pci);
+	return pci_host_common_probe(pdev, ops);
 }
 
 static struct platform_driver gen_pci_driver = {
diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index ed651baa7c50..58f7eeb79bb4 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -553,6 +553,8 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where,
 		spin_lock_irqsave(&hpdev->hbus->config_lock, flags);
 		/* Choose the function to be read. (See comment above) */
 		writel(hpdev->desc.win_slot.slot, hpdev->hbus->cfg_addr);
+		/* Make sure the function was chosen before we start reading. */
+		mb();
 		/* Read from that function's config space. */
 		switch (size) {
 		case 1:
@@ -565,6 +567,11 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where,
 			*val = readl(addr);
 			break;
 		}
+		/*
+		 * Make sure the write was done before we release the spinlock
+		 * allowing consecutive reads/writes.
+		 */
+		mb();
 		spin_unlock_irqrestore(&hpdev->hbus->config_lock, flags);
 	} else {
 		dev_err(&hpdev->hbus->hdev->device,
@@ -592,6 +599,8 @@ static void _hv_pcifront_write_config(struct hv_pci_dev *hpdev, int where,
 		spin_lock_irqsave(&hpdev->hbus->config_lock, flags);
 		/* Choose the function to be written. (See comment above) */
 		writel(hpdev->desc.win_slot.slot, hpdev->hbus->cfg_addr);
+		/* Make sure the function was chosen before we start writing. */
+		wmb();
 		/* Write to that function's config space. */
 		switch (size) {
 		case 1:
@@ -604,6 +613,11 @@ static void _hv_pcifront_write_config(struct hv_pci_dev *hpdev, int where,
 			writel(val, addr);
 			break;
 		}
+		/*
+		 * Make sure the write was done before we release the spinlock
+		 * allowing consecutive reads/writes.
+		 */
+		mb();
 		spin_unlock_irqrestore(&hpdev->hbus->config_lock, flags);
 	} else {
 		dev_err(&hpdev->hbus->hdev->device,
@@ -2268,11 +2282,6 @@ static int hv_pci_remove(struct hv_device *hdev)
 
 	hbus = hv_get_drvdata(hdev);
 
-	ret = hv_send_resources_released(hdev);
-	if (ret)
-		dev_err(&hdev->device,
-			"Couldn't send resources released packet(s)\n");
-
 	memset(&pkt.teardown_packet, 0, sizeof(pkt.teardown_packet));
 	init_completion(&comp_pkt.host_event);
 	pkt.teardown_packet.completion_func = hv_pci_generic_compl;
@@ -2295,6 +2304,11 @@ static int hv_pci_remove(struct hv_device *hdev)
 		pci_unlock_rescan_remove();
 	}
 
+	ret = hv_send_resources_released(hdev);
+	if (ret)
+		dev_err(&hdev->device,
+			"Couldn't send resources released packet(s)\n");
+
 	vmbus_close(hdev->channel);
 
 	/* Delete any children which might still exist. */
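
The new mb()/wmb() calls in the Hyper-V accessors enforce a "select the function, then touch the shared window" protocol. The following condensed sketch shows only the ordering the barriers establish; the register names and layout are illustrative, not the real Hyper-V interface.

```c
/* Illustrative only: read one config dword through a shared window that is
 * steered by a separate selector register. */
static u32 example_shared_window_read(void __iomem *select_reg,
				      void __iomem *cfg_window, u32 slot,
				      int where)
{
	u32 val;

	writel(slot, select_reg);	/* steer the window at our function */
	mb();				/* selector must be visible first */
	val = readl(cfg_window + where);
	mb();				/* drain before the lock is dropped */
	return val;
}
```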
diff --git a/drivers/pci/host/pci-imx6.c b/drivers/pci/host/pci-imx6.c
index 2f817fa4c661..b741a36a67f3 100644
--- a/drivers/pci/host/pci-imx6.c
+++ b/drivers/pci/host/pci-imx6.c
@@ -19,6 +19,7 @@
 #include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
 #include <linux/module.h>
 #include <linux/of_gpio.h>
+#include <linux/of_device.h>
 #include <linux/pci.h>
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
@@ -31,19 +32,29 @@
 
 #define to_imx6_pcie(x)	container_of(x, struct imx6_pcie, pp)
 
+enum imx6_pcie_variants {
+	IMX6Q,
+	IMX6SX,
+	IMX6QP,
+};
+
 struct imx6_pcie {
 	int			reset_gpio;
+	bool			gpio_active_high;
 	struct clk		*pcie_bus;
 	struct clk		*pcie_phy;
+	struct clk		*pcie_inbound_axi;
 	struct clk		*pcie;
 	struct pcie_port	pp;
 	struct regmap		*iomuxc_gpr;
+	enum imx6_pcie_variants variant;
 	void __iomem		*mem_base;
 	u32			tx_deemph_gen1;
 	u32			tx_deemph_gen2_3p5db;
 	u32			tx_deemph_gen2_6db;
 	u32			tx_swing_full;
 	u32			tx_swing_low;
+	int			link_gen;
 };
 
 /* PCIe Root Complex registers (memory-mapped) */
@@ -236,37 +247,93 @@ static int imx6_pcie_assert_core_reset(struct pcie_port *pp)
 	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
 	u32 val, gpr1, gpr12;
 
-	/*
-	 * If the bootloader already enabled the link we need some special
-	 * handling to get the core back into a state where it is safe to
-	 * touch it for configuration. As there is no dedicated reset signal
-	 * wired up for MX6QDL, we need to manually force LTSSM into "detect"
-	 * state before completely disabling LTSSM, which is a prerequisite
-	 * for core configuration.
-	 *
-	 * If both LTSSM_ENABLE and REF_SSP_ENABLE are active we have a strong
-	 * indication that the bootloader activated the link.
-	 */
-	regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, &gpr1);
-	regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, &gpr12);
+	switch (imx6_pcie->variant) {
+	case IMX6SX:
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
+				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN,
+				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
+		/* Force PCIe PHY reset */
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR5,
+				   IMX6SX_GPR5_PCIE_BTNRST_RESET,
+				   IMX6SX_GPR5_PCIE_BTNRST_RESET);
+		break;
+	case IMX6QP:
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
+				   IMX6Q_GPR1_PCIE_SW_RST,
+				   IMX6Q_GPR1_PCIE_SW_RST);
+		break;
+	case IMX6Q:
+		/*
+		 * If the bootloader already enabled the link we need some
+		 * special handling to get the core back into a state where
+		 * it is safe to touch it for configuration.  As there is
+		 * no dedicated reset signal wired up for MX6QDL, we need
+		 * to manually force LTSSM into "detect" state before
+		 * completely disabling LTSSM, which is a prerequisite for
+		 * core configuration.
+		 *
+		 * If both LTSSM_ENABLE and REF_SSP_ENABLE are active we
+		 * have a strong indication that the bootloader activated
+		 * the link.
+		 */
+		regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, &gpr1);
+		regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, &gpr12);
+
+		if ((gpr1 & IMX6Q_GPR1_PCIE_REF_CLK_EN) &&
+		    (gpr12 & IMX6Q_GPR12_PCIE_CTL_2)) {
+			val = readl(pp->dbi_base + PCIE_PL_PFLR);
+			val &= ~PCIE_PL_PFLR_LINK_STATE_MASK;
+			val |= PCIE_PL_PFLR_FORCE_LINK;
+			writel(val, pp->dbi_base + PCIE_PL_PFLR);
+
+			regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
+					   IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
+		}
+
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
+				   IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
+				   IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16);
+		break;
+	}
+
+	return 0;
+}
+
+static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
+{
+	struct pcie_port *pp = &imx6_pcie->pp;
+	int ret = 0;
 
-	if ((gpr1 & IMX6Q_GPR1_PCIE_REF_CLK_EN) &&
-	    (gpr12 & IMX6Q_GPR12_PCIE_CTL_2)) {
-		val = readl(pp->dbi_base + PCIE_PL_PFLR);
-		val &= ~PCIE_PL_PFLR_LINK_STATE_MASK;
-		val |= PCIE_PL_PFLR_FORCE_LINK;
-		writel(val, pp->dbi_base + PCIE_PL_PFLR);
+	switch (imx6_pcie->variant) {
+	case IMX6SX:
+		ret = clk_prepare_enable(imx6_pcie->pcie_inbound_axi);
+		if (ret) {
+			dev_err(pp->dev, "unable to enable pcie_axi clock\n");
+			break;
+		}
 
 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
-				   IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
+				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN, 0);
+		break;
+	case IMX6QP: /* FALLTHROUGH */
+	case IMX6Q:
+		/* power up core phy and enable ref clock */
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
+				   IMX6Q_GPR1_PCIE_TEST_PD, 0 << 18);
+		/*
+		 * the async reset input need ref clock to sync internally,
+		 * when the ref clock comes after reset, internal synced
+		 * reset time is too short, cannot meet the requirement.
+		 * add one ~10us delay here.
+		 */
+		udelay(10);
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
+				   IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16);
+		break;
 	}
 
-	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
-			IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
-	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
-			IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16);
-
-	return 0;
+	return ret;
 }
 
 static int imx6_pcie_deassert_core_reset(struct pcie_port *pp)
@@ -292,43 +359,60 @@ static int imx6_pcie_deassert_core_reset(struct pcie_port *pp)
 		goto err_pcie;
 	}
 
-	/* power up core phy and enable ref clock */
-	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
-			IMX6Q_GPR1_PCIE_TEST_PD, 0 << 18);
-	/*
-	 * the async reset input need ref clock to sync internally,
-	 * when the ref clock comes after reset, internal synced
-	 * reset time is too short, cannot meet the requirement.
-	 * add one ~10us delay here.
-	 */
-	udelay(10);
-	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
-			IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16);
+	ret = imx6_pcie_enable_ref_clk(imx6_pcie);
+	if (ret) {
+		dev_err(pp->dev, "unable to enable pcie ref clock\n");
+		goto err_ref_clk;
+	}
 
 	/* allow the clocks to stabilize */
 	usleep_range(200, 500);
 
 	/* Some boards don't have PCIe reset GPIO. */
 	if (gpio_is_valid(imx6_pcie->reset_gpio)) {
-		gpio_set_value_cansleep(imx6_pcie->reset_gpio, 0);
+		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
+					imx6_pcie->gpio_active_high);
 		msleep(100);
-		gpio_set_value_cansleep(imx6_pcie->reset_gpio, 1);
+		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
					!imx6_pcie->gpio_active_high);
 	}
+
+	switch (imx6_pcie->variant) {
+	case IMX6SX:
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR5,
+				   IMX6SX_GPR5_PCIE_BTNRST_RESET, 0);
+		break;
+	case IMX6QP:
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
+				   IMX6Q_GPR1_PCIE_SW_RST, 0);
+
+		usleep_range(200, 500);
+		break;
+	case IMX6Q: /* Nothing to do */
+		break;
+	}
+
 	return 0;
 
+err_ref_clk:
+	clk_disable_unprepare(imx6_pcie->pcie);
 err_pcie:
 	clk_disable_unprepare(imx6_pcie->pcie_bus);
 err_pcie_bus:
 	clk_disable_unprepare(imx6_pcie->pcie_phy);
 err_pcie_phy:
 	return ret;
-
 }
 
 static void imx6_pcie_init_phy(struct pcie_port *pp)
 {
 	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
 
+	if (imx6_pcie->variant == IMX6SX)
+		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
+				   IMX6SX_GPR12_PCIE_RX_EQ_MASK,
+				   IMX6SX_GPR12_PCIE_RX_EQ_2);
+
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
 			IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
 
@@ -417,11 +501,15 @@ static int imx6_pcie_establish_link(struct pcie_port *pp)
 		goto err_reset_phy;
 	}
 
-	/* Allow Gen2 mode after the link is up. */
-	tmp = readl(pp->dbi_base + PCIE_RC_LCR);
-	tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK;
-	tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2;
-	writel(tmp, pp->dbi_base + PCIE_RC_LCR);
+	if (imx6_pcie->link_gen == 2) {
+		/* Allow Gen2 mode after the link is up. */
+		tmp = readl(pp->dbi_base + PCIE_RC_LCR);
+		tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK;
+		tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2;
+		writel(tmp, pp->dbi_base + PCIE_RC_LCR);
+	} else {
+		dev_info(pp->dev, "Link: Gen2 disabled\n");
+	}
 
 	/*
 	 * Start Directed Speed Change so the best possible speed both link
@@ -445,8 +533,7 @@ static int imx6_pcie_establish_link(struct pcie_port *pp)
 	}
 
 	tmp = readl(pp->dbi_base + PCIE_RC_LCSR);
-	dev_dbg(pp->dev, "Link up, Gen=%i\n", (tmp >> 16) & 0xf);
-
+	dev_info(pp->dev, "Link up, Gen%i\n", (tmp >> 16) & 0xf);
 	return 0;
 
 err_reset_phy:
@@ -535,6 +622,9 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
 	pp = &imx6_pcie->pp;
 	pp->dev = &pdev->dev;
 
+	imx6_pcie->variant =
+		(enum imx6_pcie_variants)of_device_get_match_data(&pdev->dev);
+
 	/* Added for PCI abort handling */
 	hook_fault_code(16 + 6, imx6q_pcie_abort_handler, SIGBUS, 0,
 			"imprecise external abort");
@@ -546,9 +636,14 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
 
 	/* Fetch GPIOs */
 	imx6_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0);
+	imx6_pcie->gpio_active_high = of_property_read_bool(np,
+						"reset-gpio-active-high");
 	if (gpio_is_valid(imx6_pcie->reset_gpio)) {
 		ret = devm_gpio_request_one(&pdev->dev, imx6_pcie->reset_gpio,
-				GPIOF_OUT_INIT_LOW, "PCIe reset");
+				imx6_pcie->gpio_active_high ?
+					GPIOF_OUT_INIT_HIGH :
+					GPIOF_OUT_INIT_LOW,
+				"PCIe reset");
 		if (ret) {
 			dev_err(&pdev->dev, "unable to get reset gpio\n");
 			return ret;
@@ -577,6 +672,16 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
 		return PTR_ERR(imx6_pcie->pcie);
 	}
 
+	if (imx6_pcie->variant == IMX6SX) {
+		imx6_pcie->pcie_inbound_axi = devm_clk_get(&pdev->dev,
+							   "pcie_inbound_axi");
+		if (IS_ERR(imx6_pcie->pcie_inbound_axi)) {
+			dev_err(&pdev->dev,
+				"pcie_incbound_axi clock missing or invalid\n");
+			return PTR_ERR(imx6_pcie->pcie_inbound_axi);
+		}
+	}
+
 	/* Grab GPR config register range */
 	imx6_pcie->iomuxc_gpr =
 		 syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
@@ -606,6 +711,12 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
 			       &imx6_pcie->tx_swing_low))
 		imx6_pcie->tx_swing_low = 127;
 
+	/* Limit link speed */
+	ret = of_property_read_u32(pp->dev->of_node, "fsl,max-link-speed",
+				   &imx6_pcie->link_gen);
+	if (ret)
+		imx6_pcie->link_gen = 1;
+
 	ret = imx6_add_pcie_port(pp, pdev);
 	if (ret < 0)
 		return ret;
@@ -623,7 +734,9 @@ static void imx6_pcie_shutdown(struct platform_device *pdev)
 }
 
 static const struct of_device_id imx6_pcie_of_match[] = {
-	{ .compatible = "fsl,imx6q-pcie", },
+	{ .compatible = "fsl,imx6q-pcie",  .data = (void *)IMX6Q,  },
+	{ .compatible = "fsl,imx6sx-pcie", .data = (void *)IMX6SX, },
+	{ .compatible = "fsl,imx6qp-pcie", .data = (void *)IMX6QP, },
 	{},
 };
 MODULE_DEVICE_TABLE(of, imx6_pcie_of_match);
diff --git a/drivers/pci/host/pci-keystone-dw.c b/drivers/pci/host/pci-keystone-dw.c
index 6153853ca9c3..41515092eb0d 100644
--- a/drivers/pci/host/pci-keystone-dw.c
+++ b/drivers/pci/host/pci-keystone-dw.c
@@ -14,6 +14,7 @@
 
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
+#include <linux/irqreturn.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_pci.h>
@@ -53,6 +54,21 @@
 #define IRQ_STATUS			0x184
 #define MSI_IRQ_OFFSET			4
 
+/* Error IRQ bits */
+#define ERR_AER		BIT(5)	/* ECRC error */
+#define ERR_AXI		BIT(4)	/* AXI tag lookup fatal error */
+#define ERR_CORR	BIT(3)	/* Correctable error */
+#define ERR_NONFATAL	BIT(2)	/* Non-fatal error */
+#define ERR_FATAL	BIT(1)	/* Fatal error */
+#define ERR_SYS		BIT(0)	/* System (fatal, non-fatal, or correctable) */
+#define ERR_IRQ_ALL	(ERR_AER | ERR_AXI | ERR_CORR | \
+			 ERR_NONFATAL | ERR_FATAL | ERR_SYS)
+#define ERR_FATAL_IRQ	(ERR_FATAL | ERR_AXI)
+#define ERR_IRQ_STATUS_RAW	0x1c0
+#define ERR_IRQ_STATUS		0x1c4
+#define ERR_IRQ_ENABLE_SET	0x1c8
+#define ERR_IRQ_ENABLE_CLR	0x1cc
+
 /* Config space registers */
 #define DEBUG0				0x728
 
@@ -243,6 +259,28 @@ void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset)
 	writel(offset, ks_pcie->va_app_base + IRQ_EOI);
 }
 
+void ks_dw_pcie_enable_error_irq(void __iomem *reg_base)
+{
+	writel(ERR_IRQ_ALL, reg_base + ERR_IRQ_ENABLE_SET);
+}
+
+irqreturn_t ks_dw_pcie_handle_error_irq(struct device *dev,
+					void __iomem *reg_base)
+{
+	u32 status;
+
+	status = readl(reg_base + ERR_IRQ_STATUS_RAW) & ERR_IRQ_ALL;
+	if (!status)
+		return IRQ_NONE;
+
+	if (status & ERR_FATAL_IRQ)
+		dev_err(dev, "fatal error (status %#010x)\n", status);
+
+	/* Ack the IRQ; status bits are RW1C */
+	writel(status, reg_base + ERR_IRQ_STATUS);
+	return IRQ_HANDLED;
+}
+
 static void ks_dw_pcie_ack_legacy_irq(struct irq_data *d)
 {
 }
diff --git a/drivers/pci/host/pci-keystone.c b/drivers/pci/host/pci-keystone.c
index b71f55bb0315..6b8301ef21ca 100644
--- a/drivers/pci/host/pci-keystone.c
+++ b/drivers/pci/host/pci-keystone.c
@@ -15,6 +15,7 @@
 #include <linux/irqchip/chained_irq.h>
 #include <linux/clk.h>
 #include <linux/delay.h>
+#include <linux/interrupt.h>
 #include <linux/irqdomain.h>
 #include <linux/module.h>
 #include <linux/msi.h>
@@ -159,7 +160,7 @@ static void ks_pcie_legacy_irq_handler(struct irq_desc *desc)
 static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
 					   char *controller, int *num_irqs)
 {
-	int temp, max_host_irqs, legacy = 1, *host_irqs, ret = -EINVAL;
+	int temp, max_host_irqs, legacy = 1, *host_irqs;
 	struct device *dev = ks_pcie->pp.dev;
 	struct device_node *np_pcie = dev->of_node, **np_temp;
 
@@ -180,11 +181,15 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
 	*np_temp = of_find_node_by_name(np_pcie, controller);
 	if (!(*np_temp)) {
 		dev_err(dev, "Node for %s is absent\n", controller);
-		goto out;
+		return -EINVAL;
 	}
+
 	temp = of_irq_count(*np_temp);
-	if (!temp)
-		goto out;
+	if (!temp) {
+		dev_err(dev, "No IRQ entries in %s\n", controller);
+		return -EINVAL;
+	}
+
 	if (temp > max_host_irqs)
 		dev_warn(dev, "Too many %s interrupts defined %u\n",
 			(legacy ? "legacy" : "MSI"), temp);
@@ -198,12 +203,13 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
 		if (!host_irqs[temp])
 			break;
 	}
+
 	if (temp) {
 		*num_irqs = temp;
-		ret = 0;
+		return 0;
 	}
-out:
-	return ret;
+
+	return -EINVAL;
 }
 
 static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie)
@@ -226,6 +232,9 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie)
						 ks_pcie);
 		}
 	}
+
+	if (ks_pcie->error_irq > 0)
+		ks_dw_pcie_enable_error_irq(ks_pcie->va_app_base);
 }
 
 /*
@@ -289,6 +298,14 @@ static struct pcie_host_ops keystone_pcie_host_ops = {
 	.scan_bus = ks_dw_pcie_v3_65_scan_bus,
 };
 
+static irqreturn_t pcie_err_irq_handler(int irq, void *priv)
+{
+	struct keystone_pcie *ks_pcie = priv;
+
+	return ks_dw_pcie_handle_error_irq(ks_pcie->pp.dev,
+					   ks_pcie->va_app_base);
+}
+
 static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
 				   struct platform_device *pdev)
 {
@@ -309,6 +326,22 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
 		return ret;
 	}
 
+	/*
+	 * Index 0 is the platform interrupt for error interrupt
+	 * from RC.  This is optional.
+	 */
+	ks_pcie->error_irq = irq_of_parse_and_map(ks_pcie->np, 0);
+	if (ks_pcie->error_irq <= 0)
+		dev_info(&pdev->dev, "no error IRQ defined\n");
+	else {
+		if (request_irq(ks_pcie->error_irq, pcie_err_irq_handler,
+				IRQF_SHARED, "pcie-error-irq", ks_pcie) < 0) {
+			dev_err(&pdev->dev, "failed to request error IRQ %d\n",
+				ks_pcie->error_irq);
+			return ret;
+		}
+	}
+
 	pp->root_bus_nr = -1;
 	pp->ops = &keystone_pcie_host_ops;
 	ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np);
@@ -317,7 +350,7 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
 		return ret;
 	}
 
-	return ret;
+	return 0;
 }
 
 static const struct of_device_id ks_pcie_of_match[] = {
@@ -346,7 +379,7 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
346 | struct resource *res; | 379 | struct resource *res; |
347 | void __iomem *reg_p; | 380 | void __iomem *reg_p; |
348 | struct phy *phy; | 381 | struct phy *phy; |
349 | int ret = 0; | 382 | int ret; |
350 | 383 | ||
351 | ks_pcie = devm_kzalloc(&pdev->dev, sizeof(*ks_pcie), | 384 | ks_pcie = devm_kzalloc(&pdev->dev, sizeof(*ks_pcie), |
352 | GFP_KERNEL); | 385 | GFP_KERNEL); |
@@ -376,6 +409,7 @@ static int __init ks_pcie_probe(struct platform_device *pdev) | |||
376 | devm_release_mem_region(dev, res->start, resource_size(res)); | 409 | devm_release_mem_region(dev, res->start, resource_size(res)); |
377 | 410 | ||
378 | pp->dev = dev; | 411 | pp->dev = dev; |
412 | ks_pcie->np = dev->of_node; | ||
379 | platform_set_drvdata(pdev, ks_pcie); | 413 | platform_set_drvdata(pdev, ks_pcie); |
380 | ks_pcie->clk = devm_clk_get(dev, "pcie"); | 414 | ks_pcie->clk = devm_clk_get(dev, "pcie"); |
381 | if (IS_ERR(ks_pcie->clk)) { | 415 | if (IS_ERR(ks_pcie->clk)) { |
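One detail worth calling out in the hunks above: irq_of_parse_and_map() returns 0 when the interrupt is not described in the device tree, which is why the driver treats error_irq <= 0 as "optional IRQ absent" rather than as a hard failure. A hedged sketch of that convention in isolation; demo_get_optional_irq() and the -ENXIO mapping are assumptions for illustration, not driver code.

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_irq.h>

/* Map an optional interrupt at 'index'; 0 from the OF layer means the
 * interrupt is simply not described, reported here as -ENXIO. */
static int demo_get_optional_irq(struct device_node *np, int index)
{
        int irq = irq_of_parse_and_map(np, index);

        return irq > 0 ? irq : -ENXIO;
}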
diff --git a/drivers/pci/host/pci-keystone.h b/drivers/pci/host/pci-keystone.h index f0944e8c4b02..a5b0cb2ba4d7 100644 --- a/drivers/pci/host/pci-keystone.h +++ b/drivers/pci/host/pci-keystone.h | |||
@@ -29,6 +29,9 @@ struct keystone_pcie { | |||
29 | int msi_host_irqs[MAX_MSI_HOST_IRQS]; | 29 | int msi_host_irqs[MAX_MSI_HOST_IRQS]; |
30 | struct device_node *msi_intc_np; | 30 | struct device_node *msi_intc_np; |
31 | struct irq_domain *legacy_irq_domain; | 31 | struct irq_domain *legacy_irq_domain; |
32 | struct device_node *np; | ||
33 | |||
34 | int error_irq; | ||
32 | 35 | ||
33 | /* Application register space */ | 36 | /* Application register space */ |
34 | void __iomem *va_app_base; | 37 | void __iomem *va_app_base; |
@@ -42,6 +45,9 @@ phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp); | |||
42 | /* Keystone specific PCI controller APIs */ | 45 | /* Keystone specific PCI controller APIs */ |
43 | void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie); | 46 | void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie); |
44 | void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset); | 47 | void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset); |
48 | void ks_dw_pcie_enable_error_irq(void __iomem *reg_base); | ||
49 | irqreturn_t ks_dw_pcie_handle_error_irq(struct device *dev, | ||
50 | void __iomem *reg_base); | ||
45 | int ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, | 51 | int ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, |
46 | struct device_node *msi_intc_np); | 52 | struct device_node *msi_intc_np); |
47 | int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, | 53 | int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, |
diff --git a/drivers/pci/host/pci-mvebu.c b/drivers/pci/host/pci-mvebu.c index 53b79c5f0559..6b451df6502c 100644 --- a/drivers/pci/host/pci-mvebu.c +++ b/drivers/pci/host/pci-mvebu.c | |||
@@ -1003,6 +1003,7 @@ static void mvebu_pcie_msi_enable(struct mvebu_pcie *pcie) | |||
1003 | pcie->msi->dev = &pcie->pdev->dev; | 1003 | pcie->msi->dev = &pcie->pdev->dev; |
1004 | } | 1004 | } |
1005 | 1005 | ||
1006 | #ifdef CONFIG_PM_SLEEP | ||
1006 | static int mvebu_pcie_suspend(struct device *dev) | 1007 | static int mvebu_pcie_suspend(struct device *dev) |
1007 | { | 1008 | { |
1008 | struct mvebu_pcie *pcie; | 1009 | struct mvebu_pcie *pcie; |
@@ -1031,6 +1032,7 @@ static int mvebu_pcie_resume(struct device *dev) | |||
1031 | 1032 | ||
1032 | return 0; | 1033 | return 0; |
1033 | } | 1034 | } |
1035 | #endif | ||
1034 | 1036 | ||
1035 | static void mvebu_pcie_port_clk_put(void *data) | 1037 | static void mvebu_pcie_port_clk_put(void *data) |
1036 | { | 1038 | { |
@@ -1298,9 +1300,8 @@ static const struct of_device_id mvebu_pcie_of_match_table[] = { | |||
1298 | }; | 1300 | }; |
1299 | MODULE_DEVICE_TABLE(of, mvebu_pcie_of_match_table); | 1301 | MODULE_DEVICE_TABLE(of, mvebu_pcie_of_match_table); |
1300 | 1302 | ||
1301 | static struct dev_pm_ops mvebu_pcie_pm_ops = { | 1303 | static const struct dev_pm_ops mvebu_pcie_pm_ops = { |
1302 | .suspend_noirq = mvebu_pcie_suspend, | 1304 | SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mvebu_pcie_suspend, mvebu_pcie_resume) |
1303 | .resume_noirq = mvebu_pcie_resume, | ||
1304 | }; | 1305 | }; |
1305 | 1306 | ||
1306 | static struct platform_driver mvebu_pcie_driver = { | 1307 | static struct platform_driver mvebu_pcie_driver = { |
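SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() wires the suspend/resume callbacks into the noirq sleep hooks only when CONFIG_PM_SLEEP is enabled, which is why the callbacks themselves gain the matching #ifdef above. A short sketch of the same pattern in a generic driver; the foo_* names are placeholders.

#include <linux/device.h>
#include <linux/pm.h>

#ifdef CONFIG_PM_SLEEP
static int foo_suspend_noirq(struct device *dev)
{
        /* quiesce the hardware here */
        return 0;
}

static int foo_resume_noirq(struct device *dev)
{
        /* restore the hardware state here */
        return 0;
}
#endif

/* With CONFIG_PM_SLEEP disabled the macro expands to nothing, so the
 * #ifdef above keeps the callbacks from triggering "defined but not
 * used" warnings. */
static const struct dev_pm_ops foo_pm_ops = {
        SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(foo_suspend_noirq, foo_resume_noirq)
};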
diff --git a/drivers/pci/host/pci-thunder-ecam.c b/drivers/pci/host/pci-thunder-ecam.c index d71935cb2678..540d030613eb 100644 --- a/drivers/pci/host/pci-thunder-ecam.c +++ b/drivers/pci/host/pci-thunder-ecam.c | |||
@@ -13,18 +13,7 @@ | |||
13 | #include <linux/of.h> | 13 | #include <linux/of.h> |
14 | #include <linux/platform_device.h> | 14 | #include <linux/platform_device.h> |
15 | 15 | ||
16 | #include "pci-host-common.h" | 16 | #include "../ecam.h" |
17 | |||
18 | /* Mapping is standard ECAM */ | ||
19 | static void __iomem *thunder_ecam_map_bus(struct pci_bus *bus, | ||
20 | unsigned int devfn, | ||
21 | int where) | ||
22 | { | ||
23 | struct gen_pci *pci = bus->sysdata; | ||
24 | resource_size_t idx = bus->number - pci->cfg.bus_range->start; | ||
25 | |||
26 | return pci->cfg.win[idx] + ((devfn << 12) | where); | ||
27 | } | ||
28 | 17 | ||
29 | static void set_val(u32 v, int where, int size, u32 *val) | 18 | static void set_val(u32 v, int where, int size, u32 *val) |
30 | { | 19 | { |
@@ -99,7 +88,7 @@ static int handle_ea_bar(u32 e0, int bar, struct pci_bus *bus, | |||
99 | static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn, | 88 | static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn, |
100 | int where, int size, u32 *val) | 89 | int where, int size, u32 *val) |
101 | { | 90 | { |
102 | struct gen_pci *pci = bus->sysdata; | 91 | struct pci_config_window *cfg = bus->sysdata; |
103 | int where_a = where & ~3; | 92 | int where_a = where & ~3; |
104 | void __iomem *addr; | 93 | void __iomem *addr; |
105 | u32 node_bits; | 94 | u32 node_bits; |
@@ -129,7 +118,7 @@ static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn, | |||
129 | * the config space access window. Since we are working with | 118 | * the config space access window. Since we are working with |
130 | * the high-order 32 bits, shift everything down by 32 bits. | 119 | * the high-order 32 bits, shift everything down by 32 bits. |
131 | */ | 120 | */ |
132 | node_bits = (pci->cfg.res.start >> 32) & (1 << 12); | 121 | node_bits = (cfg->res.start >> 32) & (1 << 12); |
133 | 122 | ||
134 | v |= node_bits; | 123 | v |= node_bits; |
135 | set_val(v, where, size, val); | 124 | set_val(v, where, size, val); |
@@ -358,36 +347,24 @@ static int thunder_ecam_config_write(struct pci_bus *bus, unsigned int devfn, | |||
358 | return pci_generic_config_write(bus, devfn, where, size, val); | 347 | return pci_generic_config_write(bus, devfn, where, size, val); |
359 | } | 348 | } |
360 | 349 | ||
361 | static struct gen_pci_cfg_bus_ops thunder_ecam_bus_ops = { | 350 | static struct pci_ecam_ops pci_thunder_ecam_ops = { |
362 | .bus_shift = 20, | 351 | .bus_shift = 20, |
363 | .ops = { | 352 | .pci_ops = { |
364 | .map_bus = thunder_ecam_map_bus, | 353 | .map_bus = pci_ecam_map_bus, |
365 | .read = thunder_ecam_config_read, | 354 | .read = thunder_ecam_config_read, |
366 | .write = thunder_ecam_config_write, | 355 | .write = thunder_ecam_config_write, |
367 | } | 356 | } |
368 | }; | 357 | }; |
369 | 358 | ||
370 | static const struct of_device_id thunder_ecam_of_match[] = { | 359 | static const struct of_device_id thunder_ecam_of_match[] = { |
371 | { .compatible = "cavium,pci-host-thunder-ecam", | 360 | { .compatible = "cavium,pci-host-thunder-ecam" }, |
372 | .data = &thunder_ecam_bus_ops }, | ||
373 | |||
374 | { }, | 361 | { }, |
375 | }; | 362 | }; |
376 | MODULE_DEVICE_TABLE(of, thunder_ecam_of_match); | 363 | MODULE_DEVICE_TABLE(of, thunder_ecam_of_match); |
377 | 364 | ||
378 | static int thunder_ecam_probe(struct platform_device *pdev) | 365 | static int thunder_ecam_probe(struct platform_device *pdev) |
379 | { | 366 | { |
380 | struct device *dev = &pdev->dev; | 367 | return pci_host_common_probe(pdev, &pci_thunder_ecam_ops); |
381 | const struct of_device_id *of_id; | ||
382 | struct gen_pci *pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL); | ||
383 | |||
384 | if (!pci) | ||
385 | return -ENOMEM; | ||
386 | |||
387 | of_id = of_match_node(thunder_ecam_of_match, dev->of_node); | ||
388 | pci->cfg.ops = (struct gen_pci_cfg_bus_ops *)of_id->data; | ||
389 | |||
390 | return pci_host_common_probe(pdev, pci); | ||
391 | } | 368 | } |
392 | 369 | ||
393 | static struct platform_driver thunder_ecam_driver = { | 370 | static struct platform_driver thunder_ecam_driver = { |
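The switch to the shared pci_ecam_map_bus() removes the open-coded mapping that used to live in thunder_ecam_map_bus(). Standard ECAM with a bus_shift of 20 gives each bus a 1 MB window and each devfn a 4 KB slice of it, with 'where' indexing into that slice. A rough sketch of the address arithmetic, modelled on the removed helper rather than on the generic implementation:

#include <linux/types.h>

/* Illustrative ECAM offset: the bus window is selected by bus_shift
 * (20 here), devfn selects a 4 KB slice, and 'where' is the register
 * offset within that slice. */
static unsigned long ecam_offset_sketch(unsigned int busn,
                                        unsigned int bus_start,
                                        unsigned int devfn, int where,
                                        unsigned int bus_shift)
{
        unsigned long bus_off = (unsigned long)(busn - bus_start) << bus_shift;

        return bus_off | (devfn << 12) | where;
}

The PEM variant below used a bus_shift of 24 and a devfn shift of 16, but the shape of the computation was the same, which is what lets both drivers share the generic helper.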
diff --git a/drivers/pci/host/pci-thunder-pem.c b/drivers/pci/host/pci-thunder-pem.c index cabb92a514ac..9b8ab94f3c8c 100644 --- a/drivers/pci/host/pci-thunder-pem.c +++ b/drivers/pci/host/pci-thunder-pem.c | |||
@@ -20,34 +20,22 @@ | |||
20 | #include <linux/of_pci.h> | 20 | #include <linux/of_pci.h> |
21 | #include <linux/platform_device.h> | 21 | #include <linux/platform_device.h> |
22 | 22 | ||
23 | #include "pci-host-common.h" | 23 | #include "../ecam.h" |
24 | 24 | ||
25 | #define PEM_CFG_WR 0x28 | 25 | #define PEM_CFG_WR 0x28 |
26 | #define PEM_CFG_RD 0x30 | 26 | #define PEM_CFG_RD 0x30 |
27 | 27 | ||
28 | struct thunder_pem_pci { | 28 | struct thunder_pem_pci { |
29 | struct gen_pci gen_pci; | ||
30 | u32 ea_entry[3]; | 29 | u32 ea_entry[3]; |
31 | void __iomem *pem_reg_base; | 30 | void __iomem *pem_reg_base; |
32 | }; | 31 | }; |
33 | 32 | ||
34 | static void __iomem *thunder_pem_map_bus(struct pci_bus *bus, | ||
35 | unsigned int devfn, int where) | ||
36 | { | ||
37 | struct gen_pci *pci = bus->sysdata; | ||
38 | resource_size_t idx = bus->number - pci->cfg.bus_range->start; | ||
39 | |||
40 | return pci->cfg.win[idx] + ((devfn << 16) | where); | ||
41 | } | ||
42 | |||
43 | static int thunder_pem_bridge_read(struct pci_bus *bus, unsigned int devfn, | 33 | static int thunder_pem_bridge_read(struct pci_bus *bus, unsigned int devfn, |
44 | int where, int size, u32 *val) | 34 | int where, int size, u32 *val) |
45 | { | 35 | { |
46 | u64 read_val; | 36 | u64 read_val; |
47 | struct thunder_pem_pci *pem_pci; | 37 | struct pci_config_window *cfg = bus->sysdata; |
48 | struct gen_pci *pci = bus->sysdata; | 38 | struct thunder_pem_pci *pem_pci = (struct thunder_pem_pci *)cfg->priv; |
49 | |||
50 | pem_pci = container_of(pci, struct thunder_pem_pci, gen_pci); | ||
51 | 39 | ||
52 | if (devfn != 0 || where >= 2048) { | 40 | if (devfn != 0 || where >= 2048) { |
53 | *val = ~0; | 41 | *val = ~0; |
@@ -132,17 +120,17 @@ static int thunder_pem_bridge_read(struct pci_bus *bus, unsigned int devfn, | |||
132 | static int thunder_pem_config_read(struct pci_bus *bus, unsigned int devfn, | 120 | static int thunder_pem_config_read(struct pci_bus *bus, unsigned int devfn, |
133 | int where, int size, u32 *val) | 121 | int where, int size, u32 *val) |
134 | { | 122 | { |
135 | struct gen_pci *pci = bus->sysdata; | 123 | struct pci_config_window *cfg = bus->sysdata; |
136 | 124 | ||
137 | if (bus->number < pci->cfg.bus_range->start || | 125 | if (bus->number < cfg->busr.start || |
138 | bus->number > pci->cfg.bus_range->end) | 126 | bus->number > cfg->busr.end) |
139 | return PCIBIOS_DEVICE_NOT_FOUND; | 127 | return PCIBIOS_DEVICE_NOT_FOUND; |
140 | 128 | ||
141 | /* | 129 | /* |
142 | * The first device on the bus is the PEM PCIe bridge. | 130 | * The first device on the bus is the PEM PCIe bridge. |
143 | * Special case its config access. | 131 | * Special case its config access. |
144 | */ | 132 | */ |
145 | if (bus->number == pci->cfg.bus_range->start) | 133 | if (bus->number == cfg->busr.start) |
146 | return thunder_pem_bridge_read(bus, devfn, where, size, val); | 134 | return thunder_pem_bridge_read(bus, devfn, where, size, val); |
147 | 135 | ||
148 | return pci_generic_config_read(bus, devfn, where, size, val); | 136 | return pci_generic_config_read(bus, devfn, where, size, val); |
@@ -153,11 +141,11 @@ static int thunder_pem_config_read(struct pci_bus *bus, unsigned int devfn, | |||
153 | * reserved bits, this makes the code simpler and is OK as the bits | 141 | * reserved bits, this makes the code simpler and is OK as the bits |
154 | * are not affected by writing zeros to them. | 142 | * are not affected by writing zeros to them. |
155 | */ | 143 | */ |
156 | static u32 thunder_pem_bridge_w1c_bits(int where) | 144 | static u32 thunder_pem_bridge_w1c_bits(u64 where_aligned) |
157 | { | 145 | { |
158 | u32 w1c_bits = 0; | 146 | u32 w1c_bits = 0; |
159 | 147 | ||
160 | switch (where & ~3) { | 148 | switch (where_aligned) { |
161 | case 0x04: /* Command/Status */ | 149 | case 0x04: /* Command/Status */ |
162 | case 0x1c: /* Base and I/O Limit/Secondary Status */ | 150 | case 0x1c: /* Base and I/O Limit/Secondary Status */ |
163 | w1c_bits = 0xff000000; | 151 | w1c_bits = 0xff000000; |
@@ -184,15 +172,36 @@ static u32 thunder_pem_bridge_w1c_bits(int where) | |||
184 | return w1c_bits; | 172 | return w1c_bits; |
185 | } | 173 | } |
186 | 174 | ||
175 | /* Some bits must be written to one so they appear to be read-only. */ | ||
176 | static u32 thunder_pem_bridge_w1_bits(u64 where_aligned) | ||
177 | { | ||
178 | u32 w1_bits; | ||
179 | |||
180 | switch (where_aligned) { | ||
181 | case 0x1c: /* I/O Base / I/O Limit, Secondary Status */ | ||
182 | /* Force 32-bit I/O addressing. */ | ||
183 | w1_bits = 0x0101; | ||
184 | break; | ||
185 | case 0x24: /* Prefetchable Memory Base / Prefetchable Memory Limit */ | ||
186 | /* Force 64-bit addressing */ | ||
187 | w1_bits = 0x00010001; | ||
188 | break; | ||
189 | default: | ||
190 | w1_bits = 0; | ||
191 | break; | ||
192 | } | ||
193 | return w1_bits; | ||
194 | } | ||
195 | |||
187 | static int thunder_pem_bridge_write(struct pci_bus *bus, unsigned int devfn, | 196 | static int thunder_pem_bridge_write(struct pci_bus *bus, unsigned int devfn, |
188 | int where, int size, u32 val) | 197 | int where, int size, u32 val) |
189 | { | 198 | { |
190 | struct gen_pci *pci = bus->sysdata; | 199 | struct pci_config_window *cfg = bus->sysdata; |
191 | struct thunder_pem_pci *pem_pci; | 200 | struct thunder_pem_pci *pem_pci = (struct thunder_pem_pci *)cfg->priv; |
192 | u64 write_val, read_val; | 201 | u64 write_val, read_val; |
202 | u64 where_aligned = where & ~3ull; | ||
193 | u32 mask = 0; | 203 | u32 mask = 0; |
194 | 204 | ||
195 | pem_pci = container_of(pci, struct thunder_pem_pci, gen_pci); | ||
196 | 205 | ||
197 | if (devfn != 0 || where >= 2048) | 206 | if (devfn != 0 || where >= 2048) |
198 | return PCIBIOS_DEVICE_NOT_FOUND; | 207 | return PCIBIOS_DEVICE_NOT_FOUND; |
@@ -205,8 +214,7 @@ static int thunder_pem_bridge_write(struct pci_bus *bus, unsigned int devfn, | |||
205 | */ | 214 | */ |
206 | switch (size) { | 215 | switch (size) { |
207 | case 1: | 216 | case 1: |
208 | read_val = where & ~3ull; | 217 | writeq(where_aligned, pem_pci->pem_reg_base + PEM_CFG_RD); |
209 | writeq(read_val, pem_pci->pem_reg_base + PEM_CFG_RD); | ||
210 | read_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD); | 218 | read_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD); |
211 | read_val >>= 32; | 219 | read_val >>= 32; |
212 | mask = ~(0xff << (8 * (where & 3))); | 220 | mask = ~(0xff << (8 * (where & 3))); |
@@ -215,8 +223,7 @@ static int thunder_pem_bridge_write(struct pci_bus *bus, unsigned int devfn, | |||
215 | val |= (u32)read_val; | 223 | val |= (u32)read_val; |
216 | break; | 224 | break; |
217 | case 2: | 225 | case 2: |
218 | read_val = where & ~3ull; | 226 | writeq(where_aligned, pem_pci->pem_reg_base + PEM_CFG_RD); |
219 | writeq(read_val, pem_pci->pem_reg_base + PEM_CFG_RD); | ||
220 | read_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD); | 227 | read_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD); |
221 | read_val >>= 32; | 228 | read_val >>= 32; |
222 | mask = ~(0xffff << (8 * (where & 3))); | 229 | mask = ~(0xffff << (8 * (where & 3))); |
@@ -244,11 +251,17 @@ static int thunder_pem_bridge_write(struct pci_bus *bus, unsigned int devfn, | |||
244 | } | 251 | } |
245 | 252 | ||
246 | /* | 253 | /* |
254 | * Some bits must be read-only with value of one. Since the | ||
255 | * access method allows these to be cleared if a zero is | ||
256 | * written, force them to one before writing. | ||
257 | */ | ||
258 | val |= thunder_pem_bridge_w1_bits(where_aligned); | ||
259 | |||
260 | /* | ||
247 | * Low order bits are the config address, the high order 32 | 261 | * Low order bits are the config address, the high order 32 |
248 | * bits are the data to be written. | 262 | * bits are the data to be written. |
249 | */ | 263 | */ |
250 | write_val = where & ~3ull; | 264 | write_val = (((u64)val) << 32) | where_aligned; |
251 | write_val |= (((u64)val) << 32); | ||
252 | writeq(write_val, pem_pci->pem_reg_base + PEM_CFG_WR); | 265 | writeq(write_val, pem_pci->pem_reg_base + PEM_CFG_WR); |
253 | return PCIBIOS_SUCCESSFUL; | 266 | return PCIBIOS_SUCCESSFUL; |
254 | } | 267 | } |
@@ -256,53 +269,38 @@ static int thunder_pem_bridge_write(struct pci_bus *bus, unsigned int devfn, | |||
256 | static int thunder_pem_config_write(struct pci_bus *bus, unsigned int devfn, | 269 | static int thunder_pem_config_write(struct pci_bus *bus, unsigned int devfn, |
257 | int where, int size, u32 val) | 270 | int where, int size, u32 val) |
258 | { | 271 | { |
259 | struct gen_pci *pci = bus->sysdata; | 272 | struct pci_config_window *cfg = bus->sysdata; |
260 | 273 | ||
261 | if (bus->number < pci->cfg.bus_range->start || | 274 | if (bus->number < cfg->busr.start || |
262 | bus->number > pci->cfg.bus_range->end) | 275 | bus->number > cfg->busr.end) |
263 | return PCIBIOS_DEVICE_NOT_FOUND; | 276 | return PCIBIOS_DEVICE_NOT_FOUND; |
264 | /* | 277 | /* |
265 | * The first device on the bus is the PEM PCIe bridge. | 278 | * The first device on the bus is the PEM PCIe bridge. |
266 | * Special case its config access. | 279 | * Special case its config access. |
267 | */ | 280 | */ |
268 | if (bus->number == pci->cfg.bus_range->start) | 281 | if (bus->number == cfg->busr.start) |
269 | return thunder_pem_bridge_write(bus, devfn, where, size, val); | 282 | return thunder_pem_bridge_write(bus, devfn, where, size, val); |
270 | 283 | ||
271 | 284 | ||
272 | return pci_generic_config_write(bus, devfn, where, size, val); | 285 | return pci_generic_config_write(bus, devfn, where, size, val); |
273 | } | 286 | } |
274 | 287 | ||
275 | static struct gen_pci_cfg_bus_ops thunder_pem_bus_ops = { | 288 | static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg) |
276 | .bus_shift = 24, | ||
277 | .ops = { | ||
278 | .map_bus = thunder_pem_map_bus, | ||
279 | .read = thunder_pem_config_read, | ||
280 | .write = thunder_pem_config_write, | ||
281 | } | ||
282 | }; | ||
283 | |||
284 | static const struct of_device_id thunder_pem_of_match[] = { | ||
285 | { .compatible = "cavium,pci-host-thunder-pem", | ||
286 | .data = &thunder_pem_bus_ops }, | ||
287 | |||
288 | { }, | ||
289 | }; | ||
290 | MODULE_DEVICE_TABLE(of, thunder_pem_of_match); | ||
291 | |||
292 | static int thunder_pem_probe(struct platform_device *pdev) | ||
293 | { | 289 | { |
294 | struct device *dev = &pdev->dev; | ||
295 | const struct of_device_id *of_id; | ||
296 | resource_size_t bar4_start; | 290 | resource_size_t bar4_start; |
297 | struct resource *res_pem; | 291 | struct resource *res_pem; |
298 | struct thunder_pem_pci *pem_pci; | 292 | struct thunder_pem_pci *pem_pci; |
293 | struct platform_device *pdev; | ||
294 | |||
295 | /* Only OF support for now */ | ||
296 | if (!dev->of_node) | ||
297 | return -EINVAL; | ||
299 | 298 | ||
300 | pem_pci = devm_kzalloc(dev, sizeof(*pem_pci), GFP_KERNEL); | 299 | pem_pci = devm_kzalloc(dev, sizeof(*pem_pci), GFP_KERNEL); |
301 | if (!pem_pci) | 300 | if (!pem_pci) |
302 | return -ENOMEM; | 301 | return -ENOMEM; |
303 | 302 | ||
304 | of_id = of_match_node(thunder_pem_of_match, dev->of_node); | 303 | pdev = to_platform_device(dev); |
305 | pem_pci->gen_pci.cfg.ops = (struct gen_pci_cfg_bus_ops *)of_id->data; | ||
306 | 304 | ||
307 | /* | 305 | /* |
308 | * The second register range is the PEM bridge to the PCIe | 306 | * The second register range is the PEM bridge to the PCIe |
@@ -330,7 +328,29 @@ static int thunder_pem_probe(struct platform_device *pdev) | |||
330 | pem_pci->ea_entry[1] = (u32)(res_pem->end - bar4_start) & ~3u; | 328 | pem_pci->ea_entry[1] = (u32)(res_pem->end - bar4_start) & ~3u; |
331 | pem_pci->ea_entry[2] = (u32)(bar4_start >> 32); | 329 | pem_pci->ea_entry[2] = (u32)(bar4_start >> 32); |
332 | 330 | ||
333 | return pci_host_common_probe(pdev, &pem_pci->gen_pci); | 331 | cfg->priv = pem_pci; |
332 | return 0; | ||
333 | } | ||
334 | |||
335 | static struct pci_ecam_ops pci_thunder_pem_ops = { | ||
336 | .bus_shift = 24, | ||
337 | .init = thunder_pem_init, | ||
338 | .pci_ops = { | ||
339 | .map_bus = pci_ecam_map_bus, | ||
340 | .read = thunder_pem_config_read, | ||
341 | .write = thunder_pem_config_write, | ||
342 | } | ||
343 | }; | ||
344 | |||
345 | static const struct of_device_id thunder_pem_of_match[] = { | ||
346 | { .compatible = "cavium,pci-host-thunder-pem" }, | ||
347 | { }, | ||
348 | }; | ||
349 | MODULE_DEVICE_TABLE(of, thunder_pem_of_match); | ||
350 | |||
351 | static int thunder_pem_probe(struct platform_device *pdev) | ||
352 | { | ||
353 | return pci_host_common_probe(pdev, &pci_thunder_pem_ops); | ||
334 | } | 354 | } |
335 | 355 | ||
336 | static struct platform_driver thunder_pem_driver = { | 356 | static struct platform_driver thunder_pem_driver = { |
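The PEM bridge write path has to emulate a partial (8- or 16-bit) write on top of a 32-bit indirect register interface, and while doing so it must avoid re-writing W1C status bits (which would clear them) and must force the bits that are supposed to read as one. A compact, hedged sketch of that merge step, using masks like those computed by thunder_pem_bridge_w1c_bits() and thunder_pem_bridge_w1_bits(); the helper name and its exact shape are illustrative, not the driver's code.

#include <linux/types.h>

/* Merge a partial write 'val' of 'size' bytes at byte offset (where & 3)
 * into the 32-bit word 'read_val' read back from the bridge, dropping
 * write-1-to-clear bits so they are not cleared by accident and forcing
 * the bits that must always be written as one. */
static u32 pem_merge_partial_write(u32 read_val, u32 val, int where,
                                   int size, u32 w1c_bits, u32 w1_bits)
{
        u32 mask = (size == 1) ? 0xff : 0xffff;
        int shift = 8 * (where & 3);

        read_val &= ~(mask << shift);   /* drop the bytes being replaced */
        read_val &= ~w1c_bits;          /* never re-write W1C status bits */
        val = read_val | ((val & mask) << shift);

        return val | w1_bits;           /* force the read-only-one bits */
}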
diff --git a/drivers/pci/host/pcie-armada8k.c b/drivers/pci/host/pcie-armada8k.c new file mode 100644 index 000000000000..55723567b5d4 --- /dev/null +++ b/drivers/pci/host/pcie-armada8k.c | |||
@@ -0,0 +1,262 @@ | |||
1 | /* | ||
2 | * PCIe host controller driver for Marvell Armada-8K SoCs | ||
3 | * | ||
4 | * Armada-8K PCIe Glue Layer Source Code | ||
5 | * | ||
6 | * Copyright (C) 2016 Marvell Technology Group Ltd. | ||
7 | * | ||
8 | * This file is licensed under the terms of the GNU General Public | ||
9 | * License version 2. This program is licensed "as is" without any | ||
10 | * warranty of any kind, whether express or implied. | ||
11 | */ | ||
12 | |||
13 | #include <linux/clk.h> | ||
14 | #include <linux/delay.h> | ||
15 | #include <linux/interrupt.h> | ||
16 | #include <linux/kernel.h> | ||
17 | #include <linux/module.h> | ||
18 | #include <linux/of.h> | ||
19 | #include <linux/pci.h> | ||
20 | #include <linux/phy/phy.h> | ||
21 | #include <linux/platform_device.h> | ||
22 | #include <linux/resource.h> | ||
23 | #include <linux/of_pci.h> | ||
24 | #include <linux/of_irq.h> | ||
25 | |||
26 | #include "pcie-designware.h" | ||
27 | |||
28 | struct armada8k_pcie { | ||
29 | void __iomem *base; | ||
30 | struct clk *clk; | ||
31 | struct pcie_port pp; | ||
32 | }; | ||
33 | |||
34 | #define PCIE_VENDOR_REGS_OFFSET 0x8000 | ||
35 | |||
36 | #define PCIE_GLOBAL_CONTROL_REG 0x0 | ||
37 | #define PCIE_APP_LTSSM_EN BIT(2) | ||
38 | #define PCIE_DEVICE_TYPE_SHIFT 4 | ||
39 | #define PCIE_DEVICE_TYPE_MASK 0xF | ||
40 | #define PCIE_DEVICE_TYPE_RC 0x4 /* Root complex */ | ||
41 | |||
42 | #define PCIE_GLOBAL_STATUS_REG 0x8 | ||
43 | #define PCIE_GLB_STS_RDLH_LINK_UP BIT(1) | ||
44 | #define PCIE_GLB_STS_PHY_LINK_UP BIT(9) | ||
45 | |||
46 | #define PCIE_GLOBAL_INT_CAUSE1_REG 0x1C | ||
47 | #define PCIE_GLOBAL_INT_MASK1_REG 0x20 | ||
48 | #define PCIE_INT_A_ASSERT_MASK BIT(9) | ||
49 | #define PCIE_INT_B_ASSERT_MASK BIT(10) | ||
50 | #define PCIE_INT_C_ASSERT_MASK BIT(11) | ||
51 | #define PCIE_INT_D_ASSERT_MASK BIT(12) | ||
52 | |||
53 | #define PCIE_ARCACHE_TRC_REG 0x50 | ||
54 | #define PCIE_AWCACHE_TRC_REG 0x54 | ||
55 | #define PCIE_ARUSER_REG 0x5C | ||
56 | #define PCIE_AWUSER_REG 0x60 | ||
57 | /* | ||
57 | * AR/AW Cache defaults: Normal memory, Write-Back, Read / Write | ||
59 | * allocate | ||
60 | */ | ||
61 | #define ARCACHE_DEFAULT_VALUE 0x3511 | ||
62 | #define AWCACHE_DEFAULT_VALUE 0x5311 | ||
63 | |||
64 | #define DOMAIN_OUTER_SHAREABLE 0x2 | ||
65 | #define AX_USER_DOMAIN_MASK 0x3 | ||
66 | #define AX_USER_DOMAIN_SHIFT 4 | ||
67 | |||
68 | #define to_armada8k_pcie(x) container_of(x, struct armada8k_pcie, pp) | ||
69 | |||
70 | static int armada8k_pcie_link_up(struct pcie_port *pp) | ||
71 | { | ||
72 | struct armada8k_pcie *pcie = to_armada8k_pcie(pp); | ||
73 | u32 reg; | ||
74 | u32 mask = PCIE_GLB_STS_RDLH_LINK_UP | PCIE_GLB_STS_PHY_LINK_UP; | ||
75 | |||
76 | reg = readl(pcie->base + PCIE_GLOBAL_STATUS_REG); | ||
77 | |||
78 | if ((reg & mask) == mask) | ||
79 | return 1; | ||
80 | |||
81 | dev_dbg(pp->dev, "No link detected (Global-Status: 0x%08x).\n", reg); | ||
82 | return 0; | ||
83 | } | ||
84 | |||
85 | static void armada8k_pcie_establish_link(struct pcie_port *pp) | ||
86 | { | ||
87 | struct armada8k_pcie *pcie = to_armada8k_pcie(pp); | ||
88 | void __iomem *base = pcie->base; | ||
89 | u32 reg; | ||
90 | |||
91 | if (!dw_pcie_link_up(pp)) { | ||
92 | /* Disable LTSSM state machine to enable configuration */ | ||
93 | reg = readl(base + PCIE_GLOBAL_CONTROL_REG); | ||
94 | reg &= ~(PCIE_APP_LTSSM_EN); | ||
95 | writel(reg, base + PCIE_GLOBAL_CONTROL_REG); | ||
96 | } | ||
97 | |||
98 | /* Set the device to root complex mode */ | ||
99 | reg = readl(base + PCIE_GLOBAL_CONTROL_REG); | ||
100 | reg &= ~(PCIE_DEVICE_TYPE_MASK << PCIE_DEVICE_TYPE_SHIFT); | ||
101 | reg |= PCIE_DEVICE_TYPE_RC << PCIE_DEVICE_TYPE_SHIFT; | ||
102 | writel(reg, base + PCIE_GLOBAL_CONTROL_REG); | ||
103 | |||
104 | /* Set the PCIe master AxCache attributes */ | ||
105 | writel(ARCACHE_DEFAULT_VALUE, base + PCIE_ARCACHE_TRC_REG); | ||
106 | writel(AWCACHE_DEFAULT_VALUE, base + PCIE_AWCACHE_TRC_REG); | ||
107 | |||
108 | /* Set the PCIe master AxDomain attributes */ | ||
109 | reg = readl(base + PCIE_ARUSER_REG); | ||
110 | reg &= ~(AX_USER_DOMAIN_MASK << AX_USER_DOMAIN_SHIFT); | ||
111 | reg |= DOMAIN_OUTER_SHAREABLE << AX_USER_DOMAIN_SHIFT; | ||
112 | writel(reg, base + PCIE_ARUSER_REG); | ||
113 | |||
114 | reg = readl(base + PCIE_AWUSER_REG); | ||
115 | reg &= ~(AX_USER_DOMAIN_MASK << AX_USER_DOMAIN_SHIFT); | ||
116 | reg |= DOMAIN_OUTER_SHAREABLE << AX_USER_DOMAIN_SHIFT; | ||
117 | writel(reg, base + PCIE_AWUSER_REG); | ||
118 | |||
119 | /* Enable INT A-D interrupts */ | ||
120 | reg = readl(base + PCIE_GLOBAL_INT_MASK1_REG); | ||
121 | reg |= PCIE_INT_A_ASSERT_MASK | PCIE_INT_B_ASSERT_MASK | | ||
122 | PCIE_INT_C_ASSERT_MASK | PCIE_INT_D_ASSERT_MASK; | ||
123 | writel(reg, base + PCIE_GLOBAL_INT_MASK1_REG); | ||
124 | |||
125 | if (!dw_pcie_link_up(pp)) { | ||
126 | /* Configuration done. Start LTSSM */ | ||
127 | reg = readl(base + PCIE_GLOBAL_CONTROL_REG); | ||
128 | reg |= PCIE_APP_LTSSM_EN; | ||
129 | writel(reg, base + PCIE_GLOBAL_CONTROL_REG); | ||
130 | } | ||
131 | |||
132 | /* Wait until the link becomes active again */ | ||
133 | if (dw_pcie_wait_for_link(pp)) | ||
134 | dev_err(pp->dev, "Link not up after reconfiguration\n"); | ||
135 | } | ||
136 | |||
137 | static void armada8k_pcie_host_init(struct pcie_port *pp) | ||
138 | { | ||
139 | dw_pcie_setup_rc(pp); | ||
140 | armada8k_pcie_establish_link(pp); | ||
141 | } | ||
142 | |||
143 | static irqreturn_t armada8k_pcie_irq_handler(int irq, void *arg) | ||
144 | { | ||
145 | struct pcie_port *pp = arg; | ||
146 | struct armada8k_pcie *pcie = to_armada8k_pcie(pp); | ||
147 | void __iomem *base = pcie->base; | ||
148 | u32 val; | ||
149 | |||
150 | /* | ||
151 | * Interrupts are directly handled by the device driver of the | ||
152 | * PCI device. However, they are also latched into the PCIe | ||
153 | * controller, so we simply discard them. | ||
154 | */ | ||
155 | val = readl(base + PCIE_GLOBAL_INT_CAUSE1_REG); | ||
156 | writel(val, base + PCIE_GLOBAL_INT_CAUSE1_REG); | ||
157 | |||
158 | return IRQ_HANDLED; | ||
159 | } | ||
160 | |||
161 | static struct pcie_host_ops armada8k_pcie_host_ops = { | ||
162 | .link_up = armada8k_pcie_link_up, | ||
163 | .host_init = armada8k_pcie_host_init, | ||
164 | }; | ||
165 | |||
166 | static int armada8k_add_pcie_port(struct pcie_port *pp, | ||
167 | struct platform_device *pdev) | ||
168 | { | ||
169 | struct device *dev = &pdev->dev; | ||
170 | int ret; | ||
171 | |||
172 | pp->root_bus_nr = -1; | ||
173 | pp->ops = &armada8k_pcie_host_ops; | ||
174 | |||
175 | pp->irq = platform_get_irq(pdev, 0); | ||
176 | if (!pp->irq) { | ||
177 | dev_err(dev, "failed to get irq for port\n"); | ||
178 | return -ENODEV; | ||
179 | } | ||
180 | |||
181 | ret = devm_request_irq(dev, pp->irq, armada8k_pcie_irq_handler, | ||
182 | IRQF_SHARED, "armada8k-pcie", pp); | ||
183 | if (ret) { | ||
184 | dev_err(dev, "failed to request irq %d\n", pp->irq); | ||
185 | return ret; | ||
186 | } | ||
187 | |||
188 | ret = dw_pcie_host_init(pp); | ||
189 | if (ret) { | ||
190 | dev_err(dev, "failed to initialize host: %d\n", ret); | ||
191 | return ret; | ||
192 | } | ||
193 | |||
194 | return 0; | ||
195 | } | ||
196 | |||
197 | static int armada8k_pcie_probe(struct platform_device *pdev) | ||
198 | { | ||
199 | struct armada8k_pcie *pcie; | ||
200 | struct pcie_port *pp; | ||
201 | struct device *dev = &pdev->dev; | ||
202 | struct resource *base; | ||
203 | int ret; | ||
204 | |||
205 | pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); | ||
206 | if (!pcie) | ||
207 | return -ENOMEM; | ||
208 | |||
209 | pcie->clk = devm_clk_get(dev, NULL); | ||
210 | if (IS_ERR(pcie->clk)) | ||
211 | return PTR_ERR(pcie->clk); | ||
212 | |||
213 | clk_prepare_enable(pcie->clk); | ||
214 | |||
215 | pp = &pcie->pp; | ||
216 | pp->dev = dev; | ||
217 | platform_set_drvdata(pdev, pcie); | ||
218 | |||
219 | /* Get the dw-pcie unit configuration/control registers base. */ | ||
220 | base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl"); | ||
221 | pp->dbi_base = devm_ioremap_resource(dev, base); | ||
222 | if (IS_ERR(pp->dbi_base)) { | ||
223 | dev_err(dev, "couldn't remap regs base %p\n", base); | ||
224 | ret = PTR_ERR(pp->dbi_base); | ||
225 | goto fail; | ||
226 | } | ||
227 | |||
228 | pcie->base = pp->dbi_base + PCIE_VENDOR_REGS_OFFSET; | ||
229 | |||
230 | ret = armada8k_add_pcie_port(pp, pdev); | ||
231 | if (ret) | ||
232 | goto fail; | ||
233 | |||
234 | return 0; | ||
235 | |||
236 | fail: | ||
237 | if (!IS_ERR(pcie->clk)) | ||
238 | clk_disable_unprepare(pcie->clk); | ||
239 | |||
240 | return ret; | ||
241 | } | ||
242 | |||
243 | static const struct of_device_id armada8k_pcie_of_match[] = { | ||
244 | { .compatible = "marvell,armada8k-pcie", }, | ||
245 | {}, | ||
246 | }; | ||
247 | MODULE_DEVICE_TABLE(of, armada8k_pcie_of_match); | ||
248 | |||
249 | static struct platform_driver armada8k_pcie_driver = { | ||
250 | .probe = armada8k_pcie_probe, | ||
251 | .driver = { | ||
252 | .name = "armada8k-pcie", | ||
253 | .of_match_table = of_match_ptr(armada8k_pcie_of_match), | ||
254 | }, | ||
255 | }; | ||
256 | |||
257 | module_platform_driver(armada8k_pcie_driver); | ||
258 | |||
259 | MODULE_DESCRIPTION("Armada 8k PCIe host controller driver"); | ||
260 | MODULE_AUTHOR("Yehuda Yitshak <yehuday@marvell.com>"); | ||
261 | MODULE_AUTHOR("Shadi Ammouri <shadi@marvell.com>"); | ||
262 | MODULE_LICENSE("GPL v2"); | ||
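Most of the register programming in this new glue driver is the same read-modify-write idiom: read the vendor register, clear a field at its shift with its mask, OR in the new value, and write it back (the device-type field, the AxDomain attributes and the INTx mask are all updated this way). A generic sketch of the idiom with made-up register and field names:

#include <linux/io.h>

#define DEMO_CTRL_REG           0x0
#define DEMO_FIELD_SHIFT        4
#define DEMO_FIELD_MASK         0xf

/* Update one bit-field of a memory-mapped 32-bit control register. */
static void demo_update_field(void __iomem *base, u32 new_val)
{
        u32 reg = readl(base + DEMO_CTRL_REG);

        reg &= ~(DEMO_FIELD_MASK << DEMO_FIELD_SHIFT);
        reg |= (new_val & DEMO_FIELD_MASK) << DEMO_FIELD_SHIFT;
        writel(reg, base + DEMO_CTRL_REG);
}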
diff --git a/drivers/pci/host/pcie-designware.c b/drivers/pci/host/pcie-designware.c index a4cccd356304..aafd766546f3 100644 --- a/drivers/pci/host/pcie-designware.c +++ b/drivers/pci/host/pcie-designware.c | |||
@@ -434,7 +434,6 @@ int dw_pcie_host_init(struct pcie_port *pp) | |||
434 | struct platform_device *pdev = to_platform_device(pp->dev); | 434 | struct platform_device *pdev = to_platform_device(pp->dev); |
435 | struct pci_bus *bus, *child; | 435 | struct pci_bus *bus, *child; |
436 | struct resource *cfg_res; | 436 | struct resource *cfg_res; |
437 | u32 val; | ||
438 | int i, ret; | 437 | int i, ret; |
439 | LIST_HEAD(res); | 438 | LIST_HEAD(res); |
440 | struct resource_entry *win; | 439 | struct resource_entry *win; |
@@ -544,25 +543,6 @@ int dw_pcie_host_init(struct pcie_port *pp) | |||
544 | if (pp->ops->host_init) | 543 | if (pp->ops->host_init) |
545 | pp->ops->host_init(pp); | 544 | pp->ops->host_init(pp); |
546 | 545 | ||
547 | /* | ||
548 | * If the platform provides ->rd_other_conf, it means the platform | ||
549 | * uses its own address translation component rather than ATU, so | ||
550 | * we should not program the ATU here. | ||
551 | */ | ||
552 | if (!pp->ops->rd_other_conf) | ||
553 | dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX1, | ||
554 | PCIE_ATU_TYPE_MEM, pp->mem_base, | ||
555 | pp->mem_bus_addr, pp->mem_size); | ||
556 | |||
557 | dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); | ||
558 | |||
559 | /* program correct class for RC */ | ||
560 | dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI); | ||
561 | |||
562 | dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val); | ||
563 | val |= PORT_LOGIC_SPEED_CHANGE; | ||
564 | dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val); | ||
565 | |||
566 | pp->root_bus_nr = pp->busn->start; | 546 | pp->root_bus_nr = pp->busn->start; |
567 | if (IS_ENABLED(CONFIG_PCI_MSI)) { | 547 | if (IS_ENABLED(CONFIG_PCI_MSI)) { |
568 | bus = pci_scan_root_bus_msi(pp->dev, pp->root_bus_nr, | 548 | bus = pci_scan_root_bus_msi(pp->dev, pp->root_bus_nr, |
@@ -728,8 +708,6 @@ static struct pci_ops dw_pcie_ops = { | |||
728 | void dw_pcie_setup_rc(struct pcie_port *pp) | 708 | void dw_pcie_setup_rc(struct pcie_port *pp) |
729 | { | 709 | { |
730 | u32 val; | 710 | u32 val; |
731 | u32 membase; | ||
732 | u32 memlimit; | ||
733 | 711 | ||
734 | /* set the number of lanes */ | 712 | /* set the number of lanes */ |
735 | dw_pcie_readl_rc(pp, PCIE_PORT_LINK_CONTROL, &val); | 713 | dw_pcie_readl_rc(pp, PCIE_PORT_LINK_CONTROL, &val); |
@@ -788,18 +766,31 @@ void dw_pcie_setup_rc(struct pcie_port *pp) | |||
788 | val |= 0x00010100; | 766 | val |= 0x00010100; |
789 | dw_pcie_writel_rc(pp, val, PCI_PRIMARY_BUS); | 767 | dw_pcie_writel_rc(pp, val, PCI_PRIMARY_BUS); |
790 | 768 | ||
791 | /* setup memory base, memory limit */ | ||
792 | membase = ((u32)pp->mem_base & 0xfff00000) >> 16; | ||
793 | memlimit = (pp->mem_size + (u32)pp->mem_base) & 0xfff00000; | ||
794 | val = memlimit | membase; | ||
795 | dw_pcie_writel_rc(pp, val, PCI_MEMORY_BASE); | ||
796 | |||
797 | /* setup command register */ | 769 | /* setup command register */ |
798 | dw_pcie_readl_rc(pp, PCI_COMMAND, &val); | 770 | dw_pcie_readl_rc(pp, PCI_COMMAND, &val); |
799 | val &= 0xffff0000; | 771 | val &= 0xffff0000; |
800 | val |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY | | 772 | val |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY | |
801 | PCI_COMMAND_MASTER | PCI_COMMAND_SERR; | 773 | PCI_COMMAND_MASTER | PCI_COMMAND_SERR; |
802 | dw_pcie_writel_rc(pp, val, PCI_COMMAND); | 774 | dw_pcie_writel_rc(pp, val, PCI_COMMAND); |
775 | |||
776 | /* | ||
777 | * If the platform provides ->rd_other_conf, it means the platform | ||
778 | * uses its own address translation component rather than ATU, so | ||
779 | * we should not program the ATU here. | ||
780 | */ | ||
781 | if (!pp->ops->rd_other_conf) | ||
782 | dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX1, | ||
783 | PCIE_ATU_TYPE_MEM, pp->mem_base, | ||
784 | pp->mem_bus_addr, pp->mem_size); | ||
785 | |||
786 | dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); | ||
787 | |||
788 | /* program correct class for RC */ | ||
789 | dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI); | ||
790 | |||
791 | dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val); | ||
792 | val |= PORT_LOGIC_SPEED_CHANGE; | ||
793 | dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val); | ||
803 | } | 794 | } |
804 | 795 | ||
805 | MODULE_AUTHOR("Jingoo Han <jg1.han@samsung.com>"); | 796 | MODULE_AUTHOR("Jingoo Han <jg1.han@samsung.com>"); |
diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c index 5139e6443bbd..3479d30e2be8 100644 --- a/drivers/pci/host/pcie-xilinx-nwl.c +++ b/drivers/pci/host/pcie-xilinx-nwl.c | |||
@@ -819,7 +819,7 @@ static int nwl_pcie_probe(struct platform_device *pdev) | |||
819 | 819 | ||
820 | err = nwl_pcie_bridge_init(pcie); | 820 | err = nwl_pcie_bridge_init(pcie); |
821 | if (err) { | 821 | if (err) { |
822 | dev_err(pcie->dev, "HW Initalization failed\n"); | 822 | dev_err(pcie->dev, "HW Initialization failed\n"); |
823 | return err; | 823 | return err; |
824 | } | 824 | } |
825 | 825 | ||
diff --git a/drivers/pci/hotplug/acpiphp_ibm.c b/drivers/pci/hotplug/acpiphp_ibm.c index 2f6d3a1c1726..f6221d739f59 100644 --- a/drivers/pci/hotplug/acpiphp_ibm.c +++ b/drivers/pci/hotplug/acpiphp_ibm.c | |||
@@ -138,6 +138,8 @@ static union apci_descriptor *ibm_slot_from_id(int id) | |||
138 | char *table; | 138 | char *table; |
139 | 139 | ||
140 | size = ibm_get_table_from_acpi(&table); | 140 | size = ibm_get_table_from_acpi(&table); |
141 | if (size < 0) | ||
142 | return NULL; | ||
141 | des = (union apci_descriptor *)table; | 143 | des = (union apci_descriptor *)table; |
142 | if (memcmp(des->header.sig, "aPCI", 4) != 0) | 144 | if (memcmp(des->header.sig, "aPCI", 4) != 0) |
143 | goto ibm_slot_done; | 145 | goto ibm_slot_done; |
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index 342b6918bbde..d319a9ca9b7b 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c | |||
@@ -1008,6 +1008,9 @@ static int pci_mmap_resource(struct kobject *kobj, struct bin_attribute *attr, | |||
1008 | if (i >= PCI_ROM_RESOURCE) | 1008 | if (i >= PCI_ROM_RESOURCE) |
1009 | return -ENODEV; | 1009 | return -ENODEV; |
1010 | 1010 | ||
1011 | if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(res->start)) | ||
1012 | return -EINVAL; | ||
1013 | |||
1011 | if (!pci_mmap_fits(pdev, i, vma, PCI_MMAP_SYSFS)) { | 1014 | if (!pci_mmap_fits(pdev, i, vma, PCI_MMAP_SYSFS)) { |
1012 | WARN(1, "process \"%s\" tried to map 0x%08lx bytes at page 0x%08lx on %s BAR %d (start 0x%16Lx, size 0x%16Lx)\n", | 1015 | WARN(1, "process \"%s\" tried to map 0x%08lx bytes at page 0x%08lx on %s BAR %d (start 0x%16Lx, size 0x%16Lx)\n", |
1013 | current->comm, vma->vm_end-vma->vm_start, vma->vm_pgoff, | 1016 | current->comm, vma->vm_end-vma->vm_start, vma->vm_pgoff, |
@@ -1024,10 +1027,6 @@ static int pci_mmap_resource(struct kobject *kobj, struct bin_attribute *attr, | |||
1024 | pci_resource_to_user(pdev, i, res, &start, &end); | 1027 | pci_resource_to_user(pdev, i, res, &start, &end); |
1025 | vma->vm_pgoff += start >> PAGE_SHIFT; | 1028 | vma->vm_pgoff += start >> PAGE_SHIFT; |
1026 | mmap_type = res->flags & IORESOURCE_MEM ? pci_mmap_mem : pci_mmap_io; | 1029 | mmap_type = res->flags & IORESOURCE_MEM ? pci_mmap_mem : pci_mmap_io; |
1027 | |||
1028 | if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(start)) | ||
1029 | return -EINVAL; | ||
1030 | |||
1031 | return pci_mmap_page_range(pdev, vma, mmap_type, write_combine); | 1030 | return pci_mmap_page_range(pdev, vma, mmap_type, write_combine); |
1032 | } | 1031 | } |
1033 | 1032 | ||
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 25e0327d4429..c8b4dbdd1bdd 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c | |||
@@ -2228,7 +2228,7 @@ void pci_pm_init(struct pci_dev *dev) | |||
2228 | 2228 | ||
2229 | static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop) | 2229 | static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop) |
2230 | { | 2230 | { |
2231 | unsigned long flags = IORESOURCE_PCI_FIXED; | 2231 | unsigned long flags = IORESOURCE_PCI_FIXED | IORESOURCE_PCI_EA_BEI; |
2232 | 2232 | ||
2233 | switch (prop) { | 2233 | switch (prop) { |
2234 | case PCI_EA_P_MEM: | 2234 | case PCI_EA_P_MEM: |
@@ -2389,7 +2389,7 @@ out: | |||
2389 | return offset + ent_size; | 2389 | return offset + ent_size; |
2390 | } | 2390 | } |
2391 | 2391 | ||
2392 | /* Enhanced Allocation Initalization */ | 2392 | /* Enhanced Allocation Initialization */ |
2393 | void pci_ea_init(struct pci_dev *dev) | 2393 | void pci_ea_init(struct pci_dev *dev) |
2394 | { | 2394 | { |
2395 | int ea; | 2395 | int ea; |
@@ -2547,7 +2547,7 @@ void pci_request_acs(void) | |||
2547 | * pci_std_enable_acs - enable ACS on devices using standard ACS capabilites | 2547 | * pci_std_enable_acs - enable ACS on devices using standard ACS capabilites |
2548 | * @dev: the PCI device | 2548 | * @dev: the PCI device |
2549 | */ | 2549 | */ |
2550 | static int pci_std_enable_acs(struct pci_dev *dev) | 2550 | static void pci_std_enable_acs(struct pci_dev *dev) |
2551 | { | 2551 | { |
2552 | int pos; | 2552 | int pos; |
2553 | u16 cap; | 2553 | u16 cap; |
@@ -2555,7 +2555,7 @@ static int pci_std_enable_acs(struct pci_dev *dev) | |||
2555 | 2555 | ||
2556 | pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); | 2556 | pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); |
2557 | if (!pos) | 2557 | if (!pos) |
2558 | return -ENODEV; | 2558 | return; |
2559 | 2559 | ||
2560 | pci_read_config_word(dev, pos + PCI_ACS_CAP, &cap); | 2560 | pci_read_config_word(dev, pos + PCI_ACS_CAP, &cap); |
2561 | pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl); | 2561 | pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl); |
@@ -2573,8 +2573,6 @@ static int pci_std_enable_acs(struct pci_dev *dev) | |||
2573 | ctrl |= (cap & PCI_ACS_UF); | 2573 | ctrl |= (cap & PCI_ACS_UF); |
2574 | 2574 | ||
2575 | pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl); | 2575 | pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl); |
2576 | |||
2577 | return 0; | ||
2578 | } | 2576 | } |
2579 | 2577 | ||
2580 | /** | 2578 | /** |
@@ -2586,10 +2584,10 @@ void pci_enable_acs(struct pci_dev *dev) | |||
2586 | if (!pci_acs_enable) | 2584 | if (!pci_acs_enable) |
2587 | return; | 2585 | return; |
2588 | 2586 | ||
2589 | if (!pci_std_enable_acs(dev)) | 2587 | if (!pci_dev_specific_enable_acs(dev)) |
2590 | return; | 2588 | return; |
2591 | 2589 | ||
2592 | pci_dev_specific_enable_acs(dev); | 2590 | pci_std_enable_acs(dev); |
2593 | } | 2591 | } |
2594 | 2592 | ||
2595 | static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags) | 2593 | static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags) |
@@ -3021,6 +3019,121 @@ int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name) | |||
3021 | } | 3019 | } |
3022 | EXPORT_SYMBOL(pci_request_regions_exclusive); | 3020 | EXPORT_SYMBOL(pci_request_regions_exclusive); |
3023 | 3021 | ||
3022 | #ifdef PCI_IOBASE | ||
3023 | struct io_range { | ||
3024 | struct list_head list; | ||
3025 | phys_addr_t start; | ||
3026 | resource_size_t size; | ||
3027 | }; | ||
3028 | |||
3029 | static LIST_HEAD(io_range_list); | ||
3030 | static DEFINE_SPINLOCK(io_range_lock); | ||
3031 | #endif | ||
3032 | |||
3033 | /* | ||
3034 | * Record the PCI IO range (expressed as CPU physical address + size). | ||
3035 | * Return a negative value if an error has occurred, zero otherwise | ||
3036 | */ | ||
3037 | int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size) | ||
3038 | { | ||
3039 | int err = 0; | ||
3040 | |||
3041 | #ifdef PCI_IOBASE | ||
3042 | struct io_range *range; | ||
3043 | resource_size_t allocated_size = 0; | ||
3044 | |||
3045 | /* check if the range hasn't been previously recorded */ | ||
3046 | spin_lock(&io_range_lock); | ||
3047 | list_for_each_entry(range, &io_range_list, list) { | ||
3048 | if (addr >= range->start && addr + size <= range->start + size) { | ||
3049 | /* range already registered, bail out */ | ||
3050 | goto end_register; | ||
3051 | } | ||
3052 | allocated_size += range->size; | ||
3053 | } | ||
3054 | |||
3055 | /* range not registered yet, check for available space */ | ||
3056 | if (allocated_size + size - 1 > IO_SPACE_LIMIT) { | ||
3057 | /* if it's too big check if 64K space can be reserved */ | ||
3058 | if (allocated_size + SZ_64K - 1 > IO_SPACE_LIMIT) { | ||
3059 | err = -E2BIG; | ||
3060 | goto end_register; | ||
3061 | } | ||
3062 | |||
3063 | size = SZ_64K; | ||
3064 | pr_warn("Requested IO range too big, new size set to 64K\n"); | ||
3065 | } | ||
3066 | |||
3067 | /* add the range to the list */ | ||
3068 | range = kzalloc(sizeof(*range), GFP_ATOMIC); | ||
3069 | if (!range) { | ||
3070 | err = -ENOMEM; | ||
3071 | goto end_register; | ||
3072 | } | ||
3073 | |||
3074 | range->start = addr; | ||
3075 | range->size = size; | ||
3076 | |||
3077 | list_add_tail(&range->list, &io_range_list); | ||
3078 | |||
3079 | end_register: | ||
3080 | spin_unlock(&io_range_lock); | ||
3081 | #endif | ||
3082 | |||
3083 | return err; | ||
3084 | } | ||
3085 | |||
3086 | phys_addr_t pci_pio_to_address(unsigned long pio) | ||
3087 | { | ||
3088 | phys_addr_t address = (phys_addr_t)OF_BAD_ADDR; | ||
3089 | |||
3090 | #ifdef PCI_IOBASE | ||
3091 | struct io_range *range; | ||
3092 | resource_size_t allocated_size = 0; | ||
3093 | |||
3094 | if (pio > IO_SPACE_LIMIT) | ||
3095 | return address; | ||
3096 | |||
3097 | spin_lock(&io_range_lock); | ||
3098 | list_for_each_entry(range, &io_range_list, list) { | ||
3099 | if (pio >= allocated_size && pio < allocated_size + range->size) { | ||
3100 | address = range->start + pio - allocated_size; | ||
3101 | break; | ||
3102 | } | ||
3103 | allocated_size += range->size; | ||
3104 | } | ||
3105 | spin_unlock(&io_range_lock); | ||
3106 | #endif | ||
3107 | |||
3108 | return address; | ||
3109 | } | ||
3110 | |||
3111 | unsigned long __weak pci_address_to_pio(phys_addr_t address) | ||
3112 | { | ||
3113 | #ifdef PCI_IOBASE | ||
3114 | struct io_range *res; | ||
3115 | resource_size_t offset = 0; | ||
3116 | unsigned long addr = -1; | ||
3117 | |||
3118 | spin_lock(&io_range_lock); | ||
3119 | list_for_each_entry(res, &io_range_list, list) { | ||
3120 | if (address >= res->start && address < res->start + res->size) { | ||
3121 | addr = address - res->start + offset; | ||
3122 | break; | ||
3123 | } | ||
3124 | offset += res->size; | ||
3125 | } | ||
3126 | spin_unlock(&io_range_lock); | ||
3127 | |||
3128 | return addr; | ||
3129 | #else | ||
3130 | if (address > IO_SPACE_LIMIT) | ||
3131 | return (unsigned long)-1; | ||
3132 | |||
3133 | return (unsigned long) address; | ||
3134 | #endif | ||
3135 | } | ||
3136 | |||
3024 | /** | 3137 | /** |
3025 | * pci_remap_iospace - Remap the memory mapped I/O space | 3138 | * pci_remap_iospace - Remap the memory mapped I/O space |
3026 | * @res: Resource describing the I/O space | 3139 | * @res: Resource describing the I/O space |
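The weak helpers added in this hunk keep a global list of registered I/O ranges and translate between CPU physical addresses and logical port-I/O offsets by accumulating the sizes of earlier entries. A hedged sketch of how a host bridge setup path might drive them; the demo_* wrapper and the assumption that the prototypes are reachable through linux/pci.h are mine, not part of this patch.

#include <linux/errno.h>
#include <linux/pci.h>

/* Register a bridge's I/O window, then look up the logical port base
 * the core assigned to it. */
static int demo_setup_io_window(phys_addr_t cpu_addr, resource_size_t size)
{
        unsigned long pio_base;
        int ret;

        ret = pci_register_io_range(cpu_addr, size);
        if (ret)
                return ret;

        pio_base = pci_address_to_pio(cpu_addr);
        if (pio_base == (unsigned long)-1)
                return -EINVAL;

        /* pci_pio_to_address(pio_base) is the inverse mapping back to
         * cpu_addr. */
        return 0;
}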
@@ -4578,6 +4691,37 @@ int pci_set_vga_state(struct pci_dev *dev, bool decode, | |||
4578 | return 0; | 4691 | return 0; |
4579 | } | 4692 | } |
4580 | 4693 | ||
4694 | /** | ||
4695 | * pci_add_dma_alias - Add a DMA devfn alias for a device | ||
4696 | * @dev: the PCI device for which alias is added | ||
4697 | * @devfn: alias slot and function | ||
4698 | * | ||
4699 | * This helper encodes 8-bit devfn as bit number in dma_alias_mask. | ||
4700 | * It should be called early, preferably as PCI fixup header quirk. | ||
4701 | */ | ||
4702 | void pci_add_dma_alias(struct pci_dev *dev, u8 devfn) | ||
4703 | { | ||
4704 | if (!dev->dma_alias_mask) | ||
4705 | dev->dma_alias_mask = kcalloc(BITS_TO_LONGS(U8_MAX), | ||
4706 | sizeof(long), GFP_KERNEL); | ||
4707 | if (!dev->dma_alias_mask) { | ||
4708 | dev_warn(&dev->dev, "Unable to allocate DMA alias mask\n"); | ||
4709 | return; | ||
4710 | } | ||
4711 | |||
4712 | set_bit(devfn, dev->dma_alias_mask); | ||
4713 | dev_info(&dev->dev, "Enabling fixed DMA alias to %02x.%d\n", | ||
4714 | PCI_SLOT(devfn), PCI_FUNC(devfn)); | ||
4715 | } | ||
4716 | |||
4717 | bool pci_devs_are_dma_aliases(struct pci_dev *dev1, struct pci_dev *dev2) | ||
4718 | { | ||
4719 | return (dev1->dma_alias_mask && | ||
4720 | test_bit(dev2->devfn, dev1->dma_alias_mask)) || | ||
4721 | (dev2->dma_alias_mask && | ||
4722 | test_bit(dev1->devfn, dev2->dma_alias_mask)); | ||
4723 | } | ||
4724 | |||
4581 | bool pci_device_is_present(struct pci_dev *pdev) | 4725 | bool pci_device_is_present(struct pci_dev *pdev) |
4582 | { | 4726 | { |
4583 | u32 v; | 4727 | u32 v; |
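pci_add_dma_alias() is meant to be called early, typically from a header fixup, so that the IOMMU layers see the alias before the device is attached. A hedged sketch of such a quirk using the standard DECLARE_PCI_FIXUP_HEADER and PCI_DEVFN/PCI_SLOT macros; the 0x1234/0x5678 IDs and the "function 1 alias" behaviour are placeholders, not a real device.

#include <linux/pci.h>

/* Hypothetical quirk: the device at function 0 also issues DMA with the
 * requester ID of function 1, so record that devfn as an alias. */
static void quirk_demo_dma_alias(struct pci_dev *pdev)
{
        pci_add_dma_alias(pdev, PCI_DEVFN(PCI_SLOT(pdev->devfn), 1));
}
DECLARE_PCI_FIXUP_HEADER(0x1234, 0x5678, quirk_demo_dma_alias);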
diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig index 72db7f4209ca..22ca6412bd15 100644 --- a/drivers/pci/pcie/Kconfig +++ b/drivers/pci/pcie/Kconfig | |||
@@ -81,3 +81,17 @@ endchoice | |||
81 | config PCIE_PME | 81 | config PCIE_PME |
82 | def_bool y | 82 | def_bool y |
83 | depends on PCIEPORTBUS && PM | 83 | depends on PCIEPORTBUS && PM |
84 | |||
85 | config PCIE_DPC | ||
86 | tristate "PCIe Downstream Port Containment support" | ||
87 | depends on PCIEPORTBUS | ||
88 | default n | ||
89 | help | ||
90 | This enables PCI Express Downstream Port Containment (DPC) | ||
91 | driver support. DPC events from Root and Downstream ports | ||
92 | will be handled by the DPC driver. If your system doesn't | ||
93 | have this capability or you do not want to use this feature, | ||
94 | it is safe to answer N. | ||
95 | |||
96 | To compile this driver as a module, choose M here: the module | ||
97 | will be called pcie-dpc. | ||
diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile index 00c62df5a9fc..b24525b3dec1 100644 --- a/drivers/pci/pcie/Makefile +++ b/drivers/pci/pcie/Makefile | |||
@@ -14,3 +14,5 @@ obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o | |||
14 | obj-$(CONFIG_PCIEAER) += aer/ | 14 | obj-$(CONFIG_PCIEAER) += aer/ |
15 | 15 | ||
16 | obj-$(CONFIG_PCIE_PME) += pme.o | 16 | obj-$(CONFIG_PCIE_PME) += pme.o |
17 | |||
18 | obj-$(CONFIG_PCIE_DPC) += pcie-dpc.o | ||
diff --git a/drivers/pci/pcie/pcie-dpc.c b/drivers/pci/pcie/pcie-dpc.c new file mode 100644 index 000000000000..ab552f1bc08f --- /dev/null +++ b/drivers/pci/pcie/pcie-dpc.c | |||
@@ -0,0 +1,163 @@ | |||
1 | /* | ||
2 | * PCI Express Downstream Port Containment services driver | ||
3 | * Copyright (C) 2016 Intel Corp. | ||
4 | * | ||
5 | * This file is subject to the terms and conditions of the GNU General Public | ||
6 | * License. See the file "COPYING" in the main directory of this archive | ||
7 | * for more details. | ||
8 | */ | ||
9 | |||
10 | #include <linux/delay.h> | ||
11 | #include <linux/interrupt.h> | ||
12 | #include <linux/module.h> | ||
13 | #include <linux/pci.h> | ||
14 | #include <linux/pcieport_if.h> | ||
15 | |||
16 | struct dpc_dev { | ||
17 | struct pcie_device *dev; | ||
18 | struct work_struct work; | ||
19 | int cap_pos; | ||
20 | }; | ||
21 | |||
22 | static void dpc_wait_link_inactive(struct pci_dev *pdev) | ||
23 | { | ||
24 | unsigned long timeout = jiffies + HZ; | ||
25 | u16 lnk_status; | ||
26 | |||
27 | pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); | ||
28 | while (lnk_status & PCI_EXP_LNKSTA_DLLLA && | ||
29 | !time_after(jiffies, timeout)) { | ||
30 | msleep(10); | ||
31 | pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); | ||
32 | } | ||
33 | if (lnk_status & PCI_EXP_LNKSTA_DLLLA) | ||
34 | dev_warn(&pdev->dev, "Link state not disabled for DPC event"); | ||
35 | } | ||
36 | |||
37 | static void interrupt_event_handler(struct work_struct *work) | ||
38 | { | ||
39 | struct dpc_dev *dpc = container_of(work, struct dpc_dev, work); | ||
40 | struct pci_dev *dev, *temp, *pdev = dpc->dev->port; | ||
41 | struct pci_bus *parent = pdev->subordinate; | ||
42 | |||
43 | pci_lock_rescan_remove(); | ||
44 | list_for_each_entry_safe_reverse(dev, temp, &parent->devices, | ||
45 | bus_list) { | ||
46 | pci_dev_get(dev); | ||
47 | pci_stop_and_remove_bus_device(dev); | ||
48 | pci_dev_put(dev); | ||
49 | } | ||
50 | pci_unlock_rescan_remove(); | ||
51 | |||
52 | dpc_wait_link_inactive(pdev); | ||
53 | pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, | ||
54 | PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT); | ||
55 | } | ||
56 | |||
57 | static irqreturn_t dpc_irq(int irq, void *context) | ||
58 | { | ||
59 | struct dpc_dev *dpc = (struct dpc_dev *)context; | ||
60 | struct pci_dev *pdev = dpc->dev->port; | ||
61 | u16 status, source; | ||
62 | |||
63 | pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status); | ||
64 | pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_SOURCE_ID, | ||
65 | &source); | ||
66 | if (!status) | ||
67 | return IRQ_NONE; | ||
68 | |||
69 | dev_info(&dpc->dev->device, "DPC containment event, status:%#06x source:%#06x\n", | ||
70 | status, source); | ||
71 | |||
72 | if (status & PCI_EXP_DPC_STATUS_TRIGGER) { | ||
73 | u16 reason = (status >> 1) & 0x3; | ||
74 | |||
75 | dev_warn(&dpc->dev->device, "DPC %s triggered, remove downstream devices\n", | ||
76 | (reason == 0) ? "unmasked uncorrectable error" : | ||
77 | (reason == 1) ? "ERR_NONFATAL" : | ||
78 | (reason == 2) ? "ERR_FATAL" : "extended error"); | ||
79 | schedule_work(&dpc->work); | ||
80 | } | ||
81 | return IRQ_HANDLED; | ||
82 | } | ||
83 | |||
84 | #define FLAG(x, y) (((x) & (y)) ? '+' : '-') | ||
85 | static int dpc_probe(struct pcie_device *dev) | ||
86 | { | ||
87 | struct dpc_dev *dpc; | ||
88 | struct pci_dev *pdev = dev->port; | ||
89 | int status; | ||
90 | u16 ctl, cap; | ||
91 | |||
92 | dpc = kzalloc(sizeof(*dpc), GFP_KERNEL); | ||
93 | if (!dpc) | ||
94 | return -ENOMEM; | ||
95 | |||
96 | dpc->cap_pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DPC); | ||
97 | dpc->dev = dev; | ||
98 | INIT_WORK(&dpc->work, interrupt_event_handler); | ||
99 | set_service_data(dev, dpc); | ||
100 | |||
101 | status = request_irq(dev->irq, dpc_irq, IRQF_SHARED, "pcie-dpc", dpc); | ||
102 | if (status) { | ||
103 | dev_warn(&dev->device, "request IRQ%d failed: %d\n", dev->irq, | ||
104 | status); | ||
105 | goto out; | ||
106 | } | ||
107 | |||
108 | pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap); | ||
109 | pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl); | ||
110 | |||
111 | ctl |= PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN; | ||
112 | pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl); | ||
113 | |||
114 | dev_info(&dev->device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n", | ||
115 | cap & 0xf, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT), | ||
116 | FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP), | ||
117 | FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), (cap >> 8) & 0xf, | ||
118 | FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE)); | ||
119 | return status; | ||
120 | out: | ||
121 | kfree(dpc); | ||
122 | return status; | ||
123 | } | ||
124 | |||
125 | static void dpc_remove(struct pcie_device *dev) | ||
126 | { | ||
127 | struct dpc_dev *dpc = get_service_data(dev); | ||
128 | struct pci_dev *pdev = dev->port; | ||
129 | u16 ctl; | ||
130 | |||
131 | pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl); | ||
132 | ctl &= ~(PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN); | ||
133 | pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl); | ||
134 | |||
135 | free_irq(dev->irq, dpc); | ||
136 | kfree(dpc); | ||
137 | } | ||
138 | |||
139 | static struct pcie_port_service_driver dpcdriver = { | ||
140 | .name = "dpc", | ||
141 | .port_type = PCI_EXP_TYPE_ROOT_PORT | PCI_EXP_TYPE_DOWNSTREAM, | ||
142 | .service = PCIE_PORT_SERVICE_DPC, | ||
143 | .probe = dpc_probe, | ||
144 | .remove = dpc_remove, | ||
145 | }; | ||
146 | |||
147 | static int __init dpc_service_init(void) | ||
148 | { | ||
149 | return pcie_port_service_register(&dpcdriver); | ||
150 | } | ||
151 | |||
152 | static void __exit dpc_service_exit(void) | ||
153 | { | ||
154 | pcie_port_service_unregister(&dpcdriver); | ||
155 | } | ||
156 | |||
157 | MODULE_DESCRIPTION("PCI Express Downstream Port Containment driver"); | ||
158 | MODULE_AUTHOR("Keith Busch <keith.busch@intel.com>"); | ||
159 | MODULE_LICENSE("GPL"); | ||
160 | MODULE_VERSION("0.1"); | ||
161 | |||
162 | module_init(dpc_service_init); | ||
163 | module_exit(dpc_service_exit); | ||
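
Note: the reason string logged in dpc_irq() above is decoded from bits [2:1] of the DPC Status register, and the actual removal of downstream devices is deferred to the work item because pci_stop_and_remove_bus_device() sleeps and cannot run in hard-IRQ context. For reference only, the same decode written as a helper (hypothetical, not part of the patch):

    static const char *dpc_trigger_reason(u16 status)
    {
            switch ((status >> 1) & 0x3) {
            case 0:
                    return "unmasked uncorrectable error";
            case 1:
                    return "ERR_NONFATAL";
            case 2:
                    return "ERR_FATAL";
            default:
                    return "extended error";
            }
    }
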
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
index d525548404d6..587aef36030d 100644
--- a/drivers/pci/pcie/portdrv.h
+++ b/drivers/pci/pcie/portdrv.h
@@ -11,14 +11,14 @@ | |||
11 | 11 | ||
12 | #include <linux/compiler.h> | 12 | #include <linux/compiler.h> |
13 | 13 | ||
14 | #define PCIE_PORT_DEVICE_MAXSERVICES 4 | 14 | #define PCIE_PORT_DEVICE_MAXSERVICES 5 |
15 | /* | 15 | /* |
16 | * According to the PCI Express Base Specification 2.0, the indices of | 16 | * According to the PCI Express Base Specification 2.0, the indices of |
17 | * the MSI-X table entries used by port services must not exceed 31 | 17 | * the MSI-X table entries used by port services must not exceed 31 |
18 | */ | 18 | */ |
19 | #define PCIE_PORT_MAX_MSIX_ENTRIES 32 | 19 | #define PCIE_PORT_MAX_MSIX_ENTRIES 32 |
20 | 20 | ||
21 | #define get_descriptor_id(type, service) (((type - 4) << 4) | service) | 21 | #define get_descriptor_id(type, service) (((type - 4) << 8) | service) |
22 | 22 | ||
23 | extern struct bus_type pcie_port_bus_type; | 23 | extern struct bus_type pcie_port_bus_type; |
24 | int pcie_port_device_register(struct pci_dev *dev); | 24 | int pcie_port_device_register(struct pci_dev *dev); |
@@ -67,17 +67,14 @@ static inline void pcie_pme_interrupt_enable(struct pci_dev *dev, bool en) {} | |||
67 | #endif /* !CONFIG_PCIE_PME */ | 67 | #endif /* !CONFIG_PCIE_PME */ |
68 | 68 | ||
69 | #ifdef CONFIG_ACPI | 69 | #ifdef CONFIG_ACPI |
70 | int pcie_port_acpi_setup(struct pci_dev *port, int *mask); | 70 | void pcie_port_acpi_setup(struct pci_dev *port, int *mask); |
71 | 71 | ||
72 | static inline int pcie_port_platform_notify(struct pci_dev *port, int *mask) | 72 | static inline void pcie_port_platform_notify(struct pci_dev *port, int *mask) |
73 | { | 73 | { |
74 | return pcie_port_acpi_setup(port, mask); | 74 | pcie_port_acpi_setup(port, mask); |
75 | } | 75 | } |
76 | #else /* !CONFIG_ACPI */ | 76 | #else /* !CONFIG_ACPI */ |
77 | static inline int pcie_port_platform_notify(struct pci_dev *port, int *mask) | 77 | static inline void pcie_port_platform_notify(struct pci_dev *port, int *mask){} |
78 | { | ||
79 | return 0; | ||
80 | } | ||
81 | #endif /* !CONFIG_ACPI */ | 78 | #endif /* !CONFIG_ACPI */ |
82 | 79 | ||
83 | #endif /* _PORTDRV_H_ */ | 80 | #endif /* _PORTDRV_H_ */ |
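
Note: widening the service field in get_descriptor_id() from 4 to 8 bits goes together with raising PCIE_PORT_DEVICE_MAXSERVICES to 5: a fifth service bit no longer fits in a single nibble next to the port-type bits. A worked example, assuming the usual 1 << n service encoding with PCIE_PORT_SERVICE_DPC as bit 4 (0x10) in <linux/pcieport_if.h>:

    /*
     * get_descriptor_id(PCI_EXP_TYPE_ROOT_PORT, PCIE_PORT_SERVICE_DPC)
     *         == ((0x4 - 4) << 8) | 0x10 == 0x010
     *
     * With the old "<< 4" shift, the 0x10 service bit would have landed
     * in the port-type field; the wider value is also why the device
     * name format in portdrv_core.c below changes to "%s:pcie%03x".
     */
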
diff --git a/drivers/pci/pcie/portdrv_acpi.c b/drivers/pci/pcie/portdrv_acpi.c
index b4d2894ee3fc..6b8c2f1d0e71 100644
--- a/drivers/pci/pcie/portdrv_acpi.c
+++ b/drivers/pci/pcie/portdrv_acpi.c
@@ -32,32 +32,30 @@ | |||
32 | * NOTE: It turns out that we cannot do that for individual port services | 32 | * NOTE: It turns out that we cannot do that for individual port services |
33 | * separately, because that would make some systems work incorrectly. | 33 | * separately, because that would make some systems work incorrectly. |
34 | */ | 34 | */ |
35 | int pcie_port_acpi_setup(struct pci_dev *port, int *srv_mask) | 35 | void pcie_port_acpi_setup(struct pci_dev *port, int *srv_mask) |
36 | { | 36 | { |
37 | struct acpi_pci_root *root; | 37 | struct acpi_pci_root *root; |
38 | acpi_handle handle; | 38 | acpi_handle handle; |
39 | u32 flags; | 39 | u32 flags; |
40 | 40 | ||
41 | if (acpi_pci_disabled) | 41 | if (acpi_pci_disabled) |
42 | return 0; | 42 | return; |
43 | 43 | ||
44 | handle = acpi_find_root_bridge_handle(port); | 44 | handle = acpi_find_root_bridge_handle(port); |
45 | if (!handle) | 45 | if (!handle) |
46 | return -EINVAL; | 46 | return; |
47 | 47 | ||
48 | root = acpi_pci_find_root(handle); | 48 | root = acpi_pci_find_root(handle); |
49 | if (!root) | 49 | if (!root) |
50 | return -ENODEV; | 50 | return; |
51 | 51 | ||
52 | flags = root->osc_control_set; | 52 | flags = root->osc_control_set; |
53 | 53 | ||
54 | *srv_mask = PCIE_PORT_SERVICE_VC; | 54 | *srv_mask = PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_DPC; |
55 | if (flags & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL) | 55 | if (flags & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL) |
56 | *srv_mask |= PCIE_PORT_SERVICE_HP; | 56 | *srv_mask |= PCIE_PORT_SERVICE_HP; |
57 | if (flags & OSC_PCI_EXPRESS_PME_CONTROL) | 57 | if (flags & OSC_PCI_EXPRESS_PME_CONTROL) |
58 | *srv_mask |= PCIE_PORT_SERVICE_PME; | 58 | *srv_mask |= PCIE_PORT_SERVICE_PME; |
59 | if (flags & OSC_PCI_EXPRESS_AER_CONTROL) | 59 | if (flags & OSC_PCI_EXPRESS_AER_CONTROL) |
60 | *srv_mask |= PCIE_PORT_SERVICE_AER; | 60 | *srv_mask |= PCIE_PORT_SERVICE_AER; |
61 | |||
62 | return 0; | ||
63 | } | 61 | } |
diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
index 88122dc2e1b1..32d4d0a3d20e 100644
--- a/drivers/pci/pcie/portdrv_core.c
+++ b/drivers/pci/pcie/portdrv_core.c
@@ -254,38 +254,28 @@ static void cleanup_service_irqs(struct pci_dev *dev) | |||
254 | static int get_port_device_capability(struct pci_dev *dev) | 254 | static int get_port_device_capability(struct pci_dev *dev) |
255 | { | 255 | { |
256 | int services = 0; | 256 | int services = 0; |
257 | u32 reg32; | ||
258 | int cap_mask = 0; | 257 | int cap_mask = 0; |
259 | int err; | ||
260 | 258 | ||
261 | if (pcie_ports_disabled) | 259 | if (pcie_ports_disabled) |
262 | return 0; | 260 | return 0; |
263 | 261 | ||
264 | cap_mask = PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP | 262 | cap_mask = PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP |
265 | | PCIE_PORT_SERVICE_VC; | 263 | | PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_DPC; |
266 | if (pci_aer_available()) | 264 | if (pci_aer_available()) |
267 | cap_mask |= PCIE_PORT_SERVICE_AER; | 265 | cap_mask |= PCIE_PORT_SERVICE_AER; |
268 | 266 | ||
269 | if (pcie_ports_auto) { | 267 | if (pcie_ports_auto) |
270 | err = pcie_port_platform_notify(dev, &cap_mask); | 268 | pcie_port_platform_notify(dev, &cap_mask); |
271 | if (err) | ||
272 | return 0; | ||
273 | } | ||
274 | 269 | ||
275 | /* Hot-Plug Capable */ | 270 | /* Hot-Plug Capable */ |
276 | if ((cap_mask & PCIE_PORT_SERVICE_HP) && | 271 | if ((cap_mask & PCIE_PORT_SERVICE_HP) && dev->is_hotplug_bridge) { |
277 | pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT) { | 272 | services |= PCIE_PORT_SERVICE_HP; |
278 | pcie_capability_read_dword(dev, PCI_EXP_SLTCAP, ®32); | 273 | /* |
279 | if (reg32 & PCI_EXP_SLTCAP_HPC) { | 274 | * Disable hot-plug interrupts in case they have been enabled |
280 | services |= PCIE_PORT_SERVICE_HP; | 275 | * by the BIOS and the hot-plug service driver is not loaded. |
281 | /* | 276 | */ |
282 | * Disable hot-plug interrupts in case they have been | 277 | pcie_capability_clear_word(dev, PCI_EXP_SLTCTL, |
283 | * enabled by the BIOS and the hot-plug service driver | 278 | PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE); |
284 | * is not loaded. | ||
285 | */ | ||
286 | pcie_capability_clear_word(dev, PCI_EXP_SLTCTL, | ||
287 | PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE); | ||
288 | } | ||
289 | } | 279 | } |
290 | /* AER capable */ | 280 | /* AER capable */ |
291 | if ((cap_mask & PCIE_PORT_SERVICE_AER) | 281 | if ((cap_mask & PCIE_PORT_SERVICE_AER) |
@@ -311,6 +301,8 @@ static int get_port_device_capability(struct pci_dev *dev) | |||
311 | */ | 301 | */ |
312 | pcie_pme_interrupt_enable(dev, false); | 302 | pcie_pme_interrupt_enable(dev, false); |
313 | } | 303 | } |
304 | if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC)) | ||
305 | services |= PCIE_PORT_SERVICE_DPC; | ||
314 | 306 | ||
315 | return services; | 307 | return services; |
316 | } | 308 | } |
@@ -338,7 +330,7 @@ static int pcie_device_init(struct pci_dev *pdev, int service, int irq) | |||
338 | device = &pcie->device; | 330 | device = &pcie->device; |
339 | device->bus = &pcie_port_bus_type; | 331 | device->bus = &pcie_port_bus_type; |
340 | device->release = release_pcie_device; /* callback to free pcie dev */ | 332 | device->release = release_pcie_device; /* callback to free pcie dev */ |
341 | dev_set_name(device, "%s:pcie%02x", | 333 | dev_set_name(device, "%s:pcie%03x", |
342 | pci_name(pdev), | 334 | pci_name(pdev), |
343 | get_descriptor_id(pci_pcie_type(pdev), service)); | 335 | get_descriptor_id(pci_pcie_type(pdev), service)); |
344 | device->parent = &pdev->dev; | 336 | device->parent = &pdev->dev; |
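
Note: the simplified hot-plug check relies on dev->is_hotplug_bridge, the copy of PCI_EXP_SLTCAP_HPC cached at enumeration time, so Slot Capabilities no longer has to be re-read here. The cached bit is set in drivers/pci/probe.c roughly as follows (shown for context, not part of this patch):

    static void set_pcie_hotplug_bridge(struct pci_dev *pdev)
    {
            u32 reg32;

            /* Cache the Hot-Plug Capable bit from Slot Capabilities */
            pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &reg32);
            if (reg32 & PCI_EXP_SLTCAP_HPC)
                    pdev->is_hotplug_bridge = 1;
    }

DPC, in turn, is reported purely from the presence of the PCI_EXT_CAP_ID_DPC extended capability and is not negotiated through _OSC, which is why pcie_port_acpi_setup() above adds PCIE_PORT_SERVICE_DPC to the mask unconditionally.
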
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 8004f67c57ec..8e3ef720997d 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -179,9 +179,6 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type, | |||
179 | u16 orig_cmd; | 179 | u16 orig_cmd; |
180 | struct pci_bus_region region, inverted_region; | 180 | struct pci_bus_region region, inverted_region; |
181 | 181 | ||
182 | if (dev->non_compliant_bars) | ||
183 | return 0; | ||
184 | |||
185 | mask = type ? PCI_ROM_ADDRESS_MASK : ~0; | 182 | mask = type ? PCI_ROM_ADDRESS_MASK : ~0; |
186 | 183 | ||
187 | /* No printks while decoding is disabled! */ | 184 | /* No printks while decoding is disabled! */ |
@@ -322,6 +319,9 @@ static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom) | |||
322 | { | 319 | { |
323 | unsigned int pos, reg; | 320 | unsigned int pos, reg; |
324 | 321 | ||
322 | if (dev->non_compliant_bars) | ||
323 | return; | ||
324 | |||
325 | for (pos = 0; pos < howmany; pos++) { | 325 | for (pos = 0; pos < howmany; pos++) { |
326 | struct resource *res = &dev->resource[pos]; | 326 | struct resource *res = &dev->resource[pos]; |
327 | reg = PCI_BASE_ADDRESS_0 + (pos << 2); | 327 | reg = PCI_BASE_ADDRESS_0 + (pos << 2); |
@@ -1537,6 +1537,7 @@ static void pci_release_dev(struct device *dev) | |||
1537 | pcibios_release_device(pci_dev); | 1537 | pcibios_release_device(pci_dev); |
1538 | pci_bus_put(pci_dev->bus); | 1538 | pci_bus_put(pci_dev->bus); |
1539 | kfree(pci_dev->driver_override); | 1539 | kfree(pci_dev->driver_override); |
1540 | kfree(pci_dev->dma_alias_mask); | ||
1540 | kfree(pci_dev); | 1541 | kfree(pci_dev); |
1541 | } | 1542 | } |
1542 | 1543 | ||
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 8e678027b900..ee72ebe18f4b 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3150,6 +3150,39 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_REALTEK, 0x8169, | |||
3150 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID, | 3150 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID, |
3151 | quirk_broken_intx_masking); | 3151 | quirk_broken_intx_masking); |
3152 | 3152 | ||
3153 | /* | ||
3154 | * Intel i40e (XL710/X710) 10/20/40GbE NICs all have broken INTx masking, | ||
3155 | * DisINTx can be set but the interrupt status bit is non-functional. | ||
3156 | */ | ||
3157 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1572, | ||
3158 | quirk_broken_intx_masking); | ||
3159 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1574, | ||
3160 | quirk_broken_intx_masking); | ||
3161 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1580, | ||
3162 | quirk_broken_intx_masking); | ||
3163 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1581, | ||
3164 | quirk_broken_intx_masking); | ||
3165 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1583, | ||
3166 | quirk_broken_intx_masking); | ||
3167 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1584, | ||
3168 | quirk_broken_intx_masking); | ||
3169 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1585, | ||
3170 | quirk_broken_intx_masking); | ||
3171 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1586, | ||
3172 | quirk_broken_intx_masking); | ||
3173 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1587, | ||
3174 | quirk_broken_intx_masking); | ||
3175 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1588, | ||
3176 | quirk_broken_intx_masking); | ||
3177 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1589, | ||
3178 | quirk_broken_intx_masking); | ||
3179 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d0, | ||
3180 | quirk_broken_intx_masking); | ||
3181 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d1, | ||
3182 | quirk_broken_intx_masking); | ||
3183 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d2, | ||
3184 | quirk_broken_intx_masking); | ||
3185 | |||
3153 | static void quirk_no_bus_reset(struct pci_dev *dev) | 3186 | static void quirk_no_bus_reset(struct pci_dev *dev) |
3154 | { | 3187 | { |
3155 | dev->dev_flags |= PCI_DEV_FLAGS_NO_BUS_RESET; | 3188 | dev->dev_flags |= PCI_DEV_FLAGS_NO_BUS_RESET; |
@@ -3185,6 +3218,29 @@ static void quirk_no_pm_reset(struct pci_dev *dev) | |||
3185 | DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_ATI, PCI_ANY_ID, | 3218 | DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_ATI, PCI_ANY_ID, |
3186 | PCI_CLASS_DISPLAY_VGA, 8, quirk_no_pm_reset); | 3219 | PCI_CLASS_DISPLAY_VGA, 8, quirk_no_pm_reset); |
3187 | 3220 | ||
3221 | /* | ||
3222 | * Thunderbolt controllers with broken MSI hotplug signaling: | ||
3223 | * Entire 1st generation (Light Ridge, Eagle Ridge, Light Peak) and part | ||
3224 | * of the 2nd generation (Cactus Ridge 4C up to revision 1, Port Ridge). | ||
3225 | */ | ||
3226 | static void quirk_thunderbolt_hotplug_msi(struct pci_dev *pdev) | ||
3227 | { | ||
3228 | if (pdev->is_hotplug_bridge && | ||
3229 | (pdev->device != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C || | ||
3230 | pdev->revision <= 1)) | ||
3231 | pdev->no_msi = 1; | ||
3232 | } | ||
3233 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_LIGHT_RIDGE, | ||
3234 | quirk_thunderbolt_hotplug_msi); | ||
3235 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EAGLE_RIDGE, | ||
3236 | quirk_thunderbolt_hotplug_msi); | ||
3237 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_LIGHT_PEAK, | ||
3238 | quirk_thunderbolt_hotplug_msi); | ||
3239 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C, | ||
3240 | quirk_thunderbolt_hotplug_msi); | ||
3241 | DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PORT_RIDGE, | ||
3242 | quirk_thunderbolt_hotplug_msi); | ||
3243 | |||
3188 | #ifdef CONFIG_ACPI | 3244 | #ifdef CONFIG_ACPI |
3189 | /* | 3245 | /* |
3190 | * Apple: Shutdown Cactus Ridge Thunderbolt controller. | 3246 | * Apple: Shutdown Cactus Ridge Thunderbolt controller. |
@@ -3232,7 +3288,8 @@ static void quirk_apple_poweroff_thunderbolt(struct pci_dev *dev) | |||
3232 | acpi_execute_simple_method(SXIO, NULL, 0); | 3288 | acpi_execute_simple_method(SXIO, NULL, 0); |
3233 | acpi_execute_simple_method(SXLV, NULL, 0); | 3289 | acpi_execute_simple_method(SXLV, NULL, 0); |
3234 | } | 3290 | } |
3235 | DECLARE_PCI_FIXUP_SUSPEND_LATE(PCI_VENDOR_ID_INTEL, 0x1547, | 3291 | DECLARE_PCI_FIXUP_SUSPEND_LATE(PCI_VENDOR_ID_INTEL, |
3292 | PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C, | ||
3236 | quirk_apple_poweroff_thunderbolt); | 3293 | quirk_apple_poweroff_thunderbolt); |
3237 | 3294 | ||
3238 | /* | 3295 | /* |
@@ -3266,9 +3323,11 @@ static void quirk_apple_wait_for_thunderbolt(struct pci_dev *dev) | |||
3266 | if (!nhi) | 3323 | if (!nhi) |
3267 | goto out; | 3324 | goto out; |
3268 | if (nhi->vendor != PCI_VENDOR_ID_INTEL | 3325 | if (nhi->vendor != PCI_VENDOR_ID_INTEL |
3269 | || (nhi->device != 0x1547 && nhi->device != 0x156c) | 3326 | || (nhi->device != PCI_DEVICE_ID_INTEL_LIGHT_RIDGE && |
3270 | || nhi->subsystem_vendor != 0x2222 | 3327 | nhi->device != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C && |
3271 | || nhi->subsystem_device != 0x1111) | 3328 | nhi->device != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI) |
3329 | || nhi->subsystem_vendor != 0x2222 | ||
3330 | || nhi->subsystem_device != 0x1111) | ||
3272 | goto out; | 3331 | goto out; |
3273 | dev_info(&dev->dev, "quirk: waiting for thunderbolt to reestablish PCI tunnels...\n"); | 3332 | dev_info(&dev->dev, "quirk: waiting for thunderbolt to reestablish PCI tunnels...\n"); |
3274 | device_pm_wait_for_dev(&dev->dev, &nhi->dev); | 3333 | device_pm_wait_for_dev(&dev->dev, &nhi->dev); |
@@ -3276,9 +3335,14 @@ out: | |||
3276 | pci_dev_put(nhi); | 3335 | pci_dev_put(nhi); |
3277 | pci_dev_put(sibling); | 3336 | pci_dev_put(sibling); |
3278 | } | 3337 | } |
3279 | DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_INTEL, 0x1547, | 3338 | DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_INTEL, |
3339 | PCI_DEVICE_ID_INTEL_LIGHT_RIDGE, | ||
3280 | quirk_apple_wait_for_thunderbolt); | 3340 | quirk_apple_wait_for_thunderbolt); |
3281 | DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_INTEL, 0x156d, | 3341 | DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_INTEL, |
3342 | PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C, | ||
3343 | quirk_apple_wait_for_thunderbolt); | ||
3344 | DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_INTEL, | ||
3345 | PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE, | ||
3282 | quirk_apple_wait_for_thunderbolt); | 3346 | quirk_apple_wait_for_thunderbolt); |
3283 | #endif | 3347 | #endif |
3284 | 3348 | ||
@@ -3610,10 +3674,8 @@ int pci_dev_specific_reset(struct pci_dev *dev, int probe) | |||
3610 | 3674 | ||
3611 | static void quirk_dma_func0_alias(struct pci_dev *dev) | 3675 | static void quirk_dma_func0_alias(struct pci_dev *dev) |
3612 | { | 3676 | { |
3613 | if (PCI_FUNC(dev->devfn) != 0) { | 3677 | if (PCI_FUNC(dev->devfn) != 0) |
3614 | dev->dma_alias_devfn = PCI_DEVFN(PCI_SLOT(dev->devfn), 0); | 3678 | pci_add_dma_alias(dev, PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); |
3615 | dev->dev_flags |= PCI_DEV_FLAGS_DMA_ALIAS_DEVFN; | ||
3616 | } | ||
3617 | } | 3679 | } |
3618 | 3680 | ||
3619 | /* | 3681 | /* |
@@ -3626,10 +3688,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_RICOH, 0xe476, quirk_dma_func0_alias); | |||
3626 | 3688 | ||
3627 | static void quirk_dma_func1_alias(struct pci_dev *dev) | 3689 | static void quirk_dma_func1_alias(struct pci_dev *dev) |
3628 | { | 3690 | { |
3629 | if (PCI_FUNC(dev->devfn) != 1) { | 3691 | if (PCI_FUNC(dev->devfn) != 1) |
3630 | dev->dma_alias_devfn = PCI_DEVFN(PCI_SLOT(dev->devfn), 1); | 3692 | pci_add_dma_alias(dev, PCI_DEVFN(PCI_SLOT(dev->devfn), 1)); |
3631 | dev->dev_flags |= PCI_DEV_FLAGS_DMA_ALIAS_DEVFN; | ||
3632 | } | ||
3633 | } | 3693 | } |
3634 | 3694 | ||
3635 | /* | 3695 | /* |
@@ -3695,13 +3755,8 @@ static void quirk_fixed_dma_alias(struct pci_dev *dev) | |||
3695 | const struct pci_device_id *id; | 3755 | const struct pci_device_id *id; |
3696 | 3756 | ||
3697 | id = pci_match_id(fixed_dma_alias_tbl, dev); | 3757 | id = pci_match_id(fixed_dma_alias_tbl, dev); |
3698 | if (id) { | 3758 | if (id) |
3699 | dev->dma_alias_devfn = id->driver_data; | 3759 | pci_add_dma_alias(dev, id->driver_data); |
3700 | dev->dev_flags |= PCI_DEV_FLAGS_DMA_ALIAS_DEVFN; | ||
3701 | dev_info(&dev->dev, "Enabling fixed DMA alias to %02x.%d\n", | ||
3702 | PCI_SLOT(dev->dma_alias_devfn), | ||
3703 | PCI_FUNC(dev->dma_alias_devfn)); | ||
3704 | } | ||
3705 | } | 3760 | } |
3706 | 3761 | ||
3707 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ADAPTEC2, 0x0285, quirk_fixed_dma_alias); | 3762 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ADAPTEC2, 0x0285, quirk_fixed_dma_alias); |
@@ -3734,6 +3789,21 @@ DECLARE_PCI_FIXUP_HEADER(0x1283, 0x8892, quirk_use_pcie_bridge_dma_alias); | |||
3734 | DECLARE_PCI_FIXUP_HEADER(0x8086, 0x244e, quirk_use_pcie_bridge_dma_alias); | 3789 | DECLARE_PCI_FIXUP_HEADER(0x8086, 0x244e, quirk_use_pcie_bridge_dma_alias); |
3735 | 3790 | ||
3736 | /* | 3791 | /* |
3792 | * MIC x200 NTB forwards PCIe traffic using multiple alien RIDs. They have to | ||
3793 | * be added as aliases to the DMA device in order to allow buffer access | ||
3794 | * when IOMMU is enabled. Following devfns have to match RIT-LUT table | ||
3795 | * programmed in the EEPROM. | ||
3796 | */ | ||
3797 | static void quirk_mic_x200_dma_alias(struct pci_dev *pdev) | ||
3798 | { | ||
3799 | pci_add_dma_alias(pdev, PCI_DEVFN(0x10, 0x0)); | ||
3800 | pci_add_dma_alias(pdev, PCI_DEVFN(0x11, 0x0)); | ||
3801 | pci_add_dma_alias(pdev, PCI_DEVFN(0x12, 0x3)); | ||
3802 | } | ||
3803 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2260, quirk_mic_x200_dma_alias); | ||
3804 | DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2264, quirk_mic_x200_dma_alias); | ||
3805 | |||
3806 | /* | ||
3737 | * Intersil/Techwell TW686[4589]-based video capture cards have an empty (zero) | 3807 | * Intersil/Techwell TW686[4589]-based video capture cards have an empty (zero) |
3738 | * class code. Fix it. | 3808 | * class code. Fix it. |
3739 | */ | 3809 | */ |
@@ -3936,6 +4006,55 @@ static int pci_quirk_intel_pch_acs(struct pci_dev *dev, u16 acs_flags) | |||
3936 | return acs_flags & ~flags ? 0 : 1; | 4006 | return acs_flags & ~flags ? 0 : 1; |
3937 | } | 4007 | } |
3938 | 4008 | ||
4009 | /* | ||
4010 | * Sunrise Point PCH root ports implement ACS, but unfortunately as shown in | ||
4011 | * the datasheet (Intel 100 Series Chipset Family PCH Datasheet, Vol. 2, | ||
4012 | * 12.1.46, 12.1.47)[1] this chipset uses dwords for the ACS capability and | ||
4013 | * control registers whereas the PCIe spec packs them into words (Rev 3.0, | ||
4014 | * 7.16 ACS Extended Capability). The bit definitions are correct, but the | ||
4015 | * control register is at offset 8 instead of 6 and we should probably use | ||
4016 | * dword accesses to them. This applies to the following PCI Device IDs, as | ||
4017 | * found in volume 1 of the datasheet[2]: | ||
4018 | * | ||
4019 | * 0xa110-0xa11f Sunrise Point-H PCI Express Root Port #{0-16} | ||
4020 | * 0xa167-0xa16a Sunrise Point-H PCI Express Root Port #{17-20} | ||
4021 | * | ||
4022 | * N.B. This doesn't fix what lspci shows. | ||
4023 | * | ||
4024 | * [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html | ||
4025 | * [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html | ||
4026 | */ | ||
4027 | static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev) | ||
4028 | { | ||
4029 | return pci_is_pcie(dev) && | ||
4030 | pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT && | ||
4031 | ((dev->device & ~0xf) == 0xa110 || | ||
4032 | (dev->device >= 0xa167 && dev->device <= 0xa16a)); | ||
4033 | } | ||
4034 | |||
4035 | #define INTEL_SPT_ACS_CTRL (PCI_ACS_CAP + 4) | ||
4036 | |||
4037 | static int pci_quirk_intel_spt_pch_acs(struct pci_dev *dev, u16 acs_flags) | ||
4038 | { | ||
4039 | int pos; | ||
4040 | u32 cap, ctrl; | ||
4041 | |||
4042 | if (!pci_quirk_intel_spt_pch_acs_match(dev)) | ||
4043 | return -ENOTTY; | ||
4044 | |||
4045 | pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); | ||
4046 | if (!pos) | ||
4047 | return -ENOTTY; | ||
4048 | |||
4049 | /* see pci_acs_flags_enabled() */ | ||
4050 | pci_read_config_dword(dev, pos + PCI_ACS_CAP, &cap); | ||
4051 | acs_flags &= (cap | PCI_ACS_EC); | ||
4052 | |||
4053 | pci_read_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, &ctrl); | ||
4054 | |||
4055 | return acs_flags & ~ctrl ? 0 : 1; | ||
4056 | } | ||
4057 | |||
3939 | static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags) | 4058 | static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags) |
3940 | { | 4059 | { |
3941 | /* | 4060 | /* |
@@ -4024,6 +4143,7 @@ static const struct pci_dev_acs_enabled { | |||
4024 | { PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs }, | 4143 | { PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs }, |
4025 | /* Intel PCH root ports */ | 4144 | /* Intel PCH root ports */ |
4026 | { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs }, | 4145 | { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs }, |
4146 | { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_spt_pch_acs }, | ||
4027 | { 0x19a2, 0x710, pci_quirk_mf_endpoint_acs }, /* Emulex BE3-R */ | 4147 | { 0x19a2, 0x710, pci_quirk_mf_endpoint_acs }, /* Emulex BE3-R */ |
4028 | { 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */ | 4148 | { 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */ |
4029 | /* Cavium ThunderX */ | 4149 | /* Cavium ThunderX */ |
@@ -4159,16 +4279,44 @@ static int pci_quirk_enable_intel_pch_acs(struct pci_dev *dev) | |||
4159 | return 0; | 4279 | return 0; |
4160 | } | 4280 | } |
4161 | 4281 | ||
4282 | static int pci_quirk_enable_intel_spt_pch_acs(struct pci_dev *dev) | ||
4283 | { | ||
4284 | int pos; | ||
4285 | u32 cap, ctrl; | ||
4286 | |||
4287 | if (!pci_quirk_intel_spt_pch_acs_match(dev)) | ||
4288 | return -ENOTTY; | ||
4289 | |||
4290 | pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); | ||
4291 | if (!pos) | ||
4292 | return -ENOTTY; | ||
4293 | |||
4294 | pci_read_config_dword(dev, pos + PCI_ACS_CAP, &cap); | ||
4295 | pci_read_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, &ctrl); | ||
4296 | |||
4297 | ctrl |= (cap & PCI_ACS_SV); | ||
4298 | ctrl |= (cap & PCI_ACS_RR); | ||
4299 | ctrl |= (cap & PCI_ACS_CR); | ||
4300 | ctrl |= (cap & PCI_ACS_UF); | ||
4301 | |||
4302 | pci_write_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, ctrl); | ||
4303 | |||
4304 | dev_info(&dev->dev, "Intel SPT PCH root port ACS workaround enabled\n"); | ||
4305 | |||
4306 | return 0; | ||
4307 | } | ||
4308 | |||
4162 | static const struct pci_dev_enable_acs { | 4309 | static const struct pci_dev_enable_acs { |
4163 | u16 vendor; | 4310 | u16 vendor; |
4164 | u16 device; | 4311 | u16 device; |
4165 | int (*enable_acs)(struct pci_dev *dev); | 4312 | int (*enable_acs)(struct pci_dev *dev); |
4166 | } pci_dev_enable_acs[] = { | 4313 | } pci_dev_enable_acs[] = { |
4167 | { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_enable_intel_pch_acs }, | 4314 | { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_enable_intel_pch_acs }, |
4315 | { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_enable_intel_spt_pch_acs }, | ||
4168 | { 0 } | 4316 | { 0 } |
4169 | }; | 4317 | }; |
4170 | 4318 | ||
4171 | void pci_dev_specific_enable_acs(struct pci_dev *dev) | 4319 | int pci_dev_specific_enable_acs(struct pci_dev *dev) |
4172 | { | 4320 | { |
4173 | const struct pci_dev_enable_acs *i; | 4321 | const struct pci_dev_enable_acs *i; |
4174 | int ret; | 4322 | int ret; |
@@ -4180,9 +4328,11 @@ void pci_dev_specific_enable_acs(struct pci_dev *dev) | |||
4180 | i->device == (u16)PCI_ANY_ID)) { | 4328 | i->device == (u16)PCI_ANY_ID)) { |
4181 | ret = i->enable_acs(dev); | 4329 | ret = i->enable_acs(dev); |
4182 | if (ret >= 0) | 4330 | if (ret >= 0) |
4183 | return; | 4331 | return ret; |
4184 | } | 4332 | } |
4185 | } | 4333 | } |
4334 | |||
4335 | return -ENOTTY; | ||
4186 | } | 4336 | } |
4187 | 4337 | ||
4188 | /* | 4338 | /* |
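
Note: pci_dev_specific_enable_acs() now returns an int so its caller can tell whether a device-specific method handled the device. The caller side in drivers/pci/pci.c is outside this diff; it is expected to look roughly like the sketch below, trying the quirk table first and falling back to standard ACS enabling only on -ENOTTY:

    void pci_enable_acs(struct pci_dev *dev)
    {
            if (!pci_acs_enable)
                    return;

            /* Try the device-specific methods first */
            if (!pci_dev_specific_enable_acs(dev))
                    return;

            pci_std_enable_acs(dev);
    }

Ordering matters for Sunrise Point: the standard path writes a word at the spec-defined control offset, which on this PCH is not where the control register actually lives, so pci_quirk_enable_intel_spt_pch_acs() uses dword accesses at INTEL_SPT_ACS_CTRL instead.
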
diff --git a/drivers/pci/search.c b/drivers/pci/search.c
index a20ce7d5e2a7..33e0f033a48e 100644
--- a/drivers/pci/search.c
+++ b/drivers/pci/search.c
@@ -40,11 +40,15 @@ int pci_for_each_dma_alias(struct pci_dev *pdev, | |||
40 | * If the device is broken and uses an alias requester ID for | 40 | * If the device is broken and uses an alias requester ID for |
41 | * DMA, iterate over that too. | 41 | * DMA, iterate over that too. |
42 | */ | 42 | */ |
43 | if (unlikely(pdev->dev_flags & PCI_DEV_FLAGS_DMA_ALIAS_DEVFN)) { | 43 | if (unlikely(pdev->dma_alias_mask)) { |
44 | ret = fn(pdev, PCI_DEVID(pdev->bus->number, | 44 | u8 devfn; |
45 | pdev->dma_alias_devfn), data); | 45 | |
46 | if (ret) | 46 | for_each_set_bit(devfn, pdev->dma_alias_mask, U8_MAX) { |
47 | return ret; | 47 | ret = fn(pdev, PCI_DEVID(pdev->bus->number, devfn), |
48 | data); | ||
49 | if (ret) | ||
50 | return ret; | ||
51 | } | ||
48 | } | 52 | } |
49 | 53 | ||
50 | for (bus = pdev->bus; !pci_is_root_bus(bus); bus = bus->parent) { | 54 | for (bus = pdev->bus; !pci_is_root_bus(bus); bus = bus->parent) { |
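
Note: the dma_alias_mask bitmap walked here is populated by the new pci_add_dma_alias() helper (in drivers/pci/pci.c, outside this section) and freed in pci_release_dev() above. A sketch of what that helper is expected to look like, allocating a devfn-sized bitmap on first use:

    void pci_add_dma_alias(struct pci_dev *dev, u8 devfn)
    {
            if (!dev->dma_alias_mask)
                    dev->dma_alias_mask = kcalloc(BITS_TO_LONGS(U8_MAX),
                                                  sizeof(long), GFP_KERNEL);
            if (!dev->dma_alias_mask) {
                    dev_warn(&dev->dev, "Unable to allocate DMA alias mask\n");
                    return;
            }

            /* Record the alias; the informational printk now lives here */
            set_bit(devfn, dev->dma_alias_mask);
            dev_info(&dev->dev, "Enabling fixed DMA alias to %02x.%d\n",
                     PCI_SLOT(devfn), PCI_FUNC(devfn));
    }

Replacing the single dma_alias_devfn field with a bitmap is what lets the mic_x200 quirk above register three aliases for one device.
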