Merge tag 'imx-weim-3.12' of git://git.linaro.org/people/shawnguo/linux-2.6 into next/soc
From Shawn Guo:
This is a patch series that updates the imx-weim bus driver to support
more i.MX SoCs. Because there is no maintainer for drivers/bus so far,
I'm forwarding it through the IMX tree for the 3.12 merge window.
* tag 'imx-weim-3.12' of git://git.linaro.org/people/shawnguo/linux-2.6:
drivers: bus: imx-weim: Add support for i.MX1/21/25/27/31/35/50/51/53
drivers: bus: imx-weim: Add missing platform_driver.owner field
drivers: bus: imx-weim: use module_platform_driver_probe()
drivers: bus: imx-weim: Simplify error path
drivers: bus: imx-weim: Remove private driver data
|
This patch adds WEIM support for all i.MX CPUs supported by the kernel.
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
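As a rough illustration of how multi-SoC support is usually structured in
such a bus driver (not the literal driver code; the structure, field names
and values below are placeholders), each compatible string carries a small
devtype descriptor:

    #include <linux/module.h>
    #include <linux/of_device.h>

    /* Hypothetical per-SoC parameters; names and values are illustrative. */
    struct weim_devtype {
        unsigned int cs_count;   /* number of chip selects */
        unsigned int cs_stride;  /* register stride per chip select */
    };

    static const struct weim_devtype imx27_weim_devtype = {
        .cs_count  = 6,
        .cs_stride = 0x10,
    };

    static const struct of_device_id weim_id_table[] = {
        { .compatible = "fsl,imx27-weim", .data = &imx27_weim_devtype },
        { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, weim_id_table);

    /* probe() then picks the devtype via of_match_device(weim_id_table, &pdev->dev). */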
|
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
|
The driver is only probed once at startup, so convert the code to use
module_platform_driver_probe().
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
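For reference, module_platform_driver_probe() replaces the usual
module_platform_driver() boilerplate when the device can only show up at
boot; a minimal sketch with placeholder names:

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int __init my_weim_probe(struct platform_device *pdev)
    {
        /* one-time bus setup goes here */
        return 0;
    }

    static struct platform_driver my_weim_driver = {
        .driver = {
            .name  = "imx-weim",
            .owner = THIS_MODULE,
        },
    };

    /* Registers the driver and binds the probe function exactly once; the
     * probe code may live in __init memory since it never runs again. */
    module_platform_driver_probe(my_weim_driver, my_weim_probe);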
|
| | |
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
|
The driver only uses the probe function, so there is no reason to keep
these variables in private driver data.
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
|
Merge tag 'davinci-for-v3.12/dt' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/soc
From Sekhar Nori:
DaVinci DT updates for v3.12
----------------------------
This set of patches adds ethernet DT nodes for DA850 and also removes the
now unneeded specification of the UART clock frequency, so the kernel can
boot regardless of the UART clock frequency set by the bootloader.
* tag 'davinci-for-v3.12/dt' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
ARM: davinci: da850: do not specify clock_frequency for UART DT node
ARM: davinci: da850: add DT node for ethernet
ARM: davinci: da850: add OF_DEV_AUXDATA entry for davinci_emac
ARM: davinci: da850: add OF_DEV_AUXDATA entry for mdio.
ARM: davinci: da850: add DT node for mdio device
Signed-off-by: Kevin Hilman <khilman@linaro.org>
|
A DT kernel on da850-evm comes up with garbled UART logs. This is because
of a mismatch between the actual module clock rate and the rate specified
(clock-frequency) in the DT blob. The kernel should not assume or depend
on the bootloader's clock configuration; instead, let it find the clock
rate at runtime.
The issue was discussed here before arriving at this implementation:
"ARM: davinci: da850 evm: update clock rate for UART 1/2 DT nodes"
https://patchwork.kernel.org/patch/2162271/
Signed-off-by: Manjunathappa, Prakash <prakash.pm@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
Add the ethernet device tree node and MII pinmux information to da850,
providing interrupt details and the local MAC address.
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
Add an OF_DEV_AUXDATA entry for the davinci_emac ethernet driver in the
da850 board DT file so that it uses the emac clock.
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
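For context, an OF_DEV_AUXDATA entry maps a DT node (matched by compatible
string and unit address) to the legacy platform device name, so existing
clock lookups keep working. A sketch; the compatible string, address and
device name below are illustrative, not copied from the da850 board file:

    #include <linux/of_platform.h>

    static struct of_dev_auxdata da850_auxdata_lookup[] __initdata = {
        /* compatible, unit address, legacy device name, platform data */
        OF_DEV_AUXDATA("ti,davinci-dm6467-emac", 0x01e20000, "davinci_emac.1", NULL),
        { /* sentinel */ }
    };

    /* The table is handed to of_platform_populate() so devices created from
     * the DT get these names and thus match the existing clock lookups. */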
|
Add an OF_DEV_AUXDATA entry for the mdio driver in the da850 board DT
file so that it uses the mdio clock.
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
Add the mdio device tree node to da850, providing register details and
the mdio bus frequency.
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
Merge tag 'davinci-for-v3.12/soc' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/soc
From Sekhar Nori:
DaVinci SoC updates for v3.12
-----------------------------
This set of SoC updates contains changes to the way the UART clock is
handled, enabling DT boot to obtain the UART clock frequency instead of
relying on it being supplied via the DT binding. Similarly, handling of
the MDIO clock is fixed to make it easier to support MDIO in DT boot.
Finally, there is a patch to remove the now unnecessary setting of the
wake-up capable flag for the RTC.
* tag 'davinci-for-v3.12/soc' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
ARM: davinci: fix clock lookup for mdio device
ARM: davinci: da8xx: remove hard coding of rtc device wakeup
ARM: davinci: serial: remove davinci_serial_setup_clk()
ARM: davinci: serial: get rid of davinci_uart_config
ARM: davinci: da8xx: remove da8xx_uart_clk_enable
ARM: davinci: uart: move to devid based clk_get
Signed-off-by: Kevin Hilman <khilman@linaro.org>
|
This patch removes the clock alias for the mdio device and adds an entry
to the clock lookup table; this entry can now be used in both the DT and
non-DT cases.
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
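In the DaVinci clock code, moving from a clk alias to a lookup table entry
keyed on the device name would look roughly like this; the device name and
clock variable are assumptions for illustration:

    /* Sketch of a DaVinci clock lookup entry (arch/arm/mach-davinci style);
     * "davinci_mdio.0" and emac_clk are illustrative names. */
    static struct clk_lookup da850_clks[] = {
        CLK("davinci_mdio.0", "fck", &emac_clk),
        CLK(NULL, NULL, NULL),  /* terminator */
    };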
|
Since the rtc-omap driver itself now calls device_init_wakeup(dev, true),
the duplicate call from the rtc device registration can be removed.
This is basically a partial revert of the previous commit:
commit 75c99bb0006ee065b4e2995078d779418b0fab54
Author: Sekhar Nori <nsekhar@ti.com>
davinci: da8xx/omap-l1: mark RTC as a wakeup source
Signed-off-by: Hebbar Gururaja <gururaja.hebbar@ti.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
Get rid of davinci_serial_setup_clk() since it is no longer called
from multiple places. Instead, initialize the clock in
davinci_serial_init() itself. This also helps get rid of the
"serial_dev" member of struct davinci_soc_info.
Signed-off-by: Manjunathappa, Prakash <prakash.pm@ti.com>
Suggested-by: Sekhar Nori <nsekhar@ti.com>
[nsekhar@ti.com: split removal of davinci_serial_setup_clk()
into a separate patch.]
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
"struct davinci_uart_config" was introduced to specify
UART ports brought out or enabled on the board. But
none of the boards use it for that purpose and we are
not going to add any more board files, so remove the
structure.
Signed-off-by: Manjunathappa, Prakash <prakash.pm@ti.com>
Suggested-by: Sekhar Nori <nsekhar@ti.com>
[nsekhar@ti.com: split patch to remove davinci_serial_setup_clk()
changes.]
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
Serial clocks are enabled from of_platform_serial_setup() in of_serial.c,
so remove davinci_serial_setup_clk() from here.
Signed-off-by: Manjunathappa, Prakash <prakash.pm@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
|
For modules with a single clock, clk_get() should be done with the dev_id.
But the current davinci implementation handles multiple instances of the
UART device with a single platform_device_register(). Hence clk_get() is
based on the con_id rather than the dev_id, which is not correct. Do a
platform_device_register() for each instance and clk_get() on the dev_id.
Signed-off-by: Manjunathappa, Prakash <prakash.pm@ti.com>
[nsekhar@ti.com: actually stop using con_id in clk_get(), squash the
patch adding OF aux data into this one]
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
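The difference between the two lookup styles, shown as a minimal sketch
(the connection name is a placeholder):

    /* con_id based lookup: all UARTs hang off one platform device, so the
     * clock has to be found by connection name. */
    clk = clk_get(NULL, "uart1");

    /* dev_id based lookup: each UART is its own platform device, so the
     * clock is found from the device itself. */
    clk = clk_get(&pdev->dev, NULL);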
|
Merge tag 'ux500-core-for-arm-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson into next/soc
From Linus Walleij:
Core ux500 changes for v3.12:
- Add support for restart using the PRCMU
- Move secondary startup out of INIT section
- Set coherent_dma_mask for DMA40
* tag 'ux500-core-for-arm-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson:
ARM: ux500: set coherent_dma_mask for dma40
ARM: ux500: remove u8500_secondary_startup from INIT section.
ARM: ux500: add restart support via prcmu
|
Set coherent_dma_mask to DMA_BIT_MASK(32) for dma40 platform_device, as
without this DMA allocations were failing with the error:
dma40 dma40.0: coherent DMA mask is unset
when booting without device-tree.
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
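For a statically declared platform device this amounts to something like
the following sketch (the device name and id are illustrative):

    static struct platform_device dma40_device = {
        .name = "dma40",
        .id   = 0,
        .dev  = {
            /* allow 32-bit coherent DMA allocations for this device */
            .coherent_dma_mask = DMA_BIT_MASK(32),
        },
    };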
|
This patch removes u8500_secondary_startup from the _INIT section. There
are two reasons for this removal:
1. Discarding such a small amount of code does not save much, given
current RAM sizes.
2. Discarding this code creates a corruption issue when we boot an SMP
kernel with nr_cpus=1 or with a single cpu node in the DT.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
Add the necessary code to restart ux500-based machines using
prcmu_system_reset().
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
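A minimal sketch of what such a restart hook might look like (the function
name and the way it is wired into the machine descriptor are assumptions):

    #include <linux/reboot.h>

    static void ux500_restart(enum reboot_mode mode, const char *cmd)
    {
        /* ask the PRCMU firmware to perform a full system reset */
        prcmu_system_reset(0);
    }

    /* hooked up via the machine descriptor, e.g. .restart = ux500_restart */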
|
Merge tag 'sunxi-core-for-3.12-2' of https://github.com/mripard/linux into next/soc
Allwinner sunXi core additions for 3.12, take 2
These patches add machine support for the Allwinner A20 and A31 SoCs.
* tag 'sunxi-core-for-3.12-2' of https://github.com/mripard/linux:
ARM: sunxi: Introduce Allwinner A20 support
ARM: sun6i: Add restart code for the A31
ARM: sunxi: Add the Allwinner A31 compatible to the machine definition
|
The Allwinner A20 is a dual-core Cortex-A7-based SoC. It is
pin-compatible with the A10, and re-uses most of the IPs found in it,
plus some additional ones like a Gigabit Ethernet controller.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
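In a multiplatform kernel, adding machine support for a new SoC largely
comes down to a DT machine entry listing its compatible string; a sketch,
where the init function and the entry names are assumptions:

    static const char * const sun7i_board_dt_compat[] = {
        "allwinner,sun7i-a20",
        NULL,
    };

    DT_MACHINE_START(SUN7I_DT, "Allwinner A20 (Flattened Device Tree)")
        .init_machine = sunxi_dt_init,
        .dt_compat    = sun7i_board_dt_compat,
    MACHINE_END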
|
The Allwinner A31 has a different watchdog, with a slightly different
register layout, that requires a different restart code.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
|
The Allwinner A31 is a quad-core Cortex-A7 based SoC which shares a lot
of IPs with previous Allwinner SoCs, such as the PIO, I2C, UART, timer
and watchdog IPs, but it also differs by dropping the WEMAC ethernet
controller and, most notably, the in-house IRQ controller in favor of an
ARM GIC.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
|
Merge tag 'sunxi-core-for-3.12' of https://github.com/mripard/linux into next/soc
Allwinner sunXi core additions for 3.12
There's not much in this pull request, only a patch removing some dead code.
* tag 'sunxi-core-for-3.12' of https://github.com/mripard/linux:
ARM: sunxi: Remove Makefile.boot file
Signed-off-by: Kevin Hilman <khilman@linaro.org>
|
Since we've been a multi-platform enabled kernel from the beginning,
Makefile.boot has always been dead code.
Remove it.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
|
Merge tag 'tegra-for-3.12-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-tegra into next/soc
From Stephen Warren:
ARM: tegra: core SoC enhancements for 3.12
This branch includes a number of enhancements to core SoC support for
Tegra devices. The major new features are:
* Adds a new CPU-power-gated cpuidle state for Tegra114.
* Adds initial system suspend support for Tegra114, initially supporting
just CPU-power-gating during suspend.
* Adds "LP1" suspend mode support for all of Tegra20/30/114. This mode
both gates CPU power, and places the DRAM into self-refresh mode.
* Adds a new DT-driven PCIe driver for Tegra20/30. The driver is also moved
from arch/arm/mach-tegra/ to drivers/pci/host/.
The PCIe driver work depends on the following tag from Thomas Petazzoni:
git://git.infradead.org/linux-mvebu.git mis-3.12.2
... which is merged into the middle of this pull request.
* tag 'tegra-for-3.12-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-tegra: (33 commits)
ARM: tegra: disable LP2 cpuidle state if PCIe is enabled
MAINTAINERS: Add myself as Tegra PCIe maintainer
PCI: tegra: set up PADS_REFCLK_CFG1
PCI: tegra: Add Tegra 30 PCIe support
PCI: tegra: Move PCIe driver to drivers/pci/host
PCI: msi: add default MSI operations for !HAVE_GENERIC_HARDIRQS platforms
ARM: tegra: add LP1 suspend support for Tegra114
ARM: tegra: add LP1 suspend support for Tegra20
ARM: tegra: add LP1 suspend support for Tegra30
ARM: tegra: add common LP1 suspend support
clk: tegra114: add LP1 suspend/resume support
ARM: tegra: config the polarity of the request of sys clock
ARM: tegra: add common resume handling code for LP1 resuming
ARM: pci: add ->add_bus() and ->remove_bus() hooks to hw_pci
of: pci: add registry of MSI chips
PCI: Introduce new MSI chip infrastructure
PCI: remove ARCH_SUPPORTS_MSI kconfig option
PCI: use weak functions for MSI arch-specific functions
ARM: tegra: unify Tegra's Kconfig a bit more
ARM: tegra: remove the limitation that Tegra114 can't support suspend
...
Signed-off-by: Kevin Hilman <khilman@linaro.org>
|
Tegra20 HW appears to have a bug such that PCIe device interrupts,
whether they are legacy IRQs or MSI, are lost when LP2 is enabled. To
work around this, simply disable LP2 if any PCIe devices with interrupts
are present. Detect this via the IRQ domain map operation. This is
slightly over-conservative; if a device with an interrupt is present but
the driver does not actually use them, LP2 will still be disabled.
However, this is a reasonable trade-off which enables a simpler
workaround.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
|
I'll be taking on maintainership of the Tegra PCIe driver since it's now
moved out of arch/arm/mach-tegra.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
The registers PADS_REFCLK_CFG are an array of 16-bit data, one entry per
PCIe root port. For Tegra30, we therefore need to write a 3rd entry in
this array. Doing so makes the mini-PCIe slot on Beaver operate correctly.
While we're at it, add some #defines to partially document the fields
within these 16-bit values.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
Introduce a data structure to parameterize the driver according to SoC
generation, add Tegra30 specific code and update the device tree binding
document for Tegra30 support.
Signed-off-by: Jay Agarwal <jagarwal@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
Move the PCIe driver from arch/arm/mach-tegra into the drivers/pci/host
directory. The motivation is to collect various host controller drivers
in the same location in order to facilitate refactoring.
The Tegra PCIe driver has been largely rewritten, both in order to turn
it into a proper platform driver and to add MSI (based on code by
Krishna Kishore <kthota@nvidia.com>) as well as device tree support.
Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
[swarren, split DT changes into a separate patch in another branch]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
pci msi changes for v3.12 (round 2)
- fix build breakage for s390 allyesconfig due to !HAVE_GENERIC_HARDIRQS
|
Some platforms (e.g. S390) don't use the generic hardirqs code and
therefore do not define HAVE_GENERIC_HARDIRQS. This prevents using
the irq_set_chip_data() and irq_get_chip_data() functions that are
used for the default implementations of the MSI operations.
So, when CONFIG_GENERIC_HARDIRQS is not enabled, provide another
default implementation of the MSI operations that simply errors
out. The architecture is responsible for implementing those operations
(which is the case on S390), and cannot use the msi_chip infrastructure.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
|
Some PCI drivers may need to adjust the pci_bus structure after it has
been allocated by the Linux PCI core. The PCI core allows
architectures to implement the pcibios_add_bus() and
pcibios_remove_bus() for this purpose. This commit therefore extends
the hw_pci and pci_sys_data structures of the ARM PCI core to allow
PCI drivers to register ->add_bus() and ->remove_bus() in hw_pci,
which will get called when a bus is added or removed from the system.
This will be used for example by the Marvell PCIe driver to connect a
particular PCI bus with its corresponding MSI chip to handle Message
Signaled Interrupts.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Thierry Reding <thierry.reding@gmail.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Daniel Price <daniel.price@gmail.com>
Tested-by: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
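In a host controller driver the new hook might be used roughly like this
(the names are hypothetical); the point is simply to attach the MSI chip
to every bus as it is enumerated:

    static void my_pcie_add_bus(struct pci_bus *bus)
    {
        /* associate the host's MSI controller with the newly added bus,
         * so the MSI setup path can later find it via bus->msi */
        bus->msi = &my_msi_chip;
    }

    /* ...and in the driver's struct hw_pci: .add_bus = my_pcie_add_bus, */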
|
This commit adds a very basic registry of msi_chip structures, so that
an IRQ controller driver can register an msi_chip, and a PCIe host
controller can find it, based on a 'struct device_node'.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
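The lookup side of the registry, as a rough sketch of how a host
controller might use it (the helper name is recalled from this series, and
the surrounding structure is hypothetical):

    static int my_pcie_attach_msi(struct platform_device *pdev,
                                  struct my_pcie_host *host)
    {
        struct device_node *msi_node;

        /* find the MSI controller named by the host's "msi-parent" phandle */
        msi_node = of_parse_phandle(pdev->dev.of_node, "msi-parent", 0);
        if (!msi_node)
            return 0;   /* MSI support is optional */

        host->msi_chip = of_pci_find_msi_chip_by_node(msi_node);
        return host->msi_chip ? 0 : -EPROBE_DEFER;
    }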
|
The new struct msi_chip is used to associate an MSI controller with a
PCI bus. It is automatically handed down from the root to its children
during bus enumeration.
This patch provides default (weak) implementations for the architecture-
specific MSI functions (arch_setup_msi_irq(), arch_teardown_msi_irq()
and arch_msi_check_device()) which check if a PCI device's bus has an
attached MSI chip and forward the call appropriately.
Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Tested-by: Daniel Price <daniel.price@gmail.com>
Tested-by: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
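The forwarding described above looks roughly like this simplified sketch
of the weak defaults (not the exact upstream code):

    int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc)
    {
        struct msi_chip *chip = dev->bus->msi;

        if (!chip || !chip->setup_irq)
            return -EINVAL; /* no MSI chip attached to this bus */

        return chip->setup_irq(chip, dev, desc);
    }

    void __weak arch_teardown_msi_irq(unsigned int irq)
    {
        struct msi_desc *desc = irq_get_msi_desc(irq);
        struct msi_chip *chip = desc ? desc->dev->bus->msi : NULL;

        if (chip && chip->teardown_irq)
            chip->teardown_irq(chip, irq);
    }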
|
Now that we have weak versions for each of the PCI MSI architecture
functions, we can actually build the MSI support for all platforms,
regardless of whether or not they provide architecture-specific
versions of those functions. For this reason, the ARCH_SUPPORTS_MSI
hidden kconfig boolean becomes useless, and this patch gets rid of it.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Daniel Price <daniel.price@gmail.com>
Tested-by: Thierry Reding <thierry.reding@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux390@de.ibm.com
Cc: linux-s390@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: David S. Miller <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
|
Until now, the MSI architecture-specific functions could be overloaded
using a fairly complex set of #defines and compile-time
conditionals. In order to prepare for the introduction of the msi_chip
infrastructure, it is desirable to switch all those functions to use
the 'weak' mechanism. This commit converts all the architectures that
were overriding those MSI functions to use the new strategy.
Note that we keep two separate, non-weak, functions
default_teardown_msi_irqs() and default_restore_msi_irqs() for the
default behavior of the arch_teardown_msi_irqs() and
arch_restore_msi_irqs(), as the default behavior is needed by x86 PCI
code.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Daniel Price <daniel.price@gmail.com>
Tested-by: Thierry Reding <thierry.reding@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux390@de.ibm.com
Cc: linux-s390@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: David S. Miller <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
|
The LP1 suspend mode powers off the CPU, clock-gates the PLLs and puts
the SDRAM into self-refresh mode. Any interrupt can wake the device up
from LP1. The sequence when suspending to LP1 is:
* turning off the L1 data cache and the MMU
* storing some EMC registers, DPD (deep power down) status, the clk source
of mselect and the SCLK burst policy
* putting the SDRAM into self-refresh
* switching the CPU to CLK_M (12MHz OSC)
* turning off PLLM, PLLP, PLLA, PLLC and PLLX
* switching SCLK to CLK_S (32KHz OSC)
* shutting off the CPU rail
The sequence when resuming from LP1 is:
* re-enabling PLLM, PLLP, PLLA, PLLC and PLLX
* restoring the clk source of mselect and the SCLK burst policy
* setting up the CCLK burst policy to PLLX
* restoring DPD status and some EMC registers
* resuming the SDRAM to normal mode
* jumping to "tegra_resume" from PMC_SCRATCH41
Because the SDRAM will be put into self-refresh mode, the low level
procedures for LP1 suspend and resume need to be copied to
TEGRA_IRAM_CODE_AREA (TEGRA_IRAM_BASE + SZ_4K) when suspending. Before
restoring the CPU context when resuming, the SDRAM needs to be switched
back to normal mode, the PLLs need to be re-enabled and the SCLK burst
policy restored. Then we jump to "tegra_resume", which is expected to be
stored in PMC_SCRATCH41, to restore the CPU context and return to the kernel.
Based on the work by: Bo Yan <byan@nvidia.com>
Signed-off-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
The LP1 suspend mode powers off the CPU, clock-gates the PLLs and puts
the SDRAM into self-refresh mode. Any interrupt can wake the device up
from LP1. The sequence when suspending to LP1 is:
* turning off the L1 data cache and the MMU
* putting the SDRAM into self-refresh
* storing some EMC registers and the SCLK burst policy
* switching the CPU to CLK_M (12MHz OSC)
* switching SCLK to CLK_S (32KHz OSC)
* turning off PLLM, PLLP and PLLC
* shutting off the CPU rail
The sequence when resuming from LP1 is:
* re-enabling PLLM, PLLP and PLLC
* restoring some EMC registers and the SCLK burst policy
* setting up the CCLK burst policy to PLLP
* resuming the SDRAM to normal mode
* jumping to "tegra_resume" from PMC_SCRATCH41
Because the SDRAM will be put into self-refresh mode, the low level
procedures for LP1 suspend and resume need to be copied to
TEGRA_IRAM_CODE_AREA (TEGRA_IRAM_BASE + SZ_4K) when suspending. Before
restoring the CPU context when resuming, the SDRAM needs to be switched
back to normal mode, the PLLs need to be re-enabled, the SCLK burst
policy restored and the CCLK burst policy set to PLLP. Then we jump to
"tegra_resume", which is expected to be stored in PMC_SCRATCH41, to
restore the CPU context and return to the kernel.
Based on the work by:
Colin Cross <ccross@android.com>
Gary King <gking@nvidia.com>
Signed-off-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
The LP1 suspend mode powers off the CPU, clock-gates the PLLs and puts
the SDRAM into self-refresh mode. Any interrupt can wake the device up
from LP1. The sequence when suspending to LP1 is:
* turning off the L1 data cache and the MMU
* storing some EMC registers, DPD (deep power down) status, the clk source
of mselect and the SCLK burst policy
* putting the SDRAM into self-refresh
* switching the CPU to CLK_M (12MHz OSC)
* turning off PLLM, PLLP, PLLA, PLLC and PLLX
* switching SCLK to CLK_S (32KHz OSC)
* shutting off the CPU rail
The sequence when resuming from LP1 is:
* re-enabling PLLM, PLLP, PLLA, PLLC and PLLX
* restoring the clk source of mselect and the SCLK burst policy
* setting up the CCLK burst policy to PLLX
* restoring DPD status and some EMC registers
* resuming the SDRAM to normal mode
* jumping to "tegra_resume" from PMC_SCRATCH41
Because the SDRAM will be put into self-refresh mode, the low level
procedures for LP1 suspend and resume need to be copied to
TEGRA_IRAM_CODE_AREA (TEGRA_IRAM_BASE + SZ_4K) when suspending. Before
restoring the CPU context when resuming, the SDRAM needs to be switched
back to normal mode, the PLLs need to be re-enabled, the SCLK burst
policy restored and the CCLK burst policy set to PLLX. Then we jump to
"tegra_resume", which is expected to be stored in PMC_SCRATCH41, to
restore the CPU context and return to the kernel.
Based on the work by: Scott Williams <scwilliams@nvidia.com>
Signed-off-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
The LP1 suspend mode on Tegra means the CPU rail is off, devices and PLLs
are clock-gated and the SDRAM is in self-refresh mode. That means the low
level LP1 suspend and resume code cannot run from DRAM, and the CPU must
switch to the always-on clock domain (a.k.a. CLK_M, the 12MHz oscillator).
The system clock (SCLK) is switched to CLK_S, a 32KHz oscillator.
The LP1 low level handling code needs to be moved to the IRAM area first,
and the LP1 mask must be marked to indicate that the Tegra device is in
LP1. The CPU power timer, which was originally based on PCLK, needs to be
re-calculated based on 32KHz.
When resuming from LP1, the LP1 reset handler will resume the PLLs, put
the DRAM back into normal mode and then jump to "tegra_resume", which will
restore the full context before returning to the kernel. The
"tegra_resume" handler is expected to be found in the PMC_SCRATCH41
register.
These are the common LP1 procedures for Tegra, so this patch mainly does
the following:
* moving the LP1 low level handling code to IRAM
* marking the LP1 mask
* copying the physical address of "tegra_resume" to PMC_SCRATCH41
* re-calculating the CPU power timer based on 32KHz
Signed-off-by: Joseph Lo <josephl@nvidia.com>
[swarren, replaced IRAM_CODE macro with IO_ADDRESS(TEGRA_IRAM_CODE_AREA)]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
When the system suspends to LP1, the CPU clock source is switched to
CLK_M (12MHz Oscillator) during suspend/resume flow. The CPU clock
source is controlled by the CCLKG_BURST_POLICY register, and hence this
register must be restored during LP1 resume.
Cc: Mike Turquette <mturquette@linaro.org>
Signed-off-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
When suspending to LP1 mode, SYSCLK will be clock-gated. Different boards
may use a different polarity for the SYSCLK request, so this patch
configures the polarity for the board from the DT.
Signed-off-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
Add support to the Tegra CPU reset vector to detect whether the CPU is
resuming from LP1 suspend state. If it is, branch to the LP1-specific
resume code.
When Tegra enters the LP1 suspend state, the SDRAM controller is placed
into a self-refresh state. For this reason, we must place the LP1 resume
code into IRAM, so that it is accessible before SDRAM access has been
re-enabled.
Signed-off-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
Move all common select clauses from ARCH_TEGRA_*_SOC to ARCH_TEGRA to
eliminate duplication. The USB-related selects all should have been
common too, but were missing from Tegra114 previously. Move these to
ARCH_TEGRA too. The latter fixes a build break when only Tegra114
support was enabled, but not Tegra20 or Tegra30 support.
Signed-off-by: Stephen Warren <swarren@nvidia.com>