author | Jeff Garzik <jgarzik@pobox.com> | 2005-08-29 19:06:29 -0400 |
committer | Jeff Garzik <jgarzik@pobox.com> | 2005-08-29 19:06:29 -0400 |
commit | 502c7f1dd566e7ca8aabdb755182664c5fdaebf1 (patch) | |
tree | 31934a4da45c0cb83acd118a181db4c51967ac8a | |
parent | 2cba582a49f1535c1a12a687cfb3dab713c22cc4 (diff) | |
parent | 8f3d17fb7bcb7c255197d11469fb5e9695c9d2f4 (diff) |
Merge /spare/repo/linux-2.6/
241 files changed, 13874 insertions, 6714 deletions
diff --git a/Documentation/networking/phy.txt b/Documentation/networking/phy.txt new file mode 100644 index 000000000000..29ccae409031 --- /dev/null +++ b/Documentation/networking/phy.txt | |||
@@ -0,0 +1,288 @@ | |||
1 | |||
2 | ------- | ||
3 | PHY Abstraction Layer | ||
4 | (Updated 2005-07-21) | ||
5 | |||
6 | Purpose | ||
7 | |||
8 | Most network devices consist of a set of registers which provide an interface | ||
9 | to a MAC layer, which communicates with the physical connection through a | ||
10 | PHY. The PHY concerns itself with negotiating link parameters with the link | ||
11 | partner on the other side of the network connection (typically, an ethernet | ||
12 | cable), and provides a register interface to allow drivers to determine what | ||
13 | settings were chosen, and to configure what settings are allowed. | ||
14 | |||
15 | While these devices are distinct from the network devices, and conform to a | ||
16 | standard layout for the registers, it has been common practice to integrate | ||
17 | the PHY management code with the network driver. This has resulted in large | ||
18 | amounts of redundant code. Also, on embedded systems with multiple (and | ||
19 | sometimes quite different) ethernet controllers connected to the same | ||
20 | management bus, it is difficult to ensure safe use of the bus. | ||
21 | |||
22 | Since the PHYs are devices, and the management busses through which they are | ||
23 | accessed are, in fact, busses, the PHY Abstraction Layer treats them as such. | ||
24 | In doing so, it has these goals: | ||
25 | |||
26 | 1) Increase code-reuse | ||
27 | 2) Increase overall code-maintainability | ||
28 | 3) Speed development time for new network drivers, and for new systems | ||
29 | |||
30 | Basically, this layer is meant to provide an interface to PHY devices which | ||
31 | allows network driver writers to write as little code as possible, while | ||
32 | still providing a full feature set. | ||
33 | |||
34 | The MDIO bus | ||
35 | |||
36 | Most network devices are connected to a PHY by means of a management bus. | ||
37 | Different devices use different busses (though some share common interfaces). | ||
38 | In order to take advantage of the PAL, each bus interface needs to be | ||
39 | registered as a distinct device. | ||
40 | |||
41 | 1) read and write functions must be implemented. Their prototypes are: | ||
42 | |||
43 | int write(struct mii_bus *bus, int mii_id, int regnum, u16 value); | ||
44 | int read(struct mii_bus *bus, int mii_id, int regnum); | ||
45 | |||
46 | mii_id is the address on the bus for the PHY, and regnum is the register | ||
47 | number. These functions are guaranteed not to be called from interrupt | ||
48 | time, so it is safe for them to block while waiting for an interrupt to signal | ||
49 | that the operation is complete. | ||
50 | |||
51 | 2) A reset function is necessary. This is used to return the bus to an | ||
52 | initialized state. | ||
53 | |||
54 | 3) A probe function is needed. This function should set up anything the bus | ||
55 | driver needs, set up the mii_bus structure, and register with the PAL using | ||
56 | mdiobus_register. Similarly, there's a remove function to undo all of | ||
57 | that (use mdiobus_unregister). | ||
58 | |||
59 | 4) Like any driver, the device_driver structure must be configured, and init | ||
60 | and exit functions are used to register the driver. | ||
61 | |||
62 | 5) The bus must also be declared somewhere as a device, and registered. | ||
63 | |||
64 | As an example for how one driver implemented an mdio bus driver, see | ||
65 | drivers/net/gianfar_mii.c and arch/ppc/syslib/mpc85xx_devices.c | ||
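A condensed sketch of what such a bus driver can look like is shown below. The hardware accesses are left as placeholder comments, and the mii_bus field names used here (name, read, write, reset, priv) are assumptions that may differ between kernel versions; treat it as a shape, not a reference implementation.

	#include <linux/kernel.h>
	#include <linux/module.h>
	#include <linux/phy.h>

	static int example_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
	{
		/* bus->priv points at whatever state the probe routine stored.
		 * May block: these hooks are never called from interrupt time. */
		/* ... start the read on the hardware and wait for completion ... */
		return 0;	/* would return the 16-bit register value */
	}

	static int example_mdio_write(struct mii_bus *bus, int mii_id,
				      int regnum, u16 value)
	{
		/* ... write 'value' to register 'regnum' of PHY 'mii_id' ... */
		return 0;
	}

	static int example_mdio_reset(struct mii_bus *bus)
	{
		/* Return the bus hardware to a freshly initialized state. */
		return 0;
	}

	static struct mii_bus example_mii_bus = {
		.name	= "example-mdio",
		.read	= example_mdio_read,
		.write	= example_mdio_write,
		.reset	= example_mdio_reset,
	};

The bus device's probe routine would then map the registers and call mdiobus_register(&example_mii_bus), and its remove routine would call mdiobus_unregister() before tearing the hardware down.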
66 | |||
67 | Connecting to a PHY | ||
68 | |||
69 | Sometime during startup, the network driver needs to establish a connection | ||
70 | between the PHY device and the network device. At this time, the PHY's bus | ||
71 | and drivers need to have been loaded, so the PHY is ready for the connection. | ||
72 | At this point, there are several ways to connect to the PHY: | ||
73 | |||
74 | 1) The PAL handles everything, and only calls the network driver when | ||
75 | the link state changes, so it can react. | ||
76 | |||
77 | 2) The PAL handles everything except interrupts (usually because the | ||
78 | controller has the interrupt registers). | ||
79 | |||
80 | 3) The PAL handles everything, but checks in with the driver every second, | ||
81 | allowing the network driver to react first to any changes before the PAL | ||
82 | does. | ||
83 | |||
84 | 4) The PAL serves only as a library of functions, with the network device | ||
85 | manually calling functions to update status, and configure the PHY | ||
86 | |||
87 | |||
88 | Letting the PHY Abstraction Layer do Everything | ||
89 | |||
90 | If you choose option 1 (the hope is that every driver can, but the PAL must | ||
91 | still be useful to drivers that can't), connecting to the PHY is simple: | ||
92 | |||
93 | First, you need a function to react to changes in the link state. This | ||
94 | function follows this protocol: | ||
95 | |||
96 | static void adjust_link(struct net_device *dev); | ||
97 | |||
98 | Next, you need to know the device name of the PHY connected to this device. | ||
99 | The name will look something like "phy0:0", where the first number is the | ||
100 | bus id, and the second is the PHY's address on that bus. | ||
101 | |||
102 | Now, to connect, just call this function: | ||
103 | |||
104 | phydev = phy_connect(dev, phy_name, &adjust_link, flags); | ||
105 | |||
106 | phydev is a pointer to the phy_device structure which represents the PHY. If | ||
107 | phy_connect is successful, it will return the pointer. dev, here, is the | ||
108 | pointer to your net_device. Once done, this function will have started the | ||
109 | PHY's software state machine, and registered for the PHY's interrupt, if it | ||
110 | has one. The phydev structure will be populated with information about the | ||
111 | current state, though the PHY will not yet be truly operational at this | ||
112 | point. | ||
113 | |||
114 | flags is a u32 which can optionally contain phy-specific flags. | ||
115 | This is useful if the system has put hardware restrictions on | ||
116 | the PHY/controller, of which the PHY needs to be aware. | ||
117 | |||
118 | Now just make sure that phydev->supported and phydev->advertising have any | ||
119 | values pruned from them which don't make sense for your controller (a 10/100 | ||
120 | controller may be connected to a gigabit-capable PHY, so you would need to | ||
121 | mask off SUPPORTED_1000baseT*). See include/linux/ethtool.h for definitions | ||
122 | for these bitfields. Note that you should not SET any bits, or the PHY may | ||
123 | get put into an unsupported state. | ||
124 | |||
125 | Lastly, once the controller is ready to handle network traffic, you call | ||
126 | phy_start(phydev). This tells the PAL that you are ready, and configures the | ||
127 | PHY to connect to the network. If you want to handle your own interrupts, | ||
128 | just set phydev->irq to PHY_IGNORE_INTERRUPT before you call phy_start. | ||
129 | Similarly, if you don't want to use interrupts, set phydev->irq to PHY_POLL. | ||
130 | |||
131 | When you want to disconnect from the network (even if just briefly), you call | ||
132 | phy_stop(phydev). | ||
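Putting the above together, the option 1 flow in a driver's open/close paths might look roughly like the sketch below. The PHY name "phy0:0", the private structure, and the IS_ERR() error handling are illustrative assumptions, not requirements.

	#include <linux/netdevice.h>
	#include <linux/ethtool.h>
	#include <linux/err.h>
	#include <linux/phy.h>

	struct example_priv {
		struct phy_device *phydev;	/* hypothetical private data */
	};

	static void example_adjust_link(struct net_device *dev)
	{
		struct example_priv *priv = netdev_priv(dev);

		/* React here to priv->phydev->link, ->speed and ->duplex,
		 * e.g. reprogram the MAC and call phy_print_status(). */
	}

	static int example_open(struct net_device *dev)
	{
		struct example_priv *priv = netdev_priv(dev);
		struct phy_device *phydev;

		phydev = phy_connect(dev, "phy0:0", &example_adjust_link, 0);
		if (IS_ERR(phydev))
			return PTR_ERR(phydev);

		/* This MAC is 10/100 only, so prune the gigabit modes. */
		phydev->supported &= ~(SUPPORTED_1000baseT_Half |
				       SUPPORTED_1000baseT_Full);
		phydev->advertising = phydev->supported;

		priv->phydev = phydev;
		phy_start(phydev);	/* controller is ready for traffic */
		return 0;
	}

	static int example_close(struct net_device *dev)
	{
		struct example_priv *priv = netdev_priv(dev);

		phy_stop(priv->phydev);	/* disconnect from the network */
		return 0;
	}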
133 | |||
134 | Keeping Close Tabs on the PAL | ||
135 | |||
136 | It is possible that the PAL's built-in state machine needs a little help to | ||
137 | keep your network device and the PHY properly in sync. If so, you can | ||
138 | register a helper function when connecting to the PHY, which will be called | ||
139 | every second before the state machine reacts to any changes. To do this, you | ||
140 | need to manually call phy_attach() and phy_prepare_link(), and then call | ||
141 | phy_start_machine() with the second argument set to point to your special | ||
142 | handler. | ||
143 | |||
144 | Currently there are no examples of how to use this functionality, and testing | ||
145 | of it has been limited because the author does not have any drivers which use | ||
146 | it (they all use option 1). So caveat emptor. | ||
147 | |||
148 | Doing it all yourself | ||
149 | |||
150 | There's a remote chance that the PAL's built-in state machine cannot track | ||
151 | the complex interactions between the PHY and your network device. If this is | ||
152 | so, you can simply call phy_attach(), and not call phy_start_machine or | ||
153 | phy_prepare_link(). This will mean that phydev->state is entirely yours to | ||
154 | handle (phy_start and phy_stop toggle between some of the states, so you | ||
155 | might need to avoid them). | ||
156 | |||
157 | An effort has been made to make sure that useful functionality can be | ||
158 | accessed without the state-machine running, and most of these functions are | ||
159 | descended from functions which did not interact with a complex state-machine. | ||
160 | However, again, no effort has been made so far to test running without the | ||
161 | state machine, so tread carefully. | ||
162 | |||
163 | Here is a brief rundown of the functions: | ||
164 | |||
165 | int phy_read(struct phy_device *phydev, u16 regnum); | ||
166 | int phy_write(struct phy_device *phydev, u16 regnum, u16 val); | ||
167 | |||
168 | Simple read/write primitives. They invoke the bus's read/write function | ||
169 | pointers. | ||
170 | |||
171 | void phy_print_status(struct phy_device *phydev); | ||
172 | |||
173 | A convenience function to print out the PHY status neatly. | ||
174 | |||
175 | int phy_clear_interrupt(struct phy_device *phydev); | ||
176 | int phy_config_interrupt(struct phy_device *phydev, u32 interrupts); | ||
177 | |||
178 | Clear the PHY's interrupt, and configure which ones are allowed, | ||
179 | respectively. Currently only all-on or all-off is supported. | ||
180 | |||
181 | int phy_enable_interrupts(struct phy_device *phydev); | ||
182 | int phy_disable_interrupts(struct phy_device *phydev); | ||
183 | |||
184 | Functions which enable/disable PHY interrupts, clearing them | ||
185 | before and after, respectively. | ||
186 | |||
187 | int phy_start_interrupts(struct phy_device *phydev); | ||
188 | int phy_stop_interrupts(struct phy_device *phydev); | ||
189 | |||
190 | Requests the IRQ for the PHY interrupts, then enables them for | ||
191 | start, or disables and then frees them for stop. | ||
192 | |||
193 | struct phy_device * phy_attach(struct net_device *dev, const char *phy_id, | ||
194 | u32 flags); | ||
195 | |||
196 | Attaches a network device to a particular PHY, binding the PHY to a generic | ||
197 | driver if none was found during bus initialization. Passes in | ||
198 | any phy-specific flags as needed. | ||
199 | |||
200 | int phy_start_aneg(struct phy_device *phydev); | ||
201 | |||
202 | Using variables inside the phydev structure, either configures advertising | ||
203 | and resets autonegotiation, or disables autonegotiation, and configures | ||
204 | forced settings. | ||
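For example, forcing 100 Mbit/s full duplex (rather than autonegotiating) is just a matter of filling in those fields before the call. This fragment is a sketch and assumes <linux/phy.h> and <linux/ethtool.h> are already included:

	static int example_force_100_full(struct phy_device *phydev)
	{
		phydev->autoneg = AUTONEG_DISABLE;
		phydev->speed   = SPEED_100;
		phydev->duplex  = DUPLEX_FULL;

		return phy_start_aneg(phydev);	/* applies the forced settings */
	}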
205 | |||
206 | static inline int phy_read_status(struct phy_device *phydev); | ||
207 | |||
208 | Fills the phydev structure with up-to-date information about the current | ||
209 | settings in the PHY. | ||
210 | |||
211 | void phy_sanitize_settings(struct phy_device *phydev); | ||
212 | |||
213 | Resolves differences between currently desired settings, and | ||
214 | supported settings for the given PHY device. Does not make | ||
215 | the changes in the hardware, though. | ||
216 | |||
217 | int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd); | ||
218 | int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd); | ||
219 | |||
220 | Ethtool convenience functions. | ||
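These map naturally onto a driver's ethtool_ops. A minimal sketch, reusing the hypothetical private structure from the earlier example:

	static int example_get_settings(struct net_device *dev,
					struct ethtool_cmd *cmd)
	{
		struct example_priv *priv = netdev_priv(dev);

		return phy_ethtool_gset(priv->phydev, cmd);
	}

	static int example_set_settings(struct net_device *dev,
					struct ethtool_cmd *cmd)
	{
		struct example_priv *priv = netdev_priv(dev);

		return phy_ethtool_sset(priv->phydev, cmd);
	}

	static struct ethtool_ops example_ethtool_ops = {
		.get_settings	= example_get_settings,
		.set_settings	= example_set_settings,
	};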
221 | |||
222 | int phy_mii_ioctl(struct phy_device *phydev, | ||
223 | struct mii_ioctl_data *mii_data, int cmd); | ||
224 | |||
225 | The MII ioctl. Note that this function will completely screw up the state | ||
226 | machine if you write registers like BMCR, BMSR, ADVERTISE, etc. Best to | ||
227 | use this only to write registers which are not standard, and don't set off | ||
228 | a renegotiation. | ||
229 | |||
230 | |||
231 | PHY Device Drivers | ||
232 | |||
233 | With the PHY Abstraction Layer, adding support for new PHYs is | ||
234 | quite easy. In some cases, no work is required at all! However, | ||
235 | many PHYs require a little hand-holding to get up-and-running. | ||
236 | |||
237 | Generic PHY driver | ||
238 | |||
239 | If the desired PHY doesn't have any errata, quirks, or special | ||
240 | features you want to support, then it may be best to not add | ||
241 | support, and let the PHY Abstraction Layer's Generic PHY Driver | ||
242 | do all of the work. | ||
243 | |||
244 | Writing a PHY driver | ||
245 | |||
246 | If you do need to write a PHY driver, the first thing to do is | ||
247 | make sure it can be matched with an appropriate PHY device. | ||
248 | This is done during bus initialization by reading the device's | ||
249 | UID (stored in registers 2 and 3), then comparing it to each | ||
250 | driver's phy_id field by ANDing it with each driver's | ||
251 | phy_id_mask field. Also, it needs a name. Here's an example: | ||
252 | |||
253 | static struct phy_driver dm9161_driver = { | ||
254 | .phy_id = 0x0181b880, | ||
255 | .name = "Davicom DM9161E", | ||
256 | .phy_id_mask = 0x0ffffff0, | ||
257 | ... | ||
258 | }; | ||
259 | |||
260 | Next, you need to specify what features (speed, duplex, autoneg, | ||
261 | etc) your PHY device and driver support. Most PHYs support | ||
262 | PHY_BASIC_FEATURES, but you can look in include/linux/ethtool.h for other | ||
263 | features. | ||
264 | |||
265 | Each driver consists of a number of function pointers: | ||
266 | |||
267 | config_init: configures PHY into a sane state after a reset. | ||
268 | For instance, a Davicom PHY requires descrambling disabled. | ||
269 | probe: Does any setup needed by the driver | ||
270 | suspend/resume: power management | ||
271 | config_aneg: Changes the speed/duplex/negotiation settings | ||
272 | read_status: Reads the current speed/duplex/negotiation settings | ||
273 | ack_interrupt: Clear a pending interrupt | ||
274 | config_intr: Enable or disable interrupts | ||
275 | remove: Does any driver take-down | ||
276 | |||
277 | Of these, only config_aneg and read_status are required to be | ||
278 | assigned by the driver code. The rest are optional. Also, it is | ||
279 | preferred to use the generic phy driver's versions of these two | ||
280 | functions if at all possible: genphy_read_status and | ||
281 | genphy_config_aneg. If this is not possible, it is likely that | ||
282 | you only need to perform some actions before and after invoking | ||
283 | these functions, and so your functions will wrap the generic | ||
284 | ones. | ||
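A skeleton for such a driver might look like the sketch below. The ID, mask, and the register fixup in config_init are invented for illustration, and phy_driver_register()/phy_driver_unregister() are assumed to be the registration entry points:

	#include <linux/module.h>
	#include <linux/phy.h>

	static int example_phy_config_init(struct phy_device *phydev)
	{
		/* Undo a hypothetical quirk left over from reset
		 * (register 0x10 is made up for this example). */
		return phy_write(phydev, 0x10, 0x0000);
	}

	static int example_phy_config_aneg(struct phy_device *phydev)
	{
		/* Nothing special needed before negotiating, so wrap
		 * the generic routine. */
		return genphy_config_aneg(phydev);
	}

	static struct phy_driver example_phy_driver = {
		.phy_id		= 0x01234560,	/* invented OUI/model */
		.phy_id_mask	= 0x0ffffff0,
		.name		= "Example 10/100 PHY",
		.features	= PHY_BASIC_FEATURES,
		.config_init	= example_phy_config_init,
		.config_aneg	= example_phy_config_aneg,
		.read_status	= genphy_read_status,	/* generic version */
		.driver		= { .owner = THIS_MODULE },
	};

	static int __init example_phy_init(void)
	{
		return phy_driver_register(&example_phy_driver);
	}

	static void __exit example_phy_exit(void)
	{
		phy_driver_unregister(&example_phy_driver);
	}

	module_init(example_phy_init);
	module_exit(example_phy_exit);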
285 | |||
286 | Feel free to look at the Marvell, Cicada, and Davicom drivers in | ||
287 | drivers/net/phy/ for examples (the lxt and qsemi drivers have | ||
288 | not been tested as of this writing). | ||
@@ -2,7 +2,7 @@ VERSION = 2 | |||
2 | PATCHLEVEL = 6 | 2 | PATCHLEVEL = 6 |
3 | SUBLEVEL = 13 | 3 | SUBLEVEL = 13 |
4 | EXTRAVERSION = | 4 | EXTRAVERSION = |
5 | NAME=Woozy Numbat | 5 | NAME=Affluent Albatross |
6 | 6 | ||
7 | # *DOCUMENTATION* | 7 | # *DOCUMENTATION* |
8 | # To see a list of typical targets execute "make help" | 8 | # To see a list of typical targets execute "make help" |
diff --git a/arch/alpha/kernel/signal.c b/arch/alpha/kernel/signal.c index 08fe8071a7f8..2e45e8604e32 100644 --- a/arch/alpha/kernel/signal.c +++ b/arch/alpha/kernel/signal.c | |||
@@ -566,13 +566,12 @@ handle_signal(int sig, struct k_sigaction *ka, siginfo_t *info, | |||
566 | if (ka->sa.sa_flags & SA_RESETHAND) | 566 | if (ka->sa.sa_flags & SA_RESETHAND) |
567 | ka->sa.sa_handler = SIG_DFL; | 567 | ka->sa.sa_handler = SIG_DFL; |
568 | 568 | ||
569 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 569 | spin_lock_irq(¤t->sighand->siglock); |
570 | spin_lock_irq(¤t->sighand->siglock); | 570 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
571 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 571 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
572 | sigaddset(¤t->blocked,sig); | 572 | sigaddset(¤t->blocked,sig); |
573 | recalc_sigpending(); | 573 | recalc_sigpending(); |
574 | spin_unlock_irq(¤t->sighand->siglock); | 574 | spin_unlock_irq(¤t->sighand->siglock); |
575 | } | ||
576 | } | 575 | } |
577 | 576 | ||
578 | static inline void | 577 | static inline void |
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index c65c6eb9810d..4bf0e8737e1f 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig | |||
@@ -635,10 +635,6 @@ config PM | |||
635 | and the Battery Powered Linux mini-HOWTO, available from | 635 | and the Battery Powered Linux mini-HOWTO, available from |
636 | <http://www.tldp.org/docs.html#howto>. | 636 | <http://www.tldp.org/docs.html#howto>. |
637 | 637 | ||
638 | Note that, even if you say N here, Linux on the x86 architecture | ||
639 | will issue the hlt instruction if nothing is to be done, thereby | ||
640 | sending the processor to sleep and saving power. | ||
641 | |||
642 | config APM | 638 | config APM |
643 | tristate "Advanced Power Management Emulation" | 639 | tristate "Advanced Power Management Emulation" |
644 | depends on PM | 640 | depends on PM |
@@ -650,12 +646,6 @@ config APM | |||
650 | battery status information, and user-space programs will receive | 646 | battery status information, and user-space programs will receive |
651 | notification of APM "events" (e.g. battery status change). | 647 | notification of APM "events" (e.g. battery status change). |
652 | 648 | ||
653 | If you select "Y" here, you can disable actual use of the APM | ||
654 | BIOS by passing the "apm=off" option to the kernel at boot time. | ||
655 | |||
656 | Note that the APM support is almost completely disabled for | ||
657 | machines with more than one CPU. | ||
658 | |||
659 | In order to use APM, you will need supporting software. For location | 649 | In order to use APM, you will need supporting software. For location |
660 | and more information, read <file:Documentation/pm.txt> and the | 650 | and more information, read <file:Documentation/pm.txt> and the |
661 | Battery Powered Linux mini-HOWTO, available from | 651 | Battery Powered Linux mini-HOWTO, available from |
@@ -665,39 +655,12 @@ config APM | |||
665 | manpage ("man 8 hdparm") for that), and it doesn't turn off | 655 | manpage ("man 8 hdparm") for that), and it doesn't turn off |
666 | VESA-compliant "green" monitors. | 656 | VESA-compliant "green" monitors. |
667 | 657 | ||
668 | This driver does not support the TI 4000M TravelMate and the ACER | ||
669 | 486/DX4/75 because they don't have compliant BIOSes. Many "green" | ||
670 | desktop machines also don't have compliant BIOSes, and this driver | ||
671 | may cause those machines to panic during the boot phase. | ||
672 | |||
673 | Generally, if you don't have a battery in your machine, there isn't | 658 | Generally, if you don't have a battery in your machine, there isn't |
674 | much point in using this driver and you should say N. If you get | 659 | much point in using this driver and you should say N. If you get |
675 | random kernel OOPSes or reboots that don't seem to be related to | 660 | random kernel OOPSes or reboots that don't seem to be related to |
676 | anything, try disabling/enabling this option (or disabling/enabling | 661 | anything, try disabling/enabling this option (or disabling/enabling |
677 | APM in your BIOS). | 662 | APM in your BIOS). |
678 | 663 | ||
679 | Some other things you should try when experiencing seemingly random, | ||
680 | "weird" problems: | ||
681 | |||
682 | 1) make sure that you have enough swap space and that it is | ||
683 | enabled. | ||
684 | 2) pass the "no-hlt" option to the kernel | ||
685 | 3) switch on floating point emulation in the kernel and pass | ||
686 | the "no387" option to the kernel | ||
687 | 4) pass the "floppy=nodma" option to the kernel | ||
688 | 5) pass the "mem=4M" option to the kernel (thereby disabling | ||
689 | all but the first 4 MB of RAM) | ||
690 | 6) make sure that the CPU is not over clocked. | ||
691 | 7) read the sig11 FAQ at <http://www.bitwizard.nl/sig11/> | ||
692 | 8) disable the cache from your BIOS settings | ||
693 | 9) install a fan for the video card or exchange video RAM | ||
694 | 10) install a better fan for the CPU | ||
695 | 11) exchange RAM chips | ||
696 | 12) exchange the motherboard. | ||
697 | |||
698 | To compile this driver as a module, choose M here: the | ||
699 | module will be called apm. | ||
700 | |||
701 | endmenu | 664 | endmenu |
702 | 665 | ||
703 | source "net/Kconfig" | 666 | source "net/Kconfig" |
@@ -752,6 +715,8 @@ source "drivers/hwmon/Kconfig" | |||
752 | 715 | ||
753 | source "drivers/misc/Kconfig" | 716 | source "drivers/misc/Kconfig" |
754 | 717 | ||
718 | source "drivers/mfd/Kconfig" | ||
719 | |||
755 | source "drivers/media/Kconfig" | 720 | source "drivers/media/Kconfig" |
756 | 721 | ||
757 | source "drivers/video/Kconfig" | 722 | source "drivers/video/Kconfig" |
diff --git a/arch/arm/common/Kconfig b/arch/arm/common/Kconfig index 692af6b5e8ff..666ba393575b 100644 --- a/arch/arm/common/Kconfig +++ b/arch/arm/common/Kconfig | |||
@@ -1,6 +1,9 @@ | |||
1 | config ICST525 | 1 | config ICST525 |
2 | bool | 2 | bool |
3 | 3 | ||
4 | config ARM_GIC | ||
5 | bool | ||
6 | |||
4 | config ICST307 | 7 | config ICST307 |
5 | bool | 8 | bool |
6 | 9 | ||
diff --git a/arch/arm/common/Makefile b/arch/arm/common/Makefile index 11f20a43ee3a..a87886564b19 100644 --- a/arch/arm/common/Makefile +++ b/arch/arm/common/Makefile | |||
@@ -4,6 +4,7 @@ | |||
4 | 4 | ||
5 | obj-y += rtctime.o | 5 | obj-y += rtctime.o |
6 | obj-$(CONFIG_ARM_AMBA) += amba.o | 6 | obj-$(CONFIG_ARM_AMBA) += amba.o |
7 | obj-$(CONFIG_ARM_GIC) += gic.o | ||
7 | obj-$(CONFIG_ICST525) += icst525.o | 8 | obj-$(CONFIG_ICST525) += icst525.o |
8 | obj-$(CONFIG_ICST307) += icst307.o | 9 | obj-$(CONFIG_ICST307) += icst307.o |
9 | obj-$(CONFIG_SA1111) += sa1111.o | 10 | obj-$(CONFIG_SA1111) += sa1111.o |
diff --git a/arch/arm/common/gic.c b/arch/arm/common/gic.c new file mode 100644 index 000000000000..51dbf5489b6b --- /dev/null +++ b/arch/arm/common/gic.c | |||
@@ -0,0 +1,166 @@ | |||
1 | /* | ||
2 | * linux/arch/arm/common/gic.c | ||
3 | * | ||
4 | * Copyright (C) 2002 ARM Limited, All Rights Reserved. | ||
5 | * | ||
6 | * This program is free software; you can redistribute it and/or modify | ||
7 | * it under the terms of the GNU General Public License version 2 as | ||
8 | * published by the Free Software Foundation. | ||
9 | * | ||
10 | * Interrupt architecture for the GIC: | ||
11 | * | ||
12 | * o There is one Interrupt Distributor, which receives interrupts | ||
13 | * from system devices and sends them to the Interrupt Controllers. | ||
14 | * | ||
15 | * o There is one CPU Interface per CPU, which sends interrupts sent | ||
16 | * by the Distributor, and interrupts generated locally, to the | ||
17 | * associated CPU. | ||
18 | * | ||
19 | * Note that IRQs 0-31 are special - they are local to each CPU. | ||
20 | * As such, the enable set/clear, pending set/clear and active bit | ||
21 | * registers are banked per-cpu for these sources. | ||
22 | */ | ||
23 | #include <linux/init.h> | ||
24 | #include <linux/kernel.h> | ||
25 | #include <linux/list.h> | ||
26 | #include <linux/smp.h> | ||
27 | |||
28 | #include <asm/irq.h> | ||
29 | #include <asm/io.h> | ||
30 | #include <asm/mach/irq.h> | ||
31 | #include <asm/hardware/gic.h> | ||
32 | |||
33 | static void __iomem *gic_dist_base; | ||
34 | static void __iomem *gic_cpu_base; | ||
35 | |||
36 | /* | ||
37 | * Routines to acknowledge, disable and enable interrupts | ||
38 | * | ||
39 | * Linux assumes that when we're done with an interrupt we need to | ||
40 | * unmask it, in the same way we need to unmask an interrupt when | ||
41 | * we first enable it. | ||
42 | * | ||
43 | * The GIC has a separate notion of "end of interrupt" to re-enable | ||
44 | * an interrupt after handling, in order to support hardware | ||
45 | * prioritisation. | ||
46 | * | ||
47 | * We can make the GIC behave in the way that Linux expects by making | ||
48 | * our "acknowledge" routine disable the interrupt, then mark it as | ||
49 | * complete. | ||
50 | */ | ||
51 | static void gic_ack_irq(unsigned int irq) | ||
52 | { | ||
53 | u32 mask = 1 << (irq % 32); | ||
54 | writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4); | ||
55 | writel(irq, gic_cpu_base + GIC_CPU_EOI); | ||
56 | } | ||
57 | |||
58 | static void gic_mask_irq(unsigned int irq) | ||
59 | { | ||
60 | u32 mask = 1 << (irq % 32); | ||
61 | writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4); | ||
62 | } | ||
63 | |||
64 | static void gic_unmask_irq(unsigned int irq) | ||
65 | { | ||
66 | u32 mask = 1 << (irq % 32); | ||
67 | writel(mask, gic_dist_base + GIC_DIST_ENABLE_SET + (irq / 32) * 4); | ||
68 | } | ||
69 | |||
70 | static void gic_set_cpu(struct irqdesc *desc, unsigned int irq, unsigned int cpu) | ||
71 | { | ||
72 | void __iomem *reg = gic_dist_base + GIC_DIST_TARGET + (irq & ~3); | ||
73 | unsigned int shift = (irq % 4) * 8; | ||
74 | u32 val; | ||
75 | |||
76 | val = readl(reg) & ~(0xff << shift); | ||
77 | val |= 1 << (cpu + shift); | ||
78 | writel(val, reg); | ||
79 | } | ||
80 | |||
81 | static struct irqchip gic_chip = { | ||
82 | .ack = gic_ack_irq, | ||
83 | .mask = gic_mask_irq, | ||
84 | .unmask = gic_unmask_irq, | ||
85 | #ifdef CONFIG_SMP | ||
86 | .set_cpu = gic_set_cpu, | ||
87 | #endif | ||
88 | }; | ||
89 | |||
90 | void __init gic_dist_init(void __iomem *base) | ||
91 | { | ||
92 | unsigned int max_irq, i; | ||
93 | u32 cpumask = 1 << smp_processor_id(); | ||
94 | |||
95 | cpumask |= cpumask << 8; | ||
96 | cpumask |= cpumask << 16; | ||
97 | |||
98 | gic_dist_base = base; | ||
99 | |||
100 | writel(0, base + GIC_DIST_CTRL); | ||
101 | |||
102 | /* | ||
103 | * Find out how many interrupts are supported. | ||
104 | */ | ||
105 | max_irq = readl(base + GIC_DIST_CTR) & 0x1f; | ||
106 | max_irq = (max_irq + 1) * 32; | ||
107 | |||
108 | /* | ||
109 | * The GIC only supports up to 1020 interrupt sources. | ||
110 | * Limit this to either the architected maximum, or the | ||
111 | * platform maximum. | ||
112 | */ | ||
113 | if (max_irq > max(1020, NR_IRQS)) | ||
114 | max_irq = max(1020, NR_IRQS); | ||
115 | |||
116 | /* | ||
117 | * Set all global interrupts to be level triggered, active low. | ||
118 | */ | ||
119 | for (i = 32; i < max_irq; i += 16) | ||
120 | writel(0, base + GIC_DIST_CONFIG + i * 4 / 16); | ||
121 | |||
122 | /* | ||
123 | * Set all global interrupts to this CPU only. | ||
124 | */ | ||
125 | for (i = 32; i < max_irq; i += 4) | ||
126 | writel(cpumask, base + GIC_DIST_TARGET + i * 4 / 4); | ||
127 | |||
128 | /* | ||
129 | * Set priority on all interrupts. | ||
130 | */ | ||
131 | for (i = 0; i < max_irq; i += 4) | ||
132 | writel(0xa0a0a0a0, base + GIC_DIST_PRI + i * 4 / 4); | ||
133 | |||
134 | /* | ||
135 | * Disable all interrupts. | ||
136 | */ | ||
137 | for (i = 0; i < max_irq; i += 32) | ||
138 | writel(0xffffffff, base + GIC_DIST_ENABLE_CLEAR + i * 4 / 32); | ||
139 | |||
140 | /* | ||
141 | * Setup the Linux IRQ subsystem. | ||
142 | */ | ||
143 | for (i = 29; i < max_irq; i++) { | ||
144 | set_irq_chip(i, &gic_chip); | ||
145 | set_irq_handler(i, do_level_IRQ); | ||
146 | set_irq_flags(i, IRQF_VALID | IRQF_PROBE); | ||
147 | } | ||
148 | |||
149 | writel(1, base + GIC_DIST_CTRL); | ||
150 | } | ||
151 | |||
152 | void __cpuinit gic_cpu_init(void __iomem *base) | ||
153 | { | ||
154 | gic_cpu_base = base; | ||
155 | writel(0xf0, base + GIC_CPU_PRIMASK); | ||
156 | writel(1, base + GIC_CPU_CTRL); | ||
157 | } | ||
158 | |||
159 | #ifdef CONFIG_SMP | ||
160 | void gic_raise_softirq(cpumask_t cpumask, unsigned int irq) | ||
161 | { | ||
162 | unsigned long map = *cpus_addr(cpumask); | ||
163 | |||
164 | writel(map << 16 | irq, gic_dist_base + GIC_DIST_SOFTINT); | ||
165 | } | ||
166 | #endif | ||
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c index 5e435e42dacd..a94d75fef598 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c | |||
@@ -658,11 +658,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka, | |||
658 | /* | 658 | /* |
659 | * Block the signal if we were unsuccessful. | 659 | * Block the signal if we were unsuccessful. |
660 | */ | 660 | */ |
661 | if (ret != 0 || !(ka->sa.sa_flags & SA_NODEFER)) { | 661 | if (ret != 0) { |
662 | spin_lock_irq(&tsk->sighand->siglock); | 662 | spin_lock_irq(&tsk->sighand->siglock); |
663 | sigorsets(&tsk->blocked, &tsk->blocked, | 663 | sigorsets(&tsk->blocked, &tsk->blocked, |
664 | &ka->sa.sa_mask); | 664 | &ka->sa.sa_mask); |
665 | sigaddset(&tsk->blocked, sig); | 665 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
666 | sigaddset(&tsk->blocked, sig); | ||
666 | recalc_sigpending(); | 667 | recalc_sigpending(); |
667 | spin_unlock_irq(&tsk->sighand->siglock); | 668 | spin_unlock_irq(&tsk->sighand->siglock); |
668 | } | 669 | } |
diff --git a/arch/arm/mach-sa1100/assabet.c b/arch/arm/mach-sa1100/assabet.c index 4d4d303ee3a8..24687f511bf5 100644 --- a/arch/arm/mach-sa1100/assabet.c +++ b/arch/arm/mach-sa1100/assabet.c | |||
@@ -35,6 +35,7 @@ | |||
35 | #include <asm/mach/map.h> | 35 | #include <asm/mach/map.h> |
36 | #include <asm/mach/serial_sa1100.h> | 36 | #include <asm/mach/serial_sa1100.h> |
37 | #include <asm/arch/assabet.h> | 37 | #include <asm/arch/assabet.h> |
38 | #include <asm/arch/mcp.h> | ||
38 | 39 | ||
39 | #include "generic.h" | 40 | #include "generic.h" |
40 | 41 | ||
@@ -198,6 +199,11 @@ static struct irda_platform_data assabet_irda_data = { | |||
198 | .set_speed = assabet_irda_set_speed, | 199 | .set_speed = assabet_irda_set_speed, |
199 | }; | 200 | }; |
200 | 201 | ||
202 | static struct mcp_plat_data assabet_mcp_data = { | ||
203 | .mccr0 = MCCR0_ADM, | ||
204 | .sclk_rate = 11981000, | ||
205 | }; | ||
206 | |||
201 | static void __init assabet_init(void) | 207 | static void __init assabet_init(void) |
202 | { | 208 | { |
203 | /* | 209 | /* |
@@ -246,6 +252,7 @@ static void __init assabet_init(void) | |||
246 | sa11x0_set_flash_data(&assabet_flash_data, assabet_flash_resources, | 252 | sa11x0_set_flash_data(&assabet_flash_data, assabet_flash_resources, |
247 | ARRAY_SIZE(assabet_flash_resources)); | 253 | ARRAY_SIZE(assabet_flash_resources)); |
248 | sa11x0_set_irda_data(&assabet_irda_data); | 254 | sa11x0_set_irda_data(&assabet_irda_data); |
255 | sa11x0_set_mcp_data(&assabet_mcp_data); | ||
249 | } | 256 | } |
250 | 257 | ||
251 | /* | 258 | /* |
diff --git a/arch/arm/mach-sa1100/cerf.c b/arch/arm/mach-sa1100/cerf.c index 0aa918e24c31..9484be7dc671 100644 --- a/arch/arm/mach-sa1100/cerf.c +++ b/arch/arm/mach-sa1100/cerf.c | |||
@@ -29,6 +29,7 @@ | |||
29 | #include <asm/mach/serial_sa1100.h> | 29 | #include <asm/mach/serial_sa1100.h> |
30 | 30 | ||
31 | #include <asm/arch/cerf.h> | 31 | #include <asm/arch/cerf.h> |
32 | #include <asm/arch/mcp.h> | ||
32 | #include "generic.h" | 33 | #include "generic.h" |
33 | 34 | ||
34 | static struct resource cerfuart2_resources[] = { | 35 | static struct resource cerfuart2_resources[] = { |
@@ -116,10 +117,16 @@ static void __init cerf_map_io(void) | |||
116 | GPDR |= CERF_GPIO_CF_RESET; | 117 | GPDR |= CERF_GPIO_CF_RESET; |
117 | } | 118 | } |
118 | 119 | ||
120 | static struct mcp_plat_data cerf_mcp_data = { | ||
121 | .mccr0 = MCCR0_ADM, | ||
122 | .sclk_rate = 11981000, | ||
123 | }; | ||
124 | |||
119 | static void __init cerf_init(void) | 125 | static void __init cerf_init(void) |
120 | { | 126 | { |
121 | platform_add_devices(cerf_devices, ARRAY_SIZE(cerf_devices)); | 127 | platform_add_devices(cerf_devices, ARRAY_SIZE(cerf_devices)); |
122 | sa11x0_set_flash_data(&cerf_flash_data, &cerf_flash_resource, 1); | 128 | sa11x0_set_flash_data(&cerf_flash_data, &cerf_flash_resource, 1); |
129 | sa11x0_set_mcp_data(&cerf_mcp_data); | ||
123 | } | 130 | } |
124 | 131 | ||
125 | MACHINE_START(CERF, "Intrinsyc CerfBoard/CerfCube") | 132 | MACHINE_START(CERF, "Intrinsyc CerfBoard/CerfCube") |
diff --git a/arch/arm/mach-sa1100/generic.c b/arch/arm/mach-sa1100/generic.c index 95ae217be1bc..3f1e358455e5 100644 --- a/arch/arm/mach-sa1100/generic.c +++ b/arch/arm/mach-sa1100/generic.c | |||
@@ -221,6 +221,11 @@ static struct platform_device sa11x0mcp_device = { | |||
221 | .resource = sa11x0mcp_resources, | 221 | .resource = sa11x0mcp_resources, |
222 | }; | 222 | }; |
223 | 223 | ||
224 | void sa11x0_set_mcp_data(struct mcp_plat_data *data) | ||
225 | { | ||
226 | sa11x0mcp_device.dev.platform_data = data; | ||
227 | } | ||
228 | |||
224 | static struct resource sa11x0ssp_resources[] = { | 229 | static struct resource sa11x0ssp_resources[] = { |
225 | [0] = { | 230 | [0] = { |
226 | .start = 0x80070000, | 231 | .start = 0x80070000, |
diff --git a/arch/arm/mach-sa1100/generic.h b/arch/arm/mach-sa1100/generic.h index bfe41da9923e..279e3afa3c39 100644 --- a/arch/arm/mach-sa1100/generic.h +++ b/arch/arm/mach-sa1100/generic.h | |||
@@ -34,5 +34,8 @@ struct resource; | |||
34 | extern void sa11x0_set_flash_data(struct flash_platform_data *flash, | 34 | extern void sa11x0_set_flash_data(struct flash_platform_data *flash, |
35 | struct resource *res, int nr); | 35 | struct resource *res, int nr); |
36 | 36 | ||
37 | struct sa11x0_ssp_plat_ops; | ||
38 | extern void sa11x0_set_ssp_data(struct sa11x0_ssp_plat_ops *ops); | ||
39 | |||
37 | struct irda_platform_data; | 40 | struct irda_platform_data; |
38 | void sa11x0_set_irda_data(struct irda_platform_data *irda); | 41 | void sa11x0_set_irda_data(struct irda_platform_data *irda); |
diff --git a/arch/arm/mach-sa1100/lart.c b/arch/arm/mach-sa1100/lart.c index 870b488aeda4..ed6744d480af 100644 --- a/arch/arm/mach-sa1100/lart.c +++ b/arch/arm/mach-sa1100/lart.c | |||
@@ -13,12 +13,23 @@ | |||
13 | #include <asm/mach/arch.h> | 13 | #include <asm/mach/arch.h> |
14 | #include <asm/mach/map.h> | 14 | #include <asm/mach/map.h> |
15 | #include <asm/mach/serial_sa1100.h> | 15 | #include <asm/mach/serial_sa1100.h> |
16 | #include <asm/arch/mcp.h> | ||
16 | 17 | ||
17 | #include "generic.h" | 18 | #include "generic.h" |
18 | 19 | ||
19 | 20 | ||
20 | #warning "include/asm/arch-sa1100/ide.h needs fixing for lart" | 21 | #warning "include/asm/arch-sa1100/ide.h needs fixing for lart" |
21 | 22 | ||
23 | static struct mcp_plat_data lart_mcp_data = { | ||
24 | .mccr0 = MCCR0_ADM, | ||
25 | .sclk_rate = 11981000, | ||
26 | }; | ||
27 | |||
28 | static void __init lart_init(void) | ||
29 | { | ||
30 | sa11x0_set_mcp_data(&lart_mcp_data); | ||
31 | } | ||
32 | |||
22 | static struct map_desc lart_io_desc[] __initdata = { | 33 | static struct map_desc lart_io_desc[] __initdata = { |
23 | /* virtual physical length type */ | 34 | /* virtual physical length type */ |
24 | { 0xe8000000, 0x00000000, 0x00400000, MT_DEVICE }, /* main flash memory */ | 35 | { 0xe8000000, 0x00000000, 0x00400000, MT_DEVICE }, /* main flash memory */ |
@@ -47,5 +58,6 @@ MACHINE_START(LART, "LART") | |||
47 | .boot_params = 0xc0000100, | 58 | .boot_params = 0xc0000100, |
48 | .map_io = lart_map_io, | 59 | .map_io = lart_map_io, |
49 | .init_irq = sa1100_init_irq, | 60 | .init_irq = sa1100_init_irq, |
61 | .init_machine = lart_init, | ||
50 | .timer = &sa1100_timer, | 62 | .timer = &sa1100_timer, |
51 | MACHINE_END | 63 | MACHINE_END |
diff --git a/arch/arm/mach-sa1100/shannon.c b/arch/arm/mach-sa1100/shannon.c index 43a00359fcdd..7482288278d9 100644 --- a/arch/arm/mach-sa1100/shannon.c +++ b/arch/arm/mach-sa1100/shannon.c | |||
@@ -18,6 +18,7 @@ | |||
18 | #include <asm/mach/flash.h> | 18 | #include <asm/mach/flash.h> |
19 | #include <asm/mach/map.h> | 19 | #include <asm/mach/map.h> |
20 | #include <asm/mach/serial_sa1100.h> | 20 | #include <asm/mach/serial_sa1100.h> |
21 | #include <asm/arch/mcp.h> | ||
21 | #include <asm/arch/shannon.h> | 22 | #include <asm/arch/shannon.h> |
22 | 23 | ||
23 | #include "generic.h" | 24 | #include "generic.h" |
@@ -52,9 +53,15 @@ static struct resource shannon_flash_resource = { | |||
52 | .flags = IORESOURCE_MEM, | 53 | .flags = IORESOURCE_MEM, |
53 | }; | 54 | }; |
54 | 55 | ||
56 | static struct mcp_plat_data shannon_mcp_data = { | ||
57 | .mccr0 = MCCR0_ADM, | ||
58 | .sclk_rate = 11981000, | ||
59 | }; | ||
60 | |||
55 | static void __init shannon_init(void) | 61 | static void __init shannon_init(void) |
56 | { | 62 | { |
57 | sa11x0_set_flash_data(&shannon_flash_data, &shannon_flash_resource, 1); | 63 | sa11x0_set_flash_data(&shannon_flash_data, &shannon_flash_resource, 1); |
64 | sa11x0_set_mcp_data(&shannon_mcp_data); | ||
58 | } | 65 | } |
59 | 66 | ||
60 | static void __init shannon_map_io(void) | 67 | static void __init shannon_map_io(void) |
diff --git a/arch/arm/mach-sa1100/simpad.c b/arch/arm/mach-sa1100/simpad.c index 77978586b126..07f6d5fd7bb0 100644 --- a/arch/arm/mach-sa1100/simpad.c +++ b/arch/arm/mach-sa1100/simpad.c | |||
@@ -23,6 +23,7 @@ | |||
23 | #include <asm/mach/flash.h> | 23 | #include <asm/mach/flash.h> |
24 | #include <asm/mach/map.h> | 24 | #include <asm/mach/map.h> |
25 | #include <asm/mach/serial_sa1100.h> | 25 | #include <asm/mach/serial_sa1100.h> |
26 | #include <asm/arch/mcp.h> | ||
26 | #include <asm/arch/simpad.h> | 27 | #include <asm/arch/simpad.h> |
27 | 28 | ||
28 | #include <linux/serial_core.h> | 29 | #include <linux/serial_core.h> |
@@ -123,6 +124,11 @@ static struct resource simpad_flash_resources [] = { | |||
123 | } | 124 | } |
124 | }; | 125 | }; |
125 | 126 | ||
127 | static struct mcp_plat_data simpad_mcp_data = { | ||
128 | .mccr0 = MCCR0_ADM, | ||
129 | .sclk_rate = 11981000, | ||
130 | }; | ||
131 | |||
126 | 132 | ||
127 | 133 | ||
128 | static void __init simpad_map_io(void) | 134 | static void __init simpad_map_io(void) |
@@ -157,6 +163,7 @@ static void __init simpad_map_io(void) | |||
157 | 163 | ||
158 | sa11x0_set_flash_data(&simpad_flash_data, simpad_flash_resources, | 164 | sa11x0_set_flash_data(&simpad_flash_data, simpad_flash_resources, |
159 | ARRAY_SIZE(simpad_flash_resources)); | 165 | ARRAY_SIZE(simpad_flash_resources)); |
166 | sa11x0_set_mcp_data(&simpad_mcp_data); | ||
160 | } | 167 | } |
161 | 168 | ||
162 | static void simpad_power_off(void) | 169 | static void simpad_power_off(void) |
diff --git a/arch/arm26/kernel/signal.c b/arch/arm26/kernel/signal.c index 356d9809cc0b..ce2055bdc9ee 100644 --- a/arch/arm26/kernel/signal.c +++ b/arch/arm26/kernel/signal.c | |||
@@ -454,14 +454,13 @@ handle_signal(unsigned long sig, siginfo_t *info, sigset_t *oldset, | |||
454 | if (ka->sa.sa_flags & SA_ONESHOT) | 454 | if (ka->sa.sa_flags & SA_ONESHOT) |
455 | ka->sa.sa_handler = SIG_DFL; | 455 | ka->sa.sa_handler = SIG_DFL; |
456 | 456 | ||
457 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 457 | spin_lock_irq(&tsk->sighand->siglock); |
458 | spin_lock_irq(&tsk->sighand->siglock); | 458 | sigorsets(&tsk->blocked, &tsk->blocked, |
459 | sigorsets(&tsk->blocked, &tsk->blocked, | 459 | &ka->sa.sa_mask); |
460 | &ka->sa.sa_mask); | 460 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
461 | sigaddset(&tsk->blocked, sig); | 461 | sigaddset(&tsk->blocked, sig); |
462 | recalc_sigpending(); | 462 | recalc_sigpending(); |
463 | spin_unlock_irq(&tsk->sighand->siglock); | 463 | spin_unlock_irq(&tsk->sighand->siglock); |
464 | } | ||
465 | return; | 464 | return; |
466 | } | 465 | } |
467 | 466 | ||
diff --git a/arch/cris/arch-v10/kernel/signal.c b/arch/cris/arch-v10/kernel/signal.c index 85e0032e664f..693771961f85 100644 --- a/arch/cris/arch-v10/kernel/signal.c +++ b/arch/cris/arch-v10/kernel/signal.c | |||
@@ -517,13 +517,12 @@ handle_signal(int canrestart, unsigned long sig, | |||
517 | if (ka->sa.sa_flags & SA_ONESHOT) | 517 | if (ka->sa.sa_flags & SA_ONESHOT) |
518 | ka->sa.sa_handler = SIG_DFL; | 518 | ka->sa.sa_handler = SIG_DFL; |
519 | 519 | ||
520 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 520 | spin_lock_irq(¤t->sighand->siglock); |
521 | spin_lock_irq(¤t->sighand->siglock); | 521 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
522 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 522 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
523 | sigaddset(¤t->blocked,sig); | 523 | sigaddset(¤t->blocked,sig); |
524 | recalc_sigpending(); | 524 | recalc_sigpending(); |
525 | spin_unlock_irq(¤t->sighand->siglock); | 525 | spin_unlock_irq(¤t->sighand->siglock); |
526 | } | ||
527 | } | 526 | } |
528 | 527 | ||
529 | /* | 528 | /* |
diff --git a/arch/cris/arch-v32/kernel/signal.c b/arch/cris/arch-v32/kernel/signal.c index fb4c79d5b76b..0a3614dab887 100644 --- a/arch/cris/arch-v32/kernel/signal.c +++ b/arch/cris/arch-v32/kernel/signal.c | |||
@@ -568,13 +568,12 @@ handle_signal(int canrestart, unsigned long sig, | |||
568 | if (ka->sa.sa_flags & SA_ONESHOT) | 568 | if (ka->sa.sa_flags & SA_ONESHOT) |
569 | ka->sa.sa_handler = SIG_DFL; | 569 | ka->sa.sa_handler = SIG_DFL; |
570 | 570 | ||
571 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 571 | spin_lock_irq(¤t->sighand->siglock); |
572 | spin_lock_irq(¤t->sighand->siglock); | 572 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
573 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 573 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
574 | sigaddset(¤t->blocked,sig); | 574 | sigaddset(¤t->blocked,sig); |
575 | recalc_sigpending(); | 575 | recalc_sigpending(); |
576 | spin_unlock_irq(¤t->sighand->siglock); | 576 | spin_unlock_irq(¤t->sighand->siglock); |
577 | } | ||
578 | } | 577 | } |
579 | 578 | ||
580 | /* | 579 | /* |
diff --git a/arch/frv/kernel/signal.c b/arch/frv/kernel/signal.c index 36a2dffc8ebd..d4ccc0728dfe 100644 --- a/arch/frv/kernel/signal.c +++ b/arch/frv/kernel/signal.c | |||
@@ -506,13 +506,12 @@ static void handle_signal(unsigned long sig, siginfo_t *info, | |||
506 | else | 506 | else |
507 | setup_frame(sig, ka, oldset, regs); | 507 | setup_frame(sig, ka, oldset, regs); |
508 | 508 | ||
509 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 509 | spin_lock_irq(¤t->sighand->siglock); |
510 | spin_lock_irq(¤t->sighand->siglock); | 510 | sigorsets(¤t->blocked, ¤t->blocked, &ka->sa.sa_mask); |
511 | sigorsets(¤t->blocked, ¤t->blocked, &ka->sa.sa_mask); | 511 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
512 | sigaddset(¤t->blocked, sig); | 512 | sigaddset(¤t->blocked, sig); |
513 | recalc_sigpending(); | 513 | recalc_sigpending(); |
514 | spin_unlock_irq(¤t->sighand->siglock); | 514 | spin_unlock_irq(¤t->sighand->siglock); |
515 | } | ||
516 | } /* end handle_signal() */ | 515 | } /* end handle_signal() */ |
517 | 516 | ||
518 | /*****************************************************************************/ | 517 | /*****************************************************************************/ |
diff --git a/arch/h8300/kernel/signal.c b/arch/h8300/kernel/signal.c index 5aab87eae1f9..f13d5e82d4b9 100644 --- a/arch/h8300/kernel/signal.c +++ b/arch/h8300/kernel/signal.c | |||
@@ -488,13 +488,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, | |||
488 | else | 488 | else |
489 | setup_frame(sig, ka, oldset, regs); | 489 | setup_frame(sig, ka, oldset, regs); |
490 | 490 | ||
491 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 491 | spin_lock_irq(¤t->sighand->siglock); |
492 | spin_lock_irq(¤t->sighand->siglock); | 492 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
493 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 493 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
494 | sigaddset(¤t->blocked,sig); | 494 | sigaddset(¤t->blocked,sig); |
495 | recalc_sigpending(); | 495 | recalc_sigpending(); |
496 | spin_unlock_irq(¤t->sighand->siglock); | 496 | spin_unlock_irq(¤t->sighand->siglock); |
497 | } | ||
498 | } | 497 | } |
499 | 498 | ||
500 | /* | 499 | /* |
diff --git a/arch/i386/kernel/signal.c b/arch/i386/kernel/signal.c index 89ef7adc63a4..140e340569c6 100644 --- a/arch/i386/kernel/signal.c +++ b/arch/i386/kernel/signal.c | |||
@@ -577,10 +577,11 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, | |||
577 | else | 577 | else |
578 | ret = setup_frame(sig, ka, oldset, regs); | 578 | ret = setup_frame(sig, ka, oldset, regs); |
579 | 579 | ||
580 | if (ret && !(ka->sa.sa_flags & SA_NODEFER)) { | 580 | if (ret) { |
581 | spin_lock_irq(¤t->sighand->siglock); | 581 | spin_lock_irq(¤t->sighand->siglock); |
582 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 582 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
583 | sigaddset(¤t->blocked,sig); | 583 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
584 | sigaddset(¤t->blocked,sig); | ||
584 | recalc_sigpending(); | 585 | recalc_sigpending(); |
585 | spin_unlock_irq(¤t->sighand->siglock); | 586 | spin_unlock_irq(¤t->sighand->siglock); |
586 | } | 587 | } |
diff --git a/arch/ia64/kernel/signal.c b/arch/ia64/kernel/signal.c index b8a0a7d257a9..774f34b675cf 100644 --- a/arch/ia64/kernel/signal.c +++ b/arch/ia64/kernel/signal.c | |||
@@ -467,15 +467,12 @@ handle_signal (unsigned long sig, struct k_sigaction *ka, siginfo_t *info, sigse | |||
467 | if (!setup_frame(sig, ka, info, oldset, scr)) | 467 | if (!setup_frame(sig, ka, info, oldset, scr)) |
468 | return 0; | 468 | return 0; |
469 | 469 | ||
470 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 470 | spin_lock_irq(¤t->sighand->siglock); |
471 | spin_lock_irq(¤t->sighand->siglock); | 471 | sigorsets(¤t->blocked, ¤t->blocked, &ka->sa.sa_mask); |
472 | { | 472 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
473 | sigorsets(¤t->blocked, ¤t->blocked, &ka->sa.sa_mask); | 473 | sigaddset(¤t->blocked, sig); |
474 | sigaddset(¤t->blocked, sig); | 474 | recalc_sigpending(); |
475 | recalc_sigpending(); | 475 | spin_unlock_irq(¤t->sighand->siglock); |
476 | } | ||
477 | spin_unlock_irq(¤t->sighand->siglock); | ||
478 | } | ||
479 | return 1; | 476 | return 1; |
480 | } | 477 | } |
481 | 478 | ||
diff --git a/arch/m32r/kernel/signal.c b/arch/m32r/kernel/signal.c index 5aef7e406ef5..71763f7a1d19 100644 --- a/arch/m32r/kernel/signal.c +++ b/arch/m32r/kernel/signal.c | |||
@@ -341,13 +341,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka, siginfo_t *info, | |||
341 | /* Set up the stack frame */ | 341 | /* Set up the stack frame */ |
342 | setup_rt_frame(sig, ka, info, oldset, regs); | 342 | setup_rt_frame(sig, ka, info, oldset, regs); |
343 | 343 | ||
344 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 344 | spin_lock_irq(¤t->sighand->siglock); |
345 | spin_lock_irq(¤t->sighand->siglock); | 345 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
346 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 346 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
347 | sigaddset(¤t->blocked,sig); | 347 | sigaddset(¤t->blocked,sig); |
348 | recalc_sigpending(); | 348 | recalc_sigpending(); |
349 | spin_unlock_irq(¤t->sighand->siglock); | 349 | spin_unlock_irq(¤t->sighand->siglock); |
350 | } | ||
351 | } | 350 | } |
352 | 351 | ||
353 | /* | 352 | /* |
diff --git a/arch/m68knommu/kernel/signal.c b/arch/m68knommu/kernel/signal.c index 30dceb59a462..43a2726c0d0a 100644 --- a/arch/m68knommu/kernel/signal.c +++ b/arch/m68knommu/kernel/signal.c | |||
@@ -732,13 +732,12 @@ handle_signal(int sig, struct k_sigaction *ka, siginfo_t *info, | |||
732 | if (ka->sa.sa_flags & SA_ONESHOT) | 732 | if (ka->sa.sa_flags & SA_ONESHOT) |
733 | ka->sa.sa_handler = SIG_DFL; | 733 | ka->sa.sa_handler = SIG_DFL; |
734 | 734 | ||
735 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 735 | spin_lock_irq(¤t->sighand->siglock); |
736 | spin_lock_irq(¤t->sighand->siglock); | 736 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
737 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 737 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
738 | sigaddset(¤t->blocked,sig); | 738 | sigaddset(¤t->blocked,sig); |
739 | recalc_sigpending(); | 739 | recalc_sigpending(); |
740 | spin_unlock_irq(¤t->sighand->siglock); | 740 | spin_unlock_irq(¤t->sighand->siglock); |
741 | } | ||
742 | } | 741 | } |
743 | 742 | ||
744 | /* | 743 | /* |
diff --git a/arch/mips/kernel/irixsig.c b/arch/mips/kernel/irixsig.c index 40244782a8e5..4c114ae21793 100644 --- a/arch/mips/kernel/irixsig.c +++ b/arch/mips/kernel/irixsig.c | |||
@@ -155,13 +155,12 @@ static inline void handle_signal(unsigned long sig, siginfo_t *info, | |||
155 | else | 155 | else |
156 | setup_irix_frame(ka, regs, sig, oldset); | 156 | setup_irix_frame(ka, regs, sig, oldset); |
157 | 157 | ||
158 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 158 | spin_lock_irq(¤t->sighand->siglock); |
159 | spin_lock_irq(¤t->sighand->siglock); | 159 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
160 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 160 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
161 | sigaddset(¤t->blocked,sig); | 161 | sigaddset(¤t->blocked,sig); |
162 | recalc_sigpending(); | 162 | recalc_sigpending(); |
163 | spin_unlock_irq(¤t->sighand->siglock); | 163 | spin_unlock_irq(¤t->sighand->siglock); |
164 | } | ||
165 | } | 164 | } |
166 | 165 | ||
167 | asmlinkage int do_irix_signal(sigset_t *oldset, struct pt_regs *regs) | 166 | asmlinkage int do_irix_signal(sigset_t *oldset, struct pt_regs *regs) |
diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c index 65ee15396ffd..0209c1dd1429 100644 --- a/arch/mips/kernel/signal.c +++ b/arch/mips/kernel/signal.c | |||
@@ -425,13 +425,12 @@ static inline void handle_signal(unsigned long sig, siginfo_t *info, | |||
425 | setup_frame(ka, regs, sig, oldset); | 425 | setup_frame(ka, regs, sig, oldset); |
426 | #endif | 426 | #endif |
427 | 427 | ||
428 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 428 | spin_lock_irq(¤t->sighand->siglock); |
429 | spin_lock_irq(¤t->sighand->siglock); | 429 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
430 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 430 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
431 | sigaddset(¤t->blocked,sig); | 431 | sigaddset(¤t->blocked,sig); |
432 | recalc_sigpending(); | 432 | recalc_sigpending(); |
433 | spin_unlock_irq(¤t->sighand->siglock); | 433 | spin_unlock_irq(¤t->sighand->siglock); |
434 | } | ||
435 | } | 434 | } |
436 | 435 | ||
437 | extern int do_signal32(sigset_t *oldset, struct pt_regs *regs); | 436 | extern int do_signal32(sigset_t *oldset, struct pt_regs *regs); |
diff --git a/arch/mips/kernel/signal32.c b/arch/mips/kernel/signal32.c index c1a69cf232f9..f6875f023a29 100644 --- a/arch/mips/kernel/signal32.c +++ b/arch/mips/kernel/signal32.c | |||
@@ -751,13 +751,12 @@ static inline void handle_signal(unsigned long sig, siginfo_t *info, | |||
751 | else | 751 | else |
752 | setup_frame(ka, regs, sig, oldset); | 752 | setup_frame(ka, regs, sig, oldset); |
753 | 753 | ||
754 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 754 | spin_lock_irq(¤t->sighand->siglock); |
755 | spin_lock_irq(¤t->sighand->siglock); | 755 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
756 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 756 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
757 | sigaddset(¤t->blocked,sig); | 757 | sigaddset(¤t->blocked,sig); |
758 | recalc_sigpending(); | 758 | recalc_sigpending(); |
759 | spin_unlock_irq(¤t->sighand->siglock); | 759 | spin_unlock_irq(¤t->sighand->siglock); |
760 | } | ||
761 | } | 760 | } |
762 | 761 | ||
763 | int do_signal32(sigset_t *oldset, struct pt_regs *regs) | 762 | int do_signal32(sigset_t *oldset, struct pt_regs *regs) |
diff --git a/arch/parisc/kernel/signal.c b/arch/parisc/kernel/signal.c index 9421bb98ea63..55d71c15e1f7 100644 --- a/arch/parisc/kernel/signal.c +++ b/arch/parisc/kernel/signal.c | |||
@@ -517,13 +517,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, | |||
517 | if (!setup_rt_frame(sig, ka, info, oldset, regs, in_syscall)) | 517 | if (!setup_rt_frame(sig, ka, info, oldset, regs, in_syscall)) |
518 | return 0; | 518 | return 0; |
519 | 519 | ||
520 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 520 | spin_lock_irq(¤t->sighand->siglock); |
521 | spin_lock_irq(¤t->sighand->siglock); | 521 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
522 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 522 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
523 | sigaddset(¤t->blocked,sig); | 523 | sigaddset(¤t->blocked,sig); |
524 | recalc_sigpending(); | 524 | recalc_sigpending(); |
525 | spin_unlock_irq(¤t->sighand->siglock); | 525 | spin_unlock_irq(¤t->sighand->siglock); |
526 | } | ||
527 | return 1; | 526 | return 1; |
528 | } | 527 | } |
529 | 528 | ||
diff --git a/arch/ppc/kernel/signal.c b/arch/ppc/kernel/signal.c index 8aaeb6f4e750..2244bf91e593 100644 --- a/arch/ppc/kernel/signal.c +++ b/arch/ppc/kernel/signal.c | |||
@@ -759,13 +759,12 @@ int do_signal(sigset_t *oldset, struct pt_regs *regs) | |||
759 | else | 759 | else |
760 | handle_signal(signr, &ka, &info, oldset, regs, newsp); | 760 | handle_signal(signr, &ka, &info, oldset, regs, newsp); |
761 | 761 | ||
762 | if (!(ka.sa.sa_flags & SA_NODEFER)) { | 762 | spin_lock_irq(¤t->sighand->siglock); |
763 | spin_lock_irq(¤t->sighand->siglock); | 763 | sigorsets(¤t->blocked,¤t->blocked,&ka.sa.sa_mask); |
764 | sigorsets(¤t->blocked,¤t->blocked,&ka.sa.sa_mask); | 764 | if (!(ka.sa.sa_flags & SA_NODEFER)) |
765 | sigaddset(¤t->blocked, signr); | 765 | sigaddset(¤t->blocked, signr); |
766 | recalc_sigpending(); | 766 | recalc_sigpending(); |
767 | spin_unlock_irq(¤t->sighand->siglock); | 767 | spin_unlock_irq(¤t->sighand->siglock); |
768 | } | ||
769 | 768 | ||
770 | return 1; | 769 | return 1; |
771 | } | 770 | } |
diff --git a/arch/ppc64/kernel/signal.c b/arch/ppc64/kernel/signal.c index bf782276984c..49a79a55c32d 100644 --- a/arch/ppc64/kernel/signal.c +++ b/arch/ppc64/kernel/signal.c | |||
@@ -481,10 +481,11 @@ static int handle_signal(unsigned long sig, struct k_sigaction *ka, | |||
481 | /* Set up Signal Frame */ | 481 | /* Set up Signal Frame */ |
482 | ret = setup_rt_frame(sig, ka, info, oldset, regs); | 482 | ret = setup_rt_frame(sig, ka, info, oldset, regs); |
483 | 483 | ||
484 | if (ret && !(ka->sa.sa_flags & SA_NODEFER)) { | 484 | if (ret) { |
485 | spin_lock_irq(¤t->sighand->siglock); | 485 | spin_lock_irq(¤t->sighand->siglock); |
486 | sigorsets(¤t->blocked, ¤t->blocked, &ka->sa.sa_mask); | 486 | sigorsets(¤t->blocked, ¤t->blocked, &ka->sa.sa_mask); |
487 | sigaddset(¤t->blocked,sig); | 487 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
488 | sigaddset(¤t->blocked,sig); | ||
488 | recalc_sigpending(); | 489 | recalc_sigpending(); |
489 | spin_unlock_irq(¤t->sighand->siglock); | 490 | spin_unlock_irq(¤t->sighand->siglock); |
490 | } | 491 | } |
diff --git a/arch/ppc64/kernel/signal32.c b/arch/ppc64/kernel/signal32.c index 3c2fa5c284c0..46f4d6cc7fc9 100644 --- a/arch/ppc64/kernel/signal32.c +++ b/arch/ppc64/kernel/signal32.c | |||
@@ -976,11 +976,12 @@ int do_signal32(sigset_t *oldset, struct pt_regs *regs) | |||
976 | else | 976 | else |
977 | ret = handle_signal32(signr, &ka, &info, oldset, regs, newsp); | 977 | ret = handle_signal32(signr, &ka, &info, oldset, regs, newsp); |
978 | 978 | ||
979 | if (ret && !(ka.sa.sa_flags & SA_NODEFER)) { | 979 | if (ret) { |
980 | spin_lock_irq(¤t->sighand->siglock); | 980 | spin_lock_irq(¤t->sighand->siglock); |
981 | sigorsets(¤t->blocked, ¤t->blocked, | 981 | sigorsets(¤t->blocked, ¤t->blocked, |
982 | &ka.sa.sa_mask); | 982 | &ka.sa.sa_mask); |
983 | sigaddset(¤t->blocked, signr); | 983 | if (!(ka.sa.sa_flags & SA_NODEFER)) |
984 | sigaddset(¤t->blocked, signr); | ||
984 | recalc_sigpending(); | 985 | recalc_sigpending(); |
985 | spin_unlock_irq(¤t->sighand->siglock); | 986 | spin_unlock_irq(¤t->sighand->siglock); |
986 | } | 987 | } |
diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c index d05d65ac9694..7358cdb8441f 100644 --- a/arch/s390/kernel/compat_signal.c +++ b/arch/s390/kernel/compat_signal.c | |||
@@ -637,12 +637,11 @@ handle_signal32(unsigned long sig, struct k_sigaction *ka, | |||
637 | else | 637 | else |
638 | setup_frame32(sig, ka, oldset, regs); | 638 | setup_frame32(sig, ka, oldset, regs); |
639 | 639 | ||
640 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 640 | spin_lock_irq(&current->sighand->siglock); |
641 | spin_lock_irq(&current->sighand->siglock); | 641 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); |
642 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); | 642 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
643 | sigaddset(&current->blocked,sig); | 643 | sigaddset(&current->blocked,sig); |
644 | recalc_sigpending(); | 644 | recalc_sigpending(); |
645 | spin_unlock_irq(&current->sighand->siglock); | 645 | spin_unlock_irq(&current->sighand->siglock); |
646 | } | ||
647 | } | 646 | } |
648 | 647 | ||
diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c index 610c1d03e975..6a3f5b7473a9 100644 --- a/arch/s390/kernel/signal.c +++ b/arch/s390/kernel/signal.c | |||
@@ -429,13 +429,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka, | |||
429 | else | 429 | else |
430 | setup_frame(sig, ka, oldset, regs); | 430 | setup_frame(sig, ka, oldset, regs); |
431 | 431 | ||
432 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 432 | spin_lock_irq(&current->sighand->siglock); |
433 | spin_lock_irq(&current->sighand->siglock); | 433 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); |
434 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); | 434 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
435 | sigaddset(&current->blocked,sig); | 435 | sigaddset(&current->blocked,sig); |
436 | recalc_sigpending(); | 436 | recalc_sigpending(); |
437 | spin_unlock_irq(&current->sighand->siglock); | 437 | spin_unlock_irq(&current->sighand->siglock); |
438 | } | ||
439 | } | 438 | } |
440 | 439 | ||
441 | /* | 440 | /* |
diff --git a/arch/sh/kernel/signal.c b/arch/sh/kernel/signal.c index 8022243f0178..b475c4d2405f 100644 --- a/arch/sh/kernel/signal.c +++ b/arch/sh/kernel/signal.c | |||
@@ -546,13 +546,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka, siginfo_t *info, | |||
546 | if (ka->sa.sa_flags & SA_ONESHOT) | 546 | if (ka->sa.sa_flags & SA_ONESHOT) |
547 | ka->sa.sa_handler = SIG_DFL; | 547 | ka->sa.sa_handler = SIG_DFL; |
548 | 548 | ||
549 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 549 | spin_lock_irq(&current->sighand->siglock); |
550 | spin_lock_irq(&current->sighand->siglock); | 550 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); |
551 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); | 551 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
552 | sigaddset(&current->blocked,sig); | 552 | sigaddset(&current->blocked,sig); |
553 | recalc_sigpending(); | 553 | recalc_sigpending(); |
554 | spin_unlock_irq(&current->sighand->siglock); | 554 | spin_unlock_irq(&current->sighand->siglock); |
555 | } | ||
556 | } | 555 | } |
557 | 556 | ||
558 | /* | 557 | /* |
diff --git a/arch/sh64/kernel/signal.c b/arch/sh64/kernel/signal.c index c6a14a87c59b..3ea8929e483b 100644 --- a/arch/sh64/kernel/signal.c +++ b/arch/sh64/kernel/signal.c | |||
@@ -664,13 +664,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, | |||
664 | else | 664 | else |
665 | setup_frame(sig, ka, oldset, regs); | 665 | setup_frame(sig, ka, oldset, regs); |
666 | 666 | ||
667 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 667 | spin_lock_irq(&current->sighand->siglock); |
668 | spin_lock_irq(&current->sighand->siglock); | 668 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); |
669 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); | 669 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
670 | sigaddset(&current->blocked,sig); | 670 | sigaddset(&current->blocked,sig); |
671 | recalc_sigpending(); | 671 | recalc_sigpending(); |
672 | spin_unlock_irq(&current->sighand->siglock); | 672 | spin_unlock_irq(&current->sighand->siglock); |
673 | } | ||
674 | } | 673 | } |
675 | 674 | ||
676 | /* | 675 | /* |
diff --git a/arch/sparc/kernel/setup.c b/arch/sparc/kernel/setup.c index 55352ed85e8a..53c192a4982f 100644 --- a/arch/sparc/kernel/setup.c +++ b/arch/sparc/kernel/setup.c | |||
@@ -32,7 +32,6 @@ | |||
32 | #include <linux/spinlock.h> | 32 | #include <linux/spinlock.h> |
33 | #include <linux/root_dev.h> | 33 | #include <linux/root_dev.h> |
34 | 34 | ||
35 | #include <asm/segment.h> | ||
36 | #include <asm/system.h> | 35 | #include <asm/system.h> |
37 | #include <asm/io.h> | 36 | #include <asm/io.h> |
38 | #include <asm/processor.h> | 37 | #include <asm/processor.h> |
diff --git a/arch/sparc/kernel/signal.c b/arch/sparc/kernel/signal.c index 011ff35057a5..5f34d7dc2b89 100644 --- a/arch/sparc/kernel/signal.c +++ b/arch/sparc/kernel/signal.c | |||
@@ -1034,13 +1034,12 @@ handle_signal(unsigned long signr, struct k_sigaction *ka, | |||
1034 | else | 1034 | else |
1035 | setup_frame(&ka->sa, regs, signr, oldset, info); | 1035 | setup_frame(&ka->sa, regs, signr, oldset, info); |
1036 | } | 1036 | } |
1037 | if (!(ka->sa.sa_flags & SA_NOMASK)) { | 1037 | spin_lock_irq(&current->sighand->siglock); |
1038 | spin_lock_irq(&current->sighand->siglock); | 1038 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); |
1039 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); | 1039 | if (!(ka->sa.sa_flags & SA_NOMASK)) |
1040 | sigaddset(&current->blocked, signr); | 1040 | sigaddset(&current->blocked, signr); |
1041 | recalc_sigpending(); | 1041 | recalc_sigpending(); |
1042 | spin_unlock_irq(&current->sighand->siglock); | 1042 | spin_unlock_irq(&current->sighand->siglock); |
1043 | } | ||
1044 | } | 1043 | } |
1045 | 1044 | ||
1046 | static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs, | 1045 | static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs, |
diff --git a/arch/sparc/kernel/tick14.c b/arch/sparc/kernel/tick14.c index fd8005a3e6bd..591547af4c65 100644 --- a/arch/sparc/kernel/tick14.c +++ b/arch/sparc/kernel/tick14.c | |||
@@ -19,7 +19,6 @@ | |||
19 | #include <linux/interrupt.h> | 19 | #include <linux/interrupt.h> |
20 | 20 | ||
21 | #include <asm/oplib.h> | 21 | #include <asm/oplib.h> |
22 | #include <asm/segment.h> | ||
23 | #include <asm/timer.h> | 22 | #include <asm/timer.h> |
24 | #include <asm/mostek.h> | 23 | #include <asm/mostek.h> |
25 | #include <asm/system.h> | 24 | #include <asm/system.h> |
diff --git a/arch/sparc/kernel/time.c b/arch/sparc/kernel/time.c index 6486cbf2efe9..3b759aefc170 100644 --- a/arch/sparc/kernel/time.c +++ b/arch/sparc/kernel/time.c | |||
@@ -32,7 +32,6 @@ | |||
32 | #include <linux/profile.h> | 32 | #include <linux/profile.h> |
33 | 33 | ||
34 | #include <asm/oplib.h> | 34 | #include <asm/oplib.h> |
35 | #include <asm/segment.h> | ||
36 | #include <asm/timer.h> | 35 | #include <asm/timer.h> |
37 | #include <asm/mostek.h> | 36 | #include <asm/mostek.h> |
38 | #include <asm/system.h> | 37 | #include <asm/system.h> |
diff --git a/arch/sparc/mm/fault.c b/arch/sparc/mm/fault.c index 37f4107bae66..2bbd53f3cafb 100644 --- a/arch/sparc/mm/fault.c +++ b/arch/sparc/mm/fault.c | |||
@@ -23,7 +23,6 @@ | |||
23 | #include <linux/module.h> | 23 | #include <linux/module.h> |
24 | 24 | ||
25 | #include <asm/system.h> | 25 | #include <asm/system.h> |
26 | #include <asm/segment.h> | ||
27 | #include <asm/page.h> | 26 | #include <asm/page.h> |
28 | #include <asm/pgtable.h> | 27 | #include <asm/pgtable.h> |
29 | #include <asm/memreg.h> | 28 | #include <asm/memreg.h> |
diff --git a/arch/sparc/mm/init.c b/arch/sparc/mm/init.c index ec2e05028a10..c03babaa0498 100644 --- a/arch/sparc/mm/init.c +++ b/arch/sparc/mm/init.c | |||
@@ -25,7 +25,6 @@ | |||
25 | #include <linux/bootmem.h> | 25 | #include <linux/bootmem.h> |
26 | 26 | ||
27 | #include <asm/system.h> | 27 | #include <asm/system.h> |
28 | #include <asm/segment.h> | ||
29 | #include <asm/vac-ops.h> | 28 | #include <asm/vac-ops.h> |
30 | #include <asm/page.h> | 29 | #include <asm/page.h> |
31 | #include <asm/pgtable.h> | 30 | #include <asm/pgtable.h> |
diff --git a/arch/sparc64/kernel/entry.S b/arch/sparc64/kernel/entry.S index 88332f00094a..cecdc0a7521f 100644 --- a/arch/sparc64/kernel/entry.S +++ b/arch/sparc64/kernel/entry.S | |||
@@ -21,6 +21,7 @@ | |||
21 | #include <asm/visasm.h> | 21 | #include <asm/visasm.h> |
22 | #include <asm/estate.h> | 22 | #include <asm/estate.h> |
23 | #include <asm/auxio.h> | 23 | #include <asm/auxio.h> |
24 | #include <asm/sfafsr.h> | ||
24 | 25 | ||
25 | #define curptr g6 | 26 | #define curptr g6 |
26 | 27 | ||
@@ -690,14 +691,159 @@ netbsd_syscall: | |||
690 | retl | 691 | retl |
691 | nop | 692 | nop |
692 | 693 | ||
693 | /* These next few routines must be sure to clear the | 694 | /* We need to carefully read the error status, ACK |
694 | * SFSR FaultValid bit so that the fast tlb data protection | 695 | * the errors, prevent recursive traps, and pass the |
695 | * handler does not flush the wrong context and lock up the | 696 | * information on to C code for logging. |
696 | * box. | 697 | * |
698 | * We pass the AFAR in as-is, and we encode the status | ||
699 | * information as described in asm-sparc64/sfafsr.h | ||
700 | */ | ||
701 | .globl __spitfire_access_error | ||
702 | __spitfire_access_error: | ||
703 | /* Disable ESTATE error reporting so that we do not | ||
704 | * take recursive traps and RED state the processor. | ||
705 | */ | ||
706 | stxa %g0, [%g0] ASI_ESTATE_ERROR_EN | ||
707 | membar #Sync | ||
708 | |||
709 | mov UDBE_UE, %g1 | ||
710 | ldxa [%g0] ASI_AFSR, %g4 ! Get AFSR | ||
711 | |||
712 | /* __spitfire_cee_trap branches here with AFSR in %g4 and | ||
713 | * UDBE_CE in %g1. It only clears ESTATE_ERR_CE in the | ||
714 | * ESTATE Error Enable register. | ||
715 | */ | ||
716 | __spitfire_cee_trap_continue: | ||
717 | ldxa [%g0] ASI_AFAR, %g5 ! Get AFAR | ||
718 | |||
719 | rdpr %tt, %g3 | ||
720 | and %g3, 0x1ff, %g3 ! Paranoia | ||
721 | sllx %g3, SFSTAT_TRAP_TYPE_SHIFT, %g3 | ||
722 | or %g4, %g3, %g4 | ||
723 | rdpr %tl, %g3 | ||
724 | cmp %g3, 1 | ||
725 | mov 1, %g3 | ||
726 | bleu %xcc, 1f | ||
727 | sllx %g3, SFSTAT_TL_GT_ONE_SHIFT, %g3 | ||
728 | |||
729 | or %g4, %g3, %g4 | ||
730 | |||
731 | /* Read in the UDB error register state, clearing the | ||
732 | * sticky error bits as-needed. We only clear them if | ||
733 | * the UE bit is set. Likewise, __spitfire_cee_trap | ||
734 | * below will only do so if the CE bit is set. | ||
735 | * | ||
736 | * NOTE: UltraSparc-I/II have high and low UDB error | ||
737 | * registers, corresponding to the two UDB units | ||
738 | * present on those chips. UltraSparc-IIi only | ||
739 | * has a single UDB, called "SDB" in the manual. | ||
740 | * For IIi the upper UDB register always reads | ||
741 | * as zero so for our purposes things will just | ||
742 | * work with the checks below. | ||
697 | */ | 743 | */ |
698 | .globl __do_data_access_exception | 744 | 1: ldxa [%g0] ASI_UDBH_ERROR_R, %g3 |
699 | .globl __do_data_access_exception_tl1 | 745 | and %g3, 0x3ff, %g7 ! Paranoia |
700 | __do_data_access_exception_tl1: | 746 | sllx %g7, SFSTAT_UDBH_SHIFT, %g7 |
747 | or %g4, %g7, %g4 | ||
748 | andcc %g3, %g1, %g3 ! UDBE_UE or UDBE_CE | ||
749 | be,pn %xcc, 1f | ||
750 | nop | ||
751 | stxa %g3, [%g0] ASI_UDB_ERROR_W | ||
752 | membar #Sync | ||
753 | |||
754 | 1: mov 0x18, %g3 | ||
755 | ldxa [%g3] ASI_UDBL_ERROR_R, %g3 | ||
756 | and %g3, 0x3ff, %g7 ! Paranoia | ||
757 | sllx %g7, SFSTAT_UDBL_SHIFT, %g7 | ||
758 | or %g4, %g7, %g4 | ||
759 | andcc %g3, %g1, %g3 ! UDBE_UE or UDBE_CE | ||
760 | be,pn %xcc, 1f | ||
761 | nop | ||
762 | mov 0x18, %g7 | ||
763 | stxa %g3, [%g7] ASI_UDB_ERROR_W | ||
764 | membar #Sync | ||
765 | |||
766 | 1: /* Ok, now that we've latched the error state, | ||
767 | * clear the sticky bits in the AFSR. | ||
768 | */ | ||
769 | stxa %g4, [%g0] ASI_AFSR | ||
770 | membar #Sync | ||
771 | |||
772 | rdpr %tl, %g2 | ||
773 | cmp %g2, 1 | ||
774 | rdpr %pil, %g2 | ||
775 | bleu,pt %xcc, 1f | ||
776 | wrpr %g0, 15, %pil | ||
777 | |||
778 | ba,pt %xcc, etraptl1 | ||
779 | rd %pc, %g7 | ||
780 | |||
781 | ba,pt %xcc, 2f | ||
782 | nop | ||
783 | |||
784 | 1: ba,pt %xcc, etrap_irq | ||
785 | rd %pc, %g7 | ||
786 | |||
787 | 2: mov %l4, %o1 | ||
788 | mov %l5, %o2 | ||
789 | call spitfire_access_error | ||
790 | add %sp, PTREGS_OFF, %o0 | ||
791 | ba,pt %xcc, rtrap | ||
792 | clr %l6 | ||
793 | |||
794 | /* This is the trap handler entry point for ECC correctable | ||
795 | * errors. They are corrected, but we listen for the trap | ||
796 | * so that the event can be logged. | ||
797 | * | ||
798 | * Disrupting errors are either: | ||
799 | * 1) single-bit ECC errors during UDB reads to system | ||
800 | * memory | ||
801 | * 2) data parity errors during write-back events | ||
802 | * | ||
803 | * As far as I can make out from the manual, the CEE trap | ||
804 | * is only for correctable errors during memory read | ||
805 | * accesses by the front-end of the processor. | ||
806 | * | ||
807 | * The code below is only for trap level 1 CEE events, | ||
808 | * as it is the only situation where we can safely record | ||
809 | * and log. For trap level >1 we just clear the CE bit | ||
810 | * in the AFSR and return. | ||
811 | * | ||
812 | * This is just like __spitfire_access_error above, but it | ||
813 | * specifically handles correctable errors. If an | ||
814 | * uncorrectable error is indicated in the AFSR we | ||
815 | * will branch directly above to __spitfire_access_error | ||
816 | * to handle it instead. Uncorrectable therefore takes | ||
817 | * priority over correctable, and the error logging | ||
818 | * C code will notice this case by inspecting the | ||
819 | * trap type. | ||
820 | */ | ||
821 | .globl __spitfire_cee_trap | ||
822 | __spitfire_cee_trap: | ||
823 | ldxa [%g0] ASI_AFSR, %g4 ! Get AFSR | ||
824 | mov 1, %g3 | ||
825 | sllx %g3, SFAFSR_UE_SHIFT, %g3 | ||
826 | andcc %g4, %g3, %g0 ! Check for UE | ||
827 | bne,pn %xcc, __spitfire_access_error | ||
828 | nop | ||
829 | |||
830 | /* Ok, in this case we only have a correctable error. | ||
831 | * Indicate we only wish to capture that state in register | ||
832 | * %g1, and we only disable CE error reporting unlike UE | ||
833 | * handling which disables all errors. | ||
834 | */ | ||
835 | ldxa [%g0] ASI_ESTATE_ERROR_EN, %g3 | ||
836 | andn %g3, ESTATE_ERR_CE, %g3 | ||
837 | stxa %g3, [%g0] ASI_ESTATE_ERROR_EN | ||
838 | membar #Sync | ||
839 | |||
840 | /* Preserve AFSR in %g4, indicate UDB state to capture in %g1 */ | ||
841 | ba,pt %xcc, __spitfire_cee_trap_continue | ||
842 | mov UDBE_CE, %g1 | ||
843 | |||
844 | .globl __spitfire_data_access_exception | ||
845 | .globl __spitfire_data_access_exception_tl1 | ||
846 | __spitfire_data_access_exception_tl1: | ||
701 | rdpr %pstate, %g4 | 847 | rdpr %pstate, %g4 |
702 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate | 848 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate |
703 | mov TLB_SFSR, %g3 | 849 | mov TLB_SFSR, %g3 |
@@ -706,9 +852,25 @@ __do_data_access_exception_tl1: | |||
706 | ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR | 852 | ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR |
707 | stxa %g0, [%g3] ASI_DMMU ! Clear SFSR.FaultValid bit | 853 | stxa %g0, [%g3] ASI_DMMU ! Clear SFSR.FaultValid bit |
708 | membar #Sync | 854 | membar #Sync |
855 | rdpr %tt, %g3 | ||
856 | cmp %g3, 0x80 ! first win spill/fill trap | ||
857 | blu,pn %xcc, 1f | ||
858 | cmp %g3, 0xff ! last win spill/fill trap | ||
859 | bgu,pn %xcc, 1f | ||
860 | nop | ||
709 | ba,pt %xcc, winfix_dax | 861 | ba,pt %xcc, winfix_dax |
710 | rdpr %tpc, %g3 | 862 | rdpr %tpc, %g3 |
711 | __do_data_access_exception: | 863 | 1: sethi %hi(109f), %g7 |
864 | ba,pt %xcc, etraptl1 | ||
865 | 109: or %g7, %lo(109b), %g7 | ||
866 | mov %l4, %o1 | ||
867 | mov %l5, %o2 | ||
868 | call spitfire_data_access_exception_tl1 | ||
869 | add %sp, PTREGS_OFF, %o0 | ||
870 | ba,pt %xcc, rtrap | ||
871 | clr %l6 | ||
872 | |||
873 | __spitfire_data_access_exception: | ||
712 | rdpr %pstate, %g4 | 874 | rdpr %pstate, %g4 |
713 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate | 875 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate |
714 | mov TLB_SFSR, %g3 | 876 | mov TLB_SFSR, %g3 |
@@ -722,20 +884,19 @@ __do_data_access_exception: | |||
722 | 109: or %g7, %lo(109b), %g7 | 884 | 109: or %g7, %lo(109b), %g7 |
723 | mov %l4, %o1 | 885 | mov %l4, %o1 |
724 | mov %l5, %o2 | 886 | mov %l5, %o2 |
725 | call data_access_exception | 887 | call spitfire_data_access_exception |
726 | add %sp, PTREGS_OFF, %o0 | 888 | add %sp, PTREGS_OFF, %o0 |
727 | ba,pt %xcc, rtrap | 889 | ba,pt %xcc, rtrap |
728 | clr %l6 | 890 | clr %l6 |
729 | 891 | ||
730 | .globl __do_instruction_access_exception | 892 | .globl __spitfire_insn_access_exception |
731 | .globl __do_instruction_access_exception_tl1 | 893 | .globl __spitfire_insn_access_exception_tl1 |
732 | __do_instruction_access_exception_tl1: | 894 | __spitfire_insn_access_exception_tl1: |
733 | rdpr %pstate, %g4 | 895 | rdpr %pstate, %g4 |
734 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate | 896 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate |
735 | mov TLB_SFSR, %g3 | 897 | mov TLB_SFSR, %g3 |
736 | mov DMMU_SFAR, %g5 | 898 | ldxa [%g3] ASI_IMMU, %g4 ! Get SFSR |
737 | ldxa [%g3] ASI_DMMU, %g4 ! Get SFSR | 899 | rdpr %tpc, %g5 ! IMMU has no SFAR, use TPC |
738 | ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR | ||
739 | stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit | 900 | stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit |
740 | membar #Sync | 901 | membar #Sync |
741 | sethi %hi(109f), %g7 | 902 | sethi %hi(109f), %g7 |
@@ -743,18 +904,17 @@ __do_instruction_access_exception_tl1: | |||
743 | 109: or %g7, %lo(109b), %g7 | 904 | 109: or %g7, %lo(109b), %g7 |
744 | mov %l4, %o1 | 905 | mov %l4, %o1 |
745 | mov %l5, %o2 | 906 | mov %l5, %o2 |
746 | call instruction_access_exception_tl1 | 907 | call spitfire_insn_access_exception_tl1 |
747 | add %sp, PTREGS_OFF, %o0 | 908 | add %sp, PTREGS_OFF, %o0 |
748 | ba,pt %xcc, rtrap | 909 | ba,pt %xcc, rtrap |
749 | clr %l6 | 910 | clr %l6 |
750 | 911 | ||
751 | __do_instruction_access_exception: | 912 | __spitfire_insn_access_exception: |
752 | rdpr %pstate, %g4 | 913 | rdpr %pstate, %g4 |
753 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate | 914 | wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate |
754 | mov TLB_SFSR, %g3 | 915 | mov TLB_SFSR, %g3 |
755 | mov DMMU_SFAR, %g5 | 916 | ldxa [%g3] ASI_IMMU, %g4 ! Get SFSR |
756 | ldxa [%g3] ASI_DMMU, %g4 ! Get SFSR | 917 | rdpr %tpc, %g5 ! IMMU has no SFAR, use TPC |
757 | ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR | ||
758 | stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit | 918 | stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit |
759 | membar #Sync | 919 | membar #Sync |
760 | sethi %hi(109f), %g7 | 920 | sethi %hi(109f), %g7 |
@@ -762,102 +922,11 @@ __do_instruction_access_exception: | |||
762 | 109: or %g7, %lo(109b), %g7 | 922 | 109: or %g7, %lo(109b), %g7 |
763 | mov %l4, %o1 | 923 | mov %l4, %o1 |
764 | mov %l5, %o2 | 924 | mov %l5, %o2 |
765 | call instruction_access_exception | 925 | call spitfire_insn_access_exception |
766 | add %sp, PTREGS_OFF, %o0 | 926 | add %sp, PTREGS_OFF, %o0 |
767 | ba,pt %xcc, rtrap | 927 | ba,pt %xcc, rtrap |
768 | clr %l6 | 928 | clr %l6 |
769 | 929 | ||
770 | /* This is the trap handler entry point for ECC correctable | ||
771 | * errors. They are corrected, but we listen for the trap | ||
772 | * so that the event can be logged. | ||
773 | * | ||
774 | * Disrupting errors are either: | ||
775 | * 1) single-bit ECC errors during UDB reads to system | ||
776 | * memory | ||
777 | * 2) data parity errors during write-back events | ||
778 | * | ||
779 | * As far as I can make out from the manual, the CEE trap | ||
780 | * is only for correctable errors during memory read | ||
781 | * accesses by the front-end of the processor. | ||
782 | * | ||
783 | * The code below is only for trap level 1 CEE events, | ||
784 | * as it is the only situation where we can safely record | ||
785 | * and log. For trap level >1 we just clear the CE bit | ||
786 | * in the AFSR and return. | ||
787 | */ | ||
788 | |||
789 | /* Our trap handling infrastructure allows us to preserve | ||
790 | * two 64-bit values during etrap for arguments to | ||
791 | * subsequent C code. Therefore we encode the information | ||
792 | * as follows: | ||
793 | * | ||
794 | * value 1) Full 64-bits of AFAR | ||
795 | * value 2) Low 33-bits of AFSR, then bits 33-->42 | ||
796 | * are UDBL error status and bits 43-->52 | ||
797 | * are UDBH error status | ||
798 | */ | ||
799 | .align 64 | ||
800 | .globl cee_trap | ||
801 | cee_trap: | ||
802 | ldxa [%g0] ASI_AFSR, %g1 ! Read AFSR | ||
803 | ldxa [%g0] ASI_AFAR, %g2 ! Read AFAR | ||
804 | sllx %g1, 31, %g1 ! Clear reserved bits | ||
805 | srlx %g1, 31, %g1 ! in AFSR | ||
806 | |||
807 | /* NOTE: UltraSparc-I/II have high and low UDB error | ||
808 | * registers, corresponding to the two UDB units | ||
809 | * present on those chips. UltraSparc-IIi only | ||
810 | * has a single UDB, called "SDB" in the manual. | ||
811 | * For IIi the upper UDB register always reads | ||
812 | * as zero so for our purposes things will just | ||
813 | * work with the checks below. | ||
814 | */ | ||
815 | ldxa [%g0] ASI_UDBL_ERROR_R, %g3 ! Read UDB-Low error status | ||
816 | andcc %g3, (1 << 8), %g4 ! Check CE bit | ||
817 | sllx %g3, (64 - 10), %g3 ! Clear reserved bits | ||
818 | srlx %g3, (64 - 10), %g3 ! in UDB-Low error status | ||
819 | |||
820 | sllx %g3, (33 + 0), %g3 ! Shift up to encoding area | ||
821 | or %g1, %g3, %g1 ! Or it in | ||
822 | be,pn %xcc, 1f ! Branch if CE bit was clear | ||
823 | nop | ||
824 | stxa %g4, [%g0] ASI_UDB_ERROR_W ! Clear CE sticky bit in UDBL | ||
825 | membar #Sync ! Synchronize ASI stores | ||
826 | 1: mov 0x18, %g5 ! Addr of UDB-High error status | ||
827 | ldxa [%g5] ASI_UDBH_ERROR_R, %g3 ! Read it | ||
828 | |||
829 | andcc %g3, (1 << 8), %g4 ! Check CE bit | ||
830 | sllx %g3, (64 - 10), %g3 ! Clear reserved bits | ||
831 | srlx %g3, (64 - 10), %g3 ! in UDB-High error status | ||
832 | sllx %g3, (33 + 10), %g3 ! Shift up to encoding area | ||
833 | or %g1, %g3, %g1 ! Or it in | ||
834 | be,pn %xcc, 1f ! Branch if CE bit was clear | ||
835 | nop | ||
836 | nop | ||
837 | |||
838 | stxa %g4, [%g5] ASI_UDB_ERROR_W ! Clear CE sticky bit in UDBH | ||
839 | membar #Sync ! Synchronize ASI stores | ||
840 | 1: mov 1, %g5 ! AFSR CE bit is | ||
841 | sllx %g5, 20, %g5 ! bit 20 | ||
842 | stxa %g5, [%g0] ASI_AFSR ! Clear CE sticky bit in AFSR | ||
843 | membar #Sync ! Synchronize ASI stores | ||
844 | sllx %g2, (64 - 41), %g2 ! Clear reserved bits | ||
845 | srlx %g2, (64 - 41), %g2 ! in latched AFAR | ||
846 | |||
847 | andn %g2, 0x0f, %g2 ! Finish resv bit clearing | ||
848 | mov %g1, %g4 ! Move AFSR+UDB* into save reg | ||
849 | mov %g2, %g5 ! Move AFAR into save reg | ||
850 | rdpr %pil, %g2 | ||
851 | wrpr %g0, 15, %pil | ||
852 | ba,pt %xcc, etrap_irq | ||
853 | rd %pc, %g7 | ||
854 | mov %l4, %o0 | ||
855 | |||
856 | mov %l5, %o1 | ||
857 | call cee_log | ||
858 | add %sp, PTREGS_OFF, %o2 | ||
859 | ba,a,pt %xcc, rtrap_irq | ||
860 | |||
861 | /* Capture I/D/E-cache state into per-cpu error scoreboard. | 930 | /* Capture I/D/E-cache state into per-cpu error scoreboard. |
862 | * | 931 | * |
863 | * %g1: (TL>=0) ? 1 : 0 | 932 | * %g1: (TL>=0) ? 1 : 0 |
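A note on the entry.S rewrite above: __spitfire_access_error packs everything the C handler needs into a single 64-bit status word in %g4 (the raw AFSR, the trap type, a TL>1 flag, and the two UDB error registers), while the AFAR is passed separately in %g5. The SFSTAT_* shifts and masks come from the new asm-sparc64/sfafsr.h header, which is not shown in this section, so the following is only a C paraphrase of what the assembly builds, not a drop-in function:

    /* C paraphrase of the status word __spitfire_access_error assembles
     * before calling spitfire_access_error(); the SFSTAT_* constants are
     * assumed to be provided by asm-sparc64/sfafsr.h.
     */
    static unsigned long spitfire_encode_status(unsigned long afsr, unsigned long tt,
                                                int tl_gt_one, unsigned long udbh,
                                                unsigned long udbl)
    {
            unsigned long status = afsr;                        /* ldxa [%g0] ASI_AFSR  */

            status |= (tt & 0x1ff) << SFSTAT_TRAP_TYPE_SHIFT;   /* rdpr %tt             */
            if (tl_gt_one)                                      /* rdpr %tl, cmp with 1 */
                    status |= 1UL << SFSTAT_TL_GT_ONE_SHIFT;
            status |= (udbh & 0x3ff) << SFSTAT_UDBH_SHIFT;      /* ASI_UDBH_ERROR_R     */
            status |= (udbl & 0x3ff) << SFSTAT_UDBL_SHIFT;      /* ASI_UDBL_ERROR_R     */
            return status;
    }

The C side (spitfire_access_error() in the traps.c hunks later in this section) simply reverses these shifts with the matching SFSTAT_*_MASK values before dispatching to the UE or CE logging path.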
diff --git a/arch/sparc64/kernel/pci_iommu.c b/arch/sparc64/kernel/pci_iommu.c index 2803bc7c2c79..425c60cfea19 100644 --- a/arch/sparc64/kernel/pci_iommu.c +++ b/arch/sparc64/kernel/pci_iommu.c | |||
@@ -466,7 +466,7 @@ do_flush_sync: | |||
466 | if (!limit) | 466 | if (!limit) |
467 | break; | 467 | break; |
468 | udelay(1); | 468 | udelay(1); |
469 | membar("#LoadLoad"); | 469 | rmb(); |
470 | } | 470 | } |
471 | if (!limit) | 471 | if (!limit) |
472 | printk(KERN_WARNING "pci_strbuf_flush: flushflag timeout " | 472 | printk(KERN_WARNING "pci_strbuf_flush: flushflag timeout " |
diff --git a/arch/sparc64/kernel/process.c b/arch/sparc64/kernel/process.c index 07424b075938..66255434128a 100644 --- a/arch/sparc64/kernel/process.c +++ b/arch/sparc64/kernel/process.c | |||
@@ -103,7 +103,7 @@ void cpu_idle(void) | |||
103 | * other cpus see our increasing idleness for the buddy | 103 | * other cpus see our increasing idleness for the buddy |
104 | * redistribution algorithm. -DaveM | 104 | * redistribution algorithm. -DaveM |
105 | */ | 105 | */ |
106 | membar("#StoreStore | #StoreLoad"); | 106 | membar_storeload_storestore(); |
107 | } | 107 | } |
108 | } | 108 | } |
109 | 109 | ||
diff --git a/arch/sparc64/kernel/sbus.c b/arch/sparc64/kernel/sbus.c index 89f5e019f24c..e09ddf927655 100644 --- a/arch/sparc64/kernel/sbus.c +++ b/arch/sparc64/kernel/sbus.c | |||
@@ -147,7 +147,7 @@ static void sbus_strbuf_flush(struct sbus_iommu *iommu, u32 base, unsigned long | |||
147 | if (!limit) | 147 | if (!limit) |
148 | break; | 148 | break; |
149 | udelay(1); | 149 | udelay(1); |
150 | membar("#LoadLoad"); | 150 | rmb(); |
151 | } | 151 | } |
152 | if (!limit) | 152 | if (!limit) |
153 | printk(KERN_WARNING "sbus_strbuf_flush: flushflag timeout " | 153 | printk(KERN_WARNING "sbus_strbuf_flush: flushflag timeout " |
diff --git a/arch/sparc64/kernel/setup.c b/arch/sparc64/kernel/setup.c index b7e6a91952b2..fbdfed3798d8 100644 --- a/arch/sparc64/kernel/setup.c +++ b/arch/sparc64/kernel/setup.c | |||
@@ -33,7 +33,6 @@ | |||
33 | #include <linux/cpu.h> | 33 | #include <linux/cpu.h> |
34 | #include <linux/initrd.h> | 34 | #include <linux/initrd.h> |
35 | 35 | ||
36 | #include <asm/segment.h> | ||
37 | #include <asm/system.h> | 36 | #include <asm/system.h> |
38 | #include <asm/io.h> | 37 | #include <asm/io.h> |
39 | #include <asm/processor.h> | 38 | #include <asm/processor.h> |
diff --git a/arch/sparc64/kernel/signal.c b/arch/sparc64/kernel/signal.c index b27934671c35..60f5dfabb1e1 100644 --- a/arch/sparc64/kernel/signal.c +++ b/arch/sparc64/kernel/signal.c | |||
@@ -574,13 +574,12 @@ static inline void handle_signal(unsigned long signr, struct k_sigaction *ka, | |||
574 | { | 574 | { |
575 | setup_rt_frame(ka, regs, signr, oldset, | 575 | setup_rt_frame(ka, regs, signr, oldset, |
576 | (ka->sa.sa_flags & SA_SIGINFO) ? info : NULL); | 576 | (ka->sa.sa_flags & SA_SIGINFO) ? info : NULL); |
577 | if (!(ka->sa.sa_flags & SA_NOMASK)) { | 577 | spin_lock_irq(&current->sighand->siglock); |
578 | spin_lock_irq(&current->sighand->siglock); | 578 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); |
579 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); | 579 | if (!(ka->sa.sa_flags & SA_NOMASK)) |
580 | sigaddset(&current->blocked,signr); | 580 | sigaddset(&current->blocked,signr); |
581 | recalc_sigpending(); | 581 | recalc_sigpending(); |
582 | spin_unlock_irq(&current->sighand->siglock); | 582 | spin_unlock_irq(&current->sighand->siglock); |
583 | } | ||
584 | } | 583 | } |
585 | 584 | ||
586 | static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs, | 585 | static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs, |
diff --git a/arch/sparc64/kernel/signal32.c b/arch/sparc64/kernel/signal32.c index f28428f4170e..aecccd0df1d1 100644 --- a/arch/sparc64/kernel/signal32.c +++ b/arch/sparc64/kernel/signal32.c | |||
@@ -877,11 +877,12 @@ static void new_setup_frame32(struct k_sigaction *ka, struct pt_regs *regs, | |||
877 | unsigned long page = (unsigned long) | 877 | unsigned long page = (unsigned long) |
878 | page_address(pte_page(*ptep)); | 878 | page_address(pte_page(*ptep)); |
879 | 879 | ||
880 | __asm__ __volatile__( | 880 | wmb(); |
881 | " membar #StoreStore\n" | 881 | __asm__ __volatile__("flush %0 + %1" |
882 | " flush %0 + %1" | 882 | : /* no outputs */ |
883 | : : "r" (page), "r" (address & (PAGE_SIZE - 1)) | 883 | : "r" (page), |
884 | : "memory"); | 884 | "r" (address & (PAGE_SIZE - 1)) |
885 | : "memory"); | ||
885 | } | 886 | } |
886 | pte_unmap(ptep); | 887 | pte_unmap(ptep); |
887 | preempt_enable(); | 888 | preempt_enable(); |
@@ -1292,11 +1293,12 @@ static void setup_rt_frame32(struct k_sigaction *ka, struct pt_regs *regs, | |||
1292 | unsigned long page = (unsigned long) | 1293 | unsigned long page = (unsigned long) |
1293 | page_address(pte_page(*ptep)); | 1294 | page_address(pte_page(*ptep)); |
1294 | 1295 | ||
1295 | __asm__ __volatile__( | 1296 | wmb(); |
1296 | " membar #StoreStore\n" | 1297 | __asm__ __volatile__("flush %0 + %1" |
1297 | " flush %0 + %1" | 1298 | : /* no outputs */ |
1298 | : : "r" (page), "r" (address & (PAGE_SIZE - 1)) | 1299 | : "r" (page), |
1299 | : "memory"); | 1300 | "r" (address & (PAGE_SIZE - 1)) |
1301 | : "memory"); | ||
1300 | } | 1302 | } |
1301 | pte_unmap(ptep); | 1303 | pte_unmap(ptep); |
1302 | preempt_enable(); | 1304 | preempt_enable(); |
@@ -1325,13 +1327,12 @@ static inline void handle_signal32(unsigned long signr, struct k_sigaction *ka, | |||
1325 | else | 1327 | else |
1326 | setup_frame32(&ka->sa, regs, signr, oldset, info); | 1328 | setup_frame32(&ka->sa, regs, signr, oldset, info); |
1327 | } | 1329 | } |
1328 | if (!(ka->sa.sa_flags & SA_NOMASK)) { | 1330 | spin_lock_irq(&current->sighand->siglock); |
1329 | spin_lock_irq(&current->sighand->siglock); | 1331 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); |
1330 | sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); | 1332 | if (!(ka->sa.sa_flags & SA_NOMASK)) |
1331 | sigaddset(&current->blocked,signr); | 1333 | sigaddset(&current->blocked,signr); |
1332 | recalc_sigpending(); | 1334 | recalc_sigpending(); |
1333 | spin_unlock_irq(&current->sighand->siglock); | 1335 | spin_unlock_irq(&current->sighand->siglock); |
1334 | } | ||
1335 | } | 1336 | } |
1336 | 1337 | ||
1337 | static inline void syscall_restart32(unsigned long orig_i0, struct pt_regs *regs, | 1338 | static inline void syscall_restart32(unsigned long orig_i0, struct pt_regs *regs, |
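The two signal32.c frame-setup hunks above split what used to be one inline-asm sequence: the ordering half now goes through the generic wmb() macro, and only the flush instruction stays as inline assembly, so the stores that wrote the signal trampoline are ordered before the aliasing I-cache line is flushed. A sketch of the resulting pattern; the helper name is hypothetical, the diff open-codes it in new_setup_frame32() and setup_rt_frame32():

    /* Hypothetical wrapper around the pattern in the two hunks above:
     * order the trampoline stores, then flush the I-cache line that
     * aliases the user page so the new instructions are fetched coherently.
     */
    static void flush_signal_insns_sketch(unsigned long page, unsigned long address)
    {
            wmb();                          /* replaces the inline membar #StoreStore */
            __asm__ __volatile__("flush %0 + %1"
                                 : /* no outputs */
                                 : "r" (page),
                                   "r" (address & (PAGE_SIZE - 1))
                                 : "memory");
    }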
diff --git a/arch/sparc64/kernel/smp.c b/arch/sparc64/kernel/smp.c index b9b42491e118..b4fc6a5462b2 100644 --- a/arch/sparc64/kernel/smp.c +++ b/arch/sparc64/kernel/smp.c | |||
@@ -144,7 +144,7 @@ void __init smp_callin(void) | |||
144 | current->active_mm = &init_mm; | 144 | current->active_mm = &init_mm; |
145 | 145 | ||
146 | while (!cpu_isset(cpuid, smp_commenced_mask)) | 146 | while (!cpu_isset(cpuid, smp_commenced_mask)) |
147 | membar("#LoadLoad"); | 147 | rmb(); |
148 | 148 | ||
149 | cpu_set(cpuid, cpu_online_map); | 149 | cpu_set(cpuid, cpu_online_map); |
150 | } | 150 | } |
@@ -184,11 +184,11 @@ static inline long get_delta (long *rt, long *master) | |||
184 | for (i = 0; i < NUM_ITERS; i++) { | 184 | for (i = 0; i < NUM_ITERS; i++) { |
185 | t0 = tick_ops->get_tick(); | 185 | t0 = tick_ops->get_tick(); |
186 | go[MASTER] = 1; | 186 | go[MASTER] = 1; |
187 | membar("#StoreLoad"); | 187 | membar_storeload(); |
188 | while (!(tm = go[SLAVE])) | 188 | while (!(tm = go[SLAVE])) |
189 | membar("#LoadLoad"); | 189 | rmb(); |
190 | go[SLAVE] = 0; | 190 | go[SLAVE] = 0; |
191 | membar("#StoreStore"); | 191 | wmb(); |
192 | t1 = tick_ops->get_tick(); | 192 | t1 = tick_ops->get_tick(); |
193 | 193 | ||
194 | if (t1 - t0 < best_t1 - best_t0) | 194 | if (t1 - t0 < best_t1 - best_t0) |
@@ -221,7 +221,7 @@ void smp_synchronize_tick_client(void) | |||
221 | go[MASTER] = 1; | 221 | go[MASTER] = 1; |
222 | 222 | ||
223 | while (go[MASTER]) | 223 | while (go[MASTER]) |
224 | membar("#LoadLoad"); | 224 | rmb(); |
225 | 225 | ||
226 | local_irq_save(flags); | 226 | local_irq_save(flags); |
227 | { | 227 | { |
@@ -273,21 +273,21 @@ static void smp_synchronize_one_tick(int cpu) | |||
273 | 273 | ||
274 | /* wait for client to be ready */ | 274 | /* wait for client to be ready */ |
275 | while (!go[MASTER]) | 275 | while (!go[MASTER]) |
276 | membar("#LoadLoad"); | 276 | rmb(); |
277 | 277 | ||
278 | /* now let the client proceed into his loop */ | 278 | /* now let the client proceed into his loop */ |
279 | go[MASTER] = 0; | 279 | go[MASTER] = 0; |
280 | membar("#StoreLoad"); | 280 | membar_storeload(); |
281 | 281 | ||
282 | spin_lock_irqsave(&itc_sync_lock, flags); | 282 | spin_lock_irqsave(&itc_sync_lock, flags); |
283 | { | 283 | { |
284 | for (i = 0; i < NUM_ROUNDS*NUM_ITERS; i++) { | 284 | for (i = 0; i < NUM_ROUNDS*NUM_ITERS; i++) { |
285 | while (!go[MASTER]) | 285 | while (!go[MASTER]) |
286 | membar("#LoadLoad"); | 286 | rmb(); |
287 | go[MASTER] = 0; | 287 | go[MASTER] = 0; |
288 | membar("#StoreStore"); | 288 | wmb(); |
289 | go[SLAVE] = tick_ops->get_tick(); | 289 | go[SLAVE] = tick_ops->get_tick(); |
290 | membar("#StoreLoad"); | 290 | membar_storeload(); |
291 | } | 291 | } |
292 | } | 292 | } |
293 | spin_unlock_irqrestore(&itc_sync_lock, flags); | 293 | spin_unlock_irqrestore(&itc_sync_lock, flags); |
@@ -927,11 +927,11 @@ void smp_capture(void) | |||
927 | smp_processor_id()); | 927 | smp_processor_id()); |
928 | #endif | 928 | #endif |
929 | penguins_are_doing_time = 1; | 929 | penguins_are_doing_time = 1; |
930 | membar("#StoreStore | #LoadStore"); | 930 | membar_storestore_loadstore(); |
931 | atomic_inc(&smp_capture_registry); | 931 | atomic_inc(&smp_capture_registry); |
932 | smp_cross_call(&xcall_capture, 0, 0, 0); | 932 | smp_cross_call(&xcall_capture, 0, 0, 0); |
933 | while (atomic_read(&smp_capture_registry) != ncpus) | 933 | while (atomic_read(&smp_capture_registry) != ncpus) |
934 | membar("#LoadLoad"); | 934 | rmb(); |
935 | #ifdef CAPTURE_DEBUG | 935 | #ifdef CAPTURE_DEBUG |
936 | printk("done\n"); | 936 | printk("done\n"); |
937 | #endif | 937 | #endif |
@@ -947,7 +947,7 @@ void smp_release(void) | |||
947 | smp_processor_id()); | 947 | smp_processor_id()); |
948 | #endif | 948 | #endif |
949 | penguins_are_doing_time = 0; | 949 | penguins_are_doing_time = 0; |
950 | membar("#StoreStore | #StoreLoad"); | 950 | membar_storeload_storestore(); |
951 | atomic_dec(&smp_capture_registry); | 951 | atomic_dec(&smp_capture_registry); |
952 | } | 952 | } |
953 | } | 953 | } |
@@ -970,9 +970,9 @@ void smp_penguin_jailcell(int irq, struct pt_regs *regs) | |||
970 | save_alternate_globals(global_save); | 970 | save_alternate_globals(global_save); |
971 | prom_world(1); | 971 | prom_world(1); |
972 | atomic_inc(&smp_capture_registry); | 972 | atomic_inc(&smp_capture_registry); |
973 | membar("#StoreLoad | #StoreStore"); | 973 | membar_storeload_storestore(); |
974 | while (penguins_are_doing_time) | 974 | while (penguins_are_doing_time) |
975 | membar("#LoadLoad"); | 975 | rmb(); |
976 | restore_alternate_globals(global_save); | 976 | restore_alternate_globals(global_save); |
977 | atomic_dec(&smp_capture_registry); | 977 | atomic_dec(&smp_capture_registry); |
978 | prom_world(0); | 978 | prom_world(0); |
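Throughout smp.c the open-coded membar("...") strings are replaced with the wrapper macros exported further down in this merge: rmb() in the read-side spin loops, wmb() where one store must become visible before the next store, and membar_storeload() where a store has to reach memory before a following load. Below is a condensed paraphrase of the master's inner loop in smp_synchronize_one_tick() above, wrapped in a hypothetical function; MASTER and SLAVE are the mailbox indices from smp.c, and the tick argument stands in for tick_ops->get_tick():

    /* Sketch of one round of the tick-sync handshake on the master side,
     * showing where each barrier macro from the hunks above lands.
     */
    static void publish_one_tick(volatile long *go, long tick)
    {
            while (!go[MASTER])             /* wait for the client's request ...   */
                    rmb();                  /* ... re-reading memory on every pass */
            go[MASTER] = 0;                 /* acknowledge it                      */
            wmb();                          /* ack store ordered before tick store */
            go[SLAVE] = tick;               /* hand the current tick to the client */
            membar_storeload();             /* tick store visible before the next
                                             * load of go[MASTER]                  */
    }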
diff --git a/arch/sparc64/kernel/sparc64_ksyms.c b/arch/sparc64/kernel/sparc64_ksyms.c index 9202d925a9ce..a3ea697f1adb 100644 --- a/arch/sparc64/kernel/sparc64_ksyms.c +++ b/arch/sparc64/kernel/sparc64_ksyms.c | |||
@@ -99,17 +99,6 @@ extern int __ashrdi3(int, int); | |||
99 | extern void dump_thread(struct pt_regs *, struct user *); | 99 | extern void dump_thread(struct pt_regs *, struct user *); |
100 | extern int dump_fpu (struct pt_regs * regs, elf_fpregset_t * fpregs); | 100 | extern int dump_fpu (struct pt_regs * regs, elf_fpregset_t * fpregs); |
101 | 101 | ||
102 | #if defined(CONFIG_SMP) && defined(CONFIG_DEBUG_SPINLOCK) | ||
103 | extern void _do_spin_lock (spinlock_t *lock, char *str); | ||
104 | extern void _do_spin_unlock (spinlock_t *lock); | ||
105 | extern int _spin_trylock (spinlock_t *lock); | ||
106 | extern void _do_read_lock(rwlock_t *rw, char *str); | ||
107 | extern void _do_read_unlock(rwlock_t *rw, char *str); | ||
108 | extern void _do_write_lock(rwlock_t *rw, char *str); | ||
109 | extern void _do_write_unlock(rwlock_t *rw); | ||
110 | extern int _do_write_trylock(rwlock_t *rw, char *str); | ||
111 | #endif | ||
112 | |||
113 | extern unsigned long phys_base; | 102 | extern unsigned long phys_base; |
114 | extern unsigned long pfn_base; | 103 | extern unsigned long pfn_base; |
115 | 104 | ||
@@ -152,18 +141,6 @@ EXPORT_SYMBOL(_mcount); | |||
152 | EXPORT_SYMBOL(cpu_online_map); | 141 | EXPORT_SYMBOL(cpu_online_map); |
153 | EXPORT_SYMBOL(phys_cpu_present_map); | 142 | EXPORT_SYMBOL(phys_cpu_present_map); |
154 | 143 | ||
155 | /* Spinlock debugging library, optional. */ | ||
156 | #ifdef CONFIG_DEBUG_SPINLOCK | ||
157 | EXPORT_SYMBOL(_do_spin_lock); | ||
158 | EXPORT_SYMBOL(_do_spin_unlock); | ||
159 | EXPORT_SYMBOL(_spin_trylock); | ||
160 | EXPORT_SYMBOL(_do_read_lock); | ||
161 | EXPORT_SYMBOL(_do_read_unlock); | ||
162 | EXPORT_SYMBOL(_do_write_lock); | ||
163 | EXPORT_SYMBOL(_do_write_unlock); | ||
164 | EXPORT_SYMBOL(_do_write_trylock); | ||
165 | #endif | ||
166 | |||
167 | EXPORT_SYMBOL(smp_call_function); | 144 | EXPORT_SYMBOL(smp_call_function); |
168 | #endif /* CONFIG_SMP */ | 145 | #endif /* CONFIG_SMP */ |
169 | 146 | ||
@@ -429,3 +406,12 @@ EXPORT_SYMBOL(xor_vis_4); | |||
429 | EXPORT_SYMBOL(xor_vis_5); | 406 | EXPORT_SYMBOL(xor_vis_5); |
430 | 407 | ||
431 | EXPORT_SYMBOL(prom_palette); | 408 | EXPORT_SYMBOL(prom_palette); |
409 | |||
410 | /* memory barriers */ | ||
411 | EXPORT_SYMBOL(mb); | ||
412 | EXPORT_SYMBOL(rmb); | ||
413 | EXPORT_SYMBOL(wmb); | ||
414 | EXPORT_SYMBOL(membar_storeload); | ||
415 | EXPORT_SYMBOL(membar_storeload_storestore); | ||
416 | EXPORT_SYMBOL(membar_storeload_loadload); | ||
417 | EXPORT_SYMBOL(membar_storestore_loadstore); | ||
diff --git a/arch/sparc64/kernel/traps.c b/arch/sparc64/kernel/traps.c index 0c9e54b2f0c8..b280b2ef674f 100644 --- a/arch/sparc64/kernel/traps.c +++ b/arch/sparc64/kernel/traps.c | |||
@@ -33,6 +33,7 @@ | |||
33 | #include <asm/dcu.h> | 33 | #include <asm/dcu.h> |
34 | #include <asm/estate.h> | 34 | #include <asm/estate.h> |
35 | #include <asm/chafsr.h> | 35 | #include <asm/chafsr.h> |
36 | #include <asm/sfafsr.h> | ||
36 | #include <asm/psrcompat.h> | 37 | #include <asm/psrcompat.h> |
37 | #include <asm/processor.h> | 38 | #include <asm/processor.h> |
38 | #include <asm/timer.h> | 39 | #include <asm/timer.h> |
@@ -143,8 +144,7 @@ void do_BUG(const char *file, int line) | |||
143 | } | 144 | } |
144 | #endif | 145 | #endif |
145 | 146 | ||
146 | void instruction_access_exception(struct pt_regs *regs, | 147 | void spitfire_insn_access_exception(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar) |
147 | unsigned long sfsr, unsigned long sfar) | ||
148 | { | 148 | { |
149 | siginfo_t info; | 149 | siginfo_t info; |
150 | 150 | ||
@@ -153,8 +153,8 @@ void instruction_access_exception(struct pt_regs *regs, | |||
153 | return; | 153 | return; |
154 | 154 | ||
155 | if (regs->tstate & TSTATE_PRIV) { | 155 | if (regs->tstate & TSTATE_PRIV) { |
156 | printk("instruction_access_exception: SFSR[%016lx] SFAR[%016lx], going.\n", | 156 | printk("spitfire_insn_access_exception: SFSR[%016lx] " |
157 | sfsr, sfar); | 157 | "SFAR[%016lx], going.\n", sfsr, sfar); |
158 | die_if_kernel("Iax", regs); | 158 | die_if_kernel("Iax", regs); |
159 | } | 159 | } |
160 | if (test_thread_flag(TIF_32BIT)) { | 160 | if (test_thread_flag(TIF_32BIT)) { |
@@ -169,19 +169,17 @@ void instruction_access_exception(struct pt_regs *regs, | |||
169 | force_sig_info(SIGSEGV, &info, current); | 169 | force_sig_info(SIGSEGV, &info, current); |
170 | } | 170 | } |
171 | 171 | ||
172 | void instruction_access_exception_tl1(struct pt_regs *regs, | 172 | void spitfire_insn_access_exception_tl1(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar) |
173 | unsigned long sfsr, unsigned long sfar) | ||
174 | { | 173 | { |
175 | if (notify_die(DIE_TRAP_TL1, "instruction access exception tl1", regs, | 174 | if (notify_die(DIE_TRAP_TL1, "instruction access exception tl1", regs, |
176 | 0, 0x8, SIGTRAP) == NOTIFY_STOP) | 175 | 0, 0x8, SIGTRAP) == NOTIFY_STOP) |
177 | return; | 176 | return; |
178 | 177 | ||
179 | dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); | 178 | dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); |
180 | instruction_access_exception(regs, sfsr, sfar); | 179 | spitfire_insn_access_exception(regs, sfsr, sfar); |
181 | } | 180 | } |
182 | 181 | ||
183 | void data_access_exception(struct pt_regs *regs, | 182 | void spitfire_data_access_exception(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar) |
184 | unsigned long sfsr, unsigned long sfar) | ||
185 | { | 183 | { |
186 | siginfo_t info; | 184 | siginfo_t info; |
187 | 185 | ||
@@ -207,8 +205,8 @@ void data_access_exception(struct pt_regs *regs, | |||
207 | return; | 205 | return; |
208 | } | 206 | } |
209 | /* Shit... */ | 207 | /* Shit... */ |
210 | printk("data_access_exception: SFSR[%016lx] SFAR[%016lx], going.\n", | 208 | printk("spitfire_data_access_exception: SFSR[%016lx] " |
211 | sfsr, sfar); | 209 | "SFAR[%016lx], going.\n", sfsr, sfar); |
212 | die_if_kernel("Dax", regs); | 210 | die_if_kernel("Dax", regs); |
213 | } | 211 | } |
214 | 212 | ||
@@ -220,6 +218,16 @@ void data_access_exception(struct pt_regs *regs, | |||
220 | force_sig_info(SIGSEGV, &info, current); | 218 | force_sig_info(SIGSEGV, &info, current); |
221 | } | 219 | } |
222 | 220 | ||
221 | void spitfire_data_access_exception_tl1(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar) | ||
222 | { | ||
223 | if (notify_die(DIE_TRAP_TL1, "data access exception tl1", regs, | ||
224 | 0, 0x30, SIGTRAP) == NOTIFY_STOP) | ||
225 | return; | ||
226 | |||
227 | dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); | ||
228 | spitfire_data_access_exception(regs, sfsr, sfar); | ||
229 | } | ||
230 | |||
223 | #ifdef CONFIG_PCI | 231 | #ifdef CONFIG_PCI |
224 | /* This is really pathetic... */ | 232 | /* This is really pathetic... */ |
225 | extern volatile int pci_poke_in_progress; | 233 | extern volatile int pci_poke_in_progress; |
@@ -253,54 +261,13 @@ static void spitfire_clean_and_reenable_l1_caches(void) | |||
253 | : "memory"); | 261 | : "memory"); |
254 | } | 262 | } |
255 | 263 | ||
256 | void do_iae(struct pt_regs *regs) | 264 | static void spitfire_enable_estate_errors(void) |
257 | { | 265 | { |
258 | siginfo_t info; | 266 | __asm__ __volatile__("stxa %0, [%%g0] %1\n\t" |
259 | 267 | "membar #Sync" | |
260 | spitfire_clean_and_reenable_l1_caches(); | 268 | : /* no outputs */ |
261 | 269 | : "r" (ESTATE_ERR_ALL), | |
262 | if (notify_die(DIE_TRAP, "instruction access exception", regs, | 270 | "i" (ASI_ESTATE_ERROR_EN)); |
263 | 0, 0x8, SIGTRAP) == NOTIFY_STOP) | ||
264 | return; | ||
265 | |||
266 | info.si_signo = SIGBUS; | ||
267 | info.si_errno = 0; | ||
268 | info.si_code = BUS_OBJERR; | ||
269 | info.si_addr = (void *)0; | ||
270 | info.si_trapno = 0; | ||
271 | force_sig_info(SIGBUS, &info, current); | ||
272 | } | ||
273 | |||
274 | void do_dae(struct pt_regs *regs) | ||
275 | { | ||
276 | siginfo_t info; | ||
277 | |||
278 | #ifdef CONFIG_PCI | ||
279 | if (pci_poke_in_progress && pci_poke_cpu == smp_processor_id()) { | ||
280 | spitfire_clean_and_reenable_l1_caches(); | ||
281 | |||
282 | pci_poke_faulted = 1; | ||
283 | |||
284 | /* Why the fuck did they have to change this? */ | ||
285 | if (tlb_type == cheetah || tlb_type == cheetah_plus) | ||
286 | regs->tpc += 4; | ||
287 | |||
288 | regs->tnpc = regs->tpc + 4; | ||
289 | return; | ||
290 | } | ||
291 | #endif | ||
292 | spitfire_clean_and_reenable_l1_caches(); | ||
293 | |||
294 | if (notify_die(DIE_TRAP, "data access exception", regs, | ||
295 | 0, 0x30, SIGTRAP) == NOTIFY_STOP) | ||
296 | return; | ||
297 | |||
298 | info.si_signo = SIGBUS; | ||
299 | info.si_errno = 0; | ||
300 | info.si_code = BUS_OBJERR; | ||
301 | info.si_addr = (void *)0; | ||
302 | info.si_trapno = 0; | ||
303 | force_sig_info(SIGBUS, &info, current); | ||
304 | } | 271 | } |
305 | 272 | ||
306 | static char ecc_syndrome_table[] = { | 273 | static char ecc_syndrome_table[] = { |
@@ -338,65 +305,15 @@ static char ecc_syndrome_table[] = { | |||
338 | 0x0b, 0x48, 0x48, 0x4b, 0x48, 0x4b, 0x4b, 0x4a | 305 | 0x0b, 0x48, 0x48, 0x4b, 0x48, 0x4b, 0x4b, 0x4a |
339 | }; | 306 | }; |
340 | 307 | ||
341 | /* cee_trap in entry.S encodes AFSR/UDBH/UDBL error status | ||
342 | * in the following format. The AFAR is left as is, with | ||
343 | * reserved bits cleared, and is a raw 40-bit physical | ||
344 | * address. | ||
345 | */ | ||
346 | #define CE_STATUS_UDBH_UE (1UL << (43 + 9)) | ||
347 | #define CE_STATUS_UDBH_CE (1UL << (43 + 8)) | ||
348 | #define CE_STATUS_UDBH_ESYNDR (0xffUL << 43) | ||
349 | #define CE_STATUS_UDBH_SHIFT 43 | ||
350 | #define CE_STATUS_UDBL_UE (1UL << (33 + 9)) | ||
351 | #define CE_STATUS_UDBL_CE (1UL << (33 + 8)) | ||
352 | #define CE_STATUS_UDBL_ESYNDR (0xffUL << 33) | ||
353 | #define CE_STATUS_UDBL_SHIFT 33 | ||
354 | #define CE_STATUS_AFSR_MASK (0x1ffffffffUL) | ||
355 | #define CE_STATUS_AFSR_ME (1UL << 32) | ||
356 | #define CE_STATUS_AFSR_PRIV (1UL << 31) | ||
357 | #define CE_STATUS_AFSR_ISAP (1UL << 30) | ||
358 | #define CE_STATUS_AFSR_ETP (1UL << 29) | ||
359 | #define CE_STATUS_AFSR_IVUE (1UL << 28) | ||
360 | #define CE_STATUS_AFSR_TO (1UL << 27) | ||
361 | #define CE_STATUS_AFSR_BERR (1UL << 26) | ||
362 | #define CE_STATUS_AFSR_LDP (1UL << 25) | ||
363 | #define CE_STATUS_AFSR_CP (1UL << 24) | ||
364 | #define CE_STATUS_AFSR_WP (1UL << 23) | ||
365 | #define CE_STATUS_AFSR_EDP (1UL << 22) | ||
366 | #define CE_STATUS_AFSR_UE (1UL << 21) | ||
367 | #define CE_STATUS_AFSR_CE (1UL << 20) | ||
368 | #define CE_STATUS_AFSR_ETS (0xfUL << 16) | ||
369 | #define CE_STATUS_AFSR_ETS_SHIFT 16 | ||
370 | #define CE_STATUS_AFSR_PSYND (0xffffUL << 0) | ||
371 | #define CE_STATUS_AFSR_PSYND_SHIFT 0 | ||
372 | |||
373 | /* Layout of Ecache TAG Parity Syndrome of AFSR */ | ||
374 | #define AFSR_ETSYNDROME_7_0 0x1UL /* E$-tag bus bits <7:0> */ | ||
375 | #define AFSR_ETSYNDROME_15_8 0x2UL /* E$-tag bus bits <15:8> */ | ||
376 | #define AFSR_ETSYNDROME_21_16 0x4UL /* E$-tag bus bits <21:16> */ | ||
377 | #define AFSR_ETSYNDROME_24_22 0x8UL /* E$-tag bus bits <24:22> */ | ||
378 | |||
379 | static char *syndrome_unknown = "<Unknown>"; | 308 | static char *syndrome_unknown = "<Unknown>"; |
380 | 309 | ||
381 | asmlinkage void cee_log(unsigned long ce_status, | 310 | static void spitfire_log_udb_syndrome(unsigned long afar, unsigned long udbh, unsigned long udbl, unsigned long bit) |
382 | unsigned long afar, | ||
383 | struct pt_regs *regs) | ||
384 | { | 311 | { |
385 | char memmod_str[64]; | 312 | unsigned short scode; |
386 | char *p; | 313 | char memmod_str[64], *p; |
387 | unsigned short scode, udb_reg; | ||
388 | 314 | ||
389 | printk(KERN_WARNING "CPU[%d]: Correctable ECC Error " | 315 | if (udbl & bit) { |
390 | "AFSR[%lx] AFAR[%016lx] UDBL[%lx] UDBH[%lx]\n", | 316 | scode = ecc_syndrome_table[udbl & 0xff]; |
391 | smp_processor_id(), | ||
392 | (ce_status & CE_STATUS_AFSR_MASK), | ||
393 | afar, | ||
394 | ((ce_status >> CE_STATUS_UDBL_SHIFT) & 0x3ffUL), | ||
395 | ((ce_status >> CE_STATUS_UDBH_SHIFT) & 0x3ffUL)); | ||
396 | |||
397 | udb_reg = ((ce_status >> CE_STATUS_UDBL_SHIFT) & 0x3ffUL); | ||
398 | if (udb_reg & (1 << 8)) { | ||
399 | scode = ecc_syndrome_table[udb_reg & 0xff]; | ||
400 | if (prom_getunumber(scode, afar, | 317 | if (prom_getunumber(scode, afar, |
401 | memmod_str, sizeof(memmod_str)) == -1) | 318 | memmod_str, sizeof(memmod_str)) == -1) |
402 | p = syndrome_unknown; | 319 | p = syndrome_unknown; |
@@ -407,9 +324,8 @@ asmlinkage void cee_log(unsigned long ce_status, | |||
407 | smp_processor_id(), scode, p); | 324 | smp_processor_id(), scode, p); |
408 | } | 325 | } |
409 | 326 | ||
410 | udb_reg = ((ce_status >> CE_STATUS_UDBH_SHIFT) & 0x3ffUL); | 327 | if (udbh & bit) { |
411 | if (udb_reg & (1 << 8)) { | 328 | scode = ecc_syndrome_table[udbh & 0xff]; |
412 | scode = ecc_syndrome_table[udb_reg & 0xff]; | ||
413 | if (prom_getunumber(scode, afar, | 329 | if (prom_getunumber(scode, afar, |
414 | memmod_str, sizeof(memmod_str)) == -1) | 330 | memmod_str, sizeof(memmod_str)) == -1) |
415 | p = syndrome_unknown; | 331 | p = syndrome_unknown; |
@@ -419,6 +335,127 @@ asmlinkage void cee_log(unsigned long ce_status, | |||
419 | "Memory Module \"%s\"\n", | 335 | "Memory Module \"%s\"\n", |
420 | smp_processor_id(), scode, p); | 336 | smp_processor_id(), scode, p); |
421 | } | 337 | } |
338 | |||
339 | } | ||
340 | |||
341 | static void spitfire_cee_log(unsigned long afsr, unsigned long afar, unsigned long udbh, unsigned long udbl, int tl1, struct pt_regs *regs) | ||
342 | { | ||
343 | |||
344 | printk(KERN_WARNING "CPU[%d]: Correctable ECC Error " | ||
345 | "AFSR[%lx] AFAR[%016lx] UDBL[%lx] UDBH[%lx] TL>1[%d]\n", | ||
346 | smp_processor_id(), afsr, afar, udbl, udbh, tl1); | ||
347 | |||
348 | spitfire_log_udb_syndrome(afar, udbh, udbl, UDBE_CE); | ||
349 | |||
350 | /* We always log it, even if someone is listening for this | ||
351 | * trap. | ||
352 | */ | ||
353 | notify_die(DIE_TRAP, "Correctable ECC Error", regs, | ||
354 | 0, TRAP_TYPE_CEE, SIGTRAP); | ||
355 | |||
356 | /* The Correctable ECC Error trap does not disable I/D caches. So | ||
357 | * we only have to restore the ESTATE Error Enable register. | ||
358 | */ | ||
359 | spitfire_enable_estate_errors(); | ||
360 | } | ||
361 | |||
362 | static void spitfire_ue_log(unsigned long afsr, unsigned long afar, unsigned long udbh, unsigned long udbl, unsigned long tt, int tl1, struct pt_regs *regs) | ||
363 | { | ||
364 | siginfo_t info; | ||
365 | |||
366 | printk(KERN_WARNING "CPU[%d]: Uncorrectable Error AFSR[%lx] " | ||
367 | "AFAR[%lx] UDBL[%lx] UDBH[%ld] TT[%lx] TL>1[%d]\n", | ||
368 | smp_processor_id(), afsr, afar, udbl, udbh, tt, tl1); | ||
369 | |||
370 | /* XXX add more human friendly logging of the error status | ||
371 | * XXX as is implemented for cheetah | ||
372 | */ | ||
373 | |||
374 | spitfire_log_udb_syndrome(afar, udbh, udbl, UDBE_UE); | ||
375 | |||
376 | /* We always log it, even if someone is listening for this | ||
377 | * trap. | ||
378 | */ | ||
379 | notify_die(DIE_TRAP, "Uncorrectable Error", regs, | ||
380 | 0, tt, SIGTRAP); | ||
381 | |||
382 | if (regs->tstate & TSTATE_PRIV) { | ||
383 | if (tl1) | ||
384 | dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); | ||
385 | die_if_kernel("UE", regs); | ||
386 | } | ||
387 | |||
388 | /* XXX need more intelligent processing here, such as is implemented | ||
389 | * XXX for cheetah errors, in fact if the E-cache still holds the | ||
390 | * XXX line with bad parity this will loop | ||
391 | */ | ||
392 | |||
393 | spitfire_clean_and_reenable_l1_caches(); | ||
394 | spitfire_enable_estate_errors(); | ||
395 | |||
396 | if (test_thread_flag(TIF_32BIT)) { | ||
397 | regs->tpc &= 0xffffffff; | ||
398 | regs->tnpc &= 0xffffffff; | ||
399 | } | ||
400 | info.si_signo = SIGBUS; | ||
401 | info.si_errno = 0; | ||
402 | info.si_code = BUS_OBJERR; | ||
403 | info.si_addr = (void *)0; | ||
404 | info.si_trapno = 0; | ||
405 | force_sig_info(SIGBUS, &info, current); | ||
406 | } | ||
407 | |||
408 | void spitfire_access_error(struct pt_regs *regs, unsigned long status_encoded, unsigned long afar) | ||
409 | { | ||
410 | unsigned long afsr, tt, udbh, udbl; | ||
411 | int tl1; | ||
412 | |||
413 | afsr = (status_encoded & SFSTAT_AFSR_MASK) >> SFSTAT_AFSR_SHIFT; | ||
414 | tt = (status_encoded & SFSTAT_TRAP_TYPE) >> SFSTAT_TRAP_TYPE_SHIFT; | ||
415 | tl1 = (status_encoded & SFSTAT_TL_GT_ONE) ? 1 : 0; | ||
416 | udbl = (status_encoded & SFSTAT_UDBL_MASK) >> SFSTAT_UDBL_SHIFT; | ||
417 | udbh = (status_encoded & SFSTAT_UDBH_MASK) >> SFSTAT_UDBH_SHIFT; | ||
418 | |||
419 | #ifdef CONFIG_PCI | ||
420 | if (tt == TRAP_TYPE_DAE && | ||
421 | pci_poke_in_progress && pci_poke_cpu == smp_processor_id()) { | ||
422 | spitfire_clean_and_reenable_l1_caches(); | ||
423 | spitfire_enable_estate_errors(); | ||
424 | |||
425 | pci_poke_faulted = 1; | ||
426 | regs->tnpc = regs->tpc + 4; | ||
427 | return; | ||
428 | } | ||
429 | #endif | ||
430 | |||
431 | if (afsr & SFAFSR_UE) | ||
432 | spitfire_ue_log(afsr, afar, udbh, udbl, tt, tl1, regs); | ||
433 | |||
434 | if (tt == TRAP_TYPE_CEE) { | ||
435 | /* Handle the case where we took a CEE trap, but ACK'd | ||
436 | * only the UE state in the UDB error registers. | ||
437 | */ | ||
438 | if (afsr & SFAFSR_UE) { | ||
439 | if (udbh & UDBE_CE) { | ||
440 | __asm__ __volatile__( | ||
441 | "stxa %0, [%1] %2\n\t" | ||
442 | "membar #Sync" | ||
443 | : /* no outputs */ | ||
444 | : "r" (udbh & UDBE_CE), | ||
445 | "r" (0x0), "i" (ASI_UDB_ERROR_W)); | ||
446 | } | ||
447 | if (udbl & UDBE_CE) { | ||
448 | __asm__ __volatile__( | ||
449 | "stxa %0, [%1] %2\n\t" | ||
450 | "membar #Sync" | ||
451 | : /* no outputs */ | ||
452 | : "r" (udbl & UDBE_CE), | ||
453 | "r" (0x18), "i" (ASI_UDB_ERROR_W)); | ||
454 | } | ||
455 | } | ||
456 | |||
457 | spitfire_cee_log(afsr, afar, udbh, udbl, tl1, regs); | ||
458 | } | ||
422 | } | 459 | } |
423 | 460 | ||
424 | int cheetah_pcache_forced_on; | 461 | int cheetah_pcache_forced_on; |
diff --git a/arch/sparc64/kernel/ttable.S b/arch/sparc64/kernel/ttable.S index 491bb3681f9d..8365bc1f81f3 100644 --- a/arch/sparc64/kernel/ttable.S +++ b/arch/sparc64/kernel/ttable.S | |||
@@ -18,9 +18,10 @@ sparc64_ttable_tl0: | |||
18 | tl0_resv000: BOOT_KERNEL BTRAP(0x1) BTRAP(0x2) BTRAP(0x3) | 18 | tl0_resv000: BOOT_KERNEL BTRAP(0x1) BTRAP(0x2) BTRAP(0x3) |
19 | tl0_resv004: BTRAP(0x4) BTRAP(0x5) BTRAP(0x6) BTRAP(0x7) | 19 | tl0_resv004: BTRAP(0x4) BTRAP(0x5) BTRAP(0x6) BTRAP(0x7) |
20 | tl0_iax: membar #Sync | 20 | tl0_iax: membar #Sync |
21 | TRAP_NOSAVE_7INSNS(__do_instruction_access_exception) | 21 | TRAP_NOSAVE_7INSNS(__spitfire_insn_access_exception) |
22 | tl0_resv009: BTRAP(0x9) | 22 | tl0_resv009: BTRAP(0x9) |
23 | tl0_iae: TRAP(do_iae) | 23 | tl0_iae: membar #Sync |
24 | TRAP_NOSAVE_7INSNS(__spitfire_access_error) | ||
24 | tl0_resv00b: BTRAP(0xb) BTRAP(0xc) BTRAP(0xd) BTRAP(0xe) BTRAP(0xf) | 25 | tl0_resv00b: BTRAP(0xb) BTRAP(0xc) BTRAP(0xd) BTRAP(0xe) BTRAP(0xf) |
25 | tl0_ill: membar #Sync | 26 | tl0_ill: membar #Sync |
26 | TRAP_7INSNS(do_illegal_instruction) | 27 | TRAP_7INSNS(do_illegal_instruction) |
@@ -36,9 +37,10 @@ tl0_cwin: CLEAN_WINDOW | |||
36 | tl0_div0: TRAP(do_div0) | 37 | tl0_div0: TRAP(do_div0) |
37 | tl0_resv029: BTRAP(0x29) BTRAP(0x2a) BTRAP(0x2b) BTRAP(0x2c) BTRAP(0x2d) BTRAP(0x2e) | 38 | tl0_resv029: BTRAP(0x29) BTRAP(0x2a) BTRAP(0x2b) BTRAP(0x2c) BTRAP(0x2d) BTRAP(0x2e) |
38 | tl0_resv02f: BTRAP(0x2f) | 39 | tl0_resv02f: BTRAP(0x2f) |
39 | tl0_dax: TRAP_NOSAVE(__do_data_access_exception) | 40 | tl0_dax: TRAP_NOSAVE(__spitfire_data_access_exception) |
40 | tl0_resv031: BTRAP(0x31) | 41 | tl0_resv031: BTRAP(0x31) |
41 | tl0_dae: TRAP(do_dae) | 42 | tl0_dae: membar #Sync |
43 | TRAP_NOSAVE_7INSNS(__spitfire_access_error) | ||
42 | tl0_resv033: BTRAP(0x33) | 44 | tl0_resv033: BTRAP(0x33) |
43 | tl0_mna: TRAP_NOSAVE(do_mna) | 45 | tl0_mna: TRAP_NOSAVE(do_mna) |
44 | tl0_lddfmna: TRAP_NOSAVE(do_lddfmna) | 46 | tl0_lddfmna: TRAP_NOSAVE(do_lddfmna) |
@@ -73,7 +75,8 @@ tl0_resv05c: BTRAP(0x5c) BTRAP(0x5d) BTRAP(0x5e) BTRAP(0x5f) | |||
73 | tl0_ivec: TRAP_IVEC | 75 | tl0_ivec: TRAP_IVEC |
74 | tl0_paw: TRAP(do_paw) | 76 | tl0_paw: TRAP(do_paw) |
75 | tl0_vaw: TRAP(do_vaw) | 77 | tl0_vaw: TRAP(do_vaw) |
76 | tl0_cee: TRAP_NOSAVE(cee_trap) | 78 | tl0_cee: membar #Sync |
79 | TRAP_NOSAVE_7INSNS(__spitfire_cee_trap) | ||
77 | tl0_iamiss: | 80 | tl0_iamiss: |
78 | #include "itlb_base.S" | 81 | #include "itlb_base.S" |
79 | tl0_damiss: | 82 | tl0_damiss: |
@@ -175,9 +178,10 @@ tl0_resv1f0: BTRAPS(0x1f0) BTRAPS(0x1f8) | |||
175 | sparc64_ttable_tl1: | 178 | sparc64_ttable_tl1: |
176 | tl1_resv000: BOOT_KERNEL BTRAPTL1(0x1) BTRAPTL1(0x2) BTRAPTL1(0x3) | 179 | tl1_resv000: BOOT_KERNEL BTRAPTL1(0x1) BTRAPTL1(0x2) BTRAPTL1(0x3) |
177 | tl1_resv004: BTRAPTL1(0x4) BTRAPTL1(0x5) BTRAPTL1(0x6) BTRAPTL1(0x7) | 180 | tl1_resv004: BTRAPTL1(0x4) BTRAPTL1(0x5) BTRAPTL1(0x6) BTRAPTL1(0x7) |
178 | tl1_iax: TRAP_NOSAVE(__do_instruction_access_exception_tl1) | 181 | tl1_iax: TRAP_NOSAVE(__spitfire_insn_access_exception_tl1) |
179 | tl1_resv009: BTRAPTL1(0x9) | 182 | tl1_resv009: BTRAPTL1(0x9) |
180 | tl1_iae: TRAPTL1(do_iae_tl1) | 183 | tl1_iae: membar #Sync |
184 | TRAP_NOSAVE_7INSNS(__spitfire_access_error) | ||
181 | tl1_resv00b: BTRAPTL1(0xb) BTRAPTL1(0xc) BTRAPTL1(0xd) BTRAPTL1(0xe) BTRAPTL1(0xf) | 185 | tl1_resv00b: BTRAPTL1(0xb) BTRAPTL1(0xc) BTRAPTL1(0xd) BTRAPTL1(0xe) BTRAPTL1(0xf) |
182 | tl1_ill: TRAPTL1(do_ill_tl1) | 186 | tl1_ill: TRAPTL1(do_ill_tl1) |
183 | tl1_privop: BTRAPTL1(0x11) | 187 | tl1_privop: BTRAPTL1(0x11) |
@@ -193,9 +197,10 @@ tl1_cwin: CLEAN_WINDOW | |||
193 | tl1_div0: TRAPTL1(do_div0_tl1) | 197 | tl1_div0: TRAPTL1(do_div0_tl1) |
194 | tl1_resv029: BTRAPTL1(0x29) BTRAPTL1(0x2a) BTRAPTL1(0x2b) BTRAPTL1(0x2c) | 198 | tl1_resv029: BTRAPTL1(0x29) BTRAPTL1(0x2a) BTRAPTL1(0x2b) BTRAPTL1(0x2c) |
195 | tl1_resv02d: BTRAPTL1(0x2d) BTRAPTL1(0x2e) BTRAPTL1(0x2f) | 199 | tl1_resv02d: BTRAPTL1(0x2d) BTRAPTL1(0x2e) BTRAPTL1(0x2f) |
196 | tl1_dax: TRAP_NOSAVE(__do_data_access_exception_tl1) | 200 | tl1_dax: TRAP_NOSAVE(__spitfire_data_access_exception_tl1) |
197 | tl1_resv031: BTRAPTL1(0x31) | 201 | tl1_resv031: BTRAPTL1(0x31) |
198 | tl1_dae: TRAPTL1(do_dae_tl1) | 202 | tl1_dae: membar #Sync |
203 | TRAP_NOSAVE_7INSNS(__spitfire_access_error) | ||
199 | tl1_resv033: BTRAPTL1(0x33) | 204 | tl1_resv033: BTRAPTL1(0x33) |
200 | tl1_mna: TRAP_NOSAVE(do_mna) | 205 | tl1_mna: TRAP_NOSAVE(do_mna) |
201 | tl1_lddfmna: TRAPTL1(do_lddfmna_tl1) | 206 | tl1_lddfmna: TRAPTL1(do_lddfmna_tl1) |
@@ -219,8 +224,8 @@ tl1_paw: TRAPTL1(do_paw_tl1) | |||
219 | tl1_vaw: TRAPTL1(do_vaw_tl1) | 224 | tl1_vaw: TRAPTL1(do_vaw_tl1) |
220 | 225 | ||
221 | /* The grotty trick to save %g1 into current->thread.cee_stuff | 226 | /* The grotty trick to save %g1 into current->thread.cee_stuff |
222 | * is because when we take this trap we could be interrupting trap | 227 | * is because when we take this trap we could be interrupting |
223 | * code already using the trap alternate global registers. | 228 | * trap code already using the trap alternate global registers. |
224 | * | 229 | * |
225 | * We cross our fingers and pray that this store/load does | 230 | * We cross our fingers and pray that this store/load does |
226 | * not cause yet another CEE trap. | 231 | * not cause yet another CEE trap. |
diff --git a/arch/sparc64/kernel/unaligned.c b/arch/sparc64/kernel/unaligned.c index 11c3e88732e4..da9739f0d437 100644 --- a/arch/sparc64/kernel/unaligned.c +++ b/arch/sparc64/kernel/unaligned.c | |||
@@ -349,9 +349,9 @@ int handle_popc(u32 insn, struct pt_regs *regs) | |||
349 | 349 | ||
350 | extern void do_fpother(struct pt_regs *regs); | 350 | extern void do_fpother(struct pt_regs *regs); |
351 | extern void do_privact(struct pt_regs *regs); | 351 | extern void do_privact(struct pt_regs *regs); |
352 | extern void data_access_exception(struct pt_regs *regs, | 352 | extern void spitfire_data_access_exception(struct pt_regs *regs, |
353 | unsigned long sfsr, | 353 | unsigned long sfsr, |
354 | unsigned long sfar); | 354 | unsigned long sfar); |
355 | 355 | ||
356 | int handle_ldf_stq(u32 insn, struct pt_regs *regs) | 356 | int handle_ldf_stq(u32 insn, struct pt_regs *regs) |
357 | { | 357 | { |
@@ -394,14 +394,14 @@ int handle_ldf_stq(u32 insn, struct pt_regs *regs) | |||
394 | break; | 394 | break; |
395 | } | 395 | } |
396 | default: | 396 | default: |
397 | data_access_exception(regs, 0, addr); | 397 | spitfire_data_access_exception(regs, 0, addr); |
398 | return 1; | 398 | return 1; |
399 | } | 399 | } |
400 | if (put_user (first >> 32, (u32 __user *)addr) || | 400 | if (put_user (first >> 32, (u32 __user *)addr) || |
401 | __put_user ((u32)first, (u32 __user *)(addr + 4)) || | 401 | __put_user ((u32)first, (u32 __user *)(addr + 4)) || |
402 | __put_user (second >> 32, (u32 __user *)(addr + 8)) || | 402 | __put_user (second >> 32, (u32 __user *)(addr + 8)) || |
403 | __put_user ((u32)second, (u32 __user *)(addr + 12))) { | 403 | __put_user ((u32)second, (u32 __user *)(addr + 12))) { |
404 | data_access_exception(regs, 0, addr); | 404 | spitfire_data_access_exception(regs, 0, addr); |
405 | return 1; | 405 | return 1; |
406 | } | 406 | } |
407 | } else { | 407 | } else { |
@@ -414,7 +414,7 @@ int handle_ldf_stq(u32 insn, struct pt_regs *regs) | |||
414 | do_privact(regs); | 414 | do_privact(regs); |
415 | return 1; | 415 | return 1; |
416 | } else if (asi > ASI_SNFL) { | 416 | } else if (asi > ASI_SNFL) { |
417 | data_access_exception(regs, 0, addr); | 417 | spitfire_data_access_exception(regs, 0, addr); |
418 | return 1; | 418 | return 1; |
419 | } | 419 | } |
420 | switch (insn & 0x180000) { | 420 | switch (insn & 0x180000) { |
@@ -431,7 +431,7 @@ int handle_ldf_stq(u32 insn, struct pt_regs *regs) | |||
431 | err |= __get_user (data[i], (u32 __user *)(addr + 4*i)); | 431 | err |= __get_user (data[i], (u32 __user *)(addr + 4*i)); |
432 | } | 432 | } |
433 | if (err && !(asi & 0x2 /* NF */)) { | 433 | if (err && !(asi & 0x2 /* NF */)) { |
434 | data_access_exception(regs, 0, addr); | 434 | spitfire_data_access_exception(regs, 0, addr); |
435 | return 1; | 435 | return 1; |
436 | } | 436 | } |
437 | if (asi & 0x8) /* Little */ { | 437 | if (asi & 0x8) /* Little */ { |
@@ -534,7 +534,7 @@ void handle_lddfmna(struct pt_regs *regs, unsigned long sfar, unsigned long sfsr | |||
534 | *(u64 *)(f->regs + freg) = value; | 534 | *(u64 *)(f->regs + freg) = value; |
535 | current_thread_info()->fpsaved[0] |= flag; | 535 | current_thread_info()->fpsaved[0] |= flag; |
536 | } else { | 536 | } else { |
537 | daex: data_access_exception(regs, sfsr, sfar); | 537 | daex: spitfire_data_access_exception(regs, sfsr, sfar); |
538 | return; | 538 | return; |
539 | } | 539 | } |
540 | advance(regs); | 540 | advance(regs); |
@@ -578,7 +578,7 @@ void handle_stdfmna(struct pt_regs *regs, unsigned long sfar, unsigned long sfsr | |||
578 | __put_user ((u32)value, (u32 __user *)(sfar + 4))) | 578 | __put_user ((u32)value, (u32 __user *)(sfar + 4))) |
579 | goto daex; | 579 | goto daex; |
580 | } else { | 580 | } else { |
581 | daex: data_access_exception(regs, sfsr, sfar); | 581 | daex: spitfire_data_access_exception(regs, sfsr, sfar); |
582 | return; | 582 | return; |
583 | } | 583 | } |
584 | advance(regs); | 584 | advance(regs); |
diff --git a/arch/sparc64/kernel/winfixup.S b/arch/sparc64/kernel/winfixup.S index dfbc7e0dcf70..99c809a1e5ac 100644 --- a/arch/sparc64/kernel/winfixup.S +++ b/arch/sparc64/kernel/winfixup.S | |||
@@ -318,7 +318,7 @@ fill_fixup_dax: | |||
318 | nop | 318 | nop |
319 | rdpr %pstate, %l1 ! Prepare to change globals. | 319 | rdpr %pstate, %l1 ! Prepare to change globals. |
320 | mov %g4, %o1 ! Setup args for | 320 | mov %g4, %o1 ! Setup args for |
321 | mov %g5, %o2 ! final call to data_access_exception. | 321 | mov %g5, %o2 ! final call to spitfire_data_access_exception. |
322 | andn %l1, PSTATE_MM, %l1 ! We want to be in RMO | 322 | andn %l1, PSTATE_MM, %l1 ! We want to be in RMO |
323 | 323 | ||
324 | mov %g6, %o7 ! Stash away current. | 324 | mov %g6, %o7 ! Stash away current. |
@@ -330,7 +330,7 @@ fill_fixup_dax: | |||
330 | mov TSB_REG, %g1 | 330 | mov TSB_REG, %g1 |
331 | ldxa [%g1] ASI_IMMU, %g5 | 331 | ldxa [%g1] ASI_IMMU, %g5 |
332 | #endif | 332 | #endif |
333 | call data_access_exception | 333 | call spitfire_data_access_exception |
334 | add %sp, PTREGS_OFF, %o0 | 334 | add %sp, PTREGS_OFF, %o0 |
335 | 335 | ||
336 | b,pt %xcc, rtrap | 336 | b,pt %xcc, rtrap |
@@ -391,7 +391,7 @@ window_dax_from_user_common: | |||
391 | 109: or %g7, %lo(109b), %g7 | 391 | 109: or %g7, %lo(109b), %g7 |
392 | mov %l4, %o1 | 392 | mov %l4, %o1 |
393 | mov %l5, %o2 | 393 | mov %l5, %o2 |
394 | call data_access_exception | 394 | call spitfire_data_access_exception |
395 | add %sp, PTREGS_OFF, %o0 | 395 | add %sp, PTREGS_OFF, %o0 |
396 | ba,pt %xcc, rtrap | 396 | ba,pt %xcc, rtrap |
397 | clr %l6 | 397 | clr %l6 |
diff --git a/arch/sparc64/lib/Makefile b/arch/sparc64/lib/Makefile index 40dbeec7e5d6..6201f1040982 100644 --- a/arch/sparc64/lib/Makefile +++ b/arch/sparc64/lib/Makefile | |||
@@ -12,7 +12,7 @@ lib-y := PeeCeeI.o copy_page.o clear_page.o strlen.o strncmp.o \ | |||
12 | U1memcpy.o U1copy_from_user.o U1copy_to_user.o \ | 12 | U1memcpy.o U1copy_from_user.o U1copy_to_user.o \ |
13 | U3memcpy.o U3copy_from_user.o U3copy_to_user.o U3patch.o \ | 13 | U3memcpy.o U3copy_from_user.o U3copy_to_user.o U3patch.o \ |
14 | copy_in_user.o user_fixup.o memmove.o \ | 14 | copy_in_user.o user_fixup.o memmove.o \ |
15 | mcount.o ipcsum.o rwsem.o xor.o find_bit.o delay.o | 15 | mcount.o ipcsum.o rwsem.o xor.o find_bit.o delay.o mb.o |
16 | 16 | ||
17 | lib-$(CONFIG_DEBUG_SPINLOCK) += debuglocks.o | 17 | lib-$(CONFIG_DEBUG_SPINLOCK) += debuglocks.o |
18 | lib-$(CONFIG_HAVE_DEC_LOCK) += dec_and_lock.o | 18 | lib-$(CONFIG_HAVE_DEC_LOCK) += dec_and_lock.o |
diff --git a/arch/sparc64/lib/debuglocks.c b/arch/sparc64/lib/debuglocks.c index f03344cf784e..f5f0b5586f01 100644 --- a/arch/sparc64/lib/debuglocks.c +++ b/arch/sparc64/lib/debuglocks.c | |||
@@ -12,8 +12,6 @@ | |||
12 | 12 | ||
13 | #ifdef CONFIG_SMP | 13 | #ifdef CONFIG_SMP |
14 | 14 | ||
15 | #define GET_CALLER(PC) __asm__ __volatile__("mov %%i7, %0" : "=r" (PC)) | ||
16 | |||
17 | static inline void show (char *str, spinlock_t *lock, unsigned long caller) | 15 | static inline void show (char *str, spinlock_t *lock, unsigned long caller) |
18 | { | 16 | { |
19 | int cpu = smp_processor_id(); | 17 | int cpu = smp_processor_id(); |
@@ -51,20 +49,19 @@ static inline void show_write (char *str, rwlock_t *lock, unsigned long caller) | |||
51 | #undef INIT_STUCK | 49 | #undef INIT_STUCK |
52 | #define INIT_STUCK 100000000 | 50 | #define INIT_STUCK 100000000 |
53 | 51 | ||
54 | void _do_spin_lock(spinlock_t *lock, char *str) | 52 | void _do_spin_lock(spinlock_t *lock, char *str, unsigned long caller) |
55 | { | 53 | { |
56 | unsigned long caller, val; | 54 | unsigned long val; |
57 | int stuck = INIT_STUCK; | 55 | int stuck = INIT_STUCK; |
58 | int cpu = get_cpu(); | 56 | int cpu = get_cpu(); |
59 | int shown = 0; | 57 | int shown = 0; |
60 | 58 | ||
61 | GET_CALLER(caller); | ||
62 | again: | 59 | again: |
63 | __asm__ __volatile__("ldstub [%1], %0" | 60 | __asm__ __volatile__("ldstub [%1], %0" |
64 | : "=r" (val) | 61 | : "=r" (val) |
65 | : "r" (&(lock->lock)) | 62 | : "r" (&(lock->lock)) |
66 | : "memory"); | 63 | : "memory"); |
67 | membar("#StoreLoad | #StoreStore"); | 64 | membar_storeload_storestore(); |
68 | if (val) { | 65 | if (val) { |
69 | while (lock->lock) { | 66 | while (lock->lock) { |
70 | if (!--stuck) { | 67 | if (!--stuck) { |
@@ -72,7 +69,7 @@ again: | |||
72 | show(str, lock, caller); | 69 | show(str, lock, caller); |
73 | stuck = INIT_STUCK; | 70 | stuck = INIT_STUCK; |
74 | } | 71 | } |
75 | membar("#LoadLoad"); | 72 | rmb(); |
76 | } | 73 | } |
77 | goto again; | 74 | goto again; |
78 | } | 75 | } |
@@ -84,17 +81,16 @@ again: | |||
84 | put_cpu(); | 81 | put_cpu(); |
85 | } | 82 | } |
86 | 83 | ||
87 | int _do_spin_trylock(spinlock_t *lock) | 84 | int _do_spin_trylock(spinlock_t *lock, unsigned long caller) |
88 | { | 85 | { |
89 | unsigned long val, caller; | 86 | unsigned long val; |
90 | int cpu = get_cpu(); | 87 | int cpu = get_cpu(); |
91 | 88 | ||
92 | GET_CALLER(caller); | ||
93 | __asm__ __volatile__("ldstub [%1], %0" | 89 | __asm__ __volatile__("ldstub [%1], %0" |
94 | : "=r" (val) | 90 | : "=r" (val) |
95 | : "r" (&(lock->lock)) | 91 | : "r" (&(lock->lock)) |
96 | : "memory"); | 92 | : "memory"); |
97 | membar("#StoreLoad | #StoreStore"); | 93 | membar_storeload_storestore(); |
98 | if (!val) { | 94 | if (!val) { |
99 | lock->owner_pc = ((unsigned int)caller); | 95 | lock->owner_pc = ((unsigned int)caller); |
100 | lock->owner_cpu = cpu; | 96 | lock->owner_cpu = cpu; |
@@ -111,21 +107,20 @@ void _do_spin_unlock(spinlock_t *lock) | |||
111 | { | 107 | { |
112 | lock->owner_pc = 0; | 108 | lock->owner_pc = 0; |
113 | lock->owner_cpu = NO_PROC_ID; | 109 | lock->owner_cpu = NO_PROC_ID; |
114 | membar("#StoreStore | #LoadStore"); | 110 | membar_storestore_loadstore(); |
115 | lock->lock = 0; | 111 | lock->lock = 0; |
116 | current->thread.smp_lock_count--; | 112 | current->thread.smp_lock_count--; |
117 | } | 113 | } |
118 | 114 | ||
119 | /* Keep INIT_STUCK the same... */ | 115 | /* Keep INIT_STUCK the same... */ |
120 | 116 | ||
121 | void _do_read_lock (rwlock_t *rw, char *str) | 117 | void _do_read_lock(rwlock_t *rw, char *str, unsigned long caller) |
122 | { | 118 | { |
123 | unsigned long caller, val; | 119 | unsigned long val; |
124 | int stuck = INIT_STUCK; | 120 | int stuck = INIT_STUCK; |
125 | int cpu = get_cpu(); | 121 | int cpu = get_cpu(); |
126 | int shown = 0; | 122 | int shown = 0; |
127 | 123 | ||
128 | GET_CALLER(caller); | ||
129 | wlock_again: | 124 | wlock_again: |
130 | /* Wait for any writer to go away. */ | 125 | /* Wait for any writer to go away. */ |
131 | while (((long)(rw->lock)) < 0) { | 126 | while (((long)(rw->lock)) < 0) { |
@@ -134,7 +129,7 @@ wlock_again: | |||
134 | show_read(str, rw, caller); | 129 | show_read(str, rw, caller); |
135 | stuck = INIT_STUCK; | 130 | stuck = INIT_STUCK; |
136 | } | 131 | } |
137 | membar("#LoadLoad"); | 132 | rmb(); |
138 | } | 133 | } |
139 | /* Try once to increment the counter. */ | 134 | /* Try once to increment the counter. */ |
140 | __asm__ __volatile__( | 135 | __asm__ __volatile__( |
@@ -147,7 +142,7 @@ wlock_again: | |||
147 | "2:" : "=r" (val) | 142 | "2:" : "=r" (val) |
148 | : "0" (&(rw->lock)) | 143 | : "0" (&(rw->lock)) |
149 | : "g1", "g7", "memory"); | 144 | : "g1", "g7", "memory"); |
150 | membar("#StoreLoad | #StoreStore"); | 145 | membar_storeload_storestore(); |
151 | if (val) | 146 | if (val) |
152 | goto wlock_again; | 147 | goto wlock_again; |
153 | rw->reader_pc[cpu] = ((unsigned int)caller); | 148 | rw->reader_pc[cpu] = ((unsigned int)caller); |
@@ -157,15 +152,13 @@ wlock_again: | |||
157 | put_cpu(); | 152 | put_cpu(); |
158 | } | 153 | } |
159 | 154 | ||
160 | void _do_read_unlock (rwlock_t *rw, char *str) | 155 | void _do_read_unlock(rwlock_t *rw, char *str, unsigned long caller) |
161 | { | 156 | { |
162 | unsigned long caller, val; | 157 | unsigned long val; |
163 | int stuck = INIT_STUCK; | 158 | int stuck = INIT_STUCK; |
164 | int cpu = get_cpu(); | 159 | int cpu = get_cpu(); |
165 | int shown = 0; | 160 | int shown = 0; |
166 | 161 | ||
167 | GET_CALLER(caller); | ||
168 | |||
169 | /* Drop our identity _first_. */ | 162 | /* Drop our identity _first_. */ |
170 | rw->reader_pc[cpu] = 0; | 163 | rw->reader_pc[cpu] = 0; |
171 | current->thread.smp_lock_count--; | 164 | current->thread.smp_lock_count--; |
@@ -193,14 +186,13 @@ runlock_again: | |||
193 | put_cpu(); | 186 | put_cpu(); |
194 | } | 187 | } |
195 | 188 | ||
196 | void _do_write_lock (rwlock_t *rw, char *str) | 189 | void _do_write_lock(rwlock_t *rw, char *str, unsigned long caller) |
197 | { | 190 | { |
198 | unsigned long caller, val; | 191 | unsigned long val; |
199 | int stuck = INIT_STUCK; | 192 | int stuck = INIT_STUCK; |
200 | int cpu = get_cpu(); | 193 | int cpu = get_cpu(); |
201 | int shown = 0; | 194 | int shown = 0; |
202 | 195 | ||
203 | GET_CALLER(caller); | ||
204 | wlock_again: | 196 | wlock_again: |
205 | /* Spin while there is another writer. */ | 197 | /* Spin while there is another writer. */ |
206 | while (((long)rw->lock) < 0) { | 198 | while (((long)rw->lock) < 0) { |
@@ -209,7 +201,7 @@ wlock_again: | |||
209 | show_write(str, rw, caller); | 201 | show_write(str, rw, caller); |
210 | stuck = INIT_STUCK; | 202 | stuck = INIT_STUCK; |
211 | } | 203 | } |
212 | membar("#LoadLoad"); | 204 | rmb(); |
213 | } | 205 | } |
214 | 206 | ||
215 | /* Try to acquire the write bit. */ | 207 | /* Try to acquire the write bit. */ |
@@ -264,7 +256,7 @@ wlock_again: | |||
264 | show_write(str, rw, caller); | 256 | show_write(str, rw, caller); |
265 | stuck = INIT_STUCK; | 257 | stuck = INIT_STUCK; |
266 | } | 258 | } |
267 | membar("#LoadLoad"); | 259 | rmb(); |
268 | } | 260 | } |
269 | goto wlock_again; | 261 | goto wlock_again; |
270 | } | 262 | } |
@@ -278,14 +270,12 @@ wlock_again: | |||
278 | put_cpu(); | 270 | put_cpu(); |
279 | } | 271 | } |
280 | 272 | ||
281 | void _do_write_unlock(rwlock_t *rw) | 273 | void _do_write_unlock(rwlock_t *rw, unsigned long caller) |
282 | { | 274 | { |
283 | unsigned long caller, val; | 275 | unsigned long val; |
284 | int stuck = INIT_STUCK; | 276 | int stuck = INIT_STUCK; |
285 | int shown = 0; | 277 | int shown = 0; |
286 | 278 | ||
287 | GET_CALLER(caller); | ||
288 | |||
289 | /* Drop our identity _first_ */ | 279 | /* Drop our identity _first_ */ |
290 | rw->writer_pc = 0; | 280 | rw->writer_pc = 0; |
291 | rw->writer_cpu = NO_PROC_ID; | 281 | rw->writer_cpu = NO_PROC_ID; |
@@ -313,13 +303,11 @@ wlock_again: | |||
313 | } | 303 | } |
314 | } | 304 | } |
315 | 305 | ||
316 | int _do_write_trylock (rwlock_t *rw, char *str) | 306 | int _do_write_trylock(rwlock_t *rw, char *str, unsigned long caller) |
317 | { | 307 | { |
318 | unsigned long caller, val; | 308 | unsigned long val; |
319 | int cpu = get_cpu(); | 309 | int cpu = get_cpu(); |
320 | 310 | ||
321 | GET_CALLER(caller); | ||
322 | |||
323 | /* Try to acquire the write bit. */ | 311 | /* Try to acquire the write bit. */ |
324 | __asm__ __volatile__( | 312 | __asm__ __volatile__( |
325 | " mov 1, %%g3\n" | 313 | " mov 1, %%g3\n" |
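The debuglocks.c hunks above remove the GET_CALLER() inline-asm macro (which read the return address out of %i7) and instead make every debug lock primitive take the caller PC as an explicit argument. A minimal sketch of how a header-side wrapper could supply that argument is below; the wrapper name and the use of __builtin_return_address() are illustrative assumptions, not taken from this commit.

	#include <linux/spinlock.h>

	/* Assumed wrapper: pass the caller PC explicitly now that
	 * _do_spin_lock() no longer reads %i7 itself.
	 */
	extern void _do_spin_lock(spinlock_t *lock, char *str,
				  unsigned long caller);

	#define debug_spin_lock(lp)					\
		_do_spin_lock((lp), "spin_lock",			\
			      (unsigned long) __builtin_return_address(0))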
diff --git a/arch/sparc64/lib/mb.S b/arch/sparc64/lib/mb.S new file mode 100644 index 000000000000..4004f748619f --- /dev/null +++ b/arch/sparc64/lib/mb.S | |||
@@ -0,0 +1,73 @@ | |||
1 | /* mb.S: Out of line memory barriers. | ||
2 | * | ||
3 | * Copyright (C) 2005 David S. Miller (davem@davemloft.net) | ||
4 | */ | ||
5 | |||
6 | /* These are here in an effort to more fully work around | ||
7 | * Spitfire Errata #51. Essentially, if a memory barrier | ||
8 | * occurs soon after a mispredicted branch, the chip can stop | ||
9 | * executing instructions until a trap occurs. Therefore, if | ||
10 | * interrupts are disabled, the chip can hang forever. | ||
11 | * | ||
12 | * It used to be believed that the memory barrier had to be | ||
13 | * right in the delay slot, but a case has been traced | ||
14 | * recently wherein the memory barrier was one instruction | ||
15 | * after the branch delay slot and the chip still hung. The | ||
16 | * offending sequence was the following in sym_wakeup_done() | ||
17 | * of the sym53c8xx_2 driver: | ||
18 | * | ||
19 | * call sym_ccb_from_dsa, 0 | ||
20 | * movge %icc, 0, %l0 | ||
21 | * brz,pn %o0, .LL1303 | ||
22 | * mov %o0, %l2 | ||
23 | * membar #LoadLoad | ||
24 | * | ||
25 | * The branch has to be mispredicted for the bug to occur. | ||
26 | * Therefore, we put the memory barrier explicitly into a | ||
27 | * "branch always, predicted taken" delay slot to avoid the | ||
28 | * problem case. | ||
29 | */ | ||
30 | |||
31 | .text | ||
32 | |||
33 | 99: retl | ||
34 | nop | ||
35 | |||
36 | .globl mb | ||
37 | mb: ba,pt %xcc, 99b | ||
38 | membar #LoadLoad | #LoadStore | #StoreStore | #StoreLoad | ||
39 | .size mb, .-mb | ||
40 | |||
41 | .globl rmb | ||
42 | rmb: ba,pt %xcc, 99b | ||
43 | membar #LoadLoad | ||
44 | .size rmb, .-rmb | ||
45 | |||
46 | .globl wmb | ||
47 | wmb: ba,pt %xcc, 99b | ||
48 | membar #StoreStore | ||
49 | .size wmb, .-wmb | ||
50 | |||
51 | .globl membar_storeload | ||
52 | membar_storeload: | ||
53 | ba,pt %xcc, 99b | ||
54 | membar #StoreLoad | ||
55 | .size membar_storeload, .-membar_storeload | ||
56 | |||
57 | .globl membar_storeload_storestore | ||
58 | membar_storeload_storestore: | ||
59 | ba,pt %xcc, 99b | ||
60 | membar #StoreLoad | #StoreStore | ||
61 | .size membar_storeload_storestore, .-membar_storeload_storestore | ||
62 | |||
63 | .globl membar_storeload_loadload | ||
64 | membar_storeload_loadload: | ||
65 | ba,pt %xcc, 99b | ||
66 | membar #StoreLoad | #LoadLoad | ||
67 | .size membar_storeload_loadload, .-membar_storeload_loadload | ||
68 | |||
69 | .globl membar_storestore_loadstore | ||
70 | membar_storestore_loadstore: | ||
71 | ba,pt %xcc, 99b | ||
72 | membar #StoreStore | #LoadStore | ||
73 | .size membar_storestore_loadstore, .-membar_storestore_loadstore | ||
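The new mb.S above (built via the mb.o entry added to the library Makefile) turns each barrier into an out-of-line routine whose membar sits in the delay slot of a predicted-taken branch, which is the workaround its header comment describes. From C they become plain function calls, which is exactly what the debuglocks.c hunks switch to. A hedged sketch of the C-side view; the extern declarations are assumptions about the header plumbing, not quoted from this commit.

	/* Assumed declarations for the out-of-line barriers defined in mb.S. */
	extern void mb(void);
	extern void rmb(void);
	extern void wmb(void);
	extern void membar_storeload_storestore(void);
	extern void membar_storestore_loadstore(void);

	static inline void spin_on_lock_byte(volatile unsigned char *lock)
	{
		while (*lock)
			rmb();	/* replaces the old membar("#LoadLoad") inline asm */
	}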
diff --git a/arch/sparc64/solaris/misc.c b/arch/sparc64/solaris/misc.c index 15b4cfe07557..302efbcba70e 100644 --- a/arch/sparc64/solaris/misc.c +++ b/arch/sparc64/solaris/misc.c | |||
@@ -737,7 +737,8 @@ MODULE_LICENSE("GPL"); | |||
737 | extern u32 tl0_solaris[8]; | 737 | extern u32 tl0_solaris[8]; |
738 | #define update_ttable(x) \ | 738 | #define update_ttable(x) \ |
739 | tl0_solaris[3] = (((long)(x) - (long)tl0_solaris - 3) >> 2) | 0x40000000; \ | 739 | tl0_solaris[3] = (((long)(x) - (long)tl0_solaris - 3) >> 2) | 0x40000000; \ |
740 | __asm__ __volatile__ ("membar #StoreStore; flush %0" : : "r" (&tl0_solaris[3])) | 740 | wmb(); \ |
741 | __asm__ __volatile__ ("flush %0" : : "r" (&tl0_solaris[3])) | ||
741 | #else | 742 | #else |
742 | #endif | 743 | #endif |
743 | 744 | ||
@@ -761,7 +762,8 @@ int init_module(void) | |||
761 | entry64_personality_patch |= | 762 | entry64_personality_patch |= |
762 | (offsetof(struct task_struct, personality) + | 763 | (offsetof(struct task_struct, personality) + |
763 | (sizeof(unsigned long) - 1)); | 764 | (sizeof(unsigned long) - 1)); |
764 | __asm__ __volatile__("membar #StoreStore; flush %0" | 765 | wmb(); |
766 | __asm__ __volatile__("flush %0" | ||
765 | : : "r" (&entry64_personality_patch)); | 767 | : : "r" (&entry64_personality_patch)); |
766 | return 0; | 768 | return 0; |
767 | } | 769 | } |
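The solaris/misc.c hunks split the old single asm statement into a wmb() call, which can now resolve to the out-of-line barrier above, followed by the flush on its own, so the membar is no longer emitted inline next to the flush. The resulting idiom, as it appears in the new init_module() code, is:

	wmb();
	__asm__ __volatile__("flush %0"
			     : : "r" (&entry64_personality_patch));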
diff --git a/arch/um/kernel/signal_kern.c b/arch/um/kernel/signal_kern.c index 7807a3e8c426..03618bd13d55 100644 --- a/arch/um/kernel/signal_kern.c +++ b/arch/um/kernel/signal_kern.c | |||
@@ -87,12 +87,12 @@ static int handle_signal(struct pt_regs *regs, unsigned long signr, | |||
87 | recalc_sigpending(); | 87 | recalc_sigpending(); |
88 | spin_unlock_irq(¤t->sighand->siglock); | 88 | spin_unlock_irq(¤t->sighand->siglock); |
89 | force_sigsegv(signr, current); | 89 | force_sigsegv(signr, current); |
90 | } | 90 | } else { |
91 | else if(!(ka->sa.sa_flags & SA_NODEFER)){ | ||
92 | spin_lock_irq(¤t->sighand->siglock); | 91 | spin_lock_irq(¤t->sighand->siglock); |
93 | sigorsets(¤t->blocked, ¤t->blocked, | 92 | sigorsets(¤t->blocked, ¤t->blocked, |
94 | &ka->sa.sa_mask); | 93 | &ka->sa.sa_mask); |
95 | sigaddset(¤t->blocked, signr); | 94 | if(!(ka->sa.sa_flags & SA_NODEFER)) |
95 | sigaddset(¤t->blocked, signr); | ||
96 | recalc_sigpending(); | 96 | recalc_sigpending(); |
97 | spin_unlock_irq(¤t->sighand->siglock); | 97 | spin_unlock_irq(¤t->sighand->siglock); |
98 | } | 98 | } |
diff --git a/arch/v850/kernel/signal.c b/arch/v850/kernel/signal.c index 37061e32e1a4..633e4e1b825f 100644 --- a/arch/v850/kernel/signal.c +++ b/arch/v850/kernel/signal.c | |||
@@ -462,13 +462,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, | |||
462 | else | 462 | else |
463 | setup_frame(sig, ka, oldset, regs); | 463 | setup_frame(sig, ka, oldset, regs); |
464 | 464 | ||
465 | if (!(ka->sa.sa_flags & SA_NODEFER)) { | 465 | spin_lock_irq(¤t->sighand->siglock); |
466 | spin_lock_irq(¤t->sighand->siglock); | 466 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
467 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 467 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
468 | sigaddset(¤t->blocked,sig); | 468 | sigaddset(¤t->blocked,sig); |
469 | recalc_sigpending(); | 469 | recalc_sigpending(); |
470 | spin_unlock_irq(¤t->sighand->siglock); | 470 | spin_unlock_irq(¤t->sighand->siglock); |
471 | } | ||
472 | } | 471 | } |
473 | 472 | ||
474 | /* | 473 | /* |
diff --git a/arch/x86_64/kernel/signal.c b/arch/x86_64/kernel/signal.c index 98590a989f3d..d642fbf3da29 100644 --- a/arch/x86_64/kernel/signal.c +++ b/arch/x86_64/kernel/signal.c | |||
@@ -394,10 +394,11 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, | |||
394 | #endif | 394 | #endif |
395 | ret = setup_rt_frame(sig, ka, info, oldset, regs); | 395 | ret = setup_rt_frame(sig, ka, info, oldset, regs); |
396 | 396 | ||
397 | if (ret && !(ka->sa.sa_flags & SA_NODEFER)) { | 397 | if (ret) { |
398 | spin_lock_irq(¤t->sighand->siglock); | 398 | spin_lock_irq(¤t->sighand->siglock); |
399 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); | 399 | sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask); |
400 | sigaddset(¤t->blocked,sig); | 400 | if (!(ka->sa.sa_flags & SA_NODEFER)) |
401 | sigaddset(¤t->blocked,sig); | ||
401 | recalc_sigpending(); | 402 | recalc_sigpending(); |
402 | spin_unlock_irq(¤t->sighand->siglock); | 403 | spin_unlock_irq(¤t->sighand->siglock); |
403 | } | 404 | } |
diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c index df6e1e17b096..dc42cede9394 100644 --- a/arch/xtensa/kernel/signal.c +++ b/arch/xtensa/kernel/signal.c | |||
@@ -702,12 +702,11 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset) | |||
702 | if (ka.sa.sa_flags & SA_ONESHOT) | 702 | if (ka.sa.sa_flags & SA_ONESHOT) |
703 | ka.sa.sa_handler = SIG_DFL; | 703 | ka.sa.sa_handler = SIG_DFL; |
704 | 704 | ||
705 | if (!(ka.sa.sa_flags & SA_NODEFER)) { | 705 | spin_lock_irq(¤t->sighand->siglock); |
706 | spin_lock_irq(¤t->sighand->siglock); | 706 | sigorsets(¤t->blocked, ¤t->blocked, &ka.sa.sa_mask); |
707 | sigorsets(¤t->blocked, ¤t->blocked, &ka.sa.sa_mask); | 707 | if (!(ka.sa.sa_flags & SA_NODEFER)) |
708 | sigaddset(¤t->blocked, signr); | 708 | sigaddset(¤t->blocked, signr); |
709 | recalc_sigpending(); | 709 | recalc_sigpending(); |
710 | spin_unlock_irq(¤t->sighand->siglock); | 710 | spin_unlock_irq(¤t->sighand->siglock); |
711 | } | ||
712 | return 1; | 711 | return 1; |
713 | } | 712 | } |
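The signal-handling hunks for um, v850, x86_64 and xtensa above all make the same SA_NODEFER fix: the handler's sa_mask is now added to the blocked set unconditionally, and only the blocking of the delivered signal itself is skipped when SA_NODEFER is set; previously the whole block was skipped. Condensed into one illustrative helper (the function name is not from this commit, the body follows the diffs):

	static void block_sigmask(struct k_sigaction *ka, int sig)
	{
		spin_lock_irq(&current->sighand->siglock);
		sigorsets(&current->blocked, &current->blocked,
			  &ka->sa.sa_mask);
		if (!(ka->sa.sa_flags & SA_NODEFER))
			sigaddset(&current->blocked, sig);
		recalc_sigpending();
		spin_unlock_irq(&current->sighand->siglock);
	}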
diff --git a/drivers/Kconfig b/drivers/Kconfig index cecab0acc3fe..46d655fab115 100644 --- a/drivers/Kconfig +++ b/drivers/Kconfig | |||
@@ -48,6 +48,8 @@ source "drivers/hwmon/Kconfig" | |||
48 | 48 | ||
49 | source "drivers/misc/Kconfig" | 49 | source "drivers/misc/Kconfig" |
50 | 50 | ||
51 | source "drivers/mfd/Kconfig" | ||
52 | |||
51 | source "drivers/media/Kconfig" | 53 | source "drivers/media/Kconfig" |
52 | 54 | ||
53 | source "drivers/video/Kconfig" | 55 | source "drivers/video/Kconfig" |
diff --git a/drivers/Makefile b/drivers/Makefile index 126a851d5653..9663132ed825 100644 --- a/drivers/Makefile +++ b/drivers/Makefile | |||
@@ -26,7 +26,7 @@ obj-$(CONFIG_FB_INTEL) += video/intelfb/ | |||
26 | obj-$(CONFIG_SERIO) += input/serio/ | 26 | obj-$(CONFIG_SERIO) += input/serio/ |
27 | obj-y += serial/ | 27 | obj-y += serial/ |
28 | obj-$(CONFIG_PARPORT) += parport/ | 28 | obj-$(CONFIG_PARPORT) += parport/ |
29 | obj-y += base/ block/ misc/ net/ media/ | 29 | obj-y += base/ block/ misc/ mfd/ net/ media/ |
30 | obj-$(CONFIG_NUBUS) += nubus/ | 30 | obj-$(CONFIG_NUBUS) += nubus/ |
31 | obj-$(CONFIG_ATM) += atm/ | 31 | obj-$(CONFIG_ATM) += atm/ |
32 | obj-$(CONFIG_PPC_PMAC) += macintosh/ | 32 | obj-$(CONFIG_PPC_PMAC) += macintosh/ |
diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile index 10be36731ed7..678a7e097f32 100644 --- a/drivers/infiniband/core/Makefile +++ b/drivers/infiniband/core/Makefile | |||
@@ -1,5 +1,3 @@ | |||
1 | EXTRA_CFLAGS += -Idrivers/infiniband/include | ||
2 | |||
3 | obj-$(CONFIG_INFINIBAND) += ib_core.o ib_mad.o ib_sa.o \ | 1 | obj-$(CONFIG_INFINIBAND) += ib_core.o ib_mad.o ib_sa.o \ |
4 | ib_cm.o ib_umad.o ib_ucm.o | 2 | ib_cm.o ib_umad.o ib_ucm.o |
5 | obj-$(CONFIG_INFINIBAND_USER_VERBS) += ib_uverbs.o | 3 | obj-$(CONFIG_INFINIBAND_USER_VERBS) += ib_uverbs.o |
diff --git a/drivers/infiniband/core/agent.c b/drivers/infiniband/core/agent.c index 729f0b0d983a..5ac86f566dc0 100644 --- a/drivers/infiniband/core/agent.c +++ b/drivers/infiniband/core/agent.c | |||
@@ -1,9 +1,10 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved. |
3 | * Copyright (c) 2004 Infinicon Corporation. All rights reserved. | 3 | * Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved. |
4 | * Copyright (c) 2004 Intel Corporation. All rights reserved. | 4 | * Copyright (c) 2004, 2005 Intel Corporation. All rights reserved. |
5 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. | 5 | * Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved. |
6 | * Copyright (c) 2004 Voltaire Corporation. All rights reserved. | 6 | * Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved. |
7 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
7 | * | 8 | * |
8 | * This software is available to you under a choice of one of two | 9 | * This software is available to you under a choice of one of two |
9 | * licenses. You may choose to be licensed under the terms of the GNU | 10 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -40,7 +41,7 @@ | |||
40 | 41 | ||
41 | #include <asm/bug.h> | 42 | #include <asm/bug.h> |
42 | 43 | ||
43 | #include <ib_smi.h> | 44 | #include <rdma/ib_smi.h> |
44 | 45 | ||
45 | #include "smi.h" | 46 | #include "smi.h" |
46 | #include "agent_priv.h" | 47 | #include "agent_priv.h" |
diff --git a/drivers/infiniband/core/agent_priv.h b/drivers/infiniband/core/agent_priv.h index 17435af1e914..2ec6d7f1b7d0 100644 --- a/drivers/infiniband/core/agent_priv.h +++ b/drivers/infiniband/core/agent_priv.h | |||
@@ -1,9 +1,9 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved. |
3 | * Copyright (c) 2004 Infinicon Corporation. All rights reserved. | 3 | * Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved. |
4 | * Copyright (c) 2004 Intel Corporation. All rights reserved. | 4 | * Copyright (c) 2004, 2005 Intel Corporation. All rights reserved. |
5 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. | 5 | * Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved. |
6 | * Copyright (c) 2004 Voltaire Corporation. All rights reserved. | 6 | * Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved. |
7 | * | 7 | * |
8 | * This software is available to you under a choice of one of two | 8 | * This software is available to you under a choice of one of two |
9 | * licenses. You may choose to be licensed under the terms of the GNU | 9 | * licenses. You may choose to be licensed under the terms of the GNU |
diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c index 3042360c97e1..f014e639088c 100644 --- a/drivers/infiniband/core/cache.c +++ b/drivers/infiniband/core/cache.c | |||
@@ -1,5 +1,8 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Intel Corporation. All rights reserved. | ||
4 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
5 | * Copyright (c) 2005 Voltaire, Inc. All rights reserved. | ||
3 | * | 6 | * |
4 | * This software is available to you under a choice of one of two | 7 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 8 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -32,12 +35,11 @@ | |||
32 | * $Id: cache.c 1349 2004-12-16 21:09:43Z roland $ | 35 | * $Id: cache.c 1349 2004-12-16 21:09:43Z roland $ |
33 | */ | 36 | */ |
34 | 37 | ||
35 | #include <linux/version.h> | ||
36 | #include <linux/module.h> | 38 | #include <linux/module.h> |
37 | #include <linux/errno.h> | 39 | #include <linux/errno.h> |
38 | #include <linux/slab.h> | 40 | #include <linux/slab.h> |
39 | 41 | ||
40 | #include <ib_cache.h> | 42 | #include <rdma/ib_cache.h> |
41 | 43 | ||
42 | #include "core_priv.h" | 44 | #include "core_priv.h" |
43 | 45 | ||
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c index 403ed125d8f4..4de93ba274a6 100644 --- a/drivers/infiniband/core/cm.c +++ b/drivers/infiniband/core/cm.c | |||
@@ -43,8 +43,8 @@ | |||
43 | #include <linux/spinlock.h> | 43 | #include <linux/spinlock.h> |
44 | #include <linux/workqueue.h> | 44 | #include <linux/workqueue.h> |
45 | 45 | ||
46 | #include <ib_cache.h> | 46 | #include <rdma/ib_cache.h> |
47 | #include <ib_cm.h> | 47 | #include <rdma/ib_cm.h> |
48 | #include "cm_msgs.h" | 48 | #include "cm_msgs.h" |
49 | 49 | ||
50 | MODULE_AUTHOR("Sean Hefty"); | 50 | MODULE_AUTHOR("Sean Hefty"); |
@@ -83,7 +83,7 @@ struct cm_port { | |||
83 | struct cm_device { | 83 | struct cm_device { |
84 | struct list_head list; | 84 | struct list_head list; |
85 | struct ib_device *device; | 85 | struct ib_device *device; |
86 | u64 ca_guid; | 86 | __be64 ca_guid; |
87 | struct cm_port port[0]; | 87 | struct cm_port port[0]; |
88 | }; | 88 | }; |
89 | 89 | ||
@@ -100,8 +100,8 @@ struct cm_work { | |||
100 | struct list_head list; | 100 | struct list_head list; |
101 | struct cm_port *port; | 101 | struct cm_port *port; |
102 | struct ib_mad_recv_wc *mad_recv_wc; /* Received MADs */ | 102 | struct ib_mad_recv_wc *mad_recv_wc; /* Received MADs */ |
103 | u32 local_id; /* Established / timewait */ | 103 | __be32 local_id; /* Established / timewait */ |
104 | u32 remote_id; | 104 | __be32 remote_id; |
105 | struct ib_cm_event cm_event; | 105 | struct ib_cm_event cm_event; |
106 | struct ib_sa_path_rec path[0]; | 106 | struct ib_sa_path_rec path[0]; |
107 | }; | 107 | }; |
@@ -110,8 +110,8 @@ struct cm_timewait_info { | |||
110 | struct cm_work work; /* Must be first. */ | 110 | struct cm_work work; /* Must be first. */ |
111 | struct rb_node remote_qp_node; | 111 | struct rb_node remote_qp_node; |
112 | struct rb_node remote_id_node; | 112 | struct rb_node remote_id_node; |
113 | u64 remote_ca_guid; | 113 | __be64 remote_ca_guid; |
114 | u32 remote_qpn; | 114 | __be32 remote_qpn; |
115 | u8 inserted_remote_qp; | 115 | u8 inserted_remote_qp; |
116 | u8 inserted_remote_id; | 116 | u8 inserted_remote_id; |
117 | }; | 117 | }; |
@@ -132,11 +132,11 @@ struct cm_id_private { | |||
132 | struct cm_av alt_av; | 132 | struct cm_av alt_av; |
133 | 133 | ||
134 | void *private_data; | 134 | void *private_data; |
135 | u64 tid; | 135 | __be64 tid; |
136 | u32 local_qpn; | 136 | __be32 local_qpn; |
137 | u32 remote_qpn; | 137 | __be32 remote_qpn; |
138 | u32 sq_psn; | 138 | __be32 sq_psn; |
139 | u32 rq_psn; | 139 | __be32 rq_psn; |
140 | int timeout_ms; | 140 | int timeout_ms; |
141 | enum ib_mtu path_mtu; | 141 | enum ib_mtu path_mtu; |
142 | u8 private_data_len; | 142 | u8 private_data_len; |
@@ -253,7 +253,7 @@ static void cm_set_ah_attr(struct ib_ah_attr *ah_attr, u8 port_num, | |||
253 | u16 dlid, u8 sl, u16 src_path_bits) | 253 | u16 dlid, u8 sl, u16 src_path_bits) |
254 | { | 254 | { |
255 | memset(ah_attr, 0, sizeof ah_attr); | 255 | memset(ah_attr, 0, sizeof ah_attr); |
256 | ah_attr->dlid = be16_to_cpu(dlid); | 256 | ah_attr->dlid = dlid; |
257 | ah_attr->sl = sl; | 257 | ah_attr->sl = sl; |
258 | ah_attr->src_path_bits = src_path_bits; | 258 | ah_attr->src_path_bits = src_path_bits; |
259 | ah_attr->port_num = port_num; | 259 | ah_attr->port_num = port_num; |
@@ -264,7 +264,7 @@ static void cm_init_av_for_response(struct cm_port *port, | |||
264 | { | 264 | { |
265 | av->port = port; | 265 | av->port = port; |
266 | av->pkey_index = wc->pkey_index; | 266 | av->pkey_index = wc->pkey_index; |
267 | cm_set_ah_attr(&av->ah_attr, port->port_num, cpu_to_be16(wc->slid), | 267 | cm_set_ah_attr(&av->ah_attr, port->port_num, wc->slid, |
268 | wc->sl, wc->dlid_path_bits); | 268 | wc->sl, wc->dlid_path_bits); |
269 | } | 269 | } |
270 | 270 | ||
@@ -295,8 +295,9 @@ static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av) | |||
295 | return ret; | 295 | return ret; |
296 | 296 | ||
297 | av->port = port; | 297 | av->port = port; |
298 | cm_set_ah_attr(&av->ah_attr, av->port->port_num, path->dlid, | 298 | cm_set_ah_attr(&av->ah_attr, av->port->port_num, |
299 | path->sl, path->slid & 0x7F); | 299 | be16_to_cpu(path->dlid), path->sl, |
300 | be16_to_cpu(path->slid) & 0x7F); | ||
300 | av->packet_life_time = path->packet_life_time; | 301 | av->packet_life_time = path->packet_life_time; |
301 | return 0; | 302 | return 0; |
302 | } | 303 | } |
@@ -309,26 +310,26 @@ static int cm_alloc_id(struct cm_id_private *cm_id_priv) | |||
309 | do { | 310 | do { |
310 | spin_lock_irqsave(&cm.lock, flags); | 311 | spin_lock_irqsave(&cm.lock, flags); |
311 | ret = idr_get_new_above(&cm.local_id_table, cm_id_priv, 1, | 312 | ret = idr_get_new_above(&cm.local_id_table, cm_id_priv, 1, |
312 | (int *) &cm_id_priv->id.local_id); | 313 | (__force int *) &cm_id_priv->id.local_id); |
313 | spin_unlock_irqrestore(&cm.lock, flags); | 314 | spin_unlock_irqrestore(&cm.lock, flags); |
314 | } while( (ret == -EAGAIN) && idr_pre_get(&cm.local_id_table, GFP_KERNEL) ); | 315 | } while( (ret == -EAGAIN) && idr_pre_get(&cm.local_id_table, GFP_KERNEL) ); |
315 | return ret; | 316 | return ret; |
316 | } | 317 | } |
317 | 318 | ||
318 | static void cm_free_id(u32 local_id) | 319 | static void cm_free_id(__be32 local_id) |
319 | { | 320 | { |
320 | unsigned long flags; | 321 | unsigned long flags; |
321 | 322 | ||
322 | spin_lock_irqsave(&cm.lock, flags); | 323 | spin_lock_irqsave(&cm.lock, flags); |
323 | idr_remove(&cm.local_id_table, (int) local_id); | 324 | idr_remove(&cm.local_id_table, (__force int) local_id); |
324 | spin_unlock_irqrestore(&cm.lock, flags); | 325 | spin_unlock_irqrestore(&cm.lock, flags); |
325 | } | 326 | } |
326 | 327 | ||
327 | static struct cm_id_private * cm_get_id(u32 local_id, u32 remote_id) | 328 | static struct cm_id_private * cm_get_id(__be32 local_id, __be32 remote_id) |
328 | { | 329 | { |
329 | struct cm_id_private *cm_id_priv; | 330 | struct cm_id_private *cm_id_priv; |
330 | 331 | ||
331 | cm_id_priv = idr_find(&cm.local_id_table, (int) local_id); | 332 | cm_id_priv = idr_find(&cm.local_id_table, (__force int) local_id); |
332 | if (cm_id_priv) { | 333 | if (cm_id_priv) { |
333 | if (cm_id_priv->id.remote_id == remote_id) | 334 | if (cm_id_priv->id.remote_id == remote_id) |
334 | atomic_inc(&cm_id_priv->refcount); | 335 | atomic_inc(&cm_id_priv->refcount); |
@@ -339,7 +340,7 @@ static struct cm_id_private * cm_get_id(u32 local_id, u32 remote_id) | |||
339 | return cm_id_priv; | 340 | return cm_id_priv; |
340 | } | 341 | } |
341 | 342 | ||
342 | static struct cm_id_private * cm_acquire_id(u32 local_id, u32 remote_id) | 343 | static struct cm_id_private * cm_acquire_id(__be32 local_id, __be32 remote_id) |
343 | { | 344 | { |
344 | struct cm_id_private *cm_id_priv; | 345 | struct cm_id_private *cm_id_priv; |
345 | unsigned long flags; | 346 | unsigned long flags; |
@@ -356,8 +357,8 @@ static struct cm_id_private * cm_insert_listen(struct cm_id_private *cm_id_priv) | |||
356 | struct rb_node **link = &cm.listen_service_table.rb_node; | 357 | struct rb_node **link = &cm.listen_service_table.rb_node; |
357 | struct rb_node *parent = NULL; | 358 | struct rb_node *parent = NULL; |
358 | struct cm_id_private *cur_cm_id_priv; | 359 | struct cm_id_private *cur_cm_id_priv; |
359 | u64 service_id = cm_id_priv->id.service_id; | 360 | __be64 service_id = cm_id_priv->id.service_id; |
360 | u64 service_mask = cm_id_priv->id.service_mask; | 361 | __be64 service_mask = cm_id_priv->id.service_mask; |
361 | 362 | ||
362 | while (*link) { | 363 | while (*link) { |
363 | parent = *link; | 364 | parent = *link; |
@@ -376,7 +377,7 @@ static struct cm_id_private * cm_insert_listen(struct cm_id_private *cm_id_priv) | |||
376 | return NULL; | 377 | return NULL; |
377 | } | 378 | } |
378 | 379 | ||
379 | static struct cm_id_private * cm_find_listen(u64 service_id) | 380 | static struct cm_id_private * cm_find_listen(__be64 service_id) |
380 | { | 381 | { |
381 | struct rb_node *node = cm.listen_service_table.rb_node; | 382 | struct rb_node *node = cm.listen_service_table.rb_node; |
382 | struct cm_id_private *cm_id_priv; | 383 | struct cm_id_private *cm_id_priv; |
@@ -400,8 +401,8 @@ static struct cm_timewait_info * cm_insert_remote_id(struct cm_timewait_info | |||
400 | struct rb_node **link = &cm.remote_id_table.rb_node; | 401 | struct rb_node **link = &cm.remote_id_table.rb_node; |
401 | struct rb_node *parent = NULL; | 402 | struct rb_node *parent = NULL; |
402 | struct cm_timewait_info *cur_timewait_info; | 403 | struct cm_timewait_info *cur_timewait_info; |
403 | u64 remote_ca_guid = timewait_info->remote_ca_guid; | 404 | __be64 remote_ca_guid = timewait_info->remote_ca_guid; |
404 | u32 remote_id = timewait_info->work.remote_id; | 405 | __be32 remote_id = timewait_info->work.remote_id; |
405 | 406 | ||
406 | while (*link) { | 407 | while (*link) { |
407 | parent = *link; | 408 | parent = *link; |
@@ -424,8 +425,8 @@ static struct cm_timewait_info * cm_insert_remote_id(struct cm_timewait_info | |||
424 | return NULL; | 425 | return NULL; |
425 | } | 426 | } |
426 | 427 | ||
427 | static struct cm_timewait_info * cm_find_remote_id(u64 remote_ca_guid, | 428 | static struct cm_timewait_info * cm_find_remote_id(__be64 remote_ca_guid, |
428 | u32 remote_id) | 429 | __be32 remote_id) |
429 | { | 430 | { |
430 | struct rb_node *node = cm.remote_id_table.rb_node; | 431 | struct rb_node *node = cm.remote_id_table.rb_node; |
431 | struct cm_timewait_info *timewait_info; | 432 | struct cm_timewait_info *timewait_info; |
@@ -453,8 +454,8 @@ static struct cm_timewait_info * cm_insert_remote_qpn(struct cm_timewait_info | |||
453 | struct rb_node **link = &cm.remote_qp_table.rb_node; | 454 | struct rb_node **link = &cm.remote_qp_table.rb_node; |
454 | struct rb_node *parent = NULL; | 455 | struct rb_node *parent = NULL; |
455 | struct cm_timewait_info *cur_timewait_info; | 456 | struct cm_timewait_info *cur_timewait_info; |
456 | u64 remote_ca_guid = timewait_info->remote_ca_guid; | 457 | __be64 remote_ca_guid = timewait_info->remote_ca_guid; |
457 | u32 remote_qpn = timewait_info->remote_qpn; | 458 | __be32 remote_qpn = timewait_info->remote_qpn; |
458 | 459 | ||
459 | while (*link) { | 460 | while (*link) { |
460 | parent = *link; | 461 | parent = *link; |
@@ -484,7 +485,7 @@ static struct cm_id_private * cm_insert_remote_sidr(struct cm_id_private | |||
484 | struct rb_node *parent = NULL; | 485 | struct rb_node *parent = NULL; |
485 | struct cm_id_private *cur_cm_id_priv; | 486 | struct cm_id_private *cur_cm_id_priv; |
486 | union ib_gid *port_gid = &cm_id_priv->av.dgid; | 487 | union ib_gid *port_gid = &cm_id_priv->av.dgid; |
487 | u32 remote_id = cm_id_priv->id.remote_id; | 488 | __be32 remote_id = cm_id_priv->id.remote_id; |
488 | 489 | ||
489 | while (*link) { | 490 | while (*link) { |
490 | parent = *link; | 491 | parent = *link; |
@@ -598,7 +599,7 @@ static void cm_cleanup_timewait(struct cm_timewait_info *timewait_info) | |||
598 | spin_unlock_irqrestore(&cm.lock, flags); | 599 | spin_unlock_irqrestore(&cm.lock, flags); |
599 | } | 600 | } |
600 | 601 | ||
601 | static struct cm_timewait_info * cm_create_timewait_info(u32 local_id) | 602 | static struct cm_timewait_info * cm_create_timewait_info(__be32 local_id) |
602 | { | 603 | { |
603 | struct cm_timewait_info *timewait_info; | 604 | struct cm_timewait_info *timewait_info; |
604 | 605 | ||
@@ -715,14 +716,15 @@ retest: | |||
715 | EXPORT_SYMBOL(ib_destroy_cm_id); | 716 | EXPORT_SYMBOL(ib_destroy_cm_id); |
716 | 717 | ||
717 | int ib_cm_listen(struct ib_cm_id *cm_id, | 718 | int ib_cm_listen(struct ib_cm_id *cm_id, |
718 | u64 service_id, | 719 | __be64 service_id, |
719 | u64 service_mask) | 720 | __be64 service_mask) |
720 | { | 721 | { |
721 | struct cm_id_private *cm_id_priv, *cur_cm_id_priv; | 722 | struct cm_id_private *cm_id_priv, *cur_cm_id_priv; |
722 | unsigned long flags; | 723 | unsigned long flags; |
723 | int ret = 0; | 724 | int ret = 0; |
724 | 725 | ||
725 | service_mask = service_mask ? service_mask : ~0ULL; | 726 | service_mask = service_mask ? service_mask : |
727 | __constant_cpu_to_be64(~0ULL); | ||
726 | service_id &= service_mask; | 728 | service_id &= service_mask; |
727 | if ((service_id & IB_SERVICE_ID_AGN_MASK) == IB_CM_ASSIGN_SERVICE_ID && | 729 | if ((service_id & IB_SERVICE_ID_AGN_MASK) == IB_CM_ASSIGN_SERVICE_ID && |
728 | (service_id != IB_CM_ASSIGN_SERVICE_ID)) | 730 | (service_id != IB_CM_ASSIGN_SERVICE_ID)) |
@@ -735,8 +737,8 @@ int ib_cm_listen(struct ib_cm_id *cm_id, | |||
735 | 737 | ||
736 | spin_lock_irqsave(&cm.lock, flags); | 738 | spin_lock_irqsave(&cm.lock, flags); |
737 | if (service_id == IB_CM_ASSIGN_SERVICE_ID) { | 739 | if (service_id == IB_CM_ASSIGN_SERVICE_ID) { |
738 | cm_id->service_id = __cpu_to_be64(cm.listen_service_id++); | 740 | cm_id->service_id = cpu_to_be64(cm.listen_service_id++); |
739 | cm_id->service_mask = ~0ULL; | 741 | cm_id->service_mask = __constant_cpu_to_be64(~0ULL); |
740 | } else { | 742 | } else { |
741 | cm_id->service_id = service_id; | 743 | cm_id->service_id = service_id; |
742 | cm_id->service_mask = service_mask; | 744 | cm_id->service_mask = service_mask; |
@@ -752,18 +754,19 @@ int ib_cm_listen(struct ib_cm_id *cm_id, | |||
752 | } | 754 | } |
753 | EXPORT_SYMBOL(ib_cm_listen); | 755 | EXPORT_SYMBOL(ib_cm_listen); |
754 | 756 | ||
755 | static u64 cm_form_tid(struct cm_id_private *cm_id_priv, | 757 | static __be64 cm_form_tid(struct cm_id_private *cm_id_priv, |
756 | enum cm_msg_sequence msg_seq) | 758 | enum cm_msg_sequence msg_seq) |
757 | { | 759 | { |
758 | u64 hi_tid, low_tid; | 760 | u64 hi_tid, low_tid; |
759 | 761 | ||
760 | hi_tid = ((u64) cm_id_priv->av.port->mad_agent->hi_tid) << 32; | 762 | hi_tid = ((u64) cm_id_priv->av.port->mad_agent->hi_tid) << 32; |
761 | low_tid = (u64) (cm_id_priv->id.local_id | (msg_seq << 30)); | 763 | low_tid = (u64) ((__force u32)cm_id_priv->id.local_id | |
764 | (msg_seq << 30)); | ||
762 | return cpu_to_be64(hi_tid | low_tid); | 765 | return cpu_to_be64(hi_tid | low_tid); |
763 | } | 766 | } |
764 | 767 | ||
765 | static void cm_format_mad_hdr(struct ib_mad_hdr *hdr, | 768 | static void cm_format_mad_hdr(struct ib_mad_hdr *hdr, |
766 | enum cm_msg_attr_id attr_id, u64 tid) | 769 | __be16 attr_id, __be64 tid) |
767 | { | 770 | { |
768 | hdr->base_version = IB_MGMT_BASE_VERSION; | 771 | hdr->base_version = IB_MGMT_BASE_VERSION; |
769 | hdr->mgmt_class = IB_MGMT_CLASS_CM; | 772 | hdr->mgmt_class = IB_MGMT_CLASS_CM; |
@@ -896,7 +899,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id, | |||
896 | goto error1; | 899 | goto error1; |
897 | } | 900 | } |
898 | cm_id->service_id = param->service_id; | 901 | cm_id->service_id = param->service_id; |
899 | cm_id->service_mask = ~0ULL; | 902 | cm_id->service_mask = __constant_cpu_to_be64(~0ULL); |
900 | cm_id_priv->timeout_ms = cm_convert_to_ms( | 903 | cm_id_priv->timeout_ms = cm_convert_to_ms( |
901 | param->primary_path->packet_life_time) * 2 + | 904 | param->primary_path->packet_life_time) * 2 + |
902 | cm_convert_to_ms( | 905 | cm_convert_to_ms( |
@@ -963,7 +966,7 @@ static int cm_issue_rej(struct cm_port *port, | |||
963 | rej_msg->remote_comm_id = rcv_msg->local_comm_id; | 966 | rej_msg->remote_comm_id = rcv_msg->local_comm_id; |
964 | rej_msg->local_comm_id = rcv_msg->remote_comm_id; | 967 | rej_msg->local_comm_id = rcv_msg->remote_comm_id; |
965 | cm_rej_set_msg_rejected(rej_msg, msg_rejected); | 968 | cm_rej_set_msg_rejected(rej_msg, msg_rejected); |
966 | rej_msg->reason = reason; | 969 | rej_msg->reason = cpu_to_be16(reason); |
967 | 970 | ||
968 | if (ari && ari_length) { | 971 | if (ari && ari_length) { |
969 | cm_rej_set_reject_info_len(rej_msg, ari_length); | 972 | cm_rej_set_reject_info_len(rej_msg, ari_length); |
@@ -977,8 +980,8 @@ static int cm_issue_rej(struct cm_port *port, | |||
977 | return ret; | 980 | return ret; |
978 | } | 981 | } |
979 | 982 | ||
980 | static inline int cm_is_active_peer(u64 local_ca_guid, u64 remote_ca_guid, | 983 | static inline int cm_is_active_peer(__be64 local_ca_guid, __be64 remote_ca_guid, |
981 | u32 local_qpn, u32 remote_qpn) | 984 | __be32 local_qpn, __be32 remote_qpn) |
982 | { | 985 | { |
983 | return (be64_to_cpu(local_ca_guid) > be64_to_cpu(remote_ca_guid) || | 986 | return (be64_to_cpu(local_ca_guid) > be64_to_cpu(remote_ca_guid) || |
984 | ((local_ca_guid == remote_ca_guid) && | 987 | ((local_ca_guid == remote_ca_guid) && |
@@ -1137,7 +1140,7 @@ static void cm_format_rej(struct cm_rej_msg *rej_msg, | |||
1137 | break; | 1140 | break; |
1138 | } | 1141 | } |
1139 | 1142 | ||
1140 | rej_msg->reason = reason; | 1143 | rej_msg->reason = cpu_to_be16(reason); |
1141 | if (ari && ari_length) { | 1144 | if (ari && ari_length) { |
1142 | cm_rej_set_reject_info_len(rej_msg, ari_length); | 1145 | cm_rej_set_reject_info_len(rej_msg, ari_length); |
1143 | memcpy(rej_msg->ari, ari, ari_length); | 1146 | memcpy(rej_msg->ari, ari, ari_length); |
@@ -1276,7 +1279,7 @@ static int cm_req_handler(struct cm_work *work) | |||
1276 | cm_id_priv->id.cm_handler = listen_cm_id_priv->id.cm_handler; | 1279 | cm_id_priv->id.cm_handler = listen_cm_id_priv->id.cm_handler; |
1277 | cm_id_priv->id.context = listen_cm_id_priv->id.context; | 1280 | cm_id_priv->id.context = listen_cm_id_priv->id.context; |
1278 | cm_id_priv->id.service_id = req_msg->service_id; | 1281 | cm_id_priv->id.service_id = req_msg->service_id; |
1279 | cm_id_priv->id.service_mask = ~0ULL; | 1282 | cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL); |
1280 | 1283 | ||
1281 | cm_format_paths_from_req(req_msg, &work->path[0], &work->path[1]); | 1284 | cm_format_paths_from_req(req_msg, &work->path[0], &work->path[1]); |
1282 | ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av); | 1285 | ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av); |
@@ -1969,7 +1972,7 @@ static void cm_format_rej_event(struct cm_work *work) | |||
1969 | param = &work->cm_event.param.rej_rcvd; | 1972 | param = &work->cm_event.param.rej_rcvd; |
1970 | param->ari = rej_msg->ari; | 1973 | param->ari = rej_msg->ari; |
1971 | param->ari_length = cm_rej_get_reject_info_len(rej_msg); | 1974 | param->ari_length = cm_rej_get_reject_info_len(rej_msg); |
1972 | param->reason = rej_msg->reason; | 1975 | param->reason = __be16_to_cpu(rej_msg->reason); |
1973 | work->cm_event.private_data = &rej_msg->private_data; | 1976 | work->cm_event.private_data = &rej_msg->private_data; |
1974 | } | 1977 | } |
1975 | 1978 | ||
@@ -1978,20 +1981,20 @@ static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg) | |||
1978 | struct cm_timewait_info *timewait_info; | 1981 | struct cm_timewait_info *timewait_info; |
1979 | struct cm_id_private *cm_id_priv; | 1982 | struct cm_id_private *cm_id_priv; |
1980 | unsigned long flags; | 1983 | unsigned long flags; |
1981 | u32 remote_id; | 1984 | __be32 remote_id; |
1982 | 1985 | ||
1983 | remote_id = rej_msg->local_comm_id; | 1986 | remote_id = rej_msg->local_comm_id; |
1984 | 1987 | ||
1985 | if (rej_msg->reason == IB_CM_REJ_TIMEOUT) { | 1988 | if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_TIMEOUT) { |
1986 | spin_lock_irqsave(&cm.lock, flags); | 1989 | spin_lock_irqsave(&cm.lock, flags); |
1987 | timewait_info = cm_find_remote_id( *((u64 *) rej_msg->ari), | 1990 | timewait_info = cm_find_remote_id( *((__be64 *) rej_msg->ari), |
1988 | remote_id); | 1991 | remote_id); |
1989 | if (!timewait_info) { | 1992 | if (!timewait_info) { |
1990 | spin_unlock_irqrestore(&cm.lock, flags); | 1993 | spin_unlock_irqrestore(&cm.lock, flags); |
1991 | return NULL; | 1994 | return NULL; |
1992 | } | 1995 | } |
1993 | cm_id_priv = idr_find(&cm.local_id_table, | 1996 | cm_id_priv = idr_find(&cm.local_id_table, |
1994 | (int) timewait_info->work.local_id); | 1997 | (__force int) timewait_info->work.local_id); |
1995 | if (cm_id_priv) { | 1998 | if (cm_id_priv) { |
1996 | if (cm_id_priv->id.remote_id == remote_id) | 1999 | if (cm_id_priv->id.remote_id == remote_id) |
1997 | atomic_inc(&cm_id_priv->refcount); | 2000 | atomic_inc(&cm_id_priv->refcount); |
@@ -2032,7 +2035,7 @@ static int cm_rej_handler(struct cm_work *work) | |||
2032 | /* fall through */ | 2035 | /* fall through */ |
2033 | case IB_CM_REQ_RCVD: | 2036 | case IB_CM_REQ_RCVD: |
2034 | case IB_CM_MRA_REQ_SENT: | 2037 | case IB_CM_MRA_REQ_SENT: |
2035 | if (rej_msg->reason == IB_CM_REJ_STALE_CONN) | 2038 | if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_STALE_CONN) |
2036 | cm_enter_timewait(cm_id_priv); | 2039 | cm_enter_timewait(cm_id_priv); |
2037 | else | 2040 | else |
2038 | cm_reset_to_idle(cm_id_priv); | 2041 | cm_reset_to_idle(cm_id_priv); |
@@ -2553,7 +2556,7 @@ static void cm_format_sidr_req(struct cm_sidr_req_msg *sidr_req_msg, | |||
2553 | cm_format_mad_hdr(&sidr_req_msg->hdr, CM_SIDR_REQ_ATTR_ID, | 2556 | cm_format_mad_hdr(&sidr_req_msg->hdr, CM_SIDR_REQ_ATTR_ID, |
2554 | cm_form_tid(cm_id_priv, CM_MSG_SEQUENCE_SIDR)); | 2557 | cm_form_tid(cm_id_priv, CM_MSG_SEQUENCE_SIDR)); |
2555 | sidr_req_msg->request_id = cm_id_priv->id.local_id; | 2558 | sidr_req_msg->request_id = cm_id_priv->id.local_id; |
2556 | sidr_req_msg->pkey = param->pkey; | 2559 | sidr_req_msg->pkey = cpu_to_be16(param->pkey); |
2557 | sidr_req_msg->service_id = param->service_id; | 2560 | sidr_req_msg->service_id = param->service_id; |
2558 | 2561 | ||
2559 | if (param->private_data && param->private_data_len) | 2562 | if (param->private_data && param->private_data_len) |
@@ -2580,7 +2583,7 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id, | |||
2580 | goto out; | 2583 | goto out; |
2581 | 2584 | ||
2582 | cm_id->service_id = param->service_id; | 2585 | cm_id->service_id = param->service_id; |
2583 | cm_id->service_mask = ~0ULL; | 2586 | cm_id->service_mask = __constant_cpu_to_be64(~0ULL); |
2584 | cm_id_priv->timeout_ms = param->timeout_ms; | 2587 | cm_id_priv->timeout_ms = param->timeout_ms; |
2585 | cm_id_priv->max_cm_retries = param->max_cm_retries; | 2588 | cm_id_priv->max_cm_retries = param->max_cm_retries; |
2586 | ret = cm_alloc_msg(cm_id_priv, &msg); | 2589 | ret = cm_alloc_msg(cm_id_priv, &msg); |
@@ -2621,7 +2624,7 @@ static void cm_format_sidr_req_event(struct cm_work *work, | |||
2621 | sidr_req_msg = (struct cm_sidr_req_msg *) | 2624 | sidr_req_msg = (struct cm_sidr_req_msg *) |
2622 | work->mad_recv_wc->recv_buf.mad; | 2625 | work->mad_recv_wc->recv_buf.mad; |
2623 | param = &work->cm_event.param.sidr_req_rcvd; | 2626 | param = &work->cm_event.param.sidr_req_rcvd; |
2624 | param->pkey = sidr_req_msg->pkey; | 2627 | param->pkey = __be16_to_cpu(sidr_req_msg->pkey); |
2625 | param->listen_id = listen_id; | 2628 | param->listen_id = listen_id; |
2626 | param->device = work->port->mad_agent->device; | 2629 | param->device = work->port->mad_agent->device; |
2627 | param->port = work->port->port_num; | 2630 | param->port = work->port->port_num; |
@@ -2645,7 +2648,7 @@ static int cm_sidr_req_handler(struct cm_work *work) | |||
2645 | sidr_req_msg = (struct cm_sidr_req_msg *) | 2648 | sidr_req_msg = (struct cm_sidr_req_msg *) |
2646 | work->mad_recv_wc->recv_buf.mad; | 2649 | work->mad_recv_wc->recv_buf.mad; |
2647 | wc = work->mad_recv_wc->wc; | 2650 | wc = work->mad_recv_wc->wc; |
2648 | cm_id_priv->av.dgid.global.subnet_prefix = wc->slid; | 2651 | cm_id_priv->av.dgid.global.subnet_prefix = cpu_to_be64(wc->slid); |
2649 | cm_id_priv->av.dgid.global.interface_id = 0; | 2652 | cm_id_priv->av.dgid.global.interface_id = 0; |
2650 | cm_init_av_for_response(work->port, work->mad_recv_wc->wc, | 2653 | cm_init_av_for_response(work->port, work->mad_recv_wc->wc, |
2651 | &cm_id_priv->av); | 2654 | &cm_id_priv->av); |
@@ -2673,7 +2676,7 @@ static int cm_sidr_req_handler(struct cm_work *work) | |||
2673 | cm_id_priv->id.cm_handler = cur_cm_id_priv->id.cm_handler; | 2676 | cm_id_priv->id.cm_handler = cur_cm_id_priv->id.cm_handler; |
2674 | cm_id_priv->id.context = cur_cm_id_priv->id.context; | 2677 | cm_id_priv->id.context = cur_cm_id_priv->id.context; |
2675 | cm_id_priv->id.service_id = sidr_req_msg->service_id; | 2678 | cm_id_priv->id.service_id = sidr_req_msg->service_id; |
2676 | cm_id_priv->id.service_mask = ~0ULL; | 2679 | cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL); |
2677 | 2680 | ||
2678 | cm_format_sidr_req_event(work, &cur_cm_id_priv->id); | 2681 | cm_format_sidr_req_event(work, &cur_cm_id_priv->id); |
2679 | cm_process_work(cm_id_priv, work); | 2682 | cm_process_work(cm_id_priv, work); |
@@ -3175,10 +3178,10 @@ int ib_cm_init_qp_attr(struct ib_cm_id *cm_id, | |||
3175 | } | 3178 | } |
3176 | EXPORT_SYMBOL(ib_cm_init_qp_attr); | 3179 | EXPORT_SYMBOL(ib_cm_init_qp_attr); |
3177 | 3180 | ||
3178 | static u64 cm_get_ca_guid(struct ib_device *device) | 3181 | static __be64 cm_get_ca_guid(struct ib_device *device) |
3179 | { | 3182 | { |
3180 | struct ib_device_attr *device_attr; | 3183 | struct ib_device_attr *device_attr; |
3181 | u64 guid; | 3184 | __be64 guid; |
3182 | int ret; | 3185 | int ret; |
3183 | 3186 | ||
3184 | device_attr = kmalloc(sizeof *device_attr, GFP_KERNEL); | 3187 | device_attr = kmalloc(sizeof *device_attr, GFP_KERNEL); |
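The cm.c changes above are mostly sparse endianness annotations: on-the-wire identifiers move from u64/u32/u16 to __be64/__be32/__be16, conversions go through cpu_to_be*()/be*_to_cpu(), and the one place that deliberately reuses a big-endian ID as a host-order idr key is marked with a __force cast. A small illustrative sketch of the idiom; the struct and helpers here are examples, not part of the driver.

	struct wire_id {
		__be32 local_id;	/* kept in network byte order */
	};

	static void wire_id_set(struct wire_id *id, u32 host_id)
	{
		id->local_id = cpu_to_be32(host_id);	/* annotated conversion */
	}

	static int wire_id_idr_key(const struct wire_id *id)
	{
		/* Intentional reinterpretation as a host int; __force
		 * tells sparse the cast is deliberate.
		 */
		return (__force int) id->local_id;
	}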
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h index 15a309a77b2b..813ab70bf6d5 100644 --- a/drivers/infiniband/core/cm_msgs.h +++ b/drivers/infiniband/core/cm_msgs.h | |||
@@ -34,7 +34,7 @@ | |||
34 | #if !defined(CM_MSGS_H) | 34 | #if !defined(CM_MSGS_H) |
35 | #define CM_MSGS_H | 35 | #define CM_MSGS_H |
36 | 36 | ||
37 | #include <ib_mad.h> | 37 | #include <rdma/ib_mad.h> |
38 | 38 | ||
39 | /* | 39 | /* |
40 | * Parameters to routines below should be in network-byte order, and values | 40 | * Parameters to routines below should be in network-byte order, and values |
@@ -43,19 +43,17 @@ | |||
43 | 43 | ||
44 | #define IB_CM_CLASS_VERSION 2 /* IB specification 1.2 */ | 44 | #define IB_CM_CLASS_VERSION 2 /* IB specification 1.2 */ |
45 | 45 | ||
46 | enum cm_msg_attr_id { | 46 | #define CM_REQ_ATTR_ID __constant_htons(0x0010) |
47 | CM_REQ_ATTR_ID = __constant_htons(0x0010), | 47 | #define CM_MRA_ATTR_ID __constant_htons(0x0011) |
48 | CM_MRA_ATTR_ID = __constant_htons(0x0011), | 48 | #define CM_REJ_ATTR_ID __constant_htons(0x0012) |
49 | CM_REJ_ATTR_ID = __constant_htons(0x0012), | 49 | #define CM_REP_ATTR_ID __constant_htons(0x0013) |
50 | CM_REP_ATTR_ID = __constant_htons(0x0013), | 50 | #define CM_RTU_ATTR_ID __constant_htons(0x0014) |
51 | CM_RTU_ATTR_ID = __constant_htons(0x0014), | 51 | #define CM_DREQ_ATTR_ID __constant_htons(0x0015) |
52 | CM_DREQ_ATTR_ID = __constant_htons(0x0015), | 52 | #define CM_DREP_ATTR_ID __constant_htons(0x0016) |
53 | CM_DREP_ATTR_ID = __constant_htons(0x0016), | 53 | #define CM_SIDR_REQ_ATTR_ID __constant_htons(0x0017) |
54 | CM_SIDR_REQ_ATTR_ID = __constant_htons(0x0017), | 54 | #define CM_SIDR_REP_ATTR_ID __constant_htons(0x0018) |
55 | CM_SIDR_REP_ATTR_ID = __constant_htons(0x0018), | 55 | #define CM_LAP_ATTR_ID __constant_htons(0x0019) |
56 | CM_LAP_ATTR_ID = __constant_htons(0x0019), | 56 | #define CM_APR_ATTR_ID __constant_htons(0x001A) |
57 | CM_APR_ATTR_ID = __constant_htons(0x001A) | ||
58 | }; | ||
59 | 57 | ||
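Moving the attribute IDs from an enum to #defines keeps their __be16 typing: an enumerator silently becomes a plain int, so sparse would lose (and warn about) the byte-order annotation, while __constant_htons() folds at compile time and the IDs remain usable as constant expressions. A hedged sketch of how such a constant is typically consumed -- the helper below is illustrative, not part of this patch:

  static int example_is_cm_req(const struct ib_mad_hdr *hdr)
  {
          /* Both sides are __be16, so no byte swapping is needed and the
           * comparison passes sparse's endian checks. */
          return hdr->attr_id == CM_REQ_ATTR_ID;
  }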
60 | enum cm_msg_sequence { | 58 | enum cm_msg_sequence { |
61 | CM_MSG_SEQUENCE_REQ, | 59 | CM_MSG_SEQUENCE_REQ, |
@@ -67,35 +65,35 @@ enum cm_msg_sequence { | |||
67 | struct cm_req_msg { | 65 | struct cm_req_msg { |
68 | struct ib_mad_hdr hdr; | 66 | struct ib_mad_hdr hdr; |
69 | 67 | ||
70 | u32 local_comm_id; | 68 | __be32 local_comm_id; |
71 | u32 rsvd4; | 69 | __be32 rsvd4; |
72 | u64 service_id; | 70 | __be64 service_id; |
73 | u64 local_ca_guid; | 71 | __be64 local_ca_guid; |
74 | u32 rsvd24; | 72 | __be32 rsvd24; |
75 | u32 local_qkey; | 73 | __be32 local_qkey; |
76 | /* local QPN:24, responder resources:8 */ | 74 | /* local QPN:24, responder resources:8 */ |
77 | u32 offset32; | 75 | __be32 offset32; |
78 | /* local EECN:24, initiator depth:8 */ | 76 | /* local EECN:24, initiator depth:8 */ |
79 | u32 offset36; | 77 | __be32 offset36; |
80 | /* | 78 | /* |
81 | * remote EECN:24, remote CM response timeout:5, | 79 | * remote EECN:24, remote CM response timeout:5, |
82 | * transport service type:2, end-to-end flow control:1 | 80 | * transport service type:2, end-to-end flow control:1 |
83 | */ | 81 | */ |
84 | u32 offset40; | 82 | __be32 offset40; |
85 | /* starting PSN:24, local CM response timeout:5, retry count:3 */ | 83 | /* starting PSN:24, local CM response timeout:5, retry count:3 */ |
86 | u32 offset44; | 84 | __be32 offset44; |
87 | u16 pkey; | 85 | __be16 pkey; |
88 | /* path MTU:4, RDC exists:1, RNR retry count:3. */ | 86 | /* path MTU:4, RDC exists:1, RNR retry count:3. */ |
89 | u8 offset50; | 87 | u8 offset50; |
90 | /* max CM Retries:4, SRQ:1, rsvd:3 */ | 88 | /* max CM Retries:4, SRQ:1, rsvd:3 */ |
91 | u8 offset51; | 89 | u8 offset51; |
92 | 90 | ||
93 | u16 primary_local_lid; | 91 | __be16 primary_local_lid; |
94 | u16 primary_remote_lid; | 92 | __be16 primary_remote_lid; |
95 | union ib_gid primary_local_gid; | 93 | union ib_gid primary_local_gid; |
96 | union ib_gid primary_remote_gid; | 94 | union ib_gid primary_remote_gid; |
97 | /* flow label:20, rsvd:6, packet rate:6 */ | 95 | /* flow label:20, rsvd:6, packet rate:6 */ |
98 | u32 primary_offset88; | 96 | __be32 primary_offset88; |
99 | u8 primary_traffic_class; | 97 | u8 primary_traffic_class; |
100 | u8 primary_hop_limit; | 98 | u8 primary_hop_limit; |
101 | /* SL:4, subnet local:1, rsvd:3 */ | 99 | /* SL:4, subnet local:1, rsvd:3 */ |
@@ -103,12 +101,12 @@ struct cm_req_msg { | |||
103 | /* local ACK timeout:5, rsvd:3 */ | 101 | /* local ACK timeout:5, rsvd:3 */ |
104 | u8 primary_offset95; | 102 | u8 primary_offset95; |
105 | 103 | ||
106 | u16 alt_local_lid; | 104 | __be16 alt_local_lid; |
107 | u16 alt_remote_lid; | 105 | __be16 alt_remote_lid; |
108 | union ib_gid alt_local_gid; | 106 | union ib_gid alt_local_gid; |
109 | union ib_gid alt_remote_gid; | 107 | union ib_gid alt_remote_gid; |
110 | /* flow label:20, rsvd:6, packet rate:6 */ | 108 | /* flow label:20, rsvd:6, packet rate:6 */ |
111 | u32 alt_offset132; | 109 | __be32 alt_offset132; |
112 | u8 alt_traffic_class; | 110 | u8 alt_traffic_class; |
113 | u8 alt_hop_limit; | 111 | u8 alt_hop_limit; |
114 | /* SL:4, subnet local:1, rsvd:3 */ | 112 | /* SL:4, subnet local:1, rsvd:3 */ |
@@ -120,12 +118,12 @@ struct cm_req_msg { | |||
120 | 118 | ||
121 | } __attribute__ ((packed)); | 119 | } __attribute__ ((packed)); |
122 | 120 | ||
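cm_req_msg and the other message structs in this header mirror the on-the-wire MAD layout byte for byte, which is why each is declared __attribute__ ((packed)): the compiler must not insert padding between oddly aligned fields such as a __be64 following a u8. A standalone sketch of the effect -- the struct and the sizes quoted are illustrative, not taken from this header:

  struct example_unpacked {
          u8     tag;
          __be64 guid;    /* compiler may insert padding before this */
  };

  struct example_packed {
          u8     tag;
          __be64 guid;    /* starts at byte 1, exactly as on the wire */
  } __attribute__ ((packed));

  /* sizeof(struct example_unpacked) is typically 16;
   * sizeof(struct example_packed) is 9. */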
123 | static inline u32 cm_req_get_local_qpn(struct cm_req_msg *req_msg) | 121 | static inline __be32 cm_req_get_local_qpn(struct cm_req_msg *req_msg) |
124 | { | 122 | { |
125 | return cpu_to_be32(be32_to_cpu(req_msg->offset32) >> 8); | 123 | return cpu_to_be32(be32_to_cpu(req_msg->offset32) >> 8); |
126 | } | 124 | } |
127 | 125 | ||
128 | static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, u32 qpn) | 126 | static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, __be32 qpn) |
129 | { | 127 | { |
130 | req_msg->offset32 = cpu_to_be32((be32_to_cpu(qpn) << 8) | | 128 | req_msg->offset32 = cpu_to_be32((be32_to_cpu(qpn) << 8) | |
131 | (be32_to_cpu(req_msg->offset32) & | 129 | (be32_to_cpu(req_msg->offset32) & |
@@ -208,13 +206,13 @@ static inline void cm_req_set_flow_ctrl(struct cm_req_msg *req_msg, | |||
208 | 0xFFFFFFFE)); | 206 | 0xFFFFFFFE)); |
209 | } | 207 | } |
210 | 208 | ||
211 | static inline u32 cm_req_get_starting_psn(struct cm_req_msg *req_msg) | 209 | static inline __be32 cm_req_get_starting_psn(struct cm_req_msg *req_msg) |
212 | { | 210 | { |
213 | return cpu_to_be32(be32_to_cpu(req_msg->offset44) >> 8); | 211 | return cpu_to_be32(be32_to_cpu(req_msg->offset44) >> 8); |
214 | } | 212 | } |
215 | 213 | ||
216 | static inline void cm_req_set_starting_psn(struct cm_req_msg *req_msg, | 214 | static inline void cm_req_set_starting_psn(struct cm_req_msg *req_msg, |
217 | u32 starting_psn) | 215 | __be32 starting_psn) |
218 | { | 216 | { |
219 | req_msg->offset44 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) | | 217 | req_msg->offset44 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) | |
220 | (be32_to_cpu(req_msg->offset44) & 0x000000FF)); | 218 | (be32_to_cpu(req_msg->offset44) & 0x000000FF)); |
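The accessor pattern above repeats throughout this header: a 24-bit quantity such as the starting PSN shares a __be32 with neighbouring bits, so the getter converts to CPU order, shifts the field down, and converts back, while the setter shifts the new value up and merges it with the preserved low byte. A small worked sketch of the same arithmetic with invented values:

  /* Worked example: set starting PSN 0x123456 while preserving
   * response-timeout/retry bits 0xA5 in the low byte of the field. */
  static void example_psn_roundtrip(void)
  {
          __be32 psn   = cpu_to_be32(0x123456);
          __be32 field = cpu_to_be32(0x000000A5);

          /* Setter: (0x123456 << 8) | 0xA5 == 0x123456A5 */
          field = cpu_to_be32((be32_to_cpu(psn) << 8) |
                              (be32_to_cpu(field) & 0x000000FF));

          /* Getter: 0x123456A5 >> 8 == 0x123456, handed back as __be32 */
          psn = cpu_to_be32(be32_to_cpu(field) >> 8);
  }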
@@ -288,13 +286,13 @@ static inline void cm_req_set_srq(struct cm_req_msg *req_msg, u8 srq) | |||
288 | ((srq & 0x1) << 3)); | 286 | ((srq & 0x1) << 3)); |
289 | } | 287 | } |
290 | 288 | ||
291 | static inline u32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg) | 289 | static inline __be32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg) |
292 | { | 290 | { |
293 | return cpu_to_be32((be32_to_cpu(req_msg->primary_offset88) >> 12)); | 291 | return cpu_to_be32(be32_to_cpu(req_msg->primary_offset88) >> 12); |
294 | } | 292 | } |
295 | 293 | ||
296 | static inline void cm_req_set_primary_flow_label(struct cm_req_msg *req_msg, | 294 | static inline void cm_req_set_primary_flow_label(struct cm_req_msg *req_msg, |
297 | u32 flow_label) | 295 | __be32 flow_label) |
298 | { | 296 | { |
299 | req_msg->primary_offset88 = cpu_to_be32( | 297 | req_msg->primary_offset88 = cpu_to_be32( |
300 | (be32_to_cpu(req_msg->primary_offset88) & | 298 | (be32_to_cpu(req_msg->primary_offset88) & |
@@ -350,13 +348,13 @@ static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_m | |||
350 | (local_ack_timeout << 3)); | 348 | (local_ack_timeout << 3)); |
351 | } | 349 | } |
352 | 350 | ||
353 | static inline u32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg) | 351 | static inline __be32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg) |
354 | { | 352 | { |
355 | return cpu_to_be32((be32_to_cpu(req_msg->alt_offset132) >> 12)); | 353 | return cpu_to_be32(be32_to_cpu(req_msg->alt_offset132) >> 12); |
356 | } | 354 | } |
357 | 355 | ||
358 | static inline void cm_req_set_alt_flow_label(struct cm_req_msg *req_msg, | 356 | static inline void cm_req_set_alt_flow_label(struct cm_req_msg *req_msg, |
359 | u32 flow_label) | 357 | __be32 flow_label) |
360 | { | 358 | { |
361 | req_msg->alt_offset132 = cpu_to_be32( | 359 | req_msg->alt_offset132 = cpu_to_be32( |
362 | (be32_to_cpu(req_msg->alt_offset132) & | 360 | (be32_to_cpu(req_msg->alt_offset132) & |
@@ -422,8 +420,8 @@ enum cm_msg_response { | |||
422 | struct cm_mra_msg { | 420 | struct cm_mra_msg { |
423 | struct ib_mad_hdr hdr; | 421 | struct ib_mad_hdr hdr; |
424 | 422 | ||
425 | u32 local_comm_id; | 423 | __be32 local_comm_id; |
426 | u32 remote_comm_id; | 424 | __be32 remote_comm_id; |
427 | /* message MRAed:2, rsvd:6 */ | 425 | /* message MRAed:2, rsvd:6 */ |
428 | u8 offset8; | 426 | u8 offset8; |
429 | /* service timeout:5, rsvd:3 */ | 427 | /* service timeout:5, rsvd:3 */ |
@@ -458,13 +456,13 @@ static inline void cm_mra_set_service_timeout(struct cm_mra_msg *mra_msg, | |||
458 | struct cm_rej_msg { | 456 | struct cm_rej_msg { |
459 | struct ib_mad_hdr hdr; | 457 | struct ib_mad_hdr hdr; |
460 | 458 | ||
461 | u32 local_comm_id; | 459 | __be32 local_comm_id; |
462 | u32 remote_comm_id; | 460 | __be32 remote_comm_id; |
463 | /* message REJected:2, rsvd:6 */ | 461 | /* message REJected:2, rsvd:6 */ |
464 | u8 offset8; | 462 | u8 offset8; |
465 | /* reject info length:7, rsvd:1. */ | 463 | /* reject info length:7, rsvd:1. */ |
466 | u8 offset9; | 464 | u8 offset9; |
467 | u16 reason; | 465 | __be16 reason; |
468 | u8 ari[IB_CM_REJ_ARI_LENGTH]; | 466 | u8 ari[IB_CM_REJ_ARI_LENGTH]; |
469 | 467 | ||
470 | u8 private_data[IB_CM_REJ_PRIVATE_DATA_SIZE]; | 468 | u8 private_data[IB_CM_REJ_PRIVATE_DATA_SIZE]; |
@@ -495,45 +493,45 @@ static inline void cm_rej_set_reject_info_len(struct cm_rej_msg *rej_msg, | |||
495 | struct cm_rep_msg { | 493 | struct cm_rep_msg { |
496 | struct ib_mad_hdr hdr; | 494 | struct ib_mad_hdr hdr; |
497 | 495 | ||
498 | u32 local_comm_id; | 496 | __be32 local_comm_id; |
499 | u32 remote_comm_id; | 497 | __be32 remote_comm_id; |
500 | u32 local_qkey; | 498 | __be32 local_qkey; |
501 | /* local QPN:24, rsvd:8 */ | 499 | /* local QPN:24, rsvd:8 */ |
502 | u32 offset12; | 500 | __be32 offset12; |
503 | /* local EECN:24, rsvd:8 */ | 501 | /* local EECN:24, rsvd:8 */ |
504 | u32 offset16; | 502 | __be32 offset16; |
505 | /* starting PSN:24 rsvd:8 */ | 503 | /* starting PSN:24 rsvd:8 */ |
506 | u32 offset20; | 504 | __be32 offset20; |
507 | u8 resp_resources; | 505 | u8 resp_resources; |
508 | u8 initiator_depth; | 506 | u8 initiator_depth; |
509 | /* target ACK delay:5, failover accepted:2, end-to-end flow control:1 */ | 507 | /* target ACK delay:5, failover accepted:2, end-to-end flow control:1 */ |
510 | u8 offset26; | 508 | u8 offset26; |
511 | /* RNR retry count:3, SRQ:1, rsvd:5 */ | 509 | /* RNR retry count:3, SRQ:1, rsvd:5 */ |
512 | u8 offset27; | 510 | u8 offset27; |
513 | u64 local_ca_guid; | 511 | __be64 local_ca_guid; |
514 | 512 | ||
515 | u8 private_data[IB_CM_REP_PRIVATE_DATA_SIZE]; | 513 | u8 private_data[IB_CM_REP_PRIVATE_DATA_SIZE]; |
516 | 514 | ||
517 | } __attribute__ ((packed)); | 515 | } __attribute__ ((packed)); |
518 | 516 | ||
519 | static inline u32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg) | 517 | static inline __be32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg) |
520 | { | 518 | { |
521 | return cpu_to_be32(be32_to_cpu(rep_msg->offset12) >> 8); | 519 | return cpu_to_be32(be32_to_cpu(rep_msg->offset12) >> 8); |
522 | } | 520 | } |
523 | 521 | ||
524 | static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, u32 qpn) | 522 | static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, __be32 qpn) |
525 | { | 523 | { |
526 | rep_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) | | 524 | rep_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) | |
527 | (be32_to_cpu(rep_msg->offset12) & 0x000000FF)); | 525 | (be32_to_cpu(rep_msg->offset12) & 0x000000FF)); |
528 | } | 526 | } |
529 | 527 | ||
530 | static inline u32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg) | 528 | static inline __be32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg) |
531 | { | 529 | { |
532 | return cpu_to_be32(be32_to_cpu(rep_msg->offset20) >> 8); | 530 | return cpu_to_be32(be32_to_cpu(rep_msg->offset20) >> 8); |
533 | } | 531 | } |
534 | 532 | ||
535 | static inline void cm_rep_set_starting_psn(struct cm_rep_msg *rep_msg, | 533 | static inline void cm_rep_set_starting_psn(struct cm_rep_msg *rep_msg, |
536 | u32 starting_psn) | 534 | __be32 starting_psn) |
537 | { | 535 | { |
538 | rep_msg->offset20 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) | | 536 | rep_msg->offset20 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) | |
539 | (be32_to_cpu(rep_msg->offset20) & 0x000000FF)); | 537 | (be32_to_cpu(rep_msg->offset20) & 0x000000FF)); |
@@ -600,8 +598,8 @@ static inline void cm_rep_set_srq(struct cm_rep_msg *rep_msg, u8 srq) | |||
600 | struct cm_rtu_msg { | 598 | struct cm_rtu_msg { |
601 | struct ib_mad_hdr hdr; | 599 | struct ib_mad_hdr hdr; |
602 | 600 | ||
603 | u32 local_comm_id; | 601 | __be32 local_comm_id; |
604 | u32 remote_comm_id; | 602 | __be32 remote_comm_id; |
605 | 603 | ||
606 | u8 private_data[IB_CM_RTU_PRIVATE_DATA_SIZE]; | 604 | u8 private_data[IB_CM_RTU_PRIVATE_DATA_SIZE]; |
607 | 605 | ||
@@ -610,21 +608,21 @@ struct cm_rtu_msg { | |||
610 | struct cm_dreq_msg { | 608 | struct cm_dreq_msg { |
611 | struct ib_mad_hdr hdr; | 609 | struct ib_mad_hdr hdr; |
612 | 610 | ||
613 | u32 local_comm_id; | 611 | __be32 local_comm_id; |
614 | u32 remote_comm_id; | 612 | __be32 remote_comm_id; |
615 | /* remote QPN/EECN:24, rsvd:8 */ | 613 | /* remote QPN/EECN:24, rsvd:8 */ |
616 | u32 offset8; | 614 | __be32 offset8; |
617 | 615 | ||
618 | u8 private_data[IB_CM_DREQ_PRIVATE_DATA_SIZE]; | 616 | u8 private_data[IB_CM_DREQ_PRIVATE_DATA_SIZE]; |
619 | 617 | ||
620 | } __attribute__ ((packed)); | 618 | } __attribute__ ((packed)); |
621 | 619 | ||
622 | static inline u32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg) | 620 | static inline __be32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg) |
623 | { | 621 | { |
624 | return cpu_to_be32(be32_to_cpu(dreq_msg->offset8) >> 8); | 622 | return cpu_to_be32(be32_to_cpu(dreq_msg->offset8) >> 8); |
625 | } | 623 | } |
626 | 624 | ||
627 | static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, u32 qpn) | 625 | static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, __be32 qpn) |
628 | { | 626 | { |
629 | dreq_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) | | 627 | dreq_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) | |
630 | (be32_to_cpu(dreq_msg->offset8) & 0x000000FF)); | 628 | (be32_to_cpu(dreq_msg->offset8) & 0x000000FF)); |
@@ -633,8 +631,8 @@ static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, u32 qpn) | |||
633 | struct cm_drep_msg { | 631 | struct cm_drep_msg { |
634 | struct ib_mad_hdr hdr; | 632 | struct ib_mad_hdr hdr; |
635 | 633 | ||
636 | u32 local_comm_id; | 634 | __be32 local_comm_id; |
637 | u32 remote_comm_id; | 635 | __be32 remote_comm_id; |
638 | 636 | ||
639 | u8 private_data[IB_CM_DREP_PRIVATE_DATA_SIZE]; | 637 | u8 private_data[IB_CM_DREP_PRIVATE_DATA_SIZE]; |
640 | 638 | ||
@@ -643,37 +641,37 @@ struct cm_drep_msg { | |||
643 | struct cm_lap_msg { | 641 | struct cm_lap_msg { |
644 | struct ib_mad_hdr hdr; | 642 | struct ib_mad_hdr hdr; |
645 | 643 | ||
646 | u32 local_comm_id; | 644 | __be32 local_comm_id; |
647 | u32 remote_comm_id; | 645 | __be32 remote_comm_id; |
648 | 646 | ||
649 | u32 rsvd8; | 647 | __be32 rsvd8; |
650 | /* remote QPN/EECN:24, remote CM response timeout:5, rsvd:3 */ | 648 | /* remote QPN/EECN:24, remote CM response timeout:5, rsvd:3 */ |
651 | u32 offset12; | 649 | __be32 offset12; |
652 | u32 rsvd16; | 650 | __be32 rsvd16; |
653 | 651 | ||
654 | u16 alt_local_lid; | 652 | __be16 alt_local_lid; |
655 | u16 alt_remote_lid; | 653 | __be16 alt_remote_lid; |
656 | union ib_gid alt_local_gid; | 654 | union ib_gid alt_local_gid; |
657 | union ib_gid alt_remote_gid; | 655 | union ib_gid alt_remote_gid; |
658 | /* flow label:20, rsvd:4, traffic class:8 */ | 656 | /* flow label:20, rsvd:4, traffic class:8 */ |
659 | u32 offset56; | 657 | __be32 offset56; |
660 | u8 alt_hop_limit; | 658 | u8 alt_hop_limit; |
661 | /* rsvd:2, packet rate:6 */ | 659 | /* rsvd:2, packet rate:6 */ |
662 | uint8_t offset61; | 660 | u8 offset61; |
663 | /* SL:4, subnet local:1, rsvd:3 */ | 661 | /* SL:4, subnet local:1, rsvd:3 */ |
664 | uint8_t offset62; | 662 | u8 offset62; |
665 | /* local ACK timeout:5, rsvd:3 */ | 663 | /* local ACK timeout:5, rsvd:3 */ |
666 | uint8_t offset63; | 664 | u8 offset63; |
667 | 665 | ||
668 | u8 private_data[IB_CM_LAP_PRIVATE_DATA_SIZE]; | 666 | u8 private_data[IB_CM_LAP_PRIVATE_DATA_SIZE]; |
669 | } __attribute__ ((packed)); | 667 | } __attribute__ ((packed)); |
670 | 668 | ||
671 | static inline u32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg) | 669 | static inline __be32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg) |
672 | { | 670 | { |
673 | return cpu_to_be32(be32_to_cpu(lap_msg->offset12) >> 8); | 671 | return cpu_to_be32(be32_to_cpu(lap_msg->offset12) >> 8); |
674 | } | 672 | } |
675 | 673 | ||
676 | static inline void cm_lap_set_remote_qpn(struct cm_lap_msg *lap_msg, u32 qpn) | 674 | static inline void cm_lap_set_remote_qpn(struct cm_lap_msg *lap_msg, __be32 qpn) |
677 | { | 675 | { |
678 | lap_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) | | 676 | lap_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) | |
679 | (be32_to_cpu(lap_msg->offset12) & | 677 | (be32_to_cpu(lap_msg->offset12) & |
@@ -693,17 +691,17 @@ static inline void cm_lap_set_remote_resp_timeout(struct cm_lap_msg *lap_msg, | |||
693 | 0xFFFFFF07)); | 691 | 0xFFFFFF07)); |
694 | } | 692 | } |
695 | 693 | ||
696 | static inline u32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg) | 694 | static inline __be32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg) |
697 | { | 695 | { |
698 | return be32_to_cpu(lap_msg->offset56) >> 12; | 696 | return cpu_to_be32(be32_to_cpu(lap_msg->offset56) >> 12); |
699 | } | 697 | } |
700 | 698 | ||
701 | static inline void cm_lap_set_flow_label(struct cm_lap_msg *lap_msg, | 699 | static inline void cm_lap_set_flow_label(struct cm_lap_msg *lap_msg, |
702 | u32 flow_label) | 700 | __be32 flow_label) |
703 | { | 701 | { |
704 | lap_msg->offset56 = cpu_to_be32((flow_label << 12) | | 702 | lap_msg->offset56 = cpu_to_be32( |
705 | (be32_to_cpu(lap_msg->offset56) & | 703 | (be32_to_cpu(lap_msg->offset56) & 0x00000FFF) | |
706 | 0x00000FFF)); | 704 | (be32_to_cpu(flow_label) << 12)); |
707 | } | 705 | } |
708 | 706 | ||
709 | static inline u8 cm_lap_get_traffic_class(struct cm_lap_msg *lap_msg) | 707 | static inline u8 cm_lap_get_traffic_class(struct cm_lap_msg *lap_msg) |
@@ -766,8 +764,8 @@ static inline void cm_lap_set_local_ack_timeout(struct cm_lap_msg *lap_msg, | |||
766 | struct cm_apr_msg { | 764 | struct cm_apr_msg { |
767 | struct ib_mad_hdr hdr; | 765 | struct ib_mad_hdr hdr; |
768 | 766 | ||
769 | u32 local_comm_id; | 767 | __be32 local_comm_id; |
770 | u32 remote_comm_id; | 768 | __be32 remote_comm_id; |
771 | 769 | ||
772 | u8 info_length; | 770 | u8 info_length; |
773 | u8 ap_status; | 771 | u8 ap_status; |
@@ -779,10 +777,10 @@ struct cm_apr_msg { | |||
779 | struct cm_sidr_req_msg { | 777 | struct cm_sidr_req_msg { |
780 | struct ib_mad_hdr hdr; | 778 | struct ib_mad_hdr hdr; |
781 | 779 | ||
782 | u32 request_id; | 780 | __be32 request_id; |
783 | u16 pkey; | 781 | __be16 pkey; |
784 | u16 rsvd; | 782 | __be16 rsvd; |
785 | u64 service_id; | 783 | __be64 service_id; |
786 | 784 | ||
787 | u8 private_data[IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE]; | 785 | u8 private_data[IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE]; |
788 | } __attribute__ ((packed)); | 786 | } __attribute__ ((packed)); |
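The SIDR REQ carries its service_id as a __be64, which is also why cm.c (above) now initialises the listener's service_mask with __constant_cpu_to_be64(~0ULL) rather than a bare ~0ULL: an all-ones mask selects the same IDs in either byte order, but the annotation keeps the types consistent for sparse. A hedged sketch of the kind of match test such a mask feeds into -- the helper name is invented:

  static int example_service_match(__be64 listen_id, __be64 listen_mask,
                                   __be64 req_id)
  {
          /* All operands stay in network byte order; with
           * listen_mask == cpu_to_be64(~0ULL) this is an exact match. */
          return (req_id & listen_mask) == (listen_id & listen_mask);
  }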
@@ -790,26 +788,26 @@ struct cm_sidr_req_msg { | |||
790 | struct cm_sidr_rep_msg { | 788 | struct cm_sidr_rep_msg { |
791 | struct ib_mad_hdr hdr; | 789 | struct ib_mad_hdr hdr; |
792 | 790 | ||
793 | u32 request_id; | 791 | __be32 request_id; |
794 | u8 status; | 792 | u8 status; |
795 | u8 info_length; | 793 | u8 info_length; |
796 | u16 rsvd; | 794 | __be16 rsvd; |
797 | /* QPN:24, rsvd:8 */ | 795 | /* QPN:24, rsvd:8 */ |
798 | u32 offset8; | 796 | __be32 offset8; |
799 | u64 service_id; | 797 | __be64 service_id; |
800 | u32 qkey; | 798 | __be32 qkey; |
801 | u8 info[IB_CM_SIDR_REP_INFO_LENGTH]; | 799 | u8 info[IB_CM_SIDR_REP_INFO_LENGTH]; |
802 | 800 | ||
803 | u8 private_data[IB_CM_SIDR_REP_PRIVATE_DATA_SIZE]; | 801 | u8 private_data[IB_CM_SIDR_REP_PRIVATE_DATA_SIZE]; |
804 | } __attribute__ ((packed)); | 802 | } __attribute__ ((packed)); |
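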
805 | 803 | ||
806 | static inline u32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg) | 804 | static inline __be32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg) |
807 | { | 805 | { |
808 | return cpu_to_be32(be32_to_cpu(sidr_rep_msg->offset8) >> 8); | 806 | return cpu_to_be32(be32_to_cpu(sidr_rep_msg->offset8) >> 8); |
809 | } | 807 | } |
810 | 808 | ||
811 | static inline void cm_sidr_rep_set_qpn(struct cm_sidr_rep_msg *sidr_rep_msg, | 809 | static inline void cm_sidr_rep_set_qpn(struct cm_sidr_rep_msg *sidr_rep_msg, |
812 | u32 qpn) | 810 | __be32 qpn) |
813 | { | 811 | { |
814 | sidr_rep_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) | | 812 | sidr_rep_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) | |
815 | (be32_to_cpu(sidr_rep_msg->offset8) & | 813 | (be32_to_cpu(sidr_rep_msg->offset8) & |
diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h index 797049626ff6..7ad47a4b166b 100644 --- a/drivers/infiniband/core/core_priv.h +++ b/drivers/infiniband/core/core_priv.h | |||
@@ -38,7 +38,7 @@ | |||
38 | #include <linux/list.h> | 38 | #include <linux/list.h> |
39 | #include <linux/spinlock.h> | 39 | #include <linux/spinlock.h> |
40 | 40 | ||
41 | #include <ib_verbs.h> | 41 | #include <rdma/ib_verbs.h> |
42 | 42 | ||
43 | int ib_device_register_sysfs(struct ib_device *device); | 43 | int ib_device_register_sysfs(struct ib_device *device); |
44 | void ib_device_unregister_sysfs(struct ib_device *device); | 44 | void ib_device_unregister_sysfs(struct ib_device *device); |
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index 9197e92d708a..d3cf84e01587 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
diff --git a/drivers/infiniband/core/fmr_pool.c b/drivers/infiniband/core/fmr_pool.c index 7763b31abba7..d34a6f1c4f4c 100644 --- a/drivers/infiniband/core/fmr_pool.c +++ b/drivers/infiniband/core/fmr_pool.c | |||
@@ -39,7 +39,7 @@ | |||
39 | #include <linux/jhash.h> | 39 | #include <linux/jhash.h> |
40 | #include <linux/kthread.h> | 40 | #include <linux/kthread.h> |
41 | 41 | ||
42 | #include <ib_fmr_pool.h> | 42 | #include <rdma/ib_fmr_pool.h> |
43 | 43 | ||
44 | #include "core_priv.h" | 44 | #include "core_priv.h" |
45 | 45 | ||
@@ -334,6 +334,7 @@ void ib_destroy_fmr_pool(struct ib_fmr_pool *pool) | |||
334 | { | 334 | { |
335 | struct ib_pool_fmr *fmr; | 335 | struct ib_pool_fmr *fmr; |
336 | struct ib_pool_fmr *tmp; | 336 | struct ib_pool_fmr *tmp; |
337 | LIST_HEAD(fmr_list); | ||
337 | int i; | 338 | int i; |
338 | 339 | ||
339 | kthread_stop(pool->thread); | 340 | kthread_stop(pool->thread); |
@@ -341,6 +342,11 @@ void ib_destroy_fmr_pool(struct ib_fmr_pool *pool) | |||
341 | 342 | ||
342 | i = 0; | 343 | i = 0; |
343 | list_for_each_entry_safe(fmr, tmp, &pool->free_list, list) { | 344 | list_for_each_entry_safe(fmr, tmp, &pool->free_list, list) { |
345 | if (fmr->remap_count) { | ||
346 | INIT_LIST_HEAD(&fmr_list); | ||
347 | list_add_tail(&fmr->fmr->list, &fmr_list); | ||
348 | ib_unmap_fmr(&fmr_list); | ||
349 | } | ||
344 | ib_dealloc_fmr(fmr->fmr); | 350 | ib_dealloc_fmr(fmr->fmr); |
345 | list_del(&fmr->list); | 351 | list_del(&fmr->list); |
346 | kfree(fmr); | 352 | kfree(fmr); |
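The new hunk in ib_destroy_fmr_pool() closes a teardown gap: an FMR that still has an active mapping (remap_count != 0) must be handed to ib_unmap_fmr() before ib_dealloc_fmr(), and since ib_unmap_fmr() operates on a list, the code builds a one-element list for each such FMR. A condensed sketch of that per-entry pattern, with the surrounding pool bookkeeping omitted:

  LIST_HEAD(fmr_list);

  list_for_each_entry_safe(fmr, tmp, &pool->free_list, list) {
          if (fmr->remap_count) {
                  /* ib_unmap_fmr() takes a list of struct ib_fmr, so hand
                   * it a temporary single-entry list. */
                  INIT_LIST_HEAD(&fmr_list);
                  list_add_tail(&fmr->fmr->list, &fmr_list);
                  ib_unmap_fmr(&fmr_list);
          }
          ib_dealloc_fmr(fmr->fmr);
          list_del(&fmr->list);
          kfree(fmr);
  }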
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c index b97e210ce9c8..a4a4d9c1eef3 100644 --- a/drivers/infiniband/core/mad.c +++ b/drivers/infiniband/core/mad.c | |||
@@ -693,7 +693,8 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv, | |||
693 | goto out; | 693 | goto out; |
694 | } | 694 | } |
695 | 695 | ||
696 | build_smp_wc(send_wr->wr_id, smp->dr_slid, send_wr->wr.ud.pkey_index, | 696 | build_smp_wc(send_wr->wr_id, be16_to_cpu(smp->dr_slid), |
697 | send_wr->wr.ud.pkey_index, | ||
697 | send_wr->wr.ud.port_num, &mad_wc); | 698 | send_wr->wr.ud.port_num, &mad_wc); |
698 | 699 | ||
699 | /* No GRH for DR SMP */ | 700 | /* No GRH for DR SMP */ |
@@ -1554,7 +1555,7 @@ static int is_data_mad(struct ib_mad_agent_private *mad_agent_priv, | |||
1554 | } | 1555 | } |
1555 | 1556 | ||
1556 | struct ib_mad_send_wr_private* | 1557 | struct ib_mad_send_wr_private* |
1557 | ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid) | 1558 | ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid) |
1558 | { | 1559 | { |
1559 | struct ib_mad_send_wr_private *mad_send_wr; | 1560 | struct ib_mad_send_wr_private *mad_send_wr; |
1560 | 1561 | ||
@@ -1597,7 +1598,7 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv, | |||
1597 | struct ib_mad_send_wr_private *mad_send_wr; | 1598 | struct ib_mad_send_wr_private *mad_send_wr; |
1598 | struct ib_mad_send_wc mad_send_wc; | 1599 | struct ib_mad_send_wc mad_send_wc; |
1599 | unsigned long flags; | 1600 | unsigned long flags; |
1600 | u64 tid; | 1601 | __be64 tid; |
1601 | 1602 | ||
1602 | INIT_LIST_HEAD(&mad_recv_wc->rmpp_list); | 1603 | INIT_LIST_HEAD(&mad_recv_wc->rmpp_list); |
1603 | list_add(&mad_recv_wc->recv_buf.list, &mad_recv_wc->rmpp_list); | 1604 | list_add(&mad_recv_wc->recv_buf.list, &mad_recv_wc->rmpp_list); |
@@ -2165,7 +2166,8 @@ static void local_completions(void *data) | |||
2165 | * Defined behavior is to complete response | 2166 | * Defined behavior is to complete response |
2166 | * before request | 2167 | * before request |
2167 | */ | 2168 | */ |
2168 | build_smp_wc(local->wr_id, IB_LID_PERMISSIVE, | 2169 | build_smp_wc(local->wr_id, |
2170 | be16_to_cpu(IB_LID_PERMISSIVE), | ||
2169 | 0 /* pkey index */, | 2171 | 0 /* pkey index */, |
2170 | recv_mad_agent->agent.port_num, &wc); | 2172 | recv_mad_agent->agent.port_num, &wc); |
2171 | 2173 | ||
@@ -2294,7 +2296,7 @@ static void timeout_sends(void *data) | |||
2294 | spin_unlock_irqrestore(&mad_agent_priv->lock, flags); | 2296 | spin_unlock_irqrestore(&mad_agent_priv->lock, flags); |
2295 | } | 2297 | } |
2296 | 2298 | ||
2297 | static void ib_mad_thread_completion_handler(struct ib_cq *cq) | 2299 | static void ib_mad_thread_completion_handler(struct ib_cq *cq, void *arg) |
2298 | { | 2300 | { |
2299 | struct ib_mad_port_private *port_priv = cq->cq_context; | 2301 | struct ib_mad_port_private *port_priv = cq->cq_context; |
2300 | 2302 | ||
@@ -2574,8 +2576,7 @@ static int ib_mad_port_open(struct ib_device *device, | |||
2574 | 2576 | ||
2575 | cq_size = (IB_MAD_QP_SEND_SIZE + IB_MAD_QP_RECV_SIZE) * 2; | 2577 | cq_size = (IB_MAD_QP_SEND_SIZE + IB_MAD_QP_RECV_SIZE) * 2; |
2576 | port_priv->cq = ib_create_cq(port_priv->device, | 2578 | port_priv->cq = ib_create_cq(port_priv->device, |
2577 | (ib_comp_handler) | 2579 | ib_mad_thread_completion_handler, |
2578 | ib_mad_thread_completion_handler, | ||
2579 | NULL, port_priv, cq_size); | 2580 | NULL, port_priv, cq_size); |
2580 | if (IS_ERR(port_priv->cq)) { | 2581 | if (IS_ERR(port_priv->cq)) { |
2581 | printk(KERN_ERR PFX "Couldn't create ib_mad CQ\n"); | 2582 | printk(KERN_ERR PFX "Couldn't create ib_mad CQ\n"); |
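The removed (ib_comp_handler) cast was papering over a prototype mismatch; with this merge the CQ completion callback takes its context pointer as an explicit second argument, so the handler can be passed to ib_create_cq() directly. A minimal sketch of registering a handler under the updated signature -- names other than ib_create_cq()/ib_req_notify_cq() are illustrative:

  static void example_comp_handler(struct ib_cq *cq, void *cq_context)
  {
          struct example_port *port = cq_context;   /* illustrative owner */

          /* Re-arm the CQ, then let the owner drain completions. */
          ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
          queue_work(port->wq, &port->work);
  }

  /* Registered directly, no cast needed any more: */
  cq = ib_create_cq(device, example_comp_handler, NULL, port, cq_size);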
diff --git a/drivers/infiniband/core/mad_priv.h b/drivers/infiniband/core/mad_priv.h index 568da10b05ab..f1ba794e0daa 100644 --- a/drivers/infiniband/core/mad_priv.h +++ b/drivers/infiniband/core/mad_priv.h | |||
@@ -40,8 +40,8 @@ | |||
40 | #include <linux/pci.h> | 40 | #include <linux/pci.h> |
41 | #include <linux/kthread.h> | 41 | #include <linux/kthread.h> |
42 | #include <linux/workqueue.h> | 42 | #include <linux/workqueue.h> |
43 | #include <ib_mad.h> | 43 | #include <rdma/ib_mad.h> |
44 | #include <ib_smi.h> | 44 | #include <rdma/ib_smi.h> |
45 | 45 | ||
46 | 46 | ||
47 | #define PFX "ib_mad: " | 47 | #define PFX "ib_mad: " |
@@ -121,7 +121,7 @@ struct ib_mad_send_wr_private { | |||
121 | struct ib_send_wr send_wr; | 121 | struct ib_send_wr send_wr; |
122 | struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG]; | 122 | struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG]; |
123 | u64 wr_id; /* client WR ID */ | 123 | u64 wr_id; /* client WR ID */ |
124 | u64 tid; | 124 | __be64 tid; |
125 | unsigned long timeout; | 125 | unsigned long timeout; |
126 | int retries; | 126 | int retries; |
127 | int retry; | 127 | int retry; |
@@ -144,7 +144,7 @@ struct ib_mad_local_private { | |||
144 | struct ib_send_wr send_wr; | 144 | struct ib_send_wr send_wr; |
145 | struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG]; | 145 | struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG]; |
146 | u64 wr_id; /* client WR ID */ | 146 | u64 wr_id; /* client WR ID */ |
147 | u64 tid; | 147 | __be64 tid; |
148 | }; | 148 | }; |
149 | 149 | ||
150 | struct ib_mad_mgmt_method_table { | 150 | struct ib_mad_mgmt_method_table { |
@@ -210,7 +210,7 @@ extern kmem_cache_t *ib_mad_cache; | |||
210 | int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr); | 210 | int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr); |
211 | 211 | ||
212 | struct ib_mad_send_wr_private * | 212 | struct ib_mad_send_wr_private * |
213 | ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid); | 213 | ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid); |
214 | 214 | ||
215 | void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr, | 215 | void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr, |
216 | struct ib_mad_send_wc *mad_send_wc); | 216 | struct ib_mad_send_wc *mad_send_wc); |
diff --git a/drivers/infiniband/core/mad_rmpp.c b/drivers/infiniband/core/mad_rmpp.c index 8f1eb80e421f..43fd805e0265 100644 --- a/drivers/infiniband/core/mad_rmpp.c +++ b/drivers/infiniband/core/mad_rmpp.c | |||
@@ -61,7 +61,7 @@ struct mad_rmpp_recv { | |||
61 | int seg_num; | 61 | int seg_num; |
62 | int newwin; | 62 | int newwin; |
63 | 63 | ||
64 | u64 tid; | 64 | __be64 tid; |
65 | u32 src_qp; | 65 | u32 src_qp; |
66 | u16 slid; | 66 | u16 slid; |
67 | u8 mgmt_class; | 67 | u8 mgmt_class; |
@@ -100,6 +100,121 @@ void ib_cancel_rmpp_recvs(struct ib_mad_agent_private *agent) | |||
100 | } | 100 | } |
101 | } | 101 | } |
102 | 102 | ||
103 | static int data_offset(u8 mgmt_class) | ||
104 | { | ||
105 | if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM) | ||
106 | return offsetof(struct ib_sa_mad, data); | ||
107 | else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) && | ||
108 | (mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END)) | ||
109 | return offsetof(struct ib_vendor_mad, data); | ||
110 | else | ||
111 | return offsetof(struct ib_rmpp_mad, data); | ||
112 | } | ||
113 | |||
114 | static void format_ack(struct ib_rmpp_mad *ack, | ||
115 | struct ib_rmpp_mad *data, | ||
116 | struct mad_rmpp_recv *rmpp_recv) | ||
117 | { | ||
118 | unsigned long flags; | ||
119 | |||
120 | memcpy(&ack->mad_hdr, &data->mad_hdr, | ||
121 | data_offset(data->mad_hdr.mgmt_class)); | ||
122 | |||
123 | ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP; | ||
124 | ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK; | ||
125 | ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE); | ||
126 | |||
127 | spin_lock_irqsave(&rmpp_recv->lock, flags); | ||
128 | rmpp_recv->last_ack = rmpp_recv->seg_num; | ||
129 | ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num); | ||
130 | ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin); | ||
131 | spin_unlock_irqrestore(&rmpp_recv->lock, flags); | ||
132 | } | ||
133 | |||
134 | static void ack_recv(struct mad_rmpp_recv *rmpp_recv, | ||
135 | struct ib_mad_recv_wc *recv_wc) | ||
136 | { | ||
137 | struct ib_mad_send_buf *msg; | ||
138 | struct ib_send_wr *bad_send_wr; | ||
139 | int hdr_len, ret; | ||
140 | |||
141 | hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr); | ||
142 | msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp, | ||
143 | recv_wc->wc->pkey_index, rmpp_recv->ah, 1, | ||
144 | hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len, | ||
145 | GFP_KERNEL); | ||
146 | if (!msg) | ||
147 | return; | ||
148 | |||
149 | format_ack((struct ib_rmpp_mad *) msg->mad, | ||
150 | (struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv); | ||
151 | ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr, | ||
152 | &bad_send_wr); | ||
153 | if (ret) | ||
154 | ib_free_send_mad(msg); | ||
155 | } | ||
156 | |||
157 | static int alloc_response_msg(struct ib_mad_agent *agent, | ||
158 | struct ib_mad_recv_wc *recv_wc, | ||
159 | struct ib_mad_send_buf **msg) | ||
160 | { | ||
161 | struct ib_mad_send_buf *m; | ||
162 | struct ib_ah *ah; | ||
163 | int hdr_len; | ||
164 | |||
165 | ah = ib_create_ah_from_wc(agent->qp->pd, recv_wc->wc, | ||
166 | recv_wc->recv_buf.grh, agent->port_num); | ||
167 | if (IS_ERR(ah)) | ||
168 | return PTR_ERR(ah); | ||
169 | |||
170 | hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr); | ||
171 | m = ib_create_send_mad(agent, recv_wc->wc->src_qp, | ||
172 | recv_wc->wc->pkey_index, ah, 1, hdr_len, | ||
173 | sizeof(struct ib_rmpp_mad) - hdr_len, | ||
174 | GFP_KERNEL); | ||
175 | if (IS_ERR(m)) { | ||
176 | ib_destroy_ah(ah); | ||
177 | return PTR_ERR(m); | ||
178 | } | ||
179 | *msg = m; | ||
180 | return 0; | ||
181 | } | ||
182 | |||
183 | static void free_msg(struct ib_mad_send_buf *msg) | ||
184 | { | ||
185 | ib_destroy_ah(msg->send_wr.wr.ud.ah); | ||
186 | ib_free_send_mad(msg); | ||
187 | } | ||
188 | |||
189 | static void nack_recv(struct ib_mad_agent_private *agent, | ||
190 | struct ib_mad_recv_wc *recv_wc, u8 rmpp_status) | ||
191 | { | ||
192 | struct ib_mad_send_buf *msg; | ||
193 | struct ib_rmpp_mad *rmpp_mad; | ||
194 | struct ib_send_wr *bad_send_wr; | ||
195 | int ret; | ||
196 | |||
197 | ret = alloc_response_msg(&agent->agent, recv_wc, &msg); | ||
198 | if (ret) | ||
199 | return; | ||
200 | |||
201 | rmpp_mad = (struct ib_rmpp_mad *) msg->mad; | ||
202 | memcpy(rmpp_mad, recv_wc->recv_buf.mad, | ||
203 | data_offset(recv_wc->recv_buf.mad->mad_hdr.mgmt_class)); | ||
204 | |||
205 | rmpp_mad->mad_hdr.method ^= IB_MGMT_METHOD_RESP; | ||
206 | rmpp_mad->rmpp_hdr.rmpp_version = IB_MGMT_RMPP_VERSION; | ||
207 | rmpp_mad->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ABORT; | ||
208 | ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE); | ||
209 | rmpp_mad->rmpp_hdr.rmpp_status = rmpp_status; | ||
210 | rmpp_mad->rmpp_hdr.seg_num = 0; | ||
211 | rmpp_mad->rmpp_hdr.paylen_newwin = 0; | ||
212 | |||
213 | ret = ib_post_send_mad(&agent->agent, &msg->send_wr, &bad_send_wr); | ||
214 | if (ret) | ||
215 | free_msg(msg); | ||
216 | } | ||
217 | |||
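data_offset() gives the byte offset at which a class's payload starts, so the ACK/ABORT builders above can copy just the MAD and class headers from the received segment and then overwrite the RMPP fields. A hedged illustration of what the helper returns and how the per-segment payload size falls out of it -- the local variable names are invented:

  /* For a SubnAdm (SA) MAD the payload begins after the SA header ... */
  int sa_hdr  = data_offset(IB_MGMT_CLASS_SUBN_ADM);
                /* == offsetof(struct ib_sa_mad, data) */

  /* ... so each 256-byte RMPP segment carries this much payload: */
  int sa_data = sizeof(struct ib_sa_mad) - sa_hdr;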
103 | static void recv_timeout_handler(void *data) | 218 | static void recv_timeout_handler(void *data) |
104 | { | 219 | { |
105 | struct mad_rmpp_recv *rmpp_recv = data; | 220 | struct mad_rmpp_recv *rmpp_recv = data; |
@@ -115,8 +230,8 @@ static void recv_timeout_handler(void *data) | |||
115 | list_del(&rmpp_recv->list); | 230 | list_del(&rmpp_recv->list); |
116 | spin_unlock_irqrestore(&rmpp_recv->agent->lock, flags); | 231 | spin_unlock_irqrestore(&rmpp_recv->agent->lock, flags); |
117 | 232 | ||
118 | /* TODO: send abort. */ | ||
119 | rmpp_wc = rmpp_recv->rmpp_wc; | 233 | rmpp_wc = rmpp_recv->rmpp_wc; |
234 | nack_recv(rmpp_recv->agent, rmpp_wc, IB_MGMT_RMPP_STATUS_T2L); | ||
120 | destroy_rmpp_recv(rmpp_recv); | 235 | destroy_rmpp_recv(rmpp_recv); |
121 | ib_free_recv_mad(rmpp_wc); | 236 | ib_free_recv_mad(rmpp_wc); |
122 | } | 237 | } |
@@ -230,60 +345,6 @@ insert_rmpp_recv(struct ib_mad_agent_private *agent, | |||
230 | return cur_rmpp_recv; | 345 | return cur_rmpp_recv; |
231 | } | 346 | } |
232 | 347 | ||
233 | static int data_offset(u8 mgmt_class) | ||
234 | { | ||
235 | if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM) | ||
236 | return offsetof(struct ib_sa_mad, data); | ||
237 | else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) && | ||
238 | (mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END)) | ||
239 | return offsetof(struct ib_vendor_mad, data); | ||
240 | else | ||
241 | return offsetof(struct ib_rmpp_mad, data); | ||
242 | } | ||
243 | |||
244 | static void format_ack(struct ib_rmpp_mad *ack, | ||
245 | struct ib_rmpp_mad *data, | ||
246 | struct mad_rmpp_recv *rmpp_recv) | ||
247 | { | ||
248 | unsigned long flags; | ||
249 | |||
250 | memcpy(&ack->mad_hdr, &data->mad_hdr, | ||
251 | data_offset(data->mad_hdr.mgmt_class)); | ||
252 | |||
253 | ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP; | ||
254 | ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK; | ||
255 | ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE); | ||
256 | |||
257 | spin_lock_irqsave(&rmpp_recv->lock, flags); | ||
258 | rmpp_recv->last_ack = rmpp_recv->seg_num; | ||
259 | ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num); | ||
260 | ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin); | ||
261 | spin_unlock_irqrestore(&rmpp_recv->lock, flags); | ||
262 | } | ||
263 | |||
264 | static void ack_recv(struct mad_rmpp_recv *rmpp_recv, | ||
265 | struct ib_mad_recv_wc *recv_wc) | ||
266 | { | ||
267 | struct ib_mad_send_buf *msg; | ||
268 | struct ib_send_wr *bad_send_wr; | ||
269 | int hdr_len, ret; | ||
270 | |||
271 | hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr); | ||
272 | msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp, | ||
273 | recv_wc->wc->pkey_index, rmpp_recv->ah, 1, | ||
274 | hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len, | ||
275 | GFP_KERNEL); | ||
276 | if (!msg) | ||
277 | return; | ||
278 | |||
279 | format_ack((struct ib_rmpp_mad *) msg->mad, | ||
280 | (struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv); | ||
281 | ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr, | ||
282 | &bad_send_wr); | ||
283 | if (ret) | ||
284 | ib_free_send_mad(msg); | ||
285 | } | ||
286 | |||
287 | static inline int get_last_flag(struct ib_mad_recv_buf *seg) | 348 | static inline int get_last_flag(struct ib_mad_recv_buf *seg) |
288 | { | 349 | { |
289 | struct ib_rmpp_mad *rmpp_mad; | 350 | struct ib_rmpp_mad *rmpp_mad; |
@@ -559,6 +620,34 @@ static int send_next_seg(struct ib_mad_send_wr_private *mad_send_wr) | |||
559 | return ib_send_mad(mad_send_wr); | 620 | return ib_send_mad(mad_send_wr); |
560 | } | 621 | } |
561 | 622 | ||
623 | static void abort_send(struct ib_mad_agent_private *agent, __be64 tid, | ||
624 | u8 rmpp_status) | ||
625 | { | ||
626 | struct ib_mad_send_wr_private *mad_send_wr; | ||
627 | struct ib_mad_send_wc wc; | ||
628 | unsigned long flags; | ||
629 | |||
630 | spin_lock_irqsave(&agent->lock, flags); | ||
631 | mad_send_wr = ib_find_send_mad(agent, tid); | ||
632 | if (!mad_send_wr) | ||
633 | goto out; /* Unmatched send */ | ||
634 | |||
635 | if ((mad_send_wr->last_ack == mad_send_wr->total_seg) || | ||
636 | (!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS)) | ||
637 | goto out; /* Send is already done */ | ||
638 | |||
639 | ib_mark_mad_done(mad_send_wr); | ||
640 | spin_unlock_irqrestore(&agent->lock, flags); | ||
641 | |||
642 | wc.status = IB_WC_REM_ABORT_ERR; | ||
643 | wc.vendor_err = rmpp_status; | ||
644 | wc.wr_id = mad_send_wr->wr_id; | ||
645 | ib_mad_complete_send_wr(mad_send_wr, &wc); | ||
646 | return; | ||
647 | out: | ||
648 | spin_unlock_irqrestore(&agent->lock, flags); | ||
649 | } | ||
650 | |||
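abort_send() is how a protocol-level failure reaches the original sender: the pending send is looked up by TID, marked done, and completed with IB_WC_REM_ABORT_ERR, with the RMPP status carried out in wc.vendor_err. A hedged sketch of what a MAD client's send handler might do with that completion -- the client helper and message text are invented:

  static void example_send_handler(struct ib_mad_agent *agent,
                                   struct ib_mad_send_wc *send_wc)
  {
          if (send_wc->status == IB_WC_REM_ABORT_ERR)
                  printk(KERN_WARNING "transfer aborted, RMPP status %d\n",
                         send_wc->vendor_err);

          /* Release whatever the client associated with send_wc->wr_id. */
          example_free_request(send_wc->wr_id);
  }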
562 | static void process_rmpp_ack(struct ib_mad_agent_private *agent, | 651 | static void process_rmpp_ack(struct ib_mad_agent_private *agent, |
563 | struct ib_mad_recv_wc *mad_recv_wc) | 652 | struct ib_mad_recv_wc *mad_recv_wc) |
564 | { | 653 | { |
@@ -568,11 +657,21 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent, | |||
568 | int seg_num, newwin, ret; | 657 | int seg_num, newwin, ret; |
569 | 658 | ||
570 | rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad; | 659 | rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad; |
571 | if (rmpp_mad->rmpp_hdr.rmpp_status) | 660 | if (rmpp_mad->rmpp_hdr.rmpp_status) { |
661 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
662 | IB_MGMT_RMPP_STATUS_BAD_STATUS); | ||
663 | nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS); | ||
572 | return; | 664 | return; |
665 | } | ||
573 | 666 | ||
574 | seg_num = be32_to_cpu(rmpp_mad->rmpp_hdr.seg_num); | 667 | seg_num = be32_to_cpu(rmpp_mad->rmpp_hdr.seg_num); |
575 | newwin = be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin); | 668 | newwin = be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin); |
669 | if (newwin < seg_num) { | ||
670 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
671 | IB_MGMT_RMPP_STATUS_W2S); | ||
672 | nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_W2S); | ||
673 | return; | ||
674 | } | ||
576 | 675 | ||
577 | spin_lock_irqsave(&agent->lock, flags); | 676 | spin_lock_irqsave(&agent->lock, flags); |
578 | mad_send_wr = ib_find_send_mad(agent, rmpp_mad->mad_hdr.tid); | 677 | mad_send_wr = ib_find_send_mad(agent, rmpp_mad->mad_hdr.tid); |
@@ -583,8 +682,13 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent, | |||
583 | (!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS)) | 682 | (!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS)) |
584 | goto out; /* Send is already done */ | 683 | goto out; /* Send is already done */ |
585 | 684 | ||
586 | if (seg_num > mad_send_wr->total_seg) | 685 | if (seg_num > mad_send_wr->total_seg || seg_num > mad_send_wr->newwin) { |
587 | goto out; /* Bad ACK */ | 686 | spin_unlock_irqrestore(&agent->lock, flags); |
687 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
688 | IB_MGMT_RMPP_STATUS_S2B); | ||
689 | nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_S2B); | ||
690 | return; | ||
691 | } | ||
588 | 692 | ||
589 | if (newwin < mad_send_wr->newwin || seg_num < mad_send_wr->last_ack) | 693 | if (newwin < mad_send_wr->newwin || seg_num < mad_send_wr->last_ack) |
590 | goto out; /* Old ACK */ | 694 | goto out; /* Old ACK */ |
@@ -628,6 +732,72 @@ out: | |||
628 | spin_unlock_irqrestore(&agent->lock, flags); | 732 | spin_unlock_irqrestore(&agent->lock, flags); |
629 | } | 733 | } |
630 | 734 | ||
735 | static struct ib_mad_recv_wc * | ||
736 | process_rmpp_data(struct ib_mad_agent_private *agent, | ||
737 | struct ib_mad_recv_wc *mad_recv_wc) | ||
738 | { | ||
739 | struct ib_rmpp_hdr *rmpp_hdr; | ||
740 | u8 rmpp_status; | ||
741 | |||
742 | rmpp_hdr = &((struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad)->rmpp_hdr; | ||
743 | |||
744 | if (rmpp_hdr->rmpp_status) { | ||
745 | rmpp_status = IB_MGMT_RMPP_STATUS_BAD_STATUS; | ||
746 | goto bad; | ||
747 | } | ||
748 | |||
749 | if (rmpp_hdr->seg_num == __constant_htonl(1)) { | ||
750 | if (!(ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST)) { | ||
751 | rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG; | ||
752 | goto bad; | ||
753 | } | ||
754 | return start_rmpp(agent, mad_recv_wc); | ||
755 | } else { | ||
756 | if (ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST) { | ||
757 | rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG; | ||
758 | goto bad; | ||
759 | } | ||
760 | return continue_rmpp(agent, mad_recv_wc); | ||
761 | } | ||
762 | bad: | ||
763 | nack_recv(agent, mad_recv_wc, rmpp_status); | ||
764 | ib_free_recv_mad(mad_recv_wc); | ||
765 | return NULL; | ||
766 | } | ||
767 | |||
768 | static void process_rmpp_stop(struct ib_mad_agent_private *agent, | ||
769 | struct ib_mad_recv_wc *mad_recv_wc) | ||
770 | { | ||
771 | struct ib_rmpp_mad *rmpp_mad; | ||
772 | |||
773 | rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad; | ||
774 | |||
775 | if (rmpp_mad->rmpp_hdr.rmpp_status != IB_MGMT_RMPP_STATUS_RESX) { | ||
776 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
777 | IB_MGMT_RMPP_STATUS_BAD_STATUS); | ||
778 | nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS); | ||
779 | } else | ||
780 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
781 | rmpp_mad->rmpp_hdr.rmpp_status); | ||
782 | } | ||
783 | |||
784 | static void process_rmpp_abort(struct ib_mad_agent_private *agent, | ||
785 | struct ib_mad_recv_wc *mad_recv_wc) | ||
786 | { | ||
787 | struct ib_rmpp_mad *rmpp_mad; | ||
788 | |||
789 | rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad; | ||
790 | |||
791 | if (rmpp_mad->rmpp_hdr.rmpp_status < IB_MGMT_RMPP_STATUS_ABORT_MIN || | ||
792 | rmpp_mad->rmpp_hdr.rmpp_status > IB_MGMT_RMPP_STATUS_ABORT_MAX) { | ||
793 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
794 | IB_MGMT_RMPP_STATUS_BAD_STATUS); | ||
795 | nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS); | ||
796 | } else | ||
797 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
798 | rmpp_mad->rmpp_hdr.rmpp_status); | ||
799 | } | ||
800 | |||
631 | struct ib_mad_recv_wc * | 801 | struct ib_mad_recv_wc * |
632 | ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent, | 802 | ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent, |
633 | struct ib_mad_recv_wc *mad_recv_wc) | 803 | struct ib_mad_recv_wc *mad_recv_wc) |
@@ -638,23 +808,29 @@ ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent, | |||
638 | if (!(rmpp_mad->rmpp_hdr.rmpp_rtime_flags & IB_MGMT_RMPP_FLAG_ACTIVE)) | 808 | if (!(rmpp_mad->rmpp_hdr.rmpp_rtime_flags & IB_MGMT_RMPP_FLAG_ACTIVE)) |
639 | return mad_recv_wc; | 809 | return mad_recv_wc; |
640 | 810 | ||
641 | if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION) | 811 | if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION) { |
812 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
813 | IB_MGMT_RMPP_STATUS_UNV); | ||
814 | nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_UNV); | ||
642 | goto out; | 815 | goto out; |
816 | } | ||
643 | 817 | ||
644 | switch (rmpp_mad->rmpp_hdr.rmpp_type) { | 818 | switch (rmpp_mad->rmpp_hdr.rmpp_type) { |
645 | case IB_MGMT_RMPP_TYPE_DATA: | 819 | case IB_MGMT_RMPP_TYPE_DATA: |
646 | if (rmpp_mad->rmpp_hdr.seg_num == __constant_htonl(1)) | 820 | return process_rmpp_data(agent, mad_recv_wc); |
647 | return start_rmpp(agent, mad_recv_wc); | ||
648 | else | ||
649 | return continue_rmpp(agent, mad_recv_wc); | ||
650 | case IB_MGMT_RMPP_TYPE_ACK: | 821 | case IB_MGMT_RMPP_TYPE_ACK: |
651 | process_rmpp_ack(agent, mad_recv_wc); | 822 | process_rmpp_ack(agent, mad_recv_wc); |
652 | break; | 823 | break; |
653 | case IB_MGMT_RMPP_TYPE_STOP: | 824 | case IB_MGMT_RMPP_TYPE_STOP: |
825 | process_rmpp_stop(agent, mad_recv_wc); | ||
826 | break; | ||
654 | case IB_MGMT_RMPP_TYPE_ABORT: | 827 | case IB_MGMT_RMPP_TYPE_ABORT: |
655 | /* TODO: process_rmpp_nack(agent, mad_recv_wc); */ | 828 | process_rmpp_abort(agent, mad_recv_wc); |
656 | break; | 829 | break; |
657 | default: | 830 | default: |
831 | abort_send(agent, rmpp_mad->mad_hdr.tid, | ||
832 | IB_MGMT_RMPP_STATUS_BADT); | ||
833 | nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BADT); | ||
658 | break; | 834 | break; |
659 | } | 835 | } |
660 | out: | 836 | out: |
@@ -714,7 +890,10 @@ int ib_process_rmpp_send_wc(struct ib_mad_send_wr_private *mad_send_wr, | |||
714 | if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) { | 890 | if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) { |
715 | msg = (struct ib_mad_send_buf *) (unsigned long) | 891 | msg = (struct ib_mad_send_buf *) (unsigned long) |
716 | mad_send_wc->wr_id; | 892 | mad_send_wc->wr_id; |
717 | ib_free_send_mad(msg); | 893 | if (rmpp_mad->rmpp_hdr.rmpp_type == IB_MGMT_RMPP_TYPE_ACK) |
894 | ib_free_send_mad(msg); | ||
895 | else | ||
896 | free_msg(msg); | ||
718 | return IB_RMPP_RESULT_INTERNAL; /* ACK, STOP, or ABORT */ | 897 | return IB_RMPP_RESULT_INTERNAL; /* ACK, STOP, or ABORT */ |
719 | } | 898 | } |
720 | 899 | ||
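The last hunk matters for resource cleanup: ACKs are built on the receiver's cached address handle, so only the send buffer is freed, while the new STOP/ABORT responses allocate their own AH in alloc_response_msg() and therefore go through free_msg(), which destroys the AH before releasing the buffer. A condensed sketch of that branch as it reads after the patch:

  if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) {
          msg = (struct ib_mad_send_buf *) (unsigned long) mad_send_wc->wr_id;
          if (rmpp_mad->rmpp_hdr.rmpp_type == IB_MGMT_RMPP_TYPE_ACK)
                  ib_free_send_mad(msg);  /* ACK: AH belongs to rmpp_recv */
          else
                  free_msg(msg);          /* STOP/ABORT: also drops its AH */
          return IB_RMPP_RESULT_INTERNAL;
  }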
diff --git a/drivers/infiniband/core/packer.c b/drivers/infiniband/core/packer.c index eb5ff54c10d7..35df5010e723 100644 --- a/drivers/infiniband/core/packer.c +++ b/drivers/infiniband/core/packer.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -32,7 +33,7 @@ | |||
32 | * $Id: packer.c 1349 2004-12-16 21:09:43Z roland $ | 33 | * $Id: packer.c 1349 2004-12-16 21:09:43Z roland $ |
33 | */ | 34 | */ |
34 | 35 | ||
35 | #include <ib_pack.h> | 36 | #include <rdma/ib_pack.h> |
36 | 37 | ||
37 | static u64 value_read(int offset, int size, void *structure) | 38 | static u64 value_read(int offset, int size, void *structure) |
38 | { | 39 | { |
diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c index 795184931c83..126ac80db7b8 100644 --- a/drivers/infiniband/core/sa_query.c +++ b/drivers/infiniband/core/sa_query.c | |||
@@ -1,6 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Voltaire, Inc. All rights reserved. | 3 | * Copyright (c) 2005 Voltaire, Inc. All rights reserved. |
4 | * | 4 | * |
5 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -44,8 +44,8 @@ | |||
44 | #include <linux/kref.h> | 44 | #include <linux/kref.h> |
45 | #include <linux/idr.h> | 45 | #include <linux/idr.h> |
46 | 46 | ||
47 | #include <ib_pack.h> | 47 | #include <rdma/ib_pack.h> |
48 | #include <ib_sa.h> | 48 | #include <rdma/ib_sa.h> |
49 | 49 | ||
50 | MODULE_AUTHOR("Roland Dreier"); | 50 | MODULE_AUTHOR("Roland Dreier"); |
51 | MODULE_DESCRIPTION("InfiniBand subnet administration query support"); | 51 | MODULE_DESCRIPTION("InfiniBand subnet administration query support"); |
diff --git a/drivers/infiniband/core/smi.c b/drivers/infiniband/core/smi.c index b4b284324a33..35852e794e26 100644 --- a/drivers/infiniband/core/smi.c +++ b/drivers/infiniband/core/smi.c | |||
@@ -1,9 +1,10 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved. |
3 | * Copyright (c) 2004 Infinicon Corporation. All rights reserved. | 3 | * Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved. |
4 | * Copyright (c) 2004 Intel Corporation. All rights reserved. | 4 | * Copyright (c) 2004, 2005 Intel Corporation. All rights reserved. |
5 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. | 5 | * Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved. |
6 | * Copyright (c) 2004 Voltaire Corporation. All rights reserved. | 6 | * Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved. |
7 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
7 | * | 8 | * |
8 | * This software is available to you under a choice of one of two | 9 | * This software is available to you under a choice of one of two |
9 | * licenses. You may choose to be licensed under the terms of the GNU | 10 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -36,7 +37,7 @@ | |||
36 | * $Id: smi.c 1389 2004-12-27 22:56:47Z roland $ | 37 | * $Id: smi.c 1389 2004-12-27 22:56:47Z roland $ |
37 | */ | 38 | */ |
38 | 39 | ||
39 | #include <ib_smi.h> | 40 | #include <rdma/ib_smi.h> |
40 | #include "smi.h" | 41 | #include "smi.h" |
41 | 42 | ||
42 | /* | 43 | /* |
diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c index 90d51b179abe..fae1c2dcee51 100644 --- a/drivers/infiniband/core/sysfs.c +++ b/drivers/infiniband/core/sysfs.c | |||
@@ -1,5 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies Ltd. All rights reserved. | ||
4 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
3 | * | 5 | * |
4 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -34,7 +36,7 @@ | |||
34 | 36 | ||
35 | #include "core_priv.h" | 37 | #include "core_priv.h" |
36 | 38 | ||
37 | #include <ib_mad.h> | 39 | #include <rdma/ib_mad.h> |
38 | 40 | ||
39 | struct ib_port { | 41 | struct ib_port { |
40 | struct kobject kobj; | 42 | struct kobject kobj; |
@@ -253,14 +255,14 @@ static ssize_t show_port_gid(struct ib_port *p, struct port_attribute *attr, | |||
253 | return ret; | 255 | return ret; |
254 | 256 | ||
255 | return sprintf(buf, "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n", | 257 | return sprintf(buf, "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n", |
256 | be16_to_cpu(((u16 *) gid.raw)[0]), | 258 | be16_to_cpu(((__be16 *) gid.raw)[0]), |
257 | be16_to_cpu(((u16 *) gid.raw)[1]), | 259 | be16_to_cpu(((__be16 *) gid.raw)[1]), |
258 | be16_to_cpu(((u16 *) gid.raw)[2]), | 260 | be16_to_cpu(((__be16 *) gid.raw)[2]), |
259 | be16_to_cpu(((u16 *) gid.raw)[3]), | 261 | be16_to_cpu(((__be16 *) gid.raw)[3]), |
260 | be16_to_cpu(((u16 *) gid.raw)[4]), | 262 | be16_to_cpu(((__be16 *) gid.raw)[4]), |
261 | be16_to_cpu(((u16 *) gid.raw)[5]), | 263 | be16_to_cpu(((__be16 *) gid.raw)[5]), |
262 | be16_to_cpu(((u16 *) gid.raw)[6]), | 264 | be16_to_cpu(((__be16 *) gid.raw)[6]), |
263 | be16_to_cpu(((u16 *) gid.raw)[7])); | 265 | be16_to_cpu(((__be16 *) gid.raw)[7])); |
264 | } | 266 | } |
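The sysfs changes are pure annotation fixes: the raw GID and GUID bytes are reinterpreted as arrays of __be16 before conversion, which is what they actually are on the wire. A small hedged sketch of the same formatting outside sysfs -- the helper is illustrative only:

  static void example_print_gid(const union ib_gid *gid)
  {
          const __be16 *p = (const __be16 *) gid->raw;

          printk(KERN_INFO "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
                 be16_to_cpu(p[0]), be16_to_cpu(p[1]),
                 be16_to_cpu(p[2]), be16_to_cpu(p[3]),
                 be16_to_cpu(p[4]), be16_to_cpu(p[5]),
                 be16_to_cpu(p[6]), be16_to_cpu(p[7]));
  }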
265 | 267 | ||
266 | static ssize_t show_port_pkey(struct ib_port *p, struct port_attribute *attr, | 268 | static ssize_t show_port_pkey(struct ib_port *p, struct port_attribute *attr, |
@@ -332,11 +334,11 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr, | |||
332 | break; | 334 | break; |
333 | case 16: | 335 | case 16: |
334 | ret = sprintf(buf, "%u\n", | 336 | ret = sprintf(buf, "%u\n", |
335 | be16_to_cpup((u16 *)(out_mad->data + 40 + offset / 8))); | 337 | be16_to_cpup((__be16 *)(out_mad->data + 40 + offset / 8))); |
336 | break; | 338 | break; |
337 | case 32: | 339 | case 32: |
338 | ret = sprintf(buf, "%u\n", | 340 | ret = sprintf(buf, "%u\n", |
339 | be32_to_cpup((u32 *)(out_mad->data + 40 + offset / 8))); | 341 | be32_to_cpup((__be32 *)(out_mad->data + 40 + offset / 8))); |
340 | break; | 342 | break; |
341 | default: | 343 | default: |
342 | ret = 0; | 344 | ret = 0; |
@@ -598,10 +600,10 @@ static ssize_t show_sys_image_guid(struct class_device *cdev, char *buf) | |||
598 | return ret; | 600 | return ret; |
599 | 601 | ||
600 | return sprintf(buf, "%04x:%04x:%04x:%04x\n", | 602 | return sprintf(buf, "%04x:%04x:%04x:%04x\n", |
601 | be16_to_cpu(((u16 *) &attr.sys_image_guid)[0]), | 603 | be16_to_cpu(((__be16 *) &attr.sys_image_guid)[0]), |
602 | be16_to_cpu(((u16 *) &attr.sys_image_guid)[1]), | 604 | be16_to_cpu(((__be16 *) &attr.sys_image_guid)[1]), |
603 | be16_to_cpu(((u16 *) &attr.sys_image_guid)[2]), | 605 | be16_to_cpu(((__be16 *) &attr.sys_image_guid)[2]), |
604 | be16_to_cpu(((u16 *) &attr.sys_image_guid)[3])); | 606 | be16_to_cpu(((__be16 *) &attr.sys_image_guid)[3])); |
605 | } | 607 | } |
606 | 608 | ||
607 | static ssize_t show_node_guid(struct class_device *cdev, char *buf) | 609 | static ssize_t show_node_guid(struct class_device *cdev, char *buf) |
@@ -615,10 +617,10 @@ static ssize_t show_node_guid(struct class_device *cdev, char *buf) | |||
615 | return ret; | 617 | return ret; |
616 | 618 | ||
617 | return sprintf(buf, "%04x:%04x:%04x:%04x\n", | 619 | return sprintf(buf, "%04x:%04x:%04x:%04x\n", |
618 | be16_to_cpu(((u16 *) &attr.node_guid)[0]), | 620 | be16_to_cpu(((__be16 *) &attr.node_guid)[0]), |
619 | be16_to_cpu(((u16 *) &attr.node_guid)[1]), | 621 | be16_to_cpu(((__be16 *) &attr.node_guid)[1]), |
620 | be16_to_cpu(((u16 *) &attr.node_guid)[2]), | 622 | be16_to_cpu(((__be16 *) &attr.node_guid)[2]), |
621 | be16_to_cpu(((u16 *) &attr.node_guid)[3])); | 623 | be16_to_cpu(((__be16 *) &attr.node_guid)[3])); |
622 | } | 624 | } |
623 | 625 | ||
624 | static CLASS_DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL); | 626 | static CLASS_DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL); |
diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c index 61d07c732f49..79595826ccc7 100644 --- a/drivers/infiniband/core/ucm.c +++ b/drivers/infiniband/core/ucm.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Intel Corporation. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -73,14 +74,18 @@ static struct semaphore ctx_id_mutex; | |||
73 | static struct idr ctx_id_table; | 74 | static struct idr ctx_id_table; |
74 | static int ctx_id_rover = 0; | 75 | static int ctx_id_rover = 0; |
75 | 76 | ||
76 | static struct ib_ucm_context *ib_ucm_ctx_get(int id) | 77 | static struct ib_ucm_context *ib_ucm_ctx_get(struct ib_ucm_file *file, int id) |
77 | { | 78 | { |
78 | struct ib_ucm_context *ctx; | 79 | struct ib_ucm_context *ctx; |
79 | 80 | ||
80 | down(&ctx_id_mutex); | 81 | down(&ctx_id_mutex); |
81 | ctx = idr_find(&ctx_id_table, id); | 82 | ctx = idr_find(&ctx_id_table, id); |
82 | if (ctx) | 83 | if (!ctx) |
83 | ctx->ref++; | 84 | ctx = ERR_PTR(-ENOENT); |
85 | else if (ctx->file != file) | ||
86 | ctx = ERR_PTR(-EINVAL); | ||
87 | else | ||
88 | atomic_inc(&ctx->ref); | ||
84 | up(&ctx_id_mutex); | 89 | up(&ctx_id_mutex); |
85 | 90 | ||
86 | return ctx; | 91 | return ctx; |
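The reworked lookup above encodes the failure reason in the returned pointer (ERR_PTR(-ENOENT) for a missing id, ERR_PTR(-EINVAL) for an id owned by another file), which is what lets the callers later in this patch drop their separate NULL and ownership checks. A hypothetical caller, just to illustrate the convention:

	#include <linux/err.h>

	/* do_something() is not part of the patch; it only shows the
	 * IS_ERR()/PTR_ERR() pattern the real callers below follow. */
	static int do_something(struct ib_ucm_file *file, int id)
	{
		struct ib_ucm_context *ctx;

		ctx = ib_ucm_ctx_get(file, id);
		if (IS_ERR(ctx))
			return PTR_ERR(ctx);	/* propagate encoded errno */

		/* ... use ctx->cm_id ... */
		ib_ucm_ctx_put(ctx);
		return 0;
	}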
@@ -88,21 +93,37 @@ static struct ib_ucm_context *ib_ucm_ctx_get(int id) | |||
88 | 93 | ||
89 | static void ib_ucm_ctx_put(struct ib_ucm_context *ctx) | 94 | static void ib_ucm_ctx_put(struct ib_ucm_context *ctx) |
90 | { | 95 | { |
96 | if (atomic_dec_and_test(&ctx->ref)) | ||
97 | wake_up(&ctx->wait); | ||
98 | } | ||
99 | |||
100 | static ssize_t ib_ucm_destroy_ctx(struct ib_ucm_file *file, int id) | ||
101 | { | ||
102 | struct ib_ucm_context *ctx; | ||
91 | struct ib_ucm_event *uevent; | 103 | struct ib_ucm_event *uevent; |
92 | 104 | ||
93 | down(&ctx_id_mutex); | 105 | down(&ctx_id_mutex); |
94 | 106 | ctx = idr_find(&ctx_id_table, id); | |
95 | ctx->ref--; | 107 | if (!ctx) |
96 | if (!ctx->ref) | 108 | ctx = ERR_PTR(-ENOENT); |
109 | else if (ctx->file != file) | ||
110 | ctx = ERR_PTR(-EINVAL); | ||
111 | else | ||
97 | idr_remove(&ctx_id_table, ctx->id); | 112 | idr_remove(&ctx_id_table, ctx->id); |
98 | |||
99 | up(&ctx_id_mutex); | 113 | up(&ctx_id_mutex); |
100 | 114 | ||
101 | if (ctx->ref) | 115 | if (IS_ERR(ctx)) |
102 | return; | 116 | return PTR_ERR(ctx); |
103 | 117 | ||
104 | down(&ctx->file->mutex); | 118 | atomic_dec(&ctx->ref); |
119 | wait_event(ctx->wait, !atomic_read(&ctx->ref)); | ||
120 | |||
121 | /* No new events will be generated after destroying the cm_id. */ | ||
122 | if (!IS_ERR(ctx->cm_id)) | ||
123 | ib_destroy_cm_id(ctx->cm_id); | ||
105 | 124 | ||
125 | /* Cleanup events not yet reported to the user. */ | ||
126 | down(&file->mutex); | ||
106 | list_del(&ctx->file_list); | 127 | list_del(&ctx->file_list); |
107 | while (!list_empty(&ctx->events)) { | 128 | while (!list_empty(&ctx->events)) { |
108 | 129 | ||
@@ -117,13 +138,10 @@ static void ib_ucm_ctx_put(struct ib_ucm_context *ctx) | |||
117 | 138 | ||
118 | kfree(uevent); | 139 | kfree(uevent); |
119 | } | 140 | } |
141 | up(&file->mutex); | ||
120 | 142 | ||
121 | up(&ctx->file->mutex); | ||
122 | |||
123 | ucm_dbg("Destroyed CM ID <%d>\n", ctx->id); | ||
124 | |||
125 | ib_destroy_cm_id(ctx->cm_id); | ||
126 | kfree(ctx); | 143 | kfree(ctx); |
144 | return 0; | ||
127 | } | 145 | } |
128 | 146 | ||
129 | static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file) | 147 | static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file) |
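The destroy path above replaces the old "ref counter protected by ctx_id_mutex" scheme with an atomic_t reference count paired with a wait queue: holders drop their reference through the put helper, and the destroyer removes the id, drops its own reference, then sleeps until all in-flight users are gone before freeing. A stripped-down skeleton of that teardown shape (illustration only, names are not from the patch):

	#include <linux/wait.h>
	#include <linux/slab.h>
	#include <asm/atomic.h>

	struct obj {
		atomic_t		ref;
		wait_queue_head_t	wait;
	};

	static void obj_put(struct obj *o)
	{
		if (atomic_dec_and_test(&o->ref))
			wake_up(&o->wait);	/* last user gone, wake destroyer */
	}

	static void obj_destroy(struct obj *o)
	{
		obj_put(o);			/* drop the creator's reference */
		wait_event(o->wait, !atomic_read(&o->ref));
		kfree(o);			/* safe: no concurrent users remain */
	}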
@@ -135,11 +153,11 @@ static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file) | |||
135 | if (!ctx) | 153 | if (!ctx) |
136 | return NULL; | 154 | return NULL; |
137 | 155 | ||
138 | ctx->ref = 1; /* user reference */ | 156 | atomic_set(&ctx->ref, 1); |
157 | init_waitqueue_head(&ctx->wait); | ||
139 | ctx->file = file; | 158 | ctx->file = file; |
140 | 159 | ||
141 | INIT_LIST_HEAD(&ctx->events); | 160 | INIT_LIST_HEAD(&ctx->events); |
142 | init_MUTEX(&ctx->mutex); | ||
143 | 161 | ||
144 | list_add_tail(&ctx->file_list, &file->ctxs); | 162 | list_add_tail(&ctx->file_list, &file->ctxs); |
145 | 163 | ||
@@ -177,8 +195,8 @@ static void ib_ucm_event_path_get(struct ib_ucm_path_rec *upath, | |||
177 | if (!kpath || !upath) | 195 | if (!kpath || !upath) |
178 | return; | 196 | return; |
179 | 197 | ||
180 | memcpy(upath->dgid, kpath->dgid.raw, sizeof(union ib_gid)); | 198 | memcpy(upath->dgid, kpath->dgid.raw, sizeof *upath->dgid); |
181 | memcpy(upath->sgid, kpath->sgid.raw, sizeof(union ib_gid)); | 199 | memcpy(upath->sgid, kpath->sgid.raw, sizeof *upath->sgid); |
182 | 200 | ||
183 | upath->dlid = kpath->dlid; | 201 | upath->dlid = kpath->dlid; |
184 | upath->slid = kpath->slid; | 202 | upath->slid = kpath->slid; |
@@ -201,10 +219,11 @@ static void ib_ucm_event_path_get(struct ib_ucm_path_rec *upath, | |||
201 | kpath->packet_life_time_selector; | 219 | kpath->packet_life_time_selector; |
202 | } | 220 | } |
203 | 221 | ||
204 | static void ib_ucm_event_req_get(struct ib_ucm_req_event_resp *ureq, | 222 | static void ib_ucm_event_req_get(struct ib_ucm_context *ctx, |
223 | struct ib_ucm_req_event_resp *ureq, | ||
205 | struct ib_cm_req_event_param *kreq) | 224 | struct ib_cm_req_event_param *kreq) |
206 | { | 225 | { |
207 | ureq->listen_id = (long)kreq->listen_id->context; | 226 | ureq->listen_id = ctx->id; |
208 | 227 | ||
209 | ureq->remote_ca_guid = kreq->remote_ca_guid; | 228 | ureq->remote_ca_guid = kreq->remote_ca_guid; |
210 | ureq->remote_qkey = kreq->remote_qkey; | 229 | ureq->remote_qkey = kreq->remote_qkey; |
@@ -240,34 +259,11 @@ static void ib_ucm_event_rep_get(struct ib_ucm_rep_event_resp *urep, | |||
240 | urep->srq = krep->srq; | 259 | urep->srq = krep->srq; |
241 | } | 260 | } |
242 | 261 | ||
243 | static void ib_ucm_event_rej_get(struct ib_ucm_rej_event_resp *urej, | 262 | static void ib_ucm_event_sidr_req_get(struct ib_ucm_context *ctx, |
244 | struct ib_cm_rej_event_param *krej) | 263 | struct ib_ucm_sidr_req_event_resp *ureq, |
245 | { | ||
246 | urej->reason = krej->reason; | ||
247 | } | ||
248 | |||
249 | static void ib_ucm_event_mra_get(struct ib_ucm_mra_event_resp *umra, | ||
250 | struct ib_cm_mra_event_param *kmra) | ||
251 | { | ||
252 | umra->timeout = kmra->service_timeout; | ||
253 | } | ||
254 | |||
255 | static void ib_ucm_event_lap_get(struct ib_ucm_lap_event_resp *ulap, | ||
256 | struct ib_cm_lap_event_param *klap) | ||
257 | { | ||
258 | ib_ucm_event_path_get(&ulap->path, klap->alternate_path); | ||
259 | } | ||
260 | |||
261 | static void ib_ucm_event_apr_get(struct ib_ucm_apr_event_resp *uapr, | ||
262 | struct ib_cm_apr_event_param *kapr) | ||
263 | { | ||
264 | uapr->status = kapr->ap_status; | ||
265 | } | ||
266 | |||
267 | static void ib_ucm_event_sidr_req_get(struct ib_ucm_sidr_req_event_resp *ureq, | ||
268 | struct ib_cm_sidr_req_event_param *kreq) | 264 | struct ib_cm_sidr_req_event_param *kreq) |
269 | { | 265 | { |
270 | ureq->listen_id = (long)kreq->listen_id->context; | 266 | ureq->listen_id = ctx->id; |
271 | ureq->pkey = kreq->pkey; | 267 | ureq->pkey = kreq->pkey; |
272 | } | 268 | } |
273 | 269 | ||
@@ -279,19 +275,18 @@ static void ib_ucm_event_sidr_rep_get(struct ib_ucm_sidr_rep_event_resp *urep, | |||
279 | urep->qpn = krep->qpn; | 275 | urep->qpn = krep->qpn; |
280 | }; | 276 | }; |
281 | 277 | ||
282 | static int ib_ucm_event_process(struct ib_cm_event *evt, | 278 | static int ib_ucm_event_process(struct ib_ucm_context *ctx, |
279 | struct ib_cm_event *evt, | ||
283 | struct ib_ucm_event *uvt) | 280 | struct ib_ucm_event *uvt) |
284 | { | 281 | { |
285 | void *info = NULL; | 282 | void *info = NULL; |
286 | int result; | ||
287 | 283 | ||
288 | switch (evt->event) { | 284 | switch (evt->event) { |
289 | case IB_CM_REQ_RECEIVED: | 285 | case IB_CM_REQ_RECEIVED: |
290 | ib_ucm_event_req_get(&uvt->resp.u.req_resp, | 286 | ib_ucm_event_req_get(ctx, &uvt->resp.u.req_resp, |
291 | &evt->param.req_rcvd); | 287 | &evt->param.req_rcvd); |
292 | uvt->data_len = IB_CM_REQ_PRIVATE_DATA_SIZE; | 288 | uvt->data_len = IB_CM_REQ_PRIVATE_DATA_SIZE; |
293 | uvt->resp.present |= (evt->param.req_rcvd.primary_path ? | 289 | uvt->resp.present = IB_UCM_PRES_PRIMARY; |
294 | IB_UCM_PRES_PRIMARY : 0); | ||
295 | uvt->resp.present |= (evt->param.req_rcvd.alternate_path ? | 290 | uvt->resp.present |= (evt->param.req_rcvd.alternate_path ? |
296 | IB_UCM_PRES_ALTERNATE : 0); | 291 | IB_UCM_PRES_ALTERNATE : 0); |
297 | break; | 292 | break; |
@@ -299,57 +294,46 @@ static int ib_ucm_event_process(struct ib_cm_event *evt, | |||
299 | ib_ucm_event_rep_get(&uvt->resp.u.rep_resp, | 294 | ib_ucm_event_rep_get(&uvt->resp.u.rep_resp, |
300 | &evt->param.rep_rcvd); | 295 | &evt->param.rep_rcvd); |
301 | uvt->data_len = IB_CM_REP_PRIVATE_DATA_SIZE; | 296 | uvt->data_len = IB_CM_REP_PRIVATE_DATA_SIZE; |
302 | |||
303 | break; | 297 | break; |
304 | case IB_CM_RTU_RECEIVED: | 298 | case IB_CM_RTU_RECEIVED: |
305 | uvt->data_len = IB_CM_RTU_PRIVATE_DATA_SIZE; | 299 | uvt->data_len = IB_CM_RTU_PRIVATE_DATA_SIZE; |
306 | uvt->resp.u.send_status = evt->param.send_status; | 300 | uvt->resp.u.send_status = evt->param.send_status; |
307 | |||
308 | break; | 301 | break; |
309 | case IB_CM_DREQ_RECEIVED: | 302 | case IB_CM_DREQ_RECEIVED: |
310 | uvt->data_len = IB_CM_DREQ_PRIVATE_DATA_SIZE; | 303 | uvt->data_len = IB_CM_DREQ_PRIVATE_DATA_SIZE; |
311 | uvt->resp.u.send_status = evt->param.send_status; | 304 | uvt->resp.u.send_status = evt->param.send_status; |
312 | |||
313 | break; | 305 | break; |
314 | case IB_CM_DREP_RECEIVED: | 306 | case IB_CM_DREP_RECEIVED: |
315 | uvt->data_len = IB_CM_DREP_PRIVATE_DATA_SIZE; | 307 | uvt->data_len = IB_CM_DREP_PRIVATE_DATA_SIZE; |
316 | uvt->resp.u.send_status = evt->param.send_status; | 308 | uvt->resp.u.send_status = evt->param.send_status; |
317 | |||
318 | break; | 309 | break; |
319 | case IB_CM_MRA_RECEIVED: | 310 | case IB_CM_MRA_RECEIVED: |
320 | ib_ucm_event_mra_get(&uvt->resp.u.mra_resp, | 311 | uvt->resp.u.mra_resp.timeout = |
321 | &evt->param.mra_rcvd); | 312 | evt->param.mra_rcvd.service_timeout; |
322 | uvt->data_len = IB_CM_MRA_PRIVATE_DATA_SIZE; | 313 | uvt->data_len = IB_CM_MRA_PRIVATE_DATA_SIZE; |
323 | |||
324 | break; | 314 | break; |
325 | case IB_CM_REJ_RECEIVED: | 315 | case IB_CM_REJ_RECEIVED: |
326 | ib_ucm_event_rej_get(&uvt->resp.u.rej_resp, | 316 | uvt->resp.u.rej_resp.reason = evt->param.rej_rcvd.reason; |
327 | &evt->param.rej_rcvd); | ||
328 | uvt->data_len = IB_CM_REJ_PRIVATE_DATA_SIZE; | 317 | uvt->data_len = IB_CM_REJ_PRIVATE_DATA_SIZE; |
329 | uvt->info_len = evt->param.rej_rcvd.ari_length; | 318 | uvt->info_len = evt->param.rej_rcvd.ari_length; |
330 | info = evt->param.rej_rcvd.ari; | 319 | info = evt->param.rej_rcvd.ari; |
331 | |||
332 | break; | 320 | break; |
333 | case IB_CM_LAP_RECEIVED: | 321 | case IB_CM_LAP_RECEIVED: |
334 | ib_ucm_event_lap_get(&uvt->resp.u.lap_resp, | 322 | ib_ucm_event_path_get(&uvt->resp.u.lap_resp.path, |
335 | &evt->param.lap_rcvd); | 323 | evt->param.lap_rcvd.alternate_path); |
336 | uvt->data_len = IB_CM_LAP_PRIVATE_DATA_SIZE; | 324 | uvt->data_len = IB_CM_LAP_PRIVATE_DATA_SIZE; |
337 | uvt->resp.present |= (evt->param.lap_rcvd.alternate_path ? | 325 | uvt->resp.present = IB_UCM_PRES_ALTERNATE; |
338 | IB_UCM_PRES_ALTERNATE : 0); | ||
339 | break; | 326 | break; |
340 | case IB_CM_APR_RECEIVED: | 327 | case IB_CM_APR_RECEIVED: |
341 | ib_ucm_event_apr_get(&uvt->resp.u.apr_resp, | 328 | uvt->resp.u.apr_resp.status = evt->param.apr_rcvd.ap_status; |
342 | &evt->param.apr_rcvd); | ||
343 | uvt->data_len = IB_CM_APR_PRIVATE_DATA_SIZE; | 329 | uvt->data_len = IB_CM_APR_PRIVATE_DATA_SIZE; |
344 | uvt->info_len = evt->param.apr_rcvd.info_len; | 330 | uvt->info_len = evt->param.apr_rcvd.info_len; |
345 | info = evt->param.apr_rcvd.apr_info; | 331 | info = evt->param.apr_rcvd.apr_info; |
346 | |||
347 | break; | 332 | break; |
348 | case IB_CM_SIDR_REQ_RECEIVED: | 333 | case IB_CM_SIDR_REQ_RECEIVED: |
349 | ib_ucm_event_sidr_req_get(&uvt->resp.u.sidr_req_resp, | 334 | ib_ucm_event_sidr_req_get(ctx, &uvt->resp.u.sidr_req_resp, |
350 | &evt->param.sidr_req_rcvd); | 335 | &evt->param.sidr_req_rcvd); |
351 | uvt->data_len = IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE; | 336 | uvt->data_len = IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE; |
352 | |||
353 | break; | 337 | break; |
354 | case IB_CM_SIDR_REP_RECEIVED: | 338 | case IB_CM_SIDR_REP_RECEIVED: |
355 | ib_ucm_event_sidr_rep_get(&uvt->resp.u.sidr_rep_resp, | 339 | ib_ucm_event_sidr_rep_get(&uvt->resp.u.sidr_rep_resp, |
@@ -357,43 +341,35 @@ static int ib_ucm_event_process(struct ib_cm_event *evt, | |||
357 | uvt->data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE; | 341 | uvt->data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE; |
358 | uvt->info_len = evt->param.sidr_rep_rcvd.info_len; | 342 | uvt->info_len = evt->param.sidr_rep_rcvd.info_len; |
359 | info = evt->param.sidr_rep_rcvd.info; | 343 | info = evt->param.sidr_rep_rcvd.info; |
360 | |||
361 | break; | 344 | break; |
362 | default: | 345 | default: |
363 | uvt->resp.u.send_status = evt->param.send_status; | 346 | uvt->resp.u.send_status = evt->param.send_status; |
364 | |||
365 | break; | 347 | break; |
366 | } | 348 | } |
367 | 349 | ||
368 | if (uvt->data_len && evt->private_data) { | 350 | if (uvt->data_len) { |
369 | |||
370 | uvt->data = kmalloc(uvt->data_len, GFP_KERNEL); | 351 | uvt->data = kmalloc(uvt->data_len, GFP_KERNEL); |
371 | if (!uvt->data) { | 352 | if (!uvt->data) |
372 | result = -ENOMEM; | 353 | goto err1; |
373 | goto error; | ||
374 | } | ||
375 | 354 | ||
376 | memcpy(uvt->data, evt->private_data, uvt->data_len); | 355 | memcpy(uvt->data, evt->private_data, uvt->data_len); |
377 | uvt->resp.present |= IB_UCM_PRES_DATA; | 356 | uvt->resp.present |= IB_UCM_PRES_DATA; |
378 | } | 357 | } |
379 | 358 | ||
380 | if (uvt->info_len && info) { | 359 | if (uvt->info_len) { |
381 | |||
382 | uvt->info = kmalloc(uvt->info_len, GFP_KERNEL); | 360 | uvt->info = kmalloc(uvt->info_len, GFP_KERNEL); |
383 | if (!uvt->info) { | 361 | if (!uvt->info) |
384 | result = -ENOMEM; | 362 | goto err2; |
385 | goto error; | ||
386 | } | ||
387 | 363 | ||
388 | memcpy(uvt->info, info, uvt->info_len); | 364 | memcpy(uvt->info, info, uvt->info_len); |
389 | uvt->resp.present |= IB_UCM_PRES_INFO; | 365 | uvt->resp.present |= IB_UCM_PRES_INFO; |
390 | } | 366 | } |
391 | |||
392 | return 0; | 367 | return 0; |
393 | error: | 368 | |
394 | kfree(uvt->info); | 369 | err2: |
395 | kfree(uvt->data); | 370 | kfree(uvt->data); |
396 | return result; | 371 | err1: |
372 | return -ENOMEM; | ||
397 | } | 373 | } |
398 | 374 | ||
399 | static int ib_ucm_event_handler(struct ib_cm_id *cm_id, | 375 | static int ib_ucm_event_handler(struct ib_cm_id *cm_id, |
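ib_ucm_event_process() above switches from a single "error:" label plus a result variable to staged err1/err2 labels, so each label frees exactly what was allocated before the failure point. A generic sketch of that unwind style (not code from the patch):

	#include <linux/types.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	/* Copy two buffers; on failure free only what was already allocated. */
	static void *copy_two(const void *a, size_t alen,
			      const void *b, size_t blen, void **bcopy)
	{
		void *acopy;

		acopy = kmalloc(alen, GFP_KERNEL);
		if (!acopy)
			goto err1;
		memcpy(acopy, a, alen);

		*bcopy = kmalloc(blen, GFP_KERNEL);
		if (!*bcopy)
			goto err2;
		memcpy(*bcopy, b, blen);

		return acopy;

	err2:
		kfree(acopy);		/* undo the first allocation only */
	err1:
		return NULL;
	}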
@@ -403,63 +379,42 @@ static int ib_ucm_event_handler(struct ib_cm_id *cm_id, | |||
403 | struct ib_ucm_context *ctx; | 379 | struct ib_ucm_context *ctx; |
404 | int result = 0; | 380 | int result = 0; |
405 | int id; | 381 | int id; |
406 | /* | ||
407 | * lookup correct context based on event type. | ||
408 | */ | ||
409 | switch (event->event) { | ||
410 | case IB_CM_REQ_RECEIVED: | ||
411 | id = (long)event->param.req_rcvd.listen_id->context; | ||
412 | break; | ||
413 | case IB_CM_SIDR_REQ_RECEIVED: | ||
414 | id = (long)event->param.sidr_req_rcvd.listen_id->context; | ||
415 | break; | ||
416 | default: | ||
417 | id = (long)cm_id->context; | ||
418 | break; | ||
419 | } | ||
420 | 382 | ||
421 | ucm_dbg("Event. CM ID <%d> event <%d>\n", id, event->event); | 383 | ctx = cm_id->context; |
422 | |||
423 | ctx = ib_ucm_ctx_get(id); | ||
424 | if (!ctx) | ||
425 | return -ENOENT; | ||
426 | 384 | ||
427 | if (event->event == IB_CM_REQ_RECEIVED || | 385 | if (event->event == IB_CM_REQ_RECEIVED || |
428 | event->event == IB_CM_SIDR_REQ_RECEIVED) | 386 | event->event == IB_CM_SIDR_REQ_RECEIVED) |
429 | id = IB_UCM_CM_ID_INVALID; | 387 | id = IB_UCM_CM_ID_INVALID; |
388 | else | ||
389 | id = ctx->id; | ||
430 | 390 | ||
431 | uevent = kmalloc(sizeof(*uevent), GFP_KERNEL); | 391 | uevent = kmalloc(sizeof(*uevent), GFP_KERNEL); |
432 | if (!uevent) { | 392 | if (!uevent) |
433 | result = -ENOMEM; | 393 | goto err1; |
434 | goto done; | ||
435 | } | ||
436 | 394 | ||
437 | memset(uevent, 0, sizeof(*uevent)); | 395 | memset(uevent, 0, sizeof(*uevent)); |
438 | |||
439 | uevent->resp.id = id; | 396 | uevent->resp.id = id; |
440 | uevent->resp.event = event->event; | 397 | uevent->resp.event = event->event; |
441 | 398 | ||
442 | result = ib_ucm_event_process(event, uevent); | 399 | result = ib_ucm_event_process(ctx, event, uevent); |
443 | if (result) | 400 | if (result) |
444 | goto done; | 401 | goto err2; |
445 | 402 | ||
446 | uevent->ctx = ctx; | 403 | uevent->ctx = ctx; |
447 | uevent->cm_id = ((event->event == IB_CM_REQ_RECEIVED || | 404 | uevent->cm_id = (id == IB_UCM_CM_ID_INVALID) ? cm_id : NULL; |
448 | event->event == IB_CM_SIDR_REQ_RECEIVED ) ? | ||
449 | cm_id : NULL); | ||
450 | 405 | ||
451 | down(&ctx->file->mutex); | 406 | down(&ctx->file->mutex); |
452 | |||
453 | list_add_tail(&uevent->file_list, &ctx->file->events); | 407 | list_add_tail(&uevent->file_list, &ctx->file->events); |
454 | list_add_tail(&uevent->ctx_list, &ctx->events); | 408 | list_add_tail(&uevent->ctx_list, &ctx->events); |
455 | |||
456 | wake_up_interruptible(&ctx->file->poll_wait); | 409 | wake_up_interruptible(&ctx->file->poll_wait); |
457 | |||
458 | up(&ctx->file->mutex); | 410 | up(&ctx->file->mutex); |
459 | done: | 411 | return 0; |
460 | ctx->error = result; | 412 | |
461 | ib_ucm_ctx_put(ctx); /* func reference */ | 413 | err2: |
462 | return result; | 414 | kfree(uevent); |
415 | err1: | ||
416 | /* Destroy new cm_id's */ | ||
417 | return (id == IB_UCM_CM_ID_INVALID); | ||
463 | } | 418 | } |
464 | 419 | ||
465 | static ssize_t ib_ucm_event(struct ib_ucm_file *file, | 420 | static ssize_t ib_ucm_event(struct ib_ucm_file *file, |
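The queue-and-wake step that survives the rewrite of ib_ucm_event_handler() above is worth calling out: the event is linked onto both the per-context and per-file lists under the file mutex, then any poll()ing reader is woken. Condensed into one helper purely for readability (the helper itself does not exist in the patch):

	static void queue_event(struct ib_ucm_context *ctx,
				struct ib_ucm_event *uevent)
	{
		down(&ctx->file->mutex);
		list_add_tail(&uevent->file_list, &ctx->file->events);
		list_add_tail(&uevent->ctx_list, &ctx->events);
		wake_up_interruptible(&ctx->file->poll_wait);
		up(&ctx->file->mutex);
	}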
@@ -517,9 +472,8 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file, | |||
517 | goto done; | 472 | goto done; |
518 | } | 473 | } |
519 | 474 | ||
520 | ctx->cm_id = uevent->cm_id; | 475 | ctx->cm_id = uevent->cm_id; |
521 | ctx->cm_id->cm_handler = ib_ucm_event_handler; | 476 | ctx->cm_id->context = ctx; |
522 | ctx->cm_id->context = (void *)(unsigned long)ctx->id; | ||
523 | 477 | ||
524 | uevent->resp.id = ctx->id; | 478 | uevent->resp.id = ctx->id; |
525 | 479 | ||
@@ -585,30 +539,29 @@ static ssize_t ib_ucm_create_id(struct ib_ucm_file *file, | |||
585 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) | 539 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) |
586 | return -EFAULT; | 540 | return -EFAULT; |
587 | 541 | ||
542 | down(&file->mutex); | ||
588 | ctx = ib_ucm_ctx_alloc(file); | 543 | ctx = ib_ucm_ctx_alloc(file); |
544 | up(&file->mutex); | ||
589 | if (!ctx) | 545 | if (!ctx) |
590 | return -ENOMEM; | 546 | return -ENOMEM; |
591 | 547 | ||
592 | ctx->cm_id = ib_create_cm_id(ib_ucm_event_handler, | 548 | ctx->cm_id = ib_create_cm_id(ib_ucm_event_handler, ctx); |
593 | (void *)(unsigned long)ctx->id); | 549 | if (IS_ERR(ctx->cm_id)) { |
594 | if (!ctx->cm_id) { | 550 | result = PTR_ERR(ctx->cm_id); |
595 | result = -ENOMEM; | 551 | goto err; |
596 | goto err_cm; | ||
597 | } | 552 | } |
598 | 553 | ||
599 | resp.id = ctx->id; | 554 | resp.id = ctx->id; |
600 | if (copy_to_user((void __user *)(unsigned long)cmd.response, | 555 | if (copy_to_user((void __user *)(unsigned long)cmd.response, |
601 | &resp, sizeof(resp))) { | 556 | &resp, sizeof(resp))) { |
602 | result = -EFAULT; | 557 | result = -EFAULT; |
603 | goto err_ret; | 558 | goto err; |
604 | } | 559 | } |
605 | 560 | ||
606 | return 0; | 561 | return 0; |
607 | err_ret: | ||
608 | ib_destroy_cm_id(ctx->cm_id); | ||
609 | err_cm: | ||
610 | ib_ucm_ctx_put(ctx); /* user reference */ | ||
611 | 562 | ||
563 | err: | ||
564 | ib_ucm_destroy_ctx(file, ctx->id); | ||
612 | return result; | 565 | return result; |
613 | } | 566 | } |
614 | 567 | ||
@@ -617,19 +570,11 @@ static ssize_t ib_ucm_destroy_id(struct ib_ucm_file *file, | |||
617 | int in_len, int out_len) | 570 | int in_len, int out_len) |
618 | { | 571 | { |
619 | struct ib_ucm_destroy_id cmd; | 572 | struct ib_ucm_destroy_id cmd; |
620 | struct ib_ucm_context *ctx; | ||
621 | 573 | ||
622 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) | 574 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) |
623 | return -EFAULT; | 575 | return -EFAULT; |
624 | 576 | ||
625 | ctx = ib_ucm_ctx_get(cmd.id); | 577 | return ib_ucm_destroy_ctx(file, cmd.id); |
626 | if (!ctx) | ||
627 | return -ENOENT; | ||
628 | |||
629 | ib_ucm_ctx_put(ctx); /* user reference */ | ||
630 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
631 | |||
632 | return 0; | ||
633 | } | 578 | } |
634 | 579 | ||
635 | static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file, | 580 | static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file, |
@@ -647,15 +592,9 @@ static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file, | |||
647 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) | 592 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) |
648 | return -EFAULT; | 593 | return -EFAULT; |
649 | 594 | ||
650 | ctx = ib_ucm_ctx_get(cmd.id); | 595 | ctx = ib_ucm_ctx_get(file, cmd.id); |
651 | if (!ctx) | 596 | if (IS_ERR(ctx)) |
652 | return -ENOENT; | 597 | return PTR_ERR(ctx); |
653 | |||
654 | down(&ctx->file->mutex); | ||
655 | if (ctx->file != file) { | ||
656 | result = -EINVAL; | ||
657 | goto done; | ||
658 | } | ||
659 | 598 | ||
660 | resp.service_id = ctx->cm_id->service_id; | 599 | resp.service_id = ctx->cm_id->service_id; |
661 | resp.service_mask = ctx->cm_id->service_mask; | 600 | resp.service_mask = ctx->cm_id->service_mask; |
@@ -666,9 +605,7 @@ static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file, | |||
666 | &resp, sizeof(resp))) | 605 | &resp, sizeof(resp))) |
667 | result = -EFAULT; | 606 | result = -EFAULT; |
668 | 607 | ||
669 | done: | 608 | ib_ucm_ctx_put(ctx); |
670 | up(&ctx->file->mutex); | ||
671 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
672 | return result; | 609 | return result; |
673 | } | 610 | } |
674 | 611 | ||
@@ -683,19 +620,12 @@ static ssize_t ib_ucm_listen(struct ib_ucm_file *file, | |||
683 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) | 620 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) |
684 | return -EFAULT; | 621 | return -EFAULT; |
685 | 622 | ||
686 | ctx = ib_ucm_ctx_get(cmd.id); | 623 | ctx = ib_ucm_ctx_get(file, cmd.id); |
687 | if (!ctx) | 624 | if (IS_ERR(ctx)) |
688 | return -ENOENT; | 625 | return PTR_ERR(ctx); |
689 | 626 | ||
690 | down(&ctx->file->mutex); | 627 | result = ib_cm_listen(ctx->cm_id, cmd.service_id, cmd.service_mask); |
691 | if (ctx->file != file) | 628 | ib_ucm_ctx_put(ctx); |
692 | result = -EINVAL; | ||
693 | else | ||
694 | result = ib_cm_listen(ctx->cm_id, cmd.service_id, | ||
695 | cmd.service_mask); | ||
696 | |||
697 | up(&ctx->file->mutex); | ||
698 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
699 | return result; | 629 | return result; |
700 | } | 630 | } |
701 | 631 | ||
@@ -710,18 +640,12 @@ static ssize_t ib_ucm_establish(struct ib_ucm_file *file, | |||
710 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) | 640 | if (copy_from_user(&cmd, inbuf, sizeof(cmd))) |
711 | return -EFAULT; | 641 | return -EFAULT; |
712 | 642 | ||
713 | ctx = ib_ucm_ctx_get(cmd.id); | 643 | ctx = ib_ucm_ctx_get(file, cmd.id); |
714 | if (!ctx) | 644 | if (IS_ERR(ctx)) |
715 | return -ENOENT; | 645 | return PTR_ERR(ctx); |
716 | |||
717 | down(&ctx->file->mutex); | ||
718 | if (ctx->file != file) | ||
719 | result = -EINVAL; | ||
720 | else | ||
721 | result = ib_cm_establish(ctx->cm_id); | ||
722 | 646 | ||
723 | up(&ctx->file->mutex); | 647 | result = ib_cm_establish(ctx->cm_id); |
724 | ib_ucm_ctx_put(ctx); /* func reference */ | 648 | ib_ucm_ctx_put(ctx); |
725 | return result; | 649 | return result; |
726 | } | 650 | } |
727 | 651 | ||
@@ -768,8 +692,8 @@ static int ib_ucm_path_get(struct ib_sa_path_rec **path, u64 src) | |||
768 | return -EFAULT; | 692 | return -EFAULT; |
769 | } | 693 | } |
770 | 694 | ||
771 | memcpy(sa_path->dgid.raw, ucm_path.dgid, sizeof(union ib_gid)); | 695 | memcpy(sa_path->dgid.raw, ucm_path.dgid, sizeof sa_path->dgid); |
772 | memcpy(sa_path->sgid.raw, ucm_path.sgid, sizeof(union ib_gid)); | 696 | memcpy(sa_path->sgid.raw, ucm_path.sgid, sizeof sa_path->sgid); |
773 | 697 | ||
774 | sa_path->dlid = ucm_path.dlid; | 698 | sa_path->dlid = ucm_path.dlid; |
775 | sa_path->slid = ucm_path.slid; | 699 | sa_path->slid = ucm_path.slid; |
@@ -839,25 +763,17 @@ static ssize_t ib_ucm_send_req(struct ib_ucm_file *file, | |||
839 | param.max_cm_retries = cmd.max_cm_retries; | 763 | param.max_cm_retries = cmd.max_cm_retries; |
840 | param.srq = cmd.srq; | 764 | param.srq = cmd.srq; |
841 | 765 | ||
842 | ctx = ib_ucm_ctx_get(cmd.id); | 766 | ctx = ib_ucm_ctx_get(file, cmd.id); |
843 | if (!ctx) { | 767 | if (!IS_ERR(ctx)) { |
844 | result = -ENOENT; | ||
845 | goto done; | ||
846 | } | ||
847 | |||
848 | down(&ctx->file->mutex); | ||
849 | if (ctx->file != file) | ||
850 | result = -EINVAL; | ||
851 | else | ||
852 | result = ib_send_cm_req(ctx->cm_id, ¶m); | 768 | result = ib_send_cm_req(ctx->cm_id, ¶m); |
769 | ib_ucm_ctx_put(ctx); | ||
770 | } else | ||
771 | result = PTR_ERR(ctx); | ||
853 | 772 | ||
854 | up(&ctx->file->mutex); | ||
855 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
856 | done: | 773 | done: |
857 | kfree(param.private_data); | 774 | kfree(param.private_data); |
858 | kfree(param.primary_path); | 775 | kfree(param.primary_path); |
859 | kfree(param.alternate_path); | 776 | kfree(param.alternate_path); |
860 | |||
861 | return result; | 777 | return result; |
862 | } | 778 | } |
863 | 779 | ||
@@ -890,23 +806,14 @@ static ssize_t ib_ucm_send_rep(struct ib_ucm_file *file, | |||
890 | param.rnr_retry_count = cmd.rnr_retry_count; | 806 | param.rnr_retry_count = cmd.rnr_retry_count; |
891 | param.srq = cmd.srq; | 807 | param.srq = cmd.srq; |
892 | 808 | ||
893 | ctx = ib_ucm_ctx_get(cmd.id); | 809 | ctx = ib_ucm_ctx_get(file, cmd.id); |
894 | if (!ctx) { | 810 | if (!IS_ERR(ctx)) { |
895 | result = -ENOENT; | ||
896 | goto done; | ||
897 | } | ||
898 | |||
899 | down(&ctx->file->mutex); | ||
900 | if (ctx->file != file) | ||
901 | result = -EINVAL; | ||
902 | else | ||
903 | result = ib_send_cm_rep(ctx->cm_id, ¶m); | 811 | result = ib_send_cm_rep(ctx->cm_id, ¶m); |
812 | ib_ucm_ctx_put(ctx); | ||
813 | } else | ||
814 | result = PTR_ERR(ctx); | ||
904 | 815 | ||
905 | up(&ctx->file->mutex); | ||
906 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
907 | done: | ||
908 | kfree(param.private_data); | 816 | kfree(param.private_data); |
909 | |||
910 | return result; | 817 | return result; |
911 | } | 818 | } |
912 | 819 | ||
@@ -928,23 +835,14 @@ static ssize_t ib_ucm_send_private_data(struct ib_ucm_file *file, | |||
928 | if (result) | 835 | if (result) |
929 | return result; | 836 | return result; |
930 | 837 | ||
931 | ctx = ib_ucm_ctx_get(cmd.id); | 838 | ctx = ib_ucm_ctx_get(file, cmd.id); |
932 | if (!ctx) { | 839 | if (!IS_ERR(ctx)) { |
933 | result = -ENOENT; | ||
934 | goto done; | ||
935 | } | ||
936 | |||
937 | down(&ctx->file->mutex); | ||
938 | if (ctx->file != file) | ||
939 | result = -EINVAL; | ||
940 | else | ||
941 | result = func(ctx->cm_id, private_data, cmd.len); | 840 | result = func(ctx->cm_id, private_data, cmd.len); |
841 | ib_ucm_ctx_put(ctx); | ||
842 | } else | ||
843 | result = PTR_ERR(ctx); | ||
942 | 844 | ||
943 | up(&ctx->file->mutex); | ||
944 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
945 | done: | ||
946 | kfree(private_data); | 845 | kfree(private_data); |
947 | |||
948 | return result; | 846 | return result; |
949 | } | 847 | } |
950 | 848 | ||
@@ -995,26 +893,17 @@ static ssize_t ib_ucm_send_info(struct ib_ucm_file *file, | |||
995 | if (result) | 893 | if (result) |
996 | goto done; | 894 | goto done; |
997 | 895 | ||
998 | ctx = ib_ucm_ctx_get(cmd.id); | 896 | ctx = ib_ucm_ctx_get(file, cmd.id); |
999 | if (!ctx) { | 897 | if (!IS_ERR(ctx)) { |
1000 | result = -ENOENT; | 898 | result = func(ctx->cm_id, cmd.status, info, cmd.info_len, |
1001 | goto done; | ||
1002 | } | ||
1003 | |||
1004 | down(&ctx->file->mutex); | ||
1005 | if (ctx->file != file) | ||
1006 | result = -EINVAL; | ||
1007 | else | ||
1008 | result = func(ctx->cm_id, cmd.status, | ||
1009 | info, cmd.info_len, | ||
1010 | data, cmd.data_len); | 899 | data, cmd.data_len); |
900 | ib_ucm_ctx_put(ctx); | ||
901 | } else | ||
902 | result = PTR_ERR(ctx); | ||
1011 | 903 | ||
1012 | up(&ctx->file->mutex); | ||
1013 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
1014 | done: | 904 | done: |
1015 | kfree(data); | 905 | kfree(data); |
1016 | kfree(info); | 906 | kfree(info); |
1017 | |||
1018 | return result; | 907 | return result; |
1019 | } | 908 | } |
1020 | 909 | ||
@@ -1048,24 +937,14 @@ static ssize_t ib_ucm_send_mra(struct ib_ucm_file *file, | |||
1048 | if (result) | 937 | if (result) |
1049 | return result; | 938 | return result; |
1050 | 939 | ||
1051 | ctx = ib_ucm_ctx_get(cmd.id); | 940 | ctx = ib_ucm_ctx_get(file, cmd.id); |
1052 | if (!ctx) { | 941 | if (!IS_ERR(ctx)) { |
1053 | result = -ENOENT; | 942 | result = ib_send_cm_mra(ctx->cm_id, cmd.timeout, data, cmd.len); |
1054 | goto done; | 943 | ib_ucm_ctx_put(ctx); |
1055 | } | 944 | } else |
945 | result = PTR_ERR(ctx); | ||
1056 | 946 | ||
1057 | down(&ctx->file->mutex); | ||
1058 | if (ctx->file != file) | ||
1059 | result = -EINVAL; | ||
1060 | else | ||
1061 | result = ib_send_cm_mra(ctx->cm_id, cmd.timeout, | ||
1062 | data, cmd.len); | ||
1063 | |||
1064 | up(&ctx->file->mutex); | ||
1065 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
1066 | done: | ||
1067 | kfree(data); | 947 | kfree(data); |
1068 | |||
1069 | return result; | 948 | return result; |
1070 | } | 949 | } |
1071 | 950 | ||
@@ -1090,24 +969,16 @@ static ssize_t ib_ucm_send_lap(struct ib_ucm_file *file, | |||
1090 | if (result) | 969 | if (result) |
1091 | goto done; | 970 | goto done; |
1092 | 971 | ||
1093 | ctx = ib_ucm_ctx_get(cmd.id); | 972 | ctx = ib_ucm_ctx_get(file, cmd.id); |
1094 | if (!ctx) { | 973 | if (!IS_ERR(ctx)) { |
1095 | result = -ENOENT; | ||
1096 | goto done; | ||
1097 | } | ||
1098 | |||
1099 | down(&ctx->file->mutex); | ||
1100 | if (ctx->file != file) | ||
1101 | result = -EINVAL; | ||
1102 | else | ||
1103 | result = ib_send_cm_lap(ctx->cm_id, path, data, cmd.len); | 974 | result = ib_send_cm_lap(ctx->cm_id, path, data, cmd.len); |
975 | ib_ucm_ctx_put(ctx); | ||
976 | } else | ||
977 | result = PTR_ERR(ctx); | ||
1104 | 978 | ||
1105 | up(&ctx->file->mutex); | ||
1106 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
1107 | done: | 979 | done: |
1108 | kfree(data); | 980 | kfree(data); |
1109 | kfree(path); | 981 | kfree(path); |
1110 | |||
1111 | return result; | 982 | return result; |
1112 | } | 983 | } |
1113 | 984 | ||
@@ -1140,24 +1011,16 @@ static ssize_t ib_ucm_send_sidr_req(struct ib_ucm_file *file, | |||
1140 | param.max_cm_retries = cmd.max_cm_retries; | 1011 | param.max_cm_retries = cmd.max_cm_retries; |
1141 | param.pkey = cmd.pkey; | 1012 | param.pkey = cmd.pkey; |
1142 | 1013 | ||
1143 | ctx = ib_ucm_ctx_get(cmd.id); | 1014 | ctx = ib_ucm_ctx_get(file, cmd.id); |
1144 | if (!ctx) { | 1015 | if (!IS_ERR(ctx)) { |
1145 | result = -ENOENT; | ||
1146 | goto done; | ||
1147 | } | ||
1148 | |||
1149 | down(&ctx->file->mutex); | ||
1150 | if (ctx->file != file) | ||
1151 | result = -EINVAL; | ||
1152 | else | ||
1153 | result = ib_send_cm_sidr_req(ctx->cm_id, ¶m); | 1016 | result = ib_send_cm_sidr_req(ctx->cm_id, ¶m); |
1017 | ib_ucm_ctx_put(ctx); | ||
1018 | } else | ||
1019 | result = PTR_ERR(ctx); | ||
1154 | 1020 | ||
1155 | up(&ctx->file->mutex); | ||
1156 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
1157 | done: | 1021 | done: |
1158 | kfree(param.private_data); | 1022 | kfree(param.private_data); |
1159 | kfree(param.path); | 1023 | kfree(param.path); |
1160 | |||
1161 | return result; | 1024 | return result; |
1162 | } | 1025 | } |
1163 | 1026 | ||
@@ -1184,30 +1047,22 @@ static ssize_t ib_ucm_send_sidr_rep(struct ib_ucm_file *file, | |||
1184 | if (result) | 1047 | if (result) |
1185 | goto done; | 1048 | goto done; |
1186 | 1049 | ||
1187 | param.qp_num = cmd.qpn; | 1050 | param.qp_num = cmd.qpn; |
1188 | param.qkey = cmd.qkey; | 1051 | param.qkey = cmd.qkey; |
1189 | param.status = cmd.status; | 1052 | param.status = cmd.status; |
1190 | param.info_length = cmd.info_len; | 1053 | param.info_length = cmd.info_len; |
1191 | param.private_data_len = cmd.data_len; | 1054 | param.private_data_len = cmd.data_len; |
1192 | |||
1193 | ctx = ib_ucm_ctx_get(cmd.id); | ||
1194 | if (!ctx) { | ||
1195 | result = -ENOENT; | ||
1196 | goto done; | ||
1197 | } | ||
1198 | 1055 | ||
1199 | down(&ctx->file->mutex); | 1056 | ctx = ib_ucm_ctx_get(file, cmd.id); |
1200 | if (ctx->file != file) | 1057 | if (!IS_ERR(ctx)) { |
1201 | result = -EINVAL; | ||
1202 | else | ||
1203 | result = ib_send_cm_sidr_rep(ctx->cm_id, ¶m); | 1058 | result = ib_send_cm_sidr_rep(ctx->cm_id, ¶m); |
1059 | ib_ucm_ctx_put(ctx); | ||
1060 | } else | ||
1061 | result = PTR_ERR(ctx); | ||
1204 | 1062 | ||
1205 | up(&ctx->file->mutex); | ||
1206 | ib_ucm_ctx_put(ctx); /* func reference */ | ||
1207 | done: | 1063 | done: |
1208 | kfree(param.private_data); | 1064 | kfree(param.private_data); |
1209 | kfree(param.info); | 1065 | kfree(param.info); |
1210 | |||
1211 | return result; | 1066 | return result; |
1212 | } | 1067 | } |
1213 | 1068 | ||
@@ -1305,22 +1160,17 @@ static int ib_ucm_close(struct inode *inode, struct file *filp) | |||
1305 | struct ib_ucm_context *ctx; | 1160 | struct ib_ucm_context *ctx; |
1306 | 1161 | ||
1307 | down(&file->mutex); | 1162 | down(&file->mutex); |
1308 | |||
1309 | while (!list_empty(&file->ctxs)) { | 1163 | while (!list_empty(&file->ctxs)) { |
1310 | 1164 | ||
1311 | ctx = list_entry(file->ctxs.next, | 1165 | ctx = list_entry(file->ctxs.next, |
1312 | struct ib_ucm_context, file_list); | 1166 | struct ib_ucm_context, file_list); |
1313 | 1167 | ||
1314 | up(&ctx->file->mutex); | 1168 | up(&file->mutex); |
1315 | ib_ucm_ctx_put(ctx); /* user reference */ | 1169 | ib_ucm_destroy_ctx(file, ctx->id); |
1316 | down(&file->mutex); | 1170 | down(&file->mutex); |
1317 | } | 1171 | } |
1318 | |||
1319 | up(&file->mutex); | 1172 | up(&file->mutex); |
1320 | |||
1321 | kfree(file); | 1173 | kfree(file); |
1322 | |||
1323 | ucm_dbg("Deleted struct\n"); | ||
1324 | return 0; | 1174 | return 0; |
1325 | } | 1175 | } |
1326 | 1176 | ||
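In the ib_ucm_close() hunk above, the file mutex is dropped around each destroy because ib_ucm_destroy_ctx() retakes file->mutex to unlink the context's pending events; list_empty() is then rechecked after relocking. The loop's shape, rewritten as a standalone sketch (the wrapper function is hypothetical):

	static void close_all_contexts(struct ib_ucm_file *file)
	{
		struct ib_ucm_context *ctx;

		down(&file->mutex);
		while (!list_empty(&file->ctxs)) {
			ctx = list_entry(file->ctxs.next,
					 struct ib_ucm_context, file_list);
			up(&file->mutex);	/* destroy retakes file->mutex */
			ib_ucm_destroy_ctx(file, ctx->id);
			down(&file->mutex);
		}
		up(&file->mutex);
	}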
diff --git a/drivers/infiniband/core/ucm.h b/drivers/infiniband/core/ucm.h index 6d36606151b2..c8819b928a1b 100644 --- a/drivers/infiniband/core/ucm.h +++ b/drivers/infiniband/core/ucm.h | |||
@@ -40,17 +40,15 @@ | |||
40 | #include <linux/cdev.h> | 40 | #include <linux/cdev.h> |
41 | #include <linux/idr.h> | 41 | #include <linux/idr.h> |
42 | 42 | ||
43 | #include <ib_cm.h> | 43 | #include <rdma/ib_cm.h> |
44 | #include <ib_user_cm.h> | 44 | #include <rdma/ib_user_cm.h> |
45 | 45 | ||
46 | #define IB_UCM_CM_ID_INVALID 0xffffffff | 46 | #define IB_UCM_CM_ID_INVALID 0xffffffff |
47 | 47 | ||
48 | struct ib_ucm_file { | 48 | struct ib_ucm_file { |
49 | struct semaphore mutex; | 49 | struct semaphore mutex; |
50 | struct file *filp; | 50 | struct file *filp; |
51 | /* | 51 | |
52 | * list of pending events | ||
53 | */ | ||
54 | struct list_head ctxs; /* list of active connections */ | 52 | struct list_head ctxs; /* list of active connections */ |
55 | struct list_head events; /* list of pending events */ | 53 | struct list_head events; /* list of pending events */ |
56 | wait_queue_head_t poll_wait; | 54 | wait_queue_head_t poll_wait; |
@@ -58,12 +56,11 @@ struct ib_ucm_file { | |||
58 | 56 | ||
59 | struct ib_ucm_context { | 57 | struct ib_ucm_context { |
60 | int id; | 58 | int id; |
61 | int ref; | 59 | wait_queue_head_t wait; |
62 | int error; | 60 | atomic_t ref; |
63 | 61 | ||
64 | struct ib_ucm_file *file; | 62 | struct ib_ucm_file *file; |
65 | struct ib_cm_id *cm_id; | 63 | struct ib_cm_id *cm_id; |
66 | struct semaphore mutex; | ||
67 | 64 | ||
68 | struct list_head events; /* list of pending events. */ | 65 | struct list_head events; /* list of pending events. */ |
69 | struct list_head file_list; /* member in file ctx list */ | 66 | struct list_head file_list; /* member in file ctx list */ |
diff --git a/drivers/infiniband/core/ud_header.c b/drivers/infiniband/core/ud_header.c index dc4eb1db5e96..527b23450ab3 100644 --- a/drivers/infiniband/core/ud_header.c +++ b/drivers/infiniband/core/ud_header.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -34,7 +35,7 @@ | |||
34 | 35 | ||
35 | #include <linux/errno.h> | 36 | #include <linux/errno.h> |
36 | 37 | ||
37 | #include <ib_pack.h> | 38 | #include <rdma/ib_pack.h> |
38 | 39 | ||
39 | #define STRUCT_FIELD(header, field) \ | 40 | #define STRUCT_FIELD(header, field) \ |
40 | .struct_offset_bytes = offsetof(struct ib_unpacked_ ## header, field), \ | 41 | .struct_offset_bytes = offsetof(struct ib_unpacked_ ## header, field), \ |
@@ -194,6 +195,7 @@ void ib_ud_header_init(int payload_bytes, | |||
194 | struct ib_ud_header *header) | 195 | struct ib_ud_header *header) |
195 | { | 196 | { |
196 | int header_len; | 197 | int header_len; |
198 | u16 packet_length; | ||
197 | 199 | ||
198 | memset(header, 0, sizeof *header); | 200 | memset(header, 0, sizeof *header); |
199 | 201 | ||
@@ -208,7 +210,7 @@ void ib_ud_header_init(int payload_bytes, | |||
208 | header->lrh.link_version = 0; | 210 | header->lrh.link_version = 0; |
209 | header->lrh.link_next_header = | 211 | header->lrh.link_next_header = |
210 | grh_present ? IB_LNH_IBA_GLOBAL : IB_LNH_IBA_LOCAL; | 212 | grh_present ? IB_LNH_IBA_GLOBAL : IB_LNH_IBA_LOCAL; |
211 | header->lrh.packet_length = (IB_LRH_BYTES + | 213 | packet_length = (IB_LRH_BYTES + |
212 | IB_BTH_BYTES + | 214 | IB_BTH_BYTES + |
213 | IB_DETH_BYTES + | 215 | IB_DETH_BYTES + |
214 | payload_bytes + | 216 | payload_bytes + |
@@ -217,8 +219,7 @@ void ib_ud_header_init(int payload_bytes, | |||
217 | 219 | ||
218 | header->grh_present = grh_present; | 220 | header->grh_present = grh_present; |
219 | if (grh_present) { | 221 | if (grh_present) { |
220 | header->lrh.packet_length += IB_GRH_BYTES / 4; | 222 | packet_length += IB_GRH_BYTES / 4; |
221 | |||
222 | header->grh.ip_version = 6; | 223 | header->grh.ip_version = 6; |
223 | header->grh.payload_length = | 224 | header->grh.payload_length = |
224 | cpu_to_be16((IB_BTH_BYTES + | 225 | cpu_to_be16((IB_BTH_BYTES + |
@@ -229,7 +230,7 @@ void ib_ud_header_init(int payload_bytes, | |||
229 | header->grh.next_header = 0x1b; | 230 | header->grh.next_header = 0x1b; |
230 | } | 231 | } |
231 | 232 | ||
232 | cpu_to_be16s(&header->lrh.packet_length); | 233 | header->lrh.packet_length = cpu_to_be16(packet_length); |
233 | 234 | ||
234 | if (header->immediate_present) | 235 | if (header->immediate_present) |
235 | header->bth.opcode = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE; | 236 | header->bth.opcode = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE; |
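The ud_header.c change above fixes a byte-order wart: instead of accumulating the length directly in the __be16 packet_length field and patching it in place with cpu_to_be16s(), the sum is built in a host-order local and converted once when stored, so the wire-format field never transiently holds CPU-order data. A minimal sketch of that pattern (struct and function names are not from the patch):

	#include <linux/types.h>
	#include <asm/byteorder.h>

	struct hdr {
		__be16 packet_length;		/* wire format, big-endian */
	};

	static void set_len(struct hdr *h, u16 base_dwords, u16 grh_dwords)
	{
		u16 len = base_dwords;		/* host order while computing */

		len += grh_dwords;		/* optional GRH contribution */
		h->packet_length = cpu_to_be16(len);	/* single conversion */
	}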
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c index 2e38792df533..7c2f03057ddb 100644 --- a/drivers/infiniband/core/user_mad.c +++ b/drivers/infiniband/core/user_mad.c | |||
@@ -1,6 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Voltaire, Inc. All rights reserved. | 3 | * Copyright (c) 2005 Voltaire, Inc. All rights reserved. |
4 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | 4 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. |
5 | * | 5 | * |
6 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
@@ -49,8 +49,8 @@ | |||
49 | #include <asm/uaccess.h> | 49 | #include <asm/uaccess.h> |
50 | #include <asm/semaphore.h> | 50 | #include <asm/semaphore.h> |
51 | 51 | ||
52 | #include <ib_mad.h> | 52 | #include <rdma/ib_mad.h> |
53 | #include <ib_user_mad.h> | 53 | #include <rdma/ib_user_mad.h> |
54 | 54 | ||
55 | MODULE_AUTHOR("Roland Dreier"); | 55 | MODULE_AUTHOR("Roland Dreier"); |
56 | MODULE_DESCRIPTION("InfiniBand userspace MAD packet access"); | 56 | MODULE_DESCRIPTION("InfiniBand userspace MAD packet access"); |
@@ -271,7 +271,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf, | |||
271 | struct ib_send_wr *bad_wr; | 271 | struct ib_send_wr *bad_wr; |
272 | struct ib_rmpp_mad *rmpp_mad; | 272 | struct ib_rmpp_mad *rmpp_mad; |
273 | u8 method; | 273 | u8 method; |
274 | u64 *tid; | 274 | __be64 *tid; |
275 | int ret, length, hdr_len, data_len, rmpp_hdr_size; | 275 | int ret, length, hdr_len, data_len, rmpp_hdr_size; |
276 | int rmpp_active = 0; | 276 | int rmpp_active = 0; |
277 | 277 | ||
@@ -316,7 +316,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf, | |||
316 | if (packet->mad.hdr.grh_present) { | 316 | if (packet->mad.hdr.grh_present) { |
317 | ah_attr.ah_flags = IB_AH_GRH; | 317 | ah_attr.ah_flags = IB_AH_GRH; |
318 | memcpy(ah_attr.grh.dgid.raw, packet->mad.hdr.gid, 16); | 318 | memcpy(ah_attr.grh.dgid.raw, packet->mad.hdr.gid, 16); |
319 | ah_attr.grh.flow_label = packet->mad.hdr.flow_label; | 319 | ah_attr.grh.flow_label = be32_to_cpu(packet->mad.hdr.flow_label); |
320 | ah_attr.grh.hop_limit = packet->mad.hdr.hop_limit; | 320 | ah_attr.grh.hop_limit = packet->mad.hdr.hop_limit; |
321 | ah_attr.grh.traffic_class = packet->mad.hdr.traffic_class; | 321 | ah_attr.grh.traffic_class = packet->mad.hdr.traffic_class; |
322 | } | 322 | } |
diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h index 7696022f9a4e..180b3d4765e4 100644 --- a/drivers/infiniband/core/uverbs.h +++ b/drivers/infiniband/core/uverbs.h | |||
@@ -1,6 +1,8 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
5 | * Copyright (c) 2005 Voltaire, Inc. All rights reserved. | ||
4 | * | 6 | * |
5 | * This software is available to you under a choice of one of two | 7 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 8 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -43,8 +45,8 @@ | |||
43 | #include <linux/kref.h> | 45 | #include <linux/kref.h> |
44 | #include <linux/idr.h> | 46 | #include <linux/idr.h> |
45 | 47 | ||
46 | #include <ib_verbs.h> | 48 | #include <rdma/ib_verbs.h> |
47 | #include <ib_user_verbs.h> | 49 | #include <rdma/ib_user_verbs.h> |
48 | 50 | ||
49 | struct ib_uverbs_device { | 51 | struct ib_uverbs_device { |
50 | int devnum; | 52 | int devnum; |
@@ -97,10 +99,12 @@ extern struct idr ib_uverbs_mw_idr; | |||
97 | extern struct idr ib_uverbs_ah_idr; | 99 | extern struct idr ib_uverbs_ah_idr; |
98 | extern struct idr ib_uverbs_cq_idr; | 100 | extern struct idr ib_uverbs_cq_idr; |
99 | extern struct idr ib_uverbs_qp_idr; | 101 | extern struct idr ib_uverbs_qp_idr; |
102 | extern struct idr ib_uverbs_srq_idr; | ||
100 | 103 | ||
101 | void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context); | 104 | void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context); |
102 | void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr); | 105 | void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr); |
103 | void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr); | 106 | void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr); |
107 | void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr); | ||
104 | 108 | ||
105 | int ib_umem_get(struct ib_device *dev, struct ib_umem *mem, | 109 | int ib_umem_get(struct ib_device *dev, struct ib_umem *mem, |
106 | void *addr, size_t size, int write); | 110 | void *addr, size_t size, int write); |
@@ -129,5 +133,8 @@ IB_UVERBS_DECLARE_CMD(modify_qp); | |||
129 | IB_UVERBS_DECLARE_CMD(destroy_qp); | 133 | IB_UVERBS_DECLARE_CMD(destroy_qp); |
130 | IB_UVERBS_DECLARE_CMD(attach_mcast); | 134 | IB_UVERBS_DECLARE_CMD(attach_mcast); |
131 | IB_UVERBS_DECLARE_CMD(detach_mcast); | 135 | IB_UVERBS_DECLARE_CMD(detach_mcast); |
136 | IB_UVERBS_DECLARE_CMD(create_srq); | ||
137 | IB_UVERBS_DECLARE_CMD(modify_srq); | ||
138 | IB_UVERBS_DECLARE_CMD(destroy_srq); | ||
132 | 139 | ||
133 | #endif /* UVERBS_H */ | 140 | #endif /* UVERBS_H */ |
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index 5f2bbcda4c73..ebccf9f38af9 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c | |||
@@ -724,6 +724,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file, | |||
724 | struct ib_uobject *uobj; | 724 | struct ib_uobject *uobj; |
725 | struct ib_pd *pd; | 725 | struct ib_pd *pd; |
726 | struct ib_cq *scq, *rcq; | 726 | struct ib_cq *scq, *rcq; |
727 | struct ib_srq *srq; | ||
727 | struct ib_qp *qp; | 728 | struct ib_qp *qp; |
728 | struct ib_qp_init_attr attr; | 729 | struct ib_qp_init_attr attr; |
729 | int ret; | 730 | int ret; |
@@ -747,10 +748,12 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file, | |||
747 | pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle); | 748 | pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle); |
748 | scq = idr_find(&ib_uverbs_cq_idr, cmd.send_cq_handle); | 749 | scq = idr_find(&ib_uverbs_cq_idr, cmd.send_cq_handle); |
749 | rcq = idr_find(&ib_uverbs_cq_idr, cmd.recv_cq_handle); | 750 | rcq = idr_find(&ib_uverbs_cq_idr, cmd.recv_cq_handle); |
751 | srq = cmd.is_srq ? idr_find(&ib_uverbs_srq_idr, cmd.srq_handle) : NULL; | ||
750 | 752 | ||
751 | if (!pd || pd->uobject->context != file->ucontext || | 753 | if (!pd || pd->uobject->context != file->ucontext || |
752 | !scq || scq->uobject->context != file->ucontext || | 754 | !scq || scq->uobject->context != file->ucontext || |
753 | !rcq || rcq->uobject->context != file->ucontext) { | 755 | !rcq || rcq->uobject->context != file->ucontext || |
756 | (cmd.is_srq && (!srq || srq->uobject->context != file->ucontext))) { | ||
754 | ret = -EINVAL; | 757 | ret = -EINVAL; |
755 | goto err_up; | 758 | goto err_up; |
756 | } | 759 | } |
@@ -759,7 +762,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file, | |||
759 | attr.qp_context = file; | 762 | attr.qp_context = file; |
760 | attr.send_cq = scq; | 763 | attr.send_cq = scq; |
761 | attr.recv_cq = rcq; | 764 | attr.recv_cq = rcq; |
762 | attr.srq = NULL; | 765 | attr.srq = srq; |
763 | attr.sq_sig_type = cmd.sq_sig_all ? IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR; | 766 | attr.sq_sig_type = cmd.sq_sig_all ? IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR; |
764 | attr.qp_type = cmd.qp_type; | 767 | attr.qp_type = cmd.qp_type; |
765 | 768 | ||
@@ -1004,3 +1007,178 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file, | |||
1004 | 1007 | ||
1005 | return ret ? ret : in_len; | 1008 | return ret ? ret : in_len; |
1006 | } | 1009 | } |
1010 | |||
1011 | ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file, | ||
1012 | const char __user *buf, int in_len, | ||
1013 | int out_len) | ||
1014 | { | ||
1015 | struct ib_uverbs_create_srq cmd; | ||
1016 | struct ib_uverbs_create_srq_resp resp; | ||
1017 | struct ib_udata udata; | ||
1018 | struct ib_uobject *uobj; | ||
1019 | struct ib_pd *pd; | ||
1020 | struct ib_srq *srq; | ||
1021 | struct ib_srq_init_attr attr; | ||
1022 | int ret; | ||
1023 | |||
1024 | if (out_len < sizeof resp) | ||
1025 | return -ENOSPC; | ||
1026 | |||
1027 | if (copy_from_user(&cmd, buf, sizeof cmd)) | ||
1028 | return -EFAULT; | ||
1029 | |||
1030 | INIT_UDATA(&udata, buf + sizeof cmd, | ||
1031 | (unsigned long) cmd.response + sizeof resp, | ||
1032 | in_len - sizeof cmd, out_len - sizeof resp); | ||
1033 | |||
1034 | uobj = kmalloc(sizeof *uobj, GFP_KERNEL); | ||
1035 | if (!uobj) | ||
1036 | return -ENOMEM; | ||
1037 | |||
1038 | down(&ib_uverbs_idr_mutex); | ||
1039 | |||
1040 | pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle); | ||
1041 | |||
1042 | if (!pd || pd->uobject->context != file->ucontext) { | ||
1043 | ret = -EINVAL; | ||
1044 | goto err_up; | ||
1045 | } | ||
1046 | |||
1047 | attr.event_handler = ib_uverbs_srq_event_handler; | ||
1048 | attr.srq_context = file; | ||
1049 | attr.attr.max_wr = cmd.max_wr; | ||
1050 | attr.attr.max_sge = cmd.max_sge; | ||
1051 | attr.attr.srq_limit = cmd.srq_limit; | ||
1052 | |||
1053 | uobj->user_handle = cmd.user_handle; | ||
1054 | uobj->context = file->ucontext; | ||
1055 | |||
1056 | srq = pd->device->create_srq(pd, &attr, &udata); | ||
1057 | if (IS_ERR(srq)) { | ||
1058 | ret = PTR_ERR(srq); | ||
1059 | goto err_up; | ||
1060 | } | ||
1061 | |||
1062 | srq->device = pd->device; | ||
1063 | srq->pd = pd; | ||
1064 | srq->uobject = uobj; | ||
1065 | srq->event_handler = attr.event_handler; | ||
1066 | srq->srq_context = attr.srq_context; | ||
1067 | atomic_inc(&pd->usecnt); | ||
1068 | atomic_set(&srq->usecnt, 0); | ||
1069 | |||
1070 | memset(&resp, 0, sizeof resp); | ||
1071 | |||
1072 | retry: | ||
1073 | if (!idr_pre_get(&ib_uverbs_srq_idr, GFP_KERNEL)) { | ||
1074 | ret = -ENOMEM; | ||
1075 | goto err_destroy; | ||
1076 | } | ||
1077 | |||
1078 | ret = idr_get_new(&ib_uverbs_srq_idr, srq, &uobj->id); | ||
1079 | |||
1080 | if (ret == -EAGAIN) | ||
1081 | goto retry; | ||
1082 | if (ret) | ||
1083 | goto err_destroy; | ||
1084 | |||
1085 | resp.srq_handle = uobj->id; | ||
1086 | |||
1087 | spin_lock_irq(&file->ucontext->lock); | ||
1088 | list_add_tail(&uobj->list, &file->ucontext->srq_list); | ||
1089 | spin_unlock_irq(&file->ucontext->lock); | ||
1090 | |||
1091 | if (copy_to_user((void __user *) (unsigned long) cmd.response, | ||
1092 | &resp, sizeof resp)) { | ||
1093 | ret = -EFAULT; | ||
1094 | goto err_list; | ||
1095 | } | ||
1096 | |||
1097 | up(&ib_uverbs_idr_mutex); | ||
1098 | |||
1099 | return in_len; | ||
1100 | |||
1101 | err_list: | ||
1102 | spin_lock_irq(&file->ucontext->lock); | ||
1103 | list_del(&uobj->list); | ||
1104 | spin_unlock_irq(&file->ucontext->lock); | ||
1105 | |||
1106 | err_destroy: | ||
1107 | ib_destroy_srq(srq); | ||
1108 | |||
1109 | err_up: | ||
1110 | up(&ib_uverbs_idr_mutex); | ||
1111 | |||
1112 | kfree(uobj); | ||
1113 | return ret; | ||
1114 | } | ||
1115 | |||
1116 | ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file, | ||
1117 | const char __user *buf, int in_len, | ||
1118 | int out_len) | ||
1119 | { | ||
1120 | struct ib_uverbs_modify_srq cmd; | ||
1121 | struct ib_srq *srq; | ||
1122 | struct ib_srq_attr attr; | ||
1123 | int ret; | ||
1124 | |||
1125 | if (copy_from_user(&cmd, buf, sizeof cmd)) | ||
1126 | return -EFAULT; | ||
1127 | |||
1128 | down(&ib_uverbs_idr_mutex); | ||
1129 | |||
1130 | srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle); | ||
1131 | if (!srq || srq->uobject->context != file->ucontext) { | ||
1132 | ret = -EINVAL; | ||
1133 | goto out; | ||
1134 | } | ||
1135 | |||
1136 | attr.max_wr = cmd.max_wr; | ||
1137 | attr.max_sge = cmd.max_sge; | ||
1138 | attr.srq_limit = cmd.srq_limit; | ||
1139 | |||
1140 | ret = ib_modify_srq(srq, &attr, cmd.attr_mask); | ||
1141 | |||
1142 | out: | ||
1143 | up(&ib_uverbs_idr_mutex); | ||
1144 | |||
1145 | return ret ? ret : in_len; | ||
1146 | } | ||
1147 | |||
1148 | ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file, | ||
1149 | const char __user *buf, int in_len, | ||
1150 | int out_len) | ||
1151 | { | ||
1152 | struct ib_uverbs_destroy_srq cmd; | ||
1153 | struct ib_srq *srq; | ||
1154 | struct ib_uobject *uobj; | ||
1155 | int ret = -EINVAL; | ||
1156 | |||
1157 | if (copy_from_user(&cmd, buf, sizeof cmd)) | ||
1158 | return -EFAULT; | ||
1159 | |||
1160 | down(&ib_uverbs_idr_mutex); | ||
1161 | |||
1162 | srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle); | ||
1163 | if (!srq || srq->uobject->context != file->ucontext) | ||
1164 | goto out; | ||
1165 | |||
1166 | uobj = srq->uobject; | ||
1167 | |||
1168 | ret = ib_destroy_srq(srq); | ||
1169 | if (ret) | ||
1170 | goto out; | ||
1171 | |||
1172 | idr_remove(&ib_uverbs_srq_idr, cmd.srq_handle); | ||
1173 | |||
1174 | spin_lock_irq(&file->ucontext->lock); | ||
1175 | list_del(&uobj->list); | ||
1176 | spin_unlock_irq(&file->ucontext->lock); | ||
1177 | |||
1178 | kfree(uobj); | ||
1179 | |||
1180 | out: | ||
1181 | up(&ib_uverbs_idr_mutex); | ||
1182 | |||
1183 | return ret ? ret : in_len; | ||
1184 | } | ||
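ib_uverbs_create_srq() above uses the idr allocation idiom of this kernel generation: idr_pre_get() preallocates memory, and idr_get_new() may still return -EAGAIN, in which case the caller simply preallocates and retries. Pulled out as a generic sketch (the helper is not part of the patch):

	static int get_handle(struct idr *idr, void *obj, int *id)
	{
		int ret;

	retry:
		if (!idr_pre_get(idr, GFP_KERNEL))
			return -ENOMEM;		/* preallocation failed */

		ret = idr_get_new(idr, obj, id);
		if (ret == -EAGAIN)
			goto retry;		/* preallocate again and retry */

		return ret;
	}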
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c index 5f6e9ea29cd7..09caf5b1ef36 100644 --- a/drivers/infiniband/core/uverbs_main.c +++ b/drivers/infiniband/core/uverbs_main.c | |||
@@ -1,6 +1,8 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
5 | * Copyright (c) 2005 Voltaire, Inc. All rights reserved. | ||
4 | * | 6 | * |
5 | * This software is available to you under a choice of one of two | 7 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 8 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -67,6 +69,7 @@ DEFINE_IDR(ib_uverbs_mw_idr); | |||
67 | DEFINE_IDR(ib_uverbs_ah_idr); | 69 | DEFINE_IDR(ib_uverbs_ah_idr); |
68 | DEFINE_IDR(ib_uverbs_cq_idr); | 70 | DEFINE_IDR(ib_uverbs_cq_idr); |
69 | DEFINE_IDR(ib_uverbs_qp_idr); | 71 | DEFINE_IDR(ib_uverbs_qp_idr); |
72 | DEFINE_IDR(ib_uverbs_srq_idr); | ||
70 | 73 | ||
71 | static spinlock_t map_lock; | 74 | static spinlock_t map_lock; |
72 | static DECLARE_BITMAP(dev_map, IB_UVERBS_MAX_DEVICES); | 75 | static DECLARE_BITMAP(dev_map, IB_UVERBS_MAX_DEVICES); |
@@ -91,6 +94,9 @@ static ssize_t (*uverbs_cmd_table[])(struct ib_uverbs_file *file, | |||
91 | [IB_USER_VERBS_CMD_DESTROY_QP] = ib_uverbs_destroy_qp, | 94 | [IB_USER_VERBS_CMD_DESTROY_QP] = ib_uverbs_destroy_qp, |
92 | [IB_USER_VERBS_CMD_ATTACH_MCAST] = ib_uverbs_attach_mcast, | 95 | [IB_USER_VERBS_CMD_ATTACH_MCAST] = ib_uverbs_attach_mcast, |
93 | [IB_USER_VERBS_CMD_DETACH_MCAST] = ib_uverbs_detach_mcast, | 96 | [IB_USER_VERBS_CMD_DETACH_MCAST] = ib_uverbs_detach_mcast, |
97 | [IB_USER_VERBS_CMD_CREATE_SRQ] = ib_uverbs_create_srq, | ||
98 | [IB_USER_VERBS_CMD_MODIFY_SRQ] = ib_uverbs_modify_srq, | ||
99 | [IB_USER_VERBS_CMD_DESTROY_SRQ] = ib_uverbs_destroy_srq, | ||
94 | }; | 100 | }; |
95 | 101 | ||
96 | static struct vfsmount *uverbs_event_mnt; | 102 | static struct vfsmount *uverbs_event_mnt; |
@@ -125,7 +131,14 @@ static int ib_dealloc_ucontext(struct ib_ucontext *context) | |||
125 | kfree(uobj); | 131 | kfree(uobj); |
126 | } | 132 | } |
127 | 133 | ||
128 | /* XXX Free SRQs */ | 134 | list_for_each_entry_safe(uobj, tmp, &context->srq_list, list) { |
135 | struct ib_srq *srq = idr_find(&ib_uverbs_srq_idr, uobj->id); | ||
136 | idr_remove(&ib_uverbs_srq_idr, uobj->id); | ||
137 | ib_destroy_srq(srq); | ||
138 | list_del(&uobj->list); | ||
139 | kfree(uobj); | ||
140 | } | ||
141 | |||
129 | /* XXX Free MWs */ | 142 | /* XXX Free MWs */ |
130 | 143 | ||
131 | list_for_each_entry_safe(uobj, tmp, &context->mr_list, list) { | 144 | list_for_each_entry_safe(uobj, tmp, &context->mr_list, list) { |
@@ -344,6 +357,13 @@ void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr) | |||
344 | event->event); | 357 | event->event); |
345 | } | 358 | } |
346 | 359 | ||
360 | void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr) | ||
361 | { | ||
362 | ib_uverbs_async_handler(context_ptr, | ||
363 | event->element.srq->uobject->user_handle, | ||
364 | event->event); | ||
365 | } | ||
366 | |||
347 | static void ib_uverbs_event_handler(struct ib_event_handler *handler, | 367 | static void ib_uverbs_event_handler(struct ib_event_handler *handler, |
348 | struct ib_event *event) | 368 | struct ib_event *event) |
349 | { | 369 | { |
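
ib_uverbs_srq_event_handler() above is only the upper half of the event path: a low-level driver fills in a struct ib_event and invokes the SRQ's event_handler, which for userspace-owned SRQs is this function (it translates the kernel object back into the user_handle). A hedged sketch of the provider side; the function name is invented for illustration, and the real mthca version lives in mthca_srq.c, which is not part of this excerpt:

    static void report_srq_event(struct ib_device *ibdev, struct ib_srq *ibsrq,
                                 enum ib_event_type type)
    {
            struct ib_event event;

            event.device      = ibdev;
            event.event       = type;          /* e.g. an SRQ limit or error event */
            event.element.srq = ibsrq;

            if (ibsrq->event_handler)
                    ibsrq->event_handler(&event, ibsrq->srq_context);
    }
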
diff --git a/drivers/infiniband/core/uverbs_mem.c b/drivers/infiniband/core/uverbs_mem.c index ed550f6595bd..36a32c315668 100644 --- a/drivers/infiniband/core/uverbs_mem.c +++ b/drivers/infiniband/core/uverbs_mem.c | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * | 5 | * |
5 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 506fdf1f2a26..5081d903e561 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c | |||
@@ -4,6 +4,7 @@ | |||
4 | * Copyright (c) 2004 Intel Corporation. All rights reserved. | 4 | * Copyright (c) 2004 Intel Corporation. All rights reserved. |
5 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. | 5 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. |
6 | * Copyright (c) 2004 Voltaire Corporation. All rights reserved. | 6 | * Copyright (c) 2004 Voltaire Corporation. All rights reserved. |
7 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
7 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 8 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
8 | * | 9 | * |
9 | * This software is available to you under a choice of one of two | 10 | * This software is available to you under a choice of one of two |
@@ -40,8 +41,8 @@ | |||
40 | #include <linux/errno.h> | 41 | #include <linux/errno.h> |
41 | #include <linux/err.h> | 42 | #include <linux/err.h> |
42 | 43 | ||
43 | #include <ib_verbs.h> | 44 | #include <rdma/ib_verbs.h> |
44 | #include <ib_cache.h> | 45 | #include <rdma/ib_cache.h> |
45 | 46 | ||
46 | /* Protection domains */ | 47 | /* Protection domains */ |
47 | 48 | ||
@@ -153,6 +154,66 @@ int ib_destroy_ah(struct ib_ah *ah) | |||
153 | } | 154 | } |
154 | EXPORT_SYMBOL(ib_destroy_ah); | 155 | EXPORT_SYMBOL(ib_destroy_ah); |
155 | 156 | ||
157 | /* Shared receive queues */ | ||
158 | |||
159 | struct ib_srq *ib_create_srq(struct ib_pd *pd, | ||
160 | struct ib_srq_init_attr *srq_init_attr) | ||
161 | { | ||
162 | struct ib_srq *srq; | ||
163 | |||
164 | if (!pd->device->create_srq) | ||
165 | return ERR_PTR(-ENOSYS); | ||
166 | |||
167 | srq = pd->device->create_srq(pd, srq_init_attr, NULL); | ||
168 | |||
169 | if (!IS_ERR(srq)) { | ||
170 | srq->device = pd->device; | ||
171 | srq->pd = pd; | ||
172 | srq->uobject = NULL; | ||
173 | srq->event_handler = srq_init_attr->event_handler; | ||
174 | srq->srq_context = srq_init_attr->srq_context; | ||
175 | atomic_inc(&pd->usecnt); | ||
176 | atomic_set(&srq->usecnt, 0); | ||
177 | } | ||
178 | |||
179 | return srq; | ||
180 | } | ||
181 | EXPORT_SYMBOL(ib_create_srq); | ||
182 | |||
183 | int ib_modify_srq(struct ib_srq *srq, | ||
184 | struct ib_srq_attr *srq_attr, | ||
185 | enum ib_srq_attr_mask srq_attr_mask) | ||
186 | { | ||
187 | return srq->device->modify_srq(srq, srq_attr, srq_attr_mask); | ||
188 | } | ||
189 | EXPORT_SYMBOL(ib_modify_srq); | ||
190 | |||
191 | int ib_query_srq(struct ib_srq *srq, | ||
192 | struct ib_srq_attr *srq_attr) | ||
193 | { | ||
194 | return srq->device->query_srq ? | ||
195 | srq->device->query_srq(srq, srq_attr) : -ENOSYS; | ||
196 | } | ||
197 | EXPORT_SYMBOL(ib_query_srq); | ||
198 | |||
199 | int ib_destroy_srq(struct ib_srq *srq) | ||
200 | { | ||
201 | struct ib_pd *pd; | ||
202 | int ret; | ||
203 | |||
204 | if (atomic_read(&srq->usecnt)) | ||
205 | return -EBUSY; | ||
206 | |||
207 | pd = srq->pd; | ||
208 | |||
209 | ret = srq->device->destroy_srq(srq); | ||
210 | if (!ret) | ||
211 | atomic_dec(&pd->usecnt); | ||
212 | |||
213 | return ret; | ||
214 | } | ||
215 | EXPORT_SYMBOL(ib_destroy_srq); | ||
216 | |||
156 | /* Queue pairs */ | 217 | /* Queue pairs */ |
157 | 218 | ||
158 | struct ib_qp *ib_create_qp(struct ib_pd *pd, | 219 | struct ib_qp *ib_create_qp(struct ib_pd *pd, |
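
For reference, a hedged sketch of how a kernel consumer would use the SRQ verbs exported above. The attribute values, callback name, and context pointer are illustrative; the structure layout follows ib_srq_init_attr / ib_srq_attr as consumed by ib_create_srq() in this hunk:

    struct ib_srq_init_attr init = {
            .event_handler = my_srq_event_cb,   /* hypothetical callback */
            .srq_context   = my_ctx,            /* hypothetical cookie   */
            .attr = {
                    .max_wr    = 128,           /* receive WRs in the SRQ    */
                    .max_sge   = 1,
                    .srq_limit = 16,            /* low-watermark for events  */
            },
    };
    struct ib_srq *srq;

    srq = ib_create_srq(pd, &init);
    if (IS_ERR(srq))
            return PTR_ERR(srq);

    /* QPs that reference this SRQ at creation time share its receive queue;
     * while any such QP exists, srq->usecnt stays non-zero and
     * ib_destroy_srq() returns -EBUSY, as in the hunk above. */
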
diff --git a/drivers/infiniband/hw/mthca/Makefile b/drivers/infiniband/hw/mthca/Makefile index 5dcbd43073e2..c44f7bae5424 100644 --- a/drivers/infiniband/hw/mthca/Makefile +++ b/drivers/infiniband/hw/mthca/Makefile | |||
@@ -1,5 +1,3 @@ | |||
1 | EXTRA_CFLAGS += -Idrivers/infiniband/include | ||
2 | |||
3 | ifdef CONFIG_INFINIBAND_MTHCA_DEBUG | 1 | ifdef CONFIG_INFINIBAND_MTHCA_DEBUG |
4 | EXTRA_CFLAGS += -DDEBUG | 2 | EXTRA_CFLAGS += -DDEBUG |
5 | endif | 3 | endif |
@@ -9,4 +7,4 @@ obj-$(CONFIG_INFINIBAND_MTHCA) += ib_mthca.o | |||
9 | ib_mthca-y := mthca_main.o mthca_cmd.o mthca_profile.o mthca_reset.o \ | 7 | ib_mthca-y := mthca_main.o mthca_cmd.o mthca_profile.o mthca_reset.o \ |
10 | mthca_allocator.o mthca_eq.o mthca_pd.o mthca_cq.o \ | 8 | mthca_allocator.o mthca_eq.o mthca_pd.o mthca_cq.o \ |
11 | mthca_mr.o mthca_qp.o mthca_av.o mthca_mcg.o mthca_mad.o \ | 9 | mthca_mr.o mthca_qp.o mthca_av.o mthca_mcg.o mthca_mad.o \ |
12 | mthca_provider.o mthca_memfree.o mthca_uar.o | 10 | mthca_provider.o mthca_memfree.o mthca_uar.o mthca_srq.o |
diff --git a/drivers/infiniband/hw/mthca/mthca_allocator.c b/drivers/infiniband/hw/mthca/mthca_allocator.c index b1db48dd91d6..9ba3211cef7c 100644 --- a/drivers/infiniband/hw/mthca/mthca_allocator.c +++ b/drivers/infiniband/hw/mthca/mthca_allocator.c | |||
@@ -177,3 +177,119 @@ void mthca_array_cleanup(struct mthca_array *array, int nent) | |||
177 | 177 | ||
178 | kfree(array->page_list); | 178 | kfree(array->page_list); |
179 | } | 179 | } |
180 | |||
181 | /* | ||
182 | * Handling for queue buffers -- we allocate a bunch of memory and | ||
183 | * register it in a memory region at HCA virtual address 0. If the | ||
184 | * requested size is > max_direct, we split the allocation into | ||
185 | * multiple pages, so we don't require too much contiguous memory. | ||
186 | */ | ||
187 | |||
188 | int mthca_buf_alloc(struct mthca_dev *dev, int size, int max_direct, | ||
189 | union mthca_buf *buf, int *is_direct, struct mthca_pd *pd, | ||
190 | int hca_write, struct mthca_mr *mr) | ||
191 | { | ||
192 | int err = -ENOMEM; | ||
193 | int npages, shift; | ||
194 | u64 *dma_list = NULL; | ||
195 | dma_addr_t t; | ||
196 | int i; | ||
197 | |||
198 | if (size <= max_direct) { | ||
199 | *is_direct = 1; | ||
200 | npages = 1; | ||
201 | shift = get_order(size) + PAGE_SHIFT; | ||
202 | |||
203 | buf->direct.buf = dma_alloc_coherent(&dev->pdev->dev, | ||
204 | size, &t, GFP_KERNEL); | ||
205 | if (!buf->direct.buf) | ||
206 | return -ENOMEM; | ||
207 | |||
208 | pci_unmap_addr_set(&buf->direct, mapping, t); | ||
209 | |||
210 | memset(buf->direct.buf, 0, size); | ||
211 | |||
212 | while (t & ((1 << shift) - 1)) { | ||
213 | --shift; | ||
214 | npages *= 2; | ||
215 | } | ||
216 | |||
217 | dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL); | ||
218 | if (!dma_list) | ||
219 | goto err_free; | ||
220 | |||
221 | for (i = 0; i < npages; ++i) | ||
222 | dma_list[i] = t + i * (1 << shift); | ||
223 | } else { | ||
224 | *is_direct = 0; | ||
225 | npages = (size + PAGE_SIZE - 1) / PAGE_SIZE; | ||
226 | shift = PAGE_SHIFT; | ||
227 | |||
228 | dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL); | ||
229 | if (!dma_list) | ||
230 | return -ENOMEM; | ||
231 | |||
232 | buf->page_list = kmalloc(npages * sizeof *buf->page_list, | ||
233 | GFP_KERNEL); | ||
234 | if (!buf->page_list) | ||
235 | goto err_out; | ||
236 | |||
237 | for (i = 0; i < npages; ++i) | ||
238 | buf->page_list[i].buf = NULL; | ||
239 | |||
240 | for (i = 0; i < npages; ++i) { | ||
241 | buf->page_list[i].buf = | ||
242 | dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE, | ||
243 | &t, GFP_KERNEL); | ||
244 | if (!buf->page_list[i].buf) | ||
245 | goto err_free; | ||
246 | |||
247 | dma_list[i] = t; | ||
248 | pci_unmap_addr_set(&buf->page_list[i], mapping, t); | ||
249 | |||
250 | memset(buf->page_list[i].buf, 0, PAGE_SIZE); | ||
251 | } | ||
252 | } | ||
253 | |||
254 | err = mthca_mr_alloc_phys(dev, pd->pd_num, | ||
255 | dma_list, shift, npages, | ||
256 | 0, size, | ||
257 | MTHCA_MPT_FLAG_LOCAL_READ | | ||
258 | (hca_write ? MTHCA_MPT_FLAG_LOCAL_WRITE : 0), | ||
259 | mr); | ||
260 | if (err) | ||
261 | goto err_free; | ||
262 | |||
263 | kfree(dma_list); | ||
264 | |||
265 | return 0; | ||
266 | |||
267 | err_free: | ||
268 | mthca_buf_free(dev, size, buf, *is_direct, NULL); | ||
269 | |||
270 | err_out: | ||
271 | kfree(dma_list); | ||
272 | |||
273 | return err; | ||
274 | } | ||
275 | |||
276 | void mthca_buf_free(struct mthca_dev *dev, int size, union mthca_buf *buf, | ||
277 | int is_direct, struct mthca_mr *mr) | ||
278 | { | ||
279 | int i; | ||
280 | |||
281 | if (mr) | ||
282 | mthca_free_mr(dev, mr); | ||
283 | |||
284 | if (is_direct) | ||
285 | dma_free_coherent(&dev->pdev->dev, size, buf->direct.buf, | ||
286 | pci_unmap_addr(&buf->direct, mapping)); | ||
287 | else { | ||
288 | for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i) | ||
289 | dma_free_coherent(&dev->pdev->dev, PAGE_SIZE, | ||
290 | buf->page_list[i].buf, | ||
291 | pci_unmap_addr(&buf->page_list[i], | ||
292 | mapping)); | ||
293 | kfree(buf->page_list); | ||
294 | } | ||
295 | } | ||
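
The subtle part of mthca_buf_alloc() is the alignment loop in the direct case: the chunk size starts as one block covering the whole buffer and is halved until the DMA address is aligned to it, doubling the number of MTT entries each time, so the region registered at HCA virtual address 0 always describes naturally aligned chunks. A standalone, runnable illustration with made-up numbers:

    #include <stdio.h>

    int main(void)
    {
            unsigned long long t = 0x80003000ULL; /* pretend dma_addr_t, only 4 KiB aligned */
            int size   = 16 * 1024;               /* requested buffer size                  */
            int shift  = 14;                      /* get_order(size) + PAGE_SHIFT           */
            int npages = 1;

            while (t & ((1ULL << shift) - 1)) {   /* same loop as in mthca_buf_alloc()      */
                    --shift;
                    npages *= 2;
            }

            /* prints "shift=12 npages=4": four 4 KiB entries at t, t+4K, t+8K, t+12K */
            printf("shift=%d npages=%d\n", shift, npages);
            return 0;
    }
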
diff --git a/drivers/infiniband/hw/mthca/mthca_av.c b/drivers/infiniband/hw/mthca/mthca_av.c index d58dcbe66488..889e85096736 100644 --- a/drivers/infiniband/hw/mthca/mthca_av.c +++ b/drivers/infiniband/hw/mthca/mthca_av.c | |||
@@ -35,22 +35,22 @@ | |||
35 | 35 | ||
36 | #include <linux/init.h> | 36 | #include <linux/init.h> |
37 | 37 | ||
38 | #include <ib_verbs.h> | 38 | #include <rdma/ib_verbs.h> |
39 | #include <ib_cache.h> | 39 | #include <rdma/ib_cache.h> |
40 | 40 | ||
41 | #include "mthca_dev.h" | 41 | #include "mthca_dev.h" |
42 | 42 | ||
43 | struct mthca_av { | 43 | struct mthca_av { |
44 | u32 port_pd; | 44 | __be32 port_pd; |
45 | u8 reserved1; | 45 | u8 reserved1; |
46 | u8 g_slid; | 46 | u8 g_slid; |
47 | u16 dlid; | 47 | __be16 dlid; |
48 | u8 reserved2; | 48 | u8 reserved2; |
49 | u8 gid_index; | 49 | u8 gid_index; |
50 | u8 msg_sr; | 50 | u8 msg_sr; |
51 | u8 hop_limit; | 51 | u8 hop_limit; |
52 | u32 sl_tclass_flowlabel; | 52 | __be32 sl_tclass_flowlabel; |
53 | u32 dgid[4]; | 53 | __be32 dgid[4]; |
54 | }; | 54 | }; |
55 | 55 | ||
56 | int mthca_create_ah(struct mthca_dev *dev, | 56 | int mthca_create_ah(struct mthca_dev *dev, |
@@ -128,7 +128,7 @@ on_hca_fail: | |||
128 | av, (unsigned long) ah->avdma); | 128 | av, (unsigned long) ah->avdma); |
129 | for (j = 0; j < 8; ++j) | 129 | for (j = 0; j < 8; ++j) |
130 | printk(KERN_DEBUG " [%2x] %08x\n", | 130 | printk(KERN_DEBUG " [%2x] %08x\n", |
131 | j * 4, be32_to_cpu(((u32 *) av)[j])); | 131 | j * 4, be32_to_cpu(((__be32 *) av)[j])); |
132 | } | 132 | } |
133 | 133 | ||
134 | if (ah->type == MTHCA_AH_ON_HCA) { | 134 | if (ah->type == MTHCA_AH_ON_HCA) { |
@@ -169,7 +169,7 @@ int mthca_read_ah(struct mthca_dev *dev, struct mthca_ah *ah, | |||
169 | 169 | ||
170 | header->lrh.service_level = be32_to_cpu(ah->av->sl_tclass_flowlabel) >> 28; | 170 | header->lrh.service_level = be32_to_cpu(ah->av->sl_tclass_flowlabel) >> 28; |
171 | header->lrh.destination_lid = ah->av->dlid; | 171 | header->lrh.destination_lid = ah->av->dlid; |
172 | header->lrh.source_lid = ah->av->g_slid & 0x7f; | 172 | header->lrh.source_lid = cpu_to_be16(ah->av->g_slid & 0x7f); |
173 | if (ah->av->g_slid & 0x80) { | 173 | if (ah->av->g_slid & 0x80) { |
174 | header->grh_present = 1; | 174 | header->grh_present = 1; |
175 | header->grh.traffic_class = | 175 | header->grh.traffic_class = |
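
The u32 -> __be32 / u16 -> __be16 conversions in struct mthca_av, together with the cpu_to_be16() added for source_lid, are what let sparse's endianness checking catch exactly this class of bug once the checker is run (typically "make C=1" with __CHECK_ENDIAN__ defined). A minimal illustration of the idea, using a made-up structure rather than the driver's:

    struct example_hdr {
            __be16 source_lid;                    /* on-the-wire, big-endian */
    };

    static void fill_hdr(struct example_hdr *hdr, u8 g_slid)
    {
            /* hdr->source_lid = g_slid & 0x7f;      <-- sparse warns: CPU-order vs __be16 */
            hdr->source_lid = cpu_to_be16(g_slid & 0x7f);   /* OK */
    }
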
diff --git a/drivers/infiniband/hw/mthca/mthca_cmd.c b/drivers/infiniband/hw/mthca/mthca_cmd.c index 1557a522d831..cc758a2d2bc6 100644 --- a/drivers/infiniband/hw/mthca/mthca_cmd.c +++ b/drivers/infiniband/hw/mthca/mthca_cmd.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -36,7 +37,7 @@ | |||
36 | #include <linux/pci.h> | 37 | #include <linux/pci.h> |
37 | #include <linux/errno.h> | 38 | #include <linux/errno.h> |
38 | #include <asm/io.h> | 39 | #include <asm/io.h> |
39 | #include <ib_mad.h> | 40 | #include <rdma/ib_mad.h> |
40 | 41 | ||
41 | #include "mthca_dev.h" | 42 | #include "mthca_dev.h" |
42 | #include "mthca_config_reg.h" | 43 | #include "mthca_config_reg.h" |
@@ -108,6 +109,7 @@ enum { | |||
108 | CMD_SW2HW_SRQ = 0x35, | 109 | CMD_SW2HW_SRQ = 0x35, |
109 | CMD_HW2SW_SRQ = 0x36, | 110 | CMD_HW2SW_SRQ = 0x36, |
110 | CMD_QUERY_SRQ = 0x37, | 111 | CMD_QUERY_SRQ = 0x37, |
112 | CMD_ARM_SRQ = 0x40, | ||
111 | 113 | ||
112 | /* QP/EE commands */ | 114 | /* QP/EE commands */ |
113 | CMD_RST2INIT_QPEE = 0x19, | 115 | CMD_RST2INIT_QPEE = 0x19, |
@@ -219,20 +221,20 @@ static int mthca_cmd_post(struct mthca_dev *dev, | |||
219 | * (and some architectures such as ia64 implement memcpy_toio | 221 | * (and some architectures such as ia64 implement memcpy_toio |
220 | * in terms of writeb). | 222 | * in terms of writeb). |
221 | */ | 223 | */ |
222 | __raw_writel(cpu_to_be32(in_param >> 32), dev->hcr + 0 * 4); | 224 | __raw_writel((__force u32) cpu_to_be32(in_param >> 32), dev->hcr + 0 * 4); |
223 | __raw_writel(cpu_to_be32(in_param & 0xfffffffful), dev->hcr + 1 * 4); | 225 | __raw_writel((__force u32) cpu_to_be32(in_param & 0xfffffffful), dev->hcr + 1 * 4); |
224 | __raw_writel(cpu_to_be32(in_modifier), dev->hcr + 2 * 4); | 226 | __raw_writel((__force u32) cpu_to_be32(in_modifier), dev->hcr + 2 * 4); |
225 | __raw_writel(cpu_to_be32(out_param >> 32), dev->hcr + 3 * 4); | 227 | __raw_writel((__force u32) cpu_to_be32(out_param >> 32), dev->hcr + 3 * 4); |
226 | __raw_writel(cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4); | 228 | __raw_writel((__force u32) cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4); |
227 | __raw_writel(cpu_to_be32(token << 16), dev->hcr + 5 * 4); | 229 | __raw_writel((__force u32) cpu_to_be32(token << 16), dev->hcr + 5 * 4); |
228 | 230 | ||
229 | /* __raw_writel may not order writes. */ | 231 | /* __raw_writel may not order writes. */ |
230 | wmb(); | 232 | wmb(); |
231 | 233 | ||
232 | __raw_writel(cpu_to_be32((1 << HCR_GO_BIT) | | 234 | __raw_writel((__force u32) cpu_to_be32((1 << HCR_GO_BIT) | |
233 | (event ? (1 << HCA_E_BIT) : 0) | | 235 | (event ? (1 << HCA_E_BIT) : 0) | |
234 | (op_modifier << HCR_OPMOD_SHIFT) | | 236 | (op_modifier << HCR_OPMOD_SHIFT) | |
235 | op), dev->hcr + 6 * 4); | 237 | op), dev->hcr + 6 * 4); |
236 | 238 | ||
237 | out: | 239 | out: |
238 | up(&dev->cmd.hcr_sem); | 240 | up(&dev->cmd.hcr_sem); |
@@ -273,12 +275,14 @@ static int mthca_cmd_poll(struct mthca_dev *dev, | |||
273 | goto out; | 275 | goto out; |
274 | } | 276 | } |
275 | 277 | ||
276 | if (out_is_imm) { | 278 | if (out_is_imm) |
277 | memcpy_fromio(out_param, dev->hcr + HCR_OUT_PARAM_OFFSET, sizeof (u64)); | 279 | *out_param = |
278 | be64_to_cpus(out_param); | 280 | (u64) be32_to_cpu((__force __be32) |
279 | } | 281 | __raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET)) << 32 | |
282 | (u64) be32_to_cpu((__force __be32) | ||
283 | __raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET + 4)); | ||
280 | 284 | ||
281 | *status = be32_to_cpu(__raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24; | 285 | *status = be32_to_cpu((__force __be32) __raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24; |
282 | 286 | ||
283 | out: | 287 | out: |
284 | up(&dev->cmd.poll_sem); | 288 | up(&dev->cmd.poll_sem); |
@@ -1029,6 +1033,8 @@ int mthca_QUERY_DEV_LIM(struct mthca_dev *dev, | |||
1029 | 1033 | ||
1030 | mthca_dbg(dev, "Max QPs: %d, reserved QPs: %d, entry size: %d\n", | 1034 | mthca_dbg(dev, "Max QPs: %d, reserved QPs: %d, entry size: %d\n", |
1031 | dev_lim->max_qps, dev_lim->reserved_qps, dev_lim->qpc_entry_sz); | 1035 | dev_lim->max_qps, dev_lim->reserved_qps, dev_lim->qpc_entry_sz); |
1036 | mthca_dbg(dev, "Max SRQs: %d, reserved SRQs: %d, entry size: %d\n", | ||
1037 | dev_lim->max_srqs, dev_lim->reserved_srqs, dev_lim->srq_entry_sz); | ||
1032 | mthca_dbg(dev, "Max CQs: %d, reserved CQs: %d, entry size: %d\n", | 1038 | mthca_dbg(dev, "Max CQs: %d, reserved CQs: %d, entry size: %d\n", |
1033 | dev_lim->max_cqs, dev_lim->reserved_cqs, dev_lim->cqc_entry_sz); | 1039 | dev_lim->max_cqs, dev_lim->reserved_cqs, dev_lim->cqc_entry_sz); |
1034 | mthca_dbg(dev, "Max EQs: %d, reserved EQs: %d, entry size: %d\n", | 1040 | mthca_dbg(dev, "Max EQs: %d, reserved EQs: %d, entry size: %d\n", |
@@ -1082,6 +1088,34 @@ out: | |||
1082 | return err; | 1088 | return err; |
1083 | } | 1089 | } |
1084 | 1090 | ||
1091 | static void get_board_id(void *vsd, char *board_id) | ||
1092 | { | ||
1093 | int i; | ||
1094 | |||
1095 | #define VSD_OFFSET_SIG1 0x00 | ||
1096 | #define VSD_OFFSET_SIG2 0xde | ||
1097 | #define VSD_OFFSET_MLX_BOARD_ID 0xd0 | ||
1098 | #define VSD_OFFSET_TS_BOARD_ID 0x20 | ||
1099 | |||
1100 | #define VSD_SIGNATURE_TOPSPIN 0x5ad | ||
1101 | |||
1102 | memset(board_id, 0, MTHCA_BOARD_ID_LEN); | ||
1103 | |||
1104 | if (be16_to_cpup(vsd + VSD_OFFSET_SIG1) == VSD_SIGNATURE_TOPSPIN && | ||
1105 | be16_to_cpup(vsd + VSD_OFFSET_SIG2) == VSD_SIGNATURE_TOPSPIN) { | ||
1106 | strlcpy(board_id, vsd + VSD_OFFSET_TS_BOARD_ID, MTHCA_BOARD_ID_LEN); | ||
1107 | } else { | ||
1108 | /* | ||
1109 | * The board ID is a string but the firmware byte | ||
1110 | * swaps each 4-byte word before passing it back to | ||
1111 | * us. Therefore we need to swab it before printing. | ||
1112 | */ | ||
1113 | for (i = 0; i < 4; ++i) | ||
1114 | ((u32 *) board_id)[i] = | ||
1115 | swab32(*(u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4)); | ||
1116 | } | ||
1117 | } | ||
1118 | |||
1085 | int mthca_QUERY_ADAPTER(struct mthca_dev *dev, | 1119 | int mthca_QUERY_ADAPTER(struct mthca_dev *dev, |
1086 | struct mthca_adapter *adapter, u8 *status) | 1120 | struct mthca_adapter *adapter, u8 *status) |
1087 | { | 1121 | { |
@@ -1094,6 +1128,7 @@ int mthca_QUERY_ADAPTER(struct mthca_dev *dev, | |||
1094 | #define QUERY_ADAPTER_DEVICE_ID_OFFSET 0x04 | 1128 | #define QUERY_ADAPTER_DEVICE_ID_OFFSET 0x04 |
1095 | #define QUERY_ADAPTER_REVISION_ID_OFFSET 0x08 | 1129 | #define QUERY_ADAPTER_REVISION_ID_OFFSET 0x08 |
1096 | #define QUERY_ADAPTER_INTA_PIN_OFFSET 0x10 | 1130 | #define QUERY_ADAPTER_INTA_PIN_OFFSET 0x10 |
1131 | #define QUERY_ADAPTER_VSD_OFFSET 0x20 | ||
1097 | 1132 | ||
1098 | mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL); | 1133 | mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL); |
1099 | if (IS_ERR(mailbox)) | 1134 | if (IS_ERR(mailbox)) |
@@ -1111,6 +1146,9 @@ int mthca_QUERY_ADAPTER(struct mthca_dev *dev, | |||
1111 | MTHCA_GET(adapter->revision_id, outbox, QUERY_ADAPTER_REVISION_ID_OFFSET); | 1146 | MTHCA_GET(adapter->revision_id, outbox, QUERY_ADAPTER_REVISION_ID_OFFSET); |
1112 | MTHCA_GET(adapter->inta_pin, outbox, QUERY_ADAPTER_INTA_PIN_OFFSET); | 1147 | MTHCA_GET(adapter->inta_pin, outbox, QUERY_ADAPTER_INTA_PIN_OFFSET); |
1113 | 1148 | ||
1149 | get_board_id(outbox + QUERY_ADAPTER_VSD_OFFSET / 4, | ||
1150 | adapter->board_id); | ||
1151 | |||
1114 | out: | 1152 | out: |
1115 | mthca_free_mailbox(dev, mailbox); | 1153 | mthca_free_mailbox(dev, mailbox); |
1116 | return err; | 1154 | return err; |
@@ -1121,7 +1159,7 @@ int mthca_INIT_HCA(struct mthca_dev *dev, | |||
1121 | u8 *status) | 1159 | u8 *status) |
1122 | { | 1160 | { |
1123 | struct mthca_mailbox *mailbox; | 1161 | struct mthca_mailbox *mailbox; |
1124 | u32 *inbox; | 1162 | __be32 *inbox; |
1125 | int err; | 1163 | int err; |
1126 | 1164 | ||
1127 | #define INIT_HCA_IN_SIZE 0x200 | 1165 | #define INIT_HCA_IN_SIZE 0x200 |
@@ -1247,10 +1285,8 @@ int mthca_INIT_IB(struct mthca_dev *dev, | |||
1247 | #define INIT_IB_FLAG_SIG (1 << 18) | 1285 | #define INIT_IB_FLAG_SIG (1 << 18) |
1248 | #define INIT_IB_FLAG_NG (1 << 17) | 1286 | #define INIT_IB_FLAG_NG (1 << 17) |
1249 | #define INIT_IB_FLAG_G0 (1 << 16) | 1287 | #define INIT_IB_FLAG_G0 (1 << 16) |
1250 | #define INIT_IB_FLAG_1X (1 << 8) | ||
1251 | #define INIT_IB_FLAG_4X (1 << 9) | ||
1252 | #define INIT_IB_FLAG_12X (1 << 11) | ||
1253 | #define INIT_IB_VL_SHIFT 4 | 1288 | #define INIT_IB_VL_SHIFT 4 |
1289 | #define INIT_IB_PORT_WIDTH_SHIFT 8 | ||
1254 | #define INIT_IB_MTU_SHIFT 12 | 1290 | #define INIT_IB_MTU_SHIFT 12 |
1255 | #define INIT_IB_MAX_GID_OFFSET 0x06 | 1291 | #define INIT_IB_MAX_GID_OFFSET 0x06 |
1256 | #define INIT_IB_MAX_PKEY_OFFSET 0x0a | 1292 | #define INIT_IB_MAX_PKEY_OFFSET 0x0a |
@@ -1266,12 +1302,11 @@ int mthca_INIT_IB(struct mthca_dev *dev, | |||
1266 | memset(inbox, 0, INIT_IB_IN_SIZE); | 1302 | memset(inbox, 0, INIT_IB_IN_SIZE); |
1267 | 1303 | ||
1268 | flags = 0; | 1304 | flags = 0; |
1269 | flags |= param->enable_1x ? INIT_IB_FLAG_1X : 0; | ||
1270 | flags |= param->enable_4x ? INIT_IB_FLAG_4X : 0; | ||
1271 | flags |= param->set_guid0 ? INIT_IB_FLAG_G0 : 0; | 1305 | flags |= param->set_guid0 ? INIT_IB_FLAG_G0 : 0; |
1272 | flags |= param->set_node_guid ? INIT_IB_FLAG_NG : 0; | 1306 | flags |= param->set_node_guid ? INIT_IB_FLAG_NG : 0; |
1273 | flags |= param->set_si_guid ? INIT_IB_FLAG_SIG : 0; | 1307 | flags |= param->set_si_guid ? INIT_IB_FLAG_SIG : 0; |
1274 | flags |= param->vl_cap << INIT_IB_VL_SHIFT; | 1308 | flags |= param->vl_cap << INIT_IB_VL_SHIFT; |
1309 | flags |= param->port_width << INIT_IB_PORT_WIDTH_SHIFT; | ||
1275 | flags |= param->mtu_cap << INIT_IB_MTU_SHIFT; | 1310 | flags |= param->mtu_cap << INIT_IB_MTU_SHIFT; |
1276 | MTHCA_PUT(inbox, flags, INIT_IB_FLAGS_OFFSET); | 1311 | MTHCA_PUT(inbox, flags, INIT_IB_FLAGS_OFFSET); |
1277 | 1312 | ||
@@ -1342,7 +1377,7 @@ int mthca_MAP_ICM(struct mthca_dev *dev, struct mthca_icm *icm, u64 virt, u8 *st | |||
1342 | int mthca_MAP_ICM_page(struct mthca_dev *dev, u64 dma_addr, u64 virt, u8 *status) | 1377 | int mthca_MAP_ICM_page(struct mthca_dev *dev, u64 dma_addr, u64 virt, u8 *status) |
1343 | { | 1378 | { |
1344 | struct mthca_mailbox *mailbox; | 1379 | struct mthca_mailbox *mailbox; |
1345 | u64 *inbox; | 1380 | __be64 *inbox; |
1346 | int err; | 1381 | int err; |
1347 | 1382 | ||
1348 | mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL); | 1383 | mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL); |
@@ -1468,6 +1503,27 @@ int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, | |||
1468 | CMD_TIME_CLASS_A, status); | 1503 | CMD_TIME_CLASS_A, status); |
1469 | } | 1504 | } |
1470 | 1505 | ||
1506 | int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, | ||
1507 | int srq_num, u8 *status) | ||
1508 | { | ||
1509 | return mthca_cmd(dev, mailbox->dma, srq_num, 0, CMD_SW2HW_SRQ, | ||
1510 | CMD_TIME_CLASS_A, status); | ||
1511 | } | ||
1512 | |||
1513 | int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, | ||
1514 | int srq_num, u8 *status) | ||
1515 | { | ||
1516 | return mthca_cmd_box(dev, 0, mailbox->dma, srq_num, 0, | ||
1517 | CMD_HW2SW_SRQ, | ||
1518 | CMD_TIME_CLASS_A, status); | ||
1519 | } | ||
1520 | |||
1521 | int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status) | ||
1522 | { | ||
1523 | return mthca_cmd(dev, limit, srq_num, 0, CMD_ARM_SRQ, | ||
1524 | CMD_TIME_CLASS_B, status); | ||
1525 | } | ||
1526 | |||
1471 | int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, | 1527 | int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, |
1472 | int is_ee, struct mthca_mailbox *mailbox, u32 optmask, | 1528 | int is_ee, struct mthca_mailbox *mailbox, u32 optmask, |
1473 | u8 *status) | 1529 | u8 *status) |
@@ -1513,7 +1569,7 @@ int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, | |||
1513 | if (i % 8 == 0) | 1569 | if (i % 8 == 0) |
1514 | printk(" [%02x] ", i * 4); | 1570 | printk(" [%02x] ", i * 4); |
1515 | printk(" %08x", | 1571 | printk(" %08x", |
1516 | be32_to_cpu(((u32 *) mailbox->buf)[i + 2])); | 1572 | be32_to_cpu(((__be32 *) mailbox->buf)[i + 2])); |
1517 | if ((i + 1) % 8 == 0) | 1573 | if ((i + 1) % 8 == 0) |
1518 | printk("\n"); | 1574 | printk("\n"); |
1519 | } | 1575 | } |
@@ -1533,7 +1589,7 @@ int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, | |||
1533 | if (i % 8 == 0) | 1589 | if (i % 8 == 0) |
1534 | printk("[%02x] ", i * 4); | 1590 | printk("[%02x] ", i * 4); |
1535 | printk(" %08x", | 1591 | printk(" %08x", |
1536 | be32_to_cpu(((u32 *) mailbox->buf)[i + 2])); | 1592 | be32_to_cpu(((__be32 *) mailbox->buf)[i + 2])); |
1537 | if ((i + 1) % 8 == 0) | 1593 | if ((i + 1) % 8 == 0) |
1538 | printk("\n"); | 1594 | printk("\n"); |
1539 | } | 1595 | } |
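
One change in this file is easy to misread: the immediate output parameter of a polled command is no longer copied with memcpy_fromio() and byte-swapped in place; it is assembled from two explicit 32-bit MMIO reads, which keeps the access width well defined and the byte-swaps visible to sparse. The same logic, pulled out into a helper purely for readability (the helper name is ours, not the driver's):

    static u64 read_hcr_out_param(void __iomem *hcr)
    {
            return ((u64) be32_to_cpu((__force __be32)
                            __raw_readl(hcr + HCR_OUT_PARAM_OFFSET)) << 32) |
                    (u64) be32_to_cpu((__force __be32)
                            __raw_readl(hcr + HCR_OUT_PARAM_OFFSET + 4));
    }
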
diff --git a/drivers/infiniband/hw/mthca/mthca_cmd.h b/drivers/infiniband/hw/mthca/mthca_cmd.h index ed517f175dd6..65f976a13e02 100644 --- a/drivers/infiniband/hw/mthca/mthca_cmd.h +++ b/drivers/infiniband/hw/mthca/mthca_cmd.h | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -35,7 +36,7 @@ | |||
35 | #ifndef MTHCA_CMD_H | 36 | #ifndef MTHCA_CMD_H |
36 | #define MTHCA_CMD_H | 37 | #define MTHCA_CMD_H |
37 | 38 | ||
38 | #include <ib_verbs.h> | 39 | #include <rdma/ib_verbs.h> |
39 | 40 | ||
40 | #define MTHCA_MAILBOX_SIZE 4096 | 41 | #define MTHCA_MAILBOX_SIZE 4096 |
41 | 42 | ||
@@ -183,10 +184,11 @@ struct mthca_dev_lim { | |||
183 | }; | 184 | }; |
184 | 185 | ||
185 | struct mthca_adapter { | 186 | struct mthca_adapter { |
186 | u32 vendor_id; | 187 | u32 vendor_id; |
187 | u32 device_id; | 188 | u32 device_id; |
188 | u32 revision_id; | 189 | u32 revision_id; |
189 | u8 inta_pin; | 190 | char board_id[MTHCA_BOARD_ID_LEN]; |
191 | u8 inta_pin; | ||
190 | }; | 192 | }; |
191 | 193 | ||
192 | struct mthca_init_hca_param { | 194 | struct mthca_init_hca_param { |
@@ -218,8 +220,7 @@ struct mthca_init_hca_param { | |||
218 | }; | 220 | }; |
219 | 221 | ||
220 | struct mthca_init_ib_param { | 222 | struct mthca_init_ib_param { |
221 | int enable_1x; | 223 | int port_width; |
222 | int enable_4x; | ||
223 | int vl_cap; | 224 | int vl_cap; |
224 | int mtu_cap; | 225 | int mtu_cap; |
225 | u16 gid_cap; | 226 | u16 gid_cap; |
@@ -297,6 +298,11 @@ int mthca_SW2HW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, | |||
297 | int cq_num, u8 *status); | 298 | int cq_num, u8 *status); |
298 | int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, | 299 | int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, |
299 | int cq_num, u8 *status); | 300 | int cq_num, u8 *status); |
301 | int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, | ||
302 | int srq_num, u8 *status); | ||
303 | int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox, | ||
304 | int srq_num, u8 *status); | ||
305 | int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status); | ||
300 | int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, | 306 | int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, |
301 | int is_ee, struct mthca_mailbox *mailbox, u32 optmask, | 307 | int is_ee, struct mthca_mailbox *mailbox, u32 optmask, |
302 | u8 *status); | 308 | u8 *status); |
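
The three new wrappers follow the ownership-transfer model already used for CQs and QPs: SW2HW_SRQ hands a software-built SRQ context to the firmware, HW2SW_SRQ takes it back at destroy time, and ARM_SRQ programs the limit watermark that triggers an asynchronous SRQ limit event. A hedged sketch of the expected call sequence; the real callers are in the new mthca_srq.c, which is not part of this excerpt, and the srqn field name is assumed:

    err = mthca_SW2HW_SRQ(dev, mailbox, srq->srqn, &status);   /* create: hand context to HW  */
    /* ... post receives, use the SRQ ... */
    err = mthca_ARM_SRQ(dev, srq->srqn, limit, &status);       /* program the limit watermark */
    /* ... */
    err = mthca_HW2SW_SRQ(dev, mailbox, srq->srqn, &status);   /* destroy: take context back  */
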
diff --git a/drivers/infiniband/hw/mthca/mthca_config_reg.h b/drivers/infiniband/hw/mthca/mthca_config_reg.h index b4bfbbfe2c3d..afa56bfaab2e 100644 --- a/drivers/infiniband/hw/mthca/mthca_config_reg.h +++ b/drivers/infiniband/hw/mthca/mthca_config_reg.h | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
diff --git a/drivers/infiniband/hw/mthca/mthca_cq.c b/drivers/infiniband/hw/mthca/mthca_cq.c index 5687c3014522..8600b6c3e0c2 100644 --- a/drivers/infiniband/hw/mthca/mthca_cq.c +++ b/drivers/infiniband/hw/mthca/mthca_cq.c | |||
@@ -2,6 +2,8 @@ | |||
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | 3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. |
4 | * Copyright (c) 2005 Cisco Systems, Inc. All rights reserved. | 4 | * Copyright (c) 2005 Cisco Systems, Inc. All rights reserved. |
5 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
6 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
5 | * | 7 | * |
6 | * This software is available to you under a choice of one of two | 8 | * This software is available to you under a choice of one of two |
7 | * licenses. You may choose to be licensed under the terms of the GNU | 9 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -37,7 +39,7 @@ | |||
37 | #include <linux/init.h> | 39 | #include <linux/init.h> |
38 | #include <linux/hardirq.h> | 40 | #include <linux/hardirq.h> |
39 | 41 | ||
40 | #include <ib_pack.h> | 42 | #include <rdma/ib_pack.h> |
41 | 43 | ||
42 | #include "mthca_dev.h" | 44 | #include "mthca_dev.h" |
43 | #include "mthca_cmd.h" | 45 | #include "mthca_cmd.h" |
@@ -55,21 +57,21 @@ enum { | |||
55 | * Must be packed because start is 64 bits but only aligned to 32 bits. | 57 | * Must be packed because start is 64 bits but only aligned to 32 bits. |
56 | */ | 58 | */ |
57 | struct mthca_cq_context { | 59 | struct mthca_cq_context { |
58 | u32 flags; | 60 | __be32 flags; |
59 | u64 start; | 61 | __be64 start; |
60 | u32 logsize_usrpage; | 62 | __be32 logsize_usrpage; |
61 | u32 error_eqn; /* Tavor only */ | 63 | __be32 error_eqn; /* Tavor only */ |
62 | u32 comp_eqn; | 64 | __be32 comp_eqn; |
63 | u32 pd; | 65 | __be32 pd; |
64 | u32 lkey; | 66 | __be32 lkey; |
65 | u32 last_notified_index; | 67 | __be32 last_notified_index; |
66 | u32 solicit_producer_index; | 68 | __be32 solicit_producer_index; |
67 | u32 consumer_index; | 69 | __be32 consumer_index; |
68 | u32 producer_index; | 70 | __be32 producer_index; |
69 | u32 cqn; | 71 | __be32 cqn; |
70 | u32 ci_db; /* Arbel only */ | 72 | __be32 ci_db; /* Arbel only */ |
71 | u32 state_db; /* Arbel only */ | 73 | __be32 state_db; /* Arbel only */ |
72 | u32 reserved; | 74 | u32 reserved; |
73 | } __attribute__((packed)); | 75 | } __attribute__((packed)); |
74 | 76 | ||
75 | #define MTHCA_CQ_STATUS_OK ( 0 << 28) | 77 | #define MTHCA_CQ_STATUS_OK ( 0 << 28) |
@@ -108,31 +110,31 @@ enum { | |||
108 | }; | 110 | }; |
109 | 111 | ||
110 | struct mthca_cqe { | 112 | struct mthca_cqe { |
111 | u32 my_qpn; | 113 | __be32 my_qpn; |
112 | u32 my_ee; | 114 | __be32 my_ee; |
113 | u32 rqpn; | 115 | __be32 rqpn; |
114 | u16 sl_g_mlpath; | 116 | __be16 sl_g_mlpath; |
115 | u16 rlid; | 117 | __be16 rlid; |
116 | u32 imm_etype_pkey_eec; | 118 | __be32 imm_etype_pkey_eec; |
117 | u32 byte_cnt; | 119 | __be32 byte_cnt; |
118 | u32 wqe; | 120 | __be32 wqe; |
119 | u8 opcode; | 121 | u8 opcode; |
120 | u8 is_send; | 122 | u8 is_send; |
121 | u8 reserved; | 123 | u8 reserved; |
122 | u8 owner; | 124 | u8 owner; |
123 | }; | 125 | }; |
124 | 126 | ||
125 | struct mthca_err_cqe { | 127 | struct mthca_err_cqe { |
126 | u32 my_qpn; | 128 | __be32 my_qpn; |
127 | u32 reserved1[3]; | 129 | u32 reserved1[3]; |
128 | u8 syndrome; | 130 | u8 syndrome; |
129 | u8 reserved2; | 131 | u8 reserved2; |
130 | u16 db_cnt; | 132 | __be16 db_cnt; |
131 | u32 reserved3; | 133 | u32 reserved3; |
132 | u32 wqe; | 134 | __be32 wqe; |
133 | u8 opcode; | 135 | u8 opcode; |
134 | u8 reserved4[2]; | 136 | u8 reserved4[2]; |
135 | u8 owner; | 137 | u8 owner; |
136 | }; | 138 | }; |
137 | 139 | ||
138 | #define MTHCA_CQ_ENTRY_OWNER_SW (0 << 7) | 140 | #define MTHCA_CQ_ENTRY_OWNER_SW (0 << 7) |
@@ -191,7 +193,7 @@ static void dump_cqe(struct mthca_dev *dev, void *cqe_ptr) | |||
191 | static inline void update_cons_index(struct mthca_dev *dev, struct mthca_cq *cq, | 193 | static inline void update_cons_index(struct mthca_dev *dev, struct mthca_cq *cq, |
192 | int incr) | 194 | int incr) |
193 | { | 195 | { |
194 | u32 doorbell[2]; | 196 | __be32 doorbell[2]; |
195 | 197 | ||
196 | if (mthca_is_memfree(dev)) { | 198 | if (mthca_is_memfree(dev)) { |
197 | *cq->set_ci_db = cpu_to_be32(cq->cons_index); | 199 | *cq->set_ci_db = cpu_to_be32(cq->cons_index); |
@@ -222,7 +224,8 @@ void mthca_cq_event(struct mthca_dev *dev, u32 cqn) | |||
222 | cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context); | 224 | cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context); |
223 | } | 225 | } |
224 | 226 | ||
225 | void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn) | 227 | void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn, |
228 | struct mthca_srq *srq) | ||
226 | { | 229 | { |
227 | struct mthca_cq *cq; | 230 | struct mthca_cq *cq; |
228 | struct mthca_cqe *cqe; | 231 | struct mthca_cqe *cqe; |
@@ -263,8 +266,11 @@ void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn) | |||
263 | */ | 266 | */ |
264 | while (prod_index > cq->cons_index) { | 267 | while (prod_index > cq->cons_index) { |
265 | cqe = get_cqe(cq, (prod_index - 1) & cq->ibcq.cqe); | 268 | cqe = get_cqe(cq, (prod_index - 1) & cq->ibcq.cqe); |
266 | if (cqe->my_qpn == cpu_to_be32(qpn)) | 269 | if (cqe->my_qpn == cpu_to_be32(qpn)) { |
270 | if (srq) | ||
271 | mthca_free_srq_wqe(srq, be32_to_cpu(cqe->wqe)); | ||
267 | ++nfreed; | 272 | ++nfreed; |
273 | } | ||
268 | else if (nfreed) | 274 | else if (nfreed) |
269 | memcpy(get_cqe(cq, (prod_index - 1 + nfreed) & | 275 | memcpy(get_cqe(cq, (prod_index - 1 + nfreed) & |
270 | cq->ibcq.cqe), | 276 | cq->ibcq.cqe), |
@@ -291,7 +297,7 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq, | |||
291 | { | 297 | { |
292 | int err; | 298 | int err; |
293 | int dbd; | 299 | int dbd; |
294 | u32 new_wqe; | 300 | __be32 new_wqe; |
295 | 301 | ||
296 | if (cqe->syndrome == SYNDROME_LOCAL_QP_OP_ERR) { | 302 | if (cqe->syndrome == SYNDROME_LOCAL_QP_OP_ERR) { |
297 | mthca_dbg(dev, "local QP operation err " | 303 | mthca_dbg(dev, "local QP operation err " |
@@ -365,6 +371,13 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq, | |||
365 | break; | 371 | break; |
366 | } | 372 | } |
367 | 373 | ||
374 | /* | ||
375 | * Mem-free HCAs always generate one CQE per WQE, even in the | ||
376 | * error case, so we don't have to check the doorbell count, etc. | ||
377 | */ | ||
378 | if (mthca_is_memfree(dev)) | ||
379 | return 0; | ||
380 | |||
368 | err = mthca_free_err_wqe(dev, qp, is_send, wqe_index, &dbd, &new_wqe); | 381 | err = mthca_free_err_wqe(dev, qp, is_send, wqe_index, &dbd, &new_wqe); |
369 | if (err) | 382 | if (err) |
370 | return err; | 383 | return err; |
@@ -373,12 +386,8 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq, | |||
373 | * If we're at the end of the WQE chain, or we've used up our | 386 | * If we're at the end of the WQE chain, or we've used up our |
374 | * doorbell count, free the CQE. Otherwise just update it for | 387 | * doorbell count, free the CQE. Otherwise just update it for |
375 | * the next poll operation. | 388 | * the next poll operation. |
376 | * | ||
377 | * This does not apply to mem-free HCAs: they don't use the | ||
378 | * doorbell count field, and so we should always free the CQE. | ||
379 | */ | 389 | */ |
380 | if (mthca_is_memfree(dev) || | 390 | if (!(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd)) |
381 | !(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd)) | ||
382 | return 0; | 391 | return 0; |
383 | 392 | ||
384 | cqe->db_cnt = cpu_to_be16(be16_to_cpu(cqe->db_cnt) - dbd); | 393 | cqe->db_cnt = cpu_to_be16(be16_to_cpu(cqe->db_cnt) - dbd); |
@@ -450,23 +459,27 @@ static inline int mthca_poll_one(struct mthca_dev *dev, | |||
450 | >> wq->wqe_shift); | 459 | >> wq->wqe_shift); |
451 | entry->wr_id = (*cur_qp)->wrid[wqe_index + | 460 | entry->wr_id = (*cur_qp)->wrid[wqe_index + |
452 | (*cur_qp)->rq.max]; | 461 | (*cur_qp)->rq.max]; |
462 | } else if ((*cur_qp)->ibqp.srq) { | ||
463 | struct mthca_srq *srq = to_msrq((*cur_qp)->ibqp.srq); | ||
464 | u32 wqe = be32_to_cpu(cqe->wqe); | ||
465 | wq = NULL; | ||
466 | wqe_index = wqe >> srq->wqe_shift; | ||
467 | entry->wr_id = srq->wrid[wqe_index]; | ||
468 | mthca_free_srq_wqe(srq, wqe); | ||
453 | } else { | 469 | } else { |
454 | wq = &(*cur_qp)->rq; | 470 | wq = &(*cur_qp)->rq; |
455 | wqe_index = be32_to_cpu(cqe->wqe) >> wq->wqe_shift; | 471 | wqe_index = be32_to_cpu(cqe->wqe) >> wq->wqe_shift; |
456 | entry->wr_id = (*cur_qp)->wrid[wqe_index]; | 472 | entry->wr_id = (*cur_qp)->wrid[wqe_index]; |
457 | } | 473 | } |
458 | 474 | ||
459 | if (wq->last_comp < wqe_index) | 475 | if (wq) { |
460 | wq->tail += wqe_index - wq->last_comp; | 476 | if (wq->last_comp < wqe_index) |
461 | else | 477 | wq->tail += wqe_index - wq->last_comp; |
462 | wq->tail += wqe_index + wq->max - wq->last_comp; | 478 | else |
463 | 479 | wq->tail += wqe_index + wq->max - wq->last_comp; | |
464 | wq->last_comp = wqe_index; | ||
465 | 480 | ||
466 | if (0) | 481 | wq->last_comp = wqe_index; |
467 | mthca_dbg(dev, "%s completion for QP %06x, index %d (nr %d)\n", | 482 | } |
468 | is_send ? "Send" : "Receive", | ||
469 | (*cur_qp)->qpn, wqe_index, wq->max); | ||
470 | 483 | ||
471 | if (is_error) { | 484 | if (is_error) { |
472 | err = handle_error_cqe(dev, cq, *cur_qp, wqe_index, is_send, | 485 | err = handle_error_cqe(dev, cq, *cur_qp, wqe_index, is_send, |
@@ -584,13 +597,13 @@ int mthca_poll_cq(struct ib_cq *ibcq, int num_entries, | |||
584 | 597 | ||
585 | int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify notify) | 598 | int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify notify) |
586 | { | 599 | { |
587 | u32 doorbell[2]; | 600 | __be32 doorbell[2]; |
588 | 601 | ||
589 | doorbell[0] = cpu_to_be32((notify == IB_CQ_SOLICITED ? | 602 | doorbell[0] = cpu_to_be32((notify == IB_CQ_SOLICITED ? |
590 | MTHCA_TAVOR_CQ_DB_REQ_NOT_SOL : | 603 | MTHCA_TAVOR_CQ_DB_REQ_NOT_SOL : |
591 | MTHCA_TAVOR_CQ_DB_REQ_NOT) | | 604 | MTHCA_TAVOR_CQ_DB_REQ_NOT) | |
592 | to_mcq(cq)->cqn); | 605 | to_mcq(cq)->cqn); |
593 | doorbell[1] = 0xffffffff; | 606 | doorbell[1] = (__force __be32) 0xffffffff; |
594 | 607 | ||
595 | mthca_write64(doorbell, | 608 | mthca_write64(doorbell, |
596 | to_mdev(cq->device)->kar + MTHCA_CQ_DOORBELL, | 609 | to_mdev(cq->device)->kar + MTHCA_CQ_DOORBELL, |
@@ -602,9 +615,9 @@ int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify notify) | |||
602 | int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify) | 615 | int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify) |
603 | { | 616 | { |
604 | struct mthca_cq *cq = to_mcq(ibcq); | 617 | struct mthca_cq *cq = to_mcq(ibcq); |
605 | u32 doorbell[2]; | 618 | __be32 doorbell[2]; |
606 | u32 sn; | 619 | u32 sn; |
607 | u32 ci; | 620 | __be32 ci; |
608 | 621 | ||
609 | sn = cq->arm_sn & 3; | 622 | sn = cq->arm_sn & 3; |
610 | ci = cpu_to_be32(cq->cons_index); | 623 | ci = cpu_to_be32(cq->cons_index); |
@@ -637,113 +650,8 @@ int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify) | |||
637 | 650 | ||
638 | static void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq *cq) | 651 | static void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq *cq) |
639 | { | 652 | { |
640 | int i; | 653 | mthca_buf_free(dev, (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE, |
641 | int size; | 654 | &cq->queue, cq->is_direct, &cq->mr); |
642 | |||
643 | if (cq->is_direct) | ||
644 | dma_free_coherent(&dev->pdev->dev, | ||
645 | (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE, | ||
646 | cq->queue.direct.buf, | ||
647 | pci_unmap_addr(&cq->queue.direct, | ||
648 | mapping)); | ||
649 | else { | ||
650 | size = (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE; | ||
651 | for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i) | ||
652 | if (cq->queue.page_list[i].buf) | ||
653 | dma_free_coherent(&dev->pdev->dev, PAGE_SIZE, | ||
654 | cq->queue.page_list[i].buf, | ||
655 | pci_unmap_addr(&cq->queue.page_list[i], | ||
656 | mapping)); | ||
657 | |||
658 | kfree(cq->queue.page_list); | ||
659 | } | ||
660 | } | ||
661 | |||
662 | static int mthca_alloc_cq_buf(struct mthca_dev *dev, int size, | ||
663 | struct mthca_cq *cq) | ||
664 | { | ||
665 | int err = -ENOMEM; | ||
666 | int npages, shift; | ||
667 | u64 *dma_list = NULL; | ||
668 | dma_addr_t t; | ||
669 | int i; | ||
670 | |||
671 | if (size <= MTHCA_MAX_DIRECT_CQ_SIZE) { | ||
672 | cq->is_direct = 1; | ||
673 | npages = 1; | ||
674 | shift = get_order(size) + PAGE_SHIFT; | ||
675 | |||
676 | cq->queue.direct.buf = dma_alloc_coherent(&dev->pdev->dev, | ||
677 | size, &t, GFP_KERNEL); | ||
678 | if (!cq->queue.direct.buf) | ||
679 | return -ENOMEM; | ||
680 | |||
681 | pci_unmap_addr_set(&cq->queue.direct, mapping, t); | ||
682 | |||
683 | memset(cq->queue.direct.buf, 0, size); | ||
684 | |||
685 | while (t & ((1 << shift) - 1)) { | ||
686 | --shift; | ||
687 | npages *= 2; | ||
688 | } | ||
689 | |||
690 | dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL); | ||
691 | if (!dma_list) | ||
692 | goto err_free; | ||
693 | |||
694 | for (i = 0; i < npages; ++i) | ||
695 | dma_list[i] = t + i * (1 << shift); | ||
696 | } else { | ||
697 | cq->is_direct = 0; | ||
698 | npages = (size + PAGE_SIZE - 1) / PAGE_SIZE; | ||
699 | shift = PAGE_SHIFT; | ||
700 | |||
701 | dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL); | ||
702 | if (!dma_list) | ||
703 | return -ENOMEM; | ||
704 | |||
705 | cq->queue.page_list = kmalloc(npages * sizeof *cq->queue.page_list, | ||
706 | GFP_KERNEL); | ||
707 | if (!cq->queue.page_list) | ||
708 | goto err_out; | ||
709 | |||
710 | for (i = 0; i < npages; ++i) | ||
711 | cq->queue.page_list[i].buf = NULL; | ||
712 | |||
713 | for (i = 0; i < npages; ++i) { | ||
714 | cq->queue.page_list[i].buf = | ||
715 | dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE, | ||
716 | &t, GFP_KERNEL); | ||
717 | if (!cq->queue.page_list[i].buf) | ||
718 | goto err_free; | ||
719 | |||
720 | dma_list[i] = t; | ||
721 | pci_unmap_addr_set(&cq->queue.page_list[i], mapping, t); | ||
722 | |||
723 | memset(cq->queue.page_list[i].buf, 0, PAGE_SIZE); | ||
724 | } | ||
725 | } | ||
726 | |||
727 | err = mthca_mr_alloc_phys(dev, dev->driver_pd.pd_num, | ||
728 | dma_list, shift, npages, | ||
729 | 0, size, | ||
730 | MTHCA_MPT_FLAG_LOCAL_WRITE | | ||
731 | MTHCA_MPT_FLAG_LOCAL_READ, | ||
732 | &cq->mr); | ||
733 | if (err) | ||
734 | goto err_free; | ||
735 | |||
736 | kfree(dma_list); | ||
737 | |||
738 | return 0; | ||
739 | |||
740 | err_free: | ||
741 | mthca_free_cq_buf(dev, cq); | ||
742 | |||
743 | err_out: | ||
744 | kfree(dma_list); | ||
745 | |||
746 | return err; | ||
747 | } | 655 | } |
748 | 656 | ||
749 | int mthca_init_cq(struct mthca_dev *dev, int nent, | 657 | int mthca_init_cq(struct mthca_dev *dev, int nent, |
@@ -795,7 +703,9 @@ int mthca_init_cq(struct mthca_dev *dev, int nent, | |||
795 | cq_context = mailbox->buf; | 703 | cq_context = mailbox->buf; |
796 | 704 | ||
797 | if (cq->is_kernel) { | 705 | if (cq->is_kernel) { |
798 | err = mthca_alloc_cq_buf(dev, size, cq); | 706 | err = mthca_buf_alloc(dev, size, MTHCA_MAX_DIRECT_CQ_SIZE, |
707 | &cq->queue, &cq->is_direct, | ||
708 | &dev->driver_pd, 1, &cq->mr); | ||
799 | if (err) | 709 | if (err) |
800 | goto err_out_mailbox; | 710 | goto err_out_mailbox; |
801 | 711 | ||
@@ -811,7 +721,6 @@ int mthca_init_cq(struct mthca_dev *dev, int nent, | |||
811 | cq_context->flags = cpu_to_be32(MTHCA_CQ_STATUS_OK | | 721 | cq_context->flags = cpu_to_be32(MTHCA_CQ_STATUS_OK | |
812 | MTHCA_CQ_STATE_DISARMED | | 722 | MTHCA_CQ_STATE_DISARMED | |
813 | MTHCA_CQ_FLAG_TR); | 723 | MTHCA_CQ_FLAG_TR); |
814 | cq_context->start = cpu_to_be64(0); | ||
815 | cq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24); | 724 | cq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24); |
816 | if (ctx) | 725 | if (ctx) |
817 | cq_context->logsize_usrpage |= cpu_to_be32(ctx->uar.index); | 726 | cq_context->logsize_usrpage |= cpu_to_be32(ctx->uar.index); |
@@ -857,10 +766,8 @@ int mthca_init_cq(struct mthca_dev *dev, int nent, | |||
857 | return 0; | 766 | return 0; |
858 | 767 | ||
859 | err_out_free_mr: | 768 | err_out_free_mr: |
860 | if (cq->is_kernel) { | 769 | if (cq->is_kernel) |
861 | mthca_free_mr(dev, &cq->mr); | ||
862 | mthca_free_cq_buf(dev, cq); | 770 | mthca_free_cq_buf(dev, cq); |
863 | } | ||
864 | 771 | ||
865 | err_out_mailbox: | 772 | err_out_mailbox: |
866 | mthca_free_mailbox(dev, mailbox); | 773 | mthca_free_mailbox(dev, mailbox); |
@@ -904,7 +811,7 @@ void mthca_free_cq(struct mthca_dev *dev, | |||
904 | mthca_warn(dev, "HW2SW_CQ returned status 0x%02x\n", status); | 811 | mthca_warn(dev, "HW2SW_CQ returned status 0x%02x\n", status); |
905 | 812 | ||
906 | if (0) { | 813 | if (0) { |
907 | u32 *ctx = mailbox->buf; | 814 | __be32 *ctx = mailbox->buf; |
908 | int j; | 815 | int j; |
909 | 816 | ||
910 | printk(KERN_ERR "context for CQN %x (cons index %x, next sw %d)\n", | 817 | printk(KERN_ERR "context for CQN %x (cons index %x, next sw %d)\n", |
@@ -928,7 +835,6 @@ void mthca_free_cq(struct mthca_dev *dev, | |||
928 | wait_event(cq->wait, !atomic_read(&cq->refcount)); | 835 | wait_event(cq->wait, !atomic_read(&cq->refcount)); |
929 | 836 | ||
930 | if (cq->is_kernel) { | 837 | if (cq->is_kernel) { |
931 | mthca_free_mr(dev, &cq->mr); | ||
932 | mthca_free_cq_buf(dev, cq); | 838 | mthca_free_cq_buf(dev, cq); |
933 | if (mthca_is_memfree(dev)) { | 839 | if (mthca_is_memfree(dev)) { |
934 | mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM, cq->arm_db_index); | 840 | mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM, cq->arm_db_index); |
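
Two SRQ-related changes in this file are worth calling out: mthca_poll_one() now recognizes completions whose receive WQE came from an SRQ (wq stays NULL and the WQE is returned via mthca_free_srq_wqe()), and mthca_cq_clean() takes the SRQ so that, when a QP is torn down, its not-yet-completed SRQ WQEs are also returned to the free list. A hedged sketch of the caller side of the new mthca_cq_clean() signature (the actual caller is in mthca_qp.c, outside this excerpt):

    mthca_cq_clean(dev, to_mcq(qp->ibqp.recv_cq)->cqn, qp->qpn,
                   qp->ibqp.srq ? to_msrq(qp->ibqp.srq) : NULL);
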
diff --git a/drivers/infiniband/hw/mthca/mthca_dev.h b/drivers/infiniband/hw/mthca/mthca_dev.h index 5ecdd2eeeb0f..7bff5a8425f4 100644 --- a/drivers/infiniband/hw/mthca/mthca_dev.h +++ b/drivers/infiniband/hw/mthca/mthca_dev.h | |||
@@ -2,6 +2,8 @@ | |||
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | 3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. |
4 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 4 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
5 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
6 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
5 | * | 7 | * |
6 | * This software is available to you under a choice of one of two | 8 | * This software is available to you under a choice of one of two |
7 | * licenses. You may choose to be licensed under the terms of the GNU | 9 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -67,6 +69,10 @@ enum { | |||
67 | }; | 69 | }; |
68 | 70 | ||
69 | enum { | 71 | enum { |
72 | MTHCA_BOARD_ID_LEN = 64 | ||
73 | }; | ||
74 | |||
75 | enum { | ||
70 | MTHCA_EQ_CONTEXT_SIZE = 0x40, | 76 | MTHCA_EQ_CONTEXT_SIZE = 0x40, |
71 | MTHCA_CQ_CONTEXT_SIZE = 0x40, | 77 | MTHCA_CQ_CONTEXT_SIZE = 0x40, |
72 | MTHCA_QP_CONTEXT_SIZE = 0x200, | 78 | MTHCA_QP_CONTEXT_SIZE = 0x200, |
@@ -142,6 +148,7 @@ struct mthca_limits { | |||
142 | int reserved_mcgs; | 148 | int reserved_mcgs; |
143 | int num_pds; | 149 | int num_pds; |
144 | int reserved_pds; | 150 | int reserved_pds; |
151 | u8 port_width_cap; | ||
145 | }; | 152 | }; |
146 | 153 | ||
147 | struct mthca_alloc { | 154 | struct mthca_alloc { |
@@ -211,6 +218,13 @@ struct mthca_cq_table { | |||
211 | struct mthca_icm_table *table; | 218 | struct mthca_icm_table *table; |
212 | }; | 219 | }; |
213 | 220 | ||
221 | struct mthca_srq_table { | ||
222 | struct mthca_alloc alloc; | ||
223 | spinlock_t lock; | ||
224 | struct mthca_array srq; | ||
225 | struct mthca_icm_table *table; | ||
226 | }; | ||
227 | |||
214 | struct mthca_qp_table { | 228 | struct mthca_qp_table { |
215 | struct mthca_alloc alloc; | 229 | struct mthca_alloc alloc; |
216 | u32 rdb_base; | 230 | u32 rdb_base; |
@@ -246,6 +260,7 @@ struct mthca_dev { | |||
246 | unsigned long device_cap_flags; | 260 | unsigned long device_cap_flags; |
247 | 261 | ||
248 | u32 rev_id; | 262 | u32 rev_id; |
263 | char board_id[MTHCA_BOARD_ID_LEN]; | ||
249 | 264 | ||
250 | /* firmware info */ | 265 | /* firmware info */ |
251 | u64 fw_ver; | 266 | u64 fw_ver; |
@@ -291,6 +306,7 @@ struct mthca_dev { | |||
291 | struct mthca_mr_table mr_table; | 306 | struct mthca_mr_table mr_table; |
292 | struct mthca_eq_table eq_table; | 307 | struct mthca_eq_table eq_table; |
293 | struct mthca_cq_table cq_table; | 308 | struct mthca_cq_table cq_table; |
309 | struct mthca_srq_table srq_table; | ||
294 | struct mthca_qp_table qp_table; | 310 | struct mthca_qp_table qp_table; |
295 | struct mthca_av_table av_table; | 311 | struct mthca_av_table av_table; |
296 | struct mthca_mcg_table mcg_table; | 312 | struct mthca_mcg_table mcg_table; |
@@ -331,14 +347,13 @@ extern void __buggy_use_of_MTHCA_PUT(void); | |||
331 | 347 | ||
332 | #define MTHCA_PUT(dest, source, offset) \ | 348 | #define MTHCA_PUT(dest, source, offset) \ |
333 | do { \ | 349 | do { \ |
334 | __typeof__(source) *__p = \ | 350 | void *__d = ((char *) (dest) + (offset)); \ |
335 | (__typeof__(source) *) ((char *) (dest) + (offset)); \ | ||
336 | switch (sizeof(source)) { \ | 351 | switch (sizeof(source)) { \ |
337 | case 1: *__p = (source); break; \ | 352 | case 1: *(u8 *) __d = (source); break; \ |
338 | case 2: *__p = cpu_to_be16(source); break; \ | 353 | case 2: *(__be16 *) __d = cpu_to_be16(source); break; \ |
339 | case 4: *__p = cpu_to_be32(source); break; \ | 354 | case 4: *(__be32 *) __d = cpu_to_be32(source); break; \ |
340 | case 8: *__p = cpu_to_be64(source); break; \ | 355 | case 8: *(__be64 *) __d = cpu_to_be64(source); break; \ |
341 | default: __buggy_use_of_MTHCA_PUT(); \ | 356 | default: __buggy_use_of_MTHCA_PUT(); \ |
342 | } \ | 357 | } \ |
343 | } while (0) | 358 | } while (0) |
344 | 359 | ||
@@ -354,12 +369,18 @@ int mthca_array_set(struct mthca_array *array, int index, void *value); | |||
354 | void mthca_array_clear(struct mthca_array *array, int index); | 369 | void mthca_array_clear(struct mthca_array *array, int index); |
355 | int mthca_array_init(struct mthca_array *array, int nent); | 370 | int mthca_array_init(struct mthca_array *array, int nent); |
356 | void mthca_array_cleanup(struct mthca_array *array, int nent); | 371 | void mthca_array_cleanup(struct mthca_array *array, int nent); |
372 | int mthca_buf_alloc(struct mthca_dev *dev, int size, int max_direct, | ||
373 | union mthca_buf *buf, int *is_direct, struct mthca_pd *pd, | ||
374 | int hca_write, struct mthca_mr *mr); | ||
375 | void mthca_buf_free(struct mthca_dev *dev, int size, union mthca_buf *buf, | ||
376 | int is_direct, struct mthca_mr *mr); | ||
357 | 377 | ||
358 | int mthca_init_uar_table(struct mthca_dev *dev); | 378 | int mthca_init_uar_table(struct mthca_dev *dev); |
359 | int mthca_init_pd_table(struct mthca_dev *dev); | 379 | int mthca_init_pd_table(struct mthca_dev *dev); |
360 | int mthca_init_mr_table(struct mthca_dev *dev); | 380 | int mthca_init_mr_table(struct mthca_dev *dev); |
361 | int mthca_init_eq_table(struct mthca_dev *dev); | 381 | int mthca_init_eq_table(struct mthca_dev *dev); |
362 | int mthca_init_cq_table(struct mthca_dev *dev); | 382 | int mthca_init_cq_table(struct mthca_dev *dev); |
383 | int mthca_init_srq_table(struct mthca_dev *dev); | ||
363 | int mthca_init_qp_table(struct mthca_dev *dev); | 384 | int mthca_init_qp_table(struct mthca_dev *dev); |
364 | int mthca_init_av_table(struct mthca_dev *dev); | 385 | int mthca_init_av_table(struct mthca_dev *dev); |
365 | int mthca_init_mcg_table(struct mthca_dev *dev); | 386 | int mthca_init_mcg_table(struct mthca_dev *dev); |
@@ -369,6 +390,7 @@ void mthca_cleanup_pd_table(struct mthca_dev *dev); | |||
369 | void mthca_cleanup_mr_table(struct mthca_dev *dev); | 390 | void mthca_cleanup_mr_table(struct mthca_dev *dev); |
370 | void mthca_cleanup_eq_table(struct mthca_dev *dev); | 391 | void mthca_cleanup_eq_table(struct mthca_dev *dev); |
371 | void mthca_cleanup_cq_table(struct mthca_dev *dev); | 392 | void mthca_cleanup_cq_table(struct mthca_dev *dev); |
393 | void mthca_cleanup_srq_table(struct mthca_dev *dev); | ||
372 | void mthca_cleanup_qp_table(struct mthca_dev *dev); | 394 | void mthca_cleanup_qp_table(struct mthca_dev *dev); |
373 | void mthca_cleanup_av_table(struct mthca_dev *dev); | 395 | void mthca_cleanup_av_table(struct mthca_dev *dev); |
374 | void mthca_cleanup_mcg_table(struct mthca_dev *dev); | 396 | void mthca_cleanup_mcg_table(struct mthca_dev *dev); |
@@ -419,7 +441,19 @@ int mthca_init_cq(struct mthca_dev *dev, int nent, | |||
419 | void mthca_free_cq(struct mthca_dev *dev, | 441 | void mthca_free_cq(struct mthca_dev *dev, |
420 | struct mthca_cq *cq); | 442 | struct mthca_cq *cq); |
421 | void mthca_cq_event(struct mthca_dev *dev, u32 cqn); | 443 | void mthca_cq_event(struct mthca_dev *dev, u32 cqn); |
422 | void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn); | 444 | void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn, |
445 | struct mthca_srq *srq); | ||
446 | |||
447 | int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, | ||
448 | struct ib_srq_attr *attr, struct mthca_srq *srq); | ||
449 | void mthca_free_srq(struct mthca_dev *dev, struct mthca_srq *srq); | ||
450 | void mthca_srq_event(struct mthca_dev *dev, u32 srqn, | ||
451 | enum ib_event_type event_type); | ||
452 | void mthca_free_srq_wqe(struct mthca_srq *srq, u32 wqe_addr); | ||
453 | int mthca_tavor_post_srq_recv(struct ib_srq *srq, struct ib_recv_wr *wr, | ||
454 | struct ib_recv_wr **bad_wr); | ||
455 | int mthca_arbel_post_srq_recv(struct ib_srq *srq, struct ib_recv_wr *wr, | ||
456 | struct ib_recv_wr **bad_wr); | ||
423 | 457 | ||
424 | void mthca_qp_event(struct mthca_dev *dev, u32 qpn, | 458 | void mthca_qp_event(struct mthca_dev *dev, u32 qpn, |
425 | enum ib_event_type event_type); | 459 | enum ib_event_type event_type); |
@@ -433,7 +467,7 @@ int mthca_arbel_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, | |||
433 | int mthca_arbel_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr, | 467 | int mthca_arbel_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr, |
434 | struct ib_recv_wr **bad_wr); | 468 | struct ib_recv_wr **bad_wr); |
435 | int mthca_free_err_wqe(struct mthca_dev *dev, struct mthca_qp *qp, int is_send, | 469 | int mthca_free_err_wqe(struct mthca_dev *dev, struct mthca_qp *qp, int is_send, |
436 | int index, int *dbd, u32 *new_wqe); | 470 | int index, int *dbd, __be32 *new_wqe); |
437 | int mthca_alloc_qp(struct mthca_dev *dev, | 471 | int mthca_alloc_qp(struct mthca_dev *dev, |
438 | struct mthca_pd *pd, | 472 | struct mthca_pd *pd, |
439 | struct mthca_cq *send_cq, | 473 | struct mthca_cq *send_cq, |
diff --git a/drivers/infiniband/hw/mthca/mthca_doorbell.h b/drivers/infiniband/hw/mthca/mthca_doorbell.h index 535fad7710fb..dd9a44d170c9 100644 --- a/drivers/infiniband/hw/mthca/mthca_doorbell.h +++ b/drivers/infiniband/hw/mthca/mthca_doorbell.h | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | 3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * | 5 | * |
5 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -57,13 +58,13 @@ static inline void mthca_write64_raw(__be64 val, void __iomem *dest) | |||
57 | __raw_writeq((__force u64) val, dest); | 58 | __raw_writeq((__force u64) val, dest); |
58 | } | 59 | } |
59 | 60 | ||
60 | static inline void mthca_write64(u32 val[2], void __iomem *dest, | 61 | static inline void mthca_write64(__be32 val[2], void __iomem *dest, |
61 | spinlock_t *doorbell_lock) | 62 | spinlock_t *doorbell_lock) |
62 | { | 63 | { |
63 | __raw_writeq(*(u64 *) val, dest); | 64 | __raw_writeq(*(u64 *) val, dest); |
64 | } | 65 | } |
65 | 66 | ||
66 | static inline void mthca_write_db_rec(u32 val[2], u32 *db) | 67 | static inline void mthca_write_db_rec(__be32 val[2], __be32 *db) |
67 | { | 68 | { |
68 | *(u64 *) db = *(u64 *) val; | 69 | *(u64 *) db = *(u64 *) val; |
69 | } | 70 | } |
@@ -86,18 +87,18 @@ static inline void mthca_write64_raw(__be64 val, void __iomem *dest) | |||
86 | __raw_writel(((__force u32 *) &val)[1], dest + 4); | 87 | __raw_writel(((__force u32 *) &val)[1], dest + 4); |
87 | } | 88 | } |
88 | 89 | ||
89 | static inline void mthca_write64(u32 val[2], void __iomem *dest, | 90 | static inline void mthca_write64(__be32 val[2], void __iomem *dest, |
90 | spinlock_t *doorbell_lock) | 91 | spinlock_t *doorbell_lock) |
91 | { | 92 | { |
92 | unsigned long flags; | 93 | unsigned long flags; |
93 | 94 | ||
94 | spin_lock_irqsave(doorbell_lock, flags); | 95 | spin_lock_irqsave(doorbell_lock, flags); |
95 | __raw_writel(val[0], dest); | 96 | __raw_writel((__force u32) val[0], dest); |
96 | __raw_writel(val[1], dest + 4); | 97 | __raw_writel((__force u32) val[1], dest + 4); |
97 | spin_unlock_irqrestore(doorbell_lock, flags); | 98 | spin_unlock_irqrestore(doorbell_lock, flags); |
98 | } | 99 | } |
99 | 100 | ||
100 | static inline void mthca_write_db_rec(u32 val[2], u32 *db) | 101 | static inline void mthca_write_db_rec(__be32 val[2], __be32 *db) |
101 | { | 102 | { |
102 | db[0] = val[0]; | 103 | db[0] = val[0]; |
103 | wmb(); | 104 | wmb(); |
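The mthca_doorbell.h hunk above retypes the doorbell helpers to take __be32, so callers must build both doorbell words in wire (big-endian) order before handing them over. Below is a minimal sketch of that calling pattern; MTHCA_EQ_DB_SET_CI and the mthca_write64() signature come from this diff, while the wrapper function and its db_reg/db_lock parameters are purely illustrative.

	static void eq_set_ci_sketch(void __iomem *db_reg, spinlock_t *db_lock,
				     struct mthca_eq *eq, u32 ci)
	{
		__be32 doorbell[2];

		/* Swap exactly once, at the point the values are produced. */
		doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_SET_CI | eq->eqn);
		doorbell[1] = cpu_to_be32(ci & (eq->nent - 1));

		/*
		 * 64-bit builds write the pair as a single u64; 32-bit builds
		 * write each half with a (__force u32) cast under db_lock, so
		 * no further swapping happens past this call.
		 */
		mthca_write64(doorbell, db_reg, db_lock);
	}

mthca_write_db_rec() follows the same rule for the memory-resident doorbell records.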
diff --git a/drivers/infiniband/hw/mthca/mthca_eq.c b/drivers/infiniband/hw/mthca/mthca_eq.c index cbcf2b4722e4..18f0981eb0c1 100644 --- a/drivers/infiniband/hw/mthca/mthca_eq.c +++ b/drivers/infiniband/hw/mthca/mthca_eq.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -51,18 +52,18 @@ enum { | |||
51 | * Must be packed because start is 64 bits but only aligned to 32 bits. | 52 | * Must be packed because start is 64 bits but only aligned to 32 bits. |
52 | */ | 53 | */ |
53 | struct mthca_eq_context { | 54 | struct mthca_eq_context { |
54 | u32 flags; | 55 | __be32 flags; |
55 | u64 start; | 56 | __be64 start; |
56 | u32 logsize_usrpage; | 57 | __be32 logsize_usrpage; |
57 | u32 tavor_pd; /* reserved for Arbel */ | 58 | __be32 tavor_pd; /* reserved for Arbel */ |
58 | u8 reserved1[3]; | 59 | u8 reserved1[3]; |
59 | u8 intr; | 60 | u8 intr; |
60 | u32 arbel_pd; /* lost_count for Tavor */ | 61 | __be32 arbel_pd; /* lost_count for Tavor */ |
61 | u32 lkey; | 62 | __be32 lkey; |
62 | u32 reserved2[2]; | 63 | u32 reserved2[2]; |
63 | u32 consumer_index; | 64 | __be32 consumer_index; |
64 | u32 producer_index; | 65 | __be32 producer_index; |
65 | u32 reserved3[4]; | 66 | u32 reserved3[4]; |
66 | } __attribute__((packed)); | 67 | } __attribute__((packed)); |
67 | 68 | ||
68 | #define MTHCA_EQ_STATUS_OK ( 0 << 28) | 69 | #define MTHCA_EQ_STATUS_OK ( 0 << 28) |
@@ -127,28 +128,28 @@ struct mthca_eqe { | |||
127 | union { | 128 | union { |
128 | u32 raw[6]; | 129 | u32 raw[6]; |
129 | struct { | 130 | struct { |
130 | u32 cqn; | 131 | __be32 cqn; |
131 | } __attribute__((packed)) comp; | 132 | } __attribute__((packed)) comp; |
132 | struct { | 133 | struct { |
133 | u16 reserved1; | 134 | u16 reserved1; |
134 | u16 token; | 135 | __be16 token; |
135 | u32 reserved2; | 136 | u32 reserved2; |
136 | u8 reserved3[3]; | 137 | u8 reserved3[3]; |
137 | u8 status; | 138 | u8 status; |
138 | u64 out_param; | 139 | __be64 out_param; |
139 | } __attribute__((packed)) cmd; | 140 | } __attribute__((packed)) cmd; |
140 | struct { | 141 | struct { |
141 | u32 qpn; | 142 | __be32 qpn; |
142 | } __attribute__((packed)) qp; | 143 | } __attribute__((packed)) qp; |
143 | struct { | 144 | struct { |
144 | u32 cqn; | 145 | __be32 cqn; |
145 | u32 reserved1; | 146 | u32 reserved1; |
146 | u8 reserved2[3]; | 147 | u8 reserved2[3]; |
147 | u8 syndrome; | 148 | u8 syndrome; |
148 | } __attribute__((packed)) cq_err; | 149 | } __attribute__((packed)) cq_err; |
149 | struct { | 150 | struct { |
150 | u32 reserved1[2]; | 151 | u32 reserved1[2]; |
151 | u32 port; | 152 | __be32 port; |
152 | } __attribute__((packed)) port_change; | 153 | } __attribute__((packed)) port_change; |
153 | } event; | 154 | } event; |
154 | u8 reserved3[3]; | 155 | u8 reserved3[3]; |
@@ -167,7 +168,7 @@ static inline u64 async_mask(struct mthca_dev *dev) | |||
167 | 168 | ||
168 | static inline void tavor_set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci) | 169 | static inline void tavor_set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci) |
169 | { | 170 | { |
170 | u32 doorbell[2]; | 171 | __be32 doorbell[2]; |
171 | 172 | ||
172 | doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_SET_CI | eq->eqn); | 173 | doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_SET_CI | eq->eqn); |
173 | doorbell[1] = cpu_to_be32(ci & (eq->nent - 1)); | 174 | doorbell[1] = cpu_to_be32(ci & (eq->nent - 1)); |
@@ -190,8 +191,8 @@ static inline void arbel_set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u | |||
190 | { | 191 | { |
191 | /* See comment in tavor_set_eq_ci() above. */ | 192 | /* See comment in tavor_set_eq_ci() above. */ |
192 | wmb(); | 193 | wmb(); |
193 | __raw_writel(cpu_to_be32(ci), dev->eq_regs.arbel.eq_set_ci_base + | 194 | __raw_writel((__force u32) cpu_to_be32(ci), |
194 | eq->eqn * 8); | 195 | dev->eq_regs.arbel.eq_set_ci_base + eq->eqn * 8); |
195 | /* We still want ordering, just not swabbing, so add a barrier */ | 196 | /* We still want ordering, just not swabbing, so add a barrier */ |
196 | mb(); | 197 | mb(); |
197 | } | 198 | } |
@@ -206,7 +207,7 @@ static inline void set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci) | |||
206 | 207 | ||
207 | static inline void tavor_eq_req_not(struct mthca_dev *dev, int eqn) | 208 | static inline void tavor_eq_req_not(struct mthca_dev *dev, int eqn) |
208 | { | 209 | { |
209 | u32 doorbell[2]; | 210 | __be32 doorbell[2]; |
210 | 211 | ||
211 | doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_REQ_NOT | eqn); | 212 | doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_REQ_NOT | eqn); |
212 | doorbell[1] = 0; | 213 | doorbell[1] = 0; |
@@ -224,7 +225,7 @@ static inline void arbel_eq_req_not(struct mthca_dev *dev, u32 eqn_mask) | |||
224 | static inline void disarm_cq(struct mthca_dev *dev, int eqn, int cqn) | 225 | static inline void disarm_cq(struct mthca_dev *dev, int eqn, int cqn) |
225 | { | 226 | { |
226 | if (!mthca_is_memfree(dev)) { | 227 | if (!mthca_is_memfree(dev)) { |
227 | u32 doorbell[2]; | 228 | __be32 doorbell[2]; |
228 | 229 | ||
229 | doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_DISARM_CQ | eqn); | 230 | doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_DISARM_CQ | eqn); |
230 | doorbell[1] = cpu_to_be32(cqn); | 231 | doorbell[1] = cpu_to_be32(cqn); |
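Throughout mthca_eq.c (and the files that follow) the hardware-visible fields move from u32/u64 to __be32/__be64. These are sparse bitwise types, so a missing cpu_to_be32()/be32_to_cpu() now surfaces as a warning when the tree is checked with make C=2 CF="-D__CHECK_ENDIAN__". A small sketch of the intended discipline, using mthca_eq_context fields from the hunk above (the helper itself is illustrative, not part of the driver):

	static void fill_eq_context_sketch(struct mthca_eq_context *ctx,
					   u64 queue_start, u32 lkey)
	{
		memset(ctx, 0, sizeof *ctx);

		/* __be32 = cpu_to_be32(...): the types match, sparse is happy. */
		ctx->flags = cpu_to_be32(MTHCA_EQ_STATUS_OK);
		ctx->start = cpu_to_be64(queue_start);
		ctx->lkey  = cpu_to_be32(lkey);

		/*
		 * ctx->lkey = lkey; would now draw a sparse warning: a plain
		 * u32 stored into a __be32 field, i.e. a missing byte swap.
		 */
	}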
diff --git a/drivers/infiniband/hw/mthca/mthca_mad.c b/drivers/infiniband/hw/mthca/mthca_mad.c index 7df223642015..9804174f7f3c 100644 --- a/drivers/infiniband/hw/mthca/mthca_mad.c +++ b/drivers/infiniband/hw/mthca/mthca_mad.c | |||
@@ -1,5 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
3 | * | 5 | * |
4 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -32,9 +34,9 @@ | |||
32 | * $Id: mthca_mad.c 1349 2004-12-16 21:09:43Z roland $ | 34 | * $Id: mthca_mad.c 1349 2004-12-16 21:09:43Z roland $ |
33 | */ | 35 | */ |
34 | 36 | ||
35 | #include <ib_verbs.h> | 37 | #include <rdma/ib_verbs.h> |
36 | #include <ib_mad.h> | 38 | #include <rdma/ib_mad.h> |
37 | #include <ib_smi.h> | 39 | #include <rdma/ib_smi.h> |
38 | 40 | ||
39 | #include "mthca_dev.h" | 41 | #include "mthca_dev.h" |
40 | #include "mthca_cmd.h" | 42 | #include "mthca_cmd.h" |
@@ -192,7 +194,7 @@ int mthca_process_mad(struct ib_device *ibdev, | |||
192 | { | 194 | { |
193 | int err; | 195 | int err; |
194 | u8 status; | 196 | u8 status; |
195 | u16 slid = in_wc ? in_wc->slid : IB_LID_PERMISSIVE; | 197 | u16 slid = in_wc ? in_wc->slid : be16_to_cpu(IB_LID_PERMISSIVE); |
196 | 198 | ||
197 | /* Forward locally generated traps to the SM */ | 199 | /* Forward locally generated traps to the SM */ |
198 | if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP && | 200 | if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP && |
diff --git a/drivers/infiniband/hw/mthca/mthca_main.c b/drivers/infiniband/hw/mthca/mthca_main.c index 2ef916859e17..3241d6c9dc11 100644 --- a/drivers/infiniband/hw/mthca/mthca_main.c +++ b/drivers/infiniband/hw/mthca/mthca_main.c | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | 3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * | 5 | * |
5 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -34,7 +35,6 @@ | |||
34 | */ | 35 | */ |
35 | 36 | ||
36 | #include <linux/config.h> | 37 | #include <linux/config.h> |
37 | #include <linux/version.h> | ||
38 | #include <linux/module.h> | 38 | #include <linux/module.h> |
39 | #include <linux/init.h> | 39 | #include <linux/init.h> |
40 | #include <linux/errno.h> | 40 | #include <linux/errno.h> |
@@ -171,6 +171,7 @@ static int __devinit mthca_dev_lim(struct mthca_dev *mdev, struct mthca_dev_lim | |||
171 | mdev->limits.reserved_mrws = dev_lim->reserved_mrws; | 171 | mdev->limits.reserved_mrws = dev_lim->reserved_mrws; |
172 | mdev->limits.reserved_uars = dev_lim->reserved_uars; | 172 | mdev->limits.reserved_uars = dev_lim->reserved_uars; |
173 | mdev->limits.reserved_pds = dev_lim->reserved_pds; | 173 | mdev->limits.reserved_pds = dev_lim->reserved_pds; |
174 | mdev->limits.port_width_cap = dev_lim->max_port_width; | ||
174 | 175 | ||
175 | /* IB_DEVICE_RESIZE_MAX_WR not supported by driver. | 176 | /* IB_DEVICE_RESIZE_MAX_WR not supported by driver. |
176 | May be doable since hardware supports it for SRQ. | 177 | May be doable since hardware supports it for SRQ. |
@@ -212,7 +213,6 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev) | |||
212 | struct mthca_dev_lim dev_lim; | 213 | struct mthca_dev_lim dev_lim; |
213 | struct mthca_profile profile; | 214 | struct mthca_profile profile; |
214 | struct mthca_init_hca_param init_hca; | 215 | struct mthca_init_hca_param init_hca; |
215 | struct mthca_adapter adapter; | ||
216 | 216 | ||
217 | err = mthca_SYS_EN(mdev, &status); | 217 | err = mthca_SYS_EN(mdev, &status); |
218 | if (err) { | 218 | if (err) { |
@@ -253,6 +253,8 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev) | |||
253 | profile = default_profile; | 253 | profile = default_profile; |
254 | profile.num_uar = dev_lim.uar_size / PAGE_SIZE; | 254 | profile.num_uar = dev_lim.uar_size / PAGE_SIZE; |
255 | profile.uarc_size = 0; | 255 | profile.uarc_size = 0; |
256 | if (mdev->mthca_flags & MTHCA_FLAG_SRQ) | ||
257 | profile.num_srq = dev_lim.max_srqs; | ||
256 | 258 | ||
257 | err = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca); | 259 | err = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca); |
258 | if (err < 0) | 260 | if (err < 0) |
@@ -270,26 +272,8 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev) | |||
270 | goto err_disable; | 272 | goto err_disable; |
271 | } | 273 | } |
272 | 274 | ||
273 | err = mthca_QUERY_ADAPTER(mdev, &adapter, &status); | ||
274 | if (err) { | ||
275 | mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n"); | ||
276 | goto err_close; | ||
277 | } | ||
278 | if (status) { | ||
279 | mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, " | ||
280 | "aborting.\n", status); | ||
281 | err = -EINVAL; | ||
282 | goto err_close; | ||
283 | } | ||
284 | |||
285 | mdev->eq_table.inta_pin = adapter.inta_pin; | ||
286 | mdev->rev_id = adapter.revision_id; | ||
287 | |||
288 | return 0; | 275 | return 0; |
289 | 276 | ||
290 | err_close: | ||
291 | mthca_CLOSE_HCA(mdev, 0, &status); | ||
292 | |||
293 | err_disable: | 277 | err_disable: |
294 | mthca_SYS_DIS(mdev, &status); | 278 | mthca_SYS_DIS(mdev, &status); |
295 | 279 | ||
@@ -442,15 +426,29 @@ static int __devinit mthca_init_icm(struct mthca_dev *mdev, | |||
442 | } | 426 | } |
443 | 427 | ||
444 | mdev->cq_table.table = mthca_alloc_icm_table(mdev, init_hca->cqc_base, | 428 | mdev->cq_table.table = mthca_alloc_icm_table(mdev, init_hca->cqc_base, |
445 | dev_lim->cqc_entry_sz, | 429 | dev_lim->cqc_entry_sz, |
446 | mdev->limits.num_cqs, | 430 | mdev->limits.num_cqs, |
447 | mdev->limits.reserved_cqs, 0); | 431 | mdev->limits.reserved_cqs, 0); |
448 | if (!mdev->cq_table.table) { | 432 | if (!mdev->cq_table.table) { |
449 | mthca_err(mdev, "Failed to map CQ context memory, aborting.\n"); | 433 | mthca_err(mdev, "Failed to map CQ context memory, aborting.\n"); |
450 | err = -ENOMEM; | 434 | err = -ENOMEM; |
451 | goto err_unmap_rdb; | 435 | goto err_unmap_rdb; |
452 | } | 436 | } |
453 | 437 | ||
438 | if (mdev->mthca_flags & MTHCA_FLAG_SRQ) { | ||
439 | mdev->srq_table.table = | ||
440 | mthca_alloc_icm_table(mdev, init_hca->srqc_base, | ||
441 | dev_lim->srq_entry_sz, | ||
442 | mdev->limits.num_srqs, | ||
443 | mdev->limits.reserved_srqs, 0); | ||
444 | if (!mdev->srq_table.table) { | ||
445 | mthca_err(mdev, "Failed to map SRQ context memory, " | ||
446 | "aborting.\n"); | ||
447 | err = -ENOMEM; | ||
448 | goto err_unmap_cq; | ||
449 | } | ||
450 | } | ||
451 | |||
454 | /* | 452 | /* |
455 | * It's not strictly required, but for simplicity just map the | 453 | * It's not strictly required, but for simplicity just map the |
456 | * whole multicast group table now. The table isn't very big | 454 | * whole multicast group table now. The table isn't very big |
@@ -466,11 +464,15 @@ static int __devinit mthca_init_icm(struct mthca_dev *mdev, | |||
466 | if (!mdev->mcg_table.table) { | 464 | if (!mdev->mcg_table.table) { |
467 | mthca_err(mdev, "Failed to map MCG context memory, aborting.\n"); | 465 | mthca_err(mdev, "Failed to map MCG context memory, aborting.\n"); |
468 | err = -ENOMEM; | 466 | err = -ENOMEM; |
469 | goto err_unmap_cq; | 467 | goto err_unmap_srq; |
470 | } | 468 | } |
471 | 469 | ||
472 | return 0; | 470 | return 0; |
473 | 471 | ||
472 | err_unmap_srq: | ||
473 | if (mdev->mthca_flags & MTHCA_FLAG_SRQ) | ||
474 | mthca_free_icm_table(mdev, mdev->srq_table.table); | ||
475 | |||
474 | err_unmap_cq: | 476 | err_unmap_cq: |
475 | mthca_free_icm_table(mdev, mdev->cq_table.table); | 477 | mthca_free_icm_table(mdev, mdev->cq_table.table); |
476 | 478 | ||
@@ -506,7 +508,6 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev) | |||
506 | struct mthca_dev_lim dev_lim; | 508 | struct mthca_dev_lim dev_lim; |
507 | struct mthca_profile profile; | 509 | struct mthca_profile profile; |
508 | struct mthca_init_hca_param init_hca; | 510 | struct mthca_init_hca_param init_hca; |
509 | struct mthca_adapter adapter; | ||
510 | u64 icm_size; | 511 | u64 icm_size; |
511 | u8 status; | 512 | u8 status; |
512 | int err; | 513 | int err; |
@@ -551,6 +552,8 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev) | |||
551 | profile = default_profile; | 552 | profile = default_profile; |
552 | profile.num_uar = dev_lim.uar_size / PAGE_SIZE; | 553 | profile.num_uar = dev_lim.uar_size / PAGE_SIZE; |
553 | profile.num_udav = 0; | 554 | profile.num_udav = 0; |
555 | if (mdev->mthca_flags & MTHCA_FLAG_SRQ) | ||
556 | profile.num_srq = dev_lim.max_srqs; | ||
554 | 557 | ||
555 | icm_size = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca); | 558 | icm_size = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca); |
556 | if ((int) icm_size < 0) { | 559 | if ((int) icm_size < 0) { |
@@ -574,24 +577,11 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev) | |||
574 | goto err_free_icm; | 577 | goto err_free_icm; |
575 | } | 578 | } |
576 | 579 | ||
577 | err = mthca_QUERY_ADAPTER(mdev, &adapter, &status); | ||
578 | if (err) { | ||
579 | mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n"); | ||
580 | goto err_free_icm; | ||
581 | } | ||
582 | if (status) { | ||
583 | mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, " | ||
584 | "aborting.\n", status); | ||
585 | err = -EINVAL; | ||
586 | goto err_free_icm; | ||
587 | } | ||
588 | |||
589 | mdev->eq_table.inta_pin = adapter.inta_pin; | ||
590 | mdev->rev_id = adapter.revision_id; | ||
591 | |||
592 | return 0; | 580 | return 0; |
593 | 581 | ||
594 | err_free_icm: | 582 | err_free_icm: |
583 | if (mdev->mthca_flags & MTHCA_FLAG_SRQ) | ||
584 | mthca_free_icm_table(mdev, mdev->srq_table.table); | ||
595 | mthca_free_icm_table(mdev, mdev->cq_table.table); | 585 | mthca_free_icm_table(mdev, mdev->cq_table.table); |
596 | mthca_free_icm_table(mdev, mdev->qp_table.rdb_table); | 586 | mthca_free_icm_table(mdev, mdev->qp_table.rdb_table); |
597 | mthca_free_icm_table(mdev, mdev->qp_table.eqp_table); | 587 | mthca_free_icm_table(mdev, mdev->qp_table.eqp_table); |
@@ -614,12 +604,70 @@ err_disable: | |||
614 | return err; | 604 | return err; |
615 | } | 605 | } |
616 | 606 | ||
607 | static void mthca_close_hca(struct mthca_dev *mdev) | ||
608 | { | ||
609 | u8 status; | ||
610 | |||
611 | mthca_CLOSE_HCA(mdev, 0, &status); | ||
612 | |||
613 | if (mthca_is_memfree(mdev)) { | ||
614 | if (mdev->mthca_flags & MTHCA_FLAG_SRQ) | ||
615 | mthca_free_icm_table(mdev, mdev->srq_table.table); | ||
616 | mthca_free_icm_table(mdev, mdev->cq_table.table); | ||
617 | mthca_free_icm_table(mdev, mdev->qp_table.rdb_table); | ||
618 | mthca_free_icm_table(mdev, mdev->qp_table.eqp_table); | ||
619 | mthca_free_icm_table(mdev, mdev->qp_table.qp_table); | ||
620 | mthca_free_icm_table(mdev, mdev->mr_table.mpt_table); | ||
621 | mthca_free_icm_table(mdev, mdev->mr_table.mtt_table); | ||
622 | mthca_unmap_eq_icm(mdev); | ||
623 | |||
624 | mthca_UNMAP_ICM_AUX(mdev, &status); | ||
625 | mthca_free_icm(mdev, mdev->fw.arbel.aux_icm); | ||
626 | |||
627 | mthca_UNMAP_FA(mdev, &status); | ||
628 | mthca_free_icm(mdev, mdev->fw.arbel.fw_icm); | ||
629 | |||
630 | if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM)) | ||
631 | mthca_DISABLE_LAM(mdev, &status); | ||
632 | } else | ||
633 | mthca_SYS_DIS(mdev, &status); | ||
634 | } | ||
635 | |||
617 | static int __devinit mthca_init_hca(struct mthca_dev *mdev) | 636 | static int __devinit mthca_init_hca(struct mthca_dev *mdev) |
618 | { | 637 | { |
638 | u8 status; | ||
639 | int err; | ||
640 | struct mthca_adapter adapter; | ||
641 | |||
619 | if (mthca_is_memfree(mdev)) | 642 | if (mthca_is_memfree(mdev)) |
620 | return mthca_init_arbel(mdev); | 643 | err = mthca_init_arbel(mdev); |
621 | else | 644 | else |
622 | return mthca_init_tavor(mdev); | 645 | err = mthca_init_tavor(mdev); |
646 | |||
647 | if (err) | ||
648 | return err; | ||
649 | |||
650 | err = mthca_QUERY_ADAPTER(mdev, &adapter, &status); | ||
651 | if (err) { | ||
652 | mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n"); | ||
653 | goto err_close; | ||
654 | } | ||
655 | if (status) { | ||
656 | mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, " | ||
657 | "aborting.\n", status); | ||
658 | err = -EINVAL; | ||
659 | goto err_close; | ||
660 | } | ||
661 | |||
662 | mdev->eq_table.inta_pin = adapter.inta_pin; | ||
663 | mdev->rev_id = adapter.revision_id; | ||
664 | memcpy(mdev->board_id, adapter.board_id, sizeof mdev->board_id); | ||
665 | |||
666 | return 0; | ||
667 | |||
668 | err_close: | ||
669 | mthca_close_hca(mdev); | ||
670 | return err; | ||
623 | } | 671 | } |
624 | 672 | ||
625 | static int __devinit mthca_setup_hca(struct mthca_dev *dev) | 673 | static int __devinit mthca_setup_hca(struct mthca_dev *dev) |
@@ -709,11 +757,18 @@ static int __devinit mthca_setup_hca(struct mthca_dev *dev) | |||
709 | goto err_cmd_poll; | 757 | goto err_cmd_poll; |
710 | } | 758 | } |
711 | 759 | ||
760 | err = mthca_init_srq_table(dev); | ||
761 | if (err) { | ||
762 | mthca_err(dev, "Failed to initialize " | ||
763 | "shared receive queue table, aborting.\n"); | ||
764 | goto err_cq_table_free; | ||
765 | } | ||
766 | |||
712 | err = mthca_init_qp_table(dev); | 767 | err = mthca_init_qp_table(dev); |
713 | if (err) { | 768 | if (err) { |
714 | mthca_err(dev, "Failed to initialize " | 769 | mthca_err(dev, "Failed to initialize " |
715 | "queue pair table, aborting.\n"); | 770 | "queue pair table, aborting.\n"); |
716 | goto err_cq_table_free; | 771 | goto err_srq_table_free; |
717 | } | 772 | } |
718 | 773 | ||
719 | err = mthca_init_av_table(dev); | 774 | err = mthca_init_av_table(dev); |
@@ -738,6 +793,9 @@ err_av_table_free: | |||
738 | err_qp_table_free: | 793 | err_qp_table_free: |
739 | mthca_cleanup_qp_table(dev); | 794 | mthca_cleanup_qp_table(dev); |
740 | 795 | ||
796 | err_srq_table_free: | ||
797 | mthca_cleanup_srq_table(dev); | ||
798 | |||
741 | err_cq_table_free: | 799 | err_cq_table_free: |
742 | mthca_cleanup_cq_table(dev); | 800 | mthca_cleanup_cq_table(dev); |
743 | 801 | ||
@@ -844,33 +902,6 @@ static int __devinit mthca_enable_msi_x(struct mthca_dev *mdev) | |||
844 | return 0; | 902 | return 0; |
845 | } | 903 | } |
846 | 904 | ||
847 | static void mthca_close_hca(struct mthca_dev *mdev) | ||
848 | { | ||
849 | u8 status; | ||
850 | |||
851 | mthca_CLOSE_HCA(mdev, 0, &status); | ||
852 | |||
853 | if (mthca_is_memfree(mdev)) { | ||
854 | mthca_free_icm_table(mdev, mdev->cq_table.table); | ||
855 | mthca_free_icm_table(mdev, mdev->qp_table.rdb_table); | ||
856 | mthca_free_icm_table(mdev, mdev->qp_table.eqp_table); | ||
857 | mthca_free_icm_table(mdev, mdev->qp_table.qp_table); | ||
858 | mthca_free_icm_table(mdev, mdev->mr_table.mpt_table); | ||
859 | mthca_free_icm_table(mdev, mdev->mr_table.mtt_table); | ||
860 | mthca_unmap_eq_icm(mdev); | ||
861 | |||
862 | mthca_UNMAP_ICM_AUX(mdev, &status); | ||
863 | mthca_free_icm(mdev, mdev->fw.arbel.aux_icm); | ||
864 | |||
865 | mthca_UNMAP_FA(mdev, &status); | ||
866 | mthca_free_icm(mdev, mdev->fw.arbel.fw_icm); | ||
867 | |||
868 | if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM)) | ||
869 | mthca_DISABLE_LAM(mdev, &status); | ||
870 | } else | ||
871 | mthca_SYS_DIS(mdev, &status); | ||
872 | } | ||
873 | |||
874 | /* Types of supported HCA */ | 905 | /* Types of supported HCA */ |
875 | enum { | 906 | enum { |
876 | TAVOR, /* MT23108 */ | 907 | TAVOR, /* MT23108 */ |
@@ -887,9 +918,9 @@ static struct { | |||
887 | int is_memfree; | 918 | int is_memfree; |
888 | int is_pcie; | 919 | int is_pcie; |
889 | } mthca_hca_table[] = { | 920 | } mthca_hca_table[] = { |
890 | [TAVOR] = { .latest_fw = MTHCA_FW_VER(3, 3, 2), .is_memfree = 0, .is_pcie = 0 }, | 921 | [TAVOR] = { .latest_fw = MTHCA_FW_VER(3, 3, 3), .is_memfree = 0, .is_pcie = 0 }, |
891 | [ARBEL_COMPAT] = { .latest_fw = MTHCA_FW_VER(4, 6, 2), .is_memfree = 0, .is_pcie = 1 }, | 922 | [ARBEL_COMPAT] = { .latest_fw = MTHCA_FW_VER(4, 7, 0), .is_memfree = 0, .is_pcie = 1 }, |
892 | [ARBEL_NATIVE] = { .latest_fw = MTHCA_FW_VER(5, 0, 1), .is_memfree = 1, .is_pcie = 1 }, | 923 | [ARBEL_NATIVE] = { .latest_fw = MTHCA_FW_VER(5, 1, 0), .is_memfree = 1, .is_pcie = 1 }, |
893 | [SINAI] = { .latest_fw = MTHCA_FW_VER(1, 0, 1), .is_memfree = 1, .is_pcie = 1 } | 924 | [SINAI] = { .latest_fw = MTHCA_FW_VER(1, 0, 1), .is_memfree = 1, .is_pcie = 1 } |
894 | }; | 925 | }; |
895 | 926 | ||
@@ -1051,6 +1082,7 @@ err_cleanup: | |||
1051 | mthca_cleanup_mcg_table(mdev); | 1082 | mthca_cleanup_mcg_table(mdev); |
1052 | mthca_cleanup_av_table(mdev); | 1083 | mthca_cleanup_av_table(mdev); |
1053 | mthca_cleanup_qp_table(mdev); | 1084 | mthca_cleanup_qp_table(mdev); |
1085 | mthca_cleanup_srq_table(mdev); | ||
1054 | mthca_cleanup_cq_table(mdev); | 1086 | mthca_cleanup_cq_table(mdev); |
1055 | mthca_cmd_use_polling(mdev); | 1087 | mthca_cmd_use_polling(mdev); |
1056 | mthca_cleanup_eq_table(mdev); | 1088 | mthca_cleanup_eq_table(mdev); |
@@ -1100,6 +1132,7 @@ static void __devexit mthca_remove_one(struct pci_dev *pdev) | |||
1100 | mthca_cleanup_mcg_table(mdev); | 1132 | mthca_cleanup_mcg_table(mdev); |
1101 | mthca_cleanup_av_table(mdev); | 1133 | mthca_cleanup_av_table(mdev); |
1102 | mthca_cleanup_qp_table(mdev); | 1134 | mthca_cleanup_qp_table(mdev); |
1135 | mthca_cleanup_srq_table(mdev); | ||
1103 | mthca_cleanup_cq_table(mdev); | 1136 | mthca_cleanup_cq_table(mdev); |
1104 | mthca_cmd_use_polling(mdev); | 1137 | mthca_cmd_use_polling(mdev); |
1105 | mthca_cleanup_eq_table(mdev); | 1138 | mthca_cleanup_eq_table(mdev); |
diff --git a/drivers/infiniband/hw/mthca/mthca_mcg.c b/drivers/infiniband/hw/mthca/mthca_mcg.c index 5be7d949dbf6..a2707605f4c8 100644 --- a/drivers/infiniband/hw/mthca/mthca_mcg.c +++ b/drivers/infiniband/hw/mthca/mthca_mcg.c | |||
@@ -42,10 +42,10 @@ enum { | |||
42 | }; | 42 | }; |
43 | 43 | ||
44 | struct mthca_mgm { | 44 | struct mthca_mgm { |
45 | u32 next_gid_index; | 45 | __be32 next_gid_index; |
46 | u32 reserved[3]; | 46 | u32 reserved[3]; |
47 | u8 gid[16]; | 47 | u8 gid[16]; |
48 | u32 qp[MTHCA_QP_PER_MGM]; | 48 | __be32 qp[MTHCA_QP_PER_MGM]; |
49 | }; | 49 | }; |
50 | 50 | ||
51 | static const u8 zero_gid[16]; /* automatically initialized to 0 */ | 51 | static const u8 zero_gid[16]; /* automatically initialized to 0 */ |
@@ -94,10 +94,14 @@ static int find_mgm(struct mthca_dev *dev, | |||
94 | if (0) | 94 | if (0) |
95 | mthca_dbg(dev, "Hash for %04x:%04x:%04x:%04x:" | 95 | mthca_dbg(dev, "Hash for %04x:%04x:%04x:%04x:" |
96 | "%04x:%04x:%04x:%04x is %04x\n", | 96 | "%04x:%04x:%04x:%04x is %04x\n", |
97 | be16_to_cpu(((u16 *) gid)[0]), be16_to_cpu(((u16 *) gid)[1]), | 97 | be16_to_cpu(((__be16 *) gid)[0]), |
98 | be16_to_cpu(((u16 *) gid)[2]), be16_to_cpu(((u16 *) gid)[3]), | 98 | be16_to_cpu(((__be16 *) gid)[1]), |
99 | be16_to_cpu(((u16 *) gid)[4]), be16_to_cpu(((u16 *) gid)[5]), | 99 | be16_to_cpu(((__be16 *) gid)[2]), |
100 | be16_to_cpu(((u16 *) gid)[6]), be16_to_cpu(((u16 *) gid)[7]), | 100 | be16_to_cpu(((__be16 *) gid)[3]), |
101 | be16_to_cpu(((__be16 *) gid)[4]), | ||
102 | be16_to_cpu(((__be16 *) gid)[5]), | ||
103 | be16_to_cpu(((__be16 *) gid)[6]), | ||
104 | be16_to_cpu(((__be16 *) gid)[7]), | ||
101 | *hash); | 105 | *hash); |
102 | 106 | ||
103 | *index = *hash; | 107 | *index = *hash; |
@@ -258,14 +262,14 @@ int mthca_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid) | |||
258 | if (index == -1) { | 262 | if (index == -1) { |
259 | mthca_err(dev, "MGID %04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x " | 263 | mthca_err(dev, "MGID %04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x " |
260 | "not found\n", | 264 | "not found\n", |
261 | be16_to_cpu(((u16 *) gid->raw)[0]), | 265 | be16_to_cpu(((__be16 *) gid->raw)[0]), |
262 | be16_to_cpu(((u16 *) gid->raw)[1]), | 266 | be16_to_cpu(((__be16 *) gid->raw)[1]), |
263 | be16_to_cpu(((u16 *) gid->raw)[2]), | 267 | be16_to_cpu(((__be16 *) gid->raw)[2]), |
264 | be16_to_cpu(((u16 *) gid->raw)[3]), | 268 | be16_to_cpu(((__be16 *) gid->raw)[3]), |
265 | be16_to_cpu(((u16 *) gid->raw)[4]), | 269 | be16_to_cpu(((__be16 *) gid->raw)[4]), |
266 | be16_to_cpu(((u16 *) gid->raw)[5]), | 270 | be16_to_cpu(((__be16 *) gid->raw)[5]), |
267 | be16_to_cpu(((u16 *) gid->raw)[6]), | 271 | be16_to_cpu(((__be16 *) gid->raw)[6]), |
268 | be16_to_cpu(((u16 *) gid->raw)[7])); | 272 | be16_to_cpu(((__be16 *) gid->raw)[7])); |
269 | err = -EINVAL; | 273 | err = -EINVAL; |
270 | goto out; | 274 | goto out; |
271 | } | 275 | } |
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c index 2a8646150355..1827400f189b 100644 --- a/drivers/infiniband/hw/mthca/mthca_memfree.c +++ b/drivers/infiniband/hw/mthca/mthca_memfree.c | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * | 5 | * |
5 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -285,6 +286,7 @@ struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev, | |||
285 | { | 286 | { |
286 | struct mthca_icm_table *table; | 287 | struct mthca_icm_table *table; |
287 | int num_icm; | 288 | int num_icm; |
289 | unsigned chunk_size; | ||
288 | int i; | 290 | int i; |
289 | u8 status; | 291 | u8 status; |
290 | 292 | ||
@@ -305,7 +307,11 @@ struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev, | |||
305 | table->icm[i] = NULL; | 307 | table->icm[i] = NULL; |
306 | 308 | ||
307 | for (i = 0; i * MTHCA_TABLE_CHUNK_SIZE < reserved * obj_size; ++i) { | 309 | for (i = 0; i * MTHCA_TABLE_CHUNK_SIZE < reserved * obj_size; ++i) { |
308 | table->icm[i] = mthca_alloc_icm(dev, MTHCA_TABLE_CHUNK_SIZE >> PAGE_SHIFT, | 310 | chunk_size = MTHCA_TABLE_CHUNK_SIZE; |
311 | if ((i + 1) * MTHCA_TABLE_CHUNK_SIZE > nobj * obj_size) | ||
312 | chunk_size = nobj * obj_size - i * MTHCA_TABLE_CHUNK_SIZE; | ||
313 | |||
314 | table->icm[i] = mthca_alloc_icm(dev, chunk_size >> PAGE_SHIFT, | ||
309 | (use_lowmem ? GFP_KERNEL : GFP_HIGHUSER) | | 315 | (use_lowmem ? GFP_KERNEL : GFP_HIGHUSER) | |
310 | __GFP_NOWARN); | 316 | __GFP_NOWARN); |
311 | if (!table->icm[i]) | 317 | if (!table->icm[i]) |
@@ -481,7 +487,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar, | |||
481 | } | 487 | } |
482 | } | 488 | } |
483 | 489 | ||
484 | int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, u32 **db) | 490 | int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, __be32 **db) |
485 | { | 491 | { |
486 | int group; | 492 | int group; |
487 | int start, end, dir; | 493 | int start, end, dir; |
@@ -564,7 +570,7 @@ found: | |||
564 | 570 | ||
565 | page->db_rec[j] = cpu_to_be64((qn << 8) | (type << 5)); | 571 | page->db_rec[j] = cpu_to_be64((qn << 8) | (type << 5)); |
566 | 572 | ||
567 | *db = (u32 *) &page->db_rec[j]; | 573 | *db = (__be32 *) &page->db_rec[j]; |
568 | 574 | ||
569 | out: | 575 | out: |
570 | up(&dev->db_tab->mutex); | 576 | up(&dev->db_tab->mutex); |
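The mthca_alloc_icm_table() change above stops over-allocating the final chunk of a reserved-object table: the last iteration is trimmed to whatever remains of nobj * obj_size. A worked example with made-up sizes (MTHCA_TABLE_CHUNK_SIZE is treated as 256 KB purely for illustration):

	/*
	 * Assume MTHCA_TABLE_CHUNK_SIZE = 256 KB and the reserved objects
	 * need nobj * obj_size = 600 KB of context memory:
	 *
	 *   i = 0: (0 + 1) * 256 KB <= 600 KB  ->  chunk_size = 256 KB
	 *   i = 1: (1 + 1) * 256 KB <= 600 KB  ->  chunk_size = 256 KB
	 *   i = 2: (2 + 1) * 256 KB  > 600 KB  ->  chunk_size = 600 KB - 512 KB
	 *                                                     = 88 KB
	 *
	 * Previously the third iteration would have mapped a full 256 KB of
	 * ICM that the table can never address.
	 */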
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.h b/drivers/infiniband/hw/mthca/mthca_memfree.h index 4761d844cb5f..bafa51544aa3 100644 --- a/drivers/infiniband/hw/mthca/mthca_memfree.h +++ b/drivers/infiniband/hw/mthca/mthca_memfree.h | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * | 5 | * |
5 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -137,7 +138,7 @@ enum { | |||
137 | 138 | ||
138 | struct mthca_db_page { | 139 | struct mthca_db_page { |
139 | DECLARE_BITMAP(used, MTHCA_DB_REC_PER_PAGE); | 140 | DECLARE_BITMAP(used, MTHCA_DB_REC_PER_PAGE); |
140 | u64 *db_rec; | 141 | __be64 *db_rec; |
141 | dma_addr_t mapping; | 142 | dma_addr_t mapping; |
142 | }; | 143 | }; |
143 | 144 | ||
@@ -172,7 +173,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar, | |||
172 | 173 | ||
173 | int mthca_init_db_tab(struct mthca_dev *dev); | 174 | int mthca_init_db_tab(struct mthca_dev *dev); |
174 | void mthca_cleanup_db_tab(struct mthca_dev *dev); | 175 | void mthca_cleanup_db_tab(struct mthca_dev *dev); |
175 | int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, u32 **db); | 176 | int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, __be32 **db); |
176 | void mthca_free_db(struct mthca_dev *dev, int type, int db_index); | 177 | void mthca_free_db(struct mthca_dev *dev, int type, int db_index); |
177 | 178 | ||
178 | #endif /* MTHCA_MEMFREE_H */ | 179 | #endif /* MTHCA_MEMFREE_H */ |
diff --git a/drivers/infiniband/hw/mthca/mthca_mr.c b/drivers/infiniband/hw/mthca/mthca_mr.c index cbe50feaf680..1f97a44477f5 100644 --- a/drivers/infiniband/hw/mthca/mthca_mr.c +++ b/drivers/infiniband/hw/mthca/mthca_mr.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -50,18 +51,18 @@ struct mthca_mtt { | |||
50 | * Must be packed because mtt_seg is 64 bits but only aligned to 32 bits. | 51 | * Must be packed because mtt_seg is 64 bits but only aligned to 32 bits. |
51 | */ | 52 | */ |
52 | struct mthca_mpt_entry { | 53 | struct mthca_mpt_entry { |
53 | u32 flags; | 54 | __be32 flags; |
54 | u32 page_size; | 55 | __be32 page_size; |
55 | u32 key; | 56 | __be32 key; |
56 | u32 pd; | 57 | __be32 pd; |
57 | u64 start; | 58 | __be64 start; |
58 | u64 length; | 59 | __be64 length; |
59 | u32 lkey; | 60 | __be32 lkey; |
60 | u32 window_count; | 61 | __be32 window_count; |
61 | u32 window_count_limit; | 62 | __be32 window_count_limit; |
62 | u64 mtt_seg; | 63 | __be64 mtt_seg; |
63 | u32 mtt_sz; /* Arbel only */ | 64 | __be32 mtt_sz; /* Arbel only */ |
64 | u32 reserved[2]; | 65 | u32 reserved[2]; |
65 | } __attribute__((packed)); | 66 | } __attribute__((packed)); |
66 | 67 | ||
67 | #define MTHCA_MPT_FLAG_SW_OWNS (0xfUL << 28) | 68 | #define MTHCA_MPT_FLAG_SW_OWNS (0xfUL << 28) |
@@ -247,7 +248,7 @@ int mthca_write_mtt(struct mthca_dev *dev, struct mthca_mtt *mtt, | |||
247 | int start_index, u64 *buffer_list, int list_len) | 248 | int start_index, u64 *buffer_list, int list_len) |
248 | { | 249 | { |
249 | struct mthca_mailbox *mailbox; | 250 | struct mthca_mailbox *mailbox; |
250 | u64 *mtt_entry; | 251 | __be64 *mtt_entry; |
251 | int err = 0; | 252 | int err = 0; |
252 | u8 status; | 253 | u8 status; |
253 | int i; | 254 | int i; |
@@ -389,7 +390,7 @@ int mthca_mr_alloc(struct mthca_dev *dev, u32 pd, int buffer_size_shift, | |||
389 | for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) { | 390 | for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) { |
390 | if (i % 4 == 0) | 391 | if (i % 4 == 0) |
391 | printk("[%02x] ", i * 4); | 392 | printk("[%02x] ", i * 4); |
392 | printk(" %08x", be32_to_cpu(((u32 *) mpt_entry)[i])); | 393 | printk(" %08x", be32_to_cpu(((__be32 *) mpt_entry)[i])); |
393 | if ((i + 1) % 4 == 0) | 394 | if ((i + 1) % 4 == 0) |
394 | printk("\n"); | 395 | printk("\n"); |
395 | } | 396 | } |
@@ -458,7 +459,7 @@ int mthca_mr_alloc_phys(struct mthca_dev *dev, u32 pd, | |||
458 | static void mthca_free_region(struct mthca_dev *dev, u32 lkey) | 459 | static void mthca_free_region(struct mthca_dev *dev, u32 lkey) |
459 | { | 460 | { |
460 | mthca_table_put(dev, dev->mr_table.mpt_table, | 461 | mthca_table_put(dev, dev->mr_table.mpt_table, |
461 | arbel_key_to_hw_index(lkey)); | 462 | key_to_hw_index(dev, lkey)); |
462 | 463 | ||
463 | mthca_free(&dev->mr_table.mpt_alloc, key_to_hw_index(dev, lkey)); | 464 | mthca_free(&dev->mr_table.mpt_alloc, key_to_hw_index(dev, lkey)); |
464 | } | 465 | } |
@@ -562,7 +563,7 @@ int mthca_fmr_alloc(struct mthca_dev *dev, u32 pd, | |||
562 | for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) { | 563 | for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) { |
563 | if (i % 4 == 0) | 564 | if (i % 4 == 0) |
564 | printk("[%02x] ", i * 4); | 565 | printk("[%02x] ", i * 4); |
565 | printk(" %08x", be32_to_cpu(((u32 *) mpt_entry)[i])); | 566 | printk(" %08x", be32_to_cpu(((__be32 *) mpt_entry)[i])); |
566 | if ((i + 1) % 4 == 0) | 567 | if ((i + 1) % 4 == 0) |
567 | printk("\n"); | 568 | printk("\n"); |
568 | } | 569 | } |
@@ -669,7 +670,7 @@ int mthca_tavor_map_phys_fmr(struct ib_fmr *ibfmr, u64 *page_list, | |||
669 | mpt_entry.length = cpu_to_be64(list_len * (1ull << fmr->attr.page_size)); | 670 | mpt_entry.length = cpu_to_be64(list_len * (1ull << fmr->attr.page_size)); |
670 | mpt_entry.start = cpu_to_be64(iova); | 671 | mpt_entry.start = cpu_to_be64(iova); |
671 | 672 | ||
672 | writel(mpt_entry.lkey, &fmr->mem.tavor.mpt->key); | 673 | __raw_writel((__force u32) mpt_entry.lkey, &fmr->mem.tavor.mpt->key); |
673 | memcpy_toio(&fmr->mem.tavor.mpt->start, &mpt_entry.start, | 674 | memcpy_toio(&fmr->mem.tavor.mpt->start, &mpt_entry.start, |
674 | offsetof(struct mthca_mpt_entry, window_count) - | 675 | offsetof(struct mthca_mpt_entry, window_count) - |
675 | offsetof(struct mthca_mpt_entry, start)); | 676 | offsetof(struct mthca_mpt_entry, start)); |
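Besides the endianness annotations, the mthca_mr.c hunk fixes mthca_free_region() to go through key_to_hw_index() rather than the Arbel-only conversion, so Tavor-family HCAs now release the correct MPT table entry. For reference, the dispatching helper in mthca_mr.c looks roughly like this (a sketch of the surrounding file, not part of this diff):

	static u32 key_to_hw_index(struct mthca_dev *dev, u32 key)
	{
		/*
		 * Tavor and Arbel derive the hardware MPT index from an
		 * lkey/rkey differently; dispatch on the HCA family.
		 */
		if (mthca_is_memfree(dev))
			return arbel_key_to_hw_index(key);
		else
			return tavor_key_to_hw_index(key);
	}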
diff --git a/drivers/infiniband/hw/mthca/mthca_pd.c b/drivers/infiniband/hw/mthca/mthca_pd.c index c2c899844e98..3dbf06a6e6f4 100644 --- a/drivers/infiniband/hw/mthca/mthca_pd.c +++ b/drivers/infiniband/hw/mthca/mthca_pd.c | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * | 5 | * |
5 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
diff --git a/drivers/infiniband/hw/mthca/mthca_profile.c b/drivers/infiniband/hw/mthca/mthca_profile.c index 4fedc32d5871..0576056b34f4 100644 --- a/drivers/infiniband/hw/mthca/mthca_profile.c +++ b/drivers/infiniband/hw/mthca/mthca_profile.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -101,6 +102,7 @@ u64 mthca_make_profile(struct mthca_dev *dev, | |||
101 | profile[MTHCA_RES_UARC].size = request->uarc_size; | 102 | profile[MTHCA_RES_UARC].size = request->uarc_size; |
102 | 103 | ||
103 | profile[MTHCA_RES_QP].num = request->num_qp; | 104 | profile[MTHCA_RES_QP].num = request->num_qp; |
105 | profile[MTHCA_RES_SRQ].num = request->num_srq; | ||
104 | profile[MTHCA_RES_EQP].num = request->num_qp; | 106 | profile[MTHCA_RES_EQP].num = request->num_qp; |
105 | profile[MTHCA_RES_RDB].num = request->num_qp * request->rdb_per_qp; | 107 | profile[MTHCA_RES_RDB].num = request->num_qp * request->rdb_per_qp; |
106 | profile[MTHCA_RES_CQ].num = request->num_cq; | 108 | profile[MTHCA_RES_CQ].num = request->num_cq; |
diff --git a/drivers/infiniband/hw/mthca/mthca_profile.h b/drivers/infiniband/hw/mthca/mthca_profile.h index 17aef3357661..94641808f97f 100644 --- a/drivers/infiniband/hw/mthca/mthca_profile.h +++ b/drivers/infiniband/hw/mthca/mthca_profile.h | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -41,6 +42,7 @@ | |||
41 | struct mthca_profile { | 42 | struct mthca_profile { |
42 | int num_qp; | 43 | int num_qp; |
43 | int rdb_per_qp; | 44 | int rdb_per_qp; |
45 | int num_srq; | ||
44 | int num_cq; | 46 | int num_cq; |
45 | int num_mcg; | 47 | int num_mcg; |
46 | int num_mpt; | 48 | int num_mpt; |
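The new num_srq field ties the SRQ sizing together: mthca_main.c (hunks further above) fills it from the device limits when MTHCA_FLAG_SRQ is set, and mthca_make_profile() uses it to size the MTHCA_RES_SRQ ICM region. Condensed from those hunks, with only the ordering here being illustrative:

	struct mthca_profile profile = default_profile;

	profile.num_uar = dev_lim.uar_size / PAGE_SIZE;
	if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
		profile.num_srq = dev_lim.max_srqs;

	/*
	 * ...and inside mthca_make_profile():
	 *      profile[MTHCA_RES_SRQ].num = request->num_srq;
	 * which determines how much SRQ context ICM gets carved out.
	 */
	err = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);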
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c index 81919a7b4935..1c1c2e230871 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.c +++ b/drivers/infiniband/hw/mthca/mthca_provider.c | |||
@@ -2,6 +2,8 @@ | |||
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | 3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. |
4 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 4 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
5 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
6 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
5 | * | 7 | * |
6 | * This software is available to you under a choice of one of two | 8 | * This software is available to you under a choice of one of two |
7 | * licenses. You may choose to be licensed under the terms of the GNU | 9 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -34,7 +36,7 @@ | |||
34 | * $Id: mthca_provider.c 1397 2004-12-28 05:09:00Z roland $ | 36 | * $Id: mthca_provider.c 1397 2004-12-28 05:09:00Z roland $ |
35 | */ | 37 | */ |
36 | 38 | ||
37 | #include <ib_smi.h> | 39 | #include <rdma/ib_smi.h> |
38 | #include <linux/mm.h> | 40 | #include <linux/mm.h> |
39 | 41 | ||
40 | #include "mthca_dev.h" | 42 | #include "mthca_dev.h" |
@@ -79,10 +81,10 @@ static int mthca_query_device(struct ib_device *ibdev, | |||
79 | } | 81 | } |
80 | 82 | ||
81 | props->device_cap_flags = mdev->device_cap_flags; | 83 | props->device_cap_flags = mdev->device_cap_flags; |
82 | props->vendor_id = be32_to_cpup((u32 *) (out_mad->data + 36)) & | 84 | props->vendor_id = be32_to_cpup((__be32 *) (out_mad->data + 36)) & |
83 | 0xffffff; | 85 | 0xffffff; |
84 | props->vendor_part_id = be16_to_cpup((u16 *) (out_mad->data + 30)); | 86 | props->vendor_part_id = be16_to_cpup((__be16 *) (out_mad->data + 30)); |
85 | props->hw_ver = be16_to_cpup((u16 *) (out_mad->data + 32)); | 87 | props->hw_ver = be16_to_cpup((__be16 *) (out_mad->data + 32)); |
86 | memcpy(&props->sys_image_guid, out_mad->data + 4, 8); | 88 | memcpy(&props->sys_image_guid, out_mad->data + 4, 8); |
87 | memcpy(&props->node_guid, out_mad->data + 12, 8); | 89 | memcpy(&props->node_guid, out_mad->data + 12, 8); |
88 | 90 | ||
@@ -118,6 +120,8 @@ static int mthca_query_port(struct ib_device *ibdev, | |||
118 | if (!in_mad || !out_mad) | 120 | if (!in_mad || !out_mad) |
119 | goto out; | 121 | goto out; |
120 | 122 | ||
123 | memset(props, 0, sizeof *props); | ||
124 | |||
121 | memset(in_mad, 0, sizeof *in_mad); | 125 | memset(in_mad, 0, sizeof *in_mad); |
122 | in_mad->base_version = 1; | 126 | in_mad->base_version = 1; |
123 | in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED; | 127 | in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED; |
@@ -136,16 +140,17 @@ static int mthca_query_port(struct ib_device *ibdev, | |||
136 | goto out; | 140 | goto out; |
137 | } | 141 | } |
138 | 142 | ||
139 | props->lid = be16_to_cpup((u16 *) (out_mad->data + 16)); | 143 | props->lid = be16_to_cpup((__be16 *) (out_mad->data + 16)); |
140 | props->lmc = out_mad->data[34] & 0x7; | 144 | props->lmc = out_mad->data[34] & 0x7; |
141 | props->sm_lid = be16_to_cpup((u16 *) (out_mad->data + 18)); | 145 | props->sm_lid = be16_to_cpup((__be16 *) (out_mad->data + 18)); |
142 | props->sm_sl = out_mad->data[36] & 0xf; | 146 | props->sm_sl = out_mad->data[36] & 0xf; |
143 | props->state = out_mad->data[32] & 0xf; | 147 | props->state = out_mad->data[32] & 0xf; |
144 | props->phys_state = out_mad->data[33] >> 4; | 148 | props->phys_state = out_mad->data[33] >> 4; |
145 | props->port_cap_flags = be32_to_cpup((u32 *) (out_mad->data + 20)); | 149 | props->port_cap_flags = be32_to_cpup((__be32 *) (out_mad->data + 20)); |
146 | props->gid_tbl_len = to_mdev(ibdev)->limits.gid_table_len; | 150 | props->gid_tbl_len = to_mdev(ibdev)->limits.gid_table_len; |
151 | props->max_msg_sz = 0x80000000; | ||
147 | props->pkey_tbl_len = to_mdev(ibdev)->limits.pkey_table_len; | 152 | props->pkey_tbl_len = to_mdev(ibdev)->limits.pkey_table_len; |
148 | props->qkey_viol_cntr = be16_to_cpup((u16 *) (out_mad->data + 48)); | 153 | props->qkey_viol_cntr = be16_to_cpup((__be16 *) (out_mad->data + 48)); |
149 | props->active_width = out_mad->data[31] & 0xf; | 154 | props->active_width = out_mad->data[31] & 0xf; |
150 | props->active_speed = out_mad->data[35] >> 4; | 155 | props->active_speed = out_mad->data[35] >> 4; |
151 | 156 | ||
@@ -221,7 +226,7 @@ static int mthca_query_pkey(struct ib_device *ibdev, | |||
221 | goto out; | 226 | goto out; |
222 | } | 227 | } |
223 | 228 | ||
224 | *pkey = be16_to_cpu(((u16 *) out_mad->data)[index % 32]); | 229 | *pkey = be16_to_cpu(((__be16 *) out_mad->data)[index % 32]); |
225 | 230 | ||
226 | out: | 231 | out: |
227 | kfree(in_mad); | 232 | kfree(in_mad); |
@@ -420,6 +425,77 @@ static int mthca_ah_destroy(struct ib_ah *ah) | |||
420 | return 0; | 425 | return 0; |
421 | } | 426 | } |
422 | 427 | ||
428 | static struct ib_srq *mthca_create_srq(struct ib_pd *pd, | ||
429 | struct ib_srq_init_attr *init_attr, | ||
430 | struct ib_udata *udata) | ||
431 | { | ||
432 | struct mthca_create_srq ucmd; | ||
433 | struct mthca_ucontext *context = NULL; | ||
434 | struct mthca_srq *srq; | ||
435 | int err; | ||
436 | |||
437 | srq = kmalloc(sizeof *srq, GFP_KERNEL); | ||
438 | if (!srq) | ||
439 | return ERR_PTR(-ENOMEM); | ||
440 | |||
441 | if (pd->uobject) { | ||
442 | context = to_mucontext(pd->uobject->context); | ||
443 | |||
444 | if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) | ||
445 | return ERR_PTR(-EFAULT); | ||
446 | |||
447 | err = mthca_map_user_db(to_mdev(pd->device), &context->uar, | ||
448 | context->db_tab, ucmd.db_index, | ||
449 | ucmd.db_page); | ||
450 | |||
451 | if (err) | ||
452 | goto err_free; | ||
453 | |||
454 | srq->mr.ibmr.lkey = ucmd.lkey; | ||
455 | srq->db_index = ucmd.db_index; | ||
456 | } | ||
457 | |||
458 | err = mthca_alloc_srq(to_mdev(pd->device), to_mpd(pd), | ||
459 | &init_attr->attr, srq); | ||
460 | |||
461 | if (err && pd->uobject) | ||
462 | mthca_unmap_user_db(to_mdev(pd->device), &context->uar, | ||
463 | context->db_tab, ucmd.db_index); | ||
464 | |||
465 | if (err) | ||
466 | goto err_free; | ||
467 | |||
468 | if (context && ib_copy_to_udata(udata, &srq->srqn, sizeof (__u32))) { | ||
469 | mthca_free_srq(to_mdev(pd->device), srq); | ||
470 | err = -EFAULT; | ||
471 | goto err_free; | ||
472 | } | ||
473 | |||
474 | return &srq->ibsrq; | ||
475 | |||
476 | err_free: | ||
477 | kfree(srq); | ||
478 | |||
479 | return ERR_PTR(err); | ||
480 | } | ||
481 | |||
482 | static int mthca_destroy_srq(struct ib_srq *srq) | ||
483 | { | ||
484 | struct mthca_ucontext *context; | ||
485 | |||
486 | if (srq->uobject) { | ||
487 | context = to_mucontext(srq->uobject->context); | ||
488 | |||
489 | mthca_unmap_user_db(to_mdev(srq->device), &context->uar, | ||
490 | context->db_tab, to_msrq(srq)->db_index); | ||
491 | } | ||
492 | |||
493 | mthca_free_srq(to_mdev(srq->device), to_msrq(srq)); | ||
494 | kfree(srq); | ||
495 | |||
496 | return 0; | ||
497 | } | ||
498 | |||
423 | static struct ib_qp *mthca_create_qp(struct ib_pd *pd, | 499 | static struct ib_qp *mthca_create_qp(struct ib_pd *pd, |
424 | struct ib_qp_init_attr *init_attr, | 500 | struct ib_qp_init_attr *init_attr, |
425 | struct ib_udata *udata) | 501 | struct ib_udata *udata) |
@@ -956,14 +1032,22 @@ static ssize_t show_hca(struct class_device *cdev, char *buf) | |||
956 | } | 1032 | } |
957 | } | 1033 | } |
958 | 1034 | ||
1035 | static ssize_t show_board(struct class_device *cdev, char *buf) | ||
1036 | { | ||
1037 | struct mthca_dev *dev = container_of(cdev, struct mthca_dev, ib_dev.class_dev); | ||
1038 | return sprintf(buf, "%.*s\n", MTHCA_BOARD_ID_LEN, dev->board_id); | ||
1039 | } | ||
1040 | |||
959 | static CLASS_DEVICE_ATTR(hw_rev, S_IRUGO, show_rev, NULL); | 1041 | static CLASS_DEVICE_ATTR(hw_rev, S_IRUGO, show_rev, NULL); |
960 | static CLASS_DEVICE_ATTR(fw_ver, S_IRUGO, show_fw_ver, NULL); | 1042 | static CLASS_DEVICE_ATTR(fw_ver, S_IRUGO, show_fw_ver, NULL); |
961 | static CLASS_DEVICE_ATTR(hca_type, S_IRUGO, show_hca, NULL); | 1043 | static CLASS_DEVICE_ATTR(hca_type, S_IRUGO, show_hca, NULL); |
1044 | static CLASS_DEVICE_ATTR(board_id, S_IRUGO, show_board, NULL); | ||
962 | 1045 | ||
963 | static struct class_device_attribute *mthca_class_attributes[] = { | 1046 | static struct class_device_attribute *mthca_class_attributes[] = { |
964 | &class_device_attr_hw_rev, | 1047 | &class_device_attr_hw_rev, |
965 | &class_device_attr_fw_ver, | 1048 | &class_device_attr_fw_ver, |
966 | &class_device_attr_hca_type | 1049 | &class_device_attr_hca_type, |
1050 | &class_device_attr_board_id | ||
967 | }; | 1051 | }; |
968 | 1052 | ||
969 | int mthca_register_device(struct mthca_dev *dev) | 1053 | int mthca_register_device(struct mthca_dev *dev) |
@@ -990,6 +1074,17 @@ int mthca_register_device(struct mthca_dev *dev) | |||
990 | dev->ib_dev.dealloc_pd = mthca_dealloc_pd; | 1074 | dev->ib_dev.dealloc_pd = mthca_dealloc_pd; |
991 | dev->ib_dev.create_ah = mthca_ah_create; | 1075 | dev->ib_dev.create_ah = mthca_ah_create; |
992 | dev->ib_dev.destroy_ah = mthca_ah_destroy; | 1076 | dev->ib_dev.destroy_ah = mthca_ah_destroy; |
1077 | |||
1078 | if (dev->mthca_flags & MTHCA_FLAG_SRQ) { | ||
1079 | dev->ib_dev.create_srq = mthca_create_srq; | ||
1080 | dev->ib_dev.destroy_srq = mthca_destroy_srq; | ||
1081 | |||
1082 | if (mthca_is_memfree(dev)) | ||
1083 | dev->ib_dev.post_srq_recv = mthca_arbel_post_srq_recv; | ||
1084 | else | ||
1085 | dev->ib_dev.post_srq_recv = mthca_tavor_post_srq_recv; | ||
1086 | } | ||
1087 | |||
993 | dev->ib_dev.create_qp = mthca_create_qp; | 1088 | dev->ib_dev.create_qp = mthca_create_qp; |
994 | dev->ib_dev.modify_qp = mthca_modify_qp; | 1089 | dev->ib_dev.modify_qp = mthca_modify_qp; |
995 | dev->ib_dev.destroy_qp = mthca_destroy_qp; | 1090 | dev->ib_dev.destroy_qp = mthca_destroy_qp; |
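With create_srq, destroy_srq and post_srq_recv registered only when MTHCA_FLAG_SRQ is set, upper-layer kernel code reaches the new mthca entry points through the generic verbs wrappers. A rough consumer sketch follows; the ib_srq_init_attr field names assume the rdma/ib_verbs.h of this era and the scatter list is left empty for brevity, so treat it as illustrative rather than a tested ULP:

	struct ib_srq_init_attr srq_init_attr = {
		.attr = {
			.max_wr  = 128,	/* example sizes, not driver limits */
			.max_sge = 1,
		},
	};
	struct ib_recv_wr recv_wr = { .wr_id = 1 };
	struct ib_recv_wr *bad_recv_wr;
	struct ib_srq *srq;
	int err;

	srq = ib_create_srq(pd, &srq_init_attr);
	if (IS_ERR(srq))
		return PTR_ERR(srq);

	/*
	 * Receives are posted to the shared queue instead of each QP's
	 * receive queue; on mthca this lands in mthca_tavor_post_srq_recv()
	 * or mthca_arbel_post_srq_recv().
	 */
	err = ib_post_srq_recv(srq, &recv_wr, &bad_recv_wr);

The board_id attribute added just above typically shows up next to hw_rev, fw_ver and hca_type under /sys/class/infiniband/<hca>/.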
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.h b/drivers/infiniband/hw/mthca/mthca_provider.h index 1d032791cc8b..bcd4b01a339c 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.h +++ b/drivers/infiniband/hw/mthca/mthca_provider.h | |||
@@ -1,6 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
4 | * | 5 | * |
5 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -36,8 +37,8 @@ | |||
36 | #ifndef MTHCA_PROVIDER_H | 37 | #ifndef MTHCA_PROVIDER_H |
37 | #define MTHCA_PROVIDER_H | 38 | #define MTHCA_PROVIDER_H |
38 | 39 | ||
39 | #include <ib_verbs.h> | 40 | #include <rdma/ib_verbs.h> |
40 | #include <ib_pack.h> | 41 | #include <rdma/ib_pack.h> |
41 | 42 | ||
42 | #define MTHCA_MPT_FLAG_ATOMIC (1 << 14) | 43 | #define MTHCA_MPT_FLAG_ATOMIC (1 << 14) |
43 | #define MTHCA_MPT_FLAG_REMOTE_WRITE (1 << 13) | 44 | #define MTHCA_MPT_FLAG_REMOTE_WRITE (1 << 13) |
@@ -50,6 +51,11 @@ struct mthca_buf_list { | |||
50 | DECLARE_PCI_UNMAP_ADDR(mapping) | 51 | DECLARE_PCI_UNMAP_ADDR(mapping) |
51 | }; | 52 | }; |
52 | 53 | ||
54 | union mthca_buf { | ||
55 | struct mthca_buf_list direct; | ||
56 | struct mthca_buf_list *page_list; | ||
57 | }; | ||
58 | |||
53 | struct mthca_uar { | 59 | struct mthca_uar { |
54 | unsigned long pfn; | 60 | unsigned long pfn; |
55 | int index; | 61 | int index; |
@@ -181,19 +187,39 @@ struct mthca_cq { | |||
181 | 187 | ||
182 | /* Next fields are Arbel only */ | 188 | /* Next fields are Arbel only */ |
183 | int set_ci_db_index; | 189 | int set_ci_db_index; |
184 | u32 *set_ci_db; | 190 | __be32 *set_ci_db; |
185 | int arm_db_index; | 191 | int arm_db_index; |
186 | u32 *arm_db; | 192 | __be32 *arm_db; |
187 | int arm_sn; | 193 | int arm_sn; |
188 | 194 | ||
189 | union { | 195 | union mthca_buf queue; |
190 | struct mthca_buf_list direct; | ||
191 | struct mthca_buf_list *page_list; | ||
192 | } queue; | ||
193 | struct mthca_mr mr; | 196 | struct mthca_mr mr; |
194 | wait_queue_head_t wait; | 197 | wait_queue_head_t wait; |
195 | }; | 198 | }; |
196 | 199 | ||
200 | struct mthca_srq { | ||
201 | struct ib_srq ibsrq; | ||
202 | spinlock_t lock; | ||
203 | atomic_t refcount; | ||
204 | int srqn; | ||
205 | int max; | ||
206 | int max_gs; | ||
207 | int wqe_shift; | ||
208 | int first_free; | ||
209 | int last_free; | ||
210 | u16 counter; /* Arbel only */ | ||
211 | int db_index; /* Arbel only */ | ||
212 | __be32 *db; /* Arbel only */ | ||
213 | void *last; | ||
214 | |||
215 | int is_direct; | ||
216 | u64 *wrid; | ||
217 | union mthca_buf queue; | ||
218 | struct mthca_mr mr; | ||
219 | |||
220 | wait_queue_head_t wait; | ||
221 | }; | ||
222 | |||
197 | struct mthca_wq { | 223 | struct mthca_wq { |
198 | spinlock_t lock; | 224 | spinlock_t lock; |
199 | int max; | 225 | int max; |
@@ -206,7 +232,7 @@ struct mthca_wq { | |||
206 | int wqe_shift; | 232 | int wqe_shift; |
207 | 233 | ||
208 | int db_index; /* Arbel only */ | 234 | int db_index; /* Arbel only */ |
209 | u32 *db; | 235 | __be32 *db; |
210 | }; | 236 | }; |
211 | 237 | ||
212 | struct mthca_qp { | 238 | struct mthca_qp { |
@@ -227,10 +253,7 @@ struct mthca_qp { | |||
227 | int send_wqe_offset; | 253 | int send_wqe_offset; |
228 | 254 | ||
229 | u64 *wrid; | 255 | u64 *wrid; |
230 | union { | 256 | union mthca_buf queue; |
231 | struct mthca_buf_list direct; | ||
232 | struct mthca_buf_list *page_list; | ||
233 | } queue; | ||
234 | 257 | ||
235 | wait_queue_head_t wait; | 258 | wait_queue_head_t wait; |
236 | }; | 259 | }; |
@@ -277,6 +300,11 @@ static inline struct mthca_cq *to_mcq(struct ib_cq *ibcq) | |||
277 | return container_of(ibcq, struct mthca_cq, ibcq); | 300 | return container_of(ibcq, struct mthca_cq, ibcq); |
278 | } | 301 | } |
279 | 302 | ||
303 | static inline struct mthca_srq *to_msrq(struct ib_srq *ibsrq) | ||
304 | { | ||
305 | return container_of(ibsrq, struct mthca_srq, ibsrq); | ||
306 | } | ||
307 | |||
280 | static inline struct mthca_qp *to_mqp(struct ib_qp *ibqp) | 308 | static inline struct mthca_qp *to_mqp(struct ib_qp *ibqp) |
281 | { | 309 | { |
282 | return container_of(ibqp, struct mthca_qp, ibqp); | 310 | return container_of(ibqp, struct mthca_qp, ibqp); |
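The new to_msrq() helper follows the same pattern as the existing to_mcq()/to_mqp() conversions: the core-visible ib_srq is embedded inside the driver-private mthca_srq, and container_of() walks back from a pointer to the embedded member to the enclosing structure. A self-contained illustration (struct names are stand-ins, and container_of is redefined here for userspace):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *) ((char *) (ptr) - offsetof(type, member)))

struct ib_srq { int srq_num; };                          /* generic, core-visible part */
struct my_srq { int wqe_shift; struct ib_srq ibsrq; };   /* driver-private wrapper      */

static struct my_srq *to_msrq(struct ib_srq *ibsrq)
{
	return container_of(ibsrq, struct my_srq, ibsrq);
}

int main(void)
{
	struct my_srq s = { .wqe_shift = 5, .ibsrq = { .srq_num = 42 } };
	struct ib_srq *core = &s.ibsrq;                  /* what the core hands back */
	printf("wqe_shift = %d\n", to_msrq(core)->wqe_shift);
	return 0;
}

Note that the embedded member need not be first; offsetof() handles the general case, which a plain cast would not.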
diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c index f7126b14d5ae..0164b84d4ec6 100644 --- a/drivers/infiniband/hw/mthca/mthca_qp.c +++ b/drivers/infiniband/hw/mthca/mthca_qp.c | |||
@@ -1,6 +1,8 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 3 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
5 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
4 | * | 6 | * |
5 | * This software is available to you under a choice of one of two | 7 | * This software is available to you under a choice of one of two |
6 | * licenses. You may choose to be licensed under the terms of the GNU | 8 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -35,13 +37,14 @@ | |||
35 | 37 | ||
36 | #include <linux/init.h> | 38 | #include <linux/init.h> |
37 | 39 | ||
38 | #include <ib_verbs.h> | 40 | #include <rdma/ib_verbs.h> |
39 | #include <ib_cache.h> | 41 | #include <rdma/ib_cache.h> |
40 | #include <ib_pack.h> | 42 | #include <rdma/ib_pack.h> |
41 | 43 | ||
42 | #include "mthca_dev.h" | 44 | #include "mthca_dev.h" |
43 | #include "mthca_cmd.h" | 45 | #include "mthca_cmd.h" |
44 | #include "mthca_memfree.h" | 46 | #include "mthca_memfree.h" |
47 | #include "mthca_wqe.h" | ||
45 | 48 | ||
46 | enum { | 49 | enum { |
47 | MTHCA_MAX_DIRECT_QP_SIZE = 4 * PAGE_SIZE, | 50 | MTHCA_MAX_DIRECT_QP_SIZE = 4 * PAGE_SIZE, |
@@ -95,62 +98,62 @@ enum { | |||
95 | }; | 98 | }; |
96 | 99 | ||
97 | struct mthca_qp_path { | 100 | struct mthca_qp_path { |
98 | u32 port_pkey; | 101 | __be32 port_pkey; |
99 | u8 rnr_retry; | 102 | u8 rnr_retry; |
100 | u8 g_mylmc; | 103 | u8 g_mylmc; |
101 | u16 rlid; | 104 | __be16 rlid; |
102 | u8 ackto; | 105 | u8 ackto; |
103 | u8 mgid_index; | 106 | u8 mgid_index; |
104 | u8 static_rate; | 107 | u8 static_rate; |
105 | u8 hop_limit; | 108 | u8 hop_limit; |
106 | u32 sl_tclass_flowlabel; | 109 | __be32 sl_tclass_flowlabel; |
107 | u8 rgid[16]; | 110 | u8 rgid[16]; |
108 | } __attribute__((packed)); | 111 | } __attribute__((packed)); |
109 | 112 | ||
110 | struct mthca_qp_context { | 113 | struct mthca_qp_context { |
111 | u32 flags; | 114 | __be32 flags; |
112 | u32 tavor_sched_queue; /* Reserved on Arbel */ | 115 | __be32 tavor_sched_queue; /* Reserved on Arbel */ |
113 | u8 mtu_msgmax; | 116 | u8 mtu_msgmax; |
114 | u8 rq_size_stride; /* Reserved on Tavor */ | 117 | u8 rq_size_stride; /* Reserved on Tavor */ |
115 | u8 sq_size_stride; /* Reserved on Tavor */ | 118 | u8 sq_size_stride; /* Reserved on Tavor */ |
116 | u8 rlkey_arbel_sched_queue; /* Reserved on Tavor */ | 119 | u8 rlkey_arbel_sched_queue; /* Reserved on Tavor */ |
117 | u32 usr_page; | 120 | __be32 usr_page; |
118 | u32 local_qpn; | 121 | __be32 local_qpn; |
119 | u32 remote_qpn; | 122 | __be32 remote_qpn; |
120 | u32 reserved1[2]; | 123 | u32 reserved1[2]; |
121 | struct mthca_qp_path pri_path; | 124 | struct mthca_qp_path pri_path; |
122 | struct mthca_qp_path alt_path; | 125 | struct mthca_qp_path alt_path; |
123 | u32 rdd; | 126 | __be32 rdd; |
124 | u32 pd; | 127 | __be32 pd; |
125 | u32 wqe_base; | 128 | __be32 wqe_base; |
126 | u32 wqe_lkey; | 129 | __be32 wqe_lkey; |
127 | u32 params1; | 130 | __be32 params1; |
128 | u32 reserved2; | 131 | __be32 reserved2; |
129 | u32 next_send_psn; | 132 | __be32 next_send_psn; |
130 | u32 cqn_snd; | 133 | __be32 cqn_snd; |
131 | u32 snd_wqe_base_l; /* Next send WQE on Tavor */ | 134 | __be32 snd_wqe_base_l; /* Next send WQE on Tavor */ |
132 | u32 snd_db_index; /* (debugging only entries) */ | 135 | __be32 snd_db_index; /* (debugging only entries) */ |
133 | u32 last_acked_psn; | 136 | __be32 last_acked_psn; |
134 | u32 ssn; | 137 | __be32 ssn; |
135 | u32 params2; | 138 | __be32 params2; |
136 | u32 rnr_nextrecvpsn; | 139 | __be32 rnr_nextrecvpsn; |
137 | u32 ra_buff_indx; | 140 | __be32 ra_buff_indx; |
138 | u32 cqn_rcv; | 141 | __be32 cqn_rcv; |
139 | u32 rcv_wqe_base_l; /* Next recv WQE on Tavor */ | 142 | __be32 rcv_wqe_base_l; /* Next recv WQE on Tavor */ |
140 | u32 rcv_db_index; /* (debugging only entries) */ | 143 | __be32 rcv_db_index; /* (debugging only entries) */ |
141 | u32 qkey; | 144 | __be32 qkey; |
142 | u32 srqn; | 145 | __be32 srqn; |
143 | u32 rmsn; | 146 | __be32 rmsn; |
144 | u16 rq_wqe_counter; /* reserved on Tavor */ | 147 | __be16 rq_wqe_counter; /* reserved on Tavor */ |
145 | u16 sq_wqe_counter; /* reserved on Tavor */ | 148 | __be16 sq_wqe_counter; /* reserved on Tavor */ |
146 | u32 reserved3[18]; | 149 | u32 reserved3[18]; |
147 | } __attribute__((packed)); | 150 | } __attribute__((packed)); |
148 | 151 | ||
149 | struct mthca_qp_param { | 152 | struct mthca_qp_param { |
150 | u32 opt_param_mask; | 153 | __be32 opt_param_mask; |
151 | u32 reserved1; | 154 | u32 reserved1; |
152 | struct mthca_qp_context context; | 155 | struct mthca_qp_context context; |
153 | u32 reserved2[62]; | 156 | u32 reserved2[62]; |
154 | } __attribute__((packed)); | 157 | } __attribute__((packed)); |
155 | 158 | ||
156 | enum { | 159 | enum { |
@@ -173,80 +176,6 @@ enum { | |||
173 | MTHCA_QP_OPTPAR_SCHED_QUEUE = 1 << 16 | 176 | MTHCA_QP_OPTPAR_SCHED_QUEUE = 1 << 16 |
174 | }; | 177 | }; |
175 | 178 | ||
176 | enum { | ||
177 | MTHCA_NEXT_DBD = 1 << 7, | ||
178 | MTHCA_NEXT_FENCE = 1 << 6, | ||
179 | MTHCA_NEXT_CQ_UPDATE = 1 << 3, | ||
180 | MTHCA_NEXT_EVENT_GEN = 1 << 2, | ||
181 | MTHCA_NEXT_SOLICIT = 1 << 1, | ||
182 | |||
183 | MTHCA_MLX_VL15 = 1 << 17, | ||
184 | MTHCA_MLX_SLR = 1 << 16 | ||
185 | }; | ||
186 | |||
187 | enum { | ||
188 | MTHCA_INVAL_LKEY = 0x100 | ||
189 | }; | ||
190 | |||
191 | struct mthca_next_seg { | ||
192 | u32 nda_op; /* [31:6] next WQE [4:0] next opcode */ | ||
193 | u32 ee_nds; /* [31:8] next EE [7] DBD [6] F [5:0] next WQE size */ | ||
194 | u32 flags; /* [3] CQ [2] Event [1] Solicit */ | ||
195 | u32 imm; /* immediate data */ | ||
196 | }; | ||
197 | |||
198 | struct mthca_tavor_ud_seg { | ||
199 | u32 reserved1; | ||
200 | u32 lkey; | ||
201 | u64 av_addr; | ||
202 | u32 reserved2[4]; | ||
203 | u32 dqpn; | ||
204 | u32 qkey; | ||
205 | u32 reserved3[2]; | ||
206 | }; | ||
207 | |||
208 | struct mthca_arbel_ud_seg { | ||
209 | u32 av[8]; | ||
210 | u32 dqpn; | ||
211 | u32 qkey; | ||
212 | u32 reserved[2]; | ||
213 | }; | ||
214 | |||
215 | struct mthca_bind_seg { | ||
216 | u32 flags; /* [31] Atomic [30] rem write [29] rem read */ | ||
217 | u32 reserved; | ||
218 | u32 new_rkey; | ||
219 | u32 lkey; | ||
220 | u64 addr; | ||
221 | u64 length; | ||
222 | }; | ||
223 | |||
224 | struct mthca_raddr_seg { | ||
225 | u64 raddr; | ||
226 | u32 rkey; | ||
227 | u32 reserved; | ||
228 | }; | ||
229 | |||
230 | struct mthca_atomic_seg { | ||
231 | u64 swap_add; | ||
232 | u64 compare; | ||
233 | }; | ||
234 | |||
235 | struct mthca_data_seg { | ||
236 | u32 byte_count; | ||
237 | u32 lkey; | ||
238 | u64 addr; | ||
239 | }; | ||
240 | |||
241 | struct mthca_mlx_seg { | ||
242 | u32 nda_op; | ||
243 | u32 nds; | ||
244 | u32 flags; /* [17] VL15 [16] SLR [14:12] static rate | ||
245 | [11:8] SL [3] C [2] E */ | ||
246 | u16 rlid; | ||
247 | u16 vcrc; | ||
248 | }; | ||
249 | |||
250 | static const u8 mthca_opcode[] = { | 179 | static const u8 mthca_opcode[] = { |
251 | [IB_WR_SEND] = MTHCA_OPCODE_SEND, | 180 | [IB_WR_SEND] = MTHCA_OPCODE_SEND, |
252 | [IB_WR_SEND_WITH_IMM] = MTHCA_OPCODE_SEND_IMM, | 181 | [IB_WR_SEND_WITH_IMM] = MTHCA_OPCODE_SEND_IMM, |
@@ -573,12 +502,11 @@ static void init_port(struct mthca_dev *dev, int port) | |||
573 | 502 | ||
574 | memset(¶m, 0, sizeof param); | 503 | memset(¶m, 0, sizeof param); |
575 | 504 | ||
576 | param.enable_1x = 1; | 505 | param.port_width = dev->limits.port_width_cap; |
577 | param.enable_4x = 1; | 506 | param.vl_cap = dev->limits.vl_cap; |
578 | param.vl_cap = dev->limits.vl_cap; | 507 | param.mtu_cap = dev->limits.mtu_cap; |
579 | param.mtu_cap = dev->limits.mtu_cap; | 508 | param.gid_cap = dev->limits.gid_table_len; |
580 | param.gid_cap = dev->limits.gid_table_len; | 509 | param.pkey_cap = dev->limits.pkey_table_len; |
581 | param.pkey_cap = dev->limits.pkey_table_len; | ||
582 | 510 | ||
583 | err = mthca_INIT_IB(dev, ¶m, port, &status); | 511 | err = mthca_INIT_IB(dev, ¶m, port, &status); |
584 | if (err) | 512 | if (err) |
@@ -684,10 +612,13 @@ int mthca_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask) | |||
684 | qp_context->mtu_msgmax = (attr->path_mtu << 5) | 31; | 612 | qp_context->mtu_msgmax = (attr->path_mtu << 5) | 31; |
685 | 613 | ||
686 | if (mthca_is_memfree(dev)) { | 614 | if (mthca_is_memfree(dev)) { |
687 | qp_context->rq_size_stride = | 615 | if (qp->rq.max) |
688 | ((ffs(qp->rq.max) - 1) << 3) | (qp->rq.wqe_shift - 4); | 616 | qp_context->rq_size_stride = long_log2(qp->rq.max) << 3; |
689 | qp_context->sq_size_stride = | 617 | qp_context->rq_size_stride |= qp->rq.wqe_shift - 4; |
690 | ((ffs(qp->sq.max) - 1) << 3) | (qp->sq.wqe_shift - 4); | 618 | |
619 | if (qp->sq.max) | ||
620 | qp_context->sq_size_stride = long_log2(qp->sq.max) << 3; | ||
621 | qp_context->sq_size_stride |= qp->sq.wqe_shift - 4; | ||
691 | } | 622 | } |
692 | 623 | ||
693 | /* leave arbel_sched_queue as 0 */ | 624 | /* leave arbel_sched_queue as 0 */ |
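On mem-free HCAs each queue's geometry is packed into one byte: the upper bits carry log2 of the number of WQEs and the low three bits carry the stride as log2(WQE bytes) minus 4. The reworked code only adds the log2(size) part when the queue actually has entries, for example skipping it for the receive queue of a QP attached to an SRQ, where rq.max is zero. A worked example, assuming a 256-entry queue of 64-byte WQEs:

#include <stdio.h>

/* log2 for exact powers of two, enough for this illustration */
static unsigned ilog2_exact(unsigned v)
{
	unsigned r = 0;
	while (v >>= 1)
		++r;
	return r;
}

int main(void)
{
	unsigned max = 256;                   /* WQEs in the queue (power of two)  */
	unsigned wqe_shift = 6;               /* 64-byte WQE stride                */
	unsigned char size_stride = 0;

	if (max)                              /* an empty queue encodes only stride */
		size_stride = ilog2_exact(max) << 3;
	size_stride |= wqe_shift - 4;

	printf("size_stride = 0x%02x\n", size_stride);   /* 256 x 64B -> 0x42 */
	return 0;
}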
@@ -856,6 +787,9 @@ int mthca_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask) | |||
856 | 787 | ||
857 | qp_context->params2 |= cpu_to_be32(MTHCA_QP_BIT_RSC); | 788 | qp_context->params2 |= cpu_to_be32(MTHCA_QP_BIT_RSC); |
858 | 789 | ||
790 | if (ibqp->srq) | ||
791 | qp_context->params2 |= cpu_to_be32(MTHCA_QP_BIT_RIC); | ||
792 | |||
859 | if (attr_mask & IB_QP_MIN_RNR_TIMER) { | 793 | if (attr_mask & IB_QP_MIN_RNR_TIMER) { |
860 | qp_context->rnr_nextrecvpsn |= cpu_to_be32(attr->min_rnr_timer << 24); | 794 | qp_context->rnr_nextrecvpsn |= cpu_to_be32(attr->min_rnr_timer << 24); |
861 | qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RNR_TIMEOUT); | 795 | qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RNR_TIMEOUT); |
@@ -878,6 +812,10 @@ int mthca_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask) | |||
878 | qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_Q_KEY); | 812 | qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_Q_KEY); |
879 | } | 813 | } |
880 | 814 | ||
815 | if (ibqp->srq) | ||
816 | qp_context->srqn = cpu_to_be32(1 << 24 | | ||
817 | to_msrq(ibqp->srq)->srqn); | ||
818 | |||
881 | err = mthca_MODIFY_QP(dev, state_table[cur_state][new_state].trans, | 819 | err = mthca_MODIFY_QP(dev, state_table[cur_state][new_state].trans, |
882 | qp->qpn, 0, mailbox, 0, &status); | 820 | qp->qpn, 0, mailbox, 0, &status); |
883 | if (status) { | 821 | if (status) { |
@@ -925,10 +863,6 @@ static int mthca_alloc_wqe_buf(struct mthca_dev *dev, | |||
925 | struct mthca_qp *qp) | 863 | struct mthca_qp *qp) |
926 | { | 864 | { |
927 | int size; | 865 | int size; |
928 | int i; | ||
929 | int npages, shift; | ||
930 | dma_addr_t t; | ||
931 | u64 *dma_list = NULL; | ||
932 | int err = -ENOMEM; | 866 | int err = -ENOMEM; |
933 | 867 | ||
934 | size = sizeof (struct mthca_next_seg) + | 868 | size = sizeof (struct mthca_next_seg) + |
@@ -978,116 +912,24 @@ static int mthca_alloc_wqe_buf(struct mthca_dev *dev, | |||
978 | if (!qp->wrid) | 912 | if (!qp->wrid) |
979 | goto err_out; | 913 | goto err_out; |
980 | 914 | ||
981 | if (size <= MTHCA_MAX_DIRECT_QP_SIZE) { | 915 | err = mthca_buf_alloc(dev, size, MTHCA_MAX_DIRECT_QP_SIZE, |
982 | qp->is_direct = 1; | 916 | &qp->queue, &qp->is_direct, pd, 0, &qp->mr); |
983 | npages = 1; | ||
984 | shift = get_order(size) + PAGE_SHIFT; | ||
985 | |||
986 | if (0) | ||
987 | mthca_dbg(dev, "Creating direct QP of size %d (shift %d)\n", | ||
988 | size, shift); | ||
989 | |||
990 | qp->queue.direct.buf = dma_alloc_coherent(&dev->pdev->dev, size, | ||
991 | &t, GFP_KERNEL); | ||
992 | if (!qp->queue.direct.buf) | ||
993 | goto err_out; | ||
994 | |||
995 | pci_unmap_addr_set(&qp->queue.direct, mapping, t); | ||
996 | |||
997 | memset(qp->queue.direct.buf, 0, size); | ||
998 | |||
999 | while (t & ((1 << shift) - 1)) { | ||
1000 | --shift; | ||
1001 | npages *= 2; | ||
1002 | } | ||
1003 | |||
1004 | dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL); | ||
1005 | if (!dma_list) | ||
1006 | goto err_out_free; | ||
1007 | |||
1008 | for (i = 0; i < npages; ++i) | ||
1009 | dma_list[i] = t + i * (1 << shift); | ||
1010 | } else { | ||
1011 | qp->is_direct = 0; | ||
1012 | npages = size / PAGE_SIZE; | ||
1013 | shift = PAGE_SHIFT; | ||
1014 | |||
1015 | if (0) | ||
1016 | mthca_dbg(dev, "Creating indirect QP with %d pages\n", npages); | ||
1017 | |||
1018 | dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL); | ||
1019 | if (!dma_list) | ||
1020 | goto err_out; | ||
1021 | |||
1022 | qp->queue.page_list = kmalloc(npages * | ||
1023 | sizeof *qp->queue.page_list, | ||
1024 | GFP_KERNEL); | ||
1025 | if (!qp->queue.page_list) | ||
1026 | goto err_out; | ||
1027 | |||
1028 | for (i = 0; i < npages; ++i) { | ||
1029 | qp->queue.page_list[i].buf = | ||
1030 | dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE, | ||
1031 | &t, GFP_KERNEL); | ||
1032 | if (!qp->queue.page_list[i].buf) | ||
1033 | goto err_out_free; | ||
1034 | |||
1035 | memset(qp->queue.page_list[i].buf, 0, PAGE_SIZE); | ||
1036 | |||
1037 | pci_unmap_addr_set(&qp->queue.page_list[i], mapping, t); | ||
1038 | dma_list[i] = t; | ||
1039 | } | ||
1040 | } | ||
1041 | |||
1042 | err = mthca_mr_alloc_phys(dev, pd->pd_num, dma_list, shift, | ||
1043 | npages, 0, size, | ||
1044 | MTHCA_MPT_FLAG_LOCAL_READ, | ||
1045 | &qp->mr); | ||
1046 | if (err) | 917 | if (err) |
1047 | goto err_out_free; | 918 | goto err_out; |
1048 | 919 | ||
1049 | kfree(dma_list); | ||
1050 | return 0; | 920 | return 0; |
1051 | 921 | ||
1052 | err_out_free: | 922 | err_out: |
1053 | if (qp->is_direct) { | ||
1054 | dma_free_coherent(&dev->pdev->dev, size, qp->queue.direct.buf, | ||
1055 | pci_unmap_addr(&qp->queue.direct, mapping)); | ||
1056 | } else | ||
1057 | for (i = 0; i < npages; ++i) { | ||
1058 | if (qp->queue.page_list[i].buf) | ||
1059 | dma_free_coherent(&dev->pdev->dev, PAGE_SIZE, | ||
1060 | qp->queue.page_list[i].buf, | ||
1061 | pci_unmap_addr(&qp->queue.page_list[i], | ||
1062 | mapping)); | ||
1063 | |||
1064 | } | ||
1065 | |||
1066 | err_out: | ||
1067 | kfree(qp->wrid); | 923 | kfree(qp->wrid); |
1068 | kfree(dma_list); | ||
1069 | return err; | 924 | return err; |
1070 | } | 925 | } |
1071 | 926 | ||
1072 | static void mthca_free_wqe_buf(struct mthca_dev *dev, | 927 | static void mthca_free_wqe_buf(struct mthca_dev *dev, |
1073 | struct mthca_qp *qp) | 928 | struct mthca_qp *qp) |
1074 | { | 929 | { |
1075 | int i; | 930 | mthca_buf_free(dev, PAGE_ALIGN(qp->send_wqe_offset + |
1076 | int size = PAGE_ALIGN(qp->send_wqe_offset + | 931 | (qp->sq.max << qp->sq.wqe_shift)), |
1077 | (qp->sq.max << qp->sq.wqe_shift)); | 932 | &qp->queue, qp->is_direct, &qp->mr); |
1078 | |||
1079 | if (qp->is_direct) { | ||
1080 | dma_free_coherent(&dev->pdev->dev, size, qp->queue.direct.buf, | ||
1081 | pci_unmap_addr(&qp->queue.direct, mapping)); | ||
1082 | } else { | ||
1083 | for (i = 0; i < size / PAGE_SIZE; ++i) { | ||
1084 | dma_free_coherent(&dev->pdev->dev, PAGE_SIZE, | ||
1085 | qp->queue.page_list[i].buf, | ||
1086 | pci_unmap_addr(&qp->queue.page_list[i], | ||
1087 | mapping)); | ||
1088 | } | ||
1089 | } | ||
1090 | |||
1091 | kfree(qp->wrid); | 933 | kfree(qp->wrid); |
1092 | } | 934 | } |
1093 | 935 | ||
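The open-coded allocator removed here is what the new mthca_buf_alloc()/mthca_buf_free() helpers centralize: if the whole work-queue buffer fits under MTHCA_MAX_DIRECT_QP_SIZE it becomes a single physically contiguous DMA allocation ("direct"), otherwise it falls back to an array of page-sized chunks plus a list of DMA addresses for the HCA's address translation. A stripped-down userspace sketch of just that decision and bookkeeping, with no DMA mapping and no unwinding on partial failure; names are illustrative:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE       4096
#define MAX_DIRECT_SIZE (4 * PAGE_SIZE)

struct buf {
	int    is_direct;
	void  *direct;                  /* one contiguous block   */
	void **page_list;               /* or per-page chunks     */
	int    npages;
};

static int buf_alloc(struct buf *b, size_t size)
{
	if (size <= MAX_DIRECT_SIZE) {
		b->is_direct = 1;
		b->direct    = calloc(1, size);
		return b->direct ? 0 : -1;
	}
	b->is_direct = 0;
	b->npages    = size / PAGE_SIZE;
	b->page_list = calloc(b->npages, sizeof *b->page_list);
	if (!b->page_list)
		return -1;
	for (int i = 0; i < b->npages; ++i)
		if (!(b->page_list[i] = calloc(1, PAGE_SIZE)))
			return -1;      /* a real driver unwinds what it allocated */
	return 0;
}

int main(void)
{
	struct buf b;
	if (!buf_alloc(&b, 6 * PAGE_SIZE))
		printf("direct? %d, pages: %d\n", b.is_direct, b.npages);
	return 0;
}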
@@ -1428,11 +1270,12 @@ void mthca_free_qp(struct mthca_dev *dev, | |||
1428 | * unref the mem-free tables and free the QPN in our table. | 1270 | * unref the mem-free tables and free the QPN in our table. |
1429 | */ | 1271 | */ |
1430 | if (!qp->ibqp.uobject) { | 1272 | if (!qp->ibqp.uobject) { |
1431 | mthca_cq_clean(dev, to_mcq(qp->ibqp.send_cq)->cqn, qp->qpn); | 1273 | mthca_cq_clean(dev, to_mcq(qp->ibqp.send_cq)->cqn, qp->qpn, |
1274 | qp->ibqp.srq ? to_msrq(qp->ibqp.srq) : NULL); | ||
1432 | if (qp->ibqp.send_cq != qp->ibqp.recv_cq) | 1275 | if (qp->ibqp.send_cq != qp->ibqp.recv_cq) |
1433 | mthca_cq_clean(dev, to_mcq(qp->ibqp.recv_cq)->cqn, qp->qpn); | 1276 | mthca_cq_clean(dev, to_mcq(qp->ibqp.recv_cq)->cqn, qp->qpn, |
1277 | qp->ibqp.srq ? to_msrq(qp->ibqp.srq) : NULL); | ||
1434 | 1278 | ||
1435 | mthca_free_mr(dev, &qp->mr); | ||
1436 | mthca_free_memfree(dev, qp); | 1279 | mthca_free_memfree(dev, qp); |
1437 | mthca_free_wqe_buf(dev, qp); | 1280 | mthca_free_wqe_buf(dev, qp); |
1438 | } | 1281 | } |
@@ -1457,6 +1300,7 @@ static int build_mlx_header(struct mthca_dev *dev, struct mthca_sqp *sqp, | |||
1457 | { | 1300 | { |
1458 | int header_size; | 1301 | int header_size; |
1459 | int err; | 1302 | int err; |
1303 | u16 pkey; | ||
1460 | 1304 | ||
1461 | ib_ud_header_init(256, /* assume a MAD */ | 1305 | ib_ud_header_init(256, /* assume a MAD */ |
1462 | sqp->ud_header.grh_present, | 1306 | sqp->ud_header.grh_present, |
@@ -1467,8 +1311,8 @@ static int build_mlx_header(struct mthca_dev *dev, struct mthca_sqp *sqp, | |||
1467 | return err; | 1311 | return err; |
1468 | mlx->flags &= ~cpu_to_be32(MTHCA_NEXT_SOLICIT | 1); | 1312 | mlx->flags &= ~cpu_to_be32(MTHCA_NEXT_SOLICIT | 1); |
1469 | mlx->flags |= cpu_to_be32((!sqp->qp.ibqp.qp_num ? MTHCA_MLX_VL15 : 0) | | 1313 | mlx->flags |= cpu_to_be32((!sqp->qp.ibqp.qp_num ? MTHCA_MLX_VL15 : 0) | |
1470 | (sqp->ud_header.lrh.destination_lid == 0xffff ? | 1314 | (sqp->ud_header.lrh.destination_lid == |
1471 | MTHCA_MLX_SLR : 0) | | 1315 | IB_LID_PERMISSIVE ? MTHCA_MLX_SLR : 0) | |
1472 | (sqp->ud_header.lrh.service_level << 8)); | 1316 | (sqp->ud_header.lrh.service_level << 8)); |
1473 | mlx->rlid = sqp->ud_header.lrh.destination_lid; | 1317 | mlx->rlid = sqp->ud_header.lrh.destination_lid; |
1474 | mlx->vcrc = 0; | 1318 | mlx->vcrc = 0; |
@@ -1488,18 +1332,16 @@ static int build_mlx_header(struct mthca_dev *dev, struct mthca_sqp *sqp, | |||
1488 | } | 1332 | } |
1489 | 1333 | ||
1490 | sqp->ud_header.lrh.virtual_lane = !sqp->qp.ibqp.qp_num ? 15 : 0; | 1334 | sqp->ud_header.lrh.virtual_lane = !sqp->qp.ibqp.qp_num ? 15 : 0; |
1491 | if (sqp->ud_header.lrh.destination_lid == 0xffff) | 1335 | if (sqp->ud_header.lrh.destination_lid == IB_LID_PERMISSIVE) |
1492 | sqp->ud_header.lrh.source_lid = 0xffff; | 1336 | sqp->ud_header.lrh.source_lid = IB_LID_PERMISSIVE; |
1493 | sqp->ud_header.bth.solicited_event = !!(wr->send_flags & IB_SEND_SOLICITED); | 1337 | sqp->ud_header.bth.solicited_event = !!(wr->send_flags & IB_SEND_SOLICITED); |
1494 | if (!sqp->qp.ibqp.qp_num) | 1338 | if (!sqp->qp.ibqp.qp_num) |
1495 | ib_get_cached_pkey(&dev->ib_dev, sqp->port, | 1339 | ib_get_cached_pkey(&dev->ib_dev, sqp->port, |
1496 | sqp->pkey_index, | 1340 | sqp->pkey_index, &pkey); |
1497 | &sqp->ud_header.bth.pkey); | ||
1498 | else | 1341 | else |
1499 | ib_get_cached_pkey(&dev->ib_dev, sqp->port, | 1342 | ib_get_cached_pkey(&dev->ib_dev, sqp->port, |
1500 | wr->wr.ud.pkey_index, | 1343 | wr->wr.ud.pkey_index, &pkey); |
1501 | &sqp->ud_header.bth.pkey); | 1344 | sqp->ud_header.bth.pkey = cpu_to_be16(pkey); |
1502 | cpu_to_be16s(&sqp->ud_header.bth.pkey); | ||
1503 | sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->wr.ud.remote_qpn); | 1345 | sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->wr.ud.remote_qpn); |
1504 | sqp->ud_header.bth.psn = cpu_to_be32((sqp->send_psn++) & ((1 << 24) - 1)); | 1346 | sqp->ud_header.bth.psn = cpu_to_be32((sqp->send_psn++) & ((1 << 24) - 1)); |
1505 | sqp->ud_header.deth.qkey = cpu_to_be32(wr->wr.ud.remote_qkey & 0x80000000 ? | 1347 | sqp->ud_header.deth.qkey = cpu_to_be32(wr->wr.ud.remote_qkey & 0x80000000 ? |
@@ -1742,7 +1584,7 @@ int mthca_tavor_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, | |||
1742 | 1584 | ||
1743 | out: | 1585 | out: |
1744 | if (likely(nreq)) { | 1586 | if (likely(nreq)) { |
1745 | u32 doorbell[2]; | 1587 | __be32 doorbell[2]; |
1746 | 1588 | ||
1747 | doorbell[0] = cpu_to_be32(((qp->sq.next_ind << qp->sq.wqe_shift) + | 1589 | doorbell[0] = cpu_to_be32(((qp->sq.next_ind << qp->sq.wqe_shift) + |
1748 | qp->send_wqe_offset) | f0 | op0); | 1590 | qp->send_wqe_offset) | f0 | op0); |
@@ -1843,7 +1685,7 @@ int mthca_tavor_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr, | |||
1843 | 1685 | ||
1844 | out: | 1686 | out: |
1845 | if (likely(nreq)) { | 1687 | if (likely(nreq)) { |
1846 | u32 doorbell[2]; | 1688 | __be32 doorbell[2]; |
1847 | 1689 | ||
1848 | doorbell[0] = cpu_to_be32((qp->rq.next_ind << qp->rq.wqe_shift) | size0); | 1690 | doorbell[0] = cpu_to_be32((qp->rq.next_ind << qp->rq.wqe_shift) | size0); |
1849 | doorbell[1] = cpu_to_be32((qp->qpn << 8) | nreq); | 1691 | doorbell[1] = cpu_to_be32((qp->qpn << 8) | nreq); |
@@ -2064,7 +1906,7 @@ int mthca_arbel_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, | |||
2064 | 1906 | ||
2065 | out: | 1907 | out: |
2066 | if (likely(nreq)) { | 1908 | if (likely(nreq)) { |
2067 | u32 doorbell[2]; | 1909 | __be32 doorbell[2]; |
2068 | 1910 | ||
2069 | doorbell[0] = cpu_to_be32((nreq << 24) | | 1911 | doorbell[0] = cpu_to_be32((nreq << 24) | |
2070 | ((qp->sq.head & 0xffff) << 8) | | 1912 | ((qp->sq.head & 0xffff) << 8) | |
@@ -2174,19 +2016,25 @@ out: | |||
2174 | } | 2016 | } |
2175 | 2017 | ||
2176 | int mthca_free_err_wqe(struct mthca_dev *dev, struct mthca_qp *qp, int is_send, | 2018 | int mthca_free_err_wqe(struct mthca_dev *dev, struct mthca_qp *qp, int is_send, |
2177 | int index, int *dbd, u32 *new_wqe) | 2019 | int index, int *dbd, __be32 *new_wqe) |
2178 | { | 2020 | { |
2179 | struct mthca_next_seg *next; | 2021 | struct mthca_next_seg *next; |
2180 | 2022 | ||
2023 | /* | ||
2024 | * For SRQs, all WQEs generate a CQE, so we're always at the | ||
2025 | * end of the doorbell chain. | ||
2026 | */ | ||
2027 | if (qp->ibqp.srq) { | ||
2028 | *new_wqe = 0; | ||
2029 | return 0; | ||
2030 | } | ||
2031 | |||
2181 | if (is_send) | 2032 | if (is_send) |
2182 | next = get_send_wqe(qp, index); | 2033 | next = get_send_wqe(qp, index); |
2183 | else | 2034 | else |
2184 | next = get_recv_wqe(qp, index); | 2035 | next = get_recv_wqe(qp, index); |
2185 | 2036 | ||
2186 | if (mthca_is_memfree(dev)) | 2037 | *dbd = !!(next->ee_nds & cpu_to_be32(MTHCA_NEXT_DBD)); |
2187 | *dbd = 1; | ||
2188 | else | ||
2189 | *dbd = !!(next->ee_nds & cpu_to_be32(MTHCA_NEXT_DBD)); | ||
2190 | if (next->ee_nds & cpu_to_be32(0x3f)) | 2038 | if (next->ee_nds & cpu_to_be32(0x3f)) |
2191 | *new_wqe = (next->nda_op & cpu_to_be32(~0x3f)) | | 2039 | *new_wqe = (next->nda_op & cpu_to_be32(~0x3f)) | |
2192 | (next->ee_nds & cpu_to_be32(0x3f)); | 2040 | (next->ee_nds & cpu_to_be32(0x3f)); |
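The test `next->ee_nds & cpu_to_be32(MTHCA_NEXT_DBD)` used above is the usual way to check one flag in a big-endian field: convert the constant to big-endian once instead of byte-swapping every loaded value, which also keeps the expression clean under sparse's endian checking. A tiny illustration, with htonl() standing in for cpu_to_be32() and the flag value taken from mthca_wqe.h below:

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

#define NEXT_DBD (1u << 7)

int main(void)
{
	uint32_t ee_nds = htonl(NEXT_DBD | 0x10);     /* field as the HCA wrote it      */
	int dbd = !!(ee_nds & htonl(NEXT_DBD));       /* swap the constant, not the data */
	printf("dbd = %d\n", dbd);
	return 0;
}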
diff --git a/drivers/infiniband/hw/mthca/mthca_srq.c b/drivers/infiniband/hw/mthca/mthca_srq.c new file mode 100644 index 000000000000..75cd2d84ef12 --- /dev/null +++ b/drivers/infiniband/hw/mthca/mthca_srq.c | |||
@@ -0,0 +1,591 @@ | |||
1 | /* | ||
2 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | ||
3 | * | ||
4 | * This software is available to you under a choice of one of two | ||
5 | * licenses. You may choose to be licensed under the terms of the GNU | ||
6 | * General Public License (GPL) Version 2, available from the file | ||
7 | * COPYING in the main directory of this source tree, or the | ||
8 | * OpenIB.org BSD license below: | ||
9 | * | ||
10 | * Redistribution and use in source and binary forms, with or | ||
11 | * without modification, are permitted provided that the following | ||
12 | * conditions are met: | ||
13 | * | ||
14 | * - Redistributions of source code must retain the above | ||
15 | * copyright notice, this list of conditions and the following | ||
16 | * disclaimer. | ||
17 | * | ||
18 | * - Redistributions in binary form must reproduce the above | ||
19 | * copyright notice, this list of conditions and the following | ||
20 | * disclaimer in the documentation and/or other materials | ||
21 | * provided with the distribution. | ||
22 | * | ||
23 | * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, | ||
24 | * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF | ||
25 | * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND | ||
26 | * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS | ||
27 | * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN | ||
28 | * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN | ||
29 | * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE | ||
30 | * SOFTWARE. | ||
31 | * | ||
32 | * $Id: mthca_srq.c 3047 2005-08-10 03:59:35Z roland $ | ||
33 | */ | ||
34 | |||
35 | #include "mthca_dev.h" | ||
36 | #include "mthca_cmd.h" | ||
37 | #include "mthca_memfree.h" | ||
38 | #include "mthca_wqe.h" | ||
39 | |||
40 | enum { | ||
41 | MTHCA_MAX_DIRECT_SRQ_SIZE = 4 * PAGE_SIZE | ||
42 | }; | ||
43 | |||
44 | struct mthca_tavor_srq_context { | ||
45 | __be64 wqe_base_ds; /* low 6 bits is descriptor size */ | ||
46 | __be32 state_pd; | ||
47 | __be32 lkey; | ||
48 | __be32 uar; | ||
49 | __be32 wqe_cnt; | ||
50 | u32 reserved[2]; | ||
51 | }; | ||
52 | |||
53 | struct mthca_arbel_srq_context { | ||
54 | __be32 state_logsize_srqn; | ||
55 | __be32 lkey; | ||
56 | __be32 db_index; | ||
57 | __be32 logstride_usrpage; | ||
58 | __be64 wqe_base; | ||
59 | __be32 eq_pd; | ||
60 | __be16 limit_watermark; | ||
61 | __be16 wqe_cnt; | ||
62 | u16 reserved1; | ||
63 | __be16 wqe_counter; | ||
64 | u32 reserved2[3]; | ||
65 | }; | ||
66 | |||
67 | static void *get_wqe(struct mthca_srq *srq, int n) | ||
68 | { | ||
69 | if (srq->is_direct) | ||
70 | return srq->queue.direct.buf + (n << srq->wqe_shift); | ||
71 | else | ||
72 | return srq->queue.page_list[(n << srq->wqe_shift) >> PAGE_SHIFT].buf + | ||
73 | ((n << srq->wqe_shift) & (PAGE_SIZE - 1)); | ||
74 | } | ||
75 | |||
76 | /* | ||
77 | * Return a pointer to the location within a WQE that we're using as a | ||
78 | * link when the WQE is in the free list. We use an offset of 4 | ||
79 | * because in the Tavor case, posting a WQE may overwrite the first | ||
80 | * four bytes of the previous WQE. The offset avoids corrupting our | ||
81 | * free list if the WQE has already completed and been put on the free | ||
82 | * list when we post the next WQE. | ||
83 | */ | ||
84 | static inline int *wqe_to_link(void *wqe) | ||
85 | { | ||
86 | return (int *) (wqe + 4); | ||
87 | } | ||
88 | |||
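As the comment above explains, free WQEs are chained through an index stored four bytes into the WQE itself, because posting a new Tavor WQE may overwrite the first four bytes of the previous one. A minimal userspace model of that intrusive free list; sizes and names are illustrative:

#include <stdio.h>

#define NWQE     8
#define WQE_SIZE 64

static _Alignas(8) char queue[NWQE][WQE_SIZE];   /* keep rows int-aligned for the cast */

static int *wqe_to_link(void *wqe)               /* link lives at offset 4 */
{
	return (int *) ((char *) wqe + 4);
}

int main(void)
{
	int first_free = 0, i;

	/* chain every WQE into the free list; the last one ends with -1 */
	for (i = 0; i < NWQE; ++i)
		*wqe_to_link(queue[i]) = (i < NWQE - 1) ? i + 1 : -1;

	/* "post" two receives: pop from the head of the free list */
	for (i = 0; i < 2; ++i) {
		int ind = first_free;
		first_free = *wqe_to_link(queue[ind]);
		printf("posted WQE %d, next free %d\n", ind, first_free);
	}
	return 0;
}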
89 | static void mthca_tavor_init_srq_context(struct mthca_dev *dev, | ||
90 | struct mthca_pd *pd, | ||
91 | struct mthca_srq *srq, | ||
92 | struct mthca_tavor_srq_context *context) | ||
93 | { | ||
94 | memset(context, 0, sizeof *context); | ||
95 | |||
96 | context->wqe_base_ds = cpu_to_be64(1 << (srq->wqe_shift - 4)); | ||
97 | context->state_pd = cpu_to_be32(pd->pd_num); | ||
98 | context->lkey = cpu_to_be32(srq->mr.ibmr.lkey); | ||
99 | |||
100 | if (pd->ibpd.uobject) | ||
101 | context->uar = | ||
102 | cpu_to_be32(to_mucontext(pd->ibpd.uobject->context)->uar.index); | ||
103 | else | ||
104 | context->uar = cpu_to_be32(dev->driver_uar.index); | ||
105 | } | ||
106 | |||
107 | static void mthca_arbel_init_srq_context(struct mthca_dev *dev, | ||
108 | struct mthca_pd *pd, | ||
109 | struct mthca_srq *srq, | ||
110 | struct mthca_arbel_srq_context *context) | ||
111 | { | ||
112 | int logsize; | ||
113 | |||
114 | memset(context, 0, sizeof *context); | ||
115 | |||
116 | logsize = long_log2(srq->max) + srq->wqe_shift; | ||
117 | context->state_logsize_srqn = cpu_to_be32(logsize << 24 | srq->srqn); | ||
118 | context->lkey = cpu_to_be32(srq->mr.ibmr.lkey); | ||
119 | context->db_index = cpu_to_be32(srq->db_index); | ||
120 | context->logstride_usrpage = cpu_to_be32((srq->wqe_shift - 4) << 29); | ||
121 | if (pd->ibpd.uobject) | ||
122 | context->logstride_usrpage |= | ||
123 | cpu_to_be32(to_mucontext(pd->ibpd.uobject->context)->uar.index); | ||
124 | else | ||
125 | context->logstride_usrpage |= cpu_to_be32(dev->driver_uar.index); | ||
126 | context->eq_pd = cpu_to_be32(MTHCA_EQ_ASYNC << 24 | pd->pd_num); | ||
127 | } | ||
128 | |||
129 | static void mthca_free_srq_buf(struct mthca_dev *dev, struct mthca_srq *srq) | ||
130 | { | ||
131 | mthca_buf_free(dev, srq->max << srq->wqe_shift, &srq->queue, | ||
132 | srq->is_direct, &srq->mr); | ||
133 | kfree(srq->wrid); | ||
134 | } | ||
135 | |||
136 | static int mthca_alloc_srq_buf(struct mthca_dev *dev, struct mthca_pd *pd, | ||
137 | struct mthca_srq *srq) | ||
138 | { | ||
139 | struct mthca_data_seg *scatter; | ||
140 | void *wqe; | ||
141 | int err; | ||
142 | int i; | ||
143 | |||
144 | if (pd->ibpd.uobject) | ||
145 | return 0; | ||
146 | |||
147 | srq->wrid = kmalloc(srq->max * sizeof (u64), GFP_KERNEL); | ||
148 | if (!srq->wrid) | ||
149 | return -ENOMEM; | ||
150 | |||
151 | err = mthca_buf_alloc(dev, srq->max << srq->wqe_shift, | ||
152 | MTHCA_MAX_DIRECT_SRQ_SIZE, | ||
153 | &srq->queue, &srq->is_direct, pd, 1, &srq->mr); | ||
154 | if (err) { | ||
155 | kfree(srq->wrid); | ||
156 | return err; | ||
157 | } | ||
158 | |||
159 | /* | ||
160 | * Now initialize the SRQ buffer so that all of the WQEs are | ||
161 | * linked into the list of free WQEs. In addition, set the | ||
162 | * scatter list L_Keys to the sentry value of 0x100. | ||
163 | */ | ||
164 | for (i = 0; i < srq->max; ++i) { | ||
165 | wqe = get_wqe(srq, i); | ||
166 | |||
167 | *wqe_to_link(wqe) = i < srq->max - 1 ? i + 1 : -1; | ||
168 | |||
169 | for (scatter = wqe + sizeof (struct mthca_next_seg); | ||
170 | (void *) scatter < wqe + (1 << srq->wqe_shift); | ||
171 | ++scatter) | ||
172 | scatter->lkey = cpu_to_be32(MTHCA_INVAL_LKEY); | ||
173 | } | ||
174 | |||
175 | return 0; | ||
176 | } | ||
177 | |||
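Besides linking the free list, the buffer setup above stamps every scatter entry with the sentry L_Key 0x100 (MTHCA_INVAL_LKEY) so the hardware never consumes an uninitialized entry; the post-receive paths later in this file do the same thing when a request uses fewer SGEs than max_gs, terminating the list with a zero-length entry carrying the invalid key. A compact sketch of that fill pattern, in host byte order for readability where the driver stores big-endian values:

#include <stdint.h>
#include <stdio.h>

#define INVAL_LKEY 0x100
#define MAX_GS     4

struct data_seg { uint32_t byte_count, lkey; uint64_t addr; };

static void fill_sges(struct data_seg *ds, int nsge)
{
	int i;
	for (i = 0; i < nsge; ++i)
		ds[i] = (struct data_seg) { .byte_count = 4096, .lkey = 1, .addr = 0x1000u * i };
	if (i < MAX_GS)                         /* sentinel: hardware stops here */
		ds[i] = (struct data_seg) { .byte_count = 0, .lkey = INVAL_LKEY, .addr = 0 };
}

int main(void)
{
	struct data_seg ds[MAX_GS];
	fill_sges(ds, 2);
	printf("ds[2].lkey = 0x%x\n", (unsigned) ds[2].lkey);
	return 0;
}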
178 | int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, | ||
179 | struct ib_srq_attr *attr, struct mthca_srq *srq) | ||
180 | { | ||
181 | struct mthca_mailbox *mailbox; | ||
182 | u8 status; | ||
183 | int ds; | ||
184 | int err; | ||
185 | |||
186 | /* Sanity check SRQ size before proceeding */ | ||
187 | if (attr->max_wr > 16 << 20 || attr->max_sge > 64) | ||
188 | return -EINVAL; | ||
189 | |||
190 | srq->max = attr->max_wr; | ||
191 | srq->max_gs = attr->max_sge; | ||
192 | srq->last = NULL; | ||
193 | srq->counter = 0; | ||
194 | |||
195 | if (mthca_is_memfree(dev)) | ||
196 | srq->max = roundup_pow_of_two(srq->max + 1); | ||
197 | |||
198 | ds = min(64UL, | ||
199 | roundup_pow_of_two(sizeof (struct mthca_next_seg) + | ||
200 | srq->max_gs * sizeof (struct mthca_data_seg))); | ||
201 | srq->wqe_shift = long_log2(ds); | ||
202 | |||
203 | srq->srqn = mthca_alloc(&dev->srq_table.alloc); | ||
204 | if (srq->srqn == -1) | ||
205 | return -ENOMEM; | ||
206 | |||
207 | if (mthca_is_memfree(dev)) { | ||
208 | err = mthca_table_get(dev, dev->srq_table.table, srq->srqn); | ||
209 | if (err) | ||
210 | goto err_out; | ||
211 | |||
212 | if (!pd->ibpd.uobject) { | ||
213 | srq->db_index = mthca_alloc_db(dev, MTHCA_DB_TYPE_SRQ, | ||
214 | srq->srqn, &srq->db); | ||
215 | if (srq->db_index < 0) { | ||
216 | err = -ENOMEM; | ||
217 | goto err_out_icm; | ||
218 | } | ||
219 | } | ||
220 | } | ||
221 | |||
222 | mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL); | ||
223 | if (IS_ERR(mailbox)) { | ||
224 | err = PTR_ERR(mailbox); | ||
225 | goto err_out_db; | ||
226 | } | ||
227 | |||
228 | err = mthca_alloc_srq_buf(dev, pd, srq); | ||
229 | if (err) | ||
230 | goto err_out_mailbox; | ||
231 | |||
232 | spin_lock_init(&srq->lock); | ||
233 | atomic_set(&srq->refcount, 1); | ||
234 | init_waitqueue_head(&srq->wait); | ||
235 | |||
236 | if (mthca_is_memfree(dev)) | ||
237 | mthca_arbel_init_srq_context(dev, pd, srq, mailbox->buf); | ||
238 | else | ||
239 | mthca_tavor_init_srq_context(dev, pd, srq, mailbox->buf); | ||
240 | |||
241 | err = mthca_SW2HW_SRQ(dev, mailbox, srq->srqn, &status); | ||
242 | |||
243 | if (err) { | ||
244 | mthca_warn(dev, "SW2HW_SRQ failed (%d)\n", err); | ||
245 | goto err_out_free_buf; | ||
246 | } | ||
247 | if (status) { | ||
248 | mthca_warn(dev, "SW2HW_SRQ returned status 0x%02x\n", | ||
249 | status); | ||
250 | err = -EINVAL; | ||
251 | goto err_out_free_buf; | ||
252 | } | ||
253 | |||
254 | spin_lock_irq(&dev->srq_table.lock); | ||
255 | if (mthca_array_set(&dev->srq_table.srq, | ||
256 | srq->srqn & (dev->limits.num_srqs - 1), | ||
257 | srq)) { | ||
258 | spin_unlock_irq(&dev->srq_table.lock); | ||
259 | goto err_out_free_srq; | ||
260 | } | ||
261 | spin_unlock_irq(&dev->srq_table.lock); | ||
262 | |||
263 | mthca_free_mailbox(dev, mailbox); | ||
264 | |||
265 | srq->first_free = 0; | ||
266 | srq->last_free = srq->max - 1; | ||
267 | |||
268 | return 0; | ||
269 | |||
270 | err_out_free_srq: | ||
271 | err = mthca_HW2SW_SRQ(dev, mailbox, srq->srqn, &status); | ||
272 | if (err) | ||
273 | mthca_warn(dev, "HW2SW_SRQ failed (%d)\n", err); | ||
274 | else if (status) | ||
275 | mthca_warn(dev, "HW2SW_SRQ returned status 0x%02x\n", status); | ||
276 | |||
277 | err_out_free_buf: | ||
278 | if (!pd->ibpd.uobject) | ||
279 | mthca_free_srq_buf(dev, srq); | ||
280 | |||
281 | err_out_mailbox: | ||
282 | mthca_free_mailbox(dev, mailbox); | ||
283 | |||
284 | err_out_db: | ||
285 | if (!pd->ibpd.uobject && mthca_is_memfree(dev)) | ||
286 | mthca_free_db(dev, MTHCA_DB_TYPE_SRQ, srq->db_index); | ||
287 | |||
288 | err_out_icm: | ||
289 | mthca_table_put(dev, dev->srq_table.table, srq->srqn); | ||
290 | |||
291 | err_out: | ||
292 | mthca_free(&dev->srq_table.alloc, srq->srqn); | ||
293 | |||
294 | return err; | ||
295 | } | ||
296 | |||
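mthca_alloc_srq() above derives the WQE stride from the worst-case request, a mthca_next_seg (16 bytes) plus max_sge data segments (16 bytes each), rounded up to a power of two, and on mem-free HCAs rounds the queue depth itself up to a power of two after adding one spare entry. A worked example of that sizing arithmetic for max_wr = 1000, max_sge = 2, ignoring the 64-byte clamp in the code above:

#include <stdio.h>

static unsigned long roundup_pow_of_two(unsigned long v)
{
	unsigned long r = 1;
	while (r < v)
		r <<= 1;
	return r;
}

static unsigned ilog2_exact(unsigned long v)
{
	unsigned r = 0;
	while (v >>= 1)
		++r;
	return r;
}

int main(void)
{
	unsigned long max_wr = 1000, max_sge = 2;
	unsigned long next_seg = 16, data_seg = 16;       /* segment sizes from mthca_wqe.h */

	unsigned long max = roundup_pow_of_two(max_wr + 1);      /* mem-free HCAs            */
	unsigned long ds  = roundup_pow_of_two(next_seg + max_sge * data_seg);
	unsigned wqe_shift = ilog2_exact(ds);

	printf("max = %lu, stride = %lu bytes, wqe_shift = %u\n", max, ds, wqe_shift);
	/* -> max = 1024, stride = 64 bytes, wqe_shift = 6 */
	return 0;
}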
297 | void mthca_free_srq(struct mthca_dev *dev, struct mthca_srq *srq) | ||
298 | { | ||
299 | struct mthca_mailbox *mailbox; | ||
300 | int err; | ||
301 | u8 status; | ||
302 | |||
303 | mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL); | ||
304 | if (IS_ERR(mailbox)) { | ||
305 | mthca_warn(dev, "No memory for mailbox to free SRQ.\n"); | ||
306 | return; | ||
307 | } | ||
308 | |||
309 | err = mthca_HW2SW_SRQ(dev, mailbox, srq->srqn, &status); | ||
310 | if (err) | ||
311 | mthca_warn(dev, "HW2SW_SRQ failed (%d)\n", err); | ||
312 | else if (status) | ||
313 | mthca_warn(dev, "HW2SW_SRQ returned status 0x%02x\n", status); | ||
314 | |||
315 | spin_lock_irq(&dev->srq_table.lock); | ||
316 | mthca_array_clear(&dev->srq_table.srq, | ||
317 | srq->srqn & (dev->limits.num_srqs - 1)); | ||
318 | spin_unlock_irq(&dev->srq_table.lock); | ||
319 | |||
320 | atomic_dec(&srq->refcount); | ||
321 | wait_event(srq->wait, !atomic_read(&srq->refcount)); | ||
322 | |||
323 | if (!srq->ibsrq.uobject) { | ||
324 | mthca_free_srq_buf(dev, srq); | ||
325 | if (mthca_is_memfree(dev)) | ||
326 | mthca_free_db(dev, MTHCA_DB_TYPE_SRQ, srq->db_index); | ||
327 | } | ||
328 | |||
329 | mthca_table_put(dev, dev->srq_table.table, srq->srqn); | ||
330 | mthca_free(&dev->srq_table.alloc, srq->srqn); | ||
331 | mthca_free_mailbox(dev, mailbox); | ||
332 | } | ||
333 | |||
334 | void mthca_srq_event(struct mthca_dev *dev, u32 srqn, | ||
335 | enum ib_event_type event_type) | ||
336 | { | ||
337 | struct mthca_srq *srq; | ||
338 | struct ib_event event; | ||
339 | |||
340 | spin_lock(&dev->srq_table.lock); | ||
341 | srq = mthca_array_get(&dev->srq_table.srq, srqn & (dev->limits.num_srqs - 1)); | ||
342 | if (srq) | ||
343 | atomic_inc(&srq->refcount); | ||
344 | spin_unlock(&dev->srq_table.lock); | ||
345 | |||
346 | if (!srq) { | ||
347 | mthca_warn(dev, "Async event for bogus SRQ %08x\n", srqn); | ||
348 | return; | ||
349 | } | ||
350 | |||
351 | if (!srq->ibsrq.event_handler) | ||
352 | goto out; | ||
353 | |||
354 | event.device = &dev->ib_dev; | ||
355 | event.event = event_type; | ||
356 | event.element.srq = &srq->ibsrq; | ||
357 | srq->ibsrq.event_handler(&event, srq->ibsrq.srq_context); | ||
358 | |||
359 | out: | ||
360 | if (atomic_dec_and_test(&srq->refcount)) | ||
361 | wake_up(&srq->wait); | ||
362 | } | ||
363 | |||
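mthca_free_srq() and mthca_srq_event() above cooperate through the refcount/waitqueue pair: the event path pins the SRQ under the table lock so it cannot be freed mid-dispatch, and the destroy path drops its own reference and sleeps until every in-flight handler has finished. A purely illustrative userspace sketch of the same lifetime rule using C11 atomics, with a spin-wait standing in for the kernel's wait_event()/wake_up():

#include <stdatomic.h>
#include <stdio.h>

struct obj { atomic_int refcount; };

static void event_path(struct obj *o)
{
	atomic_fetch_add(&o->refcount, 1);      /* pin the object            */
	/* ... deliver the event ... */
	atomic_fetch_sub(&o->refcount, 1);      /* kernel: wake_up() here    */
}

static void destroy_path(struct obj *o)
{
	atomic_fetch_sub(&o->refcount, 1);      /* drop the creation ref     */
	while (atomic_load(&o->refcount))       /* kernel: wait_event()      */
		;                               /* wait for events to drain  */
	printf("safe to free\n");
}

int main(void)
{
	struct obj o = { .refcount = 1 };       /* creation takes one reference */
	event_path(&o);
	destroy_path(&o);
	return 0;
}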
364 | /* | ||
365 | * This function must be called with IRQs disabled. | ||
366 | */ | ||
367 | void mthca_free_srq_wqe(struct mthca_srq *srq, u32 wqe_addr) | ||
368 | { | ||
369 | int ind; | ||
370 | |||
371 | ind = wqe_addr >> srq->wqe_shift; | ||
372 | |||
373 | spin_lock(&srq->lock); | ||
374 | |||
375 | if (likely(srq->first_free >= 0)) | ||
376 | *wqe_to_link(get_wqe(srq, srq->last_free)) = ind; | ||
377 | else | ||
378 | srq->first_free = ind; | ||
379 | |||
380 | *wqe_to_link(get_wqe(srq, ind)) = -1; | ||
381 | srq->last_free = ind; | ||
382 | |||
383 | spin_unlock(&srq->lock); | ||
384 | } | ||
385 | |||
386 | int mthca_tavor_post_srq_recv(struct ib_srq *ibsrq, struct ib_recv_wr *wr, | ||
387 | struct ib_recv_wr **bad_wr) | ||
388 | { | ||
389 | struct mthca_dev *dev = to_mdev(ibsrq->device); | ||
390 | struct mthca_srq *srq = to_msrq(ibsrq); | ||
391 | unsigned long flags; | ||
392 | int err = 0; | ||
393 | int first_ind; | ||
394 | int ind; | ||
395 | int next_ind; | ||
396 | int nreq; | ||
397 | int i; | ||
398 | void *wqe; | ||
399 | void *prev_wqe; | ||
400 | |||
401 | spin_lock_irqsave(&srq->lock, flags); | ||
402 | |||
403 | first_ind = srq->first_free; | ||
404 | |||
405 | for (nreq = 0; wr; ++nreq, wr = wr->next) { | ||
406 | ind = srq->first_free; | ||
407 | |||
408 | if (ind < 0) { | ||
409 | mthca_err(dev, "SRQ %06x full\n", srq->srqn); | ||
410 | err = -ENOMEM; | ||
411 | *bad_wr = wr; | ||
412 | return nreq; | ||
413 | } | ||
414 | |||
415 | wqe = get_wqe(srq, ind); | ||
416 | next_ind = *wqe_to_link(wqe); | ||
417 | prev_wqe = srq->last; | ||
418 | srq->last = wqe; | ||
419 | |||
420 | ((struct mthca_next_seg *) wqe)->nda_op = 0; | ||
421 | ((struct mthca_next_seg *) wqe)->ee_nds = 0; | ||
422 | /* flags field will always remain 0 */ | ||
423 | |||
424 | wqe += sizeof (struct mthca_next_seg); | ||
425 | |||
426 | if (unlikely(wr->num_sge > srq->max_gs)) { | ||
427 | err = -EINVAL; | ||
428 | *bad_wr = wr; | ||
429 | srq->last = prev_wqe; | ||
430 | return nreq; | ||
431 | } | ||
432 | |||
433 | for (i = 0; i < wr->num_sge; ++i) { | ||
434 | ((struct mthca_data_seg *) wqe)->byte_count = | ||
435 | cpu_to_be32(wr->sg_list[i].length); | ||
436 | ((struct mthca_data_seg *) wqe)->lkey = | ||
437 | cpu_to_be32(wr->sg_list[i].lkey); | ||
438 | ((struct mthca_data_seg *) wqe)->addr = | ||
439 | cpu_to_be64(wr->sg_list[i].addr); | ||
440 | wqe += sizeof (struct mthca_data_seg); | ||
441 | } | ||
442 | |||
443 | if (i < srq->max_gs) { | ||
444 | ((struct mthca_data_seg *) wqe)->byte_count = 0; | ||
445 | ((struct mthca_data_seg *) wqe)->lkey = cpu_to_be32(MTHCA_INVAL_LKEY); | ||
446 | ((struct mthca_data_seg *) wqe)->addr = 0; | ||
447 | } | ||
448 | |||
449 | if (likely(prev_wqe)) { | ||
450 | ((struct mthca_next_seg *) prev_wqe)->nda_op = | ||
451 | cpu_to_be32((ind << srq->wqe_shift) | 1); | ||
452 | wmb(); | ||
453 | ((struct mthca_next_seg *) prev_wqe)->ee_nds = | ||
454 | cpu_to_be32(MTHCA_NEXT_DBD); | ||
455 | } | ||
456 | |||
457 | srq->wrid[ind] = wr->wr_id; | ||
458 | srq->first_free = next_ind; | ||
459 | } | ||
460 | |||
461 | return nreq; | ||
462 | |||
463 | if (likely(nreq)) { | ||
464 | __be32 doorbell[2]; | ||
465 | |||
466 | doorbell[0] = cpu_to_be32(first_ind << srq->wqe_shift); | ||
467 | doorbell[1] = cpu_to_be32((srq->srqn << 8) | nreq); | ||
468 | |||
469 | /* | ||
470 | * Make sure that descriptors are written before | ||
471 | * doorbell is rung. | ||
472 | */ | ||
473 | wmb(); | ||
474 | |||
475 | mthca_write64(doorbell, | ||
476 | dev->kar + MTHCA_RECEIVE_DOORBELL, | ||
477 | MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock)); | ||
478 | } | ||
479 | |||
480 | spin_unlock_irqrestore(&srq->lock, flags); | ||
481 | return err; | ||
482 | } | ||
483 | |||
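After the new WQEs are linked into the descriptor chain, the Tavor path above builds a two-word doorbell: word 0 is the byte offset of the first newly posted WQE, word 1 packs the SRQ number into the upper 24 bits and the number of WQEs into the low 8, and the wmb() guarantees the descriptor writes reach memory before the doorbell write that tells the HCA to fetch them. A small sketch of just the value packing, with sample inputs first_ind = 3, wqe_shift = 6, srqn = 0x12, nreq = 2; the driver additionally byte-swaps both words to big-endian before writing them to the UAR page:

#include <stdio.h>

int main(void)
{
	unsigned first_ind = 3, wqe_shift = 6, srqn = 0x12, nreq = 2;
	unsigned doorbell[2];

	doorbell[0] = first_ind << wqe_shift;       /* offset of the first new WQE  */
	doorbell[1] = (srqn << 8) | nreq;           /* which SRQ, how many WQEs     */

	/* kernel: wmb(); then mthca_write64(doorbell, ...) into the doorbell page */
	printf("doorbell = %08x %08x\n", doorbell[0], doorbell[1]);
	/* -> 000000c0 00001202 */
	return 0;
}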
484 | int mthca_arbel_post_srq_recv(struct ib_srq *ibsrq, struct ib_recv_wr *wr, | ||
485 | struct ib_recv_wr **bad_wr) | ||
486 | { | ||
487 | struct mthca_dev *dev = to_mdev(ibsrq->device); | ||
488 | struct mthca_srq *srq = to_msrq(ibsrq); | ||
489 | unsigned long flags; | ||
490 | int err = 0; | ||
491 | int ind; | ||
492 | int next_ind; | ||
493 | int nreq; | ||
494 | int i; | ||
495 | void *wqe; | ||
496 | |||
497 | spin_lock_irqsave(&srq->lock, flags); | ||
498 | |||
499 | for (nreq = 0; wr; ++nreq, wr = wr->next) { | ||
500 | ind = srq->first_free; | ||
501 | |||
502 | if (ind < 0) { | ||
503 | mthca_err(dev, "SRQ %06x full\n", srq->srqn); | ||
504 | err = -ENOMEM; | ||
505 | *bad_wr = wr; | ||
506 | return nreq; | ||
507 | } | ||
508 | |||
509 | wqe = get_wqe(srq, ind); | ||
510 | next_ind = *wqe_to_link(wqe); | ||
511 | |||
512 | ((struct mthca_next_seg *) wqe)->nda_op = | ||
513 | cpu_to_be32((next_ind << srq->wqe_shift) | 1); | ||
514 | ((struct mthca_next_seg *) wqe)->ee_nds = 0; | ||
515 | /* flags field will always remain 0 */ | ||
516 | |||
517 | wqe += sizeof (struct mthca_next_seg); | ||
518 | |||
519 | if (unlikely(wr->num_sge > srq->max_gs)) { | ||
520 | err = -EINVAL; | ||
521 | *bad_wr = wr; | ||
522 | return nreq; | ||
523 | } | ||
524 | |||
525 | for (i = 0; i < wr->num_sge; ++i) { | ||
526 | ((struct mthca_data_seg *) wqe)->byte_count = | ||
527 | cpu_to_be32(wr->sg_list[i].length); | ||
528 | ((struct mthca_data_seg *) wqe)->lkey = | ||
529 | cpu_to_be32(wr->sg_list[i].lkey); | ||
530 | ((struct mthca_data_seg *) wqe)->addr = | ||
531 | cpu_to_be64(wr->sg_list[i].addr); | ||
532 | wqe += sizeof (struct mthca_data_seg); | ||
533 | } | ||
534 | |||
535 | if (i < srq->max_gs) { | ||
536 | ((struct mthca_data_seg *) wqe)->byte_count = 0; | ||
537 | ((struct mthca_data_seg *) wqe)->lkey = cpu_to_be32(MTHCA_INVAL_LKEY); | ||
538 | ((struct mthca_data_seg *) wqe)->addr = 0; | ||
539 | } | ||
540 | |||
541 | srq->wrid[ind] = wr->wr_id; | ||
542 | srq->first_free = next_ind; | ||
543 | } | ||
544 | |||
545 | if (likely(nreq)) { | ||
546 | srq->counter += nreq; | ||
547 | |||
548 | /* | ||
549 | * Make sure that descriptors are written before | ||
550 | * we write doorbell record. | ||
551 | */ | ||
552 | wmb(); | ||
553 | *srq->db = cpu_to_be32(srq->counter); | ||
554 | } | ||
555 | |||
556 | spin_unlock_irqrestore(&srq->lock, flags); | ||
557 | return err; | ||
558 | } | ||
559 | |||
560 | int __devinit mthca_init_srq_table(struct mthca_dev *dev) | ||
561 | { | ||
562 | int err; | ||
563 | |||
564 | if (!(dev->mthca_flags & MTHCA_FLAG_SRQ)) | ||
565 | return 0; | ||
566 | |||
567 | spin_lock_init(&dev->srq_table.lock); | ||
568 | |||
569 | err = mthca_alloc_init(&dev->srq_table.alloc, | ||
570 | dev->limits.num_srqs, | ||
571 | dev->limits.num_srqs - 1, | ||
572 | dev->limits.reserved_srqs); | ||
573 | if (err) | ||
574 | return err; | ||
575 | |||
576 | err = mthca_array_init(&dev->srq_table.srq, | ||
577 | dev->limits.num_srqs); | ||
578 | if (err) | ||
579 | mthca_alloc_cleanup(&dev->srq_table.alloc); | ||
580 | |||
581 | return err; | ||
582 | } | ||
583 | |||
584 | void __devexit mthca_cleanup_srq_table(struct mthca_dev *dev) | ||
585 | { | ||
586 | if (!(dev->mthca_flags & MTHCA_FLAG_SRQ)) | ||
587 | return; | ||
588 | |||
589 | mthca_array_cleanup(&dev->srq_table.srq, dev->limits.num_srqs); | ||
590 | mthca_alloc_cleanup(&dev->srq_table.alloc); | ||
591 | } | ||
diff --git a/drivers/infiniband/hw/mthca/mthca_user.h b/drivers/infiniband/hw/mthca/mthca_user.h index 3024c1b4547d..41613ec8a04e 100644 --- a/drivers/infiniband/hw/mthca/mthca_user.h +++ b/drivers/infiniband/hw/mthca/mthca_user.h | |||
@@ -69,6 +69,17 @@ struct mthca_create_cq_resp { | |||
69 | __u32 reserved; | 69 | __u32 reserved; |
70 | }; | 70 | }; |
71 | 71 | ||
72 | struct mthca_create_srq { | ||
73 | __u32 lkey; | ||
74 | __u32 db_index; | ||
75 | __u64 db_page; | ||
76 | }; | ||
77 | |||
78 | struct mthca_create_srq_resp { | ||
79 | __u32 srqn; | ||
80 | __u32 reserved; | ||
81 | }; | ||
82 | |||
72 | struct mthca_create_qp { | 83 | struct mthca_create_qp { |
73 | __u32 lkey; | 84 | __u32 lkey; |
74 | __u32 reserved; | 85 | __u32 reserved; |
diff --git a/drivers/infiniband/hw/mthca/mthca_wqe.h b/drivers/infiniband/hw/mthca/mthca_wqe.h new file mode 100644 index 000000000000..1f4c0ff28f79 --- /dev/null +++ b/drivers/infiniband/hw/mthca/mthca_wqe.h | |||
@@ -0,0 +1,114 @@ | |||
1 | /* | ||
2 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | ||
3 | * | ||
4 | * This software is available to you under a choice of one of two | ||
5 | * licenses. You may choose to be licensed under the terms of the GNU | ||
6 | * General Public License (GPL) Version 2, available from the file | ||
7 | * COPYING in the main directory of this source tree, or the | ||
8 | * OpenIB.org BSD license below: | ||
9 | * | ||
10 | * Redistribution and use in source and binary forms, with or | ||
11 | * without modification, are permitted provided that the following | ||
12 | * conditions are met: | ||
13 | * | ||
14 | * - Redistributions of source code must retain the above | ||
15 | * copyright notice, this list of conditions and the following | ||
16 | * disclaimer. | ||
17 | * | ||
18 | * - Redistributions in binary form must reproduce the above | ||
19 | * copyright notice, this list of conditions and the following | ||
20 | * disclaimer in the documentation and/or other materials | ||
21 | * provided with the distribution. | ||
22 | * | ||
23 | * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, | ||
24 | * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF | ||
25 | * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND | ||
26 | * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS | ||
27 | * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN | ||
28 | * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN | ||
29 | * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE | ||
30 | * SOFTWARE. | ||
31 | * | ||
32 | * $Id: mthca_wqe.h 3047 2005-08-10 03:59:35Z roland $ | ||
33 | */ | ||
34 | |||
35 | #ifndef MTHCA_WQE_H | ||
36 | #define MTHCA_WQE_H | ||
37 | |||
38 | #include <linux/types.h> | ||
39 | |||
40 | enum { | ||
41 | MTHCA_NEXT_DBD = 1 << 7, | ||
42 | MTHCA_NEXT_FENCE = 1 << 6, | ||
43 | MTHCA_NEXT_CQ_UPDATE = 1 << 3, | ||
44 | MTHCA_NEXT_EVENT_GEN = 1 << 2, | ||
45 | MTHCA_NEXT_SOLICIT = 1 << 1, | ||
46 | |||
47 | MTHCA_MLX_VL15 = 1 << 17, | ||
48 | MTHCA_MLX_SLR = 1 << 16 | ||
49 | }; | ||
50 | |||
51 | enum { | ||
52 | MTHCA_INVAL_LKEY = 0x100 | ||
53 | }; | ||
54 | |||
55 | struct mthca_next_seg { | ||
56 | __be32 nda_op; /* [31:6] next WQE [4:0] next opcode */ | ||
57 | __be32 ee_nds; /* [31:8] next EE [7] DBD [6] F [5:0] next WQE size */ | ||
58 | __be32 flags; /* [3] CQ [2] Event [1] Solicit */ | ||
59 | __be32 imm; /* immediate data */ | ||
60 | }; | ||
61 | |||
62 | struct mthca_tavor_ud_seg { | ||
63 | u32 reserved1; | ||
64 | __be32 lkey; | ||
65 | __be64 av_addr; | ||
66 | u32 reserved2[4]; | ||
67 | __be32 dqpn; | ||
68 | __be32 qkey; | ||
69 | u32 reserved3[2]; | ||
70 | }; | ||
71 | |||
72 | struct mthca_arbel_ud_seg { | ||
73 | __be32 av[8]; | ||
74 | __be32 dqpn; | ||
75 | __be32 qkey; | ||
76 | u32 reserved[2]; | ||
77 | }; | ||
78 | |||
79 | struct mthca_bind_seg { | ||
80 | __be32 flags; /* [31] Atomic [30] rem write [29] rem read */ | ||
81 | u32 reserved; | ||
82 | __be32 new_rkey; | ||
83 | __be32 lkey; | ||
84 | __be64 addr; | ||
85 | __be64 length; | ||
86 | }; | ||
87 | |||
88 | struct mthca_raddr_seg { | ||
89 | __be64 raddr; | ||
90 | __be32 rkey; | ||
91 | u32 reserved; | ||
92 | }; | ||
93 | |||
94 | struct mthca_atomic_seg { | ||
95 | __be64 swap_add; | ||
96 | __be64 compare; | ||
97 | }; | ||
98 | |||
99 | struct mthca_data_seg { | ||
100 | __be32 byte_count; | ||
101 | __be32 lkey; | ||
102 | __be64 addr; | ||
103 | }; | ||
104 | |||
105 | struct mthca_mlx_seg { | ||
106 | __be32 nda_op; | ||
107 | __be32 nds; | ||
108 | __be32 flags; /* [17] VL15 [16] SLR [14:12] static rate | ||
109 | [11:8] SL [3] C [2] E */ | ||
110 | __be16 rlid; | ||
111 | __be16 vcrc; | ||
112 | }; | ||
113 | |||
114 | #endif /* MTHCA_WQE_H */ | ||
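mthca_wqe.h collects the on-the-wire WQE segment layouts that were previously private to mthca_qp.c, so the new SRQ code can share them. A receive WQE is simply a control segment followed by scatter entries; the sketch below lays out a host-endian mock of one to show the sizes involved, where the real driver writes each field with cpu_to_be32()/cpu_to_be64() directly into the queue buffer:

#include <stdint.h>
#include <stdio.h>

struct next_seg { uint32_t nda_op, ee_nds, flags, imm; };
struct data_seg { uint32_t byte_count, lkey; uint64_t addr; };

/* a receive WQE: control segment followed by one or more scatter entries */
struct recv_wqe {
	struct next_seg next;
	struct data_seg sge[2];
};

int main(void)
{
	struct recv_wqe wqe = { 0 };

	/* one scatter entry telling the HCA where to place the payload */
	wqe.sge[0] = (struct data_seg) {
		.byte_count = 2048, .lkey = 0x1234, .addr = 0x100000
	};

	printf("next_seg %zu bytes, data_seg %zu bytes, WQE %zu bytes\n",
	       sizeof wqe.next, sizeof wqe.sge[0], sizeof wqe);
	/* -> 16, 16, 48 */
	return 0;
}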
diff --git a/drivers/infiniband/ulp/ipoib/Makefile b/drivers/infiniband/ulp/ipoib/Makefile index 394bc08abc6f..8935e74ae3f8 100644 --- a/drivers/infiniband/ulp/ipoib/Makefile +++ b/drivers/infiniband/ulp/ipoib/Makefile | |||
@@ -1,5 +1,3 @@ | |||
1 | EXTRA_CFLAGS += -Idrivers/infiniband/include | ||
2 | |||
3 | obj-$(CONFIG_INFINIBAND_IPOIB) += ib_ipoib.o | 1 | obj-$(CONFIG_INFINIBAND_IPOIB) += ib_ipoib.o |
4 | 2 | ||
5 | ib_ipoib-y := ipoib_main.o \ | 3 | ib_ipoib-y := ipoib_main.o \ |
diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h index 04c98f54e9c4..bea960b8191f 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib.h +++ b/drivers/infiniband/ulp/ipoib/ipoib.h | |||
@@ -1,5 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
4 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
3 | * | 5 | * |
4 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -49,9 +51,9 @@ | |||
49 | #include <asm/atomic.h> | 51 | #include <asm/atomic.h> |
50 | #include <asm/semaphore.h> | 52 | #include <asm/semaphore.h> |
51 | 53 | ||
52 | #include <ib_verbs.h> | 54 | #include <rdma/ib_verbs.h> |
53 | #include <ib_pack.h> | 55 | #include <rdma/ib_pack.h> |
54 | #include <ib_sa.h> | 56 | #include <rdma/ib_sa.h> |
55 | 57 | ||
56 | /* constants */ | 58 | /* constants */ |
57 | 59 | ||
@@ -88,8 +90,8 @@ enum { | |||
88 | /* structs */ | 90 | /* structs */ |
89 | 91 | ||
90 | struct ipoib_header { | 92 | struct ipoib_header { |
91 | u16 proto; | 93 | __be16 proto; |
92 | u16 reserved; | 94 | u16 reserved; |
93 | }; | 95 | }; |
94 | 96 | ||
95 | struct ipoib_pseudoheader { | 97 | struct ipoib_pseudoheader { |
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_fs.c b/drivers/infiniband/ulp/ipoib/ipoib_fs.c index a84e5fe0f193..38b150f775e7 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_fs.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_fs.c | |||
@@ -97,7 +97,7 @@ static int ipoib_mcg_seq_show(struct seq_file *file, void *iter_ptr) | |||
97 | 97 | ||
98 | for (n = 0, i = 0; i < sizeof mgid / 2; ++i) { | 98 | for (n = 0, i = 0; i < sizeof mgid / 2; ++i) { |
99 | n += sprintf(gid_buf + n, "%x", | 99 | n += sprintf(gid_buf + n, "%x", |
100 | be16_to_cpu(((u16 *)mgid.raw)[i])); | 100 | be16_to_cpu(((__be16 *) mgid.raw)[i])); |
101 | if (i < sizeof mgid / 2 - 1) | 101 | if (i < sizeof mgid / 2 - 1) |
102 | gid_buf[n++] = ':'; | 102 | gid_buf[n++] = ':'; |
103 | } | 103 | } |
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c index eee82363167d..ef0e3894863c 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c | |||
@@ -1,5 +1,8 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
4 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
5 | * Copyright (c) 2004, 2005 Voltaire, Inc. All rights reserved. | ||
3 | * | 6 | * |
4 | * This software is available to you under a choice of one of two | 7 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 8 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -35,7 +38,7 @@ | |||
35 | #include <linux/delay.h> | 38 | #include <linux/delay.h> |
36 | #include <linux/dma-mapping.h> | 39 | #include <linux/dma-mapping.h> |
37 | 40 | ||
38 | #include <ib_cache.h> | 41 | #include <rdma/ib_cache.h> |
39 | 42 | ||
40 | #include "ipoib.h" | 43 | #include "ipoib.h" |
41 | 44 | ||
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c index fa00816a3cf7..0e8ac138e355 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_main.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c | |||
@@ -1,5 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
4 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
3 | * | 5 | * |
4 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -34,7 +36,6 @@ | |||
34 | 36 | ||
35 | #include "ipoib.h" | 37 | #include "ipoib.h" |
36 | 38 | ||
37 | #include <linux/version.h> | ||
38 | #include <linux/module.h> | 39 | #include <linux/module.h> |
39 | 40 | ||
40 | #include <linux/init.h> | 41 | #include <linux/init.h> |
@@ -607,8 +608,8 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) | |||
607 | ipoib_warn(priv, "Unicast, no %s: type %04x, QPN %06x " | 608 | ipoib_warn(priv, "Unicast, no %s: type %04x, QPN %06x " |
608 | IPOIB_GID_FMT "\n", | 609 | IPOIB_GID_FMT "\n", |
609 | skb->dst ? "neigh" : "dst", | 610 | skb->dst ? "neigh" : "dst", |
610 | be16_to_cpup((u16 *) skb->data), | 611 | be16_to_cpup((__be16 *) skb->data), |
611 | be32_to_cpup((u32 *) phdr->hwaddr), | 612 | be32_to_cpup((__be32 *) phdr->hwaddr), |
612 | IPOIB_GID_ARG(*(union ib_gid *) (phdr->hwaddr + 4))); | 613 | IPOIB_GID_ARG(*(union ib_gid *) (phdr->hwaddr + 4))); |
613 | dev_kfree_skb_any(skb); | 614 | dev_kfree_skb_any(skb); |
614 | ++priv->stats.tx_dropped; | 615 | ++priv->stats.tx_dropped; |
@@ -671,7 +672,7 @@ static void ipoib_set_mcast_list(struct net_device *dev) | |||
671 | { | 672 | { |
672 | struct ipoib_dev_priv *priv = netdev_priv(dev); | 673 | struct ipoib_dev_priv *priv = netdev_priv(dev); |
673 | 674 | ||
674 | schedule_work(&priv->restart_task); | 675 | queue_work(ipoib_workqueue, &priv->restart_task); |
675 | } | 676 | } |
676 | 677 | ||
677 | static void ipoib_neigh_destructor(struct neighbour *n) | 678 | static void ipoib_neigh_destructor(struct neighbour *n) |
@@ -780,15 +781,11 @@ void ipoib_dev_cleanup(struct net_device *dev) | |||
780 | 781 | ||
781 | ipoib_ib_dev_cleanup(dev); | 782 | ipoib_ib_dev_cleanup(dev); |
782 | 783 | ||
783 | if (priv->rx_ring) { | 784 | kfree(priv->rx_ring); |
784 | kfree(priv->rx_ring); | 785 | kfree(priv->tx_ring); |
785 | priv->rx_ring = NULL; | ||
786 | } | ||
787 | 786 | ||
788 | if (priv->tx_ring) { | 787 | priv->rx_ring = NULL; |
789 | kfree(priv->tx_ring); | 788 | priv->tx_ring = NULL; |
790 | priv->tx_ring = NULL; | ||
791 | } | ||
792 | } | 789 | } |
793 | 790 | ||
794 | static void ipoib_setup(struct net_device *dev) | 791 | static void ipoib_setup(struct net_device *dev) |
@@ -886,6 +883,12 @@ static ssize_t create_child(struct class_device *cdev, | |||
886 | if (pkey < 0 || pkey > 0xffff) | 883 | if (pkey < 0 || pkey > 0xffff) |
887 | return -EINVAL; | 884 | return -EINVAL; |
888 | 885 | ||
886 | /* | ||
887 | * Set the full membership bit, so that we join the right | ||
888 | * broadcast group, etc. | ||
889 | */ | ||
890 | pkey |= 0x8000; | ||
891 | |||
889 | ret = ipoib_vlan_add(container_of(cdev, struct net_device, class_dev), | 892 | ret = ipoib_vlan_add(container_of(cdev, struct net_device, class_dev), |
890 | pkey); | 893 | pkey); |
891 | 894 | ||
@@ -938,6 +941,12 @@ static struct net_device *ipoib_add_port(const char *format, | |||
938 | goto alloc_mem_failed; | 941 | goto alloc_mem_failed; |
939 | } | 942 | } |
940 | 943 | ||
944 | /* | ||
945 | * Set the full membership bit, so that we join the right | ||
946 | * broadcast group, etc. | ||
947 | */ | ||
948 | priv->pkey |= 0x8000; | ||
949 | |||
941 | priv->dev->broadcast[8] = priv->pkey >> 8; | 950 | priv->dev->broadcast[8] = priv->pkey >> 8; |
942 | priv->dev->broadcast[9] = priv->pkey & 0xff; | 951 | priv->dev->broadcast[9] = priv->pkey & 0xff; |
943 | 952 | ||
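The two hunks above set the high bit of the P_Key before it is copied into bytes 8 and 9 of the IPoIB broadcast address. A minimal sketch of that relationship, using a hypothetical partition key purely for illustration (the 0x8000 bit marks full membership in the IB partition, as the added comments state):

	/* Hypothetical example: limited-membership P_Key 0x7fff */
	u16 pkey = 0x7fff;

	/* Force the full-membership bit so we join the right broadcast group */
	pkey |= 0x8000;				/* -> 0xffff */

	/* The P_Key lands in bytes 8 and 9 of the IPoIB broadcast address */
	dev->broadcast[8] = pkey >> 8;		/* 0xff */
	dev->broadcast[9] = pkey & 0xff;	/* 0xff */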
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c index 70208c3d21e2..aca7aea18a69 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c | |||
@@ -1,5 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
4 | * Copyright (c) 2004 Voltaire, Inc. All rights reserved. | ||
3 | * | 5 | * |
4 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -357,7 +359,7 @@ static int ipoib_mcast_sendonly_join(struct ipoib_mcast *mcast) | |||
357 | 359 | ||
358 | rec.mgid = mcast->mcmember.mgid; | 360 | rec.mgid = mcast->mcmember.mgid; |
359 | rec.port_gid = priv->local_gid; | 361 | rec.port_gid = priv->local_gid; |
360 | rec.pkey = be16_to_cpu(priv->pkey); | 362 | rec.pkey = cpu_to_be16(priv->pkey); |
361 | 363 | ||
362 | ret = ib_sa_mcmember_rec_set(priv->ca, priv->port, &rec, | 364 | ret = ib_sa_mcmember_rec_set(priv->ca, priv->port, &rec, |
363 | IB_SA_MCMEMBER_REC_MGID | | 365 | IB_SA_MCMEMBER_REC_MGID | |
@@ -457,7 +459,7 @@ static void ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast, | |||
457 | 459 | ||
458 | rec.mgid = mcast->mcmember.mgid; | 460 | rec.mgid = mcast->mcmember.mgid; |
459 | rec.port_gid = priv->local_gid; | 461 | rec.port_gid = priv->local_gid; |
460 | rec.pkey = be16_to_cpu(priv->pkey); | 462 | rec.pkey = cpu_to_be16(priv->pkey); |
461 | 463 | ||
462 | comp_mask = | 464 | comp_mask = |
463 | IB_SA_MCMEMBER_REC_MGID | | 465 | IB_SA_MCMEMBER_REC_MGID | |
@@ -646,7 +648,7 @@ static int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast) | |||
646 | 648 | ||
647 | rec.mgid = mcast->mcmember.mgid; | 649 | rec.mgid = mcast->mcmember.mgid; |
648 | rec.port_gid = priv->local_gid; | 650 | rec.port_gid = priv->local_gid; |
649 | rec.pkey = be16_to_cpu(priv->pkey); | 651 | rec.pkey = cpu_to_be16(priv->pkey); |
650 | 652 | ||
651 | /* Remove ourselves from the multicast group */ | 653 | /* Remove ourselves from the multicast group */ |
652 | ret = ipoib_mcast_detach(dev, be16_to_cpu(mcast->mcmember.mlid), | 654 | ret = ipoib_mcast_detach(dev, be16_to_cpu(mcast->mcmember.mlid), |
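The three pkey hunks above swap be16_to_cpu() for cpu_to_be16(). On any given architecture both helpers produce the same bit pattern, but the intent differs: rec.pkey is presumably a wire-format (__be16) field, so the correct conversion is from CPU order to big-endian. A small illustration, assuming a little-endian CPU and an invented value:

	u16    pkey_cpu = 0x8001;			/* host byte order */
	__be16 pkey_be  = cpu_to_be16(pkey_cpu);	/* wire byte order */

	/* On a little-endian machine the bytes in memory are now 0x80, 0x01.
	 * be16_to_cpu() would yield the same bits here, but its types point
	 * the wrong way, which sparse's __be16/u16 annotations would flag. */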
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c index 4933edf062c2..79f59d0563ed 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c | |||
@@ -1,5 +1,6 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Mellanox Technologies. All rights reserved. | ||
3 | * | 4 | * |
4 | * This software is available to you under a choice of one of two | 5 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 6 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -32,7 +33,7 @@ | |||
32 | * $Id: ipoib_verbs.c 1349 2004-12-16 21:09:43Z roland $ | 33 | * $Id: ipoib_verbs.c 1349 2004-12-16 21:09:43Z roland $ |
33 | */ | 34 | */ |
34 | 35 | ||
35 | #include <ib_cache.h> | 36 | #include <rdma/ib_cache.h> |
36 | 37 | ||
37 | #include "ipoib.h" | 38 | #include "ipoib.h" |
38 | 39 | ||
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c index 94b8ea812fef..332d730e60c2 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c | |||
@@ -32,7 +32,6 @@ | |||
32 | * $Id: ipoib_vlan.c 1349 2004-12-16 21:09:43Z roland $ | 32 | * $Id: ipoib_vlan.c 1349 2004-12-16 21:09:43Z roland $ |
33 | */ | 33 | */ |
34 | 34 | ||
35 | #include <linux/version.h> | ||
36 | #include <linux/module.h> | 35 | #include <linux/module.h> |
37 | 36 | ||
38 | #include <linux/init.h> | 37 | #include <linux/init.h> |
diff --git a/drivers/media/dvb/frontends/tda80xx.c b/drivers/media/dvb/frontends/tda80xx.c index 88e125079ca1..d1cabb6a0a13 100644 --- a/drivers/media/dvb/frontends/tda80xx.c +++ b/drivers/media/dvb/frontends/tda80xx.c | |||
@@ -30,6 +30,7 @@ | |||
30 | #include <linux/kernel.h> | 30 | #include <linux/kernel.h> |
31 | #include <linux/module.h> | 31 | #include <linux/module.h> |
32 | #include <linux/slab.h> | 32 | #include <linux/slab.h> |
33 | #include <asm/irq.h> | ||
33 | #include <asm/div64.h> | 34 | #include <asm/div64.h> |
34 | 35 | ||
35 | #include "dvb_frontend.h" | 36 | #include "dvb_frontend.h" |
diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig new file mode 100644 index 000000000000..1588a59e3767 --- /dev/null +++ b/drivers/mfd/Kconfig | |||
@@ -0,0 +1,16 @@ | |||
1 | # | ||
2 | # Multifunction miscellaneous devices | ||
3 | # | ||
4 | |||
5 | menu "Multimedia Capabilities Port drivers" | ||
6 | |||
7 | config MCP | ||
8 | tristate | ||
9 | |||
10 | # Interface drivers | ||
11 | config MCP_SA11X0 | ||
12 | tristate "Support SA11x0 MCP interface" | ||
13 | depends on ARCH_SA1100 | ||
14 | select MCP | ||
15 | |||
16 | endmenu | ||
diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile new file mode 100644 index 000000000000..98bdd6a42188 --- /dev/null +++ b/drivers/mfd/Makefile | |||
@@ -0,0 +1,6 @@ | |||
1 | # | ||
2 | # Makefile for multifunction miscellaneous devices | ||
3 | # | ||
4 | |||
5 | obj-$(CONFIG_MCP) += mcp-core.o | ||
6 | obj-$(CONFIG_MCP_SA11X0) += mcp-sa11x0.o | ||
diff --git a/drivers/mfd/mcp-core.c b/drivers/mfd/mcp-core.c new file mode 100644 index 000000000000..c75d713c01e4 --- /dev/null +++ b/drivers/mfd/mcp-core.c | |||
@@ -0,0 +1,255 @@ | |||
1 | /* | ||
2 | * linux/drivers/mfd/mcp-core.c | ||
3 | * | ||
4 | * Copyright (C) 2001 Russell King | ||
5 | * | ||
6 | * This program is free software; you can redistribute it and/or modify | ||
7 | * it under the terms of the GNU General Public License as published by | ||
8 | * the Free Software Foundation; either version 2 of the License. | ||
9 | * | ||
10 | * Generic MCP (Multimedia Communications Port) layer. All MCP locking | ||
11 | * is solely held within this file. | ||
12 | */ | ||
13 | #include <linux/module.h> | ||
14 | #include <linux/init.h> | ||
15 | #include <linux/errno.h> | ||
16 | #include <linux/smp.h> | ||
17 | #include <linux/device.h> | ||
18 | |||
19 | #include <asm/dma.h> | ||
20 | #include <asm/system.h> | ||
21 | |||
22 | #include "mcp.h" | ||
23 | |||
24 | #define to_mcp(d) container_of(d, struct mcp, attached_device) | ||
25 | #define to_mcp_driver(d) container_of(d, struct mcp_driver, drv) | ||
26 | |||
27 | static int mcp_bus_match(struct device *dev, struct device_driver *drv) | ||
28 | { | ||
29 | return 1; | ||
30 | } | ||
31 | |||
32 | static int mcp_bus_probe(struct device *dev) | ||
33 | { | ||
34 | struct mcp *mcp = to_mcp(dev); | ||
35 | struct mcp_driver *drv = to_mcp_driver(dev->driver); | ||
36 | |||
37 | return drv->probe(mcp); | ||
38 | } | ||
39 | |||
40 | static int mcp_bus_remove(struct device *dev) | ||
41 | { | ||
42 | struct mcp *mcp = to_mcp(dev); | ||
43 | struct mcp_driver *drv = to_mcp_driver(dev->driver); | ||
44 | |||
45 | drv->remove(mcp); | ||
46 | return 0; | ||
47 | } | ||
48 | |||
49 | static int mcp_bus_suspend(struct device *dev, pm_message_t state) | ||
50 | { | ||
51 | struct mcp *mcp = to_mcp(dev); | ||
52 | int ret = 0; | ||
53 | |||
54 | if (dev->driver) { | ||
55 | struct mcp_driver *drv = to_mcp_driver(dev->driver); | ||
56 | |||
57 | ret = drv->suspend(mcp, state); | ||
58 | } | ||
59 | return ret; | ||
60 | } | ||
61 | |||
62 | static int mcp_bus_resume(struct device *dev) | ||
63 | { | ||
64 | struct mcp *mcp = to_mcp(dev); | ||
65 | int ret = 0; | ||
66 | |||
67 | if (dev->driver) { | ||
68 | struct mcp_driver *drv = to_mcp_driver(dev->driver); | ||
69 | |||
70 | ret = drv->resume(mcp); | ||
71 | } | ||
72 | return ret; | ||
73 | } | ||
74 | |||
75 | static struct bus_type mcp_bus_type = { | ||
76 | .name = "mcp", | ||
77 | .match = mcp_bus_match, | ||
78 | .suspend = mcp_bus_suspend, | ||
79 | .resume = mcp_bus_resume, | ||
80 | }; | ||
81 | |||
82 | /** | ||
83 | * mcp_set_telecom_divisor - set the telecom divisor | ||
84 | * @mcp: MCP interface structure | ||
85 | * @div: SIB clock divisor | ||
86 | * | ||
87 | * Set the telecom divisor on the MCP interface. The resulting | ||
88 | * sample rate is SIBCLOCK/div. | ||
89 | */ | ||
90 | void mcp_set_telecom_divisor(struct mcp *mcp, unsigned int div) | ||
91 | { | ||
92 | spin_lock_irq(&mcp->lock); | ||
93 | mcp->ops->set_telecom_divisor(mcp, div); | ||
94 | spin_unlock_irq(&mcp->lock); | ||
95 | } | ||
96 | EXPORT_SYMBOL(mcp_set_telecom_divisor); | ||
97 | |||
98 | /** | ||
99 | * mcp_set_audio_divisor - set the audio divisor | ||
100 | * @mcp: MCP interface structure | ||
101 | * @div: SIB clock divisor | ||
102 | * | ||
103 | * Set the audio divisor on the MCP interface. | ||
104 | */ | ||
105 | void mcp_set_audio_divisor(struct mcp *mcp, unsigned int div) | ||
106 | { | ||
107 | spin_lock_irq(&mcp->lock); | ||
108 | mcp->ops->set_audio_divisor(mcp, div); | ||
109 | spin_unlock_irq(&mcp->lock); | ||
110 | } | ||
111 | EXPORT_SYMBOL(mcp_set_audio_divisor); | ||
112 | |||
113 | /** | ||
114 | * mcp_reg_write - write a device register | ||
115 | * @mcp: MCP interface structure | ||
116 | * @reg: 4-bit register index | ||
117 | * @val: 16-bit data value | ||
118 | * | ||
119 | * Write a device register. The MCP interface must be enabled | ||
120 | * to prevent this function hanging. | ||
121 | */ | ||
122 | void mcp_reg_write(struct mcp *mcp, unsigned int reg, unsigned int val) | ||
123 | { | ||
124 | unsigned long flags; | ||
125 | |||
126 | spin_lock_irqsave(&mcp->lock, flags); | ||
127 | mcp->ops->reg_write(mcp, reg, val); | ||
128 | spin_unlock_irqrestore(&mcp->lock, flags); | ||
129 | } | ||
130 | EXPORT_SYMBOL(mcp_reg_write); | ||
131 | |||
132 | /** | ||
133 | * mcp_reg_read - read a device register | ||
134 | * @mcp: MCP interface structure | ||
135 | * @reg: 4-bit register index | ||
136 | * | ||
137 | * Read a device register and return its value. The MCP interface | ||
138 | * must be enabled to prevent this function hanging. | ||
139 | */ | ||
140 | unsigned int mcp_reg_read(struct mcp *mcp, unsigned int reg) | ||
141 | { | ||
142 | unsigned long flags; | ||
143 | unsigned int val; | ||
144 | |||
145 | spin_lock_irqsave(&mcp->lock, flags); | ||
146 | val = mcp->ops->reg_read(mcp, reg); | ||
147 | spin_unlock_irqrestore(&mcp->lock, flags); | ||
148 | |||
149 | return val; | ||
150 | } | ||
151 | EXPORT_SYMBOL(mcp_reg_read); | ||
152 | |||
153 | /** | ||
154 | * mcp_enable - enable the MCP interface | ||
155 | * @mcp: MCP interface to enable | ||
156 | * | ||
157 | * Enable the MCP interface. Each call to mcp_enable will need | ||
158 | * a corresponding call to mcp_disable to disable the interface. | ||
159 | */ | ||
160 | void mcp_enable(struct mcp *mcp) | ||
161 | { | ||
162 | spin_lock_irq(&mcp->lock); | ||
163 | if (mcp->use_count++ == 0) | ||
164 | mcp->ops->enable(mcp); | ||
165 | spin_unlock_irq(&mcp->lock); | ||
166 | } | ||
167 | EXPORT_SYMBOL(mcp_enable); | ||
168 | |||
169 | /** | ||
170 | * mcp_disable - disable the MCP interface | ||
171 | * @mcp: MCP interface to disable | ||
172 | * | ||
173 | * Disable the MCP interface. The MCP interface will only be | ||
174 | * disabled once the number of calls to mcp_enable matches the | ||
175 | * number of calls to mcp_disable. | ||
176 | */ | ||
177 | void mcp_disable(struct mcp *mcp) | ||
178 | { | ||
179 | unsigned long flags; | ||
180 | |||
181 | spin_lock_irqsave(&mcp->lock, flags); | ||
182 | if (--mcp->use_count == 0) | ||
183 | mcp->ops->disable(mcp); | ||
184 | spin_unlock_irqrestore(&mcp->lock, flags); | ||
185 | } | ||
186 | EXPORT_SYMBOL(mcp_disable); | ||
187 | |||
188 | static void mcp_release(struct device *dev) | ||
189 | { | ||
190 | struct mcp *mcp = container_of(dev, struct mcp, attached_device); | ||
191 | |||
192 | kfree(mcp); | ||
193 | } | ||
194 | |||
195 | struct mcp *mcp_host_alloc(struct device *parent, size_t size) | ||
196 | { | ||
197 | struct mcp *mcp; | ||
198 | |||
199 | mcp = kmalloc(sizeof(struct mcp) + size, GFP_KERNEL); | ||
200 | if (mcp) { | ||
201 | memset(mcp, 0, sizeof(struct mcp) + size); | ||
202 | spin_lock_init(&mcp->lock); | ||
203 | mcp->attached_device.parent = parent; | ||
204 | mcp->attached_device.bus = &mcp_bus_type; | ||
205 | mcp->attached_device.dma_mask = parent->dma_mask; | ||
206 | mcp->attached_device.release = mcp_release; | ||
207 | } | ||
208 | return mcp; | ||
209 | } | ||
210 | EXPORT_SYMBOL(mcp_host_alloc); | ||
211 | |||
212 | int mcp_host_register(struct mcp *mcp) | ||
213 | { | ||
214 | strcpy(mcp->attached_device.bus_id, "mcp0"); | ||
215 | return device_register(&mcp->attached_device); | ||
216 | } | ||
217 | EXPORT_SYMBOL(mcp_host_register); | ||
218 | |||
219 | void mcp_host_unregister(struct mcp *mcp) | ||
220 | { | ||
221 | device_unregister(&mcp->attached_device); | ||
222 | } | ||
223 | EXPORT_SYMBOL(mcp_host_unregister); | ||
224 | |||
225 | int mcp_driver_register(struct mcp_driver *mcpdrv) | ||
226 | { | ||
227 | mcpdrv->drv.bus = &mcp_bus_type; | ||
228 | mcpdrv->drv.probe = mcp_bus_probe; | ||
229 | mcpdrv->drv.remove = mcp_bus_remove; | ||
230 | return driver_register(&mcpdrv->drv); | ||
231 | } | ||
232 | EXPORT_SYMBOL(mcp_driver_register); | ||
233 | |||
234 | void mcp_driver_unregister(struct mcp_driver *mcpdrv) | ||
235 | { | ||
236 | driver_unregister(&mcpdrv->drv); | ||
237 | } | ||
238 | EXPORT_SYMBOL(mcp_driver_unregister); | ||
239 | |||
240 | static int __init mcp_init(void) | ||
241 | { | ||
242 | return bus_register(&mcp_bus_type); | ||
243 | } | ||
244 | |||
245 | static void __exit mcp_exit(void) | ||
246 | { | ||
247 | bus_unregister(&mcp_bus_type); | ||
248 | } | ||
249 | |||
250 | module_init(mcp_init); | ||
251 | module_exit(mcp_exit); | ||
252 | |||
253 | MODULE_AUTHOR("Russell King <rmk@arm.linux.org.uk>"); | ||
254 | MODULE_DESCRIPTION("Core multimedia communications port driver"); | ||
255 | MODULE_LICENSE("GPL"); | ||
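The mcp-core.c file above defines the whole client-facing MCP API: mcp_driver_register() hooks a driver onto the "mcp" bus, mcp_enable()/mcp_disable() reference-count the interface, and mcp_reg_read()/mcp_reg_write() access codec registers under the MCP lock. A minimal, hypothetical client driver built only from those exported calls might look like the sketch below; the driver name, register index 0x0d and probe logic are invented for illustration.

	/* Sketch of a hypothetical MCP client driver */
	static int demo_codec_probe(struct mcp *mcp)
	{
		unsigned int id;

		mcp_enable(mcp);			/* bump the use count */
		id = mcp_reg_read(mcp, 0x0d);		/* 4-bit register index */
		mcp_reg_write(mcp, 0x0d, id | 1);
		mcp_disable(mcp);			/* balanced with mcp_enable() */
		return 0;
	}

	static void demo_codec_remove(struct mcp *mcp)
	{
	}

	static struct mcp_driver demo_codec_driver = {
		.drv	= { .name = "demo-codec" },
		.probe	= demo_codec_probe,
		.remove	= demo_codec_remove,
	};

	/* module init/exit would call mcp_driver_register()/mcp_driver_unregister() */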
diff --git a/drivers/mfd/mcp-sa11x0.c b/drivers/mfd/mcp-sa11x0.c new file mode 100644 index 000000000000..e9806fbbe696 --- /dev/null +++ b/drivers/mfd/mcp-sa11x0.c | |||
@@ -0,0 +1,275 @@ | |||
1 | /* | ||
2 | * linux/drivers/mfd/mcp-sa11x0.c | ||
3 | * | ||
4 | * Copyright (C) 2001-2005 Russell King | ||
5 | * | ||
6 | * This program is free software; you can redistribute it and/or modify | ||
7 | * it under the terms of the GNU General Public License as published by | ||
8 | * the Free Software Foundation; either version 2 of the License. | ||
9 | * | ||
10 | * SA11x0 MCP (Multimedia Communications Port) driver. | ||
11 | * | ||
12 | * MCP read/write timeouts from Jordi Colomer, rehacked by rmk. | ||
13 | */ | ||
14 | #include <linux/module.h> | ||
15 | #include <linux/init.h> | ||
16 | #include <linux/errno.h> | ||
17 | #include <linux/kernel.h> | ||
18 | #include <linux/delay.h> | ||
19 | #include <linux/spinlock.h> | ||
20 | #include <linux/slab.h> | ||
21 | #include <linux/device.h> | ||
22 | |||
23 | #include <asm/dma.h> | ||
24 | #include <asm/hardware.h> | ||
25 | #include <asm/mach-types.h> | ||
26 | #include <asm/system.h> | ||
27 | #include <asm/arch/mcp.h> | ||
28 | |||
29 | #include <asm/arch/assabet.h> | ||
30 | |||
31 | #include "mcp.h" | ||
32 | |||
33 | struct mcp_sa11x0 { | ||
34 | u32 mccr0; | ||
35 | u32 mccr1; | ||
36 | }; | ||
37 | |||
38 | #define priv(mcp) ((struct mcp_sa11x0 *)mcp_priv(mcp)) | ||
39 | |||
40 | static void | ||
41 | mcp_sa11x0_set_telecom_divisor(struct mcp *mcp, unsigned int divisor) | ||
42 | { | ||
43 | unsigned int mccr0; | ||
44 | |||
45 | divisor /= 32; | ||
46 | |||
47 | mccr0 = Ser4MCCR0 & ~0x00007f00; | ||
48 | mccr0 |= divisor << 8; | ||
49 | Ser4MCCR0 = mccr0; | ||
50 | } | ||
51 | |||
52 | static void | ||
53 | mcp_sa11x0_set_audio_divisor(struct mcp *mcp, unsigned int divisor) | ||
54 | { | ||
55 | unsigned int mccr0; | ||
56 | |||
57 | divisor /= 32; | ||
58 | |||
59 | mccr0 = Ser4MCCR0 & ~0x0000007f; | ||
60 | mccr0 |= divisor; | ||
61 | Ser4MCCR0 = mccr0; | ||
62 | } | ||
63 | |||
64 | /* | ||
65 | * Write data to the device. The bit should be set after 3 subframe | ||
66 | * times (each frame is 64 clocks). We wait a maximum of 6 subframes. | ||
67 | * We really should try doing something more productive while we | ||
68 | * wait. | ||
69 | */ | ||
70 | static void | ||
71 | mcp_sa11x0_write(struct mcp *mcp, unsigned int reg, unsigned int val) | ||
72 | { | ||
73 | int ret = -ETIME; | ||
74 | int i; | ||
75 | |||
76 | Ser4MCDR2 = reg << 17 | MCDR2_Wr | (val & 0xffff); | ||
77 | |||
78 | for (i = 0; i < 2; i++) { | ||
79 | udelay(mcp->rw_timeout); | ||
80 | if (Ser4MCSR & MCSR_CWC) { | ||
81 | ret = 0; | ||
82 | break; | ||
83 | } | ||
84 | } | ||
85 | |||
86 | if (ret < 0) | ||
87 | printk(KERN_WARNING "mcp: write timed out\n"); | ||
88 | } | ||
89 | |||
90 | /* | ||
91 | * Read data from the device. The bit should be set after 3 subframe | ||
92 | * times (each frame is 64 clocks). We wait a maximum of 6 subframes. | ||
93 | * We really should try doing something more productive while we | ||
94 | * wait. | ||
95 | */ | ||
96 | static unsigned int | ||
97 | mcp_sa11x0_read(struct mcp *mcp, unsigned int reg) | ||
98 | { | ||
99 | int ret = -ETIME; | ||
100 | int i; | ||
101 | |||
102 | Ser4MCDR2 = reg << 17 | MCDR2_Rd; | ||
103 | |||
104 | for (i = 0; i < 2; i++) { | ||
105 | udelay(mcp->rw_timeout); | ||
106 | if (Ser4MCSR & MCSR_CRC) { | ||
107 | ret = Ser4MCDR2 & 0xffff; | ||
108 | break; | ||
109 | } | ||
110 | } | ||
111 | |||
112 | if (ret < 0) | ||
113 | printk(KERN_WARNING "mcp: read timed out\n"); | ||
114 | |||
115 | return ret; | ||
116 | } | ||
117 | |||
118 | static void mcp_sa11x0_enable(struct mcp *mcp) | ||
119 | { | ||
120 | Ser4MCSR = -1; | ||
121 | Ser4MCCR0 |= MCCR0_MCE; | ||
122 | } | ||
123 | |||
124 | static void mcp_sa11x0_disable(struct mcp *mcp) | ||
125 | { | ||
126 | Ser4MCCR0 &= ~MCCR0_MCE; | ||
127 | } | ||
128 | |||
129 | /* | ||
130 | * Our methods. | ||
131 | */ | ||
132 | static struct mcp_ops mcp_sa11x0 = { | ||
133 | .set_telecom_divisor = mcp_sa11x0_set_telecom_divisor, | ||
134 | .set_audio_divisor = mcp_sa11x0_set_audio_divisor, | ||
135 | .reg_write = mcp_sa11x0_write, | ||
136 | .reg_read = mcp_sa11x0_read, | ||
137 | .enable = mcp_sa11x0_enable, | ||
138 | .disable = mcp_sa11x0_disable, | ||
139 | }; | ||
140 | |||
141 | static int mcp_sa11x0_probe(struct device *dev) | ||
142 | { | ||
143 | struct platform_device *pdev = to_platform_device(dev); | ||
144 | struct mcp_plat_data *data = pdev->dev.platform_data; | ||
145 | struct mcp *mcp; | ||
146 | int ret; | ||
147 | |||
148 | if (!data) | ||
149 | return -ENODEV; | ||
150 | |||
151 | if (!request_mem_region(0x80060000, 0x60, "sa11x0-mcp")) | ||
152 | return -EBUSY; | ||
153 | |||
154 | mcp = mcp_host_alloc(&pdev->dev, sizeof(struct mcp_sa11x0)); | ||
155 | if (!mcp) { | ||
156 | ret = -ENOMEM; | ||
157 | goto release; | ||
158 | } | ||
159 | |||
160 | mcp->owner = THIS_MODULE; | ||
161 | mcp->ops = &mcp_sa11x0; | ||
162 | mcp->sclk_rate = data->sclk_rate; | ||
163 | mcp->dma_audio_rd = DMA_Ser4MCP0Rd; | ||
164 | mcp->dma_audio_wr = DMA_Ser4MCP0Wr; | ||
165 | mcp->dma_telco_rd = DMA_Ser4MCP1Rd; | ||
166 | mcp->dma_telco_wr = DMA_Ser4MCP1Wr; | ||
167 | |||
168 | dev_set_drvdata(dev, mcp); | ||
169 | |||
170 | if (machine_is_assabet()) { | ||
171 | ASSABET_BCR_set(ASSABET_BCR_CODEC_RST); | ||
172 | } | ||
173 | |||
174 | /* | ||
175 | * Setup the PPC unit correctly. | ||
176 | */ | ||
177 | PPDR &= ~PPC_RXD4; | ||
178 | PPDR |= PPC_TXD4 | PPC_SCLK | PPC_SFRM; | ||
179 | PSDR |= PPC_RXD4; | ||
180 | PSDR &= ~(PPC_TXD4 | PPC_SCLK | PPC_SFRM); | ||
181 | PPSR &= ~(PPC_TXD4 | PPC_SCLK | PPC_SFRM); | ||
182 | |||
183 | /* | ||
184 | * Initialise device. Note that we initially | ||
185 | * set the sampling rate to minimum. | ||
186 | */ | ||
187 | Ser4MCSR = -1; | ||
188 | Ser4MCCR1 = data->mccr1; | ||
189 | Ser4MCCR0 = data->mccr0 | 0x7f7f; | ||
190 | |||
191 | /* | ||
192 | * Calculate the read/write timeout (us) from the bit clock | ||
193 | * rate. This is the period for 3 64-bit frames. Always | ||
194 | * round this time up. | ||
195 | */ | ||
196 | mcp->rw_timeout = (64 * 3 * 1000000 + mcp->sclk_rate - 1) / | ||
197 | mcp->sclk_rate; | ||
198 | |||
199 | ret = mcp_host_register(mcp); | ||
200 | if (ret == 0) | ||
201 | goto out; | ||
202 | |||
203 | release: | ||
204 | release_mem_region(0x80060000, 0x60); | ||
205 | dev_set_drvdata(dev, NULL); | ||
206 | |||
207 | out: | ||
208 | return ret; | ||
209 | } | ||
210 | |||
211 | static int mcp_sa11x0_remove(struct device *dev) | ||
212 | { | ||
213 | struct mcp *mcp = dev_get_drvdata(dev); | ||
214 | |||
215 | dev_set_drvdata(dev, NULL); | ||
216 | mcp_host_unregister(mcp); | ||
217 | release_mem_region(0x80060000, 0x60); | ||
218 | |||
219 | return 0; | ||
220 | } | ||
221 | |||
222 | static int mcp_sa11x0_suspend(struct device *dev, pm_message_t state, u32 level) | ||
223 | { | ||
224 | struct mcp *mcp = dev_get_drvdata(dev); | ||
225 | |||
226 | if (level == SUSPEND_DISABLE) { | ||
227 | priv(mcp)->mccr0 = Ser4MCCR0; | ||
228 | priv(mcp)->mccr1 = Ser4MCCR1; | ||
229 | Ser4MCCR0 &= ~MCCR0_MCE; | ||
230 | } | ||
231 | return 0; | ||
232 | } | ||
233 | |||
234 | static int mcp_sa11x0_resume(struct device *dev, u32 level) | ||
235 | { | ||
236 | struct mcp *mcp = dev_get_drvdata(dev); | ||
237 | |||
238 | if (level == RESUME_RESTORE_STATE) { | ||
239 | Ser4MCCR1 = priv(mcp)->mccr1; | ||
240 | Ser4MCCR0 = priv(mcp)->mccr0; | ||
241 | } | ||
242 | return 0; | ||
243 | } | ||
244 | |||
245 | /* | ||
246 | * The driver for the SA11x0 MCP port. | ||
247 | */ | ||
248 | static struct device_driver mcp_sa11x0_driver = { | ||
249 | .name = "sa11x0-mcp", | ||
250 | .bus = &platform_bus_type, | ||
251 | .probe = mcp_sa11x0_probe, | ||
252 | .remove = mcp_sa11x0_remove, | ||
253 | .suspend = mcp_sa11x0_suspend, | ||
254 | .resume = mcp_sa11x0_resume, | ||
255 | }; | ||
256 | |||
257 | /* | ||
258 | * This needs re-working | ||
259 | */ | ||
260 | static int __init mcp_sa11x0_init(void) | ||
261 | { | ||
262 | return driver_register(&mcp_sa11x0_driver); | ||
263 | } | ||
264 | |||
265 | static void __exit mcp_sa11x0_exit(void) | ||
266 | { | ||
267 | driver_unregister(&mcp_sa11x0_driver); | ||
268 | } | ||
269 | |||
270 | module_init(mcp_sa11x0_init); | ||
271 | module_exit(mcp_sa11x0_exit); | ||
272 | |||
273 | MODULE_AUTHOR("Russell King <rmk@arm.linux.org.uk>"); | ||
274 | MODULE_DESCRIPTION("SA11x0 multimedia communications port driver"); | ||
275 | MODULE_LICENSE("GPL"); | ||
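The probe routine above derives rw_timeout from the SIB clock rate as the period of 3 64-bit frames, rounded up. As a worked example with an assumed sclk_rate of 12,000,000 Hz (purely illustrative):

	rw_timeout = (64 * 3 * 1000000 + 12000000 - 1) / 12000000
	           = 203999999 / 12000000
	           = 16 microseconds		/* 192 bit clocks at 12 MHz */

The read and write paths then udelay() this value at most twice before reporting a timeout, matching the "maximum of 6 subframes" mentioned in their comments.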
diff --git a/drivers/mfd/mcp.h b/drivers/mfd/mcp.h new file mode 100644 index 000000000000..c093a93b8808 --- /dev/null +++ b/drivers/mfd/mcp.h | |||
@@ -0,0 +1,66 @@ | |||
1 | /* | ||
2 | * linux/drivers/mfd/mcp.h | ||
3 | * | ||
4 | * Copyright (C) 2001 Russell King, All Rights Reserved. | ||
5 | * | ||
6 | * This program is free software; you can redistribute it and/or modify | ||
7 | * it under the terms of the GNU General Public License as published by | ||
8 | * the Free Software Foundation; either version 2 of the License. | ||
9 | */ | ||
10 | #ifndef MCP_H | ||
11 | #define MCP_H | ||
12 | |||
13 | struct mcp_ops; | ||
14 | |||
15 | struct mcp { | ||
16 | struct module *owner; | ||
17 | struct mcp_ops *ops; | ||
18 | spinlock_t lock; | ||
19 | int use_count; | ||
20 | unsigned int sclk_rate; | ||
21 | unsigned int rw_timeout; | ||
22 | dma_device_t dma_audio_rd; | ||
23 | dma_device_t dma_audio_wr; | ||
24 | dma_device_t dma_telco_rd; | ||
25 | dma_device_t dma_telco_wr; | ||
26 | struct device attached_device; | ||
27 | }; | ||
28 | |||
29 | struct mcp_ops { | ||
30 | void (*set_telecom_divisor)(struct mcp *, unsigned int); | ||
31 | void (*set_audio_divisor)(struct mcp *, unsigned int); | ||
32 | void (*reg_write)(struct mcp *, unsigned int, unsigned int); | ||
33 | unsigned int (*reg_read)(struct mcp *, unsigned int); | ||
34 | void (*enable)(struct mcp *); | ||
35 | void (*disable)(struct mcp *); | ||
36 | }; | ||
37 | |||
38 | void mcp_set_telecom_divisor(struct mcp *, unsigned int); | ||
39 | void mcp_set_audio_divisor(struct mcp *, unsigned int); | ||
40 | void mcp_reg_write(struct mcp *, unsigned int, unsigned int); | ||
41 | unsigned int mcp_reg_read(struct mcp *, unsigned int); | ||
42 | void mcp_enable(struct mcp *); | ||
43 | void mcp_disable(struct mcp *); | ||
44 | #define mcp_get_sclk_rate(mcp) ((mcp)->sclk_rate) | ||
45 | |||
46 | struct mcp *mcp_host_alloc(struct device *, size_t); | ||
47 | int mcp_host_register(struct mcp *); | ||
48 | void mcp_host_unregister(struct mcp *); | ||
49 | |||
50 | struct mcp_driver { | ||
51 | struct device_driver drv; | ||
52 | int (*probe)(struct mcp *); | ||
53 | void (*remove)(struct mcp *); | ||
54 | int (*suspend)(struct mcp *, pm_message_t); | ||
55 | int (*resume)(struct mcp *); | ||
56 | }; | ||
57 | |||
58 | int mcp_driver_register(struct mcp_driver *); | ||
59 | void mcp_driver_unregister(struct mcp_driver *); | ||
60 | |||
61 | #define mcp_get_drvdata(mcp) dev_get_drvdata(&(mcp)->attached_device) | ||
62 | #define mcp_set_drvdata(mcp,d) dev_set_drvdata(&(mcp)->attached_device, d) | ||
63 | |||
64 | #define mcp_priv(mcp) ((void *)((mcp)+1)) | ||
65 | |||
66 | #endif | ||
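mcp_host_alloc() in mcp-core.c allocates sizeof(struct mcp) + size in a single block, and the mcp_priv() macro above returns a pointer just past the struct mcp header. A short sketch of how an interface driver uses that, modelled on the mcp-sa11x0.c code above (struct my_priv and pdev are hypothetical):

	struct my_priv {		/* hypothetical per-interface state */
		u32 saved_reg;
	};

	struct mcp *mcp = mcp_host_alloc(&pdev->dev, sizeof(struct my_priv));
	if (mcp) {
		struct my_priv *priv = mcp_priv(mcp);	/* == (void *)(mcp + 1) */
		priv->saved_reg = 0;
	}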
diff --git a/drivers/mmc/mmc.c b/drivers/mmc/mmc.c index eeb9f6668e69..3c5904834fe8 100644 --- a/drivers/mmc/mmc.c +++ b/drivers/mmc/mmc.c | |||
@@ -361,7 +361,7 @@ static void mmc_decode_cid(struct mmc_card *card) | |||
361 | 361 | ||
362 | default: | 362 | default: |
363 | printk("%s: card has unknown MMCA version %d\n", | 363 | printk("%s: card has unknown MMCA version %d\n", |
364 | card->host->host_name, card->csd.mmca_vsn); | 364 | mmc_hostname(card->host), card->csd.mmca_vsn); |
365 | mmc_card_set_bad(card); | 365 | mmc_card_set_bad(card); |
366 | break; | 366 | break; |
367 | } | 367 | } |
@@ -383,7 +383,7 @@ static void mmc_decode_csd(struct mmc_card *card) | |||
383 | csd_struct = UNSTUFF_BITS(resp, 126, 2); | 383 | csd_struct = UNSTUFF_BITS(resp, 126, 2); |
384 | if (csd_struct != 1 && csd_struct != 2) { | 384 | if (csd_struct != 1 && csd_struct != 2) { |
385 | printk("%s: unrecognised CSD structure version %d\n", | 385 | printk("%s: unrecognised CSD structure version %d\n", |
386 | card->host->host_name, csd_struct); | 386 | mmc_hostname(card->host), csd_struct); |
387 | mmc_card_set_bad(card); | 387 | mmc_card_set_bad(card); |
388 | return; | 388 | return; |
389 | } | 389 | } |
@@ -551,7 +551,7 @@ static void mmc_discover_cards(struct mmc_host *host) | |||
551 | } | 551 | } |
552 | if (err != MMC_ERR_NONE) { | 552 | if (err != MMC_ERR_NONE) { |
553 | printk(KERN_ERR "%s: error requesting CID: %d\n", | 553 | printk(KERN_ERR "%s: error requesting CID: %d\n", |
554 | host->host_name, err); | 554 | mmc_hostname(host), err); |
555 | break; | 555 | break; |
556 | } | 556 | } |
557 | 557 | ||
@@ -796,17 +796,13 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev) | |||
796 | { | 796 | { |
797 | struct mmc_host *host; | 797 | struct mmc_host *host; |
798 | 798 | ||
799 | host = kmalloc(sizeof(struct mmc_host) + extra, GFP_KERNEL); | 799 | host = mmc_alloc_host_sysfs(extra, dev); |
800 | if (host) { | 800 | if (host) { |
801 | memset(host, 0, sizeof(struct mmc_host) + extra); | ||
802 | |||
803 | spin_lock_init(&host->lock); | 801 | spin_lock_init(&host->lock); |
804 | init_waitqueue_head(&host->wq); | 802 | init_waitqueue_head(&host->wq); |
805 | INIT_LIST_HEAD(&host->cards); | 803 | INIT_LIST_HEAD(&host->cards); |
806 | INIT_WORK(&host->detect, mmc_rescan, host); | 804 | INIT_WORK(&host->detect, mmc_rescan, host); |
807 | 805 | ||
808 | host->dev = dev; | ||
809 | |||
810 | /* | 806 | /* |
811 | * By default, hosts do not support SGIO or large requests. | 807 | * By default, hosts do not support SGIO or large requests. |
812 | * They have to set these according to their abilities. | 808 | * They have to set these according to their abilities. |
@@ -828,15 +824,15 @@ EXPORT_SYMBOL(mmc_alloc_host); | |||
828 | */ | 824 | */ |
829 | int mmc_add_host(struct mmc_host *host) | 825 | int mmc_add_host(struct mmc_host *host) |
830 | { | 826 | { |
831 | static unsigned int host_num; | 827 | int ret; |
832 | 828 | ||
833 | snprintf(host->host_name, sizeof(host->host_name), | 829 | ret = mmc_add_host_sysfs(host); |
834 | "mmc%d", host_num++); | 830 | if (ret == 0) { |
835 | 831 | mmc_power_off(host); | |
836 | mmc_power_off(host); | 832 | mmc_detect_change(host); |
837 | mmc_detect_change(host); | 833 | } |
838 | 834 | ||
839 | return 0; | 835 | return ret; |
840 | } | 836 | } |
841 | 837 | ||
842 | EXPORT_SYMBOL(mmc_add_host); | 838 | EXPORT_SYMBOL(mmc_add_host); |
@@ -859,6 +855,7 @@ void mmc_remove_host(struct mmc_host *host) | |||
859 | } | 855 | } |
860 | 856 | ||
861 | mmc_power_off(host); | 857 | mmc_power_off(host); |
858 | mmc_remove_host_sysfs(host); | ||
862 | } | 859 | } |
863 | 860 | ||
864 | EXPORT_SYMBOL(mmc_remove_host); | 861 | EXPORT_SYMBOL(mmc_remove_host); |
@@ -872,7 +869,7 @@ EXPORT_SYMBOL(mmc_remove_host); | |||
872 | void mmc_free_host(struct mmc_host *host) | 869 | void mmc_free_host(struct mmc_host *host) |
873 | { | 870 | { |
874 | flush_scheduled_work(); | 871 | flush_scheduled_work(); |
875 | kfree(host); | 872 | mmc_free_host_sysfs(host); |
876 | } | 873 | } |
877 | 874 | ||
878 | EXPORT_SYMBOL(mmc_free_host); | 875 | EXPORT_SYMBOL(mmc_free_host); |
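With the sysfs split above, mmc.c keeps the public mmc_alloc_host()/mmc_add_host()/mmc_remove_host()/mmc_free_host() entry points and delegates class-device handling to the new *_sysfs helpers; the final kfree() of the host now happens in the class release function rather than directly in mmc_free_host(). A hypothetical host-driver skeleton pairing these calls, with error handling trimmed and struct my_host invented for illustration:

	struct mmc_host *mmc;

	mmc = mmc_alloc_host(sizeof(struct my_host), &pdev->dev);
	if (!mmc)
		return -ENOMEM;
	/* ... fill in host ops and capabilities ... */
	if (mmc_add_host(mmc)) {
		mmc_free_host(mmc);
		return -ENODEV;
	}

	/* teardown, in reverse order */
	mmc_remove_host(mmc);
	mmc_free_host(mmc);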
diff --git a/drivers/mmc/mmc.h b/drivers/mmc/mmc.h index b498dffe0b11..97bae00292fa 100644 --- a/drivers/mmc/mmc.h +++ b/drivers/mmc/mmc.h | |||
@@ -13,4 +13,9 @@ | |||
13 | void mmc_init_card(struct mmc_card *card, struct mmc_host *host); | 13 | void mmc_init_card(struct mmc_card *card, struct mmc_host *host); |
14 | int mmc_register_card(struct mmc_card *card); | 14 | int mmc_register_card(struct mmc_card *card); |
15 | void mmc_remove_card(struct mmc_card *card); | 15 | void mmc_remove_card(struct mmc_card *card); |
16 | |||
17 | struct mmc_host *mmc_alloc_host_sysfs(int extra, struct device *dev); | ||
18 | int mmc_add_host_sysfs(struct mmc_host *host); | ||
19 | void mmc_remove_host_sysfs(struct mmc_host *host); | ||
20 | void mmc_free_host_sysfs(struct mmc_host *host); | ||
16 | #endif | 21 | #endif |
diff --git a/drivers/mmc/mmc_sysfs.c b/drivers/mmc/mmc_sysfs.c index 5556cd3b5559..ad8949810fc5 100644 --- a/drivers/mmc/mmc_sysfs.c +++ b/drivers/mmc/mmc_sysfs.c | |||
@@ -12,6 +12,7 @@ | |||
12 | #include <linux/module.h> | 12 | #include <linux/module.h> |
13 | #include <linux/init.h> | 13 | #include <linux/init.h> |
14 | #include <linux/device.h> | 14 | #include <linux/device.h> |
15 | #include <linux/idr.h> | ||
15 | 16 | ||
16 | #include <linux/mmc/card.h> | 17 | #include <linux/mmc/card.h> |
17 | #include <linux/mmc/host.h> | 18 | #include <linux/mmc/host.h> |
@@ -20,6 +21,7 @@ | |||
20 | 21 | ||
21 | #define dev_to_mmc_card(d) container_of(d, struct mmc_card, dev) | 22 | #define dev_to_mmc_card(d) container_of(d, struct mmc_card, dev) |
22 | #define to_mmc_driver(d) container_of(d, struct mmc_driver, drv) | 23 | #define to_mmc_driver(d) container_of(d, struct mmc_driver, drv) |
24 | #define cls_dev_to_mmc_host(d) container_of(d, struct mmc_host, class_dev) | ||
23 | 25 | ||
24 | #define MMC_ATTR(name, fmt, args...) \ | 26 | #define MMC_ATTR(name, fmt, args...) \ |
25 | static ssize_t mmc_##name##_show (struct device *dev, struct device_attribute *attr, char *buf) \ | 27 | static ssize_t mmc_##name##_show (struct device *dev, struct device_attribute *attr, char *buf) \ |
@@ -206,7 +208,7 @@ void mmc_init_card(struct mmc_card *card, struct mmc_host *host) | |||
206 | int mmc_register_card(struct mmc_card *card) | 208 | int mmc_register_card(struct mmc_card *card) |
207 | { | 209 | { |
208 | snprintf(card->dev.bus_id, sizeof(card->dev.bus_id), | 210 | snprintf(card->dev.bus_id, sizeof(card->dev.bus_id), |
209 | "%s:%04x", card->host->host_name, card->rca); | 211 | "%s:%04x", mmc_hostname(card->host), card->rca); |
210 | 212 | ||
211 | return device_add(&card->dev); | 213 | return device_add(&card->dev); |
212 | } | 214 | } |
@@ -224,13 +226,97 @@ void mmc_remove_card(struct mmc_card *card) | |||
224 | } | 226 | } |
225 | 227 | ||
226 | 228 | ||
229 | static void mmc_host_classdev_release(struct class_device *dev) | ||
230 | { | ||
231 | struct mmc_host *host = cls_dev_to_mmc_host(dev); | ||
232 | kfree(host); | ||
233 | } | ||
234 | |||
235 | static struct class mmc_host_class = { | ||
236 | .name = "mmc_host", | ||
237 | .release = mmc_host_classdev_release, | ||
238 | }; | ||
239 | |||
240 | static DEFINE_IDR(mmc_host_idr); | ||
241 | static DEFINE_SPINLOCK(mmc_host_lock); | ||
242 | |||
243 | /* | ||
244 | * Internal function. Allocate a new MMC host. | ||
245 | */ | ||
246 | struct mmc_host *mmc_alloc_host_sysfs(int extra, struct device *dev) | ||
247 | { | ||
248 | struct mmc_host *host; | ||
249 | |||
250 | host = kmalloc(sizeof(struct mmc_host) + extra, GFP_KERNEL); | ||
251 | if (host) { | ||
252 | memset(host, 0, sizeof(struct mmc_host) + extra); | ||
253 | |||
254 | host->dev = dev; | ||
255 | host->class_dev.dev = host->dev; | ||
256 | host->class_dev.class = &mmc_host_class; | ||
257 | class_device_initialize(&host->class_dev); | ||
258 | } | ||
259 | |||
260 | return host; | ||
261 | } | ||
262 | |||
263 | /* | ||
264 | * Internal function. Register a new MMC host with the MMC class. | ||
265 | */ | ||
266 | int mmc_add_host_sysfs(struct mmc_host *host) | ||
267 | { | ||
268 | int err; | ||
269 | |||
270 | if (!idr_pre_get(&mmc_host_idr, GFP_KERNEL)) | ||
271 | return -ENOMEM; | ||
272 | |||
273 | spin_lock(&mmc_host_lock); | ||
274 | err = idr_get_new(&mmc_host_idr, host, &host->index); | ||
275 | spin_unlock(&mmc_host_lock); | ||
276 | if (err) | ||
277 | return err; | ||
278 | |||
279 | snprintf(host->class_dev.class_id, BUS_ID_SIZE, | ||
280 | "mmc%d", host->index); | ||
281 | |||
282 | return class_device_add(&host->class_dev); | ||
283 | } | ||
284 | |||
285 | /* | ||
286 | * Internal function. Unregister a MMC host with the MMC class. | ||
287 | */ | ||
288 | void mmc_remove_host_sysfs(struct mmc_host *host) | ||
289 | { | ||
290 | class_device_del(&host->class_dev); | ||
291 | |||
292 | spin_lock(&mmc_host_lock); | ||
293 | idr_remove(&mmc_host_idr, host->index); | ||
294 | spin_unlock(&mmc_host_lock); | ||
295 | } | ||
296 | |||
297 | /* | ||
298 | * Internal function. Free a MMC host. | ||
299 | */ | ||
300 | void mmc_free_host_sysfs(struct mmc_host *host) | ||
301 | { | ||
302 | class_device_put(&host->class_dev); | ||
303 | } | ||
304 | |||
305 | |||
227 | static int __init mmc_init(void) | 306 | static int __init mmc_init(void) |
228 | { | 307 | { |
229 | return bus_register(&mmc_bus_type); | 308 | int ret = bus_register(&mmc_bus_type); |
309 | if (ret == 0) { | ||
310 | ret = class_register(&mmc_host_class); | ||
311 | if (ret) | ||
312 | bus_unregister(&mmc_bus_type); | ||
313 | } | ||
314 | return ret; | ||
230 | } | 315 | } |
231 | 316 | ||
232 | static void __exit mmc_exit(void) | 317 | static void __exit mmc_exit(void) |
233 | { | 318 | { |
319 | class_unregister(&mmc_host_class); | ||
234 | bus_unregister(&mmc_bus_type); | 320 | bus_unregister(&mmc_bus_type); |
235 | } | 321 | } |
236 | 322 | ||
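mmc_add_host_sysfs() above replaces the old static host_num counter with an idr, so indices become reusable after a host is removed and allocation is serialised by mmc_host_lock. The same pre-get/get/remove pattern, shown standalone with the 2.6-era idr API used in the file (names are placeholders):

	static DEFINE_IDR(my_idr);
	static DEFINE_SPINLOCK(my_lock);

	int my_get_index(void *ptr, int *index)
	{
		int err;

		if (!idr_pre_get(&my_idr, GFP_KERNEL))	/* preallocate outside the lock */
			return -ENOMEM;

		spin_lock(&my_lock);
		err = idr_get_new(&my_idr, ptr, index);	/* stores ptr, returns new id */
		spin_unlock(&my_lock);
		return err;
	}

	void my_put_index(int index)
	{
		spin_lock(&my_lock);
		idr_remove(&my_idr, index);
		spin_unlock(&my_lock);
	}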
diff --git a/drivers/mmc/mmci.c b/drivers/mmc/mmci.c index 7a42966d755b..716c4ef4faf6 100644 --- a/drivers/mmc/mmci.c +++ b/drivers/mmc/mmci.c | |||
@@ -34,7 +34,7 @@ | |||
34 | 34 | ||
35 | #ifdef CONFIG_MMC_DEBUG | 35 | #ifdef CONFIG_MMC_DEBUG |
36 | #define DBG(host,fmt,args...) \ | 36 | #define DBG(host,fmt,args...) \ |
37 | pr_debug("%s: %s: " fmt, host->mmc->host_name, __func__ , args) | 37 | pr_debug("%s: %s: " fmt, mmc_hostname(host->mmc), __func__ , args) |
38 | #else | 38 | #else |
39 | #define DBG(host,fmt,args...) do { } while (0) | 39 | #define DBG(host,fmt,args...) do { } while (0) |
40 | #endif | 40 | #endif |
@@ -541,7 +541,7 @@ static int mmci_probe(struct amba_device *dev, void *id) | |||
541 | mmc_add_host(mmc); | 541 | mmc_add_host(mmc); |
542 | 542 | ||
543 | printk(KERN_INFO "%s: MMCI rev %x cfg %02x at 0x%08lx irq %d,%d\n", | 543 | printk(KERN_INFO "%s: MMCI rev %x cfg %02x at 0x%08lx irq %d,%d\n", |
544 | mmc->host_name, amba_rev(dev), amba_config(dev), | 544 | mmc_hostname(mmc), amba_rev(dev), amba_config(dev), |
545 | dev->res.start, dev->irq[0], dev->irq[1]); | 545 | dev->res.start, dev->irq[0], dev->irq[1]); |
546 | 546 | ||
547 | init_timer(&host->timer); | 547 | init_timer(&host->timer); |
diff --git a/drivers/mmc/wbsd.c b/drivers/mmc/wbsd.c index 974f2f36bdbe..402c2d661fb2 100644 --- a/drivers/mmc/wbsd.c +++ b/drivers/mmc/wbsd.c | |||
@@ -1796,7 +1796,7 @@ static int __devinit wbsd_init(struct device* dev, int base, int irq, int dma, | |||
1796 | 1796 | ||
1797 | mmc_add_host(mmc); | 1797 | mmc_add_host(mmc); |
1798 | 1798 | ||
1799 | printk(KERN_INFO "%s: W83L51xD", mmc->host_name); | 1799 | printk(KERN_INFO "%s: W83L51xD", mmc_hostname(mmc)); |
1800 | if (host->chip_id != 0) | 1800 | if (host->chip_id != 0) |
1801 | printk(" id %x", (int)host->chip_id); | 1801 | printk(" id %x", (int)host->chip_id); |
1802 | printk(" at 0x%x irq %d", (int)host->base, (int)host->irq); | 1802 | printk(" at 0x%x irq %d", (int)host->base, (int)host->irq); |
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 8edb6936fb9b..79e8aa6f2b9e 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig | |||
@@ -131,6 +131,8 @@ config NET_SB1000 | |||
131 | 131 | ||
132 | source "drivers/net/arcnet/Kconfig" | 132 | source "drivers/net/arcnet/Kconfig" |
133 | 133 | ||
134 | source "drivers/net/phy/Kconfig" | ||
135 | |||
134 | # | 136 | # |
135 | # Ethernet | 137 | # Ethernet |
136 | # | 138 | # |
diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 63c6d1e6d4d9..a369ae284a9a 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile | |||
@@ -65,6 +65,7 @@ obj-$(CONFIG_ADAPTEC_STARFIRE) += starfire.o | |||
65 | # | 65 | # |
66 | 66 | ||
67 | obj-$(CONFIG_MII) += mii.o | 67 | obj-$(CONFIG_MII) += mii.o |
68 | obj-$(CONFIG_PHYLIB) += phy/ | ||
68 | 69 | ||
69 | obj-$(CONFIG_SUNDANCE) += sundance.o | 70 | obj-$(CONFIG_SUNDANCE) += sundance.o |
70 | obj-$(CONFIG_HAMACHI) += hamachi.o | 71 | obj-$(CONFIG_HAMACHI) += hamachi.o |
diff --git a/drivers/net/Space.c b/drivers/net/Space.c index 3707df6b0cfa..60304f7e7e5b 100644 --- a/drivers/net/Space.c +++ b/drivers/net/Space.c | |||
@@ -87,7 +87,6 @@ extern struct net_device *mvme147lance_probe(int unit); | |||
87 | extern struct net_device *tc515_probe(int unit); | 87 | extern struct net_device *tc515_probe(int unit); |
88 | extern struct net_device *lance_probe(int unit); | 88 | extern struct net_device *lance_probe(int unit); |
89 | extern struct net_device *mace_probe(int unit); | 89 | extern struct net_device *mace_probe(int unit); |
90 | extern struct net_device *macsonic_probe(int unit); | ||
91 | extern struct net_device *mac8390_probe(int unit); | 90 | extern struct net_device *mac8390_probe(int unit); |
92 | extern struct net_device *mac89x0_probe(int unit); | 91 | extern struct net_device *mac89x0_probe(int unit); |
93 | extern struct net_device *mc32_probe(int unit); | 92 | extern struct net_device *mc32_probe(int unit); |
@@ -284,9 +283,6 @@ static struct devprobe2 m68k_probes[] __initdata = { | |||
284 | #ifdef CONFIG_MACMACE /* Mac 68k Quadra AV builtin Ethernet */ | 283 | #ifdef CONFIG_MACMACE /* Mac 68k Quadra AV builtin Ethernet */ |
285 | {mace_probe, 0}, | 284 | {mace_probe, 0}, |
286 | #endif | 285 | #endif |
287 | #ifdef CONFIG_MACSONIC /* Mac SONIC-based Ethernet of all sorts */ | ||
288 | {macsonic_probe, 0}, | ||
289 | #endif | ||
290 | #ifdef CONFIG_MAC8390 /* NuBus NS8390-based cards */ | 286 | #ifdef CONFIG_MAC8390 /* NuBus NS8390-based cards */ |
291 | {mac8390_probe, 0}, | 287 | {mac8390_probe, 0}, |
292 | #endif | 288 | #endif |
@@ -318,17 +314,9 @@ static void __init ethif_probe2(int unit) | |||
318 | #ifdef CONFIG_TR | 314 | #ifdef CONFIG_TR |
319 | /* Token-ring device probe */ | 315 | /* Token-ring device probe */ |
320 | extern int ibmtr_probe_card(struct net_device *); | 316 | extern int ibmtr_probe_card(struct net_device *); |
321 | extern struct net_device *sk_isa_probe(int unit); | ||
322 | extern struct net_device *proteon_probe(int unit); | ||
323 | extern struct net_device *smctr_probe(int unit); | 317 | extern struct net_device *smctr_probe(int unit); |
324 | 318 | ||
325 | static struct devprobe2 tr_probes2[] __initdata = { | 319 | static struct devprobe2 tr_probes2[] __initdata = { |
326 | #ifdef CONFIG_SKISA | ||
327 | {sk_isa_probe, 0}, | ||
328 | #endif | ||
329 | #ifdef CONFIG_PROTEON | ||
330 | {proteon_probe, 0}, | ||
331 | #endif | ||
332 | #ifdef CONFIG_SMCTR | 320 | #ifdef CONFIG_SMCTR |
333 | {smctr_probe, 0}, | 321 | {smctr_probe, 0}, |
334 | #endif | 322 | #endif |
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c index 5ce606d9dc03..19e829b567d0 100644 --- a/drivers/net/bonding/bond_alb.c +++ b/drivers/net/bonding/bond_alb.c | |||
@@ -1106,18 +1106,13 @@ static int alb_handle_addr_collision_on_attach(struct bonding *bond, struct slav | |||
1106 | } | 1106 | } |
1107 | } | 1107 | } |
1108 | 1108 | ||
1109 | if (found) { | 1109 | if (!found) |
1110 | /* a slave was found that is using the mac address | 1110 | return 0; |
1111 | * of the new slave | ||
1112 | */ | ||
1113 | printk(KERN_ERR DRV_NAME | ||
1114 | ": Error: the hw address of slave %s is not " | ||
1115 | "unique - cannot enslave it!", | ||
1116 | slave->dev->name); | ||
1117 | return -EINVAL; | ||
1118 | } | ||
1119 | 1111 | ||
1120 | return 0; | 1112 | /* Try setting slave mac to bond address and fall-through |
1113 | to code handling that situation below... */ | ||
1114 | alb_set_slave_mac_addr(slave, bond->dev->dev_addr, | ||
1115 | bond->alb_info.rlb_enabled); | ||
1121 | } | 1116 | } |
1122 | 1117 | ||
1123 | /* The slave's address is equal to the address of the bond. | 1118 | /* The slave's address is equal to the address of the bond. |
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 2c930da90a85..94c9f68dd16b 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c | |||
@@ -1604,6 +1604,44 @@ static int bond_sethwaddr(struct net_device *bond_dev, struct net_device *slave_ | |||
1604 | return 0; | 1604 | return 0; |
1605 | } | 1605 | } |
1606 | 1606 | ||
1607 | #define BOND_INTERSECT_FEATURES \ | ||
1608 | (NETIF_F_SG|NETIF_F_IP_CSUM|NETIF_F_NO_CSUM|NETIF_F_HW_CSUM) | ||
1609 | |||
1610 | /* | ||
1611 | * Compute the features available to the bonding device by | ||
1612 | * intersection of all of the slave devices' BOND_INTERSECT_FEATURES. | ||
1613 | * Call this after attaching or detaching a slave to update the | ||
1614 | * bond's features. | ||
1615 | */ | ||
1616 | static int bond_compute_features(struct bonding *bond) | ||
1617 | { | ||
1618 | int i; | ||
1619 | struct slave *slave; | ||
1620 | struct net_device *bond_dev = bond->dev; | ||
1621 | int features = bond->bond_features; | ||
1622 | |||
1623 | bond_for_each_slave(bond, slave, i) { | ||
1624 | struct net_device * slave_dev = slave->dev; | ||
1625 | if (i == 0) { | ||
1626 | features |= BOND_INTERSECT_FEATURES; | ||
1627 | } | ||
1628 | features &= | ||
1629 | ~(~slave_dev->features & BOND_INTERSECT_FEATURES); | ||
1630 | } | ||
1631 | |||
1632 | /* turn off NETIF_F_SG if we need a csum and h/w can't do it */ | ||
1633 | if ((features & NETIF_F_SG) && | ||
1634 | !(features & (NETIF_F_IP_CSUM | | ||
1635 | NETIF_F_NO_CSUM | | ||
1636 | NETIF_F_HW_CSUM))) { | ||
1637 | features &= ~NETIF_F_SG; | ||
1638 | } | ||
1639 | |||
1640 | bond_dev->features = features; | ||
1641 | |||
1642 | return 0; | ||
1643 | } | ||
1644 | |||
1607 | /* enslave device <slave> to bond device <master> */ | 1645 | /* enslave device <slave> to bond device <master> */ |
1608 | static int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev) | 1646 | static int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev) |
1609 | { | 1647 | { |
@@ -1811,6 +1849,8 @@ static int bond_enslave(struct net_device *bond_dev, struct net_device *slave_de | |||
1811 | new_slave->delay = 0; | 1849 | new_slave->delay = 0; |
1812 | new_slave->link_failure_count = 0; | 1850 | new_slave->link_failure_count = 0; |
1813 | 1851 | ||
1852 | bond_compute_features(bond); | ||
1853 | |||
1814 | if (bond->params.miimon && !bond->params.use_carrier) { | 1854 | if (bond->params.miimon && !bond->params.use_carrier) { |
1815 | link_reporting = bond_check_dev_link(bond, slave_dev, 1); | 1855 | link_reporting = bond_check_dev_link(bond, slave_dev, 1); |
1816 | 1856 | ||
@@ -2015,7 +2055,7 @@ err_free: | |||
2015 | 2055 | ||
2016 | err_undo_flags: | 2056 | err_undo_flags: |
2017 | bond_dev->features = old_features; | 2057 | bond_dev->features = old_features; |
2018 | 2058 | ||
2019 | return res; | 2059 | return res; |
2020 | } | 2060 | } |
2021 | 2061 | ||
@@ -2100,6 +2140,8 @@ static int bond_release(struct net_device *bond_dev, struct net_device *slave_de | |||
2100 | /* release the slave from its bond */ | 2140 | /* release the slave from its bond */ |
2101 | bond_detach_slave(bond, slave); | 2141 | bond_detach_slave(bond, slave); |
2102 | 2142 | ||
2143 | bond_compute_features(bond); | ||
2144 | |||
2103 | if (bond->primary_slave == slave) { | 2145 | if (bond->primary_slave == slave) { |
2104 | bond->primary_slave = NULL; | 2146 | bond->primary_slave = NULL; |
2105 | } | 2147 | } |
@@ -2243,6 +2285,8 @@ static int bond_release_all(struct net_device *bond_dev) | |||
2243 | bond_alb_deinit_slave(bond, slave); | 2285 | bond_alb_deinit_slave(bond, slave); |
2244 | } | 2286 | } |
2245 | 2287 | ||
2288 | bond_compute_features(bond); | ||
2289 | |||
2246 | /* now that the slave is detached, unlock and perform | 2290 | /* now that the slave is detached, unlock and perform |
2247 | * all the undo steps that should not be called from | 2291 | * all the undo steps that should not be called from |
2248 | * within a lock. | 2292 | * within a lock. |
@@ -3588,6 +3632,7 @@ static int bond_master_netdev_event(unsigned long event, struct net_device *bond | |||
3588 | static int bond_slave_netdev_event(unsigned long event, struct net_device *slave_dev) | 3632 | static int bond_slave_netdev_event(unsigned long event, struct net_device *slave_dev) |
3589 | { | 3633 | { |
3590 | struct net_device *bond_dev = slave_dev->master; | 3634 | struct net_device *bond_dev = slave_dev->master; |
3635 | struct bonding *bond = bond_dev->priv; | ||
3591 | 3636 | ||
3592 | switch (event) { | 3637 | switch (event) { |
3593 | case NETDEV_UNREGISTER: | 3638 | case NETDEV_UNREGISTER: |
@@ -3626,6 +3671,9 @@ static int bond_slave_netdev_event(unsigned long event, struct net_device *slave | |||
3626 | * TODO: handle changing the primary's name | 3671 | * TODO: handle changing the primary's name |
3627 | */ | 3672 | */ |
3628 | break; | 3673 | break; |
3674 | case NETDEV_FEAT_CHANGE: | ||
3675 | bond_compute_features(bond); | ||
3676 | break; | ||
3629 | default: | 3677 | default: |
3630 | break; | 3678 | break; |
3631 | } | 3679 | } |
@@ -4526,6 +4574,11 @@ static inline void bond_set_mode_ops(struct bonding *bond, int mode) | |||
4526 | } | 4574 | } |
4527 | } | 4575 | } |
4528 | 4576 | ||
4577 | static struct ethtool_ops bond_ethtool_ops = { | ||
4578 | .get_tx_csum = ethtool_op_get_tx_csum, | ||
4579 | .get_sg = ethtool_op_get_sg, | ||
4580 | }; | ||
4581 | |||
4529 | /* | 4582 | /* |
4530 | * Does not allocate but creates a /proc entry. | 4583 | * Does not allocate but creates a /proc entry. |
4531 | * Allowed to fail. | 4584 | * Allowed to fail. |
@@ -4555,6 +4608,7 @@ static int __init bond_init(struct net_device *bond_dev, struct bond_params *par | |||
4555 | bond_dev->stop = bond_close; | 4608 | bond_dev->stop = bond_close; |
4556 | bond_dev->get_stats = bond_get_stats; | 4609 | bond_dev->get_stats = bond_get_stats; |
4557 | bond_dev->do_ioctl = bond_do_ioctl; | 4610 | bond_dev->do_ioctl = bond_do_ioctl; |
4611 | bond_dev->ethtool_ops = &bond_ethtool_ops; | ||
4558 | bond_dev->set_multicast_list = bond_set_multicast_list; | 4612 | bond_dev->set_multicast_list = bond_set_multicast_list; |
4559 | bond_dev->change_mtu = bond_change_mtu; | 4613 | bond_dev->change_mtu = bond_change_mtu; |
4560 | bond_dev->set_mac_address = bond_set_mac_address; | 4614 | bond_dev->set_mac_address = bond_set_mac_address; |
@@ -4591,6 +4645,8 @@ static int __init bond_init(struct net_device *bond_dev, struct bond_params *par | |||
4591 | NETIF_F_HW_VLAN_RX | | 4645 | NETIF_F_HW_VLAN_RX | |
4592 | NETIF_F_HW_VLAN_FILTER); | 4646 | NETIF_F_HW_VLAN_FILTER); |
4593 | 4647 | ||
4648 | bond->bond_features = bond_dev->features; | ||
4649 | |||
4594 | #ifdef CONFIG_PROC_FS | 4650 | #ifdef CONFIG_PROC_FS |
4595 | bond_create_proc_entry(bond); | 4651 | bond_create_proc_entry(bond); |
4596 | #endif | 4652 | #endif |
diff --git a/drivers/net/bonding/bonding.h b/drivers/net/bonding/bonding.h index d27f377b3eeb..388196980862 100644 --- a/drivers/net/bonding/bonding.h +++ b/drivers/net/bonding/bonding.h | |||
@@ -211,6 +211,9 @@ struct bonding { | |||
211 | struct bond_params params; | 211 | struct bond_params params; |
212 | struct list_head vlan_list; | 212 | struct list_head vlan_list; |
213 | struct vlan_group *vlgrp; | 213 | struct vlan_group *vlgrp; |
214 | /* the features the bonding device supports, independently | ||
215 | * of any slaves */ | ||
216 | int bond_features; | ||
214 | }; | 217 | }; |
215 | 218 | ||
216 | /** | 219 | /** |
diff --git a/drivers/net/e1000/e1000_main.c b/drivers/net/e1000/e1000_main.c index b82fd15d0891..9b596e0bbf95 100644 --- a/drivers/net/e1000/e1000_main.c +++ b/drivers/net/e1000/e1000_main.c | |||
@@ -2767,7 +2767,7 @@ e1000_clean_tx_irq(struct e1000_adapter *adapter) | |||
2767 | " next_to_use <%x>\n" | 2767 | " next_to_use <%x>\n" |
2768 | " next_to_clean <%x>\n" | 2768 | " next_to_clean <%x>\n" |
2769 | "buffer_info[next_to_clean]\n" | 2769 | "buffer_info[next_to_clean]\n" |
2770 | " dma <%zx>\n" | 2770 | " dma <%llx>\n" |
2771 | " time_stamp <%lx>\n" | 2771 | " time_stamp <%lx>\n" |
2772 | " next_to_watch <%x>\n" | 2772 | " next_to_watch <%x>\n" |
2773 | " jiffies <%lx>\n" | 2773 | " jiffies <%lx>\n" |
@@ -2776,7 +2776,7 @@ e1000_clean_tx_irq(struct e1000_adapter *adapter) | |||
2776 | E1000_READ_REG(&adapter->hw, TDT), | 2776 | E1000_READ_REG(&adapter->hw, TDT), |
2777 | tx_ring->next_to_use, | 2777 | tx_ring->next_to_use, |
2778 | i, | 2778 | i, |
2779 | tx_ring->buffer_info[i].dma, | 2779 | (unsigned long long)tx_ring->buffer_info[i].dma, |
2780 | tx_ring->buffer_info[i].time_stamp, | 2780 | tx_ring->buffer_info[i].time_stamp, |
2781 | eop, | 2781 | eop, |
2782 | jiffies, | 2782 | jiffies, |
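The e1000 hunk above fixes the debug printout: dma_addr_t may be 32 or 64 bits wide depending on the platform, so %zx (meant for size_t) is wrong. The portable idiom is to cast to unsigned long long and print with %llx, as the new code does. Minimal illustration:

	dma_addr_t dma = buffer_info[i].dma;

	printk(KERN_ERR " dma <%llx>\n", (unsigned long long)dma);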
diff --git a/drivers/net/eepro100.c b/drivers/net/eepro100.c index 1795425f512e..8c62ced2c9b2 100644 --- a/drivers/net/eepro100.c +++ b/drivers/net/eepro100.c | |||
@@ -1263,8 +1263,8 @@ speedo_init_rx_ring(struct net_device *dev) | |||
1263 | for (i = 0; i < RX_RING_SIZE; i++) { | 1263 | for (i = 0; i < RX_RING_SIZE; i++) { |
1264 | struct sk_buff *skb; | 1264 | struct sk_buff *skb; |
1265 | skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD)); | 1265 | skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD)); |
1266 | /* XXX: do we really want to call this before the NULL check? --hch */ | 1266 | if (skb) |
1267 | rx_align(skb); /* Align IP on 16 byte boundary */ | 1267 | rx_align(skb); /* Align IP on 16 byte boundary */ |
1268 | sp->rx_skbuff[i] = skb; | 1268 | sp->rx_skbuff[i] = skb; |
1269 | if (skb == NULL) | 1269 | if (skb == NULL) |
1270 | break; /* OK. Just initially short of Rx bufs. */ | 1270 | break; /* OK. Just initially short of Rx bufs. */ |
@@ -1654,8 +1654,8 @@ static inline struct RxFD *speedo_rx_alloc(struct net_device *dev, int entry) | |||
1654 | struct sk_buff *skb; | 1654 | struct sk_buff *skb; |
1655 | /* Get a fresh skbuff to replace the consumed one. */ | 1655 | /* Get a fresh skbuff to replace the consumed one. */ |
1656 | skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD)); | 1656 | skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD)); |
1657 | /* XXX: do we really want to call this before the NULL check? --hch */ | 1657 | if (skb) |
1658 | rx_align(skb); /* Align IP on 16 byte boundary */ | 1658 | rx_align(skb); /* Align IP on 16 byte boundary */ |
1659 | sp->rx_skbuff[entry] = skb; | 1659 | sp->rx_skbuff[entry] = skb; |
1660 | if (skb == NULL) { | 1660 | if (skb == NULL) { |
1661 | sp->rx_ringp[entry] = NULL; | 1661 | sp->rx_ringp[entry] = NULL; |
diff --git a/drivers/net/forcedeth.c b/drivers/net/forcedeth.c index 64f0f697c958..7d93948aec83 100644 --- a/drivers/net/forcedeth.c +++ b/drivers/net/forcedeth.c | |||
@@ -85,6 +85,16 @@ | |||
85 | * 0.33: 16 May 2005: Support for MCP51 added. | 85 | * 0.33: 16 May 2005: Support for MCP51 added. |
86 | * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics. | 86 | * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics. |
87 | * 0.35: 26 Jun 2005: Support for MCP55 added. | 87 | * 0.35: 26 Jun 2005: Support for MCP55 added. |
88 | * 0.36: 28 Jun 2005: Add jumbo frame support. | ||
89 | * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list | ||
90 | * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of | ||
91 | * per-packet flags. | ||
92 | * 0.39: 18 Jul 2005: Add 64bit descriptor support. | ||
93 | * 0.40: 19 Jul 2005: Add support for mac address change. | ||
94 | * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead | ||
95 | * of nv_remove | ||
96 | * 0.42: 06 Aug 2005: Fix lack of link speed initialization | ||
97 | * in the second (and later) nv_open call | ||
88 | * | 98 | * |
89 | * Known bugs: | 99 | * Known bugs: |
90 | * We suspect that on some hardware no TX done interrupts are generated. | 100 | * We suspect that on some hardware no TX done interrupts are generated. |
@@ -96,7 +106,7 @@ | |||
96 | * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few | 106 | * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few |
97 | * superfluous timer interrupts from the nic. | 107 | * superfluous timer interrupts from the nic. |
98 | */ | 108 | */ |
99 | #define FORCEDETH_VERSION "0.35" | 109 | #define FORCEDETH_VERSION "0.41" |
100 | #define DRV_NAME "forcedeth" | 110 | #define DRV_NAME "forcedeth" |
101 | 111 | ||
102 | #include <linux/module.h> | 112 | #include <linux/module.h> |
@@ -131,11 +141,10 @@ | |||
131 | * Hardware access: | 141 | * Hardware access: |
132 | */ | 142 | */ |
133 | 143 | ||
134 | #define DEV_NEED_LASTPACKET1 0x0001 /* set LASTPACKET1 in tx flags */ | 144 | #define DEV_NEED_TIMERIRQ 0x0001 /* set the timer irq flag in the irq mask */ |
135 | #define DEV_IRQMASK_1 0x0002 /* use NVREG_IRQMASK_WANTED_1 for irq mask */ | 145 | #define DEV_NEED_LINKTIMER 0x0002 /* poll link settings. Relies on the timer irq */ |
136 | #define DEV_IRQMASK_2 0x0004 /* use NVREG_IRQMASK_WANTED_2 for irq mask */ | 146 | #define DEV_HAS_LARGEDESC 0x0004 /* device supports jumbo frames and needs packet format 2 */ |
137 | #define DEV_NEED_TIMERIRQ 0x0008 /* set the timer irq flag in the irq mask */ | 147 | #define DEV_HAS_HIGH_DMA 0x0008 /* device supports 64bit dma */ |
138 | #define DEV_NEED_LINKTIMER 0x0010 /* poll link settings. Relies on the timer irq */ | ||
139 | 148 | ||
140 | enum { | 149 | enum { |
141 | NvRegIrqStatus = 0x000, | 150 | NvRegIrqStatus = 0x000, |
@@ -146,13 +155,16 @@ enum { | |||
146 | #define NVREG_IRQ_RX 0x0002 | 155 | #define NVREG_IRQ_RX 0x0002 |
147 | #define NVREG_IRQ_RX_NOBUF 0x0004 | 156 | #define NVREG_IRQ_RX_NOBUF 0x0004 |
148 | #define NVREG_IRQ_TX_ERR 0x0008 | 157 | #define NVREG_IRQ_TX_ERR 0x0008 |
149 | #define NVREG_IRQ_TX2 0x0010 | 158 | #define NVREG_IRQ_TX_OK 0x0010 |
150 | #define NVREG_IRQ_TIMER 0x0020 | 159 | #define NVREG_IRQ_TIMER 0x0020 |
151 | #define NVREG_IRQ_LINK 0x0040 | 160 | #define NVREG_IRQ_LINK 0x0040 |
161 | #define NVREG_IRQ_TX_ERROR 0x0080 | ||
152 | #define NVREG_IRQ_TX1 0x0100 | 162 | #define NVREG_IRQ_TX1 0x0100 |
153 | #define NVREG_IRQMASK_WANTED_1 0x005f | 163 | #define NVREG_IRQMASK_WANTED 0x00df |
154 | #define NVREG_IRQMASK_WANTED_2 0x0147 | 164 | |
155 | #define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR|NVREG_IRQ_TX2|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_TX1)) | 165 | #define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \ |
166 | NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_TX_ERROR| \ | ||
167 | NVREG_IRQ_TX1)) | ||
156 | 168 | ||
157 | NvRegUnknownSetupReg6 = 0x008, | 169 | NvRegUnknownSetupReg6 = 0x008, |
158 | #define NVREG_UNKSETUP6_VAL 3 | 170 | #define NVREG_UNKSETUP6_VAL 3 |
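A quick consistency check of the reworked interrupt masks: the single NVREG_IRQMASK_WANTED (0x00df) that replaces the old _1/_2 variants enables every event bit below NVREG_IRQ_TX1 except the timer bit, which boards flagged DEV_NEED_TIMERIRQ OR in separately at probe time (see the probe hunk near the end of this file). The stand-alone check below only copies constants from this hunk and is purely an illustration:

    #include <assert.h>
    #include <stdio.h>

    /* Constants copied from the hunk above. */
    #define NVREG_IRQ_TIMER      0x0020
    #define NVREG_IRQ_TX1        0x0100
    #define NVREG_IRQMASK_WANTED 0x00df

    int main(void)
    {
            /* the default mask leaves the timer and legacy TX1 bits out */
            assert((NVREG_IRQMASK_WANTED & NVREG_IRQ_TIMER) == 0);
            assert((NVREG_IRQMASK_WANTED & NVREG_IRQ_TX1) == 0);

            /* DEV_NEED_TIMERIRQ boards add the timer interrupt on top */
            printf("mask with timer irq: 0x%04x\n",
                   NVREG_IRQMASK_WANTED | NVREG_IRQ_TIMER);
            return 0;
    }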
@@ -286,6 +298,18 @@ struct ring_desc { | |||
286 | u32 FlagLen; | 298 | u32 FlagLen; |
287 | }; | 299 | }; |
288 | 300 | ||
301 | struct ring_desc_ex { | ||
302 | u32 PacketBufferHigh; | ||
303 | u32 PacketBufferLow; | ||
304 | u32 Reserved; | ||
305 | u32 FlagLen; | ||
306 | }; | ||
307 | |||
308 | typedef union _ring_type { | ||
309 | struct ring_desc* orig; | ||
310 | struct ring_desc_ex* ex; | ||
311 | } ring_type; | ||
312 | |||
289 | #define FLAG_MASK_V1 0xffff0000 | 313 | #define FLAG_MASK_V1 0xffff0000 |
290 | #define FLAG_MASK_V2 0xffffc000 | 314 | #define FLAG_MASK_V2 0xffffc000 |
291 | #define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1) | 315 | #define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1) |
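The new ring_desc_ex layout (used with DESC_VER_3, defined further down) carries the buffer address as two 32-bit words so the hardware can be handed DMA addresses above 4 GB; nv_alloc_rx() and nv_start_xmit() later split the address exactly this way. A stand-alone sketch of the split, using standard integer types in place of the kernel ones and a made-up bus address:

    #include <stdint.h>
    #include <stdio.h>

    /* Same field layout as struct ring_desc_ex in the hunk above,
     * restated with standard types for this illustration only. */
    struct ring_desc_ex {
            uint32_t PacketBufferHigh;
            uint32_t PacketBufferLow;
            uint32_t Reserved;
            uint32_t FlagLen;
    };

    int main(void)
    {
            uint64_t dma = 0x0000003fdeadbeefULL;   /* hypothetical bus address */
            struct ring_desc_ex d = { 0 };

            d.PacketBufferHigh = (uint32_t)(dma >> 32);
            d.PacketBufferLow  = (uint32_t)(dma & 0xffffffffULL);

            printf("high=%08x low=%08x\n", d.PacketBufferHigh, d.PacketBufferLow);
            return 0;
    }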
@@ -293,7 +317,7 @@ struct ring_desc { | |||
293 | 317 | ||
294 | #define NV_TX_LASTPACKET (1<<16) | 318 | #define NV_TX_LASTPACKET (1<<16) |
295 | #define NV_TX_RETRYERROR (1<<19) | 319 | #define NV_TX_RETRYERROR (1<<19) |
296 | #define NV_TX_LASTPACKET1 (1<<24) | 320 | #define NV_TX_FORCED_INTERRUPT (1<<24) |
297 | #define NV_TX_DEFERRED (1<<26) | 321 | #define NV_TX_DEFERRED (1<<26) |
298 | #define NV_TX_CARRIERLOST (1<<27) | 322 | #define NV_TX_CARRIERLOST (1<<27) |
299 | #define NV_TX_LATECOLLISION (1<<28) | 323 | #define NV_TX_LATECOLLISION (1<<28) |
@@ -303,7 +327,7 @@ struct ring_desc { | |||
303 | 327 | ||
304 | #define NV_TX2_LASTPACKET (1<<29) | 328 | #define NV_TX2_LASTPACKET (1<<29) |
305 | #define NV_TX2_RETRYERROR (1<<18) | 329 | #define NV_TX2_RETRYERROR (1<<18) |
306 | #define NV_TX2_LASTPACKET1 (1<<23) | 330 | #define NV_TX2_FORCED_INTERRUPT (1<<30) |
307 | #define NV_TX2_DEFERRED (1<<25) | 331 | #define NV_TX2_DEFERRED (1<<25) |
308 | #define NV_TX2_CARRIERLOST (1<<26) | 332 | #define NV_TX2_CARRIERLOST (1<<26) |
309 | #define NV_TX2_LATECOLLISION (1<<27) | 333 | #define NV_TX2_LATECOLLISION (1<<27) |
@@ -379,9 +403,13 @@ struct ring_desc { | |||
379 | #define TX_LIMIT_START 62 | 403 | #define TX_LIMIT_START 62 |
380 | 404 | ||
381 | /* rx/tx mac addr + type + vlan + align + slack*/ | 405 | /* rx/tx mac addr + type + vlan + align + slack*/ |
382 | #define RX_NIC_BUFSIZE (ETH_DATA_LEN + 64) | 406 | #define NV_RX_HEADERS (64) |
383 | /* even more slack */ | 407 | /* even more slack. */ |
384 | #define RX_ALLOC_BUFSIZE (ETH_DATA_LEN + 128) | 408 | #define NV_RX_ALLOC_PAD (64) |
409 | |||
410 | /* maximum mtu size */ | ||
411 | #define NV_PKTLIMIT_1 ETH_DATA_LEN /* hard limit not known */ | ||
412 | #define NV_PKTLIMIT_2 9100 /* Actual limit according to NVidia: 9202 */ | ||
385 | 413 | ||
386 | #define OOM_REFILL (1+HZ/20) | 414 | #define OOM_REFILL (1+HZ/20) |
387 | #define POLL_WAIT (1+HZ/100) | 415 | #define POLL_WAIT (1+HZ/100) |
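The fixed RX_NIC_BUFSIZE/RX_ALLOC_BUFSIZE pair gives way to MTU-driven sizing: the receive buffer is the MTU (never less than ETH_DATA_LEN) plus NV_RX_HEADERS of slack, while NV_PKTLIMIT_1/NV_PKTLIMIT_2 bound the MTU a given chip accepts; set_bufsize() further down implements exactly this. A small stand-alone restatement:

    #include <stdio.h>

    /* Constants copied from the hunk above; rx_buf_sz() mirrors set_bufsize(). */
    #define ETH_DATA_LEN  1500
    #define NV_RX_HEADERS 64
    #define NV_PKTLIMIT_2 9100

    static int rx_buf_sz(int mtu)
    {
            return (mtu <= ETH_DATA_LEN ? ETH_DATA_LEN : mtu) + NV_RX_HEADERS;
    }

    int main(void)
    {
            printf("mtu 1500 -> rx_buf_sz %d\n", rx_buf_sz(1500));
            printf("mtu 9000 -> rx_buf_sz %d (mtu allowed up to %d on jumbo chips)\n",
                   rx_buf_sz(9000), NV_PKTLIMIT_2);
            return 0;
    }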
@@ -396,6 +424,7 @@ struct ring_desc { | |||
396 | */ | 424 | */ |
397 | #define DESC_VER_1 0x0 | 425 | #define DESC_VER_1 0x0 |
398 | #define DESC_VER_2 (0x02100|NVREG_TXRXCTL_RXCHECK) | 426 | #define DESC_VER_2 (0x02100|NVREG_TXRXCTL_RXCHECK) |
427 | #define DESC_VER_3 (0x02200|NVREG_TXRXCTL_RXCHECK) | ||
399 | 428 | ||
400 | /* PHY defines */ | 429 | /* PHY defines */ |
401 | #define PHY_OUI_MARVELL 0x5043 | 430 | #define PHY_OUI_MARVELL 0x5043 |
@@ -468,11 +497,12 @@ struct fe_priv { | |||
468 | /* rx specific fields. | 497 | /* rx specific fields. |
469 | * Locking: Within irq handler or disable_irq+spin_lock(&np->lock); | 498 | * Locking: Within irq handler or disable_irq+spin_lock(&np->lock); |
470 | */ | 499 | */ |
471 | struct ring_desc *rx_ring; | 500 | ring_type rx_ring; |
472 | unsigned int cur_rx, refill_rx; | 501 | unsigned int cur_rx, refill_rx; |
473 | struct sk_buff *rx_skbuff[RX_RING]; | 502 | struct sk_buff *rx_skbuff[RX_RING]; |
474 | dma_addr_t rx_dma[RX_RING]; | 503 | dma_addr_t rx_dma[RX_RING]; |
475 | unsigned int rx_buf_sz; | 504 | unsigned int rx_buf_sz; |
505 | unsigned int pkt_limit; | ||
476 | struct timer_list oom_kick; | 506 | struct timer_list oom_kick; |
477 | struct timer_list nic_poll; | 507 | struct timer_list nic_poll; |
478 | 508 | ||
@@ -484,7 +514,7 @@ struct fe_priv { | |||
484 | /* | 514 | /* |
485 | * tx specific fields. | 515 | * tx specific fields. |
486 | */ | 516 | */ |
487 | struct ring_desc *tx_ring; | 517 | ring_type tx_ring; |
488 | unsigned int next_tx, nic_tx; | 518 | unsigned int next_tx, nic_tx; |
489 | struct sk_buff *tx_skbuff[TX_RING]; | 519 | struct sk_buff *tx_skbuff[TX_RING]; |
490 | dma_addr_t tx_dma[TX_RING]; | 520 | dma_addr_t tx_dma[TX_RING]; |
@@ -519,6 +549,11 @@ static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v) | |||
519 | & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2); | 549 | & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2); |
520 | } | 550 | } |
521 | 551 | ||
552 | static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v) | ||
553 | { | ||
554 | return le32_to_cpu(prd->FlagLen) & LEN_MASK_V2; | ||
555 | } | ||
556 | |||
522 | static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target, | 557 | static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target, |
523 | int delay, int delaymax, const char *msg) | 558 | int delay, int delaymax, const char *msg) |
524 | { | 559 | { |
@@ -792,7 +827,7 @@ static int nv_alloc_rx(struct net_device *dev) | |||
792 | nr = refill_rx % RX_RING; | 827 | nr = refill_rx % RX_RING; |
793 | if (np->rx_skbuff[nr] == NULL) { | 828 | if (np->rx_skbuff[nr] == NULL) { |
794 | 829 | ||
795 | skb = dev_alloc_skb(RX_ALLOC_BUFSIZE); | 830 | skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD); |
796 | if (!skb) | 831 | if (!skb) |
797 | break; | 832 | break; |
798 | 833 | ||
@@ -803,9 +838,16 @@ static int nv_alloc_rx(struct net_device *dev) | |||
803 | } | 838 | } |
804 | np->rx_dma[nr] = pci_map_single(np->pci_dev, skb->data, skb->len, | 839 | np->rx_dma[nr] = pci_map_single(np->pci_dev, skb->data, skb->len, |
805 | PCI_DMA_FROMDEVICE); | 840 | PCI_DMA_FROMDEVICE); |
806 | np->rx_ring[nr].PacketBuffer = cpu_to_le32(np->rx_dma[nr]); | 841 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { |
807 | wmb(); | 842 | np->rx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->rx_dma[nr]); |
808 | np->rx_ring[nr].FlagLen = cpu_to_le32(RX_NIC_BUFSIZE | NV_RX_AVAIL); | 843 | wmb(); |
844 | np->rx_ring.orig[nr].FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL); | ||
845 | } else { | ||
846 | np->rx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->rx_dma[nr]) >> 32; | ||
847 | np->rx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->rx_dma[nr]) & 0x0FFFFFFFF; | ||
848 | wmb(); | ||
849 | np->rx_ring.ex[nr].FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL); | ||
850 | } | ||
809 | dprintk(KERN_DEBUG "%s: nv_alloc_rx: Packet %d marked as Available\n", | 851 | dprintk(KERN_DEBUG "%s: nv_alloc_rx: Packet %d marked as Available\n", |
810 | dev->name, refill_rx); | 852 | dev->name, refill_rx); |
811 | refill_rx++; | 853 | refill_rx++; |
@@ -831,19 +873,37 @@ static void nv_do_rx_refill(unsigned long data) | |||
831 | enable_irq(dev->irq); | 873 | enable_irq(dev->irq); |
832 | } | 874 | } |
833 | 875 | ||
834 | static int nv_init_ring(struct net_device *dev) | 876 | static void nv_init_rx(struct net_device *dev) |
835 | { | 877 | { |
836 | struct fe_priv *np = get_nvpriv(dev); | 878 | struct fe_priv *np = get_nvpriv(dev); |
837 | int i; | 879 | int i; |
838 | 880 | ||
839 | np->next_tx = np->nic_tx = 0; | ||
840 | for (i = 0; i < TX_RING; i++) | ||
841 | np->tx_ring[i].FlagLen = 0; | ||
842 | |||
843 | np->cur_rx = RX_RING; | 881 | np->cur_rx = RX_RING; |
844 | np->refill_rx = 0; | 882 | np->refill_rx = 0; |
845 | for (i = 0; i < RX_RING; i++) | 883 | for (i = 0; i < RX_RING; i++) |
846 | np->rx_ring[i].FlagLen = 0; | 884 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
885 | np->rx_ring.orig[i].FlagLen = 0; | ||
886 | else | ||
887 | np->rx_ring.ex[i].FlagLen = 0; | ||
888 | } | ||
889 | |||
890 | static void nv_init_tx(struct net_device *dev) | ||
891 | { | ||
892 | struct fe_priv *np = get_nvpriv(dev); | ||
893 | int i; | ||
894 | |||
895 | np->next_tx = np->nic_tx = 0; | ||
896 | for (i = 0; i < TX_RING; i++) | ||
897 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) | ||
898 | np->tx_ring.orig[i].FlagLen = 0; | ||
899 | else | ||
900 | np->tx_ring.ex[i].FlagLen = 0; | ||
901 | } | ||
902 | |||
903 | static int nv_init_ring(struct net_device *dev) | ||
904 | { | ||
905 | nv_init_tx(dev); | ||
906 | nv_init_rx(dev); | ||
847 | return nv_alloc_rx(dev); | 907 | return nv_alloc_rx(dev); |
848 | } | 908 | } |
849 | 909 | ||
@@ -852,7 +912,10 @@ static void nv_drain_tx(struct net_device *dev) | |||
852 | struct fe_priv *np = get_nvpriv(dev); | 912 | struct fe_priv *np = get_nvpriv(dev); |
853 | int i; | 913 | int i; |
854 | for (i = 0; i < TX_RING; i++) { | 914 | for (i = 0; i < TX_RING; i++) { |
855 | np->tx_ring[i].FlagLen = 0; | 915 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
916 | np->tx_ring.orig[i].FlagLen = 0; | ||
917 | else | ||
918 | np->tx_ring.ex[i].FlagLen = 0; | ||
856 | if (np->tx_skbuff[i]) { | 919 | if (np->tx_skbuff[i]) { |
857 | pci_unmap_single(np->pci_dev, np->tx_dma[i], | 920 | pci_unmap_single(np->pci_dev, np->tx_dma[i], |
858 | np->tx_skbuff[i]->len, | 921 | np->tx_skbuff[i]->len, |
@@ -869,7 +932,10 @@ static void nv_drain_rx(struct net_device *dev) | |||
869 | struct fe_priv *np = get_nvpriv(dev); | 932 | struct fe_priv *np = get_nvpriv(dev); |
870 | int i; | 933 | int i; |
871 | for (i = 0; i < RX_RING; i++) { | 934 | for (i = 0; i < RX_RING; i++) { |
872 | np->rx_ring[i].FlagLen = 0; | 935 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
936 | np->rx_ring.orig[i].FlagLen = 0; | ||
937 | else | ||
938 | np->rx_ring.ex[i].FlagLen = 0; | ||
873 | wmb(); | 939 | wmb(); |
874 | if (np->rx_skbuff[i]) { | 940 | if (np->rx_skbuff[i]) { |
875 | pci_unmap_single(np->pci_dev, np->rx_dma[i], | 941 | pci_unmap_single(np->pci_dev, np->rx_dma[i], |
@@ -900,11 +966,19 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) | |||
900 | np->tx_dma[nr] = pci_map_single(np->pci_dev, skb->data,skb->len, | 966 | np->tx_dma[nr] = pci_map_single(np->pci_dev, skb->data,skb->len, |
901 | PCI_DMA_TODEVICE); | 967 | PCI_DMA_TODEVICE); |
902 | 968 | ||
903 | np->tx_ring[nr].PacketBuffer = cpu_to_le32(np->tx_dma[nr]); | 969 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
970 | np->tx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->tx_dma[nr]); | ||
971 | else { | ||
972 | np->tx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->tx_dma[nr]) >> 32; | ||
973 | np->tx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF; | ||
974 | } | ||
904 | 975 | ||
905 | spin_lock_irq(&np->lock); | 976 | spin_lock_irq(&np->lock); |
906 | wmb(); | 977 | wmb(); |
907 | np->tx_ring[nr].FlagLen = cpu_to_le32( (skb->len-1) | np->tx_flags ); | 978 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
979 | np->tx_ring.orig[nr].FlagLen = cpu_to_le32( (skb->len-1) | np->tx_flags ); | ||
980 | else | ||
981 | np->tx_ring.ex[nr].FlagLen = cpu_to_le32( (skb->len-1) | np->tx_flags ); | ||
908 | dprintk(KERN_DEBUG "%s: nv_start_xmit: packet %d queued for transmission.\n", | 982 | dprintk(KERN_DEBUG "%s: nv_start_xmit: packet %d queued for transmission.\n", |
909 | dev->name, np->next_tx); | 983 | dev->name, np->next_tx); |
910 | { | 984 | { |
@@ -942,7 +1016,10 @@ static void nv_tx_done(struct net_device *dev) | |||
942 | while (np->nic_tx != np->next_tx) { | 1016 | while (np->nic_tx != np->next_tx) { |
943 | i = np->nic_tx % TX_RING; | 1017 | i = np->nic_tx % TX_RING; |
944 | 1018 | ||
945 | Flags = le32_to_cpu(np->tx_ring[i].FlagLen); | 1019 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
1020 | Flags = le32_to_cpu(np->tx_ring.orig[i].FlagLen); | ||
1021 | else | ||
1022 | Flags = le32_to_cpu(np->tx_ring.ex[i].FlagLen); | ||
946 | 1023 | ||
947 | dprintk(KERN_DEBUG "%s: nv_tx_done: looking at packet %d, Flags 0x%x.\n", | 1024 | dprintk(KERN_DEBUG "%s: nv_tx_done: looking at packet %d, Flags 0x%x.\n", |
948 | dev->name, np->nic_tx, Flags); | 1025 | dev->name, np->nic_tx, Flags); |
@@ -993,9 +1070,56 @@ static void nv_tx_timeout(struct net_device *dev) | |||
993 | struct fe_priv *np = get_nvpriv(dev); | 1070 | struct fe_priv *np = get_nvpriv(dev); |
994 | u8 __iomem *base = get_hwbase(dev); | 1071 | u8 __iomem *base = get_hwbase(dev); |
995 | 1072 | ||
996 | dprintk(KERN_DEBUG "%s: Got tx_timeout. irq: %08x\n", dev->name, | 1073 | printk(KERN_INFO "%s: Got tx_timeout. irq: %08x\n", dev->name, |
997 | readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK); | 1074 | readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK); |
998 | 1075 | ||
1076 | { | ||
1077 | int i; | ||
1078 | |||
1079 | printk(KERN_INFO "%s: Ring at %lx: next %d nic %d\n", | ||
1080 | dev->name, (unsigned long)np->ring_addr, | ||
1081 | np->next_tx, np->nic_tx); | ||
1082 | printk(KERN_INFO "%s: Dumping tx registers\n", dev->name); | ||
1083 | for (i=0;i<0x400;i+= 32) { | ||
1084 | printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n", | ||
1085 | i, | ||
1086 | readl(base + i + 0), readl(base + i + 4), | ||
1087 | readl(base + i + 8), readl(base + i + 12), | ||
1088 | readl(base + i + 16), readl(base + i + 20), | ||
1089 | readl(base + i + 24), readl(base + i + 28)); | ||
1090 | } | ||
1091 | printk(KERN_INFO "%s: Dumping tx ring\n", dev->name); | ||
1092 | for (i=0;i<TX_RING;i+= 4) { | ||
1093 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { | ||
1094 | printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n", | ||
1095 | i, | ||
1096 | le32_to_cpu(np->tx_ring.orig[i].PacketBuffer), | ||
1097 | le32_to_cpu(np->tx_ring.orig[i].FlagLen), | ||
1098 | le32_to_cpu(np->tx_ring.orig[i+1].PacketBuffer), | ||
1099 | le32_to_cpu(np->tx_ring.orig[i+1].FlagLen), | ||
1100 | le32_to_cpu(np->tx_ring.orig[i+2].PacketBuffer), | ||
1101 | le32_to_cpu(np->tx_ring.orig[i+2].FlagLen), | ||
1102 | le32_to_cpu(np->tx_ring.orig[i+3].PacketBuffer), | ||
1103 | le32_to_cpu(np->tx_ring.orig[i+3].FlagLen)); | ||
1104 | } else { | ||
1105 | printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n", | ||
1106 | i, | ||
1107 | le32_to_cpu(np->tx_ring.ex[i].PacketBufferHigh), | ||
1108 | le32_to_cpu(np->tx_ring.ex[i].PacketBufferLow), | ||
1109 | le32_to_cpu(np->tx_ring.ex[i].FlagLen), | ||
1110 | le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferHigh), | ||
1111 | le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferLow), | ||
1112 | le32_to_cpu(np->tx_ring.ex[i+1].FlagLen), | ||
1113 | le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferHigh), | ||
1114 | le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferLow), | ||
1115 | le32_to_cpu(np->tx_ring.ex[i+2].FlagLen), | ||
1116 | le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferHigh), | ||
1117 | le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferLow), | ||
1118 | le32_to_cpu(np->tx_ring.ex[i+3].FlagLen)); | ||
1119 | } | ||
1120 | } | ||
1121 | } | ||
1122 | |||
999 | spin_lock_irq(&np->lock); | 1123 | spin_lock_irq(&np->lock); |
1000 | 1124 | ||
1001 | /* 1) stop tx engine */ | 1125 | /* 1) stop tx engine */ |
@@ -1009,7 +1133,10 @@ static void nv_tx_timeout(struct net_device *dev) | |||
1009 | printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name); | 1133 | printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name); |
1010 | nv_drain_tx(dev); | 1134 | nv_drain_tx(dev); |
1011 | np->next_tx = np->nic_tx = 0; | 1135 | np->next_tx = np->nic_tx = 0; |
1012 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr); | 1136 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
1137 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr); | ||
1138 | else | ||
1139 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr); | ||
1013 | netif_wake_queue(dev); | 1140 | netif_wake_queue(dev); |
1014 | } | 1141 | } |
1015 | 1142 | ||
@@ -1084,8 +1211,13 @@ static void nv_rx_process(struct net_device *dev) | |||
1084 | break; /* we scanned the whole ring - do not continue */ | 1211 | break; /* we scanned the whole ring - do not continue */ |
1085 | 1212 | ||
1086 | i = np->cur_rx % RX_RING; | 1213 | i = np->cur_rx % RX_RING; |
1087 | Flags = le32_to_cpu(np->rx_ring[i].FlagLen); | 1214 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { |
1088 | len = nv_descr_getlength(&np->rx_ring[i], np->desc_ver); | 1215 | Flags = le32_to_cpu(np->rx_ring.orig[i].FlagLen); |
1216 | len = nv_descr_getlength(&np->rx_ring.orig[i], np->desc_ver); | ||
1217 | } else { | ||
1218 | Flags = le32_to_cpu(np->rx_ring.ex[i].FlagLen); | ||
1219 | len = nv_descr_getlength_ex(&np->rx_ring.ex[i], np->desc_ver); | ||
1220 | } | ||
1089 | 1221 | ||
1090 | dprintk(KERN_DEBUG "%s: nv_rx_process: looking at packet %d, Flags 0x%x.\n", | 1222 | dprintk(KERN_DEBUG "%s: nv_rx_process: looking at packet %d, Flags 0x%x.\n", |
1091 | dev->name, np->cur_rx, Flags); | 1223 | dev->name, np->cur_rx, Flags); |
@@ -1207,15 +1339,133 @@ next_pkt: | |||
1207 | } | 1339 | } |
1208 | } | 1340 | } |
1209 | 1341 | ||
1342 | static void set_bufsize(struct net_device *dev) | ||
1343 | { | ||
1344 | struct fe_priv *np = netdev_priv(dev); | ||
1345 | |||
1346 | if (dev->mtu <= ETH_DATA_LEN) | ||
1347 | np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS; | ||
1348 | else | ||
1349 | np->rx_buf_sz = dev->mtu + NV_RX_HEADERS; | ||
1350 | } | ||
1351 | |||
1210 | /* | 1352 | /* |
1211 | * nv_change_mtu: dev->change_mtu function | 1353 | * nv_change_mtu: dev->change_mtu function |
1212 | * Called with dev_base_lock held for read. | 1354 | * Called with dev_base_lock held for read. |
1213 | */ | 1355 | */ |
1214 | static int nv_change_mtu(struct net_device *dev, int new_mtu) | 1356 | static int nv_change_mtu(struct net_device *dev, int new_mtu) |
1215 | { | 1357 | { |
1216 | if (new_mtu > ETH_DATA_LEN) | 1358 | struct fe_priv *np = get_nvpriv(dev); |
1359 | int old_mtu; | ||
1360 | |||
1361 | if (new_mtu < 64 || new_mtu > np->pkt_limit) | ||
1217 | return -EINVAL; | 1362 | return -EINVAL; |
1363 | |||
1364 | old_mtu = dev->mtu; | ||
1218 | dev->mtu = new_mtu; | 1365 | dev->mtu = new_mtu; |
1366 | |||
1367 | /* return early if the buffer sizes will not change */ | ||
1368 | if (old_mtu <= ETH_DATA_LEN && new_mtu <= ETH_DATA_LEN) | ||
1369 | return 0; | ||
1370 | if (old_mtu == new_mtu) | ||
1371 | return 0; | ||
1372 | |||
1373 | /* synchronized against open : rtnl_lock() held by caller */ | ||
1374 | if (netif_running(dev)) { | ||
1375 | u8 *base = get_hwbase(dev); | ||
1376 | /* | ||
1377 | * It seems that the nic preloads valid ring entries into an | ||
1378 | * internal buffer. The procedure for flushing everything is | ||
1379 | * guessed, there is probably a simpler approach. | ||
1380 | * Changing the MTU is a rare event, it shouldn't matter. | ||
1381 | */ | ||
1382 | disable_irq(dev->irq); | ||
1383 | spin_lock_bh(&dev->xmit_lock); | ||
1384 | spin_lock(&np->lock); | ||
1385 | /* stop engines */ | ||
1386 | nv_stop_rx(dev); | ||
1387 | nv_stop_tx(dev); | ||
1388 | nv_txrx_reset(dev); | ||
1389 | /* drain rx queue */ | ||
1390 | nv_drain_rx(dev); | ||
1391 | nv_drain_tx(dev); | ||
1392 | /* reinit driver view of the rx queue */ | ||
1393 | nv_init_rx(dev); | ||
1394 | nv_init_tx(dev); | ||
1395 | /* alloc new rx buffers */ | ||
1396 | set_bufsize(dev); | ||
1397 | if (nv_alloc_rx(dev)) { | ||
1398 | if (!np->in_shutdown) | ||
1399 | mod_timer(&np->oom_kick, jiffies + OOM_REFILL); | ||
1400 | } | ||
1401 | /* reinit nic view of the rx queue */ | ||
1402 | writel(np->rx_buf_sz, base + NvRegOffloadConfig); | ||
1403 | writel((u32) np->ring_addr, base + NvRegRxRingPhysAddr); | ||
1404 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) | ||
1405 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr); | ||
1406 | else | ||
1407 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr); | ||
1408 | writel( ((RX_RING-1) << NVREG_RINGSZ_RXSHIFT) + ((TX_RING-1) << NVREG_RINGSZ_TXSHIFT), | ||
1409 | base + NvRegRingSizes); | ||
1410 | pci_push(base); | ||
1411 | writel(NVREG_TXRXCTL_KICK|np->desc_ver, get_hwbase(dev) + NvRegTxRxControl); | ||
1412 | pci_push(base); | ||
1413 | |||
1414 | /* restart rx engine */ | ||
1415 | nv_start_rx(dev); | ||
1416 | nv_start_tx(dev); | ||
1417 | spin_unlock(&np->lock); | ||
1418 | spin_unlock_bh(&dev->xmit_lock); | ||
1419 | enable_irq(dev->irq); | ||
1420 | } | ||
1421 | return 0; | ||
1422 | } | ||
1423 | |||
1424 | static void nv_copy_mac_to_hw(struct net_device *dev) | ||
1425 | { | ||
1426 | u8 *base = get_hwbase(dev); | ||
1427 | u32 mac[2]; | ||
1428 | |||
1429 | mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) + | ||
1430 | (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24); | ||
1431 | mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8); | ||
1432 | |||
1433 | writel(mac[0], base + NvRegMacAddrA); | ||
1434 | writel(mac[1], base + NvRegMacAddrB); | ||
1435 | } | ||
1436 | |||
1437 | /* | ||
1438 | * nv_set_mac_address: dev->set_mac_address function | ||
1439 | * Called with rtnl_lock() held. | ||
1440 | */ | ||
1441 | static int nv_set_mac_address(struct net_device *dev, void *addr) | ||
1442 | { | ||
1443 | struct fe_priv *np = get_nvpriv(dev); | ||
1444 | struct sockaddr *macaddr = (struct sockaddr*)addr; | ||
1445 | |||
1446 | if(!is_valid_ether_addr(macaddr->sa_data)) | ||
1447 | return -EADDRNOTAVAIL; | ||
1448 | |||
1449 | /* synchronized against open : rtnl_lock() held by caller */ | ||
1450 | memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN); | ||
1451 | |||
1452 | if (netif_running(dev)) { | ||
1453 | spin_lock_bh(&dev->xmit_lock); | ||
1454 | spin_lock_irq(&np->lock); | ||
1455 | |||
1456 | /* stop rx engine */ | ||
1457 | nv_stop_rx(dev); | ||
1458 | |||
1459 | /* set mac address */ | ||
1460 | nv_copy_mac_to_hw(dev); | ||
1461 | |||
1462 | /* restart rx engine */ | ||
1463 | nv_start_rx(dev); | ||
1464 | spin_unlock_irq(&np->lock); | ||
1465 | spin_unlock_bh(&dev->xmit_lock); | ||
1466 | } else { | ||
1467 | nv_copy_mac_to_hw(dev); | ||
1468 | } | ||
1219 | return 0; | 1469 | return 0; |
1220 | } | 1470 | } |
1221 | 1471 | ||
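nv_copy_mac_to_hw(), factored out above and now shared by nv_open() and the new nv_set_mac_address(), packs the six address bytes into the two MAC registers least-significant byte first. A stand-alone illustration of that packing, with a made-up example address:

    #include <stdint.h>
    #include <stdio.h>

    /* Same packing as nv_copy_mac_to_hw(): bytes 0..3 into MacAddrA,
     * bytes 4..5 into MacAddrB, lowest byte first. The address is made up. */
    int main(void)
    {
            const uint8_t mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
            uint32_t a, b;

            a = (uint32_t)mac[0] | ((uint32_t)mac[1] << 8) |
                ((uint32_t)mac[2] << 16) | ((uint32_t)mac[3] << 24);
            b = (uint32_t)mac[4] | ((uint32_t)mac[5] << 8);

            printf("NvRegMacAddrA=0x%08x NvRegMacAddrB=0x%08x\n", a, b);
            return 0;
    }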
@@ -1470,7 +1720,7 @@ static irqreturn_t nv_nic_irq(int foo, void *data, struct pt_regs *regs) | |||
1470 | if (!(events & np->irqmask)) | 1720 | if (!(events & np->irqmask)) |
1471 | break; | 1721 | break; |
1472 | 1722 | ||
1473 | if (events & (NVREG_IRQ_TX1|NVREG_IRQ_TX2|NVREG_IRQ_TX_ERR)) { | 1723 | if (events & (NVREG_IRQ_TX1|NVREG_IRQ_TX_OK|NVREG_IRQ_TX_ERROR|NVREG_IRQ_TX_ERR)) { |
1474 | spin_lock(&np->lock); | 1724 | spin_lock(&np->lock); |
1475 | nv_tx_done(dev); | 1725 | nv_tx_done(dev); |
1476 | spin_unlock(&np->lock); | 1726 | spin_unlock(&np->lock); |
@@ -1761,6 +2011,50 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) | |||
1761 | return 0; | 2011 | return 0; |
1762 | } | 2012 | } |
1763 | 2013 | ||
2014 | #define FORCEDETH_REGS_VER 1 | ||
2015 | #define FORCEDETH_REGS_SIZE 0x400 /* 256 32-bit registers */ | ||
2016 | |||
2017 | static int nv_get_regs_len(struct net_device *dev) | ||
2018 | { | ||
2019 | return FORCEDETH_REGS_SIZE; | ||
2020 | } | ||
2021 | |||
2022 | static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf) | ||
2023 | { | ||
2024 | struct fe_priv *np = get_nvpriv(dev); | ||
2025 | u8 __iomem *base = get_hwbase(dev); | ||
2026 | u32 *rbuf = buf; | ||
2027 | int i; | ||
2028 | |||
2029 | regs->version = FORCEDETH_REGS_VER; | ||
2030 | spin_lock_irq(&np->lock); | ||
2031 | for (i=0;i<FORCEDETH_REGS_SIZE/sizeof(u32);i++) | ||
2032 | rbuf[i] = readl(base + i*sizeof(u32)); | ||
2033 | spin_unlock_irq(&np->lock); | ||
2034 | } | ||
2035 | |||
2036 | static int nv_nway_reset(struct net_device *dev) | ||
2037 | { | ||
2038 | struct fe_priv *np = get_nvpriv(dev); | ||
2039 | int ret; | ||
2040 | |||
2041 | spin_lock_irq(&np->lock); | ||
2042 | if (np->autoneg) { | ||
2043 | int bmcr; | ||
2044 | |||
2045 | bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); | ||
2046 | bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART); | ||
2047 | mii_rw(dev, np->phyaddr, MII_BMCR, bmcr); | ||
2048 | |||
2049 | ret = 0; | ||
2050 | } else { | ||
2051 | ret = -EINVAL; | ||
2052 | } | ||
2053 | spin_unlock_irq(&np->lock); | ||
2054 | |||
2055 | return ret; | ||
2056 | } | ||
2057 | |||
1764 | static struct ethtool_ops ops = { | 2058 | static struct ethtool_ops ops = { |
1765 | .get_drvinfo = nv_get_drvinfo, | 2059 | .get_drvinfo = nv_get_drvinfo, |
1766 | .get_link = ethtool_op_get_link, | 2060 | .get_link = ethtool_op_get_link, |
@@ -1768,6 +2062,9 @@ static struct ethtool_ops ops = { | |||
1768 | .set_wol = nv_set_wol, | 2062 | .set_wol = nv_set_wol, |
1769 | .get_settings = nv_get_settings, | 2063 | .get_settings = nv_get_settings, |
1770 | .set_settings = nv_set_settings, | 2064 | .set_settings = nv_set_settings, |
2065 | .get_regs_len = nv_get_regs_len, | ||
2066 | .get_regs = nv_get_regs, | ||
2067 | .nway_reset = nv_nway_reset, | ||
1771 | }; | 2068 | }; |
1772 | 2069 | ||
1773 | static int nv_open(struct net_device *dev) | 2070 | static int nv_open(struct net_device *dev) |
@@ -1792,6 +2089,7 @@ static int nv_open(struct net_device *dev) | |||
1792 | writel(0, base + NvRegAdapterControl); | 2089 | writel(0, base + NvRegAdapterControl); |
1793 | 2090 | ||
1794 | /* 2) initialize descriptor rings */ | 2091 | /* 2) initialize descriptor rings */ |
2092 | set_bufsize(dev); | ||
1795 | oom = nv_init_ring(dev); | 2093 | oom = nv_init_ring(dev); |
1796 | 2094 | ||
1797 | writel(0, base + NvRegLinkSpeed); | 2095 | writel(0, base + NvRegLinkSpeed); |
@@ -1802,20 +2100,14 @@ static int nv_open(struct net_device *dev) | |||
1802 | np->in_shutdown = 0; | 2100 | np->in_shutdown = 0; |
1803 | 2101 | ||
1804 | /* 3) set mac address */ | 2102 | /* 3) set mac address */ |
1805 | { | 2103 | nv_copy_mac_to_hw(dev); |
1806 | u32 mac[2]; | ||
1807 | |||
1808 | mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) + | ||
1809 | (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24); | ||
1810 | mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8); | ||
1811 | |||
1812 | writel(mac[0], base + NvRegMacAddrA); | ||
1813 | writel(mac[1], base + NvRegMacAddrB); | ||
1814 | } | ||
1815 | 2104 | ||
1816 | /* 4) give hw rings */ | 2105 | /* 4) give hw rings */ |
1817 | writel((u32) np->ring_addr, base + NvRegRxRingPhysAddr); | 2106 | writel((u32) np->ring_addr, base + NvRegRxRingPhysAddr); |
1818 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr); | 2107 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
2108 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr); | ||
2109 | else | ||
2110 | writel((u32) (np->ring_addr + RX_RING*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr); | ||
1819 | writel( ((RX_RING-1) << NVREG_RINGSZ_RXSHIFT) + ((TX_RING-1) << NVREG_RINGSZ_TXSHIFT), | 2111 | writel( ((RX_RING-1) << NVREG_RINGSZ_RXSHIFT) + ((TX_RING-1) << NVREG_RINGSZ_TXSHIFT), |
1820 | base + NvRegRingSizes); | 2112 | base + NvRegRingSizes); |
1821 | 2113 | ||
@@ -1837,7 +2129,7 @@ static int nv_open(struct net_device *dev) | |||
1837 | writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1); | 2129 | writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1); |
1838 | writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus); | 2130 | writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus); |
1839 | writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags); | 2131 | writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags); |
1840 | writel(NVREG_OFFLOAD_NORMAL, base + NvRegOffloadConfig); | 2132 | writel(np->rx_buf_sz, base + NvRegOffloadConfig); |
1841 | 2133 | ||
1842 | writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus); | 2134 | writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus); |
1843 | get_random_bytes(&i, sizeof(i)); | 2135 | get_random_bytes(&i, sizeof(i)); |
@@ -1888,6 +2180,9 @@ static int nv_open(struct net_device *dev) | |||
1888 | writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus); | 2180 | writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus); |
1889 | dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat); | 2181 | dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat); |
1890 | } | 2182 | } |
2183 | /* set linkspeed to invalid value, thus force nv_update_linkspeed | ||
2184 | * to init hw */ | ||
2185 | np->linkspeed = 0; | ||
1891 | ret = nv_update_linkspeed(dev); | 2186 | ret = nv_update_linkspeed(dev); |
1892 | nv_start_rx(dev); | 2187 | nv_start_rx(dev); |
1893 | nv_start_tx(dev); | 2188 | nv_start_tx(dev); |
@@ -1942,6 +2237,12 @@ static int nv_close(struct net_device *dev) | |||
1942 | if (np->wolenabled) | 2237 | if (np->wolenabled) |
1943 | nv_start_rx(dev); | 2238 | nv_start_rx(dev); |
1944 | 2239 | ||
2240 | /* special op: write back the misordered MAC address - otherwise | ||
2241 | * the next nv_probe would see a wrong address. | ||
2242 | */ | ||
2243 | writel(np->orig_mac[0], base + NvRegMacAddrA); | ||
2244 | writel(np->orig_mac[1], base + NvRegMacAddrB); | ||
2245 | |||
1945 | /* FIXME: power down nic */ | 2246 | /* FIXME: power down nic */ |
1946 | 2247 | ||
1947 | return 0; | 2248 | return 0; |
@@ -2006,32 +2307,55 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i | |||
2006 | } | 2307 | } |
2007 | 2308 | ||
2008 | /* handle different descriptor versions */ | 2309 | /* handle different descriptor versions */ |
2009 | if (pci_dev->device == PCI_DEVICE_ID_NVIDIA_NVENET_1 || | 2310 | if (id->driver_data & DEV_HAS_HIGH_DMA) { |
2010 | pci_dev->device == PCI_DEVICE_ID_NVIDIA_NVENET_2 || | 2311 | /* packet format 3: supports 40-bit addressing */ |
2011 | pci_dev->device == PCI_DEVICE_ID_NVIDIA_NVENET_3 || | 2312 | np->desc_ver = DESC_VER_3; |
2012 | pci_dev->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 || | 2313 | if (pci_set_dma_mask(pci_dev, 0x0000007fffffffffULL)) { |
2013 | pci_dev->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) | 2314 | printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n", |
2014 | np->desc_ver = DESC_VER_1; | 2315 | pci_name(pci_dev)); |
2015 | else | 2316 | } |
2317 | } else if (id->driver_data & DEV_HAS_LARGEDESC) { | ||
2318 | /* packet format 2: supports jumbo frames */ | ||
2016 | np->desc_ver = DESC_VER_2; | 2319 | np->desc_ver = DESC_VER_2; |
2320 | } else { | ||
2321 | /* original packet format */ | ||
2322 | np->desc_ver = DESC_VER_1; | ||
2323 | } | ||
2324 | |||
2325 | np->pkt_limit = NV_PKTLIMIT_1; | ||
2326 | if (id->driver_data & DEV_HAS_LARGEDESC) | ||
2327 | np->pkt_limit = NV_PKTLIMIT_2; | ||
2017 | 2328 | ||
2018 | err = -ENOMEM; | 2329 | err = -ENOMEM; |
2019 | np->base = ioremap(addr, NV_PCI_REGSZ); | 2330 | np->base = ioremap(addr, NV_PCI_REGSZ); |
2020 | if (!np->base) | 2331 | if (!np->base) |
2021 | goto out_relreg; | 2332 | goto out_relreg; |
2022 | dev->base_addr = (unsigned long)np->base; | 2333 | dev->base_addr = (unsigned long)np->base; |
2334 | |||
2023 | dev->irq = pci_dev->irq; | 2335 | dev->irq = pci_dev->irq; |
2024 | np->rx_ring = pci_alloc_consistent(pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING), | 2336 | |
2025 | &np->ring_addr); | 2337 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { |
2026 | if (!np->rx_ring) | 2338 | np->rx_ring.orig = pci_alloc_consistent(pci_dev, |
2027 | goto out_unmap; | 2339 | sizeof(struct ring_desc) * (RX_RING + TX_RING), |
2028 | np->tx_ring = &np->rx_ring[RX_RING]; | 2340 | &np->ring_addr); |
2341 | if (!np->rx_ring.orig) | ||
2342 | goto out_unmap; | ||
2343 | np->tx_ring.orig = &np->rx_ring.orig[RX_RING]; | ||
2344 | } else { | ||
2345 | np->rx_ring.ex = pci_alloc_consistent(pci_dev, | ||
2346 | sizeof(struct ring_desc_ex) * (RX_RING + TX_RING), | ||
2347 | &np->ring_addr); | ||
2348 | if (!np->rx_ring.ex) | ||
2349 | goto out_unmap; | ||
2350 | np->tx_ring.ex = &np->rx_ring.ex[RX_RING]; | ||
2351 | } | ||
2029 | 2352 | ||
2030 | dev->open = nv_open; | 2353 | dev->open = nv_open; |
2031 | dev->stop = nv_close; | 2354 | dev->stop = nv_close; |
2032 | dev->hard_start_xmit = nv_start_xmit; | 2355 | dev->hard_start_xmit = nv_start_xmit; |
2033 | dev->get_stats = nv_get_stats; | 2356 | dev->get_stats = nv_get_stats; |
2034 | dev->change_mtu = nv_change_mtu; | 2357 | dev->change_mtu = nv_change_mtu; |
2358 | dev->set_mac_address = nv_set_mac_address; | ||
2035 | dev->set_multicast_list = nv_set_multicast; | 2359 | dev->set_multicast_list = nv_set_multicast; |
2036 | #ifdef CONFIG_NET_POLL_CONTROLLER | 2360 | #ifdef CONFIG_NET_POLL_CONTROLLER |
2037 | dev->poll_controller = nv_poll_controller; | 2361 | dev->poll_controller = nv_poll_controller; |
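The probe hunk above replaces the per-device-ID special cases with feature flags: DEV_HAS_HIGH_DMA selects packet format 3 (and tries a wider DMA mask), DEV_HAS_LARGEDESC selects format 2 for jumbo frames, and everything else stays on the original format. A stand-alone restatement of that decision; desc_ver_for() is a hypothetical helper used only in this sketch:

    #include <stdio.h>

    /* Flag values copied from the reworked header above. */
    #define DEV_NEED_TIMERIRQ  0x0001
    #define DEV_NEED_LINKTIMER 0x0002
    #define DEV_HAS_LARGEDESC  0x0004
    #define DEV_HAS_HIGH_DMA   0x0008

    static int desc_ver_for(unsigned long driver_data)
    {
            if (driver_data & DEV_HAS_HIGH_DMA)
                    return 3;       /* packet format 3: extended addressing */
            if (driver_data & DEV_HAS_LARGEDESC)
                    return 2;       /* packet format 2: jumbo frames */
            return 1;               /* original packet format */
    }

    int main(void)
    {
            printf("CK804  -> DESC_VER_%d\n",
                   desc_ver_for(DEV_NEED_TIMERIRQ | DEV_NEED_LINKTIMER |
                                DEV_HAS_LARGEDESC | DEV_HAS_HIGH_DMA));
            printf("nForce -> DESC_VER_%d\n",
                   desc_ver_for(DEV_NEED_TIMERIRQ | DEV_NEED_LINKTIMER));
            return 0;
    }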
@@ -2080,17 +2404,10 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i | |||
2080 | 2404 | ||
2081 | if (np->desc_ver == DESC_VER_1) { | 2405 | if (np->desc_ver == DESC_VER_1) { |
2082 | np->tx_flags = NV_TX_LASTPACKET|NV_TX_VALID; | 2406 | np->tx_flags = NV_TX_LASTPACKET|NV_TX_VALID; |
2083 | if (id->driver_data & DEV_NEED_LASTPACKET1) | ||
2084 | np->tx_flags |= NV_TX_LASTPACKET1; | ||
2085 | } else { | 2407 | } else { |
2086 | np->tx_flags = NV_TX2_LASTPACKET|NV_TX2_VALID; | 2408 | np->tx_flags = NV_TX2_LASTPACKET|NV_TX2_VALID; |
2087 | if (id->driver_data & DEV_NEED_LASTPACKET1) | ||
2088 | np->tx_flags |= NV_TX2_LASTPACKET1; | ||
2089 | } | 2409 | } |
2090 | if (id->driver_data & DEV_IRQMASK_1) | 2410 | np->irqmask = NVREG_IRQMASK_WANTED; |
2091 | np->irqmask = NVREG_IRQMASK_WANTED_1; | ||
2092 | if (id->driver_data & DEV_IRQMASK_2) | ||
2093 | np->irqmask = NVREG_IRQMASK_WANTED_2; | ||
2094 | if (id->driver_data & DEV_NEED_TIMERIRQ) | 2411 | if (id->driver_data & DEV_NEED_TIMERIRQ) |
2095 | np->irqmask |= NVREG_IRQ_TIMER; | 2412 | np->irqmask |= NVREG_IRQ_TIMER; |
2096 | if (id->driver_data & DEV_NEED_LINKTIMER) { | 2413 | if (id->driver_data & DEV_NEED_LINKTIMER) { |
@@ -2155,8 +2472,12 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i | |||
2155 | return 0; | 2472 | return 0; |
2156 | 2473 | ||
2157 | out_freering: | 2474 | out_freering: |
2158 | pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING), | 2475 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
2159 | np->rx_ring, np->ring_addr); | 2476 | pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING), |
2477 | np->rx_ring.orig, np->ring_addr); | ||
2478 | else | ||
2479 | pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (RX_RING + TX_RING), | ||
2480 | np->rx_ring.ex, np->ring_addr); | ||
2160 | pci_set_drvdata(pci_dev, NULL); | 2481 | pci_set_drvdata(pci_dev, NULL); |
2161 | out_unmap: | 2482 | out_unmap: |
2162 | iounmap(get_hwbase(dev)); | 2483 | iounmap(get_hwbase(dev)); |
@@ -2174,18 +2495,14 @@ static void __devexit nv_remove(struct pci_dev *pci_dev) | |||
2174 | { | 2495 | { |
2175 | struct net_device *dev = pci_get_drvdata(pci_dev); | 2496 | struct net_device *dev = pci_get_drvdata(pci_dev); |
2176 | struct fe_priv *np = get_nvpriv(dev); | 2497 | struct fe_priv *np = get_nvpriv(dev); |
2177 | u8 __iomem *base = get_hwbase(dev); | ||
2178 | 2498 | ||
2179 | unregister_netdev(dev); | 2499 | unregister_netdev(dev); |
2180 | 2500 | ||
2181 | /* special op: write back the misordered MAC address - otherwise | ||
2182 | * the next nv_probe would see a wrong address. | ||
2183 | */ | ||
2184 | writel(np->orig_mac[0], base + NvRegMacAddrA); | ||
2185 | writel(np->orig_mac[1], base + NvRegMacAddrB); | ||
2186 | |||
2187 | /* free all structures */ | 2501 | /* free all structures */ |
2188 | pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING), np->rx_ring, np->ring_addr); | 2502 | if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) |
2503 | pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING), np->rx_ring.orig, np->ring_addr); | ||
2504 | else | ||
2505 | pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (RX_RING + TX_RING), np->rx_ring.ex, np->ring_addr); | ||
2189 | iounmap(get_hwbase(dev)); | 2506 | iounmap(get_hwbase(dev)); |
2190 | pci_release_regions(pci_dev); | 2507 | pci_release_regions(pci_dev); |
2191 | pci_disable_device(pci_dev); | 2508 | pci_disable_device(pci_dev); |
@@ -2195,109 +2512,64 @@ static void __devexit nv_remove(struct pci_dev *pci_dev) | |||
2195 | 2512 | ||
2196 | static struct pci_device_id pci_tbl[] = { | 2513 | static struct pci_device_id pci_tbl[] = { |
2197 | { /* nForce Ethernet Controller */ | 2514 | { /* nForce Ethernet Controller */ |
2198 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2515 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1), |
2199 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_1, | 2516 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, |
2200 | .subvendor = PCI_ANY_ID, | ||
2201 | .subdevice = PCI_ANY_ID, | ||
2202 | .driver_data = DEV_IRQMASK_1|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2203 | }, | 2517 | }, |
2204 | { /* nForce2 Ethernet Controller */ | 2518 | { /* nForce2 Ethernet Controller */ |
2205 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2519 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_2), |
2206 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_2, | 2520 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, |
2207 | .subvendor = PCI_ANY_ID, | ||
2208 | .subdevice = PCI_ANY_ID, | ||
2209 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2210 | }, | 2521 | }, |
2211 | { /* nForce3 Ethernet Controller */ | 2522 | { /* nForce3 Ethernet Controller */ |
2212 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2523 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_3), |
2213 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_3, | 2524 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, |
2214 | .subvendor = PCI_ANY_ID, | ||
2215 | .subdevice = PCI_ANY_ID, | ||
2216 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2217 | }, | 2525 | }, |
2218 | { /* nForce3 Ethernet Controller */ | 2526 | { /* nForce3 Ethernet Controller */ |
2219 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2527 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_4), |
2220 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_4, | 2528 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC, |
2221 | .subvendor = PCI_ANY_ID, | ||
2222 | .subdevice = PCI_ANY_ID, | ||
2223 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2224 | }, | 2529 | }, |
2225 | { /* nForce3 Ethernet Controller */ | 2530 | { /* nForce3 Ethernet Controller */ |
2226 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2531 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_5), |
2227 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_5, | 2532 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC, |
2228 | .subvendor = PCI_ANY_ID, | ||
2229 | .subdevice = PCI_ANY_ID, | ||
2230 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2231 | }, | 2533 | }, |
2232 | { /* nForce3 Ethernet Controller */ | 2534 | { /* nForce3 Ethernet Controller */ |
2233 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2535 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_6), |
2234 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_6, | 2536 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC, |
2235 | .subvendor = PCI_ANY_ID, | ||
2236 | .subdevice = PCI_ANY_ID, | ||
2237 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2238 | }, | 2537 | }, |
2239 | { /* nForce3 Ethernet Controller */ | 2538 | { /* nForce3 Ethernet Controller */ |
2240 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2539 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_7), |
2241 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_7, | 2540 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC, |
2242 | .subvendor = PCI_ANY_ID, | ||
2243 | .subdevice = PCI_ANY_ID, | ||
2244 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2245 | }, | 2541 | }, |
2246 | { /* CK804 Ethernet Controller */ | 2542 | { /* CK804 Ethernet Controller */ |
2247 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2543 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_8), |
2248 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_8, | 2544 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA, |
2249 | .subvendor = PCI_ANY_ID, | ||
2250 | .subdevice = PCI_ANY_ID, | ||
2251 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2252 | }, | 2545 | }, |
2253 | { /* CK804 Ethernet Controller */ | 2546 | { /* CK804 Ethernet Controller */ |
2254 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2547 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_9), |
2255 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_9, | 2548 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA, |
2256 | .subvendor = PCI_ANY_ID, | ||
2257 | .subdevice = PCI_ANY_ID, | ||
2258 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2259 | }, | 2549 | }, |
2260 | { /* MCP04 Ethernet Controller */ | 2550 | { /* MCP04 Ethernet Controller */ |
2261 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2551 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_10), |
2262 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_10, | 2552 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA, |
2263 | .subvendor = PCI_ANY_ID, | ||
2264 | .subdevice = PCI_ANY_ID, | ||
2265 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2266 | }, | 2553 | }, |
2267 | { /* MCP04 Ethernet Controller */ | 2554 | { /* MCP04 Ethernet Controller */ |
2268 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2555 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_11), |
2269 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_11, | 2556 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA, |
2270 | .subvendor = PCI_ANY_ID, | ||
2271 | .subdevice = PCI_ANY_ID, | ||
2272 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2273 | }, | 2557 | }, |
2274 | { /* MCP51 Ethernet Controller */ | 2558 | { /* MCP51 Ethernet Controller */ |
2275 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2559 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_12), |
2276 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_12, | 2560 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA, |
2277 | .subvendor = PCI_ANY_ID, | ||
2278 | .subdevice = PCI_ANY_ID, | ||
2279 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2280 | }, | 2561 | }, |
2281 | { /* MCP51 Ethernet Controller */ | 2562 | { /* MCP51 Ethernet Controller */ |
2282 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2563 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_13), |
2283 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_13, | 2564 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA, |
2284 | .subvendor = PCI_ANY_ID, | ||
2285 | .subdevice = PCI_ANY_ID, | ||
2286 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2287 | }, | 2565 | }, |
2288 | { /* MCP55 Ethernet Controller */ | 2566 | { /* MCP55 Ethernet Controller */ |
2289 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2567 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_14), |
2290 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_14, | 2568 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA, |
2291 | .subvendor = PCI_ANY_ID, | ||
2292 | .subdevice = PCI_ANY_ID, | ||
2293 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2294 | }, | 2569 | }, |
2295 | { /* MCP55 Ethernet Controller */ | 2570 | { /* MCP55 Ethernet Controller */ |
2296 | .vendor = PCI_VENDOR_ID_NVIDIA, | 2571 | PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15), |
2297 | .device = PCI_DEVICE_ID_NVIDIA_NVENET_15, | 2572 | .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA, |
2298 | .subvendor = PCI_ANY_ID, | ||
2299 | .subdevice = PCI_ANY_ID, | ||
2300 | .driver_data = DEV_NEED_LASTPACKET1|DEV_IRQMASK_2|DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER, | ||
2301 | }, | 2573 | }, |
2302 | {0,}, | 2574 | {0,}, |
2303 | }; | 2575 | }; |
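The table rewrite above relies on the PCI_DEVICE() helper to fill in the vendor/device fields and wildcard the subsystem IDs, which is what each removed four-field initialiser spelled out by hand. The expansion below is paraphrased from <linux/pci.h> of that era and should be treated as an assumption; the demo struct and the ID values are illustrative only:

    #include <stdio.h>

    #define PCI_ANY_ID (~0u)
    /* Approximate definition of the helper used in the new pci_tbl[]. */
    #define PCI_DEVICE(vend, dev) \
            .vendor = (vend), .device = (dev), \
            .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID

    struct pci_id_demo {
            unsigned int vendor, device, subvendor, subdevice;
            unsigned long driver_data;
    };

    int main(void)
    {
            /* One entry written both ways; the initialisers are equivalent. */
            struct pci_id_demo a = {
                    PCI_DEVICE(0x10de, 0x01c3),     /* illustrative IDs */
                    .driver_data = 0x0001 | 0x0002, /* TIMERIRQ|LINKTIMER */
            };
            struct pci_id_demo b = {
                    .vendor = 0x10de, .device = 0x01c3,
                    .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
                    .driver_data = 0x0001 | 0x0002,
            };

            printf("equivalent: %d\n",
                   a.vendor == b.vendor && a.subvendor == b.subvendor &&
                   a.subdevice == b.subdevice && a.driver_data == b.driver_data);
            return 0;
    }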
diff --git a/drivers/net/hamradio/Kconfig b/drivers/net/hamradio/Kconfig index 0cd54306e636..de087cd609d9 100644 --- a/drivers/net/hamradio/Kconfig +++ b/drivers/net/hamradio/Kconfig | |||
@@ -1,6 +1,6 @@ | |||
1 | config MKISS | 1 | config MKISS |
2 | tristate "Serial port KISS driver" | 2 | tristate "Serial port KISS driver" |
3 | depends on AX25 && BROKEN_ON_SMP | 3 | depends on AX25 |
4 | ---help--- | 4 | ---help--- |
5 | KISS is a protocol used for the exchange of data between a computer | 5 | KISS is a protocol used for the exchange of data between a computer |
6 | and a Terminal Node Controller (a small embedded system commonly | 6 | and a Terminal Node Controller (a small embedded system commonly |
diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c index a7f15d9f13e5..5298096afbdb 100644 --- a/drivers/net/hamradio/baycom_epp.c +++ b/drivers/net/hamradio/baycom_epp.c | |||
@@ -54,6 +54,7 @@ | |||
54 | #include <linux/kmod.h> | 54 | #include <linux/kmod.h> |
55 | #include <linux/hdlcdrv.h> | 55 | #include <linux/hdlcdrv.h> |
56 | #include <linux/baycom.h> | 56 | #include <linux/baycom.h> |
57 | #include <linux/jiffies.h> | ||
57 | #if defined(CONFIG_AX25) || defined(CONFIG_AX25_MODULE) | 58 | #if defined(CONFIG_AX25) || defined(CONFIG_AX25_MODULE) |
58 | /* prototypes for ax25_encapsulate and ax25_rebuild_header */ | 59 | /* prototypes for ax25_encapsulate and ax25_rebuild_header */ |
59 | #include <net/ax25.h> | 60 | #include <net/ax25.h> |
@@ -287,7 +288,7 @@ static inline void baycom_int_freq(struct baycom_state *bc) | |||
287 | * measure the interrupt frequency | 288 | * measure the interrupt frequency |
288 | */ | 289 | */ |
289 | bc->debug_vals.cur_intcnt++; | 290 | bc->debug_vals.cur_intcnt++; |
290 | if ((cur_jiffies - bc->debug_vals.last_jiffies) >= HZ) { | 291 | if (time_after_eq(cur_jiffies, bc->debug_vals.last_jiffies + HZ)) { |
291 | bc->debug_vals.last_jiffies = cur_jiffies; | 292 | bc->debug_vals.last_jiffies = cur_jiffies; |
292 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; | 293 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; |
293 | bc->debug_vals.cur_intcnt = 0; | 294 | bc->debug_vals.cur_intcnt = 0; |
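This and the following baycom hunks swap the open-coded jiffies comparison for the standard time_after_eq() helper, which does the subtraction in signed arithmetic so the one-second test stays correct when the jiffies counter wraps. A user-space emulation of the macro (the kernel version additionally type-checks its arguments); HZ and the sample values are made up:

    #include <stdio.h>

    /* Emulation of the kernel helper, for illustration only. */
    #define time_after_eq(a, b) ((long)((a) - (b)) >= 0)
    #define HZ 250UL

    int main(void)
    {
            unsigned long last = (unsigned long)-5; /* just before the counter wraps */
            unsigned long cur  = 250;               /* shortly after the wrap */

            if (time_after_eq(cur, last + HZ))
                    printf("at least HZ jiffies elapsed despite the wrap\n");
            else
                    printf("still within the interval\n");
            return 0;
    }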
diff --git a/drivers/net/hamradio/baycom_par.c b/drivers/net/hamradio/baycom_par.c index 612ad452bee0..3b1bef1ee215 100644 --- a/drivers/net/hamradio/baycom_par.c +++ b/drivers/net/hamradio/baycom_par.c | |||
@@ -84,6 +84,7 @@ | |||
84 | #include <linux/baycom.h> | 84 | #include <linux/baycom.h> |
85 | #include <linux/parport.h> | 85 | #include <linux/parport.h> |
86 | #include <linux/bitops.h> | 86 | #include <linux/bitops.h> |
87 | #include <linux/jiffies.h> | ||
87 | 88 | ||
88 | #include <asm/bug.h> | 89 | #include <asm/bug.h> |
89 | #include <asm/system.h> | 90 | #include <asm/system.h> |
@@ -165,7 +166,7 @@ static void __inline__ baycom_int_freq(struct baycom_state *bc) | |||
165 | * measure the interrupt frequency | 166 | * measure the interrupt frequency |
166 | */ | 167 | */ |
167 | bc->debug_vals.cur_intcnt++; | 168 | bc->debug_vals.cur_intcnt++; |
168 | if ((cur_jiffies - bc->debug_vals.last_jiffies) >= HZ) { | 169 | if (time_after_eq(cur_jiffies, bc->debug_vals.last_jiffies + HZ)) { |
169 | bc->debug_vals.last_jiffies = cur_jiffies; | 170 | bc->debug_vals.last_jiffies = cur_jiffies; |
170 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; | 171 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; |
171 | bc->debug_vals.cur_intcnt = 0; | 172 | bc->debug_vals.cur_intcnt = 0; |
diff --git a/drivers/net/hamradio/baycom_ser_fdx.c b/drivers/net/hamradio/baycom_ser_fdx.c index 25f270b05378..232793d2ce6b 100644 --- a/drivers/net/hamradio/baycom_ser_fdx.c +++ b/drivers/net/hamradio/baycom_ser_fdx.c | |||
@@ -79,6 +79,7 @@ | |||
79 | #include <asm/io.h> | 79 | #include <asm/io.h> |
80 | #include <linux/hdlcdrv.h> | 80 | #include <linux/hdlcdrv.h> |
81 | #include <linux/baycom.h> | 81 | #include <linux/baycom.h> |
82 | #include <linux/jiffies.h> | ||
82 | 83 | ||
83 | /* --------------------------------------------------------------------- */ | 84 | /* --------------------------------------------------------------------- */ |
84 | 85 | ||
@@ -159,7 +160,7 @@ static inline void baycom_int_freq(struct baycom_state *bc) | |||
159 | * measure the interrupt frequency | 160 | * measure the interrupt frequency |
160 | */ | 161 | */ |
161 | bc->debug_vals.cur_intcnt++; | 162 | bc->debug_vals.cur_intcnt++; |
162 | if ((cur_jiffies - bc->debug_vals.last_jiffies) >= HZ) { | 163 | if (time_after_eq(cur_jiffies, bc->debug_vals.last_jiffies + HZ)) { |
163 | bc->debug_vals.last_jiffies = cur_jiffies; | 164 | bc->debug_vals.last_jiffies = cur_jiffies; |
164 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; | 165 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; |
165 | bc->debug_vals.cur_intcnt = 0; | 166 | bc->debug_vals.cur_intcnt = 0; |
diff --git a/drivers/net/hamradio/baycom_ser_hdx.c b/drivers/net/hamradio/baycom_ser_hdx.c index eead85d00962..be596a3eb3fd 100644 --- a/drivers/net/hamradio/baycom_ser_hdx.c +++ b/drivers/net/hamradio/baycom_ser_hdx.c | |||
@@ -69,6 +69,7 @@ | |||
69 | #include <asm/io.h> | 69 | #include <asm/io.h> |
70 | #include <linux/hdlcdrv.h> | 70 | #include <linux/hdlcdrv.h> |
71 | #include <linux/baycom.h> | 71 | #include <linux/baycom.h> |
72 | #include <linux/jiffies.h> | ||
72 | 73 | ||
73 | /* --------------------------------------------------------------------- */ | 74 | /* --------------------------------------------------------------------- */ |
74 | 75 | ||
@@ -150,7 +151,7 @@ static inline void baycom_int_freq(struct baycom_state *bc) | |||
150 | * measure the interrupt frequency | 151 | * measure the interrupt frequency |
151 | */ | 152 | */ |
152 | bc->debug_vals.cur_intcnt++; | 153 | bc->debug_vals.cur_intcnt++; |
153 | if ((cur_jiffies - bc->debug_vals.last_jiffies) >= HZ) { | 154 | if (time_after_eq(cur_jiffies, bc->debug_vals.last_jiffies + HZ)) { |
154 | bc->debug_vals.last_jiffies = cur_jiffies; | 155 | bc->debug_vals.last_jiffies = cur_jiffies; |
155 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; | 156 | bc->debug_vals.last_intcnt = bc->debug_vals.cur_intcnt; |
156 | bc->debug_vals.cur_intcnt = 0; | 157 | bc->debug_vals.cur_intcnt = 0; |
diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c index 3035422f5ad8..63b1a2b86acb 100644 --- a/drivers/net/hamradio/mkiss.c +++ b/drivers/net/hamradio/mkiss.c | |||
@@ -1,30 +1,19 @@ | |||
1 | /* | 1 | /* |
2 | * MKISS Driver | 2 | * This program is free software; you can distribute it and/or modify it |
3 | * under the terms of the GNU General Public License (Version 2) as | ||
4 | * published by the Free Software Foundation. | ||
3 | * | 5 | * |
4 | * This module: | 6 | * This program is distributed in the hope it will be useful, but WITHOUT |
5 | * This module is free software; you can redistribute it and/or | 7 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or |
6 | * modify it under the terms of the GNU General Public License | 8 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License |
7 | * as published by the Free Software Foundation; either version | 9 | * for more details. |
8 | * 2 of the License, or (at your option) any later version. | ||
9 | * | 10 | * |
10 | * This module implements the AX.25 protocol for kernel-based | 11 | * You should have received a copy of the GNU General Public License along |
11 | * devices like TTYs. It interfaces between a raw TTY, and the | 12 | * with this program; if not, write to the Free Software Foundation, Inc., |
12 | * kernel's AX.25 protocol layers, just like slip.c. | 13 | * 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. |
13 | * AX.25 needs to be separated from slip.c while slip.c is no | ||
14 | * longer a static kernel device since it is a module. | ||
15 | * This method clears the way to implement other kiss protocols | ||
16 | * like mkiss smack g8bpq ..... so far only mkiss is implemented. | ||
17 | * | 14 | * |
18 | * Hans Alblas <hans@esrac.ele.tue.nl> | 15 | * Copyright (C) Hans Alblas PE1AYX <hans@esrac.ele.tue.nl> |
19 | * | 16 | * Copyright (C) 2004, 05 Ralf Baechle DL5RB <ralf@linux-mips.org> |
20 | * History | ||
21 | * Jonathan (G4KLX) Fixed to match Linux networking changes - 2.1.15. | ||
22 | * Matthias (DG2FEF) Added support for FlexNet CRC (on special request) | ||
23 | * Fixed bug in ax25_close(): dev_lock_wait() was | ||
24 | * called twice, causing a deadlock. | ||
25 | * Jeroen (PE1RXQ) Removed old MKISS_MAGIC stuff and calls to | ||
26 | * MOD_*_USE_COUNT | ||
27 | * Remove cli() and fix rtnl lock usage. | ||
28 | */ | 17 | */ |
29 | 18 | ||
30 | #include <linux/config.h> | 19 | #include <linux/config.h> |
@@ -46,177 +35,300 @@ | |||
46 | #include <linux/etherdevice.h> | 35 | #include <linux/etherdevice.h> |
47 | #include <linux/skbuff.h> | 36 | #include <linux/skbuff.h> |
48 | #include <linux/if_arp.h> | 37 | #include <linux/if_arp.h> |
38 | #include <linux/jiffies.h> | ||
49 | 39 | ||
50 | #include <net/ax25.h> | 40 | #include <net/ax25.h> |
51 | 41 | ||
52 | #include "mkiss.h" | ||
53 | |||
54 | #ifdef CONFIG_INET | 42 | #ifdef CONFIG_INET |
55 | #include <linux/ip.h> | 43 | #include <linux/ip.h> |
56 | #include <linux/tcp.h> | 44 | #include <linux/tcp.h> |
57 | #endif | 45 | #endif |
58 | 46 | ||
59 | static char banner[] __initdata = KERN_INFO "mkiss: AX.25 Multikiss, Hans Albas PE1AYX\n"; | 47 | #define AX_MTU 236 |
60 | 48 | ||
61 | typedef struct ax25_ctrl { | 49 | /* SLIP/KISS protocol characters. */ |
62 | struct ax_disp ctrl; /* */ | 50 | #define END 0300 /* indicates end of frame */ |
63 | struct net_device dev; /* the device */ | 51 | #define ESC 0333 /* indicates byte stuffing */ |
64 | } ax25_ctrl_t; | 52 | #define ESC_END 0334 /* ESC ESC_END means END 'data' */ |
65 | 53 | #define ESC_ESC 0335 /* ESC ESC_ESC means ESC 'data' */ | |
66 | static ax25_ctrl_t **ax25_ctrls; | 54 | |
67 | 55 | struct mkiss { | |
68 | int ax25_maxdev = AX25_MAXDEV; /* Can be overridden with insmod! */ | 56 | struct tty_struct *tty; /* ptr to TTY structure */ |
69 | 57 | struct net_device *dev; /* easy for intr handling */ | |
70 | static struct tty_ldisc ax_ldisc; | 58 | |
71 | 59 | /* These are pointers to the malloc()ed frame buffers. */ | |
72 | static int ax25_init(struct net_device *); | 60 | spinlock_t buflock;/* lock for rbuf and xbuf */ |
73 | static int kiss_esc(unsigned char *, unsigned char *, int); | 61 | unsigned char *rbuff; /* receiver buffer */ |
74 | static int kiss_esc_crc(unsigned char *, unsigned char *, unsigned short, int); | 62 | int rcount; /* received chars counter */ |
75 | static void kiss_unesc(struct ax_disp *, unsigned char); | 63 | unsigned char *xbuff; /* transmitter buffer */ |
64 | unsigned char *xhead; /* pointer to next byte to XMIT */ | ||
65 | int xleft; /* bytes left in XMIT queue */ | ||
66 | |||
67 | struct net_device_stats stats; | ||
68 | |||
69 | /* Detailed SLIP statistics. */ | ||
70 | int mtu; /* Our mtu (to spot changes!) */ | ||
71 | int buffsize; /* Max buffers sizes */ | ||
72 | |||
73 | unsigned long flags; /* Flag values/ mode etc */ | ||
74 | /* long req'd: used by set_bit --RR */ | ||
75 | #define AXF_INUSE 0 /* Channel in use */ | ||
76 | #define AXF_ESCAPE 1 /* ESC received */ | ||
77 | #define AXF_ERROR 2 /* Parity, etc. error */ | ||
78 | #define AXF_KEEPTEST 3 /* Keepalive test flag */ | ||
79 | #define AXF_OUTWAIT 4 /* output packet is waiting */ | ||
80 | |||
81 | int mode; | ||
82 | int crcmode; /* MW: for FlexNet, SMACK etc. */ | ||
83 | #define CRC_MODE_NONE 0 | ||
84 | #define CRC_MODE_FLEX 1 | ||
85 | #define CRC_MODE_SMACK 2 | ||
86 | |||
87 | atomic_t refcnt; | ||
88 | struct semaphore dead_sem; | ||
89 | }; | ||
76 | 90 | ||
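A quick orientation sketch (not part of the patch): the new struct mkiss above no longer lives in a static ax25_ctrls[] array; mkiss_open() further down allocates it as the private area of its net_device, and every handler recovers it with netdev_priv(). The pattern, using only names from the patch:

    struct net_device *dev = alloc_netdev(sizeof(struct mkiss), "ax%d", ax_setup);
    struct mkiss *ax;

    if (dev != NULL) {
            ax = netdev_priv(dev);  /* points into dev's private area */
            ax->dev = dev;          /* back pointer kept by mkiss_open() */
    }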
77 | /*---------------------------------------------------------------------------*/ | 91 | /*---------------------------------------------------------------------------*/ |
78 | 92 | ||
79 | static const unsigned short Crc_flex_table[] = { | 93 | static const unsigned short crc_flex_table[] = { |
80 | 0x0f87, 0x1e0e, 0x2c95, 0x3d1c, 0x49a3, 0x582a, 0x6ab1, 0x7b38, | 94 | 0x0f87, 0x1e0e, 0x2c95, 0x3d1c, 0x49a3, 0x582a, 0x6ab1, 0x7b38, |
81 | 0x83cf, 0x9246, 0xa0dd, 0xb154, 0xc5eb, 0xd462, 0xe6f9, 0xf770, | 95 | 0x83cf, 0x9246, 0xa0dd, 0xb154, 0xc5eb, 0xd462, 0xe6f9, 0xf770, |
82 | 0x1f06, 0x0e8f, 0x3c14, 0x2d9d, 0x5922, 0x48ab, 0x7a30, 0x6bb9, | 96 | 0x1f06, 0x0e8f, 0x3c14, 0x2d9d, 0x5922, 0x48ab, 0x7a30, 0x6bb9, |
83 | 0x934e, 0x82c7, 0xb05c, 0xa1d5, 0xd56a, 0xc4e3, 0xf678, 0xe7f1, | 97 | 0x934e, 0x82c7, 0xb05c, 0xa1d5, 0xd56a, 0xc4e3, 0xf678, 0xe7f1, |
84 | 0x2e85, 0x3f0c, 0x0d97, 0x1c1e, 0x68a1, 0x7928, 0x4bb3, 0x5a3a, | 98 | 0x2e85, 0x3f0c, 0x0d97, 0x1c1e, 0x68a1, 0x7928, 0x4bb3, 0x5a3a, |
85 | 0xa2cd, 0xb344, 0x81df, 0x9056, 0xe4e9, 0xf560, 0xc7fb, 0xd672, | 99 | 0xa2cd, 0xb344, 0x81df, 0x9056, 0xe4e9, 0xf560, 0xc7fb, 0xd672, |
86 | 0x3e04, 0x2f8d, 0x1d16, 0x0c9f, 0x7820, 0x69a9, 0x5b32, 0x4abb, | 100 | 0x3e04, 0x2f8d, 0x1d16, 0x0c9f, 0x7820, 0x69a9, 0x5b32, 0x4abb, |
87 | 0xb24c, 0xa3c5, 0x915e, 0x80d7, 0xf468, 0xe5e1, 0xd77a, 0xc6f3, | 101 | 0xb24c, 0xa3c5, 0x915e, 0x80d7, 0xf468, 0xe5e1, 0xd77a, 0xc6f3, |
88 | 0x4d83, 0x5c0a, 0x6e91, 0x7f18, 0x0ba7, 0x1a2e, 0x28b5, 0x393c, | 102 | 0x4d83, 0x5c0a, 0x6e91, 0x7f18, 0x0ba7, 0x1a2e, 0x28b5, 0x393c, |
89 | 0xc1cb, 0xd042, 0xe2d9, 0xf350, 0x87ef, 0x9666, 0xa4fd, 0xb574, | 103 | 0xc1cb, 0xd042, 0xe2d9, 0xf350, 0x87ef, 0x9666, 0xa4fd, 0xb574, |
90 | 0x5d02, 0x4c8b, 0x7e10, 0x6f99, 0x1b26, 0x0aaf, 0x3834, 0x29bd, | 104 | 0x5d02, 0x4c8b, 0x7e10, 0x6f99, 0x1b26, 0x0aaf, 0x3834, 0x29bd, |
91 | 0xd14a, 0xc0c3, 0xf258, 0xe3d1, 0x976e, 0x86e7, 0xb47c, 0xa5f5, | 105 | 0xd14a, 0xc0c3, 0xf258, 0xe3d1, 0x976e, 0x86e7, 0xb47c, 0xa5f5, |
92 | 0x6c81, 0x7d08, 0x4f93, 0x5e1a, 0x2aa5, 0x3b2c, 0x09b7, 0x183e, | 106 | 0x6c81, 0x7d08, 0x4f93, 0x5e1a, 0x2aa5, 0x3b2c, 0x09b7, 0x183e, |
93 | 0xe0c9, 0xf140, 0xc3db, 0xd252, 0xa6ed, 0xb764, 0x85ff, 0x9476, | 107 | 0xe0c9, 0xf140, 0xc3db, 0xd252, 0xa6ed, 0xb764, 0x85ff, 0x9476, |
94 | 0x7c00, 0x6d89, 0x5f12, 0x4e9b, 0x3a24, 0x2bad, 0x1936, 0x08bf, | 108 | 0x7c00, 0x6d89, 0x5f12, 0x4e9b, 0x3a24, 0x2bad, 0x1936, 0x08bf, |
95 | 0xf048, 0xe1c1, 0xd35a, 0xc2d3, 0xb66c, 0xa7e5, 0x957e, 0x84f7, | 109 | 0xf048, 0xe1c1, 0xd35a, 0xc2d3, 0xb66c, 0xa7e5, 0x957e, 0x84f7, |
96 | 0x8b8f, 0x9a06, 0xa89d, 0xb914, 0xcdab, 0xdc22, 0xeeb9, 0xff30, | 110 | 0x8b8f, 0x9a06, 0xa89d, 0xb914, 0xcdab, 0xdc22, 0xeeb9, 0xff30, |
97 | 0x07c7, 0x164e, 0x24d5, 0x355c, 0x41e3, 0x506a, 0x62f1, 0x7378, | 111 | 0x07c7, 0x164e, 0x24d5, 0x355c, 0x41e3, 0x506a, 0x62f1, 0x7378, |
98 | 0x9b0e, 0x8a87, 0xb81c, 0xa995, 0xdd2a, 0xcca3, 0xfe38, 0xefb1, | 112 | 0x9b0e, 0x8a87, 0xb81c, 0xa995, 0xdd2a, 0xcca3, 0xfe38, 0xefb1, |
99 | 0x1746, 0x06cf, 0x3454, 0x25dd, 0x5162, 0x40eb, 0x7270, 0x63f9, | 113 | 0x1746, 0x06cf, 0x3454, 0x25dd, 0x5162, 0x40eb, 0x7270, 0x63f9, |
100 | 0xaa8d, 0xbb04, 0x899f, 0x9816, 0xeca9, 0xfd20, 0xcfbb, 0xde32, | 114 | 0xaa8d, 0xbb04, 0x899f, 0x9816, 0xeca9, 0xfd20, 0xcfbb, 0xde32, |
101 | 0x26c5, 0x374c, 0x05d7, 0x145e, 0x60e1, 0x7168, 0x43f3, 0x527a, | 115 | 0x26c5, 0x374c, 0x05d7, 0x145e, 0x60e1, 0x7168, 0x43f3, 0x527a, |
102 | 0xba0c, 0xab85, 0x991e, 0x8897, 0xfc28, 0xeda1, 0xdf3a, 0xceb3, | 116 | 0xba0c, 0xab85, 0x991e, 0x8897, 0xfc28, 0xeda1, 0xdf3a, 0xceb3, |
103 | 0x3644, 0x27cd, 0x1556, 0x04df, 0x7060, 0x61e9, 0x5372, 0x42fb, | 117 | 0x3644, 0x27cd, 0x1556, 0x04df, 0x7060, 0x61e9, 0x5372, 0x42fb, |
104 | 0xc98b, 0xd802, 0xea99, 0xfb10, 0x8faf, 0x9e26, 0xacbd, 0xbd34, | 118 | 0xc98b, 0xd802, 0xea99, 0xfb10, 0x8faf, 0x9e26, 0xacbd, 0xbd34, |
105 | 0x45c3, 0x544a, 0x66d1, 0x7758, 0x03e7, 0x126e, 0x20f5, 0x317c, | 119 | 0x45c3, 0x544a, 0x66d1, 0x7758, 0x03e7, 0x126e, 0x20f5, 0x317c, |
106 | 0xd90a, 0xc883, 0xfa18, 0xeb91, 0x9f2e, 0x8ea7, 0xbc3c, 0xadb5, | 120 | 0xd90a, 0xc883, 0xfa18, 0xeb91, 0x9f2e, 0x8ea7, 0xbc3c, 0xadb5, |
107 | 0x5542, 0x44cb, 0x7650, 0x67d9, 0x1366, 0x02ef, 0x3074, 0x21fd, | 121 | 0x5542, 0x44cb, 0x7650, 0x67d9, 0x1366, 0x02ef, 0x3074, 0x21fd, |
108 | 0xe889, 0xf900, 0xcb9b, 0xda12, 0xaead, 0xbf24, 0x8dbf, 0x9c36, | 122 | 0xe889, 0xf900, 0xcb9b, 0xda12, 0xaead, 0xbf24, 0x8dbf, 0x9c36, |
109 | 0x64c1, 0x7548, 0x47d3, 0x565a, 0x22e5, 0x336c, 0x01f7, 0x107e, | 123 | 0x64c1, 0x7548, 0x47d3, 0x565a, 0x22e5, 0x336c, 0x01f7, 0x107e, |
110 | 0xf808, 0xe981, 0xdb1a, 0xca93, 0xbe2c, 0xafa5, 0x9d3e, 0x8cb7, | 124 | 0xf808, 0xe981, 0xdb1a, 0xca93, 0xbe2c, 0xafa5, 0x9d3e, 0x8cb7, |
111 | 0x7440, 0x65c9, 0x5752, 0x46db, 0x3264, 0x23ed, 0x1176, 0x00ff | 125 | 0x7440, 0x65c9, 0x5752, 0x46db, 0x3264, 0x23ed, 0x1176, 0x00ff |
112 | }; | 126 | }; |
113 | 127 | ||
114 | /*---------------------------------------------------------------------------*/ | ||
115 | |||
116 | static unsigned short calc_crc_flex(unsigned char *cp, int size) | 128 | static unsigned short calc_crc_flex(unsigned char *cp, int size) |
117 | { | 129 | { |
118 | unsigned short crc = 0xffff; | 130 | unsigned short crc = 0xffff; |
119 | |||
120 | while (size--) | ||
121 | crc = (crc << 8) ^ Crc_flex_table[((crc >> 8) ^ *cp++) & 0xff]; | ||
122 | 131 | ||
123 | return crc; | 132 | while (size--) |
124 | } | 133 | crc = (crc << 8) ^ crc_flex_table[((crc >> 8) ^ *cp++) & 0xff]; |
125 | 134 | ||
126 | /*---------------------------------------------------------------------------*/ | 135 | return crc; |
136 | } | ||
127 | 137 | ||
128 | static int check_crc_flex(unsigned char *cp, int size) | 138 | static int check_crc_flex(unsigned char *cp, int size) |
129 | { | 139 | { |
130 | unsigned short crc = 0xffff; | 140 | unsigned short crc = 0xffff; |
131 | 141 | ||
132 | if (size < 3) | 142 | if (size < 3) |
133 | return -1; | 143 | return -1; |
134 | 144 | ||
135 | while (size--) | 145 | while (size--) |
136 | crc = (crc << 8) ^ Crc_flex_table[((crc >> 8) ^ *cp++) & 0xff]; | 146 | crc = (crc << 8) ^ crc_flex_table[((crc >> 8) ^ *cp++) & 0xff]; |
137 | 147 | ||
138 | if ((crc & 0xffff) != 0x7070) | 148 | if ((crc & 0xffff) != 0x7070) |
139 | return -1; | 149 | return -1; |
140 | 150 | ||
141 | return 0; | 151 | return 0; |
142 | } | 152 | } |
143 | 153 | ||
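A usage sketch of the two FlexNet CRC helpers above (illustrative only, the payload is hypothetical): the transmit side appends calc_crc_flex() of the frame, high byte first -- the same order kiss_esc_crc() below emits it -- and check_crc_flex() run over the frame plus those two bytes then sees the fixed 0x7070 residue and returns 0, which is what ax_bump() later relies on.

    unsigned char frame[16] = { 0x20, 'T', 'E', 'S', 'T' };  /* hypothetical payload */
    int len = 5;
    unsigned short crc = calc_crc_flex(frame, len);

    frame[len]     = crc >> 8;      /* high byte first */
    frame[len + 1] = crc & 0xff;

    /* A receiver can now validate payload and CRC in one pass: */
    if (check_crc_flex(frame, len + 2) == 0)
            printk(KERN_DEBUG "flexnet crc ok\n");      /* frame would be accepted */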
144 | /*---------------------------------------------------------------------------*/ | 154 | /* |
155 | * Standard encapsulation | ||
156 | */ | ||
145 | 157 | ||
146 | /* Find a free channel, and link in this `tty' line. */ | 158 | static int kiss_esc(unsigned char *s, unsigned char *d, int len) |
147 | static inline struct ax_disp *ax_alloc(void) | ||
148 | { | 159 | { |
149 | ax25_ctrl_t *axp=NULL; | 160 | unsigned char *ptr = d; |
150 | int i; | 161 | unsigned char c; |
151 | 162 | ||
152 | for (i = 0; i < ax25_maxdev; i++) { | 163 | /* |
153 | axp = ax25_ctrls[i]; | 164 | * Send an initial END character to flush out any data that may have |
165 | * accumulated in the receiver due to line noise. | ||
166 | */ | ||
154 | 167 | ||
155 | /* Not allocated ? */ | 168 | *ptr++ = END; |
156 | if (axp == NULL) | ||
157 | break; | ||
158 | 169 | ||
159 | /* Not in use ? */ | 170 | while (len-- > 0) { |
160 | if (!test_and_set_bit(AXF_INUSE, &axp->ctrl.flags)) | 171 | switch (c = *s++) { |
172 | case END: | ||
173 | *ptr++ = ESC; | ||
174 | *ptr++ = ESC_END; | ||
161 | break; | 175 | break; |
176 | case ESC: | ||
177 | *ptr++ = ESC; | ||
178 | *ptr++ = ESC_ESC; | ||
179 | break; | ||
180 | default: | ||
181 | *ptr++ = c; | ||
182 | break; | ||
183 | } | ||
162 | } | 184 | } |
163 | 185 | ||
164 | /* Sorry, too many, all slots in use */ | 186 | *ptr++ = END; |
165 | if (i >= ax25_maxdev) | 187 | |
166 | return NULL; | 188 | return ptr - d; |
189 | } | ||
190 | |||
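The framing kiss_esc() implements is the SLIP/KISS byte stuffing described by the END/ESC defines at the top of the file: bracket the frame with END and replace any END or ESC in the payload with a two-byte escape, so the output can grow to at most 2 * len + 2 bytes. A worked example (not from the patch; kiss_unesc() further down reverses it byte for byte):

    unsigned char payload[3] = { 0x00, END, ESC };
    unsigned char wire[2 * sizeof(payload) + 2];    /* worst-case size */
    int n = kiss_esc(payload, wire, sizeof(payload));

    /* wire now holds: END 0x00 ESC ESC_END ESC ESC_ESC END, and n == 7 */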
191 | /* | ||
192 | * MW: | ||
193 | * OK its ugly, but tell me a better solution without copying the | ||
194 | * packet to a temporary buffer :-) | ||
195 | */ | ||
196 | static int kiss_esc_crc(unsigned char *s, unsigned char *d, unsigned short crc, | ||
197 | int len) | ||
198 | { | ||
199 | unsigned char *ptr = d; | ||
200 | unsigned char c=0; | ||
201 | |||
202 | *ptr++ = END; | ||
203 | while (len > 0) { | ||
204 | if (len > 2) | ||
205 | c = *s++; | ||
206 | else if (len > 1) | ||
207 | c = crc >> 8; | ||
208 | else if (len > 0) | ||
209 | c = crc & 0xff; | ||
210 | |||
211 | len--; | ||
167 | 212 | ||
168 | /* If no channels are available, allocate one */ | 213 | switch (c) { |
169 | if (axp == NULL && (ax25_ctrls[i] = kmalloc(sizeof(ax25_ctrl_t), GFP_KERNEL)) != NULL) { | 214 | case END: |
170 | axp = ax25_ctrls[i]; | 215 | *ptr++ = ESC; |
216 | *ptr++ = ESC_END; | ||
217 | break; | ||
218 | case ESC: | ||
219 | *ptr++ = ESC; | ||
220 | *ptr++ = ESC_ESC; | ||
221 | break; | ||
222 | default: | ||
223 | *ptr++ = c; | ||
224 | break; | ||
225 | } | ||
171 | } | 226 | } |
172 | memset(axp, 0, sizeof(ax25_ctrl_t)); | 227 | *ptr++ = END; |
173 | 228 | ||
174 | /* Initialize channel control data */ | 229 | return ptr - d; |
175 | set_bit(AXF_INUSE, &axp->ctrl.flags); | 230 | } |
176 | sprintf(axp->dev.name, "ax%d", i++); | 231 | |
177 | axp->ctrl.tty = NULL; | 232 | /* Send one completely decapsulated AX.25 packet to the AX.25 layer. */ |
178 | axp->dev.base_addr = i; | 233 | static void ax_bump(struct mkiss *ax) |
179 | axp->dev.priv = (void *)&axp->ctrl; | 234 | { |
180 | axp->dev.next = NULL; | 235 | struct sk_buff *skb; |
181 | axp->dev.init = ax25_init; | 236 | int count; |
182 | 237 | ||
183 | if (axp != NULL) { | 238 | spin_lock_bh(&ax->buflock); |
184 | /* | 239 | if (ax->rbuff[0] > 0x0f) { |
185 | * register device so that it can be ifconfig'ed | 240 | if (ax->rbuff[0] & 0x20) { |
186 | * ax25_init() will be called as a side-effect | 241 | ax->crcmode = CRC_MODE_FLEX; |
187 | * SIDE-EFFECT WARNING: ax25_init() CLEARS axp->ctrl ! | 242 | if (check_crc_flex(ax->rbuff, ax->rcount) < 0) { |
188 | */ | 243 | ax->stats.rx_errors++; |
189 | if (register_netdev(&axp->dev) == 0) { | 244 | return; |
190 | /* (Re-)Set the INUSE bit. Very Important! */ | 245 | } |
191 | set_bit(AXF_INUSE, &axp->ctrl.flags); | 246 | ax->rcount -= 2; |
192 | axp->ctrl.dev = &axp->dev; | 247 | /* dl9sau bugfix: the trailing two bytes flexnet crc |
193 | axp->dev.priv = (void *) &axp->ctrl; | 248 | * will not be passed to the kernel. thus we have |
194 | 249 | * to correct the kissparm signature, because it | |
195 | return &axp->ctrl; | 250 | * indicates a crc but there's none |
196 | } else { | 251 | */ |
197 | clear_bit(AXF_INUSE,&axp->ctrl.flags); | 252 | *ax->rbuff &= ~0x20; |
198 | printk(KERN_ERR "mkiss: ax_alloc() - register_netdev() failure.\n"); | ||
199 | } | 253 | } |
254 | } | ||
255 | spin_unlock_bh(&ax->buflock); | ||
256 | |||
257 | count = ax->rcount; | ||
258 | |||
259 | if ((skb = dev_alloc_skb(count)) == NULL) { | ||
260 | printk(KERN_ERR "mkiss: %s: memory squeeze, dropping packet.\n", | ||
261 | ax->dev->name); | ||
262 | ax->stats.rx_dropped++; | ||
263 | return; | ||
200 | } | 264 | } |
201 | 265 | ||
202 | return NULL; | 266 | spin_lock_bh(&ax->buflock); |
267 | memcpy(skb_put(skb,count), ax->rbuff, count); | ||
268 | spin_unlock_bh(&ax->buflock); | ||
269 | skb->protocol = ax25_type_trans(skb, ax->dev); | ||
270 | netif_rx(skb); | ||
271 | ax->dev->last_rx = jiffies; | ||
272 | ax->stats.rx_packets++; | ||
273 | ax->stats.rx_bytes += count; | ||
203 | } | 274 | } |
204 | 275 | ||
205 | /* Free an AX25 channel. */ | 276 | static void kiss_unesc(struct mkiss *ax, unsigned char s) |
206 | static inline void ax_free(struct ax_disp *ax) | ||
207 | { | 277 | { |
208 | /* Free all AX25 frame buffers. */ | 278 | switch (s) { |
209 | if (ax->rbuff) | 279 | case END: |
210 | kfree(ax->rbuff); | 280 | /* drop keeptest bit = VSV */ |
211 | ax->rbuff = NULL; | 281 | if (test_bit(AXF_KEEPTEST, &ax->flags)) |
212 | if (ax->xbuff) | 282 | clear_bit(AXF_KEEPTEST, &ax->flags); |
213 | kfree(ax->xbuff); | 283 | |
214 | ax->xbuff = NULL; | 284 | if (!test_and_clear_bit(AXF_ERROR, &ax->flags) && (ax->rcount > 2)) |
215 | if (!test_and_clear_bit(AXF_INUSE, &ax->flags)) | 285 | ax_bump(ax); |
216 | printk(KERN_ERR "mkiss: %s: ax_free for already free unit.\n", ax->dev->name); | 286 | |
287 | clear_bit(AXF_ESCAPE, &ax->flags); | ||
288 | ax->rcount = 0; | ||
289 | return; | ||
290 | |||
291 | case ESC: | ||
292 | set_bit(AXF_ESCAPE, &ax->flags); | ||
293 | return; | ||
294 | case ESC_ESC: | ||
295 | if (test_and_clear_bit(AXF_ESCAPE, &ax->flags)) | ||
296 | s = ESC; | ||
297 | break; | ||
298 | case ESC_END: | ||
299 | if (test_and_clear_bit(AXF_ESCAPE, &ax->flags)) | ||
300 | s = END; | ||
301 | break; | ||
302 | } | ||
303 | |||
304 | spin_lock_bh(&ax->buflock); | ||
305 | if (!test_bit(AXF_ERROR, &ax->flags)) { | ||
306 | if (ax->rcount < ax->buffsize) { | ||
307 | ax->rbuff[ax->rcount++] = s; | ||
308 | spin_unlock_bh(&ax->buflock); | ||
309 | return; | ||
310 | } | ||
311 | |||
312 | ax->stats.rx_over_errors++; | ||
313 | set_bit(AXF_ERROR, &ax->flags); | ||
314 | } | ||
315 | spin_unlock_bh(&ax->buflock); | ||
316 | } | ||
317 | |||
318 | static int ax_set_mac_address(struct net_device *dev, void *addr) | ||
319 | { | ||
320 | struct sockaddr_ax25 *sa = addr; | ||
321 | |||
322 | spin_lock_irq(&dev->xmit_lock); | ||
323 | memcpy(dev->dev_addr, &sa->sax25_call, AX25_ADDR_LEN); | ||
324 | spin_unlock_irq(&dev->xmit_lock); | ||
325 | |||
326 | return 0; | ||
217 | } | 327 | } |
218 | 328 | ||
219 | static void ax_changedmtu(struct ax_disp *ax) | 329 | /*---------------------------------------------------------------------------*/ |
330 | |||
331 | static void ax_changedmtu(struct mkiss *ax) | ||
220 | { | 332 | { |
221 | struct net_device *dev = ax->dev; | 333 | struct net_device *dev = ax->dev; |
222 | unsigned char *xbuff, *rbuff, *oxbuff, *orbuff; | 334 | unsigned char *xbuff, *rbuff, *oxbuff, *orbuff; |
@@ -236,7 +348,8 @@ static void ax_changedmtu(struct ax_disp *ax) | |||
236 | rbuff = kmalloc(len + 4, GFP_ATOMIC); | 348 | rbuff = kmalloc(len + 4, GFP_ATOMIC); |
237 | 349 | ||
238 | if (xbuff == NULL || rbuff == NULL) { | 350 | if (xbuff == NULL || rbuff == NULL) { |
239 | printk(KERN_ERR "mkiss: %s: unable to grow ax25 buffers, MTU change cancelled.\n", | 351 | printk(KERN_ERR "mkiss: %s: unable to grow ax25 buffers, " |
352 | "MTU change cancelled.\n", | ||
240 | ax->dev->name); | 353 | ax->dev->name); |
241 | dev->mtu = ax->mtu; | 354 | dev->mtu = ax->mtu; |
242 | if (xbuff != NULL) | 355 | if (xbuff != NULL) |
@@ -258,7 +371,7 @@ static void ax_changedmtu(struct ax_disp *ax) | |||
258 | memcpy(ax->xbuff, ax->xhead, ax->xleft); | 371 | memcpy(ax->xbuff, ax->xhead, ax->xleft); |
259 | } else { | 372 | } else { |
260 | ax->xleft = 0; | 373 | ax->xleft = 0; |
261 | ax->tx_dropped++; | 374 | ax->stats.tx_dropped++; |
262 | } | 375 | } |
263 | } | 376 | } |
264 | 377 | ||
@@ -269,7 +382,7 @@ static void ax_changedmtu(struct ax_disp *ax) | |||
269 | memcpy(ax->rbuff, orbuff, ax->rcount); | 382 | memcpy(ax->rbuff, orbuff, ax->rcount); |
270 | } else { | 383 | } else { |
271 | ax->rcount = 0; | 384 | ax->rcount = 0; |
272 | ax->rx_over_errors++; | 385 | ax->stats.rx_over_errors++; |
273 | set_bit(AXF_ERROR, &ax->flags); | 386 | set_bit(AXF_ERROR, &ax->flags); |
274 | } | 387 | } |
275 | } | 388 | } |
@@ -279,72 +392,14 @@ static void ax_changedmtu(struct ax_disp *ax) | |||
279 | 392 | ||
280 | spin_unlock_bh(&ax->buflock); | 393 | spin_unlock_bh(&ax->buflock); |
281 | 394 | ||
282 | if (oxbuff != NULL) | 395 | kfree(oxbuff); |
283 | kfree(oxbuff); | 396 | kfree(orbuff); |
284 | if (orbuff != NULL) | ||
285 | kfree(orbuff); | ||
286 | } | ||
287 | |||
288 | |||
289 | /* Set the "sending" flag. This must be atomic. */ | ||
290 | static inline void ax_lock(struct ax_disp *ax) | ||
291 | { | ||
292 | netif_stop_queue(ax->dev); | ||
293 | } | ||
294 | |||
295 | |||
296 | /* Clear the "sending" flag. This must be atomic. */ | ||
297 | static inline void ax_unlock(struct ax_disp *ax) | ||
298 | { | ||
299 | netif_start_queue(ax->dev); | ||
300 | } | ||
301 | |||
302 | /* Send one completely decapsulated AX.25 packet to the AX.25 layer. */ | ||
303 | static void ax_bump(struct ax_disp *ax) | ||
304 | { | ||
305 | struct sk_buff *skb; | ||
306 | int count; | ||
307 | |||
308 | spin_lock_bh(&ax->buflock); | ||
309 | if (ax->rbuff[0] > 0x0f) { | ||
310 | if (ax->rbuff[0] & 0x20) { | ||
311 | ax->crcmode = CRC_MODE_FLEX; | ||
312 | if (check_crc_flex(ax->rbuff, ax->rcount) < 0) { | ||
313 | ax->rx_errors++; | ||
314 | return; | ||
315 | } | ||
316 | ax->rcount -= 2; | ||
317 | /* dl9sau bugfix: the trailling two bytes flexnet crc | ||
318 | * will not be passed to the kernel. thus we have | ||
319 | * to correct the kissparm signature, because it | ||
320 | * indicates a crc but there's none | ||
321 | */ | ||
322 | *ax->rbuff &= ~0x20; | ||
323 | } | ||
324 | } | ||
325 | spin_unlock_bh(&ax->buflock); | ||
326 | |||
327 | count = ax->rcount; | ||
328 | |||
329 | if ((skb = dev_alloc_skb(count)) == NULL) { | ||
330 | printk(KERN_ERR "mkiss: %s: memory squeeze, dropping packet.\n", ax->dev->name); | ||
331 | ax->rx_dropped++; | ||
332 | return; | ||
333 | } | ||
334 | |||
335 | spin_lock_bh(&ax->buflock); | ||
336 | memcpy(skb_put(skb,count), ax->rbuff, count); | ||
337 | spin_unlock_bh(&ax->buflock); | ||
338 | skb->protocol = ax25_type_trans(skb, ax->dev); | ||
339 | netif_rx(skb); | ||
340 | ax->dev->last_rx = jiffies; | ||
341 | ax->rx_packets++; | ||
342 | ax->rx_bytes+=count; | ||
343 | } | 397 | } |
344 | 398 | ||
345 | /* Encapsulate one AX.25 packet and stuff into a TTY queue. */ | 399 | /* Encapsulate one AX.25 packet and stuff into a TTY queue. */ |
346 | static void ax_encaps(struct ax_disp *ax, unsigned char *icp, int len) | 400 | static void ax_encaps(struct net_device *dev, unsigned char *icp, int len) |
347 | { | 401 | { |
402 | struct mkiss *ax = netdev_priv(dev); | ||
348 | unsigned char *p; | 403 | unsigned char *p; |
349 | int actual, count; | 404 | int actual, count; |
350 | 405 | ||
@@ -354,8 +409,8 @@ static void ax_encaps(struct ax_disp *ax, unsigned char *icp, int len) | |||
354 | if (len > ax->mtu) { /* Sigh, shouldn't occur BUT ... */ | 409 | if (len > ax->mtu) { /* Sigh, shouldn't occur BUT ... */ |
355 | len = ax->mtu; | 410 | len = ax->mtu; |
356 | printk(KERN_ERR "mkiss: %s: truncating oversized transmit packet!\n", ax->dev->name); | 411 | printk(KERN_ERR "mkiss: %s: truncating oversized transmit packet!\n", ax->dev->name); |
357 | ax->tx_dropped++; | 412 | ax->stats.tx_dropped++; |
358 | ax_unlock(ax); | 413 | netif_start_queue(dev); |
359 | return; | 414 | return; |
360 | } | 415 | } |
361 | 416 | ||
@@ -376,10 +431,11 @@ static void ax_encaps(struct ax_disp *ax, unsigned char *icp, int len) | |||
376 | break; | 431 | break; |
377 | } | 432 | } |
378 | 433 | ||
379 | ax->tty->flags |= (1 << TTY_DO_WRITE_WAKEUP); | 434 | set_bit(TTY_DO_WRITE_WAKEUP, &ax->tty->flags); |
380 | actual = ax->tty->driver->write(ax->tty, ax->xbuff, count); | 435 | actual = ax->tty->driver->write(ax->tty, ax->xbuff, count); |
381 | ax->tx_packets++; | 436 | ax->stats.tx_packets++; |
382 | ax->tx_bytes+=actual; | 437 | ax->stats.tx_bytes += actual; |
438 | |||
383 | ax->dev->trans_start = jiffies; | 439 | ax->dev->trans_start = jiffies; |
384 | ax->xleft = count - actual; | 440 | ax->xleft = count - actual; |
385 | ax->xhead = ax->xbuff + actual; | 441 | ax->xhead = ax->xbuff + actual; |
@@ -387,37 +443,10 @@ static void ax_encaps(struct ax_disp *ax, unsigned char *icp, int len) | |||
387 | spin_unlock_bh(&ax->buflock); | 443 | spin_unlock_bh(&ax->buflock); |
388 | } | 444 | } |
389 | 445 | ||
390 | /* | ||
391 | * Called by the driver when there's room for more data. If we have | ||
392 | * more packets to send, we send them here. | ||
393 | */ | ||
394 | static void ax25_write_wakeup(struct tty_struct *tty) | ||
395 | { | ||
396 | int actual; | ||
397 | struct ax_disp *ax = (struct ax_disp *) tty->disc_data; | ||
398 | |||
399 | /* First make sure we're connected. */ | ||
400 | if (ax == NULL || ax->magic != AX25_MAGIC || !netif_running(ax->dev)) | ||
401 | return; | ||
402 | if (ax->xleft <= 0) { | ||
403 | /* Now serial buffer is almost free & we can start | ||
404 | * transmission of another packet | ||
405 | */ | ||
406 | tty->flags &= ~(1 << TTY_DO_WRITE_WAKEUP); | ||
407 | |||
408 | netif_wake_queue(ax->dev); | ||
409 | return; | ||
410 | } | ||
411 | |||
412 | actual = tty->driver->write(tty, ax->xhead, ax->xleft); | ||
413 | ax->xleft -= actual; | ||
414 | ax->xhead += actual; | ||
415 | } | ||
416 | |||
417 | /* Encapsulate an AX.25 packet and kick it into a TTY queue. */ | 446 | /* Encapsulate an AX.25 packet and kick it into a TTY queue. */ |
418 | static int ax_xmit(struct sk_buff *skb, struct net_device *dev) | 447 | static int ax_xmit(struct sk_buff *skb, struct net_device *dev) |
419 | { | 448 | { |
420 | struct ax_disp *ax = netdev_priv(dev); | 449 | struct mkiss *ax = netdev_priv(dev); |
421 | 450 | ||
422 | if (!netif_running(dev)) { | 451 | if (!netif_running(dev)) { |
423 | printk(KERN_ERR "mkiss: %s: xmit call when iface is down\n", dev->name); | 452 | printk(KERN_ERR "mkiss: %s: xmit call when iface is down\n", dev->name); |
@@ -429,7 +458,7 @@ static int ax_xmit(struct sk_buff *skb, struct net_device *dev) | |||
429 | * May be we must check transmitter timeout here ? | 458 | * May be we must check transmitter timeout here ? |
430 | * 14 Oct 1994 Dmitry Gorodchanin. | 459 | * 14 Oct 1994 Dmitry Gorodchanin. |
431 | */ | 460 | */ |
432 | if (jiffies - dev->trans_start < 20 * HZ) { | 461 | if (time_before(jiffies, dev->trans_start + 20 * HZ)) { |
433 | /* 20 sec timeout not reached */ | 462 | /* 20 sec timeout not reached */ |
434 | return 1; | 463 | return 1; |
435 | } | 464 | } |
@@ -439,20 +468,30 @@ static int ax_xmit(struct sk_buff *skb, struct net_device *dev) | |||
439 | "bad line quality" : "driver error"); | 468 | "bad line quality" : "driver error"); |
440 | 469 | ||
441 | ax->xleft = 0; | 470 | ax->xleft = 0; |
442 | ax->tty->flags &= ~(1 << TTY_DO_WRITE_WAKEUP); | 471 | clear_bit(TTY_DO_WRITE_WAKEUP, &ax->tty->flags); |
443 | ax_unlock(ax); | 472 | netif_start_queue(dev); |
444 | } | 473 | } |
445 | 474 | ||
446 | /* We were not busy, so we are now... :-) */ | 475 | /* We were not busy, so we are now... :-) */ |
447 | if (skb != NULL) { | 476 | if (skb != NULL) { |
448 | ax_lock(ax); | 477 | netif_stop_queue(dev); |
449 | ax_encaps(ax, skb->data, skb->len); | 478 | ax_encaps(dev, skb->data, skb->len); |
450 | kfree_skb(skb); | 479 | kfree_skb(skb); |
451 | } | 480 | } |
452 | 481 | ||
453 | return 0; | 482 | return 0; |
454 | } | 483 | } |
455 | 484 | ||
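Note that the timeout test in ax_xmit() above is also switched from open-coded jiffies subtraction to time_before(), the helper from <linux/jiffies.h> that this patch adds to the includes; the macro stays correct when the jiffies counter wraps around. The idiom in isolation (a sketch, with dev as in the function above):

    unsigned long deadline = dev->trans_start + 20 * HZ;

    if (time_before(jiffies, deadline))
            return 1;       /* still inside the 20 second window */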
485 | static int ax_open_dev(struct net_device *dev) | ||
486 | { | ||
487 | struct mkiss *ax = netdev_priv(dev); | ||
488 | |||
489 | if (ax->tty == NULL) | ||
490 | return -ENODEV; | ||
491 | |||
492 | return 0; | ||
493 | } | ||
494 | |||
456 | #if defined(CONFIG_AX25) || defined(CONFIG_AX25_MODULE) | 495 | #if defined(CONFIG_AX25) || defined(CONFIG_AX25_MODULE) |
457 | 496 | ||
458 | /* Return the frame type ID */ | 497 | /* Return the frame type ID */ |
@@ -481,7 +520,7 @@ static int ax_rebuild_header(struct sk_buff *skb) | |||
481 | /* Open the low-level part of the AX25 channel. Easy! */ | 520 | /* Open the low-level part of the AX25 channel. Easy! */ |
482 | static int ax_open(struct net_device *dev) | 521 | static int ax_open(struct net_device *dev) |
483 | { | 522 | { |
484 | struct ax_disp *ax = netdev_priv(dev); | 523 | struct mkiss *ax = netdev_priv(dev); |
485 | unsigned long len; | 524 | unsigned long len; |
486 | 525 | ||
487 | if (ax->tty == NULL) | 526 | if (ax->tty == NULL) |
@@ -518,7 +557,6 @@ static int ax_open(struct net_device *dev) | |||
518 | 557 | ||
519 | spin_lock_init(&ax->buflock); | 558 | spin_lock_init(&ax->buflock); |
520 | 559 | ||
521 | netif_start_queue(dev); | ||
522 | return 0; | 560 | return 0; |
523 | 561 | ||
524 | noxbuff: | 562 | noxbuff: |
@@ -532,68 +570,100 @@ norbuff: | |||
532 | /* Close the low-level part of the AX25 channel. Easy! */ | 570 | /* Close the low-level part of the AX25 channel. Easy! */ |
533 | static int ax_close(struct net_device *dev) | 571 | static int ax_close(struct net_device *dev) |
534 | { | 572 | { |
535 | struct ax_disp *ax = netdev_priv(dev); | 573 | struct mkiss *ax = netdev_priv(dev); |
536 | 574 | ||
537 | if (ax->tty == NULL) | 575 | if (ax->tty) |
538 | return -EBUSY; | 576 | clear_bit(TTY_DO_WRITE_WAKEUP, &ax->tty->flags); |
539 | |||
540 | ax->tty->flags &= ~(1 << TTY_DO_WRITE_WAKEUP); | ||
541 | 577 | ||
542 | netif_stop_queue(dev); | 578 | netif_stop_queue(dev); |
543 | 579 | ||
544 | return 0; | 580 | return 0; |
545 | } | 581 | } |
546 | 582 | ||
547 | static int ax25_receive_room(struct tty_struct *tty) | 583 | static struct net_device_stats *ax_get_stats(struct net_device *dev) |
548 | { | 584 | { |
549 | return 65536; /* We can handle an infinite amount of data. :-) */ | 585 | struct mkiss *ax = netdev_priv(dev); |
586 | |||
587 | return &ax->stats; | ||
588 | } | ||
589 | |||
590 | static void ax_setup(struct net_device *dev) | ||
591 | { | ||
592 | static char ax25_bcast[AX25_ADDR_LEN] = | ||
593 | {'Q'<<1,'S'<<1,'T'<<1,' '<<1,' '<<1,' '<<1,'0'<<1}; | ||
594 | static char ax25_test[AX25_ADDR_LEN] = | ||
595 | {'L'<<1,'I'<<1,'N'<<1,'U'<<1,'X'<<1,' '<<1,'1'<<1}; | ||
596 | |||
597 | /* Finish setting up the DEVICE info. */ | ||
598 | dev->mtu = AX_MTU; | ||
599 | dev->hard_start_xmit = ax_xmit; | ||
600 | dev->open = ax_open_dev; | ||
601 | dev->stop = ax_close; | ||
602 | dev->get_stats = ax_get_stats; | ||
603 | dev->set_mac_address = ax_set_mac_address; | ||
604 | dev->hard_header_len = 0; | ||
605 | dev->addr_len = 0; | ||
606 | dev->type = ARPHRD_AX25; | ||
607 | dev->tx_queue_len = 10; | ||
608 | dev->hard_header = ax_header; | ||
609 | dev->rebuild_header = ax_rebuild_header; | ||
610 | |||
611 | memcpy(dev->broadcast, ax25_bcast, AX25_ADDR_LEN); | ||
612 | memcpy(dev->dev_addr, ax25_test, AX25_ADDR_LEN); | ||
613 | |||
614 | dev->flags = IFF_BROADCAST | IFF_MULTICAST; | ||
550 | } | 615 | } |
551 | 616 | ||
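The ax25_bcast and ax25_test arrays in ax_setup() show how AX.25 addresses are laid out: six callsign characters, space padded, each shifted left one bit, followed by the SSID digit shifted the same way (which is how the arrays arrive at '0'<<1 and '1'<<1). A hypothetical helper, not part of the patch, that builds such an address:

    /* Pack a callsign such as "LINUX" with SSID 1 into addr[]. */
    static void ax_fill_callsign(char addr[AX25_ADDR_LEN], const char *call, int ssid)
    {
            int i;

            for (i = 0; i < AX25_ADDR_LEN - 1; i++)
                    addr[i] = (*call ? *call++ : ' ') << 1;
            addr[AX25_ADDR_LEN - 1] = ('0' + ssid) << 1;
    }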
552 | /* | 617 | /* |
553 | * Handle the 'receiver data ready' interrupt. | 618 | * We have a potential race on dereferencing tty->disc_data, because the tty |
554 | * This function is called by the 'tty_io' module in the kernel when | 619 | * layer provides no locking at all - thus one cpu could be running |
555 | * a block of data has been received, which can now be decapsulated | 620 | * sixpack_receive_buf while another calls sixpack_close, which zeroes |
556 | * and sent on to the AX.25 layer for further processing. | 621 | * tty->disc_data and frees the memory that sixpack_receive_buf is using. The |
622 | * best way to fix this is to use a rwlock in the tty struct, but for now we | ||
623 | * use a single global rwlock for all ttys in ppp line discipline. | ||
557 | */ | 624 | */ |
558 | static void ax25_receive_buf(struct tty_struct *tty, const unsigned char *cp, char *fp, int count) | 625 | static rwlock_t disc_data_lock = RW_LOCK_UNLOCKED; |
626 | |||
627 | static struct mkiss *mkiss_get(struct tty_struct *tty) | ||
559 | { | 628 | { |
560 | struct ax_disp *ax = (struct ax_disp *) tty->disc_data; | 629 | struct mkiss *ax; |
561 | 630 | ||
562 | if (ax == NULL || ax->magic != AX25_MAGIC || !netif_running(ax->dev)) | 631 | read_lock(&disc_data_lock); |
563 | return; | 632 | ax = tty->disc_data; |
633 | if (ax) | ||
634 | atomic_inc(&ax->refcnt); | ||
635 | read_unlock(&disc_data_lock); | ||
564 | 636 | ||
565 | /* | 637 | return ax; |
566 | * Argh! mtu change time! - costs us the packet part received | 638 | } |
567 | * at the change | ||
568 | */ | ||
569 | if (ax->mtu != ax->dev->mtu + 73) | ||
570 | ax_changedmtu(ax); | ||
571 | |||
572 | /* Read the characters out of the buffer */ | ||
573 | while (count--) { | ||
574 | if (fp != NULL && *fp++) { | ||
575 | if (!test_and_set_bit(AXF_ERROR, &ax->flags)) | ||
576 | ax->rx_errors++; | ||
577 | cp++; | ||
578 | continue; | ||
579 | } | ||
580 | 639 | ||
581 | kiss_unesc(ax, *cp++); | 640 | static void mkiss_put(struct mkiss *ax) |
582 | } | 641 | { |
642 | if (atomic_dec_and_test(&ax->refcnt)) | ||
643 | up(&ax->dead_sem); | ||
583 | } | 644 | } |
584 | 645 | ||
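mkiss_get() and mkiss_put() give every asynchronous entry point the same shape: look the channel up under the read lock, bail out if the discipline has already been closed, and drop the reference when done so that mkiss_close(), which waits on dead_sem, can proceed. The calling pattern, sketched outside any particular handler:

    struct mkiss *ax = mkiss_get(tty);

    if (ax == NULL)
            return;                 /* discipline already detached from this tty */

    /* ... safe to use ax->dev, ax->rbuff and friends here ... */

    mkiss_put(ax);                  /* last user wakes mkiss_close() via dead_sem */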
585 | static int ax25_open(struct tty_struct *tty) | 646 | static int mkiss_open(struct tty_struct *tty) |
586 | { | 647 | { |
587 | struct ax_disp *ax = (struct ax_disp *) tty->disc_data; | 648 | struct net_device *dev; |
649 | struct mkiss *ax; | ||
588 | int err; | 650 | int err; |
589 | 651 | ||
590 | /* First make sure we're not already connected. */ | 652 | if (!capable(CAP_NET_ADMIN)) |
591 | if (ax && ax->magic == AX25_MAGIC) | 653 | return -EPERM; |
592 | return -EEXIST; | ||
593 | 654 | ||
594 | /* OK. Find a free AX25 channel to use. */ | 655 | dev = alloc_netdev(sizeof(struct mkiss), "ax%d", ax_setup); |
595 | if ((ax = ax_alloc()) == NULL) | 656 | if (!dev) { |
596 | return -ENFILE; | 657 | err = -ENOMEM; |
658 | goto out; | ||
659 | } | ||
660 | |||
661 | ax = netdev_priv(dev); | ||
662 | ax->dev = dev; | ||
663 | |||
664 | spin_lock_init(&ax->buflock); | ||
665 | atomic_set(&ax->refcnt, 1); | ||
666 | init_MUTEX_LOCKED(&ax->dead_sem); | ||
597 | 667 | ||
598 | ax->tty = tty; | 668 | ax->tty = tty; |
599 | tty->disc_data = ax; | 669 | tty->disc_data = ax; |
@@ -602,283 +672,212 @@ static int ax25_open(struct tty_struct *tty) | |||
602 | tty->driver->flush_buffer(tty); | 672 | tty->driver->flush_buffer(tty); |
603 | 673 | ||
604 | /* Restore default settings */ | 674 | /* Restore default settings */ |
605 | ax->dev->type = ARPHRD_AX25; | 675 | dev->type = ARPHRD_AX25; |
606 | 676 | ||
607 | /* Perform the low-level AX25 initialization. */ | 677 | /* Perform the low-level AX25 initialization. */ |
608 | if ((err = ax_open(ax->dev))) | 678 | if ((err = ax_open(ax->dev))) { |
609 | return err; | 679 | goto out_free_netdev; |
680 | } | ||
610 | 681 | ||
611 | /* Done. We have linked the TTY line to a channel. */ | 682 | if (register_netdev(dev)) |
612 | return ax->dev->base_addr; | 683 | goto out_free_buffers; |
613 | } | ||
614 | 684 | ||
615 | static void ax25_close(struct tty_struct *tty) | 685 | netif_start_queue(dev); |
616 | { | ||
617 | struct ax_disp *ax = (struct ax_disp *) tty->disc_data; | ||
618 | 686 | ||
619 | /* First make sure we're connected. */ | 687 | /* Done. We have linked the TTY line to a channel. */ |
620 | if (ax == NULL || ax->magic != AX25_MAGIC) | 688 | return 0; |
621 | return; | ||
622 | 689 | ||
623 | unregister_netdev(ax->dev); | 690 | out_free_buffers: |
691 | kfree(ax->rbuff); | ||
692 | kfree(ax->xbuff); | ||
624 | 693 | ||
625 | tty->disc_data = NULL; | 694 | out_free_netdev: |
626 | ax->tty = NULL; | 695 | free_netdev(dev); |
627 | 696 | ||
628 | ax_free(ax); | 697 | out: |
698 | return err; | ||
629 | } | 699 | } |
630 | 700 | ||
631 | 701 | static void mkiss_close(struct tty_struct *tty) | |
632 | static struct net_device_stats *ax_get_stats(struct net_device *dev) | ||
633 | { | 702 | { |
634 | static struct net_device_stats stats; | 703 | struct mkiss *ax; |
635 | struct ax_disp *ax = netdev_priv(dev); | ||
636 | |||
637 | memset(&stats, 0, sizeof(struct net_device_stats)); | ||
638 | |||
639 | stats.rx_packets = ax->rx_packets; | ||
640 | stats.tx_packets = ax->tx_packets; | ||
641 | stats.rx_bytes = ax->rx_bytes; | ||
642 | stats.tx_bytes = ax->tx_bytes; | ||
643 | stats.rx_dropped = ax->rx_dropped; | ||
644 | stats.tx_dropped = ax->tx_dropped; | ||
645 | stats.tx_errors = ax->tx_errors; | ||
646 | stats.rx_errors = ax->rx_errors; | ||
647 | stats.rx_over_errors = ax->rx_over_errors; | ||
648 | |||
649 | return &stats; | ||
650 | } | ||
651 | 704 | ||
705 | write_lock(&disc_data_lock); | ||
706 | ax = tty->disc_data; | ||
707 | tty->disc_data = NULL; | ||
708 | write_unlock(&disc_data_lock); | ||
652 | 709 | ||
653 | /************************************************************************ | 710 | if (ax == 0) |
654 | * STANDARD ENCAPSULATION * | 711 | return; |
655 | ************************************************************************/ | ||
656 | |||
657 | static int kiss_esc(unsigned char *s, unsigned char *d, int len) | ||
658 | { | ||
659 | unsigned char *ptr = d; | ||
660 | unsigned char c; | ||
661 | 712 | ||
662 | /* | 713 | /* |
663 | * Send an initial END character to flush out any | 714 | * We have now ensured that nobody can start using ap from now on, but |
664 | * data that may have accumulated in the receiver | 715 | * we have to wait for all existing users to finish. |
665 | * due to line noise. | ||
666 | */ | 716 | */ |
717 | if (!atomic_dec_and_test(&ax->refcnt)) | ||
718 | down(&ax->dead_sem); | ||
667 | 719 | ||
668 | *ptr++ = END; | 720 | unregister_netdev(ax->dev); |
669 | |||
670 | while (len-- > 0) { | ||
671 | switch (c = *s++) { | ||
672 | case END: | ||
673 | *ptr++ = ESC; | ||
674 | *ptr++ = ESC_END; | ||
675 | break; | ||
676 | case ESC: | ||
677 | *ptr++ = ESC; | ||
678 | *ptr++ = ESC_ESC; | ||
679 | break; | ||
680 | default: | ||
681 | *ptr++ = c; | ||
682 | break; | ||
683 | } | ||
684 | } | ||
685 | 721 | ||
686 | *ptr++ = END; | 722 | /* Free all AX25 frame buffers. */ |
723 | kfree(ax->rbuff); | ||
724 | kfree(ax->xbuff); | ||
687 | 725 | ||
688 | return ptr - d; | 726 | ax->tty = NULL; |
689 | } | 727 | } |
690 | 728 | ||
691 | /* | 729 | /* Perform I/O control on an active ax25 channel. */ |
692 | * MW: | 730 | static int mkiss_ioctl(struct tty_struct *tty, struct file *file, |
693 | * OK its ugly, but tell me a better solution without copying the | 731 | unsigned int cmd, unsigned long arg) |
694 | * packet to a temporary buffer :-) | ||
695 | */ | ||
696 | static int kiss_esc_crc(unsigned char *s, unsigned char *d, unsigned short crc, int len) | ||
697 | { | 732 | { |
698 | unsigned char *ptr = d; | 733 | struct mkiss *ax = mkiss_get(tty); |
699 | unsigned char c=0; | 734 | struct net_device *dev = ax->dev; |
700 | 735 | unsigned int tmp, err; | |
701 | *ptr++ = END; | ||
702 | while (len > 0) { | ||
703 | if (len > 2) | ||
704 | c = *s++; | ||
705 | else if (len > 1) | ||
706 | c = crc >> 8; | ||
707 | else if (len > 0) | ||
708 | c = crc & 0xff; | ||
709 | 736 | ||
710 | len--; | 737 | /* First make sure we're connected. */ |
738 | if (ax == NULL) | ||
739 | return -ENXIO; | ||
711 | 740 | ||
712 | switch (c) { | 741 | switch (cmd) { |
713 | case END: | 742 | case SIOCGIFNAME: |
714 | *ptr++ = ESC; | 743 | err = copy_to_user((void __user *) arg, ax->dev->name, |
715 | *ptr++ = ESC_END; | 744 | strlen(ax->dev->name) + 1) ? -EFAULT : 0; |
716 | break; | 745 | break; |
717 | case ESC: | 746 | |
718 | *ptr++ = ESC; | 747 | case SIOCGIFENCAP: |
719 | *ptr++ = ESC_ESC; | 748 | err = put_user(4, (int __user *) arg); |
720 | break; | 749 | break; |
721 | default: | 750 | |
722 | *ptr++ = c; | 751 | case SIOCSIFENCAP: |
723 | break; | 752 | if (get_user(tmp, (int __user *) arg)) { |
753 | err = -EFAULT; | ||
754 | break; | ||
724 | } | 755 | } |
725 | } | ||
726 | *ptr++ = END; | ||
727 | return ptr - d; | ||
728 | } | ||
729 | 756 | ||
730 | static void kiss_unesc(struct ax_disp *ax, unsigned char s) | 757 | ax->mode = tmp; |
731 | { | 758 | dev->addr_len = AX25_ADDR_LEN; |
732 | switch (s) { | 759 | dev->hard_header_len = AX25_KISS_HEADER_LEN + |
733 | case END: | 760 | AX25_MAX_HEADER_LEN + 3; |
734 | /* drop keeptest bit = VSV */ | 761 | dev->type = ARPHRD_AX25; |
735 | if (test_bit(AXF_KEEPTEST, &ax->flags)) | ||
736 | clear_bit(AXF_KEEPTEST, &ax->flags); | ||
737 | 762 | ||
738 | if (!test_and_clear_bit(AXF_ERROR, &ax->flags) && (ax->rcount > 2)) | 763 | err = 0; |
739 | ax_bump(ax); | 764 | break; |
740 | 765 | ||
741 | clear_bit(AXF_ESCAPE, &ax->flags); | 766 | case SIOCSIFHWADDR: { |
742 | ax->rcount = 0; | 767 | char addr[AX25_ADDR_LEN]; |
743 | return; | 768 | printk(KERN_INFO "In SIOCSIFHWADDR"); |
744 | 769 | ||
745 | case ESC: | 770 | if (copy_from_user(&addr, |
746 | set_bit(AXF_ESCAPE, &ax->flags); | 771 | (void __user *) arg, AX25_ADDR_LEN)) { |
747 | return; | 772 | err = -EFAULT; |
748 | case ESC_ESC: | ||
749 | if (test_and_clear_bit(AXF_ESCAPE, &ax->flags)) | ||
750 | s = ESC; | ||
751 | break; | 773 | break; |
752 | case ESC_END: | ||
753 | if (test_and_clear_bit(AXF_ESCAPE, &ax->flags)) | ||
754 | s = END; | ||
755 | break; | ||
756 | } | ||
757 | |||
758 | spin_lock_bh(&ax->buflock); | ||
759 | if (!test_bit(AXF_ERROR, &ax->flags)) { | ||
760 | if (ax->rcount < ax->buffsize) { | ||
761 | ax->rbuff[ax->rcount++] = s; | ||
762 | spin_unlock_bh(&ax->buflock); | ||
763 | return; | ||
764 | } | 774 | } |
765 | 775 | ||
766 | ax->rx_over_errors++; | 776 | spin_lock_irq(&dev->xmit_lock); |
767 | set_bit(AXF_ERROR, &ax->flags); | 777 | memcpy(dev->dev_addr, addr, AX25_ADDR_LEN); |
778 | spin_unlock_irq(&dev->xmit_lock); | ||
779 | |||
780 | err = 0; | ||
781 | break; | ||
782 | } | ||
783 | default: | ||
784 | err = -ENOIOCTLCMD; | ||
768 | } | 785 | } |
769 | spin_unlock_bh(&ax->buflock); | ||
770 | } | ||
771 | 786 | ||
787 | mkiss_put(ax); | ||
772 | 788 | ||
773 | static int ax_set_mac_address(struct net_device *dev, void __user *addr) | 789 | return err; |
774 | { | ||
775 | if (copy_from_user(dev->dev_addr, addr, AX25_ADDR_LEN)) | ||
776 | return -EFAULT; | ||
777 | return 0; | ||
778 | } | 790 | } |
779 | 791 | ||
780 | static int ax_set_dev_mac_address(struct net_device *dev, void *addr) | 792 | /* |
793 | * Handle the 'receiver data ready' interrupt. | ||
794 | * This function is called by the 'tty_io' module in the kernel when | ||
795 | * a block of data has been received, which can now be decapsulated | ||
796 | * and sent on to the AX.25 layer for further processing. | ||
797 | */ | ||
798 | static void mkiss_receive_buf(struct tty_struct *tty, const unsigned char *cp, | ||
799 | char *fp, int count) | ||
781 | { | 800 | { |
782 | struct sockaddr *sa = addr; | 801 | struct mkiss *ax = mkiss_get(tty); |
783 | |||
784 | memcpy(dev->dev_addr, sa->sa_data, AX25_ADDR_LEN); | ||
785 | 802 | ||
786 | return 0; | 803 | if (!ax) |
787 | } | 804 | return; |
788 | |||
789 | |||
790 | /* Perform I/O control on an active ax25 channel. */ | ||
791 | static int ax25_disp_ioctl(struct tty_struct *tty, void *file, int cmd, void __user *arg) | ||
792 | { | ||
793 | struct ax_disp *ax = (struct ax_disp *) tty->disc_data; | ||
794 | unsigned int tmp; | ||
795 | 805 | ||
796 | /* First make sure we're connected. */ | 806 | /* |
797 | if (ax == NULL || ax->magic != AX25_MAGIC) | 807 | * Argh! mtu change time! - costs us the packet part received |
798 | return -EINVAL; | 808 | * at the change |
809 | */ | ||
810 | if (ax->mtu != ax->dev->mtu + 73) | ||
811 | ax_changedmtu(ax); | ||
799 | 812 | ||
800 | switch (cmd) { | 813 | /* Read the characters out of the buffer */ |
801 | case SIOCGIFNAME: | 814 | while (count--) { |
802 | if (copy_to_user(arg, ax->dev->name, strlen(ax->dev->name) + 1)) | 815 | if (fp != NULL && *fp++) { |
803 | return -EFAULT; | 816 | if (!test_and_set_bit(AXF_ERROR, &ax->flags)) |
804 | return 0; | 817 | ax->stats.rx_errors++; |
805 | 818 | cp++; | |
806 | case SIOCGIFENCAP: | 819 | continue; |
807 | return put_user(4, (int __user *)arg); | 820 | } |
808 | |||
809 | case SIOCSIFENCAP: | ||
810 | if (get_user(tmp, (int __user *)arg)) | ||
811 | return -EFAULT; | ||
812 | ax->mode = tmp; | ||
813 | ax->dev->addr_len = AX25_ADDR_LEN; /* sizeof an AX.25 addr */ | ||
814 | ax->dev->hard_header_len = AX25_KISS_HEADER_LEN + AX25_MAX_HEADER_LEN + 3; | ||
815 | ax->dev->type = ARPHRD_AX25; | ||
816 | return 0; | ||
817 | |||
818 | case SIOCSIFHWADDR: | ||
819 | return ax_set_mac_address(ax->dev, arg); | ||
820 | 821 | ||
821 | default: | 822 | kiss_unesc(ax, *cp++); |
822 | return -ENOIOCTLCMD; | ||
823 | } | 823 | } |
824 | |||
825 | mkiss_put(ax); | ||
826 | if (test_and_clear_bit(TTY_THROTTLED, &tty->flags) | ||
827 | && tty->driver->unthrottle) | ||
828 | tty->driver->unthrottle(tty); | ||
824 | } | 829 | } |
825 | 830 | ||
826 | static int ax_open_dev(struct net_device *dev) | 831 | static int mkiss_receive_room(struct tty_struct *tty) |
827 | { | 832 | { |
828 | struct ax_disp *ax = netdev_priv(dev); | 833 | return 65536; /* We can handle an infinite amount of data. :-) */ |
829 | |||
830 | if (ax->tty == NULL) | ||
831 | return -ENODEV; | ||
832 | |||
833 | return 0; | ||
834 | } | 834 | } |
835 | 835 | ||
836 | 836 | /* | |
837 | /* Initialize the driver. Called by network startup. */ | 837 | * Called by the driver when there's room for more data. If we have |
838 | static int ax25_init(struct net_device *dev) | 838 | * more packets to send, we send them here. |
839 | */ | ||
840 | static void mkiss_write_wakeup(struct tty_struct *tty) | ||
839 | { | 841 | { |
840 | struct ax_disp *ax = netdev_priv(dev); | 842 | struct mkiss *ax = mkiss_get(tty); |
841 | 843 | int actual; | |
842 | static char ax25_bcast[AX25_ADDR_LEN] = | ||
843 | {'Q'<<1,'S'<<1,'T'<<1,' '<<1,' '<<1,' '<<1,'0'<<1}; | ||
844 | static char ax25_test[AX25_ADDR_LEN] = | ||
845 | {'L'<<1,'I'<<1,'N'<<1,'U'<<1,'X'<<1,' '<<1,'1'<<1}; | ||
846 | |||
847 | if (ax == NULL) /* Allocation failed ?? */ | ||
848 | return -ENODEV; | ||
849 | 844 | ||
850 | /* Set up the "AX25 Control Block". (And clear statistics) */ | 845 | if (!ax) |
851 | memset(ax, 0, sizeof (struct ax_disp)); | 846 | return; |
852 | ax->magic = AX25_MAGIC; | ||
853 | ax->dev = dev; | ||
854 | 847 | ||
855 | /* Finish setting up the DEVICE info. */ | 848 | if (ax->xleft <= 0) { |
856 | dev->mtu = AX_MTU; | 849 | /* Now serial buffer is almost free & we can start |
857 | dev->hard_start_xmit = ax_xmit; | 850 | * transmission of another packet |
858 | dev->open = ax_open_dev; | 851 | */ |
859 | dev->stop = ax_close; | 852 | clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); |
860 | dev->get_stats = ax_get_stats; | ||
861 | dev->set_mac_address = ax_set_dev_mac_address; | ||
862 | dev->hard_header_len = 0; | ||
863 | dev->addr_len = 0; | ||
864 | dev->type = ARPHRD_AX25; | ||
865 | dev->tx_queue_len = 10; | ||
866 | dev->hard_header = ax_header; | ||
867 | dev->rebuild_header = ax_rebuild_header; | ||
868 | 853 | ||
869 | memcpy(dev->broadcast, ax25_bcast, AX25_ADDR_LEN); | 854 | netif_wake_queue(ax->dev); |
870 | memcpy(dev->dev_addr, ax25_test, AX25_ADDR_LEN); | 855 | goto out; |
856 | } | ||
871 | 857 | ||
872 | /* New-style flags. */ | 858 | actual = tty->driver->write(tty, ax->xhead, ax->xleft); |
873 | dev->flags = IFF_BROADCAST | IFF_MULTICAST; | 859 | ax->xleft -= actual; |
860 | ax->xhead += actual; | ||
874 | 861 | ||
875 | return 0; | 862 | out: |
863 | mkiss_put(ax); | ||
876 | } | 864 | } |
877 | 865 | ||
866 | static struct tty_ldisc ax_ldisc = { | ||
867 | .magic = TTY_LDISC_MAGIC, | ||
868 | .name = "mkiss", | ||
869 | .open = mkiss_open, | ||
870 | .close = mkiss_close, | ||
871 | .ioctl = mkiss_ioctl, | ||
872 | .receive_buf = mkiss_receive_buf, | ||
873 | .receive_room = mkiss_receive_room, | ||
874 | .write_wakeup = mkiss_write_wakeup | ||
875 | }; | ||
878 | 876 | ||
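Once the ax_ldisc table above is registered, user space binds the discipline to a serial port the way kissattach does: open the tty and switch its line discipline with TIOCSETD. A rough user-space sketch (not part of the patch; the device path is made up and N_AX25 is assumed to come from <linux/tty.h>):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/tty.h>          /* assumed to provide N_AX25 */

    int main(void)
    {
            int fd = open("/dev/ttyS0", O_RDWR);    /* hypothetical TNC port */
            int disc = N_AX25;

            if (fd < 0 || ioctl(fd, TIOCSETD, &disc) < 0)
                    return 1;
            /* mkiss_open() has run; an ax0 interface now exists */
            pause();                /* keep the tty, and with it the interface, open */
            return 0;
    }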
879 | /* ******************************************************************** */ | 877 | static char banner[] __initdata = KERN_INFO \ |
880 | /* * Init MKISS driver * */ | 878 | "mkiss: AX.25 Multikiss, Hans Alblas PE1AYX\n"; |
881 | /* ******************************************************************** */ | 879 | static char msg_regfail[] __initdata = KERN_ERR \ |
880 | "mkiss: can't register line discipline (err = %d)\n"; | ||
882 | 881 | ||
883 | static int __init mkiss_init_driver(void) | 882 | static int __init mkiss_init_driver(void) |
884 | { | 883 | { |
@@ -886,64 +885,27 @@ static int __init mkiss_init_driver(void) | |||
886 | 885 | ||
887 | printk(banner); | 886 | printk(banner); |
888 | 887 | ||
889 | if (ax25_maxdev < 4) | 888 | if ((status = tty_register_ldisc(N_AX25, &ax_ldisc)) != 0) |
890 | ax25_maxdev = 4; /* Sanity */ | 889 | printk(msg_regfail, status); |
891 | 890 | ||
892 | if ((ax25_ctrls = kmalloc(sizeof(void *) * ax25_maxdev, GFP_KERNEL)) == NULL) { | ||
893 | printk(KERN_ERR "mkiss: Can't allocate ax25_ctrls[] array!\n"); | ||
894 | return -ENOMEM; | ||
895 | } | ||
896 | |||
897 | /* Clear the pointer array, we allocate devices when we need them */ | ||
898 | memset(ax25_ctrls, 0, sizeof(void*) * ax25_maxdev); /* Pointers */ | ||
899 | |||
900 | /* Fill in our line protocol discipline, and register it */ | ||
901 | ax_ldisc.magic = TTY_LDISC_MAGIC; | ||
902 | ax_ldisc.name = "mkiss"; | ||
903 | ax_ldisc.open = ax25_open; | ||
904 | ax_ldisc.close = ax25_close; | ||
905 | ax_ldisc.ioctl = (int (*)(struct tty_struct *, struct file *, | ||
906 | unsigned int, unsigned long))ax25_disp_ioctl; | ||
907 | ax_ldisc.receive_buf = ax25_receive_buf; | ||
908 | ax_ldisc.receive_room = ax25_receive_room; | ||
909 | ax_ldisc.write_wakeup = ax25_write_wakeup; | ||
910 | |||
911 | if ((status = tty_register_ldisc(N_AX25, &ax_ldisc)) != 0) { | ||
912 | printk(KERN_ERR "mkiss: can't register line discipline (err = %d)\n", status); | ||
913 | kfree(ax25_ctrls); | ||
914 | } | ||
915 | return status; | 891 | return status; |
916 | } | 892 | } |
917 | 893 | ||
894 | static const char msg_unregfail[] __exitdata = KERN_ERR \ | ||
895 | "mkiss: can't unregister line discipline (err = %d)\n"; | ||
896 | |||
918 | static void __exit mkiss_exit_driver(void) | 897 | static void __exit mkiss_exit_driver(void) |
919 | { | 898 | { |
920 | int i; | 899 | int ret; |
921 | |||
922 | for (i = 0; i < ax25_maxdev; i++) { | ||
923 | if (ax25_ctrls[i]) { | ||
924 | /* | ||
925 | * VSV = if dev->start==0, then device | ||
926 | * unregistered while close proc. | ||
927 | */ | ||
928 | if (netif_running(&ax25_ctrls[i]->dev)) | ||
929 | unregister_netdev(&ax25_ctrls[i]->dev); | ||
930 | kfree(ax25_ctrls[i]); | ||
931 | } | ||
932 | } | ||
933 | 900 | ||
934 | kfree(ax25_ctrls); | 901 | if ((ret = tty_unregister_ldisc(N_AX25))) |
935 | ax25_ctrls = NULL; | 902 | printk(msg_unregfail, ret); |
936 | |||
937 | if ((i = tty_unregister_ldisc(N_AX25))) | ||
938 | printk(KERN_ERR "mkiss: can't unregister line discipline (err = %d)\n", i); | ||
939 | } | 903 | } |
940 | 904 | ||
941 | MODULE_AUTHOR("Hans Albas PE1AYX <hans@esrac.ele.tue.nl>"); | 905 | MODULE_AUTHOR("Ralf Baechle DL5RB <ralf@linux-mips.org>"); |
942 | MODULE_DESCRIPTION("KISS driver for AX.25 over TTYs"); | 906 | MODULE_DESCRIPTION("KISS driver for AX.25 over TTYs"); |
943 | MODULE_PARM(ax25_maxdev, "i"); | ||
944 | MODULE_PARM_DESC(ax25_maxdev, "number of MKISS devices"); | ||
945 | MODULE_LICENSE("GPL"); | 907 | MODULE_LICENSE("GPL"); |
946 | MODULE_ALIAS_LDISC(N_AX25); | 908 | MODULE_ALIAS_LDISC(N_AX25); |
909 | |||
947 | module_init(mkiss_init_driver); | 910 | module_init(mkiss_init_driver); |
948 | module_exit(mkiss_exit_driver); | 911 | module_exit(mkiss_exit_driver); |
949 | |||
diff --git a/drivers/net/ixgb/ixgb.h b/drivers/net/ixgb/ixgb.h index f8d3385c7842..c83271b38621 100644 --- a/drivers/net/ixgb/ixgb.h +++ b/drivers/net/ixgb/ixgb.h | |||
@@ -119,7 +119,7 @@ struct ixgb_adapter; | |||
119 | * so a DMA handle can be stored along with the buffer */ | 119 | * so a DMA handle can be stored along with the buffer */ |
120 | struct ixgb_buffer { | 120 | struct ixgb_buffer { |
121 | struct sk_buff *skb; | 121 | struct sk_buff *skb; |
122 | uint64_t dma; | 122 | dma_addr_t dma; |
123 | unsigned long time_stamp; | 123 | unsigned long time_stamp; |
124 | uint16_t length; | 124 | uint16_t length; |
125 | uint16_t next_to_watch; | 125 | uint16_t next_to_watch; |
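The ixgb_buffer change above replaces a bare uint64_t with dma_addr_t, the type the DMA mapping API actually hands back, so the field matches the platform's bus-address width instead of assuming 64 bits. Roughly how the field is filled elsewhere in the driver (a sketch in the driver's style, not its literal code):

    buffer_info->dma = pci_map_single(adapter->pdev, skb->data,
                                      skb->len, PCI_DMA_TODEVICE);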
diff --git a/drivers/net/ixgb/ixgb_ee.c b/drivers/net/ixgb/ixgb_ee.c index 3aae110c5560..661a46b95a61 100644 --- a/drivers/net/ixgb/ixgb_ee.c +++ b/drivers/net/ixgb/ixgb_ee.c | |||
@@ -565,24 +565,6 @@ ixgb_get_ee_mac_addr(struct ixgb_hw *hw, | |||
565 | } | 565 | } |
566 | } | 566 | } |
567 | 567 | ||
568 | /****************************************************************************** | ||
569 | * return the compatibility flags from EEPROM | ||
570 | * | ||
571 | * hw - Struct containing variables accessed by shared code | ||
572 | * | ||
573 | * Returns: | ||
574 | * compatibility flags if EEPROM contents are valid, 0 otherwise | ||
575 | ******************************************************************************/ | ||
576 | uint16_t | ||
577 | ixgb_get_ee_compatibility(struct ixgb_hw *hw) | ||
578 | { | ||
579 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
580 | |||
581 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
582 | return (le16_to_cpu(ee_map->compatibility)); | ||
583 | |||
584 | return(0); | ||
585 | } | ||
586 | 568 | ||
587 | /****************************************************************************** | 569 | /****************************************************************************** |
588 | * return the Printed Board Assembly number from EEPROM | 570 | * return the Printed Board Assembly number from EEPROM |
@@ -602,81 +584,6 @@ ixgb_get_ee_pba_number(struct ixgb_hw *hw) | |||
602 | return(0); | 584 | return(0); |
603 | } | 585 | } |
604 | 586 | ||
605 | /****************************************************************************** | ||
606 | * return the Initialization Control Word 1 from EEPROM | ||
607 | * | ||
608 | * hw - Struct containing variables accessed by shared code | ||
609 | * | ||
610 | * Returns: | ||
611 | * Initialization Control Word 1 if EEPROM contents are valid, 0 otherwise | ||
612 | ******************************************************************************/ | ||
613 | uint16_t | ||
614 | ixgb_get_ee_init_ctrl_reg_1(struct ixgb_hw *hw) | ||
615 | { | ||
616 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
617 | |||
618 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
619 | return (le16_to_cpu(ee_map->init_ctrl_reg_1)); | ||
620 | |||
621 | return(0); | ||
622 | } | ||
623 | |||
624 | /****************************************************************************** | ||
625 | * return the Initialization Control Word 2 from EEPROM | ||
626 | * | ||
627 | * hw - Struct containing variables accessed by shared code | ||
628 | * | ||
629 | * Returns: | ||
630 | * Initialization Control Word 2 if EEPROM contents are valid, 0 otherwise | ||
631 | ******************************************************************************/ | ||
632 | uint16_t | ||
633 | ixgb_get_ee_init_ctrl_reg_2(struct ixgb_hw *hw) | ||
634 | { | ||
635 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
636 | |||
637 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
638 | return (le16_to_cpu(ee_map->init_ctrl_reg_2)); | ||
639 | |||
640 | return(0); | ||
641 | } | ||
642 | |||
643 | /****************************************************************************** | ||
644 | * return the Subsystem Id from EEPROM | ||
645 | * | ||
646 | * hw - Struct containing variables accessed by shared code | ||
647 | * | ||
648 | * Returns: | ||
649 | * Subsystem Id if EEPROM contents are valid, 0 otherwise | ||
650 | ******************************************************************************/ | ||
651 | uint16_t | ||
652 | ixgb_get_ee_subsystem_id(struct ixgb_hw *hw) | ||
653 | { | ||
654 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
655 | |||
656 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
657 | return (le16_to_cpu(ee_map->subsystem_id)); | ||
658 | |||
659 | return(0); | ||
660 | } | ||
661 | |||
662 | /****************************************************************************** | ||
663 | * return the Sub Vendor Id from EEPROM | ||
664 | * | ||
665 | * hw - Struct containing variables accessed by shared code | ||
666 | * | ||
667 | * Returns: | ||
668 | * Sub Vendor Id if EEPROM contents are valid, 0 otherwise | ||
669 | ******************************************************************************/ | ||
670 | uint16_t | ||
671 | ixgb_get_ee_subvendor_id(struct ixgb_hw *hw) | ||
672 | { | ||
673 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
674 | |||
675 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
676 | return (le16_to_cpu(ee_map->subvendor_id)); | ||
677 | |||
678 | return(0); | ||
679 | } | ||
680 | 587 | ||
681 | /****************************************************************************** | 588 | /****************************************************************************** |
682 | * return the Device Id from EEPROM | 589 | * return the Device Id from EEPROM |
@@ -694,81 +601,6 @@ ixgb_get_ee_device_id(struct ixgb_hw *hw) | |||
694 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | 601 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) |
695 | return (le16_to_cpu(ee_map->device_id)); | 602 | return (le16_to_cpu(ee_map->device_id)); |
696 | 603 | ||
697 | return(0); | 604 | return (0); |
698 | } | ||
699 | |||
700 | /****************************************************************************** | ||
701 | * return the Vendor Id from EEPROM | ||
702 | * | ||
703 | * hw - Struct containing variables accessed by shared code | ||
704 | * | ||
705 | * Returns: | ||
706 | * Device Id if EEPROM contents are valid, 0 otherwise | ||
707 | ******************************************************************************/ | ||
708 | uint16_t | ||
709 | ixgb_get_ee_vendor_id(struct ixgb_hw *hw) | ||
710 | { | ||
711 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
712 | |||
713 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
714 | return (le16_to_cpu(ee_map->vendor_id)); | ||
715 | |||
716 | return(0); | ||
717 | } | ||
718 | |||
719 | /****************************************************************************** | ||
720 | * return the Software Defined Pins Register from EEPROM | ||
721 | * | ||
722 | * hw - Struct containing variables accessed by shared code | ||
723 | * | ||
724 | * Returns: | ||
725 | * SDP Register if EEPROM contents are valid, 0 otherwise | ||
726 | ******************************************************************************/ | ||
727 | uint16_t | ||
728 | ixgb_get_ee_swdpins_reg(struct ixgb_hw *hw) | ||
729 | { | ||
730 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
731 | |||
732 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
733 | return (le16_to_cpu(ee_map->swdpins_reg)); | ||
734 | |||
735 | return(0); | ||
736 | } | 605 | } |
737 | 606 | ||
738 | /****************************************************************************** | ||
739 | * return the D3 Power Management Bits from EEPROM | ||
740 | * | ||
741 | * hw - Struct containing variables accessed by shared code | ||
742 | * | ||
743 | * Returns: | ||
744 | * D3 Power Management Bits if EEPROM contents are valid, 0 otherwise | ||
745 | ******************************************************************************/ | ||
746 | uint8_t | ||
747 | ixgb_get_ee_d3_power(struct ixgb_hw *hw) | ||
748 | { | ||
749 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
750 | |||
751 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
752 | return (le16_to_cpu(ee_map->d3_power)); | ||
753 | |||
754 | return(0); | ||
755 | } | ||
756 | |||
757 | /****************************************************************************** | ||
758 | * return the D0 Power Management Bits from EEPROM | ||
759 | * | ||
760 | * hw - Struct containing variables accessed by shared code | ||
761 | * | ||
762 | * Returns: | ||
763 | * D0 Power Management Bits if EEPROM contents are valid, 0 otherwise | ||
764 | ******************************************************************************/ | ||
765 | uint8_t | ||
766 | ixgb_get_ee_d0_power(struct ixgb_hw *hw) | ||
767 | { | ||
768 | struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom; | ||
769 | |||
770 | if(ixgb_check_and_get_eeprom_data(hw) == TRUE) | ||
771 | return (le16_to_cpu(ee_map->d0_power)); | ||
772 | |||
773 | return(0); | ||
774 | } | ||
diff --git a/drivers/net/ixgb/ixgb_ethtool.c b/drivers/net/ixgb/ixgb_ethtool.c
index 3fa113854eeb..9d026ed77ddd 100644
--- a/drivers/net/ixgb/ixgb_ethtool.c
+++ b/drivers/net/ixgb/ixgb_ethtool.c
@@ -98,10 +98,10 @@ static struct ixgb_stats ixgb_gstrings_stats[] = { | |||
98 | static int | 98 | static int |
99 | ixgb_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) | 99 | ixgb_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) |
100 | { | 100 | { |
101 | struct ixgb_adapter *adapter = netdev->priv; | 101 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
102 | 102 | ||
103 | ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE); | 103 | ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE); |
104 | ecmd->advertising = (SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE); | 104 | ecmd->advertising = (ADVERTISED_10000baseT_Full | ADVERTISED_FIBRE); |
105 | ecmd->port = PORT_FIBRE; | 105 | ecmd->port = PORT_FIBRE; |
106 | ecmd->transceiver = XCVR_EXTERNAL; | 106 | ecmd->transceiver = XCVR_EXTERNAL; |
107 | 107 | ||
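
Throughout this commit, direct dereferences of netdev->priv give way to the netdev_priv() accessor, which returns the driver-private area that alloc_etherdev() places immediately after struct net_device in a single allocation. The following is a plain-C model of that layout, not the kernel implementation (the real helper also rounds the private area up to an alignment boundary); the fake_* names are invented for illustration.

  #include <stdio.h>
  #include <stdlib.h>

  struct fake_net_device {
      char name[16];
      /* driver-private bytes follow immediately after this header */
  };

  static void *fake_netdev_priv(struct fake_net_device *dev)
  {
      return (char *)dev + sizeof(struct fake_net_device);
  }

  struct fake_adapter {              /* stands in for struct ixgb_adapter */
      int link_speed;
      int link_duplex;
  };

  static struct fake_net_device *fake_alloc_etherdev(size_t priv_size)
  {
      /* one allocation holds both the device header and the private area */
      return calloc(1, sizeof(struct fake_net_device) + priv_size);
  }

  int main(void)
  {
      struct fake_net_device *dev = fake_alloc_etherdev(sizeof(struct fake_adapter));
      struct fake_adapter *adapter = fake_netdev_priv(dev);

      adapter->link_speed = 10000;
      printf("private area at %p, link speed %d\n", (void *)adapter, adapter->link_speed);
      free(dev);
      return 0;
  }
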
@@ -120,7 +120,7 @@ ixgb_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) | |||
120 | static int | 120 | static int |
121 | ixgb_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) | 121 | ixgb_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) |
122 | { | 122 | { |
123 | struct ixgb_adapter *adapter = netdev->priv; | 123 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
124 | 124 | ||
125 | if(ecmd->autoneg == AUTONEG_ENABLE || | 125 | if(ecmd->autoneg == AUTONEG_ENABLE || |
126 | ecmd->speed + ecmd->duplex != SPEED_10000 + DUPLEX_FULL) | 126 | ecmd->speed + ecmd->duplex != SPEED_10000 + DUPLEX_FULL) |
@@ -130,6 +130,12 @@ ixgb_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) | |||
130 | ixgb_down(adapter, TRUE); | 130 | ixgb_down(adapter, TRUE); |
131 | ixgb_reset(adapter); | 131 | ixgb_reset(adapter); |
132 | ixgb_up(adapter); | 132 | ixgb_up(adapter); |
133 | /* be optimistic about our link, since we were up before */ | ||
134 | adapter->link_speed = 10000; | ||
135 | adapter->link_duplex = FULL_DUPLEX; | ||
136 | netif_carrier_on(netdev); | ||
137 | netif_wake_queue(netdev); | ||
138 | |||
133 | } else | 139 | } else |
134 | ixgb_reset(adapter); | 140 | ixgb_reset(adapter); |
135 | 141 | ||
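
The five lines added here after ixgb_up() reappear in ixgb_set_pauseparam(), ixgb_set_rx_csum() and ixgb_set_ringparam() below: because the interface was already running before the down/up cycle, the driver assumes 10 Gb/s full duplex again, turns the carrier indication back on and wakes the transmit queue rather than leaving the link down until the watchdog next runs. A hypothetical helper collecting the repeated block could look like the sketch below; the helper name is invented, the fields and calls are the ones used in the hunk, so this is a sketch against the driver's own types rather than standalone code.

  static void ixgb_assume_link_up(struct ixgb_adapter *adapter)
  {
      struct net_device *netdev = adapter->netdev;

      /* be optimistic about our link, since we were up before */
      adapter->link_speed = 10000;
      adapter->link_duplex = FULL_DUPLEX;
      netif_carrier_on(netdev);
      netif_wake_queue(netdev);
  }
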
@@ -140,7 +146,7 @@ static void | |||
140 | ixgb_get_pauseparam(struct net_device *netdev, | 146 | ixgb_get_pauseparam(struct net_device *netdev, |
141 | struct ethtool_pauseparam *pause) | 147 | struct ethtool_pauseparam *pause) |
142 | { | 148 | { |
143 | struct ixgb_adapter *adapter = netdev->priv; | 149 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
144 | struct ixgb_hw *hw = &adapter->hw; | 150 | struct ixgb_hw *hw = &adapter->hw; |
145 | 151 | ||
146 | pause->autoneg = AUTONEG_DISABLE; | 152 | pause->autoneg = AUTONEG_DISABLE; |
@@ -159,7 +165,7 @@ static int | |||
159 | ixgb_set_pauseparam(struct net_device *netdev, | 165 | ixgb_set_pauseparam(struct net_device *netdev, |
160 | struct ethtool_pauseparam *pause) | 166 | struct ethtool_pauseparam *pause) |
161 | { | 167 | { |
162 | struct ixgb_adapter *adapter = netdev->priv; | 168 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
163 | struct ixgb_hw *hw = &adapter->hw; | 169 | struct ixgb_hw *hw = &adapter->hw; |
164 | 170 | ||
165 | if(pause->autoneg == AUTONEG_ENABLE) | 171 | if(pause->autoneg == AUTONEG_ENABLE) |
@@ -177,6 +183,11 @@ ixgb_set_pauseparam(struct net_device *netdev, | |||
177 | if(netif_running(adapter->netdev)) { | 183 | if(netif_running(adapter->netdev)) { |
178 | ixgb_down(adapter, TRUE); | 184 | ixgb_down(adapter, TRUE); |
179 | ixgb_up(adapter); | 185 | ixgb_up(adapter); |
186 | /* be optimistic about our link, since we were up before */ | ||
187 | adapter->link_speed = 10000; | ||
188 | adapter->link_duplex = FULL_DUPLEX; | ||
189 | netif_carrier_on(netdev); | ||
190 | netif_wake_queue(netdev); | ||
180 | } else | 191 | } else |
181 | ixgb_reset(adapter); | 192 | ixgb_reset(adapter); |
182 | 193 | ||
@@ -186,19 +197,26 @@ ixgb_set_pauseparam(struct net_device *netdev, | |||
186 | static uint32_t | 197 | static uint32_t |
187 | ixgb_get_rx_csum(struct net_device *netdev) | 198 | ixgb_get_rx_csum(struct net_device *netdev) |
188 | { | 199 | { |
189 | struct ixgb_adapter *adapter = netdev->priv; | 200 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
201 | |||
190 | return adapter->rx_csum; | 202 | return adapter->rx_csum; |
191 | } | 203 | } |
192 | 204 | ||
193 | static int | 205 | static int |
194 | ixgb_set_rx_csum(struct net_device *netdev, uint32_t data) | 206 | ixgb_set_rx_csum(struct net_device *netdev, uint32_t data) |
195 | { | 207 | { |
196 | struct ixgb_adapter *adapter = netdev->priv; | 208 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
209 | |||
197 | adapter->rx_csum = data; | 210 | adapter->rx_csum = data; |
198 | 211 | ||
199 | if(netif_running(netdev)) { | 212 | if(netif_running(netdev)) { |
200 | ixgb_down(adapter,TRUE); | 213 | ixgb_down(adapter,TRUE); |
201 | ixgb_up(adapter); | 214 | ixgb_up(adapter); |
215 | /* be optimistic about our link, since we were up before */ | ||
216 | adapter->link_speed = 10000; | ||
217 | adapter->link_duplex = FULL_DUPLEX; | ||
218 | netif_carrier_on(netdev); | ||
219 | netif_wake_queue(netdev); | ||
202 | } else | 220 | } else |
203 | ixgb_reset(adapter); | 221 | ixgb_reset(adapter); |
204 | return 0; | 222 | return 0; |
@@ -246,14 +264,15 @@ static void | |||
246 | ixgb_get_regs(struct net_device *netdev, | 264 | ixgb_get_regs(struct net_device *netdev, |
247 | struct ethtool_regs *regs, void *p) | 265 | struct ethtool_regs *regs, void *p) |
248 | { | 266 | { |
249 | struct ixgb_adapter *adapter = netdev->priv; | 267 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
250 | struct ixgb_hw *hw = &adapter->hw; | 268 | struct ixgb_hw *hw = &adapter->hw; |
251 | uint32_t *reg = p; | 269 | uint32_t *reg = p; |
252 | uint32_t *reg_start = reg; | 270 | uint32_t *reg_start = reg; |
253 | uint8_t i; | 271 | uint8_t i; |
254 | 272 | ||
255 | /* the 1 (one) below indicates an attempt at versioning, if the | 273 | /* the 1 (one) below indicates an attempt at versioning, if the |
256 | * interface in ethtool or the driver this 1 should be incremented */ | 274 | * interface in ethtool or the driver changes, this 1 should be |
275 | * incremented */ | ||
257 | regs->version = (1<<24) | hw->revision_id << 16 | hw->device_id; | 276 | regs->version = (1<<24) | hw->revision_id << 16 | hw->device_id; |
258 | 277 | ||
259 | /* General Registers */ | 278 | /* General Registers */ |
@@ -283,7 +302,8 @@ ixgb_get_regs(struct net_device *netdev, | |||
283 | *reg++ = IXGB_READ_REG(hw, RAIDC); /* 19 */ | 302 | *reg++ = IXGB_READ_REG(hw, RAIDC); /* 19 */ |
284 | *reg++ = IXGB_READ_REG(hw, RXCSUM); /* 20 */ | 303 | *reg++ = IXGB_READ_REG(hw, RXCSUM); /* 20 */ |
285 | 304 | ||
286 | for (i = 0; i < IXGB_RAR_ENTRIES; i++) { | 305 | /* there are 16 RAR entries in hardware, we only use 3 */ |
306 | for(i = 0; i < 16; i++) { | ||
287 | *reg++ = IXGB_READ_REG_ARRAY(hw, RAL, (i << 1)); /*21,...,51 */ | 307 | *reg++ = IXGB_READ_REG_ARRAY(hw, RAL, (i << 1)); /*21,...,51 */ |
288 | *reg++ = IXGB_READ_REG_ARRAY(hw, RAH, (i << 1)); /*22,...,52 */ | 308 | *reg++ = IXGB_READ_REG_ARRAY(hw, RAH, (i << 1)); /*22,...,52 */ |
289 | } | 309 | } |
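
As the updated comment says, the value written to regs->version is a packed triple: a format number for the register dump in the top byte (currently 1), the PCI revision ID in the next byte, and the 16-bit device ID in the low half. A quick standalone check of that packing, using arbitrary example IDs:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint8_t  dump_format = 1;      /* bumped when the dump layout changes */
      uint8_t  revision_id = 0x03;   /* example PCI revision ID */
      uint16_t device_id   = 0x1048; /* example device ID */

      uint32_t version = ((uint32_t)dump_format << 24)
                       | ((uint32_t)revision_id << 16)
                       | device_id;

      printf("version=0x%08x  format=%u  rev=0x%02x  dev=0x%04x\n",
             (unsigned)version,
             (unsigned)(version >> 24),
             (unsigned)((version >> 16) & 0xff),
             (unsigned)(version & 0xffff));
      return 0;
  }
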
@@ -391,7 +411,7 @@ static int | |||
391 | ixgb_get_eeprom(struct net_device *netdev, | 411 | ixgb_get_eeprom(struct net_device *netdev, |
392 | struct ethtool_eeprom *eeprom, uint8_t *bytes) | 412 | struct ethtool_eeprom *eeprom, uint8_t *bytes) |
393 | { | 413 | { |
394 | struct ixgb_adapter *adapter = netdev->priv; | 414 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
395 | struct ixgb_hw *hw = &adapter->hw; | 415 | struct ixgb_hw *hw = &adapter->hw; |
396 | uint16_t *eeprom_buff; | 416 | uint16_t *eeprom_buff; |
397 | int i, max_len, first_word, last_word; | 417 | int i, max_len, first_word, last_word; |
@@ -439,7 +459,7 @@ static int | |||
439 | ixgb_set_eeprom(struct net_device *netdev, | 459 | ixgb_set_eeprom(struct net_device *netdev, |
440 | struct ethtool_eeprom *eeprom, uint8_t *bytes) | 460 | struct ethtool_eeprom *eeprom, uint8_t *bytes) |
441 | { | 461 | { |
442 | struct ixgb_adapter *adapter = netdev->priv; | 462 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
443 | struct ixgb_hw *hw = &adapter->hw; | 463 | struct ixgb_hw *hw = &adapter->hw; |
444 | uint16_t *eeprom_buff; | 464 | uint16_t *eeprom_buff; |
445 | void *ptr; | 465 | void *ptr; |
@@ -497,7 +517,7 @@ static void | |||
497 | ixgb_get_drvinfo(struct net_device *netdev, | 517 | ixgb_get_drvinfo(struct net_device *netdev, |
498 | struct ethtool_drvinfo *drvinfo) | 518 | struct ethtool_drvinfo *drvinfo) |
499 | { | 519 | { |
500 | struct ixgb_adapter *adapter = netdev->priv; | 520 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
501 | 521 | ||
502 | strncpy(drvinfo->driver, ixgb_driver_name, 32); | 522 | strncpy(drvinfo->driver, ixgb_driver_name, 32); |
503 | strncpy(drvinfo->version, ixgb_driver_version, 32); | 523 | strncpy(drvinfo->version, ixgb_driver_version, 32); |
@@ -512,7 +532,7 @@ static void | |||
512 | ixgb_get_ringparam(struct net_device *netdev, | 532 | ixgb_get_ringparam(struct net_device *netdev, |
513 | struct ethtool_ringparam *ring) | 533 | struct ethtool_ringparam *ring) |
514 | { | 534 | { |
515 | struct ixgb_adapter *adapter = netdev->priv; | 535 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
516 | struct ixgb_desc_ring *txdr = &adapter->tx_ring; | 536 | struct ixgb_desc_ring *txdr = &adapter->tx_ring; |
517 | struct ixgb_desc_ring *rxdr = &adapter->rx_ring; | 537 | struct ixgb_desc_ring *rxdr = &adapter->rx_ring; |
518 | 538 | ||
@@ -530,7 +550,7 @@ static int | |||
530 | ixgb_set_ringparam(struct net_device *netdev, | 550 | ixgb_set_ringparam(struct net_device *netdev, |
531 | struct ethtool_ringparam *ring) | 551 | struct ethtool_ringparam *ring) |
532 | { | 552 | { |
533 | struct ixgb_adapter *adapter = netdev->priv; | 553 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
534 | struct ixgb_desc_ring *txdr = &adapter->tx_ring; | 554 | struct ixgb_desc_ring *txdr = &adapter->tx_ring; |
535 | struct ixgb_desc_ring *rxdr = &adapter->rx_ring; | 555 | struct ixgb_desc_ring *rxdr = &adapter->rx_ring; |
536 | struct ixgb_desc_ring tx_old, tx_new, rx_old, rx_new; | 556 | struct ixgb_desc_ring tx_old, tx_new, rx_old, rx_new; |
@@ -573,6 +593,11 @@ ixgb_set_ringparam(struct net_device *netdev, | |||
573 | adapter->tx_ring = tx_new; | 593 | adapter->tx_ring = tx_new; |
574 | if((err = ixgb_up(adapter))) | 594 | if((err = ixgb_up(adapter))) |
575 | return err; | 595 | return err; |
596 | /* be optimistic about our link, since we were up before */ | ||
597 | adapter->link_speed = 10000; | ||
598 | adapter->link_duplex = FULL_DUPLEX; | ||
599 | netif_carrier_on(netdev); | ||
600 | netif_wake_queue(netdev); | ||
576 | } | 601 | } |
577 | 602 | ||
578 | return 0; | 603 | return 0; |
@@ -607,7 +632,7 @@ ixgb_led_blink_callback(unsigned long data) | |||
607 | static int | 632 | static int |
608 | ixgb_phys_id(struct net_device *netdev, uint32_t data) | 633 | ixgb_phys_id(struct net_device *netdev, uint32_t data) |
609 | { | 634 | { |
610 | struct ixgb_adapter *adapter = netdev->priv; | 635 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
611 | 636 | ||
612 | if(!data || data > (uint32_t)(MAX_SCHEDULE_TIMEOUT / HZ)) | 637 | if(!data || data > (uint32_t)(MAX_SCHEDULE_TIMEOUT / HZ)) |
613 | data = (uint32_t)(MAX_SCHEDULE_TIMEOUT / HZ); | 638 | data = (uint32_t)(MAX_SCHEDULE_TIMEOUT / HZ); |
@@ -643,7 +668,7 @@ static void | |||
643 | ixgb_get_ethtool_stats(struct net_device *netdev, | 668 | ixgb_get_ethtool_stats(struct net_device *netdev, |
644 | struct ethtool_stats *stats, uint64_t *data) | 669 | struct ethtool_stats *stats, uint64_t *data) |
645 | { | 670 | { |
646 | struct ixgb_adapter *adapter = netdev->priv; | 671 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
647 | int i; | 672 | int i; |
648 | 673 | ||
649 | ixgb_update_stats(adapter); | 674 | ixgb_update_stats(adapter); |
diff --git a/drivers/net/ixgb/ixgb_hw.h b/drivers/net/ixgb/ixgb_hw.h
index 97898efe7cc8..8bcf31ed10c2 100644
--- a/drivers/net/ixgb/ixgb_hw.h
+++ b/drivers/net/ixgb/ixgb_hw.h
@@ -822,17 +822,8 @@ extern void ixgb_clear_vfta(struct ixgb_hw *hw); | |||
822 | 822 | ||
823 | /* Access functions to eeprom data */ | 823 | /* Access functions to eeprom data */ |
824 | void ixgb_get_ee_mac_addr(struct ixgb_hw *hw, uint8_t *mac_addr); | 824 | void ixgb_get_ee_mac_addr(struct ixgb_hw *hw, uint8_t *mac_addr); |
825 | uint16_t ixgb_get_ee_compatibility(struct ixgb_hw *hw); | ||
826 | uint32_t ixgb_get_ee_pba_number(struct ixgb_hw *hw); | 825 | uint32_t ixgb_get_ee_pba_number(struct ixgb_hw *hw); |
827 | uint16_t ixgb_get_ee_init_ctrl_reg_1(struct ixgb_hw *hw); | ||
828 | uint16_t ixgb_get_ee_init_ctrl_reg_2(struct ixgb_hw *hw); | ||
829 | uint16_t ixgb_get_ee_subsystem_id(struct ixgb_hw *hw); | ||
830 | uint16_t ixgb_get_ee_subvendor_id(struct ixgb_hw *hw); | ||
831 | uint16_t ixgb_get_ee_device_id(struct ixgb_hw *hw); | 826 | uint16_t ixgb_get_ee_device_id(struct ixgb_hw *hw); |
832 | uint16_t ixgb_get_ee_vendor_id(struct ixgb_hw *hw); | ||
833 | uint16_t ixgb_get_ee_swdpins_reg(struct ixgb_hw *hw); | ||
834 | uint8_t ixgb_get_ee_d3_power(struct ixgb_hw *hw); | ||
835 | uint8_t ixgb_get_ee_d0_power(struct ixgb_hw *hw); | ||
836 | boolean_t ixgb_get_eeprom_data(struct ixgb_hw *hw); | 827 | boolean_t ixgb_get_eeprom_data(struct ixgb_hw *hw); |
837 | uint16_t ixgb_get_eeprom_word(struct ixgb_hw *hw, uint16_t index); | 828 | uint16_t ixgb_get_eeprom_word(struct ixgb_hw *hw, uint16_t index); |
838 | 829 | ||
diff --git a/drivers/net/ixgb/ixgb_main.c b/drivers/net/ixgb/ixgb_main.c
index 097b90ccf575..5c555373adbe 100644
--- a/drivers/net/ixgb/ixgb_main.c
+++ b/drivers/net/ixgb/ixgb_main.c
@@ -29,6 +29,11 @@ | |||
29 | #include "ixgb.h" | 29 | #include "ixgb.h" |
30 | 30 | ||
31 | /* Change Log | 31 | /* Change Log |
32 | * 1.0.96 04/19/05 | ||
33 | * - Make needlessly global code static -- bunk@stusta.de | ||
34 | * - ethtool cleanup -- shemminger@osdl.org | ||
35 | * - Support for MODULE_VERSION -- linville@tuxdriver.com | ||
36 | * - add skb_header_cloned check to the tso path -- herbert@apana.org.au | ||
32 | * 1.0.88 01/05/05 | 37 | * 1.0.88 01/05/05 |
33 | * - include fix to the condition that determines when to quit NAPI - Robert Olsson | 38 | * - include fix to the condition that determines when to quit NAPI - Robert Olsson |
34 | * - use netif_poll_{disable/enable} to synchronize between NAPI and i/f up/down | 39 | * - use netif_poll_{disable/enable} to synchronize between NAPI and i/f up/down |
@@ -47,10 +52,9 @@ char ixgb_driver_string[] = "Intel(R) PRO/10GbE Network Driver"; | |||
47 | #else | 52 | #else |
48 | #define DRIVERNAPI "-NAPI" | 53 | #define DRIVERNAPI "-NAPI" |
49 | #endif | 54 | #endif |
50 | 55 | #define DRV_VERSION "1.0.100-k2"DRIVERNAPI | |
51 | #define DRV_VERSION "1.0.95-k2"DRIVERNAPI | ||
52 | char ixgb_driver_version[] = DRV_VERSION; | 56 | char ixgb_driver_version[] = DRV_VERSION; |
53 | char ixgb_copyright[] = "Copyright (c) 1999-2005 Intel Corporation."; | 57 | static char ixgb_copyright[] = "Copyright (c) 1999-2005 Intel Corporation."; |
54 | 58 | ||
55 | /* ixgb_pci_tbl - PCI Device ID Table | 59 | /* ixgb_pci_tbl - PCI Device ID Table |
56 | * | 60 | * |
@@ -145,10 +149,12 @@ MODULE_LICENSE("GPL"); | |||
145 | MODULE_VERSION(DRV_VERSION); | 149 | MODULE_VERSION(DRV_VERSION); |
146 | 150 | ||
147 | /* some defines for controlling descriptor fetches in h/w */ | 151 | /* some defines for controlling descriptor fetches in h/w */ |
148 | #define RXDCTL_PTHRESH_DEFAULT 128 /* chip considers prefech below this */ | ||
149 | #define RXDCTL_HTHRESH_DEFAULT 16 /* chip will only prefetch if tail is | ||
150 | pushed this many descriptors from head */ | ||
151 | #define RXDCTL_WTHRESH_DEFAULT 16 /* chip writes back at this many or RXT0 */ | 152 | #define RXDCTL_WTHRESH_DEFAULT 16 /* chip writes back at this many or RXT0 */ |
153 | #define RXDCTL_PTHRESH_DEFAULT 0 /* chip considers prefech below | ||
154 | * this */ | ||
155 | #define RXDCTL_HTHRESH_DEFAULT 0 /* chip will only prefetch if tail | ||
156 | * is pushed this many descriptors | ||
157 | * from head */ | ||
152 | 158 | ||
153 | /** | 159 | /** |
154 | * ixgb_init_module - Driver Registration Routine | 160 | * ixgb_init_module - Driver Registration Routine |
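
The reshuffled defines set the receive descriptor control defaults: PTHRESH is the free-descriptor level below which the chip prefetches more descriptors, HTHRESH is how far the tail must be pushed ahead of the head before a prefetch is attempted, and WTHRESH is how many processed descriptors accumulate before write-back (or the RXT0 timer forces one). The commit drops both prefetch thresholds to 0 and keeps write-back at 16. These fields end up combined into one RXDCTL register value; the sketch below shows the shape of that packing using placeholder shift constants, not the driver's real IXGB_RXDCTL_*_SHIFT definitions.

  #include <stdint.h>
  #include <stdio.h>

  /* Placeholder bit positions, for illustration only. */
  #define EX_PTHRESH_SHIFT  0
  #define EX_HTHRESH_SHIFT  8
  #define EX_WTHRESH_SHIFT 16

  static uint32_t build_rxdctl(uint32_t pthresh, uint32_t hthresh, uint32_t wthresh)
  {
      return (pthresh << EX_PTHRESH_SHIFT) |
             (hthresh << EX_HTHRESH_SHIFT) |
             (wthresh << EX_WTHRESH_SHIFT);
  }

  int main(void)
  {
      /* the new defaults: no prefetch thresholds, write back every 16 */
      printf("RXDCTL = 0x%08x\n", (unsigned)build_rxdctl(0, 0, 16));
      return 0;
  }
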
@@ -376,7 +382,7 @@ ixgb_probe(struct pci_dev *pdev, | |||
376 | SET_NETDEV_DEV(netdev, &pdev->dev); | 382 | SET_NETDEV_DEV(netdev, &pdev->dev); |
377 | 383 | ||
378 | pci_set_drvdata(pdev, netdev); | 384 | pci_set_drvdata(pdev, netdev); |
379 | adapter = netdev->priv; | 385 | adapter = netdev_priv(netdev); |
380 | adapter->netdev = netdev; | 386 | adapter->netdev = netdev; |
381 | adapter->pdev = pdev; | 387 | adapter->pdev = pdev; |
382 | adapter->hw.back = adapter; | 388 | adapter->hw.back = adapter; |
@@ -512,7 +518,7 @@ static void __devexit | |||
512 | ixgb_remove(struct pci_dev *pdev) | 518 | ixgb_remove(struct pci_dev *pdev) |
513 | { | 519 | { |
514 | struct net_device *netdev = pci_get_drvdata(pdev); | 520 | struct net_device *netdev = pci_get_drvdata(pdev); |
515 | struct ixgb_adapter *adapter = netdev->priv; | 521 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
516 | 522 | ||
517 | unregister_netdev(netdev); | 523 | unregister_netdev(netdev); |
518 | 524 | ||
@@ -583,7 +589,7 @@ ixgb_sw_init(struct ixgb_adapter *adapter) | |||
583 | static int | 589 | static int |
584 | ixgb_open(struct net_device *netdev) | 590 | ixgb_open(struct net_device *netdev) |
585 | { | 591 | { |
586 | struct ixgb_adapter *adapter = netdev->priv; | 592 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
587 | int err; | 593 | int err; |
588 | 594 | ||
589 | /* allocate transmit descriptors */ | 595 | /* allocate transmit descriptors */ |
@@ -626,7 +632,7 @@ err_setup_tx: | |||
626 | static int | 632 | static int |
627 | ixgb_close(struct net_device *netdev) | 633 | ixgb_close(struct net_device *netdev) |
628 | { | 634 | { |
629 | struct ixgb_adapter *adapter = netdev->priv; | 635 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
630 | 636 | ||
631 | ixgb_down(adapter, TRUE); | 637 | ixgb_down(adapter, TRUE); |
632 | 638 | ||
@@ -1017,7 +1023,7 @@ ixgb_clean_rx_ring(struct ixgb_adapter *adapter) | |||
1017 | static int | 1023 | static int |
1018 | ixgb_set_mac(struct net_device *netdev, void *p) | 1024 | ixgb_set_mac(struct net_device *netdev, void *p) |
1019 | { | 1025 | { |
1020 | struct ixgb_adapter *adapter = netdev->priv; | 1026 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1021 | struct sockaddr *addr = p; | 1027 | struct sockaddr *addr = p; |
1022 | 1028 | ||
1023 | if(!is_valid_ether_addr(addr->sa_data)) | 1029 | if(!is_valid_ether_addr(addr->sa_data)) |
@@ -1043,7 +1049,7 @@ ixgb_set_mac(struct net_device *netdev, void *p) | |||
1043 | static void | 1049 | static void |
1044 | ixgb_set_multi(struct net_device *netdev) | 1050 | ixgb_set_multi(struct net_device *netdev) |
1045 | { | 1051 | { |
1046 | struct ixgb_adapter *adapter = netdev->priv; | 1052 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1047 | struct ixgb_hw *hw = &adapter->hw; | 1053 | struct ixgb_hw *hw = &adapter->hw; |
1048 | struct dev_mc_list *mc_ptr; | 1054 | struct dev_mc_list *mc_ptr; |
1049 | uint32_t rctl; | 1055 | uint32_t rctl; |
@@ -1371,7 +1377,7 @@ ixgb_tx_queue(struct ixgb_adapter *adapter, int count, int vlan_id,int tx_flags) | |||
1371 | static int | 1377 | static int |
1372 | ixgb_xmit_frame(struct sk_buff *skb, struct net_device *netdev) | 1378 | ixgb_xmit_frame(struct sk_buff *skb, struct net_device *netdev) |
1373 | { | 1379 | { |
1374 | struct ixgb_adapter *adapter = netdev->priv; | 1380 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1375 | unsigned int first; | 1381 | unsigned int first; |
1376 | unsigned int tx_flags = 0; | 1382 | unsigned int tx_flags = 0; |
1377 | unsigned long flags; | 1383 | unsigned long flags; |
@@ -1425,7 +1431,7 @@ ixgb_xmit_frame(struct sk_buff *skb, struct net_device *netdev) | |||
1425 | static void | 1431 | static void |
1426 | ixgb_tx_timeout(struct net_device *netdev) | 1432 | ixgb_tx_timeout(struct net_device *netdev) |
1427 | { | 1433 | { |
1428 | struct ixgb_adapter *adapter = netdev->priv; | 1434 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1429 | 1435 | ||
1430 | /* Do the reset outside of interrupt context */ | 1436 | /* Do the reset outside of interrupt context */ |
1431 | schedule_work(&adapter->tx_timeout_task); | 1437 | schedule_work(&adapter->tx_timeout_task); |
@@ -1434,7 +1440,7 @@ ixgb_tx_timeout(struct net_device *netdev) | |||
1434 | static void | 1440 | static void |
1435 | ixgb_tx_timeout_task(struct net_device *netdev) | 1441 | ixgb_tx_timeout_task(struct net_device *netdev) |
1436 | { | 1442 | { |
1437 | struct ixgb_adapter *adapter = netdev->priv; | 1443 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1438 | 1444 | ||
1439 | ixgb_down(adapter, TRUE); | 1445 | ixgb_down(adapter, TRUE); |
1440 | ixgb_up(adapter); | 1446 | ixgb_up(adapter); |
@@ -1451,7 +1457,7 @@ ixgb_tx_timeout_task(struct net_device *netdev) | |||
1451 | static struct net_device_stats * | 1457 | static struct net_device_stats * |
1452 | ixgb_get_stats(struct net_device *netdev) | 1458 | ixgb_get_stats(struct net_device *netdev) |
1453 | { | 1459 | { |
1454 | struct ixgb_adapter *adapter = netdev->priv; | 1460 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1455 | 1461 | ||
1456 | return &adapter->net_stats; | 1462 | return &adapter->net_stats; |
1457 | } | 1463 | } |
@@ -1467,7 +1473,7 @@ ixgb_get_stats(struct net_device *netdev) | |||
1467 | static int | 1473 | static int |
1468 | ixgb_change_mtu(struct net_device *netdev, int new_mtu) | 1474 | ixgb_change_mtu(struct net_device *netdev, int new_mtu) |
1469 | { | 1475 | { |
1470 | struct ixgb_adapter *adapter = netdev->priv; | 1476 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1471 | int max_frame = new_mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH; | 1477 | int max_frame = new_mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH; |
1472 | int old_max_frame = netdev->mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH; | 1478 | int old_max_frame = netdev->mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH; |
1473 | 1479 | ||
@@ -1522,7 +1528,8 @@ ixgb_update_stats(struct ixgb_adapter *adapter) | |||
1522 | 1528 | ||
1523 | multi |= ((u64)IXGB_READ_REG(&adapter->hw, MPRCH) << 32); | 1529 | multi |= ((u64)IXGB_READ_REG(&adapter->hw, MPRCH) << 32); |
1524 | /* fix up multicast stats by removing broadcasts */ | 1530 | /* fix up multicast stats by removing broadcasts */ |
1525 | multi -= bcast; | 1531 | if(multi >= bcast) |
1532 | multi -= bcast; | ||
1526 | 1533 | ||
1527 | adapter->stats.mprcl += (multi & 0xFFFFFFFF); | 1534 | adapter->stats.mprcl += (multi & 0xFFFFFFFF); |
1528 | adapter->stats.mprch += (multi >> 32); | 1535 | adapter->stats.mprch += (multi >> 32); |
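
In ixgb_update_stats() the 64-bit multicast count is assembled from the MPRCL/MPRCH register pair and, per the comment, the broadcast count is subtracted because the hardware counts broadcasts as multicasts too. The added check matters because both values are unsigned: if bcast happens to read higher than multi (the two counters are sampled separately), the bare subtraction would wrap to an enormous value instead of a small one. A standalone illustration of the wrap and of the guard used in the hunk:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t multi = 100;   /* multicasts counted by hardware (broadcasts included) */
      uint64_t bcast = 103;   /* broadcast count, read separately */

      uint64_t unguarded = multi - bcast;                   /* wraps around */
      uint64_t guarded   = (multi >= bcast) ? multi - bcast /* the new check */
                                            : multi;

      printf("unguarded: %" PRIu64 "\n", unguarded);
      printf("guarded:   %" PRIu64 "\n", guarded);
      return 0;
  }
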
@@ -1641,7 +1648,7 @@ static irqreturn_t | |||
1641 | ixgb_intr(int irq, void *data, struct pt_regs *regs) | 1648 | ixgb_intr(int irq, void *data, struct pt_regs *regs) |
1642 | { | 1649 | { |
1643 | struct net_device *netdev = data; | 1650 | struct net_device *netdev = data; |
1644 | struct ixgb_adapter *adapter = netdev->priv; | 1651 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1645 | struct ixgb_hw *hw = &adapter->hw; | 1652 | struct ixgb_hw *hw = &adapter->hw; |
1646 | uint32_t icr = IXGB_READ_REG(hw, ICR); | 1653 | uint32_t icr = IXGB_READ_REG(hw, ICR); |
1647 | #ifndef CONFIG_IXGB_NAPI | 1654 | #ifndef CONFIG_IXGB_NAPI |
@@ -1688,7 +1695,7 @@ ixgb_intr(int irq, void *data, struct pt_regs *regs) | |||
1688 | static int | 1695 | static int |
1689 | ixgb_clean(struct net_device *netdev, int *budget) | 1696 | ixgb_clean(struct net_device *netdev, int *budget) |
1690 | { | 1697 | { |
1691 | struct ixgb_adapter *adapter = netdev->priv; | 1698 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
1692 | int work_to_do = min(*budget, netdev->quota); | 1699 | int work_to_do = min(*budget, netdev->quota); |
1693 | int tx_cleaned; | 1700 | int tx_cleaned; |
1694 | int work_done = 0; | 1701 | int work_done = 0; |
@@ -2017,7 +2024,7 @@ ixgb_alloc_rx_buffers(struct ixgb_adapter *adapter) | |||
2017 | static void | 2024 | static void |
2018 | ixgb_vlan_rx_register(struct net_device *netdev, struct vlan_group *grp) | 2025 | ixgb_vlan_rx_register(struct net_device *netdev, struct vlan_group *grp) |
2019 | { | 2026 | { |
2020 | struct ixgb_adapter *adapter = netdev->priv; | 2027 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
2021 | uint32_t ctrl, rctl; | 2028 | uint32_t ctrl, rctl; |
2022 | 2029 | ||
2023 | ixgb_irq_disable(adapter); | 2030 | ixgb_irq_disable(adapter); |
@@ -2055,7 +2062,7 @@ ixgb_vlan_rx_register(struct net_device *netdev, struct vlan_group *grp) | |||
2055 | static void | 2062 | static void |
2056 | ixgb_vlan_rx_add_vid(struct net_device *netdev, uint16_t vid) | 2063 | ixgb_vlan_rx_add_vid(struct net_device *netdev, uint16_t vid) |
2057 | { | 2064 | { |
2058 | struct ixgb_adapter *adapter = netdev->priv; | 2065 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
2059 | uint32_t vfta, index; | 2066 | uint32_t vfta, index; |
2060 | 2067 | ||
2061 | /* add VID to filter table */ | 2068 | /* add VID to filter table */ |
@@ -2069,7 +2076,7 @@ ixgb_vlan_rx_add_vid(struct net_device *netdev, uint16_t vid) | |||
2069 | static void | 2076 | static void |
2070 | ixgb_vlan_rx_kill_vid(struct net_device *netdev, uint16_t vid) | 2077 | ixgb_vlan_rx_kill_vid(struct net_device *netdev, uint16_t vid) |
2071 | { | 2078 | { |
2072 | struct ixgb_adapter *adapter = netdev->priv; | 2079 | struct ixgb_adapter *adapter = netdev_priv(netdev); |
2073 | uint32_t vfta, index; | 2080 | uint32_t vfta, index; |
2074 | 2081 | ||
2075 | ixgb_irq_disable(adapter); | 2082 | ixgb_irq_disable(adapter); |
diff --git a/drivers/net/jazzsonic.c b/drivers/net/jazzsonic.c
index 7fec613e1675..8423cb6875f0 100644
--- a/drivers/net/jazzsonic.c
+++ b/drivers/net/jazzsonic.c
@@ -1,5 +1,10 @@ | |||
1 | /* | 1 | /* |
2 | * sonic.c | 2 | * jazzsonic.c |
3 | * | ||
4 | * (C) 2005 Finn Thain | ||
5 | * | ||
6 | * Converted to DMA API, and (from the mac68k project) introduced | ||
7 | * dhd's support for 16-bit cards. | ||
3 | * | 8 | * |
4 | * (C) 1996,1998 by Thomas Bogendoerfer (tsbogend@alpha.franken.de) | 9 | * (C) 1996,1998 by Thomas Bogendoerfer (tsbogend@alpha.franken.de) |
5 | * | 10 | * |
@@ -28,8 +33,8 @@ | |||
28 | #include <linux/netdevice.h> | 33 | #include <linux/netdevice.h> |
29 | #include <linux/etherdevice.h> | 34 | #include <linux/etherdevice.h> |
30 | #include <linux/skbuff.h> | 35 | #include <linux/skbuff.h> |
31 | #include <linux/bitops.h> | ||
32 | #include <linux/device.h> | 36 | #include <linux/device.h> |
37 | #include <linux/dma-mapping.h> | ||
33 | 38 | ||
34 | #include <asm/bootinfo.h> | 39 | #include <asm/bootinfo.h> |
35 | #include <asm/system.h> | 40 | #include <asm/system.h> |
@@ -44,22 +49,20 @@ static struct platform_device *jazz_sonic_device; | |||
44 | 49 | ||
45 | #define SONIC_MEM_SIZE 0x100 | 50 | #define SONIC_MEM_SIZE 0x100 |
46 | 51 | ||
47 | #define SREGS_PAD(n) u16 n; | ||
48 | |||
49 | #include "sonic.h" | 52 | #include "sonic.h" |
50 | 53 | ||
51 | /* | 54 | /* |
52 | * Macros to access SONIC registers | 55 | * Macros to access SONIC registers |
53 | */ | 56 | */ |
54 | #define SONIC_READ(reg) (*((volatile unsigned int *)base_addr+reg)) | 57 | #define SONIC_READ(reg) (*((volatile unsigned int *)dev->base_addr+reg)) |
55 | 58 | ||
56 | #define SONIC_WRITE(reg,val) \ | 59 | #define SONIC_WRITE(reg,val) \ |
57 | do { \ | 60 | do { \ |
58 | *((volatile unsigned int *)base_addr+(reg)) = (val); \ | 61 | *((volatile unsigned int *)dev->base_addr+(reg)) = (val); \ |
59 | } while (0) | 62 | } while (0) |
60 | 63 | ||
61 | 64 | ||
62 | /* use 0 for production, 1 for verification, >2 for debug */ | 65 | /* use 0 for production, 1 for verification, >1 for debug */ |
63 | #ifdef SONIC_DEBUG | 66 | #ifdef SONIC_DEBUG |
64 | static unsigned int sonic_debug = SONIC_DEBUG; | 67 | static unsigned int sonic_debug = SONIC_DEBUG; |
65 | #else | 68 | #else |
@@ -85,18 +88,18 @@ static unsigned short known_revisions[] = | |||
85 | 0xffff /* end of list */ | 88 | 0xffff /* end of list */ |
86 | }; | 89 | }; |
87 | 90 | ||
88 | static int __init sonic_probe1(struct net_device *dev, unsigned long base_addr, | 91 | static int __init sonic_probe1(struct net_device *dev) |
89 | unsigned int irq) | ||
90 | { | 92 | { |
91 | static unsigned version_printed; | 93 | static unsigned version_printed; |
92 | unsigned int silicon_revision; | 94 | unsigned int silicon_revision; |
93 | unsigned int val; | 95 | unsigned int val; |
94 | struct sonic_local *lp; | 96 | struct sonic_local *lp = netdev_priv(dev); |
95 | int err = -ENODEV; | 97 | int err = -ENODEV; |
96 | int i; | 98 | int i; |
97 | 99 | ||
98 | if (!request_mem_region(base_addr, SONIC_MEM_SIZE, jazz_sonic_string)) | 100 | if (!request_mem_region(dev->base_addr, SONIC_MEM_SIZE, jazz_sonic_string)) |
99 | return -EBUSY; | 101 | return -EBUSY; |
102 | |||
100 | /* | 103 | /* |
101 | * get the Silicon Revision ID. If this is one of the known | 104 | * get the Silicon Revision ID. If this is one of the known |
102 | * one assume that we found a SONIC ethernet controller at | 105 | * one assume that we found a SONIC ethernet controller at |
@@ -120,11 +123,7 @@ static int __init sonic_probe1(struct net_device *dev, unsigned long base_addr, | |||
120 | if (sonic_debug && version_printed++ == 0) | 123 | if (sonic_debug && version_printed++ == 0) |
121 | printk(version); | 124 | printk(version); |
122 | 125 | ||
123 | printk("%s: Sonic ethernet found at 0x%08lx, ", dev->name, base_addr); | 126 | printk(KERN_INFO "%s: Sonic ethernet found at 0x%08lx, ", lp->device->bus_id, dev->base_addr); |
124 | |||
125 | /* Fill in the 'dev' fields. */ | ||
126 | dev->base_addr = base_addr; | ||
127 | dev->irq = irq; | ||
128 | 127 | ||
129 | /* | 128 | /* |
130 | * Put the sonic into software reset, then | 129 | * Put the sonic into software reset, then |
@@ -138,84 +137,44 @@ static int __init sonic_probe1(struct net_device *dev, unsigned long base_addr, | |||
138 | dev->dev_addr[i*2+1] = val >> 8; | 137 | dev->dev_addr[i*2+1] = val >> 8; |
139 | } | 138 | } |
140 | 139 | ||
141 | printk("HW Address "); | ||
142 | for (i = 0; i < 6; i++) { | ||
143 | printk("%2.2x", dev->dev_addr[i]); | ||
144 | if (i<5) | ||
145 | printk(":"); | ||
146 | } | ||
147 | |||
148 | printk(" IRQ %d\n", irq); | ||
149 | |||
150 | err = -ENOMEM; | 140 | err = -ENOMEM; |
151 | 141 | ||
152 | /* Initialize the device structure. */ | 142 | /* Initialize the device structure. */ |
153 | if (dev->priv == NULL) { | ||
154 | /* | ||
155 | * the memory be located in the same 64kb segment | ||
156 | */ | ||
157 | lp = NULL; | ||
158 | i = 0; | ||
159 | do { | ||
160 | lp = kmalloc(sizeof(*lp), GFP_KERNEL); | ||
161 | if ((unsigned long) lp >> 16 | ||
162 | != ((unsigned long)lp + sizeof(*lp) ) >> 16) { | ||
163 | /* FIXME, free the memory later */ | ||
164 | kfree(lp); | ||
165 | lp = NULL; | ||
166 | } | ||
167 | } while (lp == NULL && i++ < 20); | ||
168 | |||
169 | if (lp == NULL) { | ||
170 | printk("%s: couldn't allocate memory for descriptors\n", | ||
171 | dev->name); | ||
172 | goto out; | ||
173 | } | ||
174 | 143 | ||
175 | memset(lp, 0, sizeof(struct sonic_local)); | 144 | lp->dma_bitmode = SONIC_BITMODE32; |
176 | |||
177 | /* get the virtual dma address */ | ||
178 | lp->cda_laddr = vdma_alloc(CPHYSADDR(lp),sizeof(*lp)); | ||
179 | if (lp->cda_laddr == ~0UL) { | ||
180 | printk("%s: couldn't get DMA page entry for " | ||
181 | "descriptors\n", dev->name); | ||
182 | goto out1; | ||
183 | } | ||
184 | |||
185 | lp->tda_laddr = lp->cda_laddr + sizeof (lp->cda); | ||
186 | lp->rra_laddr = lp->tda_laddr + sizeof (lp->tda); | ||
187 | lp->rda_laddr = lp->rra_laddr + sizeof (lp->rra); | ||
188 | |||
189 | /* allocate receive buffer area */ | ||
190 | /* FIXME, maybe we should use skbs */ | ||
191 | lp->rba = kmalloc(SONIC_NUM_RRS * SONIC_RBSIZE, GFP_KERNEL); | ||
192 | if (!lp->rba) { | ||
193 | printk("%s: couldn't allocate receive buffers\n", | ||
194 | dev->name); | ||
195 | goto out2; | ||
196 | } | ||
197 | 145 | ||
198 | /* get virtual dma address */ | 146 | /* Allocate the entire chunk of memory for the descriptors. |
199 | lp->rba_laddr = vdma_alloc(CPHYSADDR(lp->rba), | 147 | Note that this cannot cross a 64K boundary. */ |
200 | SONIC_NUM_RRS * SONIC_RBSIZE); | 148 | if ((lp->descriptors = dma_alloc_coherent(lp->device, |
201 | if (lp->rba_laddr == ~0UL) { | 149 | SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), |
202 | printk("%s: couldn't get DMA page entry for receive " | 150 | &lp->descriptors_laddr, GFP_KERNEL)) == NULL) { |
203 | "buffers\n",dev->name); | 151 | printk(KERN_ERR "%s: couldn't alloc DMA memory for descriptors.\n", lp->device->bus_id); |
204 | goto out3; | 152 | goto out; |
205 | } | ||
206 | |||
207 | /* now convert pointer to KSEG1 pointer */ | ||
208 | lp->rba = (char *)KSEG1ADDR(lp->rba); | ||
209 | flush_cache_all(); | ||
210 | dev->priv = (struct sonic_local *)KSEG1ADDR(lp); | ||
211 | } | 153 | } |
212 | 154 | ||
213 | lp = (struct sonic_local *)dev->priv; | 155 | /* Now set up the pointers to point to the appropriate places */ |
156 | lp->cda = lp->descriptors; | ||
157 | lp->tda = lp->cda + (SIZEOF_SONIC_CDA | ||
158 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | ||
159 | lp->rda = lp->tda + (SIZEOF_SONIC_TD * SONIC_NUM_TDS | ||
160 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | ||
161 | lp->rra = lp->rda + (SIZEOF_SONIC_RD * SONIC_NUM_RDS | ||
162 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | ||
163 | |||
164 | lp->cda_laddr = lp->descriptors_laddr; | ||
165 | lp->tda_laddr = lp->cda_laddr + (SIZEOF_SONIC_CDA | ||
166 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | ||
167 | lp->rda_laddr = lp->tda_laddr + (SIZEOF_SONIC_TD * SONIC_NUM_TDS | ||
168 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | ||
169 | lp->rra_laddr = lp->rda_laddr + (SIZEOF_SONIC_RD * SONIC_NUM_RDS | ||
170 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | ||
171 | |||
214 | dev->open = sonic_open; | 172 | dev->open = sonic_open; |
215 | dev->stop = sonic_close; | 173 | dev->stop = sonic_close; |
216 | dev->hard_start_xmit = sonic_send_packet; | 174 | dev->hard_start_xmit = sonic_send_packet; |
217 | dev->get_stats = sonic_get_stats; | 175 | dev->get_stats = sonic_get_stats; |
218 | dev->set_multicast_list = &sonic_multicast_list; | 176 | dev->set_multicast_list = &sonic_multicast_list; |
177 | dev->tx_timeout = sonic_tx_timeout; | ||
219 | dev->watchdog_timeo = TX_TIMEOUT; | 178 | dev->watchdog_timeo = TX_TIMEOUT; |
220 | 179 | ||
221 | /* | 180 | /* |
@@ -226,14 +185,8 @@ static int __init sonic_probe1(struct net_device *dev, unsigned long base_addr, | |||
226 | SONIC_WRITE(SONIC_MPT,0xffff); | 185 | SONIC_WRITE(SONIC_MPT,0xffff); |
227 | 186 | ||
228 | return 0; | 187 | return 0; |
229 | out3: | ||
230 | kfree(lp->rba); | ||
231 | out2: | ||
232 | vdma_free(lp->cda_laddr); | ||
233 | out1: | ||
234 | kfree(lp); | ||
235 | out: | 188 | out: |
236 | release_region(base_addr, SONIC_MEM_SIZE); | 189 | release_region(dev->base_addr, SONIC_MEM_SIZE); |
237 | return err; | 190 | return err; |
238 | } | 191 | } |
239 | 192 | ||
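
The rewritten probe replaces the old kmalloc-and-retry descriptor setup with a single dma_alloc_coherent() call sized for all the SONIC descriptor areas (which, as the comment notes, must not cross a 64K boundary), then carves that block into the CAM descriptor area (cda), transmit descriptors (tda), receive descriptors (rda) and receive resource area (rra). The CPU pointers and the DMA addresses stay consistent because the same offsets are applied to lp->descriptors and lp->descriptors_laddr. Below is a plain-C model of that carving, with invented sizes standing in for the SIZEOF_SONIC_* and SONIC_BUS_SCALE() terms.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Illustrative sizes only; the driver derives these from SIZEOF_SONIC_*
   * scaled by SONIC_BUS_SCALE(dma_bitmode). */
  #define EX_CDA_BYTES  64
  #define EX_TDA_BYTES 512
  #define EX_RDA_BYTES 256
  #define EX_RRA_BYTES 128

  int main(void)
  {
      size_t total = EX_CDA_BYTES + EX_TDA_BYTES + EX_RDA_BYTES + EX_RRA_BYTES;
      char *cpu_base = malloc(total);      /* stands in for dma_alloc_coherent() */
      uintptr_t dma_base = 0x02000000;     /* pretend bus address from the same call */

      /* same offsets applied to both views of the one allocation */
      char *cda = cpu_base;
      char *tda = cda + EX_CDA_BYTES;
      char *rda = tda + EX_TDA_BYTES;
      char *rra = rda + EX_RDA_BYTES;

      uintptr_t cda_laddr = dma_base;
      uintptr_t tda_laddr = cda_laddr + EX_CDA_BYTES;
      uintptr_t rda_laddr = tda_laddr + EX_TDA_BYTES;
      uintptr_t rra_laddr = rda_laddr + EX_RDA_BYTES;

      printf("cda %p / 0x%lx\n", (void *)cda, (unsigned long)cda_laddr);
      printf("tda %p / 0x%lx\n", (void *)tda, (unsigned long)tda_laddr);
      printf("rda %p / 0x%lx\n", (void *)rda, (unsigned long)rda_laddr);
      printf("rra %p / 0x%lx\n", (void *)rra, (unsigned long)rra_laddr);
      free(cpu_base);
      return 0;
  }
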
@@ -245,7 +198,6 @@ static int __init jazz_sonic_probe(struct device *device) | |||
245 | { | 198 | { |
246 | struct net_device *dev; | 199 | struct net_device *dev; |
247 | struct sonic_local *lp; | 200 | struct sonic_local *lp; |
248 | unsigned long base_addr; | ||
249 | int err = 0; | 201 | int err = 0; |
250 | int i; | 202 | int i; |
251 | 203 | ||
@@ -255,21 +207,26 @@ static int __init jazz_sonic_probe(struct device *device) | |||
255 | if (mips_machgroup != MACH_GROUP_JAZZ) | 207 | if (mips_machgroup != MACH_GROUP_JAZZ) |
256 | return -ENODEV; | 208 | return -ENODEV; |
257 | 209 | ||
258 | dev = alloc_etherdev(0); | 210 | dev = alloc_etherdev(sizeof(struct sonic_local)); |
259 | if (!dev) | 211 | if (!dev) |
260 | return -ENOMEM; | 212 | return -ENOMEM; |
261 | 213 | ||
214 | lp = netdev_priv(dev); | ||
215 | lp->device = device; | ||
216 | SET_NETDEV_DEV(dev, device); | ||
217 | SET_MODULE_OWNER(dev); | ||
218 | |||
262 | netdev_boot_setup_check(dev); | 219 | netdev_boot_setup_check(dev); |
263 | base_addr = dev->base_addr; | ||
264 | 220 | ||
265 | if (base_addr >= KSEG0) { /* Check a single specified location. */ | 221 | if (dev->base_addr >= KSEG0) { /* Check a single specified location. */ |
266 | err = sonic_probe1(dev, base_addr, dev->irq); | 222 | err = sonic_probe1(dev); |
267 | } else if (base_addr != 0) { /* Don't probe at all. */ | 223 | } else if (dev->base_addr != 0) { /* Don't probe at all. */ |
268 | err = -ENXIO; | 224 | err = -ENXIO; |
269 | } else { | 225 | } else { |
270 | for (i = 0; sonic_portlist[i].port; i++) { | 226 | for (i = 0; sonic_portlist[i].port; i++) { |
271 | int io = sonic_portlist[i].port; | 227 | dev->base_addr = sonic_portlist[i].port; |
272 | if (sonic_probe1(dev, io, sonic_portlist[i].irq) == 0) | 228 | dev->irq = sonic_portlist[i].irq; |
229 | if (sonic_probe1(dev) == 0) | ||
273 | break; | 230 | break; |
274 | } | 231 | } |
275 | if (!sonic_portlist[i].port) | 232 | if (!sonic_portlist[i].port) |
@@ -281,14 +238,17 @@ static int __init jazz_sonic_probe(struct device *device) | |||
281 | if (err) | 238 | if (err) |
282 | goto out1; | 239 | goto out1; |
283 | 240 | ||
241 | printk("%s: MAC ", dev->name); | ||
242 | for (i = 0; i < 6; i++) { | ||
243 | printk("%2.2x", dev->dev_addr[i]); | ||
244 | if (i < 5) | ||
245 | printk(":"); | ||
246 | } | ||
247 | printk(" IRQ %d\n", dev->irq); | ||
248 | |||
284 | return 0; | 249 | return 0; |
285 | 250 | ||
286 | out1: | 251 | out1: |
287 | lp = dev->priv; | ||
288 | vdma_free(lp->rba_laddr); | ||
289 | kfree(lp->rba); | ||
290 | vdma_free(lp->cda_laddr); | ||
291 | kfree(lp); | ||
292 | release_region(dev->base_addr, SONIC_MEM_SIZE); | 252 | release_region(dev->base_addr, SONIC_MEM_SIZE); |
293 | out: | 253 | out: |
294 | free_netdev(dev); | 254 | free_netdev(dev); |
@@ -296,21 +256,22 @@ out: | |||
296 | return err; | 256 | return err; |
297 | } | 257 | } |
298 | 258 | ||
299 | /* | 259 | MODULE_DESCRIPTION("Jazz SONIC ethernet driver"); |
300 | * SONIC uses a normal IRQ | 260 | module_param(sonic_debug, int, 0); |
301 | */ | 261 | MODULE_PARM_DESC(sonic_debug, "jazzsonic debug level (1-4)"); |
302 | #define sonic_request_irq request_irq | ||
303 | #define sonic_free_irq free_irq | ||
304 | 262 | ||
305 | #define sonic_chiptomem(x) KSEG1ADDR(vdma_log2phys(x)) | 263 | #define SONIC_IRQ_FLAG SA_INTERRUPT |
306 | 264 | ||
307 | #include "sonic.c" | 265 | #include "sonic.c" |
308 | 266 | ||
309 | static int __devexit jazz_sonic_device_remove (struct device *device) | 267 | static int __devexit jazz_sonic_device_remove (struct device *device) |
310 | { | 268 | { |
311 | struct net_device *dev = device->driver_data; | 269 | struct net_device *dev = device->driver_data; |
270 | struct sonic_local* lp = netdev_priv(dev); | ||
312 | 271 | ||
313 | unregister_netdev (dev); | 272 | unregister_netdev (dev); |
273 | dma_free_coherent(lp->device, SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), | ||
274 | lp->descriptors, lp->descriptors_laddr); | ||
314 | release_region (dev->base_addr, SONIC_MEM_SIZE); | 275 | release_region (dev->base_addr, SONIC_MEM_SIZE); |
315 | free_netdev (dev); | 276 | free_netdev (dev); |
316 | 277 | ||
@@ -323,7 +284,7 @@ static struct device_driver jazz_sonic_driver = { | |||
323 | .probe = jazz_sonic_probe, | 284 | .probe = jazz_sonic_probe, |
324 | .remove = __devexit_p(jazz_sonic_device_remove), | 285 | .remove = __devexit_p(jazz_sonic_device_remove), |
325 | }; | 286 | }; |
326 | 287 | ||
327 | static void jazz_sonic_platform_release (struct device *device) | 288 | static void jazz_sonic_platform_release (struct device *device) |
328 | { | 289 | { |
329 | struct platform_device *pldev; | 290 | struct platform_device *pldev; |
@@ -336,10 +297,11 @@ static void jazz_sonic_platform_release (struct device *device) | |||
336 | static int __init jazz_sonic_init_module(void) | 297 | static int __init jazz_sonic_init_module(void) |
337 | { | 298 | { |
338 | struct platform_device *pldev; | 299 | struct platform_device *pldev; |
300 | int err; | ||
339 | 301 | ||
340 | if (driver_register(&jazz_sonic_driver)) { | 302 | if ((err = driver_register(&jazz_sonic_driver))) { |
341 | printk(KERN_ERR "Driver registration failed\n"); | 303 | printk(KERN_ERR "Driver registration failed\n"); |
342 | return -ENOMEM; | 304 | return err; |
343 | } | 305 | } |
344 | 306 | ||
345 | jazz_sonic_device = NULL; | 307 | jazz_sonic_device = NULL; |
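
jazz_sonic_init_module() now propagates whatever error driver_register() actually returned instead of collapsing every failure into -ENOMEM, which keeps the error seen further up meaningful. A small standalone model of the pattern; the fake_* names are invented and EBUSY is just an example failure code.

  #include <errno.h>
  #include <stdio.h>

  static int fake_driver_register(void)
  {
      return -EBUSY;              /* pretend the bus rejected the driver */
  }

  static int init_module_model(void)
  {
      int err;

      /* forward the real cause instead of a hard-coded -ENOMEM */
      if ((err = fake_driver_register())) {
          fprintf(stderr, "Driver registration failed\n");
          return err;
      }
      return 0;
  }

  int main(void)
  {
      printf("init -> %d\n", init_module_model());
      return 0;
  }
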
diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
index 1f61f0cc95d8..690a1aae0b34 100644
--- a/drivers/net/loopback.c
+++ b/drivers/net/loopback.c
@@ -68,6 +68,7 @@ static DEFINE_PER_CPU(struct net_device_stats, loopback_stats); | |||
68 | * of largesending device modulo TCP checksum, which is ignored for loopback. | 68 | * of largesending device modulo TCP checksum, which is ignored for loopback. |
69 | */ | 69 | */ |
70 | 70 | ||
71 | #ifdef LOOPBACK_TSO | ||
71 | static void emulate_large_send_offload(struct sk_buff *skb) | 72 | static void emulate_large_send_offload(struct sk_buff *skb) |
72 | { | 73 | { |
73 | struct iphdr *iph = skb->nh.iph; | 74 | struct iphdr *iph = skb->nh.iph; |
@@ -119,6 +120,7 @@ static void emulate_large_send_offload(struct sk_buff *skb) | |||
119 | 120 | ||
120 | dev_kfree_skb(skb); | 121 | dev_kfree_skb(skb); |
121 | } | 122 | } |
123 | #endif /* LOOPBACK_TSO */ | ||
122 | 124 | ||
123 | /* | 125 | /* |
124 | * The higher levels take care of making this non-reentrant (it's | 126 | * The higher levels take care of making this non-reentrant (it's |
@@ -130,12 +132,13 @@ static int loopback_xmit(struct sk_buff *skb, struct net_device *dev) | |||
130 | 132 | ||
131 | skb_orphan(skb); | 133 | skb_orphan(skb); |
132 | 134 | ||
133 | skb->protocol=eth_type_trans(skb,dev); | 135 | skb->protocol = eth_type_trans(skb,dev); |
134 | skb->dev=dev; | 136 | skb->dev = dev; |
135 | #ifndef LOOPBACK_MUST_CHECKSUM | 137 | #ifndef LOOPBACK_MUST_CHECKSUM |
136 | skb->ip_summed = CHECKSUM_UNNECESSARY; | 138 | skb->ip_summed = CHECKSUM_UNNECESSARY; |
137 | #endif | 139 | #endif |
138 | 140 | ||
141 | #ifdef LOOPBACK_TSO | ||
139 | if (skb_shinfo(skb)->tso_size) { | 142 | if (skb_shinfo(skb)->tso_size) { |
140 | BUG_ON(skb->protocol != htons(ETH_P_IP)); | 143 | BUG_ON(skb->protocol != htons(ETH_P_IP)); |
141 | BUG_ON(skb->nh.iph->protocol != IPPROTO_TCP); | 144 | BUG_ON(skb->nh.iph->protocol != IPPROTO_TCP); |
@@ -143,14 +146,14 @@ static int loopback_xmit(struct sk_buff *skb, struct net_device *dev) | |||
143 | emulate_large_send_offload(skb); | 146 | emulate_large_send_offload(skb); |
144 | return 0; | 147 | return 0; |
145 | } | 148 | } |
146 | 149 | #endif | |
147 | dev->last_rx = jiffies; | 150 | dev->last_rx = jiffies; |
148 | 151 | ||
149 | lb_stats = &per_cpu(loopback_stats, get_cpu()); | 152 | lb_stats = &per_cpu(loopback_stats, get_cpu()); |
150 | lb_stats->rx_bytes += skb->len; | 153 | lb_stats->rx_bytes += skb->len; |
151 | lb_stats->tx_bytes += skb->len; | 154 | lb_stats->tx_bytes = lb_stats->rx_bytes; |
152 | lb_stats->rx_packets++; | 155 | lb_stats->rx_packets++; |
153 | lb_stats->tx_packets++; | 156 | lb_stats->tx_packets = lb_stats->rx_packets; |
154 | put_cpu(); | 157 | put_cpu(); |
155 | 158 | ||
156 | netif_rx(skb); | 159 | netif_rx(skb); |
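
On the loopback device every transmitted packet is immediately handed back to the stack as a received packet, so the transmit counters can never differ from the receive counters; the change makes that explicit by copying the rx values into tx_bytes and tx_packets in the per-CPU stats instead of incrementing two counters that must stay equal anyway. A minimal model of the per-packet accounting:

  #include <stdio.h>

  struct lb_stats {
      unsigned long rx_packets, tx_packets;
      unsigned long rx_bytes, tx_bytes;
  };

  /* one loopback transmit: the packet is its own reception */
  static void lb_account(struct lb_stats *s, unsigned int len)
  {
      s->rx_bytes += len;
      s->tx_bytes = s->rx_bytes;       /* tx mirrors rx */
      s->rx_packets++;
      s->tx_packets = s->rx_packets;
  }

  int main(void)
  {
      struct lb_stats s = { 0 };

      lb_account(&s, 1500);
      lb_account(&s, 64);
      printf("rx %lu pkts / %lu bytes, tx %lu pkts / %lu bytes\n",
             s.rx_packets, s.rx_bytes, s.tx_packets, s.tx_bytes);
      return 0;
  }
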
@@ -208,9 +211,12 @@ struct net_device loopback_dev = { | |||
208 | .type = ARPHRD_LOOPBACK, /* 0x0001*/ | 211 | .type = ARPHRD_LOOPBACK, /* 0x0001*/ |
209 | .rebuild_header = eth_rebuild_header, | 212 | .rebuild_header = eth_rebuild_header, |
210 | .flags = IFF_LOOPBACK, | 213 | .flags = IFF_LOOPBACK, |
211 | .features = NETIF_F_SG|NETIF_F_FRAGLIST | 214 | .features = NETIF_F_SG | NETIF_F_FRAGLIST |
212 | |NETIF_F_NO_CSUM|NETIF_F_HIGHDMA | 215 | #ifdef LOOPBACK_TSO |
213 | |NETIF_F_LLTX, | 216 | | NETIF_F_TSO |
217 | #endif | ||
218 | | NETIF_F_NO_CSUM | NETIF_F_HIGHDMA | ||
219 | | NETIF_F_LLTX, | ||
214 | .ethtool_ops = &loopback_ethtool_ops, | 220 | .ethtool_ops = &loopback_ethtool_ops, |
215 | }; | 221 | }; |
216 | 222 | ||
diff --git a/drivers/net/macsonic.c b/drivers/net/macsonic.c
index be28c65de729..405e18365ede 100644
--- a/drivers/net/macsonic.c
+++ b/drivers/net/macsonic.c
@@ -1,6 +1,12 @@ | |||
1 | /* | 1 | /* |
2 | * macsonic.c | 2 | * macsonic.c |
3 | * | 3 | * |
4 | * (C) 2005 Finn Thain | ||
5 | * | ||
6 | * Converted to DMA API, converted to unified driver model, made it work as | ||
7 | * a module again, and from the mac68k project, introduced more 32-bit cards | ||
8 | * and dhd's support for 16-bit cards. | ||
9 | * | ||
4 | * (C) 1998 Alan Cox | 10 | * (C) 1998 Alan Cox |
5 | * | 11 | * |
6 | * Debugging Andreas Ehliar, Michael Schmitz | 12 | * Debugging Andreas Ehliar, Michael Schmitz |
@@ -26,8 +32,8 @@ | |||
26 | */ | 32 | */ |
27 | 33 | ||
28 | #include <linux/kernel.h> | 34 | #include <linux/kernel.h> |
35 | #include <linux/module.h> | ||
29 | #include <linux/types.h> | 36 | #include <linux/types.h> |
30 | #include <linux/ctype.h> | ||
31 | #include <linux/fcntl.h> | 37 | #include <linux/fcntl.h> |
32 | #include <linux/interrupt.h> | 38 | #include <linux/interrupt.h> |
33 | #include <linux/init.h> | 39 | #include <linux/init.h> |
@@ -41,8 +47,8 @@ | |||
41 | #include <linux/netdevice.h> | 47 | #include <linux/netdevice.h> |
42 | #include <linux/etherdevice.h> | 48 | #include <linux/etherdevice.h> |
43 | #include <linux/skbuff.h> | 49 | #include <linux/skbuff.h> |
44 | #include <linux/module.h> | 50 | #include <linux/device.h> |
45 | #include <linux/bitops.h> | 51 | #include <linux/dma-mapping.h> |
46 | 52 | ||
47 | #include <asm/bootinfo.h> | 53 | #include <asm/bootinfo.h> |
48 | #include <asm/system.h> | 54 | #include <asm/system.h> |
@@ -54,25 +60,28 @@ | |||
54 | #include <asm/macints.h> | 60 | #include <asm/macints.h> |
55 | #include <asm/mac_via.h> | 61 | #include <asm/mac_via.h> |
56 | 62 | ||
57 | #define SREGS_PAD(n) u16 n; | 63 | static char mac_sonic_string[] = "macsonic"; |
64 | static struct platform_device *mac_sonic_device; | ||
58 | 65 | ||
59 | #include "sonic.h" | 66 | #include "sonic.h" |
60 | 67 | ||
61 | #define SONIC_READ(reg) \ | 68 | /* These should basically be bus-size and endian independent (since |
62 | nubus_readl(base_addr+(reg)) | 69 | the SONIC is at least smart enough that it uses the same endianness |
63 | #define SONIC_WRITE(reg,val) \ | 70 | as the host, unlike certain less enlightened Macintosh NICs) */ |
64 | nubus_writel((val), base_addr+(reg)) | 71 | #define SONIC_READ(reg) (nubus_readw(dev->base_addr + (reg * 4) \ |
65 | #define sonic_read(dev, reg) \ | 72 | + lp->reg_offset)) |
66 | nubus_readl((dev)->base_addr+(reg)) | 73 | #define SONIC_WRITE(reg,val) (nubus_writew(val, dev->base_addr + (reg * 4) \ |
67 | #define sonic_write(dev, reg, val) \ | 74 | + lp->reg_offset)) |
68 | nubus_writel((val), (dev)->base_addr+(reg)) | 75 | |
69 | 76 | /* use 0 for production, 1 for verification, >1 for debug */ | |
77 | #ifdef SONIC_DEBUG | ||
78 | static unsigned int sonic_debug = SONIC_DEBUG; | ||
79 | #else | ||
80 | static unsigned int sonic_debug = 1; | ||
81 | #endif | ||
70 | 82 | ||
71 | static int sonic_debug; | ||
72 | static int sonic_version_printed; | 83 | static int sonic_version_printed; |
73 | 84 | ||
74 | static int reg_offset; | ||
75 | |||
76 | extern int mac_onboard_sonic_probe(struct net_device* dev); | 85 | extern int mac_onboard_sonic_probe(struct net_device* dev); |
77 | extern int mac_nubus_sonic_probe(struct net_device* dev); | 86 | extern int mac_nubus_sonic_probe(struct net_device* dev); |
78 | 87 | ||
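
The new SONIC_READ/SONIC_WRITE macros go through nubus_readw()/nubus_writew() and compute the register address as base_addr + reg * 4 + lp->reg_offset: the 16-bit registers sit on 4-byte strides, and the per-card reg_offset (now kept in the sonic_local private data rather than a file-scope global) lets one driver body handle cards whose register window starts at a different offset. A plain-C model of just the address arithmetic, with made-up numbers:

  #include <stdint.h>
  #include <stdio.h>

  static uintptr_t sonic_reg_addr(uintptr_t base_addr, unsigned int reg,
                                  unsigned int reg_offset)
  {
      /* 16-bit registers spaced 4 bytes apart, plus a per-card offset */
      return base_addr + (reg * 4) + reg_offset;
  }

  int main(void)
  {
      uintptr_t base = 0x50f0a000;   /* pretend SONIC register window */
      unsigned int offset = 0;       /* varies with the card variant */

      printf("register 0x2c -> 0x%lx\n",
             (unsigned long)sonic_reg_addr(base, 0x2c, offset));
      return 0;
  }
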
@@ -108,40 +117,6 @@ enum macsonic_type { | |||
108 | 117 | ||
109 | #define SONIC_READ_PROM(addr) nubus_readb(prom_addr+addr) | 118 | #define SONIC_READ_PROM(addr) nubus_readb(prom_addr+addr) |
110 | 119 | ||
111 | struct net_device * __init macsonic_probe(int unit) | ||
112 | { | ||
113 | struct net_device *dev = alloc_etherdev(0); | ||
114 | int err; | ||
115 | |||
116 | if (!dev) | ||
117 | return ERR_PTR(-ENOMEM); | ||
118 | |||
119 | if (unit >= 0) | ||
120 | sprintf(dev->name, "eth%d", unit); | ||
121 | |||
122 | SET_MODULE_OWNER(dev); | ||
123 | |||
124 | /* This will catch fatal stuff like -ENOMEM as well as success */ | ||
125 | err = mac_onboard_sonic_probe(dev); | ||
126 | if (err == 0) | ||
127 | goto found; | ||
128 | if (err != -ENODEV) | ||
129 | goto out; | ||
130 | err = mac_nubus_sonic_probe(dev); | ||
131 | if (err) | ||
132 | goto out; | ||
133 | found: | ||
134 | err = register_netdev(dev); | ||
135 | if (err) | ||
136 | goto out1; | ||
137 | return dev; | ||
138 | out1: | ||
139 | kfree(dev->priv); | ||
140 | out: | ||
141 | free_netdev(dev); | ||
142 | return ERR_PTR(err); | ||
143 | } | ||
144 | |||
145 | /* | 120 | /* |
146 | * For reversing the PROM address | 121 | * For reversing the PROM address |
147 | */ | 122 | */ |
@@ -160,103 +135,55 @@ static inline void bit_reverse_addr(unsigned char addr[6]) | |||
160 | 135 | ||
161 | int __init macsonic_init(struct net_device* dev) | 136 | int __init macsonic_init(struct net_device* dev) |
162 | { | 137 | { |
163 | struct sonic_local* lp = NULL; | 138 | struct sonic_local* lp = netdev_priv(dev); |
164 | int i; | ||
165 | 139 | ||
166 | /* Allocate the entire chunk of memory for the descriptors. | 140 | /* Allocate the entire chunk of memory for the descriptors. |
167 | Note that this cannot cross a 64K boundary. */ | 141 | Note that this cannot cross a 64K boundary. */ |
168 | for (i = 0; i < 20; i++) { | 142 | if ((lp->descriptors = dma_alloc_coherent(lp->device, |
169 | unsigned long desc_base, desc_top; | 143 | SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), |
170 | if((lp = kmalloc(sizeof(struct sonic_local), GFP_KERNEL | GFP_DMA)) == NULL) { | 144 | &lp->descriptors_laddr, GFP_KERNEL)) == NULL) { |
171 | printk(KERN_ERR "%s: couldn't allocate descriptor buffers\n", dev->name); | 145 | printk(KERN_ERR "%s: couldn't alloc DMA memory for descriptors.\n", lp->device->bus_id); |
172 | return -ENOMEM; | ||
173 | } | ||
174 | |||
175 | desc_base = (unsigned long) lp; | ||
176 | desc_top = desc_base + sizeof(struct sonic_local); | ||
177 | if ((desc_top & 0xffff) >= (desc_base & 0xffff)) | ||
178 | break; | ||
179 | /* Hmm. try again (FIXME: does this actually work?) */ | ||
180 | kfree(lp); | ||
181 | printk(KERN_DEBUG | ||
182 | "%s: didn't get continguous chunk [%08lx - %08lx], trying again\n", | ||
183 | dev->name, desc_base, desc_top); | ||
184 | } | ||
185 | |||
186 | if (lp == NULL) { | ||
187 | printk(KERN_ERR "%s: tried 20 times to allocate descriptor buffers, giving up.\n", | ||
188 | dev->name); | ||
189 | return -ENOMEM; | 146 | return -ENOMEM; |
190 | } | 147 | } |
191 | |||
192 | dev->priv = lp; | ||
193 | |||
194 | #if 0 | ||
195 | /* this code is only here as a curiousity... mainly, where the | ||
196 | fuck did SONIC_BUS_SCALE come from, and what was it supposed | ||
197 | to do? the normal allocation works great for 32 bit stuffs.. */ | ||
198 | 148 | ||
199 | /* Now set up the pointers to point to the appropriate places */ | 149 | /* Now set up the pointers to point to the appropriate places */ |
200 | lp->cda = lp->sonic_desc; | 150 | lp->cda = lp->descriptors; |
201 | lp->tda = lp->cda + (SIZEOF_SONIC_CDA * SONIC_BUS_SCALE(lp->dma_bitmode)); | 151 | lp->tda = lp->cda + (SIZEOF_SONIC_CDA |
152 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | ||
202 | lp->rda = lp->tda + (SIZEOF_SONIC_TD * SONIC_NUM_TDS | 153 | lp->rda = lp->tda + (SIZEOF_SONIC_TD * SONIC_NUM_TDS |
203 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | 154 | * SONIC_BUS_SCALE(lp->dma_bitmode)); |
204 | lp->rra = lp->rda + (SIZEOF_SONIC_RD * SONIC_NUM_RDS | 155 | lp->rra = lp->rda + (SIZEOF_SONIC_RD * SONIC_NUM_RDS |
205 | * SONIC_BUS_SCALE(lp->dma_bitmode)); | 156 | * SONIC_BUS_SCALE(lp->dma_bitmode)); |
206 | 157 | ||
207 | #endif | 158 | lp->cda_laddr = lp->descriptors_laddr; |
208 | 159 | lp->tda_laddr = lp->cda_laddr + (SIZEOF_SONIC_CDA | |
209 | memset(lp, 0, sizeof(struct sonic_local)); | 160 | * SONIC_BUS_SCALE(lp->dma_bitmode)); |
210 | 161 | lp->rda_laddr = lp->tda_laddr + (SIZEOF_SONIC_TD * SONIC_NUM_TDS | |
211 | lp->cda_laddr = (unsigned int)&(lp->cda); | 162 | * SONIC_BUS_SCALE(lp->dma_bitmode)); |
212 | lp->tda_laddr = (unsigned int)lp->tda; | 163 | lp->rra_laddr = lp->rda_laddr + (SIZEOF_SONIC_RD * SONIC_NUM_RDS |
213 | lp->rra_laddr = (unsigned int)lp->rra; | 164 | * SONIC_BUS_SCALE(lp->dma_bitmode)); |
214 | lp->rda_laddr = (unsigned int)lp->rda; | ||
215 | |||
216 | /* FIXME, maybe we should use skbs */ | ||
217 | if ((lp->rba = (char *) | ||
218 | kmalloc(SONIC_NUM_RRS * SONIC_RBSIZE, GFP_KERNEL | GFP_DMA)) == NULL) { | ||
219 | printk(KERN_ERR "%s: couldn't allocate receive buffers\n", dev->name); | ||
220 | dev->priv = NULL; | ||
221 | kfree(lp); | ||
222 | return -ENOMEM; | ||
223 | } | ||
224 | |||
225 | lp->rba_laddr = (unsigned int)lp->rba; | ||
226 | |||
227 | { | ||
228 | int rs, ds; | ||
229 | |||
230 | /* almost always 12*4096, but let's not take chances */ | ||
231 | rs = ((SONIC_NUM_RRS * SONIC_RBSIZE + 4095) / 4096) * 4096; | ||
232 | /* almost always under a page, but let's not take chances */ | ||
233 | ds = ((sizeof(struct sonic_local) + 4095) / 4096) * 4096; | ||
234 | kernel_set_cachemode(lp->rba, rs, IOMAP_NOCACHE_SER); | ||
235 | kernel_set_cachemode(lp, ds, IOMAP_NOCACHE_SER); | ||
236 | } | ||
237 | |||
238 | #if 0 | ||
239 | flush_cache_all(); | ||
240 | #endif | ||
241 | 165 | ||
242 | dev->open = sonic_open; | 166 | dev->open = sonic_open; |
243 | dev->stop = sonic_close; | 167 | dev->stop = sonic_close; |
244 | dev->hard_start_xmit = sonic_send_packet; | 168 | dev->hard_start_xmit = sonic_send_packet; |
245 | dev->get_stats = sonic_get_stats; | 169 | dev->get_stats = sonic_get_stats; |
246 | dev->set_multicast_list = &sonic_multicast_list; | 170 | dev->set_multicast_list = &sonic_multicast_list; |
171 | dev->tx_timeout = sonic_tx_timeout; | ||
172 | dev->watchdog_timeo = TX_TIMEOUT; | ||
247 | 173 | ||
248 | /* | 174 | /* |
249 | * clear tally counter | 175 | * clear tally counter |
250 | */ | 176 | */ |
251 | sonic_write(dev, SONIC_CRCT, 0xffff); | 177 | SONIC_WRITE(SONIC_CRCT, 0xffff); |
252 | sonic_write(dev, SONIC_FAET, 0xffff); | 178 | SONIC_WRITE(SONIC_FAET, 0xffff); |
253 | sonic_write(dev, SONIC_MPT, 0xffff); | 179 | SONIC_WRITE(SONIC_MPT, 0xffff); |
254 | 180 | ||
255 | return 0; | 181 | return 0; |
256 | } | 182 | } |
257 | 183 | ||
258 | int __init mac_onboard_sonic_ethernet_addr(struct net_device* dev) | 184 | int __init mac_onboard_sonic_ethernet_addr(struct net_device* dev) |
259 | { | 185 | { |
186 | struct sonic_local *lp = netdev_priv(dev); | ||
260 | const int prom_addr = ONBOARD_SONIC_PROM_BASE; | 187 | const int prom_addr = ONBOARD_SONIC_PROM_BASE; |
261 | int i; | 188 | int i; |
262 | 189 | ||
@@ -270,6 +197,7 @@ int __init mac_onboard_sonic_ethernet_addr(struct net_device* dev) | |||
270 | why this is so. */ | 197 | why this is so. */ |
271 | if (memcmp(dev->dev_addr, "\x08\x00\x07", 3) && | 198 | if (memcmp(dev->dev_addr, "\x08\x00\x07", 3) && |
272 | memcmp(dev->dev_addr, "\x00\xA0\x40", 3) && | 199 | memcmp(dev->dev_addr, "\x00\xA0\x40", 3) && |
200 | memcmp(dev->dev_addr, "\x00\x80\x19", 3) && | ||
273 | memcmp(dev->dev_addr, "\x00\x05\x02", 3)) | 201 | memcmp(dev->dev_addr, "\x00\x05\x02", 3)) |
274 | bit_reverse_addr(dev->dev_addr); | 202 | bit_reverse_addr(dev->dev_addr); |
275 | else | 203 | else |
@@ -281,22 +209,23 @@ int __init mac_onboard_sonic_ethernet_addr(struct net_device* dev) | |||
281 | the card... */ | 209 | the card... */ |
282 | if (memcmp(dev->dev_addr, "\x08\x00\x07", 3) && | 210 | if (memcmp(dev->dev_addr, "\x08\x00\x07", 3) && |
283 | memcmp(dev->dev_addr, "\x00\xA0\x40", 3) && | 211 | memcmp(dev->dev_addr, "\x00\xA0\x40", 3) && |
212 | memcmp(dev->dev_addr, "\x00\x80\x19", 3) && | ||
284 | memcmp(dev->dev_addr, "\x00\x05\x02", 3)) | 213 | memcmp(dev->dev_addr, "\x00\x05\x02", 3)) |
285 | { | 214 | { |
286 | unsigned short val; | 215 | unsigned short val; |
287 | 216 | ||
288 | printk(KERN_INFO "macsonic: PROM seems to be wrong, trying CAM entry 15\n"); | 217 | printk(KERN_INFO "macsonic: PROM seems to be wrong, trying CAM entry 15\n"); |
289 | 218 | ||
290 | sonic_write(dev, SONIC_CMD, SONIC_CR_RST); | 219 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); |
291 | sonic_write(dev, SONIC_CEP, 15); | 220 | SONIC_WRITE(SONIC_CEP, 15); |
292 | 221 | ||
293 | val = sonic_read(dev, SONIC_CAP2); | 222 | val = SONIC_READ(SONIC_CAP2); |
294 | dev->dev_addr[5] = val >> 8; | 223 | dev->dev_addr[5] = val >> 8; |
295 | dev->dev_addr[4] = val & 0xff; | 224 | dev->dev_addr[4] = val & 0xff; |
296 | val = sonic_read(dev, SONIC_CAP1); | 225 | val = SONIC_READ(SONIC_CAP1); |
297 | dev->dev_addr[3] = val >> 8; | 226 | dev->dev_addr[3] = val >> 8; |
298 | dev->dev_addr[2] = val & 0xff; | 227 | dev->dev_addr[2] = val & 0xff; |
299 | val = sonic_read(dev, SONIC_CAP0); | 228 | val = SONIC_READ(SONIC_CAP0); |
300 | dev->dev_addr[1] = val >> 8; | 229 | dev->dev_addr[1] = val >> 8; |
301 | dev->dev_addr[0] = val & 0xff; | 230 | dev->dev_addr[0] = val & 0xff; |
302 | 231 | ||
@@ -311,6 +240,7 @@ int __init mac_onboard_sonic_ethernet_addr(struct net_device* dev) | |||
311 | 240 | ||
312 | if (memcmp(dev->dev_addr, "\x08\x00\x07", 3) && | 241 | if (memcmp(dev->dev_addr, "\x08\x00\x07", 3) && |
313 | memcmp(dev->dev_addr, "\x00\xA0\x40", 3) && | 242 | memcmp(dev->dev_addr, "\x00\xA0\x40", 3) && |
243 | memcmp(dev->dev_addr, "\x00\x80\x19", 3) && | ||
314 | memcmp(dev->dev_addr, "\x00\x05\x02", 3)) | 244 | memcmp(dev->dev_addr, "\x00\x05\x02", 3)) |
315 | { | 245 | { |
316 | /* | 246 | /* |
@@ -325,8 +255,9 @@ int __init mac_onboard_sonic_probe(struct net_device* dev) | |||
325 | { | 255 | { |
326 | /* Bwahahaha */ | 256 | /* Bwahahaha */ |
327 | static int once_is_more_than_enough; | 257 | static int once_is_more_than_enough; |
328 | int i; | 258 | struct sonic_local* lp = netdev_priv(dev); |
329 | int dma_bitmode; | 259 | int sr; |
260 | int commslot = 0; | ||
330 | 261 | ||
331 | if (once_is_more_than_enough) | 262 | if (once_is_more_than_enough) |
332 | return -ENODEV; | 263 | return -ENODEV; |
@@ -335,20 +266,18 @@ int __init mac_onboard_sonic_probe(struct net_device* dev) | |||
335 | if (!MACH_IS_MAC) | 266 | if (!MACH_IS_MAC) |
336 | return -ENODEV; | 267 | return -ENODEV; |
337 | 268 | ||
338 | printk(KERN_INFO "Checking for internal Macintosh ethernet (SONIC).. "); | ||
339 | |||
340 | if (macintosh_config->ether_type != MAC_ETHER_SONIC) | 269 | if (macintosh_config->ether_type != MAC_ETHER_SONIC) |
341 | { | ||
342 | printk("none.\n"); | ||
343 | return -ENODEV; | 270 | return -ENODEV; |
344 | } | 271 | |
345 | 272 | printk(KERN_INFO "Checking for internal Macintosh ethernet (SONIC).. "); | |
273 | |||
346 | /* Bogus probing, on the models which may or may not have | 274 | /* Bogus probing, on the models which may or may not have |
347 | Ethernet (BTW, the Ethernet *is* always at the same | 275 | Ethernet (BTW, the Ethernet *is* always at the same |
348 | address, and nothing else lives there, at least if Apple's | 276 | address, and nothing else lives there, at least if Apple's |
349 | documentation is to be believed) */ | 277 | documentation is to be believed) */ |
350 | if (macintosh_config->ident == MAC_MODEL_Q630 || | 278 | if (macintosh_config->ident == MAC_MODEL_Q630 || |
351 | macintosh_config->ident == MAC_MODEL_P588 || | 279 | macintosh_config->ident == MAC_MODEL_P588 || |
280 | macintosh_config->ident == MAC_MODEL_P575 || | ||
352 | macintosh_config->ident == MAC_MODEL_C610) { | 281 | macintosh_config->ident == MAC_MODEL_C610) { |
353 | unsigned long flags; | 282 | unsigned long flags; |
354 | int card_present; | 283 | int card_present; |
@@ -361,13 +290,13 @@ int __init mac_onboard_sonic_probe(struct net_device* dev) | |||
361 | printk("none.\n"); | 290 | printk("none.\n"); |
362 | return -ENODEV; | 291 | return -ENODEV; |
363 | } | 292 | } |
293 | commslot = 1; | ||
364 | } | 294 | } |
365 | 295 | ||
366 | printk("yes\n"); | 296 | printk("yes\n"); |
367 | 297 | ||
368 | /* Danger! My arms are flailing wildly! You *must* set this | 298 | /* Danger! My arms are flailing wildly! You *must* set lp->reg_offset |
369 | before using sonic_read() */ | 299 | * and dev->base_addr before using SONIC_READ() or SONIC_WRITE() */ |
370 | |||
371 | dev->base_addr = ONBOARD_SONIC_REGISTERS; | 300 | dev->base_addr = ONBOARD_SONIC_REGISTERS; |
372 | if (via_alt_mapping) | 301 | if (via_alt_mapping) |
373 | dev->irq = IRQ_AUTO_3; | 302 | dev->irq = IRQ_AUTO_3; |
@@ -379,84 +308,66 @@ int __init mac_onboard_sonic_probe(struct net_device* dev) | |||
379 | sonic_version_printed = 1; | 308 | sonic_version_printed = 1; |
380 | } | 309 | } |
381 | printk(KERN_INFO "%s: onboard / comm-slot SONIC at 0x%08lx\n", | 310 | printk(KERN_INFO "%s: onboard / comm-slot SONIC at 0x%08lx\n", |
382 | dev->name, dev->base_addr); | 311 | lp->device->bus_id, dev->base_addr); |
383 | |||
384 | /* Now do a song and dance routine in an attempt to determine | ||
385 | the bus width */ | ||
386 | 312 | ||
387 | /* The PowerBook's SONIC is 16 bit always. */ | 313 | /* The PowerBook's SONIC is 16 bit always. */ |
388 | if (macintosh_config->ident == MAC_MODEL_PB520) { | 314 | if (macintosh_config->ident == MAC_MODEL_PB520) { |
389 | reg_offset = 0; | 315 | lp->reg_offset = 0; |
390 | dma_bitmode = 0; | 316 | lp->dma_bitmode = SONIC_BITMODE16; |
391 | } else if (macintosh_config->ident == MAC_MODEL_C610) { | 317 | sr = SONIC_READ(SONIC_SR); |
392 | reg_offset = 0; | 318 | } else if (commslot) { |
393 | dma_bitmode = 1; | ||
394 | } else { | ||
395 | /* Some of the comm-slot cards are 16 bit. But some | 319 | /* Some of the comm-slot cards are 16 bit. But some |
396 | of them are not. The 32-bit cards use offset 2 and | 320 | of them are not. The 32-bit cards use offset 2 and |
397 | pad with zeroes or sometimes ones (I think...) | 321 | have known revisions, we try reading the revision |
398 | Therefore, if we try offset 0 and get a silicon | 322 | register at offset 2, if we don't get a known revision |
399 | revision of 0, we assume 16 bit. */ | 323 | we assume 16 bit at offset 0. */ |
400 | int sr; | 324 | lp->reg_offset = 2; |
401 | 325 | lp->dma_bitmode = SONIC_BITMODE16; | |
402 | /* Technically this is not necessary since we zeroed | 326 | |
403 | it above */ | 327 | sr = SONIC_READ(SONIC_SR); |
404 | reg_offset = 0; | 328 | if (sr == 0x0004 || sr == 0x0006 || sr == 0x0100 || sr == 0x0101) |
405 | dma_bitmode = 0; | 329 | /* 83932 is 0x0004 or 0x0006, 83934 is 0x0100 or 0x0101 */ |
406 | sr = sonic_read(dev, SONIC_SR); | 330 | lp->dma_bitmode = SONIC_BITMODE32; |
407 | if (sr == 0 || sr == 0xffff) { | 331 | else { |
408 | reg_offset = 2; | 332 | lp->dma_bitmode = SONIC_BITMODE16; |
409 | /* 83932 is 0x0004, 83934 is 0x0100 or 0x0101 */ | 333 | lp->reg_offset = 0; |
410 | sr = sonic_read(dev, SONIC_SR); | 334 | sr = SONIC_READ(SONIC_SR); |
411 | dma_bitmode = 1; | ||
412 | |||
413 | } | 335 | } |
414 | printk(KERN_INFO | 336 | } else { |
415 | "%s: revision 0x%04x, using %d bit DMA and register offset %d\n", | 337 | /* All onboard cards are at offset 2 with 32 bit DMA. */ |
416 | dev->name, sr, dma_bitmode?32:16, reg_offset); | 338 | lp->reg_offset = 2; |
339 | lp->dma_bitmode = SONIC_BITMODE32; | ||
340 | sr = SONIC_READ(SONIC_SR); | ||
417 | } | 341 | } |
418 | 342 | printk(KERN_INFO | |
343 | "%s: revision 0x%04x, using %d bit DMA and register offset %d\n", | ||
344 | lp->device->bus_id, sr, lp->dma_bitmode?32:16, lp->reg_offset); | ||
419 | 345 | ||
420 | /* this carries my sincere apologies -- by the time I got to updating | 346 | #if 0 /* This is sometimes useful to find out how MacOS configured the card. */ |
421 | the driver, support for "reg_offsets" appeares nowhere in the sonic | 347 | printk(KERN_INFO "%s: DCR: 0x%04x, DCR2: 0x%04x\n", lp->device->bus_id, |
422 | code, going back for over a year. Fortunately, my Mac does't seem | 348 | SONIC_READ(SONIC_DCR) & 0xffff, SONIC_READ(SONIC_DCR2) & 0xffff); |
423 | to use whatever this was. | 349 | #endif |
424 | 350 | ||
425 | If you know how this is supposed to be implemented, either fix it, | ||
426 | or contact me (sammy@oh.verio.com) to explain what it is. --Sam */ | ||
427 | |||
428 | if(reg_offset) { | ||
429 | printk("%s: register offset unsupported. please fix this if you know what it is.\n", dev->name); | ||
430 | return -ENODEV; | ||
431 | } | ||
432 | |||
433 | /* Software reset, then initialize control registers. */ | 351 | /* Software reset, then initialize control registers. */ |
434 | sonic_write(dev, SONIC_CMD, SONIC_CR_RST); | 352 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); |
435 | sonic_write(dev, SONIC_DCR, SONIC_DCR_BMS | | 353 | |
436 | SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | SONIC_DCR_EXBUS | | 354 | SONIC_WRITE(SONIC_DCR, SONIC_DCR_EXBUS | SONIC_DCR_BMS | |
437 | (dma_bitmode ? SONIC_DCR_DW : 0)); | 355 | SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | |
356 | (lp->dma_bitmode ? SONIC_DCR_DW : 0)); | ||
438 | 357 | ||
439 | /* This *must* be written back to in order to restore the | 358 | /* This *must* be written back to in order to restore the |
440 | extended programmable output bits */ | 359 | * extended programmable output bits, as it may not have been |
441 | sonic_write(dev, SONIC_DCR2, 0); | 360 | * initialised since the hardware reset. */ |
361 | SONIC_WRITE(SONIC_DCR2, 0); | ||
442 | 362 | ||
443 | /* Clear *and* disable interrupts to be on the safe side */ | 363 | /* Clear *and* disable interrupts to be on the safe side */ |
444 | sonic_write(dev, SONIC_ISR,0x7fff); | 364 | SONIC_WRITE(SONIC_IMR, 0); |
445 | sonic_write(dev, SONIC_IMR,0); | 365 | SONIC_WRITE(SONIC_ISR, 0x7fff); |
446 | 366 | ||
447 | /* Now look for the MAC address. */ | 367 | /* Now look for the MAC address. */ |
448 | if (mac_onboard_sonic_ethernet_addr(dev) != 0) | 368 | if (mac_onboard_sonic_ethernet_addr(dev) != 0) |
449 | return -ENODEV; | 369 | return -ENODEV; |
450 | 370 | ||
451 | printk(KERN_INFO "MAC "); | ||
452 | for (i = 0; i < 6; i++) { | ||
453 | printk("%2.2x", dev->dev_addr[i]); | ||
454 | if (i < 5) | ||
455 | printk(":"); | ||
456 | } | ||
457 | |||
458 | printk(" IRQ %d\n", dev->irq); | ||
459 | |||
460 | /* Shared init code */ | 371 | /* Shared init code */ |
461 | return macsonic_init(dev); | 372 | return macsonic_init(dev); |
462 | } | 373 | } |
@@ -468,8 +379,10 @@ int __init mac_nubus_sonic_ethernet_addr(struct net_device* dev, | |||
468 | int i; | 379 | int i; |
469 | for(i = 0; i < 6; i++) | 380 | for(i = 0; i < 6; i++) |
470 | dev->dev_addr[i] = SONIC_READ_PROM(i); | 381 | dev->dev_addr[i] = SONIC_READ_PROM(i); |
471 | /* For now we are going to assume that they're all bit-reversed */ | 382 | |
472 | bit_reverse_addr(dev->dev_addr); | 383 | /* Some of the addresses are bit-reversed */ |
384 | if (id != MACSONIC_DAYNA) | ||
385 | bit_reverse_addr(dev->dev_addr); | ||
473 | 386 | ||
474 | return 0; | 387 | return 0; |
475 | } | 388 | } |
@@ -487,6 +400,15 @@ int __init macsonic_ident(struct nubus_dev* ndev) | |||
487 | else | 400 | else |
488 | return MACSONIC_APPLE; | 401 | return MACSONIC_APPLE; |
489 | } | 402 | } |
403 | |||
404 | if (ndev->dr_hw == NUBUS_DRHW_SMC9194 && | ||
405 | ndev->dr_sw == NUBUS_DRSW_DAYNA) | ||
406 | return MACSONIC_DAYNA; | ||
407 | |||
408 | if (ndev->dr_hw == NUBUS_DRHW_SONIC_LC && | ||
409 | ndev->dr_sw == 0) { /* huh? */ | ||
410 | return MACSONIC_APPLE16; | ||
411 | } | ||
490 | return -1; | 412 | return -1; |
491 | } | 413 | } |
492 | 414 | ||
@@ -494,12 +416,12 @@ int __init mac_nubus_sonic_probe(struct net_device* dev) | |||
494 | { | 416 | { |
495 | static int slots; | 417 | static int slots; |
496 | struct nubus_dev* ndev = NULL; | 418 | struct nubus_dev* ndev = NULL; |
419 | struct sonic_local* lp = netdev_priv(dev); | ||
497 | unsigned long base_addr, prom_addr; | 420 | unsigned long base_addr, prom_addr; |
498 | u16 sonic_dcr; | 421 | u16 sonic_dcr; |
499 | int id; | 422 | int id = -1; |
500 | int i; | 423 | int reg_offset, dma_bitmode; |
501 | int dma_bitmode; | 424 | |
502 | |||
503 | /* Find the first SONIC that hasn't been initialized already */ | 425 | /* Find the first SONIC that hasn't been initialized already */ |
504 | while ((ndev = nubus_find_type(NUBUS_CAT_NETWORK, | 426 | while ((ndev = nubus_find_type(NUBUS_CAT_NETWORK, |
505 | NUBUS_TYPE_ETHERNET, ndev)) != NULL) | 427 | NUBUS_TYPE_ETHERNET, ndev)) != NULL) |
@@ -521,51 +443,52 @@ int __init mac_nubus_sonic_probe(struct net_device* dev) | |||
521 | case MACSONIC_DUODOCK: | 443 | case MACSONIC_DUODOCK: |
522 | base_addr = ndev->board->slot_addr + DUODOCK_SONIC_REGISTERS; | 444 | base_addr = ndev->board->slot_addr + DUODOCK_SONIC_REGISTERS; |
523 | prom_addr = ndev->board->slot_addr + DUODOCK_SONIC_PROM_BASE; | 445 | prom_addr = ndev->board->slot_addr + DUODOCK_SONIC_PROM_BASE; |
524 | sonic_dcr = SONIC_DCR_EXBUS | SONIC_DCR_RFT0 | SONIC_DCR_RFT1 | 446 | sonic_dcr = SONIC_DCR_EXBUS | SONIC_DCR_RFT0 | SONIC_DCR_RFT1 | |
525 | | SONIC_DCR_TFT0; | 447 | SONIC_DCR_TFT0; |
526 | reg_offset = 2; | 448 | reg_offset = 2; |
527 | dma_bitmode = 1; | 449 | dma_bitmode = SONIC_BITMODE32; |
528 | break; | 450 | break; |
529 | case MACSONIC_APPLE: | 451 | case MACSONIC_APPLE: |
530 | base_addr = ndev->board->slot_addr + APPLE_SONIC_REGISTERS; | 452 | base_addr = ndev->board->slot_addr + APPLE_SONIC_REGISTERS; |
531 | prom_addr = ndev->board->slot_addr + APPLE_SONIC_PROM_BASE; | 453 | prom_addr = ndev->board->slot_addr + APPLE_SONIC_PROM_BASE; |
532 | sonic_dcr = SONIC_DCR_BMS | SONIC_DCR_RFT1 | SONIC_DCR_TFT0; | 454 | sonic_dcr = SONIC_DCR_BMS | SONIC_DCR_RFT1 | SONIC_DCR_TFT0; |
533 | reg_offset = 0; | 455 | reg_offset = 0; |
534 | dma_bitmode = 1; | 456 | dma_bitmode = SONIC_BITMODE32; |
535 | break; | 457 | break; |
536 | case MACSONIC_APPLE16: | 458 | case MACSONIC_APPLE16: |
537 | base_addr = ndev->board->slot_addr + APPLE_SONIC_REGISTERS; | 459 | base_addr = ndev->board->slot_addr + APPLE_SONIC_REGISTERS; |
538 | prom_addr = ndev->board->slot_addr + APPLE_SONIC_PROM_BASE; | 460 | prom_addr = ndev->board->slot_addr + APPLE_SONIC_PROM_BASE; |
539 | sonic_dcr = SONIC_DCR_EXBUS | 461 | sonic_dcr = SONIC_DCR_EXBUS | SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | |
540 | | SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | 462 | SONIC_DCR_PO1 | SONIC_DCR_BMS; |
541 | | SONIC_DCR_PO1 | SONIC_DCR_BMS; | ||
542 | reg_offset = 0; | 463 | reg_offset = 0; |
543 | dma_bitmode = 0; | 464 | dma_bitmode = SONIC_BITMODE16; |
544 | break; | 465 | break; |
545 | case MACSONIC_DAYNALINK: | 466 | case MACSONIC_DAYNALINK: |
546 | base_addr = ndev->board->slot_addr + APPLE_SONIC_REGISTERS; | 467 | base_addr = ndev->board->slot_addr + APPLE_SONIC_REGISTERS; |
547 | prom_addr = ndev->board->slot_addr + DAYNALINK_PROM_BASE; | 468 | prom_addr = ndev->board->slot_addr + DAYNALINK_PROM_BASE; |
548 | sonic_dcr = SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | 469 | sonic_dcr = SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | |
549 | | SONIC_DCR_PO1 | SONIC_DCR_BMS; | 470 | SONIC_DCR_PO1 | SONIC_DCR_BMS; |
550 | reg_offset = 0; | 471 | reg_offset = 0; |
551 | dma_bitmode = 0; | 472 | dma_bitmode = SONIC_BITMODE16; |
552 | break; | 473 | break; |
553 | case MACSONIC_DAYNA: | 474 | case MACSONIC_DAYNA: |
554 | base_addr = ndev->board->slot_addr + DAYNA_SONIC_REGISTERS; | 475 | base_addr = ndev->board->slot_addr + DAYNA_SONIC_REGISTERS; |
555 | prom_addr = ndev->board->slot_addr + DAYNA_SONIC_MAC_ADDR; | 476 | prom_addr = ndev->board->slot_addr + DAYNA_SONIC_MAC_ADDR; |
556 | sonic_dcr = SONIC_DCR_BMS | 477 | sonic_dcr = SONIC_DCR_BMS | |
557 | | SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | SONIC_DCR_PO1; | 478 | SONIC_DCR_RFT1 | SONIC_DCR_TFT0 | SONIC_DCR_PO1; |
558 | reg_offset = 0; | 479 | reg_offset = 0; |
559 | dma_bitmode = 0; | 480 | dma_bitmode = SONIC_BITMODE16; |
560 | break; | 481 | break; |
561 | default: | 482 | default: |
562 | printk(KERN_ERR "macsonic: WTF, id is %d\n", id); | 483 | printk(KERN_ERR "macsonic: WTF, id is %d\n", id); |
563 | return -ENODEV; | 484 | return -ENODEV; |
564 | } | 485 | } |
565 | 486 | ||
566 | /* Danger! My arms are flailing wildly! You *must* set this | 487 | /* Danger! My arms are flailing wildly! You *must* set lp->reg_offset |
567 | before using sonic_read() */ | 488 | * and dev->base_addr before using SONIC_READ() or SONIC_WRITE() */ |
568 | dev->base_addr = base_addr; | 489 | dev->base_addr = base_addr; |
490 | lp->reg_offset = reg_offset; | ||
491 | lp->dma_bitmode = dma_bitmode; | ||
569 | dev->irq = SLOT2IRQ(ndev->board->slot); | 492 | dev->irq = SLOT2IRQ(ndev->board->slot); |
570 | 493 | ||
571 | if (!sonic_version_printed) { | 494 | if (!sonic_version_printed) { |
@@ -573,29 +496,66 @@ int __init mac_nubus_sonic_probe(struct net_device* dev) | |||
573 | sonic_version_printed = 1; | 496 | sonic_version_printed = 1; |
574 | } | 497 | } |
575 | printk(KERN_INFO "%s: %s in slot %X\n", | 498 | printk(KERN_INFO "%s: %s in slot %X\n", |
576 | dev->name, ndev->board->name, ndev->board->slot); | 499 | lp->device->bus_id, ndev->board->name, ndev->board->slot); |
577 | printk(KERN_INFO "%s: revision 0x%04x, using %d bit DMA and register offset %d\n", | 500 | printk(KERN_INFO "%s: revision 0x%04x, using %d bit DMA and register offset %d\n", |
578 | dev->name, sonic_read(dev, SONIC_SR), dma_bitmode?32:16, reg_offset); | 501 | lp->device->bus_id, SONIC_READ(SONIC_SR), dma_bitmode?32:16, reg_offset); |
579 | 502 | ||
580 | if(reg_offset) { | 503 | #if 0 /* This is sometimes useful to find out how MacOS configured the card. */ |
581 | printk("%s: register offset unsupported. please fix this if you know what it is.\n", dev->name); | 504 | printk(KERN_INFO "%s: DCR: 0x%04x, DCR2: 0x%04x\n", lp->device->bus_id, |
582 | return -ENODEV; | 505 | SONIC_READ(SONIC_DCR) & 0xffff, SONIC_READ(SONIC_DCR2) & 0xffff); |
583 | } | 506 | #endif |
584 | 507 | ||
585 | /* Software reset, then initialize control registers. */ | 508 | /* Software reset, then initialize control registers. */ |
586 | sonic_write(dev, SONIC_CMD, SONIC_CR_RST); | 509 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); |
587 | sonic_write(dev, SONIC_DCR, sonic_dcr | 510 | SONIC_WRITE(SONIC_DCR, sonic_dcr | (dma_bitmode ? SONIC_DCR_DW : 0)); |
588 | | (dma_bitmode ? SONIC_DCR_DW : 0)); | 511 | /* This *must* be written back to in order to restore the |
512 | * extended programmable output bits, since it may not have been | ||
513 | * initialised since the hardware reset. */ | ||
514 | SONIC_WRITE(SONIC_DCR2, 0); | ||
589 | 515 | ||
590 | /* Clear *and* disable interrupts to be on the safe side */ | 516 | /* Clear *and* disable interrupts to be on the safe side */ |
591 | sonic_write(dev, SONIC_ISR,0x7fff); | 517 | SONIC_WRITE(SONIC_IMR, 0); |
592 | sonic_write(dev, SONIC_IMR,0); | 518 | SONIC_WRITE(SONIC_ISR, 0x7fff); |
593 | 519 | ||
594 | /* Now look for the MAC address. */ | 520 | /* Now look for the MAC address. */ |
595 | if (mac_nubus_sonic_ethernet_addr(dev, prom_addr, id) != 0) | 521 | if (mac_nubus_sonic_ethernet_addr(dev, prom_addr, id) != 0) |
596 | return -ENODEV; | 522 | return -ENODEV; |
597 | 523 | ||
598 | printk(KERN_INFO "MAC "); | 524 | /* Shared init code */ |
525 | return macsonic_init(dev); | ||
526 | } | ||
527 | |||
528 | static int __init mac_sonic_probe(struct device *device) | ||
529 | { | ||
530 | struct net_device *dev; | ||
531 | struct sonic_local *lp; | ||
532 | int err; | ||
533 | int i; | ||
534 | |||
535 | dev = alloc_etherdev(sizeof(struct sonic_local)); | ||
536 | if (!dev) | ||
537 | return -ENOMEM; | ||
538 | |||
539 | lp = netdev_priv(dev); | ||
540 | lp->device = device; | ||
541 | SET_NETDEV_DEV(dev, device); | ||
542 | SET_MODULE_OWNER(dev); | ||
543 | |||
544 | /* This will catch fatal stuff like -ENOMEM as well as success */ | ||
545 | err = mac_onboard_sonic_probe(dev); | ||
546 | if (err == 0) | ||
547 | goto found; | ||
548 | if (err != -ENODEV) | ||
549 | goto out; | ||
550 | err = mac_nubus_sonic_probe(dev); | ||
551 | if (err) | ||
552 | goto out; | ||
553 | found: | ||
554 | err = register_netdev(dev); | ||
555 | if (err) | ||
556 | goto out; | ||
557 | |||
558 | printk("%s: MAC ", dev->name); | ||
599 | for (i = 0; i < 6; i++) { | 559 | for (i = 0; i < 6; i++) { |
600 | printk("%2.2x", dev->dev_addr[i]); | 560 | printk("%2.2x", dev->dev_addr[i]); |
601 | if (i < 5) | 561 | if (i < 5) |
@@ -603,55 +563,95 @@ int __init mac_nubus_sonic_probe(struct net_device* dev) | |||
603 | } | 563 | } |
604 | printk(" IRQ %d\n", dev->irq); | 564 | printk(" IRQ %d\n", dev->irq); |
605 | 565 | ||
606 | /* Shared init code */ | 566 | return 0; |
607 | return macsonic_init(dev); | ||
608 | } | ||
609 | 567 | ||
610 | #ifdef MODULE | 568 | out: |
611 | static struct net_device *dev_macsonic; | 569 | free_netdev(dev); |
612 | 570 | ||
613 | MODULE_PARM(sonic_debug, "i"); | 571 | return err; |
572 | } | ||
573 | |||
574 | MODULE_DESCRIPTION("Macintosh SONIC ethernet driver"); | ||
575 | module_param(sonic_debug, int, 0); | ||
614 | MODULE_PARM_DESC(sonic_debug, "macsonic debug level (1-4)"); | 576 | MODULE_PARM_DESC(sonic_debug, "macsonic debug level (1-4)"); |
615 | 577 | ||
616 | int | 578 | #define SONIC_IRQ_FLAG IRQ_FLG_FAST |
617 | init_module(void) | 579 | |
580 | #include "sonic.c" | ||
581 | |||
582 | static int __devexit mac_sonic_device_remove (struct device *device) | ||
618 | { | 583 | { |
619 | dev_macsonic = macsonic_probe(-1); | 584 | struct net_device *dev = device->driver_data; |
620 | if (IS_ERR(dev_macsonic)) { | 585 | struct sonic_local* lp = netdev_priv(dev); |
621 | printk(KERN_WARNING "macsonic.c: No card found\n"); | 586 | |
622 | return PTR_ERR(dev_macsonic); | 587 | unregister_netdev (dev); |
623 | } | 588 | dma_free_coherent(lp->device, SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), |
589 | lp->descriptors, lp->descriptors_laddr); | ||
590 | free_netdev (dev); | ||
591 | |||
624 | return 0; | 592 | return 0; |
625 | } | 593 | } |
626 | 594 | ||
627 | void | 595 | static struct device_driver mac_sonic_driver = { |
628 | cleanup_module(void) | 596 | .name = mac_sonic_string, |
597 | .bus = &platform_bus_type, | ||
598 | .probe = mac_sonic_probe, | ||
599 | .remove = __devexit_p(mac_sonic_device_remove), | ||
600 | }; | ||
601 | |||
602 | static void mac_sonic_platform_release(struct device *device) | ||
629 | { | 603 | { |
630 | unregister_netdev(dev_macsonic); | 604 | struct platform_device *pldev; |
631 | kfree(dev_macsonic->priv); | 605 | |
632 | free_netdev(dev_macsonic); | 606 | /* free device */ |
607 | pldev = to_platform_device (device); | ||
608 | kfree (pldev); | ||
633 | } | 609 | } |
634 | #endif /* MODULE */ | ||
635 | 610 | ||
611 | static int __init mac_sonic_init_module(void) | ||
612 | { | ||
613 | struct platform_device *pldev; | ||
614 | int err; | ||
636 | 615 | ||
637 | #define vdma_alloc(foo, bar) ((u32)foo) | 616 | if ((err = driver_register(&mac_sonic_driver))) { |
638 | #define vdma_free(baz) | 617 | printk(KERN_ERR "Driver registration failed\n"); |
639 | #define sonic_chiptomem(bat) (bat) | 618 | return err; |
640 | #define PHYSADDR(quux) (quux) | 619 | } |
641 | #define CPHYSADDR(quux) (quux) | ||
642 | 620 | ||
643 | #define sonic_request_irq request_irq | 621 | mac_sonic_device = NULL; |
644 | #define sonic_free_irq free_irq | ||
645 | 622 | ||
646 | #include "sonic.c" | 623 | if (!(pldev = kmalloc (sizeof (*pldev), GFP_KERNEL))) { |
624 | goto out_unregister; | ||
625 | } | ||
647 | 626 | ||
648 | /* | 627 | memset(pldev, 0, sizeof (*pldev)); |
649 | * Local variables: | 628 | pldev->name = mac_sonic_string; |
650 | * compile-command: "m68k-linux-gcc -D__KERNEL__ -I../../include -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer -pipe -fno-strength-reduce -ffixed-a2 -DMODULE -DMODVERSIONS -include ../../include/linux/modversions.h -c -o macsonic.o macsonic.c" | 629 | pldev->id = 0; |
651 | * version-control: t | 630 | pldev->dev.release = mac_sonic_platform_release; |
652 | * kept-new-versions: 5 | 631 | mac_sonic_device = pldev; |
653 | * c-indent-level: 8 | 632 | |
654 | * tab-width: 8 | 633 | if (platform_device_register (pldev)) { |
655 | * End: | 634 | kfree(pldev); |
656 | * | 635 | mac_sonic_device = NULL; |
657 | */ | 636 | } |
637 | |||
638 | return 0; | ||
639 | |||
640 | out_unregister: | ||
641 | platform_device_unregister(pldev); | ||
642 | |||
643 | return -ENOMEM; | ||
644 | } | ||
645 | |||
646 | static void __exit mac_sonic_cleanup_module(void) | ||
647 | { | ||
648 | driver_unregister(&mac_sonic_driver); | ||
649 | |||
650 | if (mac_sonic_device) { | ||
651 | platform_device_unregister(mac_sonic_device); | ||
652 | mac_sonic_device = NULL; | ||
653 | } | ||
654 | } | ||
655 | |||
656 | module_init(mac_sonic_init_module); | ||
657 | module_exit(mac_sonic_cleanup_module); | ||
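The descriptor handling in the converted macsonic code hinges on a DMA-coherent block: the laddr arithmetic near the top of macsonic_init() offsets into lp->descriptors_laddr, and mac_sonic_device_remove() releases that block with dma_free_coherent(). The matching allocation is not visible in this hunk, so the helper below is only a sketch of what it is expected to look like; the function name and its argument list are illustrative, not the driver's actual code.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Sketch of the allocation that pairs with the dma_free_coherent() call in
 * mac_sonic_device_remove() above.  The real call sits in driver code not
 * shown here; this helper and its arguments are illustrative only. */
static void *sonic_alloc_descriptors(struct device *dev, size_t size,
                                     dma_addr_t *laddr)
{
        /* Coherent memory: the CPU and the SONIC DMA engine see the same
         * contents without explicit cache fiddling (which the deleted
         * kernel_set_cachemode() calls used to arrange by hand). */
        return dma_alloc_coherent(dev, size, laddr, GFP_KERNEL);
}

To mirror the free above, such a helper would be invoked with lp->device, SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode) and &lp->descriptors_laddr.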
diff --git a/drivers/net/mv643xx_eth.c b/drivers/net/mv643xx_eth.c index 0405e1f0d3df..fb6b232069d6 100644 --- a/drivers/net/mv643xx_eth.c +++ b/drivers/net/mv643xx_eth.c | |||
@@ -1157,16 +1157,20 @@ static int mv643xx_eth_start_xmit(struct sk_buff *skb, struct net_device *dev) | |||
1157 | if (!skb_shinfo(skb)->nr_frags) { | 1157 | if (!skb_shinfo(skb)->nr_frags) { |
1158 | linear: | 1158 | linear: |
1159 | if (skb->ip_summed != CHECKSUM_HW) { | 1159 | if (skb->ip_summed != CHECKSUM_HW) { |
1160 | /* Errata BTS #50, IHL must be 5 if no HW checksum */ | ||
1160 | pkt_info.cmd_sts = ETH_TX_ENABLE_INTERRUPT | | 1161 | pkt_info.cmd_sts = ETH_TX_ENABLE_INTERRUPT | |
1161 | ETH_TX_FIRST_DESC | ETH_TX_LAST_DESC; | 1162 | ETH_TX_FIRST_DESC | |
1163 | ETH_TX_LAST_DESC | | ||
1164 | 5 << ETH_TX_IHL_SHIFT; | ||
1162 | pkt_info.l4i_chk = 0; | 1165 | pkt_info.l4i_chk = 0; |
1163 | } else { | 1166 | } else { |
1164 | u32 ipheader = skb->nh.iph->ihl << 11; | ||
1165 | 1167 | ||
1166 | pkt_info.cmd_sts = ETH_TX_ENABLE_INTERRUPT | | 1168 | pkt_info.cmd_sts = ETH_TX_ENABLE_INTERRUPT | |
1167 | ETH_TX_FIRST_DESC | ETH_TX_LAST_DESC | | 1169 | ETH_TX_FIRST_DESC | |
1168 | ETH_GEN_TCP_UDP_CHECKSUM | | 1170 | ETH_TX_LAST_DESC | |
1169 | ETH_GEN_IP_V_4_CHECKSUM | ipheader; | 1171 | ETH_GEN_TCP_UDP_CHECKSUM | |
1172 | ETH_GEN_IP_V_4_CHECKSUM | | ||
1173 | skb->nh.iph->ihl << ETH_TX_IHL_SHIFT; | ||
1170 | /* CPU already calculated pseudo header checksum. */ | 1174 | /* CPU already calculated pseudo header checksum. */ |
1171 | if (skb->nh.iph->protocol == IPPROTO_UDP) { | 1175 | if (skb->nh.iph->protocol == IPPROTO_UDP) { |
1172 | pkt_info.cmd_sts |= ETH_UDP_FRAME; | 1176 | pkt_info.cmd_sts |= ETH_UDP_FRAME; |
@@ -1193,7 +1197,6 @@ linear: | |||
1193 | stats->tx_bytes += pkt_info.byte_cnt; | 1197 | stats->tx_bytes += pkt_info.byte_cnt; |
1194 | } else { | 1198 | } else { |
1195 | unsigned int frag; | 1199 | unsigned int frag; |
1196 | u32 ipheader; | ||
1197 | 1200 | ||
1198 | /* Since hardware can't handle unaligned fragments smaller | 1201 | /* Since hardware can't handle unaligned fragments smaller |
1199 | * than 9 bytes, if we find any, we linearize the skb | 1202 | * than 9 bytes, if we find any, we linearize the skb |
@@ -1222,12 +1225,16 @@ linear: | |||
1222 | DMA_TO_DEVICE); | 1225 | DMA_TO_DEVICE); |
1223 | pkt_info.l4i_chk = 0; | 1226 | pkt_info.l4i_chk = 0; |
1224 | pkt_info.return_info = 0; | 1227 | pkt_info.return_info = 0; |
1225 | pkt_info.cmd_sts = ETH_TX_FIRST_DESC; | ||
1226 | 1228 | ||
1227 | if (skb->ip_summed == CHECKSUM_HW) { | 1229 | if (skb->ip_summed != CHECKSUM_HW) |
1228 | ipheader = skb->nh.iph->ihl << 11; | 1230 | /* Errata BTS #50, IHL must be 5 if no HW checksum */ |
1229 | pkt_info.cmd_sts |= ETH_GEN_TCP_UDP_CHECKSUM | | 1231 | pkt_info.cmd_sts = ETH_TX_FIRST_DESC | |
1230 | ETH_GEN_IP_V_4_CHECKSUM | ipheader; | 1232 | 5 << ETH_TX_IHL_SHIFT; |
1233 | else { | ||
1234 | pkt_info.cmd_sts = ETH_TX_FIRST_DESC | | ||
1235 | ETH_GEN_TCP_UDP_CHECKSUM | | ||
1236 | ETH_GEN_IP_V_4_CHECKSUM | | ||
1237 | skb->nh.iph->ihl << ETH_TX_IHL_SHIFT; | ||
1231 | /* CPU already calculated pseudo header checksum. */ | 1238 | /* CPU already calculated pseudo header checksum. */ |
1232 | if (skb->nh.iph->protocol == IPPROTO_UDP) { | 1239 | if (skb->nh.iph->protocol == IPPROTO_UDP) { |
1233 | pkt_info.cmd_sts |= ETH_UDP_FRAME; | 1240 | pkt_info.cmd_sts |= ETH_UDP_FRAME; |
diff --git a/drivers/net/mv643xx_eth.h b/drivers/net/mv643xx_eth.h index 57c4f8fbfdb6..7678b59c2952 100644 --- a/drivers/net/mv643xx_eth.h +++ b/drivers/net/mv643xx_eth.h | |||
@@ -49,7 +49,7 @@ | |||
49 | /* Checksum offload for Tx works for most packets, but | 49 | /* Checksum offload for Tx works for most packets, but |
50 | * fails if previous packet sent did not use hw csum | 50 | * fails if previous packet sent did not use hw csum |
51 | */ | 51 | */ |
52 | #undef MV643XX_CHECKSUM_OFFLOAD_TX | 52 | #define MV643XX_CHECKSUM_OFFLOAD_TX |
53 | #define MV643XX_NAPI | 53 | #define MV643XX_NAPI |
54 | #define MV643XX_TX_FAST_REFILL | 54 | #define MV643XX_TX_FAST_REFILL |
55 | #undef MV643XX_RX_QUEUE_FILL_ON_TASK /* Does not work, yet */ | 55 | #undef MV643XX_RX_QUEUE_FILL_ON_TASK /* Does not work, yet */ |
@@ -217,6 +217,8 @@ | |||
217 | #define ETH_TX_ENABLE_INTERRUPT (BIT23) | 217 | #define ETH_TX_ENABLE_INTERRUPT (BIT23) |
218 | #define ETH_AUTO_MODE (BIT30) | 218 | #define ETH_AUTO_MODE (BIT30) |
219 | 219 | ||
220 | #define ETH_TX_IHL_SHIFT 11 | ||
221 | |||
220 | /* typedefs */ | 222 | /* typedefs */ |
221 | 223 | ||
222 | typedef enum _eth_func_ret_status { | 224 | typedef enum _eth_func_ret_status { |
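The new ETH_TX_IHL_SHIFT constant replaces the hard-coded "<< 11" in the transmit path: the descriptor's command/status word carries the IP header length (in 32-bit words) starting at bit 11, and per errata BTS #50 a fixed IHL of 5 must be supplied whenever no hardware checksum is requested. A small sketch of the two cases, using only the definitions above; the helper name is illustrative and not part of the driver.

#include <linux/types.h>

#define ETH_TX_IHL_SHIFT 11             /* as added to mv643xx_eth.h above */

/* Illustrative helper: compute the IHL bits that start_xmit() ORs into
 * the descriptor's cmd_sts word. */
static u32 mv643xx_tx_ihl_bits(int hw_csum, unsigned int ihl)
{
        if (!hw_csum)
                return 5 << ETH_TX_IHL_SHIFT;   /* errata BTS #50: IHL must be 5 */
        return ihl << ETH_TX_IHL_SHIFT;         /* skb->nh.iph->ihl, in 32-bit words */
}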
diff --git a/drivers/net/pci-skeleton.c b/drivers/net/pci-skeleton.c index 4a391ea0f58a..a1ac4bd1696e 100644 --- a/drivers/net/pci-skeleton.c +++ b/drivers/net/pci-skeleton.c | |||
@@ -486,9 +486,9 @@ struct netdrv_private { | |||
486 | MODULE_AUTHOR ("Jeff Garzik <jgarzik@pobox.com>"); | 486 | MODULE_AUTHOR ("Jeff Garzik <jgarzik@pobox.com>"); |
487 | MODULE_DESCRIPTION ("Skeleton for a PCI Fast Ethernet driver"); | 487 | MODULE_DESCRIPTION ("Skeleton for a PCI Fast Ethernet driver"); |
488 | MODULE_LICENSE("GPL"); | 488 | MODULE_LICENSE("GPL"); |
489 | MODULE_PARM (multicast_filter_limit, "i"); | 489 | module_param(multicast_filter_limit, int, 0); |
490 | MODULE_PARM (max_interrupt_work, "i"); | 490 | module_param(max_interrupt_work, int, 0); |
491 | MODULE_PARM (media, "1-" __MODULE_STRING(8) "i"); | 491 | module_param_array(media, int, NULL, 0); |
492 | MODULE_PARM_DESC (multicast_filter_limit, "pci-skeleton maximum number of filtered multicast addresses"); | 492 | MODULE_PARM_DESC (multicast_filter_limit, "pci-skeleton maximum number of filtered multicast addresses"); |
493 | MODULE_PARM_DESC (max_interrupt_work, "pci-skeleton maximum events handled per interrupt"); | 493 | MODULE_PARM_DESC (max_interrupt_work, "pci-skeleton maximum events handled per interrupt"); |
494 | MODULE_PARM_DESC (media, "pci-skeleton: Bits 0-3: media type, bit 17: full duplex"); | 494 | MODULE_PARM_DESC (media, "pci-skeleton: Bits 0-3: media type, bit 17: full duplex"); |
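The MODULE_PARM() lines above are converted to the module_param() family, which takes the variable, its type, and a sysfs permission mask (0 here, i.e. not exported); module_param_array() additionally takes an optional pointer that receives the number of elements actually supplied, or NULL as in this patch. A minimal self-contained sketch of the same declarations; the variable definitions and defaults are illustrative, the real ones live elsewhere in pci-skeleton.c.

#include <linux/module.h>
#include <linux/moduleparam.h>

static int multicast_filter_limit = 32;   /* illustrative default */
static int max_interrupt_work = 20;       /* illustrative default */
static int media[8];

module_param(multicast_filter_limit, int, 0);
module_param(max_interrupt_work, int, 0);
module_param_array(media, int, NULL, 0);
MODULE_PARM_DESC(multicast_filter_limit, "maximum number of filtered multicast addresses");
MODULE_PARM_DESC(max_interrupt_work, "maximum events handled per interrupt");
MODULE_PARM_DESC(media, "Bits 0-3: media type, bit 17: full duplex");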
diff --git a/drivers/net/pcmcia/fmvj18x_cs.c b/drivers/net/pcmcia/fmvj18x_cs.c index 9d8197bb293a..384a736a0d2f 100644 --- a/drivers/net/pcmcia/fmvj18x_cs.c +++ b/drivers/net/pcmcia/fmvj18x_cs.c | |||
@@ -134,7 +134,7 @@ typedef struct local_info_t { | |||
134 | u_char mc_filter[8]; | 134 | u_char mc_filter[8]; |
135 | } local_info_t; | 135 | } local_info_t; |
136 | 136 | ||
137 | #define MC_FILTERBREAK 64 | 137 | #define MC_FILTERBREAK 8 |
138 | 138 | ||
139 | /*====================================================================*/ | 139 | /*====================================================================*/ |
140 | /* | 140 | /* |
@@ -1012,7 +1012,7 @@ static void fjn_reset(struct net_device *dev) | |||
1012 | outb(BANK_1U, ioaddr + CONFIG_1); | 1012 | outb(BANK_1U, ioaddr + CONFIG_1); |
1013 | 1013 | ||
1014 | /* set the multicast table to accept none. */ | 1014 | /* set the multicast table to accept none. */ |
1015 | for (i = 0; i < 6; i++) | 1015 | for (i = 0; i < 8; i++) |
1016 | outb(0x00, ioaddr + MAR_ADR + i); | 1016 | outb(0x00, ioaddr + MAR_ADR + i); |
1017 | 1017 | ||
1018 | /* Switch to bank 2 (runtime mode) */ | 1018 | /* Switch to bank 2 (runtime mode) */ |
@@ -1269,6 +1269,16 @@ static void set_rx_mode(struct net_device *dev) | |||
1269 | u_long flags; | 1269 | u_long flags; |
1270 | int i; | 1270 | int i; |
1271 | 1271 | ||
1272 | int saved_config_0 = inb(ioaddr + CONFIG_0); | ||
1273 | |||
1274 | local_irq_save(flags); | ||
1275 | |||
1276 | /* Disable Tx and Rx */ | ||
1277 | if (sram_config == 0) | ||
1278 | outb(CONFIG0_RST, ioaddr + CONFIG_0); | ||
1279 | else | ||
1280 | outb(CONFIG0_RST_1, ioaddr + CONFIG_0); | ||
1281 | |||
1272 | if (dev->flags & IFF_PROMISC) { | 1282 | if (dev->flags & IFF_PROMISC) { |
1273 | /* Unconditionally log net taps. */ | 1283 | /* Unconditionally log net taps. */ |
1274 | printk("%s: Promiscuous mode enabled.\n", dev->name); | 1284 | printk("%s: Promiscuous mode enabled.\n", dev->name); |
@@ -1290,20 +1300,23 @@ static void set_rx_mode(struct net_device *dev) | |||
1290 | for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count; | 1300 | for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count; |
1291 | i++, mclist = mclist->next) { | 1301 | i++, mclist = mclist->next) { |
1292 | unsigned int bit = | 1302 | unsigned int bit = |
1293 | ether_crc_le(ETH_ALEN, mclist->dmi_addr) & 0x3f; | 1303 | ether_crc_le(ETH_ALEN, mclist->dmi_addr) >> 26; |
1294 | mc_filter[bit >> 3] |= (1 << bit); | 1304 | mc_filter[bit >> 3] |= (1 << (bit & 7)); |
1295 | } | 1305 | } |
1306 | outb(2, ioaddr + RX_MODE); /* Use normal mode. */ | ||
1296 | } | 1307 | } |
1297 | 1308 | ||
1298 | local_irq_save(flags); | ||
1299 | if (memcmp(mc_filter, lp->mc_filter, sizeof(mc_filter))) { | 1309 | if (memcmp(mc_filter, lp->mc_filter, sizeof(mc_filter))) { |
1300 | int saved_bank = inb(ioaddr + CONFIG_1); | 1310 | int saved_bank = inb(ioaddr + CONFIG_1); |
1301 | /* Switch to bank 1 and set the multicast table. */ | 1311 | /* Switch to bank 1 and set the multicast table. */ |
1302 | outb(0xe4, ioaddr + CONFIG_1); | 1312 | outb(0xe4, ioaddr + CONFIG_1); |
1303 | for (i = 0; i < 8; i++) | 1313 | for (i = 0; i < 8; i++) |
1304 | outb(mc_filter[i], ioaddr + 8 + i); | 1314 | outb(mc_filter[i], ioaddr + MAR_ADR + i); |
1305 | memcpy(lp->mc_filter, mc_filter, sizeof(mc_filter)); | 1315 | memcpy(lp->mc_filter, mc_filter, sizeof(mc_filter)); |
1306 | outb(saved_bank, ioaddr + CONFIG_1); | 1316 | outb(saved_bank, ioaddr + CONFIG_1); |
1307 | } | 1317 | } |
1318 | |||
1319 | outb(saved_config_0, ioaddr + CONFIG_0); | ||
1320 | |||
1308 | local_irq_restore(flags); | 1321 | local_irq_restore(flags); |
1309 | } | 1322 | } |
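The set_rx_mode() fix changes the hash from the low six bits of the little-endian CRC (& 0x3f) to the top six bits (>> 26), and splits the resulting 0-63 index into a byte offset (bit >> 3) and a bit position (bit & 7) within the 8-byte filter that is written to the MAR registers. The computation in isolation, with the conventional bit-reflected CRC-32 that ether_crc_le() implements written out for illustration:

#include <stdint.h>

/* Bit-reflected CRC-32 (polynomial 0xEDB88320), i.e. what ether_crc_le()
 * computes over the 6-byte Ethernet address. */
static uint32_t crc32_le(int len, const uint8_t *data)
{
        uint32_t crc = 0xffffffff;
        int i, bit;

        for (i = 0; i < len; i++) {
                uint8_t byte = data[i];

                for (bit = 0; bit < 8; bit++, byte >>= 1) {
                        if ((crc ^ byte) & 1)
                                crc = (crc >> 1) ^ 0xedb88320;
                        else
                                crc >>= 1;
                }
        }
        return crc;
}

/* Set the filter bit for one multicast address, matching the patched
 * set_rx_mode() loop. */
static void set_filter_bit(uint8_t mc_filter[8], const uint8_t addr[6])
{
        unsigned int bit = crc32_le(6, addr) >> 26;     /* top 6 bits: 0..63 */

        mc_filter[bit >> 3] |= 1 << (bit & 7);          /* byte index, bit index */
}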
diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig new file mode 100644 index 000000000000..6a2fe3583478 --- /dev/null +++ b/drivers/net/phy/Kconfig | |||
@@ -0,0 +1,57 @@ | |||
1 | # | ||
2 | # PHY Layer Configuration | ||
3 | # | ||
4 | |||
5 | menu "PHY device support" | ||
6 | |||
7 | config PHYLIB | ||
8 | tristate "PHY Device support and infrastructure" | ||
9 | depends on NET_ETHERNET | ||
10 | help | ||
11 | Ethernet controllers are usually attached to PHY | ||
12 | devices. This option provides infrastructure for | ||
13 | managing PHY devices. | ||
14 | |||
15 | config PHYCONTROL | ||
16 | bool " Support for automatically handling PHY state changes" | ||
17 | depends on PHYLIB | ||
18 | help | ||
19 | Adds code to perform all the work for keeping PHY link | ||
20 | state (speed/duplex/etc) up-to-date. Also handles | ||
21 | interrupts. | ||
22 | |||
23 | comment "MII PHY device drivers" | ||
24 | depends on PHYLIB | ||
25 | |||
26 | config MARVELL_PHY | ||
27 | tristate "Drivers for Marvell PHYs" | ||
28 | depends on PHYLIB | ||
29 | ---help--- | ||
30 | Currently has a driver for the 88E1011S | ||
31 | |||
32 | config DAVICOM_PHY | ||
33 | tristate "Drivers for Davicom PHYs" | ||
34 | depends on PHYLIB | ||
35 | ---help--- | ||
36 | Currently supports dm9161e and dm9131 | ||
37 | |||
38 | config QSEMI_PHY | ||
39 | tristate "Drivers for Quality Semiconductor PHYs" | ||
40 | depends on PHYLIB | ||
41 | ---help--- | ||
42 | Currently supports the qs6612 | ||
43 | |||
44 | config LXT_PHY | ||
45 | tristate "Drivers for the Intel LXT PHYs" | ||
46 | depends on PHYLIB | ||
47 | ---help--- | ||
48 | Currently supports the lxt970, lxt971 | ||
49 | |||
50 | config CICADA_PHY | ||
51 | tristate "Drivers for the Cicada PHYs" | ||
52 | depends on PHYLIB | ||
53 | ---help--- | ||
54 | Currently supports the cis8204 | ||
55 | |||
56 | endmenu | ||
57 | |||
diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile new file mode 100644 index 000000000000..e4116a5fbb4c --- /dev/null +++ b/drivers/net/phy/Makefile | |||
@@ -0,0 +1,10 @@ | |||
1 | # Makefile for Linux PHY drivers | ||
2 | |||
3 | libphy-objs := phy.o phy_device.o mdio_bus.o | ||
4 | |||
5 | obj-$(CONFIG_PHYLIB) += libphy.o | ||
6 | obj-$(CONFIG_MARVELL_PHY) += marvell.o | ||
7 | obj-$(CONFIG_DAVICOM_PHY) += davicom.o | ||
8 | obj-$(CONFIG_CICADA_PHY) += cicada.o | ||
9 | obj-$(CONFIG_LXT_PHY) += lxt.o | ||
10 | obj-$(CONFIG_QSEMI_PHY) += qsemi.o | ||
diff --git a/drivers/net/phy/cicada.c b/drivers/net/phy/cicada.c new file mode 100644 index 000000000000..c47fb2ecd147 --- /dev/null +++ b/drivers/net/phy/cicada.c | |||
@@ -0,0 +1,134 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/cicada.c | ||
3 | * | ||
4 | * Driver for Cicada PHYs | ||
5 | * | ||
6 | * Author: Andy Fleming | ||
7 | * | ||
8 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify it | ||
11 | * under the terms of the GNU General Public License as published by the | ||
12 | * Free Software Foundation; either version 2 of the License, or (at your | ||
13 | * option) any later version. | ||
14 | * | ||
15 | */ | ||
16 | #include <linux/config.h> | ||
17 | #include <linux/kernel.h> | ||
18 | #include <linux/sched.h> | ||
19 | #include <linux/string.h> | ||
20 | #include <linux/errno.h> | ||
21 | #include <linux/unistd.h> | ||
22 | #include <linux/slab.h> | ||
23 | #include <linux/interrupt.h> | ||
24 | #include <linux/init.h> | ||
25 | #include <linux/delay.h> | ||
26 | #include <linux/netdevice.h> | ||
27 | #include <linux/etherdevice.h> | ||
28 | #include <linux/skbuff.h> | ||
29 | #include <linux/spinlock.h> | ||
30 | #include <linux/mm.h> | ||
31 | #include <linux/module.h> | ||
32 | #include <linux/version.h> | ||
33 | #include <linux/mii.h> | ||
34 | #include <linux/ethtool.h> | ||
35 | #include <linux/phy.h> | ||
36 | |||
37 | #include <asm/io.h> | ||
38 | #include <asm/irq.h> | ||
39 | #include <asm/uaccess.h> | ||
40 | |||
41 | /* Cicada Extended Control Register 1 */ | ||
42 | #define MII_CIS8201_EXT_CON1 0x17 | ||
43 | #define MII_CIS8201_EXTCON1_INIT 0x0000 | ||
44 | |||
45 | /* Cicada Interrupt Mask Register */ | ||
46 | #define MII_CIS8201_IMASK 0x19 | ||
47 | #define MII_CIS8201_IMASK_IEN 0x8000 | ||
48 | #define MII_CIS8201_IMASK_SPEED 0x4000 | ||
49 | #define MII_CIS8201_IMASK_LINK 0x2000 | ||
50 | #define MII_CIS8201_IMASK_DUPLEX 0x1000 | ||
51 | #define MII_CIS8201_IMASK_MASK 0xf000 | ||
52 | |||
53 | /* Cicada Interrupt Status Register */ | ||
54 | #define MII_CIS8201_ISTAT 0x1a | ||
55 | #define MII_CIS8201_ISTAT_STATUS 0x8000 | ||
56 | #define MII_CIS8201_ISTAT_SPEED 0x4000 | ||
57 | #define MII_CIS8201_ISTAT_LINK 0x2000 | ||
58 | #define MII_CIS8201_ISTAT_DUPLEX 0x1000 | ||
59 | |||
60 | /* Cicada Auxiliary Control/Status Register */ | ||
61 | #define MII_CIS8201_AUX_CONSTAT 0x1c | ||
62 | #define MII_CIS8201_AUXCONSTAT_INIT 0x0004 | ||
63 | #define MII_CIS8201_AUXCONSTAT_DUPLEX 0x0020 | ||
64 | #define MII_CIS8201_AUXCONSTAT_SPEED 0x0018 | ||
65 | #define MII_CIS8201_AUXCONSTAT_GBIT 0x0010 | ||
66 | #define MII_CIS8201_AUXCONSTAT_100 0x0008 | ||
67 | |||
68 | MODULE_DESCRIPTION("Cicada PHY driver"); | ||
69 | MODULE_AUTHOR("Andy Fleming"); | ||
70 | MODULE_LICENSE("GPL"); | ||
71 | |||
72 | static int cis820x_config_init(struct phy_device *phydev) | ||
73 | { | ||
74 | int err; | ||
75 | |||
76 | err = phy_write(phydev, MII_CIS8201_AUX_CONSTAT, | ||
77 | MII_CIS8201_AUXCONSTAT_INIT); | ||
78 | |||
79 | if (err < 0) | ||
80 | return err; | ||
81 | |||
82 | err = phy_write(phydev, MII_CIS8201_EXT_CON1, | ||
83 | MII_CIS8201_EXTCON1_INIT); | ||
84 | |||
85 | return err; | ||
86 | } | ||
87 | |||
88 | static int cis820x_ack_interrupt(struct phy_device *phydev) | ||
89 | { | ||
90 | int err = phy_read(phydev, MII_CIS8201_ISTAT); | ||
91 | |||
92 | return (err < 0) ? err : 0; | ||
93 | } | ||
94 | |||
95 | static int cis820x_config_intr(struct phy_device *phydev) | ||
96 | { | ||
97 | int err; | ||
98 | |||
99 | if(phydev->interrupts == PHY_INTERRUPT_ENABLED) | ||
100 | err = phy_write(phydev, MII_CIS8201_IMASK, | ||
101 | MII_CIS8201_IMASK_MASK); | ||
102 | else | ||
103 | err = phy_write(phydev, MII_CIS8201_IMASK, 0); | ||
104 | |||
105 | return err; | ||
106 | } | ||
107 | |||
108 | /* Cicada 820x */ | ||
109 | static struct phy_driver cis8204_driver = { | ||
110 | .phy_id = 0x000fc440, | ||
111 | .name = "Cicada Cis8204", | ||
112 | .phy_id_mask = 0x000fffc0, | ||
113 | .features = PHY_GBIT_FEATURES, | ||
114 | .flags = PHY_HAS_INTERRUPT, | ||
115 | .config_init = &cis820x_config_init, | ||
116 | .config_aneg = &genphy_config_aneg, | ||
117 | .read_status = &genphy_read_status, | ||
118 | .ack_interrupt = &cis820x_ack_interrupt, | ||
119 | .config_intr = &cis820x_config_intr, | ||
120 | .driver = { .owner = THIS_MODULE,}, | ||
121 | }; | ||
122 | |||
123 | static int __init cis8204_init(void) | ||
124 | { | ||
125 | return phy_driver_register(&cis8204_driver); | ||
126 | } | ||
127 | |||
128 | static void __exit cis8204_exit(void) | ||
129 | { | ||
130 | phy_driver_unregister(&cis8204_driver); | ||
131 | } | ||
132 | |||
133 | module_init(cis8204_init); | ||
134 | module_exit(cis8204_exit); | ||
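cicada.c above, and davicom.c, lxt.c and marvell.c below, all follow the same shape: a phy_driver with an ID/mask pair, a feature set, optional config_init and interrupt hooks, the generic genphy_* helpers for autonegotiation and link status, and a module that simply registers and unregisters the structure. A stripped-down skeleton of that pattern for a hypothetical PHY; the ID, mask and name are placeholders, not a real device.

#include <linux/module.h>
#include <linux/mii.h>
#include <linux/phy.h>

/* Hypothetical device: the ID, mask and name below are placeholders. */
static struct phy_driver example_phy_driver = {
        .phy_id         = 0x00112233,
        .phy_id_mask    = 0x0ffffff0,
        .name           = "Example PHY",
        .features       = PHY_BASIC_FEATURES,
        .config_aneg    = genphy_config_aneg,   /* generic autonegotiation */
        .read_status    = genphy_read_status,   /* generic link/speed/duplex */
        .driver         = { .owner = THIS_MODULE, },
};

static int __init example_phy_init(void)
{
        return phy_driver_register(&example_phy_driver);
}

static void __exit example_phy_exit(void)
{
        phy_driver_unregister(&example_phy_driver);
}

module_init(example_phy_init);
module_exit(example_phy_exit);
MODULE_LICENSE("GPL");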
diff --git a/drivers/net/phy/davicom.c b/drivers/net/phy/davicom.c new file mode 100644 index 000000000000..6caf499fae32 --- /dev/null +++ b/drivers/net/phy/davicom.c | |||
@@ -0,0 +1,195 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/davicom.c | ||
3 | * | ||
4 | * Driver for Davicom PHYs | ||
5 | * | ||
6 | * Author: Andy Fleming | ||
7 | * | ||
8 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify it | ||
11 | * under the terms of the GNU General Public License as published by the | ||
12 | * Free Software Foundation; either version 2 of the License, or (at your | ||
13 | * option) any later version. | ||
14 | * | ||
15 | */ | ||
16 | #include <linux/config.h> | ||
17 | #include <linux/kernel.h> | ||
18 | #include <linux/sched.h> | ||
19 | #include <linux/string.h> | ||
20 | #include <linux/errno.h> | ||
21 | #include <linux/unistd.h> | ||
22 | #include <linux/slab.h> | ||
23 | #include <linux/interrupt.h> | ||
24 | #include <linux/init.h> | ||
25 | #include <linux/delay.h> | ||
26 | #include <linux/netdevice.h> | ||
27 | #include <linux/etherdevice.h> | ||
28 | #include <linux/skbuff.h> | ||
29 | #include <linux/spinlock.h> | ||
30 | #include <linux/mm.h> | ||
31 | #include <linux/module.h> | ||
32 | #include <linux/version.h> | ||
33 | #include <linux/mii.h> | ||
34 | #include <linux/ethtool.h> | ||
35 | #include <linux/phy.h> | ||
36 | |||
37 | #include <asm/io.h> | ||
38 | #include <asm/irq.h> | ||
39 | #include <asm/uaccess.h> | ||
40 | |||
41 | #define MII_DM9161_SCR 0x10 | ||
42 | #define MII_DM9161_SCR_INIT 0x0610 | ||
43 | |||
44 | /* DM9161 Interrupt Register */ | ||
45 | #define MII_DM9161_INTR 0x15 | ||
46 | #define MII_DM9161_INTR_PEND 0x8000 | ||
47 | #define MII_DM9161_INTR_DPLX_MASK 0x0800 | ||
48 | #define MII_DM9161_INTR_SPD_MASK 0x0400 | ||
49 | #define MII_DM9161_INTR_LINK_MASK 0x0200 | ||
50 | #define MII_DM9161_INTR_MASK 0x0100 | ||
51 | #define MII_DM9161_INTR_DPLX_CHANGE 0x0010 | ||
52 | #define MII_DM9161_INTR_SPD_CHANGE 0x0008 | ||
53 | #define MII_DM9161_INTR_LINK_CHANGE 0x0004 | ||
54 | #define MII_DM9161_INTR_INIT 0x0000 | ||
55 | #define MII_DM9161_INTR_STOP \ | ||
56 | (MII_DM9161_INTR_DPLX_MASK | MII_DM9161_INTR_SPD_MASK \ | ||
57 | | MII_DM9161_INTR_LINK_MASK | MII_DM9161_INTR_MASK) | ||
58 | |||
59 | /* DM9161 10BT Configuration/Status */ | ||
60 | #define MII_DM9161_10BTCSR 0x12 | ||
61 | #define MII_DM9161_10BTCSR_INIT 0x7800 | ||
62 | |||
63 | MODULE_DESCRIPTION("Davicom PHY driver"); | ||
64 | MODULE_AUTHOR("Andy Fleming"); | ||
65 | MODULE_LICENSE("GPL"); | ||
66 | |||
67 | |||
68 | #define DM9161_DELAY 1 | ||
69 | static int dm9161_config_intr(struct phy_device *phydev) | ||
70 | { | ||
71 | int temp; | ||
72 | |||
73 | temp = phy_read(phydev, MII_DM9161_INTR); | ||
74 | |||
75 | if (temp < 0) | ||
76 | return temp; | ||
77 | |||
78 | if(PHY_INTERRUPT_ENABLED == phydev->interrupts ) | ||
79 | temp &= ~(MII_DM9161_INTR_STOP); | ||
80 | else | ||
81 | temp |= MII_DM9161_INTR_STOP; | ||
82 | |||
83 | temp = phy_write(phydev, MII_DM9161_INTR, temp); | ||
84 | |||
85 | return temp; | ||
86 | } | ||
87 | |||
88 | static int dm9161_config_aneg(struct phy_device *phydev) | ||
89 | { | ||
90 | int err; | ||
91 | |||
92 | /* Isolate the PHY */ | ||
93 | err = phy_write(phydev, MII_BMCR, BMCR_ISOLATE); | ||
94 | |||
95 | if (err < 0) | ||
96 | return err; | ||
97 | |||
98 | /* Configure the new settings */ | ||
99 | err = genphy_config_aneg(phydev); | ||
100 | |||
101 | if (err < 0) | ||
102 | return err; | ||
103 | |||
104 | return 0; | ||
105 | } | ||
106 | |||
107 | static int dm9161_config_init(struct phy_device *phydev) | ||
108 | { | ||
109 | int err; | ||
110 | |||
111 | /* Isolate the PHY */ | ||
112 | err = phy_write(phydev, MII_BMCR, BMCR_ISOLATE); | ||
113 | |||
114 | if (err < 0) | ||
115 | return err; | ||
116 | |||
117 | /* Do not bypass the scrambler/descrambler */ | ||
118 | err = phy_write(phydev, MII_DM9161_SCR, MII_DM9161_SCR_INIT); | ||
119 | |||
120 | if (err < 0) | ||
121 | return err; | ||
122 | |||
123 | /* Clear 10BTCSR to default */ | ||
124 | err = phy_write(phydev, MII_DM9161_10BTCSR, MII_DM9161_10BTCSR_INIT); | ||
125 | |||
126 | if (err < 0) | ||
127 | return err; | ||
128 | |||
129 | /* Reconnect the PHY, and enable Autonegotiation */ | ||
130 | err = phy_write(phydev, MII_BMCR, BMCR_ANENABLE); | ||
131 | |||
132 | if (err < 0) | ||
133 | return err; | ||
134 | |||
135 | return 0; | ||
136 | } | ||
137 | |||
138 | static int dm9161_ack_interrupt(struct phy_device *phydev) | ||
139 | { | ||
140 | int err = phy_read(phydev, MII_DM9161_INTR); | ||
141 | |||
142 | return (err < 0) ? err : 0; | ||
143 | } | ||
144 | |||
145 | static struct phy_driver dm9161_driver = { | ||
146 | .phy_id = 0x0181b880, | ||
147 | .name = "Davicom DM9161E", | ||
148 | .phy_id_mask = 0x0ffffff0, | ||
149 | .features = PHY_BASIC_FEATURES, | ||
150 | .config_init = dm9161_config_init, | ||
151 | .config_aneg = dm9161_config_aneg, | ||
152 | .read_status = genphy_read_status, | ||
153 | .driver = { .owner = THIS_MODULE,}, | ||
154 | }; | ||
155 | |||
156 | static struct phy_driver dm9131_driver = { | ||
157 | .phy_id = 0x00181b80, | ||
158 | .name = "Davicom DM9131", | ||
159 | .phy_id_mask = 0x0ffffff0, | ||
160 | .features = PHY_BASIC_FEATURES, | ||
161 | .flags = PHY_HAS_INTERRUPT, | ||
162 | .config_aneg = genphy_config_aneg, | ||
163 | .read_status = genphy_read_status, | ||
164 | .ack_interrupt = dm9161_ack_interrupt, | ||
165 | .config_intr = dm9161_config_intr, | ||
166 | .driver = { .owner = THIS_MODULE,}, | ||
167 | }; | ||
168 | |||
169 | static int __init davicom_init(void) | ||
170 | { | ||
171 | int ret; | ||
172 | |||
173 | ret = phy_driver_register(&dm9161_driver); | ||
174 | if (ret) | ||
175 | goto err1; | ||
176 | |||
177 | ret = phy_driver_register(&dm9131_driver); | ||
178 | if (ret) | ||
179 | goto err2; | ||
180 | return 0; | ||
181 | |||
182 | err2: | ||
183 | phy_driver_unregister(&dm9161_driver); | ||
184 | err1: | ||
185 | return ret; | ||
186 | } | ||
187 | |||
188 | static void __exit davicom_exit(void) | ||
189 | { | ||
190 | phy_driver_unregister(&dm9161_driver); | ||
191 | phy_driver_unregister(&dm9131_driver); | ||
192 | } | ||
193 | |||
194 | module_init(davicom_init); | ||
195 | module_exit(davicom_exit); | ||
diff --git a/drivers/net/phy/lxt.c b/drivers/net/phy/lxt.c new file mode 100644 index 000000000000..4c840448ec86 --- /dev/null +++ b/drivers/net/phy/lxt.c | |||
@@ -0,0 +1,179 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/lxt.c | ||
3 | * | ||
4 | * Driver for Intel LXT PHYs | ||
5 | * | ||
6 | * Author: Andy Fleming | ||
7 | * | ||
8 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify it | ||
11 | * under the terms of the GNU General Public License as published by the | ||
12 | * Free Software Foundation; either version 2 of the License, or (at your | ||
13 | * option) any later version. | ||
14 | * | ||
15 | */ | ||
16 | #include <linux/config.h> | ||
17 | #include <linux/kernel.h> | ||
18 | #include <linux/sched.h> | ||
19 | #include <linux/string.h> | ||
20 | #include <linux/errno.h> | ||
21 | #include <linux/unistd.h> | ||
22 | #include <linux/slab.h> | ||
23 | #include <linux/interrupt.h> | ||
24 | #include <linux/init.h> | ||
25 | #include <linux/delay.h> | ||
26 | #include <linux/netdevice.h> | ||
27 | #include <linux/etherdevice.h> | ||
28 | #include <linux/skbuff.h> | ||
29 | #include <linux/spinlock.h> | ||
30 | #include <linux/mm.h> | ||
31 | #include <linux/module.h> | ||
32 | #include <linux/version.h> | ||
33 | #include <linux/mii.h> | ||
34 | #include <linux/ethtool.h> | ||
35 | #include <linux/phy.h> | ||
36 | |||
37 | #include <asm/io.h> | ||
38 | #include <asm/irq.h> | ||
39 | #include <asm/uaccess.h> | ||
40 | |||
41 | /* The Level one LXT970 is used by many boards */ | ||
42 | |||
43 | #define MII_LXT970_IER 17 /* Interrupt Enable Register */ | ||
44 | |||
45 | #define MII_LXT970_IER_IEN 0x0002 | ||
46 | |||
47 | #define MII_LXT970_ISR 18 /* Interrupt Status Register */ | ||
48 | |||
49 | #define MII_LXT970_CONFIG 19 /* Configuration Register */ | ||
50 | |||
51 | /* ------------------------------------------------------------------------- */ | ||
52 | /* The Level one LXT971 is used on some of my custom boards */ | ||
53 | |||
54 | /* register definitions for the 971 */ | ||
55 | #define MII_LXT971_IER 18 /* Interrupt Enable Register */ | ||
56 | #define MII_LXT971_IER_IEN 0x00f2 | ||
57 | |||
58 | #define MII_LXT971_ISR 19 /* Interrupt Status Register */ | ||
59 | |||
60 | |||
61 | MODULE_DESCRIPTION("Intel LXT PHY driver"); | ||
62 | MODULE_AUTHOR("Andy Fleming"); | ||
63 | MODULE_LICENSE("GPL"); | ||
64 | |||
65 | static int lxt970_ack_interrupt(struct phy_device *phydev) | ||
66 | { | ||
67 | int err; | ||
68 | |||
69 | err = phy_read(phydev, MII_BMSR); | ||
70 | |||
71 | if (err < 0) | ||
72 | return err; | ||
73 | |||
74 | err = phy_read(phydev, MII_LXT970_ISR); | ||
75 | |||
76 | if (err < 0) | ||
77 | return err; | ||
78 | |||
79 | return 0; | ||
80 | } | ||
81 | |||
82 | static int lxt970_config_intr(struct phy_device *phydev) | ||
83 | { | ||
84 | int err; | ||
85 | |||
86 | if(phydev->interrupts == PHY_INTERRUPT_ENABLED) | ||
87 | err = phy_write(phydev, MII_LXT970_IER, MII_LXT970_IER_IEN); | ||
88 | else | ||
89 | err = phy_write(phydev, MII_LXT970_IER, 0); | ||
90 | |||
91 | return err; | ||
92 | } | ||
93 | |||
94 | static int lxt970_config_init(struct phy_device *phydev) | ||
95 | { | ||
96 | int err; | ||
97 | |||
98 | err = phy_write(phydev, MII_LXT970_CONFIG, 0); | ||
99 | |||
100 | return err; | ||
101 | } | ||
102 | |||
103 | |||
104 | static int lxt971_ack_interrupt(struct phy_device *phydev) | ||
105 | { | ||
106 | int err = phy_read(phydev, MII_LXT971_ISR); | ||
107 | |||
108 | if (err < 0) | ||
109 | return err; | ||
110 | |||
111 | return 0; | ||
112 | } | ||
113 | |||
114 | static int lxt971_config_intr(struct phy_device *phydev) | ||
115 | { | ||
116 | int err; | ||
117 | |||
118 | if(phydev->interrupts == PHY_INTERRUPT_ENABLED) | ||
119 | err = phy_write(phydev, MII_LXT971_IER, MII_LXT971_IER_IEN); | ||
120 | else | ||
121 | err = phy_write(phydev, MII_LXT971_IER, 0); | ||
122 | |||
123 | return err; | ||
124 | } | ||
125 | |||
126 | static struct phy_driver lxt970_driver = { | ||
127 | .phy_id = 0x07810000, | ||
128 | .name = "LXT970", | ||
129 | .phy_id_mask = 0x0fffffff, | ||
130 | .features = PHY_BASIC_FEATURES, | ||
131 | .flags = PHY_HAS_INTERRUPT, | ||
132 | .config_init = lxt970_config_init, | ||
133 | .config_aneg = genphy_config_aneg, | ||
134 | .read_status = genphy_read_status, | ||
135 | .ack_interrupt = lxt970_ack_interrupt, | ||
136 | .config_intr = lxt970_config_intr, | ||
137 | .driver = { .owner = THIS_MODULE,}, | ||
138 | }; | ||
139 | |||
140 | static struct phy_driver lxt971_driver = { | ||
141 | .phy_id = 0x0001378e, | ||
142 | .name = "LXT971", | ||
143 | .phy_id_mask = 0x0fffffff, | ||
144 | .features = PHY_BASIC_FEATURES, | ||
145 | .flags = PHY_HAS_INTERRUPT, | ||
146 | .config_aneg = genphy_config_aneg, | ||
147 | .read_status = genphy_read_status, | ||
148 | .ack_interrupt = lxt971_ack_interrupt, | ||
149 | .config_intr = lxt971_config_intr, | ||
150 | .driver = { .owner = THIS_MODULE,}, | ||
151 | }; | ||
152 | |||
153 | static int __init lxt_init(void) | ||
154 | { | ||
155 | int ret; | ||
156 | |||
157 | ret = phy_driver_register(&lxt970_driver); | ||
158 | if (ret) | ||
159 | goto err1; | ||
160 | |||
161 | ret = phy_driver_register(&lxt971_driver); | ||
162 | if (ret) | ||
163 | goto err2; | ||
164 | return 0; | ||
165 | |||
166 | err2: | ||
167 | phy_driver_unregister(&lxt970_driver); | ||
168 | err1: | ||
169 | return ret; | ||
170 | } | ||
171 | |||
172 | static void __exit lxt_exit(void) | ||
173 | { | ||
174 | phy_driver_unregister(&lxt970_driver); | ||
175 | phy_driver_unregister(&lxt971_driver); | ||
176 | } | ||
177 | |||
178 | module_init(lxt_init); | ||
179 | module_exit(lxt_exit); | ||
diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c new file mode 100644 index 000000000000..4a72b025006b --- /dev/null +++ b/drivers/net/phy/marvell.c | |||
@@ -0,0 +1,140 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/marvell.c | ||
3 | * | ||
4 | * Driver for Marvell PHYs | ||
5 | * | ||
6 | * Author: Andy Fleming | ||
7 | * | ||
8 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify it | ||
11 | * under the terms of the GNU General Public License as published by the | ||
12 | * Free Software Foundation; either version 2 of the License, or (at your | ||
13 | * option) any later version. | ||
14 | * | ||
15 | */ | ||
16 | #include <linux/config.h> | ||
17 | #include <linux/kernel.h> | ||
18 | #include <linux/sched.h> | ||
19 | #include <linux/string.h> | ||
20 | #include <linux/errno.h> | ||
21 | #include <linux/unistd.h> | ||
22 | #include <linux/slab.h> | ||
23 | #include <linux/interrupt.h> | ||
24 | #include <linux/init.h> | ||
25 | #include <linux/delay.h> | ||
26 | #include <linux/netdevice.h> | ||
27 | #include <linux/etherdevice.h> | ||
28 | #include <linux/skbuff.h> | ||
29 | #include <linux/spinlock.h> | ||
30 | #include <linux/mm.h> | ||
31 | #include <linux/module.h> | ||
32 | #include <linux/version.h> | ||
33 | #include <linux/mii.h> | ||
34 | #include <linux/ethtool.h> | ||
35 | #include <linux/phy.h> | ||
36 | |||
37 | #include <asm/io.h> | ||
38 | #include <asm/irq.h> | ||
39 | #include <asm/uaccess.h> | ||
40 | |||
41 | #define MII_M1011_IEVENT 0x13 | ||
42 | #define MII_M1011_IEVENT_CLEAR 0x0000 | ||
43 | |||
44 | #define MII_M1011_IMASK 0x12 | ||
45 | #define MII_M1011_IMASK_INIT 0x6400 | ||
46 | #define MII_M1011_IMASK_CLEAR 0x0000 | ||
47 | |||
48 | MODULE_DESCRIPTION("Marvell PHY driver"); | ||
49 | MODULE_AUTHOR("Andy Fleming"); | ||
50 | MODULE_LICENSE("GPL"); | ||
51 | |||
52 | static int marvell_ack_interrupt(struct phy_device *phydev) | ||
53 | { | ||
54 | int err; | ||
55 | |||
56 | /* Clear the interrupts by reading the reg */ | ||
57 | err = phy_read(phydev, MII_M1011_IEVENT); | ||
58 | |||
59 | if (err < 0) | ||
60 | return err; | ||
61 | |||
62 | return 0; | ||
63 | } | ||
64 | |||
65 | static int marvell_config_intr(struct phy_device *phydev) | ||
66 | { | ||
67 | int err; | ||
68 | |||
69 | if(phydev->interrupts == PHY_INTERRUPT_ENABLED) | ||
70 | err = phy_write(phydev, MII_M1011_IMASK, MII_M1011_IMASK_INIT); | ||
71 | else | ||
72 | err = phy_write(phydev, MII_M1011_IMASK, MII_M1011_IMASK_CLEAR); | ||
73 | |||
74 | return err; | ||
75 | } | ||
76 | |||
77 | static int marvell_config_aneg(struct phy_device *phydev) | ||
78 | { | ||
79 | int err; | ||
80 | |||
81 | /* The Marvell PHY has an erratum which requires | ||
82 | * that certain registers get written in order | ||
83 | * to restart autonegotiation */ | ||
84 | err = phy_write(phydev, MII_BMCR, BMCR_RESET); | ||
85 | |||
86 | if (err < 0) | ||
87 | return err; | ||
88 | |||
89 | err = phy_write(phydev, 0x1d, 0x1f); | ||
90 | if (err < 0) | ||
91 | return err; | ||
92 | |||
93 | err = phy_write(phydev, 0x1e, 0x200c); | ||
94 | if (err < 0) | ||
95 | return err; | ||
96 | |||
97 | err = phy_write(phydev, 0x1d, 0x5); | ||
98 | if (err < 0) | ||
99 | return err; | ||
100 | |||
101 | err = phy_write(phydev, 0x1e, 0); | ||
102 | if (err < 0) | ||
103 | return err; | ||
104 | |||
105 | err = phy_write(phydev, 0x1e, 0x100); | ||
106 | if (err < 0) | ||
107 | return err; | ||
108 | |||
109 | |||
110 | err = genphy_config_aneg(phydev); | ||
111 | |||
112 | return err; | ||
113 | } | ||
114 | |||
115 | |||
116 | static struct phy_driver m88e1101_driver = { | ||
117 | .phy_id = 0x01410c00, | ||
118 | .phy_id_mask = 0xffffff00, | ||
119 | .name = "Marvell 88E1101", | ||
120 | .features = PHY_GBIT_FEATURES, | ||
121 | .flags = PHY_HAS_INTERRUPT, | ||
122 | .config_aneg = marvell_config_aneg, | ||
123 | .read_status = genphy_read_status, | ||
124 | .ack_interrupt = marvell_ack_interrupt, | ||
125 | .config_intr = marvell_config_intr, | ||
126 | .driver = { .owner = THIS_MODULE,}, | ||
127 | }; | ||
128 | |||
129 | static int __init marvell_init(void) | ||
130 | { | ||
131 | return phy_driver_register(&m88e1101_driver); | ||
132 | } | ||
133 | |||
134 | static void __exit marvell_exit(void) | ||
135 | { | ||
136 | phy_driver_unregister(&m88e1101_driver); | ||
137 | } | ||
138 | |||
139 | module_init(marvell_init); | ||
140 | module_exit(marvell_exit); | ||
diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c new file mode 100644 index 000000000000..41f62c0c5fcb --- /dev/null +++ b/drivers/net/phy/mdio_bus.c | |||
@@ -0,0 +1,176 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/mdio_bus.c | ||
3 | * | ||
4 | * MDIO Bus interface | ||
5 | * | ||
6 | * Author: Andy Fleming | ||
7 | * | ||
8 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify it | ||
11 | * under the terms of the GNU General Public License as published by the | ||
12 | * Free Software Foundation; either version 2 of the License, or (at your | ||
13 | * option) any later version. | ||
14 | * | ||
15 | */ | ||
16 | #include <linux/config.h> | ||
17 | #include <linux/kernel.h> | ||
18 | #include <linux/sched.h> | ||
19 | #include <linux/string.h> | ||
20 | #include <linux/errno.h> | ||
21 | #include <linux/unistd.h> | ||
22 | #include <linux/slab.h> | ||
23 | #include <linux/interrupt.h> | ||
24 | #include <linux/init.h> | ||
25 | #include <linux/delay.h> | ||
26 | #include <linux/netdevice.h> | ||
27 | #include <linux/etherdevice.h> | ||
28 | #include <linux/skbuff.h> | ||
29 | #include <linux/spinlock.h> | ||
30 | #include <linux/mm.h> | ||
31 | #include <linux/module.h> | ||
32 | #include <linux/version.h> | ||
33 | #include <linux/mii.h> | ||
34 | #include <linux/ethtool.h> | ||
35 | #include <linux/phy.h> | ||
36 | |||
37 | #include <asm/io.h> | ||
38 | #include <asm/irq.h> | ||
39 | #include <asm/uaccess.h> | ||
40 | |||
41 | /* mdiobus_register | ||
42 | * | ||
43 | * description: Called by a bus driver to bring up all the PHYs | ||
44 | * on a given bus, and attach them to the bus | ||
45 | */ | ||
46 | int mdiobus_register(struct mii_bus *bus) | ||
47 | { | ||
48 | int i; | ||
49 | int err = 0; | ||
50 | |||
51 | if (NULL == bus || NULL == bus->name || | ||
52 | NULL == bus->read || | ||
53 | NULL == bus->write) | ||
54 | return -EINVAL; | ||
55 | |||
56 | spin_lock_init(&bus->mdio_lock); | ||
57 | |||
58 | if (bus->reset) | ||
59 | bus->reset(bus); | ||
60 | |||
61 | for (i = 0; i < PHY_MAX_ADDR; i++) { | ||
62 | struct phy_device *phydev; | ||
63 | |||
64 | phydev = get_phy_device(bus, i); | ||
65 | |||
66 | if (IS_ERR(phydev)) | ||
67 | return PTR_ERR(phydev); | ||
68 | |||
69 | /* There's a PHY at this address | ||
70 | * We need to set: | ||
71 | * 1) IRQ | ||
72 | * 2) bus_id | ||
73 | * 3) parent | ||
74 | * 4) bus | ||
75 | * 5) mii_bus | ||
76 | * And, we need to register it */ | ||
77 | if (phydev) { | ||
78 | phydev->irq = bus->irq[i]; | ||
79 | |||
80 | phydev->dev.parent = bus->dev; | ||
81 | phydev->dev.bus = &mdio_bus_type; | ||
82 | sprintf(phydev->dev.bus_id, "phy%d:%d", bus->id, i); | ||
83 | |||
84 | phydev->bus = bus; | ||
85 | |||
86 | err = device_register(&phydev->dev); | ||
87 | |||
88 | if (err) | ||
89 | printk(KERN_ERR "phy %d failed to register\n", | ||
90 | i); | ||
91 | } | ||
92 | |||
93 | bus->phy_map[i] = phydev; | ||
94 | } | ||
95 | |||
96 | pr_info("%s: probed\n", bus->name); | ||
97 | |||
98 | return err; | ||
99 | } | ||
100 | EXPORT_SYMBOL(mdiobus_register); | ||
101 | |||
102 | void mdiobus_unregister(struct mii_bus *bus) | ||
103 | { | ||
104 | int i; | ||
105 | |||
106 | for (i = 0; i < PHY_MAX_ADDR; i++) { | ||
107 | if (bus->phy_map[i]) { | ||
108 | device_unregister(&bus->phy_map[i]->dev); | ||
109 | kfree(bus->phy_map[i]); | ||
110 | } | ||
111 | } | ||
112 | } | ||
113 | EXPORT_SYMBOL(mdiobus_unregister); | ||
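For orientation, a minimal sketch of the bus-driver side of this interface. Everything outside the mii_bus fields and the mdiobus_register()/mdiobus_unregister() calls (the mybus_* names and the hardware helpers) is an assumption, not part of this patch:

    static int mybus_read(struct mii_bus *bus, int mii_id, int regnum)
    {
    	/* hypothetical hardware access; may sleep, never runs in IRQ context */
    	return mybus_hw_read(mii_id, regnum);
    }

    static int mybus_write(struct mii_bus *bus, int mii_id, int regnum, u16 val)
    {
    	return mybus_hw_write(mii_id, regnum, val);
    }

    static struct mii_bus mybus = {
    	.name	= "mybus",
    	.id	= 0,
    	.read	= mybus_read,
    	.write	= mybus_write,
    	/* .reset is optional; .irq should point at a PHY_MAX_ADDR-sized
    	 * array holding PHY_POLL or a real IRQ number for each address,
    	 * and .dev should point at the controlling device */
    };

    /* in the bus controller's probe routine: */
    err = mdiobus_register(&mybus);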
114 | |||
115 | /* mdio_bus_match | ||
116 | * | ||
117 | * description: Given a PHY device, and a PHY driver, return 1 if | ||
118 | * the driver supports the device. Otherwise, return 0 | ||
119 | */ | ||
120 | static int mdio_bus_match(struct device *dev, struct device_driver *drv) | ||
121 | { | ||
122 | struct phy_device *phydev = to_phy_device(dev); | ||
123 | struct phy_driver *phydrv = to_phy_driver(drv); | ||
124 | |||
125 | return (phydrv->phy_id == (phydev->phy_id & phydrv->phy_id_mask)); | ||
126 | } | ||
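Worked example of this match rule, using id values from the drivers added in this patch (the device id below is illustrative):

    /* Marvell 88E1101 entry: .phy_id = 0x01410c00, .phy_id_mask = 0xffffff00
     * A PHY reporting phy_id 0x01410c62 (revision bits in the low byte) matches:
     *     0x01410c62 & 0xffffff00 == 0x01410c00
     * The LXT entries instead use a mask of 0x0fffffff, so every id bit
     * except the top nibble must match exactly.
     */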
127 | |||
128 | /* Suspend and resume. Copied from platform_suspend and | ||
129 | * platform_resume | ||
130 | */ | ||
131 | static int mdio_bus_suspend(struct device * dev, u32 state) | ||
132 | { | ||
133 | int ret = 0; | ||
134 | struct device_driver *drv = dev->driver; | ||
135 | |||
136 | if (drv && drv->suspend) { | ||
137 | ret = drv->suspend(dev, state, SUSPEND_DISABLE); | ||
138 | if (ret == 0) | ||
139 | ret = drv->suspend(dev, state, SUSPEND_SAVE_STATE); | ||
140 | if (ret == 0) | ||
141 | ret = drv->suspend(dev, state, SUSPEND_POWER_DOWN); | ||
142 | } | ||
143 | return ret; | ||
144 | } | ||
145 | |||
146 | static int mdio_bus_resume(struct device * dev) | ||
147 | { | ||
148 | int ret = 0; | ||
149 | struct device_driver *drv = dev->driver; | ||
150 | |||
151 | if (drv && drv->resume) { | ||
152 | ret = drv->resume(dev, RESUME_POWER_ON); | ||
153 | if (ret == 0) | ||
154 | ret = drv->resume(dev, RESUME_RESTORE_STATE); | ||
155 | if (ret == 0) | ||
156 | ret = drv->resume(dev, RESUME_ENABLE); | ||
157 | } | ||
158 | return ret; | ||
159 | } | ||
160 | |||
161 | struct bus_type mdio_bus_type = { | ||
162 | .name = "mdio_bus", | ||
163 | .match = mdio_bus_match, | ||
164 | .suspend = mdio_bus_suspend, | ||
165 | .resume = mdio_bus_resume, | ||
166 | }; | ||
167 | |||
168 | int __init mdio_bus_init(void) | ||
169 | { | ||
170 | return bus_register(&mdio_bus_type); | ||
171 | } | ||
172 | |||
173 | void __exit mdio_bus_exit(void) | ||
174 | { | ||
175 | bus_unregister(&mdio_bus_type); | ||
176 | } | ||
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c new file mode 100644 index 000000000000..d9e11f93bf3a --- /dev/null +++ b/drivers/net/phy/phy.c | |||
@@ -0,0 +1,871 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/phy.c | ||
3 | * | ||
4 | * Framework for configuring and reading PHY devices | ||
5 | * Based on code in sungem_phy.c and gianfar_phy.c | ||
6 | * | ||
7 | * Author: Andy Fleming | ||
8 | * | ||
9 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
10 | * | ||
11 | * This program is free software; you can redistribute it and/or modify it | ||
12 | * under the terms of the GNU General Public License as published by the | ||
13 | * Free Software Foundation; either version 2 of the License, or (at your | ||
14 | * option) any later version. | ||
15 | * | ||
16 | */ | ||
17 | #include <linux/config.h> | ||
18 | #include <linux/kernel.h> | ||
19 | #include <linux/sched.h> | ||
20 | #include <linux/string.h> | ||
21 | #include <linux/errno.h> | ||
22 | #include <linux/unistd.h> | ||
23 | #include <linux/slab.h> | ||
24 | #include <linux/interrupt.h> | ||
25 | #include <linux/init.h> | ||
26 | #include <linux/delay.h> | ||
27 | #include <linux/netdevice.h> | ||
28 | #include <linux/etherdevice.h> | ||
29 | #include <linux/skbuff.h> | ||
30 | #include <linux/spinlock.h> | ||
31 | #include <linux/mm.h> | ||
32 | #include <linux/module.h> | ||
33 | #include <linux/version.h> | ||
34 | #include <linux/mii.h> | ||
35 | #include <linux/ethtool.h> | ||
36 | #include <linux/phy.h> | ||
37 | |||
38 | #include <asm/io.h> | ||
39 | #include <asm/irq.h> | ||
40 | #include <asm/uaccess.h> | ||
41 | |||
42 | /* Convenience function to print out the current phy status | ||
43 | */ | ||
44 | void phy_print_status(struct phy_device *phydev) | ||
45 | { | ||
46 | pr_info("%s: Link is %s", phydev->dev.bus_id, | ||
47 | phydev->link ? "Up" : "Down"); | ||
48 | if (phydev->link) | ||
49 | printk(" - %d/%s", phydev->speed, | ||
50 | DUPLEX_FULL == phydev->duplex ? | ||
51 | "Full" : "Half"); | ||
52 | |||
53 | printk("\n"); | ||
54 | } | ||
55 | EXPORT_SYMBOL(phy_print_status); | ||
56 | |||
57 | |||
58 | /* Convenience functions for reading/writing a given PHY | ||
59 | * register. They MUST NOT be called from interrupt context, | ||
60 | * because the bus read/write functions may wait for an interrupt | ||
61 | * to conclude the operation. */ | ||
62 | int phy_read(struct phy_device *phydev, u16 regnum) | ||
63 | { | ||
64 | int retval; | ||
65 | struct mii_bus *bus = phydev->bus; | ||
66 | |||
67 | spin_lock_bh(&bus->mdio_lock); | ||
68 | retval = bus->read(bus, phydev->addr, regnum); | ||
69 | spin_unlock_bh(&bus->mdio_lock); | ||
70 | |||
71 | return retval; | ||
72 | } | ||
73 | EXPORT_SYMBOL(phy_read); | ||
74 | |||
75 | int phy_write(struct phy_device *phydev, u16 regnum, u16 val) | ||
76 | { | ||
77 | int err; | ||
78 | struct mii_bus *bus = phydev->bus; | ||
79 | |||
80 | spin_lock_bh(&bus->mdio_lock); | ||
81 | err = bus->write(bus, phydev->addr, regnum, val); | ||
82 | spin_unlock_bh(&bus->mdio_lock); | ||
83 | |||
84 | return err; | ||
85 | } | ||
86 | EXPORT_SYMBOL(phy_write); | ||
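A hypothetical helper (not part of this patch) illustrating the calling convention: check each return value for a negative error, and note that the two accesses are locked individually, not as an atomic pair:

    static int phy_set_bits_sketch(struct phy_device *phydev, u16 regnum, u16 bits)
    {
    	int val = phy_read(phydev, regnum);	/* may sleep; not for IRQ context */

    	if (val < 0)
    		return val;

    	return phy_write(phydev, regnum, val | bits);
    }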
87 | |||
88 | |||
89 | int phy_clear_interrupt(struct phy_device *phydev) | ||
90 | { | ||
91 | int err = 0; | ||
92 | |||
93 | if (phydev->drv->ack_interrupt) | ||
94 | err = phydev->drv->ack_interrupt(phydev); | ||
95 | |||
96 | return err; | ||
97 | } | ||
98 | |||
99 | |||
100 | int phy_config_interrupt(struct phy_device *phydev, u32 interrupts) | ||
101 | { | ||
102 | int err = 0; | ||
103 | |||
104 | phydev->interrupts = interrupts; | ||
105 | if (phydev->drv->config_intr) | ||
106 | err = phydev->drv->config_intr(phydev); | ||
107 | |||
108 | return err; | ||
109 | } | ||
110 | |||
111 | |||
112 | /* phy_aneg_done | ||
113 | * | ||
114 | * description: Reads the status register and returns 0 if | ||
115 | * auto-negotiation is incomplete, a negative error code if the | ||
116 | * read fails, or BMSR_ANEGCOMPLETE if auto-negotiation is done. | ||
117 | */ | ||
118 | static inline int phy_aneg_done(struct phy_device *phydev) | ||
119 | { | ||
120 | int retval; | ||
121 | |||
122 | retval = phy_read(phydev, MII_BMSR); | ||
123 | |||
124 | return (retval < 0) ? retval : (retval & BMSR_ANEGCOMPLETE); | ||
125 | } | ||
126 | |||
127 | /* A structure for mapping a particular speed and duplex | ||
128 | * combination to a particular SUPPORTED and ADVERTISED value */ | ||
129 | struct phy_setting { | ||
130 | int speed; | ||
131 | int duplex; | ||
132 | u32 setting; | ||
133 | }; | ||
134 | |||
135 | /* A mapping of all SUPPORTED settings to speed/duplex */ | ||
136 | static struct phy_setting settings[] = { | ||
137 | { | ||
138 | .speed = 10000, | ||
139 | .duplex = DUPLEX_FULL, | ||
140 | .setting = SUPPORTED_10000baseT_Full, | ||
141 | }, | ||
142 | { | ||
143 | .speed = SPEED_1000, | ||
144 | .duplex = DUPLEX_FULL, | ||
145 | .setting = SUPPORTED_1000baseT_Full, | ||
146 | }, | ||
147 | { | ||
148 | .speed = SPEED_1000, | ||
149 | .duplex = DUPLEX_HALF, | ||
150 | .setting = SUPPORTED_1000baseT_Half, | ||
151 | }, | ||
152 | { | ||
153 | .speed = SPEED_100, | ||
154 | .duplex = DUPLEX_FULL, | ||
155 | .setting = SUPPORTED_100baseT_Full, | ||
156 | }, | ||
157 | { | ||
158 | .speed = SPEED_100, | ||
159 | .duplex = DUPLEX_HALF, | ||
160 | .setting = SUPPORTED_100baseT_Half, | ||
161 | }, | ||
162 | { | ||
163 | .speed = SPEED_10, | ||
164 | .duplex = DUPLEX_FULL, | ||
165 | .setting = SUPPORTED_10baseT_Full, | ||
166 | }, | ||
167 | { | ||
168 | .speed = SPEED_10, | ||
169 | .duplex = DUPLEX_HALF, | ||
170 | .setting = SUPPORTED_10baseT_Half, | ||
171 | }, | ||
172 | }; | ||
173 | |||
174 | #define MAX_NUM_SETTINGS ARRAY_SIZE(settings) | ||
175 | |||
176 | /* phy_find_setting | ||
177 | * | ||
178 | * description: Searches the settings array for the setting which | ||
179 | * matches the desired speed and duplex, and returns the index | ||
180 | * of that setting. Returns the index of the last setting if | ||
181 | * none of the others match. | ||
182 | */ | ||
183 | static inline int phy_find_setting(int speed, int duplex) | ||
184 | { | ||
185 | int idx = 0; | ||
186 | |||
187 | while (idx < ARRAY_SIZE(settings) && | ||
188 | (settings[idx].speed != speed || | ||
189 | settings[idx].duplex != duplex)) | ||
190 | idx++; | ||
191 | |||
192 | return idx < MAX_NUM_SETTINGS ? idx : MAX_NUM_SETTINGS - 1; | ||
193 | } | ||
194 | |||
195 | /* phy_find_valid | ||
196 | * idx: The first index in settings[] to search | ||
197 | * features: A mask of the valid settings | ||
198 | * | ||
199 | * description: Returns the index of the first valid setting less | ||
200 | * than or equal to the one pointed to by idx, as determined by | ||
201 | * the mask in features. Returns the index of the last setting | ||
202 | * if nothing else matches. | ||
203 | */ | ||
204 | static inline int phy_find_valid(int idx, u32 features) | ||
205 | { | ||
206 | while (idx < MAX_NUM_SETTINGS && !(settings[idx].setting & features)) | ||
207 | idx++; | ||
208 | |||
209 | return idx < MAX_NUM_SETTINGS ? idx : MAX_NUM_SETTINGS - 1; | ||
210 | } | ||
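Worked example of the fallback search (PHY_BASIC_FEATURES is the 10/100 feature mask used by the LXT drivers above; the walk-through is illustrative):

    /* phy_find_valid(0, PHY_BASIC_FEATURES):
     *   idx 0..2 (10000/FULL, 1000/FULL, 1000/HALF) are not in the mask -> skipped
     *   idx 3 (100/FULL) is supported -> returned
     * phy_force_reduction() steps further down this table on each timeout:
     * 100/FULL -> 100/HALF -> 10/FULL -> 10/HALF.
     */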
211 | |||
212 | /* phy_sanitize_settings | ||
213 | * | ||
214 | * description: Make sure the PHY is set to supported speeds and | ||
215 | * duplexes. Drop down by one in this order: 1000/FULL, | ||
216 | * 1000/HALF, 100/FULL, 100/HALF, 10/FULL, 10/HALF | ||
217 | */ | ||
218 | void phy_sanitize_settings(struct phy_device *phydev) | ||
219 | { | ||
220 | u32 features = phydev->supported; | ||
221 | int idx; | ||
222 | |||
223 | /* Sanitize settings based on PHY capabilities */ | ||
224 | if ((features & SUPPORTED_Autoneg) == 0) | ||
225 | phydev->autoneg = 0; | ||
226 | |||
227 | idx = phy_find_valid(phy_find_setting(phydev->speed, phydev->duplex), | ||
228 | features); | ||
229 | |||
230 | phydev->speed = settings[idx].speed; | ||
231 | phydev->duplex = settings[idx].duplex; | ||
232 | } | ||
233 | EXPORT_SYMBOL(phy_sanitize_settings); | ||
234 | |||
235 | /* phy_ethtool_sset: | ||
236 | * A generic ethtool sset function. Handles all the details | ||
237 | * | ||
238 | * A few notes about parameter checking: | ||
239 | * - We don't set port or transceiver, so we don't care what they | ||
240 | * were set to. | ||
241 | * - phy_start_aneg() will make sure forced settings are sane, and | ||
242 | * choose the next best ones from the ones selected, so we don't | ||
243 | * care if ethtool tries to give us bad values | ||
244 | * | ||
245 | * A note about the PHYCONTROL Layer. If you turn off | ||
246 | * CONFIG_PHYCONTROL, you will need to read the PHY status | ||
247 | * registers after this function completes, and update your | ||
248 | * controller manually. | ||
249 | */ | ||
250 | int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd) | ||
251 | { | ||
252 | if (cmd->phy_address != phydev->addr) | ||
253 | return -EINVAL; | ||
254 | |||
255 | /* We make sure that we don't pass unsupported | ||
256 | * values in to the PHY */ | ||
257 | cmd->advertising &= phydev->supported; | ||
258 | |||
259 | /* Verify the settings we care about. */ | ||
260 | if (cmd->autoneg != AUTONEG_ENABLE && cmd->autoneg != AUTONEG_DISABLE) | ||
261 | return -EINVAL; | ||
262 | |||
263 | if (cmd->autoneg == AUTONEG_ENABLE && cmd->advertising == 0) | ||
264 | return -EINVAL; | ||
265 | |||
266 | if (cmd->autoneg == AUTONEG_DISABLE | ||
267 | && ((cmd->speed != SPEED_1000 | ||
268 | && cmd->speed != SPEED_100 | ||
269 | && cmd->speed != SPEED_10) | ||
270 | || (cmd->duplex != DUPLEX_HALF | ||
271 | && cmd->duplex != DUPLEX_FULL))) | ||
272 | return -EINVAL; | ||
273 | |||
274 | phydev->autoneg = cmd->autoneg; | ||
275 | |||
276 | phydev->speed = cmd->speed; | ||
277 | |||
278 | phydev->advertising = cmd->advertising; | ||
279 | |||
280 | if (AUTONEG_ENABLE == cmd->autoneg) | ||
281 | phydev->advertising |= ADVERTISED_Autoneg; | ||
282 | else | ||
283 | phydev->advertising &= ~ADVERTISED_Autoneg; | ||
284 | |||
285 | phydev->duplex = cmd->duplex; | ||
286 | |||
287 | /* Restart the PHY */ | ||
288 | phy_start_aneg(phydev); | ||
289 | |||
290 | return 0; | ||
291 | } | ||
292 | |||
293 | int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd) | ||
294 | { | ||
295 | cmd->supported = phydev->supported; | ||
296 | |||
297 | cmd->advertising = phydev->advertising; | ||
298 | |||
299 | cmd->speed = phydev->speed; | ||
300 | cmd->duplex = phydev->duplex; | ||
301 | cmd->port = PORT_MII; | ||
302 | cmd->phy_address = phydev->addr; | ||
303 | cmd->transceiver = XCVR_EXTERNAL; | ||
304 | cmd->autoneg = phydev->autoneg; | ||
305 | |||
306 | return 0; | ||
307 | } | ||
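A rough sketch of the ethtool glue a MAC driver could build on these two functions; the mydrv_* names and the priv->phydev field are assumptions:

    static int mydrv_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
    {
    	struct mydrv_priv *priv = netdev_priv(dev);

    	return priv->phydev ? phy_ethtool_gset(priv->phydev, cmd) : -ENODEV;
    }

    static int mydrv_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
    {
    	struct mydrv_priv *priv = netdev_priv(dev);

    	return priv->phydev ? phy_ethtool_sset(priv->phydev, cmd) : -ENODEV;
    }

    static struct ethtool_ops mydrv_ethtool_ops = {
    	.get_settings	= mydrv_get_settings,
    	.set_settings	= mydrv_set_settings,
    };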
308 | |||
309 | |||
310 | /* Note that this function is currently incompatible with the | ||
311 | * PHYCONTROL layer. It changes registers without regard to | ||
312 | * current state. Use at your own risk. | ||
313 | */ | ||
314 | int phy_mii_ioctl(struct phy_device *phydev, | ||
315 | struct mii_ioctl_data *mii_data, int cmd) | ||
316 | { | ||
317 | u16 val = mii_data->val_in; | ||
318 | |||
319 | switch (cmd) { | ||
320 | case SIOCGMIIPHY: | ||
321 | mii_data->phy_id = phydev->addr; | ||
322 | break; | ||
323 | case SIOCGMIIREG: | ||
324 | mii_data->val_out = phy_read(phydev, mii_data->reg_num); | ||
325 | break; | ||
326 | |||
327 | case SIOCSMIIREG: | ||
328 | if (!capable(CAP_NET_ADMIN)) | ||
329 | return -EPERM; | ||
330 | |||
331 | if (mii_data->phy_id == phydev->addr) { | ||
332 | switch(mii_data->reg_num) { | ||
333 | case MII_BMCR: | ||
334 | if (val & (BMCR_RESET|BMCR_ANENABLE)) | ||
335 | phydev->autoneg = AUTONEG_DISABLE; | ||
336 | else | ||
337 | phydev->autoneg = AUTONEG_ENABLE; | ||
338 | if ((!phydev->autoneg) && (val & BMCR_FULLDPLX)) | ||
339 | phydev->duplex = DUPLEX_FULL; | ||
340 | else | ||
341 | phydev->duplex = DUPLEX_HALF; | ||
342 | break; | ||
343 | case MII_ADVERTISE: | ||
344 | phydev->advertising = val; | ||
345 | break; | ||
346 | default: | ||
347 | /* do nothing */ | ||
348 | break; | ||
349 | } | ||
350 | } | ||
351 | |||
352 | phy_write(phydev, mii_data->reg_num, val); | ||
353 | |||
354 | if (mii_data->reg_num == MII_BMCR | ||
355 | && val & BMCR_RESET | ||
356 | && phydev->drv->config_init) | ||
357 | phydev->drv->config_init(phydev); | ||
358 | break; | ||
359 | } | ||
360 | |||
361 | return 0; | ||
362 | } | ||
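A sketch of how a MAC driver might route the MII ioctls here; the mydrv_* names are assumptions, and if_mii() is the usual <linux/mii.h> accessor for the ioctl data:

    static int mydrv_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
    {
    	struct mydrv_priv *priv = netdev_priv(dev);

    	if (!netif_running(dev) || !priv->phydev)
    		return -EINVAL;

    	return phy_mii_ioctl(priv->phydev, if_mii(rq), cmd);
    }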
363 | |||
364 | /* phy_start_aneg | ||
365 | * | ||
366 | * description: Sanitizes the settings (if we're not | ||
367 | * autonegotiating them), and then calls the driver's | ||
368 | * config_aneg function. If the PHYCONTROL Layer is operating, | ||
369 | * we change the state to reflect the beginning of | ||
370 | * Auto-negotiation or forcing. | ||
371 | */ | ||
372 | int phy_start_aneg(struct phy_device *phydev) | ||
373 | { | ||
374 | int err; | ||
375 | |||
376 | spin_lock(&phydev->lock); | ||
377 | |||
378 | if (AUTONEG_DISABLE == phydev->autoneg) | ||
379 | phy_sanitize_settings(phydev); | ||
380 | |||
381 | err = phydev->drv->config_aneg(phydev); | ||
382 | |||
383 | #ifdef CONFIG_PHYCONTROL | ||
384 | if (err < 0) | ||
385 | goto out_unlock; | ||
386 | |||
387 | if (phydev->state != PHY_HALTED) { | ||
388 | if (AUTONEG_ENABLE == phydev->autoneg) { | ||
389 | phydev->state = PHY_AN; | ||
390 | phydev->link_timeout = PHY_AN_TIMEOUT; | ||
391 | } else { | ||
392 | phydev->state = PHY_FORCING; | ||
393 | phydev->link_timeout = PHY_FORCE_TIMEOUT; | ||
394 | } | ||
395 | } | ||
396 | |||
397 | out_unlock: | ||
398 | #endif | ||
399 | spin_unlock(&phydev->lock); | ||
400 | return err; | ||
401 | } | ||
402 | EXPORT_SYMBOL(phy_start_aneg); | ||
403 | |||
404 | |||
405 | #ifdef CONFIG_PHYCONTROL | ||
406 | static void phy_change(void *data); | ||
407 | static void phy_timer(unsigned long data); | ||
408 | |||
409 | /* phy_start_machine: | ||
410 | * | ||
411 | * description: The PHY infrastructure can run a state machine | ||
412 | * which tracks whether the PHY is starting up, negotiating, | ||
413 | * etc. This function starts the timer which tracks the state | ||
414 | * of the PHY. If you want to be notified when the state | ||
415 | * changes, pass in a callback; otherwise, pass NULL. If you | ||
416 | * want to maintain your own state machine, do not call this | ||
417 | * function. */ | ||
418 | void phy_start_machine(struct phy_device *phydev, | ||
419 | void (*handler)(struct net_device *)) | ||
420 | { | ||
421 | phydev->adjust_state = handler; | ||
422 | |||
423 | init_timer(&phydev->phy_timer); | ||
424 | phydev->phy_timer.function = &phy_timer; | ||
425 | phydev->phy_timer.data = (unsigned long) phydev; | ||
426 | mod_timer(&phydev->phy_timer, jiffies + HZ); | ||
427 | } | ||
428 | |||
429 | /* phy_stop_machine | ||
430 | * | ||
431 | * description: Stops the state machine timer, lowers the state | ||
432 | * to UP (unless the PHY has not yet been brought up), and then | ||
433 | * frees the interrupt, if one is in use. This function must be called BEFORE | ||
434 | * phy_detach. | ||
435 | */ | ||
436 | void phy_stop_machine(struct phy_device *phydev) | ||
437 | { | ||
438 | del_timer_sync(&phydev->phy_timer); | ||
439 | |||
440 | spin_lock(&phydev->lock); | ||
441 | if (phydev->state > PHY_UP) | ||
442 | phydev->state = PHY_UP; | ||
443 | spin_unlock(&phydev->lock); | ||
444 | |||
445 | if (phydev->irq != PHY_POLL) | ||
446 | phy_stop_interrupts(phydev); | ||
447 | |||
448 | phydev->adjust_state = NULL; | ||
449 | } | ||
450 | |||
451 | /* phy_force_reduction | ||
452 | * | ||
453 | * description: Reduces the speed/duplex settings by | ||
454 | * one notch. The order is: | ||
455 | * 1000/FULL, 1000/HALF, 100/FULL, 100/HALF, | ||
456 | * 10/FULL, 10/HALF. The function bottoms out at 10/HALF. | ||
457 | */ | ||
458 | static void phy_force_reduction(struct phy_device *phydev) | ||
459 | { | ||
460 | int idx; | ||
461 | |||
462 | idx = phy_find_setting(phydev->speed, phydev->duplex); | ||
463 | |||
464 | idx++; | ||
465 | |||
466 | idx = phy_find_valid(idx, phydev->supported); | ||
467 | |||
468 | phydev->speed = settings[idx].speed; | ||
469 | phydev->duplex = settings[idx].duplex; | ||
470 | |||
471 | pr_info("Trying %d/%s\n", phydev->speed, | ||
472 | DUPLEX_FULL == phydev->duplex ? | ||
473 | "FULL" : "HALF"); | ||
474 | } | ||
475 | |||
476 | |||
477 | /* phy_error: | ||
478 | * | ||
479 | * Moves the PHY to the HALTED state in response to a read | ||
480 | * or write error, and tells the controller the link is down. | ||
481 | * Must not be called from interrupt context, or while the | ||
482 | * phydev->lock is held. | ||
483 | */ | ||
484 | void phy_error(struct phy_device *phydev) | ||
485 | { | ||
486 | spin_lock(&phydev->lock); | ||
487 | phydev->state = PHY_HALTED; | ||
488 | spin_unlock(&phydev->lock); | ||
489 | } | ||
490 | |||
491 | /* phy_interrupt | ||
492 | * | ||
493 | * description: When a PHY interrupt occurs, the handler disables | ||
494 | * interrupts, and schedules a work task to clear the interrupt. | ||
495 | */ | ||
496 | static irqreturn_t phy_interrupt(int irq, void *phy_dat, struct pt_regs *regs) | ||
497 | { | ||
498 | struct phy_device *phydev = phy_dat; | ||
499 | |||
500 | /* The MDIO bus is not allowed to be written in interrupt | ||
501 | * context, so we need to disable the irq here. A work | ||
502 | * queue will write the PHY to disable and clear the | ||
503 | * interrupt, and then reenable the irq line. */ | ||
504 | disable_irq_nosync(irq); | ||
505 | |||
506 | schedule_work(&phydev->phy_queue); | ||
507 | |||
508 | return IRQ_HANDLED; | ||
509 | } | ||
510 | |||
511 | /* Enable the interrupts from the PHY side */ | ||
512 | int phy_enable_interrupts(struct phy_device *phydev) | ||
513 | { | ||
514 | int err; | ||
515 | |||
516 | err = phy_clear_interrupt(phydev); | ||
517 | |||
518 | if (err < 0) | ||
519 | return err; | ||
520 | |||
521 | err = phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED); | ||
522 | |||
523 | return err; | ||
524 | } | ||
525 | EXPORT_SYMBOL(phy_enable_interrupts); | ||
526 | |||
527 | /* Disable the PHY interrupts from the PHY side */ | ||
528 | int phy_disable_interrupts(struct phy_device *phydev) | ||
529 | { | ||
530 | int err; | ||
531 | |||
532 | /* Disable PHY interrupts */ | ||
533 | err = phy_config_interrupt(phydev, PHY_INTERRUPT_DISABLED); | ||
534 | |||
535 | if (err) | ||
536 | goto phy_err; | ||
537 | |||
538 | /* Clear the interrupt */ | ||
539 | err = phy_clear_interrupt(phydev); | ||
540 | |||
541 | if (err) | ||
542 | goto phy_err; | ||
543 | |||
544 | return 0; | ||
545 | |||
546 | phy_err: | ||
547 | phy_error(phydev); | ||
548 | |||
549 | return err; | ||
550 | } | ||
551 | EXPORT_SYMBOL(phy_disable_interrupts); | ||
552 | |||
553 | /* phy_start_interrupts | ||
554 | * | ||
555 | * description: Request the interrupt for the given PHY. If | ||
556 | * this fails, then we set irq to PHY_POLL. | ||
557 | * Otherwise, we enable the interrupts in the PHY. | ||
558 | * Returns 0 on success. | ||
559 | * This should only be called with a valid IRQ number. | ||
560 | */ | ||
561 | int phy_start_interrupts(struct phy_device *phydev) | ||
562 | { | ||
563 | int err = 0; | ||
564 | |||
565 | INIT_WORK(&phydev->phy_queue, phy_change, phydev); | ||
566 | |||
567 | if (request_irq(phydev->irq, phy_interrupt, | ||
568 | SA_SHIRQ, | ||
569 | "phy_interrupt", | ||
570 | phydev) < 0) { | ||
571 | printk(KERN_WARNING "%s: Can't get IRQ %d (PHY)\n", | ||
572 | phydev->bus->name, | ||
573 | phydev->irq); | ||
574 | phydev->irq = PHY_POLL; | ||
575 | return 0; | ||
576 | } | ||
577 | |||
578 | err = phy_enable_interrupts(phydev); | ||
579 | |||
580 | return err; | ||
581 | } | ||
582 | EXPORT_SYMBOL(phy_start_interrupts); | ||
583 | |||
584 | int phy_stop_interrupts(struct phy_device *phydev) | ||
585 | { | ||
586 | int err; | ||
587 | |||
588 | err = phy_disable_interrupts(phydev); | ||
589 | |||
590 | if (err) | ||
591 | phy_error(phydev); | ||
592 | |||
593 | free_irq(phydev->irq, phydev); | ||
594 | |||
595 | return err; | ||
596 | } | ||
597 | EXPORT_SYMBOL(phy_stop_interrupts); | ||
598 | |||
599 | |||
600 | /* Scheduled by the phy_interrupt/timer to handle PHY changes */ | ||
601 | static void phy_change(void *data) | ||
602 | { | ||
603 | int err; | ||
604 | struct phy_device *phydev = data; | ||
605 | |||
606 | err = phy_disable_interrupts(phydev); | ||
607 | |||
608 | if (err) | ||
609 | goto phy_err; | ||
610 | |||
611 | spin_lock(&phydev->lock); | ||
612 | if ((PHY_RUNNING == phydev->state) || (PHY_NOLINK == phydev->state)) | ||
613 | phydev->state = PHY_CHANGELINK; | ||
614 | spin_unlock(&phydev->lock); | ||
615 | |||
616 | enable_irq(phydev->irq); | ||
617 | |||
618 | /* Reenable interrupts */ | ||
619 | err = phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED); | ||
620 | |||
621 | if (err) | ||
622 | goto irq_enable_err; | ||
623 | |||
624 | return; | ||
625 | |||
626 | irq_enable_err: | ||
627 | disable_irq(phydev->irq); | ||
628 | phy_err: | ||
629 | phy_error(phydev); | ||
630 | } | ||
631 | |||
632 | /* Bring down the PHY link, and stop checking the status. */ | ||
633 | void phy_stop(struct phy_device *phydev) | ||
634 | { | ||
635 | spin_lock(&phydev->lock); | ||
636 | |||
637 | if (PHY_HALTED == phydev->state) | ||
638 | goto out_unlock; | ||
639 | |||
640 | if (phydev->irq != PHY_POLL) { | ||
641 | /* Clear any pending interrupts */ | ||
642 | phy_clear_interrupt(phydev); | ||
643 | |||
644 | /* Disable PHY Interrupts */ | ||
645 | phy_config_interrupt(phydev, PHY_INTERRUPT_DISABLED); | ||
646 | } | ||
647 | |||
648 | phydev->state = PHY_HALTED; | ||
649 | |||
650 | out_unlock: | ||
651 | spin_unlock(&phydev->lock); | ||
652 | } | ||
653 | |||
654 | |||
655 | /* phy_start | ||
656 | * | ||
657 | * description: Indicates the attached device's readiness to | ||
658 | * handle PHY-related work. Used during startup to start the | ||
659 | * PHY, and after a call to phy_stop() to resume operation. | ||
660 | * Also used to indicate the MDIO bus has cleared an error | ||
661 | * condition. | ||
662 | */ | ||
663 | void phy_start(struct phy_device *phydev) | ||
664 | { | ||
665 | spin_lock(&phydev->lock); | ||
666 | |||
667 | switch (phydev->state) { | ||
668 | case PHY_STARTING: | ||
669 | phydev->state = PHY_PENDING; | ||
670 | break; | ||
671 | case PHY_READY: | ||
672 | phydev->state = PHY_UP; | ||
673 | break; | ||
674 | case PHY_HALTED: | ||
675 | phydev->state = PHY_RESUMING; | ||
676 | default: | ||
677 | break; | ||
678 | } | ||
679 | spin_unlock(&phydev->lock); | ||
680 | } | ||
681 | EXPORT_SYMBOL(phy_stop); | ||
682 | EXPORT_SYMBOL(phy_start); | ||
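Sketch of the expected calling pattern in a MAC driver (all mydrv_* names and the priv layout are assumptions): the adjust_link handler programs the MAC from phydev->speed/duplex, and phy_start()/phy_stop() bracket the interface's open and close paths:

    static void mydrv_adjust_link(struct net_device *dev)
    {
    	struct mydrv_priv *priv = netdev_priv(dev);
    	struct phy_device *phydev = priv->phydev;

    	if (phydev->link) {
    		/* reprogram the MAC for phydev->speed / phydev->duplex here */
    	}

    	phy_print_status(phydev);
    }

    static int mydrv_open(struct net_device *dev)
    {
    	struct mydrv_priv *priv = netdev_priv(dev);

    	/* ... bring up the MAC ... */
    	phy_start(priv->phydev);
    	netif_start_queue(dev);

    	return 0;
    }

    static int mydrv_stop(struct net_device *dev)
    {
    	struct mydrv_priv *priv = netdev_priv(dev);

    	netif_stop_queue(dev);
    	phy_stop(priv->phydev);
    	/* ... shut down the MAC ... */

    	return 0;
    }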
683 | |||
684 | /* PHY timer which handles the state machine */ | ||
685 | static void phy_timer(unsigned long data) | ||
686 | { | ||
687 | struct phy_device *phydev = (struct phy_device *)data; | ||
688 | int needs_aneg = 0; | ||
689 | int err = 0; | ||
690 | |||
691 | spin_lock(&phydev->lock); | ||
692 | |||
693 | if (phydev->adjust_state) | ||
694 | phydev->adjust_state(phydev->attached_dev); | ||
695 | |||
696 | switch(phydev->state) { | ||
697 | case PHY_DOWN: | ||
698 | case PHY_STARTING: | ||
699 | case PHY_READY: | ||
700 | case PHY_PENDING: | ||
701 | break; | ||
702 | case PHY_UP: | ||
703 | needs_aneg = 1; | ||
704 | |||
705 | phydev->link_timeout = PHY_AN_TIMEOUT; | ||
706 | |||
707 | break; | ||
708 | case PHY_AN: | ||
709 | /* Check if negotiation is done. Break | ||
710 | * if there's an error */ | ||
711 | err = phy_aneg_done(phydev); | ||
712 | if (err < 0) | ||
713 | break; | ||
714 | |||
715 | /* If auto-negotiation is done, we change to | ||
716 | * either RUNNING, or NOLINK */ | ||
717 | if (err > 0) { | ||
718 | err = phy_read_status(phydev); | ||
719 | |||
720 | if (err) | ||
721 | break; | ||
722 | |||
723 | if (phydev->link) { | ||
724 | phydev->state = PHY_RUNNING; | ||
725 | netif_carrier_on(phydev->attached_dev); | ||
726 | } else { | ||
727 | phydev->state = PHY_NOLINK; | ||
728 | netif_carrier_off(phydev->attached_dev); | ||
729 | } | ||
730 | |||
731 | phydev->adjust_link(phydev->attached_dev); | ||
732 | |||
733 | } else if (0 == phydev->link_timeout--) { | ||
734 | /* The counter expired, so either we | ||
735 | * switch to forced mode, or, if the driver | ||
736 | * has the magic_aneg flag set, we try aneg | ||
737 | * again */ | ||
738 | if (!(phydev->drv->flags & PHY_HAS_MAGICANEG)) { | ||
739 | int idx; | ||
740 | |||
741 | /* We'll start from the | ||
742 | * fastest speed, and work | ||
743 | * our way down */ | ||
744 | idx = phy_find_valid(0, | ||
745 | phydev->supported); | ||
746 | |||
747 | phydev->speed = settings[idx].speed; | ||
748 | phydev->duplex = settings[idx].duplex; | ||
749 | |||
750 | phydev->autoneg = AUTONEG_DISABLE; | ||
751 | phydev->state = PHY_FORCING; | ||
752 | phydev->link_timeout = | ||
753 | PHY_FORCE_TIMEOUT; | ||
754 | |||
755 | pr_info("Trying %d/%s\n", | ||
756 | phydev->speed, | ||
757 | DUPLEX_FULL == | ||
758 | phydev->duplex ? | ||
759 | "FULL" : "HALF"); | ||
760 | } | ||
761 | |||
762 | needs_aneg = 1; | ||
763 | } | ||
764 | break; | ||
765 | case PHY_NOLINK: | ||
766 | err = phy_read_status(phydev); | ||
767 | |||
768 | if (err) | ||
769 | break; | ||
770 | |||
771 | if (phydev->link) { | ||
772 | phydev->state = PHY_RUNNING; | ||
773 | netif_carrier_on(phydev->attached_dev); | ||
774 | phydev->adjust_link(phydev->attached_dev); | ||
775 | } | ||
776 | break; | ||
777 | case PHY_FORCING: | ||
778 | err = phy_read_status(phydev); | ||
779 | |||
780 | if (err) | ||
781 | break; | ||
782 | |||
783 | if (phydev->link) { | ||
784 | phydev->state = PHY_RUNNING; | ||
785 | netif_carrier_on(phydev->attached_dev); | ||
786 | } else { | ||
787 | if (0 == phydev->link_timeout--) { | ||
788 | phy_force_reduction(phydev); | ||
789 | needs_aneg = 1; | ||
790 | } | ||
791 | } | ||
792 | |||
793 | phydev->adjust_link(phydev->attached_dev); | ||
794 | break; | ||
795 | case PHY_RUNNING: | ||
796 | /* Only register a CHANGE if we are | ||
797 | * polling */ | ||
798 | if (PHY_POLL == phydev->irq) | ||
799 | phydev->state = PHY_CHANGELINK; | ||
800 | break; | ||
801 | case PHY_CHANGELINK: | ||
802 | err = phy_read_status(phydev); | ||
803 | |||
804 | if (err) | ||
805 | break; | ||
806 | |||
807 | if (phydev->link) { | ||
808 | phydev->state = PHY_RUNNING; | ||
809 | netif_carrier_on(phydev->attached_dev); | ||
810 | } else { | ||
811 | phydev->state = PHY_NOLINK; | ||
812 | netif_carrier_off(phydev->attached_dev); | ||
813 | } | ||
814 | |||
815 | phydev->adjust_link(phydev->attached_dev); | ||
816 | |||
817 | if (PHY_POLL != phydev->irq) | ||
818 | err = phy_config_interrupt(phydev, | ||
819 | PHY_INTERRUPT_ENABLED); | ||
820 | break; | ||
821 | case PHY_HALTED: | ||
822 | if (phydev->link) { | ||
823 | phydev->link = 0; | ||
824 | netif_carrier_off(phydev->attached_dev); | ||
825 | phydev->adjust_link(phydev->attached_dev); | ||
826 | } | ||
827 | break; | ||
828 | case PHY_RESUMING: | ||
829 | |||
830 | err = phy_clear_interrupt(phydev); | ||
831 | |||
832 | if (err) | ||
833 | break; | ||
834 | |||
835 | err = phy_config_interrupt(phydev, | ||
836 | PHY_INTERRUPT_ENABLED); | ||
837 | |||
838 | if (err) | ||
839 | break; | ||
840 | |||
841 | if (AUTONEG_ENABLE == phydev->autoneg) { | ||
842 | err = phy_aneg_done(phydev); | ||
843 | if (err < 0) | ||
844 | break; | ||
845 | |||
846 | /* err > 0 if AN is done. | ||
847 | * Otherwise, it's 0, and we're | ||
848 | * still waiting for AN */ | ||
849 | if (err > 0) { | ||
850 | phydev->state = PHY_RUNNING; | ||
851 | } else { | ||
852 | phydev->state = PHY_AN; | ||
853 | phydev->link_timeout = PHY_AN_TIMEOUT; | ||
854 | } | ||
855 | } else | ||
856 | phydev->state = PHY_RUNNING; | ||
857 | break; | ||
858 | } | ||
859 | |||
860 | spin_unlock(&phydev->lock); | ||
861 | |||
862 | if (needs_aneg) | ||
863 | err = phy_start_aneg(phydev); | ||
864 | |||
865 | if (err < 0) | ||
866 | phy_error(phydev); | ||
867 | |||
868 | mod_timer(&phydev->phy_timer, jiffies + PHY_STATE_TIME * HZ); | ||
869 | } | ||
870 | |||
871 | #endif /* CONFIG_PHYCONTROL */ | ||
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c new file mode 100644 index 000000000000..33f7bdb5857c --- /dev/null +++ b/drivers/net/phy/phy_device.c | |||
@@ -0,0 +1,696 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/phy_device.c | ||
3 | * | ||
4 | * Framework for finding and configuring PHYs. | ||
5 | * Also contains generic PHY driver | ||
6 | * | ||
7 | * Author: Andy Fleming | ||
8 | * | ||
9 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
10 | * | ||
11 | * This program is free software; you can redistribute it and/or modify it | ||
12 | * under the terms of the GNU General Public License as published by the | ||
13 | * Free Software Foundation; either version 2 of the License, or (at your | ||
14 | * option) any later version. | ||
15 | * | ||
16 | */ | ||
17 | #include <linux/config.h> | ||
18 | #include <linux/kernel.h> | ||
19 | #include <linux/sched.h> | ||
20 | #include <linux/string.h> | ||
21 | #include <linux/errno.h> | ||
22 | #include <linux/unistd.h> | ||
23 | #include <linux/slab.h> | ||
24 | #include <linux/interrupt.h> | ||
25 | #include <linux/init.h> | ||
26 | #include <linux/delay.h> | ||
27 | #include <linux/netdevice.h> | ||
28 | #include <linux/etherdevice.h> | ||
29 | #include <linux/skbuff.h> | ||
30 | #include <linux/spinlock.h> | ||
31 | #include <linux/mm.h> | ||
32 | #include <linux/module.h> | ||
33 | #include <linux/version.h> | ||
34 | #include <linux/mii.h> | ||
35 | #include <linux/ethtool.h> | ||
36 | #include <linux/phy.h> | ||
37 | |||
38 | #include <asm/io.h> | ||
39 | #include <asm/irq.h> | ||
40 | #include <asm/uaccess.h> | ||
41 | |||
42 | static struct phy_driver genphy_driver; | ||
43 | extern int mdio_bus_init(void); | ||
44 | extern void mdio_bus_exit(void); | ||
45 | |||
46 | /* get_phy_device | ||
47 | * | ||
48 | * description: Reads the ID registers of the PHY at addr on the | ||
49 | * bus, then allocates and returns the phy_device to | ||
50 | * represent it. | ||
51 | */ | ||
52 | struct phy_device * get_phy_device(struct mii_bus *bus, int addr) | ||
53 | { | ||
54 | int phy_reg; | ||
55 | u32 phy_id; | ||
56 | struct phy_device *dev = NULL; | ||
57 | |||
58 | /* Grab the bits from PHYIR1, and put them | ||
59 | * in the upper half */ | ||
60 | phy_reg = bus->read(bus, addr, MII_PHYSID1); | ||
61 | |||
62 | if (phy_reg < 0) | ||
63 | return ERR_PTR(phy_reg); | ||
64 | |||
65 | phy_id = (phy_reg & 0xffff) << 16; | ||
66 | |||
67 | /* Grab the bits from PHYIR2, and put them in the lower half */ | ||
68 | phy_reg = bus->read(bus, addr, MII_PHYSID2); | ||
69 | |||
70 | if (phy_reg < 0) | ||
71 | return ERR_PTR(phy_reg); | ||
72 | |||
73 | phy_id |= (phy_reg & 0xffff); | ||
74 | |||
75 | /* If the phy_id is all Fs, there is no device there */ | ||
76 | if (0xffffffff == phy_id) | ||
77 | return NULL; | ||
78 | |||
79 | /* Otherwise, we allocate the device, and initialize the | ||
80 | * default values */ | ||
81 | dev = kcalloc(1, sizeof(*dev), GFP_KERNEL); | ||
82 | |||
83 | if (NULL == dev) | ||
84 | return ERR_PTR(-ENOMEM); | ||
85 | |||
86 | dev->speed = 0; | ||
87 | dev->duplex = -1; | ||
88 | dev->pause = dev->asym_pause = 0; | ||
89 | dev->link = 1; | ||
90 | |||
91 | dev->autoneg = AUTONEG_ENABLE; | ||
92 | |||
93 | dev->addr = addr; | ||
94 | dev->phy_id = phy_id; | ||
95 | dev->bus = bus; | ||
96 | |||
97 | dev->state = PHY_DOWN; | ||
98 | |||
99 | spin_lock_init(&dev->lock); | ||
100 | |||
101 | return dev; | ||
102 | } | ||
103 | |||
104 | #ifdef CONFIG_PHYCONTROL | ||
105 | /* phy_prepare_link: | ||
106 | * | ||
107 | * description: Tells the PHY infrastructure to handle the | ||
108 | * gory details of monitoring link status (whether through | ||
109 | * polling or an interrupt), and to call back to the | ||
110 | * connected device driver when the link status changes. | ||
111 | * If you want to monitor your own link state, don't call | ||
112 | * this function */ | ||
113 | void phy_prepare_link(struct phy_device *phydev, | ||
114 | void (*handler)(struct net_device *)) | ||
115 | { | ||
116 | phydev->adjust_link = handler; | ||
117 | } | ||
118 | |||
119 | /* phy_connect: | ||
120 | * | ||
121 | * description: Convenience function for connecting ethernet | ||
122 | * devices to PHY devices. The default behavior is for | ||
123 | * the PHY infrastructure to handle everything, and only notify | ||
124 | * the connected driver when the link status changes. If you | ||
125 | * don't want, or can't use, the provided functionality, you may | ||
126 | * choose to call only the subset of functions that provides | ||
127 | * the behavior you need. | ||
128 | */ | ||
129 | struct phy_device * phy_connect(struct net_device *dev, const char *phy_id, | ||
130 | void (*handler)(struct net_device *), u32 flags) | ||
131 | { | ||
132 | struct phy_device *phydev; | ||
133 | |||
134 | phydev = phy_attach(dev, phy_id, flags); | ||
135 | |||
136 | if (IS_ERR(phydev)) | ||
137 | return phydev; | ||
138 | |||
139 | phy_prepare_link(phydev, handler); | ||
140 | |||
141 | phy_start_machine(phydev, NULL); | ||
142 | |||
143 | if (phydev->irq > 0) | ||
144 | phy_start_interrupts(phydev); | ||
145 | |||
146 | return phydev; | ||
147 | } | ||
148 | EXPORT_SYMBOL(phy_connect); | ||
149 | |||
150 | void phy_disconnect(struct phy_device *phydev) | ||
151 | { | ||
152 | if (phydev->irq > 0) | ||
153 | phy_stop_interrupts(phydev); | ||
154 | |||
155 | phy_stop_machine(phydev); | ||
156 | |||
157 | phydev->adjust_link = NULL; | ||
158 | |||
159 | phy_detach(phydev); | ||
160 | } | ||
161 | EXPORT_SYMBOL(phy_disconnect); | ||
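Putting it together, the probe and remove paths of a MAC driver might look roughly like this; the bus id string follows the "phy%d:%d" format set up by mdiobus_register(), the adjust_link handler is one like that sketched near phy_start() in phy.c, and everything outside the phy_* calls is an assumption:

    /* probe: */
    priv->phydev = phy_connect(dev, "phy0:1", &mydrv_adjust_link, 0);
    if (IS_ERR(priv->phydev))
    	return PTR_ERR(priv->phydev);

    /* optionally trim the advertised modes to what the MAC can do: */
    priv->phydev->supported &= PHY_GBIT_FEATURES;
    priv->phydev->advertising = priv->phydev->supported;

    /* remove (or close): */
    phy_disconnect(priv->phydev);
    priv->phydev = NULL;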
162 | |||
163 | #endif /* CONFIG_PHYCONTROL */ | ||
164 | |||
165 | /* phy_attach: | ||
166 | * | ||
167 | * description: Called by drivers to attach to a particular PHY | ||
168 | * device. The phy_device is found, and properly hooked up | ||
169 | * to the phy_driver. If no driver is attached, then the | ||
170 | * genphy_driver is used. The phy_device is given a ptr to | ||
171 | * the attaching device, and given a callback for link status | ||
172 | * change. The phy_device is returned to the attaching | ||
173 | * driver. | ||
174 | */ | ||
175 | static int phy_compare_id(struct device *dev, void *data) | ||
176 | { | ||
177 | return strcmp((char *)data, dev->bus_id) ? 0 : 1; | ||
178 | } | ||
179 | |||
180 | struct phy_device *phy_attach(struct net_device *dev, | ||
181 | const char *phy_id, u32 flags) | ||
182 | { | ||
183 | struct bus_type *bus = &mdio_bus_type; | ||
184 | struct phy_device *phydev; | ||
185 | struct device *d; | ||
186 | |||
187 | /* Search the list of PHY devices on the mdio bus for the | ||
188 | * PHY with the requested name */ | ||
189 | d = bus_find_device(bus, NULL, (void *)phy_id, phy_compare_id); | ||
190 | |||
191 | if (d) { | ||
192 | phydev = to_phy_device(d); | ||
193 | } else { | ||
194 | printk(KERN_ERR "%s not found\n", phy_id); | ||
195 | return ERR_PTR(-ENODEV); | ||
196 | } | ||
197 | |||
198 | /* Assume that if no driver is bound, none exists for this | ||
199 | * PHY, and we should use the genphy driver. */ | ||
200 | if (NULL == d->driver) { | ||
201 | int err; | ||
202 | down_write(&d->bus->subsys.rwsem); | ||
203 | d->driver = &genphy_driver.driver; | ||
204 | |||
205 | err = d->driver->probe(d); | ||
206 | if (err >= 0) | ||
207 | device_bind_driver(d); | ||
208 | up_write(&d->bus->subsys.rwsem); | ||
209 | |||
210 | if (err < 0) | ||
211 | return ERR_PTR(err); | ||
212 | } | ||
213 | |||
214 | if (phydev->attached_dev) { | ||
215 | printk(KERN_ERR "%s: %s already attached\n", | ||
216 | dev->name, phy_id); | ||
217 | return ERR_PTR(-EBUSY); | ||
218 | } | ||
219 | |||
220 | phydev->attached_dev = dev; | ||
221 | |||
222 | phydev->dev_flags = flags; | ||
223 | |||
224 | return phydev; | ||
225 | } | ||
226 | EXPORT_SYMBOL(phy_attach); | ||
227 | |||
228 | void phy_detach(struct phy_device *phydev) | ||
229 | { | ||
230 | phydev->attached_dev = NULL; | ||
231 | |||
232 | /* If the device had no specific driver before (i.e., it | ||
233 | * was using the generic driver), we unbind the device | ||
234 | * from the generic driver so that there's a chance a | ||
235 | * real driver could be loaded */ | ||
236 | if (phydev->dev.driver == &genphy_driver.driver) { | ||
237 | down_write(&phydev->dev.bus->subsys.rwsem); | ||
238 | device_release_driver(&phydev->dev); | ||
239 | up_write(&phydev->dev.bus->subsys.rwsem); | ||
240 | } | ||
241 | } | ||
242 | EXPORT_SYMBOL(phy_detach); | ||
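For drivers that cannot use the PHYCONTROL state machine, a hedged sketch of the manual path through phy_attach() and the genphy helpers defined below (the surrounding driver code and the timer that triggers the status refresh are assumptions):

    phydev = phy_attach(dev, "phy0:1", 0);
    if (IS_ERR(phydev))
    	return PTR_ERR(phydev);

    err = genphy_config_aneg(phydev);	/* advertise and (re)start aneg */

    /* later, e.g. from a periodic timer or the PHY's own interrupt: */
    err = genphy_read_status(phydev);
    if (!err && phydev->link) {
    	/* program the MAC from phydev->speed / phydev->duplex */
    }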
243 | |||
244 | |||
245 | /* Generic PHY support and helper functions */ | ||
246 | |||
247 | /* genphy_config_advert | ||
248 | * | ||
249 | * description: Writes MII_ADVERTISE with the appropriate values, | ||
250 | * after sanitizing the values to make sure we only advertise | ||
251 | * what is supported | ||
252 | */ | ||
253 | int genphy_config_advert(struct phy_device *phydev) | ||
254 | { | ||
255 | u32 advertise; | ||
256 | int adv; | ||
257 | int err; | ||
258 | |||
259 | /* Only allow advertising what | ||
260 | * this PHY supports */ | ||
261 | phydev->advertising &= phydev->supported; | ||
262 | advertise = phydev->advertising; | ||
263 | |||
264 | /* Setup standard advertisement */ | ||
265 | adv = phy_read(phydev, MII_ADVERTISE); | ||
266 | |||
267 | if (adv < 0) | ||
268 | return adv; | ||
269 | |||
270 | adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | | ||
271 | ADVERTISE_PAUSE_ASYM); | ||
272 | if (advertise & ADVERTISED_10baseT_Half) | ||
273 | adv |= ADVERTISE_10HALF; | ||
274 | if (advertise & ADVERTISED_10baseT_Full) | ||
275 | adv |= ADVERTISE_10FULL; | ||
276 | if (advertise & ADVERTISED_100baseT_Half) | ||
277 | adv |= ADVERTISE_100HALF; | ||
278 | if (advertise & ADVERTISED_100baseT_Full) | ||
279 | adv |= ADVERTISE_100FULL; | ||
280 | if (advertise & ADVERTISED_Pause) | ||
281 | adv |= ADVERTISE_PAUSE_CAP; | ||
282 | if (advertise & ADVERTISED_Asym_Pause) | ||
283 | adv |= ADVERTISE_PAUSE_ASYM; | ||
284 | |||
285 | err = phy_write(phydev, MII_ADVERTISE, adv); | ||
286 | |||
287 | if (err < 0) | ||
288 | return err; | ||
289 | |||
290 | /* Configure gigabit if it's supported */ | ||
291 | if (phydev->supported & (SUPPORTED_1000baseT_Half | | ||
292 | SUPPORTED_1000baseT_Full)) { | ||
293 | adv = phy_read(phydev, MII_CTRL1000); | ||
294 | |||
295 | if (adv < 0) | ||
296 | return adv; | ||
297 | |||
298 | adv &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF); | ||
299 | if (advertise & ADVERTISED_1000baseT_Half) | ||
300 | adv |= ADVERTISE_1000HALF; | ||
301 | if (advertise & ADVERTISED_1000baseT_Full) | ||
302 | adv |= ADVERTISE_1000FULL; | ||
303 | err = phy_write(phydev, MII_CTRL1000, adv); | ||
304 | |||
305 | if (err < 0) | ||
306 | return err; | ||
307 | } | ||
308 | |||
309 | return adv; | ||
310 | } | ||
311 | EXPORT_SYMBOL(genphy_config_advert); | ||
312 | |||
313 | /* genphy_setup_forced | ||
314 | * | ||
315 | * description: Configures MII_BMCR to force speed/duplex | ||
316 | * to the values in phydev. Assumes that the values are valid. | ||
317 | * Please see phy_sanitize_settings() */ | ||
318 | int genphy_setup_forced(struct phy_device *phydev) | ||
319 | { | ||
320 | int ctl = BMCR_RESET; | ||
321 | |||
322 | phydev->pause = phydev->asym_pause = 0; | ||
323 | |||
324 | if (SPEED_1000 == phydev->speed) | ||
325 | ctl |= BMCR_SPEED1000; | ||
326 | else if (SPEED_100 == phydev->speed) | ||
327 | ctl |= BMCR_SPEED100; | ||
328 | |||
329 | if (DUPLEX_FULL == phydev->duplex) | ||
330 | ctl |= BMCR_FULLDPLX; | ||
331 | |||
332 | ctl = phy_write(phydev, MII_BMCR, ctl); | ||
333 | |||
334 | if (ctl < 0) | ||
335 | return ctl; | ||
336 | |||
337 | /* We just reset the device, so we'd better configure any | ||
338 | * settings the PHY requires to operate */ | ||
339 | if (phydev->drv->config_init) | ||
340 | ctl = phydev->drv->config_init(phydev); | ||
341 | |||
342 | return ctl; | ||
343 | } | ||
344 | |||
345 | |||
346 | /* Enable and Restart Autonegotiation */ | ||
347 | int genphy_restart_aneg(struct phy_device *phydev) | ||
348 | { | ||
349 | int ctl; | ||
350 | |||
351 | ctl = phy_read(phydev, MII_BMCR); | ||
352 | |||
353 | if (ctl < 0) | ||
354 | return ctl; | ||
355 | |||
356 | ctl |= (BMCR_ANENABLE | BMCR_ANRESTART); | ||
357 | |||
358 | /* Don't isolate the PHY if we're negotiating */ | ||
359 | ctl &= ~(BMCR_ISOLATE); | ||
360 | |||
361 | ctl = phy_write(phydev, MII_BMCR, ctl); | ||
362 | |||
363 | return ctl; | ||
364 | } | ||
365 | |||
366 | |||
367 | /* genphy_config_aneg | ||
368 | * | ||
369 | * description: If auto-negotiation is enabled, we configure the | ||
370 | * advertising, and then restart auto-negotiation. If it is not | ||
371 | * enabled, then we write the BMCR | ||
372 | */ | ||
373 | int genphy_config_aneg(struct phy_device *phydev) | ||
374 | { | ||
375 | int err = 0; | ||
376 | |||
377 | if (AUTONEG_ENABLE == phydev->autoneg) { | ||
378 | err = genphy_config_advert(phydev); | ||
379 | |||
380 | if (err < 0) | ||
381 | return err; | ||
382 | |||
383 | err = genphy_restart_aneg(phydev); | ||
384 | } else | ||
385 | err = genphy_setup_forced(phydev); | ||
386 | |||
387 | return err; | ||
388 | } | ||
389 | EXPORT_SYMBOL(genphy_config_aneg); | ||
390 | |||
391 | /* genphy_update_link | ||
392 | * | ||
393 | * description: Update the value in phydev->link to reflect the | ||
394 | * current link value. In order to do this, we need to read | ||
395 | * the status register twice, keeping the second value (the link bit is latched low, so the first read clears any stale link-failure indication) | ||
396 | */ | ||
397 | int genphy_update_link(struct phy_device *phydev) | ||
398 | { | ||
399 | int status; | ||
400 | |||
401 | /* Do a fake read */ | ||
402 | status = phy_read(phydev, MII_BMSR); | ||
403 | |||
404 | if (status < 0) | ||
405 | return status; | ||
406 | |||
407 | /* Read link and autonegotiation status */ | ||
408 | status = phy_read(phydev, MII_BMSR); | ||
409 | |||
410 | if (status < 0) | ||
411 | return status; | ||
412 | |||
413 | if ((status & BMSR_LSTATUS) == 0) | ||
414 | phydev->link = 0; | ||
415 | else | ||
416 | phydev->link = 1; | ||
417 | |||
418 | return 0; | ||
419 | } | ||
420 | |||
421 | /* genphy_read_status | ||
422 | * | ||
423 | * description: Check the link, then figure out the current state | ||
424 | * by comparing what we advertise with what the link partner | ||
425 | * advertises. Start by checking the gigabit possibilities, | ||
426 | * then move on to 10/100. | ||
427 | */ | ||
428 | int genphy_read_status(struct phy_device *phydev) | ||
429 | { | ||
430 | int adv; | ||
431 | int err; | ||
432 | int lpa; | ||
433 | int lpagb = 0; | ||
434 | |||
435 | /* Update the link, but return if there | ||
436 | * was an error */ | ||
437 | err = genphy_update_link(phydev); | ||
438 | if (err) | ||
439 | return err; | ||
440 | |||
441 | if (AUTONEG_ENABLE == phydev->autoneg) { | ||
442 | if (phydev->supported & (SUPPORTED_1000baseT_Half | ||
443 | | SUPPORTED_1000baseT_Full)) { | ||
444 | lpagb = phy_read(phydev, MII_STAT1000); | ||
445 | |||
446 | if (lpagb < 0) | ||
447 | return lpagb; | ||
448 | |||
449 | adv = phy_read(phydev, MII_CTRL1000); | ||
450 | |||
451 | if (adv < 0) | ||
452 | return adv; | ||
453 | |||
454 | lpagb &= adv << 2; | ||
455 | } | ||
456 | |||
457 | lpa = phy_read(phydev, MII_LPA); | ||
458 | |||
459 | if (lpa < 0) | ||
460 | return lpa; | ||
461 | |||
462 | adv = phy_read(phydev, MII_ADVERTISE); | ||
463 | |||
464 | if (adv < 0) | ||
465 | return adv; | ||
466 | |||
467 | lpa &= adv; | ||
468 | |||
469 | phydev->speed = SPEED_10; | ||
470 | phydev->duplex = DUPLEX_HALF; | ||
471 | phydev->pause = phydev->asym_pause = 0; | ||
472 | |||
473 | if (lpagb & (LPA_1000FULL | LPA_1000HALF)) { | ||
474 | phydev->speed = SPEED_1000; | ||
475 | |||
476 | if (lpagb & LPA_1000FULL) | ||
477 | phydev->duplex = DUPLEX_FULL; | ||
478 | } else if (lpa & (LPA_100FULL | LPA_100HALF)) { | ||
479 | phydev->speed = SPEED_100; | ||
480 | |||
481 | if (lpa & LPA_100FULL) | ||
482 | phydev->duplex = DUPLEX_FULL; | ||
483 | } else | ||
484 | if (lpa & LPA_10FULL) | ||
485 | phydev->duplex = DUPLEX_FULL; | ||
486 | |||
487 | if (phydev->duplex == DUPLEX_FULL){ | ||
488 | phydev->pause = lpa & LPA_PAUSE_CAP ? 1 : 0; | ||
489 | phydev->asym_pause = lpa & LPA_PAUSE_ASYM ? 1 : 0; | ||
490 | } | ||
491 | } else { | ||
492 | int bmcr = phy_read(phydev, MII_BMCR); | ||
493 | if (bmcr < 0) | ||
494 | return bmcr; | ||
495 | |||
496 | if (bmcr & BMCR_FULLDPLX) | ||
497 | phydev->duplex = DUPLEX_FULL; | ||
498 | else | ||
499 | phydev->duplex = DUPLEX_HALF; | ||
500 | |||
501 | if (bmcr & BMCR_SPEED1000) | ||
502 | phydev->speed = SPEED_1000; | ||
503 | else if (bmcr & BMCR_SPEED100) | ||
504 | phydev->speed = SPEED_100; | ||
505 | else | ||
506 | phydev->speed = SPEED_10; | ||
507 | |||
508 | phydev->pause = phydev->asym_pause = 0; | ||
509 | } | ||
510 | |||
511 | return 0; | ||
512 | } | ||
513 | EXPORT_SYMBOL(genphy_read_status); | ||
514 | |||
515 | static int genphy_config_init(struct phy_device *phydev) | ||
516 | { | ||
517 | int val; /* must be signed: phy_read() returns a negative error */ | ||
518 | u32 features; | ||
519 | |||
520 | /* For now, I'll claim that the generic driver supports | ||
521 | * all possible port types */ | ||
522 | features = (SUPPORTED_TP | SUPPORTED_MII | ||
523 | | SUPPORTED_AUI | SUPPORTED_FIBRE | | ||
524 | SUPPORTED_BNC); | ||
525 | |||
526 | /* Do we support autonegotiation? */ | ||
527 | val = phy_read(phydev, MII_BMSR); | ||
528 | |||
529 | if (val < 0) | ||
530 | return val; | ||
531 | |||
532 | if (val & BMSR_ANEGCAPABLE) | ||
533 | features |= SUPPORTED_Autoneg; | ||
534 | |||
535 | if (val & BMSR_100FULL) | ||
536 | features |= SUPPORTED_100baseT_Full; | ||
537 | if (val & BMSR_100HALF) | ||
538 | features |= SUPPORTED_100baseT_Half; | ||
539 | if (val & BMSR_10FULL) | ||
540 | features |= SUPPORTED_10baseT_Full; | ||
541 | if (val & BMSR_10HALF) | ||
542 | features |= SUPPORTED_10baseT_Half; | ||
543 | |||
544 | if (val & BMSR_ESTATEN) { | ||
545 | val = phy_read(phydev, MII_ESTATUS); | ||
546 | |||
547 | if (val < 0) | ||
548 | return val; | ||
549 | |||
550 | if (val & ESTATUS_1000_TFULL) | ||
551 | features |= SUPPORTED_1000baseT_Full; | ||
552 | if (val & ESTATUS_1000_THALF) | ||
553 | features |= SUPPORTED_1000baseT_Half; | ||
554 | } | ||
555 | |||
556 | phydev->supported = features; | ||
557 | phydev->advertising = features; | ||
558 | |||
559 | return 0; | ||
560 | } | ||
561 | |||
562 | |||
563 | /* phy_probe | ||
564 | * | ||
565 | * description: Take care of setting up the phy_device structure, | ||
566 | * set the state to READY (the driver's init function should | ||
567 | * set it to STARTING if needed). | ||
568 | */ | ||
569 | static int phy_probe(struct device *dev) | ||
570 | { | ||
571 | struct phy_device *phydev; | ||
572 | struct phy_driver *phydrv; | ||
573 | struct device_driver *drv; | ||
574 | int err = 0; | ||
575 | |||
576 | phydev = to_phy_device(dev); | ||
577 | |||
578 | /* Make sure the driver is held. | ||
579 | * XXX -- Is this correct? */ | ||
580 | drv = get_driver(phydev->dev.driver); | ||
581 | phydrv = to_phy_driver(drv); | ||
582 | phydev->drv = phydrv; | ||
583 | |||
584 | /* Disable the interrupt if the PHY doesn't support it */ | ||
585 | if (!(phydrv->flags & PHY_HAS_INTERRUPT)) | ||
586 | phydev->irq = PHY_POLL; | ||
587 | |||
588 | spin_lock(&phydev->lock); | ||
589 | |||
590 | /* Start out supporting everything. Eventually, | ||
591 | * a controller will attach, and may modify one | ||
592 | * or both of these values */ | ||
593 | phydev->supported = phydrv->features; | ||
594 | phydev->advertising = phydrv->features; | ||
595 | |||
596 | /* Set the state to READY by default */ | ||
597 | phydev->state = PHY_READY; | ||
598 | |||
599 | if (phydev->drv->probe) | ||
600 | err = phydev->drv->probe(phydev); | ||
601 | |||
602 | spin_unlock(&phydev->lock); | ||
603 | |||
604 | if (err < 0) | ||
605 | return err; | ||
606 | |||
607 | if (phydev->drv->config_init) | ||
608 | err = phydev->drv->config_init(phydev); | ||
609 | |||
610 | return err; | ||
611 | } | ||
612 | |||
613 | static int phy_remove(struct device *dev) | ||
614 | { | ||
615 | struct phy_device *phydev; | ||
616 | |||
617 | phydev = to_phy_device(dev); | ||
618 | |||
619 | spin_lock(&phydev->lock); | ||
620 | phydev->state = PHY_DOWN; | ||
621 | spin_unlock(&phydev->lock); | ||
622 | |||
623 | if (phydev->drv->remove) | ||
624 | phydev->drv->remove(phydev); | ||
625 | |||
626 | put_driver(dev->driver); | ||
627 | phydev->drv = NULL; | ||
628 | |||
629 | return 0; | ||
630 | } | ||
631 | |||
632 | int phy_driver_register(struct phy_driver *new_driver) | ||
633 | { | ||
634 | int retval; | ||
635 | |||
636 | memset(&new_driver->driver, 0, sizeof(new_driver->driver)); | ||
637 | new_driver->driver.name = new_driver->name; | ||
638 | new_driver->driver.bus = &mdio_bus_type; | ||
639 | new_driver->driver.probe = phy_probe; | ||
640 | new_driver->driver.remove = phy_remove; | ||
641 | |||
642 | retval = driver_register(&new_driver->driver); | ||
643 | |||
644 | if (retval) { | ||
645 | printk(KERN_ERR "%s: Error %d in registering driver\n", | ||
646 | new_driver->name, retval); | ||
647 | |||
648 | return retval; | ||
649 | } | ||
650 | |||
651 | pr_info("%s: Registered new driver\n", new_driver->name); | ||
652 | |||
653 | return 0; | ||
654 | } | ||
655 | EXPORT_SYMBOL(phy_driver_register); | ||
656 | |||
657 | void phy_driver_unregister(struct phy_driver *drv) | ||
658 | { | ||
659 | driver_unregister(&drv->driver); | ||
660 | } | ||
661 | EXPORT_SYMBOL(phy_driver_unregister); | ||
662 | |||
663 | static struct phy_driver genphy_driver = { | ||
664 | .phy_id = 0xffffffff, | ||
665 | .phy_id_mask = 0xffffffff, | ||
666 | .name = "Generic PHY", | ||
667 | .config_init = genphy_config_init, | ||
668 | .features = 0, | ||
669 | .config_aneg = genphy_config_aneg, | ||
670 | .read_status = genphy_read_status, | ||
671 | .driver = {.owner= THIS_MODULE, }, | ||
672 | }; | ||
673 | |||
674 | static int __init phy_init(void) | ||
675 | { | ||
676 | int rc; | ||
677 | |||
678 | rc = mdio_bus_init(); | ||
679 | if (rc) | ||
680 | return rc; | ||
681 | |||
682 | rc = phy_driver_register(&genphy_driver); | ||
683 | if (rc) | ||
684 | mdio_bus_exit(); | ||
685 | |||
686 | return rc; | ||
687 | } | ||
688 | |||
689 | static void __exit phy_exit(void) | ||
690 | { | ||
691 | phy_driver_unregister(&genphy_driver); | ||
692 | mdio_bus_exit(); | ||
693 | } | ||
694 | |||
695 | subsys_initcall(phy_init); | ||
696 | module_exit(phy_exit); | ||
diff --git a/drivers/net/phy/qsemi.c b/drivers/net/phy/qsemi.c new file mode 100644 index 000000000000..d461ba457631 --- /dev/null +++ b/drivers/net/phy/qsemi.c | |||
@@ -0,0 +1,143 @@ | |||
1 | /* | ||
2 | * drivers/net/phy/qsemi.c | ||
3 | * | ||
4 | * Driver for Quality Semiconductor PHYs | ||
5 | * | ||
6 | * Author: Andy Fleming | ||
7 | * | ||
8 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify it | ||
11 | * under the terms of the GNU General Public License as published by the | ||
12 | * Free Software Foundation; either version 2 of the License, or (at your | ||
13 | * option) any later version. | ||
14 | * | ||
15 | */ | ||
16 | #include <linux/config.h> | ||
17 | #include <linux/kernel.h> | ||
18 | #include <linux/sched.h> | ||
19 | #include <linux/string.h> | ||
20 | #include <linux/errno.h> | ||
21 | #include <linux/unistd.h> | ||
22 | #include <linux/slab.h> | ||
23 | #include <linux/interrupt.h> | ||
24 | #include <linux/init.h> | ||
25 | #include <linux/delay.h> | ||
26 | #include <linux/netdevice.h> | ||
27 | #include <linux/etherdevice.h> | ||
28 | #include <linux/skbuff.h> | ||
29 | #include <linux/spinlock.h> | ||
30 | #include <linux/mm.h> | ||
31 | #include <linux/module.h> | ||
32 | #include <linux/version.h> | ||
33 | #include <linux/mii.h> | ||
34 | #include <linux/ethtool.h> | ||
35 | #include <linux/phy.h> | ||
36 | |||
37 | #include <asm/io.h> | ||
38 | #include <asm/irq.h> | ||
39 | #include <asm/uaccess.h> | ||
40 | |||
41 | /* ------------------------------------------------------------------------- */ | ||
42 | /* The Quality Semiconductor QS6612 is used on the RPX CLLF */ | ||
43 | |||
44 | /* register definitions */ | ||
45 | |||
46 | #define MII_QS6612_MCR 17 /* Mode Control Register */ | ||
47 | #define MII_QS6612_FTR 27 /* Factory Test Register */ | ||
48 | #define MII_QS6612_MCO 28 /* Misc. Control Register */ | ||
49 | #define MII_QS6612_ISR 29 /* Interrupt Source Register */ | ||
50 | #define MII_QS6612_IMR 30 /* Interrupt Mask Register */ | ||
51 | #define MII_QS6612_IMR_INIT 0x003a | ||
52 | #define MII_QS6612_PCR 31 /* 100BaseTx PHY Control Reg. */ | ||
53 | |||
54 | #define QS6612_PCR_AN_COMPLETE 0x1000 | ||
55 | #define QS6612_PCR_RLBEN 0x0200 | ||
56 | #define QS6612_PCR_DCREN 0x0100 | ||
57 | #define QS6612_PCR_4B5BEN 0x0040 | ||
58 | #define QS6612_PCR_TX_ISOLATE 0x0020 | ||
59 | #define QS6612_PCR_MLT3_DIS 0x0002 | ||
60 | #define QS6612_PCR_SCRM_DESCRM 0x0001 | ||
61 | |||
62 | MODULE_DESCRIPTION("Quality Semiconductor PHY driver"); | ||
63 | MODULE_AUTHOR("Andy Fleming"); | ||
64 | MODULE_LICENSE("GPL"); | ||
65 | |||
66 | /* Returns 0, unless there's a write error */ | ||
67 | static int qs6612_config_init(struct phy_device *phydev) | ||
68 | { | ||
69 | /* The PHY powers up isolated on the RPX, | ||
70 | * so send a command to allow operation. | ||
71 | * XXX - My docs indicate this should be 0x0940 | ||
72 | * ...or something. The current value sets three | ||
73 | * reserved bits, bit 11, which specifies it should be | ||
74 | * set to one, bit 10, which specifies it should be set | ||
75 | * to 0, and bit 7, which doesn't specify. However, my | ||
76 | * docs are preliminary, and I will leave it like this | ||
77 | * until someone more knowledgeable corrects me or it. | ||
78 | * -- Andy Fleming | ||
79 | */ | ||
80 | return phy_write(phydev, MII_QS6612_PCR, 0x0dc0); | ||
81 | } | ||
82 | |||
83 | static int qs6612_ack_interrupt(struct phy_device *phydev) | ||
84 | { | ||
85 | int err; | ||
86 | |||
87 | err = phy_read(phydev, MII_QS6612_ISR); | ||
88 | |||
89 | if (err < 0) | ||
90 | return err; | ||
91 | |||
92 | err = phy_read(phydev, MII_BMSR); | ||
93 | |||
94 | if (err < 0) | ||
95 | return err; | ||
96 | |||
97 | err = phy_read(phydev, MII_EXPANSION); | ||
98 | |||
99 | if (err < 0) | ||
100 | return err; | ||
101 | |||
102 | return 0; | ||
103 | } | ||
104 | |||
105 | static int qs6612_config_intr(struct phy_device *phydev) | ||
106 | { | ||
107 | int err; | ||
108 | if (phydev->interrupts == PHY_INTERRUPT_ENABLED) | ||
109 | err = phy_write(phydev, MII_QS6612_IMR, | ||
110 | MII_QS6612_IMR_INIT); | ||
111 | else | ||
112 | err = phy_write(phydev, MII_QS6612_IMR, 0); | ||
113 | |||
114 | return err; | ||
115 | |||
116 | } | ||
117 | |||
118 | static struct phy_driver qs6612_driver = { | ||
119 | .phy_id = 0x00181440, | ||
120 | .name = "QS6612", | ||
121 | .phy_id_mask = 0xfffffff0, | ||
122 | .features = PHY_BASIC_FEATURES, | ||
123 | .flags = PHY_HAS_INTERRUPT, | ||
124 | .config_init = qs6612_config_init, | ||
125 | .config_aneg = genphy_config_aneg, | ||
126 | .read_status = genphy_read_status, | ||
127 | .ack_interrupt = qs6612_ack_interrupt, | ||
128 | .config_intr = qs6612_config_intr, | ||
129 | .driver = { .owner = THIS_MODULE,}, | ||
130 | }; | ||
131 | |||
132 | static int __init qs6612_init(void) | ||
133 | { | ||
134 | return phy_driver_register(&qs6612_driver); | ||
135 | } | ||
136 | |||
137 | static void __exit qs6612_exit(void) | ||
138 | { | ||
139 | phy_driver_unregister(&qs6612_driver); | ||
140 | } | ||
141 | |||
142 | module_init(qs6612_init); | ||
143 | module_exit(qs6612_exit); | ||
diff --git a/drivers/net/r8169.c b/drivers/net/r8169.c index d5afe05cd826..f0471d102e3c 100644 --- a/drivers/net/r8169.c +++ b/drivers/net/r8169.c | |||
@@ -187,6 +187,7 @@ static struct pci_device_id rtl8169_pci_tbl[] = { | |||
187 | { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), }, | 187 | { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), }, |
188 | { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), }, | 188 | { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), }, |
189 | { PCI_DEVICE(0x16ec, 0x0116), }, | 189 | { PCI_DEVICE(0x16ec, 0x0116), }, |
190 | { PCI_VENDOR_ID_LINKSYS, 0x1032, PCI_ANY_ID, 0x0024, }, | ||
190 | {0,}, | 191 | {0,}, |
191 | }; | 192 | }; |
192 | 193 | ||
diff --git a/drivers/net/s2io-regs.h b/drivers/net/s2io-regs.h index 7092ca6b277e..2234a8f05eb2 100644 --- a/drivers/net/s2io-regs.h +++ b/drivers/net/s2io-regs.h | |||
@@ -62,6 +62,7 @@ typedef struct _XENA_dev_config { | |||
62 | #define ADAPTER_STATUS_RMAC_REMOTE_FAULT BIT(6) | 62 | #define ADAPTER_STATUS_RMAC_REMOTE_FAULT BIT(6) |
63 | #define ADAPTER_STATUS_RMAC_LOCAL_FAULT BIT(7) | 63 | #define ADAPTER_STATUS_RMAC_LOCAL_FAULT BIT(7) |
64 | #define ADAPTER_STATUS_RMAC_PCC_IDLE vBIT(0xFF,8,8) | 64 | #define ADAPTER_STATUS_RMAC_PCC_IDLE vBIT(0xFF,8,8) |
65 | #define ADAPTER_STATUS_RMAC_PCC_FOUR_IDLE vBIT(0x0F,8,8) | ||
65 | #define ADAPTER_STATUS_RC_PRC_QUIESCENT vBIT(0xFF,16,8) | 66 | #define ADAPTER_STATUS_RC_PRC_QUIESCENT vBIT(0xFF,16,8) |
66 | #define ADAPTER_STATUS_MC_DRAM_READY BIT(24) | 67 | #define ADAPTER_STATUS_MC_DRAM_READY BIT(24) |
67 | #define ADAPTER_STATUS_MC_QUEUES_READY BIT(25) | 68 | #define ADAPTER_STATUS_MC_QUEUES_READY BIT(25) |
@@ -77,21 +78,34 @@ typedef struct _XENA_dev_config { | |||
77 | #define ADAPTER_ECC_EN BIT(55) | 78 | #define ADAPTER_ECC_EN BIT(55) |
78 | 79 | ||
79 | u64 serr_source; | 80 | u64 serr_source; |
80 | #define SERR_SOURCE_PIC BIT(0) | 81 | #define SERR_SOURCE_PIC BIT(0) |
81 | #define SERR_SOURCE_TXDMA BIT(1) | 82 | #define SERR_SOURCE_TXDMA BIT(1) |
82 | #define SERR_SOURCE_RXDMA BIT(2) | 83 | #define SERR_SOURCE_RXDMA BIT(2) |
83 | #define SERR_SOURCE_MAC BIT(3) | 84 | #define SERR_SOURCE_MAC BIT(3) |
84 | #define SERR_SOURCE_MC BIT(4) | 85 | #define SERR_SOURCE_MC BIT(4) |
85 | #define SERR_SOURCE_XGXS BIT(5) | 86 | #define SERR_SOURCE_XGXS BIT(5) |
86 | #define SERR_SOURCE_ANY (SERR_SOURCE_PIC | \ | 87 | #define SERR_SOURCE_ANY (SERR_SOURCE_PIC | \ |
87 | SERR_SOURCE_TXDMA | \ | 88 | SERR_SOURCE_TXDMA | \ |
88 | SERR_SOURCE_RXDMA | \ | 89 | SERR_SOURCE_RXDMA | \ |
89 | SERR_SOURCE_MAC | \ | 90 | SERR_SOURCE_MAC | \ |
90 | SERR_SOURCE_MC | \ | 91 | SERR_SOURCE_MC | \ |
91 | SERR_SOURCE_XGXS) | 92 | SERR_SOURCE_XGXS) |
92 | 93 | ||
93 | 94 | u64 pci_mode; | |
94 | u8 unused_0[0x800 - 0x120]; | 95 | #define GET_PCI_MODE(val) ((val & vBIT(0xF, 0, 4)) >> 60) |
96 | #define PCI_MODE_PCI_33 0 | ||
97 | #define PCI_MODE_PCI_66 0x1 | ||
98 | #define PCI_MODE_PCIX_M1_66 0x2 | ||
99 | #define PCI_MODE_PCIX_M1_100 0x3 | ||
100 | #define PCI_MODE_PCIX_M1_133 0x4 | ||
101 | #define PCI_MODE_PCIX_M2_66 0x5 | ||
102 | #define PCI_MODE_PCIX_M2_100 0x6 | ||
103 | #define PCI_MODE_PCIX_M2_133 0x7 | ||
104 | #define PCI_MODE_UNSUPPORTED BIT(0) | ||
105 | #define PCI_MODE_32_BITS BIT(8) | ||
106 | #define PCI_MODE_UNKNOWN_MODE BIT(9) | ||
107 | |||
108 | u8 unused_0[0x800 - 0x128]; | ||
95 | 109 | ||
96 | /* PCI-X Controller registers */ | 110 | /* PCI-X Controller registers */ |
97 | u64 pic_int_status; | 111 | u64 pic_int_status; |
@@ -153,7 +167,11 @@ typedef struct _XENA_dev_config { | |||
153 | u8 unused4[0x08]; | 167 | u8 unused4[0x08]; |
154 | 168 | ||
155 | u64 gpio_int_reg; | 169 | u64 gpio_int_reg; |
170 | #define GPIO_INT_REG_LINK_DOWN BIT(1) | ||
171 | #define GPIO_INT_REG_LINK_UP BIT(2) | ||
156 | u64 gpio_int_mask; | 172 | u64 gpio_int_mask; |
173 | #define GPIO_INT_MASK_LINK_DOWN BIT(1) | ||
174 | #define GPIO_INT_MASK_LINK_UP BIT(2) | ||
157 | u64 gpio_alarms; | 175 | u64 gpio_alarms; |
158 | 176 | ||
159 | u8 unused5[0x38]; | 177 | u8 unused5[0x38]; |
@@ -223,19 +241,16 @@ typedef struct _XENA_dev_config { | |||
223 | u64 xmsi_data; | 241 | u64 xmsi_data; |
224 | 242 | ||
225 | u64 rx_mat; | 243 | u64 rx_mat; |
244 | #define RX_MAT_SET(ring, msi) vBIT(msi, (8 * ring), 8) | ||
226 | 245 | ||
227 | u8 unused6[0x8]; | 246 | u8 unused6[0x8]; |
228 | 247 | ||
229 | u64 tx_mat0_7; | 248 | u64 tx_mat0_n[0x8]; |
230 | u64 tx_mat8_15; | 249 | #define TX_MAT_SET(fifo, msi) vBIT(msi, (8 * fifo), 8) |
231 | u64 tx_mat16_23; | ||
232 | u64 tx_mat24_31; | ||
233 | u64 tx_mat32_39; | ||
234 | u64 tx_mat40_47; | ||
235 | u64 tx_mat48_55; | ||
236 | u64 tx_mat56_63; | ||
237 | 250 | ||
238 | u8 unused_1[0x10]; | 251 | u8 unused_1[0x8]; |
252 | u64 stat_byte_cnt; | ||
253 | #define STAT_BC(n) vBIT(n,4,12) | ||
239 | 254 | ||
240 | /* Automated statistics collection */ | 255 | /* Automated statistics collection */ |
241 | u64 stat_cfg; | 256 | u64 stat_cfg; |
@@ -246,6 +261,7 @@ typedef struct _XENA_dev_config { | |||
246 | #define STAT_TRSF_PER(n) TBD | 261 | #define STAT_TRSF_PER(n) TBD |
247 | #define PER_SEC 0x208d5 | 262 | #define PER_SEC 0x208d5 |
248 | #define SET_UPDT_PERIOD(n) vBIT((PER_SEC*n),32,32) | 263 | #define SET_UPDT_PERIOD(n) vBIT((PER_SEC*n),32,32) |
264 | #define SET_UPDT_CLICKS(val) vBIT(val, 32, 32) | ||
249 | 265 | ||
250 | u64 stat_addr; | 266 | u64 stat_addr; |
251 | 267 | ||
@@ -267,8 +283,15 @@ typedef struct _XENA_dev_config { | |||
267 | 283 | ||
268 | u64 gpio_control; | 284 | u64 gpio_control; |
269 | #define GPIO_CTRL_GPIO_0 BIT(8) | 285 | #define GPIO_CTRL_GPIO_0 BIT(8) |
286 | u64 misc_control; | ||
287 | #define MISC_LINK_STABILITY_PRD(val) vBIT(val,29,3) | ||
288 | |||
289 | u8 unused7_1[0x240 - 0x208]; | ||
290 | |||
291 | u64 wreq_split_mask; | ||
292 | #define WREQ_SPLIT_MASK_SET_MASK(val) vBIT(val, 52, 12) | ||
270 | 293 | ||
271 | u8 unused7[0x600]; | 294 | u8 unused7_2[0x800 - 0x248]; |
272 | 295 | ||
273 | /* TxDMA registers */ | 296 | /* TxDMA registers */ |
274 | u64 txdma_int_status; | 297 | u64 txdma_int_status; |
@@ -290,6 +313,7 @@ typedef struct _XENA_dev_config { | |||
290 | 313 | ||
291 | u64 pcc_err_reg; | 314 | u64 pcc_err_reg; |
292 | #define PCC_FB_ECC_DB_ERR vBIT(0xFF, 16, 8) | 315 | #define PCC_FB_ECC_DB_ERR vBIT(0xFF, 16, 8) |
316 | #define PCC_ENABLE_FOUR vBIT(0x0F,0,8) | ||
293 | 317 | ||
294 | u64 pcc_err_mask; | 318 | u64 pcc_err_mask; |
295 | u64 pcc_err_alarm; | 319 | u64 pcc_err_alarm; |
@@ -468,6 +492,7 @@ typedef struct _XENA_dev_config { | |||
468 | #define PRC_CTRL_NO_SNOOP (BIT(22)|BIT(23)) | 492 | #define PRC_CTRL_NO_SNOOP (BIT(22)|BIT(23)) |
469 | #define PRC_CTRL_NO_SNOOP_DESC BIT(22) | 493 | #define PRC_CTRL_NO_SNOOP_DESC BIT(22) |
470 | #define PRC_CTRL_NO_SNOOP_BUFF BIT(23) | 494 | #define PRC_CTRL_NO_SNOOP_BUFF BIT(23) |
495 | #define PRC_CTRL_BIMODAL_INTERRUPT BIT(37) | ||
471 | #define PRC_CTRL_RXD_BACKOFF_INTERVAL(val) vBIT(val,40,24) | 496 | #define PRC_CTRL_RXD_BACKOFF_INTERVAL(val) vBIT(val,40,24) |
472 | 497 | ||
473 | u64 prc_alarm_action; | 498 | u64 prc_alarm_action; |
@@ -691,6 +716,10 @@ typedef struct _XENA_dev_config { | |||
691 | #define MC_ERR_REG_MIRI_CRI_ERR_0 BIT(22) | 716 | #define MC_ERR_REG_MIRI_CRI_ERR_0 BIT(22) |
692 | #define MC_ERR_REG_MIRI_CRI_ERR_1 BIT(23) | 717 | #define MC_ERR_REG_MIRI_CRI_ERR_1 BIT(23) |
693 | #define MC_ERR_REG_SM_ERR BIT(31) | 718 | #define MC_ERR_REG_SM_ERR BIT(31) |
719 | #define MC_ERR_REG_ECC_ALL_SNG (BIT(6) | \ | ||
720 | BIT(7) | BIT(17) | BIT(19)) | ||
721 | #define MC_ERR_REG_ECC_ALL_DBL (BIT(14) | \ | ||
722 | BIT(15) | BIT(18) | BIT(20)) | ||
694 | u64 mc_err_mask; | 723 | u64 mc_err_mask; |
695 | u64 mc_err_alarm; | 724 | u64 mc_err_alarm; |
696 | 725 | ||
@@ -736,7 +765,19 @@ typedef struct _XENA_dev_config { | |||
736 | u64 mc_rldram_test_d1; | 765 | u64 mc_rldram_test_d1; |
737 | u8 unused24[0x300 - 0x288]; | 766 | u8 unused24[0x300 - 0x288]; |
738 | u64 mc_rldram_test_d2; | 767 | u64 mc_rldram_test_d2; |
739 | u8 unused25[0x700 - 0x308]; | 768 | |
769 | u8 unused24_1[0x360 - 0x308]; | ||
770 | u64 mc_rldram_ctrl; | ||
771 | #define MC_RLDRAM_ENABLE_ODT BIT(7) | ||
772 | |||
773 | u8 unused24_2[0x640 - 0x368]; | ||
774 | u64 mc_rldram_ref_per_herc; | ||
775 | #define MC_RLDRAM_SET_REF_PERIOD(val) vBIT(val, 0, 16) | ||
776 | |||
777 | u8 unused24_3[0x660 - 0x648]; | ||
778 | u64 mc_rldram_mrs_herc; | ||
779 | |||
780 | u8 unused25[0x700 - 0x668]; | ||
740 | u64 mc_debug_ctrl; | 781 | u64 mc_debug_ctrl; |
741 | 782 | ||
742 | u8 unused26[0x3000 - 0x2f08]; | 783 | u8 unused26[0x3000 - 0x2f08]; |
diff --git a/drivers/net/s2io.c b/drivers/net/s2io.c index ea638b162d3f..7ca78228b104 100644 --- a/drivers/net/s2io.c +++ b/drivers/net/s2io.c | |||
@@ -11,29 +11,28 @@ | |||
11 | * See the file COPYING in this distribution for more information. | 11 | * See the file COPYING in this distribution for more information. |
12 | * | 12 | * |
13 | * Credits: | 13 | * Credits: |
14 | * Jeff Garzik : For pointing out the improper error condition | 14 | * Jeff Garzik : For pointing out the improper error condition |
15 | * check in the s2io_xmit routine and also some | 15 | * check in the s2io_xmit routine and also some |
16 | * issues in the Tx watchdog function. Also for | 16 | * issues in the Tx watchdog function. Also for |
17 | * patiently answering all those innumerable | 17 | * patiently answering all those innumerable |
18 | * questions regarding the 2.6 porting issues. | 18 | * questions regarding the 2.6 porting issues. |
19 | * Stephen Hemminger : Providing proper 2.6 porting mechanism for some | 19 | * Stephen Hemminger : Providing proper 2.6 porting mechanism for some |
20 | * macros available only in 2.6 Kernel. | 20 | * macros available only in 2.6 Kernel. |
21 | * Francois Romieu : For pointing out all code part that were | 21 | * Francois Romieu : For pointing out all code part that were |
22 | * deprecated and also styling related comments. | 22 | * deprecated and also styling related comments. |
23 | * Grant Grundler : For helping me get rid of some Architecture | 23 | * Grant Grundler : For helping me get rid of some Architecture |
24 | * dependent code. | 24 | * dependent code. |
25 | * Christopher Hellwig : Some more 2.6 specific issues in the driver. | 25 | * Christopher Hellwig : Some more 2.6 specific issues in the driver. |
26 | * | 26 | * |
27 | * The module loadable parameters that are supported by the driver and a brief | 27 | * The module loadable parameters that are supported by the driver and a brief |
28 | * explanation of all the variables. | 28 | * explanation of all the variables. |
29 | * rx_ring_num : This can be used to program the number of receive rings used | 29 | * rx_ring_num : This can be used to program the number of receive rings used |
30 | * in the driver. | 30 | * in the driver. |
31 | * rx_ring_len: This defines the number of descriptors each ring can have. This | 31 | * rx_ring_len: This defines the number of descriptors each ring can have. This |
32 | * is also an array of size 8. | 32 | * is also an array of size 8. |
33 | * tx_fifo_num: This defines the number of Tx FIFOs that are used in the driver. | 33 | * tx_fifo_num: This defines the number of Tx FIFOs that are used in the driver. |
34 | * tx_fifo_len: This too is an array of 8. Each element defines the number of | 34 | * tx_fifo_len: This too is an array of 8. Each element defines the number of |
35 | * Tx descriptors that can be associated with each corresponding FIFO. | 35 | * Tx descriptors that can be associated with each corresponding FIFO. |
36 | * in PCI Configuration space. | ||
37 | ************************************************************************/ | 36 | ************************************************************************/ |
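/*
 * Illustrative only, not part of this patch: assuming the parameters above
 * are exposed through module_param()/module_param_array(), a load line
 * could look like
 *
 *	modprobe s2io tx_fifo_num=2 tx_fifo_len=1024,1024 rx_ring_num=2
 *
 * with array-valued parameters taking comma-separated per-FIFO/per-ring
 * values. The numbers are examples, not recommended settings.
 */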
38 | 37 | ||
39 | #include <linux/config.h> | 38 | #include <linux/config.h> |
@@ -56,27 +55,39 @@ | |||
56 | #include <linux/ethtool.h> | 55 | #include <linux/ethtool.h> |
57 | #include <linux/version.h> | 56 | #include <linux/version.h> |
58 | #include <linux/workqueue.h> | 57 | #include <linux/workqueue.h> |
58 | #include <linux/if_vlan.h> | ||
59 | 59 | ||
60 | #include <asm/io.h> | ||
61 | #include <asm/system.h> | 60 | #include <asm/system.h> |
62 | #include <asm/uaccess.h> | 61 | #include <asm/uaccess.h> |
62 | #include <asm/io.h> | ||
63 | 63 | ||
64 | /* local include */ | 64 | /* local include */ |
65 | #include "s2io.h" | 65 | #include "s2io.h" |
66 | #include "s2io-regs.h" | 66 | #include "s2io-regs.h" |
67 | 67 | ||
68 | /* S2io Driver name & version. */ | 68 | /* S2io Driver name & version. */ |
69 | static char s2io_driver_name[] = "s2io"; | 69 | static char s2io_driver_name[] = "Neterion"; |
70 | static char s2io_driver_version[] = "Version 1.7.7.1"; | 70 | static char s2io_driver_version[] = "Version 2.0.3.1"; |
71 | |||
72 | static inline int RXD_IS_UP2DT(RxD_t *rxdp) | ||
73 | { | ||
74 | int ret; | ||
75 | |||
76 | ret = ((!(rxdp->Control_1 & RXD_OWN_XENA)) && | ||
77 | (GET_RXD_MARKER(rxdp->Control_2) != THE_RXD_MARK)); | ||
71 | 78 | ||
72 | /* | 79 | return ret; |
80 | } | ||
81 | |||
82 | /* | ||
73 | * Cards with following subsystem_id have a link state indication | 83 | * Cards with following subsystem_id have a link state indication |
74 | * problem, 600B, 600C, 600D, 640B, 640C and 640D. | 84 | * problem, 600B, 600C, 600D, 640B, 640C and 640D. |
75 | * macro below identifies these cards given the subsystem_id. | 85 | * macro below identifies these cards given the subsystem_id. |
76 | */ | 86 | */ |
77 | #define CARDS_WITH_FAULTY_LINK_INDICATORS(subid) \ | 87 | #define CARDS_WITH_FAULTY_LINK_INDICATORS(dev_type, subid) \ |
78 | (((subid >= 0x600B) && (subid <= 0x600D)) || \ | 88 | (dev_type == XFRAME_I_DEVICE) ? \ |
79 | ((subid >= 0x640B) && (subid <= 0x640D))) ? 1 : 0 | 89 | ((((subid >= 0x600B) && (subid <= 0x600D)) || \ |
90 | ((subid >= 0x640B) && (subid <= 0x640D))) ? 1 : 0) : 0 | ||
80 | 91 | ||
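/*
 * Worked example (editorial note): CARDS_WITH_FAULTY_LINK_INDICATORS(
 * XFRAME_I_DEVICE, 0x640C) evaluates to 1 because 0x640C falls in the
 * 0x640B-0x640D range, while any subsystem id on an XFRAME_II_DEVICE
 * evaluates to 0, since only Xframe I boards have the indication problem.
 */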
81 | #define LINK_IS_UP(val64) (!(val64 & (ADAPTER_STATUS_RMAC_REMOTE_FAULT | \ | 92 | #define LINK_IS_UP(val64) (!(val64 & (ADAPTER_STATUS_RMAC_REMOTE_FAULT | \ |
82 | ADAPTER_STATUS_RMAC_LOCAL_FAULT))) | 93 | ADAPTER_STATUS_RMAC_LOCAL_FAULT))) |
@@ -86,9 +97,12 @@ static char s2io_driver_version[] = "Version 1.7.7.1"; | |||
86 | static inline int rx_buffer_level(nic_t * sp, int rxb_size, int ring) | 97 | static inline int rx_buffer_level(nic_t * sp, int rxb_size, int ring) |
87 | { | 98 | { |
88 | int level = 0; | 99 | int level = 0; |
89 | if ((sp->pkt_cnt[ring] - rxb_size) > 16) { | 100 | mac_info_t *mac_control; |
101 | |||
102 | mac_control = &sp->mac_control; | ||
103 | if ((mac_control->rings[ring].pkt_cnt - rxb_size) > 16) { | ||
90 | level = LOW; | 104 | level = LOW; |
91 | if ((sp->pkt_cnt[ring] - rxb_size) < MAX_RXDS_PER_BLOCK) { | 105 | if (rxb_size <= MAX_RXDS_PER_BLOCK) { |
92 | level = PANIC; | 106 | level = PANIC; |
93 | } | 107 | } |
94 | } | 108 | } |
@@ -145,6 +159,9 @@ static char ethtool_stats_keys[][ETH_GSTRING_LEN] = { | |||
145 | {"rmac_pause_cnt"}, | 159 | {"rmac_pause_cnt"}, |
146 | {"rmac_accepted_ip"}, | 160 | {"rmac_accepted_ip"}, |
147 | {"rmac_err_tcp"}, | 161 | {"rmac_err_tcp"}, |
162 | {"\n DRIVER STATISTICS"}, | ||
163 | {"single_bit_ecc_errs"}, | ||
164 | {"double_bit_ecc_errs"}, | ||
148 | }; | 165 | }; |
149 | 166 | ||
150 | #define S2IO_STAT_LEN sizeof(ethtool_stats_keys)/ ETH_GSTRING_LEN | 167 | #define S2IO_STAT_LEN sizeof(ethtool_stats_keys)/ ETH_GSTRING_LEN |
@@ -153,8 +170,37 @@ static char ethtool_stats_keys[][ETH_GSTRING_LEN] = { | |||
153 | #define S2IO_TEST_LEN sizeof(s2io_gstrings) / ETH_GSTRING_LEN | 170 | #define S2IO_TEST_LEN sizeof(s2io_gstrings) / ETH_GSTRING_LEN |
154 | #define S2IO_STRINGS_LEN S2IO_TEST_LEN * ETH_GSTRING_LEN | 171 | #define S2IO_STRINGS_LEN S2IO_TEST_LEN * ETH_GSTRING_LEN |
155 | 172 | ||
173 | #define S2IO_TIMER_CONF(timer, handle, arg, exp) \ | ||
174 | init_timer(&timer); \ | ||
175 | timer.function = handle; \ | ||
176 | timer.data = (unsigned long) arg; \ | ||
177 | mod_timer(&timer, (jiffies + exp)) \ | ||
178 | |||
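/*
 * Illustrative use of the macro above (handler and argument names are
 * examples only):
 *
 *	S2IO_TIMER_CONF(sp->alarm_timer, s2io_alarm_handle,
 *			(unsigned long) sp, (HZ / 2));
 *
 * expands to init_timer()/mod_timer(), arming the timer to fire the given
 * handler with the given argument HZ/2 jiffies from now.
 */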
179 | /* Add the vlan */ | ||
180 | static void s2io_vlan_rx_register(struct net_device *dev, | ||
181 | struct vlan_group *grp) | ||
182 | { | ||
183 | nic_t *nic = dev->priv; | ||
184 | unsigned long flags; | ||
185 | |||
186 | spin_lock_irqsave(&nic->tx_lock, flags); | ||
187 | nic->vlgrp = grp; | ||
188 | spin_unlock_irqrestore(&nic->tx_lock, flags); | ||
189 | } | ||
190 | |||
191 | /* Unregister the vlan */ | ||
192 | static void s2io_vlan_rx_kill_vid(struct net_device *dev, unsigned long vid) | ||
193 | { | ||
194 | nic_t *nic = dev->priv; | ||
195 | unsigned long flags; | ||
196 | |||
197 | spin_lock_irqsave(&nic->tx_lock, flags); | ||
198 | if (nic->vlgrp) | ||
199 | nic->vlgrp->vlan_devices[vid] = NULL; | ||
200 | spin_unlock_irqrestore(&nic->tx_lock, flags); | ||
201 | } | ||
156 | 202 | ||
157 | /* | 203 | /* |
158 | * Constants to be programmed into the Xena's registers, to configure | 204 | * Constants to be programmed into the Xena's registers, to configure |
159 | * the XAUI. | 205 | * the XAUI. |
160 | */ | 206 | */ |
@@ -162,7 +208,28 @@ static char ethtool_stats_keys[][ETH_GSTRING_LEN] = { | |||
162 | #define SWITCH_SIGN 0xA5A5A5A5A5A5A5A5ULL | 208 | #define SWITCH_SIGN 0xA5A5A5A5A5A5A5A5ULL |
163 | #define END_SIGN 0x0 | 209 | #define END_SIGN 0x0 |
164 | 210 | ||
165 | static u64 default_mdio_cfg[] = { | 211 | static u64 herc_act_dtx_cfg[] = { |
212 | /* Set address */ | ||
213 | 0x8000051536750000ULL, 0x80000515367500E0ULL, | ||
214 | /* Write data */ | ||
215 | 0x8000051536750004ULL, 0x80000515367500E4ULL, | ||
216 | /* Set address */ | ||
217 | 0x80010515003F0000ULL, 0x80010515003F00E0ULL, | ||
218 | /* Write data */ | ||
219 | 0x80010515003F0004ULL, 0x80010515003F00E4ULL, | ||
220 | /* Set address */ | ||
221 | 0x801205150D440000ULL, 0x801205150D4400E0ULL, | ||
222 | /* Write data */ | ||
223 | 0x801205150D440004ULL, 0x801205150D4400E4ULL, | ||
224 | /* Set address */ | ||
225 | 0x80020515F2100000ULL, 0x80020515F21000E0ULL, | ||
226 | /* Write data */ | ||
227 | 0x80020515F2100004ULL, 0x80020515F21000E4ULL, | ||
228 | /* Done */ | ||
229 | END_SIGN | ||
230 | }; | ||
231 | |||
232 | static u64 xena_mdio_cfg[] = { | ||
166 | /* Reset PMA PLL */ | 233 | /* Reset PMA PLL */ |
167 | 0xC001010000000000ULL, 0xC0010100000000E0ULL, | 234 | 0xC001010000000000ULL, 0xC0010100000000E0ULL, |
168 | 0xC0010100008000E4ULL, | 235 | 0xC0010100008000E4ULL, |
@@ -172,7 +239,7 @@ static u64 default_mdio_cfg[] = { | |||
172 | END_SIGN | 239 | END_SIGN |
173 | }; | 240 | }; |
174 | 241 | ||
175 | static u64 default_dtx_cfg[] = { | 242 | static u64 xena_dtx_cfg[] = { |
176 | 0x8000051500000000ULL, 0x80000515000000E0ULL, | 243 | 0x8000051500000000ULL, 0x80000515000000E0ULL, |
177 | 0x80000515D93500E4ULL, 0x8001051500000000ULL, | 244 | 0x80000515D93500E4ULL, 0x8001051500000000ULL, |
178 | 0x80010515000000E0ULL, 0x80010515001E00E4ULL, | 245 | 0x80010515000000E0ULL, 0x80010515001E00E4ULL, |
@@ -196,8 +263,7 @@ static u64 default_dtx_cfg[] = { | |||
196 | END_SIGN | 263 | END_SIGN |
197 | }; | 264 | }; |
198 | 265 | ||
199 | 266 | /* | |
200 | /* | ||
201 | * Constants for Fixing the MacAddress problem seen mostly on | 267 | * Constants for Fixing the MacAddress problem seen mostly on |
202 | * Alpha machines. | 268 | * Alpha machines. |
203 | */ | 269 | */ |
@@ -226,20 +292,25 @@ static unsigned int tx_fifo_len[MAX_TX_FIFOS] = | |||
226 | static unsigned int rx_ring_num = 1; | 292 | static unsigned int rx_ring_num = 1; |
227 | static unsigned int rx_ring_sz[MAX_RX_RINGS] = | 293 | static unsigned int rx_ring_sz[MAX_RX_RINGS] = |
228 | {[0 ...(MAX_RX_RINGS - 1)] = 0 }; | 294 | {[0 ...(MAX_RX_RINGS - 1)] = 0 }; |
229 | static unsigned int Stats_refresh_time = 4; | 295 | static unsigned int rts_frm_len[MAX_RX_RINGS] = |
296 | {[0 ...(MAX_RX_RINGS - 1)] = 0 }; | ||
297 | static unsigned int use_continuous_tx_intrs = 1; | ||
230 | static unsigned int rmac_pause_time = 65535; | 298 | static unsigned int rmac_pause_time = 65535; |
231 | static unsigned int mc_pause_threshold_q0q3 = 187; | 299 | static unsigned int mc_pause_threshold_q0q3 = 187; |
232 | static unsigned int mc_pause_threshold_q4q7 = 187; | 300 | static unsigned int mc_pause_threshold_q4q7 = 187; |
233 | static unsigned int shared_splits; | 301 | static unsigned int shared_splits; |
234 | static unsigned int tmac_util_period = 5; | 302 | static unsigned int tmac_util_period = 5; |
235 | static unsigned int rmac_util_period = 5; | 303 | static unsigned int rmac_util_period = 5; |
304 | static unsigned int bimodal = 0; | ||
236 | #ifndef CONFIG_S2IO_NAPI | 305 | #ifndef CONFIG_S2IO_NAPI |
237 | static unsigned int indicate_max_pkts; | 306 | static unsigned int indicate_max_pkts; |
238 | #endif | 307 | #endif |
308 | /* Frequency of Rx desc syncs expressed as power of 2 */ | ||
309 | static unsigned int rxsync_frequency = 3; | ||
239 | 310 | ||
240 | /* | 311 | /* |
241 | * S2IO device table. | 312 | * S2IO device table. |
242 | * This table lists all the devices that this driver supports. | 313 | * This table lists all the devices that this driver supports. |
243 | */ | 314 | */ |
244 | static struct pci_device_id s2io_tbl[] __devinitdata = { | 315 | static struct pci_device_id s2io_tbl[] __devinitdata = { |
245 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_S2IO_WIN, | 316 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_S2IO_WIN, |
@@ -247,9 +318,9 @@ static struct pci_device_id s2io_tbl[] __devinitdata = { | |||
247 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_S2IO_UNI, | 318 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_S2IO_UNI, |
248 | PCI_ANY_ID, PCI_ANY_ID}, | 319 | PCI_ANY_ID, PCI_ANY_ID}, |
249 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_HERC_WIN, | 320 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_HERC_WIN, |
250 | PCI_ANY_ID, PCI_ANY_ID}, | 321 | PCI_ANY_ID, PCI_ANY_ID}, |
251 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_HERC_UNI, | 322 | {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_HERC_UNI, |
252 | PCI_ANY_ID, PCI_ANY_ID}, | 323 | PCI_ANY_ID, PCI_ANY_ID}, |
253 | {0,} | 324 | {0,} |
254 | }; | 325 | }; |
255 | 326 | ||
@@ -268,8 +339,8 @@ static struct pci_driver s2io_driver = { | |||
268 | /** | 339 | /** |
269 | * init_shared_mem - Allocation and Initialization of Memory | 340 | * init_shared_mem - Allocation and Initialization of Memory |
270 | * @nic: Device private variable. | 341 | * @nic: Device private variable. |
271 | * Description: The function allocates all the memory areas shared | 342 | * Description: The function allocates all the memory areas shared |
272 | * between the NIC and the driver. This includes Tx descriptors, | 343 | * between the NIC and the driver. This includes Tx descriptors, |
273 | * Rx descriptors and the statistics block. | 344 | * Rx descriptors and the statistics block. |
274 | */ | 345 | */ |
275 | 346 | ||
@@ -279,11 +350,11 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
279 | void *tmp_v_addr, *tmp_v_addr_next; | 350 | void *tmp_v_addr, *tmp_v_addr_next; |
280 | dma_addr_t tmp_p_addr, tmp_p_addr_next; | 351 | dma_addr_t tmp_p_addr, tmp_p_addr_next; |
281 | RxD_block_t *pre_rxd_blk = NULL; | 352 | RxD_block_t *pre_rxd_blk = NULL; |
282 | int i, j, blk_cnt; | 353 | int i, j, blk_cnt, rx_sz, tx_sz; |
283 | int lst_size, lst_per_page; | 354 | int lst_size, lst_per_page; |
284 | struct net_device *dev = nic->dev; | 355 | struct net_device *dev = nic->dev; |
285 | #ifdef CONFIG_2BUFF_MODE | 356 | #ifdef CONFIG_2BUFF_MODE |
286 | unsigned long tmp; | 357 | u64 tmp; |
287 | buffAdd_t *ba; | 358 | buffAdd_t *ba; |
288 | #endif | 359 | #endif |
289 | 360 | ||
@@ -300,36 +371,41 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
300 | size += config->tx_cfg[i].fifo_len; | 371 | size += config->tx_cfg[i].fifo_len; |
301 | } | 372 | } |
302 | if (size > MAX_AVAILABLE_TXDS) { | 373 | if (size > MAX_AVAILABLE_TXDS) { |
303 | DBG_PRINT(ERR_DBG, "%s: Total number of Tx FIFOs ", | 374 | DBG_PRINT(ERR_DBG, "%s: Requested TxDs too high, ", |
304 | dev->name); | 375 | __FUNCTION__); |
305 | DBG_PRINT(ERR_DBG, "exceeds the maximum value "); | 376 | DBG_PRINT(ERR_DBG, "Requested: %d, max supported: 8192\n", size); |
306 | DBG_PRINT(ERR_DBG, "that can be used\n"); | ||
307 | return FAILURE; | 377 | return FAILURE; |
308 | } | 378 | } |
309 | 379 | ||
310 | lst_size = (sizeof(TxD_t) * config->max_txds); | 380 | lst_size = (sizeof(TxD_t) * config->max_txds); |
381 | tx_sz = lst_size * size; | ||
311 | lst_per_page = PAGE_SIZE / lst_size; | 382 | lst_per_page = PAGE_SIZE / lst_size; |
312 | 383 | ||
313 | for (i = 0; i < config->tx_fifo_num; i++) { | 384 | for (i = 0; i < config->tx_fifo_num; i++) { |
314 | int fifo_len = config->tx_cfg[i].fifo_len; | 385 | int fifo_len = config->tx_cfg[i].fifo_len; |
315 | int list_holder_size = fifo_len * sizeof(list_info_hold_t); | 386 | int list_holder_size = fifo_len * sizeof(list_info_hold_t); |
316 | nic->list_info[i] = kmalloc(list_holder_size, GFP_KERNEL); | 387 | mac_control->fifos[i].list_info = kmalloc(list_holder_size, |
317 | if (!nic->list_info[i]) { | 388 | GFP_KERNEL); |
389 | if (!mac_control->fifos[i].list_info) { | ||
318 | DBG_PRINT(ERR_DBG, | 390 | DBG_PRINT(ERR_DBG, |
319 | "Malloc failed for list_info\n"); | 391 | "Malloc failed for list_info\n"); |
320 | return -ENOMEM; | 392 | return -ENOMEM; |
321 | } | 393 | } |
322 | memset(nic->list_info[i], 0, list_holder_size); | 394 | memset(mac_control->fifos[i].list_info, 0, list_holder_size); |
323 | } | 395 | } |
324 | for (i = 0; i < config->tx_fifo_num; i++) { | 396 | for (i = 0; i < config->tx_fifo_num; i++) { |
325 | int page_num = TXD_MEM_PAGE_CNT(config->tx_cfg[i].fifo_len, | 397 | int page_num = TXD_MEM_PAGE_CNT(config->tx_cfg[i].fifo_len, |
326 | lst_per_page); | 398 | lst_per_page); |
327 | mac_control->tx_curr_put_info[i].offset = 0; | 399 | mac_control->fifos[i].tx_curr_put_info.offset = 0; |
328 | mac_control->tx_curr_put_info[i].fifo_len = | 400 | mac_control->fifos[i].tx_curr_put_info.fifo_len = |
329 | config->tx_cfg[i].fifo_len - 1; | 401 | config->tx_cfg[i].fifo_len - 1; |
330 | mac_control->tx_curr_get_info[i].offset = 0; | 402 | mac_control->fifos[i].tx_curr_get_info.offset = 0; |
331 | mac_control->tx_curr_get_info[i].fifo_len = | 403 | mac_control->fifos[i].tx_curr_get_info.fifo_len = |
332 | config->tx_cfg[i].fifo_len - 1; | 404 | config->tx_cfg[i].fifo_len - 1; |
405 | mac_control->fifos[i].fifo_no = i; | ||
406 | mac_control->fifos[i].nic = nic; | ||
407 | mac_control->fifos[i].max_txds = MAX_SKB_FRAGS; | ||
408 | |||
333 | for (j = 0; j < page_num; j++) { | 409 | for (j = 0; j < page_num; j++) { |
334 | int k = 0; | 410 | int k = 0; |
335 | dma_addr_t tmp_p; | 411 | dma_addr_t tmp_p; |
@@ -345,16 +421,15 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
345 | while (k < lst_per_page) { | 421 | while (k < lst_per_page) { |
346 | int l = (j * lst_per_page) + k; | 422 | int l = (j * lst_per_page) + k; |
347 | if (l == config->tx_cfg[i].fifo_len) | 423 | if (l == config->tx_cfg[i].fifo_len) |
348 | goto end_txd_alloc; | 424 | break; |
349 | nic->list_info[i][l].list_virt_addr = | 425 | mac_control->fifos[i].list_info[l].list_virt_addr = |
350 | tmp_v + (k * lst_size); | 426 | tmp_v + (k * lst_size); |
351 | nic->list_info[i][l].list_phy_addr = | 427 | mac_control->fifos[i].list_info[l].list_phy_addr = |
352 | tmp_p + (k * lst_size); | 428 | tmp_p + (k * lst_size); |
353 | k++; | 429 | k++; |
354 | } | 430 | } |
355 | } | 431 | } |
356 | } | 432 | } |
357 | end_txd_alloc: | ||
358 | 433 | ||
359 | /* Allocation and initialization of RXDs in Rings */ | 434 | /* Allocation and initialization of RXDs in Rings */ |
360 | size = 0; | 435 | size = 0; |
@@ -367,21 +442,26 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
367 | return FAILURE; | 442 | return FAILURE; |
368 | } | 443 | } |
369 | size += config->rx_cfg[i].num_rxd; | 444 | size += config->rx_cfg[i].num_rxd; |
370 | nic->block_count[i] = | 445 | mac_control->rings[i].block_count = |
371 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); | 446 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); |
372 | nic->pkt_cnt[i] = | 447 | mac_control->rings[i].pkt_cnt = |
373 | config->rx_cfg[i].num_rxd - nic->block_count[i]; | 448 | config->rx_cfg[i].num_rxd - mac_control->rings[i].block_count; |
374 | } | 449 | } |
450 | size = (size * (sizeof(RxD_t))); | ||
451 | rx_sz = size; | ||
375 | 452 | ||
376 | for (i = 0; i < config->rx_ring_num; i++) { | 453 | for (i = 0; i < config->rx_ring_num; i++) { |
377 | mac_control->rx_curr_get_info[i].block_index = 0; | 454 | mac_control->rings[i].rx_curr_get_info.block_index = 0; |
378 | mac_control->rx_curr_get_info[i].offset = 0; | 455 | mac_control->rings[i].rx_curr_get_info.offset = 0; |
379 | mac_control->rx_curr_get_info[i].ring_len = | 456 | mac_control->rings[i].rx_curr_get_info.ring_len = |
380 | config->rx_cfg[i].num_rxd - 1; | 457 | config->rx_cfg[i].num_rxd - 1; |
381 | mac_control->rx_curr_put_info[i].block_index = 0; | 458 | mac_control->rings[i].rx_curr_put_info.block_index = 0; |
382 | mac_control->rx_curr_put_info[i].offset = 0; | 459 | mac_control->rings[i].rx_curr_put_info.offset = 0; |
383 | mac_control->rx_curr_put_info[i].ring_len = | 460 | mac_control->rings[i].rx_curr_put_info.ring_len = |
384 | config->rx_cfg[i].num_rxd - 1; | 461 | config->rx_cfg[i].num_rxd - 1; |
462 | mac_control->rings[i].nic = nic; | ||
463 | mac_control->rings[i].ring_no = i; | ||
464 | |||
385 | blk_cnt = | 465 | blk_cnt = |
386 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); | 466 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); |
387 | /* Allocating all the Rx blocks */ | 467 | /* Allocating all the Rx blocks */ |
@@ -395,32 +475,36 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
395 | &tmp_p_addr); | 475 | &tmp_p_addr); |
396 | if (tmp_v_addr == NULL) { | 476 | if (tmp_v_addr == NULL) { |
397 | /* | 477 | /* |
398 | * In case of failure, free_shared_mem() | 478 | * In case of failure, free_shared_mem() |
399 | * is called, which should free any | 479 | * is called, which should free any |
400 | * memory that was alloced till the | 480 | * memory that was alloced till the |
401 | * failure happened. | 481 | * failure happened. |
402 | */ | 482 | */ |
403 | nic->rx_blocks[i][j].block_virt_addr = | 483 | mac_control->rings[i].rx_blocks[j].block_virt_addr = |
404 | tmp_v_addr; | 484 | tmp_v_addr; |
405 | return -ENOMEM; | 485 | return -ENOMEM; |
406 | } | 486 | } |
407 | memset(tmp_v_addr, 0, size); | 487 | memset(tmp_v_addr, 0, size); |
408 | nic->rx_blocks[i][j].block_virt_addr = tmp_v_addr; | 488 | mac_control->rings[i].rx_blocks[j].block_virt_addr = |
409 | nic->rx_blocks[i][j].block_dma_addr = tmp_p_addr; | 489 | tmp_v_addr; |
490 | mac_control->rings[i].rx_blocks[j].block_dma_addr = | ||
491 | tmp_p_addr; | ||
410 | } | 492 | } |
411 | /* Interlinking all Rx Blocks */ | 493 | /* Interlinking all Rx Blocks */ |
412 | for (j = 0; j < blk_cnt; j++) { | 494 | for (j = 0; j < blk_cnt; j++) { |
413 | tmp_v_addr = nic->rx_blocks[i][j].block_virt_addr; | 495 | tmp_v_addr = |
496 | mac_control->rings[i].rx_blocks[j].block_virt_addr; | ||
414 | tmp_v_addr_next = | 497 | tmp_v_addr_next = |
415 | nic->rx_blocks[i][(j + 1) % | 498 | mac_control->rings[i].rx_blocks[(j + 1) % |
416 | blk_cnt].block_virt_addr; | 499 | blk_cnt].block_virt_addr; |
417 | tmp_p_addr = nic->rx_blocks[i][j].block_dma_addr; | 500 | tmp_p_addr = |
501 | mac_control->rings[i].rx_blocks[j].block_dma_addr; | ||
418 | tmp_p_addr_next = | 502 | tmp_p_addr_next = |
419 | nic->rx_blocks[i][(j + 1) % | 503 | mac_control->rings[i].rx_blocks[(j + 1) % |
420 | blk_cnt].block_dma_addr; | 504 | blk_cnt].block_dma_addr; |
421 | 505 | ||
422 | pre_rxd_blk = (RxD_block_t *) tmp_v_addr; | 506 | pre_rxd_blk = (RxD_block_t *) tmp_v_addr; |
423 | pre_rxd_blk->reserved_1 = END_OF_BLOCK; /* last RxD | 507 | pre_rxd_blk->reserved_1 = END_OF_BLOCK; /* last RxD |
424 | * marker. | 508 | * marker. |
425 | */ | 509 | */ |
426 | #ifndef CONFIG_2BUFF_MODE | 510 | #ifndef CONFIG_2BUFF_MODE |
@@ -433,43 +517,43 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
433 | } | 517 | } |
434 | 518 | ||
435 | #ifdef CONFIG_2BUFF_MODE | 519 | #ifdef CONFIG_2BUFF_MODE |
436 | /* | 520 | /* |
437 | * Allocation of Storages for buffer addresses in 2BUFF mode | 521 | * Allocation of Storages for buffer addresses in 2BUFF mode |
438 | * and the buffers as well. | 522 | * and the buffers as well. |
439 | */ | 523 | */ |
440 | for (i = 0; i < config->rx_ring_num; i++) { | 524 | for (i = 0; i < config->rx_ring_num; i++) { |
441 | blk_cnt = | 525 | blk_cnt = |
442 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); | 526 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); |
443 | nic->ba[i] = kmalloc((sizeof(buffAdd_t *) * blk_cnt), | 527 | mac_control->rings[i].ba = kmalloc((sizeof(buffAdd_t *) * blk_cnt), |
444 | GFP_KERNEL); | 528 | GFP_KERNEL); |
445 | if (!nic->ba[i]) | 529 | if (!mac_control->rings[i].ba) |
446 | return -ENOMEM; | 530 | return -ENOMEM; |
447 | for (j = 0; j < blk_cnt; j++) { | 531 | for (j = 0; j < blk_cnt; j++) { |
448 | int k = 0; | 532 | int k = 0; |
449 | nic->ba[i][j] = kmalloc((sizeof(buffAdd_t) * | 533 | mac_control->rings[i].ba[j] = kmalloc((sizeof(buffAdd_t) * |
450 | (MAX_RXDS_PER_BLOCK + 1)), | 534 | (MAX_RXDS_PER_BLOCK + 1)), |
451 | GFP_KERNEL); | 535 | GFP_KERNEL); |
452 | if (!nic->ba[i][j]) | 536 | if (!mac_control->rings[i].ba[j]) |
453 | return -ENOMEM; | 537 | return -ENOMEM; |
454 | while (k != MAX_RXDS_PER_BLOCK) { | 538 | while (k != MAX_RXDS_PER_BLOCK) { |
455 | ba = &nic->ba[i][j][k]; | 539 | ba = &mac_control->rings[i].ba[j][k]; |
456 | 540 | ||
457 | ba->ba_0_org = kmalloc | 541 | ba->ba_0_org = (void *) kmalloc |
458 | (BUF0_LEN + ALIGN_SIZE, GFP_KERNEL); | 542 | (BUF0_LEN + ALIGN_SIZE, GFP_KERNEL); |
459 | if (!ba->ba_0_org) | 543 | if (!ba->ba_0_org) |
460 | return -ENOMEM; | 544 | return -ENOMEM; |
461 | tmp = (unsigned long) ba->ba_0_org; | 545 | tmp = (u64) ba->ba_0_org; |
462 | tmp += ALIGN_SIZE; | 546 | tmp += ALIGN_SIZE; |
463 | tmp &= ~((unsigned long) ALIGN_SIZE); | 547 | tmp &= ~((u64) ALIGN_SIZE); |
464 | ba->ba_0 = (void *) tmp; | 548 | ba->ba_0 = (void *) tmp; |
465 | 549 | ||
466 | ba->ba_1_org = kmalloc | 550 | ba->ba_1_org = (void *) kmalloc |
467 | (BUF1_LEN + ALIGN_SIZE, GFP_KERNEL); | 551 | (BUF1_LEN + ALIGN_SIZE, GFP_KERNEL); |
468 | if (!ba->ba_1_org) | 552 | if (!ba->ba_1_org) |
469 | return -ENOMEM; | 553 | return -ENOMEM; |
470 | tmp = (unsigned long) ba->ba_1_org; | 554 | tmp = (u64) ba->ba_1_org; |
471 | tmp += ALIGN_SIZE; | 555 | tmp += ALIGN_SIZE; |
472 | tmp &= ~((unsigned long) ALIGN_SIZE); | 556 | tmp &= ~((u64) ALIGN_SIZE); |
473 | ba->ba_1 = (void *) tmp; | 557 | ba->ba_1 = (void *) tmp; |
474 | k++; | 558 | k++; |
475 | } | 559 | } |
@@ -483,9 +567,9 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
483 | (nic->pdev, size, &mac_control->stats_mem_phy); | 567 | (nic->pdev, size, &mac_control->stats_mem_phy); |
484 | 568 | ||
485 | if (!mac_control->stats_mem) { | 569 | if (!mac_control->stats_mem) { |
486 | /* | 570 | /* |
487 | * In case of failure, free_shared_mem() is called, which | 571 | * In case of failure, free_shared_mem() is called, which |
488 | * should free any memory that was alloced till the | 572 | * should free any memory that was alloced till the |
489 | * failure happened. | 573 | * failure happened. |
490 | */ | 574 | */ |
491 | return -ENOMEM; | 575 | return -ENOMEM; |
@@ -495,15 +579,14 @@ static int init_shared_mem(struct s2io_nic *nic) | |||
495 | tmp_v_addr = mac_control->stats_mem; | 579 | tmp_v_addr = mac_control->stats_mem; |
496 | mac_control->stats_info = (StatInfo_t *) tmp_v_addr; | 580 | mac_control->stats_info = (StatInfo_t *) tmp_v_addr; |
497 | memset(tmp_v_addr, 0, size); | 581 | memset(tmp_v_addr, 0, size); |
498 | |||
499 | DBG_PRINT(INIT_DBG, "%s:Ring Mem PHY: 0x%llx\n", dev->name, | 582 | DBG_PRINT(INIT_DBG, "%s:Ring Mem PHY: 0x%llx\n", dev->name, |
500 | (unsigned long long) tmp_p_addr); | 583 | (unsigned long long) tmp_p_addr); |
501 | 584 | ||
502 | return SUCCESS; | 585 | return SUCCESS; |
503 | } | 586 | } |
504 | 587 | ||
505 | /** | 588 | /** |
506 | * free_shared_mem - Free the allocated Memory | 589 | * free_shared_mem - Free the allocated Memory |
507 | * @nic: Device private variable. | 590 | * @nic: Device private variable. |
508 | * Description: This function is to free all memory locations allocated by | 591 | * Description: This function is to free all memory locations allocated by |
509 | * the init_shared_mem() function and return it to the kernel. | 592 | * the init_shared_mem() function and return it to the kernel. |
@@ -533,15 +616,19 @@ static void free_shared_mem(struct s2io_nic *nic) | |||
533 | lst_per_page); | 616 | lst_per_page); |
534 | for (j = 0; j < page_num; j++) { | 617 | for (j = 0; j < page_num; j++) { |
535 | int mem_blks = (j * lst_per_page); | 618 | int mem_blks = (j * lst_per_page); |
536 | if (!nic->list_info[i][mem_blks].list_virt_addr) | 619 | if ((!mac_control->fifos[i].list_info) || |
620 | (!mac_control->fifos[i].list_info[mem_blks]. | ||
621 | list_virt_addr)) | ||
537 | break; | 622 | break; |
538 | pci_free_consistent(nic->pdev, PAGE_SIZE, | 623 | pci_free_consistent(nic->pdev, PAGE_SIZE, |
539 | nic->list_info[i][mem_blks]. | 624 | mac_control->fifos[i]. |
625 | list_info[mem_blks]. | ||
540 | list_virt_addr, | 626 | list_virt_addr, |
541 | nic->list_info[i][mem_blks]. | 627 | mac_control->fifos[i]. |
628 | list_info[mem_blks]. | ||
542 | list_phy_addr); | 629 | list_phy_addr); |
543 | } | 630 | } |
544 | kfree(nic->list_info[i]); | 631 | kfree(mac_control->fifos[i].list_info); |
545 | } | 632 | } |
546 | 633 | ||
547 | #ifndef CONFIG_2BUFF_MODE | 634 | #ifndef CONFIG_2BUFF_MODE |
@@ -550,10 +637,12 @@ static void free_shared_mem(struct s2io_nic *nic) | |||
550 | size = SIZE_OF_BLOCK; | 637 | size = SIZE_OF_BLOCK; |
551 | #endif | 638 | #endif |
552 | for (i = 0; i < config->rx_ring_num; i++) { | 639 | for (i = 0; i < config->rx_ring_num; i++) { |
553 | blk_cnt = nic->block_count[i]; | 640 | blk_cnt = mac_control->rings[i].block_count; |
554 | for (j = 0; j < blk_cnt; j++) { | 641 | for (j = 0; j < blk_cnt; j++) { |
555 | tmp_v_addr = nic->rx_blocks[i][j].block_virt_addr; | 642 | tmp_v_addr = mac_control->rings[i].rx_blocks[j]. |
556 | tmp_p_addr = nic->rx_blocks[i][j].block_dma_addr; | 643 | block_virt_addr; |
644 | tmp_p_addr = mac_control->rings[i].rx_blocks[j]. | ||
645 | block_dma_addr; | ||
557 | if (tmp_v_addr == NULL) | 646 | if (tmp_v_addr == NULL) |
558 | break; | 647 | break; |
559 | pci_free_consistent(nic->pdev, size, | 648 | pci_free_consistent(nic->pdev, size, |
@@ -566,35 +655,21 @@ static void free_shared_mem(struct s2io_nic *nic) | |||
566 | for (i = 0; i < config->rx_ring_num; i++) { | 655 | for (i = 0; i < config->rx_ring_num; i++) { |
567 | blk_cnt = | 656 | blk_cnt = |
568 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); | 657 | config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); |
569 | if (!nic->ba[i]) | ||
570 | goto end_free; | ||
571 | for (j = 0; j < blk_cnt; j++) { | 658 | for (j = 0; j < blk_cnt; j++) { |
572 | int k = 0; | 659 | int k = 0; |
573 | if (!nic->ba[i][j]) { | 660 | if (!mac_control->rings[i].ba[j]) |
574 | kfree(nic->ba[i]); | 661 | continue; |
575 | goto end_free; | ||
576 | } | ||
577 | while (k != MAX_RXDS_PER_BLOCK) { | 662 | while (k != MAX_RXDS_PER_BLOCK) { |
578 | buffAdd_t *ba = &nic->ba[i][j][k]; | 663 | buffAdd_t *ba = &mac_control->rings[i].ba[j][k]; |
579 | if (!ba || !ba->ba_0_org || !ba->ba_1_org) | ||
580 | { | ||
581 | kfree(nic->ba[i]); | ||
582 | kfree(nic->ba[i][j]); | ||
583 | if(ba->ba_0_org) | ||
584 | kfree(ba->ba_0_org); | ||
585 | if(ba->ba_1_org) | ||
586 | kfree(ba->ba_1_org); | ||
587 | goto end_free; | ||
588 | } | ||
589 | kfree(ba->ba_0_org); | 664 | kfree(ba->ba_0_org); |
590 | kfree(ba->ba_1_org); | 665 | kfree(ba->ba_1_org); |
591 | k++; | 666 | k++; |
592 | } | 667 | } |
593 | kfree(nic->ba[i][j]); | 668 | kfree(mac_control->rings[i].ba[j]); |
594 | } | 669 | } |
595 | kfree(nic->ba[i]); | 670 | if (mac_control->rings[i].ba) |
671 | kfree(mac_control->rings[i].ba); | ||
596 | } | 672 | } |
597 | end_free: | ||
598 | #endif | 673 | #endif |
599 | 674 | ||
600 | if (mac_control->stats_mem) { | 675 | if (mac_control->stats_mem) { |
@@ -605,12 +680,93 @@ end_free: | |||
605 | } | 680 | } |
606 | } | 681 | } |
607 | 682 | ||
608 | /** | 683 | /** |
609 | * init_nic - Initialization of hardware | 684 | * s2io_verify_pci_mode - return the adapter's PCI/PCI-X bus mode |
685 | */ | ||
686 | |||
687 | static int s2io_verify_pci_mode(nic_t *nic) | ||
688 | { | ||
689 | XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; | ||
690 | register u64 val64 = 0; | ||
691 | int mode; | ||
692 | |||
693 | val64 = readq(&bar0->pci_mode); | ||
694 | mode = (u8)GET_PCI_MODE(val64); | ||
695 | |||
696 | if ( val64 & PCI_MODE_UNKNOWN_MODE) | ||
697 | return -1; /* Unknown PCI mode */ | ||
698 | return mode; | ||
699 | } | ||
700 | |||
701 | |||
702 | /** | ||
703 | * s2io_print_pci_mode - log the negotiated bus mode and record the bus speed | ||
704 | */ | ||
705 | static int s2io_print_pci_mode(nic_t *nic) | ||
706 | { | ||
707 | XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; | ||
708 | register u64 val64 = 0; | ||
709 | int mode; | ||
710 | struct config_param *config = &nic->config; | ||
711 | |||
712 | val64 = readq(&bar0->pci_mode); | ||
713 | mode = (u8)GET_PCI_MODE(val64); | ||
714 | |||
715 | if ( val64 & PCI_MODE_UNKNOWN_MODE) | ||
716 | return -1; /* Unknown PCI mode */ | ||
717 | |||
718 | if (val64 & PCI_MODE_32_BITS) { | ||
719 | DBG_PRINT(ERR_DBG, "%s: Device is on 32 bit ", nic->dev->name); | ||
720 | } else { | ||
721 | DBG_PRINT(ERR_DBG, "%s: Device is on 64 bit ", nic->dev->name); | ||
722 | } | ||
723 | |||
724 | switch(mode) { | ||
725 | case PCI_MODE_PCI_33: | ||
726 | DBG_PRINT(ERR_DBG, "33MHz PCI bus\n"); | ||
727 | config->bus_speed = 33; | ||
728 | break; | ||
729 | case PCI_MODE_PCI_66: | ||
730 | DBG_PRINT(ERR_DBG, "66MHz PCI bus\n"); | ||
731 | config->bus_speed = 133; | ||
732 | break; | ||
733 | case PCI_MODE_PCIX_M1_66: | ||
734 | DBG_PRINT(ERR_DBG, "66MHz PCIX(M1) bus\n"); | ||
735 | config->bus_speed = 133; /* Herc doubles the clock rate */ | ||
736 | break; | ||
737 | case PCI_MODE_PCIX_M1_100: | ||
738 | DBG_PRINT(ERR_DBG, "100MHz PCIX(M1) bus\n"); | ||
739 | config->bus_speed = 200; | ||
740 | break; | ||
741 | case PCI_MODE_PCIX_M1_133: | ||
742 | DBG_PRINT(ERR_DBG, "133MHz PCIX(M1) bus\n"); | ||
743 | config->bus_speed = 266; | ||
744 | break; | ||
745 | case PCI_MODE_PCIX_M2_66: | ||
746 | DBG_PRINT(ERR_DBG, "133MHz PCIX(M2) bus\n"); | ||
747 | config->bus_speed = 133; | ||
748 | break; | ||
749 | case PCI_MODE_PCIX_M2_100: | ||
750 | DBG_PRINT(ERR_DBG, "200MHz PCIX(M2) bus\n"); | ||
751 | config->bus_speed = 200; | ||
752 | break; | ||
753 | case PCI_MODE_PCIX_M2_133: | ||
754 | DBG_PRINT(ERR_DBG, "266MHz PCIX(M2) bus\n"); | ||
755 | config->bus_speed = 266; | ||
756 | break; | ||
757 | default: | ||
758 | return -1; /* Unsupported bus speed */ | ||
759 | } | ||
760 | |||
761 | return mode; | ||
762 | } | ||
763 | |||
764 | /** | ||
765 | * init_nic - Initialization of hardware | ||
610 | * @nic: device private variable | 766 | * @nic: device private variable |
611 | * Description: The function sequentially configures every block | 767 | * Description: The function sequentially configures every block |
612 | * of the H/W from their reset values. | 768 | * of the H/W from their reset values. |
613 | * Return Value: SUCCESS on success and | 769 | * Return Value: SUCCESS on success and |
614 | * '-1' on failure (endian settings incorrect). | 770 | * '-1' on failure (endian settings incorrect). |
615 | */ | 771 | */ |
616 | 772 | ||
@@ -626,21 +782,32 @@ static int init_nic(struct s2io_nic *nic) | |||
626 | struct config_param *config; | 782 | struct config_param *config; |
627 | int mdio_cnt = 0, dtx_cnt = 0; | 783 | int mdio_cnt = 0, dtx_cnt = 0; |
628 | unsigned long long mem_share; | 784 | unsigned long long mem_share; |
785 | int mem_size; | ||
629 | 786 | ||
630 | mac_control = &nic->mac_control; | 787 | mac_control = &nic->mac_control; |
631 | config = &nic->config; | 788 | config = &nic->config; |
632 | 789 | ||
633 | /* Initialize swapper control register */ | 790 | /* to set the swapper control on the card */ |
634 | if (s2io_set_swapper(nic)) { | 791 | if(s2io_set_swapper(nic)) { |
635 | DBG_PRINT(ERR_DBG,"ERROR: Setting Swapper failed\n"); | 792 | DBG_PRINT(ERR_DBG,"ERROR: Setting Swapper failed\n"); |
636 | return -1; | 793 | return -1; |
637 | } | 794 | } |
638 | 795 | ||
796 | /* | ||
797 | * Herc requires EOI to be removed from reset before XGXS, so.. | ||
798 | */ | ||
799 | if (nic->device_type & XFRAME_II_DEVICE) { | ||
800 | val64 = 0xA500000000ULL; | ||
801 | writeq(val64, &bar0->sw_reset); | ||
802 | msleep(500); | ||
803 | val64 = readq(&bar0->sw_reset); | ||
804 | } | ||
805 | |||
639 | /* Remove XGXS from reset state */ | 806 | /* Remove XGXS from reset state */ |
640 | val64 = 0; | 807 | val64 = 0; |
641 | writeq(val64, &bar0->sw_reset); | 808 | writeq(val64, &bar0->sw_reset); |
642 | val64 = readq(&bar0->sw_reset); | ||
643 | msleep(500); | 809 | msleep(500); |
810 | val64 = readq(&bar0->sw_reset); | ||
644 | 811 | ||
645 | /* Enable Receiving broadcasts */ | 812 | /* Enable Receiving broadcasts */ |
646 | add = &bar0->mac_cfg; | 813 | add = &bar0->mac_cfg; |
@@ -660,48 +827,58 @@ static int init_nic(struct s2io_nic *nic) | |||
660 | val64 = dev->mtu; | 827 | val64 = dev->mtu; |
661 | writeq(vBIT(val64, 2, 14), &bar0->rmac_max_pyld_len); | 828 | writeq(vBIT(val64, 2, 14), &bar0->rmac_max_pyld_len); |
662 | 829 | ||
663 | /* | 830 | /* |
664 | * Configuring the XAUI Interface of Xena. | 831 | * Configuring the XAUI Interface of Xena. |
665 | * *************************************** | 832 | * *************************************** |
666 | * To Configure the Xena's XAUI, one has to write a series | 833 | * To Configure the Xena's XAUI, one has to write a series |
667 | * of 64 bit values into two registers in a particular | 834 | * of 64 bit values into two registers in a particular |
668 | * sequence. Hence a macro 'SWITCH_SIGN' has been defined | 835 | * sequence. Hence a macro 'SWITCH_SIGN' has been defined |
669 | * which appears in the arrays of configuration values | 836 | * which appears in the arrays of configuration values |
670 | * (default_dtx_cfg & default_mdio_cfg) at appropriate places | 837 | * (xena_dtx_cfg & xena_mdio_cfg) at appropriate places |
671 | * to switch writing from one register to another. We continue | 838 | * to switch writing from one register to another. We continue |
672 | * writing these values until we encounter the 'END_SIGN' macro. | 839 | * writing these values until we encounter the 'END_SIGN' macro. |
673 | * For example, after making a series of 21 writes into | 840 | * For example, after making a series of 21 writes into |
674 | * dtx_control register the 'SWITCH_SIGN' appears and hence we | 841 | * dtx_control register the 'SWITCH_SIGN' appears and hence we |
675 | * start writing into mdio_control until we encounter END_SIGN. | 842 | * start writing into mdio_control until we encounter END_SIGN. |
676 | */ | 843 | */ |
677 | while (1) { | 844 | if (nic->device_type & XFRAME_II_DEVICE) { |
678 | dtx_cfg: | 845 | while (herc_act_dtx_cfg[dtx_cnt] != END_SIGN) { |
679 | while (default_dtx_cfg[dtx_cnt] != END_SIGN) { | 846 | SPECIAL_REG_WRITE(herc_act_dtx_cfg[dtx_cnt], |
680 | if (default_dtx_cfg[dtx_cnt] == SWITCH_SIGN) { | ||
681 | dtx_cnt++; | ||
682 | goto mdio_cfg; | ||
683 | } | ||
684 | SPECIAL_REG_WRITE(default_dtx_cfg[dtx_cnt], | ||
685 | &bar0->dtx_control, UF); | 847 | &bar0->dtx_control, UF); |
686 | val64 = readq(&bar0->dtx_control); | 848 | if (dtx_cnt & 0x1) |
849 | msleep(1); /* Necessary!! */ | ||
687 | dtx_cnt++; | 850 | dtx_cnt++; |
688 | } | 851 | } |
689 | mdio_cfg: | 852 | } else { |
690 | while (default_mdio_cfg[mdio_cnt] != END_SIGN) { | 853 | while (1) { |
691 | if (default_mdio_cfg[mdio_cnt] == SWITCH_SIGN) { | 854 | dtx_cfg: |
855 | while (xena_dtx_cfg[dtx_cnt] != END_SIGN) { | ||
856 | if (xena_dtx_cfg[dtx_cnt] == SWITCH_SIGN) { | ||
857 | dtx_cnt++; | ||
858 | goto mdio_cfg; | ||
859 | } | ||
860 | SPECIAL_REG_WRITE(xena_dtx_cfg[dtx_cnt], | ||
861 | &bar0->dtx_control, UF); | ||
862 | val64 = readq(&bar0->dtx_control); | ||
863 | dtx_cnt++; | ||
864 | } | ||
865 | mdio_cfg: | ||
866 | while (xena_mdio_cfg[mdio_cnt] != END_SIGN) { | ||
867 | if (xena_mdio_cfg[mdio_cnt] == SWITCH_SIGN) { | ||
868 | mdio_cnt++; | ||
869 | goto dtx_cfg; | ||
870 | } | ||
871 | SPECIAL_REG_WRITE(xena_mdio_cfg[mdio_cnt], | ||
872 | &bar0->mdio_control, UF); | ||
873 | val64 = readq(&bar0->mdio_control); | ||
692 | mdio_cnt++; | 874 | mdio_cnt++; |
875 | } | ||
876 | if ((xena_dtx_cfg[dtx_cnt] == END_SIGN) && | ||
877 | (xena_mdio_cfg[mdio_cnt] == END_SIGN)) { | ||
878 | break; | ||
879 | } else { | ||
693 | goto dtx_cfg; | 880 | goto dtx_cfg; |
694 | } | 881 | } |
695 | SPECIAL_REG_WRITE(default_mdio_cfg[mdio_cnt], | ||
696 | &bar0->mdio_control, UF); | ||
697 | val64 = readq(&bar0->mdio_control); | ||
698 | mdio_cnt++; | ||
699 | } | ||
700 | if ((default_dtx_cfg[dtx_cnt] == END_SIGN) && | ||
701 | (default_mdio_cfg[mdio_cnt] == END_SIGN)) { | ||
702 | break; | ||
703 | } else { | ||
704 | goto dtx_cfg; | ||
705 | } | 882 | } |
706 | } | 883 | } |
707 | 884 | ||
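/*
 * Illustrative sketch, not part of this patch: the Xena branch above walks
 * xena_dtx_cfg[] and xena_mdio_cfg[] with gotos. The same protocol --
 * write entries into one register until SWITCH_SIGN, then the other,
 * until both arrays reach END_SIGN -- can also be expressed as a plain
 * loop (standalone helper, not meant to be spliced into init_nic()):
 */
static void example_xena_xaui_config(XENA_dev_config_t *bar0)
{
	int dtx = 0, mdio = 0;

	while ((xena_dtx_cfg[dtx] != END_SIGN) ||
	       (xena_mdio_cfg[mdio] != END_SIGN)) {
		/* Write dtx_control entries until a marker is hit */
		while ((xena_dtx_cfg[dtx] != END_SIGN) &&
		       (xena_dtx_cfg[dtx] != SWITCH_SIGN)) {
			SPECIAL_REG_WRITE(xena_dtx_cfg[dtx],
					  &bar0->dtx_control, UF);
			(void) readq(&bar0->dtx_control);
			dtx++;
		}
		if (xena_dtx_cfg[dtx] == SWITCH_SIGN)
			dtx++;

		/* Now write mdio_control entries until a marker is hit */
		while ((xena_mdio_cfg[mdio] != END_SIGN) &&
		       (xena_mdio_cfg[mdio] != SWITCH_SIGN)) {
			SPECIAL_REG_WRITE(xena_mdio_cfg[mdio],
					  &bar0->mdio_control, UF);
			(void) readq(&bar0->mdio_control);
			mdio++;
		}
		if (xena_mdio_cfg[mdio] == SWITCH_SIGN)
			mdio++;
	}
}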
@@ -748,12 +925,20 @@ static int init_nic(struct s2io_nic *nic) | |||
748 | val64 |= BIT(0); /* To enable the FIFO partition. */ | 925 | val64 |= BIT(0); /* To enable the FIFO partition. */ |
749 | writeq(val64, &bar0->tx_fifo_partition_0); | 926 | writeq(val64, &bar0->tx_fifo_partition_0); |
750 | 927 | ||
928 | /* | ||
929 | * Disable 4 PCCs for Xena1, 2 and 3 as per H/W bug | ||
930 | * SXE-008 TRANSMIT DMA ARBITRATION ISSUE. | ||
931 | */ | ||
932 | if ((nic->device_type == XFRAME_I_DEVICE) && | ||
933 | (get_xena_rev_id(nic->pdev) < 4)) | ||
934 | writeq(PCC_ENABLE_FOUR, &bar0->pcc_enable); | ||
935 | |||
751 | val64 = readq(&bar0->tx_fifo_partition_0); | 936 | val64 = readq(&bar0->tx_fifo_partition_0); |
752 | DBG_PRINT(INIT_DBG, "Fifo partition at: 0x%p is: 0x%llx\n", | 937 | DBG_PRINT(INIT_DBG, "Fifo partition at: 0x%p is: 0x%llx\n", |
753 | &bar0->tx_fifo_partition_0, (unsigned long long) val64); | 938 | &bar0->tx_fifo_partition_0, (unsigned long long) val64); |
754 | 939 | ||
755 | /* | 940 | /* |
756 | * Initialization of Tx_PA_CONFIG register to ignore packet | 941 | * Initialization of Tx_PA_CONFIG register to ignore packet |
757 | * integrity checking. | 942 | * integrity checking. |
758 | */ | 943 | */ |
759 | val64 = readq(&bar0->tx_pa_cfg); | 944 | val64 = readq(&bar0->tx_pa_cfg); |
@@ -770,85 +955,304 @@ static int init_nic(struct s2io_nic *nic) | |||
770 | } | 955 | } |
771 | writeq(val64, &bar0->rx_queue_priority); | 956 | writeq(val64, &bar0->rx_queue_priority); |
772 | 957 | ||
773 | /* | 958 | /* |
774 | * Allocating equal share of memory to all the | 959 | * Allocating equal share of memory to all the |
775 | * configured Rings. | 960 | * configured Rings. |
776 | */ | 961 | */ |
777 | val64 = 0; | 962 | val64 = 0; |
963 | if (nic->device_type & XFRAME_II_DEVICE) | ||
964 | mem_size = 32; | ||
965 | else | ||
966 | mem_size = 64; | ||
967 | |||
778 | for (i = 0; i < config->rx_ring_num; i++) { | 968 | for (i = 0; i < config->rx_ring_num; i++) { |
779 | switch (i) { | 969 | switch (i) { |
780 | case 0: | 970 | case 0: |
781 | mem_share = (64 / config->rx_ring_num + | 971 | mem_share = (mem_size / config->rx_ring_num + |
782 | 64 % config->rx_ring_num); | 972 | mem_size % config->rx_ring_num); |
783 | val64 |= RX_QUEUE_CFG_Q0_SZ(mem_share); | 973 | val64 |= RX_QUEUE_CFG_Q0_SZ(mem_share); |
784 | continue; | 974 | continue; |
785 | case 1: | 975 | case 1: |
786 | mem_share = (64 / config->rx_ring_num); | 976 | mem_share = (mem_size / config->rx_ring_num); |
787 | val64 |= RX_QUEUE_CFG_Q1_SZ(mem_share); | 977 | val64 |= RX_QUEUE_CFG_Q1_SZ(mem_share); |
788 | continue; | 978 | continue; |
789 | case 2: | 979 | case 2: |
790 | mem_share = (64 / config->rx_ring_num); | 980 | mem_share = (mem_size / config->rx_ring_num); |
791 | val64 |= RX_QUEUE_CFG_Q2_SZ(mem_share); | 981 | val64 |= RX_QUEUE_CFG_Q2_SZ(mem_share); |
792 | continue; | 982 | continue; |
793 | case 3: | 983 | case 3: |
794 | mem_share = (64 / config->rx_ring_num); | 984 | mem_share = (mem_size / config->rx_ring_num); |
795 | val64 |= RX_QUEUE_CFG_Q3_SZ(mem_share); | 985 | val64 |= RX_QUEUE_CFG_Q3_SZ(mem_share); |
796 | continue; | 986 | continue; |
797 | case 4: | 987 | case 4: |
798 | mem_share = (64 / config->rx_ring_num); | 988 | mem_share = (mem_size / config->rx_ring_num); |
799 | val64 |= RX_QUEUE_CFG_Q4_SZ(mem_share); | 989 | val64 |= RX_QUEUE_CFG_Q4_SZ(mem_share); |
800 | continue; | 990 | continue; |
801 | case 5: | 991 | case 5: |
802 | mem_share = (64 / config->rx_ring_num); | 992 | mem_share = (mem_size / config->rx_ring_num); |
803 | val64 |= RX_QUEUE_CFG_Q5_SZ(mem_share); | 993 | val64 |= RX_QUEUE_CFG_Q5_SZ(mem_share); |
804 | continue; | 994 | continue; |
805 | case 6: | 995 | case 6: |
806 | mem_share = (64 / config->rx_ring_num); | 996 | mem_share = (mem_size / config->rx_ring_num); |
807 | val64 |= RX_QUEUE_CFG_Q6_SZ(mem_share); | 997 | val64 |= RX_QUEUE_CFG_Q6_SZ(mem_share); |
808 | continue; | 998 | continue; |
809 | case 7: | 999 | case 7: |
810 | mem_share = (64 / config->rx_ring_num); | 1000 | mem_share = (mem_size / config->rx_ring_num); |
811 | val64 |= RX_QUEUE_CFG_Q7_SZ(mem_share); | 1001 | val64 |= RX_QUEUE_CFG_Q7_SZ(mem_share); |
812 | continue; | 1002 | continue; |
813 | } | 1003 | } |
814 | } | 1004 | } |
815 | writeq(val64, &bar0->rx_queue_cfg); | 1005 | writeq(val64, &bar0->rx_queue_cfg); |
816 | 1006 | ||
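As a worked example of the memory-share split above: with the Xframe I total of 64 units and three configured rings, ring 0 receives the quotient plus the remainder (22) and the other rings each receive the quotient (21). A standalone sketch of the same arithmetic:

    #include <stdio.h>

    /* Ring 0 absorbs the remainder, the remaining rings get an equal share.
     * mem_size is 64 for Xframe I and 32 for Xframe II, as above. */
    int main(void)
    {
        int mem_size = 64, rx_ring_num = 3, i;

        for (i = 0; i < rx_ring_num; i++) {
            int mem_share = mem_size / rx_ring_num;

            if (i == 0)
                mem_share += mem_size % rx_ring_num;
            printf("ring %d: %d units\n", i, mem_share);   /* 22, 21, 21 */
        }
        return 0;
    }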
817 | /* | 1007 | /* |
818 | * Initializing the Tx round robin registers to 0. | 1008 | * Filling Tx round robin registers |
819 | * Filling Tx and Rx round robin registers as per the | 1009 | * as per the number of FIFOs |
820 | * number of FIFOs and Rings is still TODO. | ||
821 | */ | ||
822 | writeq(0, &bar0->tx_w_round_robin_0); | ||
823 | writeq(0, &bar0->tx_w_round_robin_1); | ||
824 | writeq(0, &bar0->tx_w_round_robin_2); | ||
825 | writeq(0, &bar0->tx_w_round_robin_3); | ||
826 | writeq(0, &bar0->tx_w_round_robin_4); | ||
827 | |||
828 | /* | ||
829 | * TODO | ||
830 | * Disable Rx steering. Hard coding all packets be steered to | ||
831 | * Queue 0 for now. | ||
832 | */ | 1010 | */ |
833 | val64 = 0x8080808080808080ULL; | 1011 | switch (config->tx_fifo_num) { |
834 | writeq(val64, &bar0->rts_qos_steering); | 1012 | case 1: |
1013 | val64 = 0x0000000000000000ULL; | ||
1014 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1015 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1016 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1017 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1018 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1019 | break; | ||
1020 | case 2: | ||
1021 | val64 = 0x0000010000010000ULL; | ||
1022 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1023 | val64 = 0x0100000100000100ULL; | ||
1024 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1025 | val64 = 0x0001000001000001ULL; | ||
1026 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1027 | val64 = 0x0000010000010000ULL; | ||
1028 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1029 | val64 = 0x0100000000000000ULL; | ||
1030 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1031 | break; | ||
1032 | case 3: | ||
1033 | val64 = 0x0001000102000001ULL; | ||
1034 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1035 | val64 = 0x0001020000010001ULL; | ||
1036 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1037 | val64 = 0x0200000100010200ULL; | ||
1038 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1039 | val64 = 0x0001000102000001ULL; | ||
1040 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1041 | val64 = 0x0001020000000000ULL; | ||
1042 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1043 | break; | ||
1044 | case 4: | ||
1045 | val64 = 0x0001020300010200ULL; | ||
1046 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1047 | val64 = 0x0100000102030001ULL; | ||
1048 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1049 | val64 = 0x0200010000010203ULL; | ||
1050 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1051 | val64 = 0x0001020001000001ULL; | ||
1052 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1053 | val64 = 0x0203000100000000ULL; | ||
1054 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1055 | break; | ||
1056 | case 5: | ||
1057 | val64 = 0x0001000203000102ULL; | ||
1058 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1059 | val64 = 0x0001020001030004ULL; | ||
1060 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1061 | val64 = 0x0001000203000102ULL; | ||
1062 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1063 | val64 = 0x0001020001030004ULL; | ||
1064 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1065 | val64 = 0x0001000000000000ULL; | ||
1066 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1067 | break; | ||
1068 | case 6: | ||
1069 | val64 = 0x0001020304000102ULL; | ||
1070 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1071 | val64 = 0x0304050001020001ULL; | ||
1072 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1073 | val64 = 0x0203000100000102ULL; | ||
1074 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1075 | val64 = 0x0304000102030405ULL; | ||
1076 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1077 | val64 = 0x0001000200000000ULL; | ||
1078 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1079 | break; | ||
1080 | case 7: | ||
1081 | val64 = 0x0001020001020300ULL; | ||
1082 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1083 | val64 = 0x0102030400010203ULL; | ||
1084 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1085 | val64 = 0x0405060001020001ULL; | ||
1086 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1087 | val64 = 0x0304050000010200ULL; | ||
1088 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1089 | val64 = 0x0102030000000000ULL; | ||
1090 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1091 | break; | ||
1092 | case 8: | ||
1093 | val64 = 0x0001020300040105ULL; | ||
1094 | writeq(val64, &bar0->tx_w_round_robin_0); | ||
1095 | val64 = 0x0200030106000204ULL; | ||
1096 | writeq(val64, &bar0->tx_w_round_robin_1); | ||
1097 | val64 = 0x0103000502010007ULL; | ||
1098 | writeq(val64, &bar0->tx_w_round_robin_2); | ||
1099 | val64 = 0x0304010002060500ULL; | ||
1100 | writeq(val64, &bar0->tx_w_round_robin_3); | ||
1101 | val64 = 0x0103020400000000ULL; | ||
1102 | writeq(val64, &bar0->tx_w_round_robin_4); | ||
1103 | break; | ||
1104 | } | ||
1105 | |||
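The per-FIFO constants above appear to pack a weighted round-robin schedule into the five tx_w_round_robin registers, one FIFO number per byte (most significant byte first); the two-FIFO case uses a repeating 0,0,1 pattern over the first 33 of the 40 byte slots. The standalone sketch below reproduces the case-2 constants under that reading; it is an interpretation of the values, not code taken from the driver.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* FIFO 0 gets two of every three transmit slots, FIFO 1 gets one. */
        const int pattern[] = { 0, 0, 1 };
        uint64_t rr[5] = { 0 };
        int slot, r;

        for (slot = 0; slot < 33; slot++)
            rr[slot / 8] |= (uint64_t)pattern[slot % 3] << (56 - 8 * (slot % 8));

        for (r = 0; r < 5; r++)
            printf("tx_w_round_robin_%d = 0x%016llx\n",
                   r, (unsigned long long)rr[r]);
        return 0;
    }

The larger FIFO counts appear to follow the same byte-per-slot layout with longer repeating sequences.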
1106 | /* Filling the Rx round robin registers as per the | ||
1107 | * number of Rings and steering based on QoS. | ||
1108 | */ | ||
1109 | switch (config->rx_ring_num) { | ||
1110 | case 1: | ||
1111 | val64 = 0x8080808080808080ULL; | ||
1112 | writeq(val64, &bar0->rts_qos_steering); | ||
1113 | break; | ||
1114 | case 2: | ||
1115 | val64 = 0x0000010000010000ULL; | ||
1116 | writeq(val64, &bar0->rx_w_round_robin_0); | ||
1117 | val64 = 0x0100000100000100ULL; | ||
1118 | writeq(val64, &bar0->rx_w_round_robin_1); | ||
1119 | val64 = 0x0001000001000001ULL; | ||
1120 | writeq(val64, &bar0->rx_w_round_robin_2); | ||
1121 | val64 = 0x0000010000010000ULL; | ||
1122 | writeq(val64, &bar0->rx_w_round_robin_3); | ||
1123 | val64 = 0x0100000000000000ULL; | ||
1124 | writeq(val64, &bar0->rx_w_round_robin_4); | ||
1125 | |||
1126 | val64 = 0x8080808040404040ULL; | ||
1127 | writeq(val64, &bar0->rts_qos_steering); | ||
1128 | break; | ||
1129 | case 3: | ||
1130 | val64 = 0x0001000102000001ULL; | ||
1131 | writeq(val64, &bar0->rx_w_round_robin_0); | ||
1132 | val64 = 0x0001020000010001ULL; | ||
1133 | writeq(val64, &bar0->rx_w_round_robin_1); | ||
1134 | val64 = 0x0200000100010200ULL; | ||
1135 | writeq(val64, &bar0->rx_w_round_robin_2); | ||
1136 | val64 = 0x0001000102000001ULL; | ||
1137 | writeq(val64, &bar0->rx_w_round_robin_3); | ||
1138 | val64 = 0x0001020000000000ULL; | ||
1139 | writeq(val64, &bar0->rx_w_round_robin_4); | ||
1140 | |||
1141 | val64 = 0x8080804040402020ULL; | ||
1142 | writeq(val64, &bar0->rts_qos_steering); | ||
1143 | break; | ||
1144 | case 4: | ||
1145 | val64 = 0x0001020300010200ULL; | ||
1146 | writeq(val64, &bar0->rx_w_round_robin_0); | ||
1147 | val64 = 0x0100000102030001ULL; | ||
1148 | writeq(val64, &bar0->rx_w_round_robin_1); | ||
1149 | val64 = 0x0200010000010203ULL; | ||
1150 | writeq(val64, &bar0->rx_w_round_robin_2); | ||
1151 | val64 = 0x0001020001000001ULL; | ||
1152 | writeq(val64, &bar0->rx_w_round_robin_3); | ||
1153 | val64 = 0x0203000100000000ULL; | ||
1154 | writeq(val64, &bar0->rx_w_round_robin_4); | ||
1155 | |||
1156 | val64 = 0x8080404020201010ULL; | ||
1157 | writeq(val64, &bar0->rts_qos_steering); | ||
1158 | break; | ||
1159 | case 5: | ||
1160 | val64 = 0x0001000203000102ULL; | ||
1161 | writeq(val64, &bar0->rx_w_round_robin_0); | ||
1162 | val64 = 0x0001020001030004ULL; | ||
1163 | writeq(val64, &bar0->rx_w_round_robin_1); | ||
1164 | val64 = 0x0001000203000102ULL; | ||
1165 | writeq(val64, &bar0->rx_w_round_robin_2); | ||
1166 | val64 = 0x0001020001030004ULL; | ||
1167 | writeq(val64, &bar0->rx_w_round_robin_3); | ||
1168 | val64 = 0x0001000000000000ULL; | ||
1169 | writeq(val64, &bar0->rx_w_round_robin_4); | ||
1170 | |||
1171 | val64 = 0x8080404020201008ULL; | ||
1172 | writeq(val64, &bar0->rts_qos_steering); | ||
1173 | break; | ||
1174 | case 6: | ||
1175 | val64 = 0x0001020304000102ULL; | ||
1176 | writeq(val64, &bar0->rx_w_round_robin_0); | ||
1177 | val64 = 0x0304050001020001ULL; | ||
1178 | writeq(val64, &bar0->rx_w_round_robin_1); | ||
1179 | val64 = 0x0203000100000102ULL; | ||
1180 | writeq(val64, &bar0->rx_w_round_robin_2); | ||
1181 | val64 = 0x0304000102030405ULL; | ||
1182 | writeq(val64, &bar0->rx_w_round_robin_3); | ||
1183 | val64 = 0x0001000200000000ULL; | ||
1184 | writeq(val64, &bar0->rx_w_round_robin_4); | ||
1185 | |||
1186 | val64 = 0x8080404020100804ULL; | ||
1187 | writeq(val64, &bar0->rts_qos_steering); | ||
1188 | break; | ||
1189 | case 7: | ||
1190 | val64 = 0x0001020001020300ULL; | ||
1191 | writeq(val64, &bar0->rx_w_round_robin_0); | ||
1192 | val64 = 0x0102030400010203ULL; | ||
1193 | writeq(val64, &bar0->rx_w_round_robin_1); | ||
1194 | val64 = 0x0405060001020001ULL; | ||
1195 | writeq(val64, &bar0->rx_w_round_robin_2); | ||
1196 | val64 = 0x0304050000010200ULL; | ||
1197 | writeq(val64, &bar0->rx_w_round_robin_3); | ||
1198 | val64 = 0x0102030000000000ULL; | ||
1199 | writeq(val64, &bar0->rx_w_round_robin_4); | ||
1200 | |||
1201 | val64 = 0x8080402010080402ULL; | ||
1202 | writeq(val64, &bar0->rts_qos_steering); | ||
1203 | break; | ||
1204 | case 8: | ||
1205 | val64 = 0x0001020300040105ULL; | ||
1206 | writeq(val64, &bar0->rx_w_round_robin_0); | ||
1207 | val64 = 0x0200030106000204ULL; | ||
1208 | writeq(val64, &bar0->rx_w_round_robin_1); | ||
1209 | val64 = 0x0103000502010007ULL; | ||
1210 | writeq(val64, &bar0->rx_w_round_robin_2); | ||
1211 | val64 = 0x0304010002060500ULL; | ||
1212 | writeq(val64, &bar0->rx_w_round_robin_3); | ||
1213 | val64 = 0x0103020400000000ULL; | ||
1214 | writeq(val64, &bar0->rx_w_round_robin_4); | ||
1215 | |||
1216 | val64 = 0x8040201008040201ULL; | ||
1217 | writeq(val64, &bar0->rts_qos_steering); | ||
1218 | break; | ||
1219 | } | ||
835 | 1220 | ||
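The Rx cases reuse the same byte-per-slot layout as the Tx tables (the rx_w_round_robin constants for a given ring count match the Tx ones), and each case additionally programs rts_qos_steering. The steering words, from 0x8080808080808080 for one ring down to 0x8040201008040201 for eight, read as one byte per QoS priority with one bit per eligible ring, MSB for ring 0; the sketch below rebuilds the eight-ring value under that assumption, which is an interpretation of the constants rather than documented hardware behaviour.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t steer = 0;
        int prio;

        /* With eight rings, priority p is steered to ring p; bit 0x80 >> ring
         * selects the ring within each priority byte. */
        for (prio = 0; prio < 8; prio++)
            steer |= (uint64_t)(0x80 >> prio) << (56 - 8 * prio);

        printf("rts_qos_steering = 0x%016llx\n",   /* 0x8040201008040201 */
               (unsigned long long)steer);
        return 0;
    }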
836 | /* UDP Fix */ | 1221 | /* UDP Fix */ |
837 | val64 = 0; | 1222 | val64 = 0; |
838 | for (i = 1; i < 8; i++) | 1223 | for (i = 0; i < 8; i++) |
1224 | writeq(val64, &bar0->rts_frm_len_n[i]); | ||
1225 | |||
1226 | /* Set the default rts frame length for the rings configured */ | ||
1227 | val64 = MAC_RTS_FRM_LEN_SET(dev->mtu+22); | ||
1228 | for (i = 0 ; i < config->rx_ring_num ; i++) | ||
839 | writeq(val64, &bar0->rts_frm_len_n[i]); | 1229 | writeq(val64, &bar0->rts_frm_len_n[i]); |
840 | 1230 | ||
841 | /* Set rts_frm_len register for fifo 0 */ | 1231 | /* Set the user-specified frame length for the |
842 | writeq(MAC_RTS_FRM_LEN_SET(dev->mtu + 22), | 1232 | * configured rings |
843 | &bar0->rts_frm_len_n[0]); | 1233 | */ |
1234 | for (i = 0; i < config->rx_ring_num; i++) { | ||
1235 | /* If rts_frm_len[i] == 0 it is assumed that the user has | ||
1236 | * not specified frame length steering. | ||
1237 | * If the user provides a frame length then program the | ||
1238 | * rts_frm_len register with that value, otherwise | ||
1239 | * leave it as it is. | ||
1240 | */ | ||
1241 | if (rts_frm_len[i] != 0) { | ||
1242 | writeq(MAC_RTS_FRM_LEN_SET(rts_frm_len[i]), | ||
1243 | &bar0->rts_frm_len_n[i]); | ||
1244 | } | ||
1245 | } | ||
844 | 1246 | ||
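The default programmed above is the interface MTU plus 22 bytes; the 22 presumably accounts for the 14-byte Ethernet header, a 4-byte VLAN tag and the 4-byte FCS (an assumption, since the driver does not spell it out). A minimal sketch:

    #include <stdio.h>

    /* Default receive frame length as programmed above: MTU + 22 bytes of
     * assumed overhead (14 Ethernet header + 4 VLAN tag + 4 FCS). */
    static int default_rts_frm_len(int mtu)
    {
        return mtu + 22;
    }

    int main(void)
    {
        printf("MTU 1500 -> frame length %d\n", default_rts_frm_len(1500)); /* 1522 */
        return 0;
    }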
845 | /* Enable statistics */ | 1247 | /* Program statistics memory */ |
846 | writeq(mac_control->stats_mem_phy, &bar0->stat_addr); | 1248 | writeq(mac_control->stats_mem_phy, &bar0->stat_addr); |
847 | val64 = SET_UPDT_PERIOD(Stats_refresh_time) | | ||
848 | STAT_CFG_STAT_RO | STAT_CFG_STAT_EN; | ||
849 | writeq(val64, &bar0->stat_cfg); | ||
850 | 1249 | ||
851 | /* | 1250 | if (nic->device_type == XFRAME_II_DEVICE) { |
1251 | val64 = STAT_BC(0x320); | ||
1252 | writeq(val64, &bar0->stat_byte_cnt); | ||
1253 | } | ||
1254 | |||
1255 | /* | ||
852 | * Initializing the sampling rate for the device to calculate the | 1256 | * Initializing the sampling rate for the device to calculate the |
853 | * bandwidth utilization. | 1257 | * bandwidth utilization. |
854 | */ | 1258 | */ |
@@ -857,30 +1261,38 @@ static int init_nic(struct s2io_nic *nic) | |||
857 | writeq(val64, &bar0->mac_link_util); | 1261 | writeq(val64, &bar0->mac_link_util); |
858 | 1262 | ||
859 | 1263 | ||
860 | /* | 1264 | /* |
861 | * Initializing the Transmit and Receive Traffic Interrupt | 1265 | * Initializing the Transmit and Receive Traffic Interrupt |
862 | * Scheme. | 1266 | * Scheme. |
863 | */ | 1267 | */ |
864 | /* TTI Initialization. Default Tx timer gets us about | 1268 | /* |
1269 | * TTI Initialization. Default Tx timer gets us about | ||
865 | * 250 interrupts per sec. Continuous interrupts are enabled | 1270 | * 250 interrupts per sec. Continuous interrupts are enabled |
866 | * by default. | 1271 | * by default. |
867 | */ | 1272 | */ |
868 | val64 = TTI_DATA1_MEM_TX_TIMER_VAL(0x2078) | | 1273 | if (nic->device_type == XFRAME_II_DEVICE) { |
869 | TTI_DATA1_MEM_TX_URNG_A(0xA) | | 1274 | int count = (nic->config.bus_speed * 125)/2; |
1275 | val64 = TTI_DATA1_MEM_TX_TIMER_VAL(count); | ||
1276 | } else { | ||
1277 | |||
1278 | val64 = TTI_DATA1_MEM_TX_TIMER_VAL(0x2078); | ||
1279 | } | ||
1280 | val64 |= TTI_DATA1_MEM_TX_URNG_A(0xA) | | ||
870 | TTI_DATA1_MEM_TX_URNG_B(0x10) | | 1281 | TTI_DATA1_MEM_TX_URNG_B(0x10) | |
871 | TTI_DATA1_MEM_TX_URNG_C(0x30) | TTI_DATA1_MEM_TX_TIMER_AC_EN | | 1282 | TTI_DATA1_MEM_TX_URNG_C(0x30) | TTI_DATA1_MEM_TX_TIMER_AC_EN; |
872 | TTI_DATA1_MEM_TX_TIMER_CI_EN; | 1283 | if (use_continuous_tx_intrs) |
1284 | val64 |= TTI_DATA1_MEM_TX_TIMER_CI_EN; | ||
873 | writeq(val64, &bar0->tti_data1_mem); | 1285 | writeq(val64, &bar0->tti_data1_mem); |
874 | 1286 | ||
875 | val64 = TTI_DATA2_MEM_TX_UFC_A(0x10) | | 1287 | val64 = TTI_DATA2_MEM_TX_UFC_A(0x10) | |
876 | TTI_DATA2_MEM_TX_UFC_B(0x20) | | 1288 | TTI_DATA2_MEM_TX_UFC_B(0x20) | |
877 | TTI_DATA2_MEM_TX_UFC_C(0x40) | TTI_DATA2_MEM_TX_UFC_D(0x80); | 1289 | TTI_DATA2_MEM_TX_UFC_C(0x70) | TTI_DATA2_MEM_TX_UFC_D(0x80); |
878 | writeq(val64, &bar0->tti_data2_mem); | 1290 | writeq(val64, &bar0->tti_data2_mem); |
879 | 1291 | ||
880 | val64 = TTI_CMD_MEM_WE | TTI_CMD_MEM_STROBE_NEW_CMD; | 1292 | val64 = TTI_CMD_MEM_WE | TTI_CMD_MEM_STROBE_NEW_CMD; |
881 | writeq(val64, &bar0->tti_command_mem); | 1293 | writeq(val64, &bar0->tti_command_mem); |
882 | 1294 | ||
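For Xframe II the transmit timer count above is derived from the PCI bus speed, targeting roughly 250 Tx interrupts per second; the RTI path further down uses the same formula divided by 4 for roughly 500 Rx interrupts per second. A quick sketch of that arithmetic, treating bus_speed as the bus clock in MHz (an assumption):

    #include <stdio.h>

    int main(void)
    {
        int bus_speed = 133;                       /* e.g. a 133 MHz PCI-X bus */
        int tti_count = (bus_speed * 125) / 2;     /* Tx timer, ~250 intrs/sec */
        int rti_count = (bus_speed * 125) / 4;     /* Rx timer, ~500 intrs/sec */

        printf("TTI timer val = 0x%x, RTI timer val = 0x%x\n",
               tti_count, rti_count);
        return 0;
    }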
883 | /* | 1295 | /* |
884 | * Once the operation completes, the Strobe bit of the command | 1296 | * Once the operation completes, the Strobe bit of the command |
885 | * register will be reset. We poll for this particular condition. | 1297 | * register will be reset. We poll for this particular condition. |
886 | * We wait for a maximum of 500ms for the operation to complete, | 1298 | * We wait for a maximum of 500ms for the operation to complete, |
@@ -901,52 +1313,97 @@ static int init_nic(struct s2io_nic *nic) | |||
901 | time++; | 1313 | time++; |
902 | } | 1314 | } |
903 | 1315 | ||
904 | /* RTI Initialization */ | 1316 | if (nic->config.bimodal) { |
905 | val64 = RTI_DATA1_MEM_RX_TIMER_VAL(0xFFF) | | 1317 | int k = 0; |
906 | RTI_DATA1_MEM_RX_URNG_A(0xA) | | 1318 | for (k = 0; k < config->rx_ring_num; k++) { |
907 | RTI_DATA1_MEM_RX_URNG_B(0x10) | | 1319 | val64 = TTI_CMD_MEM_WE | TTI_CMD_MEM_STROBE_NEW_CMD; |
908 | RTI_DATA1_MEM_RX_URNG_C(0x30) | RTI_DATA1_MEM_RX_TIMER_AC_EN; | 1320 | val64 |= TTI_CMD_MEM_OFFSET(0x38+k); |
1321 | writeq(val64, &bar0->tti_command_mem); | ||
1322 | |||
1323 | /* | ||
1324 | * Once the operation completes, the Strobe bit of the command | ||
1325 | * register will be reset. We poll for this particular condition. | ||
1326 | * We wait for a maximum of 500ms for the operation to complete; | ||
1327 | * if it's not complete by then we return an error. | ||
1328 | */ | ||
1329 | time = 0; | ||
1330 | while (TRUE) { | ||
1331 | val64 = readq(&bar0->tti_command_mem); | ||
1332 | if (!(val64 & TTI_CMD_MEM_STROBE_NEW_CMD)) { | ||
1333 | break; | ||
1334 | } | ||
1335 | if (time > 10) { | ||
1336 | DBG_PRINT(ERR_DBG, | ||
1337 | "%s: TTI init Failed\n", | ||
1338 | dev->name); | ||
1339 | return -1; | ||
1340 | } | ||
1341 | time++; | ||
1342 | msleep(50); | ||
1343 | } | ||
1344 | } | ||
1345 | } else { | ||
909 | 1346 | ||
910 | writeq(val64, &bar0->rti_data1_mem); | 1347 | /* RTI Initialization */ |
1348 | if (nic->device_type == XFRAME_II_DEVICE) { | ||
1349 | /* | ||
1350 | * Programmed to generate Apprx 500 Intrs per | ||
1351 | * second | ||
1352 | */ | ||
1353 | int count = (nic->config.bus_speed * 125)/4; | ||
1354 | val64 = RTI_DATA1_MEM_RX_TIMER_VAL(count); | ||
1355 | } else { | ||
1356 | val64 = RTI_DATA1_MEM_RX_TIMER_VAL(0xFFF); | ||
1357 | } | ||
1358 | val64 |= RTI_DATA1_MEM_RX_URNG_A(0xA) | | ||
1359 | RTI_DATA1_MEM_RX_URNG_B(0x10) | | ||
1360 | RTI_DATA1_MEM_RX_URNG_C(0x30) | RTI_DATA1_MEM_RX_TIMER_AC_EN; | ||
911 | 1361 | ||
912 | val64 = RTI_DATA2_MEM_RX_UFC_A(0x1) | | 1362 | writeq(val64, &bar0->rti_data1_mem); |
913 | RTI_DATA2_MEM_RX_UFC_B(0x2) | | ||
914 | RTI_DATA2_MEM_RX_UFC_C(0x40) | RTI_DATA2_MEM_RX_UFC_D(0x80); | ||
915 | writeq(val64, &bar0->rti_data2_mem); | ||
916 | 1363 | ||
917 | val64 = RTI_CMD_MEM_WE | RTI_CMD_MEM_STROBE_NEW_CMD; | 1364 | val64 = RTI_DATA2_MEM_RX_UFC_A(0x1) | |
918 | writeq(val64, &bar0->rti_command_mem); | 1365 | RTI_DATA2_MEM_RX_UFC_B(0x2) | |
1366 | RTI_DATA2_MEM_RX_UFC_C(0x40) | RTI_DATA2_MEM_RX_UFC_D(0x80); | ||
1367 | writeq(val64, &bar0->rti_data2_mem); | ||
919 | 1368 | ||
920 | /* | 1369 | for (i = 0; i < config->rx_ring_num; i++) { |
921 | * Once the operation completes, the Strobe bit of the command | 1370 | val64 = RTI_CMD_MEM_WE | RTI_CMD_MEM_STROBE_NEW_CMD |
922 | * register will be reset. We poll for this particular condition | 1371 | | RTI_CMD_MEM_OFFSET(i); |
923 | * We wait for a maximum of 500ms for the operation to complete, | 1372 | writeq(val64, &bar0->rti_command_mem); |
924 | * if it's not complete by then we return error. | 1373 | |
925 | */ | 1374 | /* |
926 | time = 0; | 1375 | * Once the operation completes, the Strobe bit of the |
927 | while (TRUE) { | 1376 | * command register will be reset. We poll for this |
928 | val64 = readq(&bar0->rti_command_mem); | 1377 | * particular condition. We wait for a maximum of 500ms |
929 | if (!(val64 & TTI_CMD_MEM_STROBE_NEW_CMD)) { | 1378 | * for the operation to complete; if it's not complete |
930 | break; | 1379 | * by then we return an error. |
931 | } | 1380 | */ |
932 | if (time > 10) { | 1381 | time = 0; |
933 | DBG_PRINT(ERR_DBG, "%s: RTI init Failed\n", | 1382 | while (TRUE) { |
934 | dev->name); | 1383 | val64 = readq(&bar0->rti_command_mem); |
935 | return -1; | 1384 | if (!(val64 & RTI_CMD_MEM_STROBE_NEW_CMD)) { |
1385 | break; | ||
1386 | } | ||
1387 | if (time > 10) { | ||
1388 | DBG_PRINT(ERR_DBG, "%s: RTI init Failed\n", | ||
1389 | dev->name); | ||
1390 | return -1; | ||
1391 | } | ||
1392 | time++; | ||
1393 | msleep(50); | ||
1394 | } | ||
936 | } | 1395 | } |
937 | time++; | ||
938 | msleep(50); | ||
939 | } | 1396 | } |
940 | 1397 | ||
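Every command-memory write above follows the same pattern: issue the command with the strobe bit set, then poll until the hardware clears it, giving up after roughly 500 ms (ten polls, 50 ms apart). A kernel-style sketch of that wait loop is shown below; the helper name is hypothetical, and it assumes <linux/io.h>, <linux/delay.h> and <linux/types.h> for readq(), msleep() and u64.

    /* Poll cmd_reg until strobe_bit clears; return 0 on success or -1 after
     * about 500 ms (10 polls, 50 ms apart), mirroring the loops above. */
    static int wait_for_cmd_complete(void __iomem *cmd_reg, u64 strobe_bit)
    {
        int time = 0;
        u64 val64;

        while (1) {
            val64 = readq(cmd_reg);
            if (!(val64 & strobe_bit))
                return 0;       /* command accepted by the hardware */
            if (time > 10)
                return -1;      /* timed out, give up */
            time++;
            msleep(50);
        }
    }

With such a helper, each TTI/RTI loop above reduces to one writeq() of the command followed by wait_for_cmd_complete(&bar0->tti_command_mem, TTI_CMD_MEM_STROBE_NEW_CMD), or the RTI equivalent.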
941 | /* | 1398 | /* |
942 | * Initializing proper values as Pause threshold into all | 1399 | * Initializing proper values as Pause threshold into all |
943 | * the 8 Queues on Rx side. | 1400 | * the 8 Queues on Rx side. |
944 | */ | 1401 | */ |
945 | writeq(0xffbbffbbffbbffbbULL, &bar0->mc_pause_thresh_q0q3); | 1402 | writeq(0xffbbffbbffbbffbbULL, &bar0->mc_pause_thresh_q0q3); |
946 | writeq(0xffbbffbbffbbffbbULL, &bar0->mc_pause_thresh_q4q7); | 1403 | writeq(0xffbbffbbffbbffbbULL, &bar0->mc_pause_thresh_q4q7); |
947 | 1404 | ||
948 | /* Disable RMAC PAD STRIPPING */ | 1405 | /* Disable RMAC PAD STRIPPING */ |
949 | add = &bar0->mac_cfg; | 1406 | add = (void *) &bar0->mac_cfg; |
950 | val64 = readq(&bar0->mac_cfg); | 1407 | val64 = readq(&bar0->mac_cfg); |
951 | val64 &= ~(MAC_CFG_RMAC_STRIP_PAD); | 1408 | val64 &= ~(MAC_CFG_RMAC_STRIP_PAD); |
952 | writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key); | 1409 | writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key); |
@@ -955,8 +1412,8 @@ static int init_nic(struct s2io_nic *nic) | |||
955 | writel((u32) (val64 >> 32), (add + 4)); | 1412 | writel((u32) (val64 >> 32), (add + 4)); |
956 | val64 = readq(&bar0->mac_cfg); | 1413 | val64 = readq(&bar0->mac_cfg); |
957 | 1414 | ||
958 | /* | 1415 | /* |
959 | * Set the time value to be inserted in the pause frame | 1416 | * Set the time value to be inserted in the pause frame |
960 | * generated by xena. | 1417 | * generated by xena. |
961 | */ | 1418 | */ |
962 | val64 = readq(&bar0->rmac_pause_cfg); | 1419 | val64 = readq(&bar0->rmac_pause_cfg); |
@@ -964,7 +1421,7 @@ static int init_nic(struct s2io_nic *nic) | |||
964 | val64 |= RMAC_PAUSE_HG_PTIME(nic->mac_control.rmac_pause_time); | 1421 | val64 |= RMAC_PAUSE_HG_PTIME(nic->mac_control.rmac_pause_time); |
965 | writeq(val64, &bar0->rmac_pause_cfg); | 1422 | writeq(val64, &bar0->rmac_pause_cfg); |
966 | 1423 | ||
967 | /* | 1424 | /* |
968 | * Set the Threshold Limit for Generating the pause frame | 1425 | * Set the Threshold Limit for Generating the pause frame |
969 | * If the amount of data in any Queue exceeds ratio of | 1426 | * If the amount of data in any Queue exceeds ratio of |
970 | * (mac_control.mc_pause_threshold_q0q3 or q4q7)/256 | 1427 | * (mac_control.mc_pause_threshold_q0q3 or q4q7)/256 |
@@ -988,25 +1445,54 @@ static int init_nic(struct s2io_nic *nic) | |||
988 | } | 1445 | } |
989 | writeq(val64, &bar0->mc_pause_thresh_q4q7); | 1446 | writeq(val64, &bar0->mc_pause_thresh_q4q7); |
990 | 1447 | ||
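The pause threshold is expressed as a fraction of a queue's memory, threshold/256 (per the comment above). As a worked example, assuming the 0xBB byte in the 0xffbb pattern is such a per-queue threshold: 0xBB is 187, so pause frames would be generated once a queue exceeds about 73% of its share.

    #include <stdio.h>

    int main(void)
    {
        unsigned int threshold = 0xBB;   /* 187; example value only */

        printf("pause above %u/256 = %.1f%% of the queue's memory share\n",
               threshold, threshold * 100.0 / 256);
        return 0;
    }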
991 | /* | 1448 | /* |
992 | * TxDMA will stop Read request if the number of read split has | 1449 | * TxDMA will stop Read request if the number of read split has |
993 | * exceeded the limit pointed by shared_splits | 1450 | * exceeded the limit pointed by shared_splits |
994 | */ | 1451 | */ |
995 | val64 = readq(&bar0->pic_control); | 1452 | val64 = readq(&bar0->pic_control); |
996 | val64 |= PIC_CNTL_SHARED_SPLITS(shared_splits); | 1453 | val64 |= PIC_CNTL_SHARED_SPLITS(shared_splits); |
997 | writeq(val64, &bar0->pic_control); | 1454 | writeq(val64, &bar0->pic_control); |
998 | 1455 | ||
1456 | /* | ||
1457 | * Programming the Herc to split every write transaction | ||
1458 | * that does not start on an ADB to reduce disconnects. | ||
1459 | */ | ||
1460 | if (nic->device_type == XFRAME_II_DEVICE) { | ||
1461 | val64 = WREQ_SPLIT_MASK_SET_MASK(255); | ||
1462 | writeq(val64, &bar0->wreq_split_mask); | ||
1463 | } | ||
1464 | |||
1465 | /* Setting Link stability period to 64 ms */ | ||
1466 | if (nic->device_type == XFRAME_II_DEVICE) { | ||
1467 | val64 = MISC_LINK_STABILITY_PRD(3); | ||
1468 | writeq(val64, &bar0->misc_control); | ||
1469 | } | ||
1470 | |||
999 | return SUCCESS; | 1471 | return SUCCESS; |
1000 | } | 1472 | } |
1473 | #define LINK_UP_DOWN_INTERRUPT 1 | ||
1474 | #define MAC_RMAC_ERR_TIMER 2 | ||
1001 | 1475 | ||
1002 | /** | 1476 | #if defined(CONFIG_MSI_MODE) || defined(CONFIG_MSIX_MODE) |
1003 | * en_dis_able_nic_intrs - Enable or Disable the interrupts | 1477 | #define s2io_link_fault_indication(x) MAC_RMAC_ERR_TIMER |
1478 | #else | ||
1479 | int s2io_link_fault_indication(nic_t *nic) | ||
1480 | { | ||
1481 | if (nic->device_type == XFRAME_II_DEVICE) | ||
1482 | return LINK_UP_DOWN_INTERRUPT; | ||
1483 | else | ||
1484 | return MAC_RMAC_ERR_TIMER; | ||
1485 | } | ||
1486 | #endif | ||
1487 | |||
1488 | /** | ||
1489 | * en_dis_able_nic_intrs - Enable or Disable the interrupts | ||
1004 | * @nic: device private variable, | 1490 | * @nic: device private variable, |
1005 | * @mask: A mask indicating which Intr block must be modified and, | 1491 | * @mask: A mask indicating which Intr block must be modified and, |
1006 | * @flag: A flag indicating whether to enable or disable the Intrs. | 1492 | * @flag: A flag indicating whether to enable or disable the Intrs. |
1007 | * Description: This function will either disable or enable the interrupts | 1493 | * Description: This function will either disable or enable the interrupts |
1008 | * depending on the flag argument. The mask argument can be used to | 1494 | * depending on the flag argument. The mask argument can be used to |
1009 | * enable/disable any Intr block. | 1495 | * enable/disable any Intr block. |
1010 | * Return Value: NONE. | 1496 | * Return Value: NONE. |
1011 | */ | 1497 | */ |
1012 | 1498 | ||
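As the later start_nic()/stop_nic() code in this patch shows, callers build the mask by OR-ing the interrupt-block flags and pass the same mask with ENABLE_INTRS or DISABLE_INTRS. A minimal usage sketch, assuming the driver context (the nic pointer and the flag macros from s2io.h):

    u16 interruptible = TX_TRAFFIC_INTR | RX_TRAFFIC_INTR |
                        TX_PIC_INTR | RX_PIC_INTR |
                        TX_MAC_INTR | RX_MAC_INTR;

    /* Enable the selected blocks when bringing the NIC up... */
    en_dis_able_nic_intrs(nic, interruptible, ENABLE_INTRS);
    /* ...and disable exactly the same set when stopping it. */
    en_dis_able_nic_intrs(nic, interruptible, DISABLE_INTRS);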
@@ -1024,20 +1510,31 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1024 | temp64 = readq(&bar0->general_int_mask); | 1510 | temp64 = readq(&bar0->general_int_mask); |
1025 | temp64 &= ~((u64) val64); | 1511 | temp64 &= ~((u64) val64); |
1026 | writeq(temp64, &bar0->general_int_mask); | 1512 | writeq(temp64, &bar0->general_int_mask); |
1027 | /* | 1513 | /* |
1028 | * Disabled all PCIX, Flash, MDIO, IIC and GPIO | 1514 | * If this is a Hercules adapter, enable the GPIO |
1029 | * interrupts for now. | 1515 | * interrupt; otherwise disable all PCIX, Flash, |
1030 | * TODO | 1516 | * MDIO, IIC and GPIO interrupts for now. |
| 1517 | * TODO |
1031 | */ | 1518 | */ |
1032 | writeq(DISABLE_ALL_INTRS, &bar0->pic_int_mask); | 1519 | if (s2io_link_fault_indication(nic) == |
1033 | /* | 1520 | LINK_UP_DOWN_INTERRUPT) { |
1521 | temp64 = readq(&bar0->pic_int_mask); | ||
1522 | temp64 &= ~((u64) PIC_INT_GPIO); | ||
1523 | writeq(temp64, &bar0->pic_int_mask); | ||
1524 | temp64 = readq(&bar0->gpio_int_mask); | ||
1525 | temp64 &= ~((u64) GPIO_INT_MASK_LINK_UP); | ||
1526 | writeq(temp64, &bar0->gpio_int_mask); | ||
1527 | } else { | ||
1528 | writeq(DISABLE_ALL_INTRS, &bar0->pic_int_mask); | ||
1529 | } | ||
1530 | /* | ||
1034 | * No MSI Support is available presently, so TTI and | 1531 | * No MSI Support is available presently, so TTI and |
1035 | * RTI interrupts are also disabled. | 1532 | * RTI interrupts are also disabled. |
1036 | */ | 1533 | */ |
1037 | } else if (flag == DISABLE_INTRS) { | 1534 | } else if (flag == DISABLE_INTRS) { |
1038 | /* | 1535 | /* |
1039 | * Disable PIC Intrs in the general | 1536 | * Disable PIC Intrs in the general |
1040 | * intr mask register | 1537 | * intr mask register |
1041 | */ | 1538 | */ |
1042 | writeq(DISABLE_ALL_INTRS, &bar0->pic_int_mask); | 1539 | writeq(DISABLE_ALL_INTRS, &bar0->pic_int_mask); |
1043 | temp64 = readq(&bar0->general_int_mask); | 1540 | temp64 = readq(&bar0->general_int_mask); |
@@ -1055,27 +1552,27 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1055 | temp64 = readq(&bar0->general_int_mask); | 1552 | temp64 = readq(&bar0->general_int_mask); |
1056 | temp64 &= ~((u64) val64); | 1553 | temp64 &= ~((u64) val64); |
1057 | writeq(temp64, &bar0->general_int_mask); | 1554 | writeq(temp64, &bar0->general_int_mask); |
1058 | /* | 1555 | /* |
1059 | * Keep all interrupts other than PFC interrupt | 1556 | * Keep all interrupts other than PFC interrupt |
1060 | * and PCC interrupt disabled in DMA level. | 1557 | * and PCC interrupt disabled in DMA level. |
1061 | */ | 1558 | */ |
1062 | val64 = DISABLE_ALL_INTRS & ~(TXDMA_PFC_INT_M | | 1559 | val64 = DISABLE_ALL_INTRS & ~(TXDMA_PFC_INT_M | |
1063 | TXDMA_PCC_INT_M); | 1560 | TXDMA_PCC_INT_M); |
1064 | writeq(val64, &bar0->txdma_int_mask); | 1561 | writeq(val64, &bar0->txdma_int_mask); |
1065 | /* | 1562 | /* |
1066 | * Enable only the MISC error 1 interrupt in PFC block | 1563 | * Enable only the MISC error 1 interrupt in PFC block |
1067 | */ | 1564 | */ |
1068 | val64 = DISABLE_ALL_INTRS & (~PFC_MISC_ERR_1); | 1565 | val64 = DISABLE_ALL_INTRS & (~PFC_MISC_ERR_1); |
1069 | writeq(val64, &bar0->pfc_err_mask); | 1566 | writeq(val64, &bar0->pfc_err_mask); |
1070 | /* | 1567 | /* |
1071 | * Enable only the FB_ECC error interrupt in PCC block | 1568 | * Enable only the FB_ECC error interrupt in PCC block |
1072 | */ | 1569 | */ |
1073 | val64 = DISABLE_ALL_INTRS & (~PCC_FB_ECC_ERR); | 1570 | val64 = DISABLE_ALL_INTRS & (~PCC_FB_ECC_ERR); |
1074 | writeq(val64, &bar0->pcc_err_mask); | 1571 | writeq(val64, &bar0->pcc_err_mask); |
1075 | } else if (flag == DISABLE_INTRS) { | 1572 | } else if (flag == DISABLE_INTRS) { |
1076 | /* | 1573 | /* |
1077 | * Disable TxDMA Intrs in the general intr mask | 1574 | * Disable TxDMA Intrs in the general intr mask |
1078 | * register | 1575 | * register |
1079 | */ | 1576 | */ |
1080 | writeq(DISABLE_ALL_INTRS, &bar0->txdma_int_mask); | 1577 | writeq(DISABLE_ALL_INTRS, &bar0->txdma_int_mask); |
1081 | writeq(DISABLE_ALL_INTRS, &bar0->pfc_err_mask); | 1578 | writeq(DISABLE_ALL_INTRS, &bar0->pfc_err_mask); |
@@ -1093,15 +1590,15 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1093 | temp64 = readq(&bar0->general_int_mask); | 1590 | temp64 = readq(&bar0->general_int_mask); |
1094 | temp64 &= ~((u64) val64); | 1591 | temp64 &= ~((u64) val64); |
1095 | writeq(temp64, &bar0->general_int_mask); | 1592 | writeq(temp64, &bar0->general_int_mask); |
1096 | /* | 1593 | /* |
1097 | * All RxDMA block interrupts are disabled for now | 1594 | * All RxDMA block interrupts are disabled for now |
1098 | * TODO | 1595 | * TODO |
1099 | */ | 1596 | */ |
1100 | writeq(DISABLE_ALL_INTRS, &bar0->rxdma_int_mask); | 1597 | writeq(DISABLE_ALL_INTRS, &bar0->rxdma_int_mask); |
1101 | } else if (flag == DISABLE_INTRS) { | 1598 | } else if (flag == DISABLE_INTRS) { |
1102 | /* | 1599 | /* |
1103 | * Disable RxDMA Intrs in the general intr mask | 1600 | * Disable RxDMA Intrs in the general intr mask |
1104 | * register | 1601 | * register |
1105 | */ | 1602 | */ |
1106 | writeq(DISABLE_ALL_INTRS, &bar0->rxdma_int_mask); | 1603 | writeq(DISABLE_ALL_INTRS, &bar0->rxdma_int_mask); |
1107 | temp64 = readq(&bar0->general_int_mask); | 1604 | temp64 = readq(&bar0->general_int_mask); |
@@ -1118,22 +1615,13 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1118 | temp64 = readq(&bar0->general_int_mask); | 1615 | temp64 = readq(&bar0->general_int_mask); |
1119 | temp64 &= ~((u64) val64); | 1616 | temp64 &= ~((u64) val64); |
1120 | writeq(temp64, &bar0->general_int_mask); | 1617 | writeq(temp64, &bar0->general_int_mask); |
1121 | /* | 1618 | /* |
1122 | * All MAC block error interrupts are disabled for now | 1619 | * All MAC block error interrupts are disabled for now |
1123 | * except the link status change interrupt. | ||
1124 | * TODO | 1620 | * TODO |
1125 | */ | 1621 | */ |
1126 | val64 = MAC_INT_STATUS_RMAC_INT; | ||
1127 | temp64 = readq(&bar0->mac_int_mask); | ||
1128 | temp64 &= ~((u64) val64); | ||
1129 | writeq(temp64, &bar0->mac_int_mask); | ||
1130 | |||
1131 | val64 = readq(&bar0->mac_rmac_err_mask); | ||
1132 | val64 &= ~((u64) RMAC_LINK_STATE_CHANGE_INT); | ||
1133 | writeq(val64, &bar0->mac_rmac_err_mask); | ||
1134 | } else if (flag == DISABLE_INTRS) { | 1622 | } else if (flag == DISABLE_INTRS) { |
1135 | /* | 1623 | /* |
1136 | * Disable MAC Intrs in the general intr mask register | 1624 | * Disable MAC Intrs in the general intr mask register |
1137 | */ | 1625 | */ |
1138 | writeq(DISABLE_ALL_INTRS, &bar0->mac_int_mask); | 1626 | writeq(DISABLE_ALL_INTRS, &bar0->mac_int_mask); |
1139 | writeq(DISABLE_ALL_INTRS, | 1627 | writeq(DISABLE_ALL_INTRS, |
@@ -1152,14 +1640,14 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1152 | temp64 = readq(&bar0->general_int_mask); | 1640 | temp64 = readq(&bar0->general_int_mask); |
1153 | temp64 &= ~((u64) val64); | 1641 | temp64 &= ~((u64) val64); |
1154 | writeq(temp64, &bar0->general_int_mask); | 1642 | writeq(temp64, &bar0->general_int_mask); |
1155 | /* | 1643 | /* |
1156 | * All XGXS block error interrupts are disabled for now | 1644 | * All XGXS block error interrupts are disabled for now |
1157 | * TODO | 1645 | * TODO |
1158 | */ | 1646 | */ |
1159 | writeq(DISABLE_ALL_INTRS, &bar0->xgxs_int_mask); | 1647 | writeq(DISABLE_ALL_INTRS, &bar0->xgxs_int_mask); |
1160 | } else if (flag == DISABLE_INTRS) { | 1648 | } else if (flag == DISABLE_INTRS) { |
1161 | /* | 1649 | /* |
1162 | * Disable MC Intrs in the general intr mask register | 1650 | * Disable MC Intrs in the general intr mask register |
1163 | */ | 1651 | */ |
1164 | writeq(DISABLE_ALL_INTRS, &bar0->xgxs_int_mask); | 1652 | writeq(DISABLE_ALL_INTRS, &bar0->xgxs_int_mask); |
1165 | temp64 = readq(&bar0->general_int_mask); | 1653 | temp64 = readq(&bar0->general_int_mask); |
@@ -1175,11 +1663,11 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1175 | temp64 = readq(&bar0->general_int_mask); | 1663 | temp64 = readq(&bar0->general_int_mask); |
1176 | temp64 &= ~((u64) val64); | 1664 | temp64 &= ~((u64) val64); |
1177 | writeq(temp64, &bar0->general_int_mask); | 1665 | writeq(temp64, &bar0->general_int_mask); |
1178 | /* | 1666 | /* |
1179 | * All MC block error interrupts are disabled for now | 1667 | * Enable all MC Intrs. |
1180 | * TODO | ||
1181 | */ | 1668 | */ |
1182 | writeq(DISABLE_ALL_INTRS, &bar0->mc_int_mask); | 1669 | writeq(0x0, &bar0->mc_int_mask); |
1670 | writeq(0x0, &bar0->mc_err_mask); | ||
1183 | } else if (flag == DISABLE_INTRS) { | 1671 | } else if (flag == DISABLE_INTRS) { |
1184 | /* | 1672 | /* |
1185 | * Disable MC Intrs in the general intr mask register | 1673 | * Disable MC Intrs in the general intr mask register |
@@ -1199,14 +1687,14 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1199 | temp64 = readq(&bar0->general_int_mask); | 1687 | temp64 = readq(&bar0->general_int_mask); |
1200 | temp64 &= ~((u64) val64); | 1688 | temp64 &= ~((u64) val64); |
1201 | writeq(temp64, &bar0->general_int_mask); | 1689 | writeq(temp64, &bar0->general_int_mask); |
1202 | /* | 1690 | /* |
1203 | * Enable all the Tx side interrupts | 1691 | * Enable all the Tx side interrupts |
1204 | * writing 0 Enables all 64 TX interrupt levels | 1692 | * writing 0 Enables all 64 TX interrupt levels |
1205 | */ | 1693 | */ |
1206 | writeq(0x0, &bar0->tx_traffic_mask); | 1694 | writeq(0x0, &bar0->tx_traffic_mask); |
1207 | } else if (flag == DISABLE_INTRS) { | 1695 | } else if (flag == DISABLE_INTRS) { |
1208 | /* | 1696 | /* |
1209 | * Disable Tx Traffic Intrs in the general intr mask | 1697 | * Disable Tx Traffic Intrs in the general intr mask |
1210 | * register. | 1698 | * register. |
1211 | */ | 1699 | */ |
1212 | writeq(DISABLE_ALL_INTRS, &bar0->tx_traffic_mask); | 1700 | writeq(DISABLE_ALL_INTRS, &bar0->tx_traffic_mask); |
@@ -1226,8 +1714,8 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1226 | /* writing 0 Enables all 8 RX interrupt levels */ | 1714 | /* writing 0 Enables all 8 RX interrupt levels */ |
1227 | writeq(0x0, &bar0->rx_traffic_mask); | 1715 | writeq(0x0, &bar0->rx_traffic_mask); |
1228 | } else if (flag == DISABLE_INTRS) { | 1716 | } else if (flag == DISABLE_INTRS) { |
1229 | /* | 1717 | /* |
1230 | * Disable Rx Traffic Intrs in the general intr mask | 1718 | * Disable Rx Traffic Intrs in the general intr mask |
1231 | * register. | 1719 | * register. |
1232 | */ | 1720 | */ |
1233 | writeq(DISABLE_ALL_INTRS, &bar0->rx_traffic_mask); | 1721 | writeq(DISABLE_ALL_INTRS, &bar0->rx_traffic_mask); |
@@ -1238,24 +1726,66 @@ static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag) | |||
1238 | } | 1726 | } |
1239 | } | 1727 | } |
1240 | 1728 | ||
1241 | /** | 1729 | static int check_prc_pcc_state(u64 val64, int flag, int rev_id, int herc) |
1242 | * verify_xena_quiescence - Checks whether the H/W is ready | 1730 | { |
1731 | int ret = 0; | ||
1732 | |||
1733 | if (flag == FALSE) { | ||
1734 | if ((!herc && (rev_id >= 4)) || herc) { | ||
1735 | if (!(val64 & ADAPTER_STATUS_RMAC_PCC_IDLE) && | ||
1736 | ((val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) == | ||
1737 | ADAPTER_STATUS_RC_PRC_QUIESCENT)) { | ||
1738 | ret = 1; | ||
1739 | } | ||
1740 | } else { | ||
1741 | if (!(val64 & ADAPTER_STATUS_RMAC_PCC_FOUR_IDLE) && | ||
1742 | ((val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) == | ||
1743 | ADAPTER_STATUS_RC_PRC_QUIESCENT)) { | ||
1744 | ret = 1; | ||
1745 | } | ||
1746 | } | ||
1747 | } else { | ||
1748 | if ((!herc && (rev_id >= 4)) || herc) { | ||
1749 | if (((val64 & ADAPTER_STATUS_RMAC_PCC_IDLE) == | ||
1750 | ADAPTER_STATUS_RMAC_PCC_IDLE) && | ||
1751 | (!(val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) || | ||
1752 | ((val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) == | ||
1753 | ADAPTER_STATUS_RC_PRC_QUIESCENT))) { | ||
1754 | ret = 1; | ||
1755 | } | ||
1756 | } else { | ||
1757 | if (((val64 & ADAPTER_STATUS_RMAC_PCC_FOUR_IDLE) == | ||
1758 | ADAPTER_STATUS_RMAC_PCC_FOUR_IDLE) && | ||
1759 | (!(val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) || | ||
1760 | ((val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) == | ||
1761 | ADAPTER_STATUS_RC_PRC_QUIESCENT))) { | ||
1762 | ret = 1; | ||
1763 | } | ||
1764 | } | ||
1765 | } | ||
1766 | |||
1767 | return ret; | ||
1768 | } | ||
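check_prc_pcc_state() factors out the PCC/PRC idle checks that previously lived inline in verify_xena_quiescence(). A sketch of how it is driven, mirroring the caller below and assuming the usual driver context (sp, bar0):

    u64 val64 = readq(&bar0->adapter_status);
    int rev_id = get_xena_rev_id(sp->pdev);
    int herc = (sp->device_type == XFRAME_II_DEVICE);
    int quiesced;

    /* The flag argument is TRUE once the adapter enable bit has already
     * been written at least once, which changes the expected PCC state. */
    quiesced = check_prc_pcc_state(val64, sp->device_enabled_once,
                                   rev_id, herc);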
1769 | /** | ||
1770 | * verify_xena_quiescence - Checks whether the H/W is ready | ||
1243 | * @val64 : Value read from adapter status register. | 1771 | * @val64 : Value read from adapter status register. |
1244 | * @flag : indicates if the adapter enable bit was ever written once | 1772 | * @flag : indicates if the adapter enable bit was ever written once |
1245 | * before. | 1773 | * before. |
1246 | * Description: Returns whether the H/W is ready to go or not. Depending | 1774 | * Description: Returns whether the H/W is ready to go or not. Depending |
1247 | * on whether adapter enable bit was written or not the comparison | 1775 | * on whether adapter enable bit was written or not the comparison |
1248 | * differs and the calling function passes the input argument flag to | 1776 | * differs and the calling function passes the input argument flag to |
1249 | * indicate this. | 1777 | * indicate this. |
1250 | * Return: 1 if Xena is quiescent | 1778 | * Return: 1 if Xena is quiescent |
1251 | * 0 if Xena is not quiescent | 1779 | * 0 if Xena is not quiescent |
1252 | */ | 1780 | */ |
1253 | 1781 | ||
1254 | static int verify_xena_quiescence(u64 val64, int flag) | 1782 | static int verify_xena_quiescence(nic_t *sp, u64 val64, int flag) |
1255 | { | 1783 | { |
1256 | int ret = 0; | 1784 | int ret = 0, herc; |
1257 | u64 tmp64 = ~((u64) val64); | 1785 | u64 tmp64 = ~((u64) val64); |
1786 | int rev_id = get_xena_rev_id(sp->pdev); | ||
1258 | 1787 | ||
1788 | herc = (sp->device_type == XFRAME_II_DEVICE); | ||
1259 | if (! | 1789 | if (! |
1260 | (tmp64 & | 1790 | (tmp64 & |
1261 | (ADAPTER_STATUS_TDMA_READY | ADAPTER_STATUS_RDMA_READY | | 1791 | (ADAPTER_STATUS_TDMA_READY | ADAPTER_STATUS_RDMA_READY | |
@@ -1263,25 +1793,7 @@ static int verify_xena_quiescence(u64 val64, int flag) | |||
1263 | ADAPTER_STATUS_PIC_QUIESCENT | ADAPTER_STATUS_MC_DRAM_READY | | 1793 | ADAPTER_STATUS_PIC_QUIESCENT | ADAPTER_STATUS_MC_DRAM_READY | |
1264 | ADAPTER_STATUS_MC_QUEUES_READY | ADAPTER_STATUS_M_PLL_LOCK | | 1794 | ADAPTER_STATUS_MC_QUEUES_READY | ADAPTER_STATUS_M_PLL_LOCK | |
1265 | ADAPTER_STATUS_P_PLL_LOCK))) { | 1795 | ADAPTER_STATUS_P_PLL_LOCK))) { |
1266 | if (flag == FALSE) { | 1796 | ret = check_prc_pcc_state(val64, flag, rev_id, herc); |
1267 | if (!(val64 & ADAPTER_STATUS_RMAC_PCC_IDLE) && | ||
1268 | ((val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) == | ||
1269 | ADAPTER_STATUS_RC_PRC_QUIESCENT)) { | ||
1270 | |||
1271 | ret = 1; | ||
1272 | |||
1273 | } | ||
1274 | } else { | ||
1275 | if (((val64 & ADAPTER_STATUS_RMAC_PCC_IDLE) == | ||
1276 | ADAPTER_STATUS_RMAC_PCC_IDLE) && | ||
1277 | (!(val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) || | ||
1278 | ((val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) == | ||
1279 | ADAPTER_STATUS_RC_PRC_QUIESCENT))) { | ||
1280 | |||
1281 | ret = 1; | ||
1282 | |||
1283 | } | ||
1284 | } | ||
1285 | } | 1797 | } |
1286 | 1798 | ||
1287 | return ret; | 1799 | return ret; |
@@ -1290,12 +1802,12 @@ static int verify_xena_quiescence(u64 val64, int flag) | |||
1290 | /** | 1802 | /** |
1291 | * fix_mac_address - Fix for Mac addr problem on Alpha platforms | 1803 | * fix_mac_address - Fix for Mac addr problem on Alpha platforms |
1292 | * @sp: Pointer to device specific structure | 1804 | * @sp: Pointer to device specific structure |
1293 | * Description : | 1805 | * Description : |
1294 | * New procedure to clear mac address reading problems on Alpha platforms | 1806 | * New procedure to clear mac address reading problems on Alpha platforms |
1295 | * | 1807 | * |
1296 | */ | 1808 | */ |
1297 | 1809 | ||
1298 | static void fix_mac_address(nic_t * sp) | 1810 | void fix_mac_address(nic_t * sp) |
1299 | { | 1811 | { |
1300 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | 1812 | XENA_dev_config_t __iomem *bar0 = sp->bar0; |
1301 | u64 val64; | 1813 | u64 val64; |
@@ -1303,20 +1815,21 @@ static void fix_mac_address(nic_t * sp) | |||
1303 | 1815 | ||
1304 | while (fix_mac[i] != END_SIGN) { | 1816 | while (fix_mac[i] != END_SIGN) { |
1305 | writeq(fix_mac[i++], &bar0->gpio_control); | 1817 | writeq(fix_mac[i++], &bar0->gpio_control); |
1818 | udelay(10); | ||
1306 | val64 = readq(&bar0->gpio_control); | 1819 | val64 = readq(&bar0->gpio_control); |
1307 | } | 1820 | } |
1308 | } | 1821 | } |
1309 | 1822 | ||
1310 | /** | 1823 | /** |
1311 | * start_nic - Turns the device on | 1824 | * start_nic - Turns the device on |
1312 | * @nic : device private variable. | 1825 | * @nic : device private variable. |
1313 | * Description: | 1826 | * Description: |
1314 | * This function actually turns the device on. Before this function is | 1827 | * This function actually turns the device on. Before this function is |
1315 | * called, all registers are configured from their reset states | 1828 | * called, all registers are configured from their reset states |
1316 | * and shared memory is allocated but the NIC is still quiescent. On | 1829 | * and shared memory is allocated but the NIC is still quiescent. On |
1317 | * calling this function, the device interrupts are cleared and the NIC is | 1830 | * calling this function, the device interrupts are cleared and the NIC is |
1318 | * literally switched on by writing into the adapter control register. | 1831 | * literally switched on by writing into the adapter control register. |
1319 | * Return Value: | 1832 | * Return Value: |
1320 | * SUCCESS on success and -1 on failure. | 1833 | * SUCCESS on success and -1 on failure. |
1321 | */ | 1834 | */ |
1322 | 1835 | ||
@@ -1325,8 +1838,8 @@ static int start_nic(struct s2io_nic *nic) | |||
1325 | XENA_dev_config_t __iomem *bar0 = nic->bar0; | 1838 | XENA_dev_config_t __iomem *bar0 = nic->bar0; |
1326 | struct net_device *dev = nic->dev; | 1839 | struct net_device *dev = nic->dev; |
1327 | register u64 val64 = 0; | 1840 | register u64 val64 = 0; |
1328 | u16 interruptible, i; | 1841 | u16 interruptible; |
1329 | u16 subid; | 1842 | u16 subid, i; |
1330 | mac_info_t *mac_control; | 1843 | mac_info_t *mac_control; |
1331 | struct config_param *config; | 1844 | struct config_param *config; |
1332 | 1845 | ||
@@ -1335,10 +1848,12 @@ static int start_nic(struct s2io_nic *nic) | |||
1335 | 1848 | ||
1336 | /* PRC Initialization and configuration */ | 1849 | /* PRC Initialization and configuration */ |
1337 | for (i = 0; i < config->rx_ring_num; i++) { | 1850 | for (i = 0; i < config->rx_ring_num; i++) { |
1338 | writeq((u64) nic->rx_blocks[i][0].block_dma_addr, | 1851 | writeq((u64) mac_control->rings[i].rx_blocks[0].block_dma_addr, |
1339 | &bar0->prc_rxd0_n[i]); | 1852 | &bar0->prc_rxd0_n[i]); |
1340 | 1853 | ||
1341 | val64 = readq(&bar0->prc_ctrl_n[i]); | 1854 | val64 = readq(&bar0->prc_ctrl_n[i]); |
1855 | if (nic->config.bimodal) | ||
1856 | val64 |= PRC_CTRL_BIMODAL_INTERRUPT; | ||
1342 | #ifndef CONFIG_2BUFF_MODE | 1857 | #ifndef CONFIG_2BUFF_MODE |
1343 | val64 |= PRC_CTRL_RC_ENABLED; | 1858 | val64 |= PRC_CTRL_RC_ENABLED; |
1344 | #else | 1859 | #else |
@@ -1354,7 +1869,7 @@ static int start_nic(struct s2io_nic *nic) | |||
1354 | writeq(val64, &bar0->rx_pa_cfg); | 1869 | writeq(val64, &bar0->rx_pa_cfg); |
1355 | #endif | 1870 | #endif |
1356 | 1871 | ||
1357 | /* | 1872 | /* |
1358 | * Enabling MC-RLDRAM. After enabling the device, we timeout | 1873 | * Enabling MC-RLDRAM. After enabling the device, we timeout |
1359 | * for around 100ms, which is approximately the time required | 1874 | * for around 100ms, which is approximately the time required |
1360 | * for the device to be ready for operation. | 1875 | * for the device to be ready for operation. |
@@ -1364,27 +1879,27 @@ static int start_nic(struct s2io_nic *nic) | |||
1364 | SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_mrs, UF); | 1879 | SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_mrs, UF); |
1365 | val64 = readq(&bar0->mc_rldram_mrs); | 1880 | val64 = readq(&bar0->mc_rldram_mrs); |
1366 | 1881 | ||
1367 | msleep(100); /* Delay by around 100 ms. */ | 1882 | msleep(100); /* Delay by around 100 ms. */ |
1368 | 1883 | ||
1369 | /* Enabling ECC Protection. */ | 1884 | /* Enabling ECC Protection. */ |
1370 | val64 = readq(&bar0->adapter_control); | 1885 | val64 = readq(&bar0->adapter_control); |
1371 | val64 &= ~ADAPTER_ECC_EN; | 1886 | val64 &= ~ADAPTER_ECC_EN; |
1372 | writeq(val64, &bar0->adapter_control); | 1887 | writeq(val64, &bar0->adapter_control); |
1373 | 1888 | ||
1374 | /* | 1889 | /* |
1375 | * Clearing any possible Link state change interrupts that | 1890 | * Clearing any possible Link state change interrupts that |
1376 | * could have popped up just before Enabling the card. | 1891 | * could have popped up just before Enabling the card. |
1377 | */ | 1892 | */ |
1378 | val64 = readq(&bar0->mac_rmac_err_reg); | 1893 | val64 = readq(&bar0->mac_rmac_err_reg); |
1379 | if (val64) | 1894 | if (val64) |
1380 | writeq(val64, &bar0->mac_rmac_err_reg); | 1895 | writeq(val64, &bar0->mac_rmac_err_reg); |
1381 | 1896 | ||
1382 | /* | 1897 | /* |
1383 | * Verify if the device is ready to be enabled, if so enable | 1898 | * Verify if the device is ready to be enabled, if so enable |
1384 | * it. | 1899 | * it. |
1385 | */ | 1900 | */ |
1386 | val64 = readq(&bar0->adapter_status); | 1901 | val64 = readq(&bar0->adapter_status); |
1387 | if (!verify_xena_quiescence(val64, nic->device_enabled_once)) { | 1902 | if (!verify_xena_quiescence(nic, val64, nic->device_enabled_once)) { |
1388 | DBG_PRINT(ERR_DBG, "%s: device is not ready, ", dev->name); | 1903 | DBG_PRINT(ERR_DBG, "%s: device is not ready, ", dev->name); |
1389 | DBG_PRINT(ERR_DBG, "Adapter status reads: 0x%llx\n", | 1904 | DBG_PRINT(ERR_DBG, "Adapter status reads: 0x%llx\n", |
1390 | (unsigned long long) val64); | 1905 | (unsigned long long) val64); |
@@ -1392,16 +1907,18 @@ static int start_nic(struct s2io_nic *nic) | |||
1392 | } | 1907 | } |
1393 | 1908 | ||
1394 | /* Enable select interrupts */ | 1909 | /* Enable select interrupts */ |
1395 | interruptible = TX_TRAFFIC_INTR | RX_TRAFFIC_INTR | TX_MAC_INTR | | 1910 | interruptible = TX_TRAFFIC_INTR | RX_TRAFFIC_INTR; |
1396 | RX_MAC_INTR; | 1911 | interruptible |= TX_PIC_INTR | RX_PIC_INTR; |
1912 | interruptible |= TX_MAC_INTR | RX_MAC_INTR; | ||
1913 | |||
1397 | en_dis_able_nic_intrs(nic, interruptible, ENABLE_INTRS); | 1914 | en_dis_able_nic_intrs(nic, interruptible, ENABLE_INTRS); |
1398 | 1915 | ||
1399 | /* | 1916 | /* |
1400 | * With some switches, link might be already up at this point. | 1917 | * With some switches, link might be already up at this point. |
1401 | * Because of this weird behavior, when we enable laser, | 1918 | * Because of this weird behavior, when we enable laser, |
1402 | * we may not get link. We need to handle this. We cannot | 1919 | * we may not get link. We need to handle this. We cannot |
1403 | * figure out which switch is misbehaving. So we are forced to | 1920 | * figure out which switch is misbehaving. So we are forced to |
1404 | * make a global change. | 1921 | * make a global change. |
1405 | */ | 1922 | */ |
1406 | 1923 | ||
1407 | /* Enabling Laser. */ | 1924 | /* Enabling Laser. */ |
@@ -1411,44 +1928,30 @@ static int start_nic(struct s2io_nic *nic) | |||
1411 | 1928 | ||
1412 | /* SXE-002: Initialize link and activity LED */ | 1929 | /* SXE-002: Initialize link and activity LED */ |
1413 | subid = nic->pdev->subsystem_device; | 1930 | subid = nic->pdev->subsystem_device; |
1414 | if ((subid & 0xFF) >= 0x07) { | 1931 | if (((subid & 0xFF) >= 0x07) && |
1932 | (nic->device_type == XFRAME_I_DEVICE)) { | ||
1415 | val64 = readq(&bar0->gpio_control); | 1933 | val64 = readq(&bar0->gpio_control); |
1416 | val64 |= 0x0000800000000000ULL; | 1934 | val64 |= 0x0000800000000000ULL; |
1417 | writeq(val64, &bar0->gpio_control); | 1935 | writeq(val64, &bar0->gpio_control); |
1418 | val64 = 0x0411040400000000ULL; | 1936 | val64 = 0x0411040400000000ULL; |
1419 | writeq(val64, (void __iomem *) bar0 + 0x2700); | 1937 | writeq(val64, (void __iomem *) ((u8 *) bar0 + 0x2700)); |
1420 | } | 1938 | } |
1421 | 1939 | ||
1422 | /* | 1940 | /* |
1423 | * Don't see link state interrupts on certain switches, so | 1941 | * Don't see link state interrupts on certain switches, so |
1424 | * directly scheduling a link state task from here. | 1942 | * directly scheduling a link state task from here. |
1425 | */ | 1943 | */ |
1426 | schedule_work(&nic->set_link_task); | 1944 | schedule_work(&nic->set_link_task); |
1427 | 1945 | ||
1428 | /* | ||
1429 | * Here we are performing soft reset on XGXS to | ||
1430 | * force link down. Since link is already up, we will get | ||
1431 | * link state change interrupt after this reset | ||
1432 | */ | ||
1433 | SPECIAL_REG_WRITE(0x80010515001E0000ULL, &bar0->dtx_control, UF); | ||
1434 | val64 = readq(&bar0->dtx_control); | ||
1435 | udelay(50); | ||
1436 | SPECIAL_REG_WRITE(0x80010515001E00E0ULL, &bar0->dtx_control, UF); | ||
1437 | val64 = readq(&bar0->dtx_control); | ||
1438 | udelay(50); | ||
1439 | SPECIAL_REG_WRITE(0x80070515001F00E4ULL, &bar0->dtx_control, UF); | ||
1440 | val64 = readq(&bar0->dtx_control); | ||
1441 | udelay(50); | ||
1442 | |||
1443 | return SUCCESS; | 1946 | return SUCCESS; |
1444 | } | 1947 | } |
1445 | 1948 | ||
1446 | /** | 1949 | /** |
1447 | * free_tx_buffers - Free all queued Tx buffers | 1950 | * free_tx_buffers - Free all queued Tx buffers |
1448 | * @nic : device private variable. | 1951 | * @nic : device private variable. |
1449 | * Description: | 1952 | * Description: |
1450 | * Free all queued Tx buffers. | 1953 | * Free all queued Tx buffers. |
1451 | * Return Value: void | 1954 | * Return Value: void |
1452 | */ | 1955 | */ |
1453 | 1956 | ||
1454 | static void free_tx_buffers(struct s2io_nic *nic) | 1957 | static void free_tx_buffers(struct s2io_nic *nic) |
@@ -1459,39 +1962,61 @@ static void free_tx_buffers(struct s2io_nic *nic) | |||
1459 | int i, j; | 1962 | int i, j; |
1460 | mac_info_t *mac_control; | 1963 | mac_info_t *mac_control; |
1461 | struct config_param *config; | 1964 | struct config_param *config; |
1462 | int cnt = 0; | 1965 | int cnt = 0, frg_cnt; |
1463 | 1966 | ||
1464 | mac_control = &nic->mac_control; | 1967 | mac_control = &nic->mac_control; |
1465 | config = &nic->config; | 1968 | config = &nic->config; |
1466 | 1969 | ||
1467 | for (i = 0; i < config->tx_fifo_num; i++) { | 1970 | for (i = 0; i < config->tx_fifo_num; i++) { |
1468 | for (j = 0; j < config->tx_cfg[i].fifo_len - 1; j++) { | 1971 | for (j = 0; j < config->tx_cfg[i].fifo_len - 1; j++) { |
1469 | txdp = (TxD_t *) nic->list_info[i][j]. | 1972 | txdp = (TxD_t *) mac_control->fifos[i].list_info[j]. |
1470 | list_virt_addr; | 1973 | list_virt_addr; |
1471 | skb = | 1974 | skb = |
1472 | (struct sk_buff *) ((unsigned long) txdp-> | 1975 | (struct sk_buff *) ((unsigned long) txdp-> |
1473 | Host_Control); | 1976 | Host_Control); |
1474 | if (skb == NULL) { | 1977 | if (skb == NULL) { |
1475 | memset(txdp, 0, sizeof(TxD_t)); | 1978 | memset(txdp, 0, sizeof(TxD_t) * |
1979 | config->max_txds); | ||
1476 | continue; | 1980 | continue; |
1477 | } | 1981 | } |
1982 | frg_cnt = skb_shinfo(skb)->nr_frags; | ||
1983 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
1984 | txdp->Buffer_Pointer, | ||
1985 | skb->len - skb->data_len, | ||
1986 | PCI_DMA_TODEVICE); | ||
1987 | if (frg_cnt) { | ||
1988 | TxD_t *temp; | ||
1989 | temp = txdp; | ||
1990 | txdp++; | ||
1991 | for (j = 0; j < frg_cnt; j++, txdp++) { | ||
1992 | skb_frag_t *frag = | ||
1993 | &skb_shinfo(skb)->frags[j]; | ||
1994 | pci_unmap_page(nic->pdev, | ||
1995 | (dma_addr_t) | ||
1996 | txdp-> | ||
1997 | Buffer_Pointer, | ||
1998 | frag->size, | ||
1999 | PCI_DMA_TODEVICE); | ||
2000 | } | ||
2001 | txdp = temp; | ||
2002 | } | ||
1478 | dev_kfree_skb(skb); | 2003 | dev_kfree_skb(skb); |
1479 | memset(txdp, 0, sizeof(TxD_t)); | 2004 | memset(txdp, 0, sizeof(TxD_t) * config->max_txds); |
1480 | cnt++; | 2005 | cnt++; |
1481 | } | 2006 | } |
1482 | DBG_PRINT(INTR_DBG, | 2007 | DBG_PRINT(INTR_DBG, |
1483 | "%s:forcibly freeing %d skbs on FIFO%d\n", | 2008 | "%s:forcibly freeing %d skbs on FIFO%d\n", |
1484 | dev->name, cnt, i); | 2009 | dev->name, cnt, i); |
1485 | mac_control->tx_curr_get_info[i].offset = 0; | 2010 | mac_control->fifos[i].tx_curr_get_info.offset = 0; |
1486 | mac_control->tx_curr_put_info[i].offset = 0; | 2011 | mac_control->fifos[i].tx_curr_put_info.offset = 0; |
1487 | } | 2012 | } |
1488 | } | 2013 | } |
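In the rewritten loop above, a queued skb may span several TxDs: the first descriptor maps the linear part of the skb and each following descriptor maps one page fragment, so the linear part is released with pci_unmap_single() and every fragment with pci_unmap_page() before the whole chain of config->max_txds descriptors is cleared. A sketch of that per-descriptor cleanup, factored into a hypothetical helper built only from the calls in the hunk:

    static void unmap_txd_chain(nic_t *nic, TxD_t *txdp, struct sk_buff *skb,
                                int max_txds)
    {
            int j, frg_cnt = skb_shinfo(skb)->nr_frags;
            TxD_t *first = txdp;

            /* linear part of the skb lives in the first TxD */
            pci_unmap_single(nic->pdev, (dma_addr_t) txdp->Buffer_Pointer,
                             skb->len - skb->data_len, PCI_DMA_TODEVICE);

            /* one TxD per page fragment follows the first descriptor */
            for (j = 0, txdp++; j < frg_cnt; j++, txdp++) {
                    skb_frag_t *frag = &skb_shinfo(skb)->frags[j];

                    pci_unmap_page(nic->pdev,
                                   (dma_addr_t) txdp->Buffer_Pointer,
                                   frag->size, PCI_DMA_TODEVICE);
            }

            /* wipe the whole descriptor chain before it is reused */
            memset(first, 0, sizeof(TxD_t) * max_txds);
    }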
1489 | 2014 | ||
1490 | /** | 2015 | /** |
1491 | * stop_nic - To stop the nic | 2016 | * stop_nic - To stop the nic |
1492 | * @nic : device private variable. | 2017 | * @nic : device private variable. |
1493 | * Description: | 2018 | * Description: |
1494 | * This function does exactly the opposite of what the start_nic() | 2019 | * This function does exactly the opposite of what the start_nic() |
1495 | * function does. This function is called to stop the device. | 2020 | * function does. This function is called to stop the device. |
1496 | * Return Value: | 2021 | * Return Value: |
1497 | * void. | 2022 | * void. |
@@ -1509,8 +2034,9 @@ static void stop_nic(struct s2io_nic *nic) | |||
1509 | config = &nic->config; | 2034 | config = &nic->config; |
1510 | 2035 | ||
1511 | /* Disable all interrupts */ | 2036 | /* Disable all interrupts */ |
1512 | interruptible = TX_TRAFFIC_INTR | RX_TRAFFIC_INTR | TX_MAC_INTR | | 2037 | interruptible = TX_TRAFFIC_INTR | RX_TRAFFIC_INTR; |
1513 | RX_MAC_INTR; | 2038 | interruptible |= TX_PIC_INTR | RX_PIC_INTR; |
2039 | interruptible |= TX_MAC_INTR | RX_MAC_INTR; | ||
1514 | en_dis_able_nic_intrs(nic, interruptible, DISABLE_INTRS); | 2040 | en_dis_able_nic_intrs(nic, interruptible, DISABLE_INTRS); |
1515 | 2041 | ||
1516 | /* Disable PRCs */ | 2042 | /* Disable PRCs */ |
@@ -1521,11 +2047,11 @@ static void stop_nic(struct s2io_nic *nic) | |||
1521 | } | 2047 | } |
1522 | } | 2048 | } |
1523 | 2049 | ||
1524 | /** | 2050 | /** |
1525 | * fill_rx_buffers - Allocates the Rx side skbs | 2051 | * fill_rx_buffers - Allocates the Rx side skbs |
1526 | * @nic: device private variable | 2052 | * @nic: device private variable |
1527 | * @ring_no: ring number | 2053 | * @ring_no: ring number |
1528 | * Description: | 2054 | * Description: |
1529 | * The function allocates Rx side skbs and puts the physical | 2055 | * The function allocates Rx side skbs and puts the physical |
1530 | * address of these buffers into the RxD buffer pointers, so that the NIC | 2056 | * address of these buffers into the RxD buffer pointers, so that the NIC |
1531 | * can DMA the received frame into these locations. | 2057 | * can DMA the received frame into these locations. |
@@ -1533,8 +2059,8 @@ static void stop_nic(struct s2io_nic *nic) | |||
1533 | * 1. single buffer, | 2059 | * 1. single buffer, |
1534 | * 2. three buffer and | 2060 | * 2. three buffer and |
1535 | * 3. Five buffer modes. | 2061 | * 3. Five buffer modes. |
1536 | * Each mode defines how many fragments the received frame will be split | 2062 | * Each mode defines how many fragments the received frame will be split |
1537 | * up into by the NIC. The frame is split into L3 header, L4 Header, | 2063 | * up into by the NIC. The frame is split into L3 header, L4 Header, |
1538 | * L4 payload in three buffer mode; in five buffer mode, the L4 payload itself | 2064 | * L4 payload in three buffer mode; in five buffer mode, the L4 payload itself |
1539 | * is split into 3 fragments. As of now only single buffer mode is | 2065 | * is split into 3 fragments. As of now only single buffer mode is |
1540 | * supported. | 2066 | * supported. |
@@ -1542,7 +2068,7 @@ static void stop_nic(struct s2io_nic *nic) | |||
1542 | * SUCCESS on success or an appropriate -ve value on failure. | 2068 | * SUCCESS on success or an appropriate -ve value on failure. |
1543 | */ | 2069 | */ |
1544 | 2070 | ||
1545 | static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | 2071 | int fill_rx_buffers(struct s2io_nic *nic, int ring_no) |
1546 | { | 2072 | { |
1547 | struct net_device *dev = nic->dev; | 2073 | struct net_device *dev = nic->dev; |
1548 | struct sk_buff *skb; | 2074 | struct sk_buff *skb; |
@@ -1550,34 +2076,35 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1550 | int off, off1, size, block_no, block_no1; | 2076 | int off, off1, size, block_no, block_no1; |
1551 | int offset, offset1; | 2077 | int offset, offset1; |
1552 | u32 alloc_tab = 0; | 2078 | u32 alloc_tab = 0; |
1553 | u32 alloc_cnt = nic->pkt_cnt[ring_no] - | 2079 | u32 alloc_cnt; |
1554 | atomic_read(&nic->rx_bufs_left[ring_no]); | ||
1555 | mac_info_t *mac_control; | 2080 | mac_info_t *mac_control; |
1556 | struct config_param *config; | 2081 | struct config_param *config; |
1557 | #ifdef CONFIG_2BUFF_MODE | 2082 | #ifdef CONFIG_2BUFF_MODE |
1558 | RxD_t *rxdpnext; | 2083 | RxD_t *rxdpnext; |
1559 | int nextblk; | 2084 | int nextblk; |
1560 | unsigned long tmp; | 2085 | u64 tmp; |
1561 | buffAdd_t *ba; | 2086 | buffAdd_t *ba; |
1562 | dma_addr_t rxdpphys; | 2087 | dma_addr_t rxdpphys; |
1563 | #endif | 2088 | #endif |
1564 | #ifndef CONFIG_S2IO_NAPI | 2089 | #ifndef CONFIG_S2IO_NAPI |
1565 | unsigned long flags; | 2090 | unsigned long flags; |
1566 | #endif | 2091 | #endif |
2092 | RxD_t *first_rxdp = NULL; | ||
1567 | 2093 | ||
1568 | mac_control = &nic->mac_control; | 2094 | mac_control = &nic->mac_control; |
1569 | config = &nic->config; | 2095 | config = &nic->config; |
1570 | 2096 | alloc_cnt = mac_control->rings[ring_no].pkt_cnt - | |
2097 | atomic_read(&nic->rx_bufs_left[ring_no]); | ||
1571 | size = dev->mtu + HEADER_ETHERNET_II_802_3_SIZE + | 2098 | size = dev->mtu + HEADER_ETHERNET_II_802_3_SIZE + |
1572 | HEADER_802_2_SIZE + HEADER_SNAP_SIZE; | 2099 | HEADER_802_2_SIZE + HEADER_SNAP_SIZE; |
1573 | 2100 | ||
1574 | while (alloc_tab < alloc_cnt) { | 2101 | while (alloc_tab < alloc_cnt) { |
1575 | block_no = mac_control->rx_curr_put_info[ring_no]. | 2102 | block_no = mac_control->rings[ring_no].rx_curr_put_info. |
1576 | block_index; | 2103 | block_index; |
1577 | block_no1 = mac_control->rx_curr_get_info[ring_no]. | 2104 | block_no1 = mac_control->rings[ring_no].rx_curr_get_info. |
1578 | block_index; | 2105 | block_index; |
1579 | off = mac_control->rx_curr_put_info[ring_no].offset; | 2106 | off = mac_control->rings[ring_no].rx_curr_put_info.offset; |
1580 | off1 = mac_control->rx_curr_get_info[ring_no].offset; | 2107 | off1 = mac_control->rings[ring_no].rx_curr_get_info.offset; |
1581 | #ifndef CONFIG_2BUFF_MODE | 2108 | #ifndef CONFIG_2BUFF_MODE |
1582 | offset = block_no * (MAX_RXDS_PER_BLOCK + 1) + off; | 2109 | offset = block_no * (MAX_RXDS_PER_BLOCK + 1) + off; |
1583 | offset1 = block_no1 * (MAX_RXDS_PER_BLOCK + 1) + off1; | 2110 | offset1 = block_no1 * (MAX_RXDS_PER_BLOCK + 1) + off1; |
@@ -1586,7 +2113,7 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1586 | offset1 = block_no1 * (MAX_RXDS_PER_BLOCK) + off1; | 2113 | offset1 = block_no1 * (MAX_RXDS_PER_BLOCK) + off1; |
1587 | #endif | 2114 | #endif |
1588 | 2115 | ||
1589 | rxdp = nic->rx_blocks[ring_no][block_no]. | 2116 | rxdp = mac_control->rings[ring_no].rx_blocks[block_no]. |
1590 | block_virt_addr + off; | 2117 | block_virt_addr + off; |
1591 | if ((offset == offset1) && (rxdp->Host_Control)) { | 2118 | if ((offset == offset1) && (rxdp->Host_Control)) { |
1592 | DBG_PRINT(INTR_DBG, "%s: Get and Put", dev->name); | 2119 | DBG_PRINT(INTR_DBG, "%s: Get and Put", dev->name); |
@@ -1595,15 +2122,15 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1595 | } | 2122 | } |
1596 | #ifndef CONFIG_2BUFF_MODE | 2123 | #ifndef CONFIG_2BUFF_MODE |
1597 | if (rxdp->Control_1 == END_OF_BLOCK) { | 2124 | if (rxdp->Control_1 == END_OF_BLOCK) { |
1598 | mac_control->rx_curr_put_info[ring_no]. | 2125 | mac_control->rings[ring_no].rx_curr_put_info. |
1599 | block_index++; | 2126 | block_index++; |
1600 | mac_control->rx_curr_put_info[ring_no]. | 2127 | mac_control->rings[ring_no].rx_curr_put_info. |
1601 | block_index %= nic->block_count[ring_no]; | 2128 | block_index %= mac_control->rings[ring_no].block_count; |
1602 | block_no = mac_control->rx_curr_put_info | 2129 | block_no = mac_control->rings[ring_no].rx_curr_put_info. |
1603 | [ring_no].block_index; | 2130 | block_index; |
1604 | off++; | 2131 | off++; |
1605 | off %= (MAX_RXDS_PER_BLOCK + 1); | 2132 | off %= (MAX_RXDS_PER_BLOCK + 1); |
1606 | mac_control->rx_curr_put_info[ring_no].offset = | 2133 | mac_control->rings[ring_no].rx_curr_put_info.offset = |
1607 | off; | 2134 | off; |
1608 | rxdp = (RxD_t *) ((unsigned long) rxdp->Control_2); | 2135 | rxdp = (RxD_t *) ((unsigned long) rxdp->Control_2); |
1609 | DBG_PRINT(INTR_DBG, "%s: Next block at: %p\n", | 2136 | DBG_PRINT(INTR_DBG, "%s: Next block at: %p\n", |
@@ -1611,30 +2138,30 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1611 | } | 2138 | } |
1612 | #ifndef CONFIG_S2IO_NAPI | 2139 | #ifndef CONFIG_S2IO_NAPI |
1613 | spin_lock_irqsave(&nic->put_lock, flags); | 2140 | spin_lock_irqsave(&nic->put_lock, flags); |
1614 | nic->put_pos[ring_no] = | 2141 | mac_control->rings[ring_no].put_pos = |
1615 | (block_no * (MAX_RXDS_PER_BLOCK + 1)) + off; | 2142 | (block_no * (MAX_RXDS_PER_BLOCK + 1)) + off; |
1616 | spin_unlock_irqrestore(&nic->put_lock, flags); | 2143 | spin_unlock_irqrestore(&nic->put_lock, flags); |
1617 | #endif | 2144 | #endif |
1618 | #else | 2145 | #else |
1619 | if (rxdp->Host_Control == END_OF_BLOCK) { | 2146 | if (rxdp->Host_Control == END_OF_BLOCK) { |
1620 | mac_control->rx_curr_put_info[ring_no]. | 2147 | mac_control->rings[ring_no].rx_curr_put_info. |
1621 | block_index++; | 2148 | block_index++; |
1622 | mac_control->rx_curr_put_info[ring_no]. | 2149 | mac_control->rings[ring_no].rx_curr_put_info.block_index |
1623 | block_index %= nic->block_count[ring_no]; | 2150 | %= mac_control->rings[ring_no].block_count; |
1624 | block_no = mac_control->rx_curr_put_info | 2151 | block_no = mac_control->rings[ring_no].rx_curr_put_info |
1625 | [ring_no].block_index; | 2152 | .block_index; |
1626 | off = 0; | 2153 | off = 0; |
1627 | DBG_PRINT(INTR_DBG, "%s: block%d at: 0x%llx\n", | 2154 | DBG_PRINT(INTR_DBG, "%s: block%d at: 0x%llx\n", |
1628 | dev->name, block_no, | 2155 | dev->name, block_no, |
1629 | (unsigned long long) rxdp->Control_1); | 2156 | (unsigned long long) rxdp->Control_1); |
1630 | mac_control->rx_curr_put_info[ring_no].offset = | 2157 | mac_control->rings[ring_no].rx_curr_put_info.offset = |
1631 | off; | 2158 | off; |
1632 | rxdp = nic->rx_blocks[ring_no][block_no]. | 2159 | rxdp = mac_control->rings[ring_no].rx_blocks[block_no]. |
1633 | block_virt_addr; | 2160 | block_virt_addr; |
1634 | } | 2161 | } |
1635 | #ifndef CONFIG_S2IO_NAPI | 2162 | #ifndef CONFIG_S2IO_NAPI |
1636 | spin_lock_irqsave(&nic->put_lock, flags); | 2163 | spin_lock_irqsave(&nic->put_lock, flags); |
1637 | nic->put_pos[ring_no] = (block_no * | 2164 | mac_control->rings[ring_no].put_pos = (block_no * |
1638 | (MAX_RXDS_PER_BLOCK + 1)) + off; | 2165 | (MAX_RXDS_PER_BLOCK + 1)) + off; |
1639 | spin_unlock_irqrestore(&nic->put_lock, flags); | 2166 | spin_unlock_irqrestore(&nic->put_lock, flags); |
1640 | #endif | 2167 | #endif |
@@ -1646,27 +2173,27 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1646 | if (rxdp->Control_2 & BIT(0)) | 2173 | if (rxdp->Control_2 & BIT(0)) |
1647 | #endif | 2174 | #endif |
1648 | { | 2175 | { |
1649 | mac_control->rx_curr_put_info[ring_no]. | 2176 | mac_control->rings[ring_no].rx_curr_put_info. |
1650 | offset = off; | 2177 | offset = off; |
1651 | goto end; | 2178 | goto end; |
1652 | } | 2179 | } |
1653 | #ifdef CONFIG_2BUFF_MODE | 2180 | #ifdef CONFIG_2BUFF_MODE |
1654 | /* | 2181 | /* |
1655 | * RxDs Spanning cache lines will be replenished only | 2182 | * RxDs Spanning cache lines will be replenished only |
1656 | * if the succeeding RxD is also owned by Host. It | 2183 | * if the succeeding RxD is also owned by Host. It |
1657 | * will always be the ((8*i)+3) and ((8*i)+6) | 2184 | * will always be the ((8*i)+3) and ((8*i)+6) |
1658 | * descriptors for the 48 byte descriptor. The offending | 2185 | * descriptors for the 48 byte descriptor. The offending |
1659 | * descriptor is of course the 3rd descriptor. | 2186 | * descriptor is of course the 3rd descriptor. |
1660 | */ | 2187 | */ |
1661 | rxdpphys = nic->rx_blocks[ring_no][block_no]. | 2188 | rxdpphys = mac_control->rings[ring_no].rx_blocks[block_no]. |
1662 | block_dma_addr + (off * sizeof(RxD_t)); | 2189 | block_dma_addr + (off * sizeof(RxD_t)); |
1663 | if (((u64) (rxdpphys)) % 128 > 80) { | 2190 | if (((u64) (rxdpphys)) % 128 > 80) { |
1664 | rxdpnext = nic->rx_blocks[ring_no][block_no]. | 2191 | rxdpnext = mac_control->rings[ring_no].rx_blocks[block_no]. |
1665 | block_virt_addr + (off + 1); | 2192 | block_virt_addr + (off + 1); |
1666 | if (rxdpnext->Host_Control == END_OF_BLOCK) { | 2193 | if (rxdpnext->Host_Control == END_OF_BLOCK) { |
1667 | nextblk = (block_no + 1) % | 2194 | nextblk = (block_no + 1) % |
1668 | (nic->block_count[ring_no]); | 2195 | (mac_control->rings[ring_no].block_count); |
1669 | rxdpnext = nic->rx_blocks[ring_no] | 2196 | rxdpnext = mac_control->rings[ring_no].rx_blocks |
1670 | [nextblk].block_virt_addr; | 2197 | [nextblk].block_virt_addr; |
1671 | } | 2198 | } |
1672 | if (rxdpnext->Control_2 & BIT(0)) | 2199 | if (rxdpnext->Control_2 & BIT(0)) |
@@ -1682,6 +2209,10 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1682 | if (!skb) { | 2209 | if (!skb) { |
1683 | DBG_PRINT(ERR_DBG, "%s: Out of ", dev->name); | 2210 | DBG_PRINT(ERR_DBG, "%s: Out of ", dev->name); |
1684 | DBG_PRINT(ERR_DBG, "memory to allocate SKBs\n"); | 2211 | DBG_PRINT(ERR_DBG, "memory to allocate SKBs\n"); |
2212 | if (first_rxdp) { | ||
2213 | wmb(); | ||
2214 | first_rxdp->Control_1 |= RXD_OWN_XENA; | ||
2215 | } | ||
1685 | return -ENOMEM; | 2216 | return -ENOMEM; |
1686 | } | 2217 | } |
1687 | #ifndef CONFIG_2BUFF_MODE | 2218 | #ifndef CONFIG_2BUFF_MODE |
@@ -1692,12 +2223,13 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1692 | rxdp->Control_2 &= (~MASK_BUFFER0_SIZE); | 2223 | rxdp->Control_2 &= (~MASK_BUFFER0_SIZE); |
1693 | rxdp->Control_2 |= SET_BUFFER0_SIZE(size); | 2224 | rxdp->Control_2 |= SET_BUFFER0_SIZE(size); |
1694 | rxdp->Host_Control = (unsigned long) (skb); | 2225 | rxdp->Host_Control = (unsigned long) (skb); |
1695 | rxdp->Control_1 |= RXD_OWN_XENA; | 2226 | if (alloc_tab & ((1 << rxsync_frequency) - 1)) |
2227 | rxdp->Control_1 |= RXD_OWN_XENA; | ||
1696 | off++; | 2228 | off++; |
1697 | off %= (MAX_RXDS_PER_BLOCK + 1); | 2229 | off %= (MAX_RXDS_PER_BLOCK + 1); |
1698 | mac_control->rx_curr_put_info[ring_no].offset = off; | 2230 | mac_control->rings[ring_no].rx_curr_put_info.offset = off; |
1699 | #else | 2231 | #else |
1700 | ba = &nic->ba[ring_no][block_no][off]; | 2232 | ba = &mac_control->rings[ring_no].ba[block_no][off]; |
1701 | skb_reserve(skb, BUF0_LEN); | 2233 | skb_reserve(skb, BUF0_LEN); |
1702 | tmp = ((unsigned long) skb->data & ALIGN_SIZE); | 2234 | tmp = ((unsigned long) skb->data & ALIGN_SIZE); |
1703 | if (tmp) | 2235 | if (tmp) |
@@ -1719,22 +2251,41 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no) | |||
1719 | rxdp->Control_2 |= SET_BUFFER1_SIZE(1); /* dummy. */ | 2251 | rxdp->Control_2 |= SET_BUFFER1_SIZE(1); /* dummy. */ |
1720 | rxdp->Control_2 |= BIT(0); /* Set Buffer_Empty bit. */ | 2252 | rxdp->Control_2 |= BIT(0); /* Set Buffer_Empty bit. */ |
1721 | rxdp->Host_Control = (u64) ((unsigned long) (skb)); | 2253 | rxdp->Host_Control = (u64) ((unsigned long) (skb)); |
1722 | rxdp->Control_1 |= RXD_OWN_XENA; | 2254 | if (alloc_tab & ((1 << rxsync_frequency) - 1)) |
2255 | rxdp->Control_1 |= RXD_OWN_XENA; | ||
1723 | off++; | 2256 | off++; |
1724 | mac_control->rx_curr_put_info[ring_no].offset = off; | 2257 | mac_control->rings[ring_no].rx_curr_put_info.offset = off; |
1725 | #endif | 2258 | #endif |
2259 | rxdp->Control_2 |= SET_RXD_MARKER; | ||
2260 | |||
2261 | if (!(alloc_tab & ((1 << rxsync_frequency) - 1))) { | ||
2262 | if (first_rxdp) { | ||
2263 | wmb(); | ||
2264 | first_rxdp->Control_1 |= RXD_OWN_XENA; | ||
2265 | } | ||
2266 | first_rxdp = rxdp; | ||
2267 | } | ||
1726 | atomic_inc(&nic->rx_bufs_left[ring_no]); | 2268 | atomic_inc(&nic->rx_bufs_left[ring_no]); |
1727 | alloc_tab++; | 2269 | alloc_tab++; |
1728 | } | 2270 | } |
1729 | 2271 | ||
1730 | end: | 2272 | end: |
2273 | /* Transfer ownership of first descriptor to adapter just before | ||
2274 | * exiting. Before that, use memory barrier so that ownership | ||
2275 | * and other fields are seen by adapter correctly. | ||
2276 | */ | ||
2277 | if (first_rxdp) { | ||
2278 | wmb(); | ||
2279 | first_rxdp->Control_1 |= RXD_OWN_XENA; | ||
2280 | } | ||
2281 | |||
1731 | return SUCCESS; | 2282 | return SUCCESS; |
1732 | } | 2283 | } |
1733 | 2284 | ||
1734 | /** | 2285 | /** |
1735 | * free_rx_buffers - Frees all Rx buffers | 2286 | * free_rx_buffers - Frees all Rx buffers |
1736 | * @sp: device private variable. | 2287 | * @sp: device private variable. |
1737 | * Description: | 2288 | * Description: |
1738 | * This function will free all Rx buffers allocated by host. | 2289 | * This function will free all Rx buffers allocated by host. |
1739 | * Return Value: | 2290 | * Return Value: |
1740 | * NONE. | 2291 | * NONE. |
@@ -1758,7 +2309,8 @@ static void free_rx_buffers(struct s2io_nic *sp) | |||
1758 | for (i = 0; i < config->rx_ring_num; i++) { | 2309 | for (i = 0; i < config->rx_ring_num; i++) { |
1759 | for (j = 0, blk = 0; j < config->rx_cfg[i].num_rxd; j++) { | 2310 | for (j = 0, blk = 0; j < config->rx_cfg[i].num_rxd; j++) { |
1760 | off = j % (MAX_RXDS_PER_BLOCK + 1); | 2311 | off = j % (MAX_RXDS_PER_BLOCK + 1); |
1761 | rxdp = sp->rx_blocks[i][blk].block_virt_addr + off; | 2312 | rxdp = mac_control->rings[i].rx_blocks[blk]. |
2313 | block_virt_addr + off; | ||
1762 | 2314 | ||
1763 | #ifndef CONFIG_2BUFF_MODE | 2315 | #ifndef CONFIG_2BUFF_MODE |
1764 | if (rxdp->Control_1 == END_OF_BLOCK) { | 2316 | if (rxdp->Control_1 == END_OF_BLOCK) { |
@@ -1793,7 +2345,7 @@ static void free_rx_buffers(struct s2io_nic *sp) | |||
1793 | HEADER_SNAP_SIZE, | 2345 | HEADER_SNAP_SIZE, |
1794 | PCI_DMA_FROMDEVICE); | 2346 | PCI_DMA_FROMDEVICE); |
1795 | #else | 2347 | #else |
1796 | ba = &sp->ba[i][blk][off]; | 2348 | ba = &mac_control->rings[i].ba[blk][off]; |
1797 | pci_unmap_single(sp->pdev, (dma_addr_t) | 2349 | pci_unmap_single(sp->pdev, (dma_addr_t) |
1798 | rxdp->Buffer0_ptr, | 2350 | rxdp->Buffer0_ptr, |
1799 | BUF0_LEN, | 2351 | BUF0_LEN, |
@@ -1813,10 +2365,10 @@ static void free_rx_buffers(struct s2io_nic *sp) | |||
1813 | } | 2365 | } |
1814 | memset(rxdp, 0, sizeof(RxD_t)); | 2366 | memset(rxdp, 0, sizeof(RxD_t)); |
1815 | } | 2367 | } |
1816 | mac_control->rx_curr_put_info[i].block_index = 0; | 2368 | mac_control->rings[i].rx_curr_put_info.block_index = 0; |
1817 | mac_control->rx_curr_get_info[i].block_index = 0; | 2369 | mac_control->rings[i].rx_curr_get_info.block_index = 0; |
1818 | mac_control->rx_curr_put_info[i].offset = 0; | 2370 | mac_control->rings[i].rx_curr_put_info.offset = 0; |
1819 | mac_control->rx_curr_get_info[i].offset = 0; | 2371 | mac_control->rings[i].rx_curr_get_info.offset = 0; |
1820 | atomic_set(&sp->rx_bufs_left[i], 0); | 2372 | atomic_set(&sp->rx_bufs_left[i], 0); |
1821 | DBG_PRINT(INIT_DBG, "%s:Freed 0x%x Rx Buffers on ring%d\n", | 2373 | DBG_PRINT(INIT_DBG, "%s:Freed 0x%x Rx Buffers on ring%d\n", |
1822 | dev->name, buf_cnt, i); | 2374 | dev->name, buf_cnt, i); |
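free_rx_buffers() walks every descriptor of a ring with a flat index j and recovers the per-block offset with j % (MAX_RXDS_PER_BLOCK + 1); the extra "+ 1" entry in each block is the end-of-block marker whose Control_2 links to the next block. A standalone model of that mapping (the per-block constant below is assumed for illustration only):

    #include <stdio.h>

    #define MAX_RXDS_PER_BLOCK 127          /* illustrative, not the driver value */

    int main(void)
    {
            int num_rxd = 3 * (MAX_RXDS_PER_BLOCK + 1);

            for (int j = 0; j < num_rxd; j++) {
                    int blk = j / (MAX_RXDS_PER_BLOCK + 1);
                    int off = j % (MAX_RXDS_PER_BLOCK + 1);

                    if (off == 0)
                            printf("index %4d -> block %d, first RxD\n", j, blk);
                    else if (off == MAX_RXDS_PER_BLOCK)
                            printf("index %4d -> block %d, end-of-block marker\n",
                                   j, blk);
            }
            return 0;
    }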
@@ -1826,7 +2378,7 @@ static void free_rx_buffers(struct s2io_nic *sp) | |||
1826 | /** | 2378 | /** |
1827 | * s2io_poll - Rx interrupt handler for NAPI support | 2379 | * s2io_poll - Rx interrupt handler for NAPI support |
1828 | * @dev : pointer to the device structure. | 2380 | * @dev : pointer to the device structure. |
1829 | * @budget : The number of packets that were budgeted to be processed | 2381 | * @budget : The number of packets that were budgeted to be processed |
1830 | * during one pass through the "Poll" function. | 2382 | * during one pass through the "Poll" function. |
1831 | * Description: | 2383 | * Description: |
1832 | * Comes into picture only if NAPI support has been incorporated. It does | 2384 | * Comes into picture only if NAPI support has been incorporated. It does |
@@ -1836,160 +2388,36 @@ static void free_rx_buffers(struct s2io_nic *sp) | |||
1836 | * 0 on success and 1 if there are No Rx packets to be processed. | 2388 | * 0 on success and 1 if there are No Rx packets to be processed. |
1837 | */ | 2389 | */ |
1838 | 2390 | ||
1839 | #ifdef CONFIG_S2IO_NAPI | 2391 | #if defined(CONFIG_S2IO_NAPI) |
1840 | static int s2io_poll(struct net_device *dev, int *budget) | 2392 | static int s2io_poll(struct net_device *dev, int *budget) |
1841 | { | 2393 | { |
1842 | nic_t *nic = dev->priv; | 2394 | nic_t *nic = dev->priv; |
1843 | XENA_dev_config_t __iomem *bar0 = nic->bar0; | 2395 | int pkt_cnt = 0, org_pkts_to_process; |
1844 | int pkts_to_process = *budget, pkt_cnt = 0; | ||
1845 | register u64 val64 = 0; | ||
1846 | rx_curr_get_info_t get_info, put_info; | ||
1847 | int i, get_block, put_block, get_offset, put_offset, ring_bufs; | ||
1848 | #ifndef CONFIG_2BUFF_MODE | ||
1849 | u16 val16, cksum; | ||
1850 | #endif | ||
1851 | struct sk_buff *skb; | ||
1852 | RxD_t *rxdp; | ||
1853 | mac_info_t *mac_control; | 2396 | mac_info_t *mac_control; |
1854 | struct config_param *config; | 2397 | struct config_param *config; |
1855 | #ifdef CONFIG_2BUFF_MODE | 2398 | XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; |
1856 | buffAdd_t *ba; | 2399 | u64 val64; |
1857 | #endif | 2400 | int i; |
1858 | 2401 | ||
2402 | atomic_inc(&nic->isr_cnt); | ||
1859 | mac_control = &nic->mac_control; | 2403 | mac_control = &nic->mac_control; |
1860 | config = &nic->config; | 2404 | config = &nic->config; |
1861 | 2405 | ||
1862 | if (pkts_to_process > dev->quota) | 2406 | nic->pkts_to_process = *budget; |
1863 | pkts_to_process = dev->quota; | 2407 | if (nic->pkts_to_process > dev->quota) |
2408 | nic->pkts_to_process = dev->quota; | ||
2409 | org_pkts_to_process = nic->pkts_to_process; | ||
1864 | 2410 | ||
1865 | val64 = readq(&bar0->rx_traffic_int); | 2411 | val64 = readq(&bar0->rx_traffic_int); |
1866 | writeq(val64, &bar0->rx_traffic_int); | 2412 | writeq(val64, &bar0->rx_traffic_int); |
1867 | 2413 | ||
1868 | for (i = 0; i < config->rx_ring_num; i++) { | 2414 | for (i = 0; i < config->rx_ring_num; i++) { |
1869 | get_info = mac_control->rx_curr_get_info[i]; | 2415 | rx_intr_handler(&mac_control->rings[i]); |
1870 | get_block = get_info.block_index; | 2416 | pkt_cnt = org_pkts_to_process - nic->pkts_to_process; |
1871 | put_info = mac_control->rx_curr_put_info[i]; | 2417 | if (!nic->pkts_to_process) { |
1872 | put_block = put_info.block_index; | 2418 | /* Quota for the current iteration has been met */ |
1873 | ring_bufs = config->rx_cfg[i].num_rxd; | 2419 | goto no_rx; |
1874 | rxdp = nic->rx_blocks[i][get_block].block_virt_addr + | ||
1875 | get_info.offset; | ||
1876 | #ifndef CONFIG_2BUFF_MODE | ||
1877 | get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
1878 | get_info.offset; | ||
1879 | put_offset = (put_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
1880 | put_info.offset; | ||
1881 | while ((!(rxdp->Control_1 & RXD_OWN_XENA)) && | ||
1882 | (((get_offset + 1) % ring_bufs) != put_offset)) { | ||
1883 | if (--pkts_to_process < 0) { | ||
1884 | goto no_rx; | ||
1885 | } | ||
1886 | if (rxdp->Control_1 == END_OF_BLOCK) { | ||
1887 | rxdp = | ||
1888 | (RxD_t *) ((unsigned long) rxdp-> | ||
1889 | Control_2); | ||
1890 | get_info.offset++; | ||
1891 | get_info.offset %= | ||
1892 | (MAX_RXDS_PER_BLOCK + 1); | ||
1893 | get_block++; | ||
1894 | get_block %= nic->block_count[i]; | ||
1895 | mac_control->rx_curr_get_info[i]. | ||
1896 | offset = get_info.offset; | ||
1897 | mac_control->rx_curr_get_info[i]. | ||
1898 | block_index = get_block; | ||
1899 | continue; | ||
1900 | } | ||
1901 | get_offset = | ||
1902 | (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
1903 | get_info.offset; | ||
1904 | skb = | ||
1905 | (struct sk_buff *) ((unsigned long) rxdp-> | ||
1906 | Host_Control); | ||
1907 | if (skb == NULL) { | ||
1908 | DBG_PRINT(ERR_DBG, "%s: The skb is ", | ||
1909 | dev->name); | ||
1910 | DBG_PRINT(ERR_DBG, "Null in Rx Intr\n"); | ||
1911 | goto no_rx; | ||
1912 | } | ||
1913 | val64 = RXD_GET_BUFFER0_SIZE(rxdp->Control_2); | ||
1914 | val16 = (u16) (val64 >> 48); | ||
1915 | cksum = RXD_GET_L4_CKSUM(rxdp->Control_1); | ||
1916 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
1917 | rxdp->Buffer0_ptr, | ||
1918 | dev->mtu + | ||
1919 | HEADER_ETHERNET_II_802_3_SIZE + | ||
1920 | HEADER_802_2_SIZE + | ||
1921 | HEADER_SNAP_SIZE, | ||
1922 | PCI_DMA_FROMDEVICE); | ||
1923 | rx_osm_handler(nic, val16, rxdp, i); | ||
1924 | pkt_cnt++; | ||
1925 | get_info.offset++; | ||
1926 | get_info.offset %= (MAX_RXDS_PER_BLOCK + 1); | ||
1927 | rxdp = | ||
1928 | nic->rx_blocks[i][get_block].block_virt_addr + | ||
1929 | get_info.offset; | ||
1930 | mac_control->rx_curr_get_info[i].offset = | ||
1931 | get_info.offset; | ||
1932 | } | 2420 | } |
1933 | #else | ||
1934 | get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
1935 | get_info.offset; | ||
1936 | put_offset = (put_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
1937 | put_info.offset; | ||
1938 | while (((!(rxdp->Control_1 & RXD_OWN_XENA)) && | ||
1939 | !(rxdp->Control_2 & BIT(0))) && | ||
1940 | (((get_offset + 1) % ring_bufs) != put_offset)) { | ||
1941 | if (--pkts_to_process < 0) { | ||
1942 | goto no_rx; | ||
1943 | } | ||
1944 | skb = (struct sk_buff *) ((unsigned long) | ||
1945 | rxdp->Host_Control); | ||
1946 | if (skb == NULL) { | ||
1947 | DBG_PRINT(ERR_DBG, "%s: The skb is ", | ||
1948 | dev->name); | ||
1949 | DBG_PRINT(ERR_DBG, "Null in Rx Intr\n"); | ||
1950 | goto no_rx; | ||
1951 | } | ||
1952 | |||
1953 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
1954 | rxdp->Buffer0_ptr, | ||
1955 | BUF0_LEN, PCI_DMA_FROMDEVICE); | ||
1956 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
1957 | rxdp->Buffer1_ptr, | ||
1958 | BUF1_LEN, PCI_DMA_FROMDEVICE); | ||
1959 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
1960 | rxdp->Buffer2_ptr, | ||
1961 | dev->mtu + BUF0_LEN + 4, | ||
1962 | PCI_DMA_FROMDEVICE); | ||
1963 | ba = &nic->ba[i][get_block][get_info.offset]; | ||
1964 | |||
1965 | rx_osm_handler(nic, rxdp, i, ba); | ||
1966 | |||
1967 | get_info.offset++; | ||
1968 | mac_control->rx_curr_get_info[i].offset = | ||
1969 | get_info.offset; | ||
1970 | rxdp = | ||
1971 | nic->rx_blocks[i][get_block].block_virt_addr + | ||
1972 | get_info.offset; | ||
1973 | |||
1974 | if (get_info.offset && | ||
1975 | (!(get_info.offset % MAX_RXDS_PER_BLOCK))) { | ||
1976 | get_info.offset = 0; | ||
1977 | mac_control->rx_curr_get_info[i]. | ||
1978 | offset = get_info.offset; | ||
1979 | get_block++; | ||
1980 | get_block %= nic->block_count[i]; | ||
1981 | mac_control->rx_curr_get_info[i]. | ||
1982 | block_index = get_block; | ||
1983 | rxdp = | ||
1984 | nic->rx_blocks[i][get_block]. | ||
1985 | block_virt_addr; | ||
1986 | } | ||
1987 | get_offset = | ||
1988 | (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
1989 | get_info.offset; | ||
1990 | pkt_cnt++; | ||
1991 | } | ||
1992 | #endif | ||
1993 | } | 2421 | } |
1994 | if (!pkt_cnt) | 2422 | if (!pkt_cnt) |
1995 | pkt_cnt = 1; | 2423 | pkt_cnt = 1; |
@@ -2007,9 +2435,10 @@ static int s2io_poll(struct net_device *dev, int *budget) | |||
2007 | } | 2435 | } |
2008 | /* Re enable the Rx interrupts. */ | 2436 | /* Re enable the Rx interrupts. */ |
2009 | en_dis_able_nic_intrs(nic, RX_TRAFFIC_INTR, ENABLE_INTRS); | 2437 | en_dis_able_nic_intrs(nic, RX_TRAFFIC_INTR, ENABLE_INTRS); |
2438 | atomic_dec(&nic->isr_cnt); | ||
2010 | return 0; | 2439 | return 0; |
2011 | 2440 | ||
2012 | no_rx: | 2441 | no_rx: |
2013 | dev->quota -= pkt_cnt; | 2442 | dev->quota -= pkt_cnt; |
2014 | *budget -= pkt_cnt; | 2443 | *budget -= pkt_cnt; |
2015 | 2444 | ||
@@ -2020,279 +2449,204 @@ static int s2io_poll(struct net_device *dev, int *budget) | |||
2020 | break; | 2449 | break; |
2021 | } | 2450 | } |
2022 | } | 2451 | } |
2452 | atomic_dec(&nic->isr_cnt); | ||
2023 | return 1; | 2453 | return 1; |
2024 | } | 2454 | } |
2025 | #else | 2455 | #endif |
2026 | /** | 2456 | |
2457 | /** | ||
2027 | * rx_intr_handler - Rx interrupt handler | 2458 | * rx_intr_handler - Rx interrupt handler |
2028 | * @nic: device private variable. | 2459 | * @nic: device private variable. |
2029 | * Description: | 2460 | * Description: |
2030 | * If the interrupt is because of a received frame or if the | 2461 | * If the interrupt is because of a received frame or if the |
2031 | * receive ring contains fresh, as yet un-processed frames, this function is | 2462 | * receive ring contains fresh, as yet un-processed frames, this function is |
2032 | * called. It picks up from the RxD where the last Rx processing had | 2463 | * called. It picks up from the RxD where the last Rx processing had |
2033 | * stopped, sends the skb to the OSM's Rx handler and then increments | 2464 | * stopped, sends the skb to the OSM's Rx handler and then increments |
2034 | * the offset. | 2465 | * the offset. |
2035 | * Return Value: | 2466 | * Return Value: |
2036 | * NONE. | 2467 | * NONE. |
2037 | */ | 2468 | */ |
2038 | 2469 | static void rx_intr_handler(ring_info_t *ring_data) | |
2039 | static void rx_intr_handler(struct s2io_nic *nic) | ||
2040 | { | 2470 | { |
2471 | nic_t *nic = ring_data->nic; | ||
2041 | struct net_device *dev = (struct net_device *) nic->dev; | 2472 | struct net_device *dev = (struct net_device *) nic->dev; |
2042 | XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; | 2473 | int get_block, get_offset, put_block, put_offset, ring_bufs; |
2043 | rx_curr_get_info_t get_info, put_info; | 2474 | rx_curr_get_info_t get_info, put_info; |
2044 | RxD_t *rxdp; | 2475 | RxD_t *rxdp; |
2045 | struct sk_buff *skb; | 2476 | struct sk_buff *skb; |
2046 | #ifndef CONFIG_2BUFF_MODE | 2477 | #ifndef CONFIG_S2IO_NAPI |
2047 | u16 val16, cksum; | 2478 | int pkt_cnt = 0; |
2048 | #endif | ||
2049 | register u64 val64 = 0; | ||
2050 | int get_block, get_offset, put_block, put_offset, ring_bufs; | ||
2051 | int i, pkt_cnt = 0; | ||
2052 | mac_info_t *mac_control; | ||
2053 | struct config_param *config; | ||
2054 | #ifdef CONFIG_2BUFF_MODE | ||
2055 | buffAdd_t *ba; | ||
2056 | #endif | 2479 | #endif |
2480 | spin_lock(&nic->rx_lock); | ||
2481 | if (atomic_read(&nic->card_state) == CARD_DOWN) { | ||
2482 | DBG_PRINT(ERR_DBG, "%s: %s going down for reset\n", | ||
2483 | __FUNCTION__, dev->name); | ||
2484 | spin_unlock(&nic->rx_lock); | ||
2485 | } | ||
2057 | 2486 | ||
2058 | mac_control = &nic->mac_control; | 2487 | get_info = ring_data->rx_curr_get_info; |
2059 | config = &nic->config; | 2488 | get_block = get_info.block_index; |
2060 | 2489 | put_info = ring_data->rx_curr_put_info; | |
2061 | /* | 2490 | put_block = put_info.block_index; |
2062 | * rx_traffic_int reg is an R1 register, hence we read and write back | 2491 | ring_bufs = get_info.ring_len+1; |
2063 | * the samevalue in the register to clear it. | 2492 | rxdp = ring_data->rx_blocks[get_block].block_virt_addr + |
2064 | */ | ||
2065 | val64 = readq(&bar0->rx_traffic_int); | ||
2066 | writeq(val64, &bar0->rx_traffic_int); | ||
2067 | |||
2068 | for (i = 0; i < config->rx_ring_num; i++) { | ||
2069 | get_info = mac_control->rx_curr_get_info[i]; | ||
2070 | get_block = get_info.block_index; | ||
2071 | put_info = mac_control->rx_curr_put_info[i]; | ||
2072 | put_block = put_info.block_index; | ||
2073 | ring_bufs = config->rx_cfg[i].num_rxd; | ||
2074 | rxdp = nic->rx_blocks[i][get_block].block_virt_addr + | ||
2075 | get_info.offset; | ||
2076 | #ifndef CONFIG_2BUFF_MODE | ||
2077 | get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
2078 | get_info.offset; | 2493 | get_info.offset; |
2079 | spin_lock(&nic->put_lock); | 2494 | get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + |
2080 | put_offset = nic->put_pos[i]; | 2495 | get_info.offset; |
2081 | spin_unlock(&nic->put_lock); | 2496 | #ifndef CONFIG_S2IO_NAPI |
2082 | while ((!(rxdp->Control_1 & RXD_OWN_XENA)) && | 2497 | spin_lock(&nic->put_lock); |
2083 | (((get_offset + 1) % ring_bufs) != put_offset)) { | 2498 | put_offset = ring_data->put_pos; |
2084 | if (rxdp->Control_1 == END_OF_BLOCK) { | 2499 | spin_unlock(&nic->put_lock); |
2085 | rxdp = (RxD_t *) ((unsigned long) | 2500 | #else |
2086 | rxdp->Control_2); | 2501 | put_offset = (put_block * (MAX_RXDS_PER_BLOCK + 1)) + |
2087 | get_info.offset++; | 2502 | put_info.offset; |
2088 | get_info.offset %= | 2503 | #endif |
2089 | (MAX_RXDS_PER_BLOCK + 1); | 2504 | while (RXD_IS_UP2DT(rxdp) && |
2090 | get_block++; | 2505 | (((get_offset + 1) % ring_bufs) != put_offset)) { |
2091 | get_block %= nic->block_count[i]; | 2506 | skb = (struct sk_buff *) ((unsigned long)rxdp->Host_Control); |
2092 | mac_control->rx_curr_get_info[i]. | 2507 | if (skb == NULL) { |
2093 | offset = get_info.offset; | 2508 | DBG_PRINT(ERR_DBG, "%s: The skb is ", |
2094 | mac_control->rx_curr_get_info[i]. | 2509 | dev->name); |
2095 | block_index = get_block; | 2510 | DBG_PRINT(ERR_DBG, "Null in Rx Intr\n"); |
2096 | continue; | 2511 | spin_unlock(&nic->rx_lock); |
2097 | } | 2512 | return; |
2098 | get_offset = | ||
2099 | (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
2100 | get_info.offset; | ||
2101 | skb = (struct sk_buff *) ((unsigned long) | ||
2102 | rxdp->Host_Control); | ||
2103 | if (skb == NULL) { | ||
2104 | DBG_PRINT(ERR_DBG, "%s: The skb is ", | ||
2105 | dev->name); | ||
2106 | DBG_PRINT(ERR_DBG, "Null in Rx Intr\n"); | ||
2107 | return; | ||
2108 | } | ||
2109 | val64 = RXD_GET_BUFFER0_SIZE(rxdp->Control_2); | ||
2110 | val16 = (u16) (val64 >> 48); | ||
2111 | cksum = RXD_GET_L4_CKSUM(rxdp->Control_1); | ||
2112 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2113 | rxdp->Buffer0_ptr, | ||
2114 | dev->mtu + | ||
2115 | HEADER_ETHERNET_II_802_3_SIZE + | ||
2116 | HEADER_802_2_SIZE + | ||
2117 | HEADER_SNAP_SIZE, | ||
2118 | PCI_DMA_FROMDEVICE); | ||
2119 | rx_osm_handler(nic, val16, rxdp, i); | ||
2120 | get_info.offset++; | ||
2121 | get_info.offset %= (MAX_RXDS_PER_BLOCK + 1); | ||
2122 | rxdp = | ||
2123 | nic->rx_blocks[i][get_block].block_virt_addr + | ||
2124 | get_info.offset; | ||
2125 | mac_control->rx_curr_get_info[i].offset = | ||
2126 | get_info.offset; | ||
2127 | pkt_cnt++; | ||
2128 | if ((indicate_max_pkts) | ||
2129 | && (pkt_cnt > indicate_max_pkts)) | ||
2130 | break; | ||
2131 | } | 2513 | } |
2514 | #ifndef CONFIG_2BUFF_MODE | ||
2515 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2516 | rxdp->Buffer0_ptr, | ||
2517 | dev->mtu + | ||
2518 | HEADER_ETHERNET_II_802_3_SIZE + | ||
2519 | HEADER_802_2_SIZE + | ||
2520 | HEADER_SNAP_SIZE, | ||
2521 | PCI_DMA_FROMDEVICE); | ||
2132 | #else | 2522 | #else |
2133 | get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | 2523 | pci_unmap_single(nic->pdev, (dma_addr_t) |
2524 | rxdp->Buffer0_ptr, | ||
2525 | BUF0_LEN, PCI_DMA_FROMDEVICE); | ||
2526 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2527 | rxdp->Buffer1_ptr, | ||
2528 | BUF1_LEN, PCI_DMA_FROMDEVICE); | ||
2529 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2530 | rxdp->Buffer2_ptr, | ||
2531 | dev->mtu + BUF0_LEN + 4, | ||
2532 | PCI_DMA_FROMDEVICE); | ||
2533 | #endif | ||
2534 | rx_osm_handler(ring_data, rxdp); | ||
2535 | get_info.offset++; | ||
2536 | ring_data->rx_curr_get_info.offset = | ||
2134 | get_info.offset; | 2537 | get_info.offset; |
2135 | spin_lock(&nic->put_lock); | 2538 | rxdp = ring_data->rx_blocks[get_block].block_virt_addr + |
2136 | put_offset = nic->put_pos[i]; | 2539 | get_info.offset; |
2137 | spin_unlock(&nic->put_lock); | 2540 | if (get_info.offset && |
2138 | while (((!(rxdp->Control_1 & RXD_OWN_XENA)) && | 2541 | (!(get_info.offset % MAX_RXDS_PER_BLOCK))) { |
2139 | !(rxdp->Control_2 & BIT(0))) && | 2542 | get_info.offset = 0; |
2140 | (((get_offset + 1) % ring_bufs) != put_offset)) { | 2543 | ring_data->rx_curr_get_info.offset |
2141 | skb = (struct sk_buff *) ((unsigned long) | 2544 | = get_info.offset; |
2142 | rxdp->Host_Control); | 2545 | get_block++; |
2143 | if (skb == NULL) { | 2546 | get_block %= ring_data->block_count; |
2144 | DBG_PRINT(ERR_DBG, "%s: The skb is ", | 2547 | ring_data->rx_curr_get_info.block_index |
2145 | dev->name); | 2548 | = get_block; |
2146 | DBG_PRINT(ERR_DBG, "Null in Rx Intr\n"); | 2549 | rxdp = ring_data->rx_blocks[get_block].block_virt_addr; |
2147 | return; | 2550 | } |
2148 | } | ||
2149 | |||
2150 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2151 | rxdp->Buffer0_ptr, | ||
2152 | BUF0_LEN, PCI_DMA_FROMDEVICE); | ||
2153 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2154 | rxdp->Buffer1_ptr, | ||
2155 | BUF1_LEN, PCI_DMA_FROMDEVICE); | ||
2156 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2157 | rxdp->Buffer2_ptr, | ||
2158 | dev->mtu + BUF0_LEN + 4, | ||
2159 | PCI_DMA_FROMDEVICE); | ||
2160 | ba = &nic->ba[i][get_block][get_info.offset]; | ||
2161 | |||
2162 | rx_osm_handler(nic, rxdp, i, ba); | ||
2163 | |||
2164 | get_info.offset++; | ||
2165 | mac_control->rx_curr_get_info[i].offset = | ||
2166 | get_info.offset; | ||
2167 | rxdp = | ||
2168 | nic->rx_blocks[i][get_block].block_virt_addr + | ||
2169 | get_info.offset; | ||
2170 | 2551 | ||
2171 | if (get_info.offset && | 2552 | get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + |
2172 | (!(get_info.offset % MAX_RXDS_PER_BLOCK))) { | ||
2173 | get_info.offset = 0; | ||
2174 | mac_control->rx_curr_get_info[i]. | ||
2175 | offset = get_info.offset; | ||
2176 | get_block++; | ||
2177 | get_block %= nic->block_count[i]; | ||
2178 | mac_control->rx_curr_get_info[i]. | ||
2179 | block_index = get_block; | ||
2180 | rxdp = | ||
2181 | nic->rx_blocks[i][get_block]. | ||
2182 | block_virt_addr; | ||
2183 | } | ||
2184 | get_offset = | ||
2185 | (get_block * (MAX_RXDS_PER_BLOCK + 1)) + | ||
2186 | get_info.offset; | 2553 | get_info.offset; |
2187 | pkt_cnt++; | 2554 | #ifdef CONFIG_S2IO_NAPI |
2188 | if ((indicate_max_pkts) | 2555 | nic->pkts_to_process -= 1; |
2189 | && (pkt_cnt > indicate_max_pkts)) | 2556 | if (!nic->pkts_to_process) |
2190 | break; | 2557 | break; |
2191 | } | 2558 | #else |
2192 | #endif | 2559 | pkt_cnt++; |
2193 | if ((indicate_max_pkts) && (pkt_cnt > indicate_max_pkts)) | 2560 | if ((indicate_max_pkts) && (pkt_cnt > indicate_max_pkts)) |
2194 | break; | 2561 | break; |
2562 | #endif | ||
2195 | } | 2563 | } |
2564 | spin_unlock(&nic->rx_lock); | ||
2196 | } | 2565 | } |
2197 | #endif | 2566 | |
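The consumer loop above keeps going while the slot after the get pointer differs from the put pointer, i.e. it always stops one descriptor short of the producer's put position, which keeps it clear of the slot the refill path may still be working on. A standalone model of that stop condition (ring size and positions are made up):

    #include <stdio.h>

    int main(void)
    {
            const int ring_bufs = 8;
            int put_offset = 5;             /* producer's next free slot */
            int get_offset = 2;             /* consumer resumes here */
            int processed = 0;

            while (((get_offset + 1) % ring_bufs) != put_offset) {
                    printf("processing RxD at offset %d\n", get_offset);
                    get_offset = (get_offset + 1) % ring_bufs;
                    processed++;
            }
            printf("stopped at offset %d after %d descriptors\n",
                   get_offset, processed);
            return 0;
    }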
2198 | /** | 2567 | /** |
2199 | * tx_intr_handler - Transmit interrupt handler | 2568 | * tx_intr_handler - Transmit interrupt handler |
2200 | * @nic : device private variable | 2569 | * @nic : device private variable |
2201 | * Description: | 2570 | * Description: |
2202 | * If an interrupt was raised to indicate DMA complete of the | 2571 | * If an interrupt was raised to indicate DMA complete of the |
2203 | * Tx packet, this function is called. It identifies the last TxD | 2572 | * Tx packet, this function is called. It identifies the last TxD |
2204 | * whose buffer was freed and frees all skbs whose data have already | 2573 | * whose buffer was freed and frees all skbs whose data have already |
2205 | * DMA'ed into the NICs internal memory. | 2574 | * DMA'ed into the NICs internal memory. |
2206 | * Return Value: | 2575 | * Return Value: |
2207 | * NONE | 2576 | * NONE |
2208 | */ | 2577 | */ |
2209 | 2578 | ||
2210 | static void tx_intr_handler(struct s2io_nic *nic) | 2579 | static void tx_intr_handler(fifo_info_t *fifo_data) |
2211 | { | 2580 | { |
2212 | XENA_dev_config_t __iomem *bar0 = nic->bar0; | 2581 | nic_t *nic = fifo_data->nic; |
2213 | struct net_device *dev = (struct net_device *) nic->dev; | 2582 | struct net_device *dev = (struct net_device *) nic->dev; |
2214 | tx_curr_get_info_t get_info, put_info; | 2583 | tx_curr_get_info_t get_info, put_info; |
2215 | struct sk_buff *skb; | 2584 | struct sk_buff *skb; |
2216 | TxD_t *txdlp; | 2585 | TxD_t *txdlp; |
2217 | register u64 val64 = 0; | ||
2218 | int i; | ||
2219 | u16 j, frg_cnt; | 2586 | u16 j, frg_cnt; |
2220 | mac_info_t *mac_control; | ||
2221 | struct config_param *config; | ||
2222 | 2587 | ||
2223 | mac_control = &nic->mac_control; | 2588 | get_info = fifo_data->tx_curr_get_info; |
2224 | config = &nic->config; | 2589 | put_info = fifo_data->tx_curr_put_info; |
2225 | 2590 | txdlp = (TxD_t *) fifo_data->list_info[get_info.offset]. | |
2226 | /* | 2591 | list_virt_addr; |
2227 | * tx_traffic_int reg is an R1 register, hence we read and write | 2592 | while ((!(txdlp->Control_1 & TXD_LIST_OWN_XENA)) && |
2228 | * back the samevalue in the register to clear it. | 2593 | (get_info.offset != put_info.offset) && |
2229 | */ | 2594 | (txdlp->Host_Control)) { |
2230 | val64 = readq(&bar0->tx_traffic_int); | 2595 | /* Check for TxD errors */ |
2231 | writeq(val64, &bar0->tx_traffic_int); | 2596 | if (txdlp->Control_1 & TXD_T_CODE) { |
2597 | unsigned long long err; | ||
2598 | err = txdlp->Control_1 & TXD_T_CODE; | ||
2599 | DBG_PRINT(ERR_DBG, "***TxD error %llx\n", | ||
2600 | err); | ||
2601 | } | ||
2232 | 2602 | ||
2233 | for (i = 0; i < config->tx_fifo_num; i++) { | 2603 | skb = (struct sk_buff *) ((unsigned long) |
2234 | get_info = mac_control->tx_curr_get_info[i]; | 2604 | txdlp->Host_Control); |
2235 | put_info = mac_control->tx_curr_put_info[i]; | 2605 | if (skb == NULL) { |
2236 | txdlp = (TxD_t *) nic->list_info[i][get_info.offset]. | 2606 | DBG_PRINT(ERR_DBG, "%s: Null skb ", |
2237 | list_virt_addr; | 2607 | __FUNCTION__); |
2238 | while ((!(txdlp->Control_1 & TXD_LIST_OWN_XENA)) && | 2608 | DBG_PRINT(ERR_DBG, "in Tx Free Intr\n"); |
2239 | (get_info.offset != put_info.offset) && | 2609 | return; |
2240 | (txdlp->Host_Control)) { | 2610 | } |
2241 | /* Check for TxD errors */ | ||
2242 | if (txdlp->Control_1 & TXD_T_CODE) { | ||
2243 | unsigned long long err; | ||
2244 | err = txdlp->Control_1 & TXD_T_CODE; | ||
2245 | DBG_PRINT(ERR_DBG, "***TxD error %llx\n", | ||
2246 | err); | ||
2247 | } | ||
2248 | 2611 | ||
2249 | skb = (struct sk_buff *) ((unsigned long) | 2612 | frg_cnt = skb_shinfo(skb)->nr_frags; |
2250 | txdlp->Host_Control); | 2613 | nic->tx_pkt_count++; |
2251 | if (skb == NULL) { | 2614 | |
2252 | DBG_PRINT(ERR_DBG, "%s: Null skb ", | 2615 | pci_unmap_single(nic->pdev, (dma_addr_t) |
2253 | dev->name); | 2616 | txdlp->Buffer_Pointer, |
2254 | DBG_PRINT(ERR_DBG, "in Tx Free Intr\n"); | 2617 | skb->len - skb->data_len, |
2255 | return; | 2618 | PCI_DMA_TODEVICE); |
2619 | if (frg_cnt) { | ||
2620 | TxD_t *temp; | ||
2621 | temp = txdlp; | ||
2622 | txdlp++; | ||
2623 | for (j = 0; j < frg_cnt; j++, txdlp++) { | ||
2624 | skb_frag_t *frag = | ||
2625 | &skb_shinfo(skb)->frags[j]; | ||
2626 | if (!txdlp->Buffer_Pointer) | ||
2627 | break; | ||
2628 | pci_unmap_page(nic->pdev, | ||
2629 | (dma_addr_t) | ||
2630 | txdlp-> | ||
2631 | Buffer_Pointer, | ||
2632 | frag->size, | ||
2633 | PCI_DMA_TODEVICE); | ||
2256 | } | 2634 | } |
2257 | nic->tx_pkt_count++; | 2635 | txdlp = temp; |
2258 | |||
2259 | frg_cnt = skb_shinfo(skb)->nr_frags; | ||
2260 | |||
2261 | /* For unfragmented skb */ | ||
2262 | pci_unmap_single(nic->pdev, (dma_addr_t) | ||
2263 | txdlp->Buffer_Pointer, | ||
2264 | skb->len - skb->data_len, | ||
2265 | PCI_DMA_TODEVICE); | ||
2266 | if (frg_cnt) { | ||
2267 | TxD_t *temp = txdlp; | ||
2268 | txdlp++; | ||
2269 | for (j = 0; j < frg_cnt; j++, txdlp++) { | ||
2270 | skb_frag_t *frag = | ||
2271 | &skb_shinfo(skb)->frags[j]; | ||
2272 | pci_unmap_page(nic->pdev, | ||
2273 | (dma_addr_t) | ||
2274 | txdlp-> | ||
2275 | Buffer_Pointer, | ||
2276 | frag->size, | ||
2277 | PCI_DMA_TODEVICE); | ||
2278 | } | ||
2279 | txdlp = temp; | ||
2280 | } | ||
2281 | memset(txdlp, 0, | ||
2282 | (sizeof(TxD_t) * config->max_txds)); | ||
2283 | |||
2284 | /* Updating the statistics block */ | ||
2285 | nic->stats.tx_packets++; | ||
2286 | nic->stats.tx_bytes += skb->len; | ||
2287 | dev_kfree_skb_irq(skb); | ||
2288 | |||
2289 | get_info.offset++; | ||
2290 | get_info.offset %= get_info.fifo_len + 1; | ||
2291 | txdlp = (TxD_t *) nic->list_info[i] | ||
2292 | [get_info.offset].list_virt_addr; | ||
2293 | mac_control->tx_curr_get_info[i].offset = | ||
2294 | get_info.offset; | ||
2295 | } | 2636 | } |
2637 | memset(txdlp, 0, | ||
2638 | (sizeof(TxD_t) * fifo_data->max_txds)); | ||
2639 | |||
2640 | /* Updating the statistics block */ | ||
2641 | nic->stats.tx_bytes += skb->len; | ||
2642 | dev_kfree_skb_irq(skb); | ||
2643 | |||
2644 | get_info.offset++; | ||
2645 | get_info.offset %= get_info.fifo_len + 1; | ||
2646 | txdlp = (TxD_t *) fifo_data->list_info | ||
2647 | [get_info.offset].list_virt_addr; | ||
2648 | fifo_data->tx_curr_get_info.offset = | ||
2649 | get_info.offset; | ||
2296 | } | 2650 | } |
2297 | 2651 | ||
2298 | spin_lock(&nic->tx_lock); | 2652 | spin_lock(&nic->tx_lock); |
@@ -2301,13 +2655,13 @@ static void tx_intr_handler(struct s2io_nic *nic) | |||
2301 | spin_unlock(&nic->tx_lock); | 2655 | spin_unlock(&nic->tx_lock); |
2302 | } | 2656 | } |
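One detail worth noting in the completion path above: tx_intr_handler() runs from the interrupt path, so finished skbs are released with dev_kfree_skb_irq(), which defers the actual free to softirq context, whereas free_tx_buffers() runs in process context and can call dev_kfree_skb() directly. A minimal illustration of that split (the helper and its flag are hypothetical):

    static void release_tx_skb(struct sk_buff *skb, int in_irq_path)
    {
            if (in_irq_path)
                    dev_kfree_skb_irq(skb); /* interrupt context: defer the free */
            else
                    dev_kfree_skb(skb);     /* process context: free right away */
    }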
2303 | 2657 | ||
2304 | /** | 2658 | /** |
2305 | * alarm_intr_handler - Alarm Interrupt handler | 2659 | * alarm_intr_handler - Alarm Interrupt handler |
2306 | * @nic: device private variable | 2660 | * @nic: device private variable |
2307 | * Description: If the interrupt was neither because of an Rx packet nor a Tx | 2661 | * Description: If the interrupt was neither because of an Rx packet nor a Tx |
2308 | * complete, this function is called. If the interrupt was to indicate | 2662 | * complete, this function is called. If the interrupt was to indicate |
2309 | * a loss of link, the OSM link status handler is invoked; for any other | 2663 | * a loss of link, the OSM link status handler is invoked; for any other |
2310 | * alarm interrupt, the block that raised the interrupt is displayed | 2664 | * alarm interrupt, the block that raised the interrupt is displayed |
2311 | * and a H/W reset is issued. | 2665 | * and a H/W reset is issued. |
2312 | * Return Value: | 2666 | * Return Value: |
2313 | * NONE | 2667 | * NONE |
@@ -2320,10 +2674,32 @@ static void alarm_intr_handler(struct s2io_nic *nic) | |||
2320 | register u64 val64 = 0, err_reg = 0; | 2674 | register u64 val64 = 0, err_reg = 0; |
2321 | 2675 | ||
2322 | /* Handling link status change error Intr */ | 2676 | /* Handling link status change error Intr */ |
2323 | err_reg = readq(&bar0->mac_rmac_err_reg); | 2677 | if (s2io_link_fault_indication(nic) == MAC_RMAC_ERR_TIMER) { |
2324 | writeq(err_reg, &bar0->mac_rmac_err_reg); | 2678 | err_reg = readq(&bar0->mac_rmac_err_reg); |
2325 | if (err_reg & RMAC_LINK_STATE_CHANGE_INT) { | 2679 | writeq(err_reg, &bar0->mac_rmac_err_reg); |
2326 | schedule_work(&nic->set_link_task); | 2680 | if (err_reg & RMAC_LINK_STATE_CHANGE_INT) { |
2681 | schedule_work(&nic->set_link_task); | ||
2682 | } | ||
2683 | } | ||
2684 | |||
2685 | /* Handling Ecc errors */ | ||
2686 | val64 = readq(&bar0->mc_err_reg); | ||
2687 | writeq(val64, &bar0->mc_err_reg); | ||
2688 | if (val64 & (MC_ERR_REG_ECC_ALL_SNG | MC_ERR_REG_ECC_ALL_DBL)) { | ||
2689 | if (val64 & MC_ERR_REG_ECC_ALL_DBL) { | ||
2690 | nic->mac_control.stats_info->sw_stat. | ||
2691 | double_ecc_errs++; | ||
2692 | DBG_PRINT(ERR_DBG, "%s: Device indicates ", | ||
2693 | dev->name); | ||
2694 | DBG_PRINT(ERR_DBG, "double ECC error!!\n"); | ||
2695 | if (nic->device_type != XFRAME_II_DEVICE) { | ||
2696 | netif_stop_queue(dev); | ||
2697 | schedule_work(&nic->rst_timer_task); | ||
2698 | } | ||
2699 | } else { | ||
2700 | nic->mac_control.stats_info->sw_stat. | ||
2701 | single_ecc_errs++; | ||
2702 | } | ||
2327 | } | 2703 | } |
2328 | 2704 | ||
2329 | /* In case of a serious error, the device will be Reset. */ | 2705 | /* In case of a serious error, the device will be Reset. */ |
@@ -2338,7 +2714,7 @@ static void alarm_intr_handler(struct s2io_nic *nic) | |||
2338 | /* | 2714 | /* |
2339 | * Also as mentioned in the latest Errata sheets if the PCC_FB_ECC | 2715 | * Also as mentioned in the latest Errata sheets if the PCC_FB_ECC |
2340 | * Error occurs, the adapter will be recycled by disabling the | 2716 | * Error occurs, the adapter will be recycled by disabling the |
2341 | * adapter enable bit and enabling it again after the device | 2717 | * adapter enable bit and enabling it again after the device |
2342 | * becomes Quiescent. | 2718 | * becomes Quiescent. |
2343 | */ | 2719 | */ |
2344 | val64 = readq(&bar0->pcc_err_reg); | 2720 | val64 = readq(&bar0->pcc_err_reg); |
@@ -2354,18 +2730,18 @@ static void alarm_intr_handler(struct s2io_nic *nic) | |||
2354 | /* Other type of interrupts are not being handled now, TODO */ | 2730 | /* Other type of interrupts are not being handled now, TODO */ |
2355 | } | 2731 | } |
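The ECC handling added to alarm_intr_handler() boils down to a simple policy: single-bit errors are only counted, while double-bit errors are counted and, on anything other than an XFRAME II device, treated as fatal by stopping the queue and scheduling the reset task. A condensed sketch of that policy as a hypothetical helper, using the same masks and fields as the hunk:

    static void handle_mc_ecc(nic_t *nic, u64 mc_err)
    {
            struct net_device *dev = nic->dev;

            if (!(mc_err & (MC_ERR_REG_ECC_ALL_SNG | MC_ERR_REG_ECC_ALL_DBL)))
                    return;

            if (mc_err & MC_ERR_REG_ECC_ALL_DBL) {
                    nic->mac_control.stats_info->sw_stat.double_ecc_errs++;
                    if (nic->device_type != XFRAME_II_DEVICE) {
                            /* unrecoverable on this hardware: recycle the card */
                            netif_stop_queue(dev);
                            schedule_work(&nic->rst_timer_task);
                    }
            } else {
                    nic->mac_control.stats_info->sw_stat.single_ecc_errs++;
            }
    }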
2356 | 2732 | ||
2357 | /** | 2733 | /** |
2358 | * wait_for_cmd_complete - waits for a command to complete. | 2734 | * wait_for_cmd_complete - waits for a command to complete. |
2359 | * @sp : private member of the device structure, which is a pointer to the | 2735 | * @sp : private member of the device structure, which is a pointer to the |
2360 | * s2io_nic structure. | 2736 | * s2io_nic structure. |
2361 | * Description: Function that waits for a command written into the RMAC | 2737 | * Description: Function that waits for a command written into the RMAC |
2362 | * ADDR DATA registers to complete and returns either success or | 2738 | * ADDR DATA registers to complete and returns either success or |
2363 | * error depending on whether the command completed or not. | 2739 | * error depending on whether the command completed or not. |
2364 | * Return value: | 2740 | * Return value: |
2365 | * SUCCESS on success and FAILURE on failure. | 2741 | * SUCCESS on success and FAILURE on failure. |
2366 | */ | 2742 | */ |
2367 | 2743 | ||
2368 | static int wait_for_cmd_complete(nic_t * sp) | 2744 | int wait_for_cmd_complete(nic_t * sp) |
2369 | { | 2745 | { |
2370 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | 2746 | XENA_dev_config_t __iomem *bar0 = sp->bar0; |
2371 | int ret = FAILURE, cnt = 0; | 2747 | int ret = FAILURE, cnt = 0; |
@@ -2385,29 +2761,32 @@ static int wait_for_cmd_complete(nic_t * sp) | |||
2385 | return ret; | 2761 | return ret; |
2386 | } | 2762 | } |
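wait_for_cmd_complete() is an instance of the usual bounded-poll pattern: re-test a completion bit a fixed number of times, sleep between attempts, and fall back to FAILURE if it never sets. A standalone illustration of the shape of that loop; the predicate, retry count and delay below are stand-ins, not the driver's values:

    #include <stdio.h>
    #include <unistd.h>

    static int command_done(int attempt)
    {
            return attempt >= 3;    /* pretend the command completes on try 3 */
    }

    int main(void)
    {
            int ret = -1;

            for (int cnt = 0; cnt < 10; cnt++) {
                    if (command_done(cnt)) {
                            ret = 0;
                            break;
                    }
                    usleep(50 * 1000);      /* stand-in for an msleep() in the driver */
            }
            printf(ret == 0 ? "command completed\n" : "command timed out\n");
            return ret ? 1 : 0;
    }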
2387 | 2763 | ||
2388 | /** | 2764 | /** |
2389 | * s2io_reset - Resets the card. | 2765 | * s2io_reset - Resets the card. |
2390 | * @sp : private member of the device structure. | 2766 | * @sp : private member of the device structure. |
2391 | * Description: Function to reset the card. This function then also | 2767 | * Description: Function to reset the card. This function then also |
2392 | * restores the previously saved PCI configuration space registers as | 2768 | * restores the previously saved PCI configuration space registers as |
2393 | * the card reset also resets the configuration space. | 2769 | * the card reset also resets the configuration space. |
2394 | * Return value: | 2770 | * Return value: |
2395 | * void. | 2771 | * void. |
2396 | */ | 2772 | */ |
2397 | 2773 | ||
2398 | static void s2io_reset(nic_t * sp) | 2774 | void s2io_reset(nic_t * sp) |
2399 | { | 2775 | { |
2400 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | 2776 | XENA_dev_config_t __iomem *bar0 = sp->bar0; |
2401 | u64 val64; | 2777 | u64 val64; |
2402 | u16 subid; | 2778 | u16 subid, pci_cmd; |
2779 | |||
2780 | /* Back up the PCI-X CMD reg, dont want to lose MMRBC, OST settings */ | ||
2781 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, &(pci_cmd)); | ||
2403 | 2782 | ||
2404 | val64 = SW_RESET_ALL; | 2783 | val64 = SW_RESET_ALL; |
2405 | writeq(val64, &bar0->sw_reset); | 2784 | writeq(val64, &bar0->sw_reset); |
2406 | 2785 | ||
2407 | /* | 2786 | /* |
2408 | * At this stage, if the PCI write is indeed completed, the | 2787 | * At this stage, if the PCI write is indeed completed, the |
2409 | * card is reset and so is the PCI Config space of the device. | 2788 | * card is reset and so is the PCI Config space of the device. |
2410 | * So a read cannot be issued at this stage on any of the | 2789 | * So a read cannot be issued at this stage on any of the |
2411 | * registers to ensure the write into "sw_reset" register | 2790 | * registers to ensure the write into "sw_reset" register |
2412 | * has gone through. | 2791 | * has gone through. |
2413 | * Question: Is there any system call that will explicitly force | 2792 | * Question: Is there any system call that will explicitly force |
@@ -2418,42 +2797,72 @@ static void s2io_reset(nic_t * sp) | |||
2418 | */ | 2797 | */ |
2419 | msleep(250); | 2798 | msleep(250); |
2420 | 2799 | ||
2421 | /* Restore the PCI state saved during initializarion. */ | 2800 | /* Restore the PCI state saved during initialization. */ |
2422 | pci_restore_state(sp->pdev); | 2801 | pci_restore_state(sp->pdev); |
2802 | pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | ||
2803 | pci_cmd); | ||
2423 | s2io_init_pci(sp); | 2804 | s2io_init_pci(sp); |
2424 | 2805 | ||
2425 | msleep(250); | 2806 | msleep(250); |
2426 | 2807 | ||
2808 | /* Set swapper to enable I/O register access */ | ||
2809 | s2io_set_swapper(sp); | ||
2810 | |||
2811 | /* Clear certain PCI/PCI-X fields after reset */ | ||
2812 | if (sp->device_type == XFRAME_II_DEVICE) { | ||
2813 | /* Clear parity err detect bit */ | ||
2814 | pci_write_config_word(sp->pdev, PCI_STATUS, 0x8000); | ||
2815 | |||
2816 | /* Clearing PCIX Ecc status register */ | ||
2817 | pci_write_config_dword(sp->pdev, 0x68, 0x7C); | ||
2818 | |||
2819 | /* Clearing PCI_STATUS error reflected here */ | ||
2820 | writeq(BIT(62), &bar0->txpic_int_reg); | ||
2821 | } | ||
2822 | |||
2823 | /* Reset device statistics maintained by OS */ | ||
2824 | memset(&sp->stats, 0, sizeof (struct net_device_stats)); | ||
2825 | |||
2427 | /* SXE-002: Configure link and activity LED to turn it off */ | 2826 | /* SXE-002: Configure link and activity LED to turn it off */ |
2428 | subid = sp->pdev->subsystem_device; | 2827 | subid = sp->pdev->subsystem_device; |
2429 | if ((subid & 0xFF) >= 0x07) { | 2828 | if (((subid & 0xFF) >= 0x07) && |
2829 | (sp->device_type == XFRAME_I_DEVICE)) { | ||
2430 | val64 = readq(&bar0->gpio_control); | 2830 | val64 = readq(&bar0->gpio_control); |
2431 | val64 |= 0x0000800000000000ULL; | 2831 | val64 |= 0x0000800000000000ULL; |
2432 | writeq(val64, &bar0->gpio_control); | 2832 | writeq(val64, &bar0->gpio_control); |
2433 | val64 = 0x0411040400000000ULL; | 2833 | val64 = 0x0411040400000000ULL; |
2434 | writeq(val64, (void __iomem *) bar0 + 0x2700); | 2834 | writeq(val64, (void __iomem *) ((u8 *) bar0 + 0x2700)); |
2835 | } | ||
2836 | |||
2837 | /* | ||
2838 | * Clear spurious ECC interrupts that would have occurred on | ||
2839 | * XFRAME II cards after reset. | ||
2840 | */ | ||
2841 | if (sp->device_type == XFRAME_II_DEVICE) { | ||
2842 | val64 = readq(&bar0->pcc_err_reg); | ||
2843 | writeq(val64, &bar0->pcc_err_reg); | ||
2435 | } | 2844 | } |
2436 | 2845 | ||
2437 | sp->device_enabled_once = FALSE; | 2846 | sp->device_enabled_once = FALSE; |
2438 | } | 2847 | } |
2439 | 2848 | ||
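The reset path now brackets the software reset with a save and restore of the PCI-X command word, since the MMRBC and outstanding-split settings live there and are lost when the chip resets. A sketch of just that bracket, using the calls and the driver's PCIX_COMMAND_REGISTER offset from the hunk (error handling omitted):

    static void reset_with_pcix_save(nic_t *sp)
    {
            XENA_dev_config_t __iomem *bar0 = sp->bar0;
            u16 pci_cmd;

            /* keep MMRBC / outstanding split transaction settings */
            pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, &pci_cmd);

            writeq(SW_RESET_ALL, &bar0->sw_reset);
            msleep(250);                    /* give the reset time to complete */

            pci_restore_state(sp->pdev);
            pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, pci_cmd);
    }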
2440 | /** | 2849 | /** |
2441 | * s2io_set_swapper - to set the swapper control on the card | 2850 | * s2io_set_swapper - to set the swapper control on the card |
2442 | * @sp : private member of the device structure, | 2851 | * @sp : private member of the device structure, |
2443 | * pointer to the s2io_nic structure. | 2852 | * pointer to the s2io_nic structure. |
2444 | * Description: Function to set the swapper control on the card | 2853 | * Description: Function to set the swapper control on the card |
2445 | * correctly depending on the 'endianness' of the system. | 2854 | * correctly depending on the 'endianness' of the system. |
2446 | * Return value: | 2855 | * Return value: |
2447 | * SUCCESS on success and FAILURE on failure. | 2856 | * SUCCESS on success and FAILURE on failure. |
2448 | */ | 2857 | */ |
2449 | 2858 | ||
2450 | static int s2io_set_swapper(nic_t * sp) | 2859 | int s2io_set_swapper(nic_t * sp) |
2451 | { | 2860 | { |
2452 | struct net_device *dev = sp->dev; | 2861 | struct net_device *dev = sp->dev; |
2453 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | 2862 | XENA_dev_config_t __iomem *bar0 = sp->bar0; |
2454 | u64 val64, valt, valr; | 2863 | u64 val64, valt, valr; |
2455 | 2864 | ||
2456 | /* | 2865 | /* |
2457 | * Set proper endian settings and verify the same by reading | 2866 | * Set proper endian settings and verify the same by reading |
2458 | * the PIF Feed-back register. | 2867 | * the PIF Feed-back register. |
2459 | */ | 2868 | */ |
@@ -2505,8 +2914,9 @@ static int s2io_set_swapper(nic_t * sp) | |||
2505 | i++; | 2914 | i++; |
2506 | } | 2915 | } |
2507 | if(i == 4) { | 2916 | if(i == 4) { |
2917 | unsigned long long x = val64; | ||
2508 | DBG_PRINT(ERR_DBG, "Write failed, Xmsi_addr "); | 2918 | DBG_PRINT(ERR_DBG, "Write failed, Xmsi_addr "); |
2509 | DBG_PRINT(ERR_DBG, "reads:0x%llx\n",val64); | 2919 | DBG_PRINT(ERR_DBG, "reads:0x%llx\n", x); |
2510 | return FAILURE; | 2920 | return FAILURE; |
2511 | } | 2921 | } |
2512 | } | 2922 | } |
@@ -2514,8 +2924,8 @@ static int s2io_set_swapper(nic_t * sp) | |||
2514 | val64 &= 0xFFFF000000000000ULL; | 2924 | val64 &= 0xFFFF000000000000ULL; |
2515 | 2925 | ||
2516 | #ifdef __BIG_ENDIAN | 2926 | #ifdef __BIG_ENDIAN |
2517 | /* | 2927 | /* |
2518 | * The device by default is set to a big endian format, so a | 2928 | * The device by default is set to a big endian format, so a |
2519 | * big endian driver need not set anything. | 2929 | * big endian driver need not set anything. |
2520 | */ | 2930 | */ |
2521 | val64 |= (SWAPPER_CTRL_TXP_FE | | 2931 | val64 |= (SWAPPER_CTRL_TXP_FE | |
@@ -2531,9 +2941,9 @@ static int s2io_set_swapper(nic_t * sp) | |||
2531 | SWAPPER_CTRL_STATS_FE | SWAPPER_CTRL_STATS_SE); | 2941 | SWAPPER_CTRL_STATS_FE | SWAPPER_CTRL_STATS_SE); |
2532 | writeq(val64, &bar0->swapper_ctrl); | 2942 | writeq(val64, &bar0->swapper_ctrl); |
2533 | #else | 2943 | #else |
2534 | /* | 2944 | /* |
2535 | * Initially we enable all bits to make it accessible by the | 2945 | * Initially we enable all bits to make it accessible by the |
2536 | * driver, then we selectively enable only those bits that | 2946 | * driver, then we selectively enable only those bits that |
2537 | * we want to set. | 2947 | * we want to set. |
2538 | */ | 2948 | */ |
2539 | val64 |= (SWAPPER_CTRL_TXP_FE | | 2949 | val64 |= (SWAPPER_CTRL_TXP_FE | |
@@ -2555,8 +2965,8 @@ static int s2io_set_swapper(nic_t * sp) | |||
2555 | #endif | 2965 | #endif |
2556 | val64 = readq(&bar0->swapper_ctrl); | 2966 | val64 = readq(&bar0->swapper_ctrl); |
2557 | 2967 | ||
2558 | /* | 2968 | /* |
2559 | * Verifying if endian settings are accurate by reading a | 2969 | * Verifying if endian settings are accurate by reading a |
2560 | * feedback register. | 2970 | * feedback register. |
2561 | */ | 2971 | */ |
2562 | val64 = readq(&bar0->pif_rd_swapper_fb); | 2972 | val64 = readq(&bar0->pif_rd_swapper_fb); |
@@ -2576,55 +2986,63 @@ static int s2io_set_swapper(nic_t * sp) | |||
2576 | * Functions defined below concern the OS part of the driver * | 2986 | * Functions defined below concern the OS part of the driver * |
2577 | * ********************************************************* */ | 2987 | * ********************************************************* */ |
2578 | 2988 | ||
2579 | /** | 2989 | /** |
2580 | * s2io_open - open entry point of the driver | 2990 | * s2io_open - open entry point of the driver |
2581 | * @dev : pointer to the device structure. | 2991 | * @dev : pointer to the device structure. |
2582 | * Description: | 2992 | * Description: |
2583 | * This function is the open entry point of the driver. It mainly calls a | 2993 | * This function is the open entry point of the driver. It mainly calls a |
2584 | * function to allocate Rx buffers and inserts them into the buffer | 2994 | * function to allocate Rx buffers and inserts them into the buffer |
2585 | * descriptors and then enables the Rx part of the NIC. | 2995 | * descriptors and then enables the Rx part of the NIC. |
2586 | * Return value: | 2996 | * Return value: |
2587 | * 0 on success and an appropriate (-)ve integer as defined in errno.h | 2997 | * 0 on success and an appropriate (-)ve integer as defined in errno.h |
2588 | * file on failure. | 2998 | * file on failure. |
2589 | */ | 2999 | */ |
2590 | 3000 | ||
2591 | static int s2io_open(struct net_device *dev) | 3001 | int s2io_open(struct net_device *dev) |
2592 | { | 3002 | { |
2593 | nic_t *sp = dev->priv; | 3003 | nic_t *sp = dev->priv; |
2594 | int err = 0; | 3004 | int err = 0; |
2595 | 3005 | ||
2596 | /* | 3006 | /* |
2597 | * Make sure you have link off by default every time | 3007 | * Make sure you have link off by default every time |
2598 | * Nic is initialized | 3008 | * Nic is initialized |
2599 | */ | 3009 | */ |
2600 | netif_carrier_off(dev); | 3010 | netif_carrier_off(dev); |
2601 | sp->last_link_state = LINK_DOWN; | 3011 | sp->last_link_state = 0; |
2602 | 3012 | ||
2603 | /* Initialize H/W and enable interrupts */ | 3013 | /* Initialize H/W and enable interrupts */ |
2604 | if (s2io_card_up(sp)) { | 3014 | if (s2io_card_up(sp)) { |
2605 | DBG_PRINT(ERR_DBG, "%s: H/W initialization failed\n", | 3015 | DBG_PRINT(ERR_DBG, "%s: H/W initialization failed\n", |
2606 | dev->name); | 3016 | dev->name); |
2607 | return -ENODEV; | 3017 | err = -ENODEV; |
3018 | goto hw_init_failed; | ||
2608 | } | 3019 | } |
2609 | 3020 | ||
2610 | /* After proper initialization of H/W, register ISR */ | 3021 | /* After proper initialization of H/W, register ISR */ |
2611 | err = request_irq((int) sp->irq, s2io_isr, SA_SHIRQ, | 3022 | err = request_irq((int) sp->pdev->irq, s2io_isr, SA_SHIRQ, |
2612 | sp->name, dev); | 3023 | sp->name, dev); |
2613 | if (err) { | 3024 | if (err) { |
2614 | s2io_reset(sp); | ||
2615 | DBG_PRINT(ERR_DBG, "%s: ISR registration failed\n", | 3025 | DBG_PRINT(ERR_DBG, "%s: ISR registration failed\n", |
2616 | dev->name); | 3026 | dev->name); |
2617 | return err; | 3027 | goto isr_registration_failed; |
2618 | } | 3028 | } |
2619 | 3029 | ||
2620 | if (s2io_set_mac_addr(dev, dev->dev_addr) == FAILURE) { | 3030 | if (s2io_set_mac_addr(dev, dev->dev_addr) == FAILURE) { |
2621 | DBG_PRINT(ERR_DBG, "Set Mac Address Failed\n"); | 3031 | DBG_PRINT(ERR_DBG, "Set Mac Address Failed\n"); |
2622 | s2io_reset(sp); | 3032 | err = -ENODEV; |
2623 | return -ENODEV; | 3033 | goto setting_mac_address_failed; |
2624 | } | 3034 | } |
2625 | 3035 | ||
2626 | netif_start_queue(dev); | 3036 | netif_start_queue(dev); |
2627 | return 0; | 3037 | return 0; |
3038 | |||
3039 | setting_mac_address_failed: | ||
3040 | free_irq(sp->pdev->irq, dev); | ||
3041 | isr_registration_failed: | ||
3042 | del_timer_sync(&sp->alarm_timer); | ||
3043 | s2io_reset(sp); | ||
3044 | hw_init_failed: | ||
3045 | return err; | ||
2628 | } | 3046 | } |
2629 | 3047 | ||
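The reworked s2io_open() above unwinds partial initialization with goto labels in reverse order of setup, so each failure path frees only what was already acquired. A generic sketch of that idiom, with placeholder step_*() stubs that are not driver functions:

static int step_init_hw(void) { return 0; }	/* stubs for illustration only */
static int step_request_irq(void) { return 0; }
static int step_set_mac(void) { return 0; }
static void step_free_irq(void) { }
static void step_reset_hw(void) { }

static int open_device(void)
{
	int err;

	err = step_init_hw();
	if (err)
		goto hw_init_failed;
	err = step_request_irq();
	if (err)
		goto irq_failed;
	err = step_set_mac();
	if (err)
		goto mac_failed;
	return 0;

mac_failed:
	step_free_irq();	/* undo the IRQ registration */
irq_failed:
	step_reset_hw();	/* undo the hardware init */
hw_init_failed:
	return err;
}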
2630 | /** | 3048 | /** |
@@ -2640,16 +3058,15 @@ static int s2io_open(struct net_device *dev) | |||
2640 | * file on failure. | 3058 | * file on failure. |
2641 | */ | 3059 | */ |
2642 | 3060 | ||
2643 | static int s2io_close(struct net_device *dev) | 3061 | int s2io_close(struct net_device *dev) |
2644 | { | 3062 | { |
2645 | nic_t *sp = dev->priv; | 3063 | nic_t *sp = dev->priv; |
2646 | |||
2647 | flush_scheduled_work(); | 3064 | flush_scheduled_work(); |
2648 | netif_stop_queue(dev); | 3065 | netif_stop_queue(dev); |
2649 | /* Reset card, kill tasklet and free Tx and Rx buffers. */ | 3066 | /* Reset card, kill tasklet and free Tx and Rx buffers. */ |
2650 | s2io_card_down(sp); | 3067 | s2io_card_down(sp); |
2651 | 3068 | ||
2652 | free_irq(dev->irq, dev); | 3069 | free_irq(sp->pdev->irq, dev); |
2653 | sp->device_close_flag = TRUE; /* Device is shut down. */ | 3070 | sp->device_close_flag = TRUE; /* Device is shut down. */ |
2654 | return 0; | 3071 | return 0; |
2655 | } | 3072 | } |
@@ -2667,7 +3084,7 @@ static int s2io_close(struct net_device *dev) | |||
2667 | * 0 on success & 1 on failure. | 3084 | * 0 on success & 1 on failure. |
2668 | */ | 3085 | */ |
2669 | 3086 | ||
2670 | static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | 3087 | int s2io_xmit(struct sk_buff *skb, struct net_device *dev) |
2671 | { | 3088 | { |
2672 | nic_t *sp = dev->priv; | 3089 | nic_t *sp = dev->priv; |
2673 | u16 frg_cnt, frg_len, i, queue, queue_len, put_off, get_off; | 3090 | u16 frg_cnt, frg_len, i, queue, queue_len, put_off, get_off; |
@@ -2678,29 +3095,39 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | |||
2678 | #ifdef NETIF_F_TSO | 3095 | #ifdef NETIF_F_TSO |
2679 | int mss; | 3096 | int mss; |
2680 | #endif | 3097 | #endif |
3098 | u16 vlan_tag = 0; | ||
3099 | int vlan_priority = 0; | ||
2681 | mac_info_t *mac_control; | 3100 | mac_info_t *mac_control; |
2682 | struct config_param *config; | 3101 | struct config_param *config; |
2683 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | ||
2684 | 3102 | ||
2685 | mac_control = &sp->mac_control; | 3103 | mac_control = &sp->mac_control; |
2686 | config = &sp->config; | 3104 | config = &sp->config; |
2687 | 3105 | ||
2688 | DBG_PRINT(TX_DBG, "%s: In S2IO Tx routine\n", dev->name); | 3106 | DBG_PRINT(TX_DBG, "%s: In Neterion Tx routine\n", dev->name); |
2689 | spin_lock_irqsave(&sp->tx_lock, flags); | 3107 | spin_lock_irqsave(&sp->tx_lock, flags); |
2690 | |||
2691 | if (atomic_read(&sp->card_state) == CARD_DOWN) { | 3108 | if (atomic_read(&sp->card_state) == CARD_DOWN) { |
2692 | DBG_PRINT(ERR_DBG, "%s: Card going down for reset\n", | 3109 | DBG_PRINT(TX_DBG, "%s: Card going down for reset\n", |
2693 | dev->name); | 3110 | dev->name); |
2694 | spin_unlock_irqrestore(&sp->tx_lock, flags); | 3111 | spin_unlock_irqrestore(&sp->tx_lock, flags); |
2695 | return 1; | 3112 | dev_kfree_skb(skb); |
3113 | return 0; | ||
2696 | } | 3114 | } |
2697 | 3115 | ||
2698 | queue = 0; | 3116 | queue = 0; |
2699 | put_off = (u16) mac_control->tx_curr_put_info[queue].offset; | ||
2700 | get_off = (u16) mac_control->tx_curr_get_info[queue].offset; | ||
2701 | txdp = (TxD_t *) sp->list_info[queue][put_off].list_virt_addr; | ||
2702 | 3117 | ||
2703 | queue_len = mac_control->tx_curr_put_info[queue].fifo_len + 1; | 3118 | /* Get Fifo number to Transmit based on vlan priority */ |
3119 | if (sp->vlgrp && vlan_tx_tag_present(skb)) { | ||
3120 | vlan_tag = vlan_tx_tag_get(skb); | ||
3121 | vlan_priority = vlan_tag >> 13; | ||
3122 | queue = config->fifo_mapping[vlan_priority]; | ||
3123 | } | ||
3124 | |||
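The block above picks the transmit FIFO from the frame's VLAN priority: the top three bits of the 16-bit VLAN tag are the 802.1p priority, giving eight classes that index config->fifo_mapping. A standalone sketch of just that extraction (the mapping table here is illustrative):

#include <stdint.h>

/* Map a VLAN tag to a transmit FIFO via its 3-bit 802.1p priority field. */
static unsigned int fifo_for_vlan_tag(uint16_t vlan_tag,
				      const uint8_t fifo_mapping[8])
{
	unsigned int prio = vlan_tag >> 13;	/* bits 15..13 of the tag */

	return fifo_mapping[prio];
}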
3125 | put_off = (u16) mac_control->fifos[queue].tx_curr_put_info.offset; | ||
3126 | get_off = (u16) mac_control->fifos[queue].tx_curr_get_info.offset; | ||
3127 | txdp = (TxD_t *) mac_control->fifos[queue].list_info[put_off]. | ||
3128 | list_virt_addr; | ||
3129 | |||
3130 | queue_len = mac_control->fifos[queue].tx_curr_put_info.fifo_len + 1; | ||
2704 | /* Avoid "put" pointer going beyond "get" pointer */ | 3131 | /* Avoid "put" pointer going beyond "get" pointer */ |
2705 | if (txdp->Host_Control || (((put_off + 1) % queue_len) == get_off)) { | 3132 | if (txdp->Host_Control || (((put_off + 1) % queue_len) == get_off)) { |
2706 | DBG_PRINT(ERR_DBG, "Error in xmit, No free TXDs.\n"); | 3133 | DBG_PRINT(ERR_DBG, "Error in xmit, No free TXDs.\n"); |
@@ -2709,6 +3136,15 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | |||
2709 | spin_unlock_irqrestore(&sp->tx_lock, flags); | 3136 | spin_unlock_irqrestore(&sp->tx_lock, flags); |
2710 | return 0; | 3137 | return 0; |
2711 | } | 3138 | } |
3139 | |||
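The offset comparison in the check above is the standard circular-FIFO "full" test: with queue_len slots, the ring is treated as full when advancing the producer ("put") offset by one would land on the consumer ("get") offset. As a standalone sketch:

#include <stdbool.h>

/* True when one more descriptor would make put catch up with get. */
static bool tx_ring_full(unsigned int put_off, unsigned int get_off,
			 unsigned int queue_len)
{
	return ((put_off + 1) % queue_len) == get_off;
}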
3140 | /* A buffer with no data will be dropped */ | ||
3141 | if (!skb->len) { | ||
3142 | DBG_PRINT(TX_DBG, "%s:Buffer has no data..\n", dev->name); | ||
3143 | dev_kfree_skb(skb); | ||
3144 | spin_unlock_irqrestore(&sp->tx_lock, flags); | ||
3145 | return 0; | ||
3146 | } | ||
3147 | |||
2712 | #ifdef NETIF_F_TSO | 3148 | #ifdef NETIF_F_TSO |
2713 | mss = skb_shinfo(skb)->tso_size; | 3149 | mss = skb_shinfo(skb)->tso_size; |
2714 | if (mss) { | 3150 | if (mss) { |
@@ -2720,9 +3156,9 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | |||
2720 | frg_cnt = skb_shinfo(skb)->nr_frags; | 3156 | frg_cnt = skb_shinfo(skb)->nr_frags; |
2721 | frg_len = skb->len - skb->data_len; | 3157 | frg_len = skb->len - skb->data_len; |
2722 | 3158 | ||
2723 | txdp->Host_Control = (unsigned long) skb; | ||
2724 | txdp->Buffer_Pointer = pci_map_single | 3159 | txdp->Buffer_Pointer = pci_map_single |
2725 | (sp->pdev, skb->data, frg_len, PCI_DMA_TODEVICE); | 3160 | (sp->pdev, skb->data, frg_len, PCI_DMA_TODEVICE); |
3161 | txdp->Host_Control = (unsigned long) skb; | ||
2726 | if (skb->ip_summed == CHECKSUM_HW) { | 3162 | if (skb->ip_summed == CHECKSUM_HW) { |
2727 | txdp->Control_2 |= | 3163 | txdp->Control_2 |= |
2728 | (TXD_TX_CKO_IPV4_EN | TXD_TX_CKO_TCP_EN | | 3164 | (TXD_TX_CKO_IPV4_EN | TXD_TX_CKO_TCP_EN | |
@@ -2731,6 +3167,11 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | |||
2731 | 3167 | ||
2732 | txdp->Control_2 |= config->tx_intr_type; | 3168 | txdp->Control_2 |= config->tx_intr_type; |
2733 | 3169 | ||
3170 | if (sp->vlgrp && vlan_tx_tag_present(skb)) { | ||
3171 | txdp->Control_2 |= TXD_VLAN_ENABLE; | ||
3172 | txdp->Control_2 |= TXD_VLAN_TAG(vlan_tag); | ||
3173 | } | ||
3174 | |||
2734 | txdp->Control_1 |= (TXD_BUFFER0_SIZE(frg_len) | | 3175 | txdp->Control_1 |= (TXD_BUFFER0_SIZE(frg_len) | |
2735 | TXD_GATHER_CODE_FIRST); | 3176 | TXD_GATHER_CODE_FIRST); |
2736 | txdp->Control_1 |= TXD_LIST_OWN_XENA; | 3177 | txdp->Control_1 |= TXD_LIST_OWN_XENA; |
@@ -2738,6 +3179,9 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | |||
2738 | /* For fragmented SKB. */ | 3179 | /* For fragmented SKB. */ |
2739 | for (i = 0; i < frg_cnt; i++) { | 3180 | for (i = 0; i < frg_cnt; i++) { |
2740 | skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; | 3181 | skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; |
3182 | /* A '0' length fragment will be ignored */ | ||
3183 | if (!frag->size) | ||
3184 | continue; | ||
2741 | txdp++; | 3185 | txdp++; |
2742 | txdp->Buffer_Pointer = (u64) pci_map_page | 3186 | txdp->Buffer_Pointer = (u64) pci_map_page |
2743 | (sp->pdev, frag->page, frag->page_offset, | 3187 | (sp->pdev, frag->page, frag->page_offset, |
@@ -2747,23 +3191,23 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | |||
2747 | txdp->Control_1 |= TXD_GATHER_CODE_LAST; | 3191 | txdp->Control_1 |= TXD_GATHER_CODE_LAST; |
2748 | 3192 | ||
2749 | tx_fifo = mac_control->tx_FIFO_start[queue]; | 3193 | tx_fifo = mac_control->tx_FIFO_start[queue]; |
2750 | val64 = sp->list_info[queue][put_off].list_phy_addr; | 3194 | val64 = mac_control->fifos[queue].list_info[put_off].list_phy_addr; |
2751 | writeq(val64, &tx_fifo->TxDL_Pointer); | 3195 | writeq(val64, &tx_fifo->TxDL_Pointer); |
2752 | 3196 | ||
2753 | val64 = (TX_FIFO_LAST_TXD_NUM(frg_cnt) | TX_FIFO_FIRST_LIST | | 3197 | val64 = (TX_FIFO_LAST_TXD_NUM(frg_cnt) | TX_FIFO_FIRST_LIST | |
2754 | TX_FIFO_LAST_LIST); | 3198 | TX_FIFO_LAST_LIST); |
3199 | |||
2755 | #ifdef NETIF_F_TSO | 3200 | #ifdef NETIF_F_TSO |
2756 | if (mss) | 3201 | if (mss) |
2757 | val64 |= TX_FIFO_SPECIAL_FUNC; | 3202 | val64 |= TX_FIFO_SPECIAL_FUNC; |
2758 | #endif | 3203 | #endif |
2759 | writeq(val64, &tx_fifo->List_Control); | 3204 | writeq(val64, &tx_fifo->List_Control); |
2760 | 3205 | ||
2761 | /* Perform a PCI read to flush previous writes */ | 3206 | mmiowb(); |
2762 | val64 = readq(&bar0->general_int_status); | ||
2763 | 3207 | ||
2764 | put_off++; | 3208 | put_off++; |
2765 | put_off %= mac_control->tx_curr_put_info[queue].fifo_len + 1; | 3209 | put_off %= mac_control->fifos[queue].tx_curr_put_info.fifo_len + 1; |
2766 | mac_control->tx_curr_put_info[queue].offset = put_off; | 3210 | mac_control->fifos[queue].tx_curr_put_info.offset = put_off; |
2767 | 3211 | ||
2768 | /* Avoid "put" pointer going beyond "get" pointer */ | 3212 | /* Avoid "put" pointer going beyond "get" pointer */ |
2769 | if (((put_off + 1) % queue_len) == get_off) { | 3213 | if (((put_off + 1) % queue_len) == get_off) { |
@@ -2779,18 +3223,74 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev) | |||
2779 | return 0; | 3223 | return 0; |
2780 | } | 3224 | } |
2781 | 3225 | ||
3226 | static void | ||
3227 | s2io_alarm_handle(unsigned long data) | ||
3228 | { | ||
3229 | nic_t *sp = (nic_t *)data; | ||
3230 | |||
3231 | alarm_intr_handler(sp); | ||
3232 | mod_timer(&sp->alarm_timer, jiffies + HZ / 2); | ||
3233 | } | ||
3234 | |||
3235 | static void s2io_txpic_intr_handle(nic_t *sp) | ||
3236 | { | ||
3237 | XENA_dev_config_t *bar0 = (XENA_dev_config_t *) sp->bar0; | ||
3238 | u64 val64; | ||
3239 | |||
3240 | val64 = readq(&bar0->pic_int_status); | ||
3241 | if (val64 & PIC_INT_GPIO) { | ||
3242 | val64 = readq(&bar0->gpio_int_reg); | ||
3243 | if ((val64 & GPIO_INT_REG_LINK_DOWN) && | ||
3244 | (val64 & GPIO_INT_REG_LINK_UP)) { | ||
3245 | val64 |= GPIO_INT_REG_LINK_DOWN; | ||
3246 | val64 |= GPIO_INT_REG_LINK_UP; | ||
3247 | writeq(val64, &bar0->gpio_int_reg); | ||
3248 | goto masking; | ||
3249 | } | ||
3250 | |||
3251 | if (((sp->last_link_state == LINK_UP) && | ||
3252 | (val64 & GPIO_INT_REG_LINK_DOWN)) || | ||
3253 | ((sp->last_link_state == LINK_DOWN) && | ||
3254 | (val64 & GPIO_INT_REG_LINK_UP))) { | ||
3255 | val64 = readq(&bar0->gpio_int_mask); | ||
3256 | val64 |= GPIO_INT_MASK_LINK_DOWN; | ||
3257 | val64 |= GPIO_INT_MASK_LINK_UP; | ||
3258 | writeq(val64, &bar0->gpio_int_mask); | ||
3259 | s2io_set_link((unsigned long)sp); | ||
3260 | } | ||
3261 | masking: | ||
3262 | if (sp->last_link_state == LINK_UP) { | ||
3263 | /*enable down interrupt */ | ||
3264 | val64 = readq(&bar0->gpio_int_mask); | ||
3265 | /* unmasks link down intr */ | ||
3266 | val64 &= ~GPIO_INT_MASK_LINK_DOWN; | ||
3267 | /* masks link up intr */ | ||
3268 | val64 |= GPIO_INT_MASK_LINK_UP; | ||
3269 | writeq(val64, &bar0->gpio_int_mask); | ||
3270 | } else { | ||
3271 | /*enable UP Interrupt */ | ||
3272 | val64 = readq(&bar0->gpio_int_mask); | ||
3273 | /* unmasks link up interrupt */ | ||
3274 | val64 &= ~GPIO_INT_MASK_LINK_UP; | ||
3275 | /* masks link down interrupt */ | ||
3276 | val64 |= GPIO_INT_MASK_LINK_DOWN; | ||
3277 | writeq(val64, &bar0->gpio_int_mask); | ||
3278 | } | ||
3279 | } | ||
3280 | } | ||
3281 | |||
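The masking: block in s2io_txpic_intr_handle() above arms only the interrupt for the opposite link transition: with the link up it unmasks link-down and masks link-up, and the reverse when the link is down. A sketch of that mask computation, with placeholder bit positions (the real GPIO_INT_MASK_* values are defined by the driver):

#include <stdint.h>

#define MASK_LINK_UP_BIT   (1ULL << 0)	/* placeholder, not the driver's bit */
#define MASK_LINK_DOWN_BIT (1ULL << 1)	/* placeholder, not the driver's bit */

/* Compute the next GPIO interrupt mask from the current link state. */
static uint64_t next_gpio_int_mask(uint64_t mask, int link_is_up)
{
	if (link_is_up) {
		mask &= ~MASK_LINK_DOWN_BIT;	/* let a link-down fire */
		mask |= MASK_LINK_UP_BIT;	/* silence further link-up */
	} else {
		mask &= ~MASK_LINK_UP_BIT;	/* let a link-up fire */
		mask |= MASK_LINK_DOWN_BIT;	/* silence further link-down */
	}
	return mask;
}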
2782 | /** | 3282 | /** |
2783 | * s2io_isr - ISR handler of the device . | 3283 | * s2io_isr - ISR handler of the device . |
2784 | * @irq: the irq of the device. | 3284 | * @irq: the irq of the device. |
2785 | * @dev_id: a void pointer to the dev structure of the NIC. | 3285 | * @dev_id: a void pointer to the dev structure of the NIC. |
2786 | * @pt_regs: pointer to the registers pushed on the stack. | 3286 | * @pt_regs: pointer to the registers pushed on the stack. |
2787 | * Description: This function is the ISR handler of the device. It | 3287 | * Description: This function is the ISR handler of the device. It |
2788 | * identifies the reason for the interrupt and calls the relevant | 3288 | * identifies the reason for the interrupt and calls the relevant |
2789 | * service routines. As a contingency measure, this ISR allocates the | 3289 | * service routines. As a contingency measure, this ISR allocates the |
2790 | * recv buffers, if their numbers are below the panic value which is | 3290 | * recv buffers, if their numbers are below the panic value which is |
2791 | * presently set to 25% of the original number of rcv buffers allocated. | 3291 | * presently set to 25% of the original number of rcv buffers allocated. |
2792 | * Return value: | 3292 | * Return value: |
2793 | * IRQ_HANDLED: will be returned if IRQ was handled by this routine | 3293 | * IRQ_HANDLED: will be returned if IRQ was handled by this routine |
2794 | * IRQ_NONE: will be returned if interrupt is not from our device | 3294 | * IRQ_NONE: will be returned if interrupt is not from our device |
2795 | */ | 3295 | */ |
2796 | static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs) | 3296 | static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs) |
@@ -2798,40 +3298,31 @@ static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs) | |||
2798 | struct net_device *dev = (struct net_device *) dev_id; | 3298 | struct net_device *dev = (struct net_device *) dev_id; |
2799 | nic_t *sp = dev->priv; | 3299 | nic_t *sp = dev->priv; |
2800 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | 3300 | XENA_dev_config_t __iomem *bar0 = sp->bar0; |
2801 | #ifndef CONFIG_S2IO_NAPI | 3301 | int i; |
2802 | int i, ret; | 3302 | u64 reason = 0, val64; |
2803 | #endif | ||
2804 | u64 reason = 0; | ||
2805 | mac_info_t *mac_control; | 3303 | mac_info_t *mac_control; |
2806 | struct config_param *config; | 3304 | struct config_param *config; |
2807 | 3305 | ||
3306 | atomic_inc(&sp->isr_cnt); | ||
2808 | mac_control = &sp->mac_control; | 3307 | mac_control = &sp->mac_control; |
2809 | config = &sp->config; | 3308 | config = &sp->config; |
2810 | 3309 | ||
2811 | /* | 3310 | /* |
2812 | * Identify the cause for interrupt and call the appropriate | 3311 | * Identify the cause for interrupt and call the appropriate |
2813 | * interrupt handler. Causes for the interrupt could be; | 3312 | * interrupt handler. Causes for the interrupt could be; |
2814 | * 1. Rx of packet. | 3313 | * 1. Rx of packet. |
2815 | * 2. Tx complete. | 3314 | * 2. Tx complete. |
2816 | * 3. Link down. | 3315 | * 3. Link down. |
2817 | * 4. Error in any functional blocks of the NIC. | 3316 | * 4. Error in any functional blocks of the NIC. |
2818 | */ | 3317 | */ |
2819 | reason = readq(&bar0->general_int_status); | 3318 | reason = readq(&bar0->general_int_status); |
2820 | 3319 | ||
2821 | if (!reason) { | 3320 | if (!reason) { |
2822 | /* The interrupt was not raised by Xena. */ | 3321 | /* The interrupt was not raised by Xena. */ |
3322 | atomic_dec(&sp->isr_cnt); | ||
2823 | return IRQ_NONE; | 3323 | return IRQ_NONE; |
2824 | } | 3324 | } |
2825 | 3325 | ||
2826 | /* If Intr is because of Tx Traffic */ | ||
2827 | if (reason & GEN_INTR_TXTRAFFIC) { | ||
2828 | tx_intr_handler(sp); | ||
2829 | } | ||
2830 | |||
2831 | /* If Intr is because of an error */ | ||
2832 | if (reason & (GEN_ERROR_INTR)) | ||
2833 | alarm_intr_handler(sp); | ||
2834 | |||
2835 | #ifdef CONFIG_S2IO_NAPI | 3326 | #ifdef CONFIG_S2IO_NAPI |
2836 | if (reason & GEN_INTR_RXTRAFFIC) { | 3327 | if (reason & GEN_INTR_RXTRAFFIC) { |
2837 | if (netif_rx_schedule_prep(dev)) { | 3328 | if (netif_rx_schedule_prep(dev)) { |
@@ -2843,17 +3334,43 @@ static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs) | |||
2843 | #else | 3334 | #else |
2844 | /* If Intr is because of Rx Traffic */ | 3335 | /* If Intr is because of Rx Traffic */ |
2845 | if (reason & GEN_INTR_RXTRAFFIC) { | 3336 | if (reason & GEN_INTR_RXTRAFFIC) { |
2846 | rx_intr_handler(sp); | 3337 | /* |
3338 | * rx_traffic_int reg is an R1 register, writing all 1's | ||
3339 | * will ensure that the actual interrupt causing bit gets | ||
3340 | * cleared and hence a read can be avoided. | ||
3341 | */ | ||
3342 | val64 = 0xFFFFFFFFFFFFFFFFULL; | ||
3343 | writeq(val64, &bar0->rx_traffic_int); | ||
3344 | for (i = 0; i < config->rx_ring_num; i++) { | ||
3345 | rx_intr_handler(&mac_control->rings[i]); | ||
3346 | } | ||
2847 | } | 3347 | } |
2848 | #endif | 3348 | #endif |
2849 | 3349 | ||
2850 | /* | 3350 | /* If Intr is because of Tx Traffic */ |
2851 | * If the Rx buffer count is below the panic threshold then | 3351 | if (reason & GEN_INTR_TXTRAFFIC) { |
2852 | * reallocate the buffers from the interrupt handler itself, | 3352 | /* |
3353 | * tx_traffic_int reg is an R1 register, writing all 1's | ||
3354 | * will ensure that the actual interrupt causing bit gets | ||
3355 | * cleared and hence a read can be avoided. | ||
3356 | */ | ||
3357 | val64 = 0xFFFFFFFFFFFFFFFFULL; | ||
3358 | writeq(val64, &bar0->tx_traffic_int); | ||
3359 | |||
3360 | for (i = 0; i < config->tx_fifo_num; i++) | ||
3361 | tx_intr_handler(&mac_control->fifos[i]); | ||
3362 | } | ||
3363 | |||
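Both traffic-interrupt registers above are acknowledged with the write-1-to-clear (R1) convention the comments describe: writing all ones clears whichever cause bits were set, so no read is needed. A generic sketch, with a volatile pointer standing in for the driver's writeq() MMIO accessor:

#include <stdint.h>

/* Acknowledge a write-1-to-clear register by setting every bit. */
static void ack_w1c_register(volatile uint64_t *reg)
{
	*reg = 0xFFFFFFFFFFFFFFFFULL;
}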
3364 | if (reason & GEN_INTR_TXPIC) | ||
3365 | s2io_txpic_intr_handle(sp); | ||
3366 | /* | ||
3367 | * If the Rx buffer count is below the panic threshold then | ||
3368 | * reallocate the buffers from the interrupt handler itself, | ||
2853 | * else schedule a tasklet to reallocate the buffers. | 3369 | * else schedule a tasklet to reallocate the buffers. |
2854 | */ | 3370 | */ |
2855 | #ifndef CONFIG_S2IO_NAPI | 3371 | #ifndef CONFIG_S2IO_NAPI |
2856 | for (i = 0; i < config->rx_ring_num; i++) { | 3372 | for (i = 0; i < config->rx_ring_num; i++) { |
3373 | int ret; | ||
2857 | int rxb_size = atomic_read(&sp->rx_bufs_left[i]); | 3374 | int rxb_size = atomic_read(&sp->rx_bufs_left[i]); |
2858 | int level = rx_buffer_level(sp, rxb_size, i); | 3375 | int level = rx_buffer_level(sp, rxb_size, i); |
2859 | 3376 | ||
@@ -2865,6 +3382,7 @@ static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs) | |||
2865 | dev->name); | 3382 | dev->name); |
2866 | DBG_PRINT(ERR_DBG, " in ISR!!\n"); | 3383 | DBG_PRINT(ERR_DBG, " in ISR!!\n"); |
2867 | clear_bit(0, (&sp->tasklet_status)); | 3384 | clear_bit(0, (&sp->tasklet_status)); |
3385 | atomic_dec(&sp->isr_cnt); | ||
2868 | return IRQ_HANDLED; | 3386 | return IRQ_HANDLED; |
2869 | } | 3387 | } |
2870 | clear_bit(0, (&sp->tasklet_status)); | 3388 | clear_bit(0, (&sp->tasklet_status)); |
@@ -2874,33 +3392,69 @@ static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs) | |||
2874 | } | 3392 | } |
2875 | #endif | 3393 | #endif |
2876 | 3394 | ||
3395 | atomic_dec(&sp->isr_cnt); | ||
2877 | return IRQ_HANDLED; | 3396 | return IRQ_HANDLED; |
2878 | } | 3397 | } |
2879 | 3398 | ||
2880 | /** | 3399 | /** |
2881 | * s2io_get_stats - Updates the device statistics structure. | 3400 | * s2io_updt_stats - triggers an immediate hardware statistics update |
3401 | */ | ||
3402 | static void s2io_updt_stats(nic_t *sp) | ||
3403 | { | ||
3404 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | ||
3405 | u64 val64; | ||
3406 | int cnt = 0; | ||
3407 | |||
3408 | if (atomic_read(&sp->card_state) == CARD_UP) { | ||
3409 | /* Apprx 30us on a 133 MHz bus */ | ||
3410 | val64 = SET_UPDT_CLICKS(10) | | ||
3411 | STAT_CFG_ONE_SHOT_EN | STAT_CFG_STAT_EN; | ||
3412 | writeq(val64, &bar0->stat_cfg); | ||
3413 | do { | ||
3414 | udelay(100); | ||
3415 | val64 = readq(&bar0->stat_cfg); | ||
3416 | if (!(val64 & BIT(0))) | ||
3417 | break; | ||
3418 | cnt++; | ||
3419 | if (cnt == 5) | ||
3420 | break; /* Updt failed */ | ||
3421 | } while(1); | ||
3422 | } | ||
3423 | } | ||
3424 | |||
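s2io_updt_stats() above kicks a one-shot statistics update and then polls a strobe bit a bounded number of times before giving up. The same bounded-poll shape in a standalone sketch; the callbacks and the bit position are hypothetical stand-ins for the driver's readq()/udelay() and its BIT(0) strobe:

#include <stdint.h>

#define STAT_BUSY_BIT (1ULL << 0)	/* placeholder for the driver's strobe bit */

/* Poll up to five times, 100us apart; 0 = update done, -1 = timed out. */
static int wait_stats_update(uint64_t (*read_stat_cfg)(void),
			     void (*delay_us)(unsigned int))
{
	int cnt;

	for (cnt = 0; cnt < 5; cnt++) {
		delay_us(100);
		if (!(read_stat_cfg() & STAT_BUSY_BIT))
			return 0;
	}
	return -1;
}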
3425 | /** | ||
3426 | * s2io_get_stats - Updates the device statistics structure. | ||
2882 | * @dev : pointer to the device structure. | 3427 | * @dev : pointer to the device structure. |
2883 | * Description: | 3428 | * Description: |
2884 | * This function updates the device statistics structure in the s2io_nic | 3429 | * This function updates the device statistics structure in the s2io_nic |
2885 | * structure and returns a pointer to the same. | 3430 | * structure and returns a pointer to the same. |
2886 | * Return value: | 3431 | * Return value: |
2887 | * pointer to the updated net_device_stats structure. | 3432 | * pointer to the updated net_device_stats structure. |
2888 | */ | 3433 | */ |
2889 | 3434 | ||
2890 | static struct net_device_stats *s2io_get_stats(struct net_device *dev) | 3435 | struct net_device_stats *s2io_get_stats(struct net_device *dev) |
2891 | { | 3436 | { |
2892 | nic_t *sp = dev->priv; | 3437 | nic_t *sp = dev->priv; |
2893 | mac_info_t *mac_control; | 3438 | mac_info_t *mac_control; |
2894 | struct config_param *config; | 3439 | struct config_param *config; |
2895 | 3440 | ||
3441 | |||
2896 | mac_control = &sp->mac_control; | 3442 | mac_control = &sp->mac_control; |
2897 | config = &sp->config; | 3443 | config = &sp->config; |
2898 | 3444 | ||
2899 | sp->stats.tx_errors = mac_control->stats_info->tmac_any_err_frms; | 3445 | /* Configure Stats for immediate updt */ |
2900 | sp->stats.rx_errors = mac_control->stats_info->rmac_drop_frms; | 3446 | s2io_updt_stats(sp); |
2901 | sp->stats.multicast = mac_control->stats_info->rmac_vld_mcst_frms; | 3447 | |
3448 | sp->stats.tx_packets = | ||
3449 | le32_to_cpu(mac_control->stats_info->tmac_frms); | ||
3450 | sp->stats.tx_errors = | ||
3451 | le32_to_cpu(mac_control->stats_info->tmac_any_err_frms); | ||
3452 | sp->stats.rx_errors = | ||
3453 | le32_to_cpu(mac_control->stats_info->rmac_drop_frms); | ||
3454 | sp->stats.multicast = | ||
3455 | le32_to_cpu(mac_control->stats_info->rmac_vld_mcst_frms); | ||
2902 | sp->stats.rx_length_errors = | 3456 | sp->stats.rx_length_errors = |
2903 | mac_control->stats_info->rmac_long_frms; | 3457 | le32_to_cpu(mac_control->stats_info->rmac_long_frms); |
2904 | 3458 | ||
2905 | return (&sp->stats); | 3459 | return (&sp->stats); |
2906 | } | 3460 | } |
@@ -2909,8 +3463,8 @@ static struct net_device_stats *s2io_get_stats(struct net_device *dev) | |||
2909 | * s2io_set_multicast - entry point for multicast address enable/disable. | 3463 | * s2io_set_multicast - entry point for multicast address enable/disable. |
2910 | * @dev : pointer to the device structure | 3464 | * @dev : pointer to the device structure |
2911 | * Description: | 3465 | * Description: |
2912 | * This function is a driver entry point which gets called by the kernel | 3466 | * This function is a driver entry point which gets called by the kernel |
2913 | * whenever multicast addresses must be enabled/disabled. This also gets | 3467 | * whenever multicast addresses must be enabled/disabled. This also gets |
2914 | * called to set/reset promiscuous mode. Depending on the device flag, we | 3468 | * called to set/reset promiscuous mode. Depending on the device flag, we |
2915 | * determine if multicast addresses must be enabled or if promiscuous mode | 3469 | * determine if multicast addresses must be enabled or if promiscuous mode |
2916 | * is to be disabled etc. | 3470 | * is to be disabled etc. |
@@ -2948,6 +3502,8 @@ static void s2io_set_multicast(struct net_device *dev) | |||
2948 | /* Disable all Multicast addresses */ | 3502 | /* Disable all Multicast addresses */ |
2949 | writeq(RMAC_ADDR_DATA0_MEM_ADDR(dis_addr), | 3503 | writeq(RMAC_ADDR_DATA0_MEM_ADDR(dis_addr), |
2950 | &bar0->rmac_addr_data0_mem); | 3504 | &bar0->rmac_addr_data0_mem); |
3505 | writeq(RMAC_ADDR_DATA1_MEM_MASK(0x0), | ||
3506 | &bar0->rmac_addr_data1_mem); | ||
2951 | val64 = RMAC_ADDR_CMD_MEM_WE | | 3507 | val64 = RMAC_ADDR_CMD_MEM_WE | |
2952 | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD | | 3508 | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD | |
2953 | RMAC_ADDR_CMD_MEM_OFFSET(sp->all_multi_pos); | 3509 | RMAC_ADDR_CMD_MEM_OFFSET(sp->all_multi_pos); |
@@ -3010,7 +3566,7 @@ static void s2io_set_multicast(struct net_device *dev) | |||
3010 | writeq(RMAC_ADDR_DATA0_MEM_ADDR(dis_addr), | 3566 | writeq(RMAC_ADDR_DATA0_MEM_ADDR(dis_addr), |
3011 | &bar0->rmac_addr_data0_mem); | 3567 | &bar0->rmac_addr_data0_mem); |
3012 | writeq(RMAC_ADDR_DATA1_MEM_MASK(0ULL), | 3568 | writeq(RMAC_ADDR_DATA1_MEM_MASK(0ULL), |
3013 | &bar0->rmac_addr_data1_mem); | 3569 | &bar0->rmac_addr_data1_mem); |
3014 | val64 = RMAC_ADDR_CMD_MEM_WE | | 3570 | val64 = RMAC_ADDR_CMD_MEM_WE | |
3015 | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD | | 3571 | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD | |
3016 | RMAC_ADDR_CMD_MEM_OFFSET | 3572 | RMAC_ADDR_CMD_MEM_OFFSET |
@@ -3039,8 +3595,7 @@ static void s2io_set_multicast(struct net_device *dev) | |||
3039 | writeq(RMAC_ADDR_DATA0_MEM_ADDR(mac_addr), | 3595 | writeq(RMAC_ADDR_DATA0_MEM_ADDR(mac_addr), |
3040 | &bar0->rmac_addr_data0_mem); | 3596 | &bar0->rmac_addr_data0_mem); |
3041 | writeq(RMAC_ADDR_DATA1_MEM_MASK(0ULL), | 3597 | writeq(RMAC_ADDR_DATA1_MEM_MASK(0ULL), |
3042 | &bar0->rmac_addr_data1_mem); | 3598 | &bar0->rmac_addr_data1_mem); |
3043 | |||
3044 | val64 = RMAC_ADDR_CMD_MEM_WE | | 3599 | val64 = RMAC_ADDR_CMD_MEM_WE | |
3045 | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD | | 3600 | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD | |
3046 | RMAC_ADDR_CMD_MEM_OFFSET | 3601 | RMAC_ADDR_CMD_MEM_OFFSET |
@@ -3059,12 +3614,12 @@ static void s2io_set_multicast(struct net_device *dev) | |||
3059 | } | 3614 | } |
3060 | 3615 | ||
3061 | /** | 3616 | /** |
3062 | * s2io_set_mac_addr - Programs the Xframe mac address | 3617 | * s2io_set_mac_addr - Programs the Xframe mac address |
3063 | * @dev : pointer to the device structure. | 3618 | * @dev : pointer to the device structure. |
3064 | * @addr: a uchar pointer to the new mac address which is to be set. | 3619 | * @addr: a uchar pointer to the new mac address which is to be set. |
3065 | * Description : This procedure will program the Xframe to receive | 3620 | * Description : This procedure will program the Xframe to receive |
3066 | * frames with new Mac Address | 3621 | * frames with new Mac Address |
3067 | * Return value: SUCCESS on success and an appropriate (-)ve integer | 3622 | * Return value: SUCCESS on success and an appropriate (-)ve integer |
3068 | * as defined in errno.h file on failure. | 3623 | * as defined in errno.h file on failure. |
3069 | */ | 3624 | */ |
3070 | 3625 | ||
@@ -3075,10 +3630,10 @@ int s2io_set_mac_addr(struct net_device *dev, u8 * addr) | |||
3075 | register u64 val64, mac_addr = 0; | 3630 | register u64 val64, mac_addr = 0; |
3076 | int i; | 3631 | int i; |
3077 | 3632 | ||
3078 | /* | 3633 | /* |
3079 | * Set the new MAC address as the new unicast filter and reflect this | 3634 | * Set the new MAC address as the new unicast filter and reflect this |
3080 | * change on the device address registered with the OS. It will be | 3635 | * change on the device address registered with the OS. It will be |
3081 | * at offset 0. | 3636 | * at offset 0. |
3082 | */ | 3637 | */ |
3083 | for (i = 0; i < ETH_ALEN; i++) { | 3638 | for (i = 0; i < ETH_ALEN; i++) { |
3084 | mac_addr <<= 8; | 3639 | mac_addr <<= 8; |
@@ -3102,12 +3657,12 @@ int s2io_set_mac_addr(struct net_device *dev, u8 * addr) | |||
3102 | } | 3657 | } |
3103 | 3658 | ||
3104 | /** | 3659 | /** |
3105 | * s2io_ethtool_sset - Sets different link parameters. | 3660 | * s2io_ethtool_sset - Sets different link parameters. |
3106 | * @sp : private member of the device structure, which is a pointer to the * s2io_nic structure. | 3661 | * @sp : private member of the device structure, which is a pointer to the * s2io_nic structure. |
3107 | * @info: pointer to the structure with parameters given by ethtool to set | 3662 | * @info: pointer to the structure with parameters given by ethtool to set |
3108 | * link information. | 3663 | * link information. |
3109 | * Description: | 3664 | * Description: |
3110 | * The function sets different link parameters provided by the user onto | 3665 | * The function sets different link parameters provided by the user onto |
3111 | * the NIC. | 3666 | * the NIC. |
3112 | * Return value: | 3667 | * Return value: |
3113 | * 0 on success. | 3668 | * 0 on success. |
@@ -3129,7 +3684,7 @@ static int s2io_ethtool_sset(struct net_device *dev, | |||
3129 | } | 3684 | } |
3130 | 3685 | ||
3131 | /** | 3686 | /** |
3132 | * s2io_ethtool_gset - Return link specific information. | 3687 | * s2io_ethtool_gset - Return link specific information. |
3133 | * @sp : private member of the device structure, pointer to the | 3688 | * @sp : private member of the device structure, pointer to the |
3134 | * s2io_nic structure. | 3689 | * s2io_nic structure. |
3135 | * @info : pointer to the structure with parameters given by ethtool | 3690 | * @info : pointer to the structure with parameters given by ethtool |
@@ -3161,8 +3716,8 @@ static int s2io_ethtool_gset(struct net_device *dev, struct ethtool_cmd *info) | |||
3161 | } | 3716 | } |
3162 | 3717 | ||
3163 | /** | 3718 | /** |
3164 | * s2io_ethtool_gdrvinfo - Returns driver specific information. | 3719 | * s2io_ethtool_gdrvinfo - Returns driver specific information. |
3165 | * @sp : private member of the device structure, which is a pointer to the | 3720 | * @sp : private member of the device structure, which is a pointer to the |
3166 | * s2io_nic structure. | 3721 | * s2io_nic structure. |
3167 | * @info : pointer to the structure with parameters given by ethtool to | 3722 | * @info : pointer to the structure with parameters given by ethtool to |
3168 | * return driver information. | 3723 | * return driver information. |
@@ -3190,9 +3745,9 @@ static void s2io_ethtool_gdrvinfo(struct net_device *dev, | |||
3190 | 3745 | ||
3191 | /** | 3746 | /** |
3192 | * s2io_ethtool_gregs - dumps the entire space of Xframe into the buffer. | 3747 | * s2io_ethtool_gregs - dumps the entire space of Xframe into the buffer. |
3193 | * @sp: private member of the device structure, which is a pointer to the | 3748 | * @sp: private member of the device structure, which is a pointer to the |
3194 | * s2io_nic structure. | 3749 | * s2io_nic structure. |
3195 | * @regs : pointer to the structure with parameters given by ethtool for | 3750 | * @regs : pointer to the structure with parameters given by ethtool for |
3196 | * dumping the registers. | 3751 | * dumping the registers. |
3197 | * @reg_space: The input argument into which all the registers are dumped. | 3752 | * @reg_space: The input argument into which all the registers are dumped. |
3198 | * Description: | 3753 | * Description: |
@@ -3221,11 +3776,11 @@ static void s2io_ethtool_gregs(struct net_device *dev, | |||
3221 | 3776 | ||
3222 | /** | 3777 | /** |
3223 | * s2io_phy_id - timer function that alternates adapter LED. | 3778 | * s2io_phy_id - timer function that alternates adapter LED. |
3224 | * @data : address of the private member of the device structure, which | 3779 | * @data : address of the private member of the device structure, which |
3225 | * is a pointer to the s2io_nic structure, provided as an u32. | 3780 | * is a pointer to the s2io_nic structure, provided as an u32. |
3226 | * Description: This is actually the timer function that alternates the | 3781 | * Description: This is actually the timer function that alternates the |
3227 | * adapter LED bit of the adapter control bit to set/reset every time on | 3782 | * adapter LED bit of the adapter control bit to set/reset every time on |
3228 | * invocation. The timer is set for 1/2 a second, hence the NIC blinks | 3783 | * invocation. The timer is set for 1/2 a second, hence the NIC blinks |
3229 | * once every second. | 3784 | * once every second. |
3230 | */ | 3785 | */ |
3231 | static void s2io_phy_id(unsigned long data) | 3786 | static void s2io_phy_id(unsigned long data) |
@@ -3236,7 +3791,8 @@ static void s2io_phy_id(unsigned long data) | |||
3236 | u16 subid; | 3791 | u16 subid; |
3237 | 3792 | ||
3238 | subid = sp->pdev->subsystem_device; | 3793 | subid = sp->pdev->subsystem_device; |
3239 | if ((subid & 0xFF) >= 0x07) { | 3794 | if ((sp->device_type == XFRAME_II_DEVICE) || |
3795 | ((subid & 0xFF) >= 0x07)) { | ||
3240 | val64 = readq(&bar0->gpio_control); | 3796 | val64 = readq(&bar0->gpio_control); |
3241 | val64 ^= GPIO_CTRL_GPIO_0; | 3797 | val64 ^= GPIO_CTRL_GPIO_0; |
3242 | writeq(val64, &bar0->gpio_control); | 3798 | writeq(val64, &bar0->gpio_control); |
@@ -3253,12 +3809,12 @@ static void s2io_phy_id(unsigned long data) | |||
3253 | * s2io_ethtool_idnic - To physically identify the nic on the system. | 3809 | * s2io_ethtool_idnic - To physically identify the nic on the system. |
3254 | * @sp : private member of the device structure, which is a pointer to the | 3810 | * @sp : private member of the device structure, which is a pointer to the |
3255 | * s2io_nic structure. | 3811 | * s2io_nic structure. |
3256 | * @id : pointer to the structure with identification parameters given by | 3812 | * @id : pointer to the structure with identification parameters given by |
3257 | * ethtool. | 3813 | * ethtool. |
3258 | * Description: Used to physically identify the NIC on the system. | 3814 | * Description: Used to physically identify the NIC on the system. |
3259 | * The Link LED will blink for a time specified by the user for | 3815 | * The Link LED will blink for a time specified by the user for |
3260 | * identification. | 3816 | * identification. |
3261 | * NOTE: The Link has to be Up to be able to blink the LED. Hence | 3817 | * NOTE: The Link has to be Up to be able to blink the LED. Hence |
3262 | * identification is possible only if its link is up. | 3818 | * identification is possible only if its link is up. |
3263 | * Return value: | 3819 | * Return value: |
3264 | * int , returns 0 on success | 3820 | * int , returns 0 on success |
@@ -3273,7 +3829,8 @@ static int s2io_ethtool_idnic(struct net_device *dev, u32 data) | |||
3273 | 3829 | ||
3274 | subid = sp->pdev->subsystem_device; | 3830 | subid = sp->pdev->subsystem_device; |
3275 | last_gpio_ctrl_val = readq(&bar0->gpio_control); | 3831 | last_gpio_ctrl_val = readq(&bar0->gpio_control); |
3276 | if ((subid & 0xFF) < 0x07) { | 3832 | if ((sp->device_type == XFRAME_I_DEVICE) && |
3833 | ((subid & 0xFF) < 0x07)) { | ||
3277 | val64 = readq(&bar0->adapter_control); | 3834 | val64 = readq(&bar0->adapter_control); |
3278 | if (!(val64 & ADAPTER_CNTL_EN)) { | 3835 | if (!(val64 & ADAPTER_CNTL_EN)) { |
3279 | printk(KERN_ERR | 3836 | printk(KERN_ERR |
@@ -3288,12 +3845,12 @@ static int s2io_ethtool_idnic(struct net_device *dev, u32 data) | |||
3288 | } | 3845 | } |
3289 | mod_timer(&sp->id_timer, jiffies); | 3846 | mod_timer(&sp->id_timer, jiffies); |
3290 | if (data) | 3847 | if (data) |
3291 | msleep(data * 1000); | 3848 | msleep_interruptible(data * HZ); |
3292 | else | 3849 | else |
3293 | msleep(0xFFFFFFFF); | 3850 | msleep_interruptible(MAX_FLICKER_TIME); |
3294 | del_timer_sync(&sp->id_timer); | 3851 | del_timer_sync(&sp->id_timer); |
3295 | 3852 | ||
3296 | if (CARDS_WITH_FAULTY_LINK_INDICATORS(subid)) { | 3853 | if (CARDS_WITH_FAULTY_LINK_INDICATORS(sp->device_type, subid)) { |
3297 | writeq(last_gpio_ctrl_val, &bar0->gpio_control); | 3854 | writeq(last_gpio_ctrl_val, &bar0->gpio_control); |
3298 | last_gpio_ctrl_val = readq(&bar0->gpio_control); | 3855 | last_gpio_ctrl_val = readq(&bar0->gpio_control); |
3299 | } | 3856 | } |
@@ -3303,7 +3860,8 @@ static int s2io_ethtool_idnic(struct net_device *dev, u32 data) | |||
3303 | 3860 | ||
3304 | /** | 3861 | /** |
3305 | * s2io_ethtool_getpause_data - Pause frame generation and reception. | 3862 | * s2io_ethtool_getpause_data - Pause frame generation and reception. |
3306 | * @sp : private member of the device structure, which is a pointer to the * s2io_nic structure. | 3863 | * @sp : private member of the device structure, which is a pointer to the |
3864 | * s2io_nic structure. | ||
3307 | * @ep : pointer to the structure with pause parameters given by ethtool. | 3865 | * @ep : pointer to the structure with pause parameters given by ethtool. |
3308 | * Description: | 3866 | * Description: |
3309 | * Returns the Pause frame generation and reception capability of the NIC. | 3867 | * Returns the Pause frame generation and reception capability of the NIC. |
@@ -3327,7 +3885,7 @@ static void s2io_ethtool_getpause_data(struct net_device *dev, | |||
3327 | 3885 | ||
3328 | /** | 3886 | /** |
3329 | * s2io_ethtool_setpause_data - set/reset pause frame generation. | 3887 | * s2io_ethtool_setpause_data - set/reset pause frame generation. |
3330 | * @sp : private member of the device structure, which is a pointer to the | 3888 | * @sp : private member of the device structure, which is a pointer to the |
3331 | * s2io_nic structure. | 3889 | * s2io_nic structure. |
3332 | * @ep : pointer to the structure with pause parameters given by ethtool. | 3890 | * @ep : pointer to the structure with pause parameters given by ethtool. |
3333 | * Description: | 3891 | * Description: |
@@ -3338,7 +3896,7 @@ static void s2io_ethtool_getpause_data(struct net_device *dev, | |||
3338 | */ | 3896 | */ |
3339 | 3897 | ||
3340 | static int s2io_ethtool_setpause_data(struct net_device *dev, | 3898 | static int s2io_ethtool_setpause_data(struct net_device *dev, |
3341 | struct ethtool_pauseparam *ep) | 3899 | struct ethtool_pauseparam *ep) |
3342 | { | 3900 | { |
3343 | u64 val64; | 3901 | u64 val64; |
3344 | nic_t *sp = dev->priv; | 3902 | nic_t *sp = dev->priv; |
@@ -3359,13 +3917,13 @@ static int s2io_ethtool_setpause_data(struct net_device *dev, | |||
3359 | 3917 | ||
3360 | /** | 3918 | /** |
3361 | * read_eeprom - reads 4 bytes of data from user given offset. | 3919 | * read_eeprom - reads 4 bytes of data from user given offset. |
3362 | * @sp : private member of the device structure, which is a pointer to the | 3920 | * @sp : private member of the device structure, which is a pointer to the |
3363 | * s2io_nic structure. | 3921 | * s2io_nic structure. |
3364 | * @off : offset at which the data must be written | 3922 | * @off : offset at which the data must be written |
3365 | * @data : Its an output parameter where the data read at the given | 3923 | * @data : Its an output parameter where the data read at the given |
3366 | * offset is stored. | 3924 | * offset is stored. |
3367 | * Description: | 3925 | * Description: |
3368 | * Will read 4 bytes of data from the user given offset and return the | 3926 | * Will read 4 bytes of data from the user given offset and return the |
3369 | * read data. | 3927 | * read data. |
3370 | * NOTE: Will allow to read only part of the EEPROM visible through the | 3928 | * NOTE: Will allow to read only part of the EEPROM visible through the |
3371 | * I2C bus. | 3929 | * I2C bus. |
@@ -3406,7 +3964,7 @@ static int read_eeprom(nic_t * sp, int off, u32 * data) | |||
3406 | * s2io_nic structure. | 3964 | * s2io_nic structure. |
3407 | * @off : offset at which the data must be written | 3965 | * @off : offset at which the data must be written |
3408 | * @data : The data that is to be written | 3966 | * @data : The data that is to be written |
3409 | * @cnt : Number of bytes of the data that are actually to be written into | 3967 | * @cnt : Number of bytes of the data that are actually to be written into |
3410 | * the Eeprom. (max of 3) | 3968 | * the Eeprom. (max of 3) |
3411 | * Description: | 3969 | * Description: |
3412 | * Actually writes the relevant part of the data value into the Eeprom | 3970 | * Actually writes the relevant part of the data value into the Eeprom |
@@ -3443,7 +4001,7 @@ static int write_eeprom(nic_t * sp, int off, u32 data, int cnt) | |||
3443 | /** | 4001 | /** |
3444 | * s2io_ethtool_geeprom - reads the value stored in the Eeprom. | 4002 | * s2io_ethtool_geeprom - reads the value stored in the Eeprom. |
3445 | * @sp : private member of the device structure, which is a pointer to the * s2io_nic structure. | 4003 | * @sp : private member of the device structure, which is a pointer to the * s2io_nic structure. |
3446 | * @eeprom : pointer to the user level structure provided by ethtool, | 4004 | * @eeprom : pointer to the user level structure provided by ethtool, |
3447 | * containing all relevant information. | 4005 | * containing all relevant information. |
3448 | * @data_buf : user defined value to be written into Eeprom. | 4006 | * @data_buf : user defined value to be written into Eeprom. |
3449 | * Description: Reads the values stored in the Eeprom at given offset | 4007 | * Description: Reads the values stored in the Eeprom at given offset |
@@ -3454,7 +4012,7 @@ static int write_eeprom(nic_t * sp, int off, u32 data, int cnt) | |||
3454 | */ | 4012 | */ |
3455 | 4013 | ||
3456 | static int s2io_ethtool_geeprom(struct net_device *dev, | 4014 | static int s2io_ethtool_geeprom(struct net_device *dev, |
3457 | struct ethtool_eeprom *eeprom, u8 * data_buf) | 4015 | struct ethtool_eeprom *eeprom, u8 * data_buf) |
3458 | { | 4016 | { |
3459 | u32 data, i, valid; | 4017 | u32 data, i, valid; |
3460 | nic_t *sp = dev->priv; | 4018 | nic_t *sp = dev->priv; |
@@ -3479,7 +4037,7 @@ static int s2io_ethtool_geeprom(struct net_device *dev, | |||
3479 | * s2io_ethtool_seeprom - tries to write the user provided value in Eeprom | 4037 | * s2io_ethtool_seeprom - tries to write the user provided value in Eeprom |
3480 | * @sp : private member of the device structure, which is a pointer to the | 4038 | * @sp : private member of the device structure, which is a pointer to the |
3481 | * s2io_nic structure. | 4039 | * s2io_nic structure. |
3482 | * @eeprom : pointer to the user level structure provided by ethtool, | 4040 | * @eeprom : pointer to the user level structure provided by ethtool, |
3483 | * containing all relevant information. | 4041 | * containing all relevant information. |
3484 | * @data_buf ; user defined value to be written into Eeprom. | 4042 | * @data_buf ; user defined value to be written into Eeprom. |
3485 | * Description: | 4043 | * Description: |
@@ -3527,8 +4085,8 @@ static int s2io_ethtool_seeprom(struct net_device *dev, | |||
3527 | } | 4085 | } |
3528 | 4086 | ||
3529 | /** | 4087 | /** |
3530 | * s2io_register_test - reads and writes into all clock domains. | 4088 | * s2io_register_test - reads and writes into all clock domains. |
3531 | * @sp : private member of the device structure, which is a pointer to the | 4089 | * @sp : private member of the device structure, which is a pointer to the |
3532 | * s2io_nic structure. | 4090 | * s2io_nic structure. |
3533 | * @data : variable that returns the result of each of the tests conducted | 4091 | * @data : variable that returns the result of each of the tests conducted |
3534 | * by the driver. | 4092 | * by the driver. |
@@ -3545,8 +4103,8 @@ static int s2io_register_test(nic_t * sp, uint64_t * data) | |||
3545 | u64 val64 = 0; | 4103 | u64 val64 = 0; |
3546 | int fail = 0; | 4104 | int fail = 0; |
3547 | 4105 | ||
3548 | val64 = readq(&bar0->pcc_enable); | 4106 | val64 = readq(&bar0->pif_rd_swapper_fb); |
3549 | if (val64 != 0xff00000000000000ULL) { | 4107 | if (val64 != 0x123456789abcdefULL) { |
3550 | fail = 1; | 4108 | fail = 1; |
3551 | DBG_PRINT(INFO_DBG, "Read Test level 1 fails\n"); | 4109 | DBG_PRINT(INFO_DBG, "Read Test level 1 fails\n"); |
3552 | } | 4110 | } |
@@ -3590,13 +4148,13 @@ static int s2io_register_test(nic_t * sp, uint64_t * data) | |||
3590 | } | 4148 | } |
3591 | 4149 | ||
3592 | /** | 4150 | /** |
3593 | * s2io_eeprom_test - to verify that EEprom in the xena can be programmed. | 4151 | * s2io_eeprom_test - to verify that EEprom in the xena can be programmed. |
3594 | * @sp : private member of the device structure, which is a pointer to the | 4152 | * @sp : private member of the device structure, which is a pointer to the |
3595 | * s2io_nic structure. | 4153 | * s2io_nic structure. |
3596 | * @data:variable that returns the result of each of the test conducted by | 4154 | * @data:variable that returns the result of each of the test conducted by |
3597 | * the driver. | 4155 | * the driver. |
3598 | * Description: | 4156 | * Description: |
3599 | * Verify that EEPROM in the xena can be programmed using I2C_CONTROL | 4157 | * Verify that EEPROM in the xena can be programmed using I2C_CONTROL |
3600 | * register. | 4158 | * register. |
3601 | * Return value: | 4159 | * Return value: |
3602 | * 0 on success. | 4160 | * 0 on success. |
@@ -3661,14 +4219,14 @@ static int s2io_eeprom_test(nic_t * sp, uint64_t * data) | |||
3661 | 4219 | ||
3662 | /** | 4220 | /** |
3663 | * s2io_bist_test - invokes the MemBist test of the card . | 4221 | * s2io_bist_test - invokes the MemBist test of the card . |
3664 | * @sp : private member of the device structure, which is a pointer to the | 4222 | * @sp : private member of the device structure, which is a pointer to the |
3665 | * s2io_nic structure. | 4223 | * s2io_nic structure. |
3666 | * @data:variable that returns the result of each of the test conducted by | 4224 | * @data:variable that returns the result of each of the test conducted by |
3667 | * the driver. | 4225 | * the driver. |
3668 | * Description: | 4226 | * Description: |
3669 | * This invokes the MemBist test of the card. We give around | 4227 | * This invokes the MemBist test of the card. We give around |
3670 | * 2 seconds for the test to complete. If it's still not complete | 4228 | * 2 seconds for the test to complete. If it's still not complete |
3671 | * within this period, we consider that the test failed. | 4229 | * within this period, we consider that the test failed. |
3672 | * Return value: | 4230 | * Return value: |
3673 | * 0 on success and -1 on failure. | 4231 | * 0 on success and -1 on failure. |
3674 | */ | 4232 | */ |
@@ -3697,13 +4255,13 @@ static int s2io_bist_test(nic_t * sp, uint64_t * data) | |||
3697 | } | 4255 | } |
3698 | 4256 | ||
3699 | /** | 4257 | /** |
3700 | * s2io_link_test - verifies the link state of the nic | 4258 | * s2io_link_test - verifies the link state of the nic |
3701 | * @sp : private member of the device structure, which is a pointer to the | 4259 | * @sp : private member of the device structure, which is a pointer to the |
3702 | * s2io_nic structure. | 4260 | * s2io_nic structure. |
3703 | * @data: variable that returns the result of each of the test conducted by | 4261 | * @data: variable that returns the result of each of the test conducted by |
3704 | * the driver. | 4262 | * the driver. |
3705 | * Description: | 4263 | * Description: |
3706 | * The function verifies the link state of the NIC and updates the input | 4264 | * The function verifies the link state of the NIC and updates the input |
3707 | * argument 'data' appropriately. | 4265 | * argument 'data' appropriately. |
3708 | * Return value: | 4266 | * Return value: |
3709 | * 0 on success. | 4267 | * 0 on success. |
@@ -3722,13 +4280,13 @@ static int s2io_link_test(nic_t * sp, uint64_t * data) | |||
3722 | } | 4280 | } |
3723 | 4281 | ||
3724 | /** | 4282 | /** |
3725 | * s2io_rldram_test - offline test for access to the RldRam chip on the NIC | 4283 | * s2io_rldram_test - offline test for access to the RldRam chip on the NIC |
3726 | * @sp - private member of the device structure, which is a pointer to the | 4284 | * @sp - private member of the device structure, which is a pointer to the |
3727 | * s2io_nic structure. | 4285 | * s2io_nic structure. |
3728 | * @data - variable that returns the result of each of the test | 4286 | * @data - variable that returns the result of each of the test |
3729 | * conducted by the driver. | 4287 | * conducted by the driver. |
3730 | * Description: | 4288 | * Description: |
3731 | * This is one of the offline tests that tests the read and write | 4289 | * This is one of the offline tests that tests the read and write |
3732 | * access to the RldRam chip on the NIC. | 4290 | * access to the RldRam chip on the NIC. |
3733 | * Return value: | 4291 | * Return value: |
3734 | * 0 on success. | 4292 | * 0 on success. |
@@ -3833,7 +4391,7 @@ static int s2io_rldram_test(nic_t * sp, uint64_t * data) | |||
3833 | * s2io_nic structure. | 4391 | * s2io_nic structure. |
3834 | * @ethtest : pointer to a ethtool command specific structure that will be | 4392 | * @ethtest : pointer to a ethtool command specific structure that will be |
3835 | * returned to the user. | 4393 | * returned to the user. |
3836 | * @data : variable that returns the result of each of the tests | 4394 | * @data : variable that returns the result of each of the tests |
3837 | * conducted by the driver. | 4395 | * conducted by the driver. |
3838 | * Description: | 4396 | * Description: |
3839 | * This function conducts 6 tests ( 4 offline and 2 online) to determine | 4397 | * This function conducts 6 tests ( 4 offline and 2 online) to determine |
@@ -3851,23 +4409,18 @@ static void s2io_ethtool_test(struct net_device *dev, | |||
3851 | 4409 | ||
3852 | if (ethtest->flags == ETH_TEST_FL_OFFLINE) { | 4410 | if (ethtest->flags == ETH_TEST_FL_OFFLINE) { |
3853 | /* Offline Tests. */ | 4411 | /* Offline Tests. */ |
3854 | if (orig_state) { | 4412 | if (orig_state) |
3855 | s2io_close(sp->dev); | 4413 | s2io_close(sp->dev); |
3856 | s2io_set_swapper(sp); | ||
3857 | } else | ||
3858 | s2io_set_swapper(sp); | ||
3859 | 4414 | ||
3860 | if (s2io_register_test(sp, &data[0])) | 4415 | if (s2io_register_test(sp, &data[0])) |
3861 | ethtest->flags |= ETH_TEST_FL_FAILED; | 4416 | ethtest->flags |= ETH_TEST_FL_FAILED; |
3862 | 4417 | ||
3863 | s2io_reset(sp); | 4418 | s2io_reset(sp); |
3864 | s2io_set_swapper(sp); | ||
3865 | 4419 | ||
3866 | if (s2io_rldram_test(sp, &data[3])) | 4420 | if (s2io_rldram_test(sp, &data[3])) |
3867 | ethtest->flags |= ETH_TEST_FL_FAILED; | 4421 | ethtest->flags |= ETH_TEST_FL_FAILED; |
3868 | 4422 | ||
3869 | s2io_reset(sp); | 4423 | s2io_reset(sp); |
3870 | s2io_set_swapper(sp); | ||
3871 | 4424 | ||
3872 | if (s2io_eeprom_test(sp, &data[1])) | 4425 | if (s2io_eeprom_test(sp, &data[1])) |
3873 | ethtest->flags |= ETH_TEST_FL_FAILED; | 4426 | ethtest->flags |= ETH_TEST_FL_FAILED; |
@@ -3910,61 +4463,111 @@ static void s2io_get_ethtool_stats(struct net_device *dev, | |||
3910 | nic_t *sp = dev->priv; | 4463 | nic_t *sp = dev->priv; |
3911 | StatInfo_t *stat_info = sp->mac_control.stats_info; | 4464 | StatInfo_t *stat_info = sp->mac_control.stats_info; |
3912 | 4465 | ||
3913 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_frms); | 4466 | s2io_updt_stats(sp); |
3914 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_data_octets); | 4467 | tmp_stats[i++] = |
4468 | (u64)le32_to_cpu(stat_info->tmac_frms_oflow) << 32 | | ||
4469 | le32_to_cpu(stat_info->tmac_frms); | ||
4470 | tmp_stats[i++] = | ||
4471 | (u64)le32_to_cpu(stat_info->tmac_data_octets_oflow) << 32 | | ||
4472 | le32_to_cpu(stat_info->tmac_data_octets); | ||
3915 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_drop_frms); | 4473 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_drop_frms); |
3916 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_mcst_frms); | 4474 | tmp_stats[i++] = |
3917 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_bcst_frms); | 4475 | (u64)le32_to_cpu(stat_info->tmac_mcst_frms_oflow) << 32 | |
4476 | le32_to_cpu(stat_info->tmac_mcst_frms); | ||
4477 | tmp_stats[i++] = | ||
4478 | (u64)le32_to_cpu(stat_info->tmac_bcst_frms_oflow) << 32 | | ||
4479 | le32_to_cpu(stat_info->tmac_bcst_frms); | ||
3918 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_pause_ctrl_frms); | 4480 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_pause_ctrl_frms); |
3919 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_any_err_frms); | 4481 | tmp_stats[i++] = |
4482 | (u64)le32_to_cpu(stat_info->tmac_any_err_frms_oflow) << 32 | | ||
4483 | le32_to_cpu(stat_info->tmac_any_err_frms); | ||
3920 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_vld_ip_octets); | 4484 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_vld_ip_octets); |
3921 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_vld_ip); | 4485 | tmp_stats[i++] = |
3922 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_drop_ip); | 4486 | (u64)le32_to_cpu(stat_info->tmac_vld_ip_oflow) << 32 | |
3923 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_icmp); | 4487 | le32_to_cpu(stat_info->tmac_vld_ip); |
3924 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_rst_tcp); | 4488 | tmp_stats[i++] = |
4489 | (u64)le32_to_cpu(stat_info->tmac_drop_ip_oflow) << 32 | | ||
4490 | le32_to_cpu(stat_info->tmac_drop_ip); | ||
4491 | tmp_stats[i++] = | ||
4492 | (u64)le32_to_cpu(stat_info->tmac_icmp_oflow) << 32 | | ||
4493 | le32_to_cpu(stat_info->tmac_icmp); | ||
4494 | tmp_stats[i++] = | ||
4495 | (u64)le32_to_cpu(stat_info->tmac_rst_tcp_oflow) << 32 | | ||
4496 | le32_to_cpu(stat_info->tmac_rst_tcp); | ||
3925 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_tcp); | 4497 | tmp_stats[i++] = le64_to_cpu(stat_info->tmac_tcp); |
3926 | tmp_stats[i++] = le32_to_cpu(stat_info->tmac_udp); | 4498 | tmp_stats[i++] = (u64)le32_to_cpu(stat_info->tmac_udp_oflow) << 32 | |
3927 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_vld_frms); | 4499 | le32_to_cpu(stat_info->tmac_udp); |
3928 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_data_octets); | 4500 | tmp_stats[i++] = |
4501 | (u64)le32_to_cpu(stat_info->rmac_vld_frms_oflow) << 32 | | ||
4502 | le32_to_cpu(stat_info->rmac_vld_frms); | ||
4503 | tmp_stats[i++] = | ||
4504 | (u64)le32_to_cpu(stat_info->rmac_data_octets_oflow) << 32 | | ||
4505 | le32_to_cpu(stat_info->rmac_data_octets); | ||
3929 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_fcs_err_frms); | 4506 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_fcs_err_frms); |
3930 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_drop_frms); | 4507 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_drop_frms); |
3931 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_vld_mcst_frms); | 4508 | tmp_stats[i++] = |
3932 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_vld_bcst_frms); | 4509 | (u64)le32_to_cpu(stat_info->rmac_vld_mcst_frms_oflow) << 32 | |
4510 | le32_to_cpu(stat_info->rmac_vld_mcst_frms); | ||
4511 | tmp_stats[i++] = | ||
4512 | (u64)le32_to_cpu(stat_info->rmac_vld_bcst_frms_oflow) << 32 | | ||
4513 | le32_to_cpu(stat_info->rmac_vld_bcst_frms); | ||
3933 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_in_rng_len_err_frms); | 4514 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_in_rng_len_err_frms); |
3934 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_long_frms); | 4515 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_long_frms); |
3935 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_pause_ctrl_frms); | 4516 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_pause_ctrl_frms); |
3936 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_discarded_frms); | 4517 | tmp_stats[i++] = |
3937 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_usized_frms); | 4518 | (u64)le32_to_cpu(stat_info->rmac_discarded_frms_oflow) << 32 | |
3938 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_osized_frms); | 4519 | le32_to_cpu(stat_info->rmac_discarded_frms); |
3939 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_frag_frms); | 4520 | tmp_stats[i++] = |
3940 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_jabber_frms); | 4521 | (u64)le32_to_cpu(stat_info->rmac_usized_frms_oflow) << 32 | |
3941 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_ip); | 4522 | le32_to_cpu(stat_info->rmac_usized_frms); |
4523 | tmp_stats[i++] = | ||
4524 | (u64)le32_to_cpu(stat_info->rmac_osized_frms_oflow) << 32 | | ||
4525 | le32_to_cpu(stat_info->rmac_osized_frms); | ||
4526 | tmp_stats[i++] = | ||
4527 | (u64)le32_to_cpu(stat_info->rmac_frag_frms_oflow) << 32 | | ||
4528 | le32_to_cpu(stat_info->rmac_frag_frms); | ||
4529 | tmp_stats[i++] = | ||
4530 | (u64)le32_to_cpu(stat_info->rmac_jabber_frms_oflow) << 32 | | ||
4531 | le32_to_cpu(stat_info->rmac_jabber_frms); | ||
4532 | tmp_stats[i++] = (u64)le32_to_cpu(stat_info->rmac_ip_oflow) << 32 | | ||
4533 | le32_to_cpu(stat_info->rmac_ip); | ||
3942 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_ip_octets); | 4534 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_ip_octets); |
3943 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_hdr_err_ip); | 4535 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_hdr_err_ip); |
3944 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_drop_ip); | 4536 | tmp_stats[i++] = (u64)le32_to_cpu(stat_info->rmac_drop_ip_oflow) << 32 | |
3945 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_icmp); | 4537 | le32_to_cpu(stat_info->rmac_drop_ip); |
4538 | tmp_stats[i++] = (u64)le32_to_cpu(stat_info->rmac_icmp_oflow) << 32 | | ||
4539 | le32_to_cpu(stat_info->rmac_icmp); | ||
3946 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_tcp); | 4540 | tmp_stats[i++] = le64_to_cpu(stat_info->rmac_tcp); |
3947 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_udp); | 4541 | tmp_stats[i++] = (u64)le32_to_cpu(stat_info->rmac_udp_oflow) << 32 | |
3948 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_err_drp_udp); | 4542 | le32_to_cpu(stat_info->rmac_udp); |
3949 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_pause_cnt); | 4543 | tmp_stats[i++] = |
3950 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_accepted_ip); | 4544 | (u64)le32_to_cpu(stat_info->rmac_err_drp_udp_oflow) << 32 | |
4545 | le32_to_cpu(stat_info->rmac_err_drp_udp); | ||
4546 | tmp_stats[i++] = | ||
4547 | (u64)le32_to_cpu(stat_info->rmac_pause_cnt_oflow) << 32 | | ||
4548 | le32_to_cpu(stat_info->rmac_pause_cnt); | ||
4549 | tmp_stats[i++] = | ||
4550 | (u64)le32_to_cpu(stat_info->rmac_accepted_ip_oflow) << 32 | | ||
4551 | le32_to_cpu(stat_info->rmac_accepted_ip); | ||
3951 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_err_tcp); | 4552 | tmp_stats[i++] = le32_to_cpu(stat_info->rmac_err_tcp); |
4553 | tmp_stats[i++] = 0; | ||
4554 | tmp_stats[i++] = stat_info->sw_stat.single_ecc_errs; | ||
4555 | tmp_stats[i++] = stat_info->sw_stat.double_ecc_errs; | ||
3952 | } | 4556 | } |
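Most of the new ethtool statistics above are built by stitching a 32-bit overflow counter and the matching 32-bit hardware counter into one 64-bit value with (u64)high << 32 | low. A small self-contained sketch of that composition follows; the counter values are invented for the example and the little-endian conversion done by le32_to_cpu in the driver is omitted for brevity.

	/* Sketch: compose a 64-bit statistic from a 32-bit overflow (high)
	 * word and a 32-bit counter (low) word, as done for tmac_frms above. */
	#include <stdint.h>
	#include <stdio.h>
	#include <inttypes.h>

	static uint64_t stat64(uint32_t oflow, uint32_t cnt)
	{
		return ((uint64_t)oflow << 32) | cnt;
	}

	int main(void)
	{
		uint32_t tmac_frms_oflow = 2;      /* counter wrapped twice  */
		uint32_t tmac_frms = 0x1234;       /* current low 32 bits    */

		printf("tmac_frms = %" PRIu64 "\n",
		       stat64(tmac_frms_oflow, tmac_frms));
		return 0;
	}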
3953 | 4557 | ||
3954 | static int s2io_ethtool_get_regs_len(struct net_device *dev) | 4558 | int s2io_ethtool_get_regs_len(struct net_device *dev) |
3955 | { | 4559 | { |
3956 | return (XENA_REG_SPACE); | 4560 | return (XENA_REG_SPACE); |
3957 | } | 4561 | } |
3958 | 4562 | ||
3959 | 4563 | ||
3960 | static u32 s2io_ethtool_get_rx_csum(struct net_device * dev) | 4564 | u32 s2io_ethtool_get_rx_csum(struct net_device * dev) |
3961 | { | 4565 | { |
3962 | nic_t *sp = dev->priv; | 4566 | nic_t *sp = dev->priv; |
3963 | 4567 | ||
3964 | return (sp->rx_csum); | 4568 | return (sp->rx_csum); |
3965 | } | 4569 | } |
3966 | 4570 | int s2io_ethtool_set_rx_csum(struct net_device *dev, u32 data) | |
3967 | static int s2io_ethtool_set_rx_csum(struct net_device *dev, u32 data) | ||
3968 | { | 4571 | { |
3969 | nic_t *sp = dev->priv; | 4572 | nic_t *sp = dev->priv; |
3970 | 4573 | ||
@@ -3975,19 +4578,17 @@ static int s2io_ethtool_set_rx_csum(struct net_device *dev, u32 data) | |||
3975 | 4578 | ||
3976 | return 0; | 4579 | return 0; |
3977 | } | 4580 | } |
3978 | 4581 | int s2io_get_eeprom_len(struct net_device *dev) | |
3979 | static int s2io_get_eeprom_len(struct net_device *dev) | ||
3980 | { | 4582 | { |
3981 | return (XENA_EEPROM_SPACE); | 4583 | return (XENA_EEPROM_SPACE); |
3982 | } | 4584 | } |
3983 | 4585 | ||
3984 | static int s2io_ethtool_self_test_count(struct net_device *dev) | 4586 | int s2io_ethtool_self_test_count(struct net_device *dev) |
3985 | { | 4587 | { |
3986 | return (S2IO_TEST_LEN); | 4588 | return (S2IO_TEST_LEN); |
3987 | } | 4589 | } |
3988 | 4590 | void s2io_ethtool_get_strings(struct net_device *dev, | |
3989 | static void s2io_ethtool_get_strings(struct net_device *dev, | 4591 | u32 stringset, u8 * data) |
3990 | u32 stringset, u8 * data) | ||
3991 | { | 4592 | { |
3992 | switch (stringset) { | 4593 | switch (stringset) { |
3993 | case ETH_SS_TEST: | 4594 | case ETH_SS_TEST: |
@@ -3998,13 +4599,12 @@ static void s2io_ethtool_get_strings(struct net_device *dev, | |||
3998 | sizeof(ethtool_stats_keys)); | 4599 | sizeof(ethtool_stats_keys)); |
3999 | } | 4600 | } |
4000 | } | 4601 | } |
4001 | |||
4002 | static int s2io_ethtool_get_stats_count(struct net_device *dev) | 4602 | static int s2io_ethtool_get_stats_count(struct net_device *dev) |
4003 | { | 4603 | { |
4004 | return (S2IO_STAT_LEN); | 4604 | return (S2IO_STAT_LEN); |
4005 | } | 4605 | } |
4006 | 4606 | ||
4007 | static int s2io_ethtool_op_set_tx_csum(struct net_device *dev, u32 data) | 4607 | int s2io_ethtool_op_set_tx_csum(struct net_device *dev, u32 data) |
4008 | { | 4608 | { |
4009 | if (data) | 4609 | if (data) |
4010 | dev->features |= NETIF_F_IP_CSUM; | 4610 | dev->features |= NETIF_F_IP_CSUM; |
@@ -4046,21 +4646,18 @@ static struct ethtool_ops netdev_ethtool_ops = { | |||
4046 | }; | 4646 | }; |
4047 | 4647 | ||
4048 | /** | 4648 | /** |
4049 | * s2io_ioctl - Entry point for the Ioctl | 4649 | * s2io_ioctl - Entry point for the Ioctl |
4050 | * @dev : Device pointer. | 4650 | * @dev : Device pointer. |
4051 | * @ifr : An IOCTL specific structure that can contain a pointer to | 4651 | * @ifr : An IOCTL specific structure that can contain a pointer to |
4052 | * a proprietary structure used to pass information to the driver. | 4652 | * a proprietary structure used to pass information to the driver. |
4053 | * @cmd : This is used to distinguish between the different commands that | 4653 | * @cmd : This is used to distinguish between the different commands that |
4054 | * can be passed to the IOCTL functions. | 4654 | * can be passed to the IOCTL functions. |
4055 | * Description: | 4655 | * Description: |
4056 | * This function has support for ethtool, adding multiple MAC addresses on | 4656 | * Currently there is no special functionality supported in IOCTL, hence |
4057 | * the NIC and some DBG commands for the util tool. | 4657 | * the function always returns -EOPNOTSUPP. |
4058 | * Return value: | ||
4059 | * Currently the IOCTL supports no operations, hence by default this | ||
4060 | * function returns OP NOT SUPPORTED value. | ||
4061 | */ | 4658 | */ |
4062 | 4659 | ||
4063 | static int s2io_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) | 4660 | int s2io_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) |
4064 | { | 4661 | { |
4065 | return -EOPNOTSUPP; | 4662 | return -EOPNOTSUPP; |
4066 | } | 4663 | } |
@@ -4076,17 +4673,9 @@ static int s2io_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) | |||
4076 | * file on failure. | 4673 | * file on failure. |
4077 | */ | 4674 | */ |
4078 | 4675 | ||
4079 | static int s2io_change_mtu(struct net_device *dev, int new_mtu) | 4676 | int s2io_change_mtu(struct net_device *dev, int new_mtu) |
4080 | { | 4677 | { |
4081 | nic_t *sp = dev->priv; | 4678 | nic_t *sp = dev->priv; |
4082 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | ||
4083 | register u64 val64; | ||
4084 | |||
4085 | if (netif_running(dev)) { | ||
4086 | DBG_PRINT(ERR_DBG, "%s: Must be stopped to ", dev->name); | ||
4087 | DBG_PRINT(ERR_DBG, "change its MTU \n"); | ||
4088 | return -EBUSY; | ||
4089 | } | ||
4090 | 4679 | ||
4091 | if ((new_mtu < MIN_MTU) || (new_mtu > S2IO_JUMBO_SIZE)) { | 4680 | if ((new_mtu < MIN_MTU) || (new_mtu > S2IO_JUMBO_SIZE)) { |
4092 | DBG_PRINT(ERR_DBG, "%s: MTU size is invalid.\n", | 4681 | DBG_PRINT(ERR_DBG, "%s: MTU size is invalid.\n", |
@@ -4094,11 +4683,22 @@ static int s2io_change_mtu(struct net_device *dev, int new_mtu) | |||
4094 | return -EPERM; | 4683 | return -EPERM; |
4095 | } | 4684 | } |
4096 | 4685 | ||
4097 | /* Set the new MTU into the PYLD register of the NIC */ | ||
4098 | val64 = new_mtu; | ||
4099 | writeq(vBIT(val64, 2, 14), &bar0->rmac_max_pyld_len); | ||
4100 | |||
4101 | dev->mtu = new_mtu; | 4686 | dev->mtu = new_mtu; |
4687 | if (netif_running(dev)) { | ||
4688 | s2io_card_down(sp); | ||
4689 | netif_stop_queue(dev); | ||
4690 | if (s2io_card_up(sp)) { | ||
4691 | DBG_PRINT(ERR_DBG, "%s: Device bring up failed\n", | ||
4692 | __FUNCTION__); | ||
4693 | } | ||
4694 | if (netif_queue_stopped(dev)) | ||
4695 | netif_wake_queue(dev); | ||
4696 | } else { /* Device is down */ | ||
4697 | XENA_dev_config_t __iomem *bar0 = sp->bar0; | ||
4698 | u64 val64 = new_mtu; | ||
4699 | |||
4700 | writeq(vBIT(val64, 2, 14), &bar0->rmac_max_pyld_len); | ||
4701 | } | ||
4102 | 4702 | ||
4103 | return 0; | 4703 | return 0; |
4104 | } | 4704 | } |
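With this change an MTU update no longer requires the interface to be closed: if the device is running, the driver tears the card down, stores the new MTU and brings the card back up (the reinitialisation programs the hardware); only when the interface is down does it write the RMAC payload-length register directly. Below is a hedged sketch of that decision flow; the device helpers and limits are placeholders, only the control flow mirrors the hunk above.

	/* Sketch of the MTU-change flow above.  card_down/card_up and the
	 * register write are hypothetical stand-ins for the driver's own. */
	#include <stdbool.h>
	#include <stdio.h>

	#define MIN_MTU   46
	#define JUMBO_MTU 9600

	static bool dev_running = true;
	static int  dev_mtu = 1500;

	static void card_down(void) { printf("quiescing device\n"); }
	static int  card_up(void)   { printf("reinitialising device\n"); return 0; }
	static void write_pyld_len(int mtu) { printf("PYLD register <- %d\n", mtu); }

	static int change_mtu(int new_mtu)
	{
		if (new_mtu < MIN_MTU || new_mtu > JUMBO_MTU)
			return -1;                 /* reject out-of-range sizes */

		dev_mtu = new_mtu;
		if (dev_running) {
			card_down();               /* full reinit picks up the MTU */
			if (card_up())
				return -1;
		} else {
			write_pyld_len(new_mtu);   /* device down: program HW only */
		}
		return 0;
	}

	int main(void) { return change_mtu(9000); }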
@@ -4108,9 +4708,9 @@ static int s2io_change_mtu(struct net_device *dev, int new_mtu) | |||
4108 | * @dev_adr : address of the device structure in dma_addr_t format. | 4708 | * @dev_adr : address of the device structure in dma_addr_t format. |
4109 | * Description: | 4709 | * Description: |
4110 | * This is the tasklet or the bottom half of the ISR. This is | 4710 | * This is the tasklet or the bottom half of the ISR. This is |
4111 | * an extension of the ISR which is scheduled by the scheduler to be run | 4711 | * an extension of the ISR which is scheduled by the scheduler to be run |
4112 | * when the load on the CPU is low. All low priority tasks of the ISR can | 4712 | * when the load on the CPU is low. All low priority tasks of the ISR can |
4113 | * be pushed into the tasklet. For now the tasklet is used only to | 4713 | * be pushed into the tasklet. For now the tasklet is used only to |
4114 | * replenish the Rx buffers in the Rx buffer descriptors. | 4714 | * replenish the Rx buffers in the Rx buffer descriptors. |
4115 | * Return value: | 4715 | * Return value: |
4116 | * void. | 4716 | * void. |
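The comment above describes the tasklet as the deferred half of the ISR, used in this driver only to replenish Rx buffers. A minimal 2.6-era sketch of scheduling such deferred work from an interrupt handler follows; the replenish routine is a placeholder and does not reproduce the driver's fill_rx_buffers() logic.

	/* Sketch: defer low-priority ISR work to a tasklet (2.6-era API).
	 * replenish_rx() stands in for the driver's Rx refill routine. */
	#include <linux/interrupt.h>

	static void replenish_rx(unsigned long data)
	{
		/* Runs in softirq context shortly after the ISR returns;
		 * allocate skbs here and hand them back to the RxD ring. */
	}

	static DECLARE_TASKLET(rx_replenish_tasklet, replenish_rx, 0);

	static irqreturn_t example_isr(int irq, void *dev_id, struct pt_regs *regs)
	{
		/* do only the time-critical work here ... */
		tasklet_schedule(&rx_replenish_tasklet);   /* ... defer the rest */
		return IRQ_HANDLED;
	}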
@@ -4166,19 +4766,22 @@ static void s2io_set_link(unsigned long data) | |||
4166 | } | 4766 | } |
4167 | 4767 | ||
4168 | subid = nic->pdev->subsystem_device; | 4768 | subid = nic->pdev->subsystem_device; |
4169 | /* | 4769 | if (s2io_link_fault_indication(nic) == MAC_RMAC_ERR_TIMER) { |
4170 | * Allow a small delay for the NICs self initiated | 4770 | /* |
4171 | * cleanup to complete. | 4771 | * Allow a small delay for the NICs self initiated |
4172 | */ | 4772 | * cleanup to complete. |
4173 | msleep(100); | 4773 | */ |
4774 | msleep(100); | ||
4775 | } | ||
4174 | 4776 | ||
4175 | val64 = readq(&bar0->adapter_status); | 4777 | val64 = readq(&bar0->adapter_status); |
4176 | if (verify_xena_quiescence(val64, nic->device_enabled_once)) { | 4778 | if (verify_xena_quiescence(nic, val64, nic->device_enabled_once)) { |
4177 | if (LINK_IS_UP(val64)) { | 4779 | if (LINK_IS_UP(val64)) { |
4178 | val64 = readq(&bar0->adapter_control); | 4780 | val64 = readq(&bar0->adapter_control); |
4179 | val64 |= ADAPTER_CNTL_EN; | 4781 | val64 |= ADAPTER_CNTL_EN; |
4180 | writeq(val64, &bar0->adapter_control); | 4782 | writeq(val64, &bar0->adapter_control); |
4181 | if (CARDS_WITH_FAULTY_LINK_INDICATORS(subid)) { | 4783 | if (CARDS_WITH_FAULTY_LINK_INDICATORS(nic->device_type, |
4784 | subid)) { | ||
4182 | val64 = readq(&bar0->gpio_control); | 4785 | val64 = readq(&bar0->gpio_control); |
4183 | val64 |= GPIO_CTRL_GPIO_0; | 4786 | val64 |= GPIO_CTRL_GPIO_0; |
4184 | writeq(val64, &bar0->gpio_control); | 4787 | writeq(val64, &bar0->gpio_control); |
@@ -4187,20 +4790,24 @@ static void s2io_set_link(unsigned long data) | |||
4187 | val64 |= ADAPTER_LED_ON; | 4790 | val64 |= ADAPTER_LED_ON; |
4188 | writeq(val64, &bar0->adapter_control); | 4791 | writeq(val64, &bar0->adapter_control); |
4189 | } | 4792 | } |
4190 | val64 = readq(&bar0->adapter_status); | 4793 | if (s2io_link_fault_indication(nic) == |
4191 | if (!LINK_IS_UP(val64)) { | 4794 | MAC_RMAC_ERR_TIMER) { |
4192 | DBG_PRINT(ERR_DBG, "%s:", dev->name); | 4795 | val64 = readq(&bar0->adapter_status); |
4193 | DBG_PRINT(ERR_DBG, " Link down"); | 4796 | if (!LINK_IS_UP(val64)) { |
4194 | DBG_PRINT(ERR_DBG, "after "); | 4797 | DBG_PRINT(ERR_DBG, "%s:", dev->name); |
4195 | DBG_PRINT(ERR_DBG, "enabling "); | 4798 | DBG_PRINT(ERR_DBG, " Link down"); |
4196 | DBG_PRINT(ERR_DBG, "device \n"); | 4799 | DBG_PRINT(ERR_DBG, "after "); |
4800 | DBG_PRINT(ERR_DBG, "enabling "); | ||
4801 | DBG_PRINT(ERR_DBG, "device \n"); | ||
4802 | } | ||
4197 | } | 4803 | } |
4198 | if (nic->device_enabled_once == FALSE) { | 4804 | if (nic->device_enabled_once == FALSE) { |
4199 | nic->device_enabled_once = TRUE; | 4805 | nic->device_enabled_once = TRUE; |
4200 | } | 4806 | } |
4201 | s2io_link(nic, LINK_UP); | 4807 | s2io_link(nic, LINK_UP); |
4202 | } else { | 4808 | } else { |
4203 | if (CARDS_WITH_FAULTY_LINK_INDICATORS(subid)) { | 4809 | if (CARDS_WITH_FAULTY_LINK_INDICATORS(nic->device_type, |
4810 | subid)) { | ||
4204 | val64 = readq(&bar0->gpio_control); | 4811 | val64 = readq(&bar0->gpio_control); |
4205 | val64 &= ~GPIO_CTRL_GPIO_0; | 4812 | val64 &= ~GPIO_CTRL_GPIO_0; |
4206 | writeq(val64, &bar0->gpio_control); | 4813 | writeq(val64, &bar0->gpio_control); |
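Throughout s2io_set_link the adapter is driven by read-modify-write cycles on 64-bit BAR0 registers: readq the current value, OR in (or mask out) a control bit, writeq it back, as done for gpio_control and adapter_control above. The fragment below is a generic illustration of that pattern on a 64-bit-capable platform; the register offset and bit name are invented for the example, not taken from the Xena register map.

	/* Sketch: 64-bit MMIO read-modify-write.  EXAMPLE_REG_OFFSET and
	 * EXAMPLE_ENABLE_BIT are made-up names for illustration only. */
	#include <linux/types.h>
	#include <asm/io.h>

	#define EXAMPLE_REG_OFFSET	0x0100
	#define EXAMPLE_ENABLE_BIT	(1ULL << 0)

	static void example_set_bit(void __iomem *bar0)
	{
		u64 val64;

		val64 = readq(bar0 + EXAMPLE_REG_OFFSET);	/* read current value  */
		val64 |= EXAMPLE_ENABLE_BIT;			/* set the control bit */
		writeq(val64, bar0 + EXAMPLE_REG_OFFSET);	/* write it back       */
	}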
@@ -4223,9 +4830,11 @@ static void s2io_card_down(nic_t * sp) | |||
4223 | unsigned long flags; | 4830 | unsigned long flags; |
4224 | register u64 val64 = 0; | 4831 | register u64 val64 = 0; |
4225 | 4832 | ||
4833 | del_timer_sync(&sp->alarm_timer); | ||
4226 | /* If s2io_set_link task is executing, wait till it completes. */ | 4834 | /* If s2io_set_link task is executing, wait till it completes. */ |
4227 | while (test_and_set_bit(0, &(sp->link_state))) | 4835 | while (test_and_set_bit(0, &(sp->link_state))) { |
4228 | msleep(50); | 4836 | msleep(50); |
4837 | } | ||
4229 | atomic_set(&sp->card_state, CARD_DOWN); | 4838 | atomic_set(&sp->card_state, CARD_DOWN); |
4230 | 4839 | ||
4231 | /* disable Tx and Rx traffic on the NIC */ | 4840 | /* disable Tx and Rx traffic on the NIC */ |
@@ -4237,7 +4846,7 @@ static void s2io_card_down(nic_t * sp) | |||
4237 | /* Check if the device is Quiescent and then Reset the NIC */ | 4846 | /* Check if the device is Quiescent and then Reset the NIC */ |
4238 | do { | 4847 | do { |
4239 | val64 = readq(&bar0->adapter_status); | 4848 | val64 = readq(&bar0->adapter_status); |
4240 | if (verify_xena_quiescence(val64, sp->device_enabled_once)) { | 4849 | if (verify_xena_quiescence(sp, val64, sp->device_enabled_once)) { |
4241 | break; | 4850 | break; |
4242 | } | 4851 | } |
4243 | 4852 | ||
@@ -4251,14 +4860,27 @@ static void s2io_card_down(nic_t * sp) | |||
4251 | break; | 4860 | break; |
4252 | } | 4861 | } |
4253 | } while (1); | 4862 | } while (1); |
4254 | spin_lock_irqsave(&sp->tx_lock, flags); | ||
4255 | s2io_reset(sp); | 4863 | s2io_reset(sp); |
4256 | 4864 | ||
4257 | /* Free all unused Tx and Rx buffers */ | 4865 | /* Waiting till all Interrupt handlers are complete */ |
4866 | cnt = 0; | ||
4867 | do { | ||
4868 | msleep(10); | ||
4869 | if (!atomic_read(&sp->isr_cnt)) | ||
4870 | break; | ||
4871 | cnt++; | ||
4872 | } while(cnt < 5); | ||
4873 | |||
4874 | spin_lock_irqsave(&sp->tx_lock, flags); | ||
4875 | /* Free all Tx buffers */ | ||
4258 | free_tx_buffers(sp); | 4876 | free_tx_buffers(sp); |
4877 | spin_unlock_irqrestore(&sp->tx_lock, flags); | ||
4878 | |||
4879 | /* Free all Rx buffers */ | ||
4880 | spin_lock_irqsave(&sp->rx_lock, flags); | ||
4259 | free_rx_buffers(sp); | 4881 | free_rx_buffers(sp); |
4882 | spin_unlock_irqrestore(&sp->rx_lock, flags); | ||
4260 | 4883 | ||
4261 | spin_unlock_irqrestore(&sp->tx_lock, flags); | ||
4262 | clear_bit(0, &(sp->link_state)); | 4884 | clear_bit(0, &(sp->link_state)); |
4263 | } | 4885 | } |
4264 | 4886 | ||
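s2io_card_down now deletes the alarm timer, resets the NIC and then waits for any interrupt handlers still in flight by polling an atomic counter (isr_cnt, incremented on ISR entry and decremented on exit) before freeing the Tx and Rx buffers under their own locks. A hedged sketch of that bounded drain loop, under the stated assumption about how the ISR maintains the counter:

	/* Sketch: drain in-flight interrupt handlers before teardown.
	 * The ISR is assumed to atomic_inc() this counter on entry and
	 * atomic_dec() it on exit, as the driver does with sp->isr_cnt. */
	#include <asm/atomic.h>
	#include <linux/delay.h>

	static atomic_t isr_cnt = ATOMIC_INIT(0);

	static void wait_for_isrs(void)
	{
		int cnt = 0;

		do {
			msleep(10);			/* give handlers time to finish */
			if (!atomic_read(&isr_cnt))
				break;			/* no handler is running        */
			cnt++;
		} while (cnt < 5);			/* bounded wait, like the driver */
	}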
@@ -4276,8 +4898,8 @@ static int s2io_card_up(nic_t * sp) | |||
4276 | return -ENODEV; | 4898 | return -ENODEV; |
4277 | } | 4899 | } |
4278 | 4900 | ||
4279 | /* | 4901 | /* |
4280 | * Initializing the Rx buffers. For now we are considering only 1 | 4902 | * Initializing the Rx buffers. For now we are considering only 1 |
4281 | * Rx ring and initializing buffers into 30 Rx blocks | 4903 | * Rx ring and initializing buffers into 30 Rx blocks |
4282 | */ | 4904 | */ |
4283 | mac_control = &sp->mac_control; | 4905 | mac_control = &sp->mac_control; |
@@ -4311,16 +4933,18 @@ static int s2io_card_up(nic_t * sp) | |||
4311 | return -ENODEV; | 4933 | return -ENODEV; |
4312 | } | 4934 | } |
4313 | 4935 | ||
4936 | S2IO_TIMER_CONF(sp->alarm_timer, s2io_alarm_handle, sp, (HZ/2)); | ||
4937 | |||
4314 | atomic_set(&sp->card_state, CARD_UP); | 4938 | atomic_set(&sp->card_state, CARD_UP); |
4315 | return 0; | 4939 | return 0; |
4316 | } | 4940 | } |
4317 | 4941 | ||
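s2io_card_up now arms a periodic alarm timer through the driver's S2IO_TIMER_CONF macro, firing every HZ/2 to poll for error conditions (and s2io_card_down tears it back down with del_timer_sync). A hedged 2.6-era sketch of setting up such a self-rearming timer is shown below; the handler body and the 500 ms period are illustrative only.

	/* Sketch: arm a half-second periodic timer, roughly what
	 * S2IO_TIMER_CONF(sp->alarm_timer, s2io_alarm_handle, sp, HZ/2)
	 * amounts to.  alarm_handle() is a placeholder handler. */
	#include <linux/timer.h>
	#include <linux/jiffies.h>

	static struct timer_list alarm_timer;

	static void alarm_handle(unsigned long data)
	{
		/* check adapter error registers here ... then re-arm */
		mod_timer(&alarm_timer, jiffies + msecs_to_jiffies(500));
	}

	static void start_alarm_timer(unsigned long priv)
	{
		init_timer(&alarm_timer);
		alarm_timer.function = alarm_handle;
		alarm_timer.data = priv;
		mod_timer(&alarm_timer, jiffies + msecs_to_jiffies(500));
	}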
4318 | /** | 4942 | /** |
4319 | * s2io_restart_nic - Resets the NIC. | 4943 | * s2io_restart_nic - Resets the NIC. |
4320 | * @data : long pointer to the device private structure | 4944 | * @data : long pointer to the device private structure |
4321 | * Description: | 4945 | * Description: |
4322 | * This function is scheduled to be run by the s2io_tx_watchdog | 4946 | * This function is scheduled to be run by the s2io_tx_watchdog |
4323 | * function after 0.5 secs to reset the NIC. The idea is to reduce | 4947 | * function after 0.5 secs to reset the NIC. The idea is to reduce |
4324 | * the run time of the watch dog routine which is run holding a | 4948 | * the run time of the watch dog routine which is run holding a |
4325 | * spin lock. | 4949 | * spin lock. |
4326 | */ | 4950 | */ |
@@ -4338,10 +4962,11 @@ static void s2io_restart_nic(unsigned long data) | |||
4338 | netif_wake_queue(dev); | 4962 | netif_wake_queue(dev); |
4339 | DBG_PRINT(ERR_DBG, "%s: was reset by Tx watchdog timer\n", | 4963 | DBG_PRINT(ERR_DBG, "%s: was reset by Tx watchdog timer\n", |
4340 | dev->name); | 4964 | dev->name); |
4965 | |||
4341 | } | 4966 | } |
4342 | 4967 | ||
4343 | /** | 4968 | /** |
4344 | * s2io_tx_watchdog - Watchdog for transmit side. | 4969 | * s2io_tx_watchdog - Watchdog for transmit side. |
4345 | * @dev : Pointer to net device structure | 4970 | * @dev : Pointer to net device structure |
4346 | * Description: | 4971 | * Description: |
4347 | * This function is triggered if the Tx Queue is stopped | 4972 | * This function is triggered if the Tx Queue is stopped |
@@ -4369,7 +4994,7 @@ static void s2io_tx_watchdog(struct net_device *dev) | |||
4369 | * @len : length of the packet | 4994 | * @len : length of the packet |
4370 | * @cksum : FCS checksum of the frame. | 4995 | * @cksum : FCS checksum of the frame. |
4371 | * @ring_no : the ring from which this RxD was extracted. | 4996 | * @ring_no : the ring from which this RxD was extracted. |
4372 | * Description: | 4997 | * Description: |
4373 | * This function is called by the Rx interrupt service routine to perform | 4998 | * This function is called by the Rx interrupt service routine to perform |
4374 | * some OS related operations on the SKB before passing it to the upper | 4999 | * some OS related operations on the SKB before passing it to the upper |
4375 | * layers. It mainly checks if the checksum is OK, if so adds it to the | 5000 | * layers. It mainly checks if the checksum is OK, if so adds it to the |
@@ -4379,35 +5004,68 @@ static void s2io_tx_watchdog(struct net_device *dev) | |||
4379 | * Return value: | 5004 | * Return value: |
4380 | * SUCCESS on success and -1 on failure. | 5005 | * SUCCESS on success and -1 on failure. |
4381 | */ | 5006 | */ |
4382 | #ifndef CONFIG_2BUFF_MODE | 5007 | static int rx_osm_handler(ring_info_t *ring_data, RxD_t * rxdp) |
4383 | static int rx_osm_handler(nic_t * sp, u16 len, RxD_t * rxdp, int ring_no) | ||
4384 | #else | ||
4385 | static int rx_osm_handler(nic_t * sp, RxD_t * rxdp, int ring_no, | ||
4386 | buffAdd_t * ba) | ||
4387 | #endif | ||
4388 | { | 5008 | { |
5009 | nic_t *sp = ring_data->nic; | ||
4389 | struct net_device *dev = (struct net_device *) sp->dev; | 5010 | struct net_device *dev = (struct net_device *) sp->dev; |
4390 | struct sk_buff *skb = | 5011 | struct sk_buff *skb = (struct sk_buff *) |
4391 | (struct sk_buff *) ((unsigned long) rxdp->Host_Control); | 5012 | ((unsigned long) rxdp->Host_Control); |
5013 | int ring_no = ring_data->ring_no; | ||
4392 | u16 l3_csum, l4_csum; | 5014 | u16 l3_csum, l4_csum; |
4393 | #ifdef CONFIG_2BUFF_MODE | 5015 | #ifdef CONFIG_2BUFF_MODE |
4394 | int buf0_len, buf2_len; | 5016 | int buf0_len = RXD_GET_BUFFER0_SIZE(rxdp->Control_2); |
5017 | int buf2_len = RXD_GET_BUFFER2_SIZE(rxdp->Control_2); | ||
5018 | int get_block = ring_data->rx_curr_get_info.block_index; | ||
5019 | int get_off = ring_data->rx_curr_get_info.offset; | ||
5020 | buffAdd_t *ba = &ring_data->ba[get_block][get_off]; | ||
4395 | unsigned char *buff; | 5021 | unsigned char *buff; |
5022 | #else | ||
5023 | u16 len = (u16) ((RXD_GET_BUFFER0_SIZE(rxdp->Control_2)) >> 48); | ||
4396 | #endif | 5024 | #endif |
5025 | skb->dev = dev; | ||
5026 | if (rxdp->Control_1 & RXD_T_CODE) { | ||
5027 | unsigned long long err = rxdp->Control_1 & RXD_T_CODE; | ||
5028 | DBG_PRINT(ERR_DBG, "%s: Rx error Value: 0x%llx\n", | ||
5029 | dev->name, err); | ||
5030 | dev_kfree_skb(skb); | ||
5031 | sp->stats.rx_crc_errors++; | ||
5032 | atomic_dec(&sp->rx_bufs_left[ring_no]); | ||
5033 | rxdp->Host_Control = 0; | ||
5034 | return 0; | ||
5035 | } | ||
4397 | 5036 | ||
4398 | l3_csum = RXD_GET_L3_CKSUM(rxdp->Control_1); | 5037 | /* Updating statistics */ |
4399 | if ((rxdp->Control_1 & TCP_OR_UDP_FRAME) && (sp->rx_csum)) { | 5038 | rxdp->Host_Control = 0; |
5039 | sp->rx_pkt_count++; | ||
5040 | sp->stats.rx_packets++; | ||
5041 | #ifndef CONFIG_2BUFF_MODE | ||
5042 | sp->stats.rx_bytes += len; | ||
5043 | #else | ||
5044 | sp->stats.rx_bytes += buf0_len + buf2_len; | ||
5045 | #endif | ||
5046 | |||
5047 | #ifndef CONFIG_2BUFF_MODE | ||
5048 | skb_put(skb, len); | ||
5049 | #else | ||
5050 | buff = skb_push(skb, buf0_len); | ||
5051 | memcpy(buff, ba->ba_0, buf0_len); | ||
5052 | skb_put(skb, buf2_len); | ||
5053 | #endif | ||
5054 | |||
5055 | if ((rxdp->Control_1 & TCP_OR_UDP_FRAME) && | ||
5056 | (sp->rx_csum)) { | ||
5057 | l3_csum = RXD_GET_L3_CKSUM(rxdp->Control_1); | ||
4400 | l4_csum = RXD_GET_L4_CKSUM(rxdp->Control_1); | 5058 | l4_csum = RXD_GET_L4_CKSUM(rxdp->Control_1); |
4401 | if ((l3_csum == L3_CKSUM_OK) && (l4_csum == L4_CKSUM_OK)) { | 5059 | if ((l3_csum == L3_CKSUM_OK) && (l4_csum == L4_CKSUM_OK)) { |
4402 | /* | 5060 | /* |
4403 | * NIC verifies if the Checksum of the received | 5061 | * NIC verifies if the Checksum of the received |
4404 | * frame is Ok or not and accordingly returns | 5062 | * frame is Ok or not and accordingly returns |
4405 | * a flag in the RxD. | 5063 | * a flag in the RxD. |
4406 | */ | 5064 | */ |
4407 | skb->ip_summed = CHECKSUM_UNNECESSARY; | 5065 | skb->ip_summed = CHECKSUM_UNNECESSARY; |
4408 | } else { | 5066 | } else { |
4409 | /* | 5067 | /* |
4410 | * Packet with erroneous checksum, let the | 5068 | * Packet with erroneous checksum, let the |
4411 | * upper layers deal with it. | 5069 | * upper layers deal with it. |
4412 | */ | 5070 | */ |
4413 | skb->ip_summed = CHECKSUM_NONE; | 5071 | skb->ip_summed = CHECKSUM_NONE; |
@@ -4416,44 +5074,26 @@ static int rx_osm_handler(nic_t * sp, RxD_t * rxdp, int ring_no, | |||
4416 | skb->ip_summed = CHECKSUM_NONE; | 5074 | skb->ip_summed = CHECKSUM_NONE; |
4417 | } | 5075 | } |
4418 | 5076 | ||
4419 | if (rxdp->Control_1 & RXD_T_CODE) { | ||
4420 | unsigned long long err = rxdp->Control_1 & RXD_T_CODE; | ||
4421 | DBG_PRINT(ERR_DBG, "%s: Rx error Value: 0x%llx\n", | ||
4422 | dev->name, err); | ||
4423 | } | ||
4424 | #ifdef CONFIG_2BUFF_MODE | ||
4425 | buf0_len = RXD_GET_BUFFER0_SIZE(rxdp->Control_2); | ||
4426 | buf2_len = RXD_GET_BUFFER2_SIZE(rxdp->Control_2); | ||
4427 | #endif | ||
4428 | |||
4429 | skb->dev = dev; | ||
4430 | #ifndef CONFIG_2BUFF_MODE | ||
4431 | skb_put(skb, len); | ||
4432 | skb->protocol = eth_type_trans(skb, dev); | ||
4433 | #else | ||
4434 | buff = skb_push(skb, buf0_len); | ||
4435 | memcpy(buff, ba->ba_0, buf0_len); | ||
4436 | skb_put(skb, buf2_len); | ||
4437 | skb->protocol = eth_type_trans(skb, dev); | 5077 | skb->protocol = eth_type_trans(skb, dev); |
4438 | #endif | ||
4439 | |||
4440 | #ifdef CONFIG_S2IO_NAPI | 5078 | #ifdef CONFIG_S2IO_NAPI |
4441 | netif_receive_skb(skb); | 5079 | if (sp->vlgrp && RXD_GET_VLAN_TAG(rxdp->Control_2)) { |
5080 | /* Queueing the vlan frame to the upper layer */ | ||
5081 | vlan_hwaccel_receive_skb(skb, sp->vlgrp, | ||
5082 | RXD_GET_VLAN_TAG(rxdp->Control_2)); | ||
5083 | } else { | ||
5084 | netif_receive_skb(skb); | ||
5085 | } | ||
4442 | #else | 5086 | #else |
4443 | netif_rx(skb); | 5087 | if (sp->vlgrp && RXD_GET_VLAN_TAG(rxdp->Control_2)) { |
5088 | /* Queueing the vlan frame to the upper layer */ | ||
5089 | vlan_hwaccel_rx(skb, sp->vlgrp, | ||
5090 | RXD_GET_VLAN_TAG(rxdp->Control_2)); | ||
5091 | } else { | ||
5092 | netif_rx(skb); | ||
5093 | } | ||
4444 | #endif | 5094 | #endif |
4445 | |||
4446 | dev->last_rx = jiffies; | 5095 | dev->last_rx = jiffies; |
4447 | sp->rx_pkt_count++; | ||
4448 | sp->stats.rx_packets++; | ||
4449 | #ifndef CONFIG_2BUFF_MODE | ||
4450 | sp->stats.rx_bytes += len; | ||
4451 | #else | ||
4452 | sp->stats.rx_bytes += buf0_len + buf2_len; | ||
4453 | #endif | ||
4454 | |||
4455 | atomic_dec(&sp->rx_bufs_left[ring_no]); | 5096 | atomic_dec(&sp->rx_bufs_left[ring_no]); |
4456 | rxdp->Host_Control = 0; | ||
4457 | return SUCCESS; | 5097 | return SUCCESS; |
4458 | } | 5098 | } |
4459 | 5099 | ||
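The reworked receive handler checks the RxD error code first, updates the statistics, sets skb->ip_summed from the hardware L3/L4 checksum results and, new in this patch, hands VLAN-tagged frames to the VLAN acceleration path instead of the plain receive path. A hedged fragment of that final dispatch (NAPI branch) is sketched below; the vlgrp pointer and the extracted tag are stand-ins for the driver's own fields.

	/* Sketch: deliver a received skb either through VLAN acceleration
	 * or through the normal NAPI receive path, mirroring the
	 * CONFIG_S2IO_NAPI branch above.  'vlgrp' and 'vlan_tag' are
	 * placeholders for illustration. */
	#include <linux/netdevice.h>
	#include <linux/etherdevice.h>
	#include <linux/if_vlan.h>
	#include <linux/jiffies.h>

	static void deliver_skb(struct sk_buff *skb, struct net_device *dev,
				struct vlan_group *vlgrp, unsigned short vlan_tag)
	{
		skb->protocol = eth_type_trans(skb, dev);

		if (vlgrp && vlan_tag)
			/* strip/accelerate the 802.1Q tag in the stack */
			vlan_hwaccel_receive_skb(skb, vlgrp, vlan_tag);
		else
			netif_receive_skb(skb);

		dev->last_rx = jiffies;
	}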
@@ -4464,13 +5104,13 @@ static int rx_osm_handler(nic_t * sp, RxD_t * rxdp, int ring_no, | |||
4464 | * @link : indicates whether link is UP/DOWN. | 5104 | * @link : indicates whether link is UP/DOWN. |
4465 | * Description: | 5105 | * Description: |
4466 | * This function stops/starts the Tx queue depending on whether the link | 5106 | * This function stops/starts the Tx queue depending on whether the link |
4467 | * status of the NIC is down or up. This is called by the Alarm | 5107 | * status of the NIC is down or up. This is called by the Alarm |
4468 | * interrupt handler whenever a link change interrupt comes up. | 5108 | * interrupt handler whenever a link change interrupt comes up. |
4469 | * Return value: | 5109 | * Return value: |
4470 | * void. | 5110 | * void. |
4471 | */ | 5111 | */ |
4472 | 5112 | ||
4473 | static void s2io_link(nic_t * sp, int link) | 5113 | void s2io_link(nic_t * sp, int link) |
4474 | { | 5114 | { |
4475 | struct net_device *dev = (struct net_device *) sp->dev; | 5115 | struct net_device *dev = (struct net_device *) sp->dev; |
4476 | 5116 | ||
@@ -4487,8 +5127,25 @@ static void s2io_link(nic_t * sp, int link) | |||
4487 | } | 5127 | } |
4488 | 5128 | ||
4489 | /** | 5129 | /** |
4490 | * s2io_init_pci -Initialization of PCI and PCI-X configuration registers . | 5130 | * get_xena_rev_id - to identify revision ID of xena. |
4491 | * @sp : private member of the device structure, which is a pointer to the | 5131 | * @pdev : PCI Dev structure |
5132 | * Description: | ||
5133 | * Function to identify the Revision ID of xena. | ||
5134 | * Return value: | ||
5135 | * returns the revision ID of the device. | ||
5136 | */ | ||
5137 | |||
5138 | int get_xena_rev_id(struct pci_dev *pdev) | ||
5139 | { | ||
5140 | u8 id = 0; | ||
5141 | int ret; | ||
5142 | ret = pci_read_config_byte(pdev, PCI_REVISION_ID, (u8 *) & id); | ||
5143 | return id; | ||
5144 | } | ||
5145 | |||
5146 | /** | ||
5147 | * s2io_init_pci -Initialization of PCI and PCI-X configuration registers . | ||
5148 | * @sp : private member of the device structure, which is a pointer to the | ||
4492 | * s2io_nic structure. | 5149 | * s2io_nic structure. |
4493 | * Description: | 5150 | * Description: |
4494 | * This function initializes a few of the PCI and PCI-X configuration registers | 5151 | * This function initializes a few of the PCI and PCI-X configuration registers |
@@ -4499,15 +5156,15 @@ static void s2io_link(nic_t * sp, int link) | |||
4499 | 5156 | ||
4500 | static void s2io_init_pci(nic_t * sp) | 5157 | static void s2io_init_pci(nic_t * sp) |
4501 | { | 5158 | { |
4502 | u16 pci_cmd = 0; | 5159 | u16 pci_cmd = 0, pcix_cmd = 0; |
4503 | 5160 | ||
4504 | /* Enable Data Parity Error Recovery in PCI-X command register. */ | 5161 | /* Enable Data Parity Error Recovery in PCI-X command register. */ |
4505 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | 5162 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, |
4506 | &(sp->pcix_cmd)); | 5163 | &(pcix_cmd)); |
4507 | pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | 5164 | pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, |
4508 | (sp->pcix_cmd | 1)); | 5165 | (pcix_cmd | 1)); |
4509 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | 5166 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, |
4510 | &(sp->pcix_cmd)); | 5167 | &(pcix_cmd)); |
4511 | 5168 | ||
4512 | /* Set the PErr Response bit in PCI command register. */ | 5169 | /* Set the PErr Response bit in PCI command register. */ |
4513 | pci_read_config_word(sp->pdev, PCI_COMMAND, &pci_cmd); | 5170 | pci_read_config_word(sp->pdev, PCI_COMMAND, &pci_cmd); |
@@ -4515,53 +5172,43 @@ static void s2io_init_pci(nic_t * sp) | |||
4515 | (pci_cmd | PCI_COMMAND_PARITY)); | 5172 | (pci_cmd | PCI_COMMAND_PARITY)); |
4516 | pci_read_config_word(sp->pdev, PCI_COMMAND, &pci_cmd); | 5173 | pci_read_config_word(sp->pdev, PCI_COMMAND, &pci_cmd); |
4517 | 5174 | ||
4518 | /* Set MMRB count to 1024 in PCI-X Command register. */ | ||
4519 | sp->pcix_cmd &= 0xFFF3; | ||
4520 | pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, (sp->pcix_cmd | (0x1 << 2))); /* MMRBC 1K */ | ||
4521 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | ||
4522 | &(sp->pcix_cmd)); | ||
4523 | |||
4524 | /* Setting Maximum outstanding splits based on system type. */ | ||
4525 | sp->pcix_cmd &= 0xFF8F; | ||
4526 | |||
4527 | sp->pcix_cmd |= XENA_MAX_OUTSTANDING_SPLITS(0x1); /* 2 splits. */ | ||
4528 | pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | ||
4529 | sp->pcix_cmd); | ||
4530 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | ||
4531 | &(sp->pcix_cmd)); | ||
4532 | /* Forcibly disabling relaxed ordering capability of the card. */ | 5175 | /* Forcibly disabling relaxed ordering capability of the card. */ |
4533 | sp->pcix_cmd &= 0xfffd; | 5176 | pcix_cmd &= 0xfffd; |
4534 | pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | 5177 | pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, |
4535 | sp->pcix_cmd); | 5178 | pcix_cmd); |
4536 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, | 5179 | pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, |
4537 | &(sp->pcix_cmd)); | 5180 | &(pcix_cmd)); |
4538 | } | 5181 | } |
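s2io_init_pci now works on a local copy of the PCI-X command register instead of caching it in the private structure, and after setting the parity-error response bit in PCI_COMMAND it simply clears the relaxed-ordering enable bit. A hedged sketch of that read-modify-write on PCI configuration space follows; only the 0x62 offset and the 0xfffd mask are taken from the hunk above, the function name is invented.

	/* Sketch: clear one bit in a PCI(-X) config word via
	 * read/modify/write, then read back, as in the hunk above. */
	#include <linux/pci.h>

	#define PCIX_COMMAND_REGISTER	0x62

	static void disable_relaxed_ordering(struct pci_dev *pdev)
	{
		u16 pcix_cmd = 0;

		pci_read_config_word(pdev, PCIX_COMMAND_REGISTER, &pcix_cmd);
		pcix_cmd &= 0xfffd;		/* clear relaxed-ordering enable */
		pci_write_config_word(pdev, PCIX_COMMAND_REGISTER, pcix_cmd);
		pci_read_config_word(pdev, PCIX_COMMAND_REGISTER, &pcix_cmd);	/* readback */
	}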
4539 | 5182 | ||
4540 | MODULE_AUTHOR("Raghavendra Koushik <raghavendra.koushik@neterion.com>"); | 5183 | MODULE_AUTHOR("Raghavendra Koushik <raghavendra.koushik@neterion.com>"); |
4541 | MODULE_LICENSE("GPL"); | 5184 | MODULE_LICENSE("GPL"); |
4542 | module_param(tx_fifo_num, int, 0); | 5185 | module_param(tx_fifo_num, int, 0); |
4543 | module_param_array(tx_fifo_len, int, NULL, 0); | ||
4544 | module_param(rx_ring_num, int, 0); | 5186 | module_param(rx_ring_num, int, 0); |
4545 | module_param_array(rx_ring_sz, int, NULL, 0); | 5187 | module_param_array(tx_fifo_len, uint, NULL, 0); |
4546 | module_param(Stats_refresh_time, int, 0); | 5188 | module_param_array(rx_ring_sz, uint, NULL, 0); |
5189 | module_param_array(rts_frm_len, uint, NULL, 0); | ||
5190 | module_param(use_continuous_tx_intrs, int, 1); | ||
4547 | module_param(rmac_pause_time, int, 0); | 5191 | module_param(rmac_pause_time, int, 0); |
4548 | module_param(mc_pause_threshold_q0q3, int, 0); | 5192 | module_param(mc_pause_threshold_q0q3, int, 0); |
4549 | module_param(mc_pause_threshold_q4q7, int, 0); | 5193 | module_param(mc_pause_threshold_q4q7, int, 0); |
4550 | module_param(shared_splits, int, 0); | 5194 | module_param(shared_splits, int, 0); |
4551 | module_param(tmac_util_period, int, 0); | 5195 | module_param(tmac_util_period, int, 0); |
4552 | module_param(rmac_util_period, int, 0); | 5196 | module_param(rmac_util_period, int, 0); |
5197 | module_param(bimodal, bool, 0); | ||
4553 | #ifndef CONFIG_S2IO_NAPI | 5198 | #ifndef CONFIG_S2IO_NAPI |
4554 | module_param(indicate_max_pkts, int, 0); | 5199 | module_param(indicate_max_pkts, int, 0); |
4555 | #endif | 5200 | #endif |
5201 | module_param(rxsync_frequency, int, 0); | ||
5202 | |||
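The module-parameter block above switches the per-FIFO and per-ring arrays to unsigned int and adds new knobs (rts_frm_len, use_continuous_tx_intrs, bimodal, rxsync_frequency); the defaults for the arrays are now applied later, only if the user left the first element at zero. A hedged sketch of declaring such an array parameter and applying a load-time default is below; the parameter name, sizes and default are illustrative, not the driver's real values.

	/* Sketch: an array module parameter with a load-time default, in
	 * the style of tx_fifo_len above.  Names are placeholders. */
	#include <linux/module.h>
	#include <linux/moduleparam.h>
	#include <linux/init.h>

	#define EXAMPLE_MAX_FIFOS	8
	#define EXAMPLE_DEFAULT_LEN	4096

	static unsigned int example_fifo_len[EXAMPLE_MAX_FIFOS] = { 0 };
	module_param_array(example_fifo_len, uint, NULL, 0);

	static int __init example_init(void)
	{
		/* fall back to the default if the user passed nothing */
		if (example_fifo_len[0] == 0)
			example_fifo_len[0] = EXAMPLE_DEFAULT_LEN;
		return 0;
	}
	module_init(example_init);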
4556 | /** | 5203 | /** |
4557 | * s2io_init_nic - Initialization of the adapter . | 5204 | * s2io_init_nic - Initialization of the adapter . |
4558 | * @pdev : structure containing the PCI related information of the device. | 5205 | * @pdev : structure containing the PCI related information of the device. |
4559 | * @pre: List of PCI devices supported by the driver listed in s2io_tbl. | 5206 | * @pre: List of PCI devices supported by the driver listed in s2io_tbl. |
4560 | * Description: | 5207 | * Description: |
4561 | * The function initializes an adapter identified by the pci_dev structure. | 5208 | * The function initializes an adapter identified by the pci_dev structure. |
4562 | * All OS related initialization including memory and device structure and | 5209 | * All OS related initialization including memory and device structure and |
4563 | * initialization of the device private variable is done. Also the swapper | 5210 | * initialization of the device private variable is done. Also the swapper |
4564 | * control register is initialized to enable read and write into the I/O | 5211 | * control register is initialized to enable read and write into the I/O |
4565 | * registers of the device. | 5212 | * registers of the device. |
4566 | * Return value: | 5213 | * Return value: |
4567 | * returns 0 on success and negative on failure. | 5214 | * returns 0 on success and negative on failure. |
@@ -4572,7 +5219,6 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4572 | { | 5219 | { |
4573 | nic_t *sp; | 5220 | nic_t *sp; |
4574 | struct net_device *dev; | 5221 | struct net_device *dev; |
4575 | char *dev_name = "S2IO 10GE NIC"; | ||
4576 | int i, j, ret; | 5222 | int i, j, ret; |
4577 | int dma_flag = FALSE; | 5223 | int dma_flag = FALSE; |
4578 | u32 mac_up, mac_down; | 5224 | u32 mac_up, mac_down; |
@@ -4581,10 +5227,11 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4581 | u16 subid; | 5227 | u16 subid; |
4582 | mac_info_t *mac_control; | 5228 | mac_info_t *mac_control; |
4583 | struct config_param *config; | 5229 | struct config_param *config; |
5230 | int mode; | ||
4584 | 5231 | ||
4585 | 5232 | #ifdef CONFIG_S2IO_NAPI | |
4586 | DBG_PRINT(ERR_DBG, "Loading S2IO driver with %s\n", | 5233 | DBG_PRINT(ERR_DBG, "NAPI support has been enabled\n"); |
4587 | s2io_driver_version); | 5234 | #endif |
4588 | 5235 | ||
4589 | if ((ret = pci_enable_device(pdev))) { | 5236 | if ((ret = pci_enable_device(pdev))) { |
4590 | DBG_PRINT(ERR_DBG, | 5237 | DBG_PRINT(ERR_DBG, |
@@ -4595,7 +5242,6 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4595 | if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) { | 5242 | if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) { |
4596 | DBG_PRINT(INIT_DBG, "s2io_init_nic: Using 64bit DMA\n"); | 5243 | DBG_PRINT(INIT_DBG, "s2io_init_nic: Using 64bit DMA\n"); |
4597 | dma_flag = TRUE; | 5244 | dma_flag = TRUE; |
4598 | |||
4599 | if (pci_set_consistent_dma_mask | 5245 | if (pci_set_consistent_dma_mask |
4600 | (pdev, DMA_64BIT_MASK)) { | 5246 | (pdev, DMA_64BIT_MASK)) { |
4601 | DBG_PRINT(ERR_DBG, | 5247 | DBG_PRINT(ERR_DBG, |
@@ -4635,34 +5281,41 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4635 | memset(sp, 0, sizeof(nic_t)); | 5281 | memset(sp, 0, sizeof(nic_t)); |
4636 | sp->dev = dev; | 5282 | sp->dev = dev; |
4637 | sp->pdev = pdev; | 5283 | sp->pdev = pdev; |
4638 | sp->vendor_id = pdev->vendor; | ||
4639 | sp->device_id = pdev->device; | ||
4640 | sp->high_dma_flag = dma_flag; | 5284 | sp->high_dma_flag = dma_flag; |
4641 | sp->irq = pdev->irq; | ||
4642 | sp->device_enabled_once = FALSE; | 5285 | sp->device_enabled_once = FALSE; |
4643 | strcpy(sp->name, dev_name); | 5286 | |
5287 | if ((pdev->device == PCI_DEVICE_ID_HERC_WIN) || | ||
5288 | (pdev->device == PCI_DEVICE_ID_HERC_UNI)) | ||
5289 | sp->device_type = XFRAME_II_DEVICE; | ||
5290 | else | ||
5291 | sp->device_type = XFRAME_I_DEVICE; | ||
4644 | 5292 | ||
4645 | /* Initialize some PCI/PCI-X fields of the NIC. */ | 5293 | /* Initialize some PCI/PCI-X fields of the NIC. */ |
4646 | s2io_init_pci(sp); | 5294 | s2io_init_pci(sp); |
4647 | 5295 | ||
4648 | /* | 5296 | /* |
4649 | * Setting the device configuration parameters. | 5297 | * Setting the device configuration parameters. |
4650 | * Most of these parameters can be specified by the user during | 5298 | * Most of these parameters can be specified by the user during |
4651 | * module insertion as they are module loadable parameters. If | 5299 | * module insertion as they are module loadable parameters. If |
4652 | * these parameters are not not specified during load time, they | 5300 | * these parameters are not not specified during load time, they |
4653 | * are initialized with default values. | 5301 | * are initialized with default values. |
4654 | */ | 5302 | */ |
4655 | mac_control = &sp->mac_control; | 5303 | mac_control = &sp->mac_control; |
4656 | config = &sp->config; | 5304 | config = &sp->config; |
4657 | 5305 | ||
4658 | /* Tx side parameters. */ | 5306 | /* Tx side parameters. */ |
4659 | tx_fifo_len[0] = DEFAULT_FIFO_LEN; /* Default value. */ | 5307 | if (tx_fifo_len[0] == 0) |
5308 | tx_fifo_len[0] = DEFAULT_FIFO_LEN; /* Default value. */ | ||
4660 | config->tx_fifo_num = tx_fifo_num; | 5309 | config->tx_fifo_num = tx_fifo_num; |
4661 | for (i = 0; i < MAX_TX_FIFOS; i++) { | 5310 | for (i = 0; i < MAX_TX_FIFOS; i++) { |
4662 | config->tx_cfg[i].fifo_len = tx_fifo_len[i]; | 5311 | config->tx_cfg[i].fifo_len = tx_fifo_len[i]; |
4663 | config->tx_cfg[i].fifo_priority = i; | 5312 | config->tx_cfg[i].fifo_priority = i; |
4664 | } | 5313 | } |
4665 | 5314 | ||
5315 | /* mapping the QoS priority to the configured fifos */ | ||
5316 | for (i = 0; i < MAX_TX_FIFOS; i++) | ||
5317 | config->fifo_mapping[i] = fifo_map[config->tx_fifo_num][i]; | ||
5318 | |||
4666 | config->tx_intr_type = TXD_INT_TYPE_UTILZ; | 5319 | config->tx_intr_type = TXD_INT_TYPE_UTILZ; |
4667 | for (i = 0; i < config->tx_fifo_num; i++) { | 5320 | for (i = 0; i < config->tx_fifo_num; i++) { |
4668 | config->tx_cfg[i].f_no_snoop = | 5321 | config->tx_cfg[i].f_no_snoop = |
@@ -4675,7 +5328,8 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4675 | config->max_txds = MAX_SKB_FRAGS; | 5328 | config->max_txds = MAX_SKB_FRAGS; |
4676 | 5329 | ||
4677 | /* Rx side parameters. */ | 5330 | /* Rx side parameters. */ |
4678 | rx_ring_sz[0] = SMALL_BLK_CNT; /* Default value. */ | 5331 | if (rx_ring_sz[0] == 0) |
5332 | rx_ring_sz[0] = SMALL_BLK_CNT; /* Default value. */ | ||
4679 | config->rx_ring_num = rx_ring_num; | 5333 | config->rx_ring_num = rx_ring_num; |
4680 | for (i = 0; i < MAX_RX_RINGS; i++) { | 5334 | for (i = 0; i < MAX_RX_RINGS; i++) { |
4681 | config->rx_cfg[i].num_rxd = rx_ring_sz[i] * | 5335 | config->rx_cfg[i].num_rxd = rx_ring_sz[i] * |
@@ -4699,10 +5353,13 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4699 | for (i = 0; i < config->rx_ring_num; i++) | 5353 | for (i = 0; i < config->rx_ring_num; i++) |
4700 | atomic_set(&sp->rx_bufs_left[i], 0); | 5354 | atomic_set(&sp->rx_bufs_left[i], 0); |
4701 | 5355 | ||
5356 | /* Initialize the number of ISRs currently running */ | ||
5357 | atomic_set(&sp->isr_cnt, 0); | ||
5358 | |||
4702 | /* initialize the shared memory used by the NIC and the host */ | 5359 | /* initialize the shared memory used by the NIC and the host */ |
4703 | if (init_shared_mem(sp)) { | 5360 | if (init_shared_mem(sp)) { |
4704 | DBG_PRINT(ERR_DBG, "%s: Memory allocation failed\n", | 5361 | DBG_PRINT(ERR_DBG, "%s: Memory allocation failed\n", |
4705 | dev->name); | 5362 | __FUNCTION__); |
4706 | ret = -ENOMEM; | 5363 | ret = -ENOMEM; |
4707 | goto mem_alloc_failed; | 5364 | goto mem_alloc_failed; |
4708 | } | 5365 | } |
@@ -4743,13 +5400,17 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4743 | dev->do_ioctl = &s2io_ioctl; | 5400 | dev->do_ioctl = &s2io_ioctl; |
4744 | dev->change_mtu = &s2io_change_mtu; | 5401 | dev->change_mtu = &s2io_change_mtu; |
4745 | SET_ETHTOOL_OPS(dev, &netdev_ethtool_ops); | 5402 | SET_ETHTOOL_OPS(dev, &netdev_ethtool_ops); |
5403 | dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; | ||
5404 | dev->vlan_rx_register = s2io_vlan_rx_register; | ||
5405 | dev->vlan_rx_kill_vid = (void *)s2io_vlan_rx_kill_vid; | ||
5406 | |||
4746 | /* | 5407 | /* |
4747 | * will use eth_mac_addr() for dev->set_mac_address | 5408 | * will use eth_mac_addr() for dev->set_mac_address |
4748 | * mac address will be set every time dev->open() is called | 5409 | * mac address will be set every time dev->open() is called |
4749 | */ | 5410 | */ |
4750 | #ifdef CONFIG_S2IO_NAPI | 5411 | #if defined(CONFIG_S2IO_NAPI) |
4751 | dev->poll = s2io_poll; | 5412 | dev->poll = s2io_poll; |
4752 | dev->weight = 90; | 5413 | dev->weight = 32; |
4753 | #endif | 5414 | #endif |
4754 | 5415 | ||
4755 | dev->features |= NETIF_F_SG | NETIF_F_IP_CSUM; | 5416 | dev->features |= NETIF_F_SG | NETIF_F_IP_CSUM; |
@@ -4776,22 +5437,28 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4776 | goto set_swap_failed; | 5437 | goto set_swap_failed; |
4777 | } | 5438 | } |
4778 | 5439 | ||
4779 | /* Fix for all "FFs" MAC address problems observed on Alpha platforms */ | 5440 | /* Verify if the Herc works on the slot its placed into */ |
4780 | fix_mac_address(sp); | 5441 | if (sp->device_type & XFRAME_II_DEVICE) { |
4781 | s2io_reset(sp); | 5442 | mode = s2io_verify_pci_mode(sp); |
5443 | if (mode < 0) { | ||
5444 | DBG_PRINT(ERR_DBG, "%s: ", __FUNCTION__); | ||
5445 | DBG_PRINT(ERR_DBG, " Unsupported PCI bus mode\n"); | ||
5446 | ret = -EBADSLT; | ||
5447 | goto set_swap_failed; | ||
5448 | } | ||
5449 | } | ||
4782 | 5450 | ||
4783 | /* | 5451 | /* Not needed for Herc */ |
4784 | * Setting swapper control on the NIC, so the MAC address can be read. | 5452 | if (sp->device_type & XFRAME_I_DEVICE) { |
4785 | */ | 5453 | /* |
4786 | if (s2io_set_swapper(sp)) { | 5454 | * Fix for all "FFs" MAC address problems observed on |
4787 | DBG_PRINT(ERR_DBG, | 5455 | * Alpha platforms |
4788 | "%s: S2IO: swapper settings are wrong\n", | 5456 | */ |
4789 | dev->name); | 5457 | fix_mac_address(sp); |
4790 | ret = -EAGAIN; | 5458 | s2io_reset(sp); |
4791 | goto set_swap_failed; | ||
4792 | } | 5459 | } |
4793 | 5460 | ||
4794 | /* | 5461 | /* |
4795 | * MAC address initialization. | 5462 | * MAC address initialization. |
4796 | * For now only one mac address will be read and used. | 5463 | * For now only one mac address will be read and used. |
4797 | */ | 5464 | */ |
@@ -4814,37 +5481,28 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4814 | sp->def_mac_addr[0].mac_addr[5] = (u8) (mac_down >> 16); | 5481 | sp->def_mac_addr[0].mac_addr[5] = (u8) (mac_down >> 16); |
4815 | sp->def_mac_addr[0].mac_addr[4] = (u8) (mac_down >> 24); | 5482 | sp->def_mac_addr[0].mac_addr[4] = (u8) (mac_down >> 24); |
4816 | 5483 | ||
4817 | DBG_PRINT(INIT_DBG, | ||
4818 | "DEFAULT MAC ADDR:0x%02x-%02x-%02x-%02x-%02x-%02x\n", | ||
4819 | sp->def_mac_addr[0].mac_addr[0], | ||
4820 | sp->def_mac_addr[0].mac_addr[1], | ||
4821 | sp->def_mac_addr[0].mac_addr[2], | ||
4822 | sp->def_mac_addr[0].mac_addr[3], | ||
4823 | sp->def_mac_addr[0].mac_addr[4], | ||
4824 | sp->def_mac_addr[0].mac_addr[5]); | ||
4825 | |||
4826 | /* Set the factory defined MAC address initially */ | 5484 | /* Set the factory defined MAC address initially */ |
4827 | dev->addr_len = ETH_ALEN; | 5485 | dev->addr_len = ETH_ALEN; |
4828 | memcpy(dev->dev_addr, sp->def_mac_addr, ETH_ALEN); | 5486 | memcpy(dev->dev_addr, sp->def_mac_addr, ETH_ALEN); |
4829 | 5487 | ||
4830 | /* | 5488 | /* |
4831 | * Initialize the tasklet status and link state flags | 5489 | * Initialize the tasklet status and link state flags |
4832 | * and the card statte parameter | 5490 | * and the card state parameter |
4833 | */ | 5491 | */ |
4834 | atomic_set(&(sp->card_state), 0); | 5492 | atomic_set(&(sp->card_state), 0); |
4835 | sp->tasklet_status = 0; | 5493 | sp->tasklet_status = 0; |
4836 | sp->link_state = 0; | 5494 | sp->link_state = 0; |
4837 | 5495 | ||
4838 | |||
4839 | /* Initialize spinlocks */ | 5496 | /* Initialize spinlocks */ |
4840 | spin_lock_init(&sp->tx_lock); | 5497 | spin_lock_init(&sp->tx_lock); |
4841 | #ifndef CONFIG_S2IO_NAPI | 5498 | #ifndef CONFIG_S2IO_NAPI |
4842 | spin_lock_init(&sp->put_lock); | 5499 | spin_lock_init(&sp->put_lock); |
4843 | #endif | 5500 | #endif |
5501 | spin_lock_init(&sp->rx_lock); | ||
4844 | 5502 | ||
4845 | /* | 5503 | /* |
4846 | * SXE-002: Configure link and activity LED to init state | 5504 | * SXE-002: Configure link and activity LED to init state |
4847 | * on driver load. | 5505 | * on driver load. |
4848 | */ | 5506 | */ |
4849 | subid = sp->pdev->subsystem_device; | 5507 | subid = sp->pdev->subsystem_device; |
4850 | if ((subid & 0xFF) >= 0x07) { | 5508 | if ((subid & 0xFF) >= 0x07) { |
@@ -4864,13 +5522,61 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4864 | goto register_failed; | 5522 | goto register_failed; |
4865 | } | 5523 | } |
4866 | 5524 | ||
4867 | /* | 5525 | if (sp->device_type & XFRAME_II_DEVICE) { |
4868 | * Make Link state as off at this point, when the Link change | 5526 | DBG_PRINT(ERR_DBG, "%s: Neterion Xframe II 10GbE adapter ", |
4869 | * interrupt comes the state will be automatically changed to | 5527 | dev->name); |
5528 | DBG_PRINT(ERR_DBG, "(rev %d), Driver %s\n", | ||
5529 | get_xena_rev_id(sp->pdev), | ||
5530 | s2io_driver_version); | ||
5531 | DBG_PRINT(ERR_DBG, "MAC ADDR: %02x:%02x:%02x:%02x:%02x:%02x\n", | ||
5532 | sp->def_mac_addr[0].mac_addr[0], | ||
5533 | sp->def_mac_addr[0].mac_addr[1], | ||
5534 | sp->def_mac_addr[0].mac_addr[2], | ||
5535 | sp->def_mac_addr[0].mac_addr[3], | ||
5536 | sp->def_mac_addr[0].mac_addr[4], | ||
5537 | sp->def_mac_addr[0].mac_addr[5]); | ||
5538 | mode = s2io_print_pci_mode(sp); | ||
5539 | if (mode < 0) { | ||
5540 | DBG_PRINT(ERR_DBG, " Unsupported PCI bus mode "); | ||
5541 | ret = -EBADSLT; | ||
5542 | goto set_swap_failed; | ||
5543 | } | ||
5544 | } else { | ||
5545 | DBG_PRINT(ERR_DBG, "%s: Neterion Xframe I 10GbE adapter ", | ||
5546 | dev->name); | ||
5547 | DBG_PRINT(ERR_DBG, "(rev %d), Driver %s\n", | ||
5548 | get_xena_rev_id(sp->pdev), | ||
5549 | s2io_driver_version); | ||
5550 | DBG_PRINT(ERR_DBG, "MAC ADDR: %02x:%02x:%02x:%02x:%02x:%02x\n", | ||
5551 | sp->def_mac_addr[0].mac_addr[0], | ||
5552 | sp->def_mac_addr[0].mac_addr[1], | ||
5553 | sp->def_mac_addr[0].mac_addr[2], | ||
5554 | sp->def_mac_addr[0].mac_addr[3], | ||
5555 | sp->def_mac_addr[0].mac_addr[4], | ||
5556 | sp->def_mac_addr[0].mac_addr[5]); | ||
5557 | } | ||
5558 | |||
5559 | /* Initialize device name */ | ||
5560 | strcpy(sp->name, dev->name); | ||
5561 | if (sp->device_type & XFRAME_II_DEVICE) | ||
5562 | strcat(sp->name, ": Neterion Xframe II 10GbE adapter"); | ||
5563 | else | ||
5564 | strcat(sp->name, ": Neterion Xframe I 10GbE adapter"); | ||
5565 | |||
5566 | /* Initialize bimodal Interrupts */ | ||
5567 | sp->config.bimodal = bimodal; | ||
5568 | if (!(sp->device_type & XFRAME_II_DEVICE) && bimodal) { | ||
5569 | sp->config.bimodal = 0; | ||
5570 | DBG_PRINT(ERR_DBG,"%s:Bimodal intr not supported by Xframe I\n", | ||
5571 | dev->name); | ||
5572 | } | ||
5573 | |||
5574 | /* | ||
5575 | * Make Link state as off at this point, when the Link change | ||
5576 | * interrupt comes the state will be automatically changed to | ||
4870 | * the right state. | 5577 | * the right state. |
4871 | */ | 5578 | */ |
4872 | netif_carrier_off(dev); | 5579 | netif_carrier_off(dev); |
4873 | sp->last_link_state = LINK_DOWN; | ||
4874 | 5580 | ||
4875 | return 0; | 5581 | return 0; |
4876 | 5582 | ||
@@ -4891,11 +5597,11 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre) | |||
4891 | } | 5597 | } |
4892 | 5598 | ||
4893 | /** | 5599 | /** |
4894 | * s2io_rem_nic - Free the PCI device | 5600 | * s2io_rem_nic - Free the PCI device |
4895 | * @pdev: structure containing the PCI related information of the device. | 5601 | * @pdev: structure containing the PCI related information of the device. |
4896 | * Description: This function is called by the PCI subsystem to release a | 5602 | * Description: This function is called by the PCI subsystem to release a |
4897 | * PCI device and free up all resources held by the device. This could | 5603 | * PCI device and free up all resources held by the device. This could |
4898 | * be in response to a hot-plug event or when the driver is to be removed | 5604 | * be in response to a hot-plug event or when the driver is to be removed |
4899 | * from memory. | 5605 | * from memory. |
4900 | */ | 5606 | */ |
4901 | 5607 | ||
@@ -4919,7 +5625,6 @@ static void __devexit s2io_rem_nic(struct pci_dev *pdev) | |||
4919 | pci_disable_device(pdev); | 5625 | pci_disable_device(pdev); |
4920 | pci_release_regions(pdev); | 5626 | pci_release_regions(pdev); |
4921 | pci_set_drvdata(pdev, NULL); | 5627 | pci_set_drvdata(pdev, NULL); |
4922 | |||
4923 | free_netdev(dev); | 5628 | free_netdev(dev); |
4924 | } | 5629 | } |
4925 | 5630 | ||
@@ -4935,11 +5640,11 @@ int __init s2io_starter(void) | |||
4935 | } | 5640 | } |
4936 | 5641 | ||
4937 | /** | 5642 | /** |
4938 | * s2io_closer - Cleanup routine for the driver | 5643 | * s2io_closer - Cleanup routine for the driver |
4939 | * Description: This function is the cleanup routine for the driver. It unregisters the driver. | 5644 | * Description: This function is the cleanup routine for the driver. It unregisters the driver. |
4940 | */ | 5645 | */ |
4941 | 5646 | ||
4942 | static void s2io_closer(void) | 5647 | void s2io_closer(void) |
4943 | { | 5648 | { |
4944 | pci_unregister_driver(&s2io_driver); | 5649 | pci_unregister_driver(&s2io_driver); |
4945 | DBG_PRINT(INIT_DBG, "cleanup done\n"); | 5650 | DBG_PRINT(INIT_DBG, "cleanup done\n"); |
diff --git a/drivers/net/s2io.h b/drivers/net/s2io.h index 1711c8c3dc99..5d9270730ca2 100644 --- a/drivers/net/s2io.h +++ b/drivers/net/s2io.h | |||
@@ -31,6 +31,9 @@ | |||
31 | #define SUCCESS 0 | 31 | #define SUCCESS 0 |
32 | #define FAILURE -1 | 32 | #define FAILURE -1 |
33 | 33 | ||
34 | /* Maximum time to flicker LED when asked to identify NIC using ethtool */ | ||
35 | #define MAX_FLICKER_TIME 60000 /* 60 Secs */ | ||
36 | |||
34 | /* Maximum outstanding splits to be configured into xena. */ | 37 | /* Maximum outstanding splits to be configured into xena. */ |
35 | typedef enum xena_max_outstanding_splits { | 38 | typedef enum xena_max_outstanding_splits { |
36 | XENA_ONE_SPLIT_TRANSACTION = 0, | 39 | XENA_ONE_SPLIT_TRANSACTION = 0, |
@@ -45,10 +48,10 @@ typedef enum xena_max_outstanding_splits { | |||
45 | #define XENA_MAX_OUTSTANDING_SPLITS(n) (n << 4) | 48 | #define XENA_MAX_OUTSTANDING_SPLITS(n) (n << 4) |
46 | 49 | ||
47 | /* OS concerned variables and constants */ | 50 | /* OS concerned variables and constants */ |
48 | #define WATCH_DOG_TIMEOUT 5*HZ | 51 | #define WATCH_DOG_TIMEOUT 15*HZ |
49 | #define EFILL 0x1234 | 52 | #define EFILL 0x1234 |
50 | #define ALIGN_SIZE 127 | 53 | #define ALIGN_SIZE 127 |
51 | #define PCIX_COMMAND_REGISTER 0x62 | 54 | #define PCIX_COMMAND_REGISTER 0x62 |
52 | 55 | ||
53 | /* | 56 | /* |
54 | * Debug related variables. | 57 | * Debug related variables. |
@@ -61,7 +64,7 @@ typedef enum xena_max_outstanding_splits { | |||
61 | #define INTR_DBG 4 | 64 | #define INTR_DBG 4 |
62 | 65 | ||
63 | /* Global variable that defines the present debug level of the driver. */ | 66 | /* Global variable that defines the present debug level of the driver. */ |
64 | static int debug_level = ERR_DBG; /* Default level. */ | 67 | int debug_level = ERR_DBG; /* Default level. */ |
65 | 68 | ||
66 | /* DEBUG message print. */ | 69 | /* DEBUG message print. */ |
67 | #define DBG_PRINT(dbg_level, args...) if(!(debug_level<dbg_level)) printk(args) | 70 | #define DBG_PRINT(dbg_level, args...) if(!(debug_level<dbg_level)) printk(args) |
@@ -71,6 +74,12 @@ static int debug_level = ERR_DBG; /* Default level. */ | |||
71 | #define L4_CKSUM_OK 0xFFFF | 74 | #define L4_CKSUM_OK 0xFFFF |
72 | #define S2IO_JUMBO_SIZE 9600 | 75 | #define S2IO_JUMBO_SIZE 9600 |
73 | 76 | ||
77 | /* Driver statistics maintained by driver */ | ||
78 | typedef struct { | ||
79 | unsigned long long single_ecc_errs; | ||
80 | unsigned long long double_ecc_errs; | ||
81 | } swStat_t; | ||
82 | |||
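Counters of this kind are normally bumped from the error/alarm interrupt path and later exported through the ethtool statistics interface. A hypothetical fragment (the bit names and the sw_stat pointer are assumptions, not taken from this patch):

	/* in the alarm handler, after reading the ECC error register */
	if (ecc_errs & SINGLE_BIT_ERR)        /* assumed bit name */
		sw_stat->single_ecc_errs++;
	if (ecc_errs & DOUBLE_BIT_ERR)        /* assumed bit name */
		sw_stat->double_ecc_errs++;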
74 | /* The statistics block of Xena */ | 83 | /* The statistics block of Xena */ |
75 | typedef struct stat_block { | 84 | typedef struct stat_block { |
76 | /* Tx MAC statistics counters. */ | 85 | /* Tx MAC statistics counters. */ |
@@ -186,12 +195,90 @@ typedef struct stat_block { | |||
186 | u32 rxd_rd_cnt; | 195 | u32 rxd_rd_cnt; |
187 | u32 rxf_wr_cnt; | 196 | u32 rxf_wr_cnt; |
188 | u32 txf_rd_cnt; | 197 | u32 txf_rd_cnt; |
198 | |||
199 | /* Tx MAC statistics overflow counters. */ | ||
200 | u32 tmac_data_octets_oflow; | ||
201 | u32 tmac_frms_oflow; | ||
202 | u32 tmac_bcst_frms_oflow; | ||
203 | u32 tmac_mcst_frms_oflow; | ||
204 | u32 tmac_ucst_frms_oflow; | ||
205 | u32 tmac_ttl_octets_oflow; | ||
206 | u32 tmac_any_err_frms_oflow; | ||
207 | u32 tmac_nucst_frms_oflow; | ||
208 | u64 tmac_vlan_frms; | ||
209 | u32 tmac_drop_ip_oflow; | ||
210 | u32 tmac_vld_ip_oflow; | ||
211 | u32 tmac_rst_tcp_oflow; | ||
212 | u32 tmac_icmp_oflow; | ||
213 | u32 tpa_unknown_protocol; | ||
214 | u32 tmac_udp_oflow; | ||
215 | u32 reserved_10; | ||
216 | u32 tpa_parse_failure; | ||
217 | |||
218 | /* Rx MAC Statistics overflow counters. */ | ||
219 | u32 rmac_data_octets_oflow; | ||
220 | u32 rmac_vld_frms_oflow; | ||
221 | u32 rmac_vld_bcst_frms_oflow; | ||
222 | u32 rmac_vld_mcst_frms_oflow; | ||
223 | u32 rmac_accepted_ucst_frms_oflow; | ||
224 | u32 rmac_ttl_octets_oflow; | ||
225 | u32 rmac_discarded_frms_oflow; | ||
226 | u32 rmac_accepted_nucst_frms_oflow; | ||
227 | u32 rmac_usized_frms_oflow; | ||
228 | u32 rmac_drop_events_oflow; | ||
229 | u32 rmac_frag_frms_oflow; | ||
230 | u32 rmac_osized_frms_oflow; | ||
231 | u32 rmac_ip_oflow; | ||
232 | u32 rmac_jabber_frms_oflow; | ||
233 | u32 rmac_icmp_oflow; | ||
234 | u32 rmac_drop_ip_oflow; | ||
235 | u32 rmac_err_drp_udp_oflow; | ||
236 | u32 rmac_udp_oflow; | ||
237 | u32 reserved_11; | ||
238 | u32 rmac_pause_cnt_oflow; | ||
239 | u64 rmac_ttl_1519_4095_frms; | ||
240 | u64 rmac_ttl_4096_8191_frms; | ||
241 | u64 rmac_ttl_8192_max_frms; | ||
242 | u64 rmac_ttl_gt_max_frms; | ||
243 | u64 rmac_osized_alt_frms; | ||
244 | u64 rmac_jabber_alt_frms; | ||
245 | u64 rmac_gt_max_alt_frms; | ||
246 | u64 rmac_vlan_frms; | ||
247 | u32 rmac_len_discard; | ||
248 | u32 rmac_fcs_discard; | ||
249 | u32 rmac_pf_discard; | ||
250 | u32 rmac_da_discard; | ||
251 | u32 rmac_red_discard; | ||
252 | u32 rmac_rts_discard; | ||
253 | u32 reserved_12; | ||
254 | u32 rmac_ingm_full_discard; | ||
255 | u32 reserved_13; | ||
256 | u32 rmac_accepted_ip_oflow; | ||
257 | u32 reserved_14; | ||
258 | u32 link_fault_cnt; | ||
259 | swStat_t sw_stat; | ||
189 | } StatInfo_t; | 260 | } StatInfo_t; |
190 | 261 | ||
191 | /* Structures representing different init time configuration | 262 | /* |
263 | * Structures representing different init time configuration | ||
192 | * parameters of the NIC. | 264 | * parameters of the NIC. |
193 | */ | 265 | */ |
194 | 266 | ||
267 | #define MAX_TX_FIFOS 8 | ||
268 | #define MAX_RX_RINGS 8 | ||
269 | |||
270 | /* FIFO mappings for all possible number of fifos configured */ | ||
271 | int fifo_map[][MAX_TX_FIFOS] = { | ||
272 | {0, 0, 0, 0, 0, 0, 0, 0}, | ||
273 | {0, 0, 0, 0, 1, 1, 1, 1}, | ||
274 | {0, 0, 0, 1, 1, 1, 2, 2}, | ||
275 | {0, 0, 1, 1, 2, 2, 3, 3}, | ||
276 | {0, 0, 1, 1, 2, 2, 3, 4}, | ||
277 | {0, 0, 1, 1, 2, 3, 4, 5}, | ||
278 | {0, 0, 1, 2, 3, 4, 5, 6}, | ||
279 | {0, 1, 2, 3, 4, 5, 6, 7}, | ||
280 | }; | ||
281 | |||
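A table of this shape lets the transmit path pick a FIFO with a single lookup once the number of configured FIFOs is known; row N-1 describes the spread of traffic across N FIFOs. A sketch of how such a lookup could be used (helper name hypothetical, not part of the patch):

	/* Hypothetical helper: map a queue/flow index onto one of the
	 * tx_fifo_num FIFOs that were actually configured. */
	static inline int pick_tx_fifo(int tx_fifo_num, int index)
	{
		return fifo_map[tx_fifo_num - 1][index % MAX_TX_FIFOS];
	}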
195 | /* Maintains Per FIFO related information. */ | 282 | /* Maintains Per FIFO related information. */ |
196 | typedef struct tx_fifo_config { | 283 | typedef struct tx_fifo_config { |
197 | #define MAX_AVAILABLE_TXDS 8192 | 284 | #define MAX_AVAILABLE_TXDS 8192 |
@@ -237,14 +324,14 @@ typedef struct rx_ring_config { | |||
237 | #define NO_SNOOP_RXD_BUFFER 0x02 | 324 | #define NO_SNOOP_RXD_BUFFER 0x02 |
238 | } rx_ring_config_t; | 325 | } rx_ring_config_t; |
239 | 326 | ||
240 | /* This structure contains values of the tunable parameters | 327 | /* This structure contains values of the tunable parameters |
241 | * of the H/W | 328 | * of the H/W |
242 | */ | 329 | */ |
243 | struct config_param { | 330 | struct config_param { |
244 | /* Tx Side */ | 331 | /* Tx Side */ |
245 | u32 tx_fifo_num; /*Number of Tx FIFOs */ | 332 | u32 tx_fifo_num; /*Number of Tx FIFOs */ |
246 | #define MAX_TX_FIFOS 8 | ||
247 | 333 | ||
334 | u8 fifo_mapping[MAX_TX_FIFOS]; | ||
248 | tx_fifo_config_t tx_cfg[MAX_TX_FIFOS]; /*Per-Tx FIFO config */ | 335 | tx_fifo_config_t tx_cfg[MAX_TX_FIFOS]; /*Per-Tx FIFO config */ |
249 | u32 max_txds; /*Max no. of Tx buffer descriptor per TxDL */ | 336 | u32 max_txds; /*Max no. of Tx buffer descriptor per TxDL */ |
250 | u64 tx_intr_type; | 337 | u64 tx_intr_type; |
@@ -252,10 +339,10 @@ struct config_param { | |||
252 | 339 | ||
253 | /* Rx Side */ | 340 | /* Rx Side */ |
254 | u32 rx_ring_num; /*Number of receive rings */ | 341 | u32 rx_ring_num; /*Number of receive rings */ |
255 | #define MAX_RX_RINGS 8 | ||
256 | #define MAX_RX_BLOCKS_PER_RING 150 | 342 | #define MAX_RX_BLOCKS_PER_RING 150 |
257 | 343 | ||
258 | rx_ring_config_t rx_cfg[MAX_RX_RINGS]; /*Per-Rx Ring config */ | 344 | rx_ring_config_t rx_cfg[MAX_RX_RINGS]; /*Per-Rx Ring config */ |
345 | u8 bimodal; /*Flag for setting bimodal interrupts*/ | ||
259 | 346 | ||
260 | #define HEADER_ETHERNET_II_802_3_SIZE 14 | 347 | #define HEADER_ETHERNET_II_802_3_SIZE 14 |
261 | #define HEADER_802_2_SIZE 3 | 348 | #define HEADER_802_2_SIZE 3 |
@@ -269,6 +356,7 @@ struct config_param { | |||
269 | #define MAX_PYLD_JUMBO 9600 | 356 | #define MAX_PYLD_JUMBO 9600 |
270 | #define MAX_MTU_JUMBO (MAX_PYLD_JUMBO+18) | 357 | #define MAX_MTU_JUMBO (MAX_PYLD_JUMBO+18) |
271 | #define MAX_MTU_JUMBO_VLAN (MAX_PYLD_JUMBO+22) | 358 | #define MAX_MTU_JUMBO_VLAN (MAX_PYLD_JUMBO+22) |
359 | u16 bus_speed; | ||
272 | }; | 360 | }; |
273 | 361 | ||
274 | /* Structure representing MAC Addrs */ | 362 | /* Structure representing MAC Addrs */ |
@@ -277,7 +365,7 @@ typedef struct mac_addr { | |||
277 | } macaddr_t; | 365 | } macaddr_t; |
278 | 366 | ||
279 | /* Structure that represent every FIFO element in the BAR1 | 367 | /* Structure that represent every FIFO element in the BAR1 |
280 | * Address location. | 368 | * Address location. |
281 | */ | 369 | */ |
282 | typedef struct _TxFIFO_element { | 370 | typedef struct _TxFIFO_element { |
283 | u64 TxDL_Pointer; | 371 | u64 TxDL_Pointer; |
@@ -339,6 +427,7 @@ typedef struct _RxD_t { | |||
339 | #define RXD_FRAME_PROTO vBIT(0xFFFF,24,8) | 427 | #define RXD_FRAME_PROTO vBIT(0xFFFF,24,8) |
340 | #define RXD_FRAME_PROTO_IPV4 BIT(27) | 428 | #define RXD_FRAME_PROTO_IPV4 BIT(27) |
341 | #define RXD_FRAME_PROTO_IPV6 BIT(28) | 429 | #define RXD_FRAME_PROTO_IPV6 BIT(28) |
430 | #define RXD_FRAME_IP_FRAG BIT(29) | ||
342 | #define RXD_FRAME_PROTO_TCP BIT(30) | 431 | #define RXD_FRAME_PROTO_TCP BIT(30) |
343 | #define RXD_FRAME_PROTO_UDP BIT(31) | 432 | #define RXD_FRAME_PROTO_UDP BIT(31) |
344 | #define TCP_OR_UDP_FRAME (RXD_FRAME_PROTO_TCP | RXD_FRAME_PROTO_UDP) | 433 | #define TCP_OR_UDP_FRAME (RXD_FRAME_PROTO_TCP | RXD_FRAME_PROTO_UDP) |
@@ -346,11 +435,15 @@ typedef struct _RxD_t { | |||
346 | #define RXD_GET_L4_CKSUM(val) ((u16)(val) & 0xFFFF) | 435 | #define RXD_GET_L4_CKSUM(val) ((u16)(val) & 0xFFFF) |
347 | 436 | ||
348 | u64 Control_2; | 437 | u64 Control_2; |
438 | #define THE_RXD_MARK 0x3 | ||
439 | #define SET_RXD_MARKER vBIT(THE_RXD_MARK, 0, 2) | ||
440 | #define GET_RXD_MARKER(ctrl) ((ctrl & SET_RXD_MARKER) >> 62) | ||
441 | |||
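The marker occupies the two most-significant bits of Control_2 (vBIT places a value counting bit 0 as the MSB of the 64-bit word), so GET_RXD_MARKER recovers it with a right shift of 62. A small sketch of the stamp-and-verify idea (illustrative, not the driver's actual code):

	/* Stamp a freshly initialized descriptor ... */
	rxdp->Control_2 |= SET_RXD_MARKER;

	/* ... and later sanity-check it before trusting its contents. */
	if (GET_RXD_MARKER(rxdp->Control_2) != THE_RXD_MARK)
		DBG_PRINT(ERR_DBG, "RxD marker corrupted\n");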
349 | #ifndef CONFIG_2BUFF_MODE | 442 | #ifndef CONFIG_2BUFF_MODE |
350 | #define MASK_BUFFER0_SIZE vBIT(0xFFFF,0,16) | 443 | #define MASK_BUFFER0_SIZE vBIT(0x3FFF,2,14) |
351 | #define SET_BUFFER0_SIZE(val) vBIT(val,0,16) | 444 | #define SET_BUFFER0_SIZE(val) vBIT(val,2,14) |
352 | #else | 445 | #else |
353 | #define MASK_BUFFER0_SIZE vBIT(0xFF,0,16) | 446 | #define MASK_BUFFER0_SIZE vBIT(0xFF,2,14) |
354 | #define MASK_BUFFER1_SIZE vBIT(0xFFFF,16,16) | 447 | #define MASK_BUFFER1_SIZE vBIT(0xFFFF,16,16) |
355 | #define MASK_BUFFER2_SIZE vBIT(0xFFFF,32,16) | 448 | #define MASK_BUFFER2_SIZE vBIT(0xFFFF,32,16) |
356 | #define SET_BUFFER0_SIZE(val) vBIT(val,8,8) | 449 | #define SET_BUFFER0_SIZE(val) vBIT(val,8,8) |
@@ -363,7 +456,7 @@ typedef struct _RxD_t { | |||
363 | #define SET_NUM_TAG(val) vBIT(val,16,32) | 456 | #define SET_NUM_TAG(val) vBIT(val,16,32) |
364 | 457 | ||
365 | #ifndef CONFIG_2BUFF_MODE | 458 | #ifndef CONFIG_2BUFF_MODE |
366 | #define RXD_GET_BUFFER0_SIZE(Control_2) (u64)((Control_2 & vBIT(0xFFFF,0,16))) | 459 | #define RXD_GET_BUFFER0_SIZE(Control_2) (u64)((Control_2 & vBIT(0x3FFF,2,14))) |
367 | #else | 460 | #else |
368 | #define RXD_GET_BUFFER0_SIZE(Control_2) (u8)((Control_2 & MASK_BUFFER0_SIZE) \ | 461 | #define RXD_GET_BUFFER0_SIZE(Control_2) (u8)((Control_2 & MASK_BUFFER0_SIZE) \ |
369 | >> 48) | 462 | >> 48) |
@@ -382,7 +475,7 @@ typedef struct _RxD_t { | |||
382 | #endif | 475 | #endif |
383 | } RxD_t; | 476 | } RxD_t; |
384 | 477 | ||
385 | /* Structure that represents the Rx descriptor block which contains | 478 | /* Structure that represents the Rx descriptor block which contains |
386 | * 128 Rx descriptors. | 479 | * 128 Rx descriptors. |
387 | */ | 480 | */ |
388 | #ifndef CONFIG_2BUFF_MODE | 481 | #ifndef CONFIG_2BUFF_MODE |
@@ -392,11 +485,11 @@ typedef struct _RxD_block { | |||
392 | 485 | ||
393 | u64 reserved_0; | 486 | u64 reserved_0; |
394 | #define END_OF_BLOCK 0xFEFFFFFFFFFFFFFFULL | 487 | #define END_OF_BLOCK 0xFEFFFFFFFFFFFFFFULL |
395 | u64 reserved_1; /* 0xFEFFFFFFFFFFFFFF to mark last | 488 | u64 reserved_1; /* 0xFEFFFFFFFFFFFFFF to mark last |
396 | * Rxd in this blk */ | 489 | * Rxd in this blk */ |
397 | u64 reserved_2_pNext_RxD_block; /* Logical ptr to next */ | 490 | u64 reserved_2_pNext_RxD_block; /* Logical ptr to next */ |
398 | u64 pNext_RxD_Blk_physical; /* Buff0_ptr.In a 32 bit arch | 491 | u64 pNext_RxD_Blk_physical; /* Buff0_ptr.In a 32 bit arch |
399 | * the upper 32 bits should | 492 | * the upper 32 bits should |
400 | * be 0 */ | 493 | * be 0 */ |
401 | } RxD_block_t; | 494 | } RxD_block_t; |
402 | #else | 495 | #else |
@@ -405,13 +498,13 @@ typedef struct _RxD_block { | |||
405 | RxD_t rxd[MAX_RXDS_PER_BLOCK]; | 498 | RxD_t rxd[MAX_RXDS_PER_BLOCK]; |
406 | 499 | ||
407 | #define END_OF_BLOCK 0xFEFFFFFFFFFFFFFFULL | 500 | #define END_OF_BLOCK 0xFEFFFFFFFFFFFFFFULL |
408 | u64 reserved_1; /* 0xFEFFFFFFFFFFFFFF to mark last Rxd | 501 | u64 reserved_1; /* 0xFEFFFFFFFFFFFFFF to mark last Rxd |
409 | * in this blk */ | 502 | * in this blk */ |
410 | u64 pNext_RxD_Blk_physical; /* Phy ponter to next blk. */ | 503 | u64 pNext_RxD_Blk_physical; /* Phy ponter to next blk. */ |
411 | } RxD_block_t; | 504 | } RxD_block_t; |
412 | #define SIZE_OF_BLOCK 4096 | 505 | #define SIZE_OF_BLOCK 4096 |
413 | 506 | ||
414 | /* Structure to hold virtual addresses of Buf0 and Buf1 in | 507 | /* Structure to hold virtual addresses of Buf0 and Buf1 in |
415 | * 2buf mode. */ | 508 | * 2buf mode. */ |
416 | typedef struct bufAdd { | 509 | typedef struct bufAdd { |
417 | void *ba_0_org; | 510 | void *ba_0_org; |
@@ -423,8 +516,8 @@ typedef struct bufAdd { | |||
423 | 516 | ||
424 | /* Structure which stores all the MAC control parameters */ | 517 | /* Structure which stores all the MAC control parameters */ |
425 | 518 | ||
426 | /* This structure stores the offset of the RxD in the ring | 519 | /* This structure stores the offset of the RxD in the ring |
427 | * from which the Rx Interrupt processor can start picking | 520 | * from which the Rx Interrupt processor can start picking |
428 | * up the RxDs for processing. | 521 | * up the RxDs for processing. |
429 | */ | 522 | */ |
430 | typedef struct _rx_curr_get_info_t { | 523 | typedef struct _rx_curr_get_info_t { |
@@ -436,7 +529,7 @@ typedef struct _rx_curr_get_info_t { | |||
436 | typedef rx_curr_get_info_t rx_curr_put_info_t; | 529 | typedef rx_curr_get_info_t rx_curr_put_info_t; |
437 | 530 | ||
438 | /* This structure stores the offset of the TxDl in the FIFO | 531 | /* This structure stores the offset of the TxDl in the FIFO |
439 | * from which the Tx Interrupt processor can start picking | 532 | * from which the Tx Interrupt processor can start picking |
440 | * up the TxDLs for send complete interrupt processing. | 533 | * up the TxDLs for send complete interrupt processing. |
441 | */ | 534 | */ |
442 | typedef struct { | 535 | typedef struct { |
@@ -446,32 +539,96 @@ typedef struct { | |||
446 | 539 | ||
447 | typedef tx_curr_get_info_t tx_curr_put_info_t; | 540 | typedef tx_curr_get_info_t tx_curr_put_info_t; |
448 | 541 | ||
449 | /* Infomation related to the Tx and Rx FIFOs and Rings of Xena | 542 | /* Structure that holds the Phy and virt addresses of the Blocks */ |
450 | * is maintained in this structure. | 543 | typedef struct rx_block_info { |
451 | */ | 544 | RxD_t *block_virt_addr; |
452 | typedef struct mac_info { | 545 | dma_addr_t block_dma_addr; |
453 | /* rx side stuff */ | 546 | } rx_block_info_t; |
454 | /* Put pointer info which indictes which RxD has to be replenished | 547 | |
548 | /* pre declaration of the nic structure */ | ||
549 | typedef struct s2io_nic nic_t; | ||
550 | |||
551 | /* Ring specific structure */ | ||
552 | typedef struct ring_info { | ||
553 | /* The ring number */ | ||
554 | int ring_no; | ||
555 | |||
556 | /* | ||
557 | * Place holders for the virtual and physical addresses of | ||
558 | * all the Rx Blocks | ||
559 | */ | ||
560 | rx_block_info_t rx_blocks[MAX_RX_BLOCKS_PER_RING]; | ||
561 | int block_count; | ||
562 | int pkt_cnt; | ||
563 | |||
564 | /* | ||
565 | * Put pointer info which indictes which RxD has to be replenished | ||
455 | * with a new buffer. | 566 | * with a new buffer. |
456 | */ | 567 | */ |
457 | rx_curr_put_info_t rx_curr_put_info[MAX_RX_RINGS]; | 568 | rx_curr_put_info_t rx_curr_put_info; |
458 | 569 | ||
459 | /* Get pointer info which indictes which is the last RxD that was | 570 | /* |
571 | * Get pointer info which indictes which is the last RxD that was | ||
460 | * processed by the driver. | 572 | * processed by the driver. |
461 | */ | 573 | */ |
462 | rx_curr_get_info_t rx_curr_get_info[MAX_RX_RINGS]; | 574 | rx_curr_get_info_t rx_curr_get_info; |
463 | 575 | ||
464 | u16 rmac_pause_time; | 576 | #ifndef CONFIG_S2IO_NAPI |
465 | u16 mc_pause_threshold_q0q3; | 577 | /* Index to the absolute position of the put pointer of Rx ring */ |
466 | u16 mc_pause_threshold_q4q7; | 578 | int put_pos; |
579 | #endif | ||
580 | |||
581 | #ifdef CONFIG_2BUFF_MODE | ||
582 | /* Buffer Address store. */ | ||
583 | buffAdd_t **ba; | ||
584 | #endif | ||
585 | nic_t *nic; | ||
586 | } ring_info_t; | ||
467 | 587 | ||
588 | /* Fifo specific structure */ | ||
589 | typedef struct fifo_info { | ||
590 | /* FIFO number */ | ||
591 | int fifo_no; | ||
592 | |||
593 | /* Maximum TxDs per TxDL */ | ||
594 | int max_txds; | ||
595 | |||
596 | /* Place holder of all the TX List's Phy and Virt addresses. */ | ||
597 | list_info_hold_t *list_info; | ||
598 | |||
599 | /* | ||
600 | * Current offset within the tx FIFO where driver would write | ||
601 | * new Tx frame | ||
602 | */ | ||
603 | tx_curr_put_info_t tx_curr_put_info; | ||
604 | |||
605 | /* | ||
606 | * Current offset within tx FIFO from where the driver would start freeing | ||
607 | * the buffers | ||
608 | */ | ||
609 | tx_curr_get_info_t tx_curr_get_info; | ||
610 | |||
611 | nic_t *nic; | ||
612 | }fifo_info_t; | ||
613 | |||
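Both ring_info and fifo_info follow the same producer/consumer discipline: the put offset marks where the driver adds the next descriptor or replenished buffer, the get offset marks the next completion to be processed, and each wraps modulo the ring or FIFO length. Conceptually (illustrative helper only):

	static inline int advance_offset(int offset, int ring_len)
	{
		return (offset + 1) % ring_len;   /* wrap at the end of the ring */
	}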
614 | /* Infomation related to the Tx and Rx FIFOs and Rings of Xena | ||
615 | * is maintained in this structure. | ||
616 | */ | ||
617 | typedef struct mac_info { | ||
468 | /* tx side stuff */ | 618 | /* tx side stuff */ |
469 | /* logical pointer of start of each Tx FIFO */ | 619 | /* logical pointer of start of each Tx FIFO */ |
470 | TxFIFO_element_t __iomem *tx_FIFO_start[MAX_TX_FIFOS]; | 620 | TxFIFO_element_t __iomem *tx_FIFO_start[MAX_TX_FIFOS]; |
471 | 621 | ||
472 | /* Current offset within tx_FIFO_start, where driver would write new Tx frame*/ | 622 | /* Fifo specific structure */ |
473 | tx_curr_put_info_t tx_curr_put_info[MAX_TX_FIFOS]; | 623 | fifo_info_t fifos[MAX_TX_FIFOS]; |
474 | tx_curr_get_info_t tx_curr_get_info[MAX_TX_FIFOS]; | 624 | |
625 | /* rx side stuff */ | ||
626 | /* Ring specific structure */ | ||
627 | ring_info_t rings[MAX_RX_RINGS]; | ||
628 | |||
629 | u16 rmac_pause_time; | ||
630 | u16 mc_pause_threshold_q0q3; | ||
631 | u16 mc_pause_threshold_q4q7; | ||
475 | 632 | ||
476 | void *stats_mem; /* orignal pointer to allocated mem */ | 633 | void *stats_mem; /* orignal pointer to allocated mem */ |
477 | dma_addr_t stats_mem_phy; /* Physical address of the stat block */ | 634 | dma_addr_t stats_mem_phy; /* Physical address of the stat block */ |
@@ -485,12 +642,6 @@ typedef struct { | |||
485 | int usage_cnt; | 642 | int usage_cnt; |
486 | } usr_addr_t; | 643 | } usr_addr_t; |
487 | 644 | ||
488 | /* Structure that holds the Phy and virt addresses of the Blocks */ | ||
489 | typedef struct rx_block_info { | ||
490 | RxD_t *block_virt_addr; | ||
491 | dma_addr_t block_dma_addr; | ||
492 | } rx_block_info_t; | ||
493 | |||
494 | /* Default Tunable parameters of the NIC. */ | 645 | /* Default Tunable parameters of the NIC. */ |
495 | #define DEFAULT_FIFO_LEN 4096 | 646 | #define DEFAULT_FIFO_LEN 4096 |
496 | #define SMALL_RXD_CNT 30 * (MAX_RXDS_PER_BLOCK+1) | 647 | #define SMALL_RXD_CNT 30 * (MAX_RXDS_PER_BLOCK+1) |
@@ -499,7 +650,20 @@ typedef struct rx_block_info { | |||
499 | #define LARGE_BLK_CNT 100 | 650 | #define LARGE_BLK_CNT 100 |
500 | 651 | ||
501 | /* Structure representing one instance of the NIC */ | 652 | /* Structure representing one instance of the NIC */ |
502 | typedef struct s2io_nic { | 653 | struct s2io_nic { |
654 | #ifdef CONFIG_S2IO_NAPI | ||
655 | /* | ||
656 | * Count of packets to be processed in a given iteration, it will be indicated | ||
657 | * by the quota field of the device structure when NAPI is enabled. | ||
658 | */ | ||
659 | int pkts_to_process; | ||
660 | #endif | ||
661 | struct net_device *dev; | ||
662 | mac_info_t mac_control; | ||
663 | struct config_param config; | ||
664 | struct pci_dev *pdev; | ||
665 | void __iomem *bar0; | ||
666 | void __iomem *bar1; | ||
503 | #define MAX_MAC_SUPPORTED 16 | 667 | #define MAX_MAC_SUPPORTED 16 |
504 | #define MAX_SUPPORTED_MULTICASTS MAX_MAC_SUPPORTED | 668 | #define MAX_SUPPORTED_MULTICASTS MAX_MAC_SUPPORTED |
505 | 669 | ||
@@ -507,33 +671,20 @@ typedef struct s2io_nic { | |||
507 | macaddr_t pre_mac_addr[MAX_MAC_SUPPORTED]; | 671 | macaddr_t pre_mac_addr[MAX_MAC_SUPPORTED]; |
508 | 672 | ||
509 | struct net_device_stats stats; | 673 | struct net_device_stats stats; |
510 | void __iomem *bar0; | ||
511 | void __iomem *bar1; | ||
512 | struct config_param config; | ||
513 | mac_info_t mac_control; | ||
514 | int high_dma_flag; | 674 | int high_dma_flag; |
515 | int device_close_flag; | 675 | int device_close_flag; |
516 | int device_enabled_once; | 676 | int device_enabled_once; |
517 | 677 | ||
518 | char name[32]; | 678 | char name[50]; |
519 | struct tasklet_struct task; | 679 | struct tasklet_struct task; |
520 | volatile unsigned long tasklet_status; | 680 | volatile unsigned long tasklet_status; |
521 | struct timer_list timer; | ||
522 | struct net_device *dev; | ||
523 | struct pci_dev *pdev; | ||
524 | 681 | ||
525 | u16 vendor_id; | 682 | /* Timer that handles I/O errors/exceptions */ |
526 | u16 device_id; | 683 | struct timer_list alarm_timer; |
527 | u16 ccmd; | 684 | |
528 | u32 cbar0_1; | 685 | /* Space to back up the PCI config space */ |
529 | u32 cbar0_2; | 686 | u32 config_space[256 / sizeof(u32)]; |
530 | u32 cbar1_1; | 687 | |
531 | u32 cbar1_2; | ||
532 | u32 cirq; | ||
533 | u8 cache_line; | ||
534 | u32 rom_expansion; | ||
535 | u16 pcix_cmd; | ||
536 | u32 irq; | ||
537 | atomic_t rx_bufs_left[MAX_RX_RINGS]; | 688 | atomic_t rx_bufs_left[MAX_RX_RINGS]; |
538 | 689 | ||
539 | spinlock_t tx_lock; | 690 | spinlock_t tx_lock; |
@@ -558,27 +709,11 @@ typedef struct s2io_nic { | |||
558 | u16 tx_err_count; | 709 | u16 tx_err_count; |
559 | u16 rx_err_count; | 710 | u16 rx_err_count; |
560 | 711 | ||
561 | #ifndef CONFIG_S2IO_NAPI | ||
562 | /* Index to the absolute position of the put pointer of Rx ring. */ | ||
563 | int put_pos[MAX_RX_RINGS]; | ||
564 | #endif | ||
565 | |||
566 | /* | ||
567 | * Place holders for the virtual and physical addresses of | ||
568 | * all the Rx Blocks | ||
569 | */ | ||
570 | rx_block_info_t rx_blocks[MAX_RX_RINGS][MAX_RX_BLOCKS_PER_RING]; | ||
571 | int block_count[MAX_RX_RINGS]; | ||
572 | int pkt_cnt[MAX_RX_RINGS]; | ||
573 | |||
574 | /* Place holder of all the TX List's Phy and Virt addresses. */ | ||
575 | list_info_hold_t *list_info[MAX_TX_FIFOS]; | ||
576 | |||
577 | /* Id timer, used to blink NIC to physically identify NIC. */ | 712 | /* Id timer, used to blink NIC to physically identify NIC. */ |
578 | struct timer_list id_timer; | 713 | struct timer_list id_timer; |
579 | 714 | ||
580 | /* Restart timer, used to restart NIC if the device is stuck and | 715 | /* Restart timer, used to restart NIC if the device is stuck and |
581 | * a schedule task that will set the correct Link state once the | 716 | * a schedule task that will set the correct Link state once the |
582 | * NIC's PHY has stabilized after a state change. | 717 | * NIC's PHY has stabilized after a state change. |
583 | */ | 718 | */ |
584 | #ifdef INIT_TQUEUE | 719 | #ifdef INIT_TQUEUE |
@@ -589,12 +724,12 @@ typedef struct s2io_nic { | |||
589 | struct work_struct set_link_task; | 724 | struct work_struct set_link_task; |
590 | #endif | 725 | #endif |
591 | 726 | ||
592 | /* Flag that can be used to turn on or turn off the Rx checksum | 727 | /* Flag that can be used to turn on or turn off the Rx checksum |
593 | * offload feature. | 728 | * offload feature. |
594 | */ | 729 | */ |
595 | int rx_csum; | 730 | int rx_csum; |
596 | 731 | ||
597 | /* after blink, the adapter must be restored with original | 732 | /* after blink, the adapter must be restored with original |
598 | * values. | 733 | * values. |
599 | */ | 734 | */ |
600 | u64 adapt_ctrl_org; | 735 | u64 adapt_ctrl_org; |
@@ -604,16 +739,19 @@ typedef struct s2io_nic { | |||
604 | #define LINK_DOWN 1 | 739 | #define LINK_DOWN 1 |
605 | #define LINK_UP 2 | 740 | #define LINK_UP 2 |
606 | 741 | ||
607 | #ifdef CONFIG_2BUFF_MODE | ||
608 | /* Buffer Address store. */ | ||
609 | buffAdd_t **ba[MAX_RX_RINGS]; | ||
610 | #endif | ||
611 | int task_flag; | 742 | int task_flag; |
612 | #define CARD_DOWN 1 | 743 | #define CARD_DOWN 1 |
613 | #define CARD_UP 2 | 744 | #define CARD_UP 2 |
614 | atomic_t card_state; | 745 | atomic_t card_state; |
615 | volatile unsigned long link_state; | 746 | volatile unsigned long link_state; |
616 | } nic_t; | 747 | struct vlan_group *vlgrp; |
748 | #define XFRAME_I_DEVICE 1 | ||
749 | #define XFRAME_II_DEVICE 2 | ||
750 | u8 device_type; | ||
751 | |||
752 | spinlock_t rx_lock; | ||
753 | atomic_t isr_cnt; | ||
754 | }; | ||
617 | 755 | ||
618 | #define RESET_ERROR 1; | 756 | #define RESET_ERROR 1; |
619 | #define CMD_ERROR 2; | 757 | #define CMD_ERROR 2; |
@@ -622,9 +760,10 @@ typedef struct s2io_nic { | |||
622 | #ifndef readq | 760 | #ifndef readq |
623 | static inline u64 readq(void __iomem *addr) | 761 | static inline u64 readq(void __iomem *addr) |
624 | { | 762 | { |
625 | u64 ret = readl(addr + 4); | 763 | u64 ret = 0; |
626 | ret <<= 32; | 764 | ret = readl(addr + 4); |
627 | ret |= readl(addr); | 765 | (u64) ret <<= 32; |
766 | (u64) ret |= readl(addr); | ||
628 | 767 | ||
629 | return ret; | 768 | return ret; |
630 | } | 769 | } |
@@ -637,10 +776,10 @@ static inline void writeq(u64 val, void __iomem *addr) | |||
637 | writel((u32) (val >> 32), (addr + 4)); | 776 | writel((u32) (val >> 32), (addr + 4)); |
638 | } | 777 | } |
639 | 778 | ||
640 | /* In 32 bit modes, some registers have to be written in a | 779 | /* In 32 bit modes, some registers have to be written in a |
641 | * particular order to expect correct hardware operation. The | 780 | * particular order to expect correct hardware operation. The |
642 | * macro SPECIAL_REG_WRITE is used to perform such ordered | 781 | * macro SPECIAL_REG_WRITE is used to perform such ordered |
643 | * writes. Defines UF (Upper First) and LF (Lower First) will | 782 | * writes. Defines UF (Upper First) and LF (Lower First) will |
644 | * be used to specify the required write order. | 783 | * be used to specify the required write order. |
645 | */ | 784 | */ |
646 | #define UF 1 | 785 | #define UF 1 |
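On 32-bit platforms a 64-bit register is necessarily written as two 32-bit halves, and some registers only latch the value when the halves arrive in a specific order. A sketch of what such an ordered write amounts to (assuming UF means upper dword first and LF lower dword first, as the comment above describes; this is not the macro's literal body):

	static inline void ordered_write64(u64 val, void __iomem *addr, int order)
	{
		if (order == UF) {
			writel((u32) (val >> 32), addr + 4);   /* upper 32 bits first */
			writel((u32) val, addr);
		} else {                                       /* LF: lower 32 bits first */
			writel((u32) val, addr);
			writel((u32) (val >> 32), addr + 4);
		}
	}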
@@ -716,6 +855,7 @@ static inline void SPECIAL_REG_WRITE(u64 val, void __iomem *addr, int order) | |||
716 | #define PCC_FB_ECC_ERR vBIT(0xff, 16, 8) /* Interrupt to indicate | 855 | #define PCC_FB_ECC_ERR vBIT(0xff, 16, 8) /* Interrupt to indicate |
717 | PCC_FB_ECC Error. */ | 856 | PCC_FB_ECC Error. */ |
718 | 857 | ||
858 | #define RXD_GET_VLAN_TAG(Control_2) (u16)(Control_2 & MASK_VLAN_TAG) | ||
719 | /* | 859 | /* |
720 | * Prototype declaration. | 860 | * Prototype declaration. |
721 | */ | 861 | */ |
@@ -725,36 +865,30 @@ static void __devexit s2io_rem_nic(struct pci_dev *pdev); | |||
725 | static int init_shared_mem(struct s2io_nic *sp); | 865 | static int init_shared_mem(struct s2io_nic *sp); |
726 | static void free_shared_mem(struct s2io_nic *sp); | 866 | static void free_shared_mem(struct s2io_nic *sp); |
727 | static int init_nic(struct s2io_nic *nic); | 867 | static int init_nic(struct s2io_nic *nic); |
728 | #ifndef CONFIG_S2IO_NAPI | 868 | static void rx_intr_handler(ring_info_t *ring_data); |
729 | static void rx_intr_handler(struct s2io_nic *sp); | 869 | static void tx_intr_handler(fifo_info_t *fifo_data); |
730 | #endif | ||
731 | static void tx_intr_handler(struct s2io_nic *sp); | ||
732 | static void alarm_intr_handler(struct s2io_nic *sp); | 870 | static void alarm_intr_handler(struct s2io_nic *sp); |
733 | 871 | ||
734 | static int s2io_starter(void); | 872 | static int s2io_starter(void); |
735 | static void s2io_closer(void); | 873 | void s2io_closer(void); |
736 | static void s2io_tx_watchdog(struct net_device *dev); | 874 | static void s2io_tx_watchdog(struct net_device *dev); |
737 | static void s2io_tasklet(unsigned long dev_addr); | 875 | static void s2io_tasklet(unsigned long dev_addr); |
738 | static void s2io_set_multicast(struct net_device *dev); | 876 | static void s2io_set_multicast(struct net_device *dev); |
739 | #ifndef CONFIG_2BUFF_MODE | 877 | static int rx_osm_handler(ring_info_t *ring_data, RxD_t * rxdp); |
740 | static int rx_osm_handler(nic_t * sp, u16 len, RxD_t * rxdp, int ring_no); | 878 | void s2io_link(nic_t * sp, int link); |
741 | #else | 879 | void s2io_reset(nic_t * sp); |
742 | static int rx_osm_handler(nic_t * sp, RxD_t * rxdp, int ring_no, | 880 | #if defined(CONFIG_S2IO_NAPI) |
743 | buffAdd_t * ba); | ||
744 | #endif | ||
745 | static void s2io_link(nic_t * sp, int link); | ||
746 | static void s2io_reset(nic_t * sp); | ||
747 | #ifdef CONFIG_S2IO_NAPI | ||
748 | static int s2io_poll(struct net_device *dev, int *budget); | 881 | static int s2io_poll(struct net_device *dev, int *budget); |
749 | #endif | 882 | #endif |
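For context, a 2.6-era NAPI poll routine with this signature is bounded by the device quota and the caller's budget; a minimal sketch of the conventional pattern (not this driver's implementation) looks like:

	static int example_poll(struct net_device *dev, int *budget)
	{
		int to_process = min(dev->quota, *budget);
		int done = 0;

		/* process up to to_process received packets here, counting them in done */

		dev->quota -= done;
		*budget -= done;

		if (done < to_process) {
			netif_rx_complete(dev);   /* all done: re-enable Rx interrupts */
			return 0;
		}
		return 1;                         /* more work pending: stay on the poll list */
	}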
750 | static void s2io_init_pci(nic_t * sp); | 883 | static void s2io_init_pci(nic_t * sp); |
751 | static int s2io_set_mac_addr(struct net_device *dev, u8 * addr); | 884 | int s2io_set_mac_addr(struct net_device *dev, u8 * addr); |
885 | static void s2io_alarm_handle(unsigned long data); | ||
752 | static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs); | 886 | static irqreturn_t s2io_isr(int irq, void *dev_id, struct pt_regs *regs); |
753 | static int verify_xena_quiescence(u64 val64, int flag); | 887 | static int verify_xena_quiescence(nic_t *sp, u64 val64, int flag); |
754 | static struct ethtool_ops netdev_ethtool_ops; | 888 | static struct ethtool_ops netdev_ethtool_ops; |
755 | static void s2io_set_link(unsigned long data); | 889 | static void s2io_set_link(unsigned long data); |
756 | static int s2io_set_swapper(nic_t * sp); | 890 | int s2io_set_swapper(nic_t * sp); |
757 | static void s2io_card_down(nic_t * nic); | 891 | static void s2io_card_down(nic_t *nic); |
758 | static int s2io_card_up(nic_t * nic); | 892 | static int s2io_card_up(nic_t *nic); |
759 | 893 | int get_xena_rev_id(struct pci_dev *pdev); | |
760 | #endif /* _S2IO_H */ | 894 | #endif /* _S2IO_H */ |
diff --git a/drivers/net/skge.c b/drivers/net/skge.c index f15739481d62..d7c98515fdfd 100644 --- a/drivers/net/skge.c +++ b/drivers/net/skge.c | |||
@@ -42,7 +42,7 @@ | |||
42 | #include "skge.h" | 42 | #include "skge.h" |
43 | 43 | ||
44 | #define DRV_NAME "skge" | 44 | #define DRV_NAME "skge" |
45 | #define DRV_VERSION "0.8" | 45 | #define DRV_VERSION "0.9" |
46 | #define PFX DRV_NAME " " | 46 | #define PFX DRV_NAME " " |
47 | 47 | ||
48 | #define DEFAULT_TX_RING_SIZE 128 | 48 | #define DEFAULT_TX_RING_SIZE 128 |
@@ -79,8 +79,8 @@ static const struct pci_device_id skge_id_table[] = { | |||
79 | { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4320) }, | 79 | { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4320) }, |
80 | { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x5005) }, /* Belkin */ | 80 | { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x5005) }, /* Belkin */ |
81 | { PCI_DEVICE(PCI_VENDOR_ID_CNET, PCI_DEVICE_ID_CNET_GIGACARD) }, | 81 | { PCI_DEVICE(PCI_VENDOR_ID_CNET, PCI_DEVICE_ID_CNET_GIGACARD) }, |
82 | { PCI_DEVICE(PCI_VENDOR_ID_LINKSYS, PCI_DEVICE_ID_LINKSYS_EG1032) }, | ||
83 | { PCI_DEVICE(PCI_VENDOR_ID_LINKSYS, PCI_DEVICE_ID_LINKSYS_EG1064) }, | 82 | { PCI_DEVICE(PCI_VENDOR_ID_LINKSYS, PCI_DEVICE_ID_LINKSYS_EG1064) }, |
83 | { PCI_VENDOR_ID_LINKSYS, 0x1032, PCI_ANY_ID, 0x0015, }, | ||
84 | { 0 } | 84 | { 0 } |
85 | }; | 85 | }; |
86 | MODULE_DEVICE_TABLE(pci, skge_id_table); | 86 | MODULE_DEVICE_TABLE(pci, skge_id_table); |
@@ -189,7 +189,7 @@ static u32 skge_supported_modes(const struct skge_hw *hw) | |||
189 | { | 189 | { |
190 | u32 supported; | 190 | u32 supported; |
191 | 191 | ||
192 | if (iscopper(hw)) { | 192 | if (hw->copper) { |
193 | supported = SUPPORTED_10baseT_Half | 193 | supported = SUPPORTED_10baseT_Half |
194 | | SUPPORTED_10baseT_Full | 194 | | SUPPORTED_10baseT_Full |
195 | | SUPPORTED_100baseT_Half | 195 | | SUPPORTED_100baseT_Half |
@@ -222,7 +222,7 @@ static int skge_get_settings(struct net_device *dev, | |||
222 | ecmd->transceiver = XCVR_INTERNAL; | 222 | ecmd->transceiver = XCVR_INTERNAL; |
223 | ecmd->supported = skge_supported_modes(hw); | 223 | ecmd->supported = skge_supported_modes(hw); |
224 | 224 | ||
225 | if (iscopper(hw)) { | 225 | if (hw->copper) { |
226 | ecmd->port = PORT_TP; | 226 | ecmd->port = PORT_TP; |
227 | ecmd->phy_address = hw->phy_addr; | 227 | ecmd->phy_address = hw->phy_addr; |
228 | } else | 228 | } else |
@@ -876,6 +876,9 @@ static int skge_rx_fill(struct skge_port *skge) | |||
876 | 876 | ||
877 | static void skge_link_up(struct skge_port *skge) | 877 | static void skge_link_up(struct skge_port *skge) |
878 | { | 878 | { |
879 | skge_write8(skge->hw, SK_REG(skge->port, LNK_LED_REG), | ||
880 | LED_BLK_OFF|LED_SYNC_OFF|LED_ON); | ||
881 | |||
879 | netif_carrier_on(skge->netdev); | 882 | netif_carrier_on(skge->netdev); |
880 | if (skge->tx_avail > MAX_SKB_FRAGS + 1) | 883 | if (skge->tx_avail > MAX_SKB_FRAGS + 1) |
881 | netif_wake_queue(skge->netdev); | 884 | netif_wake_queue(skge->netdev); |
@@ -894,6 +897,7 @@ static void skge_link_up(struct skge_port *skge) | |||
894 | 897 | ||
895 | static void skge_link_down(struct skge_port *skge) | 898 | static void skge_link_down(struct skge_port *skge) |
896 | { | 899 | { |
900 | skge_write8(skge->hw, SK_REG(skge->port, LNK_LED_REG), LED_OFF); | ||
897 | netif_carrier_off(skge->netdev); | 901 | netif_carrier_off(skge->netdev); |
898 | netif_stop_queue(skge->netdev); | 902 | netif_stop_queue(skge->netdev); |
899 | 903 | ||
@@ -1599,7 +1603,7 @@ static void yukon_init(struct skge_hw *hw, int port) | |||
1599 | adv = PHY_AN_CSMA; | 1603 | adv = PHY_AN_CSMA; |
1600 | 1604 | ||
1601 | if (skge->autoneg == AUTONEG_ENABLE) { | 1605 | if (skge->autoneg == AUTONEG_ENABLE) { |
1602 | if (iscopper(hw)) { | 1606 | if (hw->copper) { |
1603 | if (skge->advertising & ADVERTISED_1000baseT_Full) | 1607 | if (skge->advertising & ADVERTISED_1000baseT_Full) |
1604 | ct1000 |= PHY_M_1000C_AFD; | 1608 | ct1000 |= PHY_M_1000C_AFD; |
1605 | if (skge->advertising & ADVERTISED_1000baseT_Half) | 1609 | if (skge->advertising & ADVERTISED_1000baseT_Half) |
@@ -1691,7 +1695,7 @@ static void yukon_mac_init(struct skge_hw *hw, int port) | |||
1691 | /* Set hardware config mode */ | 1695 | /* Set hardware config mode */ |
1692 | reg = GPC_INT_POL_HI | GPC_DIS_FC | GPC_DIS_SLEEP | | 1696 | reg = GPC_INT_POL_HI | GPC_DIS_FC | GPC_DIS_SLEEP | |
1693 | GPC_ENA_XC | GPC_ANEG_ADV_ALL_M | GPC_ENA_PAUSE; | 1697 | GPC_ENA_XC | GPC_ANEG_ADV_ALL_M | GPC_ENA_PAUSE; |
1694 | reg |= iscopper(hw) ? GPC_HWCFG_GMII_COP : GPC_HWCFG_GMII_FIB; | 1698 | reg |= hw->copper ? GPC_HWCFG_GMII_COP : GPC_HWCFG_GMII_FIB; |
1695 | 1699 | ||
1696 | /* Clear GMC reset */ | 1700 | /* Clear GMC reset */ |
1697 | skge_write32(hw, SK_REG(port, GPHY_CTRL), reg | GPC_RST_SET); | 1701 | skge_write32(hw, SK_REG(port, GPHY_CTRL), reg | GPC_RST_SET); |
@@ -1780,7 +1784,12 @@ static void yukon_mac_init(struct skge_hw *hw, int port) | |||
1780 | reg &= ~GMF_RX_F_FL_ON; | 1784 | reg &= ~GMF_RX_F_FL_ON; |
1781 | skge_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_CLR); | 1785 | skge_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_CLR); |
1782 | skge_write16(hw, SK_REG(port, RX_GMF_CTRL_T), reg); | 1786 | skge_write16(hw, SK_REG(port, RX_GMF_CTRL_T), reg); |
1783 | skge_write16(hw, SK_REG(port, RX_GMF_FL_THR), RX_GMF_FL_THR_DEF); | 1787 | /* |
1788 | * because Pause Packet Truncation in GMAC is not working | ||
1789 | * we have to increase the Flush Threshold to 64 bytes | ||
1790 | * in order to flush pause packets in Rx FIFO on Yukon-1 | ||
1791 | */ | ||
1792 | skge_write16(hw, SK_REG(port, RX_GMF_FL_THR), RX_GMF_FL_THR_DEF+1); | ||
1784 | 1793 | ||
1785 | /* Configure Tx MAC FIFO */ | 1794 | /* Configure Tx MAC FIFO */ |
1786 | skge_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_CLR); | 1795 | skge_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_CLR); |
@@ -2670,18 +2679,6 @@ static void skge_error_irq(struct skge_hw *hw) | |||
2670 | /* Timestamp (unused) overflow */ | 2679 | /* Timestamp (unused) overflow */ |
2671 | if (hwstatus & IS_IRQ_TIST_OV) | 2680 | if (hwstatus & IS_IRQ_TIST_OV) |
2672 | skge_write8(hw, GMAC_TI_ST_CTRL, GMT_ST_CLR_IRQ); | 2681 | skge_write8(hw, GMAC_TI_ST_CTRL, GMT_ST_CLR_IRQ); |
2673 | |||
2674 | if (hwstatus & IS_IRQ_SENSOR) { | ||
2675 | /* no sensors on 32-bit Yukon */ | ||
2676 | if (!(skge_read16(hw, B0_CTST) & CS_BUS_SLOT_SZ)) { | ||
2677 | printk(KERN_ERR PFX "ignoring bogus sensor interrups\n"); | ||
2678 | skge_write32(hw, B0_HWE_IMSK, | ||
2679 | IS_ERR_MSK & ~IS_IRQ_SENSOR); | ||
2680 | } else | ||
2681 | printk(KERN_WARNING PFX "sensor interrupt\n"); | ||
2682 | } | ||
2683 | |||
2684 | |||
2685 | } | 2682 | } |
2686 | 2683 | ||
2687 | if (hwstatus & IS_RAM_RD_PAR) { | 2684 | if (hwstatus & IS_RAM_RD_PAR) { |
@@ -2712,9 +2709,10 @@ static void skge_error_irq(struct skge_hw *hw) | |||
2712 | 2709 | ||
2713 | skge_pci_clear(hw); | 2710 | skge_pci_clear(hw); |
2714 | 2711 | ||
2712 | /* if error still set then just ignore it */ | ||
2715 | hwstatus = skge_read32(hw, B0_HWE_ISRC); | 2713 | hwstatus = skge_read32(hw, B0_HWE_ISRC); |
2716 | if (hwstatus & IS_IRQ_STAT) { | 2714 | if (hwstatus & IS_IRQ_STAT) { |
2717 | printk(KERN_WARNING PFX "IRQ status %x: still set ignoring hardware errors\n", | 2715 | pr_debug("IRQ status %x: still set ignoring hardware errors\n", |
2718 | hwstatus); | 2716 | hwstatus); |
2719 | hw->intr_mask &= ~IS_HW_ERR; | 2717 | hw->intr_mask &= ~IS_HW_ERR; |
2720 | } | 2718 | } |
@@ -2876,7 +2874,7 @@ static const char *skge_board_name(const struct skge_hw *hw) | |||
2876 | static int skge_reset(struct skge_hw *hw) | 2874 | static int skge_reset(struct skge_hw *hw) |
2877 | { | 2875 | { |
2878 | u16 ctst; | 2876 | u16 ctst; |
2879 | u8 t8, mac_cfg; | 2877 | u8 t8, mac_cfg, pmd_type, phy_type; |
2880 | int i; | 2878 | int i; |
2881 | 2879 | ||
2882 | ctst = skge_read16(hw, B0_CTST); | 2880 | ctst = skge_read16(hw, B0_CTST); |
@@ -2895,18 +2893,19 @@ static int skge_reset(struct skge_hw *hw) | |||
2895 | ctst & (CS_CLK_RUN_HOT|CS_CLK_RUN_RST|CS_CLK_RUN_ENA)); | 2893 | ctst & (CS_CLK_RUN_HOT|CS_CLK_RUN_RST|CS_CLK_RUN_ENA)); |
2896 | 2894 | ||
2897 | hw->chip_id = skge_read8(hw, B2_CHIP_ID); | 2895 | hw->chip_id = skge_read8(hw, B2_CHIP_ID); |
2898 | hw->phy_type = skge_read8(hw, B2_E_1) & 0xf; | 2896 | phy_type = skge_read8(hw, B2_E_1) & 0xf; |
2899 | hw->pmd_type = skge_read8(hw, B2_PMD_TYP); | 2897 | pmd_type = skge_read8(hw, B2_PMD_TYP); |
2898 | hw->copper = (pmd_type == 'T' || pmd_type == '1'); | ||
2900 | 2899 | ||
2901 | switch (hw->chip_id) { | 2900 | switch (hw->chip_id) { |
2902 | case CHIP_ID_GENESIS: | 2901 | case CHIP_ID_GENESIS: |
2903 | switch (hw->phy_type) { | 2902 | switch (phy_type) { |
2904 | case SK_PHY_BCOM: | 2903 | case SK_PHY_BCOM: |
2905 | hw->phy_addr = PHY_ADDR_BCOM; | 2904 | hw->phy_addr = PHY_ADDR_BCOM; |
2906 | break; | 2905 | break; |
2907 | default: | 2906 | default: |
2908 | printk(KERN_ERR PFX "%s: unsupported phy type 0x%x\n", | 2907 | printk(KERN_ERR PFX "%s: unsupported phy type 0x%x\n", |
2909 | pci_name(hw->pdev), hw->phy_type); | 2908 | pci_name(hw->pdev), phy_type); |
2910 | return -EOPNOTSUPP; | 2909 | return -EOPNOTSUPP; |
2911 | } | 2910 | } |
2912 | break; | 2911 | break; |
@@ -2914,13 +2913,10 @@ static int skge_reset(struct skge_hw *hw) | |||
2914 | case CHIP_ID_YUKON: | 2913 | case CHIP_ID_YUKON: |
2915 | case CHIP_ID_YUKON_LITE: | 2914 | case CHIP_ID_YUKON_LITE: |
2916 | case CHIP_ID_YUKON_LP: | 2915 | case CHIP_ID_YUKON_LP: |
2917 | if (hw->phy_type < SK_PHY_MARV_COPPER && hw->pmd_type != 'S') | 2916 | if (phy_type < SK_PHY_MARV_COPPER && pmd_type != 'S') |
2918 | hw->phy_type = SK_PHY_MARV_COPPER; | 2917 | hw->copper = 1; |
2919 | 2918 | ||
2920 | hw->phy_addr = PHY_ADDR_MARV; | 2919 | hw->phy_addr = PHY_ADDR_MARV; |
2921 | if (!iscopper(hw)) | ||
2922 | hw->phy_type = SK_PHY_MARV_FIBER; | ||
2923 | |||
2924 | break; | 2920 | break; |
2925 | 2921 | ||
2926 | default: | 2922 | default: |
@@ -2948,12 +2944,20 @@ static int skge_reset(struct skge_hw *hw) | |||
2948 | else | 2944 | else |
2949 | hw->ram_size = t8 * 4096; | 2945 | hw->ram_size = t8 * 4096; |
2950 | 2946 | ||
2947 | hw->intr_mask = IS_HW_ERR | IS_EXT_REG; | ||
2951 | if (hw->chip_id == CHIP_ID_GENESIS) | 2948 | if (hw->chip_id == CHIP_ID_GENESIS) |
2952 | genesis_init(hw); | 2949 | genesis_init(hw); |
2953 | else { | 2950 | else { |
2954 | /* switch power to VCC (WA for VAUX problem) */ | 2951 | /* switch power to VCC (WA for VAUX problem) */ |
2955 | skge_write8(hw, B0_POWER_CTRL, | 2952 | skge_write8(hw, B0_POWER_CTRL, |
2956 | PC_VAUX_ENA | PC_VCC_ENA | PC_VAUX_OFF | PC_VCC_ON); | 2953 | PC_VAUX_ENA | PC_VCC_ENA | PC_VAUX_OFF | PC_VCC_ON); |
2954 | /* avoid boards with stuck Hardware error bits */ | ||
2955 | if ((skge_read32(hw, B0_ISRC) & IS_HW_ERR) && | ||
2956 | (skge_read32(hw, B0_HWE_ISRC) & IS_IRQ_SENSOR)) { | ||
2957 | printk(KERN_WARNING PFX "stuck hardware sensor bit\n"); | ||
2958 | hw->intr_mask &= ~IS_HW_ERR; | ||
2959 | } | ||
2960 | |||
2957 | for (i = 0; i < hw->ports; i++) { | 2961 | for (i = 0; i < hw->ports; i++) { |
2958 | skge_write16(hw, SK_REG(i, GMAC_LINK_CTRL), GMLC_RST_SET); | 2962 | skge_write16(hw, SK_REG(i, GMAC_LINK_CTRL), GMLC_RST_SET); |
2959 | skge_write16(hw, SK_REG(i, GMAC_LINK_CTRL), GMLC_RST_CLR); | 2963 | skge_write16(hw, SK_REG(i, GMAC_LINK_CTRL), GMLC_RST_CLR); |
@@ -2994,7 +2998,6 @@ static int skge_reset(struct skge_hw *hw) | |||
2994 | skge_write32(hw, B2_IRQM_INI, skge_usecs2clk(hw, 100)); | 2998 | skge_write32(hw, B2_IRQM_INI, skge_usecs2clk(hw, 100)); |
2995 | skge_write32(hw, B2_IRQM_CTRL, TIM_START); | 2999 | skge_write32(hw, B2_IRQM_CTRL, TIM_START); |
2996 | 3000 | ||
2997 | hw->intr_mask = IS_HW_ERR | IS_EXT_REG; | ||
2998 | skge_write32(hw, B0_IMSK, hw->intr_mask); | 3001 | skge_write32(hw, B0_IMSK, hw->intr_mask); |
2999 | 3002 | ||
3000 | if (hw->chip_id != CHIP_ID_GENESIS) | 3003 | if (hw->chip_id != CHIP_ID_GENESIS) |
diff --git a/drivers/net/skge.h b/drivers/net/skge.h index b432f1bb8168..f1680beb8e68 100644 --- a/drivers/net/skge.h +++ b/drivers/net/skge.h | |||
@@ -214,8 +214,6 @@ enum { | |||
214 | 214 | ||
215 | /* B2_IRQM_HWE_MSK 32 bit IRQ Moderation HW Error Mask */ | 215 | /* B2_IRQM_HWE_MSK 32 bit IRQ Moderation HW Error Mask */ |
216 | enum { | 216 | enum { |
217 | IS_ERR_MSK = 0x00003fff,/* All Error bits */ | ||
218 | |||
219 | IS_IRQ_TIST_OV = 1<<13, /* Time Stamp Timer Overflow (YUKON only) */ | 217 | IS_IRQ_TIST_OV = 1<<13, /* Time Stamp Timer Overflow (YUKON only) */ |
220 | IS_IRQ_SENSOR = 1<<12, /* IRQ from Sensor (YUKON only) */ | 218 | IS_IRQ_SENSOR = 1<<12, /* IRQ from Sensor (YUKON only) */ |
221 | IS_IRQ_MST_ERR = 1<<11, /* IRQ master error detected */ | 219 | IS_IRQ_MST_ERR = 1<<11, /* IRQ master error detected */ |
@@ -230,6 +228,12 @@ enum { | |||
230 | IS_M2_PAR_ERR = 1<<2, /* MAC 2 Parity Error */ | 228 | IS_M2_PAR_ERR = 1<<2, /* MAC 2 Parity Error */ |
231 | IS_R1_PAR_ERR = 1<<1, /* Queue R1 Parity Error */ | 229 | IS_R1_PAR_ERR = 1<<1, /* Queue R1 Parity Error */ |
232 | IS_R2_PAR_ERR = 1<<0, /* Queue R2 Parity Error */ | 230 | IS_R2_PAR_ERR = 1<<0, /* Queue R2 Parity Error */ |
231 | |||
232 | IS_ERR_MSK = IS_IRQ_MST_ERR | IS_IRQ_STAT | ||
233 | | IS_NO_STAT_M1 | IS_NO_STAT_M2 | ||
234 | | IS_RAM_RD_PAR | IS_RAM_WR_PAR | ||
235 | | IS_M1_PAR_ERR | IS_M2_PAR_ERR | ||
236 | | IS_R1_PAR_ERR | IS_R2_PAR_ERR, | ||
233 | }; | 237 | }; |
234 | 238 | ||
235 | /* B2_TST_CTRL1 8 bit Test Control Register 1 */ | 239 | /* B2_TST_CTRL1 8 bit Test Control Register 1 */ |
@@ -2456,24 +2460,17 @@ struct skge_hw { | |||
2456 | 2460 | ||
2457 | u8 chip_id; | 2461 | u8 chip_id; |
2458 | u8 chip_rev; | 2462 | u8 chip_rev; |
2459 | u8 phy_type; | 2463 | u8 copper; |
2460 | u8 pmd_type; | ||
2461 | u16 phy_addr; | ||
2462 | u8 ports; | 2464 | u8 ports; |
2463 | 2465 | ||
2464 | u32 ram_size; | 2466 | u32 ram_size; |
2465 | u32 ram_offset; | 2467 | u32 ram_offset; |
2468 | u16 phy_addr; | ||
2466 | 2469 | ||
2467 | struct tasklet_struct ext_tasklet; | 2470 | struct tasklet_struct ext_tasklet; |
2468 | spinlock_t phy_lock; | 2471 | spinlock_t phy_lock; |
2469 | }; | 2472 | }; |
2470 | 2473 | ||
2471 | |||
2472 | static inline int iscopper(const struct skge_hw *hw) | ||
2473 | { | ||
2474 | return (hw->pmd_type == 'T'); | ||
2475 | } | ||
2476 | |||
2477 | enum { | 2474 | enum { |
2478 | FLOW_MODE_NONE = 0, /* No Flow-Control */ | 2475 | FLOW_MODE_NONE = 0, /* No Flow-Control */ |
2479 | FLOW_MODE_LOC_SEND = 1, /* Local station sends PAUSE */ | 2476 | FLOW_MODE_LOC_SEND = 1, /* Local station sends PAUSE */ |
diff --git a/drivers/net/smc-ultra.c b/drivers/net/smc-ultra.c index 6d9dae60a697..ba8593ac3f8a 100644 --- a/drivers/net/smc-ultra.c +++ b/drivers/net/smc-ultra.c | |||
@@ -68,6 +68,7 @@ static const char version[] = | |||
68 | #include <linux/etherdevice.h> | 68 | #include <linux/etherdevice.h> |
69 | 69 | ||
70 | #include <asm/io.h> | 70 | #include <asm/io.h> |
71 | #include <asm/irq.h> | ||
71 | #include <asm/system.h> | 72 | #include <asm/system.h> |
72 | 73 | ||
73 | #include "8390.h" | 74 | #include "8390.h" |
diff --git a/drivers/net/sonic.c b/drivers/net/sonic.c index cdc9cc873e06..90b818a8de6e 100644 --- a/drivers/net/sonic.c +++ b/drivers/net/sonic.c | |||
@@ -1,6 +1,11 @@ | |||
1 | /* | 1 | /* |
2 | * sonic.c | 2 | * sonic.c |
3 | * | 3 | * |
4 | * (C) 2005 Finn Thain | ||
5 | * | ||
6 | * Converted to DMA API, added zero-copy buffer handling, and | ||
7 | * (from the mac68k project) introduced dhd's support for 16-bit cards. | ||
8 | * | ||
4 | * (C) 1996,1998 by Thomas Bogendoerfer (tsbogend@alpha.franken.de) | 9 | * (C) 1996,1998 by Thomas Bogendoerfer (tsbogend@alpha.franken.de) |
5 | * | 10 | * |
6 | * This driver is based on work from Andreas Busse, but most of | 11 | * This driver is based on work from Andreas Busse, but most of |
@@ -9,12 +14,23 @@ | |||
9 | * (C) 1995 by Andreas Busse (andy@waldorf-gmbh.de) | 14 | * (C) 1995 by Andreas Busse (andy@waldorf-gmbh.de) |
10 | * | 15 | * |
11 | * Core code included by system sonic drivers | 16 | * Core code included by system sonic drivers |
17 | * | ||
18 | * And... partially rewritten again by David Huggins-Daines in order | ||
19 | * to cope with screwed up Macintosh NICs that may or may not use | ||
20 | * 16-bit DMA. | ||
21 | * | ||
22 | * (C) 1999 David Huggins-Daines <dhd@debian.org> | ||
23 | * | ||
12 | */ | 24 | */ |
13 | 25 | ||
14 | /* | 26 | /* |
15 | * Sources: Olivetti M700-10 Risc Personal Computer hardware handbook, | 27 | * Sources: Olivetti M700-10 Risc Personal Computer hardware handbook, |
16 | * National Semiconductors data sheet for the DP83932B Sonic Ethernet | 28 | * National Semiconductors data sheet for the DP83932B Sonic Ethernet |
17 | * controller, and the files "8390.c" and "skeleton.c" in this directory. | 29 | * controller, and the files "8390.c" and "skeleton.c" in this directory. |
30 | * | ||
31 | * Additional sources: Nat Semi data sheet for the DP83932C and Nat Semi | ||
32 | * Application Note AN-746, the files "lance.c" and "ibmlana.c". See also | ||
33 | * the NetBSD file "sys/arch/mac68k/dev/if_sn.c". | ||
18 | */ | 34 | */ |
19 | 35 | ||
20 | 36 | ||
@@ -28,6 +44,9 @@ | |||
28 | */ | 44 | */ |
29 | static int sonic_open(struct net_device *dev) | 45 | static int sonic_open(struct net_device *dev) |
30 | { | 46 | { |
47 | struct sonic_local *lp = netdev_priv(dev); | ||
48 | int i; | ||
49 | |||
31 | if (sonic_debug > 2) | 50 | if (sonic_debug > 2) |
32 | printk("sonic_open: initializing sonic driver.\n"); | 51 | printk("sonic_open: initializing sonic driver.\n"); |
33 | 52 | ||
@@ -40,14 +59,59 @@ static int sonic_open(struct net_device *dev) | |||
40 | * This means that during execution of the handler interrupt are disabled | 59 | * This means that during execution of the handler interrupt are disabled |
41 | * covering another bug otherwise corrupting data. This doesn't mean | 60 | * covering another bug otherwise corrupting data. This doesn't mean |
42 | * this glue works ok under all situations. | 61 | * this glue works ok under all situations. |
62 | * | ||
63 | * Note (dhd): this also appears to prevent lockups on the Macintrash | ||
64 | * when more than one Ethernet card is installed (knock on wood) | ||
65 | * | ||
66 | * Note (fthain): whether the above is still true is anyones guess. Certainly | ||
67 | * the buffer handling algorithms will not tolerate re-entrance without some | ||
68 | * mutual exclusion added. Anyway, the memcpy has now been eliminated from the | ||
69 | * rx code to make this a faster "fast interrupt". | ||
43 | */ | 70 | */ |
44 | // if (sonic_request_irq(dev->irq, &sonic_interrupt, 0, "sonic", dev)) { | 71 | if (request_irq(dev->irq, &sonic_interrupt, SONIC_IRQ_FLAG, "sonic", dev)) { |
45 | if (sonic_request_irq(dev->irq, &sonic_interrupt, SA_INTERRUPT, | 72 | printk(KERN_ERR "\n%s: unable to get IRQ %d .\n", dev->name, dev->irq); |
46 | "sonic", dev)) { | ||
47 | printk("\n%s: unable to get IRQ %d .\n", dev->name, dev->irq); | ||
48 | return -EAGAIN; | 73 | return -EAGAIN; |
49 | } | 74 | } |
50 | 75 | ||
76 | for (i = 0; i < SONIC_NUM_RRS; i++) { | ||
77 | struct sk_buff *skb = dev_alloc_skb(SONIC_RBSIZE + 2); | ||
78 | if (skb == NULL) { | ||
79 | while(i > 0) { /* free any that were allocated successfully */ | ||
80 | i--; | ||
81 | dev_kfree_skb(lp->rx_skb[i]); | ||
82 | lp->rx_skb[i] = NULL; | ||
83 | } | ||
84 | printk(KERN_ERR "%s: couldn't allocate receive buffers\n", | ||
85 | dev->name); | ||
86 | return -ENOMEM; | ||
87 | } | ||
88 | skb->dev = dev; | ||
89 | /* align IP header unless DMA requires otherwise */ | ||
90 | if (SONIC_BUS_SCALE(lp->dma_bitmode) == 2) | ||
91 | skb_reserve(skb, 2); | ||
92 | lp->rx_skb[i] = skb; | ||
93 | } | ||
94 | |||
95 | for (i = 0; i < SONIC_NUM_RRS; i++) { | ||
96 | dma_addr_t laddr = dma_map_single(lp->device, skb_put(lp->rx_skb[i], SONIC_RBSIZE), | ||
97 | SONIC_RBSIZE, DMA_FROM_DEVICE); | ||
98 | if (!laddr) { | ||
99 | while(i > 0) { /* free any that were mapped successfully */ | ||
100 | i--; | ||
101 | dma_unmap_single(lp->device, lp->rx_laddr[i], SONIC_RBSIZE, DMA_FROM_DEVICE); | ||
102 | lp->rx_laddr[i] = (dma_addr_t)0; | ||
103 | } | ||
104 | for (i = 0; i < SONIC_NUM_RRS; i++) { | ||
105 | dev_kfree_skb(lp->rx_skb[i]); | ||
106 | lp->rx_skb[i] = NULL; | ||
107 | } | ||
108 | printk(KERN_ERR "%s: couldn't map rx DMA buffers\n", | ||
109 | dev->name); | ||
110 | return -ENOMEM; | ||
111 | } | ||
112 | lp->rx_laddr[i] = laddr; | ||
113 | } | ||
114 | |||
51 | /* | 115 | /* |
52 | * Initialize the SONIC | 116 | * Initialize the SONIC |
53 | */ | 117 | */ |
@@ -67,7 +131,8 @@ static int sonic_open(struct net_device *dev) | |||
67 | */ | 131 | */ |
68 | static int sonic_close(struct net_device *dev) | 132 | static int sonic_close(struct net_device *dev) |
69 | { | 133 | { |
70 | unsigned int base_addr = dev->base_addr; | 134 | struct sonic_local *lp = netdev_priv(dev); |
135 | int i; | ||
71 | 136 | ||
72 | if (sonic_debug > 2) | 137 | if (sonic_debug > 2) |
73 | printk("sonic_close\n"); | 138 | printk("sonic_close\n"); |
@@ -77,20 +142,56 @@ static int sonic_close(struct net_device *dev) | |||
77 | /* | 142 | /* |
78 | * stop the SONIC, disable interrupts | 143 | * stop the SONIC, disable interrupts |
79 | */ | 144 | */ |
80 | SONIC_WRITE(SONIC_ISR, 0x7fff); | ||
81 | SONIC_WRITE(SONIC_IMR, 0); | 145 | SONIC_WRITE(SONIC_IMR, 0); |
146 | SONIC_WRITE(SONIC_ISR, 0x7fff); | ||
82 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); | 147 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); |
83 | 148 | ||
84 | sonic_free_irq(dev->irq, dev); /* release the IRQ */ | 149 | /* unmap and free skbs that haven't been transmitted */ |
150 | for (i = 0; i < SONIC_NUM_TDS; i++) { | ||
151 | if(lp->tx_laddr[i]) { | ||
152 | dma_unmap_single(lp->device, lp->tx_laddr[i], lp->tx_len[i], DMA_TO_DEVICE); | ||
153 | lp->tx_laddr[i] = (dma_addr_t)0; | ||
154 | } | ||
155 | if(lp->tx_skb[i]) { | ||
156 | dev_kfree_skb(lp->tx_skb[i]); | ||
157 | lp->tx_skb[i] = NULL; | ||
158 | } | ||
159 | } | ||
160 | |||
161 | /* unmap and free the receive buffers */ | ||
162 | for (i = 0; i < SONIC_NUM_RRS; i++) { | ||
163 | if(lp->rx_laddr[i]) { | ||
164 | dma_unmap_single(lp->device, lp->rx_laddr[i], SONIC_RBSIZE, DMA_FROM_DEVICE); | ||
165 | lp->rx_laddr[i] = (dma_addr_t)0; | ||
166 | } | ||
167 | if(lp->rx_skb[i]) { | ||
168 | dev_kfree_skb(lp->rx_skb[i]); | ||
169 | lp->rx_skb[i] = NULL; | ||
170 | } | ||
171 | } | ||
172 | |||
173 | free_irq(dev->irq, dev); /* release the IRQ */ | ||
85 | 174 | ||
86 | return 0; | 175 | return 0; |
87 | } | 176 | } |
88 | 177 | ||
89 | static void sonic_tx_timeout(struct net_device *dev) | 178 | static void sonic_tx_timeout(struct net_device *dev) |
90 | { | 179 | { |
91 | struct sonic_local *lp = (struct sonic_local *) dev->priv; | 180 | struct sonic_local *lp = netdev_priv(dev); |
92 | printk("%s: transmit timed out.\n", dev->name); | 181 | int i; |
93 | 182 | /* Stop the interrupts for this */ | |
183 | SONIC_WRITE(SONIC_IMR, 0); | ||
184 | /* We could resend the original skbs. Easier to re-initialise. */ | ||
185 | for (i = 0; i < SONIC_NUM_TDS; i++) { | ||
186 | if(lp->tx_laddr[i]) { | ||
187 | dma_unmap_single(lp->device, lp->tx_laddr[i], lp->tx_len[i], DMA_TO_DEVICE); | ||
188 | lp->tx_laddr[i] = (dma_addr_t)0; | ||
189 | } | ||
190 | if(lp->tx_skb[i]) { | ||
191 | dev_kfree_skb(lp->tx_skb[i]); | ||
192 | lp->tx_skb[i] = NULL; | ||
193 | } | ||
194 | } | ||
94 | /* Try to restart the adaptor. */ | 195 | /* Try to restart the adaptor. */ |
95 | sonic_init(dev); | 196 | sonic_init(dev); |
96 | lp->stats.tx_errors++; | 197 | lp->stats.tx_errors++; |
@@ -100,60 +201,92 @@ static void sonic_tx_timeout(struct net_device *dev) | |||
100 | 201 | ||
101 | /* | 202 | /* |
102 | * transmit packet | 203 | * transmit packet |
204 | * | ||
205 | * Appends new TD during transmission thus avoiding any TX interrupts | ||
206 | * until we run out of TDs. | ||
207 | * This routine interacts closely with the ISR in that it may, | ||
208 | * set tx_skb[i] | ||
209 | * reset the status flags of the new TD | ||
210 | * set and reset EOL flags | ||
211 | * stop the tx queue | ||
212 | * The ISR interacts with this routine in various ways. It may, | ||
213 | * reset tx_skb[i] | ||
214 | * test the EOL and status flags of the TDs | ||
215 | * wake the tx queue | ||
216 | * Concurrently with all of this, the SONIC is potentially writing to | ||
217 | * the status flags of the TDs. | ||
218 | * Until some mutual exclusion is added, this code will not work with SMP. However, | ||
219 | * MIPS Jazz machines and m68k Macs were all uni-processor machines. | ||
103 | */ | 220 | */ |
221 | |||
104 | static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev) | 222 | static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev) |
105 | { | 223 | { |
106 | struct sonic_local *lp = (struct sonic_local *) dev->priv; | 224 | struct sonic_local *lp = netdev_priv(dev); |
107 | unsigned int base_addr = dev->base_addr; | 225 | dma_addr_t laddr; |
108 | unsigned int laddr; | 226 | int length; |
109 | int entry, length; | 227 | int entry = lp->next_tx; |
110 | |||
111 | netif_stop_queue(dev); | ||
112 | 228 | ||
113 | if (sonic_debug > 2) | 229 | if (sonic_debug > 2) |
114 | printk("sonic_send_packet: skb=%p, dev=%p\n", skb, dev); | 230 | printk("sonic_send_packet: skb=%p, dev=%p\n", skb, dev); |
115 | 231 | ||
232 | length = skb->len; | ||
233 | if (length < ETH_ZLEN) { | ||
234 | skb = skb_padto(skb, ETH_ZLEN); | ||
235 | if (skb == NULL) | ||
236 | return 0; | ||
237 | length = ETH_ZLEN; | ||
238 | } | ||
239 | |||
116 | /* | 240 | /* |
117 | * Map the packet data into the logical DMA address space | 241 | * Map the packet data into the logical DMA address space |
118 | */ | 242 | */ |
119 | if ((laddr = vdma_alloc(CPHYSADDR(skb->data), skb->len)) == ~0UL) { | 243 | |
120 | printk("%s: no VDMA entry for transmit available.\n", | 244 | laddr = dma_map_single(lp->device, skb->data, length, DMA_TO_DEVICE); |
121 | dev->name); | 245 | if (!laddr) { |
246 | printk(KERN_ERR "%s: failed to map tx DMA buffer.\n", dev->name); | ||
122 | dev_kfree_skb(skb); | 247 | dev_kfree_skb(skb); |
123 | netif_start_queue(dev); | ||
124 | return 1; | 248 | return 1; |
125 | } | 249 | } |
126 | entry = lp->cur_tx & SONIC_TDS_MASK; | 250 | |
251 | sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0); /* clear status */ | ||
252 | sonic_tda_put(dev, entry, SONIC_TD_FRAG_COUNT, 1); /* single fragment */ | ||
253 | sonic_tda_put(dev, entry, SONIC_TD_PKTSIZE, length); /* length of packet */ | ||
254 | sonic_tda_put(dev, entry, SONIC_TD_FRAG_PTR_L, laddr & 0xffff); | ||
255 | sonic_tda_put(dev, entry, SONIC_TD_FRAG_PTR_H, laddr >> 16); | ||
256 | sonic_tda_put(dev, entry, SONIC_TD_FRAG_SIZE, length); | ||
257 | sonic_tda_put(dev, entry, SONIC_TD_LINK, | ||
258 | sonic_tda_get(dev, entry, SONIC_TD_LINK) | SONIC_EOL); | ||
259 | |||
260 | /* | ||
261 | * Must set tx_skb[entry] only after clearing status, and | ||
262 | * before clearing EOL and before stopping queue | ||
263 | */ | ||
264 | wmb(); | ||
265 | lp->tx_len[entry] = length; | ||
127 | lp->tx_laddr[entry] = laddr; | 266 | lp->tx_laddr[entry] = laddr; |
128 | lp->tx_skb[entry] = skb; | 267 | lp->tx_skb[entry] = skb; |
129 | 268 | ||
130 | length = (skb->len < ETH_ZLEN) ? ETH_ZLEN : skb->len; | 269 | wmb(); |
131 | flush_cache_all(); | 270 | sonic_tda_put(dev, lp->eol_tx, SONIC_TD_LINK, |
271 | sonic_tda_get(dev, lp->eol_tx, SONIC_TD_LINK) & ~SONIC_EOL); | ||
272 | lp->eol_tx = entry; | ||
132 | 273 | ||
133 | /* | 274 | lp->next_tx = (entry + 1) & SONIC_TDS_MASK; |
134 | * Setup the transmit descriptor and issue the transmit command. | 275 | if (lp->tx_skb[lp->next_tx] != NULL) { |
135 | */ | 276 | /* The ring is full, the ISR has yet to process the next TD. */ |
136 | lp->tda[entry].tx_status = 0; /* clear status */ | 277 | if (sonic_debug > 3) |
137 | lp->tda[entry].tx_frag_count = 1; /* single fragment */ | 278 | printk("%s: stopping queue\n", dev->name); |
138 | lp->tda[entry].tx_pktsize = length; /* length of packet */ | 279 | netif_stop_queue(dev); |
139 | lp->tda[entry].tx_frag_ptr_l = laddr & 0xffff; | 280 | /* after this packet, wait for ISR to free up some TDAs */ |
140 | lp->tda[entry].tx_frag_ptr_h = laddr >> 16; | 281 | } else netif_start_queue(dev); |
141 | lp->tda[entry].tx_frag_size = length; | ||
142 | lp->cur_tx++; | ||
143 | lp->stats.tx_bytes += length; | ||
144 | 282 | ||
145 | if (sonic_debug > 2) | 283 | if (sonic_debug > 2) |
146 | printk("sonic_send_packet: issueing Tx command\n"); | 284 | printk("sonic_send_packet: issuing Tx command\n"); |
147 | 285 | ||
148 | SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); | 286 | SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); |
149 | 287 | ||
150 | dev->trans_start = jiffies; | 288 | dev->trans_start = jiffies; |
151 | 289 | ||
152 | if (lp->cur_tx < lp->dirty_tx + SONIC_NUM_TDS) | ||
153 | netif_start_queue(dev); | ||
154 | else | ||
155 | lp->tx_full = 1; | ||
156 | |||
157 | return 0; | 290 | return 0; |
158 | } | 291 | } |
159 | 292 | ||
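
The comment above sonic_send_packet() describes a lock-free handshake with the interrupt handler: clear the descriptor status, publish tx_skb[entry] behind a write barrier, and only then clear the previous descriptor's EOL bit so the chip can advance. Below is a minimal user-space model of that ordering; the names (tda, tx_skb, tx_enqueue) follow the driver only loosely and are illustrative, and __sync_synchronize() merely stands in for the kernel's wmb().

#include <stddef.h>

#define NUM_TDS   16
#define TDS_MASK  (NUM_TDS - 1)
#define EOL       0x0001
#define wmb()     __sync_synchronize()    /* stand-in for the kernel's wmb() */

struct td { volatile unsigned short status, link; };

static struct td  tda[NUM_TDS];
static void      *tx_skb[NUM_TDS];        /* NULL means the slot is free */
static int        next_tx, eol_tx = NUM_TDS - 1;

/* Returns 0 if more packets may be queued, 1 if the caller should stop. */
static int tx_enqueue(void *pkt)
{
        int entry = next_tx;

        tda[entry].status = 0;            /* 1. not yet completed               */
        tda[entry].link  |= EOL;          /* 2. this descriptor is the new tail */
        wmb();
        tx_skb[entry] = pkt;              /* 3. publish ownership to the ISR    */
        wmb();
        tda[eol_tx].link &= ~EOL;         /* 4. old tail now chains onwards     */
        eol_tx = entry;

        next_tx = (entry + 1) & TDS_MASK;
        return tx_skb[next_tx] != NULL;   /* next slot still owned by the ISR?  */
}

int main(void)
{
        static char pkt[64];
        return tx_enqueue(pkt);           /* queue one dummy packet */
}

Clearing the old tail's EOL is the step that actually hands the new descriptor to the chip, which is why tx_skb[entry] must already be visible to the interrupt handler at that point.
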
@@ -164,175 +297,199 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev) | |||
164 | static irqreturn_t sonic_interrupt(int irq, void *dev_id, struct pt_regs *regs) | 297 | static irqreturn_t sonic_interrupt(int irq, void *dev_id, struct pt_regs *regs) |
165 | { | 298 | { |
166 | struct net_device *dev = (struct net_device *) dev_id; | 299 | struct net_device *dev = (struct net_device *) dev_id; |
167 | unsigned int base_addr = dev->base_addr; | 300 | struct sonic_local *lp = netdev_priv(dev); |
168 | struct sonic_local *lp; | ||
169 | int status; | 301 | int status; |
170 | 302 | ||
171 | if (dev == NULL) { | 303 | if (dev == NULL) { |
172 | printk("sonic_interrupt: irq %d for unknown device.\n", irq); | 304 | printk(KERN_ERR "sonic_interrupt: irq %d for unknown device.\n", irq); |
173 | return IRQ_NONE; | 305 | return IRQ_NONE; |
174 | } | 306 | } |
175 | 307 | ||
176 | lp = (struct sonic_local *) dev->priv; | 308 | if (!(status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT)) |
177 | 309 | return IRQ_NONE; | |
178 | status = SONIC_READ(SONIC_ISR); | ||
179 | SONIC_WRITE(SONIC_ISR, 0x7fff); /* clear all bits */ | ||
180 | |||
181 | if (sonic_debug > 2) | ||
182 | printk("sonic_interrupt: ISR=%x\n", status); | ||
183 | |||
184 | if (status & SONIC_INT_PKTRX) { | ||
185 | sonic_rx(dev); /* got packet(s) */ | ||
186 | } | ||
187 | |||
188 | if (status & SONIC_INT_TXDN) { | ||
189 | int dirty_tx = lp->dirty_tx; | ||
190 | |||
191 | while (dirty_tx < lp->cur_tx) { | ||
192 | int entry = dirty_tx & SONIC_TDS_MASK; | ||
193 | int status = lp->tda[entry].tx_status; | ||
194 | 310 | ||
195 | if (sonic_debug > 3) | 311 | do { |
196 | printk | 312 | if (status & SONIC_INT_PKTRX) { |
197 | ("sonic_interrupt: status %d, cur_tx %d, dirty_tx %d\n", | 313 | if (sonic_debug > 2) |
198 | status, lp->cur_tx, lp->dirty_tx); | 314 | printk("%s: packet rx\n", dev->name); |
315 | sonic_rx(dev); /* got packet(s) */ | ||
316 | SONIC_WRITE(SONIC_ISR, SONIC_INT_PKTRX); /* clear the interrupt */ | ||
317 | } | ||
199 | 318 | ||
200 | if (status == 0) { | 319 | if (status & SONIC_INT_TXDN) { |
201 | /* It still hasn't been Txed, kick the sonic again */ | 320 | int entry = lp->cur_tx; |
202 | SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); | 321 | int td_status; |
203 | break; | 322 | int freed_some = 0; |
204 | } | ||
205 | 323 | ||
206 | /* put back EOL and free descriptor */ | 324 | /* At this point, cur_tx is the index of a TD that is one of: |
207 | lp->tda[entry].tx_frag_count = 0; | 325 | * unallocated/freed (status set & tx_skb[entry] clear) |
208 | lp->tda[entry].tx_status = 0; | 326 | * allocated and sent (status set & tx_skb[entry] set ) |
209 | 327 | * allocated and not yet sent (status clear & tx_skb[entry] set ) | |
210 | if (status & 0x0001) | 328 | * still being allocated by sonic_send_packet (status clear & tx_skb[entry] clear) |
211 | lp->stats.tx_packets++; | 329 | */ |
212 | else { | ||
213 | lp->stats.tx_errors++; | ||
214 | if (status & 0x0642) | ||
215 | lp->stats.tx_aborted_errors++; | ||
216 | if (status & 0x0180) | ||
217 | lp->stats.tx_carrier_errors++; | ||
218 | if (status & 0x0020) | ||
219 | lp->stats.tx_window_errors++; | ||
220 | if (status & 0x0004) | ||
221 | lp->stats.tx_fifo_errors++; | ||
222 | } | ||
223 | 330 | ||
224 | /* We must free the original skb */ | 331 | if (sonic_debug > 2) |
225 | if (lp->tx_skb[entry]) { | 332 | printk("%s: tx done\n", dev->name); |
333 | |||
334 | while (lp->tx_skb[entry] != NULL) { | ||
335 | if ((td_status = sonic_tda_get(dev, entry, SONIC_TD_STATUS)) == 0) | ||
336 | break; | ||
337 | |||
338 | if (td_status & 0x0001) { | ||
339 | lp->stats.tx_packets++; | ||
340 | lp->stats.tx_bytes += sonic_tda_get(dev, entry, SONIC_TD_PKTSIZE); | ||
341 | } else { | ||
342 | lp->stats.tx_errors++; | ||
343 | if (td_status & 0x0642) | ||
344 | lp->stats.tx_aborted_errors++; | ||
345 | if (td_status & 0x0180) | ||
346 | lp->stats.tx_carrier_errors++; | ||
347 | if (td_status & 0x0020) | ||
348 | lp->stats.tx_window_errors++; | ||
349 | if (td_status & 0x0004) | ||
350 | lp->stats.tx_fifo_errors++; | ||
351 | } | ||
352 | |||
353 | /* We must free the original skb */ | ||
226 | dev_kfree_skb_irq(lp->tx_skb[entry]); | 354 | dev_kfree_skb_irq(lp->tx_skb[entry]); |
227 | lp->tx_skb[entry] = 0; | 355 | lp->tx_skb[entry] = NULL; |
356 | /* and unmap DMA buffer */ | ||
357 | dma_unmap_single(lp->device, lp->tx_laddr[entry], lp->tx_len[entry], DMA_TO_DEVICE); | ||
358 | lp->tx_laddr[entry] = (dma_addr_t)0; | ||
359 | freed_some = 1; | ||
360 | |||
361 | if (sonic_tda_get(dev, entry, SONIC_TD_LINK) & SONIC_EOL) { | ||
362 | entry = (entry + 1) & SONIC_TDS_MASK; | ||
363 | break; | ||
364 | } | ||
365 | entry = (entry + 1) & SONIC_TDS_MASK; | ||
228 | } | 366 | } |
229 | /* and the VDMA address */ | ||
230 | vdma_free(lp->tx_laddr[entry]); | ||
231 | dirty_tx++; | ||
232 | } | ||
233 | 367 | ||
234 | if (lp->tx_full | 368 | if (freed_some || lp->tx_skb[entry] == NULL) |
235 | && dirty_tx + SONIC_NUM_TDS > lp->cur_tx + 2) { | 369 | netif_wake_queue(dev); /* The ring is no longer full */ |
236 | /* The ring is no longer full, clear tbusy. */ | 370 | lp->cur_tx = entry; |
237 | lp->tx_full = 0; | 371 | SONIC_WRITE(SONIC_ISR, SONIC_INT_TXDN); /* clear the interrupt */ |
238 | netif_wake_queue(dev); | ||
239 | } | 372 | } |
240 | 373 | ||
241 | lp->dirty_tx = dirty_tx; | 374 | /* |
242 | } | 375 | * check error conditions |
376 | */ | ||
377 | if (status & SONIC_INT_RFO) { | ||
378 | if (sonic_debug > 1) | ||
379 | printk("%s: rx fifo overrun\n", dev->name); | ||
380 | lp->stats.rx_fifo_errors++; | ||
381 | SONIC_WRITE(SONIC_ISR, SONIC_INT_RFO); /* clear the interrupt */ | ||
382 | } | ||
383 | if (status & SONIC_INT_RDE) { | ||
384 | if (sonic_debug > 1) | ||
385 | printk("%s: rx descriptors exhausted\n", dev->name); | ||
386 | lp->stats.rx_dropped++; | ||
387 | SONIC_WRITE(SONIC_ISR, SONIC_INT_RDE); /* clear the interrupt */ | ||
388 | } | ||
389 | if (status & SONIC_INT_RBAE) { | ||
390 | if (sonic_debug > 1) | ||
391 | printk("%s: rx buffer area exceeded\n", dev->name); | ||
392 | lp->stats.rx_dropped++; | ||
393 | SONIC_WRITE(SONIC_ISR, SONIC_INT_RBAE); /* clear the interrupt */ | ||
394 | } | ||
243 | 395 | ||
244 | /* | 396 | /* counter overruns; all counters are 16bit wide */ |
245 | * check error conditions | 397 | if (status & SONIC_INT_FAE) { |
246 | */ | 398 | lp->stats.rx_frame_errors += 65536; |
247 | if (status & SONIC_INT_RFO) { | 399 | SONIC_WRITE(SONIC_ISR, SONIC_INT_FAE); /* clear the interrupt */ |
248 | printk("%s: receive fifo underrun\n", dev->name); | 400 | } |
249 | lp->stats.rx_fifo_errors++; | 401 | if (status & SONIC_INT_CRC) { |
250 | } | 402 | lp->stats.rx_crc_errors += 65536; |
251 | if (status & SONIC_INT_RDE) { | 403 | SONIC_WRITE(SONIC_ISR, SONIC_INT_CRC); /* clear the interrupt */ |
252 | printk("%s: receive descriptors exhausted\n", dev->name); | 404 | } |
253 | lp->stats.rx_dropped++; | 405 | if (status & SONIC_INT_MP) { |
254 | } | 406 | lp->stats.rx_missed_errors += 65536; |
255 | if (status & SONIC_INT_RBE) { | 407 | SONIC_WRITE(SONIC_ISR, SONIC_INT_MP); /* clear the interrupt */ |
256 | printk("%s: receive buffer exhausted\n", dev->name); | 408 | } |
257 | lp->stats.rx_dropped++; | ||
258 | } | ||
259 | if (status & SONIC_INT_RBAE) { | ||
260 | printk("%s: receive buffer area exhausted\n", dev->name); | ||
261 | lp->stats.rx_dropped++; | ||
262 | } | ||
263 | 409 | ||
264 | /* counter overruns; all counters are 16bit wide */ | 410 | /* transmit error */ |
265 | if (status & SONIC_INT_FAE) | 411 | if (status & SONIC_INT_TXER) { |
266 | lp->stats.rx_frame_errors += 65536; | 412 | if ((SONIC_READ(SONIC_TCR) & SONIC_TCR_FU) && (sonic_debug > 2)) |
267 | if (status & SONIC_INT_CRC) | 413 | printk(KERN_ERR "%s: tx fifo underrun\n", dev->name); |
268 | lp->stats.rx_crc_errors += 65536; | 414 | SONIC_WRITE(SONIC_ISR, SONIC_INT_TXER); /* clear the interrupt */ |
269 | if (status & SONIC_INT_MP) | 415 | } |
270 | lp->stats.rx_missed_errors += 65536; | ||
271 | 416 | ||
272 | /* transmit error */ | 417 | /* bus retry */ |
273 | if (status & SONIC_INT_TXER) | 418 | if (status & SONIC_INT_BR) { |
274 | lp->stats.tx_errors++; | 419 | printk(KERN_ERR "%s: Bus retry occurred! Device interrupt disabled.\n", |
420 | dev->name); | ||
421 | /* ... to help debug DMA problems causing endless interrupts. */ | ||
422 | /* Bounce the eth interface to turn on the interrupt again. */ | ||
423 | SONIC_WRITE(SONIC_IMR, 0); | ||
424 | SONIC_WRITE(SONIC_ISR, SONIC_INT_BR); /* clear the interrupt */ | ||
425 | } | ||
275 | 426 | ||
276 | /* | 427 | /* load CAM done */ |
277 | * clear interrupt bits and return | 428 | if (status & SONIC_INT_LCD) |
278 | */ | 429 | SONIC_WRITE(SONIC_ISR, SONIC_INT_LCD); /* clear the interrupt */ |
279 | SONIC_WRITE(SONIC_ISR, status); | 430 | } while((status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT)); |
280 | return IRQ_HANDLED; | 431 | return IRQ_HANDLED; |
281 | } | 432 | } |
282 | 433 | ||
283 | /* | 434 | /* |
284 | * We have a good packet(s), get it/them out of the buffers. | 435 | * We have a good packet(s), pass it/them up the network stack. |
285 | */ | 436 | */ |
286 | static void sonic_rx(struct net_device *dev) | 437 | static void sonic_rx(struct net_device *dev) |
287 | { | 438 | { |
288 | unsigned int base_addr = dev->base_addr; | 439 | struct sonic_local *lp = netdev_priv(dev); |
289 | struct sonic_local *lp = (struct sonic_local *) dev->priv; | ||
290 | sonic_rd_t *rd = &lp->rda[lp->cur_rx & SONIC_RDS_MASK]; | ||
291 | int status; | 440 | int status; |
292 | 441 | int entry = lp->cur_rx; | |
293 | while (rd->in_use == 0) { | 442 | |
294 | struct sk_buff *skb; | 443 | while (sonic_rda_get(dev, entry, SONIC_RD_IN_USE) == 0) { |
444 | struct sk_buff *used_skb; | ||
445 | struct sk_buff *new_skb; | ||
446 | dma_addr_t new_laddr; | ||
447 | u16 bufadr_l; | ||
448 | u16 bufadr_h; | ||
295 | int pkt_len; | 449 | int pkt_len; |
296 | unsigned char *pkt_ptr; | ||
297 | 450 | ||
298 | status = rd->rx_status; | 451 | status = sonic_rda_get(dev, entry, SONIC_RD_STATUS); |
299 | if (sonic_debug > 3) | ||
300 | printk("status %x, cur_rx %d, cur_rra %x\n", | ||
301 | status, lp->cur_rx, lp->cur_rra); | ||
302 | if (status & SONIC_RCR_PRX) { | 452 | if (status & SONIC_RCR_PRX) { |
303 | pkt_len = rd->rx_pktlen; | ||
304 | pkt_ptr = | ||
305 | (char *) | ||
306 | sonic_chiptomem((rd->rx_pktptr_h << 16) + | ||
307 | rd->rx_pktptr_l); | ||
308 | |||
309 | if (sonic_debug > 3) | ||
310 | printk | ||
311 | ("pktptr %p (rba %p) h:%x l:%x, bsize h:%x l:%x\n", | ||
312 | pkt_ptr, lp->rba, rd->rx_pktptr_h, | ||
313 | rd->rx_pktptr_l, | ||
314 | SONIC_READ(SONIC_RBWC1), | ||
315 | SONIC_READ(SONIC_RBWC0)); | ||
316 | |||
317 | /* Malloc up new buffer. */ | 453 | /* Malloc up new buffer. */ |
318 | skb = dev_alloc_skb(pkt_len + 2); | 454 | new_skb = dev_alloc_skb(SONIC_RBSIZE + 2); |
319 | if (skb == NULL) { | 455 | if (new_skb == NULL) { |
320 | printk | 456 | printk(KERN_ERR "%s: Memory squeeze, dropping packet.\n", dev->name); |
321 | ("%s: Memory squeeze, dropping packet.\n", | 457 | lp->stats.rx_dropped++; |
322 | dev->name); | 458 | break; |
459 | } | ||
460 | new_skb->dev = dev; | ||
461 | /* provide 16 byte IP header alignment unless DMA requires otherwise */ | ||
462 | if(SONIC_BUS_SCALE(lp->dma_bitmode) == 2) | ||
463 | skb_reserve(new_skb, 2); | ||
464 | |||
465 | new_laddr = dma_map_single(lp->device, skb_put(new_skb, SONIC_RBSIZE), | ||
466 | SONIC_RBSIZE, DMA_FROM_DEVICE); | ||
467 | if (!new_laddr) { | ||
468 | dev_kfree_skb(new_skb); | ||
469 | printk(KERN_ERR "%s: Failed to map rx buffer, dropping packet.\n", dev->name); | ||
323 | lp->stats.rx_dropped++; | 470 | lp->stats.rx_dropped++; |
324 | break; | 471 | break; |
325 | } | 472 | } |
326 | skb->dev = dev; | 473 | |
327 | skb_reserve(skb, 2); /* 16 byte align */ | 474 | /* now we have a new skb to replace it, pass the used one up the stack */ |
328 | skb_put(skb, pkt_len); /* Make room */ | 475 | dma_unmap_single(lp->device, lp->rx_laddr[entry], SONIC_RBSIZE, DMA_FROM_DEVICE); |
329 | eth_copy_and_sum(skb, pkt_ptr, pkt_len, 0); | 476 | used_skb = lp->rx_skb[entry]; |
330 | skb->protocol = eth_type_trans(skb, dev); | 477 | pkt_len = sonic_rda_get(dev, entry, SONIC_RD_PKTLEN); |
331 | netif_rx(skb); /* pass the packet to upper layers */ | 478 | skb_trim(used_skb, pkt_len); |
479 | used_skb->protocol = eth_type_trans(used_skb, dev); | ||
480 | netif_rx(used_skb); | ||
332 | dev->last_rx = jiffies; | 481 | dev->last_rx = jiffies; |
333 | lp->stats.rx_packets++; | 482 | lp->stats.rx_packets++; |
334 | lp->stats.rx_bytes += pkt_len; | 483 | lp->stats.rx_bytes += pkt_len; |
335 | 484 | ||
485 | /* and insert the new skb */ | ||
486 | lp->rx_laddr[entry] = new_laddr; | ||
487 | lp->rx_skb[entry] = new_skb; | ||
488 | |||
489 | bufadr_l = (unsigned long)new_laddr & 0xffff; | ||
490 | bufadr_h = (unsigned long)new_laddr >> 16; | ||
491 | sonic_rra_put(dev, entry, SONIC_RR_BUFADR_L, bufadr_l); | ||
492 | sonic_rra_put(dev, entry, SONIC_RR_BUFADR_H, bufadr_h); | ||
336 | } else { | 493 | } else { |
337 | /* This should only happen, if we enable accepting broken packets. */ | 494 | /* This should only happen, if we enable accepting broken packets. */ |
338 | lp->stats.rx_errors++; | 495 | lp->stats.rx_errors++; |
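
The rewritten receive path replaces a ring buffer before delivering the one the chip just filled, so an allocation failure only drops the frame and never leaves a slot unarmed. A rough user-space model of that replace-then-deliver choice, with malloc()/free() standing in for skb allocation and netif_rx():

#include <stdlib.h>

#define NUM_RRS  16
#define RBSIZE   1520

static void *rx_buf[NUM_RRS];             /* one buffer owned by each ring slot */
static unsigned long rx_dropped, rx_packets;

static void deliver(void *buf, int len)   /* stand-in for netif_rx() */
{
        (void)len;
        free(buf);                        /* the stack now owns the buffer */
        rx_packets++;
}

/* Replace the slot's buffer first; only then hand the filled one upstream. */
static void rx_one(int entry, int pkt_len)
{
        void *fresh = malloc(RBSIZE);
        if (fresh == NULL) {              /* drop the frame, keep the ring armed */
                rx_dropped++;
                return;
        }
        void *full = rx_buf[entry];
        rx_buf[entry] = fresh;            /* slot is re-armed before delivery */
        deliver(full, pkt_len);
}

int main(void)
{
        for (int i = 0; i < NUM_RRS; i++)
                rx_buf[i] = malloc(RBSIZE);   /* initial ring buffers */
        rx_one(3, 60);                        /* pretend slot 3 completed a frame */
        return 0;
}
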
@@ -341,29 +498,35 @@ static void sonic_rx(struct net_device *dev) | |||
341 | if (status & SONIC_RCR_CRCR) | 498 | if (status & SONIC_RCR_CRCR) |
342 | lp->stats.rx_crc_errors++; | 499 | lp->stats.rx_crc_errors++; |
343 | } | 500 | } |
344 | |||
345 | rd->in_use = 1; | ||
346 | rd = &lp->rda[(++lp->cur_rx) & SONIC_RDS_MASK]; | ||
347 | /* now give back the buffer to the receive buffer area */ | ||
348 | if (status & SONIC_RCR_LPKT) { | 501 | if (status & SONIC_RCR_LPKT) { |
349 | /* | 502 | /* |
350 | * this was the last packet out of the current receice buffer | 503 | * this was the last packet out of the current receive buffer |
351 | * give the buffer back to the SONIC | 504 | * give the buffer back to the SONIC |
352 | */ | 505 | */ |
353 | lp->cur_rra += sizeof(sonic_rr_t); | 506 | lp->cur_rwp += SIZEOF_SONIC_RR * SONIC_BUS_SCALE(lp->dma_bitmode); |
354 | if (lp->cur_rra > | 507 | if (lp->cur_rwp >= lp->rra_end) lp->cur_rwp = lp->rra_laddr & 0xffff; |
355 | (lp->rra_laddr + | 508 | SONIC_WRITE(SONIC_RWP, lp->cur_rwp); |
356 | (SONIC_NUM_RRS - | 509 | if (SONIC_READ(SONIC_ISR) & SONIC_INT_RBE) { |
357 | 1) * sizeof(sonic_rr_t))) lp->cur_rra = | 510 | if (sonic_debug > 2) |
358 | lp->rra_laddr; | 511 | printk("%s: rx buffer exhausted\n", dev->name); |
359 | SONIC_WRITE(SONIC_RWP, lp->cur_rra & 0xffff); | 512 | SONIC_WRITE(SONIC_ISR, SONIC_INT_RBE); /* clear the flag */ |
513 | } | ||
360 | } else | 514 | } else |
361 | printk | 515 | printk(KERN_ERR "%s: rx desc without RCR_LPKT. Shouldn't happen !?\n", |
362 | ("%s: rx desc without RCR_LPKT. Shouldn't happen !?\n", | ||
363 | dev->name); | 516 | dev->name); |
517 | /* | ||
518 | * give back the descriptor | ||
519 | */ | ||
520 | sonic_rda_put(dev, entry, SONIC_RD_LINK, | ||
521 | sonic_rda_get(dev, entry, SONIC_RD_LINK) | SONIC_EOL); | ||
522 | sonic_rda_put(dev, entry, SONIC_RD_IN_USE, 1); | ||
523 | sonic_rda_put(dev, lp->eol_rx, SONIC_RD_LINK, | ||
524 | sonic_rda_get(dev, lp->eol_rx, SONIC_RD_LINK) & ~SONIC_EOL); | ||
525 | lp->eol_rx = entry; | ||
526 | lp->cur_rx = entry = (entry + 1) & SONIC_RDS_MASK; | ||
364 | } | 527 | } |
365 | /* | 528 | /* |
366 | * If any worth-while packets have been received, dev_rint() | 529 | * If any worth-while packets have been received, netif_rx() |
367 | * has done a mark_bh(NET_BH) for us and will work on them | 530 | * has done a mark_bh(NET_BH) for us and will work on them |
368 | * when we get to the bottom-half routine. | 531 | * when we get to the bottom-half routine. |
369 | */ | 532 | */ |
@@ -376,8 +539,7 @@ static void sonic_rx(struct net_device *dev) | |||
376 | */ | 539 | */ |
377 | static struct net_device_stats *sonic_get_stats(struct net_device *dev) | 540 | static struct net_device_stats *sonic_get_stats(struct net_device *dev) |
378 | { | 541 | { |
379 | struct sonic_local *lp = (struct sonic_local *) dev->priv; | 542 | struct sonic_local *lp = netdev_priv(dev); |
380 | unsigned int base_addr = dev->base_addr; | ||
381 | 543 | ||
382 | /* read the tally counter from the SONIC and reset them */ | 544 | /* read the tally counter from the SONIC and reset them */ |
383 | lp->stats.rx_crc_errors += SONIC_READ(SONIC_CRCT); | 545 | lp->stats.rx_crc_errors += SONIC_READ(SONIC_CRCT); |
@@ -396,8 +558,7 @@ static struct net_device_stats *sonic_get_stats(struct net_device *dev) | |||
396 | */ | 558 | */ |
397 | static void sonic_multicast_list(struct net_device *dev) | 559 | static void sonic_multicast_list(struct net_device *dev) |
398 | { | 560 | { |
399 | struct sonic_local *lp = (struct sonic_local *) dev->priv; | 561 | struct sonic_local *lp = netdev_priv(dev); |
400 | unsigned int base_addr = dev->base_addr; | ||
401 | unsigned int rcr; | 562 | unsigned int rcr; |
402 | struct dev_mc_list *dmi = dev->mc_list; | 563 | struct dev_mc_list *dmi = dev->mc_list; |
403 | unsigned char *addr; | 564 | unsigned char *addr; |
@@ -413,20 +574,15 @@ static void sonic_multicast_list(struct net_device *dev) | |||
413 | rcr |= SONIC_RCR_AMC; | 574 | rcr |= SONIC_RCR_AMC; |
414 | } else { | 575 | } else { |
415 | if (sonic_debug > 2) | 576 | if (sonic_debug > 2) |
416 | printk | 577 | printk("sonic_multicast_list: mc_count %d\n", dev->mc_count); |
417 | ("sonic_multicast_list: mc_count %d\n", | 578 | sonic_set_cam_enable(dev, 1); /* always enable our own address */ |
418 | dev->mc_count); | ||
419 | lp->cda.cam_enable = 1; /* always enable our own address */ | ||
420 | for (i = 1; i <= dev->mc_count; i++) { | 579 | for (i = 1; i <= dev->mc_count; i++) { |
421 | addr = dmi->dmi_addr; | 580 | addr = dmi->dmi_addr; |
422 | dmi = dmi->next; | 581 | dmi = dmi->next; |
423 | lp->cda.cam_desc[i].cam_cap0 = | 582 | sonic_cda_put(dev, i, SONIC_CD_CAP0, addr[1] << 8 | addr[0]); |
424 | addr[1] << 8 | addr[0]; | 583 | sonic_cda_put(dev, i, SONIC_CD_CAP1, addr[3] << 8 | addr[2]); |
425 | lp->cda.cam_desc[i].cam_cap1 = | 584 | sonic_cda_put(dev, i, SONIC_CD_CAP2, addr[5] << 8 | addr[4]); |
426 | addr[3] << 8 | addr[2]; | 585 | sonic_set_cam_enable(dev, sonic_get_cam_enable(dev) | (1 << i)); |
427 | lp->cda.cam_desc[i].cam_cap2 = | ||
428 | addr[5] << 8 | addr[4]; | ||
429 | lp->cda.cam_enable |= (1 << i); | ||
430 | } | 586 | } |
431 | SONIC_WRITE(SONIC_CDC, 16); | 587 | SONIC_WRITE(SONIC_CDC, 16); |
432 | /* issue Load CAM command */ | 588 | /* issue Load CAM command */ |
@@ -447,19 +603,16 @@ static void sonic_multicast_list(struct net_device *dev) | |||
447 | */ | 603 | */ |
448 | static int sonic_init(struct net_device *dev) | 604 | static int sonic_init(struct net_device *dev) |
449 | { | 605 | { |
450 | unsigned int base_addr = dev->base_addr; | ||
451 | unsigned int cmd; | 606 | unsigned int cmd; |
452 | struct sonic_local *lp = (struct sonic_local *) dev->priv; | 607 | struct sonic_local *lp = netdev_priv(dev); |
453 | unsigned int rra_start; | ||
454 | unsigned int rra_end; | ||
455 | int i; | 608 | int i; |
456 | 609 | ||
457 | /* | 610 | /* |
458 | * put the Sonic into software-reset mode and | 611 | * put the Sonic into software-reset mode and |
459 | * disable all interrupts | 612 | * disable all interrupts |
460 | */ | 613 | */ |
461 | SONIC_WRITE(SONIC_ISR, 0x7fff); | ||
462 | SONIC_WRITE(SONIC_IMR, 0); | 614 | SONIC_WRITE(SONIC_IMR, 0); |
615 | SONIC_WRITE(SONIC_ISR, 0x7fff); | ||
463 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); | 616 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); |
464 | 617 | ||
465 | /* | 618 | /* |
@@ -475,34 +628,32 @@ static int sonic_init(struct net_device *dev) | |||
475 | if (sonic_debug > 2) | 628 | if (sonic_debug > 2) |
476 | printk("sonic_init: initialize receive resource area\n"); | 629 | printk("sonic_init: initialize receive resource area\n"); |
477 | 630 | ||
478 | rra_start = lp->rra_laddr & 0xffff; | ||
479 | rra_end = | ||
480 | (rra_start + (SONIC_NUM_RRS * sizeof(sonic_rr_t))) & 0xffff; | ||
481 | |||
482 | for (i = 0; i < SONIC_NUM_RRS; i++) { | 631 | for (i = 0; i < SONIC_NUM_RRS; i++) { |
483 | lp->rra[i].rx_bufadr_l = | 632 | u16 bufadr_l = (unsigned long)lp->rx_laddr[i] & 0xffff; |
484 | (lp->rba_laddr + i * SONIC_RBSIZE) & 0xffff; | 633 | u16 bufadr_h = (unsigned long)lp->rx_laddr[i] >> 16; |
485 | lp->rra[i].rx_bufadr_h = | 634 | sonic_rra_put(dev, i, SONIC_RR_BUFADR_L, bufadr_l); |
486 | (lp->rba_laddr + i * SONIC_RBSIZE) >> 16; | 635 | sonic_rra_put(dev, i, SONIC_RR_BUFADR_H, bufadr_h); |
487 | lp->rra[i].rx_bufsize_l = SONIC_RBSIZE >> 1; | 636 | sonic_rra_put(dev, i, SONIC_RR_BUFSIZE_L, SONIC_RBSIZE >> 1); |
488 | lp->rra[i].rx_bufsize_h = 0; | 637 | sonic_rra_put(dev, i, SONIC_RR_BUFSIZE_H, 0); |
489 | } | 638 | } |
490 | 639 | ||
491 | /* initialize all RRA registers */ | 640 | /* initialize all RRA registers */ |
492 | SONIC_WRITE(SONIC_RSA, rra_start); | 641 | lp->rra_end = (lp->rra_laddr + SONIC_NUM_RRS * SIZEOF_SONIC_RR * |
493 | SONIC_WRITE(SONIC_REA, rra_end); | 642 | SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff; |
494 | SONIC_WRITE(SONIC_RRP, rra_start); | 643 | lp->cur_rwp = (lp->rra_laddr + (SONIC_NUM_RRS - 1) * SIZEOF_SONIC_RR * |
495 | SONIC_WRITE(SONIC_RWP, rra_end); | 644 | SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff; |
645 | |||
646 | SONIC_WRITE(SONIC_RSA, lp->rra_laddr & 0xffff); | ||
647 | SONIC_WRITE(SONIC_REA, lp->rra_end); | ||
648 | SONIC_WRITE(SONIC_RRP, lp->rra_laddr & 0xffff); | ||
649 | SONIC_WRITE(SONIC_RWP, lp->cur_rwp); | ||
496 | SONIC_WRITE(SONIC_URRA, lp->rra_laddr >> 16); | 650 | SONIC_WRITE(SONIC_URRA, lp->rra_laddr >> 16); |
497 | SONIC_WRITE(SONIC_EOBC, (SONIC_RBSIZE - 2) >> 1); | 651 | SONIC_WRITE(SONIC_EOBC, (SONIC_RBSIZE >> 1) - (lp->dma_bitmode ? 2 : 1)); |
498 | |||
499 | lp->cur_rra = | ||
500 | lp->rra_laddr + (SONIC_NUM_RRS - 1) * sizeof(sonic_rr_t); | ||
501 | 652 | ||
502 | /* load the resource pointers */ | 653 | /* load the resource pointers */ |
503 | if (sonic_debug > 3) | 654 | if (sonic_debug > 3) |
504 | printk("sonic_init: issueing RRRA command\n"); | 655 | printk("sonic_init: issuing RRRA command\n"); |
505 | 656 | ||
506 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RRRA); | 657 | SONIC_WRITE(SONIC_CMD, SONIC_CR_RRRA); |
507 | i = 0; | 658 | i = 0; |
508 | while (i++ < 100) { | 659 | while (i++ < 100) { |
@@ -511,27 +662,30 @@ static int sonic_init(struct net_device *dev) | |||
511 | } | 662 | } |
512 | 663 | ||
513 | if (sonic_debug > 2) | 664 | if (sonic_debug > 2) |
514 | printk("sonic_init: status=%x\n", SONIC_READ(SONIC_CMD)); | 665 | printk("sonic_init: status=%x i=%d\n", SONIC_READ(SONIC_CMD), i); |
515 | 666 | ||
516 | /* | 667 | /* |
517 | * Initialize the receive descriptors so that they | 668 | * Initialize the receive descriptors so that they |
518 | * become a circular linked list, ie. let the last | 669 | * become a circular linked list, ie. let the last |
519 | * descriptor point to the first again. | 670 | * descriptor point to the first again. |
520 | */ | 671 | */ |
521 | if (sonic_debug > 2) | 672 | if (sonic_debug > 2) |
522 | printk("sonic_init: initialize receive descriptors\n"); | 673 | printk("sonic_init: initialize receive descriptors\n"); |
523 | for (i = 0; i < SONIC_NUM_RDS; i++) { | 674 | for (i=0; i<SONIC_NUM_RDS; i++) { |
524 | lp->rda[i].rx_status = 0; | 675 | sonic_rda_put(dev, i, SONIC_RD_STATUS, 0); |
525 | lp->rda[i].rx_pktlen = 0; | 676 | sonic_rda_put(dev, i, SONIC_RD_PKTLEN, 0); |
526 | lp->rda[i].rx_pktptr_l = 0; | 677 | sonic_rda_put(dev, i, SONIC_RD_PKTPTR_L, 0); |
527 | lp->rda[i].rx_pktptr_h = 0; | 678 | sonic_rda_put(dev, i, SONIC_RD_PKTPTR_H, 0); |
528 | lp->rda[i].rx_seqno = 0; | 679 | sonic_rda_put(dev, i, SONIC_RD_SEQNO, 0); |
529 | lp->rda[i].in_use = 1; | 680 | sonic_rda_put(dev, i, SONIC_RD_IN_USE, 1); |
530 | lp->rda[i].link = | 681 | sonic_rda_put(dev, i, SONIC_RD_LINK, |
531 | lp->rda_laddr + (i + 1) * sizeof(sonic_rd_t); | 682 | lp->rda_laddr + |
683 | ((i+1) * SIZEOF_SONIC_RD * SONIC_BUS_SCALE(lp->dma_bitmode))); | ||
532 | } | 684 | } |
533 | /* fix last descriptor */ | 685 | /* fix last descriptor */ |
534 | lp->rda[SONIC_NUM_RDS - 1].link = lp->rda_laddr; | 686 | sonic_rda_put(dev, SONIC_NUM_RDS - 1, SONIC_RD_LINK, |
687 | (lp->rda_laddr & 0xffff) | SONIC_EOL); | ||
688 | lp->eol_rx = SONIC_NUM_RDS - 1; | ||
535 | lp->cur_rx = 0; | 689 | lp->cur_rx = 0; |
536 | SONIC_WRITE(SONIC_URDA, lp->rda_laddr >> 16); | 690 | SONIC_WRITE(SONIC_URDA, lp->rda_laddr >> 16); |
537 | SONIC_WRITE(SONIC_CRDA, lp->rda_laddr & 0xffff); | 691 | SONIC_WRITE(SONIC_CRDA, lp->rda_laddr & 0xffff); |
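
As the comment in this hunk says, the receive descriptors are linked into a circle, with the last link carrying SONIC_EOL so the chip stalls there until the driver rotates EOL forward in sonic_rx(). A small standalone illustration of how the link values come out (the base address is invented, and 32-bit bus mode is assumed):

#include <stdio.h>

#define NUM_RDS    16
#define SIZEOF_RD  7                      /* fields per descriptor, in bus units */
#define BUS_SCALE  4                      /* 32-bit data path */
#define EOL        0x0001

int main(void)
{
        unsigned rda_base = 0x8000;       /* invented low 16 bits of the RDA address */
        unsigned link[NUM_RDS];

        for (int i = 0; i < NUM_RDS; i++) /* each link points at the next descriptor */
                link[i] = rda_base + (i + 1) * SIZEOF_RD * BUS_SCALE;

        /* ... except the last, which wraps to the first and carries EOL so the
         * chip stops there until the driver moves EOL forward. */
        link[NUM_RDS - 1] = rda_base | EOL;

        for (int i = 0; i < NUM_RDS; i++)
                printf("RD[%2d].link = 0x%04x\n", i, link[i] & 0xffff);
        return 0;
}
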
@@ -542,34 +696,34 @@ static int sonic_init(struct net_device *dev) | |||
542 | if (sonic_debug > 2) | 696 | if (sonic_debug > 2) |
543 | printk("sonic_init: initialize transmit descriptors\n"); | 697 | printk("sonic_init: initialize transmit descriptors\n"); |
544 | for (i = 0; i < SONIC_NUM_TDS; i++) { | 698 | for (i = 0; i < SONIC_NUM_TDS; i++) { |
545 | lp->tda[i].tx_status = 0; | 699 | sonic_tda_put(dev, i, SONIC_TD_STATUS, 0); |
546 | lp->tda[i].tx_config = 0; | 700 | sonic_tda_put(dev, i, SONIC_TD_CONFIG, 0); |
547 | lp->tda[i].tx_pktsize = 0; | 701 | sonic_tda_put(dev, i, SONIC_TD_PKTSIZE, 0); |
548 | lp->tda[i].tx_frag_count = 0; | 702 | sonic_tda_put(dev, i, SONIC_TD_FRAG_COUNT, 0); |
549 | lp->tda[i].link = | 703 | sonic_tda_put(dev, i, SONIC_TD_LINK, |
550 | (lp->tda_laddr + | 704 | (lp->tda_laddr & 0xffff) + |
551 | (i + 1) * sizeof(sonic_td_t)) | SONIC_END_OF_LINKS; | 705 | (i + 1) * SIZEOF_SONIC_TD * SONIC_BUS_SCALE(lp->dma_bitmode)); |
706 | lp->tx_skb[i] = NULL; | ||
552 | } | 707 | } |
553 | lp->tda[SONIC_NUM_TDS - 1].link = | 708 | /* fix last descriptor */ |
554 | (lp->tda_laddr & 0xffff) | SONIC_END_OF_LINKS; | 709 | sonic_tda_put(dev, SONIC_NUM_TDS - 1, SONIC_TD_LINK, |
710 | (lp->tda_laddr & 0xffff)); | ||
555 | 711 | ||
556 | SONIC_WRITE(SONIC_UTDA, lp->tda_laddr >> 16); | 712 | SONIC_WRITE(SONIC_UTDA, lp->tda_laddr >> 16); |
557 | SONIC_WRITE(SONIC_CTDA, lp->tda_laddr & 0xffff); | 713 | SONIC_WRITE(SONIC_CTDA, lp->tda_laddr & 0xffff); |
558 | lp->cur_tx = lp->dirty_tx = 0; | 714 | lp->cur_tx = lp->next_tx = 0; |
559 | 715 | lp->eol_tx = SONIC_NUM_TDS - 1; | |
716 | |||
560 | /* | 717 | /* |
561 | * put our own address to CAM desc[0] | 718 | * put our own address to CAM desc[0] |
562 | */ | 719 | */ |
563 | lp->cda.cam_desc[0].cam_cap0 = | 720 | sonic_cda_put(dev, 0, SONIC_CD_CAP0, dev->dev_addr[1] << 8 | dev->dev_addr[0]); |
564 | dev->dev_addr[1] << 8 | dev->dev_addr[0]; | 721 | sonic_cda_put(dev, 0, SONIC_CD_CAP1, dev->dev_addr[3] << 8 | dev->dev_addr[2]); |
565 | lp->cda.cam_desc[0].cam_cap1 = | 722 | sonic_cda_put(dev, 0, SONIC_CD_CAP2, dev->dev_addr[5] << 8 | dev->dev_addr[4]); |
566 | dev->dev_addr[3] << 8 | dev->dev_addr[2]; | 723 | sonic_set_cam_enable(dev, 1); |
567 | lp->cda.cam_desc[0].cam_cap2 = | ||
568 | dev->dev_addr[5] << 8 | dev->dev_addr[4]; | ||
569 | lp->cda.cam_enable = 1; | ||
570 | 724 | ||
571 | for (i = 0; i < 16; i++) | 725 | for (i = 0; i < 16; i++) |
572 | lp->cda.cam_desc[i].cam_entry_pointer = i; | 726 | sonic_cda_put(dev, i, SONIC_CD_ENTRY_POINTER, i); |
573 | 727 | ||
574 | /* | 728 | /* |
575 | * initialize CAM registers | 729 | * initialize CAM registers |
@@ -588,8 +742,8 @@ static int sonic_init(struct net_device *dev) | |||
588 | break; | 742 | break; |
589 | } | 743 | } |
590 | if (sonic_debug > 2) { | 744 | if (sonic_debug > 2) { |
591 | printk("sonic_init: CMD=%x, ISR=%x\n", | 745 | printk("sonic_init: CMD=%x, ISR=%x\n, i=%d", |
592 | SONIC_READ(SONIC_CMD), SONIC_READ(SONIC_ISR)); | 746 | SONIC_READ(SONIC_CMD), SONIC_READ(SONIC_ISR), i); |
593 | } | 747 | } |
594 | 748 | ||
595 | /* | 749 | /* |
@@ -604,7 +758,7 @@ static int sonic_init(struct net_device *dev) | |||
604 | 758 | ||
605 | cmd = SONIC_READ(SONIC_CMD); | 759 | cmd = SONIC_READ(SONIC_CMD); |
606 | if ((cmd & SONIC_CR_RXEN) == 0 || (cmd & SONIC_CR_STP) == 0) | 760 | if ((cmd & SONIC_CR_RXEN) == 0 || (cmd & SONIC_CR_STP) == 0) |
607 | printk("sonic_init: failed, status=%x\n", cmd); | 761 | printk(KERN_ERR "sonic_init: failed, status=%x\n", cmd); |
608 | 762 | ||
609 | if (sonic_debug > 2) | 763 | if (sonic_debug > 2) |
610 | printk("sonic_init: new status=%x\n", | 764 | printk("sonic_init: new status=%x\n", |
diff --git a/drivers/net/sonic.h b/drivers/net/sonic.h index c4a6d58e4afb..cede969a8baa 100644 --- a/drivers/net/sonic.h +++ b/drivers/net/sonic.h | |||
@@ -1,5 +1,5 @@ | |||
1 | /* | 1 | /* |
2 | * Helpfile for sonic.c | 2 | * Header file for sonic.c |
3 | * | 3 | * |
4 | * (C) Waldorf Electronics, Germany | 4 | * (C) Waldorf Electronics, Germany |
5 | * Written by Andreas Busse | 5 | * Written by Andreas Busse |
@@ -9,10 +9,16 @@ | |||
9 | * and pad structure members must be exchanged. Also, the structures | 9 | * and pad structure members must be exchanged. Also, the structures |
10 | * need to be changed accordingly to the bus size. | 10 | * need to be changed accordingly to the bus size. |
11 | * | 11 | * |
12 | * 981229 MSch: did just that for the 68k Mac port (32 bit, big endian), | 12 | * 981229 MSch: did just that for the 68k Mac port (32 bit, big endian) |
13 | * see CONFIG_MACSONIC branch below. | ||
14 | * | 13 | * |
14 | * 990611 David Huggins-Daines <dhd@debian.org>: This machine abstraction | ||
15 | * does not cope with 16-bit bus sizes very well. Therefore I have | ||
16 | * rewritten it with ugly macros and evil inlines. | ||
17 | * | ||
18 | * 050625 Finn Thain: introduced more 32-bit cards and dhd's support | ||
19 | * for 16-bit cards (from the mac68k project). | ||
15 | */ | 20 | */ |
21 | |||
16 | #ifndef SONIC_H | 22 | #ifndef SONIC_H |
17 | #define SONIC_H | 23 | #define SONIC_H |
18 | 24 | ||
@@ -83,6 +89,7 @@ | |||
83 | /* | 89 | /* |
84 | * Error counters | 90 | * Error counters |
85 | */ | 91 | */ |
92 | |||
86 | #define SONIC_CRCT 0x2c | 93 | #define SONIC_CRCT 0x2c |
87 | #define SONIC_FAET 0x2d | 94 | #define SONIC_FAET 0x2d |
88 | #define SONIC_MPT 0x2e | 95 | #define SONIC_MPT 0x2e |
@@ -182,14 +189,14 @@ | |||
182 | 189 | ||
183 | #define SONIC_INT_BR 0x4000 | 190 | #define SONIC_INT_BR 0x4000 |
184 | #define SONIC_INT_HBL 0x2000 | 191 | #define SONIC_INT_HBL 0x2000 |
185 | #define SONIC_INT_LCD 0x1000 | 192 | #define SONIC_INT_LCD 0x1000 |
186 | #define SONIC_INT_PINT 0x0800 | 193 | #define SONIC_INT_PINT 0x0800 |
187 | #define SONIC_INT_PKTRX 0x0400 | 194 | #define SONIC_INT_PKTRX 0x0400 |
188 | #define SONIC_INT_TXDN 0x0200 | 195 | #define SONIC_INT_TXDN 0x0200 |
189 | #define SONIC_INT_TXER 0x0100 | 196 | #define SONIC_INT_TXER 0x0100 |
190 | #define SONIC_INT_TC 0x0080 | 197 | #define SONIC_INT_TC 0x0080 |
191 | #define SONIC_INT_RDE 0x0040 | 198 | #define SONIC_INT_RDE 0x0040 |
192 | #define SONIC_INT_RBE 0x0020 | 199 | #define SONIC_INT_RBE 0x0020 |
193 | #define SONIC_INT_RBAE 0x0010 | 200 | #define SONIC_INT_RBAE 0x0010 |
194 | #define SONIC_INT_CRC 0x0008 | 201 | #define SONIC_INT_CRC 0x0008 |
195 | #define SONIC_INT_FAE 0x0004 | 202 | #define SONIC_INT_FAE 0x0004 |
@@ -201,224 +208,61 @@ | |||
201 | * The interrupts we allow. | 208 | * The interrupts we allow. |
202 | */ | 209 | */ |
203 | 210 | ||
204 | #define SONIC_IMR_DEFAULT (SONIC_INT_BR | \ | 211 | #define SONIC_IMR_DEFAULT ( SONIC_INT_BR | \ |
205 | SONIC_INT_LCD | \ | 212 | SONIC_INT_LCD | \ |
206 | SONIC_INT_PINT | \ | 213 | SONIC_INT_RFO | \ |
207 | SONIC_INT_PKTRX | \ | 214 | SONIC_INT_PKTRX | \ |
208 | SONIC_INT_TXDN | \ | 215 | SONIC_INT_TXDN | \ |
209 | SONIC_INT_TXER | \ | 216 | SONIC_INT_TXER | \ |
210 | SONIC_INT_RDE | \ | 217 | SONIC_INT_RDE | \ |
211 | SONIC_INT_RBE | \ | ||
212 | SONIC_INT_RBAE | \ | 218 | SONIC_INT_RBAE | \ |
213 | SONIC_INT_CRC | \ | 219 | SONIC_INT_CRC | \ |
214 | SONIC_INT_FAE | \ | 220 | SONIC_INT_FAE | \ |
215 | SONIC_INT_MP) | 221 | SONIC_INT_MP) |
216 | 222 | ||
217 | 223 | ||
218 | #define SONIC_END_OF_LINKS 0x0001 | 224 | #define SONIC_EOL 0x0001 |
219 | |||
220 | |||
221 | #ifdef CONFIG_MACSONIC | ||
222 | /* | ||
223 | * Big endian like structures on 680x0 Macs | ||
224 | */ | ||
225 | |||
226 | typedef struct { | ||
227 | u32 rx_bufadr_l; /* receive buffer ptr */ | ||
228 | u32 rx_bufadr_h; | ||
229 | |||
230 | u32 rx_bufsize_l; /* no. of words in the receive buffer */ | ||
231 | u32 rx_bufsize_h; | ||
232 | } sonic_rr_t; | ||
233 | |||
234 | /* | ||
235 | * Sonic receive descriptor. Receive descriptors are | ||
236 | * kept in a linked list of these structures. | ||
237 | */ | ||
238 | |||
239 | typedef struct { | ||
240 | SREGS_PAD(pad0); | ||
241 | u16 rx_status; /* status after reception of a packet */ | ||
242 | SREGS_PAD(pad1); | ||
243 | u16 rx_pktlen; /* length of the packet incl. CRC */ | ||
244 | |||
245 | /* | ||
246 | * Pointers to the location in the receive buffer area (RBA) | ||
247 | * where the packet resides. A packet is always received into | ||
248 | * a contiguous piece of memory. | ||
249 | */ | ||
250 | SREGS_PAD(pad2); | ||
251 | u16 rx_pktptr_l; | ||
252 | SREGS_PAD(pad3); | ||
253 | u16 rx_pktptr_h; | ||
254 | |||
255 | SREGS_PAD(pad4); | ||
256 | u16 rx_seqno; /* sequence no. */ | ||
257 | |||
258 | SREGS_PAD(pad5); | ||
259 | u16 link; /* link to next RDD (end if EOL bit set) */ | ||
260 | |||
261 | /* | ||
262 | * Owner of this descriptor, 0= driver, 1=sonic | ||
263 | */ | ||
264 | |||
265 | SREGS_PAD(pad6); | ||
266 | u16 in_use; | ||
267 | |||
268 | caddr_t rda_next; /* pointer to next RD */ | ||
269 | } sonic_rd_t; | ||
270 | |||
271 | |||
272 | /* | ||
273 | * Describes a Transmit Descriptor | ||
274 | */ | ||
275 | typedef struct { | ||
276 | SREGS_PAD(pad0); | ||
277 | u16 tx_status; /* status after transmission of a packet */ | ||
278 | SREGS_PAD(pad1); | ||
279 | u16 tx_config; /* transmit configuration for this packet */ | ||
280 | SREGS_PAD(pad2); | ||
281 | u16 tx_pktsize; /* size of the packet to be transmitted */ | ||
282 | SREGS_PAD(pad3); | ||
283 | u16 tx_frag_count; /* no. of fragments */ | ||
284 | |||
285 | SREGS_PAD(pad4); | ||
286 | u16 tx_frag_ptr_l; | ||
287 | SREGS_PAD(pad5); | ||
288 | u16 tx_frag_ptr_h; | ||
289 | SREGS_PAD(pad6); | ||
290 | u16 tx_frag_size; | ||
291 | |||
292 | SREGS_PAD(pad7); | ||
293 | u16 link; /* ptr to next descriptor */ | ||
294 | } sonic_td_t; | ||
295 | |||
296 | |||
297 | /* | ||
298 | * Describes an entry in the CAM Descriptor Area. | ||
299 | */ | ||
300 | |||
301 | typedef struct { | ||
302 | SREGS_PAD(pad0); | ||
303 | u16 cam_entry_pointer; | ||
304 | SREGS_PAD(pad1); | ||
305 | u16 cam_cap0; | ||
306 | SREGS_PAD(pad2); | ||
307 | u16 cam_cap1; | ||
308 | SREGS_PAD(pad3); | ||
309 | u16 cam_cap2; | ||
310 | } sonic_cd_t; | ||
311 | |||
312 | #define CAM_DESCRIPTORS 16 | 225 | #define CAM_DESCRIPTORS 16 |
313 | 226 | ||
314 | 227 | /* Offsets in the various DMA buffers accessed by the SONIC */ | |
315 | typedef struct { | 228 | |
316 | sonic_cd_t cam_desc[CAM_DESCRIPTORS]; | 229 | #define SONIC_BITMODE16 0 |
317 | SREGS_PAD(pad); | 230 | #define SONIC_BITMODE32 1 |
318 | u16 cam_enable; | 231 | #define SONIC_BUS_SCALE(bitmode) ((bitmode) ? 4 : 2) |
319 | } sonic_cda_t; | 232 | /* Note! These are all measured in bus-size units, so use SONIC_BUS_SCALE */ |
320 | 233 | #define SIZEOF_SONIC_RR 4 | |
321 | #else /* original declarations, little endian 32 bit */ | 234 | #define SONIC_RR_BUFADR_L 0 |
322 | 235 | #define SONIC_RR_BUFADR_H 1 | |
323 | /* | 236 | #define SONIC_RR_BUFSIZE_L 2 |
324 | * structure definitions | 237 | #define SONIC_RR_BUFSIZE_H 3 |
325 | */ | 238 | |
326 | 239 | #define SIZEOF_SONIC_RD 7 | |
327 | typedef struct { | 240 | #define SONIC_RD_STATUS 0 |
328 | u32 rx_bufadr_l; /* receive buffer ptr */ | 241 | #define SONIC_RD_PKTLEN 1 |
329 | u32 rx_bufadr_h; | 242 | #define SONIC_RD_PKTPTR_L 2 |
330 | 243 | #define SONIC_RD_PKTPTR_H 3 | |
331 | u32 rx_bufsize_l; /* no. of words in the receive buffer */ | 244 | #define SONIC_RD_SEQNO 4 |
332 | u32 rx_bufsize_h; | 245 | #define SONIC_RD_LINK 5 |
333 | } sonic_rr_t; | 246 | #define SONIC_RD_IN_USE 6 |
334 | 247 | ||
335 | /* | 248 | #define SIZEOF_SONIC_TD 8 |
336 | * Sonic receive descriptor. Receive descriptors are | 249 | #define SONIC_TD_STATUS 0 |
337 | * kept in a linked list of these structures. | 250 | #define SONIC_TD_CONFIG 1 |
338 | */ | 251 | #define SONIC_TD_PKTSIZE 2 |
339 | 252 | #define SONIC_TD_FRAG_COUNT 3 | |
340 | typedef struct { | 253 | #define SONIC_TD_FRAG_PTR_L 4 |
341 | u16 rx_status; /* status after reception of a packet */ | 254 | #define SONIC_TD_FRAG_PTR_H 5 |
342 | SREGS_PAD(pad0); | 255 | #define SONIC_TD_FRAG_SIZE 6 |
343 | u16 rx_pktlen; /* length of the packet incl. CRC */ | 256 | #define SONIC_TD_LINK 7 |
344 | SREGS_PAD(pad1); | 257 | |
345 | 258 | #define SIZEOF_SONIC_CD 4 | |
346 | /* | 259 | #define SONIC_CD_ENTRY_POINTER 0 |
347 | * Pointers to the location in the receive buffer area (RBA) | 260 | #define SONIC_CD_CAP0 1 |
348 | * where the packet resides. A packet is always received into | 261 | #define SONIC_CD_CAP1 2 |
349 | * a contiguous piece of memory. | 262 | #define SONIC_CD_CAP2 3 |
350 | */ | 263 | |
351 | u16 rx_pktptr_l; | 264 | #define SIZEOF_SONIC_CDA ((CAM_DESCRIPTORS * SIZEOF_SONIC_CD) + 1) |
352 | SREGS_PAD(pad2); | 265 | #define SONIC_CDA_CAM_ENABLE (CAM_DESCRIPTORS * SIZEOF_SONIC_CD) |
353 | u16 rx_pktptr_h; | ||
354 | SREGS_PAD(pad3); | ||
355 | |||
356 | u16 rx_seqno; /* sequence no. */ | ||
357 | SREGS_PAD(pad4); | ||
358 | |||
359 | u16 link; /* link to next RDD (end if EOL bit set) */ | ||
360 | SREGS_PAD(pad5); | ||
361 | |||
362 | /* | ||
363 | * Owner of this descriptor, 0= driver, 1=sonic | ||
364 | */ | ||
365 | |||
366 | u16 in_use; | ||
367 | SREGS_PAD(pad6); | ||
368 | |||
369 | caddr_t rda_next; /* pointer to next RD */ | ||
370 | } sonic_rd_t; | ||
371 | |||
372 | |||
373 | /* | ||
374 | * Describes a Transmit Descriptor | ||
375 | */ | ||
376 | typedef struct { | ||
377 | u16 tx_status; /* status after transmission of a packet */ | ||
378 | SREGS_PAD(pad0); | ||
379 | u16 tx_config; /* transmit configuration for this packet */ | ||
380 | SREGS_PAD(pad1); | ||
381 | u16 tx_pktsize; /* size of the packet to be transmitted */ | ||
382 | SREGS_PAD(pad2); | ||
383 | u16 tx_frag_count; /* no. of fragments */ | ||
384 | SREGS_PAD(pad3); | ||
385 | |||
386 | u16 tx_frag_ptr_l; | ||
387 | SREGS_PAD(pad4); | ||
388 | u16 tx_frag_ptr_h; | ||
389 | SREGS_PAD(pad5); | ||
390 | u16 tx_frag_size; | ||
391 | SREGS_PAD(pad6); | ||
392 | |||
393 | u16 link; /* ptr to next descriptor */ | ||
394 | SREGS_PAD(pad7); | ||
395 | } sonic_td_t; | ||
396 | |||
397 | |||
398 | /* | ||
399 | * Describes an entry in the CAM Descriptor Area. | ||
400 | */ | ||
401 | |||
402 | typedef struct { | ||
403 | u16 cam_entry_pointer; | ||
404 | SREGS_PAD(pad0); | ||
405 | u16 cam_cap0; | ||
406 | SREGS_PAD(pad1); | ||
407 | u16 cam_cap1; | ||
408 | SREGS_PAD(pad2); | ||
409 | u16 cam_cap2; | ||
410 | SREGS_PAD(pad3); | ||
411 | } sonic_cd_t; | ||
412 | |||
413 | #define CAM_DESCRIPTORS 16 | ||
414 | |||
415 | |||
416 | typedef struct { | ||
417 | sonic_cd_t cam_desc[CAM_DESCRIPTORS]; | ||
418 | u16 cam_enable; | ||
419 | SREGS_PAD(pad); | ||
420 | } sonic_cda_t; | ||
421 | #endif /* endianness */ | ||
422 | 266 | ||
423 | /* | 267 | /* |
424 | * Some tunables for the buffer areas. Power of 2 is required | 268 | * Some tunables for the buffer areas. Power of 2 is required |
@@ -426,44 +270,60 @@ typedef struct { | |||
426 | * | 270 | * |
427 | * MSch: use more buffer space for the slow m68k Macs! | 271 | * MSch: use more buffer space for the slow m68k Macs! |
428 | */ | 272 | */ |
429 | #ifdef CONFIG_MACSONIC | 273 | #define SONIC_NUM_RRS 16 /* number of receive resources */ |
430 | #define SONIC_NUM_RRS 32 /* number of receive resources */ | 274 | #define SONIC_NUM_RDS SONIC_NUM_RRS /* number of receive descriptors */ |
431 | #define SONIC_NUM_RDS SONIC_NUM_RRS /* number of receive descriptors */ | 275 | #define SONIC_NUM_TDS 16 /* number of transmit descriptors */ |
432 | #define SONIC_NUM_TDS 32 /* number of transmit descriptors */ | ||
433 | #else | ||
434 | #define SONIC_NUM_RRS 16 /* number of receive resources */ | ||
435 | #define SONIC_NUM_RDS SONIC_NUM_RRS /* number of receive descriptors */ | ||
436 | #define SONIC_NUM_TDS 16 /* number of transmit descriptors */ | ||
437 | #endif | ||
438 | #define SONIC_RBSIZE 1520 /* size of one resource buffer */ | ||
439 | 276 | ||
440 | #define SONIC_RDS_MASK (SONIC_NUM_RDS-1) | 277 | #define SONIC_RDS_MASK (SONIC_NUM_RDS-1) |
441 | #define SONIC_TDS_MASK (SONIC_NUM_TDS-1) | 278 | #define SONIC_TDS_MASK (SONIC_NUM_TDS-1) |
442 | 279 | ||
280 | #define SONIC_RBSIZE 1520 /* size of one resource buffer */ | ||
281 | |||
282 | /* Again, measured in bus size units! */ | ||
283 | #define SIZEOF_SONIC_DESC (SIZEOF_SONIC_CDA \ | ||
284 | + (SIZEOF_SONIC_TD * SONIC_NUM_TDS) \ | ||
285 | + (SIZEOF_SONIC_RD * SONIC_NUM_RDS) \ | ||
286 | + (SIZEOF_SONIC_RR * SONIC_NUM_RRS)) | ||
443 | 287 | ||
444 | /* Information that needs to be kept for each board. */ | 288 | /* Information that needs to be kept for each board. */ |
445 | struct sonic_local { | 289 | struct sonic_local { |
446 | sonic_cda_t cda; /* virtual CPU address of CDA */ | 290 | /* Bus size. 0 == 16 bits, 1 == 32 bits. */ |
447 | sonic_td_t tda[SONIC_NUM_TDS]; /* transmit descriptor area */ | 291 | int dma_bitmode; |
448 | sonic_rr_t rra[SONIC_NUM_RRS]; /* receive resource area */ | 292 | /* Register offset within the longword (independent of endianness, |
449 | sonic_rd_t rda[SONIC_NUM_RDS]; /* receive descriptor area */ | 293 | and varies from one type of Macintosh SONIC to another |
450 | struct sk_buff *tx_skb[SONIC_NUM_TDS]; /* skbuffs for packets to transmit */ | 294 | (Aarrgh)) */ |
451 | unsigned int tx_laddr[SONIC_NUM_TDS]; /* logical DMA address fro skbuffs */ | 295 | int reg_offset; |
452 | unsigned char *rba; /* start of receive buffer areas */ | 296 | void *descriptors; |
453 | unsigned int cda_laddr; /* logical DMA address of CDA */ | 297 | /* Crud. These areas have to be within the same 64K. Therefore |
454 | unsigned int tda_laddr; /* logical DMA address of TDA */ | 298 | we allocate a desriptors page, and point these to places within it. */ |
455 | unsigned int rra_laddr; /* logical DMA address of RRA */ | 299 | void *cda; /* CAM descriptor area */ |
456 | unsigned int rda_laddr; /* logical DMA address of RDA */ | 300 | void *tda; /* Transmit descriptor area */ |
457 | unsigned int rba_laddr; /* logical DMA address of RBA */ | 301 | void *rra; /* Receive resource area */ |
458 | unsigned int cur_rra; /* current indexes to resource areas */ | 302 | void *rda; /* Receive descriptor area */ |
303 | struct sk_buff* volatile rx_skb[SONIC_NUM_RRS]; /* packets to be received */ | ||
304 | struct sk_buff* volatile tx_skb[SONIC_NUM_TDS]; /* packets to be transmitted */ | ||
305 | unsigned int tx_len[SONIC_NUM_TDS]; /* lengths of tx DMA mappings */ | ||
306 | /* Logical DMA addresses on MIPS, bus addresses on m68k | ||
307 | * (so "laddr" is a bit misleading) */ | ||
308 | dma_addr_t descriptors_laddr; | ||
309 | u32 cda_laddr; /* logical DMA address of CDA */ | ||
310 | u32 tda_laddr; /* logical DMA address of TDA */ | ||
311 | u32 rra_laddr; /* logical DMA address of RRA */ | ||
312 | u32 rda_laddr; /* logical DMA address of RDA */ | ||
313 | dma_addr_t rx_laddr[SONIC_NUM_RRS]; /* logical DMA addresses of rx skbuffs */ | ||
314 | dma_addr_t tx_laddr[SONIC_NUM_TDS]; /* logical DMA addresses of tx skbuffs */ | ||
315 | unsigned int rra_end; | ||
316 | unsigned int cur_rwp; | ||
459 | unsigned int cur_rx; | 317 | unsigned int cur_rx; |
460 | unsigned int cur_tx; | 318 | unsigned int cur_tx; /* first unacked transmit packet */ |
461 | unsigned int dirty_tx; /* last unacked transmit packet */ | 319 | unsigned int eol_rx; |
462 | char tx_full; | 320 | unsigned int eol_tx; /* last unacked transmit packet */ |
321 | unsigned int next_tx; /* next free TD */ | ||
322 | struct device *device; /* generic device */ | ||
463 | struct net_device_stats stats; | 323 | struct net_device_stats stats; |
464 | }; | 324 | }; |
465 | 325 | ||
466 | #define TX_TIMEOUT 6 | 326 | #define TX_TIMEOUT (3 * HZ) |
467 | 327 | ||
468 | /* Index to functions, as function prototypes. */ | 328 | /* Index to functions, as function prototypes. */ |
469 | 329 | ||
@@ -477,6 +337,114 @@ static void sonic_multicast_list(struct net_device *dev); | |||
477 | static int sonic_init(struct net_device *dev); | 337 | static int sonic_init(struct net_device *dev); |
478 | static void sonic_tx_timeout(struct net_device *dev); | 338 | static void sonic_tx_timeout(struct net_device *dev); |
479 | 339 | ||
340 | /* Internal inlines for reading/writing DMA buffers. Note that bus | ||
341 | size and endianness matter here, whereas they don't for registers, | ||
342 | as far as we can tell. */ | ||
343 | /* OpenBSD calls this "SWO". I'd like to think that sonic_buf_put() | ||
344 | is a much better name. */ | ||
345 | static inline void sonic_buf_put(void* base, int bitmode, | ||
346 | int offset, __u16 val) | ||
347 | { | ||
348 | if (bitmode) | ||
349 | #ifdef __BIG_ENDIAN | ||
350 | ((__u16 *) base + (offset*2))[1] = val; | ||
351 | #else | ||
352 | ((__u16 *) base + (offset*2))[0] = val; | ||
353 | #endif | ||
354 | else | ||
355 | ((__u16 *) base)[offset] = val; | ||
356 | } | ||
357 | |||
358 | static inline __u16 sonic_buf_get(void* base, int bitmode, | ||
359 | int offset) | ||
360 | { | ||
361 | if (bitmode) | ||
362 | #ifdef __BIG_ENDIAN | ||
363 | return ((volatile __u16 *) base + (offset*2))[1]; | ||
364 | #else | ||
365 | return ((volatile __u16 *) base + (offset*2))[0]; | ||
366 | #endif | ||
367 | else | ||
368 | return ((volatile __u16 *) base)[offset]; | ||
369 | } | ||
370 | |||
371 | /* Inlines that you should actually use for reading/writing DMA buffers */ | ||
372 | static inline void sonic_cda_put(struct net_device* dev, int entry, | ||
373 | int offset, __u16 val) | ||
374 | { | ||
375 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
376 | sonic_buf_put(lp->cda, lp->dma_bitmode, | ||
377 | (entry * SIZEOF_SONIC_CD) + offset, val); | ||
378 | } | ||
379 | |||
380 | static inline __u16 sonic_cda_get(struct net_device* dev, int entry, | ||
381 | int offset) | ||
382 | { | ||
383 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
384 | return sonic_buf_get(lp->cda, lp->dma_bitmode, | ||
385 | (entry * SIZEOF_SONIC_CD) + offset); | ||
386 | } | ||
387 | |||
388 | static inline void sonic_set_cam_enable(struct net_device* dev, __u16 val) | ||
389 | { | ||
390 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
391 | sonic_buf_put(lp->cda, lp->dma_bitmode, SONIC_CDA_CAM_ENABLE, val); | ||
392 | } | ||
393 | |||
394 | static inline __u16 sonic_get_cam_enable(struct net_device* dev) | ||
395 | { | ||
396 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
397 | return sonic_buf_get(lp->cda, lp->dma_bitmode, SONIC_CDA_CAM_ENABLE); | ||
398 | } | ||
399 | |||
400 | static inline void sonic_tda_put(struct net_device* dev, int entry, | ||
401 | int offset, __u16 val) | ||
402 | { | ||
403 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
404 | sonic_buf_put(lp->tda, lp->dma_bitmode, | ||
405 | (entry * SIZEOF_SONIC_TD) + offset, val); | ||
406 | } | ||
407 | |||
408 | static inline __u16 sonic_tda_get(struct net_device* dev, int entry, | ||
409 | int offset) | ||
410 | { | ||
411 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
412 | return sonic_buf_get(lp->tda, lp->dma_bitmode, | ||
413 | (entry * SIZEOF_SONIC_TD) + offset); | ||
414 | } | ||
415 | |||
416 | static inline void sonic_rda_put(struct net_device* dev, int entry, | ||
417 | int offset, __u16 val) | ||
418 | { | ||
419 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
420 | sonic_buf_put(lp->rda, lp->dma_bitmode, | ||
421 | (entry * SIZEOF_SONIC_RD) + offset, val); | ||
422 | } | ||
423 | |||
424 | static inline __u16 sonic_rda_get(struct net_device* dev, int entry, | ||
425 | int offset) | ||
426 | { | ||
427 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
428 | return sonic_buf_get(lp->rda, lp->dma_bitmode, | ||
429 | (entry * SIZEOF_SONIC_RD) + offset); | ||
430 | } | ||
431 | |||
432 | static inline void sonic_rra_put(struct net_device* dev, int entry, | ||
433 | int offset, __u16 val) | ||
434 | { | ||
435 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
436 | sonic_buf_put(lp->rra, lp->dma_bitmode, | ||
437 | (entry * SIZEOF_SONIC_RR) + offset, val); | ||
438 | } | ||
439 | |||
440 | static inline __u16 sonic_rra_get(struct net_device* dev, int entry, | ||
441 | int offset) | ||
442 | { | ||
443 | struct sonic_local* lp = (struct sonic_local *) dev->priv; | ||
444 | return sonic_buf_get(lp->rra, lp->dma_bitmode, | ||
445 | (entry * SIZEOF_SONIC_RR) + offset); | ||
446 | } | ||
447 | |||
480 | static const char *version = | 448 | static const char *version = |
481 | "sonic.c:v0.92 20.9.98 tsbogend@alpha.franken.de\n"; | 449 | "sonic.c:v0.92 20.9.98 tsbogend@alpha.franken.de\n"; |
482 | 450 | ||
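
The descriptor offsets added above are counted in bus-size units: with a 16-bit data path each field is one 16-bit word, while with a 32-bit path each field occupies a longword whose meaningful half is endianness-dependent, which is what the __BIG_ENDIAN branch in sonic_buf_put() selects. The standalone program below, with made-up entry and field values, just shows the byte-offset arithmetic implied by SONIC_BUS_SCALE():

#include <stdio.h>

#define BITMODE16 0
#define BITMODE32 1
#define BUS_SCALE(bitmode) ((bitmode) ? 4 : 2)

#define SIZEOF_TD     8                   /* fields per transmit descriptor */
#define TD_FRAG_PTR_L 4                   /* field index, as in the header  */

/* Byte offset of (entry, field) inside the TDA: everything is counted in
 * bus-size units, so the same field index lands at different byte offsets
 * depending on the data-path width. */
static unsigned byte_offset(int bitmode, int entry, int field)
{
        return (entry * SIZEOF_TD + field) * BUS_SCALE(bitmode);
}

int main(void)
{
        printf("16-bit bus: TD[3].frag_ptr_l at byte %u\n",
               byte_offset(BITMODE16, 3, TD_FRAG_PTR_L));   /* 56  */
        printf("32-bit bus: TD[3].frag_ptr_l at byte %u\n",
               byte_offset(BITMODE32, 3, TD_FRAG_PTR_L));   /* 112 */
        return 0;
}

On a 32-bit big-endian Mac the 16-bit value sits in the second word of each longword, which is exactly the [1] index taken in sonic_buf_put() and sonic_buf_get().
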
diff --git a/drivers/net/tokenring/Kconfig b/drivers/net/tokenring/Kconfig index 7e99e9f8045e..e4cfc80b283b 100644 --- a/drivers/net/tokenring/Kconfig +++ b/drivers/net/tokenring/Kconfig | |||
@@ -84,7 +84,7 @@ config 3C359 | |||
84 | 84 | ||
85 | config TMS380TR | 85 | config TMS380TR |
86 | tristate "Generic TMS380 Token Ring ISA/PCI adapter support" | 86 | tristate "Generic TMS380 Token Ring ISA/PCI adapter support" |
87 | depends on TR && (PCI || ISA && ISA_DMA_API) | 87 | depends on TR && (PCI || ISA && ISA_DMA_API || MCA) |
88 | select FW_LOADER | 88 | select FW_LOADER |
89 | ---help--- | 89 | ---help--- |
90 | This driver provides generic support for token ring adapters | 90 | This driver provides generic support for token ring adapters |
@@ -158,7 +158,7 @@ config ABYSS | |||
158 | 158 | ||
159 | config MADGEMC | 159 | config MADGEMC |
160 | tristate "Madge Smart 16/4 Ringnode MicroChannel" | 160 | tristate "Madge Smart 16/4 Ringnode MicroChannel" |
161 | depends on TR && TMS380TR && MCA_LEGACY | 161 | depends on TR && TMS380TR && MCA |
162 | help | 162 | help |
163 | This tms380 module supports the Madge Smart 16/4 MC16 and MC32 | 163 | This tms380 module supports the Madge Smart 16/4 MC16 and MC32 |
164 | MicroChannel adapters. | 164 | MicroChannel adapters. |
diff --git a/drivers/net/tokenring/abyss.c b/drivers/net/tokenring/abyss.c index 87103c400999..9345e68c451e 100644 --- a/drivers/net/tokenring/abyss.c +++ b/drivers/net/tokenring/abyss.c | |||
@@ -139,7 +139,7 @@ static int __devinit abyss_attach(struct pci_dev *pdev, const struct pci_device_ | |||
139 | */ | 139 | */ |
140 | dev->base_addr += 0x10; | 140 | dev->base_addr += 0x10; |
141 | 141 | ||
142 | ret = tmsdev_init(dev, PCI_MAX_ADDRESS, pdev); | 142 | ret = tmsdev_init(dev, &pdev->dev); |
143 | if (ret) { | 143 | if (ret) { |
144 | printk("%s: unable to get memory for dev->priv.\n", | 144 | printk("%s: unable to get memory for dev->priv.\n", |
145 | dev->name); | 145 | dev->name); |
diff --git a/drivers/net/tokenring/madgemc.c b/drivers/net/tokenring/madgemc.c index 659cbdbef7f3..3a25d191ea4a 100644 --- a/drivers/net/tokenring/madgemc.c +++ b/drivers/net/tokenring/madgemc.c | |||
@@ -20,7 +20,7 @@ | |||
20 | static const char version[] = "madgemc.c: v0.91 23/01/2000 by Adam Fritzler\n"; | 20 | static const char version[] = "madgemc.c: v0.91 23/01/2000 by Adam Fritzler\n"; |
21 | 21 | ||
22 | #include <linux/module.h> | 22 | #include <linux/module.h> |
23 | #include <linux/mca-legacy.h> | 23 | #include <linux/mca.h> |
24 | #include <linux/kernel.h> | 24 | #include <linux/kernel.h> |
25 | #include <linux/errno.h> | 25 | #include <linux/errno.h> |
26 | #include <linux/pci.h> | 26 | #include <linux/pci.h> |
@@ -38,9 +38,7 @@ static const char version[] = "madgemc.c: v0.91 23/01/2000 by Adam Fritzler\n"; | |||
38 | #define MADGEMC_IO_EXTENT 32 | 38 | #define MADGEMC_IO_EXTENT 32 |
39 | #define MADGEMC_SIF_OFFSET 0x08 | 39 | #define MADGEMC_SIF_OFFSET 0x08 |
40 | 40 | ||
41 | struct madgemc_card { | 41 | struct card_info { |
42 | struct net_device *dev; | ||
43 | |||
44 | /* | 42 | /* |
45 | * These are read from the BIA ROM. | 43 | * These are read from the BIA ROM. |
46 | */ | 44 | */ |
@@ -57,16 +55,12 @@ struct madgemc_card { | |||
57 | unsigned int arblevel:4; | 55 | unsigned int arblevel:4; |
58 | unsigned int ringspeed:2; /* 0 = 4mb, 1 = 16, 2 = Auto/none */ | 56 | unsigned int ringspeed:2; /* 0 = 4mb, 1 = 16, 2 = Auto/none */ |
59 | unsigned int cabletype:1; /* 0 = RJ45, 1 = DB9 */ | 57 | unsigned int cabletype:1; /* 0 = RJ45, 1 = DB9 */ |
60 | |||
61 | struct madgemc_card *next; | ||
62 | }; | 58 | }; |
63 | static struct madgemc_card *madgemc_card_list; | ||
64 | |||
65 | 59 | ||
66 | static int madgemc_open(struct net_device *dev); | 60 | static int madgemc_open(struct net_device *dev); |
67 | static int madgemc_close(struct net_device *dev); | 61 | static int madgemc_close(struct net_device *dev); |
68 | static int madgemc_chipset_init(struct net_device *dev); | 62 | static int madgemc_chipset_init(struct net_device *dev); |
69 | static void madgemc_read_rom(struct madgemc_card *card); | 63 | static void madgemc_read_rom(struct net_device *dev, struct card_info *card); |
70 | static unsigned short madgemc_setnselout_pins(struct net_device *dev); | 64 | static unsigned short madgemc_setnselout_pins(struct net_device *dev); |
71 | static void madgemc_setcabletype(struct net_device *dev, int type); | 65 | static void madgemc_setcabletype(struct net_device *dev, int type); |
72 | 66 | ||
@@ -151,261 +145,237 @@ static void madgemc_sifwritew(struct net_device *dev, unsigned short val, unsign | |||
151 | 145 | ||
152 | 146 | ||
153 | 147 | ||
154 | static int __init madgemc_probe(void) | 148 | static int __devinit madgemc_probe(struct device *device) |
155 | { | 149 | { |
156 | static int versionprinted; | 150 | static int versionprinted; |
157 | struct net_device *dev; | 151 | struct net_device *dev; |
158 | struct net_local *tp; | 152 | struct net_local *tp; |
159 | struct madgemc_card *card; | 153 | struct card_info *card; |
160 | int i,slot = 0; | 154 | struct mca_device *mdev = to_mca_device(device); |
161 | __u8 posreg[4]; | 155 | int ret = 0, i = 0; |
162 | 156 | ||
163 | if (!MCA_bus) | 157 | if (versionprinted++ == 0) |
164 | return -1; | 158 | printk("%s", version); |
165 | 159 | ||
166 | while (slot != MCA_NOTFOUND) { | 160 | if(mca_device_claimed(mdev)) |
167 | /* | 161 | return -EBUSY; |
168 | * Currently we only support the MC16/32 (MCA ID 002d) | 162 | mca_device_set_claim(mdev, 1); |
169 | */ | 163 | |
170 | slot = mca_find_unused_adapter(0x002d, slot); | 164 | dev = alloc_trdev(sizeof(struct net_local)); |
171 | if (slot == MCA_NOTFOUND) | 165 | if (!dev) { |
172 | break; | 166 | printk("madgemc: unable to allocate dev space\n"); |
173 | 167 | mca_device_set_claim(mdev, 0); | |
174 | /* | 168 | ret = -ENOMEM; |
175 | * If we get here, we have an adapter. | 169 | goto getout; |
176 | */ | 170 | } |
177 | if (versionprinted++ == 0) | ||
178 | printk("%s", version); | ||
179 | |||
180 | dev = alloc_trdev(sizeof(struct net_local)); | ||
181 | if (dev == NULL) { | ||
182 | printk("madgemc: unable to allocate dev space\n"); | ||
183 | if (madgemc_card_list) | ||
184 | return 0; | ||
185 | return -1; | ||
186 | } | ||
187 | 171 | ||
188 | SET_MODULE_OWNER(dev); | 172 | SET_MODULE_OWNER(dev); |
189 | dev->dma = 0; | 173 | dev->dma = 0; |
190 | 174 | ||
191 | /* | 175 | card = kmalloc(sizeof(struct card_info), GFP_KERNEL); |
192 | * Fetch MCA config registers | 176 | if (card==NULL) { |
193 | */ | 177 | printk("madgemc: unable to allocate card struct\n"); |
194 | for(i=0;i<4;i++) | 178 | ret = -ENOMEM; |
195 | posreg[i] = mca_read_stored_pos(slot, i+2); | 179 | goto getout1; |
196 | 180 | } | |
197 | card = kmalloc(sizeof(struct madgemc_card), GFP_KERNEL); | 181 | |
198 | if (card==NULL) { | 182 | /* |
199 | printk("madgemc: unable to allocate card struct\n"); | 183 | * Parse configuration information. This all comes |
200 | free_netdev(dev); | 184 | * directly from the publicly available @002d.ADF. |
201 | if (madgemc_card_list) | 185 | * Get it from Madge or your local ADF library. |
202 | return 0; | 186 | */ |
203 | return -1; | 187 | |
204 | } | 188 | /* |
205 | card->dev = dev; | 189 | * Base address |
206 | 190 | */ | |
207 | /* | 191 | dev->base_addr = 0x0a20 + |
208 | * Parse configuration information. This all comes | 192 | ((mdev->pos[2] & MC16_POS2_ADDR2)?0x0400:0) + |
209 | * directly from the publicly available @002d.ADF. | 193 | ((mdev->pos[0] & MC16_POS0_ADDR1)?0x1000:0) + |
210 | * Get it from Madge or your local ADF library. | 194 | ((mdev->pos[3] & MC16_POS3_ADDR3)?0x2000:0); |
211 | */ | 195 | |
212 | 196 | /* | |
213 | /* | 197 | * Interrupt line |
214 | * Base address | 198 | */ |
215 | */ | 199 | switch(mdev->pos[0] >> 6) { /* upper two bits */ |
216 | dev->base_addr = 0x0a20 + | ||
217 | ((posreg[2] & MC16_POS2_ADDR2)?0x0400:0) + | ||
218 | ((posreg[0] & MC16_POS0_ADDR1)?0x1000:0) + | ||
219 | ((posreg[3] & MC16_POS3_ADDR3)?0x2000:0); | ||
220 | |||
221 | /* | ||
222 | * Interrupt line | ||
223 | */ | ||
224 | switch(posreg[0] >> 6) { /* upper two bits */ | ||
225 | case 0x1: dev->irq = 3; break; | 200 | case 0x1: dev->irq = 3; break; |
226 | case 0x2: dev->irq = 9; break; /* IRQ 2 = IRQ 9 */ | 201 | case 0x2: dev->irq = 9; break; /* IRQ 2 = IRQ 9 */ |
227 | case 0x3: dev->irq = 10; break; | 202 | case 0x3: dev->irq = 10; break; |
228 | default: dev->irq = 0; break; | 203 | default: dev->irq = 0; break; |
229 | } | 204 | } |
230 | 205 | ||
231 | if (dev->irq == 0) { | 206 | if (dev->irq == 0) { |
232 | printk("%s: invalid IRQ\n", dev->name); | 207 | printk("%s: invalid IRQ\n", dev->name); |
233 | goto getout1; | 208 | ret = -EBUSY; |
234 | } | 209 | goto getout2; |
210 | } | ||
235 | 211 | ||
236 | if (!request_region(dev->base_addr, MADGEMC_IO_EXTENT, | 212 | if (!request_region(dev->base_addr, MADGEMC_IO_EXTENT, |
237 | "madgemc")) { | 213 | "madgemc")) { |
238 | printk(KERN_INFO "madgemc: unable to setup Smart MC in slot %d because of I/O base conflict at 0x%04lx\n", slot, dev->base_addr); | 214 | printk(KERN_INFO "madgemc: unable to setup Smart MC in slot %d because of I/O base conflict at 0x%04lx\n", mdev->slot, dev->base_addr); |
239 | dev->base_addr += MADGEMC_SIF_OFFSET; | ||
240 | goto getout1; | ||
241 | } | ||
242 | dev->base_addr += MADGEMC_SIF_OFFSET; | 215 | dev->base_addr += MADGEMC_SIF_OFFSET; |
216 | ret = -EBUSY; | ||
217 | goto getout2; | ||
218 | } | ||
219 | dev->base_addr += MADGEMC_SIF_OFFSET; | ||
220 | |||
221 | /* | ||
222 | * Arbitration Level | ||
223 | */ | ||
224 | card->arblevel = ((mdev->pos[0] >> 1) & 0x7) + 8; | ||
225 | |||
226 | /* | ||
227 | * Burst mode and Fairness | ||
228 | */ | ||
229 | card->burstmode = ((mdev->pos[2] >> 6) & 0x3); | ||
230 | card->fairness = ((mdev->pos[2] >> 4) & 0x1); | ||
231 | |||
232 | /* | ||
233 | * Ring Speed | ||
234 | */ | ||
235 | if ((mdev->pos[1] >> 2)&0x1) | ||
236 | card->ringspeed = 2; /* not selected */ | ||
237 | else if ((mdev->pos[2] >> 5) & 0x1) | ||
238 | card->ringspeed = 1; /* 16Mb */ | ||
239 | else | ||
240 | card->ringspeed = 0; /* 4Mb */ | ||
241 | |||
242 | /* | ||
243 | * Cable type | ||
244 | */ | ||
245 | if ((mdev->pos[1] >> 6)&0x1) | ||
246 | card->cabletype = 1; /* STP/DB9 */ | ||
247 | else | ||
248 | card->cabletype = 0; /* UTP/RJ-45 */ | ||
249 | |||
250 | |||
251 | /* | ||
252 | * ROM Info. This requires us to actually twiddle | ||
253 | * bits on the card, so we must ensure above that | ||
254 | * the base address is free of conflict (request_region above). | ||
255 | */ | ||
256 | madgemc_read_rom(dev, card); | ||
243 | 257 | ||
244 | /* | 258 | if (card->manid != 0x4d) { /* something went wrong */ |
245 | * Arbitration Level | 259 | printk(KERN_INFO "%s: Madge MC ROM read failed (unknown manufacturer ID %02x)\n", dev->name, card->manid); |
246 | */ | 260 | goto getout3; |
247 | card->arblevel = ((posreg[0] >> 1) & 0x7) + 8; | 261 | } |
248 | |||
249 | /* | ||
250 | * Burst mode and Fairness | ||
251 | */ | ||
252 | card->burstmode = ((posreg[2] >> 6) & 0x3); | ||
253 | card->fairness = ((posreg[2] >> 4) & 0x1); | ||
254 | |||
255 | /* | ||
256 | * Ring Speed | ||
257 | */ | ||
258 | if ((posreg[1] >> 2)&0x1) | ||
259 | card->ringspeed = 2; /* not selected */ | ||
260 | else if ((posreg[2] >> 5) & 0x1) | ||
261 | card->ringspeed = 1; /* 16Mb */ | ||
262 | else | ||
263 | card->ringspeed = 0; /* 4Mb */ | ||
264 | |||
265 | /* | ||
266 | * Cable type | ||
267 | */ | ||
268 | if ((posreg[1] >> 6)&0x1) | ||
269 | card->cabletype = 1; /* STP/DB9 */ | ||
270 | else | ||
271 | card->cabletype = 0; /* UTP/RJ-45 */ | ||
272 | |||
273 | |||
274 | /* | ||
275 | * ROM Info. This requires us to actually twiddle | ||
276 | * bits on the card, so we must ensure above that | ||
277 | * the base address is free of conflict (request_region above). | ||
278 | */ | ||
279 | madgemc_read_rom(card); | ||
280 | |||
281 | if (card->manid != 0x4d) { /* something went wrong */ | ||
282 | printk(KERN_INFO "%s: Madge MC ROM read failed (unknown manufacturer ID %02x)\n", dev->name, card->manid); | ||
283 | goto getout; | ||
284 | } | ||
285 | 262 | ||
286 | if ((card->cardtype != 0x08) && (card->cardtype != 0x0d)) { | 263 | if ((card->cardtype != 0x08) && (card->cardtype != 0x0d)) { |
287 | printk(KERN_INFO "%s: Madge MC ROM read failed (unknown card ID %02x)\n", dev->name, card->cardtype); | 264 | printk(KERN_INFO "%s: Madge MC ROM read failed (unknown card ID %02x)\n", dev->name, card->cardtype); |
288 | goto getout; | 265 | ret = -EIO; |
289 | } | 266 | goto getout3; |
267 | } | ||
290 | 268 | ||
291 | /* All cards except Rev 0 and 1 MC16's have 256kb of RAM */ | 269 | /* All cards except Rev 0 and 1 MC16's have 256kb of RAM */ |
292 | if ((card->cardtype == 0x08) && (card->cardrev <= 0x01)) | 270 | if ((card->cardtype == 0x08) && (card->cardrev <= 0x01)) |
293 | card->ramsize = 128; | 271 | card->ramsize = 128; |
294 | else | 272 | else |
295 | card->ramsize = 256; | 273 | card->ramsize = 256; |
296 | 274 | ||
297 | printk("%s: %s Rev %d at 0x%04lx IRQ %d\n", | 275 | printk("%s: %s Rev %d at 0x%04lx IRQ %d\n", |
298 | dev->name, | 276 | dev->name, |
299 | (card->cardtype == 0x08)?MADGEMC16_CARDNAME: | 277 | (card->cardtype == 0x08)?MADGEMC16_CARDNAME: |
300 | MADGEMC32_CARDNAME, card->cardrev, | 278 | MADGEMC32_CARDNAME, card->cardrev, |
301 | dev->base_addr, dev->irq); | 279 | dev->base_addr, dev->irq); |
302 | 280 | ||
303 | if (card->cardtype == 0x0d) | 281 | if (card->cardtype == 0x0d) |
304 | printk("%s: Warning: MC32 support is experimental and highly untested\n", dev->name); | 282 | printk("%s: Warning: MC32 support is experimental and highly untested\n", dev->name); |
305 | 283 | ||
306 | if (card->ringspeed==2) { /* Unknown */ | 284 | if (card->ringspeed==2) { /* Unknown */ |
307 | printk("%s: Warning: Ring speed not set in POS -- Please run the reference disk and set it!\n", dev->name); | 285 | printk("%s: Warning: Ring speed not set in POS -- Please run the reference disk and set it!\n", dev->name); |
308 | card->ringspeed = 1; /* default to 16mb */ | 286 | card->ringspeed = 1; /* default to 16mb */ |
309 | } | 287 | } |
310 | 288 | ||
311 | printk("%s: RAM Size: %dKB\n", dev->name, card->ramsize); | 289 | printk("%s: RAM Size: %dKB\n", dev->name, card->ramsize); |
312 | 290 | ||
313 | printk("%s: Ring Speed: %dMb/sec on %s\n", dev->name, | 291 | printk("%s: Ring Speed: %dMb/sec on %s\n", dev->name, |
314 | (card->ringspeed)?16:4, | 292 | (card->ringspeed)?16:4, |
315 | card->cabletype?"STP/DB9":"UTP/RJ-45"); | 293 | card->cabletype?"STP/DB9":"UTP/RJ-45"); |
316 | printk("%s: Arbitration Level: %d\n", dev->name, | 294 | printk("%s: Arbitration Level: %d\n", dev->name, |
317 | card->arblevel); | 295 | card->arblevel); |
318 | 296 | ||
319 | printk("%s: Burst Mode: ", dev->name); | 297 | printk("%s: Burst Mode: ", dev->name); |
320 | switch(card->burstmode) { | 298 | switch(card->burstmode) { |
321 | case 0: printk("Cycle steal"); break; | 299 | case 0: printk("Cycle steal"); break; |
322 | case 1: printk("Limited burst"); break; | 300 | case 1: printk("Limited burst"); break; |
323 | case 2: printk("Delayed release"); break; | 301 | case 2: printk("Delayed release"); break; |
324 | case 3: printk("Immediate release"); break; | 302 | case 3: printk("Immediate release"); break; |
325 | } | 303 | } |
326 | printk(" (%s)\n", (card->fairness)?"Unfair":"Fair"); | 304 | printk(" (%s)\n", (card->fairness)?"Unfair":"Fair"); |
327 | |||
328 | |||
329 | /* | ||
330 | * Enable SIF before we assign the interrupt handler, | ||
331 | * just in case we get spurious interrupts that need | ||
332 | * handling. | ||
333 | */ | ||
334 | outb(0, dev->base_addr + MC_CONTROL_REG0); /* sanity */ | ||
335 | madgemc_setsifsel(dev, 1); | ||
336 | if (request_irq(dev->irq, madgemc_interrupt, SA_SHIRQ, | ||
337 | "madgemc", dev)) | ||
338 | goto getout; | ||
339 | |||
340 | madgemc_chipset_init(dev); /* enables interrupts! */ | ||
341 | madgemc_setcabletype(dev, card->cabletype); | ||
342 | 305 | ||
343 | /* Setup MCA structures */ | ||
344 | mca_set_adapter_name(slot, (card->cardtype == 0x08)?MADGEMC16_CARDNAME:MADGEMC32_CARDNAME); | ||
345 | mca_set_adapter_procfn(slot, madgemc_mcaproc, dev); | ||
346 | mca_mark_as_used(slot); | ||
347 | 306 | ||
348 | printk("%s: Ring Station Address: ", dev->name); | 307 | /* |
349 | printk("%2.2x", dev->dev_addr[0]); | 308 | * Enable SIF before we assign the interrupt handler, |
350 | for (i = 1; i < 6; i++) | 309 | * just in case we get spurious interrupts that need |
351 | printk(":%2.2x", dev->dev_addr[i]); | 310 | * handling. |
352 | printk("\n"); | 311 | */ |
353 | 312 | outb(0, dev->base_addr + MC_CONTROL_REG0); /* sanity */ | |
354 | /* XXX is ISA_MAX_ADDRESS correct here? */ | 313 | madgemc_setsifsel(dev, 1); |
355 | if (tmsdev_init(dev, ISA_MAX_ADDRESS, NULL)) { | 314 | if (request_irq(dev->irq, madgemc_interrupt, SA_SHIRQ, |
356 | printk("%s: unable to get memory for dev->priv.\n", | 315 | "madgemc", dev)) { |
357 | dev->name); | 316 | ret = -EBUSY; |
358 | release_region(dev->base_addr-MADGEMC_SIF_OFFSET, | 317 | goto getout3; |
359 | MADGEMC_IO_EXTENT); | ||
360 | |||
361 | kfree(card); | ||
362 | tmsdev_term(dev); | ||
363 | free_netdev(dev); | ||
364 | if (madgemc_card_list) | ||
365 | return 0; | ||
366 | return -1; | ||
367 | } | ||
368 | tp = netdev_priv(dev); | ||
369 | |||
370 | /* | ||
371 | * The MC16 is physically a 32bit card. However, Madge | ||
372 | * insists on calling it 16bit, so I'll assume here that | ||
373 | * they know what they're talking about. Cut off DMA | ||
374 | * at 16mb. | ||
375 | */ | ||
376 | tp->setnselout = madgemc_setnselout_pins; | ||
377 | tp->sifwriteb = madgemc_sifwriteb; | ||
378 | tp->sifreadb = madgemc_sifreadb; | ||
379 | tp->sifwritew = madgemc_sifwritew; | ||
380 | tp->sifreadw = madgemc_sifreadw; | ||
381 | tp->DataRate = (card->ringspeed)?SPEED_16:SPEED_4; | ||
382 | |||
383 | memcpy(tp->ProductID, "Madge MCA 16/4 ", PROD_ID_SIZE + 1); | ||
384 | |||
385 | dev->open = madgemc_open; | ||
386 | dev->stop = madgemc_close; | ||
387 | |||
388 | if (register_netdev(dev) == 0) { | ||
389 | /* Enlist in the card list */ | ||
390 | card->next = madgemc_card_list; | ||
391 | madgemc_card_list = card; | ||
392 | slot++; | ||
393 | continue; /* successful, try to find another */ | ||
394 | } | ||
395 | |||
396 | free_irq(dev->irq, dev); | ||
397 | getout: | ||
398 | release_region(dev->base_addr-MADGEMC_SIF_OFFSET, | ||
399 | MADGEMC_IO_EXTENT); | ||
400 | getout1: | ||
401 | kfree(card); | ||
402 | free_netdev(dev); | ||
403 | slot++; | ||
404 | } | 318 | } |
405 | 319 | ||
406 | if (madgemc_card_list) | 320 | madgemc_chipset_init(dev); /* enables interrupts! */ |
321 | madgemc_setcabletype(dev, card->cabletype); | ||
322 | |||
323 | /* Setup MCA structures */ | ||
324 | mca_device_set_name(mdev, (card->cardtype == 0x08)?MADGEMC16_CARDNAME:MADGEMC32_CARDNAME); | ||
325 | mca_set_adapter_procfn(mdev->slot, madgemc_mcaproc, dev); | ||
326 | |||
327 | printk("%s: Ring Station Address: ", dev->name); | ||
328 | printk("%2.2x", dev->dev_addr[0]); | ||
329 | for (i = 1; i < 6; i++) | ||
330 | printk(":%2.2x", dev->dev_addr[i]); | ||
331 | printk("\n"); | ||
332 | |||
333 | if (tmsdev_init(dev, device)) { | ||
334 | printk("%s: unable to get memory for dev->priv.\n", | ||
335 | dev->name); | ||
336 | ret = -ENOMEM; | ||
337 | goto getout4; | ||
338 | } | ||
339 | tp = netdev_priv(dev); | ||
340 | |||
341 | /* | ||
342 | * The MC16 is physically a 32bit card. However, Madge | ||
343 | * insists on calling it 16bit, so I'll assume here that | ||
344 | * they know what they're talking about. Cut off DMA | ||
345 | * at 16mb. | ||
346 | */ | ||
347 | tp->setnselout = madgemc_setnselout_pins; | ||
348 | tp->sifwriteb = madgemc_sifwriteb; | ||
349 | tp->sifreadb = madgemc_sifreadb; | ||
350 | tp->sifwritew = madgemc_sifwritew; | ||
351 | tp->sifreadw = madgemc_sifreadw; | ||
352 | tp->DataRate = (card->ringspeed)?SPEED_16:SPEED_4; | ||
353 | |||
354 | memcpy(tp->ProductID, "Madge MCA 16/4 ", PROD_ID_SIZE + 1); | ||
355 | |||
356 | dev->open = madgemc_open; | ||
357 | dev->stop = madgemc_close; | ||
358 | |||
359 | tp->tmspriv = card; | ||
360 | dev_set_drvdata(device, dev); | ||
361 | |||
362 | if (register_netdev(dev) == 0) | ||
407 | return 0; | 363 | return 0; |
408 | return -1; | 364 | |
365 | dev_set_drvdata(device, NULL); | ||
366 | ret = -ENOMEM; | ||
367 | getout4: | ||
368 | free_irq(dev->irq, dev); | ||
369 | getout3: | ||
370 | release_region(dev->base_addr-MADGEMC_SIF_OFFSET, | ||
371 | MADGEMC_IO_EXTENT); | ||
372 | getout2: | ||
373 | kfree(card); | ||
374 | getout1: | ||
375 | free_netdev(dev); | ||
376 | getout: | ||
377 | mca_device_set_claim(mdev, 0); | ||
378 | return ret; | ||
409 | } | 379 | } |
410 | 380 | ||
411 | /* | 381 | /* |
@@ -664,12 +634,12 @@ static void madgemc_chipset_close(struct net_device *dev) | |||
664 | * is complete. | 634 | * is complete. |
665 | * | 635 | * |
666 | */ | 636 | */ |
667 | static void madgemc_read_rom(struct madgemc_card *card) | 637 | static void madgemc_read_rom(struct net_device *dev, struct card_info *card) |
668 | { | 638 | { |
669 | unsigned long ioaddr; | 639 | unsigned long ioaddr; |
670 | unsigned char reg0, reg1, tmpreg0, i; | 640 | unsigned char reg0, reg1, tmpreg0, i; |
671 | 641 | ||
672 | ioaddr = card->dev->base_addr; | 642 | ioaddr = dev->base_addr; |
673 | 643 | ||
674 | reg0 = inb(ioaddr + MC_CONTROL_REG0); | 644 | reg0 = inb(ioaddr + MC_CONTROL_REG0); |
675 | reg1 = inb(ioaddr + MC_CONTROL_REG1); | 645 | reg1 = inb(ioaddr + MC_CONTROL_REG1); |
@@ -686,9 +656,9 @@ static void madgemc_read_rom(struct madgemc_card *card) | |||
686 | outb(tmpreg0 | MC_CONTROL_REG0_PAGE, ioaddr + MC_CONTROL_REG0); | 656 | outb(tmpreg0 | MC_CONTROL_REG0_PAGE, ioaddr + MC_CONTROL_REG0); |
687 | 657 | ||
688 | /* Read BIA */ | 658 | /* Read BIA */ |
689 | card->dev->addr_len = 6; | 659 | dev->addr_len = 6; |
690 | for (i = 0; i < 6; i++) | 660 | for (i = 0; i < 6; i++) |
691 | card->dev->dev_addr[i] = inb(ioaddr + MC_ROM_BIA_START + i); | 661 | dev->dev_addr[i] = inb(ioaddr + MC_ROM_BIA_START + i); |
692 | 662 | ||
693 | /* Restore original register values */ | 663 | /* Restore original register values */ |
694 | outb(reg0, ioaddr + MC_CONTROL_REG0); | 664 | outb(reg0, ioaddr + MC_CONTROL_REG0); |
@@ -721,14 +691,10 @@ static int madgemc_close(struct net_device *dev) | |||
721 | static int madgemc_mcaproc(char *buf, int slot, void *d) | 691 | static int madgemc_mcaproc(char *buf, int slot, void *d) |
722 | { | 692 | { |
723 | struct net_device *dev = (struct net_device *)d; | 693 | struct net_device *dev = (struct net_device *)d; |
724 | struct madgemc_card *curcard = madgemc_card_list; | 694 | struct net_local *tp = dev->priv; |
695 | struct card_info *curcard = tp->tmspriv; | ||
725 | int len = 0; | 696 | int len = 0; |
726 | 697 | ||
727 | while (curcard) { /* search for card struct */ | ||
728 | if (curcard->dev == dev) | ||
729 | break; | ||
730 | curcard = curcard->next; | ||
731 | } | ||
732 | len += sprintf(buf+len, "-------\n"); | 698 | len += sprintf(buf+len, "-------\n"); |
733 | if (curcard) { | 699 | if (curcard) { |
734 | struct net_local *tp = netdev_priv(dev); | 700 | struct net_local *tp = netdev_priv(dev); |
@@ -763,25 +729,56 @@ static int madgemc_mcaproc(char *buf, int slot, void *d) | |||
763 | return len; | 729 | return len; |
764 | } | 730 | } |
765 | 731 | ||
766 | static void __exit madgemc_exit(void) | 732 | static int __devexit madgemc_remove(struct device *device) |
767 | { | 733 | { |
768 | struct net_device *dev; | 734 | struct net_device *dev = dev_get_drvdata(device); |
769 | struct madgemc_card *this_card; | 735 | struct net_local *tp; |
770 | 736 | struct card_info *card; | |
771 | while (madgemc_card_list) { | 737 | |
772 | dev = madgemc_card_list->dev; | 738 | if (!dev) |
773 | unregister_netdev(dev); | 739 | BUG(); |
774 | release_region(dev->base_addr-MADGEMC_SIF_OFFSET, MADGEMC_IO_EXTENT); | 740 | |
775 | free_irq(dev->irq, dev); | 741 | tp = dev->priv; |
776 | tmsdev_term(dev); | 742 | card = tp->tmspriv; |
777 | free_netdev(dev); | 743 | kfree(card); |
778 | this_card = madgemc_card_list; | 744 | tp->tmspriv = NULL; |
779 | madgemc_card_list = this_card->next; | 745 | |
780 | kfree(this_card); | 746 | unregister_netdev(dev); |
781 | } | 747 | release_region(dev->base_addr-MADGEMC_SIF_OFFSET, MADGEMC_IO_EXTENT); |
748 | free_irq(dev->irq, dev); | ||
749 | tmsdev_term(dev); | ||
750 | free_netdev(dev); | ||
751 | dev_set_drvdata(device, NULL); | ||
752 | |||
753 | return 0; | ||
754 | } | ||
755 | |||
756 | static short madgemc_adapter_ids[] __initdata = { | ||
757 | 0x002d, | ||
758 | 0x0000 | ||
759 | }; | ||
760 | |||
761 | static struct mca_driver madgemc_driver = { | ||
762 | .id_table = madgemc_adapter_ids, | ||
763 | .driver = { | ||
764 | .name = "madgemc", | ||
765 | .bus = &mca_bus_type, | ||
766 | .probe = madgemc_probe, | ||
767 | .remove = __devexit_p(madgemc_remove), | ||
768 | }, | ||
769 | }; | ||
770 | |||
771 | static int __init madgemc_init (void) | ||
772 | { | ||
773 | return mca_register_driver (&madgemc_driver); | ||
774 | } | ||
775 | |||
776 | static void __exit madgemc_exit (void) | ||
777 | { | ||
778 | mca_unregister_driver (&madgemc_driver); | ||
782 | } | 779 | } |
783 | 780 | ||
784 | module_init(madgemc_probe); | 781 | module_init(madgemc_init); |
785 | module_exit(madgemc_exit); | 782 | module_exit(madgemc_exit); |
786 | 783 | ||
787 | MODULE_LICENSE("GPL"); | 784 | MODULE_LICENSE("GPL"); |
diff --git a/drivers/net/tokenring/proteon.c b/drivers/net/tokenring/proteon.c index 40ad0fde28af..eb1423ede75c 100644 --- a/drivers/net/tokenring/proteon.c +++ b/drivers/net/tokenring/proteon.c | |||
@@ -62,8 +62,7 @@ static int dmalist[] __initdata = { | |||
62 | }; | 62 | }; |
63 | 63 | ||
64 | static char cardname[] = "Proteon 1392\0"; | 64 | static char cardname[] = "Proteon 1392\0"; |
65 | 65 | static u64 dma_mask = ISA_MAX_ADDRESS; | |
66 | struct net_device *proteon_probe(int unit); | ||
67 | static int proteon_open(struct net_device *dev); | 66 | static int proteon_open(struct net_device *dev); |
68 | static void proteon_read_eeprom(struct net_device *dev); | 67 | static void proteon_read_eeprom(struct net_device *dev); |
69 | static unsigned short proteon_setnselout_pins(struct net_device *dev); | 68 | static unsigned short proteon_setnselout_pins(struct net_device *dev); |
@@ -116,7 +115,7 @@ nodev: | |||
116 | return -ENODEV; | 115 | return -ENODEV; |
117 | } | 116 | } |
118 | 117 | ||
119 | static int __init setup_card(struct net_device *dev) | 118 | static int __init setup_card(struct net_device *dev, struct device *pdev) |
120 | { | 119 | { |
121 | struct net_local *tp; | 120 | struct net_local *tp; |
122 | static int versionprinted; | 121 | static int versionprinted; |
@@ -137,7 +136,7 @@ static int __init setup_card(struct net_device *dev) | |||
137 | } | 136 | } |
138 | } | 137 | } |
139 | if (err) | 138 | if (err) |
140 | goto out4; | 139 | goto out5; |
141 | 140 | ||
142 | /* At this point we have found a valid card. */ | 141 | /* At this point we have found a valid card. */ |
143 | 142 | ||
@@ -145,14 +144,15 @@ static int __init setup_card(struct net_device *dev) | |||
145 | printk(KERN_DEBUG "%s", version); | 144 | printk(KERN_DEBUG "%s", version); |
146 | 145 | ||
147 | err = -EIO; | 146 | err = -EIO; |
148 | if (tmsdev_init(dev, ISA_MAX_ADDRESS, NULL)) | 147 | pdev->dma_mask = &dma_mask; |
148 | if (tmsdev_init(dev, pdev)) | ||
149 | goto out4; | 149 | goto out4; |
150 | 150 | ||
151 | dev->base_addr &= ~3; | 151 | dev->base_addr &= ~3; |
152 | 152 | ||
153 | proteon_read_eeprom(dev); | 153 | proteon_read_eeprom(dev); |
154 | 154 | ||
155 | printk(KERN_DEBUG "%s: Ring Station Address: ", dev->name); | 155 | printk(KERN_DEBUG "proteon.c: Ring Station Address: "); |
156 | printk("%2.2x", dev->dev_addr[0]); | 156 | printk("%2.2x", dev->dev_addr[0]); |
157 | for (j = 1; j < 6; j++) | 157 | for (j = 1; j < 6; j++) |
158 | printk(":%2.2x", dev->dev_addr[j]); | 158 | printk(":%2.2x", dev->dev_addr[j]); |
@@ -185,7 +185,7 @@ static int __init setup_card(struct net_device *dev) | |||
185 | 185 | ||
186 | if(irqlist[j] == 0) | 186 | if(irqlist[j] == 0) |
187 | { | 187 | { |
188 | printk(KERN_INFO "%s: AutoSelect no IRQ available\n", dev->name); | 188 | printk(KERN_INFO "proteon.c: AutoSelect no IRQ available\n"); |
189 | goto out3; | 189 | goto out3; |
190 | } | 190 | } |
191 | } | 191 | } |
@@ -196,15 +196,15 @@ static int __init setup_card(struct net_device *dev) | |||
196 | break; | 196 | break; |
197 | if (irqlist[j] == 0) | 197 | if (irqlist[j] == 0) |
198 | { | 198 | { |
199 | printk(KERN_INFO "%s: Illegal IRQ %d specified\n", | 199 | printk(KERN_INFO "proteon.c: Illegal IRQ %d specified\n", |
200 | dev->name, dev->irq); | 200 | dev->irq); |
201 | goto out3; | 201 | goto out3; |
202 | } | 202 | } |
203 | if (request_irq(dev->irq, tms380tr_interrupt, 0, | 203 | if (request_irq(dev->irq, tms380tr_interrupt, 0, |
204 | cardname, dev)) | 204 | cardname, dev)) |
205 | { | 205 | { |
206 | printk(KERN_INFO "%s: Selected IRQ %d not available\n", | 206 | printk(KERN_INFO "proteon.c: Selected IRQ %d not available\n", |
207 | dev->name, dev->irq); | 207 | dev->irq); |
208 | goto out3; | 208 | goto out3; |
209 | } | 209 | } |
210 | } | 210 | } |
@@ -220,7 +220,7 @@ static int __init setup_card(struct net_device *dev) | |||
220 | 220 | ||
221 | if(dmalist[j] == 0) | 221 | if(dmalist[j] == 0) |
222 | { | 222 | { |
223 | printk(KERN_INFO "%s: AutoSelect no DMA available\n", dev->name); | 223 | printk(KERN_INFO "proteon.c: AutoSelect no DMA available\n"); |
224 | goto out2; | 224 | goto out2; |
225 | } | 225 | } |
226 | } | 226 | } |
@@ -231,25 +231,25 @@ static int __init setup_card(struct net_device *dev) | |||
231 | break; | 231 | break; |
232 | if (dmalist[j] == 0) | 232 | if (dmalist[j] == 0) |
233 | { | 233 | { |
234 | printk(KERN_INFO "%s: Illegal DMA %d specified\n", | 234 | printk(KERN_INFO "proteon.c: Illegal DMA %d specified\n", |
235 | dev->name, dev->dma); | 235 | dev->dma); |
236 | goto out2; | 236 | goto out2; |
237 | } | 237 | } |
238 | if (request_dma(dev->dma, cardname)) | 238 | if (request_dma(dev->dma, cardname)) |
239 | { | 239 | { |
240 | printk(KERN_INFO "%s: Selected DMA %d not available\n", | 240 | printk(KERN_INFO "proteon.c: Selected DMA %d not available\n", |
241 | dev->name, dev->dma); | 241 | dev->dma); |
242 | goto out2; | 242 | goto out2; |
243 | } | 243 | } |
244 | } | 244 | } |
245 | 245 | ||
246 | printk(KERN_DEBUG "%s: IO: %#4lx IRQ: %d DMA: %d\n", | ||
247 | dev->name, dev->base_addr, dev->irq, dev->dma); | ||
248 | |||
249 | err = register_netdev(dev); | 246 | err = register_netdev(dev); |
250 | if (err) | 247 | if (err) |
251 | goto out; | 248 | goto out; |
252 | 249 | ||
250 | printk(KERN_DEBUG "%s: IO: %#4lx IRQ: %d DMA: %d\n", | ||
251 | dev->name, dev->base_addr, dev->irq, dev->dma); | ||
252 | |||
253 | return 0; | 253 | return 0; |
254 | out: | 254 | out: |
255 | free_dma(dev->dma); | 255 | free_dma(dev->dma); |
@@ -258,34 +258,11 @@ out2: | |||
258 | out3: | 258 | out3: |
259 | tmsdev_term(dev); | 259 | tmsdev_term(dev); |
260 | out4: | 260 | out4: |
261 | release_region(dev->base_addr, PROTEON_IO_EXTENT); | 261 | release_region(dev->base_addr, PROTEON_IO_EXTENT); |
262 | out5: | ||
262 | return err; | 263 | return err; |
263 | } | 264 | } |
264 | 265 | ||
265 | struct net_device * __init proteon_probe(int unit) | ||
266 | { | ||
267 | struct net_device *dev = alloc_trdev(sizeof(struct net_local)); | ||
268 | int err = 0; | ||
269 | |||
270 | if (!dev) | ||
271 | return ERR_PTR(-ENOMEM); | ||
272 | |||
273 | if (unit >= 0) { | ||
274 | sprintf(dev->name, "tr%d", unit); | ||
275 | netdev_boot_setup_check(dev); | ||
276 | } | ||
277 | |||
278 | err = setup_card(dev); | ||
279 | if (err) | ||
280 | goto out; | ||
281 | |||
282 | return dev; | ||
283 | |||
284 | out: | ||
285 | free_netdev(dev); | ||
286 | return ERR_PTR(err); | ||
287 | } | ||
288 | |||
289 | /* | 266 | /* |
290 | * Reads MAC address from adapter RAM, which should've read it from | 267 | * Reads MAC address from adapter RAM, which should've read it from |
291 | * the onboard ROM. | 268 | * the onboard ROM. |
@@ -352,8 +329,6 @@ static int proteon_open(struct net_device *dev) | |||
352 | return tms380tr_open(dev); | 329 | return tms380tr_open(dev); |
353 | } | 330 | } |
354 | 331 | ||
355 | #ifdef MODULE | ||
356 | |||
357 | #define ISATR_MAX_ADAPTERS 3 | 332 | #define ISATR_MAX_ADAPTERS 3 |
358 | 333 | ||
359 | static int io[ISATR_MAX_ADAPTERS]; | 334 | static int io[ISATR_MAX_ADAPTERS]; |
@@ -366,13 +341,23 @@ module_param_array(io, int, NULL, 0); | |||
366 | module_param_array(irq, int, NULL, 0); | 341 | module_param_array(irq, int, NULL, 0); |
367 | module_param_array(dma, int, NULL, 0); | 342 | module_param_array(dma, int, NULL, 0); |
368 | 343 | ||
369 | static struct net_device *proteon_dev[ISATR_MAX_ADAPTERS]; | 344 | static struct platform_device *proteon_dev[ISATR_MAX_ADAPTERS]; |
345 | |||
346 | static struct device_driver proteon_driver = { | ||
347 | .name = "proteon", | ||
348 | .bus = &platform_bus_type, | ||
349 | }; | ||
370 | 350 | ||
371 | int init_module(void) | 351 | static int __init proteon_init(void) |
372 | { | 352 | { |
373 | struct net_device *dev; | 353 | struct net_device *dev; |
354 | struct platform_device *pdev; | ||
374 | int i, num = 0, err = 0; | 355 | int i, num = 0, err = 0; |
375 | 356 | ||
357 | err = driver_register(&proteon_driver); | ||
358 | if (err) | ||
359 | return err; | ||
360 | |||
376 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { | 361 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { |
377 | dev = alloc_trdev(sizeof(struct net_local)); | 362 | dev = alloc_trdev(sizeof(struct net_local)); |
378 | if (!dev) | 363 | if (!dev) |
@@ -381,11 +366,15 @@ int init_module(void) | |||
381 | dev->base_addr = io[i]; | 366 | dev->base_addr = io[i]; |
382 | dev->irq = irq[i]; | 367 | dev->irq = irq[i]; |
383 | dev->dma = dma[i]; | 368 | dev->dma = dma[i]; |
384 | err = setup_card(dev); | 369 | pdev = platform_device_register_simple("proteon", |
370 | i, NULL, 0); | ||
371 | err = setup_card(dev, &pdev->dev); | ||
385 | if (!err) { | 372 | if (!err) { |
386 | proteon_dev[i] = dev; | 373 | proteon_dev[i] = pdev; |
374 | dev_set_drvdata(&pdev->dev, dev); | ||
387 | ++num; | 375 | ++num; |
388 | } else { | 376 | } else { |
377 | platform_device_unregister(pdev); | ||
389 | free_netdev(dev); | 378 | free_netdev(dev); |
390 | } | 379 | } |
391 | } | 380 | } |
@@ -399,23 +388,28 @@ int init_module(void) | |||
399 | return (0); | 388 | return (0); |
400 | } | 389 | } |
401 | 390 | ||
402 | void cleanup_module(void) | 391 | static void __exit proteon_cleanup(void) |
403 | { | 392 | { |
393 | struct net_device *dev; | ||
404 | int i; | 394 | int i; |
405 | 395 | ||
406 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { | 396 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { |
407 | struct net_device *dev = proteon_dev[i]; | 397 | struct platform_device *pdev = proteon_dev[i]; |
408 | 398 | ||
409 | if (!dev) | 399 | if (!pdev) |
410 | continue; | 400 | continue; |
411 | 401 | dev = dev_get_drvdata(&pdev->dev); | |
412 | unregister_netdev(dev); | 402 | unregister_netdev(dev); |
413 | release_region(dev->base_addr, PROTEON_IO_EXTENT); | 403 | release_region(dev->base_addr, PROTEON_IO_EXTENT); |
414 | free_irq(dev->irq, dev); | 404 | free_irq(dev->irq, dev); |
415 | free_dma(dev->dma); | 405 | free_dma(dev->dma); |
416 | tmsdev_term(dev); | 406 | tmsdev_term(dev); |
417 | free_netdev(dev); | 407 | free_netdev(dev); |
408 | dev_set_drvdata(&pdev->dev, NULL); | ||
409 | platform_device_unregister(pdev); | ||
418 | } | 410 | } |
411 | driver_unregister(&proteon_driver); | ||
419 | } | 412 | } |
420 | #endif /* MODULE */ | ||
421 | 413 | ||
414 | module_init(proteon_init); | ||
415 | module_exit(proteon_cleanup); | ||
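The proteon module init/exit above now pairs each probed board with a platform device so the driver model owns its lifetime: init registers a struct device_driver on the platform bus, registers one platform_device per card, and stashes the net_device with dev_set_drvdata() so exit can find and tear it down again. A condensed sketch of that pairing for a single board; the example_ names are placeholders, error handling is reduced, and the surrounding driver's declarations (struct net_local, alloc_trdev) are assumed:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/netdevice.h>

static struct device_driver example_driver = {
        .name = "example",
        .bus  = &platform_bus_type,
};

static struct platform_device *example_pdev;

static int __init example_init(void)
{
        struct net_device *dev;
        int err;

        err = driver_register(&example_driver);
        if (err)
                return err;

        dev = alloc_trdev(sizeof(struct net_local));
        if (!dev) {
                driver_unregister(&example_driver);
                return -ENOMEM;
        }

        example_pdev = platform_device_register_simple("example", 0, NULL, 0);
        if (IS_ERR(example_pdev)) {
                free_netdev(dev);
                driver_unregister(&example_driver);
                return PTR_ERR(example_pdev);
        }

        /* setup_card(dev, &example_pdev->dev) would claim IO/IRQ/DMA and
         * call register_netdev() here. */
        dev_set_drvdata(&example_pdev->dev, dev);       /* exit path finds dev again */
        return 0;
}

static void __exit example_exit(void)
{
        struct net_device *dev = dev_get_drvdata(&example_pdev->dev);

        unregister_netdev(dev);
        /* release the IO region, IRQ and DMA channel here, as proteon_cleanup does */
        free_netdev(dev);
        dev_set_drvdata(&example_pdev->dev, NULL);
        platform_device_unregister(example_pdev);
        driver_unregister(&example_driver);
}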
diff --git a/drivers/net/tokenring/skisa.c b/drivers/net/tokenring/skisa.c index f26796e2d0e5..3c7c66204f74 100644 --- a/drivers/net/tokenring/skisa.c +++ b/drivers/net/tokenring/skisa.c | |||
@@ -68,8 +68,7 @@ static int dmalist[] __initdata = { | |||
68 | }; | 68 | }; |
69 | 69 | ||
70 | static char isa_cardname[] = "SK NET TR 4/16 ISA\0"; | 70 | static char isa_cardname[] = "SK NET TR 4/16 ISA\0"; |
71 | 71 | static u64 dma_mask = ISA_MAX_ADDRESS; | |
72 | struct net_device *sk_isa_probe(int unit); | ||
73 | static int sk_isa_open(struct net_device *dev); | 72 | static int sk_isa_open(struct net_device *dev); |
74 | static void sk_isa_read_eeprom(struct net_device *dev); | 73 | static void sk_isa_read_eeprom(struct net_device *dev); |
75 | static unsigned short sk_isa_setnselout_pins(struct net_device *dev); | 74 | static unsigned short sk_isa_setnselout_pins(struct net_device *dev); |
@@ -133,7 +132,7 @@ static int __init sk_isa_probe1(struct net_device *dev, int ioaddr) | |||
133 | return 0; | 132 | return 0; |
134 | } | 133 | } |
135 | 134 | ||
136 | static int __init setup_card(struct net_device *dev) | 135 | static int __init setup_card(struct net_device *dev, struct device *pdev) |
137 | { | 136 | { |
138 | struct net_local *tp; | 137 | struct net_local *tp; |
139 | static int versionprinted; | 138 | static int versionprinted; |
@@ -154,7 +153,7 @@ static int __init setup_card(struct net_device *dev) | |||
154 | } | 153 | } |
155 | } | 154 | } |
156 | if (err) | 155 | if (err) |
157 | goto out4; | 156 | goto out5; |
158 | 157 | ||
159 | /* At this point we have found a valid card. */ | 158 | /* At this point we have found a valid card. */ |
160 | 159 | ||
@@ -162,14 +161,15 @@ static int __init setup_card(struct net_device *dev) | |||
162 | printk(KERN_DEBUG "%s", version); | 161 | printk(KERN_DEBUG "%s", version); |
163 | 162 | ||
164 | err = -EIO; | 163 | err = -EIO; |
165 | if (tmsdev_init(dev, ISA_MAX_ADDRESS, NULL)) | 164 | pdev->dma_mask = &dma_mask; |
165 | if (tmsdev_init(dev, pdev)) | ||
166 | goto out4; | 166 | goto out4; |
167 | 167 | ||
168 | dev->base_addr &= ~3; | 168 | dev->base_addr &= ~3; |
169 | 169 | ||
170 | sk_isa_read_eeprom(dev); | 170 | sk_isa_read_eeprom(dev); |
171 | 171 | ||
172 | printk(KERN_DEBUG "%s: Ring Station Address: ", dev->name); | 172 | printk(KERN_DEBUG "skisa.c: Ring Station Address: "); |
173 | printk("%2.2x", dev->dev_addr[0]); | 173 | printk("%2.2x", dev->dev_addr[0]); |
174 | for (j = 1; j < 6; j++) | 174 | for (j = 1; j < 6; j++) |
175 | printk(":%2.2x", dev->dev_addr[j]); | 175 | printk(":%2.2x", dev->dev_addr[j]); |
@@ -202,7 +202,7 @@ static int __init setup_card(struct net_device *dev) | |||
202 | 202 | ||
203 | if(irqlist[j] == 0) | 203 | if(irqlist[j] == 0) |
204 | { | 204 | { |
205 | printk(KERN_INFO "%s: AutoSelect no IRQ available\n", dev->name); | 205 | printk(KERN_INFO "skisa.c: AutoSelect no IRQ available\n"); |
206 | goto out3; | 206 | goto out3; |
207 | } | 207 | } |
208 | } | 208 | } |
@@ -213,15 +213,15 @@ static int __init setup_card(struct net_device *dev) | |||
213 | break; | 213 | break; |
214 | if (irqlist[j] == 0) | 214 | if (irqlist[j] == 0) |
215 | { | 215 | { |
216 | printk(KERN_INFO "%s: Illegal IRQ %d specified\n", | 216 | printk(KERN_INFO "skisa.c: Illegal IRQ %d specified\n", |
217 | dev->name, dev->irq); | 217 | dev->irq); |
218 | goto out3; | 218 | goto out3; |
219 | } | 219 | } |
220 | if (request_irq(dev->irq, tms380tr_interrupt, 0, | 220 | if (request_irq(dev->irq, tms380tr_interrupt, 0, |
221 | isa_cardname, dev)) | 221 | isa_cardname, dev)) |
222 | { | 222 | { |
223 | printk(KERN_INFO "%s: Selected IRQ %d not available\n", | 223 | printk(KERN_INFO "skisa.c: Selected IRQ %d not available\n", |
224 | dev->name, dev->irq); | 224 | dev->irq); |
225 | goto out3; | 225 | goto out3; |
226 | } | 226 | } |
227 | } | 227 | } |
@@ -237,7 +237,7 @@ static int __init setup_card(struct net_device *dev) | |||
237 | 237 | ||
238 | if(dmalist[j] == 0) | 238 | if(dmalist[j] == 0) |
239 | { | 239 | { |
240 | printk(KERN_INFO "%s: AutoSelect no DMA available\n", dev->name); | 240 | printk(KERN_INFO "skisa.c: AutoSelect no DMA available\n"); |
241 | goto out2; | 241 | goto out2; |
242 | } | 242 | } |
243 | } | 243 | } |
@@ -248,25 +248,25 @@ static int __init setup_card(struct net_device *dev) | |||
248 | break; | 248 | break; |
249 | if (dmalist[j] == 0) | 249 | if (dmalist[j] == 0) |
250 | { | 250 | { |
251 | printk(KERN_INFO "%s: Illegal DMA %d specified\n", | 251 | printk(KERN_INFO "skisa.c: Illegal DMA %d specified\n", |
252 | dev->name, dev->dma); | 252 | dev->dma); |
253 | goto out2; | 253 | goto out2; |
254 | } | 254 | } |
255 | if (request_dma(dev->dma, isa_cardname)) | 255 | if (request_dma(dev->dma, isa_cardname)) |
256 | { | 256 | { |
257 | printk(KERN_INFO "%s: Selected DMA %d not available\n", | 257 | printk(KERN_INFO "skisa.c: Selected DMA %d not available\n", |
258 | dev->name, dev->dma); | 258 | dev->dma); |
259 | goto out2; | 259 | goto out2; |
260 | } | 260 | } |
261 | } | 261 | } |
262 | 262 | ||
263 | printk(KERN_DEBUG "%s: IO: %#4lx IRQ: %d DMA: %d\n", | ||
264 | dev->name, dev->base_addr, dev->irq, dev->dma); | ||
265 | |||
266 | err = register_netdev(dev); | 263 | err = register_netdev(dev); |
267 | if (err) | 264 | if (err) |
268 | goto out; | 265 | goto out; |
269 | 266 | ||
267 | printk(KERN_DEBUG "%s: IO: %#4lx IRQ: %d DMA: %d\n", | ||
268 | dev->name, dev->base_addr, dev->irq, dev->dma); | ||
269 | |||
270 | return 0; | 270 | return 0; |
271 | out: | 271 | out: |
272 | free_dma(dev->dma); | 272 | free_dma(dev->dma); |
@@ -275,33 +275,11 @@ out2: | |||
275 | out3: | 275 | out3: |
276 | tmsdev_term(dev); | 276 | tmsdev_term(dev); |
277 | out4: | 277 | out4: |
278 | release_region(dev->base_addr, SK_ISA_IO_EXTENT); | 278 | release_region(dev->base_addr, SK_ISA_IO_EXTENT); |
279 | out5: | ||
279 | return err; | 280 | return err; |
280 | } | 281 | } |
281 | 282 | ||
282 | struct net_device * __init sk_isa_probe(int unit) | ||
283 | { | ||
284 | struct net_device *dev = alloc_trdev(sizeof(struct net_local)); | ||
285 | int err = 0; | ||
286 | |||
287 | if (!dev) | ||
288 | return ERR_PTR(-ENOMEM); | ||
289 | |||
290 | if (unit >= 0) { | ||
291 | sprintf(dev->name, "tr%d", unit); | ||
292 | netdev_boot_setup_check(dev); | ||
293 | } | ||
294 | |||
295 | err = setup_card(dev); | ||
296 | if (err) | ||
297 | goto out; | ||
298 | |||
299 | return dev; | ||
300 | out: | ||
301 | free_netdev(dev); | ||
302 | return ERR_PTR(err); | ||
303 | } | ||
304 | |||
305 | /* | 283 | /* |
306 | * Reads MAC address from adapter RAM, which should've read it from | 284 | * Reads MAC address from adapter RAM, which should've read it from |
307 | * the onboard ROM. | 285 | * the onboard ROM. |
@@ -361,8 +339,6 @@ static int sk_isa_open(struct net_device *dev) | |||
361 | return tms380tr_open(dev); | 339 | return tms380tr_open(dev); |
362 | } | 340 | } |
363 | 341 | ||
364 | #ifdef MODULE | ||
365 | |||
366 | #define ISATR_MAX_ADAPTERS 3 | 342 | #define ISATR_MAX_ADAPTERS 3 |
367 | 343 | ||
368 | static int io[ISATR_MAX_ADAPTERS]; | 344 | static int io[ISATR_MAX_ADAPTERS]; |
@@ -375,13 +351,23 @@ module_param_array(io, int, NULL, 0); | |||
375 | module_param_array(irq, int, NULL, 0); | 351 | module_param_array(irq, int, NULL, 0); |
376 | module_param_array(dma, int, NULL, 0); | 352 | module_param_array(dma, int, NULL, 0); |
377 | 353 | ||
378 | static struct net_device *sk_isa_dev[ISATR_MAX_ADAPTERS]; | 354 | static struct platform_device *sk_isa_dev[ISATR_MAX_ADAPTERS]; |
379 | 355 | ||
380 | int init_module(void) | 356 | static struct device_driver sk_isa_driver = { |
357 | .name = "skisa", | ||
358 | .bus = &platform_bus_type, | ||
359 | }; | ||
360 | |||
361 | static int __init sk_isa_init(void) | ||
381 | { | 362 | { |
382 | struct net_device *dev; | 363 | struct net_device *dev; |
364 | struct platform_device *pdev; | ||
383 | int i, num = 0, err = 0; | 365 | int i, num = 0, err = 0; |
384 | 366 | ||
367 | err = driver_register(&sk_isa_driver); | ||
368 | if (err) | ||
369 | return err; | ||
370 | |||
385 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { | 371 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { |
386 | dev = alloc_trdev(sizeof(struct net_local)); | 372 | dev = alloc_trdev(sizeof(struct net_local)); |
387 | if (!dev) | 373 | if (!dev) |
@@ -390,12 +376,15 @@ int init_module(void) | |||
390 | dev->base_addr = io[i]; | 376 | dev->base_addr = io[i]; |
391 | dev->irq = irq[i]; | 377 | dev->irq = irq[i]; |
392 | dev->dma = dma[i]; | 378 | dev->dma = dma[i]; |
393 | err = setup_card(dev); | 379 | pdev = platform_device_register_simple("skisa", |
394 | 380 | i, NULL, 0); | |
381 | err = setup_card(dev, &pdev->dev); | ||
395 | if (!err) { | 382 | if (!err) { |
396 | sk_isa_dev[i] = dev; | 383 | sk_isa_dev[i] = pdev; |
384 | dev_set_drvdata(&sk_isa_dev[i]->dev, dev); | ||
397 | ++num; | 385 | ++num; |
398 | } else { | 386 | } else { |
387 | platform_device_unregister(pdev); | ||
399 | free_netdev(dev); | 388 | free_netdev(dev); |
400 | } | 389 | } |
401 | } | 390 | } |
@@ -409,23 +398,28 @@ int init_module(void) | |||
409 | return (0); | 398 | return (0); |
410 | } | 399 | } |
411 | 400 | ||
412 | void cleanup_module(void) | 401 | static void __exit sk_isa_cleanup(void) |
413 | { | 402 | { |
403 | struct net_device *dev; | ||
414 | int i; | 404 | int i; |
415 | 405 | ||
416 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { | 406 | for (i = 0; i < ISATR_MAX_ADAPTERS ; i++) { |
417 | struct net_device *dev = sk_isa_dev[i]; | 407 | struct platform_device *pdev = sk_isa_dev[i]; |
418 | 408 | ||
419 | if (!dev) | 409 | if (!pdev) |
420 | continue; | 410 | continue; |
421 | 411 | dev = dev_get_drvdata(&pdev->dev); | |
422 | unregister_netdev(dev); | 412 | unregister_netdev(dev); |
423 | release_region(dev->base_addr, SK_ISA_IO_EXTENT); | 413 | release_region(dev->base_addr, SK_ISA_IO_EXTENT); |
424 | free_irq(dev->irq, dev); | 414 | free_irq(dev->irq, dev); |
425 | free_dma(dev->dma); | 415 | free_dma(dev->dma); |
426 | tmsdev_term(dev); | 416 | tmsdev_term(dev); |
427 | free_netdev(dev); | 417 | free_netdev(dev); |
418 | dev_set_drvdata(&pdev->dev, NULL); | ||
419 | platform_device_unregister(pdev); | ||
428 | } | 420 | } |
421 | driver_unregister(&sk_isa_driver); | ||
429 | } | 422 | } |
430 | #endif /* MODULE */ | ||
431 | 423 | ||
424 | module_init(sk_isa_init); | ||
425 | module_exit(sk_isa_cleanup); | ||
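Both ISA front ends above (proteon and skisa) now express their 16MB bus-master limit through the generic device instead of passing a dmalimit argument around: setup_card() points pdev->dma_mask at a static u64 holding ISA_MAX_ADDRESS, and tmsdev_init() in the tms380tr.c hunk further down reads the limit back out of that mask. A stripped-down sketch of the two halves of that contract; the example_ names are illustrative and the error code mirrors what tmsdev_init() returns:

static u64 example_dma_mask = ISA_MAX_ADDRESS;  /* 16MB ISA bus-master limit */

/* Producer side: the ISA front end publishes its limit on the struct device. */
static void example_publish_limit(struct device *pdev)
{
        pdev->dma_mask = &example_dma_mask;
}

/* Consumer side: the common tms380tr core derives its dmalimit from the mask. */
static int example_read_limit(struct device *pdev, unsigned long *dmalimit)
{
        if (!pdev->dma_mask)
                return -ENOMEM;         /* tmsdev_init refuses devices without a mask */
        *dmalimit = *pdev->dma_mask;
        return 0;
}

Routing the limit through dma_mask is what lets the PCI, ISA and MCA front ends share a single tmsdev_init() signature.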
diff --git a/drivers/net/tokenring/tms380tr.c b/drivers/net/tokenring/tms380tr.c index 5e0b0ce98ed7..2e39bf1f7462 100644 --- a/drivers/net/tokenring/tms380tr.c +++ b/drivers/net/tokenring/tms380tr.c | |||
@@ -62,6 +62,7 @@ | |||
62 | * normal operation. | 62 | * normal operation. |
63 | * 30-Dec-02 JF Removed incorrect __init from | 63 | * 30-Dec-02 JF Removed incorrect __init from |
64 | * tms380tr_init_card. | 64 | * tms380tr_init_card. |
65 | * 22-Jul-05 JF Converted to dma-mapping. | ||
65 | * | 66 | * |
66 | * To do: | 67 | * To do: |
67 | * 1. Multi/Broadcast packet handling (this may have fixed itself) | 68 | * 1. Multi/Broadcast packet handling (this may have fixed itself) |
@@ -89,7 +90,7 @@ static const char version[] = "tms380tr.c: v1.10 30/12/2002 by Christoph Goos, A | |||
89 | #include <linux/time.h> | 90 | #include <linux/time.h> |
90 | #include <linux/errno.h> | 91 | #include <linux/errno.h> |
91 | #include <linux/init.h> | 92 | #include <linux/init.h> |
92 | #include <linux/pci.h> | 93 | #include <linux/dma-mapping.h> |
93 | #include <linux/delay.h> | 94 | #include <linux/delay.h> |
94 | #include <linux/netdevice.h> | 95 | #include <linux/netdevice.h> |
95 | #include <linux/etherdevice.h> | 96 | #include <linux/etherdevice.h> |
@@ -114,8 +115,6 @@ static const char version[] = "tms380tr.c: v1.10 30/12/2002 by Christoph Goos, A | |||
114 | #endif | 115 | #endif |
115 | static unsigned int tms380tr_debug = TMS380TR_DEBUG; | 116 | static unsigned int tms380tr_debug = TMS380TR_DEBUG; |
116 | 117 | ||
117 | static struct device tms_device; | ||
118 | |||
119 | /* Index to functions, as function prototypes. | 118 | /* Index to functions, as function prototypes. |
120 | * Alphabetical by function name. | 119 | * Alphabetical by function name. |
121 | */ | 120 | */ |
@@ -434,7 +433,7 @@ static void tms380tr_init_net_local(struct net_device *dev) | |||
434 | skb_put(tp->Rpl[i].Skb, tp->MaxPacketSize); | 433 | skb_put(tp->Rpl[i].Skb, tp->MaxPacketSize); |
435 | 434 | ||
436 | /* data unreachable for DMA ? then use local buffer */ | 435 | /* data unreachable for DMA ? then use local buffer */ |
437 | dmabuf = pci_map_single(tp->pdev, tp->Rpl[i].Skb->data, tp->MaxPacketSize, PCI_DMA_FROMDEVICE); | 436 | dmabuf = dma_map_single(tp->pdev, tp->Rpl[i].Skb->data, tp->MaxPacketSize, DMA_FROM_DEVICE); |
438 | if(tp->dmalimit && (dmabuf + tp->MaxPacketSize > tp->dmalimit)) | 437 | if(tp->dmalimit && (dmabuf + tp->MaxPacketSize > tp->dmalimit)) |
439 | { | 438 | { |
440 | tp->Rpl[i].SkbStat = SKB_DATA_COPY; | 439 | tp->Rpl[i].SkbStat = SKB_DATA_COPY; |
@@ -638,10 +637,10 @@ static int tms380tr_hardware_send_packet(struct sk_buff *skb, struct net_device | |||
638 | /* Is buffer reachable for Busmaster-DMA? */ | 637 | /* Is buffer reachable for Busmaster-DMA? */ |
639 | 638 | ||
640 | length = skb->len; | 639 | length = skb->len; |
641 | dmabuf = pci_map_single(tp->pdev, skb->data, length, PCI_DMA_TODEVICE); | 640 | dmabuf = dma_map_single(tp->pdev, skb->data, length, DMA_TO_DEVICE); |
642 | if(tp->dmalimit && (dmabuf + length > tp->dmalimit)) { | 641 | if(tp->dmalimit && (dmabuf + length > tp->dmalimit)) { |
643 | /* Copy frame to local buffer */ | 642 | /* Copy frame to local buffer */ |
644 | pci_unmap_single(tp->pdev, dmabuf, length, PCI_DMA_TODEVICE); | 643 | dma_unmap_single(tp->pdev, dmabuf, length, DMA_TO_DEVICE); |
645 | dmabuf = 0; | 644 | dmabuf = 0; |
646 | i = tp->TplFree->TPLIndex; | 645 | i = tp->TplFree->TPLIndex; |
647 | buf = tp->LocalTxBuffers[i]; | 646 | buf = tp->LocalTxBuffers[i]; |
@@ -1284,9 +1283,7 @@ static int tms380tr_reset_adapter(struct net_device *dev) | |||
1284 | unsigned short count, c, count2; | 1283 | unsigned short count, c, count2; |
1285 | const struct firmware *fw_entry = NULL; | 1284 | const struct firmware *fw_entry = NULL; |
1286 | 1285 | ||
1287 | strncpy(tms_device.bus_id,dev->name, BUS_ID_SIZE); | 1286 | if (request_firmware(&fw_entry, "tms380tr.bin", tp->pdev) != 0) { |
1288 | |||
1289 | if (request_firmware(&fw_entry, "tms380tr.bin", &tms_device) != 0) { | ||
1290 | printk(KERN_ALERT "%s: firmware %s is missing, cannot start.\n", | 1287 | printk(KERN_ALERT "%s: firmware %s is missing, cannot start.\n", |
1291 | dev->name, "tms380tr.bin"); | 1288 | dev->name, "tms380tr.bin"); |
1292 | return (-1); | 1289 | return (-1); |
@@ -2021,7 +2018,7 @@ static void tms380tr_cancel_tx_queue(struct net_local* tp) | |||
2021 | 2018 | ||
2022 | printk(KERN_INFO "Cancel tx (%08lXh).\n", (unsigned long)tpl); | 2019 | printk(KERN_INFO "Cancel tx (%08lXh).\n", (unsigned long)tpl); |
2023 | if (tpl->DMABuff) | 2020 | if (tpl->DMABuff) |
2024 | pci_unmap_single(tp->pdev, tpl->DMABuff, tpl->Skb->len, PCI_DMA_TODEVICE); | 2021 | dma_unmap_single(tp->pdev, tpl->DMABuff, tpl->Skb->len, DMA_TO_DEVICE); |
2025 | dev_kfree_skb_any(tpl->Skb); | 2022 | dev_kfree_skb_any(tpl->Skb); |
2026 | } | 2023 | } |
2027 | 2024 | ||
@@ -2090,7 +2087,7 @@ static void tms380tr_tx_status_irq(struct net_device *dev) | |||
2090 | 2087 | ||
2091 | tp->MacStat.tx_packets++; | 2088 | tp->MacStat.tx_packets++; |
2092 | if (tpl->DMABuff) | 2089 | if (tpl->DMABuff) |
2093 | pci_unmap_single(tp->pdev, tpl->DMABuff, tpl->Skb->len, PCI_DMA_TODEVICE); | 2090 | dma_unmap_single(tp->pdev, tpl->DMABuff, tpl->Skb->len, DMA_TO_DEVICE); |
2094 | dev_kfree_skb_irq(tpl->Skb); | 2091 | dev_kfree_skb_irq(tpl->Skb); |
2095 | tpl->BusyFlag = 0; /* "free" TPL */ | 2092 | tpl->BusyFlag = 0; /* "free" TPL */ |
2096 | } | 2093 | } |
@@ -2209,7 +2206,7 @@ static void tms380tr_rcv_status_irq(struct net_device *dev) | |||
2209 | tp->MacStat.rx_errors++; | 2206 | tp->MacStat.rx_errors++; |
2210 | } | 2207 | } |
2211 | if (rpl->DMABuff) | 2208 | if (rpl->DMABuff) |
2212 | pci_unmap_single(tp->pdev, rpl->DMABuff, tp->MaxPacketSize, PCI_DMA_TODEVICE); | 2209 | dma_unmap_single(tp->pdev, rpl->DMABuff, tp->MaxPacketSize, DMA_TO_DEVICE); |
2213 | rpl->DMABuff = 0; | 2210 | rpl->DMABuff = 0; |
2214 | 2211 | ||
2215 | /* Allocate new skb for rpl */ | 2212 | /* Allocate new skb for rpl */ |
@@ -2227,7 +2224,7 @@ static void tms380tr_rcv_status_irq(struct net_device *dev) | |||
2227 | skb_put(rpl->Skb, tp->MaxPacketSize); | 2224 | skb_put(rpl->Skb, tp->MaxPacketSize); |
2228 | 2225 | ||
2229 | /* Data unreachable for DMA ? then use local buffer */ | 2226 | /* Data unreachable for DMA ? then use local buffer */ |
2230 | dmabuf = pci_map_single(tp->pdev, rpl->Skb->data, tp->MaxPacketSize, PCI_DMA_FROMDEVICE); | 2227 | dmabuf = dma_map_single(tp->pdev, rpl->Skb->data, tp->MaxPacketSize, DMA_FROM_DEVICE); |
2231 | if(tp->dmalimit && (dmabuf + tp->MaxPacketSize > tp->dmalimit)) | 2228 | if(tp->dmalimit && (dmabuf + tp->MaxPacketSize > tp->dmalimit)) |
2232 | { | 2229 | { |
2233 | rpl->SkbStat = SKB_DATA_COPY; | 2230 | rpl->SkbStat = SKB_DATA_COPY; |
@@ -2332,23 +2329,26 @@ void tmsdev_term(struct net_device *dev) | |||
2332 | struct net_local *tp; | 2329 | struct net_local *tp; |
2333 | 2330 | ||
2334 | tp = netdev_priv(dev); | 2331 | tp = netdev_priv(dev); |
2335 | pci_unmap_single(tp->pdev, tp->dmabuffer, sizeof(struct net_local), | 2332 | dma_unmap_single(tp->pdev, tp->dmabuffer, sizeof(struct net_local), |
2336 | PCI_DMA_BIDIRECTIONAL); | 2333 | DMA_BIDIRECTIONAL); |
2337 | } | 2334 | } |
2338 | 2335 | ||
2339 | int tmsdev_init(struct net_device *dev, unsigned long dmalimit, | 2336 | int tmsdev_init(struct net_device *dev, struct device *pdev) |
2340 | struct pci_dev *pdev) | ||
2341 | { | 2337 | { |
2342 | struct net_local *tms_local; | 2338 | struct net_local *tms_local; |
2343 | 2339 | ||
2344 | memset(dev->priv, 0, sizeof(struct net_local)); | 2340 | memset(dev->priv, 0, sizeof(struct net_local)); |
2345 | tms_local = netdev_priv(dev); | 2341 | tms_local = netdev_priv(dev); |
2346 | init_waitqueue_head(&tms_local->wait_for_tok_int); | 2342 | init_waitqueue_head(&tms_local->wait_for_tok_int); |
2347 | tms_local->dmalimit = dmalimit; | 2343 | if (pdev->dma_mask) |
2344 | tms_local->dmalimit = *pdev->dma_mask; | ||
2345 | else | ||
2346 | return -ENOMEM; | ||
2348 | tms_local->pdev = pdev; | 2347 | tms_local->pdev = pdev; |
2349 | tms_local->dmabuffer = pci_map_single(pdev, (void *)tms_local, | 2348 | tms_local->dmabuffer = dma_map_single(pdev, (void *)tms_local, |
2350 | sizeof(struct net_local), PCI_DMA_BIDIRECTIONAL); | 2349 | sizeof(struct net_local), DMA_BIDIRECTIONAL); |
2351 | if (tms_local->dmabuffer + sizeof(struct net_local) > dmalimit) | 2350 | if (tms_local->dmabuffer + sizeof(struct net_local) > |
2351 | tms_local->dmalimit) | ||
2352 | { | 2352 | { |
2353 | printk(KERN_INFO "%s: Memory not accessible for DMA\n", | 2353 | printk(KERN_INFO "%s: Memory not accessible for DMA\n", |
2354 | dev->name); | 2354 | dev->name); |
@@ -2370,8 +2370,6 @@ int tmsdev_init(struct net_device *dev, unsigned long dmalimit, | |||
2370 | return 0; | 2370 | return 0; |
2371 | } | 2371 | } |
2372 | 2372 | ||
2373 | #ifdef MODULE | ||
2374 | |||
2375 | EXPORT_SYMBOL(tms380tr_open); | 2373 | EXPORT_SYMBOL(tms380tr_open); |
2376 | EXPORT_SYMBOL(tms380tr_close); | 2374 | EXPORT_SYMBOL(tms380tr_close); |
2377 | EXPORT_SYMBOL(tms380tr_interrupt); | 2375 | EXPORT_SYMBOL(tms380tr_interrupt); |
@@ -2379,6 +2377,8 @@ EXPORT_SYMBOL(tmsdev_init); | |||
2379 | EXPORT_SYMBOL(tmsdev_term); | 2377 | EXPORT_SYMBOL(tmsdev_term); |
2380 | EXPORT_SYMBOL(tms380tr_wait); | 2378 | EXPORT_SYMBOL(tms380tr_wait); |
2381 | 2379 | ||
2380 | #ifdef MODULE | ||
2381 | |||
2382 | static struct module *TMS380_module = NULL; | 2382 | static struct module *TMS380_module = NULL; |
2383 | 2383 | ||
2384 | int init_module(void) | 2384 | int init_module(void) |
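The tms380tr.c hunks above are a mechanical pci_* to dma_* conversion: tp->pdev is now a struct device *, and every pci_map_single/PCI_DMA_* pair becomes dma_map_single/DMA_*. The driver keeps its old software bounce strategy, copying through a local buffer whenever the mapped address lands above dmalimit. A reduced sketch of that transmit-side decision, with an illustrative name and the driver's struct net_local assumed from its header:

#include <linux/dma-mapping.h>

/* Returns the bus address to hand to the chip, or 0 when the local bounce
 * buffer must be used instead (mirrors tms380tr_hardware_send_packet above). */
static dma_addr_t example_map_for_tx(struct net_local *tp, void *data,
                                     unsigned int len)
{
        dma_addr_t dmabuf;

        dmabuf = dma_map_single(tp->pdev, data, len, DMA_TO_DEVICE);
        if (tp->dmalimit && (dmabuf + len > tp->dmalimit)) {
                /* Not reachable by the bus master: undo the mapping and fall back. */
                dma_unmap_single(tp->pdev, dmabuf, len, DMA_TO_DEVICE);
                return 0;
        }
        return dmabuf;
}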
diff --git a/drivers/net/tokenring/tms380tr.h b/drivers/net/tokenring/tms380tr.h index f2c5ba0f37a5..30452c67bb68 100644 --- a/drivers/net/tokenring/tms380tr.h +++ b/drivers/net/tokenring/tms380tr.h | |||
@@ -17,8 +17,7 @@ | |||
17 | int tms380tr_open(struct net_device *dev); | 17 | int tms380tr_open(struct net_device *dev); |
18 | int tms380tr_close(struct net_device *dev); | 18 | int tms380tr_close(struct net_device *dev); |
19 | irqreturn_t tms380tr_interrupt(int irq, void *dev_id, struct pt_regs *regs); | 19 | irqreturn_t tms380tr_interrupt(int irq, void *dev_id, struct pt_regs *regs); |
20 | int tmsdev_init(struct net_device *dev, unsigned long dmalimit, | 20 | int tmsdev_init(struct net_device *dev, struct device *pdev); |
21 | struct pci_dev *pdev); | ||
22 | void tmsdev_term(struct net_device *dev); | 21 | void tmsdev_term(struct net_device *dev); |
23 | void tms380tr_wait(unsigned long time); | 22 | void tms380tr_wait(unsigned long time); |
24 | 23 | ||
@@ -719,7 +718,7 @@ struct s_TPL { /* Transmit Parameter List (align on even word boundaries) */ | |||
719 | struct sk_buff *Skb; | 718 | struct sk_buff *Skb; |
720 | unsigned char TPLIndex; | 719 | unsigned char TPLIndex; |
721 | volatile unsigned char BusyFlag;/* Flag: TPL busy? */ | 720 | volatile unsigned char BusyFlag;/* Flag: TPL busy? */ |
722 | dma_addr_t DMABuff; /* DMA IO bus address from pci_map */ | 721 | dma_addr_t DMABuff; /* DMA IO bus address from dma_map */ |
723 | }; | 722 | }; |
724 | 723 | ||
725 | /* ---------------------Receive Functions-------------------------------* | 724 | /* ---------------------Receive Functions-------------------------------* |
@@ -1060,7 +1059,7 @@ struct s_RPL { /* Receive Parameter List */ | |||
1060 | struct sk_buff *Skb; | 1059 | struct sk_buff *Skb; |
1061 | SKB_STAT SkbStat; | 1060 | SKB_STAT SkbStat; |
1062 | int RPLIndex; | 1061 | int RPLIndex; |
1063 | dma_addr_t DMABuff; /* DMA IO bus address from pci_map */ | 1062 | dma_addr_t DMABuff; /* DMA IO bus address from dma_map */ |
1064 | }; | 1063 | }; |
1065 | 1064 | ||
1066 | /* Information that need to be kept for each board. */ | 1065 | /* Information that need to be kept for each board. */ |
@@ -1091,7 +1090,7 @@ typedef struct net_local { | |||
1091 | RPL *RplTail; | 1090 | RPL *RplTail; |
1092 | unsigned char LocalRxBuffers[RPL_NUM][DEFAULT_PACKET_SIZE]; | 1091 | unsigned char LocalRxBuffers[RPL_NUM][DEFAULT_PACKET_SIZE]; |
1093 | 1092 | ||
1094 | struct pci_dev *pdev; | 1093 | struct device *pdev; |
1095 | int DataRate; | 1094 | int DataRate; |
1096 | unsigned char ScbInUse; | 1095 | unsigned char ScbInUse; |
1097 | unsigned short CMDqueue; | 1096 | unsigned short CMDqueue; |
diff --git a/drivers/net/tokenring/tmspci.c b/drivers/net/tokenring/tmspci.c index 2e18c0a46482..ab47c0547a3b 100644 --- a/drivers/net/tokenring/tmspci.c +++ b/drivers/net/tokenring/tmspci.c | |||
@@ -100,7 +100,7 @@ static int __devinit tms_pci_attach(struct pci_dev *pdev, const struct pci_devic | |||
100 | unsigned int pci_irq_line; | 100 | unsigned int pci_irq_line; |
101 | unsigned long pci_ioaddr; | 101 | unsigned long pci_ioaddr; |
102 | struct card_info *cardinfo = &card_info_table[ent->driver_data]; | 102 | struct card_info *cardinfo = &card_info_table[ent->driver_data]; |
103 | 103 | ||
104 | if (versionprinted++ == 0) | 104 | if (versionprinted++ == 0) |
105 | printk("%s", version); | 105 | printk("%s", version); |
106 | 106 | ||
@@ -143,7 +143,7 @@ static int __devinit tms_pci_attach(struct pci_dev *pdev, const struct pci_devic | |||
143 | printk(":%2.2x", dev->dev_addr[i]); | 143 | printk(":%2.2x", dev->dev_addr[i]); |
144 | printk("\n"); | 144 | printk("\n"); |
145 | 145 | ||
146 | ret = tmsdev_init(dev, PCI_MAX_ADDRESS, pdev); | 146 | ret = tmsdev_init(dev, &pdev->dev); |
147 | if (ret) { | 147 | if (ret) { |
148 | printk("%s: unable to get memory for dev->priv.\n", dev->name); | 148 | printk("%s: unable to get memory for dev->priv.\n", dev->name); |
149 | goto err_out_irq; | 149 | goto err_out_irq; |
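For the PCI front end the same conversion is nearly a one-liner: tms_pci_attach() passes the generic struct device embedded in its pci_dev, and the DMA limit comes from the mask the PCI core has already attached to that device (or from an explicit pci_set_dma_mask() call). A hedged sketch of such a probe path with the resource setup elided; the example_ name is illustrative, the ordering is simplified relative to the real tms_pci_attach(), and the driver's own headers (struct net_local, alloc_trdev, tmsdev_init) are assumed:

#include <linux/init.h>
#include <linux/pci.h>
#include <linux/netdevice.h>

static int __devinit example_pci_attach(struct pci_dev *pdev,
                                        const struct pci_device_id *ent)
{
        struct net_device *dev;
        int ret;

        ret = pci_enable_device(pdev);
        if (ret)
                return ret;

        dev = alloc_trdev(sizeof(struct net_local));
        if (!dev)
                return -ENOMEM;

        /* ... request the I/O region and IRQ, read the station address ... */

        /* pdev->dev.dma_mask is already valid here, so no limit argument. */
        ret = tmsdev_init(dev, &pdev->dev);
        if (ret) {
                free_netdev(dev);
                return ret;
        }

        /* ... set the tms380tr callbacks and register_netdev(), as above ... */
        return 0;
}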
diff --git a/drivers/net/wan/cycx_drv.c b/drivers/net/wan/cycx_drv.c index 6e74af62ca08..9e56fc346ba4 100644 --- a/drivers/net/wan/cycx_drv.c +++ b/drivers/net/wan/cycx_drv.c | |||
@@ -56,7 +56,7 @@ | |||
56 | #include <linux/sched.h> /* for jiffies, HZ, etc. */ | 56 | #include <linux/sched.h> /* for jiffies, HZ, etc. */ |
57 | #include <linux/cycx_drv.h> /* API definitions */ | 57 | #include <linux/cycx_drv.h> /* API definitions */ |
58 | #include <linux/cycx_cfm.h> /* CYCX firmware module definitions */ | 58 | #include <linux/cycx_cfm.h> /* CYCX firmware module definitions */ |
59 | #include <linux/delay.h> /* udelay */ | 59 | #include <linux/delay.h> /* udelay, msleep_interruptible */ |
60 | #include <asm/io.h> /* read[wl], write[wl], ioremap, iounmap */ | 60 | #include <asm/io.h> /* read[wl], write[wl], ioremap, iounmap */ |
61 | 61 | ||
62 | #define MOD_VERSION 0 | 62 | #define MOD_VERSION 0 |
@@ -74,7 +74,6 @@ static int reset_cyc2x(void __iomem *addr); | |||
74 | static int detect_cyc2x(void __iomem *addr); | 74 | static int detect_cyc2x(void __iomem *addr); |
75 | 75 | ||
76 | /* Miscellaneous functions */ | 76 | /* Miscellaneous functions */ |
77 | static void delay_cycx(int sec); | ||
78 | static int get_option_index(long *optlist, long optval); | 77 | static int get_option_index(long *optlist, long optval); |
79 | static u16 checksum(u8 *buf, u32 len); | 78 | static u16 checksum(u8 *buf, u32 len); |
80 | 79 | ||
@@ -259,7 +258,7 @@ static int memory_exists(void __iomem *addr) | |||
259 | if (readw(addr + 0x10) == TEST_PATTERN) | 258 | if (readw(addr + 0x10) == TEST_PATTERN) |
260 | return 1; | 259 | return 1; |
261 | 260 | ||
262 | delay_cycx(1); | 261 | msleep_interruptible(1 * 1000); |
263 | } | 262 | } |
264 | 263 | ||
265 | return 0; | 264 | return 0; |
@@ -316,7 +315,7 @@ static void cycx_reset_boot(void __iomem *addr, u8 *code, u32 len) | |||
316 | 315 | ||
317 | /* 80186 was in hold, go */ | 316 | /* 80186 was in hold, go */ |
318 | writeb(0, addr + START_CPU); | 317 | writeb(0, addr + START_CPU); |
319 | delay_cycx(1); | 318 | msleep_interruptible(1 * 1000); |
320 | } | 319 | } |
321 | 320 | ||
322 | /* Load data.bin file through boot (reset) interface. */ | 321 | /* Load data.bin file through boot (reset) interface. */ |
@@ -462,13 +461,13 @@ static int load_cyc2x(struct cycx_hw *hw, struct cycx_firmware *cfm, u32 len) | |||
462 | cycx_reset_boot(hw->dpmbase, reset_image, img_hdr->reset_size); | 461 | cycx_reset_boot(hw->dpmbase, reset_image, img_hdr->reset_size); |
463 | /* reset is waiting for boot */ | 462 | /* reset is waiting for boot */ |
464 | writew(GEN_POWER_ON, pt_cycld); | 463 | writew(GEN_POWER_ON, pt_cycld); |
465 | delay_cycx(1); | 464 | msleep_interruptible(1 * 1000); |
466 | 465 | ||
467 | for (j = 0 ; j < 3 ; j++) | 466 | for (j = 0 ; j < 3 ; j++) |
468 | if (!readw(pt_cycld)) | 467 | if (!readw(pt_cycld)) |
469 | goto reset_loaded; | 468 | goto reset_loaded; |
470 | else | 469 | else |
471 | delay_cycx(1); | 470 | msleep_interruptible(1 * 1000); |
472 | } | 471 | } |
473 | 472 | ||
474 | printk(KERN_ERR "%s: reset not started.\n", modname); | 473 | printk(KERN_ERR "%s: reset not started.\n", modname); |
@@ -495,7 +494,7 @@ reset_loaded: | |||
495 | 494 | ||
496 | /* Arthur Ganzert's tip: wait a while after the firmware loading... | 495 | /* Arthur Ganzert's tip: wait a while after the firmware loading... |
497 | seg abr 26 17:17:12 EST 1999 - acme */ | 496 | seg abr 26 17:17:12 EST 1999 - acme */ |
498 | delay_cycx(7); | 497 | msleep_interruptible(7 * 1000); |
499 | printk(KERN_INFO "%s: firmware loaded!\n", modname); | 498 | printk(KERN_INFO "%s: firmware loaded!\n", modname); |
500 | 499 | ||
501 | /* enable interrupts */ | 500 | /* enable interrupts */ |
@@ -547,20 +546,13 @@ static int get_option_index(long *optlist, long optval) | |||
547 | static int reset_cyc2x(void __iomem *addr) | 546 | static int reset_cyc2x(void __iomem *addr) |
548 | { | 547 | { |
549 | writeb(0, addr + RST_ENABLE); | 548 | writeb(0, addr + RST_ENABLE); |
550 | delay_cycx(2); | 549 | msleep_interruptible(2 * 1000); |
551 | writeb(0, addr + RST_DISABLE); | 550 | writeb(0, addr + RST_DISABLE); |
552 | delay_cycx(2); | 551 | msleep_interruptible(2 * 1000); |
553 | 552 | ||
554 | return memory_exists(addr); | 553 | return memory_exists(addr); |
555 | } | 554 | } |
556 | 555 | ||
557 | /* Delay */ | ||
558 | static void delay_cycx(int sec) | ||
559 | { | ||
560 | set_current_state(TASK_INTERRUPTIBLE); | ||
561 | schedule_timeout(sec * HZ); | ||
562 | } | ||
563 | |||
564 | /* Calculate 16-bit CRC using CCITT polynomial. */ | 556 | /* Calculate 16-bit CRC using CCITT polynomial. */ |
565 | static u16 checksum(u8 *buf, u32 len) | 557 | static u16 checksum(u8 *buf, u32 len) |
566 | { | 558 | { |
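The cycx_drv change above retires the driver-private delay_cycx() helper in favour of msleep_interruptible(). Both sleep in process context; the conversion is a straight unit change from seconds * HZ jiffies to milliseconds. A minimal sketch of the two idioms (illustrative only, the sleep_seconds_* names are invented and not part of the patch):

    #include <linux/delay.h>	/* msleep_interruptible() */
    #include <linux/sched.h>	/* TASK_INTERRUPTIBLE, schedule_timeout(), HZ */

    /* Old idiom, as removed above: sleep for 'sec' seconds. */
    static void sleep_seconds_old(int sec)
    {
    	set_current_state(TASK_INTERRUPTIBLE);
    	schedule_timeout(sec * HZ);
    }

    /* New idiom: same duration, expressed in milliseconds.  If a signal
     * arrives, the sleep ends early and the remaining time is returned;
     * the call sites above ignore that, just as the old helper did.
     */
    static void sleep_seconds_new(int sec)
    {
    	msleep_interruptible(sec * 1000);
    }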
diff --git a/drivers/net/wireless/orinoco.c b/drivers/net/wireless/orinoco.c index aabcdc2be05e..9c2d07cde010 100644 --- a/drivers/net/wireless/orinoco.c +++ b/drivers/net/wireless/orinoco.c | |||
@@ -4322,36 +4322,36 @@ static const struct iw_priv_args orinoco_privtab[] = { | |||
4322 | */ | 4322 | */ |
4323 | 4323 | ||
4324 | static const iw_handler orinoco_handler[] = { | 4324 | static const iw_handler orinoco_handler[] = { |
4325 | [SIOCSIWCOMMIT-SIOCIWFIRST] (iw_handler) orinoco_ioctl_commit, | 4325 | [SIOCSIWCOMMIT-SIOCIWFIRST] = (iw_handler) orinoco_ioctl_commit, |
4326 | [SIOCGIWNAME -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getname, | 4326 | [SIOCGIWNAME -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getname, |
4327 | [SIOCSIWFREQ -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setfreq, | 4327 | [SIOCSIWFREQ -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setfreq, |
4328 | [SIOCGIWFREQ -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getfreq, | 4328 | [SIOCGIWFREQ -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getfreq, |
4329 | [SIOCSIWMODE -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setmode, | 4329 | [SIOCSIWMODE -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setmode, |
4330 | [SIOCGIWMODE -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getmode, | 4330 | [SIOCGIWMODE -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getmode, |
4331 | [SIOCSIWSENS -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setsens, | 4331 | [SIOCSIWSENS -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setsens, |
4332 | [SIOCGIWSENS -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getsens, | 4332 | [SIOCGIWSENS -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getsens, |
4333 | [SIOCGIWRANGE -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getiwrange, | 4333 | [SIOCGIWRANGE -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getiwrange, |
4334 | [SIOCSIWSPY -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setspy, | 4334 | [SIOCSIWSPY -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setspy, |
4335 | [SIOCGIWSPY -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getspy, | 4335 | [SIOCGIWSPY -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getspy, |
4336 | [SIOCSIWAP -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setwap, | 4336 | [SIOCSIWAP -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setwap, |
4337 | [SIOCGIWAP -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getwap, | 4337 | [SIOCGIWAP -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getwap, |
4338 | [SIOCSIWSCAN -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setscan, | 4338 | [SIOCSIWSCAN -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setscan, |
4339 | [SIOCGIWSCAN -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getscan, | 4339 | [SIOCGIWSCAN -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getscan, |
4340 | [SIOCSIWESSID -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setessid, | 4340 | [SIOCSIWESSID -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setessid, |
4341 | [SIOCGIWESSID -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getessid, | 4341 | [SIOCGIWESSID -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getessid, |
4342 | [SIOCSIWNICKN -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setnick, | 4342 | [SIOCSIWNICKN -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setnick, |
4343 | [SIOCGIWNICKN -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getnick, | 4343 | [SIOCGIWNICKN -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getnick, |
4344 | [SIOCSIWRATE -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setrate, | 4344 | [SIOCSIWRATE -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setrate, |
4345 | [SIOCGIWRATE -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getrate, | 4345 | [SIOCGIWRATE -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getrate, |
4346 | [SIOCSIWRTS -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setrts, | 4346 | [SIOCSIWRTS -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setrts, |
4347 | [SIOCGIWRTS -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getrts, | 4347 | [SIOCGIWRTS -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getrts, |
4348 | [SIOCSIWFRAG -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setfrag, | 4348 | [SIOCSIWFRAG -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setfrag, |
4349 | [SIOCGIWFRAG -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getfrag, | 4349 | [SIOCGIWFRAG -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getfrag, |
4350 | [SIOCGIWRETRY -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getretry, | 4350 | [SIOCGIWRETRY -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getretry, |
4351 | [SIOCSIWENCODE-SIOCIWFIRST] (iw_handler) orinoco_ioctl_setiwencode, | 4351 | [SIOCSIWENCODE-SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setiwencode, |
4352 | [SIOCGIWENCODE-SIOCIWFIRST] (iw_handler) orinoco_ioctl_getiwencode, | 4352 | [SIOCGIWENCODE-SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getiwencode, |
4353 | [SIOCSIWPOWER -SIOCIWFIRST] (iw_handler) orinoco_ioctl_setpower, | 4353 | [SIOCSIWPOWER -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_setpower, |
4354 | [SIOCGIWPOWER -SIOCIWFIRST] (iw_handler) orinoco_ioctl_getpower, | 4354 | [SIOCGIWPOWER -SIOCIWFIRST] = (iw_handler) orinoco_ioctl_getpower, |
4355 | }; | 4355 | }; |
4356 | 4356 | ||
4357 | 4357 | ||
@@ -4359,15 +4359,15 @@ static const iw_handler orinoco_handler[] = { | |||
4359 | Added typecasting since we no longer use iwreq_data -- Moustafa | 4359 | Added typecasting since we no longer use iwreq_data -- Moustafa |
4360 | */ | 4360 | */ |
4361 | static const iw_handler orinoco_private_handler[] = { | 4361 | static const iw_handler orinoco_private_handler[] = { |
4362 | [0] (iw_handler) orinoco_ioctl_reset, | 4362 | [0] = (iw_handler) orinoco_ioctl_reset, |
4363 | [1] (iw_handler) orinoco_ioctl_reset, | 4363 | [1] = (iw_handler) orinoco_ioctl_reset, |
4364 | [2] (iw_handler) orinoco_ioctl_setport3, | 4364 | [2] = (iw_handler) orinoco_ioctl_setport3, |
4365 | [3] (iw_handler) orinoco_ioctl_getport3, | 4365 | [3] = (iw_handler) orinoco_ioctl_getport3, |
4366 | [4] (iw_handler) orinoco_ioctl_setpreamble, | 4366 | [4] = (iw_handler) orinoco_ioctl_setpreamble, |
4367 | [5] (iw_handler) orinoco_ioctl_getpreamble, | 4367 | [5] = (iw_handler) orinoco_ioctl_getpreamble, |
4368 | [6] (iw_handler) orinoco_ioctl_setibssport, | 4368 | [6] = (iw_handler) orinoco_ioctl_setibssport, |
4369 | [7] (iw_handler) orinoco_ioctl_getibssport, | 4369 | [7] = (iw_handler) orinoco_ioctl_getibssport, |
4370 | [9] (iw_handler) orinoco_ioctl_getrid, | 4370 | [9] = (iw_handler) orinoco_ioctl_getrid, |
4371 | }; | 4371 | }; |
4372 | 4372 | ||
4373 | static const struct iw_handler_def orinoco_handler_def = { | 4373 | static const struct iw_handler_def orinoco_handler_def = { |
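The orinoco change is purely syntactic: the handler tables were written with the obsolete GCC extension "[index] value" for array initializers, which newer compilers reject, and C99 designated initializers add the "=". Gaps are still allowed, so the unassigned private-handler slot (index 8) is simply zero-filled. A stand-alone illustration (not taken from the driver):

    typedef int (*handler_t)(void);

    static int do_reset(void)   { return 0; }
    static int do_getname(void) { return 0; }

    static const handler_t handlers[] = {
    	/* old GCC form, now rejected:  [0] do_reset,   */
    	/* C99 designated initializer:  [0] = do_reset, */
    	[0] = do_reset,
    	[1] = do_getname,
    };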
diff --git a/drivers/parport/parport_serial.c b/drivers/parport/parport_serial.c index 00498e2f1205..d3dad0aac7cb 100644 --- a/drivers/parport/parport_serial.c +++ b/drivers/parport/parport_serial.c | |||
@@ -23,13 +23,8 @@ | |||
23 | #include <linux/pci.h> | 23 | #include <linux/pci.h> |
24 | #include <linux/parport.h> | 24 | #include <linux/parport.h> |
25 | #include <linux/parport_pc.h> | 25 | #include <linux/parport_pc.h> |
26 | #include <linux/serial.h> | ||
27 | #include <linux/serialP.h> | ||
28 | #include <linux/list.h> | ||
29 | #include <linux/8250_pci.h> | 26 | #include <linux/8250_pci.h> |
30 | 27 | ||
31 | #include <asm/serial.h> | ||
32 | |||
33 | enum parport_pc_pci_cards { | 28 | enum parport_pc_pci_cards { |
34 | titan_110l = 0, | 29 | titan_110l = 0, |
35 | titan_210l, | 30 | titan_210l, |
@@ -168,182 +163,147 @@ static struct pci_device_id parport_serial_pci_tbl[] = { | |||
168 | }; | 163 | }; |
169 | MODULE_DEVICE_TABLE(pci,parport_serial_pci_tbl); | 164 | MODULE_DEVICE_TABLE(pci,parport_serial_pci_tbl); |
170 | 165 | ||
171 | struct pci_board_no_ids { | 166 | /* |
172 | int flags; | 167 | * This table describes the serial "geometry" of these boards. Any |
173 | int num_ports; | 168 | * quirks for these can be found in drivers/serial/8250_pci.c |
174 | int base_baud; | 169 | * |
175 | int uart_offset; | 170 | * Cards not tested are marked n/t |
176 | int reg_shift; | 171 | * If you have one of these cards and it works for you, please tell me.. |
177 | int (*init_fn)(struct pci_dev *dev, struct pci_board_no_ids *board, | 172 | */ |
178 | int enable); | 173 | static struct pciserial_board pci_parport_serial_boards[] __devinitdata = { |
179 | int first_uart_offset; | 174 | [titan_110l] = { |
180 | }; | 175 | .flags = FL_BASE1 | FL_BASE_BARS, |
181 | 176 | .num_ports = 1, | |
182 | static int __devinit siig10x_init_fn(struct pci_dev *dev, struct pci_board_no_ids *board, int enable) | 177 | .base_baud = 921600, |
183 | { | 178 | .uart_offset = 8, |
184 | return pci_siig10x_fn(dev, enable); | 179 | }, |
185 | } | 180 | [titan_210l] = { |
186 | 181 | .flags = FL_BASE1 | FL_BASE_BARS, | |
187 | static int __devinit siig20x_init_fn(struct pci_dev *dev, struct pci_board_no_ids *board, int enable) | 182 | .num_ports = 2, |
188 | { | 183 | .base_baud = 921600, |
189 | return pci_siig20x_fn(dev, enable); | 184 | .uart_offset = 8, |
190 | } | 185 | }, |
191 | 186 | [netmos_9xx5_combo] = { | |
192 | static int __devinit netmos_serial_init(struct pci_dev *dev, struct pci_board_no_ids *board, int enable) | 187 | .flags = FL_BASE0 | FL_BASE_BARS, |
193 | { | 188 | .num_ports = 1, |
194 | board->num_ports = dev->subsystem_device & 0xf; | 189 | .base_baud = 115200, |
195 | return 0; | 190 | .uart_offset = 8, |
196 | } | 191 | }, |
197 | 192 | [netmos_9855] = { | |
198 | static struct pci_board_no_ids pci_boards[] __devinitdata = { | 193 | .flags = FL_BASE2 | FL_BASE_BARS, |
199 | /* | 194 | .num_ports = 1, |
200 | * PCI Flags, Number of Ports, Base (Maximum) Baud Rate, | 195 | .base_baud = 115200, |
201 | * Offset to get to next UART's registers, | 196 | .uart_offset = 8, |
202 | * Register shift to use for memory-mapped I/O, | 197 | }, |
203 | * Initialization function, first UART offset | 198 | [avlab_1s1p] = { /* n/t */ |
204 | */ | 199 | .flags = FL_BASE0 | FL_BASE_BARS, |
205 | 200 | .num_ports = 1, | |
206 | // Cards not tested are marked n/t | 201 | .base_baud = 115200, |
207 | // If you have one of these cards and it works for you, please tell me.. | 202 | .uart_offset = 8, |
208 | 203 | }, | |
209 | /* titan_110l */ { SPCI_FL_BASE1 | SPCI_FL_BASE_TABLE, 1, 921600 }, | 204 | [avlab_1s1p_650] = { /* nt */ |
210 | /* titan_210l */ { SPCI_FL_BASE1 | SPCI_FL_BASE_TABLE, 2, 921600 }, | 205 | .flags = FL_BASE0 | FL_BASE_BARS, |
211 | /* netmos_9xx5_combo */ { SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 1, 115200, 0, 0, netmos_serial_init }, | 206 | .num_ports = 1, |
212 | /* netmos_9855 */ { SPCI_FL_BASE2 | SPCI_FL_BASE_TABLE, 1, 115200, 0, 0, netmos_serial_init }, | 207 | .base_baud = 115200, |
213 | /* avlab_1s1p (n/t) */ { SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 1, 115200 }, | 208 | .uart_offset = 8, |
214 | /* avlab_1s1p_650 (nt)*/{ SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 1, 115200 }, | 209 | }, |
215 | /* avlab_1s1p_850 (nt)*/{ SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 1, 115200 }, | 210 | [avlab_1s1p_850] = { /* nt */ |
216 | /* avlab_1s2p (n/t) */ { SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 1, 115200 }, | 211 | .flags = FL_BASE0 | FL_BASE_BARS, |
217 | /* avlab_1s2p_650 (nt)*/{ SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 1, 115200 }, | 212 | .num_ports = 1, |
218 | /* avlab_1s2p_850 (nt)*/{ SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 1, 115200 }, | 213 | .base_baud = 115200, |
219 | /* avlab_2s1p (n/t) */ { SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 2, 115200 }, | 214 | .uart_offset = 8, |
220 | /* avlab_2s1p_650 (nt)*/{ SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 2, 115200 }, | 215 | }, |
221 | /* avlab_2s1p_850 (nt)*/{ SPCI_FL_BASE0 | SPCI_FL_BASE_TABLE, 2, 115200 }, | 216 | [avlab_1s2p] = { /* n/t */ |
222 | /* siig_1s1p_10x */ { SPCI_FL_BASE2, 1, 460800, 0, 0, siig10x_init_fn }, | 217 | .flags = FL_BASE0 | FL_BASE_BARS, |
223 | /* siig_2s1p_10x */ { SPCI_FL_BASE2, 1, 921600, 0, 0, siig10x_init_fn }, | 218 | .num_ports = 1, |
224 | /* siig_2p1s_20x */ { SPCI_FL_BASE0, 1, 921600, 0, 0, siig20x_init_fn }, | 219 | .base_baud = 115200, |
225 | /* siig_1s1p_20x */ { SPCI_FL_BASE0, 1, 921600, 0, 0, siig20x_init_fn }, | 220 | .uart_offset = 8, |
226 | /* siig_2s1p_20x */ { SPCI_FL_BASE0, 1, 921600, 0, 0, siig20x_init_fn }, | 221 | }, |
222 | [avlab_1s2p_650] = { /* nt */ | ||
223 | .flags = FL_BASE0 | FL_BASE_BARS, | ||
224 | .num_ports = 1, | ||
225 | .base_baud = 115200, | ||
226 | .uart_offset = 8, | ||
227 | }, | ||
228 | [avlab_1s2p_850] = { /* nt */ | ||
229 | .flags = FL_BASE0 | FL_BASE_BARS, | ||
230 | .num_ports = 1, | ||
231 | .base_baud = 115200, | ||
232 | .uart_offset = 8, | ||
233 | }, | ||
234 | [avlab_2s1p] = { /* n/t */ | ||
235 | .flags = FL_BASE0 | FL_BASE_BARS, | ||
236 | .num_ports = 2, | ||
237 | .base_baud = 115200, | ||
238 | .uart_offset = 8, | ||
239 | }, | ||
240 | [avlab_2s1p_650] = { /* nt */ | ||
241 | .flags = FL_BASE0 | FL_BASE_BARS, | ||
242 | .num_ports = 2, | ||
243 | .base_baud = 115200, | ||
244 | .uart_offset = 8, | ||
245 | }, | ||
246 | [avlab_2s1p_850] = { /* nt */ | ||
247 | .flags = FL_BASE0 | FL_BASE_BARS, | ||
248 | .num_ports = 2, | ||
249 | .base_baud = 115200, | ||
250 | .uart_offset = 8, | ||
251 | }, | ||
252 | [siig_1s1p_10x] = { | ||
253 | .flags = FL_BASE2, | ||
254 | .num_ports = 1, | ||
255 | .base_baud = 460800, | ||
256 | .uart_offset = 8, | ||
257 | }, | ||
258 | [siig_2s1p_10x] = { | ||
259 | .flags = FL_BASE2, | ||
260 | .num_ports = 1, | ||
261 | .base_baud = 921600, | ||
262 | .uart_offset = 8, | ||
263 | }, | ||
264 | [siig_2p1s_20x] = { | ||
265 | .flags = FL_BASE0, | ||
266 | .num_ports = 1, | ||
267 | .base_baud = 921600, | ||
268 | .uart_offset = 8, | ||
269 | }, | ||
270 | [siig_1s1p_20x] = { | ||
271 | .flags = FL_BASE0, | ||
272 | .num_ports = 1, | ||
273 | .base_baud = 921600, | ||
274 | .uart_offset = 8, | ||
275 | }, | ||
276 | [siig_2s1p_20x] = { | ||
277 | .flags = FL_BASE0, | ||
278 | .num_ports = 1, | ||
279 | .base_baud = 921600, | ||
280 | .uart_offset = 8, | ||
281 | }, | ||
227 | }; | 282 | }; |
228 | 283 | ||
229 | struct parport_serial_private { | 284 | struct parport_serial_private { |
230 | int num_ser; | 285 | struct serial_private *serial; |
231 | int line[20]; | ||
232 | struct pci_board_no_ids ser; | ||
233 | int num_par; | 286 | int num_par; |
234 | struct parport *port[PARPORT_MAX]; | 287 | struct parport *port[PARPORT_MAX]; |
235 | struct parport_pc_pci par; | 288 | struct parport_pc_pci par; |
236 | }; | 289 | }; |
237 | 290 | ||
238 | static int __devinit get_pci_port (struct pci_dev *dev, | ||
239 | struct pci_board_no_ids *board, | ||
240 | struct serial_struct *req, | ||
241 | int idx) | ||
242 | { | ||
243 | unsigned long port; | ||
244 | int base_idx; | ||
245 | int max_port; | ||
246 | int offset; | ||
247 | |||
248 | base_idx = SPCI_FL_GET_BASE(board->flags); | ||
249 | if (board->flags & SPCI_FL_BASE_TABLE) | ||
250 | base_idx += idx; | ||
251 | |||
252 | if (board->flags & SPCI_FL_REGION_SZ_CAP) { | ||
253 | max_port = pci_resource_len(dev, base_idx) / 8; | ||
254 | if (idx >= max_port) | ||
255 | return 1; | ||
256 | } | ||
257 | |||
258 | offset = board->first_uart_offset; | ||
259 | |||
260 | /* Timedia/SUNIX uses a mixture of BARs and offsets */ | ||
261 | /* Ugh, this is ugly as all hell --- TYT */ | ||
262 | if(dev->vendor == PCI_VENDOR_ID_TIMEDIA ) /* 0x1409 */ | ||
263 | switch(idx) { | ||
264 | case 0: base_idx=0; | ||
265 | break; | ||
266 | case 1: base_idx=0; offset=8; | ||
267 | break; | ||
268 | case 2: base_idx=1; | ||
269 | break; | ||
270 | case 3: base_idx=1; offset=8; | ||
271 | break; | ||
272 | case 4: /* BAR 2*/ | ||
273 | case 5: /* BAR 3 */ | ||
274 | case 6: /* BAR 4*/ | ||
275 | case 7: base_idx=idx-2; /* BAR 5*/ | ||
276 | } | ||
277 | |||
278 | port = pci_resource_start(dev, base_idx) + offset; | ||
279 | |||
280 | if ((board->flags & SPCI_FL_BASE_TABLE) == 0) | ||
281 | port += idx * (board->uart_offset ? board->uart_offset : 8); | ||
282 | |||
283 | if (pci_resource_flags (dev, base_idx) & IORESOURCE_IO) { | ||
284 | int high_bits_offset = ((sizeof(long)-sizeof(int))*8); | ||
285 | req->port = port; | ||
286 | if (high_bits_offset) | ||
287 | req->port_high = port >> high_bits_offset; | ||
288 | else | ||
289 | req->port_high = 0; | ||
290 | return 0; | ||
291 | } | ||
292 | req->io_type = SERIAL_IO_MEM; | ||
293 | req->iomem_base = ioremap(port, board->uart_offset); | ||
294 | req->iomem_reg_shift = board->reg_shift; | ||
295 | req->port = 0; | ||
296 | return req->iomem_base ? 0 : 1; | ||
297 | } | ||
298 | |||
299 | /* Register the serial port(s) of a PCI card. */ | 291 | /* Register the serial port(s) of a PCI card. */ |
300 | static int __devinit serial_register (struct pci_dev *dev, | 292 | static int __devinit serial_register (struct pci_dev *dev, |
301 | const struct pci_device_id *id) | 293 | const struct pci_device_id *id) |
302 | { | 294 | { |
303 | struct pci_board_no_ids *board; | ||
304 | struct parport_serial_private *priv = pci_get_drvdata (dev); | 295 | struct parport_serial_private *priv = pci_get_drvdata (dev); |
305 | struct serial_struct serial_req; | 296 | struct pciserial_board *board; |
306 | int base_baud; | 297 | struct serial_private *serial; |
307 | int k; | ||
308 | int success = 0; | ||
309 | |||
310 | priv->ser = pci_boards[id->driver_data]; | ||
311 | board = &priv->ser; | ||
312 | if (board->init_fn && ((board->init_fn) (dev, board, 1) != 0)) | ||
313 | return 1; | ||
314 | |||
315 | base_baud = board->base_baud; | ||
316 | if (!base_baud) | ||
317 | base_baud = BASE_BAUD; | ||
318 | memset (&serial_req, 0, sizeof (serial_req)); | ||
319 | |||
320 | for (k = 0; k < board->num_ports; k++) { | ||
321 | int line; | ||
322 | 298 | ||
323 | if (priv->num_ser == ARRAY_SIZE (priv->line)) { | 299 | board = &pci_parport_serial_boards[id->driver_data]; |
324 | printk (KERN_WARNING | 300 | serial = pciserial_init_ports(dev, board); |
325 | "parport_serial: %s: only %u serial lines " | ||
326 | "supported (%d reported)\n", pci_name (dev), | ||
327 | ARRAY_SIZE (priv->line), board->num_ports); | ||
328 | break; | ||
329 | } | ||
330 | 301 | ||
331 | serial_req.irq = dev->irq; | 302 | if (IS_ERR(serial)) |
332 | if (get_pci_port (dev, board, &serial_req, k)) | 303 | return PTR_ERR(serial); |
333 | break; | ||
334 | serial_req.flags = ASYNC_SKIP_TEST | ASYNC_AUTOPROBE; | ||
335 | serial_req.baud_base = base_baud; | ||
336 | line = register_serial (&serial_req); | ||
337 | if (line < 0) { | ||
338 | printk (KERN_DEBUG | ||
339 | "parport_serial: register_serial failed\n"); | ||
340 | continue; | ||
341 | } | ||
342 | priv->line[priv->num_ser++] = line; | ||
343 | success = 1; | ||
344 | } | ||
345 | 304 | ||
346 | return success ? 0 : 1; | 305 | priv->serial = serial; |
306 | return 0; | ||
347 | } | 307 | } |
348 | 308 | ||
349 | /* Register the parallel port(s) of a PCI card. */ | 309 | /* Register the parallel port(s) of a PCI card. */ |
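With this hunk, parport_serial stops registering UARTs by hand (one register_serial() per port, plus its own BAR and offset bookkeeping) and delegates to the 8250_pci helpers: the board geometry lives in a struct pciserial_board, and pciserial_init_ports() returns an opaque struct serial_private that is later handed to pciserial_remove_ports(). The pattern, with made-up board values and example_* names of our own (the real entries are in the table above):

    #include <linux/pci.h>
    #include <linux/err.h>
    #include <linux/8250_pci.h>

    static struct pciserial_board example_board = {
    	.flags		= FL_BASE0 | FL_BASE_BARS,
    	.num_ports	= 2,
    	.base_baud	= 115200,
    	.uart_offset	= 8,
    };

    static struct serial_private *example_serial_attach(struct pci_dev *dev)
    {
    	struct serial_private *serial;

    	serial = pciserial_init_ports(dev, &example_board);
    	if (IS_ERR(serial))
    		return NULL;	/* a real caller propagates PTR_ERR(serial) */
    	return serial;
    }

    static void example_serial_detach(struct serial_private *serial)
    {
    	if (serial)
    		pciserial_remove_ports(serial);
    }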
@@ -411,7 +371,7 @@ static int __devinit parport_serial_pci_probe (struct pci_dev *dev, | |||
411 | priv = kmalloc (sizeof *priv, GFP_KERNEL); | 371 | priv = kmalloc (sizeof *priv, GFP_KERNEL); |
412 | if (!priv) | 372 | if (!priv) |
413 | return -ENOMEM; | 373 | return -ENOMEM; |
414 | priv->num_ser = priv->num_par = 0; | 374 | memset(priv, 0, sizeof(struct parport_serial_private)); |
415 | pci_set_drvdata (dev, priv); | 375 | pci_set_drvdata (dev, priv); |
416 | 376 | ||
417 | err = pci_enable_device (dev); | 377 | err = pci_enable_device (dev); |
@@ -444,15 +404,12 @@ static void __devexit parport_serial_pci_remove (struct pci_dev *dev) | |||
444 | struct parport_serial_private *priv = pci_get_drvdata (dev); | 404 | struct parport_serial_private *priv = pci_get_drvdata (dev); |
445 | int i; | 405 | int i; |
446 | 406 | ||
407 | pci_set_drvdata(dev, NULL); | ||
408 | |||
447 | // Serial ports | 409 | // Serial ports |
448 | for (i = 0; i < priv->num_ser; i++) { | 410 | if (priv->serial) |
449 | unregister_serial (priv->line[i]); | 411 | pciserial_remove_ports(priv->serial); |
450 | 412 | ||
451 | if (priv->ser.init_fn) | ||
452 | (priv->ser.init_fn) (dev, &priv->ser, 0); | ||
453 | } | ||
454 | pci_set_drvdata (dev, NULL); | ||
455 | |||
456 | // Parallel ports | 413 | // Parallel ports |
457 | for (i = 0; i < priv->num_par; i++) | 414 | for (i = 0; i < priv->num_par; i++) |
458 | parport_pc_unregister_port (priv->port[i]); | 415 | parport_pc_unregister_port (priv->port[i]); |
@@ -461,11 +418,47 @@ static void __devexit parport_serial_pci_remove (struct pci_dev *dev) | |||
461 | return; | 418 | return; |
462 | } | 419 | } |
463 | 420 | ||
421 | static int parport_serial_pci_suspend(struct pci_dev *dev, pm_message_t state) | ||
422 | { | ||
423 | struct parport_serial_private *priv = pci_get_drvdata(dev); | ||
424 | |||
425 | if (priv->serial) | ||
426 | pciserial_suspend_ports(priv->serial); | ||
427 | |||
428 | /* FIXME: What about parport? */ | ||
429 | |||
430 | pci_save_state(dev); | ||
431 | pci_set_power_state(dev, pci_choose_state(dev, state)); | ||
432 | return 0; | ||
433 | } | ||
434 | |||
435 | static int parport_serial_pci_resume(struct pci_dev *dev) | ||
436 | { | ||
437 | struct parport_serial_private *priv = pci_get_drvdata(dev); | ||
438 | |||
439 | pci_set_power_state(dev, PCI_D0); | ||
440 | pci_restore_state(dev); | ||
441 | |||
442 | /* | ||
443 | * The device may have been disabled. Re-enable it. | ||
444 | */ | ||
445 | pci_enable_device(dev); | ||
446 | |||
447 | if (priv->serial) | ||
448 | pciserial_resume_ports(priv->serial); | ||
449 | |||
450 | /* FIXME: What about parport? */ | ||
451 | |||
452 | return 0; | ||
453 | } | ||
454 | |||
464 | static struct pci_driver parport_serial_pci_driver = { | 455 | static struct pci_driver parport_serial_pci_driver = { |
465 | .name = "parport_serial", | 456 | .name = "parport_serial", |
466 | .id_table = parport_serial_pci_tbl, | 457 | .id_table = parport_serial_pci_tbl, |
467 | .probe = parport_serial_pci_probe, | 458 | .probe = parport_serial_pci_probe, |
468 | .remove = __devexit_p(parport_serial_pci_remove), | 459 | .remove = __devexit_p(parport_serial_pci_remove), |
460 | .suspend = parport_serial_pci_suspend, | ||
461 | .resume = parport_serial_pci_resume, | ||
469 | }; | 462 | }; |
470 | 463 | ||
471 | 464 | ||
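The new suspend/resume callbacks follow the usual PCI driver ordering: quiesce whatever sits on top of the device first (here only the serial side; the parport half is still marked FIXME), save config space while the device is still reachable, then drop to the requested power state. Resume reverses that and re-enables the device, since coming back from a low-power state can leave it disabled. A bare skeleton of that ordering (sketch only, not the driver's exact code):

    static int example_pci_suspend(struct pci_dev *dev, pm_message_t state)
    {
    	/* 1. park the functions layered on the device (serial ports here) */
    	/* 2. save PCI config space while the device is still accessible   */
    	pci_save_state(dev);
    	/* 3. enter the target low-power state                             */
    	pci_set_power_state(dev, pci_choose_state(dev, state));
    	return 0;
    }

    static int example_pci_resume(struct pci_dev *dev)
    {
    	pci_set_power_state(dev, PCI_D0);
    	pci_restore_state(dev);
    	pci_enable_device(dev);		/* the device may have been disabled */
    	/* 4. restart the layered functions                                */
    	return 0;
    }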
diff --git a/drivers/scsi/ahci.c b/drivers/scsi/ahci.c index e3b9692b9688..179c95c878ac 100644 --- a/drivers/scsi/ahci.c +++ b/drivers/scsi/ahci.c | |||
@@ -1,26 +1,34 @@ | |||
1 | /* | 1 | /* |
2 | * ahci.c - AHCI SATA support | 2 | * ahci.c - AHCI SATA support |
3 | * | 3 | * |
4 | * Copyright 2004 Red Hat, Inc. | 4 | * Maintained by: Jeff Garzik <jgarzik@pobox.com> |
5 | * Please ALWAYS copy linux-ide@vger.kernel.org | ||
6 | * on emails. | ||
5 | * | 7 | * |
6 | * The contents of this file are subject to the Open | 8 | * Copyright 2004-2005 Red Hat, Inc. |
7 | * Software License version 1.1 that can be found at | ||
8 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
9 | * by reference. | ||
10 | * | 9 | * |
11 | * Alternatively, the contents of this file may be used under the terms | ||
12 | * of the GNU General Public License version 2 (the "GPL") as distributed | ||
13 | * in the kernel source COPYING file, in which case the provisions of | ||
14 | * the GPL are applicable instead of the above. If you wish to allow | ||
15 | * the use of your version of this file only under the terms of the | ||
16 | * GPL and not to allow others to use your version of this file under | ||
17 | * the OSL, indicate your decision by deleting the provisions above and | ||
18 | * replace them with the notice and other provisions required by the GPL. | ||
19 | * If you do not delete the provisions above, a recipient may use your | ||
20 | * version of this file under either the OSL or the GPL. | ||
21 | * | 10 | * |
22 | * Version 1.0 of the AHCI specification: | 11 | * This program is free software; you can redistribute it and/or modify |
12 | * it under the terms of the GNU General Public License as published by | ||
13 | * the Free Software Foundation; either version 2, or (at your option) | ||
14 | * any later version. | ||
15 | * | ||
16 | * This program is distributed in the hope that it will be useful, | ||
17 | * but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
18 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
19 | * GNU General Public License for more details. | ||
20 | * | ||
21 | * You should have received a copy of the GNU General Public License | ||
22 | * along with this program; see the file COPYING. If not, write to | ||
23 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
24 | * | ||
25 | * | ||
26 | * libata documentation is available via 'make {ps|pdf}docs', | ||
27 | * as Documentation/DocBook/libata.* | ||
28 | * | ||
29 | * AHCI hardware documentation: | ||
23 | * http://www.intel.com/technology/serialata/pdf/rev1_0.pdf | 30 | * http://www.intel.com/technology/serialata/pdf/rev1_0.pdf |
31 | * http://www.intel.com/technology/serialata/pdf/rev1_1.pdf | ||
24 | * | 32 | * |
25 | */ | 33 | */ |
26 | 34 | ||
@@ -269,6 +277,8 @@ static struct pci_device_id ahci_pci_tbl[] = { | |||
269 | board_ahci }, /* ESB2 */ | 277 | board_ahci }, /* ESB2 */ |
270 | { PCI_VENDOR_ID_INTEL, 0x2683, PCI_ANY_ID, PCI_ANY_ID, 0, 0, | 278 | { PCI_VENDOR_ID_INTEL, 0x2683, PCI_ANY_ID, PCI_ANY_ID, 0, 0, |
271 | board_ahci }, /* ESB2 */ | 279 | board_ahci }, /* ESB2 */ |
280 | { PCI_VENDOR_ID_INTEL, 0x27c6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, | ||
281 | board_ahci }, /* ICH7-M DH */ | ||
272 | { } /* terminate list */ | 282 | { } /* terminate list */ |
273 | }; | 283 | }; |
274 | 284 | ||
@@ -584,12 +594,16 @@ static void ahci_intr_error(struct ata_port *ap, u32 irq_stat) | |||
584 | 594 | ||
585 | static void ahci_eng_timeout(struct ata_port *ap) | 595 | static void ahci_eng_timeout(struct ata_port *ap) |
586 | { | 596 | { |
587 | void *mmio = ap->host_set->mmio_base; | 597 | struct ata_host_set *host_set = ap->host_set; |
598 | void *mmio = host_set->mmio_base; | ||
588 | void *port_mmio = ahci_port_base(mmio, ap->port_no); | 599 | void *port_mmio = ahci_port_base(mmio, ap->port_no); |
589 | struct ata_queued_cmd *qc; | 600 | struct ata_queued_cmd *qc; |
601 | unsigned long flags; | ||
590 | 602 | ||
591 | DPRINTK("ENTER\n"); | 603 | DPRINTK("ENTER\n"); |
592 | 604 | ||
605 | spin_lock_irqsave(&host_set->lock, flags); | ||
606 | |||
593 | ahci_intr_error(ap, readl(port_mmio + PORT_IRQ_STAT)); | 607 | ahci_intr_error(ap, readl(port_mmio + PORT_IRQ_STAT)); |
594 | 608 | ||
595 | qc = ata_qc_from_tag(ap, ap->active_tag); | 609 | qc = ata_qc_from_tag(ap, ap->active_tag); |
@@ -607,6 +621,7 @@ static void ahci_eng_timeout(struct ata_port *ap) | |||
607 | ata_qc_complete(qc, ATA_ERR); | 621 | ata_qc_complete(qc, ATA_ERR); |
608 | } | 622 | } |
609 | 623 | ||
624 | spin_unlock_irqrestore(&host_set->lock, flags); | ||
610 | } | 625 | } |
611 | 626 | ||
612 | static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc) | 627 | static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc) |
@@ -696,9 +711,6 @@ static int ahci_qc_issue(struct ata_queued_cmd *qc) | |||
696 | struct ata_port *ap = qc->ap; | 711 | struct ata_port *ap = qc->ap; |
697 | void *port_mmio = (void *) ap->ioaddr.cmd_addr; | 712 | void *port_mmio = (void *) ap->ioaddr.cmd_addr; |
698 | 713 | ||
699 | writel(1, port_mmio + PORT_SCR_ACT); | ||
700 | readl(port_mmio + PORT_SCR_ACT); /* flush */ | ||
701 | |||
702 | writel(1, port_mmio + PORT_CMD_ISSUE); | 714 | writel(1, port_mmio + PORT_CMD_ISSUE); |
703 | readl(port_mmio + PORT_CMD_ISSUE); /* flush */ | 715 | readl(port_mmio + PORT_CMD_ISSUE); /* flush */ |
704 | 716 | ||
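Beyond the license-header rewrite, ahci.c picks up the ICH7-M DH PCI ID, drops the SActive write from ahci_qc_issue() (PxSACT is only used for native queued commands, which this driver does not issue yet), and makes ahci_eng_timeout() take the host_set lock: the timeout handler runs from the SCSI error-handler thread, so without the lock it can race with ahci_interrupt() over the active command. The locking shape in isolation (sketch with an invented example_ name, not the full function):

    static void example_eng_timeout(struct ata_port *ap)
    {
    	struct ata_host_set *host_set = ap->host_set;
    	struct ata_queued_cmd *qc;
    	unsigned long flags;

    	/* EH-thread context, not IRQ: serialize with the interrupt handler */
    	spin_lock_irqsave(&host_set->lock, flags);

    	qc = ata_qc_from_tag(ap, ap->active_tag);
    	if (qc)
    		ata_qc_complete(qc, ATA_ERR);	/* error out the hung command */

    	spin_unlock_irqrestore(&host_set->lock, flags);
    }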
diff --git a/drivers/scsi/ata_piix.c b/drivers/scsi/ata_piix.c index d96ebf9d2228..fb28c1261848 100644 --- a/drivers/scsi/ata_piix.c +++ b/drivers/scsi/ata_piix.c | |||
@@ -1,24 +1,42 @@ | |||
1 | /* | 1 | /* |
2 | 2 | * ata_piix.c - Intel PATA/SATA controllers | |
3 | ata_piix.c - Intel PATA/SATA controllers | 3 | * |
4 | 4 | * Maintained by: Jeff Garzik <jgarzik@pobox.com> | |
5 | Maintained by: Jeff Garzik <jgarzik@pobox.com> | 5 | * Please ALWAYS copy linux-ide@vger.kernel.org |
6 | Please ALWAYS copy linux-ide@vger.kernel.org | 6 | * on emails. |
7 | on emails. | 7 | * |
8 | 8 | * | |
9 | 9 | * Copyright 2003-2005 Red Hat Inc | |
10 | Copyright 2003-2004 Red Hat Inc | 10 | * Copyright 2003-2005 Jeff Garzik |
11 | Copyright 2003-2004 Jeff Garzik | 11 | * |
12 | 12 | * | |
13 | 13 | * Copyright header from piix.c: | |
14 | Copyright header from piix.c: | 14 | * |
15 | 15 | * Copyright (C) 1998-1999 Andrzej Krzysztofowicz, Author and Maintainer | |
16 | Copyright (C) 1998-1999 Andrzej Krzysztofowicz, Author and Maintainer | 16 | * Copyright (C) 1998-2000 Andre Hedrick <andre@linux-ide.org> |
17 | Copyright (C) 1998-2000 Andre Hedrick <andre@linux-ide.org> | 17 | * Copyright (C) 2003 Red Hat Inc <alan@redhat.com> |
18 | Copyright (C) 2003 Red Hat Inc <alan@redhat.com> | 18 | * |
19 | 19 | * | |
20 | May be copied or modified under the terms of the GNU General Public License | 20 | * This program is free software; you can redistribute it and/or modify |
21 | 21 | * it under the terms of the GNU General Public License as published by | |
22 | * the Free Software Foundation; either version 2, or (at your option) | ||
23 | * any later version. | ||
24 | * | ||
25 | * This program is distributed in the hope that it will be useful, | ||
26 | * but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
27 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
28 | * GNU General Public License for more details. | ||
29 | * | ||
30 | * You should have received a copy of the GNU General Public License | ||
31 | * along with this program; see the file COPYING. If not, write to | ||
32 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
33 | * | ||
34 | * | ||
35 | * libata documentation is available via 'make {ps|pdf}docs', | ||
36 | * as Documentation/DocBook/libata.* | ||
37 | * | ||
38 | * Hardware documentation available at http://developer.intel.com/ | ||
39 | * | ||
22 | */ | 40 | */ |
23 | 41 | ||
24 | #include <linux/kernel.h> | 42 | #include <linux/kernel.h> |
@@ -629,13 +647,13 @@ static int piix_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) | |||
629 | port_info[1] = NULL; | 647 | port_info[1] = NULL; |
630 | 648 | ||
631 | if (port_info[0]->host_flags & PIIX_FLAG_AHCI) { | 649 | if (port_info[0]->host_flags & PIIX_FLAG_AHCI) { |
632 | u8 tmp; | 650 | u8 tmp; |
633 | pci_read_config_byte(pdev, PIIX_SCC, &tmp); | 651 | pci_read_config_byte(pdev, PIIX_SCC, &tmp); |
634 | if (tmp == PIIX_AHCI_DEVICE) { | 652 | if (tmp == PIIX_AHCI_DEVICE) { |
635 | int rc = piix_disable_ahci(pdev); | 653 | int rc = piix_disable_ahci(pdev); |
636 | if (rc) | 654 | if (rc) |
637 | return rc; | 655 | return rc; |
638 | } | 656 | } |
639 | } | 657 | } |
640 | 658 | ||
641 | if (port_info[0]->host_flags & PIIX_FLAG_COMBINED) { | 659 | if (port_info[0]->host_flags & PIIX_FLAG_COMBINED) { |
diff --git a/drivers/scsi/libata-core.c b/drivers/scsi/libata-core.c index f4e7dcb6492b..dee4b12b0342 100644 --- a/drivers/scsi/libata-core.c +++ b/drivers/scsi/libata-core.c | |||
@@ -1,25 +1,35 @@ | |||
1 | /* | 1 | /* |
2 | libata-core.c - helper library for ATA | 2 | * libata-core.c - helper library for ATA |
3 | 3 | * | |
4 | Copyright 2003-2004 Red Hat, Inc. All rights reserved. | 4 | * Maintained by: Jeff Garzik <jgarzik@pobox.com> |
5 | Copyright 2003-2004 Jeff Garzik | 5 | * Please ALWAYS copy linux-ide@vger.kernel.org |
6 | 6 | * on emails. | |
7 | The contents of this file are subject to the Open | 7 | * |
8 | Software License version 1.1 that can be found at | 8 | * Copyright 2003-2004 Red Hat, Inc. All rights reserved. |
9 | http://www.opensource.org/licenses/osl-1.1.txt and is included herein | 9 | * Copyright 2003-2004 Jeff Garzik |
10 | by reference. | 10 | * |
11 | 11 | * | |
12 | Alternatively, the contents of this file may be used under the terms | 12 | * This program is free software; you can redistribute it and/or modify |
13 | of the GNU General Public License version 2 (the "GPL") as distributed | 13 | * it under the terms of the GNU General Public License as published by |
14 | in the kernel source COPYING file, in which case the provisions of | 14 | * the Free Software Foundation; either version 2, or (at your option) |
15 | the GPL are applicable instead of the above. If you wish to allow | 15 | * any later version. |
16 | the use of your version of this file only under the terms of the | 16 | * |
17 | GPL and not to allow others to use your version of this file under | 17 | * This program is distributed in the hope that it will be useful, |
18 | the OSL, indicate your decision by deleting the provisions above and | 18 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
19 | replace them with the notice and other provisions required by the GPL. | 19 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
20 | If you do not delete the provisions above, a recipient may use your | 20 | * GNU General Public License for more details. |
21 | version of this file under either the OSL or the GPL. | 21 | * |
22 | 22 | * You should have received a copy of the GNU General Public License | |
23 | * along with this program; see the file COPYING. If not, write to | ||
24 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
25 | * | ||
26 | * | ||
27 | * libata documentation is available via 'make {ps|pdf}docs', | ||
28 | * as Documentation/DocBook/libata.* | ||
29 | * | ||
30 | * Hardware documentation available from http://www.t13.org/ and | ||
31 | * http://www.sata-io.org/ | ||
32 | * | ||
23 | */ | 33 | */ |
24 | 34 | ||
25 | #include <linux/config.h> | 35 | #include <linux/config.h> |
@@ -1304,12 +1314,12 @@ static inline u8 ata_dev_knobble(struct ata_port *ap) | |||
1304 | /** | 1314 | /** |
1305 | * ata_dev_config - Run device specific handlers and check for | 1315 | * ata_dev_config - Run device specific handlers and check for |
1306 | * SATA->PATA bridges | 1316 | * SATA->PATA bridges |
1307 | * @ap: Bus | 1317 | * @ap: Bus |
1308 | * @i: Device | 1318 | * @i: Device |
1309 | * | 1319 | * |
1310 | * LOCKING: | 1320 | * LOCKING: |
1311 | */ | 1321 | */ |
1312 | 1322 | ||
1313 | void ata_dev_config(struct ata_port *ap, unsigned int i) | 1323 | void ata_dev_config(struct ata_port *ap, unsigned int i) |
1314 | { | 1324 | { |
1315 | /* limit bridge transfers to udma5, 200 sectors */ | 1325 | /* limit bridge transfers to udma5, 200 sectors */ |
@@ -2377,6 +2387,27 @@ static int ata_sg_setup(struct ata_queued_cmd *qc) | |||
2377 | } | 2387 | } |
2378 | 2388 | ||
2379 | /** | 2389 | /** |
2390 | * ata_poll_qc_complete - turn irq back on and finish qc | ||
2391 | * @qc: Command to complete | ||
2392 | * @drv_stat: ATA status register content | ||
2393 | * | ||
2394 | * LOCKING: | ||
2395 | * None. (grabs host lock) | ||
2396 | */ | ||
2397 | |||
2398 | void ata_poll_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat) | ||
2399 | { | ||
2400 | struct ata_port *ap = qc->ap; | ||
2401 | unsigned long flags; | ||
2402 | |||
2403 | spin_lock_irqsave(&ap->host_set->lock, flags); | ||
2404 | ap->flags &= ~ATA_FLAG_NOINTR; | ||
2405 | ata_irq_on(ap); | ||
2406 | ata_qc_complete(qc, drv_stat); | ||
2407 | spin_unlock_irqrestore(&ap->host_set->lock, flags); | ||
2408 | } | ||
2409 | |||
2410 | /** | ||
2380 | * ata_pio_poll - | 2411 | * ata_pio_poll - |
2381 | * @ap: | 2412 | * @ap: |
2382 | * | 2413 | * |
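ata_poll_qc_complete() factors out what every polled-PIO completion site did by hand (turn the port interrupt back on, then complete the qc) and adds what those sites were missing: it takes the host_set lock and clears the new ATA_FLAG_NOINTR flag first. ata_pio_complete(), ata_pio_block(), ata_pio_error() and the atapi_packet_task() error path below all shrink to a single call. A converted call site, schematically (example_ name invented):

    static void example_pio_done(struct ata_queued_cmd *qc, u8 drv_stat)
    {
    	struct ata_port *ap = qc->ap;

    	ap->pio_task_state = PIO_ST_IDLE;

    	/* was:  ata_irq_on(ap); ata_qc_complete(qc, drv_stat);
    	 * now:  one helper that also takes host_set->lock and clears
    	 *       ATA_FLAG_NOINTR before re-enabling the interrupt.
    	 */
    	ata_poll_qc_complete(qc, drv_stat);
    }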
@@ -2438,11 +2469,10 @@ static void ata_pio_complete (struct ata_port *ap) | |||
2438 | u8 drv_stat; | 2469 | u8 drv_stat; |
2439 | 2470 | ||
2440 | /* | 2471 | /* |
2441 | * This is purely hueristic. This is a fast path. | 2472 | * This is purely heuristic. This is a fast path. Sometimes when |
2442 | * Sometimes when we enter, BSY will be cleared in | 2473 | * we enter, BSY will be cleared in a chk-status or two. If not, |
2443 | * a chk-status or two. If not, the drive is probably seeking | 2474 | * the drive is probably seeking or something. Snooze for a couple |
2444 | * or something. Snooze for a couple msecs, then | 2475 | * msecs, then chk-status again. If still busy, fall back to |
2445 | * chk-status again. If still busy, fall back to | ||
2446 | * PIO_ST_POLL state. | 2476 | * PIO_ST_POLL state. |
2447 | */ | 2477 | */ |
2448 | drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 10); | 2478 | drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 10); |
@@ -2467,9 +2497,7 @@ static void ata_pio_complete (struct ata_port *ap) | |||
2467 | 2497 | ||
2468 | ap->pio_task_state = PIO_ST_IDLE; | 2498 | ap->pio_task_state = PIO_ST_IDLE; |
2469 | 2499 | ||
2470 | ata_irq_on(ap); | 2500 | ata_poll_qc_complete(qc, drv_stat); |
2471 | |||
2472 | ata_qc_complete(qc, drv_stat); | ||
2473 | } | 2501 | } |
2474 | 2502 | ||
2475 | 2503 | ||
@@ -2494,6 +2522,20 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words) | |||
2494 | #endif /* __BIG_ENDIAN */ | 2522 | #endif /* __BIG_ENDIAN */ |
2495 | } | 2523 | } |
2496 | 2524 | ||
2525 | /** | ||
2526 | * ata_mmio_data_xfer - Transfer data by MMIO | ||
2527 | * @ap: port to read/write | ||
2528 | * @buf: data buffer | ||
2529 | * @buflen: buffer length | ||
2530 | * @do_write: read/write | ||
2531 | * | ||
2532 | * Transfer data from/to the device data register by MMIO. | ||
2533 | * | ||
2534 | * LOCKING: | ||
2535 | * Inherited from caller. | ||
2536 | * | ||
2537 | */ | ||
2538 | |||
2497 | static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, | 2539 | static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, |
2498 | unsigned int buflen, int write_data) | 2540 | unsigned int buflen, int write_data) |
2499 | { | 2541 | { |
@@ -2502,6 +2544,7 @@ static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, | |||
2502 | u16 *buf16 = (u16 *) buf; | 2544 | u16 *buf16 = (u16 *) buf; |
2503 | void __iomem *mmio = (void __iomem *)ap->ioaddr.data_addr; | 2545 | void __iomem *mmio = (void __iomem *)ap->ioaddr.data_addr; |
2504 | 2546 | ||
2547 | /* Transfer multiple of 2 bytes */ | ||
2505 | if (write_data) { | 2548 | if (write_data) { |
2506 | for (i = 0; i < words; i++) | 2549 | for (i = 0; i < words; i++) |
2507 | writew(le16_to_cpu(buf16[i]), mmio); | 2550 | writew(le16_to_cpu(buf16[i]), mmio); |
@@ -2509,19 +2552,76 @@ static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, | |||
2509 | for (i = 0; i < words; i++) | 2552 | for (i = 0; i < words; i++) |
2510 | buf16[i] = cpu_to_le16(readw(mmio)); | 2553 | buf16[i] = cpu_to_le16(readw(mmio)); |
2511 | } | 2554 | } |
2555 | |||
2556 | /* Transfer trailing 1 byte, if any. */ | ||
2557 | if (unlikely(buflen & 0x01)) { | ||
2558 | u16 align_buf[1] = { 0 }; | ||
2559 | unsigned char *trailing_buf = buf + buflen - 1; | ||
2560 | |||
2561 | if (write_data) { | ||
2562 | memcpy(align_buf, trailing_buf, 1); | ||
2563 | writew(le16_to_cpu(align_buf[0]), mmio); | ||
2564 | } else { | ||
2565 | align_buf[0] = cpu_to_le16(readw(mmio)); | ||
2566 | memcpy(trailing_buf, align_buf, 1); | ||
2567 | } | ||
2568 | } | ||
2512 | } | 2569 | } |
2513 | 2570 | ||
2571 | /** | ||
2572 | * ata_pio_data_xfer - Transfer data by PIO | ||
2573 | * @ap: port to read/write | ||
2574 | * @buf: data buffer | ||
2575 | * @buflen: buffer length | ||
2576 | * @do_write: read/write | ||
2577 | * | ||
2578 | * Transfer data from/to the device data register by PIO. | ||
2579 | * | ||
2580 | * LOCKING: | ||
2581 | * Inherited from caller. | ||
2582 | * | ||
2583 | */ | ||
2584 | |||
2514 | static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf, | 2585 | static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf, |
2515 | unsigned int buflen, int write_data) | 2586 | unsigned int buflen, int write_data) |
2516 | { | 2587 | { |
2517 | unsigned int dwords = buflen >> 1; | 2588 | unsigned int words = buflen >> 1; |
2518 | 2589 | ||
2590 | /* Transfer multiple of 2 bytes */ | ||
2519 | if (write_data) | 2591 | if (write_data) |
2520 | outsw(ap->ioaddr.data_addr, buf, dwords); | 2592 | outsw(ap->ioaddr.data_addr, buf, words); |
2521 | else | 2593 | else |
2522 | insw(ap->ioaddr.data_addr, buf, dwords); | 2594 | insw(ap->ioaddr.data_addr, buf, words); |
2595 | |||
2596 | /* Transfer trailing 1 byte, if any. */ | ||
2597 | if (unlikely(buflen & 0x01)) { | ||
2598 | u16 align_buf[1] = { 0 }; | ||
2599 | unsigned char *trailing_buf = buf + buflen - 1; | ||
2600 | |||
2601 | if (write_data) { | ||
2602 | memcpy(align_buf, trailing_buf, 1); | ||
2603 | outw(le16_to_cpu(align_buf[0]), ap->ioaddr.data_addr); | ||
2604 | } else { | ||
2605 | align_buf[0] = cpu_to_le16(inw(ap->ioaddr.data_addr)); | ||
2606 | memcpy(trailing_buf, align_buf, 1); | ||
2607 | } | ||
2608 | } | ||
2523 | } | 2609 | } |
2524 | 2610 | ||
2611 | /** | ||
2612 | * ata_data_xfer - Transfer data from/to the data register. | ||
2613 | * @ap: port to read/write | ||
2614 | * @buf: data buffer | ||
2615 | * @buflen: buffer length | ||
2616 | * @do_write: read/write | ||
2617 | * | ||
2618 | * Transfer data from/to the device data register. | ||
2619 | * | ||
2620 | * LOCKING: | ||
2621 | * Inherited from caller. | ||
2622 | * | ||
2623 | */ | ||
2624 | |||
2525 | static void ata_data_xfer(struct ata_port *ap, unsigned char *buf, | 2625 | static void ata_data_xfer(struct ata_port *ap, unsigned char *buf, |
2526 | unsigned int buflen, int do_write) | 2626 | unsigned int buflen, int do_write) |
2527 | { | 2627 | { |
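Both data-transfer helpers gain handling for an odd trailing byte: the taskfile data register only moves whole 16-bit words, so the last byte of an odd-length buffer is staged through a two-byte bounce word, zero-padded on writes and with only the wanted byte copied back on reads. The buffer is treated as a little-endian stream of u16s, which the le16 conversions preserve on big-endian machines. The core of the idiom (PIO variant, lightly renamed; the MMIO one substitutes readw()/writew() on the mapped data register):

    if (buflen & 0x01) {
    	u16 bounce = 0;
    	unsigned char *last = buf + buflen - 1;

    	if (write_data) {
    		memcpy(&bounce, last, 1);	/* data byte low, zero pad high */
    		outw(le16_to_cpu(bounce), ap->ioaddr.data_addr);
    	} else {
    		bounce = cpu_to_le16(inw(ap->ioaddr.data_addr));
    		memcpy(last, &bounce, 1);	/* keep only the byte that matters */
    	}
    }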
@@ -2531,6 +2631,16 @@ static void ata_data_xfer(struct ata_port *ap, unsigned char *buf, | |||
2531 | ata_pio_data_xfer(ap, buf, buflen, do_write); | 2631 | ata_pio_data_xfer(ap, buf, buflen, do_write); |
2532 | } | 2632 | } |
2533 | 2633 | ||
2634 | /** | ||
2635 | * ata_pio_sector - Transfer ATA_SECT_SIZE (512 bytes) of data. | ||
2636 | * @qc: Command on going | ||
2637 | * | ||
2638 | * Transfer ATA_SECT_SIZE of data from/to the ATA device. | ||
2639 | * | ||
2640 | * LOCKING: | ||
2641 | * Inherited from caller. | ||
2642 | */ | ||
2643 | |||
2534 | static void ata_pio_sector(struct ata_queued_cmd *qc) | 2644 | static void ata_pio_sector(struct ata_queued_cmd *qc) |
2535 | { | 2645 | { |
2536 | int do_write = (qc->tf.flags & ATA_TFLAG_WRITE); | 2646 | int do_write = (qc->tf.flags & ATA_TFLAG_WRITE); |
@@ -2569,6 +2679,18 @@ static void ata_pio_sector(struct ata_queued_cmd *qc) | |||
2569 | kunmap(page); | 2679 | kunmap(page); |
2570 | } | 2680 | } |
2571 | 2681 | ||
2682 | /** | ||
2683 | * __atapi_pio_bytes - Transfer data from/to the ATAPI device. | ||
2684 | * @qc: Command on going | ||
2685 | * @bytes: number of bytes | ||
2686 | * | ||
2687 | * Transfer Transfer data from/to the ATAPI device. | ||
2687 | * Transfer data from/to the ATAPI device. | ||
2688 | * | ||
2689 | * LOCKING: | ||
2690 | * Inherited from caller. | ||
2691 | * | ||
2692 | */ | ||
2693 | |||
2572 | static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes) | 2694 | static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes) |
2573 | { | 2695 | { |
2574 | int do_write = (qc->tf.flags & ATA_TFLAG_WRITE); | 2696 | int do_write = (qc->tf.flags & ATA_TFLAG_WRITE); |
@@ -2578,10 +2700,33 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes) | |||
2578 | unsigned char *buf; | 2700 | unsigned char *buf; |
2579 | unsigned int offset, count; | 2701 | unsigned int offset, count; |
2580 | 2702 | ||
2581 | if (qc->curbytes == qc->nbytes - bytes) | 2703 | if (qc->curbytes + bytes >= qc->nbytes) |
2582 | ap->pio_task_state = PIO_ST_LAST; | 2704 | ap->pio_task_state = PIO_ST_LAST; |
2583 | 2705 | ||
2584 | next_sg: | 2706 | next_sg: |
2707 | if (unlikely(qc->cursg >= qc->n_elem)) { | ||
2708 | /* | ||
2709 | * The end of qc->sg is reached and the device expects | ||
2710 | * more data to transfer. In order not to overrun qc->sg | ||
2711 | * and fulfill length specified in the byte count register, | ||
2712 | * - for read case, discard trailing data from the device | ||
2713 | * - for write case, padding zero data to the device | ||
2714 | */ | ||
2715 | u16 pad_buf[1] = { 0 }; | ||
2716 | unsigned int words = bytes >> 1; | ||
2717 | unsigned int i; | ||
2718 | |||
2719 | if (words) /* warning if bytes > 1 */ | ||
2720 | printk(KERN_WARNING "ata%u: %u bytes trailing data\n", | ||
2721 | ap->id, bytes); | ||
2722 | |||
2723 | for (i = 0; i < words; i++) | ||
2724 | ata_data_xfer(ap, (unsigned char*)pad_buf, 2, do_write); | ||
2725 | |||
2726 | ap->pio_task_state = PIO_ST_LAST; | ||
2727 | return; | ||
2728 | } | ||
2729 | |||
2585 | sg = &qc->sg[qc->cursg]; | 2730 | sg = &qc->sg[qc->cursg]; |
2586 | 2731 | ||
2587 | page = sg->page; | 2732 | page = sg->page; |
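The check added at the top of the next_sg loop guards against an ATAPI device that asks, via the byte-count registers, for more data than the scatterlist describes. Instead of running past qc->sg, the transfer is now finished through a dummy word: zeros are written out, read data is discarded, and a warning is printed when more than a single stray byte is involved. In outline (condensed sketch of the added branch):

    if (qc->cursg >= qc->n_elem) {		/* scatterlist exhausted, device wants more */
    	u16 pad = 0;				/* written as zeros, read into and dropped */
    	unsigned int words = bytes >> 1;

    	while (words--)
    		ata_data_xfer(ap, (unsigned char *)&pad, 2, do_write);

    	ap->pio_task_state = PIO_ST_LAST;	/* nothing further to transfer */
    	return;
    }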
@@ -2615,11 +2760,21 @@ next_sg: | |||
2615 | 2760 | ||
2616 | kunmap(page); | 2761 | kunmap(page); |
2617 | 2762 | ||
2618 | if (bytes) { | 2763 | if (bytes) |
2619 | goto next_sg; | 2764 | goto next_sg; |
2620 | } | ||
2621 | } | 2765 | } |
2622 | 2766 | ||
2767 | /** | ||
2768 | * atapi_pio_bytes - Transfer data from/to the ATAPI device. | ||
2769 | * @qc: Command on going | ||
2770 | * | ||
2771 | * Transfer Transfer data from/to the ATAPI device. | ||
2771 | * Transfer data from/to the ATAPI device. | ||
2772 | * | ||
2773 | * LOCKING: | ||
2774 | * Inherited from caller. | ||
2775 | * | ||
2776 | */ | ||
2777 | |||
2623 | static void atapi_pio_bytes(struct ata_queued_cmd *qc) | 2778 | static void atapi_pio_bytes(struct ata_queued_cmd *qc) |
2624 | { | 2779 | { |
2625 | struct ata_port *ap = qc->ap; | 2780 | struct ata_port *ap = qc->ap; |
@@ -2692,9 +2847,7 @@ static void ata_pio_block(struct ata_port *ap) | |||
2692 | if ((status & ATA_DRQ) == 0) { | 2847 | if ((status & ATA_DRQ) == 0) { |
2693 | ap->pio_task_state = PIO_ST_IDLE; | 2848 | ap->pio_task_state = PIO_ST_IDLE; |
2694 | 2849 | ||
2695 | ata_irq_on(ap); | 2850 | ata_poll_qc_complete(qc, status); |
2696 | |||
2697 | ata_qc_complete(qc, status); | ||
2698 | return; | 2851 | return; |
2699 | } | 2852 | } |
2700 | 2853 | ||
@@ -2724,9 +2877,7 @@ static void ata_pio_error(struct ata_port *ap) | |||
2724 | 2877 | ||
2725 | ap->pio_task_state = PIO_ST_IDLE; | 2878 | ap->pio_task_state = PIO_ST_IDLE; |
2726 | 2879 | ||
2727 | ata_irq_on(ap); | 2880 | ata_poll_qc_complete(qc, drv_stat | ATA_ERR); |
2728 | |||
2729 | ata_qc_complete(qc, drv_stat | ATA_ERR); | ||
2730 | } | 2881 | } |
2731 | 2882 | ||
2732 | static void ata_pio_task(void *_data) | 2883 | static void ata_pio_task(void *_data) |
@@ -2832,8 +2983,10 @@ static void atapi_request_sense(struct ata_port *ap, struct ata_device *dev, | |||
2832 | static void ata_qc_timeout(struct ata_queued_cmd *qc) | 2983 | static void ata_qc_timeout(struct ata_queued_cmd *qc) |
2833 | { | 2984 | { |
2834 | struct ata_port *ap = qc->ap; | 2985 | struct ata_port *ap = qc->ap; |
2986 | struct ata_host_set *host_set = ap->host_set; | ||
2835 | struct ata_device *dev = qc->dev; | 2987 | struct ata_device *dev = qc->dev; |
2836 | u8 host_stat = 0, drv_stat; | 2988 | u8 host_stat = 0, drv_stat; |
2989 | unsigned long flags; | ||
2837 | 2990 | ||
2838 | DPRINTK("ENTER\n"); | 2991 | DPRINTK("ENTER\n"); |
2839 | 2992 | ||
@@ -2844,7 +2997,9 @@ static void ata_qc_timeout(struct ata_queued_cmd *qc) | |||
2844 | if (!(cmd->eh_eflags & SCSI_EH_CANCEL_CMD)) { | 2997 | if (!(cmd->eh_eflags & SCSI_EH_CANCEL_CMD)) { |
2845 | 2998 | ||
2846 | /* finish completing original command */ | 2999 | /* finish completing original command */ |
3000 | spin_lock_irqsave(&host_set->lock, flags); | ||
2847 | __ata_qc_complete(qc); | 3001 | __ata_qc_complete(qc); |
3002 | spin_unlock_irqrestore(&host_set->lock, flags); | ||
2848 | 3003 | ||
2849 | atapi_request_sense(ap, dev, cmd); | 3004 | atapi_request_sense(ap, dev, cmd); |
2850 | 3005 | ||
@@ -2855,6 +3010,8 @@ static void ata_qc_timeout(struct ata_queued_cmd *qc) | |||
2855 | } | 3010 | } |
2856 | } | 3011 | } |
2857 | 3012 | ||
3013 | spin_lock_irqsave(&host_set->lock, flags); | ||
3014 | |||
2858 | /* hack alert! We cannot use the supplied completion | 3015 | /* hack alert! We cannot use the supplied completion |
2859 | * function from inside the ->eh_strategy_handler() thread. | 3016 | * function from inside the ->eh_strategy_handler() thread. |
2860 | * libata is the only user of ->eh_strategy_handler() in | 3017 | * libata is the only user of ->eh_strategy_handler() in |
@@ -2870,7 +3027,7 @@ static void ata_qc_timeout(struct ata_queued_cmd *qc) | |||
2870 | host_stat = ap->ops->bmdma_status(ap); | 3027 | host_stat = ap->ops->bmdma_status(ap); |
2871 | 3028 | ||
2872 | /* before we do anything else, clear DMA-Start bit */ | 3029 | /* before we do anything else, clear DMA-Start bit */ |
2873 | ap->ops->bmdma_stop(ap); | 3030 | ap->ops->bmdma_stop(qc); |
2874 | 3031 | ||
2875 | /* fall through */ | 3032 | /* fall through */ |
2876 | 3033 | ||
@@ -2888,6 +3045,9 @@ static void ata_qc_timeout(struct ata_queued_cmd *qc) | |||
2888 | ata_qc_complete(qc, drv_stat); | 3045 | ata_qc_complete(qc, drv_stat); |
2889 | break; | 3046 | break; |
2890 | } | 3047 | } |
3048 | |||
3049 | spin_unlock_irqrestore(&host_set->lock, flags); | ||
3050 | |||
2891 | out: | 3051 | out: |
2892 | DPRINTK("EXIT\n"); | 3052 | DPRINTK("EXIT\n"); |
2893 | } | 3053 | } |
@@ -3061,9 +3221,14 @@ void ata_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat) | |||
3061 | if (likely(qc->flags & ATA_QCFLAG_DMAMAP)) | 3221 | if (likely(qc->flags & ATA_QCFLAG_DMAMAP)) |
3062 | ata_sg_clean(qc); | 3222 | ata_sg_clean(qc); |
3063 | 3223 | ||
3224 | /* atapi: mark qc as inactive to prevent the interrupt handler | ||
3225 | * from completing the command twice later, before the error handler | ||
3226 | * is called. (when rc != 0 and atapi request sense is needed) | ||
3227 | */ | ||
3228 | qc->flags &= ~ATA_QCFLAG_ACTIVE; | ||
3229 | |||
3064 | /* call completion callback */ | 3230 | /* call completion callback */ |
3065 | rc = qc->complete_fn(qc, drv_stat); | 3231 | rc = qc->complete_fn(qc, drv_stat); |
3066 | qc->flags &= ~ATA_QCFLAG_ACTIVE; | ||
3067 | 3232 | ||
3068 | /* if callback indicates not to complete command (non-zero), | 3233 | /* if callback indicates not to complete command (non-zero), |
3069 | * return immediately | 3234 | * return immediately |
@@ -3193,11 +3358,13 @@ int ata_qc_issue_prot(struct ata_queued_cmd *qc) | |||
3193 | break; | 3358 | break; |
3194 | 3359 | ||
3195 | case ATA_PROT_ATAPI_NODATA: | 3360 | case ATA_PROT_ATAPI_NODATA: |
3361 | ap->flags |= ATA_FLAG_NOINTR; | ||
3196 | ata_tf_to_host_nolock(ap, &qc->tf); | 3362 | ata_tf_to_host_nolock(ap, &qc->tf); |
3197 | queue_work(ata_wq, &ap->packet_task); | 3363 | queue_work(ata_wq, &ap->packet_task); |
3198 | break; | 3364 | break; |
3199 | 3365 | ||
3200 | case ATA_PROT_ATAPI_DMA: | 3366 | case ATA_PROT_ATAPI_DMA: |
3367 | ap->flags |= ATA_FLAG_NOINTR; | ||
3201 | ap->ops->tf_load(ap, &qc->tf); /* load tf registers */ | 3368 | ap->ops->tf_load(ap, &qc->tf); /* load tf registers */ |
3202 | ap->ops->bmdma_setup(qc); /* set up bmdma */ | 3369 | ap->ops->bmdma_setup(qc); /* set up bmdma */ |
3203 | queue_work(ata_wq, &ap->packet_task); | 3370 | queue_work(ata_wq, &ap->packet_task); |
@@ -3242,7 +3409,7 @@ static void ata_bmdma_setup_mmio (struct ata_queued_cmd *qc) | |||
3242 | } | 3409 | } |
3243 | 3410 | ||
3244 | /** | 3411 | /** |
3245 | * ata_bmdma_start - Start a PCI IDE BMDMA transaction | 3412 | * ata_bmdma_start_mmio - Start a PCI IDE BMDMA transaction |
3246 | * @qc: Info associated with this ATA transaction. | 3413 | * @qc: Info associated with this ATA transaction. |
3247 | * | 3414 | * |
3248 | * LOCKING: | 3415 | * LOCKING: |
@@ -3413,7 +3580,7 @@ u8 ata_bmdma_status(struct ata_port *ap) | |||
3413 | 3580 | ||
3414 | /** | 3581 | /** |
3415 | * ata_bmdma_stop - Stop PCI IDE BMDMA transfer | 3582 | * ata_bmdma_stop - Stop PCI IDE BMDMA transfer |
3416 | * @ap: Port associated with this ATA transaction. | 3583 | * @qc: Command we are ending DMA for |
3417 | * | 3584 | * |
3418 | * Clears the ATA_DMA_START flag in the dma control register | 3585 | * Clears the ATA_DMA_START flag in the dma control register |
3419 | * | 3586 | * |
@@ -3423,8 +3590,9 @@ u8 ata_bmdma_status(struct ata_port *ap) | |||
3423 | * spin_lock_irqsave(host_set lock) | 3590 | * spin_lock_irqsave(host_set lock) |
3424 | */ | 3591 | */ |
3425 | 3592 | ||
3426 | void ata_bmdma_stop(struct ata_port *ap) | 3593 | void ata_bmdma_stop(struct ata_queued_cmd *qc) |
3427 | { | 3594 | { |
3595 | struct ata_port *ap = qc->ap; | ||
3428 | if (ap->flags & ATA_FLAG_MMIO) { | 3596 | if (ap->flags & ATA_FLAG_MMIO) { |
3429 | void __iomem *mmio = (void __iomem *) ap->ioaddr.bmdma_addr; | 3597 | void __iomem *mmio = (void __iomem *) ap->ioaddr.bmdma_addr; |
3430 | 3598 | ||
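ata_bmdma_stop() and the ->bmdma_stop() hook now take the queued command rather than the port, so implementations that need per-command state get it directly; the callers in ata_host_intr() and ata_qc_timeout() already hold the qc and simply pass it. For drivers that only need the port, the conversion is mechanical. Roughly the MMIO half of the function after the change (sketch; the body is reconstructed, since it is not shown in full above):

    void example_bmdma_stop(struct ata_queued_cmd *qc)
    {
    	struct ata_port *ap = qc->ap;	/* the port is still reachable via the qc */
    	void __iomem *mmio = (void __iomem *) ap->ioaddr.bmdma_addr;

    	/* clear ATA_DMA_START in the BMDMA command register, as before */
    	writeb(readb(mmio + ATA_DMA_CMD) & ~ATA_DMA_START, mmio + ATA_DMA_CMD);
    }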
@@ -3476,7 +3644,7 @@ inline unsigned int ata_host_intr (struct ata_port *ap, | |||
3476 | goto idle_irq; | 3644 | goto idle_irq; |
3477 | 3645 | ||
3478 | /* before we do anything else, clear DMA-Start bit */ | 3646 | /* before we do anything else, clear DMA-Start bit */ |
3479 | ap->ops->bmdma_stop(ap); | 3647 | ap->ops->bmdma_stop(qc); |
3480 | 3648 | ||
3481 | /* fall through */ | 3649 | /* fall through */ |
3482 | 3650 | ||
@@ -3551,7 +3719,8 @@ irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs) | |||
3551 | struct ata_port *ap; | 3719 | struct ata_port *ap; |
3552 | 3720 | ||
3553 | ap = host_set->ports[i]; | 3721 | ap = host_set->ports[i]; |
3554 | if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) { | 3722 | if (ap && |
3723 | !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { | ||
3555 | struct ata_queued_cmd *qc; | 3724 | struct ata_queued_cmd *qc; |
3556 | 3725 | ||
3557 | qc = ata_qc_from_tag(ap, ap->active_tag); | 3726 | qc = ata_qc_from_tag(ap, ap->active_tag); |
@@ -3603,19 +3772,27 @@ static void atapi_packet_task(void *_data) | |||
3603 | /* send SCSI cdb */ | 3772 | /* send SCSI cdb */ |
3604 | DPRINTK("send cdb\n"); | 3773 | DPRINTK("send cdb\n"); |
3605 | assert(ap->cdb_len >= 12); | 3774 | assert(ap->cdb_len >= 12); |
3606 | ata_data_xfer(ap, qc->cdb, ap->cdb_len, 1); | ||
3607 | 3775 | ||
3608 | /* if we are DMA'ing, irq handler takes over from here */ | 3776 | if (qc->tf.protocol == ATA_PROT_ATAPI_DMA || |
3609 | if (qc->tf.protocol == ATA_PROT_ATAPI_DMA) | 3777 | qc->tf.protocol == ATA_PROT_ATAPI_NODATA) { |
3610 | ap->ops->bmdma_start(qc); /* initiate bmdma */ | 3778 | unsigned long flags; |
3611 | 3779 | ||
3612 | /* non-data commands are also handled via irq */ | 3780 | /* Once we're done issuing command and kicking bmdma, |
3613 | else if (qc->tf.protocol == ATA_PROT_ATAPI_NODATA) { | 3781 | * irq handler takes over. To not lose irq, we need |
3614 | /* do nothing */ | 3782 | * to clear NOINTR flag before sending cdb, but |
3615 | } | 3783 | * interrupt handler shouldn't be invoked before we're |
3784 | * finished. Hence, the following locking. | ||
3785 | */ | ||
3786 | spin_lock_irqsave(&ap->host_set->lock, flags); | ||
3787 | ap->flags &= ~ATA_FLAG_NOINTR; | ||
3788 | ata_data_xfer(ap, qc->cdb, ap->cdb_len, 1); | ||
3789 | if (qc->tf.protocol == ATA_PROT_ATAPI_DMA) | ||
3790 | ap->ops->bmdma_start(qc); /* initiate bmdma */ | ||
3791 | spin_unlock_irqrestore(&ap->host_set->lock, flags); | ||
3792 | } else { | ||
3793 | ata_data_xfer(ap, qc->cdb, ap->cdb_len, 1); | ||
3616 | 3794 | ||
3617 | /* PIO commands are handled by polling */ | 3795 | /* PIO commands are handled by polling */ |
3618 | else { | ||
3619 | ap->pio_task_state = PIO_ST; | 3796 | ap->pio_task_state = PIO_ST; |
3620 | queue_work(ata_wq, &ap->pio_task); | 3797 | queue_work(ata_wq, &ap->pio_task); |
3621 | } | 3798 | } |
@@ -3623,7 +3800,7 @@ static void atapi_packet_task(void *_data) | |||
3623 | return; | 3800 | return; |
3624 | 3801 | ||
3625 | err_out: | 3802 | err_out: |
3626 | ata_qc_complete(qc, ATA_ERR); | 3803 | ata_poll_qc_complete(qc, ATA_ERR); |
3627 | } | 3804 | } |
3628 | 3805 | ||
3629 | 3806 | ||
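The atapi_packet_task() hunk above closes a race: the interrupt handler now skips ports flagged ATA_FLAG_NOINTR, so the flag may only be cleared, and the CDB and BMDMA issued, while the host_set lock keeps the handler out. A minimal sketch of that pattern follows; the helper name issue_atapi_dma() is invented for illustration and error handling is omitted.

/* Sketch only: the NOINTR handshake used by atapi_packet_task() above,
 * with a hypothetical helper name and no error handling. */
static void issue_atapi_dma(struct ata_queued_cmd *qc)
{
	struct ata_port *ap = qc->ap;
	unsigned long flags;

	spin_lock_irqsave(&ap->host_set->lock, flags);
	ap->flags &= ~ATA_FLAG_NOINTR;		/* irq handler may run once we unlock */
	ata_data_xfer(ap, qc->cdb, ap->cdb_len, 1);	/* send the SCSI CDB */
	if (qc->tf.protocol == ATA_PROT_ATAPI_DMA)
		ap->ops->bmdma_start(qc);	/* kick BMDMA before releasing the lock */
	spin_unlock_irqrestore(&ap->host_set->lock, flags);
}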
diff --git a/drivers/scsi/libata-scsi.c b/drivers/scsi/libata-scsi.c index 6a75ec2187fd..346eb36b1e31 100644 --- a/drivers/scsi/libata-scsi.c +++ b/drivers/scsi/libata-scsi.c | |||
@@ -1,25 +1,36 @@ | |||
1 | /* | 1 | /* |
2 | libata-scsi.c - helper library for ATA | 2 | * libata-scsi.c - helper library for ATA |
3 | 3 | * | |
4 | Copyright 2003-2004 Red Hat, Inc. All rights reserved. | 4 | * Maintained by: Jeff Garzik <jgarzik@pobox.com> |
5 | Copyright 2003-2004 Jeff Garzik | 5 | * Please ALWAYS copy linux-ide@vger.kernel.org |
6 | 6 | * on emails. | |
7 | The contents of this file are subject to the Open | 7 | * |
8 | Software License version 1.1 that can be found at | 8 | * Copyright 2003-2004 Red Hat, Inc. All rights reserved. |
9 | http://www.opensource.org/licenses/osl-1.1.txt and is included herein | 9 | * Copyright 2003-2004 Jeff Garzik |
10 | by reference. | 10 | * |
11 | 11 | * | |
12 | Alternatively, the contents of this file may be used under the terms | 12 | * This program is free software; you can redistribute it and/or modify |
13 | of the GNU General Public License version 2 (the "GPL") as distributed | 13 | * it under the terms of the GNU General Public License as published by |
14 | in the kernel source COPYING file, in which case the provisions of | 14 | * the Free Software Foundation; either version 2, or (at your option) |
15 | the GPL are applicable instead of the above. If you wish to allow | 15 | * any later version. |
16 | the use of your version of this file only under the terms of the | 16 | * |
17 | GPL and not to allow others to use your version of this file under | 17 | * This program is distributed in the hope that it will be useful, |
18 | the OSL, indicate your decision by deleting the provisions above and | 18 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
19 | replace them with the notice and other provisions required by the GPL. | 19 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
20 | If you do not delete the provisions above, a recipient may use your | 20 | * GNU General Public License for more details. |
21 | version of this file under either the OSL or the GPL. | 21 | * |
22 | 22 | * You should have received a copy of the GNU General Public License | |
23 | * along with this program; see the file COPYING. If not, write to | ||
24 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
25 | * | ||
26 | * | ||
27 | * libata documentation is available via 'make {ps|pdf}docs', | ||
28 | * as Documentation/DocBook/libata.* | ||
29 | * | ||
30 | * Hardware documentation available from | ||
31 | * - http://www.t10.org/ | ||
32 | * - http://www.t13.org/ | ||
33 | * | ||
23 | */ | 34 | */ |
24 | 35 | ||
25 | #include <linux/kernel.h> | 36 | #include <linux/kernel.h> |
@@ -392,6 +403,60 @@ int ata_scsi_error(struct Scsi_Host *host) | |||
392 | } | 403 | } |
393 | 404 | ||
394 | /** | 405 | /** |
406 | * ata_scsi_start_stop_xlat - Translate SCSI START STOP UNIT command | ||
407 | * @qc: Storage for translated ATA taskfile | ||
408 | * @scsicmd: SCSI command to translate | ||
409 | * | ||
410 | * Sets up an ATA taskfile to issue STANDBY (to stop) or READ VERIFY | ||
411 | * (to start). Perhaps these commands should be preceded by | ||
412 | * CHECK POWER MODE to see what power mode the device is already in. | ||
413 | * [See SAT revision 5 at www.t10.org] | ||
414 | * | ||
415 | * LOCKING: | ||
416 | * spin_lock_irqsave(host_set lock) | ||
417 | * | ||
418 | * RETURNS: | ||
419 | * Zero on success, non-zero on error. | ||
420 | */ | ||
421 | |||
422 | static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc, | ||
423 | u8 *scsicmd) | ||
424 | { | ||
425 | struct ata_taskfile *tf = &qc->tf; | ||
426 | |||
427 | tf->flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR; | ||
428 | tf->protocol = ATA_PROT_NODATA; | ||
429 | if (scsicmd[1] & 0x1) { | ||
430 | ; /* ignore IMMED bit, violates sat-r05 */ | ||
431 | } | ||
432 | if (scsicmd[4] & 0x2) | ||
433 | return 1; /* LOEJ bit set not supported */ | ||
434 | if (((scsicmd[4] >> 4) & 0xf) != 0) | ||
435 | return 1; /* power conditions not supported */ | ||
436 | if (scsicmd[4] & 0x1) { | ||
437 | tf->nsect = 1; /* 1 sector, lba=0 */ | ||
438 | tf->lbah = 0x0; | ||
439 | tf->lbam = 0x0; | ||
440 | tf->lbal = 0x0; | ||
441 | tf->device |= ATA_LBA; | ||
442 | tf->command = ATA_CMD_VERIFY; /* READ VERIFY */ | ||
443 | } else { | ||
444 | tf->nsect = 0; /* time period value (0 implies now) */ | ||
445 | tf->command = ATA_CMD_STANDBY; | ||
446 | /* Consider: ATA STANDBY IMMEDIATE command */ | ||
447 | } | ||
448 | /* | ||
449 | * Standby and Idle condition timers could be implemented but that | ||
450 | * would require libata to implement the Power condition mode page | ||
451 | * and allow the user to change it. Changing mode pages requires | ||
452 | * MODE SELECT to be implemented. | ||
453 | */ | ||
454 | |||
455 | return 0; | ||
456 | } | ||
457 | |||
458 | |||
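For reference, a hedged illustration of what the translator above produces for the two supported START STOP UNIT forms; the CDB byte layout follows SBC, and the queued-command setup and issue are omitted.

/* Illustration only (qc allocation and issue elided):
 *
 *   u8 start_cdb[6] = { 0x1b, 0, 0, 0, 0x01, 0 };   START bit set
 *   u8 stop_cdb[6]  = { 0x1b, 0, 0, 0, 0x00, 0 };   START bit clear
 *
 * ata_scsi_start_stop_xlat(qc, start_cdb) returns 0 and builds
 *   tf.command = ATA_CMD_VERIFY, 1 sector at LBA 0, ATA_PROT_NODATA;
 * ata_scsi_start_stop_xlat(qc, stop_cdb) returns 0 and builds
 *   tf.command = ATA_CMD_STANDBY with nsect = 0 (enter standby now).
 * A CDB with LOEJ set (byte 4, bit 1) or a non-zero power condition
 * (byte 4, high nibble) makes the translator return 1 (unsupported).
 */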
459 | /** | ||
395 | * ata_scsi_flush_xlat - Translate SCSI SYNCHRONIZE CACHE command | 460 | * ata_scsi_flush_xlat - Translate SCSI SYNCHRONIZE CACHE command |
396 | * @qc: Storage for translated ATA taskfile | 461 | * @qc: Storage for translated ATA taskfile |
397 | * @scsicmd: SCSI command to translate (ignored) | 462 | * @scsicmd: SCSI command to translate (ignored) |
@@ -576,11 +641,19 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) | |||
576 | tf->lbah = scsicmd[3]; | 641 | tf->lbah = scsicmd[3]; |
577 | 642 | ||
578 | VPRINTK("ten-byte command\n"); | 643 | VPRINTK("ten-byte command\n"); |
644 | if (qc->nsect == 0) /* we don't support length==0 cmds */ | ||
645 | return 1; | ||
579 | return 0; | 646 | return 0; |
580 | } | 647 | } |
581 | 648 | ||
582 | if (scsicmd[0] == READ_6 || scsicmd[0] == WRITE_6) { | 649 | if (scsicmd[0] == READ_6 || scsicmd[0] == WRITE_6) { |
583 | qc->nsect = tf->nsect = scsicmd[4]; | 650 | qc->nsect = tf->nsect = scsicmd[4]; |
651 | if (!qc->nsect) { | ||
652 | qc->nsect = 256; | ||
653 | if (lba48) | ||
654 | tf->hob_nsect = 1; | ||
655 | } | ||
656 | |||
584 | tf->lbal = scsicmd[3]; | 657 | tf->lbal = scsicmd[3]; |
585 | tf->lbam = scsicmd[2]; | 658 | tf->lbam = scsicmd[2]; |
586 | tf->lbah = scsicmd[1] & 0x1f; /* mask out reserved bits */ | 659 | tf->lbah = scsicmd[1] & 0x1f; /* mask out reserved bits */ |
@@ -620,6 +693,8 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) | |||
620 | tf->lbah = scsicmd[7]; | 693 | tf->lbah = scsicmd[7]; |
621 | 694 | ||
622 | VPRINTK("sixteen-byte command\n"); | 695 | VPRINTK("sixteen-byte command\n"); |
696 | if (qc->nsect == 0) /* we don't support length==0 cmds */ | ||
697 | return 1; | ||
623 | return 0; | 698 | return 0; |
624 | } | 699 | } |
625 | 700 | ||
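The zero-length handling added to the rw translation above encodes the SBC rule that a 6-byte READ/WRITE with a transfer length of 0 means 256 sectors, while zero-length 10- and 16-byte commands are rejected. A worked example of the 6-byte case, for illustration only:

/* scsicmd[4] == 0 in a READ(6)/WRITE(6) CDB:
 *   qc->nsect     = 256;  driver-side sector count
 *   tf->nsect     = 0;    28-bit ATA: a count of 0 already means 256
 *   tf->hob_nsect = 1;    LBA48: count = (1 << 8) | 0 = 256
 * Zero-length READ(10)/WRITE(10)/READ(16)/WRITE(16) return 1 instead,
 * since zero-length transfers are not supported.
 */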
@@ -1435,6 +1510,8 @@ static inline ata_xlat_func_t ata_get_xlat_func(struct ata_device *dev, u8 cmd) | |||
1435 | case VERIFY: | 1510 | case VERIFY: |
1436 | case VERIFY_16: | 1511 | case VERIFY_16: |
1437 | return ata_scsi_verify_xlat; | 1512 | return ata_scsi_verify_xlat; |
1513 | case START_STOP: | ||
1514 | return ata_scsi_start_stop_xlat; | ||
1438 | } | 1515 | } |
1439 | 1516 | ||
1440 | return NULL; | 1517 | return NULL; |
diff --git a/drivers/scsi/libata.h b/drivers/scsi/libata.h index 3e7f4843020f..809c634afbcd 100644 --- a/drivers/scsi/libata.h +++ b/drivers/scsi/libata.h | |||
@@ -1,25 +1,28 @@ | |||
1 | /* | 1 | /* |
2 | libata.h - helper library for ATA | 2 | * libata.h - helper library for ATA |
3 | 3 | * | |
4 | Copyright 2003-2004 Red Hat, Inc. All rights reserved. | 4 | * Copyright 2003-2004 Red Hat, Inc. All rights reserved. |
5 | Copyright 2003-2004 Jeff Garzik | 5 | * Copyright 2003-2004 Jeff Garzik |
6 | 6 | * | |
7 | The contents of this file are subject to the Open | 7 | * |
8 | Software License version 1.1 that can be found at | 8 | * This program is free software; you can redistribute it and/or modify |
9 | http://www.opensource.org/licenses/osl-1.1.txt and is included herein | 9 | * it under the terms of the GNU General Public License as published by |
10 | by reference. | 10 | * the Free Software Foundation; either version 2, or (at your option) |
11 | 11 | * any later version. | |
12 | Alternatively, the contents of this file may be used under the terms | 12 | * |
13 | of the GNU General Public License version 2 (the "GPL") as distributed | 13 | * This program is distributed in the hope that it will be useful, |
14 | in the kernel source COPYING file, in which case the provisions of | 14 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
15 | the GPL are applicable instead of the above. If you wish to allow | 15 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
16 | the use of your version of this file only under the terms of the | 16 | * GNU General Public License for more details. |
17 | GPL and not to allow others to use your version of this file under | 17 | * |
18 | the OSL, indicate your decision by deleting the provisions above and | 18 | * You should have received a copy of the GNU General Public License |
19 | replace them with the notice and other provisions required by the GPL. | 19 | * along with this program; see the file COPYING. If not, write to |
20 | If you do not delete the provisions above, a recipient may use your | 20 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. |
21 | version of this file under either the OSL or the GPL. | 21 | * |
22 | 22 | * | |
23 | * libata documentation is available via 'make {ps|pdf}docs', | ||
24 | * as Documentation/DocBook/libata.* | ||
25 | * | ||
23 | */ | 26 | */ |
24 | 27 | ||
25 | #ifndef __LIBATA_H__ | 28 | #ifndef __LIBATA_H__ |
@@ -72,7 +75,7 @@ extern unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf, | |||
72 | extern void ata_scsi_badcmd(struct scsi_cmnd *cmd, | 75 | extern void ata_scsi_badcmd(struct scsi_cmnd *cmd, |
73 | void (*done)(struct scsi_cmnd *), | 76 | void (*done)(struct scsi_cmnd *), |
74 | u8 asc, u8 ascq); | 77 | u8 asc, u8 ascq); |
75 | extern void ata_scsi_rbuf_fill(struct ata_scsi_args *args, | 78 | extern void ata_scsi_rbuf_fill(struct ata_scsi_args *args, |
76 | unsigned int (*actor) (struct ata_scsi_args *args, | 79 | unsigned int (*actor) (struct ata_scsi_args *args, |
77 | u8 *rbuf, unsigned int buflen)); | 80 | u8 *rbuf, unsigned int buflen)); |
78 | 81 | ||
diff --git a/drivers/scsi/sata_nv.c b/drivers/scsi/sata_nv.c index b0403ccd8a25..03d9bc6e69df 100644 --- a/drivers/scsi/sata_nv.c +++ b/drivers/scsi/sata_nv.c | |||
@@ -4,21 +4,37 @@ | |||
4 | * Copyright 2004 NVIDIA Corp. All rights reserved. | 4 | * Copyright 2004 NVIDIA Corp. All rights reserved. |
5 | * Copyright 2004 Andrew Chew | 5 | * Copyright 2004 Andrew Chew |
6 | * | 6 | * |
7 | * The contents of this file are subject to the Open | ||
8 | * Software License version 1.1 that can be found at | ||
9 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
10 | * by reference. | ||
11 | * | 7 | * |
12 | * Alternatively, the contents of this file may be used under the terms | 8 | * This program is free software; you can redistribute it and/or modify |
13 | * of the GNU General Public License version 2 (the "GPL") as distributed | 9 | * it under the terms of the GNU General Public License as published by |
14 | * in the kernel source COPYING file, in which case the provisions of | 10 | * the Free Software Foundation; either version 2, or (at your option) |
15 | * the GPL are applicable instead of the above. If you wish to allow | 11 | * any later version. |
16 | * the use of your version of this file only under the terms of the | 12 | * |
17 | * GPL and not to allow others to use your version of this file under | 13 | * This program is distributed in the hope that it will be useful, |
18 | * the OSL, indicate your decision by deleting the provisions above and | 14 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
19 | * replace them with the notice and other provisions required by the GPL. | 15 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
20 | * If you do not delete the provisions above, a recipient may use your | 16 | * GNU General Public License for more details. |
21 | * version of this file under either the OSL or the GPL. | 17 | * |
18 | * You should have received a copy of the GNU General Public License | ||
19 | * along with this program; see the file COPYING. If not, write to | ||
20 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
21 | * | ||
22 | * | ||
23 | * libata documentation is available via 'make {ps|pdf}docs', | ||
24 | * as Documentation/DocBook/libata.* | ||
25 | * | ||
26 | * No hardware documentation available outside of NVIDIA. | ||
27 | * This driver programs the NVIDIA SATA controller in a fashion | ||
28 | * similar to other PCI IDE BMDMA controllers, with a few | ||
29 | * NV-specific details such as register offsets, SATA phy location, | ||
30 | * hotplug info, etc. | ||
31 | * | ||
32 | * | ||
33 | * 0.08 | ||
34 | * - Added support for MCP51 and MCP55. | ||
35 | * | ||
36 | * 0.07 | ||
37 | * - Added support for RAID class code. | ||
22 | * | 38 | * |
23 | * 0.06 | 39 | * 0.06 |
24 | * - Added generic SATA support by using a pci_device_id that filters on | 40 | * - Added generic SATA support by using a pci_device_id that filters on |
@@ -48,7 +64,7 @@ | |||
48 | #include <linux/libata.h> | 64 | #include <linux/libata.h> |
49 | 65 | ||
50 | #define DRV_NAME "sata_nv" | 66 | #define DRV_NAME "sata_nv" |
51 | #define DRV_VERSION "0.6" | 67 | #define DRV_VERSION "0.8" |
52 | 68 | ||
53 | #define NV_PORTS 2 | 69 | #define NV_PORTS 2 |
54 | #define NV_PIO_MASK 0x1f | 70 | #define NV_PIO_MASK 0x1f |
@@ -116,7 +132,9 @@ enum nv_host_type | |||
116 | GENERIC, | 132 | GENERIC, |
117 | NFORCE2, | 133 | NFORCE2, |
118 | NFORCE3, | 134 | NFORCE3, |
119 | CK804 | 135 | CK804, |
136 | MCP51, | ||
137 | MCP55 | ||
120 | }; | 138 | }; |
121 | 139 | ||
122 | static struct pci_device_id nv_pci_tbl[] = { | 140 | static struct pci_device_id nv_pci_tbl[] = { |
@@ -134,9 +152,18 @@ static struct pci_device_id nv_pci_tbl[] = { | |||
134 | PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 }, | 152 | PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 }, |
135 | { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA2, | 153 | { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA2, |
136 | PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 }, | 154 | PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 }, |
155 | { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA, | ||
156 | PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP51 }, | ||
157 | { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA2, | ||
158 | PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP51 }, | ||
159 | { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA, | ||
160 | PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP55 }, | ||
137 | { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, | 161 | { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, |
138 | PCI_ANY_ID, PCI_ANY_ID, | 162 | PCI_ANY_ID, PCI_ANY_ID, |
139 | PCI_CLASS_STORAGE_IDE<<8, 0xffff00, GENERIC }, | 163 | PCI_CLASS_STORAGE_IDE<<8, 0xffff00, GENERIC }, |
164 | { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, | ||
165 | PCI_ANY_ID, PCI_ANY_ID, | ||
166 | PCI_CLASS_STORAGE_RAID<<8, 0xffff00, GENERIC }, | ||
140 | { 0, } /* terminate list */ | 167 | { 0, } /* terminate list */ |
141 | }; | 168 | }; |
142 | 169 | ||
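Besides the MCP51/MCP55 device IDs, the table above gains a class-code wildcard so NVIDIA parts reporting themselves as RAID controllers also bind to sata_nv. A hedged sketch of how such an entry matches; the helper name is invented for illustration.

/* Sketch only: the RAID wildcard keys on the 24-bit PCI class code
 * (base 0x01 storage, subclass 0x04 RAID) and ignores the prog-if
 * byte via the 0xffff00 mask, for any NVIDIA device ID. */
static int matches_nv_raid_entry(const struct pci_dev *dev)
{
	return dev->vendor == PCI_VENDOR_ID_NVIDIA &&
	       (dev->class & 0xffff00) == (PCI_CLASS_STORAGE_RAID << 8);
}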
@@ -274,7 +301,8 @@ static irqreturn_t nv_interrupt (int irq, void *dev_instance, | |||
274 | struct ata_port *ap; | 301 | struct ata_port *ap; |
275 | 302 | ||
276 | ap = host_set->ports[i]; | 303 | ap = host_set->ports[i]; |
277 | if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) { | 304 | if (ap && |
305 | !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { | ||
278 | struct ata_queued_cmd *qc; | 306 | struct ata_queued_cmd *qc; |
279 | 307 | ||
280 | qc = ata_qc_from_tag(ap, ap->active_tag); | 308 | qc = ata_qc_from_tag(ap, ap->active_tag); |
diff --git a/drivers/scsi/sata_promise.c b/drivers/scsi/sata_promise.c index ad31b8afec6f..7c4f6ecc1cc9 100644 --- a/drivers/scsi/sata_promise.c +++ b/drivers/scsi/sata_promise.c | |||
@@ -7,21 +7,26 @@ | |||
7 | * | 7 | * |
8 | * Copyright 2003-2004 Red Hat, Inc. | 8 | * Copyright 2003-2004 Red Hat, Inc. |
9 | * | 9 | * |
10 | * The contents of this file are subject to the Open | ||
11 | * Software License version 1.1 that can be found at | ||
12 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
13 | * by reference. | ||
14 | * | 10 | * |
15 | * Alternatively, the contents of this file may be used under the terms | 11 | * This program is free software; you can redistribute it and/or modify |
16 | * of the GNU General Public License version 2 (the "GPL") as distributed | 12 | * it under the terms of the GNU General Public License as published by |
17 | * in the kernel source COPYING file, in which case the provisions of | 13 | * the Free Software Foundation; either version 2, or (at your option) |
18 | * the GPL are applicable instead of the above. If you wish to allow | 14 | * any later version. |
19 | * the use of your version of this file only under the terms of the | 15 | * |
20 | * GPL and not to allow others to use your version of this file under | 16 | * This program is distributed in the hope that it will be useful, |
21 | * the OSL, indicate your decision by deleting the provisions above and | 17 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
22 | * replace them with the notice and other provisions required by the GPL. | 18 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
23 | * If you do not delete the provisions above, a recipient may use your | 19 | * GNU General Public License for more details. |
24 | * version of this file under either the OSL or the GPL. | 20 | * |
21 | * You should have received a copy of the GNU General Public License | ||
22 | * along with this program; see the file COPYING. If not, write to | ||
23 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
24 | * | ||
25 | * | ||
26 | * libata documentation is available via 'make {ps|pdf}docs', | ||
27 | * as Documentation/DocBook/libata.* | ||
28 | * | ||
29 | * Hardware information only available under NDA. | ||
25 | * | 30 | * |
26 | */ | 31 | */ |
27 | 32 | ||
@@ -206,6 +211,10 @@ static struct pci_device_id pdc_ata_pci_tbl[] = { | |||
206 | board_20319 }, | 211 | board_20319 }, |
207 | { PCI_VENDOR_ID_PROMISE, 0x3319, PCI_ANY_ID, PCI_ANY_ID, 0, 0, | 212 | { PCI_VENDOR_ID_PROMISE, 0x3319, PCI_ANY_ID, PCI_ANY_ID, 0, 0, |
208 | board_20319 }, | 213 | board_20319 }, |
214 | { PCI_VENDOR_ID_PROMISE, 0x3519, PCI_ANY_ID, PCI_ANY_ID, 0, 0, | ||
215 | board_20319 }, | ||
216 | { PCI_VENDOR_ID_PROMISE, 0x3d17, PCI_ANY_ID, PCI_ANY_ID, 0, 0, | ||
217 | board_20319 }, | ||
209 | { PCI_VENDOR_ID_PROMISE, 0x3d18, PCI_ANY_ID, PCI_ANY_ID, 0, 0, | 218 | { PCI_VENDOR_ID_PROMISE, 0x3d18, PCI_ANY_ID, PCI_ANY_ID, 0, 0, |
210 | board_20319 }, | 219 | board_20319 }, |
211 | 220 | ||
@@ -357,11 +366,15 @@ static void pdc_qc_prep(struct ata_queued_cmd *qc) | |||
357 | 366 | ||
358 | static void pdc_eng_timeout(struct ata_port *ap) | 367 | static void pdc_eng_timeout(struct ata_port *ap) |
359 | { | 368 | { |
369 | struct ata_host_set *host_set = ap->host_set; | ||
360 | u8 drv_stat; | 370 | u8 drv_stat; |
361 | struct ata_queued_cmd *qc; | 371 | struct ata_queued_cmd *qc; |
372 | unsigned long flags; | ||
362 | 373 | ||
363 | DPRINTK("ENTER\n"); | 374 | DPRINTK("ENTER\n"); |
364 | 375 | ||
376 | spin_lock_irqsave(&host_set->lock, flags); | ||
377 | |||
365 | qc = ata_qc_from_tag(ap, ap->active_tag); | 378 | qc = ata_qc_from_tag(ap, ap->active_tag); |
366 | if (!qc) { | 379 | if (!qc) { |
367 | printk(KERN_ERR "ata%u: BUG: timeout without command\n", | 380 | printk(KERN_ERR "ata%u: BUG: timeout without command\n", |
@@ -395,6 +408,7 @@ static void pdc_eng_timeout(struct ata_port *ap) | |||
395 | } | 408 | } |
396 | 409 | ||
397 | out: | 410 | out: |
411 | spin_unlock_irqrestore(&host_set->lock, flags); | ||
398 | DPRINTK("EXIT\n"); | 412 | DPRINTK("EXIT\n"); |
399 | } | 413 | } |
400 | 414 | ||
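The hunks above make pdc_eng_timeout() hold the host_set lock for its whole body, so the timeout path cannot race with the interrupt handler over the active qc. A minimal sketch of the resulting shape, with the Promise-specific error handling elided and a hypothetical helper name:

/* Sketch only: timeout handling serialized against the irq handler. */
static void eng_timeout_locked(struct ata_port *ap)
{
	struct ata_host_set *host_set = ap->host_set;
	struct ata_queued_cmd *qc;
	unsigned long flags;

	spin_lock_irqsave(&host_set->lock, flags);
	qc = ata_qc_from_tag(ap, ap->active_tag);
	if (qc)
		ata_qc_complete(qc, ata_wait_idle(ap));	/* protocol-specific handling elided */
	spin_unlock_irqrestore(&host_set->lock, flags);
}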
@@ -477,7 +491,8 @@ static irqreturn_t pdc_interrupt (int irq, void *dev_instance, struct pt_regs *r | |||
477 | VPRINTK("port %u\n", i); | 491 | VPRINTK("port %u\n", i); |
478 | ap = host_set->ports[i]; | 492 | ap = host_set->ports[i]; |
479 | tmp = mask & (1 << (i + 1)); | 493 | tmp = mask & (1 << (i + 1)); |
480 | if (tmp && ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) { | 494 | if (tmp && ap && |
495 | !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { | ||
481 | struct ata_queued_cmd *qc; | 496 | struct ata_queued_cmd *qc; |
482 | 497 | ||
483 | qc = ata_qc_from_tag(ap, ap->active_tag); | 498 | qc = ata_qc_from_tag(ap, ap->active_tag); |
diff --git a/drivers/scsi/sata_promise.h b/drivers/scsi/sata_promise.h index 6e7e96b9ee13..6ee5e190262d 100644 --- a/drivers/scsi/sata_promise.h +++ b/drivers/scsi/sata_promise.h | |||
@@ -3,21 +3,24 @@ | |||
3 | * | 3 | * |
4 | * Copyright 2003-2004 Red Hat, Inc. | 4 | * Copyright 2003-2004 Red Hat, Inc. |
5 | * | 5 | * |
6 | * The contents of this file are subject to the Open | ||
7 | * Software License version 1.1 that can be found at | ||
8 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
9 | * by reference. | ||
10 | * | 6 | * |
11 | * Alternatively, the contents of this file may be used under the terms | 7 | * This program is free software; you can redistribute it and/or modify |
12 | * of the GNU General Public License version 2 (the "GPL") as distributed | 8 | * it under the terms of the GNU General Public License as published by |
13 | * in the kernel source COPYING file, in which case the provisions of | 9 | * the Free Software Foundation; either version 2, or (at your option) |
14 | * the GPL are applicable instead of the above. If you wish to allow | 10 | * any later version. |
15 | * the use of your version of this file only under the terms of the | 11 | * |
16 | * GPL and not to allow others to use your version of this file under | 12 | * This program is distributed in the hope that it will be useful, |
17 | * the OSL, indicate your decision by deleting the provisions above and | 13 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
18 | * replace them with the notice and other provisions required by the GPL. | 14 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
19 | * If you do not delete the provisions above, a recipient may use your | 15 | * GNU General Public License for more details. |
20 | * version of this file under either the OSL or the GPL. | 16 | * |
17 | * You should have received a copy of the GNU General Public License | ||
18 | * along with this program; see the file COPYING. If not, write to | ||
19 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
20 | * | ||
21 | * | ||
22 | * libata documentation is available via 'make {ps|pdf}docs', | ||
23 | * as Documentation/DocBook/libata.* | ||
21 | * | 24 | * |
22 | */ | 25 | */ |
23 | 26 | ||
diff --git a/drivers/scsi/sata_qstor.c b/drivers/scsi/sata_qstor.c index 1383e8a28d72..9c99ab433bd3 100644 --- a/drivers/scsi/sata_qstor.c +++ b/drivers/scsi/sata_qstor.c | |||
@@ -6,21 +6,24 @@ | |||
6 | * Copyright 2005 Pacific Digital Corporation. | 6 | * Copyright 2005 Pacific Digital Corporation. |
7 | * (OSL/GPL code release authorized by Jalil Fadavi). | 7 | * (OSL/GPL code release authorized by Jalil Fadavi). |
8 | * | 8 | * |
9 | * The contents of this file are subject to the Open | ||
10 | * Software License version 1.1 that can be found at | ||
11 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
12 | * by reference. | ||
13 | * | 9 | * |
14 | * Alternatively, the contents of this file may be used under the terms | 10 | * This program is free software; you can redistribute it and/or modify |
15 | * of the GNU General Public License version 2 (the "GPL") as distributed | 11 | * it under the terms of the GNU General Public License as published by |
16 | * in the kernel source COPYING file, in which case the provisions of | 12 | * the Free Software Foundation; either version 2, or (at your option) |
17 | * the GPL are applicable instead of the above. If you wish to allow | 13 | * any later version. |
18 | * the use of your version of this file only under the terms of the | 14 | * |
19 | * GPL and not to allow others to use your version of this file under | 15 | * This program is distributed in the hope that it will be useful, |
20 | * the OSL, indicate your decision by deleting the provisions above and | 16 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
21 | * replace them with the notice and other provisions required by the GPL. | 17 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
22 | * If you do not delete the provisions above, a recipient may use your | 18 | * GNU General Public License for more details. |
23 | * version of this file under either the OSL or the GPL. | 19 | * |
20 | * You should have received a copy of the GNU General Public License | ||
21 | * along with this program; see the file COPYING. If not, write to | ||
22 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
23 | * | ||
24 | * | ||
25 | * libata documentation is available via 'make {ps|pdf}docs', | ||
26 | * as Documentation/DocBook/libata.* | ||
24 | * | 27 | * |
25 | */ | 28 | */ |
26 | 29 | ||
@@ -117,7 +120,7 @@ static void qs_phy_reset(struct ata_port *ap); | |||
117 | static void qs_qc_prep(struct ata_queued_cmd *qc); | 120 | static void qs_qc_prep(struct ata_queued_cmd *qc); |
118 | static int qs_qc_issue(struct ata_queued_cmd *qc); | 121 | static int qs_qc_issue(struct ata_queued_cmd *qc); |
119 | static int qs_check_atapi_dma(struct ata_queued_cmd *qc); | 122 | static int qs_check_atapi_dma(struct ata_queued_cmd *qc); |
120 | static void qs_bmdma_stop(struct ata_port *ap); | 123 | static void qs_bmdma_stop(struct ata_queued_cmd *qc); |
121 | static u8 qs_bmdma_status(struct ata_port *ap); | 124 | static u8 qs_bmdma_status(struct ata_port *ap); |
122 | static void qs_irq_clear(struct ata_port *ap); | 125 | static void qs_irq_clear(struct ata_port *ap); |
123 | static void qs_eng_timeout(struct ata_port *ap); | 126 | static void qs_eng_timeout(struct ata_port *ap); |
@@ -198,7 +201,7 @@ static int qs_check_atapi_dma(struct ata_queued_cmd *qc) | |||
198 | return 1; /* ATAPI DMA not supported */ | 201 | return 1; /* ATAPI DMA not supported */ |
199 | } | 202 | } |
200 | 203 | ||
201 | static void qs_bmdma_stop(struct ata_port *ap) | 204 | static void qs_bmdma_stop(struct ata_queued_cmd *qc) |
202 | { | 205 | { |
203 | /* nothing */ | 206 | /* nothing */ |
204 | } | 207 | } |
@@ -386,7 +389,8 @@ static inline unsigned int qs_intr_pkt(struct ata_host_set *host_set) | |||
386 | DPRINTK("SFF=%08x%08x: sCHAN=%u sHST=%d sDST=%02x\n", | 389 | DPRINTK("SFF=%08x%08x: sCHAN=%u sHST=%d sDST=%02x\n", |
387 | sff1, sff0, port_no, sHST, sDST); | 390 | sff1, sff0, port_no, sHST, sDST); |
388 | handled = 1; | 391 | handled = 1; |
389 | if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) { | 392 | if (ap && !(ap->flags & |
393 | (ATA_FLAG_PORT_DISABLED|ATA_FLAG_NOINTR))) { | ||
390 | struct ata_queued_cmd *qc; | 394 | struct ata_queued_cmd *qc; |
391 | struct qs_port_priv *pp = ap->private_data; | 395 | struct qs_port_priv *pp = ap->private_data; |
392 | if (!pp || pp->state != qs_state_pkt) | 396 | if (!pp || pp->state != qs_state_pkt) |
@@ -417,7 +421,8 @@ static inline unsigned int qs_intr_mmio(struct ata_host_set *host_set) | |||
417 | for (port_no = 0; port_no < host_set->n_ports; ++port_no) { | 421 | for (port_no = 0; port_no < host_set->n_ports; ++port_no) { |
418 | struct ata_port *ap; | 422 | struct ata_port *ap; |
419 | ap = host_set->ports[port_no]; | 423 | ap = host_set->ports[port_no]; |
420 | if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) { | 424 | if (ap && |
425 | !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { | ||
421 | struct ata_queued_cmd *qc; | 426 | struct ata_queued_cmd *qc; |
422 | struct qs_port_priv *pp = ap->private_data; | 427 | struct qs_port_priv *pp = ap->private_data; |
423 | if (!pp || pp->state != qs_state_mmio) | 428 | if (!pp || pp->state != qs_state_mmio) |
@@ -431,7 +436,7 @@ static inline unsigned int qs_intr_mmio(struct ata_host_set *host_set) | |||
431 | continue; | 436 | continue; |
432 | DPRINTK("ata%u: protocol %d (dev_stat 0x%X)\n", | 437 | DPRINTK("ata%u: protocol %d (dev_stat 0x%X)\n", |
433 | ap->id, qc->tf.protocol, status); | 438 | ap->id, qc->tf.protocol, status); |
434 | 439 | ||
435 | /* complete taskfile transaction */ | 440 | /* complete taskfile transaction */ |
436 | pp->state = qs_state_idle; | 441 | pp->state = qs_state_idle; |
437 | ata_qc_complete(qc, status); | 442 | ata_qc_complete(qc, status); |
diff --git a/drivers/scsi/sata_sil.c b/drivers/scsi/sata_sil.c index 49ed557a4b66..71d49548f0a3 100644 --- a/drivers/scsi/sata_sil.c +++ b/drivers/scsi/sata_sil.c | |||
@@ -5,24 +5,32 @@ | |||
5 | * Please ALWAYS copy linux-ide@vger.kernel.org | 5 | * Please ALWAYS copy linux-ide@vger.kernel.org |
6 | * on emails. | 6 | * on emails. |
7 | * | 7 | * |
8 | * Copyright 2003 Red Hat, Inc. | 8 | * Copyright 2003-2005 Red Hat, Inc. |
9 | * Copyright 2003 Benjamin Herrenschmidt | 9 | * Copyright 2003 Benjamin Herrenschmidt |
10 | * | 10 | * |
11 | * The contents of this file are subject to the Open | ||
12 | * Software License version 1.1 that can be found at | ||
13 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
14 | * by reference. | ||
15 | * | 11 | * |
16 | * Alternatively, the contents of this file may be used under the terms | 12 | * This program is free software; you can redistribute it and/or modify |
17 | * of the GNU General Public License version 2 (the "GPL") as distributed | 13 | * it under the terms of the GNU General Public License as published by |
18 | * in the kernel source COPYING file, in which case the provisions of | 14 | * the Free Software Foundation; either version 2, or (at your option) |
19 | * the GPL are applicable instead of the above. If you wish to allow | 15 | * any later version. |
20 | * the use of your version of this file only under the terms of the | 16 | * |
21 | * GPL and not to allow others to use your version of this file under | 17 | * This program is distributed in the hope that it will be useful, |
22 | * the OSL, indicate your decision by deleting the provisions above and | 18 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
23 | * replace them with the notice and other provisions required by the GPL. | 19 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
24 | * If you do not delete the provisions above, a recipient may use your | 20 | * GNU General Public License for more details. |
25 | * version of this file under either the OSL or the GPL. | 21 | * |
22 | * You should have received a copy of the GNU General Public License | ||
23 | * along with this program; see the file COPYING. If not, write to | ||
24 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
25 | * | ||
26 | * | ||
27 | * libata documentation is available via 'make {ps|pdf}docs', | ||
28 | * as Documentation/DocBook/libata.* | ||
29 | * | ||
30 | * Documentation for SiI 3112: | ||
31 | * http://gkernel.sourceforge.net/specs/sii/3112A_SiI-DS-0095-B2.pdf.bz2 | ||
32 | * | ||
33 | * Other errata and documentation available under NDA. | ||
26 | * | 34 | * |
27 | */ | 35 | */ |
28 | 36 | ||
@@ -41,8 +49,11 @@ | |||
41 | #define DRV_VERSION "0.9" | 49 | #define DRV_VERSION "0.9" |
42 | 50 | ||
43 | enum { | 51 | enum { |
52 | SIL_FLAG_MOD15WRITE = (1 << 30), | ||
53 | |||
44 | sil_3112 = 0, | 54 | sil_3112 = 0, |
45 | sil_3114 = 1, | 55 | sil_3112_m15w = 1, |
56 | sil_3114 = 2, | ||
46 | 57 | ||
47 | SIL_FIFO_R0 = 0x40, | 58 | SIL_FIFO_R0 = 0x40, |
48 | SIL_FIFO_W0 = 0x41, | 59 | SIL_FIFO_W0 = 0x41, |
@@ -76,13 +87,13 @@ static void sil_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val); | |||
76 | static void sil_post_set_mode (struct ata_port *ap); | 87 | static void sil_post_set_mode (struct ata_port *ap); |
77 | 88 | ||
78 | static struct pci_device_id sil_pci_tbl[] = { | 89 | static struct pci_device_id sil_pci_tbl[] = { |
79 | { 0x1095, 0x3112, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112 }, | 90 | { 0x1095, 0x3112, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112_m15w }, |
80 | { 0x1095, 0x0240, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112 }, | 91 | { 0x1095, 0x0240, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112_m15w }, |
81 | { 0x1095, 0x3512, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112 }, | 92 | { 0x1095, 0x3512, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112 }, |
82 | { 0x1095, 0x3114, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3114 }, | 93 | { 0x1095, 0x3114, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3114 }, |
83 | { 0x1002, 0x436e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112 }, | 94 | { 0x1002, 0x436e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112_m15w }, |
84 | { 0x1002, 0x4379, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112 }, | 95 | { 0x1002, 0x4379, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112_m15w }, |
85 | { 0x1002, 0x437a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112 }, | 96 | { 0x1002, 0x437a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, sil_3112_m15w }, |
86 | { } /* terminate list */ | 97 | { } /* terminate list */ |
87 | }; | 98 | }; |
88 | 99 | ||
@@ -174,6 +185,16 @@ static struct ata_port_info sil_port_info[] = { | |||
174 | .mwdma_mask = 0x07, /* mwdma0-2 */ | 185 | .mwdma_mask = 0x07, /* mwdma0-2 */ |
175 | .udma_mask = 0x3f, /* udma0-5 */ | 186 | .udma_mask = 0x3f, /* udma0-5 */ |
176 | .port_ops = &sil_ops, | 187 | .port_ops = &sil_ops, |
188 | }, /* sil_3112_m15w - keep it sync'd w/ sil_3112 */ | ||
189 | { | ||
190 | .sht = &sil_sht, | ||
191 | .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | | ||
192 | ATA_FLAG_SRST | ATA_FLAG_MMIO | | ||
193 | SIL_FLAG_MOD15WRITE, | ||
194 | .pio_mask = 0x1f, /* pio0-4 */ | ||
195 | .mwdma_mask = 0x07, /* mwdma0-2 */ | ||
196 | .udma_mask = 0x3f, /* udma0-5 */ | ||
197 | .port_ops = &sil_ops, | ||
177 | }, /* sil_3114 */ | 198 | }, /* sil_3114 */ |
178 | { | 199 | { |
179 | .sht = &sil_sht, | 200 | .sht = &sil_sht, |
@@ -323,15 +344,15 @@ static void sil_dev_config(struct ata_port *ap, struct ata_device *dev) | |||
323 | while ((len > 0) && (s[len - 1] == ' ')) | 344 | while ((len > 0) && (s[len - 1] == ' ')) |
324 | len--; | 345 | len--; |
325 | 346 | ||
326 | for (n = 0; sil_blacklist[n].product; n++) | 347 | for (n = 0; sil_blacklist[n].product; n++) |
327 | if (!memcmp(sil_blacklist[n].product, s, | 348 | if (!memcmp(sil_blacklist[n].product, s, |
328 | strlen(sil_blacklist[n].product))) { | 349 | strlen(sil_blacklist[n].product))) { |
329 | quirks = sil_blacklist[n].quirk; | 350 | quirks = sil_blacklist[n].quirk; |
330 | break; | 351 | break; |
331 | } | 352 | } |
332 | 353 | ||
333 | /* limit requests to 15 sectors */ | 354 | /* limit requests to 15 sectors */ |
334 | if (quirks & SIL_QUIRK_MOD15WRITE) { | 355 | if ((ap->flags & SIL_FLAG_MOD15WRITE) && (quirks & SIL_QUIRK_MOD15WRITE)) { |
335 | printk(KERN_INFO "ata%u(%u): applying Seagate errata fix\n", | 356 | printk(KERN_INFO "ata%u(%u): applying Seagate errata fix\n", |
336 | ap->id, dev->devno); | 357 | ap->id, dev->devno); |
337 | ap->host->max_sectors = 15; | 358 | ap->host->max_sectors = 15; |
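The effect of the sil_3112_m15w split above: the Seagate mod-15-write cap now requires both a controller flagged SIL_FLAG_MOD15WRITE (the 3112, 0240 and ATI IXP entries) and a blacklisted drive, so 3512 and 3114 ports, which keep the unflagged port info, are no longer limited. The check, restated for illustration:

/* Both conditions must hold before the request size is capped. */
if ((ap->flags & SIL_FLAG_MOD15WRITE) && (quirks & SIL_QUIRK_MOD15WRITE))
	ap->host->max_sectors = 15;	/* Seagate errata workaround */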
diff --git a/drivers/scsi/sata_sis.c b/drivers/scsi/sata_sis.c index e418b89c6b9d..43af445b3ad2 100644 --- a/drivers/scsi/sata_sis.c +++ b/drivers/scsi/sata_sis.c | |||
@@ -7,21 +7,26 @@ | |||
7 | * | 7 | * |
8 | * Copyright 2004 Uwe Koziolek | 8 | * Copyright 2004 Uwe Koziolek |
9 | * | 9 | * |
10 | * The contents of this file are subject to the Open | ||
11 | * Software License version 1.1 that can be found at | ||
12 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
13 | * by reference. | ||
14 | * | 10 | * |
15 | * Alternatively, the contents of this file may be used under the terms | 11 | * This program is free software; you can redistribute it and/or modify |
16 | * of the GNU General Public License version 2 (the "GPL") as distributed | 12 | * it under the terms of the GNU General Public License as published by |
17 | * in the kernel source COPYING file, in which case the provisions of | 13 | * the Free Software Foundation; either version 2, or (at your option) |
18 | * the GPL are applicable instead of the above. If you wish to allow | 14 | * any later version. |
19 | * the use of your version of this file only under the terms of the | 15 | * |
20 | * GPL and not to allow others to use your version of this file under | 16 | * This program is distributed in the hope that it will be useful, |
21 | * the OSL, indicate your decision by deleting the provisions above and | 17 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
22 | * replace them with the notice and other provisions required by the GPL. | 18 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
23 | * If you do not delete the provisions above, a recipient may use your | 19 | * GNU General Public License for more details. |
24 | * version of this file under either the OSL or the GPL. | 20 | * |
21 | * You should have received a copy of the GNU General Public License | ||
22 | * along with this program; see the file COPYING. If not, write to | ||
23 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
24 | * | ||
25 | * | ||
26 | * libata documentation is available via 'make {ps|pdf}docs', | ||
27 | * as Documentation/DocBook/libata.* | ||
28 | * | ||
29 | * Hardware documentation available under NDA. | ||
25 | * | 30 | * |
26 | */ | 31 | */ |
27 | 32 | ||
@@ -234,7 +239,7 @@ static int sis_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) | |||
234 | pci_read_config_dword(pdev, SIS_GENCTL, &genctl); | 239 | pci_read_config_dword(pdev, SIS_GENCTL, &genctl); |
235 | if ((genctl & GENCTL_IOMAPPED_SCR) == 0) | 240 | if ((genctl & GENCTL_IOMAPPED_SCR) == 0) |
236 | probe_ent->host_flags |= SIS_FLAG_CFGSCR; | 241 | probe_ent->host_flags |= SIS_FLAG_CFGSCR; |
237 | 242 | ||
238 | /* if hardware thinks SCRs are in IO space, but there are | 243 | /* if hardware thinks SCRs are in IO space, but there are |
239 | * no IO resources assigned, change to PCI cfg space. | 244 | * no IO resources assigned, change to PCI cfg space. |
240 | */ | 245 | */ |
diff --git a/drivers/scsi/sata_svw.c b/drivers/scsi/sata_svw.c index 858e07185dbd..19d3bb3b0fb6 100644 --- a/drivers/scsi/sata_svw.c +++ b/drivers/scsi/sata_svw.c | |||
@@ -13,21 +13,26 @@ | |||
13 | * This driver probably works with non-Apple versions of the | 13 | * This driver probably works with non-Apple versions of the |
14 | * Broadcom chipset... | 14 | * Broadcom chipset... |
15 | * | 15 | * |
16 | * The contents of this file are subject to the Open | ||
17 | * Software License version 1.1 that can be found at | ||
18 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
19 | * by reference. | ||
20 | * | 16 | * |
21 | * Alternatively, the contents of this file may be used under the terms | 17 | * This program is free software; you can redistribute it and/or modify |
22 | * of the GNU General Public License version 2 (the "GPL") as distributed | 18 | * it under the terms of the GNU General Public License as published by |
23 | * in the kernel source COPYING file, in which case the provisions of | 19 | * the Free Software Foundation; either version 2, or (at your option) |
24 | * the GPL are applicable instead of the above. If you wish to allow | 20 | * any later version. |
25 | * the use of your version of this file only under the terms of the | 21 | * |
26 | * GPL and not to allow others to use your version of this file under | 22 | * This program is distributed in the hope that it will be useful, |
27 | * the OSL, indicate your decision by deleting the provisions above and | 23 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
28 | * replace them with the notice and other provisions required by the GPL. | 24 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
29 | * If you do not delete the provisions above, a recipient may use your | 25 | * GNU General Public License for more details. |
30 | * version of this file under either the OSL or the GPL. | 26 | * |
27 | * You should have received a copy of the GNU General Public License | ||
28 | * along with this program; see the file COPYING. If not, write to | ||
29 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
30 | * | ||
31 | * | ||
32 | * libata documentation is available via 'make {ps|pdf}docs', | ||
33 | * as Documentation/DocBook/libata.* | ||
34 | * | ||
35 | * Hardware documentation available under NDA. | ||
31 | * | 36 | * |
32 | */ | 37 | */ |
33 | 38 | ||
@@ -195,18 +200,18 @@ static void k2_bmdma_start_mmio (struct ata_queued_cmd *qc) | |||
195 | /* start host DMA transaction */ | 200 | /* start host DMA transaction */ |
196 | dmactl = readb(mmio + ATA_DMA_CMD); | 201 | dmactl = readb(mmio + ATA_DMA_CMD); |
197 | writeb(dmactl | ATA_DMA_START, mmio + ATA_DMA_CMD); | 202 | writeb(dmactl | ATA_DMA_START, mmio + ATA_DMA_CMD); |
198 | /* There is a race condition in certain SATA controllers that can | 203 | /* There is a race condition in certain SATA controllers that can |
199 | be seen when the r/w command is given to the controller before the | 204 | be seen when the r/w command is given to the controller before the |
200 | host DMA is started. On a Read command, the controller would initiate | 205 | host DMA is started. On a Read command, the controller would initiate |
201 | the command to the drive even before it sees the DMA start. When there | 206 | the command to the drive even before it sees the DMA start. When there |
202 | are very fast drives connected to the controller, or when the data request | 207 | are very fast drives connected to the controller, or when the data request |
203 | hits in the drive cache, there is the possibility that the drive returns a part | 208 | hits in the drive cache, there is the possibility that the drive returns a part |
204 | or all of the requested data to the controller before the DMA start is issued. | 209 | or all of the requested data to the controller before the DMA start is issued. |
205 | In this case, the controller would become confused as to what to do with the data. | 210 | In this case, the controller would become confused as to what to do with the data. |
206 | In the worst case when all the data is returned back to the controller, the | 211 | In the worst case when all the data is returned back to the controller, the |
207 | controller could hang. In other cases it could return partial data returning | 212 | controller could hang. In other cases it could return partial data returning |
208 | in data corruption. This problem has been seen in PPC systems and can also appear | 213 | in data corruption. This problem has been seen in PPC systems and can also appear |
209 | on an system with very fast disks, where the SATA controller is sitting behind a | 214 | on an system with very fast disks, where the SATA controller is sitting behind a |
210 | number of bridges, and hence there is significant latency between the r/w command | 215 | number of bridges, and hence there is significant latency between the r/w command |
211 | and the start command. */ | 216 | and the start command. */ |
212 | /* issue r/w command if the access is to ATA*/ | 217 | /* issue r/w command if the access is to ATA*/ |
@@ -214,7 +219,7 @@ static void k2_bmdma_start_mmio (struct ata_queued_cmd *qc) | |||
214 | ap->ops->exec_command(ap, &qc->tf); | 219 | ap->ops->exec_command(ap, &qc->tf); |
215 | } | 220 | } |
216 | 221 | ||
217 | 222 | ||
218 | static u8 k2_stat_check_status(struct ata_port *ap) | 223 | static u8 k2_stat_check_status(struct ata_port *ap) |
219 | { | 224 | { |
220 | return readl((void *) ap->ioaddr.status_addr); | 225 | return readl((void *) ap->ioaddr.status_addr); |
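The long comment in k2_bmdma_start_mmio() above documents why this controller must see the DMA engine armed before the read/write command reaches the drive. A simplified sketch of that ordering, with a hypothetical helper name and only the DMA-protocol path shown:

/* Sketch only: arm host DMA, then issue the taskfile. */
static void start_dma_then_command(struct ata_queued_cmd *qc)
{
	struct ata_port *ap = qc->ap;
	void __iomem *mmio = (void __iomem *) ap->ioaddr.bmdma_addr;
	u8 dmactl = readb(mmio + ATA_DMA_CMD);

	writeb(dmactl | ATA_DMA_START, mmio + ATA_DMA_CMD);	/* arm host DMA first */
	if (qc->tf.protocol == ATA_PROT_DMA)
		ap->ops->exec_command(ap, &qc->tf);	/* only then send the r/w command */
}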
diff --git a/drivers/scsi/sata_sx4.c b/drivers/scsi/sata_sx4.c index efd7d7a61135..c72fcc46f0fa 100644 --- a/drivers/scsi/sata_sx4.c +++ b/drivers/scsi/sata_sx4.c | |||
@@ -7,21 +7,26 @@ | |||
7 | * | 7 | * |
8 | * Copyright 2003-2004 Red Hat, Inc. | 8 | * Copyright 2003-2004 Red Hat, Inc. |
9 | * | 9 | * |
10 | * The contents of this file are subject to the Open | ||
11 | * Software License version 1.1 that can be found at | ||
12 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
13 | * by reference. | ||
14 | * | 10 | * |
15 | * Alternatively, the contents of this file may be used under the terms | 11 | * This program is free software; you can redistribute it and/or modify |
16 | * of the GNU General Public License version 2 (the "GPL") as distributed | 12 | * it under the terms of the GNU General Public License as published by |
17 | * in the kernel source COPYING file, in which case the provisions of | 13 | * the Free Software Foundation; either version 2, or (at your option) |
18 | * the GPL are applicable instead of the above. If you wish to allow | 14 | * any later version. |
19 | * the use of your version of this file only under the terms of the | 15 | * |
20 | * GPL and not to allow others to use your version of this file under | 16 | * This program is distributed in the hope that it will be useful, |
21 | * the OSL, indicate your decision by deleting the provisions above and | 17 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
22 | * replace them with the notice and other provisions required by the GPL. | 18 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
23 | * If you do not delete the provisions above, a recipient may use your | 19 | * GNU General Public License for more details. |
24 | * version of this file under either the OSL or the GPL. | 20 | * |
21 | * You should have received a copy of the GNU General Public License | ||
22 | * along with this program; see the file COPYING. If not, write to | ||
23 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
24 | * | ||
25 | * | ||
26 | * libata documentation is available via 'make {ps|pdf}docs', | ||
27 | * as Documentation/DocBook/libata.* | ||
28 | * | ||
29 | * Hardware documentation available under NDA. | ||
25 | * | 30 | * |
26 | */ | 31 | */ |
27 | 32 | ||
@@ -94,7 +99,7 @@ enum { | |||
94 | PDC_DIMM1_CONTROL_OFFSET = 0x84, | 99 | PDC_DIMM1_CONTROL_OFFSET = 0x84, |
95 | PDC_SDRAM_CONTROL_OFFSET = 0x88, | 100 | PDC_SDRAM_CONTROL_OFFSET = 0x88, |
96 | PDC_I2C_WRITE = 0x00000000, | 101 | PDC_I2C_WRITE = 0x00000000, |
97 | PDC_I2C_READ = 0x00000040, | 102 | PDC_I2C_READ = 0x00000040, |
98 | PDC_I2C_START = 0x00000080, | 103 | PDC_I2C_START = 0x00000080, |
99 | PDC_I2C_MASK_INT = 0x00000020, | 104 | PDC_I2C_MASK_INT = 0x00000020, |
100 | PDC_I2C_COMPLETE = 0x00010000, | 105 | PDC_I2C_COMPLETE = 0x00010000, |
@@ -105,16 +110,16 @@ enum { | |||
105 | PDC_DIMM_SPD_COLUMN_NUM = 4, | 110 | PDC_DIMM_SPD_COLUMN_NUM = 4, |
106 | PDC_DIMM_SPD_MODULE_ROW = 5, | 111 | PDC_DIMM_SPD_MODULE_ROW = 5, |
107 | PDC_DIMM_SPD_TYPE = 11, | 112 | PDC_DIMM_SPD_TYPE = 11, |
108 | PDC_DIMM_SPD_FRESH_RATE = 12, | 113 | PDC_DIMM_SPD_FRESH_RATE = 12, |
109 | PDC_DIMM_SPD_BANK_NUM = 17, | 114 | PDC_DIMM_SPD_BANK_NUM = 17, |
110 | PDC_DIMM_SPD_CAS_LATENCY = 18, | 115 | PDC_DIMM_SPD_CAS_LATENCY = 18, |
111 | PDC_DIMM_SPD_ATTRIBUTE = 21, | 116 | PDC_DIMM_SPD_ATTRIBUTE = 21, |
112 | PDC_DIMM_SPD_ROW_PRE_CHARGE = 27, | 117 | PDC_DIMM_SPD_ROW_PRE_CHARGE = 27, |
113 | PDC_DIMM_SPD_ROW_ACTIVE_DELAY = 28, | 118 | PDC_DIMM_SPD_ROW_ACTIVE_DELAY = 28, |
114 | PDC_DIMM_SPD_RAS_CAS_DELAY = 29, | 119 | PDC_DIMM_SPD_RAS_CAS_DELAY = 29, |
115 | PDC_DIMM_SPD_ACTIVE_PRECHARGE = 30, | 120 | PDC_DIMM_SPD_ACTIVE_PRECHARGE = 30, |
116 | PDC_DIMM_SPD_SYSTEM_FREQ = 126, | 121 | PDC_DIMM_SPD_SYSTEM_FREQ = 126, |
117 | PDC_CTL_STATUS = 0x08, | 122 | PDC_CTL_STATUS = 0x08, |
118 | PDC_DIMM_WINDOW_CTLR = 0x0C, | 123 | PDC_DIMM_WINDOW_CTLR = 0x0C, |
119 | PDC_TIME_CONTROL = 0x3C, | 124 | PDC_TIME_CONTROL = 0x3C, |
120 | PDC_TIME_PERIOD = 0x40, | 125 | PDC_TIME_PERIOD = 0x40, |
@@ -157,15 +162,15 @@ static void pdc_exec_command_mmio(struct ata_port *ap, struct ata_taskfile *tf); | |||
157 | static void pdc20621_host_stop(struct ata_host_set *host_set); | 162 | static void pdc20621_host_stop(struct ata_host_set *host_set); |
158 | static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe); | 163 | static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe); |
159 | static int pdc20621_detect_dimm(struct ata_probe_ent *pe); | 164 | static int pdc20621_detect_dimm(struct ata_probe_ent *pe); |
160 | static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe, | 165 | static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe, |
161 | u32 device, u32 subaddr, u32 *pdata); | 166 | u32 device, u32 subaddr, u32 *pdata); |
162 | static int pdc20621_prog_dimm0(struct ata_probe_ent *pe); | 167 | static int pdc20621_prog_dimm0(struct ata_probe_ent *pe); |
163 | static unsigned int pdc20621_prog_dimm_global(struct ata_probe_ent *pe); | 168 | static unsigned int pdc20621_prog_dimm_global(struct ata_probe_ent *pe); |
164 | #ifdef ATA_VERBOSE_DEBUG | 169 | #ifdef ATA_VERBOSE_DEBUG |
165 | static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, | 170 | static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, |
166 | void *psource, u32 offset, u32 size); | 171 | void *psource, u32 offset, u32 size); |
167 | #endif | 172 | #endif |
168 | static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, | 173 | static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, |
169 | void *psource, u32 offset, u32 size); | 174 | void *psource, u32 offset, u32 size); |
170 | static void pdc20621_irq_clear(struct ata_port *ap); | 175 | static void pdc20621_irq_clear(struct ata_port *ap); |
171 | static int pdc20621_qc_issue_prot(struct ata_queued_cmd *qc); | 176 | static int pdc20621_qc_issue_prot(struct ata_queued_cmd *qc); |
@@ -825,7 +830,8 @@ static irqreturn_t pdc20621_interrupt (int irq, void *dev_instance, struct pt_re | |||
825 | ap = host_set->ports[port_no]; | 830 | ap = host_set->ports[port_no]; |
826 | tmp = mask & (1 << i); | 831 | tmp = mask & (1 << i); |
827 | VPRINTK("seq %u, port_no %u, ap %p, tmp %x\n", i, port_no, ap, tmp); | 832 | VPRINTK("seq %u, port_no %u, ap %p, tmp %x\n", i, port_no, ap, tmp); |
828 | if (tmp && ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) { | 833 | if (tmp && ap && |
834 | !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { | ||
829 | struct ata_queued_cmd *qc; | 835 | struct ata_queued_cmd *qc; |
830 | 836 | ||
831 | qc = ata_qc_from_tag(ap, ap->active_tag); | 837 | qc = ata_qc_from_tag(ap, ap->active_tag); |
@@ -847,10 +853,14 @@ static irqreturn_t pdc20621_interrupt (int irq, void *dev_instance, struct pt_re | |||
847 | static void pdc_eng_timeout(struct ata_port *ap) | 853 | static void pdc_eng_timeout(struct ata_port *ap) |
848 | { | 854 | { |
849 | u8 drv_stat; | 855 | u8 drv_stat; |
856 | struct ata_host_set *host_set = ap->host_set; | ||
850 | struct ata_queued_cmd *qc; | 857 | struct ata_queued_cmd *qc; |
858 | unsigned long flags; | ||
851 | 859 | ||
852 | DPRINTK("ENTER\n"); | 860 | DPRINTK("ENTER\n"); |
853 | 861 | ||
862 | spin_lock_irqsave(&host_set->lock, flags); | ||
863 | |||
854 | qc = ata_qc_from_tag(ap, ap->active_tag); | 864 | qc = ata_qc_from_tag(ap, ap->active_tag); |
855 | if (!qc) { | 865 | if (!qc) { |
856 | printk(KERN_ERR "ata%u: BUG: timeout without command\n", | 866 | printk(KERN_ERR "ata%u: BUG: timeout without command\n", |
@@ -884,6 +894,7 @@ static void pdc_eng_timeout(struct ata_port *ap) | |||
884 | } | 894 | } |
885 | 895 | ||
886 | out: | 896 | out: |
897 | spin_unlock_irqrestore(&host_set->lock, flags); | ||
887 | DPRINTK("EXIT\n"); | 898 | DPRINTK("EXIT\n"); |
888 | } | 899 | } |
889 | 900 | ||
@@ -922,7 +933,7 @@ static void pdc_sata_setup_port(struct ata_ioports *port, unsigned long base) | |||
922 | 933 | ||
923 | 934 | ||
924 | #ifdef ATA_VERBOSE_DEBUG | 935 | #ifdef ATA_VERBOSE_DEBUG |
925 | static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, void *psource, | 936 | static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, void *psource, |
926 | u32 offset, u32 size) | 937 | u32 offset, u32 size) |
927 | { | 938 | { |
928 | u32 window_size; | 939 | u32 window_size; |
@@ -936,9 +947,9 @@ static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, void *psource, | |||
936 | /* hard-code chip #0 */ | 947 | /* hard-code chip #0 */ |
937 | mmio += PDC_CHIP0_OFS; | 948 | mmio += PDC_CHIP0_OFS; |
938 | 949 | ||
939 | page_mask = 0x00; | 950 | page_mask = 0x00; |
940 | window_size = 0x2000 * 4; /* 32K byte uchar size */ | 951 | window_size = 0x2000 * 4; /* 32K byte uchar size */ |
941 | idx = (u16) (offset / window_size); | 952 | idx = (u16) (offset / window_size); |
942 | 953 | ||
943 | writel(0x01, mmio + PDC_GENERAL_CTLR); | 954 | writel(0x01, mmio + PDC_GENERAL_CTLR); |
944 | readl(mmio + PDC_GENERAL_CTLR); | 955 | readl(mmio + PDC_GENERAL_CTLR); |
@@ -947,19 +958,19 @@ static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, void *psource, | |||
947 | 958 | ||
948 | offset -= (idx * window_size); | 959 | offset -= (idx * window_size); |
949 | idx++; | 960 | idx++; |
950 | dist = ((long) (window_size - (offset + size))) >= 0 ? size : | 961 | dist = ((long) (window_size - (offset + size))) >= 0 ? size : |
951 | (long) (window_size - offset); | 962 | (long) (window_size - offset); |
952 | memcpy_fromio((char *) psource, (char *) (dimm_mmio + offset / 4), | 963 | memcpy_fromio((char *) psource, (char *) (dimm_mmio + offset / 4), |
953 | dist); | 964 | dist); |
954 | 965 | ||
955 | psource += dist; | 966 | psource += dist; |
956 | size -= dist; | 967 | size -= dist; |
957 | for (; (long) size >= (long) window_size ;) { | 968 | for (; (long) size >= (long) window_size ;) { |
958 | writel(0x01, mmio + PDC_GENERAL_CTLR); | 969 | writel(0x01, mmio + PDC_GENERAL_CTLR); |
959 | readl(mmio + PDC_GENERAL_CTLR); | 970 | readl(mmio + PDC_GENERAL_CTLR); |
960 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); | 971 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); |
961 | readl(mmio + PDC_DIMM_WINDOW_CTLR); | 972 | readl(mmio + PDC_DIMM_WINDOW_CTLR); |
962 | memcpy_fromio((char *) psource, (char *) (dimm_mmio), | 973 | memcpy_fromio((char *) psource, (char *) (dimm_mmio), |
963 | window_size / 4); | 974 | window_size / 4); |
964 | psource += window_size; | 975 | psource += window_size; |
965 | size -= window_size; | 976 | size -= window_size; |
@@ -971,14 +982,14 @@ static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, void *psource, | |||
971 | readl(mmio + PDC_GENERAL_CTLR); | 982 | readl(mmio + PDC_GENERAL_CTLR); |
972 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); | 983 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); |
973 | readl(mmio + PDC_DIMM_WINDOW_CTLR); | 984 | readl(mmio + PDC_DIMM_WINDOW_CTLR); |
974 | memcpy_fromio((char *) psource, (char *) (dimm_mmio), | 985 | memcpy_fromio((char *) psource, (char *) (dimm_mmio), |
975 | size / 4); | 986 | size / 4); |
976 | } | 987 | } |
977 | } | 988 | } |
978 | #endif | 989 | #endif |
979 | 990 | ||
980 | 991 | ||
981 | static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource, | 992 | static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource, |
982 | u32 offset, u32 size) | 993 | u32 offset, u32 size) |
983 | { | 994 | { |
984 | u32 window_size; | 995 | u32 window_size; |
@@ -989,16 +1000,16 @@ static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource, | |||
989 | struct pdc_host_priv *hpriv = pe->private_data; | 1000 | struct pdc_host_priv *hpriv = pe->private_data; |
990 | void *dimm_mmio = hpriv->dimm_mmio; | 1001 | void *dimm_mmio = hpriv->dimm_mmio; |
991 | 1002 | ||
992 | /* hard-code chip #0 */ | 1003 | /* hard-code chip #0 */ |
993 | mmio += PDC_CHIP0_OFS; | 1004 | mmio += PDC_CHIP0_OFS; |
994 | 1005 | ||
995 | page_mask = 0x00; | 1006 | page_mask = 0x00; |
996 | window_size = 0x2000 * 4; /* 32K byte uchar size */ | 1007 | window_size = 0x2000 * 4; /* 32K byte uchar size */ |
997 | idx = (u16) (offset / window_size); | 1008 | idx = (u16) (offset / window_size); |
998 | 1009 | ||
999 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); | 1010 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); |
1000 | readl(mmio + PDC_DIMM_WINDOW_CTLR); | 1011 | readl(mmio + PDC_DIMM_WINDOW_CTLR); |
1001 | offset -= (idx * window_size); | 1012 | offset -= (idx * window_size); |
1002 | idx++; | 1013 | idx++; |
1003 | dist = ((long)(s32)(window_size - (offset + size))) >= 0 ? size : | 1014 | dist = ((long)(s32)(window_size - (offset + size))) >= 0 ? size : |
1004 | (long) (window_size - offset); | 1015 | (long) (window_size - offset); |
@@ -1006,12 +1017,12 @@ static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource, | |||
1006 | writel(0x01, mmio + PDC_GENERAL_CTLR); | 1017 | writel(0x01, mmio + PDC_GENERAL_CTLR); |
1007 | readl(mmio + PDC_GENERAL_CTLR); | 1018 | readl(mmio + PDC_GENERAL_CTLR); |
1008 | 1019 | ||
1009 | psource += dist; | 1020 | psource += dist; |
1010 | size -= dist; | 1021 | size -= dist; |
1011 | for (; (long) size >= (long) window_size ;) { | 1022 | for (; (long) size >= (long) window_size ;) { |
1012 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); | 1023 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); |
1013 | readl(mmio + PDC_DIMM_WINDOW_CTLR); | 1024 | readl(mmio + PDC_DIMM_WINDOW_CTLR); |
1014 | memcpy_toio((char *) (dimm_mmio), (char *) psource, | 1025 | memcpy_toio((char *) (dimm_mmio), (char *) psource, |
1015 | window_size / 4); | 1026 | window_size / 4); |
1016 | writel(0x01, mmio + PDC_GENERAL_CTLR); | 1027 | writel(0x01, mmio + PDC_GENERAL_CTLR); |
1017 | readl(mmio + PDC_GENERAL_CTLR); | 1028 | readl(mmio + PDC_GENERAL_CTLR); |
@@ -1019,7 +1030,7 @@ static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource, | |||
1019 | size -= window_size; | 1030 | size -= window_size; |
1020 | idx ++; | 1031 | idx ++; |
1021 | } | 1032 | } |
1022 | 1033 | ||
1023 | if (size) { | 1034 | if (size) { |
1024 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); | 1035 | writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR); |
1025 | readl(mmio + PDC_DIMM_WINDOW_CTLR); | 1036 | readl(mmio + PDC_DIMM_WINDOW_CTLR); |
@@ -1030,12 +1041,12 @@ static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource, | |||
1030 | } | 1041 | } |
1031 | 1042 | ||
1032 | 1043 | ||
1033 | static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe, u32 device, | 1044 | static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe, u32 device, |
1034 | u32 subaddr, u32 *pdata) | 1045 | u32 subaddr, u32 *pdata) |
1035 | { | 1046 | { |
1036 | void *mmio = pe->mmio_base; | 1047 | void *mmio = pe->mmio_base; |
1037 | u32 i2creg = 0; | 1048 | u32 i2creg = 0; |
1038 | u32 status; | 1049 | u32 status; |
1039 | u32 count =0; | 1050 | u32 count =0; |
1040 | 1051 | ||
1041 | /* hard-code chip #0 */ | 1052 | /* hard-code chip #0 */ |
@@ -1049,7 +1060,7 @@ static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe, u32 device, | |||
1049 | readl(mmio + PDC_I2C_ADDR_DATA_OFFSET); | 1060 | readl(mmio + PDC_I2C_ADDR_DATA_OFFSET); |
1050 | 1061 | ||
1051 | /* Write Control to perform read operation, mask int */ | 1062 | /* Write Control to perform read operation, mask int */ |
1052 | writel(PDC_I2C_READ | PDC_I2C_START | PDC_I2C_MASK_INT, | 1063 | writel(PDC_I2C_READ | PDC_I2C_START | PDC_I2C_MASK_INT, |
1053 | mmio + PDC_I2C_CONTROL_OFFSET); | 1064 | mmio + PDC_I2C_CONTROL_OFFSET); |
1054 | 1065 | ||
1055 | for (count = 0; count <= 1000; count ++) { | 1066 | for (count = 0; count <= 1000; count ++) { |
@@ -1062,26 +1073,26 @@ static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe, u32 device, | |||
1062 | } | 1073 | } |
1063 | 1074 | ||
1064 | *pdata = (status >> 8) & 0x000000ff; | 1075 | *pdata = (status >> 8) & 0x000000ff; |
1065 | return 1; | 1076 | return 1; |
1066 | } | 1077 | } |
1067 | 1078 | ||
1068 | 1079 | ||
1069 | static int pdc20621_detect_dimm(struct ata_probe_ent *pe) | 1080 | static int pdc20621_detect_dimm(struct ata_probe_ent *pe) |
1070 | { | 1081 | { |
1071 | u32 data=0 ; | 1082 | u32 data=0 ; |
1072 | if (pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, | 1083 | if (pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, |
1073 | PDC_DIMM_SPD_SYSTEM_FREQ, &data)) { | 1084 | PDC_DIMM_SPD_SYSTEM_FREQ, &data)) { |
1074 | if (data == 100) | 1085 | if (data == 100) |
1075 | return 100; | 1086 | return 100; |
1076 | } else | 1087 | } else |
1077 | return 0; | 1088 | return 0; |
1078 | 1089 | ||
1079 | if (pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, 9, &data)) { | 1090 | if (pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, 9, &data)) { |
1080 | if(data <= 0x75) | 1091 | if(data <= 0x75) |
1081 | return 133; | 1092 | return 133; |
1082 | } else | 1093 | } else |
1083 | return 0; | 1094 | return 0; |
1084 | 1095 | ||
1085 | return 0; | 1096 | return 0; |
1086 | } | 1097 | } |
1087 | 1098 | ||
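pdc20621_detect_dimm() above decides between 100 MHz and 133 MHz operation from two SPD reads: the Promise-specific PDC_DIMM_SPD_SYSTEM_FREQ byte, and standard SPD byte 9 (minimum SDRAM cycle time). The 0x75 cutoff corresponds to 7.5 ns if byte 9 follows the usual JEDEC encoding of nanoseconds in the high nibble and tenths of a nanosecond in the low nibble; that encoding is an assumption here, not something the driver spells out. A small sketch of the conversion:

    /* Sketch only: interpret SPD byte 9 under the assumed JEDEC encoding
     * (high nibble = ns, low nibble = tenths of ns) and convert to MHz.
     * 0x75 -> 7.5 ns -> ~133 MHz, matching the cutoff used above. */
    static int spd_byte9_to_mhz(u8 tck)
    {
    	unsigned int tenths_of_ns = (tck >> 4) * 10 + (tck & 0x0f);

    	if (!tenths_of_ns)
    		return 0;			/* invalid SPD contents */
    	return 10000 / tenths_of_ns;	/* MHz = 10000 / tenths-of-ns */
    }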
@@ -1091,15 +1102,15 @@ static int pdc20621_prog_dimm0(struct ata_probe_ent *pe) | |||
1091 | u32 spd0[50]; | 1102 | u32 spd0[50]; |
1092 | u32 data = 0; | 1103 | u32 data = 0; |
1093 | int size, i; | 1104 | int size, i; |
1094 | u8 bdimmsize; | 1105 | u8 bdimmsize; |
1095 | void *mmio = pe->mmio_base; | 1106 | void *mmio = pe->mmio_base; |
1096 | static const struct { | 1107 | static const struct { |
1097 | unsigned int reg; | 1108 | unsigned int reg; |
1098 | unsigned int ofs; | 1109 | unsigned int ofs; |
1099 | } pdc_i2c_read_data [] = { | 1110 | } pdc_i2c_read_data [] = { |
1100 | { PDC_DIMM_SPD_TYPE, 11 }, | 1111 | { PDC_DIMM_SPD_TYPE, 11 }, |
1101 | { PDC_DIMM_SPD_FRESH_RATE, 12 }, | 1112 | { PDC_DIMM_SPD_FRESH_RATE, 12 }, |
1102 | { PDC_DIMM_SPD_COLUMN_NUM, 4 }, | 1113 | { PDC_DIMM_SPD_COLUMN_NUM, 4 }, |
1103 | { PDC_DIMM_SPD_ATTRIBUTE, 21 }, | 1114 | { PDC_DIMM_SPD_ATTRIBUTE, 21 }, |
1104 | { PDC_DIMM_SPD_ROW_NUM, 3 }, | 1115 | { PDC_DIMM_SPD_ROW_NUM, 3 }, |
1105 | { PDC_DIMM_SPD_BANK_NUM, 17 }, | 1116 | { PDC_DIMM_SPD_BANK_NUM, 17 }, |
@@ -1108,7 +1119,7 @@ static int pdc20621_prog_dimm0(struct ata_probe_ent *pe) | |||
1108 | { PDC_DIMM_SPD_ROW_ACTIVE_DELAY, 28 }, | 1119 | { PDC_DIMM_SPD_ROW_ACTIVE_DELAY, 28 }, |
1109 | { PDC_DIMM_SPD_RAS_CAS_DELAY, 29 }, | 1120 | { PDC_DIMM_SPD_RAS_CAS_DELAY, 29 }, |
1110 | { PDC_DIMM_SPD_ACTIVE_PRECHARGE, 30 }, | 1121 | { PDC_DIMM_SPD_ACTIVE_PRECHARGE, 30 }, |
1111 | { PDC_DIMM_SPD_CAS_LATENCY, 18 }, | 1122 | { PDC_DIMM_SPD_CAS_LATENCY, 18 }, |
1112 | }; | 1123 | }; |
1113 | 1124 | ||
1114 | /* hard-code chip #0 */ | 1125 | /* hard-code chip #0 */ |
@@ -1116,17 +1127,17 @@ static int pdc20621_prog_dimm0(struct ata_probe_ent *pe) | |||
1116 | 1127 | ||
1117 | for(i=0; i<ARRAY_SIZE(pdc_i2c_read_data); i++) | 1128 | for(i=0; i<ARRAY_SIZE(pdc_i2c_read_data); i++) |
1118 | pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, | 1129 | pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, |
1119 | pdc_i2c_read_data[i].reg, | 1130 | pdc_i2c_read_data[i].reg, |
1120 | &spd0[pdc_i2c_read_data[i].ofs]); | 1131 | &spd0[pdc_i2c_read_data[i].ofs]); |
1121 | 1132 | ||
1122 | data |= (spd0[4] - 8) | ((spd0[21] != 0) << 3) | ((spd0[3]-11) << 4); | 1133 | data |= (spd0[4] - 8) | ((spd0[21] != 0) << 3) | ((spd0[3]-11) << 4); |
1123 | data |= ((spd0[17] / 4) << 6) | ((spd0[5] / 2) << 7) | | 1134 | data |= ((spd0[17] / 4) << 6) | ((spd0[5] / 2) << 7) | |
1124 | ((((spd0[27] + 9) / 10) - 1) << 8) ; | 1135 | ((((spd0[27] + 9) / 10) - 1) << 8) ; |
1125 | data |= (((((spd0[29] > spd0[28]) | 1136 | data |= (((((spd0[29] > spd0[28]) |
1126 | ? spd0[29] : spd0[28]) + 9) / 10) - 1) << 10; | 1137 | ? spd0[29] : spd0[28]) + 9) / 10) - 1) << 10; |
1127 | data |= ((spd0[30] - spd0[29] + 9) / 10 - 2) << 12; | 1138 | data |= ((spd0[30] - spd0[29] + 9) / 10 - 2) << 12; |
1128 | 1139 | ||
1129 | if (spd0[18] & 0x08) | 1140 | if (spd0[18] & 0x08) |
1130 | data |= ((0x03) << 14); | 1141 | data |= ((0x03) << 14); |
1131 | else if (spd0[18] & 0x04) | 1142 | else if (spd0[18] & 0x04) |
1132 | data |= ((0x02) << 14); | 1143 | data |= ((0x02) << 14); |
@@ -1135,7 +1146,7 @@ static int pdc20621_prog_dimm0(struct ata_probe_ent *pe) | |||
1135 | else | 1146 | else |
1136 | data |= (0 << 14); | 1147 | data |= (0 << 14); |
1137 | 1148 | ||
1138 | /* | 1149 | /* |
1139 | Calculate the size of bDIMMSize (power of 2) and | 1150 | Calculate the size of bDIMMSize (power of 2) and |
1140 | merge the DIMM size by program start/end address. | 1151 | merge the DIMM size by program start/end address. |
1141 | */ | 1152 | */ |
@@ -1145,9 +1156,9 @@ static int pdc20621_prog_dimm0(struct ata_probe_ent *pe) | |||
1145 | data |= (((size / 16) - 1) << 16); | 1156 | data |= (((size / 16) - 1) << 16); |
1146 | data |= (0 << 23); | 1157 | data |= (0 << 23); |
1147 | data |= 8; | 1158 | data |= 8; |
1148 | writel(data, mmio + PDC_DIMM0_CONTROL_OFFSET); | 1159 | writel(data, mmio + PDC_DIMM0_CONTROL_OFFSET); |
1149 | readl(mmio + PDC_DIMM0_CONTROL_OFFSET); | 1160 | readl(mmio + PDC_DIMM0_CONTROL_OFFSET); |
1150 | return size; | 1161 | return size; |
1151 | } | 1162 | } |
1152 | 1163 | ||
1153 | 1164 | ||
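The bit packing in pdc20621_prog_dimm0() repeatedly uses the form ((t + 9) / 10) - 1 on SPD timing bytes. Read as integer arithmetic this is a round-up division by 10, which looks like a conversion from nanoseconds to 10 ns (100 MHz) clock periods stored N-1 encoded; the 10 ns period is inferred from the arithmetic, not stated by the driver. A one-line sketch:

    /* Sketch: the "(t + 9) / 10 - 1" idiom above, assuming t is an SPD value
     * in nanoseconds and the field wants a clock count minus one at 10 ns per
     * clock. */
    static unsigned int ns_to_clocks_minus_one(unsigned int t_ns)
    {
    	return (t_ns + 9) / 10 - 1;	/* e.g. 20 ns -> 2 clocks -> field 1 */
    }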
@@ -1167,12 +1178,12 @@ static unsigned int pdc20621_prog_dimm_global(struct ata_probe_ent *pe) | |||
1167 | Refresh Enable (bit 17) | 1178 | Refresh Enable (bit 17) |
1168 | */ | 1179 | */ |
1169 | 1180 | ||
1170 | data = 0x022259F1; | 1181 | data = 0x022259F1; |
1171 | writel(data, mmio + PDC_SDRAM_CONTROL_OFFSET); | 1182 | writel(data, mmio + PDC_SDRAM_CONTROL_OFFSET); |
1172 | readl(mmio + PDC_SDRAM_CONTROL_OFFSET); | 1183 | readl(mmio + PDC_SDRAM_CONTROL_OFFSET); |
1173 | 1184 | ||
1174 | /* Turn on for ECC */ | 1185 | /* Turn on for ECC */ |
1175 | pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, | 1186 | pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, |
1176 | PDC_DIMM_SPD_TYPE, &spd0); | 1187 | PDC_DIMM_SPD_TYPE, &spd0); |
1177 | if (spd0 == 0x02) { | 1188 | if (spd0 == 0x02) { |
1178 | data |= (0x01 << 16); | 1189 | data |= (0x01 << 16); |
@@ -1186,22 +1197,22 @@ static unsigned int pdc20621_prog_dimm_global(struct ata_probe_ent *pe) | |||
1186 | data |= (1<<19); | 1197 | data |= (1<<19); |
1187 | writel(data, mmio + PDC_SDRAM_CONTROL_OFFSET); | 1198 | writel(data, mmio + PDC_SDRAM_CONTROL_OFFSET); |
1188 | 1199 | ||
1189 | error = 1; | 1200 | error = 1; |
1190 | for (i = 1; i <= 10; i++) { /* polling ~5 secs */ | 1201 | for (i = 1; i <= 10; i++) { /* polling ~5 secs */ |
1191 | data = readl(mmio + PDC_SDRAM_CONTROL_OFFSET); | 1202 | data = readl(mmio + PDC_SDRAM_CONTROL_OFFSET); |
1192 | if (!(data & (1<<19))) { | 1203 | if (!(data & (1<<19))) { |
1193 | error = 0; | 1204 | error = 0; |
1194 | break; | 1205 | break; |
1195 | } | 1206 | } |
1196 | msleep(i*100); | 1207 | msleep(i*100); |
1197 | } | 1208 | } |
1198 | return error; | 1209 | return error; |
1199 | } | 1210 | } |
1200 | 1211 | ||
1201 | 1212 | ||
1202 | static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe) | 1213 | static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe) |
1203 | { | 1214 | { |
1204 | int speed, size, length; | 1215 | int speed, size, length; |
1205 | u32 addr,spd0,pci_status; | 1216 | u32 addr,spd0,pci_status; |
1206 | u32 tmp=0; | 1217 | u32 tmp=0; |
1207 | u32 time_period=0; | 1218 | u32 time_period=0; |
@@ -1228,7 +1239,7 @@ static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe) | |||
1228 | /* Wait 3 seconds */ | 1239 | /* Wait 3 seconds */ |
1229 | msleep(3000); | 1240 | msleep(3000); |
1230 | 1241 | ||
1231 | /* | 1242 | /* |
1232 | When timer is enabled, counter is decreased every internal | 1243 | When timer is enabled, counter is decreased every internal |
1233 | clock cycle. | 1244 | clock cycle. |
1234 | */ | 1245 | */ |
@@ -1236,24 +1247,24 @@ static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe) | |||
1236 | tcount = readl(mmio + PDC_TIME_COUNTER); | 1247 | tcount = readl(mmio + PDC_TIME_COUNTER); |
1237 | VPRINTK("Time Counter Register (0x44): 0x%x\n", tcount); | 1248 | VPRINTK("Time Counter Register (0x44): 0x%x\n", tcount); |
1238 | 1249 | ||
1239 | /* | 1250 | /* |
1240 | If SX4 is on PCI-X bus, after 3 seconds, the timer counter | 1251 | If SX4 is on PCI-X bus, after 3 seconds, the timer counter |
1241 | register should be >= (0xffffffff - 3x10^8). | 1252 | register should be >= (0xffffffff - 3x10^8). |
1242 | */ | 1253 | */ |
1243 | if(tcount >= PCI_X_TCOUNT) { | 1254 | if(tcount >= PCI_X_TCOUNT) { |
1244 | ticks = (time_period - tcount); | 1255 | ticks = (time_period - tcount); |
1245 | VPRINTK("Num counters 0x%x (%d)\n", ticks, ticks); | 1256 | VPRINTK("Num counters 0x%x (%d)\n", ticks, ticks); |
1246 | 1257 | ||
1247 | clock = (ticks / 300000); | 1258 | clock = (ticks / 300000); |
1248 | VPRINTK("10 * Internal clk = 0x%x (%d)\n", clock, clock); | 1259 | VPRINTK("10 * Internal clk = 0x%x (%d)\n", clock, clock); |
1249 | 1260 | ||
1250 | clock = (clock * 33); | 1261 | clock = (clock * 33); |
1251 | VPRINTK("10 * Internal clk * 33 = 0x%x (%d)\n", clock, clock); | 1262 | VPRINTK("10 * Internal clk * 33 = 0x%x (%d)\n", clock, clock); |
1252 | 1263 | ||
1253 | /* PLL F Param (bit 22:16) */ | 1264 | /* PLL F Param (bit 22:16) */ |
1254 | fparam = (1400000 / clock) - 2; | 1265 | fparam = (1400000 / clock) - 2; |
1255 | VPRINTK("PLL F Param: 0x%x (%d)\n", fparam, fparam); | 1266 | VPRINTK("PLL F Param: 0x%x (%d)\n", fparam, fparam); |
1256 | 1267 | ||
1257 | /* OD param = 0x2 (bit 31:30), R param = 0x5 (bit 29:25) */ | 1268 | /* OD param = 0x2 (bit 31:30), R param = 0x5 (bit 29:25) */ |
1258 | pci_status = (0x8a001824 | (fparam << 16)); | 1269 | pci_status = (0x8a001824 | (fparam << 16)); |
1259 | } else | 1270 | } else |
@@ -1264,21 +1275,21 @@ static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe) | |||
1264 | writel(pci_status, mmio + PDC_CTL_STATUS); | 1275 | writel(pci_status, mmio + PDC_CTL_STATUS); |
1265 | readl(mmio + PDC_CTL_STATUS); | 1276 | readl(mmio + PDC_CTL_STATUS); |
1266 | 1277 | ||
1267 | /* | 1278 | /* |
1268 | Read SPD of DIMM by I2C interface, | 1279 | Read SPD of DIMM by I2C interface, |
1269 | and program the DIMM Module Controller. | 1280 | and program the DIMM Module Controller. |
1270 | */ | 1281 | */ |
1271 | if (!(speed = pdc20621_detect_dimm(pe))) { | 1282 | if (!(speed = pdc20621_detect_dimm(pe))) { |
1272 | printk(KERN_ERR "Detect Local DIMM Fail\n"); | 1283 | printk(KERN_ERR "Detect Local DIMM Fail\n"); |
1273 | return 1; /* DIMM error */ | 1284 | return 1; /* DIMM error */ |
1274 | } | 1285 | } |
1275 | VPRINTK("Local DIMM Speed = %d\n", speed); | 1286 | VPRINTK("Local DIMM Speed = %d\n", speed); |
1276 | 1287 | ||
1277 | /* Programming DIMM0 Module Control Register (index_CID0:80h) */ | 1288 | /* Programming DIMM0 Module Control Register (index_CID0:80h) */ |
1278 | size = pdc20621_prog_dimm0(pe); | 1289 | size = pdc20621_prog_dimm0(pe); |
1279 | VPRINTK("Local DIMM Size = %dMB\n",size); | 1290 | VPRINTK("Local DIMM Size = %dMB\n",size); |
1280 | 1291 | ||
1281 | /* Programming DIMM Module Global Control Register (index_CID0:88h) */ | 1292 | /* Programming DIMM Module Global Control Register (index_CID0:88h) */ |
1282 | if (pdc20621_prog_dimm_global(pe)) { | 1293 | if (pdc20621_prog_dimm_global(pe)) { |
1283 | printk(KERN_ERR "Programming DIMM Module Global Control Register Fail\n"); | 1294 | printk(KERN_ERR "Programming DIMM Module Global Control Register Fail\n"); |
1284 | return 1; | 1295 | return 1; |
@@ -1297,30 +1308,30 @@ static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe) | |||
1297 | 1308 | ||
1298 | pdc20621_put_to_dimm(pe, (void *) test_parttern1, 0x10040, 40); | 1309 | pdc20621_put_to_dimm(pe, (void *) test_parttern1, 0x10040, 40); |
1299 | pdc20621_get_from_dimm(pe, (void *) test_parttern2, 0x40, 40); | 1310 | pdc20621_get_from_dimm(pe, (void *) test_parttern2, 0x40, 40); |
1300 | printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0], | 1311 | printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0], |
1301 | test_parttern2[1], &(test_parttern2[2])); | 1312 | test_parttern2[1], &(test_parttern2[2])); |
1302 | pdc20621_get_from_dimm(pe, (void *) test_parttern2, 0x10040, | 1313 | pdc20621_get_from_dimm(pe, (void *) test_parttern2, 0x10040, |
1303 | 40); | 1314 | 40); |
1304 | printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0], | 1315 | printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0], |
1305 | test_parttern2[1], &(test_parttern2[2])); | 1316 | test_parttern2[1], &(test_parttern2[2])); |
1306 | 1317 | ||
1307 | pdc20621_put_to_dimm(pe, (void *) test_parttern1, 0x40, 40); | 1318 | pdc20621_put_to_dimm(pe, (void *) test_parttern1, 0x40, 40); |
1308 | pdc20621_get_from_dimm(pe, (void *) test_parttern2, 0x40, 40); | 1319 | pdc20621_get_from_dimm(pe, (void *) test_parttern2, 0x40, 40); |
1309 | printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0], | 1320 | printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0], |
1310 | test_parttern2[1], &(test_parttern2[2])); | 1321 | test_parttern2[1], &(test_parttern2[2])); |
1311 | } | 1322 | } |
1312 | #endif | 1323 | #endif |
1313 | 1324 | ||
1314 | /* ECC initialization. */ | 1325 | /* ECC initialization. */ |
1315 | 1326 | ||
1316 | pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, | 1327 | pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, |
1317 | PDC_DIMM_SPD_TYPE, &spd0); | 1328 | PDC_DIMM_SPD_TYPE, &spd0); |
1318 | if (spd0 == 0x02) { | 1329 | if (spd0 == 0x02) { |
1319 | VPRINTK("Start ECC initialization\n"); | 1330 | VPRINTK("Start ECC initialization\n"); |
1320 | addr = 0; | 1331 | addr = 0; |
1321 | length = size * 1024 * 1024; | 1332 | length = size * 1024 * 1024; |
1322 | while (addr < length) { | 1333 | while (addr < length) { |
1323 | pdc20621_put_to_dimm(pe, (void *) &tmp, addr, | 1334 | pdc20621_put_to_dimm(pe, (void *) &tmp, addr, |
1324 | sizeof(u32)); | 1335 | sizeof(u32)); |
1325 | addr += sizeof(u32); | 1336 | addr += sizeof(u32); |
1326 | } | 1337 | } |
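For orientation, the hunks above amount to a fixed bring-up sequence inside pdc20621_dimm_init(): program the SDRAM controller, detect the DIMM speed over I2C, program the DIMM0 module control register, program the global control register, and for ECC parts zero the whole DIMM so the check bits start out consistent. A condensed sketch of that order, with error handling and the debug test-pattern path trimmed:

    /* Condensed outline of pdc20621_dimm_init(), built only from the calls
     * visible above; not a drop-in replacement. */
    static unsigned int pdc20621_dimm_init_outline(struct ata_probe_ent *pe)
    {
    	int speed, size;

    	speed = pdc20621_detect_dimm(pe);	/* SPD reads via I2C */
    	if (!speed)
    		return 1;			/* DIMM error */

    	size = pdc20621_prog_dimm0(pe);		/* DIMM0 module control (index 0x80) */
    	if (pdc20621_prog_dimm_global(pe))	/* global control (index 0x88) */
    		return 1;

    	/* ECC DIMMs (SPD type byte == 0x02): zero the array word by word
    	 * with pdc20621_put_to_dimm() before use. */
    	return 0;
    }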
diff --git a/drivers/scsi/sata_uli.c b/drivers/scsi/sata_uli.c index a71fb54eebd3..1566886815fb 100644 --- a/drivers/scsi/sata_uli.c +++ b/drivers/scsi/sata_uli.c | |||
@@ -1,21 +1,26 @@ | |||
1 | /* | 1 | /* |
2 | * sata_uli.c - ULi Electronics SATA | 2 | * sata_uli.c - ULi Electronics SATA |
3 | * | 3 | * |
4 | * The contents of this file are subject to the Open | ||
5 | * Software License version 1.1 that can be found at | ||
6 | * http://www.opensource.org/licenses/osl-1.1.txt and is included herein | ||
7 | * by reference. | ||
8 | * | 4 | * |
9 | * Alternatively, the contents of this file may be used under the terms | 5 | * This program is free software; you can redistribute it and/or modify |
10 | * of the GNU General Public License version 2 (the "GPL") as distributed | 6 | * it under the terms of the GNU General Public License as published by |
11 | * in the kernel source COPYING file, in which case the provisions of | 7 | * the Free Software Foundation; either version 2, or (at your option) |
12 | * the GPL are applicable instead of the above. If you wish to allow | 8 | * any later version. |
13 | * the use of your version of this file only under the terms of the | 9 | * |
14 | * GPL and not to allow others to use your version of this file under | 10 | * This program is distributed in the hope that it will be useful, |
15 | * the OSL, indicate your decision by deleting the provisions above and | 11 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
16 | * replace them with the notice and other provisions required by the GPL. | 12 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
17 | * If you do not delete the provisions above, a recipient may use your | 13 | * GNU General Public License for more details. |
18 | * version of this file under either the OSL or the GPL. | 14 | * |
15 | * You should have received a copy of the GNU General Public License | ||
16 | * along with this program; see the file COPYING. If not, write to | ||
17 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
18 | * | ||
19 | * | ||
20 | * libata documentation is available via 'make {ps|pdf}docs', | ||
21 | * as Documentation/DocBook/libata.* | ||
22 | * | ||
23 | * Hardware documentation available under NDA. | ||
19 | * | 24 | * |
20 | */ | 25 | */ |
21 | 26 | ||
@@ -214,7 +219,7 @@ static int uli_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) | |||
214 | rc = -ENOMEM; | 219 | rc = -ENOMEM; |
215 | goto err_out_regions; | 220 | goto err_out_regions; |
216 | } | 221 | } |
217 | 222 | ||
218 | switch (board_idx) { | 223 | switch (board_idx) { |
219 | case uli_5287: | 224 | case uli_5287: |
220 | probe_ent->port[0].scr_addr = ULI5287_BASE; | 225 | probe_ent->port[0].scr_addr = ULI5287_BASE; |
diff --git a/drivers/scsi/sata_via.c b/drivers/scsi/sata_via.c index f43183c19a12..128b996b07b7 100644 --- a/drivers/scsi/sata_via.c +++ b/drivers/scsi/sata_via.c | |||
@@ -1,34 +1,38 @@ | |||
1 | /* | 1 | /* |
2 | sata_via.c - VIA Serial ATA controllers | 2 | * sata_via.c - VIA Serial ATA controllers |
3 | 3 | * | |
4 | Maintained by: Jeff Garzik <jgarzik@pobox.com> | 4 | * Maintained by: Jeff Garzik <jgarzik@pobox.com> |
5 | Please ALWAYS copy linux-ide@vger.kernel.org | 5 | * Please ALWAYS copy linux-ide@vger.kernel.org |
6 | on emails. | 6 | on emails. |
7 | 7 | * | |
8 | Copyright 2003-2004 Red Hat, Inc. All rights reserved. | 8 | * Copyright 2003-2004 Red Hat, Inc. All rights reserved. |
9 | Copyright 2003-2004 Jeff Garzik | 9 | * Copyright 2003-2004 Jeff Garzik |
10 | 10 | * | |
11 | The contents of this file are subject to the Open | 11 | * |
12 | Software License version 1.1 that can be found at | 12 | * This program is free software; you can redistribute it and/or modify |
13 | http://www.opensource.org/licenses/osl-1.1.txt and is included herein | 13 | * it under the terms of the GNU General Public License as published by |
14 | by reference. | 14 | * the Free Software Foundation; either version 2, or (at your option) |
15 | 15 | * any later version. | |
16 | Alternatively, the contents of this file may be used under the terms | 16 | * |
17 | of the GNU General Public License version 2 (the "GPL") as distributed | 17 | * This program is distributed in the hope that it will be useful, |
18 | in the kernel source COPYING file, in which case the provisions of | 18 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
19 | the GPL are applicable instead of the above. If you wish to allow | 19 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
20 | the use of your version of this file only under the terms of the | 20 | * GNU General Public License for more details. |
21 | GPL and not to allow others to use your version of this file under | 21 | * |
22 | the OSL, indicate your decision by deleting the provisions above and | 22 | * You should have received a copy of the GNU General Public License |
23 | replace them with the notice and other provisions required by the GPL. | 23 | * along with this program; see the file COPYING. If not, write to |
24 | If you do not delete the provisions above, a recipient may use your | 24 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. |
25 | version of this file under either the OSL or the GPL. | 25 | * |
26 | 26 | * | |
27 | ---------------------------------------------------------------------- | 27 | * libata documentation is available via 'make {ps|pdf}docs', |
28 | 28 | * as Documentation/DocBook/libata.* | |
29 | To-do list: | 29 | * |
30 | * VT6421 PATA support | 30 | * Hardware documentation available under NDA. |
31 | 31 | * | |
32 | * | ||
33 | * To-do list: | ||
34 | * - VT6421 PATA support | ||
35 | * | ||
32 | */ | 36 | */ |
33 | 37 | ||
34 | #include <linux/kernel.h> | 38 | #include <linux/kernel.h> |
@@ -347,7 +351,7 @@ static int svia_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) | |||
347 | probe_ent = vt6420_init_probe_ent(pdev); | 351 | probe_ent = vt6420_init_probe_ent(pdev); |
348 | else | 352 | else |
349 | probe_ent = vt6421_init_probe_ent(pdev); | 353 | probe_ent = vt6421_init_probe_ent(pdev); |
350 | 354 | ||
351 | if (!probe_ent) { | 355 | if (!probe_ent) { |
352 | printk(KERN_ERR DRV_NAME "(%s): out of memory\n", | 356 | printk(KERN_ERR DRV_NAME "(%s): out of memory\n", |
353 | pci_name(pdev)); | 357 | pci_name(pdev)); |
diff --git a/drivers/scsi/sata_vsc.c b/drivers/scsi/sata_vsc.c index c5e09dc6f3de..3985f344da4d 100644 --- a/drivers/scsi/sata_vsc.c +++ b/drivers/scsi/sata_vsc.c | |||
@@ -9,9 +9,29 @@ | |||
9 | * | 9 | * |
10 | * Bits from Jeff Garzik, Copyright RedHat, Inc. | 10 | * Bits from Jeff Garzik, Copyright RedHat, Inc. |
11 | * | 11 | * |
12 | * This file is subject to the terms and conditions of the GNU General Public | 12 | * |
13 | * License. See the file "COPYING" in the main directory of this archive | 13 | * This program is free software; you can redistribute it and/or modify |
14 | * for more details. | 14 | * it under the terms of the GNU General Public License as published by |
15 | * the Free Software Foundation; either version 2, or (at your option) | ||
16 | * any later version. | ||
17 | * | ||
18 | * This program is distributed in the hope that it will be useful, | ||
19 | * but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
20 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
21 | * GNU General Public License for more details. | ||
22 | * | ||
23 | * You should have received a copy of the GNU General Public License | ||
24 | * along with this program; see the file COPYING. If not, write to | ||
25 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. | ||
26 | * | ||
27 | * | ||
28 | * libata documentation is available via 'make {ps|pdf}docs', | ||
29 | * as Documentation/DocBook/libata.* | ||
30 | * | ||
31 | * Vitesse hardware documentation presumably available under NDA. | ||
32 | * Intel 31244 (same hardware interface) documentation presumably | ||
33 | * available from http://developer.intel.com/ | ||
34 | * | ||
15 | */ | 35 | */ |
16 | 36 | ||
17 | #include <linux/kernel.h> | 37 | #include <linux/kernel.h> |
@@ -173,7 +193,8 @@ static irqreturn_t vsc_sata_interrupt (int irq, void *dev_instance, | |||
173 | struct ata_port *ap; | 193 | struct ata_port *ap; |
174 | 194 | ||
175 | ap = host_set->ports[i]; | 195 | ap = host_set->ports[i]; |
176 | if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) { | 196 | if (ap && !(ap->flags & |
197 | (ATA_FLAG_PORT_DISABLED|ATA_FLAG_NOINTR))) { | ||
177 | struct ata_queued_cmd *qc; | 198 | struct ata_queued_cmd *qc; |
178 | 199 | ||
179 | qc = ata_qc_from_tag(ap, ap->active_tag); | 200 | qc = ata_qc_from_tag(ap, ap->active_tag); |
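The widened flag test in the interrupt handler means a port is now skipped not only when it is disabled but also when ATA_FLAG_NOINTR is set, which libata presumably uses while it completes a command by polling and wants the hardware interrupt ignored. Reduced to its shape:

    /* The new per-port gate, isolated; handle_port() is a stand-in for the
     * qc lookup and completion done in the body above. */
    if (ap && !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)))
    	handle_port(ap);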
@@ -342,7 +363,7 @@ static int __devinit vsc_sata_init_one (struct pci_dev *pdev, const struct pci_d | |||
342 | 363 | ||
343 | pci_set_master(pdev); | 364 | pci_set_master(pdev); |
344 | 365 | ||
345 | /* | 366 | /* |
346 | * Config offset 0x98 is "Extended Control and Status Register 0" | 367 | * Config offset 0x98 is "Extended Control and Status Register 0" |
347 | * Default value is (1 << 28). All bits except bit 28 are reserved in | 368 | * Default value is (1 << 28). All bits except bit 28 are reserved in |
348 | * DPA mode. If bit 28 is set, LED 0 reflects all ports' activity. | 369 | * DPA mode. If bit 28 is set, LED 0 reflects all ports' activity. |
diff --git a/drivers/serial/8250_pci.c b/drivers/serial/8250_pci.c index 07f05e9d0955..0e21f583690e 100644 --- a/drivers/serial/8250_pci.c +++ b/drivers/serial/8250_pci.c | |||
@@ -34,36 +34,6 @@ | |||
34 | #undef SERIAL_DEBUG_PCI | 34 | #undef SERIAL_DEBUG_PCI |
35 | 35 | ||
36 | /* | 36 | /* |
37 | * Definitions for PCI support. | ||
38 | */ | ||
39 | #define FL_BASE_MASK 0x0007 | ||
40 | #define FL_BASE0 0x0000 | ||
41 | #define FL_BASE1 0x0001 | ||
42 | #define FL_BASE2 0x0002 | ||
43 | #define FL_BASE3 0x0003 | ||
44 | #define FL_BASE4 0x0004 | ||
45 | #define FL_GET_BASE(x) (x & FL_BASE_MASK) | ||
46 | |||
47 | /* Use successive BARs (PCI base address registers), | ||
48 | else use offset into some specified BAR */ | ||
49 | #define FL_BASE_BARS 0x0008 | ||
50 | |||
51 | /* do not assign an irq */ | ||
52 | #define FL_NOIRQ 0x0080 | ||
53 | |||
54 | /* Use the Base address register size to cap number of ports */ | ||
55 | #define FL_REGION_SZ_CAP 0x0100 | ||
56 | |||
57 | struct pci_board { | ||
58 | unsigned int flags; | ||
59 | unsigned int num_ports; | ||
60 | unsigned int base_baud; | ||
61 | unsigned int uart_offset; | ||
62 | unsigned int reg_shift; | ||
63 | unsigned int first_offset; | ||
64 | }; | ||
65 | |||
66 | /* | ||
67 | * init function returns: | 37 | * init function returns: |
68 | * > 0 - number of ports | 38 | * > 0 - number of ports |
69 | * = 0 - use board->num_ports | 39 | * = 0 - use board->num_ports |
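The FL_* flags and the board descriptor deleted in this hunk are not gone: the rest of the patch keeps using the same fields under the new name pciserial_board, so the definitions presumably moved to a shared header where other users of the exported pciserial_* functions can see them. Reconstructed from the removed code, the descriptor looks like:

    /* Field-for-field sketch of the descriptor as implied by the removed
     * struct pci_board and the pciserial_board uses later in this file;
     * the new header location is assumed, it is not visible in this hunk. */
    struct pciserial_board {
    	unsigned int flags;		/* FL_BASE0..FL_BASE4, FL_BASE_BARS,
    					 * FL_NOIRQ, FL_REGION_SZ_CAP */
    	unsigned int num_ports;
    	unsigned int base_baud;
    	unsigned int uart_offset;	/* register spacing between ports */
    	unsigned int reg_shift;
    	unsigned int first_offset;
    };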
@@ -75,14 +45,15 @@ struct pci_serial_quirk { | |||
75 | u32 subvendor; | 45 | u32 subvendor; |
76 | u32 subdevice; | 46 | u32 subdevice; |
77 | int (*init)(struct pci_dev *dev); | 47 | int (*init)(struct pci_dev *dev); |
78 | int (*setup)(struct pci_dev *dev, struct pci_board *board, | 48 | int (*setup)(struct serial_private *, struct pciserial_board *, |
79 | struct uart_port *port, int idx); | 49 | struct uart_port *, int); |
80 | void (*exit)(struct pci_dev *dev); | 50 | void (*exit)(struct pci_dev *dev); |
81 | }; | 51 | }; |
82 | 52 | ||
83 | #define PCI_NUM_BAR_RESOURCES 6 | 53 | #define PCI_NUM_BAR_RESOURCES 6 |
84 | 54 | ||
85 | struct serial_private { | 55 | struct serial_private { |
56 | struct pci_dev *dev; | ||
86 | unsigned int nr; | 57 | unsigned int nr; |
87 | void __iomem *remapped_bar[PCI_NUM_BAR_RESOURCES]; | 58 | void __iomem *remapped_bar[PCI_NUM_BAR_RESOURCES]; |
88 | struct pci_serial_quirk *quirk; | 59 | struct pci_serial_quirk *quirk; |
@@ -101,17 +72,18 @@ static void moan_device(const char *str, struct pci_dev *dev) | |||
101 | } | 72 | } |
102 | 73 | ||
103 | static int | 74 | static int |
104 | setup_port(struct pci_dev *dev, struct uart_port *port, | 75 | setup_port(struct serial_private *priv, struct uart_port *port, |
105 | int bar, int offset, int regshift) | 76 | int bar, int offset, int regshift) |
106 | { | 77 | { |
107 | struct serial_private *priv = pci_get_drvdata(dev); | 78 | struct pci_dev *dev = priv->dev; |
108 | unsigned long base, len; | 79 | unsigned long base, len; |
109 | 80 | ||
110 | if (bar >= PCI_NUM_BAR_RESOURCES) | 81 | if (bar >= PCI_NUM_BAR_RESOURCES) |
111 | return -EINVAL; | 82 | return -EINVAL; |
112 | 83 | ||
84 | base = pci_resource_start(dev, bar); | ||
85 | |||
113 | if (pci_resource_flags(dev, bar) & IORESOURCE_MEM) { | 86 | if (pci_resource_flags(dev, bar) & IORESOURCE_MEM) { |
114 | base = pci_resource_start(dev, bar); | ||
115 | len = pci_resource_len(dev, bar); | 87 | len = pci_resource_len(dev, bar); |
116 | 88 | ||
117 | if (!priv->remapped_bar[bar]) | 89 | if (!priv->remapped_bar[bar]) |
@@ -120,13 +92,16 @@ setup_port(struct pci_dev *dev, struct uart_port *port, | |||
120 | return -ENOMEM; | 92 | return -ENOMEM; |
121 | 93 | ||
122 | port->iotype = UPIO_MEM; | 94 | port->iotype = UPIO_MEM; |
95 | port->iobase = 0; | ||
123 | port->mapbase = base + offset; | 96 | port->mapbase = base + offset; |
124 | port->membase = priv->remapped_bar[bar] + offset; | 97 | port->membase = priv->remapped_bar[bar] + offset; |
125 | port->regshift = regshift; | 98 | port->regshift = regshift; |
126 | } else { | 99 | } else { |
127 | base = pci_resource_start(dev, bar) + offset; | ||
128 | port->iotype = UPIO_PORT; | 100 | port->iotype = UPIO_PORT; |
129 | port->iobase = base; | 101 | port->iobase = base + offset; |
102 | port->mapbase = 0; | ||
103 | port->membase = NULL; | ||
104 | port->regshift = 0; | ||
130 | } | 105 | } |
131 | return 0; | 106 | return 0; |
132 | } | 107 | } |
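Two related changes meet in setup_port(): the quirk ->setup() hooks now take a struct serial_private instead of a bare pci_dev, and setup_port() reads the device from priv->dev rather than from pci_get_drvdata(). That removes the hidden dependency on drvdata having been set by the PCI probe path, which is what lets the same code be driven by the pciserial_init_ports() entry point added later in this patch. A quirk setup under the new convention is just a thin wrapper; an illustrative one (the real quirks below apply their own BAR/offset rules):

    /* Illustrative quirk ->setup() under the new calling convention; not an
     * actual quirk from this file. */
    static int example_setup(struct serial_private *priv,
    			 struct pciserial_board *board,
    			 struct uart_port *port, int idx)
    {
    	unsigned int bar = FL_GET_BASE(board->flags);
    	unsigned int offset = board->first_offset + idx * board->uart_offset;

    	return setup_port(priv, port, bar, offset, board->reg_shift);
    }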
@@ -136,7 +111,7 @@ setup_port(struct pci_dev *dev, struct uart_port *port, | |||
136 | * Not that ugly ;) -- HW | 111 | * Not that ugly ;) -- HW |
137 | */ | 112 | */ |
138 | static int | 113 | static int |
139 | afavlab_setup(struct pci_dev *dev, struct pci_board *board, | 114 | afavlab_setup(struct serial_private *priv, struct pciserial_board *board, |
140 | struct uart_port *port, int idx) | 115 | struct uart_port *port, int idx) |
141 | { | 116 | { |
142 | unsigned int bar, offset = board->first_offset; | 117 | unsigned int bar, offset = board->first_offset; |
@@ -149,7 +124,7 @@ afavlab_setup(struct pci_dev *dev, struct pci_board *board, | |||
149 | offset += (idx - 4) * board->uart_offset; | 124 | offset += (idx - 4) * board->uart_offset; |
150 | } | 125 | } |
151 | 126 | ||
152 | return setup_port(dev, port, bar, offset, board->reg_shift); | 127 | return setup_port(priv, port, bar, offset, board->reg_shift); |
153 | } | 128 | } |
154 | 129 | ||
155 | /* | 130 | /* |
@@ -189,13 +164,13 @@ static int __devinit pci_hp_diva_init(struct pci_dev *dev) | |||
189 | * some serial ports are supposed to be hidden on certain models. | 164 | * some serial ports are supposed to be hidden on certain models. |
190 | */ | 165 | */ |
191 | static int | 166 | static int |
192 | pci_hp_diva_setup(struct pci_dev *dev, struct pci_board *board, | 167 | pci_hp_diva_setup(struct serial_private *priv, struct pciserial_board *board, |
193 | struct uart_port *port, int idx) | 168 | struct uart_port *port, int idx) |
194 | { | 169 | { |
195 | unsigned int offset = board->first_offset; | 170 | unsigned int offset = board->first_offset; |
196 | unsigned int bar = FL_GET_BASE(board->flags); | 171 | unsigned int bar = FL_GET_BASE(board->flags); |
197 | 172 | ||
198 | switch (dev->subsystem_device) { | 173 | switch (priv->dev->subsystem_device) { |
199 | case PCI_DEVICE_ID_HP_DIVA_MAESTRO: | 174 | case PCI_DEVICE_ID_HP_DIVA_MAESTRO: |
200 | if (idx == 3) | 175 | if (idx == 3) |
201 | idx++; | 176 | idx++; |
@@ -212,7 +187,7 @@ pci_hp_diva_setup(struct pci_dev *dev, struct pci_board *board, | |||
212 | 187 | ||
213 | offset += idx * board->uart_offset; | 188 | offset += idx * board->uart_offset; |
214 | 189 | ||
215 | return setup_port(dev, port, bar, offset, board->reg_shift); | 190 | return setup_port(priv, port, bar, offset, board->reg_shift); |
216 | } | 191 | } |
217 | 192 | ||
218 | /* | 193 | /* |
@@ -307,7 +282,7 @@ static void __devexit pci_plx9050_exit(struct pci_dev *dev) | |||
307 | 282 | ||
308 | /* SBS Technologies Inc. PMC-OCTPRO and P-OCTAL cards */ | 283 | /* SBS Technologies Inc. PMC-OCTPRO and P-OCTAL cards */ |
309 | static int | 284 | static int |
310 | sbs_setup(struct pci_dev *dev, struct pci_board *board, | 285 | sbs_setup(struct serial_private *priv, struct pciserial_board *board, |
311 | struct uart_port *port, int idx) | 286 | struct uart_port *port, int idx) |
312 | { | 287 | { |
313 | unsigned int bar, offset = board->first_offset; | 288 | unsigned int bar, offset = board->first_offset; |
@@ -323,7 +298,7 @@ sbs_setup(struct pci_dev *dev, struct pci_board *board, | |||
323 | } else /* we have only 8 ports on PMC-OCTALPRO */ | 298 | } else /* we have only 8 ports on PMC-OCTALPRO */ |
324 | return 1; | 299 | return 1; |
325 | 300 | ||
326 | return setup_port(dev, port, bar, offset, board->reg_shift); | 301 | return setup_port(priv, port, bar, offset, board->reg_shift); |
327 | } | 302 | } |
328 | 303 | ||
329 | /* | 304 | /* |
@@ -389,6 +364,9 @@ static void __devexit sbs_exit(struct pci_dev *dev) | |||
389 | * - 10x cards have control registers in IO and/or memory space; | 364 | * - 10x cards have control registers in IO and/or memory space; |
390 | * - 20x cards have control registers in standard PCI configuration space. | 365 | * - 20x cards have control registers in standard PCI configuration space. |
391 | * | 366 | * |
367 | * Note: all 10x cards have PCI device ids 0x10.. | ||
368 | * all 20x cards have PCI device ids 0x20.. | ||
369 | * | ||
392 | * There are also Quartet Serial cards which use Oxford Semiconductor | 370 | * There are also Quartet Serial cards which use Oxford Semiconductor |
393 | * 16954 quad UART PCI chip clocked by 18.432 MHz quartz. | 371 | * 16954 quad UART PCI chip clocked by 18.432 MHz quartz. |
394 | * | 372 | * |
@@ -445,24 +423,18 @@ static int pci_siig20x_init(struct pci_dev *dev) | |||
445 | return 0; | 423 | return 0; |
446 | } | 424 | } |
447 | 425 | ||
448 | int pci_siig10x_fn(struct pci_dev *dev, int enable) | 426 | static int pci_siig_init(struct pci_dev *dev) |
449 | { | 427 | { |
450 | int ret = 0; | 428 | unsigned int type = dev->device & 0xff00; |
451 | if (enable) | ||
452 | ret = pci_siig10x_init(dev); | ||
453 | return ret; | ||
454 | } | ||
455 | 429 | ||
456 | int pci_siig20x_fn(struct pci_dev *dev, int enable) | 430 | if (type == 0x1000) |
457 | { | 431 | return pci_siig10x_init(dev); |
458 | int ret = 0; | 432 | else if (type == 0x2000) |
459 | if (enable) | 433 | return pci_siig20x_init(dev); |
460 | ret = pci_siig20x_init(dev); | ||
461 | return ret; | ||
462 | } | ||
463 | 434 | ||
464 | EXPORT_SYMBOL(pci_siig10x_fn); | 435 | moan_device("Unknown SIIG card", dev); |
465 | EXPORT_SYMBOL(pci_siig20x_fn); | 436 | return -ENODEV; |
437 | } | ||
466 | 438 | ||
467 | /* | 439 | /* |
468 | * Timedia has an explosion of boards, and to avoid the PCI table from | 440 | * Timedia has an explosion of boards, and to avoid the PCI table from |
@@ -523,7 +495,7 @@ static int __devinit pci_timedia_init(struct pci_dev *dev) | |||
523 | * Ugh, this is ugly as all hell --- TYT | 495 | * Ugh, this is ugly as all hell --- TYT |
524 | */ | 496 | */ |
525 | static int | 497 | static int |
526 | pci_timedia_setup(struct pci_dev *dev, struct pci_board *board, | 498 | pci_timedia_setup(struct serial_private *priv, struct pciserial_board *board, |
527 | struct uart_port *port, int idx) | 499 | struct uart_port *port, int idx) |
528 | { | 500 | { |
529 | unsigned int bar = 0, offset = board->first_offset; | 501 | unsigned int bar = 0, offset = board->first_offset; |
@@ -549,14 +521,15 @@ pci_timedia_setup(struct pci_dev *dev, struct pci_board *board, | |||
549 | bar = idx - 2; | 521 | bar = idx - 2; |
550 | } | 522 | } |
551 | 523 | ||
552 | return setup_port(dev, port, bar, offset, board->reg_shift); | 524 | return setup_port(priv, port, bar, offset, board->reg_shift); |
553 | } | 525 | } |
554 | 526 | ||
555 | /* | 527 | /* |
556 | * Some Titan cards are also a little weird | 528 | * Some Titan cards are also a little weird |
557 | */ | 529 | */ |
558 | static int | 530 | static int |
559 | titan_400l_800l_setup(struct pci_dev *dev, struct pci_board *board, | 531 | titan_400l_800l_setup(struct serial_private *priv, |
532 | struct pciserial_board *board, | ||
560 | struct uart_port *port, int idx) | 533 | struct uart_port *port, int idx) |
561 | { | 534 | { |
562 | unsigned int bar, offset = board->first_offset; | 535 | unsigned int bar, offset = board->first_offset; |
@@ -573,7 +546,7 @@ titan_400l_800l_setup(struct pci_dev *dev, struct pci_board *board, | |||
573 | offset = (idx - 2) * board->uart_offset; | 546 | offset = (idx - 2) * board->uart_offset; |
574 | } | 547 | } |
575 | 548 | ||
576 | return setup_port(dev, port, bar, offset, board->reg_shift); | 549 | return setup_port(priv, port, bar, offset, board->reg_shift); |
577 | } | 550 | } |
578 | 551 | ||
579 | static int __devinit pci_xircom_init(struct pci_dev *dev) | 552 | static int __devinit pci_xircom_init(struct pci_dev *dev) |
@@ -593,7 +566,7 @@ static int __devinit pci_netmos_init(struct pci_dev *dev) | |||
593 | } | 566 | } |
594 | 567 | ||
595 | static int | 568 | static int |
596 | pci_default_setup(struct pci_dev *dev, struct pci_board *board, | 569 | pci_default_setup(struct serial_private *priv, struct pciserial_board *board, |
597 | struct uart_port *port, int idx) | 570 | struct uart_port *port, int idx) |
598 | { | 571 | { |
599 | unsigned int bar, offset = board->first_offset, maxnr; | 572 | unsigned int bar, offset = board->first_offset, maxnr; |
@@ -604,13 +577,13 @@ pci_default_setup(struct pci_dev *dev, struct pci_board *board, | |||
604 | else | 577 | else |
605 | offset += idx * board->uart_offset; | 578 | offset += idx * board->uart_offset; |
606 | 579 | ||
607 | maxnr = (pci_resource_len(dev, bar) - board->first_offset) / | 580 | maxnr = (pci_resource_len(priv->dev, bar) - board->first_offset) / |
608 | (8 << board->reg_shift); | 581 | (8 << board->reg_shift); |
609 | 582 | ||
610 | if (board->flags & FL_REGION_SZ_CAP && idx >= maxnr) | 583 | if (board->flags & FL_REGION_SZ_CAP && idx >= maxnr) |
611 | return 1; | 584 | return 1; |
612 | 585 | ||
613 | return setup_port(dev, port, bar, offset, board->reg_shift); | 586 | return setup_port(priv, port, bar, offset, board->reg_shift); |
614 | } | 587 | } |
615 | 588 | ||
616 | /* This should be in linux/pci_ids.h */ | 589 | /* This should be in linux/pci_ids.h */ |
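The maxnr computation in pci_default_setup() caps the number of ports by how many register windows of 8 << reg_shift bytes fit in the BAR after first_offset, enforced only when the board sets FL_REGION_SZ_CAP. A worked example with illustrative numbers:

    /* Illustrative numbers, not from a specific board: a 64-byte BAR with
     * reg_shift == 0 and first_offset == 0 caps the board at 8 ports. */
    unsigned int bar_len = 64, first_offset = 0, reg_shift = 0;
    unsigned int maxnr = (bar_len - first_offset) / (8 << reg_shift);	/* == 8 */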
@@ -754,152 +727,15 @@ static struct pci_serial_quirk pci_serial_quirks[] = { | |||
754 | .setup = sbs_setup, | 727 | .setup = sbs_setup, |
755 | .exit = __devexit_p(sbs_exit), | 728 | .exit = __devexit_p(sbs_exit), |
756 | }, | 729 | }, |
757 | |||
758 | /* | 730 | /* |
759 | * SIIG cards. | 731 | * SIIG cards. |
760 | * It is not clear whether these could be collapsed. | ||
761 | */ | 732 | */ |
762 | { | 733 | { |
763 | .vendor = PCI_VENDOR_ID_SIIG, | 734 | .vendor = PCI_VENDOR_ID_SIIG, |
764 | .device = PCI_DEVICE_ID_SIIG_1S_10x_550, | 735 | .device = PCI_ANY_ID, |
765 | .subvendor = PCI_ANY_ID, | ||
766 | .subdevice = PCI_ANY_ID, | ||
767 | .init = pci_siig10x_init, | ||
768 | .setup = pci_default_setup, | ||
769 | }, | ||
770 | { | ||
771 | .vendor = PCI_VENDOR_ID_SIIG, | ||
772 | .device = PCI_DEVICE_ID_SIIG_1S_10x_650, | ||
773 | .subvendor = PCI_ANY_ID, | ||
774 | .subdevice = PCI_ANY_ID, | ||
775 | .init = pci_siig10x_init, | ||
776 | .setup = pci_default_setup, | ||
777 | }, | ||
778 | { | ||
779 | .vendor = PCI_VENDOR_ID_SIIG, | ||
780 | .device = PCI_DEVICE_ID_SIIG_1S_10x_850, | ||
781 | .subvendor = PCI_ANY_ID, | ||
782 | .subdevice = PCI_ANY_ID, | ||
783 | .init = pci_siig10x_init, | ||
784 | .setup = pci_default_setup, | ||
785 | }, | ||
786 | { | ||
787 | .vendor = PCI_VENDOR_ID_SIIG, | ||
788 | .device = PCI_DEVICE_ID_SIIG_2S_10x_550, | ||
789 | .subvendor = PCI_ANY_ID, | ||
790 | .subdevice = PCI_ANY_ID, | ||
791 | .init = pci_siig10x_init, | ||
792 | .setup = pci_default_setup, | ||
793 | }, | ||
794 | { | ||
795 | .vendor = PCI_VENDOR_ID_SIIG, | ||
796 | .device = PCI_DEVICE_ID_SIIG_2S_10x_650, | ||
797 | .subvendor = PCI_ANY_ID, | ||
798 | .subdevice = PCI_ANY_ID, | ||
799 | .init = pci_siig10x_init, | ||
800 | .setup = pci_default_setup, | ||
801 | }, | ||
802 | { | ||
803 | .vendor = PCI_VENDOR_ID_SIIG, | ||
804 | .device = PCI_DEVICE_ID_SIIG_2S_10x_850, | ||
805 | .subvendor = PCI_ANY_ID, | ||
806 | .subdevice = PCI_ANY_ID, | ||
807 | .init = pci_siig10x_init, | ||
808 | .setup = pci_default_setup, | ||
809 | }, | ||
810 | { | ||
811 | .vendor = PCI_VENDOR_ID_SIIG, | ||
812 | .device = PCI_DEVICE_ID_SIIG_4S_10x_550, | ||
813 | .subvendor = PCI_ANY_ID, | ||
814 | .subdevice = PCI_ANY_ID, | ||
815 | .init = pci_siig10x_init, | ||
816 | .setup = pci_default_setup, | ||
817 | }, | ||
818 | { | ||
819 | .vendor = PCI_VENDOR_ID_SIIG, | ||
820 | .device = PCI_DEVICE_ID_SIIG_4S_10x_650, | ||
821 | .subvendor = PCI_ANY_ID, | ||
822 | .subdevice = PCI_ANY_ID, | ||
823 | .init = pci_siig10x_init, | ||
824 | .setup = pci_default_setup, | ||
825 | }, | ||
826 | { | ||
827 | .vendor = PCI_VENDOR_ID_SIIG, | ||
828 | .device = PCI_DEVICE_ID_SIIG_4S_10x_850, | ||
829 | .subvendor = PCI_ANY_ID, | ||
830 | .subdevice = PCI_ANY_ID, | ||
831 | .init = pci_siig10x_init, | ||
832 | .setup = pci_default_setup, | ||
833 | }, | ||
834 | { | ||
835 | .vendor = PCI_VENDOR_ID_SIIG, | ||
836 | .device = PCI_DEVICE_ID_SIIG_1S_20x_550, | ||
837 | .subvendor = PCI_ANY_ID, | ||
838 | .subdevice = PCI_ANY_ID, | ||
839 | .init = pci_siig20x_init, | ||
840 | .setup = pci_default_setup, | ||
841 | }, | ||
842 | { | ||
843 | .vendor = PCI_VENDOR_ID_SIIG, | ||
844 | .device = PCI_DEVICE_ID_SIIG_1S_20x_650, | ||
845 | .subvendor = PCI_ANY_ID, | ||
846 | .subdevice = PCI_ANY_ID, | ||
847 | .init = pci_siig20x_init, | ||
848 | .setup = pci_default_setup, | ||
849 | }, | ||
850 | { | ||
851 | .vendor = PCI_VENDOR_ID_SIIG, | ||
852 | .device = PCI_DEVICE_ID_SIIG_1S_20x_850, | ||
853 | .subvendor = PCI_ANY_ID, | ||
854 | .subdevice = PCI_ANY_ID, | ||
855 | .init = pci_siig20x_init, | ||
856 | .setup = pci_default_setup, | ||
857 | }, | ||
858 | { | ||
859 | .vendor = PCI_VENDOR_ID_SIIG, | ||
860 | .device = PCI_DEVICE_ID_SIIG_2S_20x_550, | ||
861 | .subvendor = PCI_ANY_ID, | ||
862 | .subdevice = PCI_ANY_ID, | ||
863 | .init = pci_siig20x_init, | ||
864 | .setup = pci_default_setup, | ||
865 | }, | ||
866 | { .vendor = PCI_VENDOR_ID_SIIG, | ||
867 | .device = PCI_DEVICE_ID_SIIG_2S_20x_650, | ||
868 | .subvendor = PCI_ANY_ID, | ||
869 | .subdevice = PCI_ANY_ID, | ||
870 | .init = pci_siig20x_init, | ||
871 | .setup = pci_default_setup, | ||
872 | }, | ||
873 | { | ||
874 | .vendor = PCI_VENDOR_ID_SIIG, | ||
875 | .device = PCI_DEVICE_ID_SIIG_2S_20x_850, | ||
876 | .subvendor = PCI_ANY_ID, | ||
877 | .subdevice = PCI_ANY_ID, | ||
878 | .init = pci_siig20x_init, | ||
879 | .setup = pci_default_setup, | ||
880 | }, | ||
881 | { | ||
882 | .vendor = PCI_VENDOR_ID_SIIG, | ||
883 | .device = PCI_DEVICE_ID_SIIG_4S_20x_550, | ||
884 | .subvendor = PCI_ANY_ID, | ||
885 | .subdevice = PCI_ANY_ID, | ||
886 | .init = pci_siig20x_init, | ||
887 | .setup = pci_default_setup, | ||
888 | }, | ||
889 | { | ||
890 | .vendor = PCI_VENDOR_ID_SIIG, | ||
891 | .device = PCI_DEVICE_ID_SIIG_4S_20x_650, | ||
892 | .subvendor = PCI_ANY_ID, | ||
893 | .subdevice = PCI_ANY_ID, | ||
894 | .init = pci_siig20x_init, | ||
895 | .setup = pci_default_setup, | ||
896 | }, | ||
897 | { | ||
898 | .vendor = PCI_VENDOR_ID_SIIG, | ||
899 | .device = PCI_DEVICE_ID_SIIG_4S_20x_850, | ||
900 | .subvendor = PCI_ANY_ID, | 736 | .subvendor = PCI_ANY_ID, |
901 | .subdevice = PCI_ANY_ID, | 737 | .subdevice = PCI_ANY_ID, |
902 | .init = pci_siig20x_init, | 738 | .init = pci_siig_init, |
903 | .setup = pci_default_setup, | 739 | .setup = pci_default_setup, |
904 | }, | 740 | }, |
905 | /* | 741 | /* |
@@ -990,7 +826,7 @@ static struct pci_serial_quirk *find_quirk(struct pci_dev *dev) | |||
990 | } | 826 | } |
991 | 827 | ||
992 | static _INLINE_ int | 828 | static _INLINE_ int |
993 | get_pci_irq(struct pci_dev *dev, struct pci_board *board, int idx) | 829 | get_pci_irq(struct pci_dev *dev, struct pciserial_board *board) |
994 | { | 830 | { |
995 | if (board->flags & FL_NOIRQ) | 831 | if (board->flags & FL_NOIRQ) |
996 | return 0; | 832 | return 0; |
@@ -1115,7 +951,7 @@ enum pci_board_num_t { | |||
1115 | * see first lines of serial_in() and serial_out() in 8250.c | 951 | * see first lines of serial_in() and serial_out() in 8250.c |
1116 | */ | 952 | */ |
1117 | 953 | ||
1118 | static struct pci_board pci_boards[] __devinitdata = { | 954 | static struct pciserial_board pci_boards[] __devinitdata = { |
1119 | [pbn_default] = { | 955 | [pbn_default] = { |
1120 | .flags = FL_BASE0, | 956 | .flags = FL_BASE0, |
1121 | .num_ports = 1, | 957 | .num_ports = 1, |
@@ -1575,7 +1411,7 @@ static struct pci_board pci_boards[] __devinitdata = { | |||
1575 | * serial specs. Returns 0 on success, 1 on failure. | 1411 | * serial specs. Returns 0 on success, 1 on failure. |
1576 | */ | 1412 | */ |
1577 | static int __devinit | 1413 | static int __devinit |
1578 | serial_pci_guess_board(struct pci_dev *dev, struct pci_board *board) | 1414 | serial_pci_guess_board(struct pci_dev *dev, struct pciserial_board *board) |
1579 | { | 1415 | { |
1580 | int num_iomem, num_port, first_port = -1, i; | 1416 | int num_iomem, num_port, first_port = -1, i; |
1581 | 1417 | ||
@@ -1640,7 +1476,8 @@ serial_pci_guess_board(struct pci_dev *dev, struct pci_board *board) | |||
1640 | } | 1476 | } |
1641 | 1477 | ||
1642 | static inline int | 1478 | static inline int |
1643 | serial_pci_matches(struct pci_board *board, struct pci_board *guessed) | 1479 | serial_pci_matches(struct pciserial_board *board, |
1480 | struct pciserial_board *guessed) | ||
1644 | { | 1481 | { |
1645 | return | 1482 | return |
1646 | board->num_ports == guessed->num_ports && | 1483 | board->num_ports == guessed->num_ports && |
@@ -1650,58 +1487,14 @@ serial_pci_matches(struct pci_board *board, struct pci_board *guessed) | |||
1650 | board->first_offset == guessed->first_offset; | 1487 | board->first_offset == guessed->first_offset; |
1651 | } | 1488 | } |
1652 | 1489 | ||
1653 | /* | 1490 | struct serial_private * |
1654 | * Probe one serial board. Unfortunately, there is no rhyme nor reason | 1491 | pciserial_init_ports(struct pci_dev *dev, struct pciserial_board *board) |
1655 | * to the arrangement of serial ports on a PCI card. | ||
1656 | */ | ||
1657 | static int __devinit | ||
1658 | pciserial_init_one(struct pci_dev *dev, const struct pci_device_id *ent) | ||
1659 | { | 1492 | { |
1493 | struct uart_port serial_port; | ||
1660 | struct serial_private *priv; | 1494 | struct serial_private *priv; |
1661 | struct pci_board *board, tmp; | ||
1662 | struct pci_serial_quirk *quirk; | 1495 | struct pci_serial_quirk *quirk; |
1663 | int rc, nr_ports, i; | 1496 | int rc, nr_ports, i; |
1664 | 1497 | ||
1665 | if (ent->driver_data >= ARRAY_SIZE(pci_boards)) { | ||
1666 | printk(KERN_ERR "pci_init_one: invalid driver_data: %ld\n", | ||
1667 | ent->driver_data); | ||
1668 | return -EINVAL; | ||
1669 | } | ||
1670 | |||
1671 | board = &pci_boards[ent->driver_data]; | ||
1672 | |||
1673 | rc = pci_enable_device(dev); | ||
1674 | if (rc) | ||
1675 | return rc; | ||
1676 | |||
1677 | if (ent->driver_data == pbn_default) { | ||
1678 | /* | ||
1679 | * Use a copy of the pci_board entry for this; | ||
1680 | * avoid changing entries in the table. | ||
1681 | */ | ||
1682 | memcpy(&tmp, board, sizeof(struct pci_board)); | ||
1683 | board = &tmp; | ||
1684 | |||
1685 | /* | ||
1686 | * We matched one of our class entries. Try to | ||
1687 | * determine the parameters of this board. | ||
1688 | */ | ||
1689 | rc = serial_pci_guess_board(dev, board); | ||
1690 | if (rc) | ||
1691 | goto disable; | ||
1692 | } else { | ||
1693 | /* | ||
1694 | * We matched an explicit entry. If we are able to | ||
1695 | * detect this boards settings with our heuristic, | ||
1696 | * then we no longer need this entry. | ||
1697 | */ | ||
1698 | memcpy(&tmp, &pci_boards[pbn_default], sizeof(struct pci_board)); | ||
1699 | rc = serial_pci_guess_board(dev, &tmp); | ||
1700 | if (rc == 0 && serial_pci_matches(board, &tmp)) | ||
1701 | moan_device("Redundant entry in serial pci_table.", | ||
1702 | dev); | ||
1703 | } | ||
1704 | |||
1705 | nr_ports = board->num_ports; | 1498 | nr_ports = board->num_ports; |
1706 | 1499 | ||
1707 | /* | 1500 | /* |
@@ -1718,8 +1511,10 @@ pciserial_init_one(struct pci_dev *dev, const struct pci_device_id *ent) | |||
1718 | */ | 1511 | */ |
1719 | if (quirk->init) { | 1512 | if (quirk->init) { |
1720 | rc = quirk->init(dev); | 1513 | rc = quirk->init(dev); |
1721 | if (rc < 0) | 1514 | if (rc < 0) { |
1722 | goto disable; | 1515 | priv = ERR_PTR(rc); |
1516 | goto err_out; | ||
1517 | } | ||
1723 | if (rc) | 1518 | if (rc) |
1724 | nr_ports = rc; | 1519 | nr_ports = rc; |
1725 | } | 1520 | } |
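Because the function now hands back a struct serial_private pointer rather than an int, failures are reported with the ERR_PTR() convention: the errno is encoded into the returned pointer and recovered by the caller with IS_ERR()/PTR_ERR(), as pciserial_init_one() does further down. In miniature:

    /* Caller-side pattern implied by the ERR_PTR() returns above: */
    priv = pciserial_init_ports(dev, board);
    if (IS_ERR(priv)) {
    	rc = PTR_ERR(priv);	/* e.g. -ENOMEM or a quirk init error */
    	/* disable the device and propagate rc */
    }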
@@ -1728,27 +1523,26 @@ pciserial_init_one(struct pci_dev *dev, const struct pci_device_id *ent) | |||
1728 | sizeof(unsigned int) * nr_ports, | 1523 | sizeof(unsigned int) * nr_ports, |
1729 | GFP_KERNEL); | 1524 | GFP_KERNEL); |
1730 | if (!priv) { | 1525 | if (!priv) { |
1731 | rc = -ENOMEM; | 1526 | priv = ERR_PTR(-ENOMEM); |
1732 | goto deinit; | 1527 | goto err_deinit; |
1733 | } | 1528 | } |
1734 | 1529 | ||
1735 | memset(priv, 0, sizeof(struct serial_private) + | 1530 | memset(priv, 0, sizeof(struct serial_private) + |
1736 | sizeof(unsigned int) * nr_ports); | 1531 | sizeof(unsigned int) * nr_ports); |
1737 | 1532 | ||
1533 | priv->dev = dev; | ||
1738 | priv->quirk = quirk; | 1534 | priv->quirk = quirk; |
1739 | pci_set_drvdata(dev, priv); | 1535 | |
1536 | memset(&serial_port, 0, sizeof(struct uart_port)); | ||
1537 | serial_port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_SHARE_IRQ; | ||
1538 | serial_port.uartclk = board->base_baud * 16; | ||
1539 | serial_port.irq = get_pci_irq(dev, board); | ||
1540 | serial_port.dev = &dev->dev; | ||
1740 | 1541 | ||
1741 | for (i = 0; i < nr_ports; i++) { | 1542 | for (i = 0; i < nr_ports; i++) { |
1742 | struct uart_port serial_port; | 1543 | if (quirk->setup(priv, board, &serial_port, i)) |
1743 | memset(&serial_port, 0, sizeof(struct uart_port)); | ||
1744 | |||
1745 | serial_port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | | ||
1746 | UPF_SHARE_IRQ; | ||
1747 | serial_port.uartclk = board->base_baud * 16; | ||
1748 | serial_port.irq = get_pci_irq(dev, board, i); | ||
1749 | serial_port.dev = &dev->dev; | ||
1750 | if (quirk->setup(dev, board, &serial_port, i)) | ||
1751 | break; | 1544 | break; |
1545 | |||
1752 | #ifdef SERIAL_DEBUG_PCI | 1546 | #ifdef SERIAL_DEBUG_PCI |
1753 | printk("Setup PCI port: port %x, irq %d, type %d\n", | 1547 | printk("Setup PCI port: port %x, irq %d, type %d\n", |
1754 | serial_port.iobase, serial_port.irq, serial_port.iotype); | 1548 | serial_port.iobase, serial_port.irq, serial_port.iotype); |
@@ -1763,24 +1557,21 @@ pciserial_init_one(struct pci_dev *dev, const struct pci_device_id *ent) | |||
1763 | 1557 | ||
1764 | priv->nr = i; | 1558 | priv->nr = i; |
1765 | 1559 | ||
1766 | return 0; | 1560 | return priv; |
1767 | 1561 | ||
1768 | deinit: | 1562 | err_deinit: |
1769 | if (quirk->exit) | 1563 | if (quirk->exit) |
1770 | quirk->exit(dev); | 1564 | quirk->exit(dev); |
1771 | disable: | 1565 | err_out: |
1772 | pci_disable_device(dev); | 1566 | return priv; |
1773 | return rc; | ||
1774 | } | 1567 | } |
1568 | EXPORT_SYMBOL_GPL(pciserial_init_ports); | ||
1775 | 1569 | ||
1776 | static void __devexit pciserial_remove_one(struct pci_dev *dev) | 1570 | void pciserial_remove_ports(struct serial_private *priv) |
1777 | { | 1571 | { |
1778 | struct serial_private *priv = pci_get_drvdata(dev); | ||
1779 | struct pci_serial_quirk *quirk; | 1572 | struct pci_serial_quirk *quirk; |
1780 | int i; | 1573 | int i; |
1781 | 1574 | ||
1782 | pci_set_drvdata(dev, NULL); | ||
1783 | |||
1784 | for (i = 0; i < priv->nr; i++) | 1575 | for (i = 0; i < priv->nr; i++) |
1785 | serial8250_unregister_port(priv->line[i]); | 1576 | serial8250_unregister_port(priv->line[i]); |
1786 | 1577 | ||
@@ -1793,25 +1584,123 @@ static void __devexit pciserial_remove_one(struct pci_dev *dev) | |||
1793 | /* | 1584 | /* |
1794 | * Find the exit quirks. | 1585 | * Find the exit quirks. |
1795 | */ | 1586 | */ |
1796 | quirk = find_quirk(dev); | 1587 | quirk = find_quirk(priv->dev); |
1797 | if (quirk->exit) | 1588 | if (quirk->exit) |
1798 | quirk->exit(dev); | 1589 | quirk->exit(priv->dev); |
1590 | |||
1591 | kfree(priv); | ||
1592 | } | ||
1593 | EXPORT_SYMBOL_GPL(pciserial_remove_ports); | ||
1594 | |||
1595 | void pciserial_suspend_ports(struct serial_private *priv) | ||
1596 | { | ||
1597 | int i; | ||
1598 | |||
1599 | for (i = 0; i < priv->nr; i++) | ||
1600 | if (priv->line[i] >= 0) | ||
1601 | serial8250_suspend_port(priv->line[i]); | ||
1602 | } | ||
1603 | EXPORT_SYMBOL_GPL(pciserial_suspend_ports); | ||
1604 | |||
1605 | void pciserial_resume_ports(struct serial_private *priv) | ||
1606 | { | ||
1607 | int i; | ||
1608 | |||
1609 | /* | ||
1610 | * Ensure that the board is correctly configured. | ||
1611 | */ | ||
1612 | if (priv->quirk->init) | ||
1613 | priv->quirk->init(priv->dev); | ||
1614 | |||
1615 | for (i = 0; i < priv->nr; i++) | ||
1616 | if (priv->line[i] >= 0) | ||
1617 | serial8250_resume_port(priv->line[i]); | ||
1618 | } | ||
1619 | EXPORT_SYMBOL_GPL(pciserial_resume_ports); | ||
1620 | |||
1621 | /* | ||
1622 | * Probe one serial board. Unfortunately, there is no rhyme nor reason | ||
1623 | * to the arrangement of serial ports on a PCI card. | ||
1624 | */ | ||
1625 | static int __devinit | ||
1626 | pciserial_init_one(struct pci_dev *dev, const struct pci_device_id *ent) | ||
1627 | { | ||
1628 | struct serial_private *priv; | ||
1629 | struct pciserial_board *board, tmp; | ||
1630 | int rc; | ||
1631 | |||
1632 | if (ent->driver_data >= ARRAY_SIZE(pci_boards)) { | ||
1633 | printk(KERN_ERR "pci_init_one: invalid driver_data: %ld\n", | ||
1634 | ent->driver_data); | ||
1635 | return -EINVAL; | ||
1636 | } | ||
1637 | |||
1638 | board = &pci_boards[ent->driver_data]; | ||
1639 | |||
1640 | rc = pci_enable_device(dev); | ||
1641 | if (rc) | ||
1642 | return rc; | ||
1643 | |||
1644 | if (ent->driver_data == pbn_default) { | ||
1645 | /* | ||
1646 | * Use a copy of the pci_board entry for this; | ||
1647 | * avoid changing entries in the table. | ||
1648 | */ | ||
1649 | memcpy(&tmp, board, sizeof(struct pciserial_board)); | ||
1650 | board = &tmp; | ||
1651 | |||
1652 | /* | ||
1653 | * We matched one of our class entries. Try to | ||
1654 | * determine the parameters of this board. | ||
1655 | */ | ||
1656 | rc = serial_pci_guess_board(dev, board); | ||
1657 | if (rc) | ||
1658 | goto disable; | ||
1659 | } else { | ||
1660 | /* | ||
1661 | * We matched an explicit entry. If we are able to | ||
1662 | * detect this board's settings with our heuristic, | ||
1663 | * then we no longer need this entry. | ||
1664 | */ | ||
1665 | memcpy(&tmp, &pci_boards[pbn_default], | ||
1666 | sizeof(struct pciserial_board)); | ||
1667 | rc = serial_pci_guess_board(dev, &tmp); | ||
1668 | if (rc == 0 && serial_pci_matches(board, &tmp)) | ||
1669 | moan_device("Redundant entry in serial pci_table.", | ||
1670 | dev); | ||
1671 | } | ||
1799 | 1672 | ||
1673 | priv = pciserial_init_ports(dev, board); | ||
1674 | if (!IS_ERR(priv)) { | ||
1675 | pci_set_drvdata(dev, priv); | ||
1676 | return 0; | ||
1677 | } | ||
1678 | |||
1679 | rc = PTR_ERR(priv); | ||
1680 | |||
1681 | disable: | ||
1800 | pci_disable_device(dev); | 1682 | pci_disable_device(dev); |
1683 | return rc; | ||
1684 | } | ||
1801 | 1685 | ||
1802 | kfree(priv); | 1686 | static void __devexit pciserial_remove_one(struct pci_dev *dev) |
1687 | { | ||
1688 | struct serial_private *priv = pci_get_drvdata(dev); | ||
1689 | |||
1690 | pci_set_drvdata(dev, NULL); | ||
1691 | |||
1692 | pciserial_remove_ports(priv); | ||
1693 | |||
1694 | pci_disable_device(dev); | ||
1803 | } | 1695 | } |
1804 | 1696 | ||
1805 | static int pciserial_suspend_one(struct pci_dev *dev, pm_message_t state) | 1697 | static int pciserial_suspend_one(struct pci_dev *dev, pm_message_t state) |
1806 | { | 1698 | { |
1807 | struct serial_private *priv = pci_get_drvdata(dev); | 1699 | struct serial_private *priv = pci_get_drvdata(dev); |
1808 | 1700 | ||
1809 | if (priv) { | 1701 | if (priv) |
1810 | int i; | 1702 | pciserial_suspend_ports(priv); |
1811 | 1703 | ||
1812 | for (i = 0; i < priv->nr; i++) | ||
1813 | serial8250_suspend_port(priv->line[i]); | ||
1814 | } | ||
1815 | pci_save_state(dev); | 1704 | pci_save_state(dev); |
1816 | pci_set_power_state(dev, pci_choose_state(dev, state)); | 1705 | pci_set_power_state(dev, pci_choose_state(dev, state)); |
1817 | return 0; | 1706 | return 0; |
@@ -1825,21 +1714,12 @@ static int pciserial_resume_one(struct pci_dev *dev) | |||
1825 | pci_restore_state(dev); | 1714 | pci_restore_state(dev); |
1826 | 1715 | ||
1827 | if (priv) { | 1716 | if (priv) { |
1828 | int i; | ||
1829 | |||
1830 | /* | 1717 | /* |
1831 | * The device may have been disabled. Re-enable it. | 1718 | * The device may have been disabled. Re-enable it. |
1832 | */ | 1719 | */ |
1833 | pci_enable_device(dev); | 1720 | pci_enable_device(dev); |
1834 | 1721 | ||
1835 | /* | 1722 | pciserial_resume_ports(priv); |
1836 | * Ensure that the board is correctly configured. | ||
1837 | */ | ||
1838 | if (priv->quirk->init) | ||
1839 | priv->quirk->init(dev); | ||
1840 | |||
1841 | for (i = 0; i < priv->nr; i++) | ||
1842 | serial8250_resume_port(priv->line[i]); | ||
1843 | } | 1723 | } |
1844 | return 0; | 1724 | return 0; |
1845 | } | 1725 | } |
diff --git a/fs/adfs/adfs.h b/fs/adfs/adfs.h index 63f5df9afb71..fd528433de43 100644 --- a/fs/adfs/adfs.h +++ b/fs/adfs/adfs.h | |||
@@ -97,7 +97,7 @@ extern int adfs_dir_update(struct super_block *sb, struct object_info *obj); | |||
97 | extern struct inode_operations adfs_file_inode_operations; | 97 | extern struct inode_operations adfs_file_inode_operations; |
98 | extern struct file_operations adfs_file_operations; | 98 | extern struct file_operations adfs_file_operations; |
99 | 99 | ||
100 | extern inline __u32 signed_asl(__u32 val, signed int shift) | 100 | static inline __u32 signed_asl(__u32 val, signed int shift) |
101 | { | 101 | { |
102 | if (shift >= 0) | 102 | if (shift >= 0) |
103 | val <<= shift; | 103 | val <<= shift; |
@@ -112,7 +112,7 @@ extern inline __u32 signed_asl(__u32 val, signed int shift) | |||
112 | * | 112 | * |
113 | * The root directory ID should always be looked up in the map [3.4] | 113 | * The root directory ID should always be looked up in the map [3.4] |
114 | */ | 114 | */ |
115 | extern inline int | 115 | static inline int |
116 | __adfs_block_map(struct super_block *sb, unsigned int object_id, | 116 | __adfs_block_map(struct super_block *sb, unsigned int object_id, |
117 | unsigned int block) | 117 | unsigned int block) |
118 | { | 118 | { |
diff --git a/include/asm-arm/arch-sa1100/mcp.h b/include/asm-arm/arch-sa1100/mcp.h new file mode 100644 index 000000000000..f58a22755c61 --- /dev/null +++ b/include/asm-arm/arch-sa1100/mcp.h | |||
@@ -0,0 +1,21 @@ | |||
1 | /* | ||
2 | * linux/include/asm-arm/arch-sa1100/mcp.h | ||
3 | * | ||
4 | * Copyright (C) 2005 Russell King. | ||
5 | * | ||
6 | * This program is free software; you can redistribute it and/or modify | ||
7 | * it under the terms of the GNU General Public License version 2 as | ||
8 | * published by the Free Software Foundation. | ||
9 | */ | ||
10 | #ifndef __ASM_ARM_ARCH_MCP_H | ||
11 | #define __ASM_ARM_ARCH_MCP_H | ||
12 | |||
13 | #include <linux/types.h> | ||
14 | |||
15 | struct mcp_plat_data { | ||
16 | u32 mccr0; | ||
17 | u32 mccr1; | ||
18 | unsigned int sclk_rate; | ||
19 | }; | ||
20 | |||
21 | #endif | ||
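
The new mcp.h simply carries MCP configuration from a board file to the SA-1100 MCP driver as platform data. A rough sketch of the intended use follows; the sa11x0_register_mcp() helper and the 11.981 MHz codec clock are assumptions for illustration, not part of this header.

#include <linux/init.h>
#include <asm/arch/mcp.h>

static struct mcp_plat_data example_mcp_data = {
	.mccr0     = 0,			/* board-specific MCCR0 bits go here */
	.sclk_rate = 11981000,		/* codec serial clock, in Hz (assumed) */
};

static void __init example_board_init(void)
{
	/* hand the MCP parameters to the platform device (assumed helper) */
	sa11x0_register_mcp(&example_mcp_data);
}
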
diff --git a/include/asm-arm/hardware/gic.h b/include/asm-arm/hardware/gic.h new file mode 100644 index 000000000000..3fa5eb70f64e --- /dev/null +++ b/include/asm-arm/hardware/gic.h | |||
@@ -0,0 +1,41 @@ | |||
1 | /* | ||
2 | * linux/include/asm-arm/hardware/gic.h | ||
3 | * | ||
4 | * Copyright (C) 2002 ARM Limited, All Rights Reserved. | ||
5 | * | ||
6 | * This program is free software; you can redistribute it and/or modify | ||
7 | * it under the terms of the GNU General Public License version 2 as | ||
8 | * published by the Free Software Foundation. | ||
9 | */ | ||
10 | #ifndef __ASM_ARM_HARDWARE_GIC_H | ||
11 | #define __ASM_ARM_HARDWARE_GIC_H | ||
12 | |||
13 | #include <linux/compiler.h> | ||
14 | |||
15 | #define GIC_CPU_CTRL 0x00 | ||
16 | #define GIC_CPU_PRIMASK 0x04 | ||
17 | #define GIC_CPU_BINPOINT 0x08 | ||
18 | #define GIC_CPU_INTACK 0x0c | ||
19 | #define GIC_CPU_EOI 0x10 | ||
20 | #define GIC_CPU_RUNNINGPRI 0x14 | ||
21 | #define GIC_CPU_HIGHPRI 0x18 | ||
22 | |||
23 | #define GIC_DIST_CTRL 0x000 | ||
24 | #define GIC_DIST_CTR 0x004 | ||
25 | #define GIC_DIST_ENABLE_SET 0x100 | ||
26 | #define GIC_DIST_ENABLE_CLEAR 0x180 | ||
27 | #define GIC_DIST_PENDING_SET 0x200 | ||
28 | #define GIC_DIST_PENDING_CLEAR 0x280 | ||
29 | #define GIC_DIST_ACTIVE_BIT 0x300 | ||
30 | #define GIC_DIST_PRI 0x400 | ||
31 | #define GIC_DIST_TARGET 0x800 | ||
32 | #define GIC_DIST_CONFIG 0xc00 | ||
33 | #define GIC_DIST_SOFTINT 0xf00 | ||
34 | |||
35 | #ifndef __ASSEMBLY__ | ||
36 | void gic_dist_init(void __iomem *base); | ||
37 | void gic_cpu_init(void __iomem *base); | ||
38 | void gic_raise_softirq(cpumask_t cpumask, unsigned int irq); | ||
39 | #endif | ||
40 | |||
41 | #endif | ||
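
Besides the distributor and CPU-interface register offsets, gic.h exposes three entry points that a platform's IRQ setup is expected to call once the GIC registers have been mapped. A minimal sketch, assuming made-up physical addresses for the two register banks:

#include <linux/init.h>
#include <asm/hardware/gic.h>
#include <asm/io.h>

/* EXAMPLE_GIC_DIST_PHYS / EXAMPLE_GIC_CPU_PHYS are placeholders for the
 * platform's real GIC base addresses. */
static void __init example_init_irq(void)
{
	void __iomem *dist = ioremap(EXAMPLE_GIC_DIST_PHYS, 4096);
	void __iomem *cpu  = ioremap(EXAMPLE_GIC_CPU_PHYS, 4096);

	gic_dist_init(dist);	/* configure and enable the distributor */
	gic_cpu_init(cpu);	/* enable this CPU's interface */
}
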
diff --git a/include/asm-sparc/processor.h b/include/asm-sparc/processor.h index 32c9699367cf..5a7a1a8d29ac 100644 --- a/include/asm-sparc/processor.h +++ b/include/asm-sparc/processor.h | |||
@@ -19,7 +19,6 @@ | |||
19 | #include <asm/ptrace.h> | 19 | #include <asm/ptrace.h> |
20 | #include <asm/head.h> | 20 | #include <asm/head.h> |
21 | #include <asm/signal.h> | 21 | #include <asm/signal.h> |
22 | #include <asm/segment.h> | ||
23 | #include <asm/btfixup.h> | 22 | #include <asm/btfixup.h> |
24 | #include <asm/page.h> | 23 | #include <asm/page.h> |
25 | 24 | ||
diff --git a/include/asm-sparc/segment.h b/include/asm-sparc/segment.h deleted file mode 100644 index a1b7ffc9eec9..000000000000 --- a/include/asm-sparc/segment.h +++ /dev/null | |||
@@ -1,6 +0,0 @@ | |||
1 | #ifndef __SPARC_SEGMENT_H | ||
2 | #define __SPARC_SEGMENT_H | ||
3 | |||
4 | /* Only here because we have some old header files that expect it.. */ | ||
5 | |||
6 | #endif | ||
diff --git a/include/asm-sparc/system.h b/include/asm-sparc/system.h index 898562ebe94c..3557781a4bfd 100644 --- a/include/asm-sparc/system.h +++ b/include/asm-sparc/system.h | |||
@@ -9,7 +9,6 @@ | |||
9 | #include <linux/threads.h> /* NR_CPUS */ | 9 | #include <linux/threads.h> /* NR_CPUS */ |
10 | #include <linux/thread_info.h> | 10 | #include <linux/thread_info.h> |
11 | 11 | ||
12 | #include <asm/segment.h> | ||
13 | #include <asm/page.h> | 12 | #include <asm/page.h> |
14 | #include <asm/psr.h> | 13 | #include <asm/psr.h> |
15 | #include <asm/ptrace.h> | 14 | #include <asm/ptrace.h> |
diff --git a/include/asm-sparc64/atomic.h b/include/asm-sparc64/atomic.h index d80f3379669b..e175afcf2cde 100644 --- a/include/asm-sparc64/atomic.h +++ b/include/asm-sparc64/atomic.h | |||
@@ -72,10 +72,10 @@ extern int atomic64_sub_ret(int, atomic64_t *); | |||
72 | 72 | ||
73 | /* Atomic operations are already serializing */ | 73 | /* Atomic operations are already serializing */ |
74 | #ifdef CONFIG_SMP | 74 | #ifdef CONFIG_SMP |
75 | #define smp_mb__before_atomic_dec() membar("#StoreLoad | #LoadLoad") | 75 | #define smp_mb__before_atomic_dec() membar_storeload_loadload(); |
76 | #define smp_mb__after_atomic_dec() membar("#StoreLoad | #StoreStore") | 76 | #define smp_mb__after_atomic_dec() membar_storeload_storestore(); |
77 | #define smp_mb__before_atomic_inc() membar("#StoreLoad | #LoadLoad") | 77 | #define smp_mb__before_atomic_inc() membar_storeload_loadload(); |
78 | #define smp_mb__after_atomic_inc() membar("#StoreLoad | #StoreStore") | 78 | #define smp_mb__after_atomic_inc() membar_storeload_storestore(); |
79 | #else | 79 | #else |
80 | #define smp_mb__before_atomic_dec() barrier() | 80 | #define smp_mb__before_atomic_dec() barrier() |
81 | #define smp_mb__after_atomic_dec() barrier() | 81 | #define smp_mb__after_atomic_dec() barrier() |
diff --git a/include/asm-sparc64/bitops.h b/include/asm-sparc64/bitops.h index 9c5e71970287..6388b8376c50 100644 --- a/include/asm-sparc64/bitops.h +++ b/include/asm-sparc64/bitops.h | |||
@@ -72,8 +72,8 @@ static inline int __test_and_change_bit(int nr, volatile unsigned long *addr) | |||
72 | } | 72 | } |
73 | 73 | ||
74 | #ifdef CONFIG_SMP | 74 | #ifdef CONFIG_SMP |
75 | #define smp_mb__before_clear_bit() membar("#StoreLoad | #LoadLoad") | 75 | #define smp_mb__before_clear_bit() membar_storeload_loadload() |
76 | #define smp_mb__after_clear_bit() membar("#StoreLoad | #StoreStore") | 76 | #define smp_mb__after_clear_bit() membar_storeload_storestore() |
77 | #else | 77 | #else |
78 | #define smp_mb__before_clear_bit() barrier() | 78 | #define smp_mb__before_clear_bit() barrier() |
79 | #define smp_mb__after_clear_bit() barrier() | 79 | #define smp_mb__after_clear_bit() barrier() |
diff --git a/include/asm-sparc64/processor.h b/include/asm-sparc64/processor.h index d0bee2413560..3169f3e2237e 100644 --- a/include/asm-sparc64/processor.h +++ b/include/asm-sparc64/processor.h | |||
@@ -18,7 +18,6 @@ | |||
18 | #include <asm/a.out.h> | 18 | #include <asm/a.out.h> |
19 | #include <asm/pstate.h> | 19 | #include <asm/pstate.h> |
20 | #include <asm/ptrace.h> | 20 | #include <asm/ptrace.h> |
21 | #include <asm/segment.h> | ||
22 | #include <asm/page.h> | 21 | #include <asm/page.h> |
23 | 22 | ||
24 | /* The sparc has no problems with write protection */ | 23 | /* The sparc has no problems with write protection */ |
diff --git a/include/asm-sparc64/segment.h b/include/asm-sparc64/segment.h deleted file mode 100644 index b03e709fc945..000000000000 --- a/include/asm-sparc64/segment.h +++ /dev/null | |||
@@ -1,6 +0,0 @@ | |||
1 | #ifndef __SPARC64_SEGMENT_H | ||
2 | #define __SPARC64_SEGMENT_H | ||
3 | |||
4 | /* Only here because we have some old header files that expect it.. */ | ||
5 | |||
6 | #endif | ||
diff --git a/include/asm-sparc64/sfafsr.h b/include/asm-sparc64/sfafsr.h new file mode 100644 index 000000000000..2f792c20b53c --- /dev/null +++ b/include/asm-sparc64/sfafsr.h | |||
@@ -0,0 +1,82 @@ | |||
1 | #ifndef _SPARC64_SFAFSR_H | ||
2 | #define _SPARC64_SFAFSR_H | ||
3 | |||
4 | #include <asm/const.h> | ||
5 | |||
6 | /* Spitfire Asynchronous Fault Status register, ASI=0x4C VA<63:0>=0x0 */ | ||
7 | |||
8 | #define SFAFSR_ME (_AC(1,UL) << SFAFSR_ME_SHIFT) | ||
9 | #define SFAFSR_ME_SHIFT 32 | ||
10 | #define SFAFSR_PRIV (_AC(1,UL) << SFAFSR_PRIV_SHIFT) | ||
11 | #define SFAFSR_PRIV_SHIFT 31 | ||
12 | #define SFAFSR_ISAP (_AC(1,UL) << SFAFSR_ISAP_SHIFT) | ||
13 | #define SFAFSR_ISAP_SHIFT 30 | ||
14 | #define SFAFSR_ETP (_AC(1,UL) << SFAFSR_ETP_SHIFT) | ||
15 | #define SFAFSR_ETP_SHIFT 29 | ||
16 | #define SFAFSR_IVUE (_AC(1,UL) << SFAFSR_IVUE_SHIFT) | ||
17 | #define SFAFSR_IVUE_SHIFT 28 | ||
18 | #define SFAFSR_TO (_AC(1,UL) << SFAFSR_TO_SHIFT) | ||
19 | #define SFAFSR_TO_SHIFT 27 | ||
20 | #define SFAFSR_BERR (_AC(1,UL) << SFAFSR_BERR_SHIFT) | ||
21 | #define SFAFSR_BERR_SHIFT 26 | ||
22 | #define SFAFSR_LDP (_AC(1,UL) << SFAFSR_LDP_SHIFT) | ||
23 | #define SFAFSR_LDP_SHIFT 25 | ||
24 | #define SFAFSR_CP (_AC(1,UL) << SFAFSR_CP_SHIFT) | ||
25 | #define SFAFSR_CP_SHIFT 24 | ||
26 | #define SFAFSR_WP (_AC(1,UL) << SFAFSR_WP_SHIFT) | ||
27 | #define SFAFSR_WP_SHIFT 23 | ||
28 | #define SFAFSR_EDP (_AC(1,UL) << SFAFSR_EDP_SHIFT) | ||
29 | #define SFAFSR_EDP_SHIFT 22 | ||
30 | #define SFAFSR_UE (_AC(1,UL) << SFAFSR_UE_SHIFT) | ||
31 | #define SFAFSR_UE_SHIFT 21 | ||
32 | #define SFAFSR_CE (_AC(1,UL) << SFAFSR_CE_SHIFT) | ||
33 | #define SFAFSR_CE_SHIFT 20 | ||
34 | #define SFAFSR_ETS (_AC(0xf,UL) << SFAFSR_ETS_SHIFT) | ||
35 | #define SFAFSR_ETS_SHIFT 16 | ||
36 | #define SFAFSR_PSYND (_AC(0xffff,UL) << SFAFSR_PSYND_SHIFT) | ||
37 | #define SFAFSR_PSYND_SHIFT 0 | ||
38 | |||
39 | /* UDB Error Register, ASI=0x7f VA<63:0>=0x0(High),0x18(Low) for read | ||
40 | * ASI=0x77 VA<63:0>=0x0(High),0x18(Low) for write | ||
41 | */ | ||
42 | |||
43 | #define UDBE_UE (_AC(1,UL) << 9) | ||
44 | #define UDBE_CE (_AC(1,UL) << 8) | ||
45 | #define UDBE_E_SYNDR (_AC(0xff,UL) << 0) | ||
46 | |||
47 | /* The trap handlers for asynchronous errors encode the AFSR and | ||
48 | * other pieces of information into a 64-bit argument for C code | ||
49 | * encoded as follows: | ||
50 | * | ||
51 | * ----------------------------------------------- | ||
52 | * | UDB_H | UDB_L | TL>1 | TT | AFSR | | ||
53 | * ----------------------------------------------- | ||
54 | * 63 54 53 44 42 41 33 32 0 | ||
55 | * | ||
56 | * The AFAR is passed in unchanged. | ||
57 | */ | ||
58 | #define SFSTAT_UDBH_MASK (_AC(0x3ff,UL) << SFSTAT_UDBH_SHIFT) | ||
59 | #define SFSTAT_UDBH_SHIFT 54 | ||
60 | #define SFSTAT_UDBL_MASK (_AC(0x3ff,UL) << SFSTAT_UDBL_SHIFT) | ||
61 | #define SFSTAT_UDBL_SHIFT 44 | ||
62 | #define SFSTAT_TL_GT_ONE (_AC(1,UL) << SFSTAT_TL_GT_ONE_SHIFT) | ||
63 | #define SFSTAT_TL_GT_ONE_SHIFT 42 | ||
64 | #define SFSTAT_TRAP_TYPE (_AC(0x1FF,UL) << SFSTAT_TRAP_TYPE_SHIFT) | ||
65 | #define SFSTAT_TRAP_TYPE_SHIFT 33 | ||
66 | #define SFSTAT_AFSR_MASK (_AC(0x1ffffffff,UL) << SFSTAT_AFSR_SHIFT) | ||
67 | #define SFSTAT_AFSR_SHIFT 0 | ||
68 | |||
69 | /* ESTATE Error Enable Register, ASI=0x4b VA<63:0>=0x0 */ | ||
70 | #define ESTATE_ERR_CE 0x1 /* Correctable errors */ | ||
71 | #define ESTATE_ERR_NCE 0x2 /* TO, BERR, LDP, ETP, EDP, WP, UE, IVUE */ | ||
72 | #define ESTATE_ERR_ISAP 0x4 /* System address parity error */ | ||
73 | #define ESTATE_ERR_ALL (ESTATE_ERR_CE | \ | ||
74 | ESTATE_ERR_NCE | \ | ||
75 | ESTATE_ERR_ISAP) | ||
76 | |||
77 | /* The various trap types that report using the above state. */ | ||
78 | #define TRAP_TYPE_IAE 0x09 /* Instruction Access Error */ | ||
79 | #define TRAP_TYPE_DAE 0x32 /* Data Access Error */ | ||
80 | #define TRAP_TYPE_CEE 0x63 /* Correctable ECC Error */ | ||
81 | |||
82 | #endif /* _SPARC64_SFAFSR_H */ | ||
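
The block comment above spells out how the trap handlers pack the two UDB snapshots, the TL>1 flag, the trap type and the raw AFSR into a single 64-bit word; pulling the fields back out is plain mask-and-shift. A small illustrative decoder (the function name and the printk are not from this commit):

#include <linux/kernel.h>
#include <asm/sfafsr.h>

static void example_decode_sfsr(unsigned long sfsr)
{
	unsigned long udbh = (sfsr & SFSTAT_UDBH_MASK) >> SFSTAT_UDBH_SHIFT;
	unsigned long udbl = (sfsr & SFSTAT_UDBL_MASK) >> SFSTAT_UDBL_SHIFT;
	unsigned long type = (sfsr & SFSTAT_TRAP_TYPE) >> SFSTAT_TRAP_TYPE_SHIFT;
	unsigned long afsr = (sfsr & SFSTAT_AFSR_MASK) >> SFSTAT_AFSR_SHIFT;
	int deep = (sfsr & SFSTAT_TL_GT_ONE) != 0;

	if (type == TRAP_TYPE_CEE)
		printk(KERN_INFO "CE: afsr[%lx] udbh[%lx] udbl[%lx] tl>1[%d]\n",
		       afsr, udbh, udbl, deep);
}
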
diff --git a/include/asm-sparc64/spinlock.h b/include/asm-sparc64/spinlock.h index 9cb93a5c2b4f..a02c4370eb42 100644 --- a/include/asm-sparc64/spinlock.h +++ b/include/asm-sparc64/spinlock.h | |||
@@ -43,7 +43,7 @@ typedef struct { | |||
43 | #define spin_is_locked(lp) ((lp)->lock != 0) | 43 | #define spin_is_locked(lp) ((lp)->lock != 0) |
44 | 44 | ||
45 | #define spin_unlock_wait(lp) \ | 45 | #define spin_unlock_wait(lp) \ |
46 | do { membar("#LoadLoad"); \ | 46 | do { rmb(); \ |
47 | } while((lp)->lock) | 47 | } while((lp)->lock) |
48 | 48 | ||
49 | static inline void _raw_spin_lock(spinlock_t *lock) | 49 | static inline void _raw_spin_lock(spinlock_t *lock) |
@@ -129,15 +129,18 @@ typedef struct { | |||
129 | #define spin_is_locked(__lock) ((__lock)->lock != 0) | 129 | #define spin_is_locked(__lock) ((__lock)->lock != 0) |
130 | #define spin_unlock_wait(__lock) \ | 130 | #define spin_unlock_wait(__lock) \ |
131 | do { \ | 131 | do { \ |
132 | membar("#LoadLoad"); \ | 132 | rmb(); \ |
133 | } while((__lock)->lock) | 133 | } while((__lock)->lock) |
134 | 134 | ||
135 | extern void _do_spin_lock (spinlock_t *lock, char *str); | 135 | extern void _do_spin_lock(spinlock_t *lock, char *str, unsigned long caller); |
136 | extern void _do_spin_unlock (spinlock_t *lock); | 136 | extern void _do_spin_unlock(spinlock_t *lock); |
137 | extern int _do_spin_trylock (spinlock_t *lock); | 137 | extern int _do_spin_trylock(spinlock_t *lock, unsigned long caller); |
138 | 138 | ||
139 | #define _raw_spin_trylock(lp) _do_spin_trylock(lp) | 139 | #define _raw_spin_trylock(lp) \ |
140 | #define _raw_spin_lock(lock) _do_spin_lock(lock, "spin_lock") | 140 | _do_spin_trylock(lp, (unsigned long) __builtin_return_address(0)) |
141 | #define _raw_spin_lock(lock) \ | ||
142 | _do_spin_lock(lock, "spin_lock", \ | ||
143 | (unsigned long) __builtin_return_address(0)) | ||
141 | #define _raw_spin_unlock(lock) _do_spin_unlock(lock) | 144 | #define _raw_spin_unlock(lock) _do_spin_unlock(lock) |
142 | #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) | 145 | #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) |
143 | 146 | ||
@@ -279,37 +282,41 @@ typedef struct { | |||
279 | #define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0, 0xff, { } } | 282 | #define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0, 0xff, { } } |
280 | #define rwlock_init(lp) do { *(lp) = RW_LOCK_UNLOCKED; } while(0) | 283 | #define rwlock_init(lp) do { *(lp) = RW_LOCK_UNLOCKED; } while(0) |
281 | 284 | ||
282 | extern void _do_read_lock(rwlock_t *rw, char *str); | 285 | extern void _do_read_lock(rwlock_t *rw, char *str, unsigned long caller); |
283 | extern void _do_read_unlock(rwlock_t *rw, char *str); | 286 | extern void _do_read_unlock(rwlock_t *rw, char *str, unsigned long caller); |
284 | extern void _do_write_lock(rwlock_t *rw, char *str); | 287 | extern void _do_write_lock(rwlock_t *rw, char *str, unsigned long caller); |
285 | extern void _do_write_unlock(rwlock_t *rw); | 288 | extern void _do_write_unlock(rwlock_t *rw, unsigned long caller); |
286 | extern int _do_write_trylock(rwlock_t *rw, char *str); | 289 | extern int _do_write_trylock(rwlock_t *rw, char *str, unsigned long caller); |
287 | 290 | ||
288 | #define _raw_read_lock(lock) \ | 291 | #define _raw_read_lock(lock) \ |
289 | do { unsigned long flags; \ | 292 | do { unsigned long flags; \ |
290 | local_irq_save(flags); \ | 293 | local_irq_save(flags); \ |
291 | _do_read_lock(lock, "read_lock"); \ | 294 | _do_read_lock(lock, "read_lock", \ |
295 | (unsigned long) __builtin_return_address(0)); \ | ||
292 | local_irq_restore(flags); \ | 296 | local_irq_restore(flags); \ |
293 | } while(0) | 297 | } while(0) |
294 | 298 | ||
295 | #define _raw_read_unlock(lock) \ | 299 | #define _raw_read_unlock(lock) \ |
296 | do { unsigned long flags; \ | 300 | do { unsigned long flags; \ |
297 | local_irq_save(flags); \ | 301 | local_irq_save(flags); \ |
298 | _do_read_unlock(lock, "read_unlock"); \ | 302 | _do_read_unlock(lock, "read_unlock", \ |
303 | (unsigned long) __builtin_return_address(0)); \ | ||
299 | local_irq_restore(flags); \ | 304 | local_irq_restore(flags); \ |
300 | } while(0) | 305 | } while(0) |
301 | 306 | ||
302 | #define _raw_write_lock(lock) \ | 307 | #define _raw_write_lock(lock) \ |
303 | do { unsigned long flags; \ | 308 | do { unsigned long flags; \ |
304 | local_irq_save(flags); \ | 309 | local_irq_save(flags); \ |
305 | _do_write_lock(lock, "write_lock"); \ | 310 | _do_write_lock(lock, "write_lock", \ |
311 | (unsigned long) __builtin_return_address(0)); \ | ||
306 | local_irq_restore(flags); \ | 312 | local_irq_restore(flags); \ |
307 | } while(0) | 313 | } while(0) |
308 | 314 | ||
309 | #define _raw_write_unlock(lock) \ | 315 | #define _raw_write_unlock(lock) \ |
310 | do { unsigned long flags; \ | 316 | do { unsigned long flags; \ |
311 | local_irq_save(flags); \ | 317 | local_irq_save(flags); \ |
312 | _do_write_unlock(lock); \ | 318 | _do_write_unlock(lock, \ |
319 | (unsigned long) __builtin_return_address(0)); \ | ||
313 | local_irq_restore(flags); \ | 320 | local_irq_restore(flags); \ |
314 | } while(0) | 321 | } while(0) |
315 | 322 | ||
@@ -317,7 +324,8 @@ do { unsigned long flags; \ | |||
317 | ({ unsigned long flags; \ | 324 | ({ unsigned long flags; \ |
318 | int val; \ | 325 | int val; \ |
319 | local_irq_save(flags); \ | 326 | local_irq_save(flags); \ |
320 | val = _do_write_trylock(lock, "write_trylock"); \ | 327 | val = _do_write_trylock(lock, "write_trylock", \ |
328 | (unsigned long) __builtin_return_address(0)); \ | ||
321 | local_irq_restore(flags); \ | 329 | local_irq_restore(flags); \ |
322 | val; \ | 330 | val; \ |
323 | }) | 331 | }) |
diff --git a/include/asm-sparc64/system.h b/include/asm-sparc64/system.h index ee4bdfc6b88f..5e94c05dc2fc 100644 --- a/include/asm-sparc64/system.h +++ b/include/asm-sparc64/system.h | |||
@@ -28,6 +28,14 @@ enum sparc_cpu { | |||
28 | #define ARCH_SUN4C_SUN4 0 | 28 | #define ARCH_SUN4C_SUN4 0 |
29 | #define ARCH_SUN4 0 | 29 | #define ARCH_SUN4 0 |
30 | 30 | ||
31 | extern void mb(void); | ||
32 | extern void rmb(void); | ||
33 | extern void wmb(void); | ||
34 | extern void membar_storeload(void); | ||
35 | extern void membar_storeload_storestore(void); | ||
36 | extern void membar_storeload_loadload(void); | ||
37 | extern void membar_storestore_loadstore(void); | ||
38 | |||
31 | #endif | 39 | #endif |
32 | 40 | ||
33 | #define setipl(__new_ipl) \ | 41 | #define setipl(__new_ipl) \ |
@@ -78,16 +86,11 @@ enum sparc_cpu { | |||
78 | 86 | ||
79 | #define nop() __asm__ __volatile__ ("nop") | 87 | #define nop() __asm__ __volatile__ ("nop") |
80 | 88 | ||
81 | #define membar(type) __asm__ __volatile__ ("membar " type : : : "memory") | ||
82 | #define mb() \ | ||
83 | membar("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad") | ||
84 | #define rmb() membar("#LoadLoad") | ||
85 | #define wmb() membar("#StoreStore") | ||
86 | #define read_barrier_depends() do { } while(0) | 89 | #define read_barrier_depends() do { } while(0) |
87 | #define set_mb(__var, __value) \ | 90 | #define set_mb(__var, __value) \ |
88 | do { __var = __value; membar("#StoreLoad | #StoreStore"); } while(0) | 91 | do { __var = __value; membar_storeload_storestore(); } while(0) |
89 | #define set_wmb(__var, __value) \ | 92 | #define set_wmb(__var, __value) \ |
90 | do { __var = __value; membar("#StoreStore"); } while(0) | 93 | do { __var = __value; wmb(); } while(0) |
91 | 94 | ||
92 | #ifdef CONFIG_SMP | 95 | #ifdef CONFIG_SMP |
93 | #define smp_mb() mb() | 96 | #define smp_mb() mb() |
diff --git a/include/linux/8250_pci.h b/include/linux/8250_pci.h index 5f3ab21b339b..3209dd46ea7d 100644 --- a/include/linux/8250_pci.h +++ b/include/linux/8250_pci.h | |||
@@ -1,2 +1,37 @@ | |||
1 | int pci_siig10x_fn(struct pci_dev *dev, int enable); | 1 | /* |
2 | int pci_siig20x_fn(struct pci_dev *dev, int enable); | 2 | * Definitions for PCI support. |
3 | */ | ||
4 | #define FL_BASE_MASK 0x0007 | ||
5 | #define FL_BASE0 0x0000 | ||
6 | #define FL_BASE1 0x0001 | ||
7 | #define FL_BASE2 0x0002 | ||
8 | #define FL_BASE3 0x0003 | ||
9 | #define FL_BASE4 0x0004 | ||
10 | #define FL_GET_BASE(x) (x & FL_BASE_MASK) | ||
11 | |||
12 | /* Use successive BARs (PCI base address registers), | ||
13 | else use offset into some specified BAR */ | ||
14 | #define FL_BASE_BARS 0x0008 | ||
15 | |||
16 | /* do not assign an irq */ | ||
17 | #define FL_NOIRQ 0x0080 | ||
18 | |||
19 | /* Use the Base address register size to cap number of ports */ | ||
20 | #define FL_REGION_SZ_CAP 0x0100 | ||
21 | |||
22 | struct pciserial_board { | ||
23 | unsigned int flags; | ||
24 | unsigned int num_ports; | ||
25 | unsigned int base_baud; | ||
26 | unsigned int uart_offset; | ||
27 | unsigned int reg_shift; | ||
28 | unsigned int first_offset; | ||
29 | }; | ||
30 | |||
31 | struct serial_private; | ||
32 | |||
33 | struct serial_private * | ||
34 | pciserial_init_ports(struct pci_dev *dev, struct pciserial_board *board); | ||
35 | void pciserial_remove_ports(struct serial_private *priv); | ||
36 | void pciserial_suspend_ports(struct serial_private *priv); | ||
37 | void pciserial_resume_ports(struct serial_private *priv); | ||
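
With pciserial_init_ports() and friends exported, a PCI driver for a multi-function card that happens to contain a 16x50 block can register its UARTs without duplicating the 8250_pci probe logic. A hedged sketch of such a consumer; the two-port, BAR0, 115200-baud board description is invented for the example:

#include <linux/pci.h>
#include <linux/err.h>
#include <linux/8250_pci.h>

static struct pciserial_board example_board = {
	.flags		= FL_BASE0,
	.num_ports	= 2,
	.base_baud	= 115200,
	.uart_offset	= 8,
};

static int example_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
	struct serial_private *priv;
	int rc;

	rc = pci_enable_device(dev);
	if (rc)
		return rc;

	priv = pciserial_init_ports(dev, &example_board);
	if (IS_ERR(priv)) {
		pci_disable_device(dev);
		return PTR_ERR(priv);
	}
	pci_set_drvdata(dev, priv);
	return 0;
}

static void example_remove(struct pci_dev *dev)
{
	pciserial_remove_ports(pci_get_drvdata(dev));
	pci_disable_device(dev);
}

The suspend/resume hooks of such a driver would call pciserial_suspend_ports() and pciserial_resume_ports() in the same way the 8250_pci driver itself now does.
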
diff --git a/include/linux/ata.h b/include/linux/ata.h index ca5fcadf9981..a5b74efab067 100644 --- a/include/linux/ata.h +++ b/include/linux/ata.h | |||
@@ -1,24 +1,29 @@ | |||
1 | 1 | ||
2 | /* | 2 | /* |
3 | Copyright 2003-2004 Red Hat, Inc. All rights reserved. | 3 | * Copyright 2003-2004 Red Hat, Inc. All rights reserved. |
4 | Copyright 2003-2004 Jeff Garzik | 4 | * Copyright 2003-2004 Jeff Garzik |
5 | 5 | * | |
6 | The contents of this file are subject to the Open | 6 | * |
7 | Software License version 1.1 that can be found at | 7 | * This program is free software; you can redistribute it and/or modify |
8 | http://www.opensource.org/licenses/osl-1.1.txt and is included herein | 8 | * it under the terms of the GNU General Public License as published by |
9 | by reference. | 9 | * the Free Software Foundation; either version 2, or (at your option) |
10 | 10 | * any later version. | |
11 | Alternatively, the contents of this file may be used under the terms | 11 | * |
12 | of the GNU General Public License version 2 (the "GPL") as distributed | 12 | * This program is distributed in the hope that it will be useful, |
13 | in the kernel source COPYING file, in which case the provisions of | 13 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
14 | the GPL are applicable instead of the above. If you wish to allow | 14 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
15 | the use of your version of this file only under the terms of the | 15 | * GNU General Public License for more details. |
16 | GPL and not to allow others to use your version of this file under | 16 | * |
17 | the OSL, indicate your decision by deleting the provisions above and | 17 | * You should have received a copy of the GNU General Public License |
18 | replace them with the notice and other provisions required by the GPL. | 18 | * along with this program; see the file COPYING. If not, write to |
19 | If you do not delete the provisions above, a recipient may use your | 19 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. |
20 | version of this file under either the OSL or the GPL. | 20 | * |
21 | 21 | * | |
22 | * libata documentation is available via 'make {ps|pdf}docs', | ||
23 | * as Documentation/DocBook/libata.* | ||
24 | * | ||
25 | * Hardware documentation available from http://www.t13.org/ | ||
26 | * | ||
22 | */ | 27 | */ |
23 | 28 | ||
24 | #ifndef __LINUX_ATA_H__ | 29 | #ifndef __LINUX_ATA_H__ |
@@ -108,6 +113,8 @@ enum { | |||
108 | 113 | ||
109 | /* ATA device commands */ | 114 | /* ATA device commands */ |
110 | ATA_CMD_CHK_POWER = 0xE5, /* check power mode */ | 115 | ATA_CMD_CHK_POWER = 0xE5, /* check power mode */ |
116 | ATA_CMD_STANDBY = 0xE2, /* place in standby power mode */ | ||
117 | ATA_CMD_IDLE = 0xE3, /* place in idle power mode */ | ||
111 | ATA_CMD_EDD = 0x90, /* execute device diagnostic */ | 118 | ATA_CMD_EDD = 0x90, /* execute device diagnostic */ |
112 | ATA_CMD_FLUSH = 0xE7, | 119 | ATA_CMD_FLUSH = 0xE7, |
113 | ATA_CMD_FLUSH_EXT = 0xEA, | 120 | ATA_CMD_FLUSH_EXT = 0xEA, |
diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index a0ab26aab450..d7021c391b2b 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h | |||
@@ -408,6 +408,8 @@ struct ethtool_ops { | |||
408 | #define SUPPORTED_FIBRE (1 << 10) | 408 | #define SUPPORTED_FIBRE (1 << 10) |
409 | #define SUPPORTED_BNC (1 << 11) | 409 | #define SUPPORTED_BNC (1 << 11) |
410 | #define SUPPORTED_10000baseT_Full (1 << 12) | 410 | #define SUPPORTED_10000baseT_Full (1 << 12) |
411 | #define SUPPORTED_Pause (1 << 13) | ||
412 | #define SUPPORTED_Asym_Pause (1 << 14) | ||
411 | 413 | ||
412 | /* Indicates what features are advertised by the interface. */ | 414 | /* Indicates what features are advertised by the interface. */ |
413 | #define ADVERTISED_10baseT_Half (1 << 0) | 415 | #define ADVERTISED_10baseT_Half (1 << 0) |
@@ -423,6 +425,8 @@ struct ethtool_ops { | |||
423 | #define ADVERTISED_FIBRE (1 << 10) | 425 | #define ADVERTISED_FIBRE (1 << 10) |
424 | #define ADVERTISED_BNC (1 << 11) | 426 | #define ADVERTISED_BNC (1 << 11) |
425 | #define ADVERTISED_10000baseT_Full (1 << 12) | 427 | #define ADVERTISED_10000baseT_Full (1 << 12) |
428 | #define ADVERTISED_Pause (1 << 13) | ||
429 | #define ADVERTISED_Asym_Pause (1 << 14) | ||
426 | 430 | ||
427 | /* The following are all involved in forcing a particular link | 431 | /* The following are all involved in forcing a particular link |
428 | * mode for the device for setting things. When getting the | 432 | * mode for the device for setting things. When getting the |
diff --git a/include/linux/libata.h b/include/linux/libata.h index 6cd9ba63563b..fc05a9899288 100644 --- a/include/linux/libata.h +++ b/include/linux/libata.h | |||
@@ -1,23 +1,26 @@ | |||
1 | /* | 1 | /* |
2 | Copyright 2003-2004 Red Hat, Inc. All rights reserved. | 2 | * Copyright 2003-2005 Red Hat, Inc. All rights reserved. |
3 | Copyright 2003-2004 Jeff Garzik | 3 | * Copyright 2003-2005 Jeff Garzik |
4 | 4 | * | |
5 | The contents of this file are subject to the Open | 5 | * |
6 | Software License version 1.1 that can be found at | 6 | * This program is free software; you can redistribute it and/or modify |
7 | http://www.opensource.org/licenses/osl-1.1.txt and is included herein | 7 | * it under the terms of the GNU General Public License as published by |
8 | by reference. | 8 | * the Free Software Foundation; either version 2, or (at your option) |
9 | 9 | * any later version. | |
10 | Alternatively, the contents of this file may be used under the terms | 10 | * |
11 | of the GNU General Public License version 2 (the "GPL") as distributed | 11 | * This program is distributed in the hope that it will be useful, |
12 | in the kernel source COPYING file, in which case the provisions of | 12 | * but WITHOUT ANY WARRANTY; without even the implied warranty of |
13 | the GPL are applicable instead of the above. If you wish to allow | 13 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
14 | the use of your version of this file only under the terms of the | 14 | * GNU General Public License for more details. |
15 | GPL and not to allow others to use your version of this file under | 15 | * |
16 | the OSL, indicate your decision by deleting the provisions above and | 16 | * You should have received a copy of the GNU General Public License |
17 | replace them with the notice and other provisions required by the GPL. | 17 | * along with this program; see the file COPYING. If not, write to |
18 | If you do not delete the provisions above, a recipient may use your | 18 | * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. |
19 | version of this file under either the OSL or the GPL. | 19 | * |
20 | 20 | * | |
21 | * libata documentation is available via 'make {ps|pdf}docs', | ||
22 | * as Documentation/DocBook/libata.* | ||
23 | * | ||
21 | */ | 24 | */ |
22 | 25 | ||
23 | #ifndef __LINUX_LIBATA_H__ | 26 | #ifndef __LINUX_LIBATA_H__ |
@@ -113,6 +116,8 @@ enum { | |||
113 | ATA_FLAG_MMIO = (1 << 6), /* use MMIO, not PIO */ | 116 | ATA_FLAG_MMIO = (1 << 6), /* use MMIO, not PIO */ |
114 | ATA_FLAG_SATA_RESET = (1 << 7), /* use COMRESET */ | 117 | ATA_FLAG_SATA_RESET = (1 << 7), /* use COMRESET */ |
115 | ATA_FLAG_PIO_DMA = (1 << 8), /* PIO cmds via DMA */ | 118 | ATA_FLAG_PIO_DMA = (1 << 8), /* PIO cmds via DMA */ |
119 | ATA_FLAG_NOINTR = (1 << 9), /* FIXME: Remove this once | ||
120 | * proper HSM is in place. */ | ||
116 | 121 | ||
117 | ATA_QCFLAG_ACTIVE = (1 << 1), /* cmd not yet ack'd to scsi lyer */ | 122 | ATA_QCFLAG_ACTIVE = (1 << 1), /* cmd not yet ack'd to scsi lyer */ |
118 | ATA_QCFLAG_SG = (1 << 3), /* have s/g table? */ | 123 | ATA_QCFLAG_SG = (1 << 3), /* have s/g table? */ |
@@ -363,7 +368,7 @@ struct ata_port_operations { | |||
363 | 368 | ||
364 | void (*host_stop) (struct ata_host_set *host_set); | 369 | void (*host_stop) (struct ata_host_set *host_set); |
365 | 370 | ||
366 | void (*bmdma_stop) (struct ata_port *ap); | 371 | void (*bmdma_stop) (struct ata_queued_cmd *qc); |
367 | u8 (*bmdma_status) (struct ata_port *ap); | 372 | u8 (*bmdma_status) (struct ata_port *ap); |
368 | }; | 373 | }; |
369 | 374 | ||
@@ -424,7 +429,7 @@ extern void ata_dev_id_string(u16 *id, unsigned char *s, | |||
424 | extern void ata_dev_config(struct ata_port *ap, unsigned int i); | 429 | extern void ata_dev_config(struct ata_port *ap, unsigned int i); |
425 | extern void ata_bmdma_setup (struct ata_queued_cmd *qc); | 430 | extern void ata_bmdma_setup (struct ata_queued_cmd *qc); |
426 | extern void ata_bmdma_start (struct ata_queued_cmd *qc); | 431 | extern void ata_bmdma_start (struct ata_queued_cmd *qc); |
427 | extern void ata_bmdma_stop(struct ata_port *ap); | 432 | extern void ata_bmdma_stop(struct ata_queued_cmd *qc); |
428 | extern u8 ata_bmdma_status(struct ata_port *ap); | 433 | extern u8 ata_bmdma_status(struct ata_port *ap); |
429 | extern void ata_bmdma_irq_clear(struct ata_port *ap); | 434 | extern void ata_bmdma_irq_clear(struct ata_port *ap); |
430 | extern void ata_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat); | 435 | extern void ata_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat); |
@@ -644,7 +649,7 @@ static inline void scr_write(struct ata_port *ap, unsigned int reg, u32 val) | |||
644 | ap->ops->scr_write(ap, reg, val); | 649 | ap->ops->scr_write(ap, reg, val); |
645 | } | 650 | } |
646 | 651 | ||
647 | static inline void scr_write_flush(struct ata_port *ap, unsigned int reg, | 652 | static inline void scr_write_flush(struct ata_port *ap, unsigned int reg, |
648 | u32 val) | 653 | u32 val) |
649 | { | 654 | { |
650 | ap->ops->scr_write(ap, reg, val); | 655 | ap->ops->scr_write(ap, reg, val); |
diff --git a/include/linux/mii.h b/include/linux/mii.h index 374b615ea9ea..9b8d0476988a 100644 --- a/include/linux/mii.h +++ b/include/linux/mii.h | |||
@@ -22,6 +22,7 @@ | |||
22 | #define MII_EXPANSION 0x06 /* Expansion register */ | 22 | #define MII_EXPANSION 0x06 /* Expansion register */ |
23 | #define MII_CTRL1000 0x09 /* 1000BASE-T control */ | 23 | #define MII_CTRL1000 0x09 /* 1000BASE-T control */ |
24 | #define MII_STAT1000 0x0a /* 1000BASE-T status */ | 24 | #define MII_STAT1000 0x0a /* 1000BASE-T status */ |
25 | #define MII_ESTATUS 0x0f /* Extended Status */ | ||
25 | #define MII_DCOUNTER 0x12 /* Disconnect counter */ | 26 | #define MII_DCOUNTER 0x12 /* Disconnect counter */ |
26 | #define MII_FCSCOUNTER 0x13 /* False carrier counter */ | 27 | #define MII_FCSCOUNTER 0x13 /* False carrier counter */ |
27 | #define MII_NWAYTEST 0x14 /* N-way auto-neg test reg */ | 28 | #define MII_NWAYTEST 0x14 /* N-way auto-neg test reg */ |
@@ -54,7 +55,10 @@ | |||
54 | #define BMSR_ANEGCAPABLE 0x0008 /* Able to do auto-negotiation */ | 55 | #define BMSR_ANEGCAPABLE 0x0008 /* Able to do auto-negotiation */ |
55 | #define BMSR_RFAULT 0x0010 /* Remote fault detected */ | 56 | #define BMSR_RFAULT 0x0010 /* Remote fault detected */ |
56 | #define BMSR_ANEGCOMPLETE 0x0020 /* Auto-negotiation complete */ | 57 | #define BMSR_ANEGCOMPLETE 0x0020 /* Auto-negotiation complete */ |
57 | #define BMSR_RESV 0x07c0 /* Unused... */ | 58 | #define BMSR_RESV 0x00c0 /* Unused... */ |
59 | #define BMSR_ESTATEN 0x0100 /* Extended Status in R15 */ | ||
60 | #define BMSR_100HALF2 0x0200 /* Can do 100BASE-T2 HDX */ | ||
61 | #define BMSR_100FULL2 0x0400 /* Can do 100BASE-T2 FDX */ | ||
58 | #define BMSR_10HALF 0x0800 /* Can do 10mbps, half-duplex */ | 62 | #define BMSR_10HALF 0x0800 /* Can do 10mbps, half-duplex */ |
59 | #define BMSR_10FULL 0x1000 /* Can do 10mbps, full-duplex */ | 63 | #define BMSR_10FULL 0x1000 /* Can do 10mbps, full-duplex */ |
60 | #define BMSR_100HALF 0x2000 /* Can do 100mbps, half-duplex */ | 64 | #define BMSR_100HALF 0x2000 /* Can do 100mbps, half-duplex */ |
@@ -114,6 +118,9 @@ | |||
114 | #define EXPANSION_MFAULTS 0x0010 /* Multiple faults detected */ | 118 | #define EXPANSION_MFAULTS 0x0010 /* Multiple faults detected */ |
115 | #define EXPANSION_RESV 0xffe0 /* Unused... */ | 119 | #define EXPANSION_RESV 0xffe0 /* Unused... */ |
116 | 120 | ||
121 | #define ESTATUS_1000_TFULL 0x2000 /* Can do 1000BT Full */ | ||
122 | #define ESTATUS_1000_THALF 0x1000 /* Can do 1000BT Half */ | ||
123 | |||
117 | /* N-way test register. */ | 124 | /* N-way test register. */ |
118 | #define NWAYTEST_RESV1 0x00ff /* Unused... */ | 125 | #define NWAYTEST_RESV1 0x00ff /* Unused... */ |
119 | #define NWAYTEST_LOOPBACK 0x0100 /* Enable loopback for N-way */ | 126 | #define NWAYTEST_LOOPBACK 0x0100 /* Enable loopback for N-way */ |
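
The new MII_ESTATUS register and the BMSR_ESTATEN bit are how generic code discovers gigabit ability: when BMSR bit 8 is set, register 15 advertises 1000BASE-T half/full support. A sketch of folding that into an ethtool-style feature mask; mdio_read() stands in for whatever MII accessor the driver really uses:

#include <linux/mii.h>
#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* placeholder for the driver's own MII read routine */
extern int mdio_read(struct net_device *dev, int phy_addr, int regnum);

static u32 example_gbit_features(struct net_device *dev, int phy_addr)
{
	u32 features = 0;
	int bmsr = mdio_read(dev, phy_addr, MII_BMSR);

	if (bmsr & BMSR_ESTATEN) {
		int estat = mdio_read(dev, phy_addr, MII_ESTATUS);

		if (estat & ESTATUS_1000_TFULL)
			features |= SUPPORTED_1000baseT_Full;
		if (estat & ESTATUS_1000_THALF)
			features |= SUPPORTED_1000baseT_Half;
	}
	return features;
}
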
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index f90f674eb3b0..9a0893f3249e 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h | |||
@@ -63,11 +63,12 @@ struct device; | |||
63 | 63 | ||
64 | struct mmc_host { | 64 | struct mmc_host { |
65 | struct device *dev; | 65 | struct device *dev; |
66 | struct class_device class_dev; | ||
67 | int index; | ||
66 | struct mmc_host_ops *ops; | 68 | struct mmc_host_ops *ops; |
67 | unsigned int f_min; | 69 | unsigned int f_min; |
68 | unsigned int f_max; | 70 | unsigned int f_max; |
69 | u32 ocr_avail; | 71 | u32 ocr_avail; |
70 | char host_name[8]; | ||
71 | 72 | ||
72 | /* host specific block data */ | 73 | /* host specific block data */ |
73 | unsigned int max_seg_size; /* see blk_queue_max_segment_size */ | 74 | unsigned int max_seg_size; /* see blk_queue_max_segment_size */ |
@@ -97,6 +98,7 @@ extern void mmc_free_host(struct mmc_host *); | |||
97 | 98 | ||
98 | #define mmc_priv(x) ((void *)((x) + 1)) | 99 | #define mmc_priv(x) ((void *)((x) + 1)) |
99 | #define mmc_dev(x) ((x)->dev) | 100 | #define mmc_dev(x) ((x)->dev) |
101 | #define mmc_hostname(x) ((x)->class_dev.class_id) | ||
100 | 102 | ||
101 | extern int mmc_suspend_host(struct mmc_host *, pm_message_t); | 103 | extern int mmc_suspend_host(struct mmc_host *, pm_message_t); |
102 | extern int mmc_resume_host(struct mmc_host *); | 104 | extern int mmc_resume_host(struct mmc_host *); |
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h index dce53ac1625d..97bbccdbcca3 100644 --- a/include/linux/mod_devicetable.h +++ b/include/linux/mod_devicetable.h | |||
@@ -33,7 +33,8 @@ struct ieee1394_device_id { | |||
33 | __u32 model_id; | 33 | __u32 model_id; |
34 | __u32 specifier_id; | 34 | __u32 specifier_id; |
35 | __u32 version; | 35 | __u32 version; |
36 | kernel_ulong_t driver_data; | 36 | kernel_ulong_t driver_data |
37 | __attribute__((aligned(sizeof(kernel_ulong_t)))); | ||
37 | }; | 38 | }; |
38 | 39 | ||
39 | 40 | ||
@@ -182,7 +183,11 @@ struct of_device_id | |||
182 | char name[32]; | 183 | char name[32]; |
183 | char type[32]; | 184 | char type[32]; |
184 | char compatible[128]; | 185 | char compatible[128]; |
186 | #ifdef __KERNEL__ | ||
185 | void *data; | 187 | void *data; |
188 | #else | ||
189 | kernel_ulong_t data; | ||
190 | #endif | ||
186 | }; | 191 | }; |
187 | 192 | ||
188 | 193 | ||
@@ -208,7 +213,8 @@ struct pcmcia_device_id { | |||
208 | #ifdef __KERNEL__ | 213 | #ifdef __KERNEL__ |
209 | const char * prod_id[4]; | 214 | const char * prod_id[4]; |
210 | #else | 215 | #else |
211 | kernel_ulong_t prod_id[4]; | 216 | kernel_ulong_t prod_id[4] |
217 | __attribute__((aligned(sizeof(kernel_ulong_t)))); | ||
212 | #endif | 218 | #endif |
213 | 219 | ||
214 | /* not matched against */ | 220 | /* not matched against */ |
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 927ed487630d..499a5325f67f 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h | |||
@@ -1249,6 +1249,7 @@ | |||
1249 | #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA 0x0266 | 1249 | #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA 0x0266 |
1250 | #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA2 0x0267 | 1250 | #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA2 0x0267 |
1251 | #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE 0x036E | 1251 | #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE 0x036E |
1252 | #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA 0x036F | ||
1252 | #define PCI_DEVICE_ID_NVIDIA_NVENET_12 0x0268 | 1253 | #define PCI_DEVICE_ID_NVIDIA_NVENET_12 0x0268 |
1253 | #define PCI_DEVICE_ID_NVIDIA_NVENET_13 0x0269 | 1254 | #define PCI_DEVICE_ID_NVIDIA_NVENET_13 0x0269 |
1254 | #define PCI_DEVICE_ID_NVIDIA_MCP51_AUDIO 0x026B | 1255 | #define PCI_DEVICE_ID_NVIDIA_MCP51_AUDIO 0x026B |
diff --git a/include/linux/phy.h b/include/linux/phy.h new file mode 100644 index 000000000000..72cb67b66e0c --- /dev/null +++ b/include/linux/phy.h | |||
@@ -0,0 +1,377 @@ | |||
1 | /* | ||
2 | * include/linux/phy.h | ||
3 | * | ||
4 | * Framework and drivers for configuring and reading different PHYs | ||
5 | * Based on code in sungem_phy.c and gianfar_phy.c | ||
6 | * | ||
7 | * Author: Andy Fleming | ||
8 | * | ||
9 | * Copyright (c) 2004 Freescale Semiconductor, Inc. | ||
10 | * | ||
11 | * This program is free software; you can redistribute it and/or modify it | ||
12 | * under the terms of the GNU General Public License as published by the | ||
13 | * Free Software Foundation; either version 2 of the License, or (at your | ||
14 | * option) any later version. | ||
15 | * | ||
16 | */ | ||
17 | |||
18 | #ifndef __PHY_H | ||
19 | #define __PHY_H | ||
20 | |||
21 | #include <linux/spinlock.h> | ||
22 | #include <linux/device.h> | ||
23 | |||
24 | #define PHY_BASIC_FEATURES (SUPPORTED_10baseT_Half | \ | ||
25 | SUPPORTED_10baseT_Full | \ | ||
26 | SUPPORTED_100baseT_Half | \ | ||
27 | SUPPORTED_100baseT_Full | \ | ||
28 | SUPPORTED_Autoneg | \ | ||
29 | SUPPORTED_TP | \ | ||
30 | SUPPORTED_MII) | ||
31 | |||
32 | #define PHY_GBIT_FEATURES (PHY_BASIC_FEATURES | \ | ||
33 | SUPPORTED_1000baseT_Half | \ | ||
34 | SUPPORTED_1000baseT_Full) | ||
35 | |||
36 | /* Set phydev->irq to PHY_POLL if interrupts are not supported, | ||
37 | * or not desired for this PHY. Set to PHY_IGNORE_INTERRUPT if | ||
38 | * the attached driver handles the interrupt | ||
39 | */ | ||
40 | #define PHY_POLL -1 | ||
41 | #define PHY_IGNORE_INTERRUPT -2 | ||
42 | |||
43 | #define PHY_HAS_INTERRUPT 0x00000001 | ||
44 | #define PHY_HAS_MAGICANEG 0x00000002 | ||
45 | |||
46 | #define MII_BUS_MAX 4 | ||
47 | |||
48 | |||
49 | #define PHY_INIT_TIMEOUT 100000 | ||
50 | #define PHY_STATE_TIME 1 | ||
51 | #define PHY_FORCE_TIMEOUT 10 | ||
52 | #define PHY_AN_TIMEOUT 10 | ||
53 | |||
54 | #define PHY_MAX_ADDR 32 | ||
55 | |||
56 | /* The Bus class for PHYs. Devices which provide access to | ||
57 | * PHYs should register using this structure */ | ||
58 | struct mii_bus { | ||
59 | const char *name; | ||
60 | int id; | ||
61 | void *priv; | ||
62 | int (*read)(struct mii_bus *bus, int phy_id, int regnum); | ||
63 | int (*write)(struct mii_bus *bus, int phy_id, int regnum, u16 val); | ||
64 | int (*reset)(struct mii_bus *bus); | ||
65 | |||
66 | /* A lock to ensure that only one thing can read/write | ||
67 | * the MDIO bus at a time */ | ||
68 | spinlock_t mdio_lock; | ||
69 | |||
70 | struct device *dev; | ||
71 | |||
72 | /* list of all PHYs on bus */ | ||
73 | struct phy_device *phy_map[PHY_MAX_ADDR]; | ||
74 | |||
75 | /* Pointer to an array of interrupts, each PHY's | ||
76 | * interrupt at the index matching its address */ | ||
77 | int *irq; | ||
78 | }; | ||
79 | |||
80 | #define PHY_INTERRUPT_DISABLED 0x0 | ||
81 | #define PHY_INTERRUPT_ENABLED 0x80000000 | ||
82 | |||
83 | /* PHY state machine states: | ||
84 | * | ||
85 | * DOWN: PHY device and driver are not ready for anything. probe | ||
86 | * should be called if and only if the PHY is in this state, | ||
87 | * given that the PHY device exists. | ||
88 | * - PHY driver probe function will, depending on the PHY, set | ||
89 | * the state to STARTING or READY | ||
90 | * | ||
91 | * STARTING: PHY device is coming up, and the ethernet driver is | ||
92 | * not ready. PHY drivers may set this in the probe function. | ||
93 | * If they do, they are responsible for making sure the state is | ||
94 | * eventually set to indicate whether the PHY is UP or READY, | ||
95 | * depending on the state when the PHY is done starting up. | ||
96 | * - PHY driver will set the state to READY | ||
97 | * - start will set the state to PENDING | ||
98 | * | ||
99 | * READY: PHY is ready to send and receive packets, but the | ||
100 | * controller is not. By default, PHYs which do not implement | ||
101 | * probe will be set to this state by phy_probe(). If the PHY | ||
102 | * driver knows the PHY is ready, and the PHY state is STARTING, | ||
103 | * then it sets this STATE. | ||
104 | * - start will set the state to UP | ||
105 | * | ||
106 | * PENDING: PHY device is coming up, but the ethernet driver is | ||
107 | * ready. phy_start will set this state if the PHY state is | ||
108 | * STARTING. | ||
109 | * - PHY driver will set the state to UP when the PHY is ready | ||
110 | * | ||
111 | * UP: The PHY and attached device are ready to do work. | ||
112 | * Interrupts should be started here. | ||
113 | * - timer moves to AN | ||
114 | * | ||
115 | * AN: The PHY is currently negotiating the link state. Link is | ||
116 | * therefore down for now. phy_timer will set this state when it | ||
117 | * detects the state is UP. config_aneg will set this state | ||
118 | * whenever called with phydev->autoneg set to AUTONEG_ENABLE. | ||
119 | * - If autonegotiation finishes, but there's no link, it sets | ||
120 | * the state to NOLINK. | ||
121 | * - If aneg finishes with link, it sets the state to RUNNING, | ||
122 | * and calls adjust_link | ||
123 | * - If autonegotiation did not finish after an arbitrary amount | ||
124 | * of time, autonegotiation should be tried again if the PHY | ||
125 | * supports "magic" autonegotiation (back to AN) | ||
126 | * - If it didn't finish, and no magic_aneg, move to FORCING. | ||
127 | * | ||
128 | * NOLINK: PHY is up, but not currently plugged in. | ||
129 | * - If the timer notes that the link comes back, we move to RUNNING | ||
130 | * - config_aneg moves to AN | ||
131 | * - phy_stop moves to HALTED | ||
132 | * | ||
133 | * FORCING: PHY is being configured with forced settings | ||
134 | * - if link is up, move to RUNNING | ||
135 | * - If link is down, we drop to the next highest setting, and | ||
136 | * retry (FORCING) after a timeout | ||
137 | * - phy_stop moves to HALTED | ||
138 | * | ||
139 | * RUNNING: PHY is currently up, running, and possibly sending | ||
140 | * and/or receiving packets | ||
141 | * - timer will set CHANGELINK if we're polling (this ensures the | ||
142 | * link state is polled every other cycle of this state machine, | ||
143 | * which makes it every other second) | ||
144 | * - irq will set CHANGELINK | ||
145 | * - config_aneg will set AN | ||
146 | * - phy_stop moves to HALTED | ||
147 | * | ||
148 | * CHANGELINK: PHY experienced a change in link state | ||
149 | * - timer moves to RUNNING if link | ||
150 | * - timer moves to NOLINK if the link is down | ||
151 | * - phy_stop moves to HALTED | ||
152 | * | ||
153 | * HALTED: PHY is up, but no polling or interrupts are done. Or | ||
154 | * PHY is in an error state. | ||
155 | * | ||
156 | * - phy_start moves to RESUMING | ||
157 | * | ||
158 | * RESUMING: PHY was halted, but now wants to run again. | ||
159 | * - If we are forcing, or aneg is done, timer moves to RUNNING | ||
160 | * - If aneg is not done, timer moves to AN | ||
161 | * - phy_stop moves to HALTED | ||
162 | */ | ||
163 | enum phy_state { | ||
164 | PHY_DOWN=0, | ||
165 | PHY_STARTING, | ||
166 | PHY_READY, | ||
167 | PHY_PENDING, | ||
168 | PHY_UP, | ||
169 | PHY_AN, | ||
170 | PHY_RUNNING, | ||
171 | PHY_NOLINK, | ||
172 | PHY_FORCING, | ||
173 | PHY_CHANGELINK, | ||
174 | PHY_HALTED, | ||
175 | PHY_RESUMING | ||
176 | }; | ||
177 | |||
178 | /* phy_device: An instance of a PHY | ||
179 | * | ||
180 | * drv: Pointer to the driver for this PHY instance | ||
181 | * bus: Pointer to the bus this PHY is on | ||
182 | * dev: driver model device structure for this PHY | ||
183 | * phy_id: UID for this device found during discovery | ||
184 | * state: state of the PHY for management purposes | ||
185 | * dev_flags: Device-specific flags used by the PHY driver. | ||
186 | * addr: Bus address of PHY | ||
187 | * link_timeout: The number of timer firings to wait before the | ||
188 | * giving up on the current attempt at acquiring a link | ||
189 | * irq: IRQ number of the PHY's interrupt (-1 if none) | ||
190 | * phy_timer: The timer for handling the state machine | ||
191 | * phy_queue: A work_queue for the interrupt | ||
192 | * attached_dev: The attached enet driver's device instance ptr | ||
193 | * adjust_link: Callback for the enet controller to respond to | ||
194 | * changes in the link state. | ||
195 | * adjust_state: Callback for the enet driver to respond to | ||
196 | * changes in the state machine. | ||
197 | * | ||
198 | * speed, duplex, pause, supported, advertising, and | ||
199 | * autoneg are used like in mii_if_info | ||
200 | * | ||
201 | * interrupts currently only supports enabled or disabled, | ||
202 | * but could be changed in the future to support enabling | ||
203 | * and disabling specific interrupts | ||
204 | * | ||
205 | * Contains some infrastructure for polling and interrupt | ||
206 | * handling, as well as handling shifts in PHY hardware state | ||
207 | */ | ||
208 | struct phy_device { | ||
209 | /* Information about the PHY type */ | ||
210 | /* And management functions */ | ||
211 | struct phy_driver *drv; | ||
212 | |||
213 | struct mii_bus *bus; | ||
214 | |||
215 | struct device dev; | ||
216 | |||
217 | u32 phy_id; | ||
218 | |||
219 | enum phy_state state; | ||
220 | |||
221 | u32 dev_flags; | ||
222 | |||
223 | /* Bus address of the PHY (0-32) */ | ||
224 | int addr; | ||
225 | |||
226 | /* forced speed & duplex (no autoneg) | ||
227 | * partner speed & duplex & pause (autoneg) | ||
228 | */ | ||
229 | int speed; | ||
230 | int duplex; | ||
231 | int pause; | ||
232 | int asym_pause; | ||
233 | |||
234 | /* The most recently read link state */ | ||
235 | int link; | ||
236 | |||
237 | /* Enabled Interrupts */ | ||
238 | u32 interrupts; | ||
239 | |||
240 | /* Union of PHY and Attached devices' supported modes */ | ||
241 | /* See mii.h for more info */ | ||
242 | u32 supported; | ||
243 | u32 advertising; | ||
244 | |||
245 | int autoneg; | ||
246 | |||
247 | int link_timeout; | ||
248 | |||
249 | /* Interrupt number for this PHY | ||
250 | * -1 means no interrupt */ | ||
251 | int irq; | ||
252 | |||
253 | /* private data pointer */ | ||
254 | /* For use by PHYs to maintain extra state */ | ||
255 | void *priv; | ||
256 | |||
257 | /* Interrupt and Polling infrastructure */ | ||
258 | struct work_struct phy_queue; | ||
259 | struct timer_list phy_timer; | ||
260 | |||
261 | spinlock_t lock; | ||
262 | |||
263 | struct net_device *attached_dev; | ||
264 | |||
265 | void (*adjust_link)(struct net_device *dev); | ||
266 | |||
267 | void (*adjust_state)(struct net_device *dev); | ||
268 | }; | ||
269 | #define to_phy_device(d) container_of(d, struct phy_device, dev) | ||
270 | |||
271 | /* struct phy_driver: Driver structure for a particular PHY type | ||
272 | * | ||
273 | * phy_id: The result of reading the UID registers of this PHY | ||
274 | * type, and ANDing them with the phy_id_mask. This driver | ||
275 | * only works for PHYs with IDs which match this field | ||
276 | * name: The friendly name of this PHY type | ||
277 | * phy_id_mask: Defines the important bits of the phy_id | ||
278 | * features: A list of features (speed, duplex, etc) supported | ||
279 | * by this PHY | ||
280 | * flags: A bitfield defining certain other features this PHY | ||
281 | * supports (like interrupts) | ||
282 | * | ||
283 | * The drivers must implement config_aneg and read_status. All | ||
284 | * other functions are optional. Note that none of these | ||
285 | * functions should be called from interrupt time. The goal is | ||
286 | * for the bus read/write functions to be able to block when the | ||
287 | * bus transaction is happening, and be freed up by an interrupt | ||
288 | * (The MPC85xx has this ability, though it is not currently | ||
289 | * supported in the driver). | ||
290 | */ | ||
291 | struct phy_driver { | ||
292 | u32 phy_id; | ||
293 | char *name; | ||
294 | unsigned int phy_id_mask; | ||
295 | u32 features; | ||
296 | u32 flags; | ||
297 | |||
298 | /* Called to initialize the PHY, | ||
299 | * including after a reset */ | ||
300 | int (*config_init)(struct phy_device *phydev); | ||
301 | |||
302 | /* Called during discovery. Used to set | ||
303 | * up device-specific structures, if any */ | ||
304 | int (*probe)(struct phy_device *phydev); | ||
305 | |||
306 | /* PHY Power Management */ | ||
307 | int (*suspend)(struct phy_device *phydev); | ||
308 | int (*resume)(struct phy_device *phydev); | ||
309 | |||
310 | /* Configures the advertisement and resets | ||
311 | * autonegotiation if phydev->autoneg is on, | ||
312 | * forces the speed to the current settings in phydev | ||
313 | * if phydev->autoneg is off */ | ||
314 | int (*config_aneg)(struct phy_device *phydev); | ||
315 | |||
316 | /* Determines the negotiated speed and duplex */ | ||
317 | int (*read_status)(struct phy_device *phydev); | ||
318 | |||
319 | /* Clears any pending interrupts */ | ||
320 | int (*ack_interrupt)(struct phy_device *phydev); | ||
321 | |||
322 | /* Enables or disables interrupts */ | ||
323 | int (*config_intr)(struct phy_device *phydev); | ||
324 | |||
325 | /* Clears up any memory if needed */ | ||
326 | void (*remove)(struct phy_device *phydev); | ||
327 | |||
328 | struct device_driver driver; | ||
329 | }; | ||
330 | #define to_phy_driver(d) container_of(d, struct phy_driver, driver) | ||
331 | |||
332 | int phy_read(struct phy_device *phydev, u16 regnum); | ||
333 | int phy_write(struct phy_device *phydev, u16 regnum, u16 val); | ||
334 | struct phy_device* get_phy_device(struct mii_bus *bus, int addr); | ||
335 | int phy_clear_interrupt(struct phy_device *phydev); | ||
336 | int phy_config_interrupt(struct phy_device *phydev, u32 interrupts); | ||
337 | struct phy_device * phy_attach(struct net_device *dev, | ||
338 | const char *phy_id, u32 flags); | ||
339 | struct phy_device * phy_connect(struct net_device *dev, const char *phy_id, | ||
340 | void (*handler)(struct net_device *), u32 flags); | ||
341 | void phy_disconnect(struct phy_device *phydev); | ||
342 | void phy_detach(struct phy_device *phydev); | ||
343 | void phy_start(struct phy_device *phydev); | ||
344 | void phy_stop(struct phy_device *phydev); | ||
345 | int phy_start_aneg(struct phy_device *phydev); | ||
346 | |||
347 | int mdiobus_register(struct mii_bus *bus); | ||
348 | void mdiobus_unregister(struct mii_bus *bus); | ||
349 | void phy_sanitize_settings(struct phy_device *phydev); | ||
350 | int phy_stop_interrupts(struct phy_device *phydev); | ||
351 | |||
352 | static inline int phy_read_status(struct phy_device *phydev) { | ||
353 | return phydev->drv->read_status(phydev); | ||
354 | } | ||
355 | |||
356 | int genphy_config_advert(struct phy_device *phydev); | ||
357 | int genphy_setup_forced(struct phy_device *phydev); | ||
358 | int genphy_restart_aneg(struct phy_device *phydev); | ||
359 | int genphy_config_aneg(struct phy_device *phydev); | ||
360 | int genphy_update_link(struct phy_device *phydev); | ||
361 | int genphy_read_status(struct phy_device *phydev); | ||
362 | void phy_driver_unregister(struct phy_driver *drv); | ||
363 | int phy_driver_register(struct phy_driver *new_driver); | ||
364 | void phy_prepare_link(struct phy_device *phydev, | ||
365 | void (*adjust_link)(struct net_device *)); | ||
366 | void phy_start_machine(struct phy_device *phydev, | ||
367 | void (*handler)(struct net_device *)); | ||
368 | void phy_stop_machine(struct phy_device *phydev); | ||
369 | int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd); | ||
370 | int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd); | ||
371 | int phy_mii_ioctl(struct phy_device *phydev, | ||
372 | struct mii_ioctl_data *mii_data, int cmd); | ||
373 | int phy_start_interrupts(struct phy_device *phydev); | ||
374 | void phy_print_status(struct phy_device *phydev); | ||
375 | |||
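Where the driver exports ethtool operations, the gset/sset helpers above can simply be forwarded to the PAL; a short sketch reusing the hypothetical example_priv from the earlier sketch:

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/phy.h>

static int example_get_settings(struct net_device *dev,
                                struct ethtool_cmd *cmd)
{
        struct example_priv *priv = netdev_priv(dev);

        if (!priv->phydev)
                return -ENODEV;
        return phy_ethtool_gset(priv->phydev, cmd);
}

static int example_set_settings(struct net_device *dev,
                                struct ethtool_cmd *cmd)
{
        struct example_priv *priv = netdev_priv(dev);

        if (!priv->phydev)
                return -ENODEV;
        return phy_ethtool_sset(priv->phydev, cmd);
}

static struct ethtool_ops example_ethtool_ops = {
        .get_settings = example_get_settings,
        .set_settings = example_set_settings,
};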
376 | extern struct bus_type mdio_bus_type; | ||
377 | #endif /* __PHY_H */ | ||
diff --git a/include/linux/serialP.h b/include/linux/serialP.h index 2b2f35a64d75..2b9e6b9554d5 100644 --- a/include/linux/serialP.h +++ b/include/linux/serialP.h | |||
@@ -140,44 +140,4 @@ struct rs_multiport_struct { | |||
140 | #define ALPHA_KLUDGE_MCR 0 | 140 | #define ALPHA_KLUDGE_MCR 0 |
141 | #endif | 141 | #endif |
142 | 142 | ||
143 | /* | ||
144 | * Definitions for PCI support. | ||
145 | */ | ||
146 | #define SPCI_FL_BASE_MASK 0x0007 | ||
147 | #define SPCI_FL_BASE0 0x0000 | ||
148 | #define SPCI_FL_BASE1 0x0001 | ||
149 | #define SPCI_FL_BASE2 0x0002 | ||
150 | #define SPCI_FL_BASE3 0x0003 | ||
151 | #define SPCI_FL_BASE4 0x0004 | ||
152 | #define SPCI_FL_GET_BASE(x) (x & SPCI_FL_BASE_MASK) | ||
153 | |||
154 | #define SPCI_FL_IRQ_MASK (0x0007 << 4) | ||
155 | #define SPCI_FL_IRQBASE0 (0x0000 << 4) | ||
156 | #define SPCI_FL_IRQBASE1 (0x0001 << 4) | ||
157 | #define SPCI_FL_IRQBASE2 (0x0002 << 4) | ||
158 | #define SPCI_FL_IRQBASE3 (0x0003 << 4) | ||
159 | #define SPCI_FL_IRQBASE4 (0x0004 << 4) | ||
160 | #define SPCI_FL_GET_IRQBASE(x) ((x & SPCI_FL_IRQ_MASK) >> 4) | ||
161 | |||
162 | /* Use successive BARs (PCI base address registers), | ||
163 | else use offset into some specified BAR */ | ||
164 | #define SPCI_FL_BASE_TABLE 0x0100 | ||
165 | |||
166 | /* Use successive entries in the irq resource table */ | ||
167 | #define SPCI_FL_IRQ_TABLE 0x0200 | ||
168 | |||
169 | /* Use the irq resource table instead of dev->irq */ | ||
170 | #define SPCI_FL_IRQRESOURCE 0x0400 | ||
171 | |||
172 | /* Use the Base address register size to cap number of ports */ | ||
173 | #define SPCI_FL_REGION_SZ_CAP 0x0800 | ||
174 | |||
175 | /* Do not use irq sharing for this device */ | ||
176 | #define SPCI_FL_NO_SHIRQ 0x1000 | ||
177 | |||
178 | /* This is a PNP device */ | ||
179 | #define SPCI_FL_ISPNP 0x2000 | ||
180 | |||
181 | #define SPCI_FL_PNPDEFAULT (SPCI_FL_IRQRESOURCE|SPCI_FL_ISPNP) | ||
182 | |||
183 | #endif /* _LINUX_SERIAL_H */ | 143 | #endif /* _LINUX_SERIAL_H */ |
diff --git a/drivers/infiniband/include/ib_cache.h b/include/rdma/ib_cache.h index 44ef6bb9b9df..5bf9834f7dca 100644 --- a/drivers/infiniband/include/ib_cache.h +++ b/include/rdma/ib_cache.h | |||
@@ -1,5 +1,7 @@ | |||
1 | /* | 1 | /* |
2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. | 2 | * Copyright (c) 2004 Topspin Communications. All rights reserved. |
3 | * Copyright (c) 2005 Intel Corporation. All rights reserved. | ||
4 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
3 | * | 5 | * |
4 | * This software is available to you under a choice of one of two | 6 | * This software is available to you under a choice of one of two |
5 | * licenses. You may choose to be licensed under the terms of the GNU | 7 | * licenses. You may choose to be licensed under the terms of the GNU |
@@ -35,7 +37,7 @@ | |||
35 | #ifndef _IB_CACHE_H | 37 | #ifndef _IB_CACHE_H |
36 | #define _IB_CACHE_H | 38 | #define _IB_CACHE_H |
37 | 39 | ||
38 | #include <ib_verbs.h> | 40 | #include <rdma/ib_verbs.h> |
39 | 41 | ||
40 | /** | 42 | /** |
41 | * ib_get_cached_gid - Returns a cached GID table entry | 43 | * ib_get_cached_gid - Returns a cached GID table entry |
diff --git a/drivers/infiniband/include/ib_cm.h b/include/rdma/ib_cm.h index da650115e79a..77fe9039209b 100644 --- a/drivers/infiniband/include/ib_cm.h +++ b/include/rdma/ib_cm.h | |||
@@ -37,8 +37,8 @@ | |||
37 | #if !defined(IB_CM_H) | 37 | #if !defined(IB_CM_H) |
38 | #define IB_CM_H | 38 | #define IB_CM_H |
39 | 39 | ||
40 | #include <ib_mad.h> | 40 | #include <rdma/ib_mad.h> |
41 | #include <ib_sa.h> | 41 | #include <rdma/ib_sa.h> |
42 | 42 | ||
43 | enum ib_cm_state { | 43 | enum ib_cm_state { |
44 | IB_CM_IDLE, | 44 | IB_CM_IDLE, |
@@ -115,7 +115,7 @@ struct ib_cm_req_event_param { | |||
115 | struct ib_sa_path_rec *primary_path; | 115 | struct ib_sa_path_rec *primary_path; |
116 | struct ib_sa_path_rec *alternate_path; | 116 | struct ib_sa_path_rec *alternate_path; |
117 | 117 | ||
118 | u64 remote_ca_guid; | 118 | __be64 remote_ca_guid; |
119 | u32 remote_qkey; | 119 | u32 remote_qkey; |
120 | u32 remote_qpn; | 120 | u32 remote_qpn; |
121 | enum ib_qp_type qp_type; | 121 | enum ib_qp_type qp_type; |
@@ -132,7 +132,7 @@ struct ib_cm_req_event_param { | |||
132 | }; | 132 | }; |
133 | 133 | ||
134 | struct ib_cm_rep_event_param { | 134 | struct ib_cm_rep_event_param { |
135 | u64 remote_ca_guid; | 135 | __be64 remote_ca_guid; |
136 | u32 remote_qkey; | 136 | u32 remote_qkey; |
137 | u32 remote_qpn; | 137 | u32 remote_qpn; |
138 | u32 starting_psn; | 138 | u32 starting_psn; |
@@ -146,39 +146,39 @@ struct ib_cm_rep_event_param { | |||
146 | }; | 146 | }; |
147 | 147 | ||
148 | enum ib_cm_rej_reason { | 148 | enum ib_cm_rej_reason { |
149 | IB_CM_REJ_NO_QP = __constant_htons(1), | 149 | IB_CM_REJ_NO_QP = 1, |
150 | IB_CM_REJ_NO_EEC = __constant_htons(2), | 150 | IB_CM_REJ_NO_EEC = 2, |
151 | IB_CM_REJ_NO_RESOURCES = __constant_htons(3), | 151 | IB_CM_REJ_NO_RESOURCES = 3, |
152 | IB_CM_REJ_TIMEOUT = __constant_htons(4), | 152 | IB_CM_REJ_TIMEOUT = 4, |
153 | IB_CM_REJ_UNSUPPORTED = __constant_htons(5), | 153 | IB_CM_REJ_UNSUPPORTED = 5, |
154 | IB_CM_REJ_INVALID_COMM_ID = __constant_htons(6), | 154 | IB_CM_REJ_INVALID_COMM_ID = 6, |
155 | IB_CM_REJ_INVALID_COMM_INSTANCE = __constant_htons(7), | 155 | IB_CM_REJ_INVALID_COMM_INSTANCE = 7, |
156 | IB_CM_REJ_INVALID_SERVICE_ID = __constant_htons(8), | 156 | IB_CM_REJ_INVALID_SERVICE_ID = 8, |
157 | IB_CM_REJ_INVALID_TRANSPORT_TYPE = __constant_htons(9), | 157 | IB_CM_REJ_INVALID_TRANSPORT_TYPE = 9, |
158 | IB_CM_REJ_STALE_CONN = __constant_htons(10), | 158 | IB_CM_REJ_STALE_CONN = 10, |
159 | IB_CM_REJ_RDC_NOT_EXIST = __constant_htons(11), | 159 | IB_CM_REJ_RDC_NOT_EXIST = 11, |
160 | IB_CM_REJ_INVALID_GID = __constant_htons(12), | 160 | IB_CM_REJ_INVALID_GID = 12, |
161 | IB_CM_REJ_INVALID_LID = __constant_htons(13), | 161 | IB_CM_REJ_INVALID_LID = 13, |
162 | IB_CM_REJ_INVALID_SL = __constant_htons(14), | 162 | IB_CM_REJ_INVALID_SL = 14, |
163 | IB_CM_REJ_INVALID_TRAFFIC_CLASS = __constant_htons(15), | 163 | IB_CM_REJ_INVALID_TRAFFIC_CLASS = 15, |
164 | IB_CM_REJ_INVALID_HOP_LIMIT = __constant_htons(16), | 164 | IB_CM_REJ_INVALID_HOP_LIMIT = 16, |
165 | IB_CM_REJ_INVALID_PACKET_RATE = __constant_htons(17), | 165 | IB_CM_REJ_INVALID_PACKET_RATE = 17, |
166 | IB_CM_REJ_INVALID_ALT_GID = __constant_htons(18), | 166 | IB_CM_REJ_INVALID_ALT_GID = 18, |
167 | IB_CM_REJ_INVALID_ALT_LID = __constant_htons(19), | 167 | IB_CM_REJ_INVALID_ALT_LID = 19, |
168 | IB_CM_REJ_INVALID_ALT_SL = __constant_htons(20), | 168 | IB_CM_REJ_INVALID_ALT_SL = 20, |
169 | IB_CM_REJ_INVALID_ALT_TRAFFIC_CLASS = __constant_htons(21), | 169 | IB_CM_REJ_INVALID_ALT_TRAFFIC_CLASS = 21, |
170 | IB_CM_REJ_INVALID_ALT_HOP_LIMIT = __constant_htons(22), | 170 | IB_CM_REJ_INVALID_ALT_HOP_LIMIT = 22, |
171 | IB_CM_REJ_INVALID_ALT_PACKET_RATE = __constant_htons(23), | 171 | IB_CM_REJ_INVALID_ALT_PACKET_RATE = 23, |
172 | IB_CM_REJ_PORT_CM_REDIRECT = __constant_htons(24), | 172 | IB_CM_REJ_PORT_CM_REDIRECT = 24, |
173 | IB_CM_REJ_PORT_REDIRECT = __constant_htons(25), | 173 | IB_CM_REJ_PORT_REDIRECT = 25, |
174 | IB_CM_REJ_INVALID_MTU = __constant_htons(26), | 174 | IB_CM_REJ_INVALID_MTU = 26, |
175 | IB_CM_REJ_INSUFFICIENT_RESP_RESOURCES = __constant_htons(27), | 175 | IB_CM_REJ_INSUFFICIENT_RESP_RESOURCES = 27, |
176 | IB_CM_REJ_CONSUMER_DEFINED = __constant_htons(28), | 176 | IB_CM_REJ_CONSUMER_DEFINED = 28, |
177 | IB_CM_REJ_INVALID_RNR_RETRY = __constant_htons(29), | 177 | IB_CM_REJ_INVALID_RNR_RETRY = 29, |
178 | IB_CM_REJ_DUPLICATE_LOCAL_COMM_ID = __constant_htons(30), | 178 | IB_CM_REJ_DUPLICATE_LOCAL_COMM_ID = 30, |
179 | IB_CM_REJ_INVALID_CLASS_VERSION = __constant_htons(31), | 179 | IB_CM_REJ_INVALID_CLASS_VERSION = 31, |
180 | IB_CM_REJ_INVALID_FLOW_LABEL = __constant_htons(32), | 180 | IB_CM_REJ_INVALID_FLOW_LABEL = 32, |
181 | IB_CM_REJ_INVALID_ALT_FLOW_LABEL = __constant_htons(33) | 181 | IB_CM_REJ_INVALID_ALT_FLOW_LABEL = 33 |
182 | }; | 182 | }; |
183 | 183 | ||
184 | struct ib_cm_rej_event_param { | 184 | struct ib_cm_rej_event_param { |
@@ -222,8 +222,7 @@ struct ib_cm_sidr_req_event_param { | |||
222 | struct ib_cm_id *listen_id; | 222 | struct ib_cm_id *listen_id; |
223 | struct ib_device *device; | 223 | struct ib_device *device; |
224 | u8 port; | 224 | u8 port; |
225 | 225 | u16 pkey; | |
226 | u16 pkey; | ||
227 | }; | 226 | }; |
228 | 227 | ||
229 | enum ib_cm_sidr_status { | 228 | enum ib_cm_sidr_status { |
@@ -285,12 +284,12 @@ typedef int (*ib_cm_handler)(struct ib_cm_id *cm_id, | |||
285 | struct ib_cm_id { | 284 | struct ib_cm_id { |
286 | ib_cm_handler cm_handler; | 285 | ib_cm_handler cm_handler; |
287 | void *context; | 286 | void *context; |
288 | u64 service_id; | 287 | __be64 service_id; |
289 | u64 service_mask; | 288 | __be64 service_mask; |
290 | enum ib_cm_state state; /* internal CM/debug use */ | 289 | enum ib_cm_state state; /* internal CM/debug use */ |
291 | enum ib_cm_lap_state lap_state; /* internal CM/debug use */ | 290 | enum ib_cm_lap_state lap_state; /* internal CM/debug use */ |
292 | u32 local_id; | 291 | __be32 local_id; |
293 | u32 remote_id; | 292 | __be32 remote_id; |
294 | }; | 293 | }; |
295 | 294 | ||
296 | /** | 295 | /** |
@@ -330,13 +329,13 @@ void ib_destroy_cm_id(struct ib_cm_id *cm_id); | |||
330 | * IB_CM_ASSIGN_SERVICE_ID. | 329 | * IB_CM_ASSIGN_SERVICE_ID. |
331 | */ | 330 | */ |
332 | int ib_cm_listen(struct ib_cm_id *cm_id, | 331 | int ib_cm_listen(struct ib_cm_id *cm_id, |
333 | u64 service_id, | 332 | __be64 service_id, |
334 | u64 service_mask); | 333 | __be64 service_mask); |
335 | 334 | ||
336 | struct ib_cm_req_param { | 335 | struct ib_cm_req_param { |
337 | struct ib_sa_path_rec *primary_path; | 336 | struct ib_sa_path_rec *primary_path; |
338 | struct ib_sa_path_rec *alternate_path; | 337 | struct ib_sa_path_rec *alternate_path; |
339 | u64 service_id; | 338 | __be64 service_id; |
340 | u32 qp_num; | 339 | u32 qp_num; |
341 | enum ib_qp_type qp_type; | 340 | enum ib_qp_type qp_type; |
342 | u32 starting_psn; | 341 | u32 starting_psn; |
@@ -528,7 +527,7 @@ int ib_send_cm_apr(struct ib_cm_id *cm_id, | |||
528 | 527 | ||
529 | struct ib_cm_sidr_req_param { | 528 | struct ib_cm_sidr_req_param { |
530 | struct ib_sa_path_rec *path; | 529 | struct ib_sa_path_rec *path; |
531 | u64 service_id; | 530 | __be64 service_id; |
532 | int timeout_ms; | 531 | int timeout_ms; |
533 | const void *private_data; | 532 | const void *private_data; |
534 | u8 private_data_len; | 533 | u8 private_data_len; |
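One practical consequence of the __be64 service IDs above is that callers now perform the endian conversion explicitly. A minimal sketch, assuming a cm_id that has already been created:

#include <rdma/ib_cm.h>

/* Hypothetical helper: listen for an exact match on a host-order
 * service ID.  cpu_to_be64() does the conversion now that
 * ib_cm_listen() takes __be64 arguments; an all-ones mask requests
 * an exact match. */
static int example_cm_listen(struct ib_cm_id *cm_id, u64 service_id)
{
        return ib_cm_listen(cm_id, cpu_to_be64(service_id),
                            cpu_to_be64(~0ULL));
}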
diff --git a/drivers/infiniband/include/ib_fmr_pool.h b/include/rdma/ib_fmr_pool.h index 6c9e24d6e144..86b7e93f198b 100644 --- a/drivers/infiniband/include/ib_fmr_pool.h +++ b/include/rdma/ib_fmr_pool.h | |||
@@ -36,7 +36,7 @@ | |||
36 | #if !defined(IB_FMR_POOL_H) | 36 | #if !defined(IB_FMR_POOL_H) |
37 | #define IB_FMR_POOL_H | 37 | #define IB_FMR_POOL_H |
38 | 38 | ||
39 | #include <ib_verbs.h> | 39 | #include <rdma/ib_verbs.h> |
40 | 40 | ||
41 | struct ib_fmr_pool; | 41 | struct ib_fmr_pool; |
42 | 42 | ||
diff --git a/drivers/infiniband/include/ib_mad.h b/include/rdma/ib_mad.h index 491b6f25b3b8..fc6b1c18ffc6 100644 --- a/drivers/infiniband/include/ib_mad.h +++ b/include/rdma/ib_mad.h | |||
@@ -41,7 +41,7 @@ | |||
41 | 41 | ||
42 | #include <linux/pci.h> | 42 | #include <linux/pci.h> |
43 | 43 | ||
44 | #include <ib_verbs.h> | 44 | #include <rdma/ib_verbs.h> |
45 | 45 | ||
46 | /* Management base version */ | 46 | /* Management base version */ |
47 | #define IB_MGMT_BASE_VERSION 1 | 47 | #define IB_MGMT_BASE_VERSION 1 |
@@ -90,6 +90,7 @@ | |||
90 | 90 | ||
91 | #define IB_MGMT_RMPP_STATUS_SUCCESS 0 | 91 | #define IB_MGMT_RMPP_STATUS_SUCCESS 0 |
92 | #define IB_MGMT_RMPP_STATUS_RESX 1 | 92 | #define IB_MGMT_RMPP_STATUS_RESX 1 |
93 | #define IB_MGMT_RMPP_STATUS_ABORT_MIN 118 | ||
93 | #define IB_MGMT_RMPP_STATUS_T2L 118 | 94 | #define IB_MGMT_RMPP_STATUS_T2L 118 |
94 | #define IB_MGMT_RMPP_STATUS_BAD_LEN 119 | 95 | #define IB_MGMT_RMPP_STATUS_BAD_LEN 119 |
95 | #define IB_MGMT_RMPP_STATUS_BAD_SEG 120 | 96 | #define IB_MGMT_RMPP_STATUS_BAD_SEG 120 |
@@ -100,6 +101,7 @@ | |||
100 | #define IB_MGMT_RMPP_STATUS_UNV 125 | 101 | #define IB_MGMT_RMPP_STATUS_UNV 125 |
101 | #define IB_MGMT_RMPP_STATUS_TMR 126 | 102 | #define IB_MGMT_RMPP_STATUS_TMR 126 |
102 | #define IB_MGMT_RMPP_STATUS_UNSPEC 127 | 103 | #define IB_MGMT_RMPP_STATUS_UNSPEC 127 |
104 | #define IB_MGMT_RMPP_STATUS_ABORT_MAX 127 | ||
103 | 105 | ||
104 | #define IB_QP0 0 | 106 | #define IB_QP0 0 |
105 | #define IB_QP1 __constant_htonl(1) | 107 | #define IB_QP1 __constant_htonl(1) |
@@ -111,12 +113,12 @@ struct ib_mad_hdr { | |||
111 | u8 mgmt_class; | 113 | u8 mgmt_class; |
112 | u8 class_version; | 114 | u8 class_version; |
113 | u8 method; | 115 | u8 method; |
114 | u16 status; | 116 | __be16 status; |
115 | u16 class_specific; | 117 | __be16 class_specific; |
116 | u64 tid; | 118 | __be64 tid; |
117 | u16 attr_id; | 119 | __be16 attr_id; |
118 | u16 resv; | 120 | __be16 resv; |
119 | u32 attr_mod; | 121 | __be32 attr_mod; |
120 | }; | 122 | }; |
121 | 123 | ||
122 | struct ib_rmpp_hdr { | 124 | struct ib_rmpp_hdr { |
@@ -124,8 +126,8 @@ struct ib_rmpp_hdr { | |||
124 | u8 rmpp_type; | 126 | u8 rmpp_type; |
125 | u8 rmpp_rtime_flags; | 127 | u8 rmpp_rtime_flags; |
126 | u8 rmpp_status; | 128 | u8 rmpp_status; |
127 | u32 seg_num; | 129 | __be32 seg_num; |
128 | u32 paylen_newwin; | 130 | __be32 paylen_newwin; |
129 | }; | 131 | }; |
130 | 132 | ||
131 | typedef u64 __bitwise ib_sa_comp_mask; | 133 | typedef u64 __bitwise ib_sa_comp_mask; |
@@ -139,9 +141,9 @@ typedef u64 __bitwise ib_sa_comp_mask; | |||
139 | * the wire so we can't change the layout) | 141 | * the wire so we can't change the layout) |
140 | */ | 142 | */ |
141 | struct ib_sa_hdr { | 143 | struct ib_sa_hdr { |
142 | u64 sm_key; | 144 | __be64 sm_key; |
143 | u16 attr_offset; | 145 | __be16 attr_offset; |
144 | u16 reserved; | 146 | __be16 reserved; |
145 | ib_sa_comp_mask comp_mask; | 147 | ib_sa_comp_mask comp_mask; |
146 | } __attribute__ ((packed)); | 148 | } __attribute__ ((packed)); |
147 | 149 | ||
diff --git a/drivers/infiniband/include/ib_pack.h b/include/rdma/ib_pack.h index fe480f3e8654..f926020d6331 100644 --- a/drivers/infiniband/include/ib_pack.h +++ b/include/rdma/ib_pack.h | |||
@@ -35,7 +35,7 @@ | |||
35 | #ifndef IB_PACK_H | 35 | #ifndef IB_PACK_H |
36 | #define IB_PACK_H | 36 | #define IB_PACK_H |
37 | 37 | ||
38 | #include <ib_verbs.h> | 38 | #include <rdma/ib_verbs.h> |
39 | 39 | ||
40 | enum { | 40 | enum { |
41 | IB_LRH_BYTES = 8, | 41 | IB_LRH_BYTES = 8, |
diff --git a/drivers/infiniband/include/ib_sa.h b/include/rdma/ib_sa.h index 6d999f7b5d93..c022edfc49da 100644 --- a/drivers/infiniband/include/ib_sa.h +++ b/include/rdma/ib_sa.h | |||
@@ -38,8 +38,8 @@ | |||
38 | 38 | ||
39 | #include <linux/compiler.h> | 39 | #include <linux/compiler.h> |
40 | 40 | ||
41 | #include <ib_verbs.h> | 41 | #include <rdma/ib_verbs.h> |
42 | #include <ib_mad.h> | 42 | #include <rdma/ib_mad.h> |
43 | 43 | ||
44 | enum { | 44 | enum { |
45 | IB_SA_CLASS_VERSION = 2, /* IB spec version 1.1/1.2 */ | 45 | IB_SA_CLASS_VERSION = 2, /* IB spec version 1.1/1.2 */ |
@@ -133,16 +133,16 @@ struct ib_sa_path_rec { | |||
133 | /* reserved */ | 133 | /* reserved */ |
134 | union ib_gid dgid; | 134 | union ib_gid dgid; |
135 | union ib_gid sgid; | 135 | union ib_gid sgid; |
136 | u16 dlid; | 136 | __be16 dlid; |
137 | u16 slid; | 137 | __be16 slid; |
138 | int raw_traffic; | 138 | int raw_traffic; |
139 | /* reserved */ | 139 | /* reserved */ |
140 | u32 flow_label; | 140 | __be32 flow_label; |
141 | u8 hop_limit; | 141 | u8 hop_limit; |
142 | u8 traffic_class; | 142 | u8 traffic_class; |
143 | int reversible; | 143 | int reversible; |
144 | u8 numb_path; | 144 | u8 numb_path; |
145 | u16 pkey; | 145 | __be16 pkey; |
146 | /* reserved */ | 146 | /* reserved */ |
147 | u8 sl; | 147 | u8 sl; |
148 | u8 mtu_selector; | 148 | u8 mtu_selector; |
@@ -176,18 +176,18 @@ struct ib_sa_path_rec { | |||
176 | struct ib_sa_mcmember_rec { | 176 | struct ib_sa_mcmember_rec { |
177 | union ib_gid mgid; | 177 | union ib_gid mgid; |
178 | union ib_gid port_gid; | 178 | union ib_gid port_gid; |
179 | u32 qkey; | 179 | __be32 qkey; |
180 | u16 mlid; | 180 | __be16 mlid; |
181 | u8 mtu_selector; | 181 | u8 mtu_selector; |
182 | u8 mtu; | 182 | u8 mtu; |
183 | u8 traffic_class; | 183 | u8 traffic_class; |
184 | u16 pkey; | 184 | __be16 pkey; |
185 | u8 rate_selector; | 185 | u8 rate_selector; |
186 | u8 rate; | 186 | u8 rate; |
187 | u8 packet_life_time_selector; | 187 | u8 packet_life_time_selector; |
188 | u8 packet_life_time; | 188 | u8 packet_life_time; |
189 | u8 sl; | 189 | u8 sl; |
190 | u32 flow_label; | 190 | __be32 flow_label; |
191 | u8 hop_limit; | 191 | u8 hop_limit; |
192 | u8 scope; | 192 | u8 scope; |
193 | u8 join_state; | 193 | u8 join_state; |
@@ -238,7 +238,7 @@ struct ib_sa_mcmember_rec { | |||
238 | struct ib_sa_service_rec { | 238 | struct ib_sa_service_rec { |
239 | u64 id; | 239 | u64 id; |
240 | union ib_gid gid; | 240 | union ib_gid gid; |
241 | u16 pkey; | 241 | __be16 pkey; |
242 | /* reserved */ | 242 | /* reserved */ |
243 | u32 lease; | 243 | u32 lease; |
244 | u8 key[16]; | 244 | u8 key[16]; |
diff --git a/drivers/infiniband/include/ib_smi.h b/include/rdma/ib_smi.h index ca8216514963..87f60737f695 100644 --- a/drivers/infiniband/include/ib_smi.h +++ b/include/rdma/ib_smi.h | |||
@@ -39,9 +39,7 @@ | |||
39 | #if !defined( IB_SMI_H ) | 39 | #if !defined( IB_SMI_H ) |
40 | #define IB_SMI_H | 40 | #define IB_SMI_H |
41 | 41 | ||
42 | #include <ib_mad.h> | 42 | #include <rdma/ib_mad.h> |
43 | |||
44 | #define IB_LID_PERMISSIVE 0xFFFF | ||
45 | 43 | ||
46 | #define IB_SMP_DATA_SIZE 64 | 44 | #define IB_SMP_DATA_SIZE 64 |
47 | #define IB_SMP_MAX_PATH_HOPS 64 | 45 | #define IB_SMP_MAX_PATH_HOPS 64 |
@@ -51,16 +49,16 @@ struct ib_smp { | |||
51 | u8 mgmt_class; | 49 | u8 mgmt_class; |
52 | u8 class_version; | 50 | u8 class_version; |
53 | u8 method; | 51 | u8 method; |
54 | u16 status; | 52 | __be16 status; |
55 | u8 hop_ptr; | 53 | u8 hop_ptr; |
56 | u8 hop_cnt; | 54 | u8 hop_cnt; |
57 | u64 tid; | 55 | __be64 tid; |
58 | u16 attr_id; | 56 | __be16 attr_id; |
59 | u16 resv; | 57 | __be16 resv; |
60 | u32 attr_mod; | 58 | __be32 attr_mod; |
61 | u64 mkey; | 59 | __be64 mkey; |
62 | u16 dr_slid; | 60 | __be16 dr_slid; |
63 | u16 dr_dlid; | 61 | __be16 dr_dlid; |
64 | u8 reserved[28]; | 62 | u8 reserved[28]; |
65 | u8 data[IB_SMP_DATA_SIZE]; | 63 | u8 data[IB_SMP_DATA_SIZE]; |
66 | u8 initial_path[IB_SMP_MAX_PATH_HOPS]; | 64 | u8 initial_path[IB_SMP_MAX_PATH_HOPS]; |
diff --git a/drivers/infiniband/include/ib_user_cm.h b/include/rdma/ib_user_cm.h index 500b1af6ff77..72182d16778b 100644 --- a/drivers/infiniband/include/ib_user_cm.h +++ b/include/rdma/ib_user_cm.h | |||
@@ -88,15 +88,15 @@ struct ib_ucm_attr_id { | |||
88 | }; | 88 | }; |
89 | 89 | ||
90 | struct ib_ucm_attr_id_resp { | 90 | struct ib_ucm_attr_id_resp { |
91 | __u64 service_id; | 91 | __be64 service_id; |
92 | __u64 service_mask; | 92 | __be64 service_mask; |
93 | __u32 local_id; | 93 | __be32 local_id; |
94 | __u32 remote_id; | 94 | __be32 remote_id; |
95 | }; | 95 | }; |
96 | 96 | ||
97 | struct ib_ucm_listen { | 97 | struct ib_ucm_listen { |
98 | __u64 service_id; | 98 | __be64 service_id; |
99 | __u64 service_mask; | 99 | __be64 service_mask; |
100 | __u32 id; | 100 | __u32 id; |
101 | }; | 101 | }; |
102 | 102 | ||
@@ -114,13 +114,13 @@ struct ib_ucm_private_data { | |||
114 | struct ib_ucm_path_rec { | 114 | struct ib_ucm_path_rec { |
115 | __u8 dgid[16]; | 115 | __u8 dgid[16]; |
116 | __u8 sgid[16]; | 116 | __u8 sgid[16]; |
117 | __u16 dlid; | 117 | __be16 dlid; |
118 | __u16 slid; | 118 | __be16 slid; |
119 | __u32 raw_traffic; | 119 | __u32 raw_traffic; |
120 | __u32 flow_label; | 120 | __be32 flow_label; |
121 | __u32 reversible; | 121 | __u32 reversible; |
122 | __u32 mtu; | 122 | __u32 mtu; |
123 | __u16 pkey; | 123 | __be16 pkey; |
124 | __u8 hop_limit; | 124 | __u8 hop_limit; |
125 | __u8 traffic_class; | 125 | __u8 traffic_class; |
126 | __u8 numb_path; | 126 | __u8 numb_path; |
@@ -138,7 +138,7 @@ struct ib_ucm_req { | |||
138 | __u32 qpn; | 138 | __u32 qpn; |
139 | __u32 qp_type; | 139 | __u32 qp_type; |
140 | __u32 psn; | 140 | __u32 psn; |
141 | __u64 sid; | 141 | __be64 sid; |
142 | __u64 data; | 142 | __u64 data; |
143 | __u64 primary_path; | 143 | __u64 primary_path; |
144 | __u64 alternate_path; | 144 | __u64 alternate_path; |
@@ -200,7 +200,7 @@ struct ib_ucm_lap { | |||
200 | struct ib_ucm_sidr_req { | 200 | struct ib_ucm_sidr_req { |
201 | __u32 id; | 201 | __u32 id; |
202 | __u32 timeout; | 202 | __u32 timeout; |
203 | __u64 sid; | 203 | __be64 sid; |
204 | __u64 data; | 204 | __u64 data; |
205 | __u64 path; | 205 | __u64 path; |
206 | __u16 pkey; | 206 | __u16 pkey; |
@@ -237,7 +237,7 @@ struct ib_ucm_req_event_resp { | |||
237 | /* port */ | 237 | /* port */ |
238 | struct ib_ucm_path_rec primary_path; | 238 | struct ib_ucm_path_rec primary_path; |
239 | struct ib_ucm_path_rec alternate_path; | 239 | struct ib_ucm_path_rec alternate_path; |
240 | __u64 remote_ca_guid; | 240 | __be64 remote_ca_guid; |
241 | __u32 remote_qkey; | 241 | __u32 remote_qkey; |
242 | __u32 remote_qpn; | 242 | __u32 remote_qpn; |
243 | __u32 qp_type; | 243 | __u32 qp_type; |
@@ -253,7 +253,7 @@ struct ib_ucm_req_event_resp { | |||
253 | }; | 253 | }; |
254 | 254 | ||
255 | struct ib_ucm_rep_event_resp { | 255 | struct ib_ucm_rep_event_resp { |
256 | __u64 remote_ca_guid; | 256 | __be64 remote_ca_guid; |
257 | __u32 remote_qkey; | 257 | __u32 remote_qkey; |
258 | __u32 remote_qpn; | 258 | __u32 remote_qpn; |
259 | __u32 starting_psn; | 259 | __u32 starting_psn; |
diff --git a/drivers/infiniband/include/ib_user_mad.h b/include/rdma/ib_user_mad.h index a9a56b50aacc..44537aa32e62 100644 --- a/drivers/infiniband/include/ib_user_mad.h +++ b/include/rdma/ib_user_mad.h | |||
@@ -70,8 +70,6 @@ | |||
70 | * @traffic_class - Traffic class in GRH | 70 | * @traffic_class - Traffic class in GRH |
71 | * @gid - Remote GID in GRH | 71 | * @gid - Remote GID in GRH |
72 | * @flow_label - Flow label in GRH | 72 | * @flow_label - Flow label in GRH |
73 | * | ||
74 | * All multi-byte quantities are stored in network (big endian) byte order. | ||
75 | */ | 73 | */ |
76 | struct ib_user_mad_hdr { | 74 | struct ib_user_mad_hdr { |
77 | __u32 id; | 75 | __u32 id; |
@@ -79,9 +77,9 @@ struct ib_user_mad_hdr { | |||
79 | __u32 timeout_ms; | 77 | __u32 timeout_ms; |
80 | __u32 retries; | 78 | __u32 retries; |
81 | __u32 length; | 79 | __u32 length; |
82 | __u32 qpn; | 80 | __be32 qpn; |
83 | __u32 qkey; | 81 | __be32 qkey; |
84 | __u16 lid; | 82 | __be16 lid; |
85 | __u8 sl; | 83 | __u8 sl; |
86 | __u8 path_bits; | 84 | __u8 path_bits; |
87 | __u8 grh_present; | 85 | __u8 grh_present; |
@@ -89,7 +87,7 @@ struct ib_user_mad_hdr { | |||
89 | __u8 hop_limit; | 87 | __u8 hop_limit; |
90 | __u8 traffic_class; | 88 | __u8 traffic_class; |
91 | __u8 gid[16]; | 89 | __u8 gid[16]; |
92 | __u32 flow_label; | 90 | __be32 flow_label; |
93 | }; | 91 | }; |
94 | 92 | ||
95 | /** | 93 | /** |
diff --git a/drivers/infiniband/include/ib_user_verbs.h b/include/rdma/ib_user_verbs.h index 7c613706af72..7ebb01c8f996 100644 --- a/drivers/infiniband/include/ib_user_verbs.h +++ b/include/rdma/ib_user_verbs.h | |||
@@ -78,7 +78,12 @@ enum { | |||
78 | IB_USER_VERBS_CMD_POST_SEND, | 78 | IB_USER_VERBS_CMD_POST_SEND, |
79 | IB_USER_VERBS_CMD_POST_RECV, | 79 | IB_USER_VERBS_CMD_POST_RECV, |
80 | IB_USER_VERBS_CMD_ATTACH_MCAST, | 80 | IB_USER_VERBS_CMD_ATTACH_MCAST, |
81 | IB_USER_VERBS_CMD_DETACH_MCAST | 81 | IB_USER_VERBS_CMD_DETACH_MCAST, |
82 | IB_USER_VERBS_CMD_CREATE_SRQ, | ||
83 | IB_USER_VERBS_CMD_MODIFY_SRQ, | ||
84 | IB_USER_VERBS_CMD_QUERY_SRQ, | ||
85 | IB_USER_VERBS_CMD_DESTROY_SRQ, | ||
86 | IB_USER_VERBS_CMD_POST_SRQ_RECV | ||
82 | }; | 87 | }; |
83 | 88 | ||
84 | /* | 89 | /* |
@@ -143,8 +148,8 @@ struct ib_uverbs_query_device { | |||
143 | 148 | ||
144 | struct ib_uverbs_query_device_resp { | 149 | struct ib_uverbs_query_device_resp { |
145 | __u64 fw_ver; | 150 | __u64 fw_ver; |
146 | __u64 node_guid; | 151 | __be64 node_guid; |
147 | __u64 sys_image_guid; | 152 | __be64 sys_image_guid; |
148 | __u64 max_mr_size; | 153 | __u64 max_mr_size; |
149 | __u64 page_size_cap; | 154 | __u64 page_size_cap; |
150 | __u32 vendor_id; | 155 | __u32 vendor_id; |
@@ -386,4 +391,32 @@ struct ib_uverbs_detach_mcast { | |||
386 | __u64 driver_data[0]; | 391 | __u64 driver_data[0]; |
387 | }; | 392 | }; |
388 | 393 | ||
394 | struct ib_uverbs_create_srq { | ||
395 | __u64 response; | ||
396 | __u64 user_handle; | ||
397 | __u32 pd_handle; | ||
398 | __u32 max_wr; | ||
399 | __u32 max_sge; | ||
400 | __u32 srq_limit; | ||
401 | __u64 driver_data[0]; | ||
402 | }; | ||
403 | |||
404 | struct ib_uverbs_create_srq_resp { | ||
405 | __u32 srq_handle; | ||
406 | }; | ||
407 | |||
408 | struct ib_uverbs_modify_srq { | ||
409 | __u32 srq_handle; | ||
410 | __u32 attr_mask; | ||
411 | __u32 max_wr; | ||
412 | __u32 max_sge; | ||
413 | __u32 srq_limit; | ||
414 | __u32 reserved; | ||
415 | __u64 driver_data[0]; | ||
416 | }; | ||
417 | |||
418 | struct ib_uverbs_destroy_srq { | ||
419 | __u32 srq_handle; | ||
420 | }; | ||
421 | |||
389 | #endif /* IB_USER_VERBS_H */ | 422 | #endif /* IB_USER_VERBS_H */ |
diff --git a/drivers/infiniband/include/ib_verbs.h b/include/rdma/ib_verbs.h index 5d24edaa66e6..e16cf94870f2 100644 --- a/drivers/infiniband/include/ib_verbs.h +++ b/include/rdma/ib_verbs.h | |||
@@ -4,6 +4,7 @@ | |||
4 | * Copyright (c) 2004 Intel Corporation. All rights reserved. | 4 | * Copyright (c) 2004 Intel Corporation. All rights reserved. |
5 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. | 5 | * Copyright (c) 2004 Topspin Corporation. All rights reserved. |
6 | * Copyright (c) 2004 Voltaire Corporation. All rights reserved. | 6 | * Copyright (c) 2004 Voltaire Corporation. All rights reserved. |
7 | * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. | ||
7 | * Copyright (c) 2005 Cisco Systems. All rights reserved. | 8 | * Copyright (c) 2005 Cisco Systems. All rights reserved. |
8 | * | 9 | * |
9 | * This software is available to you under a choice of one of two | 10 | * This software is available to you under a choice of one of two |
@@ -50,8 +51,8 @@ | |||
50 | union ib_gid { | 51 | union ib_gid { |
51 | u8 raw[16]; | 52 | u8 raw[16]; |
52 | struct { | 53 | struct { |
53 | u64 subnet_prefix; | 54 | __be64 subnet_prefix; |
54 | u64 interface_id; | 55 | __be64 interface_id; |
55 | } global; | 56 | } global; |
56 | }; | 57 | }; |
57 | 58 | ||
@@ -87,8 +88,8 @@ enum ib_atomic_cap { | |||
87 | 88 | ||
88 | struct ib_device_attr { | 89 | struct ib_device_attr { |
89 | u64 fw_ver; | 90 | u64 fw_ver; |
90 | u64 node_guid; | 91 | __be64 node_guid; |
91 | u64 sys_image_guid; | 92 | __be64 sys_image_guid; |
92 | u64 max_mr_size; | 93 | u64 max_mr_size; |
93 | u64 page_size_cap; | 94 | u64 page_size_cap; |
94 | u32 vendor_id; | 95 | u32 vendor_id; |
@@ -255,7 +256,10 @@ enum ib_event_type { | |||
255 | IB_EVENT_PORT_ERR, | 256 | IB_EVENT_PORT_ERR, |
256 | IB_EVENT_LID_CHANGE, | 257 | IB_EVENT_LID_CHANGE, |
257 | IB_EVENT_PKEY_CHANGE, | 258 | IB_EVENT_PKEY_CHANGE, |
258 | IB_EVENT_SM_CHANGE | 259 | IB_EVENT_SM_CHANGE, |
260 | IB_EVENT_SRQ_ERR, | ||
261 | IB_EVENT_SRQ_LIMIT_REACHED, | ||
262 | IB_EVENT_QP_LAST_WQE_REACHED | ||
259 | }; | 263 | }; |
260 | 264 | ||
261 | struct ib_event { | 265 | struct ib_event { |
@@ -263,6 +267,7 @@ struct ib_event { | |||
263 | union { | 267 | union { |
264 | struct ib_cq *cq; | 268 | struct ib_cq *cq; |
265 | struct ib_qp *qp; | 269 | struct ib_qp *qp; |
270 | struct ib_srq *srq; | ||
266 | u8 port_num; | 271 | u8 port_num; |
267 | } element; | 272 | } element; |
268 | enum ib_event_type event; | 273 | enum ib_event_type event; |
@@ -290,8 +295,8 @@ struct ib_global_route { | |||
290 | }; | 295 | }; |
291 | 296 | ||
292 | struct ib_grh { | 297 | struct ib_grh { |
293 | u32 version_tclass_flow; | 298 | __be32 version_tclass_flow; |
294 | u16 paylen; | 299 | __be16 paylen; |
295 | u8 next_hdr; | 300 | u8 next_hdr; |
296 | u8 hop_limit; | 301 | u8 hop_limit; |
297 | union ib_gid sgid; | 302 | union ib_gid sgid; |
@@ -302,6 +307,8 @@ enum { | |||
302 | IB_MULTICAST_QPN = 0xffffff | 307 | IB_MULTICAST_QPN = 0xffffff |
303 | }; | 308 | }; |
304 | 309 | ||
310 | #define IB_LID_PERMISSIVE __constant_htons(0xFFFF) | ||
311 | |||
305 | enum ib_ah_flags { | 312 | enum ib_ah_flags { |
306 | IB_AH_GRH = 1 | 313 | IB_AH_GRH = 1 |
307 | }; | 314 | }; |
@@ -383,6 +390,23 @@ enum ib_cq_notify { | |||
383 | IB_CQ_NEXT_COMP | 390 | IB_CQ_NEXT_COMP |
384 | }; | 391 | }; |
385 | 392 | ||
393 | enum ib_srq_attr_mask { | ||
394 | IB_SRQ_MAX_WR = 1 << 0, | ||
395 | IB_SRQ_LIMIT = 1 << 1, | ||
396 | }; | ||
397 | |||
398 | struct ib_srq_attr { | ||
399 | u32 max_wr; | ||
400 | u32 max_sge; | ||
401 | u32 srq_limit; | ||
402 | }; | ||
403 | |||
404 | struct ib_srq_init_attr { | ||
405 | void (*event_handler)(struct ib_event *, void *); | ||
406 | void *srq_context; | ||
407 | struct ib_srq_attr attr; | ||
408 | }; | ||
409 | |||
386 | struct ib_qp_cap { | 410 | struct ib_qp_cap { |
387 | u32 max_send_wr; | 411 | u32 max_send_wr; |
388 | u32 max_recv_wr; | 412 | u32 max_recv_wr; |
@@ -710,10 +734,11 @@ struct ib_cq { | |||
710 | }; | 734 | }; |
711 | 735 | ||
712 | struct ib_srq { | 736 | struct ib_srq { |
713 | struct ib_device *device; | 737 | struct ib_device *device; |
714 | struct ib_uobject *uobject; | 738 | struct ib_pd *pd; |
715 | struct ib_pd *pd; | 739 | struct ib_uobject *uobject; |
716 | void *srq_context; | 740 | void (*event_handler)(struct ib_event *, void *); |
741 | void *srq_context; | ||
717 | atomic_t usecnt; | 742 | atomic_t usecnt; |
718 | }; | 743 | }; |
719 | 744 | ||
@@ -827,6 +852,18 @@ struct ib_device { | |||
827 | int (*query_ah)(struct ib_ah *ah, | 852 | int (*query_ah)(struct ib_ah *ah, |
828 | struct ib_ah_attr *ah_attr); | 853 | struct ib_ah_attr *ah_attr); |
829 | int (*destroy_ah)(struct ib_ah *ah); | 854 | int (*destroy_ah)(struct ib_ah *ah); |
855 | struct ib_srq * (*create_srq)(struct ib_pd *pd, | ||
856 | struct ib_srq_init_attr *srq_init_attr, | ||
857 | struct ib_udata *udata); | ||
858 | int (*modify_srq)(struct ib_srq *srq, | ||
859 | struct ib_srq_attr *srq_attr, | ||
860 | enum ib_srq_attr_mask srq_attr_mask); | ||
861 | int (*query_srq)(struct ib_srq *srq, | ||
862 | struct ib_srq_attr *srq_attr); | ||
863 | int (*destroy_srq)(struct ib_srq *srq); | ||
864 | int (*post_srq_recv)(struct ib_srq *srq, | ||
865 | struct ib_recv_wr *recv_wr, | ||
866 | struct ib_recv_wr **bad_recv_wr); | ||
830 | struct ib_qp * (*create_qp)(struct ib_pd *pd, | 867 | struct ib_qp * (*create_qp)(struct ib_pd *pd, |
831 | struct ib_qp_init_attr *qp_init_attr, | 868 | struct ib_qp_init_attr *qp_init_attr, |
832 | struct ib_udata *udata); | 869 | struct ib_udata *udata); |
@@ -1039,6 +1076,65 @@ int ib_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr); | |||
1039 | int ib_destroy_ah(struct ib_ah *ah); | 1076 | int ib_destroy_ah(struct ib_ah *ah); |
1040 | 1077 | ||
1041 | /** | 1078 | /** |
1079 | * ib_create_srq - Creates a SRQ associated with the specified protection | ||
1080 | * domain. | ||
1081 | * @pd: The protection domain associated with the SRQ. | ||
1082 | * @srq_init_attr: A list of initial attributes required to create the SRQ. | ||
1083 | * | ||
1084 | * srq_attr->max_wr and srq_attr->max_sge are read to determine the | ||
1085 | * requested size of the SRQ, and set to the actual values allocated | ||
1086 | * on return. If ib_create_srq() succeeds, then max_wr and max_sge | ||
1087 | * will always be at least as large as the requested values. | ||
1088 | */ | ||
1089 | struct ib_srq *ib_create_srq(struct ib_pd *pd, | ||
1090 | struct ib_srq_init_attr *srq_init_attr); | ||
1091 | |||
1092 | /** | ||
1093 | * ib_modify_srq - Modifies the attributes for the specified SRQ. | ||
1094 | * @srq: The SRQ to modify. | ||
1095 | * @srq_attr: On input, specifies the SRQ attributes to modify. On output, | ||
1096 | * the current values of selected SRQ attributes are returned. | ||
1097 | * @srq_attr_mask: A bit-mask used to specify which attributes of the SRQ | ||
1098 | * are being modified. | ||
1099 | * | ||
1100 | * The mask may contain IB_SRQ_MAX_WR to resize the SRQ and/or | ||
1101 | * IB_SRQ_LIMIT to set the SRQ's limit and request notification when | ||
1102 | * the number of receives queued drops below the limit. | ||
1103 | */ | ||
1104 | int ib_modify_srq(struct ib_srq *srq, | ||
1105 | struct ib_srq_attr *srq_attr, | ||
1106 | enum ib_srq_attr_mask srq_attr_mask); | ||
1107 | |||
1108 | /** | ||
1109 | * ib_query_srq - Returns the attribute list and current values for the | ||
1110 | * specified SRQ. | ||
1111 | * @srq: The SRQ to query. | ||
1112 | * @srq_attr: The attributes of the specified SRQ. | ||
1113 | */ | ||
1114 | int ib_query_srq(struct ib_srq *srq, | ||
1115 | struct ib_srq_attr *srq_attr); | ||
1116 | |||
1117 | /** | ||
1118 | * ib_destroy_srq - Destroys the specified SRQ. | ||
1119 | * @srq: The SRQ to destroy. | ||
1120 | */ | ||
1121 | int ib_destroy_srq(struct ib_srq *srq); | ||
1122 | |||
1123 | /** | ||
1124 | * ib_post_srq_recv - Posts a list of work requests to the specified SRQ. | ||
1125 | * @srq: The SRQ to post the work request on. | ||
1126 | * @recv_wr: A list of work requests to post on the receive queue. | ||
1127 | * @bad_recv_wr: On an immediate failure, this parameter will reference | ||
1128 | * the work request that failed to be posted to the SRQ. | ||
1129 | */ | ||
1130 | static inline int ib_post_srq_recv(struct ib_srq *srq, | ||
1131 | struct ib_recv_wr *recv_wr, | ||
1132 | struct ib_recv_wr **bad_recv_wr) | ||
1133 | { | ||
1134 | return srq->device->post_srq_recv(srq, recv_wr, bad_recv_wr); | ||
1135 | } | ||
1136 | |||
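To make the new SRQ interface concrete, an illustrative sketch of consumer-side usage; the queue sizes, the limit value, and the error handling are placeholders rather than recommendations:

#include <linux/err.h>
#include <rdma/ib_verbs.h>

/* Hypothetical: create an SRQ on an existing PD, then arm a limit so
 * that IB_EVENT_SRQ_LIMIT_REACHED fires once fewer than 16 receives
 * remain posted.  All sizes are illustrative. */
static struct ib_srq *example_create_srq(struct ib_pd *pd,
                                         void (*handler)(struct ib_event *, void *),
                                         void *context)
{
        struct ib_srq_init_attr init_attr = {
                .event_handler = handler,
                .srq_context   = context,
                .attr = {
                        .max_wr    = 128,
                        .max_sge   = 1,
                        .srq_limit = 0, /* no limit event at creation */
                },
        };
        struct ib_srq_attr attr = { .srq_limit = 16 };
        struct ib_srq *srq;
        int ret;

        srq = ib_create_srq(pd, &init_attr);
        if (IS_ERR(srq))
                return srq;

        ret = ib_modify_srq(srq, &attr, IB_SRQ_LIMIT);
        if (ret) {
                ib_destroy_srq(srq);
                return ERR_PTR(ret);
        }
        return srq;
}

Receive buffers shared by the attached QPs are then replenished through ib_post_srq_recv() rather than per-QP ib_post_recv().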
1137 | /** | ||
1042 | * ib_create_qp - Creates a QP associated with the specified protection | 1138 | * ib_create_qp - Creates a QP associated with the specified protection |
1043 | * domain. | 1139 | * domain. |
1044 | * @pd: The protection domain associated with the QP. | 1140 | * @pd: The protection domain associated with the QP. |