author	David S. Miller <davem@davemloft.net>	2015-05-12 18:43:56 -0400
committer	David S. Miller <davem@davemloft.net>	2015-05-12 18:43:56 -0400
commit	a62b70ddd13993e3706acf3021bf2680461195f4 (patch)
tree	9d4c7467556f31d2d58821185c6a765304180592 /net/switchdev/switchdev.c
parent	a3eb95f891d6130b1fc03dd07a8b54cf0a5c8ab8 (diff)
parent	4ceec22d6d89360ff7ebbf53dd3ab4e29e3d8a09 (diff)
Merge branch 'switchdev_spring_cleanup'
Scott Feldman says:
====================
switchdev: spring cleanup
v7:
Address review comments:
- [Jiri] split the br_setlink and br_dellink reverts into their own patches
- [Jiri] some parameter cleanup of rocker's memory allocators
- [Jiri] pass trans mode as formal parameter rather than hanging off of
rocker_port.
v6:
Address review comments:
- [Jiri] split a couple of patches into one-logical-change per patch
- [Joe Perches] revert checkpatch -f changes for wrapped lines with long
symbols.
v5:
Address review comments:
- [Jiri] include Jiri's s/swdev/switchdev rename patches up front.
- [Jiri] squash some patches. Now setlink/dellink/getlink patches are in
three parts: new implementation, convert drivers to new, delete old impl.
- [Jiri] some minor variable renames
- [Jiri] use BUG_ON rather than WARN when COMMIT phase fails when PREPARE
phase said it was safe to come into the water.
- [Simon] rocker: fix a few transaction prepare-commit cases that were wrong.
This was the bulk of the changes in v5.
v4:
Well, it was a lot of work, but the prepare-commit transaction model now works
as davem advises: if prepare fails, abort the transaction. The driver must do
resource reservations up front in the prepare phase and return those resources if
aborting. The commit phase would use reserved resources. The good news is the
driver code (for rocker) now handles resource allocation failures better by not
leaving partial device or driver state behind. This is a side-effect of the
prepare phase where state isn't modified; only validation of inputs and
resource reservations happen in the prepare phase. Since we're supporting
setting attrs and add objs across lower devs in the stacked case, we need to
hold rtnl_lock (or ensure rtnl_lock is held) so lower devs don't move on us
during the prepare-commit transaction. DSA driver code skips the prepare phase
and goes straight for the commit phase since no up-front allocations are done
and no device failures (that could be detected in the prepare phase) can
happen.
Remove NETIF_F_HW_SWITCH_OFFLOAD from rocker and the swdev_attr_set/get
wrappers. DSA doesn't set NETIF_F_HW_SWITCH_OFFLOAD, so it can't be in
swdev_attr_set/get. rocker doesn't need it; or rather can't support
NETIF_F_HW_SWITCH_OFFLOAD being set/cleared at run-time after the device
port is already up and offloading L2/L3. NETIF_F_HW_SWITCH_OFFLOAD is still
left as a feature flag for drivers that can use it.
Drop the renaming patch for netdev_switch_notifier. Other renames are a
result of moving to the attr get/set or obj add/del model. Everything
but the netdev_switch_notifier is still prefixed with "swdev_".
v3:
Move to two-phase prepare-commit transaction model for attr set and obj add.
Driver gets a chance in prepare phase to NACK transaction if lack of resources
or support in device.
v2:
Address review comments:
- [Jiri] squash a few related patches
- [Roopa] don't remove NETIF_F_HW_SWITCH_OFFLOAD
- [Roopa] address VLAN setlink/dellink
- [Ronen] print warning if attr set revert fails
Not addressed:
- Using something other than "swdev_" prefix
- Vendor extensions
The patch set grew a bit to not only support port attr get/set but also add
support for port obj add/del. Example of port objs are VLAN, FDB entries, and
FIB entries. The VLAN support now allows the swdev driver to get VLAN ranges
and flags like PVID and "untagged". Sridhar will be adding FDB obj support
in a follow-on patch.
v1:
The main theme of this patch set is to cleanup swdev in preparation for
new features or fixes to be added soon. We have a pretty good idea now how
to handle stacked drivers in swdev, but there were some loose ends. For
example, if a set failed in the middle of walking the lower devs, we would
leave the system in an undefined state...there was no way to recover back to
the previous state. Speaking of sets, we also recognized a pattern: most
swdev API accesses are gets or sets of port attributes, so go ahead and make
port attr get/set the central swdev API, and convert everything that is
set-ish/get-ish to this new API.
Features/fixes that should follow from this cleanup:
- solve the duplicate pkt forwarding issue
- get/set bridge attrs, like ageing_time, from/to device
- get/set more bridge port attrs from/to device
There are some rename cleanups tagging along at the end, to give swdev
consistent naming.
And finally, some much needed updates to the switchdev.txt documentation to
hopefully capture the state-of-the-art of swdev. Hopefully, we can do a better
job keeping this document up-to-date.
Tested with rocker, of course, to make sure nothing functional broke. There
are a couple of minor tweaks to DSA code for getting switch ID and setting STP
state to use the new API, but I'm not expecting any breakage there.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/switchdev/switchdev.c')
 -rw-r--r--   net/switchdev/switchdev.c | 662
 1 file changed, 502 insertions(+), 160 deletions(-)
diff --git a/net/switchdev/switchdev.c b/net/switchdev/switchdev.c index 46568b85c333..65d49d4477b9 100644 --- a/net/switchdev/switchdev.c +++ b/net/switchdev/switchdev.c | |||
@@ -15,97 +15,328 @@ | |||
15 | #include <linux/mutex.h> | 15 | #include <linux/mutex.h> |
16 | #include <linux/notifier.h> | 16 | #include <linux/notifier.h> |
17 | #include <linux/netdevice.h> | 17 | #include <linux/netdevice.h> |
18 | #include <linux/if_bridge.h> | ||
18 | #include <net/ip_fib.h> | 19 | #include <net/ip_fib.h> |
19 | #include <net/switchdev.h> | 20 | #include <net/switchdev.h> |
20 | 21 | ||
21 | /** | 22 | /** |
22 | * netdev_switch_parent_id_get - Get ID of a switch | 23 | * switchdev_port_attr_get - Get port attribute |
24 | * | ||
23 | * @dev: port device | 25 | * @dev: port device |
24 | * @psid: switch ID | 26 | * @attr: attribute to get |
27 | */ | ||
28 | int switchdev_port_attr_get(struct net_device *dev, struct switchdev_attr *attr) | ||
29 | { | ||
30 | const struct switchdev_ops *ops = dev->switchdev_ops; | ||
31 | struct net_device *lower_dev; | ||
32 | struct list_head *iter; | ||
33 | struct switchdev_attr first = { | ||
34 | .id = SWITCHDEV_ATTR_UNDEFINED | ||
35 | }; | ||
36 | int err = -EOPNOTSUPP; | ||
37 | |||
38 | if (ops && ops->switchdev_port_attr_get) | ||
39 | return ops->switchdev_port_attr_get(dev, attr); | ||
40 | |||
41 | if (attr->flags & SWITCHDEV_F_NO_RECURSE) | ||
42 | return err; | ||
43 | |||
44 | /* Switch device port(s) may be stacked under | ||
45 | * bond/team/vlan dev, so recurse down to get attr on | ||
46 | * each port. Return -ENODATA if attr values don't | ||
47 | * compare across ports. | ||
48 | */ | ||
49 | |||
50 | netdev_for_each_lower_dev(dev, lower_dev, iter) { | ||
51 | err = switchdev_port_attr_get(lower_dev, attr); | ||
52 | if (err) | ||
53 | break; | ||
54 | if (first.id == SWITCHDEV_ATTR_UNDEFINED) | ||
55 | first = *attr; | ||
56 | else if (memcmp(&first, attr, sizeof(*attr))) | ||
57 | return -ENODATA; | ||
58 | } | ||
59 | |||
60 | return err; | ||
61 | } | ||
62 | EXPORT_SYMBOL_GPL(switchdev_port_attr_get); | ||
63 | |||
64 | static int __switchdev_port_attr_set(struct net_device *dev, | ||
65 | struct switchdev_attr *attr) | ||
66 | { | ||
67 | const struct switchdev_ops *ops = dev->switchdev_ops; | ||
68 | struct net_device *lower_dev; | ||
69 | struct list_head *iter; | ||
70 | int err = -EOPNOTSUPP; | ||
71 | |||
72 | if (ops && ops->switchdev_port_attr_set) | ||
73 | return ops->switchdev_port_attr_set(dev, attr); | ||
74 | |||
75 | if (attr->flags & SWITCHDEV_F_NO_RECURSE) | ||
76 | return err; | ||
77 | |||
78 | /* Switch device port(s) may be stacked under | ||
79 | * bond/team/vlan dev, so recurse down to set attr on | ||
80 | * each port. | ||
81 | */ | ||
82 | |||
83 | netdev_for_each_lower_dev(dev, lower_dev, iter) { | ||
84 | err = __switchdev_port_attr_set(lower_dev, attr); | ||
85 | if (err) | ||
86 | break; | ||
87 | } | ||
88 | |||
89 | return err; | ||
90 | } | ||
91 | |||
92 | struct switchdev_attr_set_work { | ||
93 | struct work_struct work; | ||
94 | struct net_device *dev; | ||
95 | struct switchdev_attr attr; | ||
96 | }; | ||
97 | |||
98 | static void switchdev_port_attr_set_work(struct work_struct *work) | ||
99 | { | ||
100 | struct switchdev_attr_set_work *asw = | ||
101 | container_of(work, struct switchdev_attr_set_work, work); | ||
102 | int err; | ||
103 | |||
104 | rtnl_lock(); | ||
105 | err = switchdev_port_attr_set(asw->dev, &asw->attr); | ||
106 | BUG_ON(err); | ||
107 | rtnl_unlock(); | ||
108 | |||
109 | dev_put(asw->dev); | ||
110 | kfree(work); | ||
111 | } | ||
112 | |||
113 | static int switchdev_port_attr_set_defer(struct net_device *dev, | ||
114 | struct switchdev_attr *attr) | ||
115 | { | ||
116 | struct switchdev_attr_set_work *asw; | ||
117 | |||
118 | asw = kmalloc(sizeof(*asw), GFP_ATOMIC); | ||
119 | if (!asw) | ||
120 | return -ENOMEM; | ||
121 | |||
122 | INIT_WORK(&asw->work, switchdev_port_attr_set_work); | ||
123 | |||
124 | dev_hold(dev); | ||
125 | asw->dev = dev; | ||
126 | memcpy(&asw->attr, attr, sizeof(asw->attr)); | ||
127 | |||
128 | schedule_work(&asw->work); | ||
129 | |||
130 | return 0; | ||
131 | } | ||
132 | |||
133 | /** | ||
134 | * switchdev_port_attr_set - Set port attribute | ||
135 | * | ||
136 | * @dev: port device | ||
137 | * @attr: attribute to set | ||
25 | * | 138 | * |
26 | * Get ID of a switch this port is part of. | 139 | * Use a 2-phase prepare-commit transaction model to ensure |
140 | * system is not left in a partially updated state due to | ||
141 | * failure from driver/device. | ||
27 | */ | 142 | */ |
28 | int netdev_switch_parent_id_get(struct net_device *dev, | 143 | int switchdev_port_attr_set(struct net_device *dev, struct switchdev_attr *attr) |
29 | struct netdev_phys_item_id *psid) | ||
30 | { | 144 | { |
31 | const struct swdev_ops *ops = dev->swdev_ops; | 145 | int err; |
146 | |||
147 | if (!rtnl_is_locked()) { | ||
148 | /* Running prepare-commit transaction across stacked | ||
149 | * devices requires nothing moves, so if rtnl_lock is | ||
150 | * not held, schedule a worker thread to hold rtnl_lock | ||
151 | * while setting attr. | ||
152 | */ | ||
153 | |||
154 | return switchdev_port_attr_set_defer(dev, attr); | ||
155 | } | ||
32 | 156 | ||
33 | if (!ops || !ops->swdev_parent_id_get) | 157 | /* Phase I: prepare for attr set. Driver/device should fail |
34 | return -EOPNOTSUPP; | 158 | * here if there are going to be issues in the commit phase, |
35 | return ops->swdev_parent_id_get(dev, psid); | 159 | * such as lack of resources or support. The driver/device |
160 | * should reserve resources needed for the commit phase here, | ||
161 | * but should not commit the attr. | ||
162 | */ | ||
163 | |||
164 | attr->trans = SWITCHDEV_TRANS_PREPARE; | ||
165 | err = __switchdev_port_attr_set(dev, attr); | ||
166 | if (err) { | ||
167 | /* Prepare phase failed: abort the transaction. Any | ||
168 | * resources reserved in the prepare phase are | ||
169 | * released. | ||
170 | */ | ||
171 | |||
172 | attr->trans = SWITCHDEV_TRANS_ABORT; | ||
173 | __switchdev_port_attr_set(dev, attr); | ||
174 | |||
175 | return err; | ||
176 | } | ||
177 | |||
178 | /* Phase II: commit attr set. This cannot fail as a fault | ||
179 | * of driver/device. If it does, it's a bug in the driver/device | ||
180 | * because the driver said everythings was OK in phase I. | ||
181 | */ | ||
182 | |||
183 | attr->trans = SWITCHDEV_TRANS_COMMIT; | ||
184 | err = __switchdev_port_attr_set(dev, attr); | ||
185 | BUG_ON(err); | ||
186 | |||
187 | return err; | ||
188 | } | ||
189 | EXPORT_SYMBOL_GPL(switchdev_port_attr_set); | ||
190 | |||
191 | int __switchdev_port_obj_add(struct net_device *dev, struct switchdev_obj *obj) | ||
192 | { | ||
193 | const struct switchdev_ops *ops = dev->switchdev_ops; | ||
194 | struct net_device *lower_dev; | ||
195 | struct list_head *iter; | ||
196 | int err = -EOPNOTSUPP; | ||
197 | |||
198 | if (ops && ops->switchdev_port_obj_add) | ||
199 | return ops->switchdev_port_obj_add(dev, obj); | ||
200 | |||
201 | /* Switch device port(s) may be stacked under | ||
202 | * bond/team/vlan dev, so recurse down to add object on | ||
203 | * each port. | ||
204 | */ | ||
205 | |||
206 | netdev_for_each_lower_dev(dev, lower_dev, iter) { | ||
207 | err = __switchdev_port_obj_add(lower_dev, obj); | ||
208 | if (err) | ||
209 | break; | ||
210 | } | ||
211 | |||
212 | return err; | ||
36 | } | 213 | } |
37 | EXPORT_SYMBOL_GPL(netdev_switch_parent_id_get); | ||
38 | 214 | ||
39 | /** | 215 | /** |
40 | * netdev_switch_port_stp_update - Notify switch device port of STP | 216 | * switchdev_port_obj_add - Add port object |
41 | * state change | 217 | * |
42 | * @dev: port device | 218 | * @dev: port device |
43 | * @state: port STP state | 219 | * @obj: object to add |
220 | * | ||
221 | * Use a 2-phase prepare-commit transaction model to ensure | ||
222 | * system is not left in a partially updated state due to | ||
223 | * failure from driver/device. | ||
224 | * | ||
225 | * rtnl_lock must be held. | ||
226 | */ | ||
227 | int switchdev_port_obj_add(struct net_device *dev, struct switchdev_obj *obj) | ||
228 | { | ||
229 | int err; | ||
230 | |||
231 | ASSERT_RTNL(); | ||
232 | |||
233 | /* Phase I: prepare for obj add. Driver/device should fail | ||
234 | * here if there are going to be issues in the commit phase, | ||
235 | * such as lack of resources or support. The driver/device | ||
236 | * should reserve resources needed for the commit phase here, | ||
237 | * but should not commit the obj. | ||
238 | */ | ||
239 | |||
240 | obj->trans = SWITCHDEV_TRANS_PREPARE; | ||
241 | err = __switchdev_port_obj_add(dev, obj); | ||
242 | if (err) { | ||
243 | /* Prepare phase failed: abort the transaction. Any | ||
244 | * resources reserved in the prepare phase are | ||
245 | * released. | ||
246 | */ | ||
247 | |||
248 | obj->trans = SWITCHDEV_TRANS_ABORT; | ||
249 | __switchdev_port_obj_add(dev, obj); | ||
250 | |||
251 | return err; | ||
252 | } | ||
253 | |||
254 | /* Phase II: commit obj add. This cannot fail as a fault | ||
255 | * of driver/device. If it does, it's a bug in the driver/device | ||
256 | * because the driver said everythings was OK in phase I. | ||
257 | */ | ||
258 | |||
259 | obj->trans = SWITCHDEV_TRANS_COMMIT; | ||
260 | err = __switchdev_port_obj_add(dev, obj); | ||
261 | WARN(err, "%s: Commit of object (id=%d) failed.\n", dev->name, obj->id); | ||
262 | |||
263 | return err; | ||
264 | } | ||
265 | EXPORT_SYMBOL_GPL(switchdev_port_obj_add); | ||
266 | |||
267 | /** | ||
268 | * switchdev_port_obj_del - Delete port object | ||
44 | * | 269 | * |
45 | * Notify switch device port of bridge port STP state change. | 270 | * @dev: port device |
271 | * @obj: object to delete | ||
46 | */ | 272 | */ |
47 | int netdev_switch_port_stp_update(struct net_device *dev, u8 state) | 273 | int switchdev_port_obj_del(struct net_device *dev, struct switchdev_obj *obj) |
48 | { | 274 | { |
49 | const struct swdev_ops *ops = dev->swdev_ops; | 275 | const struct switchdev_ops *ops = dev->switchdev_ops; |
50 | struct net_device *lower_dev; | 276 | struct net_device *lower_dev; |
51 | struct list_head *iter; | 277 | struct list_head *iter; |
52 | int err = -EOPNOTSUPP; | 278 | int err = -EOPNOTSUPP; |
53 | 279 | ||
54 | if (ops && ops->swdev_port_stp_update) | 280 | if (ops && ops->switchdev_port_obj_del) |
55 | return ops->swdev_port_stp_update(dev, state); | 281 | return ops->switchdev_port_obj_del(dev, obj); |
282 | |||
283 | /* Switch device port(s) may be stacked under | ||
284 | * bond/team/vlan dev, so recurse down to delete object on | ||
285 | * each port. | ||
286 | */ | ||
56 | 287 | ||
57 | netdev_for_each_lower_dev(dev, lower_dev, iter) { | 288 | netdev_for_each_lower_dev(dev, lower_dev, iter) { |
58 | err = netdev_switch_port_stp_update(lower_dev, state); | 289 | err = switchdev_port_obj_del(lower_dev, obj); |
59 | if (err && err != -EOPNOTSUPP) | 290 | if (err) |
60 | return err; | 291 | break; |
61 | } | 292 | } |
62 | 293 | ||
63 | return err; | 294 | return err; |
64 | } | 295 | } |
65 | EXPORT_SYMBOL_GPL(netdev_switch_port_stp_update); | 296 | EXPORT_SYMBOL_GPL(switchdev_port_obj_del); |
66 | 297 | ||
67 | static DEFINE_MUTEX(netdev_switch_mutex); | 298 | static DEFINE_MUTEX(switchdev_mutex); |
68 | static RAW_NOTIFIER_HEAD(netdev_switch_notif_chain); | 299 | static RAW_NOTIFIER_HEAD(switchdev_notif_chain); |
69 | 300 | ||
70 | /** | 301 | /** |
71 | * register_netdev_switch_notifier - Register notifier | 302 | * register_switchdev_notifier - Register notifier |
72 | * @nb: notifier_block | 303 | * @nb: notifier_block |
73 | * | 304 | * |
74 | * Register switch device notifier. This should be used by code | 305 | * Register switch device notifier. This should be used by code |
75 | * which needs to monitor events happening in particular device. | 306 | * which needs to monitor events happening in particular device. |
76 | * Return values are same as for atomic_notifier_chain_register(). | 307 | * Return values are same as for atomic_notifier_chain_register(). |
77 | */ | 308 | */ |
78 | int register_netdev_switch_notifier(struct notifier_block *nb) | 309 | int register_switchdev_notifier(struct notifier_block *nb) |
79 | { | 310 | { |
80 | int err; | 311 | int err; |
81 | 312 | ||
82 | mutex_lock(&netdev_switch_mutex); | 313 | mutex_lock(&switchdev_mutex); |
83 | err = raw_notifier_chain_register(&netdev_switch_notif_chain, nb); | 314 | err = raw_notifier_chain_register(&switchdev_notif_chain, nb); |
84 | mutex_unlock(&netdev_switch_mutex); | 315 | mutex_unlock(&switchdev_mutex); |
85 | return err; | 316 | return err; |
86 | } | 317 | } |
87 | EXPORT_SYMBOL_GPL(register_netdev_switch_notifier); | 318 | EXPORT_SYMBOL_GPL(register_switchdev_notifier); |
88 | 319 | ||
89 | /** | 320 | /** |
90 | * unregister_netdev_switch_notifier - Unregister notifier | 321 | * unregister_switchdev_notifier - Unregister notifier |
91 | * @nb: notifier_block | 322 | * @nb: notifier_block |
92 | * | 323 | * |
93 | * Unregister switch device notifier. | 324 | * Unregister switch device notifier. |
94 | * Return values are same as for atomic_notifier_chain_unregister(). | 325 | * Return values are same as for atomic_notifier_chain_unregister(). |
95 | */ | 326 | */ |
96 | int unregister_netdev_switch_notifier(struct notifier_block *nb) | 327 | int unregister_switchdev_notifier(struct notifier_block *nb) |
97 | { | 328 | { |
98 | int err; | 329 | int err; |
99 | 330 | ||
100 | mutex_lock(&netdev_switch_mutex); | 331 | mutex_lock(&switchdev_mutex); |
101 | err = raw_notifier_chain_unregister(&netdev_switch_notif_chain, nb); | 332 | err = raw_notifier_chain_unregister(&switchdev_notif_chain, nb); |
102 | mutex_unlock(&netdev_switch_mutex); | 333 | mutex_unlock(&switchdev_mutex); |
103 | return err; | 334 | return err; |
104 | } | 335 | } |
105 | EXPORT_SYMBOL_GPL(unregister_netdev_switch_notifier); | 336 | EXPORT_SYMBOL_GPL(unregister_switchdev_notifier); |
106 | 337 | ||
107 | /** | 338 | /** |
108 | * call_netdev_switch_notifiers - Call notifiers | 339 | * call_switchdev_notifiers - Call notifiers |
109 | * @val: value passed unmodified to notifier function | 340 | * @val: value passed unmodified to notifier function |
110 | * @dev: port device | 341 | * @dev: port device |
111 | * @info: notifier information data | 342 | * @info: notifier information data |
@@ -114,146 +345,241 @@ EXPORT_SYMBOL_GPL(unregister_netdev_switch_notifier); | |||
114 | * when it needs to propagate hardware event. | 345 | * when it needs to propagate hardware event. |
115 | * Return values are same as for atomic_notifier_call_chain(). | 346 | * Return values are same as for atomic_notifier_call_chain(). |
116 | */ | 347 | */ |
117 | int call_netdev_switch_notifiers(unsigned long val, struct net_device *dev, | 348 | int call_switchdev_notifiers(unsigned long val, struct net_device *dev, |
118 | struct netdev_switch_notifier_info *info) | 349 | struct switchdev_notifier_info *info) |
119 | { | 350 | { |
120 | int err; | 351 | int err; |
121 | 352 | ||
122 | info->dev = dev; | 353 | info->dev = dev; |
123 | mutex_lock(&netdev_switch_mutex); | 354 | mutex_lock(&switchdev_mutex); |
124 | err = raw_notifier_call_chain(&netdev_switch_notif_chain, val, info); | 355 | err = raw_notifier_call_chain(&switchdev_notif_chain, val, info); |
125 | mutex_unlock(&netdev_switch_mutex); | 356 | mutex_unlock(&switchdev_mutex); |
126 | return err; | 357 | return err; |
127 | } | 358 | } |
128 | EXPORT_SYMBOL_GPL(call_netdev_switch_notifiers); | 359 | EXPORT_SYMBOL_GPL(call_switchdev_notifiers); |
129 | 360 | ||
130 | /** | 361 | /** |
131 | * netdev_switch_port_bridge_setlink - Notify switch device port of bridge | 362 | * switchdev_port_bridge_getlink - Get bridge port attributes |
132 | * port attributes | ||
133 | * | 363 | * |
134 | * @dev: port device | 364 | * @dev: port device |
135 | * @nlh: netlink msg with bridge port attributes | ||
136 | * @flags: bridge setlink flags | ||
137 | * | 365 | * |
138 | * Notify switch device port of bridge port attributes | 366 | * Called for SELF on rtnl_bridge_getlink to get bridge port |
367 | * attributes. | ||
139 | */ | 368 | */ |
140 | int netdev_switch_port_bridge_setlink(struct net_device *dev, | 369 | int switchdev_port_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, |
141 | struct nlmsghdr *nlh, u16 flags) | 370 | struct net_device *dev, u32 filter_mask, |
371 | int nlflags) | ||
142 | { | 372 | { |
143 | const struct net_device_ops *ops = dev->netdev_ops; | 373 | struct switchdev_attr attr = { |
374 | .id = SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS, | ||
375 | }; | ||
376 | u16 mode = BRIDGE_MODE_UNDEF; | ||
377 | u32 mask = BR_LEARNING | BR_LEARNING_SYNC; | ||
378 | int err; | ||
144 | 379 | ||
145 | if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD)) | 380 | err = switchdev_port_attr_get(dev, &attr); |
146 | return 0; | 381 | if (err) |
382 | return err; | ||
383 | |||
384 | return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, | ||
385 | attr.brport_flags, mask, nlflags); | ||
386 | } | ||
387 | EXPORT_SYMBOL_GPL(switchdev_port_bridge_getlink); | ||
388 | |||
389 | static int switchdev_port_br_setflag(struct net_device *dev, | ||
390 | struct nlattr *nlattr, | ||
391 | unsigned long brport_flag) | ||
392 | { | ||
393 | struct switchdev_attr attr = { | ||
394 | .id = SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS, | ||
395 | }; | ||
396 | u8 flag = nla_get_u8(nlattr); | ||
397 | int err; | ||
398 | |||
399 | err = switchdev_port_attr_get(dev, &attr); | ||
400 | if (err) | ||
401 | return err; | ||
147 | 402 | ||
148 | if (!ops->ndo_bridge_setlink) | 403 | if (flag) |
149 | return -EOPNOTSUPP; | 404 | attr.brport_flags |= brport_flag; |
405 | else | ||
406 | attr.brport_flags &= ~brport_flag; | ||
150 | 407 | ||
151 | return ops->ndo_bridge_setlink(dev, nlh, flags); | 408 | return switchdev_port_attr_set(dev, &attr); |
152 | } | 409 | } |
153 | EXPORT_SYMBOL_GPL(netdev_switch_port_bridge_setlink); | ||
154 | 410 | ||
155 | /** | 411 | static const struct nla_policy |
156 | * netdev_switch_port_bridge_dellink - Notify switch device port of bridge | 412 | switchdev_port_bridge_policy[IFLA_BRPORT_MAX + 1] = { |
157 | * port attribute delete | 413 | [IFLA_BRPORT_STATE] = { .type = NLA_U8 }, |
158 | * | 414 | [IFLA_BRPORT_COST] = { .type = NLA_U32 }, |
159 | * @dev: port device | 415 | [IFLA_BRPORT_PRIORITY] = { .type = NLA_U16 }, |
160 | * @nlh: netlink msg with bridge port attributes | 416 | [IFLA_BRPORT_MODE] = { .type = NLA_U8 }, |
161 | * @flags: bridge setlink flags | 417 | [IFLA_BRPORT_GUARD] = { .type = NLA_U8 }, |
162 | * | 418 | [IFLA_BRPORT_PROTECT] = { .type = NLA_U8 }, |
163 | * Notify switch device port of bridge port attribute delete | 419 | [IFLA_BRPORT_FAST_LEAVE] = { .type = NLA_U8 }, |
164 | */ | 420 | [IFLA_BRPORT_LEARNING] = { .type = NLA_U8 }, |
165 | int netdev_switch_port_bridge_dellink(struct net_device *dev, | 421 | [IFLA_BRPORT_LEARNING_SYNC] = { .type = NLA_U8 }, |
166 | struct nlmsghdr *nlh, u16 flags) | 422 | [IFLA_BRPORT_UNICAST_FLOOD] = { .type = NLA_U8 }, |
423 | }; | ||
424 | |||
425 | static int switchdev_port_br_setlink_protinfo(struct net_device *dev, | ||
426 | struct nlattr *protinfo) | ||
167 | { | 427 | { |
168 | const struct net_device_ops *ops = dev->netdev_ops; | 428 | struct nlattr *attr; |
429 | int rem; | ||
430 | int err; | ||
169 | 431 | ||
170 | if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD)) | 432 | err = nla_validate_nested(protinfo, IFLA_BRPORT_MAX, |
171 | return 0; | 433 | switchdev_port_bridge_policy); |
434 | if (err) | ||
435 | return err; | ||
436 | |||
437 | nla_for_each_nested(attr, protinfo, rem) { | ||
438 | switch (nla_type(attr)) { | ||
439 | case IFLA_BRPORT_LEARNING: | ||
440 | err = switchdev_port_br_setflag(dev, attr, | ||
441 | BR_LEARNING); | ||
442 | break; | ||
443 | case IFLA_BRPORT_LEARNING_SYNC: | ||
444 | err = switchdev_port_br_setflag(dev, attr, | ||
445 | BR_LEARNING_SYNC); | ||
446 | break; | ||
447 | default: | ||
448 | err = -EOPNOTSUPP; | ||
449 | break; | ||
450 | } | ||
451 | if (err) | ||
452 | return err; | ||
453 | } | ||
454 | |||
455 | return 0; | ||
456 | } | ||
457 | |||
458 | static int switchdev_port_br_afspec(struct net_device *dev, | ||
459 | struct nlattr *afspec, | ||
460 | int (*f)(struct net_device *dev, | ||
461 | struct switchdev_obj *obj)) | ||
462 | { | ||
463 | struct nlattr *attr; | ||
464 | struct bridge_vlan_info *vinfo; | ||
465 | struct switchdev_obj obj = { | ||
466 | .id = SWITCHDEV_OBJ_PORT_VLAN, | ||
467 | }; | ||
468 | int rem; | ||
469 | int err; | ||
172 | 470 | ||
173 | if (!ops->ndo_bridge_dellink) | 471 | nla_for_each_nested(attr, afspec, rem) { |
174 | return -EOPNOTSUPP; | 472 | if (nla_type(attr) != IFLA_BRIDGE_VLAN_INFO) |
473 | continue; | ||
474 | if (nla_len(attr) != sizeof(struct bridge_vlan_info)) | ||
475 | return -EINVAL; | ||
476 | vinfo = nla_data(attr); | ||
477 | obj.vlan.flags = vinfo->flags; | ||
478 | if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_BEGIN) { | ||
479 | if (obj.vlan.vid_start) | ||
480 | return -EINVAL; | ||
481 | obj.vlan.vid_start = vinfo->vid; | ||
482 | } else if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_END) { | ||
483 | if (!obj.vlan.vid_start) | ||
484 | return -EINVAL; | ||
485 | obj.vlan.vid_end = vinfo->vid; | ||
486 | if (obj.vlan.vid_end <= obj.vlan.vid_start) | ||
487 | return -EINVAL; | ||
488 | err = f(dev, &obj); | ||
489 | if (err) | ||
490 | return err; | ||
491 | memset(&obj.vlan, 0, sizeof(obj.vlan)); | ||
492 | } else { | ||
493 | if (obj.vlan.vid_start) | ||
494 | return -EINVAL; | ||
495 | obj.vlan.vid_start = vinfo->vid; | ||
496 | obj.vlan.vid_end = vinfo->vid; | ||
497 | err = f(dev, &obj); | ||
498 | if (err) | ||
499 | return err; | ||
500 | memset(&obj.vlan, 0, sizeof(obj.vlan)); | ||
501 | } | ||
502 | } | ||
175 | 503 | ||
176 | return ops->ndo_bridge_dellink(dev, nlh, flags); | 504 | return 0; |
177 | } | 505 | } |
178 | EXPORT_SYMBOL_GPL(netdev_switch_port_bridge_dellink); | ||
179 | 506 | ||
180 | /** | 507 | /** |
181 | * ndo_dflt_netdev_switch_port_bridge_setlink - default ndo bridge setlink | 508 | * switchdev_port_bridge_setlink - Set bridge port attributes |
182 | * op for master devices | ||
183 | * | 509 | * |
184 | * @dev: port device | 510 | * @dev: port device |
185 | * @nlh: netlink msg with bridge port attributes | 511 | * @nlh: netlink header |
186 | * @flags: bridge setlink flags | 512 | * @flags: netlink flags |
187 | * | 513 | * |
188 | * Notify master device slaves of bridge port attributes | 514 | * Called for SELF on rtnl_bridge_setlink to set bridge port |
515 | * attributes. | ||
189 | */ | 516 | */ |
190 | int ndo_dflt_netdev_switch_port_bridge_setlink(struct net_device *dev, | 517 | int switchdev_port_bridge_setlink(struct net_device *dev, |
191 | struct nlmsghdr *nlh, u16 flags) | 518 | struct nlmsghdr *nlh, u16 flags) |
192 | { | 519 | { |
193 | struct net_device *lower_dev; | 520 | struct nlattr *protinfo; |
194 | struct list_head *iter; | 521 | struct nlattr *afspec; |
195 | int ret = 0, err = 0; | 522 | int err = 0; |
196 | |||
197 | if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD)) | ||
198 | return ret; | ||
199 | 523 | ||
200 | netdev_for_each_lower_dev(dev, lower_dev, iter) { | 524 | protinfo = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), |
201 | err = netdev_switch_port_bridge_setlink(lower_dev, nlh, flags); | 525 | IFLA_PROTINFO); |
202 | if (err && err != -EOPNOTSUPP) | 526 | if (protinfo) { |
203 | ret = err; | 527 | err = switchdev_port_br_setlink_protinfo(dev, protinfo); |
528 | if (err) | ||
529 | return err; | ||
204 | } | 530 | } |
205 | 531 | ||
206 | return ret; | 532 | afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), |
533 | IFLA_AF_SPEC); | ||
534 | if (afspec) | ||
535 | err = switchdev_port_br_afspec(dev, afspec, | ||
536 | switchdev_port_obj_add); | ||
537 | |||
538 | return err; | ||
207 | } | 539 | } |
208 | EXPORT_SYMBOL_GPL(ndo_dflt_netdev_switch_port_bridge_setlink); | 540 | EXPORT_SYMBOL_GPL(switchdev_port_bridge_setlink); |
209 | 541 | ||
210 | /** | 542 | /** |
211 | * ndo_dflt_netdev_switch_port_bridge_dellink - default ndo bridge dellink | 543 | * switchdev_port_bridge_dellink - Set bridge port attributes |
212 | * op for master devices | ||
213 | * | 544 | * |
214 | * @dev: port device | 545 | * @dev: port device |
215 | * @nlh: netlink msg with bridge port attributes | 546 | * @nlh: netlink header |
216 | * @flags: bridge dellink flags | 547 | * @flags: netlink flags |
217 | * | 548 | * |
218 | * Notify master device slaves of bridge port attribute deletes | 549 | * Called for SELF on rtnl_bridge_dellink to set bridge port |
550 | * attributes. | ||
219 | */ | 551 | */ |
220 | int ndo_dflt_netdev_switch_port_bridge_dellink(struct net_device *dev, | 552 | int switchdev_port_bridge_dellink(struct net_device *dev, |
221 | struct nlmsghdr *nlh, u16 flags) | 553 | struct nlmsghdr *nlh, u16 flags) |
222 | { | 554 | { |
223 | struct net_device *lower_dev; | 555 | struct nlattr *afspec; |
224 | struct list_head *iter; | ||
225 | int ret = 0, err = 0; | ||
226 | 556 | ||
227 | if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD)) | 557 | afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), |
228 | return ret; | 558 | IFLA_AF_SPEC); |
559 | if (afspec) | ||
560 | return switchdev_port_br_afspec(dev, afspec, | ||
561 | switchdev_port_obj_del); | ||
229 | 562 | ||
230 | netdev_for_each_lower_dev(dev, lower_dev, iter) { | 563 | return 0; |
231 | err = netdev_switch_port_bridge_dellink(lower_dev, nlh, flags); | ||
232 | if (err && err != -EOPNOTSUPP) | ||
233 | ret = err; | ||
234 | } | ||
235 | |||
236 | return ret; | ||
237 | } | 564 | } |
238 | EXPORT_SYMBOL_GPL(ndo_dflt_netdev_switch_port_bridge_dellink); | 565 | EXPORT_SYMBOL_GPL(switchdev_port_bridge_dellink); |
239 | 566 | ||
-static struct net_device *netdev_switch_get_lowest_dev(struct net_device *dev)
+static struct net_device *switchdev_get_lowest_dev(struct net_device *dev)
 {
-	const struct swdev_ops *ops = dev->swdev_ops;
+	const struct switchdev_ops *ops = dev->switchdev_ops;
 	struct net_device *lower_dev;
 	struct net_device *port_dev;
 	struct list_head *iter;
 
-	/* Recusively search down until we find a sw port dev.
-	 * (A sw port dev supports swdev_parent_id_get).
+	/* Recursively search down until we find a sw port dev.
+	 * (A sw port dev supports switchdev_port_attr_get).
 	 */
 
-	if (dev->features & NETIF_F_HW_SWITCH_OFFLOAD &&
-	    ops && ops->swdev_parent_id_get)
+	if (ops && ops->switchdev_port_attr_get)
 		return dev;
 
 	netdev_for_each_lower_dev(dev, lower_dev, iter) {
-		port_dev = netdev_switch_get_lowest_dev(lower_dev);
+		port_dev = switchdev_get_lowest_dev(lower_dev);
 		if (port_dev)
 			return port_dev;
 	}
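The recursion above walks a stacked-device tree (e.g. bridge over bond over physical port) until it reaches a device that answers switch-port queries. A compilable toy model of that walk, with illustrative types in place of the kernel's `struct net_device`:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of switchdev_get_lowest_dev(); nothing here is a real
 * kernel type. */
struct toy_dev {
	int is_switch_port;       /* stands in for ops->switchdev_port_attr_get */
	struct toy_dev *lower[4]; /* stands in for netdev_for_each_lower_dev() */
	int n_lower;
};

static struct toy_dev *toy_get_lowest_dev(struct toy_dev *dev)
{
	if (dev->is_switch_port)
		return dev;

	for (int i = 0; i < dev->n_lower; i++) {
		struct toy_dev *port_dev = toy_get_lowest_dev(dev->lower[i]);

		if (port_dev)
			return port_dev;
	}

	return NULL; /* no switchdev port anywhere below this device */
}

/* Returns 1 iff a bridge-over-bond stack resolves to the port. */
static int toy_demo(void)
{
	static struct toy_dev port = { .is_switch_port = 1 };
	static struct toy_dev bond = { .lower = { &port }, .n_lower = 1 };
	static struct toy_dev bridge = { .lower = { &bond }, .n_lower = 1 };

	return toy_get_lowest_dev(&bridge) == &port;
}
```

As in the kernel version, the first port found wins and a software-only stack falls through to NULL.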
@@ -261,10 +587,12 @@ static struct net_device *netdev_switch_get_lowest_dev(struct net_device *dev)
 	return NULL;
 }
 
-static struct net_device *netdev_switch_get_dev_by_nhs(struct fib_info *fi)
+static struct net_device *switchdev_get_dev_by_nhs(struct fib_info *fi)
 {
-	struct netdev_phys_item_id psid;
-	struct netdev_phys_item_id prev_psid;
+	struct switchdev_attr attr = {
+		.id = SWITCHDEV_ATTR_PORT_PARENT_ID,
+	};
+	struct switchdev_attr prev_attr;
 	struct net_device *dev = NULL;
 	int nhsel;
 
@@ -276,28 +604,29 @@ static struct net_device *netdev_switch_get_dev_by_nhs(struct fib_info *fi)
 		if (!nh->nh_dev)
 			return NULL;
 
-		dev = netdev_switch_get_lowest_dev(nh->nh_dev);
+		dev = switchdev_get_lowest_dev(nh->nh_dev);
 		if (!dev)
 			return NULL;
 
-		if (netdev_switch_parent_id_get(dev, &psid))
+		if (switchdev_port_attr_get(dev, &attr))
 			return NULL;
 
 		if (nhsel > 0) {
-			if (prev_psid.id_len != psid.id_len)
+			if (prev_attr.ppid.id_len != attr.ppid.id_len)
 				return NULL;
-			if (memcmp(prev_psid.id, psid.id, psid.id_len))
+			if (memcmp(prev_attr.ppid.id, attr.ppid.id,
+				   attr.ppid.id_len))
 				return NULL;
 		}
 
-		prev_psid = psid;
+		prev_attr = attr;
 	}
 
 	return dev;
 }
 
 /**
- * netdev_switch_fib_ipv4_add - Add IPv4 route entry to switch
+ * switchdev_fib_ipv4_add - Add IPv4 route entry to switch
  *
  * @dst: route's IPv4 destination address
  * @dst_len: destination address length (prefix length)
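The nexthop loop above enforces one rule: a route is offloaded only if every nexthop's port reports the same parent (switch) ID, so the whole ECMP group lives on one ASIC. A standalone sketch of that comparison, with a simplified stand-in for `struct netdev_phys_item_id`:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for the parent-ID attribute compared per
 * nexthop in switchdev_get_dev_by_nhs(). */
#define TOY_ID_MAX 32

struct parent_id {
	unsigned char id[TOY_ID_MAX];
	int id_len;
};

static int same_parent(const struct parent_id *a, const struct parent_id *b)
{
	if (a->id_len != b->id_len)
		return 0;
	return memcmp(a->id, b->id, a->id_len) == 0;
}

/* Returns 1 if all n parent IDs match the first one, mirroring the
 * nhsel loop; 0 means the route's nexthops span different switches
 * and the route is left to software. */
static int all_same_switch(const struct parent_id *ids, int n)
{
	for (int i = 1; i < n; i++)
		if (!same_parent(&ids[0], &ids[i]))
			return 0;
	return 1;
}
```

Note the kernel's version short-circuits to NULL on the first mismatch rather than collecting the IDs first; the predicate is the same.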
@@ -309,11 +638,22 @@ static struct net_device *netdev_switch_get_dev_by_nhs(struct fib_info *fi)
  *
  * Add IPv4 route entry to switch device.
  */
-int netdev_switch_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi,
-			       u8 tos, u8 type, u32 nlflags, u32 tb_id)
+int switchdev_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi,
+			   u8 tos, u8 type, u32 nlflags, u32 tb_id)
 {
+	struct switchdev_obj fib_obj = {
+		.id = SWITCHDEV_OBJ_IPV4_FIB,
+		.ipv4_fib = {
+			.dst = htonl(dst),
+			.dst_len = dst_len,
+			.fi = fi,
+			.tos = tos,
+			.type = type,
+			.nlflags = nlflags,
+			.tb_id = tb_id,
+		},
+	};
 	struct net_device *dev;
-	const struct swdev_ops *ops;
 	int err = 0;
 
 	/* Don't offload route if using custom ip rules or if
@@ -328,25 +668,20 @@ int netdev_switch_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi,
 	if (fi->fib_net->ipv4.fib_offload_disabled)
 		return 0;
 
-	dev = netdev_switch_get_dev_by_nhs(fi);
+	dev = switchdev_get_dev_by_nhs(fi);
 	if (!dev)
 		return 0;
-	ops = dev->swdev_ops;
 
-	if (ops->swdev_fib_ipv4_add) {
-		err = ops->swdev_fib_ipv4_add(dev, htonl(dst), dst_len,
-					      fi, tos, type, nlflags,
-					      tb_id);
-		if (!err)
-			fi->fib_flags |= RTNH_F_EXTERNAL;
-	}
+	err = switchdev_port_obj_add(dev, &fib_obj);
+	if (!err)
+		fi->fib_flags |= RTNH_F_EXTERNAL;
 
 	return err;
 }
-EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_add);
+EXPORT_SYMBOL_GPL(switchdev_fib_ipv4_add);
 
 /**
- * netdev_switch_fib_ipv4_del - Delete IPv4 route entry from switch
+ * switchdev_fib_ipv4_del - Delete IPv4 route entry from switch
  *
  * @dst: route's IPv4 destination address
  * @dst_len: destination address length (prefix length)
@@ -357,38 +692,45 @@ EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_add);
  *
  * Delete IPv4 route entry from switch device.
  */
-int netdev_switch_fib_ipv4_del(u32 dst, int dst_len, struct fib_info *fi,
-			       u8 tos, u8 type, u32 tb_id)
+int switchdev_fib_ipv4_del(u32 dst, int dst_len, struct fib_info *fi,
+			   u8 tos, u8 type, u32 tb_id)
 {
+	struct switchdev_obj fib_obj = {
+		.id = SWITCHDEV_OBJ_IPV4_FIB,
+		.ipv4_fib = {
+			.dst = htonl(dst),
+			.dst_len = dst_len,
+			.fi = fi,
+			.tos = tos,
+			.type = type,
+			.nlflags = 0,
+			.tb_id = tb_id,
+		},
+	};
 	struct net_device *dev;
-	const struct swdev_ops *ops;
 	int err = 0;
 
 	if (!(fi->fib_flags & RTNH_F_EXTERNAL))
 		return 0;
 
-	dev = netdev_switch_get_dev_by_nhs(fi);
+	dev = switchdev_get_dev_by_nhs(fi);
 	if (!dev)
 		return 0;
-	ops = dev->swdev_ops;
 
-	if (ops->swdev_fib_ipv4_del) {
-		err = ops->swdev_fib_ipv4_del(dev, htonl(dst), dst_len,
-					      fi, tos, type, tb_id);
-		if (!err)
-			fi->fib_flags &= ~RTNH_F_EXTERNAL;
-	}
+	err = switchdev_port_obj_del(dev, &fib_obj);
+	if (!err)
+		fi->fib_flags &= ~RTNH_F_EXTERNAL;
 
 	return err;
 }
-EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_del);
+EXPORT_SYMBOL_GPL(switchdev_fib_ipv4_del);
 
 /**
- * netdev_switch_fib_ipv4_abort - Abort an IPv4 FIB operation
+ * switchdev_fib_ipv4_abort - Abort an IPv4 FIB operation
  *
  * @fi: route FIB info structure
  */
-void netdev_switch_fib_ipv4_abort(struct fib_info *fi)
+void switchdev_fib_ipv4_abort(struct fib_info *fi)
 {
 	/* There was a problem installing this route to the offload
 	 * device. For now, until we come up with more refined
@@ -401,4 +743,4 @@ void netdev_switch_fib_ipv4_abort(struct fib_info *fi)
 	fib_flush_external(fi->fib_net);
 	fi->fib_net->ipv4.fib_offload_disabled = true;
 }
-EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_abort);
+EXPORT_SYMBOL_GPL(switchdev_fib_ipv4_abort);
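The add/del pair above shares one pattern: describe the FIB entry as a single self-describing object, hand it to the port driver, and toggle RTNH_F_EXTERNAL only on success so delete can skip routes that were never offloaded. A compilable toy model of that flow (all names are illustrative, not the kernel's):

```c
#include <assert.h>

/* Toy model of the switchdev_fib_ipv4_add()/_del() flow. */
#define TOY_RTNH_F_EXTERNAL 0x01u

struct toy_fib_obj {
	unsigned int dst;
	int dst_len;
	unsigned int flags; /* stands in for fi->fib_flags */
};

/* Pretend port driver: accepts any valid IPv4 prefix length. */
static int toy_driver_obj_add(const struct toy_fib_obj *obj)
{
	return (obj->dst_len >= 0 && obj->dst_len <= 32) ? 0 : -1;
}

static int toy_fib_add(struct toy_fib_obj *obj)
{
	int err = toy_driver_obj_add(obj);

	if (!err)
		obj->flags |= TOY_RTNH_F_EXTERNAL; /* mark as offloaded */
	return err;
}

static int toy_fib_del(struct toy_fib_obj *obj)
{
	/* Mirror of switchdev_fib_ipv4_del(): routes never offloaded
	 * are skipped; success clears the offload mark. */
	if (!(obj->flags & TOY_RTNH_F_EXTERNAL))
		return 0;
	obj->flags &= ~TOY_RTNH_F_EXTERNAL;
	return 0;
}

/* Runs add then del; returns 0 when the mark was set by add and
 * cleared again by del, negative on any unexpected step. */
static int toy_demo(void)
{
	struct toy_fib_obj obj = { .dst = 0x0a000000u, .dst_len = 24 };

	if (toy_fib_add(&obj))
		return -1;
	if (!(obj.flags & TOY_RTNH_F_EXTERNAL))
		return -2;
	if (toy_fib_del(&obj))
		return -3;
	return (int)obj.flags;
}
```

The flag-gated delete is what makes switchdev_fib_ipv4_abort() safe: after a flush, no entry carries the mark, so subsequent deletes become no-ops.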