author	Linus Torvalds <torvalds@linux-foundation.org>	2016-06-29 14:50:42 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-06-29 14:50:42 -0400
commit	32826ac41f2170df0d9a2e8df5a9b570c7858ccf (patch)
tree	f7b3b971a078d1d3742a2d21e67d530fd3f66640 /net
parent	653c574a7df4760442162272169834d63c33f298 (diff)
parent	751ad819b0bf443ad8963eb7bfbd533e6a463973 (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
 "I've been traveling so this accumulates more than a week or so of bug
  fixing. It perhaps looks a little worse than it really is.

   1) Fix deadlock in ath10k driver, from Ben Greear.

   2) Increase scan timeout in iwlwifi, from Luca Coelho.

   3) Unbreak STP by properly reinjecting STP packets back into the
      stack. Regression fix from Ido Schimmel.

   4) Mediatek driver fixes (missing malloc failure checks, leaking of
      scratch memory, wrong indexing when mapping TX buffers, etc.) from
      John Crispin.

   5) Fix endianness bug in icmpv6_err() handler, from Hannes Frederic
      Sowa.

   6) Fix hashing of flows in UDP in the reuseport case, from Xuemin Su.

   7) Fix netlink notifications in ovs for tunnels, delete link messages
      are never emitted because of how the device registry state is
      handled. From Nicolas Dichtel.

   8) Conntrack module leaks kmemcache on unload, from Florian Westphal.

   9) Prevent endless jump loops in nft rules, from Liping Zhang and
      Pablo Neira Ayuso.

  10) Not early enough spinlock initialization in mlx4, from Eric
      Dumazet.

  11) Bind refcount leak in act_ipt, from Cong WANG.

  12) Missing RCU locking in HTB scheduler, from Florian Westphal.

  13) Several small MACSEC bug fixes from Sabrina Dubroca (missing RCU
      barrier, using heap for SG and IV, and erroneous use of async flag
      when allocating AEAD context).

  14) RCU handling fix in TIPC, from Ying Xue.

  15) Pass correct protocol down into ipv4_{update_pmtu,redirect}() in
      SIT driver, from Simon Horman.

  16) Socket timer deadlock fix in TIPC from Jon Paul Maloy.

  17) Fix potential deadlock in team enslave, from Ido Schimmel.

  18) Memory leak in KCM procfs handling, from Jiri Slaby.

  19) ESN generation fix in ipv4 ESP, from Herbert Xu.

  20) Fix GFP_KERNEL allocations with locks held in act_ife, from Cong
      WANG.

  21) Use after free in netem, from Eric Dumazet.

  22) Uninitialized last assert time in multicast router code, from Tom
      Goff.

  23) Skip raw sockets in sock_diag destruction broadcast, from Willem
      de Bruijn.

  24) Fix link status reporting in thunderx, from Sunil Goutham.

  25) Limit resegmentation of retransmit queue so that we do not
      retransmit too large GSO frames. From Eric Dumazet.

  26) Delay bpf program release after grace period, from Daniel
      Borkmann"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (141 commits)
  openvswitch: fix conntrack netlink event delivery
  qed: Protect the doorbell BAR with the write barriers.
  neigh: Explicitly declare RCU-bh read side critical section in neigh_xmit()
  e1000e: keep VLAN interfaces functional after rxvlan off
  cfg80211: fix proto in ieee80211_data_to_8023 for frames without LLC header
  qlcnic: use the correct ring in qlcnic_83xx_process_rcv_ring_diag()
  bpf, perf: delay release of BPF prog after grace period
  net: bridge: fix vlan stats continue counter
  tcp: do not send too big packets at retransmit time
  ibmvnic: fix to use list_for_each_safe() when delete items
  net: thunderx: Fix TL4 configuration for secondary Qsets
  net: thunderx: Fix link status reporting
  net/mlx5e: Reorganize ethtool statistics
  net/mlx5e: Fix number of PFC counters reported to ethtool
  net/mlx5e: Prevent adding the same vxlan port
  net/mlx5e: Check for BlueFlame capability before allocating SQ uar
  net/mlx5e: Change enum to better reflect usage
  net/mlx5: Add ConnectX-5 PCIe 4.0 to list of supported devices
  net/mlx5: Update command strings
  net: marvell: Add separate config ANEG function for Marvell 88E1111
  ...
Diffstat (limited to 'net')
-rw-r--r--  net/ax25/af_ax25.c  3
-rw-r--r--  net/ax25/ax25_ds_timer.c  5
-rw-r--r--  net/ax25/ax25_std_timer.c  5
-rw-r--r--  net/ax25/ax25_subr.c  3
-rw-r--r--  net/batman-adv/routing.c  1
-rw-r--r--  net/batman-adv/soft-interface.c  9
-rw-r--r--  net/batman-adv/translation-table.c  50
-rw-r--r--  net/batman-adv/types.h  2
-rw-r--r--  net/bridge/br_input.c  15
-rw-r--r--  net/bridge/br_multicast.c  4
-rw-r--r--  net/bridge/br_netlink.c  2
-rw-r--r--  net/bridge/br_private.h  23
-rw-r--r--  net/core/filter.c  16
-rw-r--r--  net/core/neighbour.c  6
-rw-r--r--  net/ipv4/esp4.c  52
-rw-r--r--  net/ipv4/gre_demux.c  10
-rw-r--r--  net/ipv4/ip_gre.c  26
-rw-r--r--  net/ipv4/ipconfig.c  4
-rw-r--r--  net/ipv4/ipmr.c  4
-rw-r--r--  net/ipv4/tcp_output.c  7
-rw-r--r--  net/ipv4/udp.c  80
-rw-r--r--  net/ipv6/icmp.c  2
-rw-r--r--  net/ipv6/ip6_checksum.c  7
-rw-r--r--  net/ipv6/ip6_gre.c  2
-rw-r--r--  net/ipv6/ip6mr.c  1
-rw-r--r--  net/ipv6/route.c  2
-rw-r--r--  net/ipv6/sit.c  4
-rw-r--r--  net/ipv6/tcp_ipv6.c  4
-rw-r--r--  net/ipv6/udp.c  71
-rw-r--r--  net/kcm/kcmproc.c  1
-rw-r--r--  net/mac80211/mesh.c  7
-rw-r--r--  net/netfilter/nf_conntrack_core.c  2
-rw-r--r--  net/netfilter/nf_tables_api.c  24
-rw-r--r--  net/netfilter/nf_tables_core.c  2
-rw-r--r--  net/netfilter/nft_hash.c  3
-rw-r--r--  net/netfilter/nft_rbtree.c  3
-rw-r--r--  net/openvswitch/conntrack.c  14
-rw-r--r--  net/rds/ib_cm.c  2
-rw-r--r--  net/rds/loop.c  5
-rw-r--r--  net/rds/sysctl.c  3
-rw-r--r--  net/rds/tcp.h  2
-rw-r--r--  net/rds/tcp_connect.c  26
-rw-r--r--  net/rds/tcp_listen.c  2
-rw-r--r--  net/rds/tcp_recv.c  2
-rw-r--r--  net/rds/tcp_send.c  14
-rw-r--r--  net/rds/transport.c  3
-rw-r--r--  net/sched/act_api.c  2
-rw-r--r--  net/sched/act_ife.c  53
-rw-r--r--  net/sched/act_ipt.c  7
-rw-r--r--  net/sched/sch_fifo.c  4
-rw-r--r--  net/sched/sch_htb.c  2
-rw-r--r--  net/sched/sch_netem.c  12
-rw-r--r--  net/sched/sch_prio.c  67
-rw-r--r--  net/sctp/sctp_diag.c  6
-rw-r--r--  net/tipc/bearer.c  2
-rw-r--r--  net/tipc/link.c  3
-rw-r--r--  net/tipc/msg.c  6
-rw-r--r--  net/tipc/msg.h  11
-rw-r--r--  net/tipc/socket.c  54
-rw-r--r--  net/vmw_vsock/af_vsock.c  12
-rw-r--r--  net/wireless/util.c  2
61 files changed, 433 insertions, 345 deletions
diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
index fbd0acf80b13..2fdebabbfacd 100644
--- a/net/ax25/af_ax25.c
+++ b/net/ax25/af_ax25.c
@@ -976,7 +976,8 @@ static int ax25_release(struct socket *sock)
 		release_sock(sk);
 		ax25_disconnect(ax25, 0);
 		lock_sock(sk);
-		ax25_destroy_socket(ax25);
+		if (!sock_flag(ax25->sk, SOCK_DESTROY))
+			ax25_destroy_socket(ax25);
 		break;
 
 	case AX25_STATE_3:
diff --git a/net/ax25/ax25_ds_timer.c b/net/ax25/ax25_ds_timer.c
index 951cd57bb07d..5237dff6941d 100644
--- a/net/ax25/ax25_ds_timer.c
+++ b/net/ax25/ax25_ds_timer.c
@@ -102,6 +102,7 @@ void ax25_ds_heartbeat_expiry(ax25_cb *ax25)
 	switch (ax25->state) {
 
 	case AX25_STATE_0:
+	case AX25_STATE_2:
 		/* Magic here: If we listen() and a new link dies before it
 		   is accepted() it isn't 'dead' so doesn't get removed. */
 		if (!sk || sock_flag(sk, SOCK_DESTROY) ||
@@ -111,6 +112,7 @@ void ax25_ds_heartbeat_expiry(ax25_cb *ax25)
 			sock_hold(sk);
 			ax25_destroy_socket(ax25);
 			bh_unlock_sock(sk);
+			/* Ungrab socket and destroy it */
 			sock_put(sk);
 		} else
 			ax25_destroy_socket(ax25);
@@ -213,7 +215,8 @@ void ax25_ds_t1_timeout(ax25_cb *ax25)
 	case AX25_STATE_2:
 		if (ax25->n2count == ax25->n2) {
 			ax25_send_control(ax25, AX25_DISC, AX25_POLLON, AX25_COMMAND);
-			ax25_disconnect(ax25, ETIMEDOUT);
+			if (!sock_flag(ax25->sk, SOCK_DESTROY))
+				ax25_disconnect(ax25, ETIMEDOUT);
 			return;
 		} else {
 			ax25->n2count++;
diff --git a/net/ax25/ax25_std_timer.c b/net/ax25/ax25_std_timer.c
index 004467c9e6e1..2c0d6ef66f9d 100644
--- a/net/ax25/ax25_std_timer.c
+++ b/net/ax25/ax25_std_timer.c
@@ -38,6 +38,7 @@ void ax25_std_heartbeat_expiry(ax25_cb *ax25)
 
 	switch (ax25->state) {
 	case AX25_STATE_0:
+	case AX25_STATE_2:
 		/* Magic here: If we listen() and a new link dies before it
 		   is accepted() it isn't 'dead' so doesn't get removed. */
 		if (!sk || sock_flag(sk, SOCK_DESTROY) ||
@@ -47,6 +48,7 @@ void ax25_std_heartbeat_expiry(ax25_cb *ax25)
 			sock_hold(sk);
 			ax25_destroy_socket(ax25);
 			bh_unlock_sock(sk);
+			/* Ungrab socket and destroy it */
 			sock_put(sk);
 		} else
 			ax25_destroy_socket(ax25);
@@ -144,7 +146,8 @@ void ax25_std_t1timer_expiry(ax25_cb *ax25)
 	case AX25_STATE_2:
 		if (ax25->n2count == ax25->n2) {
 			ax25_send_control(ax25, AX25_DISC, AX25_POLLON, AX25_COMMAND);
-			ax25_disconnect(ax25, ETIMEDOUT);
+			if (!sock_flag(ax25->sk, SOCK_DESTROY))
+				ax25_disconnect(ax25, ETIMEDOUT);
 			return;
 		} else {
 			ax25->n2count++;
diff --git a/net/ax25/ax25_subr.c b/net/ax25/ax25_subr.c
index 3b78e8473a01..655a7d4c96e1 100644
--- a/net/ax25/ax25_subr.c
+++ b/net/ax25/ax25_subr.c
@@ -264,7 +264,8 @@ void ax25_disconnect(ax25_cb *ax25, int reason)
 {
 	ax25_clear_queues(ax25);
 
-	ax25_stop_heartbeat(ax25);
+	if (!sock_flag(ax25->sk, SOCK_DESTROY))
+		ax25_stop_heartbeat(ax25);
 	ax25_stop_t1timer(ax25);
 	ax25_stop_t2timer(ax25);
 	ax25_stop_t3timer(ax25);
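All four ax25 hunks above add the same guard: once the release path has flagged a socket with SOCK_DESTROY, the timer and disconnect paths must not start a second teardown of the same connection. sock_flag() tests an atomic bit in sk->sk_flags. Reduced to its essentials, the handshake looks roughly like this (illustrative sketch, not the verbatim ax25 code):

	/* release path: mark the socket as being torn down */
	sock_set_flag(sk, SOCK_DESTROY);

	/* timer/disconnect paths: back off if release already owns teardown */
	if (!sock_flag(ax25->sk, SOCK_DESTROY))
		ax25_disconnect(ax25, ETIMEDOUT);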
diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c
index e3857ed4057f..6c2901a86230 100644
--- a/net/batman-adv/routing.c
+++ b/net/batman-adv/routing.c
@@ -374,6 +374,7 @@ int batadv_recv_icmp_packet(struct sk_buff *skb,
 	if (skb_cow(skb, ETH_HLEN) < 0)
 		goto out;
 
+	ethhdr = eth_hdr(skb);
 	icmph = (struct batadv_icmp_header *)skb->data;
 	icmp_packet_rr = (struct batadv_icmp_packet_rr *)icmph;
 	if (icmp_packet_rr->rr_cur >= BATADV_RR_LEN)
diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
index 343d2c904399..287a3879ed7e 100644
--- a/net/batman-adv/soft-interface.c
+++ b/net/batman-adv/soft-interface.c
@@ -1033,7 +1033,9 @@ void batadv_softif_destroy_sysfs(struct net_device *soft_iface)
 static void batadv_softif_destroy_netlink(struct net_device *soft_iface,
 					  struct list_head *head)
 {
+	struct batadv_priv *bat_priv = netdev_priv(soft_iface);
 	struct batadv_hard_iface *hard_iface;
+	struct batadv_softif_vlan *vlan;
 
 	list_for_each_entry(hard_iface, &batadv_hardif_list, list) {
 		if (hard_iface->soft_iface == soft_iface)
@@ -1041,6 +1043,13 @@ static void batadv_softif_destroy_netlink(struct net_device *soft_iface,
 							BATADV_IF_CLEANUP_KEEP);
 	}
 
+	/* destroy the "untagged" VLAN */
+	vlan = batadv_softif_vlan_get(bat_priv, BATADV_NO_FLAGS);
+	if (vlan) {
+		batadv_softif_destroy_vlan(bat_priv, vlan);
+		batadv_softif_vlan_put(vlan);
+	}
+
 	batadv_sysfs_del_meshif(soft_iface);
 	unregister_netdevice_queue(soft_iface, head);
 }
diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
index feaf492b01ca..57ec87f37050 100644
--- a/net/batman-adv/translation-table.c
+++ b/net/batman-adv/translation-table.c
@@ -650,8 +650,10 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const u8 *addr,
 
 	/* increase the refcounter of the related vlan */
 	vlan = batadv_softif_vlan_get(bat_priv, vid);
-	if (WARN(!vlan, "adding TT local entry %pM to non-existent VLAN %d",
-		 addr, BATADV_PRINT_VID(vid))) {
+	if (!vlan) {
+		net_ratelimited_function(batadv_info, soft_iface,
+					 "adding TT local entry %pM to non-existent VLAN %d\n",
+					 addr, BATADV_PRINT_VID(vid));
 		kfree(tt_local);
 		tt_local = NULL;
 		goto out;
@@ -691,7 +693,6 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const u8 *addr,
 	if (unlikely(hash_added != 0)) {
 		/* remove the reference for the hash */
 		batadv_tt_local_entry_put(tt_local);
-		batadv_softif_vlan_put(vlan);
 		goto out;
 	}
 
@@ -2269,6 +2270,29 @@ static u32 batadv_tt_local_crc(struct batadv_priv *bat_priv,
 	return crc;
 }
 
+/**
+ * batadv_tt_req_node_release - free tt_req node entry
+ * @ref: kref pointer of the tt req_node entry
+ */
+static void batadv_tt_req_node_release(struct kref *ref)
+{
+	struct batadv_tt_req_node *tt_req_node;
+
+	tt_req_node = container_of(ref, struct batadv_tt_req_node, refcount);
+
+	kfree(tt_req_node);
+}
+
+/**
+ * batadv_tt_req_node_put - decrement the tt_req_node refcounter and
+ *  possibly release it
+ * @tt_req_node: tt_req_node to be free'd
+ */
+static void batadv_tt_req_node_put(struct batadv_tt_req_node *tt_req_node)
+{
+	kref_put(&tt_req_node->refcount, batadv_tt_req_node_release);
+}
+
 static void batadv_tt_req_list_free(struct batadv_priv *bat_priv)
 {
 	struct batadv_tt_req_node *node;
@@ -2278,7 +2302,7 @@ static void batadv_tt_req_list_free(struct batadv_priv *bat_priv)
 
 	hlist_for_each_entry_safe(node, safe, &bat_priv->tt.req_list, list) {
 		hlist_del_init(&node->list);
-		kfree(node);
+		batadv_tt_req_node_put(node);
 	}
 
 	spin_unlock_bh(&bat_priv->tt.req_list_lock);
@@ -2315,7 +2339,7 @@ static void batadv_tt_req_purge(struct batadv_priv *bat_priv)
 		if (batadv_has_timed_out(node->issued_at,
 					 BATADV_TT_REQUEST_TIMEOUT)) {
 			hlist_del_init(&node->list);
-			kfree(node);
+			batadv_tt_req_node_put(node);
 		}
 	}
 	spin_unlock_bh(&bat_priv->tt.req_list_lock);
@@ -2347,9 +2371,11 @@ batadv_tt_req_node_new(struct batadv_priv *bat_priv,
 	if (!tt_req_node)
 		goto unlock;
 
+	kref_init(&tt_req_node->refcount);
 	ether_addr_copy(tt_req_node->addr, orig_node->orig);
 	tt_req_node->issued_at = jiffies;
 
+	kref_get(&tt_req_node->refcount);
 	hlist_add_head(&tt_req_node->list, &bat_priv->tt.req_list);
 unlock:
 	spin_unlock_bh(&bat_priv->tt.req_list_lock);
@@ -2613,13 +2639,19 @@ static bool batadv_send_tt_request(struct batadv_priv *bat_priv,
 out:
 	if (primary_if)
 		batadv_hardif_put(primary_if);
+
 	if (ret && tt_req_node) {
 		spin_lock_bh(&bat_priv->tt.req_list_lock);
-		/* hlist_del_init() verifies tt_req_node still is in the list */
-		hlist_del_init(&tt_req_node->list);
+		if (!hlist_unhashed(&tt_req_node->list)) {
+			hlist_del_init(&tt_req_node->list);
+			batadv_tt_req_node_put(tt_req_node);
+		}
 		spin_unlock_bh(&bat_priv->tt.req_list_lock);
-		kfree(tt_req_node);
 	}
+
+	if (tt_req_node)
+		batadv_tt_req_node_put(tt_req_node);
+
 	kfree(tvlv_tt_data);
 	return ret;
 }
@@ -3055,7 +3087,7 @@ static void batadv_handle_tt_response(struct batadv_priv *bat_priv,
 		if (!batadv_compare_eth(node->addr, resp_src))
 			continue;
 		hlist_del_init(&node->list);
-		kfree(node);
+		batadv_tt_req_node_put(node);
 	}
 
 	spin_unlock_bh(&bat_priv->tt.req_list_lock);
diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
index 6a577f4f8ba7..ba846b078af8 100644
--- a/net/batman-adv/types.h
+++ b/net/batman-adv/types.h
@@ -1137,11 +1137,13 @@ struct batadv_tt_change_node {
  * struct batadv_tt_req_node - data to keep track of the tt requests in flight
  * @addr: mac address address of the originator this request was sent to
  * @issued_at: timestamp used for purging stale tt requests
+ * @refcount: number of contexts the object is used by
  * @list: list node for batadv_priv_tt::req_list
  */
 struct batadv_tt_req_node {
 	u8 addr[ETH_ALEN];
 	unsigned long issued_at;
+	struct kref refcount;
 	struct hlist_node list;
 };
 
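The translation-table and types.h changes convert batadv_tt_req_node from bare kfree() lifetime management to kref reference counting, closing a race where the response handler could free a node the request path was still using. The ownership rule is the subtle part: kref_init() establishes the allocating caller's reference, and the extra kref_get() covers the reference held by tt.req_list, so the object survives until both the unlink path and the caller have dropped theirs. A condensed kernel-style sketch of the pattern (illustrative, not the verbatim batman-adv source):

	#include <linux/kref.h>
	#include <linux/list.h>
	#include <linux/slab.h>

	struct req_node {
		struct kref refcount;
		struct hlist_node list;
	};

	static void req_node_release(struct kref *ref)
	{
		kfree(container_of(ref, struct req_node, refcount));
	}

	/* allocate with two references: one for the caller, one for the list */
	static struct req_node *req_node_new(struct hlist_head *head)
	{
		struct req_node *n = kzalloc(sizeof(*n), GFP_ATOMIC);

		if (!n)
			return NULL;
		kref_init(&n->refcount);	/* caller's reference */
		kref_get(&n->refcount);		/* list's reference */
		hlist_add_head(&n->list, head);
		return n;
	}

	/* every unlink drops the list's reference; the last put frees */
	static void req_node_unlink(struct req_node *n)
	{
		hlist_del_init(&n->list);
		kref_put(&n->refcount, req_node_release);
	}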
diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
index 160797722228..43d2cd862bc2 100644
--- a/net/bridge/br_input.c
+++ b/net/bridge/br_input.c
@@ -213,8 +213,7 @@ drop:
 }
 EXPORT_SYMBOL_GPL(br_handle_frame_finish);
 
-/* note: already called with rcu_read_lock */
-static int br_handle_local_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+static void __br_handle_local_finish(struct sk_buff *skb)
 {
 	struct net_bridge_port *p = br_port_get_rcu(skb->dev);
 	u16 vid = 0;
@@ -222,6 +221,14 @@ static int br_handle_local_finish(struct net *net, struct sock *sk, struct sk_bu
 	/* check if vlan is allowed, to avoid spoofing */
 	if (p->flags & BR_LEARNING && br_should_learn(p, skb, &vid))
 		br_fdb_update(p->br, p, eth_hdr(skb)->h_source, vid, false);
+}
+
+/* note: already called with rcu_read_lock */
+static int br_handle_local_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+{
+	struct net_bridge_port *p = br_port_get_rcu(skb->dev);
+
+	__br_handle_local_finish(skb);
 
 	BR_INPUT_SKB_CB(skb)->brdev = p->br->dev;
 	br_pass_frame_up(skb);
@@ -274,7 +281,9 @@ rx_handler_result_t br_handle_frame(struct sk_buff **pskb)
 		if (p->br->stp_enabled == BR_NO_STP ||
 		    fwd_mask & (1u << dest[5]))
 			goto forward;
-		break;
+		*pskb = skb;
+		__br_handle_local_finish(skb);
+		return RX_HANDLER_PASS;
 
 	case 0x01:	/* IEEE MAC (Pause) */
 		goto drop;
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index 6852f3c7009c..43844144c9c4 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -464,8 +464,11 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br,
 	if (ipv6_dev_get_saddr(dev_net(br->dev), br->dev, &ip6h->daddr, 0,
 			       &ip6h->saddr)) {
 		kfree_skb(skb);
+		br->has_ipv6_addr = 0;
 		return NULL;
 	}
+
+	br->has_ipv6_addr = 1;
 	ipv6_eth_mc_map(&ip6h->daddr, eth->h_dest);
 
 	hopopt = (u8 *)(ip6h + 1);
@@ -1745,6 +1748,7 @@ void br_multicast_init(struct net_bridge *br)
 	br->ip6_other_query.delay_time = 0;
 	br->ip6_querier.port = NULL;
 #endif
+	br->has_ipv6_addr = 1;
 
 	spin_lock_init(&br->multicast_lock);
 	setup_timer(&br->multicast_router_timer,
diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
index a5343c7232bf..85e89f693589 100644
--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -1273,7 +1273,7 @@ static int br_fill_linkxstats(struct sk_buff *skb, const struct net_device *dev,
 		struct bridge_vlan_xstats vxi;
 		struct br_vlan_stats stats;
 
-		if (vl_idx++ < *prividx)
+		if (++vl_idx < *prividx)
 			continue;
 		memset(&vxi, 0, sizeof(vxi));
 		vxi.vid = v->vid;
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index c7fb5d7a7218..52edecf3c294 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -314,6 +314,7 @@ struct net_bridge
 	u8 multicast_disabled:1;
 	u8 multicast_querier:1;
 	u8 multicast_query_use_ifaddr:1;
+	u8 has_ipv6_addr:1;
 
 	u32 hash_elasticity;
 	u32 hash_max;
@@ -588,10 +589,22 @@ static inline bool br_multicast_is_router(struct net_bridge *br)
 
 static inline bool
 __br_multicast_querier_exists(struct net_bridge *br,
-			      struct bridge_mcast_other_query *querier)
+			      struct bridge_mcast_other_query *querier,
+			      const bool is_ipv6)
 {
+	bool own_querier_enabled;
+
+	if (br->multicast_querier) {
+		if (is_ipv6 && !br->has_ipv6_addr)
+			own_querier_enabled = false;
+		else
+			own_querier_enabled = true;
+	} else {
+		own_querier_enabled = false;
+	}
+
 	return time_is_before_jiffies(querier->delay_time) &&
-	       (br->multicast_querier || timer_pending(&querier->timer));
+	       (own_querier_enabled || timer_pending(&querier->timer));
 }
 
 static inline bool br_multicast_querier_exists(struct net_bridge *br,
@@ -599,10 +612,12 @@ static inline bool br_multicast_querier_exists(struct net_bridge *br,
 {
 	switch (eth->h_proto) {
 	case (htons(ETH_P_IP)):
-		return __br_multicast_querier_exists(br, &br->ip4_other_query);
+		return __br_multicast_querier_exists(br,
+			&br->ip4_other_query, false);
 #if IS_ENABLED(CONFIG_IPV6)
 	case (htons(ETH_P_IPV6)):
-		return __br_multicast_querier_exists(br, &br->ip6_other_query);
+		return __br_multicast_querier_exists(br,
+			&br->ip6_other_query, true);
 #endif
 	default:
 		return false;
diff --git a/net/core/filter.c b/net/core/filter.c
index 68adb5f52110..c4b330c85c02 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2085,7 +2085,8 @@ static bool __is_valid_access(int off, int size, enum bpf_access_type type)
 }
 
 static bool sk_filter_is_valid_access(int off, int size,
-				      enum bpf_access_type type)
+				      enum bpf_access_type type,
+				      enum bpf_reg_type *reg_type)
 {
 	switch (off) {
 	case offsetof(struct __sk_buff, tc_classid):
@@ -2108,7 +2109,8 @@ static bool sk_filter_is_valid_access(int off, int size,
 }
 
 static bool tc_cls_act_is_valid_access(int off, int size,
-				       enum bpf_access_type type)
+				       enum bpf_access_type type,
+				       enum bpf_reg_type *reg_type)
 {
 	if (type == BPF_WRITE) {
 		switch (off) {
@@ -2123,6 +2125,16 @@ static bool tc_cls_act_is_valid_access(int off, int size,
 			return false;
 		}
 	}
+
+	switch (off) {
+	case offsetof(struct __sk_buff, data):
+		*reg_type = PTR_TO_PACKET;
+		break;
+	case offsetof(struct __sk_buff, data_end):
+		*reg_type = PTR_TO_PACKET_END;
+		break;
+	}
+
 	return __is_valid_access(off, size, type);
 }
 
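These verifier hooks let tc programs treat skb->data and skb->data_end as typed packet pointers (PTR_TO_PACKET / PTR_TO_PACKET_END). On the program side, the verifier then only accepts a packet load that is preceded by an explicit bounds check against data_end, roughly like this (sketch of a restricted-C classifier for the clang BPF target; the hard-coded protocol constant assumes a little-endian host):

	#include <linux/bpf.h>
	#include <linux/if_ether.h>

	/* minimal direct-packet-access classifier */
	int cls_main(struct __sk_buff *skb)
	{
		void *data = (void *)(long)skb->data;
		void *data_end = (void *)(long)skb->data_end;
		struct ethhdr *eth = data;

		/* the verifier rejects the h_proto load without this check */
		if (data + sizeof(*eth) > data_end)
			return 0;

		/* 0x0008 == htons(ETH_P_IP) on a little-endian host */
		return eth->h_proto == 0x0008;
	}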
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 29dd8cc22bbf..510cd62fcb99 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -2469,13 +2469,17 @@ int neigh_xmit(int index, struct net_device *dev,
 		tbl = neigh_tables[index];
 		if (!tbl)
 			goto out;
+		rcu_read_lock_bh();
 		neigh = __neigh_lookup_noref(tbl, addr, dev);
 		if (!neigh)
 			neigh = __neigh_create(tbl, addr, dev, false);
 		err = PTR_ERR(neigh);
-		if (IS_ERR(neigh))
+		if (IS_ERR(neigh)) {
+			rcu_read_unlock_bh();
 			goto out_kfree_skb;
+		}
 		err = neigh->output(neigh, skb);
+		rcu_read_unlock_bh();
 	}
 	else if (index == NEIGH_LINK_TABLE) {
 		err = dev_hard_header(skb, dev, ntohs(skb->protocol),
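The neighbour.c hunk makes the RCU-bh read-side critical section explicit: __neigh_lookup_noref() returns a neighbour pointer without taking a reference, so the pointer is only guaranteed to stay alive between rcu_read_lock_bh() and rcu_read_unlock_bh(). The general shape of the pattern, reduced to a toy example (kernel-style sketch against the 4.7-era RCU API; the cfg type and helpers are invented for illustration):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct cfg {
		int val;
	};

	static struct cfg __rcu *active_cfg;

	/* reader: the pointer is only valid inside the critical section */
	static int cfg_read(void)
	{
		struct cfg *c;
		int val = -1;

		rcu_read_lock_bh();
		c = rcu_dereference_bh(active_cfg);
		if (c)
			val = c->val;
		rcu_read_unlock_bh();
		return val;
	}

	/* updater: publish, wait out all bh readers, then free the old copy */
	static void cfg_update(struct cfg *new_cfg)
	{
		struct cfg *old = rcu_dereference_protected(active_cfg, 1);

		rcu_assign_pointer(active_cfg, new_cfg);
		synchronize_rcu_bh();
		kfree(old);
	}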
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 477937465a20..d95631d09248 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -23,6 +23,11 @@ struct esp_skb_cb {
 	void *tmp;
 };
 
+struct esp_output_extra {
+	__be32 seqhi;
+	u32 esphoff;
+};
+
 #define ESP_SKB_CB(__skb) ((struct esp_skb_cb *)&((__skb)->cb[0]))
 
 static u32 esp4_get_mtu(struct xfrm_state *x, int mtu);
@@ -35,11 +40,11 @@ static u32 esp4_get_mtu(struct xfrm_state *x, int mtu);
  *
  * TODO: Use spare space in skb for this where possible.
  */
-static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqhilen)
+static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int extralen)
 {
 	unsigned int len;
 
-	len = seqhilen;
+	len = extralen;
 
 	len += crypto_aead_ivsize(aead);
 
@@ -57,15 +62,16 @@ static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqhilen)
 	return kmalloc(len, GFP_ATOMIC);
 }
 
-static inline __be32 *esp_tmp_seqhi(void *tmp)
+static inline void *esp_tmp_extra(void *tmp)
 {
-	return PTR_ALIGN((__be32 *)tmp, __alignof__(__be32));
+	return PTR_ALIGN(tmp, __alignof__(struct esp_output_extra));
 }
-static inline u8 *esp_tmp_iv(struct crypto_aead *aead, void *tmp, int seqhilen)
+
+static inline u8 *esp_tmp_iv(struct crypto_aead *aead, void *tmp, int extralen)
 {
 	return crypto_aead_ivsize(aead) ?
-	       PTR_ALIGN((u8 *)tmp + seqhilen,
-			 crypto_aead_alignmask(aead) + 1) : tmp + seqhilen;
+	       PTR_ALIGN((u8 *)tmp + extralen,
+			 crypto_aead_alignmask(aead) + 1) : tmp + extralen;
 }
 
 static inline struct aead_request *esp_tmp_req(struct crypto_aead *aead, u8 *iv)
@@ -99,7 +105,7 @@ static void esp_restore_header(struct sk_buff *skb, unsigned int offset)
 {
 	struct ip_esp_hdr *esph = (void *)(skb->data + offset);
 	void *tmp = ESP_SKB_CB(skb)->tmp;
-	__be32 *seqhi = esp_tmp_seqhi(tmp);
+	__be32 *seqhi = esp_tmp_extra(tmp);
 
 	esph->seq_no = esph->spi;
 	esph->spi = *seqhi;
@@ -107,7 +113,11 @@ static void esp_restore_header(struct sk_buff *skb, unsigned int offset)
 
 static void esp_output_restore_header(struct sk_buff *skb)
 {
-	esp_restore_header(skb, skb_transport_offset(skb) - sizeof(__be32));
+	void *tmp = ESP_SKB_CB(skb)->tmp;
+	struct esp_output_extra *extra = esp_tmp_extra(tmp);
+
+	esp_restore_header(skb, skb_transport_offset(skb) + extra->esphoff -
+				sizeof(__be32));
 }
 
 static void esp_output_done_esn(struct crypto_async_request *base, int err)
@@ -121,6 +131,7 @@ static void esp_output_done_esn(struct crypto_async_request *base, int err)
 static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
 {
 	int err;
+	struct esp_output_extra *extra;
 	struct ip_esp_hdr *esph;
 	struct crypto_aead *aead;
 	struct aead_request *req;
@@ -137,8 +148,7 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
 	int tfclen;
 	int nfrags;
 	int assoclen;
-	int seqhilen;
-	__be32 *seqhi;
+	int extralen;
 	__be64 seqno;
 
 	/* skb is pure payload to encrypt */
@@ -166,21 +176,21 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
 	nfrags = err;
 
 	assoclen = sizeof(*esph);
-	seqhilen = 0;
+	extralen = 0;
 
 	if (x->props.flags & XFRM_STATE_ESN) {
-		seqhilen += sizeof(__be32);
-		assoclen += seqhilen;
+		extralen += sizeof(*extra);
+		assoclen += sizeof(__be32);
 	}
 
-	tmp = esp_alloc_tmp(aead, nfrags, seqhilen);
+	tmp = esp_alloc_tmp(aead, nfrags, extralen);
 	if (!tmp) {
 		err = -ENOMEM;
 		goto error;
 	}
 
-	seqhi = esp_tmp_seqhi(tmp);
-	iv = esp_tmp_iv(aead, tmp, seqhilen);
+	extra = esp_tmp_extra(tmp);
+	iv = esp_tmp_iv(aead, tmp, extralen);
 	req = esp_tmp_req(aead, iv);
 	sg = esp_req_sg(aead, req);
 
@@ -247,8 +257,10 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
 	 * encryption.
 	 */
 	if ((x->props.flags & XFRM_STATE_ESN)) {
-		esph = (void *)(skb_transport_header(skb) - sizeof(__be32));
-		*seqhi = esph->spi;
+		extra->esphoff = (unsigned char *)esph -
+				 skb_transport_header(skb);
+		esph = (struct ip_esp_hdr *)((unsigned char *)esph - 4);
+		extra->seqhi = esph->spi;
 		esph->seq_no = htonl(XFRM_SKB_CB(skb)->seq.output.hi);
 		aead_request_set_callback(req, 0, esp_output_done_esn, skb);
 	}
@@ -445,7 +457,7 @@ static int esp_input(struct xfrm_state *x, struct sk_buff *skb)
 		goto out;
 
 	ESP_SKB_CB(skb)->tmp = tmp;
-	seqhi = esp_tmp_seqhi(tmp);
+	seqhi = esp_tmp_extra(tmp);
 	iv = esp_tmp_iv(aead, tmp, seqhilen);
 	req = esp_tmp_req(aead, iv);
 	sg = esp_req_sg(aead, req);
diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c
index 4c39f4fd332a..de1d119a4497 100644
--- a/net/ipv4/gre_demux.c
+++ b/net/ipv4/gre_demux.c
@@ -62,26 +62,26 @@ EXPORT_SYMBOL_GPL(gre_del_protocol);
 
 /* Fills in tpi and returns header length to be pulled. */
 int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
-		     bool *csum_err, __be16 proto)
+		     bool *csum_err, __be16 proto, int nhs)
 {
 	const struct gre_base_hdr *greh;
 	__be32 *options;
 	int hdr_len;
 
-	if (unlikely(!pskb_may_pull(skb, sizeof(struct gre_base_hdr))))
+	if (unlikely(!pskb_may_pull(skb, nhs + sizeof(struct gre_base_hdr))))
 		return -EINVAL;
 
-	greh = (struct gre_base_hdr *)skb_transport_header(skb);
+	greh = (struct gre_base_hdr *)(skb->data + nhs);
 	if (unlikely(greh->flags & (GRE_VERSION | GRE_ROUTING)))
 		return -EINVAL;
 
 	tpi->flags = gre_flags_to_tnl_flags(greh->flags);
 	hdr_len = gre_calc_hlen(tpi->flags);
 
-	if (!pskb_may_pull(skb, hdr_len))
+	if (!pskb_may_pull(skb, nhs + hdr_len))
 		return -EINVAL;
 
-	greh = (struct gre_base_hdr *)skb_transport_header(skb);
+	greh = (struct gre_base_hdr *)(skb->data + nhs);
 	tpi->proto = greh->protocol;
 
 	options = (__be32 *)(greh + 1);
diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
index 4d2025f7ec57..1d000af7f561 100644
--- a/net/ipv4/ip_gre.c
+++ b/net/ipv4/ip_gre.c
@@ -49,12 +49,6 @@
 #include <net/gre.h>
 #include <net/dst_metadata.h>
 
-#if IS_ENABLED(CONFIG_IPV6)
-#include <net/ipv6.h>
-#include <net/ip6_fib.h>
-#include <net/ip6_route.h>
-#endif
-
 /*
    Problems & solutions
    --------------------
@@ -217,12 +211,14 @@ static void gre_err(struct sk_buff *skb, u32 info)
 	 * by themselves???
 	 */
 
+	const struct iphdr *iph = (struct iphdr *)skb->data;
 	const int type = icmp_hdr(skb)->type;
 	const int code = icmp_hdr(skb)->code;
 	struct tnl_ptk_info tpi;
 	bool csum_err = false;
 
-	if (gre_parse_header(skb, &tpi, &csum_err, htons(ETH_P_IP)) < 0) {
+	if (gre_parse_header(skb, &tpi, &csum_err, htons(ETH_P_IP),
+			     iph->ihl * 4) < 0) {
 		if (!csum_err)		/* ignore csum errors. */
 			return;
 	}
@@ -338,7 +334,7 @@ static int gre_rcv(struct sk_buff *skb)
 	}
 #endif
 
-	hdr_len = gre_parse_header(skb, &tpi, &csum_err, htons(ETH_P_IP));
+	hdr_len = gre_parse_header(skb, &tpi, &csum_err, htons(ETH_P_IP), 0);
 	if (hdr_len < 0)
 		goto drop;
 
@@ -1121,6 +1117,7 @@ struct net_device *gretap_fb_dev_create(struct net *net, const char *name,
 {
 	struct nlattr *tb[IFLA_MAX + 1];
 	struct net_device *dev;
+	LIST_HEAD(list_kill);
 	struct ip_tunnel *t;
 	int err;
 
@@ -1136,8 +1133,10 @@ struct net_device *gretap_fb_dev_create(struct net *net, const char *name,
 	t->collect_md = true;
 
 	err = ipgre_newlink(net, dev, tb, NULL);
-	if (err < 0)
-		goto out;
+	if (err < 0) {
+		free_netdev(dev);
+		return ERR_PTR(err);
+	}
 
 	/* openvswitch users expect packet sizes to be unrestricted,
 	 * so set the largest MTU we can.
@@ -1146,9 +1145,14 @@ struct net_device *gretap_fb_dev_create(struct net *net, const char *name,
 	if (err)
 		goto out;
 
+	err = rtnl_configure_link(dev, NULL);
+	if (err < 0)
+		goto out;
+
 	return dev;
 out:
-	free_netdev(dev);
+	ip_tunnel_dellink(dev, &list_kill);
+	unregister_netdevice_many(&list_kill);
 	return ERR_PTR(err);
 }
 EXPORT_SYMBOL_GPL(gretap_fb_dev_create);
diff --git a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c
index 2ed9dd2b5f2f..1d71c40eaaf3 100644
--- a/net/ipv4/ipconfig.c
+++ b/net/ipv4/ipconfig.c
@@ -127,7 +127,9 @@ __be32 ic_myaddr = NONE; /* My IP address */
 static __be32 ic_netmask = NONE;	/* Netmask for local subnet */
 __be32 ic_gateway = NONE;	/* Gateway IP address */
 
-__be32 ic_addrservaddr = NONE;	/* IP Address of the IP addresses'server */
+#ifdef IPCONFIG_DYNAMIC
+static __be32 ic_addrservaddr = NONE;	/* IP Address of the IP addresses'server */
+#endif
 
 __be32 ic_servaddr = NONE;	/* Boot server IP address */
 
diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
index 21a38e296fe2..5ad48ec77710 100644
--- a/net/ipv4/ipmr.c
+++ b/net/ipv4/ipmr.c
@@ -891,8 +891,10 @@ static struct mfc_cache *ipmr_cache_alloc(void)
 {
 	struct mfc_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);
 
-	if (c)
+	if (c) {
+		c->mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
 		c->mfc_un.res.minvif = MAXVIFS;
+	}
 	return c;
 }
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 8bd9911fdd16..e00e972c4e6a 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2751,7 +2751,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *skb;
 	struct sk_buff *hole = NULL;
-	u32 last_lost;
+	u32 max_segs, last_lost;
 	int mib_idx;
 	int fwd_rexmitting = 0;
 
@@ -2771,6 +2771,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 		last_lost = tp->snd_una;
 	}
 
+	max_segs = tcp_tso_autosize(sk, tcp_current_mss(sk));
 	tcp_for_write_queue_from(skb, sk) {
 		__u8 sacked = TCP_SKB_CB(skb)->sacked;
 		int segs;
@@ -2784,6 +2785,10 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 		segs = tp->snd_cwnd - tcp_packets_in_flight(tp);
 		if (segs <= 0)
 			return;
+		/* In case tcp_shift_skb_data() have aggregated large skbs,
+		 * we need to make sure not sending too bigs TSO packets
+		 */
+		segs = min_t(int, segs, max_segs);
 
 		if (fwd_rexmitting) {
 begin_fwd:
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 0ff31d97d485..ca5e8ea29538 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -391,9 +391,9 @@ int udp_v4_get_port(struct sock *sk, unsigned short snum)
 	return udp_lib_get_port(sk, snum, ipv4_rcv_saddr_equal, hash2_nulladdr);
 }
 
-static inline int compute_score(struct sock *sk, struct net *net,
-				__be32 saddr, unsigned short hnum, __be16 sport,
-				__be32 daddr, __be16 dport, int dif)
+static int compute_score(struct sock *sk, struct net *net,
+			 __be32 saddr, __be16 sport,
+			 __be32 daddr, unsigned short hnum, int dif)
 {
 	int score;
 	struct inet_sock *inet;
@@ -434,52 +434,6 @@ static inline int compute_score(struct sock *sk, struct net *net,
 	return score;
 }
 
-/*
- * In this second variant, we check (daddr, dport) matches (inet_rcv_sadd, inet_num)
- */
-static inline int compute_score2(struct sock *sk, struct net *net,
-				 __be32 saddr, __be16 sport,
-				 __be32 daddr, unsigned int hnum, int dif)
-{
-	int score;
-	struct inet_sock *inet;
-
-	if (!net_eq(sock_net(sk), net) ||
-	    ipv6_only_sock(sk))
-		return -1;
-
-	inet = inet_sk(sk);
-
-	if (inet->inet_rcv_saddr != daddr ||
-	    inet->inet_num != hnum)
-		return -1;
-
-	score = (sk->sk_family == PF_INET) ? 2 : 1;
-
-	if (inet->inet_daddr) {
-		if (inet->inet_daddr != saddr)
-			return -1;
-		score += 4;
-	}
-
-	if (inet->inet_dport) {
-		if (inet->inet_dport != sport)
-			return -1;
-		score += 4;
-	}
-
-	if (sk->sk_bound_dev_if) {
-		if (sk->sk_bound_dev_if != dif)
-			return -1;
-		score += 4;
-	}
-
-	if (sk->sk_incoming_cpu == raw_smp_processor_id())
-		score++;
-
-	return score;
-}
-
 static u32 udp_ehashfn(const struct net *net, const __be32 laddr,
 		       const __u16 lport, const __be32 faddr,
 		       const __be16 fport)
@@ -492,11 +446,11 @@ static u32 udp_ehashfn(const struct net *net, const __be32 laddr,
 		   udp_ehash_secret + net_hash_mix(net));
 }
 
-/* called with read_rcu_lock() */
+/* called with rcu_read_lock() */
 static struct sock *udp4_lib_lookup2(struct net *net,
 				     __be32 saddr, __be16 sport,
 				     __be32 daddr, unsigned int hnum, int dif,
-				     struct udp_hslot *hslot2, unsigned int slot2,
+				     struct udp_hslot *hslot2,
 				     struct sk_buff *skb)
 {
 	struct sock *sk, *result;
@@ -506,7 +460,7 @@ static struct sock *udp4_lib_lookup2(struct net *net,
 	result = NULL;
 	badness = 0;
 	udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
-		score = compute_score2(sk, net, saddr, sport,
-				       daddr, hnum, dif);
+		score = compute_score(sk, net, saddr, sport,
+				      daddr, hnum, dif);
 		if (score > badness) {
 			reuseport = sk->sk_reuseport;
@@ -554,17 +508,22 @@ struct sock *__udp4_lib_lookup(struct net *net, __be32 saddr,
 
 		result = udp4_lib_lookup2(net, saddr, sport,
 					  daddr, hnum, dif,
-					  hslot2, slot2, skb);
+					  hslot2, skb);
 		if (!result) {
+			unsigned int old_slot2 = slot2;
 			hash2 = udp4_portaddr_hash(net, htonl(INADDR_ANY), hnum);
 			slot2 = hash2 & udptable->mask;
+			/* avoid searching the same slot again. */
+			if (unlikely(slot2 == old_slot2))
+				return result;
+
 			hslot2 = &udptable->hash2[slot2];
 			if (hslot->count < hslot2->count)
 				goto begin;
 
 			result = udp4_lib_lookup2(net, saddr, sport,
-						  htonl(INADDR_ANY), hnum, dif,
-						  hslot2, slot2, skb);
+						  daddr, hnum, dif,
+						  hslot2, skb);
 		}
 		return result;
 	}
@@ -572,8 +531,8 @@ begin:
 	result = NULL;
 	badness = 0;
 	sk_for_each_rcu(sk, &hslot->head) {
-		score = compute_score(sk, net, saddr, hnum, sport,
-				      daddr, dport, dif);
+		score = compute_score(sk, net, saddr, sport,
+				      daddr, hnum, dif);
 		if (score > badness) {
 			reuseport = sk->sk_reuseport;
 			if (reuseport) {
@@ -1755,8 +1714,11 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
 		return err;
 	}
 
-	return skb_checksum_init_zero_check(skb, proto, uh->check,
-					    inet_compute_pseudo);
+	/* Note, we are only interested in != 0 or == 0, thus the
+	 * force to int.
+	 */
+	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+							 inet_compute_pseudo);
 }
 
 /*
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
index 4527285fcaa2..a4fa84076969 100644
--- a/net/ipv6/icmp.c
+++ b/net/ipv6/icmp.c
@@ -98,7 +98,7 @@ static void icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
 
 	if (!(type & ICMPV6_INFOMSG_MASK))
 		if (icmp6->icmp6_type == ICMPV6_ECHO_REQUEST)
-			ping_err(skb, offset, info);
+			ping_err(skb, offset, ntohl(info));
 }
 
 static int icmpv6_rcv(struct sk_buff *skb);
diff --git a/net/ipv6/ip6_checksum.c b/net/ipv6/ip6_checksum.c
index b2025bf3da4a..c0cbcb259f5a 100644
--- a/net/ipv6/ip6_checksum.c
+++ b/net/ipv6/ip6_checksum.c
@@ -78,9 +78,12 @@ int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, int proto)
 	 * we accept a checksum of zero here. When we find the socket
 	 * for the UDP packet we'll check if that socket allows zero checksum
 	 * for IPv6 (set by socket option).
+	 *
+	 * Note, we are only interested in != 0 or == 0, thus the
+	 * force to int.
 	 */
-	return skb_checksum_init_zero_check(skb, proto, uh->check,
-					    ip6_compute_pseudo);
+	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+							 ip6_compute_pseudo);
 }
 EXPORT_SYMBOL(udp6_csum_init);
 
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index fdc9de276ab1..776d145113e1 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -468,7 +468,7 @@ static int gre_rcv(struct sk_buff *skb)
 	bool csum_err = false;
 	int hdr_len;
 
-	hdr_len = gre_parse_header(skb, &tpi, &csum_err, htons(ETH_P_IPV6));
+	hdr_len = gre_parse_header(skb, &tpi, &csum_err, htons(ETH_P_IPV6), 0);
 	if (hdr_len < 0)
 		goto drop;
 
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
index f2e2013f8346..487ef3bc7bbc 100644
--- a/net/ipv6/ip6mr.c
+++ b/net/ipv6/ip6mr.c
@@ -1074,6 +1074,7 @@ static struct mfc6_cache *ip6mr_cache_alloc(void)
 	struct mfc6_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);
 	if (!c)
 		return NULL;
+	c->mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
 	c->mfc_un.res.minvif = MAXMIFS;
 	return c;
 }
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 969913da494f..520b7884d0c2 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -1782,7 +1782,7 @@ static struct rt6_info *ip6_nh_lookup_table(struct net *net,
 	};
 	struct fib6_table *table;
 	struct rt6_info *rt;
-	int flags = 0;
+	int flags = RT6_LOOKUP_F_IFACE;
 
 	table = fib6_get_table(net, cfg->fc_table);
 	if (!table)
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
index 0a5a255277e5..0619ac70836d 100644
--- a/net/ipv6/sit.c
+++ b/net/ipv6/sit.c
@@ -560,13 +560,13 @@ static int ipip6_err(struct sk_buff *skb, u32 info)
 
 	if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) {
 		ipv4_update_pmtu(skb, dev_net(skb->dev), info,
-				 t->parms.link, 0, IPPROTO_IPV6, 0);
+				 t->parms.link, 0, iph->protocol, 0);
 		err = 0;
 		goto out;
 	}
 	if (type == ICMP_REDIRECT) {
 		ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0,
-			      IPPROTO_IPV6, 0);
+			      iph->protocol, 0);
 		err = 0;
 		goto out;
 	}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index f36c2d076fce..2255d2bf5f6b 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -738,7 +738,7 @@ static const struct tcp_request_sock_ops tcp_request_sock_ipv6_ops = {
 static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32 seq,
 				 u32 ack, u32 win, u32 tsval, u32 tsecr,
 				 int oif, struct tcp_md5sig_key *key, int rst,
-				 u8 tclass, u32 label)
+				 u8 tclass, __be32 label)
 {
 	const struct tcphdr *th = tcp_hdr(skb);
 	struct tcphdr *t1;
@@ -911,7 +911,7 @@ out:
 static void tcp_v6_send_ack(const struct sock *sk, struct sk_buff *skb, u32 seq,
 			    u32 ack, u32 win, u32 tsval, u32 tsecr, int oif,
 			    struct tcp_md5sig_key *key, u8 tclass,
-			    u32 label)
+			    __be32 label)
 {
 	tcp_v6_send_response(sk, skb, seq, ack, win, tsval, tsecr, oif, key, 0,
 			     tclass, label);
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index f421c9f23c5b..005dc82c2138 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -115,11 +115,10 @@ static void udp_v6_rehash(struct sock *sk)
 	udp_lib_rehash(sk, new_hash);
 }
 
-static inline int compute_score(struct sock *sk, struct net *net,
-				unsigned short hnum,
-				const struct in6_addr *saddr, __be16 sport,
-				const struct in6_addr *daddr, __be16 dport,
-				int dif)
+static int compute_score(struct sock *sk, struct net *net,
+			 const struct in6_addr *saddr, __be16 sport,
+			 const struct in6_addr *daddr, unsigned short hnum,
+			 int dif)
 {
 	int score;
 	struct inet_sock *inet;
@@ -162,54 +161,11 @@ static inline int compute_score(struct sock *sk, struct net *net,
 	return score;
 }
 
-static inline int compute_score2(struct sock *sk, struct net *net,
-				 const struct in6_addr *saddr, __be16 sport,
-				 const struct in6_addr *daddr,
-				 unsigned short hnum, int dif)
-{
-	int score;
-	struct inet_sock *inet;
-
-	if (!net_eq(sock_net(sk), net) ||
-	    udp_sk(sk)->udp_port_hash != hnum ||
-	    sk->sk_family != PF_INET6)
-		return -1;
-
-	if (!ipv6_addr_equal(&sk->sk_v6_rcv_saddr, daddr))
-		return -1;
-
-	score = 0;
-	inet = inet_sk(sk);
-
-	if (inet->inet_dport) {
-		if (inet->inet_dport != sport)
-			return -1;
-		score++;
-	}
-
-	if (!ipv6_addr_any(&sk->sk_v6_daddr)) {
-		if (!ipv6_addr_equal(&sk->sk_v6_daddr, saddr))
-			return -1;
-		score++;
-	}
-
-	if (sk->sk_bound_dev_if) {
-		if (sk->sk_bound_dev_if != dif)
-			return -1;
-		score++;
-	}
-
-	if (sk->sk_incoming_cpu == raw_smp_processor_id())
-		score++;
-
-	return score;
-}
-
-/* called with read_rcu_lock() */
+/* called with rcu_read_lock() */
 static struct sock *udp6_lib_lookup2(struct net *net,
 		const struct in6_addr *saddr, __be16 sport,
 		const struct in6_addr *daddr, unsigned int hnum, int dif,
-		struct udp_hslot *hslot2, unsigned int slot2,
+		struct udp_hslot *hslot2,
 		struct sk_buff *skb)
 {
 	struct sock *sk, *result;
@@ -219,7 +175,7 @@ static struct sock *udp6_lib_lookup2(struct net *net,
 	result = NULL;
 	badness = -1;
 	udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
-		score = compute_score2(sk, net, saddr, sport,
-				       daddr, hnum, dif);
+		score = compute_score(sk, net, saddr, sport,
+				      daddr, hnum, dif);
 		if (score > badness) {
 			reuseport = sk->sk_reuseport;
@@ -268,17 +224,22 @@ struct sock *__udp6_lib_lookup(struct net *net,
 
 		result = udp6_lib_lookup2(net, saddr, sport,
 					  daddr, hnum, dif,
-					  hslot2, slot2, skb);
+					  hslot2, skb);
 		if (!result) {
+			unsigned int old_slot2 = slot2;
 			hash2 = udp6_portaddr_hash(net, &in6addr_any, hnum);
 			slot2 = hash2 & udptable->mask;
+			/* avoid searching the same slot again. */
+			if (unlikely(slot2 == old_slot2))
+				return result;
+
 			hslot2 = &udptable->hash2[slot2];
 			if (hslot->count < hslot2->count)
 				goto begin;
 
 			result = udp6_lib_lookup2(net, saddr, sport,
-						  &in6addr_any, hnum, dif,
-						  hslot2, slot2, skb);
+						  daddr, hnum, dif,
+						  hslot2, skb);
 		}
 		return result;
 	}
@@ -286,7 +247,7 @@ begin:
 	result = NULL;
 	badness = -1;
 	sk_for_each_rcu(sk, &hslot->head) {
-		score = compute_score(sk, net, hnum, saddr, sport, daddr, dport, dif);
+		score = compute_score(sk, net, saddr, sport, daddr, hnum, dif);
 		if (score > badness) {
 			reuseport = sk->sk_reuseport;
 			if (reuseport) {
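
The net/ipv6/udp.c hunks above fold the old compute_score2() into
compute_score() and, in __udp6_lib_lookup(), bail out of the wildcard
fallback when in6addr_any hashes into the slot that was just searched.
A rough user-space sketch of that early-out; SLOTS, mix32() and
scan_slot() are invented for illustration, not kernel API:

    #include <stdio.h>
    #include <stdint.h>

    #define SLOTS 16  /* toy table; the kernel derives this from udptable->mask */

    /* invented stand-in for udp6_portaddr_hash() */
    static unsigned int mix32(uint32_t addr, uint16_t port)
    {
        return (addr * 2654435761u + port) % SLOTS;
    }

    static int scan_slot(unsigned int slot)
    {
        printf("scanning slot %u\n", slot);
        return -1;  /* pretend nothing matched */
    }

    int main(void)
    {
        uint32_t daddr = 0x20010db8u;  /* specific destination address */
        uint16_t hnum = 53;
        unsigned int slot2 = mix32(daddr, hnum);

        if (scan_slot(slot2) < 0) {
            unsigned int old_slot2 = slot2;

            slot2 = mix32(0, hnum);  /* wildcard (in6addr_any) hash */
            /* avoid searching the same slot again, as in the fix */
            if (slot2 == old_slot2)
                return 0;
            scan_slot(slot2);
        }
        return 0;
    }
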
diff --git a/net/kcm/kcmproc.c b/net/kcm/kcmproc.c
index 738008726cc6..fda7f4715c58 100644
--- a/net/kcm/kcmproc.c
+++ b/net/kcm/kcmproc.c
@@ -241,6 +241,7 @@ static const struct file_operations kcm_seq_fops = {
 	.open		= kcm_seq_open,
 	.read		= seq_read,
 	.llseek		= seq_lseek,
+	.release	= seq_release_net,
 };
 
 static struct kcm_seq_muxinfo kcm_seq_muxinfo = {
diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
index 21b1fdf5d01d..6a1603bcdced 100644
--- a/net/mac80211/mesh.c
+++ b/net/mac80211/mesh.c
@@ -148,14 +148,17 @@ u32 mesh_accept_plinks_update(struct ieee80211_sub_if_data *sdata)
 void mesh_sta_cleanup(struct sta_info *sta)
 {
 	struct ieee80211_sub_if_data *sdata = sta->sdata;
-	u32 changed;
+	u32 changed = 0;
 
 	/*
 	 * maybe userspace handles peer allocation and peering, but in either
 	 * case the beacon is still generated by the kernel and we might need
 	 * an update.
 	 */
-	changed = mesh_accept_plinks_update(sdata);
+	if (sdata->u.mesh.user_mpm &&
+	    sta->mesh->plink_state == NL80211_PLINK_ESTAB)
+		changed |= mesh_plink_dec_estab_count(sdata);
+	changed |= mesh_accept_plinks_update(sdata);
 	if (!sdata->u.mesh.user_mpm) {
 		changed |= mesh_plink_deactivate(sta);
 		del_timer_sync(&sta->mesh->plink_timer);
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index db2312eeb2a4..f204274a9b6b 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -1544,6 +1544,8 @@ void nf_conntrack_cleanup_end(void)
 	nf_conntrack_tstamp_fini();
 	nf_conntrack_acct_fini();
 	nf_conntrack_expect_fini();
+
+	kmem_cache_destroy(nf_conntrack_cachep);
 }
 
 /*
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 7b7aa871a174..2c881871db38 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -2946,24 +2946,20 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
 	 * jumps are already validated for that chain.
 	 */
 	list_for_each_entry(i, &set->bindings, list) {
-		if (binding->flags & NFT_SET_MAP &&
+		if (i->flags & NFT_SET_MAP &&
 		    i->chain == binding->chain)
 			goto bind;
 	}
 
+	iter.genmask = nft_genmask_next(ctx->net);
 	iter.skip = 0;
 	iter.count = 0;
 	iter.err = 0;
 	iter.fn = nf_tables_bind_check_setelem;
 
 	set->ops->walk(ctx, set, &iter);
-	if (iter.err < 0) {
-		/* Destroy anonymous sets if binding fails */
-		if (set->flags & NFT_SET_ANONYMOUS)
-			nf_tables_set_destroy(ctx, set);
-
+	if (iter.err < 0)
 		return iter.err;
-	}
 	}
 bind:
 	binding->chain = ctx->chain;
@@ -3192,12 +3188,13 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
 	if (nest == NULL)
 		goto nla_put_failure;
 
 	args.cb = cb;
 	args.skb = skb;
+	args.iter.genmask = nft_genmask_cur(ctx.net);
 	args.iter.skip = cb->args[0];
 	args.iter.count = 0;
 	args.iter.err = 0;
 	args.iter.fn = nf_tables_dump_setelem;
 	set->ops->walk(&ctx, set, &args.iter);
 
 	nla_nest_end(skb, nest);
@@ -4284,6 +4281,7 @@ static int nf_tables_check_loops(const struct nft_ctx *ctx,
 		    binding->chain != chain)
 			continue;
 
+		iter.genmask = nft_genmask_next(ctx->net);
 		iter.skip = 0;
 		iter.count = 0;
 		iter.err = 0;
diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
index e9f8dffcc244..fb8b5892b5ff 100644
--- a/net/netfilter/nf_tables_core.c
+++ b/net/netfilter/nf_tables_core.c
@@ -143,7 +143,7 @@ next_rule:
 	list_for_each_entry_continue_rcu(rule, &chain->rules, list) {
 
 		/* This rule is not active, skip. */
-		if (unlikely(rule->genmask & (1 << gencursor)))
+		if (unlikely(rule->genmask & gencursor))
 			continue;
 
 		rulenum++;
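
The one-line change above pairs with the iter.genmask assignments added
in nf_tables_api.c: the gencursor value reaching the evaluation loop is
now already a bitmask, so the per-rule activity test no longer shifts it.
A loose user-space illustration of two-generation rule masking; the names
and the set-bit-means-inactive convention are assumptions made for the
example, not a statement of the nf_tables internals:

    #include <stdio.h>

    /* Two generations exist at once: the live one and the one being
     * built. In this toy model a rule's genmask has a bit set for each
     * generation it is NOT part of. */
    struct rule {
        const char *name;
        unsigned int genmask;
    };

    int main(void)
    {
        unsigned int gencursor = 1u << 0;  /* mask for the current generation */
        struct rule rules[] = {
            { "active-now",    0 },        /* active in both generations */
            { "next-gen-only", 1u << 0 },  /* inactive in generation 0 */
        };

        for (int i = 0; i < 2; i++) {
            /* same shape as: if (rule->genmask & gencursor) continue; */
            if (rules[i].genmask & gencursor) {
                printf("%s: skipped\n", rules[i].name);
                continue;
            }
            printf("%s: evaluated\n", rules[i].name);
        }
        return 0;
    }
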
diff --git a/net/netfilter/nft_hash.c b/net/netfilter/nft_hash.c
index 6fa016564f90..f39c53a159eb 100644
--- a/net/netfilter/nft_hash.c
+++ b/net/netfilter/nft_hash.c
@@ -189,7 +189,6 @@ static void nft_hash_walk(const struct nft_ctx *ctx, const struct nft_set *set,
 	struct nft_hash_elem *he;
 	struct rhashtable_iter hti;
 	struct nft_set_elem elem;
-	u8 genmask = nft_genmask_cur(read_pnet(&set->pnet));
 	int err;
 
 	err = rhashtable_walk_init(&priv->ht, &hti, GFP_KERNEL);
@@ -218,7 +217,7 @@ static void nft_hash_walk(const struct nft_ctx *ctx, const struct nft_set *set,
 			goto cont;
 		if (nft_set_elem_expired(&he->ext))
 			goto cont;
-		if (!nft_set_elem_active(&he->ext, genmask))
+		if (!nft_set_elem_active(&he->ext, iter->genmask))
 			goto cont;
 
 		elem.priv = he;
diff --git a/net/netfilter/nft_rbtree.c b/net/netfilter/nft_rbtree.c
index f762094af7c1..7201d57b5a93 100644
--- a/net/netfilter/nft_rbtree.c
+++ b/net/netfilter/nft_rbtree.c
@@ -211,7 +211,6 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
 	struct nft_rbtree_elem *rbe;
 	struct nft_set_elem elem;
 	struct rb_node *node;
-	u8 genmask = nft_genmask_cur(read_pnet(&set->pnet));
 
 	spin_lock_bh(&nft_rbtree_lock);
 	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
@@ -219,7 +218,7 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
 
 		if (iter->count < iter->skip)
 			goto cont;
-		if (!nft_set_elem_active(&rbe->ext, genmask))
+		if (!nft_set_elem_active(&rbe->ext, iter->genmask))
 			goto cont;
 
 		elem.priv = rbe;
diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
index 3d5feede962d..d84312584ee4 100644
--- a/net/openvswitch/conntrack.c
+++ b/net/openvswitch/conntrack.c
@@ -818,8 +818,18 @@ static int ovs_ct_lookup(struct net *net, struct sw_flow_key *key,
 	 */
 		state = OVS_CS_F_TRACKED | OVS_CS_F_NEW | OVS_CS_F_RELATED;
 		__ovs_ct_update_key(key, state, &info->zone, exp->master);
-	} else
-		return __ovs_ct_lookup(net, key, info, skb);
+	} else {
+		struct nf_conn *ct;
+		int err;
+
+		err = __ovs_ct_lookup(net, key, info, skb);
+		if (err)
+			return err;
+
+		ct = (struct nf_conn *)skb->nfct;
+		if (ct)
+			nf_ct_deliver_cached_events(ct);
+	}
 
 	return 0;
 }
diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
index 310cabce2311..7c2a65a6af5c 100644
--- a/net/rds/ib_cm.c
+++ b/net/rds/ib_cm.c
@@ -111,7 +111,7 @@ void rds_ib_cm_connect_complete(struct rds_connection *conn, struct rdma_cm_even
 		}
 	}
 
-	if (conn->c_version < RDS_PROTOCOL(3,1)) {
+	if (conn->c_version < RDS_PROTOCOL(3, 1)) {
 		printk(KERN_NOTICE "RDS/IB: Connection to %pI4 version %u.%u failed,"
 		       " no longer supported\n",
 		       &conn->c_faddr,
diff --git a/net/rds/loop.c b/net/rds/loop.c
index 6b12b68541ae..814173b466d9 100644
--- a/net/rds/loop.c
+++ b/net/rds/loop.c
@@ -95,8 +95,9 @@ out:
  */
 static void rds_loop_inc_free(struct rds_incoming *inc)
 {
 	struct rds_message *rm = container_of(inc, struct rds_message, m_inc);
-	rds_message_put(rm);
+
+	rds_message_put(rm);
 }
 
 /* we need to at least give the thread something to succeed */
diff --git a/net/rds/sysctl.c b/net/rds/sysctl.c
index c173f69e1479..e381bbcd9cc1 100644
--- a/net/rds/sysctl.c
+++ b/net/rds/sysctl.c
@@ -102,7 +102,8 @@ int rds_sysctl_init(void)
 	rds_sysctl_reconnect_min = msecs_to_jiffies(1);
 	rds_sysctl_reconnect_min_jiffies = rds_sysctl_reconnect_min;
 
-	rds_sysctl_reg_table = register_net_sysctl(&init_net,"net/rds", rds_sysctl_rds_table);
+	rds_sysctl_reg_table =
+		register_net_sysctl(&init_net, "net/rds", rds_sysctl_rds_table);
 	if (!rds_sysctl_reg_table)
 		return -ENOMEM;
 	return 0;
diff --git a/net/rds/tcp.h b/net/rds/tcp.h
index ec0602b0dc24..7940babf6c71 100644
--- a/net/rds/tcp.h
+++ b/net/rds/tcp.h
@@ -83,7 +83,7 @@ int rds_tcp_inc_copy_to_user(struct rds_incoming *inc, struct iov_iter *to);
 void rds_tcp_xmit_prepare(struct rds_connection *conn);
 void rds_tcp_xmit_complete(struct rds_connection *conn);
 int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm,
-                 unsigned int hdr_off, unsigned int sg, unsigned int off);
+		 unsigned int hdr_off, unsigned int sg, unsigned int off);
 void rds_tcp_write_space(struct sock *sk);
 
 /* tcp_stats.c */
diff --git a/net/rds/tcp_connect.c b/net/rds/tcp_connect.c
index fba13d0305fb..f6e95d60db54 100644
--- a/net/rds/tcp_connect.c
+++ b/net/rds/tcp_connect.c
@@ -54,19 +54,19 @@ void rds_tcp_state_change(struct sock *sk)
 
 	rdsdebug("sock %p state_change to %d\n", tc->t_sock, sk->sk_state);
 
-	switch(sk->sk_state) {
-		/* ignore connecting sockets as they make progress */
-		case TCP_SYN_SENT:
-		case TCP_SYN_RECV:
-			break;
-		case TCP_ESTABLISHED:
-			rds_connect_path_complete(conn, RDS_CONN_CONNECTING);
-			break;
-		case TCP_CLOSE_WAIT:
-		case TCP_CLOSE:
-			rds_conn_drop(conn);
-		default:
-			break;
+	switch (sk->sk_state) {
+	/* ignore connecting sockets as they make progress */
+	case TCP_SYN_SENT:
+	case TCP_SYN_RECV:
+		break;
+	case TCP_ESTABLISHED:
+		rds_connect_path_complete(conn, RDS_CONN_CONNECTING);
+		break;
+	case TCP_CLOSE_WAIT:
+	case TCP_CLOSE:
+		rds_conn_drop(conn);
+	default:
+		break;
 	}
 out:
 	read_unlock_bh(&sk->sk_callback_lock);
diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index 686b1d03a558..245542ca4718 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -138,7 +138,7 @@ int rds_tcp_accept_one(struct socket *sock)
 			rds_tcp_reset_callbacks(new_sock, conn);
 			conn->c_outgoing = 0;
 			/* rds_connect_path_complete() marks RDS_CONN_UP */
-			rds_connect_path_complete(conn, RDS_CONN_DISCONNECTING);
+			rds_connect_path_complete(conn, RDS_CONN_RESETTING);
 		}
 	} else {
 		rds_tcp_set_callbacks(new_sock, conn);
diff --git a/net/rds/tcp_recv.c b/net/rds/tcp_recv.c
index c3196f9d070a..6e6a7111a034 100644
--- a/net/rds/tcp_recv.c
+++ b/net/rds/tcp_recv.c
@@ -171,7 +171,7 @@ static int rds_tcp_data_recv(read_descriptor_t *desc, struct sk_buff *skb,
 	while (left) {
 		if (!tinc) {
 			tinc = kmem_cache_alloc(rds_tcp_incoming_slab,
-			                        arg->gfp);
+						arg->gfp);
 			if (!tinc) {
 				desc->error = -ENOMEM;
 				goto out;
diff --git a/net/rds/tcp_send.c b/net/rds/tcp_send.c
index 22d0f2020a79..618be69c9c3b 100644
--- a/net/rds/tcp_send.c
+++ b/net/rds/tcp_send.c
@@ -66,19 +66,19 @@ void rds_tcp_xmit_complete(struct rds_connection *conn)
 static int rds_tcp_sendmsg(struct socket *sock, void *data, unsigned int len)
 {
-	struct kvec vec = {
-                .iov_base = data,
-                .iov_len = len,
+	struct kvec vec = {
+		.iov_base = data,
+		.iov_len = len,
+	};
+	struct msghdr msg = {
+		.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
 	};
-	struct msghdr msg = {
-		.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
-	};
 
 	return kernel_sendmsg(sock, &msg, &vec, 1, vec.iov_len);
 }
 
 /* the core send_sem serializes this with other xmit and shutdown */
 int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm,
 		 unsigned int hdr_off, unsigned int sg, unsigned int off)
 {
 	struct rds_tcp_connection *tc = conn->c_transport_data;
 	int done = 0;
@@ -196,7 +196,7 @@ void rds_tcp_write_space(struct sock *sk)
 	tc->t_last_seen_una = rds_tcp_snd_una(tc);
 	rds_send_drop_acked(conn, rds_tcp_snd_una(tc), rds_tcp_is_acked);
 
-        if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf)
+	if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf)
 		queue_delayed_work(rds_wq, &conn->c_send_w, 0);
 
 out:
diff --git a/net/rds/transport.c b/net/rds/transport.c
index f3afd1d60d3c..2ffd3e30c643 100644
--- a/net/rds/transport.c
+++ b/net/rds/transport.c
@@ -140,8 +140,7 @@ unsigned int rds_trans_stats_info_copy(struct rds_info_iterator *iter,
 	rds_info_iter_unmap(iter);
 	down_read(&rds_trans_sem);
 
-	for (i = 0; i < RDS_TRANS_COUNT; i++)
-	{
+	for (i = 0; i < RDS_TRANS_COUNT; i++) {
 		trans = transports[i];
 		if (!trans || !trans->stats_info_copy)
 			continue;
diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index 336774a535c3..c7a0b0d481c0 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -1118,7 +1118,7 @@ tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb)
 		nla_nest_end(skb, nest);
 		ret = skb->len;
 	} else
-		nla_nest_cancel(skb, nest);
+		nlmsg_trim(skb, b);
 
 	nlh->nlmsg_len = skb_tail_pointer(skb) - b;
 	if (NETLINK_CB(cb->skb).portid && ret)
diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
index 658046dfe02d..ea4a2fef1b71 100644
--- a/net/sched/act_ife.c
+++ b/net/sched/act_ife.c
@@ -106,9 +106,9 @@ int ife_get_meta_u16(struct sk_buff *skb, struct tcf_meta_info *mi)
 }
 EXPORT_SYMBOL_GPL(ife_get_meta_u16);
 
-int ife_alloc_meta_u32(struct tcf_meta_info *mi, void *metaval)
+int ife_alloc_meta_u32(struct tcf_meta_info *mi, void *metaval, gfp_t gfp)
 {
-	mi->metaval = kmemdup(metaval, sizeof(u32), GFP_KERNEL);
+	mi->metaval = kmemdup(metaval, sizeof(u32), gfp);
 	if (!mi->metaval)
 		return -ENOMEM;
 
@@ -116,9 +116,9 @@ int ife_alloc_meta_u32(struct tcf_meta_info *mi, void *metaval)
 }
 EXPORT_SYMBOL_GPL(ife_alloc_meta_u32);
 
-int ife_alloc_meta_u16(struct tcf_meta_info *mi, void *metaval)
+int ife_alloc_meta_u16(struct tcf_meta_info *mi, void *metaval, gfp_t gfp)
 {
-	mi->metaval = kmemdup(metaval, sizeof(u16), GFP_KERNEL);
+	mi->metaval = kmemdup(metaval, sizeof(u16), gfp);
 	if (!mi->metaval)
 		return -ENOMEM;
 
@@ -240,10 +240,10 @@ static int ife_validate_metatype(struct tcf_meta_ops *ops, void *val, int len)
 }
 
 /* called when adding new meta information
- * under ife->tcf_lock
+ * under ife->tcf_lock for existing action
 */
 static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
-				void *val, int len)
+				void *val, int len, bool exists)
 {
 	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
 	int ret = 0;
@@ -251,11 +251,13 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
 	if (!ops) {
 		ret = -ENOENT;
 #ifdef CONFIG_MODULES
-		spin_unlock_bh(&ife->tcf_lock);
+		if (exists)
+			spin_unlock_bh(&ife->tcf_lock);
 		rtnl_unlock();
 		request_module("ifemeta%u", metaid);
 		rtnl_lock();
-		spin_lock_bh(&ife->tcf_lock);
+		if (exists)
+			spin_lock_bh(&ife->tcf_lock);
 		ops = find_ife_oplist(metaid);
 #endif
 	}
@@ -272,10 +274,10 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
 }
 
 /* called when adding new meta information
- * under ife->tcf_lock
+ * under ife->tcf_lock for existing action
 */
 static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
-			int len)
+			int len, bool atomic)
 {
 	struct tcf_meta_info *mi = NULL;
 	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
@@ -284,7 +286,7 @@ static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
 	if (!ops)
 		return -ENOENT;
 
-	mi = kzalloc(sizeof(*mi), GFP_KERNEL);
+	mi = kzalloc(sizeof(*mi), atomic ? GFP_ATOMIC : GFP_KERNEL);
 	if (!mi) {
 		/*put back what find_ife_oplist took */
 		module_put(ops->owner);
@@ -294,7 +296,7 @@ static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
 	mi->metaid = metaid;
 	mi->ops = ops;
 	if (len > 0) {
-		ret = ops->alloc(mi, metaval);
+		ret = ops->alloc(mi, metaval, atomic ? GFP_ATOMIC : GFP_KERNEL);
 		if (ret != 0) {
 			kfree(mi);
 			module_put(ops->owner);
@@ -313,11 +315,13 @@ static int use_all_metadata(struct tcf_ife_info *ife)
 	int rc = 0;
 	int installed = 0;
 
+	read_lock(&ife_mod_lock);
 	list_for_each_entry(o, &ifeoplist, list) {
-		rc = add_metainfo(ife, o->metaid, NULL, 0);
+		rc = add_metainfo(ife, o->metaid, NULL, 0, true);
 		if (rc == 0)
 			installed += 1;
 	}
+	read_unlock(&ife_mod_lock);
 
 	if (installed)
 		return 0;
@@ -385,8 +389,9 @@ static void tcf_ife_cleanup(struct tc_action *a, int bind)
 	spin_unlock_bh(&ife->tcf_lock);
 }
 
-/* under ife->tcf_lock */
-static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb)
+/* under ife->tcf_lock for existing action */
+static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+			     bool exists)
 {
 	int len = 0;
 	int rc = 0;
@@ -398,11 +403,11 @@ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb)
 		val = nla_data(tb[i]);
 		len = nla_len(tb[i]);
 
-		rc = load_metaops_and_vet(ife, i, val, len);
+		rc = load_metaops_and_vet(ife, i, val, len, exists);
 		if (rc != 0)
 			return rc;
 
-		rc = add_metainfo(ife, i, val, len);
+		rc = add_metainfo(ife, i, val, len, exists);
 		if (rc)
 			return rc;
 	}
@@ -474,7 +479,8 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
 		saddr = nla_data(tb[TCA_IFE_SMAC]);
 	}
 
-	spin_lock_bh(&ife->tcf_lock);
+	if (exists)
+		spin_lock_bh(&ife->tcf_lock);
 	ife->tcf_action = parm->action;
 
 	if (parm->flags & IFE_ENCODE) {
@@ -504,11 +510,12 @@ metadata_parse_err:
 		if (ret == ACT_P_CREATED)
 			_tcf_ife_cleanup(a, bind);
 
-		spin_unlock_bh(&ife->tcf_lock);
+		if (exists)
+			spin_unlock_bh(&ife->tcf_lock);
 		return err;
 	}
 
-	err = populate_metalist(ife, tb2);
+	err = populate_metalist(ife, tb2, exists);
 	if (err)
 		goto metadata_parse_err;
 
@@ -523,12 +530,14 @@ metadata_parse_err:
 		if (ret == ACT_P_CREATED)
 			_tcf_ife_cleanup(a, bind);
 
-		spin_unlock_bh(&ife->tcf_lock);
+		if (exists)
+			spin_unlock_bh(&ife->tcf_lock);
 		return err;
 		}
 	}
 
-	spin_unlock_bh(&ife->tcf_lock);
+	if (exists)
+		spin_unlock_bh(&ife->tcf_lock);
 
 	if (ret == ACT_P_CREATED)
 		tcf_hash_insert(tn, a);
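
The act_ife hunks above thread exists/atomic flags through the metadata
helpers: for an already-existing action these paths run with the action's
spinlock held and must not sleep, so allocations flip from GFP_KERNEL to
GFP_ATOMIC and the lock is only taken when the caller actually holds it.
A toy user-space sketch of the same flag-driven allocation choice; the
GFP macros and helper below are stand-ins, not the kernel's:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    #define GFP_KERNEL 0  /* may sleep to reclaim memory */
    #define GFP_ATOMIC 1  /* must never sleep */

    static void *toy_alloc(size_t sz, int gfp)
    {
        printf("alloc %zu bytes with %s\n", sz,
               gfp == GFP_ATOMIC ? "GFP_ATOMIC" : "GFP_KERNEL");
        return malloc(sz);
    }

    /* mirrors the shape of add_metainfo(): a caller that already holds
     * its spinlock passes atomic=true */
    static int add_metainfo(bool atomic)
    {
        void *mi = toy_alloc(32, atomic ? GFP_ATOMIC : GFP_KERNEL);

        if (!mi)
            return -1;
        free(mi);
        return 0;
    }

    int main(void)
    {
        add_metainfo(false);  /* new action: no lock held, may sleep */
        add_metainfo(true);   /* existing action: spinlock held */
        return 0;
    }
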
diff --git a/net/sched/act_ipt.c b/net/sched/act_ipt.c
index 9f002ada7074..d4bd19ee5822 100644
--- a/net/sched/act_ipt.c
+++ b/net/sched/act_ipt.c
@@ -121,10 +121,13 @@ static int __tcf_ipt_init(struct tc_action_net *tn, struct nlattr *nla,
 	}
 
 	td = (struct xt_entry_target *)nla_data(tb[TCA_IPT_TARG]);
-	if (nla_len(tb[TCA_IPT_TARG]) < td->u.target_size)
+	if (nla_len(tb[TCA_IPT_TARG]) < td->u.target_size) {
+		if (exists)
+			tcf_hash_release(a, bind);
 		return -EINVAL;
+	}
 
-	if (!tcf_hash_check(tn, index, a, bind)) {
+	if (!exists) {
 		ret = tcf_hash_create(tn, index, est, a, sizeof(*ipt), bind,
 				      false);
 		if (ret)
diff --git a/net/sched/sch_fifo.c b/net/sched/sch_fifo.c
index 2177eac0a61e..2e4bd2c0a50c 100644
--- a/net/sched/sch_fifo.c
+++ b/net/sched/sch_fifo.c
@@ -37,14 +37,18 @@ static int pfifo_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 
 static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
+	unsigned int prev_backlog;
+
 	if (likely(skb_queue_len(&sch->q) < sch->limit))
 		return qdisc_enqueue_tail(skb, sch);
 
+	prev_backlog = sch->qstats.backlog;
 	/* queue full, remove one skb to fulfill the limit */
 	__qdisc_queue_drop_head(sch, &sch->q);
 	qdisc_qstats_drop(sch);
 	qdisc_enqueue_tail(skb, sch);
 
+	qdisc_tree_reduce_backlog(sch, 0, prev_backlog - sch->qstats.backlog);
 	return NET_XMIT_CN;
 }
 
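
The sch_fifo.c fix snapshots qstats.backlog before the drop-head/
enqueue-tail pair and reports the net byte change to the ancestor qdiscs,
since the dropped head and the newly queued tail generally differ in
size. The same snapshot-and-delta bookkeeping over a plain byte counter,
as a standalone sketch with invented names:

    #include <stdio.h>

    static unsigned int backlog;  /* bytes queued, like qstats.backlog */

    static void drop_head(unsigned int head_len) { backlog -= head_len; }
    static void enqueue_tail(unsigned int len)   { backlog += len; }

    /* stand-in for qdisc_tree_reduce_backlog(sch, 0, delta) */
    static void tree_reduce_backlog(unsigned int pkts, int bytes)
    {
        printf("adjust ancestors: pkts -%u, bytes -%d\n", pkts, bytes);
    }

    int main(void)
    {
        backlog = 3000;  /* two queued packets; the head is 1500 bytes */
        unsigned int prev_backlog = backlog;

        drop_head(1500);    /* queue full: make room */
        enqueue_tail(100);  /* smaller replacement packet */

        /* net change seen upward: 1500 - 100 = 1400 bytes gone */
        tree_reduce_backlog(0, prev_backlog - backlog);
        return 0;
    }
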
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index d4b4218af6b1..62f9d8100c6e 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -1007,7 +1007,9 @@ static void htb_work_func(struct work_struct *work)
 	struct htb_sched *q = container_of(work, struct htb_sched, work);
 	struct Qdisc *sch = q->watchdog.qdisc;
 
+	rcu_read_lock();
 	__netif_schedule(qdisc_root(sch));
+	rcu_read_unlock();
 }
 
 static int htb_init(struct Qdisc *sch, struct nlattr *opt)
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 205bed00dd34..178f1630a036 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -650,14 +650,14 @@ deliver:
 #endif
 
 	if (q->qdisc) {
+		unsigned int pkt_len = qdisc_pkt_len(skb);
 		int err = qdisc_enqueue(skb, q->qdisc);
 
-		if (unlikely(err != NET_XMIT_SUCCESS)) {
-			if (net_xmit_drop_count(err)) {
-				qdisc_qstats_drop(sch);
-				qdisc_tree_reduce_backlog(sch, 1,
-							  qdisc_pkt_len(skb));
-			}
+		if (err != NET_XMIT_SUCCESS &&
+		    net_xmit_drop_count(err)) {
+			qdisc_qstats_drop(sch);
+			qdisc_tree_reduce_backlog(sch, 1,
+						  pkt_len);
 		}
 		goto tfifo_dequeue;
 	}
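
The netem change caches qdisc_pkt_len(skb) before calling qdisc_enqueue():
once the child qdisc has taken the skb it may already have freed it, so
reading the length afterwards was a use-after-free. A minimal standalone
illustration of the read-before-handoff pattern (toy types, not kernel
API):

    #include <stdio.h>
    #include <stdlib.h>

    struct pkt { unsigned int len; };

    /* consumes the packet: after this returns, p must not be touched */
    static int child_enqueue(struct pkt *p)
    {
        free(p);
        return -1;  /* pretend the child dropped it */
    }

    int main(void)
    {
        struct pkt *p = malloc(sizeof(*p));

        if (!p)
            return 1;
        p->len = 1500;

        /* read the length BEFORE handing the packet off, as in the fix */
        unsigned int pkt_len = p->len;
        int err = child_enqueue(p);

        if (err != 0)  /* touching p->len here would be a use-after-free */
            printf("drop accounted: %u bytes\n", pkt_len);
        return 0;
    }
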
diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
index 4b0a82191bc4..a356450b747b 100644
--- a/net/sched/sch_prio.c
+++ b/net/sched/sch_prio.c
@@ -172,8 +172,9 @@ prio_destroy(struct Qdisc *sch)
 static int prio_tune(struct Qdisc *sch, struct nlattr *opt)
 {
 	struct prio_sched_data *q = qdisc_priv(sch);
+	struct Qdisc *queues[TCQ_PRIO_BANDS];
+	int oldbands = q->bands, i;
 	struct tc_prio_qopt *qopt;
-	int i;
 
 	if (nla_len(opt) < sizeof(*qopt))
 		return -EINVAL;
@@ -187,62 +188,42 @@ static int prio_tune(struct Qdisc *sch, struct nlattr *opt)
 		return -EINVAL;
 	}
 
+	/* Before commit, make sure we can allocate all new qdiscs */
+	for (i = oldbands; i < qopt->bands; i++) {
+		queues[i] = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+					      TC_H_MAKE(sch->handle, i + 1));
+		if (!queues[i]) {
+			while (i > oldbands)
+				qdisc_destroy(queues[--i]);
+			return -ENOMEM;
+		}
+	}
+
 	sch_tree_lock(sch);
 	q->bands = qopt->bands;
 	memcpy(q->prio2band, qopt->priomap, TC_PRIO_MAX+1);
 
-	for (i = q->bands; i < TCQ_PRIO_BANDS; i++) {
+	for (i = q->bands; i < oldbands; i++) {
 		struct Qdisc *child = q->queues[i];
-		q->queues[i] = &noop_qdisc;
-		if (child != &noop_qdisc) {
-			qdisc_tree_reduce_backlog(child, child->q.qlen, child->qstats.backlog);
-			qdisc_destroy(child);
-		}
-	}
-	sch_tree_unlock(sch);
 
-	for (i = 0; i < q->bands; i++) {
-		if (q->queues[i] == &noop_qdisc) {
-			struct Qdisc *child, *old;
-
-			child = qdisc_create_dflt(sch->dev_queue,
-						  &pfifo_qdisc_ops,
-						  TC_H_MAKE(sch->handle, i + 1));
-			if (child) {
-				sch_tree_lock(sch);
-				old = q->queues[i];
-				q->queues[i] = child;
-
-				if (old != &noop_qdisc) {
-					qdisc_tree_reduce_backlog(old,
-								  old->q.qlen,
-								  old->qstats.backlog);
-					qdisc_destroy(old);
-				}
-				sch_tree_unlock(sch);
-			}
-		}
-	}
+		qdisc_tree_reduce_backlog(child, child->q.qlen,
+					  child->qstats.backlog);
+		qdisc_destroy(child);
 	}
+
+	for (i = oldbands; i < q->bands; i++)
+		q->queues[i] = queues[i];
+
+	sch_tree_unlock(sch);
 	return 0;
 }
 
 static int prio_init(struct Qdisc *sch, struct nlattr *opt)
 {
-	struct prio_sched_data *q = qdisc_priv(sch);
-	int i;
-
-	for (i = 0; i < TCQ_PRIO_BANDS; i++)
-		q->queues[i] = &noop_qdisc;
-
-	if (opt == NULL) {
+	if (!opt)
 		return -EINVAL;
-	} else {
-		int err;
 
-		if ((err = prio_tune(sch, opt)) != 0)
-			return err;
-	}
-	return 0;
+	return prio_tune(sch, opt);
 }
 
 static int prio_dump(struct Qdisc *sch, struct sk_buff *skb)
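
The sch_prio.c rework follows an allocate-then-commit pattern: every
qdisc the new configuration needs is created before sch_tree_lock() is
taken, with full rollback on allocation failure, so the critical section
itself can no longer fail halfway through. A compressed user-space sketch
of that two-phase shape, with malloc() standing in for qdisc_create_dflt():

    #include <stdio.h>
    #include <stdlib.h>

    #define BANDS_MAX 16

    int main(void)
    {
        void *queues[BANDS_MAX];
        int oldbands = 2, newbands = 5, i;

        /* Phase 1: allocate everything up front; nothing is visible yet */
        for (i = oldbands; i < newbands; i++) {
            queues[i] = malloc(64);
            if (!queues[i]) {
                /* roll back completely: the old config stays intact */
                while (i > oldbands)
                    free(queues[--i]);
                return 1;
            }
        }

        /* Phase 2: commit. In the kernel this part runs under
         * sch_tree_lock() and has no failure paths left. */
        printf("installed bands %d..%d\n", oldbands, newbands - 1);

        for (i = oldbands; i < newbands; i++)
            free(queues[i]);
        return 0;
    }
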
diff --git a/net/sctp/sctp_diag.c b/net/sctp/sctp_diag.c
index 1ce724b87618..f69edcf219e5 100644
--- a/net/sctp/sctp_diag.c
+++ b/net/sctp/sctp_diag.c
@@ -3,12 +3,6 @@
 #include <linux/sock_diag.h>
 #include <net/sctp/sctp.h>
 
-extern void inet_diag_msg_common_fill(struct inet_diag_msg *r,
-				      struct sock *sk);
-extern int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb,
-				    struct inet_diag_msg *r, int ext,
-				    struct user_namespace *user_ns);
-
 static void sctp_diag_get_info(struct sock *sk, struct inet_diag_msg *r,
 			       void *info);
 
diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
index 6f11c62bc8f9..bf8f05c3eb82 100644
--- a/net/tipc/bearer.c
+++ b/net/tipc/bearer.c
@@ -405,7 +405,7 @@ int tipc_l2_send_msg(struct net *net, struct sk_buff *skb,
 		return 0;
 
 	/* Send RESET message even if bearer is detached from device */
-	tipc_ptr = rtnl_dereference(dev->tipc_ptr);
+	tipc_ptr = rcu_dereference_rtnl(dev->tipc_ptr);
 	if (unlikely(!tipc_ptr && !msg_is_reset(buf_msg(skb))))
 		goto drop;
 
diff --git a/net/tipc/link.c b/net/tipc/link.c
index 7059c94f33c5..67b6ab9f4c8d 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -704,7 +704,8 @@ static void link_profile_stats(struct tipc_link *l)
  */
 int tipc_link_timeout(struct tipc_link *l, struct sk_buff_head *xmitq)
 {
-	int mtyp, rc = 0;
+	int mtyp = 0;
+	int rc = 0;
 	bool state = false;
 	bool probe = false;
 	bool setup = false;
diff --git a/net/tipc/msg.c b/net/tipc/msg.c
index 8740930f0787..17201aa8423d 100644
--- a/net/tipc/msg.c
+++ b/net/tipc/msg.c
@@ -41,6 +41,8 @@
 #include "name_table.h"
 
 #define MAX_FORWARD_SIZE 1024
+#define BUF_HEADROOM (LL_MAX_HEADER + 48)
+#define BUF_TAILROOM 16
 
 static unsigned int align(unsigned int i)
 {
@@ -505,6 +507,10 @@ bool tipc_msg_reverse(u32 own_node, struct sk_buff **skb, int err)
 		msg_set_hdr_sz(hdr, BASIC_H_SIZE);
 	}
 
+	if (skb_cloned(_skb) &&
+	    pskb_expand_head(_skb, BUF_HEADROOM, BUF_TAILROOM, GFP_KERNEL))
+		goto exit;
+
 	/* Now reverse the concerned fields */
 	msg_set_errcode(hdr, err);
 	msg_set_origport(hdr, msg_destport(&ohdr));
diff --git a/net/tipc/msg.h b/net/tipc/msg.h
index 024da8af91f0..7cf52fb39bee 100644
--- a/net/tipc/msg.h
+++ b/net/tipc/msg.h
@@ -94,17 +94,6 @@ struct plist;
 
 #define TIPC_MEDIA_INFO_OFFSET	5
 
-/**
- * TIPC message buffer code
- *
- * TIPC message buffer headroom reserves space for the worst-case
- * link-level device header (in case the message is sent off-node).
- *
- * Note: Headroom should be a multiple of 4 to ensure the TIPC header fields
- * are word aligned for quicker access
- */
-#define BUF_HEADROOM (LL_MAX_HEADER + 48)
-
 struct tipc_skb_cb {
 	void *handle;
 	struct sk_buff *tail;
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index 88bfcd707064..c49b8df438cb 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -796,9 +796,11 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
  * @tsk: receiving socket
  * @skb: pointer to message buffer.
  */
-static void tipc_sk_proto_rcv(struct tipc_sock *tsk, struct sk_buff *skb)
+static void tipc_sk_proto_rcv(struct tipc_sock *tsk, struct sk_buff *skb,
+			      struct sk_buff_head *xmitq)
 {
 	struct sock *sk = &tsk->sk;
+	u32 onode = tsk_own_node(tsk);
 	struct tipc_msg *hdr = buf_msg(skb);
 	int mtyp = msg_type(hdr);
 	bool conn_cong;
@@ -811,7 +813,8 @@ static void tipc_sk_proto_rcv(struct tipc_sock *tsk, struct sk_buff *skb)
 
 	if (mtyp == CONN_PROBE) {
 		msg_set_type(hdr, CONN_PROBE_REPLY);
-		tipc_sk_respond(sk, skb, TIPC_OK);
+		if (tipc_msg_reverse(onode, &skb, TIPC_OK))
+			__skb_queue_tail(xmitq, skb);
 		return;
 	} else if (mtyp == CONN_ACK) {
 		conn_cong = tsk_conn_cong(tsk);
@@ -1686,7 +1689,8 @@ static unsigned int rcvbuf_limit(struct sock *sk, struct sk_buff *skb)
  *
  * Returns true if message was added to socket receive queue, otherwise false
  */
-static bool filter_rcv(struct sock *sk, struct sk_buff *skb)
+static bool filter_rcv(struct sock *sk, struct sk_buff *skb,
+		       struct sk_buff_head *xmitq)
 {
 	struct socket *sock = sk->sk_socket;
 	struct tipc_sock *tsk = tipc_sk(sk);
@@ -1696,7 +1700,7 @@ static bool filter_rcv(struct sock *sk, struct sk_buff *skb)
 	int usr = msg_user(hdr);
 
 	if (unlikely(msg_user(hdr) == CONN_MANAGER)) {
-		tipc_sk_proto_rcv(tsk, skb);
+		tipc_sk_proto_rcv(tsk, skb, xmitq);
 		return false;
 	}
 
@@ -1739,7 +1743,8 @@ static bool filter_rcv(struct sock *sk, struct sk_buff *skb)
 		return true;
 
 reject:
-	tipc_sk_respond(sk, skb, err);
+	if (tipc_msg_reverse(tsk_own_node(tsk), &skb, err))
+		__skb_queue_tail(xmitq, skb);
 	return false;
 }
 
@@ -1755,9 +1760,24 @@ reject:
 static int tipc_backlog_rcv(struct sock *sk, struct sk_buff *skb)
 {
 	unsigned int truesize = skb->truesize;
+	struct sk_buff_head xmitq;
+	u32 dnode, selector;
 
-	if (likely(filter_rcv(sk, skb)))
+	__skb_queue_head_init(&xmitq);
+
+	if (likely(filter_rcv(sk, skb, &xmitq))) {
 		atomic_add(truesize, &tipc_sk(sk)->dupl_rcvcnt);
+		return 0;
+	}
+
+	if (skb_queue_empty(&xmitq))
+		return 0;
+
+	/* Send response/rejected message */
+	skb = __skb_dequeue(&xmitq);
+	dnode = msg_destnode(buf_msg(skb));
+	selector = msg_origport(buf_msg(skb));
+	tipc_node_xmit_skb(sock_net(sk), skb, dnode, selector);
 	return 0;
 }
 
@@ -1771,12 +1791,13 @@ static int tipc_backlog_rcv(struct sock *sk, struct sk_buff *skb)
  * Caller must hold socket lock
  */
 static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
-			    u32 dport)
+			    u32 dport, struct sk_buff_head *xmitq)
 {
+	unsigned long time_limit = jiffies + 2;
+	struct sk_buff *skb;
 	unsigned int lim;
 	atomic_t *dcnt;
-	struct sk_buff *skb;
-	unsigned long time_limit = jiffies + 2;
+	u32 onode;
 
 	while (skb_queue_len(inputq)) {
 		if (unlikely(time_after_eq(jiffies, time_limit)))
@@ -1788,7 +1809,7 @@ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
 
 		/* Add message directly to receive queue if possible */
 		if (!sock_owned_by_user(sk)) {
-			filter_rcv(sk, skb);
+			filter_rcv(sk, skb, xmitq);
 			continue;
 		}
 
@@ -1801,7 +1822,9 @@ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
 			continue;
 
 		/* Overload => reject message back to sender */
-		tipc_sk_respond(sk, skb, TIPC_ERR_OVERLOAD);
+		onode = tipc_own_addr(sock_net(sk));
+		if (tipc_msg_reverse(onode, &skb, TIPC_ERR_OVERLOAD))
+			__skb_queue_tail(xmitq, skb);
 		break;
 	}
 }
@@ -1814,12 +1837,14 @@ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
  */
 void tipc_sk_rcv(struct net *net, struct sk_buff_head *inputq)
 {
+	struct sk_buff_head xmitq;
 	u32 dnode, dport = 0;
 	int err;
 	struct tipc_sock *tsk;
 	struct sock *sk;
 	struct sk_buff *skb;
 
+	__skb_queue_head_init(&xmitq);
 	while (skb_queue_len(inputq)) {
 		dport = tipc_skb_peek_port(inputq, dport);
 		tsk = tipc_sk_lookup(net, dport);
@@ -1827,9 +1852,14 @@ void tipc_sk_rcv(struct net *net, struct sk_buff_head *inputq)
 		if (likely(tsk)) {
 			sk = &tsk->sk;
 			if (likely(spin_trylock_bh(&sk->sk_lock.slock))) {
-				tipc_sk_enqueue(inputq, sk, dport);
+				tipc_sk_enqueue(inputq, sk, dport, &xmitq);
 				spin_unlock_bh(&sk->sk_lock.slock);
 			}
+			/* Send pending response/rejected messages, if any */
+			while ((skb = __skb_dequeue(&xmitq))) {
+				dnode = msg_destnode(buf_msg(skb));
+				tipc_node_xmit_skb(net, skb, dnode, dport);
+			}
 			sock_put(sk);
 			continue;
 		}
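
The tipc/socket.c hunks resolve the socket-timer deadlock by never
transmitting while the socket spinlock is held: filter_rcv() and
tipc_sk_proto_rcv() now park replies and rejects on a caller-supplied
xmitq, which the caller drains only after the lock is released. A toy
sketch of that collect-under-lock, send-after-unlock shape (locking
elided as comments, all names invented):

    #include <stdio.h>

    #define QMAX 8

    struct msgq {
        const char *msgs[QMAX];
        int n;
    };

    static void queue_tail(struct msgq *q, const char *m)
    {
        if (q->n < QMAX)
            q->msgs[q->n++] = m;
    }

    /* runs "under the socket lock": may only queue, never send */
    static void filter_rcv(struct msgq *xmitq)
    {
        queue_tail(xmitq, "CONN_PROBE_REPLY");
        queue_tail(xmitq, "TIPC_ERR_OVERLOAD reject");
    }

    int main(void)
    {
        struct msgq xmitq = { .n = 0 };

        /* spin_lock(); */
        filter_rcv(&xmitq);
        /* spin_unlock(); -- only now is it safe to transmit */

        for (int i = 0; i < xmitq.n; i++)
            printf("xmit: %s\n", xmitq.msgs[i]);
        return 0;
    }
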
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index b5f1221f48d4..b96ac918e0ba 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -61,6 +61,14 @@
  * function will also cleanup rejected sockets, those that reach the connected
  * state but leave it before they have been accepted.
  *
+ * - Lock ordering for pending or accept queue sockets is:
+ *
+ *     lock_sock(listener);
+ *     lock_sock_nested(pending, SINGLE_DEPTH_NESTING);
+ *
+ * Using explicit nested locking keeps lockdep happy since normally only one
+ * lock of a given class may be taken at a time.
+ *
  * - Sockets created by user action will be cleaned up when the user process
  * calls close(2), causing our release implementation to be called. Our release
  * implementation will perform some cleanup then drop the last reference so our
@@ -443,7 +451,7 @@ void vsock_pending_work(struct work_struct *work)
 	cleanup = true;
 
 	lock_sock(listener);
-	lock_sock(sk);
+	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
 
 	if (vsock_is_pending(sk)) {
 		vsock_remove_pending(listener, sk);
@@ -1292,7 +1300,7 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags)
 	if (connected) {
 		listener->sk_ack_backlog--;
 
-		lock_sock(connected);
+		lock_sock_nested(connected, SINGLE_DEPTH_NESTING);
 		vconnected = vsock_sk(connected);
 
 		/* If the listener socket has received an error, then we should
diff --git a/net/wireless/util.c b/net/wireless/util.c
index 4e809e978b7d..2443ee30ba5b 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -509,7 +509,7 @@ static int __ieee80211_data_to_8023(struct sk_buff *skb, struct ethhdr *ehdr,
 		 * replace EtherType */
 		hdrlen += ETH_ALEN + 2;
 	else
-		tmp.h_proto = htons(skb->len);
+		tmp.h_proto = htons(skb->len - hdrlen);
 
 	pskb_pull(skb, hdrlen);
 
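
The cfg80211 fix applies to frames without an LLC/SNAP header, where the
resulting 802.3 header carries a length field rather than an EtherType:
that field must count only the payload remaining after the 802.11 header
is pulled, hence the new "- hdrlen". A small arithmetic check of the two
variants (the lengths are made up):

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>  /* htons(), ntohs() */

    int main(void)
    {
        unsigned int skb_len = 120;  /* bytes currently in the buffer */
        unsigned int hdrlen = 26;    /* 802.11 header about to be pulled */

        /* old (buggy): counted the header into the 802.3 length field */
        uint16_t bad = htons(skb_len);
        /* fixed: the length field covers only the remaining payload */
        uint16_t good = htons(skb_len - hdrlen);

        printf("bad=%u good=%u\n", ntohs(bad), ntohs(good));
        return 0;
    }
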