path: root/net
author		Linus Torvalds <torvalds@linux-foundation.org>	2013-07-09 21:24:39 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-07-09 21:24:39 -0400
commit		496322bc91e35007ed754184dcd447a02b6dd685 (patch)
tree		f5298d0a74c0a6e65c0e98050b594b8d020904c1 /net
parent		2e17c5a97e231f3cb426f4b7895eab5be5c5442e (diff)
parent		56e0ef527b184b3de2d7f88c6190812b2b2ac6bf (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
 "This is a re-do of the net-next pull request for the current merge
  window.  The only difference from the one I made the other day is that
  this has Eliezer's interface renames and the timeout handling changes
  made based upon your feedback, as well as a few bug fixes that have
  trickled in.

  Highlights:

   1) Low latency device polling, eliminating the cost of interrupt
      handling and context switches.  Allows direct polling of a network
      device from socket operations, such as recvmsg() and poll().
      Currently ixgbe, mlx4, and bnx2x support this feature.  Full high
      level description, performance numbers, and design in commit
      0a4db187a999 ("Merge branch 'll_poll'").  From Eliezer Tamir.

   2) With the routing cache removed, ip_check_mc_rcu() gets exercised
      more than ever before in the case where we have lots of multicast
      addresses.  Use a hash table instead of a simple linked list, from
      Eric Dumazet.

   3) Add driver for Atheros QCA98xx 802.11ac wireless devices, from
      Bartosz Markowski, Janusz Dziedzic, Kalle Valo, Marek Kwaczynski,
      Marek Puzyniak, Michal Kazior, and Sujith Manoharan.

   4) Support reporting the TUN device persist flag to userspace, from
      Pavel Emelyanov.

   5) Allow controlling network device VF link state using netlink, from
      Rony Efraim.

   6) Support GRE tunneling in openvswitch, from Pravin B Shelar.

   7) Adjust SOCK_MIN_RCVBUF and SOCK_MIN_SNDBUF for modern times, from
      Daniel Borkmann and Eric Dumazet.

   8) Allow controlling of TCP quickack behavior on a per-route basis,
      from Cong Wang.

   9) Several bug fixes and improvements to vxlan from Stephen
      Hemminger, Pravin B Shelar, and Mike Rapoport.  In particular,
      support receiving on multiple UDP ports.

  10) Major cleanups, particularly in the area of debugging and cookie
      lifetime handling, to the SCTP protocol code.  From Daniel
      Borkmann.

  11) Allow packets to cross network namespaces when traversing tunnel
      devices.  From Nicolas Dichtel.

  12) Allow monitoring netlink traffic via AF_PACKET sockets, in a
      manner akin to how we monitor real network traffic via ptype_all.
      From Daniel Borkmann.

  13) Several bug fixes and improvements for the new alx device driver,
      from Johannes Berg.

  14) Fix scalability issues in the netem packet scheduler's time queue,
      by using an rbtree.  From Eric Dumazet.

  15) Several bug fixes in TCP loss recovery handling, from Yuchung
      Cheng.

  16) Add support for GSO segmentation of MPLS packets, from Simon
      Horman.

  17) Make network notifiers have a real data type for the opaque
      pointer that's passed into them.  Use this to properly handle
      network device flag changes in arp_netdev_event().  From Jiri
      Pirko and Timo Teräs.

  18) Convert several drivers over to module_pci_driver(), from Peter
      Huewe.

  19) tcp_fixup_rcvbuf() can loop 500 times over loopback, just use an
      O(1) calculation instead.  From Eric Dumazet.

  20) Support setting of explicit tunnel peer addresses in ipv6, just
      like ipv4.  From Nicolas Dichtel.

  21) Protect x86 BPF JIT against spraying attacks, from Eric Dumazet.

  22) Prevent a single high rate flow from overrunning an individual cpu
      during RX packet processing via selective flow shedding.  From
      Willem de Bruijn.

  23) Don't use spinlocks in TCP md5 signing fast paths, from Eric
      Dumazet.

  24) Don't just drop GSO packets which are above the TBF scheduler's
      burst limit, chop them up so they are in-bounds instead.  Also
      from Eric Dumazet.

  25) VLAN offloads are missed when configured on top of a bridge, fix
      from Vlad Yasevich.

  26) Support IPV6 in ping sockets.  From Lorenzo Colitti.

  27) Receive flow steering targets should be updated at poll() time
      too, from David Majnemer.

  28) Fix several corner case regressions in PMTU/redirect handling due
      to the routing cache removal, from Timo Teräs.

  29) We have to be mindful of ipv4 mapped ipv6 sockets in
      udp_v6_push_pending_frames().  From Hannes Frederic Sowa.

  30) Fix L2TP sequence number handling bugs, from James Chapman."
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1214 commits)
  drivers/net: caif: fix wrong rtnl_is_locked() usage
  drivers/net: enic: release rtnl_lock on error-path
  vhost-net: fix use-after-free in vhost_net_flush
  net: mv643xx_eth: do not use port number as platform device id
  net: sctp: confirm route during forward progress
  virtio_net: fix race in RX VQ processing
  virtio: support unlocked queue poll
  net/cadence/macb: fix bug/typo in extracting gem_irq_read_clear bit
  Documentation: Fix references to defunct linux-net@vger.kernel.org
  net/fs: change busy poll time accounting
  net: rename low latency sockets functions to busy poll
  bridge: fix some kernel warning in multicast timer
  sfc: Fix memory leak when discarding scattered packets
  sit: fix tunnel update via netlink
  dt:net:stmmac: Add dt specific phy reset callback support.
  dt:net:stmmac: Add support to dwmac version 3.610 and 3.710
  dt:net:stmmac: Allocate platform data only if its NULL.
  net:stmmac: fix memleak in the open method
  ipv6: rt6_check_neigh should successfully verify neigh if no NUD information are available
  net: ipv6: fix wrong ping_v6_sendmsg return value
  ...
Diffstat (limited to 'net')
-rw-r--r--net/8021q/vlan.c2
-rw-r--r--net/Kconfig17
-rw-r--r--net/Makefile1
-rw-r--r--net/appletalk/aarp.c2
-rw-r--r--net/appletalk/ddp.c2
-rw-r--r--net/atm/clip.c8
-rw-r--r--net/atm/mpc.c6
-rw-r--r--net/ax25/af_ax25.c6
-rw-r--r--net/ax25/sysctl_net_ax25.c2
-rw-r--r--net/batman-adv/Makefile1
-rw-r--r--net/batman-adv/bat_iv_ogm.c123
-rw-r--r--net/batman-adv/bridge_loop_avoidance.c94
-rw-r--r--net/batman-adv/bridge_loop_avoidance.h12
-rw-r--r--net/batman-adv/distributed-arp-table.c82
-rw-r--r--net/batman-adv/hard-interface.c98
-rw-r--r--net/batman-adv/icmp_socket.c4
-rw-r--r--net/batman-adv/main.c1
-rw-r--r--net/batman-adv/main.h18
-rw-r--r--net/batman-adv/network-coding.c22
-rw-r--r--net/batman-adv/network-coding.h6
-rw-r--r--net/batman-adv/originator.c6
-rw-r--r--net/batman-adv/originator.h2
-rw-r--r--net/batman-adv/ring_buffer.c51
-rw-r--r--net/batman-adv/ring_buffer.h27
-rw-r--r--net/batman-adv/routing.c64
-rw-r--r--net/batman-adv/routing.h1
-rw-r--r--net/batman-adv/send.c36
-rw-r--r--net/batman-adv/send.h6
-rw-r--r--net/batman-adv/soft-interface.c6
-rw-r--r--net/batman-adv/translation-table.c74
-rw-r--r--net/batman-adv/translation-table.h2
-rw-r--r--net/batman-adv/types.h6
-rw-r--r--net/batman-adv/unicast.c2
-rw-r--r--net/batman-adv/vis.c19
-rw-r--r--net/bluetooth/hci_core.c192
-rw-r--r--net/bluetooth/hci_event.c71
-rw-r--r--net/bluetooth/hidp/core.c14
-rw-r--r--net/bluetooth/l2cap_core.c121
-rw-r--r--net/bluetooth/l2cap_sock.c4
-rw-r--r--net/bluetooth/mgmt.c229
-rw-r--r--net/bridge/br_device.c21
-rw-r--r--net/bridge/br_fdb.c5
-rw-r--r--net/bridge/br_forward.c14
-rw-r--r--net/bridge/br_if.c2
-rw-r--r--net/bridge/br_input.c15
-rw-r--r--net/bridge/br_mdb.c2
-rw-r--r--net/bridge/br_multicast.c74
-rw-r--r--net/bridge/br_netfilter.c4
-rw-r--r--net/bridge/br_netlink.c10
-rw-r--r--net/bridge/br_notify.c2
-rw-r--r--net/bridge/br_private.h9
-rw-r--r--net/bridge/br_sysfs_br.c26
-rw-r--r--net/bridge/br_sysfs_if.c4
-rw-r--r--net/bridge/netfilter/ebt_ulog.c6
-rw-r--r--net/bridge/netfilter/ebtables.c6
-rw-r--r--net/caif/caif_dev.c4
-rw-r--r--net/caif/caif_usb.c4
-rw-r--r--net/can/af_can.c4
-rw-r--r--net/can/bcm.c4
-rw-r--r--net/can/gw.c4
-rw-r--r--net/can/raw.c4
-rw-r--r--net/core/datagram.c5
-rw-r--r--net/core/dev.c238
-rw-r--r--net/core/drop_monitor.c4
-rw-r--r--net/core/dst.c2
-rw-r--r--net/core/ethtool.c24
-rw-r--r--net/core/fib_rules.c4
-rw-r--r--net/core/gen_estimator.c12
-rw-r--r--net/core/gen_stats.c22
-rw-r--r--net/core/link_watch.c3
-rw-r--r--net/core/neighbour.c34
-rw-r--r--net/core/net-procfs.c16
-rw-r--r--net/core/netpoll.c16
-rw-r--r--net/core/netprio_cgroup.c2
-rw-r--r--net/core/pktgen.c81
-rw-r--r--net/core/rtnetlink.c32
-rw-r--r--net/core/skbuff.c69
-rw-r--r--net/core/sock.c26
-rw-r--r--net/core/sysctl_net_core.c139
-rw-r--r--net/decnet/af_decnet.c4
-rw-r--r--net/decnet/dn_dev.c6
-rw-r--r--net/decnet/sysctl_net_decnet.c6
-rw-r--r--net/ieee802154/6lowpan.c5
-rw-r--r--net/ipv4/Kconfig11
-rw-r--r--net/ipv4/Makefile7
-rw-r--r--net/ipv4/af_inet.c26
-rw-r--r--net/ipv4/ah4.c7
-rw-r--r--net/ipv4/arp.c8
-rw-r--r--net/ipv4/devinet.c9
-rw-r--r--net/ipv4/esp4.c7
-rw-r--r--net/ipv4/fib_frontend.c4
-rw-r--r--net/ipv4/fib_semantics.c3
-rw-r--r--net/ipv4/gre.c253
-rw-r--r--net/ipv4/gre_demux.c414
-rw-r--r--net/ipv4/gre_offload.c127
-rw-r--r--net/ipv4/icmp.c51
-rw-r--r--net/ipv4/igmp.c79
-rw-r--r--net/ipv4/inet_fragment.c2
-rw-r--r--net/ipv4/ip_gre.c258
-rw-r--r--net/ipv4/ip_tunnel.c177
-rw-r--r--net/ipv4/ip_tunnel_core.c122
-rw-r--r--net/ipv4/ip_vti.c7
-rw-r--r--net/ipv4/ipcomp.c7
-rw-r--r--net/ipv4/ipip.c20
-rw-r--r--net/ipv4/ipmr.c4
-rw-r--r--net/ipv4/netfilter/Kconfig2
-rw-r--r--net/ipv4/netfilter/ipt_MASQUERADE.c7
-rw-r--r--net/ipv4/netfilter/ipt_ULOG.c6
-rw-r--r--net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c2
-rw-r--r--net/ipv4/ping.c641
-rw-r--r--net/ipv4/proc.c1
-rw-r--r--net/ipv4/route.c144
-rw-r--r--net/ipv4/sysctl_net_ipv4.c31
-rw-r--r--net/ipv4/tcp.c344
-rw-r--r--net/ipv4/tcp_input.c517
-rw-r--r--net/ipv4/tcp_ipv4.c78
-rw-r--r--net/ipv4/tcp_minisocks.c6
-rw-r--r--net/ipv4/tcp_offload.c332
-rw-r--r--net/ipv4/tcp_output.c39
-rw-r--r--net/ipv4/udp.c90
-rw-r--r--net/ipv4/udp_offload.c100
-rw-r--r--net/ipv4/xfrm4_tunnel.c2
-rw-r--r--net/ipv6/Makefile2
-rw-r--r--net/ipv6/addrconf.c304
-rw-r--r--net/ipv6/addrconf_core.c1
-rw-r--r--net/ipv6/af_inet6.c12
-rw-r--r--net/ipv6/datagram.c27
-rw-r--r--net/ipv6/exthdrs_core.c2
-rw-r--r--net/ipv6/icmp.c23
-rw-r--r--net/ipv6/ip6_offload.c1
-rw-r--r--net/ipv6/ip6_output.c16
-rw-r--r--net/ipv6/ip6mr.c2
-rw-r--r--net/ipv6/mcast.c75
-rw-r--r--net/ipv6/mip6.c6
-rw-r--r--net/ipv6/ndisc.c11
-rw-r--r--net/ipv6/netfilter/ip6t_MASQUERADE.c6
-rw-r--r--net/ipv6/output_core.c3
-rw-r--r--net/ipv6/ping.c277
-rw-r--r--net/ipv6/raw.c48
-rw-r--r--net/ipv6/route.c24
-rw-r--r--net/ipv6/sit.c211
-rw-r--r--net/ipv6/sysctl_net_ipv6.c4
-rw-r--r--net/ipv6/tcp_ipv6.c2
-rw-r--r--net/ipv6/udp.c62
-rw-r--r--net/ipv6/udp_offload.c3
-rw-r--r--net/ipx/af_ipx.c2
-rw-r--r--net/irda/irsysctl.c6
-rw-r--r--net/iucv/af_iucv.c2
-rw-r--r--net/l2tp/l2tp_core.c114
-rw-r--r--net/l2tp/l2tp_core.h5
-rw-r--r--net/l2tp/l2tp_ppp.c3
-rw-r--r--net/mac80211/aes_ccm.c6
-rw-r--r--net/mac80211/cfg.c67
-rw-r--r--net/mac80211/debugfs_netdev.c15
-rw-r--r--net/mac80211/driver-ops.h3
-rw-r--r--net/mac80211/ht.c8
-rw-r--r--net/mac80211/ibss.c114
-rw-r--r--net/mac80211/ieee80211_i.h37
-rw-r--r--net/mac80211/iface.c34
-rw-r--r--net/mac80211/key.c24
-rw-r--r--net/mac80211/key.h15
-rw-r--r--net/mac80211/main.c7
-rw-r--r--net/mac80211/mesh.c107
-rw-r--r--net/mac80211/mesh.h7
-rw-r--r--net/mac80211/mesh_plink.c8
-rw-r--r--net/mac80211/mlme.c436
-rw-r--r--net/mac80211/rate.c8
-rw-r--r--net/mac80211/rx.c56
-rw-r--r--net/mac80211/scan.c9
-rw-r--r--net/mac80211/sta_info.c8
-rw-r--r--net/mac80211/sta_info.h9
-rw-r--r--net/mac80211/tx.c11
-rw-r--r--net/mac80211/util.c41
-rw-r--r--net/mac80211/vht.c2
-rw-r--r--net/mac80211/wep.c48
-rw-r--r--net/mac80211/wpa.c68
-rw-r--r--net/mpls/Kconfig9
-rw-r--r--net/mpls/Makefile4
-rw-r--r--net/mpls/mpls_gso.c108
-rw-r--r--net/netfilter/core.c21
-rw-r--r--net/netfilter/ipvs/ip_vs_conn.c35
-rw-r--r--net/netfilter/ipvs/ip_vs_core.c4
-rw-r--r--net/netfilter/ipvs/ip_vs_ctl.c35
-rw-r--r--net/netfilter/ipvs/ip_vs_dh.c10
-rw-r--r--net/netfilter/ipvs/ip_vs_lblc.c14
-rw-r--r--net/netfilter/ipvs/ip_vs_lblcr.c14
-rw-r--r--net/netfilter/ipvs/ip_vs_lc.c3
-rw-r--r--net/netfilter/ipvs/ip_vs_nq.c3
-rw-r--r--net/netfilter/ipvs/ip_vs_proto_sctp.c860
-rw-r--r--net/netfilter/ipvs/ip_vs_proto_tcp.c14
-rw-r--r--net/netfilter/ipvs/ip_vs_rr.c3
-rw-r--r--net/netfilter/ipvs/ip_vs_sed.c3
-rw-r--r--net/netfilter/ipvs/ip_vs_sh.c108
-rw-r--r--net/netfilter/ipvs/ip_vs_sync.c19
-rw-r--r--net/netfilter/ipvs/ip_vs_wlc.c3
-rw-r--r--net/netfilter/ipvs/ip_vs_wrr.c3
-rw-r--r--net/netfilter/nf_conntrack_ftp.c73
-rw-r--r--net/netfilter/nf_conntrack_netlink.c30
-rw-r--r--net/netfilter/nf_conntrack_proto_tcp.c6
-rw-r--r--net/netfilter/nf_conntrack_standalone.c4
-rw-r--r--net/netfilter/nf_log.c6
-rw-r--r--net/netfilter/nf_nat_helper.c2
-rw-r--r--net/netfilter/nfnetlink_cthelper.c16
-rw-r--r--net/netfilter/nfnetlink_cttimeout.c6
-rw-r--r--net/netfilter/nfnetlink_queue_core.c47
-rw-r--r--net/netfilter/xt_CT.c10
-rw-r--r--net/netfilter/xt_TEE.c2
-rw-r--r--net/netfilter/xt_rateest.c2
-rw-r--r--net/netfilter/xt_socket.c96
-rw-r--r--net/netlabel/netlabel_unlabeled.c7
-rw-r--r--net/netlink/af_netlink.c178
-rw-r--r--net/netlink/af_netlink.h1
-rw-r--r--net/netrom/af_netrom.c2
-rw-r--r--net/netrom/sysctl_net_netrom.c2
-rw-r--r--net/nfc/core.c224
-rw-r--r--net/nfc/hci/core.c75
-rw-r--r--net/nfc/llcp.h3
-rw-r--r--net/nfc/llcp_commands.c22
-rw-r--r--net/nfc/llcp_core.c16
-rw-r--r--net/nfc/llcp_sock.c19
-rw-r--r--net/nfc/nci/Kconfig10
-rw-r--r--net/nfc/nci/Makefile4
-rw-r--r--net/nfc/nci/core.c37
-rw-r--r--net/nfc/nci/data.c2
-rw-r--r--net/nfc/nci/spi.c378
-rw-r--r--net/nfc/netlink.c183
-rw-r--r--net/nfc/nfc.h11
-rw-r--r--net/openvswitch/Kconfig14
-rw-r--r--net/openvswitch/Makefile3
-rw-r--r--net/openvswitch/actions.c8
-rw-r--r--net/openvswitch/datapath.c371
-rw-r--r--net/openvswitch/datapath.h4
-rw-r--r--net/openvswitch/dp_notify.c2
-rw-r--r--net/openvswitch/flow.c211
-rw-r--r--net/openvswitch/flow.h47
-rw-r--r--net/openvswitch/vport-gre.c275
-rw-r--r--net/openvswitch/vport-internal_dev.c3
-rw-r--r--net/openvswitch/vport-netdev.c9
-rw-r--r--net/openvswitch/vport-netdev.h1
-rw-r--r--net/openvswitch/vport.c34
-rw-r--r--net/openvswitch/vport.h23
-rw-r--r--net/packet/af_packet.c5
-rw-r--r--net/phonet/pn_dev.c4
-rw-r--r--net/phonet/sysctl.c4
-rw-r--r--net/rds/ib_sysctl.c2
-rw-r--r--net/rds/iw_sysctl.c2
-rw-r--r--net/rds/sysctl.c2
-rw-r--r--net/rose/af_rose.c6
-rw-r--r--net/rose/sysctl_net_rose.c2
-rw-r--r--net/sched/act_mirred.c2
-rw-r--r--net/sched/sch_cbq.c2
-rw-r--r--net/sched/sch_drr.c2
-rw-r--r--net/sched/sch_generic.c44
-rw-r--r--net/sched/sch_hfsc.c2
-rw-r--r--net/sched/sch_htb.c261
-rw-r--r--net/sched/sch_netem.c111
-rw-r--r--net/sched/sch_qfq.c2
-rw-r--r--net/sched/sch_tbf.c47
-rw-r--r--net/sctp/Kconfig11
-rw-r--r--net/sctp/associola.c84
-rw-r--r--net/sctp/bind_addr.c2
-rw-r--r--net/sctp/chunk.c7
-rw-r--r--net/sctp/debug.c4
-rw-r--r--net/sctp/endpointola.c25
-rw-r--r--net/sctp/input.c12
-rw-r--r--net/sctp/inqueue.c9
-rw-r--r--net/sctp/ipv6.c29
-rw-r--r--net/sctp/output.c40
-rw-r--r--net/sctp/outqueue.c214
-rw-r--r--net/sctp/proc.c12
-rw-r--r--net/sctp/protocol.c56
-rw-r--r--net/sctp/sm_make_chunk.c54
-rw-r--r--net/sctp/sm_sideeffect.c107
-rw-r--r--net/sctp/sm_statefuns.c85
-rw-r--r--net/sctp/socket.c289
-rw-r--r--net/sctp/sysctl.c10
-rw-r--r--net/sctp/transport.c51
-rw-r--r--net/sctp/tsnmap.c10
-rw-r--r--net/sctp/ulpevent.c10
-rw-r--r--net/socket.c23
-rw-r--r--net/sunrpc/sysctl.c10
-rw-r--r--net/sunrpc/xprtrdma/svc_rdma.c8
-rw-r--r--net/sunrpc/xprtrdma/transport.c4
-rw-r--r--net/sunrpc/xprtsock.c4
-rw-r--r--net/tipc/Makefile3
-rw-r--r--net/tipc/bcast.c3
-rw-r--r--net/tipc/bcast.h3
-rw-r--r--net/tipc/config.c119
-rw-r--r--net/tipc/core.c22
-rw-r--r--net/tipc/core.h17
-rw-r--r--net/tipc/discover.c7
-rw-r--r--net/tipc/eth_media.c19
-rw-r--r--net/tipc/ib_media.c17
-rw-r--r--net/tipc/link.c88
-rw-r--r--net/tipc/msg.c19
-rw-r--r--net/tipc/msg.h8
-rw-r--r--net/tipc/name_table.c10
-rw-r--r--net/tipc/name_table.h11
-rw-r--r--net/tipc/node_subscr.c2
-rw-r--r--net/tipc/port.c320
-rw-r--r--net/tipc/port.h85
-rw-r--r--net/tipc/server.c596
-rw-r--r--net/tipc/server.h94
-rw-r--r--net/tipc/socket.c146
-rw-r--r--net/tipc/subscr.c348
-rw-r--r--net/tipc/subscr.h21
-rw-r--r--net/tipc/sysctl.c64
-rw-r--r--net/unix/sysctl_net_unix.c2
-rw-r--r--net/vmw_vsock/af_vsock.c55
-rw-r--r--net/vmw_vsock/vmci_transport.c18
-rw-r--r--net/wireless/chan.c57
-rw-r--r--net/wireless/core.c270
-rw-r--r--net/wireless/core.h123
-rw-r--r--net/wireless/debugfs.c4
-rw-r--r--net/wireless/ibss.c16
-rw-r--r--net/wireless/mesh.c15
-rw-r--r--net/wireless/mlme.c433
-rw-r--r--net/wireless/nl80211.c814
-rw-r--r--net/wireless/reg.c138
-rw-r--r--net/wireless/scan.c51
-rw-r--r--net/wireless/sme.c652
-rw-r--r--net/wireless/sysfs.c8
-rw-r--r--net/wireless/trace.h46
-rw-r--r--net/wireless/util.c39
-rw-r--r--net/wireless/wext-compat.c22
-rw-r--r--net/wireless/wext-sme.c49
-rw-r--r--net/x25/af_x25.c17
-rw-r--r--net/xfrm/xfrm_input.c5
-rw-r--r--net/xfrm/xfrm_output.c9
-rw-r--r--net/xfrm/xfrm_policy.c2
-rw-r--r--net/xfrm/xfrm_proc.c1
331 files changed, 11095 insertions, 7956 deletions
diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
index 9424f3718ea7..2fb2d88e8c2e 100644
--- a/net/8021q/vlan.c
+++ b/net/8021q/vlan.c
@@ -341,7 +341,7 @@ static void __vlan_device_event(struct net_device *dev, unsigned long event)
 static int vlan_device_event(struct notifier_block *unused, unsigned long event,
 			     void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct vlan_group *grp;
 	struct vlan_info *vlan_info;
 	int i, flgs;
diff --git a/net/Kconfig b/net/Kconfig
index 6dfe1c636a80..37702491abe9 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -219,6 +219,7 @@ source "net/batman-adv/Kconfig"
 source "net/openvswitch/Kconfig"
 source "net/vmw_vsock/Kconfig"
 source "net/netlink/Kconfig"
+source "net/mpls/Kconfig"
 
 config RPS
 	boolean
@@ -243,6 +244,10 @@ config NETPRIO_CGROUP
 	  Cgroup subsystem for use in assigning processes to network priorities on
 	  a per-interface basis
 
+config NET_LL_RX_POLL
+	boolean
+	default y
+
 config BQL
 	boolean
 	depends on SYSFS
@@ -260,6 +265,18 @@ config BPF_JIT
 	  packet sniffing (libpcap/tcpdump). Note : Admin should enable
 	  this feature changing /proc/sys/net/core/bpf_jit_enable
 
+config NET_FLOW_LIMIT
+	boolean
+	depends on RPS
+	default y
+	---help---
+	  The network stack has to drop packets when a receive processing CPU's
+	  backlog reaches netdev_max_backlog. If a few out of many active flows
+	  generate the vast majority of load, drop their traffic earlier to
+	  maintain capacity for the other flows. This feature provides servers
+	  with many clients some protection against DoS by a single (spoofed)
+	  flow that greatly exceeds average workload.
+
 menu "Network testing"
 
 config NET_PKTGEN
diff --git a/net/Makefile b/net/Makefile
index 091e7b04f301..9492e8cb64e9 100644
--- a/net/Makefile
+++ b/net/Makefile
@@ -70,3 +70,4 @@ obj-$(CONFIG_BATMAN_ADV) += batman-adv/
 obj-$(CONFIG_NFC)		+= nfc/
 obj-$(CONFIG_OPENVSWITCH)	+= openvswitch/
 obj-$(CONFIG_VSOCKETS)	+= vmw_vsock/
+obj-$(CONFIG_NET_MPLS_GSO)	+= mpls/
diff --git a/net/appletalk/aarp.c b/net/appletalk/aarp.c
index 173a2e82f486..690356fa52b9 100644
--- a/net/appletalk/aarp.c
+++ b/net/appletalk/aarp.c
@@ -332,7 +332,7 @@ static void aarp_expire_timeout(unsigned long unused)
 static int aarp_device_event(struct notifier_block *this, unsigned long event,
 			     void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	int ct;
 
 	if (!net_eq(dev_net(dev), &init_net))
diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
index ef12839a7cfe..7fee50d637f9 100644
--- a/net/appletalk/ddp.c
+++ b/net/appletalk/ddp.c
@@ -644,7 +644,7 @@ static inline void atalk_dev_down(struct net_device *dev)
 static int ddp_device_event(struct notifier_block *this, unsigned long event,
 			    void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
diff --git a/net/atm/clip.c b/net/atm/clip.c
index 8ae3a7879335..8215f7cb170b 100644
--- a/net/atm/clip.c
+++ b/net/atm/clip.c
@@ -539,9 +539,9 @@ static int clip_create(int number)
 }
 
 static int clip_device_event(struct notifier_block *this, unsigned long event,
-			     void *arg)
+			     void *ptr)
 {
-	struct net_device *dev = arg;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
@@ -575,6 +575,7 @@ static int clip_inet_event(struct notifier_block *this, unsigned long event,
 			   void *ifa)
 {
 	struct in_device *in_dev;
+	struct netdev_notifier_info info;
 
 	in_dev = ((struct in_ifaddr *)ifa)->ifa_dev;
 	/*
@@ -583,7 +584,8 @@ static int clip_inet_event(struct notifier_block *this, unsigned long event,
 	 */
 	if (event != NETDEV_UP)
 		return NOTIFY_DONE;
-	return clip_device_event(this, NETDEV_CHANGE, in_dev->dev);
+	netdev_notifier_info_init(&info, in_dev->dev);
+	return clip_device_event(this, NETDEV_CHANGE, &info);
 }
 
 static struct notifier_block clip_dev_notifier = {
diff --git a/net/atm/mpc.c b/net/atm/mpc.c
index d4cc1be5c364..3af12755cd04 100644
--- a/net/atm/mpc.c
+++ b/net/atm/mpc.c
@@ -998,14 +998,12 @@ int msg_to_mpoad(struct k_message *mesg, struct mpoa_client *mpc)
 }
 
 static int mpoa_event_listener(struct notifier_block *mpoa_notifier,
-			       unsigned long event, void *dev_ptr)
+			       unsigned long event, void *ptr)
 {
-	struct net_device *dev;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct mpoa_client *mpc;
 	struct lec_priv *priv;
 
-	dev = dev_ptr;
-
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
 
diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
index e277e38f736b..4b4d2b779ec1 100644
--- a/net/ax25/af_ax25.c
+++ b/net/ax25/af_ax25.c
@@ -111,9 +111,9 @@ again:
  * Handle device status changes.
  */
 static int ax25_device_event(struct notifier_block *this, unsigned long event,
-			void *ptr)
+			     void *ptr)
 {
-	struct net_device *dev = (struct net_device *)ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
@@ -1974,7 +1974,7 @@ static struct packet_type ax25_packet_type __read_mostly = {
 };
 
 static struct notifier_block ax25_dev_notifier = {
-	.notifier_call =ax25_device_event,
+	.notifier_call = ax25_device_event,
 };
 
 static int __init ax25_init(void)
diff --git a/net/ax25/sysctl_net_ax25.c b/net/ax25/sysctl_net_ax25.c
index d5744b752511..919a5ce47515 100644
--- a/net/ax25/sysctl_net_ax25.c
+++ b/net/ax25/sysctl_net_ax25.c
@@ -29,7 +29,7 @@ static int min_proto[1], max_proto[] = { AX25_PROTO_MAX };
 static int min_ds_timeout[1], max_ds_timeout[] = {65535000};
 #endif
 
-static const ctl_table ax25_param_table[] = {
+static const struct ctl_table ax25_param_table[] = {
 	{
 		.procname	= "ip_default_mode",
 		.maxlen		= sizeof(int),
diff --git a/net/batman-adv/Makefile b/net/batman-adv/Makefile
index acbac2a9c62f..489bb36f1b94 100644
--- a/net/batman-adv/Makefile
+++ b/net/batman-adv/Makefile
@@ -32,7 +32,6 @@ batman-adv-y += icmp_socket.o
 batman-adv-y += main.o
 batman-adv-$(CONFIG_BATMAN_ADV_NC) += network-coding.o
 batman-adv-y += originator.o
-batman-adv-y += ring_buffer.o
 batman-adv-y += routing.o
 batman-adv-y += send.o
 batman-adv-y += soft-interface.o
diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
index f680ee101878..62da5278014a 100644
--- a/net/batman-adv/bat_iv_ogm.c
+++ b/net/batman-adv/bat_iv_ogm.c
@@ -19,7 +19,6 @@
 
 #include "main.h"
 #include "translation-table.h"
-#include "ring_buffer.h"
 #include "originator.h"
 #include "routing.h"
 #include "gateway_common.h"
@@ -30,6 +29,49 @@
 #include "network-coding.h"
 
 /**
+ * batadv_ring_buffer_set - update the ring buffer with the given value
+ * @lq_recv: pointer to the ring buffer
+ * @lq_index: index to store the value at
+ * @value: value to store in the ring buffer
+ */
+static void batadv_ring_buffer_set(uint8_t lq_recv[], uint8_t *lq_index,
+				   uint8_t value)
+{
+	lq_recv[*lq_index] = value;
+	*lq_index = (*lq_index + 1) % BATADV_TQ_GLOBAL_WINDOW_SIZE;
+}
+
+/**
+ * batadv_ring_buffer_avg - compute the average of all non-zero values stored
+ * in the given ring buffer
+ * @lq_recv: pointer to the ring buffer
+ *
+ * Returns computed average value.
+ */
+static uint8_t batadv_ring_buffer_avg(const uint8_t lq_recv[])
+{
+	const uint8_t *ptr;
+	uint16_t count = 0, i = 0, sum = 0;
+
+	ptr = lq_recv;
+
+	while (i < BATADV_TQ_GLOBAL_WINDOW_SIZE) {
+		if (*ptr != 0) {
+			count++;
+			sum += *ptr;
+		}
+
+		i++;
+		ptr++;
+	}
+
+	if (count == 0)
+		return 0;
+
+	return (uint8_t)(sum / count);
+}
+
+/*
  * batadv_dup_status - duplicate status
  * @BATADV_NO_DUP: the packet is a duplicate
  * @BATADV_ORIG_DUP: OGM is a duplicate in the originator (but not for the
@@ -48,12 +90,11 @@ static struct batadv_neigh_node *
 batadv_iv_ogm_neigh_new(struct batadv_hard_iface *hard_iface,
 			const uint8_t *neigh_addr,
 			struct batadv_orig_node *orig_node,
-			struct batadv_orig_node *orig_neigh, __be32 seqno)
+			struct batadv_orig_node *orig_neigh)
 {
 	struct batadv_neigh_node *neigh_node;
 
-	neigh_node = batadv_neigh_node_new(hard_iface, neigh_addr,
-					   ntohl(seqno));
+	neigh_node = batadv_neigh_node_new(hard_iface, neigh_addr);
 	if (!neigh_node)
 		goto out;
 
@@ -428,18 +469,16 @@ static void batadv_iv_ogm_aggregate_new(const unsigned char *packet_buff,
 	else
 		skb_size = packet_len;
 
-	skb_size += ETH_HLEN + NET_IP_ALIGN;
+	skb_size += ETH_HLEN;
 
-	forw_packet_aggr->skb = dev_alloc_skb(skb_size);
+	forw_packet_aggr->skb = netdev_alloc_skb_ip_align(NULL, skb_size);
 	if (!forw_packet_aggr->skb) {
 		if (!own_packet)
 			atomic_inc(&bat_priv->batman_queue_left);
 		kfree(forw_packet_aggr);
 		goto out;
 	}
-	skb_reserve(forw_packet_aggr->skb, ETH_HLEN + NET_IP_ALIGN);
-
-	INIT_HLIST_NODE(&forw_packet_aggr->list);
 
 	skb_buff = skb_put(forw_packet_aggr->skb, packet_len);
 	forw_packet_aggr->packet_len = packet_len;
@@ -605,6 +644,41 @@ static void batadv_iv_ogm_forward(struct batadv_orig_node *orig_node,
 			   if_incoming, 0, batadv_iv_ogm_fwd_send_time());
 }
 
+/**
+ * batadv_iv_ogm_slide_own_bcast_window - bitshift own OGM broadcast windows for
+ * the given interface
+ * @hard_iface: the interface for which the windows have to be shifted
+ */
+static void
+batadv_iv_ogm_slide_own_bcast_window(struct batadv_hard_iface *hard_iface)
+{
+	struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
+	struct batadv_hashtable *hash = bat_priv->orig_hash;
+	struct hlist_head *head;
+	struct batadv_orig_node *orig_node;
+	unsigned long *word;
+	uint32_t i;
+	size_t word_index;
+	uint8_t *w;
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+
+		rcu_read_lock();
+		hlist_for_each_entry_rcu(orig_node, head, hash_entry) {
+			spin_lock_bh(&orig_node->ogm_cnt_lock);
+			word_index = hard_iface->if_num * BATADV_NUM_WORDS;
+			word = &(orig_node->bcast_own[word_index]);
+
+			batadv_bit_get_packet(bat_priv, word, 1, 0);
+			w = &orig_node->bcast_own_sum[hard_iface->if_num];
+			*w = bitmap_weight(word, BATADV_TQ_LOCAL_WINDOW_SIZE);
+			spin_unlock_bh(&orig_node->ogm_cnt_lock);
+		}
+		rcu_read_unlock();
+	}
+}
+
 static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
 {
 	struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
@@ -649,7 +723,7 @@ static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
 		batadv_ogm_packet->gw_flags = BATADV_NO_FLAGS;
 	}
 
-	batadv_slide_own_bcast_window(hard_iface);
+	batadv_iv_ogm_slide_own_bcast_window(hard_iface);
 	batadv_iv_ogm_queue_add(bat_priv, hard_iface->bat_iv.ogm_buff,
 				hard_iface->bat_iv.ogm_buff_len, hard_iface, 1,
 				batadv_iv_ogm_emit_send_time(bat_priv));
@@ -685,7 +759,7 @@ batadv_iv_ogm_orig_update(struct batadv_priv *bat_priv,
                 if (batadv_compare_eth(neigh_addr, ethhdr->h_source) &&
                     tmp_neigh_node->if_incoming == if_incoming &&
                     atomic_inc_not_zero(&tmp_neigh_node->refcount)) {
-                        if (neigh_node)
+                        if (WARN(neigh_node, "too many matching neigh_nodes"))
                                 batadv_neigh_node_free_ref(neigh_node);
                         neigh_node = tmp_neigh_node;
                         continue;
@@ -711,8 +785,7 @@ batadv_iv_ogm_orig_update(struct batadv_priv *bat_priv,
 
                 neigh_node = batadv_iv_ogm_neigh_new(if_incoming,
                                                      ethhdr->h_source,
-                                                     orig_node, orig_tmp,
-                                                     batadv_ogm_packet->seqno);
+                                                     orig_node, orig_tmp);
 
                 batadv_orig_node_free_ref(orig_tmp);
                 if (!neigh_node)
@@ -844,8 +917,7 @@ static int batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node,
         neigh_node = batadv_iv_ogm_neigh_new(if_incoming,
                                              orig_neigh_node->orig,
                                              orig_neigh_node,
-                                             orig_neigh_node,
-                                             batadv_ogm_packet->seqno);
+                                             orig_neigh_node);
 
         if (!neigh_node)
                 goto out;
@@ -1013,7 +1085,7 @@ static void batadv_iv_ogm_process(const struct ethhdr *ethhdr,
         struct batadv_neigh_node *orig_neigh_router = NULL;
         int has_directlink_flag;
         int is_my_addr = 0, is_my_orig = 0, is_my_oldorig = 0;
-        int is_broadcast = 0, is_bidirect;
+        int is_bidirect;
         bool is_single_hop_neigh = false;
         bool is_from_best_next_hop = false;
         int sameseq, similar_ttl;
@@ -1077,19 +1149,9 @@ static void batadv_iv_ogm_process(const struct ethhdr *ethhdr,
                 if (batadv_compare_eth(batadv_ogm_packet->prev_sender,
                                        hard_iface->net_dev->dev_addr))
                         is_my_oldorig = 1;
-
-                if (is_broadcast_ether_addr(ethhdr->h_source))
-                        is_broadcast = 1;
         }
         rcu_read_unlock();
 
-        if (batadv_ogm_packet->header.version != BATADV_COMPAT_VERSION) {
-                batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
-                           "Drop packet: incompatible batman version (%i)\n",
-                           batadv_ogm_packet->header.version);
-                return;
-        }
-
         if (is_my_addr) {
                 batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
                            "Drop packet: received my own broadcast (sender: %pM)\n",
@@ -1097,13 +1159,6 @@ static void batadv_iv_ogm_process(const struct ethhdr *ethhdr,
                 return;
         }
 
-        if (is_broadcast) {
-                batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
-                           "Drop packet: ignoring all packets with broadcast source addr (sender: %pM)\n",
-                           ethhdr->h_source);
-                return;
-        }
-
         if (is_my_orig) {
                 unsigned long *word;
                 int offset;
@@ -1312,7 +1367,7 @@ static int batadv_iv_ogm_receive(struct sk_buff *skb,
                    skb->len + ETH_HLEN);
 
         packet_len = skb_headlen(skb);
-        ethhdr = (struct ethhdr *)skb_mac_header(skb);
+        ethhdr = eth_hdr(skb);
         packet_buff = skb->data;
         batadv_ogm_packet = (struct batadv_ogm_packet *)packet_buff;
 
diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
index de27b3175cfd..e14531f1ce1c 100644
--- a/net/batman-adv/bridge_loop_avoidance.c
+++ b/net/batman-adv/bridge_loop_avoidance.c
@@ -180,7 +180,7 @@ static struct batadv_bla_claim
  */
 static struct batadv_bla_backbone_gw *
 batadv_backbone_hash_find(struct batadv_priv *bat_priv,
-                          uint8_t *addr, short vid)
+                          uint8_t *addr, unsigned short vid)
 {
         struct batadv_hashtable *hash = bat_priv->bla.backbone_hash;
         struct hlist_head *head;
@@ -257,7 +257,7 @@ batadv_bla_del_backbone_claims(struct batadv_bla_backbone_gw *backbone_gw)
  * @claimtype: the type of the claim (CLAIM, UNCLAIM, ANNOUNCE, ...)
  */
 static void batadv_bla_send_claim(struct batadv_priv *bat_priv, uint8_t *mac,
-                                  short vid, int claimtype)
+                                  unsigned short vid, int claimtype)
 {
         struct sk_buff *skb;
         struct ethhdr *ethhdr;
@@ -307,7 +307,8 @@ static void batadv_bla_send_claim(struct batadv_priv *bat_priv, uint8_t *mac,
                  */
                 memcpy(ethhdr->h_source, mac, ETH_ALEN);
                 batadv_dbg(BATADV_DBG_BLA, bat_priv,
-                           "bla_send_claim(): CLAIM %pM on vid %d\n", mac, vid);
+                           "bla_send_claim(): CLAIM %pM on vid %d\n", mac,
+                           BATADV_PRINT_VID(vid));
                 break;
         case BATADV_CLAIM_TYPE_UNCLAIM:
                 /* unclaim frame
@@ -316,7 +317,7 @@ static void batadv_bla_send_claim(struct batadv_priv *bat_priv, uint8_t *mac,
                 memcpy(hw_src, mac, ETH_ALEN);
                 batadv_dbg(BATADV_DBG_BLA, bat_priv,
                            "bla_send_claim(): UNCLAIM %pM on vid %d\n", mac,
-                           vid);
+                           BATADV_PRINT_VID(vid));
                 break;
         case BATADV_CLAIM_TYPE_ANNOUNCE:
                 /* announcement frame
@@ -325,7 +326,7 @@ static void batadv_bla_send_claim(struct batadv_priv *bat_priv, uint8_t *mac,
                 memcpy(hw_src, mac, ETH_ALEN);
                 batadv_dbg(BATADV_DBG_BLA, bat_priv,
                            "bla_send_claim(): ANNOUNCE of %pM on vid %d\n",
-                           ethhdr->h_source, vid);
+                           ethhdr->h_source, BATADV_PRINT_VID(vid));
                 break;
         case BATADV_CLAIM_TYPE_REQUEST:
                 /* request frame
@@ -335,13 +336,15 @@ static void batadv_bla_send_claim(struct batadv_priv *bat_priv, uint8_t *mac,
                 memcpy(hw_src, mac, ETH_ALEN);
                 memcpy(ethhdr->h_dest, mac, ETH_ALEN);
                 batadv_dbg(BATADV_DBG_BLA, bat_priv,
-                           "bla_send_claim(): REQUEST of %pM to %pMon vid %d\n",
-                           ethhdr->h_source, ethhdr->h_dest, vid);
+                           "bla_send_claim(): REQUEST of %pM to %pM on vid %d\n",
+                           ethhdr->h_source, ethhdr->h_dest,
+                           BATADV_PRINT_VID(vid));
                 break;
         }
 
-        if (vid != -1)
-                skb = vlan_insert_tag(skb, htons(ETH_P_8021Q), vid);
+        if (vid & BATADV_VLAN_HAS_TAG)
+                skb = vlan_insert_tag(skb, htons(ETH_P_8021Q),
+                                      vid & VLAN_VID_MASK);
 
         skb_reset_mac_header(skb);
         skb->protocol = eth_type_trans(skb, soft_iface);
@@ -367,7 +370,7 @@ out:
  */
 static struct batadv_bla_backbone_gw *
 batadv_bla_get_backbone_gw(struct batadv_priv *bat_priv, uint8_t *orig,
-                           short vid, bool own_backbone)
+                           unsigned short vid, bool own_backbone)
 {
         struct batadv_bla_backbone_gw *entry;
         struct batadv_orig_node *orig_node;
@@ -380,7 +383,7 @@ batadv_bla_get_backbone_gw(struct batadv_priv *bat_priv, uint8_t *orig,
 
         batadv_dbg(BATADV_DBG_BLA, bat_priv,
                    "bla_get_backbone_gw(): not found (%pM, %d), creating new entry\n",
-                   orig, vid);
+                   orig, BATADV_PRINT_VID(vid));
 
         entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
         if (!entry)
@@ -434,7 +437,7 @@ batadv_bla_get_backbone_gw(struct batadv_priv *bat_priv, uint8_t *orig,
 static void
 batadv_bla_update_own_backbone_gw(struct batadv_priv *bat_priv,
                                   struct batadv_hard_iface *primary_if,
-                                  short vid)
+                                  unsigned short vid)
 {
         struct batadv_bla_backbone_gw *backbone_gw;
 
@@ -456,7 +459,7 @@ batadv_bla_update_own_backbone_gw(struct batadv_priv *bat_priv,
  */
 static void batadv_bla_answer_request(struct batadv_priv *bat_priv,
                                       struct batadv_hard_iface *primary_if,
-                                      short vid)
+                                      unsigned short vid)
 {
         struct hlist_head *head;
         struct batadv_hashtable *hash;
@@ -547,7 +550,7 @@ static void batadv_bla_send_announce(struct batadv_priv *bat_priv,
  * @backbone_gw: the backbone gateway which claims it
  */
 static void batadv_bla_add_claim(struct batadv_priv *bat_priv,
-                                 const uint8_t *mac, const short vid,
+                                 const uint8_t *mac, const unsigned short vid,
                                  struct batadv_bla_backbone_gw *backbone_gw)
 {
         struct batadv_bla_claim *claim;
@@ -572,7 +575,7 @@ static void batadv_bla_add_claim(struct batadv_priv *bat_priv,
         atomic_set(&claim->refcount, 2);
         batadv_dbg(BATADV_DBG_BLA, bat_priv,
                    "bla_add_claim(): adding new entry %pM, vid %d to hash ...\n",
-                   mac, vid);
+                   mac, BATADV_PRINT_VID(vid));
         hash_added = batadv_hash_add(bat_priv->bla.claim_hash,
                                      batadv_compare_claim,
                                      batadv_choose_claim, claim,
@@ -591,7 +594,7 @@ static void batadv_bla_add_claim(struct batadv_priv *bat_priv,
 
         batadv_dbg(BATADV_DBG_BLA, bat_priv,
                    "bla_add_claim(): changing ownership for %pM, vid %d\n",
-                   mac, vid);
+                   mac, BATADV_PRINT_VID(vid));
 
         claim->backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
         batadv_backbone_gw_free_ref(claim->backbone_gw);
@@ -611,7 +614,7 @@ claim_free_ref:
  * given mac address and vid.
  */
 static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
-                                 const uint8_t *mac, const short vid)
+                                 const uint8_t *mac, const unsigned short vid)
 {
         struct batadv_bla_claim search_claim, *claim;
 
@@ -622,7 +625,7 @@ static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
                 return;
 
         batadv_dbg(BATADV_DBG_BLA, bat_priv, "bla_del_claim(): %pM, vid %d\n",
-                   mac, vid);
+                   mac, BATADV_PRINT_VID(vid));
 
         batadv_hash_remove(bat_priv->bla.claim_hash, batadv_compare_claim,
                            batadv_choose_claim, claim);
@@ -637,7 +640,7 @@ static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
 /* check for ANNOUNCE frame, return 1 if handled */
 static int batadv_handle_announce(struct batadv_priv *bat_priv,
                                   uint8_t *an_addr, uint8_t *backbone_addr,
-                                  short vid)
+                                  unsigned short vid)
 {
         struct batadv_bla_backbone_gw *backbone_gw;
         uint16_t crc;
@@ -658,12 +661,13 @@ static int batadv_handle_announce(struct batadv_priv *bat_priv,
 
         batadv_dbg(BATADV_DBG_BLA, bat_priv,
                    "handle_announce(): ANNOUNCE vid %d (sent by %pM)... CRC = %#.4x\n",
-                   vid, backbone_gw->orig, crc);
+                   BATADV_PRINT_VID(vid), backbone_gw->orig, crc);
 
         if (backbone_gw->crc != crc) {
                 batadv_dbg(BATADV_DBG_BLA, backbone_gw->bat_priv,
                            "handle_announce(): CRC FAILED for %pM/%d (my = %#.4x, sent = %#.4x)\n",
-                           backbone_gw->orig, backbone_gw->vid,
+                           backbone_gw->orig,
+                           BATADV_PRINT_VID(backbone_gw->vid),
                            backbone_gw->crc, crc);
 
                 batadv_bla_send_request(backbone_gw);
@@ -685,7 +689,7 @@ static int batadv_handle_announce(struct batadv_priv *bat_priv,
 static int batadv_handle_request(struct batadv_priv *bat_priv,
                                  struct batadv_hard_iface *primary_if,
                                  uint8_t *backbone_addr,
-                                 struct ethhdr *ethhdr, short vid)
+                                 struct ethhdr *ethhdr, unsigned short vid)
 {
         /* check for REQUEST frame */
         if (!batadv_compare_eth(backbone_addr, ethhdr->h_dest))
@@ -699,7 +703,7 @@ static int batadv_handle_request(struct batadv_priv *bat_priv,
 
         batadv_dbg(BATADV_DBG_BLA, bat_priv,
                    "handle_request(): REQUEST vid %d (sent by %pM)...\n",
-                   vid, ethhdr->h_source);
+                   BATADV_PRINT_VID(vid), ethhdr->h_source);
 
         batadv_bla_answer_request(bat_priv, primary_if, vid);
         return 1;
@@ -709,7 +713,7 @@ static int batadv_handle_request(struct batadv_priv *bat_priv,
 static int batadv_handle_unclaim(struct batadv_priv *bat_priv,
                                  struct batadv_hard_iface *primary_if,
                                  uint8_t *backbone_addr,
-                                 uint8_t *claim_addr, short vid)
+                                 uint8_t *claim_addr, unsigned short vid)
 {
         struct batadv_bla_backbone_gw *backbone_gw;
 
@@ -727,7 +731,7 @@ static int batadv_handle_unclaim(struct batadv_priv *bat_priv,
         /* this must be an UNCLAIM frame */
         batadv_dbg(BATADV_DBG_BLA, bat_priv,
                    "handle_unclaim(): UNCLAIM %pM on vid %d (sent by %pM)...\n",
-                   claim_addr, vid, backbone_gw->orig);
+                   claim_addr, BATADV_PRINT_VID(vid), backbone_gw->orig);
 
         batadv_bla_del_claim(bat_priv, claim_addr, vid);
         batadv_backbone_gw_free_ref(backbone_gw);
@@ -738,7 +742,7 @@ static int batadv_handle_unclaim(struct batadv_priv *bat_priv,
 static int batadv_handle_claim(struct batadv_priv *bat_priv,
                                struct batadv_hard_iface *primary_if,
                                uint8_t *backbone_addr, uint8_t *claim_addr,
-                               short vid)
+                               unsigned short vid)
 {
         struct batadv_bla_backbone_gw *backbone_gw;
 
@@ -861,14 +865,15 @@ static int batadv_bla_process_claim(struct batadv_priv *bat_priv,
         struct batadv_bla_claim_dst *bla_dst;
         uint16_t proto;
         int headlen;
-        short vid = -1;
+        unsigned short vid = BATADV_NO_FLAGS;
         int ret;
 
-        ethhdr = (struct ethhdr *)skb_mac_header(skb);
+        ethhdr = eth_hdr(skb);
 
         if (ntohs(ethhdr->h_proto) == ETH_P_8021Q) {
                 vhdr = (struct vlan_ethhdr *)ethhdr;
                 vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK;
+                vid |= BATADV_VLAN_HAS_TAG;
                 proto = ntohs(vhdr->h_vlan_encapsulated_proto);
                 headlen = sizeof(*vhdr);
         } else {
@@ -885,7 +890,7 @@ static int batadv_bla_process_claim(struct batadv_priv *bat_priv,
                 return 0;
 
         /* pskb_may_pull() may have modified the pointers, get ethhdr again */
-        ethhdr = (struct ethhdr *)skb_mac_header(skb);
+        ethhdr = eth_hdr(skb);
         arphdr = (struct arphdr *)((uint8_t *)ethhdr + headlen);
 
         /* Check whether the ARP frame carries a valid
@@ -910,7 +915,8 @@ static int batadv_bla_process_claim(struct batadv_priv *bat_priv,
         if (ret == 1)
                 batadv_dbg(BATADV_DBG_BLA, bat_priv,
                            "bla_process_claim(): received a claim frame from another group. From: %pM on vid %d ...(hw_src %pM, hw_dst %pM)\n",
-                           ethhdr->h_source, vid, hw_src, hw_dst);
+                           ethhdr->h_source, BATADV_PRINT_VID(vid), hw_src,
+                           hw_dst);
 
         if (ret < 2)
                 return ret;
@@ -945,7 +951,7 @@ static int batadv_bla_process_claim(struct batadv_priv *bat_priv,
 
         batadv_dbg(BATADV_DBG_BLA, bat_priv,
                    "bla_process_claim(): ERROR - this looks like a claim frame, but is useless. eth src %pM on vid %d ...(hw_src %pM, hw_dst %pM)\n",
-                   ethhdr->h_source, vid, hw_src, hw_dst);
+                   ethhdr->h_source, BATADV_PRINT_VID(vid), hw_src, hw_dst);
         return 1;
 }
 
@@ -1362,7 +1368,7 @@ int batadv_bla_is_backbone_gw(struct sk_buff *skb,
         struct ethhdr *ethhdr;
         struct vlan_ethhdr *vhdr;
         struct batadv_bla_backbone_gw *backbone_gw;
-        short vid = -1;
+        unsigned short vid = BATADV_NO_FLAGS;
 
         if (!atomic_read(&orig_node->bat_priv->bridge_loop_avoidance))
                 return 0;
@@ -1379,6 +1385,7 @@ int batadv_bla_is_backbone_gw(struct sk_buff *skb,
 
                 vhdr = (struct vlan_ethhdr *)(skb->data + hdr_size);
                 vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK;
+                vid |= BATADV_VLAN_HAS_TAG;
         }
 
         /* see if this originator is a backbone gw for this VLAN */
@@ -1428,15 +1435,15 @@ void batadv_bla_free(struct batadv_priv *bat_priv)
  * returns 1, otherwise it returns 0 and the caller shall further
  * process the skb.
  */
-int batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid,
-                  bool is_bcast)
+int batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+                  unsigned short vid, bool is_bcast)
 {
         struct ethhdr *ethhdr;
         struct batadv_bla_claim search_claim, *claim = NULL;
         struct batadv_hard_iface *primary_if;
         int ret;
 
-        ethhdr = (struct ethhdr *)skb_mac_header(skb);
+        ethhdr = eth_hdr(skb);
 
         primary_if = batadv_primary_if_get_selected(bat_priv);
         if (!primary_if)
@@ -1523,7 +1530,8 @@ out:
  * returns 1, otherwise it returns 0 and the caller shall further
  * process the skb.
  */
-int batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid)
+int batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+                  unsigned short vid)
 {
         struct ethhdr *ethhdr;
         struct batadv_bla_claim search_claim, *claim = NULL;
@@ -1543,7 +1551,7 @@ int batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid)
         if (batadv_bla_process_claim(bat_priv, primary_if, skb))
                 goto handled;
 
-        ethhdr = (struct ethhdr *)skb_mac_header(skb);
+        ethhdr = eth_hdr(skb);
 
         if (unlikely(atomic_read(&bat_priv->bla.num_requests)))
                 /* don't allow broadcasts while requests are in flight */
@@ -1627,8 +1635,8 @@ int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, void *offset)
                 hlist_for_each_entry_rcu(claim, head, hash_entry) {
                         is_own = batadv_compare_eth(claim->backbone_gw->orig,
                                                     primary_addr);
-                        seq_printf(seq, " * %pM on % 5d by %pM [%c] (%#.4x)\n",
-                                   claim->addr, claim->vid,
+                        seq_printf(seq, " * %pM on %5d by %pM [%c] (%#.4x)\n",
+                                   claim->addr, BATADV_PRINT_VID(claim->vid),
                                    claim->backbone_gw->orig,
                                    (is_own ? 'x' : ' '),
                                    claim->backbone_gw->crc);
@@ -1680,10 +1688,10 @@ int batadv_bla_backbone_table_seq_print_text(struct seq_file *seq, void *offset)
                         if (is_own)
                                 continue;
 
-                        seq_printf(seq,
-                                   " * %pM on % 5d % 4i.%03is (%#.4x)\n",
-                                   backbone_gw->orig, backbone_gw->vid,
-                                   secs, msecs, backbone_gw->crc);
+                        seq_printf(seq, " * %pM on %5d %4i.%03is (%#.4x)\n",
+                                   backbone_gw->orig,
+                                   BATADV_PRINT_VID(backbone_gw->vid), secs,
+                                   msecs, backbone_gw->crc);
                 }
                 rcu_read_unlock();
         }
diff --git a/net/batman-adv/bridge_loop_avoidance.h b/net/batman-adv/bridge_loop_avoidance.h
index dea2fbc5d98d..4b102e71e5bd 100644
--- a/net/batman-adv/bridge_loop_avoidance.h
+++ b/net/batman-adv/bridge_loop_avoidance.h
@@ -21,9 +21,10 @@
 #define _NET_BATMAN_ADV_BLA_H_
 
 #ifdef CONFIG_BATMAN_ADV_BLA
-int batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid,
-                  bool is_bcast);
-int batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid);
+int batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+                  unsigned short vid, bool is_bcast);
+int batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+                  unsigned short vid);
 int batadv_bla_is_backbone_gw(struct sk_buff *skb,
                               struct batadv_orig_node *orig_node, int hdr_size);
 int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, void *offset);
@@ -42,13 +43,14 @@ void batadv_bla_free(struct batadv_priv *bat_priv);
 #else /* ifdef CONFIG_BATMAN_ADV_BLA */
 
 static inline int batadv_bla_rx(struct batadv_priv *bat_priv,
-                                struct sk_buff *skb, short vid, bool is_bcast)
+                                struct sk_buff *skb, unsigned short vid,
+                                bool is_bcast)
 {
         return 0;
 }
 
 static inline int batadv_bla_tx(struct batadv_priv *bat_priv,
-                                struct sk_buff *skb, short vid)
+                                struct sk_buff *skb, unsigned short vid)
 {
         return 0;
 }
diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
index 239992021b1d..06345d401588 100644
--- a/net/batman-adv/distributed-arp-table.c
+++ b/net/batman-adv/distributed-arp-table.c
@@ -45,9 +45,9 @@ static void batadv_dat_start_timer(struct batadv_priv *bat_priv)
 }
 
 /**
- * batadv_dat_entry_free_ref - decrements the dat_entry refcounter and possibly
+ * batadv_dat_entry_free_ref - decrement the dat_entry refcounter and possibly
  * free it
- * @dat_entry: the oentry to free
+ * @dat_entry: the entry to free
  */
 static void batadv_dat_entry_free_ref(struct batadv_dat_entry *dat_entry)
 {
@@ -56,10 +56,10 @@ static void batadv_dat_entry_free_ref(struct batadv_dat_entry *dat_entry)
 }
 
 /**
- * batadv_dat_to_purge - checks whether a dat_entry has to be purged or not
+ * batadv_dat_to_purge - check whether a dat_entry has to be purged or not
  * @dat_entry: the entry to check
  *
- * Returns true if the entry has to be purged now, false otherwise
+ * Returns true if the entry has to be purged now, false otherwise.
  */
 static bool batadv_dat_to_purge(struct batadv_dat_entry *dat_entry)
 {
@@ -75,8 +75,8 @@ static bool batadv_dat_to_purge(struct batadv_dat_entry *dat_entry)
  * returns a boolean value: true is the entry has to be deleted,
  * false otherwise
 *
- * Loops over each entry in the DAT local storage and delete it if and only if
- * the to_purge function passed as argument returns true
+ * Loops over each entry in the DAT local storage and deletes it if and only if
+ * the to_purge function passed as argument returns true.
  */
 static void __batadv_dat_purge(struct batadv_priv *bat_priv,
                                bool (*to_purge)(struct batadv_dat_entry *))
@@ -97,7 +97,7 @@ static void __batadv_dat_purge(struct batadv_priv *bat_priv,
         spin_lock_bh(list_lock);
         hlist_for_each_entry_safe(dat_entry, node_tmp, head,
                                   hash_entry) {
-                /* if an helper function has been passed as parameter,
+                /* if a helper function has been passed as parameter,
                  * ask it if the entry has to be purged or not
                  */
                 if (to_purge && !to_purge(dat_entry))
@@ -134,7 +134,7 @@ static void batadv_dat_purge(struct work_struct *work)
  * @node: node in the local table
  * @data2: second object to compare the node to
  *
- * Returns 1 if the two entry are the same, 0 otherwise
+ * Returns 1 if the two entries are the same, 0 otherwise.
  */
 static int batadv_compare_dat(const struct hlist_node *node, const void *data2)
 {
@@ -149,7 +149,7 @@ static int batadv_compare_dat(const struct hlist_node *node, const void *data2)
  * @skb: ARP packet
  * @hdr_size: size of the possible header before the ARP packet
  *
- * Returns the value of the hw_src field in the ARP packet
+ * Returns the value of the hw_src field in the ARP packet.
  */
 static uint8_t *batadv_arp_hw_src(struct sk_buff *skb, int hdr_size)
 {
@@ -166,7 +166,7 @@ static uint8_t *batadv_arp_hw_src(struct sk_buff *skb, int hdr_size)
  * @skb: ARP packet
  * @hdr_size: size of the possible header before the ARP packet
  *
- * Returns the value of the ip_src field in the ARP packet
+ * Returns the value of the ip_src field in the ARP packet.
  */
 static __be32 batadv_arp_ip_src(struct sk_buff *skb, int hdr_size)
 {
@@ -178,7 +178,7 @@ static __be32 batadv_arp_ip_src(struct sk_buff *skb, int hdr_size)
  * @skb: ARP packet
  * @hdr_size: size of the possible header before the ARP packet
  *
- * Returns the value of the hw_dst field in the ARP packet
+ * Returns the value of the hw_dst field in the ARP packet.
  */
 static uint8_t *batadv_arp_hw_dst(struct sk_buff *skb, int hdr_size)
 {
@@ -190,7 +190,7 @@ static uint8_t *batadv_arp_hw_dst(struct sk_buff *skb, int hdr_size)
  * @skb: ARP packet
  * @hdr_size: size of the possible header before the ARP packet
  *
- * Returns the value of the ip_dst field in the ARP packet
+ * Returns the value of the ip_dst field in the ARP packet.
  */
 static __be32 batadv_arp_ip_dst(struct sk_buff *skb, int hdr_size)
 {
@@ -202,7 +202,7 @@ static __be32 batadv_arp_ip_dst(struct sk_buff *skb, int hdr_size)
  * @data: data to hash
  * @size: size of the hash table
  *
- * Returns the selected index in the hash table for the given data
+ * Returns the selected index in the hash table for the given data.
  */
 static uint32_t batadv_hash_dat(const void *data, uint32_t size)
 {
@@ -224,12 +224,12 @@ static uint32_t batadv_hash_dat(const void *data, uint32_t size)
 }
 
 /**
- * batadv_dat_entry_hash_find - looks for a given dat_entry in the local hash
+ * batadv_dat_entry_hash_find - look for a given dat_entry in the local hash
  * table
  * @bat_priv: the bat priv with all the soft interface information
  * @ip: search key
  *
- * Returns the dat_entry if found, NULL otherwise
+ * Returns the dat_entry if found, NULL otherwise.
  */
 static struct batadv_dat_entry *
 batadv_dat_entry_hash_find(struct batadv_priv *bat_priv, __be32 ip)
@@ -343,9 +343,6 @@ static void batadv_dbg_arp(struct batadv_priv *bat_priv, struct sk_buff *skb,
343 if (hdr_size == 0) 343 if (hdr_size == 0)
344 return; 344 return;
345 345
346 /* if the ARP packet is encapsulated in a batman packet, let's print
347 * some debug messages
348 */
349 unicast_4addr_packet = (struct batadv_unicast_4addr_packet *)skb->data; 346 unicast_4addr_packet = (struct batadv_unicast_4addr_packet *)skb->data;
350 347
351 switch (unicast_4addr_packet->u.header.packet_type) { 348 switch (unicast_4addr_packet->u.header.packet_type) {
@@ -409,7 +406,8 @@ static void batadv_dbg_arp(struct batadv_priv *bat_priv, struct sk_buff *skb,
409 * @candidate: orig_node under evaluation 406 * @candidate: orig_node under evaluation
410 * @max_orig_node: last selected candidate 407 * @max_orig_node: last selected candidate
411 * 408 *
412 * Returns true if the node has been elected as next candidate or false othrwise 409 * Returns true if the node has been elected as next candidate or false
410 * otherwise.
413 */ 411 */
414static bool batadv_is_orig_node_eligible(struct batadv_dat_candidate *res, 412static bool batadv_is_orig_node_eligible(struct batadv_dat_candidate *res,
415 int select, batadv_dat_addr_t tmp_max, 413 int select, batadv_dat_addr_t tmp_max,
@@ -472,7 +470,7 @@ static void batadv_choose_next_candidate(struct batadv_priv *bat_priv,
472 */ 470 */
473 cands[select].type = BATADV_DAT_CANDIDATE_NOT_FOUND; 471 cands[select].type = BATADV_DAT_CANDIDATE_NOT_FOUND;
474 472
475 /* iterate over the originator list and find the node with closest 473 /* iterate over the originator list and find the node with the closest
476 * dat_address which has not been selected yet 474 * dat_address which has not been selected yet
477 */ 475 */
478 for (i = 0; i < hash->size; i++) { 476 for (i = 0; i < hash->size; i++) {
@@ -480,7 +478,7 @@ static void batadv_choose_next_candidate(struct batadv_priv *bat_priv,
480 478
481 rcu_read_lock(); 479 rcu_read_lock();
482 hlist_for_each_entry_rcu(orig_node, head, hash_entry) { 480 hlist_for_each_entry_rcu(orig_node, head, hash_entry) {
483 /* the dht space is a ring and addresses are unsigned */ 481 /* the dht space is a ring using unsigned addresses */
484 tmp_max = BATADV_DAT_ADDR_MAX - orig_node->dat_addr + 482 tmp_max = BATADV_DAT_ADDR_MAX - orig_node->dat_addr +
485 ip_key; 483 ip_key;
486 484
@@ -512,7 +510,7 @@ static void batadv_choose_next_candidate(struct batadv_priv *bat_priv,
512} 510}
513 511
514/** 512/**
515 * batadv_dat_select_candidates - selects the nodes which the DHT message has to 513 * batadv_dat_select_candidates - select the nodes which the DHT message has to
516 * be sent to 514 * be sent to
517 * @bat_priv: the bat priv with all the soft interface information 515 * @bat_priv: the bat priv with all the soft interface information
518 * @ip_dst: ipv4 to look up in the DHT 516 * @ip_dst: ipv4 to look up in the DHT
@@ -521,7 +519,7 @@ static void batadv_choose_next_candidate(struct batadv_priv *bat_priv,
521 * closest values (from the LEFT, with wrap around if needed) then the hash 519 * closest values (from the LEFT, with wrap around if needed) then the hash
522 * value of the key. ip_dst is the key. 520 * value of the key. ip_dst is the key.
523 * 521 *
524 * Returns the candidate array of size BATADV_DAT_CANDIDATE_NUM 522 * Returns the candidate array of size BATADV_DAT_CANDIDATE_NUM.
525 */ 523 */
526static struct batadv_dat_candidate * 524static struct batadv_dat_candidate *
527batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst) 525batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
@@ -558,10 +556,11 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
558 * @ip: the DHT key 556 * @ip: the DHT key
559 * @packet_subtype: unicast4addr packet subtype to use 557 * @packet_subtype: unicast4addr packet subtype to use
560 * 558 *
561 * In this function the skb is copied by means of pskb_copy() and is sent as 559 * This function copies the skb with pskb_copy() and sends it as a unicast
562 * unicast packet to each of the selected candidates 560 * packet to each of the selected candidates.
563 * 561 *
564 * Returns true if the packet is sent to at least one candidate, false otherwise 562 * Returns true if the packet is sent to at least one candidate, false
563 * otherwise.
565 */ 564 */
566static bool batadv_dat_send_data(struct batadv_priv *bat_priv, 565static bool batadv_dat_send_data(struct batadv_priv *bat_priv,
567 struct sk_buff *skb, __be32 ip, 566 struct sk_buff *skb, __be32 ip,
@@ -727,7 +726,7 @@ out:
727 * @skb: packet to analyse 726 * @skb: packet to analyse
728 * @hdr_size: size of the possible header before the ARP packet in the skb 727 * @hdr_size: size of the possible header before the ARP packet in the skb
729 * 728 *
730 * Returns the ARP type if the skb contains a valid ARP packet, 0 otherwise 729 * Returns the ARP type if the skb contains a valid ARP packet, 0 otherwise.
731 */ 730 */
732static uint16_t batadv_arp_get_type(struct batadv_priv *bat_priv, 731static uint16_t batadv_arp_get_type(struct batadv_priv *bat_priv,
733 struct sk_buff *skb, int hdr_size) 732 struct sk_buff *skb, int hdr_size)
@@ -754,9 +753,7 @@ static uint16_t batadv_arp_get_type(struct batadv_priv *bat_priv,
754 753
755 arphdr = (struct arphdr *)(skb->data + hdr_size + ETH_HLEN); 754 arphdr = (struct arphdr *)(skb->data + hdr_size + ETH_HLEN);
756 755
757 /* Check whether the ARP packet carries a valid 756 /* check whether the ARP packet carries a valid IP information */
758 * IP information
759 */
760 if (arphdr->ar_hrd != htons(ARPHRD_ETHER)) 757 if (arphdr->ar_hrd != htons(ARPHRD_ETHER))
761 goto out; 758 goto out;
762 759
@@ -784,7 +781,7 @@ static uint16_t batadv_arp_get_type(struct batadv_priv *bat_priv,
784 if (is_zero_ether_addr(hw_src) || is_multicast_ether_addr(hw_src)) 781 if (is_zero_ether_addr(hw_src) || is_multicast_ether_addr(hw_src))
785 goto out; 782 goto out;
786 783
787 /* we don't care about the destination MAC address in ARP requests */ 784 /* don't care about the destination MAC address in ARP requests */
788 if (arphdr->ar_op != htons(ARPOP_REQUEST)) { 785 if (arphdr->ar_op != htons(ARPOP_REQUEST)) {
789 hw_dst = batadv_arp_hw_dst(skb, hdr_size); 786 hw_dst = batadv_arp_hw_dst(skb, hdr_size);
790 if (is_zero_ether_addr(hw_dst) || 787 if (is_zero_ether_addr(hw_dst) ||
@@ -804,8 +801,8 @@ out:
804 * @skb: packet to check 801 * @skb: packet to check
805 * 802 *
806 * Returns true if the message has been sent to the dht candidates, false 803 * Returns true if the message has been sent to the dht candidates, false
807 * otherwise. In case of true the message has to be enqueued to permit the 804 * otherwise. In case of a positive return value the message has to be enqueued
808 * fallback 805 * to permit the fallback.
809 */ 806 */
810bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv, 807bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv,
811 struct sk_buff *skb) 808 struct sk_buff *skb)
@@ -867,7 +864,7 @@ bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv,
867 batadv_dbg(BATADV_DBG_DAT, bat_priv, "ARP request replied locally\n"); 864 batadv_dbg(BATADV_DBG_DAT, bat_priv, "ARP request replied locally\n");
868 ret = true; 865 ret = true;
869 } else { 866 } else {
870 /* Send the request on the DHT */ 867 /* Send the request to the DHT */
871 ret = batadv_dat_send_data(bat_priv, skb, ip_dst, 868 ret = batadv_dat_send_data(bat_priv, skb, ip_dst,
872 BATADV_P_DAT_DHT_GET); 869 BATADV_P_DAT_DHT_GET);
873 } 870 }
@@ -884,7 +881,7 @@ out:
884 * @skb: packet to check 881 * @skb: packet to check
885 * @hdr_size: size of the encapsulation header 882 * @hdr_size: size of the encapsulation header
886 * 883 *
887 * Returns true if the request has been answered, false otherwise 884 * Returns true if the request has been answered, false otherwise.
888 */ 885 */
889bool batadv_dat_snoop_incoming_arp_request(struct batadv_priv *bat_priv, 886bool batadv_dat_snoop_incoming_arp_request(struct batadv_priv *bat_priv,
890 struct sk_buff *skb, int hdr_size) 887 struct sk_buff *skb, int hdr_size)
@@ -924,10 +921,9 @@ bool batadv_dat_snoop_incoming_arp_request(struct batadv_priv *bat_priv,
924 if (!skb_new) 921 if (!skb_new)
925 goto out; 922 goto out;
926 923
927 /* to preserve backwards compatibility, here the node has to answer 924 /* To preserve backwards compatibility, the node has to choose the outgoing
928 * using the same packet type it received for the request. This is due 925 * format based on the incoming request packet type. The assumption is
929 * to that if a node is not using the 4addr packet format it may not 926 * that a node not using the 4addr packet format doesn't support it.
930 * support it.
931 */ 927 */
932 if (hdr_size == sizeof(struct batadv_unicast_4addr_packet)) 928 if (hdr_size == sizeof(struct batadv_unicast_4addr_packet))
933 err = batadv_unicast_4addr_send_skb(bat_priv, skb_new, 929 err = batadv_unicast_4addr_send_skb(bat_priv, skb_new,
@@ -977,7 +973,7 @@ void batadv_dat_snoop_outgoing_arp_reply(struct batadv_priv *bat_priv,
977 batadv_dat_entry_add(bat_priv, ip_dst, hw_dst); 973 batadv_dat_entry_add(bat_priv, ip_dst, hw_dst);
978 974
979 /* Send the ARP reply to the candidates for both the IP addresses that 975 /* Send the ARP reply to the candidates for both the IP addresses that
980 * the node got within the ARP reply 976 * the node obtained from the ARP reply
981 */ 977 */
982 batadv_dat_send_data(bat_priv, skb, ip_src, BATADV_P_DAT_DHT_PUT); 978 batadv_dat_send_data(bat_priv, skb, ip_src, BATADV_P_DAT_DHT_PUT);
983 batadv_dat_send_data(bat_priv, skb, ip_dst, BATADV_P_DAT_DHT_PUT); 979 batadv_dat_send_data(bat_priv, skb, ip_dst, BATADV_P_DAT_DHT_PUT);
@@ -987,7 +983,7 @@ void batadv_dat_snoop_outgoing_arp_reply(struct batadv_priv *bat_priv,
987 * DAT storage only 983 * DAT storage only
988 * @bat_priv: the bat priv with all the soft interface information 984 * @bat_priv: the bat priv with all the soft interface information
989 * @skb: packet to check 985 * @skb: packet to check
990 * @hdr_size: siaze of the encapsulation header 986 * @hdr_size: size of the encapsulation header
991 */ 987 */
992bool batadv_dat_snoop_incoming_arp_reply(struct batadv_priv *bat_priv, 988bool batadv_dat_snoop_incoming_arp_reply(struct batadv_priv *bat_priv,
993 struct sk_buff *skb, int hdr_size) 989 struct sk_buff *skb, int hdr_size)
@@ -1031,11 +1027,11 @@ out:
1031 1027
1032/** 1028/**
1033 * batadv_dat_drop_broadcast_packet - check if an ARP request has to be dropped 1029 * batadv_dat_drop_broadcast_packet - check if an ARP request has to be dropped
1034 * (because the node has already got the reply via DAT) or not 1030 * (because the node has already obtained the reply via DAT) or not
1035 * @bat_priv: the bat priv with all the soft interface information 1031 * @bat_priv: the bat priv with all the soft interface information
1036 * @forw_packet: the broadcast packet 1032 * @forw_packet: the broadcast packet
1037 * 1033 *
1038 * Returns true if the node can drop the packet, false otherwise 1034 * Returns true if the node can drop the packet, false otherwise.
1039 */ 1035 */
1040bool batadv_dat_drop_broadcast_packet(struct batadv_priv *bat_priv, 1036bool batadv_dat_drop_broadcast_packet(struct batadv_priv *bat_priv,
1041 struct batadv_forw_packet *forw_packet) 1037 struct batadv_forw_packet *forw_packet)
diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
index 522243aff2f3..c478e6bcf89b 100644
--- a/net/batman-adv/hard-interface.c
+++ b/net/batman-adv/hard-interface.c
@@ -117,6 +117,58 @@ static int batadv_is_valid_iface(const struct net_device *net_dev)
117 return 1; 117 return 1;
118} 118}
119 119
120/**
121 * batadv_is_wifi_netdev - check if the given net_device struct is a wifi
122 * interface
123 * @net_device: the device to check
124 *
125 * Returns true if the net device is a 802.11 wireless device, false otherwise.
126 */
127static bool batadv_is_wifi_netdev(struct net_device *net_device)
128{
129#ifdef CONFIG_WIRELESS_EXT
130 /* pre-cfg80211 drivers have to implement WEXT, so it is possible to
131 * check for wireless_handlers != NULL
132 */
133 if (net_device->wireless_handlers)
134 return true;
135#endif
136
137 /* cfg80211 drivers have to set ieee80211_ptr */
138 if (net_device->ieee80211_ptr)
139 return true;
140
141 return false;
142}
143
144/**
145 * batadv_is_wifi_iface - check if the given interface represented by ifindex
146 * is a wifi interface
147 * @ifindex: interface index to check
148 *
149 * Returns true if the interface represented by ifindex is a 802.11 wireless
150 * device, false otherwise.
151 */
152bool batadv_is_wifi_iface(int ifindex)
153{
154 struct net_device *net_device = NULL;
155 bool ret = false;
156
157 if (ifindex == BATADV_NULL_IFINDEX)
158 goto out;
159
160 net_device = dev_get_by_index(&init_net, ifindex);
161 if (!net_device)
162 goto out;
163
164 ret = batadv_is_wifi_netdev(net_device);
165
166out:
167 if (net_device)
168 dev_put(net_device);
169 return ret;
170}
171
120static struct batadv_hard_iface * 172static struct batadv_hard_iface *
121batadv_hardif_get_active(const struct net_device *soft_iface) 173batadv_hardif_get_active(const struct net_device *soft_iface)
122{ 174{
@@ -525,7 +577,7 @@ batadv_hardif_add_interface(struct net_device *net_dev)
525 577
526 dev_hold(net_dev); 578 dev_hold(net_dev);
527 579
528 hard_iface = kmalloc(sizeof(*hard_iface), GFP_ATOMIC); 580 hard_iface = kzalloc(sizeof(*hard_iface), GFP_ATOMIC);
529 if (!hard_iface) 581 if (!hard_iface)
530 goto release_dev; 582 goto release_dev;
531 583
@@ -541,18 +593,16 @@ batadv_hardif_add_interface(struct net_device *net_dev)
541 INIT_WORK(&hard_iface->cleanup_work, 593 INIT_WORK(&hard_iface->cleanup_work,
542 batadv_hardif_remove_interface_finish); 594 batadv_hardif_remove_interface_finish);
543 595
596 hard_iface->num_bcasts = BATADV_NUM_BCASTS_DEFAULT;
597 if (batadv_is_wifi_netdev(net_dev))
598 hard_iface->num_bcasts = BATADV_NUM_BCASTS_WIRELESS;
599
544 /* extra reference for return */ 600 /* extra reference for return */
545 atomic_set(&hard_iface->refcount, 2); 601 atomic_set(&hard_iface->refcount, 2);
546 602
547 batadv_check_known_mac_addr(hard_iface->net_dev); 603 batadv_check_known_mac_addr(hard_iface->net_dev);
548 list_add_tail_rcu(&hard_iface->list, &batadv_hardif_list); 604 list_add_tail_rcu(&hard_iface->list, &batadv_hardif_list);
549 605
550 /* This can't be called via a bat_priv callback because
551 * we have no bat_priv yet.
552 */
553 atomic_set(&hard_iface->bat_iv.ogm_seqno, 1);
554 hard_iface->bat_iv.ogm_buff = NULL;
555
556 return hard_iface; 606 return hard_iface;
557 607
558free_if: 608free_if:
@@ -595,7 +645,7 @@ void batadv_hardif_remove_interfaces(void)
595static int batadv_hard_if_event(struct notifier_block *this, 645static int batadv_hard_if_event(struct notifier_block *this,
596 unsigned long event, void *ptr) 646 unsigned long event, void *ptr)
597{ 647{
598 struct net_device *net_dev = ptr; 648 struct net_device *net_dev = netdev_notifier_info_to_dev(ptr);
599 struct batadv_hard_iface *hard_iface; 649 struct batadv_hard_iface *hard_iface;
600 struct batadv_hard_iface *primary_if = NULL; 650 struct batadv_hard_iface *primary_if = NULL;
601 struct batadv_priv *bat_priv; 651 struct batadv_priv *bat_priv;
@@ -657,38 +707,6 @@ out:
657 return NOTIFY_DONE; 707 return NOTIFY_DONE;
658} 708}
659 709
660/* This function returns true if the interface represented by ifindex is a
661 * 802.11 wireless device
662 */
663bool batadv_is_wifi_iface(int ifindex)
664{
665 struct net_device *net_device = NULL;
666 bool ret = false;
667
668 if (ifindex == BATADV_NULL_IFINDEX)
669 goto out;
670
671 net_device = dev_get_by_index(&init_net, ifindex);
672 if (!net_device)
673 goto out;
674
675#ifdef CONFIG_WIRELESS_EXT
676 /* pre-cfg80211 drivers have to implement WEXT, so it is possible to
677 * check for wireless_handlers != NULL
678 */
679 if (net_device->wireless_handlers)
680 ret = true;
681 else
682#endif
683 /* cfg80211 drivers have to set ieee80211_ptr */
684 if (net_device->ieee80211_ptr)
685 ret = true;
686out:
687 if (net_device)
688 dev_put(net_device);
689 return ret;
690}
691
692struct notifier_block batadv_hard_if_notifier = { 710struct notifier_block batadv_hard_if_notifier = {
693 .notifier_call = batadv_hard_if_event, 711 .notifier_call = batadv_hard_if_event,
694}; 712};
diff --git a/net/batman-adv/icmp_socket.c b/net/batman-adv/icmp_socket.c
index 0ba6c899b2d3..b27508b8085c 100644
--- a/net/batman-adv/icmp_socket.c
+++ b/net/batman-adv/icmp_socket.c
@@ -177,13 +177,13 @@ static ssize_t batadv_socket_write(struct file *file, const char __user *buff,
177 if (len >= sizeof(struct batadv_icmp_packet_rr)) 177 if (len >= sizeof(struct batadv_icmp_packet_rr))
178 packet_len = sizeof(struct batadv_icmp_packet_rr); 178 packet_len = sizeof(struct batadv_icmp_packet_rr);
179 179
180 skb = dev_alloc_skb(packet_len + ETH_HLEN + NET_IP_ALIGN); 180 skb = netdev_alloc_skb_ip_align(NULL, packet_len + ETH_HLEN);
181 if (!skb) { 181 if (!skb) {
182 len = -ENOMEM; 182 len = -ENOMEM;
183 goto out; 183 goto out;
184 } 184 }
185 185
186 skb_reserve(skb, ETH_HLEN + NET_IP_ALIGN); 186 skb_reserve(skb, ETH_HLEN);
187 icmp_packet = (struct batadv_icmp_packet_rr *)skb_put(skb, packet_len); 187 icmp_packet = (struct batadv_icmp_packet_rr *)skb_put(skb, packet_len);
188 188
189 if (copy_from_user(icmp_packet, buff, packet_len)) { 189 if (copy_from_user(icmp_packet, buff, packet_len)) {
diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c
index 51aafd669cbb..08125f3f6064 100644
--- a/net/batman-adv/main.c
+++ b/net/batman-adv/main.c
@@ -473,7 +473,6 @@ __be32 batadv_skb_crc32(struct sk_buff *skb, u8 *payload_ptr)
473 crc = crc32c(crc, data, len); 473 crc = crc32c(crc, data, len);
474 consumed += len; 474 consumed += len;
475 } 475 }
476 skb_abort_seq_read(&st);
477 476
478 return htonl(crc); 477 return htonl(crc);
479} 478}
diff --git a/net/batman-adv/main.h b/net/batman-adv/main.h
index 59a0d6af15c8..5e9aebb7d56b 100644
--- a/net/batman-adv/main.h
+++ b/net/batman-adv/main.h
@@ -26,7 +26,7 @@
26#define BATADV_DRIVER_DEVICE "batman-adv" 26#define BATADV_DRIVER_DEVICE "batman-adv"
27 27
28#ifndef BATADV_SOURCE_VERSION 28#ifndef BATADV_SOURCE_VERSION
29#define BATADV_SOURCE_VERSION "2013.2.0" 29#define BATADV_SOURCE_VERSION "2013.3.0"
30#endif 30#endif
31 31
32/* B.A.T.M.A.N. parameters */ 32/* B.A.T.M.A.N. parameters */
@@ -76,6 +76,11 @@
76 76
77#define BATADV_LOG_BUF_LEN 8192 /* has to be a power of 2 */ 77#define BATADV_LOG_BUF_LEN 8192 /* has to be a power of 2 */
78 78
79/* number of packets to send for broadcasts on different interface types */
80#define BATADV_NUM_BCASTS_DEFAULT 1
81#define BATADV_NUM_BCASTS_WIRELESS 3
82#define BATADV_NUM_BCASTS_MAX 3
83
79/* msecs after which an ARP_REQUEST is sent in broadcast as fallback */ 84/* msecs after which an ARP_REQUEST is sent in broadcast as fallback */
80#define ARP_REQ_DELAY 250 85#define ARP_REQ_DELAY 250
81/* numbers of originator to contact for any PUT/GET DHT operation */ 86/* numbers of originator to contact for any PUT/GET DHT operation */
@@ -157,6 +162,17 @@ enum batadv_uev_type {
157#include <linux/seq_file.h> 162#include <linux/seq_file.h>
158#include "types.h" 163#include "types.h"
159 164
165/**
166 * batadv_vlan_flags - flags for the four MSB of any vlan ID field
167 * @BATADV_VLAN_HAS_TAG: whether the field contains a valid vlan tag or not
168 */
169enum batadv_vlan_flags {
170 BATADV_VLAN_HAS_TAG = BIT(15),
171};
172
173#define BATADV_PRINT_VID(vid) (vid & BATADV_VLAN_HAS_TAG ? \
174 (int)(vid & VLAN_VID_MASK) : -1)
175
160extern char batadv_routing_algo[]; 176extern char batadv_routing_algo[];
161extern struct list_head batadv_hardif_list; 177extern struct list_head batadv_hardif_list;
162 178
diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
index e84629ece9b7..a487d46e0aec 100644
--- a/net/batman-adv/network-coding.c
+++ b/net/batman-adv/network-coding.c
@@ -1245,7 +1245,7 @@ static void batadv_nc_skb_store_before_coding(struct batadv_priv *bat_priv,
1245 return; 1245 return;
1246 1246
1247 /* Set the mac header as if we actually sent the packet uncoded */ 1247 /* Set the mac header as if we actually sent the packet uncoded */
1248 ethhdr = (struct ethhdr *)skb_mac_header(skb); 1248 ethhdr = eth_hdr(skb);
1249 memcpy(ethhdr->h_source, ethhdr->h_dest, ETH_ALEN); 1249 memcpy(ethhdr->h_source, ethhdr->h_dest, ETH_ALEN);
1250 memcpy(ethhdr->h_dest, eth_dst_new, ETH_ALEN); 1250 memcpy(ethhdr->h_dest, eth_dst_new, ETH_ALEN);
1251 1251
@@ -1359,18 +1359,17 @@ static bool batadv_nc_skb_add_to_path(struct sk_buff *skb,
1359 * buffer 1359 * buffer
1360 * @skb: data skb to forward 1360 * @skb: data skb to forward
1361 * @neigh_node: next hop to forward packet to 1361 * @neigh_node: next hop to forward packet to
1362 * @ethhdr: pointer to the ethernet header inside the skb
1363 * 1362 *
1364 * Returns true if the skb was consumed (encoded packet sent) or false otherwise 1363 * Returns true if the skb was consumed (encoded packet sent) or false otherwise
1365 */ 1364 */
1366bool batadv_nc_skb_forward(struct sk_buff *skb, 1365bool batadv_nc_skb_forward(struct sk_buff *skb,
1367 struct batadv_neigh_node *neigh_node, 1366 struct batadv_neigh_node *neigh_node)
1368 struct ethhdr *ethhdr)
1369{ 1367{
1370 const struct net_device *netdev = neigh_node->if_incoming->soft_iface; 1368 const struct net_device *netdev = neigh_node->if_incoming->soft_iface;
1371 struct batadv_priv *bat_priv = netdev_priv(netdev); 1369 struct batadv_priv *bat_priv = netdev_priv(netdev);
1372 struct batadv_unicast_packet *packet; 1370 struct batadv_unicast_packet *packet;
1373 struct batadv_nc_path *nc_path; 1371 struct batadv_nc_path *nc_path;
1372 struct ethhdr *ethhdr = eth_hdr(skb);
1374 __be32 packet_id; 1373 __be32 packet_id;
1375 u8 *payload; 1374 u8 *payload;
1376 1375
@@ -1423,7 +1422,7 @@ void batadv_nc_skb_store_for_decoding(struct batadv_priv *bat_priv,
1423{ 1422{
1424 struct batadv_unicast_packet *packet; 1423 struct batadv_unicast_packet *packet;
1425 struct batadv_nc_path *nc_path; 1424 struct batadv_nc_path *nc_path;
1426 struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb); 1425 struct ethhdr *ethhdr = eth_hdr(skb);
1427 __be32 packet_id; 1426 __be32 packet_id;
1428 u8 *payload; 1427 u8 *payload;
1429 1428
@@ -1482,7 +1481,7 @@ out:
1482void batadv_nc_skb_store_sniffed_unicast(struct batadv_priv *bat_priv, 1481void batadv_nc_skb_store_sniffed_unicast(struct batadv_priv *bat_priv,
1483 struct sk_buff *skb) 1482 struct sk_buff *skb)
1484{ 1483{
1485 struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb); 1484 struct ethhdr *ethhdr = eth_hdr(skb);
1486 1485
1487 if (batadv_is_my_mac(bat_priv, ethhdr->h_dest)) 1486 if (batadv_is_my_mac(bat_priv, ethhdr->h_dest))
1488 return; 1487 return;
@@ -1533,7 +1532,7 @@ batadv_nc_skb_decode_packet(struct batadv_priv *bat_priv, struct sk_buff *skb,
1533 skb_reset_network_header(skb); 1532 skb_reset_network_header(skb);
1534 1533
1535 /* Reconstruct original mac header */ 1534 /* Reconstruct original mac header */
1536 ethhdr = (struct ethhdr *)skb_mac_header(skb); 1535 ethhdr = eth_hdr(skb);
1537 memcpy(ethhdr, &ethhdr_tmp, sizeof(*ethhdr)); 1536 memcpy(ethhdr, &ethhdr_tmp, sizeof(*ethhdr));
1538 1537
1539 /* Select the correct unicast header information based on the location 1538 /* Select the correct unicast header information based on the location
@@ -1677,7 +1676,7 @@ static int batadv_nc_recv_coded_packet(struct sk_buff *skb,
1677 return NET_RX_DROP; 1676 return NET_RX_DROP;
1678 1677
1679 coded_packet = (struct batadv_coded_packet *)skb->data; 1678 coded_packet = (struct batadv_coded_packet *)skb->data;
1680 ethhdr = (struct ethhdr *)skb_mac_header(skb); 1679 ethhdr = eth_hdr(skb);
1681 1680
1682 /* Verify frame is destined for us */ 1681 /* Verify frame is destined for us */
1683 if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest) && 1682 if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest) &&
@@ -1763,6 +1762,13 @@ int batadv_nc_nodes_seq_print_text(struct seq_file *seq, void *offset)
1763 /* For each orig_node in this bin */ 1762 /* For each orig_node in this bin */
1764 rcu_read_lock(); 1763 rcu_read_lock();
1765 hlist_for_each_entry_rcu(orig_node, head, hash_entry) { 1764 hlist_for_each_entry_rcu(orig_node, head, hash_entry) {
1765 /* no need to print the orig node if it does not have
1766 * network coding neighbors
1767 */
1768 if (list_empty(&orig_node->in_coding_list) &&
1769 list_empty(&orig_node->out_coding_list))
1770 continue;
1771
1766 seq_printf(seq, "Node: %pM\n", orig_node->orig); 1772 seq_printf(seq, "Node: %pM\n", orig_node->orig);
1767 1773
1768 seq_puts(seq, " Ingoing: "); 1774 seq_puts(seq, " Ingoing: ");
diff --git a/net/batman-adv/network-coding.h b/net/batman-adv/network-coding.h
index 4fa6d0caddbd..85a4ec81ad50 100644
--- a/net/batman-adv/network-coding.h
+++ b/net/batman-adv/network-coding.h
@@ -36,8 +36,7 @@ void batadv_nc_purge_orig(struct batadv_priv *bat_priv,
36void batadv_nc_init_bat_priv(struct batadv_priv *bat_priv); 36void batadv_nc_init_bat_priv(struct batadv_priv *bat_priv);
37void batadv_nc_init_orig(struct batadv_orig_node *orig_node); 37void batadv_nc_init_orig(struct batadv_orig_node *orig_node);
38bool batadv_nc_skb_forward(struct sk_buff *skb, 38bool batadv_nc_skb_forward(struct sk_buff *skb,
39 struct batadv_neigh_node *neigh_node, 39 struct batadv_neigh_node *neigh_node);
40 struct ethhdr *ethhdr);
41void batadv_nc_skb_store_for_decoding(struct batadv_priv *bat_priv, 40void batadv_nc_skb_store_for_decoding(struct batadv_priv *bat_priv,
42 struct sk_buff *skb); 41 struct sk_buff *skb);
43void batadv_nc_skb_store_sniffed_unicast(struct batadv_priv *bat_priv, 42void batadv_nc_skb_store_sniffed_unicast(struct batadv_priv *bat_priv,
@@ -87,8 +86,7 @@ static inline void batadv_nc_init_orig(struct batadv_orig_node *orig_node)
87} 86}
88 87
89static inline bool batadv_nc_skb_forward(struct sk_buff *skb, 88static inline bool batadv_nc_skb_forward(struct sk_buff *skb,
90 struct batadv_neigh_node *neigh_node, 89 struct batadv_neigh_node *neigh_node)
91 struct ethhdr *ethhdr)
92{ 90{
93 return false; 91 return false;
94} 92}
diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
index fad1a2093e15..f50553a7de62 100644
--- a/net/batman-adv/originator.c
+++ b/net/batman-adv/originator.c
@@ -92,7 +92,7 @@ batadv_orig_node_get_router(struct batadv_orig_node *orig_node)
92 92
93struct batadv_neigh_node * 93struct batadv_neigh_node *
94batadv_neigh_node_new(struct batadv_hard_iface *hard_iface, 94batadv_neigh_node_new(struct batadv_hard_iface *hard_iface,
95 const uint8_t *neigh_addr, uint32_t seqno) 95 const uint8_t *neigh_addr)
96{ 96{
97 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); 97 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
98 struct batadv_neigh_node *neigh_node; 98 struct batadv_neigh_node *neigh_node;
@@ -110,8 +110,8 @@ batadv_neigh_node_new(struct batadv_hard_iface *hard_iface,
110 atomic_set(&neigh_node->refcount, 2); 110 atomic_set(&neigh_node->refcount, 2);
111 111
112 batadv_dbg(BATADV_DBG_BATMAN, bat_priv, 112 batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
113 "Creating new neighbor %pM, initial seqno %d\n", 113 "Creating new neighbor %pM on interface %s\n", neigh_addr,
114 neigh_addr, seqno); 114 hard_iface->net_dev->name);
115 115
116out: 116out:
117 return neigh_node; 117 return neigh_node;
diff --git a/net/batman-adv/originator.h b/net/batman-adv/originator.h
index 734e5a3d8a5b..7887b84a9af4 100644
--- a/net/batman-adv/originator.h
+++ b/net/batman-adv/originator.h
@@ -31,7 +31,7 @@ struct batadv_orig_node *batadv_get_orig_node(struct batadv_priv *bat_priv,
31 const uint8_t *addr); 31 const uint8_t *addr);
32struct batadv_neigh_node * 32struct batadv_neigh_node *
33batadv_neigh_node_new(struct batadv_hard_iface *hard_iface, 33batadv_neigh_node_new(struct batadv_hard_iface *hard_iface,
34 const uint8_t *neigh_addr, uint32_t seqno); 34 const uint8_t *neigh_addr);
35void batadv_neigh_node_free_ref(struct batadv_neigh_node *neigh_node); 35void batadv_neigh_node_free_ref(struct batadv_neigh_node *neigh_node);
36struct batadv_neigh_node * 36struct batadv_neigh_node *
37batadv_orig_node_get_router(struct batadv_orig_node *orig_node); 37batadv_orig_node_get_router(struct batadv_orig_node *orig_node);
diff --git a/net/batman-adv/ring_buffer.c b/net/batman-adv/ring_buffer.c
deleted file mode 100644
index ccab0bbdbb59..000000000000
--- a/net/batman-adv/ring_buffer.c
+++ /dev/null
@@ -1,51 +0,0 @@
1/* Copyright (C) 2007-2013 B.A.T.M.A.N. contributors:
2 *
3 * Marek Lindner
4 *
5 * This program is free software; you can redistribute it and/or
6 * modify it under the terms of version 2 of the GNU General Public
7 * License as published by the Free Software Foundation.
8 *
9 * This program is distributed in the hope that it will be useful, but
10 * WITHOUT ANY WARRANTY; without even the implied warranty of
11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
12 * General Public License for more details.
13 *
14 * You should have received a copy of the GNU General Public License
15 * along with this program; if not, write to the Free Software
16 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
17 * 02110-1301, USA
18 */
19
20#include "main.h"
21#include "ring_buffer.h"
22
23void batadv_ring_buffer_set(uint8_t lq_recv[], uint8_t *lq_index,
24 uint8_t value)
25{
26 lq_recv[*lq_index] = value;
27 *lq_index = (*lq_index + 1) % BATADV_TQ_GLOBAL_WINDOW_SIZE;
28}
29
30uint8_t batadv_ring_buffer_avg(const uint8_t lq_recv[])
31{
32 const uint8_t *ptr;
33 uint16_t count = 0, i = 0, sum = 0;
34
35 ptr = lq_recv;
36
37 while (i < BATADV_TQ_GLOBAL_WINDOW_SIZE) {
38 if (*ptr != 0) {
39 count++;
40 sum += *ptr;
41 }
42
43 i++;
44 ptr++;
45 }
46
47 if (count == 0)
48 return 0;
49
50 return (uint8_t)(sum / count);
51}
diff --git a/net/batman-adv/ring_buffer.h b/net/batman-adv/ring_buffer.h
deleted file mode 100644
index 3f92ae248e83..000000000000
--- a/net/batman-adv/ring_buffer.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* Copyright (C) 2007-2013 B.A.T.M.A.N. contributors:
- *
- * Marek Lindner
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of version 2 of the GNU General Public
- * License as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
- * 02110-1301, USA
- */
-
-#ifndef _NET_BATMAN_ADV_RING_BUFFER_H_
-#define _NET_BATMAN_ADV_RING_BUFFER_H_
-
-void batadv_ring_buffer_set(uint8_t lq_recv[], uint8_t *lq_index,
-			    uint8_t value);
-uint8_t batadv_ring_buffer_avg(const uint8_t lq_recv[]);
-
-#endif /* _NET_BATMAN_ADV_RING_BUFFER_H_ */
diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c
index b27a4d792d15..2f0bd3ffe6e8 100644
--- a/net/batman-adv/routing.c
+++ b/net/batman-adv/routing.c
@@ -34,35 +34,6 @@
 static int batadv_route_unicast_packet(struct sk_buff *skb,
				       struct batadv_hard_iface *recv_if);
 
-void batadv_slide_own_bcast_window(struct batadv_hard_iface *hard_iface)
-{
-	struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
-	struct batadv_hashtable *hash = bat_priv->orig_hash;
-	struct hlist_head *head;
-	struct batadv_orig_node *orig_node;
-	unsigned long *word;
-	uint32_t i;
-	size_t word_index;
-	uint8_t *w;
-
-	for (i = 0; i < hash->size; i++) {
-		head = &hash->table[i];
-
-		rcu_read_lock();
-		hlist_for_each_entry_rcu(orig_node, head, hash_entry) {
-			spin_lock_bh(&orig_node->ogm_cnt_lock);
-			word_index = hard_iface->if_num * BATADV_NUM_WORDS;
-			word = &(orig_node->bcast_own[word_index]);
-
-			batadv_bit_get_packet(bat_priv, word, 1, 0);
-			w = &orig_node->bcast_own_sum[hard_iface->if_num];
-			*w = bitmap_weight(word, BATADV_TQ_LOCAL_WINDOW_SIZE);
-			spin_unlock_bh(&orig_node->ogm_cnt_lock);
-		}
-		rcu_read_unlock();
-	}
-}
-
 static void _batadv_update_route(struct batadv_priv *bat_priv,
				 struct batadv_orig_node *orig_node,
				 struct batadv_neigh_node *neigh_node)
@@ -256,7 +227,7 @@ bool batadv_check_management_packet(struct sk_buff *skb,
 	if (unlikely(!pskb_may_pull(skb, header_len)))
 		return false;
 
-	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	ethhdr = eth_hdr(skb);
 
 	/* packet with broadcast indication but unicast recipient */
 	if (!is_broadcast_ether_addr(ethhdr->h_dest))
@@ -314,7 +285,7 @@ static int batadv_recv_my_icmp_packet(struct batadv_priv *bat_priv,
 	icmp_packet->msg_type = BATADV_ECHO_REPLY;
 	icmp_packet->header.ttl = BATADV_TTL;
 
-	if (batadv_send_skb_to_orig(skb, orig_node, NULL))
+	if (batadv_send_skb_to_orig(skb, orig_node, NULL) != NET_XMIT_DROP)
 		ret = NET_RX_SUCCESS;
 
 out:
@@ -362,7 +333,7 @@ static int batadv_recv_icmp_ttl_exceeded(struct batadv_priv *bat_priv,
 	icmp_packet->msg_type = BATADV_TTL_EXCEEDED;
 	icmp_packet->header.ttl = BATADV_TTL;
 
-	if (batadv_send_skb_to_orig(skb, orig_node, NULL))
+	if (batadv_send_skb_to_orig(skb, orig_node, NULL) != NET_XMIT_DROP)
 		ret = NET_RX_SUCCESS;
 
 out:
@@ -392,7 +363,7 @@ int batadv_recv_icmp_packet(struct sk_buff *skb,
 	if (unlikely(!pskb_may_pull(skb, hdr_size)))
 		goto out;
 
-	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	ethhdr = eth_hdr(skb);
 
 	/* packet with unicast indication but broadcast recipient */
 	if (is_broadcast_ether_addr(ethhdr->h_dest))
@@ -439,7 +410,7 @@ int batadv_recv_icmp_packet(struct sk_buff *skb,
 	icmp_packet->header.ttl--;
 
 	/* route it */
-	if (batadv_send_skb_to_orig(skb, orig_node, recv_if))
+	if (batadv_send_skb_to_orig(skb, orig_node, recv_if) != NET_XMIT_DROP)
 		ret = NET_RX_SUCCESS;
 
 out:
@@ -569,7 +540,7 @@ static int batadv_check_unicast_packet(struct batadv_priv *bat_priv,
 	if (unlikely(!pskb_may_pull(skb, hdr_size)))
 		return -ENODATA;
 
-	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	ethhdr = eth_hdr(skb);
 
 	/* packet with unicast indication but broadcast recipient */
 	if (is_broadcast_ether_addr(ethhdr->h_dest))
@@ -803,8 +774,8 @@ static int batadv_route_unicast_packet(struct sk_buff *skb,
 	struct batadv_orig_node *orig_node = NULL;
 	struct batadv_neigh_node *neigh_node = NULL;
 	struct batadv_unicast_packet *unicast_packet;
-	struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb);
-	int ret = NET_RX_DROP;
+	struct ethhdr *ethhdr = eth_hdr(skb);
+	int res, ret = NET_RX_DROP;
 	struct sk_buff *new_skb;
 
 	unicast_packet = (struct batadv_unicast_packet *)skb->data;
@@ -864,16 +835,19 @@ static int batadv_route_unicast_packet(struct sk_buff *skb,
 	/* decrement ttl */
 	unicast_packet->header.ttl--;
 
-	/* network code packet if possible */
-	if (batadv_nc_skb_forward(skb, neigh_node, ethhdr)) {
-		ret = NET_RX_SUCCESS;
-	} else if (batadv_send_skb_to_orig(skb, orig_node, recv_if)) {
-		ret = NET_RX_SUCCESS;
+	res = batadv_send_skb_to_orig(skb, orig_node, recv_if);
 
-		/* Update stats counter */
+	/* translate transmit result into receive result */
+	if (res == NET_XMIT_SUCCESS) {
+		/* skb was transmitted and consumed */
 		batadv_inc_counter(bat_priv, BATADV_CNT_FORWARD);
 		batadv_add_counter(bat_priv, BATADV_CNT_FORWARD_BYTES,
 				   skb->len + ETH_HLEN);
+
+		ret = NET_RX_SUCCESS;
+	} else if (res == NET_XMIT_POLICED) {
+		/* skb was buffered and consumed */
+		ret = NET_RX_SUCCESS;
 	}
 
 out:
@@ -1165,7 +1139,7 @@ int batadv_recv_bcast_packet(struct sk_buff *skb,
 	if (unlikely(!pskb_may_pull(skb, hdr_size)))
 		goto out;
 
-	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	ethhdr = eth_hdr(skb);
 
 	/* packet with broadcast indication but unicast recipient */
 	if (!is_broadcast_ether_addr(ethhdr->h_dest))
@@ -1265,7 +1239,7 @@ int batadv_recv_vis_packet(struct sk_buff *skb,
 		return NET_RX_DROP;
 
 	vis_packet = (struct batadv_vis_packet *)skb->data;
-	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	ethhdr = eth_hdr(skb);
 
 	/* not for me */
 	if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest))
diff --git a/net/batman-adv/routing.h b/net/batman-adv/routing.h
index 99eeafaba407..72a29bde2010 100644
--- a/net/batman-adv/routing.h
+++ b/net/batman-adv/routing.h
@@ -20,7 +20,6 @@
 #ifndef _NET_BATMAN_ADV_ROUTING_H_
 #define _NET_BATMAN_ADV_ROUTING_H_
 
-void batadv_slide_own_bcast_window(struct batadv_hard_iface *hard_iface);
 bool batadv_check_management_packet(struct sk_buff *skb,
 				    struct batadv_hard_iface *hard_iface,
 				    int header_len);
diff --git a/net/batman-adv/send.c b/net/batman-adv/send.c
index 263cfd1ccee7..e9ff8d801201 100644
--- a/net/batman-adv/send.c
+++ b/net/batman-adv/send.c
@@ -61,7 +61,7 @@ int batadv_send_skb_packet(struct sk_buff *skb,
 
 	skb_reset_mac_header(skb);
 
-	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	ethhdr = eth_hdr(skb);
 	memcpy(ethhdr->h_source, hard_iface->net_dev->dev_addr, ETH_ALEN);
 	memcpy(ethhdr->h_dest, dst_addr, ETH_ALEN);
 	ethhdr->h_proto = __constant_htons(ETH_P_BATMAN);
@@ -96,26 +96,37 @@ send_skb_err:
  * host, NULL can be passed as recv_if and no interface alternating is
  * attempted.
  *
- * Returns TRUE on success; FALSE otherwise.
+ * Returns NET_XMIT_SUCCESS on success, NET_XMIT_DROP on failure, or
+ * NET_XMIT_POLICED if the skb is buffered for later transmit.
  */
-bool batadv_send_skb_to_orig(struct sk_buff *skb,
-			     struct batadv_orig_node *orig_node,
-			     struct batadv_hard_iface *recv_if)
+int batadv_send_skb_to_orig(struct sk_buff *skb,
+			    struct batadv_orig_node *orig_node,
+			    struct batadv_hard_iface *recv_if)
 {
 	struct batadv_priv *bat_priv = orig_node->bat_priv;
 	struct batadv_neigh_node *neigh_node;
+	int ret = NET_XMIT_DROP;
 
 	/* batadv_find_router() increases neigh_nodes refcount if found. */
 	neigh_node = batadv_find_router(bat_priv, orig_node, recv_if);
 	if (!neigh_node)
-		return false;
+		return ret;
 
-	/* route it */
-	batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr);
+	/* try to network code the packet, if it is received on an interface
+	 * (i.e. being forwarded). If the packet originates from this node or if
+	 * network coding fails, then send the packet as usual.
+	 */
+	if (recv_if && batadv_nc_skb_forward(skb, neigh_node)) {
+		ret = NET_XMIT_POLICED;
+	} else {
+		batadv_send_skb_packet(skb, neigh_node->if_incoming,
+				       neigh_node->addr);
+		ret = NET_XMIT_SUCCESS;
+	}
 
 	batadv_neigh_node_free_ref(neigh_node);
 
-	return true;
+	return ret;
 }
 
 void batadv_schedule_bat_ogm(struct batadv_hard_iface *hard_iface)
@@ -152,8 +163,6 @@ _batadv_add_bcast_packet_to_list(struct batadv_priv *bat_priv,
 				 struct batadv_forw_packet *forw_packet,
 				 unsigned long send_time)
 {
-	INIT_HLIST_NODE(&forw_packet->list);
-
 	/* add new packet to packet list */
 	spin_lock_bh(&bat_priv->forw_bcast_list_lock);
 	hlist_add_head(&forw_packet->list, &bat_priv->forw_bcast_list);
@@ -260,6 +269,9 @@ static void batadv_send_outstanding_bcast_packet(struct work_struct *work)
 		if (hard_iface->soft_iface != soft_iface)
 			continue;
 
+		if (forw_packet->num_packets >= hard_iface->num_bcasts)
+			continue;
+
 		/* send a copy of the saved skb */
 		skb1 = skb_clone(forw_packet->skb, GFP_ATOMIC);
 		if (skb1)
@@ -271,7 +283,7 @@ static void batadv_send_outstanding_bcast_packet(struct work_struct *work)
 	forw_packet->num_packets++;
 
 	/* if we still have some more bcasts to send */
-	if (forw_packet->num_packets < 3) {
+	if (forw_packet->num_packets < BATADV_NUM_BCASTS_MAX) {
 		_batadv_add_bcast_packet_to_list(bat_priv, forw_packet,
 						 msecs_to_jiffies(5));
 		return;
diff --git a/net/batman-adv/send.h b/net/batman-adv/send.h
index 38e662f619ac..e7b17880fca4 100644
--- a/net/batman-adv/send.h
+++ b/net/batman-adv/send.h
@@ -23,9 +23,9 @@
 int batadv_send_skb_packet(struct sk_buff *skb,
 			   struct batadv_hard_iface *hard_iface,
 			   const uint8_t *dst_addr);
-bool batadv_send_skb_to_orig(struct sk_buff *skb,
-			     struct batadv_orig_node *orig_node,
-			     struct batadv_hard_iface *recv_if);
+int batadv_send_skb_to_orig(struct sk_buff *skb,
+			    struct batadv_orig_node *orig_node,
+			    struct batadv_hard_iface *recv_if);
 void batadv_schedule_bat_ogm(struct batadv_hard_iface *hard_iface);
 int batadv_add_bcast_packet_to_list(struct batadv_priv *bat_priv,
 				    const struct sk_buff *skb,
diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
index 819dfb006cdf..700d0b49742d 100644
--- a/net/batman-adv/soft-interface.c
+++ b/net/batman-adv/soft-interface.c
@@ -154,7 +154,7 @@ static int batadv_interface_tx(struct sk_buff *skb,
 						   0x00, 0x00};
 	unsigned int header_len = 0;
 	int data_len = skb->len, ret;
-	short vid __maybe_unused = -1;
+	unsigned short vid __maybe_unused = BATADV_NO_FLAGS;
 	bool do_bcast = false;
 	uint32_t seqno;
 	unsigned long brd_delay = 1;
@@ -303,7 +303,7 @@ void batadv_interface_rx(struct net_device *soft_iface,
 	struct ethhdr *ethhdr;
 	struct vlan_ethhdr *vhdr;
 	struct batadv_header *batadv_header = (struct batadv_header *)skb->data;
-	short vid __maybe_unused = -1;
+	unsigned short vid __maybe_unused = BATADV_NO_FLAGS;
 	__be16 ethertype = __constant_htons(ETH_P_BATMAN);
 	bool is_bcast;
 
@@ -316,7 +316,7 @@ void batadv_interface_rx(struct net_device *soft_iface,
 	skb_pull_rcsum(skb, hdr_size);
 	skb_reset_mac_header(skb);
 
-	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	ethhdr = eth_hdr(skb);
 
 	switch (ntohs(ethhdr->h_proto)) {
 	case ETH_P_8021Q:
diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
index 9e8748575845..429aeef3d8b2 100644
--- a/net/batman-adv/translation-table.c
+++ b/net/batman-adv/translation-table.c
@@ -163,10 +163,19 @@ batadv_tt_orig_list_entry_free_ref(struct batadv_tt_orig_list_entry *orig_entry)
163 call_rcu(&orig_entry->rcu, batadv_tt_orig_list_entry_free_rcu); 163 call_rcu(&orig_entry->rcu, batadv_tt_orig_list_entry_free_rcu);
164} 164}
165 165
166/**
167 * batadv_tt_local_event - store a local TT event (ADD/DEL)
168 * @bat_priv: the bat priv with all the soft interface information
169 * @tt_local_entry: the TT entry involved in the event
170 * @event_flags: flags to store in the event structure
171 */
166static void batadv_tt_local_event(struct batadv_priv *bat_priv, 172static void batadv_tt_local_event(struct batadv_priv *bat_priv,
167 const uint8_t *addr, uint8_t flags) 173 struct batadv_tt_local_entry *tt_local_entry,
174 uint8_t event_flags)
168{ 175{
169 struct batadv_tt_change_node *tt_change_node, *entry, *safe; 176 struct batadv_tt_change_node *tt_change_node, *entry, *safe;
177 struct batadv_tt_common_entry *common = &tt_local_entry->common;
178 uint8_t flags = common->flags | event_flags;
170 bool event_removed = false; 179 bool event_removed = false;
171 bool del_op_requested, del_op_entry; 180 bool del_op_requested, del_op_entry;
172 181
@@ -176,7 +185,7 @@ static void batadv_tt_local_event(struct batadv_priv *bat_priv,
 		return;
 
 	tt_change_node->change.flags = flags;
-	memcpy(tt_change_node->change.addr, addr, ETH_ALEN);
+	memcpy(tt_change_node->change.addr, common->addr, ETH_ALEN);
 
 	del_op_requested = flags & BATADV_TT_CLIENT_DEL;
 
@@ -184,7 +193,7 @@ static void batadv_tt_local_event(struct batadv_priv *bat_priv,
 	spin_lock_bh(&bat_priv->tt.changes_list_lock);
 	list_for_each_entry_safe(entry, safe, &bat_priv->tt.changes_list,
 				 list) {
-		if (!batadv_compare_eth(entry->change.addr, addr))
+		if (!batadv_compare_eth(entry->change.addr, common->addr))
 			continue;
 
 		/* DEL+ADD in the same orig interval have no effect and can be
@@ -332,7 +341,7 @@ void batadv_tt_local_add(struct net_device *soft_iface, const uint8_t *addr,
 	}
 
 add_event:
-	batadv_tt_local_event(bat_priv, addr, tt_local->common.flags);
+	batadv_tt_local_event(bat_priv, tt_local, BATADV_NO_FLAGS);
 
 check_roaming:
 	/* Check whether it is a roaming, but don't do anything if the roaming
@@ -529,8 +538,7 @@ batadv_tt_local_set_pending(struct batadv_priv *bat_priv,
 			    struct batadv_tt_local_entry *tt_local_entry,
 			    uint16_t flags, const char *message)
 {
-	batadv_tt_local_event(bat_priv, tt_local_entry->common.addr,
-			      tt_local_entry->common.flags | flags);
+	batadv_tt_local_event(bat_priv, tt_local_entry, flags);
 
 	/* The local client has to be marked as "pending to be removed" but has
 	 * to be kept in the table in order to send it in a full table
@@ -584,8 +592,7 @@ uint16_t batadv_tt_local_remove(struct batadv_priv *bat_priv,
 		/* if this client has been added right now, it is possible to
 		 * immediately purge it
 		 */
-		batadv_tt_local_event(bat_priv, tt_local_entry->common.addr,
-				      curr_flags | BATADV_TT_CLIENT_DEL);
+		batadv_tt_local_event(bat_priv, tt_local_entry, BATADV_TT_CLIENT_DEL);
 		hlist_del_rcu(&tt_local_entry->common.hash_entry);
 		batadv_tt_local_entry_free_ref(tt_local_entry);
 
@@ -791,10 +798,25 @@ out:
 	batadv_tt_orig_list_entry_free_ref(orig_entry);
 }
 
-/* caller must hold orig_node refcount */
+/**
+ * batadv_tt_global_add - add a new TT global entry or update an existing one
+ * @bat_priv: the bat priv with all the soft interface information
+ * @orig_node: the originator announcing the client
+ * @tt_addr: the mac address of the non-mesh client
+ * @flags: TT flags that have to be set for this non-mesh client
+ * @ttvn: the tt version number ever announcing this non-mesh client
+ *
+ * Add a new TT global entry for the given originator. If the entry already
+ * exists add a new reference to the given originator (a global entry can have
+ * references to multiple originators) and adjust the flags attribute to reflect
+ * the function argument.
+ * If a TT local entry exists for this non-mesh client remove it.
+ *
+ * The caller must hold orig_node refcount.
+ */
 int batadv_tt_global_add(struct batadv_priv *bat_priv,
 			 struct batadv_orig_node *orig_node,
-			 const unsigned char *tt_addr, uint8_t flags,
+			 const unsigned char *tt_addr, uint16_t flags,
 			 uint8_t ttvn)
 {
 	struct batadv_tt_global_entry *tt_global_entry;
@@ -1600,11 +1622,11 @@ batadv_tt_response_fill_table(uint16_t tt_len, uint8_t ttvn,
 	tt_tot = tt_len / sizeof(struct batadv_tt_change);
 
 	len = tt_query_size + tt_len;
-	skb = dev_alloc_skb(len + ETH_HLEN + NET_IP_ALIGN);
+	skb = netdev_alloc_skb_ip_align(NULL, len + ETH_HLEN);
 	if (!skb)
 		goto out;
 
-	skb_reserve(skb, ETH_HLEN + NET_IP_ALIGN);
+	skb_reserve(skb, ETH_HLEN);
 	tt_response = (struct batadv_tt_query_packet *)skb_put(skb, len);
 	tt_response->ttvn = ttvn;
 
@@ -1665,11 +1687,11 @@ static int batadv_send_tt_request(struct batadv_priv *bat_priv,
 	if (!tt_req_node)
 		goto out;
 
-	skb = dev_alloc_skb(sizeof(*tt_request) + ETH_HLEN + NET_IP_ALIGN);
+	skb = netdev_alloc_skb_ip_align(NULL, sizeof(*tt_request) + ETH_HLEN);
 	if (!skb)
 		goto out;
 
-	skb_reserve(skb, ETH_HLEN + NET_IP_ALIGN);
+	skb_reserve(skb, ETH_HLEN);
 
 	tt_req_len = sizeof(*tt_request);
 	tt_request = (struct batadv_tt_query_packet *)skb_put(skb, tt_req_len);
@@ -1691,7 +1713,7 @@ static int batadv_send_tt_request(struct batadv_priv *bat_priv,
 
 	batadv_inc_counter(bat_priv, BATADV_CNT_TT_REQUEST_TX);
 
-	if (batadv_send_skb_to_orig(skb, dst_orig_node, NULL))
+	if (batadv_send_skb_to_orig(skb, dst_orig_node, NULL) != NET_XMIT_DROP)
 		ret = 0;
 
 out:
@@ -1715,7 +1737,7 @@ batadv_send_other_tt_response(struct batadv_priv *bat_priv,
 	struct batadv_orig_node *req_dst_orig_node;
 	struct batadv_orig_node *res_dst_orig_node = NULL;
 	uint8_t orig_ttvn, req_ttvn, ttvn;
-	int ret = false;
+	int res, ret = false;
 	unsigned char *tt_buff;
 	bool full_table;
 	uint16_t tt_len, tt_tot;
@@ -1762,11 +1784,11 @@ batadv_send_other_tt_response(struct batadv_priv *bat_priv,
 	tt_tot = tt_len / sizeof(struct batadv_tt_change);
 
 	len = sizeof(*tt_response) + tt_len;
-	skb = dev_alloc_skb(len + ETH_HLEN + NET_IP_ALIGN);
+	skb = netdev_alloc_skb_ip_align(NULL, len + ETH_HLEN);
 	if (!skb)
 		goto unlock;
 
-	skb_reserve(skb, ETH_HLEN + NET_IP_ALIGN);
+	skb_reserve(skb, ETH_HLEN);
 	packet_pos = skb_put(skb, len);
 	tt_response = (struct batadv_tt_query_packet *)packet_pos;
 	tt_response->ttvn = req_ttvn;
@@ -1810,8 +1832,10 @@ batadv_send_other_tt_response(struct batadv_priv *bat_priv,
 
 	batadv_inc_counter(bat_priv, BATADV_CNT_TT_RESPONSE_TX);
 
-	if (batadv_send_skb_to_orig(skb, res_dst_orig_node, NULL))
+	res = batadv_send_skb_to_orig(skb, res_dst_orig_node, NULL);
+	if (res != NET_XMIT_DROP)
 		ret = true;
+
 	goto out;
 
 unlock:
@@ -1878,11 +1902,11 @@ batadv_send_my_tt_response(struct batadv_priv *bat_priv,
 	tt_tot = tt_len / sizeof(struct batadv_tt_change);
 
 	len = sizeof(*tt_response) + tt_len;
-	skb = dev_alloc_skb(len + ETH_HLEN + NET_IP_ALIGN);
+	skb = netdev_alloc_skb_ip_align(NULL, len + ETH_HLEN);
 	if (!skb)
 		goto unlock;
 
-	skb_reserve(skb, ETH_HLEN + NET_IP_ALIGN);
+	skb_reserve(skb, ETH_HLEN);
 	packet_pos = skb_put(skb, len);
 	tt_response = (struct batadv_tt_query_packet *)packet_pos;
 	tt_response->ttvn = req_ttvn;
@@ -1925,7 +1949,7 @@ batadv_send_my_tt_response(struct batadv_priv *bat_priv,
 
 	batadv_inc_counter(bat_priv, BATADV_CNT_TT_RESPONSE_TX);
 
-	if (batadv_send_skb_to_orig(skb, orig_node, NULL))
+	if (batadv_send_skb_to_orig(skb, orig_node, NULL) != NET_XMIT_DROP)
 		ret = true;
 	goto out;
 
@@ -2212,11 +2236,11 @@ static void batadv_send_roam_adv(struct batadv_priv *bat_priv, uint8_t *client,
 	if (!batadv_tt_check_roam_count(bat_priv, client))
 		goto out;
 
-	skb = dev_alloc_skb(sizeof(*roam_adv_packet) + ETH_HLEN + NET_IP_ALIGN);
+	skb = netdev_alloc_skb_ip_align(NULL, len + ETH_HLEN);
 	if (!skb)
 		goto out;
 
-	skb_reserve(skb, ETH_HLEN + NET_IP_ALIGN);
+	skb_reserve(skb, ETH_HLEN);
 
 	roam_adv_packet = (struct batadv_roam_adv_packet *)skb_put(skb, len);
 
@@ -2238,7 +2262,7 @@ static void batadv_send_roam_adv(struct batadv_priv *bat_priv, uint8_t *client,
 
 	batadv_inc_counter(bat_priv, BATADV_CNT_TT_ROAM_ADV_TX);
 
-	if (batadv_send_skb_to_orig(skb, orig_node, NULL))
+	if (batadv_send_skb_to_orig(skb, orig_node, NULL) != NET_XMIT_DROP)
 		ret = 0;
 
 out:
diff --git a/net/batman-adv/translation-table.h b/net/batman-adv/translation-table.h
index ab8e683b402f..659a3bb759ce 100644
--- a/net/batman-adv/translation-table.h
+++ b/net/batman-adv/translation-table.h
@@ -33,7 +33,7 @@ void batadv_tt_global_add_orig(struct batadv_priv *bat_priv,
 			       const unsigned char *tt_buff, int tt_buff_len);
 int batadv_tt_global_add(struct batadv_priv *bat_priv,
 			 struct batadv_orig_node *orig_node,
-			 const unsigned char *addr, uint8_t flags,
+			 const unsigned char *addr, uint16_t flags,
 			 uint8_t ttvn);
 int batadv_tt_global_seq_print_text(struct seq_file *seq, void *offset);
 void batadv_tt_global_del_orig(struct batadv_priv *bat_priv,
diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
index aba8364c3689..b2c94e139319 100644
--- a/net/batman-adv/types.h
+++ b/net/batman-adv/types.h
@@ -61,6 +61,7 @@ struct batadv_hard_iface_bat_iv {
  * @if_status: status of the interface for batman-adv
  * @net_dev: pointer to the net_device
  * @frag_seqno: last fragment sequence number sent by this interface
+ * @num_bcasts: number of payload re-broadcasts on this interface (ARQ)
  * @hardif_obj: kobject of the per interface sysfs "mesh" directory
  * @refcount: number of contexts the object is used
  * @batman_adv_ptype: packet type describing packets that should be processed by
@@ -76,6 +77,7 @@ struct batadv_hard_iface {
 	char if_status;
 	struct net_device *net_dev;
 	atomic_t frag_seqno;
+	uint8_t num_bcasts;
 	struct kobject *hardif_obj;
 	atomic_t refcount;
 	struct packet_type batman_adv_ptype;
@@ -640,7 +642,7 @@ struct batadv_socket_packet {
 #ifdef CONFIG_BATMAN_ADV_BLA
 struct batadv_bla_backbone_gw {
 	uint8_t orig[ETH_ALEN];
-	short vid;
+	unsigned short vid;
 	struct hlist_node hash_entry;
 	struct batadv_priv *bat_priv;
 	unsigned long lasttime;
@@ -663,7 +665,7 @@ struct batadv_bla_backbone_gw {
  */
 struct batadv_bla_claim {
 	uint8_t addr[ETH_ALEN];
-	short vid;
+	unsigned short vid;
 	struct batadv_bla_backbone_gw *backbone_gw;
 	unsigned long lasttime;
 	struct hlist_node hash_entry;
diff --git a/net/batman-adv/unicast.c b/net/batman-adv/unicast.c
index 0bb3b5982f94..dc8b5d4dd636 100644
--- a/net/batman-adv/unicast.c
+++ b/net/batman-adv/unicast.c
@@ -464,7 +464,7 @@ find_router:
 		goto out;
 	}
 
-	if (batadv_send_skb_to_orig(skb, orig_node, NULL))
+	if (batadv_send_skb_to_orig(skb, orig_node, NULL) != NET_XMIT_DROP)
 		ret = 0;
 
 out:
diff --git a/net/batman-adv/vis.c b/net/batman-adv/vis.c
index 1625e5793a89..4983340f1943 100644
--- a/net/batman-adv/vis.c
+++ b/net/batman-adv/vis.c
@@ -392,12 +392,12 @@ batadv_add_packet(struct batadv_priv *bat_priv,
 		return NULL;
 
 	len = sizeof(*packet) + vis_info_len;
-	info->skb_packet = dev_alloc_skb(len + ETH_HLEN + NET_IP_ALIGN);
+	info->skb_packet = netdev_alloc_skb_ip_align(NULL, len + ETH_HLEN);
 	if (!info->skb_packet) {
 		kfree(info);
 		return NULL;
 	}
-	skb_reserve(info->skb_packet, ETH_HLEN + NET_IP_ALIGN);
+	skb_reserve(info->skb_packet, ETH_HLEN);
 	packet = (struct batadv_vis_packet *)skb_put(info->skb_packet, len);
 
 	kref_init(&info->refcount);
@@ -697,7 +697,7 @@ static void batadv_broadcast_vis_packet(struct batadv_priv *bat_priv,
 	struct batadv_orig_node *orig_node;
 	struct batadv_vis_packet *packet;
 	struct sk_buff *skb;
-	uint32_t i;
+	uint32_t i, res;
 
 
 	packet = (struct batadv_vis_packet *)info->skb_packet->data;
@@ -724,7 +724,8 @@ static void batadv_broadcast_vis_packet(struct batadv_priv *bat_priv,
 		if (!skb)
 			continue;
 
-		if (!batadv_send_skb_to_orig(skb, orig_node, NULL))
+		res = batadv_send_skb_to_orig(skb, orig_node, NULL);
+		if (res == NET_XMIT_DROP)
 			kfree_skb(skb);
 	}
 	rcu_read_unlock();
@@ -748,7 +749,7 @@ static void batadv_unicast_vis_packet(struct batadv_priv *bat_priv,
 	if (!skb)
 		goto out;
 
-	if (!batadv_send_skb_to_orig(skb, orig_node, NULL))
+	if (batadv_send_skb_to_orig(skb, orig_node, NULL) == NET_XMIT_DROP)
 		kfree_skb(skb);
 
 out:
@@ -854,13 +855,13 @@ int batadv_vis_init(struct batadv_priv *bat_priv)
 	if (!bat_priv->vis.my_info)
 		goto err;
 
-	len = sizeof(*packet) + BATADV_MAX_VIS_PACKET_SIZE;
-	len += ETH_HLEN + NET_IP_ALIGN;
-	bat_priv->vis.my_info->skb_packet = dev_alloc_skb(len);
+	len = sizeof(*packet) + BATADV_MAX_VIS_PACKET_SIZE + ETH_HLEN;
+	bat_priv->vis.my_info->skb_packet = netdev_alloc_skb_ip_align(NULL,
+								      len);
 	if (!bat_priv->vis.my_info->skb_packet)
 		goto free_info;
 
-	skb_reserve(bat_priv->vis.my_info->skb_packet, ETH_HLEN + NET_IP_ALIGN);
+	skb_reserve(bat_priv->vis.my_info->skb_packet, ETH_HLEN);
 	tmp_skb = bat_priv->vis.my_info->skb_packet;
 	packet = (struct batadv_vis_packet *)skb_put(tmp_skb, sizeof(*packet));
 
diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
index db7de80b88a2..e3a349977595 100644
--- a/net/bluetooth/hci_core.c
+++ b/net/bluetooth/hci_core.c
@@ -597,7 +597,15 @@ static void hci_init3_req(struct hci_request *req, unsigned long opt)
 	struct hci_dev *hdev = req->hdev;
 	u8 p;
 
-	/* Only send HCI_Delete_Stored_Link_Key if it is supported */
+	/* Some Broadcom based Bluetooth controllers do not support the
+	 * Delete Stored Link Key command. They are clearly indicating its
+	 * absence in the bit mask of supported commands.
+	 *
+	 * Check the supported commands and only if the command is marked
+	 * as supported send it. If not supported assume that the controller
+	 * does not have actual support for stored link keys which makes this
+	 * command redundant anyway.
+	 */
 	if (hdev->commands[6] & 0x80) {
 		struct hci_cp_delete_stored_link_key cp;
 
@@ -751,7 +759,7 @@ void hci_discovery_set_state(struct hci_dev *hdev, int state)
 	hdev->discovery.state = state;
 }
 
-static void inquiry_cache_flush(struct hci_dev *hdev)
+void hci_inquiry_cache_flush(struct hci_dev *hdev)
 {
 	struct discovery_state *cache = &hdev->discovery;
 	struct inquiry_entry *p, *n;
@@ -964,7 +972,7 @@ int hci_inquiry(void __user *arg)
 	hci_dev_lock(hdev);
 	if (inquiry_cache_age(hdev) > INQUIRY_CACHE_AGE_MAX ||
 	    inquiry_cache_empty(hdev) || ir.flags & IREQ_CACHE_FLUSH) {
-		inquiry_cache_flush(hdev);
+		hci_inquiry_cache_flush(hdev);
 		do_inquiry = 1;
 	}
 	hci_dev_unlock(hdev);
@@ -1201,8 +1209,6 @@ static int hci_dev_do_close(struct hci_dev *hdev)
 {
 	BT_DBG("%s %p", hdev->name, hdev);
 
-	cancel_work_sync(&hdev->le_scan);
-
 	cancel_delayed_work(&hdev->power_off);
 
 	hci_req_cancel(hdev, ENODEV);
@@ -1230,7 +1236,7 @@ static int hci_dev_do_close(struct hci_dev *hdev)
 	cancel_delayed_work_sync(&hdev->le_scan_disable);
 
 	hci_dev_lock(hdev);
-	inquiry_cache_flush(hdev);
+	hci_inquiry_cache_flush(hdev);
 	hci_conn_hash_flush(hdev);
 	hci_dev_unlock(hdev);
 
@@ -1331,7 +1337,7 @@ int hci_dev_reset(__u16 dev)
 	skb_queue_purge(&hdev->cmd_q);
 
 	hci_dev_lock(hdev);
-	inquiry_cache_flush(hdev);
+	hci_inquiry_cache_flush(hdev);
 	hci_conn_hash_flush(hdev);
 	hci_dev_unlock(hdev);
 
@@ -1991,80 +1997,59 @@ int hci_blacklist_del(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 type)
 	return mgmt_device_unblocked(hdev, bdaddr, type);
 }
 
-static void le_scan_param_req(struct hci_request *req, unsigned long opt)
-{
-	struct le_scan_params *param = (struct le_scan_params *) opt;
-	struct hci_cp_le_set_scan_param cp;
-
-	memset(&cp, 0, sizeof(cp));
-	cp.type = param->type;
-	cp.interval = cpu_to_le16(param->interval);
-	cp.window = cpu_to_le16(param->window);
-
-	hci_req_add(req, HCI_OP_LE_SET_SCAN_PARAM, sizeof(cp), &cp);
-}
-
-static void le_scan_enable_req(struct hci_request *req, unsigned long opt)
-{
-	struct hci_cp_le_set_scan_enable cp;
-
-	memset(&cp, 0, sizeof(cp));
-	cp.enable = LE_SCAN_ENABLE;
-	cp.filter_dup = LE_SCAN_FILTER_DUP_ENABLE;
-
-	hci_req_add(req, HCI_OP_LE_SET_SCAN_ENABLE, sizeof(cp), &cp);
-}
-
-static int hci_do_le_scan(struct hci_dev *hdev, u8 type, u16 interval,
-			  u16 window, int timeout)
-{
-	long timeo = msecs_to_jiffies(3000);
-	struct le_scan_params param;
-	int err;
-
-	BT_DBG("%s", hdev->name);
-
-	if (test_bit(HCI_LE_SCAN, &hdev->dev_flags))
-		return -EINPROGRESS;
-
-	param.type = type;
-	param.interval = interval;
-	param.window = window;
-
-	hci_req_lock(hdev);
-
-	err = __hci_req_sync(hdev, le_scan_param_req, (unsigned long) &param,
-			     timeo);
-	if (!err)
-		err = __hci_req_sync(hdev, le_scan_enable_req, 0, timeo);
-
-	hci_req_unlock(hdev);
-
-	if (err < 0)
-		return err;
-
-	queue_delayed_work(hdev->workqueue, &hdev->le_scan_disable,
-			   timeout);
-
-	return 0;
-}
-
-int hci_cancel_le_scan(struct hci_dev *hdev)
-{
-	BT_DBG("%s", hdev->name);
-
-	if (!test_bit(HCI_LE_SCAN, &hdev->dev_flags))
-		return -EALREADY;
-
-	if (cancel_delayed_work(&hdev->le_scan_disable)) {
-		struct hci_cp_le_set_scan_enable cp;
-
-		/* Send HCI command to disable LE Scan */
-		memset(&cp, 0, sizeof(cp));
-		hci_send_cmd(hdev, HCI_OP_LE_SET_SCAN_ENABLE, sizeof(cp), &cp);
-	}
-
-	return 0;
-}
+static void inquiry_complete(struct hci_dev *hdev, u8 status)
+{
+	if (status) {
+		BT_ERR("Failed to start inquiry: status %d", status);
+
+		hci_dev_lock(hdev);
+		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+		hci_dev_unlock(hdev);
+		return;
+	}
+}
+
+static void le_scan_disable_work_complete(struct hci_dev *hdev, u8 status)
+{
+	/* General inquiry access code (GIAC) */
+	u8 lap[3] = { 0x33, 0x8b, 0x9e };
+	struct hci_request req;
+	struct hci_cp_inquiry cp;
+	int err;
+
+	if (status) {
+		BT_ERR("Failed to disable LE scanning: status %d", status);
+		return;
+	}
+
+	switch (hdev->discovery.type) {
+	case DISCOV_TYPE_LE:
+		hci_dev_lock(hdev);
+		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+		hci_dev_unlock(hdev);
+		break;
+
+	case DISCOV_TYPE_INTERLEAVED:
+		hci_req_init(&req, hdev);
+
+		memset(&cp, 0, sizeof(cp));
+		memcpy(&cp.lap, lap, sizeof(cp.lap));
+		cp.length = DISCOV_INTERLEAVED_INQUIRY_LEN;
+		hci_req_add(&req, HCI_OP_INQUIRY, sizeof(cp), &cp);
+
+		hci_dev_lock(hdev);
+
+		hci_inquiry_cache_flush(hdev);
+
+		err = hci_req_run(&req, inquiry_complete);
+		if (err) {
+			BT_ERR("Inquiry request failed: err %d", err);
+			hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+		}
+
+		hci_dev_unlock(hdev);
+		break;
+	}
+}
 
 static void le_scan_disable_work(struct work_struct *work)
@@ -2072,46 +2057,20 @@ static void le_scan_disable_work(struct work_struct *work)
 	struct hci_dev *hdev = container_of(work, struct hci_dev,
 					    le_scan_disable.work);
 	struct hci_cp_le_set_scan_enable cp;
+	struct hci_request req;
+	int err;
 
 	BT_DBG("%s", hdev->name);
 
-	memset(&cp, 0, sizeof(cp));
-
-	hci_send_cmd(hdev, HCI_OP_LE_SET_SCAN_ENABLE, sizeof(cp), &cp);
-}
-
-static void le_scan_work(struct work_struct *work)
-{
-	struct hci_dev *hdev = container_of(work, struct hci_dev, le_scan);
-	struct le_scan_params *param = &hdev->le_scan_params;
-
-	BT_DBG("%s", hdev->name);
-
-	hci_do_le_scan(hdev, param->type, param->interval, param->window,
-		       param->timeout);
-}
-
-int hci_le_scan(struct hci_dev *hdev, u8 type, u16 interval, u16 window,
-		int timeout)
-{
-	struct le_scan_params *param = &hdev->le_scan_params;
-
-	BT_DBG("%s", hdev->name);
-
-	if (test_bit(HCI_LE_PERIPHERAL, &hdev->dev_flags))
-		return -ENOTSUPP;
-
-	if (work_busy(&hdev->le_scan))
-		return -EINPROGRESS;
-
-	param->type = type;
-	param->interval = interval;
-	param->window = window;
-	param->timeout = timeout;
-
-	queue_work(system_long_wq, &hdev->le_scan);
-
-	return 0;
-}
+	hci_req_init(&req, hdev);
+
+	memset(&cp, 0, sizeof(cp));
+	cp.enable = LE_SCAN_DISABLE;
+	hci_req_add(&req, HCI_OP_LE_SET_SCAN_ENABLE, sizeof(cp), &cp);
+
+	err = hci_req_run(&req, le_scan_disable_work_complete);
+	if (err)
+		BT_ERR("Disable LE scanning request failed: err %d", err);
+}
 
 /* Alloc HCI device */
@@ -2148,7 +2107,6 @@ struct hci_dev *hci_alloc_dev(void)
 	INIT_WORK(&hdev->cmd_work, hci_cmd_work);
 	INIT_WORK(&hdev->tx_work, hci_tx_work);
 	INIT_WORK(&hdev->power_on, hci_power_on);
-	INIT_WORK(&hdev->le_scan, le_scan_work);
 
 	INIT_DELAYED_WORK(&hdev->power_off, hci_power_off);
 	INIT_DELAYED_WORK(&hdev->discov_off, hci_discov_off);
@@ -3550,36 +3508,6 @@ static void hci_cmd_work(struct work_struct *work)
 	}
 }
 
-int hci_do_inquiry(struct hci_dev *hdev, u8 length)
-{
-	/* General inquiry access code (GIAC) */
-	u8 lap[3] = { 0x33, 0x8b, 0x9e };
-	struct hci_cp_inquiry cp;
-
-	BT_DBG("%s", hdev->name);
-
-	if (test_bit(HCI_INQUIRY, &hdev->flags))
-		return -EINPROGRESS;
-
-	inquiry_cache_flush(hdev);
-
-	memset(&cp, 0, sizeof(cp));
-	memcpy(&cp.lap, lap, sizeof(cp.lap));
-	cp.length = length;
-
-	return hci_send_cmd(hdev, HCI_OP_INQUIRY, sizeof(cp), &cp);
-}
-
-int hci_cancel_inquiry(struct hci_dev *hdev)
-{
-	BT_DBG("%s", hdev->name);
-
-	if (!test_bit(HCI_INQUIRY, &hdev->flags))
-		return -EALREADY;
-
-	return hci_send_cmd(hdev, HCI_OP_INQUIRY_CANCEL, 0, NULL);
-}
-
 u8 bdaddr_to_le(u8 bdaddr_type)
 {
 	switch (bdaddr_type) {
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
index b93cd2eb5d58..0437200d92f4 100644
--- a/net/bluetooth/hci_event.c
+++ b/net/bluetooth/hci_event.c
@@ -40,21 +40,13 @@ static void hci_cc_inquiry_cancel(struct hci_dev *hdev, struct sk_buff *skb)
 
 	BT_DBG("%s status 0x%2.2x", hdev->name, status);
 
-	if (status) {
-		hci_dev_lock(hdev);
-		mgmt_stop_discovery_failed(hdev, status);
-		hci_dev_unlock(hdev);
+	if (status)
 		return;
-	}
 
 	clear_bit(HCI_INQUIRY, &hdev->flags);
 	smp_mb__after_clear_bit(); /* wake_up_bit advises about this barrier */
 	wake_up_bit(&hdev->flags, HCI_INQUIRY);
 
-	hci_dev_lock(hdev);
-	hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
-	hci_dev_unlock(hdev);
-
 	hci_conn_check_pending(hdev);
 }
 
@@ -937,20 +929,6 @@ static void hci_cc_le_set_adv_enable(struct hci_dev *hdev, struct sk_buff *skb)
 	hci_dev_unlock(hdev);
 }
 
-static void hci_cc_le_set_scan_param(struct hci_dev *hdev, struct sk_buff *skb)
-{
-	__u8 status = *((__u8 *) skb->data);
-
-	BT_DBG("%s status 0x%2.2x", hdev->name, status);
-
-	if (status) {
-		hci_dev_lock(hdev);
-		mgmt_start_discovery_failed(hdev, status);
-		hci_dev_unlock(hdev);
-		return;
-	}
-}
-
 static void hci_cc_le_set_scan_enable(struct hci_dev *hdev,
 				      struct sk_buff *skb)
 {
@@ -963,41 +941,16 @@ static void hci_cc_le_set_scan_enable(struct hci_dev *hdev,
 	if (!cp)
 		return;
 
+	if (status)
+		return;
+
 	switch (cp->enable) {
 	case LE_SCAN_ENABLE:
-		if (status) {
-			hci_dev_lock(hdev);
-			mgmt_start_discovery_failed(hdev, status);
-			hci_dev_unlock(hdev);
-			return;
-		}
-
 		set_bit(HCI_LE_SCAN, &hdev->dev_flags);
-
-		hci_dev_lock(hdev);
-		hci_discovery_set_state(hdev, DISCOVERY_FINDING);
-		hci_dev_unlock(hdev);
 		break;
 
 	case LE_SCAN_DISABLE:
-		if (status) {
-			hci_dev_lock(hdev);
-			mgmt_stop_discovery_failed(hdev, status);
-			hci_dev_unlock(hdev);
-			return;
-		}
-
 		clear_bit(HCI_LE_SCAN, &hdev->dev_flags);
-
-		if (hdev->discovery.type == DISCOV_TYPE_INTERLEAVED &&
-		    hdev->discovery.state == DISCOVERY_FINDING) {
-			mgmt_interleaved_discovery(hdev);
-		} else {
-			hci_dev_lock(hdev);
-			hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
-			hci_dev_unlock(hdev);
-		}
-
 		break;
 
 	default:
@@ -1077,18 +1030,10 @@ static void hci_cs_inquiry(struct hci_dev *hdev, __u8 status)
 
 	if (status) {
 		hci_conn_check_pending(hdev);
-		hci_dev_lock(hdev);
-		if (test_bit(HCI_MGMT, &hdev->dev_flags))
-			mgmt_start_discovery_failed(hdev, status);
-		hci_dev_unlock(hdev);
 		return;
 	}
 
 	set_bit(HCI_INQUIRY, &hdev->flags);
-
-	hci_dev_lock(hdev);
-	hci_discovery_set_state(hdev, DISCOVERY_FINDING);
-	hci_dev_unlock(hdev);
 }
 
 static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status)
@@ -2298,10 +2243,6 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
 		hci_cc_user_passkey_neg_reply(hdev, skb);
 		break;
 
-	case HCI_OP_LE_SET_SCAN_PARAM:
-		hci_cc_le_set_scan_param(hdev, skb);
-		break;
-
 	case HCI_OP_LE_SET_ADV_ENABLE:
 		hci_cc_le_set_adv_enable(hdev, skb);
 		break;
@@ -2670,7 +2611,7 @@ static void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
 
 	BT_DBG("%s", hdev->name);
 
-	if (!test_bit(HCI_LINK_KEYS, &hdev->dev_flags))
+	if (!test_bit(HCI_MGMT, &hdev->dev_flags))
 		return;
 
 	hci_dev_lock(hdev);
@@ -2746,7 +2687,7 @@ static void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff *skb)
 		hci_conn_drop(conn);
 	}
 
-	if (test_bit(HCI_LINK_KEYS, &hdev->dev_flags))
+	if (test_bit(HCI_MGMT, &hdev->dev_flags))
 		hci_add_link_key(hdev, conn, 1, &ev->bdaddr, ev->link_key,
 				 ev->key_type, pin_len);
 
diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
index 46c6a148f0b3..0c699cdc3696 100644
--- a/net/bluetooth/hidp/core.c
+++ b/net/bluetooth/hidp/core.c
@@ -76,25 +76,19 @@ static void hidp_copy_session(struct hidp_session *session, struct hidp_conninfo
 	ci->flags = session->flags;
 	ci->state = BT_CONNECTED;
 
-	ci->vendor = 0x0000;
-	ci->product = 0x0000;
-	ci->version = 0x0000;
-
 	if (session->input) {
 		ci->vendor = session->input->id.vendor;
 		ci->product = session->input->id.product;
 		ci->version = session->input->id.version;
 		if (session->input->name)
-			strncpy(ci->name, session->input->name, 128);
+			strlcpy(ci->name, session->input->name, 128);
 		else
-			strncpy(ci->name, "HID Boot Device", 128);
-	}
-
-	if (session->hid) {
+			strlcpy(ci->name, "HID Boot Device", 128);
+	} else if (session->hid) {
 		ci->vendor = session->hid->vendor;
 		ci->product = session->hid->product;
 		ci->version = session->hid->version;
-		strncpy(ci->name, session->hid->name, 128);
+		strlcpy(ci->name, session->hid->name, 128);
 	}
 }
 
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 68843a28a7af..8c3499bec893 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -504,8 +504,10 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
 	if (conn->hcon->type == LE_LINK) {
 		/* LE connection */
 		chan->omtu = L2CAP_DEFAULT_MTU;
-		chan->scid = L2CAP_CID_LE_DATA;
-		chan->dcid = L2CAP_CID_LE_DATA;
+		if (chan->dcid == L2CAP_CID_ATT)
+			chan->scid = L2CAP_CID_ATT;
+		else
+			chan->scid = l2cap_alloc_cid(conn);
 	} else {
 		/* Alloc CID for connection-oriented socket */
 		chan->scid = l2cap_alloc_cid(conn);
@@ -543,6 +545,8 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
 
 	l2cap_chan_hold(chan);
 
+	hci_conn_hold(conn->hcon);
+
 	list_add(&chan->list, &conn->chan_l);
 }
 
@@ -1338,17 +1342,21 @@ static struct l2cap_chan *l2cap_global_chan_by_scid(int state, u16 cid,
 
 static void l2cap_le_conn_ready(struct l2cap_conn *conn)
 {
-	struct sock *parent, *sk;
+	struct sock *parent;
 	struct l2cap_chan *chan, *pchan;
 
 	BT_DBG("");
 
 	/* Check if we have socket listening on cid */
-	pchan = l2cap_global_chan_by_scid(BT_LISTEN, L2CAP_CID_LE_DATA,
+	pchan = l2cap_global_chan_by_scid(BT_LISTEN, L2CAP_CID_ATT,
 					  conn->src, conn->dst);
 	if (!pchan)
 		return;
 
+	/* Client ATT sockets should override the server one */
+	if (__l2cap_get_chan_by_dcid(conn, L2CAP_CID_ATT))
+		return;
+
 	parent = pchan->sk;
 
 	lock_sock(parent);
@@ -1357,17 +1365,12 @@ static void l2cap_le_conn_ready(struct l2cap_conn *conn)
 	if (!chan)
 		goto clean;
 
-	sk = chan->sk;
-
-	hci_conn_hold(conn->hcon);
-	conn->hcon->disc_timeout = HCI_DISCONN_TIMEOUT;
-
-	bacpy(&bt_sk(sk)->src, conn->src);
-	bacpy(&bt_sk(sk)->dst, conn->dst);
+	chan->dcid = L2CAP_CID_ATT;
 
-	l2cap_chan_add(conn, chan);
+	bacpy(&bt_sk(chan->sk)->src, conn->src);
+	bacpy(&bt_sk(chan->sk)->dst, conn->dst);
 
-	l2cap_chan_ready(chan);
+	__l2cap_chan_add(conn, chan);
 
 clean:
 	release_sock(parent);
@@ -1380,14 +1383,17 @@ static void l2cap_conn_ready(struct l2cap_conn *conn)
 
 	BT_DBG("conn %p", conn);
 
-	if (!hcon->out && hcon->type == LE_LINK)
-		l2cap_le_conn_ready(conn);
-
+	/* For outgoing pairing which doesn't necessarily have an
+	 * associated socket (e.g. mgmt_pair_device).
+	 */
 	if (hcon->out && hcon->type == LE_LINK)
 		smp_conn_security(hcon, hcon->pending_sec_level);
 
 	mutex_lock(&conn->chan_lock);
 
+	if (hcon->type == LE_LINK)
+		l2cap_le_conn_ready(conn);
+
 	list_for_each_entry(chan, &conn->chan_l, list) {
 
 		l2cap_chan_lock(chan);
@@ -1792,7 +1798,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
 
 	auth_type = l2cap_get_auth_type(chan);
 
-	if (chan->dcid == L2CAP_CID_LE_DATA)
+	if (bdaddr_type_is_le(dst_type))
 		hcon = hci_connect(hdev, LE_LINK, dst, dst_type,
 				   chan->sec_level, auth_type);
 	else
@@ -1811,16 +1817,10 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
 		goto done;
 	}
 
-	if (hcon->type == LE_LINK) {
-		err = 0;
-
-		if (!list_empty(&conn->chan_l)) {
-			err = -EBUSY;
-			hci_conn_drop(hcon);
-		}
-
-		if (err)
-			goto done;
+	if (cid && __l2cap_get_chan_by_dcid(conn, cid)) {
+		hci_conn_drop(hcon);
+		err = -EBUSY;
+		goto done;
 	}
 
 	/* Update source addr of the socket */
@@ -1830,6 +1830,9 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
 	l2cap_chan_add(conn, chan);
 	l2cap_chan_lock(chan);
 
+	/* l2cap_chan_add takes its own ref so we can drop this one */
+	hci_conn_drop(hcon);
+
 	l2cap_state_change(chan, BT_CONNECT);
 	__set_chan_timer(chan, sk->sk_sndtimeo);
 
@@ -3751,8 +3754,6 @@ static struct l2cap_chan *l2cap_connect(struct l2cap_conn *conn,
 
 	sk = chan->sk;
 
-	hci_conn_hold(conn->hcon);
-
 	bacpy(&bt_sk(sk)->src, conn->src);
 	bacpy(&bt_sk(sk)->dst, conn->dst);
 	chan->psm = psm;
@@ -5292,6 +5293,51 @@ static inline int l2cap_le_sig_cmd(struct l2cap_conn *conn,
 	}
 }
 
+static inline void l2cap_le_sig_channel(struct l2cap_conn *conn,
+					struct sk_buff *skb)
+{
+	u8 *data = skb->data;
+	int len = skb->len;
+	struct l2cap_cmd_hdr cmd;
+	int err;
+
+	l2cap_raw_recv(conn, skb);
+
+	while (len >= L2CAP_CMD_HDR_SIZE) {
+		u16 cmd_len;
+		memcpy(&cmd, data, L2CAP_CMD_HDR_SIZE);
+		data += L2CAP_CMD_HDR_SIZE;
+		len -= L2CAP_CMD_HDR_SIZE;
+
+		cmd_len = le16_to_cpu(cmd.len);
+
+		BT_DBG("code 0x%2.2x len %d id 0x%2.2x", cmd.code, cmd_len,
+		       cmd.ident);
+
+		if (cmd_len > len || !cmd.ident) {
+			BT_DBG("corrupted command");
+			break;
+		}
+
+		err = l2cap_le_sig_cmd(conn, &cmd, data);
+		if (err) {
+			struct l2cap_cmd_rej_unk rej;
+
+			BT_ERR("Wrong link type (%d)", err);
+
+			/* FIXME: Map err to a valid reason */
+			rej.reason = __constant_cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD);
+			l2cap_send_cmd(conn, cmd.ident, L2CAP_COMMAND_REJ,
+				       sizeof(rej), &rej);
+		}
+
+		data += cmd_len;
+		len -= cmd_len;
+	}
+
+	kfree_skb(skb);
+}
+
 static inline void l2cap_sig_channel(struct l2cap_conn *conn,
 				     struct sk_buff *skb)
 {
@@ -5318,11 +5364,7 @@ static inline void l2cap_sig_channel(struct l2cap_conn *conn,
 		break;
 	}
 
-	if (conn->hcon->type == LE_LINK)
-		err = l2cap_le_sig_cmd(conn, &cmd, data);
-	else
-		err = l2cap_bredr_sig_cmd(conn, &cmd, cmd_len, data);
-
+	err = l2cap_bredr_sig_cmd(conn, &cmd, cmd_len, data);
 	if (err) {
 		struct l2cap_cmd_rej_unk rej;
 
@@ -6356,16 +6398,13 @@ static void l2cap_att_channel(struct l2cap_conn *conn,
 {
 	struct l2cap_chan *chan;
 
-	chan = l2cap_global_chan_by_scid(0, L2CAP_CID_LE_DATA,
+	chan = l2cap_global_chan_by_scid(BT_CONNECTED, L2CAP_CID_ATT,
 					 conn->src, conn->dst);
 	if (!chan)
 		goto drop;
 
 	BT_DBG("chan %p, len %d", chan, skb->len);
 
-	if (chan->state != BT_BOUND && chan->state != BT_CONNECTED)
-		goto drop;
-
 	if (chan->imtu < skb->len)
 		goto drop;
 
@@ -6395,6 +6434,8 @@ static void l2cap_recv_frame(struct l2cap_conn *conn, struct sk_buff *skb)
 
 	switch (cid) {
 	case L2CAP_CID_LE_SIGNALING:
+		l2cap_le_sig_channel(conn, skb);
+		break;
 	case L2CAP_CID_SIGNALING:
 		l2cap_sig_channel(conn, skb);
 		break;
@@ -6405,7 +6446,7 @@ static void l2cap_recv_frame(struct l2cap_conn *conn, struct sk_buff *skb)
 		l2cap_conless_channel(conn, psm, skb);
 		break;
 
-	case L2CAP_CID_LE_DATA:
+	case L2CAP_CID_ATT:
 		l2cap_att_channel(conn, skb);
 		break;
 
@@ -6531,7 +6572,7 @@ int l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
 			continue;
 		}
 
-		if (chan->scid == L2CAP_CID_LE_DATA) {
+		if (chan->scid == L2CAP_CID_ATT) {
 			if (!status && encrypt) {
 				chan->sec_level = hcon->sec_level;
 				l2cap_chan_ready(chan);
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index 36fed40c162c..0098af80b213 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -466,7 +466,7 @@ static int l2cap_sock_getsockopt(struct socket *sock, int level, int optname,
 static bool l2cap_valid_mtu(struct l2cap_chan *chan, u16 mtu)
 {
 	switch (chan->scid) {
-	case L2CAP_CID_LE_DATA:
+	case L2CAP_CID_ATT:
 		if (mtu < L2CAP_LE_MIN_MTU)
 			return false;
 		break;
@@ -630,7 +630,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 		conn = chan->conn;
 
 		/*change security for LE channels */
-		if (chan->scid == L2CAP_CID_LE_DATA) {
+		if (chan->scid == L2CAP_CID_ATT) {
 			if (!conn->hcon->out) {
 				err = -EINVAL;
 				break;
diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
index f8ecbc70293d..fedc5399d465 100644
--- a/net/bluetooth/mgmt.c
+++ b/net/bluetooth/mgmt.c
@@ -102,18 +102,6 @@ static const u16 mgmt_events[] = {
 	MGMT_EV_PASSKEY_NOTIFY,
 };
 
-/*
- * These LE scan and inquiry parameters were chosen according to LE General
- * Discovery Procedure specification.
- */
-#define LE_SCAN_WIN			0x12
-#define LE_SCAN_INT			0x12
-#define LE_SCAN_TIMEOUT_LE_ONLY		msecs_to_jiffies(10240)
-#define LE_SCAN_TIMEOUT_BREDR_LE	msecs_to_jiffies(5120)
-
-#define INQUIRY_LEN_BREDR		0x08 /* TGAP(100) */
-#define INQUIRY_LEN_BREDR_LE		0x04 /* TGAP(100)/2 */
-
 #define CACHE_TIMEOUT	msecs_to_jiffies(2 * 1000)
 
 #define hdev_is_powered(hdev) (test_bit(HCI_UP, &hdev->flags) && \
@@ -1748,8 +1736,6 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
 
 	hci_link_keys_clear(hdev);
 
-	set_bit(HCI_LINK_KEYS, &hdev->dev_flags);
-
 	if (cp->debug_keys)
 		set_bit(HCI_DEBUG_KEYS, &hdev->dev_flags);
 	else
@@ -2633,28 +2619,72 @@ static int remove_remote_oob_data(struct sock *sk, struct hci_dev *hdev,
 	return err;
 }
 
-int mgmt_interleaved_discovery(struct hci_dev *hdev)
+static int mgmt_start_discovery_failed(struct hci_dev *hdev, u8 status)
 {
+	struct pending_cmd *cmd;
+	u8 type;
 	int err;
 
-	BT_DBG("%s", hdev->name);
+	hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
 
-	hci_dev_lock(hdev);
+	cmd = mgmt_pending_find(MGMT_OP_START_DISCOVERY, hdev);
+	if (!cmd)
+		return -ENOENT;
 
-	err = hci_do_inquiry(hdev, INQUIRY_LEN_BREDR_LE);
-	if (err < 0)
-		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+	type = hdev->discovery.type;
 
-	hci_dev_unlock(hdev);
+	err = cmd_complete(cmd->sk, hdev->id, cmd->opcode, mgmt_status(status),
+			   &type, sizeof(type));
+	mgmt_pending_remove(cmd);
 
 	return err;
 }
 
+static void start_discovery_complete(struct hci_dev *hdev, u8 status)
+{
+	BT_DBG("status %d", status);
+
+	if (status) {
+		hci_dev_lock(hdev);
+		mgmt_start_discovery_failed(hdev, status);
+		hci_dev_unlock(hdev);
+		return;
+	}
+
+	hci_dev_lock(hdev);
+	hci_discovery_set_state(hdev, DISCOVERY_FINDING);
+	hci_dev_unlock(hdev);
+
+	switch (hdev->discovery.type) {
+	case DISCOV_TYPE_LE:
+		queue_delayed_work(hdev->workqueue, &hdev->le_scan_disable,
+				   DISCOV_LE_TIMEOUT);
+		break;
+
+	case DISCOV_TYPE_INTERLEAVED:
+		queue_delayed_work(hdev->workqueue, &hdev->le_scan_disable,
+				   DISCOV_INTERLEAVED_TIMEOUT);
+		break;
+
+	case DISCOV_TYPE_BREDR:
+		break;
+
+	default:
+		BT_ERR("Invalid discovery type %d", hdev->discovery.type);
+	}
+}
+
 static int start_discovery(struct sock *sk, struct hci_dev *hdev,
 			   void *data, u16 len)
 {
 	struct mgmt_cp_start_discovery *cp = data;
 	struct pending_cmd *cmd;
+	struct hci_cp_le_set_scan_param param_cp;
+	struct hci_cp_le_set_scan_enable enable_cp;
+	struct hci_cp_inquiry inq_cp;
+	struct hci_request req;
+	/* General inquiry access code (GIAC) */
+	u8 lap[3] = { 0x33, 0x8b, 0x9e };
 	int err;
 
 	BT_DBG("%s", hdev->name);
@@ -2687,6 +2717,8 @@ static int start_discovery(struct sock *sk, struct hci_dev *hdev,
 
 	hdev->discovery.type = cp->type;
 
+	hci_req_init(&req, hdev);
+
 	switch (hdev->discovery.type) {
 	case DISCOV_TYPE_BREDR:
 		if (!lmp_bredr_capable(hdev)) {
@@ -2696,10 +2728,23 @@ static int start_discovery(struct sock *sk, struct hci_dev *hdev,
 			goto failed;
 		}
 
-		err = hci_do_inquiry(hdev, INQUIRY_LEN_BREDR);
+		if (test_bit(HCI_INQUIRY, &hdev->flags)) {
+			err = cmd_status(sk, hdev->id, MGMT_OP_START_DISCOVERY,
+					 MGMT_STATUS_BUSY);
+			mgmt_pending_remove(cmd);
+			goto failed;
+		}
+
+		hci_inquiry_cache_flush(hdev);
+
+		memset(&inq_cp, 0, sizeof(inq_cp));
+		memcpy(&inq_cp.lap, lap, sizeof(inq_cp.lap));
+		inq_cp.length = DISCOV_BREDR_INQUIRY_LEN;
+		hci_req_add(&req, HCI_OP_INQUIRY, sizeof(inq_cp), &inq_cp);
 		break;
 
 	case DISCOV_TYPE_LE:
+	case DISCOV_TYPE_INTERLEAVED:
 		if (!test_bit(HCI_LE_ENABLED, &hdev->dev_flags)) {
 			err = cmd_status(sk, hdev->id, MGMT_OP_START_DISCOVERY,
 					 MGMT_STATUS_NOT_SUPPORTED);
@@ -2707,20 +2752,40 @@ static int start_discovery(struct sock *sk, struct hci_dev *hdev,
 			goto failed;
 		}
 
-		err = hci_le_scan(hdev, LE_SCAN_ACTIVE, LE_SCAN_INT,
-				  LE_SCAN_WIN, LE_SCAN_TIMEOUT_LE_ONLY);
-		break;
-
-	case DISCOV_TYPE_INTERLEAVED:
-		if (!lmp_host_le_capable(hdev) || !lmp_bredr_capable(hdev)) {
+		if (hdev->discovery.type == DISCOV_TYPE_INTERLEAVED &&
+		    !lmp_bredr_capable(hdev)) {
 			err = cmd_status(sk, hdev->id, MGMT_OP_START_DISCOVERY,
 					 MGMT_STATUS_NOT_SUPPORTED);
 			mgmt_pending_remove(cmd);
 			goto failed;
 		}
 
-		err = hci_le_scan(hdev, LE_SCAN_ACTIVE, LE_SCAN_INT,
-				  LE_SCAN_WIN, LE_SCAN_TIMEOUT_BREDR_LE);
+		if (test_bit(HCI_LE_PERIPHERAL, &hdev->dev_flags)) {
+			err = cmd_status(sk, hdev->id, MGMT_OP_START_DISCOVERY,
+					 MGMT_STATUS_REJECTED);
+			mgmt_pending_remove(cmd);
+			goto failed;
+		}
+
+		if (test_bit(HCI_LE_SCAN, &hdev->dev_flags)) {
+			err = cmd_status(sk, hdev->id, MGMT_OP_START_DISCOVERY,
+					 MGMT_STATUS_BUSY);
+			mgmt_pending_remove(cmd);
+			goto failed;
+		}
+
+		memset(&param_cp, 0, sizeof(param_cp));
+		param_cp.type = LE_SCAN_ACTIVE;
+		param_cp.interval = cpu_to_le16(DISCOV_LE_SCAN_INT);
+		param_cp.window = cpu_to_le16(DISCOV_LE_SCAN_WIN);
+		hci_req_add(&req, HCI_OP_LE_SET_SCAN_PARAM, sizeof(param_cp),
+			    &param_cp);
+
+		memset(&enable_cp, 0, sizeof(enable_cp));
+		enable_cp.enable = LE_SCAN_ENABLE;
+		enable_cp.filter_dup = LE_SCAN_FILTER_DUP_ENABLE;
+		hci_req_add(&req, HCI_OP_LE_SET_SCAN_ENABLE, sizeof(enable_cp),
+			    &enable_cp);
 		break;
 
 	default:
@@ -2730,6 +2795,7 @@ static int start_discovery(struct sock *sk, struct hci_dev *hdev,
 		goto failed;
 	}
 
+	err = hci_req_run(&req, start_discovery_complete);
 	if (err < 0)
 		mgmt_pending_remove(cmd);
 	else
@@ -2740,6 +2806,39 @@ failed:
 	return err;
 }
 
+static int mgmt_stop_discovery_failed(struct hci_dev *hdev, u8 status)
+{
+	struct pending_cmd *cmd;
+	int err;
+
+	cmd = mgmt_pending_find(MGMT_OP_STOP_DISCOVERY, hdev);
+	if (!cmd)
+		return -ENOENT;
+
+	err = cmd_complete(cmd->sk, hdev->id, cmd->opcode, mgmt_status(status),
+			   &hdev->discovery.type, sizeof(hdev->discovery.type));
+	mgmt_pending_remove(cmd);
+
+	return err;
+}
+
+static void stop_discovery_complete(struct hci_dev *hdev, u8 status)
+{
+	BT_DBG("status %d", status);
+
+	hci_dev_lock(hdev);
+
+	if (status) {
+		mgmt_stop_discovery_failed(hdev, status);
+		goto unlock;
+	}
+
+	hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+
+unlock:
+	hci_dev_unlock(hdev);
+}
+
 static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data,
 			  u16 len)
 {
@@ -2747,6 +2846,8 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data,
 	struct pending_cmd *cmd;
 	struct hci_cp_remote_name_req_cancel cp;
 	struct inquiry_entry *e;
+	struct hci_request req;
+	struct hci_cp_le_set_scan_enable enable_cp;
 	int err;
 
 	BT_DBG("%s", hdev->name);
@@ -2773,12 +2874,20 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data,
 		goto unlock;
 	}
 
+	hci_req_init(&req, hdev);
+
 	switch (hdev->discovery.state) {
 	case DISCOVERY_FINDING:
-		if (test_bit(HCI_INQUIRY, &hdev->flags))
-			err = hci_cancel_inquiry(hdev);
-		else
-			err = hci_cancel_le_scan(hdev);
+		if (test_bit(HCI_INQUIRY, &hdev->flags)) {
+			hci_req_add(&req, HCI_OP_INQUIRY_CANCEL, 0, NULL);
+		} else {
+			cancel_delayed_work(&hdev->le_scan_disable);
+
+			memset(&enable_cp, 0, sizeof(enable_cp));
+			enable_cp.enable = LE_SCAN_DISABLE;
+			hci_req_add(&req, HCI_OP_LE_SET_SCAN_ENABLE,
+				    sizeof(enable_cp), &enable_cp);
+		}
 
 		break;
 
@@ -2796,16 +2905,22 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data,
 		}
 
 		bacpy(&cp.bdaddr, &e->data.bdaddr);
-		err = hci_send_cmd(hdev, HCI_OP_REMOTE_NAME_REQ_CANCEL,
-				   sizeof(cp), &cp);
+		hci_req_add(&req, HCI_OP_REMOTE_NAME_REQ_CANCEL, sizeof(cp),
+			    &cp);
 
 		break;
 
 	default:
 		BT_DBG("unknown discovery state %u", hdev->discovery.state);
-		err = -EFAULT;
+
+		mgmt_pending_remove(cmd);
+		err = cmd_complete(sk, hdev->id, MGMT_OP_STOP_DISCOVERY,
+				   MGMT_STATUS_FAILED, &mgmt_cp->type,
+				   sizeof(mgmt_cp->type));
+		goto unlock;
 	}
 
+	err = hci_req_run(&req, stop_discovery_complete);
 	if (err < 0)
 		mgmt_pending_remove(cmd);
 	else
@@ -4063,6 +4178,9 @@ int mgmt_device_found(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
 	struct mgmt_ev_device_found *ev = (void *) buf;
 	size_t ev_size;
 
+	if (!hci_discovery_active(hdev))
+		return -EPERM;
+
 	/* Leave 5 bytes for a potential CoD field */
 	if (sizeof(*ev) + eir_len + 5 > sizeof(buf))
 		return -EINVAL;
@@ -4114,43 +4232,6 @@ int mgmt_remote_name(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
 			  sizeof(*ev) + eir_len, NULL);
 }
 
-int mgmt_start_discovery_failed(struct hci_dev *hdev, u8 status)
-{
-	struct pending_cmd *cmd;
-	u8 type;
-	int err;
-
-	hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
-
-	cmd = mgmt_pending_find(MGMT_OP_START_DISCOVERY, hdev);
-	if (!cmd)
-		return -ENOENT;
-
-	type = hdev->discovery.type;
-
-	err = cmd_complete(cmd->sk, hdev->id, cmd->opcode, mgmt_status(status),
-			   &type, sizeof(type));
-	mgmt_pending_remove(cmd);
-
-	return err;
-}
-
-int mgmt_stop_discovery_failed(struct hci_dev *hdev, u8 status)
-{
-	struct pending_cmd *cmd;
-	int err;
-
-	cmd = mgmt_pending_find(MGMT_OP_STOP_DISCOVERY, hdev);
-	if (!cmd)
-		return -ENOENT;
-
-	err = cmd_complete(cmd->sk, hdev->id, cmd->opcode, mgmt_status(status),
-			   &hdev->discovery.type, sizeof(hdev->discovery.type));
-	mgmt_pending_remove(cmd);
-
-	return err;
-}
-
 int mgmt_discovering(struct hci_dev *hdev, u8 discovering)
 {
 	struct mgmt_ev_discovering ev;
diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
index 967312803e41..2ef66781fedb 100644
--- a/net/bridge/br_device.c
+++ b/net/bridge/br_device.c
@@ -22,6 +22,9 @@
 #include <asm/uaccess.h>
 #include "br_private.h"
 
+#define COMMON_FEATURES (NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HIGHDMA | \
+			 NETIF_F_GSO_MASK | NETIF_F_HW_CSUM)
+
 /* net device transmit always called with BH disabled */
 netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 {
@@ -55,10 +58,10 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 	skb_pull(skb, ETH_HLEN);
 
 	if (is_broadcast_ether_addr(dest))
-		br_flood_deliver(br, skb);
+		br_flood_deliver(br, skb, false);
 	else if (is_multicast_ether_addr(dest)) {
 		if (unlikely(netpoll_tx_running(dev))) {
-			br_flood_deliver(br, skb);
+			br_flood_deliver(br, skb, false);
 			goto out;
 		}
 		if (br_multicast_rcv(br, NULL, skb)) {
@@ -70,11 +73,11 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb))
 			br_multicast_deliver(mdst, skb);
 		else
-			br_flood_deliver(br, skb);
+			br_flood_deliver(br, skb, false);
 	} else if ((dst = __br_fdb_get(br, dest, vid)) != NULL)
 		br_deliver(dst->dst, skb);
 	else
-		br_flood_deliver(br, skb);
+		br_flood_deliver(br, skb, true);
 
 out:
 	rcu_read_unlock();
@@ -346,12 +349,10 @@ void br_dev_setup(struct net_device *dev)
 	dev->tx_queue_len = 0;
 	dev->priv_flags = IFF_EBRIDGE;
 
-	dev->features = NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HIGHDMA |
-			NETIF_F_GSO_MASK | NETIF_F_HW_CSUM | NETIF_F_LLTX |
-			NETIF_F_NETNS_LOCAL | NETIF_F_HW_VLAN_CTAG_TX;
-	dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HIGHDMA |
-			   NETIF_F_GSO_MASK | NETIF_F_HW_CSUM |
-			   NETIF_F_HW_VLAN_CTAG_TX;
+	dev->features = COMMON_FEATURES | NETIF_F_LLTX | NETIF_F_NETNS_LOCAL |
+			NETIF_F_HW_VLAN_CTAG_TX;
+	dev->hw_features = COMMON_FEATURES | NETIF_F_HW_VLAN_CTAG_TX;
+	dev->vlan_features = COMMON_FEATURES;
 
 	br->dev = dev;
 	spin_lock_init(&br->lock);
diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
index ebfa4443c69b..60aca9109a50 100644
--- a/net/bridge/br_fdb.c
+++ b/net/bridge/br_fdb.c
@@ -707,6 +707,11 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
 		}
 	}
 
+	if (is_zero_ether_addr(addr)) {
+		pr_info("bridge: RTM_NEWNEIGH with invalid ether address\n");
+		return -EINVAL;
+	}
+
 	p = br_port_get_rtnl(dev);
 	if (p == NULL) {
 		pr_info("bridge: RTM_NEWNEIGH %s not a bridge port\n",
diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
index 092b20e4ee4c..4b81b1471789 100644
--- a/net/bridge/br_forward.c
+++ b/net/bridge/br_forward.c
@@ -174,7 +174,8 @@ out:
 static void br_flood(struct net_bridge *br, struct sk_buff *skb,
 		     struct sk_buff *skb0,
 		     void (*__packet_hook)(const struct net_bridge_port *p,
-					   struct sk_buff *skb))
+					   struct sk_buff *skb),
+		     bool unicast)
 {
 	struct net_bridge_port *p;
 	struct net_bridge_port *prev;
@@ -182,6 +183,9 @@ static void br_flood(struct net_bridge *br, struct sk_buff *skb,
 	prev = NULL;
 
 	list_for_each_entry_rcu(p, &br->port_list, list) {
+		/* Do not flood unicast traffic to ports that turn it off */
+		if (unicast && !(p->flags & BR_FLOOD))
+			continue;
 		prev = maybe_deliver(prev, p, skb, __packet_hook);
 		if (IS_ERR(prev))
 			goto out;
@@ -203,16 +207,16 @@ out:
 
 
 /* called with rcu_read_lock */
-void br_flood_deliver(struct net_bridge *br, struct sk_buff *skb)
+void br_flood_deliver(struct net_bridge *br, struct sk_buff *skb, bool unicast)
 {
-	br_flood(br, skb, NULL, __br_deliver);
+	br_flood(br, skb, NULL, __br_deliver, unicast);
 }
 
 /* called under bridge lock */
 void br_flood_forward(struct net_bridge *br, struct sk_buff *skb,
-		      struct sk_buff *skb2)
+		      struct sk_buff *skb2, bool unicast)
 {
-	br_flood(br, skb, skb2, __br_forward);
+	br_flood(br, skb, skb2, __br_forward, unicast);
 }
 
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
index 4cdba60926ff..5623be6b9ecd 100644
--- a/net/bridge/br_if.c
+++ b/net/bridge/br_if.c
@@ -221,7 +221,7 @@ static struct net_bridge_port *new_nbp(struct net_bridge *br,
 	p->path_cost = port_cost(dev);
 	p->priority = 0x8000 >> BR_PORT_BITS;
 	p->port_no = index;
-	p->flags = 0;
+	p->flags = BR_LEARNING | BR_FLOOD;
 	br_init_port(p);
 	p->state = BR_STATE_DISABLED;
 	br_stp_port_timer_init(p);
diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
index 828e2bcc1f52..1b8b8b824cd7 100644
--- a/net/bridge/br_input.c
+++ b/net/bridge/br_input.c
@@ -65,6 +65,7 @@ int br_handle_frame_finish(struct sk_buff *skb)
 	struct net_bridge_fdb_entry *dst;
 	struct net_bridge_mdb_entry *mdst;
 	struct sk_buff *skb2;
+	bool unicast = true;
 	u16 vid = 0;
 
 	if (!p || p->state == BR_STATE_DISABLED)
@@ -75,7 +76,8 @@ int br_handle_frame_finish(struct sk_buff *skb)
 
 	/* insert into forwarding database after filtering to avoid spoofing */
 	br = p->br;
-	br_fdb_update(br, p, eth_hdr(skb)->h_source, vid);
+	if (p->flags & BR_LEARNING)
+		br_fdb_update(br, p, eth_hdr(skb)->h_source, vid);
 
 	if (!is_broadcast_ether_addr(dest) && is_multicast_ether_addr(dest) &&
 	    br_multicast_rcv(br, p, skb))
@@ -94,9 +96,10 @@ int br_handle_frame_finish(struct sk_buff *skb)
 
 	dst = NULL;
 
-	if (is_broadcast_ether_addr(dest))
+	if (is_broadcast_ether_addr(dest)) {
 		skb2 = skb;
-	else if (is_multicast_ether_addr(dest)) {
+		unicast = false;
+	} else if (is_multicast_ether_addr(dest)) {
 		mdst = br_mdb_get(br, skb, vid);
 		if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) {
 			if ((mdst && mdst->mglist) ||
@@ -109,6 +112,7 @@ int br_handle_frame_finish(struct sk_buff *skb)
 		} else
 			skb2 = skb;
 
+		unicast = false;
 		br->dev->stats.multicast++;
 	} else if ((dst = __br_fdb_get(br, dest, vid)) &&
 		   dst->is_local) {
@@ -122,7 +126,7 @@ int br_handle_frame_finish(struct sk_buff *skb)
 			dst->used = jiffies;
 			br_forward(dst->dst, skb, skb2);
 		} else
-			br_flood_forward(br, skb, skb2);
+			br_flood_forward(br, skb, skb2, unicast);
 	}
 
 	if (skb2)
@@ -142,7 +146,8 @@ static int br_handle_local_finish(struct sk_buff *skb)
 	u16 vid = 0;
 
 	br_vlan_get_tag(skb, &vid);
-	br_fdb_update(p->br, p, eth_hdr(skb)->h_source, vid);
+	if (p->flags & BR_LEARNING)
+		br_fdb_update(p->br, p, eth_hdr(skb)->h_source, vid);
 	return 0;	/* process further */
 }
 
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 19942e38fd2d..0daae3ec2355 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -447,7 +447,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry)
 		call_rcu_bh(&p->rcu, br_multicast_free_pg);
 		err = 0;
 
-		if (!mp->ports && !mp->mglist &&
+		if (!mp->ports && !mp->mglist && mp->timer_armed &&
 		    netif_running(br->dev))
 			mod_timer(&mp->timer, jiffies);
 		break;
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index d6448e35e027..69af490cce44 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -23,6 +23,7 @@
 #include <linux/skbuff.h>
 #include <linux/slab.h>
 #include <linux/timer.h>
+#include <linux/inetdevice.h>
 #include <net/ip.h>
 #if IS_ENABLED(CONFIG_IPV6)
 #include <net/ipv6.h>
@@ -269,7 +270,7 @@ static void br_multicast_del_pg(struct net_bridge *br,
 		del_timer(&p->timer);
 		call_rcu_bh(&p->rcu, br_multicast_free_pg);
 
-		if (!mp->ports && !mp->mglist &&
+		if (!mp->ports && !mp->mglist && mp->timer_armed &&
 		    netif_running(br->dev))
 			mod_timer(&mp->timer, jiffies);
 
@@ -381,7 +382,8 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br,
 	iph->frag_off = htons(IP_DF);
 	iph->ttl = 1;
 	iph->protocol = IPPROTO_IGMP;
-	iph->saddr = 0;
+	iph->saddr = br->multicast_query_use_ifaddr ?
+		     inet_select_addr(br->dev, 0, RT_SCOPE_LINK) : 0;
 	iph->daddr = htonl(INADDR_ALLHOSTS_GROUP);
 	((u8 *)&iph[1])[0] = IPOPT_RA;
 	((u8 *)&iph[1])[1] = 4;
@@ -616,8 +618,6 @@ rehash:
 
 	mp->br = br;
 	mp->addr = *group;
-	setup_timer(&mp->timer, br_multicast_group_expired,
-		    (unsigned long)mp);
 
 	hlist_add_head_rcu(&mp->hlist[mdb->ver], &mdb->mhash[hash]);
 	mdb->size++;
@@ -655,7 +655,6 @@ static int br_multicast_add_group(struct net_bridge *br,
 	struct net_bridge_mdb_entry *mp;
 	struct net_bridge_port_group *p;
 	struct net_bridge_port_group __rcu **pp;
-	unsigned long now = jiffies;
 	int err;
 
 	spin_lock(&br->multicast_lock);
@@ -670,7 +669,6 @@ static int br_multicast_add_group(struct net_bridge *br,
 
 	if (!port) {
 		mp->mglist = true;
-		mod_timer(&mp->timer, now + br->multicast_membership_interval);
 		goto out;
 	}
 
@@ -678,7 +676,7 @@ static int br_multicast_add_group(struct net_bridge *br,
 	     (p = mlock_dereference(*pp, br)) != NULL;
 	     pp = &p->next) {
 		if (p->port == port)
-			goto found;
+			goto out;
 		if ((unsigned long)p->port < (unsigned long)port)
 			break;
 	}
@@ -689,8 +687,6 @@ static int br_multicast_add_group(struct net_bridge *br,
 	rcu_assign_pointer(*pp, p);
 	br_mdb_notify(br->dev, port, group, RTM_NEWMDB);
 
-found:
-	mod_timer(&p->timer, now + br->multicast_membership_interval);
 out:
 	err = 0;
 
@@ -1016,7 +1012,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
 #endif
 
 /*
- * Add port to rotuer_list
+ * Add port to router_list
  *  list is maintained ordered by pointer value
  *  and locked by br->multicast_lock and RCU
  */
@@ -1130,6 +1126,10 @@ static int br_ip4_multicast_query(struct net_bridge *br,
 	if (!mp)
 		goto out;
 
+	setup_timer(&mp->timer, br_multicast_group_expired, (unsigned long)mp);
+	mod_timer(&mp->timer, now + br->multicast_membership_interval);
+	mp->timer_armed = true;
+
 	max_delay *= br->multicast_last_member_count;
 
 	if (mp->mglist &&
@@ -1204,6 +1204,10 @@ static int br_ip6_multicast_query(struct net_bridge *br,
 	if (!mp)
 		goto out;
 
+	setup_timer(&mp->timer, br_multicast_group_expired, (unsigned long)mp);
+	mod_timer(&mp->timer, now + br->multicast_membership_interval);
+	mp->timer_armed = true;
+
 	max_delay *= br->multicast_last_member_count;
 	if (mp->mglist &&
 	    (timer_pending(&mp->timer) ?
@@ -1247,6 +1251,32 @@ static void br_multicast_leave_group(struct net_bridge *br,
 	if (!mp)
 		goto out;
 
+	if (br->multicast_querier &&
+	    !timer_pending(&br->multicast_querier_timer)) {
+		__br_multicast_send_query(br, port, &mp->addr);
+
+		time = jiffies + br->multicast_last_member_count *
+		       br->multicast_last_member_interval;
+		mod_timer(port ? &port->multicast_query_timer :
+				 &br->multicast_query_timer, time);
+
+		for (p = mlock_dereference(mp->ports, br);
+		     p != NULL;
+		     p = mlock_dereference(p->next, br)) {
+			if (p->port != port)
+				continue;
+
+			if (!hlist_unhashed(&p->mglist) &&
+			    (timer_pending(&p->timer) ?
+			     time_after(p->timer.expires, time) :
+			     try_to_del_timer_sync(&p->timer) >= 0)) {
+				mod_timer(&p->timer, time);
+			}
+
+			break;
+		}
+	}
+
 	if (port && (port->flags & BR_MULTICAST_FAST_LEAVE)) {
 		struct net_bridge_port_group __rcu **pp;
 
@@ -1262,7 +1292,7 @@ static void br_multicast_leave_group(struct net_bridge *br,
 			call_rcu_bh(&p->rcu, br_multicast_free_pg);
 			br_mdb_notify(br->dev, port, group, RTM_DELMDB);
 
-			if (!mp->ports && !mp->mglist &&
+			if (!mp->ports && !mp->mglist && mp->timer_armed &&
 			    netif_running(br->dev))
 				mod_timer(&mp->timer, jiffies);
 		}
@@ -1274,30 +1304,12 @@ static void br_multicast_leave_group(struct net_bridge *br,
 	       br->multicast_last_member_interval;
 
 	if (!port) {
-		if (mp->mglist &&
+		if (mp->mglist && mp->timer_armed &&
 		    (timer_pending(&mp->timer) ?
 		     time_after(mp->timer.expires, time) :
 		     try_to_del_timer_sync(&mp->timer) >= 0)) {
 			mod_timer(&mp->timer, time);
 		}
-
-		goto out;
-	}
-
-	for (p = mlock_dereference(mp->ports, br);
-	     p != NULL;
-	     p = mlock_dereference(p->next, br)) {
-		if (p->port != port)
-			continue;
-
-		if (!hlist_unhashed(&p->mglist) &&
-		    (timer_pending(&p->timer) ?
-		     time_after(p->timer.expires, time) :
-		     try_to_del_timer_sync(&p->timer) >= 0)) {
-			mod_timer(&p->timer, time);
-		}
-
-		break;
 	}
 
 out:
@@ -1619,6 +1631,7 @@ void br_multicast_init(struct net_bridge *br)
 
 	br->multicast_router = 1;
 	br->multicast_querier = 0;
+	br->multicast_query_use_ifaddr = 0;
 	br->multicast_last_member_count = 2;
 	br->multicast_startup_query_count = 2;
 
@@ -1672,6 +1685,7 @@ void br_multicast_stop(struct net_bridge *br)
 		hlist_for_each_entry_safe(mp, n, &mdb->mhash[i],
 					  hlist[ver]) {
 			del_timer(&mp->timer);
+			mp->timer_armed = false;
 			call_rcu_bh(&mp->rcu, br_multicast_free_group);
 		}
 	}
diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c
index 1ed75bfd8d1d..f87736270eaa 100644
--- a/net/bridge/br_netfilter.c
+++ b/net/bridge/br_netfilter.c
@@ -992,7 +992,7 @@ static struct nf_hook_ops br_nf_ops[] __read_mostly = {
 
 #ifdef CONFIG_SYSCTL
 static
-int brnf_sysctl_call_tables(ctl_table * ctl, int write,
+int brnf_sysctl_call_tables(struct ctl_table *ctl, int write,
 			    void __user * buffer, size_t * lenp, loff_t * ppos)
 {
 	int ret;
@@ -1004,7 +1004,7 @@ int brnf_sysctl_call_tables(ctl_table * ctl, int write,
 	return ret;
 }
 
-static ctl_table brnf_table[] = {
+static struct ctl_table brnf_table[] = {
 	{
 		.procname = "bridge-nf-call-arptables",
 		.data = &brnf_call_arptables,
diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
index 8e3abf564798..1fc30abd3a52 100644
--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -30,6 +30,8 @@ static inline size_t br_port_info_size(void)
 		+ nla_total_size(1)	/* IFLA_BRPORT_GUARD */
 		+ nla_total_size(1)	/* IFLA_BRPORT_PROTECT */
 		+ nla_total_size(1)	/* IFLA_BRPORT_FAST_LEAVE */
+		+ nla_total_size(1)	/* IFLA_BRPORT_LEARNING */
+		+ nla_total_size(1)	/* IFLA_BRPORT_UNICAST_FLOOD */
 		+ 0;
 }
 
@@ -56,7 +58,9 @@ static int br_port_fill_attrs(struct sk_buff *skb,
 	    nla_put_u8(skb, IFLA_BRPORT_MODE, mode) ||
 	    nla_put_u8(skb, IFLA_BRPORT_GUARD, !!(p->flags & BR_BPDU_GUARD)) ||
 	    nla_put_u8(skb, IFLA_BRPORT_PROTECT, !!(p->flags & BR_ROOT_BLOCK)) ||
-	    nla_put_u8(skb, IFLA_BRPORT_FAST_LEAVE, !!(p->flags & BR_MULTICAST_FAST_LEAVE)))
+	    nla_put_u8(skb, IFLA_BRPORT_FAST_LEAVE, !!(p->flags & BR_MULTICAST_FAST_LEAVE)) ||
+	    nla_put_u8(skb, IFLA_BRPORT_LEARNING, !!(p->flags & BR_LEARNING)) ||
+	    nla_put_u8(skb, IFLA_BRPORT_UNICAST_FLOOD, !!(p->flags & BR_FLOOD)))
 		return -EMSGSIZE;
 
 	return 0;
@@ -281,6 +285,8 @@ static const struct nla_policy ifla_brport_policy[IFLA_BRPORT_MAX + 1] = {
281 [IFLA_BRPORT_MODE] = { .type = NLA_U8 }, 285 [IFLA_BRPORT_MODE] = { .type = NLA_U8 },
282 [IFLA_BRPORT_GUARD] = { .type = NLA_U8 }, 286 [IFLA_BRPORT_GUARD] = { .type = NLA_U8 },
283 [IFLA_BRPORT_PROTECT] = { .type = NLA_U8 }, 287 [IFLA_BRPORT_PROTECT] = { .type = NLA_U8 },
288 [IFLA_BRPORT_LEARNING] = { .type = NLA_U8 },
289 [IFLA_BRPORT_UNICAST_FLOOD] = { .type = NLA_U8 },
284}; 290};
285 291
286/* Change the state of the port and notify spanning tree */ 292/* Change the state of the port and notify spanning tree */
@@ -328,6 +334,8 @@ static int br_setport(struct net_bridge_port *p, struct nlattr *tb[])
328 br_set_port_flag(p, tb, IFLA_BRPORT_GUARD, BR_BPDU_GUARD); 334 br_set_port_flag(p, tb, IFLA_BRPORT_GUARD, BR_BPDU_GUARD);
329 br_set_port_flag(p, tb, IFLA_BRPORT_FAST_LEAVE, BR_MULTICAST_FAST_LEAVE); 335 br_set_port_flag(p, tb, IFLA_BRPORT_FAST_LEAVE, BR_MULTICAST_FAST_LEAVE);
330 br_set_port_flag(p, tb, IFLA_BRPORT_PROTECT, BR_ROOT_BLOCK); 336 br_set_port_flag(p, tb, IFLA_BRPORT_PROTECT, BR_ROOT_BLOCK);
337 br_set_port_flag(p, tb, IFLA_BRPORT_LEARNING, BR_LEARNING);
338 br_set_port_flag(p, tb, IFLA_BRPORT_UNICAST_FLOOD, BR_FLOOD);
331 339
332 if (tb[IFLA_BRPORT_COST]) { 340 if (tb[IFLA_BRPORT_COST]) {
333 err = br_stp_set_path_cost(p, nla_get_u32(tb[IFLA_BRPORT_COST])); 341 err = br_stp_set_path_cost(p, nla_get_u32(tb[IFLA_BRPORT_COST]));
diff --git a/net/bridge/br_notify.c b/net/bridge/br_notify.c
index 1644b3e1f947..3a3f371b2841 100644
--- a/net/bridge/br_notify.c
+++ b/net/bridge/br_notify.c
@@ -31,7 +31,7 @@ struct notifier_block br_device_notifier = {
  */
 static int br_device_event(struct notifier_block *unused, unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct net_bridge_port *p;
 	struct net_bridge *br;
 	bool changed_addr;
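
The conversion above (repeated for caif, can, and other subsystems below) changes what netdevice notifiers receive: instead of a bare `struct net_device *`, the chain now passes a `struct netdev_notifier_info *` whose first member is the device, so event-specific data can be carried alongside it, and `netdev_notifier_info_to_dev()` recovers the device. A minimal user-space sketch of that pattern, with stand-in types (`notifier_info_init`, `notifier_info_to_dev`, and the struct layout here are illustrative, not the kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct net_device */
struct net_device {
	int ifindex;
};

/* Base info struct passed to every notifier; more specific info
 * structs can embed this as their first member. */
struct netdev_notifier_info {
	struct net_device *dev;
};

/* Mirrors netdev_notifier_info_init(): record the device in the
 * info struct before invoking the chain. */
static void notifier_info_init(struct netdev_notifier_info *info,
			       struct net_device *dev)
{
	info->dev = dev;
}

/* Mirrors netdev_notifier_info_to_dev(): notifiers recover the
 * device from the info pointer they now receive. */
static struct net_device *
notifier_info_to_dev(const struct netdev_notifier_info *info)
{
	return info->dev;
}
```

Because the base struct is embedded first, a pointer to a richer info struct (like `netdev_notifier_change_info` later in this diff) can be passed through the same chain and still be decoded by notifiers that only care about the device.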
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index d2c043a857b6..3be89b3ce17b 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -112,6 +112,7 @@ struct net_bridge_mdb_entry
 	struct timer_list		timer;
 	struct br_ip			addr;
 	bool				mglist;
+	bool				timer_armed;
 };
 
 struct net_bridge_mdb_htable
@@ -157,6 +158,8 @@ struct net_bridge_port
 #define BR_ROOT_BLOCK		0x00000004
 #define BR_MULTICAST_FAST_LEAVE	0x00000008
 #define BR_ADMIN_COST		0x00000010
+#define BR_LEARNING		0x00000020
+#define BR_FLOOD		0x00000040
 
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
 	u32				multicast_startup_queries_sent;
@@ -249,6 +252,7 @@ struct net_bridge
 
 	u8				multicast_disabled:1;
 	u8				multicast_querier:1;
+	u8				multicast_query_use_ifaddr:1;
 
 	u32				hash_elasticity;
 	u32				hash_max;
@@ -411,9 +415,10 @@ extern int br_dev_queue_push_xmit(struct sk_buff *skb);
 extern void br_forward(const struct net_bridge_port *to,
 		struct sk_buff *skb, struct sk_buff *skb0);
 extern int br_forward_finish(struct sk_buff *skb);
-extern void br_flood_deliver(struct net_bridge *br, struct sk_buff *skb);
+extern void br_flood_deliver(struct net_bridge *br, struct sk_buff *skb,
+			     bool unicast);
 extern void br_flood_forward(struct net_bridge *br, struct sk_buff *skb,
-		struct sk_buff *skb2);
+		struct sk_buff *skb2, bool unicast);
 
 /* br_if.c */
 extern void br_port_carrier_check(struct net_bridge_port *p);
diff --git a/net/bridge/br_sysfs_br.c b/net/bridge/br_sysfs_br.c
index 8baa9c08e1a4..394bb96b6087 100644
--- a/net/bridge/br_sysfs_br.c
+++ b/net/bridge/br_sysfs_br.c
@@ -375,6 +375,31 @@ static ssize_t store_multicast_snooping(struct device *d,
 static DEVICE_ATTR(multicast_snooping, S_IRUGO | S_IWUSR,
 		   show_multicast_snooping, store_multicast_snooping);
 
+static ssize_t show_multicast_query_use_ifaddr(struct device *d,
+					       struct device_attribute *attr,
+					       char *buf)
+{
+	struct net_bridge *br = to_bridge(d);
+	return sprintf(buf, "%d\n", br->multicast_query_use_ifaddr);
+}
+
+static int set_query_use_ifaddr(struct net_bridge *br, unsigned long val)
+{
+	br->multicast_query_use_ifaddr = !!val;
+	return 0;
+}
+
+static ssize_t
+store_multicast_query_use_ifaddr(struct device *d,
+				 struct device_attribute *attr,
+				 const char *buf, size_t len)
+{
+	return store_bridge_parm(d, buf, len, set_query_use_ifaddr);
+}
+static DEVICE_ATTR(multicast_query_use_ifaddr, S_IRUGO | S_IWUSR,
+		   show_multicast_query_use_ifaddr,
+		   store_multicast_query_use_ifaddr);
+
 static ssize_t show_multicast_querier(struct device *d,
 				      struct device_attribute *attr,
 				      char *buf)
@@ -734,6 +759,7 @@ static struct attribute *bridge_attrs[] = {
 	&dev_attr_multicast_router.attr,
 	&dev_attr_multicast_snooping.attr,
 	&dev_attr_multicast_querier.attr,
+	&dev_attr_multicast_query_use_ifaddr.attr,
 	&dev_attr_hash_elasticity.attr,
 	&dev_attr_hash_max.attr,
 	&dev_attr_multicast_last_member_count.attr,
diff --git a/net/bridge/br_sysfs_if.c b/net/bridge/br_sysfs_if.c
index a1ef1b6e14dc..2a2cdb756d51 100644
--- a/net/bridge/br_sysfs_if.c
+++ b/net/bridge/br_sysfs_if.c
@@ -158,6 +158,8 @@ static BRPORT_ATTR(flush, S_IWUSR, NULL, store_flush);
 BRPORT_ATTR_FLAG(hairpin_mode, BR_HAIRPIN_MODE);
 BRPORT_ATTR_FLAG(bpdu_guard, BR_BPDU_GUARD);
 BRPORT_ATTR_FLAG(root_block, BR_ROOT_BLOCK);
+BRPORT_ATTR_FLAG(learning, BR_LEARNING);
+BRPORT_ATTR_FLAG(unicast_flood, BR_FLOOD);
 
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
 static ssize_t show_multicast_router(struct net_bridge_port *p, char *buf)
@@ -195,6 +197,8 @@ static const struct brport_attribute *brport_attrs[] = {
 	&brport_attr_hairpin_mode,
 	&brport_attr_bpdu_guard,
 	&brport_attr_root_block,
+	&brport_attr_learning,
+	&brport_attr_unicast_flood,
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
 	&brport_attr_multicast_router,
 	&brport_attr_multicast_fast_leave,
diff --git a/net/bridge/netfilter/ebt_ulog.c b/net/bridge/netfilter/ebt_ulog.c
index df0364aa12d5..518093802d1d 100644
--- a/net/bridge/netfilter/ebt_ulog.c
+++ b/net/bridge/netfilter/ebt_ulog.c
@@ -271,6 +271,12 @@ static int ebt_ulog_tg_check(const struct xt_tgchk_param *par)
 {
 	struct ebt_ulog_info *uloginfo = par->targinfo;
 
+	if (!par->net->xt.ebt_ulog_warn_deprecated) {
+		pr_info("ebt_ulog is deprecated and it will be removed soon, "
+			"use ebt_nflog instead\n");
+		par->net->xt.ebt_ulog_warn_deprecated = true;
+	}
+
 	if (uloginfo->nlgroup > 31)
 		return -EINVAL;
 
diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
index 3d110c4fc787..ac7802428384 100644
--- a/net/bridge/netfilter/ebtables.c
+++ b/net/bridge/netfilter/ebtables.c
@@ -1339,7 +1339,7 @@ static inline int ebt_make_matchname(const struct ebt_entry_match *m,
 
 	/* ebtables expects 32 bytes long names but xt_match names are 29 bytes
 	   long. Copy 29 bytes and fill remaining bytes with zeroes. */
-	strncpy(name, m->u.match->name, sizeof(name));
+	strlcpy(name, m->u.match->name, sizeof(name));
 	if (copy_to_user(hlp, name, EBT_FUNCTION_MAXNAMELEN))
 		return -EFAULT;
 	return 0;
@@ -1351,7 +1351,7 @@ static inline int ebt_make_watchername(const struct ebt_entry_watcher *w,
 	char __user *hlp = ubase + ((char *)w - base);
 	char name[EBT_FUNCTION_MAXNAMELEN] = {};
 
-	strncpy(name, w->u.watcher->name, sizeof(name));
+	strlcpy(name, w->u.watcher->name, sizeof(name));
 	if (copy_to_user(hlp , name, EBT_FUNCTION_MAXNAMELEN))
 		return -EFAULT;
 	return 0;
@@ -1377,7 +1377,7 @@ ebt_make_names(struct ebt_entry *e, const char *base, char __user *ubase)
 	ret = EBT_WATCHER_ITERATE(e, ebt_make_watchername, base, ubase);
 	if (ret != 0)
 		return ret;
-	strncpy(name, t->u.target->name, sizeof(name));
+	strlcpy(name, t->u.target->name, sizeof(name));
 	if (copy_to_user(hlp, name, EBT_FUNCTION_MAXNAMELEN))
 		return -EFAULT;
 	return 0;
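
The three ebtables hunks above replace `strncpy()` with `strlcpy()` for a classic reason: when the source string is at least as long as the buffer, `strncpy()` fills the buffer completely and never writes a terminating NUL, so the name later copied to userspace may not be a valid C string. `strlcpy()` always NUL-terminates (for non-zero size). A user-space sketch of the semantics, using a local `my_strlcpy` since glibc does not provide `strlcpy`:

```c
#include <string.h>
#include <stddef.h>

/* Minimal strlcpy-style copy: always NUL-terminates when size > 0,
 * truncating if necessary, and returns the full length of src so
 * the caller can detect truncation. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t copy = (len >= size) ? size - 1 : len;

		memcpy(dst, src, copy);
		dst[copy] = '\0';	/* guaranteed terminator */
	}
	return len;
}
```

With a 4-byte buffer and the source `"abcdef"`, `strncpy` would leave `{'a','b','c','d'}` with no terminator, while the strlcpy-style copy stores `"abc"` plus the NUL.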
diff --git a/net/caif/caif_dev.c b/net/caif/caif_dev.c
index 1f9ece1a9c34..4dca159435cf 100644
--- a/net/caif/caif_dev.c
+++ b/net/caif/caif_dev.c
@@ -352,9 +352,9 @@ EXPORT_SYMBOL(caif_enroll_dev);
 
 /* notify Caif of device events */
 static int caif_device_notify(struct notifier_block *me, unsigned long what,
-			      void *arg)
+			      void *ptr)
 {
-	struct net_device *dev = arg;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct caif_device_entry *caifd = NULL;
 	struct caif_dev_common *caifdev;
 	struct cfcnfg *cfg;
diff --git a/net/caif/caif_usb.c b/net/caif/caif_usb.c
index 942e00a425fd..75ed04b78fa4 100644
--- a/net/caif/caif_usb.c
+++ b/net/caif/caif_usb.c
@@ -121,9 +121,9 @@ static struct packet_type caif_usb_type __read_mostly = {
 };
 
 static int cfusbl_device_notify(struct notifier_block *me, unsigned long what,
-				void *arg)
+				void *ptr)
 {
-	struct net_device *dev = arg;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct caif_dev_common common;
 	struct cflayer *layer, *link_support;
 	struct usbnet *usbnet;
diff --git a/net/can/af_can.c b/net/can/af_can.c
index c4e50852c9f4..3ab8dd2e1282 100644
--- a/net/can/af_can.c
+++ b/net/can/af_can.c
@@ -794,9 +794,9 @@ EXPORT_SYMBOL(can_proto_unregister);
  * af_can notifier to create/remove CAN netdevice specific structs
  */
 static int can_notifier(struct notifier_block *nb, unsigned long msg,
-			void *data)
+			void *ptr)
 {
-	struct net_device *dev = (struct net_device *)data;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct dev_rcv_lists *d;
 
 	if (!net_eq(dev_net(dev), &init_net))
diff --git a/net/can/bcm.c b/net/can/bcm.c
index 8f113e6ff327..46f20bfafc0e 100644
--- a/net/can/bcm.c
+++ b/net/can/bcm.c
@@ -1350,9 +1350,9 @@ static int bcm_sendmsg(struct kiocb *iocb, struct socket *sock,
  * notification handler for netdevice status changes
  */
 static int bcm_notifier(struct notifier_block *nb, unsigned long msg,
-			void *data)
+			void *ptr)
 {
-	struct net_device *dev = (struct net_device *)data;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct bcm_sock *bo = container_of(nb, struct bcm_sock, notifier);
 	struct sock *sk = &bo->sk;
 	struct bcm_op *op;
diff --git a/net/can/gw.c b/net/can/gw.c
index 3ee690e8c7d3..2f291f961a17 100644
--- a/net/can/gw.c
+++ b/net/can/gw.c
@@ -445,9 +445,9 @@ static inline void cgw_unregister_filter(struct cgw_job *gwj)
 }
 
 static int cgw_notifier(struct notifier_block *nb,
-			unsigned long msg, void *data)
+			unsigned long msg, void *ptr)
 {
-	struct net_device *dev = (struct net_device *)data;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
diff --git a/net/can/raw.c b/net/can/raw.c
index 1085e65f848e..641e1c895123 100644
--- a/net/can/raw.c
+++ b/net/can/raw.c
@@ -239,9 +239,9 @@ static int raw_enable_allfilters(struct net_device *dev, struct sock *sk)
 }
 
 static int raw_notifier(struct notifier_block *nb,
-			unsigned long msg, void *data)
+			unsigned long msg, void *ptr)
 {
-	struct net_device *dev = (struct net_device *)data;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct raw_sock *ro = container_of(nb, struct raw_sock, notifier);
 	struct sock *sk = &ro->sk;
 
diff --git a/net/core/datagram.c b/net/core/datagram.c
index b71423db7785..6e9ab31e457e 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -56,6 +56,7 @@
 #include <net/sock.h>
 #include <net/tcp_states.h>
 #include <trace/events/skb.h>
+#include <net/ll_poll.h>
 
 /*
  *	Is a socket 'connection oriented' ?
@@ -207,6 +208,10 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags,
 		}
 		spin_unlock_irqrestore(&queue->lock, cpu_flags);
 
+		if (sk_can_busy_loop(sk) &&
+		    sk_busy_loop(sk, flags & MSG_DONTWAIT))
+			continue;
+
 		/* User doesn't want to wait */
 		error = -EAGAIN;
 		if (!timeo)
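
The datagram hunk above is the receive-side hook for the low-latency polling feature from this pull: when the queue is empty and the socket is marked busy-poll capable, the receive path spins on the device's poll routine for a bounded budget before falling back to sleeping, trading CPU for latency. A hedged user-space sketch of that control flow — `device_poll_once`, `busy_loop`, and the fake socket are illustrative stand-ins, not the `sk_busy_loop()` API:

```c
#include <stdbool.h>

/* Toy socket: data "arrives" after a fixed number of poll attempts. */
struct fake_sock {
	int polls_until_data;	/* attempts remaining before a packet shows up */
	int queue_len;		/* packets pulled into the receive queue */
};

/* One pass over the device's receive ring; returns true if it
 * delivered a packet to the socket queue. */
static bool device_poll_once(struct fake_sock *sk)
{
	if (sk->polls_until_data > 0) {
		sk->polls_until_data--;
		return false;	/* nothing yet, keep spinning */
	}
	sk->queue_len++;
	return true;
}

/* Spin for at most `budget` iterations.  Returns true if polling
 * produced data, so the caller retries the dequeue path (the
 * `continue` in the diff) instead of blocking. */
static bool busy_loop(struct fake_sock *sk, int budget)
{
	while (budget-- > 0)
		if (device_poll_once(sk))
			return true;
	return false;	/* budget exhausted: fall back to sleeping */
}
```

The essential shape matches the diff: poll-with-budget first, and only take the `-EAGAIN`/wait path once the busy loop gives up.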
diff --git a/net/core/dev.c b/net/core/dev.c
index faebb398fb46..560dafd83adf 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -129,6 +129,8 @@
 #include <linux/inetdevice.h>
 #include <linux/cpu_rmap.h>
 #include <linux/static_key.h>
+#include <linux/hashtable.h>
+#include <linux/vmalloc.h>
 
 #include "net-sysfs.h"
 
@@ -166,6 +168,12 @@ static struct list_head offload_base __read_mostly;
 DEFINE_RWLOCK(dev_base_lock);
 EXPORT_SYMBOL(dev_base_lock);
 
+/* protects napi_hash addition/deletion and napi_gen_id */
+static DEFINE_SPINLOCK(napi_hash_lock);
+
+static unsigned int napi_gen_id;
+static DEFINE_HASHTABLE(napi_hash, 8);
+
 seqcount_t devnet_rename_seq;
 
 static inline void dev_base_seq_inc(struct net *net)
@@ -1232,9 +1240,7 @@ static int __dev_open(struct net_device *dev)
 	 * If we don't do this there is a chance ndo_poll_controller
 	 * or ndo_poll may be running while we open the device
 	 */
-	ret = netpoll_rx_disable(dev);
-	if (ret)
-		return ret;
+	netpoll_rx_disable(dev);
 
 	ret = call_netdevice_notifiers(NETDEV_PRE_UP, dev);
 	ret = notifier_to_errno(ret);
@@ -1343,9 +1349,7 @@ static int __dev_close(struct net_device *dev)
 	LIST_HEAD(single);
 
 	/* Temporarily disable netpoll until the interface is down */
-	retval = netpoll_rx_disable(dev);
-	if (retval)
-		return retval;
+	netpoll_rx_disable(dev);
 
 	list_add(&dev->unreg_list, &single);
 	retval = __dev_close_many(&single);
@@ -1387,14 +1391,11 @@ static int dev_close_many(struct list_head *head)
  */
 int dev_close(struct net_device *dev)
 {
-	int ret = 0;
 	if (dev->flags & IFF_UP) {
 		LIST_HEAD(single);
 
 		/* Block netpoll rx while the interface is going down */
-		ret = netpoll_rx_disable(dev);
-		if (ret)
-			return ret;
+		netpoll_rx_disable(dev);
 
 		list_add(&dev->unreg_list, &single);
 		dev_close_many(&single);
@@ -1402,7 +1403,7 @@ int dev_close(struct net_device *dev)
 
 		netpoll_rx_enable(dev);
 	}
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(dev_close);
 
@@ -1432,6 +1433,14 @@ void dev_disable_lro(struct net_device *dev)
 }
 EXPORT_SYMBOL(dev_disable_lro);
 
+static int call_netdevice_notifier(struct notifier_block *nb, unsigned long val,
+				   struct net_device *dev)
+{
+	struct netdev_notifier_info info;
+
+	netdev_notifier_info_init(&info, dev);
+	return nb->notifier_call(nb, val, &info);
+}
 
 static int dev_boot_phase = 1;
 
@@ -1464,7 +1473,7 @@ int register_netdevice_notifier(struct notifier_block *nb)
 		goto unlock;
 	for_each_net(net) {
 		for_each_netdev(net, dev) {
-			err = nb->notifier_call(nb, NETDEV_REGISTER, dev);
+			err = call_netdevice_notifier(nb, NETDEV_REGISTER, dev);
 			err = notifier_to_errno(err);
 			if (err)
 				goto rollback;
@@ -1472,7 +1481,7 @@ int register_netdevice_notifier(struct notifier_block *nb)
 			if (!(dev->flags & IFF_UP))
 				continue;
 
-			nb->notifier_call(nb, NETDEV_UP, dev);
+			call_netdevice_notifier(nb, NETDEV_UP, dev);
 		}
 	}
 
@@ -1488,10 +1497,11 @@ rollback:
 				goto outroll;
 
 			if (dev->flags & IFF_UP) {
-				nb->notifier_call(nb, NETDEV_GOING_DOWN, dev);
-				nb->notifier_call(nb, NETDEV_DOWN, dev);
+				call_netdevice_notifier(nb, NETDEV_GOING_DOWN,
+							dev);
+				call_netdevice_notifier(nb, NETDEV_DOWN, dev);
 			}
-			nb->notifier_call(nb, NETDEV_UNREGISTER, dev);
+			call_netdevice_notifier(nb, NETDEV_UNREGISTER, dev);
 		}
 	}
 
@@ -1529,10 +1539,11 @@ int unregister_netdevice_notifier(struct notifier_block *nb)
 	for_each_net(net) {
 		for_each_netdev(net, dev) {
 			if (dev->flags & IFF_UP) {
-				nb->notifier_call(nb, NETDEV_GOING_DOWN, dev);
-				nb->notifier_call(nb, NETDEV_DOWN, dev);
+				call_netdevice_notifier(nb, NETDEV_GOING_DOWN,
+							dev);
+				call_netdevice_notifier(nb, NETDEV_DOWN, dev);
 			}
-			nb->notifier_call(nb, NETDEV_UNREGISTER, dev);
+			call_netdevice_notifier(nb, NETDEV_UNREGISTER, dev);
 		}
 	}
 unlock:
@@ -1542,6 +1553,25 @@ unlock:
 EXPORT_SYMBOL(unregister_netdevice_notifier);
 
 /**
+ *	call_netdevice_notifiers_info - call all network notifier blocks
+ *	@val: value passed unmodified to notifier function
+ *	@dev: net_device pointer passed unmodified to notifier function
+ *	@info: notifier information data
+ *
+ *	Call all network notifier blocks.  Parameters and return value
+ *	are as for raw_notifier_call_chain().
+ */
+
+int call_netdevice_notifiers_info(unsigned long val, struct net_device *dev,
+				  struct netdev_notifier_info *info)
+{
+	ASSERT_RTNL();
+	netdev_notifier_info_init(info, dev);
+	return raw_notifier_call_chain(&netdev_chain, val, info);
+}
+EXPORT_SYMBOL(call_netdevice_notifiers_info);
+
+/**
  *	call_netdevice_notifiers - call all network notifier blocks
  *	@val: value passed unmodified to notifier function
  *	@dev: net_device pointer passed unmodified to notifier function
@@ -1552,8 +1582,9 @@ EXPORT_SYMBOL(unregister_netdevice_notifier);
 
 int call_netdevice_notifiers(unsigned long val, struct net_device *dev)
 {
-	ASSERT_RTNL();
-	return raw_notifier_call_chain(&netdev_chain, val, dev);
+	struct netdev_notifier_info info;
+
+	return call_netdevice_notifiers_info(val, dev, &info);
 }
 EXPORT_SYMBOL(call_netdevice_notifiers);
 
@@ -1655,23 +1686,19 @@ int dev_forward_skb(struct net_device *dev, struct sk_buff *skb)
 		}
 	}
 
-	skb_orphan(skb);
-
 	if (unlikely(!is_skb_forwardable(dev, skb))) {
 		atomic_long_inc(&dev->rx_dropped);
 		kfree_skb(skb);
 		return NET_RX_DROP;
 	}
-	skb->skb_iif = 0;
-	skb->dev = dev;
-	skb_dst_drop(skb);
-	skb->tstamp.tv64 = 0;
-	skb->pkt_type = PACKET_HOST;
+	skb_scrub_packet(skb);
 	skb->protocol = eth_type_trans(skb, dev);
-	skb->mark = 0;
-	secpath_reset(skb);
-	nf_reset(skb);
-	nf_reset_trace(skb);
+
+	/* eth_type_trans() can set pkt_type.
+	 * clear pkt_type _after_ calling eth_type_trans()
+	 */
+	skb->pkt_type = PACKET_HOST;
+
 	return netif_rx(skb);
 }
 EXPORT_SYMBOL_GPL(dev_forward_skb);
@@ -1736,7 +1763,7 @@ static void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev)
 	skb_reset_mac_header(skb2);
 
 	if (skb_network_header(skb2) < skb2->data ||
-	    skb2->network_header > skb2->tail) {
+	    skb_network_header(skb2) > skb_tail_pointer(skb2)) {
 		net_crit_ratelimited("protocol %04x is buggy, dev %s\n",
 				     ntohs(skb2->protocol),
 				     dev->name);
@@ -3099,6 +3126,46 @@ static int rps_ipi_queued(struct softnet_data *sd)
 	return 0;
 }
 
+#ifdef CONFIG_NET_FLOW_LIMIT
+int netdev_flow_limit_table_len __read_mostly = (1 << 12);
+#endif
+
+static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen)
+{
+#ifdef CONFIG_NET_FLOW_LIMIT
+	struct sd_flow_limit *fl;
+	struct softnet_data *sd;
+	unsigned int old_flow, new_flow;
+
+	if (qlen < (netdev_max_backlog >> 1))
+		return false;
+
+	sd = &__get_cpu_var(softnet_data);
+
+	rcu_read_lock();
+	fl = rcu_dereference(sd->flow_limit);
+	if (fl) {
+		new_flow = skb_get_rxhash(skb) & (fl->num_buckets - 1);
+		old_flow = fl->history[fl->history_head];
+		fl->history[fl->history_head] = new_flow;
+
+		fl->history_head++;
+		fl->history_head &= FLOW_LIMIT_HISTORY - 1;
+
+		if (likely(fl->buckets[old_flow]))
+			fl->buckets[old_flow]--;
+
+		if (++fl->buckets[new_flow] > (FLOW_LIMIT_HISTORY >> 1)) {
+			fl->count++;
+			rcu_read_unlock();
+			return true;
+		}
+	}
+	rcu_read_unlock();
+#endif
+	return false;
+}
+
 /*
  * enqueue_to_backlog is called to queue an skb to a per CPU backlog
  * queue (may be a remote CPU queue).
@@ -3108,13 +3175,15 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 {
 	struct softnet_data *sd;
 	unsigned long flags;
+	unsigned int qlen;
 
 	sd = &per_cpu(softnet_data, cpu);
 
 	local_irq_save(flags);
 
 	rps_lock(sd);
-	if (skb_queue_len(&sd->input_pkt_queue) <= netdev_max_backlog) {
+	qlen = skb_queue_len(&sd->input_pkt_queue);
+	if (qlen <= netdev_max_backlog && !skb_flow_limit(skb, qlen)) {
 		if (skb_queue_len(&sd->input_pkt_queue)) {
 enqueue:
 			__skb_queue_tail(&sd->input_pkt_queue, skb);
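
The `skb_flow_limit()` bookkeeping hooked into `enqueue_to_backlog()` above is simple once unpacked: keep a ring of the last `FLOW_LIMIT_HISTORY` enqueued flow hashes plus a count per hashed flow, and once the backlog is at least half full, drop a packet whose flow would own more than half of that window. A user-space model of that sliding-window logic (names, the `HISTORY` size, and indexing buckets directly by the masked flow id are illustrative simplifications):

```c
#include <stdbool.h>

#define HISTORY 8	/* stands in for FLOW_LIMIT_HISTORY */

struct flow_limit {
	unsigned int history[HISTORY];	/* ring of recent flow ids */
	unsigned int head;
	unsigned int buckets[HISTORY];	/* packets in window per flow id */
};

/* Returns true if the packet for `flow` should be dropped. */
static bool flow_limit_check(struct flow_limit *fl, unsigned int flow)
{
	unsigned int new_flow = flow & (HISTORY - 1);
	unsigned int old_flow = fl->history[fl->head];

	/* overwrite the oldest history entry with the new flow */
	fl->history[fl->head] = new_flow;
	fl->head = (fl->head + 1) & (HISTORY - 1);

	/* the evicted entry no longer counts against its flow */
	if (fl->buckets[old_flow])
		fl->buckets[old_flow]--;

	/* drop once this flow exceeds half the history window */
	return ++fl->buckets[new_flow] > (HISTORY >> 1);
}
```

A single flow hammering the queue trips the limit after `HISTORY/2 + 1` back-to-back packets, while traffic spread across flows never accumulates enough weight in any one bucket — which is exactly why this protects small flows from large ones under backlog pressure.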
@@ -3862,7 +3931,7 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
 	NAPI_GRO_CB(skb)->frag0 = NULL;
 	NAPI_GRO_CB(skb)->frag0_len = 0;
 
-	if (skb->mac_header == skb->tail &&
+	if (skb_mac_header(skb) == skb_tail_pointer(skb) &&
 	    pinfo->nr_frags &&
 	    !PageHighMem(skb_frag_page(frag0))) {
 		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
@@ -4106,6 +4175,58 @@ void napi_complete(struct napi_struct *n)
 }
 EXPORT_SYMBOL(napi_complete);
 
+/* must be called under rcu_read_lock(), as we dont take a reference */
+struct napi_struct *napi_by_id(unsigned int napi_id)
+{
+	unsigned int hash = napi_id % HASH_SIZE(napi_hash);
+	struct napi_struct *napi;
+
+	hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node)
+		if (napi->napi_id == napi_id)
+			return napi;
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(napi_by_id);
+
+void napi_hash_add(struct napi_struct *napi)
+{
+	if (!test_and_set_bit(NAPI_STATE_HASHED, &napi->state)) {
+
+		spin_lock(&napi_hash_lock);
+
+		/* 0 is not a valid id, we also skip an id that is taken
+		 * we expect both events to be extremely rare
+		 */
+		napi->napi_id = 0;
+		while (!napi->napi_id) {
+			napi->napi_id = ++napi_gen_id;
+			if (napi_by_id(napi->napi_id))
+				napi->napi_id = 0;
+		}
+
+		hlist_add_head_rcu(&napi->napi_hash_node,
+			&napi_hash[napi->napi_id % HASH_SIZE(napi_hash)]);
+
+		spin_unlock(&napi_hash_lock);
+	}
+}
+EXPORT_SYMBOL_GPL(napi_hash_add);
+
+/* Warning : caller is responsible to make sure rcu grace period
+ * is respected before freeing memory containing @napi
+ */
+void napi_hash_del(struct napi_struct *napi)
+{
+	spin_lock(&napi_hash_lock);
+
+	if (test_and_clear_bit(NAPI_STATE_HASHED, &napi->state))
+		hlist_del_rcu(&napi->napi_hash_node);
+
+	spin_unlock(&napi_hash_lock);
+}
+EXPORT_SYMBOL_GPL(napi_hash_del);
+
 void netif_napi_add(struct net_device *dev, struct napi_struct *napi,
 		    int (*poll)(struct napi_struct *, int), int weight)
 {
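
The id allocation inside `napi_hash_add()` above is worth calling out: ids come from a monotonically increasing generator, `0` is reserved as "no id", and a candidate that is already hashed (possible once the 32-bit counter wraps) is skipped by looping. A user-space model of just that allocation loop — the fixed-size `taken[]` table stands in for the kernel hashtable lookup and is purely illustrative:

```c
#include <stdbool.h>

#define TABLE_SIZE 256	/* illustrative id space so wrap/collision is testable */

static unsigned int gen_id;		/* models napi_gen_id */
static bool taken[TABLE_SIZE];		/* models napi_by_id() != NULL */

/* Allocate a non-zero id that is not currently in use. */
static unsigned int alloc_napi_id(void)
{
	unsigned int id = 0;

	while (!id) {
		id = ++gen_id % TABLE_SIZE;	/* may wrap to 0: retry */
		if (id && taken[id])
			id = 0;			/* id collision: retry */
	}
	taken[id] = true;
	return id;
}
```

Both retry conditions (wrap to zero, collision with a live id) are expected to be extremely rare in practice, as the kernel comment notes, so the loop almost always runs exactly once.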
@@ -4404,7 +4525,7 @@ static int __netdev_upper_dev_link(struct net_device *dev,
 	else
 		list_add_tail_rcu(&upper->list, &dev->upper_dev_list);
 	dev_hold(upper_dev);
-
+	call_netdevice_notifiers(NETDEV_CHANGEUPPER, dev);
 	return 0;
 }
 
@@ -4464,6 +4585,7 @@ void netdev_upper_dev_unlink(struct net_device *dev,
 	list_del_rcu(&upper->list);
 	dev_put(upper_dev);
 	kfree_rcu(upper, rcu);
+	call_netdevice_notifiers(NETDEV_CHANGEUPPER, dev);
 }
 EXPORT_SYMBOL(netdev_upper_dev_unlink);
 
@@ -4734,8 +4856,13 @@ void __dev_notify_flags(struct net_device *dev, unsigned int old_flags)
 	}
 
 	if (dev->flags & IFF_UP &&
-	    (changes & ~(IFF_UP | IFF_PROMISC | IFF_ALLMULTI | IFF_VOLATILE)))
-		call_netdevice_notifiers(NETDEV_CHANGE, dev);
+	    (changes & ~(IFF_UP | IFF_PROMISC | IFF_ALLMULTI | IFF_VOLATILE))) {
+		struct netdev_notifier_change_info change_info;
+
+		change_info.flags_changed = changes;
+		call_netdevice_notifiers_info(NETDEV_CHANGE, dev,
+					      &change_info.info);
+	}
 }
 
 /**
@@ -5158,17 +5285,28 @@ static void netdev_init_one_queue(struct net_device *dev,
5158#endif 5285#endif
5159} 5286}
5160 5287
5288static void netif_free_tx_queues(struct net_device *dev)
5289{
5290 if (is_vmalloc_addr(dev->_tx))
5291 vfree(dev->_tx);
5292 else
5293 kfree(dev->_tx);
5294}
5295
5161static int netif_alloc_netdev_queues(struct net_device *dev) 5296static int netif_alloc_netdev_queues(struct net_device *dev)
5162{ 5297{
5163 unsigned int count = dev->num_tx_queues; 5298 unsigned int count = dev->num_tx_queues;
5164 struct netdev_queue *tx; 5299 struct netdev_queue *tx;
5300 size_t sz = count * sizeof(*tx);
5165 5301
5166 BUG_ON(count < 1); 5302 BUG_ON(count < 1 || count > 0xffff);
5167
5168 tx = kcalloc(count, sizeof(struct netdev_queue), GFP_KERNEL);
5169 if (!tx)
5170 return -ENOMEM;
5171 5303
5304 tx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
5305 if (!tx) {
5306 tx = vzalloc(sz);
5307 if (!tx)
5308 return -ENOMEM;
5309 }
5172 dev->_tx = tx; 5310 dev->_tx = tx;
5173 5311
5174 netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL); 5312 netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
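The `netif_alloc_netdev_queues()` change above is the classic kzalloc-then-vzalloc fallback: try the cheap contiguous allocator quietly (`__GFP_NOWARN`), fall back to vmalloc for large or fragmented requests, and remember at free time which allocator won (`is_vmalloc_addr()` in `netif_free_tx_queues()`). A user-space sketch of the shape, with both allocators modeled by `calloc` and the choice tracked explicitly:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct queue_array {
    void *tx;
    bool from_vzalloc;            /* stands in for is_vmalloc_addr(tx) */
};

static int alloc_queues(struct queue_array *qa, size_t count, size_t elem)
{
    size_t sz = count * elem;

    qa->from_vzalloc = false;
    qa->tx = calloc(1, sz);       /* kzalloc(GFP_KERNEL | __GFP_NOWARN) */
    if (!qa->tx) {
        qa->tx = calloc(1, sz);   /* vzalloc() fallback in the kernel */
        if (!qa->tx)
            return -1;            /* -ENOMEM */
        qa->from_vzalloc = true;
    }
    return 0;
}

/* Mirrors netif_free_tx_queues(): free with the routine that matches
 * whichever allocator actually produced the pointer. */
static void free_queues(struct queue_array *qa)
{
    /* kernel: is_vmalloc_addr(tx) ? vfree(tx) : kfree(tx) */
    free(qa->tx);
    qa->tx = NULL;
}
```

User space has only one allocator, so the flag is decorative here; in the kernel, mismatching `kfree`/`vfree` corrupts memory, which is exactly why the free paths at lines 5857 and 5882 switch to the helper.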
@@ -5269,6 +5407,10 @@ int register_netdevice(struct net_device *dev)
5269 */ 5407 */
5270 dev->hw_enc_features |= NETIF_F_SG; 5408 dev->hw_enc_features |= NETIF_F_SG;
5271 5409
5410 /* Make NETIF_F_SG inheritable to MPLS.
5411 */
5412 dev->mpls_features |= NETIF_F_SG;
5413
5272 ret = call_netdevice_notifiers(NETDEV_POST_INIT, dev); 5414 ret = call_netdevice_notifiers(NETDEV_POST_INIT, dev);
5273 ret = notifier_to_errno(ret); 5415 ret = notifier_to_errno(ret);
5274 if (ret) 5416 if (ret)
@@ -5712,7 +5854,7 @@ free_all:
5712 5854
5713free_pcpu: 5855free_pcpu:
5714 free_percpu(dev->pcpu_refcnt); 5856 free_percpu(dev->pcpu_refcnt);
5715 kfree(dev->_tx); 5857 netif_free_tx_queues(dev);
5716#ifdef CONFIG_RPS 5858#ifdef CONFIG_RPS
5717 kfree(dev->_rx); 5859 kfree(dev->_rx);
5718#endif 5860#endif
@@ -5737,7 +5879,7 @@ void free_netdev(struct net_device *dev)
5737 5879
5738 release_net(dev_net(dev)); 5880 release_net(dev_net(dev));
5739 5881
5740 kfree(dev->_tx); 5882 netif_free_tx_queues(dev);
5741#ifdef CONFIG_RPS 5883#ifdef CONFIG_RPS
5742 kfree(dev->_rx); 5884 kfree(dev->_rx);
5743#endif 5885#endif
@@ -6048,7 +6190,7 @@ netdev_features_t netdev_increment_features(netdev_features_t all,
6048} 6190}
6049EXPORT_SYMBOL(netdev_increment_features); 6191EXPORT_SYMBOL(netdev_increment_features);
6050 6192
6051static struct hlist_head *netdev_create_hash(void) 6193static struct hlist_head * __net_init netdev_create_hash(void)
6052{ 6194{
6053 int i; 6195 int i;
6054 struct hlist_head *hash; 6196 struct hlist_head *hash;
@@ -6304,6 +6446,10 @@ static int __init net_dev_init(void)
6304 sd->backlog.weight = weight_p; 6446 sd->backlog.weight = weight_p;
6305 sd->backlog.gro_list = NULL; 6447 sd->backlog.gro_list = NULL;
6306 sd->backlog.gro_count = 0; 6448 sd->backlog.gro_count = 0;
6449
6450#ifdef CONFIG_NET_FLOW_LIMIT
6451 sd->flow_limit = NULL;
6452#endif
6307 } 6453 }
6308 6454
6309 dev_boot_phase = 0; 6455 dev_boot_phase = 0;
diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
index d23b6682f4e9..5e78d44333b9 100644
--- a/net/core/drop_monitor.c
+++ b/net/core/drop_monitor.c
@@ -295,9 +295,9 @@ static int net_dm_cmd_trace(struct sk_buff *skb,
295} 295}
296 296
297static int dropmon_net_event(struct notifier_block *ev_block, 297static int dropmon_net_event(struct notifier_block *ev_block,
298 unsigned long event, void *ptr) 298 unsigned long event, void *ptr)
299{ 299{
300 struct net_device *dev = ptr; 300 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
301 struct dm_hw_stat_delta *new_stat = NULL; 301 struct dm_hw_stat_delta *new_stat = NULL;
302 struct dm_hw_stat_delta *tmp; 302 struct dm_hw_stat_delta *tmp;
303 303
diff --git a/net/core/dst.c b/net/core/dst.c
index df9cc810ec8e..ca4231ec7347 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -372,7 +372,7 @@ static void dst_ifdown(struct dst_entry *dst, struct net_device *dev,
372static int dst_dev_event(struct notifier_block *this, unsigned long event, 372static int dst_dev_event(struct notifier_block *this, unsigned long event,
373 void *ptr) 373 void *ptr)
374{ 374{
375 struct net_device *dev = ptr; 375 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
376 struct dst_entry *dst, *last = NULL; 376 struct dst_entry *dst, *last = NULL;
377 377
378 switch (event) { 378 switch (event) {
diff --git a/net/core/ethtool.c b/net/core/ethtool.c
index ce91766eeca9..ab5fa6336c84 100644
--- a/net/core/ethtool.c
+++ b/net/core/ethtool.c
@@ -82,6 +82,7 @@ static const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN]
82 [NETIF_F_FSO_BIT] = "tx-fcoe-segmentation", 82 [NETIF_F_FSO_BIT] = "tx-fcoe-segmentation",
83 [NETIF_F_GSO_GRE_BIT] = "tx-gre-segmentation", 83 [NETIF_F_GSO_GRE_BIT] = "tx-gre-segmentation",
84 [NETIF_F_GSO_UDP_TUNNEL_BIT] = "tx-udp_tnl-segmentation", 84 [NETIF_F_GSO_UDP_TUNNEL_BIT] = "tx-udp_tnl-segmentation",
85 [NETIF_F_GSO_MPLS_BIT] = "tx-mpls-segmentation",
85 86
86 [NETIF_F_FCOE_CRC_BIT] = "tx-checksum-fcoe-crc", 87 [NETIF_F_FCOE_CRC_BIT] = "tx-checksum-fcoe-crc",
87 [NETIF_F_SCTP_CSUM_BIT] = "tx-checksum-sctp", 88 [NETIF_F_SCTP_CSUM_BIT] = "tx-checksum-sctp",
@@ -1319,10 +1320,19 @@ static int ethtool_get_dump_data(struct net_device *dev,
1319 if (ret) 1320 if (ret)
1320 return ret; 1321 return ret;
1321 1322
1322 len = (tmp.len > dump.len) ? dump.len : tmp.len; 1323 len = min(tmp.len, dump.len);
1323 if (!len) 1324 if (!len)
1324 return -EFAULT; 1325 return -EFAULT;
1325 1326
1327 /* Don't ever let the driver think there's more space available
1328 * than it requested with .get_dump_flag().
1329 */
1330 dump.len = len;
1331
1332 /* Always allocate enough space to hold the whole thing so that the
1333 * driver does not need to check the length and bother with partial
1334 * dumping.
1335 */
1326 data = vzalloc(tmp.len); 1336 data = vzalloc(tmp.len);
1327 if (!data) 1337 if (!data)
1328 return -ENOMEM; 1338 return -ENOMEM;
@@ -1330,6 +1340,16 @@ static int ethtool_get_dump_data(struct net_device *dev,
1330 if (ret) 1340 if (ret)
1331 goto out; 1341 goto out;
1332 1342
1343 /* There are two sane possibilities:
1344 * 1. The driver's .get_dump_data() does not touch dump.len.
1345 * 2. Or it may set dump.len to how much it really writes, which
1346 * should be tmp.len (or len if it can do a partial dump).
1347 * In any case respond to userspace with the actual length of data
1348 * it's receiving.
1349 */
1350 WARN_ON(dump.len != len && dump.len != tmp.len);
1351 dump.len = len;
1352
1333 if (copy_to_user(useraddr, &dump, sizeof(dump))) { 1353 if (copy_to_user(useraddr, &dump, sizeof(dump))) {
1334 ret = -EFAULT; 1354 ret = -EFAULT;
1335 goto out; 1355 goto out;
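The two `ethtool_get_dump_data()` hunks enforce a length contract in both directions: the driver is never told there is more room than userspace asked for, and userspace is never told more was written than the two sane lengths the driver may report. A small sketch of those two checks (helper names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* len = min(tmp.len, dump.len): cap what the driver may fill by what
 * userspace actually requested. */
static uint32_t dump_len_for_driver(uint32_t driver_len, uint32_t user_len)
{
    return driver_len < user_len ? driver_len : user_len;
}

/* After .get_dump_data() runs, only two reported lengths are sane:
 * untouched (== len) or the full dump size (== tmp.len).  The kernel
 * expresses this as WARN_ON(dump.len != len && dump.len != tmp.len). */
static int dump_len_is_sane(uint32_t reported, uint32_t len, uint32_t full)
{
    return reported == len || reported == full;
}
```

Note the buffer itself is still `vzalloc(tmp.len)` — always full size — so drivers never have to implement partial dumping themselves.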
@@ -1413,7 +1433,7 @@ static int ethtool_get_module_eeprom(struct net_device *dev,
1413 modinfo.eeprom_len); 1433 modinfo.eeprom_len);
1414} 1434}
1415 1435
1416/* The main entry point in this file. Called from net/core/dev.c */ 1436/* The main entry point in this file. Called from net/core/dev_ioctl.c */
1417 1437
1418int dev_ethtool(struct net *net, struct ifreq *ifr) 1438int dev_ethtool(struct net *net, struct ifreq *ifr)
1419{ 1439{
diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
index d5a9f8ead0d8..21735440c44a 100644
--- a/net/core/fib_rules.c
+++ b/net/core/fib_rules.c
@@ -705,9 +705,9 @@ static void detach_rules(struct list_head *rules, struct net_device *dev)
705 705
706 706
707static int fib_rules_event(struct notifier_block *this, unsigned long event, 707static int fib_rules_event(struct notifier_block *this, unsigned long event,
708 void *ptr) 708 void *ptr)
709{ 709{
710 struct net_device *dev = ptr; 710 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
711 struct net *net = dev_net(dev); 711 struct net *net = dev_net(dev);
712 struct fib_rules_ops *ops; 712 struct fib_rules_ops *ops;
713 713
diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
index d9d198aa9fed..6b5b6e7013ca 100644
--- a/net/core/gen_estimator.c
+++ b/net/core/gen_estimator.c
@@ -82,7 +82,7 @@ struct gen_estimator
82{ 82{
83 struct list_head list; 83 struct list_head list;
84 struct gnet_stats_basic_packed *bstats; 84 struct gnet_stats_basic_packed *bstats;
85 struct gnet_stats_rate_est *rate_est; 85 struct gnet_stats_rate_est64 *rate_est;
86 spinlock_t *stats_lock; 86 spinlock_t *stats_lock;
87 int ewma_log; 87 int ewma_log;
88 u64 last_bytes; 88 u64 last_bytes;
@@ -167,7 +167,7 @@ static void gen_add_node(struct gen_estimator *est)
167 167
168static 168static
169struct gen_estimator *gen_find_node(const struct gnet_stats_basic_packed *bstats, 169struct gen_estimator *gen_find_node(const struct gnet_stats_basic_packed *bstats,
170 const struct gnet_stats_rate_est *rate_est) 170 const struct gnet_stats_rate_est64 *rate_est)
171{ 171{
172 struct rb_node *p = est_root.rb_node; 172 struct rb_node *p = est_root.rb_node;
173 173
@@ -203,7 +203,7 @@ struct gen_estimator *gen_find_node(const struct gnet_stats_basic_packed *bstats
203 * 203 *
204 */ 204 */
205int gen_new_estimator(struct gnet_stats_basic_packed *bstats, 205int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
206 struct gnet_stats_rate_est *rate_est, 206 struct gnet_stats_rate_est64 *rate_est,
207 spinlock_t *stats_lock, 207 spinlock_t *stats_lock,
208 struct nlattr *opt) 208 struct nlattr *opt)
209{ 209{
@@ -258,7 +258,7 @@ EXPORT_SYMBOL(gen_new_estimator);
258 * Note : Caller should respect an RCU grace period before freeing stats_lock 258 * Note : Caller should respect an RCU grace period before freeing stats_lock
259 */ 259 */
260void gen_kill_estimator(struct gnet_stats_basic_packed *bstats, 260void gen_kill_estimator(struct gnet_stats_basic_packed *bstats,
261 struct gnet_stats_rate_est *rate_est) 261 struct gnet_stats_rate_est64 *rate_est)
262{ 262{
263 struct gen_estimator *e; 263 struct gen_estimator *e;
264 264
@@ -290,7 +290,7 @@ EXPORT_SYMBOL(gen_kill_estimator);
290 * Returns 0 on success or a negative error code. 290 * Returns 0 on success or a negative error code.
291 */ 291 */
292int gen_replace_estimator(struct gnet_stats_basic_packed *bstats, 292int gen_replace_estimator(struct gnet_stats_basic_packed *bstats,
293 struct gnet_stats_rate_est *rate_est, 293 struct gnet_stats_rate_est64 *rate_est,
294 spinlock_t *stats_lock, struct nlattr *opt) 294 spinlock_t *stats_lock, struct nlattr *opt)
295{ 295{
296 gen_kill_estimator(bstats, rate_est); 296 gen_kill_estimator(bstats, rate_est);
@@ -306,7 +306,7 @@ EXPORT_SYMBOL(gen_replace_estimator);
306 * Returns true if estimator is active, and false if not. 306 * Returns true if estimator is active, and false if not.
307 */ 307 */
308bool gen_estimator_active(const struct gnet_stats_basic_packed *bstats, 308bool gen_estimator_active(const struct gnet_stats_basic_packed *bstats,
309 const struct gnet_stats_rate_est *rate_est) 309 const struct gnet_stats_rate_est64 *rate_est)
310{ 310{
311 bool res; 311 bool res;
312 312
diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
index ddedf211e588..9d3d9e78397b 100644
--- a/net/core/gen_stats.c
+++ b/net/core/gen_stats.c
@@ -143,18 +143,30 @@ EXPORT_SYMBOL(gnet_stats_copy_basic);
143int 143int
144gnet_stats_copy_rate_est(struct gnet_dump *d, 144gnet_stats_copy_rate_est(struct gnet_dump *d,
145 const struct gnet_stats_basic_packed *b, 145 const struct gnet_stats_basic_packed *b,
146 struct gnet_stats_rate_est *r) 146 struct gnet_stats_rate_est64 *r)
147{ 147{
148 struct gnet_stats_rate_est est;
149 int res;
150
148 if (b && !gen_estimator_active(b, r)) 151 if (b && !gen_estimator_active(b, r))
149 return 0; 152 return 0;
150 153
154 est.bps = min_t(u64, UINT_MAX, r->bps);
155 /* we have some time before reaching 2^32 packets per second */
156 est.pps = r->pps;
157
151 if (d->compat_tc_stats) { 158 if (d->compat_tc_stats) {
152 d->tc_stats.bps = r->bps; 159 d->tc_stats.bps = est.bps;
153 d->tc_stats.pps = r->pps; 160 d->tc_stats.pps = est.pps;
154 } 161 }
155 162
156 if (d->tail) 163 if (d->tail) {
157 return gnet_stats_copy(d, TCA_STATS_RATE_EST, r, sizeof(*r)); 164 res = gnet_stats_copy(d, TCA_STATS_RATE_EST, &est, sizeof(est));
165 if (res < 0 || est.bps == r->bps)
166 return res;
167 /* emit 64bit stats only if needed */
168 return gnet_stats_copy(d, TCA_STATS_RATE_EST64, r, sizeof(*r));
169 }
158 170
159 return 0; 171 return 0;
160} 172}
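The `gnet_stats_copy_rate_est()` hunk handles the 32→64-bit migration without breaking old userspace: the legacy `TCA_STATS_RATE_EST` attribute is always emitted with `bps` saturated at `UINT_MAX`, and the new `TCA_STATS_RATE_EST64` attribute is added only when that clamp actually lost information. The two decisions can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

/* est.bps = min_t(u64, UINT_MAX, r->bps): saturating narrowing for the
 * compat attribute. */
static uint32_t clamp_bps(uint64_t bps)
{
    return bps > UINT32_MAX ? UINT32_MAX : (uint32_t)bps;
}

/* "emit 64bit stats only if needed": the wide attribute is added only
 * when the 32-bit copy no longer round-trips. */
static int need_64bit_attr(uint64_t bps)
{
    return (uint64_t)clamp_bps(bps) != bps;
}
```

A 40 Gbit/s link is roughly 5×10⁹ bytes/s, past `UINT_MAX` — the case this change exists for; packets-per-second stays 32-bit for now, per the comment in the hunk.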
diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index 8f82a5cc3851..9c3a839322ba 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -92,6 +92,9 @@ static bool linkwatch_urgent_event(struct net_device *dev)
92 if (dev->ifindex != dev->iflink) 92 if (dev->ifindex != dev->iflink)
93 return true; 93 return true;
94 94
95 if (dev->priv_flags & IFF_TEAM_PORT)
96 return true;
97
95 return netif_carrier_ok(dev) && qdisc_tx_changing(dev); 98 return netif_carrier_ok(dev) && qdisc_tx_changing(dev);
96} 99}
97 100
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 5c56b217b999..b7de821f98df 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -231,7 +231,7 @@ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev)
231 we must kill timers etc. and move 231 we must kill timers etc. and move
232 it to safe state. 232 it to safe state.
233 */ 233 */
234 skb_queue_purge(&n->arp_queue); 234 __skb_queue_purge(&n->arp_queue);
235 n->arp_queue_len_bytes = 0; 235 n->arp_queue_len_bytes = 0;
236 n->output = neigh_blackhole; 236 n->output = neigh_blackhole;
237 if (n->nud_state & NUD_VALID) 237 if (n->nud_state & NUD_VALID)
@@ -286,7 +286,7 @@ static struct neighbour *neigh_alloc(struct neigh_table *tbl, struct net_device
286 if (!n) 286 if (!n)
287 goto out_entries; 287 goto out_entries;
288 288
289 skb_queue_head_init(&n->arp_queue); 289 __skb_queue_head_init(&n->arp_queue);
290 rwlock_init(&n->lock); 290 rwlock_init(&n->lock);
291 seqlock_init(&n->ha_lock); 291 seqlock_init(&n->ha_lock);
292 n->updated = n->used = now; 292 n->updated = n->used = now;
@@ -708,7 +708,9 @@ void neigh_destroy(struct neighbour *neigh)
708 if (neigh_del_timer(neigh)) 708 if (neigh_del_timer(neigh))
709 pr_warn("Impossible event\n"); 709 pr_warn("Impossible event\n");
710 710
711 skb_queue_purge(&neigh->arp_queue); 711 write_lock_bh(&neigh->lock);
712 __skb_queue_purge(&neigh->arp_queue);
713 write_unlock_bh(&neigh->lock);
712 neigh->arp_queue_len_bytes = 0; 714 neigh->arp_queue_len_bytes = 0;
713 715
714 if (dev->netdev_ops->ndo_neigh_destroy) 716 if (dev->netdev_ops->ndo_neigh_destroy)
@@ -858,7 +860,7 @@ static void neigh_invalidate(struct neighbour *neigh)
858 neigh->ops->error_report(neigh, skb); 860 neigh->ops->error_report(neigh, skb);
859 write_lock(&neigh->lock); 861 write_lock(&neigh->lock);
860 } 862 }
861 skb_queue_purge(&neigh->arp_queue); 863 __skb_queue_purge(&neigh->arp_queue);
862 neigh->arp_queue_len_bytes = 0; 864 neigh->arp_queue_len_bytes = 0;
863} 865}
864 866
@@ -1210,7 +1212,7 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
1210 1212
1211 write_lock_bh(&neigh->lock); 1213 write_lock_bh(&neigh->lock);
1212 } 1214 }
1213 skb_queue_purge(&neigh->arp_queue); 1215 __skb_queue_purge(&neigh->arp_queue);
1214 neigh->arp_queue_len_bytes = 0; 1216 neigh->arp_queue_len_bytes = 0;
1215 } 1217 }
1216out: 1218out:
@@ -1419,7 +1421,7 @@ static inline struct neigh_parms *lookup_neigh_parms(struct neigh_table *tbl,
1419 1421
1420 for (p = &tbl->parms; p; p = p->next) { 1422 for (p = &tbl->parms; p; p = p->next) {
1421 if ((p->dev && p->dev->ifindex == ifindex && net_eq(neigh_parms_net(p), net)) || 1423 if ((p->dev && p->dev->ifindex == ifindex && net_eq(neigh_parms_net(p), net)) ||
1422 (!p->dev && !ifindex)) 1424 (!p->dev && !ifindex && net_eq(net, &init_net)))
1423 return p; 1425 return p;
1424 } 1426 }
1425 1427
@@ -1429,15 +1431,11 @@ static inline struct neigh_parms *lookup_neigh_parms(struct neigh_table *tbl,
1429struct neigh_parms *neigh_parms_alloc(struct net_device *dev, 1431struct neigh_parms *neigh_parms_alloc(struct net_device *dev,
1430 struct neigh_table *tbl) 1432 struct neigh_table *tbl)
1431{ 1433{
1432 struct neigh_parms *p, *ref; 1434 struct neigh_parms *p;
1433 struct net *net = dev_net(dev); 1435 struct net *net = dev_net(dev);
1434 const struct net_device_ops *ops = dev->netdev_ops; 1436 const struct net_device_ops *ops = dev->netdev_ops;
1435 1437
1436 ref = lookup_neigh_parms(tbl, net, 0); 1438 p = kmemdup(&tbl->parms, sizeof(*p), GFP_KERNEL);
1437 if (!ref)
1438 return NULL;
1439
1440 p = kmemdup(ref, sizeof(*p), GFP_KERNEL);
1441 if (p) { 1439 if (p) {
1442 p->tbl = tbl; 1440 p->tbl = tbl;
1443 atomic_set(&p->refcnt, 1); 1441 atomic_set(&p->refcnt, 1);
@@ -2053,6 +2051,12 @@ static int neightbl_set(struct sk_buff *skb, struct nlmsghdr *nlh)
2053 } 2051 }
2054 } 2052 }
2055 2053
2054 err = -ENOENT;
2055 if ((tb[NDTA_THRESH1] || tb[NDTA_THRESH2] ||
2056 tb[NDTA_THRESH3] || tb[NDTA_GC_INTERVAL]) &&
2057 !net_eq(net, &init_net))
2058 goto errout_tbl_lock;
2059
2056 if (tb[NDTA_THRESH1]) 2060 if (tb[NDTA_THRESH1])
2057 tbl->gc_thresh1 = nla_get_u32(tb[NDTA_THRESH1]); 2061 tbl->gc_thresh1 = nla_get_u32(tb[NDTA_THRESH1]);
2058 2062
@@ -2765,11 +2769,11 @@ EXPORT_SYMBOL(neigh_app_ns);
2765static int zero; 2769static int zero;
2766static int unres_qlen_max = INT_MAX / SKB_TRUESIZE(ETH_FRAME_LEN); 2770static int unres_qlen_max = INT_MAX / SKB_TRUESIZE(ETH_FRAME_LEN);
2767 2771
2768static int proc_unres_qlen(ctl_table *ctl, int write, void __user *buffer, 2772static int proc_unres_qlen(struct ctl_table *ctl, int write,
2769 size_t *lenp, loff_t *ppos) 2773 void __user *buffer, size_t *lenp, loff_t *ppos)
2770{ 2774{
2771 int size, ret; 2775 int size, ret;
2772 ctl_table tmp = *ctl; 2776 struct ctl_table tmp = *ctl;
2773 2777
2774 tmp.extra1 = &zero; 2778 tmp.extra1 = &zero;
2775 tmp.extra2 = &unres_qlen_max; 2779 tmp.extra2 = &unres_qlen_max;
diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
index 569d355fec3e..2bf83299600a 100644
--- a/net/core/net-procfs.c
+++ b/net/core/net-procfs.c
@@ -146,11 +146,23 @@ static void softnet_seq_stop(struct seq_file *seq, void *v)
146static int softnet_seq_show(struct seq_file *seq, void *v) 146static int softnet_seq_show(struct seq_file *seq, void *v)
147{ 147{
148 struct softnet_data *sd = v; 148 struct softnet_data *sd = v;
149 unsigned int flow_limit_count = 0;
149 150
150 seq_printf(seq, "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x\n", 151#ifdef CONFIG_NET_FLOW_LIMIT
152 struct sd_flow_limit *fl;
153
154 rcu_read_lock();
155 fl = rcu_dereference(sd->flow_limit);
156 if (fl)
157 flow_limit_count = fl->count;
158 rcu_read_unlock();
159#endif
160
161 seq_printf(seq,
162 "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x\n",
151 sd->processed, sd->dropped, sd->time_squeeze, 0, 163 sd->processed, sd->dropped, sd->time_squeeze, 0,
152 0, 0, 0, 0, /* was fastroute */ 164 0, 0, 0, 0, /* was fastroute */
153 sd->cpu_collision, sd->received_rps); 165 sd->cpu_collision, sd->received_rps, flow_limit_count);
154 return 0; 166 return 0;
155} 167}
156 168
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 35a9f0804b6f..2c637e9a0b27 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -248,7 +248,7 @@ static void netpoll_poll_dev(struct net_device *dev)
248 zap_completion_queue(); 248 zap_completion_queue();
249} 249}
250 250
251int netpoll_rx_disable(struct net_device *dev) 251void netpoll_rx_disable(struct net_device *dev)
252{ 252{
253 struct netpoll_info *ni; 253 struct netpoll_info *ni;
254 int idx; 254 int idx;
@@ -258,7 +258,6 @@ int netpoll_rx_disable(struct net_device *dev)
258 if (ni) 258 if (ni)
259 down(&ni->dev_lock); 259 down(&ni->dev_lock);
260 srcu_read_unlock(&netpoll_srcu, idx); 260 srcu_read_unlock(&netpoll_srcu, idx);
261 return 0;
262} 261}
263EXPORT_SYMBOL(netpoll_rx_disable); 262EXPORT_SYMBOL(netpoll_rx_disable);
264 263
@@ -691,25 +690,20 @@ static void netpoll_neigh_reply(struct sk_buff *skb, struct netpoll_info *npinfo
691 send_skb->dev = skb->dev; 690 send_skb->dev = skb->dev;
692 691
693 skb_reset_network_header(send_skb); 692 skb_reset_network_header(send_skb);
694 skb_put(send_skb, sizeof(struct ipv6hdr)); 693 hdr = (struct ipv6hdr *) skb_put(send_skb, sizeof(struct ipv6hdr));
695 hdr = ipv6_hdr(send_skb);
696
697 *(__be32*)hdr = htonl(0x60000000); 694 *(__be32*)hdr = htonl(0x60000000);
698
699 hdr->payload_len = htons(size); 695 hdr->payload_len = htons(size);
700 hdr->nexthdr = IPPROTO_ICMPV6; 696 hdr->nexthdr = IPPROTO_ICMPV6;
701 hdr->hop_limit = 255; 697 hdr->hop_limit = 255;
702 hdr->saddr = *saddr; 698 hdr->saddr = *saddr;
703 hdr->daddr = *daddr; 699 hdr->daddr = *daddr;
704 700
705 send_skb->transport_header = send_skb->tail; 701 icmp6h = (struct icmp6hdr *) skb_put(send_skb, sizeof(struct icmp6hdr));
706 skb_put(send_skb, size);
707
708 icmp6h = (struct icmp6hdr *)skb_transport_header(skb);
709 icmp6h->icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT; 702 icmp6h->icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
710 icmp6h->icmp6_router = 0; 703 icmp6h->icmp6_router = 0;
711 icmp6h->icmp6_solicited = 1; 704 icmp6h->icmp6_solicited = 1;
712 target = (struct in6_addr *)(skb_transport_header(send_skb) + sizeof(struct icmp6hdr)); 705
706 target = (struct in6_addr *) skb_put(send_skb, sizeof(struct in6_addr));
713 *target = msg->target; 707 *target = msg->target;
714 icmp6h->icmp6_cksum = csum_ipv6_magic(saddr, daddr, size, 708 icmp6h->icmp6_cksum = csum_ipv6_magic(saddr, daddr, size,
715 IPPROTO_ICMPV6, 709 IPPROTO_ICMPV6,
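The `netpoll_neigh_reply()` rewrite above replaces "record an offset, `skb_put()`, then re-derive the header pointer" with using `skb_put()`'s return value directly: each call reserves the next chunk of the packet and hands back a pointer to exactly those bytes (which also fixes the old code's bug of reading the transport header from `skb` instead of `send_skb`). A minimal user-space sketch of that tail-pointer building style:

```c
#include <assert.h>
#include <stddef.h>

/* Toy linear buffer standing in for an sk_buff's data area. */
struct buf {
    unsigned char data[256];
    size_t len;
};

/* Like skb_put(): extend the in-use region by n bytes and return a
 * pointer to the newly reserved tail, ready to be cast to a header. */
static void *buf_put(struct buf *b, size_t n)
{
    void *tail = b->data + b->len;
    b->len += n;
    return tail;
}
```

Usage mirrors the hunk: `hdr = buf_put(&b, sizeof(*hdr));` then `icmp6h = buf_put(&b, sizeof(*icmp6h));` — each header lands immediately after the previous one, with no offset bookkeeping to get wrong.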
diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
index 0777d0aa18c3..e533259dce3c 100644
--- a/net/core/netprio_cgroup.c
+++ b/net/core/netprio_cgroup.c
@@ -261,7 +261,7 @@ struct cgroup_subsys net_prio_subsys = {
261static int netprio_device_event(struct notifier_block *unused, 261static int netprio_device_event(struct notifier_block *unused,
262 unsigned long event, void *ptr) 262 unsigned long event, void *ptr)
263{ 263{
264 struct net_device *dev = ptr; 264 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
265 struct netprio_map *old; 265 struct netprio_map *old;
266 266
267 /* 267 /*
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 11f2704c3810..9640972ec50e 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -1921,7 +1921,7 @@ static void pktgen_change_name(const struct pktgen_net *pn, struct net_device *d
1921static int pktgen_device_event(struct notifier_block *unused, 1921static int pktgen_device_event(struct notifier_block *unused,
1922 unsigned long event, void *ptr) 1922 unsigned long event, void *ptr)
1923{ 1923{
1924 struct net_device *dev = ptr; 1924 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
1925 struct pktgen_net *pn = net_generic(dev_net(dev), pg_net_id); 1925 struct pktgen_net *pn = net_generic(dev_net(dev), pg_net_id);
1926 1926
1927 if (pn->pktgen_exiting) 1927 if (pn->pktgen_exiting)
@@ -2627,6 +2627,29 @@ static void pktgen_finalize_skb(struct pktgen_dev *pkt_dev, struct sk_buff *skb,
2627 pgh->tv_usec = htonl(timestamp.tv_usec); 2627 pgh->tv_usec = htonl(timestamp.tv_usec);
2628} 2628}
2629 2629
2630static struct sk_buff *pktgen_alloc_skb(struct net_device *dev,
2631 struct pktgen_dev *pkt_dev,
2632 unsigned int extralen)
2633{
2634 struct sk_buff *skb = NULL;
2635 unsigned int size = pkt_dev->cur_pkt_size + 64 + extralen +
2636 pkt_dev->pkt_overhead;
2637
2638 if (pkt_dev->flags & F_NODE) {
2639 int node = pkt_dev->node >= 0 ? pkt_dev->node : numa_node_id();
2640
2641 skb = __alloc_skb(NET_SKB_PAD + size, GFP_NOWAIT, 0, node);
2642 if (likely(skb)) {
2643 skb_reserve(skb, NET_SKB_PAD);
2644 skb->dev = dev;
2645 }
2646 } else {
2647 skb = __netdev_alloc_skb(dev, size, GFP_NOWAIT);
2648 }
2649
2650 return skb;
2651}
2652
2630static struct sk_buff *fill_packet_ipv4(struct net_device *odev, 2653static struct sk_buff *fill_packet_ipv4(struct net_device *odev,
2631 struct pktgen_dev *pkt_dev) 2654 struct pktgen_dev *pkt_dev)
2632{ 2655{
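The new `pktgen_alloc_skb()` helper folds two near-identical call sites (IPv4 and IPv6, visible in the later hunks) into one function; the parts worth keeping straight are the size it requests and the NUMA node selection. A sketch of just those two computations, lifted from the hunk:

```c
#include <assert.h>
#include <stddef.h>

/* size = cur_pkt_size + 64 + extralen + pkt_overhead, as in the hunk;
 * NET_SKB_PAD headroom is added separately by the F_NODE path. */
static size_t pktgen_skb_size(size_t cur_pkt_size, size_t extralen,
                              size_t pkt_overhead)
{
    return cur_pkt_size + 64 + extralen + pkt_overhead;
}

/* pkt_dev->node >= 0 ? pkt_dev->node : numa_node_id(): honor an
 * explicitly requested NUMA node, else allocate locally. */
static int pktgen_node(int requested_node, int local_node)
{
    return requested_node >= 0 ? requested_node : local_node;
}
```

The callers then shrink to `skb = pktgen_alloc_skb(odev, pkt_dev, datalen);` for IPv4 and `…, 16)` for IPv6, as the later hunks show.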
@@ -2657,32 +2680,13 @@ static struct sk_buff *fill_packet_ipv4(struct net_device *odev,
2657 2680
2658 datalen = (odev->hard_header_len + 16) & ~0xf; 2681 datalen = (odev->hard_header_len + 16) & ~0xf;
2659 2682
2660 if (pkt_dev->flags & F_NODE) { 2683 skb = pktgen_alloc_skb(odev, pkt_dev, datalen);
2661 int node;
2662
2663 if (pkt_dev->node >= 0)
2664 node = pkt_dev->node;
2665 else
2666 node = numa_node_id();
2667
2668 skb = __alloc_skb(NET_SKB_PAD + pkt_dev->cur_pkt_size + 64
2669 + datalen + pkt_dev->pkt_overhead, GFP_NOWAIT, 0, node);
2670 if (likely(skb)) {
2671 skb_reserve(skb, NET_SKB_PAD);
2672 skb->dev = odev;
2673 }
2674 }
2675 else
2676 skb = __netdev_alloc_skb(odev,
2677 pkt_dev->cur_pkt_size + 64
2678 + datalen + pkt_dev->pkt_overhead, GFP_NOWAIT);
2679
2680 if (!skb) { 2684 if (!skb) {
2681 sprintf(pkt_dev->result, "No memory"); 2685 sprintf(pkt_dev->result, "No memory");
2682 return NULL; 2686 return NULL;
2683 } 2687 }
2684 prefetchw(skb->data);
2685 2688
2689 prefetchw(skb->data);
2686 skb_reserve(skb, datalen); 2690 skb_reserve(skb, datalen);
2687 2691
2688 /* Reserve for ethernet and IP header */ 2692 /* Reserve for ethernet and IP header */
@@ -2708,15 +2712,15 @@ static struct sk_buff *fill_packet_ipv4(struct net_device *odev,
2708 *vlan_encapsulated_proto = htons(ETH_P_IP); 2712 *vlan_encapsulated_proto = htons(ETH_P_IP);
2709 } 2713 }
2710 2714
2711 skb->network_header = skb->tail; 2715 skb_set_mac_header(skb, 0);
2712 skb->transport_header = skb->network_header + sizeof(struct iphdr); 2716 skb_set_network_header(skb, skb->len);
2713 skb_put(skb, sizeof(struct iphdr) + sizeof(struct udphdr)); 2717 iph = (struct iphdr *) skb_put(skb, sizeof(struct iphdr));
2718
2719 skb_set_transport_header(skb, skb->len);
2720 udph = (struct udphdr *) skb_put(skb, sizeof(struct udphdr));
2714 skb_set_queue_mapping(skb, queue_map); 2721 skb_set_queue_mapping(skb, queue_map);
2715 skb->priority = pkt_dev->skb_priority; 2722 skb->priority = pkt_dev->skb_priority;
2716 2723
2717 iph = ip_hdr(skb);
2718 udph = udp_hdr(skb);
2719
2720 memcpy(eth, pkt_dev->hh, 12); 2724 memcpy(eth, pkt_dev->hh, 12);
2721 *(__be16 *) & eth[12] = protocol; 2725 *(__be16 *) & eth[12] = protocol;
2722 2726
@@ -2746,8 +2750,6 @@ static struct sk_buff *fill_packet_ipv4(struct net_device *odev,
2746 iph->check = 0; 2750 iph->check = 0;
2747 iph->check = ip_fast_csum((void *)iph, iph->ihl); 2751 iph->check = ip_fast_csum((void *)iph, iph->ihl);
2748 skb->protocol = protocol; 2752 skb->protocol = protocol;
2749 skb->mac_header = (skb->network_header - ETH_HLEN -
2750 pkt_dev->pkt_overhead);
2751 skb->dev = odev; 2753 skb->dev = odev;
2752 skb->pkt_type = PACKET_HOST; 2754 skb->pkt_type = PACKET_HOST;
2753 pktgen_finalize_skb(pkt_dev, skb, datalen); 2755 pktgen_finalize_skb(pkt_dev, skb, datalen);
@@ -2788,15 +2790,13 @@ static struct sk_buff *fill_packet_ipv6(struct net_device *odev,
2788 mod_cur_headers(pkt_dev); 2790 mod_cur_headers(pkt_dev);
2789 queue_map = pkt_dev->cur_queue_map; 2791 queue_map = pkt_dev->cur_queue_map;
2790 2792
2791 skb = __netdev_alloc_skb(odev, 2793 skb = pktgen_alloc_skb(odev, pkt_dev, 16);
2792 pkt_dev->cur_pkt_size + 64
2793 + 16 + pkt_dev->pkt_overhead, GFP_NOWAIT);
2794 if (!skb) { 2794 if (!skb) {
2795 sprintf(pkt_dev->result, "No memory"); 2795 sprintf(pkt_dev->result, "No memory");
2796 return NULL; 2796 return NULL;
2797 } 2797 }
2798 prefetchw(skb->data);
2799 2798
2799 prefetchw(skb->data);
2800 skb_reserve(skb, 16); 2800 skb_reserve(skb, 16);
2801 2801
2802 /* Reserve for ethernet and IP header */ 2802 /* Reserve for ethernet and IP header */
@@ -2822,13 +2822,14 @@ static struct sk_buff *fill_packet_ipv6(struct net_device *odev,
2822 *vlan_encapsulated_proto = htons(ETH_P_IPV6); 2822 *vlan_encapsulated_proto = htons(ETH_P_IPV6);
2823 } 2823 }
2824 2824
2825 skb->network_header = skb->tail; 2825 skb_set_mac_header(skb, 0);
2826 skb->transport_header = skb->network_header + sizeof(struct ipv6hdr); 2826 skb_set_network_header(skb, skb->len);
2827 skb_put(skb, sizeof(struct ipv6hdr) + sizeof(struct udphdr)); 2827 iph = (struct ipv6hdr *) skb_put(skb, sizeof(struct ipv6hdr));
2828
2829 skb_set_transport_header(skb, skb->len);
2830 udph = (struct udphdr *) skb_put(skb, sizeof(struct udphdr));
2828 skb_set_queue_mapping(skb, queue_map); 2831 skb_set_queue_mapping(skb, queue_map);
2829 skb->priority = pkt_dev->skb_priority; 2832 skb->priority = pkt_dev->skb_priority;
2830 iph = ipv6_hdr(skb);
2831 udph = udp_hdr(skb);
2832 2833
2833 memcpy(eth, pkt_dev->hh, 12); 2834 memcpy(eth, pkt_dev->hh, 12);
2834 *(__be16 *) &eth[12] = protocol; 2835 *(__be16 *) &eth[12] = protocol;
@@ -2863,8 +2864,6 @@ static struct sk_buff *fill_packet_ipv6(struct net_device *odev,
2863 iph->daddr = pkt_dev->cur_in6_daddr; 2864 iph->daddr = pkt_dev->cur_in6_daddr;
2864 iph->saddr = pkt_dev->cur_in6_saddr; 2865 iph->saddr = pkt_dev->cur_in6_saddr;
2865 2866
2866 skb->mac_header = (skb->network_header - ETH_HLEN -
2867 pkt_dev->pkt_overhead);
2868 skb->protocol = protocol; 2867 skb->protocol = protocol;
2869 skb->dev = odev; 2868 skb->dev = odev;
2870 skb->pkt_type = PACKET_HOST; 2869 skb->pkt_type = PACKET_HOST;
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index a08bd2b7fe3f..3de740834d1f 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -947,6 +947,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
947 struct ifla_vf_vlan vf_vlan; 947 struct ifla_vf_vlan vf_vlan;
948 struct ifla_vf_tx_rate vf_tx_rate; 948 struct ifla_vf_tx_rate vf_tx_rate;
949 struct ifla_vf_spoofchk vf_spoofchk; 949 struct ifla_vf_spoofchk vf_spoofchk;
950 struct ifla_vf_link_state vf_linkstate;
950 951
951 /* 952 /*
952 * Not all SR-IOV capable drivers support the 953 * Not all SR-IOV capable drivers support the
@@ -956,18 +957,24 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
956 */ 957 */
957 ivi.spoofchk = -1; 958 ivi.spoofchk = -1;
958 memset(ivi.mac, 0, sizeof(ivi.mac)); 959 memset(ivi.mac, 0, sizeof(ivi.mac));
960 /* The default value for VF link state is "auto"
+			 * IFLA_VF_LINK_STATE_AUTO which equals zero
+			 */
+			ivi.linkstate = 0;
 			if (dev->netdev_ops->ndo_get_vf_config(dev, i, &ivi))
 				break;
 			vf_mac.vf =
 				vf_vlan.vf =
 				vf_tx_rate.vf =
-				vf_spoofchk.vf = ivi.vf;
+				vf_spoofchk.vf =
+				vf_linkstate.vf = ivi.vf;
 
 			memcpy(vf_mac.mac, ivi.mac, sizeof(ivi.mac));
 			vf_vlan.vlan = ivi.vlan;
 			vf_vlan.qos = ivi.qos;
 			vf_tx_rate.rate = ivi.tx_rate;
 			vf_spoofchk.setting = ivi.spoofchk;
+			vf_linkstate.link_state = ivi.linkstate;
 			vf = nla_nest_start(skb, IFLA_VF_INFO);
 			if (!vf) {
 				nla_nest_cancel(skb, vfinfo);
@@ -978,7 +985,9 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
 			    nla_put(skb, IFLA_VF_TX_RATE, sizeof(vf_tx_rate),
 				    &vf_tx_rate) ||
 			    nla_put(skb, IFLA_VF_SPOOFCHK, sizeof(vf_spoofchk),
-				    &vf_spoofchk))
+				    &vf_spoofchk) ||
+			    nla_put(skb, IFLA_VF_LINK_STATE, sizeof(vf_linkstate),
+				    &vf_linkstate))
 				goto nla_put_failure;
 			nla_nest_end(skb, vf);
 		}
@@ -1238,6 +1247,15 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr *attr)
 						       ivs->setting);
 			break;
 		}
+		case IFLA_VF_LINK_STATE: {
+			struct ifla_vf_link_state *ivl;
+			ivl = nla_data(vf);
+			err = -EOPNOTSUPP;
+			if (ops->ndo_set_vf_link_state)
+				err = ops->ndo_set_vf_link_state(dev, ivl->vf,
+								 ivl->link_state);
+			break;
+		}
 		default:
 			err = -EINVAL;
 			break;
@@ -2091,10 +2109,6 @@ static int rtnl_fdb_add(struct sk_buff *skb, struct nlmsghdr *nlh)
 	}
 
 	addr = nla_data(tb[NDA_LLADDR]);
-	if (is_zero_ether_addr(addr)) {
-		pr_info("PF_BRIDGE: RTM_NEWNEIGH with invalid ether address\n");
-		return -EINVAL;
-	}
 
 	err = -EOPNOTSUPP;
 
@@ -2192,10 +2206,6 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh)
 	}
 
 	addr = nla_data(tb[NDA_LLADDR]);
-	if (is_zero_ether_addr(addr)) {
-		pr_info("PF_BRIDGE: RTM_DELNEIGH with invalid ether address\n");
-		return -EINVAL;
-	}
 
 	err = -EOPNOTSUPP;
 
@@ -2667,7 +2677,7 @@ static void rtnetlink_rcv(struct sk_buff *skb)
 
 static int rtnetlink_event(struct notifier_block *this, unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	switch (event) {
 	case NETDEV_UP:
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 1c1738cc4538..724bb7cb173f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -199,9 +199,7 @@ struct sk_buff *__alloc_skb_head(gfp_t gfp_mask, int node)
 	skb->truesize = sizeof(struct sk_buff);
 	atomic_set(&skb->users, 1);
 
-#ifdef NET_SKBUFF_DATA_USES_OFFSET
-	skb->mac_header = ~0U;
-#endif
+	skb->mac_header = (typeof(skb->mac_header))~0U;
 out:
 	return skb;
 }
@@ -275,10 +273,8 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	skb->data = data;
 	skb_reset_tail_pointer(skb);
 	skb->end = skb->tail + size;
-#ifdef NET_SKBUFF_DATA_USES_OFFSET
-	skb->mac_header = ~0U;
-	skb->transport_header = ~0U;
-#endif
+	skb->mac_header = (typeof(skb->mac_header))~0U;
+	skb->transport_header = (typeof(skb->transport_header))~0U;
 
 	/* make sure we initialize shinfo sequentially */
 	shinfo = skb_shinfo(skb);
@@ -344,10 +340,8 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
 	skb->data = data;
 	skb_reset_tail_pointer(skb);
 	skb->end = skb->tail + size;
-#ifdef NET_SKBUFF_DATA_USES_OFFSET
-	skb->mac_header = ~0U;
-	skb->transport_header = ~0U;
-#endif
+	skb->mac_header = (typeof(skb->mac_header))~0U;
+	skb->transport_header = (typeof(skb->transport_header))~0U;
 
 	/* make sure we initialize shinfo sequentially */
 	shinfo = skb_shinfo(skb);
@@ -703,6 +697,7 @@ static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
 	new->transport_header = old->transport_header;
 	new->network_header = old->network_header;
 	new->mac_header = old->mac_header;
+	new->inner_protocol = old->inner_protocol;
 	new->inner_transport_header = old->inner_transport_header;
 	new->inner_network_header = old->inner_network_header;
 	new->inner_mac_header = old->inner_mac_header;
@@ -743,6 +738,10 @@ static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
 	new->vlan_tci = old->vlan_tci;
 
 	skb_copy_secmark(new, old);
+
+#ifdef CONFIG_NET_LL_RX_POLL
+	new->napi_id = old->napi_id;
+#endif
 }
 
 /*
@@ -915,18 +914,8 @@ static void skb_headers_offset_update(struct sk_buff *skb, int off)
 
 static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
 {
-#ifndef NET_SKBUFF_DATA_USES_OFFSET
-	/*
-	 * Shift between the two data areas in bytes
-	 */
-	unsigned long offset = new->data - old->data;
-#endif
-
 	__copy_skb_header(new, old);
 
-#ifndef NET_SKBUFF_DATA_USES_OFFSET
-	skb_headers_offset_update(new, offset);
-#endif
 	skb_shinfo(new)->gso_size = skb_shinfo(old)->gso_size;
 	skb_shinfo(new)->gso_segs = skb_shinfo(old)->gso_segs;
 	skb_shinfo(new)->gso_type = skb_shinfo(old)->gso_type;
@@ -1118,7 +1107,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	skb->end = skb->head + size;
 #endif
 	skb->tail += off;
-	skb_headers_offset_update(skb, off);
+	skb_headers_offset_update(skb, nhead);
 	/* Only adjust this if it actually is csum_start rather than csum */
 	if (skb->ip_summed == CHECKSUM_PARTIAL)
 		skb->csum_start += nhead;
@@ -1213,9 +1202,8 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
 	off                  = newheadroom - oldheadroom;
 	if (n->ip_summed == CHECKSUM_PARTIAL)
 		n->csum_start += off;
-#ifdef NET_SKBUFF_DATA_USES_OFFSET
+
 	skb_headers_offset_update(n, off);
-#endif
 
 	return n;
 }
@@ -2558,8 +2546,13 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data,
 	unsigned int block_limit, abs_offset = consumed + st->lower_offset;
 	skb_frag_t *frag;
 
-	if (unlikely(abs_offset >= st->upper_offset))
+	if (unlikely(abs_offset >= st->upper_offset)) {
+		if (st->frag_data) {
+			kunmap_atomic(st->frag_data);
+			st->frag_data = NULL;
+		}
 		return 0;
+	}
 
 next_skb:
 	block_limit = skb_headlen(st->cur_skb) + st->stepped_offset;
@@ -2857,7 +2850,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)
 						 doffset + tnl_hlen);
 
 		if (fskb != skb_shinfo(skb)->frag_list)
-			continue;
+			goto perform_csum_check;
 
 		if (!sg) {
 			nskb->ip_summed = CHECKSUM_NONE;
@@ -2921,6 +2914,7 @@ skip_fraglist:
 		nskb->len += nskb->data_len;
 		nskb->truesize += nskb->data_len;
 
+perform_csum_check:
 		if (!csum) {
 			nskb->csum = skb_checksum(nskb, doffset,
 						  nskb->len - doffset, 0);
@@ -3503,3 +3497,26 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	return true;
 }
 EXPORT_SYMBOL(skb_try_coalesce);
+
+/**
+ * skb_scrub_packet - scrub an skb before sending it to another netns
+ *
+ * @skb: buffer to clean
+ *
+ * skb_scrub_packet can be used to clean an skb before injecting it in
+ * another namespace. We have to clear all information in the skb that
+ * could impact namespace isolation.
+ */
+void skb_scrub_packet(struct sk_buff *skb)
+{
+	skb_orphan(skb);
+	skb->tstamp.tv64 = 0;
+	skb->pkt_type = PACKET_HOST;
+	skb->skb_iif = 0;
+	skb_dst_drop(skb);
+	skb->mark = 0;
+	secpath_reset(skb);
+	nf_reset(skb);
+	nf_reset_trace(skb);
+}
+EXPORT_SYMBOL_GPL(skb_scrub_packet);
diff --git a/net/core/sock.c b/net/core/sock.c
index d6d024cfaaaf..ab06b719f5b1 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -139,6 +139,8 @@
 #include <net/tcp.h>
 #endif
 
+#include <net/ll_poll.h>
+
 static DEFINE_MUTEX(proto_list_mutex);
 static LIST_HEAD(proto_list);
 
@@ -898,6 +900,19 @@ set_rcvbuf:
 		sock_valbool_flag(sk, SOCK_SELECT_ERR_QUEUE, valbool);
 		break;
 
+#ifdef CONFIG_NET_LL_RX_POLL
+	case SO_LL:
+		/* allow unprivileged users to decrease the value */
+		if ((val > sk->sk_ll_usec) && !capable(CAP_NET_ADMIN))
+			ret = -EPERM;
+		else {
+			if (val < 0)
+				ret = -EINVAL;
+			else
+				sk->sk_ll_usec = val;
+		}
+		break;
+#endif
 	default:
 		ret = -ENOPROTOOPT;
 		break;
@@ -1155,6 +1170,12 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
 		v.val = sock_flag(sk, SOCK_SELECT_ERR_QUEUE);
 		break;
 
+#ifdef CONFIG_NET_LL_RX_POLL
+	case SO_LL:
+		v.val = sk->sk_ll_usec;
+		break;
+#endif
+
 	default:
 		return -ENOPROTOOPT;
 	}
@@ -2271,6 +2292,11 @@ void sock_init_data(struct socket *sock, struct sock *sk)
 
 	sk->sk_stamp = ktime_set(-1L, 0);
 
+#ifdef CONFIG_NET_LL_RX_POLL
+	sk->sk_napi_id = 0;
+	sk->sk_ll_usec = sysctl_net_ll_read;
+#endif
+
 	/*
 	 * Before updating sk_refcnt, we must commit prior changes to memory
 	 * (Documentation/RCU/rculist_nulls.txt for details)
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index cfdb46ab3a7f..afc677eadd93 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -19,16 +19,17 @@
 #include <net/ip.h>
 #include <net/sock.h>
 #include <net/net_ratelimit.h>
+#include <net/ll_poll.h>
 
 static int one = 1;
 
 #ifdef CONFIG_RPS
-static int rps_sock_flow_sysctl(ctl_table *table, int write,
+static int rps_sock_flow_sysctl(struct ctl_table *table, int write,
 				void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	unsigned int orig_size, size;
 	int ret, i;
-	ctl_table tmp = {
+	struct ctl_table tmp = {
 		.data = &size,
 		.maxlen = sizeof(size),
 		.mode = table->mode
@@ -87,6 +88,109 @@ static int rps_sock_flow_sysctl(ctl_table *table, int write,
 }
 #endif /* CONFIG_RPS */
 
+#ifdef CONFIG_NET_FLOW_LIMIT
+static DEFINE_MUTEX(flow_limit_update_mutex);
+
+static int flow_limit_cpu_sysctl(struct ctl_table *table, int write,
+				 void __user *buffer, size_t *lenp,
+				 loff_t *ppos)
+{
+	struct sd_flow_limit *cur;
+	struct softnet_data *sd;
+	cpumask_var_t mask;
+	int i, len, ret = 0;
+
+	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (write) {
+		ret = cpumask_parse_user(buffer, *lenp, mask);
+		if (ret)
+			goto done;
+
+		mutex_lock(&flow_limit_update_mutex);
+		len = sizeof(*cur) + netdev_flow_limit_table_len;
+		for_each_possible_cpu(i) {
+			sd = &per_cpu(softnet_data, i);
+			cur = rcu_dereference_protected(sd->flow_limit,
+				lockdep_is_held(&flow_limit_update_mutex));
+			if (cur && !cpumask_test_cpu(i, mask)) {
+				RCU_INIT_POINTER(sd->flow_limit, NULL);
+				synchronize_rcu();
+				kfree(cur);
+			} else if (!cur && cpumask_test_cpu(i, mask)) {
+				cur = kzalloc(len, GFP_KERNEL);
+				if (!cur) {
+					/* not unwinding previous changes */
+					ret = -ENOMEM;
+					goto write_unlock;
+				}
+				cur->num_buckets = netdev_flow_limit_table_len;
+				rcu_assign_pointer(sd->flow_limit, cur);
+			}
+		}
+write_unlock:
+		mutex_unlock(&flow_limit_update_mutex);
+	} else {
+		char kbuf[128];
+
+		if (*ppos || !*lenp) {
+			*lenp = 0;
+			goto done;
+		}
+
+		cpumask_clear(mask);
+		rcu_read_lock();
+		for_each_possible_cpu(i) {
+			sd = &per_cpu(softnet_data, i);
+			if (rcu_dereference(sd->flow_limit))
+				cpumask_set_cpu(i, mask);
+		}
+		rcu_read_unlock();
+
+		len = min(sizeof(kbuf) - 1, *lenp);
+		len = cpumask_scnprintf(kbuf, len, mask);
+		if (!len) {
+			*lenp = 0;
+			goto done;
+		}
+		if (len < *lenp)
+			kbuf[len++] = '\n';
+		if (copy_to_user(buffer, kbuf, len)) {
+			ret = -EFAULT;
+			goto done;
+		}
+		*lenp = len;
+		*ppos += len;
+	}
+
+done:
+	free_cpumask_var(mask);
+	return ret;
+}
+
+static int flow_limit_table_len_sysctl(struct ctl_table *table, int write,
+				       void __user *buffer, size_t *lenp,
+				       loff_t *ppos)
+{
+	unsigned int old, *ptr;
+	int ret;
+
+	mutex_lock(&flow_limit_update_mutex);
+
+	ptr = table->data;
+	old = *ptr;
+	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+	if (!ret && write && !is_power_of_2(*ptr)) {
+		*ptr = old;
+		ret = -EINVAL;
+	}
+
+	mutex_unlock(&flow_limit_update_mutex);
+	return ret;
+}
+#endif /* CONFIG_NET_FLOW_LIMIT */
+
 static struct ctl_table net_core_table[] = {
 #ifdef CONFIG_NET
 	{
@@ -180,6 +284,37 @@ static struct ctl_table net_core_table[] = {
 		.proc_handler = rps_sock_flow_sysctl
 	},
 #endif
+#ifdef CONFIG_NET_FLOW_LIMIT
+	{
+		.procname = "flow_limit_cpu_bitmap",
+		.mode = 0644,
+		.proc_handler = flow_limit_cpu_sysctl
+	},
+	{
+		.procname = "flow_limit_table_len",
+		.data = &netdev_flow_limit_table_len,
+		.maxlen = sizeof(int),
+		.mode = 0644,
+		.proc_handler = flow_limit_table_len_sysctl
+	},
+#endif /* CONFIG_NET_FLOW_LIMIT */
+#ifdef CONFIG_NET_LL_RX_POLL
+	{
+		.procname = "low_latency_poll",
+		.data = &sysctl_net_ll_poll,
+		.maxlen = sizeof(unsigned int),
+		.mode = 0644,
+		.proc_handler = proc_dointvec
+	},
+	{
+		.procname = "low_latency_read",
+		.data = &sysctl_net_ll_read,
+		.maxlen = sizeof(unsigned int),
+		.mode = 0644,
+		.proc_handler = proc_dointvec
+	},
+#
+#endif
 #endif /* CONFIG_NET */
 	{
 		.procname = "netdev_budget",
diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
index c21f200eed93..dd4d506ef923 100644
--- a/net/decnet/af_decnet.c
+++ b/net/decnet/af_decnet.c
@@ -2078,9 +2078,9 @@ out_err:
 }
 
 static int dn_device_event(struct notifier_block *this, unsigned long event,
 			   void *ptr)
 {
-	struct net_device *dev = (struct net_device *)ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
diff --git a/net/decnet/dn_dev.c b/net/decnet/dn_dev.c
index 7d9197063ebb..dd0dfb25f4b1 100644
--- a/net/decnet/dn_dev.c
+++ b/net/decnet/dn_dev.c
@@ -158,11 +158,11 @@ static int max_t3[] = { 8191 }; /* Must fit in 16 bits when multiplied by BCT3MU
 static int min_priority[1];
 static int max_priority[] = { 127 }; /* From DECnet spec */
 
-static int dn_forwarding_proc(ctl_table *, int,
+static int dn_forwarding_proc(struct ctl_table *, int,
 			      void __user *, size_t *, loff_t *);
 static struct dn_dev_sysctl_table {
 	struct ctl_table_header *sysctl_header;
-	ctl_table dn_dev_vars[5];
+	struct ctl_table dn_dev_vars[5];
 } dn_dev_sysctl = {
 	NULL,
 	{
@@ -242,7 +242,7 @@ static void dn_dev_sysctl_unregister(struct dn_dev_parms *parms)
 	}
 }
 
-static int dn_forwarding_proc(ctl_table *table, int write,
+static int dn_forwarding_proc(struct ctl_table *table, int write,
 			      void __user *buffer,
 			      size_t *lenp, loff_t *ppos)
 {
diff --git a/net/decnet/sysctl_net_decnet.c b/net/decnet/sysctl_net_decnet.c
index a55eeccaa72f..5325b541c526 100644
--- a/net/decnet/sysctl_net_decnet.c
+++ b/net/decnet/sysctl_net_decnet.c
@@ -132,7 +132,7 @@ static int parse_addr(__le16 *addr, char *str)
 	return 0;
 }
 
-static int dn_node_address_handler(ctl_table *table, int write,
+static int dn_node_address_handler(struct ctl_table *table, int write,
 				   void __user *buffer,
 				   size_t *lenp, loff_t *ppos)
 {
@@ -183,7 +183,7 @@ static int dn_node_address_handler(ctl_table *table, int write,
 	return 0;
 }
 
-static int dn_def_dev_handler(ctl_table *table, int write,
+static int dn_def_dev_handler(struct ctl_table *table, int write,
 			      void __user *buffer,
 			      size_t *lenp, loff_t *ppos)
 {
@@ -246,7 +246,7 @@ static int dn_def_dev_handler(ctl_table *table, int write,
 	return 0;
 }
 
-static ctl_table dn_table[] = {
+static struct ctl_table dn_table[] = {
 	{
 		.procname = "node_address",
 		.maxlen = 7,
diff --git a/net/ieee802154/6lowpan.c b/net/ieee802154/6lowpan.c
index 55e1fd5b3e56..3b9d5f20bd1c 100644
--- a/net/ieee802154/6lowpan.c
+++ b/net/ieee802154/6lowpan.c
@@ -1352,10 +1352,9 @@ static inline void lowpan_netlink_fini(void)
 }
 
 static int lowpan_device_event(struct notifier_block *unused,
-			       unsigned long event,
-			       void *ptr)
+			       unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	LIST_HEAD(del_list);
 	struct lowpan_dev_record *entry, *tmp;
 
diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig
index 8603ca827104..37cf1a6ea3ad 100644
--- a/net/ipv4/Kconfig
+++ b/net/ipv4/Kconfig
@@ -9,10 +9,7 @@ config IP_MULTICAST
 	  intend to participate in the MBONE, a high bandwidth network on top
 	  of the Internet which carries audio and video broadcasts. More
 	  information about the MBONE is on the WWW at
-	  <http://www.savetz.com/mbone/>. Information about the multicast
-	  capabilities of the various network cards is contained in
-	  <file:Documentation/networking/multicast.txt>. For most people, it's
-	  safe to say N.
+	  <http://www.savetz.com/mbone/>. For most people, it's safe to say N.
 
 config IP_ADVANCED_ROUTER
 	bool "IP: advanced router"
@@ -223,10 +220,8 @@ config IP_MROUTE
 	  packets that have several destination addresses. It is needed on the
 	  MBONE, a high bandwidth network on top of the Internet which carries
 	  audio and video broadcasts. In order to do that, you would most
-	  likely run the program mrouted. Information about the multicast
-	  capabilities of the various network cards is contained in
-	  <file:Documentation/networking/multicast.txt>. If you haven't heard
-	  about it, you don't need it.
+	  likely run the program mrouted. If you haven't heard about it, you
+	  don't need it.
 
 config IP_MROUTE_MULTIPLE_TABLES
 	bool "IP: multicast policy routing"
diff --git a/net/ipv4/Makefile b/net/ipv4/Makefile
index 089cb9f36387..4b81e91c80fe 100644
--- a/net/ipv4/Makefile
+++ b/net/ipv4/Makefile
@@ -8,10 +8,10 @@ obj-y := route.o inetpeer.o protocol.o \
 	     inet_timewait_sock.o inet_connection_sock.o \
 	     tcp.o tcp_input.o tcp_output.o tcp_timer.o tcp_ipv4.o \
 	     tcp_minisocks.o tcp_cong.o tcp_metrics.o tcp_fastopen.o \
-	     datagram.o raw.o udp.o udplite.o \
-	     arp.o icmp.o devinet.o af_inet.o igmp.o \
+	     tcp_offload.o datagram.o raw.o udp.o udplite.o \
+	     udp_offload.o arp.o icmp.o devinet.o af_inet.o igmp.o \
 	     fib_frontend.o fib_semantics.o fib_trie.o \
-	     inet_fragment.o ping.o
+	     inet_fragment.o ping.o ip_tunnel_core.o
 
 obj-$(CONFIG_NET_IP_TUNNEL) += ip_tunnel.o
 obj-$(CONFIG_SYSCTL) += sysctl_net_ipv4.o
@@ -19,6 +19,7 @@ obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_IP_MULTIPLE_TABLES) += fib_rules.o
 obj-$(CONFIG_IP_MROUTE) += ipmr.o
 obj-$(CONFIG_NET_IPIP) += ipip.o
+gre-y := gre_demux.o gre_offload.o
 obj-$(CONFIG_NET_IPGRE_DEMUX) += gre.o
 obj-$(CONFIG_NET_IPGRE) += ip_gre.o
 obj-$(CONFIG_NET_IPVTI) += ip_vti.o
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index d01be2a3ae53..b4d0be2b7ce9 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1295,6 +1295,7 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb,
 		       SKB_GSO_GRE |
 		       SKB_GSO_TCPV6 |
 		       SKB_GSO_UDP_TUNNEL |
+		       SKB_GSO_MPLS |
 		       0)))
 		goto out;
 
@@ -1384,7 +1385,7 @@ static struct sk_buff **inet_gro_receive(struct sk_buff **head,
 		goto out_unlock;
 
 	id = ntohl(*(__be32 *)&iph->id);
-	flush = (u16)((ntohl(*(__be32 *)iph) ^ skb_gro_len(skb)) | (id ^ IP_DF));
+	flush = (u16)((ntohl(*(__be32 *)iph) ^ skb_gro_len(skb)) | (id & ~IP_DF));
 	id >>= 16;
 
 	for (p = *head; p; p = p->next) {
@@ -1406,6 +1407,7 @@ static struct sk_buff **inet_gro_receive(struct sk_buff **head,
 		NAPI_GRO_CB(p)->flush |=
 			(iph->ttl ^ iph2->ttl) |
 			(iph->tos ^ iph2->tos) |
+			(__force int)((iph->frag_off ^ iph2->frag_off) & htons(IP_DF)) |
 			((u16)(ntohs(iph2->id) + NAPI_GRO_CB(p)->count) ^ id);
 
 		NAPI_GRO_CB(p)->flush |= flush;
@@ -1557,15 +1559,6 @@ static const struct net_protocol tcp_protocol = {
 	.netns_ok = 1,
 };
 
-static const struct net_offload tcp_offload = {
-	.callbacks = {
-		.gso_send_check = tcp_v4_gso_send_check,
-		.gso_segment = tcp_tso_segment,
-		.gro_receive = tcp4_gro_receive,
-		.gro_complete = tcp4_gro_complete,
-	},
-};
-
 static const struct net_protocol udp_protocol = {
 	.handler = udp_rcv,
 	.err_handler = udp_err,
@@ -1573,13 +1566,6 @@ static const struct net_protocol udp_protocol = {
 	.netns_ok = 1,
 };
 
-static const struct net_offload udp_offload = {
-	.callbacks = {
-		.gso_send_check = udp4_ufo_send_check,
-		.gso_segment = udp4_ufo_fragment,
-	},
-};
-
 static const struct net_protocol icmp_protocol = {
 	.handler = icmp_rcv,
 	.err_handler = icmp_err,
@@ -1679,10 +1665,10 @@ static int __init ipv4_offload_init(void)
 	/*
 	 * Add offloads
 	 */
-	if (inet_add_offload(&udp_offload, IPPROTO_UDP) < 0)
+	if (udpv4_offload_init() < 0)
 		pr_crit("%s: Cannot add UDP protocol offload\n", __func__);
-	if (inet_add_offload(&tcp_offload, IPPROTO_TCP) < 0)
-		pr_crit("%s: Cannot add TCP protocol offlaod\n", __func__);
+	if (tcpv4_offload_init() < 0)
+		pr_crit("%s: Cannot add TCP protocol offload\n", __func__);
 
 	dev_add_offload(&ip_packet_offload);
 	return 0;
diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index 2e7f1948216f..717902669d2f 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -419,12 +419,9 @@ static void ah4_err(struct sk_buff *skb, u32 info)
 	if (!x)
 		return;
 
-	if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) {
-		atomic_inc(&flow_cache_genid);
-		rt_genid_bump(net);
-
+	if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
 		ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_AH, 0);
-	} else
+	else
 		ipv4_redirect(skb, net, 0, 0, IPPROTO_AH, 0);
 	xfrm_state_put(x);
 }
diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index 247ec1951c35..4429b013f269 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -1234,13 +1234,19 @@ out:
 static int arp_netdev_event(struct notifier_block *this, unsigned long event,
 			    void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+	struct netdev_notifier_change_info *change_info;
 
 	switch (event) {
 	case NETDEV_CHANGEADDR:
 		neigh_changeaddr(&arp_tbl, dev);
 		rt_cache_flush(dev_net(dev));
 		break;
+	case NETDEV_CHANGE:
+		change_info = ptr;
+		if (change_info->flags_changed & IFF_NOARP)
+			neigh_changeaddr(&arp_tbl, dev);
+		break;
 	default:
 		break;
 	}
diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
index dfc39d4d48b7..8d48c392adcc 100644
--- a/net/ipv4/devinet.c
+++ b/net/ipv4/devinet.c
@@ -215,6 +215,7 @@ void in_dev_finish_destroy(struct in_device *idev)
 
 	WARN_ON(idev->ifa_list);
 	WARN_ON(idev->mc_list);
+	kfree(rcu_dereference_protected(idev->mc_hash, 1));
 #ifdef NET_REFCNT_DEBUG
 	pr_debug("%s: %p=%s\n", __func__, idev, dev ? dev->name : "NIL");
 #endif
@@ -1333,7 +1334,7 @@ static void inetdev_send_gratuitous_arp(struct net_device *dev,
 static int inetdev_event(struct notifier_block *this, unsigned long event,
 			 void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct in_device *in_dev = __in_dev_get_rtnl(dev);
 
 	ASSERT_RTNL();
@@ -1941,7 +1942,7 @@ static void inet_forward_change(struct net *net)
 	}
 }
 
-static int devinet_conf_proc(ctl_table *ctl, int write,
+static int devinet_conf_proc(struct ctl_table *ctl, int write,
 			     void __user *buffer,
 			     size_t *lenp, loff_t *ppos)
 {
@@ -1984,7 +1985,7 @@ static int devinet_conf_proc(ctl_table *ctl, int write,
 	return ret;
 }
 
-static int devinet_sysctl_forward(ctl_table *ctl, int write,
+static int devinet_sysctl_forward(struct ctl_table *ctl, int write,
 				  void __user *buffer,
 				  size_t *lenp, loff_t *ppos)
 {
@@ -2027,7 +2028,7 @@ static int devinet_sysctl_forward(ctl_table *ctl, int write,
 	return ret;
 }
 
-static int ipv4_doint_and_flush(ctl_table *ctl, int write,
+static int ipv4_doint_and_flush(struct ctl_table *ctl, int write,
 				void __user *buffer,
 				size_t *lenp, loff_t *ppos)
 {
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 4cfe34d4cc96..ab3d814bc80a 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -502,12 +502,9 @@ static void esp4_err(struct sk_buff *skb, u32 info)
 	if (!x)
 		return;
 
-	if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) {
-		atomic_inc(&flow_cache_genid);
-		rt_genid_bump(net);
-
+	if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
 		ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_ESP, 0);
-	} else
+	else
 		ipv4_redirect(skb, net, 0, 0, IPPROTO_ESP, 0);
 	xfrm_state_put(x);
 }
diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
index c7629a209f9d..b3f627ac4ed8 100644
--- a/net/ipv4/fib_frontend.c
+++ b/net/ipv4/fib_frontend.c
@@ -961,7 +961,7 @@ static void nl_fib_input(struct sk_buff *skb)
 	    nlmsg_len(nlh) < sizeof(*frn))
 		return;
 
-	skb = skb_clone(skb, GFP_KERNEL);
+	skb = netlink_skb_clone(skb, GFP_KERNEL);
 	if (skb == NULL)
 		return;
 	nlh = nlmsg_hdr(skb);
@@ -1038,7 +1038,7 @@ static int fib_inetaddr_event(struct notifier_block *this, unsigned long event,
 
 static int fib_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct in_device *in_dev;
 	struct net *net = dev_net(dev);
 
diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
index 8f6cb7a87cd6..d5dbca5ecf62 100644
--- a/net/ipv4/fib_semantics.c
+++ b/net/ipv4/fib_semantics.c
@@ -169,7 +169,8 @@ static void free_nh_exceptions(struct fib_nh *nh)
 
 		next = rcu_dereference_protected(fnhe->fnhe_next, 1);
 
-		rt_fibinfo_free(&fnhe->fnhe_rth);
+		rt_fibinfo_free(&fnhe->fnhe_rth_input);
+		rt_fibinfo_free(&fnhe->fnhe_rth_output);
 
 		kfree(fnhe);
 
diff --git a/net/ipv4/gre.c b/net/ipv4/gre.c
deleted file mode 100644
index 7856d1651d05..000000000000
--- a/net/ipv4/gre.c
+++ /dev/null
@@ -1,253 +0,0 @@
-/*
- *	GRE over IPv4 demultiplexer driver
- *
- *	Authors: Dmitry Kozlov (xeb@mail.ru)
- *
- *	This program is free software; you can redistribute it and/or
- *	modify it under the terms of the GNU General Public License
- *	as published by the Free Software Foundation; either version
- *	2 of the License, or (at your option) any later version.
- *
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/kmod.h>
-#include <linux/skbuff.h>
-#include <linux/in.h>
-#include <linux/ip.h>
-#include <linux/netdevice.h>
-#include <linux/if_tunnel.h>
-#include <linux/spinlock.h>
-#include <net/protocol.h>
-#include <net/gre.h>
-
-
-static const struct gre_protocol __rcu *gre_proto[GREPROTO_MAX] __read_mostly;
-static DEFINE_SPINLOCK(gre_proto_lock);
-
-int gre_add_protocol(const struct gre_protocol *proto, u8 version)
-{
-	if (version >= GREPROTO_MAX)
-		goto err_out;
-
-	spin_lock(&gre_proto_lock);
-	if (gre_proto[version])
-		goto err_out_unlock;
-
-	RCU_INIT_POINTER(gre_proto[version], proto);
-	spin_unlock(&gre_proto_lock);
-	return 0;
-
-err_out_unlock:
-	spin_unlock(&gre_proto_lock);
-err_out:
-	return -1;
-}
-EXPORT_SYMBOL_GPL(gre_add_protocol);
-
-int gre_del_protocol(const struct gre_protocol *proto, u8 version)
-{
-	if (version >= GREPROTO_MAX)
-		goto err_out;
-
-	spin_lock(&gre_proto_lock);
-	if (rcu_dereference_protected(gre_proto[version],
-			lockdep_is_held(&gre_proto_lock)) != proto)
-		goto err_out_unlock;
-	RCU_INIT_POINTER(gre_proto[version], NULL);
-	spin_unlock(&gre_proto_lock);
-	synchronize_rcu();
-	return 0;
-
-err_out_unlock:
-	spin_unlock(&gre_proto_lock);
-err_out:
-	return -1;
-}
-EXPORT_SYMBOL_GPL(gre_del_protocol);
-
-static int gre_rcv(struct sk_buff *skb)
-{
-	const struct gre_protocol *proto;
-	u8 ver;
-	int ret;
-
-	if (!pskb_may_pull(skb, 12))
-		goto drop;
-
-	ver = skb->data[1]&0x7f;
-	if (ver >= GREPROTO_MAX)
-		goto drop;
-
-	rcu_read_lock();
-	proto = rcu_dereference(gre_proto[ver]);
-	if (!proto || !proto->handler)
-		goto drop_unlock;
-	ret = proto->handler(skb);
-	rcu_read_unlock();
-	return ret;
-
-drop_unlock:
-	rcu_read_unlock();
-drop:
-	kfree_skb(skb);
-	return NET_RX_DROP;
-}
-
-static void gre_err(struct sk_buff *skb, u32 info)
-{
-	const struct gre_protocol *proto;
-	const struct iphdr *iph = (const struct iphdr *)skb->data;
-	u8 ver = skb->data[(iph->ihl<<2) + 1]&0x7f;
-
-	if (ver >= GREPROTO_MAX)
-		return;
-
-	rcu_read_lock();
-	proto = rcu_dereference(gre_proto[ver]);
-	if (proto && proto->err_handler)
-		proto->err_handler(skb, info);
-	rcu_read_unlock();
-}
-
-static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
-				       netdev_features_t features)
-{
-	struct sk_buff *segs = ERR_PTR(-EINVAL);
-	netdev_features_t enc_features;
-	int ghl = GRE_HEADER_SECTION;
-	struct gre_base_hdr *greh;
-	int mac_len = skb->mac_len;
-	__be16 protocol = skb->protocol;
-	int tnl_hlen;
-	bool csum;
-
-	if (unlikely(skb_shinfo(skb)->gso_type &
-				~(SKB_GSO_TCPV4 |
-				  SKB_GSO_TCPV6 |
-				  SKB_GSO_UDP |
-				  SKB_GSO_DODGY |
-				  SKB_GSO_TCP_ECN |
-				  SKB_GSO_GRE)))
-		goto out;
-
-	if (unlikely(!pskb_may_pull(skb, sizeof(*greh))))
-		goto out;
-
-	greh = (struct gre_base_hdr *)skb_transport_header(skb);
-
-	if (greh->flags & GRE_KEY)
-		ghl += GRE_HEADER_SECTION;
-	if (greh->flags & GRE_SEQ)
-		ghl += GRE_HEADER_SECTION;
-	if (greh->flags & GRE_CSUM) {
-		ghl += GRE_HEADER_SECTION;
-		csum = true;
-	} else
-		csum = false;
-
-	/* setup inner skb. */
-	skb->protocol = greh->protocol;
-	skb->encapsulation = 0;
-
-	if (unlikely(!pskb_may_pull(skb, ghl)))
-		goto out;
-	__skb_pull(skb, ghl);
-	skb_reset_mac_header(skb);
-	skb_set_network_header(skb, skb_inner_network_offset(skb));
-	skb->mac_len = skb_inner_network_offset(skb);
-
-	/* segment inner packet. */
-	enc_features = skb->dev->hw_enc_features & netif_skb_features(skb);
-	segs = skb_mac_gso_segment(skb, enc_features);
-	if (!segs || IS_ERR(segs))
-		goto out;
-
-	skb = segs;
-	tnl_hlen = skb_tnl_header_len(skb);
-	do {
-		__skb_push(skb, ghl);
-		if (csum) {
-			__be32 *pcsum;
-
-			if (skb_has_shared_frag(skb)) {
-				int err;
-
-				err = __skb_linearize(skb);
-				if (err) {
-					kfree_skb_list(segs);
-					segs = ERR_PTR(err);
-					goto out;
-				}
-			}
-
-			greh = (struct gre_base_hdr *)(skb->data);
-			pcsum = (__be32 *)(greh + 1);
-			*pcsum = 0;
-			*(__sum16 *)pcsum = csum_fold(skb_checksum(skb, 0, skb->len, 0));
-		}
-		__skb_push(skb, tnl_hlen - ghl);
-
-		skb_reset_mac_header(skb);
-		skb_set_network_header(skb, mac_len);
-		skb->mac_len = mac_len;
-		skb->protocol = protocol;
-	} while ((skb = skb->next));
-out:
-	return segs;
-}
-
-static int gre_gso_send_check(struct sk_buff *skb)
-{
-	if (!skb->encapsulation)
-		return -EINVAL;
-	return 0;
-}
-
-static const struct net_protocol net_gre_protocol = {
-	.handler     = gre_rcv,
-	.err_handler = gre_err,
-	.netns_ok    = 1,
-};
-
-static const struct net_offload gre_offload = {
-	.callbacks = {
-		.gso_send_check = gre_gso_send_check,
-		.gso_segment = gre_gso_segment,
-	},
-};
-
-static int __init gre_init(void)
-{
-	pr_info("GRE over IPv4 demultiplexor driver\n");
-
-	if (inet_add_protocol(&net_gre_protocol, IPPROTO_GRE) < 0) {
-		pr_err("can't add protocol\n");
-		return -EAGAIN;
-	}
-
-	if (inet_add_offload(&gre_offload, IPPROTO_GRE)) {
-		pr_err("can't add protocol offload\n");
-		inet_del_protocol(&net_gre_protocol, IPPROTO_GRE);
-		return -EAGAIN;
-	}
-
-	return 0;
-}
-
-static void __exit gre_exit(void)
-{
-	inet_del_offload(&gre_offload, IPPROTO_GRE);
-	inet_del_protocol(&net_gre_protocol, IPPROTO_GRE);
-}
-
-module_init(gre_init);
-module_exit(gre_exit);
-
-MODULE_DESCRIPTION("GRE over IPv4 demultiplexer driver");
-MODULE_AUTHOR("D. Kozlov (xeb@mail.ru)");
-MODULE_LICENSE("GPL");
-
diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c
new file mode 100644
index 000000000000..736c9fc3ef93
--- /dev/null
+++ b/net/ipv4/gre_demux.c
@@ -0,0 +1,414 @@
+/*
+ *	GRE over IPv4 demultiplexer driver
+ *
+ *	Authors: Dmitry Kozlov (xeb@mail.ru)
+ *
+ *	This program is free software; you can redistribute it and/or
+ *	modify it under the terms of the GNU General Public License
+ *	as published by the Free Software Foundation; either version
+ *	2 of the License, or (at your option) any later version.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/if.h>
+#include <linux/icmp.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/skbuff.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/netdevice.h>
+#include <linux/if_tunnel.h>
+#include <linux/spinlock.h>
+#include <net/protocol.h>
+#include <net/gre.h>
+
+#include <net/icmp.h>
+#include <net/route.h>
+#include <net/xfrm.h>
+
+static const struct gre_protocol __rcu *gre_proto[GREPROTO_MAX] __read_mostly;
+static struct gre_cisco_protocol __rcu *gre_cisco_proto_list[GRE_IP_PROTO_MAX];
+
+int gre_add_protocol(const struct gre_protocol *proto, u8 version)
+{
+	if (version >= GREPROTO_MAX)
+		return -EINVAL;
+
+	return (cmpxchg((const struct gre_protocol **)&gre_proto[version], NULL, proto) == NULL) ?
+		0 : -EBUSY;
+}
+EXPORT_SYMBOL_GPL(gre_add_protocol);
+
+int gre_del_protocol(const struct gre_protocol *proto, u8 version)
+{
+	int ret;
+
+	if (version >= GREPROTO_MAX)
+		return -EINVAL;
+
+	ret = (cmpxchg((const struct gre_protocol **)&gre_proto[version], proto, NULL) == proto) ?
+		0 : -EBUSY;
+
+	if (ret)
+		return ret;
+
+	synchronize_rcu();
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gre_del_protocol);
+
+void gre_build_header(struct sk_buff *skb, const struct tnl_ptk_info *tpi,
+		      int hdr_len)
+{
+	struct gre_base_hdr *greh;
+
+	skb_push(skb, hdr_len);
+
+	greh = (struct gre_base_hdr *)skb->data;
+	greh->flags = tnl_flags_to_gre_flags(tpi->flags);
+	greh->protocol = tpi->proto;
+
+	if (tpi->flags&(TUNNEL_KEY|TUNNEL_CSUM|TUNNEL_SEQ)) {
+		__be32 *ptr = (__be32 *)(((u8 *)greh) + hdr_len - 4);
+
+		if (tpi->flags&TUNNEL_SEQ) {
+			*ptr = tpi->seq;
+			ptr--;
+		}
+		if (tpi->flags&TUNNEL_KEY) {
+			*ptr = tpi->key;
+			ptr--;
+		}
+		if (tpi->flags&TUNNEL_CSUM &&
+		    !(skb_shinfo(skb)->gso_type & SKB_GSO_GRE)) {
+			*ptr = 0;
+			*(__sum16 *)ptr = csum_fold(skb_checksum(skb, 0,
+								 skb->len, 0));
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(gre_build_header);
+
+struct sk_buff *gre_handle_offloads(struct sk_buff *skb, bool gre_csum)
+{
+	int err;
+
+	if (likely(!skb->encapsulation)) {
+		skb_reset_inner_headers(skb);
+		skb->encapsulation = 1;
+	}
+
+	if (skb_is_gso(skb)) {
+		err = skb_unclone(skb, GFP_ATOMIC);
+		if (unlikely(err))
+			goto error;
+		skb_shinfo(skb)->gso_type |= SKB_GSO_GRE;
+		return skb;
+	} else if (skb->ip_summed == CHECKSUM_PARTIAL && gre_csum) {
+		err = skb_checksum_help(skb);
+		if (unlikely(err))
+			goto error;
+	} else if (skb->ip_summed != CHECKSUM_PARTIAL)
+		skb->ip_summed = CHECKSUM_NONE;
+
+	return skb;
+error:
+	kfree_skb(skb);
+	return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(gre_handle_offloads);
+
+static __sum16 check_checksum(struct sk_buff *skb)
+{
+	__sum16 csum = 0;
+
+	switch (skb->ip_summed) {
+	case CHECKSUM_COMPLETE:
+		csum = csum_fold(skb->csum);
+
+		if (!csum)
+			break;
+		/* Fall through. */
+
+	case CHECKSUM_NONE:
+		skb->csum = 0;
+		csum = __skb_checksum_complete(skb);
+		skb->ip_summed = CHECKSUM_COMPLETE;
+		break;
+	}
+
+	return csum;
+}
+
+static int parse_gre_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+			    bool *csum_err)
+{
+	unsigned int ip_hlen = ip_hdrlen(skb);
+	const struct gre_base_hdr *greh;
+	__be32 *options;
+	int hdr_len;
+
+	if (unlikely(!pskb_may_pull(skb, sizeof(struct gre_base_hdr))))
+		return -EINVAL;
+
+	greh = (struct gre_base_hdr *)(skb_network_header(skb) + ip_hlen);
+	if (unlikely(greh->flags & (GRE_VERSION | GRE_ROUTING)))
+		return -EINVAL;
+
+	tpi->flags = gre_flags_to_tnl_flags(greh->flags);
+	hdr_len = ip_gre_calc_hlen(tpi->flags);
+
+	if (!pskb_may_pull(skb, hdr_len))
+		return -EINVAL;
+
+	greh = (struct gre_base_hdr *)(skb_network_header(skb) + ip_hlen);
+	tpi->proto = greh->protocol;
+
+	options = (__be32 *)(greh + 1);
+	if (greh->flags & GRE_CSUM) {
+		if (check_checksum(skb)) {
+			*csum_err = true;
+			return -EINVAL;
+		}
+		options++;
+	}
+
+	if (greh->flags & GRE_KEY) {
+		tpi->key = *options;
+		options++;
+	} else
+		tpi->key = 0;
+
+	if (unlikely(greh->flags & GRE_SEQ)) {
+		tpi->seq = *options;
+		options++;
+	} else
+		tpi->seq = 0;
+
+	/* WCCP version 1 and 2 protocol decoding.
+	 * - Change protocol to IP
+	 * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header
+	 */
+	if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
+		tpi->proto = htons(ETH_P_IP);
+		if ((*(u8 *)options & 0xF0) != 0x40) {
+			hdr_len += 4;
+			if (!pskb_may_pull(skb, hdr_len))
+				return -EINVAL;
+		}
+	}
+
+	return iptunnel_pull_header(skb, hdr_len, tpi->proto);
+}
+
+static int gre_cisco_rcv(struct sk_buff *skb)
+{
+	struct tnl_ptk_info tpi;
+	int i;
+	bool csum_err = false;
+
+	if (parse_gre_header(skb, &tpi, &csum_err) < 0)
+		goto drop;
+
+	rcu_read_lock();
+	for (i = 0; i < GRE_IP_PROTO_MAX; i++) {
+		struct gre_cisco_protocol *proto;
+		int ret;
+
+		proto = rcu_dereference(gre_cisco_proto_list[i]);
+		if (!proto)
+			continue;
+		ret = proto->handler(skb, &tpi);
+		if (ret == PACKET_RCVD) {
+			rcu_read_unlock();
+			return 0;
+		}
+	}
+	rcu_read_unlock();
+
+	icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0);
+drop:
+	kfree_skb(skb);
+	return 0;
+}
+
+static void gre_cisco_err(struct sk_buff *skb, u32 info)
+{
+	/* All the routers (except for Linux) return only
+	 * 8 bytes of packet payload. It means, that precise relaying of
+	 * ICMP in the real Internet is absolutely infeasible.
+	 *
+	 * Moreover, Cisco "wise men" put GRE key to the third word
+	 * in GRE header. It makes impossible maintaining even soft
+	 * state for keyed
+	 * GRE tunnels with enabled checksum. Tell them "thank you".
+	 *
+	 * Well, I wonder, rfc1812 was written by Cisco employee,
+	 * what the hell these idiots break standards established
+	 * by themselves???
+	 */
+
+	const int type = icmp_hdr(skb)->type;
+	const int code = icmp_hdr(skb)->code;
+	struct tnl_ptk_info tpi;
+	bool csum_err = false;
+	int i;
+
+	if (parse_gre_header(skb, &tpi, &csum_err)) {
+		if (!csum_err)		/* ignore csum errors. */
+			return;
+	}
+
+	if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) {
+		ipv4_update_pmtu(skb, dev_net(skb->dev), info,
+				 skb->dev->ifindex, 0, IPPROTO_GRE, 0);
+		return;
+	}
+	if (type == ICMP_REDIRECT) {
+		ipv4_redirect(skb, dev_net(skb->dev), skb->dev->ifindex, 0,
+			      IPPROTO_GRE, 0);
+		return;
+	}
+
+	rcu_read_lock();
+	for (i = 0; i < GRE_IP_PROTO_MAX; i++) {
+		struct gre_cisco_protocol *proto;
+
+		proto = rcu_dereference(gre_cisco_proto_list[i]);
+		if (!proto)
+			continue;
+
+		if (proto->err_handler(skb, info, &tpi) == PACKET_RCVD)
+			goto out;
+
+	}
+out:
+	rcu_read_unlock();
+}
+
+static int gre_rcv(struct sk_buff *skb)
+{
+	const struct gre_protocol *proto;
+	u8 ver;
+	int ret;
+
+	if (!pskb_may_pull(skb, 12))
+		goto drop;
+
+	ver = skb->data[1]&0x7f;
+	if (ver >= GREPROTO_MAX)
+		goto drop;
+
+	rcu_read_lock();
+	proto = rcu_dereference(gre_proto[ver]);
+	if (!proto || !proto->handler)
+		goto drop_unlock;
+	ret = proto->handler(skb);
+	rcu_read_unlock();
+	return ret;
+
+drop_unlock:
+	rcu_read_unlock();
+drop:
+	kfree_skb(skb);
+	return NET_RX_DROP;
+}
+
+static void gre_err(struct sk_buff *skb, u32 info)
+{
+	const struct gre_protocol *proto;
+	const struct iphdr *iph = (const struct iphdr *)skb->data;
+	u8 ver = skb->data[(iph->ihl<<2) + 1]&0x7f;
+
+	if (ver >= GREPROTO_MAX)
+		return;
+
+	rcu_read_lock();
+	proto = rcu_dereference(gre_proto[ver]);
+	if (proto && proto->err_handler)
+		proto->err_handler(skb, info);
+	rcu_read_unlock();
+}
+
+static const struct net_protocol net_gre_protocol = {
+	.handler     = gre_rcv,
+	.err_handler = gre_err,
+	.netns_ok    = 1,
+};
+
+static const struct gre_protocol ipgre_protocol = {
+	.handler     = gre_cisco_rcv,
+	.err_handler = gre_cisco_err,
+};
+
+int gre_cisco_register(struct gre_cisco_protocol *newp)
+{
+	struct gre_cisco_protocol **proto = (struct gre_cisco_protocol **)
+					    &gre_cisco_proto_list[newp->priority];
+
+	return (cmpxchg(proto, NULL, newp) == NULL) ? 0 : -EBUSY;
+}
+EXPORT_SYMBOL_GPL(gre_cisco_register);
+
+int gre_cisco_unregister(struct gre_cisco_protocol *del_proto)
+{
+	struct gre_cisco_protocol **proto = (struct gre_cisco_protocol **)
+					    &gre_cisco_proto_list[del_proto->priority];
+	int ret;
+
+	ret = (cmpxchg(proto, del_proto, NULL) == del_proto) ? 0 : -EINVAL;
+
+	if (ret)
+		return ret;
+
+	synchronize_net();
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gre_cisco_unregister);
+
+static int __init gre_init(void)
+{
+	pr_info("GRE over IPv4 demultiplexor driver\n");
+
+	if (inet_add_protocol(&net_gre_protocol, IPPROTO_GRE) < 0) {
+		pr_err("can't add protocol\n");
+		goto err;
+	}
+
+	if (gre_add_protocol(&ipgre_protocol, GREPROTO_CISCO) < 0) {
+		pr_info("%s: can't add ipgre handler\n", __func__);
+		goto err_gre;
+	}
+
+	if (gre_offload_init()) {
+		pr_err("can't add protocol offload\n");
+		goto err_gso;
+	}
+
+	return 0;
+err_gso:
+	gre_del_protocol(&ipgre_protocol, GREPROTO_CISCO);
+err_gre:
+	inet_del_protocol(&net_gre_protocol, IPPROTO_GRE);
+err:
+	return -EAGAIN;
+}
+
+static void __exit gre_exit(void)
+{
+	gre_offload_exit();
+
+	gre_del_protocol(&ipgre_protocol, GREPROTO_CISCO);
+	inet_del_protocol(&net_gre_protocol, IPPROTO_GRE);
+}
+
+module_init(gre_init);
+module_exit(gre_exit);
+
+MODULE_DESCRIPTION("GRE over IPv4 demultiplexer driver");
+MODULE_AUTHOR("D. Kozlov (xeb@mail.ru)");
+MODULE_LICENSE("GPL");
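The new parse_gre_header() above pulls the variable-length GRE header before dispatching: the base header is 4 bytes (flags + protocol), and each set C/K/S flag appends one more 4-byte word, which is exactly what ip_gre_calc_hlen() computes. A stand-alone sketch of that length calculation, with illustrative flag constants rather than the kernel's TUNNEL_* values:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag bits; the kernel uses TUNNEL_CSUM/TUNNEL_KEY/TUNNEL_SEQ. */
#define T_CSUM (1u << 0)
#define T_KEY  (1u << 2)
#define T_SEQ  (1u << 3)

/* Sketch of ip_gre_calc_hlen(): 4-byte base header, plus one
 * 4-byte optional word per C, K and S flag (RFC 2784/2890). */
static int gre_calc_hlen(uint16_t flags)
{
	int addend = 4;		/* flags + protocol */

	if (flags & T_CSUM)
		addend += 4;	/* checksum + reserved1 */
	if (flags & T_KEY)
		addend += 4;	/* key */
	if (flags & T_SEQ)
		addend += 4;	/* sequence number */
	return addend;
}
```

parse_gre_header() then calls pskb_may_pull() with this length before touching the optional words, so a truncated packet is rejected with -EINVAL instead of being read past the end.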
diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
new file mode 100644
index 000000000000..775d5b532ece
--- /dev/null
+++ b/net/ipv4/gre_offload.c
@@ -0,0 +1,127 @@
+/*
+ *	IPV4 GSO/GRO offload support
+ *	Linux INET implementation
+ *
+ *	This program is free software; you can redistribute it and/or
+ *	modify it under the terms of the GNU General Public License
+ *	as published by the Free Software Foundation; either version
+ *	2 of the License, or (at your option) any later version.
+ *
+ *	GRE GSO support
+ */
+
+#include <linux/skbuff.h>
+#include <net/protocol.h>
+#include <net/gre.h>
+
+static int gre_gso_send_check(struct sk_buff *skb)
+{
+	if (!skb->encapsulation)
+		return -EINVAL;
+	return 0;
+}
+
+static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
+				       netdev_features_t features)
+{
+	struct sk_buff *segs = ERR_PTR(-EINVAL);
+	netdev_features_t enc_features;
+	int ghl = GRE_HEADER_SECTION;
+	struct gre_base_hdr *greh;
+	int mac_len = skb->mac_len;
+	__be16 protocol = skb->protocol;
+	int tnl_hlen;
+	bool csum;
+
+	if (unlikely(skb_shinfo(skb)->gso_type &
+				~(SKB_GSO_TCPV4 |
+				  SKB_GSO_TCPV6 |
+				  SKB_GSO_UDP |
+				  SKB_GSO_DODGY |
+				  SKB_GSO_TCP_ECN |
+				  SKB_GSO_GRE)))
+		goto out;
+
+	if (unlikely(!pskb_may_pull(skb, sizeof(*greh))))
+		goto out;
+
+	greh = (struct gre_base_hdr *)skb_transport_header(skb);
+
+	if (greh->flags & GRE_KEY)
+		ghl += GRE_HEADER_SECTION;
+	if (greh->flags & GRE_SEQ)
+		ghl += GRE_HEADER_SECTION;
+	if (greh->flags & GRE_CSUM) {
+		ghl += GRE_HEADER_SECTION;
+		csum = true;
+	} else
+		csum = false;
+
+	/* setup inner skb. */
+	skb->protocol = greh->protocol;
+	skb->encapsulation = 0;
+
+	if (unlikely(!pskb_may_pull(skb, ghl)))
+		goto out;
+
+	__skb_pull(skb, ghl);
+	skb_reset_mac_header(skb);
+	skb_set_network_header(skb, skb_inner_network_offset(skb));
+	skb->mac_len = skb_inner_network_offset(skb);
+
+	/* segment inner packet. */
+	enc_features = skb->dev->hw_enc_features & netif_skb_features(skb);
+	segs = skb_mac_gso_segment(skb, enc_features);
+	if (!segs || IS_ERR(segs))
+		goto out;
+
+	skb = segs;
+	tnl_hlen = skb_tnl_header_len(skb);
+	do {
+		__skb_push(skb, ghl);
+		if (csum) {
+			__be32 *pcsum;
+
+			if (skb_has_shared_frag(skb)) {
+				int err;
+
+				err = __skb_linearize(skb);
+				if (err) {
+					kfree_skb_list(segs);
+					segs = ERR_PTR(err);
+					goto out;
+				}
+			}
+
+			greh = (struct gre_base_hdr *)(skb->data);
+			pcsum = (__be32 *)(greh + 1);
+			*pcsum = 0;
+			*(__sum16 *)pcsum = csum_fold(skb_checksum(skb, 0, skb->len, 0));
+		}
+		__skb_push(skb, tnl_hlen - ghl);
+
+		skb_reset_mac_header(skb);
+		skb_set_network_header(skb, mac_len);
+		skb->mac_len = mac_len;
+		skb->protocol = protocol;
+	} while ((skb = skb->next));
+out:
+	return segs;
+}
+
+static const struct net_offload gre_offload = {
+	.callbacks = {
+		.gso_send_check = gre_gso_send_check,
+		.gso_segment = gre_gso_segment,
+	},
+};
+
+int __init gre_offload_init(void)
+{
+	return inet_add_offload(&gre_offload, IPPROTO_GRE);
+}
+
+void __exit gre_offload_exit(void)
+{
+	inet_del_offload(&gre_offload, IPPROTO_GRE);
+}
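When a GSO super-packet carries the GRE checksum flag, gre_gso_segment() above recomputes the checksum per resulting segment with csum_fold(skb_checksum(...)). The underlying arithmetic is the ordinary Internet checksum: sum 16-bit words into a wider accumulator, fold the carries back in, and take the one's complement. A portable sketch (RFC 1071 style, not the kernel's optimized csum helpers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy Internet checksum: 16-bit words summed big-endian into a
 * 32-bit accumulator, carries folded back in, result complemented.
 * Mirrors what csum_fold(skb_checksum(...)) produces, conceptually. */
static uint16_t inet_csum(const uint8_t *data, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)((data[i] << 8) | data[i + 1]);
	if (len & 1)			/* pad an odd trailing byte */
		sum += (uint32_t)data[len - 1] << 8;
	while (sum >> 16)		/* end-around carry */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

The per-segment recomputation is needed because each segment covers a different slice of the original payload; copying the super-packet's checksum would be wrong for every segment.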
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index 76e10b47e053..5f7d11a45871 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -482,7 +482,7 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
 {
 	struct iphdr *iph;
 	int room;
-	struct icmp_bxm icmp_param;
+	struct icmp_bxm *icmp_param;
 	struct rtable *rt	= skb_rtable(skb_in);
 	struct ipcm_cookie ipc;
 	struct flowi4 fl4;
@@ -503,7 +503,8 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
 	iph = ip_hdr(skb_in);
 
 	if ((u8 *)iph < skb_in->head ||
-	    (skb_in->network_header + sizeof(*iph)) > skb_in->tail)
+	    (skb_network_header(skb_in) + sizeof(*iph)) >
+	     skb_tail_pointer(skb_in))
 		goto out;
 
 	/*
@@ -557,9 +558,13 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
 		}
 	}
 
+	icmp_param = kmalloc(sizeof(*icmp_param), GFP_ATOMIC);
+	if (!icmp_param)
+		return;
+
 	sk = icmp_xmit_lock(net);
 	if (sk == NULL)
-		return;
+		goto out_free;
 
 	/*
 	 *	Construct source address and options.
@@ -585,7 +590,7 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
 					 IPTOS_PREC_INTERNETCONTROL) :
 			   iph->tos;
 
-	if (ip_options_echo(&icmp_param.replyopts.opt.opt, skb_in))
+	if (ip_options_echo(&icmp_param->replyopts.opt.opt, skb_in))
 		goto out_unlock;
 
 
@@ -593,19 +598,19 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
 	 *	Prepare data for ICMP header.
 	 */
 
-	icmp_param.data.icmph.type	 = type;
-	icmp_param.data.icmph.code	 = code;
-	icmp_param.data.icmph.un.gateway = info;
-	icmp_param.data.icmph.checksum	 = 0;
-	icmp_param.skb	  = skb_in;
-	icmp_param.offset = skb_network_offset(skb_in);
+	icmp_param->data.icmph.type	 = type;
+	icmp_param->data.icmph.code	 = code;
+	icmp_param->data.icmph.un.gateway = info;
+	icmp_param->data.icmph.checksum	 = 0;
+	icmp_param->skb	  = skb_in;
+	icmp_param->offset = skb_network_offset(skb_in);
 	inet_sk(sk)->tos = tos;
 	ipc.addr = iph->saddr;
-	ipc.opt = &icmp_param.replyopts.opt;
+	ipc.opt = &icmp_param->replyopts.opt;
 	ipc.tx_flags = 0;
 
 	rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, tos,
-			       type, code, &icmp_param);
+			       type, code, icmp_param);
 	if (IS_ERR(rt))
 		goto out_unlock;
 
@@ -617,19 +622,21 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
 	room = dst_mtu(&rt->dst);
 	if (room > 576)
 		room = 576;
-	room -= sizeof(struct iphdr) + icmp_param.replyopts.opt.opt.optlen;
+	room -= sizeof(struct iphdr) + icmp_param->replyopts.opt.opt.optlen;
 	room -= sizeof(struct icmphdr);
 
-	icmp_param.data_len = skb_in->len - icmp_param.offset;
-	if (icmp_param.data_len > room)
-		icmp_param.data_len = room;
-	icmp_param.head_len = sizeof(struct icmphdr);
+	icmp_param->data_len = skb_in->len - icmp_param->offset;
+	if (icmp_param->data_len > room)
+		icmp_param->data_len = room;
+	icmp_param->head_len = sizeof(struct icmphdr);
 
-	icmp_push_reply(&icmp_param, &fl4, &ipc, &rt);
+	icmp_push_reply(icmp_param, &fl4, &ipc, &rt);
 ende:
 	ip_rt_put(rt);
 out_unlock:
 	icmp_xmit_unlock(sk);
+out_free:
+	kfree(icmp_param);
 out:;
 }
 EXPORT_SYMBOL(icmp_send);
@@ -657,7 +664,8 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
 }
 
 /*
- *	Handle ICMP_DEST_UNREACH, ICMP_TIME_EXCEED, and ICMP_QUENCH.
+ *	Handle ICMP_DEST_UNREACH, ICMP_TIME_EXCEED, ICMP_QUENCH, and
+ *	ICMP_PARAMETERPROB.
  */
 
 static void icmp_unreach(struct sk_buff *skb)
@@ -939,7 +947,8 @@ error:
 void icmp_err(struct sk_buff *skb, u32 info)
 {
 	struct iphdr *iph = (struct iphdr *)skb->data;
-	struct icmphdr *icmph = (struct icmphdr *)(skb->data+(iph->ihl<<2));
+	int offset = iph->ihl<<2;
+	struct icmphdr *icmph = (struct icmphdr *)(skb->data + offset);
 	int type = icmp_hdr(skb)->type;
 	int code = icmp_hdr(skb)->code;
 	struct net *net = dev_net(skb->dev);
@@ -949,7 +958,7 @@ void icmp_err(struct sk_buff *skb, u32 info)
 	 * triggered by ICMP_ECHOREPLY which sent from kernel.
 	 */
 	if (icmph->type != ICMP_ECHOREPLY) {
-		ping_err(skb, info);
+		ping_err(skb, offset, info);
 		return;
 	}
 
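The icmp.c changes above move the large `struct icmp_bxm` scratch buffer off the kernel stack onto the heap, and route every failure path through labels that release it (`out_free`, and `goto out_free` where icmp_send() previously returned directly). The same goto-unwind idiom can be sketched in user space; the struct and failure flag here are illustrative only:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical scratch state, standing in for struct icmp_bxm. */
struct scratch {
	char buf[512];
};

/* Sketch of the pattern: allocate the scratch state on the heap,
 * funnel all exits through a label that frees it. fail_lock stands
 * in for icmp_xmit_lock() returning NULL. */
static int build_reply(int fail_lock)
{
	struct scratch *s;
	int ret = -1;

	s = malloc(sizeof(*s));		/* was a large on-stack variable */
	if (!s)
		return -1;

	if (fail_lock)			/* early failure still frees s */
		goto out_free;

	memset(s->buf, 0, sizeof(s->buf));	/* do the real work */
	ret = 0;
out_free:
	free(s);
	return ret;
}
```

The kernel uses GFP_ATOMIC here because icmp_send() can run in softirq context; the trade-off is that the allocation itself can now fail, which is why the new code bails out early when kmalloc() returns NULL.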
diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
index d8c232794bcb..cd71190d2962 100644
--- a/net/ipv4/igmp.c
+++ b/net/ipv4/igmp.c
@@ -363,7 +363,7 @@ static struct sk_buff *igmpv3_newpack(struct net_device *dev, int size)
 static int igmpv3_sendpack(struct sk_buff *skb)
 {
 	struct igmphdr *pig = igmp_hdr(skb);
-	const int igmplen = skb->tail - skb->transport_header;
+	const int igmplen = skb_tail_pointer(skb) - skb_transport_header(skb);
 
 	pig->csum = ip_compute_csum(igmp_hdr(skb), igmplen);
 
@@ -1217,6 +1217,57 @@ static void igmp_group_added(struct ip_mc_list *im)
  *	Multicast list managers
  */
 
+static u32 ip_mc_hash(const struct ip_mc_list *im)
+{
+	return hash_32((__force u32)im->multiaddr, MC_HASH_SZ_LOG);
+}
+
+static void ip_mc_hash_add(struct in_device *in_dev,
+			   struct ip_mc_list *im)
+{
+	struct ip_mc_list __rcu **mc_hash;
+	u32 hash;
+
+	mc_hash = rtnl_dereference(in_dev->mc_hash);
+	if (mc_hash) {
+		hash = ip_mc_hash(im);
+		im->next_hash = mc_hash[hash];
+		rcu_assign_pointer(mc_hash[hash], im);
+		return;
+	}
+
+	/* do not use a hash table for small number of items */
+	if (in_dev->mc_count < 4)
+		return;
+
+	mc_hash = kzalloc(sizeof(struct ip_mc_list *) << MC_HASH_SZ_LOG,
+			  GFP_KERNEL);
+	if (!mc_hash)
+		return;
+
+	for_each_pmc_rtnl(in_dev, im) {
+		hash = ip_mc_hash(im);
+		im->next_hash = mc_hash[hash];
+		RCU_INIT_POINTER(mc_hash[hash], im);
+	}
+
+	rcu_assign_pointer(in_dev->mc_hash, mc_hash);
+}
+
+static void ip_mc_hash_remove(struct in_device *in_dev,
+			      struct ip_mc_list *im)
+{
+	struct ip_mc_list __rcu **mc_hash = rtnl_dereference(in_dev->mc_hash);
+	struct ip_mc_list *aux;
+
+	if (!mc_hash)
+		return;
+	mc_hash += ip_mc_hash(im);
+	while ((aux = rtnl_dereference(*mc_hash)) != im)
+		mc_hash = &aux->next_hash;
+	*mc_hash = im->next_hash;
+}
+
 
 /*
  *	A socket has joined a multicast group on device dev.
@@ -1258,6 +1309,8 @@ void ip_mc_inc_group(struct in_device *in_dev, __be32 addr)
 	in_dev->mc_count++;
 	rcu_assign_pointer(in_dev->mc_list, im);
 
+	ip_mc_hash_add(in_dev, im);
+
 #ifdef CONFIG_IP_MULTICAST
 	igmpv3_del_delrec(in_dev, im->multiaddr);
 #endif
@@ -1314,6 +1367,7 @@ void ip_mc_dec_group(struct in_device *in_dev, __be32 addr)
 	     ip = &i->next_rcu) {
 		if (i->multiaddr == addr) {
 			if (--i->users == 0) {
+				ip_mc_hash_remove(in_dev, i);
 				*ip = i->next_rcu;
 				in_dev->mc_count--;
 				igmp_group_dropped(i);
@@ -1381,13 +1435,9 @@ void ip_mc_init_dev(struct in_device *in_dev)
 {
 	ASSERT_RTNL();
 
-	in_dev->mc_tomb = NULL;
 #ifdef CONFIG_IP_MULTICAST
-	in_dev->mr_gq_running = 0;
 	setup_timer(&in_dev->mr_gq_timer, igmp_gq_timer_expire,
 			(unsigned long)in_dev);
-	in_dev->mr_ifc_count = 0;
-	in_dev->mc_count     = 0;
 	setup_timer(&in_dev->mr_ifc_timer, igmp_ifc_timer_expire,
 			(unsigned long)in_dev);
 	in_dev->mr_qrv = IGMP_Unsolicited_Report_Count;
@@ -2321,12 +2371,25 @@ void ip_mc_drop_socket(struct sock *sk)
 int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr, __be32 src_addr, u16 proto)
 {
 	struct ip_mc_list *im;
+	struct ip_mc_list __rcu **mc_hash;
 	struct ip_sf_list *psf;
 	int rv = 0;
 
-	for_each_pmc_rcu(in_dev, im) {
-		if (im->multiaddr == mc_addr)
-			break;
+	mc_hash = rcu_dereference(in_dev->mc_hash);
+	if (mc_hash) {
+		u32 hash = hash_32((__force u32)mc_addr, MC_HASH_SZ_LOG);
+
+		for (im = rcu_dereference(mc_hash[hash]);
+		     im != NULL;
+		     im = rcu_dereference(im->next_hash)) {
+			if (im->multiaddr == mc_addr)
+				break;
+		}
+	} else {
+		for_each_pmc_rcu(in_dev, im) {
+			if (im->multiaddr == mc_addr)
+				break;
+		}
 	}
 	if (im && proto == IPPROTO_IGMP) {
 		rv = 1;
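The igmp.c change replaces the linear ip_check_mc_rcu() scan with buckets selected by hash_32(), a multiplicative hash: multiply by a large odd constant and keep the top `bits` bits of the 32-bit product. A stand-alone sketch; the constant is the common 32-bit golden-ratio multiplier and MC_HASH_SZ_LOG = 9 is a working assumption here, while the kernel's hash_32() in <linux/hash.h> is the authoritative version:

```c
#include <assert.h>
#include <stdint.h>

#define MC_HASH_SZ_LOG 9	/* 512 buckets, assumed for illustration */

/* Multiplicative hash sketch: the high bits of val * K are well
 * mixed for a suitably chosen odd constant K, so keeping the top
 * `bits` bits yields the bucket index. */
static uint32_t hash_32_sketch(uint32_t val, unsigned int bits)
{
	return (val * 0x61C88647u) >> (32 - bits);
}

/* Bucket for a multicast address, as ip_mc_hash() computes it. */
static uint32_t mc_bucket(uint32_t multiaddr)
{
	return hash_32_sketch(multiaddr, MC_HASH_SZ_LOG);
}
```

With many joined groups this turns each ip_check_mc_rcu() lookup from O(n) over the whole mc_list into a scan of one short hash chain, which is the point of the patch; the table is only built once mc_count reaches 4, so small configurations keep the plain list.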
diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
index cec539458307..c5313a9c019b 100644
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -247,8 +247,6 @@ static struct inet_frag_queue *inet_frag_intern(struct netns_frags *nf,
 {
 	struct inet_frag_bucket *hb;
 	struct inet_frag_queue *qp;
-#ifdef CONFIG_SMP
-#endif
 	unsigned int hash;
 
 	read_lock(&f->lock); /* Protects against hash rebuild */
diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
index 2a83591492dd..1f6eab66f7ce 100644
--- a/net/ipv4/ip_gre.c
+++ b/net/ipv4/ip_gre.c
@@ -121,103 +121,8 @@ static int ipgre_tunnel_init(struct net_device *dev);
121static int ipgre_net_id __read_mostly; 121static int ipgre_net_id __read_mostly;
122static int gre_tap_net_id __read_mostly; 122static int gre_tap_net_id __read_mostly;
123 123
124static __sum16 check_checksum(struct sk_buff *skb) 124static int ipgre_err(struct sk_buff *skb, u32 info,
125{ 125 const struct tnl_ptk_info *tpi)
126 __sum16 csum = 0;
127
128 switch (skb->ip_summed) {
129 case CHECKSUM_COMPLETE:
130 csum = csum_fold(skb->csum);
131
132 if (!csum)
133 break;
134 /* Fall through. */
135
136 case CHECKSUM_NONE:
137 skb->csum = 0;
138 csum = __skb_checksum_complete(skb);
139 skb->ip_summed = CHECKSUM_COMPLETE;
140 break;
141 }
142
143 return csum;
144}
145
146static int ip_gre_calc_hlen(__be16 o_flags)
147{
148 int addend = 4;
149
150 if (o_flags&TUNNEL_CSUM)
151 addend += 4;
152 if (o_flags&TUNNEL_KEY)
153 addend += 4;
154 if (o_flags&TUNNEL_SEQ)
155 addend += 4;
156 return addend;
157}
158
159static int parse_gre_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
160 bool *csum_err, int *hdr_len)
161{
162 unsigned int ip_hlen = ip_hdrlen(skb);
163 const struct gre_base_hdr *greh;
164 __be32 *options;
165
166 if (unlikely(!pskb_may_pull(skb, sizeof(struct gre_base_hdr))))
167 return -EINVAL;
168
169 greh = (struct gre_base_hdr *)(skb_network_header(skb) + ip_hlen);
170 if (unlikely(greh->flags & (GRE_VERSION | GRE_ROUTING)))
171 return -EINVAL;
172
173 tpi->flags = gre_flags_to_tnl_flags(greh->flags);
174 *hdr_len = ip_gre_calc_hlen(tpi->flags);
175
176 if (!pskb_may_pull(skb, *hdr_len))
177 return -EINVAL;
178
179 greh = (struct gre_base_hdr *)(skb_network_header(skb) + ip_hlen);
180
181 tpi->proto = greh->protocol;
182
183 options = (__be32 *)(greh + 1);
184 if (greh->flags & GRE_CSUM) {
185 if (check_checksum(skb)) {
186 *csum_err = true;
187 return -EINVAL;
188 }
189 options++;
190 }
191
192 if (greh->flags & GRE_KEY) {
193 tpi->key = *options;
194 options++;
195 } else
196 tpi->key = 0;
197
198 if (unlikely(greh->flags & GRE_SEQ)) {
199 tpi->seq = *options;
200 options++;
201 } else
202 tpi->seq = 0;
203
204 /* WCCP version 1 and 2 protocol decoding.
205 * - Change protocol to IP
206 * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header
207 */
208 if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
209 tpi->proto = htons(ETH_P_IP);
210 if ((*(u8 *)options & 0xF0) != 0x40) {
211 *hdr_len += 4;
212 if (!pskb_may_pull(skb, *hdr_len))
213 return -EINVAL;
214 }
215 }
216
217 return 0;
218}
219
220static void ipgre_err(struct sk_buff *skb, u32 info)
221{ 126{
222 127
223 /* All the routers (except for Linux) return only 128 /* All the routers (except for Linux) return only
@@ -239,26 +144,18 @@ static void ipgre_err(struct sk_buff *skb, u32 info)
239 const int type = icmp_hdr(skb)->type; 144 const int type = icmp_hdr(skb)->type;
240 const int code = icmp_hdr(skb)->code; 145 const int code = icmp_hdr(skb)->code;
241 struct ip_tunnel *t; 146 struct ip_tunnel *t;
242 struct tnl_ptk_info tpi;
243 int hdr_len;
244 bool csum_err = false;
245
246 if (parse_gre_header(skb, &tpi, &csum_err, &hdr_len)) {
247 if (!csum_err) /* ignore csum errors. */
248 return;
249 }
250 147
251 switch (type) { 148 switch (type) {
252 default: 149 default:
253 case ICMP_PARAMETERPROB: 150 case ICMP_PARAMETERPROB:
254 return; 151 return PACKET_RCVD;
255 152
256 case ICMP_DEST_UNREACH: 153 case ICMP_DEST_UNREACH:
257 switch (code) { 154 switch (code) {
258 case ICMP_SR_FAILED: 155 case ICMP_SR_FAILED:
259 case ICMP_PORT_UNREACH: 156 case ICMP_PORT_UNREACH:
260 /* Impossible event. */ 157 /* Impossible event. */
261 return; 158 return PACKET_RCVD;
262 default: 159 default:
263 /* All others are translated to HOST_UNREACH. 160 /* All others are translated to HOST_UNREACH.
264 rfc2003 contains "deep thoughts" about NET_UNREACH, 161 rfc2003 contains "deep thoughts" about NET_UNREACH,
@@ -269,138 +166,61 @@ static void ipgre_err(struct sk_buff *skb, u32 info)
269 break; 166 break;
270 case ICMP_TIME_EXCEEDED: 167 case ICMP_TIME_EXCEEDED:
271 if (code != ICMP_EXC_TTL) 168 if (code != ICMP_EXC_TTL)
272 return; 169 return PACKET_RCVD;
273 break; 170 break;
274 171
275 case ICMP_REDIRECT: 172 case ICMP_REDIRECT:
276 break; 173 break;
277 } 174 }
278 175
279 if (tpi.proto == htons(ETH_P_TEB)) 176 if (tpi->proto == htons(ETH_P_TEB))
280 itn = net_generic(net, gre_tap_net_id); 177 itn = net_generic(net, gre_tap_net_id);
281 else 178 else
282 itn = net_generic(net, ipgre_net_id); 179 itn = net_generic(net, ipgre_net_id);
283 180
284 iph = (const struct iphdr *)skb->data; 181 iph = (const struct iphdr *)skb->data;
285 t = ip_tunnel_lookup(itn, skb->dev->ifindex, tpi.flags, 182 t = ip_tunnel_lookup(itn, skb->dev->ifindex, tpi->flags,
286 iph->daddr, iph->saddr, tpi.key); 183 iph->daddr, iph->saddr, tpi->key);
287 184
288 if (t == NULL) 185 if (t == NULL)
289 return; 186 return PACKET_REJECT;
290 187
291 if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) {
292 ipv4_update_pmtu(skb, dev_net(skb->dev), info,
293 t->parms.link, 0, IPPROTO_GRE, 0);
294 return;
295 }
296 if (type == ICMP_REDIRECT) {
297 ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0,
298 IPPROTO_GRE, 0);
299 return;
300 }
301 if (t->parms.iph.daddr == 0 || 188 if (t->parms.iph.daddr == 0 ||
302 ipv4_is_multicast(t->parms.iph.daddr)) 189 ipv4_is_multicast(t->parms.iph.daddr))
303 return; 190 return PACKET_RCVD;
304 191
305 if (t->parms.iph.ttl == 0 && type == ICMP_TIME_EXCEEDED) 192 if (t->parms.iph.ttl == 0 && type == ICMP_TIME_EXCEEDED)
306 return; 193 return PACKET_RCVD;
307 194
308 if (time_before(jiffies, t->err_time + IPTUNNEL_ERR_TIMEO)) 195 if (time_before(jiffies, t->err_time + IPTUNNEL_ERR_TIMEO))
309 t->err_count++; 196 t->err_count++;
310 else 197 else
311 t->err_count = 1; 198 t->err_count = 1;
312 t->err_time = jiffies; 199 t->err_time = jiffies;
200 return PACKET_RCVD;
313} 201}
314 202
315static int ipgre_rcv(struct sk_buff *skb) 203static int ipgre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi)
316{ 204{
317 struct net *net = dev_net(skb->dev); 205 struct net *net = dev_net(skb->dev);
318 struct ip_tunnel_net *itn; 206 struct ip_tunnel_net *itn;
319 const struct iphdr *iph; 207 const struct iphdr *iph;
320 struct ip_tunnel *tunnel; 208 struct ip_tunnel *tunnel;
321 struct tnl_ptk_info tpi;
322 int hdr_len;
323 bool csum_err = false;
324
325 if (parse_gre_header(skb, &tpi, &csum_err, &hdr_len) < 0)
326 goto drop;
327 209
328 if (tpi.proto == htons(ETH_P_TEB)) 210 if (tpi->proto == htons(ETH_P_TEB))
329 itn = net_generic(net, gre_tap_net_id); 211 itn = net_generic(net, gre_tap_net_id);
330 else 212 else
331 itn = net_generic(net, ipgre_net_id); 213 itn = net_generic(net, ipgre_net_id);
332 214
333 iph = ip_hdr(skb); 215 iph = ip_hdr(skb);
334 tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, tpi.flags, 216 tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, tpi->flags,
335 iph->saddr, iph->daddr, tpi.key); 217 iph->saddr, iph->daddr, tpi->key);
336 218
337 if (tunnel) { 219 if (tunnel) {
338 ip_tunnel_rcv(tunnel, skb, &tpi, log_ecn_error); 220 ip_tunnel_rcv(tunnel, skb, tpi, log_ecn_error);
339 return 0; 221 return PACKET_RCVD;
340 }
341 icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0);
342drop:
343 kfree_skb(skb);
344 return 0;
345}
346
347static struct sk_buff *handle_offloads(struct ip_tunnel *tunnel, struct sk_buff *skb)
348{
349 int err;
350
351 if (skb_is_gso(skb)) {
352 err = skb_unclone(skb, GFP_ATOMIC);
353 if (unlikely(err))
354 goto error;
355 skb_shinfo(skb)->gso_type |= SKB_GSO_GRE;
356 return skb;
357 } else if (skb->ip_summed == CHECKSUM_PARTIAL &&
358 tunnel->parms.o_flags&TUNNEL_CSUM) {
359 err = skb_checksum_help(skb);
360 if (unlikely(err))
361 goto error;
362 } else if (skb->ip_summed != CHECKSUM_PARTIAL)
363 skb->ip_summed = CHECKSUM_NONE;
364
365 return skb;
366
367error:
368 kfree_skb(skb);
369 return ERR_PTR(err);
370}
371
372static struct sk_buff *gre_build_header(struct sk_buff *skb,
373 const struct tnl_ptk_info *tpi,
374 int hdr_len)
375{
376 struct gre_base_hdr *greh;
377
378 skb_push(skb, hdr_len);
379
380 greh = (struct gre_base_hdr *)skb->data;
381 greh->flags = tnl_flags_to_gre_flags(tpi->flags);
382 greh->protocol = tpi->proto;
383
384 if (tpi->flags&(TUNNEL_KEY|TUNNEL_CSUM|TUNNEL_SEQ)) {
385 __be32 *ptr = (__be32 *)(((u8 *)greh) + hdr_len - 4);
386
387 if (tpi->flags&TUNNEL_SEQ) {
388 *ptr = tpi->seq;
389 ptr--;
390 }
391 if (tpi->flags&TUNNEL_KEY) {
392 *ptr = tpi->key;
393 ptr--;
394 }
395 if (tpi->flags&TUNNEL_CSUM &&
396 !(skb_shinfo(skb)->gso_type & SKB_GSO_GRE)) {
397 *(__sum16 *)ptr = 0;
398 *(__sum16 *)ptr = csum_fold(skb_checksum(skb, 0,
399 skb->len, 0));
400 }
401 } 222 }
402 223 return PACKET_REJECT;
403 return skb;
404} 224}
405 225
406static void __gre_xmit(struct sk_buff *skb, struct net_device *dev, 226static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
@@ -410,11 +230,6 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
410 struct ip_tunnel *tunnel = netdev_priv(dev); 230 struct ip_tunnel *tunnel = netdev_priv(dev);
411 struct tnl_ptk_info tpi; 231 struct tnl_ptk_info tpi;
412 232
413 if (likely(!skb->encapsulation)) {
414 skb_reset_inner_headers(skb);
415 skb->encapsulation = 1;
416 }
417
418 tpi.flags = tunnel->parms.o_flags; 233 tpi.flags = tunnel->parms.o_flags;
419 tpi.proto = proto; 234 tpi.proto = proto;
420 tpi.key = tunnel->parms.o_key; 235 tpi.key = tunnel->parms.o_key;
@@ -423,13 +238,9 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
423 tpi.seq = htonl(tunnel->o_seqno); 238 tpi.seq = htonl(tunnel->o_seqno);
424 239
425 /* Push GRE header. */ 240 /* Push GRE header. */
426 skb = gre_build_header(skb, &tpi, tunnel->hlen); 241 gre_build_header(skb, &tpi, tunnel->hlen);
427 if (unlikely(!skb)) {
428 dev->stats.tx_dropped++;
429 return;
430 }
431 242
432 ip_tunnel_xmit(skb, dev, tnl_params); 243 ip_tunnel_xmit(skb, dev, tnl_params, tnl_params->protocol);
433} 244}
434 245
435static netdev_tx_t ipgre_xmit(struct sk_buff *skb, 246static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
@@ -438,7 +249,7 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
438 struct ip_tunnel *tunnel = netdev_priv(dev); 249 struct ip_tunnel *tunnel = netdev_priv(dev);
439 const struct iphdr *tnl_params; 250 const struct iphdr *tnl_params;
440 251
441 skb = handle_offloads(tunnel, skb); 252 skb = gre_handle_offloads(skb, !!(tunnel->parms.o_flags&TUNNEL_CSUM));
442 if (IS_ERR(skb)) 253 if (IS_ERR(skb))
443 goto out; 254 goto out;
444 255
@@ -477,7 +288,7 @@ static netdev_tx_t gre_tap_xmit(struct sk_buff *skb,
477{ 288{
478 struct ip_tunnel *tunnel = netdev_priv(dev); 289 struct ip_tunnel *tunnel = netdev_priv(dev);
479 290
480 skb = handle_offloads(tunnel, skb); 291 skb = gre_handle_offloads(skb, !!(tunnel->parms.o_flags&TUNNEL_CSUM));
481 if (IS_ERR(skb)) 292 if (IS_ERR(skb))
482 goto out; 293 goto out;
483 294
@@ -503,10 +314,11 @@ static int ipgre_tunnel_ioctl(struct net_device *dev,
503 314
504 if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p))) 315 if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p)))
505 return -EFAULT; 316 return -EFAULT;
506 if (p.iph.version != 4 || p.iph.protocol != IPPROTO_GRE || 317 if (cmd == SIOCADDTUNNEL || cmd == SIOCCHGTUNNEL) {
507 p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)) || 318 if (p.iph.version != 4 || p.iph.protocol != IPPROTO_GRE ||
508 ((p.i_flags|p.o_flags)&(GRE_VERSION|GRE_ROUTING))) { 319 p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)) ||
509 return -EINVAL; 320 ((p.i_flags|p.o_flags)&(GRE_VERSION|GRE_ROUTING)))
321 return -EINVAL;
510 } 322 }
511 p.i_flags = gre_flags_to_tnl_flags(p.i_flags); 323 p.i_flags = gre_flags_to_tnl_flags(p.i_flags);
512 p.o_flags = gre_flags_to_tnl_flags(p.o_flags); 324 p.o_flags = gre_flags_to_tnl_flags(p.o_flags);
@@ -708,9 +520,10 @@ static int ipgre_tunnel_init(struct net_device *dev)
708 return ip_tunnel_init(dev); 520 return ip_tunnel_init(dev);
709} 521}
710 522
711static const struct gre_protocol ipgre_protocol = { 523static struct gre_cisco_protocol ipgre_protocol = {
712 .handler = ipgre_rcv, 524 .handler = ipgre_rcv,
713 .err_handler = ipgre_err, 525 .err_handler = ipgre_err,
526 .priority = 0,
714}; 527};
715 528
716static int __net_init ipgre_init_net(struct net *net) 529static int __net_init ipgre_init_net(struct net *net)
@@ -978,7 +791,7 @@ static int __init ipgre_init(void)
978 if (err < 0) 791 if (err < 0)
979 goto pnet_tap_faied; 792 goto pnet_tap_faied;
980 793
981 err = gre_add_protocol(&ipgre_protocol, GREPROTO_CISCO); 794 err = gre_cisco_register(&ipgre_protocol);
982 if (err < 0) { 795 if (err < 0) {
983 pr_info("%s: can't add protocol\n", __func__); 796 pr_info("%s: can't add protocol\n", __func__);
984 goto add_proto_failed; 797 goto add_proto_failed;
@@ -997,7 +810,7 @@ static int __init ipgre_init(void)
997tap_ops_failed: 810tap_ops_failed:
998 rtnl_link_unregister(&ipgre_link_ops); 811 rtnl_link_unregister(&ipgre_link_ops);
999rtnl_link_failed: 812rtnl_link_failed:
1000 gre_del_protocol(&ipgre_protocol, GREPROTO_CISCO); 813 gre_cisco_unregister(&ipgre_protocol);
1001add_proto_failed: 814add_proto_failed:
1002 unregister_pernet_device(&ipgre_tap_net_ops); 815 unregister_pernet_device(&ipgre_tap_net_ops);
1003pnet_tap_faied: 816pnet_tap_faied:
@@ -1009,8 +822,7 @@ static void __exit ipgre_fini(void)
1009{ 822{
1010 rtnl_link_unregister(&ipgre_tap_ops); 823 rtnl_link_unregister(&ipgre_tap_ops);
1011 rtnl_link_unregister(&ipgre_link_ops); 824 rtnl_link_unregister(&ipgre_link_ops);
1012 if (gre_del_protocol(&ipgre_protocol, GREPROTO_CISCO) < 0) 825 gre_cisco_unregister(&ipgre_protocol);
1013 pr_info("%s: can't remove protocol\n", __func__);
1014 unregister_pernet_device(&ipgre_tap_net_ops); 826 unregister_pernet_device(&ipgre_tap_net_ops);
1015 unregister_pernet_device(&ipgre_net_ops); 827 unregister_pernet_device(&ipgre_net_ops);
1016} 828}
diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
index 7fa8f08fa7ae..945734b2f209 100644
--- a/net/ipv4/ip_tunnel.c
+++ b/net/ipv4/ip_tunnel.c
@@ -304,6 +304,7 @@ static struct net_device *__ip_tunnel_create(struct net *net,
304 304
305 tunnel = netdev_priv(dev); 305 tunnel = netdev_priv(dev);
306 tunnel->parms = *parms; 306 tunnel->parms = *parms;
307 tunnel->net = net;
307 308
308 err = register_netdevice(dev); 309 err = register_netdevice(dev);
309 if (err) 310 if (err)
@@ -408,13 +409,6 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
408 const struct iphdr *iph = ip_hdr(skb); 409 const struct iphdr *iph = ip_hdr(skb);
409 int err; 410 int err;
410 411
411 secpath_reset(skb);
412
413 skb->protocol = tpi->proto;
414
415 skb->mac_header = skb->network_header;
416 __pskb_pull(skb, tunnel->hlen);
417 skb_postpull_rcsum(skb, skb_transport_header(skb), tunnel->hlen);
418#ifdef CONFIG_NET_IPGRE_BROADCAST 412#ifdef CONFIG_NET_IPGRE_BROADCAST
419 if (ipv4_is_multicast(iph->daddr)) { 413 if (ipv4_is_multicast(iph->daddr)) {
420 /* Looped back packet, drop it! */ 414 /* Looped back packet, drop it! */
@@ -442,23 +436,6 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
442 tunnel->i_seqno = ntohl(tpi->seq) + 1; 436 tunnel->i_seqno = ntohl(tpi->seq) + 1;
443 } 437 }
444 438
445 /* Warning: All skb pointers will be invalidated! */
446 if (tunnel->dev->type == ARPHRD_ETHER) {
447 if (!pskb_may_pull(skb, ETH_HLEN)) {
448 tunnel->dev->stats.rx_length_errors++;
449 tunnel->dev->stats.rx_errors++;
450 goto drop;
451 }
452
453 iph = ip_hdr(skb);
454 skb->protocol = eth_type_trans(skb, tunnel->dev);
455 skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
456 }
457
458 skb->pkt_type = PACKET_HOST;
459 __skb_tunnel_rx(skb, tunnel->dev);
460
461 skb_reset_network_header(skb);
462 err = IP_ECN_decapsulate(iph, skb); 439 err = IP_ECN_decapsulate(iph, skb);
463 if (unlikely(err)) { 440 if (unlikely(err)) {
464 if (log_ecn_error) 441 if (log_ecn_error)
@@ -477,6 +454,15 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
477 tstats->rx_bytes += skb->len; 454 tstats->rx_bytes += skb->len;
478 u64_stats_update_end(&tstats->syncp); 455 u64_stats_update_end(&tstats->syncp);
479 456
457 if (tunnel->net != dev_net(tunnel->dev))
458 skb_scrub_packet(skb);
459
460 if (tunnel->dev->type == ARPHRD_ETHER) {
461 skb->protocol = eth_type_trans(skb, tunnel->dev);
462 skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
463 } else {
464 skb->dev = tunnel->dev;
465 }
480 gro_cells_receive(&tunnel->gro_cells, skb); 466 gro_cells_receive(&tunnel->gro_cells, skb);
481 return 0; 467 return 0;
482 468
@@ -486,24 +472,69 @@ drop:
486} 472}
487EXPORT_SYMBOL_GPL(ip_tunnel_rcv); 473EXPORT_SYMBOL_GPL(ip_tunnel_rcv);
488 474
475static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
476 struct rtable *rt, __be16 df)
477{
478 struct ip_tunnel *tunnel = netdev_priv(dev);
479 int pkt_size = skb->len - tunnel->hlen;
480 int mtu;
481
482 if (df)
483 mtu = dst_mtu(&rt->dst) - dev->hard_header_len
484 - sizeof(struct iphdr) - tunnel->hlen;
485 else
486 mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
487
488 if (skb_dst(skb))
489 skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
490
491 if (skb->protocol == htons(ETH_P_IP)) {
492 if (!skb_is_gso(skb) &&
493 (df & htons(IP_DF)) && mtu < pkt_size) {
494 memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
495 icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
496 return -E2BIG;
497 }
498 }
499#if IS_ENABLED(CONFIG_IPV6)
500 else if (skb->protocol == htons(ETH_P_IPV6)) {
501 struct rt6_info *rt6 = (struct rt6_info *)skb_dst(skb);
502
503 if (rt6 && mtu < dst_mtu(skb_dst(skb)) &&
504 mtu >= IPV6_MIN_MTU) {
505 if ((tunnel->parms.iph.daddr &&
506 !ipv4_is_multicast(tunnel->parms.iph.daddr)) ||
507 rt6->rt6i_dst.plen == 128) {
508 rt6->rt6i_flags |= RTF_MODIFIED;
509 dst_metric_set(skb_dst(skb), RTAX_MTU, mtu);
510 }
511 }
512
513 if (!skb_is_gso(skb) && mtu >= IPV6_MIN_MTU &&
514 mtu < pkt_size) {
515 icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
516 return -E2BIG;
517 }
518 }
519#endif
520 return 0;
521}
522
489void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, 523void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
490 const struct iphdr *tnl_params) 524 const struct iphdr *tnl_params, const u8 protocol)
491{ 525{
492 struct ip_tunnel *tunnel = netdev_priv(dev); 526 struct ip_tunnel *tunnel = netdev_priv(dev);
493 const struct iphdr *inner_iph; 527 const struct iphdr *inner_iph;
494 struct iphdr *iph;
495 struct flowi4 fl4; 528 struct flowi4 fl4;
496 u8 tos, ttl; 529 u8 tos, ttl;
497 __be16 df; 530 __be16 df;
498 struct rtable *rt; /* Route to the other host */ 531 struct rtable *rt; /* Route to the other host */
499 struct net_device *tdev; /* Device to other host */
500 unsigned int max_headroom; /* The extra header space needed */ 532 unsigned int max_headroom; /* The extra header space needed */
501 __be32 dst; 533 __be32 dst;
502 int mtu; 534 int err;
503 535
504 inner_iph = (const struct iphdr *)skb_inner_network_header(skb); 536 inner_iph = (const struct iphdr *)skb_inner_network_header(skb);
505 537
506 memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
507 dst = tnl_params->daddr; 538 dst = tnl_params->daddr;
508 if (dst == 0) { 539 if (dst == 0) {
509 /* NBMA tunnel */ 540 /* NBMA tunnel */
@@ -561,8 +592,8 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
561 tos = ipv6_get_dsfield((const struct ipv6hdr *)inner_iph); 592 tos = ipv6_get_dsfield((const struct ipv6hdr *)inner_iph);
562 } 593 }
563 594
564 rt = ip_route_output_tunnel(dev_net(dev), &fl4, 595 rt = ip_route_output_tunnel(tunnel->net, &fl4,
565 tunnel->parms.iph.protocol, 596 protocol,
566 dst, tnl_params->saddr, 597 dst, tnl_params->saddr,
567 tunnel->parms.o_key, 598 tunnel->parms.o_key,
568 RT_TOS(tos), 599 RT_TOS(tos),
@@ -571,58 +602,19 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
571 dev->stats.tx_carrier_errors++; 602 dev->stats.tx_carrier_errors++;
572 goto tx_error; 603 goto tx_error;
573 } 604 }
574 tdev = rt->dst.dev; 605 if (rt->dst.dev == dev) {
575
576 if (tdev == dev) {
577 ip_rt_put(rt); 606 ip_rt_put(rt);
578 dev->stats.collisions++; 607 dev->stats.collisions++;
579 goto tx_error; 608 goto tx_error;
580 } 609 }
581 610
582 df = tnl_params->frag_off; 611 if (tnl_update_pmtu(dev, skb, rt, tnl_params->frag_off)) {
583 612 ip_rt_put(rt);
584 if (df) 613 goto tx_error;
585 mtu = dst_mtu(&rt->dst) - dev->hard_header_len
586 - sizeof(struct iphdr);
587 else
588 mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
589
590 if (skb_dst(skb))
591 skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
592
593 if (skb->protocol == htons(ETH_P_IP)) {
594 df |= (inner_iph->frag_off&htons(IP_DF));
595
596 if (!skb_is_gso(skb) &&
597 (inner_iph->frag_off&htons(IP_DF)) &&
598 mtu < ntohs(inner_iph->tot_len)) {
599 icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
600 ip_rt_put(rt);
601 goto tx_error;
602 }
603 } 614 }
604#if IS_ENABLED(CONFIG_IPV6)
605 else if (skb->protocol == htons(ETH_P_IPV6)) {
606 struct rt6_info *rt6 = (struct rt6_info *)skb_dst(skb);
607
608 if (rt6 && mtu < dst_mtu(skb_dst(skb)) &&
609 mtu >= IPV6_MIN_MTU) {
610 if ((tunnel->parms.iph.daddr &&
611 !ipv4_is_multicast(tunnel->parms.iph.daddr)) ||
612 rt6->rt6i_dst.plen == 128) {
613 rt6->rt6i_flags |= RTF_MODIFIED;
614 dst_metric_set(skb_dst(skb), RTAX_MTU, mtu);
615 }
616 }
617 615
618 if (!skb_is_gso(skb) && mtu >= IPV6_MIN_MTU && 616 if (tunnel->net != dev_net(dev))
619 mtu < skb->len) { 617 skb_scrub_packet(skb);
620 icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
621 ip_rt_put(rt);
622 goto tx_error;
623 }
624 }
625#endif
626 618
627 if (tunnel->err_count > 0) { 619 if (tunnel->err_count > 0) {
628 if (time_before(jiffies, 620 if (time_before(jiffies,
@@ -646,8 +638,12 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
646 ttl = ip4_dst_hoplimit(&rt->dst); 638 ttl = ip4_dst_hoplimit(&rt->dst);
647 } 639 }
648 640
649 max_headroom = LL_RESERVED_SPACE(tdev) + sizeof(struct iphdr) 641 df = tnl_params->frag_off;
650 + rt->dst.header_len; 642 if (skb->protocol == htons(ETH_P_IP))
643 df |= (inner_iph->frag_off&htons(IP_DF));
644
645 max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
646 + rt->dst.header_len;
651 if (max_headroom > dev->needed_headroom) { 647 if (max_headroom > dev->needed_headroom) {
652 dev->needed_headroom = max_headroom; 648 dev->needed_headroom = max_headroom;
653 if (skb_cow_head(skb, dev->needed_headroom)) { 649 if (skb_cow_head(skb, dev->needed_headroom)) {
@@ -657,27 +653,11 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
657 } 653 }
658 } 654 }
659 655
660 skb_dst_drop(skb); 656 err = iptunnel_xmit(dev_net(dev), rt, skb,
661 skb_dst_set(skb, &rt->dst); 657 fl4.saddr, fl4.daddr, protocol,
662 658 ip_tunnel_ecn_encap(tos, inner_iph, skb), ttl, df);
663 /* Push down and install the IP header. */ 659 iptunnel_xmit_stats(err, &dev->stats, dev->tstats);
664 skb_push(skb, sizeof(struct iphdr));
665 skb_reset_network_header(skb);
666
667 iph = ip_hdr(skb);
668 inner_iph = (const struct iphdr *)skb_inner_network_header(skb);
669 660
670 iph->version = 4;
671 iph->ihl = sizeof(struct iphdr) >> 2;
672 iph->frag_off = df;
673 iph->protocol = tnl_params->protocol;
674 iph->tos = ip_tunnel_ecn_encap(tos, inner_iph, skb);
675 iph->daddr = fl4.daddr;
676 iph->saddr = fl4.saddr;
677 iph->ttl = ttl;
678 tunnel_ip_select_ident(skb, inner_iph, &rt->dst);
679
680 iptunnel_xmit(skb, dev);
681 return; 661 return;
682 662
683#if IS_ENABLED(CONFIG_IPV6) 663#if IS_ENABLED(CONFIG_IPV6)
@@ -926,6 +906,7 @@ int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
926 if (ip_tunnel_find(itn, p, dev->type)) 906 if (ip_tunnel_find(itn, p, dev->type))
927 return -EEXIST; 907 return -EEXIST;
928 908
909 nt->net = net;
929 nt->parms = *p; 910 nt->parms = *p;
930 err = register_netdevice(dev); 911 err = register_netdevice(dev);
931 if (err) 912 if (err)
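The new `tnl_update_pmtu()` helper above factors the path-MTU arithmetic out of `ip_tunnel_xmit()`: in the DF case the usable MTU is the route MTU minus the link-layer header, the outer IPv4 header, and the tunnel header, and `ICMP_FRAG_NEEDED` is raised only for non-GSO packets with DF set that exceed it. A host-order sketch of just that math (the helper names and the fixed 20-byte header constant are assumptions of this sketch, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

#define IPV4_HDR_LEN 20   /* sizeof(struct iphdr) with no options */

/* DF-path MTU as computed in tnl_update_pmtu(): route MTU minus link
 * header, outer IPv4 header, and tunnel header. */
static int tunnel_path_mtu(int dst_mtu, int hard_header_len, int tnl_hlen)
{
	return dst_mtu - hard_header_len - IPV4_HDR_LEN - tnl_hlen;
}

/* Frag-needed test: only non-GSO packets with DF set whose payload
 * (skb->len minus the tunnel header) exceeds the computed MTU. */
static bool frag_needed(int skb_len, int tnl_hlen, int mtu,
			bool df_set, bool gso)
{
	int pkt_size = skb_len - tnl_hlen;

	return !gso && df_set && mtu < pkt_size;
}
```

Compared with the code it replaces, the helper also drops the extra ICMP/redirect handling from the old inline version, leaving `ip_tunnel_xmit()` with a single error path (`ip_rt_put()` then `tx_error`).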
diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
new file mode 100644
index 000000000000..7167b08977df
--- /dev/null
+++ b/net/ipv4/ip_tunnel_core.c
@@ -0,0 +1,122 @@
1/*
2 * Copyright (c) 2013 Nicira, Inc.
3 *
4 * This program is free software; you can redistribute it and/or
5 * modify it under the terms of version 2 of the GNU General Public
6 * License as published by the Free Software Foundation.
7 *
8 * This program is distributed in the hope that it will be useful, but
9 * WITHOUT ANY WARRANTY; without even the implied warranty of
10 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
11 * General Public License for more details.
12 *
13 * You should have received a copy of the GNU General Public License
14 * along with this program; if not, write to the Free Software
15 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
16 * 02110-1301, USA
17 */
18
19#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
20
21#include <linux/types.h>
22#include <linux/kernel.h>
23#include <linux/skbuff.h>
24#include <linux/netdevice.h>
25#include <linux/in.h>
26#include <linux/if_arp.h>
27#include <linux/mroute.h>
28#include <linux/init.h>
29#include <linux/in6.h>
30#include <linux/inetdevice.h>
31#include <linux/netfilter_ipv4.h>
32#include <linux/etherdevice.h>
33#include <linux/if_ether.h>
34#include <linux/if_vlan.h>
35
36#include <net/ip.h>
37#include <net/icmp.h>
38#include <net/protocol.h>
39#include <net/ip_tunnels.h>
40#include <net/arp.h>
41#include <net/checksum.h>
42#include <net/dsfield.h>
43#include <net/inet_ecn.h>
44#include <net/xfrm.h>
45#include <net/net_namespace.h>
46#include <net/netns/generic.h>
47#include <net/rtnetlink.h>
48
49int iptunnel_xmit(struct net *net, struct rtable *rt,
50 struct sk_buff *skb,
51 __be32 src, __be32 dst, __u8 proto,
52 __u8 tos, __u8 ttl, __be16 df)
53{
54 int pkt_len = skb->len;
55 struct iphdr *iph;
56 int err;
57
58 nf_reset(skb);
59 secpath_reset(skb);
60 skb->rxhash = 0;
61 skb_dst_drop(skb);
62 skb_dst_set(skb, &rt->dst);
63 memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
64
65 /* Push down and install the IP header. */
66 __skb_push(skb, sizeof(struct iphdr));
67 skb_reset_network_header(skb);
68
69 iph = ip_hdr(skb);
70
71 iph->version = 4;
72 iph->ihl = sizeof(struct iphdr) >> 2;
73 iph->frag_off = df;
74 iph->protocol = proto;
75 iph->tos = tos;
76 iph->daddr = dst;
77 iph->saddr = src;
78 iph->ttl = ttl;
79 tunnel_ip_select_ident(skb,
80 (const struct iphdr *)skb_inner_network_header(skb),
81 &rt->dst);
82
83 err = ip_local_out(skb);
84 if (unlikely(net_xmit_eval(err)))
85 pkt_len = 0;
86 return pkt_len;
87}
88EXPORT_SYMBOL_GPL(iptunnel_xmit);
89
90int iptunnel_pull_header(struct sk_buff *skb, int hdr_len, __be16 inner_proto)
91{
92 if (unlikely(!pskb_may_pull(skb, hdr_len)))
93 return -ENOMEM;
94
95 skb_pull_rcsum(skb, hdr_len);
96
97 if (inner_proto == htons(ETH_P_TEB)) {
98 struct ethhdr *eh = (struct ethhdr *)skb->data;
99
100 if (unlikely(!pskb_may_pull(skb, ETH_HLEN)))
101 return -ENOMEM;
102
103 if (likely(ntohs(eh->h_proto) >= ETH_P_802_3_MIN))
104 skb->protocol = eh->h_proto;
105 else
106 skb->protocol = htons(ETH_P_802_2);
107
108 } else {
109 skb->protocol = inner_proto;
110 }
111
112 nf_reset(skb);
113 secpath_reset(skb);
114 if (!skb->l4_rxhash)
115 skb->rxhash = 0;
116 skb_dst_drop(skb);
117 skb->vlan_tci = 0;
118 skb_set_queue_mapping(skb, 0);
119 skb->pkt_type = PACKET_HOST;
120 return 0;
121}
122EXPORT_SYMBOL_GPL(iptunnel_pull_header);
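`iptunnel_pull_header()` above picks `skb->protocol` for `ETH_P_TEB` (bridged) tunnels from the inner Ethernet header, treating `h_proto` values below `ETH_P_802_3_MIN` as 802.3 lengths rather than EtherTypes. That decision can be sketched in isolation — host-order values here, where the kernel compares in network order via `ntohs()`:

```c
#include <assert.h>
#include <stdint.h>

#define ETH_P_TEB        0x6558   /* transparent Ethernet bridging */
#define ETH_P_802_2      0x0004   /* 802.2 frames */
#define ETH_P_802_3_MIN  0x0600   /* smallest real EtherType */

/* Inner-protocol selection mirroring iptunnel_pull_header(): values
 * below ETH_P_802_3_MIN in h_proto are 802.3 frame lengths, not
 * EtherTypes, so such frames are mapped to ETH_P_802_2. */
static uint16_t inner_protocol(uint16_t tunnel_proto, uint16_t eth_h_proto)
{
	if (tunnel_proto != ETH_P_TEB)
		return tunnel_proto;
	return eth_h_proto >= ETH_P_802_3_MIN ? eth_h_proto : ETH_P_802_2;
}
```

The rest of the helper is the namespace-crossing scrub that item 11 in the highlights relies on: rxhash, dst, vlan tag, queue mapping, and conntrack state are all cleared before the packet re-enters the stack.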
diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
index c118f6b576bb..17cc0ffa8c0d 100644
--- a/net/ipv4/ip_vti.c
+++ b/net/ipv4/ip_vti.c
@@ -606,17 +606,10 @@ static int __net_init vti_fb_tunnel_init(struct net_device *dev)
606 struct iphdr *iph = &tunnel->parms.iph; 606 struct iphdr *iph = &tunnel->parms.iph;
607 struct vti_net *ipn = net_generic(dev_net(dev), vti_net_id); 607 struct vti_net *ipn = net_generic(dev_net(dev), vti_net_id);
608 608
609 tunnel->dev = dev;
610 strcpy(tunnel->parms.name, dev->name);
611
612 iph->version = 4; 609 iph->version = 4;
613 iph->protocol = IPPROTO_IPIP; 610 iph->protocol = IPPROTO_IPIP;
614 iph->ihl = 5; 611 iph->ihl = 5;
615 612
616 dev->tstats = alloc_percpu(struct pcpu_tstats);
617 if (!dev->tstats)
618 return -ENOMEM;
619
620 dev_hold(dev); 613 dev_hold(dev);
621 rcu_assign_pointer(ipn->tunnels_wc[0], tunnel); 614 rcu_assign_pointer(ipn->tunnels_wc[0], tunnel);
622 return 0; 615 return 0;
diff --git a/net/ipv4/ipcomp.c b/net/ipv4/ipcomp.c
index 59cb8c769056..826be4cb482a 100644
--- a/net/ipv4/ipcomp.c
+++ b/net/ipv4/ipcomp.c
@@ -47,12 +47,9 @@ static void ipcomp4_err(struct sk_buff *skb, u32 info)
47 if (!x) 47 if (!x)
48 return; 48 return;
49 49
50 if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) { 50 if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
51 atomic_inc(&flow_cache_genid);
52 rt_genid_bump(net);
53
54 ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_COMP, 0); 51 ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_COMP, 0);
55 } else 52 else
56 ipv4_redirect(skb, net, 0, 0, IPPROTO_COMP, 0); 53 ipv4_redirect(skb, net, 0, 0, IPPROTO_COMP, 0);
57 xfrm_state_put(x); 54 xfrm_state_put(x);
58} 55}
diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
index 77bfcce64fe5..51fc2a1dcdd3 100644
--- a/net/ipv4/ipip.c
+++ b/net/ipv4/ipip.c
@@ -188,8 +188,12 @@ static int ipip_rcv(struct sk_buff *skb)
188 struct net *net = dev_net(skb->dev); 188 struct net *net = dev_net(skb->dev);
189 struct ip_tunnel_net *itn = net_generic(net, ipip_net_id); 189 struct ip_tunnel_net *itn = net_generic(net, ipip_net_id);
190 struct ip_tunnel *tunnel; 190 struct ip_tunnel *tunnel;
191 const struct iphdr *iph = ip_hdr(skb); 191 const struct iphdr *iph;
192 192
193 if (iptunnel_pull_header(skb, 0, tpi.proto))
194 goto drop;
195
196 iph = ip_hdr(skb);
193 tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, TUNNEL_NO_KEY, 197 tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, TUNNEL_NO_KEY,
194 iph->saddr, iph->daddr, 0); 198 iph->saddr, iph->daddr, 0);
195 if (tunnel) { 199 if (tunnel) {
@@ -222,7 +226,7 @@ static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
222 skb->encapsulation = 1; 226 skb->encapsulation = 1;
223 } 227 }
224 228
225 ip_tunnel_xmit(skb, dev, tiph); 229 ip_tunnel_xmit(skb, dev, tiph, tiph->protocol);
226 return NETDEV_TX_OK; 230 return NETDEV_TX_OK;
227 231
228tx_error: 232tx_error:
@@ -240,11 +244,13 @@ ipip_tunnel_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
240 if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p))) 244 if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p)))
241 return -EFAULT; 245 return -EFAULT;
242 246
243 if (p.iph.version != 4 || p.iph.protocol != IPPROTO_IPIP || 247 if (cmd == SIOCADDTUNNEL || cmd == SIOCCHGTUNNEL) {
244 p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF))) 248 if (p.iph.version != 4 || p.iph.protocol != IPPROTO_IPIP ||
245 return -EINVAL; 249 p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)))
246 if (p.i_key || p.o_key || p.i_flags || p.o_flags) 250 return -EINVAL;
247 return -EINVAL; 251 }
252
253 p.i_key = p.o_key = p.i_flags = p.o_flags = 0;
248 if (p.iph.ttl) 254 if (p.iph.ttl)
249 p.iph.frag_off |= htons(IP_DF); 255 p.iph.frag_off |= htons(IP_DF);
250 256
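The ipip ioctl change above restricts header validation to `SIOCADDTUNNEL`/`SIOCCHGTUNNEL` and now zeroes the key/flag fields instead of rejecting them. The acceptance test itself — version-4, option-less (`ihl == 5`) IPIP, with `frag_off` carrying nothing but DF — can be sketched in host byte order (the kernel checks `p.iph.frag_off & htons(~IP_DF)` in network order):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define IPPROTO_IPIP 4
#define IP_DF 0x4000          /* don't-fragment bit, host order here */

/* Parameter check applied on SIOCADDTUNNEL/SIOCCHGTUNNEL in the ipip
 * ioctl path: reject anything but a plain IPv4-in-IPv4 header whose
 * frag_off has no bits set other than DF. */
static bool ipip_params_ok(uint8_t version, uint8_t protocol,
			   uint8_t ihl, uint16_t frag_off)
{
	return version == 4 && protocol == IPPROTO_IPIP &&
	       ihl == 5 && !(frag_off & (uint16_t)~IP_DF);
}
```

Other commands (e.g. `SIOCGETTUNNEL`) skip the check entirely, which is the point of the patch: a get should not fail just because the passed-in template header is unset.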
diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
index 9d9610ae7855..132a09664704 100644
--- a/net/ipv4/ipmr.c
+++ b/net/ipv4/ipmr.c
@@ -980,7 +980,7 @@ static int ipmr_cache_report(struct mr_table *mrt,
 
 	/* Copy the IP header */
 
-	skb->network_header = skb->tail;
+	skb_set_network_header(skb, skb->len);
 	skb_put(skb, ihl);
 	skb_copy_to_linear_data(skb, pkt->data, ihl);
 	ip_hdr(skb)->protocol = 0;	/* Flag to the kernel this is a route add */
@@ -1609,7 +1609,7 @@ int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
 
 static int ipmr_device_event(struct notifier_block *this, unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct net *net = dev_net(dev);
 	struct mr_table *mrt;
 	struct vif_device *v;
diff --git a/net/ipv4/netfilter/Kconfig b/net/ipv4/netfilter/Kconfig
index e7916c193932..4e9028017428 100644
--- a/net/ipv4/netfilter/Kconfig
+++ b/net/ipv4/netfilter/Kconfig
@@ -111,7 +111,7 @@ config IP_NF_TARGET_REJECT
 	  To compile it as a module, choose M here.  If unsure, say N.
 
 config IP_NF_TARGET_ULOG
-	tristate "ULOG target support"
+	tristate "ULOG target support (obsolete)"
 	default m if NETFILTER_ADVANCED=n
 	---help---
 
diff --git a/net/ipv4/netfilter/ipt_MASQUERADE.c b/net/ipv4/netfilter/ipt_MASQUERADE.c
index 5d5d4d1be9c2..30e4de940567 100644
--- a/net/ipv4/netfilter/ipt_MASQUERADE.c
+++ b/net/ipv4/netfilter/ipt_MASQUERADE.c
@@ -108,7 +108,7 @@ static int masq_device_event(struct notifier_block *this,
 			     unsigned long event,
 			     void *ptr)
 {
-	const struct net_device *dev = ptr;
+	const struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct net *net = dev_net(dev);
 
 	if (event == NETDEV_DOWN) {
@@ -129,7 +129,10 @@ static int masq_inet_event(struct notifier_block *this,
 			   void *ptr)
 {
 	struct net_device *dev = ((struct in_ifaddr *)ptr)->ifa_dev->dev;
-	return masq_device_event(this, event, dev);
+	struct netdev_notifier_info info;
+
+	netdev_notifier_info_init(&info, dev);
+	return masq_device_event(this, event, &info);
 }
 
 static struct notifier_block masq_dev_notifier = {
diff --git a/net/ipv4/netfilter/ipt_ULOG.c b/net/ipv4/netfilter/ipt_ULOG.c
index 32b0e978c8e0..cbc22158af49 100644
--- a/net/ipv4/netfilter/ipt_ULOG.c
+++ b/net/ipv4/netfilter/ipt_ULOG.c
@@ -331,6 +331,12 @@ static int ulog_tg_check(const struct xt_tgchk_param *par)
 {
 	const struct ipt_ulog_info *loginfo = par->targinfo;
 
+	if (!par->net->xt.ulog_warn_deprecated) {
+		pr_info("ULOG is deprecated and it will be removed soon, "
+			"use NFLOG instead\n");
+		par->net->xt.ulog_warn_deprecated = true;
+	}
+
 	if (loginfo->prefix[sizeof(loginfo->prefix) - 1] != '\0') {
 		pr_debug("prefix not null-terminated\n");
 		return -EINVAL;
diff --git a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
index 567d84168bd2..0a2e0e3e95ba 100644
--- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+++ b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
@@ -223,7 +223,7 @@ static struct nf_hook_ops ipv4_conntrack_ops[] __read_mostly = {
 static int log_invalid_proto_min = 0;
 static int log_invalid_proto_max = 255;
 
-static ctl_table ip_ct_sysctl_table[] = {
+static struct ctl_table ip_ct_sysctl_table[] = {
 	{
 		.procname	= "ip_conntrack_max",
 		.maxlen		= sizeof(int),
diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
index 7d93d62cd5fd..746427c9e719 100644
--- a/net/ipv4/ping.c
+++ b/net/ipv4/ping.c
@@ -33,7 +33,6 @@
 #include <linux/netdevice.h>
 #include <net/snmp.h>
 #include <net/ip.h>
-#include <net/ipv6.h>
 #include <net/icmp.h>
 #include <net/protocol.h>
 #include <linux/skbuff.h>
@@ -46,8 +45,18 @@
 #include <net/inet_common.h>
 #include <net/checksum.h>
 
+#if IS_ENABLED(CONFIG_IPV6)
+#include <linux/in6.h>
+#include <linux/icmpv6.h>
+#include <net/addrconf.h>
+#include <net/ipv6.h>
+#include <net/transp_v6.h>
+#endif
 
-static struct ping_table ping_table;
+
+struct ping_table ping_table;
+struct pingv6_ops pingv6_ops;
+EXPORT_SYMBOL_GPL(pingv6_ops);
 
 static u16 ping_port_rover;
 
@@ -58,6 +67,7 @@ static inline int ping_hashfn(struct net *net, unsigned int num, unsigned int mask)
 	pr_debug("hash(%d) = %d\n", num, res);
 	return res;
 }
+EXPORT_SYMBOL_GPL(ping_hash);
 
 static inline struct hlist_nulls_head *ping_hashslot(struct ping_table *table,
 		struct net *net, unsigned int num)
@@ -65,7 +75,7 @@ static inline struct hlist_nulls_head *ping_hashslot(struct ping_table *table,
 	return &table->hash[ping_hashfn(net, num, PING_HTABLE_MASK)];
 }
 
-static int ping_v4_get_port(struct sock *sk, unsigned short ident)
+int ping_get_port(struct sock *sk, unsigned short ident)
 {
 	struct hlist_nulls_node *node;
 	struct hlist_nulls_head *hlist;
@@ -103,6 +113,10 @@ next_port:
 		ping_portaddr_for_each_entry(sk2, node, hlist) {
 			isk2 = inet_sk(sk2);
 
+			/* BUG? Why is this reuse and not reuseaddr? ping.c
+			 * doesn't turn off SO_REUSEADDR, and it doesn't expect
+			 * that other ping processes can steal its packets.
+			 */
 			if ((isk2->inet_num == ident) &&
 			    (sk2 != sk) &&
 			    (!sk2->sk_reuse || !sk->sk_reuse))
@@ -125,17 +139,18 @@ fail:
 	write_unlock_bh(&ping_table.lock);
 	return 1;
 }
+EXPORT_SYMBOL_GPL(ping_get_port);
 
-static void ping_v4_hash(struct sock *sk)
+void ping_hash(struct sock *sk)
 {
-	pr_debug("ping_v4_hash(sk->port=%u)\n", inet_sk(sk)->inet_num);
+	pr_debug("ping_hash(sk->port=%u)\n", inet_sk(sk)->inet_num);
 	BUG(); /* "Please do not press this button again." */
 }
 
-static void ping_v4_unhash(struct sock *sk)
+void ping_unhash(struct sock *sk)
 {
 	struct inet_sock *isk = inet_sk(sk);
-	pr_debug("ping_v4_unhash(isk=%p,isk->num=%u)\n", isk, isk->inet_num);
+	pr_debug("ping_unhash(isk=%p,isk->num=%u)\n", isk, isk->inet_num);
 	if (sk_hashed(sk)) {
 		write_lock_bh(&ping_table.lock);
 		hlist_nulls_del(&sk->sk_nulls_node);
@@ -146,31 +161,61 @@ static void ping_v4_unhash(struct sock *sk)
 		write_unlock_bh(&ping_table.lock);
 	}
 }
+EXPORT_SYMBOL_GPL(ping_unhash);
 
-static struct sock *ping_v4_lookup(struct net *net, __be32 saddr, __be32 daddr,
-				   u16 ident, int dif)
+static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
 {
 	struct hlist_nulls_head *hslot = ping_hashslot(&ping_table, net, ident);
 	struct sock *sk = NULL;
 	struct inet_sock *isk;
 	struct hlist_nulls_node *hnode;
+	int dif = skb->dev->ifindex;
+
+	if (skb->protocol == htons(ETH_P_IP)) {
+		pr_debug("try to find: num = %d, daddr = %pI4, dif = %d\n",
+			 (int)ident, &ip_hdr(skb)->daddr, dif);
+#if IS_ENABLED(CONFIG_IPV6)
+	} else if (skb->protocol == htons(ETH_P_IPV6)) {
+		pr_debug("try to find: num = %d, daddr = %pI6c, dif = %d\n",
+			 (int)ident, &ipv6_hdr(skb)->daddr, dif);
+#endif
+	}
 
-	pr_debug("try to find: num = %d, daddr = %pI4, dif = %d\n",
-		 (int)ident, &daddr, dif);
 	read_lock_bh(&ping_table.lock);
 
 	ping_portaddr_for_each_entry(sk, hnode, hslot) {
 		isk = inet_sk(sk);
 
-		pr_debug("found: %p: num = %d, daddr = %pI4, dif = %d\n", sk,
-			 (int)isk->inet_num, &isk->inet_rcv_saddr,
-			 sk->sk_bound_dev_if);
-
 		pr_debug("iterate\n");
 		if (isk->inet_num != ident)
 			continue;
-		if (isk->inet_rcv_saddr && isk->inet_rcv_saddr != daddr)
-			continue;
+
+		if (skb->protocol == htons(ETH_P_IP) &&
+		    sk->sk_family == AF_INET) {
+			pr_debug("found: %p: num=%d, daddr=%pI4, dif=%d\n", sk,
+				 (int) isk->inet_num, &isk->inet_rcv_saddr,
+				 sk->sk_bound_dev_if);
+
+			if (isk->inet_rcv_saddr &&
+			    isk->inet_rcv_saddr != ip_hdr(skb)->daddr)
+				continue;
+#if IS_ENABLED(CONFIG_IPV6)
+		} else if (skb->protocol == htons(ETH_P_IPV6) &&
+			   sk->sk_family == AF_INET6) {
+			struct ipv6_pinfo *np = inet6_sk(sk);
+
+			pr_debug("found: %p: num=%d, daddr=%pI6c, dif=%d\n", sk,
+				 (int) isk->inet_num,
+				 &inet6_sk(sk)->rcv_saddr,
+				 sk->sk_bound_dev_if);
+
+			if (!ipv6_addr_any(&np->rcv_saddr) &&
+			    !ipv6_addr_equal(&np->rcv_saddr,
+					     &ipv6_hdr(skb)->daddr))
+				continue;
+#endif
+		}
+
 		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)
 			continue;
 
@@ -200,7 +245,7 @@ static void inet_get_ping_group_range_net(struct net *net, kgid_t *low,
 }
 
 
-static int ping_init_sock(struct sock *sk)
+int ping_init_sock(struct sock *sk)
 {
 	struct net *net = sock_net(sk);
 	kgid_t group = current_egid();
@@ -225,8 +270,9 @@ static int ping_init_sock(struct sock *sk)
 
 	return -EACCES;
 }
+EXPORT_SYMBOL_GPL(ping_init_sock);
 
-static void ping_close(struct sock *sk, long timeout)
+void ping_close(struct sock *sk, long timeout)
 {
 	pr_debug("ping_close(sk=%p,sk->num=%u)\n",
 		 inet_sk(sk), inet_sk(sk)->inet_num);
@@ -234,36 +280,122 @@ static void ping_close(struct sock *sk, long timeout)
 
 	sk_common_release(sk);
 }
+EXPORT_SYMBOL_GPL(ping_close);
+
+/* Checks the bind address and possibly modifies sk->sk_bound_dev_if. */
+static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
+				struct sockaddr *uaddr, int addr_len) {
+	struct net *net = sock_net(sk);
+	if (sk->sk_family == AF_INET) {
+		struct sockaddr_in *addr = (struct sockaddr_in *) uaddr;
+		int chk_addr_ret;
+
+		if (addr_len < sizeof(*addr))
+			return -EINVAL;
+
+		pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n",
+			 sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port));
+
+		chk_addr_ret = inet_addr_type(net, addr->sin_addr.s_addr);
+
+		if (addr->sin_addr.s_addr == htonl(INADDR_ANY))
+			chk_addr_ret = RTN_LOCAL;
+
+		if ((sysctl_ip_nonlocal_bind == 0 &&
+		    isk->freebind == 0 && isk->transparent == 0 &&
+		     chk_addr_ret != RTN_LOCAL) ||
+		    chk_addr_ret == RTN_MULTICAST ||
+		    chk_addr_ret == RTN_BROADCAST)
+			return -EADDRNOTAVAIL;
+
+#if IS_ENABLED(CONFIG_IPV6)
+	} else if (sk->sk_family == AF_INET6) {
+		struct sockaddr_in6 *addr = (struct sockaddr_in6 *) uaddr;
+		int addr_type, scoped, has_addr;
+		struct net_device *dev = NULL;
+
+		if (addr_len < sizeof(*addr))
+			return -EINVAL;
+
+		pr_debug("ping_check_bind_addr(sk=%p,addr=%pI6c,port=%d)\n",
+			 sk, addr->sin6_addr.s6_addr, ntohs(addr->sin6_port));
+
+		addr_type = ipv6_addr_type(&addr->sin6_addr);
+		scoped = __ipv6_addr_needs_scope_id(addr_type);
+		if ((addr_type != IPV6_ADDR_ANY &&
+		     !(addr_type & IPV6_ADDR_UNICAST)) ||
+		    (scoped && !addr->sin6_scope_id))
+			return -EINVAL;
+
+		rcu_read_lock();
+		if (addr->sin6_scope_id) {
+			dev = dev_get_by_index_rcu(net, addr->sin6_scope_id);
+			if (!dev) {
+				rcu_read_unlock();
+				return -ENODEV;
+			}
+		}
+		has_addr = pingv6_ops.ipv6_chk_addr(net, &addr->sin6_addr, dev,
+						    scoped);
+		rcu_read_unlock();
+
+		if (!(isk->freebind || isk->transparent || has_addr ||
+		      addr_type == IPV6_ADDR_ANY))
+			return -EADDRNOTAVAIL;
+
+		if (scoped)
+			sk->sk_bound_dev_if = addr->sin6_scope_id;
+#endif
+	} else {
+		return -EAFNOSUPPORT;
+	}
+	return 0;
+}
+
+static void ping_set_saddr(struct sock *sk, struct sockaddr *saddr)
+{
+	if (saddr->sa_family == AF_INET) {
+		struct inet_sock *isk = inet_sk(sk);
+		struct sockaddr_in *addr = (struct sockaddr_in *) saddr;
+		isk->inet_rcv_saddr = isk->inet_saddr = addr->sin_addr.s_addr;
+#if IS_ENABLED(CONFIG_IPV6)
+	} else if (saddr->sa_family == AF_INET6) {
+		struct sockaddr_in6 *addr = (struct sockaddr_in6 *) saddr;
+		struct ipv6_pinfo *np = inet6_sk(sk);
+		np->rcv_saddr = np->saddr = addr->sin6_addr;
+#endif
+	}
+}
 
+static void ping_clear_saddr(struct sock *sk, int dif)
+{
+	sk->sk_bound_dev_if = dif;
+	if (sk->sk_family == AF_INET) {
+		struct inet_sock *isk = inet_sk(sk);
+		isk->inet_rcv_saddr = isk->inet_saddr = 0;
+#if IS_ENABLED(CONFIG_IPV6)
+	} else if (sk->sk_family == AF_INET6) {
+		struct ipv6_pinfo *np = inet6_sk(sk);
+		memset(&np->rcv_saddr, 0, sizeof(np->rcv_saddr));
+		memset(&np->saddr, 0, sizeof(np->saddr));
+#endif
+	}
+}
 /*
  * We need our own bind because there are no privileged id's == local ports.
  * Moreover, we don't allow binding to multi- and broadcast addresses.
  */
 
-static int ping_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+int ping_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 {
-	struct sockaddr_in *addr = (struct sockaddr_in *)uaddr;
 	struct inet_sock *isk = inet_sk(sk);
 	unsigned short snum;
-	int chk_addr_ret;
 	int err;
+	int dif = sk->sk_bound_dev_if;
 
-	if (addr_len < sizeof(struct sockaddr_in))
-		return -EINVAL;
-
-	pr_debug("ping_v4_bind(sk=%p,sa_addr=%08x,sa_port=%d)\n",
-		 sk, addr->sin_addr.s_addr, ntohs(addr->sin_port));
-
-	chk_addr_ret = inet_addr_type(sock_net(sk), addr->sin_addr.s_addr);
-	if (addr->sin_addr.s_addr == htonl(INADDR_ANY))
-		chk_addr_ret = RTN_LOCAL;
-
-	if ((sysctl_ip_nonlocal_bind == 0 &&
-	    isk->freebind == 0 && isk->transparent == 0 &&
-	     chk_addr_ret != RTN_LOCAL) ||
-	    chk_addr_ret == RTN_MULTICAST ||
-	    chk_addr_ret == RTN_BROADCAST)
-		return -EADDRNOTAVAIL;
+	err = ping_check_bind_addr(sk, isk, uaddr, addr_len);
+	if (err)
+		return err;
 
 	lock_sock(sk);
 
@@ -272,42 +404,50 @@ static int ping_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 		goto out;
 
 	err = -EADDRINUSE;
-	isk->inet_rcv_saddr = isk->inet_saddr = addr->sin_addr.s_addr;
-	snum = ntohs(addr->sin_port);
-	if (ping_v4_get_port(sk, snum) != 0) {
-		isk->inet_saddr = isk->inet_rcv_saddr = 0;
+	ping_set_saddr(sk, uaddr);
+	snum = ntohs(((struct sockaddr_in *)uaddr)->sin_port);
+	if (ping_get_port(sk, snum) != 0) {
+		ping_clear_saddr(sk, dif);
 		goto out;
 	}
 
-	pr_debug("after bind(): num = %d, daddr = %pI4, dif = %d\n",
+	pr_debug("after bind(): num = %d, dif = %d\n",
 		 (int)isk->inet_num,
-		 &isk->inet_rcv_saddr,
 		 (int)sk->sk_bound_dev_if);
 
 	err = 0;
-	if (isk->inet_rcv_saddr)
+	if ((sk->sk_family == AF_INET && isk->inet_rcv_saddr) ||
+	    (sk->sk_family == AF_INET6 &&
+	     !ipv6_addr_any(&inet6_sk(sk)->rcv_saddr)))
 		sk->sk_userlocks |= SOCK_BINDADDR_LOCK;
+
 	if (snum)
 		sk->sk_userlocks |= SOCK_BINDPORT_LOCK;
 	isk->inet_sport = htons(isk->inet_num);
 	isk->inet_daddr = 0;
 	isk->inet_dport = 0;
+
+#if IS_ENABLED(CONFIG_IPV6)
+	if (sk->sk_family == AF_INET6)
+		memset(&inet6_sk(sk)->daddr, 0, sizeof(inet6_sk(sk)->daddr));
+#endif
+
 	sk_dst_reset(sk);
 out:
 	release_sock(sk);
 	pr_debug("ping_v4_bind -> %d\n", err);
 	return err;
 }
+EXPORT_SYMBOL_GPL(ping_bind);
 
 /*
  * Is this a supported type of ICMP message?
  */
 
-static inline int ping_supported(int type, int code)
+static inline int ping_supported(int family, int type, int code)
 {
-	if (type == ICMP_ECHO && code == 0)
-		return 1;
-	return 0;
+	return (family == AF_INET && type == ICMP_ECHO && code == 0) ||
+	       (family == AF_INET6 && type == ICMPV6_ECHO_REQUEST && code == 0);
}
 
 /*
@@ -315,30 +455,42 @@ static inline int ping_supported(int type, int code)
  * sort of error condition.
  */
 
-static int ping_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
-
-void ping_err(struct sk_buff *skb, u32 info)
+void ping_err(struct sk_buff *skb, int offset, u32 info)
 {
-	struct iphdr *iph = (struct iphdr *)skb->data;
-	struct icmphdr *icmph = (struct icmphdr *)(skb->data+(iph->ihl<<2));
+	int family;
+	struct icmphdr *icmph;
 	struct inet_sock *inet_sock;
-	int type = icmp_hdr(skb)->type;
-	int code = icmp_hdr(skb)->code;
+	int type;
+	int code;
 	struct net *net = dev_net(skb->dev);
 	struct sock *sk;
 	int harderr;
 	int err;
 
+	if (skb->protocol == htons(ETH_P_IP)) {
+		family = AF_INET;
+		type = icmp_hdr(skb)->type;
+		code = icmp_hdr(skb)->code;
+		icmph = (struct icmphdr *)(skb->data + offset);
+	} else if (skb->protocol == htons(ETH_P_IPV6)) {
+		family = AF_INET6;
+		type = icmp6_hdr(skb)->icmp6_type;
+		code = icmp6_hdr(skb)->icmp6_code;
+		icmph = (struct icmphdr *) (skb->data + offset);
+	} else {
+		BUG();
+	}
+
 	/* We assume the packet has already been checked by icmp_unreach */
 
-	if (!ping_supported(icmph->type, icmph->code))
+	if (!ping_supported(family, icmph->type, icmph->code))
 		return;
 
-	pr_debug("ping_err(type=%04x,code=%04x,id=%04x,seq=%04x)\n", type,
-		 code, ntohs(icmph->un.echo.id), ntohs(icmph->un.echo.sequence));
+	pr_debug("ping_err(proto=0x%x,type=%d,code=%d,id=%04x,seq=%04x)\n",
+		 skb->protocol, type, code, ntohs(icmph->un.echo.id),
+		 ntohs(icmph->un.echo.sequence));
 
-	sk = ping_v4_lookup(net, iph->daddr, iph->saddr,
-			    ntohs(icmph->un.echo.id), skb->dev->ifindex);
+	sk = ping_lookup(net, skb, ntohs(icmph->un.echo.id));
 	if (sk == NULL) {
 		pr_debug("no socket, dropping\n");
 		return;	/* No socket for error */
@@ -349,72 +501,83 @@ void ping_err(struct sk_buff *skb, u32 info)
 	harderr = 0;
 	inet_sock = inet_sk(sk);
 
-	switch (type) {
-	default:
-	case ICMP_TIME_EXCEEDED:
-		err = EHOSTUNREACH;
-		break;
-	case ICMP_SOURCE_QUENCH:
-		/* This is not a real error but ping wants to see it.
-		 * Report it with some fake errno. */
-		err = EREMOTEIO;
-		break;
-	case ICMP_PARAMETERPROB:
-		err = EPROTO;
-		harderr = 1;
-		break;
-	case ICMP_DEST_UNREACH:
-		if (code == ICMP_FRAG_NEEDED) { /* Path MTU discovery */
-			ipv4_sk_update_pmtu(skb, sk, info);
-			if (inet_sock->pmtudisc != IP_PMTUDISC_DONT) {
-				err = EMSGSIZE;
-				harderr = 1;
-				break;
+	if (skb->protocol == htons(ETH_P_IP)) {
+		switch (type) {
+		default:
+		case ICMP_TIME_EXCEEDED:
+			err = EHOSTUNREACH;
+			break;
+		case ICMP_SOURCE_QUENCH:
+			/* This is not a real error but ping wants to see it.
+			 * Report it with some fake errno.
+			 */
+			err = EREMOTEIO;
+			break;
+		case ICMP_PARAMETERPROB:
+			err = EPROTO;
+			harderr = 1;
+			break;
+		case ICMP_DEST_UNREACH:
+			if (code == ICMP_FRAG_NEEDED) { /* Path MTU discovery */
+				ipv4_sk_update_pmtu(skb, sk, info);
+				if (inet_sock->pmtudisc != IP_PMTUDISC_DONT) {
+					err = EMSGSIZE;
+					harderr = 1;
+					break;
+				}
+				goto out;
 			}
-			goto out;
-		}
-		err = EHOSTUNREACH;
-		if (code <= NR_ICMP_UNREACH) {
-			harderr = icmp_err_convert[code].fatal;
-			err = icmp_err_convert[code].errno;
+			err = EHOSTUNREACH;
+			if (code <= NR_ICMP_UNREACH) {
+				harderr = icmp_err_convert[code].fatal;
+				err = icmp_err_convert[code].errno;
+			}
+			break;
+		case ICMP_REDIRECT:
+			/* See ICMP_SOURCE_QUENCH */
+			ipv4_sk_redirect(skb, sk);
+			err = EREMOTEIO;
+			break;
 		}
-		break;
-	case ICMP_REDIRECT:
-		/* See ICMP_SOURCE_QUENCH */
-		ipv4_sk_redirect(skb, sk);
-		err = EREMOTEIO;
-		break;
+#if IS_ENABLED(CONFIG_IPV6)
+	} else if (skb->protocol == htons(ETH_P_IPV6)) {
+		harderr = pingv6_ops.icmpv6_err_convert(type, code, &err);
+#endif
 	}
 
 	/*
 	 *	RFC1122: OK.  Passes ICMP errors back to application, as per
 	 *	4.1.3.3.
 	 */
-	if (!inet_sock->recverr) {
+	if ((family == AF_INET && !inet_sock->recverr) ||
+	    (family == AF_INET6 && !inet6_sk(sk)->recverr)) {
 		if (!harderr || sk->sk_state != TCP_ESTABLISHED)
 			goto out;
 	} else {
-		ip_icmp_error(sk, skb, err, 0 /* no remote port */,
-			      info, (u8 *)icmph);
+		if (family == AF_INET) {
+			ip_icmp_error(sk, skb, err, 0 /* no remote port */,
+				      info, (u8 *)icmph);
+#if IS_ENABLED(CONFIG_IPV6)
+		} else if (family == AF_INET6) {
+			pingv6_ops.ipv6_icmp_error(sk, skb, err, 0,
+						   info, (u8 *)icmph);
+#endif
+		}
 	}
 	sk->sk_err = err;
 	sk->sk_error_report(sk);
 out:
 	sock_put(sk);
 }
+EXPORT_SYMBOL_GPL(ping_err);
 
 /*
- * Copy and checksum an ICMP Echo packet from user space into a buffer.
+ * Copy and checksum an ICMP Echo packet from user space into a buffer
+ * starting from the payload.
  */
 
-struct pingfakehdr {
-	struct icmphdr icmph;
-	struct iovec *iov;
-	__wsum wcheck;
-};
-
-static int ping_getfrag(void *from, char *to,
-			int offset, int fraglen, int odd, struct sk_buff *skb)
+int ping_getfrag(void *from, char *to,
+		 int offset, int fraglen, int odd, struct sk_buff *skb)
 {
 	struct pingfakehdr *pfh = (struct pingfakehdr *)from;
 
@@ -425,20 +588,33 @@ static int ping_getfrag(void *from, char *to,
 			    pfh->iov, 0, fraglen - sizeof(struct icmphdr),
 			    &pfh->wcheck))
 			return -EFAULT;
+	} else if (offset < sizeof(struct icmphdr)) {
+		BUG();
+	} else {
+		if (csum_partial_copy_fromiovecend
+				(to, pfh->iov, offset - sizeof(struct icmphdr),
+				 fraglen, &pfh->wcheck))
+			return -EFAULT;
+	}
 
-		return 0;
+#if IS_ENABLED(CONFIG_IPV6)
+	/* For IPv6, checksum each skb as we go along, as expected by
+	 * icmpv6_push_pending_frames. For IPv4, accumulate the checksum in
+	 * wcheck, it will be finalized in ping_v4_push_pending_frames.
+	 */
+	if (pfh->family == AF_INET6) {
+		skb->csum = pfh->wcheck;
+		skb->ip_summed = CHECKSUM_NONE;
+		pfh->wcheck = 0;
 	}
-	if (offset < sizeof(struct icmphdr))
-		BUG();
-	if (csum_partial_copy_fromiovecend
-			(to, pfh->iov, offset - sizeof(struct icmphdr),
-			 fraglen, &pfh->wcheck))
-		return -EFAULT;
+#endif
+
 	return 0;
 }
+EXPORT_SYMBOL_GPL(ping_getfrag);
 
-static int ping_push_pending_frames(struct sock *sk, struct pingfakehdr *pfh,
+static int ping_v4_push_pending_frames(struct sock *sk, struct pingfakehdr *pfh,
 				    struct flowi4 *fl4)
 {
 	struct sk_buff *skb = skb_peek(&sk->sk_write_queue);
 
@@ -450,24 +626,9 @@ static int ping_push_pending_frames(struct sock *sk, struct pingfakehdr *pfh,
 	return ip_push_pending_frames(sk, fl4);
 }
 
-static int ping_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
-		    size_t len)
-{
-	struct net *net = sock_net(sk);
-	struct flowi4 fl4;
-	struct inet_sock *inet = inet_sk(sk);
-	struct ipcm_cookie ipc;
-	struct icmphdr user_icmph;
-	struct pingfakehdr pfh;
-	struct rtable *rt = NULL;
-	struct ip_options_data opt_copy;
-	int free = 0;
-	__be32 saddr, daddr, faddr;
-	u8 tos;
-	int err;
-
-	pr_debug("ping_sendmsg(sk=%p,sk->num=%u)\n", inet, inet->inet_num);
-
+int ping_common_sendmsg(int family, struct msghdr *msg, size_t len,
+			void *user_icmph, size_t icmph_len) {
+	u8 type, code;
 
 	if (len > 0xFFFF)
 		return -EMSGSIZE;
@@ -482,15 +643,53 @@ static int ping_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 
 	/*
 	 *	Fetch the ICMP header provided by the userland.
-	 *	iovec is modified!
+	 *	iovec is modified! The ICMP header is consumed.
 	 */
-
-	if (memcpy_fromiovec((u8 *)&user_icmph, msg->msg_iov,
-			     sizeof(struct icmphdr)))
+	if (memcpy_fromiovec(user_icmph, msg->msg_iov, icmph_len))
 		return -EFAULT;
-	if (!ping_supported(user_icmph.type, user_icmph.code))
+
+	if (family == AF_INET) {
+		type = ((struct icmphdr *) user_icmph)->type;
+		code = ((struct icmphdr *) user_icmph)->code;
+#if IS_ENABLED(CONFIG_IPV6)
+	} else if (family == AF_INET6) {
+		type = ((struct icmp6hdr *) user_icmph)->icmp6_type;
+		code = ((struct icmp6hdr *) user_icmph)->icmp6_code;
+#endif
+	} else {
+		BUG();
+	}
+
+	if (!ping_supported(family, type, code))
 		return -EINVAL;
 
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ping_common_sendmsg);
+
+int ping_v4_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+		    size_t len)
+{
+	struct net *net = sock_net(sk);
+	struct flowi4 fl4;
+	struct inet_sock *inet = inet_sk(sk);
+	struct ipcm_cookie ipc;
+	struct icmphdr user_icmph;
+	struct pingfakehdr pfh;
+	struct rtable *rt = NULL;
+	struct ip_options_data opt_copy;
+	int free = 0;
+	__be32 saddr, daddr, faddr;
+	u8 tos;
+	int err;
+
+	pr_debug("ping_v4_sendmsg(sk=%p,sk->num=%u)\n", inet, inet->inet_num);
+
+	err = ping_common_sendmsg(AF_INET, msg, len, &user_icmph,
+				  sizeof(user_icmph));
+	if (err)
+		return err;
+
 	/*
 	 *	Get and verify the address.
 	 */
@@ -595,13 +794,14 @@ back_from_confirm:
 	pfh.icmph.un.echo.sequence = user_icmph.un.echo.sequence;
 	pfh.iov = msg->msg_iov;
 	pfh.wcheck = 0;
+	pfh.family = AF_INET;
 
 	err = ip_append_data(sk, &fl4, ping_getfrag, &pfh, len,
 			     0, &ipc, &rt, msg->msg_flags);
 	if (err)
 		ip_flush_pending_frames(sk);
 	else
-		err = ping_push_pending_frames(sk, &pfh, &fl4);
+		err = ping_v4_push_pending_frames(sk, &pfh, &fl4);
 	release_sock(sk);
 
 out:
@@ -622,11 +822,13 @@ do_confirm:
 	goto out;
 }
 
-static int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
-			size_t len, int noblock, int flags, int *addr_len)
+int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+		 size_t len, int noblock, int flags, int *addr_len)
 {
 	struct inet_sock *isk = inet_sk(sk);
-	struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name;
+	int family = sk->sk_family;
+	struct sockaddr_in *sin;
+	struct sockaddr_in6 *sin6;
 	struct sk_buff *skb;
 	int copied, err;
 
@@ -636,11 +838,22 @@ static int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 	if (flags & MSG_OOB)
 		goto out;
 
-	if (addr_len)
-		*addr_len = sizeof(*sin);
+	if (addr_len) {
+		if (family == AF_INET)
+			*addr_len = sizeof(*sin);
+		else if (family == AF_INET6 && addr_len)
+			*addr_len = sizeof(*sin6);
+	}
 
-	if (flags & MSG_ERRQUEUE)
-		return ip_recv_error(sk, msg, len);
+	if (flags & MSG_ERRQUEUE) {
+		if (family == AF_INET) {
+			return ip_recv_error(sk, msg, len);
+#if IS_ENABLED(CONFIG_IPV6)
+		} else if (family == AF_INET6) {
+			return pingv6_ops.ipv6_recv_error(sk, msg, len);
+#endif
+		}
+	}
 
 	skb = skb_recv_datagram(sk, flags, noblock, &err);
 	if (!skb)
@@ -659,15 +872,40 @@ static int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 
 	sock_recv_timestamp(msg, sk, skb);
 
-	/* Copy the address. */
-	if (sin) {
+	/* Copy the address and add cmsg data. */
+	if (family == AF_INET) {
+		sin = (struct sockaddr_in *) msg->msg_name;
 		sin->sin_family = AF_INET;
 		sin->sin_port = 0 /* skb->h.uh->source */;
 		sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
 		memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+
+		if (isk->cmsg_flags)
+			ip_cmsg_recv(msg, skb);
+
+#if IS_ENABLED(CONFIG_IPV6)
+	} else if (family == AF_INET6) {
+		struct ipv6_pinfo *np = inet6_sk(sk);
+		struct ipv6hdr *ip6 = ipv6_hdr(skb);
+		sin6 = (struct sockaddr_in6 *) msg->msg_name;
+		sin6->sin6_family = AF_INET6;
+		sin6->sin6_port = 0;
+		sin6->sin6_addr = ip6->saddr;
+
+		sin6->sin6_flowinfo = 0;
+		if (np->sndflow)
+			sin6->sin6_flowinfo = ip6_flowinfo(ip6);
+
+		sin6->sin6_scope_id = ipv6_iface_scope_id(&sin6->sin6_addr,
+							  IP6CB(skb)->iif);
+
+		if (inet6_sk(sk)->rxopt.all)
+			pingv6_ops.ip6_datagram_recv_ctl(sk, msg, skb);
+#endif
+	} else {
+		BUG();
 	}
-	if (isk->cmsg_flags)
-		ip_cmsg_recv(msg, skb);
+
 	err = copied;
 
 done:
@@ -676,8 +914,9 @@ out:
676 pr_debug("ping_recvmsg -> %d\n", err); 914 pr_debug("ping_recvmsg -> %d\n", err);
677 return err; 915 return err;
678} 916}
917EXPORT_SYMBOL_GPL(ping_recvmsg);
679 918
680static int ping_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) 919int ping_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
681{ 920{
682 pr_debug("ping_queue_rcv_skb(sk=%p,sk->num=%d,skb=%p)\n", 921 pr_debug("ping_queue_rcv_skb(sk=%p,sk->num=%d,skb=%p)\n",
683 inet_sk(sk), inet_sk(sk)->inet_num, skb); 922 inet_sk(sk), inet_sk(sk)->inet_num, skb);
@@ -688,6 +927,7 @@ static int ping_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
688 } 927 }
689 return 0; 928 return 0;
690} 929}
930EXPORT_SYMBOL_GPL(ping_queue_rcv_skb);
691 931
692 932
693/* 933/*
@@ -698,10 +938,7 @@ void ping_rcv(struct sk_buff *skb)
698{ 938{
699 struct sock *sk; 939 struct sock *sk;
700 struct net *net = dev_net(skb->dev); 940 struct net *net = dev_net(skb->dev);
701 struct iphdr *iph = ip_hdr(skb);
702 struct icmphdr *icmph = icmp_hdr(skb); 941 struct icmphdr *icmph = icmp_hdr(skb);
703 __be32 saddr = iph->saddr;
704 __be32 daddr = iph->daddr;
705 942
706 /* We assume the packet has already been checked by icmp_rcv */ 943 /* We assume the packet has already been checked by icmp_rcv */
707 944
@@ -711,8 +948,7 @@ void ping_rcv(struct sk_buff *skb)
711 /* Push ICMP header back */ 948 /* Push ICMP header back */
712 skb_push(skb, skb->data - (u8 *)icmph); 949 skb_push(skb, skb->data - (u8 *)icmph);
713 950
714 sk = ping_v4_lookup(net, saddr, daddr, ntohs(icmph->un.echo.id), 951 sk = ping_lookup(net, skb, ntohs(icmph->un.echo.id));
715 skb->dev->ifindex);
716 if (sk != NULL) { 952 if (sk != NULL) {
717 pr_debug("rcv on socket %p\n", sk); 953 pr_debug("rcv on socket %p\n", sk);
718 ping_queue_rcv_skb(sk, skb_get(skb)); 954 ping_queue_rcv_skb(sk, skb_get(skb));
@@ -723,6 +959,7 @@ void ping_rcv(struct sk_buff *skb)
723 959
724 /* We're called from icmp_rcv(). kfree_skb() is done there. */ 960 /* We're called from icmp_rcv(). kfree_skb() is done there. */
725} 961}
962EXPORT_SYMBOL_GPL(ping_rcv);
726 963
727struct proto ping_prot = { 964struct proto ping_prot = {
728 .name = "PING", 965 .name = "PING",
@@ -733,14 +970,14 @@ struct proto ping_prot = {
733 .disconnect = udp_disconnect, 970 .disconnect = udp_disconnect,
734 .setsockopt = ip_setsockopt, 971 .setsockopt = ip_setsockopt,
735 .getsockopt = ip_getsockopt, 972 .getsockopt = ip_getsockopt,
736 .sendmsg = ping_sendmsg, 973 .sendmsg = ping_v4_sendmsg,
737 .recvmsg = ping_recvmsg, 974 .recvmsg = ping_recvmsg,
738 .bind = ping_bind, 975 .bind = ping_bind,
739 .backlog_rcv = ping_queue_rcv_skb, 976 .backlog_rcv = ping_queue_rcv_skb,
740 .release_cb = ip4_datagram_release_cb, 977 .release_cb = ip4_datagram_release_cb,
741 .hash = ping_v4_hash, 978 .hash = ping_hash,
742 .unhash = ping_v4_unhash, 979 .unhash = ping_unhash,
743 .get_port = ping_v4_get_port, 980 .get_port = ping_get_port,
744 .obj_size = sizeof(struct inet_sock), 981 .obj_size = sizeof(struct inet_sock),
745}; 982};
746EXPORT_SYMBOL(ping_prot); 983EXPORT_SYMBOL(ping_prot);
@@ -764,7 +1001,8 @@ static struct sock *ping_get_first(struct seq_file *seq, int start)
764 continue; 1001 continue;
765 1002
766 sk_nulls_for_each(sk, node, hslot) { 1003 sk_nulls_for_each(sk, node, hslot) {
767 if (net_eq(sock_net(sk), net)) 1004 if (net_eq(sock_net(sk), net) &&
1005 sk->sk_family == state->family)
768 goto found; 1006 goto found;
769 } 1007 }
770 } 1008 }
@@ -797,17 +1035,24 @@ static struct sock *ping_get_idx(struct seq_file *seq, loff_t pos)
797 return pos ? NULL : sk; 1035 return pos ? NULL : sk;
798} 1036}
799 1037
800static void *ping_seq_start(struct seq_file *seq, loff_t *pos) 1038void *ping_seq_start(struct seq_file *seq, loff_t *pos, sa_family_t family)
801{ 1039{
802 struct ping_iter_state *state = seq->private; 1040 struct ping_iter_state *state = seq->private;
803 state->bucket = 0; 1041 state->bucket = 0;
1042 state->family = family;
804 1043
805 read_lock_bh(&ping_table.lock); 1044 read_lock_bh(&ping_table.lock);
806 1045
807 return *pos ? ping_get_idx(seq, *pos-1) : SEQ_START_TOKEN; 1046 return *pos ? ping_get_idx(seq, *pos-1) : SEQ_START_TOKEN;
808} 1047}
1048EXPORT_SYMBOL_GPL(ping_seq_start);
1049
1050static void *ping_v4_seq_start(struct seq_file *seq, loff_t *pos)
1051{
1052 return ping_seq_start(seq, pos, AF_INET);
1053}
809 1054
810static void *ping_seq_next(struct seq_file *seq, void *v, loff_t *pos) 1055void *ping_seq_next(struct seq_file *seq, void *v, loff_t *pos)
811{ 1056{
812 struct sock *sk; 1057 struct sock *sk;
813 1058
@@ -819,13 +1064,15 @@ static void *ping_seq_next(struct seq_file *seq, void *v, loff_t *pos)
819 ++*pos; 1064 ++*pos;
820 return sk; 1065 return sk;
821} 1066}
1067EXPORT_SYMBOL_GPL(ping_seq_next);
822 1068
823static void ping_seq_stop(struct seq_file *seq, void *v) 1069void ping_seq_stop(struct seq_file *seq, void *v)
824{ 1070{
825 read_unlock_bh(&ping_table.lock); 1071 read_unlock_bh(&ping_table.lock);
826} 1072}
1073EXPORT_SYMBOL_GPL(ping_seq_stop);
827 1074
828static void ping_format_sock(struct sock *sp, struct seq_file *f, 1075static void ping_v4_format_sock(struct sock *sp, struct seq_file *f,
829 int bucket, int *len) 1076 int bucket, int *len)
830{ 1077{
831 struct inet_sock *inet = inet_sk(sp); 1078 struct inet_sock *inet = inet_sk(sp);
@@ -846,7 +1093,7 @@ static void ping_format_sock(struct sock *sp, struct seq_file *f,
846 atomic_read(&sp->sk_drops), len); 1093 atomic_read(&sp->sk_drops), len);
847} 1094}
848 1095
849static int ping_seq_show(struct seq_file *seq, void *v) 1096static int ping_v4_seq_show(struct seq_file *seq, void *v)
850{ 1097{
851 if (v == SEQ_START_TOKEN) 1098 if (v == SEQ_START_TOKEN)
852 seq_printf(seq, "%-127s\n", 1099 seq_printf(seq, "%-127s\n",
@@ -857,72 +1104,86 @@ static int ping_seq_show(struct seq_file *seq, void *v)
857 struct ping_iter_state *state = seq->private; 1104 struct ping_iter_state *state = seq->private;
858 int len; 1105 int len;
859 1106
860 ping_format_sock(v, seq, state->bucket, &len); 1107 ping_v4_format_sock(v, seq, state->bucket, &len);
861 seq_printf(seq, "%*s\n", 127 - len, ""); 1108 seq_printf(seq, "%*s\n", 127 - len, "");
862 } 1109 }
863 return 0; 1110 return 0;
864} 1111}
865 1112
866static const struct seq_operations ping_seq_ops = { 1113static const struct seq_operations ping_v4_seq_ops = {
867 .show = ping_seq_show, 1114 .show = ping_v4_seq_show,
868 .start = ping_seq_start, 1115 .start = ping_v4_seq_start,
869 .next = ping_seq_next, 1116 .next = ping_seq_next,
870 .stop = ping_seq_stop, 1117 .stop = ping_seq_stop,
871}; 1118};
872 1119
873static int ping_seq_open(struct inode *inode, struct file *file) 1120static int ping_seq_open(struct inode *inode, struct file *file)
874{ 1121{
875 return seq_open_net(inode, file, &ping_seq_ops, 1122 struct ping_seq_afinfo *afinfo = PDE_DATA(inode);
1123 return seq_open_net(inode, file, &afinfo->seq_ops,
876 sizeof(struct ping_iter_state)); 1124 sizeof(struct ping_iter_state));
877} 1125}
878 1126
879static const struct file_operations ping_seq_fops = { 1127const struct file_operations ping_seq_fops = {
880 .open = ping_seq_open, 1128 .open = ping_seq_open,
881 .read = seq_read, 1129 .read = seq_read,
882 .llseek = seq_lseek, 1130 .llseek = seq_lseek,
883 .release = seq_release_net, 1131 .release = seq_release_net,
884}; 1132};
1133EXPORT_SYMBOL_GPL(ping_seq_fops);
1134
1135static struct ping_seq_afinfo ping_v4_seq_afinfo = {
1136 .name = "icmp",
1137 .family = AF_INET,
1138 .seq_fops = &ping_seq_fops,
1139 .seq_ops = {
1140 .start = ping_v4_seq_start,
1141 .show = ping_v4_seq_show,
1142 .next = ping_seq_next,
1143 .stop = ping_seq_stop,
1144 },
1145};
885 1146
886static int ping_proc_register(struct net *net) 1147int ping_proc_register(struct net *net, struct ping_seq_afinfo *afinfo)
887{ 1148{
888 struct proc_dir_entry *p; 1149 struct proc_dir_entry *p;
889 int rc = 0; 1150 p = proc_create_data(afinfo->name, S_IRUGO, net->proc_net,
890 1151 afinfo->seq_fops, afinfo);
891 p = proc_create("icmp", S_IRUGO, net->proc_net, &ping_seq_fops);
892 if (!p) 1152 if (!p)
893 rc = -ENOMEM; 1153 return -ENOMEM;
894 return rc; 1154 return 0;
895} 1155}
1156EXPORT_SYMBOL_GPL(ping_proc_register);
896 1157
897static void ping_proc_unregister(struct net *net) 1158void ping_proc_unregister(struct net *net, struct ping_seq_afinfo *afinfo)
898{ 1159{
899 remove_proc_entry("icmp", net->proc_net); 1160 remove_proc_entry(afinfo->name, net->proc_net);
900} 1161}
1162EXPORT_SYMBOL_GPL(ping_proc_unregister);
901 1163
902 1164static int __net_init ping_v4_proc_init_net(struct net *net)
903static int __net_init ping_proc_init_net(struct net *net)
904{ 1165{
905 return ping_proc_register(net); 1166 return ping_proc_register(net, &ping_v4_seq_afinfo);
906} 1167}
907 1168
908static void __net_exit ping_proc_exit_net(struct net *net) 1169static void __net_exit ping_v4_proc_exit_net(struct net *net)
909{ 1170{
910 ping_proc_unregister(net); 1171 ping_proc_unregister(net, &ping_v4_seq_afinfo);
911} 1172}
912 1173
913static struct pernet_operations ping_net_ops = { 1174static struct pernet_operations ping_v4_net_ops = {
914 .init = ping_proc_init_net, 1175 .init = ping_v4_proc_init_net,
915 .exit = ping_proc_exit_net, 1176 .exit = ping_v4_proc_exit_net,
916}; 1177};
917 1178
918int __init ping_proc_init(void) 1179int __init ping_proc_init(void)
919{ 1180{
920 return register_pernet_subsys(&ping_net_ops); 1181 return register_pernet_subsys(&ping_v4_net_ops);
921} 1182}
922 1183
923void ping_proc_exit(void) 1184void ping_proc_exit(void)
924{ 1185{
925 unregister_pernet_subsys(&ping_net_ops); 1186 unregister_pernet_subsys(&ping_v4_net_ops);
926} 1187}
927 1188
928#endif 1189#endif
diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 2a5bf86d2415..6577a1149a47 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -273,6 +273,7 @@ static const struct snmp_mib snmp4_net_list[] = {
 	SNMP_MIB_ITEM("TCPFastOpenListenOverflow", LINUX_MIB_TCPFASTOPENLISTENOVERFLOW),
 	SNMP_MIB_ITEM("TCPFastOpenCookieReqd", LINUX_MIB_TCPFASTOPENCOOKIEREQD),
 	SNMP_MIB_ITEM("TCPSpuriousRtxHostQueues", LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES),
+	SNMP_MIB_ITEM("LowLatencyRxPackets", LINUX_MIB_LOWLATENCYRXPACKETS),
 	SNMP_MIB_SENTINEL
 };
 
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index d35bbf0cf404..a9a54a236832 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -565,10 +565,25 @@ static inline void rt_free(struct rtable *rt)
 
 static DEFINE_SPINLOCK(fnhe_lock);
 
+static void fnhe_flush_routes(struct fib_nh_exception *fnhe)
+{
+	struct rtable *rt;
+
+	rt = rcu_dereference(fnhe->fnhe_rth_input);
+	if (rt) {
+		RCU_INIT_POINTER(fnhe->fnhe_rth_input, NULL);
+		rt_free(rt);
+	}
+	rt = rcu_dereference(fnhe->fnhe_rth_output);
+	if (rt) {
+		RCU_INIT_POINTER(fnhe->fnhe_rth_output, NULL);
+		rt_free(rt);
+	}
+}
+
 static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
 {
 	struct fib_nh_exception *fnhe, *oldest;
-	struct rtable *orig;
 
 	oldest = rcu_dereference(hash->chain);
 	for (fnhe = rcu_dereference(oldest->fnhe_next); fnhe;
@@ -576,11 +591,7 @@ static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
 		if (time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp))
 			oldest = fnhe;
 	}
-	orig = rcu_dereference(oldest->fnhe_rth);
-	if (orig) {
-		RCU_INIT_POINTER(oldest->fnhe_rth, NULL);
-		rt_free(orig);
-	}
+	fnhe_flush_routes(oldest);
 	return oldest;
 }
 
@@ -594,11 +605,25 @@ static inline u32 fnhe_hashfun(__be32 daddr)
 	return hval & (FNHE_HASH_SIZE - 1);
 }
 
+static void fill_route_from_fnhe(struct rtable *rt, struct fib_nh_exception *fnhe)
+{
+	rt->rt_pmtu = fnhe->fnhe_pmtu;
+	rt->dst.expires = fnhe->fnhe_expires;
+
+	if (fnhe->fnhe_gw) {
+		rt->rt_flags |= RTCF_REDIRECTED;
+		rt->rt_gateway = fnhe->fnhe_gw;
+		rt->rt_uses_gateway = 1;
+	}
+}
+
 static void update_or_create_fnhe(struct fib_nh *nh, __be32 daddr, __be32 gw,
 				  u32 pmtu, unsigned long expires)
 {
 	struct fnhe_hash_bucket *hash;
 	struct fib_nh_exception *fnhe;
+	struct rtable *rt;
+	unsigned int i;
 	int depth;
 	u32 hval = fnhe_hashfun(daddr);
 
@@ -627,8 +652,15 @@ static void update_or_create_fnhe(struct fib_nh *nh, __be32 daddr, __be32 gw,
 		fnhe->fnhe_gw = gw;
 		if (pmtu) {
 			fnhe->fnhe_pmtu = pmtu;
-			fnhe->fnhe_expires = expires;
+			fnhe->fnhe_expires = max(1UL, expires);
 		}
+		/* Update all cached dsts too */
+		rt = rcu_dereference(fnhe->fnhe_rth_input);
+		if (rt)
+			fill_route_from_fnhe(rt, fnhe);
+		rt = rcu_dereference(fnhe->fnhe_rth_output);
+		if (rt)
+			fill_route_from_fnhe(rt, fnhe);
 	} else {
 		if (depth > FNHE_RECLAIM_DEPTH)
 			fnhe = fnhe_oldest(hash);
@@ -640,10 +672,27 @@ static void update_or_create_fnhe(struct fib_nh *nh, __be32 daddr, __be32 gw,
 			fnhe->fnhe_next = hash->chain;
 			rcu_assign_pointer(hash->chain, fnhe);
 		}
+		fnhe->fnhe_genid = fnhe_genid(dev_net(nh->nh_dev));
 		fnhe->fnhe_daddr = daddr;
 		fnhe->fnhe_gw = gw;
 		fnhe->fnhe_pmtu = pmtu;
 		fnhe->fnhe_expires = expires;
+
+		/* Exception created; mark the cached routes for the nexthop
+		 * stale, so anyone caching it rechecks if this exception
+		 * applies to them.
+		 */
+		rt = rcu_dereference(nh->nh_rth_input);
+		if (rt)
+			rt->dst.obsolete = DST_OBSOLETE_KILL;
+
+		for_each_possible_cpu(i) {
+			struct rtable __rcu **prt;
+			prt = per_cpu_ptr(nh->nh_pcpu_rth_output, i);
+			rt = rcu_dereference(*prt);
+			if (rt)
+				rt->dst.obsolete = DST_OBSOLETE_KILL;
+		}
 	}
 
 	fnhe->fnhe_stamp = jiffies;
@@ -922,12 +971,9 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 	if (mtu < ip_rt_min_pmtu)
 		mtu = ip_rt_min_pmtu;
 
-	if (!rt->rt_pmtu) {
-		dst->obsolete = DST_OBSOLETE_KILL;
-	} else {
-		rt->rt_pmtu = mtu;
-		dst->expires = max(1UL, jiffies + ip_rt_mtu_expires);
-	}
+	if (rt->rt_pmtu == mtu &&
+	    time_before(jiffies, dst->expires - ip_rt_mtu_expires / 2))
+		return;
 
 	rcu_read_lock();
 	if (fib_lookup(dev_net(dst->dev), fl4, &res) == 0) {
@@ -1068,11 +1114,11 @@ static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie)
 	 * DST_OBSOLETE_FORCE_CHK which forces validation calls down
 	 * into this function always.
 	 *
-	 * When a PMTU/redirect information update invalidates a
-	 * route, this is indicated by setting obsolete to
-	 * DST_OBSOLETE_KILL.
+	 * When a PMTU/redirect information update invalidates a route,
+	 * this is indicated by setting obsolete to DST_OBSOLETE_KILL or
+	 * DST_OBSOLETE_DEAD by dst_free().
 	 */
-	if (dst->obsolete == DST_OBSOLETE_KILL || rt_is_expired(rt))
+	if (dst->obsolete != DST_OBSOLETE_FORCE_CHK || rt_is_expired(rt))
 		return NULL;
 	return dst;
 }
@@ -1214,34 +1260,36 @@ static bool rt_bind_exception(struct rtable *rt, struct fib_nh_exception *fnhe,
 	spin_lock_bh(&fnhe_lock);
 
 	if (daddr == fnhe->fnhe_daddr) {
-		struct rtable *orig = rcu_dereference(fnhe->fnhe_rth);
-		if (orig && rt_is_expired(orig)) {
+		struct rtable __rcu **porig;
+		struct rtable *orig;
+		int genid = fnhe_genid(dev_net(rt->dst.dev));
+
+		if (rt_is_input_route(rt))
+			porig = &fnhe->fnhe_rth_input;
+		else
+			porig = &fnhe->fnhe_rth_output;
+		orig = rcu_dereference(*porig);
+
+		if (fnhe->fnhe_genid != genid) {
+			fnhe->fnhe_genid = genid;
 			fnhe->fnhe_gw = 0;
 			fnhe->fnhe_pmtu = 0;
 			fnhe->fnhe_expires = 0;
+			fnhe_flush_routes(fnhe);
+			orig = NULL;
 		}
-		if (fnhe->fnhe_pmtu) {
-			unsigned long expires = fnhe->fnhe_expires;
-			unsigned long diff = expires - jiffies;
-
-			if (time_before(jiffies, expires)) {
-				rt->rt_pmtu = fnhe->fnhe_pmtu;
-				dst_set_expires(&rt->dst, diff);
-			}
-		}
-		if (fnhe->fnhe_gw) {
-			rt->rt_flags |= RTCF_REDIRECTED;
-			rt->rt_gateway = fnhe->fnhe_gw;
-			rt->rt_uses_gateway = 1;
-		} else if (!rt->rt_gateway)
+		fill_route_from_fnhe(rt, fnhe);
+		if (!rt->rt_gateway)
 			rt->rt_gateway = daddr;
 
-		rcu_assign_pointer(fnhe->fnhe_rth, rt);
-		if (orig)
-			rt_free(orig);
+		if (!(rt->dst.flags & DST_NOCACHE)) {
+			rcu_assign_pointer(*porig, rt);
+			if (orig)
+				rt_free(orig);
+			ret = true;
+		}
 
 		fnhe->fnhe_stamp = jiffies;
-		ret = true;
 	}
 	spin_unlock_bh(&fnhe_lock);
 
@@ -1473,6 +1521,7 @@ static int __mkroute_input(struct sk_buff *skb,
 			   struct in_device *in_dev,
 			   __be32 daddr, __be32 saddr, u32 tos)
 {
+	struct fib_nh_exception *fnhe;
 	struct rtable *rth;
 	int err;
 	struct in_device *out_dev;
@@ -1519,8 +1568,13 @@ static int __mkroute_input(struct sk_buff *skb,
 		}
 	}
 
+	fnhe = find_exception(&FIB_RES_NH(*res), daddr);
 	if (do_cache) {
-		rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
+		if (fnhe != NULL)
+			rth = rcu_dereference(fnhe->fnhe_rth_input);
+		else
+			rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
+
 		if (rt_cache_valid(rth)) {
 			skb_dst_set_noref(skb, &rth->dst);
 			goto out;
@@ -1548,7 +1602,7 @@ static int __mkroute_input(struct sk_buff *skb,
 	rth->dst.input = ip_forward;
 	rth->dst.output = ip_output;
 
-	rt_set_nexthop(rth, daddr, res, NULL, res->fi, res->type, itag);
+	rt_set_nexthop(rth, daddr, res, fnhe, res->fi, res->type, itag);
 	skb_dst_set(skb, &rth->dst);
 out:
 	err = 0;
@@ -1863,7 +1917,7 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
 
 		fnhe = find_exception(nh, fl4->daddr);
 		if (fnhe)
-			prth = &fnhe->fnhe_rth;
+			prth = &fnhe->fnhe_rth_output;
 		else {
 			if (unlikely(fl4->flowi4_flags &
 				     FLOWI_FLAG_KNOWN_NH &&
@@ -2429,19 +2483,22 @@ static int ip_rt_gc_interval __read_mostly = 60 * HZ;
 static int ip_rt_gc_min_interval __read_mostly = HZ / 2;
 static int ip_rt_gc_elasticity __read_mostly = 8;
 
-static int ipv4_sysctl_rtcache_flush(ctl_table *__ctl, int write,
+static int ipv4_sysctl_rtcache_flush(struct ctl_table *__ctl, int write,
 					void __user *buffer,
 					size_t *lenp, loff_t *ppos)
 {
+	struct net *net = (struct net *)__ctl->extra1;
+
 	if (write) {
-		rt_cache_flush((struct net *)__ctl->extra1);
+		rt_cache_flush(net);
+		fnhe_genid_bump(net);
 		return 0;
 	}
 
 	return -EINVAL;
 }
 
-static ctl_table ipv4_route_table[] = {
+static struct ctl_table ipv4_route_table[] = {
 	{
 		.procname	= "gc_thresh",
 		.data		= &ipv4_dst_ops.gc_thresh,
@@ -2609,6 +2666,7 @@ static __net_initdata struct pernet_operations sysctl_route_ops = {
 static __net_init int rt_genid_init(struct net *net)
 {
 	atomic_set(&net->rt_genid, 0);
+	atomic_set(&net->fnhe_genid, 0);
 	get_random_bytes(&net->ipv4.dev_addr_genid,
 			sizeof(net->ipv4.dev_addr_genid));
 	return 0;
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index fa2f63fc453b..b2c123c44d69 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -49,13 +49,13 @@ static void set_local_port_range(int range[2])
 }
 
 /* Validate changes from /proc interface. */
-static int ipv4_local_port_range(ctl_table *table, int write,
+static int ipv4_local_port_range(struct ctl_table *table, int write,
 				 void __user *buffer,
 				 size_t *lenp, loff_t *ppos)
 {
 	int ret;
 	int range[2];
-	ctl_table tmp = {
+	struct ctl_table tmp = {
 		.data = &range,
 		.maxlen = sizeof(range),
 		.mode = table->mode,
@@ -100,7 +100,7 @@ static void set_ping_group_range(struct ctl_table *table, kgid_t low, kgid_t hig
 }
 
 /* Validate changes from /proc interface. */
-static int ipv4_ping_group_range(ctl_table *table, int write,
+static int ipv4_ping_group_range(struct ctl_table *table, int write,
 				 void __user *buffer,
 				 size_t *lenp, loff_t *ppos)
 {
@@ -108,7 +108,7 @@ static int ipv4_ping_group_range(ctl_table *table, int write,
 	int ret;
 	gid_t urange[2];
 	kgid_t low, high;
-	ctl_table tmp = {
+	struct ctl_table tmp = {
 		.data = &urange,
 		.maxlen = sizeof(urange),
 		.mode = table->mode,
@@ -135,11 +135,11 @@ static int ipv4_ping_group_range(ctl_table *table, int write,
 	return ret;
 }
 
-static int proc_tcp_congestion_control(ctl_table *ctl, int write,
+static int proc_tcp_congestion_control(struct ctl_table *ctl, int write,
 				       void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	char val[TCP_CA_NAME_MAX];
-	ctl_table tbl = {
+	struct ctl_table tbl = {
 		.data = val,
 		.maxlen = TCP_CA_NAME_MAX,
 	};
@@ -153,12 +153,12 @@ static int proc_tcp_congestion_control(ctl_table *ctl, int write,
 	return ret;
 }
 
-static int proc_tcp_available_congestion_control(ctl_table *ctl,
+static int proc_tcp_available_congestion_control(struct ctl_table *ctl,
 						 int write,
 						 void __user *buffer, size_t *lenp,
 						 loff_t *ppos)
 {
-	ctl_table tbl = { .maxlen = TCP_CA_BUF_MAX, };
+	struct ctl_table tbl = { .maxlen = TCP_CA_BUF_MAX, };
 	int ret;
 
 	tbl.data = kmalloc(tbl.maxlen, GFP_USER);
@@ -170,12 +170,12 @@ static int proc_tcp_available_congestion_control(ctl_table *ctl,
 	return ret;
 }
 
-static int proc_allowed_congestion_control(ctl_table *ctl,
+static int proc_allowed_congestion_control(struct ctl_table *ctl,
 					   int write,
 					   void __user *buffer, size_t *lenp,
 					   loff_t *ppos)
 {
-	ctl_table tbl = { .maxlen = TCP_CA_BUF_MAX };
+	struct ctl_table tbl = { .maxlen = TCP_CA_BUF_MAX };
 	int ret;
 
 	tbl.data = kmalloc(tbl.maxlen, GFP_USER);
@@ -190,7 +190,7 @@ static int proc_allowed_congestion_control(ctl_table *ctl,
 	return ret;
 }
 
-static int ipv4_tcp_mem(ctl_table *ctl, int write,
+static int ipv4_tcp_mem(struct ctl_table *ctl, int write,
 			   void __user *buffer, size_t *lenp,
 			   loff_t *ppos)
 {
@@ -201,7 +201,7 @@ static int ipv4_tcp_mem(ctl_table *ctl, int write,
 	struct mem_cgroup *memcg;
 #endif
 
-	ctl_table tmp = {
+	struct ctl_table tmp = {
 		.data = &vec,
 		.maxlen = sizeof(vec),
 		.mode = ctl->mode,
@@ -233,10 +233,11 @@ static int ipv4_tcp_mem(ctl_table *ctl, int write,
 	return 0;
 }
 
-static int proc_tcp_fastopen_key(ctl_table *ctl, int write, void __user *buffer,
-				 size_t *lenp, loff_t *ppos)
+static int proc_tcp_fastopen_key(struct ctl_table *ctl, int write,
+				 void __user *buffer, size_t *lenp,
+				 loff_t *ppos)
 {
-	ctl_table tbl = { .maxlen = (TCP_FASTOPEN_KEY_LENGTH * 2 + 10) };
+	struct ctl_table tbl = { .maxlen = (TCP_FASTOPEN_KEY_LENGTH * 2 + 10) };
 	struct tcp_fastopen_context *ctxt;
 	int ret;
 	u32 user_key[4]; /* 16 bytes, matching TCP_FASTOPEN_KEY_LENGTH */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index ab450c099aa4..15cbfa94bd8e 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -279,6 +279,7 @@
279 279
280#include <asm/uaccess.h> 280#include <asm/uaccess.h>
281#include <asm/ioctls.h> 281#include <asm/ioctls.h>
282#include <net/ll_poll.h>
282 283
283int sysctl_tcp_fin_timeout __read_mostly = TCP_FIN_TIMEOUT; 284int sysctl_tcp_fin_timeout __read_mostly = TCP_FIN_TIMEOUT;
284 285
@@ -436,6 +437,8 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
436 struct sock *sk = sock->sk; 437 struct sock *sk = sock->sk;
437 const struct tcp_sock *tp = tcp_sk(sk); 438 const struct tcp_sock *tp = tcp_sk(sk);
438 439
440 sock_rps_record_flow(sk);
441
439 sock_poll_wait(file, sk_sleep(sk), wait); 442 sock_poll_wait(file, sk_sleep(sk), wait);
440 if (sk->sk_state == TCP_LISTEN) 443 if (sk->sk_state == TCP_LISTEN)
441 return inet_csk_listen_poll(sk); 444 return inet_csk_listen_poll(sk);
@@ -1551,6 +1554,10 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
1551 struct sk_buff *skb; 1554 struct sk_buff *skb;
1552 u32 urg_hole = 0; 1555 u32 urg_hole = 0;
1553 1556
1557 if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) &&
1558 (sk->sk_state == TCP_ESTABLISHED))
1559 sk_busy_loop(sk, nonblock);
1560
1554 lock_sock(sk); 1561 lock_sock(sk);
1555 1562
1556 err = -ENOTCONN; 1563 err = -ENOTCONN;
@@ -2875,249 +2882,9 @@ int compat_tcp_getsockopt(struct sock *sk, int level, int optname,
2875EXPORT_SYMBOL(compat_tcp_getsockopt); 2882EXPORT_SYMBOL(compat_tcp_getsockopt);
2876#endif 2883#endif
2877 2884
2878struct sk_buff *tcp_tso_segment(struct sk_buff *skb,
2879 netdev_features_t features)
2880{
2881 struct sk_buff *segs = ERR_PTR(-EINVAL);
2882 struct tcphdr *th;
2883 unsigned int thlen;
2884 unsigned int seq;
2885 __be32 delta;
2886 unsigned int oldlen;
2887 unsigned int mss;
2888 struct sk_buff *gso_skb = skb;
2889 __sum16 newcheck;
2890 bool ooo_okay, copy_destructor;
2891
2892 if (!pskb_may_pull(skb, sizeof(*th)))
2893 goto out;
2894
2895 th = tcp_hdr(skb);
2896 thlen = th->doff * 4;
2897 if (thlen < sizeof(*th))
2898 goto out;
2899
-	if (!pskb_may_pull(skb, thlen))
-		goto out;
-
-	oldlen = (u16)~skb->len;
-	__skb_pull(skb, thlen);
-
-	mss = skb_shinfo(skb)->gso_size;
-	if (unlikely(skb->len <= mss))
-		goto out;
-
-	if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
-		/* Packet is from an untrusted source, reset gso_segs. */
-		int type = skb_shinfo(skb)->gso_type;
-
-		if (unlikely(type &
-			     ~(SKB_GSO_TCPV4 |
-			       SKB_GSO_DODGY |
-			       SKB_GSO_TCP_ECN |
-			       SKB_GSO_TCPV6 |
-			       SKB_GSO_GRE |
-			       SKB_GSO_UDP_TUNNEL |
-			       0) ||
-			     !(type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))))
-			goto out;
-
-		skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(skb->len, mss);
-
-		segs = NULL;
-		goto out;
-	}
-
-	copy_destructor = gso_skb->destructor == tcp_wfree;
-	ooo_okay = gso_skb->ooo_okay;
-	/* All segments but the first should have ooo_okay cleared */
-	skb->ooo_okay = 0;
-
-	segs = skb_segment(skb, features);
-	if (IS_ERR(segs))
-		goto out;
-
-	/* Only first segment might have ooo_okay set */
-	segs->ooo_okay = ooo_okay;
-
-	delta = htonl(oldlen + (thlen + mss));
-
-	skb = segs;
-	th = tcp_hdr(skb);
-	seq = ntohl(th->seq);
-
-	newcheck = ~csum_fold((__force __wsum)((__force u32)th->check +
-					       (__force u32)delta));
-
-	do {
-		th->fin = th->psh = 0;
-		th->check = newcheck;
-
-		if (skb->ip_summed != CHECKSUM_PARTIAL)
-			th->check =
-			     csum_fold(csum_partial(skb_transport_header(skb),
-						    thlen, skb->csum));
-
-		seq += mss;
-		if (copy_destructor) {
-			skb->destructor = gso_skb->destructor;
-			skb->sk = gso_skb->sk;
-			/* {tcp|sock}_wfree() use exact truesize accounting :
-			 * sum(skb->truesize) MUST be exactly be gso_skb->truesize
-			 * So we account mss bytes of 'true size' for each segment.
-			 * The last segment will contain the remaining.
-			 */
-			skb->truesize = mss;
-			gso_skb->truesize -= mss;
-		}
-		skb = skb->next;
-		th = tcp_hdr(skb);
-
-		th->seq = htonl(seq);
-		th->cwr = 0;
-	} while (skb->next);
-
-	/* Following permits TCP Small Queues to work well with GSO :
-	 * The callback to TCP stack will be called at the time last frag
-	 * is freed at TX completion, and not right now when gso_skb
-	 * is freed by GSO engine
-	 */
-	if (copy_destructor) {
-		swap(gso_skb->sk, skb->sk);
-		swap(gso_skb->destructor, skb->destructor);
-		swap(gso_skb->truesize, skb->truesize);
-	}
-
-	delta = htonl(oldlen + (skb->tail - skb->transport_header) +
-		      skb->data_len);
-	th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
-				(__force u32)delta));
-	if (skb->ip_summed != CHECKSUM_PARTIAL)
-		th->check = csum_fold(csum_partial(skb_transport_header(skb),
-						   thlen, skb->csum));
-
-out:
-	return segs;
-}
-EXPORT_SYMBOL(tcp_tso_segment);
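The segmentation path above never rechecksums whole segments: `oldlen = (u16)~skb->len` captures the negated old length, and `delta` plus `csum_fold()` patch the existing `th->check` incrementally, in the style of RFC 1624 one's-complement checksum updates. A user-space sketch of that arithmetic (the helper names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 32-bit accumulator into a 16-bit one's-complement sum. */
static uint16_t csum_fold32(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Naive one's-complement checksum over 16-bit words, for comparison. */
static uint16_t csum_full(const uint16_t *words, int n)
{
	uint32_t sum = 0;
	int i;

	for (i = 0; i < n; i++)
		sum += words[i];
	return (uint16_t)~csum_fold32(sum);
}

/* RFC 1624 eqn. 3, HC' = ~(~HC + ~m + m'): patch an existing checksum
 * when a single 16-bit field changes from old_val to new_val, without
 * re-reading the rest of the data. */
static uint16_t csum_update(uint16_t check, uint16_t old_val, uint16_t new_val)
{
	uint32_t sum = (uint16_t)~check;

	sum += (uint16_t)~old_val;
	sum += new_val;
	return (uint16_t)~csum_fold32(sum);
}
```

The same identity is why the kernel code only needs the old length (`oldlen`) and the new length folded into `delta` to fix up each segment's TCP checksum.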
-
-struct sk_buff **tcp_gro_receive(struct sk_buff **head, struct sk_buff *skb)
-{
-	struct sk_buff **pp = NULL;
-	struct sk_buff *p;
-	struct tcphdr *th;
-	struct tcphdr *th2;
-	unsigned int len;
-	unsigned int thlen;
-	__be32 flags;
-	unsigned int mss = 1;
-	unsigned int hlen;
-	unsigned int off;
-	int flush = 1;
-	int i;
-
-	off = skb_gro_offset(skb);
-	hlen = off + sizeof(*th);
-	th = skb_gro_header_fast(skb, off);
-	if (skb_gro_header_hard(skb, hlen)) {
-		th = skb_gro_header_slow(skb, hlen, off);
-		if (unlikely(!th))
-			goto out;
-	}
-
-	thlen = th->doff * 4;
-	if (thlen < sizeof(*th))
-		goto out;
-
-	hlen = off + thlen;
-	if (skb_gro_header_hard(skb, hlen)) {
-		th = skb_gro_header_slow(skb, hlen, off);
-		if (unlikely(!th))
-			goto out;
-	}
-
-	skb_gro_pull(skb, thlen);
-
-	len = skb_gro_len(skb);
-	flags = tcp_flag_word(th);
-
-	for (; (p = *head); head = &p->next) {
-		if (!NAPI_GRO_CB(p)->same_flow)
-			continue;
-
-		th2 = tcp_hdr(p);
-
-		if (*(u32 *)&th->source ^ *(u32 *)&th2->source) {
-			NAPI_GRO_CB(p)->same_flow = 0;
-			continue;
-		}
-
-		goto found;
-	}
-
-	goto out_check_final;
-
-found:
-	flush = NAPI_GRO_CB(p)->flush;
-	flush |= (__force int)(flags & TCP_FLAG_CWR);
-	flush |= (__force int)((flags ^ tcp_flag_word(th2)) &
-			       ~(TCP_FLAG_CWR | TCP_FLAG_FIN | TCP_FLAG_PSH));
-	flush |= (__force int)(th->ack_seq ^ th2->ack_seq);
-	for (i = sizeof(*th); i < thlen; i += 4)
-		flush |= *(u32 *)((u8 *)th + i) ^
-			 *(u32 *)((u8 *)th2 + i);
-
-	mss = skb_shinfo(p)->gso_size;
-
-	flush |= (len - 1) >= mss;
-	flush |= (ntohl(th2->seq) + skb_gro_len(p)) ^ ntohl(th->seq);
-
-	if (flush || skb_gro_receive(head, skb)) {
-		mss = 1;
-		goto out_check_final;
-	}
-
-	p = *head;
-	th2 = tcp_hdr(p);
-	tcp_flag_word(th2) |= flags & (TCP_FLAG_FIN | TCP_FLAG_PSH);
-
-out_check_final:
-	flush = len < mss;
-	flush |= (__force int)(flags & (TCP_FLAG_URG | TCP_FLAG_PSH |
-					TCP_FLAG_RST | TCP_FLAG_SYN |
-					TCP_FLAG_FIN));
-
-	if (p && (!NAPI_GRO_CB(skb)->same_flow || flush))
-		pp = head;
-
-out:
-	NAPI_GRO_CB(skb)->flush |= flush;
-
-	return pp;
-}
-EXPORT_SYMBOL(tcp_gro_receive);
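In the flow-matching loop above, `*(u32 *)&th->source ^ *(u32 *)&th2->source` compares the adjacent 16-bit source and destination port fields of two TCP headers with one 32-bit XOR instead of two 16-bit compares. A hedged stand-alone illustration of the same trick (using `memcpy` rather than the kernel's pointer cast, to stay strict-aliasing clean; the struct name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Two adjacent 16-bit port fields, laid out as in struct tcphdr. */
struct ports {
	uint16_t source;
	uint16_t dest;
};

/* Compare both ports in a single 32-bit XOR, as tcp_gro_receive()
 * does with &th->source: a nonzero result means the pair differs. */
static int ports_differ(const struct ports *a, const struct ports *b)
{
	uint32_t wa, wb;

	memcpy(&wa, a, sizeof(wa));	/* load source+dest as one word */
	memcpy(&wb, b, sizeof(wb));
	return (wa ^ wb) != 0;
}
```

The XOR-accumulate style (`flush |= x ^ y`) used throughout the function follows the same idea: collect all mismatch bits branchlessly, then test once.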
-
-int tcp_gro_complete(struct sk_buff *skb)
-{
-	struct tcphdr *th = tcp_hdr(skb);
-
-	skb->csum_start = skb_transport_header(skb) - skb->head;
-	skb->csum_offset = offsetof(struct tcphdr, check);
-	skb->ip_summed = CHECKSUM_PARTIAL;
-
-	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
-
-	if (th->cwr)
-		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
-
-	return 0;
-}
-EXPORT_SYMBOL(tcp_gro_complete);
-
 #ifdef CONFIG_TCP_MD5SIG
-static unsigned long tcp_md5sig_users;
-static struct tcp_md5sig_pool __percpu *tcp_md5sig_pool;
-static DEFINE_SPINLOCK(tcp_md5sig_pool_lock);
+static struct tcp_md5sig_pool __percpu *tcp_md5sig_pool __read_mostly;
+static DEFINE_MUTEX(tcp_md5sig_mutex);
 
 static void __tcp_free_md5sig_pool(struct tcp_md5sig_pool __percpu *pool)
 {
@@ -3132,30 +2899,14 @@ static void __tcp_free_md5sig_pool(struct tcp_md5sig_pool __percpu *pool)
 	free_percpu(pool);
 }
 
-void tcp_free_md5sig_pool(void)
-{
-	struct tcp_md5sig_pool __percpu *pool = NULL;
-
-	spin_lock_bh(&tcp_md5sig_pool_lock);
-	if (--tcp_md5sig_users == 0) {
-		pool = tcp_md5sig_pool;
-		tcp_md5sig_pool = NULL;
-	}
-	spin_unlock_bh(&tcp_md5sig_pool_lock);
-	if (pool)
-		__tcp_free_md5sig_pool(pool);
-}
-EXPORT_SYMBOL(tcp_free_md5sig_pool);
-
-static struct tcp_md5sig_pool __percpu *
-__tcp_alloc_md5sig_pool(struct sock *sk)
+static void __tcp_alloc_md5sig_pool(void)
 {
 	int cpu;
 	struct tcp_md5sig_pool __percpu *pool;
 
 	pool = alloc_percpu(struct tcp_md5sig_pool);
 	if (!pool)
-		return NULL;
+		return;
 
 	for_each_possible_cpu(cpu) {
 		struct crypto_hash *hash;
@@ -3166,53 +2917,27 @@ __tcp_alloc_md5sig_pool(struct sock *sk)
 
 		per_cpu_ptr(pool, cpu)->md5_desc.tfm = hash;
 	}
-	return pool;
+	/* before setting tcp_md5sig_pool, we must commit all writes
+	 * to memory. See ACCESS_ONCE() in tcp_get_md5sig_pool()
+	 */
+	smp_wmb();
+	tcp_md5sig_pool = pool;
+	return;
 out_free:
 	__tcp_free_md5sig_pool(pool);
-	return NULL;
 }
 
-struct tcp_md5sig_pool __percpu *tcp_alloc_md5sig_pool(struct sock *sk)
+bool tcp_alloc_md5sig_pool(void)
 {
-	struct tcp_md5sig_pool __percpu *pool;
-	bool alloc = false;
-
-retry:
-	spin_lock_bh(&tcp_md5sig_pool_lock);
-	pool = tcp_md5sig_pool;
-	if (tcp_md5sig_users++ == 0) {
-		alloc = true;
-		spin_unlock_bh(&tcp_md5sig_pool_lock);
-	} else if (!pool) {
-		tcp_md5sig_users--;
-		spin_unlock_bh(&tcp_md5sig_pool_lock);
-		cpu_relax();
-		goto retry;
-	} else
-		spin_unlock_bh(&tcp_md5sig_pool_lock);
-
-	if (alloc) {
-		/* we cannot hold spinlock here because this may sleep. */
-		struct tcp_md5sig_pool __percpu *p;
-
-		p = __tcp_alloc_md5sig_pool(sk);
-		spin_lock_bh(&tcp_md5sig_pool_lock);
-		if (!p) {
-			tcp_md5sig_users--;
-			spin_unlock_bh(&tcp_md5sig_pool_lock);
-			return NULL;
-		}
-		pool = tcp_md5sig_pool;
-		if (pool) {
-			/* oops, it has already been assigned. */
-			spin_unlock_bh(&tcp_md5sig_pool_lock);
-			__tcp_free_md5sig_pool(p);
-		} else {
-			tcp_md5sig_pool = pool = p;
-			spin_unlock_bh(&tcp_md5sig_pool_lock);
-		}
-	}
-	return pool;
+	if (unlikely(!tcp_md5sig_pool)) {
+		mutex_lock(&tcp_md5sig_mutex);
+
+		if (!tcp_md5sig_pool)
+			__tcp_alloc_md5sig_pool();
+
+		mutex_unlock(&tcp_md5sig_mutex);
+	}
+	return tcp_md5sig_pool != NULL;
 }
 EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
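The rewrite above replaces the refcounted `tcp_md5sig_users`/spinlock scheme with allocate-once, never-free: a mutex serializes only the first (slow-path) allocation, while `smp_wmb()` before the pointer publish pairs with the lock-free `ACCESS_ONCE()` read in `tcp_get_md5sig_pool()`. A rough user-space analogue of that double-checked pattern, with C11 release/acquire atomics standing in for the kernel barriers (all names here are hypothetical stand-ins, not kernel API):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Toy stand-in for the per-cpu MD5 pool. */
struct pool {
	int ready;
};

static _Atomic(struct pool *) pool_ptr;	/* plays the role of tcp_md5sig_pool */
static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Mirror of tcp_alloc_md5sig_pool(): fast path is one atomic load; the
 * mutex only guards the one-time allocation, and the pointer is
 * re-checked under the lock before allocating. */
static int pool_alloc(void)
{
	if (!atomic_load_explicit(&pool_ptr, memory_order_acquire)) {
		pthread_mutex_lock(&pool_mutex);
		if (!atomic_load_explicit(&pool_ptr, memory_order_relaxed)) {
			struct pool *p = malloc(sizeof(*p));

			if (p) {
				p->ready = 1;
				/* release store = smp_wmb() + pointer
				 * publish in __tcp_alloc_md5sig_pool() */
				atomic_store_explicit(&pool_ptr, p,
						      memory_order_release);
			}
		}
		pthread_mutex_unlock(&pool_mutex);
	}
	return atomic_load_explicit(&pool_ptr, memory_order_acquire) != NULL;
}

/* Mirror of tcp_get_md5sig_pool(): a single acquire load, no lock. */
static struct pool *pool_get(void)
{
	return atomic_load_explicit(&pool_ptr, memory_order_acquire);
}
```

The design trade-off is the same as in the patch: the pool is never freed once created, which removes the get/put refcounting (`tcp_put_md5sig_pool()` disappears below) at the cost of keeping the allocation around for the lifetime of the process.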
 
@@ -3229,28 +2954,15 @@ struct tcp_md5sig_pool *tcp_get_md5sig_pool(void)
 	struct tcp_md5sig_pool __percpu *p;
 
 	local_bh_disable();
-
-	spin_lock(&tcp_md5sig_pool_lock);
-	p = tcp_md5sig_pool;
-	if (p)
-		tcp_md5sig_users++;
-	spin_unlock(&tcp_md5sig_pool_lock);
-
-	if (p)
-		return this_cpu_ptr(p);
+	p = ACCESS_ONCE(tcp_md5sig_pool);
+	if (p)
+		return __this_cpu_ptr(p);
 
 	local_bh_enable();
 	return NULL;
 }
 EXPORT_SYMBOL(tcp_get_md5sig_pool);
 
-void tcp_put_md5sig_pool(void)
-{
-	local_bh_enable();
-	tcp_free_md5sig_pool();
-}
-EXPORT_SYMBOL(tcp_put_md5sig_pool);
-
 int tcp_md5_hash_header(struct tcp_md5sig_pool *hp,
 			const struct tcphdr *th)
 {
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 9c6225780bd5..28af45abe062 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -347,24 +347,13 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
 }
 
 /* 3. Tuning rcvbuf, when connection enters established state. */
-
 static void tcp_fixup_rcvbuf(struct sock *sk)
 {
 	u32 mss = tcp_sk(sk)->advmss;
-	u32 icwnd = TCP_DEFAULT_INIT_RCVWND;
 	int rcvmem;
 
-	/* Limit to 10 segments if mss <= 1460,
-	 * or 14600/mss segments, with a minimum of two segments.
-	 */
-	if (mss > 1460)
-		icwnd = max_t(u32, (1460 * TCP_DEFAULT_INIT_RCVWND) / mss, 2);
-
-	rcvmem = SKB_TRUESIZE(mss + MAX_TCP_HEADER);
-	while (tcp_win_from_space(rcvmem) < mss)
-		rcvmem += 128;
-
-	rcvmem *= icwnd;
+	rcvmem = 2 * SKB_TRUESIZE(mss + MAX_TCP_HEADER) *
+		 tcp_default_init_rwnd(mss);
 
 	if (sk->sk_rcvbuf < rcvmem)
 		sk->sk_rcvbuf = min(rcvmem, sysctl_tcp_rmem[2]);
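The removed comment documents what the factored-out `tcp_default_init_rwnd(mss)` helper is expected to return: 10 segments when mss <= 1460, otherwise 14600/mss segments with a floor of two. A small sketch of that arithmetic (assuming the helper behaves exactly as the deleted comment describes):

```c
#include <assert.h>
#include <stdint.h>

#define TCP_INIT_CWND 10	/* RFC 6928 initial window, in segments */

/* Segments for the default initial receive window: 10 for a standard
 * Ethernet-sized mss (<= 1460 bytes), else scale 14600 bytes down by
 * the larger mss, never going below two segments. */
static uint32_t default_init_rwnd(uint32_t mss)
{
	uint32_t rwnd = TCP_INIT_CWND;

	if (mss > 1460) {
		rwnd = (1460 * TCP_INIT_CWND) / mss;
		if (rwnd < 2)
			rwnd = 2;
	}
	return rwnd;
}
```

So the byte budget stays roughly constant (~14600 bytes) regardless of mss, which is what lets the new one-liner `2 * SKB_TRUESIZE(...) * tcp_default_init_rwnd(mss)` replace the old truesize-growing loop.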
@@ -1257,8 +1246,6 @@ static bool tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
 
 	if (skb == tp->retransmit_skb_hint)
 		tp->retransmit_skb_hint = prev;
-	if (skb == tp->scoreboard_skb_hint)
-		tp->scoreboard_skb_hint = prev;
 	if (skb == tp->lost_skb_hint) {
 		tp->lost_skb_hint = prev;
 		tp->lost_cnt_hint -= tcp_skb_pcount(prev);
@@ -1966,20 +1953,6 @@ static bool tcp_pause_early_retransmit(struct sock *sk, int flag)
 	return true;
 }
 
-static inline int tcp_skb_timedout(const struct sock *sk,
-				   const struct sk_buff *skb)
-{
-	return tcp_time_stamp - TCP_SKB_CB(skb)->when > inet_csk(sk)->icsk_rto;
-}
-
-static inline int tcp_head_timedout(const struct sock *sk)
-{
-	const struct tcp_sock *tp = tcp_sk(sk);
-
-	return tp->packets_out &&
-	       tcp_skb_timedout(sk, tcp_write_queue_head(sk));
-}
-
 /* Linux NewReno/SACK/FACK/ECN state machine.
  * --------------------------------------
  *
@@ -2086,12 +2059,6 @@ static bool tcp_time_to_recover(struct sock *sk, int flag)
 	if (tcp_dupack_heuristics(tp) > tp->reordering)
 		return true;
 
-	/* Trick#3 : when we use RFC2988 timer restart, fast
-	 * retransmit can be triggered by timeout of queue head.
-	 */
-	if (tcp_is_fack(tp) && tcp_head_timedout(sk))
-		return true;
-
 	/* Trick#4: It is still not OK... But will it be useful to delay
 	 * recovery more?
 	 */
@@ -2128,44 +2095,6 @@ static bool tcp_time_to_recover(struct sock *sk, int flag)
 	return false;
 }
 
-/* New heuristics: it is possible only after we switched to restart timer
- * each time when something is ACKed. Hence, we can detect timed out packets
- * during fast retransmit without falling to slow start.
- *
- * Usefulness of this as is very questionable, since we should know which of
- * the segments is the next to timeout which is relatively expensive to find
- * in general case unless we add some data structure just for that. The
- * current approach certainly won't find the right one too often and when it
- * finally does find _something_ it usually marks large part of the window
- * right away (because a retransmission with a larger timestamp blocks the
- * loop from advancing). -ij
- */
-static void tcp_timeout_skbs(struct sock *sk)
-{
-	struct tcp_sock *tp = tcp_sk(sk);
-	struct sk_buff *skb;
-
-	if (!tcp_is_fack(tp) || !tcp_head_timedout(sk))
-		return;
-
-	skb = tp->scoreboard_skb_hint;
-	if (tp->scoreboard_skb_hint == NULL)
-		skb = tcp_write_queue_head(sk);
-
-	tcp_for_write_queue_from(skb, sk) {
-		if (skb == tcp_send_head(sk))
-			break;
-		if (!tcp_skb_timedout(sk, skb))
-			break;
-
-		tcp_skb_mark_lost(tp, skb);
-	}
-
-	tp->scoreboard_skb_hint = skb;
-
-	tcp_verify_left_out(tp);
-}
-
 /* Detect loss in event "A" above by marking head of queue up as lost.
  * For FACK or non-SACK(Reno) senders, the first "packets" number of segments
  * are considered lost. For RFC3517 SACK, a segment is considered lost if it
@@ -2251,8 +2180,6 @@ static void tcp_update_scoreboard(struct sock *sk, int fast_rexmit)
 	else if (fast_rexmit)
 		tcp_mark_head_lost(sk, 1, 1);
 	}
-
-	tcp_timeout_skbs(sk);
 }
 
 /* CWND moderation, preventing bursts due to too big ACKs
@@ -2307,10 +2234,22 @@ static void DBGUNDO(struct sock *sk, const char *msg)
 #define DBGUNDO(x...) do { } while (0)
 #endif
 
-static void tcp_undo_cwr(struct sock *sk, const bool undo_ssthresh)
+static void tcp_undo_cwnd_reduction(struct sock *sk, bool unmark_loss)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
+	if (unmark_loss) {
+		struct sk_buff *skb;
+
+		tcp_for_write_queue(skb, sk) {
+			if (skb == tcp_send_head(sk))
+				break;
+			TCP_SKB_CB(skb)->sacked &= ~TCPCB_LOST;
+		}
+		tp->lost_out = 0;
+		tcp_clear_all_retrans_hints(tp);
+	}
+
 	if (tp->prior_ssthresh) {
 		const struct inet_connection_sock *icsk = inet_csk(sk);
 
@@ -2319,7 +2258,7 @@ static void tcp_undo_cwr(struct sock *sk, const bool undo_ssthresh)
 		else
 			tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh << 1);
 
-		if (undo_ssthresh && tp->prior_ssthresh > tp->snd_ssthresh) {
+		if (tp->prior_ssthresh > tp->snd_ssthresh) {
 			tp->snd_ssthresh = tp->prior_ssthresh;
 			TCP_ECN_withdraw_cwr(tp);
 		}
@@ -2327,6 +2266,7 @@ static void tcp_undo_cwr(struct sock *sk, const bool undo_ssthresh)
 		tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh);
 	}
 	tp->snd_cwnd_stamp = tcp_time_stamp;
+	tp->undo_marker = 0;
 }
 
 static inline bool tcp_may_undo(const struct tcp_sock *tp)
@@ -2346,14 +2286,13 @@ static bool tcp_try_undo_recovery(struct sock *sk)
 		 * or our original transmission succeeded.
 		 */
 		DBGUNDO(sk, inet_csk(sk)->icsk_ca_state == TCP_CA_Loss ? "loss" : "retrans");
-		tcp_undo_cwr(sk, true);
+		tcp_undo_cwnd_reduction(sk, false);
 		if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss)
 			mib_idx = LINUX_MIB_TCPLOSSUNDO;
 		else
 			mib_idx = LINUX_MIB_TCPFULLUNDO;
 
 		NET_INC_STATS_BH(sock_net(sk), mib_idx);
-		tp->undo_marker = 0;
 	}
 	if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
 		/* Hold old state until something *above* high_seq
@@ -2367,16 +2306,17 @@ static bool tcp_try_undo_recovery(struct sock *sk)
 }
 
 /* Try to undo cwnd reduction, because D-SACKs acked all retransmitted data */
-static void tcp_try_undo_dsack(struct sock *sk)
+static bool tcp_try_undo_dsack(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
 	if (tp->undo_marker && !tp->undo_retrans) {
 		DBGUNDO(sk, "D-SACK");
-		tcp_undo_cwr(sk, true);
-		tp->undo_marker = 0;
+		tcp_undo_cwnd_reduction(sk, false);
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPDSACKUNDO);
+		return true;
 	}
+	return false;
 }
 
 /* We can clear retrans_stamp when there are no retransmissions in the
@@ -2408,60 +2348,20 @@ static bool tcp_any_retrans_done(const struct sock *sk)
 	return false;
 }
 
-/* Undo during fast recovery after partial ACK. */
-
-static int tcp_try_undo_partial(struct sock *sk, int acked)
-{
-	struct tcp_sock *tp = tcp_sk(sk);
-	/* Partial ACK arrived. Force Hoe's retransmit. */
-	int failed = tcp_is_reno(tp) || (tcp_fackets_out(tp) > tp->reordering);
-
-	if (tcp_may_undo(tp)) {
-		/* Plain luck! Hole if filled with delayed
-		 * packet, rather than with a retransmit.
-		 */
-		if (!tcp_any_retrans_done(sk))
-			tp->retrans_stamp = 0;
-
-		tcp_update_reordering(sk, tcp_fackets_out(tp) + acked, 1);
-
-		DBGUNDO(sk, "Hoe");
-		tcp_undo_cwr(sk, false);
-		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
-
-		/* So... Do not make Hoe's retransmit yet.
-		 * If the first packet was delayed, the rest
-		 * ones are most probably delayed as well.
-		 */
-		failed = 0;
-	}
-	return failed;
-}
-
 /* Undo during loss recovery after partial ACK or using F-RTO. */
 static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
 	if (frto_undo || tcp_may_undo(tp)) {
-		struct sk_buff *skb;
-		tcp_for_write_queue(skb, sk) {
-			if (skb == tcp_send_head(sk))
-				break;
-			TCP_SKB_CB(skb)->sacked &= ~TCPCB_LOST;
-		}
-
-		tcp_clear_all_retrans_hints(tp);
+		tcp_undo_cwnd_reduction(sk, true);
 
 		DBGUNDO(sk, "partial loss");
-		tp->lost_out = 0;
-		tcp_undo_cwr(sk, true);
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPLOSSUNDO);
 		if (frto_undo)
 			NET_INC_STATS_BH(sock_net(sk),
 					 LINUX_MIB_TCPSPURIOUSRTOS);
 		inet_csk(sk)->icsk_retransmits = 0;
-		tp->undo_marker = 0;
 		if (frto_undo || tcp_is_sack(tp))
 			tcp_set_ca_state(sk, TCP_CA_Open);
 		return true;
@@ -2494,12 +2394,14 @@ static void tcp_init_cwnd_reduction(struct sock *sk, const bool set_ssthresh)
 	TCP_ECN_queue_cwr(tp);
 }
 
-static void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked,
+static void tcp_cwnd_reduction(struct sock *sk, const int prior_unsacked,
 			       int fast_rexmit)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	int sndcnt = 0;
 	int delta = tp->snd_ssthresh - tcp_packets_in_flight(tp);
+	int newly_acked_sacked = prior_unsacked -
+				 (tp->packets_out - tp->sacked_out);
 
 	tp->prr_delivered += newly_acked_sacked;
 	if (tcp_packets_in_flight(tp) > tp->snd_ssthresh) {
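With this change the callers pass a pre-ACK snapshot (`prior_unsacked = packets_out - sacked_out`) and `tcp_cwnd_reduction()` recovers the per-ACK delivery count from the current counters itself, instead of every caller recomputing it. A toy check of that bookkeeping (the helper name is illustrative only):

```c
#include <assert.h>

/* newly_acked_sacked = how many packets left "unsacked in flight"
 * between the snapshot and now: prior_unsacked minus the current
 * (packets_out - sacked_out), exactly as the new in-function line
 * in tcp_cwnd_reduction() computes it. */
static int newly_acked_sacked(int prior_unsacked, int packets_out,
			      int sacked_out)
{
	return prior_unsacked - (packets_out - sacked_out);
}
```

For example, with 10 packets out and 2 SACKed before the ACK (snapshot 8), an ACK that cumulatively acks 3 packets and SACKs 1 more leaves 7 out / 3 SACKed, so 8 - (7 - 3) = 4 packets were newly delivered.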
@@ -2556,7 +2458,7 @@ static void tcp_try_keep_open(struct sock *sk)
 	}
 }
 
-static void tcp_try_to_open(struct sock *sk, int flag, int newly_acked_sacked)
+static void tcp_try_to_open(struct sock *sk, int flag, const int prior_unsacked)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
@@ -2573,7 +2475,7 @@ static void tcp_try_to_open(struct sock *sk, int flag, int newly_acked_sacked)
 		if (inet_csk(sk)->icsk_ca_state != TCP_CA_Open)
 			tcp_moderate_cwnd(tp);
 	} else {
-		tcp_cwnd_reduction(sk, newly_acked_sacked, 0);
+		tcp_cwnd_reduction(sk, prior_unsacked, 0);
 	}
 }
 
@@ -2731,6 +2633,40 @@ static void tcp_process_loss(struct sock *sk, int flag, bool is_dupack)
 	tcp_xmit_retransmit_queue(sk);
 }
 
+/* Undo during fast recovery after partial ACK. */
+static bool tcp_try_undo_partial(struct sock *sk, const int acked,
+				 const int prior_unsacked)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (tp->undo_marker && tcp_packet_delayed(tp)) {
+		/* Plain luck! Hole if filled with delayed
+		 * packet, rather than with a retransmit.
+		 */
+		tcp_update_reordering(sk, tcp_fackets_out(tp) + acked, 1);
+
+		/* We are getting evidence that the reordering degree is higher
+		 * than we realized. If there are no retransmits out then we
+		 * can undo. Otherwise we clock out new packets but do not
+		 * mark more packets lost or retransmit more.
+		 */
+		if (tp->retrans_out) {
+			tcp_cwnd_reduction(sk, prior_unsacked, 0);
+			return true;
+		}
+
+		if (!tcp_any_retrans_done(sk))
+			tp->retrans_stamp = 0;
+
+		DBGUNDO(sk, "partial recovery");
+		tcp_undo_cwnd_reduction(sk, true);
+		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
+		tcp_try_keep_open(sk);
+		return true;
+	}
+	return false;
+}
+
 /* Process an event, which can update packets-in-flight not trivially.
  * Main goal of this function is to calculate new estimate for left_out,
  * taking into account both packets sitting in receiver's buffer and
@@ -2742,15 +2678,14 @@ static void tcp_process_loss(struct sock *sk, int flag, bool is_dupack)
  * It does _not_ decide what to send, it is made in function
  * tcp_xmit_retransmit_queue().
  */
-static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked,
-				  int prior_sacked, int prior_packets,
+static void tcp_fastretrans_alert(struct sock *sk, const int acked,
+				  const int prior_unsacked,
 				  bool is_dupack, int flag)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
-	int do_lost = is_dupack || ((flag & FLAG_DATA_SACKED) &&
-				    (tcp_fackets_out(tp) > tp->reordering));
-	int newly_acked_sacked = 0;
+	bool do_lost = is_dupack || ((flag & FLAG_DATA_SACKED) &&
+				     (tcp_fackets_out(tp) > tp->reordering));
 	int fast_rexmit = 0;
 
 	if (WARN_ON(!tp->packets_out && tp->sacked_out))
@@ -2802,10 +2737,17 @@ static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked,
 		if (!(flag & FLAG_SND_UNA_ADVANCED)) {
 			if (tcp_is_reno(tp) && is_dupack)
 				tcp_add_reno_sack(sk);
-		} else
-			do_lost = tcp_try_undo_partial(sk, pkts_acked);
-		newly_acked_sacked = prior_packets - tp->packets_out +
-				     tp->sacked_out - prior_sacked;
+		} else {
+			if (tcp_try_undo_partial(sk, acked, prior_unsacked))
+				return;
+			/* Partial ACK arrived. Force fast retransmit. */
+			do_lost = tcp_is_reno(tp) ||
+				  tcp_fackets_out(tp) > tp->reordering;
+		}
+		if (tcp_try_undo_dsack(sk)) {
+			tcp_try_keep_open(sk);
+			return;
+		}
 		break;
 	case TCP_CA_Loss:
 		tcp_process_loss(sk, flag, is_dupack);
@@ -2819,14 +2761,12 @@ static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked,
 			if (is_dupack)
 				tcp_add_reno_sack(sk);
 		}
-		newly_acked_sacked = prior_packets - tp->packets_out +
-				     tp->sacked_out - prior_sacked;
 
 		if (icsk->icsk_ca_state <= TCP_CA_Disorder)
 			tcp_try_undo_dsack(sk);
 
 		if (!tcp_time_to_recover(sk, flag)) {
-			tcp_try_to_open(sk, flag, newly_acked_sacked);
+			tcp_try_to_open(sk, flag, prior_unsacked);
 			return;
 		}
 
@@ -2846,9 +2786,9 @@ static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked,
 		fast_rexmit = 1;
 	}
 
-	if (do_lost || (tcp_is_fack(tp) && tcp_head_timedout(sk)))
+	if (do_lost)
 		tcp_update_scoreboard(sk, fast_rexmit);
-	tcp_cwnd_reduction(sk, newly_acked_sacked, fast_rexmit);
+	tcp_cwnd_reduction(sk, prior_unsacked, fast_rexmit);
 	tcp_xmit_retransmit_queue(sk);
 }
 
@@ -3079,7 +3019,6 @@ static int tcp_clean_rtx_queue(struct sock *sk, int prior_fackets,
 
 		tcp_unlink_write_queue(skb, sk);
 		sk_wmem_free_skb(sk, skb);
-		tp->scoreboard_skb_hint = NULL;
 		if (skb == tp->retransmit_skb_hint)
 			tp->retransmit_skb_hint = NULL;
 		if (skb == tp->lost_skb_hint)
@@ -3333,9 +3272,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	u32 prior_in_flight;
 	u32 prior_fackets;
 	int prior_packets = tp->packets_out;
-	int prior_sacked = tp->sacked_out;
-	int pkts_acked = 0;
-	int previous_packets_out = 0;
+	const int prior_unsacked = tp->packets_out - tp->sacked_out;
+	int acked = 0; /* Number of packets newly acked */
 
 	/* If the ack is older than previous acks
 	 * then we can probably ignore it.
@@ -3410,18 +3348,17 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 		goto no_queue;
 
 	/* See if we can take anything off of the retransmit queue. */
-	previous_packets_out = tp->packets_out;
+	acked = tp->packets_out;
 	flag |= tcp_clean_rtx_queue(sk, prior_fackets, prior_snd_una);
-
-	pkts_acked = previous_packets_out - tp->packets_out;
+	acked -= tp->packets_out;
 
 	if (tcp_ack_is_dubious(sk, flag)) {
 		/* Advance CWND, if state allows this. */
 		if ((flag & FLAG_DATA_ACKED) && tcp_may_raise_cwnd(sk, flag))
 			tcp_cong_avoid(sk, ack, prior_in_flight);
 		is_dupack = !(flag & (FLAG_SND_UNA_ADVANCED | FLAG_NOT_DUP));
-		tcp_fastretrans_alert(sk, pkts_acked, prior_sacked,
-				      prior_packets, is_dupack, flag);
+		tcp_fastretrans_alert(sk, acked, prior_unsacked,
+				      is_dupack, flag);
 	} else {
 		if (flag & FLAG_DATA_ACKED)
 			tcp_cong_avoid(sk, ack, prior_in_flight);
@@ -3443,8 +3380,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 no_queue:
 	/* If data was DSACKed, see if we can undo a cwnd reduction. */
 	if (flag & FLAG_DSACKING_ACK)
-		tcp_fastretrans_alert(sk, pkts_acked, prior_sacked,
-				      prior_packets, is_dupack, flag);
+		tcp_fastretrans_alert(sk, acked, prior_unsacked,
+				      is_dupack, flag);
 	/* If this ack opens up a zero window, clear backoff. It was
 	 * being used to time the probes, and is probably far higher than
 	 * it needs to be for normal retransmission.
@@ -3466,8 +3403,8 @@ old_ack:
 	 */
 	if (TCP_SKB_CB(skb)->sacked) {
 		flag |= tcp_sacktag_write_queue(sk, skb, prior_snd_una);
-		tcp_fastretrans_alert(sk, pkts_acked, prior_sacked,
-				      prior_packets, is_dupack, flag);
+		tcp_fastretrans_alert(sk, acked, prior_unsacked,
+				      is_dupack, flag);
 	}
 
 	SOCK_DEBUG(sk, "Ack %u before %u:%u\n", ack, tp->snd_una, tp->snd_nxt);
@@ -3780,6 +3717,7 @@ void tcp_reset(struct sock *sk)
 static void tcp_fin(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
+	const struct dst_entry *dst;
 
 	inet_csk_schedule_ack(sk);
 
@@ -3791,7 +3729,9 @@ static void tcp_fin(struct sock *sk)
 	case TCP_ESTABLISHED:
 		/* Move to CLOSE_WAIT */
 		tcp_set_state(sk, TCP_CLOSE_WAIT);
-		inet_csk(sk)->icsk_ack.pingpong = 1;
+		dst = __sk_dst_get(sk);
+		if (!dst || !dst_metric(dst, RTAX_QUICKACK))
+			inet_csk(sk)->icsk_ack.pingpong = 1;
 		break;
 
 	case TCP_CLOSE_WAIT:
@@ -5601,6 +5541,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
5601 struct inet_connection_sock *icsk = inet_csk(sk); 5541 struct inet_connection_sock *icsk = inet_csk(sk);
5602 struct request_sock *req; 5542 struct request_sock *req;
5603 int queued = 0; 5543 int queued = 0;
5544 bool acceptable;
5604 5545
5605 tp->rx_opt.saw_tstamp = 0; 5546 tp->rx_opt.saw_tstamp = 0;
5606 5547
@@ -5671,157 +5612,147 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
5671 return 0; 5612 return 0;
5672 5613
5673 /* step 5: check the ACK field */ 5614 /* step 5: check the ACK field */
5674 if (true) { 5615 acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH |
5675 int acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH | 5616 FLAG_UPDATE_TS_RECENT) > 0;
5676 FLAG_UPDATE_TS_RECENT) > 0;
5677
5678 switch (sk->sk_state) {
5679 case TCP_SYN_RECV:
5680 if (acceptable) {
5681 /* Once we leave TCP_SYN_RECV, we no longer
5682 * need req so release it.
5683 */
5684 if (req) {
5685 tcp_synack_rtt_meas(sk, req);
5686 tp->total_retrans = req->num_retrans;
5687
5688 reqsk_fastopen_remove(sk, req, false);
5689 } else {
5690 /* Make sure socket is routed, for
5691 * correct metrics.
5692 */
5693 icsk->icsk_af_ops->rebuild_header(sk);
5694 tcp_init_congestion_control(sk);
5695 5617
5696 tcp_mtup_init(sk); 5618 switch (sk->sk_state) {
5697 tcp_init_buffer_space(sk); 5619 case TCP_SYN_RECV:
5698 tp->copied_seq = tp->rcv_nxt; 5620 if (!acceptable)
5699 } 5621 return 1;
5700 smp_mb();
5701 tcp_set_state(sk, TCP_ESTABLISHED);
5702 sk->sk_state_change(sk);
5703
5704 /* Note, that this wakeup is only for marginal
5705 * crossed SYN case. Passively open sockets
5706 * are not waked up, because sk->sk_sleep ==
5707 * NULL and sk->sk_socket == NULL.
5708 */
5709 if (sk->sk_socket)
5710 sk_wake_async(sk,
5711 SOCK_WAKE_IO, POLL_OUT);
5712
5713 tp->snd_una = TCP_SKB_CB(skb)->ack_seq;
5714 tp->snd_wnd = ntohs(th->window) <<
5715 tp->rx_opt.snd_wscale;
5716 tcp_init_wl(tp, TCP_SKB_CB(skb)->seq);
5717
5718 if (tp->rx_opt.tstamp_ok)
5719 tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
5720
5721 if (req) {
5722 /* Re-arm the timer because data may
5723 * have been sent out. This is similar
5724 * to the regular data transmission case
5725 * when new data has just been ack'ed.
5726 *
5727 * (TFO) - we could try to be more
5728 * aggressive and retranmitting any data
5729 * sooner based on when they were sent
5730 * out.
5731 */
5732 tcp_rearm_rto(sk);
5733 } else
5734 tcp_init_metrics(sk);
5735 5622
5736 /* Prevent spurious tcp_cwnd_restart() on 5623 /* Once we leave TCP_SYN_RECV, we no longer need req
5737 * first data packet. 5624 * so release it.
5738 */ 5625 */
5739 tp->lsndtime = tcp_time_stamp; 5626 if (req) {
5627 tcp_synack_rtt_meas(sk, req);
5628 tp->total_retrans = req->num_retrans;
5740 5629
5741 tcp_initialize_rcv_mss(sk); 5630 reqsk_fastopen_remove(sk, req, false);
5742 tcp_fast_path_on(tp); 5631 } else {
5743 } else { 5632 /* Make sure socket is routed, for correct metrics. */
5744 return 1; 5633 icsk->icsk_af_ops->rebuild_header(sk);
5745 } 5634 tcp_init_congestion_control(sk);
5746 break; 5635
5636 tcp_mtup_init(sk);
5637 tcp_init_buffer_space(sk);
5638 tp->copied_seq = tp->rcv_nxt;
5639 }
5640 smp_mb();
5641 tcp_set_state(sk, TCP_ESTABLISHED);
5642 sk->sk_state_change(sk);
5643
5644 /* Note, that this wakeup is only for marginal crossed SYN case.
5645 * Passively open sockets are not waked up, because
5646 * sk->sk_sleep == NULL and sk->sk_socket == NULL.
5647 */
5648 if (sk->sk_socket)
5649 sk_wake_async(sk, SOCK_WAKE_IO, POLL_OUT);
5650
5651 tp->snd_una = TCP_SKB_CB(skb)->ack_seq;
5652 tp->snd_wnd = ntohs(th->window) << tp->rx_opt.snd_wscale;
5653 tcp_init_wl(tp, TCP_SKB_CB(skb)->seq);
5747 5654
5748 case TCP_FIN_WAIT1: 5655 if (tp->rx_opt.tstamp_ok)
5749 /* If we enter the TCP_FIN_WAIT1 state and we are a 5656 tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
5750 * Fast Open socket and this is the first acceptable 5657
5751 * ACK we have received, this would have acknowledged 5658 if (req) {
5752 * our SYNACK so stop the SYNACK timer. 5659 /* Re-arm the timer because data may have been sent out.
5660 * This is similar to the regular data transmission case
5661 * when new data has just been ack'ed.
5662 *
5663 * (TFO) - we could try to be more aggressive and
5664 * retransmitting any data sooner based on when they
5665 * are sent out.
5753 */ 5666 */
5754 if (req != NULL) { 5667 tcp_rearm_rto(sk);
5755 /* Return RST if ack_seq is invalid. 5668 } else
5756 * Note that RFC793 only says to generate a 5669 tcp_init_metrics(sk);
5757 * DUPACK for it but for TCP Fast Open it seems
5758 * better to treat this case like TCP_SYN_RECV
5759 * above.
5760 */
5761 if (!acceptable)
5762 return 1;
5763 /* We no longer need the request sock. */
5764 reqsk_fastopen_remove(sk, req, false);
5765 tcp_rearm_rto(sk);
5766 }
5767 if (tp->snd_una == tp->write_seq) {
5768 struct dst_entry *dst;
5769
5770 tcp_set_state(sk, TCP_FIN_WAIT2);
5771 sk->sk_shutdown |= SEND_SHUTDOWN;
5772
5773 dst = __sk_dst_get(sk);
5774 if (dst)
5775 dst_confirm(dst);
5776
5777 if (!sock_flag(sk, SOCK_DEAD))
5778 /* Wake up lingering close() */
5779 sk->sk_state_change(sk);
5780 else {
5781 int tmo;
5782
5783 if (tp->linger2 < 0 ||
5784 (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
5785 after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt))) {
5786 tcp_done(sk);
5787 NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
5788 return 1;
5789 }
5790 5670
5791 tmo = tcp_fin_time(sk); 5671 /* Prevent spurious tcp_cwnd_restart() on first data packet */
5792 if (tmo > TCP_TIMEWAIT_LEN) { 5672 tp->lsndtime = tcp_time_stamp;
5793 inet_csk_reset_keepalive_timer(sk, tmo - TCP_TIMEWAIT_LEN);
5794 } else if (th->fin || sock_owned_by_user(sk)) {
5795 /* Bad case. We could lose such FIN otherwise.
5796 * It is not a big problem, but it looks confusing
5797 * and not so rare event. We still can lose it now,
5798 * if it spins in bh_lock_sock(), but it is really
5799 * marginal case.
5800 */
5801 inet_csk_reset_keepalive_timer(sk, tmo);
5802 } else {
5803 tcp_time_wait(sk, TCP_FIN_WAIT2, tmo);
5804 goto discard;
5805 }
5806 }
5807 }
5808 break;
5809 5673
5810 case TCP_CLOSING: 5674 tcp_initialize_rcv_mss(sk);
5811 if (tp->snd_una == tp->write_seq) { 5675 tcp_fast_path_on(tp);
5812 tcp_time_wait(sk, TCP_TIME_WAIT, 0); 5676 break;
5813 goto discard; 5677
5814 } 5678 case TCP_FIN_WAIT1: {
5679 struct dst_entry *dst;
5680 int tmo;
5681
5682 /* If we enter the TCP_FIN_WAIT1 state and we are a
5683 * Fast Open socket and this is the first acceptable
5684 * ACK we have received, this would have acknowledged
5685 * our SYNACK so stop the SYNACK timer.
5686 */
5687 if (req != NULL) {
5688 /* Return RST if ack_seq is invalid.
5689 * Note that RFC793 only says to generate a
5690 * DUPACK for it but for TCP Fast Open it seems
5691 * better to treat this case like TCP_SYN_RECV
5692 * above.
5693 */
5694 if (!acceptable)
5695 return 1;
5696 /* We no longer need the request sock. */
5697 reqsk_fastopen_remove(sk, req, false);
5698 tcp_rearm_rto(sk);
5699 }
5700 if (tp->snd_una != tp->write_seq)
5815 break; 5701 break;
5816 5702
5817 case TCP_LAST_ACK: 5703 tcp_set_state(sk, TCP_FIN_WAIT2);
5818 if (tp->snd_una == tp->write_seq) { 5704 sk->sk_shutdown |= SEND_SHUTDOWN;
5819 tcp_update_metrics(sk); 5705
5820 tcp_done(sk); 5706 dst = __sk_dst_get(sk);
5821 goto discard; 5707 if (dst)
5822 } 5708 dst_confirm(dst);
5709
5710 if (!sock_flag(sk, SOCK_DEAD)) {
5711 /* Wake up lingering close() */
5712 sk->sk_state_change(sk);
5823 break; 5713 break;
5824 } 5714 }
5715
5716 if (tp->linger2 < 0 ||
5717 (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
5718 after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt))) {
5719 tcp_done(sk);
5720 NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
5721 return 1;
5722 }
5723
5724 tmo = tcp_fin_time(sk);
5725 if (tmo > TCP_TIMEWAIT_LEN) {
5726 inet_csk_reset_keepalive_timer(sk, tmo - TCP_TIMEWAIT_LEN);
5727 } else if (th->fin || sock_owned_by_user(sk)) {
5728 /* Bad case. We could lose such FIN otherwise.
5729 * It is not a big problem, but it looks confusing
5730 * and not so rare event. We still can lose it now,
5731 * if it spins in bh_lock_sock(), but it is really
5732 * marginal case.
5733 */
5734 inet_csk_reset_keepalive_timer(sk, tmo);
5735 } else {
5736 tcp_time_wait(sk, TCP_FIN_WAIT2, tmo);
5737 goto discard;
5738 }
5739 break;
5740 }
5741
5742 case TCP_CLOSING:
5743 if (tp->snd_una == tp->write_seq) {
5744 tcp_time_wait(sk, TCP_TIME_WAIT, 0);
5745 goto discard;
5746 }
5747 break;
5748
5749 case TCP_LAST_ACK:
5750 if (tp->snd_una == tp->write_seq) {
5751 tcp_update_metrics(sk);
5752 tcp_done(sk);
5753 goto discard;
5754 }
5755 break;
5825 } 5756 }
5826 5757
5827 /* step 6: check the URG bit */ 5758 /* step 6: check the URG bit */
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 7999fc55c83b..35675e46aff8 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -75,6 +75,7 @@
75#include <net/netdma.h> 75#include <net/netdma.h>
76#include <net/secure_seq.h> 76#include <net/secure_seq.h>
77#include <net/tcp_memcontrol.h> 77#include <net/tcp_memcontrol.h>
78#include <net/ll_poll.h>
78 79
79#include <linux/inet.h> 80#include <linux/inet.h>
80#include <linux/ipv6.h> 81#include <linux/ipv6.h>
@@ -545,8 +546,7 @@ out:
545 sock_put(sk); 546 sock_put(sk);
546} 547}
547 548
548static void __tcp_v4_send_check(struct sk_buff *skb, 549void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr)
549 __be32 saddr, __be32 daddr)
550{ 550{
551 struct tcphdr *th = tcp_hdr(skb); 551 struct tcphdr *th = tcp_hdr(skb);
552 552
@@ -571,23 +571,6 @@ void tcp_v4_send_check(struct sock *sk, struct sk_buff *skb)
571} 571}
572EXPORT_SYMBOL(tcp_v4_send_check); 572EXPORT_SYMBOL(tcp_v4_send_check);
573 573
574int tcp_v4_gso_send_check(struct sk_buff *skb)
575{
576 const struct iphdr *iph;
577 struct tcphdr *th;
578
579 if (!pskb_may_pull(skb, sizeof(*th)))
580 return -EINVAL;
581
582 iph = ip_hdr(skb);
583 th = tcp_hdr(skb);
584
585 th->check = 0;
586 skb->ip_summed = CHECKSUM_PARTIAL;
587 __tcp_v4_send_check(skb, iph->saddr, iph->daddr);
588 return 0;
589}
590
591/* 574/*
592 * This routine will send an RST to the other tcp. 575 * This routine will send an RST to the other tcp.
593 * 576 *
@@ -1026,7 +1009,7 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
1026 key = sock_kmalloc(sk, sizeof(*key), gfp); 1009 key = sock_kmalloc(sk, sizeof(*key), gfp);
1027 if (!key) 1010 if (!key)
1028 return -ENOMEM; 1011 return -ENOMEM;
1029 if (hlist_empty(&md5sig->head) && !tcp_alloc_md5sig_pool(sk)) { 1012 if (!tcp_alloc_md5sig_pool()) {
1030 sock_kfree_s(sk, key, sizeof(*key)); 1013 sock_kfree_s(sk, key, sizeof(*key));
1031 return -ENOMEM; 1014 return -ENOMEM;
1032 } 1015 }
@@ -1044,9 +1027,7 @@ EXPORT_SYMBOL(tcp_md5_do_add);
1044 1027
1045int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family) 1028int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family)
1046{ 1029{
1047 struct tcp_sock *tp = tcp_sk(sk);
1048 struct tcp_md5sig_key *key; 1030 struct tcp_md5sig_key *key;
1049 struct tcp_md5sig_info *md5sig;
1050 1031
1051 key = tcp_md5_do_lookup(sk, addr, family); 1032 key = tcp_md5_do_lookup(sk, addr, family);
1052 if (!key) 1033 if (!key)
@@ -1054,10 +1035,6 @@ int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family)
1054 hlist_del_rcu(&key->node); 1035 hlist_del_rcu(&key->node);
1055 atomic_sub(sizeof(*key), &sk->sk_omem_alloc); 1036 atomic_sub(sizeof(*key), &sk->sk_omem_alloc);
1056 kfree_rcu(key, rcu); 1037 kfree_rcu(key, rcu);
1057 md5sig = rcu_dereference_protected(tp->md5sig_info,
1058 sock_owned_by_user(sk));
1059 if (hlist_empty(&md5sig->head))
1060 tcp_free_md5sig_pool();
1061 return 0; 1038 return 0;
1062} 1039}
1063EXPORT_SYMBOL(tcp_md5_do_del); 1040EXPORT_SYMBOL(tcp_md5_do_del);
@@ -1071,8 +1048,6 @@ static void tcp_clear_md5_list(struct sock *sk)
1071 1048
1072 md5sig = rcu_dereference_protected(tp->md5sig_info, 1); 1049 md5sig = rcu_dereference_protected(tp->md5sig_info, 1);
1073 1050
1074 if (!hlist_empty(&md5sig->head))
1075 tcp_free_md5sig_pool();
1076 hlist_for_each_entry_safe(key, n, &md5sig->head, node) { 1051 hlist_for_each_entry_safe(key, n, &md5sig->head, node) {
1077 hlist_del_rcu(&key->node); 1052 hlist_del_rcu(&key->node);
1078 atomic_sub(sizeof(*key), &sk->sk_omem_alloc); 1053 atomic_sub(sizeof(*key), &sk->sk_omem_alloc);
@@ -2019,6 +1994,7 @@ process:
2019 if (sk_filter(sk, skb)) 1994 if (sk_filter(sk, skb))
2020 goto discard_and_relse; 1995 goto discard_and_relse;
2021 1996
1997 sk_mark_ll(sk, skb);
2022 skb->dev = NULL; 1998 skb->dev = NULL;
2023 1999
2024 bh_lock_sock_nested(sk); 2000 bh_lock_sock_nested(sk);
@@ -2803,52 +2779,6 @@ void tcp4_proc_exit(void)
2803} 2779}
2804#endif /* CONFIG_PROC_FS */ 2780#endif /* CONFIG_PROC_FS */
2805 2781
2806struct sk_buff **tcp4_gro_receive(struct sk_buff **head, struct sk_buff *skb)
2807{
2808 const struct iphdr *iph = skb_gro_network_header(skb);
2809 __wsum wsum;
2810 __sum16 sum;
2811
2812 switch (skb->ip_summed) {
2813 case CHECKSUM_COMPLETE:
2814 if (!tcp_v4_check(skb_gro_len(skb), iph->saddr, iph->daddr,
2815 skb->csum)) {
2816 skb->ip_summed = CHECKSUM_UNNECESSARY;
2817 break;
2818 }
2819flush:
2820 NAPI_GRO_CB(skb)->flush = 1;
2821 return NULL;
2822
2823 case CHECKSUM_NONE:
2824 wsum = csum_tcpudp_nofold(iph->saddr, iph->daddr,
2825 skb_gro_len(skb), IPPROTO_TCP, 0);
2826 sum = csum_fold(skb_checksum(skb,
2827 skb_gro_offset(skb),
2828 skb_gro_len(skb),
2829 wsum));
2830 if (sum)
2831 goto flush;
2832
2833 skb->ip_summed = CHECKSUM_UNNECESSARY;
2834 break;
2835 }
2836
2837 return tcp_gro_receive(head, skb);
2838}
2839
2840int tcp4_gro_complete(struct sk_buff *skb)
2841{
2842 const struct iphdr *iph = ip_hdr(skb);
2843 struct tcphdr *th = tcp_hdr(skb);
2844
2845 th->check = ~tcp_v4_check(skb->len - skb_transport_offset(skb),
2846 iph->saddr, iph->daddr, 0);
2847 skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
2848
2849 return tcp_gro_complete(skb);
2850}
2851
2852struct proto tcp_prot = { 2782struct proto tcp_prot = {
2853 .name = "TCP", 2783 .name = "TCP",
2854 .owner = THIS_MODULE, 2784 .owner = THIS_MODULE,
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 0f0178827259..ab1c08658528 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -317,7 +317,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
317 key = tp->af_specific->md5_lookup(sk, sk); 317 key = tp->af_specific->md5_lookup(sk, sk);
318 if (key != NULL) { 318 if (key != NULL) {
319 tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC); 319 tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
320 if (tcptw->tw_md5_key && tcp_alloc_md5sig_pool(sk) == NULL) 320 if (tcptw->tw_md5_key && !tcp_alloc_md5sig_pool())
321 BUG(); 321 BUG();
322 } 322 }
323 } while (0); 323 } while (0);
@@ -358,10 +358,8 @@ void tcp_twsk_destructor(struct sock *sk)
358#ifdef CONFIG_TCP_MD5SIG 358#ifdef CONFIG_TCP_MD5SIG
359 struct tcp_timewait_sock *twsk = tcp_twsk(sk); 359 struct tcp_timewait_sock *twsk = tcp_twsk(sk);
360 360
361 if (twsk->tw_md5_key) { 361 if (twsk->tw_md5_key)
362 tcp_free_md5sig_pool();
363 kfree_rcu(twsk->tw_md5_key, rcu); 362 kfree_rcu(twsk->tw_md5_key, rcu);
364 }
365#endif 363#endif
366} 364}
367EXPORT_SYMBOL_GPL(tcp_twsk_destructor); 365EXPORT_SYMBOL_GPL(tcp_twsk_destructor);
diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
new file mode 100644
index 000000000000..3a7525e6c086
--- /dev/null
+++ b/net/ipv4/tcp_offload.c
@@ -0,0 +1,332 @@
1/*
2 * IPV4 GSO/GRO offload support
3 * Linux INET implementation
4 *
5 * This program is free software; you can redistribute it and/or
6 * modify it under the terms of the GNU General Public License
7 * as published by the Free Software Foundation; either version
8 * 2 of the License, or (at your option) any later version.
9 *
10 * TCPv4 GSO/GRO support
11 */
12
13#include <linux/skbuff.h>
14#include <net/tcp.h>
15#include <net/protocol.h>
16
17struct sk_buff *tcp_tso_segment(struct sk_buff *skb,
18 netdev_features_t features)
19{
20 struct sk_buff *segs = ERR_PTR(-EINVAL);
21 struct tcphdr *th;
22 unsigned int thlen;
23 unsigned int seq;
24 __be32 delta;
25 unsigned int oldlen;
26 unsigned int mss;
27 struct sk_buff *gso_skb = skb;
28 __sum16 newcheck;
29 bool ooo_okay, copy_destructor;
30
31 if (!pskb_may_pull(skb, sizeof(*th)))
32 goto out;
33
34 th = tcp_hdr(skb);
35 thlen = th->doff * 4;
36 if (thlen < sizeof(*th))
37 goto out;
38
39 if (!pskb_may_pull(skb, thlen))
40 goto out;
41
42 oldlen = (u16)~skb->len;
43 __skb_pull(skb, thlen);
44
45 mss = tcp_skb_mss(skb);
46 if (unlikely(skb->len <= mss))
47 goto out;
48
49 if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
50 /* Packet is from an untrusted source, reset gso_segs. */
51 int type = skb_shinfo(skb)->gso_type;
52
53 if (unlikely(type &
54 ~(SKB_GSO_TCPV4 |
55 SKB_GSO_DODGY |
56 SKB_GSO_TCP_ECN |
57 SKB_GSO_TCPV6 |
58 SKB_GSO_GRE |
59 SKB_GSO_MPLS |
60 SKB_GSO_UDP_TUNNEL |
61 0) ||
62 !(type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))))
63 goto out;
64
65 skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(skb->len, mss);
66
67 segs = NULL;
68 goto out;
69 }
70
71 copy_destructor = gso_skb->destructor == tcp_wfree;
72 ooo_okay = gso_skb->ooo_okay;
73 /* All segments but the first should have ooo_okay cleared */
74 skb->ooo_okay = 0;
75
76 segs = skb_segment(skb, features);
77 if (IS_ERR(segs))
78 goto out;
79
80 /* Only first segment might have ooo_okay set */
81 segs->ooo_okay = ooo_okay;
82
83 delta = htonl(oldlen + (thlen + mss));
84
85 skb = segs;
86 th = tcp_hdr(skb);
87 seq = ntohl(th->seq);
88
89 newcheck = ~csum_fold((__force __wsum)((__force u32)th->check +
90 (__force u32)delta));
91
92 do {
93 th->fin = th->psh = 0;
94 th->check = newcheck;
95
96 if (skb->ip_summed != CHECKSUM_PARTIAL)
97 th->check =
98 csum_fold(csum_partial(skb_transport_header(skb),
99 thlen, skb->csum));
100
101 seq += mss;
102 if (copy_destructor) {
103 skb->destructor = gso_skb->destructor;
104 skb->sk = gso_skb->sk;
105 /* {tcp|sock}_wfree() use exact truesize accounting :
106 * sum(skb->truesize) MUST be exactly be gso_skb->truesize
107 * So we account mss bytes of 'true size' for each segment.
108 * The last segment will contain the remaining.
109 */
110 skb->truesize = mss;
111 gso_skb->truesize -= mss;
112 }
113 skb = skb->next;
114 th = tcp_hdr(skb);
115
116 th->seq = htonl(seq);
117 th->cwr = 0;
118 } while (skb->next);
119
120 /* Following permits TCP Small Queues to work well with GSO :
121 * The callback to TCP stack will be called at the time last frag
122 * is freed at TX completion, and not right now when gso_skb
123 * is freed by GSO engine
124 */
125 if (copy_destructor) {
126 swap(gso_skb->sk, skb->sk);
127 swap(gso_skb->destructor, skb->destructor);
128 swap(gso_skb->truesize, skb->truesize);
129 }
130
131 delta = htonl(oldlen + (skb_tail_pointer(skb) -
132 skb_transport_header(skb)) +
133 skb->data_len);
134 th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
135 (__force u32)delta));
136 if (skb->ip_summed != CHECKSUM_PARTIAL)
137 th->check = csum_fold(csum_partial(skb_transport_header(skb),
138 thlen, skb->csum));
139out:
140 return segs;
141}
142EXPORT_SYMBOL(tcp_tso_segment);
143
144struct sk_buff **tcp_gro_receive(struct sk_buff **head, struct sk_buff *skb)
145{
146 struct sk_buff **pp = NULL;
147 struct sk_buff *p;
148 struct tcphdr *th;
149 struct tcphdr *th2;
150 unsigned int len;
151 unsigned int thlen;
152 __be32 flags;
153 unsigned int mss = 1;
154 unsigned int hlen;
155 unsigned int off;
156 int flush = 1;
157 int i;
158
159 off = skb_gro_offset(skb);
160 hlen = off + sizeof(*th);
161 th = skb_gro_header_fast(skb, off);
162 if (skb_gro_header_hard(skb, hlen)) {
163 th = skb_gro_header_slow(skb, hlen, off);
164 if (unlikely(!th))
165 goto out;
166 }
167
168 thlen = th->doff * 4;
169 if (thlen < sizeof(*th))
170 goto out;
171
172 hlen = off + thlen;
173 if (skb_gro_header_hard(skb, hlen)) {
174 th = skb_gro_header_slow(skb, hlen, off);
175 if (unlikely(!th))
176 goto out;
177 }
178
179 skb_gro_pull(skb, thlen);
180
181 len = skb_gro_len(skb);
182 flags = tcp_flag_word(th);
183
184 for (; (p = *head); head = &p->next) {
185 if (!NAPI_GRO_CB(p)->same_flow)
186 continue;
187
188 th2 = tcp_hdr(p);
189
190 if (*(u32 *)&th->source ^ *(u32 *)&th2->source) {
191 NAPI_GRO_CB(p)->same_flow = 0;
192 continue;
193 }
194
195 goto found;
196 }
197
198 goto out_check_final;
199
200found:
201 flush = NAPI_GRO_CB(p)->flush;
202 flush |= (__force int)(flags & TCP_FLAG_CWR);
203 flush |= (__force int)((flags ^ tcp_flag_word(th2)) &
204 ~(TCP_FLAG_CWR | TCP_FLAG_FIN | TCP_FLAG_PSH));
205 flush |= (__force int)(th->ack_seq ^ th2->ack_seq);
206 for (i = sizeof(*th); i < thlen; i += 4)
207 flush |= *(u32 *)((u8 *)th + i) ^
208 *(u32 *)((u8 *)th2 + i);
209
210 mss = tcp_skb_mss(p);
211
212 flush |= (len - 1) >= mss;
213 flush |= (ntohl(th2->seq) + skb_gro_len(p)) ^ ntohl(th->seq);
214
215 if (flush || skb_gro_receive(head, skb)) {
216 mss = 1;
217 goto out_check_final;
218 }
219
220 p = *head;
221 th2 = tcp_hdr(p);
222 tcp_flag_word(th2) |= flags & (TCP_FLAG_FIN | TCP_FLAG_PSH);
223
224out_check_final:
225 flush = len < mss;
226 flush |= (__force int)(flags & (TCP_FLAG_URG | TCP_FLAG_PSH |
227 TCP_FLAG_RST | TCP_FLAG_SYN |
228 TCP_FLAG_FIN));
229
230 if (p && (!NAPI_GRO_CB(skb)->same_flow || flush))
231 pp = head;
232
233out:
234 NAPI_GRO_CB(skb)->flush |= flush;
235
236 return pp;
237}
238EXPORT_SYMBOL(tcp_gro_receive);
239
240int tcp_gro_complete(struct sk_buff *skb)
241{
242 struct tcphdr *th = tcp_hdr(skb);
243
244 skb->csum_start = skb_transport_header(skb) - skb->head;
245 skb->csum_offset = offsetof(struct tcphdr, check);
246 skb->ip_summed = CHECKSUM_PARTIAL;
247
248 skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
249
250 if (th->cwr)
251 skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
252
253 return 0;
254}
255EXPORT_SYMBOL(tcp_gro_complete);
256
257static int tcp_v4_gso_send_check(struct sk_buff *skb)
258{
259 const struct iphdr *iph;
260 struct tcphdr *th;
261
262 if (!pskb_may_pull(skb, sizeof(*th)))
263 return -EINVAL;
264
265 iph = ip_hdr(skb);
266 th = tcp_hdr(skb);
267
268 th->check = 0;
269 skb->ip_summed = CHECKSUM_PARTIAL;
270 __tcp_v4_send_check(skb, iph->saddr, iph->daddr);
271 return 0;
272}
273
274static struct sk_buff **tcp4_gro_receive(struct sk_buff **head, struct sk_buff *skb)
275{
276 const struct iphdr *iph = skb_gro_network_header(skb);
277 __wsum wsum;
278 __sum16 sum;
279
280 switch (skb->ip_summed) {
281 case CHECKSUM_COMPLETE:
282 if (!tcp_v4_check(skb_gro_len(skb), iph->saddr, iph->daddr,
283 skb->csum)) {
284 skb->ip_summed = CHECKSUM_UNNECESSARY;
285 break;
286 }
287flush:
288 NAPI_GRO_CB(skb)->flush = 1;
289 return NULL;
290
291 case CHECKSUM_NONE:
292 wsum = csum_tcpudp_nofold(iph->saddr, iph->daddr,
293 skb_gro_len(skb), IPPROTO_TCP, 0);
294 sum = csum_fold(skb_checksum(skb,
295 skb_gro_offset(skb),
296 skb_gro_len(skb),
297 wsum));
298 if (sum)
299 goto flush;
300
301 skb->ip_summed = CHECKSUM_UNNECESSARY;
302 break;
303 }
304
305 return tcp_gro_receive(head, skb);
306}
307
308static int tcp4_gro_complete(struct sk_buff *skb)
309{
310 const struct iphdr *iph = ip_hdr(skb);
311 struct tcphdr *th = tcp_hdr(skb);
312
313 th->check = ~tcp_v4_check(skb->len - skb_transport_offset(skb),
314 iph->saddr, iph->daddr, 0);
315 skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
316
317 return tcp_gro_complete(skb);
318}
319
320static const struct net_offload tcpv4_offload = {
321 .callbacks = {
322 .gso_send_check = tcp_v4_gso_send_check,
323 .gso_segment = tcp_tso_segment,
324 .gro_receive = tcp4_gro_receive,
325 .gro_complete = tcp4_gro_complete,
326 },
327};
328
329int __init tcpv4_offload_init(void)
330{
331 return inet_add_offload(&tcpv4_offload, IPPROTO_TCP);
332}
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index ec335fabd5cc..3d609490f118 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -160,6 +160,7 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
160{ 160{
161 struct inet_connection_sock *icsk = inet_csk(sk); 161 struct inet_connection_sock *icsk = inet_csk(sk);
162 const u32 now = tcp_time_stamp; 162 const u32 now = tcp_time_stamp;
163 const struct dst_entry *dst = __sk_dst_get(sk);
163 164
164 if (sysctl_tcp_slow_start_after_idle && 165 if (sysctl_tcp_slow_start_after_idle &&
165 (!tp->packets_out && (s32)(now - tp->lsndtime) > icsk->icsk_rto)) 166 (!tp->packets_out && (s32)(now - tp->lsndtime) > icsk->icsk_rto))
@@ -170,8 +171,9 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
170 /* If it is a reply for ato after last received 171 /* If it is a reply for ato after last received
171 * packet, enter pingpong mode. 172 * packet, enter pingpong mode.
172 */ 173 */
173 if ((u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato) 174 if ((u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato &&
174 icsk->icsk_ack.pingpong = 1; 175 (!dst || !dst_metric(dst, RTAX_QUICKACK)))
176 icsk->icsk_ack.pingpong = 1;
175} 177}
176 178
177/* Account for an ACK we sent. */ 179/* Account for an ACK we sent. */
@@ -181,6 +183,21 @@ static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts)
181 inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK); 183 inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
182} 184}
183 185
186
187u32 tcp_default_init_rwnd(u32 mss)
188{
189 /* Initial receive window should be twice of TCP_INIT_CWND to
190 * enable proper sending of new unsent data during fast recovery
191 * (RFC 3517, Section 4, NextSeg() rule (2)). Further place a
192 * limit when mss is larger than 1460.
193 */
194 u32 init_rwnd = TCP_INIT_CWND * 2;
195
196 if (mss > 1460)
197 init_rwnd = max((1460 * init_rwnd) / mss, 2U);
198 return init_rwnd;
199}
200
184/* Determine a window scaling and initial window to offer. 201/* Determine a window scaling and initial window to offer.
185 * Based on the assumption that the given amount of space 202 * Based on the assumption that the given amount of space
186 * will be offered. Store the results in the tp structure. 203 * will be offered. Store the results in the tp structure.
@@ -230,22 +247,10 @@ void tcp_select_initial_window(int __space, __u32 mss,
230 } 247 }
231 } 248 }
232 249
233 /* Set initial window to a value enough for senders starting with
234 * initial congestion window of TCP_DEFAULT_INIT_RCVWND. Place
235 * a limit on the initial window when mss is larger than 1460.
236 */
237 if (mss > (1 << *rcv_wscale)) { 250 if (mss > (1 << *rcv_wscale)) {
238 int init_cwnd = TCP_DEFAULT_INIT_RCVWND; 251 if (!init_rcv_wnd) /* Use default unless specified otherwise */
239 if (mss > 1460) 252 init_rcv_wnd = tcp_default_init_rwnd(mss);
240 init_cwnd = 253 *rcv_wnd = min(*rcv_wnd, init_rcv_wnd * mss);
241 max_t(u32, (1460 * TCP_DEFAULT_INIT_RCVWND) / mss, 2);
242 /* when initializing use the value from init_rcv_wnd
243 * rather than the default from above
244 */
245 if (init_rcv_wnd)
246 *rcv_wnd = min(*rcv_wnd, init_rcv_wnd * mss);
247 else
248 *rcv_wnd = min(*rcv_wnd, init_cwnd * mss);
249 } 254 }
250 255
251 /* Set the clamp no higher than max representable value */ 256 /* Set the clamp no higher than max representable value */
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 0bf5d399a03c..6b270e53c207 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -109,6 +109,7 @@
109#include <trace/events/udp.h> 109#include <trace/events/udp.h>
110#include <linux/static_key.h> 110#include <linux/static_key.h>
111#include <trace/events/skb.h> 111#include <trace/events/skb.h>
112#include <net/ll_poll.h>
112#include "udp_impl.h" 113#include "udp_impl.h"
113 114
114struct udp_table udp_table __read_mostly; 115struct udp_table udp_table __read_mostly;
@@ -429,7 +430,7 @@ begin:
429 reuseport = sk->sk_reuseport; 430 reuseport = sk->sk_reuseport;
430 if (reuseport) { 431 if (reuseport) {
431 hash = inet_ehashfn(net, daddr, hnum, 432 hash = inet_ehashfn(net, daddr, hnum,
432 saddr, htons(sport)); 433 saddr, sport);
433 matches = 1; 434 matches = 1;
434 } 435 }
435 } else if (score == badness && reuseport) { 436 } else if (score == badness && reuseport) {
@@ -510,7 +511,7 @@ begin:
510 reuseport = sk->sk_reuseport; 511 reuseport = sk->sk_reuseport;
511 if (reuseport) { 512 if (reuseport) {
512 hash = inet_ehashfn(net, daddr, hnum, 513 hash = inet_ehashfn(net, daddr, hnum,
513 saddr, htons(sport)); 514 saddr, sport);
514 matches = 1; 515 matches = 1;
515 } 516 }
516 } else if (score == badness && reuseport) { 517 } else if (score == badness && reuseport) {
@@ -799,7 +800,7 @@ send:
799/* 800/*
800 * Push out all pending data as one UDP datagram. Socket is locked. 801 * Push out all pending data as one UDP datagram. Socket is locked.
801 */ 802 */
802static int udp_push_pending_frames(struct sock *sk) 803int udp_push_pending_frames(struct sock *sk)
803{ 804{
804 struct udp_sock *up = udp_sk(sk); 805 struct udp_sock *up = udp_sk(sk);
805 struct inet_sock *inet = inet_sk(sk); 806 struct inet_sock *inet = inet_sk(sk);
@@ -818,6 +819,7 @@ out:
818 up->pending = 0; 819 up->pending = 0;
819 return err; 820 return err;
820} 821}
822EXPORT_SYMBOL(udp_push_pending_frames);
821 823
822int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 824int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
823 size_t len) 825 size_t len)
@@ -1709,7 +1711,10 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
1709 sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable); 1711 sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
1710 1712
1711 if (sk != NULL) { 1713 if (sk != NULL) {
1712 int ret = udp_queue_rcv_skb(sk, skb); 1714 int ret;
1715
1716 sk_mark_ll(sk, skb);
1717 ret = udp_queue_rcv_skb(sk, skb);
1713 sock_put(sk); 1718 sock_put(sk);
1714 1719
1715 /* a return value > 0 means to resubmit the input, but 1720 /* a return value > 0 means to resubmit the input, but
@@ -1967,6 +1972,8 @@ unsigned int udp_poll(struct file *file, struct socket *sock, poll_table *wait)
1967 unsigned int mask = datagram_poll(file, sock, wait); 1972 unsigned int mask = datagram_poll(file, sock, wait);
1968 struct sock *sk = sock->sk; 1973 struct sock *sk = sock->sk;
1969 1974
1975 sock_rps_record_flow(sk);
1976
1970 /* Check for false positives due to checksum errors */ 1977 /* Check for false positives due to checksum errors */
1971 if ((mask & POLLRDNORM) && !(file->f_flags & O_NONBLOCK) && 1978 if ((mask & POLLRDNORM) && !(file->f_flags & O_NONBLOCK) &&
1972 !(sk->sk_shutdown & RCV_SHUTDOWN) && !first_packet_length(sk)) 1979 !(sk->sk_shutdown & RCV_SHUTDOWN) && !first_packet_length(sk))
@@ -2284,29 +2291,8 @@ void __init udp_init(void)
 	sysctl_udp_wmem_min = SK_MEM_QUANTUM;
 }
 
-int udp4_ufo_send_check(struct sk_buff *skb)
-{
-	if (!pskb_may_pull(skb, sizeof(struct udphdr)))
-		return -EINVAL;
-
-	if (likely(!skb->encapsulation)) {
-		const struct iphdr *iph;
-		struct udphdr *uh;
-
-		iph = ip_hdr(skb);
-		uh = udp_hdr(skb);
-
-		uh->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, skb->len,
-					       IPPROTO_UDP, 0);
-		skb->csum_start = skb_transport_header(skb) - skb->head;
-		skb->csum_offset = offsetof(struct udphdr, check);
-		skb->ip_summed = CHECKSUM_PARTIAL;
-	}
-	return 0;
-}
-
-static struct sk_buff *skb_udp_tunnel_segment(struct sk_buff *skb,
-					      netdev_features_t features)
+struct sk_buff *skb_udp_tunnel_segment(struct sk_buff *skb,
+				       netdev_features_t features)
 {
 	struct sk_buff *segs = ERR_PTR(-EINVAL);
 	int mac_len = skb->mac_len;
@@ -2365,53 +2351,3 @@ static struct sk_buff *skb_udp_tunnel_segment(struct sk_buff *skb,
 out:
 	return segs;
 }
-
-struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
-				  netdev_features_t features)
-{
-	struct sk_buff *segs = ERR_PTR(-EINVAL);
-	unsigned int mss;
-	mss = skb_shinfo(skb)->gso_size;
-	if (unlikely(skb->len <= mss))
-		goto out;
-
-	if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
-		/* Packet is from an untrusted source, reset gso_segs. */
-		int type = skb_shinfo(skb)->gso_type;
-
-		if (unlikely(type & ~(SKB_GSO_UDP | SKB_GSO_DODGY |
-				      SKB_GSO_UDP_TUNNEL |
-				      SKB_GSO_GRE) ||
-			     !(type & (SKB_GSO_UDP))))
-			goto out;
-
-		skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(skb->len, mss);
-
-		segs = NULL;
-		goto out;
-	}
-
-	/* Fragment the skb. IP headers of the fragments are updated in
-	 * inet_gso_segment()
-	 */
-	if (skb->encapsulation && skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL)
-		segs = skb_udp_tunnel_segment(skb, features);
-	else {
-		int offset;
-		__wsum csum;
-
-		/* Do software UFO. Complete and fill in the UDP checksum as
-		 * HW cannot do checksum of UDP packets sent as multiple
-		 * IP fragments.
-		 */
-		offset = skb_checksum_start_offset(skb);
-		csum = skb_checksum(skb, offset, skb->len - offset, 0);
-		offset += skb->csum_offset;
-		*(__sum16 *)(skb->data + offset) = csum_fold(csum);
-		skb->ip_summed = CHECKSUM_NONE;
-
-		segs = skb_segment(skb, features);
-	}
-out:
-	return segs;
-}
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
new file mode 100644
index 000000000000..f35eccaa855e
--- /dev/null
+++ b/net/ipv4/udp_offload.c
@@ -0,0 +1,100 @@
+/*
+ *	IPV4 GSO/GRO offload support
+ *	Linux INET implementation
+ *
+ *	This program is free software; you can redistribute it and/or
+ *	modify it under the terms of the GNU General Public License
+ *	as published by the Free Software Foundation; either version
+ *	2 of the License, or (at your option) any later version.
+ *
+ *	UDPv4 GSO support
+ */
+
+#include <linux/skbuff.h>
+#include <net/udp.h>
+#include <net/protocol.h>
+
+static int udp4_ufo_send_check(struct sk_buff *skb)
+{
+	if (!pskb_may_pull(skb, sizeof(struct udphdr)))
+		return -EINVAL;
+
+	if (likely(!skb->encapsulation)) {
+		const struct iphdr *iph;
+		struct udphdr *uh;
+
+		iph = ip_hdr(skb);
+		uh = udp_hdr(skb);
+
+		uh->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, skb->len,
+					       IPPROTO_UDP, 0);
+		skb->csum_start = skb_transport_header(skb) - skb->head;
+		skb->csum_offset = offsetof(struct udphdr, check);
+		skb->ip_summed = CHECKSUM_PARTIAL;
+	}
+
+	return 0;
+}
+
+static struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
+					 netdev_features_t features)
+{
+	struct sk_buff *segs = ERR_PTR(-EINVAL);
+	unsigned int mss;
+
+	mss = skb_shinfo(skb)->gso_size;
+	if (unlikely(skb->len <= mss))
+		goto out;
+
+	if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
+		/* Packet is from an untrusted source, reset gso_segs. */
+		int type = skb_shinfo(skb)->gso_type;
+
+		if (unlikely(type & ~(SKB_GSO_UDP | SKB_GSO_DODGY |
+				      SKB_GSO_UDP_TUNNEL |
+				      SKB_GSO_GRE | SKB_GSO_MPLS) ||
+			     !(type & (SKB_GSO_UDP))))
+			goto out;
+
+		skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(skb->len, mss);
+
+		segs = NULL;
+		goto out;
+	}
+
+	/* Fragment the skb. IP headers of the fragments are updated in
+	 * inet_gso_segment()
+	 */
+	if (skb->encapsulation && skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL)
+		segs = skb_udp_tunnel_segment(skb, features);
+	else {
+		int offset;
+		__wsum csum;
+
+		/* Do software UFO. Complete and fill in the UDP checksum as
+		 * HW cannot do checksum of UDP packets sent as multiple
+		 * IP fragments.
+		 */
+		offset = skb_checksum_start_offset(skb);
+		csum = skb_checksum(skb, offset, skb->len - offset, 0);
+		offset += skb->csum_offset;
+		*(__sum16 *)(skb->data + offset) = csum_fold(csum);
+		skb->ip_summed = CHECKSUM_NONE;
+
+		segs = skb_segment(skb, features);
+	}
+out:
+	return segs;
+}
+
+static const struct net_offload udpv4_offload = {
+	.callbacks = {
+		.gso_send_check = udp4_ufo_send_check,
+		.gso_segment = udp4_ufo_fragment,
+	},
+};
+
+int __init udpv4_offload_init(void)
+{
+	return inet_add_offload(&udpv4_offload, IPPROTO_UDP);
+}
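The software UFO fallback above completes the UDP checksum in software (`skb_checksum()` followed by `csum_fold()`) because the hardware cannot checksum a datagram that will be sent as multiple IP fragments. A minimal stand-alone sketch of that one's-complement fold, using illustrative helper names rather than the kernel's actual `csum_partial()`/`csum_fold()` implementations:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sum 16-bit big-endian words into a 32-bit one's-complement accumulator,
 * padding a trailing odd byte with zero (RFC 1071 style). */
static uint32_t csum_add(uint32_t sum, const uint8_t *buf, size_t len)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)((buf[i] << 8) | buf[i + 1]);
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;
	return sum;
}

/* Fold carries back into the low 16 bits, then invert: the value stored
 * into the UDP header's check field. */
static uint16_t csum_fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

With the RFC 1071 example bytes `00 01 f2 03 f4 f5 f6 f7`, folding the running sum yields the checksum `0x220d`.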
diff --git a/net/ipv4/xfrm4_tunnel.c b/net/ipv4/xfrm4_tunnel.c
index 05a5df2febc9..06347dbd32c1 100644
--- a/net/ipv4/xfrm4_tunnel.c
+++ b/net/ipv4/xfrm4_tunnel.c
@@ -63,7 +63,7 @@ static int xfrm_tunnel_err(struct sk_buff *skb, u32 info)
 static struct xfrm_tunnel xfrm_tunnel_handler __read_mostly = {
 	.handler = xfrm_tunnel_rcv,
 	.err_handler = xfrm_tunnel_err,
-	.priority = 2,
+	.priority = 3,
 };
 
 #if IS_ENABLED(CONFIG_IPV6)
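The hunk above only bumps the handler's `.priority` field. Tunnel handlers of this kind are dispatched in priority order, so the number controls where in the chain the handler sits. A simplified model (not kernel code; the type and function names here are illustrative) of keeping handlers in a list sorted by ascending priority:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a priority-ordered handler chain. */
struct tunnel_handler {
	int priority;
	struct tunnel_handler *next;
};

/* Insert h before the first entry whose priority is strictly greater,
 * so lower-priority handlers are tried first. */
static void register_handler(struct tunnel_handler **head,
			     struct tunnel_handler *h)
{
	struct tunnel_handler **pp = head;

	while (*pp && (*pp)->priority <= h->priority)
		pp = &(*pp)->next;
	h->next = *pp;
	*pp = h;
}
```

Under this model, raising a handler's priority value (as the 2 to 3 change does) moves it later in the dispatch order relative to handlers that keep a lower value.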
diff --git a/net/ipv6/Makefile b/net/ipv6/Makefile
index 9af088d2cdaa..470a9c008e9b 100644
--- a/net/ipv6/Makefile
+++ b/net/ipv6/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_IPV6) += ipv6.o
 ipv6-objs := af_inet6.o anycast.o ip6_output.o ip6_input.o addrconf.o \
 		addrlabel.o \
 		route.o ip6_fib.o ipv6_sockglue.o ndisc.o udp.o udplite.o \
-		raw.o icmp.o mcast.o reassembly.o tcp_ipv6.o \
+		raw.o icmp.o mcast.o reassembly.o tcp_ipv6.o ping.o \
 		exthdrs.o datagram.o ip6_flowlabel.o inet6_connection_sock.o
 
 ipv6-offload := ip6_offload.o tcpv6_offload.o udp_offload.o exthdrs_offload.o
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 4ab4c38958c6..cfdcf7b2daf6 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -253,37 +253,32 @@ static inline bool addrconf_qdisc_ok(const struct net_device *dev)
 	return !qdisc_tx_is_noop(dev);
 }
 
-static void addrconf_del_timer(struct inet6_ifaddr *ifp)
+static void addrconf_del_rs_timer(struct inet6_dev *idev)
 {
-	if (del_timer(&ifp->timer))
+	if (del_timer(&idev->rs_timer))
+		__in6_dev_put(idev);
+}
+
+static void addrconf_del_dad_timer(struct inet6_ifaddr *ifp)
+{
+	if (del_timer(&ifp->dad_timer))
 		__in6_ifa_put(ifp);
 }
 
-enum addrconf_timer_t {
-	AC_NONE,
-	AC_DAD,
-	AC_RS,
-};
+static void addrconf_mod_rs_timer(struct inet6_dev *idev,
+				  unsigned long when)
+{
+	if (!timer_pending(&idev->rs_timer))
+		in6_dev_hold(idev);
+	mod_timer(&idev->rs_timer, jiffies + when);
+}
 
-static void addrconf_mod_timer(struct inet6_ifaddr *ifp,
-			       enum addrconf_timer_t what,
-			       unsigned long when)
+static void addrconf_mod_dad_timer(struct inet6_ifaddr *ifp,
+				   unsigned long when)
 {
-	if (!del_timer(&ifp->timer))
+	if (!timer_pending(&ifp->dad_timer))
 		in6_ifa_hold(ifp);
-
-	switch (what) {
-	case AC_DAD:
-		ifp->timer.function = addrconf_dad_timer;
-		break;
-	case AC_RS:
-		ifp->timer.function = addrconf_rs_timer;
-		break;
-	default:
-		break;
-	}
-	ifp->timer.expires = jiffies + when;
-	add_timer(&ifp->timer);
+	mod_timer(&ifp->dad_timer, jiffies + when);
 }
 
 static int snmp6_alloc_dev(struct inet6_dev *idev)
@@ -326,6 +321,7 @@ void in6_dev_finish_destroy(struct inet6_dev *idev)
 
 	WARN_ON(!list_empty(&idev->addr_list));
 	WARN_ON(idev->mc_list != NULL);
+	WARN_ON(timer_pending(&idev->rs_timer));
 
 #ifdef NET_REFCNT_DEBUG
 	pr_debug("%s: %s\n", __func__, dev ? dev->name : "NIL");
@@ -357,7 +353,8 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
 	rwlock_init(&ndev->lock);
 	ndev->dev = dev;
 	INIT_LIST_HEAD(&ndev->addr_list);
-
+	setup_timer(&ndev->rs_timer, addrconf_rs_timer,
+		    (unsigned long)ndev);
 	memcpy(&ndev->cnf, dev_net(dev)->ipv6.devconf_dflt, sizeof(ndev->cnf));
 	ndev->cnf.mtu6 = dev->mtu;
 	ndev->cnf.sysctl = NULL;
@@ -776,7 +773,7 @@ void inet6_ifa_finish_destroy(struct inet6_ifaddr *ifp)
 
 	in6_dev_put(ifp->idev);
 
-	if (del_timer(&ifp->timer))
+	if (del_timer(&ifp->dad_timer))
 		pr_notice("Timer is still running, when freeing ifa=%p\n", ifp);
 
 	if (ifp->state != INET6_IFADDR_STATE_DEAD) {
@@ -869,9 +866,9 @@ ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr, int pfxlen,
 
 	spin_lock_init(&ifa->lock);
 	spin_lock_init(&ifa->state_lock);
-	init_timer(&ifa->timer);
+	setup_timer(&ifa->dad_timer, addrconf_dad_timer,
+		    (unsigned long)ifa);
 	INIT_HLIST_NODE(&ifa->addr_lst);
-	ifa->timer.data = (unsigned long) ifa;
 	ifa->scope = scope;
 	ifa->prefix_len = pfxlen;
 	ifa->flags = flags | IFA_F_TENTATIVE;
@@ -994,7 +991,7 @@ static void ipv6_del_addr(struct inet6_ifaddr *ifp)
 	}
 	write_unlock_bh(&idev->lock);
 
-	addrconf_del_timer(ifp);
+	addrconf_del_dad_timer(ifp);
 
 	ipv6_ifa_notify(RTM_DELADDR, ifp);
 
@@ -1126,8 +1123,7 @@ retry:
 
 	ift = !max_addresses ||
 	      ipv6_count_addresses(idev) < max_addresses ?
-		ipv6_add_addr(idev, &addr, tmp_plen,
-			      ipv6_addr_type(&addr)&IPV6_ADDR_SCOPE_MASK,
+		ipv6_add_addr(idev, &addr, tmp_plen, ipv6_addr_scope(&addr),
 			      addr_flags) : NULL;
 	if (IS_ERR_OR_NULL(ift)) {
 		in6_ifa_put(ifp);
@@ -1448,6 +1444,23 @@ try_nextdev:
 }
 EXPORT_SYMBOL(ipv6_dev_get_saddr);
 
+int __ipv6_get_lladdr(struct inet6_dev *idev, struct in6_addr *addr,
+		      unsigned char banned_flags)
+{
+	struct inet6_ifaddr *ifp;
+	int err = -EADDRNOTAVAIL;
+
+	list_for_each_entry(ifp, &idev->addr_list, if_list) {
+		if (ifp->scope == IFA_LINK &&
+		    !(ifp->flags & banned_flags)) {
+			*addr = ifp->addr;
+			err = 0;
+			break;
+		}
+	}
+	return err;
+}
+
 int ipv6_get_lladdr(struct net_device *dev, struct in6_addr *addr,
 		    unsigned char banned_flags)
 {
@@ -1457,17 +1470,8 @@ int ipv6_get_lladdr(struct net_device *dev, struct in6_addr *addr,
 	rcu_read_lock();
 	idev = __in6_dev_get(dev);
 	if (idev) {
-		struct inet6_ifaddr *ifp;
-
 		read_lock_bh(&idev->lock);
-		list_for_each_entry(ifp, &idev->addr_list, if_list) {
-			if (ifp->scope == IFA_LINK &&
-			    !(ifp->flags & banned_flags)) {
-				*addr = ifp->addr;
-				err = 0;
-				break;
-			}
-		}
+		err = __ipv6_get_lladdr(idev, addr, banned_flags);
 		read_unlock_bh(&idev->lock);
 	}
 	rcu_read_unlock();
@@ -1581,7 +1585,7 @@ static void addrconf_dad_stop(struct inet6_ifaddr *ifp, int dad_failed)
 {
 	if (ifp->flags&IFA_F_PERMANENT) {
 		spin_lock_bh(&ifp->lock);
-		addrconf_del_timer(ifp);
+		addrconf_del_dad_timer(ifp);
 		ifp->flags |= IFA_F_TENTATIVE;
 		if (dad_failed)
 			ifp->flags |= IFA_F_DADFAILED;
@@ -2402,6 +2406,7 @@ err_exit:
  *	Manual configuration of address on an interface
  */
 static int inet6_addr_add(struct net *net, int ifindex, const struct in6_addr *pfx,
+			  const struct in6_addr *peer_pfx,
 			  unsigned int plen, __u8 ifa_flags, __u32 prefered_lft,
 			  __u32 valid_lft)
 {
@@ -2457,6 +2462,8 @@ static int inet6_addr_add(struct net *net, int ifindex, const struct in6_addr *p
 	ifp->valid_lft = valid_lft;
 	ifp->prefered_lft = prefered_lft;
 	ifp->tstamp = jiffies;
+	if (peer_pfx)
+		ifp->peer_addr = *peer_pfx;
 	spin_unlock_bh(&ifp->lock);
 
 	addrconf_prefix_route(&ifp->addr, ifp->prefix_len, dev,
@@ -2500,12 +2507,6 @@ static int inet6_addr_del(struct net *net, int ifindex, const struct in6_addr *p
 			read_unlock_bh(&idev->lock);
 
 			ipv6_del_addr(ifp);
-
-			/* If the last address is deleted administratively,
-			   disable IPv6 on this interface.
-			 */
-			if (list_empty(&idev->addr_list))
-				addrconf_ifdown(idev->dev, 1);
 			return 0;
 		}
 	}
@@ -2526,7 +2527,7 @@ int addrconf_add_ifaddr(struct net *net, void __user *arg)
 		return -EFAULT;
 
 	rtnl_lock();
-	err = inet6_addr_add(net, ireq.ifr6_ifindex, &ireq.ifr6_addr,
+	err = inet6_addr_add(net, ireq.ifr6_ifindex, &ireq.ifr6_addr, NULL,
 			     ireq.ifr6_prefixlen, IFA_F_PERMANENT,
 			     INFINITY_LIFE_TIME, INFINITY_LIFE_TIME);
 	rtnl_unlock();
@@ -2761,8 +2762,6 @@ static void addrconf_gre_config(struct net_device *dev)
 	struct inet6_dev *idev;
 	struct in6_addr addr;
 
-	pr_info("%s(%s)\n", __func__, dev->name);
-
 	ASSERT_RTNL();
 
 	if ((idev = ipv6_find_idev(dev)) == NULL) {
@@ -2829,9 +2828,9 @@ static void addrconf_ip6_tnl_config(struct net_device *dev)
 }
 
 static int addrconf_notify(struct notifier_block *this, unsigned long event,
-			   void *data)
+			   void *ptr)
 {
-	struct net_device *dev = (struct net_device *) data;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct inet6_dev *idev = __in6_dev_get(dev);
 	int run_pending = 0;
 	int err;
@@ -3039,7 +3038,7 @@ static int addrconf_ifdown(struct net_device *dev, int how)
 		hlist_for_each_entry_rcu(ifa, h, addr_lst) {
 			if (ifa->idev == idev) {
 				hlist_del_init_rcu(&ifa->addr_lst);
-				addrconf_del_timer(ifa);
+				addrconf_del_dad_timer(ifa);
 				goto restart;
 			}
 		}
@@ -3048,6 +3047,8 @@ static int addrconf_ifdown(struct net_device *dev, int how)
 
 	write_lock_bh(&idev->lock);
 
+	addrconf_del_rs_timer(idev);
+
 	/* Step 2: clear flags for stateless addrconf */
 	if (!how)
 		idev->if_flags &= ~(IF_RS_SENT|IF_RA_RCVD|IF_READY);
@@ -3077,7 +3078,7 @@ static int addrconf_ifdown(struct net_device *dev, int how)
 	while (!list_empty(&idev->addr_list)) {
 		ifa = list_first_entry(&idev->addr_list,
 				       struct inet6_ifaddr, if_list);
-		addrconf_del_timer(ifa);
+		addrconf_del_dad_timer(ifa);
 
 		list_del(&ifa->if_list);
 
@@ -3119,10 +3120,10 @@ static int addrconf_ifdown(struct net_device *dev, int how)
 
 static void addrconf_rs_timer(unsigned long data)
 {
-	struct inet6_ifaddr *ifp = (struct inet6_ifaddr *) data;
-	struct inet6_dev *idev = ifp->idev;
+	struct inet6_dev *idev = (struct inet6_dev *)data;
+	struct in6_addr lladdr;
 
-	read_lock(&idev->lock);
+	write_lock(&idev->lock);
 	if (idev->dead || !(idev->if_flags & IF_READY))
 		goto out;
 
@@ -3133,18 +3134,19 @@ static void addrconf_rs_timer(unsigned long data)
 	if (idev->if_flags & IF_RA_RCVD)
 		goto out;
 
-	spin_lock(&ifp->lock);
-	if (ifp->probes++ < idev->cnf.rtr_solicits) {
-		/* The wait after the last probe can be shorter */
-		addrconf_mod_timer(ifp, AC_RS,
-				   (ifp->probes == idev->cnf.rtr_solicits) ?
-				   idev->cnf.rtr_solicit_delay :
-				   idev->cnf.rtr_solicit_interval);
-		spin_unlock(&ifp->lock);
+	if (idev->rs_probes++ < idev->cnf.rtr_solicits) {
+		if (!__ipv6_get_lladdr(idev, &lladdr, IFA_F_TENTATIVE))
+			ndisc_send_rs(idev->dev, &lladdr,
+				      &in6addr_linklocal_allrouters);
+		else
+			goto out;
 
-		ndisc_send_rs(idev->dev, &ifp->addr, &in6addr_linklocal_allrouters);
+		/* The wait after the last probe can be shorter */
+		addrconf_mod_rs_timer(idev, (idev->rs_probes ==
+					     idev->cnf.rtr_solicits) ?
+				      idev->cnf.rtr_solicit_delay :
+				      idev->cnf.rtr_solicit_interval);
 	} else {
-		spin_unlock(&ifp->lock);
 		/*
 		 * Note: we do not support deprecated "all on-link"
 		 * assumption any longer.
@@ -3153,8 +3155,8 @@ static void addrconf_rs_timer(unsigned long data)
 	}
 
 out:
-	read_unlock(&idev->lock);
-	in6_ifa_put(ifp);
+	write_unlock(&idev->lock);
+	in6_dev_put(idev);
 }
 
 /*
@@ -3170,8 +3172,8 @@ static void addrconf_dad_kick(struct inet6_ifaddr *ifp)
 	else
 		rand_num = net_random() % (idev->cnf.rtr_solicit_delay ? : 1);
 
-	ifp->probes = idev->cnf.dad_transmits;
-	addrconf_mod_timer(ifp, AC_DAD, rand_num);
+	ifp->dad_probes = idev->cnf.dad_transmits;
+	addrconf_mod_dad_timer(ifp, rand_num);
 }
 
 static void addrconf_dad_start(struct inet6_ifaddr *ifp)
@@ -3232,40 +3234,40 @@ static void addrconf_dad_timer(unsigned long data)
 	struct inet6_dev *idev = ifp->idev;
 	struct in6_addr mcaddr;
 
-	if (!ifp->probes && addrconf_dad_end(ifp))
+	if (!ifp->dad_probes && addrconf_dad_end(ifp))
 		goto out;
 
-	read_lock(&idev->lock);
+	write_lock(&idev->lock);
 	if (idev->dead || !(idev->if_flags & IF_READY)) {
-		read_unlock(&idev->lock);
+		write_unlock(&idev->lock);
 		goto out;
 	}
 
 	spin_lock(&ifp->lock);
 	if (ifp->state == INET6_IFADDR_STATE_DEAD) {
 		spin_unlock(&ifp->lock);
-		read_unlock(&idev->lock);
+		write_unlock(&idev->lock);
 		goto out;
 	}
 
-	if (ifp->probes == 0) {
+	if (ifp->dad_probes == 0) {
 		/*
 		 * DAD was successful
 		 */
 
 		ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED);
 		spin_unlock(&ifp->lock);
-		read_unlock(&idev->lock);
+		write_unlock(&idev->lock);
 
 		addrconf_dad_completed(ifp);
 
 		goto out;
 	}
 
-	ifp->probes--;
-	addrconf_mod_timer(ifp, AC_DAD, ifp->idev->nd_parms->retrans_time);
+	ifp->dad_probes--;
+	addrconf_mod_dad_timer(ifp, ifp->idev->nd_parms->retrans_time);
 	spin_unlock(&ifp->lock);
-	read_unlock(&idev->lock);
+	write_unlock(&idev->lock);
 
 	/* send a neighbour solicitation for our addr */
 	addrconf_addr_solict_mult(&ifp->addr, &mcaddr);
@@ -3277,6 +3279,10 @@ out:
 static void addrconf_dad_completed(struct inet6_ifaddr *ifp)
 {
 	struct net_device *dev = ifp->idev->dev;
+	struct in6_addr lladdr;
+	bool send_rs, send_mld;
+
+	addrconf_del_dad_timer(ifp);
 
 	/*
 	 * Configure the address for reception. Now it is valid.
@@ -3288,22 +3294,41 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp)
 	   router advertisements, start sending router solicitations.
 	 */
 
-	if (ipv6_accept_ra(ifp->idev) &&
-	    ifp->idev->cnf.rtr_solicits > 0 &&
-	    (dev->flags&IFF_LOOPBACK) == 0 &&
-	    (ipv6_addr_type(&ifp->addr) & IPV6_ADDR_LINKLOCAL)) {
+	read_lock_bh(&ifp->idev->lock);
+	spin_lock(&ifp->lock);
+	send_mld = ipv6_addr_type(&ifp->addr) & IPV6_ADDR_LINKLOCAL &&
+		   ifp->idev->valid_ll_addr_cnt == 1;
+	send_rs = send_mld &&
+		  ipv6_accept_ra(ifp->idev) &&
+		  ifp->idev->cnf.rtr_solicits > 0 &&
+		  (dev->flags&IFF_LOOPBACK) == 0;
+	spin_unlock(&ifp->lock);
+	read_unlock_bh(&ifp->idev->lock);
+
+	/* While dad is in progress mld report's source address is in6_addrany.
+	 * Resend with proper ll now.
+	 */
+	if (send_mld)
+		ipv6_mc_dad_complete(ifp->idev);
+
+	if (send_rs) {
 		/*
 		 * If a host as already performed a random delay
 		 * [...] as part of DAD [...] there is no need
 		 * to delay again before sending the first RS
 		 */
-		ndisc_send_rs(ifp->idev->dev, &ifp->addr, &in6addr_linklocal_allrouters);
+		if (ipv6_get_lladdr(dev, &lladdr, IFA_F_TENTATIVE))
+			return;
+		ndisc_send_rs(dev, &lladdr, &in6addr_linklocal_allrouters);
 
-		spin_lock_bh(&ifp->lock);
-		ifp->probes = 1;
+		write_lock_bh(&ifp->idev->lock);
+		spin_lock(&ifp->lock);
+		ifp->idev->rs_probes = 1;
 		ifp->idev->if_flags |= IF_RS_SENT;
-		addrconf_mod_timer(ifp, AC_RS, ifp->idev->cnf.rtr_solicit_interval);
-		spin_unlock_bh(&ifp->lock);
+		addrconf_mod_rs_timer(ifp->idev,
+				      ifp->idev->cnf.rtr_solicit_interval);
+		spin_unlock(&ifp->lock);
+		write_unlock_bh(&ifp->idev->lock);
 	}
 }
 
@@ -3615,18 +3640,20 @@ restart:
 	rcu_read_unlock_bh();
 }
 
-static struct in6_addr *extract_addr(struct nlattr *addr, struct nlattr *local)
+static struct in6_addr *extract_addr(struct nlattr *addr, struct nlattr *local,
+				     struct in6_addr **peer_pfx)
 {
 	struct in6_addr *pfx = NULL;
 
+	*peer_pfx = NULL;
+
 	if (addr)
 		pfx = nla_data(addr);
 
 	if (local) {
 		if (pfx && nla_memcmp(local, pfx, sizeof(*pfx)))
-			pfx = NULL;
-		else
-			pfx = nla_data(local);
+			*peer_pfx = pfx;
+		pfx = nla_data(local);
 	}
 
 	return pfx;
@@ -3644,7 +3671,7 @@ inet6_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh)
 	struct net *net = sock_net(skb->sk);
 	struct ifaddrmsg *ifm;
 	struct nlattr *tb[IFA_MAX+1];
-	struct in6_addr *pfx;
+	struct in6_addr *pfx, *peer_pfx;
 	int err;
 
 	err = nlmsg_parse(nlh, sizeof(*ifm), tb, IFA_MAX, ifa_ipv6_policy);
@@ -3652,7 +3679,7 @@ inet6_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh)
 		return err;
 
 	ifm = nlmsg_data(nlh);
-	pfx = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL]);
+	pfx = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL], &peer_pfx);
 	if (pfx == NULL)
 		return -EINVAL;
 
@@ -3710,7 +3737,7 @@ inet6_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh)
 	struct net *net = sock_net(skb->sk);
 	struct ifaddrmsg *ifm;
 	struct nlattr *tb[IFA_MAX+1];
-	struct in6_addr *pfx;
+	struct in6_addr *pfx, *peer_pfx;
 	struct inet6_ifaddr *ifa;
 	struct net_device *dev;
 	u32 valid_lft = INFINITY_LIFE_TIME, preferred_lft = INFINITY_LIFE_TIME;
@@ -3722,7 +3749,7 @@ inet6_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh)
 		return err;
 
 	ifm = nlmsg_data(nlh);
-	pfx = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL]);
+	pfx = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL], &peer_pfx);
 	if (pfx == NULL)
 		return -EINVAL;
 
@@ -3750,7 +3777,7 @@ inet6_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh)
 		 * It would be best to check for !NLM_F_CREATE here but
 		 * userspace alreay relies on not having to provide this.
 		 */
-		return inet6_addr_add(net, ifm->ifa_index, pfx,
+		return inet6_addr_add(net, ifm->ifa_index, pfx, peer_pfx,
 				      ifm->ifa_prefixlen, ifa_flags,
 				      preferred_lft, valid_lft);
 	}
@@ -3807,6 +3834,7 @@ static inline int rt_scope(int ifa_scope)
 static inline int inet6_ifaddr_msgsize(void)
 {
 	return NLMSG_ALIGN(sizeof(struct ifaddrmsg))
+	       + nla_total_size(16) /* IFA_LOCAL */
 	       + nla_total_size(16) /* IFA_ADDRESS */
 	       + nla_total_size(sizeof(struct ifa_cacheinfo));
 }
@@ -3845,13 +3873,22 @@ static int inet6_fill_ifaddr(struct sk_buff *skb, struct inet6_ifaddr *ifa,
 		valid = INFINITY_LIFE_TIME;
 	}
 
-	if (nla_put(skb, IFA_ADDRESS, 16, &ifa->addr) < 0 ||
-	    put_cacheinfo(skb, ifa->cstamp, ifa->tstamp, preferred, valid) < 0) {
-		nlmsg_cancel(skb, nlh);
-		return -EMSGSIZE;
-	}
+	if (!ipv6_addr_any(&ifa->peer_addr)) {
+		if (nla_put(skb, IFA_LOCAL, 16, &ifa->addr) < 0 ||
+		    nla_put(skb, IFA_ADDRESS, 16, &ifa->peer_addr) < 0)
+			goto error;
+	} else
+		if (nla_put(skb, IFA_ADDRESS, 16, &ifa->addr) < 0)
+			goto error;
+
+	if (put_cacheinfo(skb, ifa->cstamp, ifa->tstamp, preferred, valid) < 0)
+		goto error;
 
 	return nlmsg_end(skb, nlh);
+
+error:
+	nlmsg_cancel(skb, nlh);
+	return -EMSGSIZE;
 }
 
 static int inet6_fill_ifmcaddr(struct sk_buff *skb, struct ifmcaddr6 *ifmca,
@@ -4051,7 +4088,7 @@ static int inet6_rtm_getaddr(struct sk_buff *in_skb, struct nlmsghdr *nlh)
 	struct net *net = sock_net(in_skb->sk);
 	struct ifaddrmsg *ifm;
 	struct nlattr *tb[IFA_MAX+1];
-	struct in6_addr *addr = NULL;
+	struct in6_addr *addr = NULL, *peer;
 	struct net_device *dev = NULL;
 	struct inet6_ifaddr *ifa;
 	struct sk_buff *skb;
@@ -4061,7 +4098,7 @@ static int inet6_rtm_getaddr(struct sk_buff *in_skb, struct nlmsghdr *nlh)
 	if (err < 0)
 		goto errout;
 
-	addr = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL]);
+	addr = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL], &peer);
 	if (addr == NULL) {
 		err = -EINVAL;
 		goto errout;
@@ -4339,8 +4376,11 @@ static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token)
 
 	write_lock_bh(&idev->lock);
 
-	if (update_rs)
+	if (update_rs) {
 		idev->if_flags |= IF_RS_SENT;
+		idev->rs_probes = 1;
+		addrconf_mod_rs_timer(idev, idev->cnf.rtr_solicit_interval);
+	}
 
 	/* Well, that's kinda nasty ... */
 	list_for_each_entry(ifp, &idev->addr_list, if_list) {
@@ -4353,6 +4393,7 @@ static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token)
 	}
 
 	write_unlock_bh(&idev->lock);
+	addrconf_verify(0);
 	return 0;
 }
 
@@ -4550,6 +4591,19 @@ errout:
 	rtnl_set_sk_err(net, RTNLGRP_IPV6_PREFIX, err);
 }
 
+static void update_valid_ll_addr_cnt(struct inet6_ifaddr *ifp, int count)
+{
+	write_lock_bh(&ifp->idev->lock);
+	spin_lock(&ifp->lock);
+	if (((ifp->flags & (IFA_F_PERMANENT|IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|
+			    IFA_F_DADFAILED)) == IFA_F_PERMANENT) &&
+	    (ipv6_addr_type(&ifp->addr) & IPV6_ADDR_LINKLOCAL))
+		ifp->idev->valid_ll_addr_cnt += count;
+	WARN_ON(ifp->idev->valid_ll_addr_cnt < 0);
+	spin_unlock(&ifp->lock);
+	write_unlock_bh(&ifp->idev->lock);
+}
+
 static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
 {
 	struct net *net = dev_net(ifp->idev->dev);
@@ -4558,6 +4612,8 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
4558 4612
4559 switch (event) { 4613 switch (event) {
4560 case RTM_NEWADDR: 4614 case RTM_NEWADDR:
4615 update_valid_ll_addr_cnt(ifp, 1);
4616
4561 /* 4617 /*
4562 * If the address was optimistic 4618 * If the address was optimistic
4563 * we inserted the route at the start of 4619 * we inserted the route at the start of
@@ -4568,11 +4624,28 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
4568 ip6_ins_rt(ifp->rt); 4624 ip6_ins_rt(ifp->rt);
4569 if (ifp->idev->cnf.forwarding) 4625 if (ifp->idev->cnf.forwarding)
4570 addrconf_join_anycast(ifp); 4626 addrconf_join_anycast(ifp);
4627 if (!ipv6_addr_any(&ifp->peer_addr))
4628 addrconf_prefix_route(&ifp->peer_addr, 128,
4629 ifp->idev->dev, 0, 0);
4571 break; 4630 break;
4572 case RTM_DELADDR: 4631 case RTM_DELADDR:
4632 update_valid_ll_addr_cnt(ifp, -1);
4633
4573 if (ifp->idev->cnf.forwarding) 4634 if (ifp->idev->cnf.forwarding)
4574 addrconf_leave_anycast(ifp); 4635 addrconf_leave_anycast(ifp);
4575 addrconf_leave_solict(ifp->idev, &ifp->addr); 4636 addrconf_leave_solict(ifp->idev, &ifp->addr);
4637 if (!ipv6_addr_any(&ifp->peer_addr)) {
4638 struct rt6_info *rt;
4639 struct net_device *dev = ifp->idev->dev;
4640
4641 rt = rt6_lookup(dev_net(dev), &ifp->peer_addr, NULL,
4642 dev->ifindex, 1);
4643 if (rt) {
4644 dst_hold(&rt->dst);
4645 if (ip6_del_rt(rt))
4646 dst_free(&rt->dst);
4647 }
4648 }
4576 dst_hold(&ifp->rt->dst); 4649 dst_hold(&ifp->rt->dst);
4577 4650
4578 if (ip6_del_rt(ifp->rt)) 4651 if (ip6_del_rt(ifp->rt))
@@ -4593,13 +4666,13 @@ static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
4593#ifdef CONFIG_SYSCTL 4666#ifdef CONFIG_SYSCTL
4594 4667
4595static 4668static
4596int addrconf_sysctl_forward(ctl_table *ctl, int write, 4669int addrconf_sysctl_forward(struct ctl_table *ctl, int write,
4597 void __user *buffer, size_t *lenp, loff_t *ppos) 4670 void __user *buffer, size_t *lenp, loff_t *ppos)
4598{ 4671{
4599 int *valp = ctl->data; 4672 int *valp = ctl->data;
4600 int val = *valp; 4673 int val = *valp;
4601 loff_t pos = *ppos; 4674 loff_t pos = *ppos;
4602 ctl_table lctl; 4675 struct ctl_table lctl;
4603 int ret; 4676 int ret;
4604 4677
4605 /* 4678 /*
@@ -4620,13 +4693,16 @@ int addrconf_sysctl_forward(ctl_table *ctl, int write,
4620 4693
4621static void dev_disable_change(struct inet6_dev *idev) 4694static void dev_disable_change(struct inet6_dev *idev)
4622{ 4695{
4696 struct netdev_notifier_info info;
4697
4623 if (!idev || !idev->dev) 4698 if (!idev || !idev->dev)
4624 return; 4699 return;
4625 4700
4701 netdev_notifier_info_init(&info, idev->dev);
4626 if (idev->cnf.disable_ipv6) 4702 if (idev->cnf.disable_ipv6)
4627 addrconf_notify(NULL, NETDEV_DOWN, idev->dev); 4703 addrconf_notify(NULL, NETDEV_DOWN, &info);
4628 else 4704 else
4629 addrconf_notify(NULL, NETDEV_UP, idev->dev); 4705 addrconf_notify(NULL, NETDEV_UP, &info);
4630} 4706}
4631 4707
4632static void addrconf_disable_change(struct net *net, __s32 newf) 4708static void addrconf_disable_change(struct net *net, __s32 newf)
@@ -4675,13 +4751,13 @@ static int addrconf_disable_ipv6(struct ctl_table *table, int *p, int newf)
4675} 4751}
4676 4752
4677static 4753static
4678int addrconf_sysctl_disable(ctl_table *ctl, int write, 4754int addrconf_sysctl_disable(struct ctl_table *ctl, int write,
4679 void __user *buffer, size_t *lenp, loff_t *ppos) 4755 void __user *buffer, size_t *lenp, loff_t *ppos)
4680{ 4756{
4681 int *valp = ctl->data; 4757 int *valp = ctl->data;
4682 int val = *valp; 4758 int val = *valp;
4683 loff_t pos = *ppos; 4759 loff_t pos = *ppos;
4684 ctl_table lctl; 4760 struct ctl_table lctl;
4685 int ret; 4761 int ret;
4686 4762
4687 /* 4763 /*
@@ -4703,7 +4779,7 @@ int addrconf_sysctl_disable(ctl_table *ctl, int write,
4703static struct addrconf_sysctl_table 4779static struct addrconf_sysctl_table
4704{ 4780{
4705 struct ctl_table_header *sysctl_header; 4781 struct ctl_table_header *sysctl_header;
4706 ctl_table addrconf_vars[DEVCONF_MAX+1]; 4782 struct ctl_table addrconf_vars[DEVCONF_MAX+1];
4707} addrconf_sysctl __read_mostly = { 4783} addrconf_sysctl __read_mostly = {
4708 .sysctl_header = NULL, 4784 .sysctl_header = NULL,
4709 .addrconf_vars = { 4785 .addrconf_vars = {
diff --git a/net/ipv6/addrconf_core.c b/net/ipv6/addrconf_core.c
index 72104562c864..d2f87427244b 100644
--- a/net/ipv6/addrconf_core.c
+++ b/net/ipv6/addrconf_core.c
@@ -5,6 +5,7 @@
5 5
6#include <linux/export.h> 6#include <linux/export.h>
7#include <net/ipv6.h> 7#include <net/ipv6.h>
8#include <net/addrconf.h>
8 9
9#define IPV6_ADDR_SCOPE_TYPE(scope) ((scope) << 16) 10#define IPV6_ADDR_SCOPE_TYPE(scope) ((scope) << 16)
10 11
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index ab5c7ad482cd..a5ac969aeefe 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -49,6 +49,7 @@
49#include <net/udp.h> 49#include <net/udp.h>
50#include <net/udplite.h> 50#include <net/udplite.h>
51#include <net/tcp.h> 51#include <net/tcp.h>
52#include <net/ping.h>
52#include <net/protocol.h> 53#include <net/protocol.h>
53#include <net/inet_common.h> 54#include <net/inet_common.h>
54#include <net/route.h> 55#include <net/route.h>
@@ -840,6 +841,9 @@ static int __init inet6_init(void)
840 if (err) 841 if (err)
841 goto out_unregister_udplite_proto; 842 goto out_unregister_udplite_proto;
842 843
844 err = proto_register(&pingv6_prot, 1);
845 if (err)
846 goto out_unregister_ping_proto;
843 847
844 /* We MUST register RAW sockets before we create the ICMP6, 848 /* We MUST register RAW sockets before we create the ICMP6,
845 * IGMP6, or NDISC control sockets. 849 * IGMP6, or NDISC control sockets.
@@ -930,6 +934,10 @@ static int __init inet6_init(void)
930 if (err) 934 if (err)
931 goto ipv6_packet_fail; 935 goto ipv6_packet_fail;
932 936
937 err = pingv6_init();
938 if (err)
939 goto pingv6_fail;
940
933#ifdef CONFIG_SYSCTL 941#ifdef CONFIG_SYSCTL
934 err = ipv6_sysctl_register(); 942 err = ipv6_sysctl_register();
935 if (err) 943 if (err)
@@ -942,6 +950,8 @@ out:
942sysctl_fail: 950sysctl_fail:
943 ipv6_packet_cleanup(); 951 ipv6_packet_cleanup();
944#endif 952#endif
953pingv6_fail:
954 pingv6_exit();
945ipv6_packet_fail: 955ipv6_packet_fail:
946 tcpv6_exit(); 956 tcpv6_exit();
947tcpv6_fail: 957tcpv6_fail:
@@ -985,6 +995,8 @@ register_pernet_fail:
985 rtnl_unregister_all(PF_INET6); 995 rtnl_unregister_all(PF_INET6);
986out_sock_register_fail: 996out_sock_register_fail:
987 rawv6_exit(); 997 rawv6_exit();
998out_unregister_ping_proto:
999 proto_unregister(&pingv6_prot);
988out_unregister_raw_proto: 1000out_unregister_raw_proto:
989 proto_unregister(&rawv6_prot); 1001 proto_unregister(&rawv6_prot);
990out_unregister_udplite_proto: 1002out_unregister_udplite_proto:
diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
index 4b56cbbc7890..197e6f4a2b74 100644
--- a/net/ipv6/datagram.c
+++ b/net/ipv6/datagram.c
@@ -879,3 +879,30 @@ exit_f:
879 return err; 879 return err;
880} 880}
881EXPORT_SYMBOL_GPL(ip6_datagram_send_ctl); 881EXPORT_SYMBOL_GPL(ip6_datagram_send_ctl);
882
883void ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
884 __u16 srcp, __u16 destp, int bucket)
885{
886 struct ipv6_pinfo *np = inet6_sk(sp);
887 const struct in6_addr *dest, *src;
888
889 dest = &np->daddr;
890 src = &np->rcv_saddr;
891 seq_printf(seq,
892 "%5d: %08X%08X%08X%08X:%04X %08X%08X%08X%08X:%04X "
893 "%02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %pK %d\n",
894 bucket,
895 src->s6_addr32[0], src->s6_addr32[1],
896 src->s6_addr32[2], src->s6_addr32[3], srcp,
897 dest->s6_addr32[0], dest->s6_addr32[1],
898 dest->s6_addr32[2], dest->s6_addr32[3], destp,
899 sp->sk_state,
900 sk_wmem_alloc_get(sp),
901 sk_rmem_alloc_get(sp),
902 0, 0L, 0,
903 from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)),
904 0,
905 sock_i_ino(sp),
906 atomic_read(&sp->sk_refcnt), sp,
907 atomic_read(&sp->sk_drops));
908}
diff --git a/net/ipv6/exthdrs_core.c b/net/ipv6/exthdrs_core.c
index c5e83fae4df4..140748debc4a 100644
--- a/net/ipv6/exthdrs_core.c
+++ b/net/ipv6/exthdrs_core.c
@@ -115,7 +115,7 @@ EXPORT_SYMBOL(ipv6_skip_exthdr);
115int ipv6_find_tlv(struct sk_buff *skb, int offset, int type) 115int ipv6_find_tlv(struct sk_buff *skb, int offset, int type)
116{ 116{
117 const unsigned char *nh = skb_network_header(skb); 117 const unsigned char *nh = skb_network_header(skb);
118 int packet_len = skb->tail - skb->network_header; 118 int packet_len = skb_tail_pointer(skb) - skb_network_header(skb);
119 struct ipv6_opt_hdr *hdr; 119 struct ipv6_opt_hdr *hdr;
120 int len; 120 int len;
121 121
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
index b4ff0a42b8c7..7cfc8d284870 100644
--- a/net/ipv6/icmp.c
+++ b/net/ipv6/icmp.c
@@ -57,6 +57,7 @@
57 57
58#include <net/ipv6.h> 58#include <net/ipv6.h>
59#include <net/ip6_checksum.h> 59#include <net/ip6_checksum.h>
60#include <net/ping.h>
60#include <net/protocol.h> 61#include <net/protocol.h>
61#include <net/raw.h> 62#include <net/raw.h>
62#include <net/rawv6.h> 63#include <net/rawv6.h>
@@ -84,12 +85,18 @@ static inline struct sock *icmpv6_sk(struct net *net)
84static void icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, 85static void icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
85 u8 type, u8 code, int offset, __be32 info) 86 u8 type, u8 code, int offset, __be32 info)
86{ 87{
88 /* icmpv6_notify checks 8 bytes can be pulled, icmp6hdr is 8 bytes */
89 struct icmp6hdr *icmp6 = (struct icmp6hdr *) (skb->data + offset);
87 struct net *net = dev_net(skb->dev); 90 struct net *net = dev_net(skb->dev);
88 91
89 if (type == ICMPV6_PKT_TOOBIG) 92 if (type == ICMPV6_PKT_TOOBIG)
90 ip6_update_pmtu(skb, net, info, 0, 0); 93 ip6_update_pmtu(skb, net, info, 0, 0);
91 else if (type == NDISC_REDIRECT) 94 else if (type == NDISC_REDIRECT)
92 ip6_redirect(skb, net, 0, 0); 95 ip6_redirect(skb, net, 0, 0);
96
97 if (!(type & ICMPV6_INFOMSG_MASK))
98 if (icmp6->icmp6_type == ICMPV6_ECHO_REQUEST)
99 ping_err(skb, offset, info);
93} 100}
94 101
95static int icmpv6_rcv(struct sk_buff *skb); 102static int icmpv6_rcv(struct sk_buff *skb);
@@ -224,7 +231,8 @@ static bool opt_unrec(struct sk_buff *skb, __u32 offset)
224 return (*op & 0xC0) == 0x80; 231 return (*op & 0xC0) == 0x80;
225} 232}
226 233
227static int icmpv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6, struct icmp6hdr *thdr, int len) 234int icmpv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6,
235 struct icmp6hdr *thdr, int len)
228{ 236{
229 struct sk_buff *skb; 237 struct sk_buff *skb;
230 struct icmp6hdr *icmp6h; 238 struct icmp6hdr *icmp6h;
@@ -307,8 +315,8 @@ static void mip6_addr_swap(struct sk_buff *skb)
307static inline void mip6_addr_swap(struct sk_buff *skb) {} 315static inline void mip6_addr_swap(struct sk_buff *skb) {}
308#endif 316#endif
309 317
310static struct dst_entry *icmpv6_route_lookup(struct net *net, struct sk_buff *skb, 318struct dst_entry *icmpv6_route_lookup(struct net *net, struct sk_buff *skb,
311 struct sock *sk, struct flowi6 *fl6) 319 struct sock *sk, struct flowi6 *fl6)
312{ 320{
313 struct dst_entry *dst, *dst2; 321 struct dst_entry *dst, *dst2;
314 struct flowi6 fl2; 322 struct flowi6 fl2;
@@ -391,7 +399,7 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
391 int err = 0; 399 int err = 0;
392 400
393 if ((u8 *)hdr < skb->head || 401 if ((u8 *)hdr < skb->head ||
394 (skb->network_header + sizeof(*hdr)) > skb->tail) 402 (skb_network_header(skb) + sizeof(*hdr)) > skb_tail_pointer(skb))
395 return; 403 return;
396 404
397 /* 405 /*
@@ -697,7 +705,8 @@ static int icmpv6_rcv(struct sk_buff *skb)
697 skb->csum = ~csum_unfold(csum_ipv6_magic(saddr, daddr, skb->len, 705 skb->csum = ~csum_unfold(csum_ipv6_magic(saddr, daddr, skb->len,
698 IPPROTO_ICMPV6, 0)); 706 IPPROTO_ICMPV6, 0));
699 if (__skb_checksum_complete(skb)) { 707 if (__skb_checksum_complete(skb)) {
700 LIMIT_NETDEBUG(KERN_DEBUG "ICMPv6 checksum failed [%pI6 > %pI6]\n", 708 LIMIT_NETDEBUG(KERN_DEBUG
709 "ICMPv6 checksum failed [%pI6c > %pI6c]\n",
701 saddr, daddr); 710 saddr, daddr);
702 goto csum_error; 711 goto csum_error;
703 } 712 }
@@ -718,7 +727,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
718 break; 727 break;
719 728
720 case ICMPV6_ECHO_REPLY: 729 case ICMPV6_ECHO_REPLY:
721 /* we couldn't care less */ 730 ping_rcv(skb);
722 break; 731 break;
723 732
724 case ICMPV6_PKT_TOOBIG: 733 case ICMPV6_PKT_TOOBIG:
@@ -967,7 +976,7 @@ int icmpv6_err_convert(u8 type, u8 code, int *err)
967EXPORT_SYMBOL(icmpv6_err_convert); 976EXPORT_SYMBOL(icmpv6_err_convert);
968 977
969#ifdef CONFIG_SYSCTL 978#ifdef CONFIG_SYSCTL
970ctl_table ipv6_icmp_table_template[] = { 979struct ctl_table ipv6_icmp_table_template[] = {
971 { 980 {
972 .procname = "ratelimit", 981 .procname = "ratelimit",
973 .data = &init_net.ipv6.sysctl.icmpv6_time, 982 .data = &init_net.ipv6.sysctl.icmpv6_time,
diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
index 71b766ee821d..a263b990ee11 100644
--- a/net/ipv6/ip6_offload.c
+++ b/net/ipv6/ip6_offload.c
@@ -98,6 +98,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
98 SKB_GSO_TCP_ECN | 98 SKB_GSO_TCP_ECN |
99 SKB_GSO_GRE | 99 SKB_GSO_GRE |
100 SKB_GSO_UDP_TUNNEL | 100 SKB_GSO_UDP_TUNNEL |
101 SKB_GSO_MPLS |
101 SKB_GSO_TCPV6 | 102 SKB_GSO_TCPV6 |
102 0))) 103 0)))
103 goto out; 104 goto out;
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index d5d20cde8d92..6e3ddf806ec2 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -1098,11 +1098,12 @@ static inline struct ipv6_rt_hdr *ip6_rthdr_dup(struct ipv6_rt_hdr *src,
1098 return src ? kmemdup(src, (src->hdrlen + 1) * 8, gfp) : NULL; 1098 return src ? kmemdup(src, (src->hdrlen + 1) * 8, gfp) : NULL;
1099} 1099}
1100 1100
1101static void ip6_append_data_mtu(int *mtu, 1101static void ip6_append_data_mtu(unsigned int *mtu,
1102 int *maxfraglen, 1102 int *maxfraglen,
1103 unsigned int fragheaderlen, 1103 unsigned int fragheaderlen,
1104 struct sk_buff *skb, 1104 struct sk_buff *skb,
1105 struct rt6_info *rt) 1105 struct rt6_info *rt,
1106 bool pmtuprobe)
1106{ 1107{
1107 if (!(rt->dst.flags & DST_XFRM_TUNNEL)) { 1108 if (!(rt->dst.flags & DST_XFRM_TUNNEL)) {
1108 if (skb == NULL) { 1109 if (skb == NULL) {
@@ -1114,7 +1115,9 @@ static void ip6_append_data_mtu(int *mtu,
1114 * this fragment is not first, the headers 1115 * this fragment is not first, the headers
1115 * space is regarded as data space. 1116 * space is regarded as data space.
1116 */ 1117 */
1117 *mtu = dst_mtu(rt->dst.path); 1118 *mtu = min(*mtu, pmtuprobe ?
1119 rt->dst.dev->mtu :
1120 dst_mtu(rt->dst.path));
1118 } 1121 }
1119 *maxfraglen = ((*mtu - fragheaderlen) & ~7) 1122 *maxfraglen = ((*mtu - fragheaderlen) & ~7)
1120 + fragheaderlen - sizeof(struct frag_hdr); 1123 + fragheaderlen - sizeof(struct frag_hdr);
@@ -1131,11 +1134,10 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to,
1131 struct ipv6_pinfo *np = inet6_sk(sk); 1134 struct ipv6_pinfo *np = inet6_sk(sk);
1132 struct inet_cork *cork; 1135 struct inet_cork *cork;
1133 struct sk_buff *skb, *skb_prev = NULL; 1136 struct sk_buff *skb, *skb_prev = NULL;
1134 unsigned int maxfraglen, fragheaderlen; 1137 unsigned int maxfraglen, fragheaderlen, mtu;
1135 int exthdrlen; 1138 int exthdrlen;
1136 int dst_exthdrlen; 1139 int dst_exthdrlen;
1137 int hh_len; 1140 int hh_len;
1138 int mtu;
1139 int copy; 1141 int copy;
1140 int err; 1142 int err;
1141 int offset = 0; 1143 int offset = 0;
@@ -1292,7 +1294,9 @@ alloc_new_skb:
1292 /* update mtu and maxfraglen if necessary */ 1294 /* update mtu and maxfraglen if necessary */
1293 if (skb == NULL || skb_prev == NULL) 1295 if (skb == NULL || skb_prev == NULL)
1294 ip6_append_data_mtu(&mtu, &maxfraglen, 1296 ip6_append_data_mtu(&mtu, &maxfraglen,
1295 fragheaderlen, skb, rt); 1297 fragheaderlen, skb, rt,
1298 np->pmtudisc ==
1299 IPV6_PMTUDISC_PROBE);
1296 1300
1297 skb_prev = skb; 1301 skb_prev = skb;
1298 1302
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
index 241fb8ad9fcf..583e8d435f9a 100644
--- a/net/ipv6/ip6mr.c
+++ b/net/ipv6/ip6mr.c
@@ -1319,7 +1319,7 @@ static int ip6mr_mfc_delete(struct mr6_table *mrt, struct mf6cctl *mfc,
1319static int ip6mr_device_event(struct notifier_block *this, 1319static int ip6mr_device_event(struct notifier_block *this,
1320 unsigned long event, void *ptr) 1320 unsigned long event, void *ptr)
1321{ 1321{
1322 struct net_device *dev = ptr; 1322 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
1323 struct net *net = dev_net(dev); 1323 struct net *net = dev_net(dev);
1324 struct mr6_table *mrt; 1324 struct mr6_table *mrt;
1325 struct mif_device *v; 1325 struct mif_device *v;
diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
index bfa6cc36ef2a..99cd65c715cd 100644
--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -999,6 +999,14 @@ static void mld_ifc_start_timer(struct inet6_dev *idev, int delay)
999 in6_dev_hold(idev); 999 in6_dev_hold(idev);
1000} 1000}
1001 1001
1002static void mld_dad_start_timer(struct inet6_dev *idev, int delay)
1003{
1004 int tv = net_random() % delay;
1005
1006 if (!mod_timer(&idev->mc_dad_timer, jiffies+tv+2))
1007 in6_dev_hold(idev);
1008}
1009
1002/* 1010/*
1003 * IGMP handling (alias multicast ICMPv6 messages) 1011 * IGMP handling (alias multicast ICMPv6 messages)
1004 */ 1012 */
@@ -1343,8 +1351,9 @@ static void ip6_mc_hdr(struct sock *sk, struct sk_buff *skb,
1343 hdr->daddr = *daddr; 1351 hdr->daddr = *daddr;
1344} 1352}
1345 1353
1346static struct sk_buff *mld_newpack(struct net_device *dev, int size) 1354static struct sk_buff *mld_newpack(struct inet6_dev *idev, int size)
1347{ 1355{
1356 struct net_device *dev = idev->dev;
1348 struct net *net = dev_net(dev); 1357 struct net *net = dev_net(dev);
1349 struct sock *sk = net->ipv6.igmp_sk; 1358 struct sock *sk = net->ipv6.igmp_sk;
1350 struct sk_buff *skb; 1359 struct sk_buff *skb;
@@ -1369,7 +1378,7 @@ static struct sk_buff *mld_newpack(struct net_device *dev, int size)
1369 1378
1370 skb_reserve(skb, hlen); 1379 skb_reserve(skb, hlen);
1371 1380
1372 if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) { 1381 if (__ipv6_get_lladdr(idev, &addr_buf, IFA_F_TENTATIVE)) {
1373 /* <draft-ietf-magma-mld-source-05.txt>: 1382 /* <draft-ietf-magma-mld-source-05.txt>:
1374 * use unspecified address as the source address 1383 * use unspecified address as the source address
1375 * when a valid link-local address is not available. 1384 * when a valid link-local address is not available.
@@ -1409,8 +1418,9 @@ static void mld_sendpack(struct sk_buff *skb)
1409 idev = __in6_dev_get(skb->dev); 1418 idev = __in6_dev_get(skb->dev);
1410 IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len); 1419 IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len);
1411 1420
1412 payload_len = (skb->tail - skb->network_header) - sizeof(*pip6); 1421 payload_len = (skb_tail_pointer(skb) - skb_network_header(skb)) -
1413 mldlen = skb->tail - skb->transport_header; 1422 sizeof(*pip6);
1423 mldlen = skb_tail_pointer(skb) - skb_transport_header(skb);
1414 pip6->payload_len = htons(payload_len); 1424 pip6->payload_len = htons(payload_len);
1415 1425
1416 pmr->mld2r_cksum = csum_ipv6_magic(&pip6->saddr, &pip6->daddr, mldlen, 1426 pmr->mld2r_cksum = csum_ipv6_magic(&pip6->saddr, &pip6->daddr, mldlen,
@@ -1465,7 +1475,7 @@ static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc,
1465 struct mld2_grec *pgr; 1475 struct mld2_grec *pgr;
1466 1476
1467 if (!skb) 1477 if (!skb)
1468 skb = mld_newpack(dev, dev->mtu); 1478 skb = mld_newpack(pmc->idev, dev->mtu);
1469 if (!skb) 1479 if (!skb)
1470 return NULL; 1480 return NULL;
1471 pgr = (struct mld2_grec *)skb_put(skb, sizeof(struct mld2_grec)); 1481 pgr = (struct mld2_grec *)skb_put(skb, sizeof(struct mld2_grec));
@@ -1485,7 +1495,8 @@ static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc,
1485static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, 1495static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc,
1486 int type, int gdeleted, int sdeleted) 1496 int type, int gdeleted, int sdeleted)
1487{ 1497{
1488 struct net_device *dev = pmc->idev->dev; 1498 struct inet6_dev *idev = pmc->idev;
1499 struct net_device *dev = idev->dev;
1489 struct mld2_report *pmr; 1500 struct mld2_report *pmr;
1490 struct mld2_grec *pgr = NULL; 1501 struct mld2_grec *pgr = NULL;
1491 struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list; 1502 struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list;
@@ -1514,7 +1525,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc,
1514 AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) { 1525 AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) {
1515 if (skb) 1526 if (skb)
1516 mld_sendpack(skb); 1527 mld_sendpack(skb);
1517 skb = mld_newpack(dev, dev->mtu); 1528 skb = mld_newpack(idev, dev->mtu);
1518 } 1529 }
1519 } 1530 }
1520 first = 1; 1531 first = 1;
@@ -1541,7 +1552,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc,
1541 pgr->grec_nsrcs = htons(scount); 1552 pgr->grec_nsrcs = htons(scount);
1542 if (skb) 1553 if (skb)
1543 mld_sendpack(skb); 1554 mld_sendpack(skb);
1544 skb = mld_newpack(dev, dev->mtu); 1555 skb = mld_newpack(idev, dev->mtu);
1545 first = 1; 1556 first = 1;
1546 scount = 0; 1557 scount = 0;
1547 } 1558 }
@@ -1596,8 +1607,8 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *pmc)
1596 struct sk_buff *skb = NULL; 1607 struct sk_buff *skb = NULL;
1597 int type; 1608 int type;
1598 1609
1610 read_lock_bh(&idev->lock);
1599 if (!pmc) { 1611 if (!pmc) {
1600 read_lock_bh(&idev->lock);
1601 for (pmc=idev->mc_list; pmc; pmc=pmc->next) { 1612 for (pmc=idev->mc_list; pmc; pmc=pmc->next) {
1602 if (pmc->mca_flags & MAF_NOREPORT) 1613 if (pmc->mca_flags & MAF_NOREPORT)
1603 continue; 1614 continue;
@@ -1609,7 +1620,6 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *pmc)
1609 skb = add_grec(skb, pmc, type, 0, 0); 1620 skb = add_grec(skb, pmc, type, 0, 0);
1610 spin_unlock_bh(&pmc->mca_lock); 1621 spin_unlock_bh(&pmc->mca_lock);
1611 } 1622 }
1612 read_unlock_bh(&idev->lock);
1613 } else { 1623 } else {
1614 spin_lock_bh(&pmc->mca_lock); 1624 spin_lock_bh(&pmc->mca_lock);
1615 if (pmc->mca_sfcount[MCAST_EXCLUDE]) 1625 if (pmc->mca_sfcount[MCAST_EXCLUDE])
@@ -1619,6 +1629,7 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *pmc)
1619 skb = add_grec(skb, pmc, type, 0, 0); 1629 skb = add_grec(skb, pmc, type, 0, 0);
1620 spin_unlock_bh(&pmc->mca_lock); 1630 spin_unlock_bh(&pmc->mca_lock);
1621 } 1631 }
1632 read_unlock_bh(&idev->lock);
1622 if (skb) 1633 if (skb)
1623 mld_sendpack(skb); 1634 mld_sendpack(skb);
1624} 1635}
@@ -1814,6 +1825,46 @@ err_out:
1814 goto out; 1825 goto out;
1815} 1826}
1816 1827
1828static void mld_resend_report(struct inet6_dev *idev)
1829{
1830 if (MLD_V1_SEEN(idev)) {
1831 struct ifmcaddr6 *mcaddr;
1832 read_lock_bh(&idev->lock);
1833 for (mcaddr = idev->mc_list; mcaddr; mcaddr = mcaddr->next) {
1834 if (!(mcaddr->mca_flags & MAF_NOREPORT))
1835 igmp6_send(&mcaddr->mca_addr, idev->dev,
1836 ICMPV6_MGM_REPORT);
1837 }
1838 read_unlock_bh(&idev->lock);
1839 } else {
1840 mld_send_report(idev, NULL);
1841 }
1842}
1843
1844void ipv6_mc_dad_complete(struct inet6_dev *idev)
1845{
1846 idev->mc_dad_count = idev->mc_qrv;
1847 if (idev->mc_dad_count) {
1848 mld_resend_report(idev);
1849 idev->mc_dad_count--;
1850 if (idev->mc_dad_count)
1851 mld_dad_start_timer(idev, idev->mc_maxdelay);
1852 }
1853}
1854
1855static void mld_dad_timer_expire(unsigned long data)
1856{
1857 struct inet6_dev *idev = (struct inet6_dev *)data;
1858
1859 mld_resend_report(idev);
1860 if (idev->mc_dad_count) {
1861 idev->mc_dad_count--;
1862 if (idev->mc_dad_count)
1863 mld_dad_start_timer(idev, idev->mc_maxdelay);
1864 }
1865 __in6_dev_put(idev);
1866}
1867
1817static int ip6_mc_del1_src(struct ifmcaddr6 *pmc, int sfmode, 1868static int ip6_mc_del1_src(struct ifmcaddr6 *pmc, int sfmode,
1818 const struct in6_addr *psfsrc) 1869 const struct in6_addr *psfsrc)
1819{ 1870{
@@ -2231,6 +2282,8 @@ void ipv6_mc_down(struct inet6_dev *idev)
2231 idev->mc_gq_running = 0; 2282 idev->mc_gq_running = 0;
2232 if (del_timer(&idev->mc_gq_timer)) 2283 if (del_timer(&idev->mc_gq_timer))
2233 __in6_dev_put(idev); 2284 __in6_dev_put(idev);
2285 if (del_timer(&idev->mc_dad_timer))
2286 __in6_dev_put(idev);
2234 2287
2235 for (i = idev->mc_list; i; i=i->next) 2288 for (i = idev->mc_list; i; i=i->next)
2236 igmp6_group_dropped(i); 2289 igmp6_group_dropped(i);
@@ -2267,6 +2320,8 @@ void ipv6_mc_init_dev(struct inet6_dev *idev)
2267 idev->mc_ifc_count = 0; 2320 idev->mc_ifc_count = 0;
2268 setup_timer(&idev->mc_ifc_timer, mld_ifc_timer_expire, 2321 setup_timer(&idev->mc_ifc_timer, mld_ifc_timer_expire,
2269 (unsigned long)idev); 2322 (unsigned long)idev);
2323 setup_timer(&idev->mc_dad_timer, mld_dad_timer_expire,
2324 (unsigned long)idev);
2270 idev->mc_qrv = MLD_QRV_DEFAULT; 2325 idev->mc_qrv = MLD_QRV_DEFAULT;
2271 idev->mc_maxdelay = IGMP6_UNSOLICITED_IVAL; 2326 idev->mc_maxdelay = IGMP6_UNSOLICITED_IVAL;
2272 idev->mc_v1_seen = 0; 2327 idev->mc_v1_seen = 0;
diff --git a/net/ipv6/mip6.c b/net/ipv6/mip6.c
index 0f9bdc5ee9f3..9ac01dc9402e 100644
--- a/net/ipv6/mip6.c
+++ b/net/ipv6/mip6.c
@@ -268,7 +268,8 @@ static int mip6_destopt_offset(struct xfrm_state *x, struct sk_buff *skb,
268 struct ipv6_opt_hdr *exthdr = 268 struct ipv6_opt_hdr *exthdr =
269 (struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1); 269 (struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1);
270 const unsigned char *nh = skb_network_header(skb); 270 const unsigned char *nh = skb_network_header(skb);
271 unsigned int packet_len = skb->tail - skb->network_header; 271 unsigned int packet_len = skb_tail_pointer(skb) -
272 skb_network_header(skb);
272 int found_rhdr = 0; 273 int found_rhdr = 0;
273 274
274 *nexthdr = &ipv6_hdr(skb)->nexthdr; 275 *nexthdr = &ipv6_hdr(skb)->nexthdr;
@@ -404,7 +405,8 @@ static int mip6_rthdr_offset(struct xfrm_state *x, struct sk_buff *skb,
404 struct ipv6_opt_hdr *exthdr = 405 struct ipv6_opt_hdr *exthdr =
405 (struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1); 406 (struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1);
406 const unsigned char *nh = skb_network_header(skb); 407 const unsigned char *nh = skb_network_header(skb);
407 unsigned int packet_len = skb->tail - skb->network_header; 408 unsigned int packet_len = skb_tail_pointer(skb) -
409 skb_network_header(skb);
408 int found_rhdr = 0; 410 int found_rhdr = 0;
409 411
410 *nexthdr = &ipv6_hdr(skb)->nexthdr; 412 *nexthdr = &ipv6_hdr(skb)->nexthdr;
diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
index ca4ffcc287f1..b3b5730b48c5 100644
--- a/net/ipv6/ndisc.c
+++ b/net/ipv6/ndisc.c
@@ -693,7 +693,7 @@ static void ndisc_recv_ns(struct sk_buff *skb)
693 const struct in6_addr *saddr = &ipv6_hdr(skb)->saddr; 693 const struct in6_addr *saddr = &ipv6_hdr(skb)->saddr;
694 const struct in6_addr *daddr = &ipv6_hdr(skb)->daddr; 694 const struct in6_addr *daddr = &ipv6_hdr(skb)->daddr;
695 u8 *lladdr = NULL; 695 u8 *lladdr = NULL;
696 u32 ndoptlen = skb->tail - (skb->transport_header + 696 u32 ndoptlen = skb_tail_pointer(skb) - (skb_transport_header(skb) +
697 offsetof(struct nd_msg, opt)); 697 offsetof(struct nd_msg, opt));
698 struct ndisc_options ndopts; 698 struct ndisc_options ndopts;
699 struct net_device *dev = skb->dev; 699 struct net_device *dev = skb->dev;
@@ -853,7 +853,7 @@ static void ndisc_recv_na(struct sk_buff *skb)
853 const struct in6_addr *saddr = &ipv6_hdr(skb)->saddr; 853 const struct in6_addr *saddr = &ipv6_hdr(skb)->saddr;
854 const struct in6_addr *daddr = &ipv6_hdr(skb)->daddr; 854 const struct in6_addr *daddr = &ipv6_hdr(skb)->daddr;
855 u8 *lladdr = NULL; 855 u8 *lladdr = NULL;
856 u32 ndoptlen = skb->tail - (skb->transport_header + 856 u32 ndoptlen = skb_tail_pointer(skb) - (skb_transport_header(skb) +
857 offsetof(struct nd_msg, opt)); 857 offsetof(struct nd_msg, opt));
858 struct ndisc_options ndopts; 858 struct ndisc_options ndopts;
859 struct net_device *dev = skb->dev; 859 struct net_device *dev = skb->dev;
@@ -1069,7 +1069,8 @@ static void ndisc_router_discovery(struct sk_buff *skb)
1069 1069
1070 __u8 * opt = (__u8 *)(ra_msg + 1); 1070 __u8 * opt = (__u8 *)(ra_msg + 1);
1071 1071
1072 optlen = (skb->tail - skb->transport_header) - sizeof(struct ra_msg); 1072 optlen = (skb_tail_pointer(skb) - skb_transport_header(skb)) -
1073 sizeof(struct ra_msg);
1073 1074
1074 if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) { 1075 if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) {
1075 ND_PRINTK(2, warn, "RA: source address is not link-local\n"); 1076 ND_PRINTK(2, warn, "RA: source address is not link-local\n");
@@ -1346,7 +1347,7 @@ static void ndisc_redirect_rcv(struct sk_buff *skb)
1346 u8 *hdr; 1347 u8 *hdr;
1347 struct ndisc_options ndopts; 1348 struct ndisc_options ndopts;
1348 struct rd_msg *msg = (struct rd_msg *)skb_transport_header(skb); 1349 struct rd_msg *msg = (struct rd_msg *)skb_transport_header(skb);
1349 u32 ndoptlen = skb->tail - (skb->transport_header + 1350 u32 ndoptlen = skb_tail_pointer(skb) - (skb_transport_header(skb) +
1350 offsetof(struct rd_msg, opt)); 1351 offsetof(struct rd_msg, opt));
1351 1352
1352#ifdef CONFIG_IPV6_NDISC_NODETYPE 1353#ifdef CONFIG_IPV6_NDISC_NODETYPE
@@ -1568,7 +1569,7 @@ int ndisc_rcv(struct sk_buff *skb)
1568 1569
1569static int ndisc_netdev_event(struct notifier_block *this, unsigned long event, void *ptr) 1570static int ndisc_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)
1570{ 1571{
1571 struct net_device *dev = ptr; 1572 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
1572 struct net *net = dev_net(dev); 1573 struct net *net = dev_net(dev);
1573 struct inet6_dev *idev; 1574 struct inet6_dev *idev;
1574 1575
diff --git a/net/ipv6/netfilter/ip6t_MASQUERADE.c b/net/ipv6/netfilter/ip6t_MASQUERADE.c
index 60e9053bab05..47bff6107519 100644
--- a/net/ipv6/netfilter/ip6t_MASQUERADE.c
+++ b/net/ipv6/netfilter/ip6t_MASQUERADE.c
@@ -71,7 +71,7 @@ static int device_cmp(struct nf_conn *ct, void *ifindex)
71static int masq_device_event(struct notifier_block *this, 71static int masq_device_event(struct notifier_block *this,
72 unsigned long event, void *ptr) 72 unsigned long event, void *ptr)
73{ 73{
74 const struct net_device *dev = ptr; 74 const struct net_device *dev = netdev_notifier_info_to_dev(ptr);
75 struct net *net = dev_net(dev); 75 struct net *net = dev_net(dev);
76 76
77 if (event == NETDEV_DOWN) 77 if (event == NETDEV_DOWN)
@@ -89,8 +89,10 @@ static int masq_inet_event(struct notifier_block *this,
89 unsigned long event, void *ptr) 89 unsigned long event, void *ptr)
90{ 90{
91 struct inet6_ifaddr *ifa = ptr; 91 struct inet6_ifaddr *ifa = ptr;
92 struct netdev_notifier_info info;
92 93
93 return masq_device_event(this, event, ifa->idev->dev); 94 netdev_notifier_info_init(&info, ifa->idev->dev);
95 return masq_device_event(this, event, &info);
94} 96}
95 97
96static struct notifier_block masq_inet_notifier = { 98static struct notifier_block masq_inet_notifier = {
diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
index c2e73e647e44..ab92a3673fbb 100644
--- a/net/ipv6/output_core.c
+++ b/net/ipv6/output_core.c
@@ -40,7 +40,8 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr)
 	u16 offset = sizeof(struct ipv6hdr);
 	struct ipv6_opt_hdr *exthdr =
 		(struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1);
-	unsigned int packet_len = skb->tail - skb->network_header;
+	unsigned int packet_len = skb_tail_pointer(skb) -
+				  skb_network_header(skb);
 	int found_rhdr = 0;
 	*nexthdr = &ipv6_hdr(skb)->nexthdr;
 
diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
new file mode 100644
index 000000000000..18f19df4189f
--- /dev/null
+++ b/net/ipv6/ping.c
@@ -0,0 +1,277 @@
+/*
+ *	INET		An implementation of the TCP/IP protocol suite for the LINUX
+ *			operating system.  INET is implemented using the BSD Socket
+ *			interface as the means of communication with the user level.
+ *
+ *		"Ping" sockets
+ *
+ *	This program is free software; you can redistribute it and/or
+ *	modify it under the terms of the GNU General Public License
+ *	as published by the Free Software Foundation; either version
+ *	2 of the License, or (at your option) any later version.
+ *
+ *	Based on ipv4/ping.c code.
+ *
+ *	Authors:	Lorenzo Colitti (IPv6 support)
+ *			Vasiliy Kulikov / Openwall (IPv4 implementation, for Linux 2.6),
+ *			Pavel Kankovsky (IPv4 implementation, for Linux 2.4.32)
+ *
+ */
+
+#include <net/addrconf.h>
+#include <net/ipv6.h>
+#include <net/ip6_route.h>
+#include <net/protocol.h>
+#include <net/udp.h>
+#include <net/transp_v6.h>
+#include <net/ping.h>
+
+struct proto pingv6_prot = {
+	.name =		"PINGv6",
+	.owner =	THIS_MODULE,
+	.init =		ping_init_sock,
+	.close =	ping_close,
+	.connect =	ip6_datagram_connect,
+	.disconnect =	udp_disconnect,
+	.setsockopt =	ipv6_setsockopt,
+	.getsockopt =	ipv6_getsockopt,
+	.sendmsg =	ping_v6_sendmsg,
+	.recvmsg =	ping_recvmsg,
+	.bind =		ping_bind,
+	.backlog_rcv =	ping_queue_rcv_skb,
+	.hash =		ping_hash,
+	.unhash =	ping_unhash,
+	.get_port =	ping_get_port,
+	.obj_size =	sizeof(struct raw6_sock),
+};
+EXPORT_SYMBOL_GPL(pingv6_prot);
+
+static struct inet_protosw pingv6_protosw = {
+	.type =		SOCK_DGRAM,
+	.protocol =	IPPROTO_ICMPV6,
+	.prot =		&pingv6_prot,
+	.ops =		&inet6_dgram_ops,
+	.no_check =	UDP_CSUM_DEFAULT,
+	.flags =	INET_PROTOSW_REUSE,
+};
+
+
+/* Compatibility glue so we can support IPv6 when it's compiled as a module */
+static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
+{
+	return -EAFNOSUPPORT;
+}
+static int dummy_ip6_datagram_recv_ctl(struct sock *sk, struct msghdr *msg,
+				       struct sk_buff *skb)
+{
+	return -EAFNOSUPPORT;
+}
+static int dummy_icmpv6_err_convert(u8 type, u8 code, int *err)
+{
+	return -EAFNOSUPPORT;
+}
+static void dummy_ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err,
+				  __be16 port, u32 info, u8 *payload) {}
+static int dummy_ipv6_chk_addr(struct net *net, const struct in6_addr *addr,
+			       const struct net_device *dev, int strict)
+{
+	return 0;
+}
+
+int ping_v6_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+		    size_t len)
+{
+	struct inet_sock *inet = inet_sk(sk);
+	struct ipv6_pinfo *np = inet6_sk(sk);
+	struct icmp6hdr user_icmph;
+	int addr_type;
+	struct in6_addr *daddr;
+	int iif = 0;
+	struct flowi6 fl6;
+	int err;
+	int hlimit;
+	struct dst_entry *dst;
+	struct rt6_info *rt;
+	struct pingfakehdr pfh;
+
+	pr_debug("ping_v6_sendmsg(sk=%p,sk->num=%u)\n", inet, inet->inet_num);
+
+	err = ping_common_sendmsg(AF_INET6, msg, len, &user_icmph,
+				  sizeof(user_icmph));
+	if (err)
+		return err;
+
+	if (msg->msg_name) {
+		struct sockaddr_in6 *u = (struct sockaddr_in6 *) msg->msg_name;
+		if (msg->msg_namelen < sizeof(struct sockaddr_in6) ||
+		    u->sin6_family != AF_INET6) {
+			return -EINVAL;
+		}
+		if (sk->sk_bound_dev_if &&
+		    sk->sk_bound_dev_if != u->sin6_scope_id) {
+			return -EINVAL;
+		}
+		daddr = &(u->sin6_addr);
+		iif = u->sin6_scope_id;
+	} else {
+		if (sk->sk_state != TCP_ESTABLISHED)
+			return -EDESTADDRREQ;
+		daddr = &np->daddr;
+	}
+
+	if (!iif)
+		iif = sk->sk_bound_dev_if;
+
+	addr_type = ipv6_addr_type(daddr);
+	if (__ipv6_addr_needs_scope_id(addr_type) && !iif)
+		return -EINVAL;
+	if (addr_type & IPV6_ADDR_MAPPED)
+		return -EINVAL;
+
+	/* TODO: use ip6_datagram_send_ctl to get options from cmsg */
+
+	memset(&fl6, 0, sizeof(fl6));
+
+	fl6.flowi6_proto = IPPROTO_ICMPV6;
+	fl6.saddr = np->saddr;
+	fl6.daddr = *daddr;
+	fl6.fl6_icmp_type = user_icmph.icmp6_type;
+	fl6.fl6_icmp_code = user_icmph.icmp6_code;
+	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+
+	if (!fl6.flowi6_oif && ipv6_addr_is_multicast(&fl6.daddr))
+		fl6.flowi6_oif = np->mcast_oif;
+	else if (!fl6.flowi6_oif)
+		fl6.flowi6_oif = np->ucast_oif;
+
+	dst = ip6_sk_dst_lookup_flow(sk, &fl6, daddr, 1);
+	if (IS_ERR(dst))
+		return PTR_ERR(dst);
+	rt = (struct rt6_info *) dst;
+
+	np = inet6_sk(sk);
+	if (!np)
+		return -EBADF;
+
+	if (!fl6.flowi6_oif && ipv6_addr_is_multicast(&fl6.daddr))
+		fl6.flowi6_oif = np->mcast_oif;
+	else if (!fl6.flowi6_oif)
+		fl6.flowi6_oif = np->ucast_oif;
+
+	pfh.icmph.type = user_icmph.icmp6_type;
+	pfh.icmph.code = user_icmph.icmp6_code;
+	pfh.icmph.checksum = 0;
+	pfh.icmph.un.echo.id = inet->inet_sport;
+	pfh.icmph.un.echo.sequence = user_icmph.icmp6_sequence;
+	pfh.iov = msg->msg_iov;
+	pfh.wcheck = 0;
+	pfh.family = AF_INET6;
+
+	if (ipv6_addr_is_multicast(&fl6.daddr))
+		hlimit = np->mcast_hops;
+	else
+		hlimit = np->hop_limit;
+	if (hlimit < 0)
+		hlimit = ip6_dst_hoplimit(dst);
+
+	lock_sock(sk);
+	err = ip6_append_data(sk, ping_getfrag, &pfh, len,
+			      0, hlimit,
+			      np->tclass, NULL, &fl6, rt,
+			      MSG_DONTWAIT, np->dontfrag);
+
+	if (err) {
+		ICMP6_INC_STATS_BH(sock_net(sk), rt->rt6i_idev,
+				   ICMP6_MIB_OUTERRORS);
+		ip6_flush_pending_frames(sk);
+	} else {
+		err = icmpv6_push_pending_frames(sk, &fl6,
+						 (struct icmp6hdr *) &pfh.icmph,
+						 len);
+	}
+	release_sock(sk);
+
+	if (err)
+		return err;
+
+	return len;
+}
+
+#ifdef CONFIG_PROC_FS
+static void *ping_v6_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	return ping_seq_start(seq, pos, AF_INET6);
+}
+
+static int ping_v6_seq_show(struct seq_file *seq, void *v)
+{
+	if (v == SEQ_START_TOKEN) {
+		seq_puts(seq, IPV6_SEQ_DGRAM_HEADER);
+	} else {
+		int bucket = ((struct ping_iter_state *) seq->private)->bucket;
+		struct inet_sock *inet = inet_sk(v);
+		__u16 srcp = ntohs(inet->inet_sport);
+		__u16 destp = ntohs(inet->inet_dport);
+		ip6_dgram_sock_seq_show(seq, v, srcp, destp, bucket);
+	}
+	return 0;
+}
+
+static struct ping_seq_afinfo ping_v6_seq_afinfo = {
+	.name		= "icmp6",
+	.family		= AF_INET6,
+	.seq_fops	= &ping_seq_fops,
+	.seq_ops	= {
+		.start		= ping_v6_seq_start,
+		.show		= ping_v6_seq_show,
+		.next		= ping_seq_next,
+		.stop		= ping_seq_stop,
+	},
+};
+
+static int __net_init ping_v6_proc_init_net(struct net *net)
+{
+	return ping_proc_register(net, &ping_v6_seq_afinfo);
+}
+
+static void __net_init ping_v6_proc_exit_net(struct net *net)
+{
+	return ping_proc_unregister(net, &ping_v6_seq_afinfo);
+}
+
+static struct pernet_operations ping_v6_net_ops = {
+	.init = ping_v6_proc_init_net,
+	.exit = ping_v6_proc_exit_net,
+};
+#endif
+
+int __init pingv6_init(void)
+{
+#ifdef CONFIG_PROC_FS
+	int ret = register_pernet_subsys(&ping_v6_net_ops);
+	if (ret)
+		return ret;
+#endif
+	pingv6_ops.ipv6_recv_error = ipv6_recv_error;
+	pingv6_ops.ip6_datagram_recv_ctl = ip6_datagram_recv_ctl;
+	pingv6_ops.icmpv6_err_convert = icmpv6_err_convert;
+	pingv6_ops.ipv6_icmp_error = ipv6_icmp_error;
+	pingv6_ops.ipv6_chk_addr = ipv6_chk_addr;
+	return inet6_register_protosw(&pingv6_protosw);
+}
+
+/* This never gets called because it's not possible to unload the ipv6 module,
+ * but just in case.
+ */
+void pingv6_exit(void)
+{
+	pingv6_ops.ipv6_recv_error = dummy_ipv6_recv_error;
+	pingv6_ops.ip6_datagram_recv_ctl = dummy_ip6_datagram_recv_ctl;
+	pingv6_ops.icmpv6_err_convert = dummy_icmpv6_err_convert;
+	pingv6_ops.ipv6_icmp_error = dummy_ipv6_icmp_error;
+	pingv6_ops.ipv6_chk_addr = dummy_ipv6_chk_addr;
+#ifdef CONFIG_PROC_FS
+	unregister_pernet_subsys(&ping_v6_net_ops);
+#endif
+	inet6_unregister_protosw(&pingv6_protosw);
+}
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
index eedff8ccded5..c45f7a5c36e9 100644
--- a/net/ipv6/raw.c
+++ b/net/ipv6/raw.c
@@ -1132,7 +1132,8 @@ static int rawv6_ioctl(struct sock *sk, int cmd, unsigned long arg)
 		spin_lock_bh(&sk->sk_receive_queue.lock);
 		skb = skb_peek(&sk->sk_receive_queue);
 		if (skb != NULL)
-			amount = skb->tail - skb->transport_header;
+			amount = skb_tail_pointer(skb) -
+				 skb_transport_header(skb);
 		spin_unlock_bh(&sk->sk_receive_queue.lock);
 		return put_user(amount, (int __user *)arg);
 	}
@@ -1226,45 +1227,16 @@ struct proto rawv6_prot = {
 };
 
 #ifdef CONFIG_PROC_FS
-static void raw6_sock_seq_show(struct seq_file *seq, struct sock *sp, int i)
-{
-	struct ipv6_pinfo *np = inet6_sk(sp);
-	const struct in6_addr *dest, *src;
-	__u16 destp, srcp;
-
-	dest  = &np->daddr;
-	src   = &np->rcv_saddr;
-	destp = 0;
-	srcp  = inet_sk(sp)->inet_num;
-	seq_printf(seq,
-		   "%4d: %08X%08X%08X%08X:%04X %08X%08X%08X%08X:%04X "
-		   "%02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %pK %d\n",
-		   i,
-		   src->s6_addr32[0], src->s6_addr32[1],
-		   src->s6_addr32[2], src->s6_addr32[3], srcp,
-		   dest->s6_addr32[0], dest->s6_addr32[1],
-		   dest->s6_addr32[2], dest->s6_addr32[3], destp,
-		   sp->sk_state,
-		   sk_wmem_alloc_get(sp),
-		   sk_rmem_alloc_get(sp),
-		   0, 0L, 0,
-		   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)),
-		   0,
-		   sock_i_ino(sp),
-		   atomic_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));
-}
-
 static int raw6_seq_show(struct seq_file *seq, void *v)
 {
-	if (v == SEQ_START_TOKEN)
-		seq_printf(seq,
-			   " sl "
-			   "local_address "
-			   "remote_address "
-			   "st tx_queue rx_queue tr tm->when retrnsmt"
-			   " uid timeout inode ref pointer drops\n");
-	else
-		raw6_sock_seq_show(seq, v, raw_seq_private(seq)->bucket);
+	if (v == SEQ_START_TOKEN) {
+		seq_puts(seq, IPV6_SEQ_DGRAM_HEADER);
+	} else {
+		struct sock *sp = v;
+		__u16 srcp = inet_sk(sp)->inet_num;
+		ip6_dgram_sock_seq_show(seq, v, srcp, 0,
+					raw_seq_private(seq)->bucket);
+	}
 	return 0;
 }
 
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index ad0aa6b0b86a..bd5fd7054031 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -83,6 +83,7 @@ static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
 				    struct sk_buff *skb, u32 mtu);
 static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk,
 			    struct sk_buff *skb);
+static int rt6_score_route(struct rt6_info *rt, int oif, int strict);
 
 #ifdef CONFIG_IPV6_ROUTE_INFO
 static struct rt6_info *rt6_add_route_info(struct net *net,
@@ -394,7 +395,8 @@ static int rt6_info_hash_nhsfn(unsigned int candidate_count,
 }
 
 static struct rt6_info *rt6_multipath_select(struct rt6_info *match,
-					     struct flowi6 *fl6)
+					     struct flowi6 *fl6, int oif,
+					     int strict)
 {
 	struct rt6_info *sibling, *next_sibling;
 	int route_choosen;
@@ -408,6 +410,8 @@ static struct rt6_info *rt6_multipath_select(struct rt6_info *match,
 				 &match->rt6i_siblings, rt6i_siblings) {
 			route_choosen--;
 			if (route_choosen == 0) {
+				if (rt6_score_route(sibling, oif, strict) < 0)
+					break;
 				match = sibling;
 				break;
 			}
@@ -547,6 +551,8 @@ static inline bool rt6_check_neigh(struct rt6_info *rt)
 			ret = true;
 #endif
 		read_unlock(&neigh->lock);
+	} else if (IS_ENABLED(CONFIG_IPV6_ROUTER_PREF)) {
+		ret = true;
 	}
 	rcu_read_unlock_bh();
 
@@ -743,7 +749,7 @@ restart:
 	rt = fn->leaf;
 	rt = rt6_device_match(net, rt, &fl6->saddr, fl6->flowi6_oif, flags);
 	if (rt->rt6i_nsiblings && fl6->flowi6_oif == 0)
-		rt = rt6_multipath_select(rt, fl6);
+		rt = rt6_multipath_select(rt, fl6, fl6->flowi6_oif, flags);
 	BACKTRACK(net, &fl6->saddr);
 out:
 	dst_use(&rt->dst, jiffies);
@@ -875,8 +881,8 @@ restart_2:
 
 restart:
 	rt = rt6_select(fn, oif, strict | reachable);
-	if (rt->rt6i_nsiblings && oif == 0)
-		rt = rt6_multipath_select(rt, fl6);
+	if (rt->rt6i_nsiblings)
+		rt = rt6_multipath_select(rt, fl6, oif, strict | reachable);
 	BACKTRACK(net, &fl6->saddr);
 	if (rt == net->ipv6.ip6_null_entry ||
 	    rt->rt6i_flags & RTF_CACHE)
@@ -1649,7 +1655,7 @@ static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_bu
 	int optlen, on_link;
 	u8 *lladdr;
 
-	optlen = skb->tail - skb->transport_header;
+	optlen = skb_tail_pointer(skb) - skb_transport_header(skb);
 	optlen -= sizeof(*msg);
 
 	if (optlen < 0) {
@@ -2681,9 +2687,9 @@ errout:
 }
 
 static int ip6_route_dev_notify(struct notifier_block *this,
-				unsigned long event, void *data)
+				unsigned long event, void *ptr)
 {
-	struct net_device *dev = (struct net_device *)data;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct net *net = dev_net(dev);
 
 	if (event == NETDEV_REGISTER && (dev->flags & IFF_LOOPBACK)) {
@@ -2790,7 +2796,7 @@ static const struct file_operations rt6_stats_seq_fops = {
 #ifdef CONFIG_SYSCTL
 
 static
-int ipv6_sysctl_rtcache_flush(ctl_table *ctl, int write,
+int ipv6_sysctl_rtcache_flush(struct ctl_table *ctl, int write,
 			      void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct net *net;
@@ -2805,7 +2811,7 @@ int ipv6_sysctl_rtcache_flush(ctl_table *ctl, int write,
 	return 0;
 }
 
-ctl_table ipv6_route_table_template[] = {
+struct ctl_table ipv6_route_table_template[] = {
 	{
 		.procname	= "flush",
 		.data		= &init_net.ipv6.sysctl.flush_delay,
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
index 335363478bbf..a3437a4cd07e 100644
--- a/net/ipv6/sit.c
+++ b/net/ipv6/sit.c
@@ -466,14 +466,14 @@ isatap_chksrc(struct sk_buff *skb, const struct iphdr *iph, struct ip_tunnel *t)
 
 static void ipip6_tunnel_uninit(struct net_device *dev)
 {
-	struct net *net = dev_net(dev);
-	struct sit_net *sitn = net_generic(net, sit_net_id);
+	struct ip_tunnel *tunnel = netdev_priv(dev);
+	struct sit_net *sitn = net_generic(tunnel->net, sit_net_id);
 
 	if (dev == sitn->fb_tunnel_dev) {
 		RCU_INIT_POINTER(sitn->tunnels_wc[0], NULL);
 	} else {
-		ipip6_tunnel_unlink(sitn, netdev_priv(dev));
-		ipip6_tunnel_del_prl(netdev_priv(dev), NULL);
+		ipip6_tunnel_unlink(sitn, tunnel);
+		ipip6_tunnel_del_prl(tunnel, NULL);
 	}
 	dev_put(dev);
 }
@@ -577,6 +577,10 @@ static int ipip6_rcv(struct sk_buff *skb)
 	if (tunnel != NULL) {
 		struct pcpu_tstats *tstats;
 
+		if (tunnel->parms.iph.protocol != IPPROTO_IPV6 &&
+		    tunnel->parms.iph.protocol != 0)
+			goto out;
+
 		secpath_reset(skb);
 		skb->mac_header = skb->network_header;
 		skb_reset_network_header(skb);
@@ -589,7 +593,7 @@ static int ipip6_rcv(struct sk_buff *skb)
 				tunnel->dev->stats.rx_errors++;
 				goto out;
 			}
-		} else {
+		} else if (!(tunnel->dev->flags&IFF_POINTOPOINT)) {
 			if (is_spoofed_6rd(tunnel, iph->saddr,
 					   &ipv6_hdr(skb)->saddr) ||
 			    is_spoofed_6rd(tunnel, iph->daddr,
@@ -617,6 +621,8 @@ static int ipip6_rcv(struct sk_buff *skb)
 		tstats->rx_packets++;
 		tstats->rx_bytes += skb->len;
 
+		if (tunnel->net != dev_net(tunnel->dev))
+			skb_scrub_packet(skb);
 		netif_rx(skb);
 
 		return 0;
@@ -629,6 +635,40 @@ out:
 	return 0;
 }
 
+static const struct tnl_ptk_info tpi = {
+	/* no tunnel info required for ipip. */
+	.proto = htons(ETH_P_IP),
+};
+
+static int ipip_rcv(struct sk_buff *skb)
+{
+	const struct iphdr *iph;
+	struct ip_tunnel *tunnel;
+
+	if (iptunnel_pull_header(skb, 0, tpi.proto))
+		goto drop;
+
+	iph = ip_hdr(skb);
+
+	tunnel = ipip6_tunnel_lookup(dev_net(skb->dev), skb->dev,
+				     iph->saddr, iph->daddr);
+	if (tunnel != NULL) {
+		if (tunnel->parms.iph.protocol != IPPROTO_IPIP &&
+		    tunnel->parms.iph.protocol != 0)
+			goto drop;
+
+		if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+			goto drop;
+		return ip_tunnel_rcv(tunnel, skb, &tpi, log_ecn_error);
+	}
+
+	return 1;
+
+drop:
+	kfree_skb(skb);
+	return 0;
+}
+
 /*
  * If the IPv6 address comes from 6rd / 6to4 (RFC 3056) addr space this function
  * stores the embedded IPv4 address in v4dst and returns true.
@@ -690,13 +730,14 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
 	__be16 df = tiph->frag_off;
 	struct rtable *rt;			/* Route to the other host */
 	struct net_device *tdev;		/* Device to other host */
-	struct iphdr *iph;			/* Our new IP header */
 	unsigned int max_headroom;		/* The extra header space needed */
 	__be32 dst = tiph->daddr;
 	struct flowi4 fl4;
 	int mtu;
 	const struct in6_addr *addr6;
 	int addr_type;
+	u8 ttl;
+	int err;
 
 	if (skb->protocol != htons(ETH_P_IPV6))
 		goto tx_error;
@@ -764,7 +805,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
 			goto tx_error;
 	}
 
-	rt = ip_route_output_ports(dev_net(dev), &fl4, NULL,
+	rt = ip_route_output_ports(tunnel->net, &fl4, NULL,
 				   dst, tiph->saddr,
 				   0, 0,
 				   IPPROTO_IPV6, RT_TOS(tos),
@@ -819,6 +860,9 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
 		tunnel->err_count = 0;
 	}
 
+	if (tunnel->net != dev_net(dev))
+		skb_scrub_packet(skb);
+
 	/*
 	 * Okay, now see if we can stuff it in the buffer as-is.
 	 */
@@ -839,34 +883,14 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
 		skb = new_skb;
 		iph6 = ipv6_hdr(skb);
 	}
-
-	skb->transport_header = skb->network_header;
-	skb_push(skb, sizeof(struct iphdr));
-	skb_reset_network_header(skb);
-	memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
-	IPCB(skb)->flags = 0;
-	skb_dst_drop(skb);
-	skb_dst_set(skb, &rt->dst);
-
-	/*
-	 * Push down and install the IPIP header.
-	 */
-
-	iph = ip_hdr(skb);
-	iph->version = 4;
-	iph->ihl = sizeof(struct iphdr)>>2;
-	iph->frag_off = df;
-	iph->protocol = IPPROTO_IPV6;
-	iph->tos = INET_ECN_encapsulate(tos, ipv6_get_dsfield(iph6));
-	iph->daddr = fl4.daddr;
-	iph->saddr = fl4.saddr;
-
-	if ((iph->ttl = tiph->ttl) == 0)
-		iph->ttl = iph6->hop_limit;
-
-	skb->ip_summed = CHECKSUM_NONE;
-	ip_select_ident(iph, skb_dst(skb), NULL);
-	iptunnel_xmit(skb, dev);
+	ttl = tiph->ttl;
+	if (ttl == 0)
+		ttl = iph6->hop_limit;
+	tos = INET_ECN_encapsulate(tos, ipv6_get_dsfield(iph6));
+
+	err = iptunnel_xmit(dev_net(dev), rt, skb, fl4.saddr, fl4.daddr,
+			    IPPROTO_IPV6, tos, ttl, df);
+	iptunnel_xmit_stats(err, &dev->stats, dev->tstats);
 	return NETDEV_TX_OK;
 
 tx_error_icmp:
@@ -877,6 +901,43 @@ tx_error:
 	return NETDEV_TX_OK;
 }
 
+static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+	struct ip_tunnel *tunnel = netdev_priv(dev);
+	const struct iphdr *tiph = &tunnel->parms.iph;
+
+	if (likely(!skb->encapsulation)) {
+		skb_reset_inner_headers(skb);
+		skb->encapsulation = 1;
+	}
+
+	ip_tunnel_xmit(skb, dev, tiph, IPPROTO_IPIP);
+	return NETDEV_TX_OK;
+}
+
+static netdev_tx_t sit_tunnel_xmit(struct sk_buff *skb,
+				   struct net_device *dev)
+{
+	switch (skb->protocol) {
+	case htons(ETH_P_IP):
+		ipip_tunnel_xmit(skb, dev);
+		break;
+	case htons(ETH_P_IPV6):
+		ipip6_tunnel_xmit(skb, dev);
+		break;
+	default:
+		goto tx_err;
+	}
+
+	return NETDEV_TX_OK;
+
+tx_err:
+	dev->stats.tx_errors++;
+	dev_kfree_skb(skb);
+	return NETDEV_TX_OK;
+
+}
+
 static void ipip6_tunnel_bind_dev(struct net_device *dev)
 {
 	struct net_device *tdev = NULL;
@@ -888,7 +949,8 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
 	iph = &tunnel->parms.iph;
 
 	if (iph->daddr) {
-		struct rtable *rt = ip_route_output_ports(dev_net(dev), &fl4, NULL,
+		struct rtable *rt = ip_route_output_ports(tunnel->net, &fl4,
+							  NULL,
							  iph->daddr, iph->saddr,
							  0, 0,
							  IPPROTO_IPV6,
@@ -903,7 +965,7 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
 	}
 
 	if (!tdev && tunnel->parms.link)
-		tdev = __dev_get_by_index(dev_net(dev), tunnel->parms.link);
+		tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
 
 	if (tdev) {
 		dev->hard_header_len = tdev->hard_header_len + sizeof(struct iphdr);
@@ -916,7 +978,7 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
 
 static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p)
 {
-	struct net *net = dev_net(t->dev);
+	struct net *net = t->net;
 	struct sit_net *sitn = net_generic(net, sit_net_id);
 
 	ipip6_tunnel_unlink(sitn, t);
@@ -1027,7 +1089,11 @@ ipip6_tunnel_ioctl (struct net_device *dev, struct ifreq *ifr, int cmd)
 			goto done;
 
 		err = -EINVAL;
-		if (p.iph.version != 4 || p.iph.protocol != IPPROTO_IPV6 ||
+		if (p.iph.protocol != IPPROTO_IPV6 &&
+		    p.iph.protocol != IPPROTO_IPIP &&
+		    p.iph.protocol != 0)
+			goto done;
+		if (p.iph.version != 4 ||
 		    p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)))
 			goto done;
 		if (p.iph.ttl)
@@ -1164,7 +1230,7 @@ static int ipip6_tunnel_change_mtu(struct net_device *dev, int new_mtu)
 
 static const struct net_device_ops ipip6_netdev_ops = {
 	.ndo_uninit	= ipip6_tunnel_uninit,
-	.ndo_start_xmit	= ipip6_tunnel_xmit,
+	.ndo_start_xmit	= sit_tunnel_xmit,
 	.ndo_do_ioctl	= ipip6_tunnel_ioctl,
 	.ndo_change_mtu	= ipip6_tunnel_change_mtu,
 	.ndo_get_stats64 = ip_tunnel_get_stats64,
@@ -1188,7 +1254,6 @@ static void ipip6_tunnel_setup(struct net_device *dev)
 	dev->priv_flags		&= ~IFF_XMIT_DST_RELEASE;
 	dev->iflink		= 0;
 	dev->addr_len		= 4;
-	dev->features		|= NETIF_F_NETNS_LOCAL;
 	dev->features		|= NETIF_F_LLTX;
 }
 
@@ -1197,6 +1262,7 @@ static int ipip6_tunnel_init(struct net_device *dev)
 	struct ip_tunnel *tunnel = netdev_priv(dev);
 
 	tunnel->dev = dev;
+	tunnel->net = dev_net(dev);
 
 	memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4);
 	memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4);
@@ -1217,6 +1283,7 @@ static int __net_init ipip6_fb_tunnel_init(struct net_device *dev)
 	struct sit_net *sitn = net_generic(net, sit_net_id);
 
 	tunnel->dev = dev;
+	tunnel->net = dev_net(dev);
 	strcpy(tunnel->parms.name, dev->name);
 
 	iph->version		= 4;
@@ -1232,6 +1299,22 @@ static int __net_init ipip6_fb_tunnel_init(struct net_device *dev)
 	return 0;
 }
 
+static int ipip6_validate(struct nlattr *tb[], struct nlattr *data[])
+{
+	u8 proto;
+
+	if (!data || !data[IFLA_IPTUN_PROTO])
+		return 0;
+
+	proto = nla_get_u8(data[IFLA_IPTUN_PROTO]);
+	if (proto != IPPROTO_IPV6 &&
+	    proto != IPPROTO_IPIP &&
+	    proto != 0)
+		return -EINVAL;
+
+	return 0;
+}
+
 static void ipip6_netlink_parms(struct nlattr *data[],
 				struct ip_tunnel_parm *parms)
 {
@@ -1268,6 +1351,10 @@ static void ipip6_netlink_parms(struct nlattr *data[],
 
 	if (data[IFLA_IPTUN_FLAGS])
 		parms->i_flags = nla_get_be16(data[IFLA_IPTUN_FLAGS]);
+
+	if (data[IFLA_IPTUN_PROTO])
+		parms->iph.protocol = nla_get_u8(data[IFLA_IPTUN_PROTO]);
+
 }
 
 #ifdef CONFIG_IPV6_SIT_6RD
@@ -1339,9 +1426,9 @@ static int ipip6_newlink(struct net *src_net, struct net_device *dev,
 static int ipip6_changelink(struct net_device *dev, struct nlattr *tb[],
 			    struct nlattr *data[])
 {
-	struct ip_tunnel *t;
+	struct ip_tunnel *t = netdev_priv(dev);
 	struct ip_tunnel_parm p;
-	struct net *net = dev_net(dev);
+	struct net *net = t->net;
 	struct sit_net *sitn = net_generic(net, sit_net_id);
 #ifdef CONFIG_IPV6_SIT_6RD
 	struct ip_tunnel_6rd ip6rd;
@@ -1391,6 +1478,8 @@ static size_t ipip6_get_size(const struct net_device *dev)
 		nla_total_size(1) +
 		/* IFLA_IPTUN_FLAGS */
 		nla_total_size(2) +
+		/* IFLA_IPTUN_PROTO */
+		nla_total_size(1) +
 #ifdef CONFIG_IPV6_SIT_6RD
 		/* IFLA_IPTUN_6RD_PREFIX */
 		nla_total_size(sizeof(struct in6_addr)) +
@@ -1416,6 +1505,7 @@ static int ipip6_fill_info(struct sk_buff *skb, const struct net_device *dev)
 	    nla_put_u8(skb, IFLA_IPTUN_TOS, parm->iph.tos) ||
 	    nla_put_u8(skb, IFLA_IPTUN_PMTUDISC,
		       !!(parm->iph.frag_off & htons(IP_DF))) ||
+	    nla_put_u8(skb, IFLA_IPTUN_PROTO, parm->iph.protocol) ||
 	    nla_put_be16(skb, IFLA_IPTUN_FLAGS, parm->i_flags))
 		goto nla_put_failure;
 
@@ -1445,6 +1535,7 @@ static const struct nla_policy ipip6_policy[IFLA_IPTUN_MAX + 1] = {
 	[IFLA_IPTUN_TOS]		= { .type = NLA_U8 },
 	[IFLA_IPTUN_PMTUDISC]		= { .type = NLA_U8 },
 	[IFLA_IPTUN_FLAGS]		= { .type = NLA_U16 },
+	[IFLA_IPTUN_PROTO]		= { .type = NLA_U8 },
 #ifdef CONFIG_IPV6_SIT_6RD
 	[IFLA_IPTUN_6RD_PREFIX]		= { .len = sizeof(struct in6_addr) },
 	[IFLA_IPTUN_6RD_RELAY_PREFIX]	= { .type = NLA_U32 },
@@ -1459,6 +1550,7 @@ static struct rtnl_link_ops sit_link_ops __read_mostly = {
 	.policy		= ipip6_policy,
 	.priv_size	= sizeof(struct ip_tunnel),
 	.setup		= ipip6_tunnel_setup,
+	.validate	= ipip6_validate,
 	.newlink	= ipip6_newlink,
 	.changelink	= ipip6_changelink,
 	.get_size	= ipip6_get_size,
@@ -1471,10 +1563,22 @@ static struct xfrm_tunnel sit_handler __read_mostly = {
 	.priority	=	1,
 };
 
+static struct xfrm_tunnel ipip_handler __read_mostly = {
+	.handler	=	ipip_rcv,
+	.err_handler	=	ipip6_err,
+	.priority	=	2,
+};
+
 static void __net_exit sit_destroy_tunnels(struct sit_net *sitn, struct list_head *head)
 {
+	struct net *net = dev_net(sitn->fb_tunnel_dev);
+	struct net_device *dev, *aux;
 	int prio;
 
+	for_each_netdev_safe(net, dev, aux)
+		if (dev->rtnl_link_ops == &sit_link_ops)
+			unregister_netdevice_queue(dev, head);
+
 	for (prio = 1; prio < 4; prio++) {
 		int h;
 		for (h = 0; h < HASH_SIZE; h++) {
@@ -1482,7 +1586,12 @@ static void __net_exit sit_destroy_tunnels(struct sit_net *sitn, struct list_hea
 
 			t = rtnl_dereference(sitn->tunnels[prio][h]);
 			while (t != NULL) {
-				unregister_netdevice_queue(t->dev, head);
+				/* If dev is in the same netns, it has already
+				 * been added to the list by the previous loop.
+				 */
+				if (dev_net(t->dev) != net)
+					unregister_netdevice_queue(t->dev,
+								   head);
 				t = rtnl_dereference(t->next);
 			}
 		}
@@ -1507,6 +1616,10 @@ static int __net_init sit_init_net(struct net *net)
 		goto err_alloc_dev;
 	}
 	dev_net_set(sitn->fb_tunnel_dev, net);
+	/* FB netdevice is special: we have one, and only one per netns.
+	 * Allowing to move it to another netns is clearly unsafe.
+	 */
+	sitn->fb_tunnel_dev->features |= NETIF_F_NETNS_LOCAL;
1510 1623
1511 err = ipip6_fb_tunnel_init(sitn->fb_tunnel_dev); 1624 err = ipip6_fb_tunnel_init(sitn->fb_tunnel_dev);
1512 if (err) 1625 if (err)
@@ -1553,6 +1666,7 @@ static void __exit sit_cleanup(void)
1553{ 1666{
1554 rtnl_link_unregister(&sit_link_ops); 1667 rtnl_link_unregister(&sit_link_ops);
1555 xfrm4_tunnel_deregister(&sit_handler, AF_INET6); 1668 xfrm4_tunnel_deregister(&sit_handler, AF_INET6);
1669 xfrm4_tunnel_deregister(&ipip_handler, AF_INET);
1556 1670
1557 unregister_pernet_device(&sit_net_ops); 1671 unregister_pernet_device(&sit_net_ops);
1558 rcu_barrier(); /* Wait for completion of call_rcu()'s */ 1672 rcu_barrier(); /* Wait for completion of call_rcu()'s */
@@ -1569,9 +1683,14 @@ static int __init sit_init(void)
1569 return err; 1683 return err;
1570 err = xfrm4_tunnel_register(&sit_handler, AF_INET6); 1684 err = xfrm4_tunnel_register(&sit_handler, AF_INET6);
1571 if (err < 0) { 1685 if (err < 0) {
1572 pr_info("%s: can't add protocol\n", __func__); 1686 pr_info("%s: can't register ip6ip4\n", __func__);
1573 goto xfrm_tunnel_failed; 1687 goto xfrm_tunnel_failed;
1574 } 1688 }
1689 err = xfrm4_tunnel_register(&ipip_handler, AF_INET);
1690 if (err < 0) {
1691 pr_info("%s: can't register ip4ip4\n", __func__);
1692 goto xfrm_tunnel4_failed;
1693 }
1575 err = rtnl_link_register(&sit_link_ops); 1694 err = rtnl_link_register(&sit_link_ops);
1576 if (err < 0) 1695 if (err < 0)
1577 goto rtnl_link_failed; 1696 goto rtnl_link_failed;
@@ -1580,6 +1699,8 @@ out:
1580 return err; 1699 return err;
1581 1700
1582rtnl_link_failed: 1701rtnl_link_failed:
1702 xfrm4_tunnel_deregister(&ipip_handler, AF_INET);
1703xfrm_tunnel4_failed:
1583 xfrm4_tunnel_deregister(&sit_handler, AF_INET6); 1704 xfrm4_tunnel_deregister(&sit_handler, AF_INET6);
1584xfrm_tunnel_failed: 1705xfrm_tunnel_failed:
1585 unregister_pernet_device(&sit_net_ops); 1706 unregister_pernet_device(&sit_net_ops);
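The sit_init() hunk above extends the classic goto-based unwind chain: each newly registered handler gets its own failure label, and a failure jumps past that label so only the registrations that already succeeded are undone, in reverse order. A minimal userspace sketch of the pattern (the resource names and register/deregister helpers here are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <string.h>

static char trace[64];	/* records registration/unwind order */

static int register_res(const char *name, int fail)
{
	if (fail)
		return -1;
	strcat(trace, name);
	strcat(trace, "+");
	return 0;
}

static void deregister_res(const char *name)
{
	strcat(trace, name);
	strcat(trace, "-");
}

/* Mirrors sit_init(): register three resources; on failure of the
 * last one, unwind only the earlier two plus the pernet base. */
static int init(int fail_last)
{
	int err;

	err = register_res("pernet", 0);
	if (err < 0)
		return err;
	err = register_res("sit", 0);
	if (err < 0)
		goto sit_failed;
	err = register_res("ipip", 0);
	if (err < 0)
		goto ipip_failed;
	err = register_res("link", fail_last);
	if (err < 0)
		goto link_failed;
	return 0;

link_failed:
	deregister_res("ipip");	/* fall through the labels in reverse */
ipip_failed:
	deregister_res("sit");
sit_failed:
	deregister_res("pernet");
	return err;
}
```

Note how inserting the new ipip_handler registration only required adding one label (`xfrm_tunnel4_failed` in the real patch) between the existing ones, leaving the rest of the chain untouched.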
diff --git a/net/ipv6/sysctl_net_ipv6.c b/net/ipv6/sysctl_net_ipv6.c
index e85c48bd404f..107b2f1d90ae 100644
--- a/net/ipv6/sysctl_net_ipv6.c
+++ b/net/ipv6/sysctl_net_ipv6.c
@@ -16,7 +16,7 @@
 #include <net/addrconf.h>
 #include <net/inet_frag.h>
 
-static ctl_table ipv6_table_template[] = {
+static struct ctl_table ipv6_table_template[] = {
 	{
 		.procname	= "bindv6only",
 		.data		= &init_net.ipv6.sysctl.bindv6only,
@@ -27,7 +27,7 @@ static ctl_table ipv6_table_template[] = {
 	{ }
 };
 
-static ctl_table ipv6_rotable[] = {
+static struct ctl_table ipv6_rotable[] = {
 	{
 		.procname	= "mld_max_msf",
 		.data		= &sysctl_mld_max_msf,
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 0a17ed9eaf39..5cffa5c3e6b8 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -63,6 +63,7 @@
 #include <net/inet_common.h>
 #include <net/secure_seq.h>
 #include <net/tcp_memcontrol.h>
+#include <net/ll_poll.h>
 
 #include <asm/uaccess.h>
 
@@ -1498,6 +1499,7 @@ process:
 	if (sk_filter(sk, skb))
 		goto discard_and_relse;
 
+	sk_mark_ll(sk, skb);
 	skb->dev = NULL;
 
 	bh_lock_sock_nested(sk);
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 42923b14dfa6..b6f31437a1f8 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -46,6 +46,7 @@
 #include <net/ip6_checksum.h>
 #include <net/xfrm.h>
 #include <net/inet6_hashtables.h>
+#include <net/ll_poll.h>
 
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
@@ -841,7 +842,10 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	 */
 	sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
 	if (sk != NULL) {
-		int ret = udpv6_queue_rcv_skb(sk, skb);
+		int ret;
+
+		sk_mark_ll(sk, skb);
+		ret = udpv6_queue_rcv_skb(sk, skb);
 		sock_put(sk);
 
 		/* a return value > 0 means to resubmit the input, but
@@ -955,11 +959,16 @@ static int udp_v6_push_pending_frames(struct sock *sk)
 	struct udphdr *uh;
 	struct udp_sock *up = udp_sk(sk);
 	struct inet_sock *inet = inet_sk(sk);
-	struct flowi6 *fl6 = &inet->cork.fl.u.ip6;
+	struct flowi6 *fl6;
 	int err = 0;
 	int is_udplite = IS_UDPLITE(sk);
 	__wsum csum = 0;
 
+	if (up->pending == AF_INET)
+		return udp_push_pending_frames(sk);
+
+	fl6 = &inet->cork.fl.u.ip6;
+
 	/* Grab the skbuff where UDP header space exists. */
 	if ((skb = skb_peek(&sk->sk_write_queue)) == NULL)
 		goto out;
@@ -1359,48 +1368,17 @@ static const struct inet6_protocol udpv6_protocol = {
 
 /* ------------------------------------------------------------------------ */
 #ifdef CONFIG_PROC_FS
-
-static void udp6_sock_seq_show(struct seq_file *seq, struct sock *sp, int bucket)
-{
-	struct inet_sock *inet = inet_sk(sp);
-	struct ipv6_pinfo *np = inet6_sk(sp);
-	const struct in6_addr *dest, *src;
-	__u16 destp, srcp;
-
-	dest  = &np->daddr;
-	src   = &np->rcv_saddr;
-	destp = ntohs(inet->inet_dport);
-	srcp  = ntohs(inet->inet_sport);
-	seq_printf(seq,
-		   "%5d: %08X%08X%08X%08X:%04X %08X%08X%08X%08X:%04X "
-		   "%02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %pK %d\n",
-		   bucket,
-		   src->s6_addr32[0], src->s6_addr32[1],
-		   src->s6_addr32[2], src->s6_addr32[3], srcp,
-		   dest->s6_addr32[0], dest->s6_addr32[1],
-		   dest->s6_addr32[2], dest->s6_addr32[3], destp,
-		   sp->sk_state,
-		   sk_wmem_alloc_get(sp),
-		   sk_rmem_alloc_get(sp),
-		   0, 0L, 0,
-		   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)),
-		   0,
-		   sock_i_ino(sp),
-		   atomic_read(&sp->sk_refcnt), sp,
-		   atomic_read(&sp->sk_drops));
-}
-
 int udp6_seq_show(struct seq_file *seq, void *v)
 {
-	if (v == SEQ_START_TOKEN)
-		seq_printf(seq,
-			   "   sl  "
-			   "local_address                         "
-			   "remote_address                        "
-			   "st tx_queue rx_queue tr tm->when retrnsmt"
-			   "   uid  timeout inode ref pointer drops\n");
-	else
-		udp6_sock_seq_show(seq, v, ((struct udp_iter_state *)seq->private)->bucket);
+	if (v == SEQ_START_TOKEN) {
+		seq_puts(seq, IPV6_SEQ_DGRAM_HEADER);
+	} else {
+		int bucket = ((struct udp_iter_state *)seq->private)->bucket;
+		struct inet_sock *inet = inet_sk(v);
+		__u16 srcp = ntohs(inet->inet_sport);
+		__u16 destp = ntohs(inet->inet_dport);
+		ip6_dgram_sock_seq_show(seq, v, srcp, destp, bucket);
+	}
 	return 0;
 }
 
diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
index d3cfaf9c7a08..5d1b8d7ac993 100644
--- a/net/ipv6/udp_offload.c
+++ b/net/ipv6/udp_offload.c
@@ -64,7 +64,8 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
 		if (unlikely(type & ~(SKB_GSO_UDP |
 				      SKB_GSO_DODGY |
 				      SKB_GSO_UDP_TUNNEL |
-				      SKB_GSO_GRE) ||
+				      SKB_GSO_GRE |
+				      SKB_GSO_MPLS) ||
 			     !(type & (SKB_GSO_UDP))))
 			goto out;
 
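The udp6_ufo_fragment() hunk grows an allow-list check: a segmentation request is rejected if it carries any GSO bit outside the allowed set, or if the UDP bit itself is missing. A standalone sketch of that two-part mask test (flag values below are illustrative placeholders, not the kernel's SKB_GSO_* constants):

```c
#include <assert.h>

enum {
	GSO_UDP        = 1 << 0,
	GSO_DODGY      = 1 << 1,
	GSO_UDP_TUNNEL = 1 << 2,
	GSO_GRE        = 1 << 3,
	GSO_MPLS       = 1 << 4,	/* newly allowed by this patch */
	GSO_TCP        = 1 << 5,	/* never valid for the UDP path */
};

/* Returns 1 if the GSO type combination may be UFO-fragmented:
 * no bit outside the allow-list, and the UDP bit must be set. */
static int ufo_type_ok(unsigned int type)
{
	unsigned int allowed = GSO_UDP | GSO_DODGY | GSO_UDP_TUNNEL |
			       GSO_GRE | GSO_MPLS;

	if ((type & ~allowed) || !(type & GSO_UDP))
		return 0;
	return 1;
}
```

The patch itself only widens `allowed` by one bit (MPLS); the shape of the check is unchanged.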
diff --git a/net/ipx/af_ipx.c b/net/ipx/af_ipx.c
index f547a47d381c..7a1e0fc1bd4d 100644
--- a/net/ipx/af_ipx.c
+++ b/net/ipx/af_ipx.c
@@ -330,7 +330,7 @@ static __inline__ void __ipxitf_put(struct ipx_interface *intrfc)
 static int ipxitf_device_event(struct notifier_block *notifier,
 			       unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct ipx_interface *i, *tmp;
 
 	if (!net_eq(dev_net(dev), &init_net))
diff --git a/net/irda/irsysctl.c b/net/irda/irsysctl.c
index de73f6496db5..d6a59651767a 100644
--- a/net/irda/irsysctl.c
+++ b/net/irda/irsysctl.c
@@ -73,7 +73,7 @@ static int min_lap_keepalive_time = 100;	/* 100us */
 /* For other sysctl, I've no idea of the range. Maybe Dag could help
  * us on that - Jean II */
 
-static int do_devname(ctl_table *table, int write,
+static int do_devname(struct ctl_table *table, int write,
 		      void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	int ret;
@@ -90,7 +90,7 @@ static int do_devname(ctl_table *table, int write,
 }
 
 
-static int do_discovery(ctl_table *table, int write,
+static int do_discovery(struct ctl_table *table, int write,
 			void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	int ret;
@@ -111,7 +111,7 @@ static int do_discovery(ctl_table *table, int write,
 }
 
 /* One file */
-static ctl_table irda_table[] = {
+static struct ctl_table irda_table[] = {
 	{
 		.procname	= "discovery",
 		.data		= &sysctl_discovery,
diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
index ae691651b721..168aff5e60de 100644
--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -2293,7 +2293,7 @@ out_unlock:
 static int afiucv_netdev_event(struct notifier_block *this,
 			       unsigned long event, void *ptr)
 {
-	struct net_device *event_dev = (struct net_device *)ptr;
+	struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
 	struct sock *sk;
 	struct iucv_sock *iucv;
 
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 6984c3a353cd..feae495a0a30 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -414,10 +414,7 @@ static void l2tp_recv_dequeue_skb(struct l2tp_session *session, struct sk_buff *
 	if (L2TP_SKB_CB(skb)->has_seq) {
 		/* Bump our Nr */
 		session->nr++;
-		if (tunnel->version == L2TP_HDR_VER_2)
-			session->nr &= 0xffff;
-		else
-			session->nr &= 0xffffff;
+		session->nr &= session->nr_max;
 
 		l2tp_dbg(session, L2TP_MSG_SEQ, "%s: updated nr to %hu\n",
 			 session->name, session->nr);
@@ -542,6 +539,84 @@ static inline int l2tp_verify_udp_checksum(struct sock *sk,
 	return __skb_checksum_complete(skb);
 }
 
+static int l2tp_seq_check_rx_window(struct l2tp_session *session, u32 nr)
+{
+	u32 nws;
+
+	if (nr >= session->nr)
+		nws = nr - session->nr;
+	else
+		nws = (session->nr_max + 1) - (session->nr - nr);
+
+	return nws < session->nr_window_size;
+}
+
+/* If packet has sequence numbers, queue it if acceptable. Returns 0 if
+ * acceptable, else non-zero.
+ */
+static int l2tp_recv_data_seq(struct l2tp_session *session, struct sk_buff *skb)
+{
+	if (!l2tp_seq_check_rx_window(session, L2TP_SKB_CB(skb)->ns)) {
+		/* Packet sequence number is outside allowed window.
+		 * Discard it.
+		 */
+		l2tp_dbg(session, L2TP_MSG_SEQ,
+			 "%s: pkt %u len %d discarded, outside window, nr=%u\n",
+			 session->name, L2TP_SKB_CB(skb)->ns,
+			 L2TP_SKB_CB(skb)->length, session->nr);
+		goto discard;
+	}
+
+	if (session->reorder_timeout != 0) {
+		/* Packet reordering enabled. Add skb to session's
+		 * reorder queue, in order of ns.
+		 */
+		l2tp_recv_queue_skb(session, skb);
+		goto out;
+	}
+
+	/* Packet reordering disabled. Discard out-of-sequence packets, while
+	 * tracking the number of in-sequence packets after the first OOS packet
+	 * is seen. After nr_oos_count_max in-sequence packets, reset the
+	 * sequence number to re-enable packet reception.
+	 */
+	if (L2TP_SKB_CB(skb)->ns == session->nr) {
+		skb_queue_tail(&session->reorder_q, skb);
+	} else {
+		u32 nr_oos = L2TP_SKB_CB(skb)->ns;
+		u32 nr_next = (session->nr_oos + 1) & session->nr_max;
+
+		if (nr_oos == nr_next)
+			session->nr_oos_count++;
+		else
+			session->nr_oos_count = 0;
+
+		session->nr_oos = nr_oos;
+		if (session->nr_oos_count > session->nr_oos_count_max) {
+			session->reorder_skip = 1;
+			l2tp_dbg(session, L2TP_MSG_SEQ,
+				 "%s: %d oos packets received. Resetting sequence numbers\n",
+				 session->name, session->nr_oos_count);
+		}
+		if (!session->reorder_skip) {
+			atomic_long_inc(&session->stats.rx_seq_discards);
+			l2tp_dbg(session, L2TP_MSG_SEQ,
+				 "%s: oos pkt %u len %d discarded, waiting for %u, reorder_q_len=%d\n",
+				 session->name, L2TP_SKB_CB(skb)->ns,
+				 L2TP_SKB_CB(skb)->length, session->nr,
+				 skb_queue_len(&session->reorder_q));
+			goto discard;
+		}
+		skb_queue_tail(&session->reorder_q, skb);
+	}
+
+out:
+	return 0;
+
+discard:
+	return 1;
+}
+
 /* Do receive processing of L2TP data frames. We handle both L2TPv2
  * and L2TPv3 data frames here.
  *
@@ -757,26 +832,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
 	 * enabled. Saved L2TP protocol info is stored in skb->cb[].
 	 */
 	if (L2TP_SKB_CB(skb)->has_seq) {
-		if (session->reorder_timeout != 0) {
-			/* Packet reordering enabled. Add skb to session's
-			 * reorder queue, in order of ns.
-			 */
-			l2tp_recv_queue_skb(session, skb);
-		} else {
-			/* Packet reordering disabled. Discard out-of-sequence
-			 * packets
-			 */
-			if (L2TP_SKB_CB(skb)->ns != session->nr) {
-				atomic_long_inc(&session->stats.rx_seq_discards);
-				l2tp_dbg(session, L2TP_MSG_SEQ,
-					 "%s: oos pkt %u len %d discarded, waiting for %u, reorder_q_len=%d\n",
-					 session->name, L2TP_SKB_CB(skb)->ns,
-					 L2TP_SKB_CB(skb)->length, session->nr,
-					 skb_queue_len(&session->reorder_q));
-				goto discard;
-			}
-			skb_queue_tail(&session->reorder_q, skb);
-		}
+		if (l2tp_recv_data_seq(session, skb))
+			goto discard;
 	} else {
 		/* No sequence numbers. Add the skb to the tail of the
 		 * reorder queue. This ensures that it will be
@@ -1812,6 +1869,15 @@ struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunn
 	session->session_id = session_id;
 	session->peer_session_id = peer_session_id;
 	session->nr = 0;
+	if (tunnel->version == L2TP_HDR_VER_2)
+		session->nr_max = 0xffff;
+	else
+		session->nr_max = 0xffffff;
+	session->nr_window_size = session->nr_max / 2;
+	session->nr_oos_count_max = 4;
+
+	/* Use NR of first received packet */
+	session->reorder_skip = 1;
 
 	sprintf(&session->name[0], "sess %u/%u",
 		tunnel->tunnel_id, session->session_id);
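The new l2tp_seq_check_rx_window() accepts a received sequence number only if it lies within nr_window_size of the expected value, handling wraparound at nr_max (0xffff for L2TPv2, 0xffffff for L2TPv3). A self-contained userspace sketch of that arithmetic, ported from the hunk above (struct and function names here are stand-ins for the session fields):

```c
#include <assert.h>
#include <stdint.h>

struct seq_state {
	uint32_t nr;		/* next expected sequence number */
	uint32_t nr_max;	/* wrap point, inclusive (0xffff or 0xffffff) */
	uint32_t nr_window_size;
};

/* Returns nonzero if nr is within the receive window ahead of s->nr,
 * treating sequence numbers as modular in [0, nr_max]. */
static int seq_in_rx_window(const struct seq_state *s, uint32_t nr)
{
	uint32_t nws;

	if (nr >= s->nr)
		nws = nr - s->nr;
	else
		nws = (s->nr_max + 1) - (s->nr - nr);	/* wrapped ahead */

	return nws < s->nr_window_size;
}
```

With nr_window_size set to nr_max / 2 (as l2tp_session_create() now does), roughly half the sequence space counts as "ahead" and the rest is rejected as stale or corrupt.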
diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
index 485a490fd990..66a559b104b6 100644
--- a/net/l2tp/l2tp_core.h
+++ b/net/l2tp/l2tp_core.h
@@ -102,6 +102,11 @@ struct l2tp_session {
 	u32			nr;		/* session NR state (receive) */
 	u32			ns;		/* session NR state (send) */
 	struct sk_buff_head	reorder_q;	/* receive reorder queue */
+	u32			nr_max;		/* max NR. Depends on tunnel */
+	u32			nr_window_size;	/* NR window size */
+	u32			nr_oos;		/* NR of last OOS packet */
+	int			nr_oos_count;	/* For OOS recovery */
+	int			nr_oos_count_max;
 	struct hlist_node	hlist;		/* Hash list node */
 	atomic_t		ref_count;
 
diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
index 8dec6876dc50..5ebee2ded9e9 100644
--- a/net/l2tp/l2tp_ppp.c
+++ b/net/l2tp/l2tp_ppp.c
@@ -1793,7 +1793,8 @@ static const struct proto_ops pppol2tp_ops = {
 
 static const struct pppox_proto pppol2tp_proto = {
 	.create		= pppol2tp_create,
-	.ioctl		= pppol2tp_ioctl
+	.ioctl		= pppol2tp_ioctl,
+	.owner		= THIS_MODULE,
 };
 
 #ifdef CONFIG_L2TP_V3
diff --git a/net/mac80211/aes_ccm.c b/net/mac80211/aes_ccm.c
index 0785e95c9924..be7614b9ed27 100644
--- a/net/mac80211/aes_ccm.c
+++ b/net/mac80211/aes_ccm.c
@@ -85,7 +85,7 @@ void ieee80211_aes_ccm_encrypt(struct crypto_cipher *tfm, u8 *scratch,
 		*cpos++ = *pos++ ^ e[i];
 	}
 
-	for (i = 0; i < CCMP_MIC_LEN; i++)
+	for (i = 0; i < IEEE80211_CCMP_MIC_LEN; i++)
 		mic[i] = b[i] ^ s_0[i];
 }
 
@@ -123,7 +123,7 @@ int ieee80211_aes_ccm_decrypt(struct crypto_cipher *tfm, u8 *scratch,
 		crypto_cipher_encrypt_one(tfm, a, a);
 	}
 
-	for (i = 0; i < CCMP_MIC_LEN; i++) {
+	for (i = 0; i < IEEE80211_CCMP_MIC_LEN; i++) {
 		if ((mic[i] ^ s_0[i]) != a[i])
 			return -1;
 	}
@@ -138,7 +138,7 @@ struct crypto_cipher *ieee80211_aes_key_setup_encrypt(const u8 key[])
 
 	tfm = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC);
 	if (!IS_ERR(tfm))
-		crypto_cipher_setkey(tfm, key, ALG_CCMP_KEY_LEN);
+		crypto_cipher_setkey(tfm, key, WLAN_KEY_LEN_CCMP);
 
 	return tfm;
 }
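The aes_ccm.c hunks only rename constants, but the loops they touch implement CCM's MIC handling: the transmitted MIC is unmasked by XOR-ing with the S_0 keystream block, then compared byte-by-byte against the locally computed CBC-MAC. A minimal standalone sketch of that verification step (the helper name and mic_len parameter are illustrative; mic_len stands in for IEEE80211_CCMP_MIC_LEN, which is 8 for CCMP-128):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns 1 if the received MIC, once unmasked with the S_0 keystream
 * block, matches the locally computed CBC-MAC bytes in a[]. */
static int ccm_mic_ok(const uint8_t *mic, const uint8_t *s_0,
		      const uint8_t *a, size_t mic_len)
{
	size_t i;

	for (i = 0; i < mic_len; i++) {
		if ((mic[i] ^ s_0[i]) != a[i])
			return 0;	/* MIC mismatch: frame is dropped */
	}
	return 1;
}
```

This mirrors the decrypt loop above, which returns -1 on the first mismatching byte.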
diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
index 4fdb306e42e0..8184d121ff09 100644
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -73,16 +73,19 @@ static int ieee80211_change_iface(struct wiphy *wiphy,
 	struct ieee80211_local *local = sdata->local;
 
 	if (ieee80211_sdata_running(sdata)) {
+		u32 mask = MONITOR_FLAG_COOK_FRAMES |
+			   MONITOR_FLAG_ACTIVE;
+
 		/*
-		 * Prohibit MONITOR_FLAG_COOK_FRAMES to be
-		 * changed while the interface is up.
+		 * Prohibit MONITOR_FLAG_COOK_FRAMES and
+		 * MONITOR_FLAG_ACTIVE to be changed while the
+		 * interface is up.
 		 * Else we would need to add a lot of cruft
 		 * to update everything:
 		 *	cooked_mntrs, monitor and all fif_* counters
 		 *	reconfigure hardware
 		 */
-		if ((*flags & MONITOR_FLAG_COOK_FRAMES) !=
-		    (sdata->u.mntr_flags & MONITOR_FLAG_COOK_FRAMES))
+		if ((*flags & mask) != (sdata->u.mntr_flags & mask))
 			return -EBUSY;
 
 		ieee80211_adjust_monitor_flags(sdata, -1);
@@ -444,7 +447,7 @@ static void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo)
 	struct ieee80211_local *local = sdata->local;
 	struct timespec uptime;
 	u64 packets = 0;
-	int ac;
+	int i, ac;
 
 	sinfo->generation = sdata->local->sta_generation;
 
@@ -488,6 +491,17 @@ static void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo)
 		sinfo->signal = (s8)sta->last_signal;
 		sinfo->signal_avg = (s8) -ewma_read(&sta->avg_signal);
 	}
+	if (sta->chains) {
+		sinfo->filled |= STATION_INFO_CHAIN_SIGNAL |
+				 STATION_INFO_CHAIN_SIGNAL_AVG;
+
+		sinfo->chains = sta->chains;
+		for (i = 0; i < ARRAY_SIZE(sinfo->chain_signal); i++) {
+			sinfo->chain_signal[i] = sta->chain_signal_last[i];
+			sinfo->chain_signal_avg[i] =
+				(s8) -ewma_read(&sta->chain_signal_avg[i]);
+		}
+	}
 
 	sta_set_rate_info_tx(sta, &sta->last_tx_rate, &sinfo->txrate);
 	sta_set_rate_info_rx(sta, &sinfo->rxrate);
@@ -728,7 +742,7 @@ static void ieee80211_get_et_strings(struct wiphy *wiphy,
 
 	if (sset == ETH_SS_STATS) {
 		sz_sta_stats = sizeof(ieee80211_gstrings_sta_stats);
-		memcpy(data, *ieee80211_gstrings_sta_stats, sz_sta_stats);
+		memcpy(data, ieee80211_gstrings_sta_stats, sz_sta_stats);
 	}
 	drv_get_et_strings(sdata, sset, &(data[sz_sta_stats]));
 }
@@ -1741,6 +1755,7 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
 	ifmsh->mesh_pp_id = setup->path_sel_proto;
 	ifmsh->mesh_pm_id = setup->path_metric;
 	ifmsh->user_mpm = setup->user_mpm;
+	ifmsh->mesh_auth_id = setup->auth_id;
 	ifmsh->security = IEEE80211_MESH_SEC_NONE;
 	if (setup->is_authenticated)
 		ifmsh->security |= IEEE80211_MESH_SEC_AUTHED;
@@ -1750,6 +1765,7 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
 	/* mcast rate setting in Mesh Node */
 	memcpy(sdata->vif.bss_conf.mcast_rate, setup->mcast_rate,
 	       sizeof(setup->mcast_rate));
+	sdata->vif.bss_conf.basic_rates = setup->basic_rates;
 
 	sdata->vif.bss_conf.beacon_int = setup->beacon_interval;
 	sdata->vif.bss_conf.dtim_period = setup->dtim_period;
@@ -1862,6 +1878,8 @@ static int ieee80211_update_mesh_config(struct wiphy *wiphy,
 	if (_chg_mesh_attr(NL80211_MESHCONF_AWAKE_WINDOW, mask))
 		conf->dot11MeshAwakeWindowDuration =
 			nconf->dot11MeshAwakeWindowDuration;
+	if (_chg_mesh_attr(NL80211_MESHCONF_PLINK_TIMEOUT, mask))
+		conf->plink_timeout = nconf->plink_timeout;
 	ieee80211_mbss_info_change_notify(sdata, BSS_CHANGED_BEACON);
 	return 0;
 }
@@ -2312,7 +2330,7 @@ int __ieee80211_request_smps(struct ieee80211_sub_if_data *sdata,
 	enum ieee80211_smps_mode old_req;
 	int err;
 
-	lockdep_assert_held(&sdata->u.mgd.mtx);
+	lockdep_assert_held(&sdata->wdev.mtx);
 
 	old_req = sdata->u.mgd.req_smps;
 	sdata->u.mgd.req_smps = smps_mode;
@@ -2369,9 +2387,9 @@ static int ieee80211_set_power_mgmt(struct wiphy *wiphy, struct net_device *dev,
 	local->dynamic_ps_forced_timeout = timeout;
 
 	/* no change, but if automatic follow powersave */
-	mutex_lock(&sdata->u.mgd.mtx);
+	sdata_lock(sdata);
 	__ieee80211_request_smps(sdata, sdata->u.mgd.req_smps);
-	mutex_unlock(&sdata->u.mgd.mtx);
+	sdata_unlock(sdata);
 
 	if (local->hw.flags & IEEE80211_HW_SUPPORTS_DYNAMIC_PS)
 		ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
@@ -2809,7 +2827,8 @@ static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
 		    !rcu_access_pointer(sdata->bss->beacon))
 			need_offchan = true;
 		if (!ieee80211_is_action(mgmt->frame_control) ||
-		    mgmt->u.action.category == WLAN_CATEGORY_PUBLIC)
+		    mgmt->u.action.category == WLAN_CATEGORY_PUBLIC ||
+		    mgmt->u.action.category == WLAN_CATEGORY_SELF_PROTECTED)
 			break;
 		rcu_read_lock();
 		sta = sta_info_get(sdata, mgmt->da);
@@ -2829,6 +2848,12 @@ static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
 		return -EOPNOTSUPP;
 	}
 
+	/* configurations requiring offchan cannot work if no channel has been
+	 * specified
+	 */
+	if (need_offchan && !chan)
+		return -EINVAL;
+
 	mutex_lock(&local->mtx);
 
 	/* Check if the operating channel is the requested channel */
@@ -2838,10 +2863,15 @@ static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
 		rcu_read_lock();
 		chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
 
-		if (chanctx_conf)
-			need_offchan = chan != chanctx_conf->def.chan;
-		else
+		if (chanctx_conf) {
+			need_offchan = chan && (chan != chanctx_conf->def.chan);
+		} else if (!chan) {
+			ret = -EINVAL;
+			rcu_read_unlock();
+			goto out_unlock;
+		} else {
 			need_offchan = true;
+		}
 		rcu_read_unlock();
 	}
 
@@ -2901,19 +2931,8 @@ static void ieee80211_mgmt_frame_register(struct wiphy *wiphy,
 					  u16 frame_type, bool reg)
 {
 	struct ieee80211_local *local = wiphy_priv(wiphy);
-	struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
 
 	switch (frame_type) {
-	case IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_AUTH:
-		if (sdata->vif.type == NL80211_IFTYPE_ADHOC) {
-			struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
-
-			if (reg)
-				ifibss->auth_frame_registrations++;
-			else
-				ifibss->auth_frame_registrations--;
-		}
-		break;
 	case IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_PROBE_REQ:
 		if (reg)
 			local->probe_req_reg++;
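The first cfg.c hunk generalizes a single-flag comparison into a mask comparison: rather than testing MONITOR_FLAG_COOK_FRAMES alone, it collects all the flags that must not change while the interface is up and compares old vs. new state under that mask. A standalone sketch of the idiom (flag values and the helper are illustrative placeholders for the MONITOR_FLAG_* bits and the in-kernel check):

```c
#include <assert.h>

enum {
	FLAG_COOK_FRAMES = 1 << 0,	/* frozen while up */
	FLAG_ACTIVE      = 1 << 1,	/* frozen while up (new in this patch) */
	FLAG_OTHER       = 1 << 2,	/* may change at any time */
};

/* Returns 1 if the flag change is permitted while the interface is up:
 * the frozen bits must be identical under the mask. The kernel code
 * returns -EBUSY in the rejected case. */
static int change_allowed_while_up(unsigned int new_flags,
				   unsigned int cur_flags)
{
	unsigned int mask = FLAG_COOK_FRAMES | FLAG_ACTIVE;

	return (new_flags & mask) == (cur_flags & mask);
}
```

Adding another frozen flag later only means widening `mask`, which is exactly what the patch does.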
diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
index 14abcf44f974..cafe614ef93d 100644
--- a/net/mac80211/debugfs_netdev.c
+++ b/net/mac80211/debugfs_netdev.c
@@ -228,9 +228,9 @@ static int ieee80211_set_smps(struct ieee80211_sub_if_data *sdata,
228 if (sdata->vif.type != NL80211_IFTYPE_STATION) 228 if (sdata->vif.type != NL80211_IFTYPE_STATION)
229 return -EOPNOTSUPP; 229 return -EOPNOTSUPP;
230 230
231 mutex_lock(&sdata->u.mgd.mtx); 231 sdata_lock(sdata);
232 err = __ieee80211_request_smps(sdata, smps_mode); 232 err = __ieee80211_request_smps(sdata, smps_mode);
233 mutex_unlock(&sdata->u.mgd.mtx); 233 sdata_unlock(sdata);
234 234
235 return err; 235 return err;
236} 236}
@@ -313,16 +313,16 @@ static ssize_t ieee80211_if_parse_tkip_mic_test(
 	case NL80211_IFTYPE_STATION:
 		fc |= cpu_to_le16(IEEE80211_FCTL_TODS);
 		/* BSSID SA DA */
-		mutex_lock(&sdata->u.mgd.mtx);
+		sdata_lock(sdata);
 		if (!sdata->u.mgd.associated) {
-			mutex_unlock(&sdata->u.mgd.mtx);
+			sdata_unlock(sdata);
 			dev_kfree_skb(skb);
 			return -ENOTCONN;
 		}
 		memcpy(hdr->addr1, sdata->u.mgd.associated->bssid, ETH_ALEN);
 		memcpy(hdr->addr2, sdata->vif.addr, ETH_ALEN);
 		memcpy(hdr->addr3, addr, ETH_ALEN);
-		mutex_unlock(&sdata->u.mgd.mtx);
+		sdata_unlock(sdata);
 		break;
 	default:
 		dev_kfree_skb(skb);
@@ -471,6 +471,8 @@ __IEEE80211_IF_FILE_W(tsf);
 IEEE80211_IF_FILE(peer, u.wds.remote_addr, MAC);
 
 #ifdef CONFIG_MAC80211_MESH
+IEEE80211_IF_FILE(estab_plinks, u.mesh.estab_plinks, ATOMIC);
+
 /* Mesh stats attributes */
 IEEE80211_IF_FILE(fwded_mcast, u.mesh.mshstats.fwded_mcast, DEC);
 IEEE80211_IF_FILE(fwded_unicast, u.mesh.mshstats.fwded_unicast, DEC);
@@ -480,7 +482,6 @@ IEEE80211_IF_FILE(dropped_frames_congestion,
 		 u.mesh.mshstats.dropped_frames_congestion, DEC);
 IEEE80211_IF_FILE(dropped_frames_no_route,
 		 u.mesh.mshstats.dropped_frames_no_route, DEC);
-IEEE80211_IF_FILE(estab_plinks, u.mesh.estab_plinks, ATOMIC);
 
 /* Mesh parameters */
 IEEE80211_IF_FILE(dot11MeshMaxRetries,
@@ -583,6 +584,7 @@ static void add_wds_files(struct ieee80211_sub_if_data *sdata)
 static void add_mesh_files(struct ieee80211_sub_if_data *sdata)
 {
 	DEBUGFS_ADD_MODE(tsf, 0600);
+	DEBUGFS_ADD_MODE(estab_plinks, 0400);
 }
587 589
588static void add_mesh_stats(struct ieee80211_sub_if_data *sdata) 590static void add_mesh_stats(struct ieee80211_sub_if_data *sdata)
@@ -598,7 +600,6 @@ static void add_mesh_stats(struct ieee80211_sub_if_data *sdata)
 	MESHSTATS_ADD(dropped_frames_ttl);
 	MESHSTATS_ADD(dropped_frames_no_route);
 	MESHSTATS_ADD(dropped_frames_congestion);
-	MESHSTATS_ADD(estab_plinks);
 #undef MESHSTATS_ADD
 }
 
diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
index 169664c122e2..b931c96a596f 100644
--- a/net/mac80211/driver-ops.h
+++ b/net/mac80211/driver-ops.h
@@ -146,7 +146,8 @@ static inline int drv_add_interface(struct ieee80211_local *local,
 
 	if (WARN_ON(sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||
 		    (sdata->vif.type == NL80211_IFTYPE_MONITOR &&
-		     !(local->hw.flags & IEEE80211_HW_WANT_MONITOR_VIF))))
+		     !(local->hw.flags & IEEE80211_HW_WANT_MONITOR_VIF) &&
+		     !(sdata->u.mntr_flags & MONITOR_FLAG_ACTIVE))))
 		return -EINVAL;
 
 	trace_drv_add_interface(local, sdata);
diff --git a/net/mac80211/ht.c b/net/mac80211/ht.c
index af8cee06e4f3..f83534f6a2ee 100644
--- a/net/mac80211/ht.c
+++ b/net/mac80211/ht.c
@@ -281,13 +281,14 @@ void ieee80211_ba_session_work(struct work_struct *work)
 					       sta, tid, WLAN_BACK_RECIPIENT,
 					       WLAN_REASON_UNSPECIFIED, true);
 
+		spin_lock_bh(&sta->lock);
+
 		tid_tx = sta->ampdu_mlme.tid_start_tx[tid];
 		if (tid_tx) {
 			/*
 			 * Assign it over to the normal tid_tx array
 			 * where it "goes live".
 			 */
-			spin_lock_bh(&sta->lock);
 
 			sta->ampdu_mlme.tid_start_tx[tid] = NULL;
 			/* could there be a race? */
@@ -300,6 +301,7 @@ void ieee80211_ba_session_work(struct work_struct *work)
 			ieee80211_tx_ba_session_handle_start(sta, tid);
 			continue;
 		}
+		spin_unlock_bh(&sta->lock);
 
 		tid_tx = rcu_dereference_protected_tid_tx(sta, tid);
 		if (tid_tx && test_and_clear_bit(HT_AGG_STATE_WANT_STOP,
@@ -429,9 +431,9 @@ void ieee80211_request_smps_work(struct work_struct *work)
 		container_of(work, struct ieee80211_sub_if_data,
 			     u.mgd.request_smps_work);
 
-	mutex_lock(&sdata->u.mgd.mtx);
+	sdata_lock(sdata);
 	__ieee80211_request_smps(sdata, sdata->u.mgd.driver_smps_mode);
-	mutex_unlock(&sdata->u.mgd.mtx);
+	sdata_unlock(sdata);
 }
 
 void ieee80211_request_smps(struct ieee80211_vif *vif,
diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
index 170f9a7fa319..ea7b9c2c7e66 100644
--- a/net/mac80211/ibss.c
+++ b/net/mac80211/ibss.c
@@ -54,7 +54,7 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
 	struct beacon_data *presp;
 	int frame_len;
 
-	lockdep_assert_held(&ifibss->mtx);
+	sdata_assert_lock(sdata);
 
 	/* Reset own TSF to allow time synchronization work. */
 	drv_reset_tsf(local, sdata);
@@ -74,14 +74,14 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
 	}
 
 	presp = rcu_dereference_protected(ifibss->presp,
-					  lockdep_is_held(&ifibss->mtx));
+					  lockdep_is_held(&sdata->wdev.mtx));
 	rcu_assign_pointer(ifibss->presp, NULL);
 	if (presp)
 		kfree_rcu(presp, rcu_head);
 
 	sdata->drop_unencrypted = capability & WLAN_CAPABILITY_PRIVACY ? 1 : 0;
 
-	cfg80211_chandef_create(&chandef, chan, ifibss->channel_type);
+	chandef = ifibss->chandef;
 	if (!cfg80211_reg_can_beacon(local->hw.wiphy, &chandef)) {
 		chandef.width = NL80211_CHAN_WIDTH_20;
 		chandef.center_freq1 = chan->center_freq;
@@ -176,6 +176,8 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
 
 	/* add HT capability and information IEs */
 	if (chandef.width != NL80211_CHAN_WIDTH_20_NOHT &&
+	    chandef.width != NL80211_CHAN_WIDTH_5 &&
+	    chandef.width != NL80211_CHAN_WIDTH_10 &&
 	    sband->ht_cap.ht_supported) {
 		pos = ieee80211_ie_build_ht_cap(pos, &sband->ht_cap,
 						sband->ht_cap.cap);
@@ -263,7 +265,7 @@ static void ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
 	const struct cfg80211_bss_ies *ies;
 	u64 tsf;
 
-	lockdep_assert_held(&sdata->u.ibss.mtx);
+	sdata_assert_lock(sdata);
 
 	if (beacon_int < 10)
 		beacon_int = 10;
@@ -298,8 +300,7 @@ static void ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
 				  tsf, false);
 }
 
-static struct sta_info *ieee80211_ibss_finish_sta(struct sta_info *sta,
-						  bool auth)
+static struct sta_info *ieee80211_ibss_finish_sta(struct sta_info *sta)
 	__acquires(RCU)
 {
 	struct ieee80211_sub_if_data *sdata = sta->sdata;
@@ -321,26 +322,19 @@ static struct sta_info *ieee80211_ibss_finish_sta(struct sta_info *sta,
 	/* If it fails, maybe we raced another insertion? */
 	if (sta_info_insert_rcu(sta))
 		return sta_info_get(sdata, addr);
-	if (auth && !sdata->u.ibss.auth_frame_registrations) {
-		ibss_dbg(sdata,
-			 "TX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=1)\n",
-			 sdata->vif.addr, addr, sdata->u.ibss.bssid);
-		ieee80211_send_auth(sdata, 1, WLAN_AUTH_OPEN, 0, NULL, 0,
-				    addr, sdata->u.ibss.bssid, NULL, 0, 0, 0);
-	}
 	return sta;
 }
 
 static struct sta_info *
-ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata,
-		       const u8 *bssid, const u8 *addr,
-		       u32 supp_rates, bool auth)
+ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata, const u8 *bssid,
+		       const u8 *addr, u32 supp_rates)
 	__acquires(RCU)
 {
 	struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
 	struct ieee80211_local *local = sdata->local;
 	struct sta_info *sta;
 	struct ieee80211_chanctx_conf *chanctx_conf;
+	struct ieee80211_supported_band *sband;
 	int band;
 
 	/*
@@ -380,10 +374,11 @@ ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata,
 	sta->last_rx = jiffies;
 
 	/* make sure mandatory rates are always added */
+	sband = local->hw.wiphy->bands[band];
 	sta->sta.supp_rates[band] = supp_rates |
-			ieee80211_mandatory_rates(local, band);
+			ieee80211_mandatory_rates(sband);
 
-	return ieee80211_ibss_finish_sta(sta, auth);
+	return ieee80211_ibss_finish_sta(sta);
 }
 
 static void ieee80211_rx_mgmt_deauth_ibss(struct ieee80211_sub_if_data *sdata,
@@ -405,10 +400,8 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata,
 					size_t len)
 {
 	u16 auth_alg, auth_transaction;
-	struct sta_info *sta;
-	u8 deauth_frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
 
-	lockdep_assert_held(&sdata->u.ibss.mtx);
+	sdata_assert_lock(sdata);
 
 	if (len < 24 + 6)
 		return;
@@ -423,22 +416,6 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata,
 	if (auth_alg != WLAN_AUTH_OPEN || auth_transaction != 1)
 		return;
 
-	sta_info_destroy_addr(sdata, mgmt->sa);
-	sta = ieee80211_ibss_add_sta(sdata, mgmt->bssid, mgmt->sa, 0, false);
-	rcu_read_unlock();
-
-	/*
-	 * if we have any problem in allocating the new station, we reply with a
-	 * DEAUTH frame to tell the other end that we had a problem
-	 */
-	if (!sta) {
-		ieee80211_send_deauth_disassoc(sdata, sdata->u.ibss.bssid,
-					       IEEE80211_STYPE_DEAUTH,
-					       WLAN_REASON_UNSPECIFIED, true,
-					       deauth_frame_buf);
-		return;
-	}
-
 	/*
 	 * IEEE 802.11 standard does not require authentication in IBSS
 	 * networks and most implementations do not seem to use it.
@@ -492,7 +469,7 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
 		prev_rates = sta->sta.supp_rates[band];
 		/* make sure mandatory rates are always added */
 		sta->sta.supp_rates[band] = supp_rates |
-			ieee80211_mandatory_rates(local, band);
+			ieee80211_mandatory_rates(sband);
 
 		if (sta->sta.supp_rates[band] != prev_rates) {
 			ibss_dbg(sdata,
@@ -504,7 +481,7 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
 		} else {
 			rcu_read_unlock();
 			sta = ieee80211_ibss_add_sta(sdata, mgmt->bssid,
-						     mgmt->sa, supp_rates, true);
+						     mgmt->sa, supp_rates);
 		}
 	}
 
@@ -512,7 +489,9 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
 		set_sta_flag(sta, WLAN_STA_WME);
 
 	if (sta && elems->ht_operation && elems->ht_cap_elem &&
-	    sdata->u.ibss.channel_type != NL80211_CHAN_NO_HT) {
+	    sdata->u.ibss.chandef.width != NL80211_CHAN_WIDTH_20_NOHT &&
+	    sdata->u.ibss.chandef.width != NL80211_CHAN_WIDTH_5 &&
+	    sdata->u.ibss.chandef.width != NL80211_CHAN_WIDTH_10) {
 		/* we both use HT */
 		struct ieee80211_ht_cap htcap_ie;
 		struct cfg80211_chan_def chandef;
@@ -527,8 +506,8 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
 		 * fall back to HT20 if we don't use or use
 		 * the other extension channel
 		 */
-		if (cfg80211_get_chandef_type(&chandef) !=
-						sdata->u.ibss.channel_type)
+		if (chandef.center_freq1 !=
+		    sdata->u.ibss.chandef.center_freq1)
 			htcap_ie.cap_info &=
 				cpu_to_le16(~IEEE80211_HT_CAP_SUP_WIDTH_20_40);
 
@@ -567,7 +546,7 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
 
 	/* different channel */
 	if (sdata->u.ibss.fixed_channel &&
-	    sdata->u.ibss.channel != cbss->channel)
+	    sdata->u.ibss.chandef.chan != cbss->channel)
 		goto put_bss;
 
 	/* different SSID */
@@ -608,7 +587,7 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
 		ieee80211_sta_join_ibss(sdata, bss);
 		supp_rates = ieee80211_sta_get_rates(local, elems, band, NULL);
 		ieee80211_ibss_add_sta(sdata, mgmt->bssid, mgmt->sa,
-				       supp_rates, true);
+				       supp_rates);
 		rcu_read_unlock();
 	}
 
@@ -624,6 +603,7 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata,
 	struct ieee80211_local *local = sdata->local;
 	struct sta_info *sta;
 	struct ieee80211_chanctx_conf *chanctx_conf;
+	struct ieee80211_supported_band *sband;
 	int band;
 
 	/*
@@ -658,8 +638,9 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata,
 	sta->last_rx = jiffies;
 
 	/* make sure mandatory rates are always added */
+	sband = local->hw.wiphy->bands[band];
 	sta->sta.supp_rates[band] = supp_rates |
-			ieee80211_mandatory_rates(local, band);
+			ieee80211_mandatory_rates(sband);
 
 	spin_lock(&ifibss->incomplete_lock);
 	list_add(&sta->list, &ifibss->incomplete_stations);
@@ -673,7 +654,7 @@ static int ieee80211_sta_active_ibss(struct ieee80211_sub_if_data *sdata)
 	int active = 0;
 	struct sta_info *sta;
 
-	lockdep_assert_held(&sdata->u.ibss.mtx);
+	sdata_assert_lock(sdata);
 
 	rcu_read_lock();
 
@@ -699,7 +680,7 @@ static void ieee80211_sta_merge_ibss(struct ieee80211_sub_if_data *sdata)
 {
 	struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
 
-	lockdep_assert_held(&ifibss->mtx);
+	sdata_assert_lock(sdata);
 
 	mod_timer(&ifibss->timer,
 		  round_jiffies(jiffies + IEEE80211_IBSS_MERGE_INTERVAL));
@@ -730,7 +711,7 @@ static void ieee80211_sta_create_ibss(struct ieee80211_sub_if_data *sdata)
 	u16 capability;
 	int i;
 
-	lockdep_assert_held(&ifibss->mtx);
+	sdata_assert_lock(sdata);
 
 	if (ifibss->fixed_bssid) {
 		memcpy(bssid, ifibss->bssid, ETH_ALEN);
@@ -755,7 +736,7 @@ static void ieee80211_sta_create_ibss(struct ieee80211_sub_if_data *sdata)
 	sdata->drop_unencrypted = 0;
 
 	__ieee80211_sta_join_ibss(sdata, bssid, sdata->vif.bss_conf.beacon_int,
-				  ifibss->channel, ifibss->basic_rates,
+				  ifibss->chandef.chan, ifibss->basic_rates,
 				  capability, 0, true);
 }
 
@@ -773,7 +754,7 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata)
 	int active_ibss;
 	u16 capability;
 
-	lockdep_assert_held(&ifibss->mtx);
+	sdata_assert_lock(sdata);
 
 	active_ibss = ieee80211_sta_active_ibss(sdata);
 	ibss_dbg(sdata, "sta_find_ibss (active_ibss=%d)\n", active_ibss);
@@ -787,7 +768,7 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata)
 	if (ifibss->fixed_bssid)
 		bssid = ifibss->bssid;
 	if (ifibss->fixed_channel)
-		chan = ifibss->channel;
+		chan = ifibss->chandef.chan;
 	if (!is_zero_ether_addr(ifibss->bssid))
 		bssid = ifibss->bssid;
 	cbss = cfg80211_get_bss(local->hw.wiphy, chan, bssid,
@@ -843,10 +824,10 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata,
 	struct beacon_data *presp;
 	u8 *pos, *end;
 
-	lockdep_assert_held(&ifibss->mtx);
+	sdata_assert_lock(sdata);
 
 	presp = rcu_dereference_protected(ifibss->presp,
-					  lockdep_is_held(&ifibss->mtx));
+					  lockdep_is_held(&sdata->wdev.mtx));
 
 	if (ifibss->state != IEEE80211_IBSS_MLME_JOINED ||
 	    len < 24 + 2 || !presp)
@@ -930,7 +911,7 @@ void ieee80211_ibss_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
 	mgmt = (struct ieee80211_mgmt *) skb->data;
 	fc = le16_to_cpu(mgmt->frame_control);
 
-	mutex_lock(&sdata->u.ibss.mtx);
+	sdata_lock(sdata);
 
 	if (!sdata->u.ibss.ssid_len)
 		goto mgmt_out; /* not ready to merge yet */
@@ -953,7 +934,7 @@ void ieee80211_ibss_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
 	}
 
  mgmt_out:
-	mutex_unlock(&sdata->u.ibss.mtx);
+	sdata_unlock(sdata);
 }
 
 void ieee80211_ibss_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
@@ -961,7 +942,7 @@ void ieee80211_ibss_work(struct ieee80211_sub_if_data *sdata)
 	struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
 	struct sta_info *sta;
 
-	mutex_lock(&ifibss->mtx);
+	sdata_lock(sdata);
 
 	/*
 	 * Work could be scheduled after scan or similar
@@ -978,7 +959,7 @@ void ieee80211_ibss_work(struct ieee80211_sub_if_data *sdata)
 		list_del(&sta->list);
 		spin_unlock_bh(&ifibss->incomplete_lock);
 
-		ieee80211_ibss_finish_sta(sta, true);
+		ieee80211_ibss_finish_sta(sta);
 		rcu_read_unlock();
 		spin_lock_bh(&ifibss->incomplete_lock);
 	}
@@ -997,7 +978,7 @@ void ieee80211_ibss_work(struct ieee80211_sub_if_data *sdata)
 	}
 
  out:
-	mutex_unlock(&ifibss->mtx);
+	sdata_unlock(sdata);
 }
 
 static void ieee80211_ibss_timer(unsigned long data)
@@ -1014,7 +995,6 @@ void ieee80211_ibss_setup_sdata(struct ieee80211_sub_if_data *sdata)
 
 	setup_timer(&ifibss->timer, ieee80211_ibss_timer,
 		    (unsigned long) sdata);
-	mutex_init(&ifibss->mtx);
 	INIT_LIST_HEAD(&ifibss->incomplete_stations);
 	spin_lock_init(&ifibss->incomplete_lock);
 }
@@ -1041,8 +1021,6 @@ int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata,
 {
 	u32 changed = 0;
 
-	mutex_lock(&sdata->u.ibss.mtx);
-
 	if (params->bssid) {
 		memcpy(sdata->u.ibss.bssid, params->bssid, ETH_ALEN);
 		sdata->u.ibss.fixed_bssid = true;
@@ -1057,9 +1035,7 @@ int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata,
 
 	sdata->vif.bss_conf.beacon_int = params->beacon_interval;
 
-	sdata->u.ibss.channel = params->chandef.chan;
-	sdata->u.ibss.channel_type =
-		cfg80211_get_chandef_type(&params->chandef);
+	sdata->u.ibss.chandef = params->chandef;
 	sdata->u.ibss.fixed_channel = params->channel_fixed;
 
 	if (params->ie) {
@@ -1075,8 +1051,6 @@ int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata,
 	memcpy(sdata->u.ibss.ssid, params->ssid, params->ssid_len);
 	sdata->u.ibss.ssid_len = params->ssid_len;
 
-	mutex_unlock(&sdata->u.ibss.mtx);
-
 	/*
 	 * 802.11n-2009 9.13.3.1: In an IBSS, the HT Protection field is
 	 * reserved, but an HT STA shall protect HT transmissions as though
@@ -1112,8 +1086,6 @@ int ieee80211_ibss_leave(struct ieee80211_sub_if_data *sdata)
 	struct sta_info *sta;
 	struct beacon_data *presp;
 
-	mutex_lock(&sdata->u.ibss.mtx);
-
 	active_ibss = ieee80211_sta_active_ibss(sdata);
 
 	if (!active_ibss && !is_zero_ether_addr(ifibss->bssid)) {
@@ -1122,7 +1094,7 @@ int ieee80211_ibss_leave(struct ieee80211_sub_if_data *sdata)
 		if (ifibss->privacy)
 			capability |= WLAN_CAPABILITY_PRIVACY;
 
-		cbss = cfg80211_get_bss(local->hw.wiphy, ifibss->channel,
+		cbss = cfg80211_get_bss(local->hw.wiphy, ifibss->chandef.chan,
 					ifibss->bssid, ifibss->ssid,
 					ifibss->ssid_len, WLAN_CAPABILITY_IBSS |
 					WLAN_CAPABILITY_PRIVACY,
@@ -1157,7 +1129,7 @@ int ieee80211_ibss_leave(struct ieee80211_sub_if_data *sdata)
 	/* remove beacon */
 	kfree(sdata->u.ibss.ie);
 	presp = rcu_dereference_protected(ifibss->presp,
-					  lockdep_is_held(&sdata->u.ibss.mtx));
+					  lockdep_is_held(&sdata->wdev.mtx));
 	RCU_INIT_POINTER(sdata->u.ibss.presp, NULL);
 	sdata->vif.bss_conf.ibss_joined = false;
 	sdata->vif.bss_conf.ibss_creator = false;
@@ -1173,7 +1145,5 @@ int ieee80211_ibss_leave(struct ieee80211_sub_if_data *sdata)
 
 	del_timer_sync(&sdata->u.ibss.timer);
 
-	mutex_unlock(&sdata->u.ibss.mtx);
-
 	return 0;
 }
diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
index 9ca8e3278cc0..8412a303993a 100644
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -94,6 +94,7 @@ struct ieee80211_bss {
 #define IEEE80211_MAX_SUPP_RATES 32
 	u8 supp_rates[IEEE80211_MAX_SUPP_RATES];
 	size_t supp_rates_len;
+	struct ieee80211_rate *beacon_rate;
 
 	/*
 	 * During association, we save an ERP value from a probe response so
@@ -366,7 +367,7 @@ struct ieee80211_mgd_assoc_data {
 	u8 ssid_len;
 	u8 supp_rates_len;
 	bool wmm, uapsd;
-	bool have_beacon, need_beacon;
+	bool need_beacon;
 	bool synced;
 	bool timeout_started;
 
@@ -394,7 +395,6 @@ struct ieee80211_if_managed {
 	bool nullfunc_failed;
 	bool connection_loss;
 
-	struct mutex mtx;
 	struct cfg80211_bss *associated;
 	struct ieee80211_mgd_auth_data *auth_data;
 	struct ieee80211_mgd_assoc_data *assoc_data;
@@ -405,6 +405,7 @@ struct ieee80211_if_managed {
 
 	bool powersave; /* powersave requested for this iface */
 	bool broken_ap; /* AP is broken -- turn off powersave */
+	bool have_beacon;
 	u8 dtim_period;
 	enum ieee80211_smps_mode req_smps, /* requested smps mode */
 		driver_smps_mode; /* smps mode request */
@@ -488,8 +489,6 @@ struct ieee80211_if_managed {
 struct ieee80211_if_ibss {
 	struct timer_list timer;
 
-	struct mutex mtx;
-
 	unsigned long last_scan_completed;
 
 	u32 basic_rates;
@@ -499,14 +498,12 @@ struct ieee80211_if_ibss {
 	bool privacy;
 
 	bool control_port;
-	unsigned int auth_frame_registrations;
 
 	u8 bssid[ETH_ALEN] __aligned(2);
 	u8 ssid[IEEE80211_MAX_SSID_LEN];
 	u8 ssid_len, ie_len;
 	u8 *ie;
-	struct ieee80211_channel *channel;
-	enum nl80211_channel_type channel_type;
+	struct cfg80211_chan_def chandef;
 
 	unsigned long ibss_join_req;
 	/* probe response/beacon for IBSS */
@@ -545,6 +542,7 @@ struct ieee80211_if_mesh {
 	struct timer_list mesh_path_root_timer;
 
 	unsigned long wrkq_flags;
+	unsigned long mbss_changed;
 
 	u8 mesh_id[IEEE80211_MAX_MESH_ID_LEN];
 	size_t mesh_id_len;
@@ -580,8 +578,6 @@ struct ieee80211_if_mesh {
 	bool accepting_plinks;
 	int num_gates;
 	struct beacon_data __rcu *beacon;
-	/* just protects beacon updates for now */
-	struct mutex mtx;
 	const u8 *ie;
 	u8 ie_len;
 	enum {
@@ -778,6 +774,26 @@ struct ieee80211_sub_if_data *vif_to_sdata(struct ieee80211_vif *p)
 	return container_of(p, struct ieee80211_sub_if_data, vif);
 }
 
+static inline void sdata_lock(struct ieee80211_sub_if_data *sdata)
+	__acquires(&sdata->wdev.mtx)
+{
+	mutex_lock(&sdata->wdev.mtx);
+	__acquire(&sdata->wdev.mtx);
+}
+
+static inline void sdata_unlock(struct ieee80211_sub_if_data *sdata)
+	__releases(&sdata->wdev.mtx)
+{
+	mutex_unlock(&sdata->wdev.mtx);
+	__release(&sdata->wdev.mtx);
+}
+
+static inline void
+sdata_assert_lock(struct ieee80211_sub_if_data *sdata)
+{
+	lockdep_assert_held(&sdata->wdev.mtx);
+}
+
 static inline enum ieee80211_band
 ieee80211_get_sdata_band(struct ieee80211_sub_if_data *sdata)
 {
@@ -1507,9 +1523,6 @@ static inline void ieee802_11_parse_elems(const u8 *start, size_t len,
 	ieee802_11_parse_elems_crc(start, len, action, elems, 0, 0);
 }
 
-u32 ieee80211_mandatory_rates(struct ieee80211_local *local,
-			      enum ieee80211_band band);
-
 void ieee80211_dynamic_ps_enable_work(struct work_struct *work);
 void ieee80211_dynamic_ps_disable_work(struct work_struct *work);
 void ieee80211_dynamic_ps_timer(unsigned long data);
diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index 98d20c0f6fed..cc117591f678 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -159,7 +159,8 @@ static int ieee80211_change_mtu(struct net_device *dev, int new_mtu)
 	return 0;
 }
 
-static int ieee80211_verify_mac(struct ieee80211_sub_if_data *sdata, u8 *addr)
+static int ieee80211_verify_mac(struct ieee80211_sub_if_data *sdata, u8 *addr,
+				bool check_dup)
 {
 	struct ieee80211_local *local = sdata->local;
 	struct ieee80211_sub_if_data *iter;
@@ -180,13 +181,16 @@ static int ieee80211_verify_mac(struct ieee80211_sub_if_data *sdata, u8 *addr)
 		((u64)m[2] << 3*8) | ((u64)m[3] << 2*8) |
 		((u64)m[4] << 1*8) | ((u64)m[5] << 0*8);
 
+	if (!check_dup)
+		return ret;
 
 	mutex_lock(&local->iflist_mtx);
 	list_for_each_entry(iter, &local->interfaces, list) {
 		if (iter == sdata)
 			continue;
 
-		if (iter->vif.type == NL80211_IFTYPE_MONITOR)
+		if (iter->vif.type == NL80211_IFTYPE_MONITOR &&
+		    !(iter->u.mntr_flags & MONITOR_FLAG_ACTIVE))
 			continue;
 
 		m = iter->vif.addr;
@@ -208,12 +212,17 @@ static int ieee80211_change_mac(struct net_device *dev, void *addr)
 {
 	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
 	struct sockaddr *sa = addr;
+	bool check_dup = true;
 	int ret;
 
 	if (ieee80211_sdata_running(sdata))
 		return -EBUSY;
 
-	ret = ieee80211_verify_mac(sdata, sa->sa_data);
+	if (sdata->vif.type == NL80211_IFTYPE_MONITOR &&
+	    !(sdata->u.mntr_flags & MONITOR_FLAG_ACTIVE))
+		check_dup = false;
+
+	ret = ieee80211_verify_mac(sdata, sa->sa_data, check_dup);
 	if (ret)
 		return ret;
 
@@ -545,7 +554,11 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
 			break;
 		}
 
-		if (local->monitors == 0 && local->open_count == 0) {
+		if (sdata->u.mntr_flags & MONITOR_FLAG_ACTIVE) {
+			res = drv_add_interface(local, sdata);
+			if (res)
+				goto err_stop;
+		} else if (local->monitors == 0 && local->open_count == 0) {
 			res = ieee80211_add_virtual_monitor(local);
 			if (res)
 				goto err_stop;
@@ -923,7 +936,11 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata,
 		mutex_lock(&local->mtx);
 		ieee80211_recalc_idle(local);
 		mutex_unlock(&local->mtx);
-		break;
+
+		if (!(sdata->u.mntr_flags & MONITOR_FLAG_ACTIVE))
+			break;
+
+		/* fall through */
 	default:
 		if (going_down)
 			drv_remove_interface(local, sdata);
@@ -1072,7 +1089,7 @@ static const struct net_device_ops ieee80211_monitorif_ops = {
 	.ndo_start_xmit		= ieee80211_monitor_start_xmit,
 	.ndo_set_rx_mode	= ieee80211_set_multicast_list,
 	.ndo_change_mtu 	= ieee80211_change_mtu,
-	.ndo_set_mac_address 	= eth_mac_addr,
+	.ndo_set_mac_address 	= ieee80211_change_mac,
 	.ndo_select_queue	= ieee80211_monitor_select_queue,
 };
 
@@ -1747,10 +1764,9 @@ void ieee80211_remove_interfaces(struct ieee80211_local *local)
 }
 
 static int netdev_notify(struct notifier_block *nb,
-			 unsigned long state,
-			 void *ndev)
+			 unsigned long state, void *ptr)
 {
-	struct net_device *dev = ndev;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct ieee80211_sub_if_data *sdata;
 
 	if (state != NETDEV_CHANGENAME)
diff --git a/net/mac80211/key.c b/net/mac80211/key.c
index 67059b88fea5..e39cc91d0cf1 100644
--- a/net/mac80211/key.c
+++ b/net/mac80211/key.c
@@ -335,12 +335,12 @@ struct ieee80211_key *ieee80211_key_alloc(u32 cipher, int idx, size_t key_len,
 	switch (cipher) {
 	case WLAN_CIPHER_SUITE_WEP40:
 	case WLAN_CIPHER_SUITE_WEP104:
-		key->conf.iv_len = WEP_IV_LEN;
-		key->conf.icv_len = WEP_ICV_LEN;
+		key->conf.iv_len = IEEE80211_WEP_IV_LEN;
+		key->conf.icv_len = IEEE80211_WEP_ICV_LEN;
 		break;
 	case WLAN_CIPHER_SUITE_TKIP:
-		key->conf.iv_len = TKIP_IV_LEN;
-		key->conf.icv_len = TKIP_ICV_LEN;
+		key->conf.iv_len = IEEE80211_TKIP_IV_LEN;
+		key->conf.icv_len = IEEE80211_TKIP_ICV_LEN;
 		if (seq) {
 			for (i = 0; i < IEEE80211_NUM_TIDS; i++) {
 				key->u.tkip.rx[i].iv32 =
@@ -352,13 +352,13 @@ struct ieee80211_key *ieee80211_key_alloc(u32 cipher, int idx, size_t key_len,
 		spin_lock_init(&key->u.tkip.txlock);
 		break;
 	case WLAN_CIPHER_SUITE_CCMP:
-		key->conf.iv_len = CCMP_HDR_LEN;
-		key->conf.icv_len = CCMP_MIC_LEN;
+		key->conf.iv_len = IEEE80211_CCMP_HDR_LEN;
+		key->conf.icv_len = IEEE80211_CCMP_MIC_LEN;
 		if (seq) {
 			for (i = 0; i < IEEE80211_NUM_TIDS + 1; i++)
-				for (j = 0; j < CCMP_PN_LEN; j++)
+				for (j = 0; j < IEEE80211_CCMP_PN_LEN; j++)
 					key->u.ccmp.rx_pn[i][j] =
-						seq[CCMP_PN_LEN - j - 1];
+						seq[IEEE80211_CCMP_PN_LEN - j - 1];
 		}
 		/*
 		 * Initialize AES key state here as an optimization so that
@@ -375,9 +375,9 @@ struct ieee80211_key *ieee80211_key_alloc(u32 cipher, int idx, size_t key_len,
 		key->conf.iv_len = 0;
 		key->conf.icv_len = sizeof(struct ieee80211_mmie);
 		if (seq)
-			for (j = 0; j < CMAC_PN_LEN; j++)
+			for (j = 0; j < IEEE80211_CMAC_PN_LEN; j++)
 				key->u.aes_cmac.rx_pn[j] =
-					seq[CMAC_PN_LEN - j - 1];
+					seq[IEEE80211_CMAC_PN_LEN - j - 1];
 		/*
 		 * Initialize AES key state here as an optimization so that
 		 * it does not need to be initialized for every packet.
@@ -740,13 +740,13 @@ void ieee80211_get_key_rx_seq(struct ieee80211_key_conf *keyconf,
 			pn = key->u.ccmp.rx_pn[IEEE80211_NUM_TIDS];
 		else
 			pn = key->u.ccmp.rx_pn[tid];
-		memcpy(seq->ccmp.pn, pn, CCMP_PN_LEN);
+		memcpy(seq->ccmp.pn, pn, IEEE80211_CCMP_PN_LEN);
 		break;
 	case WLAN_CIPHER_SUITE_AES_CMAC:
 		if (WARN_ON(tid != 0))
 			return;
 		pn = key->u.aes_cmac.rx_pn;
-		memcpy(seq->aes_cmac.pn, pn, CMAC_PN_LEN);
+		memcpy(seq->aes_cmac.pn, pn, IEEE80211_CMAC_PN_LEN);
 		break;
 	}
 }
diff --git a/net/mac80211/key.h b/net/mac80211/key.h
index e8de3e6d7804..036d57e76a5e 100644
--- a/net/mac80211/key.h
+++ b/net/mac80211/key.h
@@ -19,17 +19,6 @@
 #define NUM_DEFAULT_KEYS 4
 #define NUM_DEFAULT_MGMT_KEYS 2
 
-#define WEP_IV_LEN		4
-#define WEP_ICV_LEN		4
-#define ALG_CCMP_KEY_LEN	16
-#define CCMP_HDR_LEN		8
-#define CCMP_MIC_LEN		8
-#define CCMP_TK_LEN		16
-#define CCMP_PN_LEN		6
-#define TKIP_IV_LEN		8
-#define TKIP_ICV_LEN		4
-#define CMAC_PN_LEN		6
-
 struct ieee80211_local;
 struct ieee80211_sub_if_data;
 struct sta_info;
@@ -93,13 +82,13 @@ struct ieee80211_key {
 			 * frames and the last counter is used with Robust
 			 * Management frames.
 			 */
-			u8 rx_pn[IEEE80211_NUM_TIDS + 1][CCMP_PN_LEN];
+			u8 rx_pn[IEEE80211_NUM_TIDS + 1][IEEE80211_CCMP_PN_LEN];
 			struct crypto_cipher *tfm;
 			u32 replays; /* dot11RSNAStatsCCMPReplays */
 		} ccmp;
 		struct {
 			atomic64_t tx_pn;
-			u8 rx_pn[CMAC_PN_LEN];
+			u8 rx_pn[IEEE80211_CMAC_PN_LEN];
 			struct crypto_cipher *tfm;
 			u32 replays; /* dot11RSNAStatsCMACReplays */
 			u32 icverrors; /* dot11RSNAStatsCMACICVErrors */
diff --git a/net/mac80211/main.c b/net/mac80211/main.c
index 8eae74ac4e1e..091088ac7890 100644
--- a/net/mac80211/main.c
+++ b/net/mac80211/main.c
@@ -331,7 +331,7 @@ static int ieee80211_ifa_changed(struct notifier_block *nb,
 		return NOTIFY_DONE;
 
 	ifmgd = &sdata->u.mgd;
-	mutex_lock(&ifmgd->mtx);
+	sdata_lock(sdata);
 
 	/* Copy the addresses to the bss_conf list */
 	ifa = idev->ifa_list;
@@ -349,7 +349,7 @@ static int ieee80211_ifa_changed(struct notifier_block *nb,
 		ieee80211_bss_info_change_notify(sdata,
 						 BSS_CHANGED_ARP_FILTER);
 
-	mutex_unlock(&ifmgd->mtx);
+	sdata_unlock(sdata);
 
 	return NOTIFY_DONE;
 }
@@ -686,8 +686,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
 		return -EINVAL;
 
 #ifdef CONFIG_PM
-	if ((hw->wiphy->wowlan.flags || hw->wiphy->wowlan.n_patterns) &&
-	    (!local->ops->suspend || !local->ops->resume))
+	if (hw->wiphy->wowlan && (!local->ops->suspend || !local->ops->resume))
 		return -EINVAL;
 #endif
 
diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
index 6952760881c8..447f41bbe744 100644
--- a/net/mac80211/mesh.c
+++ b/net/mac80211/mesh.c
@@ -271,8 +271,7 @@ int mesh_add_meshconf_ie(struct ieee80211_sub_if_data *sdata,
 	*pos++ = ifmsh->mesh_auth_id;
 	/* Mesh Formation Info - number of neighbors */
 	neighbors = atomic_read(&ifmsh->estab_plinks);
-	/* Number of neighbor mesh STAs or 15 whichever is smaller */
-	neighbors = (neighbors > 15) ? 15 : neighbors;
+	neighbors = min_t(int, neighbors, IEEE80211_MAX_MESH_PEERINGS);
 	*pos++ = neighbors << 1;
 	/* Mesh capability */
 	*pos = IEEE80211_MESHCONF_CAPAB_FORWARDING;
@@ -417,7 +416,9 @@ int mesh_add_ht_cap_ie(struct ieee80211_sub_if_data *sdata,
 
 	sband = local->hw.wiphy->bands[band];
 	if (!sband->ht_cap.ht_supported ||
-	    sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT)
+	    sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT ||
+	    sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_5 ||
+	    sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_10)
 		return 0;
 
 	if (skb_tailroom(skb) < 2 + sizeof(struct ieee80211_ht_cap))
@@ -573,7 +574,7 @@ static void ieee80211_mesh_housekeeping(struct ieee80211_sub_if_data *sdata)
 	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
 	u32 changed;
 
-	ieee80211_sta_expire(sdata, IEEE80211_MESH_PEER_INACTIVITY_LIMIT);
+	ieee80211_sta_expire(sdata, ifmsh->mshcfg.plink_timeout * HZ);
 	mesh_path_expire(sdata);
 
 	changed = mesh_accept_plinks_update(sdata);
@@ -697,38 +698,38 @@ out_free:
 }
 
 static int
-ieee80211_mesh_rebuild_beacon(struct ieee80211_if_mesh *ifmsh)
+ieee80211_mesh_rebuild_beacon(struct ieee80211_sub_if_data *sdata)
 {
 	struct beacon_data *old_bcn;
 	int ret;
 
-	mutex_lock(&ifmsh->mtx);
-
-	old_bcn = rcu_dereference_protected(ifmsh->beacon,
-					    lockdep_is_held(&ifmsh->mtx));
-	ret = ieee80211_mesh_build_beacon(ifmsh);
+	old_bcn = rcu_dereference_protected(sdata->u.mesh.beacon,
+					    lockdep_is_held(&sdata->wdev.mtx));
+	ret = ieee80211_mesh_build_beacon(&sdata->u.mesh);
 	if (ret)
 		/* just reuse old beacon */
-		goto out;
+		return ret;
 
 	if (old_bcn)
 		kfree_rcu(old_bcn, rcu_head);
-out:
-	mutex_unlock(&ifmsh->mtx);
-	return ret;
+	return 0;
 }
 
 void ieee80211_mbss_info_change_notify(struct ieee80211_sub_if_data *sdata,
 				       u32 changed)
 {
-	if (sdata->vif.bss_conf.enable_beacon &&
-	    (changed & (BSS_CHANGED_BEACON |
-			BSS_CHANGED_HT |
-			BSS_CHANGED_BASIC_RATES |
-			BSS_CHANGED_BEACON_INT)))
-		if (ieee80211_mesh_rebuild_beacon(&sdata->u.mesh))
-			return;
-	ieee80211_bss_info_change_notify(sdata, changed);
+	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+	unsigned long bits = changed;
+	u32 bit;
+
+	if (!bits)
+		return;
+
+	/* if we race with running work, worst case this work becomes a noop */
+	for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE)
+		set_bit(bit, &ifmsh->mbss_changed);
+	set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags);
+	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
 }
 
 int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata)
@@ -740,7 +741,6 @@ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata)
 		      BSS_CHANGED_HT |
 		      BSS_CHANGED_BASIC_RATES |
 		      BSS_CHANGED_BEACON_INT;
-	enum ieee80211_band band = ieee80211_get_sdata_band(sdata);
 
 	local->fif_other_bss++;
 	/* mesh ifaces must set allmulti to forward mcast traffic */
@@ -748,7 +748,6 @@ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata)
 	ieee80211_configure_filter(local);
 
 	ifmsh->mesh_cc_id = 0;	/* Disabled */
-	ifmsh->mesh_auth_id = 0;	/* Disabled */
 	/* register sync ops from extensible synchronization framework */
 	ifmsh->sync_ops = ieee80211_mesh_sync_ops_get(ifmsh->mesh_sp_id);
 	ifmsh->adjusting_tbtt = false;
@@ -759,8 +758,6 @@ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata)
 	sdata->vif.bss_conf.ht_operation_mode =
 		ifmsh->mshcfg.ht_opmode;
 	sdata->vif.bss_conf.enable_beacon = true;
-	sdata->vif.bss_conf.basic_rates =
-		ieee80211_mandatory_rates(local, band);
 
 	changed |= ieee80211_mps_local_status_update(sdata);
 
@@ -788,12 +785,10 @@ void ieee80211_stop_mesh(struct ieee80211_sub_if_data *sdata)
 	sdata->vif.bss_conf.enable_beacon = false;
 	clear_bit(SDATA_STATE_OFFCHANNEL_BEACON_STOPPED, &sdata->state);
 	ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_BEACON_ENABLED);
-	mutex_lock(&ifmsh->mtx);
 	bcn = rcu_dereference_protected(ifmsh->beacon,
-					lockdep_is_held(&ifmsh->mtx));
+					lockdep_is_held(&sdata->wdev.mtx));
 	rcu_assign_pointer(ifmsh->beacon, NULL);
 	kfree_rcu(bcn, rcu_head);
-	mutex_unlock(&ifmsh->mtx);
 
 	/* flush STAs and mpaths on this iface */
 	sta_info_flush(sdata);
@@ -806,14 +801,10 @@ void ieee80211_stop_mesh(struct ieee80211_sub_if_data *sdata)
 	del_timer_sync(&sdata->u.mesh.housekeeping_timer);
 	del_timer_sync(&sdata->u.mesh.mesh_path_root_timer);
 	del_timer_sync(&sdata->u.mesh.mesh_path_timer);
-	/*
-	 * If the timer fired while we waited for it, it will have
-	 * requeued the work. Now the work will be running again
-	 * but will not rearm the timer again because it checks
-	 * whether the interface is running, which, at this point,
-	 * it no longer is.
-	 */
-	cancel_work_sync(&sdata->work);
+
+	/* clear any mesh work (for next join) we may have accrued */
+	ifmsh->wrkq_flags = 0;
+	ifmsh->mbss_changed = 0;
 
 	local->fif_other_bss--;
 	atomic_dec(&local->iff_allmultis);
@@ -954,6 +945,12 @@ void ieee80211_mesh_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
 	struct ieee80211_mgmt *mgmt;
 	u16 stype;
 
+	sdata_lock(sdata);
+
+	/* mesh already went down */
+	if (!sdata->wdev.mesh_id_len)
+		goto out;
+
 	rx_status = IEEE80211_SKB_RXCB(skb);
 	mgmt = (struct ieee80211_mgmt *) skb->data;
 	stype = le16_to_cpu(mgmt->frame_control) & IEEE80211_FCTL_STYPE;
@@ -971,12 +968,42 @@ void ieee80211_mesh_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
 		ieee80211_mesh_rx_mgmt_action(sdata, mgmt, skb->len, rx_status);
 		break;
 	}
+out:
+	sdata_unlock(sdata);
+}
+
+static void mesh_bss_info_changed(struct ieee80211_sub_if_data *sdata)
+{
+	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+	u32 bit, changed = 0;
+
+	for_each_set_bit(bit, &ifmsh->mbss_changed,
+			 sizeof(changed) * BITS_PER_BYTE) {
+		clear_bit(bit, &ifmsh->mbss_changed);
+		changed |= BIT(bit);
+	}
+
+	if (sdata->vif.bss_conf.enable_beacon &&
+	    (changed & (BSS_CHANGED_BEACON |
+			BSS_CHANGED_HT |
+			BSS_CHANGED_BASIC_RATES |
+			BSS_CHANGED_BEACON_INT)))
+		if (ieee80211_mesh_rebuild_beacon(sdata))
+			return;
+
+	ieee80211_bss_info_change_notify(sdata, changed);
 }
 
 void ieee80211_mesh_work(struct ieee80211_sub_if_data *sdata)
 {
 	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
 
+	sdata_lock(sdata);
+
+	/* mesh already went down */
+	if (!sdata->wdev.mesh_id_len)
+		goto out;
+
 	if (ifmsh->preq_queue_len &&
 	    time_after(jiffies,
 		       ifmsh->last_preq + msecs_to_jiffies(ifmsh->mshcfg.dot11MeshHWMPpreqMinInterval)))
@@ -996,6 +1023,11 @@ void ieee80211_mesh_work(struct ieee80211_sub_if_data *sdata)
 
 	if (test_and_clear_bit(MESH_WORK_DRIFT_ADJUST, &ifmsh->wrkq_flags))
 		mesh_sync_adjust_tbtt(sdata);
+
+	if (test_and_clear_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags))
+		mesh_bss_info_changed(sdata);
+out:
+	sdata_unlock(sdata);
 }
 
 void ieee80211_mesh_notify_scan_completed(struct ieee80211_local *local)
@@ -1041,7 +1073,6 @@ void ieee80211_mesh_init_sdata(struct ieee80211_sub_if_data *sdata)
 	spin_lock_init(&ifmsh->mesh_preq_queue_lock);
 	spin_lock_init(&ifmsh->sync_offset_lock);
 	RCU_INIT_POINTER(ifmsh->beacon, NULL);
-	mutex_init(&ifmsh->mtx);
 
 	sdata->vif.bss_conf.bssid = zero_addr;
 }
diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h
index da158774eebb..2bc7fd2f787d 100644
--- a/net/mac80211/mesh.h
+++ b/net/mac80211/mesh.h
@@ -58,6 +58,7 @@ enum mesh_path_flags {
  * @MESH_WORK_ROOT: the mesh root station needs to send a frame
  * @MESH_WORK_DRIFT_ADJUST: time to compensate for clock drift relative to other
  * mesh nodes
+ * @MESH_WORK_MBSS_CHANGED: rebuild beacon and notify driver of BSS changes
  */
 enum mesh_deferred_task_flags {
 	MESH_WORK_HOUSEKEEPING,
@@ -65,6 +66,7 @@ enum mesh_deferred_task_flags {
 	MESH_WORK_GROW_MPP_TABLE,
 	MESH_WORK_ROOT,
 	MESH_WORK_DRIFT_ADJUST,
+	MESH_WORK_MBSS_CHANGED,
 };
 
 /**
@@ -188,7 +190,6 @@ struct mesh_rmc {
 	u32 idx_mask;
 };
 
-#define IEEE80211_MESH_PEER_INACTIVITY_LIMIT	(1800 * HZ)
 #define IEEE80211_MESH_HOUSEKEEPING_INTERVAL	(60 * HZ)
 
 #define MESH_PATH_EXPIRE (600 * HZ)
@@ -324,14 +325,14 @@ static inline
 u32 mesh_plink_inc_estab_count(struct ieee80211_sub_if_data *sdata)
 {
 	atomic_inc(&sdata->u.mesh.estab_plinks);
-	return mesh_accept_plinks_update(sdata);
+	return mesh_accept_plinks_update(sdata) | BSS_CHANGED_BEACON;
 }
 
 static inline
 u32 mesh_plink_dec_estab_count(struct ieee80211_sub_if_data *sdata)
 {
 	atomic_dec(&sdata->u.mesh.estab_plinks);
-	return mesh_accept_plinks_update(sdata);
+	return mesh_accept_plinks_update(sdata) | BSS_CHANGED_BEACON;
 }
 
 static inline int mesh_plink_free_count(struct ieee80211_sub_if_data *sdata)
diff --git a/net/mac80211/mesh_plink.c b/net/mac80211/mesh_plink.c
index 09bebed99416..02c05fa15c20 100644
--- a/net/mac80211/mesh_plink.c
+++ b/net/mac80211/mesh_plink.c
@@ -154,8 +154,14 @@ static u32 mesh_set_ht_prot_mode(struct ieee80211_sub_if_data *sdata)
 	u16 ht_opmode;
 	bool non_ht_sta = false, ht20_sta = false;
 
-	if (sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT)
+	switch (sdata->vif.bss_conf.chandef.width) {
+	case NL80211_CHAN_WIDTH_20_NOHT:
+	case NL80211_CHAN_WIDTH_5:
+	case NL80211_CHAN_WIDTH_10:
 		return 0;
+	default:
+		break;
+	}
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(sta, &local->sta_list, list) {
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 741448b30825..ae31968d42d3 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -91,41 +91,6 @@ MODULE_PARM_DESC(probe_wait_ms,
 #define IEEE80211_SIGNAL_AVE_MIN_COUNT	4
 
 /*
- * All cfg80211 functions have to be called outside a locked
- * section so that they can acquire a lock themselves... This
- * is much simpler than queuing up things in cfg80211, but we
- * do need some indirection for that here.
- */
-enum rx_mgmt_action {
-	/* no action required */
-	RX_MGMT_NONE,
-
-	/* caller must call cfg80211_send_deauth() */
-	RX_MGMT_CFG80211_DEAUTH,
-
-	/* caller must call cfg80211_send_disassoc() */
-	RX_MGMT_CFG80211_DISASSOC,
-
-	/* caller must call cfg80211_send_rx_auth() */
-	RX_MGMT_CFG80211_RX_AUTH,
-
-	/* caller must call cfg80211_send_rx_assoc() */
-	RX_MGMT_CFG80211_RX_ASSOC,
-
-	/* caller must call cfg80211_send_assoc_timeout() */
-	RX_MGMT_CFG80211_ASSOC_TIMEOUT,
-
-	/* used when a processed beacon causes a deauth */
-	RX_MGMT_CFG80211_TX_DEAUTH,
-};
-
-/* utils */
-static inline void ASSERT_MGD_MTX(struct ieee80211_if_managed *ifmgd)
-{
-	lockdep_assert_held(&ifmgd->mtx);
-}
-
-/*
  * We can have multiple work items (and connection probing)
  * scheduling this timer, but we need to take care to only
  * reschedule it when it should fire _earlier_ than it was
@@ -135,13 +100,14 @@ static inline void ASSERT_MGD_MTX(struct ieee80211_if_managed *ifmgd)
  * has happened -- the work that runs from this timer will
  * do that.
  */
-static void run_again(struct ieee80211_if_managed *ifmgd, unsigned long timeout)
+static void run_again(struct ieee80211_sub_if_data *sdata,
+		      unsigned long timeout)
 {
-	ASSERT_MGD_MTX(ifmgd);
+	sdata_assert_lock(sdata);
 
-	if (!timer_pending(&ifmgd->timer) ||
-	    time_before(timeout, ifmgd->timer.expires))
-		mod_timer(&ifmgd->timer, timeout);
+	if (!timer_pending(&sdata->u.mgd.timer) ||
+	    time_before(timeout, sdata->u.mgd.timer.expires))
+		mod_timer(&sdata->u.mgd.timer, timeout);
 }
 
 void ieee80211_sta_reset_beacon_monitor(struct ieee80211_sub_if_data *sdata)
@@ -224,6 +190,12 @@ static u32 chandef_downgrade(struct cfg80211_chan_def *c)
 		c->width = NL80211_CHAN_WIDTH_20_NOHT;
 		ret = IEEE80211_STA_DISABLE_HT | IEEE80211_STA_DISABLE_VHT;
 		break;
+	case NL80211_CHAN_WIDTH_5:
+	case NL80211_CHAN_WIDTH_10:
+		WARN_ON_ONCE(1);
+		/* keep c->width */
+		ret = IEEE80211_STA_DISABLE_HT | IEEE80211_STA_DISABLE_VHT;
+		break;
 	}
 
 	WARN_ON_ONCE(!cfg80211_chandef_valid(c));
@@ -652,7 +624,7 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
 	struct ieee80211_channel *chan;
 	u32 rates = 0;
 
-	lockdep_assert_held(&ifmgd->mtx);
+	sdata_assert_lock(sdata);
 
 	rcu_read_lock();
 	chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
@@ -914,6 +886,10 @@ void ieee80211_send_nullfunc(struct ieee80211_local *local,
 
 	IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT |
 					IEEE80211_TX_INTFL_OFFCHAN_TX_OK;
+
+	if (local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS)
+		IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
+
 	if (ifmgd->flags & (IEEE80211_STA_BEACON_POLL |
 			    IEEE80211_STA_CONNECTION_POLL))
 		IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_USE_MINRATE;
@@ -962,7 +938,7 @@ static void ieee80211_chswitch_work(struct work_struct *work)
 	if (!ieee80211_sdata_running(sdata))
 		return;
 
-	mutex_lock(&ifmgd->mtx);
+	sdata_lock(sdata);
 	if (!ifmgd->associated)
 		goto out;
 
@@ -985,7 +961,7 @@ static void ieee80211_chswitch_work(struct work_struct *work)
 					IEEE80211_QUEUE_STOP_REASON_CSA);
  out:
 	ifmgd->flags &= ~IEEE80211_STA_CSA_RECEIVED;
-	mutex_unlock(&ifmgd->mtx);
+	sdata_unlock(sdata);
 }
 
 void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success)
@@ -1036,7 +1012,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
 	const struct ieee80211_ht_operation *ht_oper;
 	int secondary_channel_offset = -1;
 
-	ASSERT_MGD_MTX(ifmgd);
+	sdata_assert_lock(sdata);
 
 	if (!cbss)
 		return;
@@ -1390,6 +1366,9 @@ static bool ieee80211_powersave_allowed(struct ieee80211_sub_if_data *sdata)
 			  IEEE80211_STA_CONNECTION_POLL))
 		return false;
 
+	if (!mgd->have_beacon)
+		return false;
+
 	rcu_read_lock();
 	sta = sta_info_get(sdata, mgd->bssid);
 	if (sta)
@@ -1798,7 +1777,7 @@ static void ieee80211_set_associated(struct ieee80211_sub_if_data *sdata,
 
 	ieee80211_led_assoc(local, 1);
 
-	if (sdata->u.mgd.assoc_data->have_beacon) {
+	if (sdata->u.mgd.have_beacon) {
 		/*
 		 * If the AP is buggy we may get here with no DTIM period
 		 * known, so assume it's 1 which is the only safe assumption
@@ -1806,8 +1785,10 @@ static void ieee80211_set_associated(struct ieee80211_sub_if_data *sdata,
 		 * probably just won't work at all.
 		 */
 		bss_conf->dtim_period = sdata->u.mgd.dtim_period ?: 1;
-		bss_info_changed |= BSS_CHANGED_DTIM_PERIOD;
+		bss_conf->beacon_rate = bss->beacon_rate;
+		bss_info_changed |= BSS_CHANGED_BEACON_INFO;
 	} else {
+		bss_conf->beacon_rate = NULL;
 		bss_conf->dtim_period = 0;
 	}
 
@@ -1842,7 +1823,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
         struct ieee80211_local *local = sdata->local;
         u32 changed = 0;
 
-        ASSERT_MGD_MTX(ifmgd);
+        sdata_assert_lock(sdata);
 
         if (WARN_ON_ONCE(tx && !frame_buf))
                 return;
@@ -1930,6 +1911,9 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
         del_timer_sync(&sdata->u.mgd.chswitch_timer);
 
         sdata->vif.bss_conf.dtim_period = 0;
+        sdata->vif.bss_conf.beacon_rate = NULL;
+
+        ifmgd->have_beacon = false;
 
         ifmgd->flags = 0;
         ieee80211_vif_release_channel(sdata);
@@ -2051,7 +2035,7 @@ static void ieee80211_mgd_probe_ap_send(struct ieee80211_sub_if_data *sdata)
         }
 
         ifmgd->probe_timeout = jiffies + msecs_to_jiffies(probe_wait_ms);
-        run_again(ifmgd, ifmgd->probe_timeout);
+        run_again(sdata, ifmgd->probe_timeout);
         if (sdata->local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS)
                 ieee80211_flush_queues(sdata->local, sdata);
 }
@@ -2065,7 +2049,7 @@ static void ieee80211_mgd_probe_ap(struct ieee80211_sub_if_data *sdata,
         if (!ieee80211_sdata_running(sdata))
                 return;
 
-        mutex_lock(&ifmgd->mtx);
+        sdata_lock(sdata);
 
         if (!ifmgd->associated)
                 goto out;
@@ -2119,7 +2103,7 @@ static void ieee80211_mgd_probe_ap(struct ieee80211_sub_if_data *sdata,
         ifmgd->probe_send_count = 0;
         ieee80211_mgd_probe_ap_send(sdata);
  out:
-        mutex_unlock(&ifmgd->mtx);
+        sdata_unlock(sdata);
 }
 
 struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw,
@@ -2135,7 +2119,7 @@ struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw,
         if (WARN_ON(sdata->vif.type != NL80211_IFTYPE_STATION))
                 return NULL;
 
-        ASSERT_MGD_MTX(ifmgd);
+        sdata_assert_lock(sdata);
 
         if (ifmgd->associated)
                 cbss = ifmgd->associated;
@@ -2168,9 +2152,9 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
         struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
 
-        mutex_lock(&ifmgd->mtx);
+        sdata_lock(sdata);
         if (!ifmgd->associated) {
-                mutex_unlock(&ifmgd->mtx);
+                sdata_unlock(sdata);
                 return;
         }
 
@@ -2181,13 +2165,10 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
         ieee80211_wake_queues_by_reason(&sdata->local->hw,
                                         IEEE80211_MAX_QUEUE_MAP,
                                         IEEE80211_QUEUE_STOP_REASON_CSA);
-        mutex_unlock(&ifmgd->mtx);
 
-        /*
-         * must be outside lock due to cfg80211,
-         * but that's not a problem.
-         */
-        cfg80211_send_deauth(sdata->dev, frame_buf, IEEE80211_DEAUTH_FRAME_LEN);
+        cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
+                              IEEE80211_DEAUTH_FRAME_LEN);
+        sdata_unlock(sdata);
 }
 
 static void ieee80211_beacon_connection_loss_work(struct work_struct *work)
@@ -2254,7 +2235,7 @@ static void ieee80211_destroy_auth_data(struct ieee80211_sub_if_data *sdata,
 {
         struct ieee80211_mgd_auth_data *auth_data = sdata->u.mgd.auth_data;
 
-        lockdep_assert_held(&sdata->u.mgd.mtx);
+        sdata_assert_lock(sdata);
 
         if (!assoc) {
                 sta_info_destroy_addr(sdata, auth_data->bss->bssid);
@@ -2295,27 +2276,26 @@ static void ieee80211_auth_challenge(struct ieee80211_sub_if_data *sdata,
                             auth_data->key_idx, tx_flags);
 }
 
-static enum rx_mgmt_action __must_check
-ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
-                       struct ieee80211_mgmt *mgmt, size_t len)
+static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+                                   struct ieee80211_mgmt *mgmt, size_t len)
 {
         struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         u8 bssid[ETH_ALEN];
         u16 auth_alg, auth_transaction, status_code;
         struct sta_info *sta;
 
-        lockdep_assert_held(&ifmgd->mtx);
+        sdata_assert_lock(sdata);
 
         if (len < 24 + 6)
-                return RX_MGMT_NONE;
+                return;
 
         if (!ifmgd->auth_data || ifmgd->auth_data->done)
-                return RX_MGMT_NONE;
+                return;
 
         memcpy(bssid, ifmgd->auth_data->bss->bssid, ETH_ALEN);
 
         if (!ether_addr_equal(bssid, mgmt->bssid))
-                return RX_MGMT_NONE;
+                return;
 
         auth_alg = le16_to_cpu(mgmt->u.auth.auth_alg);
         auth_transaction = le16_to_cpu(mgmt->u.auth.auth_transaction);
@@ -2327,14 +2307,15 @@ ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
                            mgmt->sa, auth_alg, ifmgd->auth_data->algorithm,
                            auth_transaction,
                            ifmgd->auth_data->expected_transaction);
-                return RX_MGMT_NONE;
+                return;
         }
 
         if (status_code != WLAN_STATUS_SUCCESS) {
                 sdata_info(sdata, "%pM denied authentication (status %d)\n",
                            mgmt->sa, status_code);
                 ieee80211_destroy_auth_data(sdata, false);
-                return RX_MGMT_CFG80211_RX_AUTH;
+                cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
+                return;
         }
 
         switch (ifmgd->auth_data->algorithm) {
@@ -2347,20 +2328,20 @@ ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
                 if (ifmgd->auth_data->expected_transaction != 4) {
                         ieee80211_auth_challenge(sdata, mgmt, len);
                         /* need another frame */
-                        return RX_MGMT_NONE;
+                        return;
                 }
                 break;
         default:
                 WARN_ONCE(1, "invalid auth alg %d",
                           ifmgd->auth_data->algorithm);
-                return RX_MGMT_NONE;
+                return;
         }
 
         sdata_info(sdata, "authenticated\n");
         ifmgd->auth_data->done = true;
         ifmgd->auth_data->timeout = jiffies + IEEE80211_AUTH_WAIT_ASSOC;
         ifmgd->auth_data->timeout_started = true;
-        run_again(ifmgd, ifmgd->auth_data->timeout);
+        run_again(sdata, ifmgd->auth_data->timeout);
 
         if (ifmgd->auth_data->algorithm == WLAN_AUTH_SAE &&
             ifmgd->auth_data->expected_transaction != 2) {
@@ -2368,7 +2349,8 @@ ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
                  * Report auth frame to user space for processing since another
                  * round of Authentication frames is still needed.
                  */
-                return RX_MGMT_CFG80211_RX_AUTH;
+                cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
+                return;
         }
 
         /* move station state to auth */
@@ -2384,30 +2366,29 @@ ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
         }
         mutex_unlock(&sdata->local->sta_mtx);
 
-        return RX_MGMT_CFG80211_RX_AUTH;
+        cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
+        return;
  out_err:
         mutex_unlock(&sdata->local->sta_mtx);
         /* ignore frame -- wait for timeout */
-        return RX_MGMT_NONE;
 }
 
 
-static enum rx_mgmt_action __must_check
-ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata,
-                         struct ieee80211_mgmt *mgmt, size_t len)
+static void ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata,
+                                     struct ieee80211_mgmt *mgmt, size_t len)
 {
         struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         const u8 *bssid = NULL;
         u16 reason_code;
 
-        lockdep_assert_held(&ifmgd->mtx);
+        sdata_assert_lock(sdata);
 
         if (len < 24 + 2)
-                return RX_MGMT_NONE;
+                return;
 
         if (!ifmgd->associated ||
             !ether_addr_equal(mgmt->bssid, ifmgd->associated->bssid))
-                return RX_MGMT_NONE;
+                return;
 
         bssid = ifmgd->associated->bssid;
 
@@ -2418,25 +2399,24 @@ ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata,
 
         ieee80211_set_disassoc(sdata, 0, 0, false, NULL);
 
-        return RX_MGMT_CFG80211_DEAUTH;
+        cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
 }
 
 
-static enum rx_mgmt_action __must_check
-ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata,
-                           struct ieee80211_mgmt *mgmt, size_t len)
+static void ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata,
+                                       struct ieee80211_mgmt *mgmt, size_t len)
 {
         struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         u16 reason_code;
 
-        lockdep_assert_held(&ifmgd->mtx);
+        sdata_assert_lock(sdata);
 
         if (len < 24 + 2)
-                return RX_MGMT_NONE;
+                return;
 
         if (!ifmgd->associated ||
             !ether_addr_equal(mgmt->bssid, ifmgd->associated->bssid))
-                return RX_MGMT_NONE;
+                return;
 
         reason_code = le16_to_cpu(mgmt->u.disassoc.reason_code);
 
@@ -2445,7 +2425,7 @@ ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata,
 
         ieee80211_set_disassoc(sdata, 0, 0, false, NULL);
 
-        return RX_MGMT_CFG80211_DISASSOC;
+        cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
 }
 
 static void ieee80211_get_rates(struct ieee80211_supported_band *sband,
@@ -2495,7 +2475,7 @@ static void ieee80211_destroy_assoc_data(struct ieee80211_sub_if_data *sdata,
 {
         struct ieee80211_mgd_assoc_data *assoc_data = sdata->u.mgd.assoc_data;
 
-        lockdep_assert_held(&sdata->u.mgd.mtx);
+        sdata_assert_lock(sdata);
 
         if (!assoc) {
                 sta_info_destroy_addr(sdata, assoc_data->bss->bssid);
@@ -2749,10 +2729,9 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
         return ret;
 }
 
-static enum rx_mgmt_action __must_check
-ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
-                             struct ieee80211_mgmt *mgmt, size_t len,
-                             struct cfg80211_bss **bss)
+static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
+                                         struct ieee80211_mgmt *mgmt,
+                                         size_t len)
 {
         struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         struct ieee80211_mgd_assoc_data *assoc_data = ifmgd->assoc_data;
@@ -2760,13 +2739,14 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
         struct ieee802_11_elems elems;
         u8 *pos;
         bool reassoc;
+        struct cfg80211_bss *bss;
 
-        lockdep_assert_held(&ifmgd->mtx);
+        sdata_assert_lock(sdata);
 
         if (!assoc_data)
-                return RX_MGMT_NONE;
+                return;
         if (!ether_addr_equal(assoc_data->bss->bssid, mgmt->bssid))
-                return RX_MGMT_NONE;
+                return;
 
         /*
          * AssocResp and ReassocResp have identical structure, so process both
@@ -2774,7 +2754,7 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
          */
 
         if (len < 24 + 6)
-                return RX_MGMT_NONE;
+                return;
 
         reassoc = ieee80211_is_reassoc_req(mgmt->frame_control);
         capab_info = le16_to_cpu(mgmt->u.assoc_resp.capab_info);
@@ -2801,22 +2781,22 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
                 assoc_data->timeout = jiffies + msecs_to_jiffies(ms);
                 assoc_data->timeout_started = true;
                 if (ms > IEEE80211_ASSOC_TIMEOUT)
-                        run_again(ifmgd, assoc_data->timeout);
-                return RX_MGMT_NONE;
+                        run_again(sdata, assoc_data->timeout);
+                return;
         }
 
-        *bss = assoc_data->bss;
+        bss = assoc_data->bss;
 
         if (status_code != WLAN_STATUS_SUCCESS) {
                 sdata_info(sdata, "%pM denied association (code=%d)\n",
                            mgmt->sa, status_code);
                 ieee80211_destroy_assoc_data(sdata, false);
         } else {
-                if (!ieee80211_assoc_success(sdata, *bss, mgmt, len)) {
+                if (!ieee80211_assoc_success(sdata, bss, mgmt, len)) {
                         /* oops -- internal error -- send timeout for now */
                         ieee80211_destroy_assoc_data(sdata, false);
-                        cfg80211_put_bss(sdata->local->hw.wiphy, *bss);
-                        return RX_MGMT_CFG80211_ASSOC_TIMEOUT;
+                        cfg80211_assoc_timeout(sdata->dev, bss);
+                        return;
                 }
                 sdata_info(sdata, "associated\n");
 
@@ -2828,7 +2808,7 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
                 ieee80211_destroy_assoc_data(sdata, true);
         }
 
-        return RX_MGMT_CFG80211_RX_ASSOC;
+        cfg80211_rx_assoc_resp(sdata->dev, bss, (u8 *)mgmt, len);
 }
 
 static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
@@ -2840,23 +2820,8 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
         int freq;
         struct ieee80211_bss *bss;
         struct ieee80211_channel *channel;
-        bool need_ps = false;
-
-        lockdep_assert_held(&sdata->u.mgd.mtx);
 
-        if ((sdata->u.mgd.associated &&
-             ether_addr_equal(mgmt->bssid, sdata->u.mgd.associated->bssid)) ||
-            (sdata->u.mgd.assoc_data &&
-             ether_addr_equal(mgmt->bssid,
-                              sdata->u.mgd.assoc_data->bss->bssid))) {
-                /* not previously set so we may need to recalc */
-                need_ps = sdata->u.mgd.associated && !sdata->u.mgd.dtim_period;
-
-                if (elems->tim && !elems->parse_error) {
-                        const struct ieee80211_tim_ie *tim_ie = elems->tim;
-                        sdata->u.mgd.dtim_period = tim_ie->dtim_period;
-                }
-        }
+        sdata_assert_lock(sdata);
 
         if (elems->ds_params)
                 freq = ieee80211_channel_to_frequency(elems->ds_params[0],
@@ -2871,19 +2836,15 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
 
         bss = ieee80211_bss_info_update(local, rx_status, mgmt, len, elems,
                                         channel);
-        if (bss)
+        if (bss) {
                 ieee80211_rx_bss_put(local, bss);
+                sdata->vif.bss_conf.beacon_rate = bss->beacon_rate;
+        }
 
         if (!sdata->u.mgd.associated ||
             !ether_addr_equal(mgmt->bssid, sdata->u.mgd.associated->bssid))
                 return;
 
-        if (need_ps) {
-                mutex_lock(&local->iflist_mtx);
-                ieee80211_recalc_ps(local, -1);
-                mutex_unlock(&local->iflist_mtx);
-        }
-
         ieee80211_sta_process_chanswitch(sdata, rx_status->mactime,
                                          elems, true);
 
@@ -2901,7 +2862,7 @@ static void ieee80211_rx_mgmt_probe_resp(struct ieee80211_sub_if_data *sdata,
 
         ifmgd = &sdata->u.mgd;
 
-        ASSERT_MGD_MTX(ifmgd);
+        sdata_assert_lock(sdata);
 
         if (!ether_addr_equal(mgmt->da, sdata->vif.addr))
                 return; /* ignore ProbeResp to foreign address */
@@ -2926,7 +2887,7 @@ static void ieee80211_rx_mgmt_probe_resp(struct ieee80211_sub_if_data *sdata,
                 ifmgd->auth_data->tries = 0;
                 ifmgd->auth_data->timeout = jiffies;
                 ifmgd->auth_data->timeout_started = true;
-                run_again(ifmgd, ifmgd->auth_data->timeout);
+                run_again(sdata, ifmgd->auth_data->timeout);
         }
 }
 
@@ -2951,10 +2912,9 @@ static const u64 care_about_ies =
         (1ULL << WLAN_EID_HT_CAPABILITY) |
         (1ULL << WLAN_EID_HT_OPERATION);
 
-static enum rx_mgmt_action
-ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
-                         struct ieee80211_mgmt *mgmt, size_t len,
-                         u8 *deauth_buf, struct ieee80211_rx_status *rx_status)
+static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
+                                     struct ieee80211_mgmt *mgmt, size_t len,
+                                     struct ieee80211_rx_status *rx_status)
 {
         struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         struct ieee80211_bss_conf *bss_conf = &sdata->vif.bss_conf;
@@ -2969,24 +2929,25 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
         u8 erp_value = 0;
         u32 ncrc;
         u8 *bssid;
+        u8 deauth_buf[IEEE80211_DEAUTH_FRAME_LEN];
 
-        lockdep_assert_held(&ifmgd->mtx);
+        sdata_assert_lock(sdata);
 
         /* Process beacon from the current BSS */
         baselen = (u8 *) mgmt->u.beacon.variable - (u8 *) mgmt;
         if (baselen > len)
-                return RX_MGMT_NONE;
+                return;
 
         rcu_read_lock();
         chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
         if (!chanctx_conf) {
                 rcu_read_unlock();
-                return RX_MGMT_NONE;
+                return;
         }
 
         if (rx_status->freq != chanctx_conf->def.chan->center_freq) {
                 rcu_read_unlock();
-                return RX_MGMT_NONE;
+                return;
         }
         chan = chanctx_conf->def.chan;
         rcu_read_unlock();
@@ -2997,7 +2958,11 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
                                        len - baselen, false, &elems);
 
                 ieee80211_rx_bss_info(sdata, mgmt, len, rx_status, &elems);
-                ifmgd->assoc_data->have_beacon = true;
+                if (elems.tim && !elems.parse_error) {
+                        const struct ieee80211_tim_ie *tim_ie = elems.tim;
+                        ifmgd->dtim_period = tim_ie->dtim_period;
+                }
+                ifmgd->have_beacon = true;
                 ifmgd->assoc_data->need_beacon = false;
                 if (local->hw.flags & IEEE80211_HW_TIMING_BEACON_ONLY) {
                         sdata->vif.bss_conf.sync_tsf =
@@ -3013,13 +2978,13 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
                 /* continue assoc process */
                 ifmgd->assoc_data->timeout = jiffies;
                 ifmgd->assoc_data->timeout_started = true;
-                run_again(ifmgd, ifmgd->assoc_data->timeout);
-                return RX_MGMT_NONE;
+                run_again(sdata, ifmgd->assoc_data->timeout);
+                return;
         }
 
         if (!ifmgd->associated ||
             !ether_addr_equal(mgmt->bssid, ifmgd->associated->bssid))
-                return RX_MGMT_NONE;
+                return;
         bssid = ifmgd->associated->bssid;
 
         /* Track average RSSI from the Beacon frames of the current AP */
@@ -3165,7 +3130,7 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
         }
 
         if (ncrc == ifmgd->beacon_crc && ifmgd->beacon_crc_valid)
-                return RX_MGMT_NONE;
+                return;
         ifmgd->beacon_crc = ncrc;
         ifmgd->beacon_crc_valid = true;
 
@@ -3179,7 +3144,7 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
          * If we haven't had a beacon before, tell the driver about the
          * DTIM period (and beacon timing if desired) now.
          */
-        if (!bss_conf->dtim_period) {
+        if (!ifmgd->have_beacon) {
                 /* a few bogus AP send dtim_period = 0 or no TIM IE */
                 if (elems.tim)
                         bss_conf->dtim_period = elems.tim->dtim_period ?: 1;
@@ -3198,7 +3163,14 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
                         sdata->vif.bss_conf.sync_dtim_count = 0;
                 }
 
-                changed |= BSS_CHANGED_DTIM_PERIOD;
+                changed |= BSS_CHANGED_BEACON_INFO;
+                ifmgd->have_beacon = true;
+
+                mutex_lock(&local->iflist_mtx);
+                ieee80211_recalc_ps(local, -1);
+                mutex_unlock(&local->iflist_mtx);
+
+                ieee80211_recalc_ps_vif(sdata);
         }
 
         if (elems.erp_info) {
@@ -3220,7 +3192,9 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
                 ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
                                        WLAN_REASON_DEAUTH_LEAVING,
                                        true, deauth_buf);
-                return RX_MGMT_CFG80211_TX_DEAUTH;
+                cfg80211_tx_mlme_mgmt(sdata->dev, deauth_buf,
+                                      sizeof(deauth_buf));
+                return;
         }
 
         if (sta && elems.opmode_notif)
@@ -3237,19 +3211,13 @@ ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
                                                elems.pwr_constr_elem);
 
         ieee80211_bss_info_change_notify(sdata, changed);
-
-        return RX_MGMT_NONE;
 }
 
 void ieee80211_sta_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
                                   struct sk_buff *skb)
 {
-        struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         struct ieee80211_rx_status *rx_status;
         struct ieee80211_mgmt *mgmt;
-        struct cfg80211_bss *bss = NULL;
-        enum rx_mgmt_action rma = RX_MGMT_NONE;
-        u8 deauth_buf[IEEE80211_DEAUTH_FRAME_LEN];
         u16 fc;
         struct ieee802_11_elems elems;
         int ies_len;
@@ -3258,28 +3226,27 @@ void ieee80211_sta_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
         mgmt = (struct ieee80211_mgmt *) skb->data;
         fc = le16_to_cpu(mgmt->frame_control);
 
-        mutex_lock(&ifmgd->mtx);
+        sdata_lock(sdata);
 
         switch (fc & IEEE80211_FCTL_STYPE) {
         case IEEE80211_STYPE_BEACON:
-                rma = ieee80211_rx_mgmt_beacon(sdata, mgmt, skb->len,
-                                               deauth_buf, rx_status);
+                ieee80211_rx_mgmt_beacon(sdata, mgmt, skb->len, rx_status);
                 break;
         case IEEE80211_STYPE_PROBE_RESP:
                 ieee80211_rx_mgmt_probe_resp(sdata, skb);
                 break;
         case IEEE80211_STYPE_AUTH:
-                rma = ieee80211_rx_mgmt_auth(sdata, mgmt, skb->len);
+                ieee80211_rx_mgmt_auth(sdata, mgmt, skb->len);
                 break;
         case IEEE80211_STYPE_DEAUTH:
-                rma = ieee80211_rx_mgmt_deauth(sdata, mgmt, skb->len);
+                ieee80211_rx_mgmt_deauth(sdata, mgmt, skb->len);
                 break;
         case IEEE80211_STYPE_DISASSOC:
-                rma = ieee80211_rx_mgmt_disassoc(sdata, mgmt, skb->len);
+                ieee80211_rx_mgmt_disassoc(sdata, mgmt, skb->len);
                 break;
         case IEEE80211_STYPE_ASSOC_RESP:
         case IEEE80211_STYPE_REASSOC_RESP:
-                rma = ieee80211_rx_mgmt_assoc_resp(sdata, mgmt, skb->len, &bss);
+                ieee80211_rx_mgmt_assoc_resp(sdata, mgmt, skb->len);
                 break;
         case IEEE80211_STYPE_ACTION:
                 if (mgmt->u.action.category == WLAN_CATEGORY_SPECTRUM_MGMT) {
@@ -3325,34 +3292,7 @@ void ieee80211_sta_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
                 }
                 break;
         }
-        mutex_unlock(&ifmgd->mtx);
-
-        switch (rma) {
-        case RX_MGMT_NONE:
-                /* no action */
-                break;
-        case RX_MGMT_CFG80211_DEAUTH:
-                cfg80211_send_deauth(sdata->dev, (u8 *)mgmt, skb->len);
-                break;
-        case RX_MGMT_CFG80211_DISASSOC:
-                cfg80211_send_disassoc(sdata->dev, (u8 *)mgmt, skb->len);
-                break;
-        case RX_MGMT_CFG80211_RX_AUTH:
-                cfg80211_send_rx_auth(sdata->dev, (u8 *)mgmt, skb->len);
-                break;
-        case RX_MGMT_CFG80211_RX_ASSOC:
-                cfg80211_send_rx_assoc(sdata->dev, bss, (u8 *)mgmt, skb->len);
-                break;
-        case RX_MGMT_CFG80211_ASSOC_TIMEOUT:
-                cfg80211_send_assoc_timeout(sdata->dev, mgmt->bssid);
-                break;
-        case RX_MGMT_CFG80211_TX_DEAUTH:
-                cfg80211_send_deauth(sdata->dev, deauth_buf,
-                                     sizeof(deauth_buf));
-                break;
-        default:
-                WARN(1, "unexpected: %d", rma);
-        }
+        sdata_unlock(sdata);
 }
 
 static void ieee80211_sta_timer(unsigned long data)
@@ -3366,20 +3306,13 @@ static void ieee80211_sta_timer(unsigned long data)
 static void ieee80211_sta_connection_lost(struct ieee80211_sub_if_data *sdata,
                                           u8 *bssid, u8 reason, bool tx)
 {
-        struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
         u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
 
         ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH, reason,
                                tx, frame_buf);
-        mutex_unlock(&ifmgd->mtx);
 
-        /*
-         * must be outside lock due to cfg80211,
-         * but that's not a problem.
-         */
-        cfg80211_send_deauth(sdata->dev, frame_buf, IEEE80211_DEAUTH_FRAME_LEN);
-
-        mutex_lock(&ifmgd->mtx);
+        cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
+                              IEEE80211_DEAUTH_FRAME_LEN);
 }
 
 static int ieee80211_probe_auth(struct ieee80211_sub_if_data *sdata)
@@ -3389,7 +3322,7 @@ static int ieee80211_probe_auth(struct ieee80211_sub_if_data *sdata)
 	struct ieee80211_mgd_auth_data *auth_data = ifmgd->auth_data;
 	u32 tx_flags = 0;
 
-	lockdep_assert_held(&ifmgd->mtx);
+	sdata_assert_lock(sdata);
 
 	if (WARN_ON_ONCE(!auth_data))
 		return -EINVAL;
@@ -3462,7 +3395,7 @@ static int ieee80211_probe_auth(struct ieee80211_sub_if_data *sdata)
 	if (tx_flags == 0) {
 		auth_data->timeout = jiffies + IEEE80211_AUTH_TIMEOUT;
 		ifmgd->auth_data->timeout_started = true;
-		run_again(ifmgd, auth_data->timeout);
+		run_again(sdata, auth_data->timeout);
 	} else {
 		auth_data->timeout_started = false;
 	}
@@ -3475,7 +3408,7 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata)
 	struct ieee80211_mgd_assoc_data *assoc_data = sdata->u.mgd.assoc_data;
 	struct ieee80211_local *local = sdata->local;
 
-	lockdep_assert_held(&sdata->u.mgd.mtx);
+	sdata_assert_lock(sdata);
 
 	assoc_data->tries++;
 	if (assoc_data->tries > IEEE80211_ASSOC_MAX_TRIES) {
@@ -3499,7 +3432,7 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata)
 	if (!(local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS)) {
 		assoc_data->timeout = jiffies + IEEE80211_ASSOC_TIMEOUT;
 		assoc_data->timeout_started = true;
-		run_again(&sdata->u.mgd, assoc_data->timeout);
+		run_again(sdata, assoc_data->timeout);
 	} else {
 		assoc_data->timeout_started = false;
 	}
@@ -3524,7 +3457,7 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
 	struct ieee80211_local *local = sdata->local;
 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
 
-	mutex_lock(&ifmgd->mtx);
+	sdata_lock(sdata);
 
 	if (ifmgd->status_received) {
 		__le16 fc = ifmgd->status_fc;
@@ -3536,7 +3469,7 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
 		if (status_acked) {
 			ifmgd->auth_data->timeout =
 				jiffies + IEEE80211_AUTH_TIMEOUT_SHORT;
-			run_again(ifmgd, ifmgd->auth_data->timeout);
+			run_again(sdata, ifmgd->auth_data->timeout);
 		} else {
 			ifmgd->auth_data->timeout = jiffies - 1;
 		}
@@ -3547,7 +3480,7 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
 		if (status_acked) {
 			ifmgd->assoc_data->timeout =
 				jiffies + IEEE80211_ASSOC_TIMEOUT_SHORT;
-			run_again(ifmgd, ifmgd->assoc_data->timeout);
+			run_again(sdata, ifmgd->assoc_data->timeout);
 		} else {
 			ifmgd->assoc_data->timeout = jiffies - 1;
 		}
@@ -3570,30 +3503,22 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
 
 			ieee80211_destroy_auth_data(sdata, false);
 
-			mutex_unlock(&ifmgd->mtx);
-			cfg80211_send_auth_timeout(sdata->dev, bssid);
-			mutex_lock(&ifmgd->mtx);
+			cfg80211_auth_timeout(sdata->dev, bssid);
 		}
 	} else if (ifmgd->auth_data && ifmgd->auth_data->timeout_started)
-		run_again(ifmgd, ifmgd->auth_data->timeout);
+		run_again(sdata, ifmgd->auth_data->timeout);
 
 	if (ifmgd->assoc_data && ifmgd->assoc_data->timeout_started &&
 	    time_after(jiffies, ifmgd->assoc_data->timeout)) {
-		if ((ifmgd->assoc_data->need_beacon &&
-		     !ifmgd->assoc_data->have_beacon) ||
+		if ((ifmgd->assoc_data->need_beacon && !ifmgd->have_beacon) ||
 		    ieee80211_do_assoc(sdata)) {
-			u8 bssid[ETH_ALEN];
-
-			memcpy(bssid, ifmgd->assoc_data->bss->bssid, ETH_ALEN);
+			struct cfg80211_bss *bss = ifmgd->assoc_data->bss;
 
 			ieee80211_destroy_assoc_data(sdata, false);
-
-			mutex_unlock(&ifmgd->mtx);
-			cfg80211_send_assoc_timeout(sdata->dev, bssid);
-			mutex_lock(&ifmgd->mtx);
+			cfg80211_assoc_timeout(sdata->dev, bss);
 		}
 	} else if (ifmgd->assoc_data && ifmgd->assoc_data->timeout_started)
-		run_again(ifmgd, ifmgd->assoc_data->timeout);
+		run_again(sdata, ifmgd->assoc_data->timeout);
 
 	if (ifmgd->flags & (IEEE80211_STA_BEACON_POLL |
 			    IEEE80211_STA_CONNECTION_POLL) &&
@@ -3627,7 +3552,7 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
 						    false);
 		}
 	} else if (time_is_after_jiffies(ifmgd->probe_timeout))
-		run_again(ifmgd, ifmgd->probe_timeout);
+		run_again(sdata, ifmgd->probe_timeout);
 	else if (local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS) {
 		mlme_dbg(sdata,
 			 "Failed to send nullfunc to AP %pM after %dms, disconnecting\n",
@@ -3656,7 +3581,7 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
 		}
 	}
 
-	mutex_unlock(&ifmgd->mtx);
+	sdata_unlock(sdata);
 }
 
 static void ieee80211_sta_bcn_mon_timer(unsigned long data)
@@ -3717,9 +3642,9 @@ void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata)
 {
 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
 
-	mutex_lock(&ifmgd->mtx);
+	sdata_lock(sdata);
 	if (!ifmgd->associated) {
-		mutex_unlock(&ifmgd->mtx);
+		sdata_unlock(sdata);
 		return;
 	}
 
@@ -3730,10 +3655,10 @@ void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata)
 					      ifmgd->associated->bssid,
 					      WLAN_REASON_UNSPECIFIED,
 					      true);
-		mutex_unlock(&ifmgd->mtx);
+		sdata_unlock(sdata);
 		return;
 	}
-	mutex_unlock(&ifmgd->mtx);
+	sdata_unlock(sdata);
 }
 #endif
 
@@ -3765,8 +3690,6 @@ void ieee80211_sta_setup_sdata(struct ieee80211_sub_if_data *sdata)
 	ifmgd->uapsd_max_sp_len = sdata->local->hw.uapsd_max_sp_len;
 	ifmgd->p2p_noa_index = -1;
 
-	mutex_init(&ifmgd->mtx);
-
 	if (sdata->local->hw.flags & IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS)
 		ifmgd->req_smps = IEEE80211_SMPS_AUTOMATIC;
 	else
@@ -3923,6 +3846,12 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
 	 */
 	ret = ieee80211_vif_use_channel(sdata, &chandef,
 					IEEE80211_CHANCTX_SHARED);
+
+	/* don't downgrade for 5 and 10 MHz channels, though. */
+	if (chandef.width == NL80211_CHAN_WIDTH_5 ||
+	    chandef.width == NL80211_CHAN_WIDTH_10)
+		return ret;
+
 	while (ret && chandef.width != NL80211_CHAN_WIDTH_20_NOHT) {
 		ifmgd->flags |= chandef_downgrade(&chandef);
 		ret = ieee80211_vif_use_channel(sdata, &chandef,
@@ -4122,8 +4051,6 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
 
 	/* try to authenticate/probe */
 
-	mutex_lock(&ifmgd->mtx);
-
 	if ((ifmgd->auth_data && !ifmgd->auth_data->done) ||
 	    ifmgd->assoc_data) {
 		err = -EBUSY;
@@ -4143,8 +4070,8 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
 				       WLAN_REASON_UNSPECIFIED,
 				       false, frame_buf);
 
-		__cfg80211_send_deauth(sdata->dev, frame_buf,
-				       sizeof(frame_buf));
+		cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
+				      sizeof(frame_buf));
 	}
 
 	sdata_info(sdata, "authenticate with %pM\n", req->bss->bssid);
@@ -4161,8 +4088,7 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
 
 	/* hold our own reference */
 	cfg80211_ref_bss(local->hw.wiphy, auth_data->bss);
-	err = 0;
-	goto out_unlock;
+	return 0;
 
  err_clear:
 	memset(ifmgd->bssid, 0, ETH_ALEN);
@@ -4170,9 +4096,6 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
 	ifmgd->auth_data = NULL;
  err_free:
 	kfree(auth_data);
- out_unlock:
-	mutex_unlock(&ifmgd->mtx);
-
 	return err;
 }
 
@@ -4203,8 +4126,6 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
 		assoc_data->ssid_len = ssidie[1];
 	rcu_read_unlock();
 
-	mutex_lock(&ifmgd->mtx);
-
 	if (ifmgd->associated) {
 		u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
 
@@ -4212,8 +4133,8 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
 				       WLAN_REASON_UNSPECIFIED,
 				       false, frame_buf);
 
-		__cfg80211_send_deauth(sdata->dev, frame_buf,
-				       sizeof(frame_buf));
+		cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
+				      sizeof(frame_buf));
 	}
 
 	if (ifmgd->auth_data && !ifmgd->auth_data->done) {
@@ -4360,6 +4281,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
 
 	ifmgd->assoc_data = assoc_data;
 	ifmgd->dtim_period = 0;
+	ifmgd->have_beacon = false;
 
 	err = ieee80211_prep_connection(sdata, req->bss, true);
 	if (err)
@@ -4391,7 +4313,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
 			ifmgd->dtim_period = tim->dtim_period;
 			dtim_count = tim->dtim_count;
 		}
-		assoc_data->have_beacon = true;
+		ifmgd->have_beacon = true;
 		assoc_data->timeout = jiffies;
 		assoc_data->timeout_started = true;
 
@@ -4407,7 +4329,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
 	}
 	rcu_read_unlock();
 
-	run_again(ifmgd, assoc_data->timeout);
+	run_again(sdata, assoc_data->timeout);
 
 	if (bss->corrupt_data) {
 		char *corrupt_type = "data";
@@ -4423,17 +4345,13 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
 			    corrupt_type);
 	}
 
-	err = 0;
-	goto out;
+	return 0;
  err_clear:
 	memset(ifmgd->bssid, 0, ETH_ALEN);
 	ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_BSSID);
 	ifmgd->assoc_data = NULL;
  err_free:
 	kfree(assoc_data);
- out:
-	mutex_unlock(&ifmgd->mtx);
-
 	return err;
 }
 
@@ -4445,8 +4363,6 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
 	bool tx = !req->local_state_change;
 	bool report_frame = false;
 
-	mutex_lock(&ifmgd->mtx);
-
 	sdata_info(sdata,
 		   "deauthenticating from %pM by local choice (reason=%d)\n",
 		   req->bssid, req->reason_code);
@@ -4458,7 +4374,6 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
 					       req->reason_code, tx,
 					       frame_buf);
 		ieee80211_destroy_auth_data(sdata, false);
-		mutex_unlock(&ifmgd->mtx);
 
 		report_frame = true;
 		goto out;
@@ -4470,12 +4385,11 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
 				       req->reason_code, tx, frame_buf);
 		report_frame = true;
 	}
-	mutex_unlock(&ifmgd->mtx);
 
  out:
 	if (report_frame)
-		__cfg80211_send_deauth(sdata->dev, frame_buf,
-				       IEEE80211_DEAUTH_FRAME_LEN);
+		cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
+				      IEEE80211_DEAUTH_FRAME_LEN);
 
 	return 0;
 }
@@ -4487,18 +4401,14 @@ int ieee80211_mgd_disassoc(struct ieee80211_sub_if_data *sdata,
 	u8 bssid[ETH_ALEN];
 	u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
 
-	mutex_lock(&ifmgd->mtx);
-
 	/*
 	 * cfg80211 should catch this ... but it's racy since
 	 * we can receive a disassoc frame, process it, hand it
 	 * to cfg80211 while that's in a locked section already
 	 * trying to tell us that the user wants to disconnect.
 	 */
-	if (ifmgd->associated != req->bss) {
-		mutex_unlock(&ifmgd->mtx);
+	if (ifmgd->associated != req->bss)
 		return -ENOLINK;
-	}
 
 	sdata_info(sdata,
 		   "disassociating from %pM by local choice (reason=%d)\n",
@@ -4508,10 +4418,9 @@ int ieee80211_mgd_disassoc(struct ieee80211_sub_if_data *sdata,
 	ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DISASSOC,
 			       req->reason_code, !req->local_state_change,
 			       frame_buf);
-	mutex_unlock(&ifmgd->mtx);
 
-	__cfg80211_send_disassoc(sdata->dev, frame_buf,
-				 IEEE80211_DEAUTH_FRAME_LEN);
+	cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
+			      IEEE80211_DEAUTH_FRAME_LEN);
 
 	return 0;
 }
@@ -4531,13 +4440,16 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata)
 	cancel_work_sync(&ifmgd->csa_connection_drop_work);
 	cancel_work_sync(&ifmgd->chswitch_work);
 
-	mutex_lock(&ifmgd->mtx);
-	if (ifmgd->assoc_data)
+	sdata_lock(sdata);
+	if (ifmgd->assoc_data) {
+		struct cfg80211_bss *bss = ifmgd->assoc_data->bss;
 		ieee80211_destroy_assoc_data(sdata, false);
+		cfg80211_assoc_timeout(sdata->dev, bss);
+	}
 	if (ifmgd->auth_data)
 		ieee80211_destroy_auth_data(sdata, false);
 	del_timer_sync(&ifmgd->timer);
-	mutex_unlock(&ifmgd->mtx);
+	sdata_unlock(sdata);
 }
 
 void ieee80211_cqm_rssi_notify(struct ieee80211_vif *vif,
diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
index a02bef35b134..30d58d2d13e2 100644
--- a/net/mac80211/rate.c
+++ b/net/mac80211/rate.c
@@ -397,8 +397,14 @@ static void rate_idx_match_mask(struct ieee80211_tx_rate *rate,
 		return;
 
 	/* if HT BSS, and we handle a data frame, also try HT rates */
-	if (chan_width == NL80211_CHAN_WIDTH_20_NOHT)
+	switch (chan_width) {
+	case NL80211_CHAN_WIDTH_20_NOHT:
+	case NL80211_CHAN_WIDTH_5:
+	case NL80211_CHAN_WIDTH_10:
 		return;
+	default:
+		break;
+	}
 
 	alt_rate.idx = 0;
 	/* keep protection flags */
diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
index 8e2952620256..23dbcfc69b3b 100644
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -258,6 +258,8 @@ ieee80211_add_rx_radiotap_header(struct ieee80211_local *local,
 		pos += 2;
 
 	if (status->flag & RX_FLAG_HT) {
+		unsigned int stbc;
+
 		rthdr->it_present |= cpu_to_le32(1 << IEEE80211_RADIOTAP_MCS);
 		*pos++ = local->hw.radiotap_mcs_details;
 		*pos = 0;
@@ -267,6 +269,8 @@ ieee80211_add_rx_radiotap_header(struct ieee80211_local *local,
 			*pos |= IEEE80211_RADIOTAP_MCS_BW_40;
 		if (status->flag & RX_FLAG_HT_GF)
 			*pos |= IEEE80211_RADIOTAP_MCS_FMT_GF;
+		stbc = (status->flag & RX_FLAG_STBC_MASK) >> RX_FLAG_STBC_SHIFT;
+		*pos |= stbc << IEEE80211_RADIOTAP_MCS_STBC_SHIFT;
 		pos++;
 		*pos++ = status->rate_idx;
 	}
@@ -1372,6 +1376,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
 	struct sk_buff *skb = rx->skb;
 	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+	int i;
 
 	if (!sta)
 		return RX_CONTINUE;
@@ -1422,6 +1427,19 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
 		ewma_add(&sta->avg_signal, -status->signal);
 	}
 
+	if (status->chains) {
+		sta->chains = status->chains;
+		for (i = 0; i < ARRAY_SIZE(status->chain_signal); i++) {
+			int signal = status->chain_signal[i];
+
+			if (!(status->chains & BIT(i)))
+				continue;
+
+			sta->chain_signal_last[i] = signal;
+			ewma_add(&sta->chain_signal_avg[i], -signal);
+		}
+	}
+
 	/*
 	 * Change STA power saving mode only at the end of a frame
 	 * exchange sequence.
@@ -1608,7 +1626,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
 			entry->ccmp = 1;
 			memcpy(entry->last_pn,
 			       rx->key->u.ccmp.rx_pn[queue],
-			       CCMP_PN_LEN);
+			       IEEE80211_CCMP_PN_LEN);
 		}
 		return RX_QUEUED;
 	}
@@ -1627,21 +1645,21 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
 	 * (IEEE 802.11i, 8.3.3.4.5) */
 	if (entry->ccmp) {
 		int i;
-		u8 pn[CCMP_PN_LEN], *rpn;
+		u8 pn[IEEE80211_CCMP_PN_LEN], *rpn;
 		int queue;
 		if (!rx->key || rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP)
 			return RX_DROP_UNUSABLE;
-		memcpy(pn, entry->last_pn, CCMP_PN_LEN);
-		for (i = CCMP_PN_LEN - 1; i >= 0; i--) {
+		memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
+		for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
 			pn[i]++;
 			if (pn[i])
 				break;
 		}
 		queue = rx->security_idx;
 		rpn = rx->key->u.ccmp.rx_pn[queue];
-		if (memcmp(pn, rpn, CCMP_PN_LEN))
+		if (memcmp(pn, rpn, IEEE80211_CCMP_PN_LEN))
 			return RX_DROP_UNUSABLE;
-		memcpy(entry->last_pn, pn, CCMP_PN_LEN);
+		memcpy(entry->last_pn, pn, IEEE80211_CCMP_PN_LEN);
 	}
 
 	skb_pull(rx->skb, ieee80211_hdrlen(fc));
@@ -1729,27 +1747,21 @@ static int ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
 	if (unlikely(!ieee80211_has_protected(fc) &&
 		     ieee80211_is_unicast_robust_mgmt_frame(rx->skb) &&
 		     rx->key)) {
-		if (ieee80211_is_deauth(fc))
-			cfg80211_send_unprot_deauth(rx->sdata->dev,
-						    rx->skb->data,
-						    rx->skb->len);
-		else if (ieee80211_is_disassoc(fc))
-			cfg80211_send_unprot_disassoc(rx->sdata->dev,
-						      rx->skb->data,
-						      rx->skb->len);
+		if (ieee80211_is_deauth(fc) ||
+		    ieee80211_is_disassoc(fc))
+			cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
+						     rx->skb->data,
+						     rx->skb->len);
 		return -EACCES;
 	}
 	/* BIP does not use Protected field, so need to check MMIE */
 	if (unlikely(ieee80211_is_multicast_robust_mgmt_frame(rx->skb) &&
 		     ieee80211_get_mmie_keyidx(rx->skb) < 0)) {
-		if (ieee80211_is_deauth(fc))
-			cfg80211_send_unprot_deauth(rx->sdata->dev,
-						    rx->skb->data,
-						    rx->skb->len);
-		else if (ieee80211_is_disassoc(fc))
-			cfg80211_send_unprot_disassoc(rx->sdata->dev,
-						      rx->skb->data,
-						      rx->skb->len);
+		if (ieee80211_is_deauth(fc) ||
+		    ieee80211_is_disassoc(fc))
+			cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
+						     rx->skb->data,
+						     rx->skb->len);
 		return -EACCES;
 	}
 	/*
diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
index 99b103921a4b..1b122a79b0d8 100644
--- a/net/mac80211/scan.c
+++ b/net/mac80211/scan.c
@@ -140,6 +140,15 @@ ieee80211_bss_info_update(struct ieee80211_local *local,
 		bss->valid_data |= IEEE80211_BSS_VALID_WMM;
 	}
 
+	if (beacon) {
+		struct ieee80211_supported_band *sband =
+			local->hw.wiphy->bands[rx_status->band];
+		if (!(rx_status->flag & RX_FLAG_HT) &&
+		    !(rx_status->flag & RX_FLAG_VHT))
+			bss->beacon_rate =
+				&sband->bitrates[rx_status->rate_idx];
+	}
+
 	return bss;
 }
 
diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
index 11216bc13b27..aeb967a0aeed 100644
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -149,6 +149,7 @@ static void cleanup_single_sta(struct sta_info *sta)
 	 * directly by station destruction.
 	 */
 	for (i = 0; i < IEEE80211_NUM_TIDS; i++) {
+		kfree(sta->ampdu_mlme.tid_start_tx[i]);
 		tid_tx = rcu_dereference_raw(sta->ampdu_mlme.tid_tx[i]);
 		if (!tid_tx)
 			continue;
@@ -346,6 +347,7 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
 	if (ieee80211_vif_is_mesh(&sdata->vif) &&
 	    !sdata->u.mesh.user_mpm)
 		init_timer(&sta->plink_timer);
+	sta->nonpeer_pm = NL80211_MESH_POWER_ACTIVE;
 #endif
 
 	memcpy(sta->sta.addr, addr, ETH_ALEN);
@@ -358,6 +360,8 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
 	do_posix_clock_monotonic_gettime(&uptime);
 	sta->last_connected = uptime.tv_sec;
 	ewma_init(&sta->avg_signal, 1024, 8);
+	for (i = 0; i < ARRAY_SIZE(sta->chain_signal_avg); i++)
+		ewma_init(&sta->chain_signal_avg[i], 1024, 8);
 
 	if (sta_prepare_rate_control(local, sta, gfp)) {
 		kfree(sta);
@@ -1130,6 +1134,7 @@ static void ieee80211_send_null_response(struct ieee80211_sub_if_data *sdata,
 	 * ends the poll/service period.
 	 */
 	info->flags |= IEEE80211_TX_CTL_NO_PS_BUFFER |
+		       IEEE80211_TX_CTL_PS_RESPONSE |
 		       IEEE80211_TX_STATUS_EOSP |
 		       IEEE80211_TX_CTL_REQ_TX_STATUS;
 
@@ -1267,7 +1272,8 @@ ieee80211_sta_ps_deliver_response(struct sta_info *sta,
 		 * STA may still remain is PS mode after this frame
 		 * exchange.
 		 */
-		info->flags |= IEEE80211_TX_CTL_NO_PS_BUFFER;
+		info->flags |= IEEE80211_TX_CTL_NO_PS_BUFFER |
+			       IEEE80211_TX_CTL_PS_RESPONSE;
 
 		/*
 		 * Use MoreData flag to indicate whether there are
diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
index adc30045f99e..4208dbd5861f 100644
--- a/net/mac80211/sta_info.h
+++ b/net/mac80211/sta_info.h
@@ -203,6 +203,7 @@ struct tid_ampdu_rx {
  *	driver requested to close until the work for it runs
  * @mtx: mutex to protect all TX data (except non-NULL assignments
  *	to tid_tx[idx], which are protected by the sta spinlock)
+ *	tid_start_tx is also protected by sta->lock.
  */
 struct sta_ampdu_mlme {
 	struct mutex mtx;
@@ -297,6 +298,9 @@ struct sta_ampdu_mlme {
  * @rcu_head: RCU head used for freeing this station struct
  * @cur_max_bandwidth: maximum bandwidth to use for TX to the station,
  *	taken from HT/VHT capabilities or VHT operating mode notification
+ * @chains: chains ever used for RX from this station
+ * @chain_signal_last: last signal (per chain)
+ * @chain_signal_avg: signal average (per chain)
  */
 struct sta_info {
 	/* General information, mostly static */
@@ -344,6 +348,11 @@ struct sta_info {
 	int last_signal;
 	struct ewma avg_signal;
 	int last_ack_signal;
+
+	u8 chains;
+	s8 chain_signal_last[IEEE80211_MAX_CHAINS];
+	struct ewma chain_signal_avg[IEEE80211_MAX_CHAINS];
+
 	/* Plus 1 for non-QoS frames */
 	__le16 last_seq_ctrl[IEEE80211_NUM_TIDS + 1];
 
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 9972e07a2f96..4105d0ca963e 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -398,13 +398,14 @@ ieee80211_tx_h_multicast_ps_buf(struct ieee80211_tx_data *tx)
 	if (ieee80211_has_order(hdr->frame_control))
 		return TX_CONTINUE;
 
+	if (tx->local->hw.flags & IEEE80211_HW_QUEUE_CONTROL)
+		info->hw_queue = tx->sdata->vif.cab_queue;
+
 	/* no stations in PS mode */
 	if (!atomic_read(&ps->num_sta_ps))
 		return TX_CONTINUE;
 
 	info->flags |= IEEE80211_TX_CTL_SEND_AFTER_DTIM;
-	if (tx->local->hw.flags & IEEE80211_HW_QUEUE_CONTROL)
-		info->hw_queue = tx->sdata->vif.cab_queue;
 
 	/* device releases frame after DTIM beacon */
 	if (!(tx->local->hw.flags & IEEE80211_HW_HOST_BROADCAST_PS_BUFFERING))
@@ -1789,12 +1790,6 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb,
 		break;
 #ifdef CONFIG_MAC80211_MESH
 	case NL80211_IFTYPE_MESH_POINT:
-		if (!sdata->u.mesh.mshcfg.dot11MeshTTL) {
-			/* Do not send frames with mesh_ttl == 0 */
-			sdata->u.mesh.mshstats.dropped_frames_ttl++;
-			goto fail_rcu;
-		}
-
 		if (!is_multicast_ether_addr(skb->data)) {
 			struct sta_info *next_hop;
 			bool mpp_lookup = true;
diff --git a/net/mac80211/util.c b/net/mac80211/util.c
index 72e6292955bb..22654452a561 100644
--- a/net/mac80211/util.c
+++ b/net/mac80211/util.c
@@ -560,6 +560,9 @@ void ieee80211_iterate_active_interfaces(
 	list_for_each_entry(sdata, &local->interfaces, list) {
 		switch (sdata->vif.type) {
 		case NL80211_IFTYPE_MONITOR:
+			if (!(sdata->u.mntr_flags & MONITOR_FLAG_ACTIVE))
+				continue;
+			break;
 		case NL80211_IFTYPE_AP_VLAN:
 			continue;
 		default:
@@ -598,6 +601,9 @@ void ieee80211_iterate_active_interfaces_atomic(
598 list_for_each_entry_rcu(sdata, &local->interfaces, list) { 601 list_for_each_entry_rcu(sdata, &local->interfaces, list) {
599 switch (sdata->vif.type) { 602 switch (sdata->vif.type) {
600 case NL80211_IFTYPE_MONITOR: 603 case NL80211_IFTYPE_MONITOR:
604 if (!(sdata->u.mntr_flags & MONITOR_FLAG_ACTIVE))
605 continue;
606 break;
601 case NL80211_IFTYPE_AP_VLAN: 607 case NL80211_IFTYPE_AP_VLAN:
602 continue; 608 continue;
603 default: 609 default:
@@ -1072,32 +1078,6 @@ void ieee80211_sta_def_wmm_params(struct ieee80211_sub_if_data *sdata,
1072 ieee80211_set_wmm_default(sdata, true); 1078 ieee80211_set_wmm_default(sdata, true);
1073} 1079}
1074 1080
1075u32 ieee80211_mandatory_rates(struct ieee80211_local *local,
1076 enum ieee80211_band band)
1077{
1078 struct ieee80211_supported_band *sband;
1079 struct ieee80211_rate *bitrates;
1080 u32 mandatory_rates;
1081 enum ieee80211_rate_flags mandatory_flag;
1082 int i;
1083
1084 sband = local->hw.wiphy->bands[band];
1085 if (WARN_ON(!sband))
1086 return 1;
1087
1088 if (band == IEEE80211_BAND_2GHZ)
1089 mandatory_flag = IEEE80211_RATE_MANDATORY_B;
1090 else
1091 mandatory_flag = IEEE80211_RATE_MANDATORY_A;
1092
1093 bitrates = sband->bitrates;
1094 mandatory_rates = 0;
1095 for (i = 0; i < sband->n_bitrates; i++)
1096 if (bitrates[i].flags & mandatory_flag)
1097 mandatory_rates |= BIT(i);
1098 return mandatory_rates;
1099}
1100
1101void ieee80211_send_auth(struct ieee80211_sub_if_data *sdata, 1081void ieee80211_send_auth(struct ieee80211_sub_if_data *sdata,
1102 u16 transaction, u16 auth_alg, u16 status, 1082 u16 transaction, u16 auth_alg, u16 status,
1103 const u8 *extra, size_t extra_len, const u8 *da, 1083 const u8 *extra, size_t extra_len, const u8 *da,
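The removed ieee80211_mandatory_rates() helper (its logic now lives in cfg80211) simply builds a bitmap of the rates carrying the band's mandatory flag. A hedged userspace sketch of that computation, with simplified illustrative types and flag values:

```c
#include <assert.h>
#include <stdint.h>

#define RATE_MANDATORY_A 0x1   /* illustrative flag values, not the kernel's */
#define RATE_MANDATORY_B 0x2

struct rate { int flags; };

/* Build a bitmap with bit i set for every rate carrying the band's
 * mandatory flag, as the removed helper did over sband->bitrates. */
static uint32_t mandatory_rates(const struct rate *bitrates, int n, int flag)
{
    uint32_t mask = 0;
    for (int i = 0; i < n; i++)
        if (bitrates[i].flags & flag)
            mask |= 1u << i;
    return mask;
}
```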
@@ -1604,12 +1584,13 @@ int ieee80211_reconfig(struct ieee80211_local *local)
1604 BSS_CHANGED_ARP_FILTER | 1584 BSS_CHANGED_ARP_FILTER |
1605 BSS_CHANGED_PS; 1585 BSS_CHANGED_PS;
1606 1586
1607 if (sdata->u.mgd.dtim_period) 1587 /* Re-send beacon info report to the driver */
1608 changed |= BSS_CHANGED_DTIM_PERIOD; 1588 if (sdata->u.mgd.have_beacon)
1589 changed |= BSS_CHANGED_BEACON_INFO;
1609 1590
1610 mutex_lock(&sdata->u.mgd.mtx); 1591 sdata_lock(sdata);
1611 ieee80211_bss_info_change_notify(sdata, changed); 1592 ieee80211_bss_info_change_notify(sdata, changed);
1612 mutex_unlock(&sdata->u.mgd.mtx); 1593 sdata_unlock(sdata);
1613 break; 1594 break;
1614 case NL80211_IFTYPE_ADHOC: 1595 case NL80211_IFTYPE_ADHOC:
1615 changed |= BSS_CHANGED_IBSS; 1596 changed |= BSS_CHANGED_IBSS;
diff --git a/net/mac80211/vht.c b/net/mac80211/vht.c
index 171344d4eb7c..97c289414e32 100644
--- a/net/mac80211/vht.c
+++ b/net/mac80211/vht.c
@@ -396,7 +396,7 @@ void ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
396 new_bw = ieee80211_sta_cur_vht_bw(sta); 396 new_bw = ieee80211_sta_cur_vht_bw(sta);
397 if (new_bw != sta->sta.bandwidth) { 397 if (new_bw != sta->sta.bandwidth) {
398 sta->sta.bandwidth = new_bw; 398 sta->sta.bandwidth = new_bw;
399 changed |= IEEE80211_RC_NSS_CHANGED; 399 changed |= IEEE80211_RC_BW_CHANGED;
400 } 400 }
401 401
402 change: 402 change:
diff --git a/net/mac80211/wep.c b/net/mac80211/wep.c
index c04d401dae92..6ee2b5863572 100644
--- a/net/mac80211/wep.c
+++ b/net/mac80211/wep.c
@@ -28,7 +28,7 @@
28int ieee80211_wep_init(struct ieee80211_local *local) 28int ieee80211_wep_init(struct ieee80211_local *local)
29{ 29{
30 /* start WEP IV from a random value */ 30 /* start WEP IV from a random value */
31 get_random_bytes(&local->wep_iv, WEP_IV_LEN); 31 get_random_bytes(&local->wep_iv, IEEE80211_WEP_IV_LEN);
32 32
33 local->wep_tx_tfm = crypto_alloc_cipher("arc4", 0, CRYPTO_ALG_ASYNC); 33 local->wep_tx_tfm = crypto_alloc_cipher("arc4", 0, CRYPTO_ALG_ASYNC);
34 if (IS_ERR(local->wep_tx_tfm)) { 34 if (IS_ERR(local->wep_tx_tfm)) {
@@ -98,20 +98,21 @@ static u8 *ieee80211_wep_add_iv(struct ieee80211_local *local,
98 98
99 hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED); 99 hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
100 100
101 if (WARN_ON(skb_tailroom(skb) < WEP_ICV_LEN || 101 if (WARN_ON(skb_tailroom(skb) < IEEE80211_WEP_ICV_LEN ||
102 skb_headroom(skb) < WEP_IV_LEN)) 102 skb_headroom(skb) < IEEE80211_WEP_IV_LEN))
103 return NULL; 103 return NULL;
104 104
105 hdrlen = ieee80211_hdrlen(hdr->frame_control); 105 hdrlen = ieee80211_hdrlen(hdr->frame_control);
106 newhdr = skb_push(skb, WEP_IV_LEN); 106 newhdr = skb_push(skb, IEEE80211_WEP_IV_LEN);
107 memmove(newhdr, newhdr + WEP_IV_LEN, hdrlen); 107 memmove(newhdr, newhdr + IEEE80211_WEP_IV_LEN, hdrlen);
108 108
109 /* the HW only needs room for the IV, but not the actual IV */ 109 /* the HW only needs room for the IV, but not the actual IV */
110 if (info->control.hw_key && 110 if (info->control.hw_key &&
111 (info->control.hw_key->flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE)) 111 (info->control.hw_key->flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE))
112 return newhdr + hdrlen; 112 return newhdr + hdrlen;
113 113
114 skb_set_network_header(skb, skb_network_offset(skb) + WEP_IV_LEN); 114 skb_set_network_header(skb, skb_network_offset(skb) +
115 IEEE80211_WEP_IV_LEN);
115 ieee80211_wep_get_iv(local, keylen, keyidx, newhdr + hdrlen); 116 ieee80211_wep_get_iv(local, keylen, keyidx, newhdr + hdrlen);
116 return newhdr + hdrlen; 117 return newhdr + hdrlen;
117} 118}
@@ -125,8 +126,8 @@ static void ieee80211_wep_remove_iv(struct ieee80211_local *local,
125 unsigned int hdrlen; 126 unsigned int hdrlen;
126 127
127 hdrlen = ieee80211_hdrlen(hdr->frame_control); 128 hdrlen = ieee80211_hdrlen(hdr->frame_control);
128 memmove(skb->data + WEP_IV_LEN, skb->data, hdrlen); 129 memmove(skb->data + IEEE80211_WEP_IV_LEN, skb->data, hdrlen);
129 skb_pull(skb, WEP_IV_LEN); 130 skb_pull(skb, IEEE80211_WEP_IV_LEN);
130} 131}
131 132
132 133
@@ -146,7 +147,7 @@ int ieee80211_wep_encrypt_data(struct crypto_cipher *tfm, u8 *rc4key,
146 put_unaligned(icv, (__le32 *)(data + data_len)); 147 put_unaligned(icv, (__le32 *)(data + data_len));
147 148
148 crypto_cipher_setkey(tfm, rc4key, klen); 149 crypto_cipher_setkey(tfm, rc4key, klen);
149 for (i = 0; i < data_len + WEP_ICV_LEN; i++) 150 for (i = 0; i < data_len + IEEE80211_WEP_ICV_LEN; i++)
150 crypto_cipher_encrypt_one(tfm, data + i, data + i); 151 crypto_cipher_encrypt_one(tfm, data + i, data + i);
151 152
152 return 0; 153 return 0;
@@ -172,7 +173,7 @@ int ieee80211_wep_encrypt(struct ieee80211_local *local,
172 if (!iv) 173 if (!iv)
173 return -1; 174 return -1;
174 175
175 len = skb->len - (iv + WEP_IV_LEN - skb->data); 176 len = skb->len - (iv + IEEE80211_WEP_IV_LEN - skb->data);
176 177
177 /* Prepend 24-bit IV to RC4 key */ 178 /* Prepend 24-bit IV to RC4 key */
178 memcpy(rc4key, iv, 3); 179 memcpy(rc4key, iv, 3);
@@ -181,10 +182,10 @@ int ieee80211_wep_encrypt(struct ieee80211_local *local,
181 memcpy(rc4key + 3, key, keylen); 182 memcpy(rc4key + 3, key, keylen);
182 183
183 /* Add room for ICV */ 184 /* Add room for ICV */
184 skb_put(skb, WEP_ICV_LEN); 185 skb_put(skb, IEEE80211_WEP_ICV_LEN);
185 186
186 return ieee80211_wep_encrypt_data(local->wep_tx_tfm, rc4key, keylen + 3, 187 return ieee80211_wep_encrypt_data(local->wep_tx_tfm, rc4key, keylen + 3,
187 iv + WEP_IV_LEN, len); 188 iv + IEEE80211_WEP_IV_LEN, len);
188} 189}
189 190
190 191
@@ -201,11 +202,11 @@ int ieee80211_wep_decrypt_data(struct crypto_cipher *tfm, u8 *rc4key,
201 return -1; 202 return -1;
202 203
203 crypto_cipher_setkey(tfm, rc4key, klen); 204 crypto_cipher_setkey(tfm, rc4key, klen);
204 for (i = 0; i < data_len + WEP_ICV_LEN; i++) 205 for (i = 0; i < data_len + IEEE80211_WEP_ICV_LEN; i++)
205 crypto_cipher_decrypt_one(tfm, data + i, data + i); 206 crypto_cipher_decrypt_one(tfm, data + i, data + i);
206 207
207 crc = cpu_to_le32(~crc32_le(~0, data, data_len)); 208 crc = cpu_to_le32(~crc32_le(~0, data, data_len));
208 if (memcmp(&crc, data + data_len, WEP_ICV_LEN) != 0) 209 if (memcmp(&crc, data + data_len, IEEE80211_WEP_ICV_LEN) != 0)
209 /* ICV mismatch */ 210 /* ICV mismatch */
210 return -1; 211 return -1;
211 212
@@ -237,10 +238,10 @@ static int ieee80211_wep_decrypt(struct ieee80211_local *local,
237 return -1; 238 return -1;
238 239
239 hdrlen = ieee80211_hdrlen(hdr->frame_control); 240 hdrlen = ieee80211_hdrlen(hdr->frame_control);
240 if (skb->len < hdrlen + WEP_IV_LEN + WEP_ICV_LEN) 241 if (skb->len < hdrlen + IEEE80211_WEP_IV_LEN + IEEE80211_WEP_ICV_LEN)
241 return -1; 242 return -1;
242 243
243 len = skb->len - hdrlen - WEP_IV_LEN - WEP_ICV_LEN; 244 len = skb->len - hdrlen - IEEE80211_WEP_IV_LEN - IEEE80211_WEP_ICV_LEN;
244 245
245 keyidx = skb->data[hdrlen + 3] >> 6; 246 keyidx = skb->data[hdrlen + 3] >> 6;
246 247
@@ -256,16 +257,16 @@ static int ieee80211_wep_decrypt(struct ieee80211_local *local,
256 memcpy(rc4key + 3, key->conf.key, key->conf.keylen); 257 memcpy(rc4key + 3, key->conf.key, key->conf.keylen);
257 258
258 if (ieee80211_wep_decrypt_data(local->wep_rx_tfm, rc4key, klen, 259 if (ieee80211_wep_decrypt_data(local->wep_rx_tfm, rc4key, klen,
259 skb->data + hdrlen + WEP_IV_LEN, 260 skb->data + hdrlen +
260 len)) 261 IEEE80211_WEP_IV_LEN, len))
261 ret = -1; 262 ret = -1;
262 263
263 /* Trim ICV */ 264 /* Trim ICV */
264 skb_trim(skb, skb->len - WEP_ICV_LEN); 265 skb_trim(skb, skb->len - IEEE80211_WEP_ICV_LEN);
265 266
266 /* Remove IV */ 267 /* Remove IV */
267 memmove(skb->data + WEP_IV_LEN, skb->data, hdrlen); 268 memmove(skb->data + IEEE80211_WEP_IV_LEN, skb->data, hdrlen);
268 skb_pull(skb, WEP_IV_LEN); 269 skb_pull(skb, IEEE80211_WEP_IV_LEN);
269 270
270 return ret; 271 return ret;
271} 272}
@@ -305,13 +306,14 @@ ieee80211_crypto_wep_decrypt(struct ieee80211_rx_data *rx)
305 if (ieee80211_wep_decrypt(rx->local, rx->skb, rx->key)) 306 if (ieee80211_wep_decrypt(rx->local, rx->skb, rx->key))
306 return RX_DROP_UNUSABLE; 307 return RX_DROP_UNUSABLE;
307 } else if (!(status->flag & RX_FLAG_IV_STRIPPED)) { 308 } else if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
308 if (!pskb_may_pull(rx->skb, ieee80211_hdrlen(fc) + WEP_IV_LEN)) 309 if (!pskb_may_pull(rx->skb, ieee80211_hdrlen(fc) +
310 IEEE80211_WEP_IV_LEN))
309 return RX_DROP_UNUSABLE; 311 return RX_DROP_UNUSABLE;
310 if (rx->sta && ieee80211_wep_is_weak_iv(rx->skb, rx->key)) 312 if (rx->sta && ieee80211_wep_is_weak_iv(rx->skb, rx->key))
311 rx->sta->wep_weak_iv_count++; 313 rx->sta->wep_weak_iv_count++;
312 ieee80211_wep_remove_iv(rx->local, rx->skb, rx->key); 314 ieee80211_wep_remove_iv(rx->local, rx->skb, rx->key);
313 /* remove ICV */ 315 /* remove ICV */
314 if (pskb_trim(rx->skb, rx->skb->len - WEP_ICV_LEN)) 316 if (pskb_trim(rx->skb, rx->skb->len - IEEE80211_WEP_ICV_LEN))
315 return RX_DROP_UNUSABLE; 317 return RX_DROP_UNUSABLE;
316 } 318 }
317 319
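For context on the WEP_ICV_LEN/WEP_IV_LEN renames above: the ICV being sized here is a plain CRC-32 of the plaintext, appended little-endian before RC4 runs over payload plus ICV, and checked the same way after decryption. A hedged userspace sketch (not kernel code; the kernel uses crc32_le() from lib/crc32.c):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Bit-by-bit CRC-32 (IEEE polynomial, reflected), equivalent in
 * result to the kernel's ~crc32_le(~0, ...) for this purpose. */
static uint32_t crc32_ieee(const uint8_t *p, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1)));
    }
    return ~crc;
}

/* Append the 4-byte ICV little-endian, as ieee80211_wep_encrypt_data()
 * does before the RC4 pass covers payload + ICV. */
static void wep_put_icv(uint8_t *data, size_t len)
{
    uint32_t icv = crc32_ieee(data, len);
    for (int i = 0; i < 4; i++)
        data[len + i] = (icv >> (8 * i)) & 0xff;
}

/* Post-decrypt check mirroring ieee80211_wep_decrypt_data(). */
static int wep_check_icv(const uint8_t *data, size_t len)
{
    uint8_t expect[4];
    uint32_t icv = crc32_ieee(data, len);
    for (int i = 0; i < 4; i++)
        expect[i] = (icv >> (8 * i)) & 0xff;
    return memcmp(expect, data + len, 4) == 0;   /* 1 = ICV ok */
}
```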
diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
index c7c6d644486f..c9edfcb7a13b 100644
--- a/net/mac80211/wpa.c
+++ b/net/mac80211/wpa.c
@@ -62,10 +62,10 @@ ieee80211_tx_h_michael_mic_add(struct ieee80211_tx_data *tx)
62 62
63 tail = MICHAEL_MIC_LEN; 63 tail = MICHAEL_MIC_LEN;
64 if (!info->control.hw_key) 64 if (!info->control.hw_key)
65 tail += TKIP_ICV_LEN; 65 tail += IEEE80211_TKIP_ICV_LEN;
66 66
67 if (WARN_ON(skb_tailroom(skb) < tail || 67 if (WARN_ON(skb_tailroom(skb) < tail ||
68 skb_headroom(skb) < TKIP_IV_LEN)) 68 skb_headroom(skb) < IEEE80211_TKIP_IV_LEN))
69 return TX_DROP; 69 return TX_DROP;
70 70
71 key = &tx->key->conf.key[NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY]; 71 key = &tx->key->conf.key[NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY];
@@ -198,15 +198,16 @@ static int tkip_encrypt_skb(struct ieee80211_tx_data *tx, struct sk_buff *skb)
198 if (info->control.hw_key) 198 if (info->control.hw_key)
199 tail = 0; 199 tail = 0;
200 else 200 else
201 tail = TKIP_ICV_LEN; 201 tail = IEEE80211_TKIP_ICV_LEN;
202 202
203 if (WARN_ON(skb_tailroom(skb) < tail || 203 if (WARN_ON(skb_tailroom(skb) < tail ||
204 skb_headroom(skb) < TKIP_IV_LEN)) 204 skb_headroom(skb) < IEEE80211_TKIP_IV_LEN))
205 return -1; 205 return -1;
206 206
207 pos = skb_push(skb, TKIP_IV_LEN); 207 pos = skb_push(skb, IEEE80211_TKIP_IV_LEN);
208 memmove(pos, pos + TKIP_IV_LEN, hdrlen); 208 memmove(pos, pos + IEEE80211_TKIP_IV_LEN, hdrlen);
209 skb_set_network_header(skb, skb_network_offset(skb) + TKIP_IV_LEN); 209 skb_set_network_header(skb, skb_network_offset(skb) +
210 IEEE80211_TKIP_IV_LEN);
210 pos += hdrlen; 211 pos += hdrlen;
211 212
212 /* the HW only needs room for the IV, but not the actual IV */ 213 /* the HW only needs room for the IV, but not the actual IV */
@@ -227,7 +228,7 @@ static int tkip_encrypt_skb(struct ieee80211_tx_data *tx, struct sk_buff *skb)
227 return 0; 228 return 0;
228 229
229 /* Add room for ICV */ 230 /* Add room for ICV */
230 skb_put(skb, TKIP_ICV_LEN); 231 skb_put(skb, IEEE80211_TKIP_ICV_LEN);
231 232
232 return ieee80211_tkip_encrypt_data(tx->local->wep_tx_tfm, 233 return ieee80211_tkip_encrypt_data(tx->local->wep_tx_tfm,
233 key, skb, pos, len); 234 key, skb, pos, len);
@@ -290,11 +291,11 @@ ieee80211_crypto_tkip_decrypt(struct ieee80211_rx_data *rx)
290 return RX_DROP_UNUSABLE; 291 return RX_DROP_UNUSABLE;
291 292
292 /* Trim ICV */ 293 /* Trim ICV */
293 skb_trim(skb, skb->len - TKIP_ICV_LEN); 294 skb_trim(skb, skb->len - IEEE80211_TKIP_ICV_LEN);
294 295
295 /* Remove IV */ 296 /* Remove IV */
296 memmove(skb->data + TKIP_IV_LEN, skb->data, hdrlen); 297 memmove(skb->data + IEEE80211_TKIP_IV_LEN, skb->data, hdrlen);
297 skb_pull(skb, TKIP_IV_LEN); 298 skb_pull(skb, IEEE80211_TKIP_IV_LEN);
298 299
299 return RX_CONTINUE; 300 return RX_CONTINUE;
300} 301}
@@ -337,9 +338,9 @@ static void ccmp_special_blocks(struct sk_buff *skb, u8 *pn, u8 *scratch,
337 else 338 else
338 qos_tid = 0; 339 qos_tid = 0;
339 340
340 data_len = skb->len - hdrlen - CCMP_HDR_LEN; 341 data_len = skb->len - hdrlen - IEEE80211_CCMP_HDR_LEN;
341 if (encrypted) 342 if (encrypted)
342 data_len -= CCMP_MIC_LEN; 343 data_len -= IEEE80211_CCMP_MIC_LEN;
343 344
344 /* First block, b_0 */ 345 /* First block, b_0 */
345 b_0[0] = 0x59; /* flags: Adata: 1, M: 011, L: 001 */ 346 b_0[0] = 0x59; /* flags: Adata: 1, M: 011, L: 001 */
@@ -348,7 +349,7 @@ static void ccmp_special_blocks(struct sk_buff *skb, u8 *pn, u8 *scratch,
348 */ 349 */
349 b_0[1] = qos_tid | (mgmt << 4); 350 b_0[1] = qos_tid | (mgmt << 4);
350 memcpy(&b_0[2], hdr->addr2, ETH_ALEN); 351 memcpy(&b_0[2], hdr->addr2, ETH_ALEN);
351 memcpy(&b_0[8], pn, CCMP_PN_LEN); 352 memcpy(&b_0[8], pn, IEEE80211_CCMP_PN_LEN);
352 /* l(m) */ 353 /* l(m) */
353 put_unaligned_be16(data_len, &b_0[14]); 354 put_unaligned_be16(data_len, &b_0[14]);
354 355
@@ -424,15 +425,16 @@ static int ccmp_encrypt_skb(struct ieee80211_tx_data *tx, struct sk_buff *skb)
424 if (info->control.hw_key) 425 if (info->control.hw_key)
425 tail = 0; 426 tail = 0;
426 else 427 else
427 tail = CCMP_MIC_LEN; 428 tail = IEEE80211_CCMP_MIC_LEN;
428 429
429 if (WARN_ON(skb_tailroom(skb) < tail || 430 if (WARN_ON(skb_tailroom(skb) < tail ||
430 skb_headroom(skb) < CCMP_HDR_LEN)) 431 skb_headroom(skb) < IEEE80211_CCMP_HDR_LEN))
431 return -1; 432 return -1;
432 433
433 pos = skb_push(skb, CCMP_HDR_LEN); 434 pos = skb_push(skb, IEEE80211_CCMP_HDR_LEN);
434 memmove(pos, pos + CCMP_HDR_LEN, hdrlen); 435 memmove(pos, pos + IEEE80211_CCMP_HDR_LEN, hdrlen);
435 skb_set_network_header(skb, skb_network_offset(skb) + CCMP_HDR_LEN); 436 skb_set_network_header(skb, skb_network_offset(skb) +
437 IEEE80211_CCMP_HDR_LEN);
436 438
437 /* the HW only needs room for the IV, but not the actual IV */ 439 /* the HW only needs room for the IV, but not the actual IV */
438 if (info->control.hw_key && 440 if (info->control.hw_key &&
@@ -457,10 +459,10 @@ static int ccmp_encrypt_skb(struct ieee80211_tx_data *tx, struct sk_buff *skb)
457 if (info->control.hw_key) 459 if (info->control.hw_key)
458 return 0; 460 return 0;
459 461
460 pos += CCMP_HDR_LEN; 462 pos += IEEE80211_CCMP_HDR_LEN;
461 ccmp_special_blocks(skb, pn, scratch, 0); 463 ccmp_special_blocks(skb, pn, scratch, 0);
462 ieee80211_aes_ccm_encrypt(key->u.ccmp.tfm, scratch, pos, len, 464 ieee80211_aes_ccm_encrypt(key->u.ccmp.tfm, scratch, pos, len,
463 pos, skb_put(skb, CCMP_MIC_LEN)); 465 pos, skb_put(skb, IEEE80211_CCMP_MIC_LEN));
464 466
465 return 0; 467 return 0;
466} 468}
@@ -490,7 +492,7 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx)
490 struct ieee80211_key *key = rx->key; 492 struct ieee80211_key *key = rx->key;
491 struct sk_buff *skb = rx->skb; 493 struct sk_buff *skb = rx->skb;
492 struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb); 494 struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
493 u8 pn[CCMP_PN_LEN]; 495 u8 pn[IEEE80211_CCMP_PN_LEN];
494 int data_len; 496 int data_len;
495 int queue; 497 int queue;
496 498
@@ -500,12 +502,13 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx)
500 !ieee80211_is_robust_mgmt_frame(hdr)) 502 !ieee80211_is_robust_mgmt_frame(hdr))
501 return RX_CONTINUE; 503 return RX_CONTINUE;
502 504
503 data_len = skb->len - hdrlen - CCMP_HDR_LEN - CCMP_MIC_LEN; 505 data_len = skb->len - hdrlen - IEEE80211_CCMP_HDR_LEN -
506 IEEE80211_CCMP_MIC_LEN;
504 if (!rx->sta || data_len < 0) 507 if (!rx->sta || data_len < 0)
505 return RX_DROP_UNUSABLE; 508 return RX_DROP_UNUSABLE;
506 509
507 if (status->flag & RX_FLAG_DECRYPTED) { 510 if (status->flag & RX_FLAG_DECRYPTED) {
508 if (!pskb_may_pull(rx->skb, hdrlen + CCMP_HDR_LEN)) 511 if (!pskb_may_pull(rx->skb, hdrlen + IEEE80211_CCMP_HDR_LEN))
509 return RX_DROP_UNUSABLE; 512 return RX_DROP_UNUSABLE;
510 } else { 513 } else {
511 if (skb_linearize(rx->skb)) 514 if (skb_linearize(rx->skb))
@@ -516,7 +519,7 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx)
516 519
517 queue = rx->security_idx; 520 queue = rx->security_idx;
518 521
519 if (memcmp(pn, key->u.ccmp.rx_pn[queue], CCMP_PN_LEN) <= 0) { 522 if (memcmp(pn, key->u.ccmp.rx_pn[queue], IEEE80211_CCMP_PN_LEN) <= 0) {
520 key->u.ccmp.replays++; 523 key->u.ccmp.replays++;
521 return RX_DROP_UNUSABLE; 524 return RX_DROP_UNUSABLE;
522 } 525 }
@@ -528,19 +531,20 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx)
528 531
529 if (ieee80211_aes_ccm_decrypt( 532 if (ieee80211_aes_ccm_decrypt(
530 key->u.ccmp.tfm, scratch, 533 key->u.ccmp.tfm, scratch,
531 skb->data + hdrlen + CCMP_HDR_LEN, data_len, 534 skb->data + hdrlen + IEEE80211_CCMP_HDR_LEN,
532 skb->data + skb->len - CCMP_MIC_LEN, 535 data_len,
533 skb->data + hdrlen + CCMP_HDR_LEN)) 536 skb->data + skb->len - IEEE80211_CCMP_MIC_LEN,
537 skb->data + hdrlen + IEEE80211_CCMP_HDR_LEN))
534 return RX_DROP_UNUSABLE; 538 return RX_DROP_UNUSABLE;
535 } 539 }
536 540
537 memcpy(key->u.ccmp.rx_pn[queue], pn, CCMP_PN_LEN); 541 memcpy(key->u.ccmp.rx_pn[queue], pn, IEEE80211_CCMP_PN_LEN);
538 542
539 /* Remove CCMP header and MIC */ 543 /* Remove CCMP header and MIC */
540 if (pskb_trim(skb, skb->len - CCMP_MIC_LEN)) 544 if (pskb_trim(skb, skb->len - IEEE80211_CCMP_MIC_LEN))
541 return RX_DROP_UNUSABLE; 545 return RX_DROP_UNUSABLE;
542 memmove(skb->data + CCMP_HDR_LEN, skb->data, hdrlen); 546 memmove(skb->data + IEEE80211_CCMP_HDR_LEN, skb->data, hdrlen);
543 skb_pull(skb, CCMP_HDR_LEN); 547 skb_pull(skb, IEEE80211_CCMP_HDR_LEN);
544 548
545 return RX_CONTINUE; 549 return RX_CONTINUE;
546} 550}
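The CCMP replay check above, memcmp(pn, rx_pn, IEEE80211_CCMP_PN_LEN) <= 0, works because the 48-bit packet number is stored big-endian, so byte-wise comparison matches numeric ordering. A hedged standalone model of that property:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CCMP_PN_LEN 6

/* Store a 48-bit packet number big-endian, the layout that makes
 * memcmp() ordering equal numeric ordering in the replay check. */
static void pn_store(uint8_t pn[CCMP_PN_LEN], uint64_t v)
{
    for (int i = CCMP_PN_LEN - 1; i >= 0; i--) {
        pn[i] = v & 0xff;
        v >>= 8;
    }
}

/* Mirrors the check above: a frame whose PN is not strictly greater
 * than the last accepted one is treated as a replay. */
static int is_replay(const uint8_t pn[CCMP_PN_LEN],
                     const uint8_t rx_pn[CCMP_PN_LEN])
{
    return memcmp(pn, rx_pn, CCMP_PN_LEN) <= 0;
}
```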
diff --git a/net/mpls/Kconfig b/net/mpls/Kconfig
new file mode 100644
index 000000000000..37421db88965
--- /dev/null
+++ b/net/mpls/Kconfig
@@ -0,0 +1,9 @@
1#
2# MPLS configuration
3#
4config NET_MPLS_GSO
5 tristate "MPLS: GSO support"
6 help
7 This is a helper module that allows segmentation of non-MPLS GSO
8 packets that have had MPLS stack entries pushed onto them and thus
9 become MPLS GSO packets.
diff --git a/net/mpls/Makefile b/net/mpls/Makefile
new file mode 100644
index 000000000000..0a3c171be537
--- /dev/null
+++ b/net/mpls/Makefile
@@ -0,0 +1,4 @@
1#
2# Makefile for MPLS.
3#
4obj-y += mpls_gso.o
diff --git a/net/mpls/mpls_gso.c b/net/mpls/mpls_gso.c
new file mode 100644
index 000000000000..1bec1219ab81
--- /dev/null
+++ b/net/mpls/mpls_gso.c
@@ -0,0 +1,108 @@
1/*
2 * MPLS GSO Support
3 *
4 * Authors: Simon Horman (horms@verge.net.au)
5 *
6 * This program is free software; you can redistribute it and/or
7 * modify it under the terms of the GNU General Public License
8 * as published by the Free Software Foundation; either version
9 * 2 of the License, or (at your option) any later version.
10 *
11 * Based on: GSO portions of net/ipv4/gre.c
12 */
13
14#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
15
16#include <linux/err.h>
17#include <linux/module.h>
18#include <linux/netdev_features.h>
19#include <linux/netdevice.h>
20#include <linux/skbuff.h>
21
22static struct sk_buff *mpls_gso_segment(struct sk_buff *skb,
23 netdev_features_t features)
24{
25 struct sk_buff *segs = ERR_PTR(-EINVAL);
26 netdev_features_t mpls_features;
27 __be16 mpls_protocol;
28
29 if (unlikely(skb_shinfo(skb)->gso_type &
30 ~(SKB_GSO_TCPV4 |
31 SKB_GSO_TCPV6 |
32 SKB_GSO_UDP |
33 SKB_GSO_DODGY |
34 SKB_GSO_TCP_ECN |
35 SKB_GSO_GRE |
36 SKB_GSO_MPLS)))
37 goto out;
38
39 /* Setup inner SKB. */
40 mpls_protocol = skb->protocol;
41 skb->protocol = skb->inner_protocol;
42
43 /* Push back the mac header that skb_mac_gso_segment() has pulled.
44 * It will be re-pulled by the call to skb_mac_gso_segment() below
45 */
46 __skb_push(skb, skb->mac_len);
47
48 /* Segment inner packet. */
49 mpls_features = skb->dev->mpls_features & netif_skb_features(skb);
50 segs = skb_mac_gso_segment(skb, mpls_features);
51
52
53 /* Restore outer protocol. */
54 skb->protocol = mpls_protocol;
55
56 /* Re-pull the mac header that the call to skb_mac_gso_segment()
57 * above pulled. It will be re-pushed after returning to
58 * skb_mac_gso_segment(), an indirect caller of this function.
59 */
60 __skb_push(skb, skb->data - skb_mac_header(skb));
61
62out:
63 return segs;
64}
65
66static int mpls_gso_send_check(struct sk_buff *skb)
67{
68 return 0;
69}
70
71static struct packet_offload mpls_mc_offload = {
72 .type = cpu_to_be16(ETH_P_MPLS_MC),
73 .callbacks = {
74 .gso_send_check = mpls_gso_send_check,
75 .gso_segment = mpls_gso_segment,
76 },
77};
78
79static struct packet_offload mpls_uc_offload = {
80 .type = cpu_to_be16(ETH_P_MPLS_UC),
81 .callbacks = {
82 .gso_send_check = mpls_gso_send_check,
83 .gso_segment = mpls_gso_segment,
84 },
85};
86
87static int __init mpls_gso_init(void)
88{
89 pr_info("MPLS GSO support\n");
90
91 dev_add_offload(&mpls_uc_offload);
92 dev_add_offload(&mpls_mc_offload);
93
94 return 0;
95}
96
97static void __exit mpls_gso_exit(void)
98{
99 dev_remove_offload(&mpls_uc_offload);
100 dev_remove_offload(&mpls_mc_offload);
101}
102
103module_init(mpls_gso_init);
104module_exit(mpls_gso_exit);
105
106MODULE_DESCRIPTION("MPLS GSO support");
107MODULE_AUTHOR("Simon Horman (horms@verge.net.au)");
108MODULE_LICENSE("GPL");
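The new GSO helper segments packets that already carry MPLS label stack entries; each entry packs the label, traffic class, bottom-of-stack bit, and TTL into 32 bits (RFC 3032). A hedged sketch of that encoding, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Pack one MPLS label stack entry: 20-bit label, 3-bit traffic
 * class, bottom-of-stack bit, 8-bit TTL (RFC 3032 layout). */
static uint32_t mpls_lse(uint32_t label, uint8_t tc, int bos, uint8_t ttl)
{
    return (label << 12) | ((uint32_t)(tc & 7) << 9) |
           ((uint32_t)(bos ? 1 : 0) << 8) | ttl;
}

/* Recover the label field for a quick sanity check. */
static uint32_t mpls_lse_label(uint32_t lse)
{
    return (lse >> 12) & 0xFFFFF;
}
```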
diff --git a/net/netfilter/core.c b/net/netfilter/core.c
index 857ca9f35177..2217363ab422 100644
--- a/net/netfilter/core.c
+++ b/net/netfilter/core.c
@@ -304,17 +304,26 @@ static struct pernet_operations netfilter_net_ops = {
304 .exit = netfilter_net_exit, 304 .exit = netfilter_net_exit,
305}; 305};
306 306
307void __init netfilter_init(void) 307int __init netfilter_init(void)
308{ 308{
309 int i, h; 309 int i, h, ret;
310
310 for (i = 0; i < ARRAY_SIZE(nf_hooks); i++) { 311 for (i = 0; i < ARRAY_SIZE(nf_hooks); i++) {
311 for (h = 0; h < NF_MAX_HOOKS; h++) 312 for (h = 0; h < NF_MAX_HOOKS; h++)
312 INIT_LIST_HEAD(&nf_hooks[i][h]); 313 INIT_LIST_HEAD(&nf_hooks[i][h]);
313 } 314 }
314 315
315 if (register_pernet_subsys(&netfilter_net_ops) < 0) 316 ret = register_pernet_subsys(&netfilter_net_ops);
316 panic("cannot create netfilter proc entry"); 317 if (ret < 0)
318 goto err;
319
320 ret = netfilter_log_init();
321 if (ret < 0)
322 goto err_pernet;
317 323
318 if (netfilter_log_init() < 0) 324 return 0;
319 panic("cannot initialize nf_log"); 325err_pernet:
326 unregister_pernet_subsys(&netfilter_net_ops);
327err:
328 return ret;
320} 329}
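The netfilter_init() change above replaces panic() with the kernel's usual goto-based unwind: each failure label undoes only what succeeded before it, and the error code is propagated to the caller. A hedged standalone sketch of the idiom, with invented stub setup functions:

```c
#include <assert.h>

static int a_up, b_up;

static int  setup_a(void)        { a_up = 1; return 0; }
static void teardown_a(void)     { a_up = 0; }
static int  setup_b(int fail)    { if (fail) return -1; b_up = 1; return 0; }

/* Mirrors the shape of netfilter_init(): register the first
 * subsystem, then the second; on second failure, unregister the
 * first and return the error instead of panicking. */
static int init(int fail_b)
{
    int ret = setup_a();
    if (ret < 0)
        goto err;
    ret = setup_b(fail_b);
    if (ret < 0)
        goto err_a;
    return 0;
err_a:
    teardown_a();
err:
    return ret;
}
```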
diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
index a083bda322b6..4c8e5c0aa1ab 100644
--- a/net/netfilter/ipvs/ip_vs_conn.c
+++ b/net/netfilter/ipvs/ip_vs_conn.c
@@ -975,8 +975,7 @@ static void *ip_vs_conn_array(struct seq_file *seq, loff_t pos)
975 return cp; 975 return cp;
976 } 976 }
977 } 977 }
978 rcu_read_unlock(); 978 cond_resched_rcu();
979 rcu_read_lock();
980 } 979 }
981 980
982 return NULL; 981 return NULL;
@@ -1015,8 +1014,7 @@ static void *ip_vs_conn_seq_next(struct seq_file *seq, void *v, loff_t *pos)
1015 iter->l = &ip_vs_conn_tab[idx]; 1014 iter->l = &ip_vs_conn_tab[idx];
1016 return cp; 1015 return cp;
1017 } 1016 }
1018 rcu_read_unlock(); 1017 cond_resched_rcu();
1019 rcu_read_lock();
1020 } 1018 }
1021 iter->l = NULL; 1019 iter->l = NULL;
1022 return NULL; 1020 return NULL;
@@ -1206,17 +1204,13 @@ void ip_vs_random_dropentry(struct net *net)
1206 int idx; 1204 int idx;
1207 struct ip_vs_conn *cp, *cp_c; 1205 struct ip_vs_conn *cp, *cp_c;
1208 1206
1207 rcu_read_lock();
1209 /* 1208 /*
1210 * Randomly scan 1/32 of the whole table every second 1209 * Randomly scan 1/32 of the whole table every second
1211 */ 1210 */
1212 for (idx = 0; idx < (ip_vs_conn_tab_size>>5); idx++) { 1211 for (idx = 0; idx < (ip_vs_conn_tab_size>>5); idx++) {
1213 unsigned int hash = net_random() & ip_vs_conn_tab_mask; 1212 unsigned int hash = net_random() & ip_vs_conn_tab_mask;
1214 1213
1215 /*
1216 * Lock is actually needed in this loop.
1217 */
1218 rcu_read_lock();
1219
1220 hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[hash], c_list) { 1214 hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[hash], c_list) {
1221 if (cp->flags & IP_VS_CONN_F_TEMPLATE) 1215 if (cp->flags & IP_VS_CONN_F_TEMPLATE)
1222 /* connection template */ 1216 /* connection template */
@@ -1237,6 +1231,18 @@ void ip_vs_random_dropentry(struct net *net)
1237 default: 1231 default:
1238 continue; 1232 continue;
1239 } 1233 }
1234 } else if (cp->protocol == IPPROTO_SCTP) {
1235 switch (cp->state) {
1236 case IP_VS_SCTP_S_INIT1:
1237 case IP_VS_SCTP_S_INIT:
1238 break;
1239 case IP_VS_SCTP_S_ESTABLISHED:
1240 if (todrop_entry(cp))
1241 break;
1242 continue;
1243 default:
1244 continue;
1245 }
1240 } else { 1246 } else {
1241 if (!todrop_entry(cp)) 1247 if (!todrop_entry(cp))
1242 continue; 1248 continue;
@@ -1252,8 +1258,9 @@ void ip_vs_random_dropentry(struct net *net)
1252 __ip_vs_conn_put(cp); 1258 __ip_vs_conn_put(cp);
1253 } 1259 }
1254 } 1260 }
1255 rcu_read_unlock(); 1261 cond_resched_rcu();
1256 } 1262 }
1263 rcu_read_unlock();
1257} 1264}
1258 1265
1259 1266
@@ -1267,11 +1274,8 @@ static void ip_vs_conn_flush(struct net *net)
1267 struct netns_ipvs *ipvs = net_ipvs(net); 1274 struct netns_ipvs *ipvs = net_ipvs(net);
1268 1275
1269flush_again: 1276flush_again:
1277 rcu_read_lock();
1270 for (idx = 0; idx < ip_vs_conn_tab_size; idx++) { 1278 for (idx = 0; idx < ip_vs_conn_tab_size; idx++) {
1271 /*
1272 * Lock is actually needed in this loop.
1273 */
1274 rcu_read_lock();
1275 1279
1276 hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[idx], c_list) { 1280 hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[idx], c_list) {
1277 if (!ip_vs_conn_net_eq(cp, net)) 1281 if (!ip_vs_conn_net_eq(cp, net))
@@ -1286,8 +1290,9 @@ flush_again:
1286 __ip_vs_conn_put(cp); 1290 __ip_vs_conn_put(cp);
1287 } 1291 }
1288 } 1292 }
1289 rcu_read_unlock(); 1293 cond_resched_rcu();
1290 } 1294 }
1295 rcu_read_unlock();
1291 1296
1292 /* the counter may be not NULL, because maybe some conn entries 1297 /* the counter may be not NULL, because maybe some conn entries
1293 are run by slow timer handler or unhashed but still referred */ 1298 are run by slow timer handler or unhashed but still referred */
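cond_resched_rcu(), substituted above for the explicit rcu_read_unlock()/rcu_read_lock() pair, drops the RCU read-side lock at a safe point so long table walks do not block preemption, then retakes it. A hedged userspace model with counting stubs (not the real RCU API):

```c
#include <assert.h>

static int rcu_depth, resched_points;

static void rcu_read_lock(void)   { rcu_depth++; }
static void rcu_read_unlock(void) { rcu_depth--; }

/* Model of cond_resched_rcu(): only legal at read-side depth 1,
 * where briefly dropping and retaking the lock is safe. */
static void cond_resched_rcu(void)
{
    assert(rcu_depth == 1);
    rcu_read_unlock();
    resched_points++;          /* the scheduler could run here */
    rcu_read_lock();
}

/* Shape of the ip_vs_conn table walks after the change: one lock
 * around the whole scan, with a resched point per bucket. */
static void walk_table(int buckets)
{
    rcu_read_lock();
    for (int i = 0; i < buckets; i++) {
        /* ... scan bucket i under RCU ... */
        cond_resched_rcu();
    }
    rcu_read_unlock();
}
```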
diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
index 23b8eb53a569..4f69e83ff836 100644
--- a/net/netfilter/ipvs/ip_vs_core.c
+++ b/net/netfilter/ipvs/ip_vs_core.c
@@ -305,7 +305,7 @@ ip_vs_sched_persist(struct ip_vs_service *svc,
305 * return *ignored=0 i.e. ICMP and NF_DROP 305 * return *ignored=0 i.e. ICMP and NF_DROP
306 */ 306 */
307 sched = rcu_dereference(svc->scheduler); 307 sched = rcu_dereference(svc->scheduler);
308 dest = sched->schedule(svc, skb); 308 dest = sched->schedule(svc, skb, iph);
309 if (!dest) { 309 if (!dest) {
310 IP_VS_DBG(1, "p-schedule: no dest found.\n"); 310 IP_VS_DBG(1, "p-schedule: no dest found.\n");
311 kfree(param.pe_data); 311 kfree(param.pe_data);
@@ -452,7 +452,7 @@ ip_vs_schedule(struct ip_vs_service *svc, struct sk_buff *skb,
452 } 452 }
453 453
454 sched = rcu_dereference(svc->scheduler); 454 sched = rcu_dereference(svc->scheduler);
455 dest = sched->schedule(svc, skb); 455 dest = sched->schedule(svc, skb, iph);
456 if (dest == NULL) { 456 if (dest == NULL) {
457 IP_VS_DBG(1, "Schedule: no dest found.\n"); 457 IP_VS_DBG(1, "Schedule: no dest found.\n");
458 return NULL; 458 return NULL;
diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
index 9e6c2a075a4c..c8148e487386 100644
--- a/net/netfilter/ipvs/ip_vs_ctl.c
+++ b/net/netfilter/ipvs/ip_vs_ctl.c
@@ -1487,9 +1487,9 @@ ip_vs_forget_dev(struct ip_vs_dest *dest, struct net_device *dev)
1487 * Currently only NETDEV_DOWN is handled to release refs to cached dsts 1487 * Currently only NETDEV_DOWN is handled to release refs to cached dsts
1488 */ 1488 */
1489static int ip_vs_dst_event(struct notifier_block *this, unsigned long event, 1489static int ip_vs_dst_event(struct notifier_block *this, unsigned long event,
1490 void *ptr) 1490 void *ptr)
1491{ 1491{
1492 struct net_device *dev = ptr; 1492 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
1493 struct net *net = dev_net(dev); 1493 struct net *net = dev_net(dev);
1494 struct netns_ipvs *ipvs = net_ipvs(net); 1494 struct netns_ipvs *ipvs = net_ipvs(net);
 	struct ip_vs_service *svc;
@@ -1575,7 +1575,7 @@ static int zero;
 static int three = 3;
 
 static int
-proc_do_defense_mode(ctl_table *table, int write,
+proc_do_defense_mode(struct ctl_table *table, int write,
 		     void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct net *net = current->nsproxy->net_ns;
@@ -1596,7 +1596,7 @@ proc_do_defense_mode(ctl_table *table, int write,
 }
 
 static int
-proc_do_sync_threshold(ctl_table *table, int write,
+proc_do_sync_threshold(struct ctl_table *table, int write,
 		       void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	int *valp = table->data;
@@ -1616,7 +1616,7 @@ proc_do_sync_threshold(ctl_table *table, int write,
 }
 
 static int
-proc_do_sync_mode(ctl_table *table, int write,
+proc_do_sync_mode(struct ctl_table *table, int write,
 		  void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	int *valp = table->data;
@@ -1634,7 +1634,7 @@ proc_do_sync_mode(ctl_table *table, int write,
 }
 
 static int
-proc_do_sync_ports(ctl_table *table, int write,
+proc_do_sync_ports(struct ctl_table *table, int write,
 		   void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	int *valp = table->data;
@@ -1715,12 +1715,18 @@ static struct ctl_table vs_vars[] = {
 		.proc_handler	= &proc_do_sync_ports,
 	},
 	{
-		.procname	= "sync_qlen_max",
+		.procname	= "sync_persist_mode",
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
 	},
 	{
+		.procname	= "sync_qlen_max",
+		.maxlen		= sizeof(unsigned long),
+		.mode		= 0644,
+		.proc_handler	= proc_doulongvec_minmax,
+	},
+	{
 		.procname	= "sync_sock_size",
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
@@ -1739,6 +1745,18 @@ static struct ctl_table vs_vars[] = {
 		.proc_handler	= proc_dointvec,
 	},
 	{
+		.procname	= "sloppy_tcp",
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+	{
+		.procname	= "sloppy_sctp",
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+	{
 		.procname	= "expire_quiescent_template",
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
@@ -3717,12 +3735,15 @@ static int __net_init ip_vs_control_net_init_sysctl(struct net *net)
 	tbl[idx++].data = &ipvs->sysctl_sync_ver;
 	ipvs->sysctl_sync_ports = 1;
 	tbl[idx++].data = &ipvs->sysctl_sync_ports;
+	tbl[idx++].data = &ipvs->sysctl_sync_persist_mode;
 	ipvs->sysctl_sync_qlen_max = nr_free_buffer_pages() / 32;
 	tbl[idx++].data = &ipvs->sysctl_sync_qlen_max;
 	ipvs->sysctl_sync_sock_size = 0;
 	tbl[idx++].data = &ipvs->sysctl_sync_sock_size;
 	tbl[idx++].data = &ipvs->sysctl_cache_bypass;
 	tbl[idx++].data = &ipvs->sysctl_expire_nodest_conn;
+	tbl[idx++].data = &ipvs->sysctl_sloppy_tcp;
+	tbl[idx++].data = &ipvs->sysctl_sloppy_sctp;
 	tbl[idx++].data = &ipvs->sysctl_expire_quiescent_template;
 	ipvs->sysctl_sync_threshold[0] = DEFAULT_SYNC_THRESHOLD;
 	ipvs->sysctl_sync_threshold[1] = DEFAULT_SYNC_PERIOD;
diff --git a/net/netfilter/ipvs/ip_vs_dh.c b/net/netfilter/ipvs/ip_vs_dh.c
index ccab120df45e..c3b84546ea9e 100644
--- a/net/netfilter/ipvs/ip_vs_dh.c
+++ b/net/netfilter/ipvs/ip_vs_dh.c
@@ -214,18 +214,16 @@ static inline int is_overloaded(struct ip_vs_dest *dest)
  *	Destination hashing scheduling
  */
 static struct ip_vs_dest *
-ip_vs_dh_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
+ip_vs_dh_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		  struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest;
 	struct ip_vs_dh_state *s;
-	struct ip_vs_iphdr iph;
-
-	ip_vs_fill_iph_addr_only(svc->af, skb, &iph);
 
 	IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);
 
 	s = (struct ip_vs_dh_state *) svc->sched_data;
-	dest = ip_vs_dh_get(svc->af, s, &iph.daddr);
+	dest = ip_vs_dh_get(svc->af, s, &iph->daddr);
 	if (!dest
 	    || !(dest->flags & IP_VS_DEST_F_AVAILABLE)
 	    || atomic_read(&dest->weight) <= 0
@@ -235,7 +233,7 @@ ip_vs_dh_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
 	}
 
 	IP_VS_DBG_BUF(6, "DH: destination IP address %s --> server %s:%d\n",
-		      IP_VS_DBG_ADDR(svc->af, &iph.daddr),
+		      IP_VS_DBG_ADDR(svc->af, &iph->daddr),
 		      IP_VS_DBG_ADDR(svc->af, &dest->addr),
 		      ntohs(dest->port));
 
diff --git a/net/netfilter/ipvs/ip_vs_lblc.c b/net/netfilter/ipvs/ip_vs_lblc.c
index 5ea26bd87743..1383b0eadc0e 100644
--- a/net/netfilter/ipvs/ip_vs_lblc.c
+++ b/net/netfilter/ipvs/ip_vs_lblc.c
@@ -118,7 +118,7 @@ struct ip_vs_lblc_table {
  *      IPVS LBLC sysctl table
  */
 #ifdef CONFIG_SYSCTL
-static ctl_table vs_vars_table[] = {
+static struct ctl_table vs_vars_table[] = {
 	{
 		.procname	= "lblc_expiration",
 		.data		= NULL,
@@ -487,19 +487,17 @@ is_overloaded(struct ip_vs_dest *dest, struct ip_vs_service *svc)
  *    Locality-Based (weighted) Least-Connection scheduling
  */
 static struct ip_vs_dest *
-ip_vs_lblc_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
+ip_vs_lblc_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		    struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_lblc_table *tbl = svc->sched_data;
-	struct ip_vs_iphdr iph;
 	struct ip_vs_dest *dest = NULL;
 	struct ip_vs_lblc_entry *en;
 
-	ip_vs_fill_iph_addr_only(svc->af, skb, &iph);
-
 	IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);
 
 	/* First look in our cache */
-	en = ip_vs_lblc_get(svc->af, tbl, &iph.daddr);
+	en = ip_vs_lblc_get(svc->af, tbl, &iph->daddr);
 	if (en) {
 		/* We only hold a read lock, but this is atomic */
 		en->lastuse = jiffies;
@@ -529,12 +527,12 @@ ip_vs_lblc_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
 	/* If we fail to create a cache entry, we'll just use the valid dest */
 	spin_lock_bh(&svc->sched_lock);
 	if (!tbl->dead)
-		ip_vs_lblc_new(tbl, &iph.daddr, dest);
+		ip_vs_lblc_new(tbl, &iph->daddr, dest);
 	spin_unlock_bh(&svc->sched_lock);
 
 out:
 	IP_VS_DBG_BUF(6, "LBLC: destination IP address %s --> server %s:%d\n",
-		      IP_VS_DBG_ADDR(svc->af, &iph.daddr),
+		      IP_VS_DBG_ADDR(svc->af, &iph->daddr),
 		      IP_VS_DBG_ADDR(svc->af, &dest->addr), ntohs(dest->port));
 
 	return dest;
diff --git a/net/netfilter/ipvs/ip_vs_lblcr.c b/net/netfilter/ipvs/ip_vs_lblcr.c
index 50123c2ab484..3cd85b2fc67c 100644
--- a/net/netfilter/ipvs/ip_vs_lblcr.c
+++ b/net/netfilter/ipvs/ip_vs_lblcr.c
@@ -299,7 +299,7 @@ struct ip_vs_lblcr_table {
  *      IPVS LBLCR sysctl table
  */
 
-static ctl_table vs_vars_table[] = {
+static struct ctl_table vs_vars_table[] = {
 	{
 		.procname	= "lblcr_expiration",
 		.data		= NULL,
@@ -655,19 +655,17 @@ is_overloaded(struct ip_vs_dest *dest, struct ip_vs_service *svc)
  *    Locality-Based (weighted) Least-Connection scheduling
  */
 static struct ip_vs_dest *
-ip_vs_lblcr_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
+ip_vs_lblcr_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		     struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_lblcr_table *tbl = svc->sched_data;
-	struct ip_vs_iphdr iph;
 	struct ip_vs_dest *dest;
 	struct ip_vs_lblcr_entry *en;
 
-	ip_vs_fill_iph_addr_only(svc->af, skb, &iph);
-
 	IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);
 
 	/* First look in our cache */
-	en = ip_vs_lblcr_get(svc->af, tbl, &iph.daddr);
+	en = ip_vs_lblcr_get(svc->af, tbl, &iph->daddr);
 	if (en) {
 		en->lastuse = jiffies;
 
@@ -718,12 +716,12 @@ ip_vs_lblcr_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
 	/* If we fail to create a cache entry, we'll just use the valid dest */
 	spin_lock_bh(&svc->sched_lock);
 	if (!tbl->dead)
-		ip_vs_lblcr_new(tbl, &iph.daddr, dest);
+		ip_vs_lblcr_new(tbl, &iph->daddr, dest);
 	spin_unlock_bh(&svc->sched_lock);
 
 out:
 	IP_VS_DBG_BUF(6, "LBLCR: destination IP address %s --> server %s:%d\n",
-		      IP_VS_DBG_ADDR(svc->af, &iph.daddr),
+		      IP_VS_DBG_ADDR(svc->af, &iph->daddr),
 		      IP_VS_DBG_ADDR(svc->af, &dest->addr), ntohs(dest->port));
 
 	return dest;
diff --git a/net/netfilter/ipvs/ip_vs_lc.c b/net/netfilter/ipvs/ip_vs_lc.c
index 5128e338a749..2bdcb1cf2127 100644
--- a/net/netfilter/ipvs/ip_vs_lc.c
+++ b/net/netfilter/ipvs/ip_vs_lc.c
@@ -26,7 +26,8 @@
  *	Least Connection scheduling
  */
 static struct ip_vs_dest *
-ip_vs_lc_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
+ip_vs_lc_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		  struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest, *least = NULL;
 	unsigned int loh = 0, doh;
diff --git a/net/netfilter/ipvs/ip_vs_nq.c b/net/netfilter/ipvs/ip_vs_nq.c
index 646cfd4baa73..d8d9860934fe 100644
--- a/net/netfilter/ipvs/ip_vs_nq.c
+++ b/net/netfilter/ipvs/ip_vs_nq.c
@@ -55,7 +55,8 @@ ip_vs_nq_dest_overhead(struct ip_vs_dest *dest)
  *	Weighted Least Connection scheduling
  */
 static struct ip_vs_dest *
-ip_vs_nq_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
+ip_vs_nq_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		  struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest, *least = NULL;
 	unsigned int loh = 0, doh;
diff --git a/net/netfilter/ipvs/ip_vs_proto_sctp.c b/net/netfilter/ipvs/ip_vs_proto_sctp.c
index 86464881cd20..3c0da8728036 100644
--- a/net/netfilter/ipvs/ip_vs_proto_sctp.c
+++ b/net/netfilter/ipvs/ip_vs_proto_sctp.c
@@ -15,6 +15,7 @@ sctp_conn_schedule(int af, struct sk_buff *skb, struct ip_vs_proto_data *pd,
 {
 	struct net *net;
 	struct ip_vs_service *svc;
+	struct netns_ipvs *ipvs;
 	sctp_chunkhdr_t _schunkh, *sch;
 	sctp_sctphdr_t *sh, _sctph;
 
@@ -27,13 +28,14 @@ sctp_conn_schedule(int af, struct sk_buff *skb, struct ip_vs_proto_data *pd,
 	if (sch == NULL)
 		return 0;
 	net = skb_net(skb);
+	ipvs = net_ipvs(net);
 	rcu_read_lock();
-	if ((sch->type == SCTP_CID_INIT) &&
+	if ((sch->type == SCTP_CID_INIT || sysctl_sloppy_sctp(ipvs)) &&
 	    (svc = ip_vs_service_find(net, af, skb->mark, iph->protocol,
 				      &iph->daddr, sh->dest))) {
 		int ignored;
 
-		if (ip_vs_todrop(net_ipvs(net))) {
+		if (ip_vs_todrop(ipvs)) {
 			/*
 			 * It seems that we are very loaded.
 			 * We have to drop this packet :(
@@ -183,710 +185,159 @@ sctp_csum_check(int af, struct sk_buff *skb, struct ip_vs_protocol *pp)
 	return 1;
 }
 
-struct ipvs_sctp_nextstate {
-	int next_state;
-};
 enum ipvs_sctp_event_t {
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_DATA_SER,
-	IP_VS_SCTP_EVE_INIT_CLI,
-	IP_VS_SCTP_EVE_INIT_SER,
-	IP_VS_SCTP_EVE_INIT_ACK_CLI,
-	IP_VS_SCTP_EVE_INIT_ACK_SER,
-	IP_VS_SCTP_EVE_COOKIE_ECHO_CLI,
-	IP_VS_SCTP_EVE_COOKIE_ECHO_SER,
-	IP_VS_SCTP_EVE_COOKIE_ACK_CLI,
-	IP_VS_SCTP_EVE_COOKIE_ACK_SER,
-	IP_VS_SCTP_EVE_ABORT_CLI,
-	IP_VS_SCTP_EVE__ABORT_SER,
-	IP_VS_SCTP_EVE_SHUT_CLI,
-	IP_VS_SCTP_EVE_SHUT_SER,
-	IP_VS_SCTP_EVE_SHUT_ACK_CLI,
-	IP_VS_SCTP_EVE_SHUT_ACK_SER,
-	IP_VS_SCTP_EVE_SHUT_COM_CLI,
-	IP_VS_SCTP_EVE_SHUT_COM_SER,
-	IP_VS_SCTP_EVE_LAST
+	IP_VS_SCTP_DATA = 0,		/* DATA, SACK, HEARTBEATs */
+	IP_VS_SCTP_INIT,
+	IP_VS_SCTP_INIT_ACK,
+	IP_VS_SCTP_COOKIE_ECHO,
+	IP_VS_SCTP_COOKIE_ACK,
+	IP_VS_SCTP_SHUTDOWN,
+	IP_VS_SCTP_SHUTDOWN_ACK,
+	IP_VS_SCTP_SHUTDOWN_COMPLETE,
+	IP_VS_SCTP_ERROR,
+	IP_VS_SCTP_ABORT,
+	IP_VS_SCTP_EVENT_LAST
 };
 
-static enum ipvs_sctp_event_t sctp_events[256] = {
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_INIT_CLI,
-	IP_VS_SCTP_EVE_INIT_ACK_CLI,
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_ABORT_CLI,
-	IP_VS_SCTP_EVE_SHUT_CLI,
-	IP_VS_SCTP_EVE_SHUT_ACK_CLI,
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_COOKIE_ECHO_CLI,
-	IP_VS_SCTP_EVE_COOKIE_ACK_CLI,
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_DATA_CLI,
-	IP_VS_SCTP_EVE_SHUT_COM_CLI,
+/* RFC 2960, 3.2 Chunk Field Descriptions */
+static __u8 sctp_events[] = {
+	[SCTP_CID_DATA]			= IP_VS_SCTP_DATA,
+	[SCTP_CID_INIT]			= IP_VS_SCTP_INIT,
+	[SCTP_CID_INIT_ACK]		= IP_VS_SCTP_INIT_ACK,
+	[SCTP_CID_SACK]			= IP_VS_SCTP_DATA,
+	[SCTP_CID_HEARTBEAT]		= IP_VS_SCTP_DATA,
+	[SCTP_CID_HEARTBEAT_ACK]	= IP_VS_SCTP_DATA,
+	[SCTP_CID_ABORT]		= IP_VS_SCTP_ABORT,
+	[SCTP_CID_SHUTDOWN]		= IP_VS_SCTP_SHUTDOWN,
+	[SCTP_CID_SHUTDOWN_ACK]		= IP_VS_SCTP_SHUTDOWN_ACK,
+	[SCTP_CID_ERROR]		= IP_VS_SCTP_ERROR,
+	[SCTP_CID_COOKIE_ECHO]		= IP_VS_SCTP_COOKIE_ECHO,
+	[SCTP_CID_COOKIE_ACK]		= IP_VS_SCTP_COOKIE_ACK,
+	[SCTP_CID_ECN_ECNE]		= IP_VS_SCTP_DATA,
+	[SCTP_CID_ECN_CWR]		= IP_VS_SCTP_DATA,
+	[SCTP_CID_SHUTDOWN_COMPLETE]	= IP_VS_SCTP_SHUTDOWN_COMPLETE,
 };
 
-static struct ipvs_sctp_nextstate
-	sctp_states_table[IP_VS_SCTP_S_LAST][IP_VS_SCTP_EVE_LAST] = {
-	/*
-	 * STATE : IP_VS_SCTP_S_NONE
-	 */
-	/*next state *//*event */
-	{{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ },
+/* SCTP States:
+ * See RFC 2960, 4. SCTP Association State Diagram
+ *
+ * New states (not in diagram):
+ * - INIT1 state: use shorter timeout for dropped INIT packets
+ * - REJECTED state: use shorter timeout if INIT is rejected with ABORT
+ * - INIT, COOKIE_SENT, COOKIE_REPLIED, COOKIE states: for better debugging
+ *
+ * The states are as seen in real server. In the diagram, INIT1, INIT,
+ * COOKIE_SENT and COOKIE_REPLIED processing happens in CLOSED state.
+ *
+ * States as per packets from client (C) and server (S):
+ *
+ * Setup of client connection:
+ * IP_VS_SCTP_S_INIT1: First C:INIT sent, wait for S:INIT-ACK
+ * IP_VS_SCTP_S_INIT: Next C:INIT sent, wait for S:INIT-ACK
+ * IP_VS_SCTP_S_COOKIE_SENT: S:INIT-ACK sent, wait for C:COOKIE-ECHO
+ * IP_VS_SCTP_S_COOKIE_REPLIED: C:COOKIE-ECHO sent, wait for S:COOKIE-ACK
+ *
+ * Setup of server connection:
+ * IP_VS_SCTP_S_COOKIE_WAIT: S:INIT sent, wait for C:INIT-ACK
+ * IP_VS_SCTP_S_COOKIE: C:INIT-ACK sent, wait for S:COOKIE-ECHO
+ * IP_VS_SCTP_S_COOKIE_ECHOED: S:COOKIE-ECHO sent, wait for C:COOKIE-ACK
+ */
-	},
-	/*
-	 * STATE : IP_VS_SCTP_S_INIT_CLI
-	 * Cient sent INIT and is waiting for reply from server(In ECHO_WAIT)
-	 */
-	{{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_INIT_ACK_SER /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ECHO_CLI */ },
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_ECHO_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
-	},
-	/*
-	 * State : IP_VS_SCTP_S_INIT_SER
-	 * Server sent INIT and waiting for INIT ACK from the client
-	 */
-	{{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	 {IP_VS_SCTP_S_INIT_ACK_CLI /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
-	},
-	/*
-	 * State : IP_VS_SCTP_S_INIT_ACK_CLI
-	 * Client sent INIT ACK and waiting for ECHO from the server
-	 */
-	{{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	/*
-	 * We have got an INIT from client. From the spec.“Upon receipt of
-	 * an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
-	 * an INIT ACK using the same parameters it sent in its original
-	 * INIT chunk (including its Initiate Tag, unchanged”).
-	 */
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	/*
-	 * INIT_ACK has been resent by the client, let us stay is in
-	 * the same state
-	 */
-	 {IP_VS_SCTP_S_INIT_ACK_CLI /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	/*
-	 * INIT_ACK sent by the server, close the connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	/*
-	 * ECHO by client, it should not happen, close the connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
-	/*
-	 * ECHO by server, this is what we are expecting, move to ECHO_SER
-	 */
-	 {IP_VS_SCTP_S_ECHO_SER /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
-	/*
-	 * COOKIE ACK from client, it should not happen, close the connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	/*
-	 * Unexpected COOKIE ACK from server, staty in the same state
-	 */
-	 {IP_VS_SCTP_S_INIT_ACK_CLI /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
-	},
-	/*
-	 * State : IP_VS_SCTP_S_INIT_ACK_SER
-	 * Server sent INIT ACK and waiting for ECHO from the client
-	 */
-	{{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	/*
-	 * We have got an INIT from client. From the spec.“Upon receipt of
-	 * an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
-	 * an INIT ACK using the same parameters it sent in its original
-	 * INIT chunk (including its Initiate Tag, unchanged”).
-	 */
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	/*
-	 * Unexpected INIT_ACK by the client, let us close the connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	/*
-	 * INIT_ACK resent by the server, let us move to same state
-	 */
-	 {IP_VS_SCTP_S_INIT_ACK_SER /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	/*
-	 * Client send the ECHO, this is what we are expecting,
-	 * move to ECHO_CLI
-	 */
-	 {IP_VS_SCTP_S_ECHO_CLI /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
-	/*
-	 * ECHO received from the server, Not sure what to do,
-	 * let us close it
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
-	/*
-	 * COOKIE ACK from client, let us stay in the same state
-	 */
-	 {IP_VS_SCTP_S_INIT_ACK_SER /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	/*
-	 * COOKIE ACK from server, hmm... this should not happen, lets close
-	 * the connection.
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
-	},
-	/*
-	 * State : IP_VS_SCTP_S_ECHO_CLI
-	 * Cient sent ECHO and waiting COOKEI ACK from the Server
-	 */
-	{{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	/*
-	 * We have got an INIT from client. From the spec.“Upon receipt of
-	 * an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
-	 * an INIT ACK using the same parameters it sent in its original
-	 * INIT chunk (including its Initiate Tag, unchanged”).
-	 */
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	/*
-	 * INIT_ACK has been by the client, let us close the connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	/*
-	 * INIT_ACK sent by the server, Unexpected INIT ACK, spec says,
-	 * “If an INIT ACK is received by an endpoint in any state other
-	 * than the COOKIE-WAIT state, the endpoint should discard the
-	 * INIT ACK chunk”. Stay in the same state
-	 */
-	 {IP_VS_SCTP_S_ECHO_CLI /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	/*
-	 * Client resent the ECHO, let us stay in the same state
-	 */
-	 {IP_VS_SCTP_S_ECHO_CLI /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
-	/*
-	 * ECHO received from the server, Not sure what to do,
-	 * let us close it
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
-	/*
-	 * COOKIE ACK from client, this shoud not happen, let's close the
-	 * connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	/*
-	 * COOKIE ACK from server, this is what we are awaiting,lets move to
-	 * ESTABLISHED.
-	 */
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
-	},
-	/*
-	 * State : IP_VS_SCTP_S_ECHO_SER
-	 * Server sent ECHO and waiting COOKEI ACK from the client
-	 */
-	{{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	/*
-	 * We have got an INIT from client. From the spec.“Upon receipt of
-	 * an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
-	 * an INIT ACK using the same parameters it sent in its original
-	 * INIT chunk (including its Initiate Tag, unchanged”).
-	 */
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	/*
-	 * INIT_ACK sent by the server, Unexpected INIT ACK, spec says,
-	 * “If an INIT ACK is received by an endpoint in any state other
-	 * than the COOKIE-WAIT state, the endpoint should discard the
-	 * INIT ACK chunk”. Stay in the same state
-	 */
-	 {IP_VS_SCTP_S_ECHO_SER /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	/*
-	 * INIT_ACK has been by the server, let us close the connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	/*
-	 * Client sent the ECHO, not sure what to do, let's close the
-	 * connection.
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
-	/*
-	 * ECHO resent by the server, stay in the same state
-	 */
-	 {IP_VS_SCTP_S_ECHO_SER /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
-	/*
-	 * COOKIE ACK from client, this is what we are expecting, let's move
-	 * to ESTABLISHED.
-	 */
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	/*
-	 * COOKIE ACK from server, this should not happen, lets close the
-	 * connection.
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
-	},
-	/*
-	 * State : IP_VS_SCTP_S_ESTABLISHED
-	 * Association established
-	 */
-	{{IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_DATA_CLI */ },
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_DATA_SER */ },
-	/*
-	 * We have got an INIT from client. From the spec.“Upon receipt of
-	 * an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
-	 * an INIT ACK using the same parameters it sent in its original
-	 * INIT chunk (including its Initiate Tag, unchanged”).
-	 */
-	 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
-	 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
-	/*
-	 * INIT_ACK sent by the server, Unexpected INIT ACK, spec says,
-	 * “If an INIT ACK is received by an endpoint in any state other
-	 * than the COOKIE-WAIT state, the endpoint should discard the
-	 * INIT ACK chunk”. Stay in the same state
-	 */
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
-	/*
-	 * Client sent ECHO, Spec(sec 5.2.4) says it may be handled by the
-	 * peer and peer shall move to the ESTABISHED. if it doesn't handle
-	 * it will send ERROR chunk. So, stay in the same state
-	 */
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
-	/*
-	 * COOKIE ACK from client, not sure what to do stay in the same state
-	 */
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
-	 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
-	/*
-	 * SHUTDOWN from the client, move to SHUDDOWN_CLI
-	 */
-	 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_SHUT_CLI */ },
-	/*
-	 * SHUTDOWN from the server, move to SHUTDOWN_SER
-	 */
-	 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_SHUT_SER */ },
-	/*
-	 * client sent SHUDTDOWN_ACK, this should not happen, let's close
-	 * the connection
-	 */
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
-	 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
-	},
557 /*
558 * State : IP_VS_SCTP_S_SHUT_CLI
559 * SHUTDOWN sent from the client, waitinf for SHUT ACK from the server
560 */
561 /*
562 * We received the data chuck, keep the state unchanged. I assume
563 * that still data chuncks can be received by both the peers in
564 * SHUDOWN state
565 */
566
567 {{IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_DATA_CLI */ },
568 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_DATA_SER */ },
569 /*
570 * We have got an INIT from client. From the spec.“Upon receipt of
571 * an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
572 * an INIT ACK using the same parameters it sent in its original
573 * INIT chunk (including its Initiate Tag, unchanged”).
574 */
575 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
576 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
577 /*
578 * INIT_ACK sent by the server, Unexpected INIT ACK, spec says,
579 * “If an INIT ACK is received by an endpoint in any state other
580 * than the COOKIE-WAIT state, the endpoint should discard the
581 * INIT ACK chunk”. Stay in the same state
582 */
583 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
584 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
585 /*
586 * Client sent COOKIE ECHO. The spec (sec 5.2.4) says the peer may
587 * handle it and move to ESTABLISHED; if it doesn't handle it, it
588 * will send an ERROR chunk. So, stay in the same state
589 */
590 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
591 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
592 /*
593 * COOKIE ACK from client; not sure what to do, stay in the same state
594 */
595 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
596 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
597 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
598 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
599 /*
600 * SHUTDOWN resent from the client, move to SHUTDOWN_CLI
601 */
602 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_SHUT_CLI */ },
603 /*
604 * SHUTDOWN from the server, move to SHUTDOWN_SER
605 */
606 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_SHUT_SER */ },
607 /*
608 * Client sent SHUTDOWN_ACK; this should not happen, let's close
609 * the connection
610 */
611 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
612 /*
613 * Server sent SHUTDOWN ACK, this is what we are expecting, let's move
614 * to SHUTDOWN_ACK_SER
615 */
616 {IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
617 /*
618 * SHUTDOWN COMPLETE from client; this should not happen, let's close the
619 * connection
620 */
621 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
622 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
623 },
624 /*
625 * State : IP_VS_SCTP_S_SHUT_SER
626 * SHUTDOWN sent from the server, waiting for SHUTDOWN ACK from the client
627 */
628 /*
629 * We received a data chunk; keep the state unchanged. I assume
630 * that data chunks can still be received by both peers in the
631 * SHUTDOWN state
632 */
633
634 {{IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_DATA_CLI */ },
635 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_DATA_SER */ },
636 /*
637 * We have got an INIT from the client. From the spec: "Upon receipt
638 * of an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
639 * an INIT ACK using the same parameters it sent in its original
640 * INIT chunk (including its Initiate Tag, unchanged)."
641 */
642 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
643 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
644 /*
645 * INIT_ACK sent by the server; unexpected INIT ACK. The spec says:
646 * "If an INIT ACK is received by an endpoint in any state other
647 * than the COOKIE-WAIT state, the endpoint should discard the
648 * INIT ACK chunk." Stay in the same state
649 */
650 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
651 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
652 /*
653 * Client sent COOKIE ECHO. The spec (sec 5.2.4) says the peer may
654 * handle it and move to ESTABLISHED; if it doesn't handle it, it
655 * will send an ERROR chunk. So, stay in the same state
656 */
657 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
658 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
659 /*
660 * COOKIE ACK from client; not sure what to do, stay in the same state
661 */
662 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
663 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
664 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
665 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
666 /*
667 * SHUTDOWN resent from the client, move to SHUTDOWN_CLI
668 */
669 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_SHUT_CLI */ },
670 /*
671 * SHUTDOWN resent from the server, move to SHUTDOWN_SER
672 */
673 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_SHUT_SER */ },
674 /*
675 * Client sent SHUTDOWN_ACK; this is what we are expecting, let's
676 * move to SHUT_ACK_CLI
677 */
678 {IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
679 /*
680 * Server sent SHUTDOWN ACK, this should not happen, let's close the
681 * connection
682 */
683 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
684 /*
685 * SHUTDOWN COMPLETE from client; this should not happen, let's close the
686 * connection
687 */
688 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
689 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
690 },
691
692 /*
693 * State : IP_VS_SCTP_S_SHUT_ACK_CLI
694 * SHUTDOWN ACK from the client, awaiting SHUTDOWN COMPLETE from the server
695 */
696 /*
697 * We received a data chunk; keep the state unchanged. I assume
698 * that data chunks can still be received by both peers in the
699 * SHUTDOWN state
700 */
701
702 {{IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_DATA_CLI */ },
703 {IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_DATA_SER */ },
704 /*
705 * We have got an INIT from the client. From the spec: "Upon receipt
706 * of an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
707 * an INIT ACK using the same parameters it sent in its original
708 * INIT chunk (including its Initiate Tag, unchanged)."
709 */
710 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
711 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
712 /*
713 * INIT_ACK sent by the server; unexpected INIT ACK. The spec says:
714 * "If an INIT ACK is received by an endpoint in any state other
715 * than the COOKIE-WAIT state, the endpoint should discard the
716 * INIT ACK chunk." Stay in the same state
717 */
718 {IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
719 {IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
720 /*
721 * Client sent COOKIE ECHO. The spec (sec 5.2.4) says the peer may
722 * handle it and move to ESTABLISHED; if it doesn't handle it, it
723 * will send an ERROR chunk. So, stay in the same state
724 */
725 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
726 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
727 /*
728 * COOKIE ACK from client; not sure what to do, stay in the same state
729 */
730 {IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
731 {IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
732 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
733 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
734 /*
735 * SHUTDOWN sent from the client, move to SHUTDOWN_CLI
736 */
737 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_SHUT_CLI */ },
738 /*
739 * SHUTDOWN sent from the server, move to SHUTDOWN_SER
740 */
741 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_SHUT_SER */ },
742 /*
743 * Client resent SHUTDOWN_ACK; let's stay in the same state
744 */
745 {IP_VS_SCTP_S_SHUT_ACK_CLI /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
746 /*
747 * Server sent SHUTDOWN ACK, this should not happen, let's close the
748 * connection
749 */
750 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
751 /*
752 * SHUTDOWN COMPLETE from client; this should not happen, let's close the
753 * connection
754 */
755 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
756 /*
757 * SHUTDOWN COMPLETE from server; this is what we are expecting.
758 */
759 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
760 },
761
762 /*
763 * State : IP_VS_SCTP_S_SHUT_ACK_SER
764 * SHUTDOWN ACK from the server, awaiting SHUTDOWN COMPLETE from the client
765 */
766 /*
767 * We received a data chunk; keep the state unchanged. I assume
768 * that data chunks can still be received by both peers in the
769 * SHUTDOWN state
770 */
 771
 772 {{IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_DATA_CLI */ },
 773 {IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_DATA_SER */ },
 774 /*
 775 * We have got an INIT from the client. From the spec: "Upon receipt
 776 * of an INIT in the COOKIE-WAIT state, an endpoint MUST respond with
 777 * an INIT ACK using the same parameters it sent in its original
 778 * INIT chunk (including its Initiate Tag, unchanged)."
 779 */
 780 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
 781 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
 782 /*
 783 * INIT_ACK sent by the server; unexpected INIT ACK. The spec says:
 784 * "If an INIT ACK is received by an endpoint in any state other
 785 * than the COOKIE-WAIT state, the endpoint should discard the
 786 * INIT ACK chunk." Stay in the same state
 787 */
 788 {IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
 789 {IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
 790 /*
 791 * Client sent COOKIE ECHO. The spec (sec 5.2.4) says the peer may
 792 * handle it and move to ESTABLISHED; if it doesn't handle it, it
 793 * will send an ERROR chunk. So, stay in the same state
 794 */
 795 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
 796 {IP_VS_SCTP_S_ESTABLISHED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
 797 /*
 798 * COOKIE ACK from client; not sure what to do, stay in the same state
 799 */
 800 {IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
 801 {IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
 802 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
 803 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
 804 /*
 805 * SHUTDOWN sent from the client, move to SHUTDOWN_CLI
 806 */
 807 {IP_VS_SCTP_S_SHUT_CLI /* IP_VS_SCTP_EVE_SHUT_CLI */ },
 808 /*
 809 * SHUTDOWN sent from the server, move to SHUTDOWN_SER
 810 */
 811 {IP_VS_SCTP_S_SHUT_SER /* IP_VS_SCTP_EVE_SHUT_SER */ },
 812 /*
 813 * Client sent SHUTDOWN_ACK; this should not happen, let's close
 814 * the connection.
 815 */
 816 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
 817 /*
 818 * Server resent SHUTDOWN ACK, stay in the same state
 819 */
 820 {IP_VS_SCTP_S_SHUT_ACK_SER /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
 821 /*
 822 * SHUTDOWN COMPLETE from client; this is what we are expecting,
 823 * let's close the connection
 824 */
 825 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
 826 /*
 827 * SHUTDOWN COMPLETE from server; this should not happen.
 828 */
 829 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
 830 },
 831 /*
 832 * State : IP_VS_SCTP_S_CLOSED
 833 */
 834 {{IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_CLI */ },
 835 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_DATA_SER */ },
 836 {IP_VS_SCTP_S_INIT_CLI /* IP_VS_SCTP_EVE_INIT_CLI */ },
 837 {IP_VS_SCTP_S_INIT_SER /* IP_VS_SCTP_EVE_INIT_SER */ },
 838 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_CLI */ },
 839 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_INIT_ACK_SER */ },
 840 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_CLI */ },
 841 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ECHO_SER */ },
 842 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_CLI */ },
 843 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_COOKIE_ACK_SER */ },
 844 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_CLI */ },
 845 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_ABORT_SER */ },
 846 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_CLI */ },
 847 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_SER */ },
 848 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_CLI */ },
 849 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_ACK_SER */ },
 850 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_CLI */ },
 851 {IP_VS_SCTP_S_CLOSED /* IP_VS_SCTP_EVE_SHUT_COM_SER */ }
 852 }
 853};
 854
 855/*
 856 * Timeout table[state]
 857 */
 858static const int sctp_timeouts[IP_VS_SCTP_S_LAST + 1] = {
 859 [IP_VS_SCTP_S_NONE] = 2 * HZ,
 860 [IP_VS_SCTP_S_INIT_CLI] = 1 * 60 * HZ,
 861 [IP_VS_SCTP_S_INIT_SER] = 1 * 60 * HZ,
 862 [IP_VS_SCTP_S_INIT_ACK_CLI] = 1 * 60 * HZ,
 863 [IP_VS_SCTP_S_INIT_ACK_SER] = 1 * 60 * HZ,
 864 [IP_VS_SCTP_S_ECHO_CLI] = 1 * 60 * HZ,
 865 [IP_VS_SCTP_S_ECHO_SER] = 1 * 60 * HZ,
 866 [IP_VS_SCTP_S_ESTABLISHED] = 15 * 60 * HZ,
 867 [IP_VS_SCTP_S_SHUT_CLI] = 1 * 60 * HZ,
 868 [IP_VS_SCTP_S_SHUT_SER] = 1 * 60 * HZ,
 869 [IP_VS_SCTP_S_SHUT_ACK_CLI] = 1 * 60 * HZ,
 870 [IP_VS_SCTP_S_SHUT_ACK_SER] = 1 * 60 * HZ,
 871 [IP_VS_SCTP_S_CLOSED] = 10 * HZ,
 872 [IP_VS_SCTP_S_LAST] = 2 * HZ,
 873};
 874
 875static const char *sctp_state_name_table[IP_VS_SCTP_S_LAST + 1] = {
 876 [IP_VS_SCTP_S_NONE] = "NONE",
 877 [IP_VS_SCTP_S_INIT_CLI] = "INIT_CLI",
 878 [IP_VS_SCTP_S_INIT_SER] = "INIT_SER",
 879 [IP_VS_SCTP_S_INIT_ACK_CLI] = "INIT_ACK_CLI",
 880 [IP_VS_SCTP_S_INIT_ACK_SER] = "INIT_ACK_SER",
 881 [IP_VS_SCTP_S_ECHO_CLI] = "COOKIE_ECHO_CLI",
 882 [IP_VS_SCTP_S_ECHO_SER] = "COOKIE_ECHO_SER",
 883 [IP_VS_SCTP_S_ESTABLISHED] = "ESTABLISHED",
 884 [IP_VS_SCTP_S_SHUT_CLI] = "SHUTDOWN_CLI",
 885 [IP_VS_SCTP_S_SHUT_SER] = "SHUTDOWN_SER",
 886 [IP_VS_SCTP_S_SHUT_ACK_CLI] = "SHUTDOWN_ACK_CLI",
 887 [IP_VS_SCTP_S_SHUT_ACK_SER] = "SHUTDOWN_ACK_SER",
 888 [IP_VS_SCTP_S_CLOSED] = "CLOSED",
 889 [IP_VS_SCTP_S_LAST] = "BUG!"
 890};
 891
 892
245
246#define sNO IP_VS_SCTP_S_NONE
247#define sI1 IP_VS_SCTP_S_INIT1
248#define sIN IP_VS_SCTP_S_INIT
249#define sCS IP_VS_SCTP_S_COOKIE_SENT
250#define sCR IP_VS_SCTP_S_COOKIE_REPLIED
251#define sCW IP_VS_SCTP_S_COOKIE_WAIT
252#define sCO IP_VS_SCTP_S_COOKIE
253#define sCE IP_VS_SCTP_S_COOKIE_ECHOED
254#define sES IP_VS_SCTP_S_ESTABLISHED
255#define sSS IP_VS_SCTP_S_SHUTDOWN_SENT
256#define sSR IP_VS_SCTP_S_SHUTDOWN_RECEIVED
257#define sSA IP_VS_SCTP_S_SHUTDOWN_ACK_SENT
258#define sRJ IP_VS_SCTP_S_REJECTED
259#define sCL IP_VS_SCTP_S_CLOSED
260
261static const __u8 sctp_states
262 [IP_VS_DIR_LAST][IP_VS_SCTP_EVENT_LAST][IP_VS_SCTP_S_LAST] = {
263 { /* INPUT */
264/* sNO, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL*/
265/* d */{sES, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
266/* i */{sI1, sIN, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sIN, sIN},
267/* i_a */{sCW, sCW, sCW, sCS, sCR, sCO, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
268/* c_e */{sCR, sIN, sIN, sCR, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
269/* c_a */{sES, sI1, sIN, sCS, sCR, sCW, sCO, sES, sES, sSS, sSR, sSA, sRJ, sCL},
270/* s */{sSR, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sSR, sSS, sSR, sSA, sRJ, sCL},
271/* s_a */{sCL, sIN, sIN, sCS, sCR, sCW, sCO, sCE, sES, sCL, sSR, sCL, sRJ, sCL},
272/* s_c */{sCL, sCL, sCL, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sCL, sRJ, sCL},
273/* err */{sCL, sI1, sIN, sCS, sCR, sCW, sCO, sCL, sES, sSS, sSR, sSA, sRJ, sCL},
274/* ab */{sCL, sCL, sCL, sCL, sCL, sRJ, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
275 },
276 { /* OUTPUT */
277/* sNO, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL*/
278/* d */{sES, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
279/* i */{sCW, sCW, sCW, sCW, sCW, sCW, sCW, sCW, sES, sCW, sCW, sCW, sCW, sCW},
280/* i_a */{sCS, sCS, sCS, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
281/* c_e */{sCE, sCE, sCE, sCE, sCE, sCE, sCE, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
282/* c_a */{sES, sES, sES, sES, sES, sES, sES, sES, sES, sSS, sSR, sSA, sRJ, sCL},
283/* s */{sSS, sSS, sSS, sSS, sSS, sSS, sSS, sSS, sSS, sSS, sSR, sSA, sRJ, sCL},
284/* s_a */{sSA, sSA, sSA, sSA, sSA, sCW, sCO, sCE, sES, sSA, sSA, sSA, sRJ, sCL},
285/* s_c */{sCL, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
286/* err */{sCL, sCL, sCL, sCL, sCL, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
287/* ab */{sCL, sRJ, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
288 },
289 { /* INPUT-ONLY */
290/* sNO, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL*/
291/* d */{sES, sI1, sIN, sCS, sCR, sES, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
292/* i */{sI1, sIN, sIN, sIN, sIN, sIN, sCO, sCE, sES, sSS, sSR, sSA, sIN, sIN},
293/* i_a */{sCE, sCE, sCE, sCE, sCE, sCE, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
294/* c_e */{sES, sES, sES, sES, sES, sES, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
295/* c_a */{sES, sI1, sIN, sES, sES, sCW, sES, sES, sES, sSS, sSR, sSA, sRJ, sCL},
296/* s */{sSR, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sSR, sSS, sSR, sSA, sRJ, sCL},
297/* s_a */{sCL, sIN, sIN, sCS, sCR, sCW, sCO, sCE, sCL, sCL, sSR, sCL, sRJ, sCL},
298/* s_c */{sCL, sCL, sCL, sCL, sCL, sCW, sCO, sCE, sES, sSS, sCL, sCL, sRJ, sCL},
299/* err */{sCL, sI1, sIN, sCS, sCR, sCW, sCO, sCE, sES, sSS, sSR, sSA, sRJ, sCL},
300/* ab */{sCL, sCL, sCL, sCL, sCL, sRJ, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
301 },
302};
303
304#define IP_VS_SCTP_MAX_RTO ((60 + 1) * HZ)
305
306/* Timeout table[state] */
307static const int sctp_timeouts[IP_VS_SCTP_S_LAST + 1] = {
308 [IP_VS_SCTP_S_NONE] = 2 * HZ,
309 [IP_VS_SCTP_S_INIT1] = (0 + 3 + 1) * HZ,
310 [IP_VS_SCTP_S_INIT] = IP_VS_SCTP_MAX_RTO,
311 [IP_VS_SCTP_S_COOKIE_SENT] = IP_VS_SCTP_MAX_RTO,
312 [IP_VS_SCTP_S_COOKIE_REPLIED] = IP_VS_SCTP_MAX_RTO,
313 [IP_VS_SCTP_S_COOKIE_WAIT] = IP_VS_SCTP_MAX_RTO,
314 [IP_VS_SCTP_S_COOKIE] = IP_VS_SCTP_MAX_RTO,
315 [IP_VS_SCTP_S_COOKIE_ECHOED] = IP_VS_SCTP_MAX_RTO,
316 [IP_VS_SCTP_S_ESTABLISHED] = 15 * 60 * HZ,
317 [IP_VS_SCTP_S_SHUTDOWN_SENT] = IP_VS_SCTP_MAX_RTO,
318 [IP_VS_SCTP_S_SHUTDOWN_RECEIVED] = IP_VS_SCTP_MAX_RTO,
319 [IP_VS_SCTP_S_SHUTDOWN_ACK_SENT] = IP_VS_SCTP_MAX_RTO,
320 [IP_VS_SCTP_S_REJECTED] = (0 + 3 + 1) * HZ,
321 [IP_VS_SCTP_S_CLOSED] = IP_VS_SCTP_MAX_RTO,
322 [IP_VS_SCTP_S_LAST] = 2 * HZ,
323};
324
325static const char *sctp_state_name_table[IP_VS_SCTP_S_LAST + 1] = {
326 [IP_VS_SCTP_S_NONE] = "NONE",
327 [IP_VS_SCTP_S_INIT1] = "INIT1",
328 [IP_VS_SCTP_S_INIT] = "INIT",
329 [IP_VS_SCTP_S_COOKIE_SENT] = "C-SENT",
330 [IP_VS_SCTP_S_COOKIE_REPLIED] = "C-REPLIED",
331 [IP_VS_SCTP_S_COOKIE_WAIT] = "C-WAIT",
332 [IP_VS_SCTP_S_COOKIE] = "COOKIE",
333 [IP_VS_SCTP_S_COOKIE_ECHOED] = "C-ECHOED",
334 [IP_VS_SCTP_S_ESTABLISHED] = "ESTABLISHED",
335 [IP_VS_SCTP_S_SHUTDOWN_SENT] = "S-SENT",
336 [IP_VS_SCTP_S_SHUTDOWN_RECEIVED] = "S-RECEIVED",
337 [IP_VS_SCTP_S_SHUTDOWN_ACK_SENT] = "S-ACK-SENT",
338 [IP_VS_SCTP_S_REJECTED] = "REJECTED",
339 [IP_VS_SCTP_S_CLOSED] = "CLOSED",
340 [IP_VS_SCTP_S_LAST] = "BUG!",
341};
342
343
@@ -943,17 +394,20 @@ set_sctp_state(struct ip_vs_proto_data *pd, struct ip_vs_conn *cp,
943 } 394 }
944 } 395 }
945 396
 946 event = sctp_events[chunk_type];
 947
 948 /*
 949 * If the direction is IP_VS_DIR_OUTPUT, this event is from the server
 950 */
 951 if (direction == IP_VS_DIR_OUTPUT)
 952 event++;
 953 /*
 954 * get next state
 955 */
 956 next_state = sctp_states_table[cp->state][event].next_state;
 957
 397 event = (chunk_type < sizeof(sctp_events)) ?
 398 sctp_events[chunk_type] : IP_VS_SCTP_DATA;
 399
 400 /* Update direction to INPUT_ONLY if necessary
 401 * or delete NO_OUTPUT flag if output packet detected
 402 */
 403 if (cp->flags & IP_VS_CONN_F_NOOUTPUT) {
 404 if (direction == IP_VS_DIR_OUTPUT)
 405 cp->flags &= ~IP_VS_CONN_F_NOOUTPUT;
 406 else
 407 direction = IP_VS_DIR_INPUT_ONLY;
 408 }
 409
 410 next_state = sctp_states[direction][event][cp->state];
 411
958 if (next_state != cp->state) { 412 if (next_state != cp->state) {
959 struct ip_vs_dest *dest = cp->dest; 413 struct ip_vs_dest *dest = cp->dest;
diff --git a/net/netfilter/ipvs/ip_vs_proto_tcp.c b/net/netfilter/ipvs/ip_vs_proto_tcp.c
index 50a15944c6c1..e3a697234a98 100644
--- a/net/netfilter/ipvs/ip_vs_proto_tcp.c
+++ b/net/netfilter/ipvs/ip_vs_proto_tcp.c
@@ -39,6 +39,7 @@ tcp_conn_schedule(int af, struct sk_buff *skb, struct ip_vs_proto_data *pd,
39 struct net *net; 39 struct net *net;
40 struct ip_vs_service *svc; 40 struct ip_vs_service *svc;
41 struct tcphdr _tcph, *th; 41 struct tcphdr _tcph, *th;
42 struct netns_ipvs *ipvs;
42 43
43 th = skb_header_pointer(skb, iph->len, sizeof(_tcph), &_tcph); 44 th = skb_header_pointer(skb, iph->len, sizeof(_tcph), &_tcph);
44 if (th == NULL) { 45 if (th == NULL) {
@@ -46,14 +47,15 @@ tcp_conn_schedule(int af, struct sk_buff *skb, struct ip_vs_proto_data *pd,
46 return 0; 47 return 0;
47 } 48 }
48 net = skb_net(skb); 49 net = skb_net(skb);
50 ipvs = net_ipvs(net);
49 /* No !th->ack check to allow scheduling on SYN+ACK for Active FTP */ 51 /* No !th->ack check to allow scheduling on SYN+ACK for Active FTP */
50 rcu_read_lock(); 52 rcu_read_lock();
51 if (th->syn && 53 if ((th->syn || sysctl_sloppy_tcp(ipvs)) && !th->rst &&
52 (svc = ip_vs_service_find(net, af, skb->mark, iph->protocol, 54 (svc = ip_vs_service_find(net, af, skb->mark, iph->protocol,
53 &iph->daddr, th->dest))) { 55 &iph->daddr, th->dest))) {
54 int ignored; 56 int ignored;
55 57
56 if (ip_vs_todrop(net_ipvs(net))) { 58 if (ip_vs_todrop(ipvs)) {
57 /* 59 /*
58 * It seems that we are very loaded. 60 * It seems that we are very loaded.
59 * We have to drop this packet :( 61 * We have to drop this packet :(
@@ -401,7 +403,7 @@ static struct tcp_states_t tcp_states [] = {
401/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */ 403/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */
402/*syn*/ {{sSR, sES, sES, sSR, sSR, sSR, sSR, sSR, sSR, sSR, sSR }}, 404/*syn*/ {{sSR, sES, sES, sSR, sSR, sSR, sSR, sSR, sSR, sSR, sSR }},
403/*fin*/ {{sCL, sCW, sSS, sTW, sTW, sTW, sCL, sCW, sLA, sLI, sTW }}, 405/*fin*/ {{sCL, sCW, sSS, sTW, sTW, sTW, sCL, sCW, sLA, sLI, sTW }},
404/*ack*/ {{sCL, sES, sSS, sES, sFW, sTW, sCL, sCW, sCL, sLI, sES }}, 406/*ack*/ {{sES, sES, sSS, sES, sFW, sTW, sCL, sCW, sCL, sLI, sES }},
405/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sSR }}, 407/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sSR }},
406 408
407/* OUTPUT */ 409/* OUTPUT */
@@ -415,7 +417,7 @@ static struct tcp_states_t tcp_states [] = {
415/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */ 417/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */
416/*syn*/ {{sSR, sES, sES, sSR, sSR, sSR, sSR, sSR, sSR, sSR, sSR }}, 418/*syn*/ {{sSR, sES, sES, sSR, sSR, sSR, sSR, sSR, sSR, sSR, sSR }},
417/*fin*/ {{sCL, sFW, sSS, sTW, sFW, sTW, sCL, sCW, sLA, sLI, sTW }}, 419/*fin*/ {{sCL, sFW, sSS, sTW, sFW, sTW, sCL, sCW, sLA, sLI, sTW }},
418/*ack*/ {{sCL, sES, sSS, sES, sFW, sTW, sCL, sCW, sCL, sLI, sES }}, 420/*ack*/ {{sES, sES, sSS, sES, sFW, sTW, sCL, sCW, sCL, sLI, sES }},
419/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sCL }}, 421/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sCL }},
420}; 422};
421 423
@@ -424,7 +426,7 @@ static struct tcp_states_t tcp_states_dos [] = {
424/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */ 426/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */
425/*syn*/ {{sSR, sES, sES, sSR, sSR, sSR, sSR, sSR, sSR, sSR, sSA }}, 427/*syn*/ {{sSR, sES, sES, sSR, sSR, sSR, sSR, sSR, sSR, sSR, sSA }},
426/*fin*/ {{sCL, sCW, sSS, sTW, sTW, sTW, sCL, sCW, sLA, sLI, sSA }}, 428/*fin*/ {{sCL, sCW, sSS, sTW, sTW, sTW, sCL, sCW, sLA, sLI, sSA }},
427/*ack*/ {{sCL, sES, sSS, sSR, sFW, sTW, sCL, sCW, sCL, sLI, sSA }}, 429/*ack*/ {{sES, sES, sSS, sSR, sFW, sTW, sCL, sCW, sCL, sLI, sSA }},
428/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sCL }}, 430/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sCL }},
429 431
430/* OUTPUT */ 432/* OUTPUT */
@@ -438,7 +440,7 @@ static struct tcp_states_t tcp_states_dos [] = {
438/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */ 440/* sNO, sES, sSS, sSR, sFW, sTW, sCL, sCW, sLA, sLI, sSA */
439/*syn*/ {{sSA, sES, sES, sSR, sSA, sSA, sSA, sSA, sSA, sSA, sSA }}, 441/*syn*/ {{sSA, sES, sES, sSR, sSA, sSA, sSA, sSA, sSA, sSA, sSA }},
440/*fin*/ {{sCL, sFW, sSS, sTW, sFW, sTW, sCL, sCW, sLA, sLI, sTW }}, 442/*fin*/ {{sCL, sFW, sSS, sTW, sFW, sTW, sCL, sCW, sLA, sLI, sTW }},
441/*ack*/ {{sCL, sES, sSS, sES, sFW, sTW, sCL, sCW, sCL, sLI, sES }}, 443/*ack*/ {{sES, sES, sSS, sES, sFW, sTW, sCL, sCW, sCL, sLI, sES }},
442/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sCL }}, 444/*rst*/ {{sCL, sCL, sCL, sSR, sCL, sCL, sCL, sCL, sLA, sLI, sCL }},
443}; 445};
444 446
diff --git a/net/netfilter/ipvs/ip_vs_rr.c b/net/netfilter/ipvs/ip_vs_rr.c
index c35986c793d9..176b87c35e34 100644
--- a/net/netfilter/ipvs/ip_vs_rr.c
+++ b/net/netfilter/ipvs/ip_vs_rr.c
@@ -55,7 +55,8 @@ static int ip_vs_rr_del_dest(struct ip_vs_service *svc, struct ip_vs_dest *dest)
55 * Round-Robin Scheduling 55 * Round-Robin Scheduling
56 */ 56 */
57static struct ip_vs_dest * 57static struct ip_vs_dest *
58ip_vs_rr_schedule(struct ip_vs_service *svc, const struct sk_buff *skb) 58ip_vs_rr_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
59 struct ip_vs_iphdr *iph)
59{ 60{
60 struct list_head *p; 61 struct list_head *p;
61 struct ip_vs_dest *dest, *last; 62 struct ip_vs_dest *dest, *last;
diff --git a/net/netfilter/ipvs/ip_vs_sed.c b/net/netfilter/ipvs/ip_vs_sed.c
index f3205925359a..a5284cc3d882 100644
--- a/net/netfilter/ipvs/ip_vs_sed.c
+++ b/net/netfilter/ipvs/ip_vs_sed.c
@@ -59,7 +59,8 @@ ip_vs_sed_dest_overhead(struct ip_vs_dest *dest)
59 * Weighted Least Connection scheduling 59 * Weighted Least Connection scheduling
60 */ 60 */
61static struct ip_vs_dest * 61static struct ip_vs_dest *
62ip_vs_sed_schedule(struct ip_vs_service *svc, const struct sk_buff *skb) 62ip_vs_sed_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
63 struct ip_vs_iphdr *iph)
63{ 64{
64 struct ip_vs_dest *dest, *least; 65 struct ip_vs_dest *dest, *least;
65 unsigned int loh, doh; 66 unsigned int loh, doh;
diff --git a/net/netfilter/ipvs/ip_vs_sh.c b/net/netfilter/ipvs/ip_vs_sh.c
index a65edfe4b16c..f16c027df15b 100644
--- a/net/netfilter/ipvs/ip_vs_sh.c
+++ b/net/netfilter/ipvs/ip_vs_sh.c
@@ -48,6 +48,10 @@
48 48
49#include <net/ip_vs.h> 49#include <net/ip_vs.h>
50 50
51#include <net/tcp.h>
52#include <linux/udp.h>
53#include <linux/sctp.h>
54
51 55
52/* 56/*
53 * IPVS SH bucket 57 * IPVS SH bucket
@@ -71,10 +75,19 @@ struct ip_vs_sh_state {
71 struct ip_vs_sh_bucket buckets[IP_VS_SH_TAB_SIZE]; 75 struct ip_vs_sh_bucket buckets[IP_VS_SH_TAB_SIZE];
72}; 76};
73 77
78/* Helper function to determine if server is unavailable */
79static inline bool is_unavailable(struct ip_vs_dest *dest)
80{
81 return atomic_read(&dest->weight) <= 0 ||
82 dest->flags & IP_VS_DEST_F_OVERLOAD;
83}
84
74/* 85/*
75 * Returns hash value for IPVS SH entry 86 * Returns hash value for IPVS SH entry
76 */ 87 */
77static inline unsigned int ip_vs_sh_hashkey(int af, const union nf_inet_addr *addr) 88static inline unsigned int
89ip_vs_sh_hashkey(int af, const union nf_inet_addr *addr,
90 __be16 port, unsigned int offset)
78{ 91{
79 __be32 addr_fold = addr->ip; 92 __be32 addr_fold = addr->ip;
80 93
@@ -83,7 +96,8 @@ static inline unsigned int ip_vs_sh_hashkey(int af, const union nf_inet_addr *ad
83 addr_fold = addr->ip6[0]^addr->ip6[1]^ 96 addr_fold = addr->ip6[0]^addr->ip6[1]^
84 addr->ip6[2]^addr->ip6[3]; 97 addr->ip6[2]^addr->ip6[3];
85#endif 98#endif
86 return (ntohl(addr_fold)*2654435761UL) & IP_VS_SH_TAB_MASK; 99 return (offset + (ntohs(port) + ntohl(addr_fold))*2654435761UL) &
100 IP_VS_SH_TAB_MASK;
87} 101}
88 102
89 103
@@ -91,12 +105,42 @@ static inline unsigned int ip_vs_sh_hashkey(int af, const union nf_inet_addr *ad
91 * Get ip_vs_dest associated with supplied parameters. 105 * Get ip_vs_dest associated with supplied parameters.
92 */ 106 */
93static inline struct ip_vs_dest * 107static inline struct ip_vs_dest *
94ip_vs_sh_get(int af, struct ip_vs_sh_state *s, const union nf_inet_addr *addr) 108ip_vs_sh_get(struct ip_vs_service *svc, struct ip_vs_sh_state *s,
109 const union nf_inet_addr *addr, __be16 port)
95{ 110{
96 return rcu_dereference(s->buckets[ip_vs_sh_hashkey(af, addr)].dest); 111 unsigned int hash = ip_vs_sh_hashkey(svc->af, addr, port, 0);
112 struct ip_vs_dest *dest = rcu_dereference(s->buckets[hash].dest);
113
114 return (!dest || is_unavailable(dest)) ? NULL : dest;
97} 115}
98 116
99 117
118/* As ip_vs_sh_get, but with fallback if selected server is unavailable */
119static inline struct ip_vs_dest *
120ip_vs_sh_get_fallback(struct ip_vs_service *svc, struct ip_vs_sh_state *s,
121 const union nf_inet_addr *addr, __be16 port)
122{
123 unsigned int offset;
124 unsigned int hash;
125 struct ip_vs_dest *dest;
126
127 for (offset = 0; offset < IP_VS_SH_TAB_SIZE; offset++) {
128 hash = ip_vs_sh_hashkey(svc->af, addr, port, offset);
129 dest = rcu_dereference(s->buckets[hash].dest);
130 if (!dest)
131 break;
132 if (is_unavailable(dest))
133 IP_VS_DBG_BUF(6, "SH: selected unavailable server "
134 "%s:%d (offset %d)",
135 IP_VS_DBG_ADDR(svc->af, &dest->addr),
136 ntohs(dest->port), offset);
137 else
138 return dest;
139 }
140
141 return NULL;
142}
143
100/* 144/*
101 * Assign all the hash buckets of the specified table with the service. 145 * Assign all the hash buckets of the specified table with the service.
102 */ 146 */
@@ -213,13 +257,33 @@ static int ip_vs_sh_dest_changed(struct ip_vs_service *svc,
213} 257}
214 258
215 259
216/*
217 * If the dest flags is set with IP_VS_DEST_F_OVERLOAD,
218 * consider that the server is overloaded here.
219 */
220static inline int is_overloaded(struct ip_vs_dest *dest)
221{
222 return dest->flags & IP_VS_DEST_F_OVERLOAD;
223}
260/* Helper function to get port number */
261static inline __be16
262ip_vs_sh_get_port(const struct sk_buff *skb, struct ip_vs_iphdr *iph)
263{
264 __be16 port;
265 struct tcphdr _tcph, *th;
266 struct udphdr _udph, *uh;
267 sctp_sctphdr_t _sctph, *sh;
268
269 switch (iph->protocol) {
270 case IPPROTO_TCP:
271 th = skb_header_pointer(skb, iph->len, sizeof(_tcph), &_tcph);
272 port = th->source;
273 break;
274 case IPPROTO_UDP:
275 uh = skb_header_pointer(skb, iph->len, sizeof(_udph), &_udph);
276 port = uh->source;
277 break;
278 case IPPROTO_SCTP:
279 sh = skb_header_pointer(skb, iph->len, sizeof(_sctph), &_sctph);
280 port = sh->source;
281 break;
282 default:
283 port = 0;
284 }
285
286 return port;
287}
224 288
225 289
@@ -227,28 +291,32 @@ static inline int is_overloaded(struct ip_vs_dest *dest)
227 * Source Hashing scheduling 291 * Source Hashing scheduling
228 */ 292 */
229static struct ip_vs_dest * 293static struct ip_vs_dest *
230ip_vs_sh_schedule(struct ip_vs_service *svc, const struct sk_buff *skb) 294ip_vs_sh_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
295 struct ip_vs_iphdr *iph)
231{ 296{
232 struct ip_vs_dest *dest; 297 struct ip_vs_dest *dest;
233 struct ip_vs_sh_state *s; 298 struct ip_vs_sh_state *s;
234 struct ip_vs_iphdr iph; 299 __be16 port = 0;
235
236 ip_vs_fill_iph_addr_only(svc->af, skb, &iph);
237 300
238 IP_VS_DBG(6, "ip_vs_sh_schedule(): Scheduling...\n"); 301 IP_VS_DBG(6, "ip_vs_sh_schedule(): Scheduling...\n");
239 302
303 if (svc->flags & IP_VS_SVC_F_SCHED_SH_PORT)
304 port = ip_vs_sh_get_port(skb, iph);
305
240 s = (struct ip_vs_sh_state *) svc->sched_data; 306 s = (struct ip_vs_sh_state *) svc->sched_data;
241 dest = ip_vs_sh_get(svc->af, s, &iph.saddr); 307
242 if (!dest 308 if (svc->flags & IP_VS_SVC_F_SCHED_SH_FALLBACK)
243 || !(dest->flags & IP_VS_DEST_F_AVAILABLE) 309 dest = ip_vs_sh_get_fallback(svc, s, &iph->saddr, port);
244 || atomic_read(&dest->weight) <= 0 310 else
245 || is_overloaded(dest)) { 311 dest = ip_vs_sh_get(svc, s, &iph->saddr, port);
312
313 if (!dest) {
246 ip_vs_scheduler_err(svc, "no destination available"); 314 ip_vs_scheduler_err(svc, "no destination available");
247 return NULL; 315 return NULL;
248 } 316 }
249 317
250 IP_VS_DBG_BUF(6, "SH: source IP address %s --> server %s:%d\n", 318 IP_VS_DBG_BUF(6, "SH: source IP address %s --> server %s:%d\n",
251 IP_VS_DBG_ADDR(svc->af, &iph.saddr), 319 IP_VS_DBG_ADDR(svc->af, &iph->saddr),
252 IP_VS_DBG_ADDR(svc->af, &dest->addr), 320 IP_VS_DBG_ADDR(svc->af, &dest->addr),
253 ntohs(dest->port)); 321 ntohs(dest->port));
254 322
diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
index f6046d9af8d3..f4484719f3e6 100644
--- a/net/netfilter/ipvs/ip_vs_sync.c
+++ b/net/netfilter/ipvs/ip_vs_sync.c
@@ -425,6 +425,16 @@ ip_vs_sync_buff_create_v0(struct netns_ipvs *ipvs)
 	return sb;
 }
 
+/* Check if connection is controlled by persistence */
+static inline bool in_persistence(struct ip_vs_conn *cp)
+{
+	for (cp = cp->control; cp; cp = cp->control) {
+		if (cp->flags & IP_VS_CONN_F_TEMPLATE)
+			return true;
+	}
+	return false;
+}
+
 /* Check if conn should be synced.
  * pkts: conn packets, use sysctl_sync_threshold to avoid packet check
  * - (1) sync_refresh_period: reduce sync rate. Additionally, retry
@@ -447,6 +457,8 @@ static int ip_vs_sync_conn_needed(struct netns_ipvs *ipvs,
 	/* Check if we sync in current state */
 	if (unlikely(cp->flags & IP_VS_CONN_F_TEMPLATE))
 		force = 0;
+	else if (unlikely(sysctl_sync_persist_mode(ipvs) && in_persistence(cp)))
+		return 0;
 	else if (likely(cp->protocol == IPPROTO_TCP)) {
 		if (!((1 << cp->state) &
 		      ((1 << IP_VS_TCP_S_ESTABLISHED) |
@@ -461,9 +473,10 @@ static int ip_vs_sync_conn_needed(struct netns_ipvs *ipvs,
 	} else if (unlikely(cp->protocol == IPPROTO_SCTP)) {
 		if (!((1 << cp->state) &
 		      ((1 << IP_VS_SCTP_S_ESTABLISHED) |
-		       (1 << IP_VS_SCTP_S_CLOSED) |
-		       (1 << IP_VS_SCTP_S_SHUT_ACK_CLI) |
-		       (1 << IP_VS_SCTP_S_SHUT_ACK_SER))))
+		       (1 << IP_VS_SCTP_S_SHUTDOWN_SENT) |
+		       (1 << IP_VS_SCTP_S_SHUTDOWN_RECEIVED) |
+		       (1 << IP_VS_SCTP_S_SHUTDOWN_ACK_SENT) |
+		       (1 << IP_VS_SCTP_S_CLOSED))))
 			return 0;
 		force = cp->state != cp->old_state;
 		if (force && cp->state != IP_VS_SCTP_S_ESTABLISHED)
diff --git a/net/netfilter/ipvs/ip_vs_wlc.c b/net/netfilter/ipvs/ip_vs_wlc.c
index c60a81c4ce9a..6dc1fa128840 100644
--- a/net/netfilter/ipvs/ip_vs_wlc.c
+++ b/net/netfilter/ipvs/ip_vs_wlc.c
@@ -31,7 +31,8 @@
  *    Weighted Least Connection scheduling
  */
 static struct ip_vs_dest *
-ip_vs_wlc_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
+ip_vs_wlc_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		   struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest, *least;
 	unsigned int loh, doh;
diff --git a/net/netfilter/ipvs/ip_vs_wrr.c b/net/netfilter/ipvs/ip_vs_wrr.c
index 0e68555bceb9..0546cd572d6b 100644
--- a/net/netfilter/ipvs/ip_vs_wrr.c
+++ b/net/netfilter/ipvs/ip_vs_wrr.c
@@ -162,7 +162,8 @@ static int ip_vs_wrr_dest_changed(struct ip_vs_service *svc,
  *    Weighted Round-Robin Scheduling
  */
 static struct ip_vs_dest *
-ip_vs_wrr_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
+ip_vs_wrr_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		   struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest, *last, *stop = NULL;
 	struct ip_vs_wrr_mark *mark = svc->sched_data;
diff --git a/net/netfilter/nf_conntrack_ftp.c b/net/netfilter/nf_conntrack_ftp.c
index 6b217074237b..b8a0924064ef 100644
--- a/net/netfilter/nf_conntrack_ftp.c
+++ b/net/netfilter/nf_conntrack_ftp.c
@@ -55,10 +55,14 @@ unsigned int (*nf_nat_ftp_hook)(struct sk_buff *skb,
 				  struct nf_conntrack_expect *exp);
 EXPORT_SYMBOL_GPL(nf_nat_ftp_hook);
 
-static int try_rfc959(const char *, size_t, struct nf_conntrack_man *, char);
-static int try_eprt(const char *, size_t, struct nf_conntrack_man *, char);
+static int try_rfc959(const char *, size_t, struct nf_conntrack_man *,
+		      char, unsigned int *);
+static int try_rfc1123(const char *, size_t, struct nf_conntrack_man *,
+		       char, unsigned int *);
+static int try_eprt(const char *, size_t, struct nf_conntrack_man *,
+		    char, unsigned int *);
 static int try_epsv_response(const char *, size_t, struct nf_conntrack_man *,
-			     char);
+			     char, unsigned int *);
 
 static struct ftp_search {
 	const char *pattern;
@@ -66,7 +70,7 @@ static struct ftp_search {
 	char skip;
 	char term;
 	enum nf_ct_ftp_type ftptype;
-	int (*getnum)(const char *, size_t, struct nf_conntrack_man *, char);
+	int (*getnum)(const char *, size_t, struct nf_conntrack_man *, char, unsigned int *);
 } search[IP_CT_DIR_MAX][2] = {
 	[IP_CT_DIR_ORIGINAL] = {
 		{
@@ -90,10 +94,8 @@ static struct ftp_search {
 		{
 			.pattern	= "227 ",
 			.plen		= sizeof("227 ") - 1,
-			.skip		= '(',
-			.term		= ')',
 			.ftptype	= NF_CT_FTP_PASV,
-			.getnum		= try_rfc959,
+			.getnum		= try_rfc1123,
 		},
 		{
 			.pattern	= "229 ",
@@ -132,8 +134,9 @@ static int try_number(const char *data, size_t dlen, u_int32_t array[],
 			i++;
 		else {
 			/* Unexpected character; true if it's the
-			   terminator and we're finished. */
-			if (*data == term && i == array_size - 1)
+			   terminator (or we don't care about one)
+			   and we're finished. */
+			if ((*data == term || !term) && i == array_size - 1)
 				return len;
 
 			pr_debug("Char %u (got %u nums) `%u' unexpected\n",
@@ -148,7 +151,8 @@ static int try_number(const char *data, size_t dlen, u_int32_t array[],
 
 /* Returns 0, or length of numbers: 192,168,1,1,5,6 */
 static int try_rfc959(const char *data, size_t dlen,
-		      struct nf_conntrack_man *cmd, char term)
+		      struct nf_conntrack_man *cmd, char term,
+		      unsigned int *offset)
 {
 	int length;
 	u_int32_t array[6];
@@ -163,6 +167,33 @@ static int try_rfc959(const char *data, size_t dlen,
 	return length;
 }
 
+/*
+ * From RFC 1123:
+ * The format of the 227 reply to a PASV command is not
+ * well standardized.  In particular, an FTP client cannot
+ * assume that the parentheses shown on page 40 of RFC-959
+ * will be present (and in fact, Figure 3 on page 43 omits
+ * them).  Therefore, a User-FTP program that interprets
+ * the PASV reply must scan the reply for the first digit
+ * of the host and port numbers.
+ */
+static int try_rfc1123(const char *data, size_t dlen,
+		       struct nf_conntrack_man *cmd, char term,
+		       unsigned int *offset)
+{
+	int i;
+	for (i = 0; i < dlen; i++)
+		if (isdigit(data[i]))
+			break;
+
+	if (i == dlen)
+		return 0;
+
+	*offset += i;
+
+	return try_rfc959(data + i, dlen - i, cmd, 0, offset);
+}
+
 /* Grab port: number up to delimiter */
 static int get_port(const char *data, int start, size_t dlen, char delim,
 		    __be16 *port)
@@ -191,7 +222,7 @@ static int get_port(const char *data, int start, size_t dlen, char delim,
 
 /* Returns 0, or length of numbers: |1|132.235.1.2|6275| or |2|3ffe::1|6275| */
 static int try_eprt(const char *data, size_t dlen, struct nf_conntrack_man *cmd,
-		    char term)
+		    char term, unsigned int *offset)
 {
 	char delim;
 	int length;
@@ -239,7 +270,8 @@ static int try_eprt(const char *data, size_t dlen, struct nf_conntrack_man *cmd,
 
 /* Returns 0, or length of numbers: |||6446| */
 static int try_epsv_response(const char *data, size_t dlen,
-			     struct nf_conntrack_man *cmd, char term)
+			     struct nf_conntrack_man *cmd, char term,
+			     unsigned int *offset)
 {
 	char delim;
 
@@ -261,9 +293,10 @@ static int find_pattern(const char *data, size_t dlen,
 			unsigned int *numlen,
 			struct nf_conntrack_man *cmd,
 			int (*getnum)(const char *, size_t,
-				      struct nf_conntrack_man *, char))
+				      struct nf_conntrack_man *, char,
+				      unsigned int *))
 {
-	size_t i;
+	size_t i = plen;
 
 	pr_debug("find_pattern `%s': dlen = %Zu\n", pattern, dlen);
 	if (dlen == 0)
@@ -293,16 +326,18 @@ static int find_pattern(const char *data, size_t dlen,
 	pr_debug("Pattern matches!\n");
 	/* Now we've found the constant string, try to skip
 	   to the 'skip' character */
-	for (i = plen; data[i] != skip; i++)
-		if (i == dlen - 1) return -1;
+	if (skip) {
+		for (i = plen; data[i] != skip; i++)
+			if (i == dlen - 1) return -1;
 
-	/* Skip over the last character */
-	i++;
+		/* Skip over the last character */
+		i++;
+	}
 
 	pr_debug("Skipped up to `%c'!\n", skip);
 
 	*numoff = i;
-	*numlen = getnum(data + i, dlen - i, cmd, term);
+	*numlen = getnum(data + i, dlen - i, cmd, term, numoff);
 	if (!*numlen)
 		return -1;
 
diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
index ecf065f94032..edc410e778f7 100644
--- a/net/netfilter/nf_conntrack_netlink.c
+++ b/net/netfilter/nf_conntrack_netlink.c
@@ -828,7 +828,9 @@ ctnetlink_parse_tuple_ip(struct nlattr *attr, struct nf_conntrack_tuple *tuple)
 	struct nf_conntrack_l3proto *l3proto;
 	int ret = 0;
 
-	nla_parse_nested(tb, CTA_IP_MAX, attr, NULL);
+	ret = nla_parse_nested(tb, CTA_IP_MAX, attr, NULL);
+	if (ret < 0)
+		return ret;
 
 	rcu_read_lock();
 	l3proto = __nf_ct_l3proto_find(tuple->src.l3num);
@@ -895,7 +897,9 @@ ctnetlink_parse_tuple(const struct nlattr * const cda[],
 
 	memset(tuple, 0, sizeof(*tuple));
 
-	nla_parse_nested(tb, CTA_TUPLE_MAX, cda[type], tuple_nla_policy);
+	err = nla_parse_nested(tb, CTA_TUPLE_MAX, cda[type], tuple_nla_policy);
+	if (err < 0)
+		return err;
 
 	if (!tb[CTA_TUPLE_IP])
 		return -EINVAL;
@@ -946,9 +950,12 @@ static inline int
 ctnetlink_parse_help(const struct nlattr *attr, char **helper_name,
 		     struct nlattr **helpinfo)
 {
+	int err;
 	struct nlattr *tb[CTA_HELP_MAX+1];
 
-	nla_parse_nested(tb, CTA_HELP_MAX, attr, help_nla_policy);
+	err = nla_parse_nested(tb, CTA_HELP_MAX, attr, help_nla_policy);
+	if (err < 0)
+		return err;
 
 	if (!tb[CTA_HELP_NAME])
 		return -EINVAL;
@@ -1431,7 +1438,9 @@ ctnetlink_change_protoinfo(struct nf_conn *ct, const struct nlattr * const cda[]
 	struct nf_conntrack_l4proto *l4proto;
 	int err = 0;
 
-	nla_parse_nested(tb, CTA_PROTOINFO_MAX, attr, protoinfo_policy);
+	err = nla_parse_nested(tb, CTA_PROTOINFO_MAX, attr, protoinfo_policy);
+	if (err < 0)
+		return err;
 
 	rcu_read_lock();
 	l4proto = __nf_ct_l4proto_find(nf_ct_l3num(ct), nf_ct_protonum(ct));
@@ -1452,9 +1461,12 @@ static const struct nla_policy nat_seq_policy[CTA_NAT_SEQ_MAX+1] = {
 static inline int
 change_nat_seq_adj(struct nf_nat_seq *natseq, const struct nlattr * const attr)
 {
+	int err;
 	struct nlattr *cda[CTA_NAT_SEQ_MAX+1];
 
-	nla_parse_nested(cda, CTA_NAT_SEQ_MAX, attr, nat_seq_policy);
+	err = nla_parse_nested(cda, CTA_NAT_SEQ_MAX, attr, nat_seq_policy);
+	if (err < 0)
+		return err;
 
 	if (!cda[CTA_NAT_SEQ_CORRECTION_POS])
 		return -EINVAL;
@@ -2116,7 +2128,9 @@ ctnetlink_nfqueue_parse(const struct nlattr *attr, struct nf_conn *ct)
 	struct nlattr *cda[CTA_MAX+1];
 	int ret;
 
-	nla_parse_nested(cda, CTA_MAX, attr, ct_nla_policy);
+	ret = nla_parse_nested(cda, CTA_MAX, attr, ct_nla_policy);
+	if (ret < 0)
+		return ret;
 
 	spin_lock_bh(&nf_conntrack_lock);
 	ret = ctnetlink_nfqueue_parse_ct((const struct nlattr **)cda, ct);
@@ -2711,7 +2725,9 @@ ctnetlink_parse_expect_nat(const struct nlattr *attr,
 	struct nf_conntrack_tuple nat_tuple = {};
 	int err;
 
-	nla_parse_nested(tb, CTA_EXPECT_NAT_MAX, attr, exp_nat_nla_policy);
+	err = nla_parse_nested(tb, CTA_EXPECT_NAT_MAX, attr, exp_nat_nla_policy);
+	if (err < 0)
+		return err;
 
 	if (!tb[CTA_EXPECT_NAT_DIR] || !tb[CTA_EXPECT_NAT_TUPLE])
 		return -EINVAL;
diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
index 4d4d8f1d01fc..7dcc376eea5f 100644
--- a/net/netfilter/nf_conntrack_proto_tcp.c
+++ b/net/netfilter/nf_conntrack_proto_tcp.c
@@ -1043,6 +1043,12 @@ static int tcp_packet(struct nf_conn *ct,
 			nf_ct_kill_acct(ct, ctinfo, skb);
 			return NF_ACCEPT;
 		}
+		/* ESTABLISHED without SEEN_REPLY, i.e. mid-connection
+		 * pickup with loose=1. Avoid large ESTABLISHED timeout.
+		 */
+		if (new_state == TCP_CONNTRACK_ESTABLISHED &&
+		    timeout > timeouts[TCP_CONNTRACK_UNACK])
+			timeout = timeouts[TCP_CONNTRACK_UNACK];
 	} else if (!test_bit(IPS_ASSURED_BIT, &ct->status)
 		   && (old_state == TCP_CONNTRACK_SYN_RECV
 		       || old_state == TCP_CONNTRACK_ESTABLISHED)
diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
index bd700b4013c1..f641751dba9d 100644
--- a/net/netfilter/nf_conntrack_standalone.c
+++ b/net/netfilter/nf_conntrack_standalone.c
@@ -408,7 +408,7 @@ static int log_invalid_proto_max = 255;
 
 static struct ctl_table_header *nf_ct_netfilter_header;
 
-static ctl_table nf_ct_sysctl_table[] = {
+static struct ctl_table nf_ct_sysctl_table[] = {
 	{
 		.procname	= "nf_conntrack_max",
 		.data		= &nf_conntrack_max,
@@ -458,7 +458,7 @@ static ctl_table nf_ct_sysctl_table[] = {
 
 #define NET_NF_CONNTRACK_MAX 2089
 
-static ctl_table nf_ct_netfilter_table[] = {
+static struct ctl_table nf_ct_netfilter_table[] = {
 	{
 		.procname	= "nf_conntrack_max",
 		.data		= &nf_conntrack_max,
diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
index 3b18dd1be7d9..85296d4eac0e 100644
--- a/net/netfilter/nf_log.c
+++ b/net/netfilter/nf_log.c
@@ -245,7 +245,7 @@ static const struct file_operations nflog_file_ops = {
 static char nf_log_sysctl_fnames[NFPROTO_NUMPROTO-NFPROTO_UNSPEC][3];
 static struct ctl_table nf_log_sysctl_table[NFPROTO_NUMPROTO+1];
 
-static int nf_log_proc_dostring(ctl_table *table, int write,
+static int nf_log_proc_dostring(struct ctl_table *table, int write,
 			 void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	const struct nf_logger *logger;
@@ -369,9 +369,7 @@ static int __net_init nf_log_net_init(struct net *net)
 
 out_sysctl:
 #ifdef CONFIG_PROC_FS
-	/* For init_net: errors will trigger panic, don't unroll on error. */
-	if (!net_eq(net, &init_net))
-		remove_proc_entry("nf_log", net->nf.proc_netfilter);
+	remove_proc_entry("nf_log", net->nf.proc_netfilter);
 #endif
 	return ret;
 }
diff --git a/net/netfilter/nf_nat_helper.c b/net/netfilter/nf_nat_helper.c
index 5fea563afe30..85e20a919081 100644
--- a/net/netfilter/nf_nat_helper.c
+++ b/net/netfilter/nf_nat_helper.c
@@ -104,7 +104,7 @@ static void mangle_contents(struct sk_buff *skb,
 	/* move post-replacement */
 	memmove(data + match_offset + rep_len,
 		data + match_offset + match_len,
-		skb->tail - (skb->network_header + dataoff +
+		skb_tail_pointer(skb) - (skb_network_header(skb) + dataoff +
 			     match_offset + match_len));
 
 	/* insert data from buffer */
diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c
index a191b6db657e..9e287cb56a04 100644
--- a/net/netfilter/nfnetlink_cthelper.c
+++ b/net/netfilter/nfnetlink_cthelper.c
@@ -67,9 +67,12 @@ static int
 nfnl_cthelper_parse_tuple(struct nf_conntrack_tuple *tuple,
 			  const struct nlattr *attr)
 {
+	int err;
 	struct nlattr *tb[NFCTH_TUPLE_MAX+1];
 
-	nla_parse_nested(tb, NFCTH_TUPLE_MAX, attr, nfnl_cthelper_tuple_pol);
+	err = nla_parse_nested(tb, NFCTH_TUPLE_MAX, attr, nfnl_cthelper_tuple_pol);
+	if (err < 0)
+		return err;
 
 	if (!tb[NFCTH_TUPLE_L3PROTONUM] || !tb[NFCTH_TUPLE_L4PROTONUM])
 		return -EINVAL;
@@ -121,9 +124,12 @@ static int
 nfnl_cthelper_expect_policy(struct nf_conntrack_expect_policy *expect_policy,
 			    const struct nlattr *attr)
 {
+	int err;
 	struct nlattr *tb[NFCTH_POLICY_MAX+1];
 
-	nla_parse_nested(tb, NFCTH_POLICY_MAX, attr, nfnl_cthelper_expect_pol);
+	err = nla_parse_nested(tb, NFCTH_POLICY_MAX, attr, nfnl_cthelper_expect_pol);
+	if (err < 0)
+		return err;
 
 	if (!tb[NFCTH_POLICY_NAME] ||
 	    !tb[NFCTH_POLICY_EXPECT_MAX] ||
@@ -153,8 +159,10 @@ nfnl_cthelper_parse_expect_policy(struct nf_conntrack_helper *helper,
 	struct nf_conntrack_expect_policy *expect_policy;
 	struct nlattr *tb[NFCTH_POLICY_SET_MAX+1];
 
-	nla_parse_nested(tb, NFCTH_POLICY_SET_MAX, attr,
-			 nfnl_cthelper_expect_policy_set);
+	ret = nla_parse_nested(tb, NFCTH_POLICY_SET_MAX, attr,
+			       nfnl_cthelper_expect_policy_set);
+	if (ret < 0)
+		return ret;
 
 	if (!tb[NFCTH_POLICY_SET_NUM])
 		return -EINVAL;
diff --git a/net/netfilter/nfnetlink_cttimeout.c b/net/netfilter/nfnetlink_cttimeout.c
index 65074dfb9383..50580494148d 100644
--- a/net/netfilter/nfnetlink_cttimeout.c
+++ b/net/netfilter/nfnetlink_cttimeout.c
@@ -59,8 +59,10 @@ ctnl_timeout_parse_policy(struct ctnl_timeout *timeout,
 	if (likely(l4proto->ctnl_timeout.nlattr_to_obj)) {
 		struct nlattr *tb[l4proto->ctnl_timeout.nlattr_max+1];
 
-		nla_parse_nested(tb, l4proto->ctnl_timeout.nlattr_max,
-				 attr, l4proto->ctnl_timeout.nla_policy);
+		ret = nla_parse_nested(tb, l4proto->ctnl_timeout.nlattr_max,
+				       attr, l4proto->ctnl_timeout.nla_policy);
+		if (ret < 0)
+			return ret;
 
 		ret = l4proto->ctnl_timeout.nlattr_to_obj(tb, net,
 							  &timeout->data);
diff --git a/net/netfilter/nfnetlink_queue_core.c b/net/netfilter/nfnetlink_queue_core.c
index 5352b2d2d5bf..971ea145ab3e 100644
--- a/net/netfilter/nfnetlink_queue_core.c
+++ b/net/netfilter/nfnetlink_queue_core.c
@@ -41,6 +41,14 @@
 
 #define NFQNL_QMAX_DEFAULT 1024
 
+/* We're using struct nlattr which has 16bit nla_len. Note that nla_len
+ * includes the header length. Thus, the maximum packet length that we
+ * support is 65531 bytes. We send truncated packets if the specified length
+ * is larger than that.  Userspace can check for presence of NFQA_CAP_LEN
+ * attribute to detect truncation.
+ */
+#define NFQNL_MAX_COPY_RANGE (0xffff - NLA_HDRLEN)
+
 struct nfqnl_instance {
 	struct hlist_node hlist;		/* global list of queues */
 	struct rcu_head rcu;
@@ -122,7 +130,7 @@ instance_create(struct nfnl_queue_net *q, u_int16_t queue_num,
 	inst->queue_num = queue_num;
 	inst->peer_portid = portid;
 	inst->queue_maxlen = NFQNL_QMAX_DEFAULT;
-	inst->copy_range = 0xffff;
+	inst->copy_range = NFQNL_MAX_COPY_RANGE;
 	inst->copy_mode = NFQNL_COPY_NONE;
 	spin_lock_init(&inst->lock);
 	INIT_LIST_HEAD(&inst->queue_list);
@@ -272,12 +280,17 @@ nfqnl_zcopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen)
 	skb_shinfo(to)->nr_frags = j;
 }
 
-static int nfqnl_put_packet_info(struct sk_buff *nlskb, struct sk_buff *packet)
+static int
+nfqnl_put_packet_info(struct sk_buff *nlskb, struct sk_buff *packet,
+		      bool csum_verify)
 {
 	__u32 flags = 0;
 
 	if (packet->ip_summed == CHECKSUM_PARTIAL)
 		flags = NFQA_SKB_CSUMNOTREADY;
+	else if (csum_verify)
+		flags = NFQA_SKB_CSUM_NOTVERIFIED;
+
 	if (skb_is_gso(packet))
 		flags |= NFQA_SKB_GSO;
 
@@ -302,6 +315,7 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue,
 	struct net_device *outdev;
 	struct nf_conn *ct = NULL;
 	enum ip_conntrack_info uninitialized_var(ctinfo);
+	bool csum_verify;
 
 	size = nlmsg_total_size(sizeof(struct nfgenmsg))
 		+ nla_total_size(sizeof(struct nfqnl_msg_packet_hdr))
@@ -319,6 +333,12 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue,
 	if (entskb->tstamp.tv64)
 		size += nla_total_size(sizeof(struct nfqnl_msg_packet_timestamp));
 
+	if (entry->hook <= NF_INET_FORWARD ||
+	   (entry->hook == NF_INET_POST_ROUTING && entskb->sk == NULL))
+		csum_verify = !skb_csum_unnecessary(entskb);
+	else
+		csum_verify = false;
+
 	outdev = entry->outdev;
 
 	switch ((enum nfqnl_config_mode)ACCESS_ONCE(queue->copy_mode)) {
@@ -333,10 +353,9 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue,
 		return NULL;
 
 		data_len = ACCESS_ONCE(queue->copy_range);
-		if (data_len == 0 || data_len > entskb->len)
+		if (data_len > entskb->len)
 			data_len = entskb->len;
 
-
 		if (!entskb->head_frag ||
 		    skb_headlen(entskb) < L1_CACHE_BYTES ||
 		    skb_shinfo(entskb)->nr_frags >= MAX_SKB_FRAGS)
@@ -465,10 +484,11 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue,
 	if (ct && nfqnl_ct_put(skb, ct, ctinfo) < 0)
 		goto nla_put_failure;
 
-	if (cap_len > 0 && nla_put_be32(skb, NFQA_CAP_LEN, htonl(cap_len)))
+	if (cap_len > data_len &&
+	    nla_put_be32(skb, NFQA_CAP_LEN, htonl(cap_len)))
 		goto nla_put_failure;
 
-	if (nfqnl_put_packet_info(skb, entskb))
+	if (nfqnl_put_packet_info(skb, entskb, csum_verify))
 		goto nla_put_failure;
 
 	if (data_len) {
@@ -509,10 +529,6 @@ __nfqnl_enqueue_packet(struct net *net, struct nfqnl_instance *queue,
 	}
 	spin_lock_bh(&queue->lock);
 
-	if (!queue->peer_portid) {
-		err = -EINVAL;
-		goto err_out_free_nskb;
-	}
 	if (queue->queue_total >= queue->queue_maxlen) {
 		if (queue->flags & NFQA_CFG_F_FAIL_OPEN) {
 			failopen = 1;
@@ -731,13 +747,8 @@ nfqnl_set_mode(struct nfqnl_instance *queue,
 
 	case NFQNL_COPY_PACKET:
 		queue->copy_mode = mode;
-		/* We're using struct nlattr which has 16bit nla_len. Note that
-		 * nla_len includes the header length. Thus, the maximum packet
-		 * length that we support is 65531 bytes. We send truncated
-		 * packets if the specified length is larger than that.
-		 */
-		if (range > 0xffff - NLA_HDRLEN)
-			queue->copy_range = 0xffff - NLA_HDRLEN;
+		if (range == 0 || range > NFQNL_MAX_COPY_RANGE)
+			queue->copy_range = NFQNL_MAX_COPY_RANGE;
 		else
 			queue->copy_range = range;
 		break;
@@ -800,7 +811,7 @@ static int
 nfqnl_rcv_dev_event(struct notifier_block *this,
 		    unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	/* Drop any packets associated with the downed device */
 	if (event == NETDEV_DOWN)
diff --git a/net/netfilter/xt_CT.c b/net/netfilter/xt_CT.c
index a60261cb0e80..da35ac06a975 100644
--- a/net/netfilter/xt_CT.c
+++ b/net/netfilter/xt_CT.c
@@ -26,6 +26,9 @@ static inline int xt_ct_target(struct sk_buff *skb, struct nf_conn *ct)
 	if (skb->nfct != NULL)
 		return XT_CONTINUE;
 
+	/* special case the untracked ct : we want the percpu object */
+	if (!ct)
+		ct = nf_ct_untracked_get();
 	atomic_inc(&ct->ct_general.use);
 	skb->nfct = &ct->ct_general;
 	skb->nfctinfo = IP_CT_NEW;
@@ -186,8 +189,7 @@ static int xt_ct_tg_check(const struct xt_tgchk_param *par,
 	int ret = -EOPNOTSUPP;
 
 	if (info->flags & XT_CT_NOTRACK) {
-		ct = nf_ct_untracked_get();
-		atomic_inc(&ct->ct_general.use);
+		ct = NULL;
 		goto out;
 	}
 
@@ -311,7 +313,7 @@ static void xt_ct_tg_destroy(const struct xt_tgdtor_param *par,
 	struct nf_conn *ct = info->ct;
 	struct nf_conn_help *help;
 
-	if (!nf_ct_is_untracked(ct)) {
+	if (ct && !nf_ct_is_untracked(ct)) {
 		help = nfct_help(ct);
 		if (help)
 			module_put(help->helper->me);
@@ -319,8 +321,8 @@ static void xt_ct_tg_destroy(const struct xt_tgdtor_param *par,
 		nf_ct_l3proto_module_put(par->family);
 
 		xt_ct_destroy_timeout(ct);
+		nf_ct_put(info->ct);
 	}
-	nf_ct_put(info->ct);
 }
 
 static void xt_ct_tg_destroy_v0(const struct xt_tgdtor_param *par)
diff --git a/net/netfilter/xt_TEE.c b/net/netfilter/xt_TEE.c
index bd93e51d30ac..292934d23482 100644
--- a/net/netfilter/xt_TEE.c
+++ b/net/netfilter/xt_TEE.c
@@ -200,7 +200,7 @@ tee_tg6(struct sk_buff *skb, const struct xt_action_param *par)
 static int tee_netdev_event(struct notifier_block *this, unsigned long event,
 			    void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct xt_tee_priv *priv;
 
 	priv = container_of(this, struct xt_tee_priv, notifier);
diff --git a/net/netfilter/xt_rateest.c b/net/netfilter/xt_rateest.c
index ed0db15ab00e..7720b036d76a 100644
--- a/net/netfilter/xt_rateest.c
+++ b/net/netfilter/xt_rateest.c
@@ -18,7 +18,7 @@ static bool
 xt_rateest_mt(const struct sk_buff *skb, struct xt_action_param *par)
 {
 	const struct xt_rateest_match_info *info = par->matchinfo;
-	struct gnet_stats_rate_est *r;
+	struct gnet_stats_rate_est64 *r;
 	u_int32_t bps1, bps2, pps1, pps2;
 	bool ret = true;
 
diff --git a/net/netfilter/xt_socket.c b/net/netfilter/xt_socket.c
index 63b2bdb59e95..f8b71911037a 100644
--- a/net/netfilter/xt_socket.c
+++ b/net/netfilter/xt_socket.c
@@ -107,7 +107,7 @@ socket_match(const struct sk_buff *skb, struct xt_action_param *par,
 {
 	const struct iphdr *iph = ip_hdr(skb);
 	struct udphdr _hdr, *hp = NULL;
-	struct sock *sk;
+	struct sock *sk = skb->sk;
 	__be32 uninitialized_var(daddr), uninitialized_var(saddr);
 	__be16 uninitialized_var(dport), uninitialized_var(sport);
 	u8 uninitialized_var(protocol);
@@ -155,14 +155,19 @@ socket_match(const struct sk_buff *skb, struct xt_action_param *par,
 	}
 #endif
 
-	sk = nf_tproxy_get_sock_v4(dev_net(skb->dev), protocol,
-				   saddr, daddr, sport, dport, par->in, NFT_LOOKUP_ANY);
-	if (sk != NULL) {
+	if (!sk)
+		sk = nf_tproxy_get_sock_v4(dev_net(skb->dev), protocol,
+					   saddr, daddr, sport, dport,
+					   par->in, NFT_LOOKUP_ANY);
+	if (sk) {
 		bool wildcard;
 		bool transparent = true;
 
-		/* Ignore sockets listening on INADDR_ANY */
-		wildcard = (sk->sk_state != TCP_TIME_WAIT &&
+		/* Ignore sockets listening on INADDR_ANY,
+		 * unless XT_SOCKET_NOWILDCARD is set
+		 */
+		wildcard = (!(info->flags & XT_SOCKET_NOWILDCARD) &&
+			    sk->sk_state != TCP_TIME_WAIT &&
 			    inet_sk(sk)->inet_rcv_saddr == 0);
 
 		/* Ignore non-transparent sockets,
@@ -173,7 +178,8 @@ socket_match(const struct sk_buff *skb, struct xt_action_param *par,
 		    (sk->sk_state == TCP_TIME_WAIT &&
 		     inet_twsk(sk)->tw_transparent));
 
-		xt_socket_put_sk(sk);
+		if (sk != skb->sk)
+			xt_socket_put_sk(sk);
 
 		if (wildcard || !transparent)
 			sk = NULL;
@@ -194,7 +200,7 @@ socket_mt4_v0(const struct sk_buff *skb, struct xt_action_param *par)
 }
 
 static bool
-socket_mt4_v1(const struct sk_buff *skb, struct xt_action_param *par)
+socket_mt4_v1_v2(const struct sk_buff *skb, struct xt_action_param *par)
 {
 	return socket_match(skb, par, par->matchinfo);
 }
@@ -256,11 +262,11 @@ extract_icmp6_fields(const struct sk_buff *skb,
 }
 
 static bool
-socket_mt6_v1(const struct sk_buff *skb, struct xt_action_param *par)
+socket_mt6_v1_v2(const struct sk_buff *skb, struct xt_action_param *par)
 {
 	struct ipv6hdr *iph = ipv6_hdr(skb);
 	struct udphdr _hdr, *hp = NULL;
-	struct sock *sk;
+	struct sock *sk = skb->sk;
 	struct in6_addr *daddr = NULL, *saddr = NULL;
 	__be16 uninitialized_var(dport), uninitialized_var(sport);
 	int thoff = 0, uninitialized_var(tproto);
@@ -291,14 +297,19 @@ socket_mt6_v1(const struct sk_buff *skb, struct xt_action_param *par)
 		return false;
 	}
 
-	sk = nf_tproxy_get_sock_v6(dev_net(skb->dev), tproto,
-				   saddr, daddr, sport, dport, par->in, NFT_LOOKUP_ANY);
-	if (sk != NULL) {
+	if (!sk)
+		sk = nf_tproxy_get_sock_v6(dev_net(skb->dev), tproto,
+					   saddr, daddr, sport, dport,
+					   par->in, NFT_LOOKUP_ANY);
+	if (sk) {
 		bool wildcard;
 		bool transparent = true;
 
-		/* Ignore sockets listening on INADDR_ANY */
-		wildcard = (sk->sk_state != TCP_TIME_WAIT &&
+		/* Ignore sockets listening on INADDR_ANY
+		 * unless XT_SOCKET_NOWILDCARD is set
+		 */
+		wildcard = (!(info->flags & XT_SOCKET_NOWILDCARD) &&
+			    sk->sk_state != TCP_TIME_WAIT &&
 			    ipv6_addr_any(&inet6_sk(sk)->rcv_saddr));
 
 		/* Ignore non-transparent sockets,
@@ -309,7 +320,8 @@ socket_mt6_v1(const struct sk_buff *skb, struct xt_action_param *par)
 		    (sk->sk_state == TCP_TIME_WAIT &&
 		     inet_twsk(sk)->tw_transparent));
 
-		xt_socket_put_sk(sk);
+		if (sk != skb->sk)
+			xt_socket_put_sk(sk);
 
 		if (wildcard || !transparent)
 			sk = NULL;
@@ -325,6 +337,28 @@ socket_mt6_v1(const struct sk_buff *skb, struct xt_action_param *par)
 }
 #endif
 
+static int socket_mt_v1_check(const struct xt_mtchk_param *par)
+{
+	const struct xt_socket_mtinfo1 *info = (struct xt_socket_mtinfo1 *) par->matchinfo;
+
+	if (info->flags & ~XT_SOCKET_FLAGS_V1) {
+		pr_info("unknown flags 0x%x\n", info->flags & ~XT_SOCKET_FLAGS_V1);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int socket_mt_v2_check(const struct xt_mtchk_param *par)
+{
+	const struct xt_socket_mtinfo2 *info = (struct xt_socket_mtinfo2 *) par->matchinfo;
+
+	if (info->flags & ~XT_SOCKET_FLAGS_V2) {
+		pr_info("unknown flags 0x%x\n", info->flags & ~XT_SOCKET_FLAGS_V2);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static struct xt_match socket_mt_reg[] __read_mostly = {
 	{
 		.name = "socket",
@@ -339,7 +373,8 @@ static struct xt_match socket_mt_reg[] __read_mostly = {
 		.name = "socket",
 		.revision = 1,
 		.family = NFPROTO_IPV4,
-		.match = socket_mt4_v1,
+		.match = socket_mt4_v1_v2,
+		.checkentry = socket_mt_v1_check,
 		.matchsize = sizeof(struct xt_socket_mtinfo1),
 		.hooks = (1 << NF_INET_PRE_ROUTING) |
 			 (1 << NF_INET_LOCAL_IN),
@@ -350,7 +385,32 @@ static struct xt_match socket_mt_reg[] __read_mostly = {
 		.name = "socket",
 		.revision = 1,
 		.family = NFPROTO_IPV6,
-		.match = socket_mt6_v1,
+		.match = socket_mt6_v1_v2,
+		.checkentry = socket_mt_v1_check,
+		.matchsize = sizeof(struct xt_socket_mtinfo1),
+		.hooks = (1 << NF_INET_PRE_ROUTING) |
+			 (1 << NF_INET_LOCAL_IN),
+		.me = THIS_MODULE,
+	},
+#endif
+	{
+		.name = "socket",
+		.revision = 2,
+		.family = NFPROTO_IPV4,
+		.match = socket_mt4_v1_v2,
+		.checkentry = socket_mt_v2_check,
+		.matchsize = sizeof(struct xt_socket_mtinfo1),
+		.hooks = (1 << NF_INET_PRE_ROUTING) |
+			 (1 << NF_INET_LOCAL_IN),
+		.me = THIS_MODULE,
+	},
+#ifdef XT_SOCKET_HAVE_IPV6
+	{
+		.name = "socket",
+		.revision = 2,
+		.family = NFPROTO_IPV6,
+		.match = socket_mt6_v1_v2,
+		.checkentry = socket_mt_v2_check,
 		.matchsize = sizeof(struct xt_socket_mtinfo1),
 		.hooks = (1 << NF_INET_PRE_ROUTING) |
 			 (1 << NF_INET_LOCAL_IN),
diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c
index 8a6c6ea466d8..af3531926ee0 100644
--- a/net/netlabel/netlabel_unlabeled.c
+++ b/net/netlabel/netlabel_unlabeled.c
@@ -708,7 +708,7 @@ unlhsh_remove_return:
  * netlbl_unlhsh_netdev_handler - Network device notification handler
  * @this: notifier block
  * @event: the event
- * @ptr: the network device (cast to void)
+ * @ptr: the netdevice notifier info (cast to void)
  *
  * Description:
  * Handle network device events, although at present all we care about is a
@@ -717,10 +717,9 @@ unlhsh_remove_return:
717 * 717 *
718 */ 718 */
719static int netlbl_unlhsh_netdev_handler(struct notifier_block *this, 719static int netlbl_unlhsh_netdev_handler(struct notifier_block *this,
720 unsigned long event, 720 unsigned long event, void *ptr)
721 void *ptr)
722{ 721{
723 struct net_device *dev = ptr; 722 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
724 struct netlbl_unlhsh_iface *iface = NULL; 723 struct netlbl_unlhsh_iface *iface = NULL;
725 724
726 if (!net_eq(dev_net(dev), &init_net)) 725 if (!net_eq(dev_net(dev), &init_net))
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index 57ee84d21470..0c61b59175dc 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -57,6 +57,7 @@
 #include <linux/audit.h>
 #include <linux/mutex.h>
 #include <linux/vmalloc.h>
+#include <linux/if_arp.h>
 #include <asm/cacheflush.h>
 
 #include <net/net_namespace.h>
@@ -101,6 +102,9 @@ static atomic_t nl_table_users = ATOMIC_INIT(0);
 
 static ATOMIC_NOTIFIER_HEAD(netlink_chain);
 
+static DEFINE_SPINLOCK(netlink_tap_lock);
+static struct list_head netlink_tap_all __read_mostly;
+
 static inline u32 netlink_group_mask(u32 group)
 {
 	return group ? 1 << (group - 1) : 0;
@@ -111,6 +115,100 @@ static inline struct hlist_head *nl_portid_hashfn(struct nl_portid_hash *hash, u
 	return &hash->table[jhash_1word(portid, hash->rnd) & hash->mask];
 }
 
+int netlink_add_tap(struct netlink_tap *nt)
+{
+	if (unlikely(nt->dev->type != ARPHRD_NETLINK))
+		return -EINVAL;
+
+	spin_lock(&netlink_tap_lock);
+	list_add_rcu(&nt->list, &netlink_tap_all);
+	spin_unlock(&netlink_tap_lock);
+
+	if (nt->module)
+		__module_get(nt->module);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(netlink_add_tap);
+
+int __netlink_remove_tap(struct netlink_tap *nt)
+{
+	bool found = false;
+	struct netlink_tap *tmp;
+
+	spin_lock(&netlink_tap_lock);
+
+	list_for_each_entry(tmp, &netlink_tap_all, list) {
+		if (nt == tmp) {
+			list_del_rcu(&nt->list);
+			found = true;
+			goto out;
+		}
+	}
+
+	pr_warn("__netlink_remove_tap: %p not found\n", nt);
+out:
+	spin_unlock(&netlink_tap_lock);
+
+	if (found && nt->module)
+		module_put(nt->module);
+
+	return found ? 0 : -ENODEV;
+}
+EXPORT_SYMBOL_GPL(__netlink_remove_tap);
+
+int netlink_remove_tap(struct netlink_tap *nt)
+{
+	int ret;
+
+	ret = __netlink_remove_tap(nt);
+	synchronize_net();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(netlink_remove_tap);
+
+static int __netlink_deliver_tap_skb(struct sk_buff *skb,
+				     struct net_device *dev)
+{
+	struct sk_buff *nskb;
+	int ret = -ENOMEM;
+
+	dev_hold(dev);
+	nskb = skb_clone(skb, GFP_ATOMIC);
+	if (nskb) {
+		nskb->dev = dev;
+		ret = dev_queue_xmit(nskb);
+		if (unlikely(ret > 0))
+			ret = net_xmit_errno(ret);
+	}
+
+	dev_put(dev);
+	return ret;
+}
+
+static void __netlink_deliver_tap(struct sk_buff *skb)
+{
+	int ret;
+	struct netlink_tap *tmp;
+
+	list_for_each_entry_rcu(tmp, &netlink_tap_all, list) {
+		ret = __netlink_deliver_tap_skb(skb, tmp->dev);
+		if (unlikely(ret))
+			break;
+	}
+}
+
+static void netlink_deliver_tap(struct sk_buff *skb)
+{
+	rcu_read_lock();
+
+	if (unlikely(!list_empty(&netlink_tap_all)))
+		__netlink_deliver_tap(skb);
+
+	rcu_read_unlock();
+}
+
 static void netlink_overrun(struct sock *sk)
 {
 	struct netlink_sock *nlk = nlk_sk(sk);
@@ -750,6 +848,13 @@ static void netlink_skb_destructor(struct sk_buff *skb)
 		skb->head = NULL;
 	}
 #endif
+	if (is_vmalloc_addr(skb->head)) {
+		if (!skb->cloned ||
+		    !atomic_dec_return(&(skb_shinfo(skb)->dataref)))
+			vfree(skb->head);
+
+		skb->head = NULL;
+	}
 	if (skb->sk != NULL)
 		sock_rfree(skb);
 }
@@ -854,16 +959,23 @@ netlink_unlock_table(void)
 	wake_up(&nl_table_wait);
 }
 
+static bool netlink_compare(struct net *net, struct sock *sk)
+{
+	return net_eq(sock_net(sk), net);
+}
+
 static struct sock *netlink_lookup(struct net *net, int protocol, u32 portid)
 {
-	struct nl_portid_hash *hash = &nl_table[protocol].hash;
+	struct netlink_table *table = &nl_table[protocol];
+	struct nl_portid_hash *hash = &table->hash;
 	struct hlist_head *head;
 	struct sock *sk;
 
 	read_lock(&nl_table_lock);
 	head = nl_portid_hashfn(hash, portid);
 	sk_for_each(sk, head) {
-		if (net_eq(sock_net(sk), net) && (nlk_sk(sk)->portid == portid)) {
+		if (table->compare(net, sk) &&
+		    (nlk_sk(sk)->portid == portid)) {
 			sock_hold(sk);
 			goto found;
 		}
@@ -976,7 +1088,8 @@ netlink_update_listeners(struct sock *sk)
 
 static int netlink_insert(struct sock *sk, struct net *net, u32 portid)
 {
-	struct nl_portid_hash *hash = &nl_table[sk->sk_protocol].hash;
+	struct netlink_table *table = &nl_table[sk->sk_protocol];
+	struct nl_portid_hash *hash = &table->hash;
 	struct hlist_head *head;
 	int err = -EADDRINUSE;
 	struct sock *osk;
@@ -986,7 +1099,8 @@ static int netlink_insert(struct sock *sk, struct net *net, u32 portid)
 	head = nl_portid_hashfn(hash, portid);
 	len = 0;
 	sk_for_each(osk, head) {
-		if (net_eq(sock_net(osk), net) && (nlk_sk(osk)->portid == portid))
+		if (table->compare(net, osk) &&
+		    (nlk_sk(osk)->portid == portid))
 			break;
 		len++;
 	}
@@ -1183,7 +1297,8 @@ static int netlink_autobind(struct socket *sock)
 {
 	struct sock *sk = sock->sk;
 	struct net *net = sock_net(sk);
-	struct nl_portid_hash *hash = &nl_table[sk->sk_protocol].hash;
+	struct netlink_table *table = &nl_table[sk->sk_protocol];
+	struct nl_portid_hash *hash = &table->hash;
 	struct hlist_head *head;
 	struct sock *osk;
 	s32 portid = task_tgid_vnr(current);
@@ -1195,7 +1310,7 @@ retry:
 	netlink_table_grab();
 	head = nl_portid_hashfn(hash, portid);
 	sk_for_each(osk, head) {
-		if (!net_eq(sock_net(osk), net))
+		if (!table->compare(net, osk))
 			continue;
 		if (nlk_sk(osk)->portid == portid) {
 			/* Bind collision, search negative portid values. */
@@ -1420,6 +1535,33 @@ struct sock *netlink_getsockbyfilp(struct file *filp)
 	return sock;
 }
 
+static struct sk_buff *netlink_alloc_large_skb(unsigned int size,
+					       int broadcast)
+{
+	struct sk_buff *skb;
+	void *data;
+
+	if (size <= NLMSG_GOODSIZE || broadcast)
+		return alloc_skb(size, GFP_KERNEL);
+
+	size = SKB_DATA_ALIGN(size) +
+	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+	data = vmalloc(size);
+	if (data == NULL)
+		return NULL;
+
+	skb = build_skb(data, size);
+	if (skb == NULL)
+		vfree(data);
+	else {
+		skb->head_frag = 0;
+		skb->destructor = netlink_skb_destructor;
+	}
+
+	return skb;
+}
+
 /*
  * Attach a skb to a netlink socket.
  * The caller must hold a reference to the destination socket. On error, the
@@ -1475,6 +1617,8 @@ static int __netlink_sendskb(struct sock *sk, struct sk_buff *skb)
 {
 	int len = skb->len;
 
+	netlink_deliver_tap(skb);
+
 #ifdef CONFIG_NETLINK_MMAP
 	if (netlink_skb_is_mmaped(skb))
 		netlink_queue_mmaped_skb(sk, skb);
@@ -1510,7 +1654,7 @@ static struct sk_buff *netlink_trim(struct sk_buff *skb, gfp_t allocation)
 		return skb;
 
 	delta = skb->end - skb->tail;
-	if (delta * 2 < skb->truesize)
+	if (is_vmalloc_addr(skb->head) || delta * 2 < skb->truesize)
 		return skb;
 
 	if (skb_shared(skb)) {
@@ -1535,6 +1679,11 @@ static int netlink_unicast_kernel(struct sock *sk, struct sk_buff *skb,
 
 	ret = -ECONNREFUSED;
 	if (nlk->netlink_rcv != NULL) {
+		/* We could do a netlink_deliver_tap(skb) here as well
+		 * but since this is intended for the kernel only, we
+		 * should rather let it stay under the hood.
+		 */
+
 		ret = skb->len;
 		netlink_skb_set_owner_r(skb, sk);
 		NETLINK_CB(skb).sk = ssk;
@@ -2096,7 +2245,7 @@ static int netlink_sendmsg(struct kiocb *kiocb, struct socket *sock,
 	if (len > sk->sk_sndbuf - 32)
 		goto out;
 	err = -ENOBUFS;
-	skb = alloc_skb(len, GFP_KERNEL);
+	skb = netlink_alloc_large_skb(len, dst_group);
 	if (skb == NULL)
 		goto out;
 
@@ -2285,6 +2434,8 @@ __netlink_kernel_create(struct net *net, int unit, struct module *module,
 		if (cfg) {
 			nl_table[unit].bind = cfg->bind;
 			nl_table[unit].flags = cfg->flags;
+			if (cfg->compare)
+				nl_table[unit].compare = cfg->compare;
 		}
 		nl_table[unit].registered = 1;
 	} else {
@@ -2707,6 +2858,7 @@ static void *netlink_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 {
 	struct sock *s;
 	struct nl_seq_iter *iter;
+	struct net *net;
 	int i, j;
 
 	++*pos;
@@ -2714,11 +2866,12 @@ static void *netlink_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 	if (v == SEQ_START_TOKEN)
 		return netlink_seq_socket_idx(seq, 0);
 
+	net = seq_file_net(seq);
 	iter = seq->private;
 	s = v;
 	do {
 		s = sk_next(s);
-	} while (s && sock_net(s) != seq_file_net(seq));
+	} while (s && !nl_table[s->sk_protocol].compare(net, s));
 	if (s)
 		return s;
 
@@ -2730,7 +2883,8 @@ static void *netlink_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 
 	for (; j <= hash->mask; j++) {
 		s = sk_head(&hash->table[j]);
-		while (s && sock_net(s) != seq_file_net(seq))
+
+		while (s && !nl_table[s->sk_protocol].compare(net, s))
 			s = sk_next(s);
 		if (s) {
 			iter->link = i;
@@ -2923,8 +3077,12 @@ static int __init netlink_proto_init(void)
 		hash->shift = 0;
 		hash->mask = 0;
 		hash->rehash_time = jiffies;
+
+		nl_table[i].compare = netlink_compare;
 	}
 
+	INIT_LIST_HEAD(&netlink_tap_all);
+
 	netlink_add_usersock_entry();
 
 	sock_register(&netlink_family_ops);
diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h
index ed8522265f4e..eaa88d187cdc 100644
--- a/net/netlink/af_netlink.h
+++ b/net/netlink/af_netlink.h
@@ -73,6 +73,7 @@ struct netlink_table {
 	struct mutex *cb_mutex;
 	struct module *module;
 	void (*bind)(int group);
+	bool (*compare)(struct net *net, struct sock *sock);
 	int registered;
 };
 
diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
index ec0c80fde69f..698814bfa7ad 100644
--- a/net/netrom/af_netrom.c
+++ b/net/netrom/af_netrom.c
@@ -117,7 +117,7 @@ static void nr_kill_by_device(struct net_device *dev)
  */
 static int nr_device_event(struct notifier_block *this, unsigned long event, void *ptr)
 {
-	struct net_device *dev = (struct net_device *)ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
diff --git a/net/netrom/sysctl_net_netrom.c b/net/netrom/sysctl_net_netrom.c
index 42f630b9a698..ba1c368b3f18 100644
--- a/net/netrom/sysctl_net_netrom.c
+++ b/net/netrom/sysctl_net_netrom.c
@@ -34,7 +34,7 @@ static int min_reset[] = {0}, max_reset[] = {1};
 
 static struct ctl_table_header *nr_table_header;
 
-static ctl_table nr_table[] = {
+static struct ctl_table nr_table[] = {
 	{
 		.procname = "default_path_quality",
 		.data = &sysctl_netrom_default_path_quality,
diff --git a/net/nfc/core.c b/net/nfc/core.c
index 40d2527693da..dc96a83aa6ab 100644
--- a/net/nfc/core.c
+++ b/net/nfc/core.c
@@ -44,6 +44,47 @@ DEFINE_MUTEX(nfc_devlist_mutex);
 /* NFC device ID bitmap */
 static DEFINE_IDA(nfc_index_ida);
 
+int nfc_fw_upload(struct nfc_dev *dev, const char *firmware_name)
+{
+	int rc = 0;
+
+	pr_debug("%s do firmware %s\n", dev_name(&dev->dev), firmware_name);
+
+	device_lock(&dev->dev);
+
+	if (!device_is_registered(&dev->dev)) {
+		rc = -ENODEV;
+		goto error;
+	}
+
+	if (dev->dev_up) {
+		rc = -EBUSY;
+		goto error;
+	}
+
+	if (!dev->ops->fw_upload) {
+		rc = -EOPNOTSUPP;
+		goto error;
+	}
+
+	dev->fw_upload_in_progress = true;
+	rc = dev->ops->fw_upload(dev, firmware_name);
+	if (rc)
+		dev->fw_upload_in_progress = false;
+
+error:
+	device_unlock(&dev->dev);
+	return rc;
+}
+
+int nfc_fw_upload_done(struct nfc_dev *dev, const char *firmware_name)
+{
+	dev->fw_upload_in_progress = false;
+
+	return nfc_genl_fw_upload_done(dev, firmware_name);
+}
+EXPORT_SYMBOL(nfc_fw_upload_done);
+
 /**
  * nfc_dev_up - turn on the NFC device
  *
@@ -69,6 +110,11 @@ int nfc_dev_up(struct nfc_dev *dev)
 		goto error;
 	}
 
+	if (dev->fw_upload_in_progress) {
+		rc = -EBUSY;
+		goto error;
+	}
+
 	if (dev->dev_up) {
 		rc = -EALREADY;
 		goto error;
@@ -80,6 +126,13 @@ int nfc_dev_up(struct nfc_dev *dev)
 	if (!rc)
 		dev->dev_up = true;
 
+	/* We have to enable the device before discovering SEs */
+	if (dev->ops->discover_se) {
+		rc = dev->ops->discover_se(dev);
+		if (!rc)
+			pr_warn("SE discovery failed\n");
+	}
+
 error:
 	device_unlock(&dev->dev);
 	return rc;
@@ -475,6 +528,108 @@ error:
 	return rc;
 }
 
+static struct nfc_se *find_se(struct nfc_dev *dev, u32 se_idx)
+{
+	struct nfc_se *se, *n;
+
+	list_for_each_entry_safe(se, n, &dev->secure_elements, list)
+		if (se->idx == se_idx)
+			return se;
+
+	return NULL;
+}
+
+int nfc_enable_se(struct nfc_dev *dev, u32 se_idx)
+{
+
+	struct nfc_se *se;
+	int rc;
+
+	pr_debug("%s se index %d\n", dev_name(&dev->dev), se_idx);
+
+	device_lock(&dev->dev);
+
+	if (!device_is_registered(&dev->dev)) {
+		rc = -ENODEV;
+		goto error;
+	}
+
+	if (!dev->dev_up) {
+		rc = -ENODEV;
+		goto error;
+	}
+
+	if (dev->polling) {
+		rc = -EBUSY;
+		goto error;
+	}
+
+	if (!dev->ops->enable_se || !dev->ops->disable_se) {
+		rc = -EOPNOTSUPP;
+		goto error;
+	}
+
+	se = find_se(dev, se_idx);
+	if (!se) {
+		rc = -EINVAL;
+		goto error;
+	}
+
+	if (se->type == NFC_SE_ENABLED) {
+		rc = -EALREADY;
+		goto error;
+	}
+
+	rc = dev->ops->enable_se(dev, se_idx);
+
+error:
+	device_unlock(&dev->dev);
+	return rc;
+}
+
+int nfc_disable_se(struct nfc_dev *dev, u32 se_idx)
+{
+
+	struct nfc_se *se;
+	int rc;
+
+	pr_debug("%s se index %d\n", dev_name(&dev->dev), se_idx);
+
+	device_lock(&dev->dev);
+
+	if (!device_is_registered(&dev->dev)) {
+		rc = -ENODEV;
+		goto error;
+	}
+
+	if (!dev->dev_up) {
+		rc = -ENODEV;
+		goto error;
+	}
+
+	if (!dev->ops->enable_se || !dev->ops->disable_se) {
+		rc = -EOPNOTSUPP;
+		goto error;
+	}
+
+	se = find_se(dev, se_idx);
+	if (!se) {
+		rc = -EINVAL;
+		goto error;
+	}
+
+	if (se->type == NFC_SE_DISABLED) {
+		rc = -EALREADY;
+		goto error;
+	}
+
+	rc = dev->ops->disable_se(dev, se_idx);
+
+error:
+	device_unlock(&dev->dev);
+	return rc;
+}
+
 int nfc_set_remote_general_bytes(struct nfc_dev *dev, u8 *gb, u8 gb_len)
 {
 	pr_debug("dev_name=%s gb_len=%d\n", dev_name(&dev->dev), gb_len);
@@ -707,14 +862,79 @@ inline void nfc_driver_failure(struct nfc_dev *dev, int err)
 }
 EXPORT_SYMBOL(nfc_driver_failure);
 
+int nfc_add_se(struct nfc_dev *dev, u32 se_idx, u16 type)
+{
+	struct nfc_se *se;
+	int rc;
+
+	pr_debug("%s se index %d\n", dev_name(&dev->dev), se_idx);
+
+	se = find_se(dev, se_idx);
+	if (se)
+		return -EALREADY;
+
+	se = kzalloc(sizeof(struct nfc_se), GFP_KERNEL);
+	if (!se)
+		return -ENOMEM;
+
+	se->idx = se_idx;
+	se->type = type;
+	se->state = NFC_SE_DISABLED;
+	INIT_LIST_HEAD(&se->list);
+
+	list_add(&se->list, &dev->secure_elements);
+
+	rc = nfc_genl_se_added(dev, se_idx, type);
+	if (rc < 0) {
+		list_del(&se->list);
+		kfree(se);
+
+		return rc;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(nfc_add_se);
+
+int nfc_remove_se(struct nfc_dev *dev, u32 se_idx)
+{
+	struct nfc_se *se, *n;
+	int rc;
+
+	pr_debug("%s se index %d\n", dev_name(&dev->dev), se_idx);
+
+	list_for_each_entry_safe(se, n, &dev->secure_elements, list)
+		if (se->idx == se_idx) {
+			rc = nfc_genl_se_removed(dev, se_idx);
+			if (rc < 0)
+				return rc;
+
+			list_del(&se->list);
+			kfree(se);
+
+			return 0;
+		}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL(nfc_remove_se);
+
 static void nfc_release(struct device *d)
 {
 	struct nfc_dev *dev = to_nfc_dev(d);
+	struct nfc_se *se, *n;
 
 	pr_debug("dev_name=%s\n", dev_name(&dev->dev));
 
 	nfc_genl_data_exit(&dev->genl_data);
 	kfree(dev->targets);
+
+	list_for_each_entry_safe(se, n, &dev->secure_elements, list) {
+		nfc_genl_se_removed(dev, se->idx);
+		list_del(&se->list);
+		kfree(se);
+	}
+
 	kfree(dev);
 }
 
@@ -786,7 +1006,6 @@ struct nfc_dev *nfc_get_device(unsigned int idx)
  */
 struct nfc_dev *nfc_allocate_device(struct nfc_ops *ops,
 				    u32 supported_protocols,
-				    u32 supported_se,
 				    int tx_headroom, int tx_tailroom)
 {
 	struct nfc_dev *dev;
@@ -804,10 +1023,9 @@ struct nfc_dev *nfc_allocate_device(struct nfc_ops *ops,
 
 	dev->ops = ops;
 	dev->supported_protocols = supported_protocols;
-	dev->supported_se = supported_se;
-	dev->active_se = NFC_SE_NONE;
 	dev->tx_headroom = tx_headroom;
 	dev->tx_tailroom = tx_tailroom;
+	INIT_LIST_HEAD(&dev->secure_elements);
 
 	nfc_genl_data_init(&dev->genl_data);
 
diff --git a/net/nfc/hci/core.c b/net/nfc/hci/core.c
index 91020b210d87..7b1c186736eb 100644
--- a/net/nfc/hci/core.c
+++ b/net/nfc/hci/core.c
@@ -570,21 +570,21 @@ static int hci_dep_link_up(struct nfc_dev *nfc_dev, struct nfc_target *target,
 {
 	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
 
-	if (hdev->ops->dep_link_up)
-		return hdev->ops->dep_link_up(hdev, target, comm_mode,
-					      gb, gb_len);
+	if (!hdev->ops->dep_link_up)
+		return 0;
 
-	return 0;
+	return hdev->ops->dep_link_up(hdev, target, comm_mode,
+				      gb, gb_len);
 }
 
 static int hci_dep_link_down(struct nfc_dev *nfc_dev)
 {
 	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
 
-	if (hdev->ops->dep_link_down)
-		return hdev->ops->dep_link_down(hdev);
+	if (!hdev->ops->dep_link_down)
+		return 0;
 
-	return 0;
+	return hdev->ops->dep_link_down(hdev);
 }
 
 static int hci_activate_target(struct nfc_dev *nfc_dev,
589 589
590static int hci_activate_target(struct nfc_dev *nfc_dev, 590static int hci_activate_target(struct nfc_dev *nfc_dev,
@@ -673,12 +673,12 @@ static int hci_tm_send(struct nfc_dev *nfc_dev, struct sk_buff *skb)
673{ 673{
674 struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev); 674 struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
675 675
676 if (hdev->ops->tm_send) 676 if (!hdev->ops->tm_send) {
677 return hdev->ops->tm_send(hdev, skb); 677 kfree_skb(skb);
678 678 return -ENOTSUPP;
679 kfree_skb(skb); 679 }
680 680
681 return -ENOTSUPP; 681 return hdev->ops->tm_send(hdev, skb);
682} 682}
683 683
684static int hci_check_presence(struct nfc_dev *nfc_dev, 684static int hci_check_presence(struct nfc_dev *nfc_dev,
@@ -686,8 +686,38 @@ static int hci_check_presence(struct nfc_dev *nfc_dev,
 {
 	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
 
-	if (hdev->ops->check_presence)
-		return hdev->ops->check_presence(hdev, target);
+	if (!hdev->ops->check_presence)
+		return 0;
+
+	return hdev->ops->check_presence(hdev, target);
+}
+
+static int hci_discover_se(struct nfc_dev *nfc_dev)
+{
+	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
+
+	if (hdev->ops->discover_se)
+		return hdev->ops->discover_se(hdev);
+
+	return 0;
+}
+
+static int hci_enable_se(struct nfc_dev *nfc_dev, u32 se_idx)
+{
+	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
+
+	if (hdev->ops->enable_se)
+		return hdev->ops->enable_se(hdev, se_idx);
+
+	return 0;
+}
+
+static int hci_disable_se(struct nfc_dev *nfc_dev, u32 se_idx)
+{
+	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
+
+	if (hdev->ops->disable_se)
+		return hdev->ops->disable_se(hdev, se_idx);
 
 	return 0;
 }
@@ -779,6 +809,16 @@ static void nfc_hci_recv_from_llc(struct nfc_hci_dev *hdev, struct sk_buff *skb)
 	}
 }
 
+static int hci_fw_upload(struct nfc_dev *nfc_dev, const char *firmware_name)
+{
+	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
+
+	if (!hdev->ops->fw_upload)
+		return -ENOTSUPP;
+
+	return hdev->ops->fw_upload(hdev, firmware_name);
+}
+
 static struct nfc_ops hci_nfc_ops = {
 	.dev_up = hci_dev_up,
 	.dev_down = hci_dev_down,
@@ -791,13 +831,16 @@ static struct nfc_ops hci_nfc_ops = {
 	.im_transceive = hci_transceive,
 	.tm_send = hci_tm_send,
 	.check_presence = hci_check_presence,
+	.fw_upload = hci_fw_upload,
+	.discover_se = hci_discover_se,
+	.enable_se = hci_enable_se,
+	.disable_se = hci_disable_se,
 };
 
 struct nfc_hci_dev *nfc_hci_allocate_device(struct nfc_hci_ops *ops,
 					    struct nfc_hci_init_data *init_data,
 					    unsigned long quirks,
 					    u32 protocols,
-					    u32 supported_se,
 					    const char *llc_name,
 					    int tx_headroom,
 					    int tx_tailroom,
@@ -823,7 +866,7 @@ struct nfc_hci_dev *nfc_hci_allocate_device(struct nfc_hci_ops *ops,
 		return NULL;
 	}
 
-	hdev->ndev = nfc_allocate_device(&hci_nfc_ops, protocols, supported_se,
+	hdev->ndev = nfc_allocate_device(&hci_nfc_ops, protocols,
 					 tx_headroom + HCI_CMDS_HEADROOM,
 					 tx_tailroom);
 	if (!hdev->ndev) {
diff --git a/net/nfc/llcp.h b/net/nfc/llcp.h
index ff8c434f7df8..f4d48b57ea11 100644
--- a/net/nfc/llcp.h
+++ b/net/nfc/llcp.h
@@ -19,6 +19,8 @@
 
 enum llcp_state {
 	LLCP_CONNECTED = 1, /* wait_for_packet() wants that */
+	LLCP_CONNECTING,
+	LLCP_DISCONNECTING,
 	LLCP_CLOSED,
 	LLCP_BOUND,
 	LLCP_LISTEN,
@@ -246,7 +248,6 @@ struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, char *uri,
 void nfc_llcp_free_sdp_tlv(struct nfc_llcp_sdp_tlv *sdp);
 void nfc_llcp_free_sdp_tlv_list(struct hlist_head *sdp_head);
 void nfc_llcp_recv(void *data, struct sk_buff *skb, int err);
-int nfc_llcp_disconnect(struct nfc_llcp_sock *sock);
 int nfc_llcp_send_symm(struct nfc_dev *dev);
 int nfc_llcp_send_connect(struct nfc_llcp_sock *sock);
 int nfc_llcp_send_cc(struct nfc_llcp_sock *sock);
diff --git a/net/nfc/llcp_commands.c b/net/nfc/llcp_commands.c
index c1b23eef83ca..1017894807c0 100644
--- a/net/nfc/llcp_commands.c
+++ b/net/nfc/llcp_commands.c
@@ -339,7 +339,7 @@ static struct sk_buff *llcp_allocate_pdu(struct nfc_llcp_sock *sock,
 	return skb;
 }
 
-int nfc_llcp_disconnect(struct nfc_llcp_sock *sock)
+int nfc_llcp_send_disconnect(struct nfc_llcp_sock *sock)
 {
 	struct sk_buff *skb;
 	struct nfc_dev *dev;
@@ -630,26 +630,6 @@ int nfc_llcp_send_dm(struct nfc_llcp_local *local, u8 ssap, u8 dsap, u8 reason)
 	return 0;
 }
 
-int nfc_llcp_send_disconnect(struct nfc_llcp_sock *sock)
-{
-	struct sk_buff *skb;
-	struct nfc_llcp_local *local;
-
-	pr_debug("Send DISC\n");
-
-	local = sock->local;
-	if (local == NULL)
-		return -ENODEV;
-
-	skb = llcp_allocate_pdu(sock, LLCP_PDU_DISC, 0);
-	if (skb == NULL)
-		return -ENOMEM;
-
-	skb_queue_head(&local->tx_queue, skb);
-
-	return 0;
-}
-
 int nfc_llcp_send_i_frame(struct nfc_llcp_sock *sock,
 			  struct msghdr *msg, size_t len)
 {
diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
index 158bdbf668cc..81cd3416c7d4 100644
--- a/net/nfc/llcp_core.c
+++ b/net/nfc/llcp_core.c
@@ -537,6 +537,7 @@ static int nfc_llcp_build_gb(struct nfc_llcp_local *local)
 	u8 *lto_tlv, lto_length;
 	u8 *wks_tlv, wks_length;
 	u8 *miux_tlv, miux_length;
+	__be16 wks = cpu_to_be16(local->local_wks);
 	u8 gb_len = 0;
 	int ret = 0;
 
@@ -549,8 +550,7 @@ static int nfc_llcp_build_gb(struct nfc_llcp_local *local)
 	gb_len += lto_length;
 
 	pr_debug("Local wks 0x%lx\n", local->local_wks);
-	wks_tlv = nfc_llcp_build_tlv(LLCP_TLV_WKS, (u8 *)&local->local_wks, 2,
-				     &wks_length);
+	wks_tlv = nfc_llcp_build_tlv(LLCP_TLV_WKS, (u8 *)&wks, 2, &wks_length);
 	gb_len += wks_length;
 
 	miux_tlv = nfc_llcp_build_tlv(LLCP_TLV_MIUX, (u8 *)&local->miux, 0,
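The two WKS hunks above exist because `local->local_wks` is an `unsigned long`: taking its first two bytes yields LSB-first data on little-endian hosts, while the LLCP WKS TLV carries the well-known-service bitmap MSB first. A userspace sketch of the conversion — `to_be16` stands in for the kernel's `cpu_to_be16()`, and `wks_tlv_value` only models the 2-byte TLV value, not the full TLV builder:

```c
#include <stdint.h>
#include <string.h>

/* Userspace stand-in for cpu_to_be16(): byte-swap only on little-endian. */
static uint16_t to_be16(uint16_t v)
{
	const union { uint16_t u; uint8_t b[2]; } probe = { .u = 1 };

	return probe.b[0] ? (uint16_t)((v << 8) | (v >> 8)) : v;
}

/* Produce the 2-byte WKS TLV value: service bitmap, MSB first.
 * Bit 0 is LLC Link Management, matching local->local_wks = 0x1. */
static void wks_tlv_value(uint16_t wks_bitmap, uint8_t out[2])
{
	uint16_t be = to_be16(wks_bitmap);

	memcpy(out, &be, 2);
}
```

Converting into a local `__be16` before handing the pointer to the TLV builder is what makes the emitted bytes host-endianness independent.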
@@ -719,6 +719,10 @@ static void nfc_llcp_tx_work(struct work_struct *work)
 		llcp_sock = nfc_llcp_sock(sk);
 
 		if (llcp_sock == NULL && nfc_llcp_ptype(skb) == LLCP_PDU_I) {
+			kfree_skb(skb);
+			nfc_llcp_send_symm(local->dev);
+		} else if (llcp_sock && !llcp_sock->remote_ready) {
+			skb_queue_head(&local->tx_queue, skb);
 			nfc_llcp_send_symm(local->dev);
 		} else {
 			struct sk_buff *copy_skb = NULL;
@@ -730,6 +734,13 @@ static void nfc_llcp_tx_work(struct work_struct *work)
 				       DUMP_PREFIX_OFFSET, 16, 1,
 				       skb->data, skb->len, true);
 
+			if (ptype == LLCP_PDU_DISC && sk != NULL &&
+			    sk->sk_state == LLCP_DISCONNECTING) {
+				nfc_llcp_sock_unlink(&local->sockets, sk);
+				sock_orphan(sk);
+				sock_put(sk);
+			}
+
 			if (ptype == LLCP_PDU_I)
 				copy_skb = skb_copy(skb, GFP_ATOMIC);
 
@@ -1579,6 +1590,7 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
 	local->lto = 150; /* 1500 ms */
 	local->rw = LLCP_MAX_RW;
 	local->miux = cpu_to_be16(LLCP_MAX_MIUX);
+	local->local_wks = 0x1; /* LLC Link Management */
 
 	nfc_llcp_build_gb(local);
 
diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
index 380253eccb74..d308402b67d8 100644
--- a/net/nfc/llcp_sock.c
+++ b/net/nfc/llcp_sock.c
@@ -571,7 +571,7 @@ static unsigned int llcp_sock_poll(struct file *file, struct socket *sock,
 	if (sk->sk_shutdown == SHUTDOWN_MASK)
 		mask |= POLLHUP;
 
-	if (sock_writeable(sk))
+	if (sock_writeable(sk) && sk->sk_state == LLCP_CONNECTED)
 		mask |= POLLOUT | POLLWRNORM | POLLWRBAND;
 	else
 		set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);
@@ -603,7 +603,7 @@ static int llcp_sock_release(struct socket *sock)
 
 	/* Send a DISC */
 	if (sk->sk_state == LLCP_CONNECTED)
-		nfc_llcp_disconnect(llcp_sock);
+		nfc_llcp_send_disconnect(llcp_sock);
 
 	if (sk->sk_state == LLCP_LISTEN) {
 		struct nfc_llcp_sock *lsk, *n;
@@ -614,7 +614,7 @@ static int llcp_sock_release(struct socket *sock)
 			accept_sk = &lsk->sk;
 			lock_sock(accept_sk);
 
-			nfc_llcp_disconnect(lsk);
+			nfc_llcp_send_disconnect(lsk);
 			nfc_llcp_accept_unlink(accept_sk);
 
 			release_sock(accept_sk);
@@ -626,6 +626,13 @@ static int llcp_sock_release(struct socket *sock)
 
 	release_sock(sk);
 
+	/* Keep this sock alive and therefore do not remove it from the sockets
+	 * list until the DISC PDU has been actually sent. Otherwise we would
+	 * reply with DM PDUs before sending the DISC one.
+	 */
+	if (sk->sk_state == LLCP_DISCONNECTING)
+		return err;
+
 	if (sock->type == SOCK_RAW)
 		nfc_llcp_sock_unlink(&local->raw_sockets, sk);
 	else
@@ -722,14 +729,16 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
 	if (ret)
 		goto sock_unlink;
 
+	sk->sk_state = LLCP_CONNECTING;
+
 	ret = sock_wait_state(sk, LLCP_CONNECTED,
 			      sock_sndtimeo(sk, flags & O_NONBLOCK));
-	if (ret)
+	if (ret && ret != -EINPROGRESS)
 		goto sock_unlink;
 
 	release_sock(sk);
 
-	return 0;
+	return ret;
 
 sock_unlink:
 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
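The connect() hunk above introduces an explicit LLCP_CONNECTING state and lets a nonblocking connect report -EINPROGRESS without unwinding the socket. A small sketch of the resulting return contract — the `wait_state` helper is an illustrative stand-in for `sock_wait_state()`, assumed here to return -EINPROGRESS when a nonblocking wait has not reached the target state:

```c
#include <errno.h>

enum llcp_state { LLCP_CLOSED, LLCP_CONNECTING, LLCP_CONNECTED };

/* Stand-in for sock_wait_state(): a nonblocking caller gives up at once
 * with -EINPROGRESS when the target state was not reached. */
static int wait_state(enum llcp_state reached, int nonblock)
{
	if (reached == LLCP_CONNECTED)
		return 0;

	return nonblock ? -EINPROGRESS : -ETIMEDOUT;
}

/* Mirrors the patched llcp_sock_connect() tail: only a real failure takes
 * the sock_unlink path; 0 and -EINPROGRESS are both handed to the caller. */
static int connect_result(enum llcp_state reached, int nonblock)
{
	int ret = wait_state(reached, nonblock);

	if (ret && ret != -EINPROGRESS)
		return ret;	/* would goto sock_unlink in the kernel */

	return ret;		/* socket kept alive, connect still pending */
}
```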
diff --git a/net/nfc/nci/Kconfig b/net/nfc/nci/Kconfig
index 6d69b5f0f19b..2a2416080b4f 100644
--- a/net/nfc/nci/Kconfig
+++ b/net/nfc/nci/Kconfig
@@ -8,3 +8,13 @@ config NFC_NCI
 
 	  Say Y here to compile NCI support into the kernel or say M to
 	  compile it as module (nci).
+
+config NFC_NCI_SPI
+	depends on NFC_NCI && SPI
+	bool "NCI over SPI protocol support"
+	default n
+	help
+	  NCI (NFC Controller Interface) is a communication protocol between
+	  an NFC Controller (NFCC) and a Device Host (DH).
+
+	  Say yes if you use an NCI driver that requires SPI link layer.
diff --git a/net/nfc/nci/Makefile b/net/nfc/nci/Makefile
index cdb3a2e44471..7aeedc43187d 100644
--- a/net/nfc/nci/Makefile
+++ b/net/nfc/nci/Makefile
@@ -4,4 +4,6 @@
 
 obj-$(CONFIG_NFC_NCI) += nci.o
 
-nci-objs := core.o data.o lib.o ntf.o rsp.o
\ No newline at end of file
+nci-objs := core.o data.o lib.o ntf.o rsp.o
+
+nci-$(CONFIG_NFC_NCI_SPI) += spi.o
diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
index 48ada0ec749e..b943d46a1644 100644
--- a/net/nfc/nci/core.c
+++ b/net/nfc/nci/core.c
@@ -636,6 +636,21 @@ static int nci_transceive(struct nfc_dev *nfc_dev, struct nfc_target *target,
 	return rc;
 }
 
+static int nci_enable_se(struct nfc_dev *nfc_dev, u32 se_idx)
+{
+	return 0;
+}
+
+static int nci_disable_se(struct nfc_dev *nfc_dev, u32 se_idx)
+{
+	return 0;
+}
+
+static int nci_discover_se(struct nfc_dev *nfc_dev)
+{
+	return 0;
+}
+
 static struct nfc_ops nci_nfc_ops = {
 	.dev_up = nci_dev_up,
 	.dev_down = nci_dev_down,
@@ -646,6 +661,9 @@ static struct nfc_ops nci_nfc_ops = {
 	.activate_target = nci_activate_target,
 	.deactivate_target = nci_deactivate_target,
 	.im_transceive = nci_transceive,
+	.enable_se = nci_enable_se,
+	.disable_se = nci_disable_se,
+	.discover_se = nci_discover_se,
 };
 
 /* ---- Interface to NCI drivers ---- */
@@ -658,7 +676,6 @@ static struct nfc_ops nci_nfc_ops = {
  */
 struct nci_dev *nci_allocate_device(struct nci_ops *ops,
 				    __u32 supported_protocols,
-				    __u32 supported_se,
 				    int tx_headroom, int tx_tailroom)
 {
 	struct nci_dev *ndev;
@@ -681,7 +698,6 @@ struct nci_dev *nci_allocate_device(struct nci_ops *ops,
 
 	ndev->nfc_dev = nfc_allocate_device(&nci_nfc_ops,
 					    supported_protocols,
-					    supported_se,
 					    tx_headroom + NCI_DATA_HDR_SIZE,
 					    tx_tailroom);
 	if (!ndev->nfc_dev)
@@ -797,12 +813,11 @@ EXPORT_SYMBOL(nci_unregister_device);
 /**
  * nci_recv_frame - receive frame from NCI drivers
  *
+ * @ndev: The nci device
  * @skb: The sk_buff to receive
  */
-int nci_recv_frame(struct sk_buff *skb)
+int nci_recv_frame(struct nci_dev *ndev, struct sk_buff *skb)
 {
-	struct nci_dev *ndev = (struct nci_dev *) skb->dev;
-
 	pr_debug("len %d\n", skb->len);
 
 	if (!ndev || (!test_bit(NCI_UP, &ndev->flags) &&
@@ -819,10 +834,8 @@ int nci_recv_frame(struct sk_buff *skb)
 }
 EXPORT_SYMBOL(nci_recv_frame);
 
-static int nci_send_frame(struct sk_buff *skb)
+static int nci_send_frame(struct nci_dev *ndev, struct sk_buff *skb)
 {
-	struct nci_dev *ndev = (struct nci_dev *) skb->dev;
-
 	pr_debug("len %d\n", skb->len);
 
 	if (!ndev) {
@@ -833,7 +846,7 @@ static int nci_send_frame(struct sk_buff *skb)
 	/* Get rid of skb owner, prior to sending to the driver. */
 	skb_orphan(skb);
 
-	return ndev->ops->send(skb);
+	return ndev->ops->send(ndev, skb);
 }
 
 /* Send NCI command */
@@ -861,8 +874,6 @@ int nci_send_cmd(struct nci_dev *ndev, __u16 opcode, __u8 plen, void *payload)
 	if (plen)
 		memcpy(skb_put(skb, plen), payload, plen);
 
-	skb->dev = (void *) ndev;
-
 	skb_queue_tail(&ndev->cmd_q, skb);
 	queue_work(ndev->cmd_wq, &ndev->cmd_work);
 
@@ -894,7 +905,7 @@ static void nci_tx_work(struct work_struct *work)
 			 nci_conn_id(skb->data),
 			 nci_plen(skb->data));
 
-		nci_send_frame(skb);
+		nci_send_frame(ndev, skb);
 
 		mod_timer(&ndev->data_timer,
 			  jiffies + msecs_to_jiffies(NCI_DATA_TIMEOUT));
@@ -963,7 +974,7 @@ static void nci_cmd_work(struct work_struct *work)
 			 nci_opcode_oid(nci_opcode(skb->data)),
 			 nci_plen(skb->data));
 
-		nci_send_frame(skb);
+		nci_send_frame(ndev, skb);
 
 		mod_timer(&ndev->cmd_timer,
 			  jiffies + msecs_to_jiffies(NCI_CMD_TIMEOUT));
diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c
index 76c48c5324f8..2a9399dd6c68 100644
--- a/net/nfc/nci/data.c
+++ b/net/nfc/nci/data.c
@@ -80,8 +80,6 @@ static inline void nci_push_data_hdr(struct nci_dev *ndev,
 
 	nci_mt_set((__u8 *)hdr, NCI_MT_DATA_PKT);
 	nci_pbf_set((__u8 *)hdr, pbf);
-
-	skb->dev = (void *) ndev;
 }
 
 static int nci_queue_tx_data_frags(struct nci_dev *ndev,
diff --git a/net/nfc/nci/spi.c b/net/nfc/nci/spi.c
new file mode 100644
index 000000000000..c7cf37ba7298
--- /dev/null
+++ b/net/nfc/nci/spi.c
@@ -0,0 +1,378 @@
+/*
+ * Copyright (C) 2013 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#define pr_fmt(fmt) "nci_spi: %s: " fmt, __func__
+
+#include <linux/export.h>
+#include <linux/spi/spi.h>
+#include <linux/crc-ccitt.h>
+#include <linux/nfc.h>
+#include <net/nfc/nci_core.h>
+
+#define NCI_SPI_HDR_LEN			4
+#define NCI_SPI_CRC_LEN			2
+#define NCI_SPI_ACK_SHIFT		6
+#define NCI_SPI_MSB_PAYLOAD_MASK	0x3F
+
+#define NCI_SPI_SEND_TIMEOUT	(NCI_CMD_TIMEOUT > NCI_DATA_TIMEOUT ? \
+				 NCI_CMD_TIMEOUT : NCI_DATA_TIMEOUT)
+
+#define NCI_SPI_DIRECT_WRITE	0x01
+#define NCI_SPI_DIRECT_READ	0x02
+
+#define ACKNOWLEDGE_NONE	0
+#define ACKNOWLEDGE_ACK		1
+#define ACKNOWLEDGE_NACK	2
+
+#define CRC_INIT		0xFFFF
+
+static int nci_spi_open(struct nci_dev *nci_dev)
+{
+	struct nci_spi_dev *ndev = nci_get_drvdata(nci_dev);
+
+	return ndev->ops->open(ndev);
+}
+
+static int nci_spi_close(struct nci_dev *nci_dev)
+{
+	struct nci_spi_dev *ndev = nci_get_drvdata(nci_dev);
+
+	return ndev->ops->close(ndev);
+}
+
+static int __nci_spi_send(struct nci_spi_dev *ndev, struct sk_buff *skb)
+{
+	struct spi_message m;
+	struct spi_transfer t;
+
+	t.tx_buf = skb->data;
+	t.len = skb->len;
+	t.cs_change = 0;
+	t.delay_usecs = ndev->xfer_udelay;
+
+	spi_message_init(&m);
+	spi_message_add_tail(&t, &m);
+
+	return spi_sync(ndev->spi, &m);
+}
+
+static int nci_spi_send(struct nci_dev *nci_dev, struct sk_buff *skb)
+{
+	struct nci_spi_dev *ndev = nci_get_drvdata(nci_dev);
+	unsigned int payload_len = skb->len;
+	unsigned char *hdr;
+	int ret;
+	long completion_rc;
+
+	ndev->ops->deassert_int(ndev);
+
+	/* add the NCI SPI header to the start of the buffer */
+	hdr = skb_push(skb, NCI_SPI_HDR_LEN);
+	hdr[0] = NCI_SPI_DIRECT_WRITE;
+	hdr[1] = ndev->acknowledge_mode;
+	hdr[2] = payload_len >> 8;
+	hdr[3] = payload_len & 0xFF;
+
+	if (ndev->acknowledge_mode == NCI_SPI_CRC_ENABLED) {
+		u16 crc;
+
+		crc = crc_ccitt(CRC_INIT, skb->data, skb->len);
+		*skb_put(skb, 1) = crc >> 8;
+		*skb_put(skb, 1) = crc & 0xFF;
+	}
+
+	ret = __nci_spi_send(ndev, skb);
+
+	kfree_skb(skb);
+	ndev->ops->assert_int(ndev);
+
+	if (ret != 0 || ndev->acknowledge_mode == NCI_SPI_CRC_DISABLED)
+		goto done;
+
+	init_completion(&ndev->req_completion);
+	completion_rc =
+		wait_for_completion_interruptible_timeout(&ndev->req_completion,
+							  NCI_SPI_SEND_TIMEOUT);
+
+	if (completion_rc <= 0 || ndev->req_result == ACKNOWLEDGE_NACK)
+		ret = -EIO;
+
+done:
+	return ret;
+}
+
+static struct nci_ops nci_spi_ops = {
+	.open = nci_spi_open,
+	.close = nci_spi_close,
+	.send = nci_spi_send,
+};
+
+/* ---- Interface to NCI SPI drivers ---- */
+
+/**
+ * nci_spi_allocate_device - allocate a new nci spi device
+ *
+ * @spi: SPI device
+ * @ops: device operations
+ * @supported_protocols: NFC protocols supported by the device
+ * @supported_se: NFC Secure Elements supported by the device
+ * @acknowledge_mode: Acknowledge mode used by the device
+ * @delay: delay between transactions in us
+ */
+struct nci_spi_dev *nci_spi_allocate_device(struct spi_device *spi,
+					    struct nci_spi_ops *ops,
+					    u32 supported_protocols,
+					    u32 supported_se,
+					    u8 acknowledge_mode,
+					    unsigned int delay)
+{
+	struct nci_spi_dev *ndev;
+	int tailroom = 0;
+
+	if (!ops->open || !ops->close || !ops->assert_int || !ops->deassert_int)
+		return NULL;
+
+	if (!supported_protocols)
+		return NULL;
+
+	ndev = devm_kzalloc(&spi->dev, sizeof(struct nci_spi_dev), GFP_KERNEL);
+	if (!ndev)
+		return NULL;
+
+	ndev->ops = ops;
+	ndev->acknowledge_mode = acknowledge_mode;
+	ndev->xfer_udelay = delay;
+
+	if (acknowledge_mode == NCI_SPI_CRC_ENABLED)
+		tailroom += NCI_SPI_CRC_LEN;
+
+	ndev->nci_dev = nci_allocate_device(&nci_spi_ops, supported_protocols,
+					    NCI_SPI_HDR_LEN, tailroom);
+	if (!ndev->nci_dev)
+		return NULL;
+
+	nci_set_drvdata(ndev->nci_dev, ndev);
+
+	return ndev;
+}
+EXPORT_SYMBOL_GPL(nci_spi_allocate_device);
+
+/**
+ * nci_spi_free_device - deallocate nci spi device
+ *
+ * @ndev: The nci spi device to deallocate
+ */
+void nci_spi_free_device(struct nci_spi_dev *ndev)
+{
+	nci_free_device(ndev->nci_dev);
+}
+EXPORT_SYMBOL_GPL(nci_spi_free_device);
+
+/**
+ * nci_spi_register_device - register a nci spi device in the nfc subsystem
+ *
+ * @pdev: The nci spi device to register
+ */
+int nci_spi_register_device(struct nci_spi_dev *ndev)
+{
+	return nci_register_device(ndev->nci_dev);
+}
+EXPORT_SYMBOL_GPL(nci_spi_register_device);
+
+/**
+ * nci_spi_unregister_device - unregister a nci spi device in the nfc subsystem
+ *
+ * @dev: The nci spi device to unregister
+ */
+void nci_spi_unregister_device(struct nci_spi_dev *ndev)
+{
+	nci_unregister_device(ndev->nci_dev);
+}
+EXPORT_SYMBOL_GPL(nci_spi_unregister_device);
+
+static int send_acknowledge(struct nci_spi_dev *ndev, u8 acknowledge)
+{
+	struct sk_buff *skb;
+	unsigned char *hdr;
+	u16 crc;
+	int ret;
+
+	skb = nci_skb_alloc(ndev->nci_dev, 0, GFP_KERNEL);
+
+	/* add the NCI SPI header to the start of the buffer */
+	hdr = skb_push(skb, NCI_SPI_HDR_LEN);
+	hdr[0] = NCI_SPI_DIRECT_WRITE;
+	hdr[1] = NCI_SPI_CRC_ENABLED;
+	hdr[2] = acknowledge << NCI_SPI_ACK_SHIFT;
+	hdr[3] = 0;
+
+	crc = crc_ccitt(CRC_INIT, skb->data, skb->len);
+	*skb_put(skb, 1) = crc >> 8;
+	*skb_put(skb, 1) = crc & 0xFF;
+
+	ret = __nci_spi_send(ndev, skb);
+
+	kfree_skb(skb);
+
+	return ret;
+}
+
+static struct sk_buff *__nci_spi_recv_frame(struct nci_spi_dev *ndev)
+{
+	struct sk_buff *skb;
+	struct spi_message m;
+	unsigned char req[2], resp_hdr[2];
+	struct spi_transfer tx, rx;
+	unsigned short rx_len = 0;
+	int ret;
+
+	spi_message_init(&m);
+	req[0] = NCI_SPI_DIRECT_READ;
+	req[1] = ndev->acknowledge_mode;
+	tx.tx_buf = req;
+	tx.len = 2;
+	tx.cs_change = 0;
+	spi_message_add_tail(&tx, &m);
+	rx.rx_buf = resp_hdr;
+	rx.len = 2;
+	rx.cs_change = 1;
+	spi_message_add_tail(&rx, &m);
+	ret = spi_sync(ndev->spi, &m);
+
+	if (ret)
+		return NULL;
+
+	if (ndev->acknowledge_mode == NCI_SPI_CRC_ENABLED)
+		rx_len = ((resp_hdr[0] & NCI_SPI_MSB_PAYLOAD_MASK) << 8) +
+				resp_hdr[1] + NCI_SPI_CRC_LEN;
+	else
+		rx_len = (resp_hdr[0] << 8) | resp_hdr[1];
+
+	skb = nci_skb_alloc(ndev->nci_dev, rx_len, GFP_KERNEL);
+	if (!skb)
+		return NULL;
+
+	spi_message_init(&m);
+	rx.rx_buf = skb_put(skb, rx_len);
+	rx.len = rx_len;
+	rx.cs_change = 0;
+	rx.delay_usecs = ndev->xfer_udelay;
+	spi_message_add_tail(&rx, &m);
+	ret = spi_sync(ndev->spi, &m);
+
+	if (ret)
+		goto receive_error;
+
+	if (ndev->acknowledge_mode == NCI_SPI_CRC_ENABLED) {
+		*skb_push(skb, 1) = resp_hdr[1];
+		*skb_push(skb, 1) = resp_hdr[0];
+	}
+
+	return skb;
+
+receive_error:
+	kfree_skb(skb);
+
+	return NULL;
+}
+
+static int nci_spi_check_crc(struct sk_buff *skb)
+{
+	u16 crc_data = (skb->data[skb->len - 2] << 8) |
+			skb->data[skb->len - 1];
+	int ret;
+
+	ret = (crc_ccitt(CRC_INIT, skb->data, skb->len - NCI_SPI_CRC_LEN)
+			== crc_data);
+
+	skb_trim(skb, skb->len - NCI_SPI_CRC_LEN);
+
+	return ret;
+}
+
+static u8 nci_spi_get_ack(struct sk_buff *skb)
+{
+	u8 ret;
+
+	ret = skb->data[0] >> NCI_SPI_ACK_SHIFT;
+
+	/* Remove NFCC part of the header: ACK, NACK and MSB payload len */
+	skb_pull(skb, 2);
+
+	return ret;
+}
+
+/**
+ * nci_spi_recv_frame - receive frame from NCI SPI drivers
+ *
+ * @ndev: The nci spi device
+ * Context: can sleep
+ *
+ * This call may only be used from a context that may sleep. The sleep
+ * is non-interruptible, and has no timeout.
+ *
+ * It returns zero on success, else a negative error code.
+ */
+int nci_spi_recv_frame(struct nci_spi_dev *ndev)
+{
+	struct sk_buff *skb;
+	int ret = 0;
+
+	ndev->ops->deassert_int(ndev);
+
+	/* Retrieve frame from SPI */
+	skb = __nci_spi_recv_frame(ndev);
+	if (!skb) {
+		ret = -EIO;
+		goto done;
+	}
+
+	if (ndev->acknowledge_mode == NCI_SPI_CRC_ENABLED) {
+		if (!nci_spi_check_crc(skb)) {
+			send_acknowledge(ndev, ACKNOWLEDGE_NACK);
+			goto done;
+		}
+
+		/* In case of acknowledged mode: if ACK or NACK received,
+		 * unblock completion of latest frame sent.
+		 */
+		ndev->req_result = nci_spi_get_ack(skb);
+		if (ndev->req_result)
+			complete(&ndev->req_completion);
+	}
+
+	/* If there is no payload (ACK/NACK only frame),
+	 * free the socket buffer
+	 */
+	if (skb->len == 0) {
+		kfree_skb(skb);
+		goto done;
+	}
+
+	if (ndev->acknowledge_mode == NCI_SPI_CRC_ENABLED)
+		send_acknowledge(ndev, ACKNOWLEDGE_ACK);
+
+	/* Forward skb to NCI core layer */
+	ret = nci_recv_frame(ndev->nci_dev, skb);
+
+done:
+	ndev->ops->assert_int(ndev);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(nci_spi_recv_frame);
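The write path in spi.c frames every NCI packet as a 4-byte header (direct-write opcode, acknowledge mode, 16-bit payload length MSB first), the payload, and an optional CRC-CCITT trailer, also MSB first. Below is a userspace sketch of that framing plus the `nci_spi_check_crc()`/`nci_spi_get_ack()` logic. `crc_ccitt_buf` re-implements the kernel's `crc_ccitt` bit by bit (reflected polynomial 0x8408, caller-supplied init), and the value assumed for NCI_SPI_CRC_ENABLED is a guess — the constant is defined in a header outside this diff:

```c
#include <stdint.h>
#include <string.h>

#define NCI_SPI_DIRECT_WRITE	0x01
#define NCI_SPI_CRC_ENABLED	0x01	/* assumed; defined outside this diff */
#define NCI_SPI_HDR_LEN		4
#define NCI_SPI_CRC_LEN		2
#define NCI_SPI_ACK_SHIFT	6
#define CRC_INIT		0xFFFF

/* Bitwise equivalent of the kernel's crc_ccitt() (reflected poly 0x8408). */
static uint16_t crc_ccitt_buf(uint16_t crc, const uint8_t *buf, size_t len)
{
	while (len--) {
		crc ^= *buf++;
		for (int i = 0; i < 8; i++)
			crc = (crc & 1) ? (crc >> 1) ^ 0x8408 : crc >> 1;
	}
	return crc;
}

/* Mirrors the framing in nci_spi_send(): header, payload, CRC MSB first.
 * Returns the total frame length written into out. */
static size_t build_write_frame(const uint8_t *payload, size_t plen,
				uint8_t *out)
{
	uint16_t crc;

	out[0] = NCI_SPI_DIRECT_WRITE;
	out[1] = NCI_SPI_CRC_ENABLED;
	out[2] = (uint8_t)(plen >> 8);
	out[3] = (uint8_t)(plen & 0xFF);
	memcpy(out + NCI_SPI_HDR_LEN, payload, plen);

	crc = crc_ccitt_buf(CRC_INIT, out, NCI_SPI_HDR_LEN + plen);
	out[NCI_SPI_HDR_LEN + plen] = (uint8_t)(crc >> 8);
	out[NCI_SPI_HDR_LEN + plen + 1] = (uint8_t)(crc & 0xFF);
	return NCI_SPI_HDR_LEN + plen + NCI_SPI_CRC_LEN;
}

/* Mirrors nci_spi_check_crc(): recompute over everything but the trailer. */
static int frame_crc_ok(const uint8_t *frame, size_t len)
{
	uint16_t expect = (uint16_t)((frame[len - 2] << 8) | frame[len - 1]);

	return crc_ccitt_buf(CRC_INIT, frame, len - NCI_SPI_CRC_LEN) == expect;
}

/* Mirrors nci_spi_get_ack(): ACK/NACK sits in the top two bits of byte 0. */
static uint8_t frame_get_ack(const uint8_t *frame)
{
	return frame[0] >> NCI_SPI_ACK_SHIFT;
}
```

A corrupted byte anywhere in the header or payload makes the recomputed CRC disagree with the trailer, which is what triggers the NACK path in `nci_spi_recv_frame()`.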
diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
index f0c4d61f37c0..b05ad909778f 100644
--- a/net/nfc/netlink.c
+++ b/net/nfc/netlink.c
@@ -56,6 +56,8 @@ static const struct nla_policy nfc_genl_policy[NFC_ATTR_MAX + 1] = {
 	[NFC_ATTR_LLC_PARAM_RW] = { .type = NLA_U8 },
 	[NFC_ATTR_LLC_PARAM_MIUX] = { .type = NLA_U16 },
 	[NFC_ATTR_LLC_SDP] = { .type = NLA_NESTED },
+	[NFC_ATTR_FIRMWARE_NAME] = { .type = NLA_STRING,
+				     .len = NFC_FIRMWARE_NAME_MAXSIZE },
 };
 
 static const struct nla_policy nfc_sdp_genl_policy[NFC_SDP_ATTR_MAX + 1] = {
@@ -424,6 +426,69 @@ free_msg:
 	return rc;
 }
 
+int nfc_genl_se_added(struct nfc_dev *dev, u32 se_idx, u16 type)
+{
+	struct sk_buff *msg;
+	void *hdr;
+
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg)
+		return -ENOMEM;
+
+	hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0,
+			  NFC_EVENT_SE_ADDED);
+	if (!hdr)
+		goto free_msg;
+
+	if (nla_put_u32(msg, NFC_ATTR_DEVICE_INDEX, dev->idx) ||
+	    nla_put_u32(msg, NFC_ATTR_SE_INDEX, se_idx) ||
+	    nla_put_u8(msg, NFC_ATTR_SE_TYPE, type))
+		goto nla_put_failure;
+
+	genlmsg_end(msg, hdr);
+
+	genlmsg_multicast(msg, 0, nfc_genl_event_mcgrp.id, GFP_KERNEL);
+
+	return 0;
+
+nla_put_failure:
+	genlmsg_cancel(msg, hdr);
+free_msg:
+	nlmsg_free(msg);
+	return -EMSGSIZE;
+}
+
+int nfc_genl_se_removed(struct nfc_dev *dev, u32 se_idx)
+{
+	struct sk_buff *msg;
+	void *hdr;
+
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg)
+		return -ENOMEM;
+
+	hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0,
+			  NFC_EVENT_SE_REMOVED);
+	if (!hdr)
+		goto free_msg;
+
+	if (nla_put_u32(msg, NFC_ATTR_DEVICE_INDEX, dev->idx) ||
+	    nla_put_u32(msg, NFC_ATTR_SE_INDEX, se_idx))
+		goto nla_put_failure;
+
+	genlmsg_end(msg, hdr);
+
+	genlmsg_multicast(msg, 0, nfc_genl_event_mcgrp.id, GFP_KERNEL);
+
+	return 0;
+
+nla_put_failure:
+	genlmsg_cancel(msg, hdr);
+free_msg:
+	nlmsg_free(msg);
+	return -EMSGSIZE;
+}
+
 static int nfc_genl_send_device(struct sk_buff *msg, struct nfc_dev *dev,
 				u32 portid, u32 seq,
 				struct netlink_callback *cb,
@@ -442,7 +507,6 @@ static int nfc_genl_send_device(struct sk_buff *msg, struct nfc_dev *dev,
 	if (nla_put_string(msg, NFC_ATTR_DEVICE_NAME, nfc_device_name(dev)) ||
 	    nla_put_u32(msg, NFC_ATTR_DEVICE_INDEX, dev->idx) ||
 	    nla_put_u32(msg, NFC_ATTR_PROTOCOLS, dev->supported_protocols) ||
-	    nla_put_u32(msg, NFC_ATTR_SE, dev->supported_se) ||
 	    nla_put_u8(msg, NFC_ATTR_DEVICE_POWERED, dev->dev_up) ||
 	    nla_put_u8(msg, NFC_ATTR_RF_MODE, dev->rf_mode))
 		goto nla_put_failure;
@@ -1025,6 +1089,108 @@ exit:
1025 return rc; 1089 return rc;
1026} 1090}
1027 1091
1092static int nfc_genl_fw_upload(struct sk_buff *skb, struct genl_info *info)
1093{
1094 struct nfc_dev *dev;
1095 int rc;
1096 u32 idx;
1097 char firmware_name[NFC_FIRMWARE_NAME_MAXSIZE + 1];
1098
1099 if (!info->attrs[NFC_ATTR_DEVICE_INDEX])
1100 return -EINVAL;
1101
1102 idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
1103
1104 dev = nfc_get_device(idx);
1105 if (!dev)
1106 return -ENODEV;
1107
1108 nla_strlcpy(firmware_name, info->attrs[NFC_ATTR_FIRMWARE_NAME],
1109 sizeof(firmware_name));
1110
1111 rc = nfc_fw_upload(dev, firmware_name);
1112
1113 nfc_put_device(dev);
1114 return rc;
1115}
1116
1117int nfc_genl_fw_upload_done(struct nfc_dev *dev, const char *firmware_name)
1118{
1119 struct sk_buff *msg;
1120 void *hdr;
1121
1122 msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
1123 if (!msg)
1124 return -ENOMEM;
1125
1126 hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0,
1127 NFC_CMD_FW_UPLOAD);
1128 if (!hdr)
1129 goto free_msg;
1130
1131 if (nla_put_string(msg, NFC_ATTR_FIRMWARE_NAME, firmware_name) ||
1132 nla_put_u32(msg, NFC_ATTR_DEVICE_INDEX, dev->idx))
1133 goto nla_put_failure;
1134
1135 genlmsg_end(msg, hdr);
1136
1137 genlmsg_multicast(msg, 0, nfc_genl_event_mcgrp.id, GFP_KERNEL);
1138
1139 return 0;
1140
1141nla_put_failure:
1142 genlmsg_cancel(msg, hdr);
1143free_msg:
1144 nlmsg_free(msg);
1145 return -EMSGSIZE;
1146}
1147
1148static int nfc_genl_enable_se(struct sk_buff *skb, struct genl_info *info)
1149{
1150 struct nfc_dev *dev;
1151 int rc;
1152 u32 idx, se_idx;
1153
1154 if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
1155 !info->attrs[NFC_ATTR_SE_INDEX])
1156 return -EINVAL;
1157
1158 idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
1159 se_idx = nla_get_u32(info->attrs[NFC_ATTR_SE_INDEX]);
1160
1161 dev = nfc_get_device(idx);
1162 if (!dev)
1163 return -ENODEV;
1164
1165 rc = nfc_enable_se(dev, se_idx);
1166
1167 nfc_put_device(dev);
1168 return rc;
1169}
1170
1171static int nfc_genl_disable_se(struct sk_buff *skb, struct genl_info *info)
1172{
1173 struct nfc_dev *dev;
1174 int rc;
1175 u32 idx, se_idx;
1176
1177 if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
1178 !info->attrs[NFC_ATTR_SE_INDEX])
1179 return -EINVAL;
1180
1181 idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
1182 se_idx = nla_get_u32(info->attrs[NFC_ATTR_SE_INDEX]);
1183
1184 dev = nfc_get_device(idx);
1185 if (!dev)
1186 return -ENODEV;
1187
1188 rc = nfc_disable_se(dev, se_idx);
1189
1190 nfc_put_device(dev);
1191 return rc;
1192}
1193
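The three handlers above (fw_upload, enable_se, disable_se) share one bracket: validate attributes, take a device reference with nfc_get_device(), run the operation, and drop the reference with nfc_put_device() on every path. A small userspace sketch of that get/put discipline with a toy device table (hypothetical types, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy device table standing in for the NFC core's device lookup. */
struct dev { int idx; int refs; int se_enabled; };

static struct dev devices[2] = { { 0, 0, 0 }, { 1, 0, 0 } };

static struct dev *get_device(unsigned int idx)
{
	if (idx >= 2)
		return NULL;		/* nfc_get_device() miss -> -ENODEV */
	devices[idx].refs++;
	return &devices[idx];
}

static void put_device(struct dev *d) { d->refs--; }

/* Mirrors nfc_genl_enable_se(): look up, operate, always drop the ref. */
static int enable_se(unsigned int idx)
{
	struct dev *d = get_device(idx);
	int rc;

	if (!d)
		return -19;		/* -ENODEV */

	d->se_enabled = 1;		/* stands in for nfc_enable_se() */
	rc = 0;

	put_device(d);			/* reference dropped on every path */
	return rc;
}
```

Because the put is unconditional once the get succeeded, the refcount is balanced whether the operation succeeds or fails.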
1028static struct genl_ops nfc_genl_ops[] = { 1194static struct genl_ops nfc_genl_ops[] = {
1029 { 1195 {
1030 .cmd = NFC_CMD_GET_DEVICE, 1196 .cmd = NFC_CMD_GET_DEVICE,
@@ -1084,6 +1250,21 @@ static struct genl_ops nfc_genl_ops[] = {
1084 .doit = nfc_genl_llc_sdreq, 1250 .doit = nfc_genl_llc_sdreq,
1085 .policy = nfc_genl_policy, 1251 .policy = nfc_genl_policy,
1086 }, 1252 },
1253 {
1254 .cmd = NFC_CMD_FW_UPLOAD,
1255 .doit = nfc_genl_fw_upload,
1256 .policy = nfc_genl_policy,
1257 },
1258 {
1259 .cmd = NFC_CMD_ENABLE_SE,
1260 .doit = nfc_genl_enable_se,
1261 .policy = nfc_genl_policy,
1262 },
1263 {
1264 .cmd = NFC_CMD_DISABLE_SE,
1265 .doit = nfc_genl_disable_se,
1266 .policy = nfc_genl_policy,
1267 },
1087}; 1268};
1088 1269
1089 1270
diff --git a/net/nfc/nfc.h b/net/nfc/nfc.h
index afa1f84ba040..ee85a1fc1b24 100644
--- a/net/nfc/nfc.h
+++ b/net/nfc/nfc.h
@@ -94,6 +94,9 @@ int nfc_genl_tm_deactivated(struct nfc_dev *dev);
94 94
95int nfc_genl_llc_send_sdres(struct nfc_dev *dev, struct hlist_head *sdres_list); 95int nfc_genl_llc_send_sdres(struct nfc_dev *dev, struct hlist_head *sdres_list);
96 96
97int nfc_genl_se_added(struct nfc_dev *dev, u32 se_idx, u16 type);
98int nfc_genl_se_removed(struct nfc_dev *dev, u32 se_idx);
99
97struct nfc_dev *nfc_get_device(unsigned int idx); 100struct nfc_dev *nfc_get_device(unsigned int idx);
98 101
99static inline void nfc_put_device(struct nfc_dev *dev) 102static inline void nfc_put_device(struct nfc_dev *dev)
@@ -120,6 +123,11 @@ static inline void nfc_device_iter_exit(struct class_dev_iter *iter)
120 class_dev_iter_exit(iter); 123 class_dev_iter_exit(iter);
121} 124}
122 125
126int nfc_fw_upload(struct nfc_dev *dev, const char *firmware_name);
127int nfc_genl_fw_upload_done(struct nfc_dev *dev, const char *firmware_name);
128
129int nfc_fw_upload_done(struct nfc_dev *dev, const char *firmware_name);
130
123int nfc_dev_up(struct nfc_dev *dev); 131int nfc_dev_up(struct nfc_dev *dev);
124 132
125int nfc_dev_down(struct nfc_dev *dev); 133int nfc_dev_down(struct nfc_dev *dev);
@@ -139,4 +147,7 @@ int nfc_deactivate_target(struct nfc_dev *dev, u32 target_idx);
139int nfc_data_exchange(struct nfc_dev *dev, u32 target_idx, struct sk_buff *skb, 147int nfc_data_exchange(struct nfc_dev *dev, u32 target_idx, struct sk_buff *skb,
140 data_exchange_cb_t cb, void *cb_context); 148 data_exchange_cb_t cb, void *cb_context);
141 149
150int nfc_enable_se(struct nfc_dev *dev, u32 se_idx);
151int nfc_disable_se(struct nfc_dev *dev, u32 se_idx);
152
142#endif /* __LOCAL_NFC_H */ 153#endif /* __LOCAL_NFC_H */
diff --git a/net/openvswitch/Kconfig b/net/openvswitch/Kconfig
index d9ea33c361be..27ee56b688a3 100644
--- a/net/openvswitch/Kconfig
+++ b/net/openvswitch/Kconfig
@@ -26,3 +26,17 @@ config OPENVSWITCH
26 called openvswitch. 26 called openvswitch.
27 27
28 If unsure, say N. 28 If unsure, say N.
29
30config OPENVSWITCH_GRE
31 bool "Open vSwitch GRE tunneling support"
32 depends on INET
33 depends on OPENVSWITCH
34 depends on NET_IPGRE_DEMUX && !(OPENVSWITCH=y && NET_IPGRE_DEMUX=m)
35 default y
36 ---help---
37 If you say Y here, then Open vSwitch will be able to create
38 GRE vports.
39
40 Say N to exclude this support and reduce the binary size.
41
42 If unsure, say Y.
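The `depends on NET_IPGRE_DEMUX && !(OPENVSWITCH=y && NET_IPGRE_DEMUX=m)` line forbids exactly one combination: openvswitch built in (y) while the GRE demux is a module (m), since built-in code cannot call into a module. A sketch of that tristate constraint as a plain predicate (the enum encoding is illustrative, not Kconfig's internal representation):

```c
#include <assert.h>

/* Tristate encoding: n = not built, m = module, y = built in. */
enum tristate { TS_N, TS_M, TS_Y };

/* Evaluates the OPENVSWITCH_GRE "depends on" expression:
 * NET_IPGRE_DEMUX && !(OPENVSWITCH=y && NET_IPGRE_DEMUX=m) */
static int ovs_gre_allowed(enum tristate openvswitch, enum tristate ipgre_demux)
{
	if (ipgre_demux == TS_N)
		return 0;	/* GRE demux must be available at all */
	if (openvswitch == TS_Y && ipgre_demux == TS_M)
		return 0;	/* built-in caller, modular callee: no link */
	return 1;
}
```

All module/module and built-in/built-in combinations pass; only the asymmetric builtin-OVS-on-modular-demux case is rejected.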
diff --git a/net/openvswitch/Makefile b/net/openvswitch/Makefile
index 15e7384745c1..01bddb2991e3 100644
--- a/net/openvswitch/Makefile
+++ b/net/openvswitch/Makefile
@@ -10,5 +10,6 @@ openvswitch-y := \
10 dp_notify.o \ 10 dp_notify.o \
11 flow.o \ 11 flow.o \
12 vport.o \ 12 vport.o \
13 vport-gre.o \
13 vport-internal_dev.o \ 14 vport-internal_dev.o \
14 vport-netdev.o \
15 vport-netdev.o
diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
index 894b6cbdd929..22c5f399f1cf 100644
--- a/net/openvswitch/actions.c
+++ b/net/openvswitch/actions.c
@@ -130,9 +130,13 @@ static int set_eth_addr(struct sk_buff *skb,
130 if (unlikely(err)) 130 if (unlikely(err))
131 return err; 131 return err;
132 132
133 skb_postpull_rcsum(skb, eth_hdr(skb), ETH_ALEN * 2);
134
133 memcpy(eth_hdr(skb)->h_source, eth_key->eth_src, ETH_ALEN); 135 memcpy(eth_hdr(skb)->h_source, eth_key->eth_src, ETH_ALEN);
134 memcpy(eth_hdr(skb)->h_dest, eth_key->eth_dst, ETH_ALEN); 136 memcpy(eth_hdr(skb)->h_dest, eth_key->eth_dst, ETH_ALEN);
135 137
138 ovs_skb_postpush_rcsum(skb, eth_hdr(skb), ETH_ALEN * 2);
139
136 return 0; 140 return 0;
137} 141}
138 142
@@ -432,6 +436,10 @@ static int execute_set_action(struct sk_buff *skb,
432 skb->mark = nla_get_u32(nested_attr); 436 skb->mark = nla_get_u32(nested_attr);
433 break; 437 break;
434 438
439 case OVS_KEY_ATTR_IPV4_TUNNEL:
440 OVS_CB(skb)->tun_key = nla_data(nested_attr);
441 break;
442
435 case OVS_KEY_ATTR_ETHERNET: 443 case OVS_KEY_ATTR_ETHERNET:
436 err = set_eth_addr(skb, nla_data(nested_attr)); 444 err = set_eth_addr(skb, nla_data(nested_attr));
437 break; 445 break;
diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index d12d6b8b5e8b..f7e3a0d84c40 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -362,6 +362,14 @@ static int queue_gso_packets(struct net *net, int dp_ifindex,
362static size_t key_attr_size(void) 362static size_t key_attr_size(void)
363{ 363{
364 return nla_total_size(4) /* OVS_KEY_ATTR_PRIORITY */ 364 return nla_total_size(4) /* OVS_KEY_ATTR_PRIORITY */
365 + nla_total_size(0) /* OVS_KEY_ATTR_TUNNEL */
366 + nla_total_size(8) /* OVS_TUNNEL_KEY_ATTR_ID */
367 + nla_total_size(4) /* OVS_TUNNEL_KEY_ATTR_IPV4_SRC */
368 + nla_total_size(4) /* OVS_TUNNEL_KEY_ATTR_IPV4_DST */
369 + nla_total_size(1) /* OVS_TUNNEL_KEY_ATTR_TOS */
370 + nla_total_size(1) /* OVS_TUNNEL_KEY_ATTR_TTL */
371 + nla_total_size(0) /* OVS_TUNNEL_KEY_ATTR_DONT_FRAGMENT */
372 + nla_total_size(0) /* OVS_TUNNEL_KEY_ATTR_CSUM */
365 + nla_total_size(4) /* OVS_KEY_ATTR_IN_PORT */ 373 + nla_total_size(4) /* OVS_KEY_ATTR_IN_PORT */
366 + nla_total_size(4) /* OVS_KEY_ATTR_SKB_MARK */ 374 + nla_total_size(4) /* OVS_KEY_ATTR_SKB_MARK */
367 + nla_total_size(12) /* OVS_KEY_ATTR_ETHERNET */ 375 + nla_total_size(12) /* OVS_KEY_ATTR_ETHERNET */
@@ -464,16 +472,89 @@ static int flush_flows(struct datapath *dp)
464 return 0; 472 return 0;
465} 473}
466 474
467static int validate_actions(const struct nlattr *attr,
468 const struct sw_flow_key *key, int depth);
475static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa, int attr_len)
476{
477
478 struct sw_flow_actions *acts;
479 int new_acts_size;
480 int req_size = NLA_ALIGN(attr_len);
481 int next_offset = offsetof(struct sw_flow_actions, actions) +
482 (*sfa)->actions_len;
483
484 if (req_size <= (ksize(*sfa) - next_offset))
485 goto out;
486
487 new_acts_size = ksize(*sfa) * 2;
488
489 if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
490 if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size)
491 return ERR_PTR(-EMSGSIZE);
492 new_acts_size = MAX_ACTIONS_BUFSIZE;
493 }
494
495 acts = ovs_flow_actions_alloc(new_acts_size);
496 if (IS_ERR(acts))
497 return (void *)acts;
498
499 memcpy(acts->actions, (*sfa)->actions, (*sfa)->actions_len);
500 acts->actions_len = (*sfa)->actions_len;
501 kfree(*sfa);
502 *sfa = acts;
503
504out:
505 (*sfa)->actions_len += req_size;
506 return (struct nlattr *) ((unsigned char *)(*sfa) + next_offset);
507}
508
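reserve_sfa_size() above grows the action buffer with a simple policy: if the request fits in the slack, keep the buffer; otherwise double it, clamping at MAX_ACTIONS_BUFSIZE, and fail only if even the clamped size cannot hold the request. A userspace sketch of just that size computation (the kernel measures slack with ksize(); here sizes are passed in explicitly):

```c
#include <assert.h>

#define MAX_ACTIONS_BUFSIZE (32 * 1024)

/* Returns the buffer size after a request for req_size more bytes,
 * given a current allocation of cur_size with `used` bytes occupied,
 * or -90 (-EMSGSIZE) if the capped maximum still cannot fit it. */
static int next_acts_size(int cur_size, int used, int req_size)
{
	int new_size;

	if (req_size <= cur_size - used)
		return cur_size;		/* fits in the slack, no realloc */

	new_size = cur_size * 2;		/* amortized doubling */
	if (new_size > MAX_ACTIONS_BUFSIZE) {
		if (MAX_ACTIONS_BUFSIZE - used < req_size)
			return -90;		/* -EMSGSIZE */
		new_size = MAX_ACTIONS_BUFSIZE;
	}
	return new_size;
}
```

Doubling keeps the number of copies logarithmic in the final action list length, while the hard cap bounds what a single flow's actions may consume.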
509static int add_action(struct sw_flow_actions **sfa, int attrtype, void *data, int len)
510{
511 struct nlattr *a;
512
513 a = reserve_sfa_size(sfa, nla_attr_size(len));
514 if (IS_ERR(a))
515 return PTR_ERR(a);
516
517 a->nla_type = attrtype;
518 a->nla_len = nla_attr_size(len);
519
520 if (data)
521 memcpy(nla_data(a), data, len);
522 memset((unsigned char *) a + a->nla_len, 0, nla_padlen(len));
523
524 return 0;
525}
526
527static inline int add_nested_action_start(struct sw_flow_actions **sfa, int attrtype)
528{
529 int used = (*sfa)->actions_len;
530 int err;
531
532 err = add_action(sfa, attrtype, NULL, 0);
533 if (err)
534 return err;
535
536 return used;
537}
469 538
470static int validate_sample(const struct nlattr *attr,
471 const struct sw_flow_key *key, int depth)
539static inline void add_nested_action_end(struct sw_flow_actions *sfa, int st_offset)
540{
541 struct nlattr *a = (struct nlattr *) ((unsigned char *)sfa->actions + st_offset);
542
543 a->nla_len = sfa->actions_len - st_offset;
544}
545
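add_nested_action_start()/add_nested_action_end() above implement nesting by offset: start records where the nested attribute header landed, and end patches that header's length once the payload size is known. Offsets are used rather than pointers because reserve_sfa_size() may reallocate the buffer in between. A minimal userspace sketch of the same technique over a flat byte buffer (toy header layout, not the real struct nlattr padding rules):

```c
#include <assert.h>
#include <string.h>

struct buf { unsigned char data[256]; int len; };

/* Toy attribute header: length (including header) and type. */
struct hdr { unsigned short nla_len; unsigned short nla_type; };

/* Emit a nested header with a placeholder length; return its offset. */
static int nest_start(struct buf *b, unsigned short type)
{
	int used = b->len;
	struct hdr h = { sizeof(struct hdr), type };

	memcpy(b->data + b->len, &h, sizeof(h));
	b->len += sizeof(h);
	return used;
}

/* Patch the recorded header's length now that the payload is written. */
static void nest_end(struct buf *b, int st_offset)
{
	struct hdr *h = (struct hdr *)(b->data + st_offset);

	h->nla_len = (unsigned short)(b->len - st_offset);
}

static void put_u32(struct buf *b, unsigned int v)
{
	memcpy(b->data + b->len, &v, 4);
	b->len += 4;
}
```

After `nest_start`, any number of payload writes (including further nesting) may follow; `nest_end` makes the outer length consistent regardless of how much was written.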
546static int validate_and_copy_actions(const struct nlattr *attr,
547 const struct sw_flow_key *key, int depth,
548 struct sw_flow_actions **sfa);
549
550static int validate_and_copy_sample(const struct nlattr *attr,
551 const struct sw_flow_key *key, int depth,
552 struct sw_flow_actions **sfa)
472{ 553{
473 const struct nlattr *attrs[OVS_SAMPLE_ATTR_MAX + 1]; 554 const struct nlattr *attrs[OVS_SAMPLE_ATTR_MAX + 1];
474 const struct nlattr *probability, *actions; 555 const struct nlattr *probability, *actions;
475 const struct nlattr *a; 556 const struct nlattr *a;
476 int rem;
557 int rem, start, err, st_acts;
477 558
478 memset(attrs, 0, sizeof(attrs)); 559 memset(attrs, 0, sizeof(attrs));
479 nla_for_each_nested(a, attr, rem) { 560 nla_for_each_nested(a, attr, rem) {
@@ -492,7 +573,26 @@ static int validate_sample(const struct nlattr *attr,
492 actions = attrs[OVS_SAMPLE_ATTR_ACTIONS]; 573 actions = attrs[OVS_SAMPLE_ATTR_ACTIONS];
493 if (!actions || (nla_len(actions) && nla_len(actions) < NLA_HDRLEN)) 574 if (!actions || (nla_len(actions) && nla_len(actions) < NLA_HDRLEN))
494 return -EINVAL; 575 return -EINVAL;
495 return validate_actions(actions, key, depth + 1);
576
577 /* validation done, copy sample action. */
578 start = add_nested_action_start(sfa, OVS_ACTION_ATTR_SAMPLE);
579 if (start < 0)
580 return start;
581 err = add_action(sfa, OVS_SAMPLE_ATTR_PROBABILITY, nla_data(probability), sizeof(u32));
582 if (err)
583 return err;
584 st_acts = add_nested_action_start(sfa, OVS_SAMPLE_ATTR_ACTIONS);
585 if (st_acts < 0)
586 return st_acts;
587
588 err = validate_and_copy_actions(actions, key, depth + 1, sfa);
589 if (err)
590 return err;
591
592 add_nested_action_end(*sfa, st_acts);
593 add_nested_action_end(*sfa, start);
594
595 return 0;
496} 596}
497 597
498static int validate_tp_port(const struct sw_flow_key *flow_key) 598static int validate_tp_port(const struct sw_flow_key *flow_key)
@@ -508,8 +608,30 @@ static int validate_tp_port(const struct sw_flow_key *flow_key)
508 return -EINVAL; 608 return -EINVAL;
509} 609}
510 610
611static int validate_and_copy_set_tun(const struct nlattr *attr,
612 struct sw_flow_actions **sfa)
613{
614 struct ovs_key_ipv4_tunnel tun_key;
615 int err, start;
616
617 err = ovs_ipv4_tun_from_nlattr(nla_data(attr), &tun_key);
618 if (err)
619 return err;
620
621 start = add_nested_action_start(sfa, OVS_ACTION_ATTR_SET);
622 if (start < 0)
623 return start;
624
625 err = add_action(sfa, OVS_KEY_ATTR_IPV4_TUNNEL, &tun_key, sizeof(tun_key));
626 add_nested_action_end(*sfa, start);
627
628 return err;
629}
630
511static int validate_set(const struct nlattr *a, 631static int validate_set(const struct nlattr *a,
512 const struct sw_flow_key *flow_key)
632 const struct sw_flow_key *flow_key,
633 struct sw_flow_actions **sfa,
634 bool *set_tun)
513{ 635{
514 const struct nlattr *ovs_key = nla_data(a); 636 const struct nlattr *ovs_key = nla_data(a);
515 int key_type = nla_type(ovs_key); 637 int key_type = nla_type(ovs_key);
@@ -519,18 +641,27 @@ static int validate_set(const struct nlattr *a,
519 return -EINVAL; 641 return -EINVAL;
520 642
521 if (key_type > OVS_KEY_ATTR_MAX || 643 if (key_type > OVS_KEY_ATTR_MAX ||
522 nla_len(ovs_key) != ovs_key_lens[key_type])
644 (ovs_key_lens[key_type] != nla_len(ovs_key) &&
645 ovs_key_lens[key_type] != -1))
523 return -EINVAL; 646 return -EINVAL;
524 647
525 switch (key_type) { 648 switch (key_type) {
526 const struct ovs_key_ipv4 *ipv4_key; 649 const struct ovs_key_ipv4 *ipv4_key;
527 const struct ovs_key_ipv6 *ipv6_key; 650 const struct ovs_key_ipv6 *ipv6_key;
651 int err;
528 652
529 case OVS_KEY_ATTR_PRIORITY: 653 case OVS_KEY_ATTR_PRIORITY:
530 case OVS_KEY_ATTR_SKB_MARK: 654 case OVS_KEY_ATTR_SKB_MARK:
531 case OVS_KEY_ATTR_ETHERNET: 655 case OVS_KEY_ATTR_ETHERNET:
532 break; 656 break;
533 657
658 case OVS_KEY_ATTR_TUNNEL:
659 *set_tun = true;
660 err = validate_and_copy_set_tun(a, sfa);
661 if (err)
662 return err;
663 break;
664
534 case OVS_KEY_ATTR_IPV4: 665 case OVS_KEY_ATTR_IPV4:
535 if (flow_key->eth.type != htons(ETH_P_IP)) 666 if (flow_key->eth.type != htons(ETH_P_IP))
536 return -EINVAL; 667 return -EINVAL;
@@ -606,8 +737,24 @@ static int validate_userspace(const struct nlattr *attr)
606 return 0; 737 return 0;
607} 738}
608 739
609static int validate_actions(const struct nlattr *attr,
610 const struct sw_flow_key *key, int depth)
740static int copy_action(const struct nlattr *from,
741 struct sw_flow_actions **sfa)
742{
743 int totlen = NLA_ALIGN(from->nla_len);
744 struct nlattr *to;
745
746 to = reserve_sfa_size(sfa, from->nla_len);
747 if (IS_ERR(to))
748 return PTR_ERR(to);
749
750 memcpy(to, from, totlen);
751 return 0;
752}
753
754static int validate_and_copy_actions(const struct nlattr *attr,
755 const struct sw_flow_key *key,
756 int depth,
757 struct sw_flow_actions **sfa)
611{ 758{
612 const struct nlattr *a; 759 const struct nlattr *a;
613 int rem, err; 760 int rem, err;
@@ -627,12 +774,14 @@ static int validate_actions(const struct nlattr *attr,
627 }; 774 };
628 const struct ovs_action_push_vlan *vlan; 775 const struct ovs_action_push_vlan *vlan;
629 int type = nla_type(a); 776 int type = nla_type(a);
777 bool skip_copy;
630 778
631 if (type > OVS_ACTION_ATTR_MAX || 779 if (type > OVS_ACTION_ATTR_MAX ||
632 (action_lens[type] != nla_len(a) && 780 (action_lens[type] != nla_len(a) &&
633 action_lens[type] != (u32)-1)) 781 action_lens[type] != (u32)-1))
634 return -EINVAL; 782 return -EINVAL;
635 783
784 skip_copy = false;
636 switch (type) { 785 switch (type) {
637 case OVS_ACTION_ATTR_UNSPEC: 786 case OVS_ACTION_ATTR_UNSPEC:
638 return -EINVAL; 787 return -EINVAL;
@@ -661,20 +810,26 @@ static int validate_actions(const struct nlattr *attr,
661 break; 810 break;
662 811
663 case OVS_ACTION_ATTR_SET: 812 case OVS_ACTION_ATTR_SET:
664 err = validate_set(a, key);
813 err = validate_set(a, key, sfa, &skip_copy);
665 if (err) 814 if (err)
666 return err; 815 return err;
667 break; 816 break;
668 817
669 case OVS_ACTION_ATTR_SAMPLE: 818 case OVS_ACTION_ATTR_SAMPLE:
670 err = validate_sample(a, key, depth);
819 err = validate_and_copy_sample(a, key, depth, sfa);
671 if (err) 820 if (err)
672 return err; 821 return err;
822 skip_copy = true;
673 break; 823 break;
674 824
675 default: 825 default:
676 return -EINVAL; 826 return -EINVAL;
677 } 827 }
828 if (!skip_copy) {
829 err = copy_action(a, sfa);
830 if (err)
831 return err;
832 }
678 } 833 }
679 834
680 if (rem > 0) 835 if (rem > 0)
@@ -739,24 +894,18 @@ static int ovs_packet_cmd_execute(struct sk_buff *skb, struct genl_info *info)
739 if (err) 894 if (err)
740 goto err_flow_free; 895 goto err_flow_free;
741 896
742 err = ovs_flow_metadata_from_nlattrs(&flow->key.phy.priority,
897 err = ovs_flow_metadata_from_nlattrs(flow, key_len, a[OVS_PACKET_ATTR_KEY]);
743 &flow->key.phy.skb_mark,
744 &flow->key.phy.in_port,
745 a[OVS_PACKET_ATTR_KEY]);
746 if (err)
747 goto err_flow_free;
748
749 err = validate_actions(a[OVS_PACKET_ATTR_ACTIONS], &flow->key, 0);
750 if (err) 898 if (err)
751 goto err_flow_free; 899 goto err_flow_free;
752
900 acts = ovs_flow_actions_alloc(nla_len(a[OVS_PACKET_ATTR_ACTIONS]));
753 flow->hash = ovs_flow_hash(&flow->key, key_len);
754
755 acts = ovs_flow_actions_alloc(a[OVS_PACKET_ATTR_ACTIONS]);
756 err = PTR_ERR(acts); 901 err = PTR_ERR(acts);
757 if (IS_ERR(acts)) 902 if (IS_ERR(acts))
758 goto err_flow_free; 903 goto err_flow_free;
904
905 err = validate_and_copy_actions(a[OVS_PACKET_ATTR_ACTIONS], &flow->key, 0, &acts);
759 rcu_assign_pointer(flow->sf_acts, acts); 906 rcu_assign_pointer(flow->sf_acts, acts);
907 if (err)
908 goto err_flow_free;
760 909
761 OVS_CB(packet)->flow = flow; 910 OVS_CB(packet)->flow = flow;
762 packet->priority = flow->key.phy.priority; 911 packet->priority = flow->key.phy.priority;
@@ -846,6 +995,99 @@ static struct genl_multicast_group ovs_dp_flow_multicast_group = {
846 .name = OVS_FLOW_MCGROUP 995 .name = OVS_FLOW_MCGROUP
847}; 996};
848 997
998static int actions_to_attr(const struct nlattr *attr, int len, struct sk_buff *skb);
999static int sample_action_to_attr(const struct nlattr *attr, struct sk_buff *skb)
1000{
1001 const struct nlattr *a;
1002 struct nlattr *start;
1003 int err = 0, rem;
1004
1005 start = nla_nest_start(skb, OVS_ACTION_ATTR_SAMPLE);
1006 if (!start)
1007 return -EMSGSIZE;
1008
1009 nla_for_each_nested(a, attr, rem) {
1010 int type = nla_type(a);
1011 struct nlattr *st_sample;
1012
1013 switch (type) {
1014 case OVS_SAMPLE_ATTR_PROBABILITY:
1015 if (nla_put(skb, OVS_SAMPLE_ATTR_PROBABILITY, sizeof(u32), nla_data(a)))
1016 return -EMSGSIZE;
1017 break;
1018 case OVS_SAMPLE_ATTR_ACTIONS:
1019 st_sample = nla_nest_start(skb, OVS_SAMPLE_ATTR_ACTIONS);
1020 if (!st_sample)
1021 return -EMSGSIZE;
1022 err = actions_to_attr(nla_data(a), nla_len(a), skb);
1023 if (err)
1024 return err;
1025 nla_nest_end(skb, st_sample);
1026 break;
1027 }
1028 }
1029
1030 nla_nest_end(skb, start);
1031 return err;
1032}
1033
1034static int set_action_to_attr(const struct nlattr *a, struct sk_buff *skb)
1035{
1036 const struct nlattr *ovs_key = nla_data(a);
1037 int key_type = nla_type(ovs_key);
1038 struct nlattr *start;
1039 int err;
1040
1041 switch (key_type) {
1042 case OVS_KEY_ATTR_IPV4_TUNNEL:
1043 start = nla_nest_start(skb, OVS_ACTION_ATTR_SET);
1044 if (!start)
1045 return -EMSGSIZE;
1046
1047 err = ovs_ipv4_tun_to_nlattr(skb, nla_data(ovs_key));
1048 if (err)
1049 return err;
1050 nla_nest_end(skb, start);
1051 break;
1052 default:
1053 if (nla_put(skb, OVS_ACTION_ATTR_SET, nla_len(a), ovs_key))
1054 return -EMSGSIZE;
1055 break;
1056 }
1057
1058 return 0;
1059}
1060
1061static int actions_to_attr(const struct nlattr *attr, int len, struct sk_buff *skb)
1062{
1063 const struct nlattr *a;
1064 int rem, err;
1065
1066 nla_for_each_attr(a, attr, len, rem) {
1067 int type = nla_type(a);
1068
1069 switch (type) {
1070 case OVS_ACTION_ATTR_SET:
1071 err = set_action_to_attr(a, skb);
1072 if (err)
1073 return err;
1074 break;
1075
1076 case OVS_ACTION_ATTR_SAMPLE:
1077 err = sample_action_to_attr(a, skb);
1078 if (err)
1079 return err;
1080 break;
1081 default:
1082 if (nla_put(skb, type, nla_len(a), nla_data(a)))
1083 return -EMSGSIZE;
1084 break;
1085 }
1086 }
1087
1088 return 0;
1089}
1090
849static size_t ovs_flow_cmd_msg_size(const struct sw_flow_actions *acts) 1091static size_t ovs_flow_cmd_msg_size(const struct sw_flow_actions *acts)
850{ 1092{
851 return NLMSG_ALIGN(sizeof(struct ovs_header)) 1093 return NLMSG_ALIGN(sizeof(struct ovs_header))
@@ -863,6 +1105,7 @@ static int ovs_flow_cmd_fill_info(struct sw_flow *flow, struct datapath *dp,
863{ 1105{
864 const int skb_orig_len = skb->len; 1106 const int skb_orig_len = skb->len;
865 const struct sw_flow_actions *sf_acts; 1107 const struct sw_flow_actions *sf_acts;
1108 struct nlattr *start;
866 struct ovs_flow_stats stats; 1109 struct ovs_flow_stats stats;
867 struct ovs_header *ovs_header; 1110 struct ovs_header *ovs_header;
868 struct nlattr *nla; 1111 struct nlattr *nla;
@@ -916,10 +1159,19 @@ static int ovs_flow_cmd_fill_info(struct sw_flow *flow, struct datapath *dp,
916 * This can only fail for dump operations because the skb is always 1159 * This can only fail for dump operations because the skb is always
917 * properly sized for single flows. 1160 * properly sized for single flows.
918 */ 1161 */
919 err = nla_put(skb, OVS_FLOW_ATTR_ACTIONS, sf_acts->actions_len,
920 sf_acts->actions);
921 if (err < 0 && skb_orig_len)
922 goto error;
1162 start = nla_nest_start(skb, OVS_FLOW_ATTR_ACTIONS);
1163 if (start) {
1164 err = actions_to_attr(sf_acts->actions, sf_acts->actions_len, skb);
1165 if (!err)
1166 nla_nest_end(skb, start);
1167 else {
1168 if (skb_orig_len)
1169 goto error;
1170
1171 nla_nest_cancel(skb, start);
1172 }
1173 } else if (skb_orig_len)
1174 goto nla_put_failure;
923 1175
924 return genlmsg_end(skb, ovs_header); 1176 return genlmsg_end(skb, ovs_header);
925 1177
@@ -964,6 +1216,7 @@ static int ovs_flow_cmd_new_or_set(struct sk_buff *skb, struct genl_info *info)
964 struct sk_buff *reply; 1216 struct sk_buff *reply;
965 struct datapath *dp; 1217 struct datapath *dp;
966 struct flow_table *table; 1218 struct flow_table *table;
1219 struct sw_flow_actions *acts = NULL;
967 int error; 1220 int error;
968 int key_len; 1221 int key_len;
969 1222
@@ -977,9 +1230,14 @@ static int ovs_flow_cmd_new_or_set(struct sk_buff *skb, struct genl_info *info)
977 1230
978 /* Validate actions. */ 1231 /* Validate actions. */
979 if (a[OVS_FLOW_ATTR_ACTIONS]) { 1232 if (a[OVS_FLOW_ATTR_ACTIONS]) {
980 error = validate_actions(a[OVS_FLOW_ATTR_ACTIONS], &key, 0); 1233 acts = ovs_flow_actions_alloc(nla_len(a[OVS_FLOW_ATTR_ACTIONS]));
981 if (error) 1234 error = PTR_ERR(acts);
1235 if (IS_ERR(acts))
982 goto error; 1236 goto error;
1237
1238 error = validate_and_copy_actions(a[OVS_FLOW_ATTR_ACTIONS], &key, 0, &acts);
1239 if (error)
1240 goto err_kfree;
983 } else if (info->genlhdr->cmd == OVS_FLOW_CMD_NEW) { 1241 } else if (info->genlhdr->cmd == OVS_FLOW_CMD_NEW) {
984 error = -EINVAL; 1242 error = -EINVAL;
985 goto error; 1243 goto error;
@@ -994,8 +1252,6 @@ static int ovs_flow_cmd_new_or_set(struct sk_buff *skb, struct genl_info *info)
994 table = ovsl_dereference(dp->table); 1252 table = ovsl_dereference(dp->table);
995 flow = ovs_flow_tbl_lookup(table, &key, key_len); 1253 flow = ovs_flow_tbl_lookup(table, &key, key_len);
996 if (!flow) { 1254 if (!flow) {
997 struct sw_flow_actions *acts;
998
999 /* Bail out if we're not allowed to create a new flow. */ 1255 /* Bail out if we're not allowed to create a new flow. */
1000 error = -ENOENT; 1256 error = -ENOENT;
1001 if (info->genlhdr->cmd == OVS_FLOW_CMD_SET) 1257 if (info->genlhdr->cmd == OVS_FLOW_CMD_SET)
@@ -1019,19 +1275,12 @@ static int ovs_flow_cmd_new_or_set(struct sk_buff *skb, struct genl_info *info)
1019 error = PTR_ERR(flow); 1275 error = PTR_ERR(flow);
1020 goto err_unlock_ovs; 1276 goto err_unlock_ovs;
1021 } 1277 }
1022 flow->key = key;
1023 clear_stats(flow); 1278 clear_stats(flow);
1024 1279
1025 /* Obtain actions. */
1026 acts = ovs_flow_actions_alloc(a[OVS_FLOW_ATTR_ACTIONS]);
1027 error = PTR_ERR(acts);
1028 if (IS_ERR(acts))
1029 goto error_free_flow;
1030 rcu_assign_pointer(flow->sf_acts, acts); 1280 rcu_assign_pointer(flow->sf_acts, acts);
1031 1281
1032 /* Put flow in bucket. */ 1282 /* Put flow in bucket. */
1033 flow->hash = ovs_flow_hash(&key, key_len);
1034 ovs_flow_tbl_insert(table, flow);
1283 ovs_flow_tbl_insert(table, flow, &key, key_len);
1035 1284
1036 reply = ovs_flow_cmd_build_info(flow, dp, info->snd_portid, 1285 reply = ovs_flow_cmd_build_info(flow, dp, info->snd_portid,
1037 info->snd_seq, 1286 info->snd_seq,
@@ -1039,7 +1288,6 @@ static int ovs_flow_cmd_new_or_set(struct sk_buff *skb, struct genl_info *info)
1039 } else { 1288 } else {
1040 /* We found a matching flow. */ 1289 /* We found a matching flow. */
1041 struct sw_flow_actions *old_acts; 1290 struct sw_flow_actions *old_acts;
1042 struct nlattr *acts_attrs;
1043 1291
1044 /* Bail out if we're not allowed to modify an existing flow. 1292 /* Bail out if we're not allowed to modify an existing flow.
1045 * We accept NLM_F_CREATE in place of the intended NLM_F_EXCL 1293 * We accept NLM_F_CREATE in place of the intended NLM_F_EXCL
@@ -1054,21 +1302,8 @@ static int ovs_flow_cmd_new_or_set(struct sk_buff *skb, struct genl_info *info)
1054 1302
1055 /* Update actions. */ 1303 /* Update actions. */
1056 old_acts = ovsl_dereference(flow->sf_acts); 1304 old_acts = ovsl_dereference(flow->sf_acts);
1057 acts_attrs = a[OVS_FLOW_ATTR_ACTIONS];
1058 if (acts_attrs &&
1305 rcu_assign_pointer(flow->sf_acts, acts);
1306 ovs_flow_deferred_free_acts(old_acts);
1059 (old_acts->actions_len != nla_len(acts_attrs) ||
1060 memcmp(old_acts->actions, nla_data(acts_attrs),
1061 old_acts->actions_len))) {
1062 struct sw_flow_actions *new_acts;
1063
1064 new_acts = ovs_flow_actions_alloc(acts_attrs);
1065 error = PTR_ERR(new_acts);
1066 if (IS_ERR(new_acts))
1067 goto err_unlock_ovs;
1068
1069 rcu_assign_pointer(flow->sf_acts, new_acts);
1070 ovs_flow_deferred_free_acts(old_acts);
1071 }
1072 1307
1073 reply = ovs_flow_cmd_build_info(flow, dp, info->snd_portid, 1308 reply = ovs_flow_cmd_build_info(flow, dp, info->snd_portid,
1074 info->snd_seq, OVS_FLOW_CMD_NEW); 1309 info->snd_seq, OVS_FLOW_CMD_NEW);
@@ -1089,10 +1324,10 @@ static int ovs_flow_cmd_new_or_set(struct sk_buff *skb, struct genl_info *info)
1089 ovs_dp_flow_multicast_group.id, PTR_ERR(reply)); 1324 ovs_dp_flow_multicast_group.id, PTR_ERR(reply));
1090 return 0; 1325 return 0;
1091 1326
1092error_free_flow:
1093 ovs_flow_free(flow);
1094err_unlock_ovs: 1327err_unlock_ovs:
1095 ovs_unlock(); 1328 ovs_unlock();
1329err_kfree:
1330 kfree(acts);
1096error: 1331error:
1097 return error; 1332 return error;
1098} 1333}
@@ -1812,10 +2047,11 @@ static int ovs_vport_cmd_set(struct sk_buff *skb, struct genl_info *info)
1812 if (IS_ERR(vport)) 2047 if (IS_ERR(vport))
1813 goto exit_unlock; 2048 goto exit_unlock;
1814 2049
1815 err = 0;
1816 if (a[OVS_VPORT_ATTR_TYPE] && 2050 if (a[OVS_VPORT_ATTR_TYPE] &&
1817 nla_get_u32(a[OVS_VPORT_ATTR_TYPE]) != vport->ops->type)
2051 nla_get_u32(a[OVS_VPORT_ATTR_TYPE]) != vport->ops->type) {
1818 err = -EINVAL; 2052 err = -EINVAL;
2053 goto exit_unlock;
2054 }
1819 2055
1820 reply = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 2056 reply = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
1821 if (!reply) { 2057 if (!reply) {
@@ -1823,10 +2059,11 @@ static int ovs_vport_cmd_set(struct sk_buff *skb, struct genl_info *info)
1823 goto exit_unlock; 2059 goto exit_unlock;
1824 } 2060 }
1825 2061
1826 if (!err && a[OVS_VPORT_ATTR_OPTIONS])
2062 if (a[OVS_VPORT_ATTR_OPTIONS]) {
1827 err = ovs_vport_set_options(vport, a[OVS_VPORT_ATTR_OPTIONS]); 2063 err = ovs_vport_set_options(vport, a[OVS_VPORT_ATTR_OPTIONS]);
1828 if (err) 2064 if (err)
1829 goto exit_free; 2065 goto exit_free;
2066 }
1830 2067
1831 if (a[OVS_VPORT_ATTR_UPCALL_PID]) 2068 if (a[OVS_VPORT_ATTR_UPCALL_PID])
1832 vport->upcall_portid = nla_get_u32(a[OVS_VPORT_ATTR_UPCALL_PID]); 2069 vport->upcall_portid = nla_get_u32(a[OVS_VPORT_ATTR_UPCALL_PID]);
@@ -1867,8 +2104,8 @@ static int ovs_vport_cmd_del(struct sk_buff *skb, struct genl_info *info)
1867 goto exit_unlock; 2104 goto exit_unlock;
1868 } 2105 }
1869 2106
1870 reply = ovs_vport_cmd_build_info(vport, info->snd_portid, info->snd_seq,
1871 OVS_VPORT_CMD_DEL);
2107 reply = ovs_vport_cmd_build_info(vport, info->snd_portid,
2108 info->snd_seq, OVS_VPORT_CMD_DEL);
1872 err = PTR_ERR(reply); 2109 err = PTR_ERR(reply);
1873 if (IS_ERR(reply)) 2110 if (IS_ERR(reply))
1874 goto exit_unlock; 2111 goto exit_unlock;
@@ -1897,8 +2134,8 @@ static int ovs_vport_cmd_get(struct sk_buff *skb, struct genl_info *info)
1897 if (IS_ERR(vport)) 2134 if (IS_ERR(vport))
1898 goto exit_unlock; 2135 goto exit_unlock;
1899 2136
1900 reply = ovs_vport_cmd_build_info(vport, info->snd_portid, info->snd_seq,
1901 OVS_VPORT_CMD_NEW);
2137 reply = ovs_vport_cmd_build_info(vport, info->snd_portid,
2138 info->snd_seq, OVS_VPORT_CMD_NEW);
1902 err = PTR_ERR(reply); 2139 err = PTR_ERR(reply);
1903 if (IS_ERR(reply)) 2140 if (IS_ERR(reply))
1904 goto exit_unlock; 2141 goto exit_unlock;
diff --git a/net/openvswitch/datapath.h b/net/openvswitch/datapath.h
index 16b840695216..a91486484916 100644
--- a/net/openvswitch/datapath.h
+++ b/net/openvswitch/datapath.h
@@ -88,9 +88,12 @@ struct datapath {
88/** 88/**
89 * struct ovs_skb_cb - OVS data in skb CB 89 * struct ovs_skb_cb - OVS data in skb CB
90 * @flow: The flow associated with this packet. May be %NULL if no flow. 90 * @flow: The flow associated with this packet. May be %NULL if no flow.
91 * @tun_key: Key for the tunnel that encapsulated this packet. NULL if the
92 * packet is not being tunneled.
91 */ 93 */
92struct ovs_skb_cb { 94struct ovs_skb_cb {
93 struct sw_flow *flow; 95 struct sw_flow *flow;
96 struct ovs_key_ipv4_tunnel *tun_key;
94}; 97};
95#define OVS_CB(skb) ((struct ovs_skb_cb *)(skb)->cb) 98#define OVS_CB(skb) ((struct ovs_skb_cb *)(skb)->cb)
96 99
@@ -119,6 +122,7 @@ struct dp_upcall_info {
119struct ovs_net { 122struct ovs_net {
120 struct list_head dps; 123 struct list_head dps;
121 struct work_struct dp_notify_work; 124 struct work_struct dp_notify_work;
125 struct vport_net vport_net;
122}; 126};
123 127
124extern int ovs_net_id; 128extern int ovs_net_id;
diff --git a/net/openvswitch/dp_notify.c b/net/openvswitch/dp_notify.c
index ef4feec6cd84..c3235675f359 100644
--- a/net/openvswitch/dp_notify.c
+++ b/net/openvswitch/dp_notify.c
@@ -78,7 +78,7 @@ static int dp_device_event(struct notifier_block *unused, unsigned long event,
78 void *ptr) 78 void *ptr)
79{ 79{
80 struct ovs_net *ovs_net; 80 struct ovs_net *ovs_net;
81 struct net_device *dev = ptr;
81 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
82 struct vport *vport = NULL; 82 struct vport *vport = NULL;
83 83
84 if (!ovs_is_internal_dev(dev)) 84 if (!ovs_is_internal_dev(dev))
diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
index b15321a2228c..5c519b121e1b 100644
--- a/net/openvswitch/flow.c
+++ b/net/openvswitch/flow.c
@@ -40,6 +40,7 @@
40#include <linux/icmpv6.h> 40#include <linux/icmpv6.h>
41#include <linux/rculist.h> 41#include <linux/rculist.h>
42#include <net/ip.h> 42#include <net/ip.h>
43#include <net/ip_tunnels.h>
43#include <net/ipv6.h> 44#include <net/ipv6.h>
44#include <net/ndisc.h> 45#include <net/ndisc.h>
45 46
@@ -198,20 +199,18 @@ void ovs_flow_used(struct sw_flow *flow, struct sk_buff *skb)
198 spin_unlock(&flow->lock); 199 spin_unlock(&flow->lock);
199} 200}
200 201
201struct sw_flow_actions *ovs_flow_actions_alloc(const struct nlattr *actions)
202struct sw_flow_actions *ovs_flow_actions_alloc(int size)
202{ 203{
203 int actions_len = nla_len(actions);
204 struct sw_flow_actions *sfa; 204 struct sw_flow_actions *sfa;
205 205
206 if (actions_len > MAX_ACTIONS_BUFSIZE)
206 if (size > MAX_ACTIONS_BUFSIZE)
207 return ERR_PTR(-EINVAL); 207 return ERR_PTR(-EINVAL);
208 208
209 sfa = kmalloc(sizeof(*sfa) + actions_len, GFP_KERNEL);
209 sfa = kmalloc(sizeof(*sfa) + size, GFP_KERNEL);
210 if (!sfa) 210 if (!sfa)
211 return ERR_PTR(-ENOMEM); 211 return ERR_PTR(-ENOMEM);
212 212
213 sfa->actions_len = actions_len;
213 sfa->actions_len = 0;
214 nla_memcpy(sfa->actions, actions, actions_len);
215 return sfa; 214 return sfa;
216} 215}
217 216
@@ -354,6 +353,14 @@ struct sw_flow *ovs_flow_tbl_next(struct flow_table *table, u32 *bucket, u32 *la
 	return NULL;
 }
 
+static void __flow_tbl_insert(struct flow_table *table, struct sw_flow *flow)
+{
+	struct hlist_head *head;
+	head = find_bucket(table, flow->hash);
+	hlist_add_head_rcu(&flow->hash_node[table->node_ver], head);
+	table->count++;
+}
+
 static void flow_table_copy_flows(struct flow_table *old, struct flow_table *new)
 {
 	int old_ver;
@@ -370,7 +377,7 @@ static void flow_table_copy_flows(struct flow_table *old, struct flow_table *new
 		head = flex_array_get(old->buckets, i);
 
 		hlist_for_each_entry(flow, head, hash_node[old_ver])
-			ovs_flow_tbl_insert(new, flow);
+			__flow_tbl_insert(new, flow);
 	}
 	old->keep_flows = true;
 }
@@ -590,10 +597,10 @@ out:
  * - skb->network_header: just past the Ethernet header, or just past the
  *       VLAN header, to the first byte of the Ethernet payload.
  *
- * - skb->transport_header: If key->dl_type is ETH_P_IP or ETH_P_IPV6
+ * - skb->transport_header: If key->eth.type is ETH_P_IP or ETH_P_IPV6
  *       on output, then just past the IP header, if one is present and
  *       of a correct length, otherwise the same as skb->network_header.
- *       For other key->dl_type values it is left untouched.
+ *       For other key->eth.type values it is left untouched.
  */
 int ovs_flow_extract(struct sk_buff *skb, u16 in_port, struct sw_flow_key *key,
 		 int *key_lenp)
@@ -605,6 +612,8 @@ int ovs_flow_extract(struct sk_buff *skb, u16 in_port, struct sw_flow_key *key,
 	memset(key, 0, sizeof(*key));
 
 	key->phy.priority = skb->priority;
+	if (OVS_CB(skb)->tun_key)
+		memcpy(&key->tun_key, OVS_CB(skb)->tun_key, sizeof(key->tun_key));
 	key->phy.in_port = in_port;
 	key->phy.skb_mark = skb->mark;
 
@@ -618,6 +627,9 @@ int ovs_flow_extract(struct sk_buff *skb, u16 in_port, struct sw_flow_key *key,
 	memcpy(key->eth.dst, eth->h_dest, ETH_ALEN);
 
 	__skb_pull(skb, 2 * ETH_ALEN);
+	/* We are going to push all headers that we pull, so no need to
+	 * update skb->csum here.
+	 */
 
 	if (vlan_tx_tag_present(skb))
 		key->eth.tci = htons(skb->vlan_tci);
@@ -759,9 +771,18 @@ out:
 	return error;
 }
 
-u32 ovs_flow_hash(const struct sw_flow_key *key, int key_len)
+static u32 ovs_flow_hash(const struct sw_flow_key *key, int key_start, int key_len)
+{
+	return jhash2((u32 *)((u8 *)key + key_start),
+		      DIV_ROUND_UP(key_len - key_start, sizeof(u32)), 0);
+}
+
+static int flow_key_start(struct sw_flow_key *key)
 {
-	return jhash2((u32 *)key, DIV_ROUND_UP(key_len, sizeof(u32)), 0);
+	if (key->tun_key.ipv4_dst)
+		return 0;
+	else
+		return offsetof(struct sw_flow_key, phy);
 }
 
 struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *table,
@@ -769,28 +790,31 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *table,
 {
 	struct sw_flow *flow;
 	struct hlist_head *head;
+	u8 *_key;
+	int key_start;
 	u32 hash;
 
-	hash = ovs_flow_hash(key, key_len);
+	key_start = flow_key_start(key);
+	hash = ovs_flow_hash(key, key_start, key_len);
 
+	_key = (u8 *) key + key_start;
 	head = find_bucket(table, hash);
 	hlist_for_each_entry_rcu(flow, head, hash_node[table->node_ver]) {
 
 		if (flow->hash == hash &&
-		    !memcmp(&flow->key, key, key_len)) {
+		    !memcmp((u8 *)&flow->key + key_start, _key, key_len - key_start)) {
 			return flow;
 		}
 	}
 	return NULL;
 }
 
-void ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow)
+void ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow,
+			 struct sw_flow_key *key, int key_len)
 {
-	struct hlist_head *head;
-
-	head = find_bucket(table, flow->hash);
-	hlist_add_head_rcu(&flow->hash_node[table->node_ver], head);
-	table->count++;
+	flow->hash = ovs_flow_hash(key, flow_key_start(key), key_len);
+	memcpy(&flow->key, key, sizeof(flow->key));
+	__flow_tbl_insert(table, flow);
 }
 
 void ovs_flow_tbl_remove(struct flow_table *table, struct sw_flow *flow)
@@ -817,6 +841,7 @@ const int ovs_key_lens[OVS_KEY_ATTR_MAX + 1] = {
 	[OVS_KEY_ATTR_ICMPV6] = sizeof(struct ovs_key_icmpv6),
 	[OVS_KEY_ATTR_ARP] = sizeof(struct ovs_key_arp),
 	[OVS_KEY_ATTR_ND] = sizeof(struct ovs_key_nd),
+	[OVS_KEY_ATTR_TUNNEL] = -1,
 };
 
 static int ipv4_flow_from_nlattrs(struct sw_flow_key *swkey, int *key_len,
@@ -954,6 +979,105 @@ static int parse_flow_nlattrs(const struct nlattr *attr,
 	return 0;
 }
 
+int ovs_ipv4_tun_from_nlattr(const struct nlattr *attr,
+			     struct ovs_key_ipv4_tunnel *tun_key)
+{
+	struct nlattr *a;
+	int rem;
+	bool ttl = false;
+
+	memset(tun_key, 0, sizeof(*tun_key));
+
+	nla_for_each_nested(a, attr, rem) {
+		int type = nla_type(a);
+		static const u32 ovs_tunnel_key_lens[OVS_TUNNEL_KEY_ATTR_MAX + 1] = {
+			[OVS_TUNNEL_KEY_ATTR_ID] = sizeof(u64),
+			[OVS_TUNNEL_KEY_ATTR_IPV4_SRC] = sizeof(u32),
+			[OVS_TUNNEL_KEY_ATTR_IPV4_DST] = sizeof(u32),
+			[OVS_TUNNEL_KEY_ATTR_TOS] = 1,
+			[OVS_TUNNEL_KEY_ATTR_TTL] = 1,
+			[OVS_TUNNEL_KEY_ATTR_DONT_FRAGMENT] = 0,
+			[OVS_TUNNEL_KEY_ATTR_CSUM] = 0,
+		};
+
+		if (type > OVS_TUNNEL_KEY_ATTR_MAX ||
+		    ovs_tunnel_key_lens[type] != nla_len(a))
+			return -EINVAL;
+
+		switch (type) {
+		case OVS_TUNNEL_KEY_ATTR_ID:
+			tun_key->tun_id = nla_get_be64(a);
+			tun_key->tun_flags |= TUNNEL_KEY;
+			break;
+		case OVS_TUNNEL_KEY_ATTR_IPV4_SRC:
+			tun_key->ipv4_src = nla_get_be32(a);
+			break;
+		case OVS_TUNNEL_KEY_ATTR_IPV4_DST:
+			tun_key->ipv4_dst = nla_get_be32(a);
+			break;
+		case OVS_TUNNEL_KEY_ATTR_TOS:
+			tun_key->ipv4_tos = nla_get_u8(a);
+			break;
+		case OVS_TUNNEL_KEY_ATTR_TTL:
+			tun_key->ipv4_ttl = nla_get_u8(a);
+			ttl = true;
+			break;
+		case OVS_TUNNEL_KEY_ATTR_DONT_FRAGMENT:
+			tun_key->tun_flags |= TUNNEL_DONT_FRAGMENT;
+			break;
+		case OVS_TUNNEL_KEY_ATTR_CSUM:
+			tun_key->tun_flags |= TUNNEL_CSUM;
+			break;
+		default:
+			return -EINVAL;
+
+		}
+	}
+	if (rem > 0)
+		return -EINVAL;
+
+	if (!tun_key->ipv4_dst)
+		return -EINVAL;
+
+	if (!ttl)
+		return -EINVAL;
+
+	return 0;
+}
+
+int ovs_ipv4_tun_to_nlattr(struct sk_buff *skb,
+			   const struct ovs_key_ipv4_tunnel *tun_key)
+{
+	struct nlattr *nla;
+
+	nla = nla_nest_start(skb, OVS_KEY_ATTR_TUNNEL);
+	if (!nla)
+		return -EMSGSIZE;
+
+	if (tun_key->tun_flags & TUNNEL_KEY &&
+	    nla_put_be64(skb, OVS_TUNNEL_KEY_ATTR_ID, tun_key->tun_id))
+		return -EMSGSIZE;
+	if (tun_key->ipv4_src &&
+	    nla_put_be32(skb, OVS_TUNNEL_KEY_ATTR_IPV4_SRC, tun_key->ipv4_src))
+		return -EMSGSIZE;
+	if (nla_put_be32(skb, OVS_TUNNEL_KEY_ATTR_IPV4_DST, tun_key->ipv4_dst))
+		return -EMSGSIZE;
+	if (tun_key->ipv4_tos &&
+	    nla_put_u8(skb, OVS_TUNNEL_KEY_ATTR_TOS, tun_key->ipv4_tos))
+		return -EMSGSIZE;
+	if (nla_put_u8(skb, OVS_TUNNEL_KEY_ATTR_TTL, tun_key->ipv4_ttl))
+		return -EMSGSIZE;
+	if ((tun_key->tun_flags & TUNNEL_DONT_FRAGMENT) &&
+	    nla_put_flag(skb, OVS_TUNNEL_KEY_ATTR_DONT_FRAGMENT))
+		return -EMSGSIZE;
+	if ((tun_key->tun_flags & TUNNEL_CSUM) &&
+	    nla_put_flag(skb, OVS_TUNNEL_KEY_ATTR_CSUM))
+		return -EMSGSIZE;
+
+	nla_nest_end(skb, nla);
+	return 0;
+}
+
 /**
  * ovs_flow_from_nlattrs - parses Netlink attributes into a flow key.
  * @swkey: receives the extracted flow key.
@@ -996,6 +1120,14 @@ int ovs_flow_from_nlattrs(struct sw_flow_key *swkey, int *key_lenp,
 		attrs &= ~(1 << OVS_KEY_ATTR_SKB_MARK);
 	}
 
+	if (attrs & (1 << OVS_KEY_ATTR_TUNNEL)) {
+		err = ovs_ipv4_tun_from_nlattr(a[OVS_KEY_ATTR_TUNNEL], &swkey->tun_key);
+		if (err)
+			return err;
+
+		attrs &= ~(1 << OVS_KEY_ATTR_TUNNEL);
+	}
+
 	/* Data attributes. */
 	if (!(attrs & (1 << OVS_KEY_ATTR_ETHERNET)))
 		return -EINVAL;
@@ -1122,10 +1254,9 @@ int ovs_flow_from_nlattrs(struct sw_flow_key *swkey, int *key_lenp,
 
 /**
  * ovs_flow_metadata_from_nlattrs - parses Netlink attributes into a flow key.
- * @priority: receives the skb priority
- * @mark: receives the skb mark
- * @in_port: receives the extracted input port.
- * @key: Netlink attribute holding nested %OVS_KEY_ATTR_* Netlink attribute
+ * @flow: Receives extracted in_port, priority, tun_key and skb_mark.
+ * @key_len: Length of key in @flow.  Used for calculating flow hash.
+ * @attr: Netlink attribute holding nested %OVS_KEY_ATTR_* Netlink attribute
  * sequence.
  *
  * This parses a series of Netlink attributes that form a flow key, which must
@@ -1133,42 +1264,56 @@ int ovs_flow_from_nlattrs(struct sw_flow_key *swkey, int *key_lenp,
  * get the metadata, that is, the parts of the flow key that cannot be
  * extracted from the packet itself.
  */
-int ovs_flow_metadata_from_nlattrs(u32 *priority, u32 *mark, u16 *in_port,
+int ovs_flow_metadata_from_nlattrs(struct sw_flow *flow, int key_len,
 				   const struct nlattr *attr)
 {
+	struct ovs_key_ipv4_tunnel *tun_key = &flow->key.tun_key;
 	const struct nlattr *nla;
 	int rem;
 
-	*in_port = DP_MAX_PORTS;
-	*priority = 0;
-	*mark = 0;
+	flow->key.phy.in_port = DP_MAX_PORTS;
+	flow->key.phy.priority = 0;
+	flow->key.phy.skb_mark = 0;
+	memset(tun_key, 0, sizeof(flow->key.tun_key));
 
 	nla_for_each_nested(nla, attr, rem) {
 		int type = nla_type(nla);
 
 		if (type <= OVS_KEY_ATTR_MAX && ovs_key_lens[type] > 0) {
+			int err;
+
 			if (nla_len(nla) != ovs_key_lens[type])
 				return -EINVAL;
 
 			switch (type) {
 			case OVS_KEY_ATTR_PRIORITY:
-				*priority = nla_get_u32(nla);
+				flow->key.phy.priority = nla_get_u32(nla);
+				break;
+
+			case OVS_KEY_ATTR_TUNNEL:
+				err = ovs_ipv4_tun_from_nlattr(nla, tun_key);
+				if (err)
+					return err;
 				break;
 
 			case OVS_KEY_ATTR_IN_PORT:
 				if (nla_get_u32(nla) >= DP_MAX_PORTS)
 					return -EINVAL;
-				*in_port = nla_get_u32(nla);
+				flow->key.phy.in_port = nla_get_u32(nla);
 				break;
 
 			case OVS_KEY_ATTR_SKB_MARK:
-				*mark = nla_get_u32(nla);
+				flow->key.phy.skb_mark = nla_get_u32(nla);
 				break;
 			}
 		}
 	}
 	if (rem)
 		return -EINVAL;
+
+	flow->hash = ovs_flow_hash(&flow->key,
+				   flow_key_start(&flow->key), key_len);
+
 	return 0;
 }
 
@@ -1181,6 +1326,10 @@ int ovs_flow_to_nlattrs(const struct sw_flow_key *swkey, struct sk_buff *skb)
 	    nla_put_u32(skb, OVS_KEY_ATTR_PRIORITY, swkey->phy.priority))
 		goto nla_put_failure;
 
+	if (swkey->tun_key.ipv4_dst &&
+	    ovs_ipv4_tun_to_nlattr(skb, &swkey->tun_key))
+		goto nla_put_failure;
+
 	if (swkey->phy.in_port != DP_MAX_PORTS &&
 	    nla_put_u32(skb, OVS_KEY_ATTR_IN_PORT, swkey->phy.in_port))
 		goto nla_put_failure;
diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h
index 0875fde65b9c..66ef7220293e 100644
--- a/net/openvswitch/flow.h
+++ b/net/openvswitch/flow.h
@@ -40,7 +40,38 @@ struct sw_flow_actions {
 	struct nlattr actions[];
 };
 
+/* Used to memset ovs_key_ipv4_tunnel padding. */
+#define OVS_TUNNEL_KEY_SIZE					\
+	(offsetof(struct ovs_key_ipv4_tunnel, ipv4_ttl) +	\
+	 FIELD_SIZEOF(struct ovs_key_ipv4_tunnel, ipv4_ttl))
+
+struct ovs_key_ipv4_tunnel {
+	__be64 tun_id;
+	__be32 ipv4_src;
+	__be32 ipv4_dst;
+	__be16 tun_flags;
+	u8   ipv4_tos;
+	u8   ipv4_ttl;
+};
+
+static inline void ovs_flow_tun_key_init(struct ovs_key_ipv4_tunnel *tun_key,
+					 const struct iphdr *iph, __be64 tun_id,
+					 __be16 tun_flags)
+{
+	tun_key->tun_id = tun_id;
+	tun_key->ipv4_src = iph->saddr;
+	tun_key->ipv4_dst = iph->daddr;
+	tun_key->ipv4_tos = iph->tos;
+	tun_key->ipv4_ttl = iph->ttl;
+	tun_key->tun_flags = tun_flags;
+
+	/* clear struct padding. */
+	memset((unsigned char *) tun_key + OVS_TUNNEL_KEY_SIZE, 0,
+	       sizeof(*tun_key) - OVS_TUNNEL_KEY_SIZE);
+}
+
 struct sw_flow_key {
+	struct ovs_key_ipv4_tunnel tun_key;  /* Encapsulating tunnel key. */
 	struct {
 		u32	priority;	/* Packet QoS priority. */
 		u32	skb_mark;	/* SKB mark. */
@@ -130,7 +161,7 @@ struct sw_flow *ovs_flow_alloc(void);
 void ovs_flow_deferred_free(struct sw_flow *);
 void ovs_flow_free(struct sw_flow *flow);
 
-struct sw_flow_actions *ovs_flow_actions_alloc(const struct nlattr *);
+struct sw_flow_actions *ovs_flow_actions_alloc(int actions_len);
 void ovs_flow_deferred_free_acts(struct sw_flow_actions *);
 
 int ovs_flow_extract(struct sk_buff *, u16 in_port, struct sw_flow_key *,
@@ -141,10 +172,10 @@ u64 ovs_flow_used_time(unsigned long flow_jiffies);
 int ovs_flow_to_nlattrs(const struct sw_flow_key *, struct sk_buff *);
 int ovs_flow_from_nlattrs(struct sw_flow_key *swkey, int *key_lenp,
 			  const struct nlattr *);
-int ovs_flow_metadata_from_nlattrs(u32 *priority, u32 *mark, u16 *in_port,
-				   const struct nlattr *);
+int ovs_flow_metadata_from_nlattrs(struct sw_flow *flow, int key_len,
+				   const struct nlattr *attr);
 
-#define MAX_ACTIONS_BUFSIZE	(16 * 1024)
+#define MAX_ACTIONS_BUFSIZE	(32 * 1024)
 #define TBL_MIN_BUCKETS		1024
 
 struct flow_table {
@@ -173,11 +204,15 @@ void ovs_flow_tbl_deferred_destroy(struct flow_table *table);
 struct flow_table *ovs_flow_tbl_alloc(int new_size);
 struct flow_table *ovs_flow_tbl_expand(struct flow_table *table);
 struct flow_table *ovs_flow_tbl_rehash(struct flow_table *table);
-void ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow);
+void ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow,
+			 struct sw_flow_key *key, int key_len);
 void ovs_flow_tbl_remove(struct flow_table *table, struct sw_flow *flow);
-u32 ovs_flow_hash(const struct sw_flow_key *key, int key_len);
 
 struct sw_flow *ovs_flow_tbl_next(struct flow_table *table, u32 *bucket, u32 *idx);
 extern const int ovs_key_lens[OVS_KEY_ATTR_MAX + 1];
+int ovs_ipv4_tun_from_nlattr(const struct nlattr *attr,
+			     struct ovs_key_ipv4_tunnel *tun_key);
+int ovs_ipv4_tun_to_nlattr(struct sk_buff *skb,
+			   const struct ovs_key_ipv4_tunnel *tun_key);
 
 #endif /* flow.h */
diff --git a/net/openvswitch/vport-gre.c b/net/openvswitch/vport-gre.c
new file mode 100644
index 000000000000..493e9775dcda
--- /dev/null
+++ b/net/openvswitch/vport-gre.c
@@ -0,0 +1,275 @@
+/*
+ * Copyright (c) 2007-2013 Nicira, Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA
+ */
+
+#ifdef CONFIG_OPENVSWITCH_GRE
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/if.h>
+#include <linux/skbuff.h>
+#include <linux/ip.h>
+#include <linux/if_tunnel.h>
+#include <linux/if_vlan.h>
+#include <linux/in.h>
+#include <linux/if_vlan.h>
+#include <linux/in.h>
+#include <linux/in_route.h>
+#include <linux/inetdevice.h>
+#include <linux/jhash.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/workqueue.h>
+#include <linux/rculist.h>
+#include <net/route.h>
+#include <net/xfrm.h>
+
+#include <net/icmp.h>
+#include <net/ip.h>
+#include <net/ip_tunnels.h>
+#include <net/gre.h>
+#include <net/net_namespace.h>
+#include <net/netns/generic.h>
+#include <net/protocol.h>
+
+#include "datapath.h"
+#include "vport.h"
+
+/* Returns the least-significant 32 bits of a __be64. */
+static __be32 be64_get_low32(__be64 x)
+{
+#ifdef __BIG_ENDIAN
+	return (__force __be32)x;
+#else
+	return (__force __be32)((__force u64)x >> 32);
+#endif
+}
+
+static __be16 filter_tnl_flags(__be16 flags)
+{
+	return flags & (TUNNEL_CSUM | TUNNEL_KEY);
+}
+
+static struct sk_buff *__build_header(struct sk_buff *skb,
+				      int tunnel_hlen)
+{
+	const struct ovs_key_ipv4_tunnel *tun_key = OVS_CB(skb)->tun_key;
+	struct tnl_ptk_info tpi;
+
+	skb = gre_handle_offloads(skb, !!(tun_key->tun_flags & TUNNEL_CSUM));
+	if (IS_ERR(skb))
+		return NULL;
+
+	tpi.flags = filter_tnl_flags(tun_key->tun_flags);
+	tpi.proto = htons(ETH_P_TEB);
+	tpi.key = be64_get_low32(tun_key->tun_id);
+	tpi.seq = 0;
+	gre_build_header(skb, &tpi, tunnel_hlen);
+
+	return skb;
+}
+
+static __be64 key_to_tunnel_id(__be32 key, __be32 seq)
+{
+#ifdef __BIG_ENDIAN
+	return (__force __be64)((__force u64)seq << 32 | (__force u32)key);
+#else
+	return (__force __be64)((__force u64)key << 32 | (__force u32)seq);
+#endif
+}
+
+/* Called with rcu_read_lock and BH disabled. */
+static int gre_rcv(struct sk_buff *skb,
+		   const struct tnl_ptk_info *tpi)
+{
+	struct ovs_key_ipv4_tunnel tun_key;
+	struct ovs_net *ovs_net;
+	struct vport *vport;
+	__be64 key;
+
+	ovs_net = net_generic(dev_net(skb->dev), ovs_net_id);
+	vport = rcu_dereference(ovs_net->vport_net.gre_vport);
+	if (unlikely(!vport))
+		return PACKET_REJECT;
+
+	key = key_to_tunnel_id(tpi->key, tpi->seq);
+	ovs_flow_tun_key_init(&tun_key, ip_hdr(skb), key,
+			      filter_tnl_flags(tpi->flags));
+
+	ovs_vport_receive(vport, skb, &tun_key);
+	return PACKET_RCVD;
+}
+
+static int gre_tnl_send(struct vport *vport, struct sk_buff *skb)
+{
+	struct net *net = ovs_dp_get_net(vport->dp);
+	struct flowi4 fl;
+	struct rtable *rt;
+	int min_headroom;
+	int tunnel_hlen;
+	__be16 df;
+	int err;
+
+	if (unlikely(!OVS_CB(skb)->tun_key)) {
+		err = -EINVAL;
+		goto error;
+	}
+
+	/* Route lookup */
+	memset(&fl, 0, sizeof(fl));
+	fl.daddr = OVS_CB(skb)->tun_key->ipv4_dst;
+	fl.saddr = OVS_CB(skb)->tun_key->ipv4_src;
+	fl.flowi4_tos = RT_TOS(OVS_CB(skb)->tun_key->ipv4_tos);
+	fl.flowi4_mark = skb->mark;
+	fl.flowi4_proto = IPPROTO_GRE;
+
+	rt = ip_route_output_key(net, &fl);
+	if (IS_ERR(rt))
+		return PTR_ERR(rt);
+
+	tunnel_hlen = ip_gre_calc_hlen(OVS_CB(skb)->tun_key->tun_flags);
+
+	min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+			+ tunnel_hlen + sizeof(struct iphdr)
+			+ (vlan_tx_tag_present(skb) ? VLAN_HLEN : 0);
+	if (skb_headroom(skb) < min_headroom || skb_header_cloned(skb)) {
+		int head_delta = SKB_DATA_ALIGN(min_headroom -
+						skb_headroom(skb) +
+						16);
+		err = pskb_expand_head(skb, max_t(int, head_delta, 0),
+					0, GFP_ATOMIC);
+		if (unlikely(err))
+			goto err_free_rt;
+	}
+
+	if (vlan_tx_tag_present(skb)) {
+		if (unlikely(!__vlan_put_tag(skb,
+					     skb->vlan_proto,
+					     vlan_tx_tag_get(skb)))) {
+			err = -ENOMEM;
+			goto err_free_rt;
+		}
+		skb->vlan_tci = 0;
+	}
+
+	/* Push Tunnel header. */
+	skb = __build_header(skb, tunnel_hlen);
+	if (unlikely(!skb)) {
+		err = 0;
+		goto err_free_rt;
+	}
+
+	df = OVS_CB(skb)->tun_key->tun_flags & TUNNEL_DONT_FRAGMENT ?
+		htons(IP_DF) : 0;
+
+	skb->local_df = 1;
+
+	return iptunnel_xmit(net, rt, skb, fl.saddr,
+			     OVS_CB(skb)->tun_key->ipv4_dst, IPPROTO_GRE,
+			     OVS_CB(skb)->tun_key->ipv4_tos,
+			     OVS_CB(skb)->tun_key->ipv4_ttl, df);
+err_free_rt:
+	ip_rt_put(rt);
+error:
+	return err;
+}
+
+static struct gre_cisco_protocol gre_protocol = {
+	.handler        = gre_rcv,
+	.priority       = 1,
+};
+
+static int gre_ports;
+static int gre_init(void)
+{
+	int err;
+
+	gre_ports++;
+	if (gre_ports > 1)
+		return 0;
+
+	err = gre_cisco_register(&gre_protocol);
+	if (err)
+		pr_warn("cannot register gre protocol handler\n");
+
+	return err;
+}
+
+static void gre_exit(void)
+{
+	gre_ports--;
+	if (gre_ports > 0)
+		return;
+
+	gre_cisco_unregister(&gre_protocol);
+}
+
+static const char *gre_get_name(const struct vport *vport)
+{
+	return vport_priv(vport);
+}
+
+static struct vport *gre_create(const struct vport_parms *parms)
+{
+	struct net *net = ovs_dp_get_net(parms->dp);
+	struct ovs_net *ovs_net;
+	struct vport *vport;
+	int err;
+
+	err = gre_init();
+	if (err)
+		return ERR_PTR(err);
+
+	ovs_net = net_generic(net, ovs_net_id);
+	if (ovsl_dereference(ovs_net->vport_net.gre_vport)) {
+		vport = ERR_PTR(-EEXIST);
+		goto error;
+	}
+
+	vport = ovs_vport_alloc(IFNAMSIZ, &ovs_gre_vport_ops, parms);
+	if (IS_ERR(vport))
+		goto error;
+
+	strncpy(vport_priv(vport), parms->name, IFNAMSIZ);
+	rcu_assign_pointer(ovs_net->vport_net.gre_vport, vport);
+	return vport;
+
+error:
+	gre_exit();
+	return vport;
+}
+
+static void gre_tnl_destroy(struct vport *vport)
+{
+	struct net *net = ovs_dp_get_net(vport->dp);
+	struct ovs_net *ovs_net;
+
+	ovs_net = net_generic(net, ovs_net_id);
+
+	rcu_assign_pointer(ovs_net->vport_net.gre_vport, NULL);
+	ovs_vport_deferred_free(vport);
+	gre_exit();
+}
+
+const struct vport_ops ovs_gre_vport_ops = {
+	.type		= OVS_VPORT_TYPE_GRE,
+	.create		= gre_create,
+	.destroy	= gre_tnl_destroy,
+	.get_name	= gre_get_name,
+	.send		= gre_tnl_send,
+};
+
+#endif /* OPENVSWITCH_GRE */
diff --git a/net/openvswitch/vport-internal_dev.c b/net/openvswitch/vport-internal_dev.c
index 84e0a0379186..98d3edbbc235 100644
--- a/net/openvswitch/vport-internal_dev.c
+++ b/net/openvswitch/vport-internal_dev.c
@@ -67,7 +67,7 @@ static struct rtnl_link_stats64 *internal_dev_get_stats(struct net_device *netde
 static int internal_dev_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
 	rcu_read_lock();
-	ovs_vport_receive(internal_dev_priv(netdev)->vport, skb);
+	ovs_vport_receive(internal_dev_priv(netdev)->vport, skb, NULL);
 	rcu_read_unlock();
 	return 0;
 }
@@ -221,6 +221,7 @@ static int internal_dev_recv(struct vport *vport, struct sk_buff *skb)
 	skb->dev = netdev;
 	skb->pkt_type = PACKET_HOST;
 	skb->protocol = eth_type_trans(skb, netdev);
+	skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
 
 	netif_rx(skb);
 
diff --git a/net/openvswitch/vport-netdev.c b/net/openvswitch/vport-netdev.c
index 4f01c6d2ffa4..5982f3f62835 100644
--- a/net/openvswitch/vport-netdev.c
+++ b/net/openvswitch/vport-netdev.c
@@ -49,7 +49,9 @@ static void netdev_port_receive(struct vport *vport, struct sk_buff *skb)
 		return;
 
 	skb_push(skb, ETH_HLEN);
-	ovs_vport_receive(vport, skb);
+	ovs_skb_postpush_rcsum(skb, skb->data, ETH_HLEN);
+
+	ovs_vport_receive(vport, skb, NULL);
 	return;
 
 error:
@@ -170,7 +172,7 @@ static int netdev_send(struct vport *vport, struct sk_buff *skb)
 		net_warn_ratelimited("%s: dropped over-mtu packet: %d > %d\n",
 				     netdev_vport->dev->name,
 				     packet_length(skb), mtu);
-		goto error;
+		goto drop;
 	}
 
 	skb->dev = netdev_vport->dev;
@@ -179,9 +181,8 @@ static int netdev_send(struct vport *vport, struct sk_buff *skb)
 
 	return len;
 
-error:
+drop:
 	kfree_skb(skb);
-	ovs_vport_record_error(vport, VPORT_E_TX_DROPPED);
 	return 0;
 }
 
diff --git a/net/openvswitch/vport-netdev.h b/net/openvswitch/vport-netdev.h
index a3cb3a32cd77..dd298b5c5cdb 100644
--- a/net/openvswitch/vport-netdev.h
+++ b/net/openvswitch/vport-netdev.h
@@ -39,6 +39,5 @@ netdev_vport_priv(const struct vport *vport)
 }
 
 const char *ovs_netdev_get_name(const struct vport *);
-const char *ovs_netdev_get_config(const struct vport *);
 
 #endif /* vport_netdev.h */
diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
index 720623190eaa..d4c7fa04ce08 100644
--- a/net/openvswitch/vport.c
+++ b/net/openvswitch/vport.c
@@ -38,6 +38,10 @@
 static const struct vport_ops *vport_ops_list[] = {
 	&ovs_netdev_vport_ops,
 	&ovs_internal_vport_ops,
+
+#ifdef CONFIG_OPENVSWITCH_GRE
+	&ovs_gre_vport_ops,
+#endif
 };
 
 /* Protected by RCU read lock for reading, ovs_mutex for writing. */
@@ -325,7 +329,8 @@ int ovs_vport_get_options(const struct vport *vport, struct sk_buff *skb)
  * Must be called with rcu_read_lock.  The packet cannot be shared and
  * skb->data should point to the Ethernet header.
  */
-void ovs_vport_receive(struct vport *vport, struct sk_buff *skb)
+void ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
+		       struct ovs_key_ipv4_tunnel *tun_key)
 {
 	struct pcpu_tstats *stats;
 
@@ -335,6 +340,7 @@ void ovs_vport_receive(struct vport *vport, struct sk_buff *skb)
 	stats->rx_bytes += skb->len;
 	u64_stats_update_end(&stats->syncp);
 
+	OVS_CB(skb)->tun_key = tun_key;
 	ovs_dp_process_received_packet(vport, skb);
 }
 
@@ -351,7 +357,7 @@ int ovs_vport_send(struct vport *vport, struct sk_buff *skb)
 {
 	int sent = vport->ops->send(vport, skb);
 
-	if (likely(sent)) {
+	if (likely(sent > 0)) {
 		struct pcpu_tstats *stats;
 
 		stats = this_cpu_ptr(vport->percpu_stats);
@@ -360,7 +366,12 @@ int ovs_vport_send(struct vport *vport, struct sk_buff *skb)
 		stats->tx_packets++;
 		stats->tx_bytes += sent;
 		u64_stats_update_end(&stats->syncp);
-	}
+	} else if (sent < 0) {
+		ovs_vport_record_error(vport, VPORT_E_TX_ERROR);
+		kfree_skb(skb);
+	} else
+		ovs_vport_record_error(vport, VPORT_E_TX_DROPPED);
+
 	return sent;
 }
 
@@ -371,7 +382,7 @@ int ovs_vport_send(struct vport *vport, struct sk_buff *skb)
  * @err_type: one of enum vport_err_type types to indicate the error type
  *
  * If using the vport generic stats layer indicate that an error of the given
- * type has occured.
+ * type has occurred.
  */
 void ovs_vport_record_error(struct vport *vport, enum vport_err_type err_type)
 {
@@ -397,3 +408,18 @@ void ovs_vport_record_error(struct vport *vport, enum vport_err_type err_type)
397 408
398 spin_unlock(&vport->stats_lock); 409 spin_unlock(&vport->stats_lock);
399} 410}
411
412static void free_vport_rcu(struct rcu_head *rcu)
413{
414 struct vport *vport = container_of(rcu, struct vport, rcu);
415
416 ovs_vport_free(vport);
417}
418
419void ovs_vport_deferred_free(struct vport *vport)
420{
421 if (!vport)
422 return;
423
424 call_rcu(&vport->rcu, free_vport_rcu);
425}
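The new ovs_vport_deferred_free() hands the object to call_rcu(), so the memory outlives any concurrent RCU readers and is only reclaimed once the grace period ends. A minimal userspace sketch of that queue-a-callback pattern (every name here is a hypothetical stand-in, not the kernel API; the "grace period" is collapsed into an explicit barrier call):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* stand-in for struct rcu_head: intrusive callback node */
struct rcu_head_sim {
	void (*func)(struct rcu_head_sim *);
	struct rcu_head_sim *next;
};

struct vport_sim {
	int port_no;
	struct rcu_head_sim rcu;
};

static struct rcu_head_sim *pending;	/* callbacks awaiting the "grace period" */
static int freed_count;

/* analogous to call_rcu(): queue the callback, do NOT free yet */
static void call_rcu_sim(struct rcu_head_sim *head,
			 void (*func)(struct rcu_head_sim *))
{
	head->func = func;
	head->next = pending;
	pending = head;
}

/* analogous to the grace period elapsing: run all queued callbacks */
static void rcu_barrier_sim(void)
{
	while (pending) {
		struct rcu_head_sim *head = pending;

		pending = head->next;
		head->func(head);
	}
}

#define container_of_sim(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* recover the enclosing object from its embedded rcu head, then free it */
static void free_vport_cb(struct rcu_head_sim *rcu)
{
	struct vport_sim *vport = container_of_sim(rcu, struct vport_sim, rcu);

	free(vport);
	freed_count++;
}

static void vport_deferred_free_sim(struct vport_sim *vport)
{
	if (!vport)
		return;

	call_rcu_sim(&vport->rcu, free_vport_cb);
}
```

The point of the shape: the freeing function never runs synchronously with the caller, so a reader that still holds a pointer obtained before the "delete" keeps a valid object until the barrier.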
diff --git a/net/openvswitch/vport.h b/net/openvswitch/vport.h
index 68a377bc0841..376045c42f8b 100644
--- a/net/openvswitch/vport.h
+++ b/net/openvswitch/vport.h
@@ -34,6 +34,11 @@ struct vport_parms;
 
 /* The following definitions are for users of the vport subsytem: */
 
+/* The following definitions are for users of the vport subsytem: */
+struct vport_net {
+	struct vport __rcu *gre_vport;
+};
+
 int ovs_vport_init(void);
 void ovs_vport_exit(void);
 
@@ -123,9 +128,8 @@ struct vport_parms {
 * existing vport to a &struct sk_buff. May be %NULL for a vport that does not
 * have any configuration.
 * @get_name: Get the device's name.
-* @get_config: Get the device's configuration.
-* May be null if the device does not have an ifindex.
-* @send: Send a packet on the device. Returns the length of the packet sent.
+* @send: Send a packet on the device. Returns the length of the packet sent,
+* zero for dropped packets or negative for error.
 */
 struct vport_ops {
 	enum ovs_vport_type type;
@@ -139,7 +143,6 @@ struct vport_ops {
 
 	/* Called with rcu_read_lock or ovs_mutex. */
 	const char *(*get_name)(const struct vport *);
-	void (*get_config)(const struct vport *, void *);
 
 	int (*send)(struct vport *, struct sk_buff *);
 };
@@ -154,6 +157,7 @@ enum vport_err_type {
 struct vport *ovs_vport_alloc(int priv_size, const struct vport_ops *,
 			      const struct vport_parms *);
 void ovs_vport_free(struct vport *);
+void ovs_vport_deferred_free(struct vport *vport);
 
 #define VPORT_ALIGN 8
 
@@ -186,12 +190,21 @@ static inline struct vport *vport_from_priv(const void *priv)
 	return (struct vport *)(priv - ALIGN(sizeof(struct vport), VPORT_ALIGN));
 }
 
-void ovs_vport_receive(struct vport *, struct sk_buff *);
+void ovs_vport_receive(struct vport *, struct sk_buff *,
+		       struct ovs_key_ipv4_tunnel *);
 void ovs_vport_record_error(struct vport *, enum vport_err_type err_type);
 
 /* List of statically compiled vport implementations. Don't forget to also
 * add yours to the list at the top of vport.c. */
 extern const struct vport_ops ovs_netdev_vport_ops;
 extern const struct vport_ops ovs_internal_vport_ops;
+extern const struct vport_ops ovs_gre_vport_ops;
+
+static inline void ovs_skb_postpush_rcsum(struct sk_buff *skb,
+					  const void *start, unsigned int len)
+{
+	if (skb->ip_summed == CHECKSUM_COMPLETE)
+		skb->csum = csum_add(skb->csum, csum_partial(start, len, 0));
+}
 
 #endif /* vport.h */
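The new ovs_skb_postpush_rcsum() helper folds the checksum of freshly pushed header bytes into skb->csum via csum_add()/csum_partial(). That works because the 16-bit ones'-complement sum distributes over concatenation of even-length, 2-byte-aligned chunks. A toy model of the two primitives (these are simplified stand-ins, not the kernel's optimized implementations, and they ignore odd-length/alignment folding subtleties):

```c
#include <stdint.h>
#include <stddef.h>

/* fold a 32-bit accumulator into a 16-bit ones'-complement sum */
static uint32_t fold16(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return sum;
}

/* simplified csum_partial(): 16-bit big-endian ones'-complement sum */
static uint32_t csum_partial_sim(const uint8_t *buf, size_t len, uint32_t sum)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)buf[i] << 8) | buf[i + 1];
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;	/* pad odd byte */
	return fold16(sum);
}

/* simplified csum_add(): combine two partial sums */
static uint32_t csum_add_sim(uint32_t a, uint32_t b)
{
	return fold16(a + b);
}
```

So after pushing a header in front of a packet whose checksum is already known, adding the header's partial sum reproduces the checksum of the whole buffer, with no need to re-walk the payload.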
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 20a1bd0e6549..4b66c752eae5 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -3330,10 +3330,11 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
 }
 
 
-static int packet_notifier(struct notifier_block *this, unsigned long msg, void *data)
+static int packet_notifier(struct notifier_block *this,
+			   unsigned long msg, void *ptr)
 {
 	struct sock *sk;
-	struct net_device *dev = data;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct net *net = dev_net(dev);
 
 	rcu_read_lock();
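This hunk, and the phonet, rose, and act_mirred hunks that follow, make the same mechanical change: netdev event callbacks stop casting the opaque notifier pointer straight to struct net_device * and instead go through netdev_notifier_info_to_dev(), so the notifier payload can grow extra fields later without touching every callback. A simplified stand-in for the pattern (struct layouts here are illustrative, not the kernel's):

```c
#include <assert.h>

/* toy stand-in for struct net_device */
struct net_device_sim {
	int ifindex;
};

/* toy stand-in for struct netdev_notifier_info: the device pointer is
 * the first member, and new fields can be appended behind it */
struct netdev_notifier_info_sim {
	struct net_device_sim *dev;
	/* future fields go here without breaking existing callbacks */
};

/* analogous to netdev_notifier_info_to_dev() */
static struct net_device_sim *notifier_info_to_dev_sim(void *ptr)
{
	return ((struct netdev_notifier_info_sim *)ptr)->dev;
}

static int last_seen_ifindex;

/* a notifier callback written in the new style */
static int device_event_sim(unsigned long event, void *ptr)
{
	struct net_device_sim *dev = notifier_info_to_dev_sim(ptr);

	(void)event;
	last_seen_ifindex = dev->ifindex;
	return 0;
}
```

The accessor is the only place that knows the wrapper's layout, which is exactly why the conversion across all these subsystems is a one-line change per callback.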
diff --git a/net/phonet/pn_dev.c b/net/phonet/pn_dev.c
index 45a7df6575de..56a6146ac94b 100644
--- a/net/phonet/pn_dev.c
+++ b/net/phonet/pn_dev.c
@@ -292,9 +292,9 @@ static void phonet_route_autodel(struct net_device *dev)
 
 /* notify Phonet of device events */
 static int phonet_device_notify(struct notifier_block *me, unsigned long what,
-				void *arg)
+				void *ptr)
 {
-	struct net_device *dev = arg;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	switch (what) {
 	case NETDEV_REGISTER:
diff --git a/net/phonet/sysctl.c b/net/phonet/sysctl.c
index d6bbbbd0af18..c02a8c4bc11f 100644
--- a/net/phonet/sysctl.c
+++ b/net/phonet/sysctl.c
@@ -61,13 +61,13 @@ void phonet_get_local_port_range(int *min, int *max)
 	} while (read_seqretry(&local_port_range_lock, seq));
 }
 
-static int proc_local_port_range(ctl_table *table, int write,
+static int proc_local_port_range(struct ctl_table *table, int write,
 				void __user *buffer,
 				size_t *lenp, loff_t *ppos)
 {
 	int ret;
 	int range[2] = {local_port_range[0], local_port_range[1]};
-	ctl_table tmp = {
+	struct ctl_table tmp = {
 		.data = &range,
 		.maxlen = sizeof(range),
 		.mode = table->mode,
diff --git a/net/rds/ib_sysctl.c b/net/rds/ib_sysctl.c
index 7e643bafb4af..e4e41b3afce7 100644
--- a/net/rds/ib_sysctl.c
+++ b/net/rds/ib_sysctl.c
@@ -61,7 +61,7 @@ static unsigned long rds_ib_sysctl_max_unsig_wr_max = 64;
 */
 unsigned int rds_ib_sysctl_flow_control = 0;
 
-static ctl_table rds_ib_sysctl_table[] = {
+static struct ctl_table rds_ib_sysctl_table[] = {
 	{
 		.procname = "max_send_wr",
 		.data = &rds_ib_sysctl_max_send_wr,
diff --git a/net/rds/iw_sysctl.c b/net/rds/iw_sysctl.c
index 5d5ebd576f3f..89c91515ed0c 100644
--- a/net/rds/iw_sysctl.c
+++ b/net/rds/iw_sysctl.c
@@ -55,7 +55,7 @@ static unsigned long rds_iw_sysctl_max_unsig_bytes_max = ~0UL;
 
 unsigned int rds_iw_sysctl_flow_control = 1;
 
-static ctl_table rds_iw_sysctl_table[] = {
+static struct ctl_table rds_iw_sysctl_table[] = {
 	{
 		.procname = "max_send_wr",
 		.data = &rds_iw_sysctl_max_send_wr,
diff --git a/net/rds/sysctl.c b/net/rds/sysctl.c
index 907214b4c4d0..b5cb2aa08f33 100644
--- a/net/rds/sysctl.c
+++ b/net/rds/sysctl.c
@@ -49,7 +49,7 @@ unsigned int rds_sysctl_max_unacked_bytes = (16 << 20);
 
 unsigned int rds_sysctl_ping_enable = 1;
 
-static ctl_table rds_sysctl_rds_table[] = {
+static struct ctl_table rds_sysctl_rds_table[] = {
 	{
 		.procname = "reconnect_min_delay_ms",
 		.data = &rds_sysctl_reconnect_min_jiffies,
diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
index 9c8347451597..e98fcfbe6007 100644
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -202,10 +202,10 @@ static void rose_kill_by_device(struct net_device *dev)
 /*
 *	Handle device status changes.
 */
-static int rose_device_event(struct notifier_block *this, unsigned long event,
-			     void *ptr)
+static int rose_device_event(struct notifier_block *this,
+			     unsigned long event, void *ptr)
 {
-	struct net_device *dev = (struct net_device *)ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
diff --git a/net/rose/sysctl_net_rose.c b/net/rose/sysctl_net_rose.c
index 94ca9c2ccd69..89a9278795a9 100644
--- a/net/rose/sysctl_net_rose.c
+++ b/net/rose/sysctl_net_rose.c
@@ -24,7 +24,7 @@ static int min_window[] = {1}, max_window[] = {7};
 
 static struct ctl_table_header *rose_table_header;
 
-static ctl_table rose_table[] = {
+static struct ctl_table rose_table[] = {
 	{
 		.procname = "restart_request_timeout",
 		.data = &sysctl_rose_restart_request_timeout,
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index 5d676edc22a6..977c10e0631b 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -243,7 +243,7 @@ nla_put_failure:
 static int mirred_device_event(struct notifier_block *unused,
 			       unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct tcf_mirred *m;
 
 	if (event == NETDEV_UNREGISTER)
diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
index 1bc210ffcba2..71a568862557 100644
--- a/net/sched/sch_cbq.c
+++ b/net/sched/sch_cbq.c
@@ -130,7 +130,7 @@ struct cbq_class {
 	psched_time_t		penalized;
 	struct gnet_stats_basic_packed bstats;
 	struct gnet_stats_queue qstats;
-	struct gnet_stats_rate_est rate_est;
+	struct gnet_stats_rate_est64 rate_est;
 	struct tc_cbq_xstats	xstats;
 
 	struct tcf_proto	*filter_list;
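The gnet_stats_rate_est → gnet_stats_rate_est64 switch here (and in the drr, hfsc, and htb hunks) widens the rate estimator's per-second counters; a 32-bit bytes-per-second value saturates below the link speeds of modern NICs. A quick back-of-the-envelope check in plain C arithmetic (no kernel API involved):

```c
#include <stdint.h>

/* bytes per second carried by a given bit rate */
static uint64_t bytes_per_sec(uint64_t bits_per_sec)
{
	return bits_per_sec / 8;
}
```

At 40 Gbit/s the byte rate is 5e9/s, which already does not fit in a u32, while 10 Gbit/s leaves only about a factor of 3 of headroom.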
diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
index 759b308d1a8d..8302717ea303 100644
--- a/net/sched/sch_drr.c
+++ b/net/sched/sch_drr.c
@@ -25,7 +25,7 @@ struct drr_class {
 
 	struct gnet_stats_basic_packed		bstats;
 	struct gnet_stats_queue			qstats;
-	struct gnet_stats_rate_est		rate_est;
+	struct gnet_stats_rate_est64		rate_est;
 	struct list_head			alist;
 	struct Qdisc				*qdisc;
 
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 20224086cc28..4626cef4b76e 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -901,37 +901,33 @@ void dev_shutdown(struct net_device *dev)
 void psched_ratecfg_precompute(struct psched_ratecfg *r,
 			       const struct tc_ratespec *conf)
 {
-	u64 factor;
-	u64 mult;
-	int shift;
-
 	memset(r, 0, sizeof(*r));
 	r->overhead = conf->overhead;
-	r->rate_bps = (u64)conf->rate << 3;
+	r->rate_bytes_ps = conf->rate;
 	r->mult = 1;
 	/*
-	 * Calibrate mult, shift so that token counting is accurate
-	 * for smallest packet size (64 bytes). Token (time in ns) is
-	 * computed as (bytes * 8) * NSEC_PER_SEC / rate_bps. It will
-	 * work as long as the smallest packet transfer time can be
-	 * accurately represented in nanosec.
+	 * The deal here is to replace a divide by a reciprocal one
+	 * in fast path (a reciprocal divide is a multiply and a shift)
+	 *
+	 * Normal formula would be :
+	 *  time_in_ns = (NSEC_PER_SEC * len) / rate_bps
+	 *
+	 * We compute mult/shift to use instead :
+	 *  time_in_ns = (len * mult) >> shift;
+	 *
+	 * We try to get the highest possible mult value for accuracy,
+	 * but have to make sure no overflows will ever happen.
 	 */
-	if (r->rate_bps > 0) {
-		/*
-		 * Higher shift gives better accuracy. Find the largest
-		 * shift such that mult fits in 32 bits.
-		 */
-		for (shift = 0; shift < 16; shift++) {
-			r->shift = shift;
-			factor = 8LLU * NSEC_PER_SEC * (1 << r->shift);
-			mult = div64_u64(factor, r->rate_bps);
-			if (mult > UINT_MAX)
+	if (r->rate_bytes_ps > 0) {
+		u64 factor = NSEC_PER_SEC;
+
+		for (;;) {
+			r->mult = div64_u64(factor, r->rate_bytes_ps);
+			if (r->mult & (1U << 31) || factor & (1ULL << 63))
 				break;
+			factor <<= 1;
+			r->shift++;
 		}
-
-		r->shift = shift - 1;
-		factor = 8LLU * NSEC_PER_SEC * (1 << r->shift);
-		r->mult = div64_u64(factor, r->rate_bps);
 	}
 }
 EXPORT_SYMBOL(psched_ratecfg_precompute);
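The new comment in psched_ratecfg_precompute() explains the trick: pick mult and shift once, so the per-packet transmit-time computation becomes a multiply and a shift instead of a 64-bit divide. A standalone sketch of the same loop, under simplifying assumptions (plain `/` stands in for div64_u64(), the struct is abbreviated, and field names are stand-ins):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

struct ratecfg_sim {
	uint64_t rate_bytes_ps;
	uint32_t mult;
	int shift;
};

/* grow factor (= NSEC_PER_SEC << shift) until mult = factor/rate is as
 * large as possible without mult reaching bit 31 or factor reaching bit 63 */
static void ratecfg_precompute_sim(struct ratecfg_sim *r,
				   uint64_t rate_bytes_ps)
{
	uint64_t factor = NSEC_PER_SEC;

	r->rate_bytes_ps = rate_bytes_ps;
	r->mult = 1;
	r->shift = 0;
	if (!rate_bytes_ps)
		return;
	for (;;) {
		r->mult = (uint32_t)(factor / rate_bytes_ps);
		if ((r->mult & (1U << 31)) || (factor & (1ULL << 63)))
			break;
		factor <<= 1;
		r->shift++;
	}
}

/* fast path: transmit time in ns for len bytes, no divide */
static uint64_t len_to_ns_sim(const struct ratecfg_sim *r, unsigned int len)
{
	return ((uint64_t)len * r->mult) >> r->shift;
}
```

Since factor is exactly NSEC_PER_SEC << shift, (len * mult) >> shift approximates len * NSEC_PER_SEC / rate with mult pushed as close to 2^31 as possible for accuracy.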
diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
index 9facea03faeb..c4075610502c 100644
--- a/net/sched/sch_hfsc.c
+++ b/net/sched/sch_hfsc.c
@@ -114,7 +114,7 @@ struct hfsc_class {
 
 	struct gnet_stats_basic_packed bstats;
 	struct gnet_stats_queue qstats;
-	struct gnet_stats_rate_est rate_est;
+	struct gnet_stats_rate_est64 rate_est;
 	unsigned int	level;		/* class level in hierarchy */
 	struct tcf_proto *filter_list;	/* filter list */
 	unsigned int	filter_cnt;	/* filter count */
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index adaedd79389c..c2124ea29f45 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -65,6 +65,10 @@ static int htb_hysteresis __read_mostly = 0; /* whether to use mode hysteresis f
 module_param    (htb_hysteresis, int, 0640);
 MODULE_PARM_DESC(htb_hysteresis, "Hysteresis mode, less CPU load, less accurate");
 
+static int htb_rate_est = 0; /* htb classes have a default rate estimator */
+module_param(htb_rate_est, int, 0640);
+MODULE_PARM_DESC(htb_rate_est, "setup a default rate estimator (4sec 16sec) for htb classes");
+
 /* used internaly to keep status of single class */
 enum htb_cmode {
 	HTB_CANT_SEND,		/* class can't send and can't borrow */
@@ -72,95 +76,105 @@ enum htb_cmode {
 	HTB_CAN_SEND		/* class can send */
 };
 
-/* interior & leaf nodes; props specific to leaves are marked L: */
+struct htb_prio {
+	union {
+		struct rb_root	row;
+		struct rb_root	feed;
+	};
+	struct rb_node	*ptr;
+	/* When class changes from state 1->2 and disconnects from
+	 * parent's feed then we lost ptr value and start from the
+	 * first child again. Here we store classid of the
+	 * last valid ptr (used when ptr is NULL).
+	 */
+	u32		last_ptr_id;
+};
+
+/* interior & leaf nodes; props specific to leaves are marked L:
+ * To reduce false sharing, place mostly read fields at beginning,
+ * and mostly written ones at the end.
+ */
 struct htb_class {
 	struct Qdisc_class_common common;
-	/* general class parameters */
-	struct gnet_stats_basic_packed bstats;
-	struct gnet_stats_queue qstats;
-	struct gnet_stats_rate_est rate_est;
-	struct tc_htb_xstats xstats;	/* our special stats */
-	int refcnt;		/* usage count of this class */
+	struct psched_ratecfg	rate;
+	struct psched_ratecfg	ceil;
+	s64			buffer, cbuffer;/* token bucket depth/rate */
+	s64			mbuffer;	/* max wait time */
+	int			prio;		/* these two are used only by leaves... */
+	int			quantum;	/* but stored for parent-to-leaf return */
 
-	/* topology */
-	int level;		/* our level (see above) */
-	unsigned int children;
-	struct htb_class *parent;	/* parent class */
+	struct tcf_proto	*filter_list;	/* class attached filters */
+	int			filter_cnt;
+	int			refcnt;		/* usage count of this class */
 
-	int prio;		/* these two are used only by leaves... */
-	int quantum;		/* but stored for parent-to-leaf return */
+	int			level;		/* our level (see above) */
+	unsigned int		children;
+	struct htb_class	*parent;	/* parent class */
+
+	struct gnet_stats_rate_est64 rate_est;
+
+	/*
+	 * Written often fields
+	 */
+	struct gnet_stats_basic_packed bstats;
+	struct gnet_stats_queue	qstats;
+	struct tc_htb_xstats	xstats;	/* our special stats */
+
+	/* token bucket parameters */
+	s64			tokens, ctokens;/* current number of tokens */
+	s64			t_c;		/* checkpoint time */
 
 	union {
 		struct htb_class_leaf {
-			struct Qdisc *q;
-			int deficit[TC_HTB_MAXDEPTH];
 			struct list_head drop_list;
+			int		deficit[TC_HTB_MAXDEPTH];
+			struct Qdisc	*q;
 		} leaf;
 		struct htb_class_inner {
-			struct rb_root feed[TC_HTB_NUMPRIO];	/* feed trees */
-			struct rb_node *ptr[TC_HTB_NUMPRIO];	/* current class ptr */
-			/* When class changes from state 1->2 and disconnects from
-			 * parent's feed then we lost ptr value and start from the
-			 * first child again. Here we store classid of the
-			 * last valid ptr (used when ptr is NULL).
-			 */
-			u32 last_ptr_id[TC_HTB_NUMPRIO];
+			struct htb_prio clprio[TC_HTB_NUMPRIO];
 		} inner;
 	} un;
-	struct rb_node node[TC_HTB_NUMPRIO];	/* node for self or feed tree */
-	struct rb_node pq_node;	/* node for event queue */
-	s64 pq_key;
+	s64			pq_key;
 
 	int prio_activity;	/* for which prios are we active */
 	enum htb_cmode cmode;	/* current mode of the class */
-
-	/* class attached filters */
-	struct tcf_proto *filter_list;
-	int filter_cnt;
+	struct rb_node		pq_node;	/* node for event queue */
+	struct rb_node		node[TC_HTB_NUMPRIO];/* node for self or feed tree */
+};
 
-	/* token bucket parameters */
-	struct psched_ratecfg rate;
-	struct psched_ratecfg ceil;
-	s64 buffer, cbuffer;	/* token bucket depth/rate */
-	s64 mbuffer;		/* max wait time */
-	s64 tokens, ctokens;	/* current number of tokens */
-	s64 t_c;		/* checkpoint time */
+struct htb_level {
+	struct rb_root	wait_pq;
+	struct htb_prio hprio[TC_HTB_NUMPRIO];
 };
 
 struct htb_sched {
 	struct Qdisc_class_hash clhash;
-	struct list_head drops[TC_HTB_NUMPRIO];/* active leaves (for drops) */
-
-	/* self list - roots of self generating tree */
-	struct rb_root row[TC_HTB_MAXDEPTH][TC_HTB_NUMPRIO];
-	int row_mask[TC_HTB_MAXDEPTH];
-	struct rb_node *ptr[TC_HTB_MAXDEPTH][TC_HTB_NUMPRIO];
-	u32 last_ptr_id[TC_HTB_MAXDEPTH][TC_HTB_NUMPRIO];
+	int			defcls;		/* class where unclassified flows go to */
+	int			rate2quantum;	/* quant = rate / rate2quantum */
 
-	/* self wait list - roots of wait PQs per row */
-	struct rb_root wait_pq[TC_HTB_MAXDEPTH];
+	/* filters for qdisc itself */
+	struct tcf_proto	*filter_list;
 
-	/* time of nearest event per level (row) */
-	s64 near_ev_cache[TC_HTB_MAXDEPTH];
+#define HTB_WARN_TOOMANYEVENTS	0x1
+	unsigned int		warned;	/* only one warning */
+	int			direct_qlen;
+	struct work_struct	work;
 
-	int defcls;		/* class where unclassified flows go to */
+	/* non shaped skbs; let them go directly thru */
+	struct sk_buff_head	direct_queue;
+	long			direct_pkts;
 
-	/* filters for qdisc itself */
-	struct tcf_proto *filter_list;
+	struct qdisc_watchdog	watchdog;
 
-	int rate2quantum;	/* quant = rate / rate2quantum */
-	s64 now;	/* cached dequeue time */
-	struct qdisc_watchdog watchdog;
+	s64			now;	/* cached dequeue time */
+	struct list_head	drops[TC_HTB_NUMPRIO];/* active leaves (for drops) */
 
-	/* non shaped skbs; let them go directly thru */
-	struct sk_buff_head direct_queue;
-	int direct_qlen;	/* max qlen of above */
+	/* time of nearest event per level (row) */
+	s64			near_ev_cache[TC_HTB_MAXDEPTH];
 
-	long direct_pkts;
+	int			row_mask[TC_HTB_MAXDEPTH];
 
-#define HTB_WARN_TOOMANYEVENTS	0x1
-	unsigned int warned;	/* only one warning */
-	struct work_struct work;
+	struct htb_level	hlevel[TC_HTB_MAXDEPTH];
 };
 
 /* find class in global hash table using given handle */
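The new struct comment ("To reduce false sharing, place mostly read fields at beginning, and mostly written ones at the end") and the htb_prio/htb_level regrouping share one goal: everything the scheduler touches for a given (level, prio) pair sits together in one small struct instead of being strided across three parallel 2-D arrays. A layout-only sketch of the before/after (sizes and types are simplified stand-ins, with 8 stubbed in for both TC_HTB_MAXDEPTH and TC_HTB_NUMPRIO):

```c
#include <stddef.h>
#include <stdint.h>

#define MAXDEPTH 8
#define NUMPRIO  8

struct rb_root_sim { void *rb_node; };

/* old layout: the three per-(level,prio) fields live in separate arrays */
struct htb_sched_old_sim {
	struct rb_root_sim row[MAXDEPTH][NUMPRIO];
	void *ptr[MAXDEPTH][NUMPRIO];
	uint32_t last_ptr_id[MAXDEPTH][NUMPRIO];
};

/* new layout: one cell per (level, prio), companion fields adjacent */
struct htb_prio_sim {
	struct rb_root_sim row;
	void *ptr;
	uint32_t last_ptr_id;
};

struct htb_level_sim {
	struct rb_root_sim wait_pq;
	struct htb_prio_sim hprio[NUMPRIO];
};

struct htb_sched_new_sim {
	struct htb_level_sim hlevel[MAXDEPTH];
};

/* distance in bytes between a row root and its companion ptr field */
static ptrdiff_t old_gap(struct htb_sched_old_sim *q, int l, int p)
{
	return (char *)&q->ptr[l][p] - (char *)&q->row[l][p];
}

static ptrdiff_t new_gap(struct htb_sched_new_sim *q, int l, int p)
{
	return (char *)&q->hlevel[l].hprio[p].ptr -
	       (char *)&q->hlevel[l].hprio[p].row;
}
```

In the old layout the row root and its ptr for the same cell are an entire 64-entry array apart (several cache lines), so one dequeue pulls in three distant lines; in the new layout they are adjacent.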
@@ -276,7 +290,7 @@ static void htb_add_to_id_tree(struct rb_root *root,
 static void htb_add_to_wait_tree(struct htb_sched *q,
 				 struct htb_class *cl, s64 delay)
 {
-	struct rb_node **p = &q->wait_pq[cl->level].rb_node, *parent = NULL;
+	struct rb_node **p = &q->hlevel[cl->level].wait_pq.rb_node, *parent = NULL;
 
 	cl->pq_key = q->now + delay;
 	if (cl->pq_key == q->now)
@@ -296,7 +310,7 @@ static void htb_add_to_wait_tree(struct htb_sched *q,
 		p = &parent->rb_left;
 	}
 	rb_link_node(&cl->pq_node, parent, p);
-	rb_insert_color(&cl->pq_node, &q->wait_pq[cl->level]);
+	rb_insert_color(&cl->pq_node, &q->hlevel[cl->level].wait_pq);
 }
 
 /**
@@ -323,7 +337,7 @@ static inline void htb_add_class_to_row(struct htb_sched *q,
 	while (mask) {
 		int prio = ffz(~mask);
 		mask &= ~(1 << prio);
-		htb_add_to_id_tree(q->row[cl->level] + prio, cl, prio);
+		htb_add_to_id_tree(&q->hlevel[cl->level].hprio[prio].row, cl, prio);
 	}
 }
 
@@ -349,16 +363,18 @@ static inline void htb_remove_class_from_row(struct htb_sched *q,
 						 struct htb_class *cl, int mask)
 {
 	int m = 0;
+	struct htb_level *hlevel = &q->hlevel[cl->level];
 
 	while (mask) {
 		int prio = ffz(~mask);
+		struct htb_prio *hprio = &hlevel->hprio[prio];
 
 		mask &= ~(1 << prio);
-		if (q->ptr[cl->level][prio] == cl->node + prio)
-			htb_next_rb_node(q->ptr[cl->level] + prio);
+		if (hprio->ptr == cl->node + prio)
+			htb_next_rb_node(&hprio->ptr);
 
-		htb_safe_rb_erase(cl->node + prio, q->row[cl->level] + prio);
-		if (!q->row[cl->level][prio].rb_node)
+		htb_safe_rb_erase(cl->node + prio, &hprio->row);
+		if (!hprio->row.rb_node)
 			m |= 1 << prio;
 	}
 	q->row_mask[cl->level] &= ~m;
@@ -382,13 +398,13 @@ static void htb_activate_prios(struct htb_sched *q, struct htb_class *cl)
 			int prio = ffz(~m);
 			m &= ~(1 << prio);
 
-			if (p->un.inner.feed[prio].rb_node)
+			if (p->un.inner.clprio[prio].feed.rb_node)
 				/* parent already has its feed in use so that
 				 * reset bit in mask as parent is already ok
 				 */
 				mask &= ~(1 << prio);
 
-			htb_add_to_id_tree(p->un.inner.feed + prio, cl, prio);
+			htb_add_to_id_tree(&p->un.inner.clprio[prio].feed, cl, prio);
 		}
 		p->prio_activity |= mask;
 		cl = p;
@@ -418,18 +434,19 @@ static void htb_deactivate_prios(struct htb_sched *q, struct htb_class *cl)
 			int prio = ffz(~m);
 			m &= ~(1 << prio);
 
-			if (p->un.inner.ptr[prio] == cl->node + prio) {
+			if (p->un.inner.clprio[prio].ptr == cl->node + prio) {
 				/* we are removing child which is pointed to from
 				 * parent feed - forget the pointer but remember
 				 * classid
 				 */
-				p->un.inner.last_ptr_id[prio] = cl->common.classid;
-				p->un.inner.ptr[prio] = NULL;
+				p->un.inner.clprio[prio].last_ptr_id = cl->common.classid;
+				p->un.inner.clprio[prio].ptr = NULL;
 			}
 
-			htb_safe_rb_erase(cl->node + prio, p->un.inner.feed + prio);
+			htb_safe_rb_erase(cl->node + prio,
+					  &p->un.inner.clprio[prio].feed);
 
-			if (!p->un.inner.feed[prio].rb_node)
+			if (!p->un.inner.clprio[prio].feed.rb_node)
 				mask |= 1 << prio;
 		}
 
@@ -644,7 +661,7 @@ static void htb_charge_class(struct htb_sched *q, struct htb_class *cl,
 		htb_change_class_mode(q, cl, &diff);
 		if (old_mode != cl->cmode) {
 			if (old_mode != HTB_CAN_SEND)
-				htb_safe_rb_erase(&cl->pq_node, q->wait_pq + cl->level);
+				htb_safe_rb_erase(&cl->pq_node, &q->hlevel[cl->level].wait_pq);
 			if (cl->cmode != HTB_CAN_SEND)
 				htb_add_to_wait_tree(q, cl, diff);
 		}
@@ -664,7 +681,7 @@ static void htb_charge_class(struct htb_sched *q, struct htb_class *cl,
 * next pending event (0 for no event in pq, q->now for too many events).
 * Note: Applied are events whose have cl->pq_key <= q->now.
 */
-static s64 htb_do_events(struct htb_sched *q, int level,
+static s64 htb_do_events(struct htb_sched *q, const int level,
 			 unsigned long start)
 {
 	/* don't run for longer than 2 jiffies; 2 is used instead of
@@ -672,10 +689,12 @@ static s64 htb_do_events(struct htb_sched *q, int level,
 	 * too soon
 	 */
 	unsigned long stop_at = start + 2;
+	struct rb_root *wait_pq = &q->hlevel[level].wait_pq;
+
 	while (time_before(jiffies, stop_at)) {
 		struct htb_class *cl;
 		s64 diff;
-		struct rb_node *p = rb_first(&q->wait_pq[level]);
+		struct rb_node *p = rb_first(wait_pq);
 
 		if (!p)
 			return 0;
@@ -684,7 +703,7 @@ static s64 htb_do_events(struct htb_sched *q, int level,
 		if (cl->pq_key > q->now)
 			return cl->pq_key;
 
-		htb_safe_rb_erase(p, q->wait_pq + level);
+		htb_safe_rb_erase(p, wait_pq);
 		diff = min_t(s64, q->now - cl->t_c, cl->mbuffer);
 		htb_change_class_mode(q, cl, &diff);
 		if (cl->cmode != HTB_CAN_SEND)
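htb_do_events() bounds its work with time_before(jiffies, start + 2), a comparison that stays correct even when the jiffies counter wraps around, because it is evaluated on the signed difference of the two unsigned values. A stand-in for the macro (this is the classic wrap-safe trick, not the kernel header itself):

```c
#include <limits.h>

/* simplified time_before(): true if a is earlier than b,
 * correct across an unsigned wraparound as long as the two
 * timestamps are less than half the counter range apart */
static int time_before_sim(unsigned long a, unsigned long b)
{
	return (long)(a - b) < 0;
}
```

A plain `a < b` would wrongly report that a timestamp taken just before the counter wraps is "after" one taken just after it; the signed-difference form gets both the ordinary and the wrapped case right.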
@@ -728,8 +747,7 @@ static struct rb_node *htb_id_find_next_upper(int prio, struct rb_node *n,
 *
 * Find leaf where current feed pointers points to.
 */
-static struct htb_class *htb_lookup_leaf(struct rb_root *tree, int prio,
-					 struct rb_node **pptr, u32 * pid)
+static struct htb_class *htb_lookup_leaf(struct htb_prio *hprio, const int prio)
 {
 	int i;
 	struct {
@@ -738,10 +756,10 @@ static struct htb_class *htb_lookup_leaf(struct rb_root *tree, int prio,
 		u32 *pid;
 	} stk[TC_HTB_MAXDEPTH], *sp = stk;
 
-	BUG_ON(!tree->rb_node);
-	sp->root = tree->rb_node;
-	sp->pptr = pptr;
-	sp->pid = pid;
+	BUG_ON(!hprio->row.rb_node);
+	sp->root = hprio->row.rb_node;
+	sp->pptr = &hprio->ptr;
+	sp->pid = &hprio->last_ptr_id;
 
 	for (i = 0; i < 65535; i++) {
 		if (!*sp->pptr && *sp->pid) {
@@ -768,12 +786,15 @@ static struct htb_class *htb_lookup_leaf(struct rb_root *tree, int prio,
 			}
 		} else {
 			struct htb_class *cl;
+			struct htb_prio *clp;
+
 			cl = rb_entry(*sp->pptr, struct htb_class, node[prio]);
 			if (!cl->level)
 				return cl;
-			(++sp)->root = cl->un.inner.feed[prio].rb_node;
-			sp->pptr = cl->un.inner.ptr + prio;
-			sp->pid = cl->un.inner.last_ptr_id + prio;
+			clp = &cl->un.inner.clprio[prio];
+			(++sp)->root = clp->feed.rb_node;
+			sp->pptr = &clp->ptr;
+			sp->pid = &clp->last_ptr_id;
 		}
 	}
 	WARN_ON(1);
@@ -783,15 +804,16 @@ static struct htb_class *htb_lookup_leaf(struct rb_root *tree, int prio,
 /* dequeues packet at given priority and level; call only if
 * you are sure that there is active class at prio/level
 */
-static struct sk_buff *htb_dequeue_tree(struct htb_sched *q, int prio,
-					int level)
+static struct sk_buff *htb_dequeue_tree(struct htb_sched *q, const int prio,
+					const int level)
 {
 	struct sk_buff *skb = NULL;
 	struct htb_class *cl, *start;
+	struct htb_level *hlevel = &q->hlevel[level];
+	struct htb_prio *hprio = &hlevel->hprio[prio];
+
 	/* look initial class up in the row */
-	start = cl = htb_lookup_leaf(q->row[level] + prio, prio,
-				     q->ptr[level] + prio,
-				     q->last_ptr_id[level] + prio);
+	start = cl = htb_lookup_leaf(hprio, prio);
 
 	do {
 next:
@@ -811,9 +833,7 @@ next:
 		if ((q->row_mask[level] & (1 << prio)) == 0)
 			return NULL;
 
-		next = htb_lookup_leaf(q->row[level] + prio,
-				       prio, q->ptr[level] + prio,
-				       q->last_ptr_id[level] + prio);
+		next = htb_lookup_leaf(hprio, prio);
 
 		if (cl == start)	/* fix start if we just deleted it */
 			start = next;
@@ -826,11 +846,9 @@ next:
 			break;
 
 		qdisc_warn_nonwc("htb", cl->un.leaf.q);
-		htb_next_rb_node((level ? cl->parent->un.inner.ptr : q->
-				  ptr[0]) + prio);
-		cl = htb_lookup_leaf(q->row[level] + prio, prio,
-				     q->ptr[level] + prio,
-				     q->last_ptr_id[level] + prio);
+		htb_next_rb_node(level ? &cl->parent->un.inner.clprio[prio].ptr:
+					 &q->hlevel[0].hprio[prio].ptr);
+		cl = htb_lookup_leaf(hprio, prio);
 
 	} while (cl != start);
 
@@ -839,8 +857,8 @@ next:
 		cl->un.leaf.deficit[level] -= qdisc_pkt_len(skb);
 		if (cl->un.leaf.deficit[level] < 0) {
 			cl->un.leaf.deficit[level] += cl->quantum;
-			htb_next_rb_node((level ? cl->parent->un.inner.ptr : q->
-					  ptr[0]) + prio);
+			htb_next_rb_node(level ? &cl->parent->un.inner.clprio[prio].ptr :
+						 &q->hlevel[0].hprio[prio].ptr);
 		}
 		/* this used to be after charge_class but this constelation
 		 * gives us slightly better performance
@@ -880,15 +898,14 @@ ok:
 	for (level = 0; level < TC_HTB_MAXDEPTH; level++) {
 		/* common case optimization - skip event handler quickly */
 		int m;
-		s64 event;
+		s64 event = q->near_ev_cache[level];
 
-		if (q->now >= q->near_ev_cache[level]) {
+		if (q->now >= event) {
 			event = htb_do_events(q, level, start_at);
 			if (!event)
 				event = q->now + NSEC_PER_SEC;
 			q->near_ev_cache[level] = event;
-		} else
-			event = q->near_ev_cache[level];
+		}
 
 		if (next_event > event)
 			next_event = event;
@@ -968,10 +985,8 @@ static void htb_reset(struct Qdisc *sch)
 	qdisc_watchdog_cancel(&q->watchdog);
 	__skb_queue_purge(&q->direct_queue);
 	sch->q.qlen = 0;
-	memset(q->row, 0, sizeof(q->row));
+	memset(q->hlevel, 0, sizeof(q->hlevel));
 	memset(q->row_mask, 0, sizeof(q->row_mask));
973 memset(q->wait_pq, 0, sizeof(q->wait_pq));
974 memset(q->ptr, 0, sizeof(q->ptr));
975 for (i = 0; i < TC_HTB_NUMPRIO; i++) 990 for (i = 0; i < TC_HTB_NUMPRIO; i++)
976 INIT_LIST_HEAD(q->drops + i); 991 INIT_LIST_HEAD(q->drops + i);
977} 992}
@@ -1192,7 +1207,8 @@ static void htb_parent_to_leaf(struct htb_sched *q, struct htb_class *cl,
1192 WARN_ON(cl->level || !cl->un.leaf.q || cl->prio_activity); 1207 WARN_ON(cl->level || !cl->un.leaf.q || cl->prio_activity);
1193 1208
1194 if (parent->cmode != HTB_CAN_SEND) 1209 if (parent->cmode != HTB_CAN_SEND)
1195 htb_safe_rb_erase(&parent->pq_node, q->wait_pq + parent->level); 1210 htb_safe_rb_erase(&parent->pq_node,
1211 &q->hlevel[parent->level].wait_pq);
1196 1212
1197 parent->level = 0; 1213 parent->level = 0;
1198 memset(&parent->un.inner, 0, sizeof(parent->un.inner)); 1214 memset(&parent->un.inner, 0, sizeof(parent->un.inner));
@@ -1281,7 +1297,8 @@ static int htb_delete(struct Qdisc *sch, unsigned long arg)
1281 htb_deactivate(q, cl); 1297 htb_deactivate(q, cl);
1282 1298
1283 if (cl->cmode != HTB_CAN_SEND) 1299 if (cl->cmode != HTB_CAN_SEND)
1284 htb_safe_rb_erase(&cl->pq_node, q->wait_pq + cl->level); 1300 htb_safe_rb_erase(&cl->pq_node,
1301 &q->hlevel[cl->level].wait_pq);
1285 1302
1286 if (last_child) 1303 if (last_child)
1287 htb_parent_to_leaf(q, cl, new_q); 1304 htb_parent_to_leaf(q, cl, new_q);
@@ -1366,12 +1383,14 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
1366 if (!cl) 1383 if (!cl)
1367 goto failure; 1384 goto failure;
1368 1385
1369 err = gen_new_estimator(&cl->bstats, &cl->rate_est, 1386 if (htb_rate_est || tca[TCA_RATE]) {
1370 qdisc_root_sleeping_lock(sch), 1387 err = gen_new_estimator(&cl->bstats, &cl->rate_est,
1371 tca[TCA_RATE] ? : &est.nla); 1388 qdisc_root_sleeping_lock(sch),
1372 if (err) { 1389 tca[TCA_RATE] ? : &est.nla);
1373 kfree(cl); 1390 if (err) {
1374 goto failure; 1391 kfree(cl);
1392 goto failure;
1393 }
1375 } 1394 }
1376 1395
1377 cl->refcnt = 1; 1396 cl->refcnt = 1;
@@ -1401,7 +1420,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
1401 1420
1402 /* remove from evt list because of level change */ 1421 /* remove from evt list because of level change */
1403 if (parent->cmode != HTB_CAN_SEND) { 1422 if (parent->cmode != HTB_CAN_SEND) {
1404 htb_safe_rb_erase(&parent->pq_node, q->wait_pq); 1423 htb_safe_rb_erase(&parent->pq_node, &q->hlevel[0].wait_pq);
1405 parent->cmode = HTB_CAN_SEND; 1424 parent->cmode = HTB_CAN_SEND;
1406 } 1425 }
1407 parent->level = (parent->parent ? parent->parent->level 1426 parent->level = (parent->parent ? parent->parent->level
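The deficit lines in the hunks above (`cl->un.leaf.deficit[level] -= qdisc_pkt_len(skb)` followed by a `cl->quantum` refill and an `htb_next_rb_node()` advance) are HTB's deficit round robin between leaf classes. A minimal userspace sketch of just that accounting, with made-up names (`struct leaf`, `charge_packet`) rather than the kernel's:

```c
#include <assert.h>

/* Illustrative miniature of HTB's per-leaf deficit round robin;
 * names are invented for this sketch, not the kernel's. */
struct leaf {
	int deficit;	/* bytes this leaf may still send in this round */
	int quantum;	/* bytes granted when the deficit is exhausted */
};

/* Charge one dequeued packet; returns 1 when the round-robin pointer
 * should advance to the next leaf (deficit went negative). */
static int charge_packet(struct leaf *l, int pkt_len)
{
	l->deficit -= pkt_len;
	if (l->deficit < 0) {
		l->deficit += l->quantum;	/* refill for the next round */
		return 1;			/* htb_next_rb_node() analogue */
	}
	return 0;
}
```

A leaf keeps sending until its deficit underflows, so over many rounds each leaf's share is proportional to its quantum.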
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 3d2acc7a9c80..82f6016d89ab 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -23,6 +23,7 @@
 #include <linux/vmalloc.h>
 #include <linux/rtnetlink.h>
 #include <linux/reciprocal_div.h>
+#include <linux/rbtree.h>
 
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
@@ -68,7 +69,8 @@
 */
 
 struct netem_sched_data {
-	/* internal t(ime)fifo qdisc uses sch->q and sch->limit */
+	/* internal t(ime)fifo qdisc uses t_root and sch->limit */
+	struct rb_root t_root;
 
 	/* optional qdisc for classful handling (NULL at netem init) */
 	struct Qdisc	*qdisc;
@@ -128,10 +130,35 @@ struct netem_sched_data {
 */
 struct netem_skb_cb {
 	psched_time_t	time_to_send;
+	ktime_t		tstamp_save;
 };
 
+/* Because space in skb->cb[] is tight, netem overloads skb->next/prev/tstamp
+ * to hold a rb_node structure.
+ *
+ * If struct sk_buff layout is changed, the following checks will complain.
+ */
+static struct rb_node *netem_rb_node(struct sk_buff *skb)
+{
+	BUILD_BUG_ON(offsetof(struct sk_buff, next) != 0);
+	BUILD_BUG_ON(offsetof(struct sk_buff, prev) !=
+		     offsetof(struct sk_buff, next) + sizeof(skb->next));
+	BUILD_BUG_ON(offsetof(struct sk_buff, tstamp) !=
+		     offsetof(struct sk_buff, prev) + sizeof(skb->prev));
+	BUILD_BUG_ON(sizeof(struct rb_node) > sizeof(skb->next) +
+					      sizeof(skb->prev) +
+					      sizeof(skb->tstamp));
+	return (struct rb_node *)&skb->next;
+}
+
+static struct sk_buff *netem_rb_to_skb(struct rb_node *rb)
+{
+	return (struct sk_buff *)rb;
+}
+
 static inline struct netem_skb_cb *netem_skb_cb(struct sk_buff *skb)
 {
+	/* we assume we can use skb next/prev/tstamp as storage for rb_node */
 	qdisc_cb_private_validate(skb, sizeof(struct netem_skb_cb));
 	return (struct netem_skb_cb *)qdisc_skb_cb(skb)->data;
 }
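netem_rb_node() above works only because three consecutive sk_buff fields are large enough to hold an rb_node, which the BUILD_BUG_ON checks enforce at compile time. The same trick can be sketched in userspace with C11 `_Static_assert` standing in for `BUILD_BUG_ON`; `struct fake_skb` and `struct node` are invented stand-ins, not kernel types:

```c
#include <assert.h>
#include <stddef.h>

/* Invented analogue of the rb_node that will be overlaid. */
struct node { void *left, *right, *parent; };

/* Invented stand-in for the first three sk_buff fields. */
struct fake_skb {
	void *next;
	void *prev;
	long long tstamp;
};

/* Compile-time layout checks, mirroring the BUILD_BUG_ONs in the patch:
 * the overlay is only legal if the fields are adjacent and big enough. */
_Static_assert(offsetof(struct fake_skb, next) == 0, "next must be first");
_Static_assert(offsetof(struct fake_skb, prev) ==
	       offsetof(struct fake_skb, next) + sizeof(void *),
	       "prev must follow next");
_Static_assert(sizeof(struct node) <= 2 * sizeof(void *) + sizeof(long long),
	       "node must fit in the overlaid fields");

static struct node *skb_to_node(struct fake_skb *skb)
{
	return (struct node *)&skb->next;	/* reuse the fields as node storage */
}
```

If the struct layout ever changes, the asserts fail at build time instead of corrupting memory at run time, which is exactly the safety net the kernel patch installs.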
@@ -333,20 +360,23 @@ static psched_time_t packet_len_2_sched_time(unsigned int len, struct netem_sche
 
 static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
 {
-	struct sk_buff_head *list = &sch->q;
+	struct netem_sched_data *q = qdisc_priv(sch);
 	psched_time_t tnext = netem_skb_cb(nskb)->time_to_send;
-	struct sk_buff *skb = skb_peek_tail(list);
+	struct rb_node **p = &q->t_root.rb_node, *parent = NULL;
 
-	/* Optimize for add at tail */
-	if (likely(!skb || tnext >= netem_skb_cb(skb)->time_to_send))
-		return __skb_queue_tail(list, nskb);
+	while (*p) {
+		struct sk_buff *skb;
 
-	skb_queue_reverse_walk(list, skb) {
+		parent = *p;
+		skb = netem_rb_to_skb(parent);
 		if (tnext >= netem_skb_cb(skb)->time_to_send)
-			break;
+			p = &parent->rb_right;
+		else
+			p = &parent->rb_left;
 	}
-
-	__skb_queue_after(list, skb, nskb);
+	rb_link_node(netem_rb_node(nskb), parent, p);
+	rb_insert_color(netem_rb_node(nskb), &q->t_root);
+	sch->q.qlen++;
 }
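The new tfifo_enqueue() keys the tree on `time_to_send`, sending ties to the right so packets with equal release times keep FIFO order. The kernel uses a red-black tree via `rb_link_node()`/`rb_insert_color()`; this sketch uses a plain unbalanced BST to show only the comparison rule and the `rb_first()`-style leftmost lookup:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of tfifo ordering with a plain BST; "struct pkt" is invented. */
struct pkt {
	long long time_to_send;
	struct pkt *left, *right;
};

static void tfifo_insert(struct pkt **root, struct pkt *n)
{
	struct pkt **p = root;

	while (*p) {
		/* ties go right: earlier arrivals dequeue first (FIFO) */
		if (n->time_to_send >= (*p)->time_to_send)
			p = &(*p)->right;
		else
			p = &(*p)->left;
	}
	n->left = n->right = NULL;
	*p = n;
}

/* Leftmost node is the next packet to release (rb_first() analogue). */
static struct pkt *tfifo_peek(struct pkt *root)
{
	if (!root)
		return NULL;
	while (root->left)
		root = root->left;
	return root;
}
```

With an rbtree this insert stays O(log n) even under heavy reordering, which is the motivation for the patch: the old `skb_queue_reverse_walk()` was O(n) per enqueue when delay jitter reordered packets.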
352/* 382/*
@@ -436,23 +466,28 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
436 now = psched_get_time(); 466 now = psched_get_time();
437 467
438 if (q->rate) { 468 if (q->rate) {
439 struct sk_buff_head *list = &sch->q; 469 struct sk_buff *last;
440 470
441 if (!skb_queue_empty(list)) { 471 if (!skb_queue_empty(&sch->q))
472 last = skb_peek_tail(&sch->q);
473 else
474 last = netem_rb_to_skb(rb_last(&q->t_root));
475 if (last) {
442 /* 476 /*
443 * Last packet in queue is reference point (now), 477 * Last packet in queue is reference point (now),
444 * calculate this time bonus and subtract 478 * calculate this time bonus and subtract
445 * from delay. 479 * from delay.
446 */ 480 */
447 delay -= netem_skb_cb(skb_peek_tail(list))->time_to_send - now; 481 delay -= netem_skb_cb(last)->time_to_send - now;
448 delay = max_t(psched_tdiff_t, 0, delay); 482 delay = max_t(psched_tdiff_t, 0, delay);
449 now = netem_skb_cb(skb_peek_tail(list))->time_to_send; 483 now = netem_skb_cb(last)->time_to_send;
450 } 484 }
451 485
452 delay += packet_len_2_sched_time(skb->len, q); 486 delay += packet_len_2_sched_time(skb->len, q);
453 } 487 }
454 488
455 cb->time_to_send = now + delay; 489 cb->time_to_send = now + delay;
490 cb->tstamp_save = skb->tstamp;
456 ++q->counter; 491 ++q->counter;
457 tfifo_enqueue(skb, sch); 492 tfifo_enqueue(skb, sch);
458 } else { 493 } else {
@@ -476,6 +511,21 @@ static unsigned int netem_drop(struct Qdisc *sch)
476 unsigned int len; 511 unsigned int len;
477 512
478 len = qdisc_queue_drop(sch); 513 len = qdisc_queue_drop(sch);
514
515 if (!len) {
516 struct rb_node *p = rb_first(&q->t_root);
517
518 if (p) {
519 struct sk_buff *skb = netem_rb_to_skb(p);
520
521 rb_erase(p, &q->t_root);
522 sch->q.qlen--;
523 skb->next = NULL;
524 skb->prev = NULL;
525 len = qdisc_pkt_len(skb);
526 kfree_skb(skb);
527 }
528 }
479 if (!len && q->qdisc && q->qdisc->ops->drop) 529 if (!len && q->qdisc && q->qdisc->ops->drop)
480 len = q->qdisc->ops->drop(q->qdisc); 530 len = q->qdisc->ops->drop(q->qdisc);
481 if (len) 531 if (len)
@@ -488,19 +538,35 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
488{ 538{
489 struct netem_sched_data *q = qdisc_priv(sch); 539 struct netem_sched_data *q = qdisc_priv(sch);
490 struct sk_buff *skb; 540 struct sk_buff *skb;
541 struct rb_node *p;
491 542
492 if (qdisc_is_throttled(sch)) 543 if (qdisc_is_throttled(sch))
493 return NULL; 544 return NULL;
494 545
495tfifo_dequeue: 546tfifo_dequeue:
496 skb = qdisc_peek_head(sch); 547 skb = __skb_dequeue(&sch->q);
497 if (skb) { 548 if (skb) {
498 const struct netem_skb_cb *cb = netem_skb_cb(skb); 549deliver:
550 sch->qstats.backlog -= qdisc_pkt_len(skb);
551 qdisc_unthrottled(sch);
552 qdisc_bstats_update(sch, skb);
553 return skb;
554 }
555 p = rb_first(&q->t_root);
556 if (p) {
557 psched_time_t time_to_send;
558
559 skb = netem_rb_to_skb(p);
499 560
500 /* if more time remaining? */ 561 /* if more time remaining? */
501 if (cb->time_to_send <= psched_get_time()) { 562 time_to_send = netem_skb_cb(skb)->time_to_send;
502 __skb_unlink(skb, &sch->q); 563 if (time_to_send <= psched_get_time()) {
503 sch->qstats.backlog -= qdisc_pkt_len(skb); 564 rb_erase(p, &q->t_root);
565
566 sch->q.qlen--;
567 skb->next = NULL;
568 skb->prev = NULL;
569 skb->tstamp = netem_skb_cb(skb)->tstamp_save;
504 570
505#ifdef CONFIG_NET_CLS_ACT 571#ifdef CONFIG_NET_CLS_ACT
506 /* 572 /*
@@ -522,10 +588,7 @@ tfifo_dequeue:
522 } 588 }
523 goto tfifo_dequeue; 589 goto tfifo_dequeue;
524 } 590 }
525deliver: 591 goto deliver;
526 qdisc_unthrottled(sch);
527 qdisc_bstats_update(sch, skb);
528 return skb;
529 } 592 }
530 593
531 if (q->qdisc) { 594 if (q->qdisc) {
@@ -533,7 +596,7 @@ deliver:
533 if (skb) 596 if (skb)
534 goto deliver; 597 goto deliver;
535 } 598 }
536 qdisc_watchdog_schedule(&q->watchdog, cb->time_to_send); 599 qdisc_watchdog_schedule(&q->watchdog, time_to_send);
537 } 600 }
538 601
539 if (q->qdisc) { 602 if (q->qdisc) {
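The rewritten dequeue path above releases the tree's leftmost packet only once its `time_to_send` has passed, and otherwise arms the watchdog for exactly that instant. A hypothetical sketch of that decision (the enum and function names are illustrative, not the kernel's):

```c
#include <assert.h>

enum action { DELIVER, ARM_WATCHDOG };

/* Decide what netem_dequeue does with the earliest delayed packet:
 * hand it up if its release time has passed, else record when the
 * qdisc_watchdog_schedule() analogue should fire. */
static enum action netem_decide(long long head_time_to_send, long long now,
				long long *wakeup)
{
	if (head_time_to_send <= now)
		return DELIVER;			/* rb_erase() + hand skb up */
	*wakeup = head_time_to_send;		/* watchdog target instant */
	return ARM_WATCHDOG;
}
```

Keying the wakeup to the leftmost node means the qdisc never polls: it sleeps until the exact moment the next packet becomes eligible.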
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index d51852bba01c..7c195d972bf0 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -138,7 +138,7 @@ struct qfq_class {
 
 	struct gnet_stats_basic_packed bstats;
 	struct gnet_stats_queue qstats;
-	struct gnet_stats_rate_est rate_est;
+	struct gnet_stats_rate_est64 rate_est;
 	struct Qdisc *qdisc;
 	struct list_head alist;		/* Link for active-classes list. */
 	struct qfq_aggregate *agg;	/* Parent aggregate. */
diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index e478d316602b..1aaf1b6e51a2 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -116,14 +116,57 @@ struct tbf_sched_data {
 	struct qdisc_watchdog watchdog;	/* Watchdog timer */
 };
 
+
+/* GSO packet is too big, segment it so that tbf can transmit
+ * each segment in time
+ */
+static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch)
+{
+	struct tbf_sched_data *q = qdisc_priv(sch);
+	struct sk_buff *segs, *nskb;
+	netdev_features_t features = netif_skb_features(skb);
+	int ret, nb;
+
+	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+
+	if (IS_ERR_OR_NULL(segs))
+		return qdisc_reshape_fail(skb, sch);
+
+	nb = 0;
+	while (segs) {
+		nskb = segs->next;
+		segs->next = NULL;
+		if (likely(segs->len <= q->max_size)) {
+			qdisc_skb_cb(segs)->pkt_len = segs->len;
+			ret = qdisc_enqueue(segs, q->qdisc);
+		} else {
+			ret = qdisc_reshape_fail(skb, sch);
+		}
+		if (ret != NET_XMIT_SUCCESS) {
+			if (net_xmit_drop_count(ret))
+				sch->qstats.drops++;
+		} else {
+			nb++;
+		}
+		segs = nskb;
+	}
+	sch->q.qlen += nb;
+	if (nb > 1)
+		qdisc_tree_decrease_qlen(sch, 1 - nb);
+	consume_skb(skb);
+	return nb > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
+}
+
 static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
 	struct tbf_sched_data *q = qdisc_priv(sch);
 	int ret;
 
-	if (qdisc_pkt_len(skb) > q->max_size)
+	if (qdisc_pkt_len(skb) > q->max_size) {
+		if (skb_is_gso(skb))
+			return tbf_segment(skb, sch);
 		return qdisc_reshape_fail(skb, sch);
-
+	}
 	ret = qdisc_enqueue(skb, q->qdisc);
 	if (ret != NET_XMIT_SUCCESS) {
 		if (net_xmit_drop_count(ret))
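After tbf_segment() turns one oversized GSO skb into `nb` segments, this qdisc's own queue length grows by `nb`, but the parent qdiscs only ever counted a single enqueue, hence `qdisc_tree_decrease_qlen(sch, 1 - nb)`: a "decrease" by a negative amount, i.e. telling the parents about the extra `nb - 1` packets. A sketch of that bookkeeping with invented names:

```c
#include <assert.h>

/* Mirror of the accounting at the end of tbf_segment(): return the new
 * local qlen and store the correction the parent qdiscs must apply. */
static int qlen_after_segmentation(int qlen, int nb, int *parent_adjust)
{
	qlen += nb;				/* sch->q.qlen += nb */
	/* qdisc_tree_decrease_qlen(sch, 1 - nb): negative decrease
	 * means parents learn about nb - 1 extra packets. */
	*parent_adjust = (nb > 1) ? 1 - nb : 0;
	return qlen;
}
```

Without this correction the hierarchy's packet counts would drift every time a GSO skb is split, eventually confusing parents that make drop decisions from qlen.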
diff --git a/net/sctp/Kconfig b/net/sctp/Kconfig
index cf4852814e0c..71c1a598d9bc 100644
--- a/net/sctp/Kconfig
+++ b/net/sctp/Kconfig
@@ -30,7 +30,8 @@ menuconfig IP_SCTP
	  homing at either or both ends of an association."
 
	  To compile this protocol support as a module, choose M here: the
-	  module will be called sctp.
+	  module will be called sctp. Debug messages are handeled by the
+	  kernel's dynamic debugging framework.
 
	  If in doubt, say N.
 
@@ -48,14 +49,6 @@ config NET_SCTPPROBE
	  To compile this code as a module, choose M here: the
	  module will be called sctp_probe.
 
-config SCTP_DBG_MSG
-	bool "SCTP: Debug messages"
-	help
-	  If you say Y, this will enable verbose debugging messages.
-
-	  If unsure, say N.  However, if you are running into problems, use
-	  this option to gather detailed trace information
-
 config SCTP_DBG_OBJCNT
	bool "SCTP: Debug object counts"
	depends on PROC_FS
diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index 91cfd8f94a19..bce5b79662a6 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -86,10 +86,9 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
 
 	/* Discarding const is appropriate here. */
 	asoc->ep = (struct sctp_endpoint *)ep;
-	sctp_endpoint_hold(asoc->ep);
-
-	/* Hold the sock. */
 	asoc->base.sk = (struct sock *)sk;
+
+	sctp_endpoint_hold(asoc->ep);
 	sock_hold(asoc->base.sk);
 
 	/* Initialize the common base substructure. */
@@ -103,13 +102,7 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
 	sctp_bind_addr_init(&asoc->base.bind_addr, ep->base.bind_addr.port);
 
 	asoc->state = SCTP_STATE_CLOSED;
-
-	/* Set these values from the socket values, a conversion between
-	 * millsecons to seconds/microseconds must also be done.
-	 */
-	asoc->cookie_life.tv_sec = sp->assocparams.sasoc_cookie_life / 1000;
-	asoc->cookie_life.tv_usec = (sp->assocparams.sasoc_cookie_life % 1000)
-					* 1000;
+	asoc->cookie_life = ms_to_ktime(sp->assocparams.sasoc_cookie_life);
 	asoc->frag_point = 0;
 	asoc->user_frag = sp->user_frag;
@@ -343,8 +336,8 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
 	return asoc;
 
fail_init:
-	sctp_endpoint_put(asoc->ep);
 	sock_put(asoc->base.sk);
+	sctp_endpoint_put(asoc->ep);
 	return NULL;
 }
 
@@ -356,7 +349,7 @@ struct sctp_association *sctp_association_new(const struct sctp_endpoint *ep,
 {
 	struct sctp_association *asoc;
 
-	asoc = t_new(struct sctp_association, gfp);
+	asoc = kzalloc(sizeof(*asoc), gfp);
 	if (!asoc)
 		goto fail;
 
@@ -364,7 +357,8 @@ struct sctp_association *sctp_association_new(const struct sctp_endpoint *ep,
 		goto fail_init;
 
 	SCTP_DBG_OBJCNT_INC(assoc);
-	SCTP_DEBUG_PRINTK("Created asoc %p\n", asoc);
+
+	pr_debug("Created asoc %p\n", asoc);
 
 	return asoc;
 
@@ -462,7 +456,10 @@ void sctp_association_free(struct sctp_association *asoc)
 /* Cleanup and free up an association. */
 static void sctp_association_destroy(struct sctp_association *asoc)
 {
-	SCTP_ASSERT(asoc->base.dead, "Assoc is not dead", return);
+	if (unlikely(!asoc->base.dead)) {
+		WARN(1, "Attempt to destroy undead association %p!\n", asoc);
+		return;
+	}
 
 	sctp_endpoint_put(asoc->ep);
 	sock_put(asoc->base.sk);
@@ -543,11 +540,8 @@ void sctp_assoc_rm_peer(struct sctp_association *asoc,
 	struct list_head	*pos;
 	struct sctp_transport	*transport;
 
-	SCTP_DEBUG_PRINTK_IPADDR("sctp_assoc_rm_peer:association %p addr: ",
-				 " port: %d\n",
-				 asoc,
-				 (&peer->ipaddr),
-				 ntohs(peer->ipaddr.v4.sin_port));
+	pr_debug("%s: association:%p addr:%pISpc\n",
+		 __func__, asoc, &peer->ipaddr.sa);
 
 	/* If we are to remove the current retran_path, update it
 	 * to the next peer before removing this peer from the list.
@@ -643,12 +637,8 @@ struct sctp_transport *sctp_assoc_add_peer(struct sctp_association *asoc,
 	/* AF_INET and AF_INET6 share common port field. */
 	port = ntohs(addr->v4.sin_port);
 
-	SCTP_DEBUG_PRINTK_IPADDR("sctp_assoc_add_peer:association %p addr: ",
-				 " port: %d state:%d\n",
-				 asoc,
-				 addr,
-				 port,
-				 peer_state);
+	pr_debug("%s: association:%p addr:%pISpc state:%d\n", __func__,
+		 asoc, &addr->sa, peer_state);
 
 	/* Set the port if it has not been set yet. */
 	if (0 == asoc->peer.port)
@@ -715,8 +705,9 @@ struct sctp_transport *sctp_assoc_add_peer(struct sctp_association *asoc,
 	else
 		asoc->pathmtu = peer->pathmtu;
 
-	SCTP_DEBUG_PRINTK("sctp_assoc_add_peer:association %p PMTU set to "
-			  "%d\n", asoc, asoc->pathmtu);
+	pr_debug("%s: association:%p PMTU set to %d\n", __func__, asoc,
+		 asoc->pathmtu);
+
 	peer->pmtu_pending = 0;
 
 	asoc->frag_point = sctp_frag_point(asoc, asoc->pathmtu);
@@ -1356,12 +1347,8 @@ void sctp_assoc_update_retran_path(struct sctp_association *asoc)
 	else
 		t = asoc->peer.retran_path;
 
-	SCTP_DEBUG_PRINTK_IPADDR("sctp_assoc_update_retran_path:association"
-				 " %p addr: ",
-				 " port: %d\n",
-				 asoc,
-				 (&t->ipaddr),
-				 ntohs(t->ipaddr.v4.sin_port));
+	pr_debug("%s: association:%p addr:%pISpc\n", __func__, asoc,
+		 &t->ipaddr.sa);
 }
 
 /* Choose the transport for sending retransmit packet. */
@@ -1408,8 +1395,8 @@ void sctp_assoc_sync_pmtu(struct sock *sk, struct sctp_association *asoc)
 		asoc->frag_point = sctp_frag_point(asoc, pmtu);
 	}
 
-	SCTP_DEBUG_PRINTK("%s: asoc:%p, pmtu:%d, frag_point:%d\n",
-			  __func__, asoc, asoc->pathmtu, asoc->frag_point);
+	pr_debug("%s: asoc:%p, pmtu:%d, frag_point:%d\n", __func__, asoc,
+		 asoc->pathmtu, asoc->frag_point);
 }
 
 /* Should we send a SACK to update our peer? */
@@ -1461,9 +1448,9 @@ void sctp_assoc_rwnd_increase(struct sctp_association *asoc, unsigned int len)
 		asoc->rwnd_press -= change;
 	}
 
-	SCTP_DEBUG_PRINTK("%s: asoc %p rwnd increased by %d to (%u, %u) "
-			  "- %u\n", __func__, asoc, len, asoc->rwnd,
-			  asoc->rwnd_over, asoc->a_rwnd);
+	pr_debug("%s: asoc:%p rwnd increased by %d to (%u, %u) - %u\n",
+		 __func__, asoc, len, asoc->rwnd, asoc->rwnd_over,
+		 asoc->a_rwnd);
 
 	/* Send a window update SACK if the rwnd has increased by at least the
 	 * minimum of the association's PMTU and half of the receive buffer.
@@ -1472,9 +1459,11 @@ void sctp_assoc_rwnd_increase(struct sctp_association *asoc, unsigned int len)
 	 */
 	if (sctp_peer_needs_update(asoc)) {
 		asoc->a_rwnd = asoc->rwnd;
-		SCTP_DEBUG_PRINTK("%s: Sending window update SACK- asoc: %p "
-				  "rwnd: %u a_rwnd: %u\n", __func__,
-				  asoc, asoc->rwnd, asoc->a_rwnd);
+
+		pr_debug("%s: sending window update SACK- asoc:%p rwnd:%u "
+			 "a_rwnd:%u\n", __func__, asoc, asoc->rwnd,
+			 asoc->a_rwnd);
+
 		sack = sctp_make_sack(asoc);
 		if (!sack)
 			return;
@@ -1496,8 +1485,10 @@ void sctp_assoc_rwnd_decrease(struct sctp_association *asoc, unsigned int len)
 	int rx_count;
 	int over = 0;
 
-	SCTP_ASSERT(asoc->rwnd, "rwnd zero", return);
-	SCTP_ASSERT(!asoc->rwnd_over, "rwnd_over not zero", return);
+	if (unlikely(!asoc->rwnd || asoc->rwnd_over))
+		pr_debug("%s: association:%p has asoc->rwnd:%u, "
+			 "asoc->rwnd_over:%u!\n", __func__, asoc,
+			 asoc->rwnd, asoc->rwnd_over);
 
 	if (asoc->ep->rcvbuf_policy)
 		rx_count = atomic_read(&asoc->rmem_alloc);
@@ -1522,9 +1513,10 @@ void sctp_assoc_rwnd_decrease(struct sctp_association *asoc, unsigned int len)
 		asoc->rwnd_over = len - asoc->rwnd;
 		asoc->rwnd = 0;
 	}
-	SCTP_DEBUG_PRINTK("%s: asoc %p rwnd decreased by %d to (%u, %u, %u)\n",
-			  __func__, asoc, len, asoc->rwnd,
-			  asoc->rwnd_over, asoc->rwnd_press);
+
+	pr_debug("%s: asoc:%p rwnd decreased by %d to (%u, %u, %u)\n",
+		 __func__, asoc, len, asoc->rwnd, asoc->rwnd_over,
+		 asoc->rwnd_press);
 }
 
 /* Build the bind address list for the association based on info from the
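sctp_assoc_rwnd_decrease() above lets the advertised receive window hit zero and records the excess bytes in `rwnd_over` (visible in the `asoc->rwnd_over = len - asoc->rwnd` hunk). A simplified sketch of just that clamp, ignoring the `rwnd_press`/receive-buffer-pressure handling that surrounds it in the real function:

```c
#include <assert.h>

/* Invented miniature of the window bookkeeping; only the clamp is shown. */
struct wnd { unsigned int rwnd, rwnd_over; };

static void rwnd_decrease(struct wnd *w, unsigned int len)
{
	if (len <= w->rwnd) {
		w->rwnd -= len;			/* normal case: shrink window */
	} else {
		w->rwnd_over = len - w->rwnd;	/* bytes beyond the window */
		w->rwnd = 0;			/* advertise a closed window */
	}
}
```

Tracking the overshoot separately lets the increase path later restore the window consistently once the over-delivered data has been consumed.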
diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
index 41145fe31813..64977ea0f9c5 100644
--- a/net/sctp/bind_addr.c
+++ b/net/sctp/bind_addr.c
@@ -162,7 +162,7 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
 	struct sctp_sockaddr_entry *addr;
 
 	/* Add the address to the bind address list. */
-	addr = t_new(struct sctp_sockaddr_entry, gfp);
+	addr = kzalloc(sizeof(*addr), gfp);
 	if (!addr)
 		return -ENOMEM;
 
diff --git a/net/sctp/chunk.c b/net/sctp/chunk.c
index 69ce21e3716f..5780565f5b7d 100644
--- a/net/sctp/chunk.c
+++ b/net/sctp/chunk.c
@@ -66,7 +66,7 @@ static void sctp_datamsg_init(struct sctp_datamsg *msg)
 }
 
 /* Allocate and initialize datamsg. */
-SCTP_STATIC struct sctp_datamsg *sctp_datamsg_new(gfp_t gfp)
+static struct sctp_datamsg *sctp_datamsg_new(gfp_t gfp)
 {
 	struct sctp_datamsg *msg;
 	msg = kmalloc(sizeof(struct sctp_datamsg), gfp);
@@ -193,8 +193,9 @@ struct sctp_datamsg *sctp_datamsg_from_user(struct sctp_association *asoc,
 		msg->expires_at = jiffies +
 				    msecs_to_jiffies(sinfo->sinfo_timetolive);
 		msg->can_abandon = 1;
-		SCTP_DEBUG_PRINTK("%s: msg:%p expires_at: %ld jiffies:%ld\n",
-				  __func__, msg, msg->expires_at, jiffies);
+
+		pr_debug("%s: msg:%p expires_at:%ld jiffies:%ld\n", __func__,
+			 msg, msg->expires_at, jiffies);
 	}
 
 	/* This is the biggest possible DATA chunk that can fit into
diff --git a/net/sctp/debug.c b/net/sctp/debug.c
index ec997cfe0a7e..f4998780d6df 100644
--- a/net/sctp/debug.c
+++ b/net/sctp/debug.c
@@ -47,10 +47,6 @@
 
 #include <net/sctp/sctp.h>
 
-#if SCTP_DEBUG
-int sctp_debug_flag = 1;	/* Initially enable DEBUG */
-#endif	/* SCTP_DEBUG */
-
 /* These are printable forms of Chunk ID's from section 3.1. */
 static const char *const sctp_cid_tbl[SCTP_NUM_BASE_CHUNK_TYPES] = {
 	"DATA",
diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
index 5fbd7bc6bb11..9e3d257de0e0 100644
--- a/net/sctp/endpointola.c
+++ b/net/sctp/endpointola.c
@@ -192,9 +192,10 @@ struct sctp_endpoint *sctp_endpoint_new(struct sock *sk, gfp_t gfp)
 	struct sctp_endpoint *ep;
 
 	/* Build a local endpoint. */
-	ep = t_new(struct sctp_endpoint, gfp);
+	ep = kzalloc(sizeof(*ep), gfp);
 	if (!ep)
 		goto fail;
+
 	if (!sctp_endpoint_init(ep, sk, gfp))
 		goto fail_init;
 
@@ -246,10 +247,12 @@ void sctp_endpoint_free(struct sctp_endpoint *ep)
 /* Final destructor for endpoint. */
 static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
 {
-	SCTP_ASSERT(ep->base.dead, "Endpoint is not dead", return);
+	struct sock *sk;
 
-	/* Free up the HMAC transform. */
-	crypto_free_hash(sctp_sk(ep->base.sk)->hmac);
+	if (unlikely(!ep->base.dead)) {
+		WARN(1, "Attempt to destroy undead endpoint %p!\n", ep);
+		return;
+	}
 
 	/* Free the digest buffer */
 	kfree(ep->digest);
@@ -270,13 +273,15 @@ static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
 
 	memset(ep->secret_key, 0, sizeof(ep->secret_key));
 
-	/* Remove and free the port */
-	if (sctp_sk(ep->base.sk)->bind_hash)
-		sctp_put_port(ep->base.sk);
-
 	/* Give up our hold on the sock. */
-	if (ep->base.sk)
-		sock_put(ep->base.sk);
+	sk = ep->base.sk;
+	if (sk != NULL) {
+		/* Remove and free the port */
+		if (sctp_sk(sk)->bind_hash)
+			sctp_put_port(sk);
+
+		sock_put(sk);
+	}
 
 	kfree(ep);
 	SCTP_DBG_OBJCNT_DEC(ep);
diff --git a/net/sctp/input.c b/net/sctp/input.c
index 4b2c83146aa7..3fa4d858c35a 100644
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@ -454,8 +454,6 @@ void sctp_icmp_proto_unreachable(struct sock *sk,
454 struct sctp_association *asoc, 454 struct sctp_association *asoc,
455 struct sctp_transport *t) 455 struct sctp_transport *t)
456{ 456{
457 SCTP_DEBUG_PRINTK("%s\n", __func__);
458
459 if (sock_owned_by_user(sk)) { 457 if (sock_owned_by_user(sk)) {
460 if (timer_pending(&t->proto_unreach_timer)) 458 if (timer_pending(&t->proto_unreach_timer))
461 return; 459 return;
@@ -464,10 +462,12 @@ void sctp_icmp_proto_unreachable(struct sock *sk,
464 jiffies + (HZ/20))) 462 jiffies + (HZ/20)))
465 sctp_association_hold(asoc); 463 sctp_association_hold(asoc);
466 } 464 }
467
468 } else { 465 } else {
469 struct net *net = sock_net(sk); 466 struct net *net = sock_net(sk);
470 467
468 pr_debug("%s: unrecognized next header type "
469 "encountered!\n", __func__);
470
471 if (del_timer(&t->proto_unreach_timer)) 471 if (del_timer(&t->proto_unreach_timer))
472 sctp_association_put(asoc); 472 sctp_association_put(asoc);
473 473
@@ -589,7 +589,7 @@ void sctp_v4_err(struct sk_buff *skb, __u32 info)
589 struct sctp_association *asoc = NULL; 589 struct sctp_association *asoc = NULL;
590 struct sctp_transport *transport; 590 struct sctp_transport *transport;
591 struct inet_sock *inet; 591 struct inet_sock *inet;
592 sk_buff_data_t saveip, savesctp; 592 __u16 saveip, savesctp;
593 int err; 593 int err;
594 struct net *net = dev_net(skb->dev); 594 struct net *net = dev_net(skb->dev);
595 595
@@ -903,11 +903,11 @@ hit:
903} 903}
904 904
905/* Look up an association. BH-safe. */ 905/* Look up an association. BH-safe. */
906SCTP_STATIC 906static
907struct sctp_association *sctp_lookup_association(struct net *net, 907struct sctp_association *sctp_lookup_association(struct net *net,
908 const union sctp_addr *laddr, 908 const union sctp_addr *laddr,
909 const union sctp_addr *paddr, 909 const union sctp_addr *paddr,
910 struct sctp_transport **transportp) 910 struct sctp_transport **transportp)
911{ 911{
912 struct sctp_association *asoc; 912 struct sctp_association *asoc;
913 913
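In the `sctp_v4_err()` hunk above, `saveip`/`savesctp` switch from `sk_buff_data_t` to `__u16`: they record header positions, and a small offset relative to the buffer start is both compact and stable, whereas a raw pointer would dangle if the backing storage moved. A userspace sketch of the offset-not-pointer idea (the `buf` struct is a toy, not `sk_buff`):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy buffer whose backing storage may be reallocated. */
struct buf { char *data; };

/* Record a position as a 16-bit offset from the start of the buffer... */
static uint16_t offset_of(const struct buf *b, const char *p)
{
    return (uint16_t)(p - b->data);
}

/* ...and resolve it back to a pointer only when needed, against whatever
 * the current base address is. */
static char *ptr_at(const struct buf *b, uint16_t off)
{
    return b->data + off;
}
```

A saved offset survives a `realloc()` of `data`; a saved pointer would not.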
diff --git a/net/sctp/inqueue.c b/net/sctp/inqueue.c
index 3221d073448c..cb25f040fed0 100644
--- a/net/sctp/inqueue.c
+++ b/net/sctp/inqueue.c
@@ -219,10 +219,10 @@ struct sctp_chunk *sctp_inq_pop(struct sctp_inq *queue)
219 chunk->end_of_packet = 1; 219 chunk->end_of_packet = 1;
220 } 220 }
221 221
222 SCTP_DEBUG_PRINTK("+++sctp_inq_pop+++ chunk %p[%s]," 222 pr_debug("+++sctp_inq_pop+++ chunk:%p[%s], length:%d, skb->len:%d\n",
223 " length %d, skb->len %d\n",chunk, 223 chunk, sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)),
224 sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)), 224 ntohs(chunk->chunk_hdr->length), chunk->skb->len);
225 ntohs(chunk->chunk_hdr->length), chunk->skb->len); 225
226 return chunk; 226 return chunk;
227} 227}
228 228
@@ -238,4 +238,3 @@ void sctp_inq_set_th_handler(struct sctp_inq *q, work_func_t callback)
238{ 238{
239 INIT_WORK(&q->immediate, callback); 239 INIT_WORK(&q->immediate, callback);
240} 240}
241
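The conversions above replace SCTP's private `SCTP_DEBUG_PRINTK()` with the kernel's generic `pr_debug()`, whose calls compile away entirely unless debugging is enabled (via `DEBUG` or dynamic debug). A userspace sketch of that compile-out pattern, with a counter so the effect is observable (the macro and counter are stand-ins, not the kernel implementation):

```c
#include <stdio.h>

static int dbg_lines;   /* counts messages actually emitted */

/* Debug printing that costs nothing unless DEBUG is defined at build time,
 * the same trade the kernel's pr_debug() makes. */
#ifdef DEBUG
#define pr_debug(fmt, ...) (dbg_lines++, fprintf(stderr, fmt, ##__VA_ARGS__))
#else
#define pr_debug(fmt, ...) do { } while (0)
#endif

static void pop_chunk(int len)
{
    pr_debug("popped chunk, length:%d\n", len);
}
```

Built without `-DDEBUG`, the call site vanishes and its arguments are never evaluated.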
diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
index 391a245d5203..09ffcc912d23 100644
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -145,15 +145,15 @@ static struct notifier_block sctp_inet6addr_notifier = {
145}; 145};
146 146
147/* ICMP error handler. */ 147/* ICMP error handler. */
148SCTP_STATIC void sctp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, 148static void sctp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
149 u8 type, u8 code, int offset, __be32 info) 149 u8 type, u8 code, int offset, __be32 info)
150{ 150{
151 struct inet6_dev *idev; 151 struct inet6_dev *idev;
152 struct sock *sk; 152 struct sock *sk;
153 struct sctp_association *asoc; 153 struct sctp_association *asoc;
154 struct sctp_transport *transport; 154 struct sctp_transport *transport;
155 struct ipv6_pinfo *np; 155 struct ipv6_pinfo *np;
156 sk_buff_data_t saveip, savesctp; 156 __u16 saveip, savesctp;
157 int err; 157 int err;
158 struct net *net = dev_net(skb->dev); 158 struct net *net = dev_net(skb->dev);
159 159
@@ -239,9 +239,8 @@ static int sctp_v6_xmit(struct sk_buff *skb, struct sctp_transport *transport)
239 fl6.daddr = *rt0->addr; 239 fl6.daddr = *rt0->addr;
240 } 240 }
241 241
242 SCTP_DEBUG_PRINTK("%s: skb:%p, len:%d, src:%pI6 dst:%pI6\n", 242 pr_debug("%s: skb:%p, len:%d, src:%pI6 dst:%pI6\n", __func__, skb,
243 __func__, skb, skb->len, 243 skb->len, &fl6.saddr, &fl6.daddr);
244 &fl6.saddr, &fl6.daddr);
245 244
246 SCTP_INC_STATS(sock_net(sk), SCTP_MIB_OUTSCTPPACKS); 245 SCTP_INC_STATS(sock_net(sk), SCTP_MIB_OUTSCTPPACKS);
247 246
@@ -276,7 +275,7 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
276 if (ipv6_addr_type(&daddr->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL) 275 if (ipv6_addr_type(&daddr->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL)
277 fl6->flowi6_oif = daddr->v6.sin6_scope_id; 276 fl6->flowi6_oif = daddr->v6.sin6_scope_id;
278 277
279 SCTP_DEBUG_PRINTK("%s: DST=%pI6 ", __func__, &fl6->daddr); 278 pr_debug("%s: dst=%pI6 ", __func__, &fl6->daddr);
280 279
281 if (asoc) 280 if (asoc)
282 fl6->fl6_sport = htons(asoc->base.bind_addr.port); 281 fl6->fl6_sport = htons(asoc->base.bind_addr.port);
@@ -284,7 +283,8 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
284 if (saddr) { 283 if (saddr) {
285 fl6->saddr = saddr->v6.sin6_addr; 284 fl6->saddr = saddr->v6.sin6_addr;
286 fl6->fl6_sport = saddr->v6.sin6_port; 285 fl6->fl6_sport = saddr->v6.sin6_port;
287 SCTP_DEBUG_PRINTK("SRC=%pI6 - ", &fl6->saddr); 286
287 pr_debug("src=%pI6 - ", &fl6->saddr);
288 } 288 }
289 289
290 dst = ip6_dst_lookup_flow(sk, fl6, NULL, false); 290 dst = ip6_dst_lookup_flow(sk, fl6, NULL, false);
@@ -348,13 +348,16 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
348out: 348out:
349 if (!IS_ERR_OR_NULL(dst)) { 349 if (!IS_ERR_OR_NULL(dst)) {
350 struct rt6_info *rt; 350 struct rt6_info *rt;
351
351 rt = (struct rt6_info *)dst; 352 rt = (struct rt6_info *)dst;
352 t->dst = dst; 353 t->dst = dst;
353 SCTP_DEBUG_PRINTK("rt6_dst:%pI6 rt6_src:%pI6\n", 354
354 &rt->rt6i_dst.addr, &fl6->saddr); 355 pr_debug("rt6_dst:%pI6 rt6_src:%pI6\n", &rt->rt6i_dst.addr,
356 &fl6->saddr);
355 } else { 357 } else {
356 t->dst = NULL; 358 t->dst = NULL;
357 SCTP_DEBUG_PRINTK("NO ROUTE\n"); 359
360 pr_debug("no route\n");
358 } 361 }
359} 362}
360 363
@@ -377,7 +380,7 @@ static void sctp_v6_get_saddr(struct sctp_sock *sk,
377 struct flowi6 *fl6 = &fl->u.ip6; 380 struct flowi6 *fl6 = &fl->u.ip6;
378 union sctp_addr *saddr = &t->saddr; 381 union sctp_addr *saddr = &t->saddr;
379 382
380 SCTP_DEBUG_PRINTK("%s: asoc:%p dst:%p\n", __func__, t->asoc, t->dst); 383 pr_debug("%s: asoc:%p dst:%p\n", __func__, t->asoc, t->dst);
381 384
382 if (t->dst) { 385 if (t->dst) {
383 saddr->v6.sin6_family = AF_INET6; 386 saddr->v6.sin6_family = AF_INET6;
@@ -402,7 +405,7 @@ static void sctp_v6_copy_addrlist(struct list_head *addrlist,
402 read_lock_bh(&in6_dev->lock); 405 read_lock_bh(&in6_dev->lock);
403 list_for_each_entry(ifp, &in6_dev->addr_list, if_list) { 406 list_for_each_entry(ifp, &in6_dev->addr_list, if_list) {
404 /* Add the address to the local list. */ 407 /* Add the address to the local list. */
405 addr = t_new(struct sctp_sockaddr_entry, GFP_ATOMIC); 408 addr = kzalloc(sizeof(*addr), GFP_ATOMIC);
406 if (addr) { 409 if (addr) {
407 addr->a.v6.sin6_family = AF_INET6; 410 addr->a.v6.sin6_family = AF_INET6;
408 addr->a.v6.sin6_port = 0; 411 addr->a.v6.sin6_port = 0;
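The address-list hunk above drops the SCTP-private `t_new()` wrapper in favor of a plain `kzalloc(sizeof(*addr), GFP_ATOMIC)`: the same zeroed allocation, but with the size expression visible at the call site. The userspace analogue is `calloc()`; a small sketch (the `sockaddr_entry` struct is a stand-in):

```c
#include <stdlib.h>

struct sockaddr_entry { int family; int port; int valid; };

/* Zeroed allocation: fields the caller never assigns (like `valid` here)
 * reliably start at 0, as with kzalloc() in the kernel. */
static struct sockaddr_entry *entry_new(void)
{
    return calloc(1, sizeof(struct sockaddr_entry));
}
```

Sizing from `sizeof(*ptr)` (or the concrete type, as here) keeps the allocation correct if the struct later grows.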
diff --git a/net/sctp/output.c b/net/sctp/output.c
index bbef4a7a9b56..a46d1eb41762 100644
--- a/net/sctp/output.c
+++ b/net/sctp/output.c
@@ -93,8 +93,7 @@ struct sctp_packet *sctp_packet_config(struct sctp_packet *packet,
93{ 93{
94 struct sctp_chunk *chunk = NULL; 94 struct sctp_chunk *chunk = NULL;
95 95
96 SCTP_DEBUG_PRINTK("%s: packet:%p vtag:0x%x\n", __func__, 96 pr_debug("%s: packet:%p vtag:0x%x\n", __func__, packet, vtag);
97 packet, vtag);
98 97
99 packet->vtag = vtag; 98 packet->vtag = vtag;
100 99
@@ -119,8 +118,7 @@ struct sctp_packet *sctp_packet_init(struct sctp_packet *packet,
119 struct sctp_association *asoc = transport->asoc; 118 struct sctp_association *asoc = transport->asoc;
120 size_t overhead; 119 size_t overhead;
121 120
122 SCTP_DEBUG_PRINTK("%s: packet:%p transport:%p\n", __func__, 121 pr_debug("%s: packet:%p transport:%p\n", __func__, packet, transport);
123 packet, transport);
124 122
125 packet->transport = transport; 123 packet->transport = transport;
126 packet->source_port = sport; 124 packet->source_port = sport;
@@ -145,7 +143,7 @@ void sctp_packet_free(struct sctp_packet *packet)
145{ 143{
146 struct sctp_chunk *chunk, *tmp; 144 struct sctp_chunk *chunk, *tmp;
147 145
148 SCTP_DEBUG_PRINTK("%s: packet:%p\n", __func__, packet); 146 pr_debug("%s: packet:%p\n", __func__, packet);
149 147
150 list_for_each_entry_safe(chunk, tmp, &packet->chunk_list, list) { 148 list_for_each_entry_safe(chunk, tmp, &packet->chunk_list, list) {
151 list_del_init(&chunk->list); 149 list_del_init(&chunk->list);
@@ -167,8 +165,7 @@ sctp_xmit_t sctp_packet_transmit_chunk(struct sctp_packet *packet,
167 sctp_xmit_t retval; 165 sctp_xmit_t retval;
168 int error = 0; 166 int error = 0;
169 167
170 SCTP_DEBUG_PRINTK("%s: packet:%p chunk:%p\n", __func__, 168 pr_debug("%s: packet:%p chunk:%p\n", __func__, packet, chunk);
171 packet, chunk);
172 169
173 switch ((retval = (sctp_packet_append_chunk(packet, chunk)))) { 170 switch ((retval = (sctp_packet_append_chunk(packet, chunk)))) {
174 case SCTP_XMIT_PMTU_FULL: 171 case SCTP_XMIT_PMTU_FULL:
@@ -334,8 +331,7 @@ sctp_xmit_t sctp_packet_append_chunk(struct sctp_packet *packet,
334{ 331{
335 sctp_xmit_t retval = SCTP_XMIT_OK; 332 sctp_xmit_t retval = SCTP_XMIT_OK;
336 333
337 SCTP_DEBUG_PRINTK("%s: packet:%p chunk:%p\n", __func__, packet, 334 pr_debug("%s: packet:%p chunk:%p\n", __func__, packet, chunk);
338 chunk);
339 335
340 /* Data chunks are special. Before seeing what else we can 336 /* Data chunks are special. Before seeing what else we can
341 * bundle into this packet, check to see if we are allowed to 337 * bundle into this packet, check to see if we are allowed to
@@ -402,7 +398,7 @@ int sctp_packet_transmit(struct sctp_packet *packet)
402 unsigned char *auth = NULL; /* pointer to auth in skb data */ 398 unsigned char *auth = NULL; /* pointer to auth in skb data */
403 __u32 cksum_buf_len = sizeof(struct sctphdr); 399 __u32 cksum_buf_len = sizeof(struct sctphdr);
404 400
405 SCTP_DEBUG_PRINTK("%s: packet:%p\n", __func__, packet); 401 pr_debug("%s: packet:%p\n", __func__, packet);
406 402
407 /* Do NOT generate a chunkless packet. */ 403 /* Do NOT generate a chunkless packet. */
408 if (list_empty(&packet->chunk_list)) 404 if (list_empty(&packet->chunk_list))
@@ -472,7 +468,9 @@ int sctp_packet_transmit(struct sctp_packet *packet)
472 * 468 *
473 * [This whole comment explains WORD_ROUND() below.] 469 * [This whole comment explains WORD_ROUND() below.]
474 */ 470 */
475 SCTP_DEBUG_PRINTK("***sctp_transmit_packet***\n"); 471
472 pr_debug("***sctp_transmit_packet***\n");
473
476 list_for_each_entry_safe(chunk, tmp, &packet->chunk_list, list) { 474 list_for_each_entry_safe(chunk, tmp, &packet->chunk_list, list) {
477 list_del_init(&chunk->list); 475 list_del_init(&chunk->list);
478 if (sctp_chunk_is_data(chunk)) { 476 if (sctp_chunk_is_data(chunk)) {
@@ -505,16 +503,13 @@ int sctp_packet_transmit(struct sctp_packet *packet)
505 memcpy(skb_put(nskb, chunk->skb->len), 503 memcpy(skb_put(nskb, chunk->skb->len),
506 chunk->skb->data, chunk->skb->len); 504 chunk->skb->data, chunk->skb->len);
507 505
508 SCTP_DEBUG_PRINTK("%s %p[%s] %s 0x%x, %s %d, %s %d, %s %d\n", 506 pr_debug("*** Chunk:%p[%s] %s 0x%x, length:%d, chunk->skb->len:%d, "
509 "*** Chunk", chunk, 507 "rtt_in_progress:%d\n", chunk,
510 sctp_cname(SCTP_ST_CHUNK( 508 sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)),
511 chunk->chunk_hdr->type)), 509 chunk->has_tsn ? "TSN" : "No TSN",
512 chunk->has_tsn ? "TSN" : "No TSN", 510 chunk->has_tsn ? ntohl(chunk->subh.data_hdr->tsn) : 0,
513 chunk->has_tsn ? 511 ntohs(chunk->chunk_hdr->length), chunk->skb->len,
514 ntohl(chunk->subh.data_hdr->tsn) : 0, 512 chunk->rtt_in_progress);
515 "length", ntohs(chunk->chunk_hdr->length),
516 "chunk->skb->len", chunk->skb->len,
517 "rtt_in_progress", chunk->rtt_in_progress);
518 513
519 /* 514 /*
520 * If this is a control chunk, this is our last 515 * If this is a control chunk, this is our last
@@ -606,8 +601,7 @@ int sctp_packet_transmit(struct sctp_packet *packet)
606 } 601 }
607 } 602 }
608 603
609 SCTP_DEBUG_PRINTK("***sctp_transmit_packet*** skb len %d\n", 604 pr_debug("***sctp_transmit_packet*** skb->len:%d\n", nskb->len);
610 nskb->len);
611 605
612 nskb->local_df = packet->ipfragok; 606 nskb->local_df = packet->ipfragok;
613 (*tp->af_specific->sctp_xmit)(nskb, tp); 607 (*tp->af_specific->sctp_xmit)(nskb, tp);
diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c
index be35e2dbcc9a..ef9e2bbc0f2f 100644
--- a/net/sctp/outqueue.c
+++ b/net/sctp/outqueue.c
@@ -299,10 +299,10 @@ int sctp_outq_tail(struct sctp_outq *q, struct sctp_chunk *chunk)
299 struct net *net = sock_net(q->asoc->base.sk); 299 struct net *net = sock_net(q->asoc->base.sk);
300 int error = 0; 300 int error = 0;
301 301
302 SCTP_DEBUG_PRINTK("sctp_outq_tail(%p, %p[%s])\n", 302 pr_debug("%s: outq:%p, chunk:%p[%s]\n", __func__, q, chunk,
303 q, chunk, chunk && chunk->chunk_hdr ? 303 chunk && chunk->chunk_hdr ?
304 sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)) 304 sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)) :
305 : "Illegal Chunk"); 305 "illegal chunk");
306 306
307 /* If it is data, queue it up, otherwise, send it 307 /* If it is data, queue it up, otherwise, send it
308 * immediately. 308 * immediately.
@@ -328,10 +328,10 @@ int sctp_outq_tail(struct sctp_outq *q, struct sctp_chunk *chunk)
328 break; 328 break;
329 329
330 default: 330 default:
331 SCTP_DEBUG_PRINTK("outqueueing (%p, %p[%s])\n", 331 pr_debug("%s: outqueueing: outq:%p, chunk:%p[%s])\n",
332 q, chunk, chunk && chunk->chunk_hdr ? 332 __func__, q, chunk, chunk && chunk->chunk_hdr ?
333 sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)) 333 sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)) :
334 : "Illegal Chunk"); 334 "illegal chunk");
335 335
336 sctp_outq_tail_data(q, chunk); 336 sctp_outq_tail_data(q, chunk);
337 if (chunk->chunk_hdr->flags & SCTP_DATA_UNORDERED) 337 if (chunk->chunk_hdr->flags & SCTP_DATA_UNORDERED)
@@ -460,14 +460,10 @@ void sctp_retransmit_mark(struct sctp_outq *q,
460 } 460 }
461 } 461 }
462 462
463 SCTP_DEBUG_PRINTK("%s: transport: %p, reason: %d, " 463 pr_debug("%s: transport:%p, reason:%d, cwnd:%d, ssthresh:%d, "
464 "cwnd: %d, ssthresh: %d, flight_size: %d, " 464 "flight_size:%d, pba:%d\n", __func__, transport, reason,
465 "pba: %d\n", __func__, 465 transport->cwnd, transport->ssthresh, transport->flight_size,
466 transport, reason, 466 transport->partial_bytes_acked);
467 transport->cwnd, transport->ssthresh,
468 transport->flight_size,
469 transport->partial_bytes_acked);
470
471} 467}
472 468
473/* Mark all the eligible packets on a transport for retransmission and force 469/* Mark all the eligible packets on a transport for retransmission and force
@@ -1014,19 +1010,13 @@ static int sctp_outq_flush(struct sctp_outq *q, int rtx_timeout)
1014 sctp_transport_burst_limited(transport); 1010 sctp_transport_burst_limited(transport);
1015 } 1011 }
1016 1012
1017 SCTP_DEBUG_PRINTK("sctp_outq_flush(%p, %p[%s]), ", 1013 pr_debug("%s: outq:%p, chunk:%p[%s], tx-tsn:0x%x skb->head:%p "
1018 q, chunk, 1014 "skb->users:%d\n",
1019 chunk && chunk->chunk_hdr ? 1015 __func__, q, chunk, chunk && chunk->chunk_hdr ?
1020 sctp_cname(SCTP_ST_CHUNK( 1016 sctp_cname(SCTP_ST_CHUNK(chunk->chunk_hdr->type)) :
1021 chunk->chunk_hdr->type)) 1017 "illegal chunk", ntohl(chunk->subh.data_hdr->tsn),
1022 : "Illegal Chunk"); 1018 chunk->skb ? chunk->skb->head : NULL, chunk->skb ?
1023 1019 atomic_read(&chunk->skb->users) : -1);
1024 SCTP_DEBUG_PRINTK("TX TSN 0x%x skb->head "
1025 "%p skb->users %d.\n",
1026 ntohl(chunk->subh.data_hdr->tsn),
1027 chunk->skb ?chunk->skb->head : NULL,
1028 chunk->skb ?
1029 atomic_read(&chunk->skb->users) : -1);
1030 1020
1031 /* Add the chunk to the packet. */ 1021 /* Add the chunk to the packet. */
1032 status = sctp_packet_transmit_chunk(packet, chunk, 0); 1022 status = sctp_packet_transmit_chunk(packet, chunk, 0);
@@ -1038,10 +1028,10 @@ static int sctp_outq_flush(struct sctp_outq *q, int rtx_timeout)
1038 /* We could not append this chunk, so put 1028 /* We could not append this chunk, so put
1039 * the chunk back on the output queue. 1029 * the chunk back on the output queue.
1040 */ 1030 */
1041 SCTP_DEBUG_PRINTK("sctp_outq_flush: could " 1031 pr_debug("%s: could not transmit tsn:0x%x, status:%d\n",
1042 "not transmit TSN: 0x%x, status: %d\n", 1032 __func__, ntohl(chunk->subh.data_hdr->tsn),
1043 ntohl(chunk->subh.data_hdr->tsn), 1033 status);
1044 status); 1034
1045 sctp_outq_head_data(q, chunk); 1035 sctp_outq_head_data(q, chunk);
1046 goto sctp_flush_out; 1036 goto sctp_flush_out;
1047 break; 1037 break;
@@ -1284,11 +1274,10 @@ int sctp_outq_sack(struct sctp_outq *q, struct sctp_chunk *chunk)
1284 1274
1285 sctp_generate_fwdtsn(q, sack_ctsn); 1275 sctp_generate_fwdtsn(q, sack_ctsn);
1286 1276
1287 SCTP_DEBUG_PRINTK("%s: sack Cumulative TSN Ack is 0x%x.\n", 1277 pr_debug("%s: sack cumulative tsn ack:0x%x\n", __func__, sack_ctsn);
1288 __func__, sack_ctsn); 1278 pr_debug("%s: cumulative tsn ack of assoc:%p is 0x%x, "
1289 SCTP_DEBUG_PRINTK("%s: Cumulative TSN Ack of association, " 1279 "advertised peer ack point:0x%x\n", __func__, asoc, ctsn,
1290 "%p is 0x%x. Adv peer ack point: 0x%x\n", 1280 asoc->adv_peer_ack_point);
1291 __func__, asoc, ctsn, asoc->adv_peer_ack_point);
1292 1281
1293 /* See if all chunks are acked. 1282 /* See if all chunks are acked.
1294 * Make sure the empty queue handler will get run later. 1283 * Make sure the empty queue handler will get run later.
@@ -1304,7 +1293,7 @@ int sctp_outq_sack(struct sctp_outq *q, struct sctp_chunk *chunk)
1304 goto finish; 1293 goto finish;
1305 } 1294 }
1306 1295
1307 SCTP_DEBUG_PRINTK("sack queue is empty.\n"); 1296 pr_debug("%s: sack queue is empty\n", __func__);
1308finish: 1297finish:
1309 return q->empty; 1298 return q->empty;
1310} 1299}
@@ -1345,21 +1334,7 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1345 __u8 restart_timer = 0; 1334 __u8 restart_timer = 0;
1346 int bytes_acked = 0; 1335 int bytes_acked = 0;
1347 int migrate_bytes = 0; 1336 int migrate_bytes = 0;
1348 1337 bool forward_progress = false;
1349 /* These state variables are for coherent debug output. --xguo */
1350
1351#if SCTP_DEBUG
1352 __u32 dbg_ack_tsn = 0; /* An ACKed TSN range starts here... */
1353 __u32 dbg_last_ack_tsn = 0; /* ...and finishes here. */
1354 __u32 dbg_kept_tsn = 0; /* An un-ACKed range starts here... */
1355 __u32 dbg_last_kept_tsn = 0; /* ...and finishes here. */
1356
1357 /* 0 : The last TSN was ACKed.
1358 * 1 : The last TSN was NOT ACKed (i.e. KEPT).
1359 * -1: We need to initialize.
1360 */
1361 int dbg_prt_state = -1;
1362#endif /* SCTP_DEBUG */
1363 1338
1364 sack_ctsn = ntohl(sack->cum_tsn_ack); 1339 sack_ctsn = ntohl(sack->cum_tsn_ack);
1365 1340
@@ -1426,6 +1401,7 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1426 bytes_acked += sctp_data_size(tchunk); 1401 bytes_acked += sctp_data_size(tchunk);
1427 if (!tchunk->transport) 1402 if (!tchunk->transport)
1428 migrate_bytes += sctp_data_size(tchunk); 1403 migrate_bytes += sctp_data_size(tchunk);
1404 forward_progress = true;
1429 } 1405 }
1430 1406
1431 if (TSN_lte(tsn, sack_ctsn)) { 1407 if (TSN_lte(tsn, sack_ctsn)) {
@@ -1439,6 +1415,7 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1439 * current RTO. 1415 * current RTO.
1440 */ 1416 */
1441 restart_timer = 1; 1417 restart_timer = 1;
1418 forward_progress = true;
1442 1419
1443 if (!tchunk->tsn_gap_acked) { 1420 if (!tchunk->tsn_gap_acked) {
1444 /* 1421 /*
@@ -1482,57 +1459,11 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1482 */ 1459 */
1483 list_add_tail(lchunk, &tlist); 1460 list_add_tail(lchunk, &tlist);
1484 } 1461 }
1485
1486#if SCTP_DEBUG
1487 switch (dbg_prt_state) {
1488 case 0: /* last TSN was ACKed */
1489 if (dbg_last_ack_tsn + 1 == tsn) {
1490 /* This TSN belongs to the
1491 * current ACK range.
1492 */
1493 break;
1494 }
1495
1496 if (dbg_last_ack_tsn != dbg_ack_tsn) {
1497 /* Display the end of the
1498 * current range.
1499 */
1500 SCTP_DEBUG_PRINTK_CONT("-%08x",
1501 dbg_last_ack_tsn);
1502 }
1503
1504 /* Start a new range. */
1505 SCTP_DEBUG_PRINTK_CONT(",%08x", tsn);
1506 dbg_ack_tsn = tsn;
1507 break;
1508
1509 case 1: /* The last TSN was NOT ACKed. */
1510 if (dbg_last_kept_tsn != dbg_kept_tsn) {
1511 /* Display the end of current range. */
1512 SCTP_DEBUG_PRINTK_CONT("-%08x",
1513 dbg_last_kept_tsn);
1514 }
1515
1516 SCTP_DEBUG_PRINTK_CONT("\n");
1517
1518 /* FALL THROUGH... */
1519 default:
1520 /* This is the first-ever TSN we examined. */
1521 /* Start a new range of ACK-ed TSNs. */
1522 SCTP_DEBUG_PRINTK("ACKed: %08x", tsn);
1523 dbg_prt_state = 0;
1524 dbg_ack_tsn = tsn;
1525 }
1526
1527 dbg_last_ack_tsn = tsn;
1528#endif /* SCTP_DEBUG */
1529
1530 } else { 1462 } else {
1531 if (tchunk->tsn_gap_acked) { 1463 if (tchunk->tsn_gap_acked) {
1532 SCTP_DEBUG_PRINTK("%s: Receiver reneged on " 1464 pr_debug("%s: receiver reneged on data TSN:0x%x\n",
1533 "data TSN: 0x%x\n", 1465 __func__, tsn);
1534 __func__, 1466
1535 tsn);
1536 tchunk->tsn_gap_acked = 0; 1467 tchunk->tsn_gap_acked = 0;
1537 1468
1538 if (tchunk->transport) 1469 if (tchunk->transport)
@@ -1551,59 +1482,9 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1551 } 1482 }
1552 1483
1553 list_add_tail(lchunk, &tlist); 1484 list_add_tail(lchunk, &tlist);
1554
1555#if SCTP_DEBUG
1556 /* See the above comments on ACK-ed TSNs. */
1557 switch (dbg_prt_state) {
1558 case 1:
1559 if (dbg_last_kept_tsn + 1 == tsn)
1560 break;
1561
1562 if (dbg_last_kept_tsn != dbg_kept_tsn)
1563 SCTP_DEBUG_PRINTK_CONT("-%08x",
1564 dbg_last_kept_tsn);
1565
1566 SCTP_DEBUG_PRINTK_CONT(",%08x", tsn);
1567 dbg_kept_tsn = tsn;
1568 break;
1569
1570 case 0:
1571 if (dbg_last_ack_tsn != dbg_ack_tsn)
1572 SCTP_DEBUG_PRINTK_CONT("-%08x",
1573 dbg_last_ack_tsn);
1574 SCTP_DEBUG_PRINTK_CONT("\n");
1575
1576 /* FALL THROUGH... */
1577 default:
1578 SCTP_DEBUG_PRINTK("KEPT: %08x",tsn);
1579 dbg_prt_state = 1;
1580 dbg_kept_tsn = tsn;
1581 }
1582
1583 dbg_last_kept_tsn = tsn;
1584#endif /* SCTP_DEBUG */
1585 } 1485 }
1586 } 1486 }
1587 1487
1588#if SCTP_DEBUG
1589 /* Finish off the last range, displaying its ending TSN. */
1590 switch (dbg_prt_state) {
1591 case 0:
1592 if (dbg_last_ack_tsn != dbg_ack_tsn) {
1593 SCTP_DEBUG_PRINTK_CONT("-%08x\n", dbg_last_ack_tsn);
1594 } else {
1595 SCTP_DEBUG_PRINTK_CONT("\n");
1596 }
1597 break;
1598
1599 case 1:
1600 if (dbg_last_kept_tsn != dbg_kept_tsn) {
1601 SCTP_DEBUG_PRINTK_CONT("-%08x\n", dbg_last_kept_tsn);
1602 } else {
1603 SCTP_DEBUG_PRINTK_CONT("\n");
1604 }
1605 }
1606#endif /* SCTP_DEBUG */
1607 if (transport) { 1488 if (transport) {
1608 if (bytes_acked) { 1489 if (bytes_acked) {
1609 struct sctp_association *asoc = transport->asoc; 1490 struct sctp_association *asoc = transport->asoc;
@@ -1625,6 +1506,7 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1625 */ 1506 */
1626 transport->error_count = 0; 1507 transport->error_count = 0;
1627 transport->asoc->overall_error_count = 0; 1508 transport->asoc->overall_error_count = 0;
1509 forward_progress = true;
1628 1510
1629 /* 1511 /*
1630 * While in SHUTDOWN PENDING, we may have started 1512 * While in SHUTDOWN PENDING, we may have started
@@ -1676,9 +1558,9 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1676 !list_empty(&tlist) && 1558 !list_empty(&tlist) &&
1677 (sack_ctsn+2 == q->asoc->next_tsn) && 1559 (sack_ctsn+2 == q->asoc->next_tsn) &&
1678 q->asoc->state < SCTP_STATE_SHUTDOWN_PENDING) { 1560 q->asoc->state < SCTP_STATE_SHUTDOWN_PENDING) {
1679 SCTP_DEBUG_PRINTK("%s: SACK received for zero " 1561 pr_debug("%s: sack received for zero window "
1680 "window probe: %u\n", 1562 "probe:%u\n", __func__, sack_ctsn);
1681 __func__, sack_ctsn); 1563
1682 q->asoc->overall_error_count = 0; 1564 q->asoc->overall_error_count = 0;
1683 transport->error_count = 0; 1565 transport->error_count = 0;
1684 } 1566 }
@@ -1698,6 +1580,11 @@ static void sctp_check_transmitted(struct sctp_outq *q,
1698 jiffies + transport->rto)) 1580 jiffies + transport->rto))
1699 sctp_transport_hold(transport); 1581 sctp_transport_hold(transport);
1700 } 1582 }
1583
1584 if (forward_progress) {
1585 if (transport->dst)
1586 dst_confirm(transport->dst);
1587 }
1701 } 1588 }
1702 1589
1703 list_splice(&tlist, transmitted_queue); 1590 list_splice(&tlist, transmitted_queue);
@@ -1739,10 +1626,8 @@ static void sctp_mark_missing(struct sctp_outq *q,
1739 count_of_newacks, tsn)) { 1626 count_of_newacks, tsn)) {
1740 chunk->tsn_missing_report++; 1627 chunk->tsn_missing_report++;
1741 1628
1742 SCTP_DEBUG_PRINTK( 1629 pr_debug("%s: tsn:0x%x missing counter:%d\n",
1743 "%s: TSN 0x%x missing counter: %d\n", 1630 __func__, tsn, chunk->tsn_missing_report);
1744 __func__, tsn,
1745 chunk->tsn_missing_report);
1746 } 1631 }
1747 } 1632 }
1748 /* 1633 /*
@@ -1762,11 +1647,10 @@ static void sctp_mark_missing(struct sctp_outq *q,
1762 if (do_fast_retransmit) 1647 if (do_fast_retransmit)
1763 sctp_retransmit(q, transport, SCTP_RTXR_FAST_RTX); 1648 sctp_retransmit(q, transport, SCTP_RTXR_FAST_RTX);
1764 1649
1765 SCTP_DEBUG_PRINTK("%s: transport: %p, cwnd: %d, " 1650 pr_debug("%s: transport:%p, cwnd:%d, ssthresh:%d, "
1766 "ssthresh: %d, flight_size: %d, pba: %d\n", 1651 "flight_size:%d, pba:%d\n", __func__, transport,
1767 __func__, transport, transport->cwnd, 1652 transport->cwnd, transport->ssthresh,
1768 transport->ssthresh, transport->flight_size, 1653 transport->flight_size, transport->partial_bytes_acked);
1769 transport->partial_bytes_acked);
1770 } 1654 }
1771} 1655}
1772 1656
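Beyond the debug-print conversions, the `sctp_check_transmitted()` hunks above add a `forward_progress` flag: each place that observes newly acked data sets it, and after the scan the route is confirmed once via `dst_confirm()`. A sketch of that accumulate-then-act shape (the `dst` struct and array-of-acks are toys, not the kernel's chunk lists):

```c
#include <stdbool.h>
#include <stddef.h>

struct dst { int confirmed; };

static void dst_confirm(struct dst *d) { d->confirmed++; }

/* Scan once, remember whether anything moved forward, then confirm the
 * route a single time afterwards rather than inside the loop. */
static void check_transmitted(struct dst *dst, const int *acked, size_t n)
{
    bool forward_progress = false;

    for (size_t i = 0; i < n; i++)
        if (acked[i])
            forward_progress = true;

    if (forward_progress && dst)
        dst_confirm(dst);
}
```

Setting a flag in several branches and acting on it once keeps the side effect out of the hot loop and guarantees it happens at most once per pass.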
diff --git a/net/sctp/proc.c b/net/sctp/proc.c
index 4e45ee35d0db..62526c477050 100644
--- a/net/sctp/proc.c
+++ b/net/sctp/proc.c
@@ -134,9 +134,15 @@ static void sctp_seq_dump_local_addrs(struct seq_file *seq, struct sctp_ep_commo
134 struct sctp_af *af; 134 struct sctp_af *af;
135 135
136 if (epb->type == SCTP_EP_TYPE_ASSOCIATION) { 136 if (epb->type == SCTP_EP_TYPE_ASSOCIATION) {
137 asoc = sctp_assoc(epb); 137 asoc = sctp_assoc(epb);
138 peer = asoc->peer.primary_path; 138
139 primary = &peer->saddr; 139 peer = asoc->peer.primary_path;
140 if (unlikely(peer == NULL)) {
141 WARN(1, "Association %p with NULL primary path!\n", asoc);
142 return;
143 }
144
145 primary = &peer->saddr;
140 } 146 }
141 147
142 rcu_read_lock(); 148 rcu_read_lock();
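The proc.c hunk above hardens `sctp_seq_dump_local_addrs()` with a `WARN()`-and-return on a NULL primary path, reporting the broken invariant loudly instead of dereferencing the pointer. A userspace sketch of that defensive pattern (the structs and helper are hypothetical):

```c
#include <stdio.h>

struct transport { int port; };
struct assoc { struct transport *primary_path; };

static int warnings;    /* stands in for the kernel's WARN() side effect */

/* Report the violated invariant once, then bail out early rather than
 * crash on a NULL dereference. */
static int primary_port(const struct assoc *asoc)
{
    const struct transport *peer = asoc->primary_path;

    if (peer == NULL) {
        warnings++;
        fprintf(stderr, "association %p with NULL primary path!\n",
                (const void *)asoc);
        return -1;
    }
    return peer->port;
}
```

Unlike a silent `if (!peer) return;`, the warning makes the "impossible" state visible in logs so the real bug can be found.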
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index eaee00c61139..4a17494d736c 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -153,7 +153,7 @@ static void sctp_v4_copy_addrlist(struct list_head *addrlist,
153 153
154 for (ifa = in_dev->ifa_list; ifa; ifa = ifa->ifa_next) { 154 for (ifa = in_dev->ifa_list; ifa; ifa = ifa->ifa_next) {
155 /* Add the address to the local list. */ 155 /* Add the address to the local list. */
156 addr = t_new(struct sctp_sockaddr_entry, GFP_ATOMIC); 156 addr = kzalloc(sizeof(*addr), GFP_ATOMIC);
157 if (addr) { 157 if (addr) {
158 addr->a.v4.sin_family = AF_INET; 158 addr->a.v4.sin_family = AF_INET;
159 addr->a.v4.sin_port = 0; 159 addr->a.v4.sin_port = 0;
@@ -178,7 +178,7 @@ static void sctp_get_local_addr_list(struct net *net)
178 178
179 rcu_read_lock(); 179 rcu_read_lock();
180 for_each_netdev_rcu(net, dev) { 180 for_each_netdev_rcu(net, dev) {
181 __list_for_each(pos, &sctp_address_families) { 181 list_for_each(pos, &sctp_address_families) {
182 af = list_entry(pos, struct sctp_af, list); 182 af = list_entry(pos, struct sctp_af, list);
183 af->copy_addrlist(&net->sctp.local_addr_list, dev); 183 af->copy_addrlist(&net->sctp.local_addr_list, dev);
184 } 184 }
@@ -451,8 +451,8 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
451 fl4->fl4_sport = saddr->v4.sin_port; 451 fl4->fl4_sport = saddr->v4.sin_port;
452 } 452 }
453 453
454 SCTP_DEBUG_PRINTK("%s: DST:%pI4, SRC:%pI4 - ", 454 pr_debug("%s: dst:%pI4, src:%pI4 - ", __func__, &fl4->daddr,
455 __func__, &fl4->daddr, &fl4->saddr); 455 &fl4->saddr);
456 456
457 rt = ip_route_output_key(sock_net(sk), fl4); 457 rt = ip_route_output_key(sock_net(sk), fl4);
458 if (!IS_ERR(rt)) 458 if (!IS_ERR(rt))
@@ -513,10 +513,10 @@ out_unlock:
513out: 513out:
514 t->dst = dst; 514 t->dst = dst;
515 if (dst) 515 if (dst)
516 SCTP_DEBUG_PRINTK("rt_dst:%pI4, rt_src:%pI4\n", 516 pr_debug("rt_dst:%pI4, rt_src:%pI4\n",
517 &fl4->daddr, &fl4->saddr); 517 &fl4->daddr, &fl4->saddr);
518 else 518 else
519 SCTP_DEBUG_PRINTK("NO ROUTE\n"); 519 pr_debug("no route\n");
520} 520}
521 521
522/* For v4, the source address is cached in the route entry(dst). So no need 522/* For v4, the source address is cached in the route entry(dst). So no need
@@ -604,9 +604,9 @@ static void sctp_addr_wq_timeout_handler(unsigned long arg)
604 spin_lock_bh(&net->sctp.addr_wq_lock); 604 spin_lock_bh(&net->sctp.addr_wq_lock);
605 605
606 list_for_each_entry_safe(addrw, temp, &net->sctp.addr_waitq, list) { 606 list_for_each_entry_safe(addrw, temp, &net->sctp.addr_waitq, list) {
607 SCTP_DEBUG_PRINTK_IPADDR("sctp_addrwq_timo_handler: the first ent in wq %p is ", 607 pr_debug("%s: the first ent in wq:%p is addr:%pISc for cmd:%d at "
608 " for cmd %d at entry %p\n", &net->sctp.addr_waitq, &addrw->a, addrw->state, 608 "entry:%p\n", __func__, &net->sctp.addr_waitq, &addrw->a.sa,
609 addrw); 609 addrw->state, addrw);
610 610
611#if IS_ENABLED(CONFIG_IPV6) 611#if IS_ENABLED(CONFIG_IPV6)
612 /* Now we send an ASCONF for each association */ 612 /* Now we send an ASCONF for each association */
@@ -623,8 +623,10 @@ static void sctp_addr_wq_timeout_handler(unsigned long arg)
623 addrw->state == SCTP_ADDR_NEW) { 623 addrw->state == SCTP_ADDR_NEW) {
624 unsigned long timeo_val; 624 unsigned long timeo_val;
625 625
626 SCTP_DEBUG_PRINTK("sctp_timo_handler: this is on DAD, trying %d sec later\n", 626 pr_debug("%s: this is on DAD, trying %d sec "
627 SCTP_ADDRESS_TICK_DELAY); 627 "later\n", __func__,
628 SCTP_ADDRESS_TICK_DELAY);
629
628 timeo_val = jiffies; 630 timeo_val = jiffies;
629 timeo_val += msecs_to_jiffies(SCTP_ADDRESS_TICK_DELAY); 631 timeo_val += msecs_to_jiffies(SCTP_ADDRESS_TICK_DELAY);
630 mod_timer(&net->sctp.addr_wq_timer, timeo_val); 632 mod_timer(&net->sctp.addr_wq_timer, timeo_val);
@@ -641,7 +643,7 @@ static void sctp_addr_wq_timeout_handler(unsigned long arg)
641 continue; 643 continue;
642 sctp_bh_lock_sock(sk); 644 sctp_bh_lock_sock(sk);
643 if (sctp_asconf_mgmt(sp, addrw) < 0) 645 if (sctp_asconf_mgmt(sp, addrw) < 0)
644 SCTP_DEBUG_PRINTK("sctp_addrwq_timo_handler: sctp_asconf_mgmt failed\n"); 646 pr_debug("%s: sctp_asconf_mgmt failed\n", __func__);
645 sctp_bh_unlock_sock(sk); 647 sctp_bh_unlock_sock(sk);
646 } 648 }
647#if IS_ENABLED(CONFIG_IPV6) 649#if IS_ENABLED(CONFIG_IPV6)
@@ -707,9 +709,10 @@ void sctp_addr_wq_mgmt(struct net *net, struct sctp_sockaddr_entry *addr, int cm
707 addrw = sctp_addr_wq_lookup(net, addr); 709 addrw = sctp_addr_wq_lookup(net, addr);
708 if (addrw) { 710 if (addrw) {
709 if (addrw->state != cmd) { 711 if (addrw->state != cmd) {
710 SCTP_DEBUG_PRINTK_IPADDR("sctp_addr_wq_mgmt offsets existing entry for %d ", 712 pr_debug("%s: offsets existing entry for %d, addr:%pISc "
711 " in wq %p\n", addrw->state, &addrw->a, 713 "in wq:%p\n", __func__, addrw->state, &addrw->a.sa,
712 &net->sctp.addr_waitq); 714 &net->sctp.addr_waitq);
715
713 list_del(&addrw->list); 716 list_del(&addrw->list);
714 kfree(addrw); 717 kfree(addrw);
715 } 718 }
@@ -725,8 +728,9 @@ void sctp_addr_wq_mgmt(struct net *net, struct sctp_sockaddr_entry *addr, int cm
725 } 728 }
726 addrw->state = cmd; 729 addrw->state = cmd;
727 list_add_tail(&addrw->list, &net->sctp.addr_waitq); 730 list_add_tail(&addrw->list, &net->sctp.addr_waitq);
728 SCTP_DEBUG_PRINTK_IPADDR("sctp_addr_wq_mgmt add new entry for cmd:%d ", 731
729 " in wq %p\n", addrw->state, &addrw->a, &net->sctp.addr_waitq); 732 pr_debug("%s: add new entry for cmd:%d, addr:%pISc in wq:%p\n",
733 __func__, addrw->state, &addrw->a.sa, &net->sctp.addr_waitq);
730 734
731 if (!timer_pending(&net->sctp.addr_wq_timer)) { 735 if (!timer_pending(&net->sctp.addr_wq_timer)) {
732 timeo_val = jiffies; 736 timeo_val = jiffies;
@@ -952,15 +956,14 @@ static inline int sctp_v4_xmit(struct sk_buff *skb,
 {
 	struct inet_sock *inet = inet_sk(skb->sk);
 
-	SCTP_DEBUG_PRINTK("%s: skb:%p, len:%d, src:%pI4, dst:%pI4\n",
-			  __func__, skb, skb->len,
-			  &transport->fl.u.ip4.saddr,
-			  &transport->fl.u.ip4.daddr);
+	pr_debug("%s: skb:%p, len:%d, src:%pI4, dst:%pI4\n", __func__, skb,
+		 skb->len, &transport->fl.u.ip4.saddr, &transport->fl.u.ip4.daddr);
 
 	inet->pmtudisc = transport->param_flags & SPP_PMTUD_ENABLE ?
 			 IP_PMTUDISC_DO : IP_PMTUDISC_DONT;
 
 	SCTP_INC_STATS(sock_net(&inet->sk), SCTP_MIB_OUTSCTPPACKS);
+
 	return ip_queue_xmit(skb, &transport->fl);
 }
 
@@ -1312,7 +1315,7 @@ static struct pernet_operations sctp_net_ops = {
 };
 
 /* Initialize the universe into something sensible.  */
-SCTP_STATIC __init int sctp_init(void)
+static __init int sctp_init(void)
 {
 	int i;
 	int status = -EINVAL;
@@ -1321,9 +1324,8 @@ SCTP_STATIC __init int sctp_init(void)
 	int max_share;
 	int order;
 
-	/* SCTP_DEBUG sanity check. */
-	if (!sctp_sanity_check())
-		goto out;
+	BUILD_BUG_ON(sizeof(struct sctp_ulpevent) >
+		     sizeof(((struct sk_buff *) 0)->cb));
 
 	/* Allocate bind_bucket and chunk caches. */
 	status = -ENOBUFS;
@@ -1499,7 +1501,7 @@ err_chunk_cachep:
 }
 
 /* Exit handler for the SCTP protocol.  */
-SCTP_STATIC __exit void sctp_exit(void)
+static __exit void sctp_exit(void)
 {
 	/* BUG.  This should probably do something useful like clean
 	 * up all the remaining associations and all that memory.
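The hunk above replaces a runtime `sctp_sanity_check()` with a compile-time `BUILD_BUG_ON()` that the control buffer of an skb can hold a `struct sctp_ulpevent`. As a minimal userspace sketch of that pattern (the struct names and sizes here are made up for illustration, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of BUILD_BUG_ON(): the condition is evaluated at
 * compile time, so a too-large struct fails the build (negative array
 * size) instead of failing, or being silently skipped, at init time. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

struct fake_skb_cb   { char data[48]; };  /* stand-in for skb->cb */
struct fake_ulpevent { char data[40]; };  /* stand-in, must fit in cb */

static void sanity(void)
{
	/* Compiles only while fake_ulpevent fits inside fake_skb_cb. */
	BUILD_BUG_ON(sizeof(struct fake_ulpevent) >
		     sizeof(struct fake_skb_cb));
}
```

The design point: an invariant that depends only on type layout costs nothing to check at build time, so the old `goto out` failure path can be deleted entirely.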
diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
index cf579e71cff0..362ae6e2fd93 100644
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -68,9 +68,8 @@
 #include <net/sctp/sctp.h>
 #include <net/sctp/sm.h>
 
-SCTP_STATIC
-struct sctp_chunk *sctp_make_chunk(const struct sctp_association *asoc,
-				   __u8 type, __u8 flags, int paylen);
+static struct sctp_chunk *sctp_make_chunk(const struct sctp_association *asoc,
+					  __u8 type, __u8 flags, int paylen);
 static sctp_cookie_param_t *sctp_pack_cookie(const struct sctp_endpoint *ep,
 					     const struct sctp_association *asoc,
 					     const struct sctp_chunk *init_chunk,
@@ -742,7 +741,8 @@ struct sctp_chunk *sctp_make_sack(const struct sctp_association *asoc)
 
 	memset(gabs, 0, sizeof(gabs));
 	ctsn = sctp_tsnmap_get_ctsn(map);
-	SCTP_DEBUG_PRINTK("sackCTSNAck sent:  0x%x.\n", ctsn);
+
+	pr_debug("%s: sackCTSNAck sent:0x%x\n", __func__, ctsn);
 
 	/* How much room is needed in the chunk? */
 	num_gabs = sctp_tsnmap_num_gabs(map, gabs);
@@ -1288,10 +1288,8 @@ struct sctp_chunk *sctp_chunkify(struct sk_buff *skb,
 
 	if (!retval)
 		goto nodata;
-
-	if (!sk) {
-		SCTP_DEBUG_PRINTK("chunkifying skb %p w/o an sk\n", skb);
-	}
+	if (!sk)
+		pr_debug("%s: chunkifying skb:%p w/o an sk\n", __func__, skb);
 
 	INIT_LIST_HEAD(&retval->list);
 	retval->skb = skb;
@@ -1353,9 +1351,8 @@ const union sctp_addr *sctp_source(const struct sctp_chunk *chunk)
 /* Create a new chunk, setting the type and flags headers from the
  * arguments, reserving enough space for a 'paylen' byte payload.
  */
-SCTP_STATIC
-struct sctp_chunk *sctp_make_chunk(const struct sctp_association *asoc,
-				   __u8 type, __u8 flags, int paylen)
+static struct sctp_chunk *sctp_make_chunk(const struct sctp_association *asoc,
+					  __u8 type, __u8 flags, int paylen)
 {
 	struct sctp_chunk *retval;
 	sctp_chunkhdr_t *chunk_hdr;
@@ -1632,8 +1629,8 @@ static sctp_cookie_param_t *sctp_pack_cookie(const struct sctp_endpoint *ep,
 	cookie->c.adaptation_ind = asoc->peer.adaptation_ind;
 
 	/* Set an expiration time for the cookie.  */
-	do_gettimeofday(&cookie->c.expiration);
-	TIMEVAL_ADD(asoc->cookie_life, cookie->c.expiration);
+	cookie->c.expiration = ktime_add(asoc->cookie_life,
+					 ktime_get());
 
 	/* Copy the peer's init packet.  */
 	memcpy(&cookie->c.peer_init[0], init_chunk->chunk_hdr,
@@ -1682,7 +1679,7 @@ struct sctp_association *sctp_unpack_cookie(
 	unsigned int len;
 	sctp_scope_t scope;
 	struct sk_buff *skb = chunk->skb;
-	struct timeval tv;
+	ktime_t kt;
 	struct hash_desc desc;
 
 	/* Header size is static data prior to the actual cookie, including
@@ -1759,11 +1756,11 @@ no_hmac:
 	 * down the new association establishment instead of every packet.
 	 */
 	if (sock_flag(ep->base.sk, SOCK_TIMESTAMP))
-		skb_get_timestamp(skb, &tv);
+		kt = skb_get_ktime(skb);
 	else
-		do_gettimeofday(&tv);
+		kt = ktime_get();
 
-	if (!asoc && tv_lt(bear_cookie->expiration, tv)) {
+	if (!asoc && ktime_compare(bear_cookie->expiration, kt) < 0) {
 		/*
 		 * Section 3.3.10.3 Stale Cookie Error (3)
 		 *
@@ -1775,9 +1772,7 @@ no_hmac:
 		len = ntohs(chunk->chunk_hdr->length);
 		*errp = sctp_make_op_error_space(asoc, chunk, len);
 		if (*errp) {
-			suseconds_t usecs = (tv.tv_sec -
-				bear_cookie->expiration.tv_sec) * 1000000L +
-				tv.tv_usec - bear_cookie->expiration.tv_usec;
+			suseconds_t usecs = ktime_to_us(ktime_sub(kt, bear_cookie->expiration));
 			__be32 n = htonl(usecs);
 
 			sctp_init_cause(*errp, SCTP_ERROR_STALE_COOKIE,
@@ -2195,8 +2190,9 @@ static sctp_ierror_t sctp_verify_param(struct net *net,
 		break;
 fallthrough:
 	default:
-		SCTP_DEBUG_PRINTK("Unrecognized param: %d for chunk %d.\n",
-				ntohs(param.p->type), cid);
+		pr_debug("%s: unrecognized param:%d for chunk:%d\n",
+			 __func__, ntohs(param.p->type), cid);
+
 		retval = sctp_process_unk_param(asoc, param, chunk, err_chunk);
 		break;
 	}
@@ -2516,12 +2512,11 @@ do_addr_param:
 		/* Suggested Cookie Life span increment's unit is msec,
 		 * (1/1000sec).
 		 */
-		asoc->cookie_life.tv_sec += stale / 1000;
-		asoc->cookie_life.tv_usec += (stale % 1000) * 1000;
+		asoc->cookie_life = ktime_add_ms(asoc->cookie_life, stale);
 		break;
 
 	case SCTP_PARAM_HOST_NAME_ADDRESS:
-		SCTP_DEBUG_PRINTK("unimplemented SCTP_HOST_NAME_ADDRESS\n");
+		pr_debug("%s: unimplemented SCTP_HOST_NAME_ADDRESS\n", __func__);
 		break;
 
 	case SCTP_PARAM_SUPPORTED_ADDRESS_TYPES:
@@ -2667,8 +2662,8 @@ fall_through:
 	 * called prior to this routine.  Simply log the error
 	 * here.
 	 */
-	SCTP_DEBUG_PRINTK("Ignoring param: %d for association %p.\n",
-			  ntohs(param.p->type), asoc);
+	pr_debug("%s: ignoring param:%d for association:%p.\n",
+		 __func__, ntohs(param.p->type), asoc);
 	break;
 }
 
@@ -2810,7 +2805,10 @@ struct sctp_chunk *sctp_make_asconf_update_ip(struct sctp_association *asoc,
 			totallen += paramlen;
 			totallen += addr_param_len;
 			del_pickup = 1;
-			SCTP_DEBUG_PRINTK("mkasconf_update_ip: picked same-scope del_pending addr, totallen for all addresses is %d\n", totallen);
+
+			pr_debug("%s: picked same-scope del_pending addr, "
+				 "totallen for all addresses is %d\n",
+				 __func__, totallen);
 		}
 	}
 
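The sm_make_chunk.c hunks above migrate cookie lifetimes from `struct timeval` (seconds plus microseconds, two fields to carry and add) to `ktime_t` (a single signed 64-bit nanosecond count, plain integer arithmetic). A minimal userspace model of the three helpers the diff relies on — the names mirror the kernel API, but this is a sketch, not the kernel implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical userspace model: ktime_t as signed 64-bit nanoseconds. */
typedef int64_t ktime_t;

#define NSEC_PER_MSEC 1000000LL
#define NSEC_PER_USEC 1000LL

/* Extend a lifetime by milliseconds: one multiply-add, no carry logic. */
static ktime_t ktime_add_ms(ktime_t kt, uint64_t msec)
{
	return kt + (int64_t)msec * NSEC_PER_MSEC;
}

/* Total ordering in one comparison, replacing the old tv_lt() on pairs. */
static int ktime_compare(ktime_t a, ktime_t b)
{
	return (a < b) ? -1 : (a > b) ? 1 : 0;
}

/* Microseconds for the stale-cookie error report. */
static int64_t ktime_to_us(ktime_t kt)
{
	return kt / NSEC_PER_USEC;
}
```

This is why the multi-line `tv_sec`/`tv_usec` expression in the stale-cookie path collapses to a single `ktime_to_us(ktime_sub(kt, expiration))`.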
diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
index 8aab894aeabe..9da68852ee94 100644
--- a/net/sctp/sm_sideeffect.c
+++ b/net/sctp/sm_sideeffect.c
@@ -257,7 +257,7 @@ void sctp_generate_t3_rtx_event(unsigned long peer)
 
 	sctp_bh_lock_sock(asoc->base.sk);
 	if (sock_owned_by_user(asoc->base.sk)) {
-		SCTP_DEBUG_PRINTK("%s:Sock is busy.\n", __func__);
+		pr_debug("%s: sock is busy\n", __func__);
 
 		/* Try again later. */
 		if (!mod_timer(&transport->T3_rtx_timer, jiffies + (HZ/20)))
@@ -297,9 +297,8 @@ static void sctp_generate_timeout_event(struct sctp_association *asoc,
 
 	sctp_bh_lock_sock(asoc->base.sk);
 	if (sock_owned_by_user(asoc->base.sk)) {
-		SCTP_DEBUG_PRINTK("%s:Sock is busy: timer %d\n",
-				  __func__,
-				  timeout_type);
+		pr_debug("%s: sock is busy: timer %d\n", __func__,
+			 timeout_type);
 
 		/* Try again later. */
 		if (!mod_timer(&asoc->timers[timeout_type], jiffies + (HZ/20)))
@@ -377,7 +376,7 @@ void sctp_generate_heartbeat_event(unsigned long data)
 
 	sctp_bh_lock_sock(asoc->base.sk);
 	if (sock_owned_by_user(asoc->base.sk)) {
-		SCTP_DEBUG_PRINTK("%s:Sock is busy.\n", __func__);
+		pr_debug("%s: sock is busy\n", __func__);
 
 		/* Try again later. */
 		if (!mod_timer(&transport->hb_timer, jiffies + (HZ/20)))
@@ -415,7 +414,7 @@ void sctp_generate_proto_unreach_event(unsigned long data)
 
 	sctp_bh_lock_sock(asoc->base.sk);
 	if (sock_owned_by_user(asoc->base.sk)) {
-		SCTP_DEBUG_PRINTK("%s:Sock is busy.\n", __func__);
+		pr_debug("%s: sock is busy\n", __func__);
 
 		/* Try again later. */
 		if (!mod_timer(&transport->proto_unreach_timer,
@@ -521,11 +520,9 @@ static void sctp_do_8_2_transport_strike(sctp_cmd_seq_t *commands,
 
 	if (transport->state != SCTP_INACTIVE &&
 	    (transport->error_count > transport->pathmaxrxt)) {
-		SCTP_DEBUG_PRINTK_IPADDR("transport_strike:association %p",
-					 " transport IP: port:%d failed.\n",
-					 asoc,
-					 (&transport->ipaddr),
-					 ntohs(transport->ipaddr.v4.sin_port));
+		pr_debug("%s: association:%p transport addr:%pISpc failed\n",
+			 __func__, asoc, &transport->ipaddr.sa);
+
 		sctp_assoc_control_transport(asoc, transport,
 					     SCTP_TRANSPORT_DOWN,
 					     SCTP_FAILED_THRESHOLD);
@@ -733,6 +730,12 @@ static void sctp_cmd_transport_on(sctp_cmd_seq_t *cmds,
 	sctp_assoc_control_transport(asoc, t, SCTP_TRANSPORT_UP,
 				     SCTP_HEARTBEAT_SUCCESS);
 
+	/* HB-ACK was received for a the proper HB. Consider this
+	 * forward progress.
+	 */
+	if (t->dst)
+		dst_confirm(t->dst);
+
 	/* The receiver of the HEARTBEAT ACK should also perform an
 	 * RTT measurement for that destination transport address
 	 * using the time value carried in the HEARTBEAT ACK chunk.
@@ -804,8 +807,7 @@ static void sctp_cmd_new_state(sctp_cmd_seq_t *cmds,
 
 	asoc->state = state;
 
-	SCTP_DEBUG_PRINTK("sctp_cmd_new_state: asoc %p[%s]\n",
-			  asoc, sctp_state_tbl[state]);
+	pr_debug("%s: asoc:%p[%s]\n", __func__, asoc, sctp_state_tbl[state]);
 
 	if (sctp_style(sk, TCP)) {
 		/* Change the sk->sk_state of a TCP-style socket that has
@@ -864,6 +866,7 @@ static void sctp_cmd_delete_tcb(sctp_cmd_seq_t *cmds,
 	    (!asoc->temp) && (sk->sk_shutdown != SHUTDOWN_MASK))
 		return;
 
+	BUG_ON(asoc->peer.primary_path == NULL);
 	sctp_unhash_established(asoc);
 	sctp_association_free(asoc);
 }
@@ -1016,15 +1019,11 @@ static void sctp_cmd_t1_timer_update(struct sctp_association *asoc,
 			asoc->timeouts[timer] = asoc->max_init_timeo;
 		}
 		asoc->init_cycle++;
-		SCTP_DEBUG_PRINTK(
-			"T1 %s Timeout adjustment"
-			" init_err_counter: %d"
-			" cycle: %d"
-			" timeout: %ld\n",
-			name,
-			asoc->init_err_counter,
-			asoc->init_cycle,
-			asoc->timeouts[timer]);
+
+		pr_debug("%s: T1[%s] timeout adjustment init_err_counter:%d"
+			 " cycle:%d timeout:%ld\n", __func__, name,
+			 asoc->init_err_counter, asoc->init_cycle,
+			 asoc->timeouts[timer]);
 	}
 
 }
@@ -1079,23 +1078,19 @@ static void sctp_cmd_send_asconf(struct sctp_association *asoc)
  * main flow of sctp_do_sm() to keep attention focused on the real
  * functionality there.
  */
-#define DEBUG_PRE \
-	SCTP_DEBUG_PRINTK("sctp_do_sm prefn: " \
-			  "ep %p, %s, %s, asoc %p[%s], %s\n", \
-			  ep, sctp_evttype_tbl[event_type], \
-			  (*debug_fn)(subtype), asoc, \
-			  sctp_state_tbl[state], state_fn->name)
-
-#define DEBUG_POST \
-	SCTP_DEBUG_PRINTK("sctp_do_sm postfn: " \
-			  "asoc %p, status: %s\n", \
-			  asoc, sctp_status_tbl[status])
-
-#define DEBUG_POST_SFX \
-	SCTP_DEBUG_PRINTK("sctp_do_sm post sfx: error %d, asoc %p[%s]\n", \
-			  error, asoc, \
-			  sctp_state_tbl[(asoc && sctp_id2assoc(ep->base.sk, \
-			  sctp_assoc2id(asoc)))?asoc->state:SCTP_STATE_CLOSED])
+#define debug_pre_sfn() \
+	pr_debug("%s[pre-fn]: ep:%p, %s, %s, asoc:%p[%s], %s\n", __func__, \
+		 ep, sctp_evttype_tbl[event_type], (*debug_fn)(subtype), \
+		 asoc, sctp_state_tbl[state], state_fn->name)
+
+#define debug_post_sfn() \
+	pr_debug("%s[post-fn]: asoc:%p, status:%s\n", __func__, asoc, \
+		 sctp_status_tbl[status])
+
+#define debug_post_sfx() \
+	pr_debug("%s[post-sfx]: error:%d, asoc:%p[%s]\n", __func__, error, \
+		 asoc, sctp_state_tbl[(asoc && sctp_id2assoc(ep->base.sk, \
+		 sctp_assoc2id(asoc))) ? asoc->state : SCTP_STATE_CLOSED])
 
 /*
  * This is the master state machine processing function.
@@ -1115,7 +1110,6 @@ int sctp_do_sm(struct net *net, sctp_event_t event_type, sctp_subtype_t subtype,
 	sctp_disposition_t status;
 	int error = 0;
 	typedef const char *(printfn_t)(sctp_subtype_t);
-
 	static printfn_t *table[] = {
 		NULL, sctp_cname, sctp_tname, sctp_oname, sctp_pname,
 	};
@@ -1128,21 +1122,18 @@ int sctp_do_sm(struct net *net, sctp_event_t event_type, sctp_subtype_t subtype,
 
 	sctp_init_cmd_seq(&commands);
 
-	DEBUG_PRE;
+	debug_pre_sfn();
 	status = (*state_fn->fn)(net, ep, asoc, subtype, event_arg, &commands);
-	DEBUG_POST;
+	debug_post_sfn();
 
 	error = sctp_side_effects(event_type, subtype, state,
 				  ep, asoc, event_arg, status,
 				  &commands, gfp);
-	DEBUG_POST_SFX;
+	debug_post_sfx();
 
 	return error;
 }
 
-#undef DEBUG_PRE
-#undef DEBUG_POST
-
 /*****************************************************************
  * This the master state function side effect processing function.
  *****************************************************************/
@@ -1171,9 +1162,9 @@ static int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype,
 
 	switch (status) {
 	case SCTP_DISPOSITION_DISCARD:
-		SCTP_DEBUG_PRINTK("Ignored sctp protocol event - state %d, "
-				  "event_type %d, event_id %d\n",
-				  state, event_type, subtype.chunk);
+		pr_debug("%s: ignored sctp protocol event - state:%d, "
+			 "event_type:%d, event_id:%d\n", __func__, state,
+			 event_type, subtype.chunk);
 		break;
 
 	case SCTP_DISPOSITION_NOMEM:
@@ -1274,8 +1265,10 @@ static int sctp_cmd_interpreter(sctp_event_t event_type,
 				sctp_outq_uncork(&asoc->outqueue);
 				local_cork = 0;
 			}
-			asoc = cmd->obj.asoc;
+
 			/* Register with the endpoint.  */
+			asoc = cmd->obj.asoc;
+			BUG_ON(asoc->peer.primary_path == NULL);
 			sctp_endpoint_add_asoc(ep, asoc);
 			sctp_hash_established(asoc);
 			break;
@@ -1422,18 +1415,18 @@ static int sctp_cmd_interpreter(sctp_event_t event_type,
 
 		case SCTP_CMD_CHUNK_ULP:
 			/* Send a chunk to the sockets layer.  */
-			SCTP_DEBUG_PRINTK("sm_sideff: %s %p, %s %p.\n",
-					  "chunk_up:", cmd->obj.chunk,
-					  "ulpq:", &asoc->ulpq);
+			pr_debug("%s: sm_sideff: chunk_up:%p, ulpq:%p\n",
+				 __func__, cmd->obj.chunk, &asoc->ulpq);
+
 			sctp_ulpq_tail_data(&asoc->ulpq, cmd->obj.chunk,
 					    GFP_ATOMIC);
 			break;
 
 		case SCTP_CMD_EVENT_ULP:
 			/* Send a notification to the sockets layer.  */
-			SCTP_DEBUG_PRINTK("sm_sideff: %s %p, %s %p.\n",
-					  "event_up:", cmd->obj.ulpevent,
-					  "ulpq:", &asoc->ulpq);
+			pr_debug("%s: sm_sideff: event_up:%p, ulpq:%p\n",
+				 __func__, cmd->obj.ulpevent, &asoc->ulpq);
+
 			sctp_ulpq_tail_event(&asoc->ulpq, cmd->obj.ulpevent);
 			break;
 
@@ -1598,7 +1591,7 @@ static int sctp_cmd_interpreter(sctp_event_t event_type,
 			break;
 
 		case SCTP_CMD_REPORT_BAD_TAG:
-			SCTP_DEBUG_PRINTK("vtag mismatch!\n");
+			pr_debug("%s: vtag mismatch!\n", __func__);
 			break;
 
 		case SCTP_CMD_STRIKE:
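A pattern running through every sm_sideeffect.c hunk: each converted site becomes a single `pr_debug()` whose first argument is `__func__`, so the function name is never hand-typed into the format string. A userspace stand-in for that convention (the `log_buf` capture and demo function are hypothetical; the kernel's `pr_debug()` additionally routes through dynamic debug, allowing per-site runtime enablement):

```c
#include <stdio.h>
#include <string.h>

/* Capture the last message so the convention can be demonstrated. */
static char log_buf[256];

/* Sketch of a pr_debug()-style printf macro. In-kernel, this would be
 * backed by dynamic_pr_debug() rather than snprintf(). */
#define pr_debug(fmt, ...) \
	snprintf(log_buf, sizeof(log_buf), fmt, ##__VA_ARGS__)

static void sctp_generate_t3_rtx_event_demo(void)
{
	/* __func__ keeps the prefix correct even if the function is
	 * renamed, which the old hand-written prefixes did not. */
	pr_debug("%s: sock is busy\n", __func__);
}
```

This is why the diff can delete whole multi-line `SCTP_DEBUG_PRINTK` invocations: the `%s`/`__func__` pair replaces every hard-coded `"sctp_do_sm prefn: "`-style prefix.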
diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
index de1a0138317f..f6b7109195a6 100644
--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -1179,9 +1179,9 @@ sctp_disposition_t sctp_sf_backbeat_8_3(struct net *net,
 	/* Check if the timestamp looks valid.  */
 	if (time_after(hbinfo->sent_at, jiffies) ||
 	    time_after(jiffies, hbinfo->sent_at + max_interval)) {
-		SCTP_DEBUG_PRINTK("%s: HEARTBEAT ACK with invalid timestamp "
-				  "received for transport: %p\n",
-				   __func__, link);
+		pr_debug("%s: HEARTBEAT ACK with invalid timestamp received "
+			 "for transport:%p\n", __func__, link);
+
 		return SCTP_DISPOSITION_DISCARD;
 	}
 
@@ -2562,7 +2562,8 @@ static sctp_disposition_t sctp_stop_t1_and_abort(struct net *net,
 					   const struct sctp_association *asoc,
 					   struct sctp_transport *transport)
 {
-	SCTP_DEBUG_PRINTK("ABORT received (INIT).\n");
+	pr_debug("%s: ABORT received (INIT)\n", __func__);
+
 	sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
 			SCTP_STATE(SCTP_STATE_CLOSED));
 	SCTP_INC_STATS(net, SCTP_MIB_ABORTEDS);
@@ -2572,6 +2573,7 @@ static sctp_disposition_t sctp_stop_t1_and_abort(struct net *net,
 	/* CMD_INIT_FAILED will DELETE_TCB. */
 	sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED,
 			SCTP_PERR(error));
+
 	return SCTP_DISPOSITION_ABORT;
 }
 
@@ -2637,8 +2639,9 @@ sctp_disposition_t sctp_sf_do_9_2_shutdown(struct net *net,
 	ctsn = ntohl(sdh->cum_tsn_ack);
 
 	if (TSN_lt(ctsn, asoc->ctsn_ack_point)) {
-		SCTP_DEBUG_PRINTK("ctsn %x\n", ctsn);
-		SCTP_DEBUG_PRINTK("ctsn_ack_point %x\n", asoc->ctsn_ack_point);
+		pr_debug("%s: ctsn:%x, ctsn_ack_point:%x\n", __func__, ctsn,
+			 asoc->ctsn_ack_point);
+
 		return SCTP_DISPOSITION_DISCARD;
 	}
 
@@ -2721,8 +2724,9 @@ sctp_disposition_t sctp_sf_do_9_2_shut_ctsn(struct net *net,
 	ctsn = ntohl(sdh->cum_tsn_ack);
 
 	if (TSN_lt(ctsn, asoc->ctsn_ack_point)) {
-		SCTP_DEBUG_PRINTK("ctsn %x\n", ctsn);
-		SCTP_DEBUG_PRINTK("ctsn_ack_point %x\n", asoc->ctsn_ack_point);
+		pr_debug("%s: ctsn:%x, ctsn_ack_point:%x\n", __func__, ctsn,
+			 asoc->ctsn_ack_point);
+
 		return SCTP_DISPOSITION_DISCARD;
 	}
 
@@ -3174,8 +3178,9 @@ sctp_disposition_t sctp_sf_eat_sack_6_2(struct net *net,
 	 * Point indicates an out-of-order SACK.
 	 */
 	if (TSN_lt(ctsn, asoc->ctsn_ack_point)) {
-		SCTP_DEBUG_PRINTK("ctsn %x\n", ctsn);
-		SCTP_DEBUG_PRINTK("ctsn_ack_point %x\n", asoc->ctsn_ack_point);
+		pr_debug("%s: ctsn:%x, ctsn_ack_point:%x\n", __func__, ctsn,
+			 asoc->ctsn_ack_point);
+
 		return SCTP_DISPOSITION_DISCARD;
 	}
 
@@ -3859,7 +3864,7 @@ sctp_disposition_t sctp_sf_eat_fwd_tsn(struct net *net,
 	skb_pull(chunk->skb, len);
 
 	tsn = ntohl(fwdtsn_hdr->new_cum_tsn);
-	SCTP_DEBUG_PRINTK("%s: TSN 0x%x.\n", __func__, tsn);
+	pr_debug("%s: TSN 0x%x\n", __func__, tsn);
 
 	/* The TSN is too high--silently discard the chunk and count on it
 	 * getting retransmitted later.
@@ -3927,7 +3932,7 @@ sctp_disposition_t sctp_sf_eat_fwd_tsn_fast(
 	skb_pull(chunk->skb, len);
 
 	tsn = ntohl(fwdtsn_hdr->new_cum_tsn);
-	SCTP_DEBUG_PRINTK("%s: TSN 0x%x.\n", __func__, tsn);
+	pr_debug("%s: TSN 0x%x\n", __func__, tsn);
 
 	/* The TSN is too high--silently discard the chunk and count on it
 	 * getting retransmitted later.
@@ -4166,7 +4171,7 @@ sctp_disposition_t sctp_sf_unk_chunk(struct net *net,
 	struct sctp_chunk *err_chunk;
 	sctp_chunkhdr_t *hdr;
 
-	SCTP_DEBUG_PRINTK("Processing the unknown chunk id %d.\n", type.chunk);
+	pr_debug("%s: processing unknown chunk id:%d\n", __func__, type.chunk);
 
 	if (!sctp_vtag_verify(unk_chunk, asoc))
 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
@@ -4256,7 +4261,8 @@ sctp_disposition_t sctp_sf_discard_chunk(struct net *net,
 		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
 						  commands);
 
-	SCTP_DEBUG_PRINTK("Chunk %d is discarded\n", type.chunk);
+	pr_debug("%s: chunk:%d is discarded\n", __func__, type.chunk);
+
 	return SCTP_DISPOSITION_DISCARD;
 }
 
@@ -4632,16 +4638,16 @@ sctp_disposition_t sctp_sf_do_prm_asoc(struct net *net,
 	if (!repl)
 		goto nomem;
 
+	/* Choose transport for INIT. */
+	sctp_add_cmd_sf(commands, SCTP_CMD_INIT_CHOOSE_TRANSPORT,
+			SCTP_CHUNK(repl));
+
 	/* Cast away the const modifier, as we want to just
 	 * rerun it through as a sideffect.
 	 */
 	my_asoc = (struct sctp_association *)asoc;
 	sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(my_asoc));
 
-	/* Choose transport for INIT. */
-	sctp_add_cmd_sf(commands, SCTP_CMD_INIT_CHOOSE_TRANSPORT,
-			SCTP_CHUNK(repl));
-
 	/* After sending the INIT, "A" starts the T1-init timer and
 	 * enters the COOKIE-WAIT state.
 	 */
@@ -5184,7 +5190,9 @@ sctp_disposition_t sctp_sf_ignore_primitive(
 				       void *arg,
 				       sctp_cmd_seq_t *commands)
 {
-	SCTP_DEBUG_PRINTK("Primitive type %d is ignored.\n", type.primitive);
+	pr_debug("%s: primitive type:%d is ignored\n", __func__,
+		 type.primitive);
+
 	return SCTP_DISPOSITION_DISCARD;
 }
 
@@ -5379,7 +5387,9 @@ sctp_disposition_t sctp_sf_ignore_other(struct net *net,
 					void *arg,
 					sctp_cmd_seq_t *commands)
 {
-	SCTP_DEBUG_PRINTK("The event other type %d is ignored\n", type.other);
+	pr_debug("%s: the event other type:%d is ignored\n",
+		 __func__, type.other);
+
 	return SCTP_DISPOSITION_DISCARD;
 }
 
@@ -5527,7 +5537,8 @@ sctp_disposition_t sctp_sf_t1_init_timer_expire(struct net *net,
 	struct sctp_bind_addr *bp;
 	int attempts = asoc->init_err_counter + 1;
 
-	SCTP_DEBUG_PRINTK("Timer T1 expired (INIT).\n");
+	pr_debug("%s: timer T1 expired (INIT)\n", __func__);
+
 	SCTP_INC_STATS(net, SCTP_MIB_T1_INIT_EXPIREDS);
 
 	if (attempts <= asoc->max_init_attempts) {
@@ -5546,9 +5557,10 @@ sctp_disposition_t sctp_sf_t1_init_timer_expire(struct net *net,
 
 		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
 	} else {
-		SCTP_DEBUG_PRINTK("Giving up on INIT, attempts: %d"
-				  " max_init_attempts: %d\n",
-				  attempts, asoc->max_init_attempts);
+		pr_debug("%s: giving up on INIT, attempts:%d "
+			 "max_init_attempts:%d\n", __func__, attempts,
+			 asoc->max_init_attempts);
+
 		sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
 				SCTP_ERROR(ETIMEDOUT));
 		sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED,
@@ -5588,7 +5600,8 @@ sctp_disposition_t sctp_sf_t1_cookie_timer_expire(struct net *net,
 	struct sctp_chunk *repl = NULL;
 	int attempts = asoc->init_err_counter + 1;
 
-	SCTP_DEBUG_PRINTK("Timer T1 expired (COOKIE-ECHO).\n");
+	pr_debug("%s: timer T1 expired (COOKIE-ECHO)\n", __func__);
+
 	SCTP_INC_STATS(net, SCTP_MIB_T1_COOKIE_EXPIREDS);
 
 	if (attempts <= asoc->max_init_attempts) {
@@ -5636,7 +5649,8 @@ sctp_disposition_t sctp_sf_t2_timer_expire(struct net *net,
 {
 	struct sctp_chunk *reply = NULL;
 
-	SCTP_DEBUG_PRINTK("Timer T2 expired.\n");
+	pr_debug("%s: timer T2 expired\n", __func__);
+
 	SCTP_INC_STATS(net, SCTP_MIB_T2_SHUTDOWN_EXPIREDS);
 
 	((struct sctp_association *)asoc)->shutdown_retries++;
@@ -5777,7 +5791,8 @@ sctp_disposition_t sctp_sf_t5_timer_expire(struct net *net,
 {
 	struct sctp_chunk *reply = NULL;
 
-	SCTP_DEBUG_PRINTK("Timer T5 expired.\n");
+	pr_debug("%s: timer T5 expired\n", __func__);
+
 	SCTP_INC_STATS(net, SCTP_MIB_T5_SHUTDOWN_GUARD_EXPIREDS);
 
 	reply = sctp_make_abort(asoc, NULL, 0);
@@ -5892,7 +5907,8 @@ sctp_disposition_t sctp_sf_timer_ignore(struct net *net,
 				       void *arg,
 				       sctp_cmd_seq_t *commands)
 {
-	SCTP_DEBUG_PRINTK("Timer %d ignored.\n", type.chunk);
+	pr_debug("%s: timer %d ignored\n", __func__, type.chunk);
+
 	return SCTP_DISPOSITION_CONSUME;
 }
 
@@ -6102,7 +6118,7 @@ static int sctp_eat_data(const struct sctp_association *asoc,
 	skb_pull(chunk->skb, sizeof(sctp_datahdr_t));
 
 	tsn = ntohl(data_hdr->tsn);
-	SCTP_DEBUG_PRINTK("eat_data: TSN 0x%x.\n", tsn);
+	pr_debug("%s: TSN 0x%x\n", __func__, tsn);
 
 	/* ASSERT:  Now skb->data is really the user data. */
 
@@ -6179,12 +6195,12 @@ static int sctp_eat_data(const struct sctp_association *asoc,
 		 */
 		if (sctp_tsnmap_has_gap(map) &&
 		    (sctp_tsnmap_get_ctsn(map) + 1) == tsn) {
-			SCTP_DEBUG_PRINTK("Reneging for tsn:%u\n", tsn);
+			pr_debug("%s: reneging for tsn:%u\n", __func__, tsn);
 			deliver = SCTP_CMD_RENEGE;
 		} else {
-			SCTP_DEBUG_PRINTK("Discard tsn: %u len: %Zd, "
-					  "rwnd: %d\n", tsn, datalen,
-					  asoc->rwnd);
+			pr_debug("%s: discard tsn:%u len:%zu, rwnd:%d\n",
+				 __func__, tsn, datalen, asoc->rwnd);
+
 			return SCTP_IERROR_IGNORE_TSN;
 		}
 	}
@@ -6199,7 +6215,8 @@ static int sctp_eat_data(const struct sctp_association *asoc,
 	if (*sk->sk_prot_creator->memory_pressure) {
 		if (sctp_tsnmap_has_gap(map) &&
 		    (sctp_tsnmap_get_ctsn(map) + 1) == tsn) {
-			SCTP_DEBUG_PRINTK("Under Pressure! Reneging for tsn:%u\n", tsn);
+			pr_debug("%s: under pressure, reneging for tsn:%u\n",
+				 __func__, tsn);
 			deliver = SCTP_CMD_RENEGE;
 		}
 	}
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 6abb1caf9836..c6670d2e3f8d 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -84,11 +84,6 @@
 #include <net/sctp/sctp.h>
 #include <net/sctp/sm.h>
 
-/* WARNING: Please do not remove the SCTP_STATIC attribute to
- * any of the functions below as they are used to export functions
- * used by a project regression testsuite.
- */
-
 /* Forward declarations for internal helper functions. */
 static int sctp_writeable(struct sock *sk);
 static void sctp_wfree(struct sk_buff *skb);
@@ -98,6 +93,7 @@ static int sctp_wait_for_packet(struct sock * sk, int *err, long *timeo_p);
 static int sctp_wait_for_connect(struct sctp_association *, long *timeo_p);
 static int sctp_wait_for_accept(struct sock *sk, long timeo);
 static void sctp_wait_for_close(struct sock *sk, long timeo);
+static void sctp_destruct_sock(struct sock *sk);
 static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
 					union sctp_addr *addr, int len);
 static int sctp_bindx_add(struct sock *, struct sockaddr *, int);
@@ -279,14 +275,14 @@ static struct sctp_transport *sctp_addr_id2transport(struct sock *sk,
  *             sockaddr_in6 [RFC 2553]),
  *             addr_len - the size of the address structure.
  */
-SCTP_STATIC int sctp_bind(struct sock *sk, struct sockaddr *addr, int addr_len)
+static int sctp_bind(struct sock *sk, struct sockaddr *addr, int addr_len)
 {
 	int retval = 0;
 
 	sctp_lock_sock(sk);
 
-	SCTP_DEBUG_PRINTK("sctp_bind(sk: %p, addr: %p, addr_len: %d)\n",
-			  sk, addr, addr_len);
+	pr_debug("%s: sk:%p, addr:%p, addr_len:%d\n", __func__, sk,
+		 addr, addr_len);
 
 	/* Disallow binding twice. */
 	if (!sctp_sk(sk)->ep->base.bind_addr.port)
@@ -333,7 +329,7 @@ static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
 }
 
 /* Bind a local address either to an endpoint or to an association.  */
-SCTP_STATIC int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
+static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
 {
 	struct net *net = sock_net(sk);
 	struct sctp_sock *sp = sctp_sk(sk);
@@ -346,19 +342,15 @@ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
 	/* Common sockaddr verification. */
 	af = sctp_sockaddr_af(sp, addr, len);
 	if (!af) {
-		SCTP_DEBUG_PRINTK("sctp_do_bind(sk: %p, newaddr: %p, len: %d) EINVAL\n",
-				  sk, addr, len);
+		pr_debug("%s: sk:%p, newaddr:%p, len:%d EINVAL\n",
+			 __func__, sk, addr, len);
 		return -EINVAL;
 	}
 
 	snum = ntohs(addr->v4.sin_port);
 
-	SCTP_DEBUG_PRINTK_IPADDR("sctp_do_bind(sk: %p, new addr: ",
-				 ", port: %d, new port: %d, len: %d)\n",
-				 sk,
-				 addr,
-				 bp->port, snum,
-				 len);
+	pr_debug("%s: sk:%p, new addr:%pISc, port:%d, new port:%d, len:%d\n",
+		 __func__, sk, &addr->sa, bp->port, snum, len);
 
 	/* PF specific bind() address verification. */
 	if (!sp->pf->bind_verify(sp, addr))
@@ -372,9 +364,8 @@ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
 		if (!snum)
 			snum = bp->port;
 		else if (snum != bp->port) {
-			SCTP_DEBUG_PRINTK("sctp_do_bind:"
-					  " New port %d does not match existing port "
-					  "%d.\n", snum, bp->port);
+			pr_debug("%s: new port %d doesn't match existing port "
+				 "%d\n", __func__, snum, bp->port);
 			return -EINVAL;
 		}
 	}
@@ -472,8 +463,8 @@ static int sctp_bindx_add(struct sock *sk, struct sockaddr *addrs, int addrcnt)
 	struct sockaddr *sa_addr;
 	struct sctp_af *af;
 
-	SCTP_DEBUG_PRINTK("sctp_bindx_add (sk: %p, addrs: %p, addrcnt: %d)\n",
-			  sk, addrs, addrcnt);
+	pr_debug("%s: sk:%p, addrs:%p, addrcnt:%d\n", __func__, sk,
+		 addrs, addrcnt);
 
 	addr_buf = addrs;
 	for (cnt = 0; cnt < addrcnt; cnt++) {
@@ -539,11 +530,10 @@ static int sctp_send_asconf_add_ip(struct sock *sk,
 	sp = sctp_sk(sk);
 	ep = sp->ep;
 
-	SCTP_DEBUG_PRINTK("%s: (sk: %p, addrs: %p, addrcnt: %d)\n",
-			  __func__, sk, addrs, addrcnt);
+	pr_debug("%s: sk:%p, addrs:%p, addrcnt:%d\n",
+		 __func__, sk, addrs, addrcnt);
 
 	list_for_each_entry(asoc, &ep->asocs, asocs) {
-
 		if (!asoc->peer.asconf_capable)
 			continue;
 
@@ -650,8 +640,8 @@ static int sctp_bindx_rem(struct sock *sk, struct sockaddr *addrs, int addrcnt)
 	union sctp_addr *sa_addr;
 	struct sctp_af *af;
 
-	SCTP_DEBUG_PRINTK("sctp_bindx_rem (sk: %p, addrs: %p, addrcnt: %d)\n",
-			  sk, addrs, addrcnt);
+	pr_debug("%s: sk:%p, addrs:%p, addrcnt:%d\n",
+		 __func__, sk, addrs, addrcnt);
 
 	addr_buf = addrs;
 	for (cnt = 0; cnt < addrcnt; cnt++) {
@@ -744,8 +734,8 @@ static int sctp_send_asconf_del_ip(struct sock *sk,
 	sp = sctp_sk(sk);
 	ep = sp->ep;
 
-	SCTP_DEBUG_PRINTK("%s: (sk: %p, addrs: %p, addrcnt: %d)\n",
-			  __func__, sk, addrs, addrcnt);
+	pr_debug("%s: sk:%p, addrs:%p, addrcnt:%d\n",
+		 __func__, sk, addrs, addrcnt);
 
 	list_for_each_entry(asoc, &ep->asocs, asocs) {
 
@@ -812,9 +802,11 @@ static int sctp_send_asconf_del_ip(struct sock *sk,
 			sin6 = (struct sockaddr_in6 *)addrs;
 			asoc->asconf_addr_del_pending->v6.sin6_addr = sin6->sin6_addr;
 		}
-		SCTP_DEBUG_PRINTK_IPADDR("send_asconf_del_ip: keep the last address asoc: %p ",
-					 " at %p\n", asoc, asoc->asconf_addr_del_pending,
-					 asoc->asconf_addr_del_pending);
+
+		pr_debug("%s: keep the last address asoc:%p %pISc at %p\n",
+			 __func__, asoc, &asoc->asconf_addr_del_pending->sa,
+			 asoc->asconf_addr_del_pending);
+
 		asoc->src_out_of_asoc_ok = 1;
 		stored = 1;
 		goto skip_mkasconf;
@@ -964,9 +956,9 @@ int sctp_asconf_mgmt(struct sctp_sock *sp, struct sctp_sockaddr_entry *addrw)
  *
  * Returns 0 if ok, <0 errno code on error.
  */
-SCTP_STATIC int sctp_setsockopt_bindx(struct sock* sk,
-				      struct sockaddr __user *addrs,
-				      int addrs_size, int op)
+static int sctp_setsockopt_bindx(struct sock* sk,
+				 struct sockaddr __user *addrs,
+				 int addrs_size, int op)
 {
 	struct sockaddr *kaddrs;
 	int err;
@@ -976,8 +968,8 @@ static int sctp_setsockopt_bindx(struct sock* sk,
 	void *addr_buf;
 	struct sctp_af *af;
 
-	SCTP_DEBUG_PRINTK("sctp_setsockopt_bindx: sk %p addrs %p"
-			  " addrs_size %d opt %d\n", sk, addrs, addrs_size, op);
+	pr_debug("%s: sk:%p addrs:%p addrs_size:%d opt:%d\n",
+		 __func__, sk, addrs, addrs_size, op);
 
 	if (unlikely(addrs_size <= 0))
 		return -EINVAL;
@@ -1235,10 +1227,9 @@ static int __sctp_connect(struct sock* sk,
 	asoc = NULL;
 
 out_free:
+	pr_debug("%s: took out_free path with asoc:%p kaddrs:%p err:%d\n",
+		 __func__, asoc, kaddrs, err);
 
-	SCTP_DEBUG_PRINTK("About to exit __sctp_connect() free asoc: %p"
-			  " kaddrs: %p err: %d\n",
-			  asoc, kaddrs, err);
 	if (asoc) {
 		/* sctp_primitive_ASSOCIATE may have added this association
 		 * To the hash table, try to unhash it, just in case, its a noop
@@ -1312,7 +1303,7 @@ out_free:
  *
  * Returns >=0 if ok, <0 errno code on error.
  */
-SCTP_STATIC int __sctp_setsockopt_connectx(struct sock* sk,
+static int __sctp_setsockopt_connectx(struct sock* sk,
 				      struct sockaddr __user *addrs,
 				      int addrs_size,
 				      sctp_assoc_t *assoc_id)
@@ -1320,8 +1311,8 @@ static int __sctp_setsockopt_connectx(struct sock* sk,
 	int err = 0;
 	struct sockaddr *kaddrs;
 
-	SCTP_DEBUG_PRINTK("%s - sk %p addrs %p addrs_size %d\n",
-			  __func__, sk, addrs, addrs_size);
+	pr_debug("%s: sk:%p addrs:%p addrs_size:%d\n",
+		 __func__, sk, addrs, addrs_size);
 
 	if (unlikely(addrs_size <= 0))
 		return -EINVAL;
@@ -1350,9 +1341,9 @@ static int __sctp_setsockopt_connectx(struct sock* sk,
  * This is an older interface.  It's kept for backward compatibility
  * to the option that doesn't provide association id.
  */
-SCTP_STATIC int sctp_setsockopt_connectx_old(struct sock* sk,
+static int sctp_setsockopt_connectx_old(struct sock* sk,
 					     struct sockaddr __user *addrs,
 					     int addrs_size)
 {
 	return __sctp_setsockopt_connectx(sk, addrs, addrs_size, NULL);
 }
@@ -1363,9 +1354,9 @@ static int sctp_setsockopt_connectx_old(struct sock* sk,
  * indication to the call.  Error is always negative and association id is
  * always positive.
  */
-SCTP_STATIC int sctp_setsockopt_connectx(struct sock* sk,
+static int sctp_setsockopt_connectx(struct sock* sk,
 					 struct sockaddr __user *addrs,
 					 int addrs_size)
 {
 	sctp_assoc_t assoc_id = 0;
 	int err = 0;
@@ -1386,9 +1377,9 @@ static int sctp_setsockopt_connectx(struct sock* sk,
  * addrs_num structure member.  That way we can re-use the existing
  * code.
  */
-SCTP_STATIC int sctp_getsockopt_connectx3(struct sock* sk, int len,
+static int sctp_getsockopt_connectx3(struct sock* sk, int len,
 					 char __user *optval,
 					 int __user *optlen)
 {
 	struct sctp_getaddrs_old param;
 	sctp_assoc_t assoc_id = 0;
@@ -1464,7 +1455,7 @@ static int sctp_getsockopt_connectx3(struct sock* sk, int len,
  * shutdown phase does not finish during this period, close() will
  * return but the graceful shutdown phase continues in the system.
  */
-SCTP_STATIC void sctp_close(struct sock *sk, long timeout)
+static void sctp_close(struct sock *sk, long timeout)
 {
 	struct net *net = sock_net(sk);
 	struct sctp_endpoint *ep;
@@ -1472,7 +1463,7 @@ static void sctp_close(struct sock *sk, long timeout)
 	struct list_head *pos, *temp;
 	unsigned int data_was_unread;
 
-	SCTP_DEBUG_PRINTK("sctp_close(sk: 0x%p, timeout:%ld)\n", sk, timeout);
+	pr_debug("%s: sk:%p, timeout:%ld\n", __func__, sk, timeout);
 
 	sctp_lock_sock(sk);
 	sk->sk_shutdown = SHUTDOWN_MASK;
@@ -1573,10 +1564,10 @@ static int sctp_error(struct sock *sk, int flags, int err)
  */
 /* BUG:  We do not implement the equivalent of sk_stream_wait_memory(). */
 
-SCTP_STATIC int sctp_msghdr_parse(const struct msghdr *, sctp_cmsgs_t *);
+static int sctp_msghdr_parse(const struct msghdr *, sctp_cmsgs_t *);
 
-SCTP_STATIC int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
-			     struct msghdr *msg, size_t msg_len)
+static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
+			struct msghdr *msg, size_t msg_len)
 {
 	struct net *net = sock_net(sk);
 	struct sctp_sock *sp;
@@ -1598,14 +1589,12 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 	struct sctp_datamsg *datamsg;
 	int msg_flags = msg->msg_flags;
 
-	SCTP_DEBUG_PRINTK("sctp_sendmsg(sk: %p, msg: %p, msg_len: %zu)\n",
-			  sk, msg, msg_len);
-
 	err = 0;
 	sp = sctp_sk(sk);
 	ep = sp->ep;
 
-	SCTP_DEBUG_PRINTK("Using endpoint: %p.\n", ep);
+	pr_debug("%s: sk:%p, msg:%p, msg_len:%zu ep:%p\n", __func__, sk,
+		 msg, msg_len, ep);
 
 	/* We cannot send a message over a TCP-style listening socket. */
 	if (sctp_style(sk, TCP) && sctp_sstate(sk, LISTENING)) {
@@ -1615,9 +1604,8 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 
 	/* Parse out the SCTP CMSGs. */
 	err = sctp_msghdr_parse(msg, &cmsgs);
-
 	if (err) {
-		SCTP_DEBUG_PRINTK("msghdr parse err = %x\n", err);
+		pr_debug("%s: msghdr parse err:%x\n", __func__, err);
 		goto out_nounlock;
 	}
 
@@ -1649,8 +1637,8 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 		associd = sinfo->sinfo_assoc_id;
 	}
 
-	SCTP_DEBUG_PRINTK("msg_len: %zu, sinfo_flags: 0x%x\n",
-			  msg_len, sinfo_flags);
+	pr_debug("%s: msg_len:%zu, sinfo_flags:0x%x\n", __func__,
+		 msg_len, sinfo_flags);
 
 	/* SCTP_EOF or SCTP_ABORT cannot be set on a TCP-style socket. */
 	if (sctp_style(sk, TCP) && (sinfo_flags & (SCTP_EOF | SCTP_ABORT))) {
@@ -1679,7 +1667,7 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 
 	transport = NULL;
 
-	SCTP_DEBUG_PRINTK("About to look up association.\n");
+	pr_debug("%s: about to look up association\n", __func__);
 
 	sctp_lock_sock(sk);
 
@@ -1709,7 +1697,7 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 	}
 
 	if (asoc) {
-		SCTP_DEBUG_PRINTK("Just looked up association: %p.\n", asoc);
+		pr_debug("%s: just looked up association:%p\n", __func__, asoc);
 
 		/* We cannot send a message on a TCP-style SCTP_SS_ESTABLISHED
 		 * socket that has an association in CLOSED state. This can
@@ -1722,8 +1710,9 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 		}
 
 		if (sinfo_flags & SCTP_EOF) {
-			SCTP_DEBUG_PRINTK("Shutting down association: %p\n",
-					  asoc);
+			pr_debug("%s: shutting down association:%p\n",
+				 __func__, asoc);
+
 			sctp_primitive_SHUTDOWN(net, asoc, NULL);
 			err = 0;
 			goto out_unlock;
@@ -1736,7 +1725,9 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 			goto out_unlock;
 		}
 
-		SCTP_DEBUG_PRINTK("Aborting association: %p\n", asoc);
+		pr_debug("%s: aborting association:%p\n",
+			 __func__, asoc);
+
 		sctp_primitive_ABORT(net, asoc, chunk);
 		err = 0;
 		goto out_unlock;
@@ -1745,7 +1736,7 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 
 	/* Do we need to create the association?  */
 	if (!asoc) {
-		SCTP_DEBUG_PRINTK("There is no association yet.\n");
+		pr_debug("%s: there is no association yet\n", __func__);
 
 		if (sinfo_flags & (SCTP_EOF | SCTP_ABORT)) {
 			err = -EINVAL;
@@ -1844,7 +1835,7 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 	}
 
 	/* ASSERT: we have a valid association at this point.  */
-	SCTP_DEBUG_PRINTK("We have a valid association.\n");
+	pr_debug("%s: we have a valid association\n", __func__);
 
 	if (!sinfo) {
 		/* If the user didn't specify SNDRCVINFO, make up one with
@@ -1913,7 +1904,8 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 		err = sctp_primitive_ASSOCIATE(net, asoc, NULL);
 		if (err < 0)
 			goto out_free;
-		SCTP_DEBUG_PRINTK("We associated primitively.\n");
+
+		pr_debug("%s: we associated primitively\n", __func__);
 	}
 
 	/* Break the message into multiple chunks of maximum size. */
@@ -1940,17 +1932,15 @@ static int sctp_sendmsg(struct kiocb *iocb, struct sock *sk,
 	 */
 	err = sctp_primitive_SEND(net, asoc, datamsg);
 	/* Did the lower layer accept the chunk? */
-	if (err)
+	if (err) {
 		sctp_datamsg_free(datamsg);
-	else
-		sctp_datamsg_put(datamsg);
+		goto out_free;
+	}
 
-	SCTP_DEBUG_PRINTK("We sent primitively.\n");
+	pr_debug("%s: we sent primitively\n", __func__);
 
-	if (err)
-		goto out_free;
-	else
-		err = msg_len;
+	sctp_datamsg_put(datamsg);
+	err = msg_len;
 
 	/* If we are already past ASSOCIATE, the lower
 	 * layers are responsible for association cleanup.
@@ -2034,9 +2024,9 @@ static int sctp_skb_pull(struct sk_buff *skb, int len)
  */
 static struct sk_buff *sctp_skb_recv_datagram(struct sock *, int, int, int *);
 
-SCTP_STATIC int sctp_recvmsg(struct kiocb *iocb, struct sock *sk,
+static int sctp_recvmsg(struct kiocb *iocb, struct sock *sk,
 			     struct msghdr *msg, size_t len, int noblock,
 			     int flags, int *addr_len)
 {
 	struct sctp_ulpevent *event = NULL;
 	struct sctp_sock *sp = sctp_sk(sk);
@@ -2045,10 +2035,9 @@ static int sctp_recvmsg(struct kiocb *iocb, struct sock *sk,
 	int err = 0;
 	int skb_len;
 
-	SCTP_DEBUG_PRINTK("sctp_recvmsg(%s: %p, %s: %p, %s: %zd, %s: %d, %s: "
-			  "0x%x, %s: %p)\n", "sk", sk, "msghdr", msg,
-			  "len", len, "knoblauch", noblock,
-			  "flags", flags, "addr_len", addr_len);
+	pr_debug("%s: sk:%p, msghdr:%p, len:%zd, noblock:%d, flags:0x%x, "
+		 "addr_len:%p)\n", __func__, sk, msg, len, noblock, flags,
+		 addr_len);
 
 	sctp_lock_sock(sk);
 
@@ -2915,13 +2904,8 @@ static int sctp_setsockopt_associnfo(struct sock *sk, char __user *optval, unsig
 			asoc->max_retrans = assocparams.sasoc_asocmaxrxt;
 		}
 
-		if (assocparams.sasoc_cookie_life != 0) {
-			asoc->cookie_life.tv_sec =
-					assocparams.sasoc_cookie_life / 1000;
-			asoc->cookie_life.tv_usec =
-					(assocparams.sasoc_cookie_life % 1000)
-					* 1000;
-		}
+		if (assocparams.sasoc_cookie_life != 0)
+			asoc->cookie_life = ms_to_ktime(assocparams.sasoc_cookie_life);
 	} else {
 		/* Set the values to the endpoint */
 		struct sctp_sock *sp = sctp_sk(sk);
@@ -3095,7 +3079,7 @@ static int sctp_setsockopt_peer_primary_addr(struct sock *sk, char __user *optva
 
 	err = sctp_send_asconf(asoc, chunk);
 
-	SCTP_DEBUG_PRINTK("We set peer primary addr primitively.\n");
+	pr_debug("%s: we set peer primary addr primitively\n", __func__);
 
 	return err;
 }
@@ -3565,13 +3549,12 @@ static int sctp_setsockopt_paddr_thresholds(struct sock *sk,
  *   optval  - the buffer to store the value of the option.
  *   optlen  - the size of the buffer.
  */
-SCTP_STATIC int sctp_setsockopt(struct sock *sk, int level, int optname,
+static int sctp_setsockopt(struct sock *sk, int level, int optname,
 				char __user *optval, unsigned int optlen)
 {
 	int retval = 0;
 
-	SCTP_DEBUG_PRINTK("sctp_setsockopt(sk: %p... optname: %d)\n",
-			  sk, optname);
+	pr_debug("%s: sk:%p, optname:%d\n", __func__, sk, optname);
 
 	/* I can hardly begin to describe how wrong this is.  This is
 	 * so broken as to be worse than useless.  The API draft
@@ -3725,16 +3708,16 @@ out_nounlock:
  *
  * len: the size of the address.
  */
-SCTP_STATIC int sctp_connect(struct sock *sk, struct sockaddr *addr,
+static int sctp_connect(struct sock *sk, struct sockaddr *addr,
 			     int addr_len)
 {
 	int err = 0;
 	struct sctp_af *af;
 
 	sctp_lock_sock(sk);
 
-	SCTP_DEBUG_PRINTK("%s - sk: %p, sockaddr: %p, addr_len: %d\n",
-			  __func__, sk, addr, addr_len);
+	pr_debug("%s: sk:%p, sockaddr:%p, addr_len:%d\n", __func__, sk,
+		 addr, addr_len);
 
 	/* Validate addr_len before calling common connect/connectx routine. */
 	af = sctp_get_af_specific(addr->sa_family);
@@ -3752,7 +3735,7 @@ static int sctp_connect(struct sock *sk, struct sockaddr *addr,
 }
 
 /* FIXME: Write comments. */
-SCTP_STATIC int sctp_disconnect(struct sock *sk, int flags)
+static int sctp_disconnect(struct sock *sk, int flags)
 {
 	return -EOPNOTSUPP; /* STUB */
 }
@@ -3764,7 +3747,7 @@ static int sctp_disconnect(struct sock *sk, int flags)
  * descriptor will be returned from accept() to represent the newly
  * formed association.
  */
-SCTP_STATIC struct sock *sctp_accept(struct sock *sk, int flags, int *err)
+static struct sock *sctp_accept(struct sock *sk, int flags, int *err)
 {
 	struct sctp_sock *sp;
 	struct sctp_endpoint *ep;
@@ -3817,7 +3800,7 @@ out:
 }
 
 /* The SCTP ioctl handler. */
-SCTP_STATIC int sctp_ioctl(struct sock *sk, int cmd, unsigned long arg)
+static int sctp_ioctl(struct sock *sk, int cmd, unsigned long arg)
 {
 	int rc = -ENOTCONN;
 
@@ -3859,13 +3842,12 @@ out:
  * initialized the SCTP-specific portion of the sock.
  * The sock structure should already be zero-filled memory.
  */
-SCTP_STATIC int sctp_init_sock(struct sock *sk)
+static int sctp_init_sock(struct sock *sk)
 {
 	struct net *net = sock_net(sk);
-	struct sctp_endpoint *ep;
 	struct sctp_sock *sp;
 
-	SCTP_DEBUG_PRINTK("sctp_init_sock(sk: %p)\n", sk);
+	pr_debug("%s: sk:%p\n", __func__, sk);
 
 	sp = sctp_sk(sk);
 
@@ -3971,13 +3953,14 @@ static int sctp_init_sock(struct sock *sk)
 	 * change the data structure relationships, this may still
 	 * be useful for storing pre-connect address information.
 	 */
-	ep = sctp_endpoint_new(sk, GFP_KERNEL);
-	if (!ep)
+	sp->ep = sctp_endpoint_new(sk, GFP_KERNEL);
+	if (!sp->ep)
 		return -ENOMEM;
 
-	sp->ep = ep;
 	sp->hmac = NULL;
 
+	sk->sk_destruct = sctp_destruct_sock;
+
 	SCTP_DBG_OBJCNT_INC(sock);
 
 	local_bh_disable();
@@ -3995,11 +3978,11 @@ SCTP_STATIC int sctp_init_sock(struct sock *sk)
 }
 
 /* Cleanup any SCTP per socket resources.  */
-SCTP_STATIC void sctp_destroy_sock(struct sock *sk)
+static void sctp_destroy_sock(struct sock *sk)
 {
 	struct sctp_sock *sp;
 
-	SCTP_DEBUG_PRINTK("sctp_destroy_sock(sk: %p)\n", sk);
+	pr_debug("%s: sk:%p\n", __func__, sk);
 
 	/* Release our hold on the endpoint. */
 	sp = sctp_sk(sk);
@@ -4020,6 +4003,17 @@ static void sctp_destroy_sock(struct sock *sk)
 	local_bh_enable();
 }
 
+/* Triggered when there are no references on the socket anymore */
+static void sctp_destruct_sock(struct sock *sk)
+{
+	struct sctp_sock *sp = sctp_sk(sk);
+
+	/* Free up the HMAC transform. */
+	crypto_free_hash(sp->hmac);
+
+	inet_sock_destruct(sk);
+}
+
 /* API 4.1.7 shutdown() - TCP Style Syntax
  *     int shutdown(int socket, int how);
  *
@@ -4036,7 +4030,7 @@ static void sctp_destroy_sock(struct sock *sk)
  * Disables further send  and  receive operations
  * and initiates the SCTP shutdown sequence.
  */
-SCTP_STATIC void sctp_shutdown(struct sock *sk, int how)
+static void sctp_shutdown(struct sock *sk, int how)
 {
 	struct net *net = sock_net(sk);
 	struct sctp_endpoint *ep;
@@ -4121,9 +4115,9 @@ static int sctp_getsockopt_sctp_status(struct sock *sk, int len,
 		goto out;
 	}

-	SCTP_DEBUG_PRINTK("sctp_getsockopt_sctp_status(%d): %d %d %d\n",
-			  len, status.sstat_state, status.sstat_rwnd,
-			  status.sstat_assoc_id);
+	pr_debug("%s: len:%d, state:%d, rwnd:%d, assoc_id:%d\n",
+		 __func__, len, status.sstat_state, status.sstat_rwnd,
+		 status.sstat_assoc_id);

 	if (copy_to_user(optval, &status, len)) {
 		retval = -EFAULT;
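The new `sctp_destruct_sock()` added above frees the socket's private HMAC transform and then chains to the generic `inet_sock_destruct()`. The shape — release your own state first, then delegate to the base destructor — can be sketched in plain userspace C; every name below is illustrative, not a kernel symbol:

```c
#include <stdlib.h>

/* Illustrative base/derived destructor chaining; none of these
 * names are real kernel APIs. */
struct base_sock { int refcnt; };
struct sctp_like_sock { struct base_sock base; char *hmac; };

static void base_destruct(struct base_sock *sk)
{
	/* generic teardown lives here */
	sk->refcnt = 0;
}

static void derived_destruct(struct sctp_like_sock *sp)
{
	/* free protocol-private state first... */
	free(sp->hmac);
	sp->hmac = NULL;
	/* ...then chain to the generic destructor */
	base_destruct(&sp->base);
}
```

The ordering matters: the derived destructor must not touch its private state after handing control to the base teardown.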
@@ -4318,7 +4312,7 @@ static int sctp_getsockopt_peeloff(struct sock *sk, int len, char __user *optval
 		goto out;

 	/* Map the socket to an unused fd that can be returned to the user.  */
-	retval = get_unused_fd();
+	retval = get_unused_fd_flags(0);
 	if (retval < 0) {
 		sock_release(newsock);
 		goto out;
@@ -4331,8 +4325,8 @@ static int sctp_getsockopt_peeloff(struct sock *sk, int len, char __user *optval
 		return PTR_ERR(newfile);
 	}

-	SCTP_DEBUG_PRINTK("%s: sk: %p newsk: %p sd: %d\n",
-			  __func__, sk, newsock->sk, retval);
+	pr_debug("%s: sk:%p, newsk:%p, sd:%d\n", __func__, sk, newsock->sk,
+		 retval);

 	/* Return the fd mapped to the new socket.  */
 	if (put_user(len, optlen)) {
@@ -4465,7 +4459,7 @@ static int sctp_getsockopt_peer_addr_params(struct sock *sk, int len,
 		trans = sctp_addr_id2transport(sk, &params.spp_address,
 					       params.spp_assoc_id);
 		if (!trans) {
-			SCTP_DEBUG_PRINTK("Failed no transport\n");
+			pr_debug("%s: failed no transport\n", __func__);
 			return -EINVAL;
 		}
 	}
@@ -4476,7 +4470,7 @@ static int sctp_getsockopt_peer_addr_params(struct sock *sk, int len,
 	 */
 	asoc = sctp_id2assoc(sk, params.spp_assoc_id);
 	if (!asoc && params.spp_assoc_id && sctp_style(sk, UDP)) {
-		SCTP_DEBUG_PRINTK("Failed no association\n");
+		pr_debug("%s: failed no association\n", __func__);
 		return -EINVAL;
 	}

@@ -5081,10 +5075,7 @@ static int sctp_getsockopt_associnfo(struct sock *sk, int len,
 		assocparams.sasoc_asocmaxrxt = asoc->max_retrans;
 		assocparams.sasoc_peer_rwnd = asoc->peer.rwnd;
 		assocparams.sasoc_local_rwnd = asoc->a_rwnd;
-		assocparams.sasoc_cookie_life = (asoc->cookie_life.tv_sec
-						 * 1000) +
-						(asoc->cookie_life.tv_usec
-						 / 1000);
+		assocparams.sasoc_cookie_life = ktime_to_ms(asoc->cookie_life);

 		list_for_each(pos, &asoc->peer.transport_addr_list) {
 			cnt ++;
@@ -5699,8 +5690,7 @@ static int sctp_getsockopt_assoc_stats(struct sock *sk, int len,
 	if (put_user(len, optlen))
 		return -EFAULT;

-	SCTP_DEBUG_PRINTK("sctp_getsockopt_assoc_stat(%d): %d\n",
-			  len, sas.sas_assoc_id);
+	pr_debug("%s: len:%d, assoc_id:%d\n", __func__, len, sas.sas_assoc_id);

 	if (copy_to_user(optval, &sas, len))
 		return -EFAULT;
@@ -5708,14 +5698,13 @@ static int sctp_getsockopt_assoc_stats(struct sock *sk, int len,
 	return 0;
 }

-SCTP_STATIC int sctp_getsockopt(struct sock *sk, int level, int optname,
-				char __user *optval, int __user *optlen)
+static int sctp_getsockopt(struct sock *sk, int level, int optname,
+			   char __user *optval, int __user *optlen)
 {
 	int retval = 0;
 	int len;

-	SCTP_DEBUG_PRINTK("sctp_getsockopt(sk: %p... optname: %d)\n",
-			  sk, optname);
+	pr_debug("%s: sk:%p, optname:%d\n", __func__, sk, optname);

 	/* I can hardly begin to describe how wrong this is.  This is
 	 * so broken as to be worse than useless.  The API draft
@@ -5895,7 +5884,8 @@ static long sctp_get_port_local(struct sock *sk, union sctp_addr *addr)

 	snum = ntohs(addr->v4.sin_port);

-	SCTP_DEBUG_PRINTK("sctp_get_port() begins, snum=%d\n", snum);
+	pr_debug("%s: begins, snum:%d\n", __func__, snum);
+
 	sctp_local_bh_disable();

 	if (snum == 0) {
@@ -5961,7 +5951,8 @@ pp_found:
 		int reuse = sk->sk_reuse;
 		struct sock *sk2;

-		SCTP_DEBUG_PRINTK("sctp_get_port() found a possible match\n");
+		pr_debug("%s: found a possible match\n", __func__);
+
 		if (pp->fastreuse && sk->sk_reuse &&
 		    sk->sk_state != SCTP_SS_LISTENING)
 			goto success;
@@ -5991,7 +5982,8 @@ pp_found:
 				goto fail_unlock;
 			}
 		}
-		SCTP_DEBUG_PRINTK("sctp_get_port(): Found a match\n");
+
+		pr_debug("%s: found a match\n", __func__);
 	}
 pp_not_found:
 	/* If there was a hash table miss, create a new port.  */
@@ -6037,7 +6029,6 @@ fail:
  */
 static int sctp_get_port(struct sock *sk, unsigned short snum)
 {
-	long ret;
 	union sctp_addr addr;
 	struct sctp_af *af = sctp_sk(sk)->pf->af;

@@ -6046,15 +6037,13 @@ static int sctp_get_port(struct sock *sk, unsigned short snum)
 	addr.v4.sin_port = htons(snum);

 	/* Note: sk->sk_num gets filled in if ephemeral port request.  */
-	ret = sctp_get_port_local(sk, &addr);
-
-	return ret ? 1 : 0;
+	return !!sctp_get_port_local(sk, &addr);
 }

 /*
  * Move a socket to LISTENING state.
  */
-SCTP_STATIC int sctp_listen_start(struct sock *sk, int backlog)
+static int sctp_listen_start(struct sock *sk, int backlog)
 {
 	struct sctp_sock *sp = sctp_sk(sk);
 	struct sctp_endpoint *ep = sp->ep;
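The `sctp_get_port()` cleanup above collapses a temporary and a ternary into `return !!sctp_get_port_local(sk, &addr);`. Double negation is the standard C idiom for normalizing any non-zero value to exactly 1:

```c
/* !! maps zero to 0 and any non-zero value (positive or
 * negative) to exactly 1 -- the same result the removed
 * "ret ? 1 : 0" ternary produced. */
static int normalize(long v)
{
	return !!v;
}
```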
@@ -6341,8 +6330,7 @@ static int sctp_autobind(struct sock *sk)
  * msg_control
  * points here
  */
-SCTP_STATIC int sctp_msghdr_parse(const struct msghdr *msg,
-				  sctp_cmsgs_t *cmsgs)
+static int sctp_msghdr_parse(const struct msghdr *msg, sctp_cmsgs_t *cmsgs)
 {
 	struct cmsghdr *cmsg;
 	struct msghdr *my_msg = (struct msghdr *)msg;
@@ -6484,8 +6472,8 @@ static struct sk_buff *sctp_skb_recv_datagram(struct sock *sk, int flags,

 	timeo = sock_rcvtimeo(sk, noblock);

-	SCTP_DEBUG_PRINTK("Timeout: timeo: %ld, MAX: %ld.\n",
-			  timeo, MAX_SCHEDULE_TIMEOUT);
+	pr_debug("%s: timeo:%ld, max:%ld\n", __func__, timeo,
+		 MAX_SCHEDULE_TIMEOUT);

 	do {
 		/* Again only user level code calls this function,
@@ -6616,8 +6604,8 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
 	long current_timeo = *timeo_p;
 	DEFINE_WAIT(wait);

-	SCTP_DEBUG_PRINTK("wait_for_sndbuf: asoc=%p, timeo=%ld, msg_len=%zu\n",
-			  asoc, (long)(*timeo_p), msg_len);
+	pr_debug("%s: asoc:%p, timeo:%ld, msg_len:%zu\n", __func__, asoc,
+		 *timeo_p, msg_len);

 	/* Increment the association's refcnt.  */
 	sctp_association_hold(asoc);
@@ -6723,8 +6711,7 @@ static int sctp_wait_for_connect(struct sctp_association *asoc, long *timeo_p)
 	long current_timeo = *timeo_p;
 	DEFINE_WAIT(wait);

-	SCTP_DEBUG_PRINTK("%s: asoc=%p, timeo=%ld\n", __func__, asoc,
-			  (long)(*timeo_p));
+	pr_debug("%s: asoc:%p, timeo:%ld\n", __func__, asoc, *timeo_p);

 	/* Increment the association's refcnt.  */
 	sctp_association_hold(asoc);
@@ -6864,7 +6851,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
 	newsk->sk_reuse = sk->sk_reuse;

 	newsk->sk_shutdown = sk->sk_shutdown;
-	newsk->sk_destruct = inet_sock_destruct;
+	newsk->sk_destruct = sctp_destruct_sock;
 	newsk->sk_family = sk->sk_family;
 	newsk->sk_protocol = IPPROTO_SCTP;
 	newsk->sk_backlog_rcv = sk->sk_prot->backlog_rcv;
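Most conversions in this file replace ad-hoc `SCTP_DEBUG_PRINTK` strings with `pr_debug()` lines that prefix the message with `__func__`, so every call site gets a uniform `function: details` shape for free. A minimal userspace analogue of that logging shape (`my_debug` is a made-up name, and it writes to a buffer so its output can be inspected):

```c
#include <stdio.h>
#include <string.h>

static char logbuf[128];

/* Hypothetical stand-in for pr_debug(): prepends the enclosing
 * function's name via __func__, as the converted call sites do.
 * ##__VA_ARGS__ is a GNU extension (gcc/clang). */
#define my_debug(fmt, ...) \
	snprintf(logbuf, sizeof(logbuf), "%s: " fmt, __func__, ##__VA_ARGS__)

static const char *demo(void)
{
	my_debug("optname:%d\n", 12);
	return logbuf;
}
```

Because `__func__` expands inside the calling function, the same macro yields a different, correct prefix at every call site.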
diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
index bf3c6e8fc401..9a5c4c9eddaf 100644
--- a/net/sctp/sysctl.c
+++ b/net/sctp/sysctl.c
@@ -62,12 +62,12 @@ extern long sysctl_sctp_mem[3];
 extern int sysctl_sctp_rmem[3];
 extern int sysctl_sctp_wmem[3];

-static int proc_sctp_do_hmac_alg(ctl_table *ctl,
+static int proc_sctp_do_hmac_alg(struct ctl_table *ctl,
 				 int write,
 				 void __user *buffer, size_t *lenp,
 				 loff_t *ppos);
-static ctl_table sctp_table[] = {
+static struct ctl_table sctp_table[] = {
 	{
 		.procname	= "sctp_mem",
 		.data		= &sysctl_sctp_mem,
@@ -93,7 +93,7 @@ static ctl_table sctp_table[] = {
 	{ /* sentinel */ }
 };

-static ctl_table sctp_net_table[] = {
+static struct ctl_table sctp_net_table[] = {
 	{
 		.procname	= "rto_initial",
 		.data		= &init_net.sctp.rto_initial,
@@ -300,14 +300,14 @@ static ctl_table sctp_net_table[] = {
 	{ /* sentinel */ }
 };

-static int proc_sctp_do_hmac_alg(ctl_table *ctl,
+static int proc_sctp_do_hmac_alg(struct ctl_table *ctl,
 				 int write,
 				 void __user *buffer, size_t *lenp,
 				 loff_t *ppos)
 {
 	struct net *net = current->nsproxy->net_ns;
 	char tmp[8];
-	ctl_table tbl;
+	struct ctl_table tbl;
 	int ret;
 	int changed = 0;
 	char *none = "none";
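The sysctl tables above (now spelled `struct ctl_table` rather than the dropped `ctl_table` typedef) end with an empty sentinel entry instead of carrying an explicit length. A simplified sketch of that convention, with made-up types, showing how a walker stops at the sentinel:

```c
#include <stddef.h>

/* Simplified sentinel-terminated table in the style of ctl_table;
 * these names are illustrative only. */
struct demo_entry {
	const char *procname;	/* NULL marks the sentinel */
	int data;
};

static const struct demo_entry demo_table[] = {
	{ "rto_initial", 3000 },
	{ "rto_min",     1000 },
	{ NULL, 0 }		/* sentinel */
};

static int count_entries(const struct demo_entry *tbl)
{
	int n = 0;

	/* walk until the empty sentinel entry */
	while (tbl[n].procname != NULL)
		n++;
	return n;
}
```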
diff --git a/net/sctp/transport.c b/net/sctp/transport.c
index 098f1d5f769e..bdbbc3fd7c14 100644
--- a/net/sctp/transport.c
+++ b/net/sctp/transport.c
@@ -116,7 +116,7 @@ struct sctp_transport *sctp_transport_new(struct net *net,
 {
 	struct sctp_transport *transport;

-	transport = t_new(struct sctp_transport, gfp);
+	transport = kzalloc(sizeof(*transport), gfp);
 	if (!transport)
 		goto fail;

@@ -176,7 +176,10 @@ static void sctp_transport_destroy_rcu(struct rcu_head *head)
  */
 static void sctp_transport_destroy(struct sctp_transport *transport)
 {
-	SCTP_ASSERT(transport->dead, "Transport is not dead", return);
+	if (unlikely(!transport->dead)) {
+		WARN(1, "Attempt to destroy undead transport %p!\n", transport);
+		return;
+	}

 	call_rcu(&transport->rcu, sctp_transport_destroy_rcu);

@@ -317,11 +320,9 @@ void sctp_transport_put(struct sctp_transport *transport)
 /* Update transport's RTO based on the newly calculated RTT. */
 void sctp_transport_update_rto(struct sctp_transport *tp, __u32 rtt)
 {
-	/* Check for valid transport.  */
-	SCTP_ASSERT(tp, "NULL transport", return);
-
-	/* We should not be doing any RTO updates unless rto_pending is set.  */
-	SCTP_ASSERT(tp->rto_pending, "rto_pending not set", return);
+	if (unlikely(!tp->rto_pending))
+		/* We should not be doing any RTO updates unless rto_pending is set.  */
+		pr_debug("%s: rto_pending not set on transport %p!\n", __func__, tp);

 	if (tp->rttvar || tp->srtt) {
 		struct net *net = sock_net(tp->asoc->base.sk);
@@ -377,9 +378,8 @@ void sctp_transport_update_rto(struct sctp_transport *tp, __u32 rtt)
 	 */
 	tp->rto_pending = 0;

-	SCTP_DEBUG_PRINTK("%s: transport: %p, rtt: %d, srtt: %d "
-			  "rttvar: %d, rto: %ld\n", __func__,
-			  tp, rtt, tp->srtt, tp->rttvar, tp->rto);
+	pr_debug("%s: transport:%p, rtt:%d, srtt:%d rttvar:%d, rto:%ld\n",
+		 __func__, tp, rtt, tp->srtt, tp->rttvar, tp->rto);
 }

 /* This routine updates the transport's cwnd and partial_bytes_acked
@@ -433,12 +433,11 @@ void sctp_transport_raise_cwnd(struct sctp_transport *transport,
 			cwnd += pmtu;
 		else
 			cwnd += bytes_acked;
-		SCTP_DEBUG_PRINTK("%s: SLOW START: transport: %p, "
-				  "bytes_acked: %d, cwnd: %d, ssthresh: %d, "
-				  "flight_size: %d, pba: %d\n",
-				  __func__,
-				  transport, bytes_acked, cwnd,
-				  ssthresh, flight_size, pba);
+
+		pr_debug("%s: slow start: transport:%p, bytes_acked:%d, "
+			 "cwnd:%d, ssthresh:%d, flight_size:%d, pba:%d\n",
+			 __func__, transport, bytes_acked, cwnd, ssthresh,
+			 flight_size, pba);
 	} else {
 		/* RFC 2960 7.2.2 Whenever cwnd is greater than ssthresh,
 		 * upon each SACK arrival that advances the Cumulative TSN Ack
@@ -459,12 +458,12 @@ void sctp_transport_raise_cwnd(struct sctp_transport *transport,
 				cwnd += pmtu;
 			pba = ((cwnd < pba) ? (pba - cwnd) : 0);
 		}
-		SCTP_DEBUG_PRINTK("%s: CONGESTION AVOIDANCE: "
-				  "transport: %p, bytes_acked: %d, cwnd: %d, "
-				  "ssthresh: %d, flight_size: %d, pba: %d\n",
-				  __func__,
-				  transport, bytes_acked, cwnd,
-				  ssthresh, flight_size, pba);
+
+		pr_debug("%s: congestion avoidance: transport:%p, "
+			 "bytes_acked:%d, cwnd:%d, ssthresh:%d, "
+			 "flight_size:%d, pba:%d\n", __func__,
+			 transport, bytes_acked, cwnd, ssthresh,
+			 flight_size, pba);
 	}

 	transport->cwnd = cwnd;
@@ -558,10 +557,10 @@ void sctp_transport_lower_cwnd(struct sctp_transport *transport,
 	}

 	transport->partial_bytes_acked = 0;
-	SCTP_DEBUG_PRINTK("%s: transport: %p reason: %d cwnd: "
-			  "%d ssthresh: %d\n", __func__,
-			  transport, reason,
-			  transport->cwnd, transport->ssthresh);
+
+	pr_debug("%s: transport:%p, reason:%d, cwnd:%d, ssthresh:%d\n",
+		 __func__, transport, reason, transport->cwnd,
+		 transport->ssthresh);
 }

 /* Apply Max.Burst limit to the congestion window:
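The `sctp_transport_destroy()` change above trades the old `SCTP_ASSERT` macro for an explicit guard clause that warns and bails out rather than tearing down a transport that is still live. The same shape in plain C, where `warn_count` stands in for the side effect of the kernel's WARN machinery:

```c
/* Illustrative types; not kernel structures. */
struct transport { int dead; int destroyed; };

static int warn_count;	/* stand-in for the kernel's WARN() side effect */

static void transport_destroy(struct transport *t)
{
	/* refuse to tear down a transport that is still live,
	 * and record that something went wrong */
	if (!t->dead) {
		warn_count++;
		return;
	}
	t->destroyed = 1;
}
```

Unlike an assert that can be compiled out, the guard always runs, so the invalid call is both reported and made harmless.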
diff --git a/net/sctp/tsnmap.c b/net/sctp/tsnmap.c
index 396c45174e5b..b46019568a86 100644
--- a/net/sctp/tsnmap.c
+++ b/net/sctp/tsnmap.c
@@ -161,8 +161,8 @@ int sctp_tsnmap_mark(struct sctp_tsnmap *map, __u32 tsn,


 /* Initialize a Gap Ack Block iterator from memory being provided.  */
-SCTP_STATIC void sctp_tsnmap_iter_init(const struct sctp_tsnmap *map,
-				       struct sctp_tsnmap_iter *iter)
+static void sctp_tsnmap_iter_init(const struct sctp_tsnmap *map,
+				  struct sctp_tsnmap_iter *iter)
 {
 	/* Only start looking one past the Cumulative TSN Ack Point.  */
 	iter->start = map->cumulative_tsn_ack_point + 1;
@@ -171,9 +171,9 @@ SCTP_STATIC void sctp_tsnmap_iter_init(const struct sctp_tsnmap *map,
 /* Get the next Gap Ack Blocks. Returns 0 if there was not another block
  * to get.
  */
-SCTP_STATIC int sctp_tsnmap_next_gap_ack(const struct sctp_tsnmap *map,
-					 struct sctp_tsnmap_iter *iter,
-					 __u16 *start, __u16 *end)
+static int sctp_tsnmap_next_gap_ack(const struct sctp_tsnmap *map,
+				    struct sctp_tsnmap_iter *iter,
+				    __u16 *start, __u16 *end)
 {
 	int ended = 0;
 	__u16 start_ = 0, end_ = 0, offset;
diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c
index 10c018a5b9fe..44a45dbee4df 100644
--- a/net/sctp/ulpevent.c
+++ b/net/sctp/ulpevent.c
@@ -57,9 +57,9 @@ static void sctp_ulpevent_release_frag_data(struct sctp_ulpevent *event);


 /* Initialize an ULP event from an given skb.  */
-SCTP_STATIC void sctp_ulpevent_init(struct sctp_ulpevent *event,
-				    int msg_flags,
-				    unsigned int len)
+static void sctp_ulpevent_init(struct sctp_ulpevent *event,
+			       int msg_flags,
+			       unsigned int len)
 {
 	memset(event, 0, sizeof(struct sctp_ulpevent));
 	event->msg_flags = msg_flags;
@@ -67,8 +67,8 @@ SCTP_STATIC void sctp_ulpevent_init(struct sctp_ulpevent *event,
 }

 /* Create a new sctp_ulpevent.  */
-SCTP_STATIC struct sctp_ulpevent *sctp_ulpevent_new(int size, int msg_flags,
-						    gfp_t gfp)
+static struct sctp_ulpevent *sctp_ulpevent_new(int size, int msg_flags,
+					       gfp_t gfp)
 {
 	struct sctp_ulpevent *event;
 	struct sk_buff *skb;
diff --git a/net/socket.c b/net/socket.c
index 4ca1526db756..45afa648364a 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -104,6 +104,12 @@
 #include <linux/route.h>
 #include <linux/sockios.h>
 #include <linux/atalk.h>
+#include <net/ll_poll.h>
+
+#ifdef CONFIG_NET_LL_RX_POLL
+unsigned int sysctl_net_ll_read __read_mostly;
+unsigned int sysctl_net_ll_poll __read_mostly;
+#endif

 static int sock_no_open(struct inode *irrelevant, struct file *dontcare);
 static ssize_t sock_aio_read(struct kiocb *iocb, const struct iovec *iov,
@@ -1142,13 +1148,24 @@ EXPORT_SYMBOL(sock_create_lite);
 /* No kernel lock held - perfect */
 static unsigned int sock_poll(struct file *file, poll_table *wait)
 {
+	unsigned int busy_flag = 0;
 	struct socket *sock;

 	/*
 	 *      We can't return errors to poll, so it's either yes or no.
 	 */
 	sock = file->private_data;
-	return sock->ops->poll(file, sock, wait);
+
+	if (sk_can_busy_loop(sock->sk)) {
+		/* this socket can poll_ll so tell the system call */
+		busy_flag = POLL_BUSY_LOOP;
+
+		/* once, only if requested by syscall */
+		if (wait && (wait->_key & POLL_BUSY_LOOP))
+			sk_busy_loop(sock->sk, 1);
+	}
+
+	return busy_flag | sock->ops->poll(file, sock, wait);
 }

 static int sock_mmap(struct file *file, struct vm_area_struct *vma)
@@ -2635,7 +2652,9 @@ static int __init sock_init(void)
 	 */

 #ifdef CONFIG_NETFILTER
-	netfilter_init();
+	err = netfilter_init();
+	if (err)
+		goto out;
 #endif

 #ifdef CONFIG_NETWORK_PHY_TIMESTAMPING
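The `sock_poll()` change above (part of the low-latency polling series) advertises busy-poll capability by OR-ing a flag into whatever event mask the protocol's poll handler returns, so callers that understand the flag can opt in. A sketch of that flag composition with illustrative constants (the real `POLL*` values differ):

```c
/* Illustrative flag values; not the kernel's POLL* constants. */
#define DEMO_POLLIN		0x001
#define DEMO_POLL_BUSY_LOOP	0x400

static unsigned int demo_poll(int can_busy_loop, unsigned int proto_mask)
{
	unsigned int busy_flag = 0;

	/* advertise busy-polling only when the socket supports it */
	if (can_busy_loop)
		busy_flag = DEMO_POLL_BUSY_LOOP;

	/* the capability flag rides along with the real event mask */
	return busy_flag | proto_mask;
}
```

Because the flag occupies an otherwise unused bit, legacy callers that mask only the event bits are unaffected.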
diff --git a/net/sunrpc/sysctl.c b/net/sunrpc/sysctl.c
index af7d339add9d..c99c58e2ee66 100644
--- a/net/sunrpc/sysctl.c
+++ b/net/sunrpc/sysctl.c
@@ -40,7 +40,7 @@ EXPORT_SYMBOL_GPL(nlm_debug);
 #ifdef RPC_DEBUG

 static struct ctl_table_header *sunrpc_table_header;
-static ctl_table sunrpc_table[];
+static struct ctl_table sunrpc_table[];

 void
 rpc_register_sysctl(void)
@@ -58,7 +58,7 @@ rpc_unregister_sysctl(void)
 	}
 }

-static int proc_do_xprt(ctl_table *table, int write,
+static int proc_do_xprt(struct ctl_table *table, int write,
 			void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	char tmpbuf[256];
@@ -73,7 +73,7 @@ static int proc_do_xprt(ctl_table *table, int write,
 }

 static int
-proc_dodebug(ctl_table *table, int write,
+proc_dodebug(struct ctl_table *table, int write,
 	     void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	char		tmpbuf[20], c, *s;
@@ -135,7 +135,7 @@ done:
 }


-static ctl_table debug_table[] = {
+static struct ctl_table debug_table[] = {
 	{
 		.procname	= "rpc_debug",
 		.data		= &rpc_debug,
@@ -173,7 +173,7 @@ static ctl_table debug_table[] = {
 	{ }
 };

-static ctl_table sunrpc_table[] = {
+static struct ctl_table sunrpc_table[] = {
 	{
 		.procname	= "sunrpc",
 		.mode		= 0555,
diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
index 8343737e85f4..c1b6270262c2 100644
--- a/net/sunrpc/xprtrdma/svc_rdma.c
+++ b/net/sunrpc/xprtrdma/svc_rdma.c
@@ -84,7 +84,7 @@ struct workqueue_struct *svc_rdma_wq;
  * resets the associated statistic to zero. Any read returns it's
  * current value.
  */
-static int read_reset_stat(ctl_table *table, int write,
+static int read_reset_stat(struct ctl_table *table, int write,
 			   void __user *buffer, size_t *lenp,
 			   loff_t *ppos)
 {
@@ -119,7 +119,7 @@ static int read_reset_stat(ctl_table *table, int write,
 }

 static struct ctl_table_header *svcrdma_table_header;
-static ctl_table svcrdma_parm_table[] = {
+static struct ctl_table svcrdma_parm_table[] = {
 	{
 		.procname	= "max_requests",
 		.data		= &svcrdma_max_requests,
@@ -214,7 +214,7 @@ static ctl_table svcrdma_parm_table[] = {
 	{ },
 };

-static ctl_table svcrdma_table[] = {
+static struct ctl_table svcrdma_table[] = {
 	{
 		.procname	= "svc_rdma",
 		.mode		= 0555,
@@ -223,7 +223,7 @@ static ctl_table svcrdma_table[] = {
 	{ },
 };

-static ctl_table svcrdma_root_table[] = {
+static struct ctl_table svcrdma_root_table[] = {
 	{
 		.procname	= "sunrpc",
 		.mode		= 0555,
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 794312f22b9b..285dc0884115 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -86,7 +86,7 @@ static unsigned int max_memreg = RPCRDMA_LAST - 1;

 static struct ctl_table_header *sunrpc_table_header;

-static ctl_table xr_tunables_table[] = {
+static struct ctl_table xr_tunables_table[] = {
 	{
 		.procname	= "rdma_slot_table_entries",
 		.data		= &xprt_rdma_slot_table_entries,
@@ -138,7 +138,7 @@ static ctl_table xr_tunables_table[] = {
 	{ },
 };

-static ctl_table sunrpc_table[] = {
+static struct ctl_table sunrpc_table[] = {
 	{
 		.procname	= "sunrpc",
 		.mode		= 0555,
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ffd50348a509..412de7cfcc80 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -87,7 +87,7 @@ static struct ctl_table_header *sunrpc_table_header;
  * FIXME: changing the UDP slot table size should also resize the UDP
  *        socket buffers for existing UDP transports
  */
-static ctl_table xs_tunables_table[] = {
+static struct ctl_table xs_tunables_table[] = {
 	{
 		.procname	= "udp_slot_table_entries",
 		.data		= &xprt_udp_slot_table_entries,
@@ -143,7 +143,7 @@ static ctl_table xs_tunables_table[] = {
 	{ },
 };

-static ctl_table sunrpc_table[] = {
+static struct ctl_table sunrpc_table[] = {
 	{
 		.procname	= "sunrpc",
 		.mode		= 0555,
diff --git a/net/tipc/Makefile b/net/tipc/Makefile
index 4df8e02d9008..b282f7130d2b 100644
--- a/net/tipc/Makefile
+++ b/net/tipc/Makefile
@@ -8,6 +8,7 @@ tipc-y += addr.o bcast.o bearer.o config.o \
 	   core.o handler.o link.o discover.o msg.o \
 	   name_distr.o subscr.o name_table.o net.o \
 	   netlink.o node.o node_subscr.o port.o ref.o \
-	   socket.o log.o eth_media.o
+	   socket.o log.o eth_media.o server.o

 tipc-$(CONFIG_TIPC_MEDIA_IB)	+= ib_media.o
+tipc-$(CONFIG_SYSCTL)	+= sysctl.o
diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
index e5f3da507823..716de1ac6cb5 100644
--- a/net/tipc/bcast.c
+++ b/net/tipc/bcast.c
@@ -578,8 +578,7 @@ u32 tipc_bclink_acks_missing(struct tipc_node *n_ptr)
  * Returns 0 (packet sent successfully) under all circumstances,
  * since the broadcast link's pseudo-bearer never blocks
  */
-static int tipc_bcbearer_send(struct sk_buff *buf,
-			      struct tipc_bearer *unused1,
+static int tipc_bcbearer_send(struct sk_buff *buf, struct tipc_bearer *unused1,
 			      struct tipc_media_addr *unused2)
 {
 	int bp_index;
diff --git a/net/tipc/bcast.h b/net/tipc/bcast.h
index a93306557e00..6ee587b469fd 100644
--- a/net/tipc/bcast.h
+++ b/net/tipc/bcast.h
@@ -75,7 +75,8 @@ void tipc_nmap_remove(struct tipc_node_map *nm_ptr, u32 node);
 /**
  * tipc_nmap_equal - test for equality of node maps
  */
-static inline int tipc_nmap_equal(struct tipc_node_map *nm_a, struct tipc_node_map *nm_b)
+static inline int tipc_nmap_equal(struct tipc_node_map *nm_a,
+				  struct tipc_node_map *nm_b)
 {
 	return !memcmp(nm_a, nm_b, sizeof(*nm_a));
 }
diff --git a/net/tipc/config.c b/net/tipc/config.c
index f67866c765dd..c301a9a592d8 100644
--- a/net/tipc/config.c
+++ b/net/tipc/config.c
@@ -2,7 +2,7 @@
2 * net/tipc/config.c: TIPC configuration management code 2 * net/tipc/config.c: TIPC configuration management code
3 * 3 *
4 * Copyright (c) 2002-2006, Ericsson AB 4 * Copyright (c) 2002-2006, Ericsson AB
5 * Copyright (c) 2004-2007, 2010-2012, Wind River Systems 5 * Copyright (c) 2004-2007, 2010-2013, Wind River Systems
6 * All rights reserved. 6 * All rights reserved.
7 * 7 *
8 * Redistribution and use in source and binary forms, with or without 8 * Redistribution and use in source and binary forms, with or without
@@ -38,12 +38,12 @@
38#include "port.h" 38#include "port.h"
39#include "name_table.h" 39#include "name_table.h"
40#include "config.h" 40#include "config.h"
41#include "server.h"
41 42
42#define REPLY_TRUNCATED "<truncated>\n" 43#define REPLY_TRUNCATED "<truncated>\n"
43 44
44static u32 config_port_ref; 45static DEFINE_MUTEX(config_mutex);
45 46static struct tipc_server cfgsrv;
46static DEFINE_SPINLOCK(config_lock);
47 47
48static const void *req_tlv_area; /* request message TLV area */ 48static const void *req_tlv_area; /* request message TLV area */
49static int req_tlv_space; /* request message TLV area size */ 49static int req_tlv_space; /* request message TLV area size */
@@ -181,18 +181,7 @@ static struct sk_buff *cfg_set_own_addr(void)
181 if (tipc_own_addr) 181 if (tipc_own_addr)
182 return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED 182 return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
183 " (cannot change node address once assigned)"); 183 " (cannot change node address once assigned)");
184
185 /*
186 * Must temporarily release configuration spinlock while switching into
187 * networking mode as it calls tipc_eth_media_start(), which may sleep.
188 * Releasing the lock is harmless as other locally-issued configuration
189 * commands won't occur until this one completes, and remotely-issued
190 * configuration commands can't be received until a local configuration
191 * command to enable the first bearer is received and processed.
192 */
193 spin_unlock_bh(&config_lock);
194 tipc_core_start_net(addr); 184 tipc_core_start_net(addr);
195 spin_lock_bh(&config_lock);
196 return tipc_cfg_reply_none(); 185 return tipc_cfg_reply_none();
197} 186}
198 187
@@ -248,7 +237,7 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area
248{ 237{
249 struct sk_buff *rep_tlv_buf; 238 struct sk_buff *rep_tlv_buf;
250 239
251 spin_lock_bh(&config_lock); 240 mutex_lock(&config_mutex);
252 241
253 /* Save request and reply details in a well-known location */ 242 /* Save request and reply details in a well-known location */
254 req_tlv_area = request_area; 243 req_tlv_area = request_area;
@@ -377,37 +366,31 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area
 
 	/* Return reply buffer */
 exit:
-	spin_unlock_bh(&config_lock);
+	mutex_unlock(&config_mutex);
 	return rep_tlv_buf;
 }
 
-static void cfg_named_msg_event(void *userdata,
-				u32 port_ref,
-				struct sk_buff **buf,
-				const unchar *msg,
-				u32 size,
-				u32 importance,
-				struct tipc_portid const *orig,
-				struct tipc_name_seq const *dest)
+static void cfg_conn_msg_event(int conid, struct sockaddr_tipc *addr,
+			       void *usr_data, void *buf, size_t len)
 {
 	struct tipc_cfg_msg_hdr *req_hdr;
 	struct tipc_cfg_msg_hdr *rep_hdr;
 	struct sk_buff *rep_buf;
+	int ret;
 
 	/* Validate configuration message header (ignore invalid message) */
-	req_hdr = (struct tipc_cfg_msg_hdr *)msg;
-	if ((size < sizeof(*req_hdr)) ||
-	    (size != TCM_ALIGN(ntohl(req_hdr->tcm_len))) ||
+	req_hdr = (struct tipc_cfg_msg_hdr *)buf;
+	if ((len < sizeof(*req_hdr)) ||
+	    (len != TCM_ALIGN(ntohl(req_hdr->tcm_len))) ||
 	    (ntohs(req_hdr->tcm_flags) != TCM_F_REQUEST)) {
 		pr_warn("Invalid configuration message discarded\n");
 		return;
 	}
 
 	/* Generate reply for request (if can't, return request) */
-	rep_buf = tipc_cfg_do_cmd(orig->node,
-				  ntohs(req_hdr->tcm_type),
-				  msg + sizeof(*req_hdr),
-				  size - sizeof(*req_hdr),
+	rep_buf = tipc_cfg_do_cmd(addr->addr.id.node, ntohs(req_hdr->tcm_type),
+				  buf + sizeof(*req_hdr),
+				  len - sizeof(*req_hdr),
 				  BUF_HEADROOM + MAX_H_SIZE + sizeof(*rep_hdr));
 	if (rep_buf) {
 		skb_push(rep_buf, sizeof(*rep_hdr));
@@ -415,57 +398,51 @@ static void cfg_named_msg_event(void *userdata,
 		memcpy(rep_hdr, req_hdr, sizeof(*rep_hdr));
 		rep_hdr->tcm_len = htonl(rep_buf->len);
 		rep_hdr->tcm_flags &= htons(~TCM_F_REQUEST);
-	} else {
-		rep_buf = *buf;
-		*buf = NULL;
-	}
 
-	/* NEED TO ADD CODE TO HANDLE FAILED SEND (SUCH AS CONGESTION) */
-	tipc_send_buf2port(port_ref, orig, rep_buf, rep_buf->len);
+		ret = tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data,
+					rep_buf->len);
+		if (ret < 0)
+			pr_err("Sending cfg reply message failed, no memory\n");
+
+		kfree_skb(rep_buf);
+	}
 }
 
+static struct sockaddr_tipc cfgsrv_addr __read_mostly = {
+	.family			= AF_TIPC,
+	.addrtype		= TIPC_ADDR_NAMESEQ,
+	.addr.nameseq.type	= TIPC_CFG_SRV,
+	.addr.nameseq.lower	= 0,
+	.addr.nameseq.upper	= 0,
+	.scope			= TIPC_ZONE_SCOPE
+};
+
+static struct tipc_server cfgsrv __read_mostly = {
+	.saddr			= &cfgsrv_addr,
+	.imp			= TIPC_CRITICAL_IMPORTANCE,
+	.type			= SOCK_RDM,
+	.max_rcvbuf_size	= 64 * 1024,
+	.name			= "cfg_server",
+	.tipc_conn_recvmsg	= cfg_conn_msg_event,
+	.tipc_conn_new		= NULL,
+	.tipc_conn_shutdown	= NULL
+};
+
 int tipc_cfg_init(void)
 {
-	struct tipc_name_seq seq;
-	int res;
-
-	res = tipc_createport(NULL, TIPC_CRITICAL_IMPORTANCE,
-			      NULL, NULL, NULL,
-			      NULL, cfg_named_msg_event, NULL,
-			      NULL, &config_port_ref);
-	if (res)
-		goto failed;
-
-	seq.type = TIPC_CFG_SRV;
-	seq.lower = seq.upper = tipc_own_addr;
-	res = tipc_publish(config_port_ref, TIPC_ZONE_SCOPE, &seq);
-	if (res)
-		goto failed;
-
-	return 0;
-
-failed:
-	pr_err("Unable to create configuration service\n");
-	return res;
+	return tipc_server_start(&cfgsrv);
 }
 
 void tipc_cfg_reinit(void)
 {
-	struct tipc_name_seq seq;
-	int res;
-
-	seq.type = TIPC_CFG_SRV;
-	seq.lower = seq.upper = 0;
-	tipc_withdraw(config_port_ref, TIPC_ZONE_SCOPE, &seq);
+	tipc_server_stop(&cfgsrv);
 
-	seq.lower = seq.upper = tipc_own_addr;
-	res = tipc_publish(config_port_ref, TIPC_ZONE_SCOPE, &seq);
-	if (res)
-		pr_err("Unable to reinitialize configuration service\n");
+	cfgsrv_addr.addr.nameseq.lower = tipc_own_addr;
+	cfgsrv_addr.addr.nameseq.upper = tipc_own_addr;
+	tipc_server_start(&cfgsrv);
 }
 
 void tipc_cfg_stop(void)
 {
-	tipc_deleteport(config_port_ref);
-	config_port_ref = 0;
+	tipc_server_stop(&cfgsrv);
 }
diff --git a/net/tipc/core.c b/net/tipc/core.c
index 7ec2c1eb94f1..fd4eeeaa972a 100644
--- a/net/tipc/core.c
+++ b/net/tipc/core.c
@@ -2,7 +2,7 @@
  * net/tipc/core.c: TIPC module code
  *
  * Copyright (c) 2003-2006, Ericsson AB
- * Copyright (c) 2005-2006, 2010-2011, Wind River Systems
+ * Copyright (c) 2005-2006, 2010-2013, Wind River Systems
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -39,6 +39,7 @@
 #include "name_table.h"
 #include "subscr.h"
 #include "config.h"
+#include "port.h"
 
 #include <linux/module.h>
 
@@ -50,7 +51,7 @@ u32 tipc_own_addr __read_mostly;
 int tipc_max_ports __read_mostly;
 int tipc_net_id __read_mostly;
 int tipc_remote_management __read_mostly;
-
+int sysctl_tipc_rmem[3] __read_mostly;	/* min/default/max */
 
 /**
  * tipc_buf_acquire - creates a TIPC message buffer
@@ -118,6 +119,7 @@ static void tipc_core_stop(void)
 	tipc_nametbl_stop();
 	tipc_ref_table_stop();
 	tipc_socket_stop();
+	tipc_unregister_sysctl();
 }
 
 /**
@@ -135,20 +137,21 @@ static int tipc_core_start(void)
 	if (!res)
 		res = tipc_nametbl_init();
 	if (!res)
-		res = tipc_subscr_start();
-	if (!res)
-		res = tipc_cfg_init();
-	if (!res)
 		res = tipc_netlink_start();
 	if (!res)
 		res = tipc_socket_init();
+	if (!res)
+		res = tipc_register_sysctl();
+	if (!res)
+		res = tipc_subscr_start();
+	if (!res)
+		res = tipc_cfg_init();
 	if (res)
 		tipc_core_stop();
 
 	return res;
 }
 
-
 static int __init tipc_init(void)
 {
 	int res;
@@ -160,6 +163,11 @@ static int __init tipc_init(void)
 	tipc_max_ports = CONFIG_TIPC_PORTS;
 	tipc_net_id = 4711;
 
+	sysctl_tipc_rmem[0] = CONN_OVERLOAD_LIMIT >> 4 << TIPC_LOW_IMPORTANCE;
+	sysctl_tipc_rmem[1] = CONN_OVERLOAD_LIMIT >> 4 <<
+			      TIPC_CRITICAL_IMPORTANCE;
+	sysctl_tipc_rmem[2] = CONN_OVERLOAD_LIMIT;
+
 	res = tipc_core_start();
 	if (res)
 		pr_err("Unable to start in single node mode\n");
diff --git a/net/tipc/core.h b/net/tipc/core.h
index 0207db04179a..be72f8cebc53 100644
--- a/net/tipc/core.h
+++ b/net/tipc/core.h
@@ -1,8 +1,8 @@
 /*
  * net/tipc/core.h: Include file for TIPC global declarations
  *
- * Copyright (c) 2005-2006, Ericsson AB
- * Copyright (c) 2005-2007, 2010-2011, Wind River Systems
+ * Copyright (c) 2005-2006, 2013 Ericsson AB
+ * Copyright (c) 2005-2007, 2010-2013, Wind River Systems
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -80,6 +80,7 @@ extern u32 tipc_own_addr __read_mostly;
 extern int tipc_max_ports __read_mostly;
 extern int tipc_net_id __read_mostly;
 extern int tipc_remote_management __read_mostly;
+extern int sysctl_tipc_rmem[3] __read_mostly;
 
 /*
  * Other global variables
@@ -96,6 +97,18 @@ extern int tipc_netlink_start(void);
 extern void tipc_netlink_stop(void);
 extern int tipc_socket_init(void);
 extern void tipc_socket_stop(void);
+extern int tipc_sock_create_local(int type, struct socket **res);
+extern void tipc_sock_release_local(struct socket *sock);
+extern int tipc_sock_accept_local(struct socket *sock,
+				  struct socket **newsock, int flags);
+
+#ifdef CONFIG_SYSCTL
+extern int tipc_register_sysctl(void);
+extern void tipc_unregister_sysctl(void);
+#else
+#define tipc_register_sysctl() 0
+#define tipc_unregister_sysctl()
+#endif
 
 /*
  * TIPC timer and signal code
diff --git a/net/tipc/discover.c b/net/tipc/discover.c
index eedff58d0387..ecc758c6eacf 100644
--- a/net/tipc/discover.c
+++ b/net/tipc/discover.c
@@ -70,8 +70,7 @@ struct tipc_link_req {
  * @dest_domain: network domain of node(s) which should respond to message
  * @b_ptr: ptr to bearer issuing message
  */
-static struct sk_buff *tipc_disc_init_msg(u32 type,
-					  u32 dest_domain,
+static struct sk_buff *tipc_disc_init_msg(u32 type, u32 dest_domain,
 					  struct tipc_bearer *b_ptr)
 {
 	struct sk_buff *buf = tipc_buf_acquire(INT_H_SIZE);
@@ -346,8 +345,8 @@ exit:
  *
  * Returns 0 if successful, otherwise -errno.
  */
-int tipc_disc_create(struct tipc_bearer *b_ptr,
-		     struct tipc_media_addr *dest, u32 dest_domain)
+int tipc_disc_create(struct tipc_bearer *b_ptr, struct tipc_media_addr *dest,
+		     u32 dest_domain)
 {
 	struct tipc_link_req *req;
 
diff --git a/net/tipc/eth_media.c b/net/tipc/eth_media.c
index 120a676a3360..40ea40cf6204 100644
--- a/net/tipc/eth_media.c
+++ b/net/tipc/eth_media.c
@@ -62,7 +62,7 @@ static struct eth_bearer eth_bearers[MAX_ETH_BEARERS];
 static int eth_started;
 
 static int recv_notification(struct notifier_block *nb, unsigned long evt,
-				void *dv);
+			     void *dv);
 /*
  * Network device notifier info
  */
@@ -162,8 +162,7 @@ static void setup_bearer(struct work_struct *work)
  */
 static int enable_bearer(struct tipc_bearer *tb_ptr)
 {
-	struct net_device *dev = NULL;
-	struct net_device *pdev = NULL;
+	struct net_device *dev;
 	struct eth_bearer *eb_ptr = &eth_bearers[0];
 	struct eth_bearer *stop = &eth_bearers[MAX_ETH_BEARERS];
 	char *driver_name = strchr((const char *)tb_ptr->name, ':') + 1;
@@ -178,15 +177,7 @@ static int enable_bearer(struct tipc_bearer *tb_ptr)
 	}
 
 	/* Find device with specified name */
-	read_lock(&dev_base_lock);
-	for_each_netdev(&init_net, pdev) {
-		if (!strncmp(pdev->name, driver_name, IFNAMSIZ)) {
-			dev = pdev;
-			dev_hold(dev);
-			break;
-		}
-	}
-	read_unlock(&dev_base_lock);
+	dev = dev_get_by_name(&init_net, driver_name);
 	if (!dev)
 		return -ENODEV;
 
@@ -251,9 +242,9 @@ static void disable_bearer(struct tipc_bearer *tb_ptr)
  * specified device.
  */
 static int recv_notification(struct notifier_block *nb, unsigned long evt,
-			     void *dv)
+			     void *ptr)
 {
-	struct net_device *dev = (struct net_device *)dv;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct eth_bearer *eb_ptr = &eth_bearers[0];
 	struct eth_bearer *stop = &eth_bearers[MAX_ETH_BEARERS];
 
diff --git a/net/tipc/ib_media.c b/net/tipc/ib_media.c
index 2a2864c25e15..ad2e1ec4117e 100644
--- a/net/tipc/ib_media.c
+++ b/net/tipc/ib_media.c
@@ -155,8 +155,7 @@ static void setup_bearer(struct work_struct *work)
  */
 static int enable_bearer(struct tipc_bearer *tb_ptr)
 {
-	struct net_device *dev = NULL;
-	struct net_device *pdev = NULL;
+	struct net_device *dev;
 	struct ib_bearer *ib_ptr = &ib_bearers[0];
 	struct ib_bearer *stop = &ib_bearers[MAX_IB_BEARERS];
 	char *driver_name = strchr((const char *)tb_ptr->name, ':') + 1;
@@ -171,15 +170,7 @@ static int enable_bearer(struct tipc_bearer *tb_ptr)
 	}
 
 	/* Find device with specified name */
-	read_lock(&dev_base_lock);
-	for_each_netdev(&init_net, pdev) {
-		if (!strncmp(pdev->name, driver_name, IFNAMSIZ)) {
-			dev = pdev;
-			dev_hold(dev);
-			break;
-		}
-	}
-	read_unlock(&dev_base_lock);
+	dev = dev_get_by_name(&init_net, driver_name);
 	if (!dev)
 		return -ENODEV;
 
@@ -244,9 +235,9 @@ static void disable_bearer(struct tipc_bearer *tb_ptr)
  * specified device.
  */
 static int recv_notification(struct notifier_block *nb, unsigned long evt,
-			     void *dv)
+			     void *ptr)
 {
-	struct net_device *dev = (struct net_device *)dv;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct ib_bearer *ib_ptr = &ib_bearers[0];
 	struct ib_bearer *stop = &ib_bearers[MAX_IB_BEARERS];
 
diff --git a/net/tipc/link.c b/net/tipc/link.c
index a80feee5197a..0cc3d9015c5d 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -2,7 +2,7 @@
  * net/tipc/link.c: TIPC link code
  *
  * Copyright (c) 1996-2007, 2012, Ericsson AB
- * Copyright (c) 2004-2007, 2010-2011, Wind River Systems
+ * Copyright (c) 2004-2007, 2010-2013, Wind River Systems
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -41,6 +41,8 @@
 #include "discover.h"
 #include "config.h"
 
+#include <linux/pkt_sched.h>
+
 /*
  * Error message prefixes
  */
@@ -771,8 +773,7 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event)
  * link_bundle_buf(): Append contents of a buffer to
  * the tail of an existing one.
  */
-static int link_bundle_buf(struct tipc_link *l_ptr,
-			   struct sk_buff *bundler,
+static int link_bundle_buf(struct tipc_link *l_ptr, struct sk_buff *bundler,
 			   struct sk_buff *buf)
 {
 	struct tipc_msg *bundler_msg = buf_msg(bundler);
@@ -1057,40 +1058,6 @@ static int link_send_buf_fast(struct tipc_link *l_ptr, struct sk_buff *buf,
 }
 
 /*
- * tipc_send_buf_fast: Entry for data messages where the
- * destination node is known and the header is complete,
- * inclusive total message length.
- * Returns user data length.
- */
-int tipc_send_buf_fast(struct sk_buff *buf, u32 destnode)
-{
-	struct tipc_link *l_ptr;
-	struct tipc_node *n_ptr;
-	int res;
-	u32 selector = msg_origport(buf_msg(buf)) & 1;
-	u32 dummy;
-
-	read_lock_bh(&tipc_net_lock);
-	n_ptr = tipc_node_find(destnode);
-	if (likely(n_ptr)) {
-		tipc_node_lock(n_ptr);
-		l_ptr = n_ptr->active_links[selector];
-		if (likely(l_ptr)) {
-			res = link_send_buf_fast(l_ptr, buf, &dummy);
-			tipc_node_unlock(n_ptr);
-			read_unlock_bh(&tipc_net_lock);
-			return res;
-		}
-		tipc_node_unlock(n_ptr);
-	}
-	read_unlock_bh(&tipc_net_lock);
-	res = msg_data_sz(buf_msg(buf));
-	tipc_reject_msg(buf, TIPC_ERR_NO_NODE);
-	return res;
-}
-
-
-/*
  * tipc_link_send_sections_fast: Entry for messages where the
  * destination processor is known and the header is complete,
  * except for total message length.
@@ -1098,8 +1065,7 @@ int tipc_send_buf_fast(struct sk_buff *buf, u32 destnode)
  */
 int tipc_link_send_sections_fast(struct tipc_port *sender,
 				 struct iovec const *msg_sect,
-				 const u32 num_sect,
-				 unsigned int total_len,
+				 const u32 num_sect, unsigned int total_len,
 				 u32 destaddr)
 {
 	struct tipc_msg *hdr = &sender->phdr;
@@ -1115,7 +1081,10 @@ again:
 	 * (Must not hold any locks while building message.)
 	 */
 	res = tipc_msg_build(hdr, msg_sect, num_sect, total_len,
-			     sender->max_pkt, !sender->user_port, &buf);
+			     sender->max_pkt, &buf);
+	/* Exit if build request was invalid */
+	if (unlikely(res < 0))
+		return res;
 
 	read_lock_bh(&tipc_net_lock);
 	node = tipc_node_find(destaddr);
@@ -1132,10 +1101,6 @@ exit:
 			return res;
 		}
 
-		/* Exit if build request was invalid */
-		if (unlikely(res < 0))
-			goto exit;
-
 		/* Exit if link (or bearer) is congested */
 		if (link_congested(l_ptr) ||
 		    tipc_bearer_blocked(l_ptr->b_ptr)) {
@@ -1189,8 +1154,7 @@ exit:
  */
 static int link_send_sections_long(struct tipc_port *sender,
 				   struct iovec const *msg_sect,
-				   u32 num_sect,
-				   unsigned int total_len,
+				   u32 num_sect, unsigned int total_len,
 				   u32 destaddr)
 {
 	struct tipc_link *l_ptr;
@@ -1204,6 +1168,7 @@ static int link_send_sections_long(struct tipc_port *sender,
 	const unchar *sect_crs;
 	int curr_sect;
 	u32 fragm_no;
+	int res = 0;
 
 again:
 	fragm_no = 1;
@@ -1250,18 +1215,15 @@ again:
 		else
 			sz = fragm_rest;
 
-		if (likely(!sender->user_port)) {
-			if (copy_from_user(buf->data + fragm_crs, sect_crs, sz)) {
+		if (copy_from_user(buf->data + fragm_crs, sect_crs, sz)) {
+			res = -EFAULT;
 error:
-				for (; buf_chain; buf_chain = buf) {
-					buf = buf_chain->next;
-					kfree_skb(buf_chain);
-				}
-				return -EFAULT;
+			for (; buf_chain; buf_chain = buf) {
+				buf = buf_chain->next;
+				kfree_skb(buf_chain);
 			}
-		} else
-			skb_copy_to_linear_data_offset(buf, fragm_crs,
-						       sect_crs, sz);
+			return res;
+		}
 		sect_crs += sz;
 		sect_rest -= sz;
 		fragm_crs += sz;
@@ -1281,8 +1243,10 @@ error:
 		msg_set_fragm_no(&fragm_hdr, ++fragm_no);
 		prev = buf;
 		buf = tipc_buf_acquire(fragm_sz + INT_H_SIZE);
-		if (!buf)
+		if (!buf) {
+			res = -ENOMEM;
 			goto error;
+		}
 
 		buf->next = NULL;
 		prev->next = buf;
@@ -1446,7 +1410,7 @@ static void link_reset_all(unsigned long addr)
 }
 
 static void link_retransmit_failure(struct tipc_link *l_ptr,
-					struct sk_buff *buf)
+				    struct sk_buff *buf)
 {
 	struct tipc_msg *msg = buf_msg(buf);
 
@@ -1901,8 +1865,8 @@ static void link_handle_out_of_seq_msg(struct tipc_link *l_ptr,
  * Send protocol message to the other endpoint.
  */
 void tipc_link_send_proto_msg(struct tipc_link *l_ptr, u32 msg_typ,
-				int probe_msg, u32 gap, u32 tolerance,
-				u32 priority, u32 ack_mtu)
+			      int probe_msg, u32 gap, u32 tolerance,
+			      u32 priority, u32 ack_mtu)
 {
 	struct sk_buff *buf = NULL;
 	struct tipc_msg *msg = l_ptr->pmsg;
@@ -1988,6 +1952,7 @@ void tipc_link_send_proto_msg(struct tipc_link *l_ptr, u32 msg_typ,
 		return;
 
 	skb_copy_to_linear_data(buf, msg, sizeof(l_ptr->proto_msg));
+	buf->priority = TC_PRIO_CONTROL;
 
 	/* Defer message if bearer is already blocked */
 	if (tipc_bearer_blocked(l_ptr->b_ptr)) {
@@ -2145,8 +2110,7 @@ exit:
  * another bearer. Owner node is locked.
  */
 static void tipc_link_tunnel(struct tipc_link *l_ptr,
-			     struct tipc_msg *tunnel_hdr,
-			     struct tipc_msg *msg,
+			     struct tipc_msg *tunnel_hdr, struct tipc_msg *msg,
 			     u32 selector)
 {
 	struct tipc_link *tunnel;
diff --git a/net/tipc/msg.c b/net/tipc/msg.c
index f2db8a87d9c5..ced60e2fc4f7 100644
--- a/net/tipc/msg.c
+++ b/net/tipc/msg.c
@@ -51,8 +51,8 @@ u32 tipc_msg_tot_importance(struct tipc_msg *m)
 }
 
 
-void tipc_msg_init(struct tipc_msg *m, u32 user, u32 type,
-		   u32 hsize, u32 destnode)
+void tipc_msg_init(struct tipc_msg *m, u32 user, u32 type, u32 hsize,
+		   u32 destnode)
 {
 	memset(m, 0, hsize);
 	msg_set_version(m);
@@ -73,8 +73,8 @@ void tipc_msg_init(struct tipc_msg *m, u32 user, u32 type,
  * Returns message data size or errno
  */
 int tipc_msg_build(struct tipc_msg *hdr, struct iovec const *msg_sect,
-		   u32 num_sect, unsigned int total_len,
-		   int max_size, int usrmem, struct sk_buff **buf)
+		   u32 num_sect, unsigned int total_len, int max_size,
+		   struct sk_buff **buf)
 {
 	int dsz, sz, hsz, pos, res, cnt;
 
@@ -92,14 +92,9 @@ int tipc_msg_build(struct tipc_msg *hdr, struct iovec const *msg_sect,
 		return -ENOMEM;
 	skb_copy_to_linear_data(*buf, hdr, hsz);
 	for (res = 1, cnt = 0; res && (cnt < num_sect); cnt++) {
-		if (likely(usrmem))
-			res = !copy_from_user((*buf)->data + pos,
-					      msg_sect[cnt].iov_base,
-					      msg_sect[cnt].iov_len);
-		else
-			skb_copy_to_linear_data_offset(*buf, pos,
-						       msg_sect[cnt].iov_base,
-						       msg_sect[cnt].iov_len);
+		skb_copy_to_linear_data_offset(*buf, pos,
+					       msg_sect[cnt].iov_base,
+					       msg_sect[cnt].iov_len);
 		pos += msg_sect[cnt].iov_len;
 	}
 	if (likely(res))
diff --git a/net/tipc/msg.h b/net/tipc/msg.h
index ba2a72beea68..5e4ccf5c27df 100644
--- a/net/tipc/msg.h
+++ b/net/tipc/msg.h
@@ -719,9 +719,9 @@ static inline void msg_set_link_tolerance(struct tipc_msg *m, u32 n)
 }
 
 u32 tipc_msg_tot_importance(struct tipc_msg *m);
-void tipc_msg_init(struct tipc_msg *m, u32 user, u32 type,
-		   u32 hsize, u32 destnode);
+void tipc_msg_init(struct tipc_msg *m, u32 user, u32 type, u32 hsize,
+		   u32 destnode);
 int tipc_msg_build(struct tipc_msg *hdr, struct iovec const *msg_sect,
-		   u32 num_sect, unsigned int total_len,
-		   int max_size, int usrmem, struct sk_buff **buf);
+		   u32 num_sect, unsigned int total_len, int max_size,
+		   struct sk_buff **buf);
 #endif
diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
index 24b167914311..09dcd54b04e1 100644
--- a/net/tipc/name_table.c
+++ b/net/tipc/name_table.c
@@ -440,7 +440,7 @@ found:
  * sequence overlapping with the requested sequence
  */
 static void tipc_nameseq_subscribe(struct name_seq *nseq,
-					struct tipc_subscription *s)
+				   struct tipc_subscription *s)
 {
 	struct sub_seq *sseq = nseq->sseqs;
 
@@ -662,7 +662,7 @@ exit:
  * tipc_nametbl_publish - add name publication to network name tables
  */
 struct publication *tipc_nametbl_publish(u32 type, u32 lower, u32 upper,
-				    u32 scope, u32 port_ref, u32 key)
+					 u32 scope, u32 port_ref, u32 key)
 {
 	struct publication *publ;
 
@@ -753,7 +753,7 @@ void tipc_nametbl_unsubscribe(struct tipc_subscription *s)
  * subseq_list - print specified sub-sequence contents into the given buffer
  */
 static int subseq_list(struct sub_seq *sseq, char *buf, int len, u32 depth,
-			u32 index)
+		       u32 index)
 {
 	char portIdStr[27];
 	const char *scope_str[] = {"", " zone", " cluster", " node"};
@@ -792,7 +792,7 @@ static int subseq_list(struct sub_seq *sseq, char *buf, int len, u32 depth,
  * nameseq_list - print specified name sequence contents into the given buffer
  */
 static int nameseq_list(struct name_seq *seq, char *buf, int len, u32 depth,
-			 u32 type, u32 lowbound, u32 upbound, u32 index)
+			u32 type, u32 lowbound, u32 upbound, u32 index)
 {
 	struct sub_seq *sseq;
 	char typearea[11];
@@ -849,7 +849,7 @@ static int nametbl_header(char *buf, int len, u32 depth)
  * nametbl_list - print specified name table contents into the given buffer
  */
 static int nametbl_list(char *buf, int len, u32 depth_info,
-			 u32 type, u32 lowbound, u32 upbound)
+			u32 type, u32 lowbound, u32 upbound)
 {
 	struct hlist_head *seq_head;
 	struct name_seq *seq;
diff --git a/net/tipc/name_table.h b/net/tipc/name_table.h
index 71cb4dc712df..f02f48b9a216 100644
--- a/net/tipc/name_table.h
+++ b/net/tipc/name_table.h
@@ -87,14 +87,15 @@ extern rwlock_t tipc_nametbl_lock;
 struct sk_buff *tipc_nametbl_get(const void *req_tlv_area, int req_tlv_space);
 u32 tipc_nametbl_translate(u32 type, u32 instance, u32 *node);
 int tipc_nametbl_mc_translate(u32 type, u32 lower, u32 upper, u32 limit,
-			 struct tipc_port_list *dports);
+			      struct tipc_port_list *dports);
 struct publication *tipc_nametbl_publish(u32 type, u32 lower, u32 upper,
-				    u32 scope, u32 port_ref, u32 key);
+					 u32 scope, u32 port_ref, u32 key);
 int tipc_nametbl_withdraw(u32 type, u32 lower, u32 ref, u32 key);
 struct publication *tipc_nametbl_insert_publ(u32 type, u32 lower, u32 upper,
-					u32 scope, u32 node, u32 ref, u32 key);
-struct publication *tipc_nametbl_remove_publ(u32 type, u32 lower,
-					u32 node, u32 ref, u32 key);
+					     u32 scope, u32 node, u32 ref,
+					     u32 key);
+struct publication *tipc_nametbl_remove_publ(u32 type, u32 lower, u32 node,
+					     u32 ref, u32 key);
 void tipc_nametbl_subscribe(struct tipc_subscription *s);
 void tipc_nametbl_unsubscribe(struct tipc_subscription *s);
 int tipc_nametbl_init(void);
diff --git a/net/tipc/node_subscr.c b/net/tipc/node_subscr.c
index 5e34b015da45..8a7384c04add 100644
--- a/net/tipc/node_subscr.c
+++ b/net/tipc/node_subscr.c
@@ -42,7 +42,7 @@
  * tipc_nodesub_subscribe - create "node down" subscription for specified node
  */
 void tipc_nodesub_subscribe(struct tipc_node_subscr *node_sub, u32 addr,
-		       void *usr_handle, net_ev_handler handle_down)
+			    void *usr_handle, net_ev_handler handle_down)
 {
 	if (in_own_node(addr)) {
 		node_sub->node = NULL;
diff --git a/net/tipc/port.c b/net/tipc/port.c
index 18098cac62f2..b3ed2fcab4fb 100644
--- a/net/tipc/port.c
+++ b/net/tipc/port.c
@@ -2,7 +2,7 @@
  * net/tipc/port.c: TIPC port code
  *
  * Copyright (c) 1992-2007, Ericsson AB
- * Copyright (c) 2004-2008, 2010-2011, Wind River Systems
+ * Copyright (c) 2004-2008, 2010-2013, Wind River Systems
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -46,11 +46,7 @@
 
 #define MAX_REJECT_SIZE 1024
 
-static struct sk_buff *msg_queue_head;
-static struct sk_buff *msg_queue_tail;
-
 DEFINE_SPINLOCK(tipc_port_list_lock);
-static DEFINE_SPINLOCK(queue_lock);
 
 static LIST_HEAD(ports);
 static void port_handle_node_down(unsigned long ref);
@@ -119,7 +115,7 @@ int tipc_multicast(u32 ref, struct tipc_name_seq const *seq,
 	msg_set_nameupper(hdr, seq->upper);
 	msg_set_hdr_sz(hdr, MCAST_H_SIZE);
 	res = tipc_msg_build(hdr, msg_sect, num_sect, total_len, MAX_MSG_SIZE,
-			     !oport->user_port, &buf);
+			     &buf);
 	if (unlikely(!buf))
 		return res;
 
@@ -206,14 +202,15 @@ exit:
 }
 
 /**
- * tipc_createport_raw - create a generic TIPC port
+ * tipc_createport - create a generic TIPC port
  *
  * Returns pointer to (locked) TIPC port, or NULL if unable to create it
  */
-struct tipc_port *tipc_createport_raw(void *usr_handle,
-			u32 (*dispatcher)(struct tipc_port *, struct sk_buff *),
-			void (*wakeup)(struct tipc_port *),
-			const u32 importance)
+struct tipc_port *tipc_createport(struct sock *sk,
+				  u32 (*dispatcher)(struct tipc_port *,
+				  struct sk_buff *),
+				  void (*wakeup)(struct tipc_port *),
+				  const u32 importance)
 {
 	struct tipc_port *p_ptr;
 	struct tipc_msg *msg;
@@ -231,14 +228,13 @@ struct tipc_port *tipc_createport_raw(void *usr_handle,
 		return NULL;
 	}
 
-	p_ptr->usr_handle = usr_handle;
+	p_ptr->sk = sk;
 	p_ptr->max_pkt = MAX_PKT_DEFAULT;
 	p_ptr->ref = ref;
 	INIT_LIST_HEAD(&p_ptr->wait_list);
 	INIT_LIST_HEAD(&p_ptr->subscription.nodesub_list);
 	p_ptr->dispatcher = dispatcher;
 	p_ptr->wakeup = wakeup;
-	p_ptr->user_port = NULL;
 	k_init_timer(&p_ptr->timer, (Handler)port_timeout, ref);
 	INIT_LIST_HEAD(&p_ptr->publications);
 	INIT_LIST_HEAD(&p_ptr->port_list);
@@ -275,7 +271,6 @@ int tipc_deleteport(u32 ref)
 		buf = port_build_peer_abort_msg(p_ptr, TIPC_ERR_NO_PORT);
 		tipc_nodesub_unsubscribe(&p_ptr->subscription);
 	}
-	kfree(p_ptr->user_port);
 
 	spin_lock_bh(&tipc_port_list_lock);
 	list_del(&p_ptr->port_list);
@@ -448,7 +443,7 @@ int tipc_port_reject_sections(struct tipc_port *p_ptr, struct tipc_msg *hdr,
 	int res;
 
 	res = tipc_msg_build(hdr, msg_sect, num_sect, total_len, MAX_MSG_SIZE,
-			     !p_ptr->user_port, &buf);
+			     &buf);
 	if (!buf)
 		return res;
 
@@ -668,215 +663,6 @@ void tipc_port_reinit(void)
 	spin_unlock_bh(&tipc_port_list_lock);
 }
 
-
-/*
- * port_dispatcher_sigh(): Signal handler for messages destinated
- * to the tipc_port interface.
- */
-static void port_dispatcher_sigh(void *dummy)
-{
-	struct sk_buff *buf;
-
-	spin_lock_bh(&queue_lock);
-	buf = msg_queue_head;
-	msg_queue_head = NULL;
-	spin_unlock_bh(&queue_lock);
-
-	while (buf) {
-		struct tipc_port *p_ptr;
-		struct user_port *up_ptr;
-		struct tipc_portid orig;
-		struct tipc_name_seq dseq;
-		void *usr_handle;
-		int connected;
-		int peer_invalid;
-		int published;
-		u32 message_type;
-
-		struct sk_buff *next = buf->next;
-		struct tipc_msg *msg = buf_msg(buf);
-		u32 dref = msg_destport(msg);
-
-		message_type = msg_type(msg);
-		if (message_type > TIPC_DIRECT_MSG)
-			goto reject; /* Unsupported message type */
-
-		p_ptr = tipc_port_lock(dref);
-		if (!p_ptr)
-			goto reject; /* Port deleted while msg in queue */
-
-		orig.ref = msg_origport(msg);
-		orig.node = msg_orignode(msg);
-		up_ptr = p_ptr->user_port;
-		usr_handle = up_ptr->usr_handle;
-		connected = p_ptr->connected;
-		peer_invalid = connected && !tipc_port_peer_msg(p_ptr, msg);
-		published = p_ptr->published;
-
-		if (unlikely(msg_errcode(msg)))
-			goto err;
-
-		switch (message_type) {
-
-		case TIPC_CONN_MSG:{
-				tipc_conn_msg_event cb = up_ptr->conn_msg_cb;
-				u32 dsz;
-
-				tipc_port_unlock(p_ptr);
-				if (unlikely(!cb))
-					goto reject;
-				if (unlikely(!connected)) {
-					if (tipc_connect(dref, &orig))
-						goto reject;
-				} else if (peer_invalid)
-					goto reject;
-				dsz = msg_data_sz(msg);
-				if (unlikely(dsz &&
-					     (++p_ptr->conn_unacked >=
-					      TIPC_FLOW_CONTROL_WIN)))
-					tipc_acknowledge(dref,
-							 p_ptr->conn_unacked);
-				skb_pull(buf, msg_hdr_sz(msg));
-				cb(usr_handle, dref, &buf, msg_data(msg), dsz);
-				break;
-			}
-		case TIPC_DIRECT_MSG:{
-				tipc_msg_event cb = up_ptr->msg_cb;
-
-				tipc_port_unlock(p_ptr);
-				if (unlikely(!cb || connected))
-					goto reject;
-				skb_pull(buf, msg_hdr_sz(msg));
-				cb(usr_handle, dref, &buf, msg_data(msg),
-				   msg_data_sz(msg), msg_importance(msg),
-				   &orig);
-				break;
-			}
-		case TIPC_MCAST_MSG:
-		case TIPC_NAMED_MSG:{
-				tipc_named_msg_event cb = up_ptr->named_msg_cb;
-
-				tipc_port_unlock(p_ptr);
-				if (unlikely(!cb || connected || !published))
-					goto reject;
-				dseq.type = msg_nametype(msg);
-				dseq.lower = msg_nameinst(msg);
-				dseq.upper = (message_type == TIPC_NAMED_MSG)
-					? dseq.lower : msg_nameupper(msg);
-				skb_pull(buf, msg_hdr_sz(msg));
-				cb(usr_handle, dref, &buf, msg_data(msg),
-				   msg_data_sz(msg), msg_importance(msg),
-				   &orig, &dseq);
-				break;
-			}
-		}
-		if (buf)
-			kfree_skb(buf);
-		buf = next;
-		continue;
-err:
-		switch (message_type) {
-
-		case TIPC_CONN_MSG:{
-				tipc_conn_shutdown_event cb =
-					up_ptr->conn_err_cb;
-
-				tipc_port_unlock(p_ptr);
-				if (!cb || !connected || peer_invalid)
-					break;
-				tipc_disconnect(dref);
-				skb_pull(buf, msg_hdr_sz(msg));
-				cb(usr_handle, dref, &buf, msg_data(msg),
-				   msg_data_sz(msg), msg_errcode(msg));
-				break;
-			}
-		case TIPC_DIRECT_MSG:{
-				tipc_msg_err_event cb = up_ptr->err_cb;
-
-				tipc_port_unlock(p_ptr);
-				if (!cb || connected)
-					break;
-				skb_pull(buf, msg_hdr_sz(msg));
-				cb(usr_handle, dref, &buf, msg_data(msg),
-				   msg_data_sz(msg), msg_errcode(msg), &orig);
-				break;
-			}
-		case TIPC_MCAST_MSG:
-		case TIPC_NAMED_MSG:{
-				tipc_named_msg_err_event cb =
-					up_ptr->named_err_cb;
-
-				tipc_port_unlock(p_ptr);
-				if (!cb || connected)
-					break;
-				dseq.type = msg_nametype(msg);
-				dseq.lower = msg_nameinst(msg);
-				dseq.upper = (message_type == TIPC_NAMED_MSG)
-					? dseq.lower : msg_nameupper(msg);
-				skb_pull(buf, msg_hdr_sz(msg));
-				cb(usr_handle, dref, &buf, msg_data(msg),
-				   msg_data_sz(msg), msg_errcode(msg), &dseq);
-				break;
-			}
-		}
-		if (buf)
-			kfree_skb(buf);
-		buf = next;
-		continue;
-reject:
-		tipc_reject_msg(buf, TIPC_ERR_NO_PORT);
-		buf = next;
-	}
-}
-
-/*
- * port_dispatcher(): Dispatcher for messages destinated
- * to the tipc_port interface. Called with port locked.
- */
-static u32 port_dispatcher(struct tipc_port *dummy, struct sk_buff *buf)
-{
-	buf->next = NULL;
-	spin_lock_bh(&queue_lock);
-	if (msg_queue_head) {
-		msg_queue_tail->next = buf;
-		msg_queue_tail = buf;
-	} else {
-		msg_queue_tail = msg_queue_head = buf;
-		tipc_k_signal((Handler)port_dispatcher_sigh, 0);
-	}
-	spin_unlock_bh(&queue_lock);
-	return 0;
-}
-
-/*
- * Wake up port after congestion: Called with port locked
- */
-static void port_wakeup_sh(unsigned long ref)
-{
-	struct tipc_port *p_ptr;
-	struct user_port *up_ptr;
-	tipc_continue_event cb = NULL;
-	void *uh = NULL;
-
-	p_ptr = tipc_port_lock(ref);
-	if (p_ptr) {
-		up_ptr = p_ptr->user_port;
-		if (up_ptr) {
-			cb = up_ptr->continue_event_cb;
-			uh = up_ptr->usr_handle;
-		}
-		tipc_port_unlock(p_ptr);
-	}
-	if (cb)
-		cb(uh, ref);
-}
-
-
-static void port_wakeup(struct tipc_port *p_ptr)
-{
-	tipc_k_signal((Handler)port_wakeup_sh, p_ptr->ref);
-}
-
 void tipc_acknowledge(u32 ref, u32 ack)
 {
 	struct tipc_port *p_ptr;
@@ -893,50 +679,6 @@ void tipc_acknowledge(u32 ref, u32 ack)
 	tipc_net_route_msg(buf);
 }
 
-/*
- * tipc_createport(): user level call.
- */
-int tipc_createport(void *usr_handle,
-		    unsigned int importance,
-		    tipc_msg_err_event error_cb,
-		    tipc_named_msg_err_event named_error_cb,
-		    tipc_conn_shutdown_event conn_error_cb,
-		    tipc_msg_event msg_cb,
-		    tipc_named_msg_event named_msg_cb,
-		    tipc_conn_msg_event conn_msg_cb,
-		    tipc_continue_event continue_event_cb, /* May be zero */
-		    u32 *portref)
-{
-	struct user_port *up_ptr;
-	struct tipc_port *p_ptr;
-
-	up_ptr = kmalloc(sizeof(*up_ptr), GFP_ATOMIC);
-	if (!up_ptr) {
-		pr_warn("Port creation failed, no memory\n");
-		return -ENOMEM;
-	}
-	p_ptr = tipc_createport_raw(NULL, port_dispatcher, port_wakeup,
-				    importance);
-	if (!p_ptr) {
-		kfree(up_ptr);
-		return -ENOMEM;
-	}
-
-	p_ptr->user_port = up_ptr;
-	up_ptr->usr_handle = usr_handle;
-	up_ptr->ref = p_ptr->ref;
-	up_ptr->err_cb = error_cb;
-	up_ptr->named_err_cb = named_error_cb;
-	up_ptr->conn_err_cb = conn_error_cb;
-	up_ptr->msg_cb = msg_cb;
-	up_ptr->named_msg_cb = named_msg_cb;
-	up_ptr->conn_msg_cb = conn_msg_cb;
-	up_ptr->continue_event_cb = continue_event_cb;
-	*portref = p_ptr->ref;
-	tipc_port_unlock(p_ptr);
-	return 0;
-}
-
 int tipc_portimportance(u32 ref, unsigned int *importance)
 {
 	struct tipc_port *p_ptr;
@@ -1184,7 +926,7 @@ static int tipc_port_recv_sections(struct tipc_port *sender, unsigned int num_se
 	int res;
 
 	res = tipc_msg_build(&sender->phdr, msg_sect, num_sect, total_len,
-			     MAX_MSG_SIZE, !sender->user_port, &buf);
+			     MAX_MSG_SIZE, &buf);
 	if (likely(buf))
 		tipc_port_recv_msg(buf);
 	return res;
@@ -1322,43 +1064,3 @@ int tipc_send2port(u32 ref, struct tipc_portid const *dest,
 	}
 	return -ELINKCONG;
 }
-
-/**
- * tipc_send_buf2port - send message buffer to port identity
- */
-int tipc_send_buf2port(u32 ref, struct tipc_portid const *dest,
-		       struct sk_buff *buf, unsigned int dsz)
-{
-	struct tipc_port *p_ptr;
-	struct tipc_msg *msg;
-	int res;
-
-	p_ptr = (struct tipc_port *)tipc_ref_deref(ref);
-	if (!p_ptr || p_ptr->connected)
-		return -EINVAL;
-
-	msg = &p_ptr->phdr;
-	msg_set_type(msg, TIPC_DIRECT_MSG);
-	msg_set_destnode(msg, dest->node);
-	msg_set_destport(msg, dest->ref);
-	msg_set_hdr_sz(msg, BASIC_H_SIZE);
-	msg_set_size(msg, BASIC_H_SIZE + dsz);
-	if (skb_cow(buf, BASIC_H_SIZE))
-		return -ENOMEM;
-
-	skb_push(buf, BASIC_H_SIZE);
-	skb_copy_to_linear_data(buf, msg, BASIC_H_SIZE);
-
-	if (in_own_node(dest->node))
-		res = tipc_port_recv_msg(buf);
-	else
-		res = tipc_send_buf_fast(buf, dest->node);
-	if (likely(res != -ELINKCONG)) {
-		if (res > 0)
-			p_ptr->sent++;
-		return res;
-	}
-	if (port_unreliable(p_ptr))
-		return dsz;
-	return -ELINKCONG;
-}
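The port.c changes above delete the old deferred-dispatch machinery: `port_dispatcher()` appended skbs to a spinlock-protected singly linked FIFO (`msg_queue_head`/`msg_queue_tail`) and scheduled the drain handler via `tipc_k_signal()` only when the queue went from empty to non-empty, so one wakeup covered a whole batch. A user-space sketch of that pattern, with a pthread mutex standing in for the spinlock and a counter standing in for `tipc_k_signal()` (all names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Stand-in for struct sk_buff: only the ->next link matters here. */
struct msg {
	struct msg *next;
};

static struct msg *queue_head;
static struct msg *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static int signals_sent;	/* models calls to tipc_k_signal() */

/* Mirrors the removed port_dispatcher(): enqueue, and schedule the drain
 * handler only on the empty -> non-empty transition. */
void enqueue(struct msg *m)
{
	m->next = NULL;
	pthread_mutex_lock(&queue_lock);
	if (queue_head) {
		queue_tail->next = m;
		queue_tail = m;
	} else {
		queue_tail = queue_head = m;
		signals_sent++;	/* one wakeup for the whole batch */
	}
	pthread_mutex_unlock(&queue_lock);
}

/* Mirrors the removed port_dispatcher_sigh(): detach the entire list under
 * the lock, then walk it lock-free. Returns the number of messages drained. */
int drain(void)
{
	struct msg *m;
	int n = 0;

	pthread_mutex_lock(&queue_lock);
	m = queue_head;
	queue_head = NULL;
	pthread_mutex_unlock(&queue_lock);

	while (m) {
		struct msg *next = m->next;
		n++;	/* the real code dispatched to per-port callbacks */
		m = next;
	}
	return n;
}
```

The patch can drop all of this because messages are now delivered directly to the socket layer instead of being funneled through a global deferred queue.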
diff --git a/net/tipc/port.h b/net/tipc/port.h
index fb66e2e5f4d1..5a7026b9c345 100644
--- a/net/tipc/port.h
+++ b/net/tipc/port.h
@@ -2,7 +2,7 @@
  * net/tipc/port.h: Include file for TIPC port code
  *
  * Copyright (c) 1994-2007, Ericsson AB
- * Copyright (c) 2004-2007, 2010-2011, Wind River Systems
+ * Copyright (c) 2004-2007, 2010-2013, Wind River Systems
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -43,60 +43,12 @@
 #include "node_subscr.h"
 
 #define TIPC_FLOW_CONTROL_WIN 512
-
-typedef void (*tipc_msg_err_event) (void *usr_handle, u32 portref,
-		struct sk_buff **buf, unsigned char const *data,
-		unsigned int size, int reason,
-		struct tipc_portid const *attmpt_destid);
-
-typedef void (*tipc_named_msg_err_event) (void *usr_handle, u32 portref,
-		struct sk_buff **buf, unsigned char const *data,
-		unsigned int size, int reason,
-		struct tipc_name_seq const *attmpt_dest);
-
-typedef void (*tipc_conn_shutdown_event) (void *usr_handle, u32 portref,
-		struct sk_buff **buf, unsigned char const *data,
-		unsigned int size, int reason);
-
-typedef void (*tipc_msg_event) (void *usr_handle, u32 portref,
-		struct sk_buff **buf, unsigned char const *data,
-		unsigned int size, unsigned int importance,
-		struct tipc_portid const *origin);
-
-typedef void (*tipc_named_msg_event) (void *usr_handle, u32 portref,
-		struct sk_buff **buf, unsigned char const *data,
-		unsigned int size, unsigned int importance,
-		struct tipc_portid const *orig,
-		struct tipc_name_seq const *dest);
-
-typedef void (*tipc_conn_msg_event) (void *usr_handle, u32 portref,
-		struct sk_buff **buf, unsigned char const *data,
-		unsigned int size);
-
-typedef void (*tipc_continue_event) (void *usr_handle, u32 portref);
-
-/**
- * struct user_port - TIPC user port (used with native API)
- * @usr_handle: user-specified field
- * @ref: object reference to associated TIPC port
- *
- * <various callback routines>
- */
-struct user_port {
-	void *usr_handle;
-	u32 ref;
-	tipc_msg_err_event err_cb;
-	tipc_named_msg_err_event named_err_cb;
-	tipc_conn_shutdown_event conn_err_cb;
-	tipc_msg_event msg_cb;
-	tipc_named_msg_event named_msg_cb;
-	tipc_conn_msg_event conn_msg_cb;
-	tipc_continue_event continue_event_cb;
-};
+#define CONN_OVERLOAD_LIMIT ((TIPC_FLOW_CONTROL_WIN * 2 + 1) * \
+			     SKB_TRUESIZE(TIPC_MAX_USER_MSG_SIZE))
 
 /**
  * struct tipc_port - TIPC port structure
- * @usr_handle: pointer to additional user-defined information about port
+ * @sk: pointer to socket handle
  * @lock: pointer to spinlock for controlling access to port
  * @connected: non-zero if port is currently connected to a peer port
  * @conn_type: TIPC type used when connection was established
@@ -110,7 +62,6 @@ struct user_port {
  * @port_list: adjacent ports in TIPC's global list of ports
  * @dispatcher: ptr to routine which handles received messages
  * @wakeup: ptr to routine to call when port is no longer congested
- * @user_port: ptr to user port associated with port (if any)
  * @wait_list: adjacent ports in list of ports waiting on link congestion
  * @waiting_pkts:
  * @sent: # of non-empty messages sent by port
@@ -123,7 +74,7 @@ struct user_port {
  * @subscription: "node down" subscription used to terminate failed connections
  */
 struct tipc_port {
-	void *usr_handle;
+	struct sock *sk;
 	spinlock_t *lock;
 	int connected;
 	u32 conn_type;
@@ -137,7 +88,6 @@ struct tipc_port {
 	struct list_head port_list;
 	u32 (*dispatcher)(struct tipc_port *, struct sk_buff *);
 	void (*wakeup)(struct tipc_port *);
-	struct user_port *user_port;
 	struct list_head wait_list;
 	u32 waiting_pkts;
 	u32 sent;
@@ -156,24 +106,16 @@ struct tipc_port_list;
 /*
  * TIPC port manipulation routines
  */
-struct tipc_port *tipc_createport_raw(void *usr_handle,
-		u32 (*dispatcher)(struct tipc_port *, struct sk_buff *),
-		void (*wakeup)(struct tipc_port *), const u32 importance);
+struct tipc_port *tipc_createport(struct sock *sk,
+				  u32 (*dispatcher)(struct tipc_port *,
+				  struct sk_buff *),
+				  void (*wakeup)(struct tipc_port *),
+				  const u32 importance);
 
 int tipc_reject_msg(struct sk_buff *buf, u32 err);
 
-int tipc_send_buf_fast(struct sk_buff *buf, u32 destnode);
-
 void tipc_acknowledge(u32 port_ref, u32 ack);
 
-int tipc_createport(void *usr_handle,
-		unsigned int importance, tipc_msg_err_event error_cb,
-		tipc_named_msg_err_event named_error_cb,
-		tipc_conn_shutdown_event conn_error_cb, tipc_msg_event msg_cb,
-		tipc_named_msg_event named_msg_cb,
-		tipc_conn_msg_event conn_msg_cb,
-		tipc_continue_event continue_event_cb, u32 *portref);
-
 int tipc_deleteport(u32 portref);
 
 int tipc_portimportance(u32 portref, unsigned int *importance);
@@ -186,9 +128,9 @@ int tipc_portunreturnable(u32 portref, unsigned int *isunreturnable);
 int tipc_set_portunreturnable(u32 portref, unsigned int isunreturnable);
 
-int tipc_publish(u32 portref, unsigned int scope,
-		struct tipc_name_seq const *name_seq);
-int tipc_withdraw(u32 portref, unsigned int scope,
-		struct tipc_name_seq const *name_seq);
+int tipc_publish(u32 portref, unsigned int scope,
+		 struct tipc_name_seq const *name_seq);
+int tipc_withdraw(u32 portref, unsigned int scope,
+		  struct tipc_name_seq const *name_seq);
 
 int tipc_connect(u32 portref, struct tipc_portid const *port);
 
@@ -220,9 +162,6 @@ int tipc_send2port(u32 portref, struct tipc_portid const *dest,
 		   unsigned int num_sect, struct iovec const *msg_sect,
 		   unsigned int total_len);
 
-int tipc_send_buf2port(u32 portref, struct tipc_portid const *dest,
-		       struct sk_buff *buf, unsigned int dsz);
-
 int tipc_multicast(u32 portref, struct tipc_name_seq const *seq,
 		   unsigned int section_count, struct iovec const *msg,
 		   unsigned int total_len);
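The new `CONN_OVERLOAD_LIMIT` in port.h bounds a connection's receive backlog at `(TIPC_FLOW_CONTROL_WIN * 2 + 1)` maximum-size messages, counted by true buffer size. A user-space sketch of the arithmetic; the real `SKB_TRUESIZE()` adds the cache-aligned sizes of `struct sk_buff` and `struct skb_shared_info`, so the 320-byte overhead below is an assumed, illustrative constant:

```c
#include <assert.h>

#define TIPC_FLOW_CONTROL_WIN 512
#define TIPC_MAX_USER_MSG_SIZE 66000	/* TIPC's maximum user message size */

/* Simplified stand-in for the kernel's SKB_TRUESIZE(); the overhead value
 * is assumed for illustration, not the real aligned struct sizes. */
#define SKB_OVERHEAD 320
#define SKB_TRUESIZE(x) ((x) + SKB_OVERHEAD)

#define CONN_OVERLOAD_LIMIT ((TIPC_FLOW_CONTROL_WIN * 2 + 1) * \
			     SKB_TRUESIZE(TIPC_MAX_USER_MSG_SIZE))

/* 2 * 512 + 1 = 1025 maximum-size in-flight messages, each counted by
 * its true (payload + bookkeeping) size. */
long conn_overload_limit(void)
{
	return (long)CONN_OVERLOAD_LIMIT;
}
```

Counting by true size rather than payload size is what keeps a flood of tiny messages from consuming unbounded kernel memory through per-buffer overhead.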
diff --git a/net/tipc/server.c b/net/tipc/server.c
new file mode 100644
index 000000000000..19da5abe0fa6
--- /dev/null
+++ b/net/tipc/server.c
@@ -0,0 +1,596 @@
+/*
+ * net/tipc/server.c: TIPC server infrastructure
+ *
+ * Copyright (c) 2012-2013, Wind River Systems
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the names of the copyright holders nor the names of its
+ *    contributors may be used to endorse or promote products derived from
+ *    this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "server.h"
+#include "core.h"
+#include <net/sock.h>
+
+/* Number of messages to send before rescheduling */
+#define MAX_SEND_MSG_COUNT 25
+#define MAX_RECV_MSG_COUNT 25
+#define CF_CONNECTED 1
+
+#define sock2con(x) ((struct tipc_conn *)(x)->sk_user_data)
+
+/**
+ * struct tipc_conn - TIPC connection structure
+ * @kref: reference counter to connection object
+ * @conid: connection identifier
+ * @sock: socket handler associated with connection
+ * @flags: indicates connection state
+ * @server: pointer to connected server
+ * @rwork: receive work item
+ * @usr_data: user-specified field
+ * @rx_action: what to do when connection socket is active
+ * @outqueue: pointer to first outbound message in queue
+ * @outqueue_lock: controll access to the outqueue
+ * @outqueue: list of connection objects for its server
+ * @swork: send work item
+ */
+struct tipc_conn {
+	struct kref kref;
+	int conid;
+	struct socket *sock;
+	unsigned long flags;
+	struct tipc_server *server;
+	struct work_struct rwork;
+	int (*rx_action) (struct tipc_conn *con);
+	void *usr_data;
+	struct list_head outqueue;
+	spinlock_t outqueue_lock;
+	struct work_struct swork;
+};
+
+/* An entry waiting to be sent */
+struct outqueue_entry {
+	struct list_head list;
+	struct kvec iov;
+	struct sockaddr_tipc dest;
+};
+
+static void tipc_recv_work(struct work_struct *work);
+static void tipc_send_work(struct work_struct *work);
+static void tipc_clean_outqueues(struct tipc_conn *con);
+
+static void tipc_conn_kref_release(struct kref *kref)
+{
+	struct tipc_conn *con = container_of(kref, struct tipc_conn, kref);
+	struct tipc_server *s = con->server;
+
+	if (con->sock) {
+		tipc_sock_release_local(con->sock);
+		con->sock = NULL;
+	}
+
+	tipc_clean_outqueues(con);
+
+	if (con->conid)
+		s->tipc_conn_shutdown(con->conid, con->usr_data);
+
+	kfree(con);
+}
+
+static void conn_put(struct tipc_conn *con)
+{
+	kref_put(&con->kref, tipc_conn_kref_release);
+}
+
+static void conn_get(struct tipc_conn *con)
+{
+	kref_get(&con->kref);
+}
+
+static struct tipc_conn *tipc_conn_lookup(struct tipc_server *s, int conid)
+{
+	struct tipc_conn *con;
+
+	spin_lock_bh(&s->idr_lock);
+	con = idr_find(&s->conn_idr, conid);
+	if (con)
+		conn_get(con);
+	spin_unlock_bh(&s->idr_lock);
+	return con;
+}
+
+static void sock_data_ready(struct sock *sk, int unused)
+{
+	struct tipc_conn *con;
+
+	read_lock(&sk->sk_callback_lock);
+	con = sock2con(sk);
+	if (con && test_bit(CF_CONNECTED, &con->flags)) {
+		conn_get(con);
+		if (!queue_work(con->server->rcv_wq, &con->rwork))
+			conn_put(con);
+	}
+	read_unlock(&sk->sk_callback_lock);
+}
+
+static void sock_write_space(struct sock *sk)
+{
+	struct tipc_conn *con;
+
+	read_lock(&sk->sk_callback_lock);
+	con = sock2con(sk);
+	if (con && test_bit(CF_CONNECTED, &con->flags)) {
+		conn_get(con);
+		if (!queue_work(con->server->send_wq, &con->swork))
+			conn_put(con);
+	}
+	read_unlock(&sk->sk_callback_lock);
+}
+
+static void tipc_register_callbacks(struct socket *sock, struct tipc_conn *con)
+{
+	struct sock *sk = sock->sk;
+
+	write_lock_bh(&sk->sk_callback_lock);
+
+	sk->sk_data_ready = sock_data_ready;
+	sk->sk_write_space = sock_write_space;
+	sk->sk_user_data = con;
+
+	con->sock = sock;
+
+	write_unlock_bh(&sk->sk_callback_lock);
+}
+
+static void tipc_unregister_callbacks(struct tipc_conn *con)
+{
+	struct sock *sk = con->sock->sk;
+
+	write_lock_bh(&sk->sk_callback_lock);
+	sk->sk_user_data = NULL;
+	write_unlock_bh(&sk->sk_callback_lock);
+}
+
+static void tipc_close_conn(struct tipc_conn *con)
+{
+	struct tipc_server *s = con->server;
+
+	if (test_and_clear_bit(CF_CONNECTED, &con->flags)) {
+		spin_lock_bh(&s->idr_lock);
+		idr_remove(&s->conn_idr, con->conid);
+		s->idr_in_use--;
+		spin_unlock_bh(&s->idr_lock);
+
+		tipc_unregister_callbacks(con);
+
+		/* We shouldn't flush pending works as we may be in the
+		 * thread. In fact the races with pending rx/tx work structs
+		 * are harmless for us here as we have already deleted this
+		 * connection from server connection list and set
+		 * sk->sk_user_data to 0 before releasing connection object.
+		 */
+		kernel_sock_shutdown(con->sock, SHUT_RDWR);
+
+		conn_put(con);
+	}
+}
+
+static struct tipc_conn *tipc_alloc_conn(struct tipc_server *s)
+{
+	struct tipc_conn *con;
+	int ret;
+
+	con = kzalloc(sizeof(struct tipc_conn), GFP_ATOMIC);
+	if (!con)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&con->kref);
+	INIT_LIST_HEAD(&con->outqueue);
+	spin_lock_init(&con->outqueue_lock);
+	INIT_WORK(&con->swork, tipc_send_work);
+	INIT_WORK(&con->rwork, tipc_recv_work);
+
+	spin_lock_bh(&s->idr_lock);
+	ret = idr_alloc(&s->conn_idr, con, 0, 0, GFP_ATOMIC);
+	if (ret < 0) {
+		kfree(con);
+		spin_unlock_bh(&s->idr_lock);
+		return ERR_PTR(-ENOMEM);
+	}
+	con->conid = ret;
+	s->idr_in_use++;
+	spin_unlock_bh(&s->idr_lock);
+
+	set_bit(CF_CONNECTED, &con->flags);
+	con->server = s;
+
+	return con;
+}
+
+static int tipc_receive_from_sock(struct tipc_conn *con)
+{
+	struct msghdr msg = {};
+	struct tipc_server *s = con->server;
+	struct sockaddr_tipc addr;
+	struct kvec iov;
+	void *buf;
+	int ret;
+
+	buf = kmem_cache_alloc(s->rcvbuf_cache, GFP_ATOMIC);
+	if (!buf) {
+		ret = -ENOMEM;
+		goto out_close;
+	}
+
+	iov.iov_base = buf;
+	iov.iov_len = s->max_rcvbuf_size;
+	msg.msg_name = &addr;
+	ret = kernel_recvmsg(con->sock, &msg, &iov, 1, iov.iov_len,
+			     MSG_DONTWAIT);
+	if (ret <= 0) {
+		kmem_cache_free(s->rcvbuf_cache, buf);
+		goto out_close;
+	}
+
+	s->tipc_conn_recvmsg(con->conid, &addr, con->usr_data, buf, ret);
+
+	kmem_cache_free(s->rcvbuf_cache, buf);
+
+	return 0;
+
+out_close:
+	if (ret != -EWOULDBLOCK)
+		tipc_close_conn(con);
+	else if (ret == 0)
+		/* Don't return success if we really got EOF */
+		ret = -EAGAIN;
+
+	return ret;
+}
+
+static int tipc_accept_from_sock(struct tipc_conn *con)
+{
+	struct tipc_server *s = con->server;
+	struct socket *sock = con->sock;
+	struct socket *newsock;
+	struct tipc_conn *newcon;
+	int ret;
+
+	ret = tipc_sock_accept_local(sock, &newsock, O_NONBLOCK);
+	if (ret < 0)
+		return ret;
+
+	newcon = tipc_alloc_conn(con->server);
+	if (IS_ERR(newcon)) {
+		ret = PTR_ERR(newcon);
+		sock_release(newsock);
+		return ret;
+	}
+
+	newcon->rx_action = tipc_receive_from_sock;
+	tipc_register_callbacks(newsock, newcon);
+
+	/* Notify that new connection is incoming */
+	newcon->usr_data = s->tipc_conn_new(newcon->conid);
+
+	/* Wake up receive process in case of 'SYN+' message */
+	newsock->sk->sk_data_ready(newsock->sk, 0);
+	return ret;
+}
+
+static struct socket *tipc_create_listen_sock(struct tipc_conn *con)
+{
+	struct tipc_server *s = con->server;
+	struct socket *sock = NULL;
+	int ret;
+
+	ret = tipc_sock_create_local(s->type, &sock);
+	if (ret < 0)
+		return NULL;
+	ret = kernel_setsockopt(sock, SOL_TIPC, TIPC_IMPORTANCE,
+				(char *)&s->imp, sizeof(s->imp));
+	if (ret < 0)
+		goto create_err;
+	ret = kernel_bind(sock, (struct sockaddr *)s->saddr, sizeof(*s->saddr));
+	if (ret < 0)
+		goto create_err;
+
+	switch (s->type) {
+	case SOCK_STREAM:
+	case SOCK_SEQPACKET:
+		con->rx_action = tipc_accept_from_sock;
+
+		ret = kernel_listen(sock, 0);
+		if (ret < 0)
+			goto create_err;
+		break;
+	case SOCK_DGRAM:
+	case SOCK_RDM:
+		con->rx_action = tipc_receive_from_sock;
+		break;
+	default:
+		pr_err("Unknown socket type %d\n", s->type);
+		goto create_err;
+	}
+	return sock;
+
+create_err:
+	sock_release(sock);
+	con->sock = NULL;
+	return NULL;
+}
+
+static int tipc_open_listening_sock(struct tipc_server *s)
+{
+	struct socket *sock;
+	struct tipc_conn *con;
+
+	con = tipc_alloc_conn(s);
+	if (IS_ERR(con))
+		return PTR_ERR(con);
+
+	sock = tipc_create_listen_sock(con);
+	if (!sock)
+		return -EINVAL;
+
+	tipc_register_callbacks(sock, con);
+	return 0;
+}
+
+static struct outqueue_entry *tipc_alloc_entry(void *data, int len)
+{
+	struct outqueue_entry *entry;
+	void *buf;
+
+	entry = kmalloc(sizeof(struct outqueue_entry), GFP_ATOMIC);
+	if (!entry)
+		return NULL;
+
+	buf = kmalloc(len, GFP_ATOMIC);
+	if (!buf) {
+		kfree(entry);
+		return NULL;
+	}
+
+	memcpy(buf, data, len);
+	entry->iov.iov_base = buf;
+	entry->iov.iov_len = len;
+
+	return entry;
+}
+
+static void tipc_free_entry(struct outqueue_entry *e)
+{
+	kfree(e->iov.iov_base);
+	kfree(e);
+}
+
+static void tipc_clean_outqueues(struct tipc_conn *con)
+{
+	struct outqueue_entry *e, *safe;
+
+	spin_lock_bh(&con->outqueue_lock);
+	list_for_each_entry_safe(e, safe, &con->outqueue, list) {
+		list_del(&e->list);
+		tipc_free_entry(e);
+	}
+	spin_unlock_bh(&con->outqueue_lock);
+}
+
+int tipc_conn_sendmsg(struct tipc_server *s, int conid,
+		      struct sockaddr_tipc *addr, void *data, size_t len)
+{
+	struct outqueue_entry *e;
+	struct tipc_conn *con;
+
+	con = tipc_conn_lookup(s, conid);
+	if (!con)
+		return -EINVAL;
+
+	e = tipc_alloc_entry(data, len);
+	if (!e) {
+		conn_put(con);
+		return -ENOMEM;
+	}
+
+	if (addr)
+		memcpy(&e->dest, addr, sizeof(struct sockaddr_tipc));
+
+	spin_lock_bh(&con->outqueue_lock);
+	list_add_tail(&e->list, &con->outqueue);
+	spin_unlock_bh(&con->outqueue_lock);
+
+	if (test_bit(CF_CONNECTED, &con->flags))
+		if (!queue_work(s->send_wq, &con->swork))
+			conn_put(con);
+
+	return 0;
+}
+
+void tipc_conn_terminate(struct tipc_server *s, int conid)
+{
+	struct tipc_conn *con;
+
+	con = tipc_conn_lookup(s, conid);
+	if (con) {
+		tipc_close_conn(con);
+		conn_put(con);
+	}
+}
+
+static void tipc_send_to_sock(struct tipc_conn *con)
+{
+	int count = 0;
+	struct tipc_server *s = con->server;
+	struct outqueue_entry *e;
+	struct msghdr msg;
+	int ret;
+
+	spin_lock_bh(&con->outqueue_lock);
+	while (1) {
+		e = list_entry(con->outqueue.next, struct outqueue_entry,
+			       list);
+		if ((struct list_head *) e == &con->outqueue)
+			break;
+		spin_unlock_bh(&con->outqueue_lock);
+
+		memset(&msg, 0, sizeof(msg));
+		msg.msg_flags = MSG_DONTWAIT;
+
+		if (s->type == SOCK_DGRAM || s->type == SOCK_RDM) {
+			msg.msg_name = &e->dest;
+			msg.msg_namelen = sizeof(struct sockaddr_tipc);
+		}
+		ret = kernel_sendmsg(con->sock, &msg, &e->iov, 1,
+				     e->iov.iov_len);
+		if (ret == -EWOULDBLOCK || ret == 0) {
+			cond_resched();
+			goto out;
+		} else if (ret < 0) {
+			goto send_err;
+		}
+
+		/* Don't starve users filling buffers */
+		if (++count >= MAX_SEND_MSG_COUNT) {
+			cond_resched();
+			count = 0;
+		}
+
+		spin_lock_bh(&con->outqueue_lock);
485 list_del(&e->list);
486 tipc_free_entry(e);
487 }
488 spin_unlock_bh(&con->outqueue_lock);
489out:
490 return;
491
492send_err:
493 tipc_close_conn(con);
494}
495
496static void tipc_recv_work(struct work_struct *work)
497{
498 struct tipc_conn *con = container_of(work, struct tipc_conn, rwork);
499 int count = 0;
500
501 while (test_bit(CF_CONNECTED, &con->flags)) {
502 if (con->rx_action(con))
503 break;
504
505 /* Don't flood Rx machine */
506 if (++count >= MAX_RECV_MSG_COUNT) {
507 cond_resched();
508 count = 0;
509 }
510 }
511 conn_put(con);
512}
513
514static void tipc_send_work(struct work_struct *work)
515{
516 struct tipc_conn *con = container_of(work, struct tipc_conn, swork);
517
518 if (test_bit(CF_CONNECTED, &con->flags))
519 tipc_send_to_sock(con);
520
521 conn_put(con);
522}
523
524static void tipc_work_stop(struct tipc_server *s)
525{
526 destroy_workqueue(s->rcv_wq);
527 destroy_workqueue(s->send_wq);
528}
529
530static int tipc_work_start(struct tipc_server *s)
531{
532 s->rcv_wq = alloc_workqueue("tipc_rcv", WQ_UNBOUND, 1);
533 if (!s->rcv_wq) {
534 pr_err("can't start tipc receive workqueue\n");
535 return -ENOMEM;
536 }
537
538 s->send_wq = alloc_workqueue("tipc_send", WQ_UNBOUND, 1);
539 if (!s->send_wq) {
540 pr_err("can't start tipc send workqueue\n");
541 destroy_workqueue(s->rcv_wq);
542 return -ENOMEM;
543 }
544
545 return 0;
546}
547
548int tipc_server_start(struct tipc_server *s)
549{
550 int ret;
551
552 spin_lock_init(&s->idr_lock);
553 idr_init(&s->conn_idr);
554 s->idr_in_use = 0;
555
556 s->rcvbuf_cache = kmem_cache_create(s->name, s->max_rcvbuf_size,
557 0, SLAB_HWCACHE_ALIGN, NULL);
558 if (!s->rcvbuf_cache)
559 return -ENOMEM;
560
561 ret = tipc_work_start(s);
562 if (ret < 0) {
563 kmem_cache_destroy(s->rcvbuf_cache);
564 return ret;
565 }
566 s->enabled = 1;
567
568 return tipc_open_listening_sock(s);
569}
570
571void tipc_server_stop(struct tipc_server *s)
572{
573 struct tipc_conn *con;
574 int total = 0;
575 int id;
576
577 if (!s->enabled)
578 return;
579
580 s->enabled = 0;
581 spin_lock_bh(&s->idr_lock);
582 for (id = 0; total < s->idr_in_use; id++) {
583 con = idr_find(&s->conn_idr, id);
584 if (con) {
585 total++;
586 spin_unlock_bh(&s->idr_lock);
587 tipc_close_conn(con);
588 spin_lock_bh(&s->idr_lock);
589 }
590 }
591 spin_unlock_bh(&s->idr_lock);
592
593 tipc_work_stop(s);
594 kmem_cache_destroy(s->rcvbuf_cache);
595 idr_destroy(&s->conn_idr);
596}
diff --git a/net/tipc/server.h b/net/tipc/server.h
new file mode 100644
index 000000000000..98b23f20bc0f
--- /dev/null
+++ b/net/tipc/server.h
@@ -0,0 +1,94 @@
1/*
2 * net/tipc/server.h: Include file for TIPC server code
3 *
4 * Copyright (c) 2012-2013, Wind River Systems
5 * All rights reserved.
6 *
7 * Redistribution and use in source and binary forms, with or without
8 * modification, are permitted provided that the following conditions are met:
9 *
10 * 1. Redistributions of source code must retain the above copyright
11 * notice, this list of conditions and the following disclaimer.
12 * 2. Redistributions in binary form must reproduce the above copyright
13 * notice, this list of conditions and the following disclaimer in the
14 * documentation and/or other materials provided with the distribution.
15 * 3. Neither the names of the copyright holders nor the names of its
16 * contributors may be used to endorse or promote products derived from
17 * this software without specific prior written permission.
18 *
19 * Alternatively, this software may be distributed under the terms of the
20 * GNU General Public License ("GPL") version 2 as published by the Free
21 * Software Foundation.
22 *
23 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
24 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
25 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
26 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
27 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
28 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
29 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
30 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
31 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
32 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
33 * POSSIBILITY OF SUCH DAMAGE.
34 */
35
36#ifndef _TIPC_SERVER_H
37#define _TIPC_SERVER_H
38
39#include "core.h"
40
41#define TIPC_SERVER_NAME_LEN 32
42
43/**
44 * struct tipc_server - TIPC server structure
45 * @conn_idr: identifier set of connection
46 * @idr_lock: protect the connection identifier set
47 * @idr_in_use: number of allocated identifier entries
48 * @rcvbuf_cache: memory cache of server receive buffer
49 * @rcv_wq: receive workqueue
50 * @send_wq: send workqueue
51 * @max_rcvbuf_size: maximum permitted receive message length
52 * @tipc_conn_new: callback invoked when a new connection arrives
53 * @tipc_conn_shutdown: callback invoked when a connection is shut down
54 * @tipc_conn_recvmsg: callback invoked when a message arrives
55 * @saddr: TIPC server address
56 * @name: server name
57 * @imp: message importance
58 * @type: socket type
59 * @enabled: flag indicating whether the server has been started
60 */
61struct tipc_server {
62 struct idr conn_idr;
63 spinlock_t idr_lock;
64 int idr_in_use;
65 struct kmem_cache *rcvbuf_cache;
66 struct workqueue_struct *rcv_wq;
67 struct workqueue_struct *send_wq;
68 int max_rcvbuf_size;
69 void *(*tipc_conn_new) (int conid);
70 void (*tipc_conn_shutdown) (int conid, void *usr_data);
71 void (*tipc_conn_recvmsg) (int conid, struct sockaddr_tipc *addr,
72 void *usr_data, void *buf, size_t len);
73 struct sockaddr_tipc *saddr;
74 const char name[TIPC_SERVER_NAME_LEN];
75 int imp;
76 int type;
77 int enabled;
78};
79
80int tipc_conn_sendmsg(struct tipc_server *s, int conid,
81 struct sockaddr_tipc *addr, void *data, size_t len);
82
83/**
84 * tipc_conn_terminate - terminate connection with server
85 *
86 * Note: Must be called in process context, since it may sleep
87 */
88void tipc_conn_terminate(struct tipc_server *s, int conid);
89
90int tipc_server_start(struct tipc_server *s);
91
92void tipc_server_stop(struct tipc_server *s);
93
94#endif
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index 515ce38e4f4c..ce8249c76827 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -2,7 +2,7 @@
2 * net/tipc/socket.c: TIPC socket API 2 * net/tipc/socket.c: TIPC socket API
3 * 3 *
4 * Copyright (c) 2001-2007, 2012 Ericsson AB 4 * Copyright (c) 2001-2007, 2012 Ericsson AB
5 * Copyright (c) 2004-2008, 2010-2012, Wind River Systems 5 * Copyright (c) 2004-2008, 2010-2013, Wind River Systems
6 * All rights reserved. 6 * All rights reserved.
7 * 7 *
8 * Redistribution and use in source and binary forms, with or without 8 * Redistribution and use in source and binary forms, with or without
@@ -43,8 +43,6 @@
43#define SS_LISTENING -1 /* socket is listening */ 43#define SS_LISTENING -1 /* socket is listening */
44#define SS_READY -2 /* socket is connectionless */ 44#define SS_READY -2 /* socket is connectionless */
45 45
46#define CONN_OVERLOAD_LIMIT ((TIPC_FLOW_CONTROL_WIN * 2 + 1) * \
47 SKB_TRUESIZE(TIPC_MAX_USER_MSG_SIZE))
48#define CONN_TIMEOUT_DEFAULT 8000 /* default connect timeout = 8s */ 46#define CONN_TIMEOUT_DEFAULT 8000 /* default connect timeout = 8s */
49 47
50struct tipc_sock { 48struct tipc_sock {
@@ -65,12 +63,15 @@ static u32 dispatch(struct tipc_port *tport, struct sk_buff *buf);
65static void wakeupdispatch(struct tipc_port *tport); 63static void wakeupdispatch(struct tipc_port *tport);
66static void tipc_data_ready(struct sock *sk, int len); 64static void tipc_data_ready(struct sock *sk, int len);
67static void tipc_write_space(struct sock *sk); 65static void tipc_write_space(struct sock *sk);
66static int release(struct socket *sock);
67static int accept(struct socket *sock, struct socket *new_sock, int flags);
68 68
69static const struct proto_ops packet_ops; 69static const struct proto_ops packet_ops;
70static const struct proto_ops stream_ops; 70static const struct proto_ops stream_ops;
71static const struct proto_ops msg_ops; 71static const struct proto_ops msg_ops;
72 72
73static struct proto tipc_proto; 73static struct proto tipc_proto;
74static struct proto tipc_proto_kern;
74 75
75static int sockets_enabled; 76static int sockets_enabled;
76 77
@@ -143,7 +144,7 @@ static void reject_rx_queue(struct sock *sk)
143} 144}
144 145
145/** 146/**
146 * tipc_create - create a TIPC socket 147 * tipc_sk_create - create a TIPC socket
147 * @net: network namespace (must be default network) 148 * @net: network namespace (must be default network)
148 * @sock: pre-allocated socket structure 149 * @sock: pre-allocated socket structure
149 * @protocol: protocol indicator (must be 0) 150 * @protocol: protocol indicator (must be 0)
@@ -154,8 +155,8 @@ static void reject_rx_queue(struct sock *sk)
154 * 155 *
155 * Returns 0 on success, errno otherwise 156 * Returns 0 on success, errno otherwise
156 */ 157 */
157static int tipc_create(struct net *net, struct socket *sock, int protocol, 158static int tipc_sk_create(struct net *net, struct socket *sock, int protocol,
158 int kern) 159 int kern)
159{ 160{
160 const struct proto_ops *ops; 161 const struct proto_ops *ops;
161 socket_state state; 162 socket_state state;
@@ -185,13 +186,17 @@ static int tipc_create(struct net *net, struct socket *sock, int protocol,
185 } 186 }
186 187
187 /* Allocate socket's protocol area */ 188 /* Allocate socket's protocol area */
188 sk = sk_alloc(net, AF_TIPC, GFP_KERNEL, &tipc_proto); 189 if (!kern)
190 sk = sk_alloc(net, AF_TIPC, GFP_KERNEL, &tipc_proto);
191 else
192 sk = sk_alloc(net, AF_TIPC, GFP_KERNEL, &tipc_proto_kern);
193
189 if (sk == NULL) 194 if (sk == NULL)
190 return -ENOMEM; 195 return -ENOMEM;
191 196
192 /* Allocate TIPC port for socket to use */ 197 /* Allocate TIPC port for socket to use */
193 tp_ptr = tipc_createport_raw(sk, &dispatch, &wakeupdispatch, 198 tp_ptr = tipc_createport(sk, &dispatch, &wakeupdispatch,
194 TIPC_LOW_IMPORTANCE); 199 TIPC_LOW_IMPORTANCE);
195 if (unlikely(!tp_ptr)) { 200 if (unlikely(!tp_ptr)) {
196 sk_free(sk); 201 sk_free(sk);
197 return -ENOMEM; 202 return -ENOMEM;
@@ -203,6 +208,7 @@ static int tipc_create(struct net *net, struct socket *sock, int protocol,
203 208
204 sock_init_data(sock, sk); 209 sock_init_data(sock, sk);
205 sk->sk_backlog_rcv = backlog_rcv; 210 sk->sk_backlog_rcv = backlog_rcv;
211 sk->sk_rcvbuf = sysctl_tipc_rmem[1];
206 sk->sk_data_ready = tipc_data_ready; 212 sk->sk_data_ready = tipc_data_ready;
207 sk->sk_write_space = tipc_write_space; 213 sk->sk_write_space = tipc_write_space;
208 tipc_sk(sk)->p = tp_ptr; 214 tipc_sk(sk)->p = tp_ptr;
@@ -220,6 +226,78 @@ static int tipc_create(struct net *net, struct socket *sock, int protocol,
220} 226}
221 227
222/** 228/**
229 * tipc_sock_create_local - create TIPC socket from inside TIPC module
230 * @type: socket type - SOCK_RDM or SOCK_SEQPACKET
231 *
232 * We cannot use sock_create_kern() here because it bumps the module
233 * user count. Since the socket owner and creator is the same module,
234 * we must make sure the module count remains zero for module-local
235 * sockets; otherwise rmmod would fail.
236 *
237 * Returns 0 on success, errno otherwise
238 */
239int tipc_sock_create_local(int type, struct socket **res)
240{
241 int rc;
242 struct sock *sk;
243
244 rc = sock_create_lite(AF_TIPC, type, 0, res);
245 if (rc < 0) {
246 pr_err("Failed to create kernel socket\n");
247 return rc;
248 }
249 tipc_sk_create(&init_net, *res, 0, 1);
250
251 sk = (*res)->sk;
252
253 return 0;
254}
255
256/**
257 * tipc_sock_release_local - release socket created by tipc_sock_create_local
258 * @sock: the socket to be released.
259 *
260 * Module reference count is not incremented when such sockets are created,
261 * so we must keep it from being decremented when they are released.
262 */
263void tipc_sock_release_local(struct socket *sock)
264{
265 release(sock);
266 sock->ops = NULL;
267 sock_release(sock);
268}
269
270/**
271 * tipc_sock_accept_local - accept a connection on a socket created
272 * with tipc_sock_create_local. Use this function to avoid
273 * inadvertently incrementing the module reference count.
274 *
275 * @sock: the accepting socket
276 * @newsock: reference to the new socket to be created
277 * @flags: socket flags
278 */
279
280int tipc_sock_accept_local(struct socket *sock, struct socket **newsock,
281 int flags)
282{
283 struct sock *sk = sock->sk;
284 int ret;
285
286 ret = sock_create_lite(sk->sk_family, sk->sk_type,
287 sk->sk_protocol, newsock);
288 if (ret < 0)
289 return ret;
290
291 ret = accept(sock, *newsock, flags);
292 if (ret < 0) {
293 sock_release(*newsock);
294 return ret;
295 }
296 (*newsock)->ops = sock->ops;
297 return ret;
298}
299
300/**
223 * release - destroy a TIPC socket 301 * release - destroy a TIPC socket
224 * @sock: socket to destroy 302 * @sock: socket to destroy
225 * 303 *
@@ -324,7 +402,9 @@ static int bind(struct socket *sock, struct sockaddr *uaddr, int uaddr_len)
324 else if (addr->addrtype != TIPC_ADDR_NAMESEQ) 402 else if (addr->addrtype != TIPC_ADDR_NAMESEQ)
325 return -EAFNOSUPPORT; 403 return -EAFNOSUPPORT;
326 404
327 if (addr->addr.nameseq.type < TIPC_RESERVED_TYPES) 405 if ((addr->addr.nameseq.type < TIPC_RESERVED_TYPES) &&
406 (addr->addr.nameseq.type != TIPC_TOP_SRV) &&
407 (addr->addr.nameseq.type != TIPC_CFG_SRV))
328 return -EACCES; 408 return -EACCES;
329 409
330 return (addr->scope > 0) ? 410 return (addr->scope > 0) ?
@@ -519,8 +599,7 @@ static int send_msg(struct kiocb *iocb, struct socket *sock,
519 res = -EISCONN; 599 res = -EISCONN;
520 goto exit; 600 goto exit;
521 } 601 }
522 if ((tport->published) || 602 if (tport->published) {
523 ((sock->type == SOCK_STREAM) && (total_len != 0))) {
524 res = -EOPNOTSUPP; 603 res = -EOPNOTSUPP;
525 goto exit; 604 goto exit;
526 } 605 }
@@ -810,7 +889,7 @@ static void set_orig_addr(struct msghdr *m, struct tipc_msg *msg)
810 * Returns 0 if successful, otherwise errno 889 * Returns 0 if successful, otherwise errno
811 */ 890 */
812static int anc_data_recv(struct msghdr *m, struct tipc_msg *msg, 891static int anc_data_recv(struct msghdr *m, struct tipc_msg *msg,
813 struct tipc_port *tport) 892 struct tipc_port *tport)
814{ 893{
815 u32 anc_data[3]; 894 u32 anc_data[3];
816 u32 err; 895 u32 err;
@@ -1011,8 +1090,7 @@ static int recv_stream(struct kiocb *iocb, struct socket *sock,
1011 1090
1012 lock_sock(sk); 1091 lock_sock(sk);
1013 1092
1014 if (unlikely((sock->state == SS_UNCONNECTED) || 1093 if (unlikely((sock->state == SS_UNCONNECTED))) {
1015 (sock->state == SS_CONNECTING))) {
1016 res = -ENOTCONN; 1094 res = -ENOTCONN;
1017 goto exit; 1095 goto exit;
1018 } 1096 }
@@ -1233,10 +1311,10 @@ static u32 filter_connect(struct tipc_sock *tsock, struct sk_buff **buf)
1233 * For all connectionless messages, by default new queue limits are 1311 * For all connectionless messages, by default new queue limits are
1234 * as below: 1312
1235 * 1313 *
1236 * TIPC_LOW_IMPORTANCE (5MB) 1314 * TIPC_LOW_IMPORTANCE (4 MB)
1237 * TIPC_MEDIUM_IMPORTANCE (10MB) 1315 * TIPC_MEDIUM_IMPORTANCE (8 MB)
1238 * TIPC_HIGH_IMPORTANCE (20MB) 1316 * TIPC_HIGH_IMPORTANCE (16 MB)
1239 * TIPC_CRITICAL_IMPORTANCE (40MB) 1317 * TIPC_CRITICAL_IMPORTANCE (32 MB)
1240 * 1318 *
1241 * Returns overload limit according to corresponding message importance 1319 * Returns overload limit according to corresponding message importance
1242 */ 1320 */
@@ -1246,9 +1324,10 @@ static unsigned int rcvbuf_limit(struct sock *sk, struct sk_buff *buf)
1246 unsigned int limit; 1324 unsigned int limit;
1247 1325
1248 if (msg_connected(msg)) 1326 if (msg_connected(msg))
1249 limit = CONN_OVERLOAD_LIMIT; 1327 limit = sysctl_tipc_rmem[2];
1250 else 1328 else
1251 limit = sk->sk_rcvbuf << (msg_importance(msg) + 5); 1329 limit = sk->sk_rcvbuf >> TIPC_CRITICAL_IMPORTANCE <<
1330 msg_importance(msg);
1252 return limit; 1331 return limit;
1253} 1332}
1254 1333
@@ -1327,7 +1406,7 @@ static int backlog_rcv(struct sock *sk, struct sk_buff *buf)
1327 */ 1406 */
1328static u32 dispatch(struct tipc_port *tport, struct sk_buff *buf) 1407static u32 dispatch(struct tipc_port *tport, struct sk_buff *buf)
1329{ 1408{
1330 struct sock *sk = (struct sock *)tport->usr_handle; 1409 struct sock *sk = tport->sk;
1331 u32 res; 1410 u32 res;
1332 1411
1333 /* 1412 /*
@@ -1358,7 +1437,7 @@ static u32 dispatch(struct tipc_port *tport, struct sk_buff *buf)
1358 */ 1437 */
1359static void wakeupdispatch(struct tipc_port *tport) 1438static void wakeupdispatch(struct tipc_port *tport)
1360{ 1439{
1361 struct sock *sk = (struct sock *)tport->usr_handle; 1440 struct sock *sk = tport->sk;
1362 1441
1363 sk->sk_write_space(sk); 1442 sk->sk_write_space(sk);
1364} 1443}
@@ -1531,7 +1610,7 @@ static int accept(struct socket *sock, struct socket *new_sock, int flags)
1531 1610
1532 buf = skb_peek(&sk->sk_receive_queue); 1611 buf = skb_peek(&sk->sk_receive_queue);
1533 1612
1534 res = tipc_create(sock_net(sock->sk), new_sock, 0, 0); 1613 res = tipc_sk_create(sock_net(sock->sk), new_sock, 0, 1);
1535 if (res) 1614 if (res)
1536 goto exit; 1615 goto exit;
1537 1616
@@ -1657,8 +1736,8 @@ restart:
1657 * 1736 *
1658 * Returns 0 on success, errno otherwise 1737 * Returns 0 on success, errno otherwise
1659 */ 1738 */
1660static int setsockopt(struct socket *sock, 1739static int setsockopt(struct socket *sock, int lvl, int opt, char __user *ov,
1661 int lvl, int opt, char __user *ov, unsigned int ol) 1740 unsigned int ol)
1662{ 1741{
1663 struct sock *sk = sock->sk; 1742 struct sock *sk = sock->sk;
1664 struct tipc_port *tport = tipc_sk_port(sk); 1743 struct tipc_port *tport = tipc_sk_port(sk);
@@ -1716,8 +1795,8 @@ static int setsockopt(struct socket *sock,
1716 * 1795 *
1717 * Returns 0 on success, errno otherwise 1796 * Returns 0 on success, errno otherwise
1718 */ 1797 */
1719static int getsockopt(struct socket *sock, 1798static int getsockopt(struct socket *sock, int lvl, int opt, char __user *ov,
1720 int lvl, int opt, char __user *ov, int __user *ol) 1799 int __user *ol)
1721{ 1800{
1722 struct sock *sk = sock->sk; 1801 struct sock *sk = sock->sk;
1723 struct tipc_port *tport = tipc_sk_port(sk); 1802 struct tipc_port *tport = tipc_sk_port(sk);
@@ -1841,13 +1920,20 @@ static const struct proto_ops stream_ops = {
1841static const struct net_proto_family tipc_family_ops = { 1920static const struct net_proto_family tipc_family_ops = {
1842 .owner = THIS_MODULE, 1921 .owner = THIS_MODULE,
1843 .family = AF_TIPC, 1922 .family = AF_TIPC,
1844 .create = tipc_create 1923 .create = tipc_sk_create
1845}; 1924};
1846 1925
1847static struct proto tipc_proto = { 1926static struct proto tipc_proto = {
1848 .name = "TIPC", 1927 .name = "TIPC",
1849 .owner = THIS_MODULE, 1928 .owner = THIS_MODULE,
1850 .obj_size = sizeof(struct tipc_sock) 1929 .obj_size = sizeof(struct tipc_sock),
1930 .sysctl_rmem = sysctl_tipc_rmem
1931};
1932
1933static struct proto tipc_proto_kern = {
1934 .name = "TIPC",
1935 .obj_size = sizeof(struct tipc_sock),
1936 .sysctl_rmem = sysctl_tipc_rmem
1851}; 1937};
1852 1938
1853/** 1939/**
diff --git a/net/tipc/subscr.c b/net/tipc/subscr.c
index 6b42d47029af..d38bb45d82e9 100644
--- a/net/tipc/subscr.c
+++ b/net/tipc/subscr.c
@@ -2,7 +2,7 @@
2 * net/tipc/subscr.c: TIPC network topology service 2 * net/tipc/subscr.c: TIPC network topology service
3 * 3 *
4 * Copyright (c) 2000-2006, Ericsson AB 4 * Copyright (c) 2000-2006, Ericsson AB
5 * Copyright (c) 2005-2007, 2010-2011, Wind River Systems 5 * Copyright (c) 2005-2007, 2010-2013, Wind River Systems
6 * All rights reserved. 6 * All rights reserved.
7 * 7 *
8 * Redistribution and use in source and binary forms, with or without 8 * Redistribution and use in source and binary forms, with or without
@@ -41,33 +41,42 @@
41 41
42/** 42/**
43 * struct tipc_subscriber - TIPC network topology subscriber 43 * struct tipc_subscriber - TIPC network topology subscriber
44 * @port_ref: object reference to server port connecting to subscriber 44 * @conid: connection identifier to server connecting to subscriber
45 * @lock: pointer to spinlock controlling access to subscriber's server port 45 * @lock: control access to subscriber
46 * @subscriber_list: adjacent subscribers in top. server's list of subscribers
47 * @subscription_list: list of subscription objects for this subscriber 46 * @subscription_list: list of subscription objects for this subscriber
48 */ 47 */
49struct tipc_subscriber { 48struct tipc_subscriber {
50 u32 port_ref; 49 int conid;
51 spinlock_t *lock; 50 spinlock_t lock;
52 struct list_head subscriber_list;
53 struct list_head subscription_list; 51 struct list_head subscription_list;
54}; 52};
55 53
56/** 54static void subscr_conn_msg_event(int conid, struct sockaddr_tipc *addr,
57 * struct top_srv - TIPC network topology subscription service 55 void *usr_data, void *buf, size_t len);
58 * @setup_port: reference to TIPC port that handles subscription requests 56static void *subscr_named_msg_event(int conid);
59 * @subscription_count: number of active subscriptions (not subscribers!) 57static void subscr_conn_shutdown_event(int conid, void *usr_data);
60 * @subscriber_list: list of ports subscribing to service 58
61 * @lock: spinlock govering access to subscriber list 59static atomic_t subscription_count = ATOMIC_INIT(0);
62 */ 60
63struct top_srv { 61static struct sockaddr_tipc topsrv_addr __read_mostly = {
64 u32 setup_port; 62 .family = AF_TIPC,
65 atomic_t subscription_count; 63 .addrtype = TIPC_ADDR_NAMESEQ,
66 struct list_head subscriber_list; 64 .addr.nameseq.type = TIPC_TOP_SRV,
67 spinlock_t lock; 65 .addr.nameseq.lower = TIPC_TOP_SRV,
66 .addr.nameseq.upper = TIPC_TOP_SRV,
67 .scope = TIPC_NODE_SCOPE
68}; 68};
69 69
70static struct top_srv topsrv; 70static struct tipc_server topsrv __read_mostly = {
71 .saddr = &topsrv_addr,
72 .imp = TIPC_CRITICAL_IMPORTANCE,
73 .type = SOCK_SEQPACKET,
74 .max_rcvbuf_size = sizeof(struct tipc_subscr),
75 .name = "topology_server",
76 .tipc_conn_recvmsg = subscr_conn_msg_event,
77 .tipc_conn_new = subscr_named_msg_event,
78 .tipc_conn_shutdown = subscr_conn_shutdown_event,
79};
71 80
72/** 81/**
73 * htohl - convert value to endianness used by destination 82 * htohl - convert value to endianness used by destination
@@ -81,20 +90,13 @@ static u32 htohl(u32 in, int swap)
81 return swap ? swab32(in) : in; 90 return swap ? swab32(in) : in;
82} 91}
83 92
84/** 93static void subscr_send_event(struct tipc_subscription *sub, u32 found_lower,
85 * subscr_send_event - send a message containing a tipc_event to the subscriber 94 u32 found_upper, u32 event, u32 port_ref,
86 *
87 * Note: Must not hold subscriber's server port lock, since tipc_send() will
88 * try to take the lock if the message is rejected and returned!
89 */
90static void subscr_send_event(struct tipc_subscription *sub,
91 u32 found_lower,
92 u32 found_upper,
93 u32 event,
94 u32 port_ref,
95 u32 node) 95 u32 node)
96{ 96{
97 struct iovec msg_sect; 97 struct tipc_subscriber *subscriber = sub->subscriber;
98 struct kvec msg_sect;
99 int ret;
98 100
99 msg_sect.iov_base = (void *)&sub->evt; 101 msg_sect.iov_base = (void *)&sub->evt;
100 msg_sect.iov_len = sizeof(struct tipc_event); 102 msg_sect.iov_len = sizeof(struct tipc_event);
@@ -104,7 +106,10 @@ static void subscr_send_event(struct tipc_subscription *sub,
104 sub->evt.found_upper = htohl(found_upper, sub->swap); 106 sub->evt.found_upper = htohl(found_upper, sub->swap);
105 sub->evt.port.ref = htohl(port_ref, sub->swap); 107 sub->evt.port.ref = htohl(port_ref, sub->swap);
106 sub->evt.port.node = htohl(node, sub->swap); 108 sub->evt.port.node = htohl(node, sub->swap);
107 tipc_send(sub->server_ref, 1, &msg_sect, msg_sect.iov_len); 109 ret = tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL,
110 msg_sect.iov_base, msg_sect.iov_len);
111 if (ret < 0)
112 pr_err("Sending subscription event failed, no memory\n");
108} 113}
109 114
110/** 115/**
@@ -112,10 +117,8 @@ static void subscr_send_event(struct tipc_subscription *sub,
112 * 117 *
113 * Returns 1 if there is overlap, otherwise 0. 118 * Returns 1 if there is overlap, otherwise 0.
114 */ 119 */
115int tipc_subscr_overlap(struct tipc_subscription *sub, 120int tipc_subscr_overlap(struct tipc_subscription *sub, u32 found_lower,
116 u32 found_lower,
117 u32 found_upper) 121 u32 found_upper)
118
119{ 122{
120 if (found_lower < sub->seq.lower) 123 if (found_lower < sub->seq.lower)
121 found_lower = sub->seq.lower; 124 found_lower = sub->seq.lower;
@@ -131,13 +134,9 @@ int tipc_subscr_overlap(struct tipc_subscription *sub,
131 * 134 *
132 * Protected by nameseq.lock in name_table.c 135 * Protected by nameseq.lock in name_table.c
133 */ 136 */
134void tipc_subscr_report_overlap(struct tipc_subscription *sub, 137void tipc_subscr_report_overlap(struct tipc_subscription *sub, u32 found_lower,
135 u32 found_lower, 138 u32 found_upper, u32 event, u32 port_ref,
136 u32 found_upper, 139 u32 node, int must)
137 u32 event,
138 u32 port_ref,
139 u32 node,
140 int must)
141{ 140{
142 if (!tipc_subscr_overlap(sub, found_lower, found_upper)) 141 if (!tipc_subscr_overlap(sub, found_lower, found_upper))
143 return; 142 return;
@@ -147,21 +146,24 @@ void tipc_subscr_report_overlap(struct tipc_subscription *sub,
147 subscr_send_event(sub, found_lower, found_upper, event, port_ref, node); 146 subscr_send_event(sub, found_lower, found_upper, event, port_ref, node);
148} 147}
149 148
150/**
151 * subscr_timeout - subscription timeout has occurred
152 */
153static void subscr_timeout(struct tipc_subscription *sub) 149static void subscr_timeout(struct tipc_subscription *sub)
154{ 150{
155 struct tipc_port *server_port; 151 struct tipc_subscriber *subscriber = sub->subscriber;
152
153 /* The spin lock per subscriber is used to protect its members */
154 spin_lock_bh(&subscriber->lock);
156 155
157 /* Validate server port reference (in case subscriber is terminating) */ 156 /* Validate if the connection related to the subscriber is
158 server_port = tipc_port_lock(sub->server_ref); 157 * closed (in case subscriber is terminating)
159 if (server_port == NULL) 158 */
159 if (subscriber->conid == 0) {
160 spin_unlock_bh(&subscriber->lock);
160 return; 161 return;
162 }
161 163
162 /* Validate timeout (in case subscription is being cancelled) */ 164 /* Validate timeout (in case subscription is being cancelled) */
163 if (sub->timeout == TIPC_WAIT_FOREVER) { 165 if (sub->timeout == TIPC_WAIT_FOREVER) {
164 tipc_port_unlock(server_port); 166 spin_unlock_bh(&subscriber->lock);
165 return; 167 return;
166 } 168 }
167 169
@@ -171,8 +173,7 @@ static void subscr_timeout(struct tipc_subscription *sub)
171 /* Unlink subscription from subscriber */ 173 /* Unlink subscription from subscriber */
172 list_del(&sub->subscription_list); 174 list_del(&sub->subscription_list);
173 175
174 /* Release subscriber's server port */ 176 spin_unlock_bh(&subscriber->lock);
175 tipc_port_unlock(server_port);
176 177
177 /* Notify subscriber of timeout */ 178 /* Notify subscriber of timeout */
178 subscr_send_event(sub, sub->evt.s.seq.lower, sub->evt.s.seq.upper, 179 subscr_send_event(sub, sub->evt.s.seq.lower, sub->evt.s.seq.upper,
@@ -181,64 +182,54 @@ static void subscr_timeout(struct tipc_subscription *sub)
181 /* Now destroy subscription */ 182 /* Now destroy subscription */
182 k_term_timer(&sub->timer); 183 k_term_timer(&sub->timer);
183 kfree(sub); 184 kfree(sub);
184 atomic_dec(&topsrv.subscription_count); 185 atomic_dec(&subscription_count);
185} 186}
186 187
187/** 188/**
188 * subscr_del - delete a subscription within a subscription list 189 * subscr_del - delete a subscription within a subscription list
189 * 190 *
190 * Called with subscriber port locked. 191 * Called with subscriber lock held.
191 */ 192 */
192static void subscr_del(struct tipc_subscription *sub) 193static void subscr_del(struct tipc_subscription *sub)
193{ 194{
194 tipc_nametbl_unsubscribe(sub); 195 tipc_nametbl_unsubscribe(sub);
195 list_del(&sub->subscription_list); 196 list_del(&sub->subscription_list);
196 kfree(sub); 197 kfree(sub);
197 atomic_dec(&topsrv.subscription_count); 198 atomic_dec(&subscription_count);
198} 199}
199 200
200/** 201/**
201 * subscr_terminate - terminate communication with a subscriber 202 * subscr_terminate - terminate communication with a subscriber
202 * 203 *
203 * Called with subscriber port locked. Routine must temporarily release lock 204 * Note: Must call it in process context since it might sleep.
204 * to enable subscription timeout routine(s) to finish without deadlocking;
205 * the lock is then reclaimed to allow caller to release it upon return.
206 * (This should work even in the unlikely event some other thread creates
207 * a new object reference in the interim that uses this lock; this routine will
208 * simply wait for it to be released, then claim it.)
209 */ 205 */
210static void subscr_terminate(struct tipc_subscriber *subscriber) 206static void subscr_terminate(struct tipc_subscriber *subscriber)
211{ 207{
212 u32 port_ref; 208 tipc_conn_terminate(&topsrv, subscriber->conid);
209}
210
211static void subscr_release(struct tipc_subscriber *subscriber)
212{
213 struct tipc_subscription *sub; 213 struct tipc_subscription *sub;
214 struct tipc_subscription *sub_temp; 214 struct tipc_subscription *sub_temp;
215 215
216 /* Invalidate subscriber reference */ 216 spin_lock_bh(&subscriber->lock);
217 port_ref = subscriber->port_ref;
218 subscriber->port_ref = 0;
219 spin_unlock_bh(subscriber->lock);
220 217
221 /* Sever connection to subscriber */ 218 /* Invalidate subscriber reference */
222 tipc_shutdown(port_ref); 219 subscriber->conid = 0;
223 tipc_deleteport(port_ref);
224 220
225 /* Destroy any existing subscriptions for subscriber */ 221 /* Destroy any existing subscriptions for subscriber */
226 list_for_each_entry_safe(sub, sub_temp, &subscriber->subscription_list, 222 list_for_each_entry_safe(sub, sub_temp, &subscriber->subscription_list,
227 subscription_list) { 223 subscription_list) {
228 if (sub->timeout != TIPC_WAIT_FOREVER) { 224 if (sub->timeout != TIPC_WAIT_FOREVER) {
225 spin_unlock_bh(&subscriber->lock);
229 k_cancel_timer(&sub->timer); 226 k_cancel_timer(&sub->timer);
230 k_term_timer(&sub->timer); 227 k_term_timer(&sub->timer);
228 spin_lock_bh(&subscriber->lock);
231 } 229 }
232 subscr_del(sub); 230 subscr_del(sub);
233 } 231 }
234 232 spin_unlock_bh(&subscriber->lock);
235 /* Remove subscriber from topology server's subscriber list */
236 spin_lock_bh(&topsrv.lock);
237 list_del(&subscriber->subscriber_list);
238 spin_unlock_bh(&topsrv.lock);
239
240 /* Reclaim subscriber lock */
241 spin_lock_bh(subscriber->lock);
242 233
243 /* Now destroy subscriber */ 234 /* Now destroy subscriber */
244 kfree(subscriber); 235 kfree(subscriber);
@@ -247,7 +238,7 @@ static void subscr_terminate(struct tipc_subscriber *subscriber)
247/** 238/**
248 * subscr_cancel - handle subscription cancellation request 239 * subscr_cancel - handle subscription cancellation request
249 * 240 *
250 * Called with subscriber port locked. Routine must temporarily release lock
241 * Called with subscriber lock held. Routine must temporarily release lock
251 * to enable the subscription timeout routine to finish without deadlocking; 242 * to enable the subscription timeout routine to finish without deadlocking;
252 * the lock is then reclaimed to allow caller to release it upon return. 243 * the lock is then reclaimed to allow caller to release it upon return.
253 * 244 *
@@ -274,10 +265,10 @@ static void subscr_cancel(struct tipc_subscr *s,
274 /* Cancel subscription timer (if used), then delete subscription */ 265 /* Cancel subscription timer (if used), then delete subscription */
275 if (sub->timeout != TIPC_WAIT_FOREVER) { 266 if (sub->timeout != TIPC_WAIT_FOREVER) {
276 sub->timeout = TIPC_WAIT_FOREVER; 267 sub->timeout = TIPC_WAIT_FOREVER;
277 spin_unlock_bh(subscriber->lock);
268 spin_unlock_bh(&subscriber->lock);
278 k_cancel_timer(&sub->timer); 269 k_cancel_timer(&sub->timer);
279 k_term_timer(&sub->timer); 270 k_term_timer(&sub->timer);
280 spin_lock_bh(subscriber->lock);
271 spin_lock_bh(&subscriber->lock);
281 } 272 }
282 subscr_del(sub); 273 subscr_del(sub);
283} 274}
@@ -285,7 +276,7 @@ static void subscr_cancel(struct tipc_subscr *s,
285/** 276/**
286 * subscr_subscribe - create subscription for subscriber 277 * subscr_subscribe - create subscription for subscriber
287 * 278 *
288 * Called with subscriber port locked.
279 * Called with subscriber lock held.
289 */ 280 */
290static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, 281static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s,
291 struct tipc_subscriber *subscriber) 282 struct tipc_subscriber *subscriber)
@@ -304,7 +295,7 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s,
304 } 295 }
305 296
306 /* Refuse subscription if global limit exceeded */ 297 /* Refuse subscription if global limit exceeded */
307 if (atomic_read(&topsrv.subscription_count) >= TIPC_MAX_SUBSCRIPTIONS) {
298 if (atomic_read(&subscription_count) >= TIPC_MAX_SUBSCRIPTIONS) {
308 pr_warn("Subscription rejected, limit reached (%u)\n", 299 pr_warn("Subscription rejected, limit reached (%u)\n",
309 TIPC_MAX_SUBSCRIPTIONS); 300 TIPC_MAX_SUBSCRIPTIONS);
310 subscr_terminate(subscriber); 301 subscr_terminate(subscriber);
@@ -335,10 +326,10 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s,
335 } 326 }
336 INIT_LIST_HEAD(&sub->nameseq_list); 327 INIT_LIST_HEAD(&sub->nameseq_list);
337 list_add(&sub->subscription_list, &subscriber->subscription_list); 328 list_add(&sub->subscription_list, &subscriber->subscription_list);
338 sub->server_ref = subscriber->port_ref;
329 sub->subscriber = subscriber;
339 sub->swap = swap; 330 sub->swap = swap;
340 memcpy(&sub->evt.s, s, sizeof(struct tipc_subscr)); 331 memcpy(&sub->evt.s, s, sizeof(struct tipc_subscr));
341 atomic_inc(&topsrv.subscription_count);
332 atomic_inc(&subscription_count);
342 if (sub->timeout != TIPC_WAIT_FOREVER) { 333 if (sub->timeout != TIPC_WAIT_FOREVER) {
343 k_init_timer(&sub->timer, 334 k_init_timer(&sub->timer,
344 (Handler)subscr_timeout, (unsigned long)sub); 335 (Handler)subscr_timeout, (unsigned long)sub);
@@ -348,196 +339,51 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s,
348 return sub; 339 return sub;
349} 340}
350 341
351/**
342/* Handle one termination request for the subscriber */
352 * subscr_conn_shutdown_event - handle termination request from subscriber 343static void subscr_conn_shutdown_event(int conid, void *usr_data)
353 *
354 * Called with subscriber's server port unlocked.
355 */
356static void subscr_conn_shutdown_event(void *usr_handle,
357 u32 port_ref,
358 struct sk_buff **buf,
359 unsigned char const *data,
360 unsigned int size,
361 int reason)
362{ 344{
363 struct tipc_subscriber *subscriber = usr_handle;
345 subscr_release((struct tipc_subscriber *)usr_data);
364 spinlock_t *subscriber_lock;
365
366 if (tipc_port_lock(port_ref) == NULL)
367 return;
368
369 subscriber_lock = subscriber->lock;
370 subscr_terminate(subscriber);
371 spin_unlock_bh(subscriber_lock);
372} 346}
373 347
374/**
375 * subscr_conn_msg_event - handle new subscription request from subscriber
376 *
348/* Handle one request to create a new subscription for the subscriber */
349static void subscr_conn_msg_event(int conid, struct sockaddr_tipc *addr,
350 void *usr_data, void *buf, size_t len)
377 * Called with subscriber's server port unlocked.
378 */
379static void subscr_conn_msg_event(void *usr_handle,
380 u32 port_ref,
381 struct sk_buff **buf,
382 const unchar *data,
383 u32 size)
384{ 351{
385 struct tipc_subscriber *subscriber = usr_handle;
352 struct tipc_subscriber *subscriber = usr_data;
386 spinlock_t *subscriber_lock;
387 struct tipc_subscription *sub; 353 struct tipc_subscription *sub;
388 354
389 /*
390 * Lock subscriber's server port (& make a local copy of lock pointer,
391 * in case subscriber is deleted while processing subscription request)
392 */
393 if (tipc_port_lock(port_ref) == NULL)
394 return;
395
396 subscriber_lock = subscriber->lock;
397
398 if (size != sizeof(struct tipc_subscr)) {
399 subscr_terminate(subscriber);
400 spin_unlock_bh(subscriber_lock);
401 } else {
402 sub = subscr_subscribe((struct tipc_subscr *)data, subscriber);
403 spin_unlock_bh(subscriber_lock);
404 if (sub != NULL) {
405
406 /*
407 * We must release the server port lock before adding a
408 * subscription to the name table since TIPC needs to be
409 * able to (re)acquire the port lock if an event message
410 * issued by the subscription process is rejected and
411 * returned. The subscription cannot be deleted while
412 * it is being added to the name table because:
413 * a) the single-threading of the native API port code
414 * ensures the subscription cannot be cancelled and
415 * the subscriber connection cannot be broken, and
416 * b) the name table lock ensures the subscription
417 * timeout code cannot delete the subscription,
418 * so the subscription object is still protected.
419 */
420 tipc_nametbl_subscribe(sub);
421 }
422 }
423}
355 spin_lock_bh(&subscriber->lock);
356 sub = subscr_subscribe((struct tipc_subscr *)buf, subscriber);
357 if (sub)
358 tipc_nametbl_subscribe(sub);
359 spin_unlock_bh(&subscriber->lock);
360}
424 361
425/** 362
426 * subscr_named_msg_event - handle request to establish a new subscriber
427 */
363/* Handle one request to establish a new subscriber */
364static void *subscr_named_msg_event(int conid)
428static void subscr_named_msg_event(void *usr_handle,
429 u32 port_ref,
430 struct sk_buff **buf,
431 const unchar *data,
432 u32 size,
433 u32 importance,
434 struct tipc_portid const *orig,
435 struct tipc_name_seq const *dest)
436{ 365{
437 struct tipc_subscriber *subscriber; 366 struct tipc_subscriber *subscriber;
438 u32 server_port_ref;
439 367
440 /* Create subscriber object */ 368 /* Create subscriber object */
441 subscriber = kzalloc(sizeof(struct tipc_subscriber), GFP_ATOMIC); 369 subscriber = kzalloc(sizeof(struct tipc_subscriber), GFP_ATOMIC);
442 if (subscriber == NULL) { 370 if (subscriber == NULL) {
443 pr_warn("Subscriber rejected, no memory\n"); 371 pr_warn("Subscriber rejected, no memory\n");
444 return;
372 return NULL;
445 } 373 }
446 INIT_LIST_HEAD(&subscriber->subscription_list); 374 INIT_LIST_HEAD(&subscriber->subscription_list);
447 INIT_LIST_HEAD(&subscriber->subscriber_list);
448
375 subscriber->conid = conid;
376 spin_lock_init(&subscriber->lock);
449 /* Create server port & establish connection to subscriber */
450 tipc_createport(subscriber,
451 importance,
452 NULL,
453 NULL,
454 subscr_conn_shutdown_event,
455 NULL,
456 NULL,
457 subscr_conn_msg_event,
458 NULL,
459 &subscriber->port_ref);
460 if (subscriber->port_ref == 0) {
461 pr_warn("Subscriber rejected, unable to create port\n");
462 kfree(subscriber);
463 return;
464 }
465 tipc_connect(subscriber->port_ref, orig);
466
467 /* Lock server port (& save lock address for future use) */
468 subscriber->lock = tipc_port_lock(subscriber->port_ref)->lock;
469
470 /* Add subscriber to topology server's subscriber list */
471 spin_lock_bh(&topsrv.lock);
472 list_add(&subscriber->subscriber_list, &topsrv.subscriber_list);
473 spin_unlock_bh(&topsrv.lock);
474
475 /* Unlock server port */
476 server_port_ref = subscriber->port_ref;
477 spin_unlock_bh(subscriber->lock);
478
479 /* Send an ACK- to complete connection handshaking */
480 tipc_send(server_port_ref, 0, NULL, 0);
481 377
482 /* Handle optional subscription request */
378 return (void *)subscriber;
483 if (size != 0) {
484 subscr_conn_msg_event(subscriber, server_port_ref,
485 buf, data, size);
486 }
487} 379}
488 380
489int tipc_subscr_start(void) 381int tipc_subscr_start(void)
490{ 382{
491 struct tipc_name_seq seq = {TIPC_TOP_SRV, TIPC_TOP_SRV, TIPC_TOP_SRV};
383 return tipc_server_start(&topsrv);
492 int res;
493
494 spin_lock_init(&topsrv.lock);
495 INIT_LIST_HEAD(&topsrv.subscriber_list);
496
497 res = tipc_createport(NULL,
498 TIPC_CRITICAL_IMPORTANCE,
499 NULL,
500 NULL,
501 NULL,
502 NULL,
503 subscr_named_msg_event,
504 NULL,
505 NULL,
506 &topsrv.setup_port);
507 if (res)
508 goto failed;
509
510 res = tipc_publish(topsrv.setup_port, TIPC_NODE_SCOPE, &seq);
511 if (res) {
512 tipc_deleteport(topsrv.setup_port);
513 topsrv.setup_port = 0;
514 goto failed;
515 }
516
517 return 0;
518
519failed:
520 pr_err("Failed to create subscription service\n");
521 return res;
522} 384}
523 385
524void tipc_subscr_stop(void) 386void tipc_subscr_stop(void)
525{ 387{
526 struct tipc_subscriber *subscriber;
388 tipc_server_stop(&topsrv);
527 struct tipc_subscriber *subscriber_temp;
528 spinlock_t *subscriber_lock;
529
530 if (topsrv.setup_port) {
531 tipc_deleteport(topsrv.setup_port);
532 topsrv.setup_port = 0;
533
534 list_for_each_entry_safe(subscriber, subscriber_temp,
535 &topsrv.subscriber_list,
536 subscriber_list) {
537 subscriber_lock = subscriber->lock;
538 spin_lock_bh(subscriber_lock);
539 subscr_terminate(subscriber);
540 spin_unlock_bh(subscriber_lock);
541 }
542 }
543} 389}
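The reworked TIPC release path above deliberately drops the subscriber lock around k_cancel_timer()/k_term_timer(): the subscription timeout handler takes the same lock, and a synchronous cancel may have to wait for a running handler, so cancelling with the lock held could deadlock. A minimal userspace sketch of that unlock/cancel/relock discipline (hypothetical names, a pthread mutex standing in for the kernel spinlock):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t subscriber_lock = PTHREAD_MUTEX_INITIALIZER;
static int timer_cancelled;

/* Stand-in for k_cancel_timer()/k_term_timer(): may block until a
 * concurrently running timeout handler (which itself takes
 * subscriber_lock) has finished, so it must not be called with the
 * lock held. */
static void cancel_timer_sync(void)
{
	timer_cancelled = 1;
}

/* Sketch of the subscr_release() pattern: drop the lock around the
 * blocking cancel, then retake it before deleting the entry. */
static int release_subscription(int has_timer)
{
	pthread_mutex_lock(&subscriber_lock);
	if (has_timer) {
		pthread_mutex_unlock(&subscriber_lock);
		cancel_timer_sync();
		pthread_mutex_lock(&subscriber_lock);
	}
	/* subscr_del() would run here, still under the lock */
	pthread_mutex_unlock(&subscriber_lock);
	return timer_cancelled;
}
```

The same drop/retake shape also appears in subscr_cancel() above, where the timeout is first forced to TIPC_WAIT_FOREVER so the timer cannot rearm while the lock is released.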
diff --git a/net/tipc/subscr.h b/net/tipc/subscr.h
index 218d2e07f0cc..393e417bee3f 100644
--- a/net/tipc/subscr.h
+++ b/net/tipc/subscr.h
@@ -2,7 +2,7 @@
2 * net/tipc/subscr.h: Include file for TIPC network topology service 2 * net/tipc/subscr.h: Include file for TIPC network topology service
3 * 3 *
4 * Copyright (c) 2003-2006, Ericsson AB 4 * Copyright (c) 2003-2006, Ericsson AB
5 * Copyright (c) 2005-2007, Wind River Systems
5 * Copyright (c) 2005-2007, 2012-2013, Wind River Systems
6 * All rights reserved. 6 * All rights reserved.
7 * 7 *
8 * Redistribution and use in source and binary forms, with or without 8 * Redistribution and use in source and binary forms, with or without
@@ -37,10 +37,14 @@
37#ifndef _TIPC_SUBSCR_H 37#ifndef _TIPC_SUBSCR_H
38#define _TIPC_SUBSCR_H 38#define _TIPC_SUBSCR_H
39 39
40#include "server.h"
41
40struct tipc_subscription; 42struct tipc_subscription;
43struct tipc_subscriber;
41 44
42/** 45/**
43 * struct tipc_subscription - TIPC network topology subscription object 46 * struct tipc_subscription - TIPC network topology subscription object
47 * @subscriber: pointer to its subscriber
44 * @seq: name sequence associated with subscription 48 * @seq: name sequence associated with subscription
45 * @timeout: duration of subscription (in ms) 49 * @timeout: duration of subscription (in ms)
46 * @filter: event filtering to be done for subscription 50 * @filter: event filtering to be done for subscription
@@ -52,28 +56,23 @@ struct tipc_subscription;
52 * @evt: template for events generated by subscription 56 * @evt: template for events generated by subscription
53 */ 57 */
54struct tipc_subscription { 58struct tipc_subscription {
59 struct tipc_subscriber *subscriber;
55 struct tipc_name_seq seq; 60 struct tipc_name_seq seq;
56 u32 timeout; 61 u32 timeout;
57 u32 filter; 62 u32 filter;
58 struct timer_list timer; 63 struct timer_list timer;
59 struct list_head nameseq_list; 64 struct list_head nameseq_list;
60 struct list_head subscription_list; 65 struct list_head subscription_list;
61 u32 server_ref;
62 int swap; 66 int swap;
63 struct tipc_event evt; 67 struct tipc_event evt;
64}; 68};
65 69
66int tipc_subscr_overlap(struct tipc_subscription *sub,
67 u32 found_lower,
68 u32 found_upper);
70int tipc_subscr_overlap(struct tipc_subscription *sub, u32 found_lower,
71 u32 found_upper);
69 72
70void tipc_subscr_report_overlap(struct tipc_subscription *sub,
71 u32 found_lower,
72 u32 found_upper,
73 u32 event,
74 u32 port_ref,
75 u32 node,
76 int must_report);
73void tipc_subscr_report_overlap(struct tipc_subscription *sub, u32 found_lower,
74 u32 found_upper, u32 event, u32 port_ref,
75 u32 node, int must);
77 76
78int tipc_subscr_start(void); 77int tipc_subscr_start(void);
79 78
diff --git a/net/tipc/sysctl.c b/net/tipc/sysctl.c
new file mode 100644
index 000000000000..f3fef93325a8
--- /dev/null
+++ b/net/tipc/sysctl.c
@@ -0,0 +1,64 @@
1/*
2 * net/tipc/sysctl.c: sysctl interface to TIPC subsystem
3 *
4 * Copyright (c) 2013, Wind River Systems
5 * All rights reserved.
6 *
7 * Redistribution and use in source and binary forms, with or without
8 * modification, are permitted provided that the following conditions are met:
9 *
10 * 1. Redistributions of source code must retain the above copyright
11 * notice, this list of conditions and the following disclaimer.
12 * 2. Redistributions in binary form must reproduce the above copyright
13 * notice, this list of conditions and the following disclaimer in the
14 * documentation and/or other materials provided with the distribution.
15 * 3. Neither the names of the copyright holders nor the names of its
16 * contributors may be used to endorse or promote products derived from
17 * this software without specific prior written permission.
18 *
19 * Alternatively, this software may be distributed under the terms of the
20 * GNU General Public License ("GPL") version 2 as published by the Free
21 * Software Foundation.
22 *
23 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
24 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
25 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
26 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
27 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
28 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
29 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
30 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
31 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
32 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
33 * POSSIBILITY OF SUCH DAMAGE.
34 */
35
36#include "core.h"
37
38#include <linux/sysctl.h>
39
40static struct ctl_table_header *tipc_ctl_hdr;
41
42static struct ctl_table tipc_table[] = {
43 {
44 .procname = "tipc_rmem",
45 .data = &sysctl_tipc_rmem,
46 .maxlen = sizeof(sysctl_tipc_rmem),
47 .mode = 0644,
48 .proc_handler = proc_dointvec,
49 },
50 {}
51};
52
53int tipc_register_sysctl(void)
54{
55 tipc_ctl_hdr = register_net_sysctl(&init_net, "net/tipc", tipc_table);
56 if (tipc_ctl_hdr == NULL)
57 return -ENOMEM;
58 return 0;
59}
60
61void tipc_unregister_sysctl(void)
62{
63 unregister_net_sysctl_table(tipc_ctl_hdr);
64}
diff --git a/net/unix/sysctl_net_unix.c b/net/unix/sysctl_net_unix.c
index 8800604c93f4..b3d515021b74 100644
--- a/net/unix/sysctl_net_unix.c
+++ b/net/unix/sysctl_net_unix.c
@@ -15,7 +15,7 @@
15 15
16#include <net/af_unix.h> 16#include <net/af_unix.h>
17 17
18static ctl_table unix_table[] = {
18static struct ctl_table unix_table[] = {
19 { 19 {
20 .procname = "max_dgram_qlen", 20 .procname = "max_dgram_qlen",
21 .data = &init_net.unx.sysctl_max_dgram_qlen, 21 .data = &init_net.unx.sysctl_max_dgram_qlen,
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 3f77f42a3b58..593071dabd1c 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -144,18 +144,18 @@ EXPORT_SYMBOL_GPL(vm_sockets_get_local_cid);
144 * VSOCK_HASH_SIZE + 1 so that vsock_bind_table[0] through 144 * VSOCK_HASH_SIZE + 1 so that vsock_bind_table[0] through
145 * vsock_bind_table[VSOCK_HASH_SIZE - 1] are for bound sockets and 145 * vsock_bind_table[VSOCK_HASH_SIZE - 1] are for bound sockets and
146 * vsock_bind_table[VSOCK_HASH_SIZE] is for unbound sockets. The hash function 146 * vsock_bind_table[VSOCK_HASH_SIZE] is for unbound sockets. The hash function
147 * mods with VSOCK_HASH_SIZE - 1 to ensure this.
147 * mods with VSOCK_HASH_SIZE to ensure this.
148 */ 148 */
149#define VSOCK_HASH_SIZE 251 149#define VSOCK_HASH_SIZE 251
150#define MAX_PORT_RETRIES 24 150#define MAX_PORT_RETRIES 24
151 151
152#define VSOCK_HASH(addr) ((addr)->svm_port % (VSOCK_HASH_SIZE - 1))
152#define VSOCK_HASH(addr) ((addr)->svm_port % VSOCK_HASH_SIZE)
153#define vsock_bound_sockets(addr) (&vsock_bind_table[VSOCK_HASH(addr)]) 153#define vsock_bound_sockets(addr) (&vsock_bind_table[VSOCK_HASH(addr)])
154#define vsock_unbound_sockets (&vsock_bind_table[VSOCK_HASH_SIZE]) 154#define vsock_unbound_sockets (&vsock_bind_table[VSOCK_HASH_SIZE])
155 155
156/* XXX This can probably be implemented in a better way. */ 156/* XXX This can probably be implemented in a better way. */
157#define VSOCK_CONN_HASH(src, dst) \ 157#define VSOCK_CONN_HASH(src, dst) \
158 (((src)->svm_cid ^ (dst)->svm_port) % (VSOCK_HASH_SIZE - 1))
158 (((src)->svm_cid ^ (dst)->svm_port) % VSOCK_HASH_SIZE)
159#define vsock_connected_sockets(src, dst) \ 159#define vsock_connected_sockets(src, dst) \
160 (&vsock_connected_table[VSOCK_CONN_HASH(src, dst)]) 160 (&vsock_connected_table[VSOCK_CONN_HASH(src, dst)])
161#define vsock_connected_sockets_vsk(vsk) \ 161#define vsock_connected_sockets_vsk(vsk) \
@@ -165,6 +165,18 @@ static struct list_head vsock_bind_table[VSOCK_HASH_SIZE + 1];
165static struct list_head vsock_connected_table[VSOCK_HASH_SIZE]; 165static struct list_head vsock_connected_table[VSOCK_HASH_SIZE];
166static DEFINE_SPINLOCK(vsock_table_lock); 166static DEFINE_SPINLOCK(vsock_table_lock);
167 167
168/* Autobind this socket to the local address if necessary. */
169static int vsock_auto_bind(struct vsock_sock *vsk)
170{
171 struct sock *sk = sk_vsock(vsk);
172 struct sockaddr_vm local_addr;
173
174 if (vsock_addr_bound(&vsk->local_addr))
175 return 0;
176 vsock_addr_init(&local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
177 return __vsock_bind(sk, &local_addr);
178}
179
168static void vsock_init_tables(void) 180static void vsock_init_tables(void)
169{ 181{
170 int i; 182 int i;
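The VSOCK_HASH fix above matters because the bind table reserves index VSOCK_HASH_SIZE for unbound sockets: hashing modulo VSOCK_HASH_SIZE makes every bound bucket 0..250 reachable, whereas the old modulo VSOCK_HASH_SIZE - 1 could never produce bucket 250 and contradicted the comment. A standalone sketch of the two hash functions (port-only, omitting the struct sockaddr_vm plumbing):

```c
#include <assert.h>

#define VSOCK_HASH_SIZE 251

/* Fixed hash: results span 0..VSOCK_HASH_SIZE - 1, so all 251 bound
 * buckets are usable and slot VSOCK_HASH_SIZE stays reserved for
 * unbound sockets. */
static unsigned int vsock_hash(unsigned int port)
{
	return port % VSOCK_HASH_SIZE;
}

/* Old (buggy) hash: modulo VSOCK_HASH_SIZE - 1 left bucket 250
 * permanently empty. */
static unsigned int vsock_hash_old(unsigned int port)
{
	return port % (VSOCK_HASH_SIZE - 1);
}
```

For example, port 250 lands in bucket 250 with the fixed hash, but wrapped to bucket 0 with the old one.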
@@ -956,15 +968,10 @@ static int vsock_dgram_sendmsg(struct kiocb *kiocb, struct socket *sock,
956 968
957 lock_sock(sk); 969 lock_sock(sk);
958 970
959 if (!vsock_addr_bound(&vsk->local_addr)) {
960 struct sockaddr_vm local_addr;
961
962 vsock_addr_init(&local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
963 err = __vsock_bind(sk, &local_addr);
964 if (err != 0)
965 goto out;
966
967 }
971 err = vsock_auto_bind(vsk);
972 if (err)
973 goto out;
974
968 975
969 /* If the provided message contains an address, use that. Otherwise 976 /* If the provided message contains an address, use that. Otherwise
970 * fall back on the socket's remote handle (if it has been connected). 977 * fall back on the socket's remote handle (if it has been connected).
@@ -1038,15 +1045,9 @@ static int vsock_dgram_connect(struct socket *sock,
1038 1045
1039 lock_sock(sk); 1046 lock_sock(sk);
1040 1047
1041 if (!vsock_addr_bound(&vsk->local_addr)) {
1042 struct sockaddr_vm local_addr;
1043
1044 vsock_addr_init(&local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
1045 err = __vsock_bind(sk, &local_addr);
1046 if (err != 0)
1047 goto out;
1048
1049 }
1048 err = vsock_auto_bind(vsk);
1049 if (err)
1050 goto out;
1050 1051
1051 if (!transport->dgram_allow(remote_addr->svm_cid, 1052 if (!transport->dgram_allow(remote_addr->svm_cid,
1052 remote_addr->svm_port)) { 1053 remote_addr->svm_port)) {
@@ -1163,17 +1164,9 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
1163 memcpy(&vsk->remote_addr, remote_addr, 1164 memcpy(&vsk->remote_addr, remote_addr,
1164 sizeof(vsk->remote_addr)); 1165 sizeof(vsk->remote_addr));
1165 1166
1166 /* Autobind this socket to the local address if necessary. */
1167 if (!vsock_addr_bound(&vsk->local_addr)) {
1168 struct sockaddr_vm local_addr;
1169
1170 vsock_addr_init(&local_addr, VMADDR_CID_ANY,
1171 VMADDR_PORT_ANY);
1172 err = __vsock_bind(sk, &local_addr);
1173 if (err != 0)
1174 goto out;
1175
1176 }
1167 err = vsock_auto_bind(vsk);
1168 if (err)
1169 goto out;
1177 1170
1178 sk->sk_state = SS_CONNECTING; 1171 sk->sk_state = SS_CONNECTING;
1179 1172
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index daff75200e25..ffc11df02af2 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -625,13 +625,14 @@ static int vmci_transport_recv_dgram_cb(void *data, struct vmci_datagram *dg)
625 625
626 /* Attach the packet to the socket's receive queue as an sk_buff. */ 626 /* Attach the packet to the socket's receive queue as an sk_buff. */
627 skb = alloc_skb(size, GFP_ATOMIC); 627 skb = alloc_skb(size, GFP_ATOMIC);
628 if (skb) {
629 /* sk_receive_skb() will do a sock_put(), so hold here. */
630 sock_hold(sk);
631 skb_put(skb, size);
632 memcpy(skb->data, dg, size);
633 sk_receive_skb(sk, skb, 0);
634 }
628 if (!skb)
629 return VMCI_ERROR_NO_MEM;
630
631 /* sk_receive_skb() will do a sock_put(), so hold here. */
632 sock_hold(sk);
633 skb_put(skb, size);
634 memcpy(skb->data, dg, size);
635 sk_receive_skb(sk, skb, 0);
635 636
636 return VMCI_SUCCESS; 637 return VMCI_SUCCESS;
637} 638}
@@ -939,10 +940,9 @@ static void vmci_transport_recv_pkt_work(struct work_struct *work)
939 * reset to prevent that. 940 * reset to prevent that.
940 */ 941 */
941 vmci_transport_send_reset(sk, pkt); 942 vmci_transport_send_reset(sk, pkt);
942 goto out;
943 break;
943 } 944 }
944 945
945out:
946 release_sock(sk); 946 release_sock(sk);
947 kfree(recv_pkt_info); 947 kfree(recv_pkt_info);
948 /* Release reference obtained in the stream callback when we fetched 948 /* Release reference obtained in the stream callback when we fetched
diff --git a/net/wireless/chan.c b/net/wireless/chan.c
index fd556ac05fdb..50f6195c8b70 100644
--- a/net/wireless/chan.c
+++ b/net/wireless/chan.c
@@ -54,6 +54,8 @@ bool cfg80211_chandef_valid(const struct cfg80211_chan_def *chandef)
54 control_freq = chandef->chan->center_freq; 54 control_freq = chandef->chan->center_freq;
55 55
56 switch (chandef->width) { 56 switch (chandef->width) {
57 case NL80211_CHAN_WIDTH_5:
58 case NL80211_CHAN_WIDTH_10:
57 case NL80211_CHAN_WIDTH_20: 59 case NL80211_CHAN_WIDTH_20:
58 case NL80211_CHAN_WIDTH_20_NOHT: 60 case NL80211_CHAN_WIDTH_20_NOHT:
59 if (chandef->center_freq1 != control_freq) 61 if (chandef->center_freq1 != control_freq)
@@ -152,6 +154,12 @@ static int cfg80211_chandef_get_width(const struct cfg80211_chan_def *c)
152 int width; 154 int width;
153 155
154 switch (c->width) { 156 switch (c->width) {
157 case NL80211_CHAN_WIDTH_5:
158 width = 5;
159 break;
160 case NL80211_CHAN_WIDTH_10:
161 width = 10;
162 break;
155 case NL80211_CHAN_WIDTH_20: 163 case NL80211_CHAN_WIDTH_20:
156 case NL80211_CHAN_WIDTH_20_NOHT: 164 case NL80211_CHAN_WIDTH_20_NOHT:
157 width = 20; 165 width = 20;
@@ -194,6 +202,16 @@ cfg80211_chandef_compatible(const struct cfg80211_chan_def *c1,
194 if (c1->width == c2->width) 202 if (c1->width == c2->width)
195 return NULL; 203 return NULL;
196 204
205 /*
206 * can't be compatible if one of them is 5 or 10 MHz,
207 * but they don't have the same width.
208 */
209 if (c1->width == NL80211_CHAN_WIDTH_5 ||
210 c1->width == NL80211_CHAN_WIDTH_10 ||
211 c2->width == NL80211_CHAN_WIDTH_5 ||
212 c2->width == NL80211_CHAN_WIDTH_10)
213 return NULL;
214
197 if (c1->width == NL80211_CHAN_WIDTH_20_NOHT || 215 if (c1->width == NL80211_CHAN_WIDTH_20_NOHT ||
198 c1->width == NL80211_CHAN_WIDTH_20) 216 c1->width == NL80211_CHAN_WIDTH_20)
199 return c2; 217 return c2;
@@ -264,11 +282,17 @@ static int cfg80211_get_chans_dfs_required(struct wiphy *wiphy,
264 u32 bandwidth) 282 u32 bandwidth)
265{ 283{
266 struct ieee80211_channel *c; 284 struct ieee80211_channel *c;
267 u32 freq;
285 u32 freq, start_freq, end_freq;
286
287 if (bandwidth <= 20) {
288 start_freq = center_freq;
289 end_freq = center_freq;
290 } else {
291 start_freq = center_freq - bandwidth/2 + 10;
292 end_freq = center_freq + bandwidth/2 - 10;
293 }
268 294
269 for (freq = center_freq - bandwidth/2 + 10;
270 freq <= center_freq + bandwidth/2 - 10;
271 freq += 20) {
295 for (freq = start_freq; freq <= end_freq; freq += 20) {
272 c = ieee80211_get_channel(wiphy, freq); 296 c = ieee80211_get_channel(wiphy, freq);
273 if (!c) 297 if (!c)
274 return -EINVAL; 298 return -EINVAL;
@@ -310,11 +334,17 @@ static bool cfg80211_secondary_chans_ok(struct wiphy *wiphy,
310 u32 prohibited_flags) 334 u32 prohibited_flags)
311{ 335{
312 struct ieee80211_channel *c; 336 struct ieee80211_channel *c;
313 u32 freq;
337 u32 freq, start_freq, end_freq;
338
339 if (bandwidth <= 20) {
340 start_freq = center_freq;
341 end_freq = center_freq;
342 } else {
343 start_freq = center_freq - bandwidth/2 + 10;
344 end_freq = center_freq + bandwidth/2 - 10;
345 }
314 346
315 for (freq = center_freq - bandwidth/2 + 10;
316 freq <= center_freq + bandwidth/2 - 10;
317 freq += 20) {
347 for (freq = start_freq; freq <= end_freq; freq += 20) {
318 c = ieee80211_get_channel(wiphy, freq); 348 c = ieee80211_get_channel(wiphy, freq);
319 if (!c) 349 if (!c)
320 return false; 350 return false;
@@ -349,6 +379,12 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy,
349 control_freq = chandef->chan->center_freq; 379 control_freq = chandef->chan->center_freq;
350 380
351 switch (chandef->width) { 381 switch (chandef->width) {
382 case NL80211_CHAN_WIDTH_5:
383 width = 5;
384 break;
385 case NL80211_CHAN_WIDTH_10:
386 width = 10;
387 break;
352 case NL80211_CHAN_WIDTH_20: 388 case NL80211_CHAN_WIDTH_20:
353 if (!ht_cap->ht_supported) 389 if (!ht_cap->ht_supported)
354 return false; 390 return false;
@@ -405,6 +441,11 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy,
405 if (width > 20) 441 if (width > 20)
406 prohibited_flags |= IEEE80211_CHAN_NO_OFDM; 442 prohibited_flags |= IEEE80211_CHAN_NO_OFDM;
407 443
444 /* 5 and 10 MHz are only defined for the OFDM PHY */
445 if (width < 20)
446 prohibited_flags |= IEEE80211_CHAN_NO_OFDM;
447
448
408 if (!cfg80211_secondary_chans_ok(wiphy, chandef->center_freq1, 449 if (!cfg80211_secondary_chans_ok(wiphy, chandef->center_freq1,
409 width, prohibited_flags)) 450 width, prohibited_flags))
410 return false; 451 return false;
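The start_freq/end_freq computation introduced in both chan.c loops above treats 5/10/20 MHz channels as a single check of the control channel's own center, while wider channels still walk every 20 MHz subchannel center across the width. A self-contained sketch of that range logic (hypothetical helper name, mirroring the arithmetic in the diff):

```c
#include <assert.h>

/* Sketch of the range computation added to
 * cfg80211_get_chans_dfs_required() and cfg80211_secondary_chans_ok():
 * for bandwidth <= 20 MHz only the channel's own center frequency is
 * visited; otherwise the loop covers each 20 MHz subchannel center. */
static void chan_check_range(unsigned int center_freq, unsigned int bandwidth,
			     unsigned int *start_freq, unsigned int *end_freq)
{
	if (bandwidth <= 20) {
		*start_freq = center_freq;
		*end_freq = center_freq;
	} else {
		*start_freq = center_freq - bandwidth / 2 + 10;
		*end_freq = center_freq + bandwidth / 2 - 10;
	}
}
```

An 80 MHz channel centered at 5210 MHz thus checks 5180, 5200, 5220, and 5240 MHz, while a 10 MHz channel at 2412 MHz checks only 2412 MHz; without the `bandwidth <= 20` branch, the old expression would have produced an empty or nonsensical range for the new 5/10 MHz widths.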
diff --git a/net/wireless/core.c b/net/wireless/core.c
index 73405e00c800..4f9f216665e9 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -34,13 +34,12 @@
34MODULE_AUTHOR("Johannes Berg"); 34MODULE_AUTHOR("Johannes Berg");
35MODULE_LICENSE("GPL"); 35MODULE_LICENSE("GPL");
36MODULE_DESCRIPTION("wireless configuration support"); 36MODULE_DESCRIPTION("wireless configuration support");
37MODULE_ALIAS_GENL_FAMILY(NL80211_GENL_NAME);
37 38
38/* RCU-protected (and cfg80211_mutex for writers) */
39/* RCU-protected (and RTNL for writers) */
39LIST_HEAD(cfg80211_rdev_list); 40LIST_HEAD(cfg80211_rdev_list);
40int cfg80211_rdev_list_generation; 41int cfg80211_rdev_list_generation;
41 42
42DEFINE_MUTEX(cfg80211_mutex);
43
44/* for debugfs */ 43/* for debugfs */
45static struct dentry *ieee80211_debugfs_dir; 44static struct dentry *ieee80211_debugfs_dir;
46 45
@@ -52,12 +51,11 @@ module_param(cfg80211_disable_40mhz_24ghz, bool, 0644);
52MODULE_PARM_DESC(cfg80211_disable_40mhz_24ghz, 51MODULE_PARM_DESC(cfg80211_disable_40mhz_24ghz,
53 "Disable 40MHz support in the 2.4GHz band"); 52 "Disable 40MHz support in the 2.4GHz band");
54 53
55/* requires cfg80211_mutex to be held! */
56struct cfg80211_registered_device *cfg80211_rdev_by_wiphy_idx(int wiphy_idx) 54struct cfg80211_registered_device *cfg80211_rdev_by_wiphy_idx(int wiphy_idx)
57{ 55{
58 struct cfg80211_registered_device *result = NULL, *rdev; 56 struct cfg80211_registered_device *result = NULL, *rdev;
59 57
60 assert_cfg80211_lock();
58 ASSERT_RTNL();
61 59
62 list_for_each_entry(rdev, &cfg80211_rdev_list, list) { 60 list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
63 if (rdev->wiphy_idx == wiphy_idx) { 61 if (rdev->wiphy_idx == wiphy_idx) {
@@ -76,12 +74,11 @@ int get_wiphy_idx(struct wiphy *wiphy)
76 return rdev->wiphy_idx; 74 return rdev->wiphy_idx;
77} 75}
78 76
79/* requires cfg80211_rdev_mutex to be held! */
80struct wiphy *wiphy_idx_to_wiphy(int wiphy_idx) 77struct wiphy *wiphy_idx_to_wiphy(int wiphy_idx)
81{ 78{
82 struct cfg80211_registered_device *rdev; 79 struct cfg80211_registered_device *rdev;
83 80
84 assert_cfg80211_lock();
81 ASSERT_RTNL();
85 82
86 rdev = cfg80211_rdev_by_wiphy_idx(wiphy_idx); 83 rdev = cfg80211_rdev_by_wiphy_idx(wiphy_idx);
87 if (!rdev) 84 if (!rdev)
@@ -89,35 +86,13 @@ struct wiphy *wiphy_idx_to_wiphy(int wiphy_idx)
89 return &rdev->wiphy; 86 return &rdev->wiphy;
90} 87}
91 88
92struct cfg80211_registered_device *
93cfg80211_get_dev_from_ifindex(struct net *net, int ifindex)
94{
95 struct cfg80211_registered_device *rdev = ERR_PTR(-ENODEV);
96 struct net_device *dev;
97
98 mutex_lock(&cfg80211_mutex);
99 dev = dev_get_by_index(net, ifindex);
100 if (!dev)
101 goto out;
102 if (dev->ieee80211_ptr) {
103 rdev = wiphy_to_dev(dev->ieee80211_ptr->wiphy);
104 mutex_lock(&rdev->mtx);
105 } else
106 rdev = ERR_PTR(-ENODEV);
107 dev_put(dev);
108 out:
109 mutex_unlock(&cfg80211_mutex);
110 return rdev;
111}
112
113/* requires cfg80211_mutex to be held */
114int cfg80211_dev_rename(struct cfg80211_registered_device *rdev, 89int cfg80211_dev_rename(struct cfg80211_registered_device *rdev,
115 char *newname) 90 char *newname)
116{ 91{
117 struct cfg80211_registered_device *rdev2; 92 struct cfg80211_registered_device *rdev2;
118 int wiphy_idx, taken = -1, result, digits; 93 int wiphy_idx, taken = -1, result, digits;
119 94
120 assert_cfg80211_lock();
95 ASSERT_RTNL();
121 96
122 /* prohibit calling the thing phy%d when %d is not its number */ 97 /* prohibit calling the thing phy%d when %d is not its number */
123 sscanf(newname, PHY_NAME "%d%n", &wiphy_idx, &taken); 98 sscanf(newname, PHY_NAME "%d%n", &wiphy_idx, &taken);
@@ -215,8 +190,7 @@ static void cfg80211_rfkill_poll(struct rfkill *rfkill, void *data)
215void cfg80211_stop_p2p_device(struct cfg80211_registered_device *rdev, 190void cfg80211_stop_p2p_device(struct cfg80211_registered_device *rdev,
216 struct wireless_dev *wdev) 191 struct wireless_dev *wdev)
217{ 192{
218 lockdep_assert_held(&rdev->devlist_mtx); 193 ASSERT_RTNL();
219 lockdep_assert_held(&rdev->sched_scan_mtx);
220 194
221 if (WARN_ON(wdev->iftype != NL80211_IFTYPE_P2P_DEVICE)) 195 if (WARN_ON(wdev->iftype != NL80211_IFTYPE_P2P_DEVICE))
222 return; 196 return;
@@ -230,18 +204,15 @@ void cfg80211_stop_p2p_device(struct cfg80211_registered_device *rdev,
230 rdev->opencount--; 204 rdev->opencount--;
231 205
232 if (rdev->scan_req && rdev->scan_req->wdev == wdev) { 206 if (rdev->scan_req && rdev->scan_req->wdev == wdev) {
233 bool busy = work_busy(&rdev->scan_done_wk);
234
235 /* 207 /*
236 * If the work isn't pending or running (in which case it would 208 * If the scan request wasn't notified as done, set it
237 * be waiting for the lock we hold) the driver didn't properly 209 * to aborted and leak it after a warning. The driver
238 * cancel the scan when the interface was removed. In this case 210 * should have notified us that it ended at the latest
239 * warn and leak the scan request object to not crash later. 211 * during rdev_stop_p2p_device().
240 */ 212 */
241 WARN_ON(!busy); 213 if (WARN_ON(!rdev->scan_req->notified))
242 214 rdev->scan_req->aborted = true;
243 rdev->scan_req->aborted = true; 215 ___cfg80211_scan_done(rdev, !rdev->scan_req->notified);
244 ___cfg80211_scan_done(rdev, !busy);
245 } 216 }
246} 217}
247 218
@@ -255,8 +226,6 @@ static int cfg80211_rfkill_set_block(void *data, bool blocked)
255 226
256 rtnl_lock(); 227 rtnl_lock();
257 228
258 /* read-only iteration need not hold the devlist_mtx */
259
260 list_for_each_entry(wdev, &rdev->wdev_list, list) { 229 list_for_each_entry(wdev, &rdev->wdev_list, list) {
261 if (wdev->netdev) { 230 if (wdev->netdev) {
262 dev_close(wdev->netdev); 231 dev_close(wdev->netdev);
@@ -265,12 +234,7 @@ static int cfg80211_rfkill_set_block(void *data, bool blocked)
265 /* otherwise, check iftype */ 234 /* otherwise, check iftype */
266 switch (wdev->iftype) { 235 switch (wdev->iftype) {
267 case NL80211_IFTYPE_P2P_DEVICE: 236 case NL80211_IFTYPE_P2P_DEVICE:
268 /* but this requires it */
269 mutex_lock(&rdev->devlist_mtx);
270 mutex_lock(&rdev->sched_scan_mtx);
271 cfg80211_stop_p2p_device(rdev, wdev); 237 cfg80211_stop_p2p_device(rdev, wdev);
272 mutex_unlock(&rdev->sched_scan_mtx);
273 mutex_unlock(&rdev->devlist_mtx);
274 break; 238 break;
275 default: 239 default:
276 break; 240 break;
@@ -298,10 +262,7 @@ static void cfg80211_event_work(struct work_struct *work)
298 event_work); 262 event_work);
299 263
300 rtnl_lock(); 264 rtnl_lock();
301 cfg80211_lock_rdev(rdev);
302
303 cfg80211_process_rdev_events(rdev); 265 cfg80211_process_rdev_events(rdev);
304 cfg80211_unlock_rdev(rdev);
305 rtnl_unlock(); 266 rtnl_unlock();
306} 267}
307 268
@@ -309,7 +270,7 @@ static void cfg80211_event_work(struct work_struct *work)
309 270
310struct wiphy *wiphy_new(const struct cfg80211_ops *ops, int sizeof_priv) 271struct wiphy *wiphy_new(const struct cfg80211_ops *ops, int sizeof_priv)
311{ 272{
312 static int wiphy_counter; 273 static atomic_t wiphy_counter = ATOMIC_INIT(0);
313 274
314 struct cfg80211_registered_device *rdev; 275 struct cfg80211_registered_device *rdev;
315 int alloc_size; 276 int alloc_size;
@@ -331,26 +292,21 @@ struct wiphy *wiphy_new(const struct cfg80211_ops *ops, int sizeof_priv)
331 292
332 rdev->ops = ops; 293 rdev->ops = ops;
333 294
334 mutex_lock(&cfg80211_mutex); 295 rdev->wiphy_idx = atomic_inc_return(&wiphy_counter);
335
336 rdev->wiphy_idx = wiphy_counter++;
337 296
338 if (unlikely(rdev->wiphy_idx < 0)) { 297 if (unlikely(rdev->wiphy_idx < 0)) {
339 wiphy_counter--;
340 mutex_unlock(&cfg80211_mutex);
341 /* ugh, wrapped! */ 298 /* ugh, wrapped! */
299 atomic_dec(&wiphy_counter);
342 kfree(rdev); 300 kfree(rdev);
343 return NULL; 301 return NULL;
344 } 302 }
345 303
346 mutex_unlock(&cfg80211_mutex); 304 /* atomic_inc_return makes it start at 1, make it start at 0 */
305 rdev->wiphy_idx--;
347 306
348 /* give it a proper name */ 307 /* give it a proper name */
349 dev_set_name(&rdev->wiphy.dev, PHY_NAME "%d", rdev->wiphy_idx); 308 dev_set_name(&rdev->wiphy.dev, PHY_NAME "%d", rdev->wiphy_idx);
350 309
351 mutex_init(&rdev->mtx);
352 mutex_init(&rdev->devlist_mtx);
353 mutex_init(&rdev->sched_scan_mtx);
354 INIT_LIST_HEAD(&rdev->wdev_list); 310 INIT_LIST_HEAD(&rdev->wdev_list);
355 INIT_LIST_HEAD(&rdev->beacon_registrations); 311 INIT_LIST_HEAD(&rdev->beacon_registrations);
356 spin_lock_init(&rdev->beacon_registrations_lock); 312 spin_lock_init(&rdev->beacon_registrations_lock);
@@ -496,8 +452,13 @@ int wiphy_register(struct wiphy *wiphy)
496 u16 ifmodes = wiphy->interface_modes; 452 u16 ifmodes = wiphy->interface_modes;
497 453
498#ifdef CONFIG_PM 454#ifdef CONFIG_PM
499 if (WARN_ON((wiphy->wowlan.flags & WIPHY_WOWLAN_GTK_REKEY_FAILURE) && 455 if (WARN_ON(wiphy->wowlan &&
500 !(wiphy->wowlan.flags & WIPHY_WOWLAN_SUPPORTS_GTK_REKEY))) 456 (wiphy->wowlan->flags & WIPHY_WOWLAN_GTK_REKEY_FAILURE) &&
457 !(wiphy->wowlan->flags & WIPHY_WOWLAN_SUPPORTS_GTK_REKEY)))
458 return -EINVAL;
459 if (WARN_ON(wiphy->wowlan &&
460 !wiphy->wowlan->flags && !wiphy->wowlan->n_patterns &&
461 !wiphy->wowlan->tcp))
501 return -EINVAL; 462 return -EINVAL;
502#endif 463#endif
503 464
@@ -587,25 +548,28 @@ int wiphy_register(struct wiphy *wiphy)
587 } 548 }
588 549
589#ifdef CONFIG_PM 550#ifdef CONFIG_PM
590 if (rdev->wiphy.wowlan.n_patterns) { 551 if (WARN_ON(rdev->wiphy.wowlan && rdev->wiphy.wowlan->n_patterns &&
591 if (WARN_ON(!rdev->wiphy.wowlan.pattern_min_len || 552 (!rdev->wiphy.wowlan->pattern_min_len ||
592 rdev->wiphy.wowlan.pattern_min_len > 553 rdev->wiphy.wowlan->pattern_min_len >
593 rdev->wiphy.wowlan.pattern_max_len)) 554 rdev->wiphy.wowlan->pattern_max_len)))
594 return -EINVAL; 555 return -EINVAL;
595 }
596#endif 556#endif
597 557
598 /* check and set up bitrates */ 558 /* check and set up bitrates */
599 ieee80211_set_bitrate_flags(wiphy); 559 ieee80211_set_bitrate_flags(wiphy);
600 560
601 mutex_lock(&cfg80211_mutex);
602 561
603 res = device_add(&rdev->wiphy.dev); 562 res = device_add(&rdev->wiphy.dev);
563 if (res)
564 return res;
565
566 res = rfkill_register(rdev->rfkill);
604 if (res) { 567 if (res) {
605 mutex_unlock(&cfg80211_mutex); 568 device_del(&rdev->wiphy.dev);
606 return res; 569 return res;
607 } 570 }
608 571
572 rtnl_lock();
609 /* set up regulatory info */ 573 /* set up regulatory info */
610 wiphy_regulatory_register(wiphy); 574 wiphy_regulatory_register(wiphy);
611 575
@@ -631,25 +595,7 @@ int wiphy_register(struct wiphy *wiphy)
631 } 595 }
632 596
633 cfg80211_debugfs_rdev_add(rdev); 597 cfg80211_debugfs_rdev_add(rdev);
634 mutex_unlock(&cfg80211_mutex);
635
636 /*
637 * due to a locking dependency this has to be outside of the
638 * cfg80211_mutex lock
639 */
640 res = rfkill_register(rdev->rfkill);
641 if (res) {
642 device_del(&rdev->wiphy.dev);
643
644 mutex_lock(&cfg80211_mutex);
645 debugfs_remove_recursive(rdev->wiphy.debugfsdir);
646 list_del_rcu(&rdev->list);
647 wiphy_regulatory_deregister(wiphy);
648 mutex_unlock(&cfg80211_mutex);
649 return res;
650 }
651 598
652 rtnl_lock();
653 rdev->wiphy.registered = true; 599 rdev->wiphy.registered = true;
654 rtnl_unlock(); 600 rtnl_unlock();
655 return 0; 601 return 0;
@@ -679,25 +625,19 @@ void wiphy_unregister(struct wiphy *wiphy)
679{ 625{
680 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); 626 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
681 627
682 rtnl_lock();
683 rdev->wiphy.registered = false;
684 rtnl_unlock();
685
686 rfkill_unregister(rdev->rfkill);
687
688 /* protect the device list */
689 mutex_lock(&cfg80211_mutex);
690
691 wait_event(rdev->dev_wait, ({ 628 wait_event(rdev->dev_wait, ({
692 int __count; 629 int __count;
693 mutex_lock(&rdev->devlist_mtx); 630 rtnl_lock();
694 __count = rdev->opencount; 631 __count = rdev->opencount;
695 mutex_unlock(&rdev->devlist_mtx); 632 rtnl_unlock();
696 __count == 0; })); 633 __count == 0; }));
697 634
698 mutex_lock(&rdev->devlist_mtx); 635 rfkill_unregister(rdev->rfkill);
636
637 rtnl_lock();
638 rdev->wiphy.registered = false;
639
699 BUG_ON(!list_empty(&rdev->wdev_list)); 640 BUG_ON(!list_empty(&rdev->wdev_list));
700 mutex_unlock(&rdev->devlist_mtx);
701 641
702 /* 642 /*
703 * First remove the hardware from everywhere, this makes 643 * First remove the hardware from everywhere, this makes
@@ -708,20 +648,6 @@ void wiphy_unregister(struct wiphy *wiphy)
708 synchronize_rcu(); 648 synchronize_rcu();
709 649
710 /* 650 /*
711 * Try to grab rdev->mtx. If a command is still in progress,
712 * hopefully the driver will refuse it since it's tearing
713 * down the device already. We wait for this command to complete
714 * before unlinking the item from the list.
715 * Note: as codified by the BUG_ON above we cannot get here if
716 * a virtual interface is still present. Hence, we can only get
717 * to lock contention here if userspace issues a command that
718 * identified the hardware by wiphy index.
719 */
720 cfg80211_lock_rdev(rdev);
721 /* nothing */
722 cfg80211_unlock_rdev(rdev);
723
724 /*
725 * If this device got a regulatory hint tell core its 651 * If this device got a regulatory hint tell core its
726 * free to listen now to a new shiny device regulatory hint 652 * free to listen now to a new shiny device regulatory hint
727 */ 653 */
@@ -730,15 +656,17 @@ void wiphy_unregister(struct wiphy *wiphy)
730 cfg80211_rdev_list_generation++; 656 cfg80211_rdev_list_generation++;
731 device_del(&rdev->wiphy.dev); 657 device_del(&rdev->wiphy.dev);
732 658
733 mutex_unlock(&cfg80211_mutex); 659 rtnl_unlock();
734 660
735 flush_work(&rdev->scan_done_wk); 661 flush_work(&rdev->scan_done_wk);
736 cancel_work_sync(&rdev->conn_work); 662 cancel_work_sync(&rdev->conn_work);
737 flush_work(&rdev->event_work); 663 flush_work(&rdev->event_work);
738 cancel_delayed_work_sync(&rdev->dfs_update_channels_wk); 664 cancel_delayed_work_sync(&rdev->dfs_update_channels_wk);
739 665
740 if (rdev->wowlan && rdev->ops->set_wakeup) 666#ifdef CONFIG_PM
667 if (rdev->wiphy.wowlan_config && rdev->ops->set_wakeup)
741 rdev_set_wakeup(rdev, false); 668 rdev_set_wakeup(rdev, false);
669#endif
742 cfg80211_rdev_free_wowlan(rdev); 670 cfg80211_rdev_free_wowlan(rdev);
743} 671}
744EXPORT_SYMBOL(wiphy_unregister); 672EXPORT_SYMBOL(wiphy_unregister);
@@ -748,9 +676,6 @@ void cfg80211_dev_free(struct cfg80211_registered_device *rdev)
748 struct cfg80211_internal_bss *scan, *tmp; 676 struct cfg80211_internal_bss *scan, *tmp;
749 struct cfg80211_beacon_registration *reg, *treg; 677 struct cfg80211_beacon_registration *reg, *treg;
750 rfkill_destroy(rdev->rfkill); 678 rfkill_destroy(rdev->rfkill);
751 mutex_destroy(&rdev->mtx);
752 mutex_destroy(&rdev->devlist_mtx);
753 mutex_destroy(&rdev->sched_scan_mtx);
754 list_for_each_entry_safe(reg, treg, &rdev->beacon_registrations, list) { 679 list_for_each_entry_safe(reg, treg, &rdev->beacon_registrations, list) {
755 list_del(&reg->list); 680 list_del(&reg->list);
756 kfree(reg); 681 kfree(reg);
@@ -775,36 +700,6 @@ void wiphy_rfkill_set_hw_state(struct wiphy *wiphy, bool blocked)
775} 700}
776EXPORT_SYMBOL(wiphy_rfkill_set_hw_state); 701EXPORT_SYMBOL(wiphy_rfkill_set_hw_state);
777 702
778static void wdev_cleanup_work(struct work_struct *work)
779{
780 struct wireless_dev *wdev;
781 struct cfg80211_registered_device *rdev;
782
783 wdev = container_of(work, struct wireless_dev, cleanup_work);
784 rdev = wiphy_to_dev(wdev->wiphy);
785
786 mutex_lock(&rdev->sched_scan_mtx);
787
788 if (WARN_ON(rdev->scan_req && rdev->scan_req->wdev == wdev)) {
789 rdev->scan_req->aborted = true;
790 ___cfg80211_scan_done(rdev, true);
791 }
792
793 if (WARN_ON(rdev->sched_scan_req &&
794 rdev->sched_scan_req->dev == wdev->netdev)) {
795 __cfg80211_stop_sched_scan(rdev, false);
796 }
797
798 mutex_unlock(&rdev->sched_scan_mtx);
799
800 mutex_lock(&rdev->devlist_mtx);
801 rdev->opencount--;
802 mutex_unlock(&rdev->devlist_mtx);
803 wake_up(&rdev->dev_wait);
804
805 dev_put(wdev->netdev);
806}
807
808void cfg80211_unregister_wdev(struct wireless_dev *wdev) 703void cfg80211_unregister_wdev(struct wireless_dev *wdev)
809{ 704{
810 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy); 705 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
@@ -814,8 +709,6 @@ void cfg80211_unregister_wdev(struct wireless_dev *wdev)
814 if (WARN_ON(wdev->netdev)) 709 if (WARN_ON(wdev->netdev))
815 return; 710 return;
816 711
817 mutex_lock(&rdev->devlist_mtx);
818 mutex_lock(&rdev->sched_scan_mtx);
819 list_del_rcu(&wdev->list); 712 list_del_rcu(&wdev->list);
820 rdev->devlist_generation++; 713 rdev->devlist_generation++;
821 714
@@ -827,8 +720,6 @@ void cfg80211_unregister_wdev(struct wireless_dev *wdev)
827 WARN_ON_ONCE(1); 720 WARN_ON_ONCE(1);
828 break; 721 break;
829 } 722 }
830 mutex_unlock(&rdev->sched_scan_mtx);
831 mutex_unlock(&rdev->devlist_mtx);
832} 723}
833EXPORT_SYMBOL(cfg80211_unregister_wdev); 724EXPORT_SYMBOL(cfg80211_unregister_wdev);
834 725
@@ -847,7 +738,7 @@ void cfg80211_update_iface_num(struct cfg80211_registered_device *rdev,
847} 738}
848 739
849void cfg80211_leave(struct cfg80211_registered_device *rdev, 740void cfg80211_leave(struct cfg80211_registered_device *rdev,
850 struct wireless_dev *wdev) 741 struct wireless_dev *wdev)
851{ 742{
852 struct net_device *dev = wdev->netdev; 743 struct net_device *dev = wdev->netdev;
853 744
@@ -857,9 +748,7 @@ void cfg80211_leave(struct cfg80211_registered_device *rdev,
857 break; 748 break;
858 case NL80211_IFTYPE_P2P_CLIENT: 749 case NL80211_IFTYPE_P2P_CLIENT:
859 case NL80211_IFTYPE_STATION: 750 case NL80211_IFTYPE_STATION:
860 mutex_lock(&rdev->sched_scan_mtx);
861 __cfg80211_stop_sched_scan(rdev, false); 751 __cfg80211_stop_sched_scan(rdev, false);
862 mutex_unlock(&rdev->sched_scan_mtx);
863 752
864 wdev_lock(wdev); 753 wdev_lock(wdev);
865#ifdef CONFIG_CFG80211_WEXT 754#ifdef CONFIG_CFG80211_WEXT
@@ -868,8 +757,8 @@ void cfg80211_leave(struct cfg80211_registered_device *rdev,
868 wdev->wext.ie_len = 0; 757 wdev->wext.ie_len = 0;
869 wdev->wext.connect.auth_type = NL80211_AUTHTYPE_AUTOMATIC; 758 wdev->wext.connect.auth_type = NL80211_AUTHTYPE_AUTOMATIC;
870#endif 759#endif
871 __cfg80211_disconnect(rdev, dev, 760 cfg80211_disconnect(rdev, dev,
872 WLAN_REASON_DEAUTH_LEAVING, true); 761 WLAN_REASON_DEAUTH_LEAVING, true);
873 wdev_unlock(wdev); 762 wdev_unlock(wdev);
874 break; 763 break;
875 case NL80211_IFTYPE_MESH_POINT: 764 case NL80211_IFTYPE_MESH_POINT:
@@ -886,10 +775,9 @@ void cfg80211_leave(struct cfg80211_registered_device *rdev,
886} 775}
887 776
888static int cfg80211_netdev_notifier_call(struct notifier_block *nb, 777static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
889 unsigned long state, 778 unsigned long state, void *ptr)
890 void *ndev)
891{ 779{
892 struct net_device *dev = ndev; 780 struct net_device *dev = netdev_notifier_info_to_dev(ptr);
893 struct wireless_dev *wdev = dev->ieee80211_ptr; 781 struct wireless_dev *wdev = dev->ieee80211_ptr;
894 struct cfg80211_registered_device *rdev; 782 struct cfg80211_registered_device *rdev;
895 int ret; 783 int ret;
@@ -912,13 +800,11 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
912 * are added with nl80211. 800 * are added with nl80211.
913 */ 801 */
914 mutex_init(&wdev->mtx); 802 mutex_init(&wdev->mtx);
915 INIT_WORK(&wdev->cleanup_work, wdev_cleanup_work);
916 INIT_LIST_HEAD(&wdev->event_list); 803 INIT_LIST_HEAD(&wdev->event_list);
917 spin_lock_init(&wdev->event_lock); 804 spin_lock_init(&wdev->event_lock);
918 INIT_LIST_HEAD(&wdev->mgmt_registrations); 805 INIT_LIST_HEAD(&wdev->mgmt_registrations);
919 spin_lock_init(&wdev->mgmt_registrations_lock); 806 spin_lock_init(&wdev->mgmt_registrations_lock);
920 807
921 mutex_lock(&rdev->devlist_mtx);
922 wdev->identifier = ++rdev->wdev_id; 808 wdev->identifier = ++rdev->wdev_id;
923 list_add_rcu(&wdev->list, &rdev->wdev_list); 809 list_add_rcu(&wdev->list, &rdev->wdev_list);
924 rdev->devlist_generation++; 810 rdev->devlist_generation++;
@@ -930,8 +816,6 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
930 pr_err("failed to add phy80211 symlink to netdev!\n"); 816 pr_err("failed to add phy80211 symlink to netdev!\n");
931 } 817 }
932 wdev->netdev = dev; 818 wdev->netdev = dev;
933 wdev->sme_state = CFG80211_SME_IDLE;
934 mutex_unlock(&rdev->devlist_mtx);
935#ifdef CONFIG_CFG80211_WEXT 819#ifdef CONFIG_CFG80211_WEXT
936 wdev->wext.default_key = -1; 820 wdev->wext.default_key = -1;
937 wdev->wext.default_mgmt_key = -1; 821 wdev->wext.default_mgmt_key = -1;
@@ -957,26 +841,22 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
957 break; 841 break;
958 case NETDEV_DOWN: 842 case NETDEV_DOWN:
959 cfg80211_update_iface_num(rdev, wdev->iftype, -1); 843 cfg80211_update_iface_num(rdev, wdev->iftype, -1);
960 dev_hold(dev); 844 if (rdev->scan_req && rdev->scan_req->wdev == wdev) {
961 queue_work(cfg80211_wq, &wdev->cleanup_work); 845 if (WARN_ON(!rdev->scan_req->notified))
846 rdev->scan_req->aborted = true;
847 ___cfg80211_scan_done(rdev, true);
848 }
849
850 if (WARN_ON(rdev->sched_scan_req &&
851 rdev->sched_scan_req->dev == wdev->netdev)) {
852 __cfg80211_stop_sched_scan(rdev, false);
853 }
854
855 rdev->opencount--;
856 wake_up(&rdev->dev_wait);
962 break; 857 break;
963 case NETDEV_UP: 858 case NETDEV_UP:
964 /*
965 * If we have a really quick DOWN/UP succession we may
966 * have this work still pending ... cancel it and see
967 * if it was pending, in which case we need to account
968 * for some of the work it would have done.
969 */
970 if (cancel_work_sync(&wdev->cleanup_work)) {
971 mutex_lock(&rdev->devlist_mtx);
972 rdev->opencount--;
973 mutex_unlock(&rdev->devlist_mtx);
974 dev_put(dev);
975 }
976 cfg80211_update_iface_num(rdev, wdev->iftype, 1); 859 cfg80211_update_iface_num(rdev, wdev->iftype, 1);
977 cfg80211_lock_rdev(rdev);
978 mutex_lock(&rdev->devlist_mtx);
979 mutex_lock(&rdev->sched_scan_mtx);
980 wdev_lock(wdev); 860 wdev_lock(wdev);
981 switch (wdev->iftype) { 861 switch (wdev->iftype) {
982#ifdef CONFIG_CFG80211_WEXT 862#ifdef CONFIG_CFG80211_WEXT
@@ -1008,10 +888,7 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
1008 break; 888 break;
1009 } 889 }
1010 wdev_unlock(wdev); 890 wdev_unlock(wdev);
1011 mutex_unlock(&rdev->sched_scan_mtx);
1012 rdev->opencount++; 891 rdev->opencount++;
1013 mutex_unlock(&rdev->devlist_mtx);
1014 cfg80211_unlock_rdev(rdev);
1015 892
1016 /* 893 /*
1017 * Configure power management to the driver here so that its 894 * Configure power management to the driver here so that its
@@ -1028,12 +905,6 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
1028 break; 905 break;
1029 case NETDEV_UNREGISTER: 906 case NETDEV_UNREGISTER:
1030 /* 907 /*
1031 * NB: cannot take rdev->mtx here because this may be
1032 * called within code protected by it when interfaces
1033 * are removed with nl80211.
1034 */
1035 mutex_lock(&rdev->devlist_mtx);
1036 /*
1037 * It is possible to get NETDEV_UNREGISTER 908 * It is possible to get NETDEV_UNREGISTER
1038 * multiple times. To detect that, check 909 * multiple times. To detect that, check
1039 * that the interface is still on the list 910 * that the interface is still on the list
@@ -1049,7 +920,6 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
1049 kfree(wdev->wext.keys); 920 kfree(wdev->wext.keys);
1050#endif 921#endif
1051 } 922 }
1052 mutex_unlock(&rdev->devlist_mtx);
1053 /* 923 /*
1054 * synchronise (so that we won't find this netdev 924 * synchronise (so that we won't find this netdev
1055 * from other code any more) and then clear the list 925 * from other code any more) and then clear the list
@@ -1063,15 +933,19 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
1063 * freed. 933 * freed.
1064 */ 934 */
1065 cfg80211_process_wdev_events(wdev); 935 cfg80211_process_wdev_events(wdev);
936
937 if (WARN_ON(wdev->current_bss)) {
938 cfg80211_unhold_bss(wdev->current_bss);
939 cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
940 wdev->current_bss = NULL;
941 }
1066 break; 942 break;
1067 case NETDEV_PRE_UP: 943 case NETDEV_PRE_UP:
1068 if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype))) 944 if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype)))
1069 return notifier_from_errno(-EOPNOTSUPP); 945 return notifier_from_errno(-EOPNOTSUPP);
1070 if (rfkill_blocked(rdev->rfkill)) 946 if (rfkill_blocked(rdev->rfkill))
1071 return notifier_from_errno(-ERFKILL); 947 return notifier_from_errno(-ERFKILL);
1072 mutex_lock(&rdev->devlist_mtx);
1073 ret = cfg80211_can_add_interface(rdev, wdev->iftype); 948 ret = cfg80211_can_add_interface(rdev, wdev->iftype);
1074 mutex_unlock(&rdev->devlist_mtx);
1075 if (ret) 949 if (ret)
1076 return notifier_from_errno(ret); 950 return notifier_from_errno(ret);
1077 break; 951 break;
@@ -1089,12 +963,10 @@ static void __net_exit cfg80211_pernet_exit(struct net *net)
1089 struct cfg80211_registered_device *rdev; 963 struct cfg80211_registered_device *rdev;
1090 964
1091 rtnl_lock(); 965 rtnl_lock();
1092 mutex_lock(&cfg80211_mutex);
1093 list_for_each_entry(rdev, &cfg80211_rdev_list, list) { 966 list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
1094 if (net_eq(wiphy_net(&rdev->wiphy), net)) 967 if (net_eq(wiphy_net(&rdev->wiphy), net))
1095 WARN_ON(cfg80211_switch_netns(rdev, &init_net)); 968 WARN_ON(cfg80211_switch_netns(rdev, &init_net));
1096 } 969 }
1097 mutex_unlock(&cfg80211_mutex);
1098 rtnl_unlock(); 970 rtnl_unlock();
1099} 971}
1100 972
diff --git a/net/wireless/core.h b/net/wireless/core.h
index fd35dae547c4..a6b45bf00f33 100644
--- a/net/wireless/core.h
+++ b/net/wireless/core.h
@@ -5,7 +5,6 @@
 */
#ifndef __NET_WIRELESS_CORE_H
#define __NET_WIRELESS_CORE_H
-#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/rbtree.h>
@@ -23,11 +22,6 @@
struct cfg80211_registered_device {
 	const struct cfg80211_ops *ops;
 	struct list_head list;
-	/* we hold this mutex during any call so that
-	 * we cannot do multiple calls at once, and also
-	 * to avoid the deregister call to proceed while
-	 * any call is in progress */
-	struct mutex mtx;
 
 	/* rfkill support */
 	struct rfkill_ops rfkill_ops;
@@ -49,9 +43,7 @@ struct cfg80211_registered_device {
 	/* wiphy index, internal only */
 	int wiphy_idx;
 
-	/* associated wireless interfaces */
-	struct mutex devlist_mtx;
-	/* protected by devlist_mtx or RCU */
+	/* associated wireless interfaces, protected by rtnl or RCU */
 	struct list_head wdev_list;
 	int devlist_generation, wdev_id;
 	int opencount; /* also protected by devlist_mtx */
@@ -75,8 +67,6 @@ struct cfg80211_registered_device {
 	struct work_struct scan_done_wk;
 	struct work_struct sched_scan_results_wk;
 
-	struct mutex sched_scan_mtx;
-
#ifdef CONFIG_NL80211_TESTMODE
 	struct genl_info *testmode_info;
#endif
@@ -84,8 +74,6 @@ struct cfg80211_registered_device {
 	struct work_struct conn_work;
 	struct work_struct event_work;
 
-	struct cfg80211_wowlan *wowlan;
-
 	struct delayed_work dfs_update_channels_wk;
 
 	/* netlink port which started critical protocol (0 means not started) */
@@ -106,29 +94,26 @@ struct cfg80211_registered_device *wiphy_to_dev(struct wiphy *wiphy)
static inline void
cfg80211_rdev_free_wowlan(struct cfg80211_registered_device *rdev)
{
+#ifdef CONFIG_PM
 	int i;
 
-	if (!rdev->wowlan)
+	if (!rdev->wiphy.wowlan_config)
 		return;
-	for (i = 0; i < rdev->wowlan->n_patterns; i++)
-		kfree(rdev->wowlan->patterns[i].mask);
-	kfree(rdev->wowlan->patterns);
-	if (rdev->wowlan->tcp && rdev->wowlan->tcp->sock)
-		sock_release(rdev->wowlan->tcp->sock);
-	kfree(rdev->wowlan->tcp);
-	kfree(rdev->wowlan);
+	for (i = 0; i < rdev->wiphy.wowlan_config->n_patterns; i++)
+		kfree(rdev->wiphy.wowlan_config->patterns[i].mask);
+	kfree(rdev->wiphy.wowlan_config->patterns);
+	if (rdev->wiphy.wowlan_config->tcp &&
+	    rdev->wiphy.wowlan_config->tcp->sock)
+		sock_release(rdev->wiphy.wowlan_config->tcp->sock);
+	kfree(rdev->wiphy.wowlan_config->tcp);
+	kfree(rdev->wiphy.wowlan_config);
+#endif
}
 
extern struct workqueue_struct *cfg80211_wq;
-extern struct mutex cfg80211_mutex;
extern struct list_head cfg80211_rdev_list;
extern int cfg80211_rdev_list_generation;
 
-static inline void assert_cfg80211_lock(void)
-{
-	lockdep_assert_held(&cfg80211_mutex);
-}
-
struct cfg80211_internal_bss {
 	struct list_head list;
 	struct list_head hidden_list;
@@ -161,27 +146,11 @@ static inline void cfg80211_unhold_bss(struct cfg80211_internal_bss *bss)
struct cfg80211_registered_device *cfg80211_rdev_by_wiphy_idx(int wiphy_idx);
int get_wiphy_idx(struct wiphy *wiphy);
 
-/* requires cfg80211_rdev_mutex to be held! */
struct wiphy *wiphy_idx_to_wiphy(int wiphy_idx);
 
-/* identical to cfg80211_get_dev_from_info but only operate on ifindex */
-extern struct cfg80211_registered_device *
-cfg80211_get_dev_from_ifindex(struct net *net, int ifindex);
-
int cfg80211_switch_netns(struct cfg80211_registered_device *rdev,
			  struct net *net);
 
-static inline void cfg80211_lock_rdev(struct cfg80211_registered_device *rdev)
-{
-	mutex_lock(&rdev->mtx);
-}
-
-static inline void cfg80211_unlock_rdev(struct cfg80211_registered_device *rdev)
-{
-	BUG_ON(IS_ERR(rdev) || !rdev);
-	mutex_unlock(&rdev->mtx);
-}
-
static inline void wdev_lock(struct wireless_dev *wdev)
	__acquires(wdev)
{
@@ -196,7 +165,7 @@ static inline void wdev_unlock(struct wireless_dev *wdev)
 	mutex_unlock(&wdev->mtx);
}
 
-#define ASSERT_RDEV_LOCK(rdev) lockdep_assert_held(&(rdev)->mtx)
+#define ASSERT_RDEV_LOCK(rdev) ASSERT_RTNL()
#define ASSERT_WDEV_LOCK(wdev) lockdep_assert_held(&(wdev)->mtx)
 
static inline bool cfg80211_has_monitors_only(struct cfg80211_registered_device *rdev)
@@ -314,38 +283,21 @@ int cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
		     struct net_device *dev);
 
/* MLME */
-int __cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
-			 struct net_device *dev,
-			 struct ieee80211_channel *chan,
-			 enum nl80211_auth_type auth_type,
-			 const u8 *bssid,
-			 const u8 *ssid, int ssid_len,
-			 const u8 *ie, int ie_len,
-			 const u8 *key, int key_len, int key_idx,
-			 const u8 *sae_data, int sae_data_len);
int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
-		       struct net_device *dev, struct ieee80211_channel *chan,
-		       enum nl80211_auth_type auth_type, const u8 *bssid,
+		       struct net_device *dev,
+		       struct ieee80211_channel *chan,
+		       enum nl80211_auth_type auth_type,
+		       const u8 *bssid,
		       const u8 *ssid, int ssid_len,
		       const u8 *ie, int ie_len,
		       const u8 *key, int key_len, int key_idx,
		       const u8 *sae_data, int sae_data_len);
-int __cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev,
-			  struct net_device *dev,
-			  struct ieee80211_channel *chan,
-			  const u8 *bssid,
-			  const u8 *ssid, int ssid_len,
-			  struct cfg80211_assoc_request *req);
int cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev,
			struct net_device *dev,
			struct ieee80211_channel *chan,
			const u8 *bssid,
			const u8 *ssid, int ssid_len,
			struct cfg80211_assoc_request *req);
-int __cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
-			   struct net_device *dev, const u8 *bssid,
-			   const u8 *ie, int ie_len, u16 reason,
-			   bool local_state_change);
int cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
			 struct net_device *dev, const u8 *bssid,
			 const u8 *ie, int ie_len, u16 reason,
@@ -356,11 +308,6 @@ int cfg80211_mlme_disassoc(struct cfg80211_registered_device *rdev,
			   bool local_state_change);
void cfg80211_mlme_down(struct cfg80211_registered_device *rdev,
			struct net_device *dev);
-void __cfg80211_connect_result(struct net_device *dev, const u8 *bssid,
-			       const u8 *req_ie, size_t req_ie_len,
-			       const u8 *resp_ie, size_t resp_ie_len,
-			       u16 status, bool wextev,
-			       struct cfg80211_bss *bss);
int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_pid,
				u16 frame_type, const u8 *match_data,
				int match_len);
@@ -376,19 +323,19 @@ void cfg80211_oper_and_ht_capa(struct ieee80211_ht_cap *ht_capa,
void cfg80211_oper_and_vht_capa(struct ieee80211_vht_cap *vht_capa,
				const struct ieee80211_vht_cap *vht_capa_mask);
 
-/* SME */
-int __cfg80211_connect(struct cfg80211_registered_device *rdev,
-		       struct net_device *dev,
-		       struct cfg80211_connect_params *connect,
-		       struct cfg80211_cached_keys *connkeys,
-		       const u8 *prev_bssid);
+/* SME events */
int cfg80211_connect(struct cfg80211_registered_device *rdev,
		     struct net_device *dev,
		     struct cfg80211_connect_params *connect,
-		     struct cfg80211_cached_keys *connkeys);
-int __cfg80211_disconnect(struct cfg80211_registered_device *rdev,
-			  struct net_device *dev, u16 reason,
-			  bool wextev);
+		     struct cfg80211_cached_keys *connkeys,
+		     const u8 *prev_bssid);
+void __cfg80211_connect_result(struct net_device *dev, const u8 *bssid,
+			       const u8 *req_ie, size_t req_ie_len,
+			       const u8 *resp_ie, size_t resp_ie_len,
+			       u16 status, bool wextev,
+			       struct cfg80211_bss *bss);
+void __cfg80211_disconnected(struct net_device *dev, const u8 *ie,
+			     size_t ie_len, u16 reason, bool from_ap);
int cfg80211_disconnect(struct cfg80211_registered_device *rdev,
			struct net_device *dev, u16 reason,
			bool wextev);
@@ -399,21 +346,21 @@ void __cfg80211_roamed(struct wireless_dev *wdev,
int cfg80211_mgd_wext_connect(struct cfg80211_registered_device *rdev,
			      struct wireless_dev *wdev);
 
+/* SME implementation */
void cfg80211_conn_work(struct work_struct *work);
-void cfg80211_sme_failed_assoc(struct wireless_dev *wdev);
-bool cfg80211_sme_failed_reassoc(struct wireless_dev *wdev);
+void cfg80211_sme_scan_done(struct net_device *dev);
+bool cfg80211_sme_rx_assoc_resp(struct wireless_dev *wdev, u16 status);
+void cfg80211_sme_rx_auth(struct wireless_dev *wdev, const u8 *buf, size_t len);
+void cfg80211_sme_disassoc(struct wireless_dev *wdev);
+void cfg80211_sme_deauth(struct wireless_dev *wdev);
+void cfg80211_sme_auth_timeout(struct wireless_dev *wdev);
+void cfg80211_sme_assoc_timeout(struct wireless_dev *wdev);
 
/* internal helpers */
bool cfg80211_supported_cipher_suite(struct wiphy *wiphy, u32 cipher);
408int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev, 361int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev,
409 struct key_params *params, int key_idx, 362 struct key_params *params, int key_idx,
410 bool pairwise, const u8 *mac_addr); 363 bool pairwise, const u8 *mac_addr);
411void __cfg80211_disconnected(struct net_device *dev, const u8 *ie,
412 size_t ie_len, u16 reason, bool from_ap);
413void cfg80211_sme_scan_done(struct net_device *dev);
414void cfg80211_sme_rx_auth(struct net_device *dev, const u8 *buf, size_t len);
415void cfg80211_sme_disassoc(struct net_device *dev,
416 struct cfg80211_internal_bss *bss);
417void __cfg80211_scan_done(struct work_struct *wk); 364void __cfg80211_scan_done(struct work_struct *wk);
418void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev, bool leak); 365void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev, bool leak);
419void __cfg80211_sched_scan_results(struct work_struct *wk); 366void __cfg80211_sched_scan_results(struct work_struct *wk);
diff --git a/net/wireless/debugfs.c b/net/wireless/debugfs.c
index 920cabe0461b..90d050036624 100644
--- a/net/wireless/debugfs.c
+++ b/net/wireless/debugfs.c
@@ -74,7 +74,7 @@ static ssize_t ht40allow_map_read(struct file *file,
 	if (!buf)
 		return -ENOMEM;
 
-	mutex_lock(&cfg80211_mutex);
+	rtnl_lock();
 
 	for (band = 0; band < IEEE80211_NUM_BANDS; band++) {
 		sband = wiphy->bands[band];
@@ -85,7 +85,7 @@ static ssize_t ht40allow_map_read(struct file *file,
 				      buf, buf_size, offset);
 	}
 
-	mutex_unlock(&cfg80211_mutex);
+	rtnl_unlock();
 
 	r = simple_read_from_buffer(user_buf, count, ppos, buf, offset);
 
diff --git a/net/wireless/ibss.c b/net/wireless/ibss.c
index d80e47194d49..39bff7d36768 100644
--- a/net/wireless/ibss.c
+++ b/net/wireless/ibss.c
@@ -43,7 +43,6 @@ void __cfg80211_ibss_joined(struct net_device *dev, const u8 *bssid)
 	cfg80211_hold_bss(bss_from_pub(bss));
 	wdev->current_bss = bss_from_pub(bss);
 
-	wdev->sme_state = CFG80211_SME_CONNECTED;
 	cfg80211_upload_connect_keys(wdev);
 
 	nl80211_send_ibss_bssid(wiphy_to_dev(wdev->wiphy), dev, bssid,
@@ -64,8 +63,6 @@ void cfg80211_ibss_joined(struct net_device *dev, const u8 *bssid, gfp_t gfp)
 
 	trace_cfg80211_ibss_joined(dev, bssid);
 
-	CFG80211_DEV_WARN_ON(wdev->sme_state != CFG80211_SME_CONNECTING);
-
 	ev = kzalloc(sizeof(*ev), gfp);
 	if (!ev)
 		return;
@@ -120,7 +117,6 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
 #ifdef CONFIG_CFG80211_WEXT
 	wdev->wext.ibss.chandef = params->chandef;
 #endif
-	wdev->sme_state = CFG80211_SME_CONNECTING;
 
 	err = cfg80211_can_use_chan(rdev, wdev, params->chandef.chan,
 				    params->channel_fixed
@@ -134,7 +130,6 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
 	err = rdev_join_ibss(rdev, dev, params);
 	if (err) {
 		wdev->connect_keys = NULL;
-		wdev->sme_state = CFG80211_SME_IDLE;
 		return err;
 	}
 
@@ -152,11 +147,11 @@ int cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
 	int err;
 
-	mutex_lock(&rdev->devlist_mtx);
+	ASSERT_RTNL();
+
 	wdev_lock(wdev);
 	err = __cfg80211_join_ibss(rdev, dev, params, connkeys);
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->devlist_mtx);
 
 	return err;
 }
@@ -186,7 +181,6 @@ static void __cfg80211_clear_ibss(struct net_device *dev, bool nowext)
 	}
 
 	wdev->current_bss = NULL;
-	wdev->sme_state = CFG80211_SME_IDLE;
 	wdev->ssid_len = 0;
 #ifdef CONFIG_CFG80211_WEXT
 	if (!nowext)
@@ -359,11 +353,9 @@ int cfg80211_ibss_wext_siwfreq(struct net_device *dev,
 		wdev->wext.ibss.channel_fixed = false;
 	}
 
-	mutex_lock(&rdev->devlist_mtx);
 	wdev_lock(wdev);
 	err = cfg80211_ibss_wext_join(rdev, wdev);
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->devlist_mtx);
 
 	return err;
 }
@@ -429,11 +421,9 @@ int cfg80211_ibss_wext_siwessid(struct net_device *dev,
 	memcpy(wdev->wext.ibss.ssid, ssid, len);
 	wdev->wext.ibss.ssid_len = len;
 
-	mutex_lock(&rdev->devlist_mtx);
 	wdev_lock(wdev);
 	err = cfg80211_ibss_wext_join(rdev, wdev);
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->devlist_mtx);
 
 	return err;
 }
@@ -512,11 +502,9 @@ int cfg80211_ibss_wext_siwap(struct net_device *dev,
 	} else
 		wdev->wext.ibss.bssid = NULL;
 
-	mutex_lock(&rdev->devlist_mtx);
 	wdev_lock(wdev);
 	err = cfg80211_ibss_wext_join(rdev, wdev);
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->devlist_mtx);
 
 	return err;
 }
diff --git a/net/wireless/mesh.c b/net/wireless/mesh.c
index 0bb93f3061a4..30c49202ee4d 100644
--- a/net/wireless/mesh.c
+++ b/net/wireless/mesh.c
@@ -18,6 +18,7 @@
 #define MESH_PATH_TO_ROOT_TIMEOUT	6000
 #define MESH_ROOT_INTERVAL	5000
 #define MESH_ROOT_CONFIRMATION_INTERVAL	2000
+#define MESH_DEFAULT_PLINK_TIMEOUT	1800 /* timeout in seconds */
 
 /*
  * Minimum interval between two consecutive PREQs originated by the same
@@ -75,6 +76,7 @@ const struct mesh_config default_mesh_config = {
 	.dot11MeshHWMPconfirmationInterval = MESH_ROOT_CONFIRMATION_INTERVAL,
 	.power_mode = NL80211_MESH_POWER_ACTIVE,
 	.dot11MeshAwakeWindowDuration = MESH_DEFAULT_AWAKE_WINDOW,
+	.plink_timeout = MESH_DEFAULT_PLINK_TIMEOUT,
 };
 
 const struct mesh_setup default_mesh_setup = {
@@ -82,6 +84,7 @@ const struct mesh_setup default_mesh_setup = {
 	.sync_method = IEEE80211_SYNC_METHOD_NEIGHBOR_OFFSET,
 	.path_sel_proto = IEEE80211_PATH_PROTOCOL_HWMP,
 	.path_metric = IEEE80211_PATH_METRIC_AIRTIME,
+	.auth_id = 0, /* open */
 	.ie = NULL,
 	.ie_len = 0,
 	.is_secure = false,
@@ -159,6 +162,16 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
 			setup->chandef.center_freq1 = setup->chandef.chan->center_freq;
 	}
 
+	/*
+	 * check if basic rates are available otherwise use mandatory rates as
+	 * basic rates
+	 */
+	if (!setup->basic_rates) {
+		struct ieee80211_supported_band *sband =
+				rdev->wiphy.bands[setup->chandef.chan->band];
+		setup->basic_rates = ieee80211_mandatory_rates(sband);
+	}
+
 	if (!cfg80211_reg_can_beacon(&rdev->wiphy, &setup->chandef))
 		return -EINVAL;
 
@@ -185,11 +198,9 @@ int cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
 	int err;
 
-	mutex_lock(&rdev->devlist_mtx);
 	wdev_lock(wdev);
 	err = __cfg80211_join_mesh(rdev, dev, setup, conf);
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->devlist_mtx);
 
 	return err;
 }
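The mesh hunk above adds a fallback in `__cfg80211_join_mesh()`: if the caller left `basic_rates` at zero, the band's mandatory rates are used instead. A minimal standalone sketch of that pattern follows; the rate table and the bitmap helper are illustrative stand-ins, not the kernel's `ieee80211_mandatory_rates()`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative rate-table entry; "mandatory" stands in for the
 * IEEE80211_RATE_MANDATORY_* flags the real helper checks. */
struct rate {
	int rate_100kbps;	/* rate in units of 100 kbit/s */
	int mandatory;		/* non-zero if the standard requires support */
};

/* Hypothetical 2.4 GHz table: 1 and 2 Mbit/s mandatory, CCK rates not. */
static const struct rate band_2ghz[] = {
	{ 10, 1 }, { 20, 1 }, { 55, 0 }, { 110, 0 },
};

/* Build a bitmap with one bit per rate-table index, set for mandatory
 * rates -- the shape of what ieee80211_mandatory_rates() returns. */
static uint32_t mandatory_rate_bitmap(const struct rate *rates, size_t n)
{
	uint32_t bitmap = 0;
	size_t i;

	for (i = 0; i < n; i++)
		if (rates[i].mandatory)
			bitmap |= 1u << i;
	return bitmap;
}

/* The fallback from __cfg80211_join_mesh(): only derive basic_rates
 * when the caller did not choose any. */
static uint32_t pick_basic_rates(uint32_t requested,
				 const struct rate *rates, size_t n)
{
	return requested ? requested : mandatory_rate_bitmap(rates, n);
}
```

With the table above, an unset request falls back to bits 0 and 1 (the two mandatory rates), while any explicit bitmap is passed through untouched.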
diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
index 0c7b7dd855f6..bfac5e186f57 100644
--- a/net/wireless/mlme.c
+++ b/net/wireless/mlme.c
@@ -18,37 +18,18 @@
 #include "rdev-ops.h"
 
 
-void cfg80211_send_rx_auth(struct net_device *dev, const u8 *buf, size_t len)
-{
-	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct wiphy *wiphy = wdev->wiphy;
-	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
-
-	trace_cfg80211_send_rx_auth(dev);
-	wdev_lock(wdev);
-
-	nl80211_send_rx_auth(rdev, dev, buf, len, GFP_KERNEL);
-	cfg80211_sme_rx_auth(dev, buf, len);
-
-	wdev_unlock(wdev);
-}
-EXPORT_SYMBOL(cfg80211_send_rx_auth);
-
-void cfg80211_send_rx_assoc(struct net_device *dev, struct cfg80211_bss *bss,
+void cfg80211_rx_assoc_resp(struct net_device *dev, struct cfg80211_bss *bss,
 			    const u8 *buf, size_t len)
 {
-	u16 status_code;
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
 	struct wiphy *wiphy = wdev->wiphy;
 	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
 	struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf;
 	u8 *ie = mgmt->u.assoc_resp.variable;
 	int ieoffs = offsetof(struct ieee80211_mgmt, u.assoc_resp.variable);
+	u16 status_code = le16_to_cpu(mgmt->u.assoc_resp.status_code);
 
 	trace_cfg80211_send_rx_assoc(dev, bss);
-	wdev_lock(wdev);
-
-	status_code = le16_to_cpu(mgmt->u.assoc_resp.status_code);
 
 	/*
 	 * This is a bit of a hack, we don't notify userspace of
@@ -56,174 +37,135 @@ void cfg80211_send_rx_assoc(struct net_device *dev, struct cfg80211_bss *bss,
 	 * and got a reject -- we only try again with an assoc
 	 * frame instead of reassoc.
 	 */
-	if (status_code != WLAN_STATUS_SUCCESS && wdev->conn &&
-	    cfg80211_sme_failed_reassoc(wdev)) {
+	if (cfg80211_sme_rx_assoc_resp(wdev, status_code)) {
+		cfg80211_unhold_bss(bss_from_pub(bss));
 		cfg80211_put_bss(wiphy, bss);
-		goto out;
+		return;
 	}
 
 	nl80211_send_rx_assoc(rdev, dev, buf, len, GFP_KERNEL);
-
-	if (status_code != WLAN_STATUS_SUCCESS && wdev->conn) {
-		cfg80211_sme_failed_assoc(wdev);
-		/*
-		 * do not call connect_result() now because the
-		 * sme will schedule work that does it later.
-		 */
-		cfg80211_put_bss(wiphy, bss);
-		goto out;
-	}
-
-	if (!wdev->conn && wdev->sme_state == CFG80211_SME_IDLE) {
-		/*
-		 * This is for the userspace SME, the CONNECTING
-		 * state will be changed to CONNECTED by
-		 * __cfg80211_connect_result() below.
-		 */
-		wdev->sme_state = CFG80211_SME_CONNECTING;
-	}
-
-	/* this consumes the bss reference */
+	/* update current_bss etc., consumes the bss reference */
 	__cfg80211_connect_result(dev, mgmt->bssid, NULL, 0, ie, len - ieoffs,
 				  status_code,
 				  status_code == WLAN_STATUS_SUCCESS, bss);
- out:
-	wdev_unlock(wdev);
 }
-EXPORT_SYMBOL(cfg80211_send_rx_assoc);
+EXPORT_SYMBOL(cfg80211_rx_assoc_resp);
 
-void __cfg80211_send_deauth(struct net_device *dev,
-			    const u8 *buf, size_t len)
+static void cfg80211_process_auth(struct wireless_dev *wdev,
+				  const u8 *buf, size_t len)
 {
-	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct wiphy *wiphy = wdev->wiphy;
-	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
-	struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf;
-	const u8 *bssid = mgmt->bssid;
-	bool was_current = false;
+	struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
 
-	trace___cfg80211_send_deauth(dev);
-	ASSERT_WDEV_LOCK(wdev);
-
-	if (wdev->current_bss &&
-	    ether_addr_equal(wdev->current_bss->pub.bssid, bssid)) {
-		cfg80211_unhold_bss(wdev->current_bss);
-		cfg80211_put_bss(wiphy, &wdev->current_bss->pub);
-		wdev->current_bss = NULL;
-		was_current = true;
-	}
+	nl80211_send_rx_auth(rdev, wdev->netdev, buf, len, GFP_KERNEL);
+	cfg80211_sme_rx_auth(wdev, buf, len);
+}
 
-	nl80211_send_deauth(rdev, dev, buf, len, GFP_KERNEL);
+static void cfg80211_process_deauth(struct wireless_dev *wdev,
+				    const u8 *buf, size_t len)
+{
+	struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
+	struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf;
+	const u8 *bssid = mgmt->bssid;
+	u16 reason_code = le16_to_cpu(mgmt->u.deauth.reason_code);
+	bool from_ap = !ether_addr_equal(mgmt->sa, wdev->netdev->dev_addr);
 
-	if (wdev->sme_state == CFG80211_SME_CONNECTED && was_current) {
-		u16 reason_code;
-		bool from_ap;
+	nl80211_send_deauth(rdev, wdev->netdev, buf, len, GFP_KERNEL);
 
-		reason_code = le16_to_cpu(mgmt->u.deauth.reason_code);
+	if (!wdev->current_bss ||
+	    !ether_addr_equal(wdev->current_bss->pub.bssid, bssid))
		return;
 
-		from_ap = !ether_addr_equal(mgmt->sa, dev->dev_addr);
-		__cfg80211_disconnected(dev, NULL, 0, reason_code, from_ap);
-	} else if (wdev->sme_state == CFG80211_SME_CONNECTING) {
-		__cfg80211_connect_result(dev, mgmt->bssid, NULL, 0, NULL, 0,
-					  WLAN_STATUS_UNSPECIFIED_FAILURE,
-					  false, NULL);
-	}
+	__cfg80211_disconnected(wdev->netdev, NULL, 0, reason_code, from_ap);
+	cfg80211_sme_deauth(wdev);
 }
-EXPORT_SYMBOL(__cfg80211_send_deauth);
 
-void cfg80211_send_deauth(struct net_device *dev, const u8 *buf, size_t len)
+static void cfg80211_process_disassoc(struct wireless_dev *wdev,
+				      const u8 *buf, size_t len)
 {
-	struct wireless_dev *wdev = dev->ieee80211_ptr;
+	struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
+	struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf;
+	const u8 *bssid = mgmt->bssid;
+	u16 reason_code = le16_to_cpu(mgmt->u.disassoc.reason_code);
+	bool from_ap = !ether_addr_equal(mgmt->sa, wdev->netdev->dev_addr);
 
-	wdev_lock(wdev);
-	__cfg80211_send_deauth(dev, buf, len);
-	wdev_unlock(wdev);
+	nl80211_send_disassoc(rdev, wdev->netdev, buf, len, GFP_KERNEL);
+
+	if (WARN_ON(!wdev->current_bss ||
+		    !ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
+		return;
+
+	__cfg80211_disconnected(wdev->netdev, NULL, 0, reason_code, from_ap);
+	cfg80211_sme_disassoc(wdev);
 }
-EXPORT_SYMBOL(cfg80211_send_deauth);
 
-void __cfg80211_send_disassoc(struct net_device *dev,
-			      const u8 *buf, size_t len)
+void cfg80211_rx_mlme_mgmt(struct net_device *dev, const u8 *buf, size_t len)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct wiphy *wiphy = wdev->wiphy;
-	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
-	struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf;
-	const u8 *bssid = mgmt->bssid;
-	u16 reason_code;
-	bool from_ap;
+	struct ieee80211_mgmt *mgmt = (void *)buf;
 
-	trace___cfg80211_send_disassoc(dev);
 	ASSERT_WDEV_LOCK(wdev);
 
-	nl80211_send_disassoc(rdev, dev, buf, len, GFP_KERNEL);
+	trace_cfg80211_rx_mlme_mgmt(dev, buf, len);
 
-	if (wdev->sme_state != CFG80211_SME_CONNECTED)
+	if (WARN_ON(len < 2))
 		return;
 
-	if (wdev->current_bss &&
-	    ether_addr_equal(wdev->current_bss->pub.bssid, bssid)) {
-		cfg80211_sme_disassoc(dev, wdev->current_bss);
-		cfg80211_unhold_bss(wdev->current_bss);
-		cfg80211_put_bss(wiphy, &wdev->current_bss->pub);
-		wdev->current_bss = NULL;
-	} else
-		WARN_ON(1);
-
-
-	reason_code = le16_to_cpu(mgmt->u.disassoc.reason_code);
-
-	from_ap = !ether_addr_equal(mgmt->sa, dev->dev_addr);
-	__cfg80211_disconnected(dev, NULL, 0, reason_code, from_ap);
+	if (ieee80211_is_auth(mgmt->frame_control))
+		cfg80211_process_auth(wdev, buf, len);
+	else if (ieee80211_is_deauth(mgmt->frame_control))
+		cfg80211_process_deauth(wdev, buf, len);
+	else if (ieee80211_is_disassoc(mgmt->frame_control))
+		cfg80211_process_disassoc(wdev, buf, len);
 }
-EXPORT_SYMBOL(__cfg80211_send_disassoc);
+EXPORT_SYMBOL(cfg80211_rx_mlme_mgmt);
 
-void cfg80211_send_disassoc(struct net_device *dev, const u8 *buf, size_t len)
+void cfg80211_auth_timeout(struct net_device *dev, const u8 *addr)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+	struct wiphy *wiphy = wdev->wiphy;
+	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
+
+	trace_cfg80211_send_auth_timeout(dev, addr);
 
-	wdev_lock(wdev);
-	__cfg80211_send_disassoc(dev, buf, len);
-	wdev_unlock(wdev);
+	nl80211_send_auth_timeout(rdev, dev, addr, GFP_KERNEL);
+	cfg80211_sme_auth_timeout(wdev);
 }
-EXPORT_SYMBOL(cfg80211_send_disassoc);
+EXPORT_SYMBOL(cfg80211_auth_timeout);
 
-void cfg80211_send_auth_timeout(struct net_device *dev, const u8 *addr)
+void cfg80211_assoc_timeout(struct net_device *dev, struct cfg80211_bss *bss)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
 	struct wiphy *wiphy = wdev->wiphy;
 	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
 
-	trace_cfg80211_send_auth_timeout(dev, addr);
-	wdev_lock(wdev);
+	trace_cfg80211_send_assoc_timeout(dev, bss->bssid);
 
-	nl80211_send_auth_timeout(rdev, dev, addr, GFP_KERNEL);
-	if (wdev->sme_state == CFG80211_SME_CONNECTING)
-		__cfg80211_connect_result(dev, addr, NULL, 0, NULL, 0,
-					  WLAN_STATUS_UNSPECIFIED_FAILURE,
-					  false, NULL);
+	nl80211_send_assoc_timeout(rdev, dev, bss->bssid, GFP_KERNEL);
+	cfg80211_sme_assoc_timeout(wdev);
 
-	wdev_unlock(wdev);
+	cfg80211_unhold_bss(bss_from_pub(bss));
+	cfg80211_put_bss(wiphy, bss);
 }
-EXPORT_SYMBOL(cfg80211_send_auth_timeout);
+EXPORT_SYMBOL(cfg80211_assoc_timeout);
 
-void cfg80211_send_assoc_timeout(struct net_device *dev, const u8 *addr)
+void cfg80211_tx_mlme_mgmt(struct net_device *dev, const u8 *buf, size_t len)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct wiphy *wiphy = wdev->wiphy;
-	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
+	struct ieee80211_mgmt *mgmt = (void *)buf;
 
-	trace_cfg80211_send_assoc_timeout(dev, addr);
-	wdev_lock(wdev);
+	ASSERT_WDEV_LOCK(wdev);
 
-	nl80211_send_assoc_timeout(rdev, dev, addr, GFP_KERNEL);
-	if (wdev->sme_state == CFG80211_SME_CONNECTING)
-		__cfg80211_connect_result(dev, addr, NULL, 0, NULL, 0,
-					  WLAN_STATUS_UNSPECIFIED_FAILURE,
-					  false, NULL);
+	trace_cfg80211_tx_mlme_mgmt(dev, buf, len);
 
-	wdev_unlock(wdev);
+	if (WARN_ON(len < 2))
+		return;
+
+	if (ieee80211_is_deauth(mgmt->frame_control))
+		cfg80211_process_deauth(wdev, buf, len);
+	else
+		cfg80211_process_disassoc(wdev, buf, len);
 }
-EXPORT_SYMBOL(cfg80211_send_assoc_timeout);
+EXPORT_SYMBOL(cfg80211_tx_mlme_mgmt);
 
 void cfg80211_michael_mic_failure(struct net_device *dev, const u8 *addr,
 				  enum nl80211_key_type key_type, int key_id,
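The new `cfg80211_rx_mlme_mgmt()` above dispatches on the 802.11 frame-control field using `ieee80211_is_auth()`, `ieee80211_is_deauth()`, and `ieee80211_is_disassoc()`, which mask out the frame's type (bits 2-3) and subtype (bits 4-7) in one comparison. A standalone sketch of that dispatch follows; the constants match the standard's management subtypes, and `classify()` is an illustrative stand-in that returns a tag instead of calling the `cfg80211_process_*()` handlers:

```c
#include <assert.h>
#include <stdint.h>

/* 802.11 frame-control masks and management subtypes
 * (same values as the standard; the kernel keeps them in
 * linux/ieee80211.h as little-endian constants). */
#define FCTL_FTYPE	0x000c
#define FCTL_STYPE	0x00f0
#define FTYPE_MGMT	0x0000
#define STYPE_AUTH	0x00b0
#define STYPE_DEAUTH	0x00c0
#define STYPE_DISASSOC	0x00a0

/* Mirror the shape of ieee80211_is_auth() etc.: test type and
 * subtype in a single masked comparison. */
static int is_mgmt_subtype(uint16_t fc, uint16_t stype)
{
	return (fc & (FCTL_FTYPE | FCTL_STYPE)) == (FTYPE_MGMT | stype);
}

enum mlme_kind { MLME_OTHER, MLME_AUTH, MLME_DEAUTH, MLME_DISASSOC };

/* Dispatch the way cfg80211_rx_mlme_mgmt() does. */
static enum mlme_kind classify(uint16_t frame_control)
{
	if (is_mgmt_subtype(frame_control, STYPE_AUTH))
		return MLME_AUTH;
	if (is_mgmt_subtype(frame_control, STYPE_DEAUTH))
		return MLME_DEAUTH;
	if (is_mgmt_subtype(frame_control, STYPE_DISASSOC))
		return MLME_DISASSOC;
	return MLME_OTHER;
}
```

Checking type and subtype together is what lets one entry point replace the three separate `cfg80211_send_rx_auth()` / `cfg80211_send_deauth()` / `cfg80211_send_disassoc()` exports the diff removes.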
@@ -253,18 +195,27 @@ void cfg80211_michael_mic_failure(struct net_device *dev, const u8 *addr,
 EXPORT_SYMBOL(cfg80211_michael_mic_failure);
 
 /* some MLME handling for userspace SME */
-int __cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
-			 struct net_device *dev,
-			 struct ieee80211_channel *chan,
-			 enum nl80211_auth_type auth_type,
-			 const u8 *bssid,
-			 const u8 *ssid, int ssid_len,
-			 const u8 *ie, int ie_len,
-			 const u8 *key, int key_len, int key_idx,
-			 const u8 *sae_data, int sae_data_len)
+int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
+		       struct net_device *dev,
+		       struct ieee80211_channel *chan,
+		       enum nl80211_auth_type auth_type,
+		       const u8 *bssid,
+		       const u8 *ssid, int ssid_len,
+		       const u8 *ie, int ie_len,
+		       const u8 *key, int key_len, int key_idx,
+		       const u8 *sae_data, int sae_data_len)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct cfg80211_auth_request req;
+	struct cfg80211_auth_request req = {
+		.ie = ie,
+		.ie_len = ie_len,
+		.sae_data = sae_data,
+		.sae_data_len = sae_data_len,
+		.auth_type = auth_type,
+		.key = key,
+		.key_len = key_len,
+		.key_idx = key_idx,
+	};
 	int err;
 
 	ASSERT_WDEV_LOCK(wdev);
@@ -277,18 +228,8 @@ int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
 	    ether_addr_equal(bssid, wdev->current_bss->pub.bssid))
 		return -EALREADY;
 
-	memset(&req, 0, sizeof(req));
-
-	req.ie = ie;
-	req.ie_len = ie_len;
-	req.sae_data = sae_data;
-	req.sae_data_len = sae_data_len;
-	req.auth_type = auth_type;
 	req.bss = cfg80211_get_bss(&rdev->wiphy, chan, bssid, ssid, ssid_len,
 				   WLAN_CAPABILITY_ESS, WLAN_CAPABILITY_ESS);
-	req.key = key;
-	req.key_len = key_len;
-	req.key_idx = key_idx;
 	if (!req.bss)
 		return -ENOENT;
 
@@ -304,28 +245,6 @@ out:
 	return err;
 }
 
-int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
-		       struct net_device *dev, struct ieee80211_channel *chan,
-		       enum nl80211_auth_type auth_type, const u8 *bssid,
-		       const u8 *ssid, int ssid_len,
-		       const u8 *ie, int ie_len,
-		       const u8 *key, int key_len, int key_idx,
-		       const u8 *sae_data, int sae_data_len)
-{
-	int err;
-
-	mutex_lock(&rdev->devlist_mtx);
-	wdev_lock(dev->ieee80211_ptr);
-	err = __cfg80211_mlme_auth(rdev, dev, chan, auth_type, bssid,
-				   ssid, ssid_len, ie, ie_len,
-				   key, key_len, key_idx,
-				   sae_data, sae_data_len);
-	wdev_unlock(dev->ieee80211_ptr);
-	mutex_unlock(&rdev->devlist_mtx);
-
-	return err;
-}
-
 /* Do a logical ht_capa &= ht_capa_mask. */
 void cfg80211_oper_and_ht_capa(struct ieee80211_ht_cap *ht_capa,
 			       const struct ieee80211_ht_cap *ht_capa_mask)
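The hunk above replaces the old `memset()` plus field-by-field assignment of `req` with a designated initializer. The two forms are equivalent because C guarantees that fields not named in a designated initializer are zero-initialized, so later code can still rely on `req.bss` starting out NULL. A small sketch, using a hypothetical struct standing in for `cfg80211_auth_request`:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned char u8;

/* Hypothetical request struct standing in for cfg80211_auth_request. */
struct auth_request {
	const u8 *ie;
	size_t ie_len;
	int auth_type;
	int key_idx;
	const void *bss;	/* filled in later, must start out NULL */
};

/* Build a request the way the refactored cfg80211_mlme_auth() does:
 * designated initializers zero every field not mentioned, making the
 * old memset() redundant. */
static struct auth_request make_req(const u8 *ie, size_t ie_len,
				    int auth_type)
{
	struct auth_request req = {
		.ie = ie,
		.ie_len = ie_len,
		.auth_type = auth_type,
	};
	return req;
}

/* Returns non-zero if the unnamed fields came out zeroed and the
 * named ones kept their values. */
static int unnamed_fields_zeroed(void)
{
	static const u8 ies[2] = { 0x30, 0x00 };
	struct auth_request req = make_req(ies, sizeof(ies), 3);

	return req.bss == NULL && req.key_idx == 0 &&
	       req.ie == ies && req.ie_len == 2 && req.auth_type == 3;
}
```

The same pattern appears again below for `cfg80211_disassoc_request`, where the `memset()` and the scattered `req.* = ...` assignments are likewise folded into one initializer.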
@@ -360,30 +279,21 @@ void cfg80211_oper_and_vht_capa(struct ieee80211_vht_cap *vht_capa,
360 p1[i] &= p2[i]; 279 p1[i] &= p2[i];
361} 280}
362 281
363int __cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev, 282int cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev,
364 struct net_device *dev, 283 struct net_device *dev,
365 struct ieee80211_channel *chan, 284 struct ieee80211_channel *chan,
366 const u8 *bssid, 285 const u8 *bssid,
367 const u8 *ssid, int ssid_len, 286 const u8 *ssid, int ssid_len,
368 struct cfg80211_assoc_request *req) 287 struct cfg80211_assoc_request *req)
369{ 288{
370 struct wireless_dev *wdev = dev->ieee80211_ptr; 289 struct wireless_dev *wdev = dev->ieee80211_ptr;
371 int err; 290 int err;
372 bool was_connected = false;
373 291
374 ASSERT_WDEV_LOCK(wdev); 292 ASSERT_WDEV_LOCK(wdev);
375 293
376 if (wdev->current_bss && req->prev_bssid && 294 if (wdev->current_bss &&
377 ether_addr_equal(wdev->current_bss->pub.bssid, req->prev_bssid)) { 295 (!req->prev_bssid || !ether_addr_equal(wdev->current_bss->pub.bssid,
378 /* 296 req->prev_bssid)))
379 * Trying to reassociate: Allow this to proceed and let the old
380 * association to be dropped when the new one is completed.
381 */
382 if (wdev->sme_state == CFG80211_SME_CONNECTED) {
383 was_connected = true;
384 wdev->sme_state = CFG80211_SME_CONNECTING;
385 }
386 } else if (wdev->current_bss)
387 return -EALREADY; 297 return -EALREADY;
388 298
389 cfg80211_oper_and_ht_capa(&req->ht_capa_mask, 299 cfg80211_oper_and_ht_capa(&req->ht_capa_mask,
@@ -393,52 +303,28 @@ int __cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev,
393 303
394 req->bss = cfg80211_get_bss(&rdev->wiphy, chan, bssid, ssid, ssid_len, 304 req->bss = cfg80211_get_bss(&rdev->wiphy, chan, bssid, ssid, ssid_len,
395 WLAN_CAPABILITY_ESS, WLAN_CAPABILITY_ESS); 305 WLAN_CAPABILITY_ESS, WLAN_CAPABILITY_ESS);
396 if (!req->bss) { 306 if (!req->bss)
397 if (was_connected)
398 wdev->sme_state = CFG80211_SME_CONNECTED;
399 return -ENOENT; 307 return -ENOENT;
400 }
401 308
402 err = cfg80211_can_use_chan(rdev, wdev, chan, CHAN_MODE_SHARED); 309 err = cfg80211_can_use_chan(rdev, wdev, chan, CHAN_MODE_SHARED);
403 if (err) 310 if (err)
404 goto out; 311 goto out;
405 312
406 err = rdev_assoc(rdev, dev, req); 313 err = rdev_assoc(rdev, dev, req);
314 if (!err)
315 cfg80211_hold_bss(bss_from_pub(req->bss));
407 316
408out: 317out:
409 if (err) { 318 if (err)
410 if (was_connected)
411 wdev->sme_state = CFG80211_SME_CONNECTED;
412 cfg80211_put_bss(&rdev->wiphy, req->bss); 319 cfg80211_put_bss(&rdev->wiphy, req->bss);
413 }
414 320
415 return err; 321 return err;
416} 322}
417 323
418int cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev, 324int cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
419 struct net_device *dev, 325 struct net_device *dev, const u8 *bssid,
420 struct ieee80211_channel *chan, 326 const u8 *ie, int ie_len, u16 reason,
421 const u8 *bssid, 327 bool local_state_change)
422 const u8 *ssid, int ssid_len,
423 struct cfg80211_assoc_request *req)
424{
425 struct wireless_dev *wdev = dev->ieee80211_ptr;
426 int err;
427
428 mutex_lock(&rdev->devlist_mtx);
429 wdev_lock(wdev);
430 err = __cfg80211_mlme_assoc(rdev, dev, chan, bssid,
431 ssid, ssid_len, req);
432 wdev_unlock(wdev);
433 mutex_unlock(&rdev->devlist_mtx);
434
435 return err;
436}
437
438int __cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
439 struct net_device *dev, const u8 *bssid,
440 const u8 *ie, int ie_len, u16 reason,
441 bool local_state_change)
442{ 328{
443 struct wireless_dev *wdev = dev->ieee80211_ptr; 329 struct wireless_dev *wdev = dev->ieee80211_ptr;
444 struct cfg80211_deauth_request req = { 330 struct cfg80211_deauth_request req = {
@@ -451,79 +337,51 @@ int __cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
 
 	ASSERT_WDEV_LOCK(wdev);
 
-	if (local_state_change && (!wdev->current_bss ||
-	    !ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
+	if (local_state_change &&
+	    (!wdev->current_bss ||
+	     !ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
 		return 0;
 
 	return rdev_deauth(rdev, dev, &req);
 }
 
-int cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
-			 struct net_device *dev, const u8 *bssid,
-			 const u8 *ie, int ie_len, u16 reason,
-			 bool local_state_change)
+int cfg80211_mlme_disassoc(struct cfg80211_registered_device *rdev,
+			   struct net_device *dev, const u8 *bssid,
+			   const u8 *ie, int ie_len, u16 reason,
+			   bool local_state_change)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+	struct cfg80211_disassoc_request req = {
+		.reason_code = reason,
+		.local_state_change = local_state_change,
+		.ie = ie,
+		.ie_len = ie_len,
+	};
 	int err;
 
-	wdev_lock(wdev);
-	err = __cfg80211_mlme_deauth(rdev, dev, bssid, ie, ie_len, reason,
-				     local_state_change);
-	wdev_unlock(wdev);
-
-	return err;
-}
-
-static int __cfg80211_mlme_disassoc(struct cfg80211_registered_device *rdev,
-				    struct net_device *dev, const u8 *bssid,
-				    const u8 *ie, int ie_len, u16 reason,
-				    bool local_state_change)
-{
-	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct cfg80211_disassoc_request req;
-
 	ASSERT_WDEV_LOCK(wdev);
 
-	if (wdev->sme_state != CFG80211_SME_CONNECTED)
-		return -ENOTCONN;
-
-	if (WARN(!wdev->current_bss, "sme_state=%d\n", wdev->sme_state))
+	if (!wdev->current_bss)
 		return -ENOTCONN;
 
-	memset(&req, 0, sizeof(req));
-	req.reason_code = reason;
-	req.local_state_change = local_state_change;
-	req.ie = ie;
-	req.ie_len = ie_len;
 	if (ether_addr_equal(wdev->current_bss->pub.bssid, bssid))
 		req.bss = &wdev->current_bss->pub;
 	else
 		return -ENOTCONN;
 
-	return rdev_disassoc(rdev, dev, &req);
-}
-
-int cfg80211_mlme_disassoc(struct cfg80211_registered_device *rdev,
-			   struct net_device *dev, const u8 *bssid,
-			   const u8 *ie, int ie_len, u16 reason,
-			   bool local_state_change)
-{
-	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	int err;
-
-	wdev_lock(wdev);
-	err = __cfg80211_mlme_disassoc(rdev, dev, bssid, ie, ie_len, reason,
-				       local_state_change);
-	wdev_unlock(wdev);
+	err = rdev_disassoc(rdev, dev, &req);
+	if (err)
+		return err;
 
-	return err;
+	/* driver should have reported the disassoc */
+	WARN_ON(wdev->current_bss);
+	return 0;
 }
 
 void cfg80211_mlme_down(struct cfg80211_registered_device *rdev,
 			struct net_device *dev)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct cfg80211_deauth_request req;
 	u8 bssid[ETH_ALEN];
 
 	ASSERT_WDEV_LOCK(wdev);
@@ -531,23 +389,12 @@ void cfg80211_mlme_down(struct cfg80211_registered_device *rdev,
 	if (!rdev->ops->deauth)
 		return;
 
-	memset(&req, 0, sizeof(req));
-	req.reason_code = WLAN_REASON_DEAUTH_LEAVING;
-	req.ie = NULL;
-	req.ie_len = 0;
-
 	if (!wdev->current_bss)
 		return;
 
 	memcpy(bssid, wdev->current_bss->pub.bssid, ETH_ALEN);
-	req.bssid = bssid;
-	rdev_deauth(rdev, dev, &req);
-
-	if (wdev->current_bss) {
-		cfg80211_unhold_bss(wdev->current_bss);
-		cfg80211_put_bss(&rdev->wiphy, &wdev->current_bss->pub);
-		wdev->current_bss = NULL;
-	}
+	cfg80211_mlme_deauth(rdev, dev, bssid, NULL, 0,
+			     WLAN_REASON_DEAUTH_LEAVING, false);
 }
 
 struct cfg80211_mgmt_registration {
@@ -848,7 +695,7 @@ void cfg80211_dfs_channels_update_work(struct work_struct *work)
 					    dfs_update_channels_wk);
 	wiphy = &rdev->wiphy;
 
-	mutex_lock(&cfg80211_mutex);
+	rtnl_lock();
 	for (bandid = 0; bandid < IEEE80211_NUM_BANDS; bandid++) {
 		sband = wiphy->bands[bandid];
 		if (!sband)
@@ -881,7 +728,7 @@ void cfg80211_dfs_channels_update_work(struct work_struct *work)
 				check_again = true;
 			}
 		}
-	mutex_unlock(&cfg80211_mutex);
+	rtnl_unlock();
 
 	/* reschedule if there are other channels waiting to be cleared again */
 	if (check_again)
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index b14b7e3cb6e6..1cc47aca7f05 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -37,10 +37,10 @@ static void nl80211_post_doit(struct genl_ops *ops, struct sk_buff *skb,
 
 /* the netlink family */
 static struct genl_family nl80211_fam = {
 	.id = GENL_ID_GENERATE,		/* don't bother with a hardcoded ID */
-	.name = "nl80211",		/* have users key off the name instead */
+	.name = NL80211_GENL_NAME,	/* have users key off the name instead */
 	.hdrsize = 0,			/* no private header */
 	.version = 1,			/* no particular meaning now */
 	.maxattr = NL80211_ATTR_MAX,
 	.netnsok = true,
 	.pre_doit = nl80211_pre_doit,
@@ -59,7 +59,7 @@ __cfg80211_wdev_from_attrs(struct net *netns, struct nlattr **attrs)
 	int wiphy_idx = -1;
 	int ifidx = -1;
 
-	assert_cfg80211_lock();
+	ASSERT_RTNL();
 
 	if (!have_ifidx && !have_wdev_id)
 		return ERR_PTR(-EINVAL);
@@ -80,7 +80,6 @@ __cfg80211_wdev_from_attrs(struct net *netns, struct nlattr **attrs)
 		if (have_wdev_id && rdev->wiphy_idx != wiphy_idx)
 			continue;
 
-		mutex_lock(&rdev->devlist_mtx);
 		list_for_each_entry(wdev, &rdev->wdev_list, list) {
 			if (have_ifidx && wdev->netdev &&
 			    wdev->netdev->ifindex == ifidx) {
@@ -92,7 +91,6 @@ __cfg80211_wdev_from_attrs(struct net *netns, struct nlattr **attrs)
 				break;
 			}
 		}
-		mutex_unlock(&rdev->devlist_mtx);
 
 		if (result)
 			break;
@@ -109,7 +107,7 @@ __cfg80211_rdev_from_attrs(struct net *netns, struct nlattr **attrs)
 	struct cfg80211_registered_device *rdev = NULL, *tmp;
 	struct net_device *netdev;
 
-	assert_cfg80211_lock();
+	ASSERT_RTNL();
 
 	if (!attrs[NL80211_ATTR_WIPHY] &&
 	    !attrs[NL80211_ATTR_IFINDEX] &&
@@ -128,14 +126,12 @@ __cfg80211_rdev_from_attrs(struct net *netns, struct nlattr **attrs)
 		tmp = cfg80211_rdev_by_wiphy_idx(wdev_id >> 32);
 		if (tmp) {
 			/* make sure wdev exists */
-			mutex_lock(&tmp->devlist_mtx);
 			list_for_each_entry(wdev, &tmp->wdev_list, list) {
 				if (wdev->identifier != (u32)wdev_id)
 					continue;
 				found = true;
 				break;
 			}
-			mutex_unlock(&tmp->devlist_mtx);
 
 			if (!found)
 				tmp = NULL;
@@ -182,19 +178,6 @@ __cfg80211_rdev_from_attrs(struct net *netns, struct nlattr **attrs)
 /*
  * This function returns a pointer to the driver
  * that the genl_info item that is passed refers to.
- * If successful, it returns non-NULL and also locks
- * the driver's mutex!
- *
- * This means that you need to call cfg80211_unlock_rdev()
- * before being allowed to acquire &cfg80211_mutex!
- *
- * This is necessary because we need to lock the global
- * mutex to get an item off the list safely, and then
- * we lock the rdev mutex so it doesn't go away under us.
- *
- * We don't want to keep cfg80211_mutex locked
- * for all the time in order to allow requests on
- * other interfaces to go through at the same time.
  *
  * The result of this can be a PTR_ERR and hence must
  * be checked with IS_ERR() for errors.
@@ -202,20 +185,7 @@ __cfg80211_rdev_from_attrs(struct net *netns, struct nlattr **attrs)
 static struct cfg80211_registered_device *
 cfg80211_get_dev_from_info(struct net *netns, struct genl_info *info)
 {
-	struct cfg80211_registered_device *rdev;
-
-	mutex_lock(&cfg80211_mutex);
-	rdev = __cfg80211_rdev_from_attrs(netns, info->attrs);
-
-	/* if it is not an error we grab the lock on
-	 * it to assure it won't be going away while
-	 * we operate on it */
-	if (!IS_ERR(rdev))
-		mutex_lock(&rdev->mtx);
-
-	mutex_unlock(&cfg80211_mutex);
-
-	return rdev;
+	return __cfg80211_rdev_from_attrs(netns, info->attrs);
 }
 
 /* policy for the attributes */
@@ -378,6 +348,7 @@ static const struct nla_policy nl80211_policy[NL80211_ATTR_MAX+1] = {
 	[NL80211_ATTR_MDID] = { .type = NLA_U16 },
 	[NL80211_ATTR_IE_RIC] = { .type = NLA_BINARY,
 				  .len = IEEE80211_MAX_DATA_LEN },
+	[NL80211_ATTR_PEER_AID] = { .type = NLA_U16 },
 };
 
 /* policy for the key attributes */
@@ -455,7 +426,6 @@ static int nl80211_prepare_wdev_dump(struct sk_buff *skb,
 	int err;
 
 	rtnl_lock();
-	mutex_lock(&cfg80211_mutex);
 
 	if (!cb->args[0]) {
 		err = nlmsg_parse(cb->nlh, GENL_HDRLEN + nl80211_fam.hdrsize,
@@ -484,14 +454,12 @@ static int nl80211_prepare_wdev_dump(struct sk_buff *skb,
 		*rdev = wiphy_to_dev(wiphy);
 		*wdev = NULL;
 
-		mutex_lock(&(*rdev)->devlist_mtx);
 		list_for_each_entry(tmp, &(*rdev)->wdev_list, list) {
 			if (tmp->identifier == cb->args[1]) {
 				*wdev = tmp;
 				break;
 			}
 		}
-		mutex_unlock(&(*rdev)->devlist_mtx);
 
 		if (!*wdev) {
 			err = -ENODEV;
@@ -499,19 +467,14 @@ static int nl80211_prepare_wdev_dump(struct sk_buff *skb,
 		}
 	}
 
-	cfg80211_lock_rdev(*rdev);
-
-	mutex_unlock(&cfg80211_mutex);
 	return 0;
  out_unlock:
-	mutex_unlock(&cfg80211_mutex);
 	rtnl_unlock();
 	return err;
 }
 
 static void nl80211_finish_wdev_dump(struct cfg80211_registered_device *rdev)
 {
-	cfg80211_unlock_rdev(rdev);
 	rtnl_unlock();
 }
 
@@ -837,12 +800,9 @@ static int nl80211_key_allowed(struct wireless_dev *wdev)
 	case NL80211_IFTYPE_MESH_POINT:
 		break;
 	case NL80211_IFTYPE_ADHOC:
-		if (!wdev->current_bss)
-			return -ENOLINK;
-		break;
 	case NL80211_IFTYPE_STATION:
 	case NL80211_IFTYPE_P2P_CLIENT:
-		if (wdev->sme_state != CFG80211_SME_CONNECTED)
+		if (!wdev->current_bss)
 			return -ENOLINK;
 		break;
 	default:
@@ -945,7 +905,7 @@ nla_put_failure:
 static int nl80211_send_wowlan_tcp_caps(struct cfg80211_registered_device *rdev,
 					struct sk_buff *msg)
 {
-	const struct wiphy_wowlan_tcp_support *tcp = rdev->wiphy.wowlan.tcp;
+	const struct wiphy_wowlan_tcp_support *tcp = rdev->wiphy.wowlan->tcp;
 	struct nlattr *nl_tcp;
 
 	if (!tcp)
@@ -988,37 +948,37 @@ static int nl80211_send_wowlan(struct sk_buff *msg,
 {
 	struct nlattr *nl_wowlan;
 
-	if (!dev->wiphy.wowlan.flags && !dev->wiphy.wowlan.n_patterns)
+	if (!dev->wiphy.wowlan)
 		return 0;
 
 	nl_wowlan = nla_nest_start(msg, NL80211_ATTR_WOWLAN_TRIGGERS_SUPPORTED);
 	if (!nl_wowlan)
 		return -ENOBUFS;
 
-	if (((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_ANY) &&
+	if (((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_ANY) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_ANY)) ||
-	    ((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_DISCONNECT) &&
+	    ((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_DISCONNECT) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_DISCONNECT)) ||
-	    ((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_MAGIC_PKT) &&
+	    ((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_MAGIC_PKT) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_MAGIC_PKT)) ||
-	    ((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_SUPPORTS_GTK_REKEY) &&
+	    ((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_SUPPORTS_GTK_REKEY) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_GTK_REKEY_SUPPORTED)) ||
-	    ((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_GTK_REKEY_FAILURE) &&
+	    ((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_GTK_REKEY_FAILURE) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_GTK_REKEY_FAILURE)) ||
-	    ((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_EAP_IDENTITY_REQ) &&
+	    ((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_EAP_IDENTITY_REQ) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_EAP_IDENT_REQUEST)) ||
-	    ((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_4WAY_HANDSHAKE) &&
+	    ((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_4WAY_HANDSHAKE) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_4WAY_HANDSHAKE)) ||
-	    ((dev->wiphy.wowlan.flags & WIPHY_WOWLAN_RFKILL_RELEASE) &&
+	    ((dev->wiphy.wowlan->flags & WIPHY_WOWLAN_RFKILL_RELEASE) &&
 	     nla_put_flag(msg, NL80211_WOWLAN_TRIG_RFKILL_RELEASE)))
 		return -ENOBUFS;
 
-	if (dev->wiphy.wowlan.n_patterns) {
+	if (dev->wiphy.wowlan->n_patterns) {
 		struct nl80211_wowlan_pattern_support pat = {
-			.max_patterns = dev->wiphy.wowlan.n_patterns,
-			.min_pattern_len = dev->wiphy.wowlan.pattern_min_len,
-			.max_pattern_len = dev->wiphy.wowlan.pattern_max_len,
-			.max_pkt_offset = dev->wiphy.wowlan.max_pkt_offset,
+			.max_patterns = dev->wiphy.wowlan->n_patterns,
+			.min_pattern_len = dev->wiphy.wowlan->pattern_min_len,
+			.max_pattern_len = dev->wiphy.wowlan->pattern_max_len,
+			.max_pkt_offset = dev->wiphy.wowlan->max_pkt_offset,
 		};
 
 		if (nla_put(msg, NL80211_WOWLAN_TRIG_PKT_PATTERN,
@@ -1151,10 +1111,16 @@ nl80211_send_mgmt_stypes(struct sk_buff *msg,
 	return 0;
 }
 
+struct nl80211_dump_wiphy_state {
+	s64 filter_wiphy;
+	long start;
+	long split_start, band_start, chan_start;
+	bool split;
+};
+
 static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 			      struct sk_buff *msg, u32 portid, u32 seq,
-			      int flags, bool split, long *split_start,
-			      long *band_start, long *chan_start)
+			      int flags, struct nl80211_dump_wiphy_state *state)
 {
 	void *hdr;
 	struct nlattr *nl_bands, *nl_band;
@@ -1165,19 +1131,14 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 	int i;
 	const struct ieee80211_txrx_stypes *mgmt_stypes =
 		dev->wiphy.mgmt_stypes;
-	long start = 0, start_chan = 0, start_band = 0;
 	u32 features;
 
 	hdr = nl80211hdr_put(msg, portid, seq, flags, NL80211_CMD_NEW_WIPHY);
 	if (!hdr)
 		return -ENOBUFS;
 
-	/* allow always using the variables */
-	if (!split) {
-		split_start = &start;
-		band_start = &start_band;
-		chan_start = &start_chan;
-	}
+	if (WARN_ON(!state))
+		return -EINVAL;
 
 	if (nla_put_u32(msg, NL80211_ATTR_WIPHY, dev->wiphy_idx) ||
 	    nla_put_string(msg, NL80211_ATTR_WIPHY_NAME,
@@ -1186,7 +1147,7 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 			   cfg80211_rdev_list_generation))
 		goto nla_put_failure;
 
-	switch (*split_start) {
+	switch (state->split_start) {
 	case 0:
 		if (nla_put_u8(msg, NL80211_ATTR_WIPHY_RETRY_SHORT,
 			       dev->wiphy.retry_short) ||
@@ -1228,9 +1189,12 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 		if ((dev->wiphy.flags & WIPHY_FLAG_TDLS_EXTERNAL_SETUP) &&
 		    nla_put_flag(msg, NL80211_ATTR_TDLS_EXTERNAL_SETUP))
 			goto nla_put_failure;
+		if ((dev->wiphy.flags & WIPHY_FLAG_SUPPORTS_5_10_MHZ) &&
+		    nla_put_flag(msg, WIPHY_FLAG_SUPPORTS_5_10_MHZ))
+			goto nla_put_failure;
 
-		(*split_start)++;
-		if (split)
+		state->split_start++;
+		if (state->split)
 			break;
 	case 1:
 		if (nla_put(msg, NL80211_ATTR_CIPHER_SUITES,
@@ -1274,22 +1238,23 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 			}
 		}
 
-		(*split_start)++;
-		if (split)
+		state->split_start++;
+		if (state->split)
 			break;
 	case 2:
 		if (nl80211_put_iftypes(msg, NL80211_ATTR_SUPPORTED_IFTYPES,
 					dev->wiphy.interface_modes))
 			goto nla_put_failure;
-		(*split_start)++;
-		if (split)
+		state->split_start++;
+		if (state->split)
 			break;
 	case 3:
 		nl_bands = nla_nest_start(msg, NL80211_ATTR_WIPHY_BANDS);
 		if (!nl_bands)
 			goto nla_put_failure;
 
-		for (band = *band_start; band < IEEE80211_NUM_BANDS; band++) {
+		for (band = state->band_start;
+		     band < IEEE80211_NUM_BANDS; band++) {
 			struct ieee80211_supported_band *sband;
 
 			sband = dev->wiphy.bands[band];
@@ -1301,12 +1266,12 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 			if (!nl_band)
 				goto nla_put_failure;
 
-			switch (*chan_start) {
+			switch (state->chan_start) {
 			case 0:
 				if (nl80211_send_band_rateinfo(msg, sband))
 					goto nla_put_failure;
-				(*chan_start)++;
-				if (split)
+				state->chan_start++;
+				if (state->split)
 					break;
 			default:
 				/* add frequencies */
@@ -1315,7 +1280,7 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 				if (!nl_freqs)
 					goto nla_put_failure;
 
-				for (i = *chan_start - 1;
+				for (i = state->chan_start - 1;
 				     i < sband->n_channels;
 				     i++) {
 					nl_freq = nla_nest_start(msg, i);
@@ -1324,26 +1289,27 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 
 					chan = &sband->channels[i];
 
-					if (nl80211_msg_put_channel(msg, chan,
-								    split))
+					if (nl80211_msg_put_channel(
+							msg, chan,
+							state->split))
 						goto nla_put_failure;
 
 					nla_nest_end(msg, nl_freq);
-					if (split)
+					if (state->split)
 						break;
 				}
 				if (i < sband->n_channels)
-					*chan_start = i + 2;
+					state->chan_start = i + 2;
 				else
-					*chan_start = 0;
+					state->chan_start = 0;
 				nla_nest_end(msg, nl_freqs);
 			}
 
 			nla_nest_end(msg, nl_band);
 
-			if (split) {
+			if (state->split) {
 				/* start again here */
-				if (*chan_start)
+				if (state->chan_start)
 					band--;
 				break;
 			}
@@ -1351,14 +1317,14 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 		nla_nest_end(msg, nl_bands);
 
 		if (band < IEEE80211_NUM_BANDS)
-			*band_start = band + 1;
+			state->band_start = band + 1;
 		else
-			*band_start = 0;
+			state->band_start = 0;
 
 		/* if bands & channels are done, continue outside */
-		if (*band_start == 0 && *chan_start == 0)
-			(*split_start)++;
-		if (split)
+		if (state->band_start == 0 && state->chan_start == 0)
+			state->split_start++;
+		if (state->split)
 			break;
 	case 4:
 		nl_cmds = nla_nest_start(msg, NL80211_ATTR_SUPPORTED_COMMANDS);
@@ -1424,7 +1390,7 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 		}
 		CMD(start_p2p_device, START_P2P_DEVICE);
 		CMD(set_mcast_rate, SET_MCAST_RATE);
-		if (split) {
+		if (state->split) {
 			CMD(crit_proto_start, CRIT_PROTOCOL_START);
 			CMD(crit_proto_stop, CRIT_PROTOCOL_STOP);
 		}
@@ -1448,8 +1414,8 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 		}
 
 		nla_nest_end(msg, nl_cmds);
-		(*split_start)++;
-		if (split)
+		state->split_start++;
+		if (state->split)
 			break;
 	case 5:
 		if (dev->ops->remain_on_channel &&
@@ -1465,29 +1431,30 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 
 		if (nl80211_send_mgmt_stypes(msg, mgmt_stypes))
 			goto nla_put_failure;
-		(*split_start)++;
-		if (split)
+		state->split_start++;
+		if (state->split)
 			break;
 	case 6:
 #ifdef CONFIG_PM
-		if (nl80211_send_wowlan(msg, dev, split))
+		if (nl80211_send_wowlan(msg, dev, state->split))
 			goto nla_put_failure;
-		(*split_start)++;
-		if (split)
+		state->split_start++;
+		if (state->split)
 			break;
 #else
-		(*split_start)++;
+		state->split_start++;
 #endif
 	case 7:
 		if (nl80211_put_iftypes(msg, NL80211_ATTR_SOFTWARE_IFTYPES,
 					dev->wiphy.software_iftypes))
 			goto nla_put_failure;
 
-		if (nl80211_put_iface_combinations(&dev->wiphy, msg, split))
+		if (nl80211_put_iface_combinations(&dev->wiphy, msg,
+						   state->split))
 			goto nla_put_failure;
 
-		(*split_start)++;
-		if (split)
+		state->split_start++;
+		if (state->split)
 			break;
 	case 8:
 		if ((dev->wiphy.flags & WIPHY_FLAG_HAVE_AP_SME) &&
@@ -1501,7 +1468,7 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 		 * dump is split, otherwise it makes it too big. Therefore
 		 * only advertise it in that case.
 		 */
-		if (split)
+		if (state->split)
 			features |= NL80211_FEATURE_ADVERTISE_CHAN_LIMITS;
 		if (nla_put_u32(msg, NL80211_ATTR_FEATURE_FLAGS, features))
 			goto nla_put_failure;
@@ -1528,7 +1495,7 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 		 * case we'll continue with more data in the next round,
 		 * but break unconditionally so unsplit data stops here.
 		 */
-		(*split_start)++;
+		state->split_start++;
 		break;
 	case 9:
 		if (dev->wiphy.extended_capabilities &&
@@ -1547,7 +1514,7 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 			goto nla_put_failure;
 
 		/* done */
-		*split_start = 0;
+		state->split_start = 0;
 		break;
 	}
 	return genlmsg_end(msg, hdr);
@@ -1557,66 +1524,78 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *dev,
 	return -EMSGSIZE;
 }
 
+static int nl80211_dump_wiphy_parse(struct sk_buff *skb,
+				    struct netlink_callback *cb,
+				    struct nl80211_dump_wiphy_state *state)
+{
+	struct nlattr **tb = nl80211_fam.attrbuf;
+	int ret = nlmsg_parse(cb->nlh, GENL_HDRLEN + nl80211_fam.hdrsize,
+			      tb, nl80211_fam.maxattr, nl80211_policy);
+	/* ignore parse errors for backward compatibility */
+	if (ret)
+		return 0;
+
+	state->split = tb[NL80211_ATTR_SPLIT_WIPHY_DUMP];
+	if (tb[NL80211_ATTR_WIPHY])
+		state->filter_wiphy = nla_get_u32(tb[NL80211_ATTR_WIPHY]);
+	if (tb[NL80211_ATTR_WDEV])
+		state->filter_wiphy = nla_get_u64(tb[NL80211_ATTR_WDEV]) >> 32;
+	if (tb[NL80211_ATTR_IFINDEX]) {
+		struct net_device *netdev;
+		struct cfg80211_registered_device *rdev;
+		int ifidx = nla_get_u32(tb[NL80211_ATTR_IFINDEX]);
+
+		netdev = dev_get_by_index(sock_net(skb->sk), ifidx);
+		if (!netdev)
+			return -ENODEV;
+		if (netdev->ieee80211_ptr) {
+			rdev = wiphy_to_dev(
+				netdev->ieee80211_ptr->wiphy);
+			state->filter_wiphy = rdev->wiphy_idx;
+		}
+		dev_put(netdev);
+	}
+
+	return 0;
+}
+
 static int nl80211_dump_wiphy(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	int idx = 0, ret;
-	int start = cb->args[0];
+	struct nl80211_dump_wiphy_state *state = (void *)cb->args[0];
 	struct cfg80211_registered_device *dev;
-	s64 filter_wiphy = -1;
-	bool split = false;
-	struct nlattr **tb;
-	int res;
 
-	/* will be zeroed in nlmsg_parse() */
-	tb = kmalloc(sizeof(*tb) * (NL80211_ATTR_MAX + 1), GFP_KERNEL);
-	if (!tb)
-		return -ENOMEM;
-
-	mutex_lock(&cfg80211_mutex);
-	res = nlmsg_parse(cb->nlh, GENL_HDRLEN + nl80211_fam.hdrsize,
-			  tb, NL80211_ATTR_MAX, nl80211_policy);
-	if (res == 0) {
-		split = tb[NL80211_ATTR_SPLIT_WIPHY_DUMP];
-		if (tb[NL80211_ATTR_WIPHY])
-			filter_wiphy = nla_get_u32(tb[NL80211_ATTR_WIPHY]);
-		if (tb[NL80211_ATTR_WDEV])
-			filter_wiphy = nla_get_u64(tb[NL80211_ATTR_WDEV]) >> 32;
-		if (tb[NL80211_ATTR_IFINDEX]) {
-			struct net_device *netdev;
-			int ifidx = nla_get_u32(tb[NL80211_ATTR_IFINDEX]);
-
-			netdev = dev_get_by_index(sock_net(skb->sk), ifidx);
-			if (!netdev) {
-				mutex_unlock(&cfg80211_mutex);
-				kfree(tb);
-				return -ENODEV;
-			}
-			if (netdev->ieee80211_ptr) {
-				dev = wiphy_to_dev(
-					netdev->ieee80211_ptr->wiphy);
-				filter_wiphy = dev->wiphy_idx;
-			}
-			dev_put(netdev);
-		}
+	rtnl_lock();
+	if (!state) {
+		state = kzalloc(sizeof(*state), GFP_KERNEL);
+		if (!state) {
+			rtnl_unlock();
+			return -ENOMEM;
+		}
+		state->filter_wiphy = -1;
+		ret = nl80211_dump_wiphy_parse(skb, cb, state);
+		if (ret) {
+			kfree(state);
+			rtnl_unlock();
+			return ret;
+		}
+		cb->args[0] = (long)state;
 	}
-	kfree(tb);
 
 	list_for_each_entry(dev, &cfg80211_rdev_list, list) {
 		if (!net_eq(wiphy_net(&dev->wiphy), sock_net(skb->sk)))
 			continue;
-		if (++idx <= start)
+		if (++idx <= state->start)
 			continue;
-		if (filter_wiphy != -1 && dev->wiphy_idx != filter_wiphy)
+		if (state->filter_wiphy != -1 &&
+		    state->filter_wiphy != dev->wiphy_idx)
 			continue;
 		/* attempt to fit multiple wiphy data chunks into the skb */
 		do {
 			ret = nl80211_send_wiphy(dev, skb,
 						 NETLINK_CB(cb->skb).portid,
 						 cb->nlh->nlmsg_seq,
-						 NLM_F_MULTI,
-						 split, &cb->args[1],
-						 &cb->args[2],
-						 &cb->args[3]);
+						 NLM_F_MULTI, state);
 			if (ret < 0) {
 				/*
@@ -1635,33 +1614,40 @@ static int nl80211_dump_wiphy(struct sk_buff *skb, struct netlink_callback *cb)
 			    !skb->len &&
 			    cb->min_dump_alloc < 4096) {
 				cb->min_dump_alloc = 4096;
-				mutex_unlock(&cfg80211_mutex);
+				rtnl_unlock();
 				return 1;
 			}
 			idx--;
 			break;
 		}
-		} while (cb->args[1] > 0);
+		} while (state->split_start > 0);
 		break;
 	}
-	mutex_unlock(&cfg80211_mutex);
+	rtnl_unlock();
 
-	cb->args[0] = idx;
+	state->start = idx;
 
 	return skb->len;
 }
 
+static int nl80211_dump_wiphy_done(struct netlink_callback *cb)
+{
+	kfree((void *)cb->args[0]);
+	return 0;
+}
+
 static int nl80211_get_wiphy(struct sk_buff *skb, struct genl_info *info)
 {
 	struct sk_buff *msg;
 	struct cfg80211_registered_device *dev = info->user_ptr[0];
+	struct nl80211_dump_wiphy_state state = {};
 
 	msg = nlmsg_new(4096, GFP_KERNEL);
 	if (!msg)
 		return -ENOMEM;
 
 	if (nl80211_send_wiphy(dev, msg, info->snd_portid, info->snd_seq, 0,
-			       false, NULL, NULL, NULL) < 0) {
+			       &state) < 0) {
 		nlmsg_free(msg);
 		return -ENOBUFS;
 	}
@@ -1778,6 +1764,11 @@ static int nl80211_parse_chandef(struct cfg80211_registered_device *rdev,
 			     IEEE80211_CHAN_DISABLED))
 		return -EINVAL;
 
+	if ((chandef->width == NL80211_CHAN_WIDTH_5 ||
+	     chandef->width == NL80211_CHAN_WIDTH_10) &&
+	    !(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_5_10_MHZ))
+		return -EINVAL;
+
 	return 0;
 }
 
@@ -1799,7 +1790,6 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev,
 	if (result)
 		return result;
 
-	mutex_lock(&rdev->devlist_mtx);
 	switch (iftype) {
 	case NL80211_IFTYPE_AP:
 	case NL80211_IFTYPE_P2P_GO:
@@ -1823,7 +1813,6 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev,
 	default:
 		result = -EINVAL;
 	}
-	mutex_unlock(&rdev->devlist_mtx);
 
 	return result;
 }
@@ -1872,6 +1861,8 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
 	u32 frag_threshold = 0, rts_threshold = 0;
 	u8 coverage_class = 0;
 
+	ASSERT_RTNL();
+
 	/*
 	 * Try to find the wiphy and netdev. Normally this
 	 * function shouldn't need the netdev, but this is
@@ -1881,31 +1872,25 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
 	 * also passed a netdev to set_wiphy, so that it is
 	 * possible to let that go to the right netdev!
 	 */
-	mutex_lock(&cfg80211_mutex);
 
 	if (info->attrs[NL80211_ATTR_IFINDEX]) {
 		int ifindex = nla_get_u32(info->attrs[NL80211_ATTR_IFINDEX]);
 
 		netdev = dev_get_by_index(genl_info_net(info), ifindex);
-		if (netdev && netdev->ieee80211_ptr) {
+		if (netdev && netdev->ieee80211_ptr)
 			rdev = wiphy_to_dev(netdev->ieee80211_ptr->wiphy);
-			mutex_lock(&rdev->mtx);
-		} else
+		else
 			netdev = NULL;
 	}
 
 	if (!netdev) {
 		rdev = __cfg80211_rdev_from_attrs(genl_info_net(info),
 						  info->attrs);
-		if (IS_ERR(rdev)) {
-			mutex_unlock(&cfg80211_mutex);
+		if (IS_ERR(rdev))
 			return PTR_ERR(rdev);
-		}
 		wdev = NULL;
 		netdev = NULL;
 		result = 0;
-
-		mutex_lock(&rdev->mtx);
 	} else
 		wdev = netdev->ieee80211_ptr;
 
@@ -1918,8 +1903,6 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
 		result = cfg80211_dev_rename(
 			rdev, nla_data(info->attrs[NL80211_ATTR_WIPHY_NAME]));
 
-	mutex_unlock(&cfg80211_mutex);
-
 	if (result)
 		goto bad_res;
 
@@ -2126,7 +2109,6 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
 	}
 
 bad_res:
-	mutex_unlock(&rdev->mtx);
 	if (netdev)
 		dev_put(netdev);
 	return result;
@@ -2224,7 +2206,7 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
 	struct cfg80211_registered_device *rdev;
 	struct wireless_dev *wdev;
 
-	mutex_lock(&cfg80211_mutex);
+	rtnl_lock();
 	list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
 		if (!net_eq(wiphy_net(&rdev->wiphy), sock_net(skb->sk)))
 			continue;
@@ -2234,7 +2216,6 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
 		}
 		if_idx = 0;
 
-		mutex_lock(&rdev->devlist_mtx);
 		list_for_each_entry(wdev, &rdev->wdev_list, list) {
 			if (if_idx < if_start) {
 				if_idx++;
@@ -2243,17 +2224,15 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
 			if (nl80211_send_iface(skb, NETLINK_CB(cb->skb).portid,
 					       cb->nlh->nlmsg_seq, NLM_F_MULTI,
 					       rdev, wdev) < 0) {
-				mutex_unlock(&rdev->devlist_mtx);
 				goto out;
 			}
 			if_idx++;
 		}
-		mutex_unlock(&rdev->devlist_mtx);
 
 		wp_idx++;
 	}
 out:
-	mutex_unlock(&cfg80211_mutex);
+	rtnl_unlock();
 
 	cb->args[0] = wp_idx;
 	cb->args[1] = if_idx;
@@ -2286,6 +2265,7 @@ static const struct nla_policy mntr_flags_policy[NL80211_MNTR_FLAG_MAX + 1] = {
 	[NL80211_MNTR_FLAG_CONTROL] = { .type = NLA_FLAG },
 	[NL80211_MNTR_FLAG_OTHER_BSS] = { .type = NLA_FLAG },
 	[NL80211_MNTR_FLAG_COOK_FRAMES] = { .type = NLA_FLAG },
+	[NL80211_MNTR_FLAG_ACTIVE] = { .type = NLA_FLAG },
 };
 
 static int parse_monitor_flags(struct nlattr *nla, u32 *mntrflags)
@@ -2397,6 +2377,10 @@ static int nl80211_set_interface(struct sk_buff *skb, struct genl_info *info)
 		change = true;
 	}
 
+	if (flags && (*flags & NL80211_MNTR_FLAG_ACTIVE) &&
+	    !(rdev->wiphy.features & NL80211_FEATURE_ACTIVE_MONITOR))
+		return -EOPNOTSUPP;
+
 	if (change)
 		err = cfg80211_change_iface(rdev, dev, ntype, flags, &params);
 	else
@@ -2454,6 +2438,11 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
 	err = parse_monitor_flags(type == NL80211_IFTYPE_MONITOR ?
 				  info->attrs[NL80211_ATTR_MNTR_FLAGS] : NULL,
 				  &flags);
+
+	if (!err && (flags & NL80211_MNTR_FLAG_ACTIVE) &&
+	    !(rdev->wiphy.features & NL80211_FEATURE_ACTIVE_MONITOR))
+		return -EOPNOTSUPP;
+
 	wdev = rdev_add_virtual_intf(rdev,
 				     nla_data(info->attrs[NL80211_ATTR_IFNAME]),
 				     type, err ? NULL : &flags, &params);
@@ -2486,11 +2475,9 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
 		INIT_LIST_HEAD(&wdev->mgmt_registrations);
 		spin_lock_init(&wdev->mgmt_registrations_lock);
 
-		mutex_lock(&rdev->devlist_mtx);
 		wdev->identifier = ++rdev->wdev_id;
 		list_add_rcu(&wdev->list, &rdev->wdev_list);
 		rdev->devlist_generation++;
-		mutex_unlock(&rdev->devlist_mtx);
 		break;
 	default:
 		break;
@@ -2933,61 +2920,58 @@ static int nl80211_set_mac_acl(struct sk_buff *skb, struct genl_info *info)
 	return err;
 }
 
-static int nl80211_parse_beacon(struct genl_info *info,
+static int nl80211_parse_beacon(struct nlattr *attrs[],
 				struct cfg80211_beacon_data *bcn)
 {
 	bool haveinfo = false;
 
-	if (!is_valid_ie_attr(info->attrs[NL80211_ATTR_BEACON_TAIL]) ||
-	    !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE]) ||
-	    !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE_PROBE_RESP]) ||
-	    !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE_ASSOC_RESP]))
+	if (!is_valid_ie_attr(attrs[NL80211_ATTR_BEACON_TAIL]) ||
+	    !is_valid_ie_attr(attrs[NL80211_ATTR_IE]) ||
+	    !is_valid_ie_attr(attrs[NL80211_ATTR_IE_PROBE_RESP]) ||
+	    !is_valid_ie_attr(attrs[NL80211_ATTR_IE_ASSOC_RESP]))
 		return -EINVAL;
 
 	memset(bcn, 0, sizeof(*bcn));
 
-	if (info->attrs[NL80211_ATTR_BEACON_HEAD]) {
-		bcn->head = nla_data(info->attrs[NL80211_ATTR_BEACON_HEAD]);
-		bcn->head_len = nla_len(info->attrs[NL80211_ATTR_BEACON_HEAD]);
+	if (attrs[NL80211_ATTR_BEACON_HEAD]) {
+		bcn->head = nla_data(attrs[NL80211_ATTR_BEACON_HEAD]);
+		bcn->head_len = nla_len(attrs[NL80211_ATTR_BEACON_HEAD]);
 		if (!bcn->head_len)
 			return -EINVAL;
 		haveinfo = true;
 	}
 
-	if (info->attrs[NL80211_ATTR_BEACON_TAIL]) {
-		bcn->tail = nla_data(info->attrs[NL80211_ATTR_BEACON_TAIL]);
-		bcn->tail_len =
-			nla_len(info->attrs[NL80211_ATTR_BEACON_TAIL]);
+	if (attrs[NL80211_ATTR_BEACON_TAIL]) {
+		bcn->tail = nla_data(attrs[NL80211_ATTR_BEACON_TAIL]);
+		bcn->tail_len = nla_len(attrs[NL80211_ATTR_BEACON_TAIL]);
 		haveinfo = true;
 	}
 
 	if (!haveinfo)
 		return -EINVAL;
 
-	if (info->attrs[NL80211_ATTR_IE]) {
-		bcn->beacon_ies = nla_data(info->attrs[NL80211_ATTR_IE]);
-		bcn->beacon_ies_len = nla_len(info->attrs[NL80211_ATTR_IE]);
+	if (attrs[NL80211_ATTR_IE]) {
+		bcn->beacon_ies = nla_data(attrs[NL80211_ATTR_IE]);
+		bcn->beacon_ies_len = nla_len(attrs[NL80211_ATTR_IE]);
 	}
 
-	if (info->attrs[NL80211_ATTR_IE_PROBE_RESP]) {
+	if (attrs[NL80211_ATTR_IE_PROBE_RESP]) {
 		bcn->proberesp_ies =
-			nla_data(info->attrs[NL80211_ATTR_IE_PROBE_RESP]);
+			nla_data(attrs[NL80211_ATTR_IE_PROBE_RESP]);
 		bcn->proberesp_ies_len =
-			nla_len(info->attrs[NL80211_ATTR_IE_PROBE_RESP]);
+			nla_len(attrs[NL80211_ATTR_IE_PROBE_RESP]);
 	}
 
-	if (info->attrs[NL80211_ATTR_IE_ASSOC_RESP]) {
+	if (attrs[NL80211_ATTR_IE_ASSOC_RESP]) {
 		bcn->assocresp_ies =
-			nla_data(info->attrs[NL80211_ATTR_IE_ASSOC_RESP]);
+			nla_data(attrs[NL80211_ATTR_IE_ASSOC_RESP]);
 		bcn->assocresp_ies_len =
-			nla_len(info->attrs[NL80211_ATTR_IE_ASSOC_RESP]);
+			nla_len(attrs[NL80211_ATTR_IE_ASSOC_RESP]);
 	}
 
-	if (info->attrs[NL80211_ATTR_PROBE_RESP]) {
-		bcn->probe_resp =
-			nla_data(info->attrs[NL80211_ATTR_PROBE_RESP]);
-		bcn->probe_resp_len =
-			nla_len(info->attrs[NL80211_ATTR_PROBE_RESP]);
+	if (attrs[NL80211_ATTR_PROBE_RESP]) {
+		bcn->probe_resp = nla_data(attrs[NL80211_ATTR_PROBE_RESP]);
+		bcn->probe_resp_len = nla_len(attrs[NL80211_ATTR_PROBE_RESP]);
 	}
 
 	return 0;
@@ -2999,8 +2983,6 @@ static bool nl80211_get_ap_channel(struct cfg80211_registered_device *rdev,
 	struct wireless_dev *wdev;
 	bool ret = false;
 
-	mutex_lock(&rdev->devlist_mtx);
-
 	list_for_each_entry(wdev, &rdev->wdev_list, list) {
 		if (wdev->iftype != NL80211_IFTYPE_AP &&
 		    wdev->iftype != NL80211_IFTYPE_P2P_GO)
@@ -3014,8 +2996,6 @@ static bool nl80211_get_ap_channel(struct cfg80211_registered_device *rdev,
 		break;
 	}
 
-	mutex_unlock(&rdev->devlist_mtx);
-
 	return ret;
 }
 
@@ -3070,7 +3050,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
 	    !info->attrs[NL80211_ATTR_BEACON_HEAD])
 		return -EINVAL;
 
-	err = nl80211_parse_beacon(info, &params.beacon);
+	err = nl80211_parse_beacon(info->attrs, &params.beacon);
 	if (err)
 		return err;
 
@@ -3177,13 +3157,10 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
 		params.radar_required = true;
 	}
 
-	mutex_lock(&rdev->devlist_mtx);
 	err = cfg80211_can_use_iftype_chan(rdev, wdev, wdev->iftype,
 					   params.chandef.chan,
 					   CHAN_MODE_SHARED,
 					   radar_detect_width);
-	mutex_unlock(&rdev->devlist_mtx);
-
 	if (err)
 		return err;
 
@@ -3225,7 +3202,7 @@ static int nl80211_set_beacon(struct sk_buff *skb, struct genl_info *info)
 	if (!wdev->beacon_interval)
 		return -EINVAL;
 
-	err = nl80211_parse_beacon(info, &params);
+	err = nl80211_parse_beacon(info->attrs, &params);
 	if (err)
 		return err;
 
@@ -3383,6 +3360,32 @@ static bool nl80211_put_sta_rate(struct sk_buff *msg, struct rate_info *info,
 	return true;
 }
 
+static bool nl80211_put_signal(struct sk_buff *msg, u8 mask, s8 *signal,
+			       int id)
+{
+	void *attr;
+	int i = 0;
+
+	if (!mask)
+		return true;
+
+	attr = nla_nest_start(msg, id);
+	if (!attr)
+		return false;
+
+	for (i = 0; i < IEEE80211_MAX_CHAINS; i++) {
+		if (!(mask & BIT(i)))
+			continue;
+
+		if (nla_put_u8(msg, i, signal[i]))
+			return false;
+	}
+
+	nla_nest_end(msg, attr);
+
+	return true;
+}
+
 static int nl80211_send_station(struct sk_buff *msg, u32 portid, u32 seq,
 				int flags,
 				struct cfg80211_registered_device *rdev,
@@ -3454,6 +3457,18 @@ static int nl80211_send_station(struct sk_buff *msg, u32 portid, u32 seq,
 	default:
 		break;
 	}
+	if (sinfo->filled & STATION_INFO_CHAIN_SIGNAL) {
+		if (!nl80211_put_signal(msg, sinfo->chains,
+					sinfo->chain_signal,
+					NL80211_STA_INFO_CHAIN_SIGNAL))
+			goto nla_put_failure;
+	}
+	if (sinfo->filled & STATION_INFO_CHAIN_SIGNAL_AVG) {
+		if (!nl80211_put_signal(msg, sinfo->chains,
+					sinfo->chain_signal_avg,
+					NL80211_STA_INFO_CHAIN_SIGNAL_AVG))
+			goto nla_put_failure;
+	}
 	if (sinfo->filled & STATION_INFO_TX_BITRATE) {
 		if (!nl80211_put_sta_rate(msg, &sinfo->txrate,
 					  NL80211_STA_INFO_TX_BITRATE))
@@ -3841,6 +3856,8 @@ static int nl80211_set_station_tdls(struct genl_info *info,
 				    struct station_parameters *params)
 {
 	/* Dummy STA entry gets updated once the peer capabilities are known */
+	if (info->attrs[NL80211_ATTR_PEER_AID])
+		params->aid = nla_get_u16(info->attrs[NL80211_ATTR_PEER_AID]);
 	if (info->attrs[NL80211_ATTR_HT_CAPABILITY])
 		params->ht_capa =
 			nla_data(info->attrs[NL80211_ATTR_HT_CAPABILITY]);
@@ -3981,7 +3998,8 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info)
 	if (!info->attrs[NL80211_ATTR_STA_SUPPORTED_RATES])
 		return -EINVAL;
 
-	if (!info->attrs[NL80211_ATTR_STA_AID])
+	if (!info->attrs[NL80211_ATTR_STA_AID] &&
+	    !info->attrs[NL80211_ATTR_PEER_AID])
 		return -EINVAL;
 
 	mac_addr = nla_data(info->attrs[NL80211_ATTR_MAC]);
@@ -3992,7 +4010,10 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info)
 	params.listen_interval =
 		nla_get_u16(info->attrs[NL80211_ATTR_STA_LISTEN_INTERVAL]);
 
-	params.aid = nla_get_u16(info->attrs[NL80211_ATTR_STA_AID]);
+	if (info->attrs[NL80211_ATTR_PEER_AID])
+		params.aid = nla_get_u16(info->attrs[NL80211_ATTR_PEER_AID]);
+	else
+		params.aid = nla_get_u16(info->attrs[NL80211_ATTR_STA_AID]);
 	if (!params.aid || params.aid > IEEE80211_MAX_AID)
 		return -EINVAL;
 
@@ -4044,7 +4065,8 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info)
 		params.sta_modify_mask &= ~STATION_PARAM_APPLY_UAPSD;
 
 		/* TDLS peers cannot be added */
-		if (params.sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER))
+		if ((params.sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER)) ||
+		    info->attrs[NL80211_ATTR_PEER_AID])
 			return -EINVAL;
 		/* but don't bother the driver with it */
 		params.sta_flags_mask &= ~BIT(NL80211_STA_FLAG_TDLS_PEER);
@@ -4070,7 +4092,8 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info)
 		if (params.sta_flags_mask & BIT(NL80211_STA_FLAG_ASSOCIATED))
 			return -EINVAL;
 		/* TDLS peers cannot be added */
-		if (params.sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER))
+		if ((params.sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER)) ||
+		    info->attrs[NL80211_ATTR_PEER_AID])
 			return -EINVAL;
 		break;
 	case NL80211_IFTYPE_STATION:
@@ -4592,7 +4615,9 @@ static int nl80211_get_mesh_config(struct sk_buff *skb,
 	    nla_put_u32(msg, NL80211_MESHCONF_POWER_MODE,
 			cur_params.power_mode) ||
 	    nla_put_u16(msg, NL80211_MESHCONF_AWAKE_WINDOW,
-			cur_params.dot11MeshAwakeWindowDuration))
+			cur_params.dot11MeshAwakeWindowDuration) ||
+	    nla_put_u32(msg, NL80211_MESHCONF_PLINK_TIMEOUT,
+			cur_params.plink_timeout))
 		goto nla_put_failure;
 	nla_nest_end(msg, pinfoattr);
 	genlmsg_end(msg, hdr);
@@ -4633,6 +4658,7 @@ static const struct nla_policy nl80211_meshconf_params_policy[NL80211_MESHCONF_A
 	[NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL] = { .type = NLA_U16 },
 	[NL80211_MESHCONF_POWER_MODE] = { .type = NLA_U32 },
 	[NL80211_MESHCONF_AWAKE_WINDOW] = { .type = NLA_U16 },
+	[NL80211_MESHCONF_PLINK_TIMEOUT] = { .type = NLA_U32 },
 };
 
 static const struct nla_policy
@@ -4641,6 +4667,7 @@ static const struct nla_policy
 	[NL80211_MESH_SETUP_ENABLE_VENDOR_PATH_SEL] = { .type = NLA_U8 },
 	[NL80211_MESH_SETUP_ENABLE_VENDOR_METRIC] = { .type = NLA_U8 },
 	[NL80211_MESH_SETUP_USERSPACE_AUTH] = { .type = NLA_FLAG },
+	[NL80211_MESH_SETUP_AUTH_PROTOCOL] = { .type = NLA_U8 },
 	[NL80211_MESH_SETUP_USERSPACE_MPM] = { .type = NLA_FLAG },
 	[NL80211_MESH_SETUP_IE] = { .type = NLA_BINARY,
 				    .len = IEEE80211_MAX_DATA_LEN },
@@ -4769,6 +4796,9 @@ do { \
 	FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshAwakeWindowDuration,
 				  0, 65535, mask,
 				  NL80211_MESHCONF_AWAKE_WINDOW, nla_get_u16);
+	FILL_IN_MESH_PARAM_IF_SET(tb, cfg, plink_timeout, 1, 0xffffffff,
+				  mask, NL80211_MESHCONF_PLINK_TIMEOUT,
+				  nla_get_u32);
 	if (mask_out)
 		*mask_out = mask;
 
@@ -4826,6 +4856,13 @@ static int nl80211_parse_mesh_setup(struct genl_info *info,
 	if (setup->is_secure)
 		setup->user_mpm = true;
 
+	if (tb[NL80211_MESH_SETUP_AUTH_PROTOCOL]) {
+		if (!setup->user_mpm)
+			return -EINVAL;
+		setup->auth_id =
+			nla_get_u8(tb[NL80211_MESH_SETUP_AUTH_PROTOCOL]);
+	}
+
 	return 0;
 }
 
@@ -4868,18 +4905,13 @@ static int nl80211_get_reg(struct sk_buff *skb, struct genl_info *info)
 	void *hdr = NULL;
 	struct nlattr *nl_reg_rules;
 	unsigned int i;
-	int err = -EINVAL;
-
-	mutex_lock(&cfg80211_mutex);
 
 	if (!cfg80211_regdomain)
-		goto out;
+		return -EINVAL;
 
 	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
-	if (!msg) {
-		err = -ENOBUFS;
-		goto out;
-	}
+	if (!msg)
+		return -ENOBUFS;
 
 	hdr = nl80211hdr_put(msg, info->snd_portid, info->snd_seq, 0,
 			     NL80211_CMD_GET_REG);
@@ -4938,8 +4970,7 @@ static int nl80211_get_reg(struct sk_buff *skb, struct genl_info *info)
 	nla_nest_end(msg, nl_reg_rules);
 
 	genlmsg_end(msg, hdr);
-	err = genlmsg_reply(msg, info);
-	goto out;
+	return genlmsg_reply(msg, info);
 
 nla_put_failure_rcu:
 	rcu_read_unlock();
@@ -4947,10 +4978,7 @@ nla_put_failure:
 	genlmsg_cancel(msg, hdr);
 put_failure:
 	nlmsg_free(msg);
-	err = -EMSGSIZE;
-out:
-	mutex_unlock(&cfg80211_mutex);
-	return err;
+	return -EMSGSIZE;
 }
 
 static int nl80211_set_reg(struct sk_buff *skb, struct genl_info *info)
@@ -5016,12 +5044,9 @@ static int nl80211_set_reg(struct sk_buff *skb, struct genl_info *info)
 		}
 	}
 
-	mutex_lock(&cfg80211_mutex);
-
 	r = set_regdom(rd);
 	/* set_regdom took ownership */
 	rd = NULL;
-	mutex_unlock(&cfg80211_mutex);
 
 bad_reg:
 	kfree(rd);
@@ -5071,7 +5096,6 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info)
 	if (!rdev->ops->scan)
 		return -EOPNOTSUPP;
 
-	mutex_lock(&rdev->sched_scan_mtx);
 	if (rdev->scan_req) {
 		err = -EBUSY;
 		goto unlock;
@@ -5257,7 +5281,6 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info)
 	}
 
 unlock:
-	mutex_unlock(&rdev->sched_scan_mtx);
 	return err;
 }
 
@@ -5329,8 +5352,6 @@ static int nl80211_start_sched_scan(struct sk_buff *skb,
 	if (ie_len > wiphy->max_sched_scan_ie_len)
 		return -EINVAL;
 
-	mutex_lock(&rdev->sched_scan_mtx);
-
 	if (rdev->sched_scan_req) {
 		err = -EINPROGRESS;
 		goto out;
@@ -5498,7 +5519,6 @@ static int nl80211_start_sched_scan(struct sk_buff *skb,
 out_free:
 	kfree(request);
 out:
-	mutex_unlock(&rdev->sched_scan_mtx);
 	return err;
 }
 
@@ -5506,17 +5526,12 @@ static int nl80211_stop_sched_scan(struct sk_buff *skb,
 				      struct genl_info *info)
 {
 	struct cfg80211_registered_device *rdev = info->user_ptr[0];
-	int err;
 
 	if (!(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_SCHED_SCAN) ||
 	    !rdev->ops->sched_scan_stop)
 		return -EOPNOTSUPP;
 
-	mutex_lock(&rdev->sched_scan_mtx);
-	err = __cfg80211_stop_sched_scan(rdev, false);
-	mutex_unlock(&rdev->sched_scan_mtx);
-
-	return err;
+	return __cfg80211_stop_sched_scan(rdev, false);
 }
 
 static int nl80211_start_radar_detection(struct sk_buff *skb,
@@ -5548,12 +5563,11 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
 	if (!rdev->ops->start_radar_detection)
 		return -EOPNOTSUPP;
 
-	mutex_lock(&rdev->devlist_mtx);
 	err = cfg80211_can_use_iftype_chan(rdev, wdev, wdev->iftype,
 					   chandef.chan, CHAN_MODE_SHARED,
 					   BIT(chandef.width));
 	if (err)
-		goto err_locked;
+		return err;
 
 	err = rdev->ops->start_radar_detection(&rdev->wiphy, dev, &chandef);
 	if (!err) {
@@ -5561,9 +5575,6 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
 		wdev->cac_started = true;
 		wdev->cac_start_time = jiffies;
 	}
-err_locked:
-	mutex_unlock(&rdev->devlist_mtx);
-
 	return err;
 }
 
@@ -5946,10 +5957,13 @@ static int nl80211_authenticate(struct sk_buff *skb, struct genl_info *info)
 	if (local_state_change)
 		return 0;
 
-	return cfg80211_mlme_auth(rdev, dev, chan, auth_type, bssid,
-				  ssid, ssid_len, ie, ie_len,
-				  key.p.key, key.p.key_len, key.idx,
-				  sae_data, sae_data_len);
+	wdev_lock(dev->ieee80211_ptr);
+	err = cfg80211_mlme_auth(rdev, dev, chan, auth_type, bssid,
+				 ssid, ssid_len, ie, ie_len,
+				 key.p.key, key.p.key_len, key.idx,
+				 sae_data, sae_data_len);
+	wdev_unlock(dev->ieee80211_ptr);
+	return err;
 }
 
 static int nl80211_crypto_settings(struct cfg80211_registered_device *rdev,
@@ -6116,9 +6130,12 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
 	}
 
 	err = nl80211_crypto_settings(rdev, info, &req.crypto, 1);
-	if (!err)
+	if (!err) {
+		wdev_lock(dev->ieee80211_ptr);
 		err = cfg80211_mlme_assoc(rdev, dev, chan, bssid,
 					  ssid, ssid_len, &req);
+		wdev_unlock(dev->ieee80211_ptr);
+	}
 
 	return err;
 }
@@ -6128,7 +6145,7 @@ static int nl80211_deauthenticate(struct sk_buff *skb, struct genl_info *info)
 	struct cfg80211_registered_device *rdev = info->user_ptr[0];
 	struct net_device *dev = info->user_ptr[1];
 	const u8 *ie = NULL, *bssid;
-	int ie_len = 0;
+	int ie_len = 0, err;
 	u16 reason_code;
 	bool local_state_change;
 
@@ -6163,8 +6180,11 @@ static int nl80211_deauthenticate(struct sk_buff *skb, struct genl_info *info)
 
 	local_state_change = !!info->attrs[NL80211_ATTR_LOCAL_STATE_CHANGE];
 
-	return cfg80211_mlme_deauth(rdev, dev, bssid, ie, ie_len, reason_code,
-				    local_state_change);
+	wdev_lock(dev->ieee80211_ptr);
+	err = cfg80211_mlme_deauth(rdev, dev, bssid, ie, ie_len, reason_code,
+				   local_state_change);
+	wdev_unlock(dev->ieee80211_ptr);
+	return err;
 }
 
 static int nl80211_disassociate(struct sk_buff *skb, struct genl_info *info)
@@ -6172,7 +6192,7 @@ static int nl80211_disassociate(struct sk_buff *skb, struct genl_info *info)
6172 struct cfg80211_registered_device *rdev = info->user_ptr[0]; 6192 struct cfg80211_registered_device *rdev = info->user_ptr[0];
6173 struct net_device *dev = info->user_ptr[1]; 6193 struct net_device *dev = info->user_ptr[1];
6174 const u8 *ie = NULL, *bssid; 6194 const u8 *ie = NULL, *bssid;
6175 int ie_len = 0; 6195 int ie_len = 0, err;
6176 u16 reason_code; 6196 u16 reason_code;
6177 bool local_state_change; 6197 bool local_state_change;
6178 6198
@@ -6207,8 +6227,11 @@ static int nl80211_disassociate(struct sk_buff *skb, struct genl_info *info)
6207 6227
6208 local_state_change = !!info->attrs[NL80211_ATTR_LOCAL_STATE_CHANGE]; 6228 local_state_change = !!info->attrs[NL80211_ATTR_LOCAL_STATE_CHANGE];
6209 6229
6210 return cfg80211_mlme_disassoc(rdev, dev, bssid, ie, ie_len, reason_code, 6230 wdev_lock(dev->ieee80211_ptr);
6211 local_state_change); 6231 err = cfg80211_mlme_disassoc(rdev, dev, bssid, ie, ie_len, reason_code,
6232 local_state_change);
6233 wdev_unlock(dev->ieee80211_ptr);
6234 return err;
6212} 6235}
6213 6236
6214static bool 6237static bool
@@ -6295,11 +6318,16 @@ static int nl80211_join_ibss(struct sk_buff *skb, struct genl_info *info)
6295 if (!cfg80211_reg_can_beacon(&rdev->wiphy, &ibss.chandef)) 6318 if (!cfg80211_reg_can_beacon(&rdev->wiphy, &ibss.chandef))
6296 return -EINVAL; 6319 return -EINVAL;
6297 6320
6298 if (ibss.chandef.width > NL80211_CHAN_WIDTH_40) 6321 switch (ibss.chandef.width) {
6299 return -EINVAL; 6322 case NL80211_CHAN_WIDTH_20_NOHT:
6300 if (ibss.chandef.width != NL80211_CHAN_WIDTH_20_NOHT && 6323 break;
6301 !(rdev->wiphy.features & NL80211_FEATURE_HT_IBSS)) 6324 case NL80211_CHAN_WIDTH_20:
6325 case NL80211_CHAN_WIDTH_40:
6326 if (rdev->wiphy.features & NL80211_FEATURE_HT_IBSS)
6327 break;
6328 default:
6302 return -EINVAL; 6329 return -EINVAL;
6330 }
6303 6331
6304 ibss.channel_fixed = !!info->attrs[NL80211_ATTR_FREQ_FIXED]; 6332 ibss.channel_fixed = !!info->attrs[NL80211_ATTR_FREQ_FIXED];
6305 ibss.privacy = !!info->attrs[NL80211_ATTR_PRIVACY]; 6333 ibss.privacy = !!info->attrs[NL80211_ATTR_PRIVACY];
@@ -6426,6 +6454,8 @@ static int nl80211_testmode_dump(struct sk_buff *skb,
6426 void *data = NULL; 6454 void *data = NULL;
6427 int data_len = 0; 6455 int data_len = 0;
6428 6456
6457 rtnl_lock();
6458
6429 if (cb->args[0]) { 6459 if (cb->args[0]) {
6430 /* 6460 /*
6431 * 0 is a valid index, but not valid for args[0], 6461 * 0 is a valid index, but not valid for args[0],
@@ -6437,18 +6467,16 @@ static int nl80211_testmode_dump(struct sk_buff *skb,
6437 nl80211_fam.attrbuf, nl80211_fam.maxattr, 6467 nl80211_fam.attrbuf, nl80211_fam.maxattr,
6438 nl80211_policy); 6468 nl80211_policy);
6439 if (err) 6469 if (err)
6440 return err; 6470 goto out_err;
6441 6471
6442 mutex_lock(&cfg80211_mutex);
6443 rdev = __cfg80211_rdev_from_attrs(sock_net(skb->sk), 6472 rdev = __cfg80211_rdev_from_attrs(sock_net(skb->sk),
6444 nl80211_fam.attrbuf); 6473 nl80211_fam.attrbuf);
6445 if (IS_ERR(rdev)) { 6474 if (IS_ERR(rdev)) {
6446 mutex_unlock(&cfg80211_mutex); 6475 err = PTR_ERR(rdev);
6447 return PTR_ERR(rdev); 6476 goto out_err;
6448 } 6477 }
6449 phy_idx = rdev->wiphy_idx; 6478 phy_idx = rdev->wiphy_idx;
6450 rdev = NULL; 6479 rdev = NULL;
6451 mutex_unlock(&cfg80211_mutex);
6452 6480
6453 if (nl80211_fam.attrbuf[NL80211_ATTR_TESTDATA]) 6481 if (nl80211_fam.attrbuf[NL80211_ATTR_TESTDATA])
6454 cb->args[1] = 6482 cb->args[1] =
@@ -6460,14 +6488,11 @@ static int nl80211_testmode_dump(struct sk_buff *skb,
6460 data_len = nla_len((void *)cb->args[1]); 6488 data_len = nla_len((void *)cb->args[1]);
6461 } 6489 }
6462 6490
6463 mutex_lock(&cfg80211_mutex);
6464 rdev = cfg80211_rdev_by_wiphy_idx(phy_idx); 6491 rdev = cfg80211_rdev_by_wiphy_idx(phy_idx);
6465 if (!rdev) { 6492 if (!rdev) {
6466 mutex_unlock(&cfg80211_mutex); 6493 err = -ENOENT;
6467 return -ENOENT; 6494 goto out_err;
6468 } 6495 }
6469 cfg80211_lock_rdev(rdev);
6470 mutex_unlock(&cfg80211_mutex);
6471 6496
6472 if (!rdev->ops->testmode_dump) { 6497 if (!rdev->ops->testmode_dump) {
6473 err = -EOPNOTSUPP; 6498 err = -EOPNOTSUPP;
@@ -6508,7 +6533,7 @@ static int nl80211_testmode_dump(struct sk_buff *skb,
6508 /* see above */ 6533 /* see above */
6509 cb->args[0] = phy_idx + 1; 6534 cb->args[0] = phy_idx + 1;
6510 out_err: 6535 out_err:
6511 cfg80211_unlock_rdev(rdev); 6536 rtnl_unlock();
6512 return err; 6537 return err;
6513} 6538}
6514 6539
@@ -6716,7 +6741,9 @@ static int nl80211_connect(struct sk_buff *skb, struct genl_info *info)
6716 sizeof(connect.vht_capa)); 6741 sizeof(connect.vht_capa));
6717 } 6742 }
6718 6743
6719 err = cfg80211_connect(rdev, dev, &connect, connkeys); 6744 wdev_lock(dev->ieee80211_ptr);
6745 err = cfg80211_connect(rdev, dev, &connect, connkeys, NULL);
6746 wdev_unlock(dev->ieee80211_ptr);
6720 if (err) 6747 if (err)
6721 kfree(connkeys); 6748 kfree(connkeys);
6722 return err; 6749 return err;
@@ -6727,6 +6754,7 @@ static int nl80211_disconnect(struct sk_buff *skb, struct genl_info *info)
6727 struct cfg80211_registered_device *rdev = info->user_ptr[0]; 6754 struct cfg80211_registered_device *rdev = info->user_ptr[0];
6728 struct net_device *dev = info->user_ptr[1]; 6755 struct net_device *dev = info->user_ptr[1];
6729 u16 reason; 6756 u16 reason;
6757 int ret;
6730 6758
6731 if (!info->attrs[NL80211_ATTR_REASON_CODE]) 6759 if (!info->attrs[NL80211_ATTR_REASON_CODE])
6732 reason = WLAN_REASON_DEAUTH_LEAVING; 6760 reason = WLAN_REASON_DEAUTH_LEAVING;
@@ -6740,7 +6768,10 @@ static int nl80211_disconnect(struct sk_buff *skb, struct genl_info *info)
6740 dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_CLIENT) 6768 dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_CLIENT)
6741 return -EOPNOTSUPP; 6769 return -EOPNOTSUPP;
6742 6770
6743 return cfg80211_disconnect(rdev, dev, reason, true); 6771 wdev_lock(dev->ieee80211_ptr);
6772 ret = cfg80211_disconnect(rdev, dev, reason, true);
6773 wdev_unlock(dev->ieee80211_ptr);
6774 return ret;
6744} 6775}
6745 6776
6746static int nl80211_wiphy_netns(struct sk_buff *skb, struct genl_info *info) 6777static int nl80211_wiphy_netns(struct sk_buff *skb, struct genl_info *info)
@@ -7159,6 +7190,9 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info)
7159 return -EOPNOTSUPP; 7190 return -EOPNOTSUPP;
7160 7191
7161 switch (wdev->iftype) { 7192 switch (wdev->iftype) {
7193 case NL80211_IFTYPE_P2P_DEVICE:
7194 if (!info->attrs[NL80211_ATTR_WIPHY_FREQ])
7195 return -EINVAL;
7162 case NL80211_IFTYPE_STATION: 7196 case NL80211_IFTYPE_STATION:
7163 case NL80211_IFTYPE_ADHOC: 7197 case NL80211_IFTYPE_ADHOC:
7164 case NL80211_IFTYPE_P2P_CLIENT: 7198 case NL80211_IFTYPE_P2P_CLIENT:
@@ -7166,7 +7200,6 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info)
7166 case NL80211_IFTYPE_AP_VLAN: 7200 case NL80211_IFTYPE_AP_VLAN:
7167 case NL80211_IFTYPE_MESH_POINT: 7201 case NL80211_IFTYPE_MESH_POINT:
7168 case NL80211_IFTYPE_P2P_GO: 7202 case NL80211_IFTYPE_P2P_GO:
7169 case NL80211_IFTYPE_P2P_DEVICE:
7170 break; 7203 break;
7171 default: 7204 default:
7172 return -EOPNOTSUPP; 7205 return -EOPNOTSUPP;
@@ -7194,9 +7227,18 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info)
7194 7227
7195 no_cck = nla_get_flag(info->attrs[NL80211_ATTR_TX_NO_CCK_RATE]); 7228 no_cck = nla_get_flag(info->attrs[NL80211_ATTR_TX_NO_CCK_RATE]);
7196 7229
7197 err = nl80211_parse_chandef(rdev, info, &chandef); 7230 /* get the channel if any has been specified, otherwise pass NULL to
7198 if (err) 7231 * the driver. The latter will use the current one
7199 return err; 7232 */
7233 chandef.chan = NULL;
7234 if (info->attrs[NL80211_ATTR_WIPHY_FREQ]) {
7235 err = nl80211_parse_chandef(rdev, info, &chandef);
7236 if (err)
7237 return err;
7238 }
7239
7240 if (!chandef.chan && offchan)
7241 return -EINVAL;
7200 7242
7201 if (!dont_wait_for_ack) { 7243 if (!dont_wait_for_ack) {
7202 msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 7244 msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
@@ -7501,6 +7543,23 @@ static int nl80211_join_mesh(struct sk_buff *skb, struct genl_info *info)
7501 setup.chandef.chan = NULL; 7543 setup.chandef.chan = NULL;
7502 } 7544 }
7503 7545
7546 if (info->attrs[NL80211_ATTR_BSS_BASIC_RATES]) {
7547 u8 *rates = nla_data(info->attrs[NL80211_ATTR_BSS_BASIC_RATES]);
7548 int n_rates =
7549 nla_len(info->attrs[NL80211_ATTR_BSS_BASIC_RATES]);
7550 struct ieee80211_supported_band *sband;
7551
7552 if (!setup.chandef.chan)
7553 return -EINVAL;
7554
7555 sband = rdev->wiphy.bands[setup.chandef.chan->band];
7556
7557 err = ieee80211_get_ratemask(sband, rates, n_rates,
7558 &setup.basic_rates);
7559 if (err)
7560 return err;
7561 }
7562
7504 return cfg80211_join_mesh(rdev, dev, &setup, &cfg); 7563 return cfg80211_join_mesh(rdev, dev, &setup, &cfg);
7505} 7564}
7506 7565
@@ -7516,28 +7575,29 @@ static int nl80211_leave_mesh(struct sk_buff *skb, struct genl_info *info)
7516static int nl80211_send_wowlan_patterns(struct sk_buff *msg, 7575static int nl80211_send_wowlan_patterns(struct sk_buff *msg,
7517 struct cfg80211_registered_device *rdev) 7576 struct cfg80211_registered_device *rdev)
7518{ 7577{
7578 struct cfg80211_wowlan *wowlan = rdev->wiphy.wowlan_config;
7519 struct nlattr *nl_pats, *nl_pat; 7579 struct nlattr *nl_pats, *nl_pat;
7520 int i, pat_len; 7580 int i, pat_len;
7521 7581
7522 if (!rdev->wowlan->n_patterns) 7582 if (!wowlan->n_patterns)
7523 return 0; 7583 return 0;
7524 7584
7525 nl_pats = nla_nest_start(msg, NL80211_WOWLAN_TRIG_PKT_PATTERN); 7585 nl_pats = nla_nest_start(msg, NL80211_WOWLAN_TRIG_PKT_PATTERN);
7526 if (!nl_pats) 7586 if (!nl_pats)
7527 return -ENOBUFS; 7587 return -ENOBUFS;
7528 7588
7529 for (i = 0; i < rdev->wowlan->n_patterns; i++) { 7589 for (i = 0; i < wowlan->n_patterns; i++) {
7530 nl_pat = nla_nest_start(msg, i + 1); 7590 nl_pat = nla_nest_start(msg, i + 1);
7531 if (!nl_pat) 7591 if (!nl_pat)
7532 return -ENOBUFS; 7592 return -ENOBUFS;
7533 pat_len = rdev->wowlan->patterns[i].pattern_len; 7593 pat_len = wowlan->patterns[i].pattern_len;
7534 if (nla_put(msg, NL80211_WOWLAN_PKTPAT_MASK, 7594 if (nla_put(msg, NL80211_WOWLAN_PKTPAT_MASK,
7535 DIV_ROUND_UP(pat_len, 8), 7595 DIV_ROUND_UP(pat_len, 8),
7536 rdev->wowlan->patterns[i].mask) || 7596 wowlan->patterns[i].mask) ||
7537 nla_put(msg, NL80211_WOWLAN_PKTPAT_PATTERN, 7597 nla_put(msg, NL80211_WOWLAN_PKTPAT_PATTERN,
7538 pat_len, rdev->wowlan->patterns[i].pattern) || 7598 pat_len, wowlan->patterns[i].pattern) ||
7539 nla_put_u32(msg, NL80211_WOWLAN_PKTPAT_OFFSET, 7599 nla_put_u32(msg, NL80211_WOWLAN_PKTPAT_OFFSET,
7540 rdev->wowlan->patterns[i].pkt_offset)) 7600 wowlan->patterns[i].pkt_offset))
7541 return -ENOBUFS; 7601 return -ENOBUFS;
7542 nla_nest_end(msg, nl_pat); 7602 nla_nest_end(msg, nl_pat);
7543 } 7603 }
@@ -7596,16 +7656,15 @@ static int nl80211_get_wowlan(struct sk_buff *skb, struct genl_info *info)
7596 void *hdr; 7656 void *hdr;
7597 u32 size = NLMSG_DEFAULT_SIZE; 7657 u32 size = NLMSG_DEFAULT_SIZE;
7598 7658
7599 if (!rdev->wiphy.wowlan.flags && !rdev->wiphy.wowlan.n_patterns && 7659 if (!rdev->wiphy.wowlan)
7600 !rdev->wiphy.wowlan.tcp)
7601 return -EOPNOTSUPP; 7660 return -EOPNOTSUPP;
7602 7661
7603 if (rdev->wowlan && rdev->wowlan->tcp) { 7662 if (rdev->wiphy.wowlan_config && rdev->wiphy.wowlan_config->tcp) {
7604 /* adjust size to have room for all the data */ 7663 /* adjust size to have room for all the data */
7605 size += rdev->wowlan->tcp->tokens_size + 7664 size += rdev->wiphy.wowlan_config->tcp->tokens_size +
7606 rdev->wowlan->tcp->payload_len + 7665 rdev->wiphy.wowlan_config->tcp->payload_len +
7607 rdev->wowlan->tcp->wake_len + 7666 rdev->wiphy.wowlan_config->tcp->wake_len +
7608 rdev->wowlan->tcp->wake_len / 8; 7667 rdev->wiphy.wowlan_config->tcp->wake_len / 8;
7609 } 7668 }
7610 7669
7611 msg = nlmsg_new(size, GFP_KERNEL); 7670 msg = nlmsg_new(size, GFP_KERNEL);
@@ -7617,33 +7676,34 @@ static int nl80211_get_wowlan(struct sk_buff *skb, struct genl_info *info)
7617 if (!hdr) 7676 if (!hdr)
7618 goto nla_put_failure; 7677 goto nla_put_failure;
7619 7678
7620 if (rdev->wowlan) { 7679 if (rdev->wiphy.wowlan_config) {
7621 struct nlattr *nl_wowlan; 7680 struct nlattr *nl_wowlan;
7622 7681
7623 nl_wowlan = nla_nest_start(msg, NL80211_ATTR_WOWLAN_TRIGGERS); 7682 nl_wowlan = nla_nest_start(msg, NL80211_ATTR_WOWLAN_TRIGGERS);
7624 if (!nl_wowlan) 7683 if (!nl_wowlan)
7625 goto nla_put_failure; 7684 goto nla_put_failure;
7626 7685
7627 if ((rdev->wowlan->any && 7686 if ((rdev->wiphy.wowlan_config->any &&
7628 nla_put_flag(msg, NL80211_WOWLAN_TRIG_ANY)) || 7687 nla_put_flag(msg, NL80211_WOWLAN_TRIG_ANY)) ||
7629 (rdev->wowlan->disconnect && 7688 (rdev->wiphy.wowlan_config->disconnect &&
7630 nla_put_flag(msg, NL80211_WOWLAN_TRIG_DISCONNECT)) || 7689 nla_put_flag(msg, NL80211_WOWLAN_TRIG_DISCONNECT)) ||
7631 (rdev->wowlan->magic_pkt && 7690 (rdev->wiphy.wowlan_config->magic_pkt &&
7632 nla_put_flag(msg, NL80211_WOWLAN_TRIG_MAGIC_PKT)) || 7691 nla_put_flag(msg, NL80211_WOWLAN_TRIG_MAGIC_PKT)) ||
7633 (rdev->wowlan->gtk_rekey_failure && 7692 (rdev->wiphy.wowlan_config->gtk_rekey_failure &&
7634 nla_put_flag(msg, NL80211_WOWLAN_TRIG_GTK_REKEY_FAILURE)) || 7693 nla_put_flag(msg, NL80211_WOWLAN_TRIG_GTK_REKEY_FAILURE)) ||
7635 (rdev->wowlan->eap_identity_req && 7694 (rdev->wiphy.wowlan_config->eap_identity_req &&
7636 nla_put_flag(msg, NL80211_WOWLAN_TRIG_EAP_IDENT_REQUEST)) || 7695 nla_put_flag(msg, NL80211_WOWLAN_TRIG_EAP_IDENT_REQUEST)) ||
7637 (rdev->wowlan->four_way_handshake && 7696 (rdev->wiphy.wowlan_config->four_way_handshake &&
7638 nla_put_flag(msg, NL80211_WOWLAN_TRIG_4WAY_HANDSHAKE)) || 7697 nla_put_flag(msg, NL80211_WOWLAN_TRIG_4WAY_HANDSHAKE)) ||
7639 (rdev->wowlan->rfkill_release && 7698 (rdev->wiphy.wowlan_config->rfkill_release &&
7640 nla_put_flag(msg, NL80211_WOWLAN_TRIG_RFKILL_RELEASE))) 7699 nla_put_flag(msg, NL80211_WOWLAN_TRIG_RFKILL_RELEASE)))
7641 goto nla_put_failure; 7700 goto nla_put_failure;
7642 7701
7643 if (nl80211_send_wowlan_patterns(msg, rdev)) 7702 if (nl80211_send_wowlan_patterns(msg, rdev))
7644 goto nla_put_failure; 7703 goto nla_put_failure;
7645 7704
7646 if (nl80211_send_wowlan_tcp(msg, rdev->wowlan->tcp)) 7705 if (nl80211_send_wowlan_tcp(msg,
7706 rdev->wiphy.wowlan_config->tcp))
7647 goto nla_put_failure; 7707 goto nla_put_failure;
7648 7708
7649 nla_nest_end(msg, nl_wowlan); 7709 nla_nest_end(msg, nl_wowlan);
@@ -7669,7 +7729,7 @@ static int nl80211_parse_wowlan_tcp(struct cfg80211_registered_device *rdev,
7669 u32 data_size, wake_size, tokens_size = 0, wake_mask_size; 7729 u32 data_size, wake_size, tokens_size = 0, wake_mask_size;
7670 int err, port; 7730 int err, port;
7671 7731
7672 if (!rdev->wiphy.wowlan.tcp) 7732 if (!rdev->wiphy.wowlan->tcp)
7673 return -EINVAL; 7733 return -EINVAL;
7674 7734
7675 err = nla_parse(tb, MAX_NL80211_WOWLAN_TCP, 7735 err = nla_parse(tb, MAX_NL80211_WOWLAN_TCP,
@@ -7689,16 +7749,16 @@ static int nl80211_parse_wowlan_tcp(struct cfg80211_registered_device *rdev,
7689 return -EINVAL; 7749 return -EINVAL;
7690 7750
7691 data_size = nla_len(tb[NL80211_WOWLAN_TCP_DATA_PAYLOAD]); 7751 data_size = nla_len(tb[NL80211_WOWLAN_TCP_DATA_PAYLOAD]);
7692 if (data_size > rdev->wiphy.wowlan.tcp->data_payload_max) 7752 if (data_size > rdev->wiphy.wowlan->tcp->data_payload_max)
7693 return -EINVAL; 7753 return -EINVAL;
7694 7754
7695 if (nla_get_u32(tb[NL80211_WOWLAN_TCP_DATA_INTERVAL]) > 7755 if (nla_get_u32(tb[NL80211_WOWLAN_TCP_DATA_INTERVAL]) >
7696 rdev->wiphy.wowlan.tcp->data_interval_max || 7756 rdev->wiphy.wowlan->tcp->data_interval_max ||
7697 nla_get_u32(tb[NL80211_WOWLAN_TCP_DATA_INTERVAL]) == 0) 7757 nla_get_u32(tb[NL80211_WOWLAN_TCP_DATA_INTERVAL]) == 0)
7698 return -EINVAL; 7758 return -EINVAL;
7699 7759
7700 wake_size = nla_len(tb[NL80211_WOWLAN_TCP_WAKE_PAYLOAD]); 7760 wake_size = nla_len(tb[NL80211_WOWLAN_TCP_WAKE_PAYLOAD]);
7701 if (wake_size > rdev->wiphy.wowlan.tcp->wake_payload_max) 7761 if (wake_size > rdev->wiphy.wowlan->tcp->wake_payload_max)
7702 return -EINVAL; 7762 return -EINVAL;
7703 7763
7704 wake_mask_size = nla_len(tb[NL80211_WOWLAN_TCP_WAKE_MASK]); 7764 wake_mask_size = nla_len(tb[NL80211_WOWLAN_TCP_WAKE_MASK]);
@@ -7713,13 +7773,13 @@ static int nl80211_parse_wowlan_tcp(struct cfg80211_registered_device *rdev,
7713 7773
7714 if (!tok->len || tokens_size % tok->len) 7774 if (!tok->len || tokens_size % tok->len)
7715 return -EINVAL; 7775 return -EINVAL;
7716 if (!rdev->wiphy.wowlan.tcp->tok) 7776 if (!rdev->wiphy.wowlan->tcp->tok)
7717 return -EINVAL; 7777 return -EINVAL;
7718 if (tok->len > rdev->wiphy.wowlan.tcp->tok->max_len) 7778 if (tok->len > rdev->wiphy.wowlan->tcp->tok->max_len)
7719 return -EINVAL; 7779 return -EINVAL;
7720 if (tok->len < rdev->wiphy.wowlan.tcp->tok->min_len) 7780 if (tok->len < rdev->wiphy.wowlan->tcp->tok->min_len)
7721 return -EINVAL; 7781 return -EINVAL;
7722 if (tokens_size > rdev->wiphy.wowlan.tcp->tok->bufsize) 7782 if (tokens_size > rdev->wiphy.wowlan->tcp->tok->bufsize)
7723 return -EINVAL; 7783 return -EINVAL;
7724 if (tok->offset + tok->len > data_size) 7784 if (tok->offset + tok->len > data_size)
7725 return -EINVAL; 7785 return -EINVAL;
@@ -7727,7 +7787,7 @@ static int nl80211_parse_wowlan_tcp(struct cfg80211_registered_device *rdev,
7727 7787
7728 if (tb[NL80211_WOWLAN_TCP_DATA_PAYLOAD_SEQ]) { 7788 if (tb[NL80211_WOWLAN_TCP_DATA_PAYLOAD_SEQ]) {
7729 seq = nla_data(tb[NL80211_WOWLAN_TCP_DATA_PAYLOAD_SEQ]); 7789 seq = nla_data(tb[NL80211_WOWLAN_TCP_DATA_PAYLOAD_SEQ]);
7730 if (!rdev->wiphy.wowlan.tcp->seq) 7790 if (!rdev->wiphy.wowlan->tcp->seq)
7731 return -EINVAL; 7791 return -EINVAL;
7732 if (seq->len == 0 || seq->len > 4) 7792 if (seq->len == 0 || seq->len > 4)
7733 return -EINVAL; 7793 return -EINVAL;
@@ -7808,17 +7868,16 @@ static int nl80211_set_wowlan(struct sk_buff *skb, struct genl_info *info)
7808 struct nlattr *tb[NUM_NL80211_WOWLAN_TRIG]; 7868 struct nlattr *tb[NUM_NL80211_WOWLAN_TRIG];
7809 struct cfg80211_wowlan new_triggers = {}; 7869 struct cfg80211_wowlan new_triggers = {};
7810 struct cfg80211_wowlan *ntrig; 7870 struct cfg80211_wowlan *ntrig;
7811 struct wiphy_wowlan_support *wowlan = &rdev->wiphy.wowlan; 7871 const struct wiphy_wowlan_support *wowlan = rdev->wiphy.wowlan;
7812 int err, i; 7872 int err, i;
7813 bool prev_enabled = rdev->wowlan; 7873 bool prev_enabled = rdev->wiphy.wowlan_config;
7814 7874
7815 if (!rdev->wiphy.wowlan.flags && !rdev->wiphy.wowlan.n_patterns && 7875 if (!wowlan)
7816 !rdev->wiphy.wowlan.tcp)
7817 return -EOPNOTSUPP; 7876 return -EOPNOTSUPP;
7818 7877
7819 if (!info->attrs[NL80211_ATTR_WOWLAN_TRIGGERS]) { 7878 if (!info->attrs[NL80211_ATTR_WOWLAN_TRIGGERS]) {
7820 cfg80211_rdev_free_wowlan(rdev); 7879 cfg80211_rdev_free_wowlan(rdev);
7821 rdev->wowlan = NULL; 7880 rdev->wiphy.wowlan_config = NULL;
7822 goto set_wakeup; 7881 goto set_wakeup;
7823 } 7882 }
7824 7883
@@ -7954,11 +8013,12 @@ static int nl80211_set_wowlan(struct sk_buff *skb, struct genl_info *info)
7954 goto error; 8013 goto error;
7955 } 8014 }
7956 cfg80211_rdev_free_wowlan(rdev); 8015 cfg80211_rdev_free_wowlan(rdev);
7957 rdev->wowlan = ntrig; 8016 rdev->wiphy.wowlan_config = ntrig;
7958 8017
7959 set_wakeup: 8018 set_wakeup:
7960 if (rdev->ops->set_wakeup && prev_enabled != !!rdev->wowlan) 8019 if (rdev->ops->set_wakeup &&
7961 rdev_set_wakeup(rdev, rdev->wowlan); 8020 prev_enabled != !!rdev->wiphy.wowlan_config)
8021 rdev_set_wakeup(rdev, rdev->wiphy.wowlan_config);
7962 8022
7963 return 0; 8023 return 0;
7964 error: 8024 error:
@@ -8143,9 +8203,7 @@ static int nl80211_start_p2p_device(struct sk_buff *skb, struct genl_info *info)
8143 if (wdev->p2p_started) 8203 if (wdev->p2p_started)
8144 return 0; 8204 return 0;
8145 8205
8146 mutex_lock(&rdev->devlist_mtx);
8147 err = cfg80211_can_add_interface(rdev, wdev->iftype); 8206 err = cfg80211_can_add_interface(rdev, wdev->iftype);
8148 mutex_unlock(&rdev->devlist_mtx);
8149 if (err) 8207 if (err)
8150 return err; 8208 return err;
8151 8209
@@ -8154,9 +8212,7 @@ static int nl80211_start_p2p_device(struct sk_buff *skb, struct genl_info *info)
8154 return err; 8212 return err;
8155 8213
8156 wdev->p2p_started = true; 8214 wdev->p2p_started = true;
8157 mutex_lock(&rdev->devlist_mtx);
8158 rdev->opencount++; 8215 rdev->opencount++;
8159 mutex_unlock(&rdev->devlist_mtx);
8160 8216
8161 return 0; 8217 return 0;
8162} 8218}
@@ -8172,11 +8228,7 @@ static int nl80211_stop_p2p_device(struct sk_buff *skb, struct genl_info *info)
8172 if (!rdev->ops->stop_p2p_device) 8228 if (!rdev->ops->stop_p2p_device)
8173 return -EOPNOTSUPP; 8229 return -EOPNOTSUPP;
8174 8230
8175 mutex_lock(&rdev->devlist_mtx);
8176 mutex_lock(&rdev->sched_scan_mtx);
8177 cfg80211_stop_p2p_device(rdev, wdev); 8231 cfg80211_stop_p2p_device(rdev, wdev);
8178 mutex_unlock(&rdev->sched_scan_mtx);
8179 mutex_unlock(&rdev->devlist_mtx);
8180 8232
8181 return 0; 8233 return 0;
8182} 8234}
@@ -8319,11 +8371,11 @@ static int nl80211_pre_doit(struct genl_ops *ops, struct sk_buff *skb,
8319 info->user_ptr[0] = rdev; 8371 info->user_ptr[0] = rdev;
8320 } else if (ops->internal_flags & NL80211_FLAG_NEED_NETDEV || 8372 } else if (ops->internal_flags & NL80211_FLAG_NEED_NETDEV ||
8321 ops->internal_flags & NL80211_FLAG_NEED_WDEV) { 8373 ops->internal_flags & NL80211_FLAG_NEED_WDEV) {
8322 mutex_lock(&cfg80211_mutex); 8374 ASSERT_RTNL();
8375
8323 wdev = __cfg80211_wdev_from_attrs(genl_info_net(info), 8376 wdev = __cfg80211_wdev_from_attrs(genl_info_net(info),
8324 info->attrs); 8377 info->attrs);
8325 if (IS_ERR(wdev)) { 8378 if (IS_ERR(wdev)) {
8326 mutex_unlock(&cfg80211_mutex);
8327 if (rtnl) 8379 if (rtnl)
8328 rtnl_unlock(); 8380 rtnl_unlock();
8329 return PTR_ERR(wdev); 8381 return PTR_ERR(wdev);
@@ -8334,7 +8386,6 @@ static int nl80211_pre_doit(struct genl_ops *ops, struct sk_buff *skb,
8334 8386
8335 if (ops->internal_flags & NL80211_FLAG_NEED_NETDEV) { 8387 if (ops->internal_flags & NL80211_FLAG_NEED_NETDEV) {
8336 if (!dev) { 8388 if (!dev) {
8337 mutex_unlock(&cfg80211_mutex);
8338 if (rtnl) 8389 if (rtnl)
8339 rtnl_unlock(); 8390 rtnl_unlock();
8340 return -EINVAL; 8391 return -EINVAL;
@@ -8348,7 +8399,6 @@ static int nl80211_pre_doit(struct genl_ops *ops, struct sk_buff *skb,
8348 if (dev) { 8399 if (dev) {
8349 if (ops->internal_flags & NL80211_FLAG_CHECK_NETDEV_UP && 8400 if (ops->internal_flags & NL80211_FLAG_CHECK_NETDEV_UP &&
8350 !netif_running(dev)) { 8401 !netif_running(dev)) {
8351 mutex_unlock(&cfg80211_mutex);
8352 if (rtnl) 8402 if (rtnl)
8353 rtnl_unlock(); 8403 rtnl_unlock();
8354 return -ENETDOWN; 8404 return -ENETDOWN;
@@ -8357,17 +8407,12 @@ static int nl80211_pre_doit(struct genl_ops *ops, struct sk_buff *skb,
8357 dev_hold(dev); 8407 dev_hold(dev);
8358 } else if (ops->internal_flags & NL80211_FLAG_CHECK_NETDEV_UP) { 8408 } else if (ops->internal_flags & NL80211_FLAG_CHECK_NETDEV_UP) {
8359 if (!wdev->p2p_started) { 8409 if (!wdev->p2p_started) {
8360 mutex_unlock(&cfg80211_mutex);
8361 if (rtnl) 8410 if (rtnl)
8362 rtnl_unlock(); 8411 rtnl_unlock();
8363 return -ENETDOWN; 8412 return -ENETDOWN;
8364 } 8413 }
8365 } 8414 }
8366 8415
8367 cfg80211_lock_rdev(rdev);
8368
8369 mutex_unlock(&cfg80211_mutex);
8370
8371 info->user_ptr[0] = rdev; 8416 info->user_ptr[0] = rdev;
8372 } 8417 }
8373 8418
@@ -8377,8 +8422,6 @@ static int nl80211_pre_doit(struct genl_ops *ops, struct sk_buff *skb,
8377static void nl80211_post_doit(struct genl_ops *ops, struct sk_buff *skb, 8422static void nl80211_post_doit(struct genl_ops *ops, struct sk_buff *skb,
8378 struct genl_info *info) 8423 struct genl_info *info)
8379{ 8424{
8380 if (info->user_ptr[0])
8381 cfg80211_unlock_rdev(info->user_ptr[0]);
8382 if (info->user_ptr[1]) { 8425 if (info->user_ptr[1]) {
8383 if (ops->internal_flags & NL80211_FLAG_NEED_WDEV) { 8426 if (ops->internal_flags & NL80211_FLAG_NEED_WDEV) {
8384 struct wireless_dev *wdev = info->user_ptr[1]; 8427 struct wireless_dev *wdev = info->user_ptr[1];
@@ -8398,9 +8441,11 @@ static struct genl_ops nl80211_ops[] = {
8398 .cmd = NL80211_CMD_GET_WIPHY, 8441 .cmd = NL80211_CMD_GET_WIPHY,
8399 .doit = nl80211_get_wiphy, 8442 .doit = nl80211_get_wiphy,
8400 .dumpit = nl80211_dump_wiphy, 8443 .dumpit = nl80211_dump_wiphy,
8444 .done = nl80211_dump_wiphy_done,
8401 .policy = nl80211_policy, 8445 .policy = nl80211_policy,
8402 /* can be retrieved by unprivileged users */ 8446 /* can be retrieved by unprivileged users */
8403 .internal_flags = NL80211_FLAG_NEED_WIPHY, 8447 .internal_flags = NL80211_FLAG_NEED_WIPHY |
8448 NL80211_FLAG_NEED_RTNL,
8404 }, 8449 },
8405 { 8450 {
8406 .cmd = NL80211_CMD_SET_WIPHY, 8451 .cmd = NL80211_CMD_SET_WIPHY,
@@ -8415,7 +8460,8 @@ static struct genl_ops nl80211_ops[] = {
8415 .dumpit = nl80211_dump_interface, 8460 .dumpit = nl80211_dump_interface,
8416 .policy = nl80211_policy, 8461 .policy = nl80211_policy,
8417 /* can be retrieved by unprivileged users */ 8462 /* can be retrieved by unprivileged users */
8418 .internal_flags = NL80211_FLAG_NEED_WDEV, 8463 .internal_flags = NL80211_FLAG_NEED_WDEV |
8464 NL80211_FLAG_NEED_RTNL,
8419 }, 8465 },
8420 { 8466 {
8421 .cmd = NL80211_CMD_SET_INTERFACE, 8467 .cmd = NL80211_CMD_SET_INTERFACE,
@@ -8574,6 +8620,7 @@ static struct genl_ops nl80211_ops[] = {
8574 .cmd = NL80211_CMD_GET_REG, 8620 .cmd = NL80211_CMD_GET_REG,
8575 .doit = nl80211_get_reg, 8621 .doit = nl80211_get_reg,
8576 .policy = nl80211_policy, 8622 .policy = nl80211_policy,
8623 .internal_flags = NL80211_FLAG_NEED_RTNL,
8577 /* can be retrieved by unprivileged users */ 8624 /* can be retrieved by unprivileged users */
8578 }, 8625 },
8579 { 8626 {
@@ -8581,6 +8628,7 @@ static struct genl_ops nl80211_ops[] = {
8581 .doit = nl80211_set_reg, 8628 .doit = nl80211_set_reg,
8582 .policy = nl80211_policy, 8629 .policy = nl80211_policy,
8583 .flags = GENL_ADMIN_PERM, 8630 .flags = GENL_ADMIN_PERM,
8631 .internal_flags = NL80211_FLAG_NEED_RTNL,
8584 }, 8632 },
8585 { 8633 {
8586 .cmd = NL80211_CMD_REQ_SET_REG, 8634 .cmd = NL80211_CMD_REQ_SET_REG,
@@ -9014,13 +9062,13 @@ static struct genl_multicast_group nl80211_regulatory_mcgrp = {
9014void nl80211_notify_dev_rename(struct cfg80211_registered_device *rdev) 9062void nl80211_notify_dev_rename(struct cfg80211_registered_device *rdev)
9015{ 9063{
9016 struct sk_buff *msg; 9064 struct sk_buff *msg;
9065 struct nl80211_dump_wiphy_state state = {};
9017 9066
9018 msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 9067 msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
9019 if (!msg) 9068 if (!msg)
9020 return; 9069 return;
9021 9070
9022 if (nl80211_send_wiphy(rdev, msg, 0, 0, 0, 9071 if (nl80211_send_wiphy(rdev, msg, 0, 0, 0, &state) < 0) {
9023 false, NULL, NULL, NULL) < 0) {
9024 nlmsg_free(msg); 9072 nlmsg_free(msg);
9025 return; 9073 return;
9026 } 9074 }
@@ -9036,8 +9084,6 @@ static int nl80211_add_scan_req(struct sk_buff *msg,
9036 struct nlattr *nest; 9084 struct nlattr *nest;
9037 int i; 9085 int i;
9038 9086
9039 lockdep_assert_held(&rdev->sched_scan_mtx);
9040
9041 if (WARN_ON(!req)) 9087 if (WARN_ON(!req))
9042 return 0; 9088 return 0;
9043 9089
@@ -9344,31 +9390,27 @@ void nl80211_send_disassoc(struct cfg80211_registered_device *rdev,
9344 NL80211_CMD_DISASSOCIATE, gfp); 9390 NL80211_CMD_DISASSOCIATE, gfp);
9345} 9391}
9346 9392
9347void cfg80211_send_unprot_deauth(struct net_device *dev, const u8 *buf, 9393void cfg80211_rx_unprot_mlme_mgmt(struct net_device *dev, const u8 *buf,
9348 size_t len) 9394 size_t len)
9349{ 9395{
9350 struct wireless_dev *wdev = dev->ieee80211_ptr; 9396 struct wireless_dev *wdev = dev->ieee80211_ptr;
9351 struct wiphy *wiphy = wdev->wiphy; 9397 struct wiphy *wiphy = wdev->wiphy;
9352 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); 9398 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
9399 const struct ieee80211_mgmt *mgmt = (void *)buf;
9400 u32 cmd;
9353 9401
9354 trace_cfg80211_send_unprot_deauth(dev); 9402 if (WARN_ON(len < 2))
9355 nl80211_send_mlme_event(rdev, dev, buf, len, 9403 return;
9356 NL80211_CMD_UNPROT_DEAUTHENTICATE, GFP_ATOMIC);
9357}
9358EXPORT_SYMBOL(cfg80211_send_unprot_deauth);
9359 9404
9360void cfg80211_send_unprot_disassoc(struct net_device *dev, const u8 *buf, 9405 if (ieee80211_is_deauth(mgmt->frame_control))
9361 size_t len) 9406 cmd = NL80211_CMD_UNPROT_DEAUTHENTICATE;
9362{ 9407 else
9363 struct wireless_dev *wdev = dev->ieee80211_ptr; 9408 cmd = NL80211_CMD_UNPROT_DISASSOCIATE;
9364 struct wiphy *wiphy = wdev->wiphy;
9365 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
9366 9409
9367 trace_cfg80211_send_unprot_disassoc(dev); 9410 trace_cfg80211_rx_unprot_mlme_mgmt(dev, buf, len);
9368 nl80211_send_mlme_event(rdev, dev, buf, len, 9411 nl80211_send_mlme_event(rdev, dev, buf, len, cmd, GFP_ATOMIC);
9369 NL80211_CMD_UNPROT_DISASSOCIATE, GFP_ATOMIC);
9370} 9412}
9371EXPORT_SYMBOL(cfg80211_send_unprot_disassoc); 9413EXPORT_SYMBOL(cfg80211_rx_unprot_mlme_mgmt);
9372 9414
9373static void nl80211_send_mlme_timeout(struct cfg80211_registered_device *rdev, 9415static void nl80211_send_mlme_timeout(struct cfg80211_registered_device *rdev,
9374 struct net_device *netdev, int cmd, 9416 struct net_device *netdev, int cmd,
@@ -9879,7 +9921,6 @@ static bool __nl80211_unexpected_frame(struct net_device *dev, u8 cmd,
9879 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy); 9921 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
9880 struct sk_buff *msg; 9922 struct sk_buff *msg;
9881 void *hdr; 9923 void *hdr;
9882 int err;
9883 u32 nlportid = ACCESS_ONCE(wdev->ap_unexpected_nlportid); 9924 u32 nlportid = ACCESS_ONCE(wdev->ap_unexpected_nlportid);
9884 9925
9885 if (!nlportid) 9926 if (!nlportid)
@@ -9900,12 +9941,7 @@ static bool __nl80211_unexpected_frame(struct net_device *dev, u8 cmd,
9900 nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, addr)) 9941 nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, addr))
9901 goto nla_put_failure; 9942 goto nla_put_failure;
9902 9943
9903 err = genlmsg_end(msg, hdr); 9944 genlmsg_end(msg, hdr);
9904 if (err < 0) {
9905 nlmsg_free(msg);
9906 return true;
9907 }
9908
9909 genlmsg_unicast(wiphy_net(&rdev->wiphy), msg, nlportid); 9945 genlmsg_unicast(wiphy_net(&rdev->wiphy), msg, nlportid);
9910 return true; 9946 return true;
9911 9947
@@ -10348,10 +10384,7 @@ nl80211_radar_notify(struct cfg80211_registered_device *rdev,
10348 if (nl80211_send_chandef(msg, chandef)) 10384 if (nl80211_send_chandef(msg, chandef))
10349 goto nla_put_failure; 10385 goto nla_put_failure;
10350 10386
10351 if (genlmsg_end(msg, hdr) < 0) { 10387 genlmsg_end(msg, hdr);
10352 nlmsg_free(msg);
10353 return;
10354 }
10355 10388
10356 genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0, 10389 genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0,
10357 nl80211_mlme_mcgrp.id, gfp); 10390 nl80211_mlme_mcgrp.id, gfp);
@@ -10417,7 +10450,6 @@ void cfg80211_probe_status(struct net_device *dev, const u8 *addr,
10417 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy); 10450 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
10418 struct sk_buff *msg; 10451 struct sk_buff *msg;
10419 void *hdr; 10452 void *hdr;
10420 int err;
10421 10453
10422 trace_cfg80211_probe_status(dev, addr, cookie, acked); 10454 trace_cfg80211_probe_status(dev, addr, cookie, acked);
10423 10455
@@ -10439,11 +10471,7 @@ void cfg80211_probe_status(struct net_device *dev, const u8 *addr,
 	    (acked && nla_put_flag(msg, NL80211_ATTR_ACK)))
 		goto nla_put_failure;
 
-	err = genlmsg_end(msg, hdr);
-	if (err < 0) {
-		nlmsg_free(msg);
-		return;
-	}
+	genlmsg_end(msg, hdr);
 
 	genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0,
 				nl80211_mlme_mcgrp.id, gfp);
@@ -10509,7 +10537,7 @@ void cfg80211_report_wowlan_wakeup(struct wireless_dev *wdev,
 	struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
 	struct sk_buff *msg;
 	void *hdr;
-	int err, size = 200;
+	int size = 200;
 
 	trace_cfg80211_report_wowlan_wakeup(wdev->wiphy, wdev, wakeup);
 
@@ -10595,9 +10623,7 @@ void cfg80211_report_wowlan_wakeup(struct wireless_dev *wdev,
 		nla_nest_end(msg, reasons);
 	}
 
-	err = genlmsg_end(msg, hdr);
-	if (err < 0)
-		goto free_msg;
+	genlmsg_end(msg, hdr);
 
 	genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0,
 				nl80211_mlme_mcgrp.id, gfp);
@@ -10617,7 +10643,6 @@ void cfg80211_tdls_oper_request(struct net_device *dev, const u8 *peer,
 	struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
 	struct sk_buff *msg;
 	void *hdr;
-	int err;
 
 	trace_cfg80211_tdls_oper_request(wdev->wiphy, dev, peer, oper,
 					 reason_code);
@@ -10640,11 +10665,7 @@ void cfg80211_tdls_oper_request(struct net_device *dev, const u8 *peer,
 	     nla_put_u16(msg, NL80211_ATTR_REASON_CODE, reason_code)))
 		goto nla_put_failure;
 
-	err = genlmsg_end(msg, hdr);
-	if (err < 0) {
-		nlmsg_free(msg);
-		return;
-	}
+	genlmsg_end(msg, hdr);
 
 	genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0,
 				nl80211_mlme_mcgrp.id, gfp);
@@ -10702,7 +10723,6 @@ void cfg80211_ft_event(struct net_device *netdev,
 	struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
 	struct sk_buff *msg;
 	void *hdr;
-	int err;
 
 	trace_cfg80211_ft_event(wiphy, netdev, ft_event);
 
@@ -10728,11 +10748,7 @@ void cfg80211_ft_event(struct net_device *netdev,
 		nla_put(msg, NL80211_ATTR_IE_RIC, ft_event->ric_ies_len,
 			ft_event->ric_ies);
 
-	err = genlmsg_end(msg, hdr);
-	if (err < 0) {
-		nlmsg_free(msg);
-		return;
-	}
+	genlmsg_end(msg, hdr);
 
 	genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0,
 				nl80211_mlme_mcgrp.id, GFP_KERNEL);
diff --git a/net/wireless/reg.c b/net/wireless/reg.c
index cc35fbaa4578..5a24c986f34b 100644
--- a/net/wireless/reg.c
+++ b/net/wireless/reg.c
@@ -81,7 +81,10 @@ static struct regulatory_request core_request_world = {
 	.country_ie_env = ENVIRON_ANY,
 };
 
-/* Receipt of information from last regulatory request */
+/*
+ * Receipt of information from last regulatory request,
+ * protected by RTNL (and can be accessed with RCU protection)
+ */
 static struct regulatory_request __rcu *last_request =
 	(void __rcu *)&core_request_world;
 
@@ -96,39 +99,25 @@ static struct device_type reg_device_type = {
  * Central wireless core regulatory domains, we only need two,
  * the current one and a world regulatory domain in case we have no
  * information to give us an alpha2.
+ * (protected by RTNL, can be read under RCU)
  */
 const struct ieee80211_regdomain __rcu *cfg80211_regdomain;
 
 /*
- * Protects static reg.c components:
- *	- cfg80211_regdomain (if not used with RCU)
- *	- cfg80211_world_regdom
- *	- last_request (if not used with RCU)
- *	- reg_num_devs_support_basehint
- */
-static DEFINE_MUTEX(reg_mutex);
-
-/*
  * Number of devices that registered to the core
  * that support cellular base station regulatory hints
+ * (protected by RTNL)
  */
 static int reg_num_devs_support_basehint;
 
-static inline void assert_reg_lock(void)
-{
-	lockdep_assert_held(&reg_mutex);
-}
-
 static const struct ieee80211_regdomain *get_cfg80211_regdom(void)
 {
-	return rcu_dereference_protected(cfg80211_regdomain,
-					 lockdep_is_held(&reg_mutex));
+	return rtnl_dereference(cfg80211_regdomain);
 }
 
 static const struct ieee80211_regdomain *get_wiphy_regdom(struct wiphy *wiphy)
 {
-	return rcu_dereference_protected(wiphy->regd,
-					 lockdep_is_held(&reg_mutex));
+	return rtnl_dereference(wiphy->regd);
 }
 
 static void rcu_free_regdom(const struct ieee80211_regdomain *r)
@@ -140,8 +129,7 @@ static void rcu_free_regdom(const struct ieee80211_regdomain *r)
 
 static struct regulatory_request *get_last_request(void)
 {
-	return rcu_dereference_check(last_request,
-				     lockdep_is_held(&reg_mutex));
+	return rcu_dereference_rtnl(last_request);
 }
 
 /* Used to queue up regulatory hints */
@@ -200,6 +188,7 @@ static const struct ieee80211_regdomain world_regdom = {
 	}
 };
 
+/* protected by RTNL */
 static const struct ieee80211_regdomain *cfg80211_world_regdom =
 	&world_regdom;
 
@@ -215,7 +204,7 @@ static void reset_regdomains(bool full_reset,
 	const struct ieee80211_regdomain *r;
 	struct regulatory_request *lr;
 
-	assert_reg_lock();
+	ASSERT_RTNL();
 
 	r = get_cfg80211_regdom();
 
@@ -377,7 +366,7 @@ static void reg_regdb_search(struct work_struct *work)
 	const struct ieee80211_regdomain *curdom, *regdom = NULL;
 	int i;
 
-	mutex_lock(&cfg80211_mutex);
+	rtnl_lock();
 
 	mutex_lock(&reg_regdb_search_mutex);
 	while (!list_empty(&reg_regdb_search_list)) {
@@ -402,7 +391,7 @@ static void reg_regdb_search(struct work_struct *work)
 	if (!IS_ERR_OR_NULL(regdom))
 		set_regdom(regdom);
 
-	mutex_unlock(&cfg80211_mutex);
+	rtnl_unlock();
 }
 
 static DECLARE_WORK(reg_regdb_work, reg_regdb_search);
@@ -936,13 +925,7 @@ static bool reg_request_cell_base(struct regulatory_request *request)
 
 bool reg_last_request_cell_base(void)
 {
-	bool val;
-
-	mutex_lock(&reg_mutex);
-	val = reg_request_cell_base(get_last_request());
-	mutex_unlock(&reg_mutex);
-
-	return val;
+	return reg_request_cell_base(get_last_request());
 }
 
 #ifdef CONFIG_CFG80211_CERTIFICATION_ONUS
@@ -1225,7 +1208,7 @@ static void update_all_wiphy_regulatory(enum nl80211_reg_initiator initiator)
 	struct cfg80211_registered_device *rdev;
 	struct wiphy *wiphy;
 
-	assert_cfg80211_lock();
+	ASSERT_RTNL();
 
 	list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
 		wiphy = &rdev->wiphy;
@@ -1362,7 +1345,7 @@ get_reg_request_treatment(struct wiphy *wiphy,
 				return REG_REQ_OK;
 			return REG_REQ_ALREADY_SET;
 		}
-		return 0;
+		return REG_REQ_OK;
 	case NL80211_REGDOM_SET_BY_DRIVER:
 		if (lr->initiator == NL80211_REGDOM_SET_BY_CORE) {
 			if (regdom_changes(pending_request->alpha2))
@@ -1444,8 +1427,6 @@ static void reg_set_request_processed(void)
  * what it believes should be the current regulatory domain.
  *
  * Returns one of the different reg request treatment values.
- *
- * Caller must hold &reg_mutex
  */
 static enum reg_request_treatment
 __regulatory_hint(struct wiphy *wiphy,
@@ -1570,21 +1551,19 @@ static void reg_process_pending_hints(void)
 {
 	struct regulatory_request *reg_request, *lr;
 
-	mutex_lock(&cfg80211_mutex);
-	mutex_lock(&reg_mutex);
 	lr = get_last_request();
 
 	/* When last_request->processed becomes true this will be rescheduled */
 	if (lr && !lr->processed) {
 		REG_DBG_PRINT("Pending regulatory request, waiting for it to be processed...\n");
-		goto out;
+		return;
 	}
 
 	spin_lock(&reg_requests_lock);
 
 	if (list_empty(&reg_requests_list)) {
 		spin_unlock(&reg_requests_lock);
-		goto out;
+		return;
 	}
 
 	reg_request = list_first_entry(&reg_requests_list,
@@ -1595,10 +1574,6 @@ static void reg_process_pending_hints(void)
 	spin_unlock(&reg_requests_lock);
 
 	reg_process_hint(reg_request, reg_request->initiator);
-
-out:
-	mutex_unlock(&reg_mutex);
-	mutex_unlock(&cfg80211_mutex);
 }
 
 /* Processes beacon hints -- this has nothing to do with country IEs */
@@ -1607,9 +1582,6 @@ static void reg_process_pending_beacon_hints(void)
 	struct cfg80211_registered_device *rdev;
 	struct reg_beacon *pending_beacon, *tmp;
 
-	mutex_lock(&cfg80211_mutex);
-	mutex_lock(&reg_mutex);
-
 	/* This goes through the _pending_ beacon list */
 	spin_lock_bh(&reg_pending_beacons_lock);
 
@@ -1626,14 +1598,14 @@ static void reg_process_pending_beacon_hints(void)
 	}
 
 	spin_unlock_bh(&reg_pending_beacons_lock);
-	mutex_unlock(&reg_mutex);
-	mutex_unlock(&cfg80211_mutex);
 }
 
 static void reg_todo(struct work_struct *work)
 {
+	rtnl_lock();
 	reg_process_pending_hints();
 	reg_process_pending_beacon_hints();
+	rtnl_unlock();
 }
 
 static void queue_regulatory_request(struct regulatory_request *request)
@@ -1717,29 +1689,23 @@ int regulatory_hint(struct wiphy *wiphy, const char *alpha2)
 }
 EXPORT_SYMBOL(regulatory_hint);
 
-/*
- * We hold wdev_lock() here so we cannot hold cfg80211_mutex() and
- * therefore cannot iterate over the rdev list here.
- */
 void regulatory_hint_11d(struct wiphy *wiphy, enum ieee80211_band band,
 			 const u8 *country_ie, u8 country_ie_len)
 {
 	char alpha2[2];
 	enum environment_cap env = ENVIRON_ANY;
-	struct regulatory_request *request, *lr;
-
-	mutex_lock(&reg_mutex);
-	lr = get_last_request();
-
-	if (unlikely(!lr))
-		goto out;
+	struct regulatory_request *request = NULL, *lr;
 
 	/* IE len must be evenly divisible by 2 */
 	if (country_ie_len & 0x01)
-		goto out;
+		return;
 
 	if (country_ie_len < IEEE80211_COUNTRY_IE_MIN_LEN)
-		goto out;
+		return;
+
+	request = kzalloc(sizeof(*request), GFP_KERNEL);
+	if (!request)
+		return;
 
 	alpha2[0] = country_ie[0];
 	alpha2[1] = country_ie[1];
@@ -1749,19 +1715,21 @@ void regulatory_hint_11d(struct wiphy *wiphy, enum ieee80211_band band,
 	else if (country_ie[2] == 'O')
 		env = ENVIRON_OUTDOOR;
 
+	rcu_read_lock();
+	lr = get_last_request();
+
+	if (unlikely(!lr))
+		goto out;
+
 	/*
 	 * We will run this only upon a successful connection on cfg80211.
 	 * We leave conflict resolution to the workqueue, where can hold
-	 * cfg80211_mutex.
+	 * the RTNL.
 	 */
 	if (lr->initiator == NL80211_REGDOM_SET_BY_COUNTRY_IE &&
 	    lr->wiphy_idx != WIPHY_IDX_INVALID)
 		goto out;
 
-	request = kzalloc(sizeof(struct regulatory_request), GFP_KERNEL);
-	if (!request)
-		goto out;
-
 	request->wiphy_idx = get_wiphy_idx(wiphy);
 	request->alpha2[0] = alpha2[0];
 	request->alpha2[1] = alpha2[1];
@@ -1769,8 +1737,10 @@ void regulatory_hint_11d(struct wiphy *wiphy, enum ieee80211_band band,
 	request->country_ie_env = env;
 
 	queue_regulatory_request(request);
+	request = NULL;
 out:
-	mutex_unlock(&reg_mutex);
+	kfree(request);
+	rcu_read_unlock();
 }
 
 static void restore_alpha2(char *alpha2, bool reset_user)
@@ -1858,8 +1828,7 @@ static void restore_regulatory_settings(bool reset_user)
 	LIST_HEAD(tmp_reg_req_list);
 	struct cfg80211_registered_device *rdev;
 
-	mutex_lock(&cfg80211_mutex);
-	mutex_lock(&reg_mutex);
+	ASSERT_RTNL();
 
 	reset_regdomains(true, &world_regdom);
 	restore_alpha2(alpha2, reset_user);
@@ -1914,9 +1883,6 @@ static void restore_regulatory_settings(bool reset_user)
 	list_splice_tail_init(&tmp_reg_req_list, &reg_requests_list);
 	spin_unlock(&reg_requests_lock);
 
-	mutex_unlock(&reg_mutex);
-	mutex_unlock(&cfg80211_mutex);
-
 	REG_DBG_PRINT("Kicking the queue\n");
 
 	schedule_work(&reg_work);
@@ -2231,7 +2197,6 @@ int set_regdom(const struct ieee80211_regdomain *rd)
 	struct regulatory_request *lr;
 	int r;
 
-	mutex_lock(&reg_mutex);
 	lr = get_last_request();
 
 	/* Note that this doesn't update the wiphys, this is done below */
@@ -2241,14 +2206,12 @@ int set_regdom(const struct ieee80211_regdomain *rd)
 		reg_set_request_processed();
 
 		kfree(rd);
-		goto out;
+		return r;
 	}
 
 	/* This would make this whole thing pointless */
-	if (WARN_ON(!lr->intersect && rd != get_cfg80211_regdom())) {
-		r = -EINVAL;
-		goto out;
-	}
+	if (WARN_ON(!lr->intersect && rd != get_cfg80211_regdom()))
+		return -EINVAL;
 
 	/* update all wiphys now with the new established regulatory domain */
 	update_all_wiphy_regulatory(lr->initiator);
@@ -2259,10 +2222,7 @@ int set_regdom(const struct ieee80211_regdomain *rd)
 
 	reg_set_request_processed();
 
- out:
-	mutex_unlock(&reg_mutex);
-
-	return r;
+	return 0;
 }
 
 int reg_device_uevent(struct device *dev, struct kobj_uevent_env *env)
@@ -2287,23 +2247,17 @@ int reg_device_uevent(struct device *dev, struct kobj_uevent_env *env)
 
 void wiphy_regulatory_register(struct wiphy *wiphy)
 {
-	mutex_lock(&reg_mutex);
-
 	if (!reg_dev_ignore_cell_hint(wiphy))
 		reg_num_devs_support_basehint++;
 
 	wiphy_update_regulatory(wiphy, NL80211_REGDOM_SET_BY_CORE);
-
-	mutex_unlock(&reg_mutex);
 }
 
-/* Caller must hold cfg80211_mutex */
 void wiphy_regulatory_deregister(struct wiphy *wiphy)
 {
 	struct wiphy *request_wiphy = NULL;
 	struct regulatory_request *lr;
 
-	mutex_lock(&reg_mutex);
 	lr = get_last_request();
 
 	if (!reg_dev_ignore_cell_hint(wiphy))
@@ -2316,12 +2270,10 @@ void wiphy_regulatory_deregister(struct wiphy *wiphy)
 		request_wiphy = wiphy_idx_to_wiphy(lr->wiphy_idx);
 
 	if (!request_wiphy || request_wiphy != wiphy)
-		goto out;
+		return;
 
 	lr->wiphy_idx = WIPHY_IDX_INVALID;
 	lr->country_ie_env = ENVIRON_ANY;
-out:
-	mutex_unlock(&reg_mutex);
 }
 
 static void reg_timeout_work(struct work_struct *work)
@@ -2385,9 +2337,9 @@ void regulatory_exit(void)
 	cancel_delayed_work_sync(&reg_timeout);
 
 	/* Lock to suppress warnings */
-	mutex_lock(&reg_mutex);
+	rtnl_lock();
 	reset_regdomains(true, NULL);
-	mutex_unlock(&reg_mutex);
+	rtnl_unlock();
 
 	dev_set_uevent_suppress(&reg_pdev->dev, true);
 
diff --git a/net/wireless/scan.c b/net/wireless/scan.c
index fd99ea495b7e..ae8c186b50d6 100644
--- a/net/wireless/scan.c
+++ b/net/wireless/scan.c
@@ -169,7 +169,7 @@ void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev, bool leak)
 	union iwreq_data wrqu;
 #endif
 
-	lockdep_assert_held(&rdev->sched_scan_mtx);
+	ASSERT_RTNL();
 
 	request = rdev->scan_req;
 
@@ -230,9 +230,9 @@ void __cfg80211_scan_done(struct work_struct *wk)
 	rdev = container_of(wk, struct cfg80211_registered_device,
 			    scan_done_wk);
 
-	mutex_lock(&rdev->sched_scan_mtx);
+	rtnl_lock();
 	___cfg80211_scan_done(rdev, false);
-	mutex_unlock(&rdev->sched_scan_mtx);
+	rtnl_unlock();
 }
 
 void cfg80211_scan_done(struct cfg80211_scan_request *request, bool aborted)
@@ -241,6 +241,7 @@ void cfg80211_scan_done(struct cfg80211_scan_request *request, bool aborted)
 	WARN_ON(request != wiphy_to_dev(request->wiphy)->scan_req);
 
 	request->aborted = aborted;
+	request->notified = true;
 	queue_work(cfg80211_wq, &wiphy_to_dev(request->wiphy)->scan_done_wk);
 }
 EXPORT_SYMBOL(cfg80211_scan_done);
@@ -255,7 +256,7 @@ void __cfg80211_sched_scan_results(struct work_struct *wk)
 
 	request = rdev->sched_scan_req;
 
-	mutex_lock(&rdev->sched_scan_mtx);
+	rtnl_lock();
 
 	/* we don't have sched_scan_req anymore if the scan is stopping */
 	if (request) {
@@ -270,7 +271,7 @@ void __cfg80211_sched_scan_results(struct work_struct *wk)
 			nl80211_send_sched_scan_results(rdev, request->dev);
 	}
 
-	mutex_unlock(&rdev->sched_scan_mtx);
+	rtnl_unlock();
 }
 
 void cfg80211_sched_scan_results(struct wiphy *wiphy)
@@ -289,9 +290,9 @@ void cfg80211_sched_scan_stopped(struct wiphy *wiphy)
 
 	trace_cfg80211_sched_scan_stopped(wiphy);
 
-	mutex_lock(&rdev->sched_scan_mtx);
+	rtnl_lock();
 	__cfg80211_stop_sched_scan(rdev, true);
-	mutex_unlock(&rdev->sched_scan_mtx);
+	rtnl_unlock();
 }
 EXPORT_SYMBOL(cfg80211_sched_scan_stopped);
 
@@ -300,7 +301,7 @@ int __cfg80211_stop_sched_scan(struct cfg80211_registered_device *rdev,
 {
 	struct net_device *dev;
 
-	lockdep_assert_held(&rdev->sched_scan_mtx);
+	ASSERT_RTNL();
 
 	if (!rdev->sched_scan_req)
 		return -ENOENT;
@@ -522,6 +523,7 @@ static int cmp_bss(struct cfg80211_bss *a,
 	}
 }
 
+/* Returned bss is reference counted and must be cleaned up appropriately. */
 struct cfg80211_bss *cfg80211_get_bss(struct wiphy *wiphy,
 				      struct ieee80211_channel *channel,
 				      const u8 *bssid,
@@ -677,6 +679,7 @@ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *dev,
 	return true;
 }
 
+/* Returned bss is reference counted and must be cleaned up appropriately. */
 static struct cfg80211_internal_bss *
 cfg80211_bss_update(struct cfg80211_registered_device *dev,
 		    struct cfg80211_internal_bss *tmp)
@@ -865,6 +868,7 @@ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
 	return channel;
 }
 
+/* Returned bss is reference counted and must be cleaned up appropriately. */
 struct cfg80211_bss*
 cfg80211_inform_bss(struct wiphy *wiphy,
 		    struct ieee80211_channel *channel,
@@ -922,6 +926,7 @@ cfg80211_inform_bss(struct wiphy *wiphy,
 }
 EXPORT_SYMBOL(cfg80211_inform_bss);
 
+/* Returned bss is reference counted and must be cleaned up appropriately. */
 struct cfg80211_bss *
 cfg80211_inform_bss_frame(struct wiphy *wiphy,
 			  struct ieee80211_channel *channel,
@@ -1040,6 +1045,25 @@ void cfg80211_unlink_bss(struct wiphy *wiphy, struct cfg80211_bss *pub)
 EXPORT_SYMBOL(cfg80211_unlink_bss);
 
 #ifdef CONFIG_CFG80211_WEXT
+static struct cfg80211_registered_device *
+cfg80211_get_dev_from_ifindex(struct net *net, int ifindex)
+{
+	struct cfg80211_registered_device *rdev;
+	struct net_device *dev;
+
+	ASSERT_RTNL();
+
+	dev = dev_get_by_index(net, ifindex);
+	if (!dev)
+		return ERR_PTR(-ENODEV);
+	if (dev->ieee80211_ptr)
+		rdev = wiphy_to_dev(dev->ieee80211_ptr->wiphy);
+	else
+		rdev = ERR_PTR(-ENODEV);
+	dev_put(dev);
+	return rdev;
+}
+
 int cfg80211_wext_siwscan(struct net_device *dev,
 			  struct iw_request_info *info,
 			  union iwreq_data *wrqu, char *extra)
@@ -1062,7 +1086,6 @@ int cfg80211_wext_siwscan(struct net_device *dev,
 	if (IS_ERR(rdev))
 		return PTR_ERR(rdev);
 
-	mutex_lock(&rdev->sched_scan_mtx);
 	if (rdev->scan_req) {
 		err = -EBUSY;
 		goto out;
@@ -1169,9 +1192,7 @@ int cfg80211_wext_siwscan(struct net_device *dev,
 		dev_hold(dev);
 	}
  out:
-	mutex_unlock(&rdev->sched_scan_mtx);
 	kfree(creq);
-	cfg80211_unlock_rdev(rdev);
 	return err;
 }
 EXPORT_SYMBOL_GPL(cfg80211_wext_siwscan);
@@ -1470,10 +1491,8 @@ int cfg80211_wext_giwscan(struct net_device *dev,
 	if (IS_ERR(rdev))
 		return PTR_ERR(rdev);
 
-	if (rdev->scan_req) {
-		res = -EAGAIN;
-		goto out;
-	}
+	if (rdev->scan_req)
+		return -EAGAIN;
 
 	res = ieee80211_scan_results(rdev, info, extra, data->length);
 	data->length = 0;
@@ -1482,8 +1501,6 @@ int cfg80211_wext_giwscan(struct net_device *dev,
 		res = 0;
 	}
 
- out:
-	cfg80211_unlock_rdev(rdev);
 	return res;
 }
 EXPORT_SYMBOL_GPL(cfg80211_wext_giwscan);
diff --git a/net/wireless/sme.c b/net/wireless/sme.c
index 3ed35c345cae..1d3cfb1a3f28 100644
--- a/net/wireless/sme.c
+++ b/net/wireless/sme.c
@@ -1,5 +1,7 @@
 /*
- * SME code for cfg80211's connect emulation.
+ * SME code for cfg80211
+ * both driver SME event handling and the SME implementation
+ * (for nl80211's connect() and wext)
  *
  * Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
  * Copyright (C) 2009 Intel Corporation. All rights reserved.
@@ -18,18 +20,24 @@
 #include "reg.h"
 #include "rdev-ops.h"
 
+/*
+ * Software SME in cfg80211, using auth/assoc/deauth calls to the
+ * driver. This is for implementing nl80211's connect/disconnect
+ * and wireless extensions (if configured.)
+ */
+
 struct cfg80211_conn {
 	struct cfg80211_connect_params params;
 	/* these are sub-states of the _CONNECTING sme_state */
 	enum {
-		CFG80211_CONN_IDLE,
 		CFG80211_CONN_SCANNING,
 		CFG80211_CONN_SCAN_AGAIN,
 		CFG80211_CONN_AUTHENTICATE_NEXT,
 		CFG80211_CONN_AUTHENTICATING,
 		CFG80211_CONN_ASSOCIATE_NEXT,
 		CFG80211_CONN_ASSOCIATING,
-		CFG80211_CONN_DEAUTH_ASSOC_FAIL,
+		CFG80211_CONN_DEAUTH,
+		CFG80211_CONN_CONNECTED,
 	} state;
 	u8 bssid[ETH_ALEN], prev_bssid[ETH_ALEN];
 	u8 *ie;
@@ -37,45 +45,16 @@ struct cfg80211_conn {
37 bool auto_auth, prev_bssid_valid; 45 bool auto_auth, prev_bssid_valid;
38}; 46};
39 47
40static bool cfg80211_is_all_idle(void) 48static void cfg80211_sme_free(struct wireless_dev *wdev)
41{
42 struct cfg80211_registered_device *rdev;
43 struct wireless_dev *wdev;
44 bool is_all_idle = true;
45
46 mutex_lock(&cfg80211_mutex);
47
48 /*
49 * All devices must be idle as otherwise if you are actively
50 * scanning some new beacon hints could be learned and would
51 * count as new regulatory hints.
52 */
53 list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
54 cfg80211_lock_rdev(rdev);
55 list_for_each_entry(wdev, &rdev->wdev_list, list) {
56 wdev_lock(wdev);
57 if (wdev->sme_state != CFG80211_SME_IDLE)
58 is_all_idle = false;
59 wdev_unlock(wdev);
60 }
61 cfg80211_unlock_rdev(rdev);
62 }
63
64 mutex_unlock(&cfg80211_mutex);
65
66 return is_all_idle;
67}
68
69static void disconnect_work(struct work_struct *work)
70{ 49{
71 if (!cfg80211_is_all_idle()) 50 if (!wdev->conn)
72 return; 51 return;
73 52
74 regulatory_hint_disconnect(); 53 kfree(wdev->conn->ie);
54 kfree(wdev->conn);
55 wdev->conn = NULL;
75} 56}
76 57
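The new `cfg80211_sme_free()` follows the free-owned-members-then-object-then-NULL idiom, which makes repeated teardown calls harmless. A minimal sketch of the same pattern with hypothetical stand-in types:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for wdev->conn and its owned IE buffer. */
struct conn {
	unsigned char *ie;
	size_t ie_len;
};

struct dev {
	struct conn *conn;
};

/* Mirror of the teardown pattern above: free owned members first,
 * then the object, then clear the pointer so repeated calls are no-ops. */
static void dev_conn_free(struct dev *d)
{
	if (!d->conn)
		return;
	free(d->conn->ie);
	free(d->conn);
	d->conn = NULL;
}
```

Clearing the pointer is what lets multiple diff sites (deauth, auth timeout, failed connect) all funnel into one helper without double-free risk.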
77static DECLARE_WORK(cfg80211_disconnect_work, disconnect_work);
78
79static int cfg80211_conn_scan(struct wireless_dev *wdev) 58static int cfg80211_conn_scan(struct wireless_dev *wdev)
80{ 59{
81 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy); 60 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
@@ -85,7 +64,6 @@ static int cfg80211_conn_scan(struct wireless_dev *wdev)
85 ASSERT_RTNL(); 64 ASSERT_RTNL();
86 ASSERT_RDEV_LOCK(rdev); 65 ASSERT_RDEV_LOCK(rdev);
87 ASSERT_WDEV_LOCK(wdev); 66 ASSERT_WDEV_LOCK(wdev);
88 lockdep_assert_held(&rdev->sched_scan_mtx);
89 67
90 if (rdev->scan_req) 68 if (rdev->scan_req)
91 return -EBUSY; 69 return -EBUSY;
@@ -171,18 +149,21 @@ static int cfg80211_conn_do_work(struct wireless_dev *wdev)
171 params = &wdev->conn->params; 149 params = &wdev->conn->params;
172 150
173 switch (wdev->conn->state) { 151 switch (wdev->conn->state) {
152 case CFG80211_CONN_SCANNING:
153 /* didn't find it during scan ... */
154 return -ENOENT;
174 case CFG80211_CONN_SCAN_AGAIN: 155 case CFG80211_CONN_SCAN_AGAIN:
175 return cfg80211_conn_scan(wdev); 156 return cfg80211_conn_scan(wdev);
176 case CFG80211_CONN_AUTHENTICATE_NEXT: 157 case CFG80211_CONN_AUTHENTICATE_NEXT:
177 BUG_ON(!rdev->ops->auth); 158 BUG_ON(!rdev->ops->auth);
178 wdev->conn->state = CFG80211_CONN_AUTHENTICATING; 159 wdev->conn->state = CFG80211_CONN_AUTHENTICATING;
179 return __cfg80211_mlme_auth(rdev, wdev->netdev, 160 return cfg80211_mlme_auth(rdev, wdev->netdev,
180 params->channel, params->auth_type, 161 params->channel, params->auth_type,
181 params->bssid, 162 params->bssid,
182 params->ssid, params->ssid_len, 163 params->ssid, params->ssid_len,
183 NULL, 0, 164 NULL, 0,
184 params->key, params->key_len, 165 params->key, params->key_len,
185 params->key_idx, NULL, 0); 166 params->key_idx, NULL, 0);
186 case CFG80211_CONN_ASSOCIATE_NEXT: 167 case CFG80211_CONN_ASSOCIATE_NEXT:
187 BUG_ON(!rdev->ops->assoc); 168 BUG_ON(!rdev->ops->assoc);
188 wdev->conn->state = CFG80211_CONN_ASSOCIATING; 169 wdev->conn->state = CFG80211_CONN_ASSOCIATING;
@@ -198,21 +179,20 @@ static int cfg80211_conn_do_work(struct wireless_dev *wdev)
198 req.vht_capa = params->vht_capa; 179 req.vht_capa = params->vht_capa;
199 req.vht_capa_mask = params->vht_capa_mask; 180 req.vht_capa_mask = params->vht_capa_mask;
200 181
201 err = __cfg80211_mlme_assoc(rdev, wdev->netdev, params->channel, 182 err = cfg80211_mlme_assoc(rdev, wdev->netdev, params->channel,
202 params->bssid, params->ssid, 183 params->bssid, params->ssid,
203 params->ssid_len, &req); 184 params->ssid_len, &req);
204 if (err) 185 if (err)
205 __cfg80211_mlme_deauth(rdev, wdev->netdev, params->bssid, 186 cfg80211_mlme_deauth(rdev, wdev->netdev, params->bssid,
206 NULL, 0, 187 NULL, 0,
207 WLAN_REASON_DEAUTH_LEAVING, 188 WLAN_REASON_DEAUTH_LEAVING,
208 false); 189 false);
209 return err; 190 return err;
210 case CFG80211_CONN_DEAUTH_ASSOC_FAIL: 191 case CFG80211_CONN_DEAUTH:
211 __cfg80211_mlme_deauth(rdev, wdev->netdev, params->bssid, 192 cfg80211_mlme_deauth(rdev, wdev->netdev, params->bssid,
212 NULL, 0, 193 NULL, 0,
213 WLAN_REASON_DEAUTH_LEAVING, false); 194 WLAN_REASON_DEAUTH_LEAVING, false);
214 /* return an error so that we call __cfg80211_connect_result() */ 195 return 0;
215 return -EINVAL;
216 default: 196 default:
217 return 0; 197 return 0;
218 } 198 }
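`cfg80211_conn_do_work()` is a step function dispatched on the current sub-state; the new `CFG80211_CONN_SCANNING` case means "the scan completed without finding the BSS", reported as `-ENOENT`. An illustrative sketch of that dispatch shape (hypothetical names, not the kernel API):

```c
#include <errno.h>

enum step_state { STEP_SCANNING, STEP_SCAN_AGAIN, STEP_AUTH_NEXT, STEP_OTHER };

/* Mirrors the dispatch above: still being in SCANNING when the work
 * function runs means the scan finished without finding the BSS, so
 * report -ENOENT; SCAN_AGAIN retriggers a scan; AUTH_NEXT advances. */
static int conn_do_step(enum step_state *state)
{
	switch (*state) {
	case STEP_SCANNING:
		return -ENOENT;	/* didn't find it during scan */
	case STEP_SCAN_AGAIN:
		*state = STEP_SCANNING;	/* pretend a new scan was kicked off */
		return 0;
	case STEP_AUTH_NEXT:
		*state = STEP_OTHER;
		return 0;
	default:
		return 0;
	}
}
```

A nonzero return is what makes the caller report `WLAN_STATUS_UNSPECIFIED_FAILURE` and free the attempt, as the `cfg80211_conn_work()` hunk below shows.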
@@ -226,9 +206,6 @@ void cfg80211_conn_work(struct work_struct *work)
226 u8 bssid_buf[ETH_ALEN], *bssid = NULL; 206 u8 bssid_buf[ETH_ALEN], *bssid = NULL;
227 207
228 rtnl_lock(); 208 rtnl_lock();
229 cfg80211_lock_rdev(rdev);
230 mutex_lock(&rdev->devlist_mtx);
231 mutex_lock(&rdev->sched_scan_mtx);
232 209
233 list_for_each_entry(wdev, &rdev->wdev_list, list) { 210 list_for_each_entry(wdev, &rdev->wdev_list, list) {
234 if (!wdev->netdev) 211 if (!wdev->netdev)
@@ -239,7 +216,8 @@ void cfg80211_conn_work(struct work_struct *work)
239 wdev_unlock(wdev); 216 wdev_unlock(wdev);
240 continue; 217 continue;
241 } 218 }
242 if (wdev->sme_state != CFG80211_SME_CONNECTING || !wdev->conn) { 219 if (!wdev->conn ||
220 wdev->conn->state == CFG80211_CONN_CONNECTED) {
243 wdev_unlock(wdev); 221 wdev_unlock(wdev);
244 continue; 222 continue;
245 } 223 }
@@ -247,21 +225,21 @@ void cfg80211_conn_work(struct work_struct *work)
247 memcpy(bssid_buf, wdev->conn->params.bssid, ETH_ALEN); 225 memcpy(bssid_buf, wdev->conn->params.bssid, ETH_ALEN);
248 bssid = bssid_buf; 226 bssid = bssid_buf;
249 } 227 }
250 if (cfg80211_conn_do_work(wdev)) 228 if (cfg80211_conn_do_work(wdev)) {
251 __cfg80211_connect_result( 229 __cfg80211_connect_result(
252 wdev->netdev, bssid, 230 wdev->netdev, bssid,
253 NULL, 0, NULL, 0, 231 NULL, 0, NULL, 0,
254 WLAN_STATUS_UNSPECIFIED_FAILURE, 232 WLAN_STATUS_UNSPECIFIED_FAILURE,
255 false, NULL); 233 false, NULL);
234 cfg80211_sme_free(wdev);
235 }
256 wdev_unlock(wdev); 236 wdev_unlock(wdev);
257 } 237 }
258 238
259 mutex_unlock(&rdev->sched_scan_mtx);
260 mutex_unlock(&rdev->devlist_mtx);
261 cfg80211_unlock_rdev(rdev);
262 rtnl_unlock(); 239 rtnl_unlock();
263} 240}
264 241
242/* Returned bss is reference counted and must be cleaned up appropriately. */
265static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev) 243static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
266{ 244{
267 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy); 245 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
@@ -299,9 +277,6 @@ static void __cfg80211_sme_scan_done(struct net_device *dev)
299 277
300 ASSERT_WDEV_LOCK(wdev); 278 ASSERT_WDEV_LOCK(wdev);
301 279
302 if (wdev->sme_state != CFG80211_SME_CONNECTING)
303 return;
304
305 if (!wdev->conn) 280 if (!wdev->conn)
306 return; 281 return;
307 282
@@ -310,20 +285,10 @@ static void __cfg80211_sme_scan_done(struct net_device *dev)
310 return; 285 return;
311 286
312 bss = cfg80211_get_conn_bss(wdev); 287 bss = cfg80211_get_conn_bss(wdev);
313 if (bss) { 288 if (bss)
314 cfg80211_put_bss(&rdev->wiphy, bss); 289 cfg80211_put_bss(&rdev->wiphy, bss);
315 } else { 290 else
316 /* not found */ 291 schedule_work(&rdev->conn_work);
317 if (wdev->conn->state == CFG80211_CONN_SCAN_AGAIN)
318 schedule_work(&rdev->conn_work);
319 else
320 __cfg80211_connect_result(
321 wdev->netdev,
322 wdev->conn->params.bssid,
323 NULL, 0, NULL, 0,
324 WLAN_STATUS_UNSPECIFIED_FAILURE,
325 false, NULL);
326 }
327} 292}
328 293
329void cfg80211_sme_scan_done(struct net_device *dev) 294void cfg80211_sme_scan_done(struct net_device *dev)
@@ -335,10 +300,8 @@ void cfg80211_sme_scan_done(struct net_device *dev)
335 wdev_unlock(wdev); 300 wdev_unlock(wdev);
336} 301}
337 302
338void cfg80211_sme_rx_auth(struct net_device *dev, 303void cfg80211_sme_rx_auth(struct wireless_dev *wdev, const u8 *buf, size_t len)
339 const u8 *buf, size_t len)
340{ 304{
341 struct wireless_dev *wdev = dev->ieee80211_ptr;
342 struct wiphy *wiphy = wdev->wiphy; 305 struct wiphy *wiphy = wdev->wiphy;
343 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); 306 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
344 struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf; 307 struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf;
@@ -346,11 +309,7 @@ void cfg80211_sme_rx_auth(struct net_device *dev,
346 309
347 ASSERT_WDEV_LOCK(wdev); 310 ASSERT_WDEV_LOCK(wdev);
348 311
349 /* should only RX auth frames when connecting */ 312 if (!wdev->conn || wdev->conn->state == CFG80211_CONN_CONNECTED)
350 if (wdev->sme_state != CFG80211_SME_CONNECTING)
351 return;
352
353 if (WARN_ON(!wdev->conn))
354 return; 313 return;
355 314
356 if (status_code == WLAN_STATUS_NOT_SUPPORTED_AUTH_ALG && 315 if (status_code == WLAN_STATUS_NOT_SUPPORTED_AUTH_ALG &&
@@ -379,46 +338,227 @@ void cfg80211_sme_rx_auth(struct net_device *dev,
379 wdev->conn->state = CFG80211_CONN_AUTHENTICATE_NEXT; 338 wdev->conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
380 schedule_work(&rdev->conn_work); 339 schedule_work(&rdev->conn_work);
381 } else if (status_code != WLAN_STATUS_SUCCESS) { 340 } else if (status_code != WLAN_STATUS_SUCCESS) {
382 __cfg80211_connect_result(dev, mgmt->bssid, NULL, 0, NULL, 0, 341 __cfg80211_connect_result(wdev->netdev, mgmt->bssid,
342 NULL, 0, NULL, 0,
383 status_code, false, NULL); 343 status_code, false, NULL);
384 } else if (wdev->sme_state == CFG80211_SME_CONNECTING && 344 } else if (wdev->conn->state == CFG80211_CONN_AUTHENTICATING) {
385 wdev->conn->state == CFG80211_CONN_AUTHENTICATING) {
386 wdev->conn->state = CFG80211_CONN_ASSOCIATE_NEXT; 345 wdev->conn->state = CFG80211_CONN_ASSOCIATE_NEXT;
387 schedule_work(&rdev->conn_work); 346 schedule_work(&rdev->conn_work);
388 } 347 }
389} 348}
390 349
391bool cfg80211_sme_failed_reassoc(struct wireless_dev *wdev) 350bool cfg80211_sme_rx_assoc_resp(struct wireless_dev *wdev, u16 status)
392{ 351{
393 struct wiphy *wiphy = wdev->wiphy; 352 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
394 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy);
395 353
396 if (WARN_ON(!wdev->conn)) 354 if (!wdev->conn)
397 return false; 355 return false;
398 356
399 if (!wdev->conn->prev_bssid_valid) 357 if (status == WLAN_STATUS_SUCCESS) {
358 wdev->conn->state = CFG80211_CONN_CONNECTED;
400 return false; 359 return false;
360 }
401 361
402 /* 362 if (wdev->conn->prev_bssid_valid) {
403 * Some stupid APs don't accept reassoc, so we 363 /*
404 * need to fall back to trying regular assoc. 364 * Some stupid APs don't accept reassoc, so we
405 */ 365 * need to fall back to trying regular assoc;
406 wdev->conn->prev_bssid_valid = false; 366 * return true so no event is sent to userspace.
407 wdev->conn->state = CFG80211_CONN_ASSOCIATE_NEXT; 367 */
368 wdev->conn->prev_bssid_valid = false;
369 wdev->conn->state = CFG80211_CONN_ASSOCIATE_NEXT;
370 schedule_work(&rdev->conn_work);
371 return true;
372 }
373
374 wdev->conn->state = CFG80211_CONN_DEAUTH;
408 schedule_work(&rdev->conn_work); 375 schedule_work(&rdev->conn_work);
376 return false;
377}
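The boolean returned by `cfg80211_sme_rx_assoc_resp()` tells the caller whether to suppress the userspace event: a failed *re*association is silently retried as a plain association. A small sketch of that decision (hypothetical context struct):

```c
#include <stdbool.h>

#define STATUS_SUCCESS 0	/* stand-in for WLAN_STATUS_SUCCESS */

struct assoc_ctx {
	bool prev_bssid_valid;	/* true when this attempt was a reassoc */
	bool connected;
	bool deauth_pending;
};

/* Mirrors the logic above: returns true when the event should be
 * swallowed because we silently retry with a regular assoc. */
static bool handle_assoc_resp(struct assoc_ctx *c, int status)
{
	if (status == STATUS_SUCCESS) {
		c->connected = true;
		return false;
	}
	if (c->prev_bssid_valid) {
		c->prev_bssid_valid = false;	/* fall back to plain assoc */
		return true;
	}
	c->deauth_pending = true;	/* give up: deauthenticate */
	return false;
}
```

Only the reassoc-fallback path returns true; a plain association failure schedules a deauth and lets the failure event through.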
409 378
410 return true; 379void cfg80211_sme_deauth(struct wireless_dev *wdev)
380{
381 cfg80211_sme_free(wdev);
411} 382}
412 383
413void cfg80211_sme_failed_assoc(struct wireless_dev *wdev) 384void cfg80211_sme_auth_timeout(struct wireless_dev *wdev)
414{ 385{
415 struct wiphy *wiphy = wdev->wiphy; 386 cfg80211_sme_free(wdev);
416 struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); 387}
417 388
418 wdev->conn->state = CFG80211_CONN_DEAUTH_ASSOC_FAIL; 389void cfg80211_sme_disassoc(struct wireless_dev *wdev)
390{
391 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
392
393 if (!wdev->conn)
394 return;
395
396 wdev->conn->state = CFG80211_CONN_DEAUTH;
419 schedule_work(&rdev->conn_work); 397 schedule_work(&rdev->conn_work);
420} 398}
421 399
400void cfg80211_sme_assoc_timeout(struct wireless_dev *wdev)
401{
402 cfg80211_sme_disassoc(wdev);
403}
404
405static int cfg80211_sme_connect(struct wireless_dev *wdev,
406 struct cfg80211_connect_params *connect,
407 const u8 *prev_bssid)
408{
409 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
410 struct cfg80211_bss *bss;
411 int err;
412
413 if (!rdev->ops->auth || !rdev->ops->assoc)
414 return -EOPNOTSUPP;
415
416 if (wdev->current_bss)
417 return -EALREADY;
418
419 if (WARN_ON(wdev->conn))
420 return -EINPROGRESS;
421
422 wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);
423 if (!wdev->conn)
424 return -ENOMEM;
425
426 /*
427 * Copy all parameters, and treat the IEs, BSSID and SSID explicitly.

428 */
429 memcpy(&wdev->conn->params, connect, sizeof(*connect));
430 if (connect->bssid) {
431 wdev->conn->params.bssid = wdev->conn->bssid;
432 memcpy(wdev->conn->bssid, connect->bssid, ETH_ALEN);
433 }
434
435 if (connect->ie) {
436 wdev->conn->ie = kmemdup(connect->ie, connect->ie_len,
437 GFP_KERNEL);
438 wdev->conn->params.ie = wdev->conn->ie;
439 if (!wdev->conn->ie) {
440 kfree(wdev->conn);
441 wdev->conn = NULL;
442 return -ENOMEM;
443 }
444 }
445
446 if (connect->auth_type == NL80211_AUTHTYPE_AUTOMATIC) {
447 wdev->conn->auto_auth = true;
448 /* start with open system ... should mostly work */
449 wdev->conn->params.auth_type =
450 NL80211_AUTHTYPE_OPEN_SYSTEM;
451 } else {
452 wdev->conn->auto_auth = false;
453 }
454
455 wdev->conn->params.ssid = wdev->ssid;
456 wdev->conn->params.ssid_len = connect->ssid_len;
457
458 /* see if we have the bss already */
459 bss = cfg80211_get_conn_bss(wdev);
460
461 if (prev_bssid) {
462 memcpy(wdev->conn->prev_bssid, prev_bssid, ETH_ALEN);
463 wdev->conn->prev_bssid_valid = true;
464 }
465
466 /* we're good if we have a matching bss struct */
467 if (bss) {
468 wdev->conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
469 err = cfg80211_conn_do_work(wdev);
470 cfg80211_put_bss(wdev->wiphy, bss);
471 } else {
472 /* otherwise we'll need to scan for the AP first */
473 err = cfg80211_conn_scan(wdev);
474
475 /*
476 * If we can't scan right now, then we need to scan again
477 * after the current scan finished, since the parameters
478 * changed (unless we find a good AP anyway).
479 */
480 if (err == -EBUSY) {
481 err = 0;
482 wdev->conn->state = CFG80211_CONN_SCAN_AGAIN;
483 }
484 }
485
486 if (err)
487 cfg80211_sme_free(wdev);
488
489 return err;
490}
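`cfg80211_sme_connect()` deep-copies the caller's parameters (note the `kmemdup()` of the IEs) because the caller's buffers do not outlive the call, and unwinds the partial allocation on failure. The same ownership pattern in a standalone sketch (hypothetical types):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical parameter block: the caller's `ie` buffer is borrowed,
 * so a connection attempt must own a private copy. */
struct params {
	const unsigned char *ie;
	size_t ie_len;
};

struct attempt {
	unsigned char *ie;	/* owned copy */
	size_t ie_len;
};

/* Mirrors the kmemdup pattern above, unwinding on allocation failure. */
static struct attempt *attempt_new(const struct params *p)
{
	struct attempt *a = calloc(1, sizeof(*a));
	if (!a)
		return NULL;
	if (p->ie && p->ie_len) {
		a->ie = malloc(p->ie_len);
		if (!a->ie) {
			free(a);	/* unwind the half-built object */
			return NULL;
		}
		memcpy(a->ie, p->ie, p->ie_len);
		a->ie_len = p->ie_len;
	}
	return a;
}
```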
491
492static int cfg80211_sme_disconnect(struct wireless_dev *wdev, u16 reason)
493{
494 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
495 int err;
496
497 if (!wdev->conn)
498 return 0;
499
500 if (!rdev->ops->deauth)
501 return -EOPNOTSUPP;
502
503 if (wdev->conn->state == CFG80211_CONN_SCANNING ||
504 wdev->conn->state == CFG80211_CONN_SCAN_AGAIN) {
505 err = 0;
506 goto out;
507 }
508
509 /* wdev->conn->params.bssid must be set if > SCANNING */
510 err = cfg80211_mlme_deauth(rdev, wdev->netdev,
511 wdev->conn->params.bssid,
512 NULL, 0, reason, false);
513 out:
514 cfg80211_sme_free(wdev);
515 return err;
516}
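`cfg80211_sme_disconnect()` only sends a deauth frame once the attempt has progressed past scanning — before authentication there is no peer state to tear down. That guard, sketched with hypothetical names:

```c
#include <stdbool.h>

enum dstate { D_SCANNING, D_SCAN_AGAIN, D_AUTHENTICATING, D_ASSOCIATING };

/* Mirrors the branch above: before authentication starts there is no
 * peer state to tear down, so no deauth frame is needed. */
static bool disconnect_needs_deauth(enum dstate s)
{
	return s != D_SCANNING && s != D_SCAN_AGAIN;
}
```

Either way the attempt is freed at the end, which is why the function can unconditionally call `cfg80211_sme_free()`.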
517
518/*
519 * code shared for in-device and software SME
520 */
521
522static bool cfg80211_is_all_idle(void)
523{
524 struct cfg80211_registered_device *rdev;
525 struct wireless_dev *wdev;
526 bool is_all_idle = true;
527
528 /*
529 * All devices must be idle as otherwise if you are actively
530 * scanning some new beacon hints could be learned and would
531 * count as new regulatory hints.
532 */
533 list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
534 list_for_each_entry(wdev, &rdev->wdev_list, list) {
535 wdev_lock(wdev);
536 if (wdev->conn || wdev->current_bss)
537 is_all_idle = false;
538 wdev_unlock(wdev);
539 }
540 }
541
542 return is_all_idle;
543}
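With the `sme_state` field gone, "idle" is now derived from two pointers: no pending software-SME attempt (`conn`) and no current association (`current_bss`). The aggregate check over all devices, sketched with stand-in structs:

```c
#include <stdbool.h>
#include <stddef.h>

struct wdev_stub {
	bool has_conn;		/* software-SME attempt in flight */
	bool has_current_bss;	/* currently associated */
};

/* Mirrors the check above: a device is busy if it has either a pending
 * connection attempt or an association; every device must be idle for
 * the regulatory disconnect hint to fire. */
static bool all_idle(const struct wdev_stub *wdevs, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (wdevs[i].has_conn || wdevs[i].has_current_bss)
			return false;
	return true;
}
```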
544
545static void disconnect_work(struct work_struct *work)
546{
547 rtnl_lock();
548 if (cfg80211_is_all_idle())
549 regulatory_hint_disconnect();
550 rtnl_unlock();
551}
552
553static DECLARE_WORK(cfg80211_disconnect_work, disconnect_work);
554
555
556/*
557 * API calls for drivers implementing connect/disconnect and
558 * SME event handling
559 */
560
561/* This method must consume bss one way or another */
422void __cfg80211_connect_result(struct net_device *dev, const u8 *bssid, 562void __cfg80211_connect_result(struct net_device *dev, const u8 *bssid,
423 const u8 *req_ie, size_t req_ie_len, 563 const u8 *req_ie, size_t req_ie_len,
424 const u8 *resp_ie, size_t resp_ie_len, 564 const u8 *resp_ie, size_t resp_ie_len,
@@ -434,11 +574,10 @@ void __cfg80211_connect_result(struct net_device *dev, const u8 *bssid,
434 ASSERT_WDEV_LOCK(wdev); 574 ASSERT_WDEV_LOCK(wdev);
435 575
436 if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION && 576 if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION &&
437 wdev->iftype != NL80211_IFTYPE_P2P_CLIENT)) 577 wdev->iftype != NL80211_IFTYPE_P2P_CLIENT)) {
438 return; 578 cfg80211_put_bss(wdev->wiphy, bss);
439
440 if (wdev->sme_state != CFG80211_SME_CONNECTING)
441 return; 579 return;
580 }
442 581
443 nl80211_send_connect_result(wiphy_to_dev(wdev->wiphy), dev, 582 nl80211_send_connect_result(wiphy_to_dev(wdev->wiphy), dev,
444 bssid, req_ie, req_ie_len, 583 bssid, req_ie, req_ie_len,
@@ -476,38 +615,30 @@ void __cfg80211_connect_result(struct net_device *dev, const u8 *bssid,
476 wdev->current_bss = NULL; 615 wdev->current_bss = NULL;
477 } 616 }
478 617
479 if (wdev->conn)
480 wdev->conn->state = CFG80211_CONN_IDLE;
481
482 if (status != WLAN_STATUS_SUCCESS) { 618 if (status != WLAN_STATUS_SUCCESS) {
483 wdev->sme_state = CFG80211_SME_IDLE;
484 if (wdev->conn)
485 kfree(wdev->conn->ie);
486 kfree(wdev->conn);
487 wdev->conn = NULL;
488 kfree(wdev->connect_keys); 619 kfree(wdev->connect_keys);
489 wdev->connect_keys = NULL; 620 wdev->connect_keys = NULL;
490 wdev->ssid_len = 0; 621 wdev->ssid_len = 0;
491 cfg80211_put_bss(wdev->wiphy, bss); 622 if (bss) {
623 cfg80211_unhold_bss(bss_from_pub(bss));
624 cfg80211_put_bss(wdev->wiphy, bss);
625 }
492 return; 626 return;
493 } 627 }
494 628
495 if (!bss) 629 if (!bss) {
496 bss = cfg80211_get_bss(wdev->wiphy, 630 WARN_ON_ONCE(!wiphy_to_dev(wdev->wiphy)->ops->connect);
497 wdev->conn ? wdev->conn->params.channel : 631 bss = cfg80211_get_bss(wdev->wiphy, NULL, bssid,
498 NULL,
499 bssid,
500 wdev->ssid, wdev->ssid_len, 632 wdev->ssid, wdev->ssid_len,
501 WLAN_CAPABILITY_ESS, 633 WLAN_CAPABILITY_ESS,
502 WLAN_CAPABILITY_ESS); 634 WLAN_CAPABILITY_ESS);
635 if (WARN_ON(!bss))
636 return;
637 cfg80211_hold_bss(bss_from_pub(bss));
638 }
503 639
504 if (WARN_ON(!bss))
505 return;
506
507 cfg80211_hold_bss(bss_from_pub(bss));
508 wdev->current_bss = bss_from_pub(bss); 640 wdev->current_bss = bss_from_pub(bss);
509 641
510 wdev->sme_state = CFG80211_SME_CONNECTED;
511 cfg80211_upload_connect_keys(wdev); 642 cfg80211_upload_connect_keys(wdev);
512 643
513 rcu_read_lock(); 644 rcu_read_lock();
@@ -543,8 +674,6 @@ void cfg80211_connect_result(struct net_device *dev, const u8 *bssid,
543 struct cfg80211_event *ev; 674 struct cfg80211_event *ev;
544 unsigned long flags; 675 unsigned long flags;
545 676
546 CFG80211_DEV_WARN_ON(wdev->sme_state != CFG80211_SME_CONNECTING);
547
548 ev = kzalloc(sizeof(*ev) + req_ie_len + resp_ie_len, gfp); 677 ev = kzalloc(sizeof(*ev) + req_ie_len + resp_ie_len, gfp);
549 if (!ev) 678 if (!ev)
550 return; 679 return;
@@ -571,6 +700,7 @@ void cfg80211_connect_result(struct net_device *dev, const u8 *bssid,
571} 700}
572EXPORT_SYMBOL(cfg80211_connect_result); 701EXPORT_SYMBOL(cfg80211_connect_result);
573 702
703/* Consumes bss object one way or another */
574void __cfg80211_roamed(struct wireless_dev *wdev, 704void __cfg80211_roamed(struct wireless_dev *wdev,
575 struct cfg80211_bss *bss, 705 struct cfg80211_bss *bss,
576 const u8 *req_ie, size_t req_ie_len, 706 const u8 *req_ie, size_t req_ie_len,
@@ -585,14 +715,9 @@ void __cfg80211_roamed(struct wireless_dev *wdev,
585 wdev->iftype != NL80211_IFTYPE_P2P_CLIENT)) 715 wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
586 goto out; 716 goto out;
587 717
588 if (wdev->sme_state != CFG80211_SME_CONNECTED) 718 if (WARN_ON(!wdev->current_bss))
589 goto out; 719 goto out;
590 720
591 /* internal error -- how did we get to CONNECTED w/o BSS? */
592 if (WARN_ON(!wdev->current_bss)) {
593 goto out;
594 }
595
596 cfg80211_unhold_bss(wdev->current_bss); 721 cfg80211_unhold_bss(wdev->current_bss);
597 cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub); 722 cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
598 wdev->current_bss = NULL; 723 wdev->current_bss = NULL;
@@ -641,8 +766,6 @@ void cfg80211_roamed(struct net_device *dev,
641 struct wireless_dev *wdev = dev->ieee80211_ptr; 766 struct wireless_dev *wdev = dev->ieee80211_ptr;
642 struct cfg80211_bss *bss; 767 struct cfg80211_bss *bss;
643 768
644 CFG80211_DEV_WARN_ON(wdev->sme_state != CFG80211_SME_CONNECTED);
645
646 bss = cfg80211_get_bss(wdev->wiphy, channel, bssid, wdev->ssid, 769 bss = cfg80211_get_bss(wdev->wiphy, channel, bssid, wdev->ssid,
647 wdev->ssid_len, WLAN_CAPABILITY_ESS, 770 wdev->ssid_len, WLAN_CAPABILITY_ESS,
648 WLAN_CAPABILITY_ESS); 771 WLAN_CAPABILITY_ESS);
@@ -654,6 +777,7 @@ void cfg80211_roamed(struct net_device *dev,
654} 777}
655EXPORT_SYMBOL(cfg80211_roamed); 778EXPORT_SYMBOL(cfg80211_roamed);
656 779
780/* Consumes bss object one way or another */
657void cfg80211_roamed_bss(struct net_device *dev, 781void cfg80211_roamed_bss(struct net_device *dev,
658 struct cfg80211_bss *bss, const u8 *req_ie, 782 struct cfg80211_bss *bss, const u8 *req_ie,
659 size_t req_ie_len, const u8 *resp_ie, 783 size_t req_ie_len, const u8 *resp_ie,
@@ -664,8 +788,6 @@ void cfg80211_roamed_bss(struct net_device *dev,
664 struct cfg80211_event *ev; 788 struct cfg80211_event *ev;
665 unsigned long flags; 789 unsigned long flags;
666 790
667 CFG80211_DEV_WARN_ON(wdev->sme_state != CFG80211_SME_CONNECTED);
668
669 if (WARN_ON(!bss)) 791 if (WARN_ON(!bss))
670 return; 792 return;
671 793
@@ -707,25 +829,14 @@ void __cfg80211_disconnected(struct net_device *dev, const u8 *ie,
707 wdev->iftype != NL80211_IFTYPE_P2P_CLIENT)) 829 wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
708 return; 830 return;
709 831
710 if (wdev->sme_state != CFG80211_SME_CONNECTED)
711 return;
712
713 if (wdev->current_bss) { 832 if (wdev->current_bss) {
714 cfg80211_unhold_bss(wdev->current_bss); 833 cfg80211_unhold_bss(wdev->current_bss);
715 cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub); 834 cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
716 } 835 }
717 836
718 wdev->current_bss = NULL; 837 wdev->current_bss = NULL;
719 wdev->sme_state = CFG80211_SME_IDLE;
720 wdev->ssid_len = 0; 838 wdev->ssid_len = 0;
721 839
722 if (wdev->conn) {
723 kfree(wdev->conn->ie);
724 wdev->conn->ie = NULL;
725 kfree(wdev->conn);
726 wdev->conn = NULL;
727 }
728
729 nl80211_send_disconnected(rdev, dev, reason, ie, ie_len, from_ap); 840 nl80211_send_disconnected(rdev, dev, reason, ie, ie_len, from_ap);
730 841
731 /* 842 /*
@@ -754,8 +865,6 @@ void cfg80211_disconnected(struct net_device *dev, u16 reason,
754 struct cfg80211_event *ev; 865 struct cfg80211_event *ev;
755 unsigned long flags; 866 unsigned long flags;
756 867
757 CFG80211_DEV_WARN_ON(wdev->sme_state != CFG80211_SME_CONNECTED);
758
759 ev = kzalloc(sizeof(*ev) + ie_len, gfp); 868 ev = kzalloc(sizeof(*ev) + ie_len, gfp);
760 if (!ev) 869 if (!ev)
761 return; 870 return;
@@ -773,21 +882,20 @@ void cfg80211_disconnected(struct net_device *dev, u16 reason,
773} 882}
774EXPORT_SYMBOL(cfg80211_disconnected); 883EXPORT_SYMBOL(cfg80211_disconnected);
775 884
776int __cfg80211_connect(struct cfg80211_registered_device *rdev, 885/*
777 struct net_device *dev, 886 * API calls for nl80211/wext compatibility code
778 struct cfg80211_connect_params *connect, 887 */
779 struct cfg80211_cached_keys *connkeys, 888int cfg80211_connect(struct cfg80211_registered_device *rdev,
780 const u8 *prev_bssid) 889 struct net_device *dev,
890 struct cfg80211_connect_params *connect,
891 struct cfg80211_cached_keys *connkeys,
892 const u8 *prev_bssid)
781{ 893{
782 struct wireless_dev *wdev = dev->ieee80211_ptr; 894 struct wireless_dev *wdev = dev->ieee80211_ptr;
783 struct cfg80211_bss *bss = NULL;
784 int err; 895 int err;
785 896
786 ASSERT_WDEV_LOCK(wdev); 897 ASSERT_WDEV_LOCK(wdev);
787 898
788 if (wdev->sme_state != CFG80211_SME_IDLE)
789 return -EALREADY;
790
791 if (WARN_ON(wdev->connect_keys)) { 899 if (WARN_ON(wdev->connect_keys)) {
792 kfree(wdev->connect_keys); 900 kfree(wdev->connect_keys);
793 wdev->connect_keys = NULL; 901 wdev->connect_keys = NULL;
@@ -823,219 +931,43 @@ int __cfg80211_connect(struct cfg80211_registered_device *rdev,
823 } 931 }
824 } 932 }
825 933
826 if (!rdev->ops->connect) { 934 wdev->connect_keys = connkeys;
827 if (!rdev->ops->auth || !rdev->ops->assoc) 935 memcpy(wdev->ssid, connect->ssid, connect->ssid_len);
828 return -EOPNOTSUPP; 936 wdev->ssid_len = connect->ssid_len;
829
830 if (WARN_ON(wdev->conn))
831 return -EINPROGRESS;
832
833 wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);
834 if (!wdev->conn)
835 return -ENOMEM;
836
837 /*
838 * Copy all parameters, and treat explicitly IEs, BSSID, SSID.
839 */
840 memcpy(&wdev->conn->params, connect, sizeof(*connect));
841 if (connect->bssid) {
842 wdev->conn->params.bssid = wdev->conn->bssid;
843 memcpy(wdev->conn->bssid, connect->bssid, ETH_ALEN);
844 }
845
846 if (connect->ie) {
847 wdev->conn->ie = kmemdup(connect->ie, connect->ie_len,
848 GFP_KERNEL);
849 wdev->conn->params.ie = wdev->conn->ie;
850 if (!wdev->conn->ie) {
851 kfree(wdev->conn);
852 wdev->conn = NULL;
853 return -ENOMEM;
854 }
855 }
856
857 if (connect->auth_type == NL80211_AUTHTYPE_AUTOMATIC) {
858 wdev->conn->auto_auth = true;
859 /* start with open system ... should mostly work */
860 wdev->conn->params.auth_type =
861 NL80211_AUTHTYPE_OPEN_SYSTEM;
862 } else {
863 wdev->conn->auto_auth = false;
864 }
865
866 memcpy(wdev->ssid, connect->ssid, connect->ssid_len);
867 wdev->ssid_len = connect->ssid_len;
868 wdev->conn->params.ssid = wdev->ssid;
869 wdev->conn->params.ssid_len = connect->ssid_len;
870
871 /* see if we have the bss already */
872 bss = cfg80211_get_conn_bss(wdev);
873
874 wdev->sme_state = CFG80211_SME_CONNECTING;
875 wdev->connect_keys = connkeys;
876
877 if (prev_bssid) {
878 memcpy(wdev->conn->prev_bssid, prev_bssid, ETH_ALEN);
879 wdev->conn->prev_bssid_valid = true;
880 }
881
882 /* we're good if we have a matching bss struct */
883 if (bss) {
884 wdev->conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
885 err = cfg80211_conn_do_work(wdev);
886 cfg80211_put_bss(wdev->wiphy, bss);
887 } else {
888 /* otherwise we'll need to scan for the AP first */
889 err = cfg80211_conn_scan(wdev);
890 /*
891 * If we can't scan right now, then we need to scan again
892 * after the current scan finished, since the parameters
893 * changed (unless we find a good AP anyway).
894 */
895 if (err == -EBUSY) {
896 err = 0;
897 wdev->conn->state = CFG80211_CONN_SCAN_AGAIN;
898 }
899 }
900 if (err) {
901 kfree(wdev->conn->ie);
902 kfree(wdev->conn);
903 wdev->conn = NULL;
904 wdev->sme_state = CFG80211_SME_IDLE;
905 wdev->connect_keys = NULL;
906 wdev->ssid_len = 0;
907 }
908 937
909 return err; 938 if (!rdev->ops->connect)
910 } else { 939 err = cfg80211_sme_connect(wdev, connect, prev_bssid);
911 wdev->sme_state = CFG80211_SME_CONNECTING; 940 else
912 wdev->connect_keys = connkeys;
913 err = rdev_connect(rdev, dev, connect); 941 err = rdev_connect(rdev, dev, connect);
914 if (err) {
915 wdev->connect_keys = NULL;
916 wdev->sme_state = CFG80211_SME_IDLE;
917 return err;
918 }
919
920 memcpy(wdev->ssid, connect->ssid, connect->ssid_len);
921 wdev->ssid_len = connect->ssid_len;
922 942
923 return 0; 943 if (err) {
944 wdev->connect_keys = NULL;
945 wdev->ssid_len = 0;
946 return err;
924 } 947 }
925}
926
927int cfg80211_connect(struct cfg80211_registered_device *rdev,
928 struct net_device *dev,
929 struct cfg80211_connect_params *connect,
930 struct cfg80211_cached_keys *connkeys)
931{
932 int err;
933
934 mutex_lock(&rdev->devlist_mtx);
935 /* might request scan - scan_mtx -> wdev_mtx dependency */
936 mutex_lock(&rdev->sched_scan_mtx);
937 wdev_lock(dev->ieee80211_ptr);
938 err = __cfg80211_connect(rdev, dev, connect, connkeys, NULL);
939 wdev_unlock(dev->ieee80211_ptr);
940 mutex_unlock(&rdev->sched_scan_mtx);
941 mutex_unlock(&rdev->devlist_mtx);
942 948
943 return err; 949 return 0;
944} 950}
945 951
946int __cfg80211_disconnect(struct cfg80211_registered_device *rdev, 952int cfg80211_disconnect(struct cfg80211_registered_device *rdev,
947 struct net_device *dev, u16 reason, bool wextev) 953 struct net_device *dev, u16 reason, bool wextev)
948{ 954{
949 struct wireless_dev *wdev = dev->ieee80211_ptr; 955 struct wireless_dev *wdev = dev->ieee80211_ptr;
950 int err; 956 int err;
951 957
952 ASSERT_WDEV_LOCK(wdev); 958 ASSERT_WDEV_LOCK(wdev);
953 959
954 if (wdev->sme_state == CFG80211_SME_IDLE)
955 return -EINVAL;
956
957 kfree(wdev->connect_keys); 960 kfree(wdev->connect_keys);
958 wdev->connect_keys = NULL; 961 wdev->connect_keys = NULL;
959 962
960 if (!rdev->ops->disconnect) { 963 if (wdev->conn) {
961 if (!rdev->ops->deauth) 964 err = cfg80211_sme_disconnect(wdev, reason);
962 return -EOPNOTSUPP; 965 } else if (!rdev->ops->disconnect) {
963 966 cfg80211_mlme_down(rdev, dev);
964 /* was it connected by userspace SME? */ 967 err = 0;
965 if (!wdev->conn) {
966 cfg80211_mlme_down(rdev, dev);
967 goto disconnect;
968 }
969
970 if (wdev->sme_state == CFG80211_SME_CONNECTING &&
971 (wdev->conn->state == CFG80211_CONN_SCANNING ||
972 wdev->conn->state == CFG80211_CONN_SCAN_AGAIN)) {
973 wdev->sme_state = CFG80211_SME_IDLE;
974 kfree(wdev->conn->ie);
975 kfree(wdev->conn);
976 wdev->conn = NULL;
977 wdev->ssid_len = 0;
978 return 0;
979 }
980
981 /* wdev->conn->params.bssid must be set if > SCANNING */
982 err = __cfg80211_mlme_deauth(rdev, dev,
983 wdev->conn->params.bssid,
984 NULL, 0, reason, false);
985 if (err)
986 return err;
987 } else { 968 } else {
988 err = rdev_disconnect(rdev, dev, reason); 969 err = rdev_disconnect(rdev, dev, reason);
989 if (err)
990 return err;
991 } 970 }
992 971
993 disconnect:
994 if (wdev->sme_state == CFG80211_SME_CONNECTED)
995 __cfg80211_disconnected(dev, NULL, 0, 0, false);
996 else if (wdev->sme_state == CFG80211_SME_CONNECTING)
997 __cfg80211_connect_result(dev, NULL, NULL, 0, NULL, 0,
998 WLAN_STATUS_UNSPECIFIED_FAILURE,
999 wextev, NULL);
1000
1001 return 0;
1002}
1003
1004int cfg80211_disconnect(struct cfg80211_registered_device *rdev,
1005 struct net_device *dev,
1006 u16 reason, bool wextev)
1007{
1008 int err;
1009
1010 wdev_lock(dev->ieee80211_ptr);
1011 err = __cfg80211_disconnect(rdev, dev, reason, wextev);
1012 wdev_unlock(dev->ieee80211_ptr);
1013
1014 return err; 972 return err;
1015} 973}
1016
1017void cfg80211_sme_disassoc(struct net_device *dev,
1018 struct cfg80211_internal_bss *bss)
1019{
1020 struct wireless_dev *wdev = dev->ieee80211_ptr;
1021 struct cfg80211_registered_device *rdev = wiphy_to_dev(wdev->wiphy);
1022 u8 bssid[ETH_ALEN];
1023
1024 ASSERT_WDEV_LOCK(wdev);
1025
1026 if (!wdev->conn)
1027 return;
1028
1029 if (wdev->conn->state == CFG80211_CONN_IDLE)
1030 return;
1031
1032 /*
1033 * Ok, so the association was made by this SME -- we don't
1034 * want it any more so deauthenticate too.
1035 */
1036
1037 memcpy(bssid, bss->pub.bssid, ETH_ALEN);
1038
1039 __cfg80211_mlme_deauth(rdev, dev, bssid, NULL, 0,
1040 WLAN_REASON_DEAUTH_LEAVING, false);
1041}
diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c
index 8f28b9f798d8..a23253e06358 100644
--- a/net/wireless/sysfs.c
+++ b/net/wireless/sysfs.c
@@ -83,6 +83,7 @@ static int wiphy_uevent(struct device *dev, struct kobj_uevent_env *env)
83 return 0; 83 return 0;
84} 84}
85 85
86#ifdef CONFIG_PM
86static void cfg80211_leave_all(struct cfg80211_registered_device *rdev) 87static void cfg80211_leave_all(struct cfg80211_registered_device *rdev)
87{ 88{
88 struct wireless_dev *wdev; 89 struct wireless_dev *wdev;
@@ -100,10 +101,10 @@ static int wiphy_suspend(struct device *dev, pm_message_t state)
100 101
101 rtnl_lock(); 102 rtnl_lock();
102 if (rdev->wiphy.registered) { 103 if (rdev->wiphy.registered) {
103 if (!rdev->wowlan) 104 if (!rdev->wiphy.wowlan_config)
104 cfg80211_leave_all(rdev); 105 cfg80211_leave_all(rdev);
105 if (rdev->ops->suspend) 106 if (rdev->ops->suspend)
106 ret = rdev_suspend(rdev, rdev->wowlan); 107 ret = rdev_suspend(rdev, rdev->wiphy.wowlan_config);
107 if (ret == 1) { 108 if (ret == 1) {
108 /* Driver refuse to configure wowlan */ 109 /* Driver refuse to configure wowlan */
109 cfg80211_leave_all(rdev); 110 cfg80211_leave_all(rdev);
@@ -132,6 +133,7 @@ static int wiphy_resume(struct device *dev)
 
 	return ret;
 }
+#endif
 
 static const void *wiphy_namespace(struct device *d)
 {
@@ -146,8 +148,10 @@ struct class ieee80211_class = {
 	.dev_release = wiphy_dev_release,
 	.dev_attrs = ieee80211_dev_attrs,
 	.dev_uevent = wiphy_uevent,
+#ifdef CONFIG_PM
 	.suspend = wiphy_suspend,
 	.resume = wiphy_resume,
+#endif
 	.ns_type = &net_ns_type_operations,
 	.namespace = wiphy_namespace,
 };
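The sysfs hunks above compile the suspend/resume path out entirely when `CONFIG_PM` is unset, guarding both the function definitions and the matching `struct class` initializers so neither side can go out of sync. A minimal user-space sketch of that paired-`#ifdef` pattern (all names here are hypothetical stand-ins, not the kernel's, and unlike the kernel's `struct class` this sketch also guards the members themselves):

```c
#include <stddef.h>

struct device;

/* The callback members and their initializers are guarded by the
 * same feature macro, mirroring the CONFIG_PM guards added in
 * net/wireless/sysfs.c. */
struct demo_class {
	const char *name;
#ifdef DEMO_PM
	int (*suspend)(struct device *dev);
	int (*resume)(struct device *dev);
#endif
};

#ifdef DEMO_PM
static int demo_suspend(struct device *dev) { (void)dev; return 0; }
static int demo_resume(struct device *dev)  { (void)dev; return 0; }
#endif

static const struct demo_class demo = {
	.name = "demo",
#ifdef DEMO_PM
	.suspend = demo_suspend,
	.resume  = demo_resume,
#endif
};

/* Proves the object builds whether or not DEMO_PM is defined. */
static int demo_has_name(void) { return demo.name != NULL; }
```

Either configuration compiles cleanly because every reference to the optional members sits behind the same macro.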
diff --git a/net/wireless/trace.h b/net/wireless/trace.h
index 5755bc14abbd..e1534baf2ebb 100644
--- a/net/wireless/trace.h
+++ b/net/wireless/trace.h
@@ -1911,24 +1911,46 @@ TRACE_EVENT(cfg80211_send_rx_assoc,
 	NETDEV_PR_ARG, MAC_PR_ARG(bssid), CHAN_PR_ARG)
 );
 
-DEFINE_EVENT(netdev_evt_only, __cfg80211_send_deauth,
-	TP_PROTO(struct net_device *netdev),
-	TP_ARGS(netdev)
+DECLARE_EVENT_CLASS(netdev_frame_event,
+	TP_PROTO(struct net_device *netdev, const u8 *buf, int len),
+	TP_ARGS(netdev, buf, len),
+	TP_STRUCT__entry(
+		NETDEV_ENTRY
+		__dynamic_array(u8, frame, len)
+	),
+	TP_fast_assign(
+		NETDEV_ASSIGN;
+		memcpy(__get_dynamic_array(frame), buf, len);
+	),
+	TP_printk(NETDEV_PR_FMT ", ftype:0x%.2x",
+		  NETDEV_PR_ARG,
+		  le16_to_cpup((__le16 *)__get_dynamic_array(frame)))
 );
 
-DEFINE_EVENT(netdev_evt_only, __cfg80211_send_disassoc,
-	TP_PROTO(struct net_device *netdev),
-	TP_ARGS(netdev)
+DEFINE_EVENT(netdev_frame_event, cfg80211_rx_unprot_mlme_mgmt,
+	TP_PROTO(struct net_device *netdev, const u8 *buf, int len),
+	TP_ARGS(netdev, buf, len)
 );
 
-DEFINE_EVENT(netdev_evt_only, cfg80211_send_unprot_deauth,
-	TP_PROTO(struct net_device *netdev),
-	TP_ARGS(netdev)
+DEFINE_EVENT(netdev_frame_event, cfg80211_rx_mlme_mgmt,
+	TP_PROTO(struct net_device *netdev, const u8 *buf, int len),
+	TP_ARGS(netdev, buf, len)
 );
 
-DEFINE_EVENT(netdev_evt_only, cfg80211_send_unprot_disassoc,
-	TP_PROTO(struct net_device *netdev),
-	TP_ARGS(netdev)
+TRACE_EVENT(cfg80211_tx_mlme_mgmt,
+	TP_PROTO(struct net_device *netdev, const u8 *buf, int len),
+	TP_ARGS(netdev, buf, len),
+	TP_STRUCT__entry(
+		NETDEV_ENTRY
+		__dynamic_array(u8, frame, len)
+	),
+	TP_fast_assign(
+		NETDEV_ASSIGN;
+		memcpy(__get_dynamic_array(frame), buf, len);
+	),
+	TP_printk(NETDEV_PR_FMT ", ftype:0x%.2x",
+		  NETDEV_PR_ARG,
+		  le16_to_cpup((__le16 *)__get_dynamic_array(frame)))
 );
 
 DECLARE_EVENT_CLASS(netdev_mac_evt,
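The new `netdev_frame_event` class above copies the raw management frame into a `__dynamic_array` and prints its frame-control field by reading the first two bytes little-endian, via `le16_to_cpup()` on the frame start. A small stand-alone sketch of that decode (the helper name is made up for illustration):

```c
#include <stdint.h>

/* Frame control is the first 16-bit little-endian word of any
 * 802.11 frame; this mirrors the tracepoint's
 * le16_to_cpup((__le16 *)__get_dynamic_array(frame)). */
static uint16_t frame_control(const uint8_t *frame)
{
	return (uint16_t)(frame[0] | ((uint16_t)frame[1] << 8));
}
```

For a deauthentication frame the first byte is 0xc0 (management type, subtype 12), so the decoded field is 0x00c0 regardless of host byte order.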
diff --git a/net/wireless/util.c b/net/wireless/util.c
index f5ad4d94ba88..74458b7f61eb 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -33,6 +33,29 @@ ieee80211_get_response_rate(struct ieee80211_supported_band *sband,
 }
 EXPORT_SYMBOL(ieee80211_get_response_rate);
 
+u32 ieee80211_mandatory_rates(struct ieee80211_supported_band *sband)
+{
+	struct ieee80211_rate *bitrates;
+	u32 mandatory_rates = 0;
+	enum ieee80211_rate_flags mandatory_flag;
+	int i;
+
+	if (WARN_ON(!sband))
+		return 1;
+
+	if (sband->band == IEEE80211_BAND_2GHZ)
+		mandatory_flag = IEEE80211_RATE_MANDATORY_B;
+	else
+		mandatory_flag = IEEE80211_RATE_MANDATORY_A;
+
+	bitrates = sband->bitrates;
+	for (i = 0; i < sband->n_bitrates; i++)
+		if (bitrates[i].flags & mandatory_flag)
+			mandatory_rates |= BIT(i);
+	return mandatory_rates;
+}
+EXPORT_SYMBOL(ieee80211_mandatory_rates);
+
 int ieee80211_channel_to_frequency(int chan, enum ieee80211_band band)
 {
 	/* see 802.11 17.3.8.3.2 and Annex J
@@ -785,12 +808,8 @@ void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev)
 	ASSERT_RTNL();
 	ASSERT_RDEV_LOCK(rdev);
 
-	mutex_lock(&rdev->devlist_mtx);
-
 	list_for_each_entry(wdev, &rdev->wdev_list, list)
 		cfg80211_process_wdev_events(wdev);
-
-	mutex_unlock(&rdev->devlist_mtx);
 }
 
 int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
@@ -822,10 +841,8 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
 		return -EBUSY;
 
 	if (ntype != otype && netif_running(dev)) {
-		mutex_lock(&rdev->devlist_mtx);
 		err = cfg80211_can_change_interface(rdev, dev->ieee80211_ptr,
 						    ntype);
-		mutex_unlock(&rdev->devlist_mtx);
 		if (err)
 			return err;
 
@@ -841,8 +858,10 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
 			break;
 		case NL80211_IFTYPE_STATION:
 		case NL80211_IFTYPE_P2P_CLIENT:
+			wdev_lock(dev->ieee80211_ptr);
 			cfg80211_disconnect(rdev, dev,
 					    WLAN_REASON_DEAUTH_LEAVING, true);
+			wdev_unlock(dev->ieee80211_ptr);
 			break;
 		case NL80211_IFTYPE_MESH_POINT:
 			/* mesh should be handled? */
@@ -1169,6 +1188,9 @@ bool ieee80211_operating_class_to_band(u8 operating_class,
 	case 84:
 		*band = IEEE80211_BAND_2GHZ;
 		return true;
+	case 180:
+		*band = IEEE80211_BAND_60GHZ;
+		return true;
 	}
 
 	return false;
@@ -1184,8 +1206,6 @@ int cfg80211_validate_beacon_int(struct cfg80211_registered_device *rdev,
 	if (!beacon_int)
 		return -EINVAL;
 
-	mutex_lock(&rdev->devlist_mtx);
-
 	list_for_each_entry(wdev, &rdev->wdev_list, list) {
 		if (!wdev->beacon_interval)
 			continue;
@@ -1195,8 +1215,6 @@ int cfg80211_validate_beacon_int(struct cfg80211_registered_device *rdev,
 		}
 	}
 
-	mutex_unlock(&rdev->devlist_mtx);
-
 	return res;
 }
 
@@ -1220,7 +1238,6 @@ int cfg80211_can_use_iftype_chan(struct cfg80211_registered_device *rdev,
 	int i, j;
 
 	ASSERT_RTNL();
-	lockdep_assert_held(&rdev->devlist_mtx);
 
 	if (WARN_ON(hweight32(radar_detect) > 1))
 		return -EINVAL;
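`ieee80211_mandatory_rates()` added in util.c builds a `u32` bitmask with bit *i* set for each rate whose flags carry the band's mandatory flag (802.11b rates on 2.4 GHz, 802.11a rates otherwise). A simplified stand-alone version of that loop, with illustrative flag values and types rather than the kernel's:

```c
#include <stdint.h>

#define RATE_MANDATORY_B 0x01u	/* illustrative flag bits, not the */
#define RATE_MANDATORY_A 0x02u	/* kernel's enum values            */

struct rate { uint32_t flags; };

/* Same shape as ieee80211_mandatory_rates(): set one bit per rate
 * index that is mandatory for the chosen band. */
static uint32_t mandatory_rates(const struct rate *rates, int n,
				uint32_t mandatory_flag)
{
	uint32_t mask = 0;
	int i;

	for (i = 0; i < n; i++)
		if (rates[i].flags & mandatory_flag)
			mask |= 1u << i;
	return mask;
}
```

With rates 0 and 2 flagged mandatory, the result is bits 0 and 2 set, i.e. 0x5.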
diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
index d997d0f0c54a..e7c6e862580d 100644
--- a/net/wireless/wext-compat.c
+++ b/net/wireless/wext-compat.c
@@ -72,7 +72,6 @@ int cfg80211_wext_siwmode(struct net_device *dev, struct iw_request_info *info,
 	struct cfg80211_registered_device *rdev;
 	struct vif_params vifparams;
 	enum nl80211_iftype type;
-	int ret;
 
 	rdev = wiphy_to_dev(wdev->wiphy);
 
@@ -98,11 +97,7 @@ int cfg80211_wext_siwmode(struct net_device *dev, struct iw_request_info *info,
 
 	memset(&vifparams, 0, sizeof(vifparams));
 
-	cfg80211_lock_rdev(rdev);
-	ret = cfg80211_change_iface(rdev, dev, type, NULL, &vifparams);
-	cfg80211_unlock_rdev(rdev);
-
-	return ret;
+	return cfg80211_change_iface(rdev, dev, type, NULL, &vifparams);
 }
 EXPORT_SYMBOL_GPL(cfg80211_wext_siwmode);
 
@@ -579,13 +574,10 @@ static int cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
 {
 	int err;
 
-	/* devlist mutex needed for possible IBSS re-join */
-	mutex_lock(&rdev->devlist_mtx);
 	wdev_lock(dev->ieee80211_ptr);
 	err = __cfg80211_set_encryption(rdev, dev, pairwise, addr,
 					remove, tx_key, idx, params);
 	wdev_unlock(dev->ieee80211_ptr);
-	mutex_unlock(&rdev->devlist_mtx);
 
 	return err;
 }
@@ -787,7 +779,7 @@ static int cfg80211_wext_siwfreq(struct net_device *dev,
 	struct cfg80211_chan_def chandef = {
 		.width = NL80211_CHAN_WIDTH_20_NOHT,
 	};
-	int freq, err;
+	int freq;
 
 	switch (wdev->iftype) {
 	case NL80211_IFTYPE_STATION:
@@ -804,10 +796,7 @@ static int cfg80211_wext_siwfreq(struct net_device *dev,
 		chandef.chan = ieee80211_get_channel(&rdev->wiphy, freq);
 		if (!chandef.chan)
 			return -EINVAL;
-		mutex_lock(&rdev->devlist_mtx);
-		err = cfg80211_set_monitor_channel(rdev, &chandef);
-		mutex_unlock(&rdev->devlist_mtx);
-		return err;
+		return cfg80211_set_monitor_channel(rdev, &chandef);
 	case NL80211_IFTYPE_MESH_POINT:
 		freq = cfg80211_wext_freq(wdev->wiphy, wextfreq);
 		if (freq < 0)
@@ -818,10 +807,7 @@ static int cfg80211_wext_siwfreq(struct net_device *dev,
 		chandef.chan = ieee80211_get_channel(&rdev->wiphy, freq);
 		if (!chandef.chan)
 			return -EINVAL;
-		mutex_lock(&rdev->devlist_mtx);
-		err = cfg80211_set_mesh_channel(rdev, wdev, &chandef);
-		mutex_unlock(&rdev->devlist_mtx);
-		return err;
+		return cfg80211_set_mesh_channel(rdev, wdev, &chandef);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/net/wireless/wext-sme.c b/net/wireless/wext-sme.c
index e79cb5c0655a..14c9a2583ba0 100644
--- a/net/wireless/wext-sme.c
+++ b/net/wireless/wext-sme.c
@@ -54,8 +54,8 @@ int cfg80211_mgd_wext_connect(struct cfg80211_registered_device *rdev,
 	if (wdev->wext.prev_bssid_valid)
 		prev_bssid = wdev->wext.prev_bssid;
 
-	err = __cfg80211_connect(rdev, wdev->netdev,
-				 &wdev->wext.connect, ck, prev_bssid);
+	err = cfg80211_connect(rdev, wdev->netdev,
+			       &wdev->wext.connect, ck, prev_bssid);
 	if (err)
 		kfree(ck);
 
@@ -87,12 +87,9 @@ int cfg80211_mgd_wext_siwfreq(struct net_device *dev,
 		return -EINVAL;
 	}
 
-	cfg80211_lock_rdev(rdev);
-	mutex_lock(&rdev->devlist_mtx);
-	mutex_lock(&rdev->sched_scan_mtx);
 	wdev_lock(wdev);
 
-	if (wdev->sme_state != CFG80211_SME_IDLE) {
+	if (wdev->conn) {
 		bool event = true;
 
 		if (wdev->wext.connect.channel == chan) {
@@ -103,8 +100,8 @@ int cfg80211_mgd_wext_siwfreq(struct net_device *dev,
 		/* if SSID set, we'll try right again, avoid event */
 		if (wdev->wext.connect.ssid_len)
 			event = false;
-		err = __cfg80211_disconnect(rdev, dev,
-					    WLAN_REASON_DEAUTH_LEAVING, event);
+		err = cfg80211_disconnect(rdev, dev,
+					  WLAN_REASON_DEAUTH_LEAVING, event);
 		if (err)
 			goto out;
 	}
@@ -136,9 +133,6 @@ int cfg80211_mgd_wext_siwfreq(struct net_device *dev,
 	err = cfg80211_mgd_wext_connect(rdev, wdev);
  out:
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->sched_scan_mtx);
-	mutex_unlock(&rdev->devlist_mtx);
-	cfg80211_unlock_rdev(rdev);
 	return err;
 }
 
@@ -190,14 +184,11 @@ int cfg80211_mgd_wext_siwessid(struct net_device *dev,
 	if (len > 0 && ssid[len - 1] == '\0')
 		len--;
 
-	cfg80211_lock_rdev(rdev);
-	mutex_lock(&rdev->devlist_mtx);
-	mutex_lock(&rdev->sched_scan_mtx);
 	wdev_lock(wdev);
 
 	err = 0;
 
-	if (wdev->sme_state != CFG80211_SME_IDLE) {
+	if (wdev->conn) {
 		bool event = true;
 
 		if (wdev->wext.connect.ssid && len &&
@@ -208,8 +199,8 @@ int cfg80211_mgd_wext_siwessid(struct net_device *dev,
 		/* if SSID set now, we'll try to connect, avoid event */
 		if (len)
 			event = false;
-		err = __cfg80211_disconnect(rdev, dev,
-					    WLAN_REASON_DEAUTH_LEAVING, event);
+		err = cfg80211_disconnect(rdev, dev,
+					  WLAN_REASON_DEAUTH_LEAVING, event);
 		if (err)
 			goto out;
 	}
@@ -226,9 +217,6 @@ int cfg80211_mgd_wext_siwessid(struct net_device *dev,
 	err = cfg80211_mgd_wext_connect(rdev, wdev);
  out:
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->sched_scan_mtx);
-	mutex_unlock(&rdev->devlist_mtx);
-	cfg80211_unlock_rdev(rdev);
 	return err;
 }
 
@@ -287,12 +275,9 @@ int cfg80211_mgd_wext_siwap(struct net_device *dev,
 	if (is_zero_ether_addr(bssid) || is_broadcast_ether_addr(bssid))
 		bssid = NULL;
 
-	cfg80211_lock_rdev(rdev);
-	mutex_lock(&rdev->devlist_mtx);
-	mutex_lock(&rdev->sched_scan_mtx);
 	wdev_lock(wdev);
 
-	if (wdev->sme_state != CFG80211_SME_IDLE) {
+	if (wdev->conn) {
 		err = 0;
 		/* both automatic */
 		if (!bssid && !wdev->wext.connect.bssid)
@@ -303,8 +288,8 @@ int cfg80211_mgd_wext_siwap(struct net_device *dev,
 		    ether_addr_equal(bssid, wdev->wext.connect.bssid))
 			goto out;
 
-		err = __cfg80211_disconnect(rdev, dev,
-					    WLAN_REASON_DEAUTH_LEAVING, false);
+		err = cfg80211_disconnect(rdev, dev,
+					  WLAN_REASON_DEAUTH_LEAVING, false);
 		if (err)
 			goto out;
 	}
@@ -318,9 +303,6 @@ int cfg80211_mgd_wext_siwap(struct net_device *dev,
 	err = cfg80211_mgd_wext_connect(rdev, wdev);
  out:
 	wdev_unlock(wdev);
-	mutex_unlock(&rdev->sched_scan_mtx);
-	mutex_unlock(&rdev->devlist_mtx);
-	cfg80211_unlock_rdev(rdev);
 	return err;
 }
 
@@ -382,9 +364,9 @@ int cfg80211_wext_siwgenie(struct net_device *dev,
 	wdev->wext.ie = ie;
 	wdev->wext.ie_len = ie_len;
 
-	if (wdev->sme_state != CFG80211_SME_IDLE) {
-		err = __cfg80211_disconnect(rdev, dev,
-					    WLAN_REASON_DEAUTH_LEAVING, false);
+	if (wdev->conn) {
+		err = cfg80211_disconnect(rdev, dev,
+					  WLAN_REASON_DEAUTH_LEAVING, false);
 		if (err)
 			goto out;
 	}
@@ -420,8 +402,7 @@ int cfg80211_wext_siwmlme(struct net_device *dev,
 	switch (mlme->cmd) {
 	case IW_MLME_DEAUTH:
 	case IW_MLME_DISASSOC:
-		err = __cfg80211_disconnect(rdev, dev, mlme->reason_code,
-					    true);
+		err = cfg80211_disconnect(rdev, dev, mlme->reason_code, true);
 		break;
 	default:
 		err = -EOPNOTSUPP;
diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
index 37ca9694aabe..45a3ab5612c1 100644
--- a/net/x25/af_x25.c
+++ b/net/x25/af_x25.c
@@ -224,7 +224,7 @@ static void x25_kill_by_device(struct net_device *dev)
 static int x25_device_event(struct notifier_block *this, unsigned long event,
 			    void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct x25_neigh *nb;
 
 	if (!net_eq(dev_net(dev), &init_net))
@@ -1583,11 +1583,11 @@ out_cud_release:
 	case SIOCX25CALLACCPTAPPRV: {
 		rc = -EINVAL;
 		lock_sock(sk);
-		if (sk->sk_state != TCP_CLOSE)
-			break;
-		clear_bit(X25_ACCPT_APPRV_FLAG, &x25->flags);
+		if (sk->sk_state == TCP_CLOSE) {
+			clear_bit(X25_ACCPT_APPRV_FLAG, &x25->flags);
+			rc = 0;
+		}
 		release_sock(sk);
-		rc = 0;
 		break;
 	}
 
@@ -1595,14 +1595,15 @@ out_cud_release:
 		rc = -EINVAL;
 		lock_sock(sk);
 		if (sk->sk_state != TCP_ESTABLISHED)
-			break;
+			goto out_sendcallaccpt_release;
 		/* must call accptapprv above */
 		if (test_bit(X25_ACCPT_APPRV_FLAG, &x25->flags))
-			break;
+			goto out_sendcallaccpt_release;
 		x25_write_internal(sk, X25_CALL_ACCEPTED);
 		x25->state = X25_STATE_3;
-		release_sock(sk);
 		rc = 0;
+out_sendcallaccpt_release:
+		release_sock(sk);
 		break;
 	}
 
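The SIOCX25SENDCALLACCPT fix above turns early `break`s that skipped `release_sock()` into `goto`s to a single release label, so the socket lock is dropped on every exit path. The shape of that pattern, with a plain flag standing in for the lock (all names here are hypothetical):

```c
/* Every early exit funnels through one release label; the bug
 * fixed in af_x25.c was a `break` that skipped release_sock(),
 * leaving the socket locked. */
static int demo_locked;

static int demo_ioctl(int established, int approved)
{
	int rc = -1;

	demo_locked = 1;		/* lock_sock(sk) */
	if (!established)
		goto out_release;	/* was: break (lock leaked) */
	if (approved)
		goto out_release;	/* was: break (lock leaked) */
	/* ... x25_write_internal(), state change ... */
	rc = 0;
out_release:
	demo_locked = 0;		/* release_sock(sk) */
	return rc;
}
```

Whether the call succeeds or bails out early, the lock flag is always cleared before returning.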
diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
index ab2bb42fe094..88843996f935 100644
--- a/net/xfrm/xfrm_input.c
+++ b/net/xfrm/xfrm_input.c
@@ -163,6 +163,11 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
 		skb->sp->xvec[skb->sp->len++] = x;
 
 		spin_lock(&x->lock);
+		if (unlikely(x->km.state == XFRM_STATE_ACQ)) {
+			XFRM_INC_STATS(net, LINUX_MIB_XFRMACQUIREERROR);
+			goto drop_unlock;
+		}
+
 		if (unlikely(x->km.state != XFRM_STATE_VALID)) {
 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATEINVALID);
 			goto drop_unlock;
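The xfrm_input.c change above checks for `XFRM_STATE_ACQ` before the generic validity test: a state still in ACQUIRE (larval, waiting for key negotiation) must not process packets, and the drop is now charged to its own `XfrmAcquireError` counter rather than the catch-all invalid-state one. A sketch of that ordering (enum and strings are illustrative, not the kernel's):

```c
#include <string.h>

/* The ACQ check must come first, otherwise larval states would be
 * folded into the generic "invalid" counter. */
enum demo_state { DEMO_ACQ, DEMO_VALID, DEMO_DEAD };

static const char *classify(enum demo_state s)
{
	if (s == DEMO_ACQ)
		return "XfrmAcquireError";	/* new, specific counter */
	if (s != DEMO_VALID)
		return "XfrmInStateInvalid";	/* pre-existing counter */
	return "ok";
}
```

Reordering the two tests the other way would make the new counter unreachable, since ACQ is also not VALID.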
diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
index 0cf003dfa8fc..eb4a84288648 100644
--- a/net/xfrm/xfrm_output.c
+++ b/net/xfrm/xfrm_output.c
@@ -89,7 +89,7 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
 
 		err = x->type->output(x, skb);
 		if (err == -EINPROGRESS)
-			goto out_exit;
+			goto out;
 
 resume:
 		if (err) {
@@ -107,15 +107,14 @@ resume:
 		x = dst->xfrm;
 	} while (x && !(x->outer_mode->flags & XFRM_MODE_FLAG_TUNNEL));
 
-	err = 0;
+	return 0;
 
-out_exit:
-	return err;
 error:
 	spin_unlock_bh(&x->lock);
 error_nolock:
 	kfree_skb(skb);
-	goto out_exit;
+out:
+	return err;
 }
 
 int xfrm_output_resume(struct sk_buff *skb, int err)
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index ea970b8002a2..e52cab3591dd 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -2785,7 +2785,7 @@ static void __net_init xfrm_dst_ops_init(struct net *net)
 
 static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void *ptr)
 {
-	struct net_device *dev = ptr;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 
 	switch (event) {
 	case NETDEV_DOWN:
diff --git a/net/xfrm/xfrm_proc.c b/net/xfrm/xfrm_proc.c
index c721b0d9ab8b..80cd1e55b834 100644
--- a/net/xfrm/xfrm_proc.c
+++ b/net/xfrm/xfrm_proc.c
@@ -44,6 +44,7 @@ static const struct snmp_mib xfrm_mib_list[] = {
 	SNMP_MIB_ITEM("XfrmOutPolError", LINUX_MIB_XFRMOUTPOLERROR),
 	SNMP_MIB_ITEM("XfrmFwdHdrError", LINUX_MIB_XFRMFWDHDRERROR),
 	SNMP_MIB_ITEM("XfrmOutStateInvalid", LINUX_MIB_XFRMOUTSTATEINVALID),
+	SNMP_MIB_ITEM("XfrmAcquireError", LINUX_MIB_XFRMACQUIREERROR),
 	SNMP_MIB_SENTINEL
 };
 
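Registering the new `XfrmAcquireError` statistic means adding one `SNMP_MIB_ITEM` entry just before `SNMP_MIB_SENTINEL`, since the table is walked until the sentinel's NULL name. A sketch of that sentinel-terminated lookup (entry values are placeholders, not the kernel's enum constants):

```c
#include <stddef.h>
#include <string.h>

struct snmp_mib { const char *name; int entry; };

/* Sentinel-terminated, like xfrm_mib_list: iteration stops at the
 * NULL-named entry, so new counters are appended just before it. */
static const struct snmp_mib mib_list[] = {
	{ "XfrmOutStateInvalid", 1 },
	{ "XfrmAcquireError",    2 },	/* the entry added above */
	{ NULL, 0 },			/* SNMP_MIB_SENTINEL */
};

static int mib_lookup(const char *name)
{
	int i;

	for (i = 0; mib_list[i].name; i++)
		if (strcmp(mib_list[i].name, name) == 0)
			return mib_list[i].entry;
	return -1;
}
```

An entry placed after the sentinel would never be reached, which is why the diff inserts the new item above `SNMP_MIB_SENTINEL`.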