author Linus Torvalds <torvalds@linux-foundation.org> 2013-11-13 03:40:34 -0500
committer Linus Torvalds <torvalds@linux-foundation.org> 2013-11-13 03:40:34 -0500
commit 42a2d923cc349583ebf6fdd52a7d35e1c2f7e6bd (patch)
tree 2b2b0c03b5389c1301800119333967efafd994ca /drivers/net/wireless/ath/ath9k/recv.c
parent 5cbb3d216e2041700231bcfc383ee5f8b7fc8b74 (diff)
parent 75ecab1df14d90e86cebef9ec5c76befde46e65f (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:

1) The addition of nftables. No longer will we need protocol aware firewall filtering modules, it can all live in userspace. At the core of nftables is a, for lack of a better term, virtual machine that executes byte codes to inspect packets or metadata (arriving interface index, etc.) and make verdict decisions. Besides support for loading packet contents and comparing them, the interpreter supports lookups in various data structures as fundamental operations. For example, sets are supported, so one could create a set of whitelisted IP address entries with ACCEPT verdicts attached to them and use the appropriate byte codes to do such lookups.

   Since the interpreted code is composed in userspace, userspace can optimize it before handing it to the kernel.

   Another major improvement is the capability of atomically updating portions of the ruleset. In the existing netfilter implementation, one has to update the entire rule set in order to make a change, and this is very expensive.

   Userspace tools exist to create nftables rules from existing netfilter rule sets, but both kernel implementations will need to co-exist for quite some time as we transition from the old to the new stuff.

   Kudos to Patrick McHardy, Pablo Neira Ayuso, and others who have worked so hard on this.

2) Daniel Borkmann and Hannes Frederic Sowa made several improvements to our pseudo-random number generator, mostly used for things like UDP port randomization and netfilter, amongst other things. In particular the taus88 generator is upgraded to taus113, and test cases are added.

3) Support 64-bit rates in HTB and TBF schedulers, from Eric Dumazet and Yang Yingliang.

4) Add support for new 577xx tigon3 chips to tg3 driver, from Nithin Sujir.

5) Fix two fatal flaws in TCP dynamic right sizing, from Eric Dumazet, Neal Cardwell, and Yuchung Cheng.

6) Allow IP_TOS and IP_TTL to be specified in sendmsg() ancillary control message data, much like other socket option attributes. From Francesco Fusco.

7) Allow applications to specify a cap on the rate computed automatically by the kernel for pacing flows, via a new SO_MAX_PACING_RATE socket option. From Eric Dumazet. (A usage sketch for items 6 and 7 follows the shortlog below.)

8) Make the initial autotuned send buffer sizing in TCP more closely reflect actual needs, from Eric Dumazet.

9) Currently early socket demux only happens for TCP sockets, but we can do it for connected UDP sockets too. Implementation from Shawn Bohrer.

10) Refactor inet socket demux with the goal of improving hash demux performance for listening sockets. The main goals are being able to use RCU lookups even on request sockets, and eliminating the listening lock contention. From Eric Dumazet.

11) The bonding layer has many demuxes in its fast path, and an RCU conversion was started back in 3.11; several changes here extend the RCU usage to even more locations. From Ding Tianhong and Wang Yufen, based upon suggestions by Nikolay Aleksandrov and Veaceslav Falico.

12) Allow stackability of segmentation offloads to, in particular, allow segmentation offloading over tunnels. From Eric Dumazet.

13) Significantly improve the handling of secret keys we input into the various hash functions in the inet hashtables, TCP fast open, as well as syncookies. From Hannes Frederic Sowa. The key fundamental operation is "net_get_random_once()" which uses static keys. Hannes even extended this to ipv4/ipv6 fragmentation handling and our generic flow dissector.

14) The generic driver layer now takes care to set the driver data to NULL on device removal, so drivers no longer need to clear it explicitly. Many drivers have been cleaned up in this way, from Jingoo Han.

15) Add a BPF based packet scheduler classifier, from Daniel Borkmann.

16) Improve CRC32 interfaces and generic SKB checksum iterators so that SCTP's checksumming can be handled more cleanly. Also from Daniel Borkmann.

17) Add a new PMTU discovery mode, IP_PMTUDISC_INTERFACE, which forces using the interface MTU value. This helps avoid PMTU attacks, particularly on DNS servers. From Hannes Frederic Sowa.

18) Use generic XPS for transmit queue steering rather than an internal (re-)implementation in virtio-net. From Jason Wang.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1622 commits)
  random32: add test cases for taus113 implementation
  random32: upgrade taus88 generator to taus113 from errata paper
  random32: move rnd_state to linux/random.h
  random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized
  random32: add periodic reseeding
  random32: fix off-by-one in seeding requirement
  PHY: Add RTL8201CP phy_driver to realtek
  xtsonic: add missing platform_set_drvdata() in xtsonic_probe()
  macmace: add missing platform_set_drvdata() in mace_probe()
  ethernet/arc/arc_emac: add missing platform_set_drvdata() in arc_emac_probe()
  ipv6: protect for_each_sk_fl_rcu in mem_check with rcu_read_lock_bh
  vlan: Implement vlan_dev_get_egress_qos_mask as an inline.
  ixgbe: add warning when max_vfs is out of range.
  igb: Update link modes display in ethtool
  netfilter: push reasm skb through instead of original frag skbs
  ip6_output: fragment outgoing reassembled skb properly
  MAINTAINERS: mv643xx_eth: take over maintainership from Lennart
  net_sched: tbf: support of 64bit rates
  ixgbe: deleting dfwd stations out of order can cause null ptr deref
  ixgbe: fix build err, num_rx_queues is only available with CONFIG_RPS
  ...
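For illustration only (not part of this pull): a minimal userspace sketch of the two socket APIs called out in items 6 and 7, capping the kernel's pacing rate with SO_MAX_PACING_RATE and supplying IP_TOS/IP_TTL per datagram as sendmsg() ancillary data. The fallback #define value, the destination address/port and the chosen TOS/TTL numbers are assumptions made for this sketch; whether pacing is actually applied depends on the qdisc in use (the fq scheduler honors the cap).

/*
 * Sketch only: exercise SO_MAX_PACING_RATE (item 7) and per-call
 * IP_TOS/IP_TTL ancillary data for sendmsg() (item 6).
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/uio.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47   /* asm-generic value; may differ per arch */
#endif

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        /* Cap the pacing rate at 1 MB/s (option value is bytes per second). */
        unsigned int max_pacing = 1000000;
        if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                       &max_pacing, sizeof(max_pacing)) < 0)
                perror("setsockopt(SO_MAX_PACING_RATE)");

        /* Per-datagram TOS and TTL instead of per-socket options. */
        struct sockaddr_in dst = {
                .sin_family = AF_INET,
                .sin_port = htons(9),           /* discard port, placeholder */
        };
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

        char payload[] = "hello";
        struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };

        int tos = 0x10, ttl = 16;
        union {                                 /* keep cmsg data aligned */
                char buf[CMSG_SPACE(sizeof(int)) * 2];
                struct cmsghdr align;
        } u;
        memset(&u, 0, sizeof(u));

        struct msghdr msg = {
                .msg_name = &dst,
                .msg_namelen = sizeof(dst),
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = u.buf,
                .msg_controllen = sizeof(u.buf),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = IPPROTO_IP;
        cmsg->cmsg_type = IP_TOS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(tos));
        memcpy(CMSG_DATA(cmsg), &tos, sizeof(tos));

        cmsg = CMSG_NXTHDR(&msg, cmsg);
        cmsg->cmsg_level = IPPROTO_IP;
        cmsg->cmsg_type = IP_TTL;
        cmsg->cmsg_len = CMSG_LEN(sizeof(ttl));
        memcpy(CMSG_DATA(cmsg), &ttl, sizeof(ttl));

        if (sendmsg(fd, &msg, 0) < 0)
                perror("sendmsg");

        close(fd);
        return 0;
}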
Diffstat (limited to 'drivers/net/wireless/ath/ath9k/recv.c')
-rw-r--r-- drivers/net/wireless/ath/ath9k/recv.c | 197
1 file changed, 135 insertions(+), 62 deletions(-)
diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
index ab9e3a8410bc..95ddca5495d4 100644
--- a/drivers/net/wireless/ath/ath9k/recv.c
+++ b/drivers/net/wireless/ath/ath9k/recv.c
@@ -19,7 +19,7 @@
 #include "ath9k.h"
 #include "ar9003_mac.h"
 
-#define SKB_CB_ATHBUF(__skb) (*((struct ath_buf **)__skb->cb))
+#define SKB_CB_ATHBUF(__skb) (*((struct ath_rxbuf **)__skb->cb))
 
 static inline bool ath9k_check_auto_sleep(struct ath_softc *sc)
 {
@@ -35,7 +35,7 @@ static inline bool ath9k_check_auto_sleep(struct ath_softc *sc)
  * buffer (or rx fifo). This can incorrectly acknowledge packets
  * to a sender if last desc is self-linked.
  */
-static void ath_rx_buf_link(struct ath_softc *sc, struct ath_buf *bf)
+static void ath_rx_buf_link(struct ath_softc *sc, struct ath_rxbuf *bf)
 {
         struct ath_hw *ah = sc->sc_ah;
         struct ath_common *common = ath9k_hw_common(ah);
@@ -68,7 +68,7 @@ static void ath_rx_buf_link(struct ath_softc *sc, struct ath_buf *bf)
         sc->rx.rxlink = &ds->ds_link;
 }
 
-static void ath_rx_buf_relink(struct ath_softc *sc, struct ath_buf *bf)
+static void ath_rx_buf_relink(struct ath_softc *sc, struct ath_rxbuf *bf)
 {
         if (sc->rx.buf_hold)
                 ath_rx_buf_link(sc, sc->rx.buf_hold);
@@ -112,13 +112,13 @@ static bool ath_rx_edma_buf_link(struct ath_softc *sc,
         struct ath_hw *ah = sc->sc_ah;
         struct ath_rx_edma *rx_edma;
         struct sk_buff *skb;
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
 
         rx_edma = &sc->rx.rx_edma[qtype];
         if (skb_queue_len(&rx_edma->rx_fifo) >= rx_edma->rx_fifo_hwsize)
                 return false;
 
-        bf = list_first_entry(&sc->rx.rxbuf, struct ath_buf, list);
+        bf = list_first_entry(&sc->rx.rxbuf, struct ath_rxbuf, list);
         list_del_init(&bf->list);
 
         skb = bf->bf_mpdu;
@@ -138,7 +138,7 @@ static void ath_rx_addbuffer_edma(struct ath_softc *sc,
                                   enum ath9k_rx_qtype qtype)
 {
         struct ath_common *common = ath9k_hw_common(sc->sc_ah);
-        struct ath_buf *bf, *tbf;
+        struct ath_rxbuf *bf, *tbf;
 
         if (list_empty(&sc->rx.rxbuf)) {
                 ath_dbg(common, QUEUE, "No free rx buf available\n");
@@ -154,7 +154,7 @@ static void ath_rx_addbuffer_edma(struct ath_softc *sc,
 static void ath_rx_remove_buffer(struct ath_softc *sc,
                                  enum ath9k_rx_qtype qtype)
 {
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
         struct ath_rx_edma *rx_edma;
         struct sk_buff *skb;
 
@@ -171,7 +171,7 @@ static void ath_rx_edma_cleanup(struct ath_softc *sc)
 {
         struct ath_hw *ah = sc->sc_ah;
         struct ath_common *common = ath9k_hw_common(ah);
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
 
         ath_rx_remove_buffer(sc, ATH9K_RX_QUEUE_LP);
         ath_rx_remove_buffer(sc, ATH9K_RX_QUEUE_HP);
@@ -199,7 +199,7 @@ static int ath_rx_edma_init(struct ath_softc *sc, int nbufs)
         struct ath_common *common = ath9k_hw_common(sc->sc_ah);
         struct ath_hw *ah = sc->sc_ah;
         struct sk_buff *skb;
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
         int error = 0, i;
         u32 size;
 
@@ -211,7 +211,7 @@ static int ath_rx_edma_init(struct ath_softc *sc, int nbufs)
         ath_rx_edma_init_queue(&sc->rx.rx_edma[ATH9K_RX_QUEUE_HP],
                                ah->caps.rx_hp_qdepth);
 
-        size = sizeof(struct ath_buf) * nbufs;
+        size = sizeof(struct ath_rxbuf) * nbufs;
         bf = devm_kzalloc(sc->dev, size, GFP_KERNEL);
         if (!bf)
                 return -ENOMEM;
@@ -271,7 +271,7 @@ int ath_rx_init(struct ath_softc *sc, int nbufs)
 {
         struct ath_common *common = ath9k_hw_common(sc->sc_ah);
         struct sk_buff *skb;
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
         int error = 0;
 
         spin_lock_init(&sc->sc_pcu_lock);
@@ -332,7 +332,7 @@ void ath_rx_cleanup(struct ath_softc *sc)
         struct ath_hw *ah = sc->sc_ah;
         struct ath_common *common = ath9k_hw_common(ah);
         struct sk_buff *skb;
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
 
         if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
                 ath_rx_edma_cleanup(sc);
@@ -375,6 +375,9 @@ u32 ath_calcrxfilter(struct ath_softc *sc)
 {
         u32 rfilt;
 
+        if (config_enabled(CONFIG_ATH9K_TX99))
+                return 0;
+
         rfilt = ATH9K_RX_FILTER_UCAST | ATH9K_RX_FILTER_BCAST
                 | ATH9K_RX_FILTER_MCAST;
 
@@ -427,7 +430,7 @@ u32 ath_calcrxfilter(struct ath_softc *sc)
 int ath_startrecv(struct ath_softc *sc)
 {
         struct ath_hw *ah = sc->sc_ah;
-        struct ath_buf *bf, *tbf;
+        struct ath_rxbuf *bf, *tbf;
 
         if (ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
                 ath_edma_start_recv(sc);
@@ -447,7 +450,7 @@ int ath_startrecv(struct ath_softc *sc)
         if (list_empty(&sc->rx.rxbuf))
                 goto start_recv;
 
-        bf = list_first_entry(&sc->rx.rxbuf, struct ath_buf, list);
+        bf = list_first_entry(&sc->rx.rxbuf, struct ath_rxbuf, list);
         ath9k_hw_putrxbuf(ah, bf->bf_daddr);
         ath9k_hw_rxena(ah);
 
@@ -603,13 +606,13 @@ static void ath_rx_ps(struct ath_softc *sc, struct sk_buff *skb, bool mybeacon)
 static bool ath_edma_get_buffers(struct ath_softc *sc,
                                  enum ath9k_rx_qtype qtype,
                                  struct ath_rx_status *rs,
-                                 struct ath_buf **dest)
+                                 struct ath_rxbuf **dest)
 {
         struct ath_rx_edma *rx_edma = &sc->rx.rx_edma[qtype];
         struct ath_hw *ah = sc->sc_ah;
         struct ath_common *common = ath9k_hw_common(ah);
         struct sk_buff *skb;
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
         int ret;
 
         skb = skb_peek(&rx_edma->rx_fifo);
@@ -653,11 +656,11 @@ static bool ath_edma_get_buffers(struct ath_softc *sc,
         return true;
 }
 
-static struct ath_buf *ath_edma_get_next_rx_buf(struct ath_softc *sc,
+static struct ath_rxbuf *ath_edma_get_next_rx_buf(struct ath_softc *sc,
                                                 struct ath_rx_status *rs,
                                                 enum ath9k_rx_qtype qtype)
 {
-        struct ath_buf *bf = NULL;
+        struct ath_rxbuf *bf = NULL;
 
         while (ath_edma_get_buffers(sc, qtype, rs, &bf)) {
                 if (!bf)
@@ -668,13 +671,13 @@ static struct ath_buf *ath_edma_get_next_rx_buf(struct ath_softc *sc,
         return NULL;
 }
 
-static struct ath_buf *ath_get_next_rx_buf(struct ath_softc *sc,
+static struct ath_rxbuf *ath_get_next_rx_buf(struct ath_softc *sc,
                                            struct ath_rx_status *rs)
 {
         struct ath_hw *ah = sc->sc_ah;
         struct ath_common *common = ath9k_hw_common(ah);
         struct ath_desc *ds;
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
         int ret;
 
         if (list_empty(&sc->rx.rxbuf)) {
@@ -682,7 +685,7 @@ static struct ath_buf *ath_get_next_rx_buf(struct ath_softc *sc,
                 return NULL;
         }
 
-        bf = list_first_entry(&sc->rx.rxbuf, struct ath_buf, list);
+        bf = list_first_entry(&sc->rx.rxbuf, struct ath_rxbuf, list);
         if (bf == sc->rx.buf_hold)
                 return NULL;
 
@@ -702,7 +705,7 @@ static struct ath_buf *ath_get_next_rx_buf(struct ath_softc *sc,
         ret = ath9k_hw_rxprocdesc(ah, ds, rs);
         if (ret == -EINPROGRESS) {
                 struct ath_rx_status trs;
-                struct ath_buf *tbf;
+                struct ath_rxbuf *tbf;
                 struct ath_desc *tds;
 
                 memset(&trs, 0, sizeof(trs));
@@ -711,7 +714,7 @@ static struct ath_buf *ath_get_next_rx_buf(struct ath_softc *sc,
                         return NULL;
                 }
 
-                tbf = list_entry(bf->list.next, struct ath_buf, list);
+                tbf = list_entry(bf->list.next, struct ath_rxbuf, list);
 
                 /*
                  * On some hardware the descriptor status words could
@@ -972,14 +975,15 @@ static int ath_process_fft(struct ath_softc *sc, struct ieee80211_hdr *hdr,
 {
 #ifdef CONFIG_ATH9K_DEBUGFS
         struct ath_hw *ah = sc->sc_ah;
-        u8 bins[SPECTRAL_HT20_NUM_BINS];
-        u8 *vdata = (u8 *)hdr;
-        struct fft_sample_ht20 fft_sample;
+        u8 num_bins, *bins, *vdata = (u8 *)hdr;
+        struct fft_sample_ht20 fft_sample_20;
+        struct fft_sample_ht20_40 fft_sample_40;
+        struct fft_sample_tlv *tlv;
         struct ath_radar_info *radar_info;
-        struct ath_ht20_mag_info *mag_info;
         int len = rs->rs_datalen;
         int dc_pos;
-        u16 length, max_magnitude;
+        u16 fft_len, length, freq = ah->curchan->chan->center_freq;
+        enum nl80211_channel_type chan_type;
 
         /* AR9280 and before report via ATH9K_PHYERR_RADAR, AR93xx and newer
          * via ATH9K_PHYERR_SPECTRAL. Haven't seen ATH9K_PHYERR_FALSE_RADAR_EXT
@@ -997,45 +1001,44 @@ static int ath_process_fft(struct ath_softc *sc, struct ieee80211_hdr *hdr,
         if (!(radar_info->pulse_bw_info & SPECTRAL_SCAN_BITMASK))
                 return 0;
 
-        /* Variation in the data length is possible and will be fixed later.
-         * Note that we only support HT20 for now.
-         *
-         * TODO: add HT20_40 support as well.
-         */
-        if ((len > SPECTRAL_HT20_TOTAL_DATA_LEN + 2) ||
-            (len < SPECTRAL_HT20_TOTAL_DATA_LEN - 1))
-                return 1;
-
-        fft_sample.tlv.type = ATH_FFT_SAMPLE_HT20;
-        length = sizeof(fft_sample) - sizeof(fft_sample.tlv);
-        fft_sample.tlv.length = __cpu_to_be16(length);
+        chan_type = cfg80211_get_chandef_type(&sc->hw->conf.chandef);
+        if ((chan_type == NL80211_CHAN_HT40MINUS) ||
+            (chan_type == NL80211_CHAN_HT40PLUS)) {
+                fft_len = SPECTRAL_HT20_40_TOTAL_DATA_LEN;
+                num_bins = SPECTRAL_HT20_40_NUM_BINS;
+                bins = (u8 *)fft_sample_40.data;
+        } else {
+                fft_len = SPECTRAL_HT20_TOTAL_DATA_LEN;
+                num_bins = SPECTRAL_HT20_NUM_BINS;
+                bins = (u8 *)fft_sample_20.data;
+        }
 
-        fft_sample.freq = __cpu_to_be16(ah->curchan->chan->center_freq);
-        fft_sample.rssi = fix_rssi_inv_only(rs->rs_rssi_ctl0);
-        fft_sample.noise = ah->noise;
+        /* Variation in the data length is possible and will be fixed later */
+        if ((len > fft_len + 2) || (len < fft_len - 1))
+                return 1;
 
-        switch (len - SPECTRAL_HT20_TOTAL_DATA_LEN) {
+        switch (len - fft_len) {
         case 0:
                 /* length correct, nothing to do. */
-                memcpy(bins, vdata, SPECTRAL_HT20_NUM_BINS);
+                memcpy(bins, vdata, num_bins);
                 break;
         case -1:
                 /* first byte missing, duplicate it. */
-                memcpy(&bins[1], vdata, SPECTRAL_HT20_NUM_BINS - 1);
+                memcpy(&bins[1], vdata, num_bins - 1);
                 bins[0] = vdata[0];
                 break;
         case 2:
                 /* MAC added 2 extra bytes at bin 30 and 32, remove them. */
                 memcpy(bins, vdata, 30);
                 bins[30] = vdata[31];
-                memcpy(&bins[31], &vdata[33], SPECTRAL_HT20_NUM_BINS - 31);
+                memcpy(&bins[31], &vdata[33], num_bins - 31);
                 break;
         case 1:
                 /* MAC added 2 extra bytes AND first byte is missing. */
                 bins[0] = vdata[0];
-                memcpy(&bins[0], vdata, 30);
+                memcpy(&bins[1], vdata, 30);
                 bins[31] = vdata[31];
-                memcpy(&bins[32], &vdata[33], SPECTRAL_HT20_NUM_BINS - 32);
+                memcpy(&bins[32], &vdata[33], num_bins - 32);
                 break;
         default:
                 return 1;
@@ -1044,23 +1047,93 @@ static int ath_process_fft(struct ath_softc *sc, struct ieee80211_hdr *hdr,
         /* DC value (value in the middle) is the blind spot of the spectral
          * sample and invalid, interpolate it.
          */
-        dc_pos = SPECTRAL_HT20_NUM_BINS / 2;
+        dc_pos = num_bins / 2;
         bins[dc_pos] = (bins[dc_pos + 1] + bins[dc_pos - 1]) / 2;
 
-        /* mag data is at the end of the frame, in front of radar_info */
-        mag_info = ((struct ath_ht20_mag_info *)radar_info) - 1;
+        if ((chan_type == NL80211_CHAN_HT40MINUS) ||
+            (chan_type == NL80211_CHAN_HT40PLUS)) {
+                s8 lower_rssi, upper_rssi;
+                s16 ext_nf;
+                u8 lower_max_index, upper_max_index;
+                u8 lower_bitmap_w, upper_bitmap_w;
+                u16 lower_mag, upper_mag;
+                struct ath9k_hw_cal_data *caldata = ah->caldata;
+                struct ath_ht20_40_mag_info *mag_info;
+
+                if (caldata)
+                        ext_nf = ath9k_hw_getchan_noise(ah, ah->curchan,
+                                        caldata->nfCalHist[3].privNF);
+                else
+                        ext_nf = ATH_DEFAULT_NOISE_FLOOR;
+
+                length = sizeof(fft_sample_40) - sizeof(struct fft_sample_tlv);
+                fft_sample_40.tlv.type = ATH_FFT_SAMPLE_HT20_40;
+                fft_sample_40.tlv.length = __cpu_to_be16(length);
+                fft_sample_40.freq = __cpu_to_be16(freq);
+                fft_sample_40.channel_type = chan_type;
+
+                if (chan_type == NL80211_CHAN_HT40PLUS) {
+                        lower_rssi = fix_rssi_inv_only(rs->rs_rssi_ctl0);
+                        upper_rssi = fix_rssi_inv_only(rs->rs_rssi_ext0);
 
-        /* copy raw bins without scaling them */
-        memcpy(fft_sample.data, bins, SPECTRAL_HT20_NUM_BINS);
-        fft_sample.max_exp = mag_info->max_exp & 0xf;
+                        fft_sample_40.lower_noise = ah->noise;
+                        fft_sample_40.upper_noise = ext_nf;
+                } else {
+                        lower_rssi = fix_rssi_inv_only(rs->rs_rssi_ext0);
+                        upper_rssi = fix_rssi_inv_only(rs->rs_rssi_ctl0);
 
-        max_magnitude = spectral_max_magnitude(mag_info->all_bins);
-        fft_sample.max_magnitude = __cpu_to_be16(max_magnitude);
-        fft_sample.max_index = spectral_max_index(mag_info->all_bins);
-        fft_sample.bitmap_weight = spectral_bitmap_weight(mag_info->all_bins);
-        fft_sample.tsf = __cpu_to_be64(tsf);
+                        fft_sample_40.lower_noise = ext_nf;
+                        fft_sample_40.upper_noise = ah->noise;
+                }
+                fft_sample_40.lower_rssi = lower_rssi;
+                fft_sample_40.upper_rssi = upper_rssi;
+
+                mag_info = ((struct ath_ht20_40_mag_info *)radar_info) - 1;
+                lower_mag = spectral_max_magnitude(mag_info->lower_bins);
+                upper_mag = spectral_max_magnitude(mag_info->upper_bins);
+                fft_sample_40.lower_max_magnitude = __cpu_to_be16(lower_mag);
+                fft_sample_40.upper_max_magnitude = __cpu_to_be16(upper_mag);
+                lower_max_index = spectral_max_index(mag_info->lower_bins);
+                upper_max_index = spectral_max_index(mag_info->upper_bins);
+                fft_sample_40.lower_max_index = lower_max_index;
+                fft_sample_40.upper_max_index = upper_max_index;
+                lower_bitmap_w = spectral_bitmap_weight(mag_info->lower_bins);
+                upper_bitmap_w = spectral_bitmap_weight(mag_info->upper_bins);
+                fft_sample_40.lower_bitmap_weight = lower_bitmap_w;
+                fft_sample_40.upper_bitmap_weight = upper_bitmap_w;
+                fft_sample_40.max_exp = mag_info->max_exp & 0xf;
+
+                fft_sample_40.tsf = __cpu_to_be64(tsf);
+
+                tlv = (struct fft_sample_tlv *)&fft_sample_40;
+        } else {
+                u8 max_index, bitmap_w;
+                u16 magnitude;
+                struct ath_ht20_mag_info *mag_info;
+
+                length = sizeof(fft_sample_20) - sizeof(struct fft_sample_tlv);
+                fft_sample_20.tlv.type = ATH_FFT_SAMPLE_HT20;
+                fft_sample_20.tlv.length = __cpu_to_be16(length);
+                fft_sample_20.freq = __cpu_to_be16(freq);
+
+                fft_sample_20.rssi = fix_rssi_inv_only(rs->rs_rssi_ctl0);
+                fft_sample_20.noise = ah->noise;
+
+                mag_info = ((struct ath_ht20_mag_info *)radar_info) - 1;
+                magnitude = spectral_max_magnitude(mag_info->all_bins);
+                fft_sample_20.max_magnitude = __cpu_to_be16(magnitude);
+                max_index = spectral_max_index(mag_info->all_bins);
+                fft_sample_20.max_index = max_index;
+                bitmap_w = spectral_bitmap_weight(mag_info->all_bins);
+                fft_sample_20.bitmap_weight = bitmap_w;
+                fft_sample_20.max_exp = mag_info->max_exp & 0xf;
+
+                fft_sample_20.tsf = __cpu_to_be64(tsf);
+
+                tlv = (struct fft_sample_tlv *)&fft_sample_20;
+        }
 
-        ath_debug_send_fft_sample(sc, &fft_sample.tlv);
+        ath_debug_send_fft_sample(sc, tlv);
         return 1;
 #else
         return 0;
@@ -1308,7 +1381,7 @@ static void ath9k_apply_ampdu_details(struct ath_softc *sc,
 
 int ath_rx_tasklet(struct ath_softc *sc, int flush, bool hp)
 {
-        struct ath_buf *bf;
+        struct ath_rxbuf *bf;
         struct sk_buff *skb = NULL, *requeue_skb, *hdr_skb;
         struct ieee80211_rx_status *rxs;
         struct ath_hw *ah = sc->sc_ah;