path: root/drivers/net/wireless/ath/ath10k
author    Linus Torvalds <torvalds@linux-foundation.org>  2013-11-13 03:40:34 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2013-11-13 03:40:34 -0500
commit    42a2d923cc349583ebf6fdd52a7d35e1c2f7e6bd
tree      2b2b0c03b5389c1301800119333967efafd994ca /drivers/net/wireless/ath/ath10k
parent    5cbb3d216e2041700231bcfc383ee5f8b7fc8b74
parent    75ecab1df14d90e86cebef9ec5c76befde46e65f
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:

 1) The addition of nftables.  No longer will we need protocol aware
    firewall filtering modules, it can all live in userspace.

    At the core of nftables is a, for lack of a better term, virtual
    machine that executes byte codes to inspect packet or metadata
    (arriving interface index, etc.) and make verdict decisions.

    Besides support for loading packet contents and comparing them, the
    interpreter supports lookups in various datastructures as
    fundamental operations.  For example, sets are supported, and
    therefore one could create a set of whitelist IP address entries
    which have ACCEPT verdicts attached to them, and use the
    appropriate byte codes to do such lookups.

    Since the interpreted code is composed in userspace, userspace can,
    for example, optimize the rule program before handing it to the
    kernel.

    Another major improvement is the capability of atomically updating
    portions of the ruleset.  In the existing netfilter implementation,
    one has to update the entire rule set in order to make a change,
    and this is very expensive.

    Userspace tools exist to create nftables rules from existing
    netfilter rule sets, but both kernel implementations will need to
    co-exist for quite some time as we transition from the old to the
    new stuff.

    Kudos to Patrick McHardy, Pablo Neira Ayuso, and others who have
    worked so hard on this.

 2) Daniel Borkmann and Hannes Frederic Sowa made several improvements
    to our pseudo-random number generator, mostly used for things like
    UDP port randomization and netfilter, amongst other things.

    In particular the taus88 generator is upgraded to taus113, and test
    cases are added.

 3) Support 64-bit rates in HTB and TBF schedulers, from Eric Dumazet
    and Yang Yingliang.

 4) Add support for new 577xx tigon3 chips to tg3 driver, from Nithin
    Sujir.

 5) Fix two fatal flaws in TCP dynamic right sizing, from Eric Dumazet,
    Neal Cardwell, and Yuchung Cheng.

 6) Allow IP_TOS and IP_TTL to be specified in sendmsg() ancillary
    control message data, much like other socket option attributes.
    From Francesco Fusco.

 7) Allow applications to specify a cap on the rate computed
    automatically by the kernel for pacing flows, via a new
    SO_MAX_PACING_RATE socket option.  From Eric Dumazet.  (A userspace
    sketch of this and the other new socket options follows the
    shortlog below.)

 8) Make the initial autotuned send buffer sizing in TCP more closely
    reflect actual needs, from Eric Dumazet.

 9) Currently early socket demux only happens for TCP sockets, but we
    can do it for connected UDP sockets too.  Implementation from Shawn
    Bohrer.

10) Refactor inet socket demux with the goal of improving hash demux
    performance for listening sockets.  The main goals are being able
    to use RCU lookups even on request sockets, and eliminating the
    listening lock contention.  From Eric Dumazet.

11) The bonding layer has many demuxes in its fast path, and an RCU
    conversion was started back in 3.11; several changes here extend
    the RCU usage to even more locations.  From Ding Tianhong and Wang
    Yufen, based upon suggestions by Nikolay Aleksandrov and Veaceslav
    Falico.

12) Allow segmentation offloads to be stacked, in particular to allow
    segmentation offloading over tunnels.  From Eric Dumazet.

13) Significantly improve the handling of secret keys we input into the
    various hash functions in the inet hashtables, TCP fast open, as
    well as syncookies.  From Hannes Frederic Sowa.  The key
    fundamental operation is "net_get_random_once()" which uses static
    keys.

    Hannes even extended this to ipv4/ipv6 fragmentation handling and
    our generic flow dissector.

14) The generic driver layer now takes care to set the driver data to
    NULL on device removal, so it's no longer necessary for drivers to
    explicitly set it to NULL.  Many drivers have been cleaned up in
    this way, from Jingoo Han.

15) Add a BPF based packet scheduler classifier, from Daniel Borkmann.

16) Improve CRC32 interfaces and generic SKB checksum iterators so that
    SCTP's checksumming can be handled more cleanly.  Also from Daniel
    Borkmann.

17) Add a new PMTU discovery mode, IP_PMTUDISC_INTERFACE, which forces
    using the interface MTU value.  This helps avoid PMTU attacks,
    particularly on DNS servers.  From Hannes Frederic Sowa.

18) Use generic XPS for transmit queue steering rather than an internal
    (re-)implementation in virtio-net.  From Jason Wang.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1622 commits)
  random32: add test cases for taus113 implementation
  random32: upgrade taus88 generator to taus113 from errata paper
  random32: move rnd_state to linux/random.h
  random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized
  random32: add periodic reseeding
  random32: fix off-by-one in seeding requirement
  PHY: Add RTL8201CP phy_driver to realtek
  xtsonic: add missing platform_set_drvdata() in xtsonic_probe()
  macmace: add missing platform_set_drvdata() in mace_probe()
  ethernet/arc/arc_emac: add missing platform_set_drvdata() in arc_emac_probe()
  ipv6: protect for_each_sk_fl_rcu in mem_check with rcu_read_lock_bh
  vlan: Implement vlan_dev_get_egress_qos_mask as an inline.
  ixgbe: add warning when max_vfs is out of range.
  igb: Update link modes display in ethtool
  netfilter: push reasm skb through instead of original frag skbs
  ip6_output: fragment outgoing reassembled skb properly
  MAINTAINERS: mv643xx_eth: take over maintainership from Lennart
  net_sched: tbf: support of 64bit rates
  ixgbe: deleting dfwd stations out of order can cause null ptr deref
  ixgbe: fix build err, num_rx_queues is only available with CONFIG_RPS
  ...
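The three new socket knobs above (items 6, 7 and 17) are all driven from
ordinary setsockopt()/sendmsg() calls. Below is a minimal userspace sketch,
not code from this merge: it assumes headers new enough to define
SO_MAX_PACING_RATE and IP_PMTUDISC_INTERFACE, and the demo() function, its
file descriptor and the chosen values are purely illustrative. Error
handling is elided.

/*
 * Sketch: exercising the new socket knobs from userspace.
 * demo(), the rate and TOS values are illustrative only.
 */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

static void demo(int fd, struct sockaddr_in *dst)
{
	unsigned int rate = 5 * 1000 * 1000;	/* pacing cap, bytes/sec */
	int pmtud = IP_PMTUDISC_INTERFACE;	/* never exceed interface MTU */
	int tos = 0x10;				/* per-packet TOS via cmsg */
	char payload[] = "ping";
	char cbuf[CMSG_SPACE(sizeof(tos))];
	struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
	struct msghdr msg = {
		.msg_name = dst, .msg_namelen = sizeof(*dst),
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	/* 7) cap the pacing rate the kernel computes for this flow */
	setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE, &rate, sizeof(rate));

	/* 17) new PMTU discovery mode: pin the path MTU to the interface MTU */
	setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtud, sizeof(pmtud));

	/* 6) per-sendmsg() TOS, carried as ancillary control data */
	cmsg->cmsg_level = IPPROTO_IP;
	cmsg->cmsg_type = IP_TOS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(tos));
	memcpy(CMSG_DATA(cmsg), &tos, sizeof(tos));

	sendmsg(fd, &msg, 0);
}

Note that SO_MAX_PACING_RATE caps, rather than sets, the rate the kernel
computes for a paced flow; as merged here the value is a 32-bit
bytes-per-second count.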
Diffstat (limited to 'drivers/net/wireless/ath/ath10k')
-rw-r--r--  drivers/net/wireless/ath/ath10k/bmi.c        42
-rw-r--r--  drivers/net/wireless/ath/ath10k/ce.c        397
-rw-r--r--  drivers/net/wireless/ath/ath10k/ce.h        126
-rw-r--r--  drivers/net/wireless/ath/ath10k/core.c      355
-rw-r--r--  drivers/net/wireless/ath/ath10k/core.h       80
-rw-r--r--  drivers/net/wireless/ath/ath10k/debug.c     157
-rw-r--r--  drivers/net/wireless/ath/ath10k/debug.h      27
-rw-r--r--  drivers/net/wireless/ath/ath10k/htc.c       241
-rw-r--r--  drivers/net/wireless/ath/ath10k/htc.h         5
-rw-r--r--  drivers/net/wireless/ath/ath10k/htt.c        19
-rw-r--r--  drivers/net/wireless/ath/ath10k/htt.h        13
-rw-r--r--  drivers/net/wireless/ath/ath10k/htt_rx.c    314
-rw-r--r--  drivers/net/wireless/ath/ath10k/htt_tx.c    287
-rw-r--r--  drivers/net/wireless/ath/ath10k/hw.h         79
-rw-r--r--  drivers/net/wireless/ath/ath10k/mac.c       732
-rw-r--r--  drivers/net/wireless/ath/ath10k/mac.h         2
-rw-r--r--  drivers/net/wireless/ath/ath10k/pci.c       465
-rw-r--r--  drivers/net/wireless/ath/ath10k/pci.h        76
-rw-r--r--  drivers/net/wireless/ath/ath10k/rx_desc.h    24
-rw-r--r--  drivers/net/wireless/ath/ath10k/trace.h      32
-rw-r--r--  drivers/net/wireless/ath/ath10k/txrx.c       67
-rw-r--r--  drivers/net/wireless/ath/ath10k/txrx.h        5
-rw-r--r--  drivers/net/wireless/ath/ath10k/wmi.c      1277
-rw-r--r--  drivers/net/wireless/ath/ath10k/wmi.h      1037
24 files changed, 4155 insertions, 1704 deletions
diff --git a/drivers/net/wireless/ath/ath10k/bmi.c b/drivers/net/wireless/ath/ath10k/bmi.c
index 744da6d1c405..a1f099628850 100644
--- a/drivers/net/wireless/ath/ath10k/bmi.c
+++ b/drivers/net/wireless/ath/ath10k/bmi.c
@@ -22,7 +22,8 @@
 
 void ath10k_bmi_start(struct ath10k *ar)
 {
-	ath10k_dbg(ATH10K_DBG_CORE, "BMI started\n");
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi start\n");
+
 	ar->bmi.done_sent = false;
 }
 
@@ -32,8 +33,10 @@ int ath10k_bmi_done(struct ath10k *ar)
 	u32 cmdlen = sizeof(cmd.id) + sizeof(cmd.done);
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi done\n");
+
 	if (ar->bmi.done_sent) {
-		ath10k_dbg(ATH10K_DBG_CORE, "%s skipped\n", __func__);
+		ath10k_dbg(ATH10K_DBG_BMI, "bmi skipped\n");
 		return 0;
 	}
 
@@ -46,7 +49,6 @@ int ath10k_bmi_done(struct ath10k *ar)
 		return ret;
 	}
 
-	ath10k_dbg(ATH10K_DBG_CORE, "BMI done\n");
 	return 0;
 }
 
@@ -59,6 +61,8 @@ int ath10k_bmi_get_target_info(struct ath10k *ar,
 	u32 resplen = sizeof(resp.get_target_info);
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi get target info\n");
+
 	if (ar->bmi.done_sent) {
 		ath10k_warn("BMI Get Target Info Command disallowed\n");
 		return -EBUSY;
@@ -80,6 +84,7 @@ int ath10k_bmi_get_target_info(struct ath10k *ar,
 
 	target_info->version = __le32_to_cpu(resp.get_target_info.version);
 	target_info->type    = __le32_to_cpu(resp.get_target_info.type);
+
 	return 0;
 }
 
@@ -92,15 +97,14 @@ int ath10k_bmi_read_memory(struct ath10k *ar,
 	u32 rxlen;
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi read address 0x%x length %d\n",
+		   address, length);
+
 	if (ar->bmi.done_sent) {
 		ath10k_warn("command disallowed\n");
 		return -EBUSY;
 	}
 
-	ath10k_dbg(ATH10K_DBG_CORE,
-		   "%s: (device: 0x%p, address: 0x%x, length: %d)\n",
-		   __func__, ar, address, length);
-
 	while (length) {
 		rxlen = min_t(u32, length, BMI_MAX_DATA_SIZE);
 
@@ -133,15 +137,14 @@ int ath10k_bmi_write_memory(struct ath10k *ar,
 	u32 txlen;
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi write address 0x%x length %d\n",
+		   address, length);
+
 	if (ar->bmi.done_sent) {
 		ath10k_warn("command disallowed\n");
 		return -EBUSY;
 	}
 
-	ath10k_dbg(ATH10K_DBG_CORE,
-		   "%s: (device: 0x%p, address: 0x%x, length: %d)\n",
-		   __func__, ar, address, length);
-
 	while (length) {
 		txlen = min(length, BMI_MAX_DATA_SIZE - hdrlen);
 
@@ -180,15 +183,14 @@ int ath10k_bmi_execute(struct ath10k *ar, u32 address, u32 *param)
 	u32 resplen = sizeof(resp.execute);
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi execute address 0x%x param 0x%x\n",
+		   address, *param);
+
 	if (ar->bmi.done_sent) {
 		ath10k_warn("command disallowed\n");
 		return -EBUSY;
 	}
 
-	ath10k_dbg(ATH10K_DBG_CORE,
-		   "%s: (device: 0x%p, address: 0x%x, param: %d)\n",
-		   __func__, ar, address, *param);
-
 	cmd.id = __cpu_to_le32(BMI_EXECUTE);
 	cmd.execute.addr = __cpu_to_le32(address);
 	cmd.execute.param = __cpu_to_le32(*param);
@@ -216,6 +218,9 @@ int ath10k_bmi_lz_data(struct ath10k *ar, const void *buffer, u32 length)
 	u32 txlen;
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi lz data buffer 0x%p length %d\n",
+		   buffer, length);
+
 	if (ar->bmi.done_sent) {
 		ath10k_warn("command disallowed\n");
 		return -EBUSY;
@@ -250,6 +255,9 @@ int ath10k_bmi_lz_stream_start(struct ath10k *ar, u32 address)
 	u32 cmdlen = sizeof(cmd.id) + sizeof(cmd.lz_start);
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI, "bmi lz stream start address 0x%x\n",
+		   address);
+
 	if (ar->bmi.done_sent) {
 		ath10k_warn("command disallowed\n");
 		return -EBUSY;
@@ -275,6 +283,10 @@ int ath10k_bmi_fast_download(struct ath10k *ar,
 	u32 trailer_len = length - head_len;
 	int ret;
 
+	ath10k_dbg(ATH10K_DBG_BMI,
+		   "bmi fast download address 0x%x buffer 0x%p length %d\n",
+		   address, buffer, length);
+
 	ret = ath10k_bmi_lz_stream_start(ar, address);
 	if (ret)
 		return ret;
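The bmi.c hunks above move every BMI message onto a dedicated ATH10K_DBG_BMI
debug mask and log the attempt before the done_sent check rather than after
it. A rough sketch of the mask-filter idiom behind ath10k_dbg(), with
illustrative bit values (the real definitions live in ath10k's debug.h and
differ in detail):

/*
 * Sketch of the debug-mask idiom; names suffixed _sketch, bit values
 * illustrative. The real ath10k_dbg() also routes through the driver's
 * own printk wrapper.
 */
enum ath10k_debug_mask_sketch {
	ATH10K_DBG_PCI_SKETCH  = 0x00000001,
	ATH10K_DBG_BMI_SKETCH  = 0x00000010,
	ATH10K_DBG_BOOT_SKETCH = 0x00000020,
};

static unsigned int ath10k_debug_mask;	/* set via a module parameter */

#define ath10k_dbg_sketch(mask, fmt, ...)			\
	do {							\
		if (ath10k_debug_mask & (mask))			\
			pr_debug(fmt, ##__VA_ARGS__);		\
	} while (0)

Filtering on a per-module bit is what lets a user enable only the BMI
traffic without drowning in the rest of the driver's debug output.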
diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
index f8b969f518f8..e46951b8fb92 100644
--- a/drivers/net/wireless/ath/ath10k/ce.c
+++ b/drivers/net/wireless/ath/ath10k/ce.c
@@ -76,36 +76,7 @@ static inline void ath10k_ce_src_ring_write_index_set(struct ath10k *ar,
 						      u32 ce_ctrl_addr,
 						      unsigned int n)
 {
-	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	void __iomem *indicator_addr;
-
-	if (!test_bit(ATH10K_PCI_FEATURE_HW_1_0_WORKAROUND, ar_pci->features)) {
-		ath10k_pci_write32(ar, ce_ctrl_addr + SR_WR_INDEX_ADDRESS, n);
-		return;
-	}
-
-	/* workaround for QCA988x_1.0 HW CE */
-	indicator_addr = ar_pci->mem + ce_ctrl_addr + DST_WATERMARK_ADDRESS;
-
-	if (ce_ctrl_addr == ath10k_ce_base_address(CDC_WAR_DATA_CE)) {
-		iowrite32((CDC_WAR_MAGIC_STR | n), indicator_addr);
-	} else {
-		unsigned long irq_flags;
-		local_irq_save(irq_flags);
-		iowrite32(1, indicator_addr);
-
-		/*
-		 * PCIE write waits for ACK in IPQ8K, there is no
-		 * need to read back value.
-		 */
-		(void)ioread32(indicator_addr);
-		(void)ioread32(indicator_addr); /* conservative */
-
-		ath10k_pci_write32(ar, ce_ctrl_addr + SR_WR_INDEX_ADDRESS, n);
-
-		iowrite32(0, indicator_addr);
-		local_irq_restore(irq_flags);
-	}
+	ath10k_pci_write32(ar, ce_ctrl_addr + SR_WR_INDEX_ADDRESS, n);
 }
 
 static inline u32 ath10k_ce_src_ring_write_index_get(struct ath10k *ar,
@@ -285,7 +256,7 @@ static inline void ath10k_ce_engine_int_status_clear(struct ath10k *ar,
  * ath10k_ce_sendlist_send.
  * The caller takes responsibility for any needed locking.
  */
-static int ath10k_ce_send_nolock(struct ce_state *ce_state,
+static int ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
 				 void *per_transfer_context,
 				 u32 buffer,
 				 unsigned int nbytes,
@@ -293,7 +264,7 @@ static int ath10k_ce_send_nolock(struct ce_state *ce_state,
 				 unsigned int flags)
 {
 	struct ath10k *ar = ce_state->ar;
-	struct ce_ring_state *src_ring = ce_state->src_ring;
+	struct ath10k_ce_ring *src_ring = ce_state->src_ring;
 	struct ce_desc *desc, *sdesc;
 	unsigned int nentries_mask = src_ring->nentries_mask;
 	unsigned int sw_index = src_ring->sw_index;
@@ -306,11 +277,13 @@ static int ath10k_ce_send_nolock(struct ce_state *ce_state,
 		ath10k_warn("%s: send more we can (nbytes: %d, max: %d)\n",
 			    __func__, nbytes, ce_state->src_sz_max);
 
-	ath10k_pci_wake(ar);
+	ret = ath10k_pci_wake(ar);
+	if (ret)
+		return ret;
 
 	if (unlikely(CE_RING_DELTA(nentries_mask,
 				   write_index, sw_index - 1) <= 0)) {
-		ret = -EIO;
+		ret = -ENOSR;
 		goto exit;
 	}
 
@@ -346,7 +319,7 @@ exit:
 	return ret;
 }
 
-int ath10k_ce_send(struct ce_state *ce_state,
+int ath10k_ce_send(struct ath10k_ce_pipe *ce_state,
 		   void *per_transfer_context,
 		   u32 buffer,
 		   unsigned int nbytes,
@@ -365,77 +338,26 @@ int ath10k_ce_send(struct ce_state *ce_state,
 	return ret;
 }
 
-void ath10k_ce_sendlist_buf_add(struct ce_sendlist *sendlist, u32 buffer,
-				unsigned int nbytes, u32 flags)
+int ath10k_ce_num_free_src_entries(struct ath10k_ce_pipe *pipe)
 {
-	unsigned int num_items = sendlist->num_items;
-	struct ce_sendlist_item *item;
-
-	item = &sendlist->item[num_items];
-	item->data = buffer;
-	item->u.nbytes = nbytes;
-	item->flags = flags;
-	sendlist->num_items++;
-}
-
-int ath10k_ce_sendlist_send(struct ce_state *ce_state,
-			    void *per_transfer_context,
-			    struct ce_sendlist *sendlist,
-			    unsigned int transfer_id)
-{
-	struct ce_ring_state *src_ring = ce_state->src_ring;
-	struct ce_sendlist_item *item;
-	struct ath10k *ar = ce_state->ar;
+	struct ath10k *ar = pipe->ar;
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	unsigned int nentries_mask = src_ring->nentries_mask;
-	unsigned int num_items = sendlist->num_items;
-	unsigned int sw_index;
-	unsigned int write_index;
-	int i, delta, ret = -ENOMEM;
+	int delta;
 
 	spin_lock_bh(&ar_pci->ce_lock);
-
-	sw_index = src_ring->sw_index;
-	write_index = src_ring->write_index;
-
-	delta = CE_RING_DELTA(nentries_mask, write_index, sw_index - 1);
-
-	if (delta >= num_items) {
-		/*
-		 * Handle all but the last item uniformly.
-		 */
-		for (i = 0; i < num_items - 1; i++) {
-			item = &sendlist->item[i];
-			ret = ath10k_ce_send_nolock(ce_state,
-						    CE_SENDLIST_ITEM_CTXT,
-						    (u32) item->data,
-						    item->u.nbytes, transfer_id,
-						    item->flags |
-						    CE_SEND_FLAG_GATHER);
-			if (ret)
-				ath10k_warn("CE send failed for item: %d\n", i);
-		}
-		/*
-		 * Provide valid context pointer for final item.
-		 */
-		item = &sendlist->item[i];
-		ret = ath10k_ce_send_nolock(ce_state, per_transfer_context,
-					    (u32) item->data, item->u.nbytes,
-					    transfer_id, item->flags);
-		if (ret)
-			ath10k_warn("CE send failed for last item: %d\n", i);
-	}
-
+	delta = CE_RING_DELTA(pipe->src_ring->nentries_mask,
			      pipe->src_ring->write_index,
			      pipe->src_ring->sw_index - 1);
 	spin_unlock_bh(&ar_pci->ce_lock);
 
-	return ret;
+	return delta;
 }
 
-int ath10k_ce_recv_buf_enqueue(struct ce_state *ce_state,
+int ath10k_ce_recv_buf_enqueue(struct ath10k_ce_pipe *ce_state,
 			       void *per_recv_context,
 			       u32 buffer)
 {
-	struct ce_ring_state *dest_ring = ce_state->dest_ring;
+	struct ath10k_ce_ring *dest_ring = ce_state->dest_ring;
 	u32 ctrl_addr = ce_state->ctrl_addr;
 	struct ath10k *ar = ce_state->ar;
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
@@ -448,7 +370,9 @@ int ath10k_ce_recv_buf_enqueue(struct ce_state *ce_state,
 	write_index = dest_ring->write_index;
 	sw_index = dest_ring->sw_index;
 
-	ath10k_pci_wake(ar);
+	ret = ath10k_pci_wake(ar);
+	if (ret)
+		goto out;
 
 	if (CE_RING_DELTA(nentries_mask, write_index, sw_index - 1) > 0) {
 		struct ce_desc *base = dest_ring->base_addr_owner_space;
@@ -470,6 +394,8 @@ int ath10k_ce_recv_buf_enqueue(struct ce_state *ce_state,
 		ret = -EIO;
 	}
 	ath10k_pci_sleep(ar);
+
+out:
 	spin_unlock_bh(&ar_pci->ce_lock);
 
 	return ret;
@@ -479,14 +405,14 @@ int ath10k_ce_recv_buf_enqueue(struct ce_state *ce_state,
  * Guts of ath10k_ce_completed_recv_next.
  * The caller takes responsibility for any necessary locking.
  */
-static int ath10k_ce_completed_recv_next_nolock(struct ce_state *ce_state,
+static int ath10k_ce_completed_recv_next_nolock(struct ath10k_ce_pipe *ce_state,
 						void **per_transfer_contextp,
 						u32 *bufferp,
 						unsigned int *nbytesp,
 						unsigned int *transfer_idp,
 						unsigned int *flagsp)
 {
-	struct ce_ring_state *dest_ring = ce_state->dest_ring;
+	struct ath10k_ce_ring *dest_ring = ce_state->dest_ring;
 	unsigned int nentries_mask = dest_ring->nentries_mask;
 	unsigned int sw_index = dest_ring->sw_index;
 
@@ -535,7 +461,7 @@ static int ath10k_ce_completed_recv_next_nolock(struct ce_state *ce_state,
 	return 0;
 }
 
-int ath10k_ce_completed_recv_next(struct ce_state *ce_state,
+int ath10k_ce_completed_recv_next(struct ath10k_ce_pipe *ce_state,
 				  void **per_transfer_contextp,
 				  u32 *bufferp,
 				  unsigned int *nbytesp,
@@ -556,11 +482,11 @@ int ath10k_ce_completed_recv_next(struct ce_state *ce_state,
 	return ret;
 }
 
-int ath10k_ce_revoke_recv_next(struct ce_state *ce_state,
+int ath10k_ce_revoke_recv_next(struct ath10k_ce_pipe *ce_state,
 			       void **per_transfer_contextp,
 			       u32 *bufferp)
 {
-	struct ce_ring_state *dest_ring;
+	struct ath10k_ce_ring *dest_ring;
 	unsigned int nentries_mask;
 	unsigned int sw_index;
 	unsigned int write_index;
@@ -612,19 +538,20 @@ int ath10k_ce_revoke_recv_next(struct ce_state *ce_state,
  * Guts of ath10k_ce_completed_send_next.
  * The caller takes responsibility for any necessary locking.
  */
-static int ath10k_ce_completed_send_next_nolock(struct ce_state *ce_state,
+static int ath10k_ce_completed_send_next_nolock(struct ath10k_ce_pipe *ce_state,
 						void **per_transfer_contextp,
 						u32 *bufferp,
 						unsigned int *nbytesp,
 						unsigned int *transfer_idp)
 {
-	struct ce_ring_state *src_ring = ce_state->src_ring;
+	struct ath10k_ce_ring *src_ring = ce_state->src_ring;
 	u32 ctrl_addr = ce_state->ctrl_addr;
 	struct ath10k *ar = ce_state->ar;
 	unsigned int nentries_mask = src_ring->nentries_mask;
 	unsigned int sw_index = src_ring->sw_index;
+	struct ce_desc *sdesc, *sbase;
 	unsigned int read_index;
-	int ret = -EIO;
+	int ret;
 
 	if (src_ring->hw_index == sw_index) {
 		/*
@@ -634,48 +561,54 @@ static int ath10k_ce_completed_send_next_nolock(struct ce_state *ce_state,
 		 * the SW has really caught up to the HW, or if the cached
 		 * value of the HW index has become stale.
 		 */
-		ath10k_pci_wake(ar);
+
+		ret = ath10k_pci_wake(ar);
+		if (ret)
+			return ret;
+
 		src_ring->hw_index =
 			ath10k_ce_src_ring_read_index_get(ar, ctrl_addr);
 		src_ring->hw_index &= nentries_mask;
+
 		ath10k_pci_sleep(ar);
 	}
+
 	read_index = src_ring->hw_index;
 
-	if ((read_index != sw_index) && (read_index != 0xffffffff)) {
-		struct ce_desc *sbase = src_ring->shadow_base;
-		struct ce_desc *sdesc = CE_SRC_RING_TO_DESC(sbase, sw_index);
+	if ((read_index == sw_index) || (read_index == 0xffffffff))
+		return -EIO;
 
-		/* Return data from completed source descriptor */
-		*bufferp = __le32_to_cpu(sdesc->addr);
-		*nbytesp = __le16_to_cpu(sdesc->nbytes);
-		*transfer_idp = MS(__le16_to_cpu(sdesc->flags),
-				   CE_DESC_FLAGS_META_DATA);
+	sbase = src_ring->shadow_base;
+	sdesc = CE_SRC_RING_TO_DESC(sbase, sw_index);
 
-		if (per_transfer_contextp)
-			*per_transfer_contextp =
-				src_ring->per_transfer_context[sw_index];
+	/* Return data from completed source descriptor */
+	*bufferp = __le32_to_cpu(sdesc->addr);
+	*nbytesp = __le16_to_cpu(sdesc->nbytes);
+	*transfer_idp = MS(__le16_to_cpu(sdesc->flags),
			   CE_DESC_FLAGS_META_DATA);
 
-		/* sanity */
-		src_ring->per_transfer_context[sw_index] = NULL;
+	if (per_transfer_contextp)
+		*per_transfer_contextp =
			src_ring->per_transfer_context[sw_index];
 
-		/* Update sw_index */
-		sw_index = CE_RING_IDX_INCR(nentries_mask, sw_index);
-		src_ring->sw_index = sw_index;
-		ret = 0;
-	}
+	/* sanity */
+	src_ring->per_transfer_context[sw_index] = NULL;
 
-	return ret;
+	/* Update sw_index */
+	sw_index = CE_RING_IDX_INCR(nentries_mask, sw_index);
+	src_ring->sw_index = sw_index;
+
+	return 0;
 }
 
 /* NB: Modeled after ath10k_ce_completed_send_next */
-int ath10k_ce_cancel_send_next(struct ce_state *ce_state,
+int ath10k_ce_cancel_send_next(struct ath10k_ce_pipe *ce_state,
 			       void **per_transfer_contextp,
 			       u32 *bufferp,
 			       unsigned int *nbytesp,
 			       unsigned int *transfer_idp)
 {
-	struct ce_ring_state *src_ring;
+	struct ath10k_ce_ring *src_ring;
 	unsigned int nentries_mask;
 	unsigned int sw_index;
 	unsigned int write_index;
@@ -727,7 +660,7 @@ int ath10k_ce_cancel_send_next(struct ce_state *ce_state,
 	return ret;
 }
 
-int ath10k_ce_completed_send_next(struct ce_state *ce_state,
+int ath10k_ce_completed_send_next(struct ath10k_ce_pipe *ce_state,
 				  void **per_transfer_contextp,
 				  u32 *bufferp,
 				  unsigned int *nbytesp,
@@ -756,53 +689,29 @@ int ath10k_ce_completed_send_next(struct ce_state *ce_state,
 void ath10k_ce_per_engine_service(struct ath10k *ar, unsigned int ce_id)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	struct ce_state *ce_state = ar_pci->ce_id_to_state[ce_id];
+	struct ath10k_ce_pipe *ce_state = &ar_pci->ce_states[ce_id];
 	u32 ctrl_addr = ce_state->ctrl_addr;
-	void *transfer_context;
-	u32 buf;
-	unsigned int nbytes;
-	unsigned int id;
-	unsigned int flags;
+	int ret;
+
+	ret = ath10k_pci_wake(ar);
+	if (ret)
+		return;
 
-	ath10k_pci_wake(ar);
 	spin_lock_bh(&ar_pci->ce_lock);
 
 	/* Clear the copy-complete interrupts that will be handled here. */
 	ath10k_ce_engine_int_status_clear(ar, ctrl_addr,
					  HOST_IS_COPY_COMPLETE_MASK);
 
-	if (ce_state->recv_cb) {
-		/*
-		 * Pop completed recv buffers and call the registered
-		 * recv callback for each
-		 */
-		while (ath10k_ce_completed_recv_next_nolock(ce_state,
-							    &transfer_context,
-							    &buf, &nbytes,
-							    &id, &flags) == 0) {
-			spin_unlock_bh(&ar_pci->ce_lock);
-			ce_state->recv_cb(ce_state, transfer_context, buf,
-					  nbytes, id, flags);
-			spin_lock_bh(&ar_pci->ce_lock);
-		}
-	}
+	spin_unlock_bh(&ar_pci->ce_lock);
 
-	if (ce_state->send_cb) {
-		/*
-		 * Pop completed send buffers and call the registered
-		 * send callback for each
-		 */
-		while (ath10k_ce_completed_send_next_nolock(ce_state,
-							    &transfer_context,
-							    &buf,
-							    &nbytes,
-							    &id) == 0) {
-			spin_unlock_bh(&ar_pci->ce_lock);
-			ce_state->send_cb(ce_state, transfer_context,
-					  buf, nbytes, id);
-			spin_lock_bh(&ar_pci->ce_lock);
-		}
-	}
+	if (ce_state->recv_cb)
+		ce_state->recv_cb(ce_state);
+
+	if (ce_state->send_cb)
+		ce_state->send_cb(ce_state);
+
+	spin_lock_bh(&ar_pci->ce_lock);
 
 	/*
 	 * Misc CE interrupts are not being handled, but still need
@@ -823,10 +732,13 @@ void ath10k_ce_per_engine_service(struct ath10k *ar, unsigned int ce_id)
 void ath10k_ce_per_engine_service_any(struct ath10k *ar)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	int ce_id;
+	int ce_id, ret;
 	u32 intr_summary;
 
-	ath10k_pci_wake(ar);
+	ret = ath10k_pci_wake(ar);
+	if (ret)
+		return;
+
 	intr_summary = CE_INTERRUPT_SUMMARY(ar);
 
 	for (ce_id = 0; intr_summary && (ce_id < ar_pci->ce_count); ce_id++) {
@@ -849,13 +761,16 @@ void ath10k_ce_per_engine_service_any(struct ath10k *ar)
  *
  * Called with ce_lock held.
  */
-static void ath10k_ce_per_engine_handler_adjust(struct ce_state *ce_state,
+static void ath10k_ce_per_engine_handler_adjust(struct ath10k_ce_pipe *ce_state,
 						int disable_copy_compl_intr)
 {
 	u32 ctrl_addr = ce_state->ctrl_addr;
 	struct ath10k *ar = ce_state->ar;
+	int ret;
 
-	ath10k_pci_wake(ar);
+	ret = ath10k_pci_wake(ar);
+	if (ret)
+		return;
 
 	if ((!disable_copy_compl_intr) &&
	    (ce_state->send_cb || ce_state->recv_cb))
@@ -871,11 +786,14 @@ static void ath10k_ce_per_engine_handler_adjust(struct ce_state *ce_state,
 void ath10k_ce_disable_interrupts(struct ath10k *ar)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	int ce_id;
+	int ce_id, ret;
+
+	ret = ath10k_pci_wake(ar);
+	if (ret)
+		return;
 
-	ath10k_pci_wake(ar);
 	for (ce_id = 0; ce_id < ar_pci->ce_count; ce_id++) {
-		struct ce_state *ce_state = ar_pci->ce_id_to_state[ce_id];
+		struct ath10k_ce_pipe *ce_state = &ar_pci->ce_states[ce_id];
 		u32 ctrl_addr = ce_state->ctrl_addr;
 
 		ath10k_ce_copy_complete_intr_disable(ar, ctrl_addr);
@@ -883,12 +801,8 @@ void ath10k_ce_disable_interrupts(struct ath10k *ar)
 	ath10k_pci_sleep(ar);
 }
 
-void ath10k_ce_send_cb_register(struct ce_state *ce_state,
-				void (*send_cb) (struct ce_state *ce_state,
-						 void *transfer_context,
-						 u32 buffer,
-						 unsigned int nbytes,
-						 unsigned int transfer_id),
+void ath10k_ce_send_cb_register(struct ath10k_ce_pipe *ce_state,
+				void (*send_cb)(struct ath10k_ce_pipe *),
 				int disable_interrupts)
 {
 	struct ath10k *ar = ce_state->ar;
@@ -900,13 +814,8 @@ void ath10k_ce_send_cb_register(struct ce_state *ce_state,
 	spin_unlock_bh(&ar_pci->ce_lock);
 }
 
-void ath10k_ce_recv_cb_register(struct ce_state *ce_state,
-				void (*recv_cb) (struct ce_state *ce_state,
-						 void *transfer_context,
-						 u32 buffer,
-						 unsigned int nbytes,
-						 unsigned int transfer_id,
-						 unsigned int flags))
+void ath10k_ce_recv_cb_register(struct ath10k_ce_pipe *ce_state,
+				void (*recv_cb)(struct ath10k_ce_pipe *))
 {
 	struct ath10k *ar = ce_state->ar;
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
@@ -919,11 +828,11 @@ void ath10k_ce_recv_cb_register(struct ce_state *ce_state,
 
 static int ath10k_ce_init_src_ring(struct ath10k *ar,
				   unsigned int ce_id,
-				   struct ce_state *ce_state,
+				   struct ath10k_ce_pipe *ce_state,
				   const struct ce_attr *attr)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	struct ce_ring_state *src_ring;
+	struct ath10k_ce_ring *src_ring;
 	unsigned int nentries = attr->src_nentries;
 	unsigned int ce_nbytes;
 	u32 ctrl_addr = ath10k_ce_base_address(ce_id);
@@ -937,19 +846,18 @@ static int ath10k_ce_init_src_ring(struct ath10k *ar,
 		return 0;
 	}
 
-	ce_nbytes = sizeof(struct ce_ring_state) + (nentries * sizeof(void *));
+	ce_nbytes = sizeof(struct ath10k_ce_ring) + (nentries * sizeof(void *));
 	ptr = kzalloc(ce_nbytes, GFP_KERNEL);
 	if (ptr == NULL)
 		return -ENOMEM;
 
-	ce_state->src_ring = (struct ce_ring_state *)ptr;
+	ce_state->src_ring = (struct ath10k_ce_ring *)ptr;
 	src_ring = ce_state->src_ring;
 
-	ptr += sizeof(struct ce_ring_state);
+	ptr += sizeof(struct ath10k_ce_ring);
 	src_ring->nentries = nentries;
 	src_ring->nentries_mask = nentries - 1;
 
-	ath10k_pci_wake(ar);
 	src_ring->sw_index = ath10k_ce_src_ring_read_index_get(ar, ctrl_addr);
 	src_ring->sw_index &= src_ring->nentries_mask;
 	src_ring->hw_index = src_ring->sw_index;
@@ -957,7 +865,6 @@ static int ath10k_ce_init_src_ring(struct ath10k *ar,
 	src_ring->write_index =
		ath10k_ce_src_ring_write_index_get(ar, ctrl_addr);
 	src_ring->write_index &= src_ring->nentries_mask;
-	ath10k_pci_sleep(ar);
 
 	src_ring->per_transfer_context = (void **)ptr;
 
@@ -970,6 +877,12 @@ static int ath10k_ce_init_src_ring(struct ath10k *ar,
 		(nentries * sizeof(struct ce_desc) +
		 CE_DESC_RING_ALIGN),
		&base_addr);
+	if (!src_ring->base_addr_owner_space_unaligned) {
+		kfree(ce_state->src_ring);
+		ce_state->src_ring = NULL;
+		return -ENOMEM;
+	}
+
 	src_ring->base_addr_ce_space_unaligned = base_addr;
 
 	src_ring->base_addr_owner_space = PTR_ALIGN(
@@ -986,12 +899,21 @@ static int ath10k_ce_init_src_ring(struct ath10k *ar,
 	src_ring->shadow_base_unaligned =
		kmalloc((nentries * sizeof(struct ce_desc) +
			 CE_DESC_RING_ALIGN), GFP_KERNEL);
+	if (!src_ring->shadow_base_unaligned) {
+		pci_free_consistent(ar_pci->pdev,
+				    (nentries * sizeof(struct ce_desc) +
+				     CE_DESC_RING_ALIGN),
+				    src_ring->base_addr_owner_space,
+				    src_ring->base_addr_ce_space);
+		kfree(ce_state->src_ring);
+		ce_state->src_ring = NULL;
+		return -ENOMEM;
+	}
 
 	src_ring->shadow_base = PTR_ALIGN(
			src_ring->shadow_base_unaligned,
			CE_DESC_RING_ALIGN);
 
-	ath10k_pci_wake(ar);
 	ath10k_ce_src_ring_base_addr_set(ar, ctrl_addr,
					 src_ring->base_addr_ce_space);
 	ath10k_ce_src_ring_size_set(ar, ctrl_addr, nentries);
@@ -999,18 +921,21 @@ static int ath10k_ce_init_src_ring(struct ath10k *ar,
 	ath10k_ce_src_ring_byte_swap_set(ar, ctrl_addr, 0);
 	ath10k_ce_src_ring_lowmark_set(ar, ctrl_addr, 0);
 	ath10k_ce_src_ring_highmark_set(ar, ctrl_addr, nentries);
-	ath10k_pci_sleep(ar);
+
+	ath10k_dbg(ATH10K_DBG_BOOT,
+		   "boot ce src ring id %d entries %d base_addr %p\n",
+		   ce_id, nentries, src_ring->base_addr_owner_space);
 
 	return 0;
 }
 
 static int ath10k_ce_init_dest_ring(struct ath10k *ar,
				    unsigned int ce_id,
-				    struct ce_state *ce_state,
+				    struct ath10k_ce_pipe *ce_state,
				    const struct ce_attr *attr)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	struct ce_ring_state *dest_ring;
+	struct ath10k_ce_ring *dest_ring;
 	unsigned int nentries = attr->dest_nentries;
 	unsigned int ce_nbytes;
 	u32 ctrl_addr = ath10k_ce_base_address(ce_id);
@@ -1024,25 +949,23 @@ static int ath10k_ce_init_dest_ring(struct ath10k *ar,
 		return 0;
 	}
 
-	ce_nbytes = sizeof(struct ce_ring_state) + (nentries * sizeof(void *));
+	ce_nbytes = sizeof(struct ath10k_ce_ring) + (nentries * sizeof(void *));
 	ptr = kzalloc(ce_nbytes, GFP_KERNEL);
 	if (ptr == NULL)
 		return -ENOMEM;
 
-	ce_state->dest_ring = (struct ce_ring_state *)ptr;
+	ce_state->dest_ring = (struct ath10k_ce_ring *)ptr;
 	dest_ring = ce_state->dest_ring;
 
-	ptr += sizeof(struct ce_ring_state);
+	ptr += sizeof(struct ath10k_ce_ring);
 	dest_ring->nentries = nentries;
 	dest_ring->nentries_mask = nentries - 1;
 
-	ath10k_pci_wake(ar);
 	dest_ring->sw_index = ath10k_ce_dest_ring_read_index_get(ar, ctrl_addr);
 	dest_ring->sw_index &= dest_ring->nentries_mask;
 	dest_ring->write_index =
		ath10k_ce_dest_ring_write_index_get(ar, ctrl_addr);
 	dest_ring->write_index &= dest_ring->nentries_mask;
-	ath10k_pci_sleep(ar);
 
 	dest_ring->per_transfer_context = (void **)ptr;
 
@@ -1055,6 +978,12 @@ static int ath10k_ce_init_dest_ring(struct ath10k *ar,
 		(nentries * sizeof(struct ce_desc) +
		 CE_DESC_RING_ALIGN),
		&base_addr);
+	if (!dest_ring->base_addr_owner_space_unaligned) {
+		kfree(ce_state->dest_ring);
+		ce_state->dest_ring = NULL;
+		return -ENOMEM;
+	}
+
 	dest_ring->base_addr_ce_space_unaligned = base_addr;
 
 	/*
@@ -1071,44 +1000,35 @@ static int ath10k_ce_init_dest_ring(struct ath10k *ar,
		dest_ring->base_addr_ce_space_unaligned,
		CE_DESC_RING_ALIGN);
 
-	ath10k_pci_wake(ar);
 	ath10k_ce_dest_ring_base_addr_set(ar, ctrl_addr,
					  dest_ring->base_addr_ce_space);
 	ath10k_ce_dest_ring_size_set(ar, ctrl_addr, nentries);
 	ath10k_ce_dest_ring_byte_swap_set(ar, ctrl_addr, 0);
 	ath10k_ce_dest_ring_lowmark_set(ar, ctrl_addr, 0);
 	ath10k_ce_dest_ring_highmark_set(ar, ctrl_addr, nentries);
-	ath10k_pci_sleep(ar);
+
+	ath10k_dbg(ATH10K_DBG_BOOT,
+		   "boot ce dest ring id %d entries %d base_addr %p\n",
+		   ce_id, nentries, dest_ring->base_addr_owner_space);
 
 	return 0;
 }
 
-static struct ce_state *ath10k_ce_init_state(struct ath10k *ar,
-					     unsigned int ce_id,
-					     const struct ce_attr *attr)
+static struct ath10k_ce_pipe *ath10k_ce_init_state(struct ath10k *ar,
						   unsigned int ce_id,
						   const struct ce_attr *attr)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-	struct ce_state *ce_state = NULL;
+	struct ath10k_ce_pipe *ce_state = &ar_pci->ce_states[ce_id];
 	u32 ctrl_addr = ath10k_ce_base_address(ce_id);
 
 	spin_lock_bh(&ar_pci->ce_lock);
 
-	if (!ar_pci->ce_id_to_state[ce_id]) {
-		ce_state = kzalloc(sizeof(*ce_state), GFP_ATOMIC);
-		if (ce_state == NULL) {
-			spin_unlock_bh(&ar_pci->ce_lock);
-			return NULL;
-		}
-
-		ar_pci->ce_id_to_state[ce_id] = ce_state;
-		ce_state->ar = ar;
-		ce_state->id = ce_id;
-		ce_state->ctrl_addr = ctrl_addr;
-		ce_state->state = CE_RUNNING;
-		/* Save attribute flags */
-		ce_state->attr_flags = attr->flags;
-		ce_state->src_sz_max = attr->src_sz_max;
-	}
+	ce_state->ar = ar;
+	ce_state->id = ce_id;
+	ce_state->ctrl_addr = ctrl_addr;
+	ce_state->attr_flags = attr->flags;
+	ce_state->src_sz_max = attr->src_sz_max;
 
 	spin_unlock_bh(&ar_pci->ce_lock);
 
@@ -1122,12 +1042,17 @@ static struct ce_state *ath10k_ce_init_state(struct ath10k *ar,
 * initialization. It may be that only one side or the other is
 * initialized by software/firmware.
 */
-struct ce_state *ath10k_ce_init(struct ath10k *ar,
-				unsigned int ce_id,
-				const struct ce_attr *attr)
+struct ath10k_ce_pipe *ath10k_ce_init(struct ath10k *ar,
				      unsigned int ce_id,
				      const struct ce_attr *attr)
 {
-	struct ce_state *ce_state;
+	struct ath10k_ce_pipe *ce_state;
 	u32 ctrl_addr = ath10k_ce_base_address(ce_id);
+	int ret;
+
+	ret = ath10k_pci_wake(ar);
+	if (ret)
+		return NULL;
 
 	ce_state = ath10k_ce_init_state(ar, ce_id, attr);
 	if (!ce_state) {
@@ -1136,40 +1061,38 @@ struct ce_state *ath10k_ce_init(struct ath10k *ar,
 	}
 
 	if (attr->src_nentries) {
-		if (ath10k_ce_init_src_ring(ar, ce_id, ce_state, attr)) {
-			ath10k_err("Failed to initialize CE src ring for ID: %d\n",
-				   ce_id);
+		ret = ath10k_ce_init_src_ring(ar, ce_id, ce_state, attr);
+		if (ret) {
+			ath10k_err("Failed to initialize CE src ring for ID: %d (%d)\n",
+				   ce_id, ret);
 			ath10k_ce_deinit(ce_state);
 			return NULL;
 		}
 	}
 
 	if (attr->dest_nentries) {
-		if (ath10k_ce_init_dest_ring(ar, ce_id, ce_state, attr)) {
-			ath10k_err("Failed to initialize CE dest ring for ID: %d\n",
-				   ce_id);
+		ret = ath10k_ce_init_dest_ring(ar, ce_id, ce_state, attr);
+		if (ret) {
+			ath10k_err("Failed to initialize CE dest ring for ID: %d (%d)\n",
+				   ce_id, ret);
			ath10k_ce_deinit(ce_state);
			return NULL;
		}
	}
 
	/* Enable CE error interrupts */
-	ath10k_pci_wake(ar);
	ath10k_ce_error_intr_enable(ar, ctrl_addr);
+
	ath10k_pci_sleep(ar);
 
	return ce_state;
 }
 
-void ath10k_ce_deinit(struct ce_state *ce_state)
+void ath10k_ce_deinit(struct ath10k_ce_pipe *ce_state)
 {
-	unsigned int ce_id = ce_state->id;
	struct ath10k *ar = ce_state->ar;
	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 
-	ce_state->state = CE_UNUSED;
-	ar_pci->ce_id_to_state[ce_id] = NULL;
-
	if (ce_state->src_ring) {
		kfree(ce_state->src_ring->shadow_base_unaligned);
		pci_free_consistent(ar_pci->pdev,
@@ -1190,5 +1113,7 @@ void ath10k_ce_deinit(struct ce_state *ce_state)
				    ce_state->dest_ring->base_addr_ce_space);
		kfree(ce_state->dest_ring);
	}
-	kfree(ce_state);
+
+	ce_state->src_ring = NULL;
+	ce_state->dest_ring = NULL;
 }
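A note on the ring arithmetic used throughout the ce.c changes above: CE
rings are power-of-2 sized, so the distance between two indices can be
computed with a simple mask, which is what the CE_RING_DELTA() macro in ce.h
does. A sketch under that assumption (ce_ring_delta_sketch() is illustrative,
not the driver's macro):

/*
 * Sketch of CE_RING_DELTA-style arithmetic for a power-of-2 ring,
 * where nentries_mask == nentries - 1. Unsigned subtraction plus
 * masking handles wrap-around for free.
 */
static inline unsigned int ce_ring_delta_sketch(unsigned int nentries_mask,
						unsigned int fromidx,
						unsigned int toidx)
{
	return (toidx - fromidx) & nentries_mask;
}

The send paths ask for CE_RING_DELTA(nentries_mask, write_index,
sw_index - 1): the "- 1" keeps one descriptor slot permanently unused, so a
completely full ring stays distinguishable from an empty one. This is also
the quantity the new ath10k_ce_num_free_src_entries() returns under the CE
lock.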
diff --git a/drivers/net/wireless/ath/ath10k/ce.h b/drivers/net/wireless/ath/ath10k/ce.h
index c17f07c026f4..15d45b5b7615 100644
--- a/drivers/net/wireless/ath/ath10k/ce.h
+++ b/drivers/net/wireless/ath/ath10k/ce.h
@@ -27,7 +27,6 @@
27 27
28/* Descriptor rings must be aligned to this boundary */ 28/* Descriptor rings must be aligned to this boundary */
29#define CE_DESC_RING_ALIGN 8 29#define CE_DESC_RING_ALIGN 8
30#define CE_SENDLIST_ITEMS_MAX 12
31#define CE_SEND_FLAG_GATHER 0x00010000 30#define CE_SEND_FLAG_GATHER 0x00010000
32 31
33/* 32/*
@@ -36,16 +35,9 @@
36 * how to use copy engines. 35 * how to use copy engines.
37 */ 36 */
38 37
39struct ce_state; 38struct ath10k_ce_pipe;
40 39
41 40
42/* Copy Engine operational state */
43enum ce_op_state {
44 CE_UNUSED,
45 CE_PAUSED,
46 CE_RUNNING,
47};
48
49#define CE_DESC_FLAGS_GATHER (1 << 0) 41#define CE_DESC_FLAGS_GATHER (1 << 0)
50#define CE_DESC_FLAGS_BYTE_SWAP (1 << 1) 42#define CE_DESC_FLAGS_BYTE_SWAP (1 << 1)
51#define CE_DESC_FLAGS_META_DATA_MASK 0xFFFC 43#define CE_DESC_FLAGS_META_DATA_MASK 0xFFFC
@@ -57,8 +49,7 @@ struct ce_desc {
57 __le16 flags; /* %CE_DESC_FLAGS_ */ 49 __le16 flags; /* %CE_DESC_FLAGS_ */
58}; 50};
59 51
60/* Copy Engine Ring internal state */ 52struct ath10k_ce_ring {
61struct ce_ring_state {
62 /* Number of entries in this ring; must be power of 2 */ 53 /* Number of entries in this ring; must be power of 2 */
63 unsigned int nentries; 54 unsigned int nentries;
64 unsigned int nentries_mask; 55 unsigned int nentries_mask;
@@ -116,49 +107,20 @@ struct ce_ring_state {
116 void **per_transfer_context; 107 void **per_transfer_context;
117}; 108};
118 109
119/* Copy Engine internal state */ 110struct ath10k_ce_pipe {
120struct ce_state {
121 struct ath10k *ar; 111 struct ath10k *ar;
122 unsigned int id; 112 unsigned int id;
123 113
124 unsigned int attr_flags; 114 unsigned int attr_flags;
125 115
126 u32 ctrl_addr; 116 u32 ctrl_addr;
127 enum ce_op_state state;
128
129 void (*send_cb) (struct ce_state *ce_state,
130 void *per_transfer_send_context,
131 u32 buffer,
132 unsigned int nbytes,
133 unsigned int transfer_id);
134 void (*recv_cb) (struct ce_state *ce_state,
135 void *per_transfer_recv_context,
136 u32 buffer,
137 unsigned int nbytes,
138 unsigned int transfer_id,
139 unsigned int flags);
140
141 unsigned int src_sz_max;
142 struct ce_ring_state *src_ring;
143 struct ce_ring_state *dest_ring;
144};
145 117
146struct ce_sendlist_item { 118 void (*send_cb)(struct ath10k_ce_pipe *);
147 /* e.g. buffer or desc list */ 119 void (*recv_cb)(struct ath10k_ce_pipe *);
148 dma_addr_t data;
149 union {
150 /* simple buffer */
151 unsigned int nbytes;
152 /* Rx descriptor list */
153 unsigned int ndesc;
154 } u;
155 /* externally-specified flags; OR-ed with internal flags */
156 u32 flags;
157};
158 120
159struct ce_sendlist { 121 unsigned int src_sz_max;
160 unsigned int num_items; 122 struct ath10k_ce_ring *src_ring;
161 struct ce_sendlist_item item[CE_SENDLIST_ITEMS_MAX]; 123 struct ath10k_ce_ring *dest_ring;
162}; 124};
163 125
164/* Copy Engine settable attributes */ 126/* Copy Engine settable attributes */
@@ -182,7 +144,7 @@ struct ce_attr;
182 * 144 *
183 * Implementation note: pushes 1 buffer to Source ring 145 * Implementation note: pushes 1 buffer to Source ring
184 */ 146 */
185int ath10k_ce_send(struct ce_state *ce_state, 147int ath10k_ce_send(struct ath10k_ce_pipe *ce_state,
186 void *per_transfer_send_context, 148 void *per_transfer_send_context,
187 u32 buffer, 149 u32 buffer,
188 unsigned int nbytes, 150 unsigned int nbytes,
@@ -190,36 +152,11 @@ int ath10k_ce_send(struct ce_state *ce_state,
190 unsigned int transfer_id, 152 unsigned int transfer_id,
191 unsigned int flags); 153 unsigned int flags);
192 154
193void ath10k_ce_send_cb_register(struct ce_state *ce_state, 155void ath10k_ce_send_cb_register(struct ath10k_ce_pipe *ce_state,
194 void (*send_cb) (struct ce_state *ce_state, 156 void (*send_cb)(struct ath10k_ce_pipe *),
195 void *transfer_context,
196 u32 buffer,
197 unsigned int nbytes,
198 unsigned int transfer_id),
199 int disable_interrupts); 157 int disable_interrupts);
200 158
201/* Append a simple buffer (address/length) to a sendlist. */ 159int ath10k_ce_num_free_src_entries(struct ath10k_ce_pipe *pipe);
202void ath10k_ce_sendlist_buf_add(struct ce_sendlist *sendlist,
203 u32 buffer,
204 unsigned int nbytes,
205 /* OR-ed with internal flags */
206 u32 flags);
207
208/*
209 * Queue a "sendlist" of buffers to be sent using gather to a single
210 * anonymous destination buffer
211 * ce - which copy engine to use
212 * sendlist - list of simple buffers to send using gather
213 * transfer_id - arbitrary ID; reflected to destination
214 * Returns 0 on success; otherwise an error status.
215 *
216 * Implemenation note: Pushes multiple buffers with Gather to Source ring.
217 */
218int ath10k_ce_sendlist_send(struct ce_state *ce_state,
219 void *per_transfer_send_context,
220 struct ce_sendlist *sendlist,
221 /* 14 bits */
222 unsigned int transfer_id);
223 160
224/*==================Recv=======================*/ 161/*==================Recv=======================*/
225 162
@@ -233,17 +170,12 @@ int ath10k_ce_sendlist_send(struct ce_state *ce_state,
233 * 170 *
234 * Implemenation note: Pushes a buffer to Dest ring. 171 * Implemenation note: Pushes a buffer to Dest ring.
235 */ 172 */
236int ath10k_ce_recv_buf_enqueue(struct ce_state *ce_state, 173int ath10k_ce_recv_buf_enqueue(struct ath10k_ce_pipe *ce_state,
237 void *per_transfer_recv_context, 174 void *per_transfer_recv_context,
238 u32 buffer); 175 u32 buffer);
239 176
240void ath10k_ce_recv_cb_register(struct ce_state *ce_state, 177void ath10k_ce_recv_cb_register(struct ath10k_ce_pipe *ce_state,
241 void (*recv_cb) (struct ce_state *ce_state, 178 void (*recv_cb)(struct ath10k_ce_pipe *));
242 void *transfer_context,
243 u32 buffer,
244 unsigned int nbytes,
245 unsigned int transfer_id,
246 unsigned int flags));
247 179
248/* recv flags */ 180/* recv flags */
249/* Data is byte-swapped */ 181/* Data is byte-swapped */
@@ -253,7 +185,7 @@ void ath10k_ce_recv_cb_register(struct ce_state *ce_state,
253 * Supply data for the next completed unprocessed receive descriptor. 185 * Supply data for the next completed unprocessed receive descriptor.
254 * Pops buffer from Dest ring. 186 * Pops buffer from Dest ring.
255 */ 187 */
256int ath10k_ce_completed_recv_next(struct ce_state *ce_state, 188int ath10k_ce_completed_recv_next(struct ath10k_ce_pipe *ce_state,
257 void **per_transfer_contextp, 189 void **per_transfer_contextp,
258 u32 *bufferp, 190 u32 *bufferp,
259 unsigned int *nbytesp, 191 unsigned int *nbytesp,
@@ -263,7 +195,7 @@ int ath10k_ce_completed_recv_next(struct ce_state *ce_state,
263 * Supply data for the next completed unprocessed send descriptor. 195 * Supply data for the next completed unprocessed send descriptor.
264 * Pops 1 completed send buffer from Source ring. 196 * Pops 1 completed send buffer from Source ring.
265 */ 197 */
266int ath10k_ce_completed_send_next(struct ce_state *ce_state, 198int ath10k_ce_completed_send_next(struct ath10k_ce_pipe *ce_state,
267 void **per_transfer_contextp, 199 void **per_transfer_contextp,
268 u32 *bufferp, 200 u32 *bufferp,
269 unsigned int *nbytesp, 201 unsigned int *nbytesp,
@@ -272,7 +204,7 @@ int ath10k_ce_completed_send_next(struct ce_state *ce_state,
272/*==================CE Engine Initialization=======================*/ 204/*==================CE Engine Initialization=======================*/
273 205
274/* Initialize an instance of a CE */ 206/* Initialize an instance of a CE */
275struct ce_state *ath10k_ce_init(struct ath10k *ar, 207struct ath10k_ce_pipe *ath10k_ce_init(struct ath10k *ar,
276 unsigned int ce_id, 208 unsigned int ce_id,
277 const struct ce_attr *attr); 209 const struct ce_attr *attr);
278 210
@@ -282,7 +214,7 @@ struct ce_state *ath10k_ce_init(struct ath10k *ar,
282 * receive buffers. Target DMA must be stopped before using 214 * receive buffers. Target DMA must be stopped before using
283 * this API. 215 * this API.
284 */ 216 */
285int ath10k_ce_revoke_recv_next(struct ce_state *ce_state, 217int ath10k_ce_revoke_recv_next(struct ath10k_ce_pipe *ce_state,
286 void **per_transfer_contextp, 218 void **per_transfer_contextp,
287 u32 *bufferp); 219 u32 *bufferp);
288 220
@@ -291,13 +223,13 @@ int ath10k_ce_revoke_recv_next(struct ce_state *ce_state,
291 * pending sends. Target DMA must be stopped before using 223 * pending sends. Target DMA must be stopped before using
292 * this API. 224 * this API.
293 */ 225 */
294int ath10k_ce_cancel_send_next(struct ce_state *ce_state, 226int ath10k_ce_cancel_send_next(struct ath10k_ce_pipe *ce_state,
295 void **per_transfer_contextp, 227 void **per_transfer_contextp,
296 u32 *bufferp, 228 u32 *bufferp,
297 unsigned int *nbytesp, 229 unsigned int *nbytesp,
298 unsigned int *transfer_idp); 230 unsigned int *transfer_idp);
299 231
300void ath10k_ce_deinit(struct ce_state *ce_state); 232void ath10k_ce_deinit(struct ath10k_ce_pipe *ce_state);
301 233
302/*==================CE Interrupt Handlers====================*/ 234/*==================CE Interrupt Handlers====================*/
303void ath10k_ce_per_engine_service_any(struct ath10k *ar); 235void ath10k_ce_per_engine_service_any(struct ath10k *ar);
@@ -322,9 +254,6 @@ struct ce_attr {
322 /* CE_ATTR_* values */ 254 /* CE_ATTR_* values */
323 unsigned int flags; 255 unsigned int flags;
324 256
325 /* currently not in use */
326 unsigned int priority;
327
328 /* #entries in source ring - Must be a power of 2 */ 257 /* #entries in source ring - Must be a power of 2 */
329 unsigned int src_nentries; 258 unsigned int src_nentries;
330 259
@@ -336,21 +265,8 @@ struct ce_attr {
336 265
337 /* #entries in destination ring - Must be a power of 2 */ 266 /* #entries in destination ring - Must be a power of 2 */
338 unsigned int dest_nentries; 267 unsigned int dest_nentries;
339
340 /* Future use */
341 void *reserved;
342}; 268};
343 269
344/*
345 * When using sendlist_send to transfer multiple buffer fragments, the
346 * transfer context of each fragment, except last one, will be filled
347 * with CE_SENDLIST_ITEM_CTXT. ce_completed_send will return success for
348 * each fragment done with send and the transfer context would be
349 * CE_SENDLIST_ITEM_CTXT. Upper layer could use this to identify the
350 * status of a send completion.
351 */
352#define CE_SENDLIST_ITEM_CTXT ((void *)0xcecebeef)
353
354#define SR_BA_ADDRESS 0x0000 270#define SR_BA_ADDRESS 0x0000
355#define SR_SIZE_ADDRESS 0x0004 271#define SR_SIZE_ADDRESS 0x0004
356#define DR_BA_ADDRESS 0x0008 272#define DR_BA_ADDRESS 0x0008
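Note on the callback change above: the recv callback shrinks from a per-buffer notification to a bare pipe pointer, so handlers are now expected to drain completions themselves. A minimal sketch of such a handler, assuming the remaining out-parameters of ath10k_ce_completed_recv_next() mirror the old callback arguments (transfer_id and flags):

static void example_ce_recv_cb(struct ath10k_ce_pipe *ce_state)
{
	void *ctx;
	u32 paddr;
	unsigned int nbytes, id, flags;

	/* pop every completed receive descriptor off the dest ring */
	while (ath10k_ce_completed_recv_next(ce_state, &ctx, &paddr,
					     &nbytes, &id, &flags) == 0) {
		/* hand ctx/nbytes to the upper layer, then re-post the
		 * buffer with ath10k_ce_recv_buf_enqueue() */
	}
}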
diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
index 7226c23b9569..1129994fb105 100644
--- a/drivers/net/wireless/ath/ath10k/core.c
+++ b/drivers/net/wireless/ath/ath10k/core.c
@@ -39,17 +39,6 @@ MODULE_PARM_DESC(p2p, "Enable ath10k P2P support");
39 39
40static const struct ath10k_hw_params ath10k_hw_params_list[] = { 40static const struct ath10k_hw_params ath10k_hw_params_list[] = {
41 { 41 {
42 .id = QCA988X_HW_1_0_VERSION,
43 .name = "qca988x hw1.0",
44 .patch_load_addr = QCA988X_HW_1_0_PATCH_LOAD_ADDR,
45 .fw = {
46 .dir = QCA988X_HW_1_0_FW_DIR,
47 .fw = QCA988X_HW_1_0_FW_FILE,
48 .otp = QCA988X_HW_1_0_OTP_FILE,
49 .board = QCA988X_HW_1_0_BOARD_DATA_FILE,
50 },
51 },
52 {
53 .id = QCA988X_HW_2_0_VERSION, 42 .id = QCA988X_HW_2_0_VERSION,
54 .name = "qca988x hw2.0", 43 .name = "qca988x hw2.0",
55 .patch_load_addr = QCA988X_HW_2_0_PATCH_LOAD_ADDR, 44 .patch_load_addr = QCA988X_HW_2_0_PATCH_LOAD_ADDR,
@@ -64,33 +53,12 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
64 53
65static void ath10k_send_suspend_complete(struct ath10k *ar) 54static void ath10k_send_suspend_complete(struct ath10k *ar)
66{ 55{
67 ath10k_dbg(ATH10K_DBG_CORE, "%s\n", __func__); 56 ath10k_dbg(ATH10K_DBG_BOOT, "boot suspend complete\n");
68 57
69 ar->is_target_paused = true; 58 ar->is_target_paused = true;
70 wake_up(&ar->event_queue); 59 wake_up(&ar->event_queue);
71} 60}
72 61
73static int ath10k_check_fw_version(struct ath10k *ar)
74{
75 char version[32];
76
77 if (ar->fw_version_major >= SUPPORTED_FW_MAJOR &&
78 ar->fw_version_minor >= SUPPORTED_FW_MINOR &&
79 ar->fw_version_release >= SUPPORTED_FW_RELEASE &&
80 ar->fw_version_build >= SUPPORTED_FW_BUILD)
81 return 0;
82
83 snprintf(version, sizeof(version), "%u.%u.%u.%u",
84 SUPPORTED_FW_MAJOR, SUPPORTED_FW_MINOR,
85 SUPPORTED_FW_RELEASE, SUPPORTED_FW_BUILD);
86
87 ath10k_warn("WARNING: Firmware version %s is not officially supported.\n",
88 ar->hw->wiphy->fw_version);
89 ath10k_warn("Please upgrade to version %s (or newer)\n", version);
90
91 return 0;
92}
93
94static int ath10k_init_connect_htc(struct ath10k *ar) 62static int ath10k_init_connect_htc(struct ath10k *ar)
95{ 63{
96 int status; 64 int status;
@@ -112,7 +80,7 @@ static int ath10k_init_connect_htc(struct ath10k *ar)
112 goto timeout; 80 goto timeout;
113 } 81 }
114 82
115 ath10k_dbg(ATH10K_DBG_CORE, "core wmi ready\n"); 83 ath10k_dbg(ATH10K_DBG_BOOT, "boot wmi ready\n");
116 return 0; 84 return 0;
117 85
118timeout: 86timeout:
@@ -200,8 +168,7 @@ static const struct firmware *ath10k_fetch_fw_file(struct ath10k *ar,
200 return fw; 168 return fw;
201} 169}
202 170
203static int ath10k_push_board_ext_data(struct ath10k *ar, 171static int ath10k_push_board_ext_data(struct ath10k *ar)
204 const struct firmware *fw)
205{ 172{
206 u32 board_data_size = QCA988X_BOARD_DATA_SZ; 173 u32 board_data_size = QCA988X_BOARD_DATA_SZ;
207 u32 board_ext_data_size = QCA988X_BOARD_EXT_DATA_SZ; 174 u32 board_ext_data_size = QCA988X_BOARD_EXT_DATA_SZ;
@@ -214,21 +181,21 @@ static int ath10k_push_board_ext_data(struct ath10k *ar,
214 return ret; 181 return ret;
215 } 182 }
216 183
217 ath10k_dbg(ATH10K_DBG_CORE, 184 ath10k_dbg(ATH10K_DBG_BOOT,
218 "ath10k: Board extended Data download addr: 0x%x\n", 185 "boot push board extended data addr 0x%x\n",
219 board_ext_data_addr); 186 board_ext_data_addr);
220 187
221 if (board_ext_data_addr == 0) 188 if (board_ext_data_addr == 0)
222 return 0; 189 return 0;
223 190
224 if (fw->size != (board_data_size + board_ext_data_size)) { 191 if (ar->board_len != (board_data_size + board_ext_data_size)) {
225 ath10k_err("invalid board (ext) data sizes %zu != %d+%d\n", 192 ath10k_err("invalid board (ext) data sizes %zu != %d+%d\n",
226 fw->size, board_data_size, board_ext_data_size); 193 ar->board_len, board_data_size, board_ext_data_size);
227 return -EINVAL; 194 return -EINVAL;
228 } 195 }
229 196
230 ret = ath10k_bmi_write_memory(ar, board_ext_data_addr, 197 ret = ath10k_bmi_write_memory(ar, board_ext_data_addr,
231 fw->data + board_data_size, 198 ar->board_data + board_data_size,
232 board_ext_data_size); 199 board_ext_data_size);
233 if (ret) { 200 if (ret) {
234 ath10k_err("could not write board ext data (%d)\n", ret); 201 ath10k_err("could not write board ext data (%d)\n", ret);
@@ -247,12 +214,11 @@ static int ath10k_push_board_ext_data(struct ath10k *ar,
247 214
248static int ath10k_download_board_data(struct ath10k *ar) 215static int ath10k_download_board_data(struct ath10k *ar)
249{ 216{
250 const struct firmware *fw = ar->board_data;
251 u32 board_data_size = QCA988X_BOARD_DATA_SZ; 217 u32 board_data_size = QCA988X_BOARD_DATA_SZ;
252 u32 address; 218 u32 address;
253 int ret; 219 int ret;
254 220
255 ret = ath10k_push_board_ext_data(ar, fw); 221 ret = ath10k_push_board_ext_data(ar);
256 if (ret) { 222 if (ret) {
257 ath10k_err("could not push board ext data (%d)\n", ret); 223 ath10k_err("could not push board ext data (%d)\n", ret);
258 goto exit; 224 goto exit;
@@ -264,8 +230,9 @@ static int ath10k_download_board_data(struct ath10k *ar)
264 goto exit; 230 goto exit;
265 } 231 }
266 232
267 ret = ath10k_bmi_write_memory(ar, address, fw->data, 233 ret = ath10k_bmi_write_memory(ar, address, ar->board_data,
268 min_t(u32, board_data_size, fw->size)); 234 min_t(u32, board_data_size,
235 ar->board_len));
269 if (ret) { 236 if (ret) {
270 ath10k_err("could not write board data (%d)\n", ret); 237 ath10k_err("could not write board data (%d)\n", ret);
271 goto exit; 238 goto exit;
@@ -283,17 +250,16 @@ exit:
283 250
284static int ath10k_download_and_run_otp(struct ath10k *ar) 251static int ath10k_download_and_run_otp(struct ath10k *ar)
285{ 252{
286 const struct firmware *fw = ar->otp;
287 u32 address = ar->hw_params.patch_load_addr; 253 u32 address = ar->hw_params.patch_load_addr;
288 u32 exec_param; 254 u32 exec_param;
289 int ret; 255 int ret;
290 256
291 /* OTP is optional */ 257 /* OTP is optional */
292 258
293 if (!ar->otp) 259 if (!ar->otp_data || !ar->otp_len)
294 return 0; 260 return 0;
295 261
296 ret = ath10k_bmi_fast_download(ar, address, fw->data, fw->size); 262 ret = ath10k_bmi_fast_download(ar, address, ar->otp_data, ar->otp_len);
297 if (ret) { 263 if (ret) {
298 ath10k_err("could not write otp (%d)\n", ret); 264 ath10k_err("could not write otp (%d)\n", ret);
299 goto exit; 265 goto exit;
@@ -312,13 +278,13 @@ exit:
312 278
313static int ath10k_download_fw(struct ath10k *ar) 279static int ath10k_download_fw(struct ath10k *ar)
314{ 280{
315 const struct firmware *fw = ar->firmware;
316 u32 address; 281 u32 address;
317 int ret; 282 int ret;
318 283
319 address = ar->hw_params.patch_load_addr; 284 address = ar->hw_params.patch_load_addr;
320 285
321 ret = ath10k_bmi_fast_download(ar, address, fw->data, fw->size); 286 ret = ath10k_bmi_fast_download(ar, address, ar->firmware_data,
287 ar->firmware_len);
322 if (ret) { 288 if (ret) {
323 ath10k_err("could not write fw (%d)\n", ret); 289 ath10k_err("could not write fw (%d)\n", ret);
324 goto exit; 290 goto exit;
@@ -330,8 +296,8 @@ exit:
330 296
331static void ath10k_core_free_firmware_files(struct ath10k *ar) 297static void ath10k_core_free_firmware_files(struct ath10k *ar)
332{ 298{
333 if (ar->board_data && !IS_ERR(ar->board_data)) 299 if (ar->board && !IS_ERR(ar->board))
334 release_firmware(ar->board_data); 300 release_firmware(ar->board);
335 301
336 if (ar->otp && !IS_ERR(ar->otp)) 302 if (ar->otp && !IS_ERR(ar->otp))
337 release_firmware(ar->otp); 303 release_firmware(ar->otp);
@@ -339,12 +305,20 @@ static void ath10k_core_free_firmware_files(struct ath10k *ar)
339 if (ar->firmware && !IS_ERR(ar->firmware)) 305 if (ar->firmware && !IS_ERR(ar->firmware))
340 release_firmware(ar->firmware); 306 release_firmware(ar->firmware);
341 307
308 ar->board = NULL;
342 ar->board_data = NULL; 309 ar->board_data = NULL;
310 ar->board_len = 0;
311
343 ar->otp = NULL; 312 ar->otp = NULL;
313 ar->otp_data = NULL;
314 ar->otp_len = 0;
315
344 ar->firmware = NULL; 316 ar->firmware = NULL;
317 ar->firmware_data = NULL;
318 ar->firmware_len = 0;
345} 319}
346 320
347static int ath10k_core_fetch_firmware_files(struct ath10k *ar) 321static int ath10k_core_fetch_firmware_api_1(struct ath10k *ar)
348{ 322{
349 int ret = 0; 323 int ret = 0;
350 324
@@ -358,15 +332,18 @@ static int ath10k_core_fetch_firmware_files(struct ath10k *ar)
358 return -EINVAL; 332 return -EINVAL;
359 } 333 }
360 334
361 ar->board_data = ath10k_fetch_fw_file(ar, 335 ar->board = ath10k_fetch_fw_file(ar,
362 ar->hw_params.fw.dir, 336 ar->hw_params.fw.dir,
363 ar->hw_params.fw.board); 337 ar->hw_params.fw.board);
364 if (IS_ERR(ar->board_data)) { 338 if (IS_ERR(ar->board)) {
365 ret = PTR_ERR(ar->board_data); 339 ret = PTR_ERR(ar->board);
366 ath10k_err("could not fetch board data (%d)\n", ret); 340 ath10k_err("could not fetch board data (%d)\n", ret);
367 goto err; 341 goto err;
368 } 342 }
369 343
344 ar->board_data = ar->board->data;
345 ar->board_len = ar->board->size;
346
370 ar->firmware = ath10k_fetch_fw_file(ar, 347 ar->firmware = ath10k_fetch_fw_file(ar,
371 ar->hw_params.fw.dir, 348 ar->hw_params.fw.dir,
372 ar->hw_params.fw.fw); 349 ar->hw_params.fw.fw);
@@ -376,6 +353,9 @@ static int ath10k_core_fetch_firmware_files(struct ath10k *ar)
376 goto err; 353 goto err;
377 } 354 }
378 355
356 ar->firmware_data = ar->firmware->data;
357 ar->firmware_len = ar->firmware->size;
358
379 /* OTP may be undefined. If so, don't fetch it at all */ 359 /* OTP may be undefined. If so, don't fetch it at all */
380 if (ar->hw_params.fw.otp == NULL) 360 if (ar->hw_params.fw.otp == NULL)
381 return 0; 361 return 0;
@@ -389,6 +369,172 @@ static int ath10k_core_fetch_firmware_files(struct ath10k *ar)
389 goto err; 369 goto err;
390 } 370 }
391 371
372 ar->otp_data = ar->otp->data;
373 ar->otp_len = ar->otp->size;
374
375 return 0;
376
377err:
378 ath10k_core_free_firmware_files(ar);
379 return ret;
380}
381
382static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
383{
384 size_t magic_len, len, ie_len;
385 int ie_id, i, index, bit, ret;
386 struct ath10k_fw_ie *hdr;
387 const u8 *data;
388 __le32 *timestamp;
389
390 /* first fetch the firmware file (firmware-*.bin) */
391 ar->firmware = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir, name);
392 if (IS_ERR(ar->firmware)) {
393 ath10k_err("Could not fetch firmware file '%s': %ld\n",
394 name, PTR_ERR(ar->firmware));
395 return PTR_ERR(ar->firmware);
396 }
397
398 data = ar->firmware->data;
399 len = ar->firmware->size;
400
401 /* magic also includes the null byte, check that as well */
402 magic_len = strlen(ATH10K_FIRMWARE_MAGIC) + 1;
403
404 if (len < magic_len) {
405 ath10k_err("firmware image too small to contain magic: %zu\n",
406 len);
407 ret = -EINVAL;
408 goto err;
409 }
410
411 if (memcmp(data, ATH10K_FIRMWARE_MAGIC, magic_len) != 0) {
412 ath10k_err("Invalid firmware magic\n");
413 ret = -EINVAL;
414 goto err;
415 }
416
417 /* jump over the padding */
418 magic_len = ALIGN(magic_len, 4);
419
420 len -= magic_len;
421 data += magic_len;
422
423 /* loop elements */
424 while (len > sizeof(struct ath10k_fw_ie)) {
425 hdr = (struct ath10k_fw_ie *)data;
426
427 ie_id = le32_to_cpu(hdr->id);
428 ie_len = le32_to_cpu(hdr->len);
429
430 len -= sizeof(*hdr);
431 data += sizeof(*hdr);
432
433 if (len < ie_len) {
434 ath10k_err("Invalid length for FW IE %d (%zu < %zu)\n",
435 ie_id, len, ie_len);
436 ret = -EINVAL;
437 goto err;
438 }
439
440 switch (ie_id) {
441 case ATH10K_FW_IE_FW_VERSION:
442 if (ie_len > sizeof(ar->hw->wiphy->fw_version) - 1)
443 break;
444
445 memcpy(ar->hw->wiphy->fw_version, data, ie_len);
446 ar->hw->wiphy->fw_version[ie_len] = '\0';
447
448 ath10k_dbg(ATH10K_DBG_BOOT,
449 "found fw version %s\n",
450 ar->hw->wiphy->fw_version);
451 break;
452 case ATH10K_FW_IE_TIMESTAMP:
453 if (ie_len != sizeof(u32))
454 break;
455
456 timestamp = (__le32 *)data;
457
458 ath10k_dbg(ATH10K_DBG_BOOT, "found fw timestamp %d\n",
459 le32_to_cpup(timestamp));
460 break;
461 case ATH10K_FW_IE_FEATURES:
462 ath10k_dbg(ATH10K_DBG_BOOT,
463 "found firmware features ie (%zd B)\n",
464 ie_len);
465
466 for (i = 0; i < ATH10K_FW_FEATURE_COUNT; i++) {
467 index = i / 8;
468 bit = i % 8;
469
470 if (index == ie_len)
471 break;
472
473 if (data[index] & (1 << bit))
474 __set_bit(i, ar->fw_features);
475 }
476
477 ath10k_dbg_dump(ATH10K_DBG_BOOT, "features", "",
478 ar->fw_features,
479 sizeof(ar->fw_features));
480 break;
481 case ATH10K_FW_IE_FW_IMAGE:
482 ath10k_dbg(ATH10K_DBG_BOOT,
483 "found fw image ie (%zd B)\n",
484 ie_len);
485
486 ar->firmware_data = data;
487 ar->firmware_len = ie_len;
488
489 break;
490 case ATH10K_FW_IE_OTP_IMAGE:
491 ath10k_dbg(ATH10K_DBG_BOOT,
492 "found otp image ie (%zd B)\n",
493 ie_len);
494
495 ar->otp_data = data;
496 ar->otp_len = ie_len;
497
498 break;
499 default:
500 ath10k_warn("Unknown FW IE: %u\n",
501 le32_to_cpu(hdr->id));
502 break;
503 }
504
505 /* jump over the padding */
506 ie_len = ALIGN(ie_len, 4);
507
508 len -= ie_len;
509 data += ie_len;
510 }
511
512 if (!ar->firmware_data || !ar->firmware_len) {
 513 ath10k_warn("No ATH10K_FW_IE_FW_IMAGE found in %s, skipping\n",
514 name);
515 ret = -ENOMEDIUM;
516 goto err;
517 }
518
519 /* now fetch the board file */
520 if (ar->hw_params.fw.board == NULL) {
 521 ath10k_err("board data file not defined\n");
522 ret = -EINVAL;
523 goto err;
524 }
525
526 ar->board = ath10k_fetch_fw_file(ar,
527 ar->hw_params.fw.dir,
528 ar->hw_params.fw.board);
529 if (IS_ERR(ar->board)) {
530 ret = PTR_ERR(ar->board);
531 ath10k_err("could not fetch board data (%d)\n", ret);
532 goto err;
533 }
534
535 ar->board_data = ar->board->data;
536 ar->board_len = ar->board->size;
537
392 return 0; 538 return 0;
393 539
394err: 540err:
@@ -396,6 +542,28 @@ err:
396 return ret; 542 return ret;
397} 543}
398 544
545static int ath10k_core_fetch_firmware_files(struct ath10k *ar)
546{
547 int ret;
548
549 ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API2_FILE);
550 if (ret == 0) {
551 ar->fw_api = 2;
552 goto out;
553 }
554
555 ret = ath10k_core_fetch_firmware_api_1(ar);
556 if (ret)
557 return ret;
558
559 ar->fw_api = 1;
560
561out:
562 ath10k_dbg(ATH10K_DBG_BOOT, "using fw api %d\n", ar->fw_api);
563
564 return 0;
565}
566
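The API 2 parser above implies a simple TLV container: a NUL-terminated magic string padded to 4 bytes, followed by a sequence of IEs, each a small header plus a 4-byte-aligned payload. A sketch of the layout as the loop consumes it (the exact field order of struct ath10k_fw_ie and the magic string's value are defined in headers outside this diff):

struct ath10k_fw_ie {
	__le32 id;	/* ATH10K_FW_IE_FW_VERSION, _TIMESTAMP, ... */
	__le32 len;	/* payload length, header excluded */
	u8 data[0];	/* payload, padded to a 4-byte boundary */
};

/*
 * firmware-2.bin:
 *	[ magic string + '\0', padded to ALIGN(magic_len, 4) ]
 *	[ struct ath10k_fw_ie ][ payload, ALIGN(ie_len, 4) ]	(repeated)
 */

Unknown IE ids only trigger a warning, so new IEs can be added without breaking older drivers; the one hard failure is a missing FW_IMAGE IE (-ENOMEDIUM), which makes ath10k_core_fetch_firmware_files() fall back to the API 1 files.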
399static int ath10k_init_download_firmware(struct ath10k *ar) 567static int ath10k_init_download_firmware(struct ath10k *ar)
400{ 568{
401 int ret; 569 int ret;
@@ -446,6 +614,13 @@ static int ath10k_init_uart(struct ath10k *ar)
446 return ret; 614 return ret;
447 } 615 }
448 616
617 /* Set the UART baud rate to 19200. */
618 ret = ath10k_bmi_write32(ar, hi_desired_baud_rate, 19200);
619 if (ret) {
620 ath10k_warn("could not set the baud rate (%d)\n", ret);
621 return ret;
622 }
623
449 ath10k_info("UART prints enabled\n"); 624 ath10k_info("UART prints enabled\n");
450 return 0; 625 return 0;
451} 626}
@@ -545,6 +720,9 @@ struct ath10k *ath10k_core_create(void *hif_priv, struct device *dev,
545 INIT_WORK(&ar->offchan_tx_work, ath10k_offchan_tx_work); 720 INIT_WORK(&ar->offchan_tx_work, ath10k_offchan_tx_work);
546 skb_queue_head_init(&ar->offchan_tx_queue); 721 skb_queue_head_init(&ar->offchan_tx_queue);
547 722
723 INIT_WORK(&ar->wmi_mgmt_tx_work, ath10k_mgmt_over_wmi_tx_work);
724 skb_queue_head_init(&ar->wmi_mgmt_tx_queue);
725
548 init_waitqueue_head(&ar->event_queue); 726 init_waitqueue_head(&ar->event_queue);
549 727
550 INIT_WORK(&ar->restart_work, ath10k_core_restart); 728 INIT_WORK(&ar->restart_work, ath10k_core_restart);
@@ -559,6 +737,8 @@ EXPORT_SYMBOL(ath10k_core_create);
559 737
560void ath10k_core_destroy(struct ath10k *ar) 738void ath10k_core_destroy(struct ath10k *ar)
561{ 739{
740 ath10k_debug_destroy(ar);
741
562 flush_workqueue(ar->workqueue); 742 flush_workqueue(ar->workqueue);
563 destroy_workqueue(ar->workqueue); 743 destroy_workqueue(ar->workqueue);
564 744
@@ -570,6 +750,8 @@ int ath10k_core_start(struct ath10k *ar)
570{ 750{
571 int status; 751 int status;
572 752
753 lockdep_assert_held(&ar->conf_mutex);
754
573 ath10k_bmi_start(ar); 755 ath10k_bmi_start(ar);
574 756
575 if (ath10k_init_configure_target(ar)) { 757 if (ath10k_init_configure_target(ar)) {
@@ -620,10 +802,6 @@ int ath10k_core_start(struct ath10k *ar)
620 802
621 ath10k_info("firmware %s booted\n", ar->hw->wiphy->fw_version); 803 ath10k_info("firmware %s booted\n", ar->hw->wiphy->fw_version);
622 804
623 status = ath10k_check_fw_version(ar);
624 if (status)
625 goto err_disconnect_htc;
626
627 status = ath10k_wmi_cmd_init(ar); 805 status = ath10k_wmi_cmd_init(ar);
628 if (status) { 806 if (status) {
629 ath10k_err("could not send WMI init command (%d)\n", status); 807 ath10k_err("could not send WMI init command (%d)\n", status);
@@ -641,7 +819,12 @@ int ath10k_core_start(struct ath10k *ar)
641 if (status) 819 if (status)
642 goto err_disconnect_htc; 820 goto err_disconnect_htc;
643 821
822 status = ath10k_debug_start(ar);
823 if (status)
824 goto err_disconnect_htc;
825
644 ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1; 826 ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1;
827 INIT_LIST_HEAD(&ar->arvifs);
645 828
646 return 0; 829 return 0;
647 830
@@ -658,6 +841,9 @@ EXPORT_SYMBOL(ath10k_core_start);
658 841
659void ath10k_core_stop(struct ath10k *ar) 842void ath10k_core_stop(struct ath10k *ar)
660{ 843{
844 lockdep_assert_held(&ar->conf_mutex);
845
846 ath10k_debug_stop(ar);
661 ath10k_htc_stop(&ar->htc); 847 ath10k_htc_stop(&ar->htc);
662 ath10k_htt_detach(&ar->htt); 848 ath10k_htt_detach(&ar->htt);
663 ath10k_wmi_detach(ar); 849 ath10k_wmi_detach(ar);
@@ -704,23 +890,65 @@ static int ath10k_core_probe_fw(struct ath10k *ar)
704 return ret; 890 return ret;
705 } 891 }
706 892
893 mutex_lock(&ar->conf_mutex);
894
707 ret = ath10k_core_start(ar); 895 ret = ath10k_core_start(ar);
708 if (ret) { 896 if (ret) {
709 ath10k_err("could not init core (%d)\n", ret); 897 ath10k_err("could not init core (%d)\n", ret);
710 ath10k_core_free_firmware_files(ar); 898 ath10k_core_free_firmware_files(ar);
711 ath10k_hif_power_down(ar); 899 ath10k_hif_power_down(ar);
900 mutex_unlock(&ar->conf_mutex);
712 return ret; 901 return ret;
713 } 902 }
714 903
715 ath10k_core_stop(ar); 904 ath10k_core_stop(ar);
905
906 mutex_unlock(&ar->conf_mutex);
907
716 ath10k_hif_power_down(ar); 908 ath10k_hif_power_down(ar);
717 return 0; 909 return 0;
718} 910}
719 911
720int ath10k_core_register(struct ath10k *ar) 912static int ath10k_core_check_chip_id(struct ath10k *ar)
913{
914 u32 hw_revision = MS(ar->chip_id, SOC_CHIP_ID_REV);
915
916 ath10k_dbg(ATH10K_DBG_BOOT, "boot chip_id 0x%08x hw_revision 0x%x\n",
917 ar->chip_id, hw_revision);
918
 919 /* Check that we are not using hw1.0 (some of them have the same PCI
 920 * id as hw2.0) before doing anything else, as ath10k crashes horribly
921 * due to missing hw1.0 workarounds. */
922 switch (hw_revision) {
923 case QCA988X_HW_1_0_CHIP_ID_REV:
924 ath10k_err("ERROR: qca988x hw1.0 is not supported\n");
925 return -EOPNOTSUPP;
926
927 case QCA988X_HW_2_0_CHIP_ID_REV:
928 /* known hardware revision, continue normally */
929 return 0;
930
931 default:
932 ath10k_warn("Warning: hardware revision unknown (0x%x), expect problems\n",
933 ar->chip_id);
934 return 0;
935 }
936
937 return 0;
938}
939
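ath10k_core_check_chip_id() pulls the revision field out of the chip id with the driver's mask/shift helper. A sketch of what that expands to, with placeholder mask/shift values since the real SOC_CHIP_ID_REV definitions are outside this diff:

#define SOC_CHIP_ID_REV_LSB	8		/* placeholder */
#define SOC_CHIP_ID_REV_MASK	0x00000f00	/* placeholder */
#define MS(_v, _f) (((_v) & _f##_MASK) >> _f##_LSB)

	/* with these values, chip_id 0x043202ff -> hw_revision 0x2 */
	u32 hw_revision = MS(ar->chip_id, SOC_CHIP_ID_REV);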
940int ath10k_core_register(struct ath10k *ar, u32 chip_id)
721{ 941{
722 int status; 942 int status;
723 943
944 ar->chip_id = chip_id;
945
946 status = ath10k_core_check_chip_id(ar);
947 if (status) {
948 ath10k_err("Unsupported chip id 0x%08x\n", ar->chip_id);
949 return status;
950 }
951
724 status = ath10k_core_probe_fw(ar); 952 status = ath10k_core_probe_fw(ar);
725 if (status) { 953 if (status) {
726 ath10k_err("could not probe fw (%d)\n", status); 954 ath10k_err("could not probe fw (%d)\n", status);
@@ -755,6 +983,7 @@ void ath10k_core_unregister(struct ath10k *ar)
755 * Otherwise we will fail to submit commands to FW and mac80211 will be 983 * Otherwise we will fail to submit commands to FW and mac80211 will be
756 * unhappy about callback failures. */ 984 * unhappy about callback failures. */
757 ath10k_mac_unregister(ar); 985 ath10k_mac_unregister(ar);
986
758 ath10k_core_free_firmware_files(ar); 987 ath10k_core_free_firmware_files(ar);
759} 988}
760EXPORT_SYMBOL(ath10k_core_unregister); 989EXPORT_SYMBOL(ath10k_core_unregister);
diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
index e4bba563ed42..0934f7633de3 100644
--- a/drivers/net/wireless/ath/ath10k/core.h
+++ b/drivers/net/wireless/ath/ath10k/core.h
@@ -43,27 +43,23 @@
43/* Antenna noise floor */ 43/* Antenna noise floor */
44#define ATH10K_DEFAULT_NOISE_FLOOR -95 44#define ATH10K_DEFAULT_NOISE_FLOOR -95
45 45
46#define ATH10K_MAX_NUM_MGMT_PENDING 16
47
46struct ath10k; 48struct ath10k;
47 49
48struct ath10k_skb_cb { 50struct ath10k_skb_cb {
49 dma_addr_t paddr; 51 dma_addr_t paddr;
50 bool is_mapped; 52 bool is_mapped;
51 bool is_aborted; 53 bool is_aborted;
54 u8 vdev_id;
52 55
53 struct { 56 struct {
54 u8 vdev_id;
55 u16 msdu_id;
56 u8 tid; 57 u8 tid;
57 bool is_offchan; 58 bool is_offchan;
58 bool is_conf;
59 bool discard;
60 bool no_ack;
61 u8 refcount;
62 struct sk_buff *txfrag;
63 struct sk_buff *msdu;
64 } __packed htt;
65 59
66 /* 4 bytes left on 64bit arch */ 60 u8 frag_len;
61 u8 pad_len;
62 } __packed htt;
67} __packed; 63} __packed;
68 64
69static inline struct ath10k_skb_cb *ATH10K_SKB_CB(struct sk_buff *skb) 65static inline struct ath10k_skb_cb *ATH10K_SKB_CB(struct sk_buff *skb)
@@ -108,15 +104,26 @@ struct ath10k_bmi {
108 bool done_sent; 104 bool done_sent;
109}; 105};
110 106
107#define ATH10K_MAX_MEM_REQS 16
108
109struct ath10k_mem_chunk {
110 void *vaddr;
111 dma_addr_t paddr;
112 u32 len;
113 u32 req_id;
114};
115
111struct ath10k_wmi { 116struct ath10k_wmi {
112 enum ath10k_htc_ep_id eid; 117 enum ath10k_htc_ep_id eid;
113 struct completion service_ready; 118 struct completion service_ready;
114 struct completion unified_ready; 119 struct completion unified_ready;
115 atomic_t pending_tx_count; 120 wait_queue_head_t tx_credits_wq;
116 wait_queue_head_t wq; 121 struct wmi_cmd_map *cmd;
122 struct wmi_vdev_param_map *vdev_param;
123 struct wmi_pdev_param_map *pdev_param;
117 124
118 struct sk_buff_head wmi_event_list; 125 u32 num_mem_chunks;
119 struct work_struct wmi_event_work; 126 struct ath10k_mem_chunk mem_chunks[ATH10K_MAX_MEM_REQS];
120}; 127};
121 128
122struct ath10k_peer_stat { 129struct ath10k_peer_stat {
@@ -198,17 +205,22 @@ struct ath10k_peer {
198#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5*HZ) 205#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5*HZ)
199 206
200struct ath10k_vif { 207struct ath10k_vif {
208 struct list_head list;
209
201 u32 vdev_id; 210 u32 vdev_id;
202 enum wmi_vdev_type vdev_type; 211 enum wmi_vdev_type vdev_type;
203 enum wmi_vdev_subtype vdev_subtype; 212 enum wmi_vdev_subtype vdev_subtype;
204 u32 beacon_interval; 213 u32 beacon_interval;
205 u32 dtim_period; 214 u32 dtim_period;
215 struct sk_buff *beacon;
206 216
207 struct ath10k *ar; 217 struct ath10k *ar;
208 struct ieee80211_vif *vif; 218 struct ieee80211_vif *vif;
209 219
220 struct work_struct wep_key_work;
210 struct ieee80211_key_conf *wep_keys[WMI_MAX_KEY_INDEX + 1]; 221 struct ieee80211_key_conf *wep_keys[WMI_MAX_KEY_INDEX + 1];
211 u8 def_wep_key_index; 222 u8 def_wep_key_idx;
223 u8 def_wep_key_newidx;
212 224
213 u16 tx_seq_no; 225 u16 tx_seq_no;
214 226
@@ -246,6 +258,9 @@ struct ath10k_debug {
246 u32 wmi_service_bitmap[WMI_SERVICE_BM_SIZE]; 258 u32 wmi_service_bitmap[WMI_SERVICE_BM_SIZE];
247 259
248 struct completion event_stats_compl; 260 struct completion event_stats_compl;
261
262 unsigned long htt_stats_mask;
263 struct delayed_work htt_stats_dwork;
249}; 264};
250 265
251enum ath10k_state { 266enum ath10k_state {
@@ -270,12 +285,27 @@ enum ath10k_state {
270 ATH10K_STATE_WEDGED, 285 ATH10K_STATE_WEDGED,
271}; 286};
272 287
288enum ath10k_fw_features {
289 /* wmi_mgmt_rx_hdr contains extra RSSI information */
290 ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX = 0,
291
292 /* firmware from 10X branch */
293 ATH10K_FW_FEATURE_WMI_10X = 1,
294
295 /* firmware support tx frame management over WMI, otherwise it's HTT */
296 ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX = 2,
297
298 /* keep last */
299 ATH10K_FW_FEATURE_COUNT,
300};
301
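A sketch of how call sites can consult the bitmap once firmware fetching has populated it (the branch bodies are illustrative only):

	if (test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX, ar->fw_features))
		/* queue the frame on ar->wmi_mgmt_tx_queue */;
	else
		/* send it via HTT_H2T_MSG_TYPE_MGMT_TX instead */;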
273struct ath10k { 302struct ath10k {
274 struct ath_common ath_common; 303 struct ath_common ath_common;
275 struct ieee80211_hw *hw; 304 struct ieee80211_hw *hw;
276 struct device *dev; 305 struct device *dev;
277 u8 mac_addr[ETH_ALEN]; 306 u8 mac_addr[ETH_ALEN];
278 307
308 u32 chip_id;
279 u32 target_version; 309 u32 target_version;
280 u8 fw_version_major; 310 u8 fw_version_major;
281 u32 fw_version_minor; 311 u32 fw_version_minor;
@@ -288,6 +318,8 @@ struct ath10k {
288 u32 vht_cap_info; 318 u32 vht_cap_info;
289 u32 num_rf_chains; 319 u32 num_rf_chains;
290 320
321 DECLARE_BITMAP(fw_features, ATH10K_FW_FEATURE_COUNT);
322
291 struct targetdef *targetdef; 323 struct targetdef *targetdef;
292 struct hostdef *hostdef; 324 struct hostdef *hostdef;
293 325
@@ -319,9 +351,19 @@ struct ath10k {
319 } fw; 351 } fw;
320 } hw_params; 352 } hw_params;
321 353
322 const struct firmware *board_data; 354 const struct firmware *board;
355 const void *board_data;
356 size_t board_len;
357
323 const struct firmware *otp; 358 const struct firmware *otp;
359 const void *otp_data;
360 size_t otp_len;
361
324 const struct firmware *firmware; 362 const struct firmware *firmware;
363 const void *firmware_data;
364 size_t firmware_len;
365
366 int fw_api;
325 367
326 struct { 368 struct {
327 struct completion started; 369 struct completion started;
@@ -364,6 +406,7 @@ struct ath10k {
364 /* protects shared structure data */ 406 /* protects shared structure data */
365 spinlock_t data_lock; 407 spinlock_t data_lock;
366 408
409 struct list_head arvifs;
367 struct list_head peers; 410 struct list_head peers;
368 wait_queue_head_t peer_mapping_wq; 411 wait_queue_head_t peer_mapping_wq;
369 412
@@ -372,6 +415,9 @@ struct ath10k {
372 struct completion offchan_tx_completed; 415 struct completion offchan_tx_completed;
373 struct sk_buff *offchan_tx_skb; 416 struct sk_buff *offchan_tx_skb;
374 417
418 struct work_struct wmi_mgmt_tx_work;
419 struct sk_buff_head wmi_mgmt_tx_queue;
420
375 enum ath10k_state state; 421 enum ath10k_state state;
376 422
377 struct work_struct restart_work; 423 struct work_struct restart_work;
@@ -393,7 +439,7 @@ void ath10k_core_destroy(struct ath10k *ar);
393 439
394int ath10k_core_start(struct ath10k *ar); 440int ath10k_core_start(struct ath10k *ar);
395void ath10k_core_stop(struct ath10k *ar); 441void ath10k_core_stop(struct ath10k *ar);
396int ath10k_core_register(struct ath10k *ar); 442int ath10k_core_register(struct ath10k *ar, u32 chip_id);
397void ath10k_core_unregister(struct ath10k *ar); 443void ath10k_core_unregister(struct ath10k *ar);
398 444
399#endif /* _CORE_H_ */ 445#endif /* _CORE_H_ */
diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
index 3d65594fa098..760ff2289e3c 100644
--- a/drivers/net/wireless/ath/ath10k/debug.c
+++ b/drivers/net/wireless/ath/ath10k/debug.c
@@ -21,6 +21,9 @@
21#include "core.h" 21#include "core.h"
22#include "debug.h" 22#include "debug.h"
23 23
24/* ms */
25#define ATH10K_DEBUG_HTT_STATS_INTERVAL 1000
26
24static int ath10k_printk(const char *level, const char *fmt, ...) 27static int ath10k_printk(const char *level, const char *fmt, ...)
25{ 28{
26 struct va_format vaf; 29 struct va_format vaf;
@@ -260,7 +263,6 @@ void ath10k_debug_read_target_stats(struct ath10k *ar,
260 } 263 }
261 264
262 spin_unlock_bh(&ar->data_lock); 265 spin_unlock_bh(&ar->data_lock);
263 mutex_unlock(&ar->conf_mutex);
264 complete(&ar->debug.event_stats_compl); 266 complete(&ar->debug.event_stats_compl);
265} 267}
266 268
@@ -499,6 +501,144 @@ static const struct file_operations fops_simulate_fw_crash = {
499 .llseek = default_llseek, 501 .llseek = default_llseek,
500}; 502};
501 503
504static ssize_t ath10k_read_chip_id(struct file *file, char __user *user_buf,
505 size_t count, loff_t *ppos)
506{
507 struct ath10k *ar = file->private_data;
508 unsigned int len;
509 char buf[50];
510
511 len = scnprintf(buf, sizeof(buf), "0x%08x\n", ar->chip_id);
512
513 return simple_read_from_buffer(user_buf, count, ppos, buf, len);
514}
515
516static const struct file_operations fops_chip_id = {
517 .read = ath10k_read_chip_id,
518 .open = simple_open,
519 .owner = THIS_MODULE,
520 .llseek = default_llseek,
521};
522
523static int ath10k_debug_htt_stats_req(struct ath10k *ar)
524{
525 u64 cookie;
526 int ret;
527
528 lockdep_assert_held(&ar->conf_mutex);
529
530 if (ar->debug.htt_stats_mask == 0)
531 /* htt stats are disabled */
532 return 0;
533
534 if (ar->state != ATH10K_STATE_ON)
535 return 0;
536
537 cookie = get_jiffies_64();
538
539 ret = ath10k_htt_h2t_stats_req(&ar->htt, ar->debug.htt_stats_mask,
540 cookie);
541 if (ret) {
542 ath10k_warn("failed to send htt stats request: %d\n", ret);
543 return ret;
544 }
545
546 queue_delayed_work(ar->workqueue, &ar->debug.htt_stats_dwork,
547 msecs_to_jiffies(ATH10K_DEBUG_HTT_STATS_INTERVAL));
548
549 return 0;
550}
551
552static void ath10k_debug_htt_stats_dwork(struct work_struct *work)
553{
554 struct ath10k *ar = container_of(work, struct ath10k,
555 debug.htt_stats_dwork.work);
556
557 mutex_lock(&ar->conf_mutex);
558
559 ath10k_debug_htt_stats_req(ar);
560
561 mutex_unlock(&ar->conf_mutex);
562}
563
564static ssize_t ath10k_read_htt_stats_mask(struct file *file,
565 char __user *user_buf,
566 size_t count, loff_t *ppos)
567{
568 struct ath10k *ar = file->private_data;
569 char buf[32];
570 unsigned int len;
571
572 len = scnprintf(buf, sizeof(buf), "%lu\n", ar->debug.htt_stats_mask);
573
574 return simple_read_from_buffer(user_buf, count, ppos, buf, len);
575}
576
577static ssize_t ath10k_write_htt_stats_mask(struct file *file,
578 const char __user *user_buf,
579 size_t count, loff_t *ppos)
580{
581 struct ath10k *ar = file->private_data;
582 unsigned long mask;
583 int ret;
584
585 ret = kstrtoul_from_user(user_buf, count, 0, &mask);
586 if (ret)
587 return ret;
588
 589 /* max 8-bit mask (for now) */
590 if (mask > 0xff)
591 return -E2BIG;
592
593 mutex_lock(&ar->conf_mutex);
594
595 ar->debug.htt_stats_mask = mask;
596
597 ret = ath10k_debug_htt_stats_req(ar);
598 if (ret)
599 goto out;
600
601 ret = count;
602
603out:
604 mutex_unlock(&ar->conf_mutex);
605
606 return ret;
607}
608
609static const struct file_operations fops_htt_stats_mask = {
610 .read = ath10k_read_htt_stats_mask,
611 .write = ath10k_write_htt_stats_mask,
612 .open = simple_open,
613 .owner = THIS_MODULE,
614 .llseek = default_llseek,
615};
616
617int ath10k_debug_start(struct ath10k *ar)
618{
619 int ret;
620
621 lockdep_assert_held(&ar->conf_mutex);
622
623 ret = ath10k_debug_htt_stats_req(ar);
624 if (ret)
625 /* continue normally anyway, this isn't serious */
626 ath10k_warn("failed to start htt stats workqueue: %d\n", ret);
627
628 return 0;
629}
630
631void ath10k_debug_stop(struct ath10k *ar)
632{
633 lockdep_assert_held(&ar->conf_mutex);
634
 635 /* Must not use the _sync variant here to avoid a deadlock; that is
 636 * done in ath10k_debug_destroy() instead. The htt_stats_mask check
 637 * avoids a warning from del_timer(). */
638 if (ar->debug.htt_stats_mask != 0)
639 cancel_delayed_work(&ar->debug.htt_stats_dwork);
640}
641
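Usage-wise: writing a non-zero mask (at most 0xff for now) to the new htt_stats_mask debugfs file sends an immediate HTT stats request and arms htt_stats_dwork, which re-requests every ATH10K_DEBUG_HTT_STATS_INTERVAL (1000 ms). The bits map to enum htt_dbg_stats_type, so e.g. mask 0x3 polls HTT_DBG_STATS_WAL_PDEV_TXRX and HTT_DBG_STATS_RX_REORDER once a second. Writing 0 lets the polling lapse, since ath10k_debug_htt_stats_req() returns without re-queueing when the mask is clear.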
502int ath10k_debug_create(struct ath10k *ar) 642int ath10k_debug_create(struct ath10k *ar)
503{ 643{
504 ar->debug.debugfs_phy = debugfs_create_dir("ath10k", 644 ar->debug.debugfs_phy = debugfs_create_dir("ath10k",
@@ -507,6 +647,9 @@ int ath10k_debug_create(struct ath10k *ar)
507 if (!ar->debug.debugfs_phy) 647 if (!ar->debug.debugfs_phy)
508 return -ENOMEM; 648 return -ENOMEM;
509 649
650 INIT_DELAYED_WORK(&ar->debug.htt_stats_dwork,
651 ath10k_debug_htt_stats_dwork);
652
510 init_completion(&ar->debug.event_stats_compl); 653 init_completion(&ar->debug.event_stats_compl);
511 654
512 debugfs_create_file("fw_stats", S_IRUSR, ar->debug.debugfs_phy, ar, 655 debugfs_create_file("fw_stats", S_IRUSR, ar->debug.debugfs_phy, ar,
@@ -518,8 +661,20 @@ int ath10k_debug_create(struct ath10k *ar)
518 debugfs_create_file("simulate_fw_crash", S_IRUSR, ar->debug.debugfs_phy, 661 debugfs_create_file("simulate_fw_crash", S_IRUSR, ar->debug.debugfs_phy,
519 ar, &fops_simulate_fw_crash); 662 ar, &fops_simulate_fw_crash);
520 663
664 debugfs_create_file("chip_id", S_IRUSR, ar->debug.debugfs_phy,
665 ar, &fops_chip_id);
666
667 debugfs_create_file("htt_stats_mask", S_IRUSR, ar->debug.debugfs_phy,
668 ar, &fops_htt_stats_mask);
669
521 return 0; 670 return 0;
522} 671}
672
673void ath10k_debug_destroy(struct ath10k *ar)
674{
675 cancel_delayed_work_sync(&ar->debug.htt_stats_dwork);
676}
677
523#endif /* CONFIG_ATH10K_DEBUGFS */ 678#endif /* CONFIG_ATH10K_DEBUGFS */
524 679
525#ifdef CONFIG_ATH10K_DEBUG 680#ifdef CONFIG_ATH10K_DEBUG
diff --git a/drivers/net/wireless/ath/ath10k/debug.h b/drivers/net/wireless/ath/ath10k/debug.h
index 168140c54028..3cfe3ee90dbe 100644
--- a/drivers/net/wireless/ath/ath10k/debug.h
+++ b/drivers/net/wireless/ath/ath10k/debug.h
@@ -27,22 +27,26 @@ enum ath10k_debug_mask {
27 ATH10K_DBG_HTC = 0x00000004, 27 ATH10K_DBG_HTC = 0x00000004,
28 ATH10K_DBG_HTT = 0x00000008, 28 ATH10K_DBG_HTT = 0x00000008,
29 ATH10K_DBG_MAC = 0x00000010, 29 ATH10K_DBG_MAC = 0x00000010,
30 ATH10K_DBG_CORE = 0x00000020, 30 ATH10K_DBG_BOOT = 0x00000020,
31 ATH10K_DBG_PCI_DUMP = 0x00000040, 31 ATH10K_DBG_PCI_DUMP = 0x00000040,
32 ATH10K_DBG_HTT_DUMP = 0x00000080, 32 ATH10K_DBG_HTT_DUMP = 0x00000080,
33 ATH10K_DBG_MGMT = 0x00000100, 33 ATH10K_DBG_MGMT = 0x00000100,
34 ATH10K_DBG_DATA = 0x00000200, 34 ATH10K_DBG_DATA = 0x00000200,
35 ATH10K_DBG_BMI = 0x00000400,
35 ATH10K_DBG_ANY = 0xffffffff, 36 ATH10K_DBG_ANY = 0xffffffff,
36}; 37};
37 38
38extern unsigned int ath10k_debug_mask; 39extern unsigned int ath10k_debug_mask;
39 40
40extern __printf(1, 2) int ath10k_info(const char *fmt, ...); 41__printf(1, 2) int ath10k_info(const char *fmt, ...);
41extern __printf(1, 2) int ath10k_err(const char *fmt, ...); 42__printf(1, 2) int ath10k_err(const char *fmt, ...);
42extern __printf(1, 2) int ath10k_warn(const char *fmt, ...); 43__printf(1, 2) int ath10k_warn(const char *fmt, ...);
43 44
44#ifdef CONFIG_ATH10K_DEBUGFS 45#ifdef CONFIG_ATH10K_DEBUGFS
46int ath10k_debug_start(struct ath10k *ar);
47void ath10k_debug_stop(struct ath10k *ar);
45int ath10k_debug_create(struct ath10k *ar); 48int ath10k_debug_create(struct ath10k *ar);
49void ath10k_debug_destroy(struct ath10k *ar);
46void ath10k_debug_read_service_map(struct ath10k *ar, 50void ath10k_debug_read_service_map(struct ath10k *ar,
47 void *service_map, 51 void *service_map,
48 size_t map_size); 52 size_t map_size);
@@ -50,11 +54,24 @@ void ath10k_debug_read_target_stats(struct ath10k *ar,
50 struct wmi_stats_event *ev); 54 struct wmi_stats_event *ev);
51 55
52#else 56#else
57static inline int ath10k_debug_start(struct ath10k *ar)
58{
59 return 0;
60}
61
62static inline void ath10k_debug_stop(struct ath10k *ar)
63{
64}
65
53static inline int ath10k_debug_create(struct ath10k *ar) 66static inline int ath10k_debug_create(struct ath10k *ar)
54{ 67{
55 return 0; 68 return 0;
56} 69}
57 70
71static inline void ath10k_debug_destroy(struct ath10k *ar)
72{
73}
74
58static inline void ath10k_debug_read_service_map(struct ath10k *ar, 75static inline void ath10k_debug_read_service_map(struct ath10k *ar,
59 void *service_map, 76 void *service_map,
60 size_t map_size) 77 size_t map_size)
@@ -68,7 +85,7 @@ static inline void ath10k_debug_read_target_stats(struct ath10k *ar,
68#endif /* CONFIG_ATH10K_DEBUGFS */ 85#endif /* CONFIG_ATH10K_DEBUGFS */
69 86
70#ifdef CONFIG_ATH10K_DEBUG 87#ifdef CONFIG_ATH10K_DEBUG
71extern __printf(2, 3) void ath10k_dbg(enum ath10k_debug_mask mask, 88__printf(2, 3) void ath10k_dbg(enum ath10k_debug_mask mask,
72 const char *fmt, ...); 89 const char *fmt, ...);
73void ath10k_dbg_dump(enum ath10k_debug_mask mask, 90void ath10k_dbg_dump(enum ath10k_debug_mask mask,
74 const char *msg, const char *prefix, 91 const char *msg, const char *prefix,
diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
index ef3329ef52f3..3118d7506734 100644
--- a/drivers/net/wireless/ath/ath10k/htc.c
+++ b/drivers/net/wireless/ath/ath10k/htc.c
@@ -103,10 +103,10 @@ static void ath10k_htc_prepare_tx_skb(struct ath10k_htc_ep *ep,
103 struct ath10k_htc_hdr *hdr; 103 struct ath10k_htc_hdr *hdr;
104 104
105 hdr = (struct ath10k_htc_hdr *)skb->data; 105 hdr = (struct ath10k_htc_hdr *)skb->data;
106 memset(hdr, 0, sizeof(*hdr));
107 106
108 hdr->eid = ep->eid; 107 hdr->eid = ep->eid;
109 hdr->len = __cpu_to_le16(skb->len - sizeof(*hdr)); 108 hdr->len = __cpu_to_le16(skb->len - sizeof(*hdr));
109 hdr->flags = 0;
110 110
111 spin_lock_bh(&ep->htc->tx_lock); 111 spin_lock_bh(&ep->htc->tx_lock);
112 hdr->seq_no = ep->seq_no++; 112 hdr->seq_no = ep->seq_no++;
@@ -117,134 +117,13 @@ static void ath10k_htc_prepare_tx_skb(struct ath10k_htc_ep *ep,
117 spin_unlock_bh(&ep->htc->tx_lock); 117 spin_unlock_bh(&ep->htc->tx_lock);
118} 118}
119 119
120static int ath10k_htc_issue_skb(struct ath10k_htc *htc,
121 struct ath10k_htc_ep *ep,
122 struct sk_buff *skb,
123 u8 credits)
124{
125 struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(skb);
126 int ret;
127
128 ath10k_dbg(ATH10K_DBG_HTC, "%s: ep %d skb %p\n", __func__,
129 ep->eid, skb);
130
131 ath10k_htc_prepare_tx_skb(ep, skb);
132
133 ret = ath10k_skb_map(htc->ar->dev, skb);
134 if (ret)
135 goto err;
136
137 ret = ath10k_hif_send_head(htc->ar,
138 ep->ul_pipe_id,
139 ep->eid,
140 skb->len,
141 skb);
142 if (unlikely(ret))
143 goto err;
144
145 return 0;
146err:
147 ath10k_warn("HTC issue failed: %d\n", ret);
148
149 spin_lock_bh(&htc->tx_lock);
150 ep->tx_credits += credits;
151 spin_unlock_bh(&htc->tx_lock);
152
153 /* this is the simplest way to handle out-of-resources for non-credit
154 * based endpoints. credit based endpoints can still get -ENOSR, but
155 * this is highly unlikely as credit reservation should prevent that */
156 if (ret == -ENOSR) {
157 spin_lock_bh(&htc->tx_lock);
158 __skb_queue_head(&ep->tx_queue, skb);
159 spin_unlock_bh(&htc->tx_lock);
160
161 return ret;
162 }
163
164 skb_cb->is_aborted = true;
165 ath10k_htc_notify_tx_completion(ep, skb);
166
167 return ret;
168}
169
170static struct sk_buff *ath10k_htc_get_skb_credit_based(struct ath10k_htc *htc,
171 struct ath10k_htc_ep *ep,
172 u8 *credits)
173{
174 struct sk_buff *skb;
175 struct ath10k_skb_cb *skb_cb;
176 int credits_required;
177 int remainder;
178 unsigned int transfer_len;
179
180 lockdep_assert_held(&htc->tx_lock);
181
182 skb = __skb_dequeue(&ep->tx_queue);
183 if (!skb)
184 return NULL;
185
186 skb_cb = ATH10K_SKB_CB(skb);
187 transfer_len = skb->len;
188
189 if (likely(transfer_len <= htc->target_credit_size)) {
190 credits_required = 1;
191 } else {
192 /* figure out how many credits this message requires */
193 credits_required = transfer_len / htc->target_credit_size;
194 remainder = transfer_len % htc->target_credit_size;
195
196 if (remainder)
197 credits_required++;
198 }
199
200 ath10k_dbg(ATH10K_DBG_HTC, "Credits required %d got %d\n",
201 credits_required, ep->tx_credits);
202
203 if (ep->tx_credits < credits_required) {
204 __skb_queue_head(&ep->tx_queue, skb);
205 return NULL;
206 }
207
208 ep->tx_credits -= credits_required;
209 *credits = credits_required;
210 return skb;
211}
212
213static void ath10k_htc_send_work(struct work_struct *work)
214{
215 struct ath10k_htc_ep *ep = container_of(work,
216 struct ath10k_htc_ep, send_work);
217 struct ath10k_htc *htc = ep->htc;
218 struct sk_buff *skb;
219 u8 credits = 0;
220 int ret;
221
222 while (true) {
223 if (ep->ul_is_polled)
224 ath10k_htc_send_complete_check(ep, 0);
225
226 spin_lock_bh(&htc->tx_lock);
227 if (ep->tx_credit_flow_enabled)
228 skb = ath10k_htc_get_skb_credit_based(htc, ep,
229 &credits);
230 else
231 skb = __skb_dequeue(&ep->tx_queue);
232 spin_unlock_bh(&htc->tx_lock);
233
234 if (!skb)
235 break;
236
237 ret = ath10k_htc_issue_skb(htc, ep, skb, credits);
238 if (ret == -ENOSR)
239 break;
240 }
241}
242
243int ath10k_htc_send(struct ath10k_htc *htc, 120int ath10k_htc_send(struct ath10k_htc *htc,
244 enum ath10k_htc_ep_id eid, 121 enum ath10k_htc_ep_id eid,
245 struct sk_buff *skb) 122 struct sk_buff *skb)
246{ 123{
247 struct ath10k_htc_ep *ep = &htc->endpoint[eid]; 124 struct ath10k_htc_ep *ep = &htc->endpoint[eid];
125 int credits = 0;
126 int ret;
248 127
249 if (htc->ar->state == ATH10K_STATE_WEDGED) 128 if (htc->ar->state == ATH10K_STATE_WEDGED)
250 return -ECOMM; 129 return -ECOMM;
@@ -254,18 +133,55 @@ int ath10k_htc_send(struct ath10k_htc *htc,
254 return -ENOENT; 133 return -ENOENT;
255 } 134 }
256 135
136 /* FIXME: This looks ugly, can we fix it? */
257 spin_lock_bh(&htc->tx_lock); 137 spin_lock_bh(&htc->tx_lock);
258 if (htc->stopped) { 138 if (htc->stopped) {
259 spin_unlock_bh(&htc->tx_lock); 139 spin_unlock_bh(&htc->tx_lock);
260 return -ESHUTDOWN; 140 return -ESHUTDOWN;
261 } 141 }
142 spin_unlock_bh(&htc->tx_lock);
262 143
263 __skb_queue_tail(&ep->tx_queue, skb);
264 skb_push(skb, sizeof(struct ath10k_htc_hdr)); 144 skb_push(skb, sizeof(struct ath10k_htc_hdr));
265 spin_unlock_bh(&htc->tx_lock);
266 145
267 queue_work(htc->ar->workqueue, &ep->send_work); 146 if (ep->tx_credit_flow_enabled) {
147 credits = DIV_ROUND_UP(skb->len, htc->target_credit_size);
148 spin_lock_bh(&htc->tx_lock);
149 if (ep->tx_credits < credits) {
150 spin_unlock_bh(&htc->tx_lock);
151 ret = -EAGAIN;
152 goto err_pull;
153 }
154 ep->tx_credits -= credits;
155 spin_unlock_bh(&htc->tx_lock);
156 }
157
158 ath10k_htc_prepare_tx_skb(ep, skb);
159
160 ret = ath10k_skb_map(htc->ar->dev, skb);
161 if (ret)
162 goto err_credits;
163
164 ret = ath10k_hif_send_head(htc->ar, ep->ul_pipe_id, ep->eid,
165 skb->len, skb);
166 if (ret)
167 goto err_unmap;
168
268 return 0; 169 return 0;
170
171err_unmap:
172 ath10k_skb_unmap(htc->ar->dev, skb);
173err_credits:
174 if (ep->tx_credit_flow_enabled) {
175 spin_lock_bh(&htc->tx_lock);
176 ep->tx_credits += credits;
177 spin_unlock_bh(&htc->tx_lock);
178
179 if (ep->ep_ops.ep_tx_credits)
180 ep->ep_ops.ep_tx_credits(htc->ar);
181 }
182err_pull:
183 skb_pull(skb, sizeof(struct ath10k_htc_hdr));
184 return ret;
269} 185}
270 186
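The rewritten send path computes credits up front: DIV_ROUND_UP(skb->len, htc->target_credit_size) gives the same answer as the old divide-plus-remainder logic, e.g. a 4000-byte message with a hypothetical 1792-byte credit size still costs three credits. The behavioural difference is that credit exhaustion is now reported synchronously as -EAGAIN instead of parking the skb on the removed ep->tx_queue, and the new ep_tx_credits hook (invoked from the credit report handler below) tells the caller when a retry is worthwhile.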
271static int ath10k_htc_tx_completion_handler(struct ath10k *ar, 187static int ath10k_htc_tx_completion_handler(struct ath10k *ar,
@@ -278,39 +194,9 @@ static int ath10k_htc_tx_completion_handler(struct ath10k *ar,
278 ath10k_htc_notify_tx_completion(ep, skb); 194 ath10k_htc_notify_tx_completion(ep, skb);
279 /* the skb now belongs to the completion handler */ 195 /* the skb now belongs to the completion handler */
280 196
281 /* note: when using TX credit flow, the re-checking of queues happens
282 * when credits flow back from the target. in the non-TX credit case,
283 * we recheck after the packet completes */
284 spin_lock_bh(&htc->tx_lock);
285 if (!ep->tx_credit_flow_enabled && !htc->stopped)
286 queue_work(ar->workqueue, &ep->send_work);
287 spin_unlock_bh(&htc->tx_lock);
288
289 return 0; 197 return 0;
290} 198}
291 199
292/* flush endpoint TX queue */
293static void ath10k_htc_flush_endpoint_tx(struct ath10k_htc *htc,
294 struct ath10k_htc_ep *ep)
295{
296 struct sk_buff *skb;
297 struct ath10k_skb_cb *skb_cb;
298
299 spin_lock_bh(&htc->tx_lock);
300 for (;;) {
301 skb = __skb_dequeue(&ep->tx_queue);
302 if (!skb)
303 break;
304
305 skb_cb = ATH10K_SKB_CB(skb);
306 skb_cb->is_aborted = true;
307 ath10k_htc_notify_tx_completion(ep, skb);
308 }
309 spin_unlock_bh(&htc->tx_lock);
310
311 cancel_work_sync(&ep->send_work);
312}
313
314/***********/ 200/***********/
315/* Receive */ 201/* Receive */
316/***********/ 202/***********/
@@ -340,8 +226,11 @@ ath10k_htc_process_credit_report(struct ath10k_htc *htc,
340 ep = &htc->endpoint[report->eid]; 226 ep = &htc->endpoint[report->eid];
341 ep->tx_credits += report->credits; 227 ep->tx_credits += report->credits;
342 228
343 if (ep->tx_credits && !skb_queue_empty(&ep->tx_queue)) 229 if (ep->ep_ops.ep_tx_credits) {
344 queue_work(htc->ar->workqueue, &ep->send_work); 230 spin_unlock_bh(&htc->tx_lock);
231 ep->ep_ops.ep_tx_credits(htc->ar);
232 spin_lock_bh(&htc->tx_lock);
233 }
345 } 234 }
346 spin_unlock_bh(&htc->tx_lock); 235 spin_unlock_bh(&htc->tx_lock);
347} 236}
@@ -599,10 +488,8 @@ static void ath10k_htc_reset_endpoint_states(struct ath10k_htc *htc)
599 ep->max_ep_message_len = 0; 488 ep->max_ep_message_len = 0;
600 ep->max_tx_queue_depth = 0; 489 ep->max_tx_queue_depth = 0;
601 ep->eid = i; 490 ep->eid = i;
602 skb_queue_head_init(&ep->tx_queue);
603 ep->htc = htc; 491 ep->htc = htc;
604 ep->tx_credit_flow_enabled = true; 492 ep->tx_credit_flow_enabled = true;
605 INIT_WORK(&ep->send_work, ath10k_htc_send_work);
606 } 493 }
607} 494}
608 495
@@ -752,8 +639,8 @@ int ath10k_htc_connect_service(struct ath10k_htc *htc,
752 tx_alloc = ath10k_htc_get_credit_allocation(htc, 639 tx_alloc = ath10k_htc_get_credit_allocation(htc,
753 conn_req->service_id); 640 conn_req->service_id);
754 if (!tx_alloc) 641 if (!tx_alloc)
755 ath10k_dbg(ATH10K_DBG_HTC, 642 ath10k_dbg(ATH10K_DBG_BOOT,
756 "HTC Service %s does not allocate target credits\n", 643 "boot htc service %s does not allocate target credits\n",
757 htc_service_name(conn_req->service_id)); 644 htc_service_name(conn_req->service_id));
758 645
759 skb = ath10k_htc_build_tx_ctrl_skb(htc->ar); 646 skb = ath10k_htc_build_tx_ctrl_skb(htc->ar);
@@ -772,16 +659,16 @@ int ath10k_htc_connect_service(struct ath10k_htc *htc,
772 659
773 flags |= SM(tx_alloc, ATH10K_HTC_CONN_FLAGS_RECV_ALLOC); 660 flags |= SM(tx_alloc, ATH10K_HTC_CONN_FLAGS_RECV_ALLOC);
774 661
775 req_msg = &msg->connect_service;
776 req_msg->flags = __cpu_to_le16(flags);
777 req_msg->service_id = __cpu_to_le16(conn_req->service_id);
778
779 /* Only enable credit flow control for WMI ctrl service */ 662 /* Only enable credit flow control for WMI ctrl service */
780 if (conn_req->service_id != ATH10K_HTC_SVC_ID_WMI_CONTROL) { 663 if (conn_req->service_id != ATH10K_HTC_SVC_ID_WMI_CONTROL) {
781 flags |= ATH10K_HTC_CONN_FLAGS_DISABLE_CREDIT_FLOW_CTRL; 664 flags |= ATH10K_HTC_CONN_FLAGS_DISABLE_CREDIT_FLOW_CTRL;
782 disable_credit_flow_ctrl = true; 665 disable_credit_flow_ctrl = true;
783 } 666 }
784 667
668 req_msg = &msg->connect_service;
669 req_msg->flags = __cpu_to_le16(flags);
670 req_msg->service_id = __cpu_to_le16(conn_req->service_id);
671
785 INIT_COMPLETION(htc->ctl_resp); 672 INIT_COMPLETION(htc->ctl_resp);
786 673
787 status = ath10k_htc_send(htc, ATH10K_HTC_EP_0, skb); 674 status = ath10k_htc_send(htc, ATH10K_HTC_EP_0, skb);
@@ -873,19 +760,19 @@ setup:
873 if (status) 760 if (status)
874 return status; 761 return status;
875 762
876 ath10k_dbg(ATH10K_DBG_HTC, 763 ath10k_dbg(ATH10K_DBG_BOOT,
877 "HTC service: %s UL pipe: %d DL pipe: %d eid: %d ready\n", 764 "boot htc service '%s' ul pipe %d dl pipe %d eid %d ready\n",
878 htc_service_name(ep->service_id), ep->ul_pipe_id, 765 htc_service_name(ep->service_id), ep->ul_pipe_id,
879 ep->dl_pipe_id, ep->eid); 766 ep->dl_pipe_id, ep->eid);
880 767
881 ath10k_dbg(ATH10K_DBG_HTC, 768 ath10k_dbg(ATH10K_DBG_BOOT,
882 "EP %d UL polled: %d, DL polled: %d\n", 769 "boot htc ep %d ul polled %d dl polled %d\n",
883 ep->eid, ep->ul_is_polled, ep->dl_is_polled); 770 ep->eid, ep->ul_is_polled, ep->dl_is_polled);
884 771
885 if (disable_credit_flow_ctrl && ep->tx_credit_flow_enabled) { 772 if (disable_credit_flow_ctrl && ep->tx_credit_flow_enabled) {
886 ep->tx_credit_flow_enabled = false; 773 ep->tx_credit_flow_enabled = false;
887 ath10k_dbg(ATH10K_DBG_HTC, 774 ath10k_dbg(ATH10K_DBG_BOOT,
888 "HTC service: %s eid: %d TX flow control disabled\n", 775 "boot htc service '%s' eid %d TX flow control disabled\n",
889 htc_service_name(ep->service_id), assigned_eid); 776 htc_service_name(ep->service_id), assigned_eid);
890 } 777 }
891 778
@@ -945,18 +832,10 @@ int ath10k_htc_start(struct ath10k_htc *htc)
945 */ 832 */
946void ath10k_htc_stop(struct ath10k_htc *htc) 833void ath10k_htc_stop(struct ath10k_htc *htc)
947{ 834{
948 int i;
949 struct ath10k_htc_ep *ep;
950
951 spin_lock_bh(&htc->tx_lock); 835 spin_lock_bh(&htc->tx_lock);
952 htc->stopped = true; 836 htc->stopped = true;
953 spin_unlock_bh(&htc->tx_lock); 837 spin_unlock_bh(&htc->tx_lock);
954 838
955 for (i = ATH10K_HTC_EP_0; i < ATH10K_HTC_EP_COUNT; i++) {
956 ep = &htc->endpoint[i];
957 ath10k_htc_flush_endpoint_tx(htc, ep);
958 }
959
960 ath10k_hif_stop(htc->ar); 839 ath10k_hif_stop(htc->ar);
961} 840}
962 841
diff --git a/drivers/net/wireless/ath/ath10k/htc.h b/drivers/net/wireless/ath/ath10k/htc.h
index e1dd8c761853..4716d331e6b6 100644
--- a/drivers/net/wireless/ath/ath10k/htc.h
+++ b/drivers/net/wireless/ath/ath10k/htc.h
@@ -276,6 +276,7 @@ struct ath10k_htc_ops {
276struct ath10k_htc_ep_ops { 276struct ath10k_htc_ep_ops {
277 void (*ep_tx_complete)(struct ath10k *, struct sk_buff *); 277 void (*ep_tx_complete)(struct ath10k *, struct sk_buff *);
278 void (*ep_rx_complete)(struct ath10k *, struct sk_buff *); 278 void (*ep_rx_complete)(struct ath10k *, struct sk_buff *);
279 void (*ep_tx_credits)(struct ath10k *);
279}; 280};
280 281
281/* service connection information */ 282/* service connection information */
@@ -315,15 +316,11 @@ struct ath10k_htc_ep {
315 int ul_is_polled; /* call HIF to get tx completions */ 316 int ul_is_polled; /* call HIF to get tx completions */
316 int dl_is_polled; /* call HIF to fetch rx (not implemented) */ 317 int dl_is_polled; /* call HIF to fetch rx (not implemented) */
317 318
318 struct sk_buff_head tx_queue;
319
320 u8 seq_no; /* for debugging */ 319 u8 seq_no; /* for debugging */
321 int tx_credits; 320 int tx_credits;
322 int tx_credit_size; 321 int tx_credit_size;
323 int tx_credits_per_max_message; 322 int tx_credits_per_max_message;
324 bool tx_credit_flow_enabled; 323 bool tx_credit_flow_enabled;
325
326 struct work_struct send_work;
327}; 324};
328 325
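A sketch of wiring the new ep_tx_credits hook when connecting a service; the handler names and the conn_req ep_ops assignment are illustrative, not lifted from this diff. A WMI-style user would simply wake whoever is blocked waiting for credits:

static void example_ep_tx_credits(struct ath10k *ar)
{
	/* e.g. wake_up(&ar->wmi.tx_credits_wq) or kick a tx worker */
}

	/* in the service connect request: */
	conn_req.ep_ops.ep_tx_complete = example_ep_tx_complete;
	conn_req.ep_ops.ep_tx_credits = example_ep_tx_credits;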
329struct ath10k_htc_svc_tx_credits { 326struct ath10k_htc_svc_tx_credits {
diff --git a/drivers/net/wireless/ath/ath10k/htt.c b/drivers/net/wireless/ath/ath10k/htt.c
index 39342c5cfcb2..5f7eeebc5432 100644
--- a/drivers/net/wireless/ath/ath10k/htt.c
+++ b/drivers/net/wireless/ath/ath10k/htt.c
@@ -104,21 +104,16 @@ err_htc_attach:
104 104
105static int ath10k_htt_verify_version(struct ath10k_htt *htt) 105static int ath10k_htt_verify_version(struct ath10k_htt *htt)
106{ 106{
107 ath10k_dbg(ATH10K_DBG_HTT, 107 ath10k_info("htt target version %d.%d\n",
108 "htt target version %d.%d; host version %d.%d\n", 108 htt->target_version_major, htt->target_version_minor);
109 htt->target_version_major, 109
110 htt->target_version_minor, 110 if (htt->target_version_major != 2 &&
111 HTT_CURRENT_VERSION_MAJOR, 111 htt->target_version_major != 3) {
112 HTT_CURRENT_VERSION_MINOR); 112 ath10k_err("unsupported htt major version %d. supported versions are 2 and 3\n",
113 113 htt->target_version_major);
114 if (htt->target_version_major != HTT_CURRENT_VERSION_MAJOR) {
115 ath10k_err("htt major versions are incompatible!\n");
116 return -ENOTSUPP; 114 return -ENOTSUPP;
117 } 115 }
118 116
119 if (htt->target_version_minor != HTT_CURRENT_VERSION_MINOR)
120 ath10k_warn("htt minor version differ but still compatible\n");
121
122 return 0; 117 return 0;
123} 118}
124 119
diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
index 318be4629cde..1a337e93b7e9 100644
--- a/drivers/net/wireless/ath/ath10k/htt.h
+++ b/drivers/net/wireless/ath/ath10k/htt.h
@@ -19,13 +19,11 @@
19#define _HTT_H_ 19#define _HTT_H_
20 20
21#include <linux/bug.h> 21#include <linux/bug.h>
22#include <linux/interrupt.h>
22 23
23#include "htc.h" 24#include "htc.h"
24#include "rx_desc.h" 25#include "rx_desc.h"
25 26
26#define HTT_CURRENT_VERSION_MAJOR 2
27#define HTT_CURRENT_VERSION_MINOR 1
28
29enum htt_dbg_stats_type { 27enum htt_dbg_stats_type {
30 HTT_DBG_STATS_WAL_PDEV_TXRX = 1 << 0, 28 HTT_DBG_STATS_WAL_PDEV_TXRX = 1 << 0,
31 HTT_DBG_STATS_RX_REORDER = 1 << 1, 29 HTT_DBG_STATS_RX_REORDER = 1 << 1,
@@ -45,6 +43,9 @@ enum htt_h2t_msg_type { /* host-to-target */
45 HTT_H2T_MSG_TYPE_SYNC = 4, 43 HTT_H2T_MSG_TYPE_SYNC = 4,
46 HTT_H2T_MSG_TYPE_AGGR_CFG = 5, 44 HTT_H2T_MSG_TYPE_AGGR_CFG = 5,
47 HTT_H2T_MSG_TYPE_FRAG_DESC_BANK_CFG = 6, 45 HTT_H2T_MSG_TYPE_FRAG_DESC_BANK_CFG = 6,
46
47 /* This command is used for sending management frames in HTT < 3.0.
48 * HTT >= 3.0 uses TX_FRM for everything. */
48 HTT_H2T_MSG_TYPE_MGMT_TX = 7, 49 HTT_H2T_MSG_TYPE_MGMT_TX = 7,
49 50
50 HTT_H2T_NUM_MSGS /* keep this last */ 51 HTT_H2T_NUM_MSGS /* keep this last */
@@ -1268,6 +1269,7 @@ struct ath10k_htt {
1268 /* set if host-fw communication goes haywire 1269 /* set if host-fw communication goes haywire
1269 * used to avoid further failures */ 1270 * used to avoid further failures */
1270 bool rx_confused; 1271 bool rx_confused;
1272 struct tasklet_struct rx_replenish_task;
1271}; 1273};
1272 1274
1273#define RX_HTT_HDR_STATUS_LEN 64 1275#define RX_HTT_HDR_STATUS_LEN 64
@@ -1308,6 +1310,10 @@ struct htt_rx_desc {
1308#define HTT_RX_BUF_SIZE 1920 1310#define HTT_RX_BUF_SIZE 1920
1309#define HTT_RX_MSDU_SIZE (HTT_RX_BUF_SIZE - (int)sizeof(struct htt_rx_desc)) 1311#define HTT_RX_MSDU_SIZE (HTT_RX_BUF_SIZE - (int)sizeof(struct htt_rx_desc))
1310 1312
1313/* Refill a bunch of RX buffers for each refill round so that FW/HW can handle
1314 * aggregated traffic more nicely. */
1315#define ATH10K_HTT_MAX_NUM_REFILL 16
1316
1311/* 1317/*
1312 * DMA_MAP expects the buffer to be an integral number of cache lines. 1318 * DMA_MAP expects the buffer to be an integral number of cache lines.
1313 * Rather than checking the actual cache line size, this code makes a 1319 * Rather than checking the actual cache line size, this code makes a
@@ -1327,6 +1333,7 @@ void ath10k_htt_rx_detach(struct ath10k_htt *htt);
1327void ath10k_htt_htc_tx_complete(struct ath10k *ar, struct sk_buff *skb); 1333void ath10k_htt_htc_tx_complete(struct ath10k *ar, struct sk_buff *skb);
1328void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb); 1334void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb);
1329int ath10k_htt_h2t_ver_req_msg(struct ath10k_htt *htt); 1335int ath10k_htt_h2t_ver_req_msg(struct ath10k_htt *htt);
1336int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u8 mask, u64 cookie);
1330int ath10k_htt_send_rx_ring_cfg_ll(struct ath10k_htt *htt); 1337int ath10k_htt_send_rx_ring_cfg_ll(struct ath10k_htt *htt);
1331 1338
1332void __ath10k_htt_tx_dec_pending(struct ath10k_htt *htt); 1339void __ath10k_htt_tx_dec_pending(struct ath10k_htt *htt);
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
index e784c40b904b..90d4f74c28d7 100644
--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
@@ -20,6 +20,7 @@
20#include "htt.h" 20#include "htt.h"
21#include "txrx.h" 21#include "txrx.h"
22#include "debug.h" 22#include "debug.h"
23#include "trace.h"
23 24
24#include <linux/log2.h> 25#include <linux/log2.h>
25 26
@@ -40,6 +41,10 @@
40/* when under memory pressure rx ring refill may fail and needs a retry */ 41/* when under memory pressure rx ring refill may fail and needs a retry */
41#define HTT_RX_RING_REFILL_RETRY_MS 50 42#define HTT_RX_RING_REFILL_RETRY_MS 50
42 43
44
45static int ath10k_htt_rx_get_csum_state(struct sk_buff *skb);
46
47
43static int ath10k_htt_rx_ring_size(struct ath10k_htt *htt) 48static int ath10k_htt_rx_ring_size(struct ath10k_htt *htt)
44{ 49{
45 int size; 50 int size;
@@ -177,10 +182,27 @@ static int ath10k_htt_rx_ring_fill_n(struct ath10k_htt *htt, int num)
177 182
178static void ath10k_htt_rx_msdu_buff_replenish(struct ath10k_htt *htt) 183static void ath10k_htt_rx_msdu_buff_replenish(struct ath10k_htt *htt)
179{ 184{
180 int ret, num_to_fill; 185 int ret, num_deficit, num_to_fill;
181 186
187 /* Refilling the whole RX ring buffer proves to be a bad idea. The
188 * reason is RX may take up significant amount of CPU cycles and starve
189 * other tasks, e.g. TX on an ethernet device while acting as a bridge
190 * with ath10k wlan interface. This ended up with very poor performance
191 * once CPU the host system was overwhelmed with RX on ath10k.
192 *
193 * By limiting the number of refills the replenishing occurs
194 * progressively. This in turns makes use of the fact tasklets are
195 * processed in FIFO order. This means actual RX processing can starve
196 * out refilling. If there's not enough buffers on RX ring FW will not
197 * report RX until it is refilled with enough buffers. This
198 * automatically balances load wrt to CPU power.
199 *
200 * This probably comes at a cost of lower maximum throughput but
201 * improves the avarage and stability. */
182 spin_lock_bh(&htt->rx_ring.lock); 202 spin_lock_bh(&htt->rx_ring.lock);
183 num_to_fill = htt->rx_ring.fill_level - htt->rx_ring.fill_cnt; 203 num_deficit = htt->rx_ring.fill_level - htt->rx_ring.fill_cnt;
204 num_to_fill = min(ATH10K_HTT_MAX_NUM_REFILL, num_deficit);
205 num_deficit -= num_to_fill;
184 ret = ath10k_htt_rx_ring_fill_n(htt, num_to_fill); 206 ret = ath10k_htt_rx_ring_fill_n(htt, num_to_fill);
185 if (ret == -ENOMEM) { 207 if (ret == -ENOMEM) {
186 /* 208 /*
@@ -191,6 +213,8 @@ static void ath10k_htt_rx_msdu_buff_replenish(struct ath10k_htt *htt)
191 */ 213 */
192 mod_timer(&htt->rx_ring.refill_retry_timer, jiffies + 214 mod_timer(&htt->rx_ring.refill_retry_timer, jiffies +
193 msecs_to_jiffies(HTT_RX_RING_REFILL_RETRY_MS)); 215 msecs_to_jiffies(HTT_RX_RING_REFILL_RETRY_MS));
216 } else if (num_deficit > 0) {
217 tasklet_schedule(&htt->rx_replenish_task);
194 } 218 }
195 spin_unlock_bh(&htt->rx_ring.lock); 219 spin_unlock_bh(&htt->rx_ring.lock);
196} 220}
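The deficit arithmetic above is the whole mechanism: each pass tops up at most ATH10K_HTT_MAX_NUM_REFILL buffers and reschedules itself while a deficit remains, so RX processing can run in between. A standalone model of that control flow (plain C with hypothetical names; the self-call stands in for tasklet_schedule() and the counter for the ring fill call):

    #include <stdio.h>

    #define MAX_NUM_REFILL 16

    static int fill_cnt, fill_level = 100;

    static void replenish_round(void)
    {
            int num_deficit = fill_level - fill_cnt;
            int num_to_fill = num_deficit < MAX_NUM_REFILL ?
                              num_deficit : MAX_NUM_REFILL;

            fill_cnt += num_to_fill;    /* stands in for ring_fill_n() */
            num_deficit -= num_to_fill;

            if (num_deficit > 0)
                    replenish_round();  /* stands in for tasklet_schedule() */
    }

    int main(void)
    {
            replenish_round();
            printf("filled %d of %d\n", fill_cnt, fill_level);
            return 0;
    }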
@@ -212,6 +236,7 @@ void ath10k_htt_rx_detach(struct ath10k_htt *htt)
 	int sw_rd_idx = htt->rx_ring.sw_rd_idx.msdu_payld;
 
 	del_timer_sync(&htt->rx_ring.refill_retry_timer);
+	tasklet_kill(&htt->rx_replenish_task);
 
 	while (sw_rd_idx != __le32_to_cpu(*(htt->rx_ring.alloc_idx.vaddr))) {
 		struct sk_buff *skb =
@@ -441,6 +466,12 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
 	return msdu_chaining;
 }
 
+static void ath10k_htt_rx_replenish_task(unsigned long ptr)
+{
+	struct ath10k_htt *htt = (struct ath10k_htt *)ptr;
+	ath10k_htt_rx_msdu_buff_replenish(htt);
+}
+
 int ath10k_htt_rx_attach(struct ath10k_htt *htt)
 {
 	dma_addr_t paddr;
@@ -501,7 +532,10 @@ int ath10k_htt_rx_attach(struct ath10k_htt *htt)
 	if (__ath10k_htt_rx_ring_fill_n(htt, htt->rx_ring.fill_level))
 		goto err_fill_ring;
 
-	ath10k_dbg(ATH10K_DBG_HTT, "HTT RX ring size: %d, fill_level: %d\n",
+	tasklet_init(&htt->rx_replenish_task, ath10k_htt_rx_replenish_task,
+		     (unsigned long)htt);
+
+	ath10k_dbg(ATH10K_DBG_BOOT, "htt rx ring size %d fill_level %d\n",
 		   htt->rx_ring.size, htt->rx_ring.fill_level);
 	return 0;
 
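The three calls above form the usual tasklet lifecycle: tasklet_init() binds a handler and an unsigned-long context at attach time, tasklet_schedule() defers work into softirq context, and tasklet_kill() in the detach path waits out any pending run before the owning structure is freed. A condensed kernel-style sketch of that pairing (hypothetical names, error handling omitted):

    #include <linux/interrupt.h>

    struct foo {
            struct tasklet_struct task;
    };

    static void foo_task(unsigned long ptr)
    {
            struct foo *f = (struct foo *)ptr;  /* cast the context back */

            /* deferred work runs here, in softirq context */
            (void)f;
    }

    static void foo_attach(struct foo *f)
    {
            tasklet_init(&f->task, foo_task, (unsigned long)f);
    }

    static void foo_detach(struct foo *f)
    {
            tasklet_kill(&f->task);             /* must precede freeing f */
    }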
@@ -590,134 +624,144 @@ static bool ath10k_htt_rx_hdr_is_amsdu(struct ieee80211_hdr *hdr)
 	return false;
 }
 
-static int ath10k_htt_rx_amsdu(struct ath10k_htt *htt,
-			       struct htt_rx_info *info)
+struct rfc1042_hdr {
+	u8 llc_dsap;
+	u8 llc_ssap;
+	u8 llc_ctrl;
+	u8 snap_oui[3];
+	__be16 snap_type;
+} __packed;
+
+struct amsdu_subframe_hdr {
+	u8 dst[ETH_ALEN];
+	u8 src[ETH_ALEN];
+	__be16 len;
+} __packed;
+
+static void ath10k_htt_rx_amsdu(struct ath10k_htt *htt,
+				struct htt_rx_info *info)
 {
 	struct htt_rx_desc *rxd;
-	struct sk_buff *amsdu;
 	struct sk_buff *first;
-	struct ieee80211_hdr *hdr;
 	struct sk_buff *skb = info->skb;
 	enum rx_msdu_decap_format fmt;
 	enum htt_rx_mpdu_encrypt_type enctype;
+	struct ieee80211_hdr *hdr;
+	u8 hdr_buf[64], addr[ETH_ALEN], *qos;
 	unsigned int hdr_len;
-	int crypto_len;
 
 	rxd = (void *)skb->data - sizeof(*rxd);
-	fmt = MS(__le32_to_cpu(rxd->msdu_start.info1),
-		 RX_MSDU_START_INFO1_DECAP_FORMAT);
 	enctype = MS(__le32_to_cpu(rxd->mpdu_start.info0),
 		     RX_MPDU_START_INFO0_ENCRYPT_TYPE);
 
-	/* FIXME: No idea what assumptions are safe here. Need logs */
-	if ((fmt == RX_MSDU_DECAP_RAW && skb->next) ||
-	    (fmt == RX_MSDU_DECAP_8023_SNAP_LLC)) {
-		ath10k_htt_rx_free_msdu_chain(skb->next);
-		skb->next = NULL;
-		return -ENOTSUPP;
-	}
+	hdr = (struct ieee80211_hdr *)rxd->rx_hdr_status;
+	hdr_len = ieee80211_hdrlen(hdr->frame_control);
+	memcpy(hdr_buf, hdr, hdr_len);
+	hdr = (struct ieee80211_hdr *)hdr_buf;
 
-	/* A-MSDU max is a little less than 8K */
-	amsdu = dev_alloc_skb(8*1024);
-	if (!amsdu) {
-		ath10k_warn("A-MSDU allocation failed\n");
-		ath10k_htt_rx_free_msdu_chain(skb->next);
-		skb->next = NULL;
-		return -ENOMEM;
-	}
-
-	if (fmt >= RX_MSDU_DECAP_NATIVE_WIFI) {
-		int hdrlen;
-
-		hdr = (void *)rxd->rx_hdr_status;
-		hdrlen = ieee80211_hdrlen(hdr->frame_control);
-		memcpy(skb_put(amsdu, hdrlen), hdr, hdrlen);
-	}
+	/* FIXME: Hopefully this is a temporary measure.
+	 *
+	 * Reporting individual A-MSDU subframes means each reported frame
+	 * shares the same sequence number.
+	 *
+	 * mac80211 drops frames it recognizes as duplicates, i.e. the
+	 * retransmission flag is set and the sequence number matches the
+	 * sequence number of a previous frame (as per IEEE 802.11-2012:
+	 * 9.3.2.10 "Duplicate detection and recovery").
+	 *
+	 * To avoid frames being dropped clear the retransmission flag for
+	 * all received A-MSDUs.
+	 *
+	 * Worst case: actual duplicate frames will be reported but this
+	 * should still be handled gracefully by other OSI/ISO layers. */
+	hdr->frame_control &= cpu_to_le16(~IEEE80211_FCTL_RETRY);
 
 	first = skb;
 	while (skb) {
 		void *decap_hdr;
-		int decap_len = 0;
+		int len;
 
 		rxd = (void *)skb->data - sizeof(*rxd);
 		fmt = MS(__le32_to_cpu(rxd->msdu_start.info1),
 			 RX_MSDU_START_INFO1_DECAP_FORMAT);
 		decap_hdr = (void *)rxd->rx_hdr_status;
 
-		if (skb == first) {
-			/* We receive linked A-MSDU subframe skbuffs. The
-			 * first one contains the original 802.11 header (and
-			 * possible crypto param) in the RX descriptor. The
-			 * A-MSDU subframe header follows that. Each part is
-			 * aligned to 4 byte boundary. */
-
-			hdr = (void *)amsdu->data;
-			hdr_len = ieee80211_hdrlen(hdr->frame_control);
-			crypto_len = ath10k_htt_rx_crypto_param_len(enctype);
-
-			decap_hdr += roundup(hdr_len, 4);
-			decap_hdr += roundup(crypto_len, 4);
-		}
+		skb->ip_summed = ath10k_htt_rx_get_csum_state(skb);
 
-		if (fmt == RX_MSDU_DECAP_ETHERNET2_DIX) {
-			/* Ethernet2 decap inserts ethernet header in place of
-			 * A-MSDU subframe header. */
-			skb_pull(skb, 6 + 6 + 2);
-
-			/* A-MSDU subframe header length */
-			decap_len += 6 + 6 + 2;
-
-			/* Ethernet2 decap also strips the LLC/SNAP so we need
-			 * to re-insert it. The LLC/SNAP follows A-MSDU
-			 * subframe header. */
-			/* FIXME: Not all LLCs are 8 bytes long */
-			decap_len += 8;
-
-			memcpy(skb_put(amsdu, decap_len), decap_hdr, decap_len);
+		/* First frame in an A-MSDU chain has more decapped data. */
+		if (skb == first) {
+			len = round_up(ieee80211_hdrlen(hdr->frame_control), 4);
+			len += round_up(ath10k_htt_rx_crypto_param_len(enctype),
+					4);
+			decap_hdr += len;
 		}
 
-		if (fmt == RX_MSDU_DECAP_NATIVE_WIFI) {
-			/* Native Wifi decap inserts regular 802.11 header
-			 * in place of A-MSDU subframe header. */
+		switch (fmt) {
+		case RX_MSDU_DECAP_RAW:
+			/* remove trailing FCS */
+			skb_trim(skb, skb->len - FCS_LEN);
+			break;
+		case RX_MSDU_DECAP_NATIVE_WIFI:
+			/* pull decapped header and copy DA */
 			hdr = (struct ieee80211_hdr *)skb->data;
-			skb_pull(skb, ieee80211_hdrlen(hdr->frame_control));
+			hdr_len = ieee80211_hdrlen(hdr->frame_control);
+			memcpy(addr, ieee80211_get_DA(hdr), ETH_ALEN);
+			skb_pull(skb, hdr_len);
 
-			/* A-MSDU subframe header length */
-			decap_len += 6 + 6 + 2;
+			/* push original 802.11 header */
+			hdr = (struct ieee80211_hdr *)hdr_buf;
+			hdr_len = ieee80211_hdrlen(hdr->frame_control);
+			memcpy(skb_push(skb, hdr_len), hdr, hdr_len);
 
-			memcpy(skb_put(amsdu, decap_len), decap_hdr, decap_len);
-		}
+			/* original A-MSDU header has the bit set but we're
+			 * not including the A-MSDU subframe header */
+			hdr = (struct ieee80211_hdr *)skb->data;
+			qos = ieee80211_get_qos_ctl(hdr);
+			qos[0] &= ~IEEE80211_QOS_CTL_A_MSDU_PRESENT;
 
-		if (fmt == RX_MSDU_DECAP_RAW)
-			skb_trim(skb, skb->len - 4); /* remove FCS */
+			/* original 802.11 header has a different DA */
+			memcpy(ieee80211_get_DA(hdr), addr, ETH_ALEN);
+			break;
+		case RX_MSDU_DECAP_ETHERNET2_DIX:
+			/* strip ethernet header and insert decapped 802.11
+			 * header, amsdu subframe header and rfc1042 header */
 
-		memcpy(skb_put(amsdu, skb->len), skb->data, skb->len);
+			len = 0;
+			len += sizeof(struct rfc1042_hdr);
+			len += sizeof(struct amsdu_subframe_hdr);
 
-		/* A-MSDU subframes are padded to 4bytes
-		 * but relative to first subframe, not the whole MPDU */
-		if (skb->next && ((decap_len + skb->len) & 3)) {
-			int padlen = 4 - ((decap_len + skb->len) & 3);
-			memset(skb_put(amsdu, padlen), 0, padlen);
+			skb_pull(skb, sizeof(struct ethhdr));
+			memcpy(skb_push(skb, len), decap_hdr, len);
+			memcpy(skb_push(skb, hdr_len), hdr, hdr_len);
+			break;
+		case RX_MSDU_DECAP_8023_SNAP_LLC:
+			/* insert decapped 802.11 header making a single
+			 * A-MSDU */
+			memcpy(skb_push(skb, hdr_len), hdr, hdr_len);
+			break;
 		}
 
+		info->skb = skb;
+		info->encrypt_type = enctype;
 		skb = skb->next;
-	}
+		info->skb->next = NULL;
 
-	info->skb = amsdu;
-	info->encrypt_type = enctype;
-
-	ath10k_htt_rx_free_msdu_chain(first);
-
-	return 0;
+		ath10k_process_rx(htt->ar, info);
+	}
+
+	/* FIXME: It might be nice to re-assemble the A-MSDU when there's a
+	 * monitor interface active for sniffing purposes. */
 }
 
-static int ath10k_htt_rx_msdu(struct ath10k_htt *htt, struct htt_rx_info *info)
+static void ath10k_htt_rx_msdu(struct ath10k_htt *htt, struct htt_rx_info *info)
 {
 	struct sk_buff *skb = info->skb;
 	struct htt_rx_desc *rxd;
 	struct ieee80211_hdr *hdr;
 	enum rx_msdu_decap_format fmt;
 	enum htt_rx_mpdu_encrypt_type enctype;
+	int hdr_len;
+	void *rfc1042;
 
 	/* This shouldn't happen. If it does then it may be a FW bug. */
 	if (skb->next) {
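The retry-flag trick above hinges on frame_control being a little-endian 16-bit bitfield, so a single mask clears the bit for every subframe sharing the sequence number. In isolation (plain C; 0x0800 is the IEEE80211_FCTL_RETRY bit):

    #include <stdint.h>
    #include <stdio.h>

    #define FCTL_RETRY 0x0800   /* IEEE80211_FCTL_RETRY */

    int main(void)
    {
            uint16_t fc = 0x8841 | FCTL_RETRY;  /* made-up QoS data frame */

            fc &= (uint16_t)~FCTL_RETRY;        /* per-subframe clearing */
            printf("fc=0x%04x retry=%d\n", fc, !!(fc & FCTL_RETRY));
            return 0;
    }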
@@ -731,49 +775,53 @@ static int ath10k_htt_rx_msdu(struct ath10k_htt *htt, struct htt_rx_info *info)
 		 RX_MSDU_START_INFO1_DECAP_FORMAT);
 	enctype = MS(__le32_to_cpu(rxd->mpdu_start.info0),
 		     RX_MPDU_START_INFO0_ENCRYPT_TYPE);
-	hdr = (void *)skb->data - RX_HTT_HDR_STATUS_LEN;
+	hdr = (struct ieee80211_hdr *)rxd->rx_hdr_status;
+	hdr_len = ieee80211_hdrlen(hdr->frame_control);
+
+	skb->ip_summed = ath10k_htt_rx_get_csum_state(skb);
 
 	switch (fmt) {
 	case RX_MSDU_DECAP_RAW:
 		/* remove trailing FCS */
-		skb_trim(skb, skb->len - 4);
+		skb_trim(skb, skb->len - FCS_LEN);
 		break;
 	case RX_MSDU_DECAP_NATIVE_WIFI:
-		/* nothing to do here */
+		/* Pull decapped header */
+		hdr = (struct ieee80211_hdr *)skb->data;
+		hdr_len = ieee80211_hdrlen(hdr->frame_control);
+		skb_pull(skb, hdr_len);
+
+		/* Push original header */
+		hdr = (struct ieee80211_hdr *)rxd->rx_hdr_status;
+		hdr_len = ieee80211_hdrlen(hdr->frame_control);
+		memcpy(skb_push(skb, hdr_len), hdr, hdr_len);
 		break;
 	case RX_MSDU_DECAP_ETHERNET2_DIX:
-		/* macaddr[6] + macaddr[6] + ethertype[2] */
-		skb_pull(skb, 6 + 6 + 2);
-		break;
-	case RX_MSDU_DECAP_8023_SNAP_LLC:
-		/* macaddr[6] + macaddr[6] + len[2] */
-		/* we don't need this for non-A-MSDU */
-		skb_pull(skb, 6 + 6 + 2);
-		break;
-	}
+		/* strip ethernet header and insert decapped 802.11 header and
+		 * rfc1042 header */
 
-	if (fmt == RX_MSDU_DECAP_ETHERNET2_DIX) {
-		void *llc;
-		int llclen;
+		rfc1042 = hdr;
+		rfc1042 += roundup(hdr_len, 4);
+		rfc1042 += roundup(ath10k_htt_rx_crypto_param_len(enctype), 4);
 
-		llclen = 8;
-		llc = hdr;
-		llc += roundup(ieee80211_hdrlen(hdr->frame_control), 4);
-		llc += roundup(ath10k_htt_rx_crypto_param_len(enctype), 4);
-
-		skb_push(skb, llclen);
-		memcpy(skb->data, llc, llclen);
-	}
+		skb_pull(skb, sizeof(struct ethhdr));
+		memcpy(skb_push(skb, sizeof(struct rfc1042_hdr)),
+		       rfc1042, sizeof(struct rfc1042_hdr));
+		memcpy(skb_push(skb, hdr_len), hdr, hdr_len);
+		break;
+	case RX_MSDU_DECAP_8023_SNAP_LLC:
+		/* remove A-MSDU subframe header and insert
+		 * decapped 802.11 header. rfc1042 header is already there */
 
-	if (fmt >= RX_MSDU_DECAP_ETHERNET2_DIX) {
-		int len = ieee80211_hdrlen(hdr->frame_control);
-		skb_push(skb, len);
-		memcpy(skb->data, hdr, len);
-	}
+		skb_pull(skb, sizeof(struct amsdu_subframe_hdr));
+		memcpy(skb_push(skb, hdr_len), hdr, hdr_len);
+		break;
+	}
 
 	info->skb = skb;
 	info->encrypt_type = enctype;
-	return 0;
+
+	ath10k_process_rx(htt->ar, info);
 }
 
 static bool ath10k_htt_rx_has_decrypt_err(struct sk_buff *skb)
@@ -845,8 +893,6 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
 	int fw_desc_len;
 	u8 *fw_desc;
 	int i, j;
-	int ret;
-	int ip_summed;
 
 	memset(&info, 0, sizeof(info));
 
@@ -921,11 +967,6 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
 			continue;
 		}
 
-		/* The skb is not yet processed and it may be
-		 * reallocated. Since the offload is in the original
-		 * skb extract the checksum now and assign it later */
-		ip_summed = ath10k_htt_rx_get_csum_state(msdu_head);
-
 		info.skb = msdu_head;
 		info.fcs_err = ath10k_htt_rx_has_fcs_err(msdu_head);
 		info.signal = ATH10K_DEFAULT_NOISE_FLOOR;
@@ -938,28 +979,13 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
 		hdr = ath10k_htt_rx_skb_get_hdr(msdu_head);
 
 		if (ath10k_htt_rx_hdr_is_amsdu(hdr))
-			ret = ath10k_htt_rx_amsdu(htt, &info);
+			ath10k_htt_rx_amsdu(htt, &info);
 		else
-			ret = ath10k_htt_rx_msdu(htt, &info);
-
-		if (ret && !info.fcs_err) {
-			ath10k_warn("error processing msdus %d\n", ret);
-			dev_kfree_skb_any(info.skb);
-			continue;
-		}
-
-		if (ath10k_htt_rx_hdr_is_amsdu((void *)info.skb->data))
-			ath10k_dbg(ATH10K_DBG_HTT, "htt mpdu is amsdu\n");
-
-		info.skb->ip_summed = ip_summed;
-
-		ath10k_dbg_dump(ATH10K_DBG_HTT_DUMP, NULL, "htt mpdu: ",
-				info.skb->data, info.skb->len);
-		ath10k_process_rx(htt->ar, &info);
+			ath10k_htt_rx_msdu(htt, &info);
 		}
 	}
 
-	ath10k_htt_rx_msdu_buff_replenish(htt);
+	tasklet_schedule(&htt->rx_replenish_task);
 }
 
 static void ath10k_htt_rx_frag_handler(struct ath10k_htt *htt,
@@ -1131,7 +1157,7 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
 		break;
 	}
 
-		ath10k_txrx_tx_completed(htt, &tx_done);
+		ath10k_txrx_tx_unref(htt, &tx_done);
 		break;
 	}
 	case HTT_T2H_MSG_TYPE_TX_COMPL_IND: {
@@ -1165,7 +1191,7 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
 		for (i = 0; i < resp->data_tx_completion.num_msdus; i++) {
 			msdu_id = resp->data_tx_completion.msdus[i];
 			tx_done.msdu_id = __le16_to_cpu(msdu_id);
-			ath10k_txrx_tx_completed(htt, &tx_done);
+			ath10k_txrx_tx_unref(htt, &tx_done);
 		}
 		break;
 	}
@@ -1190,8 +1216,10 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
 	case HTT_T2H_MSG_TYPE_TEST:
 		/* FIX THIS */
 		break;
-	case HTT_T2H_MSG_TYPE_TX_INSPECT_IND:
 	case HTT_T2H_MSG_TYPE_STATS_CONF:
+		trace_ath10k_htt_stats(skb->data, skb->len);
+		break;
+	case HTT_T2H_MSG_TYPE_TX_INSPECT_IND:
 	case HTT_T2H_MSG_TYPE_RX_ADDBA:
 	case HTT_T2H_MSG_TYPE_RX_DELBA:
 	case HTT_T2H_MSG_TYPE_RX_FLUSH:
diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
index 656c2546b294..d9335e9d0d04 100644
--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
@@ -96,7 +96,7 @@ int ath10k_htt_tx_attach(struct ath10k_htt *htt)
 	htt->max_num_pending_tx = ath10k_hif_get_free_queue_number(htt->ar,
 								   pipe);
 
-	ath10k_dbg(ATH10K_DBG_HTT, "htt tx max num pending tx %d\n",
+	ath10k_dbg(ATH10K_DBG_BOOT, "htt tx max num pending tx %d\n",
 		   htt->max_num_pending_tx);
 
 	htt->pending_tx = kzalloc(sizeof(*htt->pending_tx) *
@@ -117,7 +117,7 @@ int ath10k_htt_tx_attach(struct ath10k_htt *htt)
 
 static void ath10k_htt_tx_cleanup_pending(struct ath10k_htt *htt)
 {
-	struct sk_buff *txdesc;
+	struct htt_tx_done tx_done = {0};
 	int msdu_id;
 
 	/* No locks needed. Called after communication with the device has
@@ -127,18 +127,13 @@ static void ath10k_htt_tx_cleanup_pending(struct ath10k_htt *htt)
 		if (!test_bit(msdu_id, htt->used_msdu_ids))
 			continue;
 
-		txdesc = htt->pending_tx[msdu_id];
-		if (!txdesc)
-			continue;
-
 		ath10k_dbg(ATH10K_DBG_HTT, "force cleanup msdu_id %hu\n",
 			   msdu_id);
 
-		if (ATH10K_SKB_CB(txdesc)->htt.refcount > 0)
-			ATH10K_SKB_CB(txdesc)->htt.refcount = 1;
+		tx_done.discard = 1;
+		tx_done.msdu_id = msdu_id;
 
-		ATH10K_SKB_CB(txdesc)->htt.discard = true;
-		ath10k_txrx_tx_unref(htt, txdesc);
+		ath10k_txrx_tx_unref(htt, &tx_done);
 	}
 }
 
@@ -152,26 +147,7 @@ void ath10k_htt_tx_detach(struct ath10k_htt *htt)
 
 void ath10k_htt_htc_tx_complete(struct ath10k *ar, struct sk_buff *skb)
 {
-	struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(skb);
-	struct ath10k_htt *htt = &ar->htt;
-
-	if (skb_cb->htt.is_conf) {
-		dev_kfree_skb_any(skb);
-		return;
-	}
-
-	if (skb_cb->is_aborted) {
-		skb_cb->htt.discard = true;
-
-		/* if the skbuff is aborted we need to make sure we'll free up
-		 * the tx resources, we can't simply run tx_unref() 2 times
-		 * because if htt tx completion came in earlier we'd access
-		 * unallocated memory */
-		if (skb_cb->htt.refcount > 1)
-			skb_cb->htt.refcount = 1;
-	}
-
-	ath10k_txrx_tx_unref(htt, skb);
+	dev_kfree_skb_any(skb);
 }
 
 int ath10k_htt_h2t_ver_req_msg(struct ath10k_htt *htt)
@@ -192,10 +168,48 @@ int ath10k_htt_h2t_ver_req_msg(struct ath10k_htt *htt)
 	cmd = (struct htt_cmd *)skb->data;
 	cmd->hdr.msg_type = HTT_H2T_MSG_TYPE_VERSION_REQ;
 
-	ATH10K_SKB_CB(skb)->htt.is_conf = true;
+	ret = ath10k_htc_send(&htt->ar->htc, htt->eid, skb);
+	if (ret) {
+		dev_kfree_skb_any(skb);
+		return ret;
+	}
+
+	return 0;
+}
+
+int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u8 mask, u64 cookie)
+{
+	struct htt_stats_req *req;
+	struct sk_buff *skb;
+	struct htt_cmd *cmd;
+	int len = 0, ret;
+
+	len += sizeof(cmd->hdr);
+	len += sizeof(cmd->stats_req);
+
+	skb = ath10k_htc_alloc_skb(len);
+	if (!skb)
+		return -ENOMEM;
+
+	skb_put(skb, len);
+	cmd = (struct htt_cmd *)skb->data;
+	cmd->hdr.msg_type = HTT_H2T_MSG_TYPE_STATS_REQ;
+
+	req = &cmd->stats_req;
+
+	memset(req, 0, sizeof(*req));
+
+	/* currently we support only max 8 bit masks so no need to worry
+	 * about endian support */
+	req->upload_types[0] = mask;
+	req->reset_types[0] = mask;
+	req->stat_type = HTT_STATS_REQ_CFG_STAT_TYPE_INVALID;
+	req->cookie_lsb = cpu_to_le32(cookie & 0xffffffff);
+	req->cookie_msb = cpu_to_le32((cookie & 0xffffffff00000000ULL) >> 32);
 
 	ret = ath10k_htc_send(&htt->ar->htc, htt->eid, skb);
 	if (ret) {
+		ath10k_warn("failed to send htt type stats request: %d", ret);
 		dev_kfree_skb_any(skb);
 		return ret;
 	}
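The cookie handling in the new stats request is a plain 64-to-2x32 split so the value can round-trip through two LE32 wire fields and come back intact in the STATS_CONF reply. A minimal check of that round trip:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t cookie = 0x1122334455667788ULL;
            uint32_t lsb = (uint32_t)(cookie & 0xffffffff);
            uint32_t msb = (uint32_t)(cookie >> 32);
            uint64_t back = ((uint64_t)msb << 32) | lsb;

            printf("lsb=0x%08x msb=0x%08x match=%d\n",
                   lsb, msb, back == cookie);
            return 0;
    }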
@@ -279,8 +293,6 @@ int ath10k_htt_send_rx_ring_cfg_ll(struct ath10k_htt *htt)
 
 #undef desc_offset
 
-	ATH10K_SKB_CB(skb)->htt.is_conf = true;
-
 	ret = ath10k_htc_send(&htt->ar->htc, htt->eid, skb);
 	if (ret) {
 		dev_kfree_skb_any(skb);
@@ -293,10 +305,10 @@ int ath10k_htt_send_rx_ring_cfg_ll(struct ath10k_htt *htt)
 int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
 {
 	struct device *dev = htt->ar->dev;
-	struct ath10k_skb_cb *skb_cb;
 	struct sk_buff *txdesc = NULL;
 	struct htt_cmd *cmd;
-	u8 vdev_id = ATH10K_SKB_CB(msdu)->htt.vdev_id;
+	struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(msdu);
+	u8 vdev_id = skb_cb->vdev_id;
 	int len = 0;
 	int msdu_id = -1;
 	int res;
@@ -304,30 +316,30 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
 
 	res = ath10k_htt_tx_inc_pending(htt);
 	if (res)
-		return res;
+		goto err;
 
 	len += sizeof(cmd->hdr);
 	len += sizeof(cmd->mgmt_tx);
 
-	txdesc = ath10k_htc_alloc_skb(len);
-	if (!txdesc) {
-		res = -ENOMEM;
-		goto err;
-	}
-
 	spin_lock_bh(&htt->tx_lock);
-	msdu_id = ath10k_htt_tx_alloc_msdu_id(htt);
-	if (msdu_id < 0) {
+	res = ath10k_htt_tx_alloc_msdu_id(htt);
+	if (res < 0) {
 		spin_unlock_bh(&htt->tx_lock);
-		res = msdu_id;
-		goto err;
+		goto err_tx_dec;
 	}
-	htt->pending_tx[msdu_id] = txdesc;
+	msdu_id = res;
+	htt->pending_tx[msdu_id] = msdu;
 	spin_unlock_bh(&htt->tx_lock);
 
+	txdesc = ath10k_htc_alloc_skb(len);
+	if (!txdesc) {
+		res = -ENOMEM;
+		goto err_free_msdu_id;
+	}
+
 	res = ath10k_skb_map(dev, msdu);
 	if (res)
-		goto err;
+		goto err_free_txdesc;
 
 	skb_put(txdesc, len);
 	cmd = (struct htt_cmd *)txdesc->data;
@@ -339,31 +351,27 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
 	memcpy(cmd->mgmt_tx.hdr, msdu->data,
 	       min_t(int, msdu->len, HTT_MGMT_FRM_HDR_DOWNLOAD_LEN));
 
-	/* refcount is decremented by HTC and HTT completions until it reaches
-	 * zero and is freed */
-	skb_cb = ATH10K_SKB_CB(txdesc);
-	skb_cb->htt.msdu_id = msdu_id;
-	skb_cb->htt.refcount = 2;
-	skb_cb->htt.msdu = msdu;
+	skb_cb->htt.frag_len = 0;
+	skb_cb->htt.pad_len = 0;
 
 	res = ath10k_htc_send(&htt->ar->htc, htt->eid, txdesc);
 	if (res)
-		goto err;
+		goto err_unmap_msdu;
 
 	return 0;
 
-err:
+err_unmap_msdu:
 	ath10k_skb_unmap(dev, msdu);
-
-	if (txdesc)
-		dev_kfree_skb_any(txdesc);
-	if (msdu_id >= 0) {
-		spin_lock_bh(&htt->tx_lock);
-		htt->pending_tx[msdu_id] = NULL;
-		ath10k_htt_tx_free_msdu_id(htt, msdu_id);
-		spin_unlock_bh(&htt->tx_lock);
-	}
+err_free_txdesc:
+	dev_kfree_skb_any(txdesc);
+err_free_msdu_id:
+	spin_lock_bh(&htt->tx_lock);
+	htt->pending_tx[msdu_id] = NULL;
+	ath10k_htt_tx_free_msdu_id(htt, msdu_id);
+	spin_unlock_bh(&htt->tx_lock);
+err_tx_dec:
	ath10k_htt_tx_dec_pending(htt);
+err:
 	return res;
 }
 
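The rewritten error path is the standard kernel unwind ladder: each label undoes exactly one acquisition, in reverse order, so every failure point jumps to the first label whose resource is still held. The shape in a standalone sketch (hypothetical resources standing in for the msdu id and tx descriptor):

    #include <stdlib.h>

    static int do_send(void) { return -1; }     /* force the error path */

    static int tx_one(void)
    {
            int res = -1;
            void *id, *desc;

            id = malloc(16);            /* stands in for msdu id alloc */
            if (!id)
                    goto err;

            desc = malloc(64);          /* stands in for txdesc alloc */
            if (!desc)
                    goto err_free_id;

            res = do_send();
            if (res)
                    goto err_free_desc;
            return 0;

    err_free_desc:
            free(desc);
    err_free_id:
            free(id);
    err:
            return res;
    }

    int main(void) { return tx_one() ? 1 : 0; }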
@@ -373,13 +381,12 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
 	struct htt_cmd *cmd;
 	struct htt_data_tx_desc_frag *tx_frags;
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)msdu->data;
-	struct ath10k_skb_cb *skb_cb;
+	struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(msdu);
 	struct sk_buff *txdesc = NULL;
-	struct sk_buff *txfrag = NULL;
-	u8 vdev_id = ATH10K_SKB_CB(msdu)->htt.vdev_id;
+	bool use_frags;
+	u8 vdev_id = ATH10K_SKB_CB(msdu)->vdev_id;
 	u8 tid;
-	int prefetch_len, desc_len, frag_len;
-	dma_addr_t frags_paddr;
+	int prefetch_len, desc_len;
 	int msdu_id = -1;
 	int res;
 	u8 flags0;
@@ -387,69 +394,82 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
 
 	res = ath10k_htt_tx_inc_pending(htt);
 	if (res)
-		return res;
+		goto err;
+
+	spin_lock_bh(&htt->tx_lock);
+	res = ath10k_htt_tx_alloc_msdu_id(htt);
+	if (res < 0) {
+		spin_unlock_bh(&htt->tx_lock);
+		goto err_tx_dec;
+	}
+	msdu_id = res;
+	htt->pending_tx[msdu_id] = msdu;
+	spin_unlock_bh(&htt->tx_lock);
 
 	prefetch_len = min(htt->prefetch_len, msdu->len);
 	prefetch_len = roundup(prefetch_len, 4);
 
 	desc_len = sizeof(cmd->hdr) + sizeof(cmd->data_tx) + prefetch_len;
-	frag_len = sizeof(*tx_frags) * 2;
 
 	txdesc = ath10k_htc_alloc_skb(desc_len);
 	if (!txdesc) {
 		res = -ENOMEM;
-		goto err;
+		goto err_free_msdu_id;
 	}
 
-	txfrag = dev_alloc_skb(frag_len);
-	if (!txfrag) {
-		res = -ENOMEM;
-		goto err;
-	}
+	/* Since HTT 3.0 there is no separate mgmt tx command. However in case
+	 * of mgmt tx using TX_FRM there is no tx fragment list. Instead of a
+	 * tx fragment list the host driver specifies the frame pointer
+	 * directly. */
+	use_frags = htt->target_version_major < 3 ||
+		    !ieee80211_is_mgmt(hdr->frame_control);
 
 	if (!IS_ALIGNED((unsigned long)txdesc->data, 4)) {
 		ath10k_warn("htt alignment check failed. dropping packet.\n");
 		res = -EIO;
-		goto err;
+		goto err_free_txdesc;
 	}
 
-	spin_lock_bh(&htt->tx_lock);
-	msdu_id = ath10k_htt_tx_alloc_msdu_id(htt);
-	if (msdu_id < 0) {
-		spin_unlock_bh(&htt->tx_lock);
-		res = msdu_id;
-		goto err;
+	if (use_frags) {
+		skb_cb->htt.frag_len = sizeof(*tx_frags) * 2;
+		skb_cb->htt.pad_len = (unsigned long)msdu->data -
+				      round_down((unsigned long)msdu->data, 4);
+
+		skb_push(msdu, skb_cb->htt.frag_len + skb_cb->htt.pad_len);
+	} else {
+		skb_cb->htt.frag_len = 0;
+		skb_cb->htt.pad_len = 0;
 	}
-	htt->pending_tx[msdu_id] = txdesc;
-	spin_unlock_bh(&htt->tx_lock);
 
 	res = ath10k_skb_map(dev, msdu);
 	if (res)
-		goto err;
+		goto err_pull_txfrag;
 
-	/* tx fragment list must be terminated with zero-entry */
-	skb_put(txfrag, frag_len);
-	tx_frags = (struct htt_data_tx_desc_frag *)txfrag->data;
-	tx_frags[0].paddr = __cpu_to_le32(ATH10K_SKB_CB(msdu)->paddr);
-	tx_frags[0].len = __cpu_to_le32(msdu->len);
-	tx_frags[1].paddr = __cpu_to_le32(0);
-	tx_frags[1].len = __cpu_to_le32(0);
-
-	res = ath10k_skb_map(dev, txfrag);
-	if (res)
-		goto err;
+	if (use_frags) {
+		dma_sync_single_for_cpu(dev, skb_cb->paddr, msdu->len,
+					DMA_TO_DEVICE);
+
+		/* tx fragment list must be terminated with zero-entry */
+		tx_frags = (struct htt_data_tx_desc_frag *)msdu->data;
+		tx_frags[0].paddr = __cpu_to_le32(skb_cb->paddr +
+						  skb_cb->htt.frag_len +
+						  skb_cb->htt.pad_len);
+		tx_frags[0].len = __cpu_to_le32(msdu->len -
+						skb_cb->htt.frag_len -
+						skb_cb->htt.pad_len);
+		tx_frags[1].paddr = __cpu_to_le32(0);
+		tx_frags[1].len = __cpu_to_le32(0);
+
+		dma_sync_single_for_device(dev, skb_cb->paddr, msdu->len,
+					   DMA_TO_DEVICE);
+	}
 
-	ath10k_dbg(ATH10K_DBG_HTT, "txfrag 0x%llx msdu 0x%llx\n",
-		   (unsigned long long) ATH10K_SKB_CB(txfrag)->paddr,
+	ath10k_dbg(ATH10K_DBG_HTT, "msdu 0x%llx\n",
 		   (unsigned long long) ATH10K_SKB_CB(msdu)->paddr);
-	ath10k_dbg_dump(ATH10K_DBG_HTT_DUMP, NULL, "txfrag: ",
-			txfrag->data, frag_len);
 	ath10k_dbg_dump(ATH10K_DBG_HTT_DUMP, NULL, "msdu: ",
 			msdu->data, msdu->len);
 
 	skb_put(txdesc, desc_len);
 	cmd = (struct htt_cmd *)txdesc->data;
-	memset(cmd, 0, desc_len);
 
 	tid = ATH10K_SKB_CB(msdu)->htt.tid;
 
@@ -459,8 +479,13 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
 	if (!ieee80211_has_protected(hdr->frame_control))
 		flags0 |= HTT_DATA_TX_DESC_FLAGS0_NO_ENCRYPT;
 	flags0 |= HTT_DATA_TX_DESC_FLAGS0_MAC_HDR_PRESENT;
-	flags0 |= SM(ATH10K_HW_TXRX_NATIVE_WIFI,
-		     HTT_DATA_TX_DESC_FLAGS0_PKT_TYPE);
+
+	if (use_frags)
+		flags0 |= SM(ATH10K_HW_TXRX_NATIVE_WIFI,
+			     HTT_DATA_TX_DESC_FLAGS0_PKT_TYPE);
+	else
+		flags0 |= SM(ATH10K_HW_TXRX_MGMT,
+			     HTT_DATA_TX_DESC_FLAGS0_PKT_TYPE);
 
 	flags1 = 0;
 	flags1 |= SM((u16)vdev_id, HTT_DATA_TX_DESC_FLAGS1_VDEV_ID);
@@ -468,45 +493,37 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
 	flags1 |= HTT_DATA_TX_DESC_FLAGS1_CKSUM_L3_OFFLOAD;
 	flags1 |= HTT_DATA_TX_DESC_FLAGS1_CKSUM_L4_OFFLOAD;
 
-	frags_paddr = ATH10K_SKB_CB(txfrag)->paddr;
-
 	cmd->hdr.msg_type = HTT_H2T_MSG_TYPE_TX_FRM;
 	cmd->data_tx.flags0 = flags0;
 	cmd->data_tx.flags1 = __cpu_to_le16(flags1);
-	cmd->data_tx.len = __cpu_to_le16(msdu->len);
+	cmd->data_tx.len = __cpu_to_le16(msdu->len -
+					 skb_cb->htt.frag_len -
+					 skb_cb->htt.pad_len);
 	cmd->data_tx.id = __cpu_to_le16(msdu_id);
-	cmd->data_tx.frags_paddr = __cpu_to_le32(frags_paddr);
+	cmd->data_tx.frags_paddr = __cpu_to_le32(skb_cb->paddr);
 	cmd->data_tx.peerid = __cpu_to_le32(HTT_INVALID_PEERID);
 
-	memcpy(cmd->data_tx.prefetch, msdu->data, prefetch_len);
-
-	/* refcount is decremented by HTC and HTT completions until it reaches
-	 * zero and is freed */
-	skb_cb = ATH10K_SKB_CB(txdesc);
-	skb_cb->htt.msdu_id = msdu_id;
-	skb_cb->htt.refcount = 2;
-	skb_cb->htt.txfrag = txfrag;
-	skb_cb->htt.msdu = msdu;
+	memcpy(cmd->data_tx.prefetch, hdr, prefetch_len);
 
 	res = ath10k_htc_send(&htt->ar->htc, htt->eid, txdesc);
 	if (res)
-		goto err;
+		goto err_unmap_msdu;
 
 	return 0;
-err:
-	if (txfrag)
-		ath10k_skb_unmap(dev, txfrag);
-	if (txdesc)
-		dev_kfree_skb_any(txdesc);
-	if (txfrag)
-		dev_kfree_skb_any(txfrag);
-	if (msdu_id >= 0) {
-		spin_lock_bh(&htt->tx_lock);
-		htt->pending_tx[msdu_id] = NULL;
-		ath10k_htt_tx_free_msdu_id(htt, msdu_id);
-		spin_unlock_bh(&htt->tx_lock);
-	}
-	ath10k_htt_tx_dec_pending(htt);
+
+err_unmap_msdu:
 	ath10k_skb_unmap(dev, msdu);
+err_pull_txfrag:
+	skb_pull(msdu, skb_cb->htt.frag_len + skb_cb->htt.pad_len);
+err_free_txdesc:
+	dev_kfree_skb_any(txdesc);
+err_free_msdu_id:
+	spin_lock_bh(&htt->tx_lock);
+	htt->pending_tx[msdu_id] = NULL;
+	ath10k_htt_tx_free_msdu_id(htt, msdu_id);
+	spin_unlock_bh(&htt->tx_lock);
+err_tx_dec:
+	ath10k_htt_tx_dec_pending(htt);
+err:
 	return res;
 }
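Instead of a separately allocated txfrag skb, the fragment list now lives in the msdu's own headroom: frag_len bytes for the two entries plus pad_len so the list starts on a 4-byte boundary while the payload keeps its original address. The offset arithmetic in isolation (plain C, made-up address):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uintptr_t data = 0x1002;    /* pretend msdu->data */
            unsigned frag_len = 2 * 8;  /* two 8-byte frag entries */
            unsigned pad_len = data - (data & ~(uintptr_t)3);  /* round_down */

            /* after pushing frag_len + pad_len bytes, the frag list sits
             * 4-byte aligned in the headroom and the payload begins
             * frag_len + pad_len bytes later, at the original address */
            printf("push %u bytes, frag list at 0x%lx, payload at 0x%lx\n",
                   frag_len + pad_len,
                   (unsigned long)(data - frag_len - pad_len),
                   (unsigned long)data);
            return 0;
    }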
diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
index 44ed5af0a204..8aeb46d9b534 100644
--- a/drivers/net/wireless/ath/ath10k/hw.h
+++ b/drivers/net/wireless/ath/ath10k/hw.h
@@ -20,28 +20,37 @@
 
 #include "targaddrs.h"
 
-/* Supported FW version */
-#define SUPPORTED_FW_MAJOR 1
-#define SUPPORTED_FW_MINOR 0
-#define SUPPORTED_FW_RELEASE 0
-#define SUPPORTED_FW_BUILD 629
-
-/* QCA988X 1.0 definitions */
-#define QCA988X_HW_1_0_VERSION 0x4000002c
-#define QCA988X_HW_1_0_FW_DIR "ath10k/QCA988X/hw1.0"
-#define QCA988X_HW_1_0_FW_FILE "firmware.bin"
-#define QCA988X_HW_1_0_OTP_FILE "otp.bin"
-#define QCA988X_HW_1_0_BOARD_DATA_FILE "board.bin"
-#define QCA988X_HW_1_0_PATCH_LOAD_ADDR 0x1234
+/* QCA988X 1.0 definitions (unsupported) */
+#define QCA988X_HW_1_0_CHIP_ID_REV 0x0
 
 /* QCA988X 2.0 definitions */
 #define QCA988X_HW_2_0_VERSION 0x4100016c
+#define QCA988X_HW_2_0_CHIP_ID_REV 0x2
 #define QCA988X_HW_2_0_FW_DIR "ath10k/QCA988X/hw2.0"
 #define QCA988X_HW_2_0_FW_FILE "firmware.bin"
 #define QCA988X_HW_2_0_OTP_FILE "otp.bin"
 #define QCA988X_HW_2_0_BOARD_DATA_FILE "board.bin"
 #define QCA988X_HW_2_0_PATCH_LOAD_ADDR 0x1234
 
+#define ATH10K_FW_API2_FILE "firmware-2.bin"
+
+/* includes also the null byte */
+#define ATH10K_FIRMWARE_MAGIC "QCA-ATH10K"
+
+struct ath10k_fw_ie {
+	__le32 id;
+	__le32 len;
+	u8 data[0];
+};
+
+enum ath10k_fw_ie_type {
+	ATH10K_FW_IE_FW_VERSION = 0,
+	ATH10K_FW_IE_TIMESTAMP = 1,
+	ATH10K_FW_IE_FEATURES = 2,
+	ATH10K_FW_IE_FW_IMAGE = 3,
+	ATH10K_FW_IE_OTP_IMAGE = 4,
+};
+
 /* Known peculiarities:
  * - current FW doesn't support raw rx mode (last tested v599)
  * - current FW dumps upon raw tx mode (last tested v599)
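The ath10k_fw_ie record declared above is a classic id/len/data TLV, which is what the new firmware-2.bin container is built from. A hypothetical userspace walk over such an image (assuming, per the comment, a null-terminated magic string up front and 4-byte record padding; this is a sketch, not the driver's actual loader):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct fw_ie {
            uint32_t id;        /* __le32 on the wire */
            uint32_t len;
            uint8_t data[];
    };

    static void walk(const uint8_t *buf, size_t size)
    {
            static const char magic[] = "QCA-ATH10K";
            size_t off = sizeof(magic);     /* includes the null byte */

            if (size < off || memcmp(buf, magic, off) != 0)
                    return;

            while (off + sizeof(struct fw_ie) <= size) {
                    const struct fw_ie *ie = (const void *)(buf + off);
                    size_t len = ie->len;   /* le32_to_cpu() on BE hosts */

                    if (off + sizeof(*ie) + len > size)
                            break;          /* truncated record */
                    printf("ie id=%u len=%zu\n", ie->id, len);
                    off += sizeof(*ie) + ((len + 3) & ~(size_t)3);
            }
    }

    int main(void)
    {
            uint8_t img[32] = "QCA-ATH10K";

            walk(img, sizeof(img));
            return 0;
    }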
@@ -53,6 +62,9 @@ enum ath10k_hw_txrx_mode {
 	ATH10K_HW_TXRX_RAW = 0,
 	ATH10K_HW_TXRX_NATIVE_WIFI = 1,
 	ATH10K_HW_TXRX_ETHERNET = 2,
+
+	/* Valid for HTT >= 3.0. Used for management frames in TX_FRM. */
+	ATH10K_HW_TXRX_MGMT = 3,
 };
 
 enum ath10k_mcast2ucast_mode {
@@ -60,6 +72,7 @@ enum ath10k_mcast2ucast_mode {
 	ATH10K_MCAST2UCAST_ENABLED = 1,
 };
 
+/* Target specific defines for MAIN firmware */
 #define TARGET_NUM_VDEVS 8
 #define TARGET_NUM_PEER_AST 2
 #define TARGET_NUM_WDS_ENTRIES 32
@@ -75,7 +88,11 @@ enum ath10k_mcast2ucast_mode {
 #define TARGET_RX_CHAIN_MASK (BIT(0) | BIT(1) | BIT(2))
 #define TARGET_RX_TIMEOUT_LO_PRI 100
 #define TARGET_RX_TIMEOUT_HI_PRI 40
-#define TARGET_RX_DECAP_MODE ATH10K_HW_TXRX_ETHERNET
+
+/* Native Wifi decap mode is used to align IP frames to 4-byte boundaries and
+ * avoid a very expensive re-alignment in mac80211. */
+#define TARGET_RX_DECAP_MODE ATH10K_HW_TXRX_NATIVE_WIFI
+
 #define TARGET_SCAN_MAX_PENDING_REQS 4
 #define TARGET_BMISS_OFFLOAD_MAX_VDEV 3
 #define TARGET_ROAM_OFFLOAD_MAX_VDEV 3
@@ -90,6 +107,36 @@ enum ath10k_mcast2ucast_mode {
 #define TARGET_NUM_MSDU_DESC (1024 + 400)
 #define TARGET_MAX_FRAG_ENTRIES 0
 
+/* Target specific defines for 10.X firmware */
+#define TARGET_10X_NUM_VDEVS 16
+#define TARGET_10X_NUM_PEER_AST 2
+#define TARGET_10X_NUM_WDS_ENTRIES 32
+#define TARGET_10X_DMA_BURST_SIZE 0
+#define TARGET_10X_MAC_AGGR_DELIM 0
+#define TARGET_10X_AST_SKID_LIMIT 16
+#define TARGET_10X_NUM_PEERS (128 + (TARGET_10X_NUM_VDEVS))
+#define TARGET_10X_NUM_OFFLOAD_PEERS 0
+#define TARGET_10X_NUM_OFFLOAD_REORDER_BUFS 0
+#define TARGET_10X_NUM_PEER_KEYS 2
+#define TARGET_10X_NUM_TIDS 256
+#define TARGET_10X_TX_CHAIN_MASK (BIT(0) | BIT(1) | BIT(2))
+#define TARGET_10X_RX_CHAIN_MASK (BIT(0) | BIT(1) | BIT(2))
+#define TARGET_10X_RX_TIMEOUT_LO_PRI 100
+#define TARGET_10X_RX_TIMEOUT_HI_PRI 40
+#define TARGET_10X_RX_DECAP_MODE ATH10K_HW_TXRX_NATIVE_WIFI
+#define TARGET_10X_SCAN_MAX_PENDING_REQS 4
+#define TARGET_10X_BMISS_OFFLOAD_MAX_VDEV 2
+#define TARGET_10X_ROAM_OFFLOAD_MAX_VDEV 2
+#define TARGET_10X_ROAM_OFFLOAD_MAX_AP_PROFILES 8
+#define TARGET_10X_GTK_OFFLOAD_MAX_VDEV 3
+#define TARGET_10X_NUM_MCAST_GROUPS 0
+#define TARGET_10X_NUM_MCAST_TABLE_ELEMS 0
+#define TARGET_10X_MCAST2UCAST_MODE ATH10K_MCAST2UCAST_DISABLED
+#define TARGET_10X_TX_DBG_LOG_SIZE 1024
+#define TARGET_10X_RX_SKIP_DEFRAG_TIMEOUT_DUP_DETECTION_CHECK 1
+#define TARGET_10X_VOW_CONFIG 0
+#define TARGET_10X_NUM_MSDU_DESC (1024 + 400)
+#define TARGET_10X_MAX_FRAG_ENTRIES 0
 
 /* Number of Copy Engines supported */
 #define CE_COUNT 8
@@ -169,6 +216,10 @@ enum ath10k_mcast2ucast_mode {
 #define SOC_LPO_CAL_ENABLE_LSB 20
 #define SOC_LPO_CAL_ENABLE_MASK 0x00100000
 
+#define SOC_CHIP_ID_ADDRESS 0x000000ec
+#define SOC_CHIP_ID_REV_LSB 8
+#define SOC_CHIP_ID_REV_MASK 0x00000f00
+
 #define WLAN_RESET_CONTROL_COLD_RST_MASK 0x00000008
 #define WLAN_RESET_CONTROL_WARM_RST_MASK 0x00000004
 #define WLAN_SYSTEM_SLEEP_DISABLE_LSB 0
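The new SOC_CHIP_ID defines follow the driver's usual LSB/MASK register idiom: read the 32-bit word at SOC_CHIP_ID_ADDRESS, mask the field, shift it down, then compare against the QCA988X_HW_*_CHIP_ID_REV values. The extraction by itself (made-up register value):

    #include <stdint.h>
    #include <stdio.h>

    #define SOC_CHIP_ID_REV_LSB  8
    #define SOC_CHIP_ID_REV_MASK 0x00000f00

    int main(void)
    {
            uint32_t chip_id = 0x043202ff;  /* pretend readl() result */
            uint32_t rev = (chip_id & SOC_CHIP_ID_REV_MASK) >>
                           SOC_CHIP_ID_REV_LSB;

            printf("chip rev 0x%x\n", rev); /* 0x2 -> QCA988X 2.0 */
            return 0;
    }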
diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
index cf2ba4d850c9..0b1cc516e778 100644
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -334,25 +334,29 @@ static int ath10k_peer_create(struct ath10k *ar, u32 vdev_id, const u8 *addr)
 
 static int ath10k_mac_set_rts(struct ath10k_vif *arvif, u32 value)
 {
+	struct ath10k *ar = arvif->ar;
+	u32 vdev_param;
+
 	if (value != 0xFFFFFFFF)
 		value = min_t(u32, arvif->ar->hw->wiphy->rts_threshold,
 			      ATH10K_RTS_MAX);
 
-	return ath10k_wmi_vdev_set_param(arvif->ar, arvif->vdev_id,
-					 WMI_VDEV_PARAM_RTS_THRESHOLD,
-					 value);
+	vdev_param = ar->wmi.vdev_param->rts_threshold;
+	return ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param, value);
 }
 
 static int ath10k_mac_set_frag(struct ath10k_vif *arvif, u32 value)
 {
+	struct ath10k *ar = arvif->ar;
+	u32 vdev_param;
+
 	if (value != 0xFFFFFFFF)
 		value = clamp_t(u32, arvif->ar->hw->wiphy->frag_threshold,
 				ATH10K_FRAGMT_THRESHOLD_MIN,
 				ATH10K_FRAGMT_THRESHOLD_MAX);
 
-	return ath10k_wmi_vdev_set_param(arvif->ar, arvif->vdev_id,
-					 WMI_VDEV_PARAM_FRAGMENTATION_THRESHOLD,
-					 value);
+	vdev_param = ar->wmi.vdev_param->fragmentation_threshold;
+	return ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param, value);
 }
 
 static int ath10k_peer_delete(struct ath10k *ar, u32 vdev_id, const u8 *addr)
@@ -460,6 +464,11 @@ static int ath10k_vdev_start(struct ath10k_vif *arvif)
 		arg.ssid_len = arvif->vif->bss_conf.ssid_len;
 	}
 
+	ath10k_dbg(ATH10K_DBG_MAC,
+		   "mac vdev %d start center_freq %d phymode %s\n",
+		   arg.vdev_id, arg.channel.freq,
+		   ath10k_wmi_phymode_str(arg.channel.mode));
+
 	ret = ath10k_wmi_vdev_start(ar, &arg);
 	if (ret) {
 		ath10k_warn("WMI vdev start failed: ret %d\n", ret);
@@ -503,13 +512,10 @@ static int ath10k_monitor_start(struct ath10k *ar, int vdev_id)
 {
 	struct ieee80211_channel *channel = ar->hw->conf.chandef.chan;
 	struct wmi_vdev_start_request_arg arg = {};
-	enum nl80211_channel_type type;
 	int ret = 0;
 
 	lockdep_assert_held(&ar->conf_mutex);
 
-	type = cfg80211_get_chandef_type(&ar->hw->conf.chandef);
-
 	arg.vdev_id = vdev_id;
 	arg.channel.freq = channel->center_freq;
 	arg.channel.band_center_freq1 = ar->hw->conf.chandef.center_freq1;
@@ -560,12 +566,9 @@ static int ath10k_monitor_stop(struct ath10k *ar)
 
 	lockdep_assert_held(&ar->conf_mutex);
 
-	/* For some reasons, ath10k_wmi_vdev_down() here couse
-	 * often ath10k_wmi_vdev_stop() to fail. Next we could
-	 * not run monitor vdev and driver reload
-	 * required. Don't see such problems we skip
-	 * ath10k_wmi_vdev_down() here.
-	 */
+	ret = ath10k_wmi_vdev_down(ar, ar->monitor_vdev_id);
+	if (ret)
+		ath10k_warn("Monitor vdev down failed: %d\n", ret);
 
 	ret = ath10k_wmi_vdev_stop(ar, ar->monitor_vdev_id);
 	if (ret)
@@ -607,7 +610,7 @@ static int ath10k_monitor_create(struct ath10k *ar)
 		goto vdev_fail;
 	}
 
-	ath10k_dbg(ATH10K_DBG_MAC, "Monitor interface created, vdev id: %d\n",
+	ath10k_dbg(ATH10K_DBG_MAC, "mac monitor vdev %d created\n",
 		   ar->monitor_vdev_id);
 
 	ar->monitor_present = true;
@@ -639,7 +642,7 @@ static int ath10k_monitor_destroy(struct ath10k *ar)
 	ar->free_vdev_map |= 1 << (ar->monitor_vdev_id);
 	ar->monitor_present = false;
 
-	ath10k_dbg(ATH10K_DBG_MAC, "Monitor interface destroyed, vdev id: %d\n",
+	ath10k_dbg(ATH10K_DBG_MAC, "mac monitor vdev %d deleted\n",
 		   ar->monitor_vdev_id);
 	return ret;
 }
@@ -668,13 +671,14 @@ static void ath10k_control_beaconing(struct ath10k_vif *arvif,
 			arvif->vdev_id);
 		return;
 	}
-	ath10k_dbg(ATH10K_DBG_MAC, "VDEV: %d up\n", arvif->vdev_id);
+	ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d up\n", arvif->vdev_id);
 }
 
 static void ath10k_control_ibss(struct ath10k_vif *arvif,
 				struct ieee80211_bss_conf *info,
 				const u8 self_peer[ETH_ALEN])
 {
+	u32 vdev_param;
 	int ret = 0;
 
 	lockdep_assert_held(&arvif->ar->conf_mutex);
@@ -708,8 +712,8 @@ static void ath10k_control_ibss(struct ath10k_vif *arvif,
 		return;
 	}
 
-	ret = ath10k_wmi_vdev_set_param(arvif->ar, arvif->vdev_id,
-					WMI_VDEV_PARAM_ATIM_WINDOW,
+	vdev_param = arvif->ar->wmi.vdev_param->atim_window;
+	ret = ath10k_wmi_vdev_set_param(arvif->ar, arvif->vdev_id, vdev_param,
 					ATH10K_DEFAULT_ATIM);
 	if (ret)
 		ath10k_warn("Failed to set IBSS ATIM for VDEV:%d ret:%d\n",
@@ -719,47 +723,45 @@ static void ath10k_control_ibss(struct ath10k_vif *arvif,
 /*
  * Review this when mac80211 gains per-interface powersave support.
  */
-static void ath10k_ps_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
+static int ath10k_mac_vif_setup_ps(struct ath10k_vif *arvif)
 {
-	struct ath10k_generic_iter *ar_iter = data;
-	struct ieee80211_conf *conf = &ar_iter->ar->hw->conf;
-	struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+	struct ath10k *ar = arvif->ar;
+	struct ieee80211_conf *conf = &ar->hw->conf;
 	enum wmi_sta_powersave_param param;
 	enum wmi_sta_ps_mode psmode;
 	int ret;
 
 	lockdep_assert_held(&arvif->ar->conf_mutex);
 
-	if (vif->type != NL80211_IFTYPE_STATION)
-		return;
+	if (arvif->vif->type != NL80211_IFTYPE_STATION)
+		return 0;
 
 	if (conf->flags & IEEE80211_CONF_PS) {
 		psmode = WMI_STA_PS_MODE_ENABLED;
 		param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
 
-		ret = ath10k_wmi_set_sta_ps_param(ar_iter->ar,
-						  arvif->vdev_id,
-						  param,
+		ret = ath10k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
 						  conf->dynamic_ps_timeout);
 		if (ret) {
 			ath10k_warn("Failed to set inactivity time for VDEV: %d\n",
 				    arvif->vdev_id);
-			return;
+			return ret;
 		}
-
-		ar_iter->ret = ret;
 	} else {
 		psmode = WMI_STA_PS_MODE_DISABLED;
 	}
 
-	ar_iter->ret = ath10k_wmi_set_psmode(ar_iter->ar, arvif->vdev_id,
-					     psmode);
-	if (ar_iter->ret)
+	ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d psmode %s\n",
+		   arvif->vdev_id, psmode ? "enable" : "disable");
+
+	ret = ath10k_wmi_set_psmode(ar, arvif->vdev_id, psmode);
+	if (ret) {
 		ath10k_warn("Failed to set PS Mode: %d for VDEV: %d\n",
 			    psmode, arvif->vdev_id);
-	else
-		ath10k_dbg(ATH10K_DBG_MAC, "Set PS Mode: %d for VDEV: %d\n",
-			   psmode, arvif->vdev_id);
+		return ret;
+	}
+
+	return 0;
 }
 
 /**********************/
@@ -949,7 +951,8 @@ static void ath10k_peer_assoc_h_ht(struct ath10k *ar,
 	arg->peer_ht_rates.num_rates = n;
 	arg->peer_num_spatial_streams = max((n+7) / 8, 1);
 
-	ath10k_dbg(ATH10K_DBG_MAC, "mcs cnt %d nss %d\n",
+	ath10k_dbg(ATH10K_DBG_MAC, "mac ht peer %pM mcs cnt %d nss %d\n",
+		   arg->addr,
 		   arg->peer_ht_rates.num_rates,
 		   arg->peer_num_spatial_streams);
 }
@@ -969,11 +972,11 @@ static void ath10k_peer_assoc_h_qos_ap(struct ath10k *ar,
 		arg->peer_flags |= WMI_PEER_QOS;
 
 	if (sta->wme && sta->uapsd_queues) {
-		ath10k_dbg(ATH10K_DBG_MAC, "uapsd_queues: 0x%X, max_sp: %d\n",
+		ath10k_dbg(ATH10K_DBG_MAC, "mac uapsd_queues 0x%x max_sp %d\n",
 			   sta->uapsd_queues, sta->max_sp);
 
 		arg->peer_flags |= WMI_PEER_APSD;
-		arg->peer_flags |= WMI_RC_UAPSD_FLAG;
+		arg->peer_rate_caps |= WMI_RC_UAPSD_FLAG;
 
 		if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VO)
 			uapsd |= WMI_AP_PS_UAPSD_AC3_DELIVERY_EN |
@@ -1028,14 +1031,27 @@ static void ath10k_peer_assoc_h_vht(struct ath10k *ar,
 				    struct wmi_peer_assoc_complete_arg *arg)
 {
 	const struct ieee80211_sta_vht_cap *vht_cap = &sta->vht_cap;
+	u8 ampdu_factor;
 
 	if (!vht_cap->vht_supported)
 		return;
 
 	arg->peer_flags |= WMI_PEER_VHT;
-
 	arg->peer_vht_caps = vht_cap->cap;
 
+
+	ampdu_factor = (vht_cap->cap &
+			IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK) >>
+		       IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT;
+
+	/* Workaround: Some Netgear/Linksys 11ac APs set Rx A-MPDU factor to
+	 * zero in VHT IE. Using it would result in degraded throughput.
+	 * arg->peer_max_mpdu at this point contains HT max_mpdu so keep
+	 * it if VHT max_mpdu is smaller. */
+	arg->peer_max_mpdu = max(arg->peer_max_mpdu,
+				 (1U << (IEEE80211_HT_MAX_AMPDU_FACTOR +
+					ampdu_factor)) - 1);
+
 	if (sta->bandwidth == IEEE80211_STA_RX_BW_80)
 		arg->peer_flags |= WMI_PEER_80MHZ;
 
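The workaround's arithmetic: an A-MPDU length exponent f advertises a limit of 2^(13+f)-1 bytes, so a buggy factor of 0 yields the VHT minimum and the max() keeps the previously computed HT limit instead. Worked through in plain C:

    #include <stdio.h>

    #define HT_MAX_AMPDU_FACTOR 13  /* IEEE80211_HT_MAX_AMPDU_FACTOR */

    int main(void)
    {
            unsigned ht_max_mpdu = (1U << (HT_MAX_AMPDU_FACTOR + 2)) - 1;
            unsigned vht_factor = 0;    /* buggy AP advertising zero */
            unsigned vht_max_mpdu =
                    (1U << (HT_MAX_AMPDU_FACTOR + vht_factor)) - 1;
            unsigned peer_max_mpdu = ht_max_mpdu > vht_max_mpdu ?
                                     ht_max_mpdu : vht_max_mpdu;

            /* 32767 (HT) wins over 8191 (bogus VHT) */
            printf("ht=%u vht=%u -> %u\n", ht_max_mpdu, vht_max_mpdu,
                   peer_max_mpdu);
            return 0;
    }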
@@ -1048,7 +1064,8 @@ static void ath10k_peer_assoc_h_vht(struct ath10k *ar,
1048 arg->peer_vht_rates.tx_mcs_set = 1064 arg->peer_vht_rates.tx_mcs_set =
1049 __le16_to_cpu(vht_cap->vht_mcs.tx_mcs_map); 1065 __le16_to_cpu(vht_cap->vht_mcs.tx_mcs_map);
1050 1066
1051 ath10k_dbg(ATH10K_DBG_MAC, "mac vht peer\n"); 1067 ath10k_dbg(ATH10K_DBG_MAC, "mac vht peer %pM max_mpdu %d flags 0x%x\n",
1068 sta->addr, arg->peer_max_mpdu, arg->peer_flags);
1052} 1069}
1053 1070
1054static void ath10k_peer_assoc_h_qos(struct ath10k *ar, 1071static void ath10k_peer_assoc_h_qos(struct ath10k *ar,
@@ -1076,8 +1093,6 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
1076{ 1093{
1077 enum wmi_phy_mode phymode = MODE_UNKNOWN; 1094 enum wmi_phy_mode phymode = MODE_UNKNOWN;
1078 1095
1079 /* FIXME: add VHT */
1080
1081 switch (ar->hw->conf.chandef.chan->band) { 1096 switch (ar->hw->conf.chandef.chan->band) {
1082 case IEEE80211_BAND_2GHZ: 1097 case IEEE80211_BAND_2GHZ:
1083 if (sta->ht_cap.ht_supported) { 1098 if (sta->ht_cap.ht_supported) {
@@ -1091,7 +1106,17 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
1091 1106
1092 break; 1107 break;
1093 case IEEE80211_BAND_5GHZ: 1108 case IEEE80211_BAND_5GHZ:
1094 if (sta->ht_cap.ht_supported) { 1109 /*
1110 * Check VHT first.
1111 */
1112 if (sta->vht_cap.vht_supported) {
1113 if (sta->bandwidth == IEEE80211_STA_RX_BW_80)
1114 phymode = MODE_11AC_VHT80;
1115 else if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
1116 phymode = MODE_11AC_VHT40;
1117 else if (sta->bandwidth == IEEE80211_STA_RX_BW_20)
1118 phymode = MODE_11AC_VHT20;
1119 } else if (sta->ht_cap.ht_supported) {
1095 if (sta->bandwidth == IEEE80211_STA_RX_BW_40) 1120 if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
1096 phymode = MODE_11NA_HT40; 1121 phymode = MODE_11NA_HT40;
1097 else 1122 else
@@ -1105,30 +1130,32 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
1105 break; 1130 break;
1106 } 1131 }
1107 1132
1133 ath10k_dbg(ATH10K_DBG_MAC, "mac peer %pM phymode %s\n",
1134 sta->addr, ath10k_wmi_phymode_str(phymode));
1135
1108 arg->peer_phymode = phymode; 1136 arg->peer_phymode = phymode;
1109 WARN_ON(phymode == MODE_UNKNOWN); 1137 WARN_ON(phymode == MODE_UNKNOWN);
1110} 1138}
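
Condensed, the 5 GHz branch now prefers VHT over HT over legacy, with the peer's advertised RX bandwidth choosing the channel width. A simplified sketch (hypothetical helper; the real code matches each bandwidth explicitly and lets unexpected values fall through to MODE_UNKNOWN, which the WARN_ON above catches):

	static enum wmi_phy_mode example_5ghz_phymode(struct ieee80211_sta *sta)
	{
		if (sta->vht_cap.vht_supported) {
			if (sta->bandwidth == IEEE80211_STA_RX_BW_80)
				return MODE_11AC_VHT80;
			if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
				return MODE_11AC_VHT40;
			return MODE_11AC_VHT20;
		}

		if (sta->ht_cap.ht_supported)
			return sta->bandwidth == IEEE80211_STA_RX_BW_40 ?
			       MODE_11NA_HT40 : MODE_11NA_HT20;

		return MODE_11A;
	}
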
1111 1139
1112static int ath10k_peer_assoc(struct ath10k *ar, 1140static int ath10k_peer_assoc_prepare(struct ath10k *ar,
1113 struct ath10k_vif *arvif, 1141 struct ath10k_vif *arvif,
1114 struct ieee80211_sta *sta, 1142 struct ieee80211_sta *sta,
1115 struct ieee80211_bss_conf *bss_conf) 1143 struct ieee80211_bss_conf *bss_conf,
1144 struct wmi_peer_assoc_complete_arg *arg)
1116{ 1145{
1117 struct wmi_peer_assoc_complete_arg arg;
1118
1119 lockdep_assert_held(&ar->conf_mutex); 1146 lockdep_assert_held(&ar->conf_mutex);
1120 1147
1121 memset(&arg, 0, sizeof(struct wmi_peer_assoc_complete_arg)); 1148 memset(arg, 0, sizeof(*arg));
1122 1149
1123 ath10k_peer_assoc_h_basic(ar, arvif, sta, bss_conf, &arg); 1150 ath10k_peer_assoc_h_basic(ar, arvif, sta, bss_conf, arg);
1124 ath10k_peer_assoc_h_crypto(ar, arvif, &arg); 1151 ath10k_peer_assoc_h_crypto(ar, arvif, arg);
1125 ath10k_peer_assoc_h_rates(ar, sta, &arg); 1152 ath10k_peer_assoc_h_rates(ar, sta, arg);
1126 ath10k_peer_assoc_h_ht(ar, sta, &arg); 1153 ath10k_peer_assoc_h_ht(ar, sta, arg);
1127 ath10k_peer_assoc_h_vht(ar, sta, &arg); 1154 ath10k_peer_assoc_h_vht(ar, sta, arg);
1128 ath10k_peer_assoc_h_qos(ar, arvif, sta, bss_conf, &arg); 1155 ath10k_peer_assoc_h_qos(ar, arvif, sta, bss_conf, arg);
1129 ath10k_peer_assoc_h_phymode(ar, arvif, sta, &arg); 1156 ath10k_peer_assoc_h_phymode(ar, arvif, sta, arg);
1130 1157
1131 return ath10k_wmi_peer_assoc(ar, &arg); 1158 return 0;
1132} 1159}
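
Splitting the old ath10k_peer_assoc() into a prepare step plus an explicit WMI call leaves callers in control of when the command is issued and lets them warn about the two failure modes separately. The resulting caller pattern, as used in the hunks below (sketch with error handling trimmed):

	struct wmi_peer_assoc_complete_arg peer_arg;
	int ret;

	ret = ath10k_peer_assoc_prepare(ar, arvif, sta, bss_conf, &peer_arg);
	if (!ret)
		ret = ath10k_wmi_peer_assoc(ar, &peer_arg); /* may sleep */
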
1133 1160
1134/* can be called only in mac80211 callbacks due to `key_count` usage */ 1161/* can be called only in mac80211 callbacks due to `key_count` usage */
@@ -1138,6 +1165,7 @@ static void ath10k_bss_assoc(struct ieee80211_hw *hw,
1138{ 1165{
1139 struct ath10k *ar = hw->priv; 1166 struct ath10k *ar = hw->priv;
1140 struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif); 1167 struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
1168 struct wmi_peer_assoc_complete_arg peer_arg;
1141 struct ieee80211_sta *ap_sta; 1169 struct ieee80211_sta *ap_sta;
1142 int ret; 1170 int ret;
1143 1171
@@ -1153,24 +1181,33 @@ static void ath10k_bss_assoc(struct ieee80211_hw *hw,
1153 return; 1181 return;
1154 } 1182 }
1155 1183
1156 ret = ath10k_peer_assoc(ar, arvif, ap_sta, bss_conf); 1184 ret = ath10k_peer_assoc_prepare(ar, arvif, ap_sta,
1185 bss_conf, &peer_arg);
1157 if (ret) { 1186 if (ret) {
1158 ath10k_warn("Peer assoc failed for %pM\n", bss_conf->bssid); 1187 ath10k_warn("Peer assoc prepare failed for %pM: %d\n",
1188 bss_conf->bssid, ret);
1159 rcu_read_unlock(); 1189 rcu_read_unlock();
1160 return; 1190 return;
1161 } 1191 }
1162 1192
1163 rcu_read_unlock(); 1193 rcu_read_unlock();
1164 1194
1195 ret = ath10k_wmi_peer_assoc(ar, &peer_arg);
1196 if (ret) {
1197 ath10k_warn("Peer assoc failed for %pM: %d\n",
1198 bss_conf->bssid, ret);
1199 return;
1200 }
1201
1202 ath10k_dbg(ATH10K_DBG_MAC,
1203 "mac vdev %d up (associated) bssid %pM aid %d\n",
1204 arvif->vdev_id, bss_conf->bssid, bss_conf->aid);
1205
1165 ret = ath10k_wmi_vdev_up(ar, arvif->vdev_id, bss_conf->aid, 1206 ret = ath10k_wmi_vdev_up(ar, arvif->vdev_id, bss_conf->aid,
1166 bss_conf->bssid); 1207 bss_conf->bssid);
1167 if (ret) 1208 if (ret)
1168 ath10k_warn("VDEV: %d up failed: ret %d\n", 1209 ath10k_warn("VDEV: %d up failed: ret %d\n",
1169 arvif->vdev_id, ret); 1210 arvif->vdev_id, ret);
1170 else
1171 ath10k_dbg(ATH10K_DBG_MAC,
1172 "VDEV: %d associated, BSSID: %pM, AID: %d\n",
1173 arvif->vdev_id, bss_conf->bssid, bss_conf->aid);
1174} 1211}
1175 1212
1176/* 1213/*
@@ -1191,10 +1228,11 @@ static void ath10k_bss_disassoc(struct ieee80211_hw *hw,
1191 * No idea why this happens, even though VDEV-DOWN is supposed 1228 * No idea why this happens, even though VDEV-DOWN is supposed
1192 * to be analogous to link down, so just stop the VDEV. 1229 * to be analogous to link down, so just stop the VDEV.
1193 */ 1230 */
1231 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d stop (disassociated)\n",
1232 arvif->vdev_id);
1233
1234 /* FIXME: check return value */
1194 ret = ath10k_vdev_stop(arvif); 1235 ret = ath10k_vdev_stop(arvif);
1195 if (!ret)
1196 ath10k_dbg(ATH10K_DBG_MAC, "VDEV: %d stopped\n",
1197 arvif->vdev_id);
1198 1236
1199 /* 1237 /*
1200 * If we don't call VDEV-DOWN after VDEV-STOP FW will remain active and 1238 * If we don't call VDEV-DOWN after VDEV-STOP FW will remain active and
@@ -1203,26 +1241,33 @@ static void ath10k_bss_disassoc(struct ieee80211_hw *hw,
1203 * interfaces as it expects there is no rx when no interface is 1241 * interfaces as it expects there is no rx when no interface is
1204 * running. 1242 * running.
1205 */ 1243 */
1206 ret = ath10k_wmi_vdev_down(ar, arvif->vdev_id); 1244 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d down\n", arvif->vdev_id);
1207 if (ret)
1208 ath10k_dbg(ATH10K_DBG_MAC, "VDEV: %d ath10k_wmi_vdev_down failed (%d)\n",
1209 arvif->vdev_id, ret);
1210 1245
1211 ath10k_wmi_flush_tx(ar); 1246 /* FIXME: why don't we print error if wmi call fails? */
1247 ret = ath10k_wmi_vdev_down(ar, arvif->vdev_id);
1212 1248
1213 arvif->def_wep_key_index = 0; 1249 arvif->def_wep_key_idx = 0;
1214} 1250}
1215 1251
1216static int ath10k_station_assoc(struct ath10k *ar, struct ath10k_vif *arvif, 1252static int ath10k_station_assoc(struct ath10k *ar, struct ath10k_vif *arvif,
1217 struct ieee80211_sta *sta) 1253 struct ieee80211_sta *sta)
1218{ 1254{
1255 struct wmi_peer_assoc_complete_arg peer_arg;
1219 int ret = 0; 1256 int ret = 0;
1220 1257
1221 lockdep_assert_held(&ar->conf_mutex); 1258 lockdep_assert_held(&ar->conf_mutex);
1222 1259
1223 ret = ath10k_peer_assoc(ar, arvif, sta, NULL); 1260 ret = ath10k_peer_assoc_prepare(ar, arvif, sta, NULL, &peer_arg);
1261 if (ret) {
1262 ath10k_warn("WMI peer assoc prepare failed for %pM\n",
1263 sta->addr);
1264 return ret;
1265 }
1266
1267 ret = ath10k_wmi_peer_assoc(ar, &peer_arg);
1224 if (ret) { 1268 if (ret) {
1225 ath10k_warn("WMI peer assoc failed for %pM\n", sta->addr); 1269 ath10k_warn("Peer assoc failed for STA %pM: %d\n",
1270 sta->addr, ret);
1226 return ret; 1271 return ret;
1227 } 1272 }
1228 1273
@@ -1333,8 +1378,8 @@ static int ath10k_update_channel_list(struct ath10k *ar)
1333 continue; 1378 continue;
1334 1379
1335 ath10k_dbg(ATH10K_DBG_WMI, 1380 ath10k_dbg(ATH10K_DBG_WMI,
1336 "%s: [%zd/%d] freq %d maxpower %d regpower %d antenna %d mode %d\n", 1381 "mac channel [%zd/%d] freq %d maxpower %d regpower %d antenna %d mode %d\n",
1337 __func__, ch - arg.channels, arg.n_channels, 1382 ch - arg.channels, arg.n_channels,
1338 ch->freq, ch->max_power, ch->max_reg_power, 1383 ch->freq, ch->max_power, ch->max_reg_power,
1339 ch->max_antenna_gain, ch->mode); 1384 ch->max_antenna_gain, ch->mode);
1340 1385
@@ -1391,6 +1436,33 @@ static void ath10k_reg_notifier(struct wiphy *wiphy,
1391/* TX handlers */ 1436/* TX handlers */
1392/***************/ 1437/***************/
1393 1438
1439static u8 ath10k_tx_h_get_tid(struct ieee80211_hdr *hdr)
1440{
1441 if (ieee80211_is_mgmt(hdr->frame_control))
1442 return HTT_DATA_TX_EXT_TID_MGMT;
1443
1444 if (!ieee80211_is_data_qos(hdr->frame_control))
1445 return HTT_DATA_TX_EXT_TID_NON_QOS_MCAST_BCAST;
1446
1447 if (!is_unicast_ether_addr(ieee80211_get_DA(hdr)))
1448 return HTT_DATA_TX_EXT_TID_NON_QOS_MCAST_BCAST;
1449
1450 return ieee80211_get_qos_ctl(hdr)[0] & IEEE80211_QOS_CTL_TID_MASK;
1451}
1452
1453static u8 ath10k_tx_h_get_vdev_id(struct ath10k *ar,
1454 struct ieee80211_tx_info *info)
1455{
1456 if (info->control.vif)
1457 return ath10k_vif_to_arvif(info->control.vif)->vdev_id;
1458
1459 if (ar->monitor_enabled)
1460 return ar->monitor_vdev_id;
1461
1462 ath10k_warn("could not resolve vdev id\n");
1463 return 0;
1464}
1465
1394/* 1466/*
1395 * Frames sent to the FW have to be in "Native Wifi" format. 1467 * Frames sent to the FW have to be in "Native Wifi" format.
1396 * Strip the QoS field from the 802.11 header. 1468 * Strip the QoS field from the 802.11 header.
@@ -1411,6 +1483,30 @@ static void ath10k_tx_h_qos_workaround(struct ieee80211_hw *hw,
1411 skb_pull(skb, IEEE80211_QOS_CTL_LEN); 1483 skb_pull(skb, IEEE80211_QOS_CTL_LEN);
1412} 1484}
1413 1485
1486static void ath10k_tx_wep_key_work(struct work_struct *work)
1487{
1488 struct ath10k_vif *arvif = container_of(work, struct ath10k_vif,
1489 wep_key_work);
1490 int ret, keyidx = arvif->def_wep_key_newidx;
1491
1492 if (arvif->def_wep_key_idx == keyidx)
1493 return;
1494
1495 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d set keyidx %d\n",
1496 arvif->vdev_id, keyidx);
1497
1498 ret = ath10k_wmi_vdev_set_param(arvif->ar,
1499 arvif->vdev_id,
1500 arvif->ar->wmi.vdev_param->def_keyid,
1501 keyidx);
1502 if (ret) {
1503 ath10k_warn("could not update wep keyidx (%d)\n", ret);
1504 return;
1505 }
1506
1507 arvif->def_wep_key_idx = keyidx;
1508}
1509
1414static void ath10k_tx_h_update_wep_key(struct sk_buff *skb) 1510static void ath10k_tx_h_update_wep_key(struct sk_buff *skb)
1415{ 1511{
1416 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 1512 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
@@ -1419,11 +1515,6 @@ static void ath10k_tx_h_update_wep_key(struct sk_buff *skb)
1419 struct ath10k *ar = arvif->ar; 1515 struct ath10k *ar = arvif->ar;
1420 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1516 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
1421 struct ieee80211_key_conf *key = info->control.hw_key; 1517 struct ieee80211_key_conf *key = info->control.hw_key;
1422 int ret;
1423
1424 /* TODO AP mode should be implemented */
1425 if (vif->type != NL80211_IFTYPE_STATION)
1426 return;
1427 1518
1428 if (!ieee80211_has_protected(hdr->frame_control)) 1519 if (!ieee80211_has_protected(hdr->frame_control))
1429 return; 1520 return;
@@ -1435,20 +1526,14 @@ static void ath10k_tx_h_update_wep_key(struct sk_buff *skb)
1435 key->cipher != WLAN_CIPHER_SUITE_WEP104) 1526 key->cipher != WLAN_CIPHER_SUITE_WEP104)
1436 return; 1527 return;
1437 1528
1438 if (key->keyidx == arvif->def_wep_key_index) 1529 if (key->keyidx == arvif->def_wep_key_idx)
1439 return; 1530 return;
1440 1531
1441 ath10k_dbg(ATH10K_DBG_MAC, "new wep keyidx will be %d\n", key->keyidx); 1532 /* FIXME: Most likely a few frames will be TXed with an old key. Simply
1442 1533 * queueing frames until the key index is updated is not an option
1443 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, 1534 * because the sk_buff may still need more processing, e.g. offchannel tx */
1444 WMI_VDEV_PARAM_DEF_KEYID, 1535 arvif->def_wep_key_newidx = key->keyidx;
1445 key->keyidx); 1536 ieee80211_queue_work(ar->hw, &arvif->wep_key_work);
1446 if (ret) {
1447 ath10k_warn("could not update wep keyidx (%d)\n", ret);
1448 return;
1449 }
1450
1451 arvif->def_wep_key_index = key->keyidx;
1452} 1537}
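
Deferring the keyidx update to a work item matters because this helper runs on the tx path, where sleeping is not allowed, while WMI commands may block. The generic shape of the pattern (sketch; the INIT_WORK wiring appears in the add_interface hunk below):

	static void example_work_fn(struct work_struct *work)
	{
		/* process context: sleeping WMI/firmware calls are fine */
	}

	static void example_hot_path(struct ieee80211_hw *hw,
				     struct work_struct *work)
	{
		ieee80211_queue_work(hw, work); /* cheap, never sleeps */
	}
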
1453 1538
1454static void ath10k_tx_h_add_p2p_noa_ie(struct ath10k *ar, struct sk_buff *skb) 1539static void ath10k_tx_h_add_p2p_noa_ie(struct ath10k *ar, struct sk_buff *skb)
@@ -1478,19 +1563,42 @@ static void ath10k_tx_h_add_p2p_noa_ie(struct ath10k *ar, struct sk_buff *skb)
1478static void ath10k_tx_htt(struct ath10k *ar, struct sk_buff *skb) 1563static void ath10k_tx_htt(struct ath10k *ar, struct sk_buff *skb)
1479{ 1564{
1480 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1565 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
1481 int ret; 1566 int ret = 0;
1482 1567
1483 if (ieee80211_is_mgmt(hdr->frame_control)) 1568 if (ar->htt.target_version_major >= 3) {
1484 ret = ath10k_htt_mgmt_tx(&ar->htt, skb); 1569 /* Since HTT 3.0 there is no separate mgmt tx command */
1485 else if (ieee80211_is_nullfunc(hdr->frame_control)) 1570 ret = ath10k_htt_tx(&ar->htt, skb);
1571 goto exit;
1572 }
1573
1574 if (ieee80211_is_mgmt(hdr->frame_control)) {
1575 if (test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX,
1576 ar->fw_features)) {
1577 if (skb_queue_len(&ar->wmi_mgmt_tx_queue) >=
1578 ATH10K_MAX_NUM_MGMT_PENDING) {
1579 ath10k_warn("wmi mgmt_tx queue limit reached\n");
1580 ret = -EBUSY;
1581 goto exit;
1582 }
1583
1584 skb_queue_tail(&ar->wmi_mgmt_tx_queue, skb);
1585 ieee80211_queue_work(ar->hw, &ar->wmi_mgmt_tx_work);
1586 } else {
1587 ret = ath10k_htt_mgmt_tx(&ar->htt, skb);
1588 }
1589 } else if (!test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX,
1590 ar->fw_features) &&
1591 ieee80211_is_nullfunc(hdr->frame_control)) {
1486 /* FW does not report tx status properly for NullFunc frames 1592 /* FW does not report tx status properly for NullFunc frames
1487 * unless they are sent through mgmt tx path. mac80211 sends 1593 * unless they are sent through mgmt tx path. mac80211 sends
1488 * those frames when it detects link/beacon loss and depends on 1594 * those frames when it detects link/beacon loss and depends
1489 * the tx status to be correct. */ 1595 * on the tx status to be correct. */
1490 ret = ath10k_htt_mgmt_tx(&ar->htt, skb); 1596 ret = ath10k_htt_mgmt_tx(&ar->htt, skb);
1491 else 1597 } else {
1492 ret = ath10k_htt_tx(&ar->htt, skb); 1598 ret = ath10k_htt_tx(&ar->htt, skb);
1599 }
1493 1600
1601exit:
1494 if (ret) { 1602 if (ret) {
1495 ath10k_warn("tx failed (%d). dropping packet.\n", ret); 1603 ath10k_warn("tx failed (%d). dropping packet.\n", ret);
1496 ieee80211_free_txskb(ar->hw, skb); 1604 ieee80211_free_txskb(ar->hw, skb);
@@ -1534,18 +1642,19 @@ void ath10k_offchan_tx_work(struct work_struct *work)
1534 1642
1535 mutex_lock(&ar->conf_mutex); 1643 mutex_lock(&ar->conf_mutex);
1536 1644
1537 ath10k_dbg(ATH10K_DBG_MAC, "processing offchannel skb %p\n", 1645 ath10k_dbg(ATH10K_DBG_MAC, "mac offchannel skb %p\n",
1538 skb); 1646 skb);
1539 1647
1540 hdr = (struct ieee80211_hdr *)skb->data; 1648 hdr = (struct ieee80211_hdr *)skb->data;
1541 peer_addr = ieee80211_get_DA(hdr); 1649 peer_addr = ieee80211_get_DA(hdr);
1542 vdev_id = ATH10K_SKB_CB(skb)->htt.vdev_id; 1650 vdev_id = ATH10K_SKB_CB(skb)->vdev_id;
1543 1651
1544 spin_lock_bh(&ar->data_lock); 1652 spin_lock_bh(&ar->data_lock);
1545 peer = ath10k_peer_find(ar, vdev_id, peer_addr); 1653 peer = ath10k_peer_find(ar, vdev_id, peer_addr);
1546 spin_unlock_bh(&ar->data_lock); 1654 spin_unlock_bh(&ar->data_lock);
1547 1655
1548 if (peer) 1656 if (peer)
1657 /* FIXME: should this use ath10k_warn()? */
1549 ath10k_dbg(ATH10K_DBG_MAC, "peer %pM on vdev %d already present\n", 1658 ath10k_dbg(ATH10K_DBG_MAC, "peer %pM on vdev %d already present\n",
1550 peer_addr, vdev_id); 1659 peer_addr, vdev_id);
1551 1660
@@ -1580,6 +1689,36 @@ void ath10k_offchan_tx_work(struct work_struct *work)
1580 } 1689 }
1581} 1690}
1582 1691
1692void ath10k_mgmt_over_wmi_tx_purge(struct ath10k *ar)
1693{
1694 struct sk_buff *skb;
1695
1696 for (;;) {
1697 skb = skb_dequeue(&ar->wmi_mgmt_tx_queue);
1698 if (!skb)
1699 break;
1700
1701 ieee80211_free_txskb(ar->hw, skb);
1702 }
1703}
1704
1705void ath10k_mgmt_over_wmi_tx_work(struct work_struct *work)
1706{
1707 struct ath10k *ar = container_of(work, struct ath10k, wmi_mgmt_tx_work);
1708 struct sk_buff *skb;
1709 int ret;
1710
1711 for (;;) {
1712 skb = skb_dequeue(&ar->wmi_mgmt_tx_queue);
1713 if (!skb)
1714 break;
1715
1716 ret = ath10k_wmi_mgmt_tx(ar, skb);
1717 if (ret)
1718 ath10k_warn("wmi mgmt_tx failed (%d)\n", ret);
1719 }
1720}
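
The two helpers form a standard skb producer/consumer pair: the tx path enqueues and schedules, the worker drains, and the purge empties the queue on teardown. One-time setup of the queue and work item is assumed to happen elsewhere in core.c (sketch; the init site is not part of this hunk):

	skb_queue_head_init(&ar->wmi_mgmt_tx_queue);
	INIT_WORK(&ar->wmi_mgmt_tx_work, ath10k_mgmt_over_wmi_tx_work);
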
1721
1583/************/ 1722/************/
1584/* Scanning */ 1723/* Scanning */
1585/************/ 1724/************/
@@ -1643,8 +1782,6 @@ static int ath10k_abort_scan(struct ath10k *ar)
1643 return -EIO; 1782 return -EIO;
1644 } 1783 }
1645 1784
1646 ath10k_wmi_flush_tx(ar);
1647
1648 ret = wait_for_completion_timeout(&ar->scan.completed, 3*HZ); 1785 ret = wait_for_completion_timeout(&ar->scan.completed, 3*HZ);
1649 if (ret == 0) 1786 if (ret == 0)
1650 ath10k_warn("timed out while waiting for scan to stop\n"); 1787 ath10k_warn("timed out while waiting for scan to stop\n");
@@ -1678,10 +1815,6 @@ static int ath10k_start_scan(struct ath10k *ar,
1678 if (ret) 1815 if (ret)
1679 return ret; 1816 return ret;
1680 1817
1681 /* make sure we submit the command so the completion
1682 * timeout makes sense */
1683 ath10k_wmi_flush_tx(ar);
1684
1685 ret = wait_for_completion_timeout(&ar->scan.started, 1*HZ); 1818 ret = wait_for_completion_timeout(&ar->scan.started, 1*HZ);
1686 if (ret == 0) { 1819 if (ret == 0) {
1687 ath10k_abort_scan(ar); 1820 ath10k_abort_scan(ar);
@@ -1709,16 +1842,7 @@ static void ath10k_tx(struct ieee80211_hw *hw,
1709 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 1842 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
1710 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1843 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
1711 struct ath10k *ar = hw->priv; 1844 struct ath10k *ar = hw->priv;
1712 struct ath10k_vif *arvif = NULL; 1845 u8 tid, vdev_id;
1713 u32 vdev_id = 0;
1714 u8 tid;
1715
1716 if (info->control.vif) {
1717 arvif = ath10k_vif_to_arvif(info->control.vif);
1718 vdev_id = arvif->vdev_id;
1719 } else if (ar->monitor_enabled) {
1720 vdev_id = ar->monitor_vdev_id;
1721 }
1722 1846
1723 /* We should disable CCK RATE due to P2P */ 1847 /* We should disable CCK RATE due to P2P */
1724 if (info->flags & IEEE80211_TX_CTL_NO_CCK_RATE) 1848 if (info->flags & IEEE80211_TX_CTL_NO_CCK_RATE)
@@ -1726,12 +1850,8 @@ static void ath10k_tx(struct ieee80211_hw *hw,
1726 1850
1727 /* we must calculate tid before we apply qos workaround 1851 /* we must calculate tid before we apply qos workaround
1728 * as we'd lose the qos control field */ 1852 * as we'd lose the qos control field */
1729 tid = HTT_DATA_TX_EXT_TID_NON_QOS_MCAST_BCAST; 1853 tid = ath10k_tx_h_get_tid(hdr);
1730 if (ieee80211_is_data_qos(hdr->frame_control) && 1854 vdev_id = ath10k_tx_h_get_vdev_id(ar, info);
1731 is_unicast_ether_addr(ieee80211_get_DA(hdr))) {
1732 u8 *qc = ieee80211_get_qos_ctl(hdr);
1733 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
1734 }
1735 1855
1736 /* it makes no sense to process injected frames like that */ 1856 /* it makes no sense to process injected frames like that */
1737 if (info->control.vif && 1857 if (info->control.vif &&
@@ -1742,14 +1862,14 @@ static void ath10k_tx(struct ieee80211_hw *hw,
1742 ath10k_tx_h_seq_no(skb); 1862 ath10k_tx_h_seq_no(skb);
1743 } 1863 }
1744 1864
1745 memset(ATH10K_SKB_CB(skb), 0, sizeof(*ATH10K_SKB_CB(skb))); 1865 ATH10K_SKB_CB(skb)->vdev_id = vdev_id;
1746 ATH10K_SKB_CB(skb)->htt.vdev_id = vdev_id; 1866 ATH10K_SKB_CB(skb)->htt.is_offchan = false;
1747 ATH10K_SKB_CB(skb)->htt.tid = tid; 1867 ATH10K_SKB_CB(skb)->htt.tid = tid;
1748 1868
1749 if (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) { 1869 if (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) {
1750 spin_lock_bh(&ar->data_lock); 1870 spin_lock_bh(&ar->data_lock);
1751 ATH10K_SKB_CB(skb)->htt.is_offchan = true; 1871 ATH10K_SKB_CB(skb)->htt.is_offchan = true;
1752 ATH10K_SKB_CB(skb)->htt.vdev_id = ar->scan.vdev_id; 1872 ATH10K_SKB_CB(skb)->vdev_id = ar->scan.vdev_id;
1753 spin_unlock_bh(&ar->data_lock); 1873 spin_unlock_bh(&ar->data_lock);
1754 1874
1755 ath10k_dbg(ATH10K_DBG_MAC, "queued offchannel skb %p\n", skb); 1875 ath10k_dbg(ATH10K_DBG_MAC, "queued offchannel skb %p\n", skb);
@@ -1771,6 +1891,7 @@ void ath10k_halt(struct ath10k *ar)
1771 1891
1772 del_timer_sync(&ar->scan.timeout); 1892 del_timer_sync(&ar->scan.timeout);
1773 ath10k_offchan_tx_purge(ar); 1893 ath10k_offchan_tx_purge(ar);
1894 ath10k_mgmt_over_wmi_tx_purge(ar);
1774 ath10k_peer_cleanup_all(ar); 1895 ath10k_peer_cleanup_all(ar);
1775 ath10k_core_stop(ar); 1896 ath10k_core_stop(ar);
1776 ath10k_hif_power_down(ar); 1897 ath10k_hif_power_down(ar);
@@ -1817,12 +1938,12 @@ static int ath10k_start(struct ieee80211_hw *hw)
1817 else if (ar->state == ATH10K_STATE_RESTARTING) 1938 else if (ar->state == ATH10K_STATE_RESTARTING)
1818 ar->state = ATH10K_STATE_RESTARTED; 1939 ar->state = ATH10K_STATE_RESTARTED;
1819 1940
1820 ret = ath10k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_PMF_QOS, 1); 1941 ret = ath10k_wmi_pdev_set_param(ar, ar->wmi.pdev_param->pmf_qos, 1);
1821 if (ret) 1942 if (ret)
1822 ath10k_warn("could not enable WMI_PDEV_PARAM_PMF_QOS (%d)\n", 1943 ath10k_warn("could not enable WMI_PDEV_PARAM_PMF_QOS (%d)\n",
1823 ret); 1944 ret);
1824 1945
1825 ret = ath10k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_DYNAMIC_BW, 0); 1946 ret = ath10k_wmi_pdev_set_param(ar, ar->wmi.pdev_param->dynamic_bw, 0);
1826 if (ret) 1947 if (ret)
1827 ath10k_warn("could not init WMI_PDEV_PARAM_DYNAMIC_BW (%d)\n", 1948 ath10k_warn("could not init WMI_PDEV_PARAM_DYNAMIC_BW (%d)\n",
1828 ret); 1949 ret);
@@ -1847,32 +1968,29 @@ static void ath10k_stop(struct ieee80211_hw *hw)
1847 ar->state = ATH10K_STATE_OFF; 1968 ar->state = ATH10K_STATE_OFF;
1848 mutex_unlock(&ar->conf_mutex); 1969 mutex_unlock(&ar->conf_mutex);
1849 1970
1971 ath10k_mgmt_over_wmi_tx_purge(ar);
1972
1850 cancel_work_sync(&ar->offchan_tx_work); 1973 cancel_work_sync(&ar->offchan_tx_work);
1974 cancel_work_sync(&ar->wmi_mgmt_tx_work);
1851 cancel_work_sync(&ar->restart_work); 1975 cancel_work_sync(&ar->restart_work);
1852} 1976}
1853 1977
1854static void ath10k_config_ps(struct ath10k *ar) 1978static int ath10k_config_ps(struct ath10k *ar)
1855{ 1979{
1856 struct ath10k_generic_iter ar_iter; 1980 struct ath10k_vif *arvif;
1981 int ret = 0;
1857 1982
1858 lockdep_assert_held(&ar->conf_mutex); 1983 lockdep_assert_held(&ar->conf_mutex);
1859 1984
1860 /* During HW reconfiguration mac80211 reports all interfaces that were 1985 list_for_each_entry(arvif, &ar->arvifs, list) {
1861 * running until reconfiguration was started. Since FW doesn't have any 1986 ret = ath10k_mac_vif_setup_ps(arvif);
1862 * vdevs at this point we must not iterate over this interface list. 1987 if (ret) {
1863 * This setting will be updated upon add_interface(). */ 1988 ath10k_warn("could not setup powersave (%d)\n", ret);
1864 if (ar->state == ATH10K_STATE_RESTARTED) 1989 break;
1865 return; 1990 }
1866 1991 }
1867 memset(&ar_iter, 0, sizeof(struct ath10k_generic_iter));
1868 ar_iter.ar = ar;
1869
1870 ieee80211_iterate_active_interfaces_atomic(
1871 ar->hw, IEEE80211_IFACE_ITER_NORMAL,
1872 ath10k_ps_iter, &ar_iter);
1873 1992
1874 if (ar_iter.ret) 1993 return ret;
1875 ath10k_warn("failed to set ps config (%d)\n", ar_iter.ret);
1876} 1994}
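
Walking a driver-private vif list (populated under ar->conf_mutex by the add_interface/remove_interface hunks below) sidesteps the old restart problem, where mac80211's iterator reported interfaces that had no firmware vdev yet. A sketch of the contract the walk relies on:

	/* All list mutations and walks happen under ar->conf_mutex,
	 * so no RCU or extra locking is needed. */
	lockdep_assert_held(&ar->conf_mutex);

	list_for_each_entry(arvif, &ar->arvifs, list)
		example_per_vdev_op(arvif); /* hypothetical per-vdev step */
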
1877 1995
1878static int ath10k_config(struct ieee80211_hw *hw, u32 changed) 1996static int ath10k_config(struct ieee80211_hw *hw, u32 changed)
@@ -1884,7 +2002,7 @@ static int ath10k_config(struct ieee80211_hw *hw, u32 changed)
1884 mutex_lock(&ar->conf_mutex); 2002 mutex_lock(&ar->conf_mutex);
1885 2003
1886 if (changed & IEEE80211_CONF_CHANGE_CHANNEL) { 2004 if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
1887 ath10k_dbg(ATH10K_DBG_MAC, "Config channel %d mhz\n", 2005 ath10k_dbg(ATH10K_DBG_MAC, "mac config channel %d mhz\n",
1888 conf->chandef.chan->center_freq); 2006 conf->chandef.chan->center_freq);
1889 spin_lock_bh(&ar->data_lock); 2007 spin_lock_bh(&ar->data_lock);
1890 ar->rx_channel = conf->chandef.chan; 2008 ar->rx_channel = conf->chandef.chan;
@@ -1901,7 +2019,6 @@ static int ath10k_config(struct ieee80211_hw *hw, u32 changed)
1901 ret = ath10k_monitor_destroy(ar); 2019 ret = ath10k_monitor_destroy(ar);
1902 } 2020 }
1903 2021
1904 ath10k_wmi_flush_tx(ar);
1905 mutex_unlock(&ar->conf_mutex); 2022 mutex_unlock(&ar->conf_mutex);
1906 return ret; 2023 return ret;
1907} 2024}
@@ -1922,6 +2039,7 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
1922 int ret = 0; 2039 int ret = 0;
1923 u32 value; 2040 u32 value;
1924 int bit; 2041 int bit;
2042 u32 vdev_param;
1925 2043
1926 mutex_lock(&ar->conf_mutex); 2044 mutex_lock(&ar->conf_mutex);
1927 2045
@@ -1930,21 +2048,22 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
1930 arvif->ar = ar; 2048 arvif->ar = ar;
1931 arvif->vif = vif; 2049 arvif->vif = vif;
1932 2050
2051 INIT_WORK(&arvif->wep_key_work, ath10k_tx_wep_key_work);
2052
1933 if ((vif->type == NL80211_IFTYPE_MONITOR) && ar->monitor_present) { 2053 if ((vif->type == NL80211_IFTYPE_MONITOR) && ar->monitor_present) {
1934 ath10k_warn("Only one monitor interface allowed\n"); 2054 ath10k_warn("Only one monitor interface allowed\n");
1935 ret = -EBUSY; 2055 ret = -EBUSY;
1936 goto exit; 2056 goto err;
1937 } 2057 }
1938 2058
1939 bit = ffs(ar->free_vdev_map); 2059 bit = ffs(ar->free_vdev_map);
1940 if (bit == 0) { 2060 if (bit == 0) {
1941 ret = -EBUSY; 2061 ret = -EBUSY;
1942 goto exit; 2062 goto err;
1943 } 2063 }
1944 2064
1945 arvif->vdev_id = bit - 1; 2065 arvif->vdev_id = bit - 1;
1946 arvif->vdev_subtype = WMI_VDEV_SUBTYPE_NONE; 2066 arvif->vdev_subtype = WMI_VDEV_SUBTYPE_NONE;
1947 ar->free_vdev_map &= ~(1 << arvif->vdev_id);
1948 2067
1949 if (ar->p2p) 2068 if (ar->p2p)
1950 arvif->vdev_subtype = WMI_VDEV_SUBTYPE_P2P_DEVICE; 2069 arvif->vdev_subtype = WMI_VDEV_SUBTYPE_P2P_DEVICE;
@@ -1973,32 +2092,41 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
1973 break; 2092 break;
1974 } 2093 }
1975 2094
1976 ath10k_dbg(ATH10K_DBG_MAC, "Add interface: id %d type %d subtype %d\n", 2095 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev create %d (add interface) type %d subtype %d\n",
1977 arvif->vdev_id, arvif->vdev_type, arvif->vdev_subtype); 2096 arvif->vdev_id, arvif->vdev_type, arvif->vdev_subtype);
1978 2097
1979 ret = ath10k_wmi_vdev_create(ar, arvif->vdev_id, arvif->vdev_type, 2098 ret = ath10k_wmi_vdev_create(ar, arvif->vdev_id, arvif->vdev_type,
1980 arvif->vdev_subtype, vif->addr); 2099 arvif->vdev_subtype, vif->addr);
1981 if (ret) { 2100 if (ret) {
1982 ath10k_warn("WMI vdev create failed: ret %d\n", ret); 2101 ath10k_warn("WMI vdev create failed: ret %d\n", ret);
1983 goto exit; 2102 goto err;
1984 } 2103 }
1985 2104
1986 ret = ath10k_wmi_vdev_set_param(ar, 0, WMI_VDEV_PARAM_DEF_KEYID, 2105 ar->free_vdev_map &= ~BIT(arvif->vdev_id);
1987 arvif->def_wep_key_index); 2106 list_add(&arvif->list, &ar->arvifs);
1988 if (ret) 2107
2108 vdev_param = ar->wmi.vdev_param->def_keyid;
2109 ret = ath10k_wmi_vdev_set_param(ar, 0, vdev_param,
2110 arvif->def_wep_key_idx);
2111 if (ret) {
1989 ath10k_warn("Failed to set default keyid: %d\n", ret); 2112 ath10k_warn("Failed to set default keyid: %d\n", ret);
2113 goto err_vdev_delete;
2114 }
1990 2115
1991 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, 2116 vdev_param = ar->wmi.vdev_param->tx_encap_type;
1992 WMI_VDEV_PARAM_TX_ENCAP_TYPE, 2117 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param,
1993 ATH10K_HW_TXRX_NATIVE_WIFI); 2118 ATH10K_HW_TXRX_NATIVE_WIFI);
1994 if (ret) 2119 /* 10.X firmware does not support this VDEV parameter. Do not warn */
2120 if (ret && ret != -EOPNOTSUPP) {
1995 ath10k_warn("Failed to set TX encap: %d\n", ret); 2121 ath10k_warn("Failed to set TX encap: %d\n", ret);
2122 goto err_vdev_delete;
2123 }
1996 2124
1997 if (arvif->vdev_type == WMI_VDEV_TYPE_AP) { 2125 if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
1998 ret = ath10k_peer_create(ar, arvif->vdev_id, vif->addr); 2126 ret = ath10k_peer_create(ar, arvif->vdev_id, vif->addr);
1999 if (ret) { 2127 if (ret) {
2000 ath10k_warn("Failed to create peer for AP: %d\n", ret); 2128 ath10k_warn("Failed to create peer for AP: %d\n", ret);
2001 goto exit; 2129 goto err_vdev_delete;
2002 } 2130 }
2003 } 2131 }
2004 2132
@@ -2007,39 +2135,62 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
2007 value = WMI_STA_PS_RX_WAKE_POLICY_WAKE; 2135 value = WMI_STA_PS_RX_WAKE_POLICY_WAKE;
2008 ret = ath10k_wmi_set_sta_ps_param(ar, arvif->vdev_id, 2136 ret = ath10k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
2009 param, value); 2137 param, value);
2010 if (ret) 2138 if (ret) {
2011 ath10k_warn("Failed to set RX wake policy: %d\n", ret); 2139 ath10k_warn("Failed to set RX wake policy: %d\n", ret);
2140 goto err_peer_delete;
2141 }
2012 2142
2013 param = WMI_STA_PS_PARAM_TX_WAKE_THRESHOLD; 2143 param = WMI_STA_PS_PARAM_TX_WAKE_THRESHOLD;
2014 value = WMI_STA_PS_TX_WAKE_THRESHOLD_ALWAYS; 2144 value = WMI_STA_PS_TX_WAKE_THRESHOLD_ALWAYS;
2015 ret = ath10k_wmi_set_sta_ps_param(ar, arvif->vdev_id, 2145 ret = ath10k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
2016 param, value); 2146 param, value);
2017 if (ret) 2147 if (ret) {
2018 ath10k_warn("Failed to set TX wake thresh: %d\n", ret); 2148 ath10k_warn("Failed to set TX wake thresh: %d\n", ret);
2149 goto err_peer_delete;
2150 }
2019 2151
2020 param = WMI_STA_PS_PARAM_PSPOLL_COUNT; 2152 param = WMI_STA_PS_PARAM_PSPOLL_COUNT;
2021 value = WMI_STA_PS_PSPOLL_COUNT_NO_MAX; 2153 value = WMI_STA_PS_PSPOLL_COUNT_NO_MAX;
2022 ret = ath10k_wmi_set_sta_ps_param(ar, arvif->vdev_id, 2154 ret = ath10k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
2023 param, value); 2155 param, value);
2024 if (ret) 2156 if (ret) {
2025 ath10k_warn("Failed to set PSPOLL count: %d\n", ret); 2157 ath10k_warn("Failed to set PSPOLL count: %d\n", ret);
2158 goto err_peer_delete;
2159 }
2026 } 2160 }
2027 2161
2028 ret = ath10k_mac_set_rts(arvif, ar->hw->wiphy->rts_threshold); 2162 ret = ath10k_mac_set_rts(arvif, ar->hw->wiphy->rts_threshold);
2029 if (ret) 2163 if (ret) {
2030 ath10k_warn("failed to set rts threshold for vdev %d (%d)\n", 2164 ath10k_warn("failed to set rts threshold for vdev %d (%d)\n",
2031 arvif->vdev_id, ret); 2165 arvif->vdev_id, ret);
2166 goto err_peer_delete;
2167 }
2032 2168
2033 ret = ath10k_mac_set_frag(arvif, ar->hw->wiphy->frag_threshold); 2169 ret = ath10k_mac_set_frag(arvif, ar->hw->wiphy->frag_threshold);
2034 if (ret) 2170 if (ret) {
2035 ath10k_warn("failed to set frag threshold for vdev %d (%d)\n", 2171 ath10k_warn("failed to set frag threshold for vdev %d (%d)\n",
2036 arvif->vdev_id, ret); 2172 arvif->vdev_id, ret);
2173 goto err_peer_delete;
2174 }
2037 2175
2038 if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) 2176 if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)
2039 ar->monitor_present = true; 2177 ar->monitor_present = true;
2040 2178
2041exit:
2042 mutex_unlock(&ar->conf_mutex); 2179 mutex_unlock(&ar->conf_mutex);
2180 return 0;
2181
2182err_peer_delete:
2183 if (arvif->vdev_type == WMI_VDEV_TYPE_AP)
2184 ath10k_wmi_peer_delete(ar, arvif->vdev_id, vif->addr);
2185
2186err_vdev_delete:
2187 ath10k_wmi_vdev_delete(ar, arvif->vdev_id);
2188 ar->free_vdev_map &= ~BIT(arvif->vdev_id);
2189 list_del(&arvif->list);
2190
2191err:
2192 mutex_unlock(&ar->conf_mutex);
2193
2043 return ret; 2194 return ret;
2044} 2195}
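
The reworked error handling follows the usual kernel goto-unwind idiom: each failure jumps to the label that releases everything acquired so far, in reverse order of acquisition. In miniature (generic sketch; acquire_a/acquire_b are hypothetical):

	static int example_setup(void)
	{
		int ret;

		ret = acquire_a();
		if (ret)
			goto err;

		ret = acquire_b();
		if (ret)
			goto err_release_a;

		return 0;

	err_release_a:
		release_a();
	err:
		return ret;
	}
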
2045 2196
@@ -2052,9 +2203,17 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
2052 2203
2053 mutex_lock(&ar->conf_mutex); 2204 mutex_lock(&ar->conf_mutex);
2054 2205
2055 ath10k_dbg(ATH10K_DBG_MAC, "Remove interface: id %d\n", arvif->vdev_id); 2206 cancel_work_sync(&arvif->wep_key_work);
2207
2208 spin_lock_bh(&ar->data_lock);
2209 if (arvif->beacon) {
2210 dev_kfree_skb_any(arvif->beacon);
2211 arvif->beacon = NULL;
2212 }
2213 spin_unlock_bh(&ar->data_lock);
2056 2214
2057 ar->free_vdev_map |= 1 << (arvif->vdev_id); 2215 ar->free_vdev_map |= 1 << (arvif->vdev_id);
2216 list_del(&arvif->list);
2058 2217
2059 if (arvif->vdev_type == WMI_VDEV_TYPE_AP) { 2218 if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
2060 ret = ath10k_peer_delete(arvif->ar, arvif->vdev_id, vif->addr); 2219 ret = ath10k_peer_delete(arvif->ar, arvif->vdev_id, vif->addr);
@@ -2064,6 +2223,9 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
2064 kfree(arvif->u.ap.noa_data); 2223 kfree(arvif->u.ap.noa_data);
2065 } 2224 }
2066 2225
2226 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev delete %d (remove interface)\n",
2227 arvif->vdev_id);
2228
2067 ret = ath10k_wmi_vdev_delete(ar, arvif->vdev_id); 2229 ret = ath10k_wmi_vdev_delete(ar, arvif->vdev_id);
2068 if (ret) 2230 if (ret)
2069 ath10k_warn("WMI vdev delete failed: %d\n", ret); 2231 ath10k_warn("WMI vdev delete failed: %d\n", ret);
@@ -2105,18 +2267,20 @@ static void ath10k_configure_filter(struct ieee80211_hw *hw,
2105 2267
2106 if ((ar->filter_flags & FIF_PROMISC_IN_BSS) && 2268 if ((ar->filter_flags & FIF_PROMISC_IN_BSS) &&
2107 !ar->monitor_enabled) { 2269 !ar->monitor_enabled) {
2270 ath10k_dbg(ATH10K_DBG_MAC, "mac monitor %d start\n",
2271 ar->monitor_vdev_id);
2272
2108 ret = ath10k_monitor_start(ar, ar->monitor_vdev_id); 2273 ret = ath10k_monitor_start(ar, ar->monitor_vdev_id);
2109 if (ret) 2274 if (ret)
2110 ath10k_warn("Unable to start monitor mode\n"); 2275 ath10k_warn("Unable to start monitor mode\n");
2111 else
2112 ath10k_dbg(ATH10K_DBG_MAC, "Monitor mode started\n");
2113 } else if (!(ar->filter_flags & FIF_PROMISC_IN_BSS) && 2276 } else if (!(ar->filter_flags & FIF_PROMISC_IN_BSS) &&
2114 ar->monitor_enabled) { 2277 ar->monitor_enabled) {
2278 ath10k_dbg(ATH10K_DBG_MAC, "mac monitor %d stop\n",
2279 ar->monitor_vdev_id);
2280
2115 ret = ath10k_monitor_stop(ar); 2281 ret = ath10k_monitor_stop(ar);
2116 if (ret) 2282 if (ret)
2117 ath10k_warn("Unable to stop monitor mode\n"); 2283 ath10k_warn("Unable to stop monitor mode\n");
2118 else
2119 ath10k_dbg(ATH10K_DBG_MAC, "Monitor mode stopped\n");
2120 } 2284 }
2121 2285
2122 mutex_unlock(&ar->conf_mutex); 2286 mutex_unlock(&ar->conf_mutex);
@@ -2130,6 +2294,7 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
2130 struct ath10k *ar = hw->priv; 2294 struct ath10k *ar = hw->priv;
2131 struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif); 2295 struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
2132 int ret = 0; 2296 int ret = 0;
2297 u32 vdev_param, pdev_param;
2133 2298
2134 mutex_lock(&ar->conf_mutex); 2299 mutex_lock(&ar->conf_mutex);
2135 2300
@@ -2138,44 +2303,44 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
2138 2303
2139 if (changed & BSS_CHANGED_BEACON_INT) { 2304 if (changed & BSS_CHANGED_BEACON_INT) {
2140 arvif->beacon_interval = info->beacon_int; 2305 arvif->beacon_interval = info->beacon_int;
2141 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, 2306 vdev_param = ar->wmi.vdev_param->beacon_interval;
2142 WMI_VDEV_PARAM_BEACON_INTERVAL, 2307 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param,
2143 arvif->beacon_interval); 2308 arvif->beacon_interval);
2309 ath10k_dbg(ATH10K_DBG_MAC,
2310 "mac vdev %d beacon_interval %d\n",
2311 arvif->vdev_id, arvif->beacon_interval);
2312
2144 if (ret) 2313 if (ret)
2145 ath10k_warn("Failed to set beacon interval for VDEV: %d\n", 2314 ath10k_warn("Failed to set beacon interval for VDEV: %d\n",
2146 arvif->vdev_id); 2315 arvif->vdev_id);
2147 else
2148 ath10k_dbg(ATH10K_DBG_MAC,
2149 "Beacon interval: %d set for VDEV: %d\n",
2150 arvif->beacon_interval, arvif->vdev_id);
2151 } 2316 }
2152 2317
2153 if (changed & BSS_CHANGED_BEACON) { 2318 if (changed & BSS_CHANGED_BEACON) {
2154 ret = ath10k_wmi_pdev_set_param(ar, 2319 ath10k_dbg(ATH10K_DBG_MAC,
2155 WMI_PDEV_PARAM_BEACON_TX_MODE, 2320 "vdev %d set beacon tx mode to staggered\n",
2321 arvif->vdev_id);
2322
2323 pdev_param = ar->wmi.pdev_param->beacon_tx_mode;
2324 ret = ath10k_wmi_pdev_set_param(ar, pdev_param,
2156 WMI_BEACON_STAGGERED_MODE); 2325 WMI_BEACON_STAGGERED_MODE);
2157 if (ret) 2326 if (ret)
2158 ath10k_warn("Failed to set beacon mode for VDEV: %d\n", 2327 ath10k_warn("Failed to set beacon mode for VDEV: %d\n",
2159 arvif->vdev_id); 2328 arvif->vdev_id);
2160 else
2161 ath10k_dbg(ATH10K_DBG_MAC,
2162 "Set staggered beacon mode for VDEV: %d\n",
2163 arvif->vdev_id);
2164 } 2329 }
2165 2330
2166 if (changed & BSS_CHANGED_BEACON_INFO) { 2331 if (changed & BSS_CHANGED_BEACON_INFO) {
2167 arvif->dtim_period = info->dtim_period; 2332 arvif->dtim_period = info->dtim_period;
2168 2333
2169 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, 2334 ath10k_dbg(ATH10K_DBG_MAC,
2170 WMI_VDEV_PARAM_DTIM_PERIOD, 2335 "mac vdev %d dtim_period %d\n",
2336 arvif->vdev_id, arvif->dtim_period);
2337
2338 vdev_param = ar->wmi.vdev_param->dtim_period;
2339 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param,
2171 arvif->dtim_period); 2340 arvif->dtim_period);
2172 if (ret) 2341 if (ret)
2173 ath10k_warn("Failed to set dtim period for VDEV: %d\n", 2342 ath10k_warn("Failed to set dtim period for VDEV: %d\n",
2174 arvif->vdev_id); 2343 arvif->vdev_id);
2175 else
2176 ath10k_dbg(ATH10K_DBG_MAC,
2177 "Set dtim period: %d for VDEV: %d\n",
2178 arvif->dtim_period, arvif->vdev_id);
2179 } 2344 }
2180 2345
2181 if (changed & BSS_CHANGED_SSID && 2346 if (changed & BSS_CHANGED_SSID &&
@@ -2188,16 +2353,15 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
2188 2353
2189 if (changed & BSS_CHANGED_BSSID) { 2354 if (changed & BSS_CHANGED_BSSID) {
2190 if (!is_zero_ether_addr(info->bssid)) { 2355 if (!is_zero_ether_addr(info->bssid)) {
2356 ath10k_dbg(ATH10K_DBG_MAC,
2357 "mac vdev %d create peer %pM\n",
2358 arvif->vdev_id, info->bssid);
2359
2191 ret = ath10k_peer_create(ar, arvif->vdev_id, 2360 ret = ath10k_peer_create(ar, arvif->vdev_id,
2192 info->bssid); 2361 info->bssid);
2193 if (ret) 2362 if (ret)
2194 ath10k_warn("Failed to add peer: %pM for VDEV: %d\n", 2363 ath10k_warn("Failed to add peer: %pM for VDEV: %d\n",
2195 info->bssid, arvif->vdev_id); 2364 info->bssid, arvif->vdev_id);
2196 else
2197 ath10k_dbg(ATH10K_DBG_MAC,
2198 "Added peer: %pM for VDEV: %d\n",
2199 info->bssid, arvif->vdev_id);
2200
2201 2365
2202 if (vif->type == NL80211_IFTYPE_STATION) { 2366 if (vif->type == NL80211_IFTYPE_STATION) {
2203 /* 2367 /*
@@ -2207,11 +2371,12 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
2207 memcpy(arvif->u.sta.bssid, info->bssid, 2371 memcpy(arvif->u.sta.bssid, info->bssid,
2208 ETH_ALEN); 2372 ETH_ALEN);
2209 2373
2374 ath10k_dbg(ATH10K_DBG_MAC,
2375 "mac vdev %d start %pM\n",
2376 arvif->vdev_id, info->bssid);
2377
2378 /* FIXME: check return value */
2210 ret = ath10k_vdev_start(arvif); 2379 ret = ath10k_vdev_start(arvif);
2211 if (!ret)
2212 ath10k_dbg(ATH10K_DBG_MAC,
2213 "VDEV: %d started with BSSID: %pM\n",
2214 arvif->vdev_id, info->bssid);
2215 } 2380 }
2216 2381
2217 /* 2382 /*
@@ -2235,16 +2400,15 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
2235 else 2400 else
2236 cts_prot = 0; 2401 cts_prot = 0;
2237 2402
2238 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, 2403 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d cts_prot %d\n",
2239 WMI_VDEV_PARAM_ENABLE_RTSCTS, 2404 arvif->vdev_id, cts_prot);
2405
2406 vdev_param = ar->wmi.vdev_param->enable_rtscts;
2407 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param,
2240 cts_prot); 2408 cts_prot);
2241 if (ret) 2409 if (ret)
2242 ath10k_warn("Failed to set CTS prot for VDEV: %d\n", 2410 ath10k_warn("Failed to set CTS prot for VDEV: %d\n",
2243 arvif->vdev_id); 2411 arvif->vdev_id);
2244 else
2245 ath10k_dbg(ATH10K_DBG_MAC,
2246 "Set CTS prot: %d for VDEV: %d\n",
2247 cts_prot, arvif->vdev_id);
2248 } 2412 }
2249 2413
2250 if (changed & BSS_CHANGED_ERP_SLOT) { 2414 if (changed & BSS_CHANGED_ERP_SLOT) {
@@ -2255,16 +2419,15 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
2255 else 2419 else
2256 slottime = WMI_VDEV_SLOT_TIME_LONG; /* 20us */ 2420 slottime = WMI_VDEV_SLOT_TIME_LONG; /* 20us */
2257 2421
2258 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, 2422 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d slot_time %d\n",
2259 WMI_VDEV_PARAM_SLOT_TIME, 2423 arvif->vdev_id, slottime);
2424
2425 vdev_param = ar->wmi.vdev_param->slot_time;
2426 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param,
2260 slottime); 2427 slottime);
2261 if (ret) 2428 if (ret)
2262 ath10k_warn("Failed to set erp slot for VDEV: %d\n", 2429 ath10k_warn("Failed to set erp slot for VDEV: %d\n",
2263 arvif->vdev_id); 2430 arvif->vdev_id);
2264 else
2265 ath10k_dbg(ATH10K_DBG_MAC,
2266 "Set slottime: %d for VDEV: %d\n",
2267 slottime, arvif->vdev_id);
2268 } 2431 }
2269 2432
2270 if (changed & BSS_CHANGED_ERP_PREAMBLE) { 2433 if (changed & BSS_CHANGED_ERP_PREAMBLE) {
@@ -2274,16 +2437,16 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
2274 else 2437 else
2275 preamble = WMI_VDEV_PREAMBLE_LONG; 2438 preamble = WMI_VDEV_PREAMBLE_LONG;
2276 2439
2277 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, 2440 ath10k_dbg(ATH10K_DBG_MAC,
2278 WMI_VDEV_PARAM_PREAMBLE, 2441 "mac vdev %d preamble %d\n",
2442 arvif->vdev_id, preamble);
2443
2444 vdev_param = ar->wmi.vdev_param->preamble;
2445 ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param,
2279 preamble); 2446 preamble);
2280 if (ret) 2447 if (ret)
2281 ath10k_warn("Failed to set preamble for VDEV: %d\n", 2448 ath10k_warn("Failed to set preamble for VDEV: %d\n",
2282 arvif->vdev_id); 2449 arvif->vdev_id);
2283 else
2284 ath10k_dbg(ATH10K_DBG_MAC,
2285 "Set preamble: %d for VDEV: %d\n",
2286 preamble, arvif->vdev_id);
2287 } 2450 }
2288 2451
2289 if (changed & BSS_CHANGED_ASSOC) { 2452 if (changed & BSS_CHANGED_ASSOC) {
@@ -2474,27 +2637,26 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
2474 /* 2637 /*
2475 * New station addition. 2638 * New station addition.
2476 */ 2639 */
2640 ath10k_dbg(ATH10K_DBG_MAC,
2641 "mac vdev %d peer create %pM (new sta)\n",
2642 arvif->vdev_id, sta->addr);
2643
2477 ret = ath10k_peer_create(ar, arvif->vdev_id, sta->addr); 2644 ret = ath10k_peer_create(ar, arvif->vdev_id, sta->addr);
2478 if (ret) 2645 if (ret)
2479 ath10k_warn("Failed to add peer: %pM for VDEV: %d\n", 2646 ath10k_warn("Failed to add peer: %pM for VDEV: %d\n",
2480 sta->addr, arvif->vdev_id); 2647 sta->addr, arvif->vdev_id);
2481 else
2482 ath10k_dbg(ATH10K_DBG_MAC,
2483 "Added peer: %pM for VDEV: %d\n",
2484 sta->addr, arvif->vdev_id);
2485 } else if ((old_state == IEEE80211_STA_NONE && 2648 } else if ((old_state == IEEE80211_STA_NONE &&
2486 new_state == IEEE80211_STA_NOTEXIST)) { 2649 new_state == IEEE80211_STA_NOTEXIST)) {
2487 /* 2650 /*
2488 * Existing station deletion. 2651 * Existing station deletion.
2489 */ 2652 */
2653 ath10k_dbg(ATH10K_DBG_MAC,
2654 "mac vdev %d peer delete %pM (sta gone)\n",
2655 arvif->vdev_id, sta->addr);
2490 ret = ath10k_peer_delete(ar, arvif->vdev_id, sta->addr); 2656 ret = ath10k_peer_delete(ar, arvif->vdev_id, sta->addr);
2491 if (ret) 2657 if (ret)
2492 ath10k_warn("Failed to delete peer: %pM for VDEV: %d\n", 2658 ath10k_warn("Failed to delete peer: %pM for VDEV: %d\n",
2493 sta->addr, arvif->vdev_id); 2659 sta->addr, arvif->vdev_id);
2494 else
2495 ath10k_dbg(ATH10K_DBG_MAC,
2496 "Removed peer: %pM for VDEV: %d\n",
2497 sta->addr, arvif->vdev_id);
2498 2660
2499 if (vif->type == NL80211_IFTYPE_STATION) 2661 if (vif->type == NL80211_IFTYPE_STATION)
2500 ath10k_bss_disassoc(hw, vif); 2662 ath10k_bss_disassoc(hw, vif);
@@ -2505,14 +2667,13 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
2505 /* 2667 /*
2506 * New association. 2668 * New association.
2507 */ 2669 */
2670 ath10k_dbg(ATH10K_DBG_MAC, "mac sta %pM associated\n",
2671 sta->addr);
2672
2508 ret = ath10k_station_assoc(ar, arvif, sta); 2673 ret = ath10k_station_assoc(ar, arvif, sta);
2509 if (ret) 2674 if (ret)
2510 ath10k_warn("Failed to associate station: %pM\n", 2675 ath10k_warn("Failed to associate station: %pM\n",
2511 sta->addr); 2676 sta->addr);
2512 else
2513 ath10k_dbg(ATH10K_DBG_MAC,
2514 "Station %pM moved to assoc state\n",
2515 sta->addr);
2516 } else if (old_state == IEEE80211_STA_ASSOC && 2677 } else if (old_state == IEEE80211_STA_ASSOC &&
2517 new_state == IEEE80211_STA_AUTH && 2678 new_state == IEEE80211_STA_AUTH &&
2518 (vif->type == NL80211_IFTYPE_AP || 2679 (vif->type == NL80211_IFTYPE_AP ||
@@ -2520,14 +2681,13 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
2520 /* 2681 /*
2521 * Disassociation. 2682 * Disassociation.
2522 */ 2683 */
2684 ath10k_dbg(ATH10K_DBG_MAC, "mac sta %pM disassociated\n",
2685 sta->addr);
2686
2523 ret = ath10k_station_disassoc(ar, arvif, sta); 2687 ret = ath10k_station_disassoc(ar, arvif, sta);
2524 if (ret) 2688 if (ret)
2525 ath10k_warn("Failed to disassociate station: %pM\n", 2689 ath10k_warn("Failed to disassociate station: %pM\n",
2526 sta->addr); 2690 sta->addr);
2527 else
2528 ath10k_dbg(ATH10K_DBG_MAC,
2529 "Station %pM moved to disassociated state\n",
2530 sta->addr);
2531 } 2691 }
2532 2692
2533 mutex_unlock(&ar->conf_mutex); 2693 mutex_unlock(&ar->conf_mutex);
@@ -2732,88 +2892,51 @@ static int ath10k_cancel_remain_on_channel(struct ieee80211_hw *hw)
2732 * Both RTS and Fragmentation threshold are interface-specific 2892 * Both RTS and Fragmentation threshold are interface-specific
2733 * in ath10k, but device-specific in mac80211. 2893 * in ath10k, but device-specific in mac80211.
2734 */ 2894 */
2735static void ath10k_set_rts_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
2736{
2737 struct ath10k_generic_iter *ar_iter = data;
2738 struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
2739 u32 rts = ar_iter->ar->hw->wiphy->rts_threshold;
2740
2741 lockdep_assert_held(&arvif->ar->conf_mutex);
2742
2743 /* During HW reconfiguration mac80211 reports all interfaces that were
2744 * running until reconfiguration was started. Since FW doesn't have any
2745 * vdevs at this point we must not iterate over this interface list.
2746 * This setting will be updated upon add_interface(). */
2747 if (ar_iter->ar->state == ATH10K_STATE_RESTARTED)
2748 return;
2749
2750 ar_iter->ret = ath10k_mac_set_rts(arvif, rts);
2751 if (ar_iter->ret)
2752 ath10k_warn("Failed to set RTS threshold for VDEV: %d\n",
2753 arvif->vdev_id);
2754 else
2755 ath10k_dbg(ATH10K_DBG_MAC,
2756 "Set RTS threshold: %d for VDEV: %d\n",
2757 rts, arvif->vdev_id);
2758}
2759 2895
2760static int ath10k_set_rts_threshold(struct ieee80211_hw *hw, u32 value) 2896static int ath10k_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
2761{ 2897{
2762 struct ath10k_generic_iter ar_iter;
2763 struct ath10k *ar = hw->priv; 2898 struct ath10k *ar = hw->priv;
2764 2899 struct ath10k_vif *arvif;
2765 memset(&ar_iter, 0, sizeof(struct ath10k_generic_iter)); 2900 int ret = 0;
2766 ar_iter.ar = ar;
2767 2901
2768 mutex_lock(&ar->conf_mutex); 2902 mutex_lock(&ar->conf_mutex);
2769 ieee80211_iterate_active_interfaces_atomic( 2903 list_for_each_entry(arvif, &ar->arvifs, list) {
2770 hw, IEEE80211_IFACE_ITER_NORMAL, 2904 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d rts threshold %d\n",
2771 ath10k_set_rts_iter, &ar_iter); 2905 arvif->vdev_id, value);
2772 mutex_unlock(&ar->conf_mutex);
2773
2774 return ar_iter.ret;
2775}
2776 2906
2777static void ath10k_set_frag_iter(void *data, u8 *mac, struct ieee80211_vif *vif) 2907 ret = ath10k_mac_set_rts(arvif, value);
2778{ 2908 if (ret) {
2779 struct ath10k_generic_iter *ar_iter = data; 2909 ath10k_warn("could not set rts threshold for vdev %d (%d)\n",
2780 struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif); 2910 arvif->vdev_id, ret);
2781 u32 frag = ar_iter->ar->hw->wiphy->frag_threshold; 2911 break;
2782 2912 }
2783 lockdep_assert_held(&arvif->ar->conf_mutex); 2913 }
2784 2914 mutex_unlock(&ar->conf_mutex);
2785 /* During HW reconfiguration mac80211 reports all interfaces that were
2786 * running until reconfiguration was started. Since FW doesn't have any
2787 * vdevs at this point we must not iterate over this interface list.
2788 * This setting will be updated upon add_interface(). */
2789 if (ar_iter->ar->state == ATH10K_STATE_RESTARTED)
2790 return;
2791 2915
2792 ar_iter->ret = ath10k_mac_set_frag(arvif, frag); 2916 return ret;
2793 if (ar_iter->ret)
2794 ath10k_warn("Failed to set frag threshold for VDEV: %d\n",
2795 arvif->vdev_id);
2796 else
2797 ath10k_dbg(ATH10K_DBG_MAC,
2798 "Set frag threshold: %d for VDEV: %d\n",
2799 frag, arvif->vdev_id);
2800} 2917}
2801 2918
2802static int ath10k_set_frag_threshold(struct ieee80211_hw *hw, u32 value) 2919static int ath10k_set_frag_threshold(struct ieee80211_hw *hw, u32 value)
2803{ 2920{
2804 struct ath10k_generic_iter ar_iter;
2805 struct ath10k *ar = hw->priv; 2921 struct ath10k *ar = hw->priv;
2806 2922 struct ath10k_vif *arvif;
2807 memset(&ar_iter, 0, sizeof(struct ath10k_generic_iter)); 2923 int ret = 0;
2808 ar_iter.ar = ar;
2809 2924
2810 mutex_lock(&ar->conf_mutex); 2925 mutex_lock(&ar->conf_mutex);
2811 ieee80211_iterate_active_interfaces_atomic( 2926 list_for_each_entry(arvif, &ar->arvifs, list) {
2812 hw, IEEE80211_IFACE_ITER_NORMAL, 2927 ath10k_dbg(ATH10K_DBG_MAC, "mac vdev %d fragmentation threshold %d\n",
2813 ath10k_set_frag_iter, &ar_iter); 2928 arvif->vdev_id, value);
2929
2930 ret = ath10k_mac_set_frag(arvif, value);
2931 if (ret) {
2932 ath10k_warn("could not set fragmentation threshold for vdev %d (%d)\n",
2933 arvif->vdev_id, ret);
2934 break;
2935 }
2936 }
2814 mutex_unlock(&ar->conf_mutex); 2937 mutex_unlock(&ar->conf_mutex);
2815 2938
2816 return ar_iter.ret; 2939 return ret;
2817} 2940}
2818 2941
2819static void ath10k_flush(struct ieee80211_hw *hw, u32 queues, bool drop) 2942static void ath10k_flush(struct ieee80211_hw *hw, u32 queues, bool drop)
@@ -2836,8 +2959,7 @@ static void ath10k_flush(struct ieee80211_hw *hw, u32 queues, bool drop)
2836 bool empty; 2959 bool empty;
2837 2960
2838 spin_lock_bh(&ar->htt.tx_lock); 2961 spin_lock_bh(&ar->htt.tx_lock);
2839 empty = bitmap_empty(ar->htt.used_msdu_ids, 2962 empty = (ar->htt.num_pending_tx == 0);
2840 ar->htt.max_num_pending_tx);
2841 spin_unlock_bh(&ar->htt.tx_lock); 2963 spin_unlock_bh(&ar->htt.tx_lock);
2842 2964
2843 skip = (ar->state == ATH10K_STATE_WEDGED); 2965 skip = (ar->state == ATH10K_STATE_WEDGED);
@@ -3326,6 +3448,10 @@ int ath10k_mac_register(struct ath10k *ar)
3326 IEEE80211_HW_WANT_MONITOR_VIF | 3448 IEEE80211_HW_WANT_MONITOR_VIF |
3327 IEEE80211_HW_AP_LINK_PS; 3449 IEEE80211_HW_AP_LINK_PS;
3328 3450
3451 /* MSDU can have HTT TX fragment pushed in front. The additional 4
3452 * bytes is used for padding/alignment if necessary. */
3453 ar->hw->extra_tx_headroom += sizeof(struct htt_data_tx_desc_frag)*2 + 4;
3454
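
Assuming the frag descriptor is the usual two-word (paddr, len) layout, i.e. 8 bytes, this reserves 2 * 8 + 4 = 20 bytes of extra headroom per frame so the fragment list can be prepended without reallocating the skb. The assumed layout:

	/* Assumed 8-byte layout backing the 2 * sizeof(...) + 4 arithmetic: */
	struct htt_data_tx_desc_frag {
		__le32 paddr;	/* DMA address of the fragment */
		__le32 len;	/* fragment length in bytes */
	} __packed;
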
3329 if (ar->ht_cap_info & WMI_HT_CAP_DYNAMIC_SMPS) 3455 if (ar->ht_cap_info & WMI_HT_CAP_DYNAMIC_SMPS)
3330 ar->hw->flags |= IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS; 3456 ar->hw->flags |= IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS;
3331 3457
diff --git a/drivers/net/wireless/ath/ath10k/mac.h b/drivers/net/wireless/ath/ath10k/mac.h
index 6fce9bfb19a5..ba1021997b8f 100644
--- a/drivers/net/wireless/ath/ath10k/mac.h
+++ b/drivers/net/wireless/ath/ath10k/mac.h
@@ -34,6 +34,8 @@ struct ath10k_vif *ath10k_get_arvif(struct ath10k *ar, u32 vdev_id);
34void ath10k_reset_scan(unsigned long ptr); 34void ath10k_reset_scan(unsigned long ptr);
35void ath10k_offchan_tx_purge(struct ath10k *ar); 35void ath10k_offchan_tx_purge(struct ath10k *ar);
36void ath10k_offchan_tx_work(struct work_struct *work); 36void ath10k_offchan_tx_work(struct work_struct *work);
37void ath10k_mgmt_over_wmi_tx_purge(struct ath10k *ar);
38void ath10k_mgmt_over_wmi_tx_work(struct work_struct *work);
37void ath10k_halt(struct ath10k *ar); 39void ath10k_halt(struct ath10k *ar);
38 40
39static inline struct ath10k_vif *ath10k_vif_to_arvif(struct ieee80211_vif *vif) 41static inline struct ath10k_vif *ath10k_vif_to_arvif(struct ieee80211_vif *vif)
diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
index e2f9ef50b1bd..f8d59c7b9082 100644
--- a/drivers/net/wireless/ath/ath10k/pci.c
+++ b/drivers/net/wireless/ath/ath10k/pci.c
@@ -36,11 +36,9 @@ static unsigned int ath10k_target_ps;
36module_param(ath10k_target_ps, uint, 0644); 36module_param(ath10k_target_ps, uint, 0644);
37MODULE_PARM_DESC(ath10k_target_ps, "Enable ath10k Target (SoC) PS option"); 37MODULE_PARM_DESC(ath10k_target_ps, "Enable ath10k Target (SoC) PS option");
38 38
39#define QCA988X_1_0_DEVICE_ID (0xabcd)
40#define QCA988X_2_0_DEVICE_ID (0x003c) 39#define QCA988X_2_0_DEVICE_ID (0x003c)
41 40
42static DEFINE_PCI_DEVICE_TABLE(ath10k_pci_id_table) = { 41static DEFINE_PCI_DEVICE_TABLE(ath10k_pci_id_table) = {
43 { PCI_VDEVICE(ATHEROS, QCA988X_1_0_DEVICE_ID) }, /* PCI-E QCA988X V1 */
44 { PCI_VDEVICE(ATHEROS, QCA988X_2_0_DEVICE_ID) }, /* PCI-E QCA988X V2 */ 42 { PCI_VDEVICE(ATHEROS, QCA988X_2_0_DEVICE_ID) }, /* PCI-E QCA988X V2 */
45 {0} 43 {0}
46}; 44};
@@ -50,9 +48,9 @@ static int ath10k_pci_diag_read_access(struct ath10k *ar, u32 address,
50 48
51static void ath10k_pci_process_ce(struct ath10k *ar); 49static void ath10k_pci_process_ce(struct ath10k *ar);
52static int ath10k_pci_post_rx(struct ath10k *ar); 50static int ath10k_pci_post_rx(struct ath10k *ar);
53static int ath10k_pci_post_rx_pipe(struct hif_ce_pipe_info *pipe_info, 51static int ath10k_pci_post_rx_pipe(struct ath10k_pci_pipe *pipe_info,
54 int num); 52 int num);
55static void ath10k_pci_rx_pipe_cleanup(struct hif_ce_pipe_info *pipe_info); 53static void ath10k_pci_rx_pipe_cleanup(struct ath10k_pci_pipe *pipe_info);
56static void ath10k_pci_stop_ce(struct ath10k *ar); 54static void ath10k_pci_stop_ce(struct ath10k *ar);
57static void ath10k_pci_device_reset(struct ath10k *ar); 55static void ath10k_pci_device_reset(struct ath10k *ar);
58static int ath10k_pci_reset_target(struct ath10k *ar); 56static int ath10k_pci_reset_target(struct ath10k *ar);
@@ -60,43 +58,145 @@ static int ath10k_pci_start_intr(struct ath10k *ar);
60static void ath10k_pci_stop_intr(struct ath10k *ar); 58static void ath10k_pci_stop_intr(struct ath10k *ar);
61 59
62static const struct ce_attr host_ce_config_wlan[] = { 60static const struct ce_attr host_ce_config_wlan[] = {
63 /* host->target HTC control and raw streams */ 61 /* CE0: host->target HTC control and raw streams */
64 { /* CE0 */ CE_ATTR_FLAGS, 0, 16, 256, 0, NULL,}, 62 {
65 /* could be moved to share CE3 */ 63 .flags = CE_ATTR_FLAGS,
66 /* target->host HTT + HTC control */ 64 .src_nentries = 16,
67 { /* CE1 */ CE_ATTR_FLAGS, 0, 0, 512, 512, NULL,}, 65 .src_sz_max = 256,
68 /* target->host WMI */ 66 .dest_nentries = 0,
69 { /* CE2 */ CE_ATTR_FLAGS, 0, 0, 2048, 32, NULL,}, 67 },
70 /* host->target WMI */ 68
71 { /* CE3 */ CE_ATTR_FLAGS, 0, 32, 2048, 0, NULL,}, 69 /* CE1: target->host HTT + HTC control */
72 /* host->target HTT */ 70 {
73 { /* CE4 */ CE_ATTR_FLAGS | CE_ATTR_DIS_INTR, 0, 71 .flags = CE_ATTR_FLAGS,
74 CE_HTT_H2T_MSG_SRC_NENTRIES, 256, 0, NULL,}, 72 .src_nentries = 0,
75 /* unused */ 73 .src_sz_max = 512,
76 { /* CE5 */ CE_ATTR_FLAGS, 0, 0, 0, 0, NULL,}, 74 .dest_nentries = 512,
77 /* Target autonomous hif_memcpy */ 75 },
78 { /* CE6 */ CE_ATTR_FLAGS, 0, 0, 0, 0, NULL,}, 76
79 /* ce_diag, the Diagnostic Window */ 77 /* CE2: target->host WMI */
80 { /* CE7 */ CE_ATTR_FLAGS, 0, 2, DIAG_TRANSFER_LIMIT, 2, NULL,}, 78 {
79 .flags = CE_ATTR_FLAGS,
80 .src_nentries = 0,
81 .src_sz_max = 2048,
82 .dest_nentries = 32,
83 },
84
85 /* CE3: host->target WMI */
86 {
87 .flags = CE_ATTR_FLAGS,
88 .src_nentries = 32,
89 .src_sz_max = 2048,
90 .dest_nentries = 0,
91 },
92
93 /* CE4: host->target HTT */
94 {
95 .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
96 .src_nentries = CE_HTT_H2T_MSG_SRC_NENTRIES,
97 .src_sz_max = 256,
98 .dest_nentries = 0,
99 },
100
101 /* CE5: unused */
102 {
103 .flags = CE_ATTR_FLAGS,
104 .src_nentries = 0,
105 .src_sz_max = 0,
106 .dest_nentries = 0,
107 },
108
109 /* CE6: target autonomous hif_memcpy */
110 {
111 .flags = CE_ATTR_FLAGS,
112 .src_nentries = 0,
113 .src_sz_max = 0,
114 .dest_nentries = 0,
115 },
116
117 /* CE7: ce_diag, the Diagnostic Window */
118 {
119 .flags = CE_ATTR_FLAGS,
120 .src_nentries = 2,
121 .src_sz_max = DIAG_TRANSFER_LIMIT,
122 .dest_nentries = 2,
123 },
81}; 124};
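
Rewriting the table with designated initializers changes nothing at runtime but is far safer to maintain: fields are set by name, reordering the struct cannot silently shift values, and unnamed members are zero-initialized. In miniature:

	struct example_attr { int flags; int src_nentries; int src_sz_max; };

	/* Positional: meaning depends entirely on field order. */
	static const struct example_attr old_style = { 0, 16, 256 };

	/* Designated: self-documenting and order-independent. */
	static const struct example_attr new_style = {
		.flags = 0,
		.src_nentries = 16,
		.src_sz_max = 256,
	};
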
82 125
83/* Target firmware's Copy Engine configuration. */ 126/* Target firmware's Copy Engine configuration. */
84static const struct ce_pipe_config target_ce_config_wlan[] = { 127static const struct ce_pipe_config target_ce_config_wlan[] = {
85 /* host->target HTC control and raw streams */ 128 /* CE0: host->target HTC control and raw streams */
86 { /* CE0 */ 0, PIPEDIR_OUT, 32, 256, CE_ATTR_FLAGS, 0,}, 129 {
87 /* target->host HTT + HTC control */ 130 .pipenum = 0,
88 { /* CE1 */ 1, PIPEDIR_IN, 32, 512, CE_ATTR_FLAGS, 0,}, 131 .pipedir = PIPEDIR_OUT,
89 /* target->host WMI */ 132 .nentries = 32,
90 { /* CE2 */ 2, PIPEDIR_IN, 32, 2048, CE_ATTR_FLAGS, 0,}, 133 .nbytes_max = 256,
91 /* host->target WMI */ 134 .flags = CE_ATTR_FLAGS,
92 { /* CE3 */ 3, PIPEDIR_OUT, 32, 2048, CE_ATTR_FLAGS, 0,}, 135 .reserved = 0,
93 /* host->target HTT */ 136 },
94 { /* CE4 */ 4, PIPEDIR_OUT, 256, 256, CE_ATTR_FLAGS, 0,}, 137
138 /* CE1: target->host HTT + HTC control */
139 {
140 .pipenum = 1,
141 .pipedir = PIPEDIR_IN,
142 .nentries = 32,
143 .nbytes_max = 512,
144 .flags = CE_ATTR_FLAGS,
145 .reserved = 0,
146 },
147
148 /* CE2: target->host WMI */
149 {
150 .pipenum = 2,
151 .pipedir = PIPEDIR_IN,
152 .nentries = 32,
153 .nbytes_max = 2048,
154 .flags = CE_ATTR_FLAGS,
155 .reserved = 0,
156 },
157
158 /* CE3: host->target WMI */
159 {
160 .pipenum = 3,
161 .pipedir = PIPEDIR_OUT,
162 .nentries = 32,
163 .nbytes_max = 2048,
164 .flags = CE_ATTR_FLAGS,
165 .reserved = 0,
166 },
167
168 /* CE4: host->target HTT */
169 {
170 .pipenum = 4,
171 .pipedir = PIPEDIR_OUT,
172 .nentries = 256,
173 .nbytes_max = 256,
174 .flags = CE_ATTR_FLAGS,
175 .reserved = 0,
176 },
177
95 /* NB: 50% of src nentries, since tx has 2 frags */ 178 /* NB: 50% of src nentries, since tx has 2 frags */
96 /* unused */ 179
97 { /* CE5 */ 5, PIPEDIR_OUT, 32, 2048, CE_ATTR_FLAGS, 0,}, 180 /* CE5: unused */
98 /* Reserved for target autonomous hif_memcpy */ 181 {
99 { /* CE6 */ 6, PIPEDIR_INOUT, 32, 4096, CE_ATTR_FLAGS, 0,}, 182 .pipenum = 5,
183 .pipedir = PIPEDIR_OUT,
184 .nentries = 32,
185 .nbytes_max = 2048,
186 .flags = CE_ATTR_FLAGS,
187 .reserved = 0,
188 },
189
190 /* CE6: Reserved for target autonomous hif_memcpy */
191 {
192 .pipenum = 6,
193 .pipedir = PIPEDIR_INOUT,
194 .nentries = 32,
195 .nbytes_max = 4096,
196 .flags = CE_ATTR_FLAGS,
197 .reserved = 0,
198 },
199
100 /* CE7 used only by Host */ 200 /* CE7 used only by Host */
101}; 201};
102 202
@@ -114,7 +214,7 @@ static int ath10k_pci_diag_read_mem(struct ath10k *ar, u32 address, void *data,
114 unsigned int completed_nbytes, orig_nbytes, remaining_bytes; 214 unsigned int completed_nbytes, orig_nbytes, remaining_bytes;
115 unsigned int id; 215 unsigned int id;
116 unsigned int flags; 216 unsigned int flags;
117 struct ce_state *ce_diag; 217 struct ath10k_ce_pipe *ce_diag;
118 /* Host buffer address in CE space */ 218 /* Host buffer address in CE space */
119 u32 ce_data; 219 u32 ce_data;
120 dma_addr_t ce_data_base = 0; 220 dma_addr_t ce_data_base = 0;
@@ -278,7 +378,7 @@ static int ath10k_pci_diag_write_mem(struct ath10k *ar, u32 address,
278 unsigned int completed_nbytes, orig_nbytes, remaining_bytes; 378 unsigned int completed_nbytes, orig_nbytes, remaining_bytes;
279 unsigned int id; 379 unsigned int id;
280 unsigned int flags; 380 unsigned int flags;
281 struct ce_state *ce_diag; 381 struct ath10k_ce_pipe *ce_diag;
282 void *data_buf = NULL; 382 void *data_buf = NULL;
283 u32 ce_data; /* Host buffer address in CE space */ 383 u32 ce_data; /* Host buffer address in CE space */
284 dma_addr_t ce_data_base = 0; 384 dma_addr_t ce_data_base = 0;
@@ -437,7 +537,7 @@ static void ath10k_pci_wait(struct ath10k *ar)
437 ath10k_warn("Unable to wakeup target\n"); 537 ath10k_warn("Unable to wakeup target\n");
438} 538}
439 539
440void ath10k_do_pci_wake(struct ath10k *ar) 540int ath10k_do_pci_wake(struct ath10k *ar)
441{ 541{
442 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 542 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
443 void __iomem *pci_addr = ar_pci->mem; 543 void __iomem *pci_addr = ar_pci->mem;
@@ -453,18 +553,19 @@ void ath10k_do_pci_wake(struct ath10k *ar)
453 atomic_inc(&ar_pci->keep_awake_count); 553 atomic_inc(&ar_pci->keep_awake_count);
454 554
455 if (ar_pci->verified_awake) 555 if (ar_pci->verified_awake)
456 return; 556 return 0;
457 557
458 for (;;) { 558 for (;;) {
459 if (ath10k_pci_target_is_awake(ar)) { 559 if (ath10k_pci_target_is_awake(ar)) {
460 ar_pci->verified_awake = true; 560 ar_pci->verified_awake = true;
461 break; 561 return 0;
462 } 562 }
463 563
464 if (tot_delay > PCIE_WAKE_TIMEOUT) { 564 if (tot_delay > PCIE_WAKE_TIMEOUT) {
465 ath10k_warn("target takes too long to wake up (awake count %d)\n", 565 ath10k_warn("target took longer than %d us to wake up (awake count %d)\n",
566 PCIE_WAKE_TIMEOUT,
466 atomic_read(&ar_pci->keep_awake_count)); 567 atomic_read(&ar_pci->keep_awake_count));
467 break; 568 return -ETIMEDOUT;
468 } 569 }
469 570
470 udelay(curr_delay); 571 udelay(curr_delay);
@@ -493,7 +594,7 @@ void ath10k_do_pci_sleep(struct ath10k *ar)
493 * FIXME: Handle OOM properly. 594 * FIXME: Handle OOM properly.
494 */ 595 */
495static inline 596static inline
496struct ath10k_pci_compl *get_free_compl(struct hif_ce_pipe_info *pipe_info) 597struct ath10k_pci_compl *get_free_compl(struct ath10k_pci_pipe *pipe_info)
497{ 598{
498 struct ath10k_pci_compl *compl = NULL; 599 struct ath10k_pci_compl *compl = NULL;
499 600
@@ -511,39 +612,28 @@ exit:
511} 612}
512 613
513/* Called by lower (CE) layer when a send to Target completes. */ 614/* Called by lower (CE) layer when a send to Target completes. */
514static void ath10k_pci_ce_send_done(struct ce_state *ce_state, 615static void ath10k_pci_ce_send_done(struct ath10k_ce_pipe *ce_state)
515 void *transfer_context,
516 u32 ce_data,
517 unsigned int nbytes,
518 unsigned int transfer_id)
519{ 616{
520 struct ath10k *ar = ce_state->ar; 617 struct ath10k *ar = ce_state->ar;
521 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 618 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
522 struct hif_ce_pipe_info *pipe_info = &ar_pci->pipe_info[ce_state->id]; 619 struct ath10k_pci_pipe *pipe_info = &ar_pci->pipe_info[ce_state->id];
523 struct ath10k_pci_compl *compl; 620 struct ath10k_pci_compl *compl;
524 bool process = false; 621 void *transfer_context;
525 622 u32 ce_data;
526 do { 623 unsigned int nbytes;
527 /* 624 unsigned int transfer_id;
528 * For the send completion of an item in sendlist, just
529 * increment num_sends_allowed. The upper layer callback will
530 * be triggered when last fragment is done with send.
531 */
532 if (transfer_context == CE_SENDLIST_ITEM_CTXT) {
533 spin_lock_bh(&pipe_info->pipe_lock);
534 pipe_info->num_sends_allowed++;
535 spin_unlock_bh(&pipe_info->pipe_lock);
536 continue;
537 }
538 625
626 while (ath10k_ce_completed_send_next(ce_state, &transfer_context,
627 &ce_data, &nbytes,
628 &transfer_id) == 0) {
539 compl = get_free_compl(pipe_info); 629 compl = get_free_compl(pipe_info);
540 if (!compl) 630 if (!compl)
541 break; 631 break;
542 632
543 compl->send_or_recv = HIF_CE_COMPLETE_SEND; 633 compl->state = ATH10K_PCI_COMPL_SEND;
544 compl->ce_state = ce_state; 634 compl->ce_state = ce_state;
545 compl->pipe_info = pipe_info; 635 compl->pipe_info = pipe_info;
546 compl->transfer_context = transfer_context; 636 compl->skb = transfer_context;
547 compl->nbytes = nbytes; 637 compl->nbytes = nbytes;
548 compl->transfer_id = transfer_id; 638 compl->transfer_id = transfer_id;
549 compl->flags = 0; 639 compl->flags = 0;
@@ -554,46 +644,36 @@ static void ath10k_pci_ce_send_done(struct ce_state *ce_state,
554 spin_lock_bh(&ar_pci->compl_lock); 644 spin_lock_bh(&ar_pci->compl_lock);
555 list_add_tail(&compl->list, &ar_pci->compl_process); 645 list_add_tail(&compl->list, &ar_pci->compl_process);
556 spin_unlock_bh(&ar_pci->compl_lock); 646 spin_unlock_bh(&ar_pci->compl_lock);
557 647 }
558 process = true;
559 } while (ath10k_ce_completed_send_next(ce_state,
560 &transfer_context,
561 &ce_data, &nbytes,
562 &transfer_id) == 0);
563
564 /*
565 * If only some of the items within a sendlist have completed,
566 * don't invoke completion processing until the entire sendlist
567 * has been sent.
568 */
569 if (!process)
570 return;
571 648
572 ath10k_pci_process_ce(ar); 649 ath10k_pci_process_ce(ar);
573} 650}
574 651
575/* Called by lower (CE) layer when data is received from the Target. */ 652/* Called by lower (CE) layer when data is received from the Target. */
576static void ath10k_pci_ce_recv_data(struct ce_state *ce_state, 653static void ath10k_pci_ce_recv_data(struct ath10k_ce_pipe *ce_state)
577 void *transfer_context, u32 ce_data,
578 unsigned int nbytes,
579 unsigned int transfer_id,
580 unsigned int flags)
581{ 654{
582 struct ath10k *ar = ce_state->ar; 655 struct ath10k *ar = ce_state->ar;
583 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 656 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
584 struct hif_ce_pipe_info *pipe_info = &ar_pci->pipe_info[ce_state->id]; 657 struct ath10k_pci_pipe *pipe_info = &ar_pci->pipe_info[ce_state->id];
585 struct ath10k_pci_compl *compl; 658 struct ath10k_pci_compl *compl;
586 struct sk_buff *skb; 659 struct sk_buff *skb;
660 void *transfer_context;
661 u32 ce_data;
662 unsigned int nbytes;
663 unsigned int transfer_id;
664 unsigned int flags;
587 665
588 do { 666 while (ath10k_ce_completed_recv_next(ce_state, &transfer_context,
667 &ce_data, &nbytes, &transfer_id,
668 &flags) == 0) {
589 compl = get_free_compl(pipe_info); 669 compl = get_free_compl(pipe_info);
590 if (!compl) 670 if (!compl)
591 break; 671 break;
592 672
593 compl->send_or_recv = HIF_CE_COMPLETE_RECV; 673 compl->state = ATH10K_PCI_COMPL_RECV;
594 compl->ce_state = ce_state; 674 compl->ce_state = ce_state;
595 compl->pipe_info = pipe_info; 675 compl->pipe_info = pipe_info;
596 compl->transfer_context = transfer_context; 676 compl->skb = transfer_context;
597 compl->nbytes = nbytes; 677 compl->nbytes = nbytes;
598 compl->transfer_id = transfer_id; 678 compl->transfer_id = transfer_id;
599 compl->flags = flags; 679 compl->flags = flags;
@@ -608,12 +688,7 @@ static void ath10k_pci_ce_recv_data(struct ce_state *ce_state,
608 spin_lock_bh(&ar_pci->compl_lock); 688 spin_lock_bh(&ar_pci->compl_lock);
609 list_add_tail(&compl->list, &ar_pci->compl_process); 689 list_add_tail(&compl->list, &ar_pci->compl_process);
610 spin_unlock_bh(&ar_pci->compl_lock); 690 spin_unlock_bh(&ar_pci->compl_lock);
611 691 }
612 } while (ath10k_ce_completed_recv_next(ce_state,
613 &transfer_context,
614 &ce_data, &nbytes,
615 &transfer_id,
616 &flags) == 0);
617 692
618 ath10k_pci_process_ce(ar); 693 ath10k_pci_process_ce(ar);
619} 694}
@@ -625,15 +700,12 @@ static int ath10k_pci_hif_send_head(struct ath10k *ar, u8 pipe_id,
625{ 700{
626 struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(nbuf); 701 struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(nbuf);
627 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 702 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
628 struct hif_ce_pipe_info *pipe_info = &(ar_pci->pipe_info[pipe_id]); 703 struct ath10k_pci_pipe *pipe_info = &(ar_pci->pipe_info[pipe_id]);
629 struct ce_state *ce_hdl = pipe_info->ce_hdl; 704 struct ath10k_ce_pipe *ce_hdl = pipe_info->ce_hdl;
630 struct ce_sendlist sendlist;
631 unsigned int len; 705 unsigned int len;
632 u32 flags = 0; 706 u32 flags = 0;
633 int ret; 707 int ret;
634 708
635 memset(&sendlist, 0, sizeof(struct ce_sendlist));
636
637 len = min(bytes, nbuf->len); 709 len = min(bytes, nbuf->len);
638 bytes -= len; 710 bytes -= len;
639 711
@@ -648,19 +720,8 @@ static int ath10k_pci_hif_send_head(struct ath10k *ar, u8 pipe_id,
648 "ath10k tx: data: ", 720 "ath10k tx: data: ",
649 nbuf->data, nbuf->len); 721 nbuf->data, nbuf->len);
650 722
651 ath10k_ce_sendlist_buf_add(&sendlist, skb_cb->paddr, len, flags); 723 ret = ath10k_ce_send(ce_hdl, nbuf, skb_cb->paddr, len, transfer_id,
652 724 flags);
653 /* Make sure we have resources to handle this request */
654 spin_lock_bh(&pipe_info->pipe_lock);
655 if (!pipe_info->num_sends_allowed) {
656 ath10k_warn("Pipe: %d is full\n", pipe_id);
657 spin_unlock_bh(&pipe_info->pipe_lock);
658 return -ENOSR;
659 }
660 pipe_info->num_sends_allowed--;
661 spin_unlock_bh(&pipe_info->pipe_lock);
662
663 ret = ath10k_ce_sendlist_send(ce_hdl, nbuf, &sendlist, transfer_id);
664 if (ret) 725 if (ret)
665 ath10k_warn("CE send failed: %p\n", nbuf); 726 ath10k_warn("CE send failed: %p\n", nbuf);
666 727
@@ -670,14 +731,7 @@ static int ath10k_pci_hif_send_head(struct ath10k *ar, u8 pipe_id,
670static u16 ath10k_pci_hif_get_free_queue_number(struct ath10k *ar, u8 pipe) 731static u16 ath10k_pci_hif_get_free_queue_number(struct ath10k *ar, u8 pipe)
671{ 732{
672 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 733 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
673 struct hif_ce_pipe_info *pipe_info = &(ar_pci->pipe_info[pipe]); 734 return ath10k_ce_num_free_src_entries(ar_pci->pipe_info[pipe].ce_hdl);
674 int ret;
675
676 spin_lock_bh(&pipe_info->pipe_lock);
677 ret = pipe_info->num_sends_allowed;
678 spin_unlock_bh(&pipe_info->pipe_lock);
679
680 return ret;
681} 735}
682 736
683static void ath10k_pci_hif_dump_area(struct ath10k *ar) 737static void ath10k_pci_hif_dump_area(struct ath10k *ar)
@@ -764,9 +818,9 @@ static void ath10k_pci_hif_set_callbacks(struct ath10k *ar,
764static int ath10k_pci_start_ce(struct ath10k *ar) 818static int ath10k_pci_start_ce(struct ath10k *ar)
765{ 819{
766 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 820 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
767 struct ce_state *ce_diag = ar_pci->ce_diag; 821 struct ath10k_ce_pipe *ce_diag = ar_pci->ce_diag;
768 const struct ce_attr *attr; 822 const struct ce_attr *attr;
769 struct hif_ce_pipe_info *pipe_info; 823 struct ath10k_pci_pipe *pipe_info;
770 struct ath10k_pci_compl *compl; 824 struct ath10k_pci_compl *compl;
771 int i, pipe_num, completions, disable_interrupts; 825 int i, pipe_num, completions, disable_interrupts;
772 826
@@ -792,7 +846,6 @@ static int ath10k_pci_start_ce(struct ath10k *ar)
792 ath10k_pci_ce_send_done, 846 ath10k_pci_ce_send_done,
793 disable_interrupts); 847 disable_interrupts);
794 completions += attr->src_nentries; 848 completions += attr->src_nentries;
795 pipe_info->num_sends_allowed = attr->src_nentries - 1;
796 } 849 }
797 850
798 if (attr->dest_nentries) { 851 if (attr->dest_nentries) {
@@ -805,15 +858,14 @@ static int ath10k_pci_start_ce(struct ath10k *ar)
805 continue; 858 continue;
806 859
807 for (i = 0; i < completions; i++) { 860 for (i = 0; i < completions; i++) {
808 compl = kmalloc(sizeof(struct ath10k_pci_compl), 861 compl = kmalloc(sizeof(*compl), GFP_KERNEL);
809 GFP_KERNEL);
810 if (!compl) { 862 if (!compl) {
811 ath10k_warn("No memory for completion state\n"); 863 ath10k_warn("No memory for completion state\n");
812 ath10k_pci_stop_ce(ar); 864 ath10k_pci_stop_ce(ar);
813 return -ENOMEM; 865 return -ENOMEM;
814 } 866 }
815 867
816 compl->send_or_recv = HIF_CE_COMPLETE_FREE; 868 compl->state = ATH10K_PCI_COMPL_FREE;
817 list_add_tail(&compl->list, &pipe_info->compl_free); 869 list_add_tail(&compl->list, &pipe_info->compl_free);
818 } 870 }
819 } 871 }
@@ -840,7 +892,7 @@ static void ath10k_pci_stop_ce(struct ath10k *ar)
840 * their associated resources */ 892 * their associated resources */
841 spin_lock_bh(&ar_pci->compl_lock); 893 spin_lock_bh(&ar_pci->compl_lock);
842 list_for_each_entry(compl, &ar_pci->compl_process, list) { 894 list_for_each_entry(compl, &ar_pci->compl_process, list) {
843 skb = (struct sk_buff *)compl->transfer_context; 895 skb = compl->skb;
844 ATH10K_SKB_CB(skb)->is_aborted = true; 896 ATH10K_SKB_CB(skb)->is_aborted = true;
845 } 897 }
846 spin_unlock_bh(&ar_pci->compl_lock); 898 spin_unlock_bh(&ar_pci->compl_lock);
@@ -850,7 +902,7 @@ static void ath10k_pci_cleanup_ce(struct ath10k *ar)
850{ 902{
851 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 903 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
852 struct ath10k_pci_compl *compl, *tmp; 904 struct ath10k_pci_compl *compl, *tmp;
853 struct hif_ce_pipe_info *pipe_info; 905 struct ath10k_pci_pipe *pipe_info;
854 struct sk_buff *netbuf; 906 struct sk_buff *netbuf;
855 int pipe_num; 907 int pipe_num;
856 908
@@ -861,7 +913,7 @@ static void ath10k_pci_cleanup_ce(struct ath10k *ar)
861 913
862 list_for_each_entry_safe(compl, tmp, &ar_pci->compl_process, list) { 914 list_for_each_entry_safe(compl, tmp, &ar_pci->compl_process, list) {
863 list_del(&compl->list); 915 list_del(&compl->list);
864 netbuf = (struct sk_buff *)compl->transfer_context; 916 netbuf = compl->skb;
865 dev_kfree_skb_any(netbuf); 917 dev_kfree_skb_any(netbuf);
866 kfree(compl); 918 kfree(compl);
867 } 919 }
@@ -912,12 +964,14 @@ static void ath10k_pci_process_ce(struct ath10k *ar)
912 list_del(&compl->list); 964 list_del(&compl->list);
913 spin_unlock_bh(&ar_pci->compl_lock); 965 spin_unlock_bh(&ar_pci->compl_lock);
914 966
915 if (compl->send_or_recv == HIF_CE_COMPLETE_SEND) { 967 switch (compl->state) {
968 case ATH10K_PCI_COMPL_SEND:
916 cb->tx_completion(ar, 969 cb->tx_completion(ar,
917 compl->transfer_context, 970 compl->skb,
918 compl->transfer_id); 971 compl->transfer_id);
919 send_done = 1; 972 send_done = 1;
920 } else { 973 break;
974 case ATH10K_PCI_COMPL_RECV:
921 ret = ath10k_pci_post_rx_pipe(compl->pipe_info, 1); 975 ret = ath10k_pci_post_rx_pipe(compl->pipe_info, 1);
922 if (ret) { 976 if (ret) {
923 ath10k_warn("Unable to post recv buffer for pipe: %d\n", 977 ath10k_warn("Unable to post recv buffer for pipe: %d\n",
@@ -925,7 +979,7 @@ static void ath10k_pci_process_ce(struct ath10k *ar)
925 break; 979 break;
926 } 980 }
927 981
928 skb = (struct sk_buff *)compl->transfer_context; 982 skb = compl->skb;
929 nbytes = compl->nbytes; 983 nbytes = compl->nbytes;
930 984
931 ath10k_dbg(ATH10K_DBG_PCI, 985 ath10k_dbg(ATH10K_DBG_PCI,
@@ -944,16 +998,23 @@ static void ath10k_pci_process_ce(struct ath10k *ar)
944 nbytes, 998 nbytes,
945 skb->len + skb_tailroom(skb)); 999 skb->len + skb_tailroom(skb));
946 } 1000 }
1001 break;
1002 case ATH10K_PCI_COMPL_FREE:
1003 ath10k_warn("free completion cannot be processed\n");
1004 break;
1005 default:
1006 ath10k_warn("invalid completion state (%d)\n",
1007 compl->state);
1008 break;
947 } 1009 }
948 1010
949 compl->send_or_recv = HIF_CE_COMPLETE_FREE; 1011 compl->state = ATH10K_PCI_COMPL_FREE;
950 1012
951 /* 1013 /*
952 * Add completion back to the pipe's free list. 1014 * Add completion back to the pipe's free list.
953 */ 1015 */
954 spin_lock_bh(&compl->pipe_info->pipe_lock); 1016 spin_lock_bh(&compl->pipe_info->pipe_lock);
955 list_add_tail(&compl->list, &compl->pipe_info->compl_free); 1017 list_add_tail(&compl->list, &compl->pipe_info->compl_free);
956 compl->pipe_info->num_sends_allowed += send_done;
957 spin_unlock_bh(&compl->pipe_info->pipe_lock); 1018 spin_unlock_bh(&compl->pipe_info->pipe_lock);
958 } 1019 }
959 1020
@@ -1037,12 +1098,12 @@ static void ath10k_pci_hif_get_default_pipe(struct ath10k *ar,
1037 &dl_is_polled); 1098 &dl_is_polled);
1038} 1099}
1039 1100
1040static int ath10k_pci_post_rx_pipe(struct hif_ce_pipe_info *pipe_info, 1101static int ath10k_pci_post_rx_pipe(struct ath10k_pci_pipe *pipe_info,
1041 int num) 1102 int num)
1042{ 1103{
1043 struct ath10k *ar = pipe_info->hif_ce_state; 1104 struct ath10k *ar = pipe_info->hif_ce_state;
1044 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 1105 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
1045 struct ce_state *ce_state = pipe_info->ce_hdl; 1106 struct ath10k_ce_pipe *ce_state = pipe_info->ce_hdl;
1046 struct sk_buff *skb; 1107 struct sk_buff *skb;
1047 dma_addr_t ce_data; 1108 dma_addr_t ce_data;
1048 int i, ret = 0; 1109 int i, ret = 0;
@@ -1097,7 +1158,7 @@ err:
1097static int ath10k_pci_post_rx(struct ath10k *ar) 1158static int ath10k_pci_post_rx(struct ath10k *ar)
1098{ 1159{
1099 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 1160 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
1100 struct hif_ce_pipe_info *pipe_info; 1161 struct ath10k_pci_pipe *pipe_info;
1101 const struct ce_attr *attr; 1162 const struct ce_attr *attr;
1102 int pipe_num, ret = 0; 1163 int pipe_num, ret = 0;
1103 1164
@@ -1147,11 +1208,11 @@ static int ath10k_pci_hif_start(struct ath10k *ar)
1147 return 0; 1208 return 0;
1148} 1209}
1149 1210
1150static void ath10k_pci_rx_pipe_cleanup(struct hif_ce_pipe_info *pipe_info) 1211static void ath10k_pci_rx_pipe_cleanup(struct ath10k_pci_pipe *pipe_info)
1151{ 1212{
1152 struct ath10k *ar; 1213 struct ath10k *ar;
1153 struct ath10k_pci *ar_pci; 1214 struct ath10k_pci *ar_pci;
1154 struct ce_state *ce_hdl; 1215 struct ath10k_ce_pipe *ce_hdl;
1155 u32 buf_sz; 1216 u32 buf_sz;
1156 struct sk_buff *netbuf; 1217 struct sk_buff *netbuf;
1157 u32 ce_data; 1218 u32 ce_data;
@@ -1179,11 +1240,11 @@ static void ath10k_pci_rx_pipe_cleanup(struct hif_ce_pipe_info *pipe_info)
1179 } 1240 }
1180} 1241}
1181 1242
1182static void ath10k_pci_tx_pipe_cleanup(struct hif_ce_pipe_info *pipe_info) 1243static void ath10k_pci_tx_pipe_cleanup(struct ath10k_pci_pipe *pipe_info)
1183{ 1244{
1184 struct ath10k *ar; 1245 struct ath10k *ar;
1185 struct ath10k_pci *ar_pci; 1246 struct ath10k_pci *ar_pci;
1186 struct ce_state *ce_hdl; 1247 struct ath10k_ce_pipe *ce_hdl;
1187 struct sk_buff *netbuf; 1248 struct sk_buff *netbuf;
1188 u32 ce_data; 1249 u32 ce_data;
1189 unsigned int nbytes; 1250 unsigned int nbytes;
@@ -1206,15 +1267,14 @@ static void ath10k_pci_tx_pipe_cleanup(struct hif_ce_pipe_info *pipe_info)
1206 1267
1207 while (ath10k_ce_cancel_send_next(ce_hdl, (void **)&netbuf, 1268 while (ath10k_ce_cancel_send_next(ce_hdl, (void **)&netbuf,
1208 &ce_data, &nbytes, &id) == 0) { 1269 &ce_data, &nbytes, &id) == 0) {
1209 if (netbuf != CE_SENDLIST_ITEM_CTXT) 1270 /*
1210 /* 1271 * Indicate the completion to higer layer to free
1211 * Indicate the completion to higer layer to free 1272 * the buffer
1212 * the buffer 1273 */
1213 */ 1274 ATH10K_SKB_CB(netbuf)->is_aborted = true;
1214 ATH10K_SKB_CB(netbuf)->is_aborted = true; 1275 ar_pci->msg_callbacks_current.tx_completion(ar,
1215 ar_pci->msg_callbacks_current.tx_completion(ar, 1276 netbuf,
1216 netbuf, 1277 id);
1217 id);
1218 } 1278 }
1219} 1279}
1220 1280
@@ -1232,7 +1292,7 @@ static void ath10k_pci_buffer_cleanup(struct ath10k *ar)
1232 int pipe_num; 1292 int pipe_num;
1233 1293
1234 for (pipe_num = 0; pipe_num < ar_pci->ce_count; pipe_num++) { 1294 for (pipe_num = 0; pipe_num < ar_pci->ce_count; pipe_num++) {
1235 struct hif_ce_pipe_info *pipe_info; 1295 struct ath10k_pci_pipe *pipe_info;
1236 1296
1237 pipe_info = &ar_pci->pipe_info[pipe_num]; 1297 pipe_info = &ar_pci->pipe_info[pipe_num];
1238 ath10k_pci_rx_pipe_cleanup(pipe_info); 1298 ath10k_pci_rx_pipe_cleanup(pipe_info);
@@ -1243,7 +1303,7 @@ static void ath10k_pci_buffer_cleanup(struct ath10k *ar)
1243static void ath10k_pci_ce_deinit(struct ath10k *ar) 1303static void ath10k_pci_ce_deinit(struct ath10k *ar)
1244{ 1304{
1245 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 1305 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
1246 struct hif_ce_pipe_info *pipe_info; 1306 struct ath10k_pci_pipe *pipe_info;
1247 int pipe_num; 1307 int pipe_num;
1248 1308
1249 for (pipe_num = 0; pipe_num < ar_pci->ce_count; pipe_num++) { 1309 for (pipe_num = 0; pipe_num < ar_pci->ce_count; pipe_num++) {
@@ -1293,8 +1353,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
1293 void *resp, u32 *resp_len) 1353 void *resp, u32 *resp_len)
1294{ 1354{
1295 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 1355 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
1296 struct ce_state *ce_tx = ar_pci->pipe_info[BMI_CE_NUM_TO_TARG].ce_hdl; 1356 struct ath10k_pci_pipe *pci_tx = &ar_pci->pipe_info[BMI_CE_NUM_TO_TARG];
1297 struct ce_state *ce_rx = ar_pci->pipe_info[BMI_CE_NUM_TO_HOST].ce_hdl; 1357 struct ath10k_pci_pipe *pci_rx = &ar_pci->pipe_info[BMI_CE_NUM_TO_HOST];
1358 struct ath10k_ce_pipe *ce_tx = pci_tx->ce_hdl;
1359 struct ath10k_ce_pipe *ce_rx = pci_rx->ce_hdl;
1298 dma_addr_t req_paddr = 0; 1360 dma_addr_t req_paddr = 0;
1299 dma_addr_t resp_paddr = 0; 1361 dma_addr_t resp_paddr = 0;
1300 struct bmi_xfer xfer = {}; 1362 struct bmi_xfer xfer = {};
@@ -1378,13 +1440,16 @@ err_dma:
1378 return ret; 1440 return ret;
1379} 1441}
1380 1442
1381static void ath10k_pci_bmi_send_done(struct ce_state *ce_state, 1443static void ath10k_pci_bmi_send_done(struct ath10k_ce_pipe *ce_state)
1382 void *transfer_context,
1383 u32 data,
1384 unsigned int nbytes,
1385 unsigned int transfer_id)
1386{ 1444{
1387 struct bmi_xfer *xfer = transfer_context; 1445 struct bmi_xfer *xfer;
1446 u32 ce_data;
1447 unsigned int nbytes;
1448 unsigned int transfer_id;
1449
1450 if (ath10k_ce_completed_send_next(ce_state, (void **)&xfer, &ce_data,
1451 &nbytes, &transfer_id))
1452 return;
1388 1453
1389 if (xfer->wait_for_resp) 1454 if (xfer->wait_for_resp)
1390 return; 1455 return;
@@ -1392,14 +1457,17 @@ static void ath10k_pci_bmi_send_done(struct ce_state *ce_state,
1392 complete(&xfer->done); 1457 complete(&xfer->done);
1393} 1458}
1394 1459
1395static void ath10k_pci_bmi_recv_data(struct ce_state *ce_state, 1460static void ath10k_pci_bmi_recv_data(struct ath10k_ce_pipe *ce_state)
1396 void *transfer_context,
1397 u32 data,
1398 unsigned int nbytes,
1399 unsigned int transfer_id,
1400 unsigned int flags)
1401{ 1461{
1402 struct bmi_xfer *xfer = transfer_context; 1462 struct bmi_xfer *xfer;
1463 u32 ce_data;
1464 unsigned int nbytes;
1465 unsigned int transfer_id;
1466 unsigned int flags;
1467
1468 if (ath10k_ce_completed_recv_next(ce_state, (void **)&xfer, &ce_data,
1469 &nbytes, &transfer_id, &flags))
1470 return;
1403 1471
1404 if (!xfer->wait_for_resp) { 1472 if (!xfer->wait_for_resp) {
1405 ath10k_warn("unexpected: BMI data received; ignoring\n"); 1473 ath10k_warn("unexpected: BMI data received; ignoring\n");
@@ -1679,7 +1747,7 @@ static int ath10k_pci_init_config(struct ath10k *ar)
1679static int ath10k_pci_ce_init(struct ath10k *ar) 1747static int ath10k_pci_ce_init(struct ath10k *ar)
1680{ 1748{
1681 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 1749 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
1682 struct hif_ce_pipe_info *pipe_info; 1750 struct ath10k_pci_pipe *pipe_info;
1683 const struct ce_attr *attr; 1751 const struct ce_attr *attr;
1684 int pipe_num; 1752 int pipe_num;
1685 1753
@@ -1895,7 +1963,7 @@ static const struct ath10k_hif_ops ath10k_pci_hif_ops = {
1895 1963
1896static void ath10k_pci_ce_tasklet(unsigned long ptr) 1964static void ath10k_pci_ce_tasklet(unsigned long ptr)
1897{ 1965{
1898 struct hif_ce_pipe_info *pipe = (struct hif_ce_pipe_info *)ptr; 1966 struct ath10k_pci_pipe *pipe = (struct ath10k_pci_pipe *)ptr;
1899 struct ath10k_pci *ar_pci = pipe->ar_pci; 1967 struct ath10k_pci *ar_pci = pipe->ar_pci;
1900 1968
1901 ath10k_ce_per_engine_service(ar_pci->ar, pipe->pipe_num); 1969 ath10k_ce_per_engine_service(ar_pci->ar, pipe->pipe_num);
@@ -2212,18 +2280,13 @@ static int ath10k_pci_reset_target(struct ath10k *ar)
2212 2280
2213static void ath10k_pci_device_reset(struct ath10k *ar) 2281static void ath10k_pci_device_reset(struct ath10k *ar)
2214{ 2282{
2215 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
2216 void __iomem *mem = ar_pci->mem;
2217 int i; 2283 int i;
2218 u32 val; 2284 u32 val;
2219 2285
2220 if (!SOC_GLOBAL_RESET_ADDRESS) 2286 if (!SOC_GLOBAL_RESET_ADDRESS)
2221 return; 2287 return;
2222 2288
2223 if (!mem) 2289 ath10k_pci_reg_write32(ar, PCIE_SOC_WAKE_ADDRESS,
2224 return;
2225
2226 ath10k_pci_reg_write32(mem, PCIE_SOC_WAKE_ADDRESS,
2227 PCIE_SOC_WAKE_V_MASK); 2290 PCIE_SOC_WAKE_V_MASK);
2228 for (i = 0; i < ATH_PCI_RESET_WAIT_MAX; i++) { 2291 for (i = 0; i < ATH_PCI_RESET_WAIT_MAX; i++) {
2229 if (ath10k_pci_target_is_awake(ar)) 2292 if (ath10k_pci_target_is_awake(ar))
@@ -2232,12 +2295,12 @@ static void ath10k_pci_device_reset(struct ath10k *ar)
2232 } 2295 }
2233 2296
2234 /* Put Target, including PCIe, into RESET. */ 2297 /* Put Target, including PCIe, into RESET. */
2235 val = ath10k_pci_reg_read32(mem, SOC_GLOBAL_RESET_ADDRESS); 2298 val = ath10k_pci_reg_read32(ar, SOC_GLOBAL_RESET_ADDRESS);
2236 val |= 1; 2299 val |= 1;
2237 ath10k_pci_reg_write32(mem, SOC_GLOBAL_RESET_ADDRESS, val); 2300 ath10k_pci_reg_write32(ar, SOC_GLOBAL_RESET_ADDRESS, val);
2238 2301
2239 for (i = 0; i < ATH_PCI_RESET_WAIT_MAX; i++) { 2302 for (i = 0; i < ATH_PCI_RESET_WAIT_MAX; i++) {
2240 if (ath10k_pci_reg_read32(mem, RTC_STATE_ADDRESS) & 2303 if (ath10k_pci_reg_read32(ar, RTC_STATE_ADDRESS) &
2241 RTC_STATE_COLD_RESET_MASK) 2304 RTC_STATE_COLD_RESET_MASK)
2242 break; 2305 break;
2243 msleep(1); 2306 msleep(1);
@@ -2245,16 +2308,16 @@ static void ath10k_pci_device_reset(struct ath10k *ar)
2245 2308
2246 /* Pull Target, including PCIe, out of RESET. */ 2309 /* Pull Target, including PCIe, out of RESET. */
2247 val &= ~1; 2310 val &= ~1;
2248 ath10k_pci_reg_write32(mem, SOC_GLOBAL_RESET_ADDRESS, val); 2311 ath10k_pci_reg_write32(ar, SOC_GLOBAL_RESET_ADDRESS, val);
2249 2312
2250 for (i = 0; i < ATH_PCI_RESET_WAIT_MAX; i++) { 2313 for (i = 0; i < ATH_PCI_RESET_WAIT_MAX; i++) {
2251 if (!(ath10k_pci_reg_read32(mem, RTC_STATE_ADDRESS) & 2314 if (!(ath10k_pci_reg_read32(ar, RTC_STATE_ADDRESS) &
2252 RTC_STATE_COLD_RESET_MASK)) 2315 RTC_STATE_COLD_RESET_MASK))
2253 break; 2316 break;
2254 msleep(1); 2317 msleep(1);
2255 } 2318 }
2256 2319
2257 ath10k_pci_reg_write32(mem, PCIE_SOC_WAKE_ADDRESS, PCIE_SOC_WAKE_RESET); 2320 ath10k_pci_reg_write32(ar, PCIE_SOC_WAKE_ADDRESS, PCIE_SOC_WAKE_RESET);
2258} 2321}
2259 2322
2260static void ath10k_pci_dump_features(struct ath10k_pci *ar_pci) 2323static void ath10k_pci_dump_features(struct ath10k_pci *ar_pci)
@@ -2267,13 +2330,10 @@ static void ath10k_pci_dump_features(struct ath10k_pci *ar_pci)
2267 2330
2268 switch (i) { 2331 switch (i) {
2269 case ATH10K_PCI_FEATURE_MSI_X: 2332 case ATH10K_PCI_FEATURE_MSI_X:
2270 ath10k_dbg(ATH10K_DBG_PCI, "device supports MSI-X\n"); 2333 ath10k_dbg(ATH10K_DBG_BOOT, "device supports MSI-X\n");
2271 break;
2272 case ATH10K_PCI_FEATURE_HW_1_0_WORKAROUND:
2273 ath10k_dbg(ATH10K_DBG_PCI, "QCA988X_1.0 workaround enabled\n");
2274 break; 2334 break;
2275 case ATH10K_PCI_FEATURE_SOC_POWER_SAVE: 2335 case ATH10K_PCI_FEATURE_SOC_POWER_SAVE:
2276 ath10k_dbg(ATH10K_DBG_PCI, "QCA98XX SoC power save enabled\n"); 2336 ath10k_dbg(ATH10K_DBG_BOOT, "QCA98XX SoC power save enabled\n");
2277 break; 2337 break;
2278 } 2338 }
2279 } 2339 }
@@ -2286,7 +2346,7 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
2286 int ret = 0; 2346 int ret = 0;
2287 struct ath10k *ar; 2347 struct ath10k *ar;
2288 struct ath10k_pci *ar_pci; 2348 struct ath10k_pci *ar_pci;
2289 u32 lcr_val; 2349 u32 lcr_val, chip_id;
2290 2350
2291 ath10k_dbg(ATH10K_DBG_PCI, "%s\n", __func__); 2351 ath10k_dbg(ATH10K_DBG_PCI, "%s\n", __func__);
2292 2352
@@ -2298,9 +2358,6 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
2298 ar_pci->dev = &pdev->dev; 2358 ar_pci->dev = &pdev->dev;
2299 2359
2300 switch (pci_dev->device) { 2360 switch (pci_dev->device) {
2301 case QCA988X_1_0_DEVICE_ID:
2302 set_bit(ATH10K_PCI_FEATURE_HW_1_0_WORKAROUND, ar_pci->features);
2303 break;
2304 case QCA988X_2_0_DEVICE_ID: 2361 case QCA988X_2_0_DEVICE_ID:
2305 set_bit(ATH10K_PCI_FEATURE_MSI_X, ar_pci->features); 2362 set_bit(ATH10K_PCI_FEATURE_MSI_X, ar_pci->features);
2306 break; 2363 break;
@@ -2322,10 +2379,6 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
2322 goto err_ar_pci; 2379 goto err_ar_pci;
2323 } 2380 }
2324 2381
2325 /* Enable QCA988X_1.0 HW workarounds */
2326 if (test_bit(ATH10K_PCI_FEATURE_HW_1_0_WORKAROUND, ar_pci->features))
2327 spin_lock_init(&ar_pci->hw_v1_workaround_lock);
2328
2329 ar_pci->ar = ar; 2382 ar_pci->ar = ar;
2330 ar_pci->fw_indicator_address = FW_INDICATOR_ADDRESS; 2383 ar_pci->fw_indicator_address = FW_INDICATOR_ADDRESS;
2331 atomic_set(&ar_pci->keep_awake_count, 0); 2384 atomic_set(&ar_pci->keep_awake_count, 0);
@@ -2395,9 +2448,20 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
2395 2448
2396 spin_lock_init(&ar_pci->ce_lock); 2449 spin_lock_init(&ar_pci->ce_lock);
2397 2450
2398 ar_pci->cacheline_sz = dma_get_cache_alignment(); 2451 ret = ath10k_do_pci_wake(ar);
2452 if (ret) {
2453 ath10k_err("Failed to get chip id: %d\n", ret);
2454 return ret;
2455 }
2456
2457 chip_id = ath10k_pci_read32(ar,
2458 RTC_SOC_BASE_ADDRESS + SOC_CHIP_ID_ADDRESS);
2459
2460 ath10k_do_pci_sleep(ar);
2461
2462 ath10k_dbg(ATH10K_DBG_BOOT, "boot pci_mem 0x%p\n", ar_pci->mem);
2399 2463
2400 ret = ath10k_core_register(ar); 2464 ret = ath10k_core_register(ar, chip_id);
2401 if (ret) { 2465 if (ret) {
2402 ath10k_err("could not register driver core (%d)\n", ret); 2466 ath10k_err("could not register driver core (%d)\n", ret);
2403 goto err_iomap; 2467 goto err_iomap;
@@ -2414,7 +2478,6 @@ err_region:
2414err_device: 2478err_device:
2415 pci_disable_device(pdev); 2479 pci_disable_device(pdev);
2416err_ar: 2480err_ar:
2417 pci_set_drvdata(pdev, NULL);
2418 ath10k_core_destroy(ar); 2481 ath10k_core_destroy(ar);
2419err_ar_pci: 2482err_ar_pci:
2420 /* call HIF PCI free here */ 2483 /* call HIF PCI free here */
@@ -2442,7 +2505,6 @@ static void ath10k_pci_remove(struct pci_dev *pdev)
2442 2505
2443 ath10k_core_unregister(ar); 2506 ath10k_core_unregister(ar);
2444 2507
2445 pci_set_drvdata(pdev, NULL);
2446 pci_iounmap(pdev, ar_pci->mem); 2508 pci_iounmap(pdev, ar_pci->mem);
2447 pci_release_region(pdev, BAR_NUM); 2509 pci_release_region(pdev, BAR_NUM);
2448 pci_clear_master(pdev); 2510 pci_clear_master(pdev);
@@ -2483,9 +2545,6 @@ module_exit(ath10k_pci_exit);
2483MODULE_AUTHOR("Qualcomm Atheros"); 2545MODULE_AUTHOR("Qualcomm Atheros");
2484MODULE_DESCRIPTION("Driver support for Atheros QCA988X PCIe devices"); 2546MODULE_DESCRIPTION("Driver support for Atheros QCA988X PCIe devices");
2485MODULE_LICENSE("Dual BSD/GPL"); 2547MODULE_LICENSE("Dual BSD/GPL");
2486MODULE_FIRMWARE(QCA988X_HW_1_0_FW_DIR "/" QCA988X_HW_1_0_FW_FILE);
2487MODULE_FIRMWARE(QCA988X_HW_1_0_FW_DIR "/" QCA988X_HW_1_0_OTP_FILE);
2488MODULE_FIRMWARE(QCA988X_HW_1_0_FW_DIR "/" QCA988X_HW_1_0_BOARD_DATA_FILE);
2489MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_FW_FILE); 2548MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_FW_FILE);
2490MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_OTP_FILE); 2549MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_OTP_FILE);
2491MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_BOARD_DATA_FILE); 2550MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_BOARD_DATA_FILE);
diff --git a/drivers/net/wireless/ath/ath10k/pci.h b/drivers/net/wireless/ath/ath10k/pci.h
index 871bb339d56d..52fb7b973571 100644
--- a/drivers/net/wireless/ath/ath10k/pci.h
+++ b/drivers/net/wireless/ath/ath10k/pci.h
@@ -43,22 +43,23 @@ struct bmi_xfer {
43 u32 resp_len; 43 u32 resp_len;
44}; 44};
45 45
46enum ath10k_pci_compl_state {
47 ATH10K_PCI_COMPL_FREE = 0,
48 ATH10K_PCI_COMPL_SEND,
49 ATH10K_PCI_COMPL_RECV,
50};
51
46struct ath10k_pci_compl { 52struct ath10k_pci_compl {
47 struct list_head list; 53 struct list_head list;
48 int send_or_recv; 54 enum ath10k_pci_compl_state state;
49 struct ce_state *ce_state; 55 struct ath10k_ce_pipe *ce_state;
50 struct hif_ce_pipe_info *pipe_info; 56 struct ath10k_pci_pipe *pipe_info;
51 void *transfer_context; 57 struct sk_buff *skb;
52 unsigned int nbytes; 58 unsigned int nbytes;
53 unsigned int transfer_id; 59 unsigned int transfer_id;
54 unsigned int flags; 60 unsigned int flags;
55}; 61};
56 62
57/* compl_state.send_or_recv */
58#define HIF_CE_COMPLETE_FREE 0
59#define HIF_CE_COMPLETE_SEND 1
60#define HIF_CE_COMPLETE_RECV 2
61
62/* 63/*
63 * PCI-specific Target state 64 * PCI-specific Target state
64 * 65 *
@@ -152,17 +153,16 @@ struct service_to_pipe {
152 153
153enum ath10k_pci_features { 154enum ath10k_pci_features {
154 ATH10K_PCI_FEATURE_MSI_X = 0, 155 ATH10K_PCI_FEATURE_MSI_X = 0,
155 ATH10K_PCI_FEATURE_HW_1_0_WORKAROUND = 1, 156 ATH10K_PCI_FEATURE_SOC_POWER_SAVE = 1,
156 ATH10K_PCI_FEATURE_SOC_POWER_SAVE = 2,
157 157
158 /* keep last */ 158 /* keep last */
159 ATH10K_PCI_FEATURE_COUNT 159 ATH10K_PCI_FEATURE_COUNT
160}; 160};
161 161
162/* Per-pipe state. */ 162/* Per-pipe state. */
163struct hif_ce_pipe_info { 163struct ath10k_pci_pipe {
164 /* Handle of underlying Copy Engine */ 164 /* Handle of underlying Copy Engine */
165 struct ce_state *ce_hdl; 165 struct ath10k_ce_pipe *ce_hdl;
166 166
 167 /* Our pipe number; facilitates use of pipe_info ptrs. */ 167 /* Our pipe number; facilitates use of pipe_info ptrs. */
168 u8 pipe_num; 168 u8 pipe_num;
@@ -178,9 +178,6 @@ struct hif_ce_pipe_info {
178 /* List of free CE completion slots */ 178 /* List of free CE completion slots */
179 struct list_head compl_free; 179 struct list_head compl_free;
180 180
181 /* Limit the number of outstanding send requests. */
182 int num_sends_allowed;
183
184 struct ath10k_pci *ar_pci; 181 struct ath10k_pci *ar_pci;
185 struct tasklet_struct intr; 182 struct tasklet_struct intr;
186}; 183};
@@ -190,7 +187,6 @@ struct ath10k_pci {
190 struct device *dev; 187 struct device *dev;
191 struct ath10k *ar; 188 struct ath10k *ar;
192 void __iomem *mem; 189 void __iomem *mem;
193 int cacheline_sz;
194 190
195 DECLARE_BITMAP(features, ATH10K_PCI_FEATURE_COUNT); 191 DECLARE_BITMAP(features, ATH10K_PCI_FEATURE_COUNT);
196 192
@@ -219,7 +215,7 @@ struct ath10k_pci {
219 215
220 bool compl_processing; 216 bool compl_processing;
221 217
222 struct hif_ce_pipe_info pipe_info[CE_COUNT_MAX]; 218 struct ath10k_pci_pipe pipe_info[CE_COUNT_MAX];
223 219
224 struct ath10k_hif_cb msg_callbacks_current; 220 struct ath10k_hif_cb msg_callbacks_current;
225 221
@@ -227,16 +223,13 @@ struct ath10k_pci {
227 u32 fw_indicator_address; 223 u32 fw_indicator_address;
228 224
229 /* Copy Engine used for Diagnostic Accesses */ 225 /* Copy Engine used for Diagnostic Accesses */
230 struct ce_state *ce_diag; 226 struct ath10k_ce_pipe *ce_diag;
231 227
232 /* FIXME: document what this really protects */ 228 /* FIXME: document what this really protects */
233 spinlock_t ce_lock; 229 spinlock_t ce_lock;
234 230
235 /* Map CE id to ce_state */ 231 /* Map CE id to ce_state */
236 struct ce_state *ce_id_to_state[CE_COUNT_MAX]; 232 struct ath10k_ce_pipe ce_states[CE_COUNT_MAX];
237
238 /* makes sure that dummy reads are atomic */
239 spinlock_t hw_v1_workaround_lock;
240}; 233};
241 234
242static inline struct ath10k_pci *ath10k_pci_priv(struct ath10k *ar) 235static inline struct ath10k_pci *ath10k_pci_priv(struct ath10k *ar)
@@ -244,14 +237,18 @@ static inline struct ath10k_pci *ath10k_pci_priv(struct ath10k *ar)
244 return ar->hif.priv; 237 return ar->hif.priv;
245} 238}
246 239
247static inline u32 ath10k_pci_reg_read32(void __iomem *mem, u32 addr) 240static inline u32 ath10k_pci_reg_read32(struct ath10k *ar, u32 addr)
248{ 241{
249 return ioread32(mem + PCIE_LOCAL_BASE_ADDRESS + addr); 242 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
243
244 return ioread32(ar_pci->mem + PCIE_LOCAL_BASE_ADDRESS + addr);
250} 245}
251 246
252static inline void ath10k_pci_reg_write32(void __iomem *mem, u32 addr, u32 val) 247static inline void ath10k_pci_reg_write32(struct ath10k *ar, u32 addr, u32 val)
253{ 248{
254 iowrite32(val, mem + PCIE_LOCAL_BASE_ADDRESS + addr); 249 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
250
251 iowrite32(val, ar_pci->mem + PCIE_LOCAL_BASE_ADDRESS + addr);
255} 252}
256 253
257#define ATH_PCI_RESET_WAIT_MAX 10 /* ms */ 254#define ATH_PCI_RESET_WAIT_MAX 10 /* ms */
@@ -310,23 +307,8 @@ static inline void ath10k_pci_write32(struct ath10k *ar, u32 offset,
310 u32 value) 307 u32 value)
311{ 308{
312 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 309 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
313 void __iomem *addr = ar_pci->mem;
314
315 if (test_bit(ATH10K_PCI_FEATURE_HW_1_0_WORKAROUND, ar_pci->features)) {
316 unsigned long irq_flags;
317 310
318 spin_lock_irqsave(&ar_pci->hw_v1_workaround_lock, irq_flags); 311 iowrite32(value, ar_pci->mem + offset);
319
320 ioread32(addr+offset+4); /* 3rd read prior to write */
321 ioread32(addr+offset+4); /* 2nd read prior to write */
322 ioread32(addr+offset+4); /* 1st read prior to write */
323 iowrite32(value, addr+offset);
324
325 spin_unlock_irqrestore(&ar_pci->hw_v1_workaround_lock,
326 irq_flags);
327 } else {
328 iowrite32(value, addr+offset);
329 }
330} 312}
331 313
332static inline u32 ath10k_pci_read32(struct ath10k *ar, u32 offset) 314static inline u32 ath10k_pci_read32(struct ath10k *ar, u32 offset)
@@ -336,15 +318,17 @@ static inline u32 ath10k_pci_read32(struct ath10k *ar, u32 offset)
336 return ioread32(ar_pci->mem + offset); 318 return ioread32(ar_pci->mem + offset);
337} 319}
338 320
339void ath10k_do_pci_wake(struct ath10k *ar); 321int ath10k_do_pci_wake(struct ath10k *ar);
340void ath10k_do_pci_sleep(struct ath10k *ar); 322void ath10k_do_pci_sleep(struct ath10k *ar);
341 323
342static inline void ath10k_pci_wake(struct ath10k *ar) 324static inline int ath10k_pci_wake(struct ath10k *ar)
343{ 325{
344 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 326 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
345 327
346 if (test_bit(ATH10K_PCI_FEATURE_SOC_POWER_SAVE, ar_pci->features)) 328 if (test_bit(ATH10K_PCI_FEATURE_SOC_POWER_SAVE, ar_pci->features))
347 ath10k_do_pci_wake(ar); 329 return ath10k_do_pci_wake(ar);
330
331 return 0;
348} 332}
349 333
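
A minimal caller sketch (illustrative only, not part of this patch) showing why the
void-to-int conversion matters: register access can now be skipped when the target
fails to wake, with the hypothetical helper below propagating the error:

static int ath10k_pci_read32_checked(struct ath10k *ar, u32 offset, u32 *val)
{
	int ret;

	ret = ath10k_pci_wake(ar);
	if (ret)	/* e.g. -ETIMEDOUT from ath10k_do_pci_wake() */
		return ret;

	*val = ath10k_pci_read32(ar, offset);
	ath10k_pci_sleep(ar);

	return 0;
}
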
350static inline void ath10k_pci_sleep(struct ath10k *ar) 334static inline void ath10k_pci_sleep(struct ath10k *ar)
diff --git a/drivers/net/wireless/ath/ath10k/rx_desc.h b/drivers/net/wireless/ath/ath10k/rx_desc.h
index bfec6c8f2ecb..1c584c4b019c 100644
--- a/drivers/net/wireless/ath/ath10k/rx_desc.h
+++ b/drivers/net/wireless/ath/ath10k/rx_desc.h
@@ -422,10 +422,30 @@ struct rx_mpdu_end {
422#define RX_MSDU_START_INFO1_IP_FRAG (1 << 14) 422#define RX_MSDU_START_INFO1_IP_FRAG (1 << 14)
423#define RX_MSDU_START_INFO1_TCP_ONLY_ACK (1 << 15) 423#define RX_MSDU_START_INFO1_TCP_ONLY_ACK (1 << 15)
424 424
425/* The decapped header (rx_hdr_status) contains the following:
426 * a) 802.11 header
427 * [padding to 4 bytes]
428 * b) HW crypto parameter
429 * - 0 bytes for no security
430 * - 4 bytes for WEP
431 * - 8 bytes for TKIP, AES
432 * [padding to 4 bytes]
 433 * c) A-MSDU subframe header (14 bytes) if applicable
434 * d) LLC/SNAP (RFC1042, 8 bytes)
435 *
 436 * In case of an A-MSDU, only the first frame in the sequence contains (a) and (b). */
425enum rx_msdu_decap_format { 437enum rx_msdu_decap_format {
426 RX_MSDU_DECAP_RAW = 0, 438 RX_MSDU_DECAP_RAW = 0,
427 RX_MSDU_DECAP_NATIVE_WIFI = 1, 439
440 /* Note: QoS frames are reported as non-QoS. The rx_hdr_status in
441 * htt_rx_desc contains the original decapped 802.11 header. */
442 RX_MSDU_DECAP_NATIVE_WIFI = 1,
443
444 /* Payload contains an ethernet header (struct ethhdr). */
428 RX_MSDU_DECAP_ETHERNET2_DIX = 2, 445 RX_MSDU_DECAP_ETHERNET2_DIX = 2,
446
447 /* Payload contains two 48-bit addresses and 2-byte length (14 bytes
448 * total), followed by an RFC1042 header (8 bytes). */
429 RX_MSDU_DECAP_8023_SNAP_LLC = 3 449 RX_MSDU_DECAP_8023_SNAP_LLC = 3
430}; 450};
431 451
diff --git a/drivers/net/wireless/ath/ath10k/trace.h b/drivers/net/wireless/ath/ath10k/trace.h
index 85e806bf7257..90817ddc92ba 100644
--- a/drivers/net/wireless/ath/ath10k/trace.h
+++ b/drivers/net/wireless/ath/ath10k/trace.h
@@ -111,26 +111,29 @@ TRACE_EVENT(ath10k_log_dbg_dump,
111); 111);
112 112
113TRACE_EVENT(ath10k_wmi_cmd, 113TRACE_EVENT(ath10k_wmi_cmd,
114 TP_PROTO(int id, void *buf, size_t buf_len), 114 TP_PROTO(int id, void *buf, size_t buf_len, int ret),
115 115
116 TP_ARGS(id, buf, buf_len), 116 TP_ARGS(id, buf, buf_len, ret),
117 117
118 TP_STRUCT__entry( 118 TP_STRUCT__entry(
119 __field(unsigned int, id) 119 __field(unsigned int, id)
120 __field(size_t, buf_len) 120 __field(size_t, buf_len)
121 __dynamic_array(u8, buf, buf_len) 121 __dynamic_array(u8, buf, buf_len)
122 __field(int, ret)
122 ), 123 ),
123 124
124 TP_fast_assign( 125 TP_fast_assign(
125 __entry->id = id; 126 __entry->id = id;
126 __entry->buf_len = buf_len; 127 __entry->buf_len = buf_len;
128 __entry->ret = ret;
127 memcpy(__get_dynamic_array(buf), buf, buf_len); 129 memcpy(__get_dynamic_array(buf), buf, buf_len);
128 ), 130 ),
129 131
130 TP_printk( 132 TP_printk(
131 "id %d len %zu", 133 "id %d len %zu ret %d",
132 __entry->id, 134 __entry->id,
133 __entry->buf_len 135 __entry->buf_len,
136 __entry->ret
134 ) 137 )
135); 138);
136 139
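
The new ret field means the tracepoint is now emitted after the send attempt; the
generated helper is trace_ath10k_wmi_cmd() (that much follows from TRACE_EVENT
semantics), so a call site would look roughly like this sketch, where
wmi_send_skb() stands in for whatever send helper the driver uses:

	ret = wmi_send_skb(ar, skb);	/* assumed helper name */
	trace_ath10k_wmi_cmd(cmd_id, skb->data, skb->len, ret);
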
@@ -158,6 +161,27 @@ TRACE_EVENT(ath10k_wmi_event,
158 ) 161 )
159); 162);
160 163
164TRACE_EVENT(ath10k_htt_stats,
165 TP_PROTO(void *buf, size_t buf_len),
166
167 TP_ARGS(buf, buf_len),
168
169 TP_STRUCT__entry(
170 __field(size_t, buf_len)
171 __dynamic_array(u8, buf, buf_len)
172 ),
173
174 TP_fast_assign(
175 __entry->buf_len = buf_len;
176 memcpy(__get_dynamic_array(buf), buf, buf_len);
177 ),
178
179 TP_printk(
180 "len %zu",
181 __entry->buf_len
182 )
183);
184
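
As with any TRACE_EVENT, the definition above expands to a
trace_ath10k_htt_stats() helper; a hypothetical call site would simply forward
the received stats buffer:

	trace_ath10k_htt_stats(skb->data, skb->len);
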
161#endif /* _TRACE_H_ || TRACE_HEADER_MULTI_READ*/ 185#endif /* _TRACE_H_ || TRACE_HEADER_MULTI_READ*/
162 186
163/* we don't want to use include/trace/events */ 187/* we don't want to use include/trace/events */
diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
index 68b6faefd1d8..5ae373a1e294 100644
--- a/drivers/net/wireless/ath/ath10k/txrx.c
+++ b/drivers/net/wireless/ath/ath10k/txrx.c
@@ -44,40 +44,39 @@ out:
44 spin_unlock_bh(&ar->data_lock); 44 spin_unlock_bh(&ar->data_lock);
45} 45}
46 46
47void ath10k_txrx_tx_unref(struct ath10k_htt *htt, struct sk_buff *txdesc) 47void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
48 const struct htt_tx_done *tx_done)
48{ 49{
49 struct device *dev = htt->ar->dev; 50 struct device *dev = htt->ar->dev;
50 struct ieee80211_tx_info *info; 51 struct ieee80211_tx_info *info;
51 struct sk_buff *txfrag = ATH10K_SKB_CB(txdesc)->htt.txfrag; 52 struct ath10k_skb_cb *skb_cb;
52 struct sk_buff *msdu = ATH10K_SKB_CB(txdesc)->htt.msdu; 53 struct sk_buff *msdu;
53 int ret; 54 int ret;
54 55
55 if (ATH10K_SKB_CB(txdesc)->htt.refcount == 0) 56 ath10k_dbg(ATH10K_DBG_HTT, "htt tx completion msdu_id %u discard %d no_ack %d\n",
56 return; 57 tx_done->msdu_id, !!tx_done->discard, !!tx_done->no_ack);
57
58 ATH10K_SKB_CB(txdesc)->htt.refcount--;
59 58
60 if (ATH10K_SKB_CB(txdesc)->htt.refcount > 0) 59 if (tx_done->msdu_id >= htt->max_num_pending_tx) {
60 ath10k_warn("warning: msdu_id %d too big, ignoring\n",
61 tx_done->msdu_id);
61 return; 62 return;
62
63 if (txfrag) {
64 ret = ath10k_skb_unmap(dev, txfrag);
65 if (ret)
66 ath10k_warn("txfrag unmap failed (%d)\n", ret);
67
68 dev_kfree_skb_any(txfrag);
69 } 63 }
70 64
65 msdu = htt->pending_tx[tx_done->msdu_id];
66 skb_cb = ATH10K_SKB_CB(msdu);
67
71 ret = ath10k_skb_unmap(dev, msdu); 68 ret = ath10k_skb_unmap(dev, msdu);
72 if (ret) 69 if (ret)
73 ath10k_warn("data skb unmap failed (%d)\n", ret); 70 ath10k_warn("data skb unmap failed (%d)\n", ret);
74 71
72 if (skb_cb->htt.frag_len)
73 skb_pull(msdu, skb_cb->htt.frag_len + skb_cb->htt.pad_len);
74
75 ath10k_report_offchan_tx(htt->ar, msdu); 75 ath10k_report_offchan_tx(htt->ar, msdu);
76 76
77 info = IEEE80211_SKB_CB(msdu); 77 info = IEEE80211_SKB_CB(msdu);
78 memset(&info->status, 0, sizeof(info->status));
79 78
80 if (ATH10K_SKB_CB(txdesc)->htt.discard) { 79 if (tx_done->discard) {
81 ieee80211_free_txskb(htt->ar->hw, msdu); 80 ieee80211_free_txskb(htt->ar->hw, msdu);
82 goto exit; 81 goto exit;
83 } 82 }
@@ -85,7 +84,7 @@ void ath10k_txrx_tx_unref(struct ath10k_htt *htt, struct sk_buff *txdesc)
85 if (!(info->flags & IEEE80211_TX_CTL_NO_ACK)) 84 if (!(info->flags & IEEE80211_TX_CTL_NO_ACK))
86 info->flags |= IEEE80211_TX_STAT_ACK; 85 info->flags |= IEEE80211_TX_STAT_ACK;
87 86
88 if (ATH10K_SKB_CB(txdesc)->htt.no_ack) 87 if (tx_done->no_ack)
89 info->flags &= ~IEEE80211_TX_STAT_ACK; 88 info->flags &= ~IEEE80211_TX_STAT_ACK;
90 89
91 ieee80211_tx_status(htt->ar->hw, msdu); 90 ieee80211_tx_status(htt->ar->hw, msdu);
@@ -93,36 +92,12 @@ void ath10k_txrx_tx_unref(struct ath10k_htt *htt, struct sk_buff *txdesc)
93 92
94exit: 93exit:
95 spin_lock_bh(&htt->tx_lock); 94 spin_lock_bh(&htt->tx_lock);
96 htt->pending_tx[ATH10K_SKB_CB(txdesc)->htt.msdu_id] = NULL; 95 htt->pending_tx[tx_done->msdu_id] = NULL;
97 ath10k_htt_tx_free_msdu_id(htt, ATH10K_SKB_CB(txdesc)->htt.msdu_id); 96 ath10k_htt_tx_free_msdu_id(htt, tx_done->msdu_id);
98 __ath10k_htt_tx_dec_pending(htt); 97 __ath10k_htt_tx_dec_pending(htt);
99 if (bitmap_empty(htt->used_msdu_ids, htt->max_num_pending_tx)) 98 if (htt->num_pending_tx == 0)
100 wake_up(&htt->empty_tx_wq); 99 wake_up(&htt->empty_tx_wq);
101 spin_unlock_bh(&htt->tx_lock); 100 spin_unlock_bh(&htt->tx_lock);
102
103 dev_kfree_skb_any(txdesc);
104}
105
106void ath10k_txrx_tx_completed(struct ath10k_htt *htt,
107 const struct htt_tx_done *tx_done)
108{
109 struct sk_buff *txdesc;
110
111 ath10k_dbg(ATH10K_DBG_HTT, "htt tx completion msdu_id %u discard %d no_ack %d\n",
112 tx_done->msdu_id, !!tx_done->discard, !!tx_done->no_ack);
113
114 if (tx_done->msdu_id >= htt->max_num_pending_tx) {
115 ath10k_warn("warning: msdu_id %d too big, ignoring\n",
116 tx_done->msdu_id);
117 return;
118 }
119
120 txdesc = htt->pending_tx[tx_done->msdu_id];
121
122 ATH10K_SKB_CB(txdesc)->htt.discard = tx_done->discard;
123 ATH10K_SKB_CB(txdesc)->htt.no_ack = tx_done->no_ack;
124
125 ath10k_txrx_tx_unref(htt, txdesc);
126} 101}
127 102
128static const u8 rx_legacy_rate_idx[] = { 103static const u8 rx_legacy_rate_idx[] = {
@@ -293,6 +268,8 @@ void ath10k_process_rx(struct ath10k *ar, struct htt_rx_info *info)
293 status->vht_nss, 268 status->vht_nss,
294 status->freq, 269 status->freq,
295 status->band); 270 status->band);
271 ath10k_dbg_dump(ATH10K_DBG_HTT_DUMP, NULL, "rx skb: ",
272 info->skb->data, info->skb->len);
296 273
297 ieee80211_rx(ar->hw, info->skb); 274 ieee80211_rx(ar->hw, info->skb);
298} 275}
diff --git a/drivers/net/wireless/ath/ath10k/txrx.h b/drivers/net/wireless/ath/ath10k/txrx.h
index e78632a76df7..356dc9c04c9e 100644
--- a/drivers/net/wireless/ath/ath10k/txrx.h
+++ b/drivers/net/wireless/ath/ath10k/txrx.h
@@ -19,9 +19,8 @@
19 19
20#include "htt.h" 20#include "htt.h"
21 21
22void ath10k_txrx_tx_unref(struct ath10k_htt *htt, struct sk_buff *txdesc); 22void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
23void ath10k_txrx_tx_completed(struct ath10k_htt *htt, 23 const struct htt_tx_done *tx_done);
24 const struct htt_tx_done *tx_done);
25void ath10k_process_rx(struct ath10k *ar, struct htt_rx_info *info); 24void ath10k_process_rx(struct ath10k *ar, struct htt_rx_info *info);
26 25
27struct ath10k_peer *ath10k_peer_find(struct ath10k *ar, int vdev_id, 26struct ath10k_peer *ath10k_peer_find(struct ath10k *ar, int vdev_id,
diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
index 55f90c761868..ccf3597fd9e2 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.c
+++ b/drivers/net/wireless/ath/ath10k/wmi.c
@@ -23,29 +23,470 @@
23#include "wmi.h" 23#include "wmi.h"
24#include "mac.h" 24#include "mac.h"
25 25
26void ath10k_wmi_flush_tx(struct ath10k *ar) 26/* MAIN WMI cmd track */
27{ 27static struct wmi_cmd_map wmi_cmd_map = {
28 int ret; 28 .init_cmdid = WMI_INIT_CMDID,
29 29 .start_scan_cmdid = WMI_START_SCAN_CMDID,
30 lockdep_assert_held(&ar->conf_mutex); 30 .stop_scan_cmdid = WMI_STOP_SCAN_CMDID,
31 31 .scan_chan_list_cmdid = WMI_SCAN_CHAN_LIST_CMDID,
32 if (ar->state == ATH10K_STATE_WEDGED) { 32 .scan_sch_prio_tbl_cmdid = WMI_SCAN_SCH_PRIO_TBL_CMDID,
33 ath10k_warn("wmi flush skipped - device is wedged anyway\n"); 33 .pdev_set_regdomain_cmdid = WMI_PDEV_SET_REGDOMAIN_CMDID,
34 return; 34 .pdev_set_channel_cmdid = WMI_PDEV_SET_CHANNEL_CMDID,
35 } 35 .pdev_set_param_cmdid = WMI_PDEV_SET_PARAM_CMDID,
36 36 .pdev_pktlog_enable_cmdid = WMI_PDEV_PKTLOG_ENABLE_CMDID,
37 ret = wait_event_timeout(ar->wmi.wq, 37 .pdev_pktlog_disable_cmdid = WMI_PDEV_PKTLOG_DISABLE_CMDID,
38 atomic_read(&ar->wmi.pending_tx_count) == 0, 38 .pdev_set_wmm_params_cmdid = WMI_PDEV_SET_WMM_PARAMS_CMDID,
39 5*HZ); 39 .pdev_set_ht_cap_ie_cmdid = WMI_PDEV_SET_HT_CAP_IE_CMDID,
40 if (atomic_read(&ar->wmi.pending_tx_count) == 0) 40 .pdev_set_vht_cap_ie_cmdid = WMI_PDEV_SET_VHT_CAP_IE_CMDID,
41 return; 41 .pdev_set_dscp_tid_map_cmdid = WMI_PDEV_SET_DSCP_TID_MAP_CMDID,
42 42 .pdev_set_quiet_mode_cmdid = WMI_PDEV_SET_QUIET_MODE_CMDID,
43 if (ret == 0) 43 .pdev_green_ap_ps_enable_cmdid = WMI_PDEV_GREEN_AP_PS_ENABLE_CMDID,
44 ret = -ETIMEDOUT; 44 .pdev_get_tpc_config_cmdid = WMI_PDEV_GET_TPC_CONFIG_CMDID,
45 45 .pdev_set_base_macaddr_cmdid = WMI_PDEV_SET_BASE_MACADDR_CMDID,
46 if (ret < 0) 46 .vdev_create_cmdid = WMI_VDEV_CREATE_CMDID,
47 ath10k_warn("wmi flush failed (%d)\n", ret); 47 .vdev_delete_cmdid = WMI_VDEV_DELETE_CMDID,
48} 48 .vdev_start_request_cmdid = WMI_VDEV_START_REQUEST_CMDID,
49 .vdev_restart_request_cmdid = WMI_VDEV_RESTART_REQUEST_CMDID,
50 .vdev_up_cmdid = WMI_VDEV_UP_CMDID,
51 .vdev_stop_cmdid = WMI_VDEV_STOP_CMDID,
52 .vdev_down_cmdid = WMI_VDEV_DOWN_CMDID,
53 .vdev_set_param_cmdid = WMI_VDEV_SET_PARAM_CMDID,
54 .vdev_install_key_cmdid = WMI_VDEV_INSTALL_KEY_CMDID,
55 .peer_create_cmdid = WMI_PEER_CREATE_CMDID,
56 .peer_delete_cmdid = WMI_PEER_DELETE_CMDID,
57 .peer_flush_tids_cmdid = WMI_PEER_FLUSH_TIDS_CMDID,
58 .peer_set_param_cmdid = WMI_PEER_SET_PARAM_CMDID,
59 .peer_assoc_cmdid = WMI_PEER_ASSOC_CMDID,
60 .peer_add_wds_entry_cmdid = WMI_PEER_ADD_WDS_ENTRY_CMDID,
61 .peer_remove_wds_entry_cmdid = WMI_PEER_REMOVE_WDS_ENTRY_CMDID,
62 .peer_mcast_group_cmdid = WMI_PEER_MCAST_GROUP_CMDID,
63 .bcn_tx_cmdid = WMI_BCN_TX_CMDID,
64 .pdev_send_bcn_cmdid = WMI_PDEV_SEND_BCN_CMDID,
65 .bcn_tmpl_cmdid = WMI_BCN_TMPL_CMDID,
66 .bcn_filter_rx_cmdid = WMI_BCN_FILTER_RX_CMDID,
67 .prb_req_filter_rx_cmdid = WMI_PRB_REQ_FILTER_RX_CMDID,
68 .mgmt_tx_cmdid = WMI_MGMT_TX_CMDID,
69 .prb_tmpl_cmdid = WMI_PRB_TMPL_CMDID,
70 .addba_clear_resp_cmdid = WMI_ADDBA_CLEAR_RESP_CMDID,
71 .addba_send_cmdid = WMI_ADDBA_SEND_CMDID,
72 .addba_status_cmdid = WMI_ADDBA_STATUS_CMDID,
73 .delba_send_cmdid = WMI_DELBA_SEND_CMDID,
74 .addba_set_resp_cmdid = WMI_ADDBA_SET_RESP_CMDID,
75 .send_singleamsdu_cmdid = WMI_SEND_SINGLEAMSDU_CMDID,
76 .sta_powersave_mode_cmdid = WMI_STA_POWERSAVE_MODE_CMDID,
77 .sta_powersave_param_cmdid = WMI_STA_POWERSAVE_PARAM_CMDID,
78 .sta_mimo_ps_mode_cmdid = WMI_STA_MIMO_PS_MODE_CMDID,
79 .pdev_dfs_enable_cmdid = WMI_PDEV_DFS_ENABLE_CMDID,
80 .pdev_dfs_disable_cmdid = WMI_PDEV_DFS_DISABLE_CMDID,
81 .roam_scan_mode = WMI_ROAM_SCAN_MODE,
82 .roam_scan_rssi_threshold = WMI_ROAM_SCAN_RSSI_THRESHOLD,
83 .roam_scan_period = WMI_ROAM_SCAN_PERIOD,
84 .roam_scan_rssi_change_threshold = WMI_ROAM_SCAN_RSSI_CHANGE_THRESHOLD,
85 .roam_ap_profile = WMI_ROAM_AP_PROFILE,
86 .ofl_scan_add_ap_profile = WMI_ROAM_AP_PROFILE,
87 .ofl_scan_remove_ap_profile = WMI_OFL_SCAN_REMOVE_AP_PROFILE,
88 .ofl_scan_period = WMI_OFL_SCAN_PERIOD,
89 .p2p_dev_set_device_info = WMI_P2P_DEV_SET_DEVICE_INFO,
90 .p2p_dev_set_discoverability = WMI_P2P_DEV_SET_DISCOVERABILITY,
91 .p2p_go_set_beacon_ie = WMI_P2P_GO_SET_BEACON_IE,
92 .p2p_go_set_probe_resp_ie = WMI_P2P_GO_SET_PROBE_RESP_IE,
93 .p2p_set_vendor_ie_data_cmdid = WMI_P2P_SET_VENDOR_IE_DATA_CMDID,
94 .ap_ps_peer_param_cmdid = WMI_AP_PS_PEER_PARAM_CMDID,
95 .ap_ps_peer_uapsd_coex_cmdid = WMI_AP_PS_PEER_UAPSD_COEX_CMDID,
96 .peer_rate_retry_sched_cmdid = WMI_PEER_RATE_RETRY_SCHED_CMDID,
97 .wlan_profile_trigger_cmdid = WMI_WLAN_PROFILE_TRIGGER_CMDID,
98 .wlan_profile_set_hist_intvl_cmdid =
99 WMI_WLAN_PROFILE_SET_HIST_INTVL_CMDID,
100 .wlan_profile_get_profile_data_cmdid =
101 WMI_WLAN_PROFILE_GET_PROFILE_DATA_CMDID,
102 .wlan_profile_enable_profile_id_cmdid =
103 WMI_WLAN_PROFILE_ENABLE_PROFILE_ID_CMDID,
104 .wlan_profile_list_profile_id_cmdid =
105 WMI_WLAN_PROFILE_LIST_PROFILE_ID_CMDID,
106 .pdev_suspend_cmdid = WMI_PDEV_SUSPEND_CMDID,
107 .pdev_resume_cmdid = WMI_PDEV_RESUME_CMDID,
108 .add_bcn_filter_cmdid = WMI_ADD_BCN_FILTER_CMDID,
109 .rmv_bcn_filter_cmdid = WMI_RMV_BCN_FILTER_CMDID,
110 .wow_add_wake_pattern_cmdid = WMI_WOW_ADD_WAKE_PATTERN_CMDID,
111 .wow_del_wake_pattern_cmdid = WMI_WOW_DEL_WAKE_PATTERN_CMDID,
112 .wow_enable_disable_wake_event_cmdid =
113 WMI_WOW_ENABLE_DISABLE_WAKE_EVENT_CMDID,
114 .wow_enable_cmdid = WMI_WOW_ENABLE_CMDID,
115 .wow_hostwakeup_from_sleep_cmdid = WMI_WOW_HOSTWAKEUP_FROM_SLEEP_CMDID,
116 .rtt_measreq_cmdid = WMI_RTT_MEASREQ_CMDID,
117 .rtt_tsf_cmdid = WMI_RTT_TSF_CMDID,
118 .vdev_spectral_scan_configure_cmdid =
119 WMI_VDEV_SPECTRAL_SCAN_CONFIGURE_CMDID,
120 .vdev_spectral_scan_enable_cmdid = WMI_VDEV_SPECTRAL_SCAN_ENABLE_CMDID,
121 .request_stats_cmdid = WMI_REQUEST_STATS_CMDID,
122 .set_arp_ns_offload_cmdid = WMI_SET_ARP_NS_OFFLOAD_CMDID,
123 .network_list_offload_config_cmdid =
124 WMI_NETWORK_LIST_OFFLOAD_CONFIG_CMDID,
125 .gtk_offload_cmdid = WMI_GTK_OFFLOAD_CMDID,
126 .csa_offload_enable_cmdid = WMI_CSA_OFFLOAD_ENABLE_CMDID,
127 .csa_offload_chanswitch_cmdid = WMI_CSA_OFFLOAD_CHANSWITCH_CMDID,
128 .chatter_set_mode_cmdid = WMI_CHATTER_SET_MODE_CMDID,
129 .peer_tid_addba_cmdid = WMI_PEER_TID_ADDBA_CMDID,
130 .peer_tid_delba_cmdid = WMI_PEER_TID_DELBA_CMDID,
131 .sta_dtim_ps_method_cmdid = WMI_STA_DTIM_PS_METHOD_CMDID,
132 .sta_uapsd_auto_trig_cmdid = WMI_STA_UAPSD_AUTO_TRIG_CMDID,
133 .sta_keepalive_cmd = WMI_STA_KEEPALIVE_CMD,
134 .echo_cmdid = WMI_ECHO_CMDID,
135 .pdev_utf_cmdid = WMI_PDEV_UTF_CMDID,
136 .dbglog_cfg_cmdid = WMI_DBGLOG_CFG_CMDID,
137 .pdev_qvit_cmdid = WMI_PDEV_QVIT_CMDID,
138 .pdev_ftm_intg_cmdid = WMI_PDEV_FTM_INTG_CMDID,
139 .vdev_set_keepalive_cmdid = WMI_VDEV_SET_KEEPALIVE_CMDID,
140 .vdev_get_keepalive_cmdid = WMI_VDEV_GET_KEEPALIVE_CMDID,
141 .force_fw_hang_cmdid = WMI_FORCE_FW_HANG_CMDID,
142 .gpio_config_cmdid = WMI_GPIO_CONFIG_CMDID,
143 .gpio_output_cmdid = WMI_GPIO_OUTPUT_CMDID,
144};
145
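
The value of this indirection is that common WMI code never hardcodes a numeric
command id: one map is chosen when WMI attaches and every send dereferences it.
A sketch of that flow (ar->wmi.cmd, the feature bit, and ath10k_wmi_cmd_send()
as used here are assumptions consistent with this patch, not verbatim code):

	/* attach time: pick the table matching the firmware branch */
	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features))
		ar->wmi.cmd = &wmi_10x_cmd_map;
	else
		ar->wmi.cmd = &wmi_cmd_map;

	/* send path: translate the abstract command to the per-branch id;
	 * ids a branch lacks are set to WMI_CMD_UNSUPPORTED and can be
	 * rejected before ever reaching the firmware */
	ret = ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->init_cmdid);
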
146/* 10.X WMI cmd track */
147static struct wmi_cmd_map wmi_10x_cmd_map = {
148 .init_cmdid = WMI_10X_INIT_CMDID,
149 .start_scan_cmdid = WMI_10X_START_SCAN_CMDID,
150 .stop_scan_cmdid = WMI_10X_STOP_SCAN_CMDID,
151 .scan_chan_list_cmdid = WMI_10X_SCAN_CHAN_LIST_CMDID,
152 .scan_sch_prio_tbl_cmdid = WMI_CMD_UNSUPPORTED,
153 .pdev_set_regdomain_cmdid = WMI_10X_PDEV_SET_REGDOMAIN_CMDID,
154 .pdev_set_channel_cmdid = WMI_10X_PDEV_SET_CHANNEL_CMDID,
155 .pdev_set_param_cmdid = WMI_10X_PDEV_SET_PARAM_CMDID,
156 .pdev_pktlog_enable_cmdid = WMI_10X_PDEV_PKTLOG_ENABLE_CMDID,
157 .pdev_pktlog_disable_cmdid = WMI_10X_PDEV_PKTLOG_DISABLE_CMDID,
158 .pdev_set_wmm_params_cmdid = WMI_10X_PDEV_SET_WMM_PARAMS_CMDID,
159 .pdev_set_ht_cap_ie_cmdid = WMI_10X_PDEV_SET_HT_CAP_IE_CMDID,
160 .pdev_set_vht_cap_ie_cmdid = WMI_10X_PDEV_SET_VHT_CAP_IE_CMDID,
161 .pdev_set_dscp_tid_map_cmdid = WMI_10X_PDEV_SET_DSCP_TID_MAP_CMDID,
162 .pdev_set_quiet_mode_cmdid = WMI_10X_PDEV_SET_QUIET_MODE_CMDID,
163 .pdev_green_ap_ps_enable_cmdid = WMI_10X_PDEV_GREEN_AP_PS_ENABLE_CMDID,
164 .pdev_get_tpc_config_cmdid = WMI_10X_PDEV_GET_TPC_CONFIG_CMDID,
165 .pdev_set_base_macaddr_cmdid = WMI_10X_PDEV_SET_BASE_MACADDR_CMDID,
166 .vdev_create_cmdid = WMI_10X_VDEV_CREATE_CMDID,
167 .vdev_delete_cmdid = WMI_10X_VDEV_DELETE_CMDID,
168 .vdev_start_request_cmdid = WMI_10X_VDEV_START_REQUEST_CMDID,
169 .vdev_restart_request_cmdid = WMI_10X_VDEV_RESTART_REQUEST_CMDID,
170 .vdev_up_cmdid = WMI_10X_VDEV_UP_CMDID,
171 .vdev_stop_cmdid = WMI_10X_VDEV_STOP_CMDID,
172 .vdev_down_cmdid = WMI_10X_VDEV_DOWN_CMDID,
173 .vdev_set_param_cmdid = WMI_10X_VDEV_SET_PARAM_CMDID,
174 .vdev_install_key_cmdid = WMI_10X_VDEV_INSTALL_KEY_CMDID,
175 .peer_create_cmdid = WMI_10X_PEER_CREATE_CMDID,
176 .peer_delete_cmdid = WMI_10X_PEER_DELETE_CMDID,
177 .peer_flush_tids_cmdid = WMI_10X_PEER_FLUSH_TIDS_CMDID,
178 .peer_set_param_cmdid = WMI_10X_PEER_SET_PARAM_CMDID,
179 .peer_assoc_cmdid = WMI_10X_PEER_ASSOC_CMDID,
180 .peer_add_wds_entry_cmdid = WMI_10X_PEER_ADD_WDS_ENTRY_CMDID,
181 .peer_remove_wds_entry_cmdid = WMI_10X_PEER_REMOVE_WDS_ENTRY_CMDID,
182 .peer_mcast_group_cmdid = WMI_10X_PEER_MCAST_GROUP_CMDID,
183 .bcn_tx_cmdid = WMI_10X_BCN_TX_CMDID,
184 .pdev_send_bcn_cmdid = WMI_10X_PDEV_SEND_BCN_CMDID,
185 .bcn_tmpl_cmdid = WMI_CMD_UNSUPPORTED,
186 .bcn_filter_rx_cmdid = WMI_10X_BCN_FILTER_RX_CMDID,
187 .prb_req_filter_rx_cmdid = WMI_10X_PRB_REQ_FILTER_RX_CMDID,
188 .mgmt_tx_cmdid = WMI_10X_MGMT_TX_CMDID,
189 .prb_tmpl_cmdid = WMI_CMD_UNSUPPORTED,
190 .addba_clear_resp_cmdid = WMI_10X_ADDBA_CLEAR_RESP_CMDID,
191 .addba_send_cmdid = WMI_10X_ADDBA_SEND_CMDID,
192 .addba_status_cmdid = WMI_10X_ADDBA_STATUS_CMDID,
193 .delba_send_cmdid = WMI_10X_DELBA_SEND_CMDID,
194 .addba_set_resp_cmdid = WMI_10X_ADDBA_SET_RESP_CMDID,
195 .send_singleamsdu_cmdid = WMI_10X_SEND_SINGLEAMSDU_CMDID,
196 .sta_powersave_mode_cmdid = WMI_10X_STA_POWERSAVE_MODE_CMDID,
197 .sta_powersave_param_cmdid = WMI_10X_STA_POWERSAVE_PARAM_CMDID,
198 .sta_mimo_ps_mode_cmdid = WMI_10X_STA_MIMO_PS_MODE_CMDID,
199 .pdev_dfs_enable_cmdid = WMI_10X_PDEV_DFS_ENABLE_CMDID,
200 .pdev_dfs_disable_cmdid = WMI_10X_PDEV_DFS_DISABLE_CMDID,
201 .roam_scan_mode = WMI_10X_ROAM_SCAN_MODE,
202 .roam_scan_rssi_threshold = WMI_10X_ROAM_SCAN_RSSI_THRESHOLD,
203 .roam_scan_period = WMI_10X_ROAM_SCAN_PERIOD,
204 .roam_scan_rssi_change_threshold =
205 WMI_10X_ROAM_SCAN_RSSI_CHANGE_THRESHOLD,
206 .roam_ap_profile = WMI_10X_ROAM_AP_PROFILE,
207 .ofl_scan_add_ap_profile = WMI_10X_OFL_SCAN_ADD_AP_PROFILE,
208 .ofl_scan_remove_ap_profile = WMI_10X_OFL_SCAN_REMOVE_AP_PROFILE,
209 .ofl_scan_period = WMI_10X_OFL_SCAN_PERIOD,
210 .p2p_dev_set_device_info = WMI_10X_P2P_DEV_SET_DEVICE_INFO,
211 .p2p_dev_set_discoverability = WMI_10X_P2P_DEV_SET_DISCOVERABILITY,
212 .p2p_go_set_beacon_ie = WMI_10X_P2P_GO_SET_BEACON_IE,
213 .p2p_go_set_probe_resp_ie = WMI_10X_P2P_GO_SET_PROBE_RESP_IE,
214 .p2p_set_vendor_ie_data_cmdid = WMI_CMD_UNSUPPORTED,
215 .ap_ps_peer_param_cmdid = WMI_CMD_UNSUPPORTED,
216 .ap_ps_peer_uapsd_coex_cmdid = WMI_CMD_UNSUPPORTED,
217 .peer_rate_retry_sched_cmdid = WMI_10X_PEER_RATE_RETRY_SCHED_CMDID,
218 .wlan_profile_trigger_cmdid = WMI_10X_WLAN_PROFILE_TRIGGER_CMDID,
219 .wlan_profile_set_hist_intvl_cmdid =
220 WMI_10X_WLAN_PROFILE_SET_HIST_INTVL_CMDID,
221 .wlan_profile_get_profile_data_cmdid =
222 WMI_10X_WLAN_PROFILE_GET_PROFILE_DATA_CMDID,
223 .wlan_profile_enable_profile_id_cmdid =
224 WMI_10X_WLAN_PROFILE_ENABLE_PROFILE_ID_CMDID,
225 .wlan_profile_list_profile_id_cmdid =
226 WMI_10X_WLAN_PROFILE_LIST_PROFILE_ID_CMDID,
227 .pdev_suspend_cmdid = WMI_10X_PDEV_SUSPEND_CMDID,
228 .pdev_resume_cmdid = WMI_10X_PDEV_RESUME_CMDID,
229 .add_bcn_filter_cmdid = WMI_10X_ADD_BCN_FILTER_CMDID,
230 .rmv_bcn_filter_cmdid = WMI_10X_RMV_BCN_FILTER_CMDID,
231 .wow_add_wake_pattern_cmdid = WMI_10X_WOW_ADD_WAKE_PATTERN_CMDID,
232 .wow_del_wake_pattern_cmdid = WMI_10X_WOW_DEL_WAKE_PATTERN_CMDID,
233 .wow_enable_disable_wake_event_cmdid =
234 WMI_10X_WOW_ENABLE_DISABLE_WAKE_EVENT_CMDID,
235 .wow_enable_cmdid = WMI_10X_WOW_ENABLE_CMDID,
236 .wow_hostwakeup_from_sleep_cmdid =
237 WMI_10X_WOW_HOSTWAKEUP_FROM_SLEEP_CMDID,
238 .rtt_measreq_cmdid = WMI_10X_RTT_MEASREQ_CMDID,
239 .rtt_tsf_cmdid = WMI_10X_RTT_TSF_CMDID,
240 .vdev_spectral_scan_configure_cmdid =
241 WMI_10X_VDEV_SPECTRAL_SCAN_CONFIGURE_CMDID,
242 .vdev_spectral_scan_enable_cmdid =
243 WMI_10X_VDEV_SPECTRAL_SCAN_ENABLE_CMDID,
244 .request_stats_cmdid = WMI_10X_REQUEST_STATS_CMDID,
245 .set_arp_ns_offload_cmdid = WMI_CMD_UNSUPPORTED,
246 .network_list_offload_config_cmdid = WMI_CMD_UNSUPPORTED,
247 .gtk_offload_cmdid = WMI_CMD_UNSUPPORTED,
248 .csa_offload_enable_cmdid = WMI_CMD_UNSUPPORTED,
249 .csa_offload_chanswitch_cmdid = WMI_CMD_UNSUPPORTED,
250 .chatter_set_mode_cmdid = WMI_CMD_UNSUPPORTED,
251 .peer_tid_addba_cmdid = WMI_CMD_UNSUPPORTED,
252 .peer_tid_delba_cmdid = WMI_CMD_UNSUPPORTED,
253 .sta_dtim_ps_method_cmdid = WMI_CMD_UNSUPPORTED,
254 .sta_uapsd_auto_trig_cmdid = WMI_CMD_UNSUPPORTED,
255 .sta_keepalive_cmd = WMI_CMD_UNSUPPORTED,
256 .echo_cmdid = WMI_10X_ECHO_CMDID,
257 .pdev_utf_cmdid = WMI_10X_PDEV_UTF_CMDID,
258 .dbglog_cfg_cmdid = WMI_10X_DBGLOG_CFG_CMDID,
259 .pdev_qvit_cmdid = WMI_10X_PDEV_QVIT_CMDID,
260 .pdev_ftm_intg_cmdid = WMI_CMD_UNSUPPORTED,
261 .vdev_set_keepalive_cmdid = WMI_CMD_UNSUPPORTED,
262 .vdev_get_keepalive_cmdid = WMI_CMD_UNSUPPORTED,
263 .force_fw_hang_cmdid = WMI_CMD_UNSUPPORTED,
264 .gpio_config_cmdid = WMI_10X_GPIO_CONFIG_CMDID,
265 .gpio_output_cmdid = WMI_10X_GPIO_OUTPUT_CMDID,
266};
267
268/* MAIN WMI VDEV param map */
269static struct wmi_vdev_param_map wmi_vdev_param_map = {
270 .rts_threshold = WMI_VDEV_PARAM_RTS_THRESHOLD,
271 .fragmentation_threshold = WMI_VDEV_PARAM_FRAGMENTATION_THRESHOLD,
272 .beacon_interval = WMI_VDEV_PARAM_BEACON_INTERVAL,
273 .listen_interval = WMI_VDEV_PARAM_LISTEN_INTERVAL,
274 .multicast_rate = WMI_VDEV_PARAM_MULTICAST_RATE,
275 .mgmt_tx_rate = WMI_VDEV_PARAM_MGMT_TX_RATE,
276 .slot_time = WMI_VDEV_PARAM_SLOT_TIME,
277 .preamble = WMI_VDEV_PARAM_PREAMBLE,
278 .swba_time = WMI_VDEV_PARAM_SWBA_TIME,
279 .wmi_vdev_stats_update_period = WMI_VDEV_STATS_UPDATE_PERIOD,
280 .wmi_vdev_pwrsave_ageout_time = WMI_VDEV_PWRSAVE_AGEOUT_TIME,
281 .wmi_vdev_host_swba_interval = WMI_VDEV_HOST_SWBA_INTERVAL,
282 .dtim_period = WMI_VDEV_PARAM_DTIM_PERIOD,
283 .wmi_vdev_oc_scheduler_air_time_limit =
284 WMI_VDEV_OC_SCHEDULER_AIR_TIME_LIMIT,
285 .wds = WMI_VDEV_PARAM_WDS,
286 .atim_window = WMI_VDEV_PARAM_ATIM_WINDOW,
287 .bmiss_count_max = WMI_VDEV_PARAM_BMISS_COUNT_MAX,
288 .bmiss_first_bcnt = WMI_VDEV_PARAM_BMISS_FIRST_BCNT,
289 .bmiss_final_bcnt = WMI_VDEV_PARAM_BMISS_FINAL_BCNT,
290 .feature_wmm = WMI_VDEV_PARAM_FEATURE_WMM,
291 .chwidth = WMI_VDEV_PARAM_CHWIDTH,
292 .chextoffset = WMI_VDEV_PARAM_CHEXTOFFSET,
293 .disable_htprotection = WMI_VDEV_PARAM_DISABLE_HTPROTECTION,
294 .sta_quickkickout = WMI_VDEV_PARAM_STA_QUICKKICKOUT,
295 .mgmt_rate = WMI_VDEV_PARAM_MGMT_RATE,
296 .protection_mode = WMI_VDEV_PARAM_PROTECTION_MODE,
297 .fixed_rate = WMI_VDEV_PARAM_FIXED_RATE,
298 .sgi = WMI_VDEV_PARAM_SGI,
299 .ldpc = WMI_VDEV_PARAM_LDPC,
300 .tx_stbc = WMI_VDEV_PARAM_TX_STBC,
301 .rx_stbc = WMI_VDEV_PARAM_RX_STBC,
302 .intra_bss_fwd = WMI_VDEV_PARAM_INTRA_BSS_FWD,
303 .def_keyid = WMI_VDEV_PARAM_DEF_KEYID,
304 .nss = WMI_VDEV_PARAM_NSS,
305 .bcast_data_rate = WMI_VDEV_PARAM_BCAST_DATA_RATE,
306 .mcast_data_rate = WMI_VDEV_PARAM_MCAST_DATA_RATE,
307 .mcast_indicate = WMI_VDEV_PARAM_MCAST_INDICATE,
308 .dhcp_indicate = WMI_VDEV_PARAM_DHCP_INDICATE,
309 .unknown_dest_indicate = WMI_VDEV_PARAM_UNKNOWN_DEST_INDICATE,
310 .ap_keepalive_min_idle_inactive_time_secs =
311 WMI_VDEV_PARAM_AP_KEEPALIVE_MIN_IDLE_INACTIVE_TIME_SECS,
312 .ap_keepalive_max_idle_inactive_time_secs =
313 WMI_VDEV_PARAM_AP_KEEPALIVE_MAX_IDLE_INACTIVE_TIME_SECS,
314 .ap_keepalive_max_unresponsive_time_secs =
315 WMI_VDEV_PARAM_AP_KEEPALIVE_MAX_UNRESPONSIVE_TIME_SECS,
316 .ap_enable_nawds = WMI_VDEV_PARAM_AP_ENABLE_NAWDS,
317 .mcast2ucast_set = WMI_VDEV_PARAM_UNSUPPORTED,
318 .enable_rtscts = WMI_VDEV_PARAM_ENABLE_RTSCTS,
319 .txbf = WMI_VDEV_PARAM_TXBF,
320 .packet_powersave = WMI_VDEV_PARAM_PACKET_POWERSAVE,
321 .drop_unencry = WMI_VDEV_PARAM_DROP_UNENCRY,
322 .tx_encap_type = WMI_VDEV_PARAM_TX_ENCAP_TYPE,
323 .ap_detect_out_of_sync_sleeping_sta_time_secs =
324 WMI_VDEV_PARAM_UNSUPPORTED,
325};
326
327/* 10.X WMI VDEV param map */
328static struct wmi_vdev_param_map wmi_10x_vdev_param_map = {
329 .rts_threshold = WMI_10X_VDEV_PARAM_RTS_THRESHOLD,
330 .fragmentation_threshold = WMI_10X_VDEV_PARAM_FRAGMENTATION_THRESHOLD,
331 .beacon_interval = WMI_10X_VDEV_PARAM_BEACON_INTERVAL,
332 .listen_interval = WMI_10X_VDEV_PARAM_LISTEN_INTERVAL,
333 .multicast_rate = WMI_10X_VDEV_PARAM_MULTICAST_RATE,
334 .mgmt_tx_rate = WMI_10X_VDEV_PARAM_MGMT_TX_RATE,
335 .slot_time = WMI_10X_VDEV_PARAM_SLOT_TIME,
336 .preamble = WMI_10X_VDEV_PARAM_PREAMBLE,
337 .swba_time = WMI_10X_VDEV_PARAM_SWBA_TIME,
338 .wmi_vdev_stats_update_period = WMI_10X_VDEV_STATS_UPDATE_PERIOD,
339 .wmi_vdev_pwrsave_ageout_time = WMI_10X_VDEV_PWRSAVE_AGEOUT_TIME,
340 .wmi_vdev_host_swba_interval = WMI_10X_VDEV_HOST_SWBA_INTERVAL,
341 .dtim_period = WMI_10X_VDEV_PARAM_DTIM_PERIOD,
342 .wmi_vdev_oc_scheduler_air_time_limit =
343 WMI_10X_VDEV_OC_SCHEDULER_AIR_TIME_LIMIT,
344 .wds = WMI_10X_VDEV_PARAM_WDS,
345 .atim_window = WMI_10X_VDEV_PARAM_ATIM_WINDOW,
346 .bmiss_count_max = WMI_10X_VDEV_PARAM_BMISS_COUNT_MAX,
347 .bmiss_first_bcnt = WMI_VDEV_PARAM_UNSUPPORTED,
348 .bmiss_final_bcnt = WMI_VDEV_PARAM_UNSUPPORTED,
349 .feature_wmm = WMI_10X_VDEV_PARAM_FEATURE_WMM,
350 .chwidth = WMI_10X_VDEV_PARAM_CHWIDTH,
351 .chextoffset = WMI_10X_VDEV_PARAM_CHEXTOFFSET,
352 .disable_htprotection = WMI_10X_VDEV_PARAM_DISABLE_HTPROTECTION,
353 .sta_quickkickout = WMI_10X_VDEV_PARAM_STA_QUICKKICKOUT,
354 .mgmt_rate = WMI_10X_VDEV_PARAM_MGMT_RATE,
355 .protection_mode = WMI_10X_VDEV_PARAM_PROTECTION_MODE,
356 .fixed_rate = WMI_10X_VDEV_PARAM_FIXED_RATE,
357 .sgi = WMI_10X_VDEV_PARAM_SGI,
358 .ldpc = WMI_10X_VDEV_PARAM_LDPC,
359 .tx_stbc = WMI_10X_VDEV_PARAM_TX_STBC,
360 .rx_stbc = WMI_10X_VDEV_PARAM_RX_STBC,
361 .intra_bss_fwd = WMI_10X_VDEV_PARAM_INTRA_BSS_FWD,
362 .def_keyid = WMI_10X_VDEV_PARAM_DEF_KEYID,
363 .nss = WMI_10X_VDEV_PARAM_NSS,
364 .bcast_data_rate = WMI_10X_VDEV_PARAM_BCAST_DATA_RATE,
365 .mcast_data_rate = WMI_10X_VDEV_PARAM_MCAST_DATA_RATE,
366 .mcast_indicate = WMI_10X_VDEV_PARAM_MCAST_INDICATE,
367 .dhcp_indicate = WMI_10X_VDEV_PARAM_DHCP_INDICATE,
368 .unknown_dest_indicate = WMI_10X_VDEV_PARAM_UNKNOWN_DEST_INDICATE,
369 .ap_keepalive_min_idle_inactive_time_secs =
370 WMI_10X_VDEV_PARAM_AP_KEEPALIVE_MIN_IDLE_INACTIVE_TIME_SECS,
371 .ap_keepalive_max_idle_inactive_time_secs =
372 WMI_10X_VDEV_PARAM_AP_KEEPALIVE_MAX_IDLE_INACTIVE_TIME_SECS,
373 .ap_keepalive_max_unresponsive_time_secs =
374 WMI_10X_VDEV_PARAM_AP_KEEPALIVE_MAX_UNRESPONSIVE_TIME_SECS,
375 .ap_enable_nawds = WMI_10X_VDEV_PARAM_AP_ENABLE_NAWDS,
376 .mcast2ucast_set = WMI_10X_VDEV_PARAM_MCAST2UCAST_SET,
377 .enable_rtscts = WMI_10X_VDEV_PARAM_ENABLE_RTSCTS,
378 .txbf = WMI_VDEV_PARAM_UNSUPPORTED,
379 .packet_powersave = WMI_VDEV_PARAM_UNSUPPORTED,
380 .drop_unencry = WMI_VDEV_PARAM_UNSUPPORTED,
381 .tx_encap_type = WMI_VDEV_PARAM_UNSUPPORTED,
382 .ap_detect_out_of_sync_sleeping_sta_time_secs =
383 WMI_10X_VDEV_PARAM_AP_DETECT_OUT_OF_SYNC_SLEEPING_STA_TIME_SECS,
384};
385
386static struct wmi_pdev_param_map wmi_pdev_param_map = {
387 .tx_chain_mask = WMI_PDEV_PARAM_TX_CHAIN_MASK,
388 .rx_chain_mask = WMI_PDEV_PARAM_RX_CHAIN_MASK,
389 .txpower_limit2g = WMI_PDEV_PARAM_TXPOWER_LIMIT2G,
390 .txpower_limit5g = WMI_PDEV_PARAM_TXPOWER_LIMIT5G,
391 .txpower_scale = WMI_PDEV_PARAM_TXPOWER_SCALE,
392 .beacon_gen_mode = WMI_PDEV_PARAM_BEACON_GEN_MODE,
393 .beacon_tx_mode = WMI_PDEV_PARAM_BEACON_TX_MODE,
394 .resmgr_offchan_mode = WMI_PDEV_PARAM_RESMGR_OFFCHAN_MODE,
395 .protection_mode = WMI_PDEV_PARAM_PROTECTION_MODE,
396 .dynamic_bw = WMI_PDEV_PARAM_DYNAMIC_BW,
397 .non_agg_sw_retry_th = WMI_PDEV_PARAM_NON_AGG_SW_RETRY_TH,
398 .agg_sw_retry_th = WMI_PDEV_PARAM_AGG_SW_RETRY_TH,
399 .sta_kickout_th = WMI_PDEV_PARAM_STA_KICKOUT_TH,
400 .ac_aggrsize_scaling = WMI_PDEV_PARAM_AC_AGGRSIZE_SCALING,
401 .ltr_enable = WMI_PDEV_PARAM_LTR_ENABLE,
402 .ltr_ac_latency_be = WMI_PDEV_PARAM_LTR_AC_LATENCY_BE,
403 .ltr_ac_latency_bk = WMI_PDEV_PARAM_LTR_AC_LATENCY_BK,
404 .ltr_ac_latency_vi = WMI_PDEV_PARAM_LTR_AC_LATENCY_VI,
405 .ltr_ac_latency_vo = WMI_PDEV_PARAM_LTR_AC_LATENCY_VO,
406 .ltr_ac_latency_timeout = WMI_PDEV_PARAM_LTR_AC_LATENCY_TIMEOUT,
407 .ltr_sleep_override = WMI_PDEV_PARAM_LTR_SLEEP_OVERRIDE,
408 .ltr_rx_override = WMI_PDEV_PARAM_LTR_RX_OVERRIDE,
409 .ltr_tx_activity_timeout = WMI_PDEV_PARAM_LTR_TX_ACTIVITY_TIMEOUT,
410 .l1ss_enable = WMI_PDEV_PARAM_L1SS_ENABLE,
411 .dsleep_enable = WMI_PDEV_PARAM_DSLEEP_ENABLE,
412 .pcielp_txbuf_flush = WMI_PDEV_PARAM_PCIELP_TXBUF_FLUSH,
413 .pcielp_txbuf_watermark = WMI_PDEV_PARAM_PCIELP_TXBUF_TMO_EN,
414 .pcielp_txbuf_tmo_en = WMI_PDEV_PARAM_PCIELP_TXBUF_TMO_EN,
415 .pcielp_txbuf_tmo_value = WMI_PDEV_PARAM_PCIELP_TXBUF_TMO_VALUE,
416 .pdev_stats_update_period = WMI_PDEV_PARAM_PDEV_STATS_UPDATE_PERIOD,
417 .vdev_stats_update_period = WMI_PDEV_PARAM_VDEV_STATS_UPDATE_PERIOD,
418 .peer_stats_update_period = WMI_PDEV_PARAM_PEER_STATS_UPDATE_PERIOD,
419 .bcnflt_stats_update_period = WMI_PDEV_PARAM_BCNFLT_STATS_UPDATE_PERIOD,
420 .pmf_qos = WMI_PDEV_PARAM_PMF_QOS,
421 .arp_ac_override = WMI_PDEV_PARAM_ARP_AC_OVERRIDE,
422 .arpdhcp_ac_override = WMI_PDEV_PARAM_UNSUPPORTED,
423 .dcs = WMI_PDEV_PARAM_DCS,
424 .ani_enable = WMI_PDEV_PARAM_ANI_ENABLE,
425 .ani_poll_period = WMI_PDEV_PARAM_ANI_POLL_PERIOD,
426 .ani_listen_period = WMI_PDEV_PARAM_ANI_LISTEN_PERIOD,
427 .ani_ofdm_level = WMI_PDEV_PARAM_ANI_OFDM_LEVEL,
428 .ani_cck_level = WMI_PDEV_PARAM_ANI_CCK_LEVEL,
429 .dyntxchain = WMI_PDEV_PARAM_DYNTXCHAIN,
430 .proxy_sta = WMI_PDEV_PARAM_PROXY_STA,
431 .idle_ps_config = WMI_PDEV_PARAM_IDLE_PS_CONFIG,
432 .power_gating_sleep = WMI_PDEV_PARAM_POWER_GATING_SLEEP,
433 .fast_channel_reset = WMI_PDEV_PARAM_UNSUPPORTED,
434 .burst_dur = WMI_PDEV_PARAM_UNSUPPORTED,
435 .burst_enable = WMI_PDEV_PARAM_UNSUPPORTED,
436};
437
438static struct wmi_pdev_param_map wmi_10x_pdev_param_map = {
439 .tx_chain_mask = WMI_10X_PDEV_PARAM_TX_CHAIN_MASK,
440 .rx_chain_mask = WMI_10X_PDEV_PARAM_RX_CHAIN_MASK,
441 .txpower_limit2g = WMI_10X_PDEV_PARAM_TXPOWER_LIMIT2G,
442 .txpower_limit5g = WMI_10X_PDEV_PARAM_TXPOWER_LIMIT5G,
443 .txpower_scale = WMI_10X_PDEV_PARAM_TXPOWER_SCALE,
444 .beacon_gen_mode = WMI_10X_PDEV_PARAM_BEACON_GEN_MODE,
445 .beacon_tx_mode = WMI_10X_PDEV_PARAM_BEACON_TX_MODE,
446 .resmgr_offchan_mode = WMI_10X_PDEV_PARAM_RESMGR_OFFCHAN_MODE,
447 .protection_mode = WMI_10X_PDEV_PARAM_PROTECTION_MODE,
448 .dynamic_bw = WMI_10X_PDEV_PARAM_DYNAMIC_BW,
449 .non_agg_sw_retry_th = WMI_10X_PDEV_PARAM_NON_AGG_SW_RETRY_TH,
450 .agg_sw_retry_th = WMI_10X_PDEV_PARAM_AGG_SW_RETRY_TH,
451 .sta_kickout_th = WMI_10X_PDEV_PARAM_STA_KICKOUT_TH,
452 .ac_aggrsize_scaling = WMI_10X_PDEV_PARAM_AC_AGGRSIZE_SCALING,
453 .ltr_enable = WMI_10X_PDEV_PARAM_LTR_ENABLE,
454 .ltr_ac_latency_be = WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_BE,
455 .ltr_ac_latency_bk = WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_BK,
456 .ltr_ac_latency_vi = WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_VI,
457 .ltr_ac_latency_vo = WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_VO,
458 .ltr_ac_latency_timeout = WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_TIMEOUT,
459 .ltr_sleep_override = WMI_10X_PDEV_PARAM_LTR_SLEEP_OVERRIDE,
460 .ltr_rx_override = WMI_10X_PDEV_PARAM_LTR_RX_OVERRIDE,
461 .ltr_tx_activity_timeout = WMI_10X_PDEV_PARAM_LTR_TX_ACTIVITY_TIMEOUT,
462 .l1ss_enable = WMI_10X_PDEV_PARAM_L1SS_ENABLE,
463 .dsleep_enable = WMI_10X_PDEV_PARAM_DSLEEP_ENABLE,
464 .pcielp_txbuf_flush = WMI_PDEV_PARAM_UNSUPPORTED,
465 .pcielp_txbuf_watermark = WMI_PDEV_PARAM_UNSUPPORTED,
466 .pcielp_txbuf_tmo_en = WMI_PDEV_PARAM_UNSUPPORTED,
467 .pcielp_txbuf_tmo_value = WMI_PDEV_PARAM_UNSUPPORTED,
468 .pdev_stats_update_period = WMI_10X_PDEV_PARAM_PDEV_STATS_UPDATE_PERIOD,
469 .vdev_stats_update_period = WMI_10X_PDEV_PARAM_VDEV_STATS_UPDATE_PERIOD,
470 .peer_stats_update_period = WMI_10X_PDEV_PARAM_PEER_STATS_UPDATE_PERIOD,
471 .bcnflt_stats_update_period =
472 WMI_10X_PDEV_PARAM_BCNFLT_STATS_UPDATE_PERIOD,
473 .pmf_qos = WMI_10X_PDEV_PARAM_PMF_QOS,
474 .arp_ac_override = WMI_PDEV_PARAM_UNSUPPORTED,
475 .arpdhcp_ac_override = WMI_10X_PDEV_PARAM_ARPDHCP_AC_OVERRIDE,
476 .dcs = WMI_10X_PDEV_PARAM_DCS,
477 .ani_enable = WMI_10X_PDEV_PARAM_ANI_ENABLE,
478 .ani_poll_period = WMI_10X_PDEV_PARAM_ANI_POLL_PERIOD,
479 .ani_listen_period = WMI_10X_PDEV_PARAM_ANI_LISTEN_PERIOD,
480 .ani_ofdm_level = WMI_10X_PDEV_PARAM_ANI_OFDM_LEVEL,
481 .ani_cck_level = WMI_10X_PDEV_PARAM_ANI_CCK_LEVEL,
482 .dyntxchain = WMI_10X_PDEV_PARAM_DYNTXCHAIN,
483 .proxy_sta = WMI_PDEV_PARAM_UNSUPPORTED,
484 .idle_ps_config = WMI_PDEV_PARAM_UNSUPPORTED,
485 .power_gating_sleep = WMI_PDEV_PARAM_UNSUPPORTED,
486 .fast_channel_reset = WMI_10X_PDEV_PARAM_FAST_CHANNEL_RESET,
487 .burst_dur = WMI_10X_PDEV_PARAM_BURST_DUR,
488 .burst_enable = WMI_10X_PDEV_PARAM_BURST_ENABLE,
489};
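All of the maps above follow one pattern: a single struct of host-side command/parameter slots, one instance filled per firmware branch, with a sentinel marking entries a branch does not support. A minimal standalone sketch of that pattern, assuming made-up names and wire IDs (this is not driver code):

#include <stdio.h>

#define CMD_UNSUPPORTED 0 /* sentinel, mirrors WMI_CMD_UNSUPPORTED */

/* one host-side slot per abstract command */
struct cmd_map {
	unsigned int init_cmdid;
	unsigned int gtk_offload_cmdid;
};

/* each firmware branch fills the same slots with its own wire IDs */
static const struct cmd_map main_map = {
	.init_cmdid = 1,
	.gtk_offload_cmdid = 2,
};
static const struct cmd_map tenx_map = {
	.init_cmdid = 0x9001,
	.gtk_offload_cmdid = CMD_UNSUPPORTED,
};

static int send_cmd(unsigned int cmd_id)
{
	if (cmd_id == CMD_UNSUPPORTED) {
		fprintf(stderr, "command not supported by this firmware\n");
		return -1;
	}
	printf("sending wire id 0x%x\n", cmd_id);
	return 0;
}

int main(void)
{
	const struct cmd_map *map = &tenx_map; /* chosen once at attach time */

	send_cmd(map->init_cmdid);            /* translated per firmware */
	send_cmd(map->gtk_offload_cmdid);     /* rejected: 10.X lacks it */
	send_cmd(main_map.gtk_offload_cmdid); /* main firmware has it */
	return 0;
}

Callers never hard-code a wire ID; they always go through the selected map, which is what the ar->wmi.cmd-> changes further down in this patch implement.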
 
 int ath10k_wmi_wait_for_service_ready(struct ath10k *ar)
 {
@@ -85,18 +526,14 @@ static struct sk_buff *ath10k_wmi_alloc_skb(u32 len)
 static void ath10k_wmi_htc_tx_complete(struct ath10k *ar, struct sk_buff *skb)
 {
 	dev_kfree_skb(skb);
-
-	if (atomic_sub_return(1, &ar->wmi.pending_tx_count) == 0)
-		wake_up(&ar->wmi.wq);
 }
 
-/* WMI command API */
-static int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb,
-			       enum wmi_cmd_id cmd_id)
+static int ath10k_wmi_cmd_send_nowait(struct ath10k *ar, struct sk_buff *skb,
+				      u32 cmd_id)
 {
 	struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(skb);
 	struct wmi_cmd_hdr *cmd_hdr;
-	int status;
+	int ret;
 	u32 cmd = 0;
 
 	if (skb_push(skb, sizeof(struct wmi_cmd_hdr)) == NULL)
@@ -107,25 +544,146 @@ static int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb,
 	cmd_hdr = (struct wmi_cmd_hdr *)skb->data;
 	cmd_hdr->cmd_id = __cpu_to_le32(cmd);
 
-	if (atomic_add_return(1, &ar->wmi.pending_tx_count) >
-	    WMI_MAX_PENDING_TX_COUNT) {
-		/* avoid using up memory when FW hangs */
-		atomic_dec(&ar->wmi.pending_tx_count);
-		return -EBUSY;
+	memset(skb_cb, 0, sizeof(*skb_cb));
+	ret = ath10k_htc_send(&ar->htc, ar->wmi.eid, skb);
+	trace_ath10k_wmi_cmd(cmd_id, skb->data, skb->len, ret);
+
+	if (ret)
+		goto err_pull;
+
+	return 0;
+
+err_pull:
+	skb_pull(skb, sizeof(struct wmi_cmd_hdr));
+	return ret;
+}
+
+static void ath10k_wmi_tx_beacon_nowait(struct ath10k_vif *arvif)
+{
+	struct wmi_bcn_tx_arg arg = {0};
+	int ret;
+
+	lockdep_assert_held(&arvif->ar->data_lock);
+
+	if (arvif->beacon == NULL)
+		return;
+
+	arg.vdev_id = arvif->vdev_id;
+	arg.tx_rate = 0;
+	arg.tx_power = 0;
+	arg.bcn = arvif->beacon->data;
+	arg.bcn_len = arvif->beacon->len;
+
+	ret = ath10k_wmi_beacon_send_nowait(arvif->ar, &arg);
+	if (ret)
+		return;
+
+	dev_kfree_skb_any(arvif->beacon);
+	arvif->beacon = NULL;
+}
+
+static void ath10k_wmi_tx_beacons_iter(void *data, u8 *mac,
+				       struct ieee80211_vif *vif)
+{
+	struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+
+	ath10k_wmi_tx_beacon_nowait(arvif);
+}
+
+static void ath10k_wmi_tx_beacons_nowait(struct ath10k *ar)
+{
+	spin_lock_bh(&ar->data_lock);
+	ieee80211_iterate_active_interfaces_atomic(ar->hw,
+						   IEEE80211_IFACE_ITER_NORMAL,
+						   ath10k_wmi_tx_beacons_iter,
+						   NULL);
+	spin_unlock_bh(&ar->data_lock);
+}
+
+static void ath10k_wmi_op_ep_tx_credits(struct ath10k *ar)
+{
+	/* try to send pending beacons first. they take priority */
+	ath10k_wmi_tx_beacons_nowait(ar);
+
+	wake_up(&ar->wmi.tx_credits_wq);
+}
+
+static int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb,
+			       u32 cmd_id)
+{
+	int ret = -EOPNOTSUPP;
+
+	might_sleep();
+
+	if (cmd_id == WMI_CMD_UNSUPPORTED) {
+		ath10k_warn("wmi command %d is not supported by firmware\n",
+			    cmd_id);
+		return ret;
 	}
 
-	memset(skb_cb, 0, sizeof(*skb_cb));
+	wait_event_timeout(ar->wmi.tx_credits_wq, ({
+		/* try to send pending beacons first. they take priority */
+		ath10k_wmi_tx_beacons_nowait(ar);
 
-	trace_ath10k_wmi_cmd(cmd_id, skb->data, skb->len);
+		ret = ath10k_wmi_cmd_send_nowait(ar, skb, cmd_id);
+		(ret != -EAGAIN);
+	}), 3*HZ);
 
-	status = ath10k_htc_send(&ar->htc, ar->wmi.eid, skb);
-	if (status) {
+	if (ret)
 		dev_kfree_skb_any(skb);
-		atomic_dec(&ar->wmi.pending_tx_count);
-		return status;
+
+	return ret;
+}
+
+int ath10k_wmi_mgmt_tx(struct ath10k *ar, struct sk_buff *skb)
+{
+	int ret = 0;
+	struct wmi_mgmt_tx_cmd *cmd;
+	struct ieee80211_hdr *hdr;
+	struct sk_buff *wmi_skb;
+	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+	int len;
+	u16 fc;
+
+	hdr = (struct ieee80211_hdr *)skb->data;
+	fc = le16_to_cpu(hdr->frame_control);
+
+	if (WARN_ON_ONCE(!ieee80211_is_mgmt(hdr->frame_control)))
+		return -EINVAL;
+
+	len = sizeof(cmd->hdr) + skb->len;
+	len = round_up(len, 4);
+
+	wmi_skb = ath10k_wmi_alloc_skb(len);
+	if (!wmi_skb)
+		return -ENOMEM;
+
+	cmd = (struct wmi_mgmt_tx_cmd *)wmi_skb->data;
+
+	cmd->hdr.vdev_id = __cpu_to_le32(ATH10K_SKB_CB(skb)->vdev_id);
+	cmd->hdr.tx_rate = 0;
+	cmd->hdr.tx_power = 0;
+	cmd->hdr.buf_len = __cpu_to_le32((u32)(skb->len));
+
+	memcpy(cmd->hdr.peer_macaddr.addr, ieee80211_get_DA(hdr), ETH_ALEN);
+	memcpy(cmd->buf, skb->data, skb->len);
+
+	ath10k_dbg(ATH10K_DBG_WMI, "wmi mgmt tx skb %p len %d ftype %02x stype %02x\n",
+		   wmi_skb, wmi_skb->len, fc & IEEE80211_FCTL_FTYPE,
+		   fc & IEEE80211_FCTL_STYPE);
+
+	/* Send the management frame buffer to the target */
+	ret = ath10k_wmi_cmd_send(ar, wmi_skb, ar->wmi.cmd->mgmt_tx_cmdid);
+	if (ret) {
+		dev_kfree_skb_any(skb);
+		return ret;
 	}
 
-	return 0;
+	/* TODO: report tx status to mac80211 - temporary just ACK */
+	info->flags |= IEEE80211_TX_STAT_ACK;
+	ieee80211_tx_status_irqsafe(ar->hw, skb);
+
+	return ret;
 }
 
 static int ath10k_wmi_event_scan(struct ath10k *ar, struct sk_buff *skb)
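The new ath10k_wmi_cmd_send() replaces the old fixed pending-count limit with a credit wait: keep retrying the non-blocking send until it stops returning -EAGAIN, or give up after three seconds. A rough userspace sketch of that retry shape, with a polling loop standing in for the kernel waitqueue and all names invented:

#include <errno.h>
#include <stdio.h>
#include <time.h>

/* stand-in for the HTC layer: fails with -EAGAIN until credits appear */
static int credits;

static int send_nowait(const char *cmd)
{
	if (credits == 0)
		return -EAGAIN;
	credits--;
	printf("sent %s\n", cmd);
	return 0;
}

/* retry-until-deadline loop; the driver sleeps on tx_credits_wq instead */
static int send_blocking(const char *cmd, int timeout_s)
{
	time_t deadline = time(NULL) + timeout_s;
	int ret;

	do {
		ret = send_nowait(cmd);
		if (ret != -EAGAIN)
			return ret;
		credits = 1; /* pretend a tx-credits completion arrived */
	} while (time(NULL) < deadline);

	return -EAGAIN;
}

int main(void)
{
	return send_blocking("WMI_INIT_CMDID", 3) ? 1 : 0;
}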
@@ -315,7 +873,9 @@ static inline u8 get_rate_idx(u32 rate, enum ieee80211_band band)
 
 static int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
 {
-	struct wmi_mgmt_rx_event *event = (struct wmi_mgmt_rx_event *)skb->data;
+	struct wmi_mgmt_rx_event_v1 *ev_v1;
+	struct wmi_mgmt_rx_event_v2 *ev_v2;
+	struct wmi_mgmt_rx_hdr_v1 *ev_hdr;
 	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
 	struct ieee80211_hdr *hdr;
 	u32 rx_status;
@@ -325,13 +885,24 @@ static int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
 	u32 rate;
 	u32 buf_len;
 	u16 fc;
+	int pull_len;
+
+	if (test_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX, ar->fw_features)) {
+		ev_v2 = (struct wmi_mgmt_rx_event_v2 *)skb->data;
+		ev_hdr = &ev_v2->hdr.v1;
+		pull_len = sizeof(*ev_v2);
+	} else {
+		ev_v1 = (struct wmi_mgmt_rx_event_v1 *)skb->data;
+		ev_hdr = &ev_v1->hdr;
+		pull_len = sizeof(*ev_v1);
+	}
 
-	channel = __le32_to_cpu(event->hdr.channel);
-	buf_len = __le32_to_cpu(event->hdr.buf_len);
-	rx_status = __le32_to_cpu(event->hdr.status);
-	snr = __le32_to_cpu(event->hdr.snr);
-	phy_mode = __le32_to_cpu(event->hdr.phy_mode);
-	rate = __le32_to_cpu(event->hdr.rate);
+	channel = __le32_to_cpu(ev_hdr->channel);
+	buf_len = __le32_to_cpu(ev_hdr->buf_len);
+	rx_status = __le32_to_cpu(ev_hdr->status);
+	snr = __le32_to_cpu(ev_hdr->snr);
+	phy_mode = __le32_to_cpu(ev_hdr->phy_mode);
+	rate = __le32_to_cpu(ev_hdr->rate);
 
 	memset(status, 0, sizeof(*status));
 
@@ -358,7 +929,7 @@ static int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
 	status->signal = snr + ATH10K_DEFAULT_NOISE_FLOOR;
 	status->rate_idx = get_rate_idx(rate, status->band);
 
-	skb_pull(skb, sizeof(event->hdr));
+	skb_pull(skb, pull_len);
 
 	hdr = (struct ieee80211_hdr *)skb->data;
 	fc = le16_to_cpu(hdr->frame_control);
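The v1/v2 handling above picks a header pointer and a pull length once, so the rest of the parser stays version-agnostic. The same trick in a standalone sketch, with invented header layouts (the real v2 layout differs):

#include <stdint.h>
#include <stdio.h>

/* hypothetical v1 header, and a v2 event that prepends extra fields */
struct rx_hdr_v1 { uint32_t channel, snr, rate; };
struct rx_ev_v1 { struct rx_hdr_v1 hdr; };
struct rx_ev_v2 { uint32_t rssi[4]; struct rx_hdr_v1 v1; };

int main(void)
{
	uint32_t raw[16] = { 0 };      /* stands in for the skb payload */
	int have_v2 = 1;               /* would come from a fw feature bit */
	struct rx_hdr_v1 *hdr;
	size_t pull_len;

	if (have_v2) {
		hdr = &((struct rx_ev_v2 *)raw)->v1;
		pull_len = sizeof(struct rx_ev_v2);
	} else {
		hdr = &((struct rx_ev_v1 *)raw)->hdr;
		pull_len = sizeof(struct rx_ev_v1);
	}

	/* the 802.11 frame starts right after the event header */
	printf("channel %u, frame at offset %zu\n",
	       (unsigned)hdr->channel, pull_len);
	return 0;
}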
@@ -734,10 +1305,8 @@ static void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
 	int i = -1;
 	struct wmi_bcn_info *bcn_info;
 	struct ath10k_vif *arvif;
-	struct wmi_bcn_tx_arg arg;
 	struct sk_buff *bcn;
 	int vdev_id = 0;
-	int ret;
 
 	ath10k_dbg(ATH10K_DBG_MGMT, "WMI_HOST_SWBA_EVENTID\n");
 
@@ -794,17 +1363,17 @@ static void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
 		ath10k_wmi_update_tim(ar, arvif, bcn, bcn_info);
 		ath10k_wmi_update_noa(ar, arvif, bcn, bcn_info);
 
-		arg.vdev_id = arvif->vdev_id;
-		arg.tx_rate = 0;
-		arg.tx_power = 0;
-		arg.bcn = bcn->data;
-		arg.bcn_len = bcn->len;
+		spin_lock_bh(&ar->data_lock);
+		if (arvif->beacon) {
+			ath10k_warn("SWBA overrun on vdev %d\n",
+				    arvif->vdev_id);
+			dev_kfree_skb_any(arvif->beacon);
+		}
 
-		ret = ath10k_wmi_beacon_send(ar, &arg);
-		if (ret)
-			ath10k_warn("could not send beacon (%d)\n", ret);
+		arvif->beacon = bcn;
 
-		dev_kfree_skb_any(bcn);
+		ath10k_wmi_tx_beacon_nowait(arvif);
+		spin_unlock_bh(&ar->data_lock);
 	}
 }
 
@@ -919,6 +1488,55 @@ static void ath10k_wmi_event_vdev_install_key_complete(struct ath10k *ar,
 	ath10k_dbg(ATH10K_DBG_WMI, "WMI_VDEV_INSTALL_KEY_COMPLETE_EVENTID\n");
 }
 
+static void ath10k_wmi_event_inst_rssi_stats(struct ath10k *ar,
+					     struct sk_buff *skb)
+{
+	ath10k_dbg(ATH10K_DBG_WMI, "WMI_INST_RSSI_STATS_EVENTID\n");
+}
+
+static void ath10k_wmi_event_vdev_standby_req(struct ath10k *ar,
+					      struct sk_buff *skb)
+{
+	ath10k_dbg(ATH10K_DBG_WMI, "WMI_VDEV_STANDBY_REQ_EVENTID\n");
+}
+
+static void ath10k_wmi_event_vdev_resume_req(struct ath10k *ar,
+					     struct sk_buff *skb)
+{
+	ath10k_dbg(ATH10K_DBG_WMI, "WMI_VDEV_RESUME_REQ_EVENTID\n");
+}
+
+static int ath10k_wmi_alloc_host_mem(struct ath10k *ar, u32 req_id,
+				     u32 num_units, u32 unit_len)
+{
+	dma_addr_t paddr;
+	u32 pool_size;
+	int idx = ar->wmi.num_mem_chunks;
+
+	pool_size = num_units * round_up(unit_len, 4);
+
+	if (!pool_size)
+		return -EINVAL;
+
+	ar->wmi.mem_chunks[idx].vaddr = dma_alloc_coherent(ar->dev,
+							   pool_size,
+							   &paddr,
+							   GFP_ATOMIC);
+	if (!ar->wmi.mem_chunks[idx].vaddr) {
+		ath10k_warn("failed to allocate memory chunk\n");
+		return -ENOMEM;
+	}
+
+	memset(ar->wmi.mem_chunks[idx].vaddr, 0, pool_size);
+
+	ar->wmi.mem_chunks[idx].paddr = paddr;
+	ar->wmi.mem_chunks[idx].len = pool_size;
+	ar->wmi.mem_chunks[idx].req_id = req_id;
+	ar->wmi.num_mem_chunks++;
+
+	return 0;
+}
+
 static void ath10k_wmi_service_ready_event_rx(struct ath10k *ar,
 					      struct sk_buff *skb)
 {
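ath10k_wmi_alloc_host_mem() sizes each pool as num_units * round_up(unit_len, 4), so every unit stays 4-byte aligned. The padding arithmetic in isolation, with example values chosen for illustration:

#include <stdint.h>
#include <stdio.h>

/* round up to a multiple of 4, same arithmetic as the kernel's round_up() */
static uint32_t round_up4(uint32_t x)
{
	return (x + 3) & ~3u;
}

int main(void)
{
	uint32_t num_units = 129; /* e.g. peers plus one self peer */
	uint32_t unit_len = 42;   /* an odd size gets padded to 44 */
	uint32_t pool_size = num_units * round_up4(unit_len);

	printf("pool: %u bytes for %u units\n", pool_size, num_units);
	return 0;
}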
@@ -943,6 +1561,10 @@ static void ath10k_wmi_service_ready_event_rx(struct ath10k *ar,
 	ar->phy_capability = __le32_to_cpu(ev->phy_capability);
 	ar->num_rf_chains = __le32_to_cpu(ev->num_rf_chains);
 
+	/* only manually set fw features when not using FW IE format */
+	if (ar->fw_api == 1 && ar->fw_version_build > 636)
+		set_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX, ar->fw_features);
+
 	if (ar->num_rf_chains > WMI_MAX_SPATIAL_STREAM) {
 		ath10k_warn("hardware advertises support for more spatial streams than it should (%d > %d)\n",
 			    ar->num_rf_chains, WMI_MAX_SPATIAL_STREAM);
@@ -987,6 +1609,108 @@ static void ath10k_wmi_service_ready_event_rx(struct ath10k *ar,
 	complete(&ar->wmi.service_ready);
 }
 
+static void ath10k_wmi_10x_service_ready_event_rx(struct ath10k *ar,
+						  struct sk_buff *skb)
+{
+	u32 num_units, req_id, unit_size, num_mem_reqs, num_unit_info, i;
+	int ret;
+	struct wmi_service_ready_event_10x *ev = (void *)skb->data;
+
+	if (skb->len < sizeof(*ev)) {
+		ath10k_warn("Service ready event was %d B but expected %zu B. Wrong firmware version?\n",
+			    skb->len, sizeof(*ev));
+		return;
+	}
+
+	ar->hw_min_tx_power = __le32_to_cpu(ev->hw_min_tx_power);
+	ar->hw_max_tx_power = __le32_to_cpu(ev->hw_max_tx_power);
+	ar->ht_cap_info = __le32_to_cpu(ev->ht_cap_info);
+	ar->vht_cap_info = __le32_to_cpu(ev->vht_cap_info);
+	ar->fw_version_major =
+		(__le32_to_cpu(ev->sw_version) & 0xff000000) >> 24;
+	ar->fw_version_minor = (__le32_to_cpu(ev->sw_version) & 0x00ffffff);
+	ar->phy_capability = __le32_to_cpu(ev->phy_capability);
+	ar->num_rf_chains = __le32_to_cpu(ev->num_rf_chains);
+
+	if (ar->num_rf_chains > WMI_MAX_SPATIAL_STREAM) {
+		ath10k_warn("hardware advertises support for more spatial streams than it should (%d > %d)\n",
+			    ar->num_rf_chains, WMI_MAX_SPATIAL_STREAM);
+		ar->num_rf_chains = WMI_MAX_SPATIAL_STREAM;
+	}
+
+	ar->ath_common.regulatory.current_rd =
+		__le32_to_cpu(ev->hal_reg_capabilities.eeprom_rd);
+
+	ath10k_debug_read_service_map(ar, ev->wmi_service_bitmap,
+				      sizeof(ev->wmi_service_bitmap));
+
+	if (strlen(ar->hw->wiphy->fw_version) == 0) {
+		snprintf(ar->hw->wiphy->fw_version,
+			 sizeof(ar->hw->wiphy->fw_version),
+			 "%u.%u",
+			 ar->fw_version_major,
+			 ar->fw_version_minor);
+	}
+
+	num_mem_reqs = __le32_to_cpu(ev->num_mem_reqs);
+
+	if (num_mem_reqs > ATH10K_MAX_MEM_REQS) {
+		ath10k_warn("requested memory chunks number (%d) exceeds the limit\n",
+			    num_mem_reqs);
+		return;
+	}
+
+	if (!num_mem_reqs)
+		goto exit;
+
+	ath10k_dbg(ATH10K_DBG_WMI, "firmware has requested %d memory chunks\n",
+		   num_mem_reqs);
+
+	for (i = 0; i < num_mem_reqs; ++i) {
+		req_id = __le32_to_cpu(ev->mem_reqs[i].req_id);
+		num_units = __le32_to_cpu(ev->mem_reqs[i].num_units);
+		unit_size = __le32_to_cpu(ev->mem_reqs[i].unit_size);
+		num_unit_info = __le32_to_cpu(ev->mem_reqs[i].num_unit_info);
+
+		if (num_unit_info & NUM_UNITS_IS_NUM_PEERS)
+			/* number of units to allocate is the number of
+			 * peers, plus 1 extra for the self peer on the
+			 * target; this needs to be kept in sync, as host
+			 * and target can otherwise disagree */
+			num_units = TARGET_10X_NUM_PEERS + 1;
+		else if (num_unit_info & NUM_UNITS_IS_NUM_VDEVS)
+			num_units = TARGET_10X_NUM_VDEVS + 1;
+
+		ath10k_dbg(ATH10K_DBG_WMI,
+			   "wmi mem_req_id %d num_units %d num_unit_info %d unit size %d actual units %d\n",
+			   req_id,
+			   __le32_to_cpu(ev->mem_reqs[i].num_units),
+			   num_unit_info,
+			   unit_size,
+			   num_units);
+
+		ret = ath10k_wmi_alloc_host_mem(ar, req_id, num_units,
+						unit_size);
+		if (ret)
+			return;
+	}
+
+exit:
+	ath10k_dbg(ATH10K_DBG_WMI,
+		   "wmi event service ready sw_ver 0x%08x abi_ver %u phy_cap 0x%08x ht_cap 0x%08x vht_cap 0x%08x vht_supp_msc 0x%08x sys_cap_info 0x%08x mem_reqs %u num_rf_chains %u\n",
+		   __le32_to_cpu(ev->sw_version),
+		   __le32_to_cpu(ev->abi_version),
+		   __le32_to_cpu(ev->phy_capability),
+		   __le32_to_cpu(ev->ht_cap_info),
+		   __le32_to_cpu(ev->vht_cap_info),
+		   __le32_to_cpu(ev->vht_supp_mcs),
+		   __le32_to_cpu(ev->sys_cap_info),
+		   __le32_to_cpu(ev->num_mem_reqs),
+		   __le32_to_cpu(ev->num_rf_chains));
+
+	complete(&ar->wmi.service_ready);
+}
+
 static int ath10k_wmi_ready_event_rx(struct ath10k *ar, struct sk_buff *skb)
 {
 	struct wmi_ready_event *ev = (struct wmi_ready_event *)skb->data;
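The num_unit_info flags let firmware say "allocate one unit per peer" or "per vdev" without hard-coding a count; the host substitutes its own constants. A toy decode of that convention (bit positions and counts are invented for the sketch):

#include <stdio.h>

#define NUM_UNITS_IS_NUM_PEERS (1 << 0) /* hypothetical bit position */
#define NUM_UNITS_IS_NUM_VDEVS (1 << 1) /* hypothetical bit position */

int main(void)
{
	unsigned int num_unit_info = NUM_UNITS_IS_NUM_PEERS;
	unsigned int num_units = 7; /* value from the event, may be overridden */

	if (num_unit_info & NUM_UNITS_IS_NUM_PEERS)
		num_units = 128 + 1; /* peers + 1 self peer, host-side constant */
	else if (num_unit_info & NUM_UNITS_IS_NUM_VDEVS)
		num_units = 16 + 1;

	printf("allocating %u units\n", num_units);
	return 0;
}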
@@ -1007,7 +1731,7 @@ static int ath10k_wmi_ready_event_rx(struct ath10k *ar, struct sk_buff *skb)
 	return 0;
 }
 
-static void ath10k_wmi_event_process(struct ath10k *ar, struct sk_buff *skb)
+static void ath10k_wmi_main_process_rx(struct ath10k *ar, struct sk_buff *skb)
 {
 	struct wmi_cmd_hdr *cmd_hdr;
 	enum wmi_event_id id;
@@ -1126,64 +1850,158 @@ static void ath10k_wmi_event_process(struct ath10k *ar, struct sk_buff *skb)
 	dev_kfree_skb(skb);
 }
 
-static void ath10k_wmi_event_work(struct work_struct *work)
+static void ath10k_wmi_10x_process_rx(struct ath10k *ar, struct sk_buff *skb)
 {
-	struct ath10k *ar = container_of(work, struct ath10k,
-					 wmi.wmi_event_work);
-	struct sk_buff *skb;
-
-	for (;;) {
-		skb = skb_dequeue(&ar->wmi.wmi_event_list);
-		if (!skb)
-			break;
-
-		ath10k_wmi_event_process(ar, skb);
-	}
-}
-
-static void ath10k_wmi_process_rx(struct ath10k *ar, struct sk_buff *skb)
-{
-	struct wmi_cmd_hdr *cmd_hdr = (struct wmi_cmd_hdr *)skb->data;
-	enum wmi_event_id event_id;
+	struct wmi_cmd_hdr *cmd_hdr;
+	enum wmi_10x_event_id id;
+	u16 len;
 
-	event_id = MS(__le32_to_cpu(cmd_hdr->cmd_id), WMI_CMD_HDR_CMD_ID);
+	cmd_hdr = (struct wmi_cmd_hdr *)skb->data;
+	id = MS(__le32_to_cpu(cmd_hdr->cmd_id), WMI_CMD_HDR_CMD_ID);
 
-	/* some events require to be handled ASAP
-	 * thus can't be defered to a worker thread */
-	switch (event_id) {
-	case WMI_HOST_SWBA_EVENTID:
-	case WMI_MGMT_RX_EVENTID:
-		ath10k_wmi_event_process(ar, skb);
+	if (skb_pull(skb, sizeof(struct wmi_cmd_hdr)) == NULL)
+		return;
+
+	len = skb->len;
+
+	trace_ath10k_wmi_event(id, skb->data, skb->len);
+
+	switch (id) {
+	case WMI_10X_MGMT_RX_EVENTID:
+		ath10k_wmi_event_mgmt_rx(ar, skb);
+		/* mgmt_rx() owns the skb now! */
 		return;
+	case WMI_10X_SCAN_EVENTID:
+		ath10k_wmi_event_scan(ar, skb);
+		break;
+	case WMI_10X_CHAN_INFO_EVENTID:
+		ath10k_wmi_event_chan_info(ar, skb);
+		break;
+	case WMI_10X_ECHO_EVENTID:
+		ath10k_wmi_event_echo(ar, skb);
+		break;
+	case WMI_10X_DEBUG_MESG_EVENTID:
+		ath10k_wmi_event_debug_mesg(ar, skb);
+		break;
+	case WMI_10X_UPDATE_STATS_EVENTID:
+		ath10k_wmi_event_update_stats(ar, skb);
+		break;
+	case WMI_10X_VDEV_START_RESP_EVENTID:
+		ath10k_wmi_event_vdev_start_resp(ar, skb);
+		break;
+	case WMI_10X_VDEV_STOPPED_EVENTID:
+		ath10k_wmi_event_vdev_stopped(ar, skb);
+		break;
+	case WMI_10X_PEER_STA_KICKOUT_EVENTID:
+		ath10k_wmi_event_peer_sta_kickout(ar, skb);
+		break;
+	case WMI_10X_HOST_SWBA_EVENTID:
+		ath10k_wmi_event_host_swba(ar, skb);
+		break;
+	case WMI_10X_TBTTOFFSET_UPDATE_EVENTID:
+		ath10k_wmi_event_tbttoffset_update(ar, skb);
+		break;
+	case WMI_10X_PHYERR_EVENTID:
+		ath10k_wmi_event_phyerr(ar, skb);
+		break;
+	case WMI_10X_ROAM_EVENTID:
+		ath10k_wmi_event_roam(ar, skb);
+		break;
+	case WMI_10X_PROFILE_MATCH:
+		ath10k_wmi_event_profile_match(ar, skb);
+		break;
+	case WMI_10X_DEBUG_PRINT_EVENTID:
+		ath10k_wmi_event_debug_print(ar, skb);
+		break;
+	case WMI_10X_PDEV_QVIT_EVENTID:
+		ath10k_wmi_event_pdev_qvit(ar, skb);
+		break;
+	case WMI_10X_WLAN_PROFILE_DATA_EVENTID:
+		ath10k_wmi_event_wlan_profile_data(ar, skb);
+		break;
+	case WMI_10X_RTT_MEASUREMENT_REPORT_EVENTID:
+		ath10k_wmi_event_rtt_measurement_report(ar, skb);
+		break;
+	case WMI_10X_TSF_MEASUREMENT_REPORT_EVENTID:
+		ath10k_wmi_event_tsf_measurement_report(ar, skb);
+		break;
+	case WMI_10X_RTT_ERROR_REPORT_EVENTID:
+		ath10k_wmi_event_rtt_error_report(ar, skb);
+		break;
+	case WMI_10X_WOW_WAKEUP_HOST_EVENTID:
+		ath10k_wmi_event_wow_wakeup_host(ar, skb);
+		break;
+	case WMI_10X_DCS_INTERFERENCE_EVENTID:
+		ath10k_wmi_event_dcs_interference(ar, skb);
+		break;
+	case WMI_10X_PDEV_TPC_CONFIG_EVENTID:
+		ath10k_wmi_event_pdev_tpc_config(ar, skb);
+		break;
+	case WMI_10X_INST_RSSI_STATS_EVENTID:
+		ath10k_wmi_event_inst_rssi_stats(ar, skb);
+		break;
+	case WMI_10X_VDEV_STANDBY_REQ_EVENTID:
+		ath10k_wmi_event_vdev_standby_req(ar, skb);
+		break;
+	case WMI_10X_VDEV_RESUME_REQ_EVENTID:
+		ath10k_wmi_event_vdev_resume_req(ar, skb);
+		break;
+	case WMI_10X_SERVICE_READY_EVENTID:
+		ath10k_wmi_10x_service_ready_event_rx(ar, skb);
+		break;
+	case WMI_10X_READY_EVENTID:
+		ath10k_wmi_ready_event_rx(ar, skb);
+		break;
 	default:
+		ath10k_warn("Unknown eventid: %d\n", id);
 		break;
 	}
 
-	skb_queue_tail(&ar->wmi.wmi_event_list, skb);
-	queue_work(ar->workqueue, &ar->wmi.wmi_event_work);
+	dev_kfree_skb(skb);
+}
+
+
+static void ath10k_wmi_process_rx(struct ath10k *ar, struct sk_buff *skb)
+{
+	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features))
+		ath10k_wmi_10x_process_rx(ar, skb);
+	else
+		ath10k_wmi_main_process_rx(ar, skb);
 }
 
 /* WMI Initialization functions */
 int ath10k_wmi_attach(struct ath10k *ar)
 {
+	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) {
+		ar->wmi.cmd = &wmi_10x_cmd_map;
+		ar->wmi.vdev_param = &wmi_10x_vdev_param_map;
+		ar->wmi.pdev_param = &wmi_10x_pdev_param_map;
+	} else {
+		ar->wmi.cmd = &wmi_cmd_map;
+		ar->wmi.vdev_param = &wmi_vdev_param_map;
+		ar->wmi.pdev_param = &wmi_pdev_param_map;
+	}
+
 	init_completion(&ar->wmi.service_ready);
 	init_completion(&ar->wmi.unified_ready);
-	init_waitqueue_head(&ar->wmi.wq);
-
-	skb_queue_head_init(&ar->wmi.wmi_event_list);
-	INIT_WORK(&ar->wmi.wmi_event_work, ath10k_wmi_event_work);
+	init_waitqueue_head(&ar->wmi.tx_credits_wq);
 
 	return 0;
 }
 
 void ath10k_wmi_detach(struct ath10k *ar)
 {
-	/* HTC should've drained the packets already */
-	if (WARN_ON(atomic_read(&ar->wmi.pending_tx_count) > 0))
-		ath10k_warn("there are still pending packets\n");
+	int i;
+
+	/* free the host memory chunks requested by firmware */
+	for (i = 0; i < ar->wmi.num_mem_chunks; i++) {
+		dma_free_coherent(ar->dev,
+				  ar->wmi.mem_chunks[i].len,
+				  ar->wmi.mem_chunks[i].vaddr,
+				  ar->wmi.mem_chunks[i].paddr);
+	}
 
-	cancel_work_sync(&ar->wmi.wmi_event_work);
-	skb_queue_purge(&ar->wmi.wmi_event_list);
+	ar->wmi.num_mem_chunks = 0;
 }
 
 int ath10k_wmi_connect_htc_service(struct ath10k *ar)
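Both dispatchers start by extracting the command-id field from the first header word with the driver's MS() masking macro. The idiom in a standalone sketch (the mask and shift below are illustrative, not the real WMI_CMD_HDR_CMD_ID layout):

#include <stdint.h>
#include <stdio.h>

/* MS() pulls a field out of a word: mask it, then shift it down */
#define CMD_ID_MASK 0x00ffffff /* hypothetical field mask */
#define CMD_ID_LSB  0          /* hypothetical field position */

static uint32_t ms(uint32_t v, uint32_t mask, uint32_t lsb)
{
	return (v & mask) >> lsb;
}

int main(void)
{
	uint32_t cmd_hdr = 0x0100903a; /* already converted from le32 */

	printf("event id: 0x%x\n",
	       (unsigned)ms(cmd_hdr, CMD_ID_MASK, CMD_ID_LSB));
	return 0;
}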
@@ -1198,6 +2016,7 @@ int ath10k_wmi_connect_htc_service(struct ath10k *ar)
 	/* these fields are the same for all service endpoints */
 	conn_req.ep_ops.ep_tx_complete = ath10k_wmi_htc_tx_complete;
 	conn_req.ep_ops.ep_rx_complete = ath10k_wmi_process_rx;
+	conn_req.ep_ops.ep_tx_credits = ath10k_wmi_op_ep_tx_credits;
 
 	/* connect to control service */
 	conn_req.service_id = ATH10K_HTC_SVC_ID_WMI_CONTROL;
@@ -1234,7 +2053,8 @@ int ath10k_wmi_pdev_set_regdomain(struct ath10k *ar, u16 rd, u16 rd2g,
1234 "wmi pdev regdomain rd %x rd2g %x rd5g %x ctl2g %x ctl5g %x\n", 2053 "wmi pdev regdomain rd %x rd2g %x rd5g %x ctl2g %x ctl5g %x\n",
1235 rd, rd2g, rd5g, ctl2g, ctl5g); 2054 rd, rd2g, rd5g, ctl2g, ctl5g);
1236 2055
1237 return ath10k_wmi_cmd_send(ar, skb, WMI_PDEV_SET_REGDOMAIN_CMDID); 2056 return ath10k_wmi_cmd_send(ar, skb,
2057 ar->wmi.cmd->pdev_set_regdomain_cmdid);
1238} 2058}
1239 2059
1240int ath10k_wmi_pdev_set_channel(struct ath10k *ar, 2060int ath10k_wmi_pdev_set_channel(struct ath10k *ar,
@@ -1264,7 +2084,8 @@ int ath10k_wmi_pdev_set_channel(struct ath10k *ar,
1264 "wmi set channel mode %d freq %d\n", 2084 "wmi set channel mode %d freq %d\n",
1265 arg->mode, arg->freq); 2085 arg->mode, arg->freq);
1266 2086
1267 return ath10k_wmi_cmd_send(ar, skb, WMI_PDEV_SET_CHANNEL_CMDID); 2087 return ath10k_wmi_cmd_send(ar, skb,
2088 ar->wmi.cmd->pdev_set_channel_cmdid);
1268} 2089}
1269 2090
1270int ath10k_wmi_pdev_suspend_target(struct ath10k *ar) 2091int ath10k_wmi_pdev_suspend_target(struct ath10k *ar)
@@ -1279,7 +2100,7 @@ int ath10k_wmi_pdev_suspend_target(struct ath10k *ar)
 	cmd = (struct wmi_pdev_suspend_cmd *)skb->data;
 	cmd->suspend_opt = WMI_PDEV_SUSPEND;
 
-	return ath10k_wmi_cmd_send(ar, skb, WMI_PDEV_SUSPEND_CMDID);
+	return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->pdev_suspend_cmdid);
 }
 
 int ath10k_wmi_pdev_resume_target(struct ath10k *ar)
@@ -1290,15 +2111,19 @@ int ath10k_wmi_pdev_resume_target(struct ath10k *ar)
 	if (skb == NULL)
 		return -ENOMEM;
 
-	return ath10k_wmi_cmd_send(ar, skb, WMI_PDEV_RESUME_CMDID);
+	return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->pdev_resume_cmdid);
 }
 
-int ath10k_wmi_pdev_set_param(struct ath10k *ar, enum wmi_pdev_param id,
-			      u32 value)
+int ath10k_wmi_pdev_set_param(struct ath10k *ar, u32 id, u32 value)
 {
 	struct wmi_pdev_set_param_cmd *cmd;
 	struct sk_buff *skb;
 
+	if (id == WMI_PDEV_PARAM_UNSUPPORTED) {
+		ath10k_warn("pdev param %d not supported by firmware\n", id);
+		return -EOPNOTSUPP;
+	}
+
 	skb = ath10k_wmi_alloc_skb(sizeof(*cmd));
 	if (!skb)
 		return -ENOMEM;
@@ -1309,15 +2134,16 @@ int ath10k_wmi_pdev_set_param(struct ath10k *ar, enum wmi_pdev_param id,
 
 	ath10k_dbg(ATH10K_DBG_WMI, "wmi pdev set param %d value %d\n",
 		   id, value);
-	return ath10k_wmi_cmd_send(ar, skb, WMI_PDEV_SET_PARAM_CMDID);
+	return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->pdev_set_param_cmdid);
 }
 
-int ath10k_wmi_cmd_init(struct ath10k *ar)
+static int ath10k_wmi_main_cmd_init(struct ath10k *ar)
 {
 	struct wmi_init_cmd *cmd;
 	struct sk_buff *buf;
 	struct wmi_resource_config config = {};
-	u32 val;
+	u32 len, val;
+	int i;
 
 	config.num_vdevs = __cpu_to_le32(TARGET_NUM_VDEVS);
 	config.num_peers = __cpu_to_le32(TARGET_NUM_PEERS + TARGET_NUM_VDEVS);
@@ -1370,23 +2196,158 @@ int ath10k_wmi_cmd_init(struct ath10k *ar)
 	config.num_msdu_desc = __cpu_to_le32(TARGET_NUM_MSDU_DESC);
 	config.max_frag_entries = __cpu_to_le32(TARGET_MAX_FRAG_ENTRIES);
 
-	buf = ath10k_wmi_alloc_skb(sizeof(*cmd));
+	len = sizeof(*cmd) +
+	      (sizeof(struct host_memory_chunk) * ar->wmi.num_mem_chunks);
+
+	buf = ath10k_wmi_alloc_skb(len);
 	if (!buf)
 		return -ENOMEM;
 
 	cmd = (struct wmi_init_cmd *)buf->data;
-	cmd->num_host_mem_chunks = 0;
+
+	if (ar->wmi.num_mem_chunks == 0) {
+		cmd->num_host_mem_chunks = 0;
+		goto out;
+	}
+
+	ath10k_dbg(ATH10K_DBG_WMI, "wmi sending %d memory chunks info.\n",
+		   __cpu_to_le32(ar->wmi.num_mem_chunks));
+
+	cmd->num_host_mem_chunks = __cpu_to_le32(ar->wmi.num_mem_chunks);
+
+	for (i = 0; i < ar->wmi.num_mem_chunks; i++) {
+		cmd->host_mem_chunks[i].ptr =
+			__cpu_to_le32(ar->wmi.mem_chunks[i].paddr);
+		cmd->host_mem_chunks[i].size =
+			__cpu_to_le32(ar->wmi.mem_chunks[i].len);
+		cmd->host_mem_chunks[i].req_id =
+			__cpu_to_le32(ar->wmi.mem_chunks[i].req_id);
+
+		ath10k_dbg(ATH10K_DBG_WMI,
+			   "wmi chunk %d len %d requested, addr 0x%x\n",
+			   i,
+			   cmd->host_mem_chunks[i].size,
+			   cmd->host_mem_chunks[i].ptr);
+	}
+out:
 	memcpy(&cmd->resource_config, &config, sizeof(config));
 
 	ath10k_dbg(ATH10K_DBG_WMI, "wmi init\n");
-	return ath10k_wmi_cmd_send(ar, buf, WMI_INIT_CMDID);
+	return ath10k_wmi_cmd_send(ar, buf, ar->wmi.cmd->init_cmdid);
 }
 
-static int ath10k_wmi_start_scan_calc_len(const struct wmi_start_scan_arg *arg)
+static int ath10k_wmi_10x_cmd_init(struct ath10k *ar)
+{
+	struct wmi_init_cmd_10x *cmd;
+	struct sk_buff *buf;
+	struct wmi_resource_config_10x config = {};
+	u32 len, val;
+	int i;
+
+	config.num_vdevs = __cpu_to_le32(TARGET_10X_NUM_VDEVS);
+	config.num_peers = __cpu_to_le32(TARGET_10X_NUM_PEERS);
+	config.num_peer_keys = __cpu_to_le32(TARGET_10X_NUM_PEER_KEYS);
+	config.num_tids = __cpu_to_le32(TARGET_10X_NUM_TIDS);
+	config.ast_skid_limit = __cpu_to_le32(TARGET_10X_AST_SKID_LIMIT);
+	config.tx_chain_mask = __cpu_to_le32(TARGET_10X_TX_CHAIN_MASK);
+	config.rx_chain_mask = __cpu_to_le32(TARGET_10X_RX_CHAIN_MASK);
+	config.rx_timeout_pri_vo = __cpu_to_le32(TARGET_10X_RX_TIMEOUT_LO_PRI);
+	config.rx_timeout_pri_vi = __cpu_to_le32(TARGET_10X_RX_TIMEOUT_LO_PRI);
+	config.rx_timeout_pri_be = __cpu_to_le32(TARGET_10X_RX_TIMEOUT_LO_PRI);
+	config.rx_timeout_pri_bk = __cpu_to_le32(TARGET_10X_RX_TIMEOUT_HI_PRI);
+	config.rx_decap_mode = __cpu_to_le32(TARGET_10X_RX_DECAP_MODE);
+
+	config.scan_max_pending_reqs =
+		__cpu_to_le32(TARGET_10X_SCAN_MAX_PENDING_REQS);
+
+	config.bmiss_offload_max_vdev =
+		__cpu_to_le32(TARGET_10X_BMISS_OFFLOAD_MAX_VDEV);
+
+	config.roam_offload_max_vdev =
+		__cpu_to_le32(TARGET_10X_ROAM_OFFLOAD_MAX_VDEV);
+
+	config.roam_offload_max_ap_profiles =
+		__cpu_to_le32(TARGET_10X_ROAM_OFFLOAD_MAX_AP_PROFILES);
+
+	config.num_mcast_groups = __cpu_to_le32(TARGET_10X_NUM_MCAST_GROUPS);
+	config.num_mcast_table_elems =
+		__cpu_to_le32(TARGET_10X_NUM_MCAST_TABLE_ELEMS);
+
+	config.mcast2ucast_mode = __cpu_to_le32(TARGET_10X_MCAST2UCAST_MODE);
+	config.tx_dbg_log_size = __cpu_to_le32(TARGET_10X_TX_DBG_LOG_SIZE);
+	config.num_wds_entries = __cpu_to_le32(TARGET_10X_NUM_WDS_ENTRIES);
+	config.dma_burst_size = __cpu_to_le32(TARGET_10X_DMA_BURST_SIZE);
+	config.mac_aggr_delim = __cpu_to_le32(TARGET_10X_MAC_AGGR_DELIM);
+
+	val = TARGET_10X_RX_SKIP_DEFRAG_TIMEOUT_DUP_DETECTION_CHECK;
+	config.rx_skip_defrag_timeout_dup_detection_check = __cpu_to_le32(val);
+
+	config.vow_config = __cpu_to_le32(TARGET_10X_VOW_CONFIG);
+
+	config.num_msdu_desc = __cpu_to_le32(TARGET_10X_NUM_MSDU_DESC);
+	config.max_frag_entries = __cpu_to_le32(TARGET_10X_MAX_FRAG_ENTRIES);
+
+	len = sizeof(*cmd) +
+	      (sizeof(struct host_memory_chunk) * ar->wmi.num_mem_chunks);
+
+	buf = ath10k_wmi_alloc_skb(len);
+	if (!buf)
+		return -ENOMEM;
+
+	cmd = (struct wmi_init_cmd_10x *)buf->data;
+
+	if (ar->wmi.num_mem_chunks == 0) {
+		cmd->num_host_mem_chunks = 0;
+		goto out;
+	}
+
+	ath10k_dbg(ATH10K_DBG_WMI, "wmi sending %d memory chunks info.\n",
+		   __cpu_to_le32(ar->wmi.num_mem_chunks));
+
+	cmd->num_host_mem_chunks = __cpu_to_le32(ar->wmi.num_mem_chunks);
+
+	for (i = 0; i < ar->wmi.num_mem_chunks; i++) {
+		cmd->host_mem_chunks[i].ptr =
+			__cpu_to_le32(ar->wmi.mem_chunks[i].paddr);
+		cmd->host_mem_chunks[i].size =
+			__cpu_to_le32(ar->wmi.mem_chunks[i].len);
+		cmd->host_mem_chunks[i].req_id =
+			__cpu_to_le32(ar->wmi.mem_chunks[i].req_id);
+
+		ath10k_dbg(ATH10K_DBG_WMI,
+			   "wmi chunk %d len %d requested, addr 0x%x\n",
+			   i,
+			   cmd->host_mem_chunks[i].size,
+			   cmd->host_mem_chunks[i].ptr);
+	}
+out:
+	memcpy(&cmd->resource_config, &config, sizeof(config));
+
+	ath10k_dbg(ATH10K_DBG_WMI, "wmi init 10x\n");
+	return ath10k_wmi_cmd_send(ar, buf, ar->wmi.cmd->init_cmdid);
+}
+
+int ath10k_wmi_cmd_init(struct ath10k *ar)
+{
+	int ret;
+
+	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features))
+		ret = ath10k_wmi_10x_cmd_init(ar);
+	else
+		ret = ath10k_wmi_main_cmd_init(ar);
+
+	return ret;
+}
+
+static int ath10k_wmi_start_scan_calc_len(struct ath10k *ar,
+					  const struct wmi_start_scan_arg *arg)
 {
 	int len;
 
-	len = sizeof(struct wmi_start_scan_cmd);
+	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features))
+		len = sizeof(struct wmi_start_scan_cmd_10x);
+	else
+		len = sizeof(struct wmi_start_scan_cmd);
 
 	if (arg->ie_len) {
 		if (!arg->ie)
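Both init variants build a variable-length command: a fixed header followed by ar->wmi.num_mem_chunks trailing chunk descriptors, with len computed before allocation. The same layout expressed with a C99 flexible array member, using illustrative types:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct chunk { uint32_t ptr, size, req_id; };

/* fixed header plus a flexible array of chunk descriptors */
struct init_cmd {
	uint32_t num_host_mem_chunks;
	struct chunk host_mem_chunks[];
};

int main(void)
{
	unsigned int n = 2;
	size_t len = sizeof(struct init_cmd) + n * sizeof(struct chunk);
	struct init_cmd *cmd = calloc(1, len);
	unsigned int i;

	if (!cmd)
		return 1;

	cmd->num_host_mem_chunks = n;
	for (i = 0; i < n; i++) {
		cmd->host_mem_chunks[i].ptr = 0x1000 * (i + 1);
		cmd->host_mem_chunks[i].size = 4096;
		cmd->host_mem_chunks[i].req_id = i;
	}

	printf("command buffer is %zu bytes\n", len);
	free(cmd);
	return 0;
}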
@@ -1446,7 +2407,7 @@ int ath10k_wmi_start_scan(struct ath10k *ar,
 	int len = 0;
 	int i;
 
-	len = ath10k_wmi_start_scan_calc_len(arg);
+	len = ath10k_wmi_start_scan_calc_len(ar, arg);
 	if (len < 0)
 		return len; /* len contains error code here */
 
@@ -1478,7 +2439,14 @@ int ath10k_wmi_start_scan(struct ath10k *ar,
 	cmd->scan_ctrl_flags = __cpu_to_le32(arg->scan_ctrl_flags);
 
 	/* TLV list starts after fields included in the struct */
-	off = sizeof(*cmd);
+	/* There's just one field that differs between the two start_scan
+	 * structures - burst_duration, which we are not using anyway; no
+	 * point in splitting the function here, just shift the buffer to
+	 * fit the given FW */
+	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features))
+		off = sizeof(struct wmi_start_scan_cmd_10x);
+	else
+		off = sizeof(struct wmi_start_scan_cmd);
 
 	if (arg->n_channels) {
 		channels = (void *)skb->data + off;
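Since the two start_scan layouts differ only in where the TLV tail begins, the code just picks the right sizeof as the starting offset. A sketch of that choice, with invented struct sizes:

#include <stdio.h>

/* hypothetical sizes standing in for the two start_scan structs */
struct scan_cmd     { char fields[96]; char burst_duration[4]; };
struct scan_cmd_10x { char fields[96]; };

int main(void)
{
	int is_10x = 1; /* would come from a firmware feature flag */
	size_t off = is_10x ? sizeof(struct scan_cmd_10x)
			    : sizeof(struct scan_cmd);

	/* optional TLVs (channel list, SSIDs, IEs) start at this offset */
	printf("TLVs begin at byte %zu\n", off);
	return 0;
}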
@@ -1540,7 +2508,7 @@ int ath10k_wmi_start_scan(struct ath10k *ar,
 	}
 
 	ath10k_dbg(ATH10K_DBG_WMI, "wmi start scan\n");
-	return ath10k_wmi_cmd_send(ar, skb, WMI_START_SCAN_CMDID);
+	return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->start_scan_cmdid);
 }
 
 void ath10k_wmi_start_scan_init(struct ath10k *ar,
@@ -1556,7 +2524,7 @@ void ath10k_wmi_start_scan_init(struct ath10k *ar,
 	arg->repeat_probe_time = 0;
 	arg->probe_spacing_time = 0;
 	arg->idle_time = 0;
-	arg->max_scan_time = 5000;
+	arg->max_scan_time = 20000;
 	arg->probe_delay = 5;
 	arg->notify_scan_events = WMI_SCAN_EVENT_STARTED
 		| WMI_SCAN_EVENT_COMPLETED
@@ -1600,7 +2568,7 @@ int ath10k_wmi_stop_scan(struct ath10k *ar, const struct wmi_stop_scan_arg *arg)
 	ath10k_dbg(ATH10K_DBG_WMI,
 		   "wmi stop scan reqid %d req_type %d vdev/scan_id %d\n",
 		   arg->req_id, arg->req_type, arg->u.scan_id);
-	return ath10k_wmi_cmd_send(ar, skb, WMI_STOP_SCAN_CMDID);
+	return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->stop_scan_cmdid);
 }
 
 int ath10k_wmi_vdev_create(struct ath10k *ar, u32 vdev_id,
@@ -1625,7 +2593,7 @@ int ath10k_wmi_vdev_create(struct ath10k *ar, u32 vdev_id,
1625 "WMI vdev create: id %d type %d subtype %d macaddr %pM\n", 2593 "WMI vdev create: id %d type %d subtype %d macaddr %pM\n",
1626 vdev_id, type, subtype, macaddr); 2594 vdev_id, type, subtype, macaddr);
1627 2595
1628 return ath10k_wmi_cmd_send(ar, skb, WMI_VDEV_CREATE_CMDID); 2596 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->vdev_create_cmdid);
1629} 2597}
1630 2598
1631int ath10k_wmi_vdev_delete(struct ath10k *ar, u32 vdev_id) 2599int ath10k_wmi_vdev_delete(struct ath10k *ar, u32 vdev_id)
@@ -1643,20 +2611,20 @@ int ath10k_wmi_vdev_delete(struct ath10k *ar, u32 vdev_id)
 	ath10k_dbg(ATH10K_DBG_WMI,
 		   "WMI vdev delete id %d\n", vdev_id);
 
-	return ath10k_wmi_cmd_send(ar, skb, WMI_VDEV_DELETE_CMDID);
+	return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->vdev_delete_cmdid);
 }
 
 static int ath10k_wmi_vdev_start_restart(struct ath10k *ar,
 				const struct wmi_vdev_start_request_arg *arg,
-				enum wmi_cmd_id cmd_id)
+				u32 cmd_id)
 {
 	struct wmi_vdev_start_request_cmd *cmd;
 	struct sk_buff *skb;
 	const char *cmdname;
 	u32 flags = 0;
 
-	if (cmd_id != WMI_VDEV_START_REQUEST_CMDID &&
-	    cmd_id != WMI_VDEV_RESTART_REQUEST_CMDID)
+	if (cmd_id != ar->wmi.cmd->vdev_start_request_cmdid &&
+	    cmd_id != ar->wmi.cmd->vdev_restart_request_cmdid)
 		return -EINVAL;
 	if (WARN_ON(arg->ssid && arg->ssid_len == 0))
 		return -EINVAL;
@@ -1665,9 +2633,9 @@ static int ath10k_wmi_vdev_start_restart(struct ath10k *ar,
1665 if (WARN_ON(arg->ssid_len > sizeof(cmd->ssid.ssid))) 2633 if (WARN_ON(arg->ssid_len > sizeof(cmd->ssid.ssid)))
1666 return -EINVAL; 2634 return -EINVAL;
1667 2635
1668 if (cmd_id == WMI_VDEV_START_REQUEST_CMDID) 2636 if (cmd_id == ar->wmi.cmd->vdev_start_request_cmdid)
1669 cmdname = "start"; 2637 cmdname = "start";
1670 else if (cmd_id == WMI_VDEV_RESTART_REQUEST_CMDID) 2638 else if (cmd_id == ar->wmi.cmd->vdev_restart_request_cmdid)
1671 cmdname = "restart"; 2639 cmdname = "restart";
1672 else 2640 else
1673 return -EINVAL; /* should not happen, we already check cmd_id */ 2641 return -EINVAL; /* should not happen, we already check cmd_id */
@@ -1718,15 +2686,17 @@ static int ath10k_wmi_vdev_start_restart(struct ath10k *ar,
1718int ath10k_wmi_vdev_start(struct ath10k *ar, 2686int ath10k_wmi_vdev_start(struct ath10k *ar,
1719 const struct wmi_vdev_start_request_arg *arg) 2687 const struct wmi_vdev_start_request_arg *arg)
1720{ 2688{
1721 return ath10k_wmi_vdev_start_restart(ar, arg, 2689 u32 cmd_id = ar->wmi.cmd->vdev_start_request_cmdid;
1722 WMI_VDEV_START_REQUEST_CMDID); 2690
2691 return ath10k_wmi_vdev_start_restart(ar, arg, cmd_id);
1723} 2692}
1724 2693
1725int ath10k_wmi_vdev_restart(struct ath10k *ar, 2694int ath10k_wmi_vdev_restart(struct ath10k *ar,
1726 const struct wmi_vdev_start_request_arg *arg) 2695 const struct wmi_vdev_start_request_arg *arg)
1727{ 2696{
1728 return ath10k_wmi_vdev_start_restart(ar, arg, 2697 u32 cmd_id = ar->wmi.cmd->vdev_restart_request_cmdid;
1729 WMI_VDEV_RESTART_REQUEST_CMDID); 2698
2699 return ath10k_wmi_vdev_start_restart(ar, arg, cmd_id);
1730} 2700}
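
Both wrappers above illustrate the pattern this patch applies to every WMI sender: the command id is resolved through ar->wmi.cmd at call time rather than hard-coded. A minimal sketch of a new sender written in the same style, assuming the attach code has installed the right map; ath10k_wmi_example_echo and its raw __le32 payload are hypothetical, for illustration only:

	int ath10k_wmi_example_echo(struct ath10k *ar, u32 value)
	{
		struct sk_buff *skb;

		skb = ath10k_wmi_alloc_skb(sizeof(__le32));
		if (!skb)
			return -ENOMEM;

		*(__le32 *)skb->data = __cpu_to_le32(value);

		/* echo_cmdid resolves to the main-FW or 10.X echo command id,
		 * depending on which wmi_cmd_map was installed at attach time */
		return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->echo_cmdid);
	}
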
1731 2701
1732int ath10k_wmi_vdev_stop(struct ath10k *ar, u32 vdev_id) 2702int ath10k_wmi_vdev_stop(struct ath10k *ar, u32 vdev_id)
@@ -1743,7 +2713,7 @@ int ath10k_wmi_vdev_stop(struct ath10k *ar, u32 vdev_id)
1743 2713
1744 ath10k_dbg(ATH10K_DBG_WMI, "wmi vdev stop id 0x%x\n", vdev_id); 2714 ath10k_dbg(ATH10K_DBG_WMI, "wmi vdev stop id 0x%x\n", vdev_id);
1745 2715
1746 return ath10k_wmi_cmd_send(ar, skb, WMI_VDEV_STOP_CMDID); 2716 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->vdev_stop_cmdid);
1747} 2717}
1748 2718
1749int ath10k_wmi_vdev_up(struct ath10k *ar, u32 vdev_id, u32 aid, const u8 *bssid) 2719int ath10k_wmi_vdev_up(struct ath10k *ar, u32 vdev_id, u32 aid, const u8 *bssid)
@@ -1758,13 +2728,13 @@ int ath10k_wmi_vdev_up(struct ath10k *ar, u32 vdev_id, u32 aid, const u8 *bssid)
1758 cmd = (struct wmi_vdev_up_cmd *)skb->data; 2728 cmd = (struct wmi_vdev_up_cmd *)skb->data;
1759 cmd->vdev_id = __cpu_to_le32(vdev_id); 2729 cmd->vdev_id = __cpu_to_le32(vdev_id);
1760 cmd->vdev_assoc_id = __cpu_to_le32(aid); 2730 cmd->vdev_assoc_id = __cpu_to_le32(aid);
1761 memcpy(&cmd->vdev_bssid.addr, bssid, 6); 2731 memcpy(&cmd->vdev_bssid.addr, bssid, ETH_ALEN);
1762 2732
1763 ath10k_dbg(ATH10K_DBG_WMI, 2733 ath10k_dbg(ATH10K_DBG_WMI,
1764 "wmi mgmt vdev up id 0x%x assoc id %d bssid %pM\n", 2734 "wmi mgmt vdev up id 0x%x assoc id %d bssid %pM\n",
1765 vdev_id, aid, bssid); 2735 vdev_id, aid, bssid);
1766 2736
1767 return ath10k_wmi_cmd_send(ar, skb, WMI_VDEV_UP_CMDID); 2737 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->vdev_up_cmdid);
1768} 2738}
1769 2739
1770int ath10k_wmi_vdev_down(struct ath10k *ar, u32 vdev_id) 2740int ath10k_wmi_vdev_down(struct ath10k *ar, u32 vdev_id)
@@ -1782,15 +2752,22 @@ int ath10k_wmi_vdev_down(struct ath10k *ar, u32 vdev_id)
1782 ath10k_dbg(ATH10K_DBG_WMI, 2752 ath10k_dbg(ATH10K_DBG_WMI,
1783 "wmi mgmt vdev down id 0x%x\n", vdev_id); 2753 "wmi mgmt vdev down id 0x%x\n", vdev_id);
1784 2754
1785 return ath10k_wmi_cmd_send(ar, skb, WMI_VDEV_DOWN_CMDID); 2755 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->vdev_down_cmdid);
1786} 2756}
1787 2757
1788int ath10k_wmi_vdev_set_param(struct ath10k *ar, u32 vdev_id, 2758int ath10k_wmi_vdev_set_param(struct ath10k *ar, u32 vdev_id,
1789 enum wmi_vdev_param param_id, u32 param_value) 2759 u32 param_id, u32 param_value)
1790{ 2760{
1791 struct wmi_vdev_set_param_cmd *cmd; 2761 struct wmi_vdev_set_param_cmd *cmd;
1792 struct sk_buff *skb; 2762 struct sk_buff *skb;
1793 2763
2764 if (param_id == WMI_VDEV_PARAM_UNSUPPORTED) {
2765 ath10k_dbg(ATH10K_DBG_WMI,
2766 "vdev param %d not supported by firmware\n",
2767 param_id);
2768 return -EOPNOTSUPP;
2769 }
2770
1794 skb = ath10k_wmi_alloc_skb(sizeof(*cmd)); 2771 skb = ath10k_wmi_alloc_skb(sizeof(*cmd));
1795 if (!skb) 2772 if (!skb)
1796 return -ENOMEM; 2773 return -ENOMEM;
@@ -1804,7 +2781,7 @@ int ath10k_wmi_vdev_set_param(struct ath10k *ar, u32 vdev_id,
1804 "wmi vdev id 0x%x set param %d value %d\n", 2781 "wmi vdev id 0x%x set param %d value %d\n",
1805 vdev_id, param_id, param_value); 2782 vdev_id, param_id, param_value);
1806 2783
1807 return ath10k_wmi_cmd_send(ar, skb, WMI_VDEV_SET_PARAM_CMDID); 2784 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->vdev_set_param_cmdid);
1808} 2785}
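
The new guard gives callers a soft failure for knobs a given firmware lacks. A hypothetical caller, assuming the patch also installs a per-firmware struct wmi_vdev_param_map reachable as ar->wmi.vdev_param (the field name is an assumption in this sketch): a parameter with no 10.X equivalent maps to WMI_VDEV_PARAM_UNSUPPORTED, and the request degrades to -EOPNOTSUPP instead of putting a bogus id on the wire:

	static int example_set_frag_threshold(struct ath10k *ar, u32 vdev_id,
					      u32 frag)
	{
		int ret;

		/* the vdev_param map location is assumed for this sketch */
		ret = ath10k_wmi_vdev_set_param(ar, vdev_id,
						ar->wmi.vdev_param->fragmentation_threshold,
						frag);
		if (ret == -EOPNOTSUPP)
			return 0; /* this firmware simply lacks the knob */

		return ret;
	}
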
1809 2786
1810int ath10k_wmi_vdev_install_key(struct ath10k *ar, 2787int ath10k_wmi_vdev_install_key(struct ath10k *ar,
@@ -1839,7 +2816,8 @@ int ath10k_wmi_vdev_install_key(struct ath10k *ar,
1839 ath10k_dbg(ATH10K_DBG_WMI, 2816 ath10k_dbg(ATH10K_DBG_WMI,
1840 "wmi vdev install key idx %d cipher %d len %d\n", 2817 "wmi vdev install key idx %d cipher %d len %d\n",
1841 arg->key_idx, arg->key_cipher, arg->key_len); 2818 arg->key_idx, arg->key_cipher, arg->key_len);
1842 return ath10k_wmi_cmd_send(ar, skb, WMI_VDEV_INSTALL_KEY_CMDID); 2819 return ath10k_wmi_cmd_send(ar, skb,
2820 ar->wmi.cmd->vdev_install_key_cmdid);
1843} 2821}
1844 2822
1845int ath10k_wmi_peer_create(struct ath10k *ar, u32 vdev_id, 2823int ath10k_wmi_peer_create(struct ath10k *ar, u32 vdev_id,
@@ -1859,7 +2837,7 @@ int ath10k_wmi_peer_create(struct ath10k *ar, u32 vdev_id,
1859 ath10k_dbg(ATH10K_DBG_WMI, 2837 ath10k_dbg(ATH10K_DBG_WMI,
1860 "wmi peer create vdev_id %d peer_addr %pM\n", 2838 "wmi peer create vdev_id %d peer_addr %pM\n",
1861 vdev_id, peer_addr); 2839 vdev_id, peer_addr);
1862 return ath10k_wmi_cmd_send(ar, skb, WMI_PEER_CREATE_CMDID); 2840 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->peer_create_cmdid);
1863} 2841}
1864 2842
1865int ath10k_wmi_peer_delete(struct ath10k *ar, u32 vdev_id, 2843int ath10k_wmi_peer_delete(struct ath10k *ar, u32 vdev_id,
@@ -1879,7 +2857,7 @@ int ath10k_wmi_peer_delete(struct ath10k *ar, u32 vdev_id,
1879 ath10k_dbg(ATH10K_DBG_WMI, 2857 ath10k_dbg(ATH10K_DBG_WMI,
1880 "wmi peer delete vdev_id %d peer_addr %pM\n", 2858 "wmi peer delete vdev_id %d peer_addr %pM\n",
1881 vdev_id, peer_addr); 2859 vdev_id, peer_addr);
1882 return ath10k_wmi_cmd_send(ar, skb, WMI_PEER_DELETE_CMDID); 2860 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->peer_delete_cmdid);
1883} 2861}
1884 2862
1885int ath10k_wmi_peer_flush(struct ath10k *ar, u32 vdev_id, 2863int ath10k_wmi_peer_flush(struct ath10k *ar, u32 vdev_id,
@@ -1900,7 +2878,7 @@ int ath10k_wmi_peer_flush(struct ath10k *ar, u32 vdev_id,
1900 ath10k_dbg(ATH10K_DBG_WMI, 2878 ath10k_dbg(ATH10K_DBG_WMI,
1901 "wmi peer flush vdev_id %d peer_addr %pM tids %08x\n", 2879 "wmi peer flush vdev_id %d peer_addr %pM tids %08x\n",
1902 vdev_id, peer_addr, tid_bitmap); 2880 vdev_id, peer_addr, tid_bitmap);
1903 return ath10k_wmi_cmd_send(ar, skb, WMI_PEER_FLUSH_TIDS_CMDID); 2881 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->peer_flush_tids_cmdid);
1904} 2882}
1905 2883
1906int ath10k_wmi_peer_set_param(struct ath10k *ar, u32 vdev_id, 2884int ath10k_wmi_peer_set_param(struct ath10k *ar, u32 vdev_id,
@@ -1918,13 +2896,13 @@ int ath10k_wmi_peer_set_param(struct ath10k *ar, u32 vdev_id,
1918 cmd->vdev_id = __cpu_to_le32(vdev_id); 2896 cmd->vdev_id = __cpu_to_le32(vdev_id);
1919 cmd->param_id = __cpu_to_le32(param_id); 2897 cmd->param_id = __cpu_to_le32(param_id);
1920 cmd->param_value = __cpu_to_le32(param_value); 2898 cmd->param_value = __cpu_to_le32(param_value);
1921 memcpy(&cmd->peer_macaddr.addr, peer_addr, 6); 2899 memcpy(&cmd->peer_macaddr.addr, peer_addr, ETH_ALEN);
1922 2900
1923 ath10k_dbg(ATH10K_DBG_WMI, 2901 ath10k_dbg(ATH10K_DBG_WMI,
1924 "wmi vdev %d peer 0x%pM set param %d value %d\n", 2902 "wmi vdev %d peer 0x%pM set param %d value %d\n",
1925 vdev_id, peer_addr, param_id, param_value); 2903 vdev_id, peer_addr, param_id, param_value);
1926 2904
1927 return ath10k_wmi_cmd_send(ar, skb, WMI_PEER_SET_PARAM_CMDID); 2905 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->peer_set_param_cmdid);
1928} 2906}
1929 2907
1930int ath10k_wmi_set_psmode(struct ath10k *ar, u32 vdev_id, 2908int ath10k_wmi_set_psmode(struct ath10k *ar, u32 vdev_id,
@@ -1945,7 +2923,8 @@ int ath10k_wmi_set_psmode(struct ath10k *ar, u32 vdev_id,
1945 "wmi set powersave id 0x%x mode %d\n", 2923 "wmi set powersave id 0x%x mode %d\n",
1946 vdev_id, psmode); 2924 vdev_id, psmode);
1947 2925
1948 return ath10k_wmi_cmd_send(ar, skb, WMI_STA_POWERSAVE_MODE_CMDID); 2926 return ath10k_wmi_cmd_send(ar, skb,
2927 ar->wmi.cmd->sta_powersave_mode_cmdid);
1949} 2928}
1950 2929
1951int ath10k_wmi_set_sta_ps_param(struct ath10k *ar, u32 vdev_id, 2930int ath10k_wmi_set_sta_ps_param(struct ath10k *ar, u32 vdev_id,
@@ -1967,7 +2946,8 @@ int ath10k_wmi_set_sta_ps_param(struct ath10k *ar, u32 vdev_id,
1967 ath10k_dbg(ATH10K_DBG_WMI, 2946 ath10k_dbg(ATH10K_DBG_WMI,
1968 "wmi sta ps param vdev_id 0x%x param %d value %d\n", 2947 "wmi sta ps param vdev_id 0x%x param %d value %d\n",
1969 vdev_id, param_id, value); 2948 vdev_id, param_id, value);
1970 return ath10k_wmi_cmd_send(ar, skb, WMI_STA_POWERSAVE_PARAM_CMDID); 2949 return ath10k_wmi_cmd_send(ar, skb,
2950 ar->wmi.cmd->sta_powersave_param_cmdid);
1971} 2951}
1972 2952
1973int ath10k_wmi_set_ap_ps_param(struct ath10k *ar, u32 vdev_id, const u8 *mac, 2953int ath10k_wmi_set_ap_ps_param(struct ath10k *ar, u32 vdev_id, const u8 *mac,
@@ -1993,7 +2973,8 @@ int ath10k_wmi_set_ap_ps_param(struct ath10k *ar, u32 vdev_id, const u8 *mac,
1993 "wmi ap ps param vdev_id 0x%X param %d value %d mac_addr %pM\n", 2973 "wmi ap ps param vdev_id 0x%X param %d value %d mac_addr %pM\n",
1994 vdev_id, param_id, value, mac); 2974 vdev_id, param_id, value, mac);
1995 2975
1996 return ath10k_wmi_cmd_send(ar, skb, WMI_AP_PS_PEER_PARAM_CMDID); 2976 return ath10k_wmi_cmd_send(ar, skb,
2977 ar->wmi.cmd->ap_ps_peer_param_cmdid);
1997} 2978}
1998 2979
1999int ath10k_wmi_scan_chan_list(struct ath10k *ar, 2980int ath10k_wmi_scan_chan_list(struct ath10k *ar,
@@ -2046,7 +3027,7 @@ int ath10k_wmi_scan_chan_list(struct ath10k *ar,
2046 ci->flags |= __cpu_to_le32(flags); 3027 ci->flags |= __cpu_to_le32(flags);
2047 } 3028 }
2048 3029
2049 return ath10k_wmi_cmd_send(ar, skb, WMI_SCAN_CHAN_LIST_CMDID); 3030 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->scan_chan_list_cmdid);
2050} 3031}
2051 3032
2052int ath10k_wmi_peer_assoc(struct ath10k *ar, 3033int ath10k_wmi_peer_assoc(struct ath10k *ar,
@@ -2105,10 +3086,11 @@ int ath10k_wmi_peer_assoc(struct ath10k *ar,
2105 ath10k_dbg(ATH10K_DBG_WMI, 3086 ath10k_dbg(ATH10K_DBG_WMI,
2106 "wmi peer assoc vdev %d addr %pM\n", 3087 "wmi peer assoc vdev %d addr %pM\n",
2107 arg->vdev_id, arg->addr); 3088 arg->vdev_id, arg->addr);
2108 return ath10k_wmi_cmd_send(ar, skb, WMI_PEER_ASSOC_CMDID); 3089 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->peer_assoc_cmdid);
2109} 3090}
2110 3091
2111int ath10k_wmi_beacon_send(struct ath10k *ar, const struct wmi_bcn_tx_arg *arg) 3092int ath10k_wmi_beacon_send_nowait(struct ath10k *ar,
3093 const struct wmi_bcn_tx_arg *arg)
2112{ 3094{
2113 struct wmi_bcn_tx_cmd *cmd; 3095 struct wmi_bcn_tx_cmd *cmd;
2114 struct sk_buff *skb; 3096 struct sk_buff *skb;
@@ -2124,7 +3106,7 @@ int ath10k_wmi_beacon_send(struct ath10k *ar, const struct wmi_bcn_tx_arg *arg)
2124 cmd->hdr.bcn_len = __cpu_to_le32(arg->bcn_len); 3106 cmd->hdr.bcn_len = __cpu_to_le32(arg->bcn_len);
2125 memcpy(cmd->bcn, arg->bcn, arg->bcn_len); 3107 memcpy(cmd->bcn, arg->bcn, arg->bcn_len);
2126 3108
2127 return ath10k_wmi_cmd_send(ar, skb, WMI_BCN_TX_CMDID); 3109 return ath10k_wmi_cmd_send_nowait(ar, skb, ar->wmi.cmd->bcn_tx_cmdid);
2128} 3110}
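
The rename to ath10k_wmi_beacon_send_nowait matches the switch to ath10k_wmi_cmd_send_nowait: beacons are submitted from the SWBA event path, which presumably must not sleep waiting for WMI credits. A hypothetical call site:

	/* arg is a filled-in struct wmi_bcn_tx_arg (assumed local) */
	ret = ath10k_wmi_beacon_send_nowait(ar, &arg);
	if (ret)
		ath10k_warn("failed to send beacon: %d\n", ret);
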
2129 3111
2130static void ath10k_wmi_pdev_set_wmm_param(struct wmi_wmm_params *params, 3112static void ath10k_wmi_pdev_set_wmm_param(struct wmi_wmm_params *params,
@@ -2155,7 +3137,8 @@ int ath10k_wmi_pdev_set_wmm_params(struct ath10k *ar,
2155 ath10k_wmi_pdev_set_wmm_param(&cmd->ac_vo, &arg->ac_vo); 3137 ath10k_wmi_pdev_set_wmm_param(&cmd->ac_vo, &arg->ac_vo);
2156 3138
2157 ath10k_dbg(ATH10K_DBG_WMI, "wmi pdev set wmm params\n"); 3139 ath10k_dbg(ATH10K_DBG_WMI, "wmi pdev set wmm params\n");
2158 return ath10k_wmi_cmd_send(ar, skb, WMI_PDEV_SET_WMM_PARAMS_CMDID); 3140 return ath10k_wmi_cmd_send(ar, skb,
3141 ar->wmi.cmd->pdev_set_wmm_params_cmdid);
2159} 3142}
2160 3143
2161int ath10k_wmi_request_stats(struct ath10k *ar, enum wmi_stats_id stats_id) 3144int ath10k_wmi_request_stats(struct ath10k *ar, enum wmi_stats_id stats_id)
@@ -2171,7 +3154,7 @@ int ath10k_wmi_request_stats(struct ath10k *ar, enum wmi_stats_id stats_id)
2171 cmd->stats_id = __cpu_to_le32(stats_id); 3154 cmd->stats_id = __cpu_to_le32(stats_id);
2172 3155
2173 ath10k_dbg(ATH10K_DBG_WMI, "wmi request stats %d\n", (int)stats_id); 3156 ath10k_dbg(ATH10K_DBG_WMI, "wmi request stats %d\n", (int)stats_id);
2174 return ath10k_wmi_cmd_send(ar, skb, WMI_REQUEST_STATS_CMDID); 3157 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->request_stats_cmdid);
2175} 3158}
2176 3159
2177int ath10k_wmi_force_fw_hang(struct ath10k *ar, 3160int ath10k_wmi_force_fw_hang(struct ath10k *ar,
@@ -2190,5 +3173,5 @@ int ath10k_wmi_force_fw_hang(struct ath10k *ar,
2190 3173
2191 ath10k_dbg(ATH10K_DBG_WMI, "wmi force fw hang %d delay %d\n", 3174 ath10k_dbg(ATH10K_DBG_WMI, "wmi force fw hang %d delay %d\n",
2192 type, delay_ms); 3175 type, delay_ms);
2193 return ath10k_wmi_cmd_send(ar, skb, WMI_FORCE_FW_HANG_CMDID); 3176 return ath10k_wmi_cmd_send(ar, skb, ar->wmi.cmd->force_fw_hang_cmdid);
2194} 3177}
diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
index 2c5a4f8daf2e..78c991aec7f9 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.h
+++ b/drivers/net/wireless/ath/ath10k/wmi.h
@@ -208,6 +208,118 @@ struct wmi_mac_addr {
208 (c_macaddr)[5] = (((pwmi_mac_addr)->word1) >> 8) & 0xff; \ 208 (c_macaddr)[5] = (((pwmi_mac_addr)->word1) >> 8) & 0xff; \
209 } while (0) 209 } while (0)
210 210
211struct wmi_cmd_map {
212 u32 init_cmdid;
213 u32 start_scan_cmdid;
214 u32 stop_scan_cmdid;
215 u32 scan_chan_list_cmdid;
216 u32 scan_sch_prio_tbl_cmdid;
217 u32 pdev_set_regdomain_cmdid;
218 u32 pdev_set_channel_cmdid;
219 u32 pdev_set_param_cmdid;
220 u32 pdev_pktlog_enable_cmdid;
221 u32 pdev_pktlog_disable_cmdid;
222 u32 pdev_set_wmm_params_cmdid;
223 u32 pdev_set_ht_cap_ie_cmdid;
224 u32 pdev_set_vht_cap_ie_cmdid;
225 u32 pdev_set_dscp_tid_map_cmdid;
226 u32 pdev_set_quiet_mode_cmdid;
227 u32 pdev_green_ap_ps_enable_cmdid;
228 u32 pdev_get_tpc_config_cmdid;
229 u32 pdev_set_base_macaddr_cmdid;
230 u32 vdev_create_cmdid;
231 u32 vdev_delete_cmdid;
232 u32 vdev_start_request_cmdid;
233 u32 vdev_restart_request_cmdid;
234 u32 vdev_up_cmdid;
235 u32 vdev_stop_cmdid;
236 u32 vdev_down_cmdid;
237 u32 vdev_set_param_cmdid;
238 u32 vdev_install_key_cmdid;
239 u32 peer_create_cmdid;
240 u32 peer_delete_cmdid;
241 u32 peer_flush_tids_cmdid;
242 u32 peer_set_param_cmdid;
243 u32 peer_assoc_cmdid;
244 u32 peer_add_wds_entry_cmdid;
245 u32 peer_remove_wds_entry_cmdid;
246 u32 peer_mcast_group_cmdid;
247 u32 bcn_tx_cmdid;
248 u32 pdev_send_bcn_cmdid;
249 u32 bcn_tmpl_cmdid;
250 u32 bcn_filter_rx_cmdid;
251 u32 prb_req_filter_rx_cmdid;
252 u32 mgmt_tx_cmdid;
253 u32 prb_tmpl_cmdid;
254 u32 addba_clear_resp_cmdid;
255 u32 addba_send_cmdid;
256 u32 addba_status_cmdid;
257 u32 delba_send_cmdid;
258 u32 addba_set_resp_cmdid;
259 u32 send_singleamsdu_cmdid;
260 u32 sta_powersave_mode_cmdid;
261 u32 sta_powersave_param_cmdid;
262 u32 sta_mimo_ps_mode_cmdid;
263 u32 pdev_dfs_enable_cmdid;
264 u32 pdev_dfs_disable_cmdid;
265 u32 roam_scan_mode;
266 u32 roam_scan_rssi_threshold;
267 u32 roam_scan_period;
268 u32 roam_scan_rssi_change_threshold;
269 u32 roam_ap_profile;
270 u32 ofl_scan_add_ap_profile;
271 u32 ofl_scan_remove_ap_profile;
272 u32 ofl_scan_period;
273 u32 p2p_dev_set_device_info;
274 u32 p2p_dev_set_discoverability;
275 u32 p2p_go_set_beacon_ie;
276 u32 p2p_go_set_probe_resp_ie;
277 u32 p2p_set_vendor_ie_data_cmdid;
278 u32 ap_ps_peer_param_cmdid;
279 u32 ap_ps_peer_uapsd_coex_cmdid;
280 u32 peer_rate_retry_sched_cmdid;
281 u32 wlan_profile_trigger_cmdid;
282 u32 wlan_profile_set_hist_intvl_cmdid;
283 u32 wlan_profile_get_profile_data_cmdid;
284 u32 wlan_profile_enable_profile_id_cmdid;
285 u32 wlan_profile_list_profile_id_cmdid;
286 u32 pdev_suspend_cmdid;
287 u32 pdev_resume_cmdid;
288 u32 add_bcn_filter_cmdid;
289 u32 rmv_bcn_filter_cmdid;
290 u32 wow_add_wake_pattern_cmdid;
291 u32 wow_del_wake_pattern_cmdid;
292 u32 wow_enable_disable_wake_event_cmdid;
293 u32 wow_enable_cmdid;
294 u32 wow_hostwakeup_from_sleep_cmdid;
295 u32 rtt_measreq_cmdid;
296 u32 rtt_tsf_cmdid;
297 u32 vdev_spectral_scan_configure_cmdid;
298 u32 vdev_spectral_scan_enable_cmdid;
299 u32 request_stats_cmdid;
300 u32 set_arp_ns_offload_cmdid;
301 u32 network_list_offload_config_cmdid;
302 u32 gtk_offload_cmdid;
303 u32 csa_offload_enable_cmdid;
304 u32 csa_offload_chanswitch_cmdid;
305 u32 chatter_set_mode_cmdid;
306 u32 peer_tid_addba_cmdid;
307 u32 peer_tid_delba_cmdid;
308 u32 sta_dtim_ps_method_cmdid;
309 u32 sta_uapsd_auto_trig_cmdid;
310 u32 sta_keepalive_cmd;
311 u32 echo_cmdid;
312 u32 pdev_utf_cmdid;
313 u32 dbglog_cfg_cmdid;
314 u32 pdev_qvit_cmdid;
315 u32 pdev_ftm_intg_cmdid;
316 u32 vdev_set_keepalive_cmdid;
317 u32 vdev_get_keepalive_cmdid;
318 u32 force_fw_hang_cmdid;
319 u32 gpio_config_cmdid;
320 u32 gpio_output_cmdid;
321};
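
One wmi_cmd_map instance is expected per firmware flavour, with gaps marked by the WMI_CMD_UNSUPPORTED sentinel defined below. An abbreviated, illustrative initializer (variable name hypothetical; the real tables live in wmi.c):

	static struct wmi_cmd_map wmi_10x_cmd_map_example = {
		.init_cmdid		= WMI_10X_INIT_CMDID,
		.start_scan_cmdid	= WMI_10X_START_SCAN_CMDID,
		.stop_scan_cmdid	= WMI_10X_STOP_SCAN_CMDID,
		.scan_chan_list_cmdid	= WMI_10X_SCAN_CHAN_LIST_CMDID,
		/* ... */
		/* no 10.X counterpart exists for this command */
		.force_fw_hang_cmdid	= WMI_CMD_UNSUPPORTED,
	};
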
322
211/* 323/*
212 * wmi command groups. 324 * wmi command groups.
213 */ 325 */
@@ -247,7 +359,9 @@ enum wmi_cmd_group {
247#define WMI_CMD_GRP(grp_id) (((grp_id) << 12) | 0x1) 359#define WMI_CMD_GRP(grp_id) (((grp_id) << 12) | 0x1)
248#define WMI_EVT_GRP_START_ID(grp_id) (((grp_id) << 12) | 0x1) 360#define WMI_EVT_GRP_START_ID(grp_id) (((grp_id) << 12) | 0x1)
249 361
250/* Command IDs and commande events. */ 362#define WMI_CMD_UNSUPPORTED 0
363
364/* Command IDs and command events for MAIN FW. */
251enum wmi_cmd_id { 365enum wmi_cmd_id {
252 WMI_INIT_CMDID = 0x1, 366 WMI_INIT_CMDID = 0x1,
253 367
@@ -488,6 +602,217 @@ enum wmi_event_id {
488 WMI_GPIO_INPUT_EVENTID = WMI_EVT_GRP_START_ID(WMI_GRP_GPIO), 602 WMI_GPIO_INPUT_EVENTID = WMI_EVT_GRP_START_ID(WMI_GRP_GPIO),
489}; 603};
490 604
605/* Command IDs and command events for 10.X firmware */
606enum wmi_10x_cmd_id {
607 WMI_10X_START_CMDID = 0x9000,
608 WMI_10X_END_CMDID = 0x9FFF,
609
610 /* initialize the wlan sub system */
611 WMI_10X_INIT_CMDID,
612
613 /* Scan specific commands */
614
615 WMI_10X_START_SCAN_CMDID = WMI_10X_START_CMDID,
616 WMI_10X_STOP_SCAN_CMDID,
617 WMI_10X_SCAN_CHAN_LIST_CMDID,
618 WMI_10X_ECHO_CMDID,
619
620 /* PDEV(physical device) specific commands */
621 WMI_10X_PDEV_SET_REGDOMAIN_CMDID,
622 WMI_10X_PDEV_SET_CHANNEL_CMDID,
623 WMI_10X_PDEV_SET_PARAM_CMDID,
624 WMI_10X_PDEV_PKTLOG_ENABLE_CMDID,
625 WMI_10X_PDEV_PKTLOG_DISABLE_CMDID,
626 WMI_10X_PDEV_SET_WMM_PARAMS_CMDID,
627 WMI_10X_PDEV_SET_HT_CAP_IE_CMDID,
628 WMI_10X_PDEV_SET_VHT_CAP_IE_CMDID,
629 WMI_10X_PDEV_SET_BASE_MACADDR_CMDID,
630 WMI_10X_PDEV_SET_DSCP_TID_MAP_CMDID,
631 WMI_10X_PDEV_SET_QUIET_MODE_CMDID,
632 WMI_10X_PDEV_GREEN_AP_PS_ENABLE_CMDID,
633 WMI_10X_PDEV_GET_TPC_CONFIG_CMDID,
634
635 /* VDEV(virtual device) specific commands */
636 WMI_10X_VDEV_CREATE_CMDID,
637 WMI_10X_VDEV_DELETE_CMDID,
638 WMI_10X_VDEV_START_REQUEST_CMDID,
639 WMI_10X_VDEV_RESTART_REQUEST_CMDID,
640 WMI_10X_VDEV_UP_CMDID,
641 WMI_10X_VDEV_STOP_CMDID,
642 WMI_10X_VDEV_DOWN_CMDID,
643 WMI_10X_VDEV_STANDBY_RESPONSE_CMDID,
644 WMI_10X_VDEV_RESUME_RESPONSE_CMDID,
645 WMI_10X_VDEV_SET_PARAM_CMDID,
646 WMI_10X_VDEV_INSTALL_KEY_CMDID,
647
648 /* peer specific commands */
649 WMI_10X_PEER_CREATE_CMDID,
650 WMI_10X_PEER_DELETE_CMDID,
651 WMI_10X_PEER_FLUSH_TIDS_CMDID,
652 WMI_10X_PEER_SET_PARAM_CMDID,
653 WMI_10X_PEER_ASSOC_CMDID,
654 WMI_10X_PEER_ADD_WDS_ENTRY_CMDID,
655 WMI_10X_PEER_REMOVE_WDS_ENTRY_CMDID,
656 WMI_10X_PEER_MCAST_GROUP_CMDID,
657
658 /* beacon/management specific commands */
659
660 WMI_10X_BCN_TX_CMDID,
661 WMI_10X_BCN_PRB_TMPL_CMDID,
662 WMI_10X_BCN_FILTER_RX_CMDID,
663 WMI_10X_PRB_REQ_FILTER_RX_CMDID,
664 WMI_10X_MGMT_TX_CMDID,
665
 666 /* commands to control ba negotiation directly from host. */
667 WMI_10X_ADDBA_CLEAR_RESP_CMDID,
668 WMI_10X_ADDBA_SEND_CMDID,
669 WMI_10X_ADDBA_STATUS_CMDID,
670 WMI_10X_DELBA_SEND_CMDID,
671 WMI_10X_ADDBA_SET_RESP_CMDID,
672 WMI_10X_SEND_SINGLEAMSDU_CMDID,
673
674 /* Station power save specific config */
675 WMI_10X_STA_POWERSAVE_MODE_CMDID,
676 WMI_10X_STA_POWERSAVE_PARAM_CMDID,
677 WMI_10X_STA_MIMO_PS_MODE_CMDID,
678
679 /* set debug log config */
680 WMI_10X_DBGLOG_CFG_CMDID,
681
682 /* DFS-specific commands */
683 WMI_10X_PDEV_DFS_ENABLE_CMDID,
684 WMI_10X_PDEV_DFS_DISABLE_CMDID,
685
686 /* QVIT specific command id */
687 WMI_10X_PDEV_QVIT_CMDID,
688
689 /* Offload Scan and Roaming related commands */
690 WMI_10X_ROAM_SCAN_MODE,
691 WMI_10X_ROAM_SCAN_RSSI_THRESHOLD,
692 WMI_10X_ROAM_SCAN_PERIOD,
693 WMI_10X_ROAM_SCAN_RSSI_CHANGE_THRESHOLD,
694 WMI_10X_ROAM_AP_PROFILE,
695 WMI_10X_OFL_SCAN_ADD_AP_PROFILE,
696 WMI_10X_OFL_SCAN_REMOVE_AP_PROFILE,
697 WMI_10X_OFL_SCAN_PERIOD,
698
699 /* P2P specific commands */
700 WMI_10X_P2P_DEV_SET_DEVICE_INFO,
701 WMI_10X_P2P_DEV_SET_DISCOVERABILITY,
702 WMI_10X_P2P_GO_SET_BEACON_IE,
703 WMI_10X_P2P_GO_SET_PROBE_RESP_IE,
704
705 /* AP power save specific config */
706 WMI_10X_AP_PS_PEER_PARAM_CMDID,
707 WMI_10X_AP_PS_PEER_UAPSD_COEX_CMDID,
708
709 /* Rate-control specific commands */
710 WMI_10X_PEER_RATE_RETRY_SCHED_CMDID,
711
712 /* WLAN Profiling commands. */
713 WMI_10X_WLAN_PROFILE_TRIGGER_CMDID,
714 WMI_10X_WLAN_PROFILE_SET_HIST_INTVL_CMDID,
715 WMI_10X_WLAN_PROFILE_GET_PROFILE_DATA_CMDID,
716 WMI_10X_WLAN_PROFILE_ENABLE_PROFILE_ID_CMDID,
717 WMI_10X_WLAN_PROFILE_LIST_PROFILE_ID_CMDID,
718
719 /* Suspend resume command Ids */
720 WMI_10X_PDEV_SUSPEND_CMDID,
721 WMI_10X_PDEV_RESUME_CMDID,
722
723 /* Beacon filter commands */
724 WMI_10X_ADD_BCN_FILTER_CMDID,
725 WMI_10X_RMV_BCN_FILTER_CMDID,
726
 727 /* WOW Specific WMI commands */
728 WMI_10X_WOW_ADD_WAKE_PATTERN_CMDID,
729 WMI_10X_WOW_DEL_WAKE_PATTERN_CMDID,
730 WMI_10X_WOW_ENABLE_DISABLE_WAKE_EVENT_CMDID,
731 WMI_10X_WOW_ENABLE_CMDID,
732 WMI_10X_WOW_HOSTWAKEUP_FROM_SLEEP_CMDID,
733
734 /* RTT measurement related cmd */
735 WMI_10X_RTT_MEASREQ_CMDID,
736 WMI_10X_RTT_TSF_CMDID,
737
738 /* transmit beacon by value */
739 WMI_10X_PDEV_SEND_BCN_CMDID,
740
741 /* F/W stats */
742 WMI_10X_VDEV_SPECTRAL_SCAN_CONFIGURE_CMDID,
743 WMI_10X_VDEV_SPECTRAL_SCAN_ENABLE_CMDID,
744 WMI_10X_REQUEST_STATS_CMDID,
745
746 /* GPIO Configuration */
747 WMI_10X_GPIO_CONFIG_CMDID,
748 WMI_10X_GPIO_OUTPUT_CMDID,
749
750 WMI_10X_PDEV_UTF_CMDID = WMI_10X_END_CMDID - 1,
751};
752
753enum wmi_10x_event_id {
754 WMI_10X_SERVICE_READY_EVENTID = 0x8000,
755 WMI_10X_READY_EVENTID,
756 WMI_10X_START_EVENTID = 0x9000,
757 WMI_10X_END_EVENTID = 0x9FFF,
758
759 /* Scan specific events */
760 WMI_10X_SCAN_EVENTID = WMI_10X_START_EVENTID,
761 WMI_10X_ECHO_EVENTID,
762 WMI_10X_DEBUG_MESG_EVENTID,
763 WMI_10X_UPDATE_STATS_EVENTID,
764
765 /* Instantaneous RSSI event */
766 WMI_10X_INST_RSSI_STATS_EVENTID,
767
768 /* VDEV specific events */
769 WMI_10X_VDEV_START_RESP_EVENTID,
770 WMI_10X_VDEV_STANDBY_REQ_EVENTID,
771 WMI_10X_VDEV_RESUME_REQ_EVENTID,
772 WMI_10X_VDEV_STOPPED_EVENTID,
773
774 /* peer specific events */
775 WMI_10X_PEER_STA_KICKOUT_EVENTID,
776
777 /* beacon/mgmt specific events */
778 WMI_10X_HOST_SWBA_EVENTID,
779 WMI_10X_TBTTOFFSET_UPDATE_EVENTID,
780 WMI_10X_MGMT_RX_EVENTID,
781
782 /* Channel stats event */
783 WMI_10X_CHAN_INFO_EVENTID,
784
785 /* PHY Error specific WMI event */
786 WMI_10X_PHYERR_EVENTID,
787
788 /* Roam event to trigger roaming on host */
789 WMI_10X_ROAM_EVENTID,
790
791 /* matching AP found from list of profiles */
792 WMI_10X_PROFILE_MATCH,
793
794 /* debug print message used for tracing FW code while debugging */
795 WMI_10X_DEBUG_PRINT_EVENTID,
 796 /* VI specific event */
797 WMI_10X_PDEV_QVIT_EVENTID,
798 /* FW code profile data in response to profile request */
799 WMI_10X_WLAN_PROFILE_DATA_EVENTID,
800
 801 /* RTT related event ID */
802 WMI_10X_RTT_MEASUREMENT_REPORT_EVENTID,
803 WMI_10X_TSF_MEASUREMENT_REPORT_EVENTID,
804 WMI_10X_RTT_ERROR_REPORT_EVENTID,
805
806 WMI_10X_WOW_WAKEUP_HOST_EVENTID,
807 WMI_10X_DCS_INTERFERENCE_EVENTID,
808
809 /* TPC config for the current operating channel */
810 WMI_10X_PDEV_TPC_CONFIG_EVENTID,
811
812 WMI_10X_GPIO_INPUT_EVENTID,
813 WMI_10X_PDEV_UTF_EVENTID = WMI_10X_END_EVENTID-1,
814};
815
491enum wmi_phy_mode { 816enum wmi_phy_mode {
492 MODE_11A = 0, /* 11a Mode */ 817 MODE_11A = 0, /* 11a Mode */
493 MODE_11G = 1, /* 11b/g Mode */ 818 MODE_11G = 1, /* 11b/g Mode */
@@ -508,6 +833,48 @@ enum wmi_phy_mode {
508 MODE_MAX = 14 833 MODE_MAX = 14
509}; 834};
510 835
836static inline const char *ath10k_wmi_phymode_str(enum wmi_phy_mode mode)
837{
838 switch (mode) {
839 case MODE_11A:
840 return "11a";
841 case MODE_11G:
842 return "11g";
843 case MODE_11B:
844 return "11b";
845 case MODE_11GONLY:
846 return "11gonly";
847 case MODE_11NA_HT20:
848 return "11na-ht20";
849 case MODE_11NG_HT20:
850 return "11ng-ht20";
851 case MODE_11NA_HT40:
852 return "11na-ht40";
853 case MODE_11NG_HT40:
854 return "11ng-ht40";
855 case MODE_11AC_VHT20:
856 return "11ac-vht20";
857 case MODE_11AC_VHT40:
858 return "11ac-vht40";
859 case MODE_11AC_VHT80:
860 return "11ac-vht80";
861 case MODE_11AC_VHT20_2G:
862 return "11ac-vht20-2g";
863 case MODE_11AC_VHT40_2G:
864 return "11ac-vht40-2g";
865 case MODE_11AC_VHT80_2G:
866 return "11ac-vht80-2g";
867 case MODE_UNKNOWN:
868 /* skip */
869 break;
870
871 /* no default handler to allow compiler to check that the
872 * enum is fully handled */
873 };
874
875 return "<unknown>";
876}
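
A typical (hypothetical) call site, with ch and mode as locals of the surrounding event handler; the missing default case above is deliberate, so the compiler flags any newly added wmi_phy_mode value:

	ath10k_dbg(ATH10K_DBG_WMI, "chan %d phymode %s\n",
		   ch, ath10k_wmi_phymode_str(mode));
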
877
511#define WMI_CHAN_LIST_TAG 0x1 878#define WMI_CHAN_LIST_TAG 0x1
512#define WMI_SSID_LIST_TAG 0x2 879#define WMI_SSID_LIST_TAG 0x2
513#define WMI_BSSID_LIST_TAG 0x3 880#define WMI_BSSID_LIST_TAG 0x3
@@ -763,13 +1130,45 @@ struct wmi_service_ready_event {
763 struct wlan_host_mem_req mem_reqs[1]; 1130 struct wlan_host_mem_req mem_reqs[1];
764} __packed; 1131} __packed;
765 1132
766/* 1133/* This is the definition from 10.X firmware branch */
767 * status consists of upper 16 bits fo int status and lower 16 bits of 1134struct wmi_service_ready_event_10x {
768 * module ID that retuned status 1135 __le32 sw_version;
769 */ 1136 __le32 abi_version;
770#define WLAN_INIT_STATUS_SUCCESS 0x0 1137
771#define WLAN_GET_INIT_STATUS_REASON(status) ((status) & 0xffff) 1138 /* WMI_PHY_CAPABILITY */
772#define WLAN_GET_INIT_STATUS_MODULE_ID(status) (((status) >> 16) & 0xffff) 1139 __le32 phy_capability;
1140
1141 /* Maximum number of frag table entries that SW will populate less 1 */
1142 __le32 max_frag_entry;
1143 __le32 wmi_service_bitmap[WMI_SERVICE_BM_SIZE];
1144 __le32 num_rf_chains;
1145
1146 /*
1147 * The following field is only valid for service type
1148 * WMI_SERVICE_11AC
1149 */
1150 __le32 ht_cap_info; /* WMI HT Capability */
1151 __le32 vht_cap_info; /* VHT capability info field of 802.11ac */
1152 __le32 vht_supp_mcs; /* VHT Supported MCS Set field Rx/Tx same */
1153 __le32 hw_min_tx_power;
1154 __le32 hw_max_tx_power;
1155
1156 struct hal_reg_capabilities hal_reg_capabilities;
1157
1158 __le32 sys_cap_info;
1159 __le32 min_pkt_size_enable; /* Enterprise mode short pkt enable */
1160
1161 /*
 1162 * request to host to allocate a chunk of memory and pass it down to FW
 1163 * via WMI_INIT. FW uses this as FW extension memory for saving its
1164 * data structures. Only valid for low latency interfaces like PCIE
1165 * where FW can access this memory directly (or) by DMA.
1166 */
1167 __le32 num_mem_reqs;
1168
1169 struct wlan_host_mem_req mem_reqs[1];
1170} __packed;
1171
773 1172
774#define WMI_SERVICE_READY_TIMEOUT_HZ (5*HZ) 1173#define WMI_SERVICE_READY_TIMEOUT_HZ (5*HZ)
775#define WMI_UNIFIED_READY_TIMEOUT_HZ (5*HZ) 1174#define WMI_UNIFIED_READY_TIMEOUT_HZ (5*HZ)
@@ -978,6 +1377,192 @@ struct wmi_resource_config {
978 __le32 max_frag_entries; 1377 __le32 max_frag_entries;
979} __packed; 1378} __packed;
980 1379
1380struct wmi_resource_config_10x {
1381 /* number of virtual devices (VAPs) to support */
1382 __le32 num_vdevs;
1383
1384 /* number of peer nodes to support */
1385 __le32 num_peers;
1386
1387 /* number of keys per peer */
1388 __le32 num_peer_keys;
1389
1390 /* total number of TX/RX data TIDs */
1391 __le32 num_tids;
1392
1393 /*
1394 * max skid for resolving hash collisions
1395 *
1396 * The address search table is sparse, so that if two MAC addresses
1397 * result in the same hash value, the second of these conflicting
1398 * entries can slide to the next index in the address search table,
1399 * and use it, if it is unoccupied. This ast_skid_limit parameter
1400 * specifies the upper bound on how many subsequent indices to search
1401 * over to find an unoccupied space.
1402 */
1403 __le32 ast_skid_limit;
1404
1405 /*
1406 * the nominal chain mask for transmit
1407 *
1408 * The chain mask may be modified dynamically, e.g. to operate AP
1409 * tx with a reduced number of chains if no clients are associated.
1410 * This configuration parameter specifies the nominal chain-mask that
1411 * should be used when not operating with a reduced set of tx chains.
1412 */
1413 __le32 tx_chain_mask;
1414
1415 /*
1416 * the nominal chain mask for receive
1417 *
1418 * The chain mask may be modified dynamically, e.g. for a client
1419 * to use a reduced number of chains for receive if the traffic to
1420 * the client is low enough that it doesn't require downlink MIMO
1421 * or antenna diversity.
1422 * This configuration parameter specifies the nominal chain-mask that
1423 * should be used when not operating with a reduced set of rx chains.
1424 */
1425 __le32 rx_chain_mask;
1426
1427 /*
1428 * what rx reorder timeout (ms) to use for the AC
1429 *
1430 * Each WMM access class (voice, video, best-effort, background) will
1431 * have its own timeout value to dictate how long to wait for missing
1432 * rx MPDUs to arrive before flushing subsequent MPDUs that have
1433 * already been received.
1434 * This parameter specifies the timeout in milliseconds for each
1435 * class.
1436 */
1437 __le32 rx_timeout_pri_vi;
1438 __le32 rx_timeout_pri_vo;
1439 __le32 rx_timeout_pri_be;
1440 __le32 rx_timeout_pri_bk;
1441
1442 /*
1443 * what mode the rx should decap packets to
1444 *
1445 * MAC can decap to RAW (no decap), native wifi or Ethernet types
 1446 * This setting also determines the default TX behavior, however TX
1447 * behavior can be modified on a per VAP basis during VAP init
1448 */
1449 __le32 rx_decap_mode;
1450
 1451 /* maximum number of scan requests that can be queued */
1452 __le32 scan_max_pending_reqs;
1453
1454 /* maximum VDEV that could use BMISS offload */
1455 __le32 bmiss_offload_max_vdev;
1456
1457 /* maximum VDEV that could use offload roaming */
1458 __le32 roam_offload_max_vdev;
1459
1460 /* maximum AP profiles that would push to offload roaming */
1461 __le32 roam_offload_max_ap_profiles;
1462
1463 /*
1464 * how many groups to use for mcast->ucast conversion
1465 *
1466 * The target's WAL maintains a table to hold information regarding
1467 * which peers belong to a given multicast group, so that if
1468 * multicast->unicast conversion is enabled, the target can convert
1469 * multicast tx frames to a series of unicast tx frames, to each
1470 * peer within the multicast group.
 1471 * This num_mcast_groups configuration parameter tells the target how
1472 * many multicast groups to provide storage for within its multicast
1473 * group membership table.
1474 */
1475 __le32 num_mcast_groups;
1476
1477 /*
1478 * size to alloc for the mcast membership table
1479 *
1480 * This num_mcast_table_elems configuration parameter tells the
1481 * target how many peer elements it needs to provide storage for in
1482 * its multicast group membership table.
1483 * These multicast group membership table elements are shared by the
1484 * multicast groups stored within the table.
1485 */
1486 __le32 num_mcast_table_elems;
1487
1488 /*
1489 * whether/how to do multicast->unicast conversion
1490 *
1491 * This configuration parameter specifies whether the target should
1492 * perform multicast --> unicast conversion on transmit, and if so,
1493 * what to do if it finds no entries in its multicast group
1494 * membership table for the multicast IP address in the tx frame.
1495 * Configuration value:
1496 * 0 -> Do not perform multicast to unicast conversion.
1497 * 1 -> Convert multicast frames to unicast, if the IP multicast
1498 * address from the tx frame is found in the multicast group
1499 * membership table. If the IP multicast address is not found,
1500 * drop the frame.
1501 * 2 -> Convert multicast frames to unicast, if the IP multicast
1502 * address from the tx frame is found in the multicast group
1503 * membership table. If the IP multicast address is not found,
1504 * transmit the frame as multicast.
1505 */
1506 __le32 mcast2ucast_mode;
1507
1508 /*
1509 * how much memory to allocate for a tx PPDU dbg log
1510 *
1511 * This parameter controls how much memory the target will allocate
1512 * to store a log of tx PPDU meta-information (how large the PPDU
1513 * was, when it was sent, whether it was successful, etc.)
1514 */
1515 __le32 tx_dbg_log_size;
1516
1517 /* how many AST entries to be allocated for WDS */
1518 __le32 num_wds_entries;
1519
1520 /*
 1521 * MAC DMA burst size. E.g., for target PCI the limit can be
 1522 * 0 - default, 1 - 256B
1523 */
1524 __le32 dma_burst_size;
1525
1526 /*
1527 * Fixed delimiters to be inserted after every MPDU to
1528 * account for interface latency to avoid underrun.
1529 */
1530 __le32 mac_aggr_delim;
1531
1532 /*
1533 * determine whether target is responsible for detecting duplicate
1534 * non-aggregate MPDU and timing out stale fragments.
1535 *
1536 * A-MPDU reordering is always performed on the target.
1537 *
1538 * 0: target responsible for frag timeout and dup checking
1539 * 1: host responsible for frag timeout and dup checking
1540 */
1541 __le32 rx_skip_defrag_timeout_dup_detection_check;
1542
1543 /*
 1544 * Configuration for VoW:
 1545 * number of video nodes to be supported
 1546 * and max number of descriptors for each video link (node).
1547 */
1548 __le32 vow_config;
1549
1550 /* Number of msdu descriptors target should use */
1551 __le32 num_msdu_desc;
1552
1553 /*
1554 * Max. number of Tx fragments per MSDU
1555 * This parameter controls the max number of Tx fragments per MSDU.
1556 * This is sent by the target as part of the WMI_SERVICE_READY event
 1557 * and is overridden by the OS shim as required.
1558 */
1559 __le32 max_frag_entries;
1560} __packed;
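
The mcast2ucast_mode comment above encodes a three-valued policy. A short illustrative helper showing how a host might program a few of these fields (helper name and counts are hypothetical):

	static void example_fill_10x_config(struct wmi_resource_config_10x *cfg)
	{
		cfg->num_vdevs = __cpu_to_le32(16);	/* example value */
		cfg->num_peers = __cpu_to_le32(127);	/* example value */
		/* 0: pass multicast through, 1: convert and drop on lookup miss,
		 * 2: convert and fall back to multicast on lookup miss */
		cfg->mcast2ucast_mode = __cpu_to_le32(0);
		/* ... remaining fields elided ... */
	}
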
1561
1562
1563#define NUM_UNITS_IS_NUM_VDEVS 0x1
1564#define NUM_UNITS_IS_NUM_PEERS 0x2
1565
 981/* structure describing host memory chunk. */ 1566/* structure describing host memory chunk. */
982struct host_memory_chunk { 1567struct host_memory_chunk {
983 /* id of the request that is passed up in service ready */ 1568 /* id of the request that is passed up in service ready */
@@ -999,6 +1584,18 @@ struct wmi_init_cmd {
999 struct host_memory_chunk host_mem_chunks[1]; 1584 struct host_memory_chunk host_mem_chunks[1];
1000} __packed; 1585} __packed;
1001 1586
 1587/* _10x structure is from 10.X FW API */
1588struct wmi_init_cmd_10x {
1589 struct wmi_resource_config_10x resource_config;
1590 __le32 num_host_mem_chunks;
1591
1592 /*
1593 * variable number of host memory chunks.
1594 * This should be the last element in the structure
1595 */
1596 struct host_memory_chunk host_mem_chunks[1];
1597} __packed;
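
Since host_mem_chunks[] is variable length and one element is already counted in the struct, a command carrying num_chunks chunks must be sized accordingly; a sketch with assumed locals:

	len = sizeof(struct wmi_init_cmd_10x) +
	      (num_chunks - 1) * sizeof(struct host_memory_chunk);
	skb = ath10k_wmi_alloc_skb(len);
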
1598
1002/* TLV for channel list */ 1599/* TLV for channel list */
1003struct wmi_chan_list { 1600struct wmi_chan_list {
1004 __le32 tag; /* WMI_CHAN_LIST_TAG */ 1601 __le32 tag; /* WMI_CHAN_LIST_TAG */
@@ -1118,6 +1715,88 @@ struct wmi_start_scan_cmd {
1118 */ 1715 */
1119} __packed; 1716} __packed;
1120 1717
1718/* This is the definition from 10.X firmware branch */
1719struct wmi_start_scan_cmd_10x {
1720 /* Scan ID */
1721 __le32 scan_id;
1722
1723 /* Scan requestor ID */
1724 __le32 scan_req_id;
1725
1726 /* VDEV id(interface) that is requesting scan */
1727 __le32 vdev_id;
1728
1729 /* Scan Priority, input to scan scheduler */
1730 __le32 scan_priority;
1731
1732 /* Scan events subscription */
1733 __le32 notify_scan_events;
1734
1735 /* dwell time in msec on active channels */
1736 __le32 dwell_time_active;
1737
1738 /* dwell time in msec on passive channels */
1739 __le32 dwell_time_passive;
1740
1741 /*
 1742 * min time in msec on the BSS channel, only valid if at least one
1743 * VDEV is active
1744 */
1745 __le32 min_rest_time;
1746
1747 /*
 1748 * max rest time in msec on the BSS channel, only valid if at least
1749 * one VDEV is active
1750 */
1751 /*
 1752 * the scanner will rest on the bss channel at least min_rest_time.
 1753 * After min_rest_time the scanner will start checking for tx/rx
 1754 * activity on all VDEVs. If there is no activity the scanner will
 1755 * switch to off channel. If there is activity the scanner will let
 1756 * the radio stay on the bss channel until max_rest_time expires. At
 1757 * max_rest_time the scanner will switch to off channel irrespective of
 1758 * activity. Activity is determined by the idle_time parameter.
1759 */
1760 __le32 max_rest_time;
1761
1762 /*
1763 * time before sending next set of probe requests.
1764 * The scanner keeps repeating probe requests transmission with
1765 * period specified by repeat_probe_time.
1766 * The number of probe requests specified depends on the ssid_list
1767 * and bssid_list
1768 */
1769 __le32 repeat_probe_time;
1770
 1771 /* time in msec between 2 consecutive probe requests within a set. */
1772 __le32 probe_spacing_time;
1773
1774 /*
1775 * data inactivity time in msec on bss channel that will be used by
1776 * scanner for measuring the inactivity.
1777 */
1778 __le32 idle_time;
1779
1780 /* maximum time in msec allowed for scan */
1781 __le32 max_scan_time;
1782
1783 /*
1784 * delay in msec before sending first probe request after switching
1785 * to a channel
1786 */
1787 __le32 probe_delay;
1788
1789 /* Scan control flags */
1790 __le32 scan_ctrl_flags;
1791
1792 /*
 1793 * TLV (tag length value) parameters follow the scan_cmd structure.
 1794 * TLV can contain channel list, bssid list, ssid list and
 1795 * IEs. The TLV tags are defined above.
1796 */
1797} __packed;
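
For reference, the timing fields above correspond one-to-one to the defaults programmed by ath10k_wmi_start_scan_init() earlier in this patch; an illustrative helper (name hypothetical):

	static void example_fill_scan_times(struct wmi_start_scan_cmd_10x *cmd)
	{
		cmd->repeat_probe_time  = __cpu_to_le32(0);
		cmd->probe_spacing_time = __cpu_to_le32(0);
		cmd->idle_time          = __cpu_to_le32(0);
		cmd->max_scan_time      = __cpu_to_le32(20000); /* ms */
		cmd->probe_delay        = __cpu_to_le32(5);     /* ms */
	}
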
1798
1799
1121struct wmi_ssid_arg { 1800struct wmi_ssid_arg {
1122 int len; 1801 int len;
1123 const u8 *ssid; 1802 const u8 *ssid;
@@ -1268,7 +1947,7 @@ struct wmi_scan_event {
1268 * good idea to pass all the fields in the RX status 1947 * good idea to pass all the fields in the RX status
1269 * descriptor up to the host. 1948 * descriptor up to the host.
1270 */ 1949 */
1271struct wmi_mgmt_rx_hdr { 1950struct wmi_mgmt_rx_hdr_v1 {
1272 __le32 channel; 1951 __le32 channel;
1273 __le32 snr; 1952 __le32 snr;
1274 __le32 rate; 1953 __le32 rate;
@@ -1277,8 +1956,18 @@ struct wmi_mgmt_rx_hdr {
1277 __le32 status; /* %WMI_RX_STATUS_ */ 1956 __le32 status; /* %WMI_RX_STATUS_ */
1278} __packed; 1957} __packed;
1279 1958
1280struct wmi_mgmt_rx_event { 1959struct wmi_mgmt_rx_hdr_v2 {
1281 struct wmi_mgmt_rx_hdr hdr; 1960 struct wmi_mgmt_rx_hdr_v1 v1;
1961 __le32 rssi_ctl[4];
1962} __packed;
1963
1964struct wmi_mgmt_rx_event_v1 {
1965 struct wmi_mgmt_rx_hdr_v1 hdr;
1966 u8 buf[0];
1967} __packed;
1968
1969struct wmi_mgmt_rx_event_v2 {
1970 struct wmi_mgmt_rx_hdr_v2 hdr;
1282 u8 buf[0]; 1971 u8 buf[0];
1283} __packed; 1972} __packed;
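
Both event layouts carry the management frame immediately after their header, so payload access only depends on knowing which header version the firmware emits; a hypothetical accessor, assuming that is learned at attach time:

	static u8 *example_mgmt_rx_frame(struct sk_buff *skb, bool fw_uses_v2)
	{
		if (fw_uses_v2)
			return ((struct wmi_mgmt_rx_event_v2 *)skb->data)->buf;

		return ((struct wmi_mgmt_rx_event_v1 *)skb->data)->buf;
	}
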
1284 1973
@@ -1465,6 +2154,60 @@ struct wmi_csa_event {
1465#define VDEV_DEFAULT_STATS_UPDATE_PERIOD 500 2154#define VDEV_DEFAULT_STATS_UPDATE_PERIOD 500
1466#define PEER_DEFAULT_STATS_UPDATE_PERIOD 500 2155#define PEER_DEFAULT_STATS_UPDATE_PERIOD 500
1467 2156
2157struct wmi_pdev_param_map {
2158 u32 tx_chain_mask;
2159 u32 rx_chain_mask;
2160 u32 txpower_limit2g;
2161 u32 txpower_limit5g;
2162 u32 txpower_scale;
2163 u32 beacon_gen_mode;
2164 u32 beacon_tx_mode;
2165 u32 resmgr_offchan_mode;
2166 u32 protection_mode;
2167 u32 dynamic_bw;
2168 u32 non_agg_sw_retry_th;
2169 u32 agg_sw_retry_th;
2170 u32 sta_kickout_th;
2171 u32 ac_aggrsize_scaling;
2172 u32 ltr_enable;
2173 u32 ltr_ac_latency_be;
2174 u32 ltr_ac_latency_bk;
2175 u32 ltr_ac_latency_vi;
2176 u32 ltr_ac_latency_vo;
2177 u32 ltr_ac_latency_timeout;
2178 u32 ltr_sleep_override;
2179 u32 ltr_rx_override;
2180 u32 ltr_tx_activity_timeout;
2181 u32 l1ss_enable;
2182 u32 dsleep_enable;
2183 u32 pcielp_txbuf_flush;
2184 u32 pcielp_txbuf_watermark;
2185 u32 pcielp_txbuf_tmo_en;
2186 u32 pcielp_txbuf_tmo_value;
2187 u32 pdev_stats_update_period;
2188 u32 vdev_stats_update_period;
2189 u32 peer_stats_update_period;
2190 u32 bcnflt_stats_update_period;
2191 u32 pmf_qos;
2192 u32 arp_ac_override;
2193 u32 arpdhcp_ac_override;
2194 u32 dcs;
2195 u32 ani_enable;
2196 u32 ani_poll_period;
2197 u32 ani_listen_period;
2198 u32 ani_ofdm_level;
2199 u32 ani_cck_level;
2200 u32 dyntxchain;
2201 u32 proxy_sta;
2202 u32 idle_ps_config;
2203 u32 power_gating_sleep;
2204 u32 fast_channel_reset;
2205 u32 burst_dur;
2206 u32 burst_enable;
2207};
2208
2209#define WMI_PDEV_PARAM_UNSUPPORTED 0
2210
1468enum wmi_pdev_param { 2211enum wmi_pdev_param {
 1469 /* TX chain mask */ 2212 /* TX chain mask */
1470 WMI_PDEV_PARAM_TX_CHAIN_MASK = 0x1, 2213 WMI_PDEV_PARAM_TX_CHAIN_MASK = 0x1,
@@ -1564,6 +2307,97 @@ enum wmi_pdev_param {
1564 WMI_PDEV_PARAM_POWER_GATING_SLEEP, 2307 WMI_PDEV_PARAM_POWER_GATING_SLEEP,
1565}; 2308};
1566 2309
2310enum wmi_10x_pdev_param {
 2311 /* TX chain mask */
 2312 WMI_10X_PDEV_PARAM_TX_CHAIN_MASK = 0x1,
 2313 /* RX chain mask */
2314 WMI_10X_PDEV_PARAM_RX_CHAIN_MASK,
2315 /* TX power limit for 2G Radio */
2316 WMI_10X_PDEV_PARAM_TXPOWER_LIMIT2G,
2317 /* TX power limit for 5G Radio */
2318 WMI_10X_PDEV_PARAM_TXPOWER_LIMIT5G,
2319 /* TX power scale */
2320 WMI_10X_PDEV_PARAM_TXPOWER_SCALE,
 2321 /* Beacon generation mode. 0: host, 1: target */
 2322 WMI_10X_PDEV_PARAM_BEACON_GEN_MODE,
 2323 /* Beacon transmission mode. 0: staggered, 1: bursted */
2324 WMI_10X_PDEV_PARAM_BEACON_TX_MODE,
2325 /*
 2326 * Resource manager offchan mode.
 2327 * 0: turn off offchan mode. 1: turn on offchan mode
2328 */
2329 WMI_10X_PDEV_PARAM_RESMGR_OFFCHAN_MODE,
2330 /*
2331 * Protection mode:
 2332 * 0: no protection, 1: use CTS-to-self, 2: use RTS/CTS
2333 */
2334 WMI_10X_PDEV_PARAM_PROTECTION_MODE,
2335 /* Dynamic bandwidth 0: disable 1: enable */
2336 WMI_10X_PDEV_PARAM_DYNAMIC_BW,
 2337 /* Non-aggregate / 11g sw retry threshold. 0-disable */
 2338 WMI_10X_PDEV_PARAM_NON_AGG_SW_RETRY_TH,
 2339 /* aggregate sw retry threshold. 0-disable */
 2340 WMI_10X_PDEV_PARAM_AGG_SW_RETRY_TH,
 2341 /* Station kickout threshold (number of consecutive failures). 0-disable */
 2342 WMI_10X_PDEV_PARAM_STA_KICKOUT_TH,
 2343 /* Aggregate size scaling configuration per AC */
2344 WMI_10X_PDEV_PARAM_AC_AGGRSIZE_SCALING,
2345 /* LTR enable */
2346 WMI_10X_PDEV_PARAM_LTR_ENABLE,
2347 /* LTR latency for BE, in us */
2348 WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_BE,
2349 /* LTR latency for BK, in us */
2350 WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_BK,
2351 /* LTR latency for VI, in us */
2352 WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_VI,
2353 /* LTR latency for VO, in us */
2354 WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_VO,
2355 /* LTR AC latency timeout, in ms */
2356 WMI_10X_PDEV_PARAM_LTR_AC_LATENCY_TIMEOUT,
2357 /* LTR platform latency override, in us */
2358 WMI_10X_PDEV_PARAM_LTR_SLEEP_OVERRIDE,
2359 /* LTR-RX override, in us */
2360 WMI_10X_PDEV_PARAM_LTR_RX_OVERRIDE,
2361 /* Tx activity timeout for LTR, in us */
2362 WMI_10X_PDEV_PARAM_LTR_TX_ACTIVITY_TIMEOUT,
2363 /* L1SS state machine enable */
2364 WMI_10X_PDEV_PARAM_L1SS_ENABLE,
2365 /* Deep sleep state machine enable */
2366 WMI_10X_PDEV_PARAM_DSLEEP_ENABLE,
2367 /* pdev level stats update period in ms */
2368 WMI_10X_PDEV_PARAM_PDEV_STATS_UPDATE_PERIOD,
2369 /* vdev level stats update period in ms */
2370 WMI_10X_PDEV_PARAM_VDEV_STATS_UPDATE_PERIOD,
2371 /* peer level stats update period in ms */
2372 WMI_10X_PDEV_PARAM_PEER_STATS_UPDATE_PERIOD,
2373 /* beacon filter status update period */
2374 WMI_10X_PDEV_PARAM_BCNFLT_STATS_UPDATE_PERIOD,
2375 /* QOS Mgmt frame protection MFP/PMF 0: disable, 1: enable */
2376 WMI_10X_PDEV_PARAM_PMF_QOS,
2377 /* Access category on which ARP and DHCP frames are sent */
2378 WMI_10X_PDEV_PARAM_ARPDHCP_AC_OVERRIDE,
2379 /* DCS configuration */
2380 WMI_10X_PDEV_PARAM_DCS,
2381 /* Enable/Disable ANI on target */
2382 WMI_10X_PDEV_PARAM_ANI_ENABLE,
2383 /* configure the ANI polling period */
2384 WMI_10X_PDEV_PARAM_ANI_POLL_PERIOD,
2385 /* configure the ANI listening period */
2386 WMI_10X_PDEV_PARAM_ANI_LISTEN_PERIOD,
2387 /* configure OFDM immunity level */
2388 WMI_10X_PDEV_PARAM_ANI_OFDM_LEVEL,
2389 /* configure CCK immunity level */
2390 WMI_10X_PDEV_PARAM_ANI_CCK_LEVEL,
2391 /* Enable/Disable CDD for 1x1 STAs in rate control module */
2392 WMI_10X_PDEV_PARAM_DYNTXCHAIN,
 2393 /* Enable/Disable Fast channel reset */
2394 WMI_10X_PDEV_PARAM_FAST_CHANNEL_RESET,
2395 /* Set Bursting DUR */
2396 WMI_10X_PDEV_PARAM_BURST_DUR,
 2397 /* Set Bursting Enable */
2398 WMI_10X_PDEV_PARAM_BURST_ENABLE,
2399};
2400
1567struct wmi_pdev_set_param_cmd { 2401struct wmi_pdev_set_param_cmd {
1568 __le32 param_id; 2402 __le32 param_id;
1569 __le32 param_value; 2403 __le32 param_value;
@@ -2088,6 +2922,61 @@ enum wmi_rate_preamble {
2088/* Value to disable fixed rate setting */ 2922/* Value to disable fixed rate setting */
2089#define WMI_FIXED_RATE_NONE (0xff) 2923#define WMI_FIXED_RATE_NONE (0xff)
2090 2924
2925struct wmi_vdev_param_map {
2926 u32 rts_threshold;
2927 u32 fragmentation_threshold;
2928 u32 beacon_interval;
2929 u32 listen_interval;
2930 u32 multicast_rate;
2931 u32 mgmt_tx_rate;
2932 u32 slot_time;
2933 u32 preamble;
2934 u32 swba_time;
2935 u32 wmi_vdev_stats_update_period;
2936 u32 wmi_vdev_pwrsave_ageout_time;
2937 u32 wmi_vdev_host_swba_interval;
2938 u32 dtim_period;
2939 u32 wmi_vdev_oc_scheduler_air_time_limit;
2940 u32 wds;
2941 u32 atim_window;
2942 u32 bmiss_count_max;
2943 u32 bmiss_first_bcnt;
2944 u32 bmiss_final_bcnt;
2945 u32 feature_wmm;
2946 u32 chwidth;
2947 u32 chextoffset;
2948 u32 disable_htprotection;
2949 u32 sta_quickkickout;
2950 u32 mgmt_rate;
2951 u32 protection_mode;
2952 u32 fixed_rate;
2953 u32 sgi;
2954 u32 ldpc;
2955 u32 tx_stbc;
2956 u32 rx_stbc;
2957 u32 intra_bss_fwd;
2958 u32 def_keyid;
2959 u32 nss;
2960 u32 bcast_data_rate;
2961 u32 mcast_data_rate;
2962 u32 mcast_indicate;
2963 u32 dhcp_indicate;
2964 u32 unknown_dest_indicate;
2965 u32 ap_keepalive_min_idle_inactive_time_secs;
2966 u32 ap_keepalive_max_idle_inactive_time_secs;
2967 u32 ap_keepalive_max_unresponsive_time_secs;
2968 u32 ap_enable_nawds;
2969 u32 mcast2ucast_set;
2970 u32 enable_rtscts;
2971 u32 txbf;
2972 u32 packet_powersave;
2973 u32 drop_unencry;
2974 u32 tx_encap_type;
2975 u32 ap_detect_out_of_sync_sleeping_sta_time_secs;
2976};
2977
2978#define WMI_VDEV_PARAM_UNSUPPORTED 0
2979
2091/* the definition of different VDEV parameters */ 2980/* the definition of different VDEV parameters */
2092enum wmi_vdev_param { 2981enum wmi_vdev_param {
2093 /* RTS Threshold */ 2982 /* RTS Threshold */
@@ -2219,6 +3108,121 @@ enum wmi_vdev_param {
2219 WMI_VDEV_PARAM_TX_ENCAP_TYPE, 3108 WMI_VDEV_PARAM_TX_ENCAP_TYPE,
2220}; 3109};
2221 3110
3111/* the definition of different VDEV parameters */
3112enum wmi_10x_vdev_param {
3113 /* RTS Threshold */
3114 WMI_10X_VDEV_PARAM_RTS_THRESHOLD = 0x1,
3115 /* Fragmentation threshold */
3116 WMI_10X_VDEV_PARAM_FRAGMENTATION_THRESHOLD,
3117 /* beacon interval in TUs */
3118 WMI_10X_VDEV_PARAM_BEACON_INTERVAL,
3119 /* Listen interval in TUs */
3120 WMI_10X_VDEV_PARAM_LISTEN_INTERVAL,
 3121 /* multicast rate in Mbps */
3122 WMI_10X_VDEV_PARAM_MULTICAST_RATE,
3123 /* management frame rate in Mbps */
3124 WMI_10X_VDEV_PARAM_MGMT_TX_RATE,
3125 /* slot time (long vs short) */
3126 WMI_10X_VDEV_PARAM_SLOT_TIME,
3127 /* preamble (long vs short) */
3128 WMI_10X_VDEV_PARAM_PREAMBLE,
3129 /* SWBA time (time before tbtt in msec) */
3130 WMI_10X_VDEV_PARAM_SWBA_TIME,
3131 /* time period for updating VDEV stats */
3132 WMI_10X_VDEV_STATS_UPDATE_PERIOD,
3133 /* age out time in msec for frames queued for station in power save */
3134 WMI_10X_VDEV_PWRSAVE_AGEOUT_TIME,
3135 /*
3136 * Host SWBA interval (time in msec before tbtt for SWBA event
3137 * generation).
3138 */
3139 WMI_10X_VDEV_HOST_SWBA_INTERVAL,
3140 /* DTIM period (specified in units of num beacon intervals) */
3141 WMI_10X_VDEV_PARAM_DTIM_PERIOD,
3142 /*
3143 * scheduler air time limit for this VDEV. used by off chan
3144 * scheduler.
3145 */
3146 WMI_10X_VDEV_OC_SCHEDULER_AIR_TIME_LIMIT,
 3147 /* enable/disable WDS for this VDEV */
3148 WMI_10X_VDEV_PARAM_WDS,
3149 /* ATIM Window */
3150 WMI_10X_VDEV_PARAM_ATIM_WINDOW,
3151 /* BMISS max */
3152 WMI_10X_VDEV_PARAM_BMISS_COUNT_MAX,
 3153 /* WMM enable/disable */
3154 WMI_10X_VDEV_PARAM_FEATURE_WMM,
3155 /* Channel width */
3156 WMI_10X_VDEV_PARAM_CHWIDTH,
3157 /* Channel Offset */
3158 WMI_10X_VDEV_PARAM_CHEXTOFFSET,
3159 /* Disable HT Protection */
3160 WMI_10X_VDEV_PARAM_DISABLE_HTPROTECTION,
3161 /* Quick STA Kickout */
3162 WMI_10X_VDEV_PARAM_STA_QUICKKICKOUT,
3163 /* Rate to be used with Management frames */
3164 WMI_10X_VDEV_PARAM_MGMT_RATE,
3165 /* Protection Mode */
3166 WMI_10X_VDEV_PARAM_PROTECTION_MODE,
3167 /* Fixed rate setting */
3168 WMI_10X_VDEV_PARAM_FIXED_RATE,
3169 /* Short GI Enable/Disable */
3170 WMI_10X_VDEV_PARAM_SGI,
3171 /* Enable LDPC */
3172 WMI_10X_VDEV_PARAM_LDPC,
3173 /* Enable Tx STBC */
3174 WMI_10X_VDEV_PARAM_TX_STBC,
3175 /* Enable Rx STBC */
3176 WMI_10X_VDEV_PARAM_RX_STBC,
3177 /* Intra BSS forwarding */
3178 WMI_10X_VDEV_PARAM_INTRA_BSS_FWD,
3179 /* Setting Default xmit key for Vdev */
3180 WMI_10X_VDEV_PARAM_DEF_KEYID,
3181 /* NSS width */
3182 WMI_10X_VDEV_PARAM_NSS,
3183 /* Set the custom rate for the broadcast data frames */
3184 WMI_10X_VDEV_PARAM_BCAST_DATA_RATE,
3185 /* Set the custom rate (rate-code) for multicast data frames */
3186 WMI_10X_VDEV_PARAM_MCAST_DATA_RATE,
3187 /* Tx multicast packet indicate Enable/Disable */
3188 WMI_10X_VDEV_PARAM_MCAST_INDICATE,
3189 /* Tx DHCP packet indicate Enable/Disable */
3190 WMI_10X_VDEV_PARAM_DHCP_INDICATE,
3191 /* Enable host inspection of Tx unicast packet to unknown destination */
3192 WMI_10X_VDEV_PARAM_UNKNOWN_DEST_INDICATE,
3193
 3194 /* The minimum idle time after which the AP considers a STA inactive */
3195 WMI_10X_VDEV_PARAM_AP_KEEPALIVE_MIN_IDLE_INACTIVE_TIME_SECS,
3196
3197 /*
3198 * An associated STA is considered inactive when there is no recent
3199 * TX/RX activity and no downlink frames are buffered for it. Once a
3200 * STA exceeds the maximum idle inactive time, the AP will send an
3201 * 802.11 data-null as a keep alive to verify the STA is still
 3202 * associated. If the STA does not ACK the data-null, or if the data-null
3203 * is buffered and the STA does not retrieve it, the STA will be
3204 * considered unresponsive
3205 * (see WMI_10X_VDEV_AP_KEEPALIVE_MAX_UNRESPONSIVE_TIME_SECS).
3206 */
3207 WMI_10X_VDEV_PARAM_AP_KEEPALIVE_MAX_IDLE_INACTIVE_TIME_SECS,
3208
3209 /*
3210 * An associated STA is considered unresponsive if there is no recent
3211 * TX/RX activity and downlink frames are buffered for it. Once a STA
3212 * exceeds the maximum unresponsive time, the AP will send a
3213 * WMI_10X_STA_KICKOUT event to the host so the STA can be deleted. */
3214 WMI_10X_VDEV_PARAM_AP_KEEPALIVE_MAX_UNRESPONSIVE_TIME_SECS,
3215
3216 /* Enable NAWDS : MCAST INSPECT Enable, NAWDS Flag set */
3217 WMI_10X_VDEV_PARAM_AP_ENABLE_NAWDS,
3218
3219 WMI_10X_VDEV_PARAM_MCAST2UCAST_SET,
3220 /* Enable/Disable RTS-CTS */
3221 WMI_10X_VDEV_PARAM_ENABLE_RTSCTS,
3222
3223 WMI_10X_VDEV_PARAM_AP_DETECT_OUT_OF_SYNC_SLEEPING_STA_TIME_SECS,
3224};
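
As with the command map, a per-firmware struct wmi_vdev_param_map ties these ids to the generic fields, with missing knobs set to WMI_VDEV_PARAM_UNSUPPORTED. Abbreviated, illustrative initializer (variable name hypothetical):

	static struct wmi_vdev_param_map wmi_10x_vdev_param_map_example = {
		.rts_threshold		 = WMI_10X_VDEV_PARAM_RTS_THRESHOLD,
		.fragmentation_threshold = WMI_10X_VDEV_PARAM_FRAGMENTATION_THRESHOLD,
		/* ... */
		/* 10.X has no TX_ENCAP_TYPE parameter */
		.tx_encap_type		 = WMI_VDEV_PARAM_UNSUPPORTED,
	};
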
3225
2222/* slot time long */ 3226/* slot time long */
2223#define WMI_VDEV_SLOT_TIME_LONG 0x1 3227#define WMI_VDEV_SLOT_TIME_LONG 0x1
2224/* slot time short */ 3228/* slot time short */
@@ -3000,7 +4004,6 @@ struct wmi_force_fw_hang_cmd {
3000 4004
3001#define WMI_MAX_EVENT 0x1000 4005#define WMI_MAX_EVENT 0x1000
3002/* Maximum number of pending TXed WMI packets */ 4006/* Maximum number of pending TXed WMI packets */
3003#define WMI_MAX_PENDING_TX_COUNT 128
3004#define WMI_SKB_HEADROOM sizeof(struct wmi_cmd_hdr) 4007#define WMI_SKB_HEADROOM sizeof(struct wmi_cmd_hdr)
3005 4008
3006/* By default disable power save for IBSS */ 4009/* By default disable power save for IBSS */
@@ -3013,7 +4016,6 @@ int ath10k_wmi_attach(struct ath10k *ar);
3013void ath10k_wmi_detach(struct ath10k *ar); 4016void ath10k_wmi_detach(struct ath10k *ar);
3014int ath10k_wmi_wait_for_service_ready(struct ath10k *ar); 4017int ath10k_wmi_wait_for_service_ready(struct ath10k *ar);
3015int ath10k_wmi_wait_for_unified_ready(struct ath10k *ar); 4018int ath10k_wmi_wait_for_unified_ready(struct ath10k *ar);
3016void ath10k_wmi_flush_tx(struct ath10k *ar);
3017 4019
3018int ath10k_wmi_connect_htc_service(struct ath10k *ar); 4020int ath10k_wmi_connect_htc_service(struct ath10k *ar);
3019int ath10k_wmi_pdev_set_channel(struct ath10k *ar, 4021int ath10k_wmi_pdev_set_channel(struct ath10k *ar,
@@ -3022,8 +4024,7 @@ int ath10k_wmi_pdev_suspend_target(struct ath10k *ar);
3022int ath10k_wmi_pdev_resume_target(struct ath10k *ar); 4024int ath10k_wmi_pdev_resume_target(struct ath10k *ar);
3023int ath10k_wmi_pdev_set_regdomain(struct ath10k *ar, u16 rd, u16 rd2g, 4025int ath10k_wmi_pdev_set_regdomain(struct ath10k *ar, u16 rd, u16 rd2g,
3024 u16 rd5g, u16 ctl2g, u16 ctl5g); 4026 u16 rd5g, u16 ctl2g, u16 ctl5g);
3025int ath10k_wmi_pdev_set_param(struct ath10k *ar, enum wmi_pdev_param id, 4027int ath10k_wmi_pdev_set_param(struct ath10k *ar, u32 id, u32 value);
3026 u32 value);
3027int ath10k_wmi_cmd_init(struct ath10k *ar); 4028int ath10k_wmi_cmd_init(struct ath10k *ar);
3028int ath10k_wmi_start_scan(struct ath10k *ar, const struct wmi_start_scan_arg *); 4029int ath10k_wmi_start_scan(struct ath10k *ar, const struct wmi_start_scan_arg *);
3029void ath10k_wmi_start_scan_init(struct ath10k *ar, struct wmi_start_scan_arg *); 4030void ath10k_wmi_start_scan_init(struct ath10k *ar, struct wmi_start_scan_arg *);
@@ -3043,7 +4044,7 @@ int ath10k_wmi_vdev_up(struct ath10k *ar, u32 vdev_id, u32 aid,
3043 const u8 *bssid); 4044 const u8 *bssid);
3044int ath10k_wmi_vdev_down(struct ath10k *ar, u32 vdev_id); 4045int ath10k_wmi_vdev_down(struct ath10k *ar, u32 vdev_id);
3045int ath10k_wmi_vdev_set_param(struct ath10k *ar, u32 vdev_id, 4046int ath10k_wmi_vdev_set_param(struct ath10k *ar, u32 vdev_id,
3046 enum wmi_vdev_param param_id, u32 param_value); 4047 u32 param_id, u32 param_value);
3047int ath10k_wmi_vdev_install_key(struct ath10k *ar, 4048int ath10k_wmi_vdev_install_key(struct ath10k *ar,
3048 const struct wmi_vdev_install_key_arg *arg); 4049 const struct wmi_vdev_install_key_arg *arg);
3049int ath10k_wmi_peer_create(struct ath10k *ar, u32 vdev_id, 4050int ath10k_wmi_peer_create(struct ath10k *ar, u32 vdev_id,
@@ -3066,11 +4067,13 @@ int ath10k_wmi_set_ap_ps_param(struct ath10k *ar, u32 vdev_id, const u8 *mac,
3066 enum wmi_ap_ps_peer_param param_id, u32 value); 4067 enum wmi_ap_ps_peer_param param_id, u32 value);
3067int ath10k_wmi_scan_chan_list(struct ath10k *ar, 4068int ath10k_wmi_scan_chan_list(struct ath10k *ar,
3068 const struct wmi_scan_chan_list_arg *arg); 4069 const struct wmi_scan_chan_list_arg *arg);
3069int ath10k_wmi_beacon_send(struct ath10k *ar, const struct wmi_bcn_tx_arg *arg); 4070int ath10k_wmi_beacon_send_nowait(struct ath10k *ar,
4071 const struct wmi_bcn_tx_arg *arg);
3070int ath10k_wmi_pdev_set_wmm_params(struct ath10k *ar, 4072int ath10k_wmi_pdev_set_wmm_params(struct ath10k *ar,
3071 const struct wmi_pdev_set_wmm_params_arg *arg); 4073 const struct wmi_pdev_set_wmm_params_arg *arg);
3072int ath10k_wmi_request_stats(struct ath10k *ar, enum wmi_stats_id stats_id); 4074int ath10k_wmi_request_stats(struct ath10k *ar, enum wmi_stats_id stats_id);
3073int ath10k_wmi_force_fw_hang(struct ath10k *ar, 4075int ath10k_wmi_force_fw_hang(struct ath10k *ar,
3074 enum wmi_force_fw_hang_type type, u32 delay_ms); 4076 enum wmi_force_fw_hang_type type, u32 delay_ms);
4077int ath10k_wmi_mgmt_tx(struct ath10k *ar, struct sk_buff *skb);
3075 4078
3076#endif /* _WMI_H_ */ 4079#endif /* _WMI_H_ */