author    2023-02-21 18:24:12 -0800
committer 2023-02-21 18:24:12 -0800
commit    5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree      cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/net/wireless/realtek/rtw89/wow.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port range
on a socket-by-socket basis (see the sketch after this message).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
BPF:
- Add a rbtree data structure following the "next-gen data structure"
precedent set by recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs.
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of Physical Layer Collision Avoidance (PLCA)
Reconciliation Sublayer (RS) (802.3cg-2019). Modern version of
shared medium Ethernet.
- Support for MAC Merge layer (IEEE 802.3-2018 clause 99). Allowing
preemption of low priority frames by high priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages with notifications for debug.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper phys
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
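As a concrete illustration of the IP_LOCAL_PORT_RANGE item above, here is a minimal user-space sketch. The option number (51) and the u32 layout (upper port bound in the high 16 bits, lower bound in the low 16 bits) reflect my reading of the 6.3 uapi headers and should be treated as assumptions rather than authoritative documentation.

/* Minimal sketch, assuming IP_LOCAL_PORT_RANGE == 51 and the
 * "hi << 16 | lo" value layout from the 6.3 uapi headers.
 */
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_LOCAL_PORT_RANGE
#define IP_LOCAL_PORT_RANGE 51	/* assumed value, per include/uapi/linux/in.h in 6.3 */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	uint32_t range = (60999u << 16) | 60000u;	/* upper bound << 16 | lower bound */

	if (fd < 0)
		return 1;

	/* Restrict the ephemeral port range for this socket only, without
	 * touching the system-wide net.ipv4.ip_local_port_range sysctl.
	 */
	if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
		       &range, sizeof(range)) < 0)
		perror("setsockopt(IP_LOCAL_PORT_RANGE)");

	close(fd);
	return 0;
}

Unlike the net.ipv4.ip_local_port_range sysctl, the restriction applies only to the socket the option is set on, which is the point of the new feature.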
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/net/wireless/realtek/rtw89/wow.c')
-rw-r--r--   drivers/net/wireless/realtek/rtw89/wow.c   851
1 file changed, 851 insertions(+), 0 deletions(-)
diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
new file mode 100644
index 000000000..c78ee2ab7
--- /dev/null
+++ b/drivers/net/wireless/realtek/rtw89/wow.c
@@ -0,0 +1,851 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/* Copyright(c) 2019-2022 Realtek Corporation
+ */
+#include "cam.h"
+#include "core.h"
+#include "debug.h"
+#include "fw.h"
+#include "mac.h"
+#include "phy.h"
+#include "ps.h"
+#include "reg.h"
+#include "util.h"
+#include "wow.h"
+
+static void rtw89_wow_leave_deep_ps(struct rtw89_dev *rtwdev)
+{
+	__rtw89_leave_ps_mode(rtwdev);
+}
+
+static void rtw89_wow_enter_deep_ps(struct rtw89_dev *rtwdev)
+{
+	struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+	struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
+
+	__rtw89_enter_ps_mode(rtwdev, rtwvif);
+}
+
+static void rtw89_wow_enter_lps(struct rtw89_dev *rtwdev)
+{
+	struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+	struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
+
+	rtw89_enter_lps(rtwdev, rtwvif);
+}
+
+static void rtw89_wow_leave_lps(struct rtw89_dev *rtwdev)
+{
+	rtw89_leave_lps(rtwdev);
+}
+
+static int rtw89_wow_config_mac(struct rtw89_dev *rtwdev, bool enable_wow)
+{
+	int ret;
+
+	if (enable_wow) {
+		ret = rtw89_mac_resize_ple_rx_quota(rtwdev, true);
+		if (ret) {
+			rtw89_err(rtwdev, "[ERR]patch rx qta %d\n", ret);
+			return ret;
+		}
+		rtw89_write32_set(rtwdev, R_AX_RX_FUNCTION_STOP, B_AX_HDR_RX_STOP);
+		rtw89_write32_clr(rtwdev, R_AX_RX_FLTR_OPT, B_AX_SNIFFER_MODE);
+		rtw89_mac_cfg_ppdu_status(rtwdev, RTW89_MAC_0, false);
+		rtw89_write32(rtwdev, R_AX_ACTION_FWD0, 0);
+		rtw89_write32(rtwdev, R_AX_ACTION_FWD1, 0);
+		rtw89_write32(rtwdev, R_AX_TF_FWD, 0);
+		rtw89_write32(rtwdev, R_AX_HW_RPT_FWD, 0);
+	} else {
+		ret = rtw89_mac_resize_ple_rx_quota(rtwdev, false);
+		if (ret) {
+			rtw89_err(rtwdev, "[ERR]patch rx qta %d\n", ret);
+			return ret;
+		}
+		rtw89_write32_clr(rtwdev, R_AX_RX_FUNCTION_STOP, B_AX_HDR_RX_STOP);
+		rtw89_mac_cfg_ppdu_status(rtwdev, RTW89_MAC_0, true);
+		rtw89_write32(rtwdev, R_AX_ACTION_FWD0, TRXCFG_MPDU_PROC_ACT_FRWD);
+		rtw89_write32(rtwdev, R_AX_TF_FWD, TRXCFG_MPDU_PROC_TF_FRWD);
+	}
+
+	return 0;
+}
+
+static void rtw89_wow_set_rx_filter(struct rtw89_dev *rtwdev, bool enable)
+{
+	enum rtw89_mac_fwd_target fwd_target = enable ?
+					       RTW89_FWD_DONT_CARE :
+					       RTW89_FWD_TO_HOST;
+
+	rtw89_mac_typ_fltr_opt(rtwdev, RTW89_MGNT, fwd_target, RTW89_MAC_0);
+	rtw89_mac_typ_fltr_opt(rtwdev, RTW89_CTRL, fwd_target, RTW89_MAC_0);
+	rtw89_mac_typ_fltr_opt(rtwdev, RTW89_DATA, fwd_target, RTW89_MAC_0);
+}
+
+static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev)
+{
+	enum rtw89_core_chip_id chip_id = rtwdev->chip->chip_id;
+	struct cfg80211_wowlan_nd_info nd_info;
+	struct cfg80211_wowlan_wakeup wakeup = {
+		.pattern_idx = -1,
+	};
+	u32 wow_reason_reg;
+	u8 reason;
+
+	if (chip_id == RTL8852A || chip_id == RTL8852B)
+		wow_reason_reg = R_AX_C2HREG_DATA3 + 3;
+	else
+		wow_reason_reg = R_AX_C2HREG_DATA3_V1 + 3;
+
+	reason = rtw89_read8(rtwdev, wow_reason_reg);
+
+	switch (reason) {
+	case RTW89_WOW_RSN_RX_DEAUTH:
+		wakeup.disconnect = true;
+		rtw89_debug(rtwdev, RTW89_DBG_WOW, "WOW: Rx deauth\n");
+		break;
+	case RTW89_WOW_RSN_DISCONNECT:
+		wakeup.disconnect = true;
+		rtw89_debug(rtwdev, RTW89_DBG_WOW, "WOW: AP is off\n");
+		break;
+	case RTW89_WOW_RSN_RX_MAGIC_PKT:
+		wakeup.magic_pkt = true;
+		rtw89_debug(rtwdev, RTW89_DBG_WOW, "WOW: Rx magic packet\n");
+		break;
+	case RTW89_WOW_RSN_RX_GTK_REKEY:
+		wakeup.gtk_rekey_failure = true;
+		rtw89_debug(rtwdev, RTW89_DBG_WOW, "WOW: Rx gtk rekey\n");
+		break;
+	case RTW89_WOW_RSN_RX_PATTERN_MATCH:
+		/* Current firmware and driver don't report pattern index
+		 * Use pattern_idx to 0 defaultly.
+		 */
+		wakeup.pattern_idx = 0;
+		rtw89_debug(rtwdev, RTW89_DBG_WOW, "WOW: Rx pattern match packet\n");
+		break;
+	case RTW89_WOW_RSN_RX_NLO:
+		/* Current firmware and driver don't report ssid index.
+		 * Use 0 for n_matches based on its comment.
+		 */
+		nd_info.n_matches = 0;
+		wakeup.net_detect = &nd_info;
+		rtw89_debug(rtwdev, RTW89_DBG_WOW, "Rx NLO\n");
+		break;
+	default:
+		rtw89_warn(rtwdev, "Unknown wakeup reason %x\n", reason);
+		ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, NULL,
+					       GFP_KERNEL);
+		return;
+	}
+
+	ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, &wakeup,
+				       GFP_KERNEL);
+}
+
+static void rtw89_wow_vif_iter(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+
+	/* Current wowlan function support setting of only one STATION vif.
+	 * So when one suitable vif is found, stop the iteration.
+	 */
+	if (rtw_wow->wow_vif || vif->type != NL80211_IFTYPE_STATION)
+		return;
+
+	switch (rtwvif->net_type) {
+	case RTW89_NET_TYPE_INFRA:
+		rtw_wow->wow_vif = vif;
+		break;
+	case RTW89_NET_TYPE_NO_LINK:
+	default:
+		break;
+	}
+}
+
+static u16 __rtw89_cal_crc16(u8 data, u16 crc)
+{
+	u8 shift_in, data_bit;
+	u8 crc_bit4, crc_bit11, crc_bit15;
+	u16 crc_result;
+	int index;
+
+	for (index = 0; index < 8; index++) {
+		crc_bit15 = crc & BIT(15) ? 1 : 0;
+		data_bit = data & BIT(index) ? 1 : 0;
+		shift_in = crc_bit15 ^ data_bit;
+
+		crc_result = crc << 1;
+
+		if (shift_in == 0)
+			crc_result &= ~BIT(0);
+		else
+			crc_result |= BIT(0);
+
+		crc_bit11 = (crc & BIT(11) ? 1 : 0) ^ shift_in;
+
+		if (crc_bit11 == 0)
+			crc_result &= ~BIT(12);
+		else
+			crc_result |= BIT(12);
+
+		crc_bit4 = (crc & BIT(4) ? 1 : 0) ^ shift_in;
+
+		if (crc_bit4 == 0)
+			crc_result &= ~BIT(5);
+		else
+			crc_result |= BIT(5);
+
+		crc = crc_result;
+	}
+	return crc;
+}
+
+static u16 rtw89_calc_crc(u8 *pdata, int length)
+{
+	u16 crc = 0xffff;
+	int i;
+
+	for (i = 0; i < length; i++)
+		crc = __rtw89_cal_crc16(pdata[i], crc);
+
+	/* get 1' complement */
+	return ~crc;
+}
+
+static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif,
+				      struct rtw89_wow_cam_info *rtw_pattern,
+				      const u8 *pattern, u8 da_mask)
+{
+	u8 da[ETH_ALEN];
+
+	ether_addr_copy_mask(da, pattern, da_mask);
+
+	/* Each pattern is divided into different kinds by DA address
+	 * a. DA is broadcast address: set bc = 0;
+	 * b. DA is multicast address: set mc = 0
+	 * c. DA is unicast address same as dev's mac address: set uc = 0
+	 * d. DA is unmasked. Also called wildcard type: set uc = bc = mc = 0
+	 * e. Others is invalid type.
+	 */
+
+	if (is_broadcast_ether_addr(da))
+		rtw_pattern->bc = true;
+	else if (is_multicast_ether_addr(da))
+		rtw_pattern->mc = true;
+	else if (ether_addr_equal(da, rtwvif->mac_addr) &&
+		 da_mask == GENMASK(5, 0))
+		rtw_pattern->uc = true;
+	else if (!da_mask) /*da_mask == 0 mean wildcard*/
+		return 0;
+	else
+		return -EPERM;
+
+	return 0;
+}
+
+static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev,
+				      struct rtw89_vif *rtwvif,
+				      const struct cfg80211_pkt_pattern *pkt_pattern,
+				      struct rtw89_wow_cam_info *rtw_pattern)
+{
+	u8 mask_hw[RTW89_MAX_PATTERN_MASK_SIZE * 4] = {0};
+	u8 content[RTW89_MAX_PATTERN_SIZE] = {0};
+	const u8 *mask;
+	const u8 *pattern;
+	u8 mask_len;
+	u16 count;
+	u32 len;
+	int i, ret;
+
+	pattern = pkt_pattern->pattern;
+	len = pkt_pattern->pattern_len;
+	mask = pkt_pattern->mask;
+	mask_len = DIV_ROUND_UP(len, 8);
+	memset(rtw_pattern, 0, sizeof(*rtw_pattern));
+
+	ret = rtw89_wow_pattern_get_type(rtwvif, rtw_pattern, pattern,
+					 mask[0] & GENMASK(5, 0));
+	if (ret)
+		return ret;
+
+	/* translate mask from os to mask for hw
+	 * pattern from OS uses 'ethenet frame', like this:
+	 * |    6   |    6   |   2  |     20    |  Variable  |  4  |
+	 * |--------+--------+------+-----------+------------+-----|
+	 * |    802.3 Mac Header    | IP Header | TCP Packet | FCS |
+	 * |   DA   |   SA   | Type |
+	 *
+	 * BUT, packet catched by our HW is in '802.11 frame', begin from LLC
+	 * |     24 or 30      |    6   |   2  |     20    |  Variable  |  4  |
+	 * |-------------------+--------+------+-----------+------------+-----|
+	 * | 802.11 MAC Header |      LLC      | IP Header | TCP Packet | FCS |
+	 * |      Others       |  Tpye  |
+	 *
+	 * Therefore, we need translate mask_from_OS to mask_to_hw.
+	 * We should left-shift mask by 6 bits, then set the new bit[0~5] = 0,
+	 * because new mask[0~5] means 'SA', but our HW packet begins from LLC,
+	 * bit[0~5] corresponds to first 6 Bytes in LLC, they just don't match.
+	 */
+
+	/* Shift 6 bits */
+	for (i = 0; i < mask_len - 1; i++) {
+		mask_hw[i] = u8_get_bits(mask[i], GENMASK(7, 6)) |
+			     u8_get_bits(mask[i + 1], GENMASK(5, 0)) << 2;
+	}
+	mask_hw[i] = u8_get_bits(mask[i], GENMASK(7, 6));
+
+	/* Set bit 0-5 to zero */
+	mask_hw[0] &= ~GENMASK(5, 0);
+
+	memcpy(rtw_pattern->mask, mask_hw, sizeof(rtw_pattern->mask));
+
+	/* To get the wake up pattern from the mask.
+	 * We do not count first 12 bits which means
+	 * DA[6] and SA[6] in the pattern to match HW design.
+	 */
+	count = 0;
+	for (i = 12; i < len; i++) {
+		if ((mask[i / 8] >> (i % 8)) & 0x01) {
+			content[count] = pattern[i];
+			count++;
+		}
+	}
+
+	rtw_pattern->crc = rtw89_calc_crc(content, count);
+
+	return 0;
+}
+
+static int rtw89_wow_parse_patterns(struct rtw89_dev *rtwdev,
+				    struct rtw89_vif *rtwvif,
+				    struct cfg80211_wowlan *wowlan)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct rtw89_wow_cam_info *rtw_pattern = rtw_wow->patterns;
+	int i;
+	int ret;
+
+	if (!wowlan->n_patterns || !wowlan->patterns)
+		return 0;
+
+	for (i = 0; i < wowlan->n_patterns; i++) {
+		rtw_pattern = &rtw_wow->patterns[i];
+		ret = rtw89_wow_pattern_generate(rtwdev, rtwvif,
+						 &wowlan->patterns[i],
+						 rtw_pattern);
+		if (ret) {
+			rtw89_err(rtwdev, "failed to generate pattern(%d)\n", i);
+			rtw_wow->pattern_cnt = 0;
+			return ret;
+		}
+
+		rtw_pattern->r_w = true;
+		rtw_pattern->idx = i;
+		rtw_pattern->negative_pattern_match = false;
+		rtw_pattern->skip_mac_hdr = true;
+		rtw_pattern->valid = true;
+	}
+	rtw_wow->pattern_cnt = wowlan->n_patterns;
+
+	return 0;
+}
+
+static void rtw89_wow_pattern_clear_cam(struct rtw89_dev *rtwdev)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct rtw89_wow_cam_info *rtw_pattern = rtw_wow->patterns;
+	int i = 0;
+
+	for (i = 0; i < rtw_wow->pattern_cnt; i++) {
+		rtw_pattern = &rtw_wow->patterns[i];
+		rtw_pattern->valid = false;
+		rtw89_fw_wow_cam_update(rtwdev, rtw_pattern);
+	}
+}
+
+static void rtw89_wow_pattern_write(struct rtw89_dev *rtwdev)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct rtw89_wow_cam_info *rtw_pattern = rtw_wow->patterns;
+	int i;
+
+	for (i = 0; i < rtw_wow->pattern_cnt; i++)
+		rtw89_fw_wow_cam_update(rtwdev, rtw_pattern + i);
+}
+
+static void rtw89_wow_pattern_clear(struct rtw89_dev *rtwdev)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+
+	rtw89_wow_pattern_clear_cam(rtwdev);
+
+	rtw_wow->pattern_cnt = 0;
+	memset(rtw_wow->patterns, 0, sizeof(rtw_wow->patterns));
+}
+
+static void rtw89_wow_clear_wakeups(struct rtw89_dev *rtwdev)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+
+	rtw_wow->wow_vif = NULL;
+	rtw89_core_release_all_bits_map(rtw_wow->flags, RTW89_WOW_FLAG_NUM);
+	rtw_wow->pattern_cnt = 0;
+}
+
+static int rtw89_wow_set_wakeups(struct rtw89_dev *rtwdev,
+				 struct cfg80211_wowlan *wowlan)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct rtw89_vif *rtwvif;
+
+	if (wowlan->disconnect)
+		set_bit(RTW89_WOW_FLAG_EN_DISCONNECT, rtw_wow->flags);
+	if (wowlan->magic_pkt)
+		set_bit(RTW89_WOW_FLAG_EN_MAGIC_PKT, rtw_wow->flags);
+
+	rtw89_for_each_rtwvif(rtwdev, rtwvif)
+		rtw89_wow_vif_iter(rtwdev, rtwvif);
+
+	if (!rtw_wow->wow_vif)
+		return -EPERM;
+
+	rtwvif = (struct rtw89_vif *)rtw_wow->wow_vif->drv_priv;
+	return rtw89_wow_parse_patterns(rtwdev, rtwvif, wowlan);
+}
+
+static int rtw89_wow_cfg_wake(struct rtw89_dev *rtwdev, bool wow)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+	struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
+	struct ieee80211_sta *wow_sta;
+	struct rtw89_sta *rtwsta = NULL;
+	bool is_conn = true;
+	int ret;
+
+	wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid);
+	if (wow_sta)
+		rtwsta = (struct rtw89_sta *)wow_sta->drv_priv;
+	else
+		is_conn = false;
+
+	if (wow) {
+		if (rtw_wow->pattern_cnt)
+			rtwvif->wowlan_pattern = true;
+		if (test_bit(RTW89_WOW_FLAG_EN_MAGIC_PKT, rtw_wow->flags))
+			rtwvif->wowlan_magic = true;
+	} else {
+		rtwvif->wowlan_pattern = false;
+		rtwvif->wowlan_magic = false;
+	}
+
+	ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif, wow);
+	if (ret) {
+		rtw89_err(rtwdev, "failed to fw wow wakeup ctrl\n");
+		return ret;
+	}
+
+	if (wow) {
+		ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
+		if (ret) {
+			rtw89_err(rtwdev, "failed to update dctl cam sec entry: %d\n",
+				  ret);
+			return ret;
+		}
+	}
+
+	ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, !is_conn);
+	if (ret) {
+		rtw89_warn(rtwdev, "failed to send h2c join info\n");
+		return ret;
+	}
+
+	ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
+	if (ret) {
+		rtw89_warn(rtwdev, "failed to send h2c cam\n");
+		return ret;
+	}
+
+	ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif, wow);
+	if (ret) {
+		rtw89_err(rtwdev, "failed to fw wow global\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int rtw89_wow_check_fw_status(struct rtw89_dev *rtwdev, bool wow_enable)
+{
+	u8 polling;
+	int ret;
+
+	ret = read_poll_timeout_atomic(rtw89_read8_mask, polling,
+				       wow_enable == !!polling,
+				       50, 50000, false, rtwdev,
+				       R_AX_WOW_CTRL, B_AX_WOW_WOWEN);
+	if (ret)
+		rtw89_err(rtwdev, "failed to check wow status %s\n",
+			  wow_enable ? "enabled" : "disabled");
+	return ret;
+}
+
+static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
+{
+	enum rtw89_fw_type fw_type = wow ? RTW89_FW_WOWLAN : RTW89_FW_NORMAL;
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+	struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
+	struct ieee80211_sta *wow_sta;
+	struct rtw89_sta *rtwsta = NULL;
+	bool is_conn = true;
+	int ret;
+
+	rtw89_hci_disable_intr(rtwdev);
+
+	wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid);
+	if (wow_sta)
+		rtwsta = (struct rtw89_sta *)wow_sta->drv_priv;
+	else
+		is_conn = false;
+
+	ret = rtw89_fw_download(rtwdev, fw_type);
+	if (ret) {
+		rtw89_warn(rtwdev, "download fw failed\n");
+		return ret;
+	}
+
+	rtw89_phy_init_rf_reg(rtwdev, true);
+
+	ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
+					 RTW89_ROLE_FW_RESTORE);
+	if (ret) {
+		rtw89_warn(rtwdev, "failed to send h2c role maintain\n");
+		return ret;
+	}
+
+	ret = rtw89_fw_h2c_assoc_cmac_tbl(rtwdev, wow_vif, wow_sta);
+	if (ret) {
+		rtw89_warn(rtwdev, "failed to send h2c assoc cmac tbl\n");
+		return ret;
+	}
+
+	if (!is_conn)
+		rtw89_cam_reset_keys(rtwdev);
+
+	ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, !is_conn);
+	if (ret) {
+		rtw89_warn(rtwdev, "failed to send h2c join info\n");
+		return ret;
+	}
+
+	ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
+	if (ret) {
+		rtw89_warn(rtwdev, "failed to send h2c cam\n");
+		return ret;
+	}
+
+	if (is_conn) {
+		ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id);
+		if (ret) {
+			rtw89_warn(rtwdev, "failed to send h2c general packet\n");
+			return ret;
+		}
+		rtw89_phy_ra_assoc(rtwdev, wow_sta);
+		rtw89_phy_set_bss_color(rtwdev, wow_vif);
+		rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, wow_vif);
+	}
+
+	rtw89_mac_hw_mgnt_sec(rtwdev, wow);
+	rtw89_hci_enable_intr(rtwdev);
+
+	return 0;
+}
+
+static int rtw89_wow_enable_trx_pre(struct rtw89_dev *rtwdev)
+{
+	int ret;
+
+	rtw89_hci_ctrl_txdma_ch(rtwdev, false);
+	rtw89_hci_ctrl_txdma_fw_ch(rtwdev, true);
+
+	rtw89_mac_ptk_drop_by_band_and_wait(rtwdev, RTW89_MAC_0);
+
+	ret = rtw89_hci_poll_txdma_ch(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "txdma ch busy\n");
+		return ret;
+	}
+	rtw89_wow_set_rx_filter(rtwdev, true);
+
+	ret = rtw89_mac_cfg_ppdu_status(rtwdev, RTW89_MAC_0, false);
+	if (ret) {
+		rtw89_err(rtwdev, "cfg ppdu status\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int rtw89_wow_enable_trx_post(struct rtw89_dev *rtwdev)
+{
+	int ret;
+
+	rtw89_hci_disable_intr(rtwdev);
+	rtw89_hci_ctrl_trxhci(rtwdev, false);
+
+	ret = rtw89_hci_poll_txdma_ch(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "failed to poll txdma ch idle pcie\n");
+		return ret;
+	}
+
+	ret = rtw89_wow_config_mac(rtwdev, true);
+	if (ret) {
+		rtw89_err(rtwdev, "failed to config mac\n");
+		return ret;
+	}
+
+	rtw89_wow_set_rx_filter(rtwdev, false);
+	rtw89_hci_reset(rtwdev);
+
+	return 0;
+}
+
+static int rtw89_wow_disable_trx_pre(struct rtw89_dev *rtwdev)
+{
+	int ret;
+
+	rtw89_hci_clr_idx_all(rtwdev);
+
+	ret = rtw89_hci_rst_bdram(rtwdev);
+	if (ret) {
+		rtw89_warn(rtwdev, "reset bdram busy\n");
+		return ret;
+	}
+
+	rtw89_hci_ctrl_trxhci(rtwdev, true);
+	rtw89_hci_ctrl_txdma_ch(rtwdev, true);
+
+	ret = rtw89_wow_config_mac(rtwdev, false);
+	if (ret) {
+		rtw89_err(rtwdev, "failed to config mac\n");
+		return ret;
+	}
+	rtw89_hci_enable_intr(rtwdev);
+
+	return 0;
+}
+
+static int rtw89_wow_disable_trx_post(struct rtw89_dev *rtwdev)
+{
+	int ret;
+
+	ret = rtw89_mac_cfg_ppdu_status(rtwdev, RTW89_MAC_0, true);
+	if (ret)
+		rtw89_err(rtwdev, "cfg ppdu status\n");
+
+	return ret;
+}
+
+static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct rtw89_vif *rtwvif = (struct rtw89_vif *)rtw_wow->wow_vif->drv_priv;
+	int ret;
+
+	rtw89_wow_pattern_write(rtwdev);
+
+	ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, true);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to enable keep alive\n");
+		return ret;
+	}
+
+	ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, true);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to enable disconnect detect\n");
+		goto out;
+	}
+
+	ret = rtw89_wow_cfg_wake(rtwdev, true);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to config wake\n");
+		goto out;
+	}
+
+	ret = rtw89_wow_check_fw_status(rtwdev, true);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to check enable fw ready\n");
+		goto out;
+	}
+
+out:
+	return ret;
+}
+
+static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev)
+{
+	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+	struct rtw89_vif *rtwvif = (struct rtw89_vif *)rtw_wow->wow_vif->drv_priv;
+	int ret;
+
+	rtw89_wow_pattern_clear(rtwdev);
+
+	ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, false);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to disable keep alive\n");
+		goto out;
+	}
+
+	ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, false);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to disable disconnect detect\n");
+		goto out;
+	}
+
+	ret = rtw89_wow_cfg_wake(rtwdev, false);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to disable config wake\n");
+		goto out;
+	}
+
+	rtw89_fw_release_general_pkt_list(rtwdev, true);
+
+	ret = rtw89_wow_check_fw_status(rtwdev, false);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to check disable fw ready\n");
+		goto out;
+	}
+
+out:
+	return ret;
+}
+
+static int rtw89_wow_enable(struct rtw89_dev *rtwdev)
+{
+	int ret;
+
+	set_bit(RTW89_FLAG_WOWLAN, rtwdev->flags);
+
+	ret = rtw89_wow_enable_trx_pre(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to enable trx_pre\n");
+		goto out;
+	}
+
+	rtw89_fw_release_general_pkt_list(rtwdev, true);
+
+	ret = rtw89_wow_swap_fw(rtwdev, true);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to swap to wow fw\n");
+		goto out;
+	}
+
+	ret = rtw89_wow_fw_start(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to let wow fw start\n");
+		goto out;
+	}
+
+	rtw89_wow_enter_lps(rtwdev);
+
+	ret = rtw89_wow_enable_trx_post(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to enable trx_post\n");
+		goto out;
+	}
+
+	return 0;
+
+out:
+	clear_bit(RTW89_FLAG_WOWLAN, rtwdev->flags);
+	return ret;
+}
+
+static int rtw89_wow_disable(struct rtw89_dev *rtwdev)
+{
+	int ret;
+
+	ret = rtw89_wow_disable_trx_pre(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to disable trx_pre\n");
+		goto out;
+	}
+
+	rtw89_wow_leave_lps(rtwdev);
+
+	ret = rtw89_wow_fw_stop(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to swap to normal fw\n");
+		goto out;
+	}
+
+	ret = rtw89_wow_swap_fw(rtwdev, false);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to disable trx_post\n");
+		goto out;
+	}
+
+	ret = rtw89_wow_disable_trx_post(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "wow: failed to disable trx_pre\n");
+		goto out;
+	}
+
+out:
+	clear_bit(RTW89_FLAG_WOWLAN, rtwdev->flags);
+	return ret;
+}
+
+int rtw89_wow_resume(struct rtw89_dev *rtwdev)
+{
+	int ret;
+
+	if (!test_bit(RTW89_FLAG_WOWLAN, rtwdev->flags)) {
+		rtw89_err(rtwdev, "wow is not enabled\n");
+		ret = -EPERM;
+		goto out;
+	}
+
+	if (!rtw89_mac_get_power_state(rtwdev)) {
+		rtw89_err(rtwdev, "chip is no power when resume\n");
+		ret = -EPERM;
+		goto out;
+	}
+
+	rtw89_wow_leave_deep_ps(rtwdev);
+
+	rtw89_wow_show_wakeup_reason(rtwdev);
+
+	ret = rtw89_wow_disable(rtwdev);
+	if (ret)
+		rtw89_err(rtwdev, "failed to disable wow\n");
+
+out:
+	rtw89_wow_clear_wakeups(rtwdev);
+	return ret;
+}
+
+int rtw89_wow_suspend(struct rtw89_dev *rtwdev, struct cfg80211_wowlan *wowlan)
+{
+	int ret;
+
+	ret = rtw89_wow_set_wakeups(rtwdev, wowlan);
+	if (ret) {
+		rtw89_err(rtwdev, "failed to set wakeup event\n");
+		return ret;
+	}
+
+	rtw89_wow_leave_lps(rtwdev);
+
+	ret = rtw89_wow_enable(rtwdev);
+	if (ret) {
+		rtw89_err(rtwdev, "failed to enable wow\n");
+		return ret;
+	}
+
+	rtw89_wow_enter_deep_ps(rtwdev);
+
+	return 0;
+}
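For readers tracing the pattern-wakeup path in the file above: the value the driver programs into the WoW CAM is a bit-serial CRC-16 with feedback taps at bits 0, 5 and 12 (which corresponds to the CCITT polynomial x^16 + x^12 + x^5 + 1), computed over the masked pattern bytes and returned as a one's complement. The stand-alone sketch below simply mirrors __rtw89_cal_crc16() and rtw89_calc_crc() from wow.c so the computation can be checked in user space; the sample bytes in main() are made up for illustration.

/* User-space sketch of the wake-pattern CRC from wow.c above.
 * Not driver code; it only reproduces the same bit-serial CRC-16.
 */
#include <stdint.h>
#include <stdio.h>

static uint16_t cal_crc16(uint8_t data, uint16_t crc)
{
	uint16_t res;
	int i;

	for (i = 0; i < 8; i++) {
		/* feedback = MSB of the running CRC XOR the next data bit (LSB first) */
		uint8_t shift_in = ((crc >> 15) & 1) ^ ((data >> i) & 1);

		res = (uint16_t)(crc << 1);
		res = (res & ~1u) | shift_in;		/* tap at bit 0 */

		if (((crc >> 11) & 1) ^ shift_in)	/* tap at bit 12 */
			res |= 1u << 12;
		else
			res &= ~(1u << 12);

		if (((crc >> 4) & 1) ^ shift_in)	/* tap at bit 5 */
			res |= 1u << 5;
		else
			res &= ~(1u << 5);

		crc = res;
	}
	return crc;
}

static uint16_t calc_crc(const uint8_t *data, int len)
{
	uint16_t crc = 0xffff;
	int i;

	for (i = 0; i < len; i++)
		crc = cal_crc16(data[i], crc);

	return (uint16_t)~crc;	/* one's complement, as the driver does */
}

int main(void)
{
	uint8_t sample[] = { 0xaa, 0xbb, 0x08, 0x00 };	/* arbitrary masked payload bytes */

	printf("crc = 0x%04x\n", calc_crc(sample, sizeof(sample)));
	return 0;
}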