From 5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 Mon Sep 17 00:00:00 2001
From: Linus Torvalds
Date: Tue, 21 Feb 2023 18:24:12 -0800
Subject: Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next

Pull networking updates from Jakub Kicinski:
 "Core:

   - Add dedicated kmem_cache for typical/small skb->head, avoid having to access struct page at kfree time, and improve memory use.
   - Introduce sysctl to set default RPS configuration for new netdevs.
   - Define Netlink protocol specification format which can be used to describe messages used by each family and auto-generate parsers. Add tools for generating kernel data structures and uAPI headers.
   - Expose all net/core sysctls inside netns.
   - Remove 4s sleep in netpoll if carrier is instantly detected on boot.
   - Add configurable limit of MDB entries per port, and port-vlan.
   - Continue populating drop reasons throughout the stack.
   - Retire a handful of legacy Qdiscs and classifiers.

  Protocols:

   - Support IPv4 big TCP (TSO frames larger than 64kB).
   - Add IP_LOCAL_PORT_RANGE socket option, to control local port range on socket by socket basis.
   - Track and report in procfs number of MPTCP sockets used.
   - Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path manager.
   - IPv6: don't check net.ipv6.route.max_size and rely on garbage collection to free memory (similarly to IPv4).
   - Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
   - ICMP: add per-rate limit counters.
   - Add support for user scanning requests in ieee802154.
   - Remove static WEP support.
   - Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate reporting.
   - WiFi 7 EHT channel puncturing support (client & AP).

  BPF:

   - Add a rbtree data structure following the "next-gen data structure" precedent set by recently added linked list, that is, by using kfunc + kptr instead of adding a new BPF map type.
   - Expose XDP hints via kfuncs with initial support for RX hash and timestamp metadata.
   - Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to better support decap on GRE tunnel devices not operating in collect metadata.
   - Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
   - Remove the need for trace_printk_lock for bpf_trace_printk and bpf_trace_vprintk helpers.
   - Extend libbpf's bpf_tracing.h support for tracing arguments of kprobes/uprobes and syscall as a special case.
   - Significantly reduce the search time for module symbols by livepatch and BPF.
   - Enable cpumasks to be used as kptrs, which is useful for tracing programs tracking which tasks end up running on which CPUs in different time intervals.
   - Add support for BPF trampoline on s390x and riscv64.
   - Add capability to export the XDP features supported by the NIC.
   - Add __bpf_kfunc tag for marking kernel functions as kfuncs.
   - Add cgroup.memory=nobpf kernel parameter option to disable BPF memory accounting for container environments.

  Netfilter:

   - Remove the CLUSTERIP target. It has been marked as obsolete for years, and we still have WARN splats wrt races of the out-of-band /proc interface installed by this target.
   - Add 'destroy' commands to nf_tables. They are identical to the existing 'delete' commands, but do not return an error if the referenced object (set, chain, rule...) did not exist.

  Driver API:

   - Improve cpumask_local_spread() locality to help NICs set the right IRQ affinity on AMD platforms.
   - Separate C22 and C45 MDIO bus transactions more clearly.
   - Introduce new DCB table to control DSCP rewrite on egress.
   - Support configuration of Physical Layer Collision Avoidance (PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern version of shared medium Ethernet.
   - Support for MAC Merge layer (IEEE 802.3-2018 clause 99), allowing preemption of low priority frames by high priority frames.
   - Add support for controlling MACSec offload using netlink SET.
   - Rework devlink instance refcounts to allow registration and de-registration under the instance lock. Split the code into multiple files, drop some of the unnecessarily granular locks and factor out common parts of netlink operation handling.
   - Add TX frame aggregation parameters (for USB drivers).
   - Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning messages with notifications for debug.
   - Allow offloading of UDP NEW connections via act_ct.
   - Add support for per action HW stats in TC.
   - Support hardware miss to TC action (continue processing in SW from a specific point in the action chain).
   - Warn if old Wireless Extension user space interface is used with modern cfg80211/mac80211 drivers. Do not support Wireless Extensions for Wi-Fi 7 devices at all. Everyone should switch to using the nl80211 interface instead.
   - Improve the CAN bit timing configuration. Use extack to return error messages directly to user space, update the SJW handling, including the definition of a new default value that will benefit CAN-FD controllers, by increasing their oscillator tolerance.

  New hardware / drivers:

   - Ethernet:
      - nVidia BlueField-3 support (control traffic driver)
      - Ethernet support for imx93 SoCs
      - Motorcomm yt8531 gigabit Ethernet PHY
      - onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
      - Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
      - Amlogic gxl MDIO mux

   - WiFi:
      - RealTek RTL8188EU (rtl8xxxu)
      - Qualcomm Wi-Fi 7 devices (ath12k)

   - CAN:
      - Renesas R-Car V4H

  Drivers:

   - Bluetooth:
      - Set Per Platform Antenna Gain (PPAG) for Intel controllers.
   - Ethernet NICs:
      - Intel (1G, igc):
         - support TSN / Qbv / packet scheduling features of i226 model
      - Intel (100G, ice):
         - use GNSS subsystem instead of TTY
         - multi-buffer XDP support
         - extend support for GPIO pins to E823 devices
      - nVidia/Mellanox:
         - update the shared buffer configuration on PFC commands
         - implement PTP adjphase function for HW offset control
         - TC support for Geneve and GRE with VF tunnel offload
         - more efficient crypto key management method
         - multi-port eswitch support
      - Netronome/Corigine:
         - add DCB IEEE support
         - support IPsec offloading for NFP3800
      - Freescale/NXP (enetc):
         - support XDP_REDIRECT for XDP non-linear buffers
         - improve reconfig, avoid link flap and waiting for idle
         - support MAC Merge layer
      - Other NICs:
         - sfc/ef100: add basic devlink support for ef100
         - ionic: rx_push mode operation (writing descriptors via MMIO)
         - bnxt: use the auxiliary bus abstraction for RDMA
         - r8169: disable ASPM and reset bus in case of tx timeout
         - cpsw: support QSGMII mode for J721e CPSW9G
         - cpts: support pulse-per-second output
         - ngbe: add an mdio bus driver
         - usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
         - r8152: handle devices with FW with NCM support
         - amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
         - virtio-net: support multi buffer XDP
         - virtio/vsock: replace virtio_vsock_pkt with sk_buff
         - tsnep: XDP support

   - Ethernet high-speed switches:
      - nVidia/Mellanox (mlxsw):
         - add support for latency TLV (in FW control messages)
      - Microchip (sparx5):
         - separate explicit and implicit traffic forwarding rules, make the implicit rules always active
         - add support for egress DSCP rewrite
         - IS0 VCAP support (Ingress Classification)
         - IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS etc.)
         - ES2 VCAP support (Egress Access Control)
         - support for Per-Stream Filtering and Policing (802.1Q, 8.6.5.1)

   - Ethernet embedded switches:
      - Marvell (mv88e6xxx):
         - add MAB (port auth) offload support
         - enable PTP receive for mv88e6390
      - NXP (ocelot):
         - support MAC Merge layer
         - support for the vsc7512 internal copper phys
      - Microchip:
         - lan9303: convert to PHYLINK
         - lan966x: support TC flower filter statistics
         - lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
         - lan937x: support Credit Based Shaper configuration
         - ksz9477: support Energy Efficient Ethernet
      - other:
         - qca8k: convert to regmap read/write API, use bulk operations
         - rswitch: Improve TX timestamp accuracy

   - Intel WiFi (iwlwifi):
      - EHT (Wi-Fi 7) rate reporting
      - STEP equalizer support: transfer some STEP (connection to radio on platforms with integrated wifi) related parameters from the BIOS to the firmware.
   - Qualcomm 802.11ax WiFi (ath11k):
      - IPQ5018 support
      - Fine Timing Measurement (FTM) responder role support
      - channel 177 support

   - MediaTek WiFi (mt76):
      - per-PHY LED support
      - mt7996: EHT (Wi-Fi 7) support
      - Wireless Ethernet Dispatch (WED) reset support
      - switch to using page pool allocator

   - RealTek WiFi (rtw89):
      - support new version of Bluetooth co-existence

   - Mobile:
      - rmnet: support TX aggregation"

* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
  page_pool: add a comment explaining the fragment counter usage
  net: ethtool: fix __ethtool_dev_mm_supported() implementation
  ethtool: pse-pd: Fix double word in comments
  xsk: add linux/vmalloc.h to xsk.c
  sefltests: netdevsim: wait for devlink instance after netns removal
  selftest: fib_tests: Always cleanup before exit
  net/mlx5e: Align IPsec ASO result memory to be as required by hardware
  net/mlx5e: TC, Set CT miss to the specific ct action instance
  net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
  net/mlx5: Refactor tc miss handling to a single function
  net/mlx5: Kconfig: Make tc offload depend on tc skb extension
  net/sched: flower: Support hardware miss to tc action
  net/sched: flower: Move filter handle initialization earlier
  net/sched: cls_api: Support hardware miss to tc action
  net/sched: Rename user cookie and act cookie
  sfc: fix builds without CONFIG_RTC_LIB
  sfc: clean up some inconsistent indentings
  net/mlx4_en: Introduce flexible array to silence overflow warning
  net: lan966x: Fix possible deadlock inside PTP
  net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
  ...
---
 drivers/net/wireless/ath/ath11k/peer.c | 669 +++++++++++++++++++++++++++++++++
 1 file changed, 669 insertions(+)
 create mode 100644 drivers/net/wireless/ath/ath11k/peer.c

(limited to 'drivers/net/wireless/ath/ath11k/peer.c')
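As a usage sketch of the IP_LOCAL_PORT_RANGE socket option listed under "Protocols" above (illustrative only, not part of the pull request or of the patch below): the option is assumed here to be a u32 set at the IPPROTO_IP level, with the lower bound of the range in the low 16 bits and the upper bound in the high 16 bits, and to carry the value 51 in the 6.3 uAPI headers; consult linux/in.h in a 6.3+ tree for the authoritative definition.

#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_LOCAL_PORT_RANGE
#define IP_LOCAL_PORT_RANGE 51	/* assumed 6.3 uAPI value, used only if the headers lack it */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Restrict this socket's ephemeral ports to 40000-40100. */
	uint32_t range = 40000u | (40100u << 16);
	if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
		       &range, sizeof(range)) < 0)
		perror("setsockopt(IP_LOCAL_PORT_RANGE)");

	close(fd);
	return 0;
}

Per the changelog item, the point is to constrain the local port range on a socket-by-socket basis rather than only through the system-wide net.ipv4.ip_local_port_range setting.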
diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c
new file mode 100644
index 000000000..1ae7af02c
--- /dev/null
+++ b/drivers/net/wireless/ath/ath11k/peer.c
@@ -0,0 +1,669 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "core.h"
+#include "peer.h"
+#include "debug.h"
+
+static struct ath11k_peer *ath11k_peer_find_list_by_id(struct ath11k_base *ab,
+						       int peer_id)
+{
+	struct ath11k_peer *peer;
+
+	lockdep_assert_held(&ab->base_lock);
+
+	list_for_each_entry(peer, &ab->peers, list) {
+		if (peer->peer_id != peer_id)
+			continue;
+
+		return peer;
+	}
+
+	return NULL;
+}
+
+struct ath11k_peer *ath11k_peer_find(struct ath11k_base *ab, int vdev_id,
+				     const u8 *addr)
+{
+	struct ath11k_peer *peer;
+
+	lockdep_assert_held(&ab->base_lock);
+
+	list_for_each_entry(peer, &ab->peers, list) {
+		if (peer->vdev_id != vdev_id)
+			continue;
+		if (!ether_addr_equal(peer->addr, addr))
+			continue;
+
+		return peer;
+	}
+
+	return NULL;
+}
+
+struct ath11k_peer *ath11k_peer_find_by_addr(struct ath11k_base *ab,
+					     const u8 *addr)
+{
+	struct ath11k_peer *peer;
+
+	lockdep_assert_held(&ab->base_lock);
+
+	if (!ab->rhead_peer_addr)
+		return NULL;
+
+	peer = rhashtable_lookup_fast(ab->rhead_peer_addr, addr,
+				      ab->rhash_peer_addr_param);
+
+	return peer;
+}
+
+struct ath11k_peer *ath11k_peer_find_by_id(struct ath11k_base *ab,
+					   int peer_id)
+{
+	struct ath11k_peer *peer;
+
+	lockdep_assert_held(&ab->base_lock);
+
+	if (!ab->rhead_peer_id)
+		return NULL;
+
+	peer = rhashtable_lookup_fast(ab->rhead_peer_id, &peer_id,
+				      ab->rhash_peer_id_param);
+
+	return peer;
+}
+
+struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
+						int vdev_id)
+{
+	struct ath11k_peer *peer;
+
+	spin_lock_bh(&ab->base_lock);
+
+	list_for_each_entry(peer, &ab->peers, list) {
+		if (vdev_id == peer->vdev_id) {
+			spin_unlock_bh(&ab->base_lock);
+			return peer;
+		}
+	}
+	spin_unlock_bh(&ab->base_lock);
+	return NULL;
+}
+
+void ath11k_peer_unmap_event(struct ath11k_base *ab, u16 peer_id)
+{
+	struct ath11k_peer *peer;
+
+	spin_lock_bh(&ab->base_lock);
+
+	peer = ath11k_peer_find_list_by_id(ab, peer_id);
+	if (!peer) {
+		ath11k_warn(ab, "peer-unmap-event: unknown peer id %d\n",
+			    peer_id);
+		goto exit;
+	}
+
+	ath11k_dbg(ab, ATH11K_DBG_DP_HTT, "htt peer unmap vdev %d peer %pM id %d\n",
+		   peer->vdev_id, peer->addr, peer_id);
+
+	list_del(&peer->list);
+	kfree(peer);
+	wake_up(&ab->peer_mapping_wq);
+
+exit:
+	spin_unlock_bh(&ab->base_lock);
+}
+
+void ath11k_peer_map_event(struct ath11k_base *ab, u8 vdev_id, u16 peer_id,
+			   u8 *mac_addr, u16 ast_hash, u16 hw_peer_id)
+{
+	struct ath11k_peer *peer;
+
+	spin_lock_bh(&ab->base_lock);
+	peer = ath11k_peer_find(ab, vdev_id, mac_addr);
+	if (!peer) {
+		peer = kzalloc(sizeof(*peer), GFP_ATOMIC);
+		if (!peer)
+			goto exit;
+
+		peer->vdev_id = vdev_id;
+		peer->peer_id = peer_id;
+		peer->ast_hash = ast_hash;
+		peer->hw_peer_id = hw_peer_id;
+		ether_addr_copy(peer->addr, mac_addr);
+		list_add(&peer->list, &ab->peers);
+		wake_up(&ab->peer_mapping_wq);
+	}
+
+	ath11k_dbg(ab, ATH11K_DBG_DP_HTT, "htt peer map vdev %d peer %pM id %d\n",
+		   vdev_id, mac_addr, peer_id);
+
+exit:
+	spin_unlock_bh(&ab->base_lock);
+}
+
+static int ath11k_wait_for_peer_common(struct ath11k_base *ab, int vdev_id,
+				       const u8 *addr, bool expect_mapped)
+{
+	int ret;
+
+	ret = wait_event_timeout(ab->peer_mapping_wq, ({
+				bool mapped;
+
+				spin_lock_bh(&ab->base_lock);
+				mapped = !!ath11k_peer_find(ab, vdev_id, addr);
+				spin_unlock_bh(&ab->base_lock);
+
+				(mapped == expect_mapped ||
+				 test_bit(ATH11K_FLAG_CRASH_FLUSH, &ab->dev_flags));
+				}), 3 * HZ);
+
+	if (ret <= 0)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
+static inline int ath11k_peer_rhash_insert(struct ath11k_base *ab,
+					   struct rhashtable *rtbl,
+					   struct rhash_head *rhead,
+					   struct rhashtable_params *params,
+					   void *key)
+{
+	struct ath11k_peer *tmp;
+
+	lockdep_assert_held(&ab->tbl_mtx_lock);
+
+	tmp = rhashtable_lookup_get_insert_fast(rtbl, rhead, *params);
+
+	if (!tmp)
+		return 0;
+	else if (IS_ERR(tmp))
+		return PTR_ERR(tmp);
+	else
+		return -EEXIST;
+}
+
+static inline int ath11k_peer_rhash_remove(struct ath11k_base *ab,
+					   struct rhashtable *rtbl,
+					   struct rhash_head *rhead,
+					   struct rhashtable_params *params)
+{
+	int ret;
+
+	lockdep_assert_held(&ab->tbl_mtx_lock);
+
+	ret = rhashtable_remove_fast(rtbl, rhead, *params);
+	if (ret && ret != -ENOENT)
+		return ret;
+
+	return 0;
+}
+
+static int ath11k_peer_rhash_add(struct ath11k_base *ab, struct ath11k_peer *peer)
+{
+	int ret;
+
+	lockdep_assert_held(&ab->base_lock);
+	lockdep_assert_held(&ab->tbl_mtx_lock);
+
+	if (!ab->rhead_peer_id || !ab->rhead_peer_addr)
+		return -EPERM;
+
+	ret = ath11k_peer_rhash_insert(ab, ab->rhead_peer_id, &peer->rhash_id,
+				       &ab->rhash_peer_id_param, &peer->peer_id);
+	if (ret) {
+		ath11k_warn(ab, "failed to add peer %pM with id %d in rhash_id ret %d\n",
+			    peer->addr, peer->peer_id, ret);
+		return ret;
+	}
+
+	ret = ath11k_peer_rhash_insert(ab, ab->rhead_peer_addr, &peer->rhash_addr,
+				       &ab->rhash_peer_addr_param, &peer->addr);
+	if (ret) {
+		ath11k_warn(ab, "failed to add peer %pM with id %d in rhash_addr ret %d\n",
+			    peer->addr, peer->peer_id, ret);
+		goto err_clean;
+	}
+
+	return 0;
+
+err_clean:
+	ath11k_peer_rhash_remove(ab, ab->rhead_peer_id, &peer->rhash_id,
+				 &ab->rhash_peer_id_param);
+	return ret;
+}
+
+void ath11k_peer_cleanup(struct ath11k *ar, u32 vdev_id)
+{
+	struct ath11k_peer *peer, *tmp;
+	struct ath11k_base *ab = ar->ab;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	mutex_lock(&ab->tbl_mtx_lock);
+	spin_lock_bh(&ab->base_lock);
+	list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
+		if (peer->vdev_id != vdev_id)
+			continue;
+
+		ath11k_warn(ab, "removing stale peer %pM from vdev_id %d\n",
+			    peer->addr, vdev_id);
+
+		ath11k_peer_rhash_delete(ab, peer);
+		list_del(&peer->list);
+		kfree(peer);
+		ar->num_peers--;
+	}
+
+	spin_unlock_bh(&ab->base_lock);
+	mutex_unlock(&ab->tbl_mtx_lock);
+}
+
+static int ath11k_wait_for_peer_deleted(struct ath11k *ar, int vdev_id, const u8 *addr)
+{
+	return ath11k_wait_for_peer_common(ar->ab, vdev_id, addr, false);
+}
+
+int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
+				     const u8 *addr)
+{
+	int ret;
+	unsigned long time_left;
+
+	ret = ath11k_wait_for_peer_deleted(ar, vdev_id, addr);
+	if (ret) {
+		ath11k_warn(ar->ab, "failed wait for peer deleted");
+		return ret;
+	}
+
+	time_left = wait_for_completion_timeout(&ar->peer_delete_done,
+						3 * HZ);
+	if (time_left == 0) {
+		ath11k_warn(ar->ab, "Timeout in receiving peer delete response\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int __ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, const u8 *addr)
+{
+	int ret;
+	struct ath11k_peer *peer;
+	struct ath11k_base *ab = ar->ab;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	mutex_lock(&ab->tbl_mtx_lock);
+	spin_lock_bh(&ab->base_lock);
+
+	peer = ath11k_peer_find_by_addr(ab, addr);
+	/* Check if the found peer is what we want to remove.
+	 * While the sta is transitioning to another band we may
+	 * have 2 peer with the same addr assigned to different
+	 * vdev_id. Make sure we are deleting the correct peer.
+	 */
+	if (peer && peer->vdev_id == vdev_id)
+		ath11k_peer_rhash_delete(ab, peer);
+
+	/* Fallback to peer list search if the correct peer can't be found.
+	 * Skip the deletion of the peer from the rhash since it has already
+	 * been deleted in peer add.
+	 */
+	if (!peer)
+		peer = ath11k_peer_find(ab, vdev_id, addr);
+
+	if (!peer) {
+		spin_unlock_bh(&ab->base_lock);
+		mutex_unlock(&ab->tbl_mtx_lock);
+
+		ath11k_warn(ab,
+			    "failed to find peer vdev_id %d addr %pM in delete\n",
+			    vdev_id, addr);
+		return -EINVAL;
+	}
+
+	spin_unlock_bh(&ab->base_lock);
+	mutex_unlock(&ab->tbl_mtx_lock);
+
+	reinit_completion(&ar->peer_delete_done);
+
+	ret = ath11k_wmi_send_peer_delete_cmd(ar, addr, vdev_id);
+	if (ret) {
+		ath11k_warn(ab,
+			    "failed to delete peer vdev_id %d addr %pM ret %d\n",
+			    vdev_id, addr, ret);
+		return ret;
+	}
+
+	ret = ath11k_wait_for_peer_delete_done(ar, vdev_id, addr);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
+{
+	int ret;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	ret = __ath11k_peer_delete(ar, vdev_id, addr);
+	if (ret)
+		return ret;
+
+	ar->num_peers--;
+
+	return 0;
+}
+
+static int ath11k_wait_for_peer_created(struct ath11k *ar, int vdev_id, const u8 *addr)
+{
+	return ath11k_wait_for_peer_common(ar->ab, vdev_id, addr, true);
+}
+
+int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
+		       struct ieee80211_sta *sta, struct peer_create_params *param)
+{
+	struct ath11k_peer *peer;
+	struct ath11k_sta *arsta;
+	int ret, fbret;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	if (ar->num_peers > (ar->max_num_peers - 1)) {
+		ath11k_warn(ar->ab,
+			    "failed to create peer due to insufficient peer entry resource in firmware\n");
+		return -ENOBUFS;
+	}
+
+	spin_lock_bh(&ar->ab->base_lock);
+	peer = ath11k_peer_find_by_addr(ar->ab, param->peer_addr);
+	if (peer) {
+		if (peer->vdev_id == param->vdev_id) {
+			spin_unlock_bh(&ar->ab->base_lock);
+			return -EINVAL;
+		}
+
+		/* Assume sta is transitioning to another band.
+		 * Remove here the peer from rhash.
+		 */
+		mutex_lock(&ar->ab->tbl_mtx_lock);
+		ath11k_peer_rhash_delete(ar->ab, peer);
+		mutex_unlock(&ar->ab->tbl_mtx_lock);
+	}
+	spin_unlock_bh(&ar->ab->base_lock);
+
+	ret = ath11k_wmi_send_peer_create_cmd(ar, param);
+	if (ret) {
+		ath11k_warn(ar->ab,
+			    "failed to send peer create vdev_id %d ret %d\n",
+			    param->vdev_id, ret);
+		return ret;
+	}
+
+	ret = ath11k_wait_for_peer_created(ar, param->vdev_id,
+					   param->peer_addr);
+	if (ret)
+		return ret;
+
+	mutex_lock(&ar->ab->tbl_mtx_lock);
+	spin_lock_bh(&ar->ab->base_lock);
+
+	peer = ath11k_peer_find(ar->ab, param->vdev_id, param->peer_addr);
+	if (!peer) {
+		spin_unlock_bh(&ar->ab->base_lock);
+		mutex_unlock(&ar->ab->tbl_mtx_lock);
+		ath11k_warn(ar->ab, "failed to find peer %pM on vdev %i after creation\n",
+			    param->peer_addr, param->vdev_id);
+
+		ret = -ENOENT;
+		goto cleanup;
+	}
+
+	ret = ath11k_peer_rhash_add(ar->ab, peer);
+	if (ret) {
+		spin_unlock_bh(&ar->ab->base_lock);
+		mutex_unlock(&ar->ab->tbl_mtx_lock);
+		goto cleanup;
+	}
+
+	peer->pdev_idx = ar->pdev_idx;
+	peer->sta = sta;
+
+	if (arvif->vif->type == NL80211_IFTYPE_STATION) {
+		arvif->ast_hash = peer->ast_hash;
+		arvif->ast_idx = peer->hw_peer_id;
+	}
+
+	peer->sec_type = HAL_ENCRYPT_TYPE_OPEN;
+	peer->sec_type_grp = HAL_ENCRYPT_TYPE_OPEN;
+
+	if (sta) {
+		arsta = (struct ath11k_sta *)sta->drv_priv;
+		arsta->tcl_metadata |= FIELD_PREP(HTT_TCL_META_DATA_TYPE, 0) |
+				       FIELD_PREP(HTT_TCL_META_DATA_PEER_ID,
+						  peer->peer_id);
+
+		/* set HTT extension valid bit to 0 by default */
+		arsta->tcl_metadata &= ~HTT_TCL_META_DATA_VALID_HTT;
+	}
+
+	ar->num_peers++;
+
+	spin_unlock_bh(&ar->ab->base_lock);
+	mutex_unlock(&ar->ab->tbl_mtx_lock);
+
+	return 0;
+
+cleanup:
+	fbret = __ath11k_peer_delete(ar, param->vdev_id, param->peer_addr);
+	if (fbret)
+		ath11k_warn(ar->ab, "failed peer %pM delete vdev_id %d fallback ret %d\n",
+			    param->peer_addr, param->vdev_id, fbret);
+
+	return ret;
+}
+
+int ath11k_peer_rhash_delete(struct ath11k_base *ab, struct ath11k_peer *peer)
+{
+	int ret;
+
+	lockdep_assert_held(&ab->base_lock);
+	lockdep_assert_held(&ab->tbl_mtx_lock);
+
+	if (!ab->rhead_peer_id || !ab->rhead_peer_addr)
+		return -EPERM;
+
+	ret = ath11k_peer_rhash_remove(ab, ab->rhead_peer_addr, &peer->rhash_addr,
+				       &ab->rhash_peer_addr_param);
+	if (ret) {
+		ath11k_warn(ab, "failed to remove peer %pM id %d in rhash_addr ret %d\n",
+			    peer->addr, peer->peer_id, ret);
+		return ret;
+	}
+
+	ret = ath11k_peer_rhash_remove(ab, ab->rhead_peer_id, &peer->rhash_id,
+				       &ab->rhash_peer_id_param);
+	if (ret) {
+		ath11k_warn(ab, "failed to remove peer %pM id %d in rhash_id ret %d\n",
+			    peer->addr, peer->peer_id, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int ath11k_peer_rhash_id_tbl_init(struct ath11k_base *ab)
+{
+	struct rhashtable_params *param;
+	struct rhashtable *rhash_id_tbl;
+	int ret;
+	size_t size;
+
+	lockdep_assert_held(&ab->tbl_mtx_lock);
+
+	if (ab->rhead_peer_id)
+		return 0;
+
+	size = sizeof(*ab->rhead_peer_id);
+	rhash_id_tbl = kzalloc(size, GFP_KERNEL);
+	if (!rhash_id_tbl) {
+		ath11k_warn(ab, "failed to init rhash id table due to no mem (size %zu)\n",
+			    size);
+		return -ENOMEM;
+	}
+
+	param = &ab->rhash_peer_id_param;
+
+	param->key_offset = offsetof(struct ath11k_peer, peer_id);
+	param->head_offset = offsetof(struct ath11k_peer, rhash_id);
+	param->key_len = sizeof_field(struct ath11k_peer, peer_id);
+	param->automatic_shrinking = true;
+	param->nelem_hint = ab->num_radios * TARGET_NUM_PEERS_PDEV(ab);
+
+	ret = rhashtable_init(rhash_id_tbl, param);
+	if (ret) {
ath11k_warn(ab, "failed to init peer id rhash table %d\n", ret); + goto err_free; + } + + spin_lock_bh(&ab->base_lock); + + if (!ab->rhead_peer_id) { + ab->rhead_peer_id = rhash_id_tbl; + } else { + spin_unlock_bh(&ab->base_lock); + goto cleanup_tbl; + } + + spin_unlock_bh(&ab->base_lock); + + return 0; + +cleanup_tbl: + rhashtable_destroy(rhash_id_tbl); +err_free: + kfree(rhash_id_tbl); + + return ret; +} + +static int ath11k_peer_rhash_addr_tbl_init(struct ath11k_base *ab) +{ + struct rhashtable_params *param; + struct rhashtable *rhash_addr_tbl; + int ret; + size_t size; + + lockdep_assert_held(&ab->tbl_mtx_lock); + + if (ab->rhead_peer_addr) + return 0; + + size = sizeof(*ab->rhead_peer_addr); + rhash_addr_tbl = kzalloc(size, GFP_KERNEL); + if (!rhash_addr_tbl) { + ath11k_warn(ab, "failed to init rhash addr table due to no mem (size %zu)\n", + size); + return -ENOMEM; + } + + param = &ab->rhash_peer_addr_param; + + param->key_offset = offsetof(struct ath11k_peer, addr); + param->head_offset = offsetof(struct ath11k_peer, rhash_addr); + param->key_len = sizeof_field(struct ath11k_peer, addr); + param->automatic_shrinking = true; + param->nelem_hint = ab->num_radios * TARGET_NUM_PEERS_PDEV(ab); + + ret = rhashtable_init(rhash_addr_tbl, param); + if (ret) { + ath11k_warn(ab, "failed to init peer addr rhash table %d\n", ret); + goto err_free; + } + + spin_lock_bh(&ab->base_lock); + + if (!ab->rhead_peer_addr) { + ab->rhead_peer_addr = rhash_addr_tbl; + } else { + spin_unlock_bh(&ab->base_lock); + goto cleanup_tbl; + } + + spin_unlock_bh(&ab->base_lock); + + return 0; + +cleanup_tbl: + rhashtable_destroy(rhash_addr_tbl); +err_free: + kfree(rhash_addr_tbl); + + return ret; +} + +static inline void ath11k_peer_rhash_id_tbl_destroy(struct ath11k_base *ab) +{ + lockdep_assert_held(&ab->tbl_mtx_lock); + + if (!ab->rhead_peer_id) + return; + + rhashtable_destroy(ab->rhead_peer_id); + kfree(ab->rhead_peer_id); + ab->rhead_peer_id = NULL; +} + +static inline void ath11k_peer_rhash_addr_tbl_destroy(struct ath11k_base *ab) +{ + lockdep_assert_held(&ab->tbl_mtx_lock); + + if (!ab->rhead_peer_addr) + return; + + rhashtable_destroy(ab->rhead_peer_addr); + kfree(ab->rhead_peer_addr); + ab->rhead_peer_addr = NULL; +} + +int ath11k_peer_rhash_tbl_init(struct ath11k_base *ab) +{ + int ret; + + mutex_lock(&ab->tbl_mtx_lock); + + ret = ath11k_peer_rhash_id_tbl_init(ab); + if (ret) + goto out; + + ret = ath11k_peer_rhash_addr_tbl_init(ab); + if (ret) + goto cleanup_tbl; + + mutex_unlock(&ab->tbl_mtx_lock); + + return 0; + +cleanup_tbl: + ath11k_peer_rhash_id_tbl_destroy(ab); +out: + mutex_unlock(&ab->tbl_mtx_lock); + return ret; +} + +void ath11k_peer_rhash_tbl_destroy(struct ath11k_base *ab) +{ + mutex_lock(&ab->tbl_mtx_lock); + + ath11k_peer_rhash_addr_tbl_destroy(ab); + ath11k_peer_rhash_id_tbl_destroy(ab); + + mutex_unlock(&ab->tbl_mtx_lock); +} -- cgit v1.2.3